markdown | code | output | license | path | repo_name
---|---|---|---|---|---
Next, we need to preprocess our data so that it matches the data BERT was trained on. For this, we'll need to do a couple of things (but don't worry--this is also included in the Python library):

1. Lowercase our text (if we're using a BERT lowercase model)
2. Tokenize it (i.e. "sally says hi" -> ["sally", "says", "hi"])
3. Break words into WordPieces (i.e. "calling" -> ["call", "##ing"])
4. Map our words to indexes using a vocab file that BERT provides
5. Add special "CLS" and "SEP" tokens (see the [readme](https://github.com/google-research/bert))
6. Append "index" and "segment" tokens to each input (see the [BERT paper](https://arxiv.org/pdf/1810.04805.pdf))

Happily, we don't have to worry about most of these details (steps 2-5 are sketched at the end of the next code cell for intuition). To start, we'll need to load a vocabulary file and lowercasing information directly from the BERT tf hub module:
|
# This is a path to an uncased (all lowercase) version of BERT
BERT_MODEL_HUB = "https://tfhub.dev/google/bert_uncased_L-12_H-768_A-12/1"

def create_tokenizer_from_hub_module():
    """Get the vocab file and casing info from the Hub module."""
    with tf.Graph().as_default():
        bert_module = hub.Module(BERT_MODEL_HUB)
        tokenization_info = bert_module(signature="tokenization_info", as_dict=True)
        with tf.Session() as sess:
            vocab_file, do_lower_case = sess.run([tokenization_info["vocab_file"],
                                                  tokenization_info["do_lower_case"]])
    return bert.tokenization.FullTokenizer(
        vocab_file=vocab_file, do_lower_case=do_lower_case)

tokenizer = create_tokenizer_from_hub_module()
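
# For intuition, steps 2-5 from the list above amount to roughly the following.
# This is only an illustrative sketch (the sample sentence and variable names are
# made up here); run_classifier.convert_examples_to_features does all of this for us later.
example_tokens = tokenizer.tokenize("This is an example sentence")  # lowercase + WordPiece (rare words split into "##"-prefixed pieces)
example_tokens = ["[CLS]"] + example_tokens + ["[SEP]"]             # add the special classification and separator tokens
example_input_ids = tokenizer.convert_tokens_to_ids(example_tokens) # map word pieces to vocab indexes
example_segment_ids = [0] * len(example_input_ids)                  # single-sentence input, so every segment id is 0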
|
I0414 10:19:59.282109 140105573619520 saver.py:1499] Saver not created because there are no variables in the graph to restore
W0414 10:20:00.810749 140105573619520 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/bert/tokenization.py:125: The name tf.gfile.GFile is deprecated. Please use tf.io.gfile.GFile instead.
|
Apache-2.0
|
predicting_movie_reviews_with_bert_on_tf_hub.ipynb
|
bedman3/bert
|
Great--we just learned that the BERT model we're using expects lowercase data (that's what's stored in tokenization_info["do_lower_case"]), and we also loaded BERT's vocab file. We also created a tokenizer, which breaks words into word pieces:
|
tokenizer.tokenize("This here's an example of using the BERT tokenizer")
|
_____no_output_____
|
Apache-2.0
|
predicting_movie_reviews_with_bert_on_tf_hub.ipynb
|
bedman3/bert
|
Using our tokenizer, we'll call `run_classifier.convert_examples_to_features` on our InputExamples to convert them into features BERT understands.
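The `train_InputExamples` and `test_InputExamples` referenced here were constructed earlier from the raw text and labels. For reference, a minimal sketch of that construction is shown below; the DataFrame name `train` and its column names are placeholders rather than the exact names used in the earlier cells:

```python
# Illustrative sketch: wrap raw (text, label) pairs as InputExamples before feature conversion.
# `train`, 'sentence' and 'label' are assumed placeholder names.
train_InputExamples = train.apply(
    lambda x: bert.run_classifier.InputExample(
        guid=None,             # globally unique id, unused here
        text_a=x['sentence'],  # the raw text
        text_b=None,           # no second sequence for single-sentence classification
        label=x['label']),     # the target label
    axis=1)
```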
|
# We'll set sequences to be at most 128 tokens long.
MAX_SEQ_LENGTH = 128
# Convert our train and test features to InputFeatures that BERT understands.
train_features = bert.run_classifier.convert_examples_to_features(train_InputExamples, label_list, MAX_SEQ_LENGTH, tokenizer)
test_features = bert.run_classifier.convert_examples_to_features(test_InputExamples, label_list, MAX_SEQ_LENGTH, tokenizer)
|
W0414 10:20:00.919758 140105573619520 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/bert/run_classifier.py:774: The name tf.logging.info is deprecated. Please use tf.compat.v1.logging.info instead.
I0414 10:20:00.921136 140105573619520 run_classifier.py:774] Writing example 0 of 20000
I0414 10:20:00.929281 140105573619520 run_classifier.py:461] *** Example ***
I0414 10:20:00.930946 140105573619520 run_classifier.py:462] guid: None
I0414 10:20:00.931859 140105573619520 run_classifier.py:464] tokens: [CLS] " what do you do for recreation ? " " oh , the usual . i bowl . drive around . the occasional acid flashback . " i do not actually like bowling . it ' s my nature . i only like things that i am good at and i am not even certain i would like to be good at bowling so i had my reservations when being invited to hang with the optimism club kids but i love these kids so . . . " fuck it , dude , let ' s go bowling . " i do have to admit ; i really had a blast here . it is vintage and sc ##um ##my on the outside which to me [SEP]
I0414 10:20:00.932403 140105573619520 run_classifier.py:465] input_ids: 101 1000 2054 2079 2017 2079 2005 8640 1029 1000 1000 2821 1010 1996 5156 1012 1045 4605 1012 3298 2105 1012 1996 8138 5648 21907 1012 1000 1045 2079 2025 2941 2066 9116 1012 2009 1005 1055 2026 3267 1012 1045 2069 2066 2477 2008 1045 2572 2204 2012 1998 1045 2572 2025 2130 3056 1045 2052 2066 2000 2022 2204 2012 9116 2061 1045 2018 2026 17829 2043 2108 4778 2000 6865 2007 1996 27451 2252 4268 2021 1045 2293 2122 4268 2061 1012 1012 1012 1000 6616 2009 1010 12043 1010 2292 1005 1055 2175 9116 1012 1000 1045 2079 2031 2000 6449 1025 1045 2428 2018 1037 8479 2182 1012 2009 2003 13528 1998 8040 2819 8029 2006 1996 2648 2029 2000 2033 102
I0414 10:20:00.932891 140105573619520 run_classifier.py:466] input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
I0414 10:20:00.933523 140105573619520 run_classifier.py:467] segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
I0414 10:20:00.934118 140105573619520 run_classifier.py:468] label: 4 (id = 3)
I0414 10:20:00.935426 140105573619520 run_classifier.py:461] *** Example ***
I0414 10:20:00.936100 140105573619520 run_classifier.py:462] guid: None
I0414 10:20:00.936616 140105573619520 run_classifier.py:464] tokens: [CLS] the burger ##s are good and they have good fries too . i used to get the ib ##ion rings but i can ' t eat onion now unfortunately . would recommend to anyone for lunch . [SEP]
I0414 10:20:00.937265 140105573619520 run_classifier.py:465] input_ids: 101 1996 15890 2015 2024 2204 1998 2027 2031 2204 22201 2205 1012 1045 2109 2000 2131 1996 21307 3258 7635 2021 1045 2064 1005 1056 4521 20949 2085 6854 1012 2052 16755 2000 3087 2005 6265 1012 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
I0414 10:20:00.937844 140105573619520 run_classifier.py:466] input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
I0414 10:20:00.938503 140105573619520 run_classifier.py:467] segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
I0414 10:20:00.939382 140105573619520 run_classifier.py:468] label: 5 (id = 4)
I0414 10:20:00.942288 140105573619520 run_classifier.py:461] *** Example ***
I0414 10:20:00.942847 140105573619520 run_classifier.py:462] guid: None
I0414 10:20:00.943509 140105573619520 run_classifier.py:464] tokens: [CLS] been there three times , twice they didn ' t have gu ##aca ##mo ##le . took the family on thursday night to hear live reggae , which they had confirmed when i called earlier in the day . ordered gu ##ac . . . no gu ##ac . ordered beef with mole . . . we ' re out of mole . wife ordered car ##ni ##tas , after long wait , waitress came back and informed her they were out of car ##ni ##tas . i think any problems relate to lai ##sse ##z fair ##e management . service is slow because they don ' t pay for enough staff . food takes long b / c there is only room for one cook [SEP]
I0414 10:20:00.944194 140105573619520 run_classifier.py:465] input_ids: 101 2042 2045 2093 2335 1010 3807 2027 2134 1005 1056 2031 19739 19629 5302 2571 1012 2165 1996 2155 2006 9432 2305 2000 2963 2444 15662 1010 2029 2027 2018 4484 2043 1045 2170 3041 1999 1996 2154 1012 3641 19739 6305 1012 1012 1012 2053 19739 6305 1012 3641 12486 2007 16709 1012 1012 1012 2057 1005 2128 2041 1997 16709 1012 2564 3641 2482 3490 10230 1010 2044 2146 3524 1010 13877 2234 2067 1998 6727 2014 2027 2020 2041 1997 2482 3490 10230 1012 1045 2228 2151 3471 14396 2000 21110 11393 2480 4189 2063 2968 1012 2326 2003 4030 2138 2027 2123 1005 1056 3477 2005 2438 3095 1012 2833 3138 2146 1038 1013 1039 2045 2003 2069 2282 2005 2028 5660 102
I0414 10:20:00.944772 140105573619520 run_classifier.py:466] input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
I0414 10:20:00.945364 140105573619520 run_classifier.py:467] segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
I0414 10:20:00.945957 140105573619520 run_classifier.py:468] label: 2 (id = 1)
I0414 10:20:00.947746 140105573619520 run_classifier.py:461] *** Example ***
I0414 10:20:00.948391 140105573619520 run_classifier.py:462] guid: None
I0414 10:20:00.948909 140105573619520 run_classifier.py:464] tokens: [CLS] pretty excited to discover this cal ##i gas ##tro ##pu ##b was arriving at the swan ##ky downtown summer ##lin . great beers and our server was not only fun but extremely knowledge ##able and turned me on to the w ##him ##sic ##al and ta ##sty golden monkey . outdoor patio is great place for a late night meal . and best of al locals get half off their bill on mondays . how good is that ? [SEP]
I0414 10:20:00.949487 140105573619520 run_classifier.py:465] input_ids: 101 3492 7568 2000 7523 2023 10250 2072 3806 13181 14289 2497 2001 7194 2012 1996 10677 4801 5116 2621 4115 1012 2307 18007 1998 2256 8241 2001 2025 2069 4569 2021 5186 3716 3085 1998 2357 2033 2006 2000 1996 1059 14341 19570 2389 1998 11937 21756 3585 10608 1012 7254 19404 2003 2307 2173 2005 1037 2397 2305 7954 1012 1998 2190 1997 2632 10575 2131 2431 2125 2037 3021 2006 28401 1012 2129 2204 2003 2008 1029 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
I0414 10:20:00.950110 140105573619520 run_classifier.py:466] input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
I0414 10:20:00.950884 140105573619520 run_classifier.py:467] segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
I0414 10:20:00.951302 140105573619520 run_classifier.py:468] label: 5 (id = 4)
I0414 10:20:00.953340 140105573619520 run_classifier.py:461] *** Example ***
I0414 10:20:00.954030 140105573619520 run_classifier.py:462] guid: None
I0414 10:20:00.954628 140105573619520 run_classifier.py:464] tokens: [CLS] i have been very pleased with the care that i receive here . i have been going to them since 2002 for five different reasons and i love them . they are thorough , caring and very dedicated to see you recover completely . their front office staff is amazing . they will work with you to assure that you are getting everything you need and make sure that your insurance and referring physician are informed about your care . i have had pablo and cory as my therapist and i would recommend them to anyone that needs physical therapy . [SEP]
I0414 10:20:00.955208 140105573619520 run_classifier.py:465] input_ids: 101 1045 2031 2042 2200 7537 2007 1996 2729 2008 1045 4374 2182 1012 1045 2031 2042 2183 2000 2068 2144 2526 2005 2274 2367 4436 1998 1045 2293 2068 1012 2027 2024 16030 1010 11922 1998 2200 4056 2000 2156 2017 8980 3294 1012 2037 2392 2436 3095 2003 6429 1012 2027 2097 2147 2007 2017 2000 14306 2008 2017 2024 2893 2673 2017 2342 1998 2191 2469 2008 2115 5427 1998 7727 7522 2024 6727 2055 2115 2729 1012 1045 2031 2018 11623 1998 18342 2004 2026 19294 1998 1045 2052 16755 2068 2000 3087 2008 3791 3558 7242 1012 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
I0414 10:20:00.955861 140105573619520 run_classifier.py:466] input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
I0414 10:20:00.956392 140105573619520 run_classifier.py:467] segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
I0414 10:20:00.956864 140105573619520 run_classifier.py:468] label: 5 (id = 4)
I0414 10:20:16.874609 140105573619520 run_classifier.py:774] Writing example 10000 of 20000
I0414 10:20:32.753437 140105573619520 run_classifier.py:774] Writing example 0 of 2000
I0414 10:20:32.754873 140105573619520 run_classifier.py:461] *** Example ***
I0414 10:20:32.755411 140105573619520 run_classifier.py:462] guid: None
I0414 10:20:32.756034 140105573619520 run_classifier.py:464] tokens: [CLS] food is sometimes sometimes great and something ##s just good . wasn ' t a fan of their br ##un ##ch but lunch and dinner is good . service is hit and miss . [SEP]
I0414 10:20:32.756607 140105573619520 run_classifier.py:465] input_ids: 101 2833 2003 2823 2823 2307 1998 2242 2015 2074 2204 1012 2347 1005 1056 1037 5470 1997 2037 7987 4609 2818 2021 6265 1998 4596 2003 2204 1012 2326 2003 2718 1998 3335 1012 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
I0414 10:20:32.757318 140105573619520 run_classifier.py:466] input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
I0414 10:20:32.757845 140105573619520 run_classifier.py:467] segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
I0414 10:20:32.758347 140105573619520 run_classifier.py:468] label: 4 (id = 3)
I0414 10:20:32.759270 140105573619520 run_classifier.py:461] *** Example ***
I0414 10:20:32.759799 140105573619520 run_classifier.py:462] guid: None
I0414 10:20:32.760307 140105573619520 run_classifier.py:464] tokens: [CLS] they should seriously market the sausage and gr ##av ##y . you may hear chicken sausage but - good lord this is some amazing sausage biscuits and gr ##av ##y . [SEP]
I0414 10:20:32.760833 140105573619520 run_classifier.py:465] input_ids: 101 2027 2323 5667 3006 1996 24165 1998 24665 11431 2100 1012 2017 2089 2963 7975 24165 2021 1011 2204 2935 2023 2003 2070 6429 24165 27529 1998 24665 11431 2100 1012 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
I0414 10:20:32.761393 140105573619520 run_classifier.py:466] input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
I0414 10:20:32.762058 140105573619520 run_classifier.py:467] segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
I0414 10:20:32.762672 140105573619520 run_classifier.py:468] label: 5 (id = 4)
I0414 10:20:32.766637 140105573619520 run_classifier.py:461] *** Example ***
I0414 10:20:32.767348 140105573619520 run_classifier.py:462] guid: None
I0414 10:20:32.767922 140105573619520 run_classifier.py:464] tokens: [CLS] while i have had several average salon and spa services at holt ##s this review is for my hair coloring experiences with ina . i have had my blonde highlights done by ina on and off for a few years . i was always very happy with the results , yet i was kept trying to look for a more reasonably priced place because it is very expensive indeed ( in my case about $ 300 with the tone ##r and blow dry , which is not included ) . and so it was that a month before my wedding i went to a very rep ##utable downtown salon ( supposedly the best in to ) to have my highlights done . to make a very [SEP]
I0414 10:20:32.768587 140105573619520 run_classifier.py:465] input_ids: 101 2096 1045 2031 2018 2195 2779 11090 1998 12403 2578 2012 12621 2015 2023 3319 2003 2005 2026 2606 22276 6322 2007 27118 1012 1045 2031 2018 2026 9081 11637 2589 2011 27118 2006 1998 2125 2005 1037 2261 2086 1012 1045 2001 2467 2200 3407 2007 1996 3463 1010 2664 1045 2001 2921 2667 2000 2298 2005 1037 2062 16286 21125 2173 2138 2009 2003 2200 6450 5262 1006 1999 2026 2553 2055 1002 3998 2007 1996 4309 2099 1998 6271 4318 1010 2029 2003 2025 2443 1007 1012 1998 2061 2009 2001 2008 1037 3204 2077 2026 5030 1045 2253 2000 1037 2200 16360 23056 5116 11090 1006 10743 1996 2190 1999 2000 1007 2000 2031 2026 11637 2589 1012 2000 2191 1037 2200 102
I0414 10:20:32.769200 140105573619520 run_classifier.py:466] input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
I0414 10:20:32.769784 140105573619520 run_classifier.py:467] segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
I0414 10:20:32.770305 140105573619520 run_classifier.py:468] label: 5 (id = 4)
I0414 10:20:32.772956 140105573619520 run_classifier.py:461] *** Example ***
I0414 10:20:32.773533 140105573619520 run_classifier.py:462] guid: None
I0414 10:20:32.774001 140105573619520 run_classifier.py:464] tokens: [CLS] i have been coming to nc ##s for almost two years . i had been going to a salon in the mall for about a year and i noticed that my hair was getting extremely dry . i used some products that they recommended but nothing seemed to help . a friend recommended sara d so i made an appointment with her hoping that she would be able to make some suggestions for my dry and damaged hair . she suggested one product and gave me tips for drying and styling that would help . since that first visit , my hair has come along way . sara always gives me suggestions for keeping my hair healthy and she ' ll recommend different products in the [SEP]
I0414 10:20:32.774591 140105573619520 run_classifier.py:465] input_ids: 101 1045 2031 2042 2746 2000 13316 2015 2005 2471 2048 2086 1012 1045 2018 2042 2183 2000 1037 11090 1999 1996 6670 2005 2055 1037 2095 1998 1045 4384 2008 2026 2606 2001 2893 5186 4318 1012 1045 2109 2070 3688 2008 2027 6749 2021 2498 2790 2000 2393 1012 1037 2767 6749 7354 1040 2061 1045 2081 2019 6098 2007 2014 5327 2008 2016 2052 2022 2583 2000 2191 2070 15690 2005 2026 4318 1998 5591 2606 1012 2016 4081 2028 4031 1998 2435 2033 10247 2005 17462 1998 20724 2008 2052 2393 1012 2144 2008 2034 3942 1010 2026 2606 2038 2272 2247 2126 1012 7354 2467 3957 2033 15690 2005 4363 2026 2606 7965 1998 2016 1005 2222 16755 2367 3688 1999 1996 102
I0414 10:20:32.775171 140105573619520 run_classifier.py:466] input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
I0414 10:20:32.775726 140105573619520 run_classifier.py:467] segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
I0414 10:20:32.776291 140105573619520 run_classifier.py:468] label: 5 (id = 4)
I0414 10:20:32.778075 140105573619520 run_classifier.py:461] *** Example ***
I0414 10:20:32.778569 140105573619520 run_classifier.py:462] guid: None
I0414 10:20:32.779173 140105573619520 run_classifier.py:464] tokens: [CLS] this place should be renamed to " we take our sweet ass time " large fried chicken . j ##ks , but seriously . there was no line , and only 3 people waiting in front of me for their order , and it still took a good 15 minutes to receive my order . i ordered the regular chicken w / the strawberry drink ( forgot what it ' s called ) . the chicken was pretty good , i must admit . the drink , however , was decent - tasted like strawberry sp ##rite - but not worth ~ $ 3 . 50 . come here only if you have time to spare . [SEP]
I0414 10:20:32.779727 140105573619520 run_classifier.py:465] input_ids: 101 2023 2173 2323 2022 4096 2000 1000 2057 2202 2256 4086 4632 2051 1000 2312 13017 7975 1012 1046 5705 1010 2021 5667 1012 2045 2001 2053 2240 1010 1998 2069 1017 2111 3403 1999 2392 1997 2033 2005 2037 2344 1010 1998 2009 2145 2165 1037 2204 2321 2781 2000 4374 2026 2344 1012 1045 3641 1996 3180 7975 1059 1013 1996 16876 4392 1006 9471 2054 2009 1005 1055 2170 1007 1012 1996 7975 2001 3492 2204 1010 1045 2442 6449 1012 1996 4392 1010 2174 1010 2001 11519 1011 12595 2066 16876 11867 17625 1011 2021 2025 4276 1066 1002 1017 1012 2753 1012 2272 2182 2069 2065 2017 2031 2051 2000 8622 1012 102 0 0 0 0 0 0 0 0 0
I0414 10:20:32.780362 140105573619520 run_classifier.py:466] input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0
I0414 10:20:32.780905 140105573619520 run_classifier.py:467] segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
I0414 10:20:32.781480 140105573619520 run_classifier.py:468] label: 3 (id = 2)
|
Apache-2.0
|
predicting_movie_reviews_with_bert_on_tf_hub.ipynb
|
bedman3/bert
|
Creating a model

Now that we've prepared our data, let's focus on building a model. `create_model` does just this below. First, it loads the BERT tf hub module again (this time to extract the computation graph). Next, it creates a single new layer that will be trained to adapt BERT to our sentiment task (i.e. classifying whether a movie review is positive or negative). This strategy of using a mostly trained model is called [fine-tuning](http://wiki.fast.ai/index.php/Fine_tuning).
|
def create_model(is_predicting, input_ids, input_mask, segment_ids, labels,
                 num_labels):
    """Creates a classification model."""
    bert_module = hub.Module(
        BERT_MODEL_HUB,
        trainable=True)
    bert_inputs = dict(
        input_ids=input_ids,
        input_mask=input_mask,
        segment_ids=segment_ids)
    bert_outputs = bert_module(
        inputs=bert_inputs,
        signature="tokens",
        as_dict=True)

    # Use "pooled_output" for classification tasks on an entire sentence.
    # Use "sequence_outputs" for token-level output.
    output_layer = bert_outputs["pooled_output"]

    hidden_size = output_layer.shape[-1].value

    # Create our own layer to tune for politeness data.
    output_weights = tf.get_variable(
        "output_weights", [num_labels, hidden_size],
        initializer=tf.truncated_normal_initializer(stddev=0.02))
    output_bias = tf.get_variable(
        "output_bias", [num_labels], initializer=tf.zeros_initializer())

    with tf.variable_scope("loss"):
        # Dropout helps prevent overfitting
        output_layer = tf.nn.dropout(output_layer, keep_prob=0.9)

        logits = tf.matmul(output_layer, output_weights, transpose_b=True)
        logits = tf.nn.bias_add(logits, output_bias)
        log_probs = tf.nn.log_softmax(logits, axis=-1)

        # Convert labels into one-hot encoding
        one_hot_labels = tf.one_hot(labels, depth=num_labels, dtype=tf.float32)

        predicted_labels = tf.squeeze(tf.argmax(log_probs, axis=-1, output_type=tf.int32))
        # If we're predicting, we want predicted labels and the probabilities.
        if is_predicting:
            return (predicted_labels, log_probs)

        # If we're train/eval, compute loss between predicted and actual label
        per_example_loss = -tf.reduce_sum(one_hot_labels * log_probs, axis=-1)
        loss = tf.reduce_mean(per_example_loss)
        return (loss, predicted_labels, log_probs)
|
_____no_output_____
|
Apache-2.0
|
predicting_movie_reviews_with_bert_on_tf_hub.ipynb
|
bedman3/bert
|
Next we'll wrap our model function in a `model_fn_builder` function that adapts our model to work for training, evaluation, and prediction.
|
# model_fn_builder actually creates our model function
# using the passed parameters for num_labels, learning_rate, etc.
def model_fn_builder(num_labels, learning_rate, num_train_steps,
                     num_warmup_steps):
    """Returns `model_fn` closure for TPUEstimator."""
    def model_fn(features, labels, mode, params):  # pylint: disable=unused-argument
        """The `model_fn` for TPUEstimator."""
        input_ids = features["input_ids"]
        input_mask = features["input_mask"]
        segment_ids = features["segment_ids"]
        label_ids = features["label_ids"]

        is_predicting = (mode == tf.estimator.ModeKeys.PREDICT)

        # TRAIN and EVAL
        if not is_predicting:
            (loss, predicted_labels, log_probs) = create_model(
                is_predicting, input_ids, input_mask, segment_ids, label_ids, num_labels)

            train_op = bert.optimization.create_optimizer(
                loss, learning_rate, num_train_steps, num_warmup_steps, use_tpu=False)

            # Calculate evaluation metrics.
            def metric_fn(label_ids, predicted_labels):
                accuracy = tf.metrics.accuracy(label_ids, predicted_labels)
                f1_score = tf.contrib.metrics.f1_score(
                    label_ids,
                    predicted_labels)
                auc = tf.metrics.auc(
                    label_ids,
                    predicted_labels)
                recall = tf.metrics.recall(
                    label_ids,
                    predicted_labels)
                precision = tf.metrics.precision(
                    label_ids,
                    predicted_labels)
                true_pos = tf.metrics.true_positives(
                    label_ids,
                    predicted_labels)
                true_neg = tf.metrics.true_negatives(
                    label_ids,
                    predicted_labels)
                false_pos = tf.metrics.false_positives(
                    label_ids,
                    predicted_labels)
                false_neg = tf.metrics.false_negatives(
                    label_ids,
                    predicted_labels)
                return {
                    "eval_accuracy": accuracy,
                    "f1_score": f1_score,
                    "auc": auc,
                    "precision": precision,
                    "recall": recall,
                    "true_positives": true_pos,
                    "true_negatives": true_neg,
                    "false_positives": false_pos,
                    "false_negatives": false_neg
                }

            eval_metrics = metric_fn(label_ids, predicted_labels)

            if mode == tf.estimator.ModeKeys.TRAIN:
                return tf.estimator.EstimatorSpec(mode=mode,
                                                  loss=loss,
                                                  train_op=train_op)
            else:
                return tf.estimator.EstimatorSpec(mode=mode,
                                                  loss=loss,
                                                  eval_metric_ops=eval_metrics)
        else:
            (predicted_labels, log_probs) = create_model(
                is_predicting, input_ids, input_mask, segment_ids, label_ids, num_labels)

            predictions = {
                'probabilities': log_probs,
                'labels': predicted_labels
            }
            return tf.estimator.EstimatorSpec(mode, predictions=predictions)

    # Return the actual model function in the closure
    return model_fn

# Compute train and warmup steps from batch size
# These hyperparameters are copied from this colab notebook (https://colab.sandbox.google.com/github/tensorflow/tpu/blob/master/tools/colab/bert_finetuning_with_cloud_tpus.ipynb)
BATCH_SIZE = 32
LEARNING_RATE = 2e-5
NUM_TRAIN_EPOCHS = 3.0
# Warmup is a period of time where the learning rate
# is small and gradually increases--usually helps training.
WARMUP_PROPORTION = 0.1
# Model configs
SAVE_CHECKPOINTS_STEPS = 500
SAVE_SUMMARY_STEPS = 100

# Compute # train and warmup steps from batch size
num_train_steps = int(len(train_features) / BATCH_SIZE * NUM_TRAIN_EPOCHS)
num_warmup_steps = int(num_train_steps * WARMUP_PROPORTION)

# Specify output directory and number of checkpoint steps to save
run_config = tf.estimator.RunConfig(
    model_dir=OUTPUT_DIR,
    save_summary_steps=SAVE_SUMMARY_STEPS,
    save_checkpoints_steps=SAVE_CHECKPOINTS_STEPS)

model_fn = model_fn_builder(
    num_labels=len(label_list),
    learning_rate=LEARNING_RATE,
    num_train_steps=num_train_steps,
    num_warmup_steps=num_warmup_steps)

estimator = tf.estimator.Estimator(
    model_fn=model_fn,
    config=run_config,
    params={"batch_size": BATCH_SIZE})
|
I0414 10:20:36.028532 140105573619520 estimator.py:209] Using config: {'_model_dir': 'output_files', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': 500, '_save_checkpoints_secs': None, '_session_config': allow_soft_placement: true
graph_options {
rewrite_options {
meta_optimizer_iterations: ONE
}
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f6be0248208>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
|
Apache-2.0
|
predicting_movie_reviews_with_bert_on_tf_hub.ipynb
|
bedman3/bert
|
Next we create an input builder function that takes our training feature set (`train_features`) and produces an input function that streams batches to the model. This is a pretty standard design pattern for working with TensorFlow [Estimators](https://www.tensorflow.org/guide/estimators).
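To illustrate the pattern (this is not the actual `bert.run_classifier.input_fn_builder` implementation, just a minimal sketch with placeholder arrays), an Estimator input function is a closure that builds a `tf.data.Dataset` of feature dictionaries and batches it using the `batch_size` passed through `params`:

```python
# Hedged sketch of the Estimator input_fn pattern; the all_* arguments are
# placeholder numpy arrays, not variables defined in this notebook.
def example_input_fn_builder(all_input_ids, all_input_mask, all_segment_ids, all_label_ids, is_training):
    def input_fn(params):
        batch_size = params["batch_size"]
        dataset = tf.data.Dataset.from_tensor_slices({
            "input_ids": all_input_ids,
            "input_mask": all_input_mask,
            "segment_ids": all_segment_ids,
            "label_ids": all_label_ids,
        })
        if is_training:
            dataset = dataset.shuffle(buffer_size=100).repeat()
        return dataset.batch(batch_size)
    return input_fn
```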
|
# Create an input function for training. drop_remainder = True for using TPUs.
train_input_fn = bert.run_classifier.input_fn_builder(
    features=train_features,
    seq_length=MAX_SEQ_LENGTH,
    is_training=True,
    drop_remainder=False)
|
_____no_output_____
|
Apache-2.0
|
predicting_movie_reviews_with_bert_on_tf_hub.ipynb
|
bedman3/bert
|
Now we train our model! Using a Colab notebook running on one of Google's GPUs, training took about 14 minutes.
|
print(f'Beginning Training!')
current_time = datetime.now()
estimator.train(input_fn=train_input_fn, max_steps=num_train_steps)
print("Training took time ", datetime.now() - current_time)
|
W0414 10:20:36.153037 140105573619520 deprecation.py:323] From /usr/local/lib/python3.6/dist-packages/tensorflow/python/training/training_util.py:236: Variable.initialized_value (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version.
Instructions for updating:
Use Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts.
|
Apache-2.0
|
predicting_movie_reviews_with_bert_on_tf_hub.ipynb
|
bedman3/bert
|
Now let's use our test data to see how well our model did:
|
test_input_fn = run_classifier.input_fn_builder(
    features=test_features,
    seq_length=MAX_SEQ_LENGTH,
    is_training=False,
    drop_remainder=False)

estimator.evaluate(input_fn=test_input_fn, steps=None)
|
_____no_output_____
|
Apache-2.0
|
predicting_movie_reviews_with_bert_on_tf_hub.ipynb
|
bedman3/bert
|
Now let's write code to make predictions on new sentences:
|
def getPrediction(in_sentences):
    labels = ["Negative", "Positive"]
    # guid is unused and label=0 is just a dummy value
    input_examples = [run_classifier.InputExample(guid="", text_a=x, text_b=None, label=0) for x in in_sentences]
    input_features = run_classifier.convert_examples_to_features(input_examples, label_list, MAX_SEQ_LENGTH, tokenizer)
    predict_input_fn = run_classifier.input_fn_builder(features=input_features, seq_length=MAX_SEQ_LENGTH, is_training=False, drop_remainder=False)
    predictions = estimator.predict(predict_input_fn)
    return [(sentence, prediction['probabilities'], labels[prediction['labels']])
            for sentence, prediction in zip(in_sentences, predictions)]

pred_sentences = [
    "That movie was absolutely awful",
    "The acting was a bit lacking",
    "The film was creative and surprising",
    "Absolutely fantastic!"
]

predictions = getPrediction(pred_sentences)
|
_____no_output_____
|
Apache-2.0
|
predicting_movie_reviews_with_bert_on_tf_hub.ipynb
|
bedman3/bert
|
Voila! We have a sentiment classifier!
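Each element returned by `getPrediction` is a `(sentence, log-probabilities, predicted label)` tuple, so the results can be inspected with a simple loop, for example:

```python
# Print the predicted label next to each input sentence.
for sentence, log_probs, label in predictions:
    print(f"{label}: {sentence}")
```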
|
predictions
|
_____no_output_____
|
Apache-2.0
|
predicting_movie_reviews_with_bert_on_tf_hub.ipynb
|
bedman3/bert
|
UCI Daphnet dataset (Freezing of gait for Parkinson's disease patients)
|
import numpy as np
import pandas as pd
import os
from typing import List
from pathlib import Path
from config import data_raw_folder, data_processed_folder
from timeeval import Datasets
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline

plt.rcParams['figure.figsize'] = (20, 10)

dataset_collection_name = "Daphnet"
source_folder = Path(data_raw_folder) / "UCI ML Repository/Daphnet/dataset"
target_folder = Path(data_processed_folder)

print(f"Looking for source datasets in {source_folder.absolute()} and\nsaving processed datasets in {target_folder.absolute()}")

train_type = "unsupervised"
train_is_normal = False
input_type = "multivariate"
datetime_index = True
dataset_type = "real"

# create target directory
dataset_subfolder = os.path.join(input_type, dataset_collection_name)
target_subfolder = os.path.join(target_folder, dataset_subfolder)
try:
    os.makedirs(target_subfolder)
    print(f"Created directories {target_subfolder}")
except FileExistsError:
    print(f"Directories {target_subfolder} already exist")
    pass

dm = Datasets(target_folder)

experiments = [f for f in source_folder.iterdir()]
experiments

columns = ["timestamp", "ankle_horiz_fwd", "ankle_vert", "ankle_horiz_lateral", "leg_horiz_fwd", "leg_vert", "leg_horiz_lateral",
           "trunk_horiz_fwd", "trunk_vert", "trunk_horiz_lateral", "is_anomaly"]

def transform_experiment_file(path: Path) -> List[pd.DataFrame]:
    df = pd.read_csv(path, sep=" ", header=None)
    df.columns = columns
    df["timestamp"] = pd.to_datetime(df["timestamp"], unit="ms")
    # slice out experiments (0 annotation shows unrelated data points (preparation/briefing/...))
    s_group = df["is_anomaly"].isin([1, 2])
    s_diff = s_group.shift(-1) - s_group
    starts = (df[s_diff == 1].index + 1).values  # first point has annotation 0 --> index + 1
    ends = df[s_diff == -1].index.values
    dfs = []
    for start, end in zip(starts, ends):
        df1 = df.iloc[start:end].copy()
        df1["is_anomaly"] = (df1["is_anomaly"] == 2).astype(int)
        dfs.append(df1)
    return dfs

for exp in experiments:
    # transform file to get datasets
    datasets = transform_experiment_file(exp)
    for i, df in enumerate(datasets):
        # get target filenames
        experiment_name = os.path.splitext(exp.name)[0]
        dataset_name = f"{experiment_name}E{i}"
        filename = f"{dataset_name}.test.csv"
        path = os.path.join(dataset_subfolder, filename)
        target_filepath = os.path.join(target_subfolder, filename)
        # calc length and save in file
        dataset_length = len(df)
        df.to_csv(target_filepath, index=False)
        print(f"Processed source dataset {exp} -> {target_filepath}")
        # save metadata
        dm.add_dataset((dataset_collection_name, dataset_name),
                       train_path=None,
                       test_path=path,
                       dataset_type=dataset_type,
                       datetime_index=datetime_index,
                       split_at=None,
                       train_type=train_type,
                       train_is_normal=train_is_normal,
                       input_type=input_type,
                       dataset_length=dataset_length
                       )

dm.save()
dm.refresh()
dm.df().loc[(slice(dataset_collection_name, dataset_collection_name), slice(None))]
|
_____no_output_____
|
MIT
|
notebooks/data-prep/UCI-Daphnet.ipynb
|
HPI-Information-Systems/TimeEval
|
Experimentation

Annotations:

- `0`: not part of the experiment. For instance the sensors are installed on the user or the user is performing activities unrelated to the experimental protocol, such as debriefing
- `1`: experiment, no freeze (can be any of stand, walk, turn)
- `2`: freeze
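The slicing in `transform_experiment_file` above relies on exactly this scheme: rows with annotation `0` are dropped and freezing (`2`) becomes the anomaly class. A minimal sketch of that mapping, assuming a DataFrame `df` with an `annotation` column:

```python
# Keep only rows that belong to an experiment, then binarise: freeze (2) -> 1, no freeze (1) -> 0.
in_experiment = df["annotation"].isin([1, 2])
labels = (df.loc[in_experiment, "annotation"] == 2).astype(int)
```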
|
columns = ["timestamp", "ankle_horiz_fwd", "ankle_vert", "ankle_horiz_lateral", "leg_horiz_fwd", "leg_vert", "leg_horiz_lateral",
"trunk_horiz_fwd", "trunk_vert", "trunk_horiz_lateral", "annotation"]
df1 = pd.read_csv(source_folder / "S01R01.txt", sep=' ', header=None)
df1.columns = columns
df1["timestamp"] = pd.to_datetime(df1["timestamp"], unit="ms")
df1
columns = [c for c in columns if c not in ["timestamp", "annotation"]]
df_plot = df1.set_index("timestamp", drop=True)#.loc["1970-01-01 00:15:00":"1970-01-01 00:16:00"]
df_plot.plot(y=columns, figsize=(20,10))
df_plot["annotation"].plot(secondary_y=True)
plt.legend()
plt.show()
s_group = df1["annotation"].isin([1, 2])
s_diff = s_group.shift(-1) - s_group
starts = (df1[s_diff == 1].index + 1).values
ends = df1[s_diff == -1].index.values
starts, ends
dfs = [df1.iloc[start:end] for start, end in zip(starts, ends)]
len(dfs)
columns = [c for c in columns if c not in ["timestamp", "annotation"]]
for df in dfs:
df = df.set_index("timestamp", drop=True)
df.plot(y=columns, figsize=(20,10))
df["annotation"].plot(secondary_y=True)
plt.show()
|
_____no_output_____
|
MIT
|
notebooks/data-prep/UCI-Daphnet.ipynb
|
HPI-Information-Systems/TimeEval
|
Amy Green - 200930437

5990M: Introduction to Programming for Geographical Information Analysis - Core Skills

__**Assignment 2: Investigating the Black Death**__

-------------------------------------------------------------

Project Aim

The project aims to build a model, based upon initial agent-based framework coding schemes, that generates an analysis into an aspect of 'The Black Death'. The project intends to calculate the fatalities from The Great Plague of London via the known population densities of London parishes in 1665. Generating this measure from historical data will allow any correlation to be investigated and an overall map of total deaths to be produced. Furthermore, the final code should allow for manipulation in terms of changing parameter weights to investigate possible scenarios that could have ensued.

Context

The Great Plague of London (1665-1666) was the last occurrence of the fatal ‘Black Death’ Plague that swept across Europe in the 1300s. The bubonic plague caused an epidemic across the 17th century parishes of London, as well as some smaller areas of the UK. The overcrowded city and hot Summer became a breeding ground for the bacterium Yersinia pestis, disseminated by rat fleas – the known cause of the plague. Transmission was inevitable due to the high poverty levels, low sanitation rates, and open sewers in closely packed waste-filled streets, especially in poorer areas (Trueman, 2015). Deaths started slowly within the St. Giles’s Parish but rose alarmingly, as documented by the weekly ‘Bill of Mortality’ that was legally required from each parish at the time (Defoe, 2005). The number of deaths slowed after 18 months due to quarantines, much of the population moving to the country and the onset of Winter; the final end came when the Great Fire of London destroyed central parts of the city in September 1666.

Data Source

The average death rate from the Great Plague will be calculated from two raster maps. The model uses known rat populations and average population densities of 16 different parishes within London, both from historical records, recorded by rat-catchers and parish figures in 1665, respectively. The original maps store data for each 400m x 400m area as text, but the figures have been averaged to represent either the area covered by the Parish or the area within which the rat-catcher operates. The relationship used to calculate the average death rate from this source data is as follows (a short worked example is given at the start of Part 2):

Death Rate = (0.8 x Rat Population) x (1.3 x Population Density)

Model Expectations

The model should first show maps of the original source data: the rat populations and population densities for the 16 investigated parishes. These maps will then be combined using the calculation above to generate the average weekly death rate from the Great Plague, which will be mapped as an image. The final map will then be altered so the user can manipulate the weights of either the rat population or the population density to envision how these alternate factors may change the overall death rate. The code should run on Windows.

------------------------------------------------------------------------------

Part 1 - Read in Source Data
|
'''Step 1 - Set up initial imports for programme'''
import random
%matplotlib inline
import matplotlib.pyplot
import matplotlib
import matplotlib.animation
import os
import requests
import tkinter
import pandas as pd #Shortened in standard python documentation format
import numpy as np #Shortened in standard python documentation format
import ipywidgets as widgets #Shortened in standard python documentation format
|
_____no_output_____
|
BSL-1.0
|
.ipynb
|
AGreen0/BlackDeathProject
|
Map 1 - Rat Populations (Average Rats caught per week)
|
'''Step 2 - Import data for the rat populations and generate environment from the 2D array'''
#Set up a base path for the import of the rats txt file
base_path = "C:\\Users\\Home\\Documents\\MSc GIS\\Programming\\Black_Death\\BlackDeathProject" #Basepath
deathrats = "deathrats.txt" #Saved filename
path_to_file = os.path.join(base_path, deathrats)
f = open(path_to_file, 'r')
#mapA = f.read()
#print(mapA) #Test to show data has imported

#Set up an environment to read the rats txt file into - this is called environmentA
environmentA = []
for line in f:
    parsed_line = str.split(line, ",") #Split values up via commas
    rowlist = []
    for word in parsed_line:
        rowlist.append(float(word))
    environmentA.append(rowlist) #Append all lists individually so can print environment
f.close()
#print(environmentA) #Test environment appears and all lines run

#Display environment of rat populations
matplotlib.pyplot.xlim(0, 400) #Set up x-axis
matplotlib.pyplot.ylim(0, 400) #Set up y-axis
matplotlib.pyplot.imshow(environmentA) #Shows the environment
matplotlib.pyplot.title('Average Rat Populations', loc='center') #Adds a centred title
matplotlib.pyplot.hsv() #Altered colourmap to red-yellow-green-cyan-blue-pink-magenta display, from original viridis: aids user interpretation
|
_____no_output_____
|
BSL-1.0
|
.ipynb
|
AGreen0/BlackDeathProject
|
This map contains the data for the average rat populations, denoted from the amount of rats caught per week. The data is initially placed into a text file, which can be seen through print(mapA), but has then been put into the environment shown above. The different colours show the different amounts of rats; however, this becomes more useful when combined with Map 2 in Part 2 to calculate the overall death rates.

Map 2 - Average Population Densities (per Parish)
|
'''Step 3 - Import data for the parish population densities and generate environment from the 2D array'''
#Set up a base path for the import of the parish txt file
#base_path = "C:\\Users\\Home\\Documents\\MSc GIS\\Programming\\Black_Death\\BlackDeathProject" #Basepath
deathparishes = "deathparishes.txt" #Saved filename
path_to_file = os.path.join(base_path, deathparishes)
fd = open(path_to_file, 'r')
#mapB = fd.read()
#print(mapB) #Test to show data has imported

#Set up an environment to read the parish txt file into - this is called environmentB
environmentB = []
for line in fd:
    parsed_line = str.split(line, ",") #Split values up via commas
    rowlist = []
    for word in parsed_line:
        rowlist.append(float(word))
    environmentB.append(rowlist) #Append all lists individually so can print environment
fd.close()
#print(environmentB) #Test environment appears and all lines run

#Display environment of parish populations
matplotlib.pyplot.xlim(0, 400) #Set up x-axis
matplotlib.pyplot.ylim(0, 400) #Set up y-axis
matplotlib.pyplot.imshow(environmentB) #Shows the environment
matplotlib.pyplot.title('Average Parish Population Densities', loc='center') #Adds a centred title
matplotlib.pyplot.hsv() #Altered colourmap to red-yellow-green-cyan-blue-pink-magenta display, from original viridis: aids user interpretation
|
_____no_output_____
|
BSL-1.0
|
.ipynb
|
AGreen0/BlackDeathProject
|
This map contains the data for the average population densities per the 16 parishes investigated. The data is initially placed into a text file, which can be seen through print(mapB), but has then been put into the environment shown above. The different colours show the different populations per parish.

------------------------------------------------------------------------------

Part 2 - Calculate the Average Death Rate
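As a quick sanity check of the relationship given in the Data Source section, here is a worked example with purely illustrative values: for a cell with 10 rats caught per week and a population density of 50, the death rate is (0.8 x 10) x (1.3 x 50) = 8 x 65 = 520.

```python
# Illustrative values only; the real maps supply rats and density per 400m x 400m cell.
rats = 10
density = 50
death_rate = (0.8 * rats) * (1.3 * density)
print(death_rate)  # 520.0
```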
|
'''Step 4 - Calculate Map of Average Death Rates'''
#Sets up a list named result to append all calculated values to
result = []
for r in range(len(environmentA)): #Goes through both environments' (A and B) rows
    row_a = environmentA[r]
    row_b = environmentB[r]
    rowlist = []
    result.append(rowlist) #Append all lists individually so can merge values from environmentA and environmentB
    for c in range(len(row_a)): #Goes through both environments' (A and B) columns
        rats = row_a[c]
        parishes = row_b[c]
        # d = (0.8 x r) x (1.3 x p) Equation used to generate average death rate
        d = (0.8 * rats) * (1.3 * parishes) #Puts values through death average equation with initial set parameters
        rowlist.append(d)
        #print(d) #Test that results array shows

'''Step 5 - Plot and show the average death rates'''
#Sets up environment to display the results
matplotlib.pyplot.xlim(0, 400) #Set up x-axis
matplotlib.pyplot.ylim(0, 400) #Set up y-axis
matplotlib.pyplot.imshow(result) #Shows the environment
matplotlib.pyplot.title('Average Weekly Death Rates of the Great Plague', loc='center') #Adds a centred title
matplotlib.pyplot.hsv() #Altered colourmap to red-yellow-green-cyan-blue-pink-magenta display, from original viridis: aids user interpretation
#To do:
#Insert legend

'''Step 6 - Save the average death rate results as a separate txt file'''
np.savetxt('result.txt', result, fmt='%-6.2f', newline="\r\n") #Each row should equal a new line on the map
#Results have been padded to a width of 6 and rounded to 2 decimal points within the txt file
|
_____no_output_____
|
BSL-1.0
|
.ipynb
|
AGreen0/BlackDeathProject
|
The output map within Part 2 displays the average death rate calculations within the 400x400 environment of the parishes investigated. The results array has been saved as a result.txt file (rounded to two decimal places) that can be manipulated and utilised for further investigation.

------------------------------------------------------------------------------

Part 3 - Display the Death Rate with Changing Parameters
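Before altering the parameters, note that the result.txt grid saved in Part 2 can be reloaded at any time for further analysis; a minimal sketch using NumPy (already imported above as np):

```python
# Reload the saved death-rate grid and summarise it.
reloaded = np.loadtxt('result.txt')
print(reloaded.shape, reloaded.mean(), reloaded.max())
```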
|
'''Step 7 - Set up Rat Population Parameter Slider'''
#Generate a slider for the rats parameter
sR = widgets.FloatSlider(
value=0.8, #Initial parameter value set by the equation
min=0, #Minimum of range is 0
max=5.0, #Maximum of range is 5
step=0.1, #Values get to 1 decimal place increments
description='Rats:', #Label for slider
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='.1f',
)
display(sR) #Displays the parameter slider for rats that users can alter
'''Step 8 - Set up Parish Population Density Parameter Slider'''
#Generate a slider for the parish parameter
sP = widgets.FloatSlider(
value=1.3, #Initial parameter value set by the equation
min=0, #Minimum of range is 0
max=5.0, #Maximum of range is 5
step=0.1, #Values get to 1 decimal place increments
description='Parishes:', #Label for slider
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='.1f',
)
display(sP) #Displays the parameter slider for parish population that users can alter
|
_____no_output_____
|
BSL-1.0
|
.ipynb
|
AGreen0/BlackDeathProject
|
The sliders above can be adjusted to investigate the relationship between the rat population values and the average population density amounts. The chosen values become the new parameters when the following cell is run.
|
'''Step 9 - Display the Changed Parameters'''
#Formatting to display parameter amounts to correlate to the underlying map
print('Changed Parameter Values')
print('Rats:', sR.value)
print('Parishes:', sP.value)

'''Step 10 - Create a map of the death rate average with new changed parameters'''
#Alter the results list to incorporate the altered parameter values
result = []
for r in range(len(environmentA)): #Goes through both environments' (A and B) rows
    row_a = environmentA[r]
    row_b = environmentB[r]
    rowlist = []
    result.append(rowlist) #Append all lists individually so can merge values from environmentA and environmentB
    for c in range(len(row_a)): #Goes through both environments' (A and B) columns
        rats = row_a[c]
        parishes = row_b[c]
        # d = (0.8 x r) x (1.3 x p) #Original equation used to generate average death rate
        d = (sR.value * rats) * (sP.value * parishes) #Updated equation to use the altered parameter values
        rowlist.append(d)
        #print(d) #Test that results array has updated

#Set up a larger figure view of the final map
fig = matplotlib.pyplot.figure(figsize=(7, 7))
ax = fig.add_axes([0, 0, 1, 1])
matplotlib.pyplot.xlim(0, 400) #Set up x-axis
matplotlib.pyplot.ylim(0, 400) #Set up y-axis
matplotlib.pyplot.imshow(result) #Display the final map
matplotlib.pyplot.xlabel('Rat Populations') #Label the x-axis
matplotlib.pyplot.ylabel('Parish Densities') #Label the y-axis
matplotlib.pyplot.title('Average Weekly Death Rates of the Great Plague at Altered Parameters', loc='center') #Adds a centred title
matplotlib.pyplot.hsv() #Altered colourmap to red-yellow-green-cyan-blue-pink-magenta display, from original viridis: aids user interpretation

def update(d):
    d = (sR.value * rats) * (sP.value * parishes)
    rowlist.append(d) #Updates figure with new parameters

print('Average weekly death rate at these parameters =', round(d, 2)) #Print the average weekly death rate with altered parameters to 2 decimal places
|
Changed Parameter Values
Rats: 2.9
Parishes: 1.0
Average weekly death rate at these parameters = 29754.0
|
BSL-1.0
|
.ipynb
|
AGreen0/BlackDeathProject
|
SDLib

> Shilling simulated attacks and detection methods

Setup
|
!mkdir -p results
|
_____no_output_____
|
Apache-2.0
|
docs/T006054_SDLib.ipynb
|
sparsh-ai/recsys-attacks
|
Imports
|
# Standard library
import math
import os
import os.path
import random
import re
import sys
import time as tm
from collections import defaultdict
from math import sqrt, exp, log
from multiprocessing import Process, Manager
from os import makedirs, remove
from os.path import abspath
from random import shuffle, choice
from re import compile, findall, split
from time import strftime, localtime, time

# Scientific stack
import numpy as np
from numpy.linalg import norm
import scipy
from scipy.sparse import csr_matrix
from scipy.stats.stats import pearsonr

# Plotting
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns

# scikit-learn
from sklearn import metrics, preprocessing
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.manifold import TSNE
from sklearn.metrics import classification_report
from sklearn.metrics.pairwise import pairwise_distances, cosine_similarity
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
|
_____no_output_____
|
Apache-2.0
|
docs/T006054_SDLib.ipynb
|
sparsh-ai/recsys-attacks
|
Data
|
!mkdir -p dataset/amazon
!cd dataset/amazon && wget -q --show-progress https://github.com/Coder-Yu/SDLib/raw/master/dataset/amazon/profiles.txt
!cd dataset/amazon && wget -q --show-progress https://github.com/Coder-Yu/SDLib/raw/master/dataset/amazon/labels.txt
!mkdir -p dataset/averageattack
!cd dataset/averageattack && wget -q --show-progress https://github.com/Coder-Yu/SDLib/raw/master/dataset/averageattack/ratings.txt
!cd dataset/averageattack && wget -q --show-progress https://github.com/Coder-Yu/SDLib/raw/master/dataset/averageattack/labels.txt
!mkdir -p dataset/filmtrust
!cd dataset/filmtrust && wget -q --show-progress https://github.com/Coder-Yu/SDLib/raw/master/dataset/filmtrust/ratings.txt
!cd dataset/filmtrust && wget -q --show-progress https://github.com/Coder-Yu/SDLib/raw/master/dataset/filmtrust/trust.txt
|
ratings.txt 100%[===================>] 367.62K --.-KB/s in 0.006s
trust.txt 100%[===================>] 19.15K --.-KB/s in 0s
|
Apache-2.0
|
docs/T006054_SDLib.ipynb
|
sparsh-ai/recsys-attacks
|
Config

Configure the Detection Method

Entry | Example | Description
---|---|---
ratings | dataset/averageattack/ratings.txt | Set the path to the dirty recommendation dataset. Format: each row separated by empty, tab or comma symbol.
label | dataset/averageattack/labels.txt | Set the path to labels (for users). Format: each row separated by empty, tab or comma symbol.
ratings.setup | -columns 0 1 2 | -columns: (user, item, rating) columns of rating data are used; -header: to skip the first head line when reading data
MethodName | DegreeSAD/PCASelect/etc. | The name of the detection method
evaluation.setup | -testSet dataset/testset.txt | Main options: -testSet, -ap, -cv. -testSet path/to/test/file (specify the test set manually); -ap ratio (the user set, including items and ratings, is automatically partitioned into a training set and a test set, where the number is the ratio of the test set, e.g. -ap 0.2); -cv k (cross validation, where k is the number of folds, e.g. -cv 5)
output.setup | on -dir Results/ | Main option: whether to output recommendation results. -dir path: the directory path of output results.

Configure the Shilling Model

Entry | Example | Description
---|---|---
ratings | dataset/averageattack/ratings.txt | Set the path to the recommendation dataset. Format: each row separated by empty, tab or comma symbol.
ratings.setup | -columns 0 1 2 | -columns: (user, item, rating) columns of rating data are used; -header: to skip the first head line when reading data
attackSize | 0.01 | The ratio of the injected spammers to genuine users
fillerSize | 0.01 | The ratio of the filler items to all items
selectedSize | 0.001 | The ratio of the selected items to all items
linkSize | 0.01 | The ratio of the users maliciously linked by a spammer to all users
targetCount | 20 | The count of the targeted items
targetScore | 5.0 | The score given to the target items
threshold | 3.0 | An item with an average score lower than threshold may be chosen as one of the target items
minCount | 3 | An item with a ratings count larger than minCount may be chosen as one of the target items
maxCount | 50 | An item with a ratings count smaller than maxCount may be chosen as one of the target items
outputDir | data/ | User profiles and labels will be output here
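To make the shilling-model entries concrete, a hypothetical attack configuration assembled from the table above might look as follows; the dataset path and the chosen values are illustrative, and the key=value format mirrors the detection configs written below:

```
ratings=dataset/filmtrust/ratings.txt
ratings.setup=-columns 0 1 2
attackSize=0.01
fillerSize=0.01
selectedSize=0.001
linkSize=0.01
targetCount=20
targetScore=5.0
threshold=3.0
minCount=3
maxCount=50
outputDir=data/
```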
|
%%writefile BayesDetector.conf
ratings=dataset/amazon/profiles.txt
ratings.setup=-columns 0 1 2
label=dataset/amazon/labels.txt
methodName=BayesDetector
evaluation.setup=-cv 5
item.ranking=off -topN 50
num.max.iter=100
learnRate=-init 0.03 -max 0.1
reg.lambda=-u 0.3 -i 0.3
BayesDetector=-k 10 -negCount 256 -gamma 1 -filter 4 -delta 0.01
output.setup=on -dir results/
%%writefile CoDetector.conf
ratings=dataset/amazon/profiles.txt
ratings.setup=-columns 0 1 2
label=dataset/amazon/labels.txt
methodName=CoDetector
evaluation.setup=-ap 0.3
item.ranking=on -topN 50
num.max.iter=200
learnRate=-init 0.01 -max 0.01
reg.lambda=-u 0.8 -i 0.4
CoDetector=-k 10 -negCount 256 -gamma 1 -filter 4
output.setup=on -dir results/amazon/
%%writefile DegreeSAD.conf
ratings=dataset/amazon/profiles.txt
ratings.setup=-columns 0 1 2
label=dataset/amazon/labels.txt
methodName=DegreeSAD
evaluation.setup=-cv 5
output.setup=on -dir results/
%%writefile FAP.conf
ratings=dataset/averageattack/ratings.txt
ratings.setup=-columns 0 1 2
label=dataset/averageattack/labels.txt
methodName=FAP
evaluation.setup=-ap 0.000001
seedUser=350
topKSpam=1557
output.setup=on -dir results/
%%writefile PCASelectUsers.conf
ratings=dataset/averageattack/ratings.txt
ratings.setup=-columns 0 1 2
label=dataset/averageattack/labels.txt
methodName=PCASelectUsers
evaluation.setup=-ap 0.00001
kVals=3
attackSize=0.1
output.setup=on -dir results/
%%writefile SemiSAD.conf
ratings=dataset/averageattack/ratings.txt
ratings.setup=-columns 0 1 2
label=dataset/averageattack/labels.txt
methodName=SemiSAD
evaluation.setup=-ap 0.2
Lambda=0.5
topK=28
output.setup=on -dir results/
|
Writing SemiSAD.conf
|
Apache-2.0
|
docs/T006054_SDLib.ipynb
|
sparsh-ai/recsys-attacks
|
Baseclass
|
class SDetection(object):
def __init__(self,conf,trainingSet=None,testSet=None,labels=None,fold='[1]'):
self.config = conf
self.isSave = False
self.isLoad = False
self.foldInfo = fold
self.labels = labels
self.dao = RatingDAO(self.config, trainingSet, testSet)
self.training = []
self.trainingLabels = []
self.test = []
self.testLabels = []
def readConfiguration(self):
self.algorName = self.config['methodName']
self.output = LineConfig(self.config['output.setup'])
def printAlgorConfig(self):
"show algorithm's configuration"
print('Algorithm:',self.config['methodName'])
print('Ratings dataSet:',abspath(self.config['ratings']))
if LineConfig(self.config['evaluation.setup']).contains('-testSet'):
print('Test set:',abspath(LineConfig(self.config['evaluation.setup']).getOption('-testSet')))
#print 'Count of the users in training set: ',len()
print('Training set size: (user count: %d, item count %d, record count: %d)' %(self.dao.trainingSize()))
print('Test set size: (user count: %d, item count %d, record count: %d)' %(self.dao.testSize()))
print('='*80)
def initModel(self):
pass
def buildModel(self):
pass
def saveModel(self):
pass
def loadModel(self):
pass
def predict(self):
pass
def execute(self):
self.readConfiguration()
if self.foldInfo == '[1]':
self.printAlgorConfig()
# load model from disk or build model
if self.isLoad:
print('Loading model %s...' % (self.foldInfo))
self.loadModel()
else:
print('Initializing model %s...' % (self.foldInfo))
self.initModel()
print('Building Model %s...' % (self.foldInfo))
self.buildModel()
        # predict the ratings or the item ranking
print('Predicting %s...' % (self.foldInfo))
prediction = self.predict()
report = classification_report(self.testLabels, prediction, digits=4)
        currentTime = strftime("%Y-%m-%d %H-%M-%S", localtime(time()))
FileIO.writeFile(self.output['-dir'],self.algorName+'@'+currentTime+self.foldInfo,report)
# save model
if self.isSave:
print('Saving model %s...' % (self.foldInfo))
self.saveModel()
print(report)
return report
class SSDetection(SDetection):
def __init__(self,conf,trainingSet=None,testSet=None,labels=None,relation=list(),fold='[1]'):
super(SSDetection, self).__init__(conf,trainingSet,testSet,labels,fold)
self.sao = SocialDAO(self.config, relation) # social relations access control
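
# --- Usage sketch (added for illustration; not part of the original SDLib code) ---
# A concrete detector only overrides initModel/buildModel/predict; execute()
# then reads the configuration, trains the model and prints/saves a
# classification report. The file and class names below refer to other cells
# of this notebook and are meant as an assumption-laden example; the function
# is never called here.
def _example_run_detector():
    conf = Config('DegreeSAD.conf')                   # key=value config written above
    labels = FileIO.loadLabels(conf['label'])         # userId -> '0'/'1'
    data = FileIO.loadDataSet(conf, conf['ratings'])  # userId -> {itemId: rating}
    training, test = DataSplit.dataSplit(data, test_ratio=0.3)
    return DegreeSAD(conf, training, test, labels).execute()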
|
_____no_output_____
|
Apache-2.0
|
docs/T006054_SDLib.ipynb
|
sparsh-ai/recsys-attacks
|
Utils
|
class Config(object):
def __init__(self,fileName):
self.config = {}
self.readConfiguration(fileName)
def __getitem__(self, item):
if not self.contains(item):
print('parameter '+item+' is invalid!')
exit(-1)
return self.config[item]
def getOptions(self,item):
if not self.contains(item):
print('parameter '+item+' is invalid!')
exit(-1)
return self.config[item]
def contains(self,key):
return key in self.config
def readConfiguration(self,fileName):
if not os.path.exists(abspath(fileName)):
print('config file is not found!')
raise IOError
with open(fileName) as f:
for ind,line in enumerate(f):
if line.strip()!='':
try:
key,value=line.strip().split('=')
self.config[key]=value
except ValueError:
print('config file is not in the correct format! Error Line:%d'%(ind))
class LineConfig(object):
def __init__(self,content):
self.line = content.strip().split(' ')
self.options = {}
self.mainOption = False
if self.line[0] == 'on':
self.mainOption = True
elif self.line[0] == 'off':
self.mainOption = False
for i,item in enumerate(self.line):
if (item.startswith('-') or item.startswith('--')) and not item[1:].isdigit():
ind = i+1
for j,sub in enumerate(self.line[ind:]):
if (sub.startswith('-') or sub.startswith('--')) and not sub[1:].isdigit():
ind = j
break
if j == len(self.line[ind:])-1:
ind=j+1
break
try:
self.options[item] = ' '.join(self.line[i+1:i+1+ind])
except IndexError:
self.options[item] = 1
def __getitem__(self, item):
if not self.contains(item):
print('parameter '+item+' is invalid!')
exit(-1)
return self.options[item]
def getOption(self,key):
if not self.contains(key):
print('parameter '+key+' is invalid!')
exit(-1)
return self.options[key]
def isMainOn(self):
return self.mainOption
def contains(self,key):
return key in self.options
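
# Illustrative parse (added comment, not executed): for
#     lc = LineConfig('on -dir results/ -testSet dataset/testset.txt')
# the object gives lc.isMainOn() -> True, lc['-dir'] -> 'results/',
# and lc.getOption('-testSet') -> 'dataset/testset.txt'.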
class FileIO(object):
def __init__(self):
pass
@staticmethod
def writeFile(dir,file,content,op = 'w'):
if not os.path.exists(dir):
os.makedirs(dir)
        if isinstance(content, str):
with open(dir + file, op) as f:
f.write(content)
else:
with open(dir+file,op) as f:
f.writelines(content)
@staticmethod
def deleteFile(filePath):
if os.path.exists(filePath):
remove(filePath)
@staticmethod
def loadDataSet(conf, file, bTest=False):
trainingData = defaultdict(dict)
testData = defaultdict(dict)
ratingConfig = LineConfig(conf['ratings.setup'])
if not bTest:
print('loading training data...')
else:
print('loading test data...')
with open(file) as f:
ratings = f.readlines()
# ignore the headline
if ratingConfig.contains('-header'):
ratings = ratings[1:]
# order of the columns
order = ratingConfig['-columns'].strip().split()
for lineNo, line in enumerate(ratings):
items = split(' |,|\t', line.strip())
if not bTest and len(order) < 3:
print('The rating file is not in a correct format. Error: Line num %d' % lineNo)
exit(-1)
try:
userId = items[int(order[0])]
itemId = items[int(order[1])]
if bTest and len(order)<3:
rating = 1 #default value
else:
rating = items[int(order[2])]
except ValueError:
print('Error! Have you added the option -header to the rating.setup?')
exit(-1)
if not bTest:
trainingData[userId][itemId]=float(rating)
else:
testData[userId][itemId] = float(rating)
if not bTest:
return trainingData
else:
return testData
@staticmethod
def loadRelationship(conf, filePath):
socialConfig = LineConfig(conf['social.setup'])
relation = []
print('loading social data...')
with open(filePath) as f:
relations = f.readlines()
# ignore the headline
if socialConfig.contains('-header'):
relations = relations[1:]
# order of the columns
order = socialConfig['-columns'].strip().split()
if len(order) <= 2:
print('The social file is not in a correct format.')
for lineNo, line in enumerate(relations):
items = split(' |,|\t', line.strip())
if len(order) < 2:
print('The social file is not in a correct format. Error: Line num %d' % lineNo)
exit(-1)
userId1 = items[int(order[0])]
userId2 = items[int(order[1])]
if len(order) < 3:
weight = 1
else:
weight = float(items[int(order[2])])
relation.append([userId1, userId2, weight])
return relation
@staticmethod
def loadLabels(filePath):
labels = {}
with open(filePath) as f:
for line in f:
items = split(' |,|\t', line.strip())
labels[items[0]] = items[1]
return labels
class DataSplit(object):
def __init__(self):
pass
@staticmethod
def dataSplit(data,test_ratio = 0.3,output=False,path='./',order=1):
if test_ratio>=1 or test_ratio <=0:
test_ratio = 0.3
testSet = {}
trainingSet = {}
for user in data:
if random.random() < test_ratio:
testSet[user] = data[user].copy()
else:
trainingSet[user] = data[user].copy()
if output:
FileIO.writeFile(path,'testSet['+str(order)+']',testSet)
FileIO.writeFile(path, 'trainingSet[' + str(order) + ']', trainingSet)
return trainingSet,testSet
@staticmethod
def crossValidation(data,k,output=False,path='./',order=1):
if k<=1 or k>10:
k=3
for i in range(k):
trainingSet = {}
testSet = {}
for ind,user in enumerate(data):
if ind%k == i:
testSet[user] = data[user].copy()
else:
trainingSet[user] = data[user].copy()
yield trainingSet,testSet
def drawLine(x,y,labels,xLabel,yLabel,title):
f, ax = plt.subplots(1, 1, figsize=(10, 6), sharex=True)
#f.tight_layout()
#sns.set(style="darkgrid")
palette = ['blue','orange','red','green','purple','pink']
# for i in range(len(ax)):
# x1 = range(0, len(x))
#ax.set_xlim(min(x1)-0.2,max(x1)+0.2)
# mini = 10000;max = -10000
# for label in labels:
# if mini>min(y[i][label]):
# mini = min(y[i][label])
# if max<max(y[i][label]):
# max = max(y[i][label])
# ax[i].set_ylim(mini-0.25*(max-mini),max+0.25*(max-mini))
# for j,label in enumerate(labels):
# if j%2==1:
# ax[i].plot(x1, y[i][label], color=palette[j/2], marker='.', label=label, markersize=12)
# else:
# ax[i].plot(x1, y[i][label], color=palette[j/2], marker='.', label=label,markersize=12,linestyle='--')
# ax[0].set_ylabel(yLabel,fontsize=20)
for xdata,ydata,lab,c in zip(x,y,labels,palette):
ax.plot(xdata,ydata,color = c,label=lab)
ind = np.arange(0,60,10)
ax.set_xticks(ind)
#ax.set_xticklabels(x)
ax.set_xlabel(xLabel, fontsize=20)
ax.set_ylabel(yLabel, fontsize=20)
ax.tick_params(labelsize=16)
#ax.tick_params(axs='y', labelsize=20)
ax.set_title(title,fontsize=24)
plt.grid(True)
handles, labels1 = ax.get_legend_handles_labels()
#ax[i].legend(handles, labels1, loc=2, fontsize=20)
# ax.legend(loc=2,
# ncol=6, borderaxespad=0.,fontsize=20)
#ax[2].legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.,fontsize=20)
ax.legend(loc='upper right',fontsize=20,shadow=True)
plt.show()
plt.close()
paths = ['SVD.txt','PMF.txt','EE.txt','RDML.txt']
files = ['EE['+str(i)+'] iteration.txt' for i in range(2,9)]
x = []
y = []
data = []
def normalize():
for file in files:
xdata = []
with open(file) as f:
for line in f:
items = line.strip().split()
rmse = items[2].split(':')[1]
xdata.append(float(rmse))
data.append(xdata)
average = []
for i in range(len(data[0])):
total = 0
for k in range(len(data)):
total += data[k][i]
average.append(str(i+1)+':'+str(float(total)/len(data))+'\n')
with open('EE.txt','w') as f:
f.writelines(average)
def readData():
for file in paths:
xdata = []
ydata = []
with open(file) as f:
for line in f:
items = line.strip().split(':')
xdata.append(int(items[0]))
rmse = float(items[1])
ydata.append(float(rmse))
x.append(xdata)
y.append(ydata)
# x = [[1,2,3],[1,2,3]]
# y = [[1,2,3],[4,5,6]]
#normalize()
readData()
labels = ['SVD','PMF','EE','RDML',]
xlabel = 'Iteration'
ylabel = 'RMSE'
drawLine(x,y,labels,xlabel,ylabel,'')
def l1(x):
return norm(x,ord=1)
def l2(x):
return norm(x)
def common(x1,x2):
# find common ratings
common = (x1!=0)&(x2!=0)
new_x1 = x1[common]
new_x2 = x2[common]
return new_x1,new_x2
def cosine_sp(x1,x2):
'x1,x2 are dicts,this version is for sparse representation'
total = 0
denom1 = 0
denom2 =0
for k in x1:
if k in x2:
total+=x1[k]*x2[k]
denom1+=x1[k]**2
denom2+=x2[k]**2
try:
return (total + 0.0) / (sqrt(denom1) * sqrt(denom2))
except ZeroDivisionError:
return 0
def cosine(x1,x2):
#find common ratings
new_x1, new_x2 = common(x1,x2)
#compute the cosine similarity between two vectors
sum = new_x1.dot(new_x2)
denom = sqrt(new_x1.dot(new_x1)*new_x2.dot(new_x2))
try:
return float(sum)/denom
except ZeroDivisionError:
return 0
#return cosine_similarity(x1,x2)[0][0]
def pearson_sp(x1,x2):
total = 0
denom1 = 0
denom2 = 0
overlapped=False
try:
mean1 = sum(x1.values())/(len(x1)+0.0)
mean2 = sum(x2.values()) / (len(x2) + 0.0)
for k in x1:
if k in x2:
total += (x1[k]-mean1) * (x2[k]-mean2)
denom1 += (x1[k]-mean1) ** 2
denom2 += (x2[k]-mean2) ** 2
overlapped=True
return (total + 0.0) / (sqrt(denom1) * sqrt(denom2))
except ZeroDivisionError:
if overlapped:
return 1
else:
return 0
def euclidean(x1,x2):
#find common ratings
new_x1, new_x2 = common(x1, x2)
#compute the euclidean between two vectors
diff = new_x1-new_x2
denom = sqrt((diff.dot(diff)))
try:
return 1/denom
except ZeroDivisionError:
return 0
def pearson(x1,x2):
#find common ratings
new_x1, new_x2 = common(x1, x2)
#compute the pearson similarity between two vectors
ind1 = new_x1 > 0
ind2 = new_x2 > 0
try:
mean_x1 = float(new_x1.sum())/ind1.sum()
mean_x2 = float(new_x2.sum())/ind2.sum()
new_x1 = new_x1 - mean_x1
new_x2 = new_x2 - mean_x2
sum = new_x1.dot(new_x2)
denom = sqrt((new_x1.dot(new_x1))*(new_x2.dot(new_x2)))
return float(sum) / denom
except ZeroDivisionError:
return 0
def similarity(x1,x2,sim):
if sim == 'pcc':
return pearson_sp(x1,x2)
if sim == 'euclidean':
return euclidean(x1,x2)
else:
return cosine_sp(x1, x2)
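
# Worked example (added comment): for sparse profiles x1 = {'a': 4, 'b': 2} and
# x2 = {'a': 2, 'b': 4}, cosine_sp uses only the co-rated items and returns
# (4*2 + 2*4) / (sqrt(4**2 + 2**2) * sqrt(2**2 + 4**2)) = 16/20 = 0.8.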
def normalize(vec,maxVal,minVal):
'get the normalized value using min-max normalization'
if maxVal > minVal:
return float(vec-minVal)/(maxVal-minVal)+0.01
elif maxVal==minVal:
return vec/maxVal
else:
print('error... maximum value is less than minimum value.')
raise ArithmeticError
def sigmoid(val):
return 1/(1+exp(-val))
def denormalize(vec,maxVal,minVal):
return minVal+(vec-0.01)*(maxVal-minVal)
|
_____no_output_____
|
Apache-2.0
|
docs/T006054_SDLib.ipynb
|
sparsh-ai/recsys-attacks
|
Shilling models Attack base class
|
class Attack(object):
def __init__(self,conf):
self.config = Config(conf)
self.userProfile = FileIO.loadDataSet(self.config,self.config['ratings'])
self.itemProfile = defaultdict(dict)
self.attackSize = float(self.config['attackSize'])
self.fillerSize = float(self.config['fillerSize'])
self.selectedSize = float(self.config['selectedSize'])
self.targetCount = int(self.config['targetCount'])
self.targetScore = float(self.config['targetScore'])
self.threshold = float(self.config['threshold'])
self.minCount = int(self.config['minCount'])
self.maxCount = int(self.config['maxCount'])
self.minScore = float(self.config['minScore'])
self.maxScore = float(self.config['maxScore'])
self.outputDir = self.config['outputDir']
if not os.path.exists(self.outputDir):
os.makedirs(self.outputDir)
for user in self.userProfile:
for item in self.userProfile[user]:
self.itemProfile[item][user] = self.userProfile[user][item]
self.spamProfile = defaultdict(dict)
self.spamItem = defaultdict(list) #items rated by spammers
self.targetItems = []
self.itemAverage = {}
self.getAverageRating()
self.selectTarget()
self.startUserID = 0
def getAverageRating(self):
for itemID in self.itemProfile:
li = list(self.itemProfile[itemID].values())
self.itemAverage[itemID] = float(sum(li)) / len(li)
def selectTarget(self,):
print('Selecting target items...')
print('-'*80)
print('Target item Average rating of the item')
itemList = list(self.itemProfile.keys())
itemList.sort()
while len(self.targetItems) < self.targetCount:
target = np.random.randint(len(itemList)) #generate a target order at random
if len(self.itemProfile[str(itemList[target])]) < self.maxCount and len(self.itemProfile[str(itemList[target])]) > self.minCount \
and str(itemList[target]) not in self.targetItems \
and self.itemAverage[str(itemList[target])] <= self.threshold:
self.targetItems.append(str(itemList[target]))
print(str(itemList[target]),' ',self.itemAverage[str(itemList[target])])
def getFillerItems(self):
mu = int(self.fillerSize*len(self.itemProfile))
sigma = int(0.1*mu)
markedItemsCount = abs(int(round(random.gauss(mu, sigma))))
markedItems = np.random.randint(len(self.itemProfile), size=markedItemsCount)
return markedItems.tolist()
def insertSpam(self,startID=0):
pass
def loadTarget(self,filename):
with open(filename) as f:
for line in f:
self.targetItems.append(line.strip())
def generateLabels(self,filename):
labels = []
path = self.outputDir + filename
with open(path,'w') as f:
for user in self.spamProfile:
labels.append(user+' 1\n')
for user in self.userProfile:
labels.append(user+' 0\n')
f.writelines(labels)
        print('User labels have been output to '+abspath(self.config['outputDir'])+'.')
def generateProfiles(self,filename):
ratings = []
path = self.outputDir+filename
with open(path, 'w') as f:
for user in self.userProfile:
for item in self.userProfile[user]:
ratings.append(user+' '+item+' '+str(self.userProfile[user][item])+'\n')
for user in self.spamProfile:
for item in self.spamProfile[user]:
ratings.append(user + ' ' + item + ' ' + str(self.spamProfile[user][item])+'\n')
f.writelines(ratings)
        print('User profiles have been output to '+abspath(self.config['outputDir'])+'.')
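
# Illustration (added comment, numbers are hypothetical): getFillerItems() draws
# the filler count from a Gaussian centred on fillerSize * |items|, so with
# 1,000 items and fillerSize = 0.05 each spam profile receives roughly
# round(N(50, 5)) randomly chosen filler items, in addition to the targetCount
# ratings given to the target items by each attack model.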
|
_____no_output_____
|
Apache-2.0
|
docs/T006054_SDLib.ipynb
|
sparsh-ai/recsys-attacks
|
Relation attack
|
class RelationAttack(Attack):
def __init__(self,conf):
super(RelationAttack, self).__init__(conf)
self.spamLink = defaultdict(list)
self.relation = FileIO.loadRelationship(self.config,self.config['social'])
self.trustLink = defaultdict(list)
self.trusteeLink = defaultdict(list)
for u1,u2,t in self.relation:
self.trustLink[u1].append(u2)
self.trusteeLink[u2].append(u1)
        self.activeUser = {}  # normal users who follow the spammers back
        self.linkedUser = {}  # users to whom the spammers have planted links
# def reload(self):
# super(RelationAttack, self).reload()
# self.spamLink = defaultdict(list)
# self.trustLink, self.trusteeLink = loadTrusts(self.config['social'])
    # self.activeUser = {}  # normal users who follow the spammers back
    # self.linkedUser = {}  # users to whom the spammers have planted links
def farmLink(self):
pass
def getReciprocal(self,target):
        # probability that the current target user follows the spammer back, based on the overlap between followers and followees
reciprocal = float(2 * len(set(self.trustLink[target]).intersection(self.trusteeLink[target])) + 0.1) \
/ (len(set(self.trustLink[target]).union(self.trusteeLink[target])) + 1)
reciprocal += (len(self.trustLink[target]) + 0.1) / (len(self.trustLink[target]) + len(self.trusteeLink[target]) + 1)
reciprocal /= 2
return reciprocal
def generateSocialConnections(self,filename):
relations = []
path = self.outputDir + filename
with open(path, 'w') as f:
for u1 in self.trustLink:
for u2 in self.trustLink[u1]:
relations.append(u1 + ' ' + u2 + ' 1\n')
for u1 in self.spamLink:
for u2 in self.spamLink[u1]:
relations.append(u1 + ' ' + u2 + ' 1\n')
f.writelines(relations)
print('Social relations have been output to ' + abspath(self.config['outputDir']) + '.')
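
# Worked example (added comment): if a target user follows {A, B, C} and is
# followed by {B, C, D, E}, the intersection holds 2 users and the union 5, so
# getReciprocal returns ((2*2 + 0.1)/(5 + 1) + (3 + 0.1)/(3 + 4 + 1)) / 2 ≈ 0.54,
# the probability with which the target is assumed to follow the spammer back.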
|
_____no_output_____
|
Apache-2.0
|
docs/T006054_SDLib.ipynb
|
sparsh-ai/recsys-attacks
|
Random relation attack
|
class RandomRelationAttack(RelationAttack):
def __init__(self,conf):
super(RandomRelationAttack, self).__init__(conf)
self.scale = float(self.config['linkSize'])
    def farmLink(self):  # randomly inject fake social relations
for spam in self.spamProfile:
            # plant links to users who rated the target items
for item in self.spamItem[spam]:
if random.random() < 0.01:
for target in self.itemProfile[item]:
self.spamLink[spam].append(target)
response = np.random.random()
reciprocal = self.getReciprocal(target)
if response <= reciprocal:
self.trustLink[target].append(spam)
self.activeUser[target] = 1
else:
self.linkedUser[target] = 1
            # plant links to the remaining users with probability 'scale'
for user in self.userProfile:
if random.random() < self.scale:
self.spamLink[spam].append(user)
response = np.random.random()
reciprocal = self.getReciprocal(user)
if response < reciprocal:
self.trustLink[user].append(spam)
self.activeUser[user] = 1
else:
self.linkedUser[user] = 1
|
_____no_output_____
|
Apache-2.0
|
docs/T006054_SDLib.ipynb
|
sparsh-ai/recsys-attacks
|
Random attack
|
class RandomAttack(Attack):
def __init__(self,conf):
super(RandomAttack, self).__init__(conf)
def insertSpam(self,startID=0):
print('Modeling random attack...')
itemList = list(self.itemProfile.keys())
if startID == 0:
self.startUserID = len(self.userProfile)
else:
self.startUserID = startID
for i in range(int(len(self.userProfile)*self.attackSize)):
            # filler items
fillerItems = self.getFillerItems()
for item in fillerItems:
self.spamProfile[str(self.startUserID)][str(itemList[item])] = random.randint(self.minScore,self.maxScore)
            # target items
for j in range(self.targetCount):
target = np.random.randint(len(self.targetItems))
self.spamProfile[str(self.startUserID)][self.targetItems[target]] = self.targetScore
self.spamItem[str(self.startUserID)].append(self.targetItems[target])
self.startUserID += 1
class RR_Attack(RandomRelationAttack,RandomAttack):
def __init__(self,conf):
super(RR_Attack, self).__init__(conf)
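
# Quick sanity check (added comment, illustrative numbers): with attackSize = 0.1
# and 1,500 genuine users, insertSpam() creates int(1500 * 0.1) = 150 spam
# profiles; each one rates a Gaussian-sized set of random filler items with
# random scores and gives targetScore to (up to) targetCount random target items.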
|
_____no_output_____
|
Apache-2.0
|
docs/T006054_SDLib.ipynb
|
sparsh-ai/recsys-attacks
|
Average attack
|
class AverageAttack(Attack):
def __init__(self,conf):
super(AverageAttack, self).__init__(conf)
def insertSpam(self,startID=0):
print('Modeling average attack...')
itemList = list(self.itemProfile.keys())
if startID == 0:
self.startUserID = len(self.userProfile)
else:
self.startUserID = startID
for i in range(int(len(self.userProfile)*self.attackSize)):
#fill
fillerItems = self.getFillerItems()
for item in fillerItems:
self.spamProfile[str(self.startUserID)][str(itemList[item])] = round(self.itemAverage[str(itemList[item])])
#target
for j in range(self.targetCount):
target = np.random.randint(len(self.targetItems))
self.spamProfile[str(self.startUserID)][self.targetItems[target]] = self.targetScore
self.spamItem[str(self.startUserID)].append(self.targetItems[target])
self.startUserID += 1
|
_____no_output_____
|
Apache-2.0
|
docs/T006054_SDLib.ipynb
|
sparsh-ai/recsys-attacks
|
Random average relation
|
class RA_Attack(RandomRelationAttack,AverageAttack):
def __init__(self,conf):
super(RA_Attack, self).__init__(conf)
|
_____no_output_____
|
Apache-2.0
|
docs/T006054_SDLib.ipynb
|
sparsh-ai/recsys-attacks
|
Bandwagon attack
|
class BandWagonAttack(Attack):
def __init__(self,conf):
super(BandWagonAttack, self).__init__(conf)
self.hotItems = sorted(iter(self.itemProfile.items()), key=lambda d: len(d[1]), reverse=True)[
:int(self.selectedSize * len(self.itemProfile))]
def insertSpam(self,startID=0):
print('Modeling bandwagon attack...')
itemList = list(self.itemProfile.keys())
if startID == 0:
self.startUserID = len(self.userProfile)
else:
self.startUserID = startID
for i in range(int(len(self.userProfile)*self.attackSize)):
            # filler items
fillerItems = self.getFillerItems()
for item in fillerItems:
self.spamProfile[str(self.startUserID)][str(itemList[item])] = random.randint(self.minScore,self.maxScore)
            # selected (bandwagon) items
selectedItems = self.getSelectedItems()
for item in selectedItems:
self.spamProfile[str(self.startUserID)][item] = self.targetScore
            # target items
for j in range(self.targetCount):
target = np.random.randint(len(self.targetItems))
self.spamProfile[str(self.startUserID)][self.targetItems[target]] = self.targetScore
self.spamItem[str(self.startUserID)].append(self.targetItems[target])
self.startUserID += 1
def getFillerItems(self):
mu = int(self.fillerSize*len(self.itemProfile))
sigma = int(0.1*mu)
markedItemsCount = int(round(random.gauss(mu, sigma)))
if markedItemsCount < 0:
markedItemsCount = 0
markedItems = np.random.randint(len(self.itemProfile), size=markedItemsCount)
return markedItems
def getSelectedItems(self):
mu = int(self.selectedSize * len(self.itemProfile))
sigma = int(0.1 * mu)
markedItemsCount = abs(int(round(random.gauss(mu, sigma))))
markedIndexes = np.random.randint(len(self.hotItems), size=markedItemsCount)
markedItems = [self.hotItems[index][0] for index in markedIndexes]
return markedItems
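
# Note (added comment): the "selected" items are the selectedSize fraction of
# most-rated (hot) items; rating them with targetScore makes each spam profile
# co-rate items with many genuine users, which is what distinguishes the
# bandwagon attack from the purely random and average attacks.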
|
_____no_output_____
|
Apache-2.0
|
docs/T006054_SDLib.ipynb
|
sparsh-ai/recsys-attacks
|
Random bandwagon relation
|
class RB_Attack(RandomRelationAttack,BandWagonAttack):
def __init__(self,conf):
super(RB_Attack, self).__init__(conf)
|
_____no_output_____
|
Apache-2.0
|
docs/T006054_SDLib.ipynb
|
sparsh-ai/recsys-attacks
|
Hybrid attack
|
class HybridAttack(Attack):
def __init__(self,conf):
super(HybridAttack, self).__init__(conf)
self.aveAttack = AverageAttack(conf)
self.bandAttack = BandWagonAttack(conf)
self.randAttack = RandomAttack(conf)
def insertSpam(self,startID=0):
self.aveAttack.insertSpam()
self.bandAttack.insertSpam(self.aveAttack.startUserID+1)
self.randAttack.insertSpam(self.bandAttack.startUserID+1)
self.spamProfile = {}
self.spamProfile.update(self.aveAttack.spamProfile)
self.spamProfile.update(self.bandAttack.spamProfile)
self.spamProfile.update(self.randAttack.spamProfile)
def generateProfiles(self,filename):
ratings = []
path = self.outputDir + filename
with open(path, 'w') as f:
for user in self.userProfile:
for item in self.userProfile[user]:
ratings.append(user + ' ' + item + ' ' + str(self.userProfile[user][item]) + '\n')
for user in self.spamProfile:
for item in self.spamProfile[user]:
ratings.append(user + ' ' + item + ' ' + str(self.spamProfile[user][item]) + '\n')
f.writelines(ratings)
        print('User profiles have been output to ' + abspath(self.config['outputDir']) + '.')
def generateLabels(self,filename):
labels = []
path = self.outputDir + filename
with open(path,'w') as f:
for user in self.spamProfile:
labels.append(user+' 1\n')
for user in self.userProfile:
labels.append(user+' 0\n')
f.writelines(labels)
        print('User labels have been output to '+abspath(self.config['outputDir'])+'.')
|
_____no_output_____
|
Apache-2.0
|
docs/T006054_SDLib.ipynb
|
sparsh-ai/recsys-attacks
|
Generate data
|
%%writefile config.conf
ratings=dataset/filmtrust/ratings.txt
ratings.setup=-columns 0 1 2
social=dataset/filmtrust/trust.txt
social.setup=-columns 0 1 2
attackSize=0.1
fillerSize=0.05
selectedSize=0.005
targetCount=20
targetScore=4.0
threshold=3.0
maxScore=4.0
minScore=1.0
minCount=5
maxCount=50
linkSize=0.001
outputDir=output/
attack = RR_Attack('config.conf')
attack.insertSpam()
attack.farmLink()
attack.generateLabels('labels.txt')
attack.generateProfiles('profiles.txt')
attack.generateSocialConnections('relations.txt')
|
loading training data...
Selecting target items...
--------------------------------------------------------------------------------
Target item Average rating of the item
877 2.875
472 2.5833333333333335
715 2.8
528 2.7142857142857144
169 2.25
442 2.8055555555555554
270 2.962962962962963
681 2.75
843 3.0
832 1.8571428571428572
668 2.7777777777777777
938 2.9166666666666665
282 2.642857142857143
489 2.1666666666666665
927 2.5833333333333335
577 2.5
693 2.6875
593 2.7083333333333335
529 2.5
872 2.3333333333333335
loading social data...
Modeling random attack...
User labels have been output to /content/output.
User profiles have been output to /content/output.
Social relations have been output to /content/output.
|
Apache-2.0
|
docs/T006054_SDLib.ipynb
|
sparsh-ai/recsys-attacks
|
Data access objects
|
class RatingDAO(object):
'data access control'
def __init__(self,config, trainingData, testData):
self.config = config
self.ratingConfig = LineConfig(config['ratings.setup'])
self.user = {} #used to store the order of users in the training set
self.item = {} #used to store the order of items in the training set
self.id2user = {}
self.id2item = {}
self.all_Item = {}
self.all_User = {}
        self.userMeans = {} #used to store the mean values of users' ratings
        self.itemMeans = {} #used to store the mean values of items' ratings
self.globalMean = 0
self.timestamp = {}
# self.trainingMatrix = None
# self.validationMatrix = None
self.testSet_u = testData.copy() # used to store the test set by hierarchy user:[item,rating]
self.testSet_i = defaultdict(dict) # used to store the test set by hierarchy item:[user,rating]
self.trainingSet_u = trainingData.copy()
self.trainingSet_i = defaultdict(dict)
#self.rScale = []
self.trainingData = trainingData
self.testData = testData
self.__generateSet()
self.__computeItemMean()
self.__computeUserMean()
self.__globalAverage()
def __generateSet(self):
scale = set()
# find the maximum rating and minimum value
# for i, entry in enumerate(self.trainingData):
# userName, itemName, rating = entry
# scale.add(float(rating))
# self.rScale = list(scale)
# self.rScale.sort()
for i,user in enumerate(self.trainingData):
for item in self.trainingData[user]:
# makes the rating within the range [0, 1].
#rating = normalize(float(rating), self.rScale[-1], self.rScale[0])
#self.trainingSet_u[userName][itemName] = float(rating)
self.trainingSet_i[item][user] = self.trainingData[user][item]
# order the user
if user not in self.user:
self.user[user] = len(self.user)
self.id2user[self.user[user]] = user
# order the item
if item not in self.item:
self.item[item] = len(self.item)
self.id2item[self.item[item]] = item
self.trainingSet_i[item][user] = self.trainingData[user][item]
# userList.append
# triple.append([self.user[userName], self.item[itemName], rating])
# self.trainingMatrix = new_sparseMatrix.SparseMatrix(triple)
self.all_User.update(self.user)
self.all_Item.update(self.item)
for i, user in enumerate(self.testData):
# order the user
if user not in self.user:
self.all_User[user] = len(self.all_User)
for item in self.testData[user]:
# order the item
if item not in self.item:
self.all_Item[item] = len(self.all_Item)
#self.testSet_u[userName][itemName] = float(rating)
self.testSet_i[item][user] = self.testData[user][item]
def __globalAverage(self):
total = sum(self.userMeans.values())
if total==0:
self.globalMean = 0
else:
self.globalMean = total/len(self.userMeans)
def __computeUserMean(self):
# for u in self.user:
# n = self.row(u) > 0
# mean = 0
#
# if not self.containsUser(u): # no data about current user in training set
# pass
# else:
# sum = float(self.row(u)[0].sum())
# try:
# mean = sum/ n[0].sum()
# except ZeroDivisionError:
# mean = 0
# self.userMeans[u] = mean
for u in self.trainingSet_u:
self.userMeans[u] = sum(self.trainingSet_u[u].values())/(len(list(self.trainingSet_u[u].values()))+0.0)
for u in self.testSet_u:
self.userMeans[u] = sum(self.testSet_u[u].values())/(len(list(self.testSet_u[u].values()))+0.0)
def __computeItemMean(self):
# for c in self.item:
# n = self.col(c) > 0
# mean = 0
# if not self.containsItem(c): # no data about current user in training set
# pass
# else:
# sum = float(self.col(c)[0].sum())
# try:
# mean = sum / n[0].sum()
# except ZeroDivisionError:
# mean = 0
# self.itemMeans[c] = mean
for item in self.trainingSet_i:
self.itemMeans[item] = sum(self.trainingSet_i[item].values())/(len(list(self.trainingSet_i[item].values())) + 0.0)
for item in self.testSet_i:
self.itemMeans[item] = sum(self.testSet_i[item].values())/(len(list(self.testSet_i[item].values())) + 0.0)
def getUserId(self,u):
if u in self.user:
return self.user[u]
else:
return -1
def getItemId(self,i):
if i in self.item:
return self.item[i]
else:
return -1
def trainingSize(self):
recordCount = 0
for user in self.trainingData:
recordCount+=len(self.trainingData[user])
return (len(self.trainingSet_u),len(self.trainingSet_i),recordCount)
def testSize(self):
recordCount = 0
for user in self.testData:
recordCount += len(self.testData[user])
return (len(self.testSet_u),len(self.testSet_i),recordCount)
def contains(self,u,i):
'whether user u rated item i'
if u in self.trainingSet_u and i in self.trainingSet_u[u]:
return True
return False
def containsUser(self,u):
'whether user is in training set'
return u in self.trainingSet_u
def containsItem(self,i):
'whether item is in training set'
return i in self.trainingSet_i
def allUserRated(self, u):
if u in self.user:
return list(self.trainingSet_u[u].keys()), list(self.trainingSet_u[u].values())
else:
return list(self.testSet_u[u].keys()), list(self.testSet_u[u].values())
# def userRated(self,u):
# if self.trainingMatrix.matrix_User.has_key(self.getUserId(u)):
# itemIndex = self.trainingMatrix.matrix_User[self.user[u]].keys()
# rating = self.trainingMatrix.matrix_User[self.user[u]].values()
# return (itemIndex,rating)
# return ([],[])
#
# def itemRated(self,i):
# if self.trainingMatrix.matrix_Item.has_key(self.getItemId(i)):
# userIndex = self.trainingMatrix.matrix_Item[self.item[i]].keys()
# rating = self.trainingMatrix.matrix_Item[self.item[i]].values()
# return (userIndex,rating)
# return ([],[])
# def row(self,u):
# return self.trainingMatrix.row(self.getUserId(u))
#
# def col(self,c):
# return self.trainingMatrix.col(self.getItemId(c))
#
# def sRow(self,u):
# return self.trainingMatrix.sRow(self.getUserId(u))
#
# def sCol(self,c):
# return self.trainingMatrix.sCol(self.getItemId(c))
#
# def rating(self,u,c):
# return self.trainingMatrix.elem(self.getUserId(u),self.getItemId(c))
#
# def ratingScale(self):
# return (self.rScale[0],self.rScale[1])
# def elemCount(self):
# return self.trainingMatrix.elemCount()
class SocialDAO(object):
def __init__(self,conf,relation=list()):
self.config = conf
self.user = {} #used to store the order of users
self.relation = relation
self.followees = {}
self.followers = {}
self.trustMatrix = self.__generateSet()
def __generateSet(self):
#triple = []
for line in self.relation:
userId1,userId2,weight = line
#add relations to dict
if userId1 not in self.followees:
self.followees[userId1] = {}
self.followees[userId1][userId2] = weight
if userId2 not in self.followers:
self.followers[userId2] = {}
self.followers[userId2][userId1] = weight
# order the user
if userId1 not in self.user:
self.user[userId1] = len(self.user)
if userId2 not in self.user:
self.user[userId2] = len(self.user)
#triple.append([self.user[userId1], self.user[userId2], weight])
#return new_sparseMatrix.SparseMatrix(triple)
# def row(self,u):
# #return user u's followees
# return self.trustMatrix.row(self.user[u])
#
# def col(self,u):
# #return user u's followers
# return self.trustMatrix.col(self.user[u])
#
# def elem(self,u1,u2):
# return self.trustMatrix.elem(u1,u2)
def weight(self,u1,u2):
if u1 in self.followees and u2 in self.followees[u1]:
return self.followees[u1][u2]
else:
return 0
# def trustSize(self):
# return self.trustMatrix.size
def getFollowers(self,u):
if u in self.followers:
return self.followers[u]
else:
return {}
def getFollowees(self,u):
if u in self.followees:
return self.followees[u]
else:
return {}
def hasFollowee(self,u1,u2):
if u1 in self.followees:
if u2 in self.followees[u1]:
return True
else:
return False
return False
def hasFollower(self,u1,u2):
if u1 in self.followers:
if u2 in self.followers[u1]:
return True
else:
return False
return False
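
# --- Construction sketch (added; not part of the original SDLib code) ---
# RatingDAO expects the already-split user -> {item: rating} dictionaries; a
# minimal, never-called example using the helpers defined earlier:
def _example_build_dao():
    conf = Config('FAP.conf')                              # written in a cell above
    data = FileIO.loadDataSet(conf, conf['ratings'])
    training, test = DataSplit.dataSplit(data, test_ratio=0.2)
    return RatingDAO(conf, training, test)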
|
_____no_output_____
|
Apache-2.0
|
docs/T006054_SDLib.ipynb
|
sparsh-ai/recsys-attacks
|
Methods BayesDetector
|
#BayesDetector: Collaborative Shilling Detection Bridging Factorization and User Embedding
class BayesDetector(SDetection):
def __init__(self, conf, trainingSet=None, testSet=None, labels=None, fold='[1]'):
super(BayesDetector, self).__init__(conf, trainingSet, testSet, labels, fold)
def readConfiguration(self):
super(BayesDetector, self).readConfiguration()
extraSettings = LineConfig(self.config['BayesDetector'])
self.k = int(extraSettings['-k'])
self.negCount = int(extraSettings['-negCount']) # the number of negative samples
if self.negCount < 1:
self.negCount = 1
self.regR = float(extraSettings['-gamma'])
self.filter = int(extraSettings['-filter'])
self.delta = float(extraSettings['-delta'])
learningRate = LineConfig(self.config['learnRate'])
self.lRate = float(learningRate['-init'])
self.maxLRate = float(learningRate['-max'])
self.maxIter = int(self.config['num.max.iter'])
regular = LineConfig(self.config['reg.lambda'])
self.regU, self.regI = float(regular['-u']), float(regular['-i'])
# self.delta = float(self.config['delta'])
def printAlgorConfig(self):
super(BayesDetector, self).printAlgorConfig()
print('k: %d' % self.negCount)
print('regR: %.5f' % self.regR)
print('filter: %d' % self.filter)
print('=' * 80)
def initModel(self):
super(BayesDetector, self).initModel()
# self.c = np.random.rand(len(self.dao.all_User) + 1) / 20 # bias value of context
self.G = np.random.rand(len(self.dao.all_User)+1, self.k) / 100 # context embedding
self.P = np.random.rand(len(self.dao.all_User)+1, self.k) / 100 # latent user matrix
self.Q = np.random.rand(len(self.dao.all_Item)+1, self.k) / 100 # latent item matrix
# constructing SPPMI matrix
self.SPPMI = defaultdict(dict)
D = len(self.dao.user)
print('Constructing SPPMI matrix...')
        # for larger datasets with many items, this process can be time-consuming
occurrence = defaultdict(dict)
for user1 in self.dao.all_User:
iList1, rList1 = self.dao.allUserRated(user1)
if len(iList1) < self.filter:
continue
for user2 in self.dao.all_User:
if user1 == user2:
continue
if user2 not in occurrence[user1]:
iList2, rList2 = self.dao.allUserRated(user2)
if len(iList2) < self.filter:
continue
count = len(set(iList1).intersection(set(iList2)))
if count > self.filter:
occurrence[user1][user2] = count
occurrence[user2][user1] = count
maxVal = 0
frequency = {}
for user1 in occurrence:
frequency[user1] = sum(occurrence[user1].values()) * 1.0
D = sum(frequency.values()) * 1.0
# maxx = -1
for user1 in occurrence:
for user2 in occurrence[user1]:
try:
val = max([log(occurrence[user1][user2] * D / (frequency[user1] * frequency[user2]), 2) - log(
self.negCount, 2), 0])
except ValueError:
print(self.SPPMI[user1][user2])
print(self.SPPMI[user1][user2] * D / (frequency[user1] * frequency[user2]))
if val > 0:
if maxVal < val:
maxVal = val
self.SPPMI[user1][user2] = val
self.SPPMI[user2][user1] = self.SPPMI[user1][user2]
# normalize
for user1 in self.SPPMI:
for user2 in self.SPPMI[user1]:
self.SPPMI[user1][user2] = self.SPPMI[user1][user2] / maxVal
def buildModel(self):
self.dao.ratings = dict(self.dao.trainingSet_u, **self.dao.testSet_u)
#suspicous set
print('Preparing sets...')
self.sSet = defaultdict(dict)
#normal set
self.nSet = defaultdict(dict)
# self.NegativeSet = defaultdict(list)
for user in self.dao.user:
for item in self.dao.ratings[user]:
# if self.dao.ratings[user][item] >= 5 and self.labels[user]=='1':
if self.labels[user] =='1':
self.sSet[item][user] = 1
# if self.dao.ratings[user][item] >= 5 and self.labels[user] == '0':
if self.labels[user] == '0':
self.nSet[item][user] = 1
# Jointly decompose R(ratings) and SPPMI with shared user latent factors P
iteration = 0
while iteration < self.maxIter:
self.loss = 0
for item in self.sSet:
i = self.dao.all_Item[item]
if item not in self.nSet:
continue
normalUserList = list(self.nSet[item].keys())
for user in self.sSet[item]:
su = self.dao.all_User[user]
# if len(self.NegativeSet[user]) > 0:
# item_j = choice(self.NegativeSet[user])
# else:
normalUser = choice(normalUserList)
nu = self.dao.all_User[normalUser]
s = sigmoid(self.P[su].dot(self.Q[i]) - self.P[nu].dot(self.Q[i]))
self.Q[i] += (self.lRate * (1 - s) * (self.P[su] - self.P[nu]))
self.P[su] += (self.lRate * (1 - s) * self.Q[i])
self.P[nu] -= (self.lRate * (1 - s) * self.Q[i])
self.Q[i] -= self.lRate * self.regI * self.Q[i]
self.P[su] -= self.lRate * self.regU * self.P[su]
self.P[nu] -= self.lRate * self.regU * self.P[nu]
self.loss += (-log(s))
#
# for item in self.sSet:
# if not self.nSet.has_key(item):
# continue
# for user1 in self.sSet[item]:
# for user2 in self.sSet[item]:
# su1 = self.dao.all_User[user1]
# su2 = self.dao.all_User[user2]
# self.P[su1] += (self.lRate*(self.P[su1]-self.P[su2]))*self.delta
# self.P[su2] -= (self.lRate*(self.P[su1]-self.P[su2]))*self.delta
#
# self.loss += ((self.P[su1]-self.P[su2]).dot(self.P[su1]-self.P[su2]))*self.delta
for user in self.dao.ratings:
for item in self.dao.ratings[user]:
rating = self.dao.ratings[user][item]
if rating < 5:
continue
error = rating - self.predictRating(user,item)
u = self.dao.all_User[user]
i = self.dao.all_Item[item]
p = self.P[u]
q = self.Q[i]
# self.loss += (error ** 2)*self.b
# update latent vectors
self.P[u] += (self.lRate * (error * q - self.regU * p))
self.Q[i] += (self.lRate * (error * p - self.regI * q))
for user in self.SPPMI:
u = self.dao.all_User[user]
p = self.P[u]
for context in self.SPPMI[user]:
v = self.dao.all_User[context]
m = self.SPPMI[user][context]
g = self.G[v]
diff = (m - p.dot(g))
self.loss += (diff ** 2)
# update latent vectors
self.P[u] += (self.lRate * diff * g)
self.G[v] += (self.lRate * diff * p)
self.loss += self.regU * (self.P * self.P).sum() + self.regI * (self.Q * self.Q).sum() + self.regR * (self.G * self.G).sum()
iteration += 1
print('iteration:',iteration)
# preparing examples
self.training = []
self.trainingLabels = []
self.test = []
self.testLabels = []
for user in self.dao.trainingSet_u:
self.training.append(self.P[self.dao.all_User[user]])
self.trainingLabels.append(self.labels[user])
for user in self.dao.testSet_u:
self.test.append(self.P[self.dao.all_User[user]])
self.testLabels.append(self.labels[user])
#
# tsne = TSNE(n_components=2)
# self.Y = tsne.fit_transform(self.P)
#
# self.normalUsers = []
# self.spammers = []
# for user in self.labels:
# if self.labels[user] == '0':
# self.normalUsers.append(user)
# else:
# self.spammers.append(user)
#
#
# print len(self.spammers)
# self.normalfeature = np.zeros((len(self.normalUsers), 2))
# self.spamfeature = np.zeros((len(self.spammers), 2))
# normal_index = 0
# for normaluser in self.normalUsers:
# if normaluser in self.dao.all_User:
# self.normalfeature[normal_index] = self.Y[self.dao.all_User[normaluser]]
# normal_index += 1
#
# spam_index = 0
# for spamuser in self.spammers:
# if spamuser in self.dao.all_User:
# self.spamfeature[spam_index] = self.Y[self.dao.all_User[spamuser]]
# spam_index += 1
# self.randomNormal = np.zeros((500,2))
# self.randomSpam = np.zeros((500,2))
# # for i in range(500):
# # self.randomNormal[i] = self.normalfeature[random.randint(0,len(self.normalfeature)-1)]
# # self.randomSpam[i] = self.spamfeature[random.randint(0,len(self.spamfeature)-1)]
# plt.scatter(self.normalfeature[:, 0], self.normalfeature[:, 1], c='red',s=8,marker='o',label='NormalUser')
# plt.scatter(self.spamfeature[:, 0], self.spamfeature[:, 1], c='blue',s=8,marker='o',label='Spammer')
# plt.legend(loc='lower left')
# plt.xticks([])
# plt.yticks([])
# plt.savefig('9.png',dpi=500)
def predictRating(self,user,item):
u = self.dao.all_User[user]
i = self.dao.all_Item[item]
return self.P[u].dot(self.Q[i])
def predict(self):
classifier = RandomForestClassifier(n_estimators=12)
# classifier = DecisionTreeClassifier(criterion='entropy')
classifier.fit(self.training, self.trainingLabels)
pred_labels = classifier.predict(self.test)
        print('Random Forest:')
return pred_labels
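
# Worked example of an SPPMI entry (added comment, illustrative numbers): with a
# co-rating count of 50, total co-occurrence mass D = 100000, user frequencies
# 100 and 120, and negCount = 256, the value is
#   max(log2(50*100000 / (100*120)) - log2(256), 0) = max(8.70 - 8, 0) ≈ 0.70,
# and all entries are finally divided by the largest one so they lie in [0, 1].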
|
_____no_output_____
|
Apache-2.0
|
docs/T006054_SDLib.ipynb
|
sparsh-ai/recsys-attacks
|
CoDetector
|
#CoDetector: Collaborative Shilling Detection Bridging Factorization and User Embedding
class CoDetector(SDetection):
def __init__(self, conf, trainingSet=None, testSet=None, labels=None, fold='[1]'):
super(CoDetector, self).__init__(conf, trainingSet, testSet, labels, fold)
def readConfiguration(self):
super(CoDetector, self).readConfiguration()
extraSettings = LineConfig(self.config['CoDetector'])
self.k = int(extraSettings['-k'])
self.negCount = int(extraSettings['-negCount']) # the number of negative samples
if self.negCount < 1:
self.negCount = 1
self.regR = float(extraSettings['-gamma'])
self.filter = int(extraSettings['-filter'])
learningRate = LineConfig(self.config['learnRate'])
self.lRate = float(learningRate['-init'])
self.maxLRate = float(learningRate['-max'])
self.maxIter = int(self.config['num.max.iter'])
regular = LineConfig(self.config['reg.lambda'])
self.regU, self.regI = float(regular['-u']), float(regular['-i'])
def printAlgorConfig(self):
super(CoDetector, self).printAlgorConfig()
print('k: %d' % self.negCount)
print('regR: %.5f' % self.regR)
print('filter: %d' % self.filter)
print('=' * 80)
def initModel(self):
super(CoDetector, self).initModel()
self.w = np.random.rand(len(self.dao.all_User)+1) / 20 # bias value of user
self.c = np.random.rand(len(self.dao.all_User)+1)/ 20 # bias value of context
self.G = np.random.rand(len(self.dao.all_User)+1, self.k) / 20 # context embedding
self.P = np.random.rand(len(self.dao.all_User)+1, self.k) / 20 # latent user matrix
self.Q = np.random.rand(len(self.dao.all_Item)+1, self.k) / 20 # latent item matrix
# constructing SPPMI matrix
self.SPPMI = defaultdict(dict)
D = len(self.dao.user)
print('Constructing SPPMI matrix...')
        # for larger datasets with many items, this process can be time-consuming
occurrence = defaultdict(dict)
for user1 in self.dao.all_User:
iList1, rList1 = self.dao.allUserRated(user1)
if len(iList1) < self.filter:
continue
for user2 in self.dao.all_User:
if user1 == user2:
continue
if user2 not in occurrence[user1]:
iList2, rList2 = self.dao.allUserRated(user2)
if len(iList2) < self.filter:
continue
count = len(set(iList1).intersection(set(iList2)))
if count > self.filter:
occurrence[user1][user2] = count
occurrence[user2][user1] = count
maxVal = 0
frequency = {}
for user1 in occurrence:
frequency[user1] = sum(occurrence[user1].values()) * 1.0
D = sum(frequency.values()) * 1.0
# maxx = -1
for user1 in occurrence:
for user2 in occurrence[user1]:
try:
val = max([log(occurrence[user1][user2] * D / (frequency[user1] * frequency[user2]), 2) - log(
self.negCount, 2), 0])
except ValueError:
print(self.SPPMI[user1][user2])
print(self.SPPMI[user1][user2] * D / (frequency[user1] * frequency[user2]))
if val > 0:
if maxVal < val:
maxVal = val
self.SPPMI[user1][user2] = val
self.SPPMI[user2][user1] = self.SPPMI[user1][user2]
# normalize
for user1 in self.SPPMI:
for user2 in self.SPPMI[user1]:
self.SPPMI[user1][user2] = self.SPPMI[user1][user2] / maxVal
def buildModel(self):
# Jointly decompose R(ratings) and SPPMI with shared user latent factors P
iteration = 0
while iteration < self.maxIter:
self.loss = 0
self.dao.ratings = dict(self.dao.trainingSet_u, **self.dao.testSet_u)
for user in self.dao.ratings:
for item in self.dao.ratings[user]:
rating = self.dao.ratings[user][item]
error = rating - self.predictRating(user,item)
u = self.dao.all_User[user]
i = self.dao.all_Item[item]
p = self.P[u]
q = self.Q[i]
self.loss += error ** 2
# update latent vectors
self.P[u] += self.lRate * (error * q - self.regU * p)
self.Q[i] += self.lRate * (error * p - self.regI * q)
for user in self.SPPMI:
u = self.dao.all_User[user]
p = self.P[u]
for context in self.SPPMI[user]:
v = self.dao.all_User[context]
m = self.SPPMI[user][context]
g = self.G[v]
diff = (m - p.dot(g) - self.w[u] - self.c[v])
self.loss += diff ** 2
# update latent vectors
self.P[u] += self.lRate * diff * g
self.G[v] += self.lRate * diff * p
self.w[u] += self.lRate * diff
self.c[v] += self.lRate * diff
self.loss += self.regU * (self.P * self.P).sum() + self.regI * (self.Q * self.Q).sum() + self.regR * (self.G * self.G).sum()
iteration += 1
print('iteration:',iteration)
# preparing examples
self.training = []
self.trainingLabels = []
self.test = []
self.testLabels = []
for user in self.dao.trainingSet_u:
self.training.append(self.P[self.dao.all_User[user]])
self.trainingLabels.append(self.labels[user])
for user in self.dao.testSet_u:
self.test.append(self.P[self.dao.all_User[user]])
self.testLabels.append(self.labels[user])
def predictRating(self,user,item):
u = self.dao.all_User[user]
i = self.dao.all_Item[item]
return self.P[u].dot(self.Q[i])
def predict(self):
classifier = DecisionTreeClassifier(criterion='entropy')
classifier.fit(self.training, self.trainingLabels)
pred_labels = classifier.predict(self.test)
print('Decision Tree:')
return pred_labels
|
_____no_output_____
|
Apache-2.0
|
docs/T006054_SDLib.ipynb
|
sparsh-ai/recsys-attacks
|
DegreeSAD
|
class DegreeSAD(SDetection):
def __init__(self, conf, trainingSet=None, testSet=None, labels=None, fold='[1]'):
super(DegreeSAD, self).__init__(conf, trainingSet, testSet, labels, fold)
def buildModel(self):
self.MUD = {}
self.RUD = {}
self.QUD = {}
# computing MUD,RUD,QUD for training set
sList = sorted(iter(self.dao.trainingSet_i.items()), key=lambda d: len(d[1]), reverse=True)
maxLength = len(sList[0][1])
for user in self.dao.trainingSet_u:
self.MUD[user] = 0
for item in self.dao.trainingSet_u[user]:
self.MUD[user] += len(self.dao.trainingSet_i[item]) #/ float(maxLength)
            self.MUD[user] /= float(len(self.dao.trainingSet_u[user]))
lengthList = [len(self.dao.trainingSet_i[item]) for item in self.dao.trainingSet_u[user]]
lengthList.sort(reverse=True)
self.RUD[user] = lengthList[0] - lengthList[-1]
lengthList = [len(self.dao.trainingSet_i[item]) for item in self.dao.trainingSet_u[user]]
lengthList.sort()
self.QUD[user] = lengthList[int((len(lengthList) - 1) / 4.0)]
# computing MUD,RUD,QUD for test set
for user in self.dao.testSet_u:
self.MUD[user] = 0
for item in self.dao.testSet_u[user]:
                self.MUD[user] += len(self.dao.trainingSet_i[item]) #/ float(maxLength)
            self.MUD[user] /= float(len(self.dao.testSet_u[user]))
for user in self.dao.testSet_u:
lengthList = [len(self.dao.trainingSet_i[item]) for item in self.dao.testSet_u[user]]
lengthList.sort(reverse=True)
self.RUD[user] = lengthList[0] - lengthList[-1]
for user in self.dao.testSet_u:
lengthList = [len(self.dao.trainingSet_i[item]) for item in self.dao.testSet_u[user]]
lengthList.sort()
self.QUD[user] = lengthList[int((len(lengthList) - 1) / 4.0)]
# preparing examples
for user in self.dao.trainingSet_u:
self.training.append([self.MUD[user], self.RUD[user], self.QUD[user]])
self.trainingLabels.append(self.labels[user])
for user in self.dao.testSet_u:
self.test.append([self.MUD[user], self.RUD[user], self.QUD[user]])
self.testLabels.append(self.labels[user])
def predict(self):
# classifier = LogisticRegression()
# classifier.fit(self.training, self.trainingLabels)
# pred_labels = classifier.predict(self.test)
# print 'Logistic:'
# print classification_report(self.testLabels, pred_labels)
#
# classifier = SVC()
# classifier.fit(self.training, self.trainingLabels)
# pred_labels = classifier.predict(self.test)
# print 'SVM:'
# print classification_report(self.testLabels, pred_labels)
classifier = DecisionTreeClassifier(criterion='entropy')
classifier.fit(self.training, self.trainingLabels)
pred_labels = classifier.predict(self.test)
print('Decision Tree:')
return pred_labels
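
# Worked example (added comment): if the items a user rated have popularities
# (number of ratings per item) [120, 40, 8, 3], then MUD = (120+40+8+3)/4 = 42.75,
# RUD = 120 - 3 = 117 and QUD = the lower-quartile element of the sorted list = 3;
# these three degree-based features feed the decision tree classifier.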
|
_____no_output_____
|
Apache-2.0
|
docs/T006054_SDLib.ipynb
|
sparsh-ai/recsys-attacks
|
FAP
|
class FAP(SDetection):
def __init__(self, conf, trainingSet=None, testSet=None, labels=None, fold='[1]'):
super(FAP, self).__init__(conf, trainingSet, testSet, labels, fold)
def readConfiguration(self):
super(FAP, self).readConfiguration()
        # s is the number of seed users regarded as spammers during training
self.s =int( self.config['seedUser'])
# preserve the real spammer ID
self.spammer = []
for i in self.dao.user:
if self.labels[i] == '1':
self.spammer.append(self.dao.user[i])
sThreshold = int(0.5 * len(self.spammer))
if self.s > sThreshold :
self.s = sThreshold
print('*** seedUser is more than a half of spammer, so it is set to', sThreshold, '***')
        # predict the top-k users as spammers
self.k = int(self.config['topKSpam'])
        # 0.5 is the assumed ratio of spammers in the dataset; adjust it for other datasets
kThreshold = int(0.5 * (len(self.dao.user) - self.s))
if self.k > kThreshold:
self.k = kThreshold
print('*** the number of top-K users is more than threshold value, so it is set to', kThreshold, '***')
    # build the transition probability matrices self.TPUI and self.TPIU
def __computeTProbability(self):
# m--user count; n--item count
m, n, tmp = self.dao.trainingSize()
self.TPUI = np.zeros((m, n))
self.TPIU = np.zeros((n, m))
self.userUserIdDic = {}
self.itemItemIdDic = {}
tmpUser = list(self.dao.user.values())
tmpUserId = list(self.dao.user.keys())
tmpItem = list(self.dao.item.values())
tmpItemId = list(self.dao.item.keys())
for users in range(0, m):
self.userUserIdDic[tmpUser[users]] = tmpUserId[users]
for items in range(0, n):
self.itemItemIdDic[tmpItem[items]] = tmpItemId[items]
for i in range(0, m):
for j in range(0, n):
user = self.userUserIdDic[i]
item = self.itemItemIdDic[j]
                # if the edge exists in the bipartite graph, use its weight; otherwise leave the entry 0
if (user not in self.bipartiteGraphUI) or (item not in self.bipartiteGraphUI[user]):
continue
else:
w = float(self.bipartiteGraphUI[user][item])
                    # to avoid positive feedback and reliability problems, the weight w is polished
otherItemW = 0
otherUserW = 0
for otherItem in self.bipartiteGraphUI[user]:
otherItemW += float(self.bipartiteGraphUI[user][otherItem])
for otherUser in self.dao.trainingSet_i[item]:
otherUserW += float(self.bipartiteGraphUI[otherUser][item])
# wPrime = w*1.0/(otherUserW * otherItemW)
wPrime = w
self.TPUI[i][j] = wPrime / otherItemW
self.TPIU[j][i] = wPrime / otherUserW
if i % 100 == 0:
print('progress: %d/%d' %(i,m))
def initModel(self):
# construction of the bipartite graph
print("constructing bipartite graph...")
self.bipartiteGraphUI = {}
for user in self.dao.trainingSet_u:
tmpUserItemDic = {} # user-item-point
for item in self.dao.trainingSet_u[user]:
# tmpItemUserDic = {}#item-user-point
recordValue = float(self.dao.trainingSet_u[user][item])
w = 1 + abs((recordValue - self.dao.userMeans[user]) / self.dao.userMeans[user]) + abs(
(recordValue - self.dao.itemMeans[item]) / self.dao.itemMeans[item]) + abs(
(recordValue - self.dao.globalMean) / self.dao.globalMean)
# tmpItemUserDic[user] = w
tmpUserItemDic[item] = w
# self.bipartiteGraphIU[item] = tmpItemUserDic
self.bipartiteGraphUI[user] = tmpUserItemDic
# we do the polish in computing the transition probability
print("computing transition probability...")
self.__computeTProbability()
def isConvergence(self, PUser, PUserOld):
if len(PUserOld) == 0:
return True
for i in range(0, len(PUser)):
if (PUser[i] - PUserOld[i]) > 0.01:
return True
return False
def buildModel(self):
# -------init--------
m, n, tmp = self.dao.trainingSize()
PUser = np.zeros(m)
PItem = np.zeros(n)
self.testLabels = [0 for i in range(m)]
self.predLabels = [0 for i in range(m)]
# preserve seedUser Index
self.seedUser = []
randDict = {}
for i in range(0, self.s):
randNum = random.randint(0, len(self.spammer) - 1)
while randNum in randDict:
randNum = random.randint(0, len(self.spammer) - 1)
randDict[randNum] = 0
self.seedUser.append(int(self.spammer[randNum]))
# print len(randDict), randDict
#initial user and item spam probability
for j in range(0, m):
if j in self.seedUser:
#print type(j),j
PUser[j] = 1
else:
PUser[j] = random.random()
for tmp in range(0, n):
PItem[tmp] = random.random()
# -------iterator-------
PUserOld = []
iterator = 0
while self.isConvergence(PUser, PUserOld):
#while iterator < 100:
for j in self.seedUser:
PUser[j] = 1
PUserOld = PUser
PItem = np.dot(self.TPIU, PUser)
PUser = np.dot(self.TPUI, PItem)
iterator += 1
print(self.foldInfo,'iteration', iterator)
PUserDict = {}
userId = 0
for i in PUser:
PUserDict[userId] = i
userId += 1
for j in self.seedUser:
del PUserDict[j]
self.PSort = sorted(iter(PUserDict.items()), key=lambda d: d[1], reverse=True)
def predict(self):
# predLabels
# top-k user as spammer
spamList = []
sIndex = 0
while sIndex < self.k:
spam = self.PSort[sIndex][0]
spamList.append(spam)
self.predLabels[spam] = 1
sIndex += 1
# trueLabels
for user in self.dao.trainingSet_u:
userInd = self.dao.user[user]
# print type(user), user, userInd
self.testLabels[userInd] = int(self.labels[user])
# delete seedUser labels
differ = 0
for user in self.seedUser:
user = int(user - differ)
# print type(user)
del self.predLabels[user]
del self.testLabels[user]
differ += 1
return self.predLabels
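
# Iteration sketch (added comment): FAP propagates spam probabilities over the
# user-item bipartite graph. With the seed spammers pinned to probability 1,
# each round computes PItem = TPIU . PUser and then PUser = TPUI . PItem, and
# stops once no user probability grows by more than 0.01; the top-k non-seed
# users by final probability are reported as spammers.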
|
_____no_output_____
|
Apache-2.0
|
docs/T006054_SDLib.ipynb
|
sparsh-ai/recsys-attacks
|
PCASelectUsers
|
class PCASelectUsers(SDetection):
def __init__(self, conf, trainingSet=None, testSet=None, labels=None, fold='[1]', k=None, n=None ):
super(PCASelectUsers, self).__init__(conf, trainingSet, testSet, labels, fold)
def readConfiguration(self):
super(PCASelectUsers, self).readConfiguration()
# K = top-K vals of cov
self.k = int(self.config['kVals'])
self.userNum = len(self.dao.trainingSet_u)
self.itemNum = len(self.dao.trainingSet_i)
if self.k >= min(self.userNum, self.itemNum):
self.k = 3
print('*** k-vals is more than the number of user or item, so it is set to', self.k)
# n = attack size or the ratio of spammers to normal users
self.n = float(self.config['attackSize'])
def buildModel(self):
#array initialization
dataArray = np.zeros([self.userNum, self.itemNum], dtype=float)
self.testLabels = np.zeros(self.userNum)
self.predLabels = np.zeros(self.userNum)
#add data
print('construct matrix')
for user in self.dao.trainingSet_u:
for item in list(self.dao.trainingSet_u[user].keys()):
value = self.dao.trainingSet_u[user][item]
a = self.dao.user[user]
b = self.dao.item[item]
dataArray[a][b] = value
sMatrix = csr_matrix(dataArray)
# z-scores
sMatrix = preprocessing.scale(sMatrix, axis=0, with_mean=False)
sMT = np.transpose(sMatrix)
# cov
covSM = np.dot(sMT, sMatrix)
# eigen-value-decomposition
vals, vecs = scipy.sparse.linalg.eigs(covSM, k=self.k, which='LM')
newArray = np.dot(dataArray**2, np.real(vecs))
distanceDict = {}
userId = 0
for user in newArray:
distance = 0
for tmp in user:
distance += tmp
distanceDict[userId] = float(distance)
userId += 1
print('sort distance ')
self.disSort = sorted(iter(distanceDict.items()), key=lambda d: d[1], reverse=False)
def predict(self):
print('predict spammer')
spamList = []
i = 0
while i < self.n * len(self.disSort):
spam = self.disSort[i][0]
spamList.append(spam)
self.predLabels[spam] = 1
i += 1
# trueLabels
for user in self.dao.trainingSet_u:
userInd = self.dao.user[user]
self.testLabels[userInd] = int(self.labels[user])
return self.predLabels
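
# Pipeline note (added comment): the rating matrix is z-scored per item, the
# top-k eigenvectors of its covariance are extracted, each user is scored by
# summing the projection of the squared rating row onto those directions, and
# the attackSize * |users| lowest-scoring users are predicted to be spammers.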
|
_____no_output_____
|
Apache-2.0
|
docs/T006054_SDLib.ipynb
|
sparsh-ai/recsys-attacks
|
SemiSAD
|
class SemiSAD(SDetection):
def __init__(self, conf, trainingSet=None, testSet=None, labels=None, fold='[1]'):
super(SemiSAD, self).__init__(conf, trainingSet, testSet, labels, fold)
def readConfiguration(self):
super(SemiSAD, self).readConfiguration()
# K = top-K vals of cov
self.k = int(self.config['topK'])
# Lambda = the λ weighting parameter
self.Lambda = float(self.config['Lambda'])
def buildModel(self):
self.H = {}
self.DegSim = {}
self.LengVar = {}
self.RDMA = {}
self.FMTD = {}
print('Begin feature engineering...')
# computing H,DegSim,LengVar,RDMA,FMTD for LabledData set
trainingIndex = 0
testIndex = 0
trainingUserCount, trainingItemCount, trainingrecordCount = self.dao.trainingSize()
testUserCount, testItemCount, testrecordCount = self.dao.testSize()
for user in self.dao.trainingSet_u:
trainingIndex += 1
self.H[user] = 0
for i in range(10,50,5):
n = 0
for item in self.dao.trainingSet_u[user]:
if(self.dao.trainingSet_u[user][item]==(i/10.0)):
n+=1
if n==0:
self.H[user] += 0
else:
self.H[user] += (-(n/(trainingUserCount*1.0))*math.log(n/(trainingUserCount*1.0),2))
SimList = []
self.DegSim[user] = 0
for user1 in self.dao.trainingSet_u:
userA, userB, C, D, E, Count = 0,0,0,0,0,0
for item in list(set(self.dao.trainingSet_u[user]).intersection(set(self.dao.trainingSet_u[user1]))):
userA += self.dao.trainingSet_u[user][item]
userB += self.dao.trainingSet_u[user1][item]
Count += 1
if Count==0:
AverageA = 0
AverageB = 0
else:
AverageA = userA/Count
AverageB = userB/Count
for item in list(set(self.dao.trainingSet_u[user]).intersection(set(self.dao.trainingSet_u[user1]))):
C += (self.dao.trainingSet_u[user][item]-AverageA)*(self.dao.trainingSet_u[user1][item]-AverageB)
D += np.square(self.dao.trainingSet_u[user][item]-AverageA)
E += np.square(self.dao.trainingSet_u[user1][item]-AverageB)
if C==0:
SimList.append(0.0)
else:
SimList.append(C/(math.sqrt(D)*math.sqrt(E)))
SimList.sort(reverse=True)
for i in range(1,self.k+1):
self.DegSim[user] += SimList[i] / (self.k)
GlobalAverage = 0
F = 0
for user2 in self.dao.trainingSet_u:
GlobalAverage += len(self.dao.trainingSet_u[user2]) / (len(self.dao.trainingSet_u) + 0.0)
for user3 in self.dao.trainingSet_u:
F += pow(len(self.dao.trainingSet_u[user3])-GlobalAverage,2)
self.LengVar[user] = abs(len(self.dao.trainingSet_u[user])-GlobalAverage)/(F*1.0)
Divisor = 0
for item1 in self.dao.trainingSet_u[user]:
Divisor += abs(self.dao.trainingSet_u[user][item1]-self.dao.itemMeans[item1])/len(self.dao.trainingSet_i[item1])
self.RDMA[user] = Divisor/len(self.dao.trainingSet_u[user])
Minuend, index1, Subtrahend, index2 = 0, 0, 0, 0
for item3 in self.dao.trainingSet_u[user]:
if(self.dao.trainingSet_u[user][item3]==5.0 or self.dao.trainingSet_u[user][item3]==1.0) :
Minuend += sum(self.dao.trainingSet_i[item3].values())
index1 += len(self.dao.trainingSet_i[item3])
else:
Subtrahend += sum(self.dao.trainingSet_i[item3].values())
index2 += len(self.dao.trainingSet_i[item3])
if index1 == 0 and index2 == 0:
self.FMTD[user] = 0
elif index1 == 0:
self.FMTD[user] = abs(Subtrahend / index2)
elif index2 == 0:
self.FMTD[user] = abs(Minuend / index1)
else:
self.FMTD[user] = abs(Minuend / index1 - Subtrahend / index2)
if trainingIndex==(trainingUserCount/5):
print('trainingData Done 20%...')
elif trainingIndex==(trainingUserCount/5*2):
print('trainingData Done 40%...')
elif trainingIndex==(trainingUserCount/5*3):
print('trainingData Done 60%...')
elif trainingIndex==(trainingUserCount/5*4):
print('trainingData Done 80%...')
elif trainingIndex==(trainingUserCount):
print('trainingData Done 100%...')
# computing H,DegSim,LengVar,RDMA,FMTD for UnLabledData set
for user in self.dao.testSet_u:
testIndex += 1
self.H[user] = 0
for i in range(10,50,5):
n = 0
for item in self.dao.testSet_u[user]:
if(self.dao.testSet_u[user][item]==(i/10.0)):
n+=1
if n==0:
self.H[user] += 0
else:
self.H[user] += (-(n/(testUserCount*1.0))*math.log(n/(testUserCount*1.0),2))
SimList = []
self.DegSim[user] = 0
for user1 in self.dao.testSet_u:
userA, userB, C, D, E, Count = 0,0,0,0,0,0
for item in list(set(self.dao.testSet_u[user]).intersection(set(self.dao.testSet_u[user1]))):
userA += self.dao.testSet_u[user][item]
userB += self.dao.testSet_u[user1][item]
Count += 1
if Count==0:
AverageA = 0
AverageB = 0
else:
AverageA = userA/Count
AverageB = userB/Count
for item in list(set(self.dao.testSet_u[user]).intersection(set(self.dao.testSet_u[user1]))):
C += (self.dao.testSet_u[user][item]-AverageA)*(self.dao.testSet_u[user1][item]-AverageB)
D += np.square(self.dao.testSet_u[user][item]-AverageA)
E += np.square(self.dao.testSet_u[user1][item]-AverageB)
if C==0:
SimList.append(0.0)
else:
SimList.append(C/(math.sqrt(D)*math.sqrt(E)))
SimList.sort(reverse=True)
for i in range(1,self.k+1):
self.DegSim[user] += SimList[i] / self.k
GlobalAverage = 0
F = 0
for user2 in self.dao.testSet_u:
GlobalAverage += len(self.dao.testSet_u[user2]) / (len(self.dao.testSet_u) + 0.0)
for user3 in self.dao.testSet_u:
F += pow(len(self.dao.testSet_u[user3])-GlobalAverage,2)
self.LengVar[user] = abs(len(self.dao.testSet_u[user])-GlobalAverage)/(F*1.0)
Divisor = 0
for item1 in self.dao.testSet_u[user]:
Divisor += abs(self.dao.testSet_u[user][item1]-self.dao.itemMeans[item1])/len(self.dao.testSet_i[item1])
self.RDMA[user] = Divisor/len(self.dao.testSet_u[user])
Minuend, index1, Subtrahend, index2= 0,0,0,0
for item3 in self.dao.testSet_u[user]:
if(self.dao.testSet_u[user][item3]==5.0 or self.dao.testSet_u[user][item3]==1.0):
Minuend += sum(self.dao.testSet_i[item3].values())
index1 += len(self.dao.testSet_i[item3])
else:
Subtrahend += sum(self.dao.testSet_i[item3].values())
index2 += len(self.dao.testSet_i[item3])
if index1 == 0 and index2 == 0:
self.FMTD[user] = 0
elif index1 == 0:
self.FMTD[user] = abs(Subtrahend / index2)
elif index2 == 0:
self.FMTD[user] = abs(Minuend / index1)
else:
self.FMTD[user] = abs(Minuend / index1 - Subtrahend / index2)
if testIndex == testUserCount / 5:
print('testData Done 20%...')
elif testIndex == testUserCount / 5 * 2:
print('testData Done 40%...')
elif testIndex == testUserCount / 5 * 3:
print('testData Done 60%...')
elif testIndex == testUserCount / 5 * 4:
print('testData Done 80%...')
elif testIndex == testUserCount:
print('testData Done 100%...')
# preparing examples training for LabledData ,test for UnLableData
for user in self.dao.trainingSet_u:
self.training.append([self.H[user], self.DegSim[user], self.LengVar[user],self.RDMA[user],self.FMTD[user]])
self.trainingLabels.append(self.labels[user])
for user in self.dao.testSet_u:
self.test.append([self.H[user], self.DegSim[user], self.LengVar[user],self.RDMA[user],self.FMTD[user]])
self.testLabels.append(self.labels[user])
def predict(self):
ClassifierN = 0
classifier = GaussianNB()
X_train,X_test,y_train,y_test = train_test_split(self.training,self.trainingLabels,test_size=0.75,random_state=33)
classifier.fit(X_train, y_train)
# predict UnLabledData
#pred_labelsForTrainingUn = classifier.predict(X_test)
print('Enhanced classifier...')
while 1:
if len(X_test)<=5: # min
break #min
proba_labelsForTrainingUn = classifier.predict_proba(X_test)
X_test_labels = np.hstack((X_test, proba_labelsForTrainingUn))
X_test_labels0_sort = sorted(X_test_labels,key=lambda x:x[5],reverse=True)
if X_test_labels0_sort[4][5]>X_test_labels0_sort[4][6]:
a = [x[:5] for x in X_test_labels0_sort]
b = a[0:5]
classifier.partial_fit(b, ['0','0','0','0','0'], classes=['0', '1'],sample_weight=np.ones(len(b), dtype=np.float) * self.Lambda)
X_test_labels = X_test_labels0_sort[5:]
X_test = a[5:]
if len(X_test)<6: # min
break #min
X_test_labels0_sort = sorted(X_test_labels, key=lambda x: x[5], reverse=True)
if X_test_labels0_sort[4][5]<=X_test_labels0_sort[4][6]: #min
a = [x[:5] for x in X_test_labels0_sort]
b = a[0:5]
classifier.partial_fit(b, ['1', '1', '1', '1', '1'], classes=['0', '1'],sample_weight=np.ones(len(b), dtype=np.float) * 1)
X_test_labels = X_test_labels0_sort[5:] # min
X_test = a[5:]
if len(X_test)<6:
break
# while 1 :
# p1 = pred_labelsForTrainingUn
# # fit the unlabeled data into the classifier using the λ sample weight
# classifier.partial_fit(X_test, pred_labelsForTrainingUn,classes=['0','1'], sample_weight=np.ones(len(X_test),dtype=np.float)*self.Lambda)
# pred_labelsForTrainingUn = classifier.predict(X_test)
# p2 = pred_labelsForTrainingUn
# # check whether the classifier has stabilized
# if list(p1)==list(p2) :
# ClassifierN += 1
# elif ClassifierN > 0:
# ClassifierN = 0
# if ClassifierN == 20:
# break
pred_labels = classifier.predict(self.test)
print('naive_bayes with EM algorithm:')
return pred_labels
|
_____no_output_____
|
Apache-2.0
|
docs/T006054_SDLib.ipynb
|
sparsh-ai/recsys-attacks
|
Main
|
class SDLib(object):
def __init__(self,config):
self.trainingData = [] # training data
self.testData = [] # testData
self.relation = []
self.measure = []
self.config =config
self.ratingConfig = LineConfig(config['ratings.setup'])
self.labels = FileIO.loadLabels(config['label'])
if self.config.contains('evaluation.setup'):
self.evaluation = LineConfig(config['evaluation.setup'])
if self.evaluation.contains('-testSet'):
#specify testSet
self.trainingData = FileIO.loadDataSet(config, config['ratings'])
self.testData = FileIO.loadDataSet(config, self.evaluation['-testSet'], bTest=True)
elif self.evaluation.contains('-ap'):
#auto partition
self.trainingData = FileIO.loadDataSet(config,config['ratings'])
self.trainingData,self.testData = DataSplit.\
dataSplit(self.trainingData,test_ratio=float(self.evaluation['-ap']))
elif self.evaluation.contains('-cv'):
#cross validation
self.trainingData = FileIO.loadDataSet(config, config['ratings'])
#self.trainingData,self.testData = DataSplit.crossValidation(self.trainingData,int(self.evaluation['-cv']))
else:
print('Evaluation is not well configured!')
exit(-1)
if config.contains('social'):
self.socialConfig = LineConfig(self.config['social.setup'])
self.relation = FileIO.loadRelationship(config,self.config['social'])
print('preprocessing...')
def execute(self):
if self.evaluation.contains('-cv'):
k = int(self.evaluation['-cv'])
if k <= 1 or k > 10:
k = 3
#create the manager used to communication in multiprocess
manager = Manager()
m = manager.dict()
i = 1
tasks = []
for train,test in DataSplit.crossValidation(self.trainingData,k):
fold = '['+str(i)+']'
if self.config.contains('social'):
method = self.config['methodName'] + "(self.config,train,test,self.labels,self.relation,fold)"
else:
method = self.config['methodName'] + "(self.config,train,test,self.labels,fold)"
#create the process
p = Process(target=run,args=(m,eval(method),i))
tasks.append(p)
i+=1
#start the processes
for p in tasks:
p.start()
#wait until all processes are completed
for p in tasks:
p.join()
#compute the mean error of k-fold cross validation
self.measure = [dict(m)[i] for i in range(1,k+1)]
res = []
pattern = re.compile('(\d+\.\d+)')
countPattern = re.compile('\d+\\n')
labelPattern = re.compile('\s\d{1}[^\.|\n|\d]')
labels = re.findall(labelPattern, self.measure[0])
values = np.array([0]*9,dtype=float)
count = np.array([0,0,0],dtype=int)
for report in self.measure:
patterns = np.array(re.findall(pattern,report),dtype=float)
values += patterns[:9]
patterncounts = np.array(re.findall(countPattern,report),dtype=int)
count += patterncounts[:3]
values/=k
values=np.around(values,decimals=4)
res.append(' precision recall f1-score support\n\n')
res.append(' '+labels[0]+' '+' '.join(np.array(values[0:3],dtype=str).tolist())+' '+str(count[0])+'\n')
res.append(' '+labels[1]+' '+' '.join(np.array(values[3:6],dtype=str).tolist())+' '+str(count[1])+'\n\n')
res.append(' avg/total ' + ' '.join(np.array(values[6:9], dtype=str).tolist()) + ' ' + str(count[2]) + '\n')
print('Total:')
print(''.join(res))
# for line in lines[1:]:
#
# measure = self.measure[0][i].split(':')[0]
# total = 0
# for j in range(k):
# total += float(self.measure[j][i].split(':')[1])
# res.append(measure+':'+str(total/k)+'\n')
#output result
currentTime = strftime("%Y-%m-%d %H-%M-%S", localtime(time()))
outDir = LineConfig(self.config['output.setup'])['-dir']
fileName = self.config['methodName'] +'@'+currentTime+'-'+str(k)+'-fold-cv' + '.txt'
FileIO.writeFile(outDir,fileName,res)
print('The results have been output to '+abspath(LineConfig(self.config['output.setup'])['-dir'])+'\n')
else:
if self.config.contains('social'):
method = self.config['methodName'] + '(self.config,self.trainingData,self.testData,self.labels,self.relation)'
else:
method = self.config['methodName'] + '(self.config,self.trainingData,self.testData,self.labels)'
eval(method).execute()
def run(measure,algor,order):
measure[order] = algor.execute()
conf = Config('DegreeSAD.conf')
sd = SDLib(conf)
sd.execute()
print('='*80)
print('Supervised Methods:')
print('1. DegreeSAD 2.CoDetector 3.BayesDetector\n')
print('Semi-Supervised Methods:')
print('4. SemiSAD\n')
print('Unsupervised Methods:')
print('5. PCASelectUsers 6. FAP 7.timeIndex\n')
print('-'*80)
order = int(input('please enter the num of the method to run it:'))
algor = -1
conf = -1
s = tm.time()
if order == 1:
conf = Config('DegreeSAD.conf')
elif order == 2:
conf = Config('CoDetector.conf')
elif order == 3:
conf = Config('BayesDetector.conf')
elif order == 4:
conf = Config('SemiSAD.conf')
elif order == 5:
conf = Config('PCASelectUsers.conf')
elif order == 6:
conf = Config('FAP.conf')
elif order == 7:
conf = Config('timeIndex.conf')
else:
print('Error num!')
exit(-1)
# conf = Config('DegreeSAD.conf')
sd = SDLib(conf)
sd.execute()
e = tm.time()
print("Run time: %f s" % (e - s))
|
================================================================================
Supervised Methods:
1. DegreeSAD 2.CoDetector 3.BayesDetector
Semi-Supervised Methods:
4. SemiSAD
Unsupervised Methods:
5. PCASelectUsers 6. FAP 7.timeIndex
--------------------------------------------------------------------------------
please enter the num of the method to run it:4
|
Apache-2.0
|
docs/T006054_SDLib.ipynb
|
sparsh-ai/recsys-attacks
|
Calculate the AMOC in density space: $VVEL*DZT*DXT\,(x,y,z)$ -> $VVEL*DZT*DXT\,(x,y,\sigma)$ -> $\sum_{x=W}^{E}$ -> $\sum_{\sigma=\sigma_{max/min}}^{\sigma}$
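For readers new to this transformation, here is a minimal, self-contained numpy sketch of the three steps (bin the vertical transport into density classes, sum zonally, then cumulate over density); all array names and sizes below are illustrative assumptions, not the variables used in this notebook.
```python
import numpy as np

# Illustrative shapes only: transport = VVEL*DZT*DXT on a (z, y, x) grid
nz, ny, nx, nsig = 5, 3, 4, 6
transport = np.random.rand(nz, ny, nx)                 # volume transport per cell
sigma = np.random.uniform(24.0, 28.0, (nz, ny, nx))    # potential density per cell
sigma_edges = np.linspace(24.0, 28.0, nsig + 1)        # density bin edges

# 1) rebin transport from depth to density classes -> (sigma, y, x)
T_sigma = np.zeros((nsig, ny, nx))
for k in range(nsig):
    in_bin = (sigma >= sigma_edges[k]) & (sigma < sigma_edges[k + 1])
    T_sigma[k] = np.where(in_bin, transport, 0.0).sum(axis=0)

# 2) zonal (west-to-east) sum, then 3) cumulative sum over density -> AMOC(y, sigma)
AMOC_y_sigma = np.cumsum(T_sigma.sum(axis=2), axis=0)
```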
|
import os
import sys
import xgcm
import numpy as np
import xarray as xr
import cmocean
import pop_tools
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
matplotlib.rc_file('rc_file_paper')
%config InlineBackend.print_figure_kwargs={'bbox_inches':None}
%load_ext autoreload
%autoreload 2
from MOC import calculate_AMOC_sigma_z
from tqdm import notebook
from paths import path_results, path_prace, file_RMASK_ocn, file_RMASK_ocn_low, file_ex_ocn_ctrl, file_ex_ocn_lpd, path_data
from FW_plots import Atl_lats
from timeseries import lowpass
from xhistogram.xarray import histogram
from xr_DataArrays import xr_DZ_xgcm
from xr_regression import xr_lintrend, xr_linear_trend, xr_2D_trends, ocn_field_regression
RAPIDz = xr.open_dataarray(f'{path_data}/RAPID_AMOC/moc_vertical.nc')
kwargs = dict(combine='nested', concat_dim='time', decode_times=False)
ds_ctrl = xr.open_mfdataset(f'{path_prace}/MOC/AMOC_sz_yz_ctrl_*.nc', **kwargs)
ds_rcp = xr.open_mfdataset(f'{path_prace}/MOC/AMOC_sz_yz_rcp_*.nc' , **kwargs)
ds_lpd = xr.open_mfdataset(f'{path_prace}/MOC/AMOC_sz_yz_lpd_*.nc' , **kwargs)
ds_lr1 = xr.open_mfdataset(f'{path_prace}/MOC/AMOC_sz_yz_lr1_*.nc' , **kwargs)
AMOC_ctrl = xr.open_dataarray(f'{path_results}/MOC/AMOC_max_ctrl.nc', decode_times=False)
AMOC_rcp = xr.open_dataarray(f'{path_results}/MOC/AMOC_max_rcp.nc' , decode_times=False)
AMOC_lpd = xr.open_dataarray(f'{path_results}/MOC/AMOC_max_lpd.nc' , decode_times=False)
AMOC_lr1 = xr.open_dataarray(f'{path_results}/MOC/AMOC_max_lr1.nc' , decode_times=False)
mycmap = cmocean.tools.crop_by_percent(cmocean.cm.curl, 100/3, which='min', N=None)
f = plt.figure(figsize=(6.4,5))
# profiles
ax = f.add_axes([.84,.55,.15,.4])
ax.set_title(r'26.5$\!^\circ\!$N')
ax.set_ylim((-6,0))
ax.set_yticklabels([])
ax.axvline(0, c='k', lw=.5)
ax.axhline(-1, c='k', lw=.5)
r, = ax.plot(RAPIDz.mean('time'), -RAPIDz.depth/1e3, c='k', label='RAPID')
RAPID_ctrl = ds_ctrl['AMOC(y,z)'].isel(nlat_u=1456).mean('time')
RAPID_lpd = ds_lpd ['AMOC(y,z)'].isel(nlat_u= 271).mean('time')
RAPID_rcp = 365*100*xr_linear_trend(ds_rcp['AMOC(y,z)'].isel(nlat_u=1456)).rename({'dim_0':'z_t'}).assign_coords(z_t=ds_rcp.z_t) + RAPID_ctrl
RAPID_lr1 = 365*100*xr_linear_trend(ds_lr1['AMOC(y,z)'].isel(nlat_u= 271)).rename({'dim_0':'z_t'}).assign_coords(z_t=ds_lr1.z_t) + RAPID_lpd
hc, = ax.plot(RAPID_ctrl, -ds_ctrl.z_t/1e5, c='k', ls='--', label='HR CTRL')
lc, = ax.plot(RAPID_lpd , -ds_lpd .z_t/1e5, c='k', ls=':' , label='LR CTRL')
hr, = ax.plot(RAPID_rcp , -ds_ctrl.z_t/1e5, c='k', ls='--', lw=.7, label='HR RCP')
lr, = ax.plot(RAPID_lr1 , -ds_lpd .z_t/1e5, c='k', ls=':' , lw=.7, label='LR RCP')
ax.text(.01,.92, '(c)', transform=ax.transAxes)
ax.set_xlabel('AMOC [Sv]')
ax.legend(handles=[r, hc, lc, hr, lr], fontsize=5, frameon=False, handlelength=2, loc='lower right')
for i, sim in enumerate(['HIGH', 'LOW']):
axt = f.add_axes([.1+i*.37,.55,.35,.4])
axb = f.add_axes([.1+i*.37,.09,.35,.35])
# psi
axt.set_title(['HR-CESM', 'LR-CESM'][i])
axt.set_ylim((-6,0))
axt.set_xlim((-34,60))
if i==0:
axt.set_ylabel('depth [km]')
axb.set_ylabel('AMOC at 26.5$\!^\circ\!$N, 1000 m')
else:
axt.set_yticklabels([])
axb.set_yticklabels([])
(ds_mean, ds_trend) = [(ds_ctrl, ds_rcp), (ds_lpd, ds_lr1)][i]
vmaxm = 25
vmaxt = 10
mean = ds_mean['AMOC(y,z)'].mean('time')
trend = xr_2D_trends(ds_trend['AMOC(y,z)']).rolling(nlat_u=[15,3][i]).mean()*100*365
Xm,Ym = np.meshgrid(Atl_lats(sim=sim), -1e-5*mean['z_t'].values)
Xt,Yt = np.meshgrid(Atl_lats(sim=sim), -1e-5*trend['z_t'].values)
im = axt.contourf(Xm, Ym, mean, cmap=mycmap, levels=np.arange(-8,25,1))
cs = axt.contour(Xm, Ym, trend, levels=np.arange(-12,3,1),
cmap='cmo.balance', vmin=-10, vmax=10, linewidths=.5)
axt.clabel(cs, np.arange(-12,3,2), fmt='%d', fontsize=7)
axt.text(.01,.92, '('+['a','b'][i]+')', transform=axt.transAxes)
axt.scatter(26.5,-1, color='w', marker='x')
axt.set_xlabel(r'latitude $\theta$ [$\!^{\!\circ}\!$N]')
# time series
AMOC_c = [AMOC_ctrl, AMOC_lpd][i]
AMOC_r = [AMOC_rcp , AMOC_lr1][i]
axb.set_xlabel('time [model years]')
axb.plot(AMOC_c.time/365, AMOC_c, c='C0', alpha=.3, lw=.5)
axb.plot(AMOC_c.time[60:-60]/365, lowpass(AMOC_c,120)[60:-60], c='C0', label='CTRL')
axb.plot(AMOC_r.time/365-[1800,1500][i], AMOC_r, c='C1', alpha=.3, lw=.5)
axb.plot(AMOC_r.time[60:-60]/365-[1800,1500][i], lowpass(AMOC_r,120)[60:-60], c='C1', label='RCP')
axb.plot(AMOC_r.time/365-[1800,1500][i], xr_lintrend(AMOC_r), c='grey', lw=.8, ls='--', label='RCP linear fit')
axb.text(25+[200,500][i], 5.8, f'{xr_linear_trend(AMOC_r).values*100*365:3.2f} Sv/100yr', color='grey', fontsize=7)
axb.set_ylim((4,29.5))
if i==0:
axb.legend(frameon=False, fontsize=8)
axb.set_xlim([(95,305), (395,605)][i])
axb.text(.01,.91, '('+['d','e'][i]+')', transform=axb.transAxes)
cax1 = f.add_axes([.88,.12,.02,.3])
f.colorbar(im, cax=cax1)
cax1.text(1,-.1,'[Sv]', ha='right', fontsize=7, transform=cax1.transAxes)
cax1.yaxis.set_ticks_position('left')
cax2 = f.add_axes([.92,.12,.02,.3])
f.colorbar(cs, cax=cax2)
cax2.text(-.1,-.1,'[Sv/100yr]', ha='left', fontsize=7, transform=cax2.transAxes)
# plt.savefig(f'{path_results}/FW-paper/Fig5', dpi=600)
|
_____no_output_____
|
BSD-3-Clause
|
src/Fig5.ipynb
|
AJueling/FW-code
|
A sample of running the vertical cylinders target through the pipeline, and visualizing it with Meshcat.
|
folder_name = "vert_cylinders"
rgb_filename = os.path.join("..", "src", "tests", "data", folder_name, "1.png")
camera_matrix_filename = os.path.join("..", "src", "tests", "data", folder_name, "camera_matrix.json")
pointcloud_filename = os.path.join("..", "src", "tests", "data", folder_name, "1.ply")
reference_mesh = meshes.VERTICAL_CYLINDERS
aligned_pointcloud, camera_angle = quality.align_pointcloud_to_reference(
reference_mesh, rgb_filename, camera_matrix_filename, pointcloud_filename, depth_scale=0.001)
# if you want to save the pointcloud to disk and load it in another visualizer
# quality.save_pointcloud(pointcloud_filename, "transformed", aligned_pointcloud)
cropped_pointcloud = quality.clip_pointcloud_to_pattern_area(
reference_mesh, aligned_pointcloud, depth_scale=0.001)
# if you want to save the pointcloud to disk and load it in another visualizer
# quality.save_pointcloud(pointcloud_filename, "cropped", cropped_pointcloud)
rmse, density = quality.calculate_rmse_and_density(
ground_truth_mesh=reference_mesh,
cropped_pointcloud=cropped_pointcloud,
depth_scale=0.001,
camera_angle=camera_angle)
print("RMSE = {}, density= {}".format(rmse, density))
print(camera_angle)
|
RMSE = 1.6844201070129325, density= 1.7608021498435062
[ 0.00533419 -0.11536905 0.99330838]
|
MIT
|
notebooks/alignment.ipynb
|
Code-128/depth-quality
|
We can use Meshcat to visualize our geometry directly in a Jupyter Notebook.
|
import meshcat
import meshcat.geometry as g
import meshcat.transformations as tfms
import numpy as np
vis = meshcat.Visualizer()
vis.jupyter_cell()
vis['reference'].set_object(g.ObjMeshGeometry.from_file(reference_mesh.path))
vis['reference'].set_transform(tfms.scale_matrix(0.001))
vis['transformed_cloud'].set_object(
g.PointCloud(np.asarray(aligned_pointcloud.points).T,
np.asarray(aligned_pointcloud.colors).T)
)
vis['cropped_cloud'].set_object(
g.PointCloud(np.asarray(cropped_pointcloud.points).T,
np.asarray(cropped_pointcloud.colors).T)
)
|
_____no_output_____
|
MIT
|
notebooks/alignment.ipynb
|
Code-128/depth-quality
|
ENGR 213 Project Demonstration: Toast Falling from Counter (Iteration AND Slipping of Toast)
This is a Jupyter notebook created to explore the utility of notebooks as an engineering/physics tool. As I consider integrating this material into physics and engineering courses I am having a hard time clarifying the outcomes that I seek for the students. It seems plausible that understanding what it would take to implement the 'Toast Project' in a way which satisfies me would be helpful to identify those skills and outcomes I hope for. I hope to do a good job of documentation as I go but intentions are quirky creatures and prone to change :)
Today's Learning: I've been working my way through this for a number of days (a couple of hours at a time). I just realized that it's getting very cumbersome to try to keep each upgrade to the model in the same notebook. I'm going to keep this notebook as an object lesson of how that can happen. In the meantime I'm going to rebuild this notebook into three new notebooks that keep each stage of the process independent. A very helpful discovery about my own workflow.....
Exporting this document to pdf: This has not behaved well so far for me. My current most successful strategy is to download the Jupyter notebook as an html file, open it in Firefox, and print the file, which gives me the option to save the html document as a pdf. This is not terrible but it's not as good as I would like. I also have had some luck with downloading as a .tex file and running it through TeXWorks, which complains a bit but ultimately gives pretty output. This may be my strategy.
The Problem: The basic problem is this. When toast teeters off the edge of a standard counter it seems to, remarkably consistently, land 'jelly side' down. I have read that this is no accident and that in fact the solution to the problem is to push the toast when you notice it going off the edge rather than try to stop it. I have done some loose experiments and find this to be empirically true. Now -- can I use basic dynamics and a numerical approach to explore this problem without getting caught up in the analytic solution documented in various places in AJP? In the previous notebook I modelled this process assuming that the angular acceleration would be changing as the toast tips over the edge. Experimentally, starting with a piece of toast 3/4 of the way off the edge and then releasing it, I observe that it rotates about $2\pi$ radians before it hits the floor (a little more perhaps). My basic iteration model predicted 8-ish radians, while the changing-acceleration model gave a result only a little over $2\pi$ radians, so it is definitely getting closer to my observations. It seems likely that there is a point at which the gravitational forces will make the toast begin to slip 'laterally'. This will increase the moment of inertia and the net torque in ways that are hard to predict intuitively. That is the reason to try and model this more complex process.
Code: The code starts the same way as the previous model. I will retain the comment block to keep things consistent. The following just sets up the plotting, `matplotlib`, and numerical processing, `numpy`, libraries for use by the kernel. As is apparently common they are given shorter aliases as `plt` and `np`. Here are the reference sites for [`matplotlib`](https://matplotlib.org/) and [`numpy`](http://www.numpy.org/). Note: The plt.rcParams calls are tools for making the plots larger along with the fonts that are used in the labeling. These settings seem to give acceptable plots.
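As a quick back-of-the-envelope check of the observation above, here is a small sketch (using the same 100 cm counter height and g = 981 cm/s² adopted later in this notebook) of the free-fall time and the average angular velocity the toast would need in order to complete one full rotation before hitting the floor:
```python
import numpy as np

h = 100.0          # counter height [cm] (matches counterheight below)
g = 981.0          # gravity [cm/s^2]
t_fall = np.sqrt(2 * h / g)            # free-fall time from rest
omega_needed = 2 * np.pi / t_fall      # average angular velocity for one full rotation
print("fall time: %.2f s, angular velocity for one full rotation: %.1f rad/s" % (t_fall, omega_needed))
```
This gives roughly 0.45 s and 14 rad/s, in line with the rough 12 rad/s estimate quoted later for a fall time closer to 0.5 s.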
|
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
plt.rcParams["figure.figsize"] = (20,10)
plt.rcParams.update({'font.size': 22})
import numpy as np
|
_____no_output_____
|
MIT
|
Toast2.2.ipynb
|
smithrockmaker/PH213
|
Defining constants
In this problem it seems prudent to allow for a variety of 'toast'-like objects of different materials and sizes. To that end I want to establish a set of constants that describe the particular setting I am considering. Note that I am working in cm and s for convenience, which may turn out to be a bad idea. These variables define the physical form of the toast. Modeling parameters will be solicited from the user, allowing a different way to change features.
* tlength (parallel to table edge - cm)
* twidth (perpendicular to table edge - cm)
* tthick (yup - cm)
* tdensity (in case it matters - gr/cm3)
* counterheight (in cm)
* gravity (cm/s2)
* anglimit (radians - generally set to $\pi/2$ but could be different)
In this model the toast will be slipping, which leads to some component of downward velocity as the toast leaves the edge of the table. This leads to a more complex calculation for the time to fall to the floor, which depends on the results of the model. That is why the floortime calculation from the previous models has been removed at this stage.
Calculations in this cell are constant regardless of other variations in the problem parameters.
|
tlength = 10.0
twidth = 10.0
tthick = 1.0
tdensity = 0.45
counterheight = 100.0
gravity = 981.0
anglimit = np.pi/2
lindensity = tdensity*tlength*tthick # linear density of toast
tmass = lindensity*twidth # mass of toast
tinertiacm = tmass*(twidth*twidth)/12.0 # moment of inertia around CM
tinertiamax = tmass*(twidth*twidth)/3.0 # max moment around edge of 'book'
# debug
# print ("Toast mass (gr): %.2f gr"% (tmass))
# print ("Inertia around CM: %.2f gr*cm^2" % (tinertiacm))
# print ("Max inertia around edge: %.2f gr*cm^2" % (tinertiamax))
|
_____no_output_____
|
MIT
|
Toast2.2.ipynb
|
smithrockmaker/PH213
|
Updated Freebody Diagram
Since I know from experiment that the toast rotates almost exactly a full $2\pi$ if I start it hanging 3/4 of its width over the edge, that would mean that the rotational velocity when the toast disconnects from the table is around 12 rad/s (since the fall time is about 0.5 s). The previous notebook and the plots therein suggest that the toast reaches that velocity when it has rotated a little less than 1 radian. Analysis of the previous model raises the question of how slipping of the toast might affect the model. This will lead to a more complex model for sure, so it seemed prudent to develop a more explicit freebody diagram using CAD software to keep variables clear. So, here's the freebody diagram with a host of 'new' labels that represent my current thinking.
Rerun from here!!
When I wish to rerun this model with different parameters this is where I start....
Set tstep and numit
To explore tools for interacting with the python code I am choosing to set the time step (tstep) and the maximum number of iterations (numit) as inputs from the user. This link from [`stackoverflow`](https://stackoverflow.com/questions/20449427/how-can-i-read-inputs-as-numbers) does the best job of explaining how to use the input() command in python 3.x, which is the version I am using. This hopefully explains the format of the input statements:
```python
tstep = float(input("Define time step in ms (ms)? "))
numit = int(input("How many iterations? "))
overhang = float(input("What is the initial overhang of the toast (% as in 1.0 = 100%)? "))
coeffric = float(input("What is the coefficient of friction? "))
```
The time step (tstep) can be fractions of ms if I want, while the number of iterations (numit) must conceptually be an integer (it doesn't make much sense to repeat a process 11.3 times in this context). The overhang is the initial overhang of the toast (no sliding across the table yet) and the coefficient of friction is needed to handle slipping of the toast.
|
# Solicit model parameters from user.....
tstep = float(input("Define time step in ms (ms)? "))
numit = int(input("How many iterations? "))
overhang = float(input("What is the initial overhang of the toast (% as in 1.0 = 100%)? "))
coeffric = float(input("What is the coefficient of friction? "))
print("Overhang is %.3f and the coefficient of friction is %.2f ."% (overhang, coeffric))
print("time step is %.2f ms and the number of iterations is %s."% (tstep, numit))
print("Rerun this cell to change these values and then rerun the calculations. ")
|
Define time step in ms (ms)? 3
How many iterations? 10
What is the initial overhang of the toast (% as in 1.0 = 100%)? 1
What is the coefficient of friction? .2
|
MIT
|
Toast2.2.ipynb
|
smithrockmaker/PH213
|
Set up variable arrays
Getting these arrays set up is a little bit of an iterative process itself. I set up all the arrays I think I need and invariably I find later that I need several others. Some of that process will be hidden, so I apologise. I started out with just a giant list of arrays but later decided I needed to group them in a way that would help visualize how they contribute to the calculation. Much of this is very similar to the previous model.
|
# Define variable arrays needed
# time variables
count = np.linspace(0,numit,num=numit+1) # start at 0 and go to numit; since it starts at 0 there is 1 more element in the array than numit
currenttime = np.full_like(count,0.0) # same size as count will all values = 0 for starters
# moment of inertia variables
dparallel = np.full_like(count,0.0) # distance to pivot from center of mass (CM)
tinertianow = np.full_like(count,0.0) # moment of inertia from parallel axis theorem
# rotation variables
angaccel = np.full_like(count,0.0) # current angular acceleration
angvel = np.full_like(count,0.0) # current angular velocity
angpos = np.full_like(count,0.0) # current angular position
torqpos = np.full_like(count,0.0) # torque from overhanging 'right' side of toast
torqneg = np.full_like(count,0.0) # torque from 'left' side of toast still over the table
torqnet = np.full_like(count,0.0) # net torque
# general location of cm variables
rside = np.full_like(count,0.0) # length of toast hanging out over edge
lside = np.full_like(count,0.0) # length of toast to left of edge
# torque calculation variables
armr = np.full_like(count,0.0) # moment arm of overhanging toast
arml = np.full_like(count,0.0) # moment arm of toast left of pivot
weightr = np.full_like(count,0.0) # weight of overhanging toast acting at armr/2
weightl = np.full_like(count,0.0) # weight of 'left' side of toast acting at arml/2
# slipping variables
friction = np.full_like(count,0.0) # friction at pivot
latgforce = np.full_like(count,0.0) # force seeking to slide toast off
parallelaccel = np.full_like(count,0.0) # acceleration parallel to plane of toast
# These arrays had to be added later as I needed to deal with the toast slipping off the edge
slipdisplace = np.full_like(count,0.0) # displacement of toast in this interation
slipposx = np.full_like(count,0.0) # position of CM of toast in x
slipposy = np.full_like(count,0.0) # position of CM of toast in y
slipveltot = np.full_like(count,0.0) # total velocity at iteration
slipvelx = np.full_like(count,0.0) # velocity of CM in x direction
slipvely = np.full_like(count,0.0) # velocity of CM in y direction
# kinematic coefficients
quadcoef = np.zeros(3) # needed to invoke the python polynomial roots solver.
|
_____no_output_____
|
MIT
|
Toast2.2.ipynb
|
smithrockmaker/PH213
|
Initialize the arrays....
In the process of taking my original notebook apart and creating separate notebooks for each model I am finding that I can do this in a more understandable way than I did the first time around. Feel free to look back at the original notebook, which I abandoned when it got too cumbersome.
Each time I perform a set of calculations I will start by considering where the toast is now and whether it is slipping or not. That means the next step in the iteration only depends on the previous step and some constants. Because of this I only need to establish the first value in each of the arrays: what is the value of each variable when this process starts? Note that all array values except count[] have been set to 0, so any variables whose initial value should be 0 have been commented out. See previous models for details of calculating the moment of inertia using the parallel axis theorem.
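For reference, the parallel-axis step used in the cell below is

$$I_{pivot} = I_{cm} + m\,d^{2}, \qquad I_{cm} = \tfrac{1}{12} m w^{2},$$

where $w$ is the toast width (`twidth`), $m$ is its mass, and $d$ is the distance from the center of mass to the pivot at the counter edge (`dparallel`). Since $d$ grows once the toast starts to slip, the moment of inertia has to be recomputed on every iteration.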
|
# Set first term of each variable
# time variables
# count : count is aready completely filled from 0 to numit
# currenttime[0] is already set to 0
# general location of cm variables
rside[0] = twidth*overhang
lside[0] = twidth-rside[0]
# torque calculation variables
armr[0] = rside[0]/2.0
arml[0] = lside[0]/2.0
weightr[0] = lindensity*rside[0]*gravity # weight of overhang
weightl[0] = lindensity*lside[0]*gravity # weight over table
# moment of inertia variables
dparallel[0] = rside[0] - twidth/2. # value changes if slipping
tinertianow[0] = tinertiacm + tmass*dparallel[0]**2
# rotation variables
#angvel[0] is already set to 0
#angpos[0] is already set to 0
torqpos[0] = (overhang*twidth/2)*(tmass*overhang*gravity)
torqneg[0] = -((1.0-overhang)*twidth/2)*(tmass*(1.0-overhang)*gravity)
torqnet[0] = torqpos[0]+torqneg[0]
angaccel[0] = torqnet[0]/tinertianow[0]
# slipping variables
# friction[0] is already set to 0
# latgforce[0] is already set to 0
# parallelaccel[0] is already set to 0
# These arrays had to be added later as I needed to deal with the toast slipping off the edge
# slipdisplace[0] is already set to 0
slipposx[0] = rside[0]- twidth/2.0 # CM relative to pivot due to overhang
# slipposy[0] is already set to 0
# slipveltot[0] is already set to 0
# slipvelx[0] is already set to 0
# slipvely[0] is already set to 0
# kinematic coefficients
# quadcoef[] depend on conditions when toast leaves edge
|
_____no_output_____
|
MIT
|
Toast2.2.ipynb
|
smithrockmaker/PH213
|
...same calculation but using variables differently.....
I still need to calculate torqpos and torqneg, but these will be based on my new nomenclature that tries to make it more explicit how the torques are calculated, as well as the normal force on the corner and the friction generated. One of the features I have NOT dealt with yet is that the moment of inertia will change once the toast starts sliding. I'm going to let that go for now and merely calculate the normal, friction, and lateral forces on the toast to see at what point it might start to slide. Then I will worry about how to recalculate the moment of inertia after I do a first test. Look at the analysis section immediately following for discussion of how this process developed......
When it starts to slip... (initially ignored)
When the toast starts to slip things get complicated in a hurry. Perhaps most obviously the pivot point starts to move, which means all of the torques and moment arms change, as well as the moment of inertia. That will all be sort of straightforward. Tracking the motion of the toast as it slides off the edge seems painful since the acceleration and velocity will be in a slightly different direction with each successive iteration. Yikes..... Remember that python keeps track of loops and other programming features through the indents in the code. All of this part of the model will need to take place inside the 'if-else' conditional test part way through the calculation. To be more specific, it is the 'else' part of the conditional test where all the action has to happen.
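As a sanity check on the slipping test in the loop below (which compares the lateral component of gravity to the maximum static friction at the pivot), slipping should begin when

$$mg\sin\theta > \mu\, mg\cos\theta \quad\Longrightarrow\quad \theta_{slip} = \arctan\mu,$$

so, for example, with $\mu = 0.8$ the toast should start to slide near $\arctan 0.8 \approx 0.67$ rad; the finite time step makes the angle reported by the iteration come out slightly larger.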
|
ndex1 = 0
while (ndex1 < numit) and (angpos[ndex1] < anglimit):
print("iteration: ",ndex1)
# These calculations take place in every iteration regardless of whether it's slipping or not.
# moment of inertia NOW - ndex1
dparallel[ndex1] = rside[ndex1] - twidth/2. # value changes if slipping
tinertianow[ndex1] = tinertiacm + tmass*dparallel[ndex1]**2
# torqnet NOW - ndex1
torqpos[ndex1] = np.cos(angpos[ndex1])*armr[ndex1]*weightr[ndex1]
torqneg[ndex1] = -np.cos(angpos[ndex1])*arml[ndex1]*weightl[ndex1]
torqnet[ndex1] = torqpos[ndex1] + torqneg[ndex1]
# angular acceleration NOW -ndex1
angaccel[ndex1] = torqnet[ndex1]/tinertianow[ndex1]
# NEXT position and velocity after tstep - ndex1+1
angvel[ndex1+1] = angvel[ndex1] + angaccel[ndex1]*(tstep/1000.0)
angpos[ndex1+1] = angpos[ndex1] + angvel[ndex1]*(tstep/1000.0) + 0.5*angaccel[ndex1]*(tstep/1000.0)*(tstep/1000.0)
currenttime[ndex1+1] = currenttime[ndex1] + tstep
# determine if the toast is slipping
# calculate normal, friction, and lateral forces NOW - ndex1
currentnormal = (weightr[ndex1] + weightl[ndex1])*np.cos(angpos[ndex1])
friction[ndex1] = currentnormal*coeffric
latgforce[ndex1] = (weightr[ndex1] + weightl[ndex1])*np.sin(angpos[ndex1])
parallelaccel[ndex1] = (latgforce[ndex1] - friction[ndex1])/(tmass)
# This is where I have to deal with the toast slipping. When the parallelaccel > 0
# then the toast is starting to slip.
if parallelaccel[ndex1] < 0.0: # NOT slipping
parallelaccel[ndex1] = 0.0
# update variables for next step = ndex1+1
rside[ndex1+1] = rside[ndex1]
lside[ndex1+1] = twidth - rside[ndex1+1]
armr[ndex1+1] = rside[ndex1+1]/2.0
arml[ndex1+1] = lside[ndex1+1]/2.0
weightr[ndex1+1] = lindensity*rside[ndex1+1]*gravity # weight of overhang
weightl[ndex1+1] = lindensity*lside[ndex1+1]*gravity # weight over table
slipangle = angpos[ndex1+1] # keep updating the slip angle until is starts slipping.
else:
print("Toast is slipping!!; ndex1: ", ndex1)
# determine NEXT sliding velocity - ndex1+1
slipvelx[ndex1+1] = slipvelx[ndex1] + np.cos(angpos[ndex1])*parallelaccel[ndex1]*tstep/1000.
slipvely[ndex1+1] = slipvely[ndex1] - np.sin(angpos[ndex1])*parallelaccel[ndex1]*tstep/1000.
slipveltot[ndex1+1] = np.sqrt(slipvelx[ndex1+1]**2 + slipvely[ndex1+1]**2)
# determine NEXT slid position - ndex1+1
slipposx[ndex1+1] = slipposx[ndex1] + slipvelx[ndex1+1]*tstep/1000.
slipposy[ndex1+1] = slipposy[ndex1] + slipvely[ndex1+1]*tstep/1000.
slipdisplace[ndex1+1] = np.sqrt(slipposx[ndex1+1]**2 + slipposy[ndex1+1]**2)
# find NEXT overhang, this affects the moment of inertia - ndex1+1
rside[ndex1+1] = rside[ndex1]+slipdisplace[ndex1+1]
lside[ndex1+1] = twidth - rside[ndex1+1]
weightr[ndex1+1] = lindensity*rside[ndex1+1]*gravity # weight of overhang
weightl[ndex1+1] = lindensity*lside[ndex1+1]*gravity # weight over table
# debugging help
# print("lateral accel (cm/s^2) : ", parallelaccel[ndex1])
# print("lateral g force: ", latgforce[ndex1])
# print("currenttime: ", currenttime[ndex1])
# print("velx: %.3f vely %.3f posx %.3f posy %.3f " % (slipvelx[ndex1],slipvely[ndex1],slipposx[ndex1],slipposy[ndex1]))
# print("slip velocity %.3f slip displacement %.3f " % (slipveltot[ndex1],slipdisplace[ndex1]))
# inputcont = input("continue?")
# debugging help
# print("Tpos: %.3f Tneg %.3f Ttot %.3f angaccel %.3f " % (torqpos2[ndex4],torqneg2[ndex4],torqtot[ndex4],angaccel2[ndex4]))
# print("cos(angpos): ", np.cos(angpos2[ndex2]))
# print("pos %.3f pos+ %.3f vel %.3f vel+ %.3f accel %.3f " % (angpos2[ndex4],angpos2[ndex4+1],angvel2[ndex4],angvel2[ndex4+1],angaccel2[ndex4]))
# inputcont = input("continue?")
# test for end point of rotation
if angpos[ndex1+1] > (np.pi/2.0):
ndex1 = ndex1 + 1
print ("Got to 90 degrees at ndex1: ", ndex1)
break # get out of the loop
ndex1 = ndex1 +1 # go to the next time increment
ndexfinal = ndex1
print("final index: ", ndex1)
print("Tpos: %.3f Tneg %.3f Ttot %.3f angaccel %.3f : torque 0.0 and angaccel 0.0" % (torqpos[ndex1],torqneg[ndex1],torqnet[ndex1],angaccel[ndex1]))
print("pos %.3f vel %.3f : angular position 1.55ish" % (angpos[ndex1],angvel[ndex1]))
print("Angle at which slipping begins is %.3f radians" % (slipangle))
|
iteration: 0
iteration: 1
iteration: 2
iteration: 3
iteration: 4
iteration: 5
iteration: 6
iteration: 7
iteration: 8
iteration: 9
final index: 10
Tpos: 0.000 Tneg 0.000 Ttot 0.000 angaccel 0.000 : torque 0.0 and angaccel 0.0
pos 0.066 vel 4.413 : angular position 1.55ish
Angle at which slipping begins is 0.066 radians
|
MIT
|
Toast2.2.ipynb
|
smithrockmaker/PH213
|
Plot lateral g force and friction to see crossover point.....
This introduces a different plotting requirement. I'm looking to understand where in the process the frictional force falls below the lateral g force, resulting in the toast slipping. In the previous dual plot I allowed the plot routines to set the scales on the vertical axes internally. Now I need to make sure both variables share the same vertical axis scale, so the visual crossover point is in fact what I'm looking for. The first time I did this it looked good, but because the scales on the left and right weren't the same it was misleading.
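The crossover can also be located numerically rather than read off the plot; a minimal sketch, assuming the arrays from the iteration loop above are still in memory:
```python
# First iteration where the lateral gravity component exceeds friction
# (np.argmax returns 0 if the condition is never met, i.e. the toast never slips)
slip_index = np.argmax(latgforce[:ndexfinal] > friction[:ndexfinal])
print("slipping starts at iteration %d, t = %.1f ms, angle = %.3f rad"
      % (slip_index, currenttime[slip_index], angpos[slip_index]))
```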
|
plt.plot(currenttime, latgforce, color = "blue", label = "lateral g force")
plt.plot(currenttime, friction, color = "red", label = "friction")
plt.title("Is it slipping?")
plt.ylabel("force");
plt.legend();
|
_____no_output_____
|
MIT
|
Toast2.2.ipynb
|
smithrockmaker/PH213
|
Analysis
The first time I ran the analysis above with the possibility of slipping I screwed up the cos/sin thing and it started slipping right away. Fixed that, and then it began slipping, with a coefficient of friction of 0.4, at the 6th iteration (60 ms). I increased the coefficient of friction to 0.8 and it went up to the 8th iteration before slipping. This is qualitatively what one would expect. Interestingly, if you go back to the rotation speed plot it seems hopeful that if the toast starts to slide around 60-80 ms that would significantly reduce its rotational velocity as it starts to fall, which would be consistent with the experimental evidence. Now I need to go back and build in the impact of the slipping, which will be a bit of a pain. The discussion for this is back a few cells. It feels like I have the slipping part working appropriately now. If I increase the coefficient of friction the angle at which it starts to slip is higher AND the final angular velocity is higher by a little.
New Drop time
To get the rotation of the toast I need to calculate the drop time, taking into account that because of slipping (and rotation actually) the toast has some downward velocity when it comes off the edge of the counter. That will slightly reduce the drop time.
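Written out, the kinematic equation solved in the cell below is $h + v_y t - \tfrac{g}{2}t^{2} = 0$, whose positive root is

$$t_{floor} = \frac{v_y + \sqrt{v_y^{2} + 2gh}}{g},$$

where $v_y$ is the (negative, i.e. downward) vertical velocity of the center of mass as it leaves the edge. `np.roots` finds this root numerically; a downward $v_y$ shortens the fall relative to the no-slip value $\sqrt{2h/g}$.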
|
quadcoef[0] = -gravity/2.0
quadcoef[2] = counterheight
quadcoef[1] = slipvely[ndexfinal-1]
droptime = np.roots(quadcoef)
if droptime[0] > 0.0: # assume 2 roots and only one is positive....could be a problem
finalrotation = droptime[0]*angvel[ndexfinal]
timetofloor = droptime[0]
else:
finalrotation = droptime[1]*angvel[ndexfinal]
timetofloor = droptime[1]
print("Final Report:")
print("Final Rotation at Floor (rad): ", finalrotation)
print("Angular velocity coming off the table (rad/s):", angvel[ndexfinal])
print("Time to reach floor (s):", timetofloor)
print("Initial overhang (%):", overhang)
print("Coefficient of Friction:", coeffric)
print("Angle at which slipping started (rad):", slipangle)
print("Time until comes off edge (ms): ", currenttime[ndexfinal])
print()
print()
# debug
print("coef 0 (t^2): ", quadcoef[0])
print("coef 1 (t): ", quadcoef[1])
print("coef 2 (const):", quadcoef[2])
print("root 1: ", droptime[0])
print("root 2: ", droptime[1])
|
Final Report:
Final Rotation at Floor (rad): 6.387420535890989
Angular velocity coming off the table (rad/s): 15.015067546165607
Time to reach floor (s): 0.4254007193941757
Initial overhang (%): 0.75
Coefficient of Friction: 0.8
Angle at which slipping started (rad): 0.7006976411024474
Time until comes off edge (ms): 150.0
coef 0 (t^2): -490.5
coef 1 (t): -26.413422196467003
coef 2 (const): 100.0
root 1: -0.47925071367851224
root 2: 0.4254007193941757
|
MIT
|
Toast2.2.ipynb
|
smithrockmaker/PH213
|
Deep learning for Natural Language Processing
* Simple text representations, bag of words
* Word embedding and... not just another word2vec this time
* 1-dimensional convolutions for text
* Aggregating several data sources "the hard way"
* Solving ~somewhat~ real ML problem with ~almost~ end-to-end deep learning
Special thanks to Irina Golzmann for help with the technical part.
NLTK
You will require nltk v3.2 to solve this assignment. __It is really important that the version is 3.2, otherwise the Russian tokenizer might not work__
Install/update
* `sudo pip install --upgrade nltk==3.2`
* If you don't remember when the last pip upgrade was, `sudo pip install --upgrade pip`
If for some reason you can't or won't switch to nltk v3.2, just make sure that Russian words are tokenized properly with RegexpTokenizer.
|
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
|
_____no_output_____
|
MIT
|
Seminar9/Bonus-seminar.ipynb
|
Omrigan/dl-course
|
Dataset
Ex-kaggle competition on job salary prediction. Original contest - https://www.kaggle.com/c/job-salary-prediction
Download
Go [here](https://www.kaggle.com/c/job-salary-prediction) and download as usual. CSC cloud: data should already be here somewhere, just poke the nearest instructor.
What's inside
Different kinds of features:
* 2 text fields - title and description
* Categorical fields - contract type, location
Only 1 binary target: whether or not such advertisement contains prohibited materials
* criminal, misleading, human reproduction-related, etc
* diving into the data may result in prolonged sleep disorders
|
df = pd.read_csv("./Train_rev1.csv",sep=',')
print df.shape, df.SalaryNormalized.mean()
df[:5]
|
(244768, 12) 34122.5775755
|
MIT
|
Seminar9/Bonus-seminar.ipynb
|
Omrigan/dl-course
|
Tokenizing
First, we create a dictionary of all existing words. Assign each word a number - its ID.
|
from nltk.tokenize import RegexpTokenizer
from collections import Counter,defaultdict
tokenizer = RegexpTokenizer(r"\w+")
#Dictionary of tokens
token_counts = Counter()
#All texts
all_texts = np.hstack([df.FullDescription.values,df.Title.values])
#Compute token frequencies
for s in all_texts:
if type(s) is not str:
continue
s = s.decode('utf8').lower()
tokens = tokenizer.tokenize(s)
for token in tokens:
token_counts[token] +=1
|
_____no_output_____
|
MIT
|
Seminar9/Bonus-seminar.ipynb
|
Omrigan/dl-course
|
Remove rare tokens
We are unlikely to make use of words that are only seen a few times throughout the corpora. Again, if you want to beat Kaggle competition metrics, consider doing something better.
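One possible way to fill in the `tokens = <...>` blank in the cell below (a sketch, not the only reasonable answer):
```python
# Keep only the tokens that occur at least min_count times in the corpora
tokens = [token for token, count in token_counts.items() if count >= min_count]
```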
|
#Word frequency distribution, just for kicks
_=plt.hist(token_counts.values(),range=[0,50],bins=50)
#Select only the tokens that had at least min_count occurrences in the corpora.
#Use token_counts.
min_count = 5
tokens = <tokens from token_counts keys that had at least min_count occurences throughout the dataset>
token_to_id = {t:i+1 for i,t in enumerate(tokens)}
null_token = "NULL"
token_to_id[null_token] = 0
print "# Tokens:",len(token_to_id)
if len(token_to_id) < 10000:
print "Alarm! It seems like there are too few tokens. Make sure you updated NLTK and applied correct thresholds -- unless you now what you're doing, ofc"
if len(token_to_id) > 100000:
print "Alarm! Too many tokens. You might have messed up when pruning rare ones -- unless you know what you're doin' ofc"
|
# Tokens: 44867
|
MIT
|
Seminar9/Bonus-seminar.ipynb
|
Omrigan/dl-course
|
Replace words with IDs
Set a maximum length for titles and descriptions.
* If a string is longer than that limit - crop it, if less - pad with zeros.
* Thus we obtain a matrix of size [n_samples]x[max_length]
* Element at i,j is an identifier of word j within sample i
|
def vectorize(strings, token_to_id, max_len=150):
token_matrix = []
for s in strings:
if type(s) is not str:
token_matrix.append([0]*max_len)
continue
s = s.decode('utf8').lower()
tokens = tokenizer.tokenize(s)
token_ids = map(lambda token: token_to_id.get(token,0), tokens)[:max_len]
token_ids += [0]*(max_len - len(token_ids))
token_matrix.append(token_ids)
return np.array(token_matrix)
desc_tokens = vectorize(df.FullDescription.values,token_to_id,max_len = 500)
title_tokens = vectorize(df.Title.values,token_to_id,max_len = 15)
|
_____no_output_____
|
MIT
|
Seminar9/Bonus-seminar.ipynb
|
Omrigan/dl-course
|
Data format examples
|
print "Matrix size:",title_tokens.shape
for title, tokens in zip(df.Title.values[:3],title_tokens[:3]):
print title,'->', tokens[:10],'...'
|
Matrix size: (244768, 15)
Engineering Systems Analyst -> [38462 12311 1632 0 0 0 0 0 0 0] ...
Stress Engineer Glasgow -> [19749 41620 5861 0 0 0 0 0 0 0] ...
Modelling and simulation analyst -> [23387 16330 32144 1632 0 0 0 0 0 0] ...
|
MIT
|
Seminar9/Bonus-seminar.ipynb
|
Omrigan/dl-course
|
__ As you can see, our preprocessing is somewhat crude. Let us see if that is enough for our network __
Non-sequences
Some data features are categorical data, e.g. location, contract type, company. They require a separate preprocessing step.
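A minimal sketch of one way to build the `categories` list expected by the cell below: one dict per advertisement built from the four columns selected into `data_cat`, with missing values replaced by a placeholder string so `DictVectorizer` only sees strings (how to treat NaNs is an assumption here).
```python
# One possible completion of the `categories = [...]` placeholder below
categories = []
for _, row in data_cat.iterrows():
    categories.append({col: (row[col] if pd.notnull(row[col]) else "missing")
                       for col in data_cat.columns})
```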
|
#One-hot-encoded category and subcategory
from sklearn.feature_extraction import DictVectorizer
categories = []
data_cat = df[["Category","LocationNormalized","ContractType","ContractTime"]]
categories = [A list of dictionaries {"category":category_name, "subcategory":subcategory_name} for each data sample]
vectorizer = DictVectorizer(sparse=False)
df_non_text = vectorizer.fit_transform(categories)
df_non_text = pd.DataFrame(df_non_text,columns=vectorizer.feature_names_)
|
_____no_output_____
|
MIT
|
Seminar9/Bonus-seminar.ipynb
|
Omrigan/dl-course
|
Split data into training and test
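For the "easy" difficulty option mentioned in the cell below, a random holdout split is enough; here is a minimal numpy-only sketch of one way to fill in the `<define_these_variables>` placeholder (the 90/10 ratio is an arbitrary assumption).
```python
# Random 90/10 split of all arrays using the same index permutation
n_samples = len(target)
perm = np.random.permutation(n_samples)
n_train = int(0.9 * n_samples)
train_idx, test_idx = perm[:n_train], perm[n_train:]
title_tr, title_ts = title_tokens[train_idx], title_tokens[test_idx]
desc_tr, desc_ts = desc_tokens[train_idx], desc_tokens[test_idx]
nontext_tr, nontext_ts = df_non_text.values[train_idx], df_non_text.values[test_idx]
target_tr, target_ts = target[train_idx], target[test_idx]
```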
|
#Target variable - whether or not sample contains prohibited material
target = df.is_blocked.values.astype('int32')
#Preprocessed titles
title_tokens = title_tokens.astype('int32')
#Preprocessed tokens
desc_tokens = desc_tokens.astype('int32')
#Non-sequences
df_non_text = df_non_text.astype('float32')
#Split into training and test set.
#Difficulty selector:
#Easy: split randomly
#Medium: split by companies, make sure no company is in both train and test set
#Hard: do whatever you want, but score yourself using kaggle private leaderboard
title_tr,title_ts,desc_tr,desc_ts,nontext_tr,nontext_ts,target_tr,target_ts = <define_these_variables>
|
_____no_output_____
|
MIT
|
Seminar9/Bonus-seminar.ipynb
|
Omrigan/dl-course
|
Save preprocessed data [optional]
* The next tab can be used to stash all the essential data matrices and get rid of the rest of the data.
  * Highly recommended if you have less than 1.5GB RAM left
* To do that, you need to first run it with save_prepared_data=True, then restart the notebook and only run this tab with read_prepared_data=True.
|
save_prepared_data = True #save
read_prepared_data = False #load
#but not both at once
assert not (save_prepared_data and read_prepared_data)
if save_prepared_data:
print "Saving preprocessed data (may take up to 3 minutes)"
import pickle
with open("preprocessed_data.pcl",'w') as fout:
pickle.dump(data_tuple,fout)
with open("token_to_id.pcl",'w') as fout:
pickle.dump(token_to_id,fout)
print "done"
elif read_prepared_data:
print "Reading saved data..."
import pickle
with open("preprocessed_data.pcl",'r') as fin:
data_tuple = pickle.load(fin)
title_tr,title_ts,desc_tr,desc_ts,nontext_tr,nontext_ts,target_tr,target_ts = data_tuple
with open("token_to_id.pcl",'r') as fin:
token_to_id = pickle.load(fin)
#Re-importing libraries to allow staring noteboook from here
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
print "done"
|
_____no_output_____
|
MIT
|
Seminar9/Bonus-seminar.ipynb
|
Omrigan/dl-course
|
Train the monster
Since we have several data sources, our neural network may differ from what you are used to working with.
* Separate input for titles
  * cnn+global max or RNN
* Separate input for description
  * cnn+global max or RNN
* Separate input for categorical features
  * a few dense layers + some black magic if you want
These three inputs must be blended somehow - concatenated or added.
* Output: a simple regression task
One possible completion of the architecture skeleton is sketched below.
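A minimal sketch of one way to fill in the skeleton in the "NN architecture" cell further down; all layer sizes and filter widths are arbitrary assumptions, and only standard Lasagne layers are used.
```python
# Descriptions: embedding -> 1d convolution over time -> global max pooling
descr_nn = lasagne.layers.EmbeddingLayer(descr_inp, input_size=len(token_to_id)+1, output_size=64)
descr_nn = lasagne.layers.DimshuffleLayer(descr_nn, [0, 2, 1])
descr_nn = lasagne.layers.Conv1DLayer(descr_nn, num_filters=64, filter_size=5,
                                      nonlinearity=lasagne.nonlinearities.rectify)
descr_nn = lasagne.layers.GlobalPoolLayer(descr_nn, T.max)

# Titles: same idea with a smaller filter
title_nn = lasagne.layers.EmbeddingLayer(title_inp, input_size=len(token_to_id)+1, output_size=64)
title_nn = lasagne.layers.DimshuffleLayer(title_nn, [0, 2, 1])
title_nn = lasagne.layers.Conv1DLayer(title_nn, num_filters=64, filter_size=3,
                                      nonlinearity=lasagne.nonlinearities.rectify)
title_nn = lasagne.layers.GlobalPoolLayer(title_nn, T.max)

# Non-sequences: a single dense layer
cat_nn = lasagne.layers.DenseLayer(cat_inp, 64, nonlinearity=lasagne.nonlinearities.rectify)

# Blend the three branches and regress the target
nn = lasagne.layers.concat([descr_nn, title_nn, cat_nn])
nn = lasagne.layers.DenseLayer(nn, 128)
nn = lasagne.layers.DropoutLayer(nn, p=0.5)
nn = lasagne.layers.DenseLayer(nn, 1, nonlinearity=lasagne.nonlinearities.linear)
```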
|
#libraries
import lasagne
from theano import tensor as T
import theano
#3 inputs and a refere output
title_token_ids = T.matrix("title_token_ids",dtype='int32')
desc_token_ids = T.matrix("desc_token_ids",dtype='int32')
categories = T.matrix("categories",dtype='float32')
target_y = T.vector("is_blocked",dtype='float32')
|
_____no_output_____
|
MIT
|
Seminar9/Bonus-seminar.ipynb
|
Omrigan/dl-course
|
NN architecture
|
title_inp = lasagne.layers.InputLayer((None,title_tr.shape[1]),input_var=title_token_ids)
descr_inp = lasagne.layers.InputLayer((None,desc_tr.shape[1]),input_var=desc_token_ids)
cat_inp = lasagne.layers.InputLayer((None,nontext_tr.shape[1]), input_var=categories)
# Descriptions
#word-wise embedding. We recommend starting from around 64 and improving it after you are certain it works.
descr_nn = lasagne.layers.EmbeddingLayer(descr_inp,
                                         input_size=len(token_to_id)+1,
                                         output_size=64) #64 is just the recommended starting point
#reshape from [batch, time, unit] to [batch,unit,time] to allow 1d convolution over time
descr_nn = lasagne.layers.DimshuffleLayer(descr_nn, [0,2,1])
#1D convolution over the embedding - one possible choice, feel free to stack several
descr_nn = lasagne.layers.Conv1DLayer(descr_nn, num_filters=64, filter_size=3,
                                      nonlinearity=lasagne.nonlinearities.rectify)
#pool over time
descr_nn = lasagne.layers.GlobalPoolLayer(descr_nn,T.max)
#Possible improvements here are adding several parallel convs with different filter sizes or stacking them the usual way
#1dconv -> 1d max pool ->1dconv and finally global pool
# Titles - processed the same way as descriptions (one possible choice)
title_nn = lasagne.layers.EmbeddingLayer(title_inp,
                                         input_size=len(token_to_id)+1,
                                         output_size=64)
title_nn = lasagne.layers.DimshuffleLayer(title_nn, [0,2,1])
title_nn = lasagne.layers.Conv1DLayer(title_nn, num_filters=64, filter_size=3,
                                      nonlinearity=lasagne.nonlinearities.rectify)
title_nn = lasagne.layers.GlobalPoolLayer(title_nn,T.max)
# Non-sequences - a small dense layer over the categorical features (one possible choice)
cat_nn = lasagne.layers.DenseLayer(cat_inp, 128, nonlinearity=lasagne.nonlinearities.rectify)
# Merge the three branches into one vector
nn = lasagne.layers.concat([descr_nn, title_nn, cat_nn])
nn = lasagne.layers.DenseLayer(nn, 256) #hidden layer size is a free choice
nn = lasagne.layers.DropoutLayer(nn, p=0.5) #dropout is optional; p=0.5 is a common default
nn = lasagne.layers.DenseLayer(nn,1,nonlinearity=lasagne.nonlinearities.linear)
|
_____no_output_____
|
MIT
|
Seminar9/Bonus-seminar.ipynb
|
Omrigan/dl-course
|
Loss function* The standard way: * prediction * loss * updates * training and evaluation functions
|
#All trainable params
weights = lasagne.layers.get_all_params(nn,trainable=True)
#Simple NN prediction
prediction = lasagne.layers.get_output(nn)[:,0]
#loss function
loss = lasagne.objectives.squared_error(prediction,target_y).mean()
#Weight optimization step
updates = lasagne.updates.adam(loss, weights, learning_rate=1e-3) #one reasonable choice; any lasagne optimizer works here
|
_____no_output_____
|
MIT
|
Seminar9/Bonus-seminar.ipynb
|
Omrigan/dl-course
|
Deterministic prediction * In case we use stochastic elements, e.g. dropout or noise * Compile a separate set of functions with deterministic prediction (deterministic = True) * Unless you think there's no need for dropout there, of course. Btw, is there?
|
#deterministic version
det_prediction = lasagne.layers.get_output(nn,deterministic=True)[:,0]
#equivalent loss function
det_loss = lasagne.objectives.squared_error(det_prediction,target_y).mean() #same loss as above, but on the deterministic prediction
|
_____no_output_____
|
MIT
|
Seminar9/Bonus-seminar.ipynb
|
Omrigan/dl-course
|
Coffee-lation
|
train_fun = theano.function([desc_token_ids,title_token_ids,categories,target_y],[loss,prediction],updates = updates)
eval_fun = theano.function([desc_token_ids,title_token_ids,categories,target_y],[det_loss,det_prediction])
|
_____no_output_____
|
MIT
|
Seminar9/Bonus-seminar.ipynb
|
Omrigan/dl-course
|
Training loop* The regular way with loops over minibatches* Since the dataset is huge, we define an epoch as some fixed number of samples instead of the whole dataset
|
# Our good old minibatch iterator now supports an arbitrary number of arrays (X,y,z)
def iterate_minibatches(*arrays,**kwargs):
batchsize=kwargs.get("batchsize",100)
shuffle = kwargs.get("shuffle",True)
if shuffle:
indices = np.arange(len(arrays[0]))
np.random.shuffle(indices)
for start_idx in range(0, len(arrays[0]) - batchsize + 1, batchsize):
if shuffle:
excerpt = indices[start_idx:start_idx + batchsize]
else:
excerpt = slice(start_idx, start_idx + batchsize)
yield [arr[excerpt] for arr in arrays]
|
_____no_output_____
|
MIT
|
Seminar9/Bonus-seminar.ipynb
|
Omrigan/dl-course
|
Tweaking guide* batch_size - how many samples are processed per function call * optimization gets slower, but more stable, as you increase it. * You may consider increasing it halfway through training* minibatches_per_epoch - max amount of minibatches per epoch * Does not affect training. A smaller value means more frequent and less stable printing * Setting it to less than 10 is only meaningful if you want to make sure your NN does not break down after one epoch* n_epochs - total amount of epochs to train for * `n_epochs = 10**10` and manual interrupting is still an optionTips:* With small minibatches_per_epoch, network quality may jump up and down for several epochs* Plotting metrics over training time may be a good way to analyze which architectures work better.* Once you are sure your network ain't gonna crash, it's worth letting it train for a few hours of an average laptop's time to see its true potential
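For the plotting tip above, here is a minimal sketch. It is only an illustration: val_rmse_history is a name introduced here, and you would append to it once per epoch inside the evaluation part of the loop below.
val_rmse_history = []
# inside the training loop below, after computing the validation metrics, do e.g.:
# val_rmse_history.append(mean_squared_error(epoch_y_true, epoch_y_pred)**.5)
plt.plot(val_rmse_history)
plt.xlabel("epoch")
plt.ylabel("validation RMSE")
plt.title("Validation RMSE over training")
plt.show()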
|
from sklearn.metrics import mean_squared_error,mean_absolute_error
n_epochs = 100
batch_size = 100
minibatches_per_epoch = 100
for i in range(n_epochs):
#training
epoch_y_true = []
epoch_y_pred = []
b_c = b_loss = 0
for j, (b_desc,b_title,b_cat, b_y) in enumerate(
iterate_minibatches(desc_tr,title_tr,nontext_tr,target_tr,batchsize=batch_size,shuffle=True)):
if j > minibatches_per_epoch:break
loss,pred_probas = train_fun(b_desc,b_title,b_cat,b_y)
b_loss += loss
b_c +=1
epoch_y_true.append(b_y)
epoch_y_pred.append(pred_probas)
epoch_y_true = np.concatenate(epoch_y_true)
epoch_y_pred = np.concatenate(epoch_y_pred)
print "Train:"
print '\tloss:',b_loss/b_c
print '\trmse:',mean_squared_error(epoch_y_true,epoch_y_pred)**.5
print '\tmae:',mean_absolute_error(epoch_y_true,epoch_y_pred)
#evaluation
epoch_y_true = []
epoch_y_pred = []
b_c = b_loss = 0
for j, (b_desc,b_title,b_cat, b_y) in enumerate(
iterate_minibatches(desc_ts,title_ts,nontext_ts,target_ts,batchsize=batch_size,shuffle=True)):
if j > minibatches_per_epoch: break
loss,pred_probas = eval_fun(b_desc,b_title,b_cat,b_y)
b_loss += loss
b_c +=1
epoch_y_true.append(b_y)
epoch_y_pred.append(pred_probas)
epoch_y_true = np.concatenate(epoch_y_true)
epoch_y_pred = np.concatenate(epoch_y_pred)
print "Val:"
print '\tloss:',b_loss/b_c
print '\trmse:',mean_squared_error(epoch_y_true,epoch_y_pred)**.5
print '\tmae:',mean_absolute_error(epoch_y_true,epoch_y_pred)
print "If you are seeing this, it's time to backup your notebook. No, really, 'tis too easy to mess up everything without noticing. "
|
If you are seeing this, it's time to backup your notebook. No, really, 'tis too easy to mess up everything without noticing.
|
MIT
|
Seminar9/Bonus-seminar.ipynb
|
Omrigan/dl-course
|
Final evaluationEvaluate network over the entire test set
|
#evaluation
epoch_y_true = []
epoch_y_pred = []
b_c = b_loss = 0
for j, (b_desc,b_title,b_cat, b_y) in enumerate(
iterate_minibatches(desc_ts,title_ts,nontext_ts,target_ts,batchsize=batch_size,shuffle=True)):
loss,pred_probas = eval_fun(b_desc,b_title,b_cat,b_y)
b_loss += loss
b_c +=1
epoch_y_true.append(b_y)
epoch_y_pred.append(pred_probas)
epoch_y_true = np.concatenate(epoch_y_true)
epoch_y_pred = np.concatenate(epoch_y_pred)
print "Scores:"
print '\tloss:',b_loss/b_c
print '\trmse:',mean_squared_error(epoch_y_true,epoch_y_pred)**.5
print '\tmae:',mean_absolute_error(epoch_y_true,epoch_y_pred)
|
_____no_output_____
|
MIT
|
Seminar9/Bonus-seminar.ipynb
|
Omrigan/dl-course
|
Linear Algebra (CpE210A) Midterms Project Coded and submitted by: Galario, Adrian Q. 201814169 58051 DirectionsThis Jupyter Notebook will serve as your base code for your Midterm Project. You must further format and provide complete discussion on the given topic. - Provide all necessary explanations for specific code blocks. - Provide illustrations for key results.- Observe clean code (intuitive variable names, proper commenting, proper code spacing)- Provide a summary discussion at the endFailure to use this format or failure to update the document will be given a deduction equivalent to 50% of the original score. Case Bebang is back to consult you about her business. Furthering her data analytics initiative, she asks you for help to compute some relevant data. Now she is asking you to compute and visualize her sales and costs for the past year. She has given you the datasets attached to her request. Problem State and explain Bebang's problem here and provide the deliverables. Proof of Concept Now that you have a grasp on the requirements, we need to start with making a program to prove that her problem is solvable. As a Linear Algebra student, we will be focusing on applying vector operations to meet her needs. First, we need to import her data. We will use the `pandas` library for this. For more information you can look into their documentation [here](https://pandas.pydata.org/).
|
import seaborn as sns
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
import seaborn as sns
%matplotlib inline
df_prices = pd.read_csv(r'C:\Users\EyyGiee\Desktop\Bebang\bebang prices.csv')
df_sales = pd.read_csv(r'C:\Users\EyyGiee\Desktop\Bebang\bebang sales.csv')
df_prices
df_sales
|
_____no_output_____
|
Apache-2.0
|
LinAlg_Midterms (1).ipynb
|
adriangalarion/Lab-Activities-1.1
|
Part 1: Monthly Sales
|
sales_mat = np.array(df_sales.set_index('flavor'))
prices_mat = np.array(df_prices.set_index('Unnamed: 0'))[0]
costs_mat = np.array(df_prices.set_index('Unnamed: 0'))[1]
price_reshaped=np.reshape(prices_mat,(12,1))
cost_reshaped=np.reshape(costs_mat,(12,1))
print(sales_mat.shape)
print(price_reshaped.shape)
print(cost_reshaped.shape)
|
(12, 12)
(12, 1)
(12, 1)
|
Apache-2.0
|
LinAlg_Midterms (1).ipynb
|
adriangalarion/Lab-Activities-1.1
|
Formulas Take note that the formula for revenue is: $revenue = sales * price$ In this case, think of revenue, sales, and price as vectors instead of individual values. The formula of cost per item sold is: $cost_{sold} = sales * cost$ The formula for profit is: $profit = revenue - cost_{sold}$ The monthly profit is the sum of all profits made in that month.
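As a quick illustration of these element-wise formulas (with made-up numbers, not Bebang's data): if sales = [10, 20], price = [5, 4] and cost = [3, 2], then revenue = [50, 80], cost_sold = [30, 40], profit = [20, 40], and the total profit is 60. In numpy this is simply:
import numpy as np
toy_sales = np.array([10, 20])
toy_price = np.array([5, 4])
toy_cost = np.array([3, 2])
toy_revenue = toy_sales * toy_price        # [50, 80]
toy_cost_sold = toy_sales * toy_cost       # [30, 40]
toy_profit = toy_revenue - toy_cost_sold   # [20, 40]
print(toy_profit.sum())                    # 60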
|
## Function that returns the monthly revenue, cost, and profit (printing is done below)
def monthly_sales(price, cost, sales):
monthly_revenue = sum(sales*price)
monthly_costs = sum(sales*cost)
monthly_profits = (monthly_revenue - monthly_costs)
return monthly_revenue.flatten(), monthly_costs.flatten(), monthly_profits.flatten()
### Using the monthly_sales function to compute for the revenue, cost, and profit
## Then passing the values to month_rev, month_cost, and month_profit
month_rev, month_cost, month_profit = monthly_sales(prices_mat, costs_mat, sales_mat)
### printing the values
print("Monthly Revenue(Starting from the month of January): \n", month_rev)
print("\nYearly Revenue: \n", sum(month_rev))
print("\nMonthly Cost(Starting from the month of January): \n", month_cost)
print("\nYearly Cost: \n", sum(month_cost))
print("\nMonthly Profit(Starting from the month of January): \n", month_profit)
print("\nYearly Profit: \n", sum(month_profit))
|
Monthly Revenue(Starting from the month of January):
[216510 116750 84900 26985 208850 17360 18760 19035 12090 22960
260775 422010]
Yearly Revenue:
1426985
Monthly Cost(Starting from the month of January):
[154650 70050 42450 15420 146195 13454 14070 10575 6045 14350
185440 290718]
Yearly Cost:
963417
Monthly Profit(Starting from the month of January):
[ 61860 46700 42450 11565 62655 3906 4690 8460 6045 8610
75335 131292]
Yearly Profit:
463568
|
Apache-2.0
|
LinAlg_Midterms (1).ipynb
|
adriangalarion/Lab-Activities-1.1
|
Part 2: Flavor Sales
|
## Function that returns the per-flavor, per-month profits as a flattened array
def flavor_sales(price, cost, sales):
flavor_revenue = sales*price
flavor_costs = sales*cost
flavor_profits = flavor_revenue - flavor_costs
return flavor_profits.flatten()
### Using the flavor_sales function to compute for the profit
## Then passing the values to flavor_profit variable
flavor_profit = flavor_sales(prices_mat, costs_mat, sales_mat)
## Values of profit of flavors will be inserted here
flavor1 = []
flavor2 = []
flavor3 = []
flavor4 = []
flavor5 = []
flavor6 = []
flavor7 = []
flavor8 = []
flavor9 = []
flavor10 = []
flavor11 = []
flavor12 = []
## Loop that will append the values(profit) to their respective variables above
## The variables above were created so that the sum can be computed by row (to get the yearly profit per flavor)
## Summing flavor_profits inside the function flavor_sales would instead sum per column (giving the profit of all flavors per month)
for x in flavor_profit:
if len(flavor1)<=11:
flavor1.append(x)
elif len(flavor2)<=11:
flavor2.append(x)
elif len(flavor3)<=11:
flavor3.append(x)
elif len(flavor4)<=11:
flavor4.append(x)
elif len(flavor5)<=11:
flavor5.append(x)
elif len(flavor6)<=11:
flavor6.append(x)
elif len(flavor7)<=11:
flavor7.append(x)
elif len(flavor8)<=11:
flavor8.append(x)
elif len(flavor9)<=11:
flavor9.append(x)
elif len(flavor10)<=11:
flavor10.append(x)
elif len(flavor11)<=11:
flavor11.append(x)
elif len(flavor12)<=11:
flavor12.append(x)
## Profit of each flavor per year
flavor_profits = np.array([sum(flavor1),sum(flavor2),sum(flavor3),sum(flavor4),sum(flavor5),sum(flavor6),sum(flavor7),sum(flavor8),sum(flavor9),
sum(flavor10),sum(flavor11),sum(flavor12)])
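## Cross-check (an added illustration, not part of the original logic): flavor_profit is just the
## flattened 12x12 (flavor x month) profit matrix, so the same yearly totals per flavor can be
## obtained in one line; this should match flavor_profits computed above
flavor_profits_check = flavor_profit.reshape(12, 12).sum(axis=1)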
### Printing the values
print("The row represents each flavor while the column represents the months")
print("The order of flavor and months in rows and columns is the same as in df_sales\n")
print("Profit of Flavor per Month: \n", flavor_profit)
print("\nThe order of the flavor is the same as in df_sales\n")
print("Flavor Profit per Year: \n", flavor_profits)
## Putting the list of flavors into array
flavors = np.array(pd.read_csv("bebang sales.csv", usecols=[0]))
## Converting the arrays into lists
## Using list is easier to match/zip them
fprofit_list = flavor_profits.tolist()
flavor_list = flavors.tolist()
## Matched the two list, to know the profit of each flavor and to be sorted later
matched_list = list(zip(fprofit_list, flavor_list))
### Sorting of the flavors by their profit and displaying the first element(flavors) only
best_3_flavors = [x[1] for x in sorted(matched_list, reverse=True)]
worst_3_flavors = [x[1] for x in sorted(matched_list)]
## Printing of the three best and worst flavors
print("Best Selling Flavors: \n", best_3_flavors[0:3])
print("\nWorst Selling Flavors: \n", worst_3_flavors[0:3])
|
Best Selling Flavors:
[['choco butter naught'], ['sugar glazed'], ['red velvet']]
Worst Selling Flavors:
[['almond honey'], ['furits and nuts'], ['oreo']]
|
Apache-2.0
|
LinAlg_Midterms (1).ipynb
|
adriangalarion/Lab-Activities-1.1
|
Part 3: Visualizing the Data (Optional for +40%)You can try to visualize the data in the most comprehensible chart that you can use.
|
import matplotlib.pyplot as plt
import matplotlib
import seaborn as sns
import pandas as pd
import csv
%matplotlib inline
|
_____no_output_____
|
Apache-2.0
|
LinAlg_Midterms (1).ipynb
|
adriangalarion/Lab-Activities-1.1
|
Entire Dataset
|
## Graph for Sales of each flavor
## Table inside the original file(bebang sales) was transposed in the excel, columns were converted to rows
df_sales_Transposed = pd.read_csv(r"C:\Users\EyyGiee\Desktop\Bebang\bebang sales(transpose).csv")
## Transposing the table makes it easier to plot the data inside it
## The column header 'flavor' was changed to 'Months'
df_sales_Transposed.plot(x="Months", figsize=(25,15))
plt.title('Sales of Each Flavor')
## Graph for Price vs Cost per Flavor
## Declaring the font size and weight to be used in the graph
font = {'weight' : 'bold',
'size' : 15}
matplotlib.rc('font', **font)
## Declaration of the figure to be used
fig = plt.figure()
ax = fig.add_axes([0,0,4,4])
ax.set_title('Price vs Cost per Flavor')
## For the legends used in the graph
colors = {'Price':'blue', 'Cost':'green'}
labels = list(colors.keys())
handles = [plt.Rectangle((0,0),1,1, color=colors[label]) for label in labels]
plt.legend(handles, labels, loc='upper left', prop={'size': 40})
## Plotting of the values for the bar graph
## Price and cost were plotted in one x-axis per flavor to see the difference between the two variable
ax.bar('Red Velvet' ,prices_mat[0], color = 'b', width = 0.50)
ax.bar('Red Velvet' ,costs_mat[0], color = 'g', width = 0.50)
ax.bar('Oreo' ,prices_mat[1], color = 'b', width = 0.50)
ax.bar('Oreo' ,costs_mat[1], color = 'g', width = 0.50)
ax.bar('Super Glazed' ,prices_mat[2], color = 'b', width = 0.50)
ax.bar('Super Glazed' ,costs_mat[2], color = 'g', width = 0.50)
ax.bar('Almond Honey' ,prices_mat[3], color = 'b', width = 0.50)
ax.bar('Almond Honey' ,costs_mat[3], color = 'g', width = 0.50)
ax.bar('Matcha' ,prices_mat[4], color = 'b', width = 0.50)
ax.bar('Matcha' ,costs_mat[4], color = 'g', width = 0.50)
ax.bar('Strawberry Cream' ,prices_mat[5], color = 'b', width = 0.50)
ax.bar('Strawberry Cream' ,costs_mat[5], color = 'g', width = 0.50)
ax.bar('Brown \nSugar Boba' ,prices_mat[6], color = 'b', width = 0.50)
ax.bar('Brown \nSugar Boba' ,costs_mat[6], color = 'g', width = 0.50)
ax.bar('Fruits \nand Nuts' ,prices_mat[7], color = 'b', width = 0.50)
ax.bar('Fruits \nand Nuts' ,costs_mat[7], color = 'g', width = 0.50)
ax.bar('Dark \nChocolate' ,prices_mat[8], color = 'b', width = 0.50)
ax.bar('Dark \nChocolate' ,costs_mat[8], color = 'g', width = 0.50)
ax.bar('Chocolate \nand Orange' ,prices_mat[9], color = 'b', width = 0.50)
ax.bar('Chocolate \nand Orange' ,costs_mat[9], color = 'g', width = 0.50)
ax.bar('Choco Mint' ,prices_mat[10], color = 'b', width = 0.50)
ax.bar('Choco Mint' ,costs_mat[10], color = 'g', width = 0.50)
ax.bar('Choco \nButter Naught' ,prices_mat[11], color = 'b', width = 0.50)
ax.bar('Choco \nButter Naught' ,costs_mat[11], color = 'g', width = 0.50)
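## Note (optional, equivalent alternative): the 24 ax.bar calls above can be collapsed into a loop.
## Shown commented out so the chart above is not drawn twice; flavor_names assumes the same
## ordering as prices_mat/costs_mat.
# flavor_names = ['Red Velvet', 'Oreo', 'Super Glazed', 'Almond Honey', 'Matcha', 'Strawberry Cream',
#                 'Brown \nSugar Boba', 'Fruits \nand Nuts', 'Dark \nChocolate', 'Chocolate \nand Orange',
#                 'Choco Mint', 'Choco \nButter Naught']
# for name, price, cost in zip(flavor_names, prices_mat, costs_mat):
#     ax.bar(name, price, color='b', width=0.50)
#     ax.bar(name, cost, color='g', width=0.50)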
|
_____no_output_____
|
Apache-2.0
|
LinAlg_Midterms (1).ipynb
|
adriangalarion/Lab-Activities-1.1
|
Monthly Sales
|
## Graph for Revenue vs Cost per Month
## Declaring the font size and weight to be used in the graph
font = {'weight' : 'bold',
'size' : 15}
matplotlib.rc('font', **font)
## Declaration of the figure to be used in the graph
fig = plt.figure()
ax = fig.add_axes([0,0,4,4])
ax.set_title('Revenue vs Cost per Month')
ax.set_ylabel('Revenue/Cost')
ax.set_xlabel('Months')
## For the legends used in the graph
colors = {'Revenue':'blue', 'Cost':'green'}
labels = list(colors.keys())
handles = [plt.Rectangle((0,0),1,1, color=colors[label]) for label in labels]
plt.legend(handles, labels, loc='upper left', prop={'size': 40})
## Plotting of the values for the bar graph
## Revenue and cost were plotted in one x-axis per month to see the difference between the two variable
ax.bar('January' ,month_rev[0], color = 'b', width = 0.50)
ax.bar('January' ,month_cost[0], color = 'g', width = 0.50)
ax.bar('February' ,month_rev[1], color = 'b', width = 0.50)
ax.bar('February' ,month_cost[1], color = 'g', width = 0.50)
ax.bar('March' ,month_rev[2], color = 'b', width = 0.50)
ax.bar('March' ,month_cost[2], color = 'g', width = 0.50)
ax.bar('April' ,month_rev[3], color = 'b', width = 0.50)
ax.bar('April' ,month_cost[3], color = 'g', width = 0.50)
ax.bar('May' ,month_rev[4], color = 'b', width = 0.50)
ax.bar('May' ,month_cost[4], color = 'g', width = 0.50)
ax.bar('June' ,month_rev[5], color = 'b', width = 0.50)
ax.bar('June' ,month_cost[5], color = 'g', width = 0.50)
ax.bar('July' ,month_rev[6], color = 'b', width = 0.50)
ax.bar('July' ,month_cost[6], color = 'g', width = 0.50)
ax.bar('August' ,month_rev[7], color = 'b', width = 0.50)
ax.bar('August' ,month_cost[7], color = 'g', width = 0.50)
ax.bar('September' ,month_rev[8], color = 'b', width = 0.50)
ax.bar('September' ,month_cost[8], color = 'g', width = 0.50)
ax.bar('October' ,month_rev[9], color = 'b', width = 0.50)
ax.bar('October' ,month_cost[9], color = 'g', width = 0.50)
ax.bar('November' ,month_rev[10], color = 'b', width = 0.50)
ax.bar('November' ,month_cost[10], color = 'g', width = 0.50)
ax.bar('December' ,month_rev[11], color = 'b', width = 0.50)
ax.bar('December' ,month_cost[11], color = 'g', width = 0.50)
## Graph for profit per month
## Declaration of the figure to be used
fig = plt.figure()
ax = fig.add_axes([0,0,3,2])
ax.set_ylabel('Profit')
ax.set_xlabel('Months')
ax.set_title('Profit per Month')
## Declaring the values of each axis
months = ['January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December']
Profits = month_profit
## Declaration of the axes and printing/showing them
ax.bar(months, Profits)
plt.show()
|
_____no_output_____
|
Apache-2.0
|
LinAlg_Midterms (1).ipynb
|
adriangalarion/Lab-Activities-1.1
|
Flavor Sales
|
## Graph for Flavor profit
## Declaration of the figure to be used
fig = plt.figure()
ax = fig.add_axes([0,0,3,2])
ax.set_ylabel('Profit')
ax.set_xlabel('Flavors')
ax.set_title('Flavor Profit')
## Declaring the values of each axis
flavors = ['Red Velvet', 'Oreo', 'Super \nGlazed', 'Almond \nHoney', 'Matcha', 'Strawberry \nCream', 'Brown \nSugar Boba',
'Fruits \nand Nuts', 'Dark \nChocolate', 'Chocolate \nOrange', 'Choco Mint', 'Choco \nButter Naught']
Profits = flavor_profits
## Declaration of the axes and printing/showing them
ax.bar(flavors, Profits)
plt.show()
|
_____no_output_____
|
Apache-2.0
|
LinAlg_Midterms (1).ipynb
|
adriangalarion/Lab-Activities-1.1
|
Let's plot one of the Time Series.
|
from io_utils import load_sensor_data, file_names
df = load_sensor_data(file_names[20])
df.head()
df.info()
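# Minimal plotting sketch (an added illustration): 'max_tp' and 'min_tp' are two of the numeric
# columns listed by df.info() above, so we can eyeball one pair of series directly
import matplotlib.pyplot as plt
df[['max_tp', 'min_tp']].plot(figsize=(14, 4))
plt.show()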
|
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 276048 entries, 2012-01-01 00:10:00 to 2017-03-31 00:00:00
Data columns (total 18 columns):
pr 276020 non-null float64
f_pr 276048 non-null int64
max_ws 275923 non-null float64
f_max_ws 276048 non-null int64
ave_wv 275917 non-null float64
f_ave_wv 276048 non-null int64
ave_ws 275917 non-null float64
f_ave_ws 276048 non-null int64
max_tp 275929 non-null float64
f_max_tp 276048 non-null int64
min_tp 275929 non-null float64
f_min_tp 276048 non-null int64
sl 275982 non-null float64
f_sl 276048 non-null int64
sd 5 non-null float64
f_sd 276048 non-null int64
dsd 5 non-null float64
f_dsd 276048 non-null int64
dtypes: float64(9), int64(9)
memory usage: 40.0 MB
|
MIT
|
.ipynb_checkpoints/data_visualization-checkpoint.ipynb
|
qlongyinqw/gcn-japan-weather-forecast
|
Extracting Data using Web Scraping
|
# import
import requests
from bs4 import BeautifulSoup
# HTML String
html_string = """
<!doctype html>
<html lang="en">
<head>
<title>Doing Data Science With Python</title>
</head>
<body>
<h1 style="color:#F15B2A;">Doing Data Science With Python</h1>
<p id="author">Author : Abhishek Kumar</p>
<p id="description">This course will help you to perform various data science activities using python.</p>
<h3 style="color:#404040">Modules</h3>
<table id="module" style="width:100%">
<tr>
<th>Title</th>
<th>Duration (In Minutes)</th>
</tr>
<tr>
<td>Getting Started</td>
<td>20</td>
</tr>
<tr>
<td>Setting up the Environment</td>
<td>40</td>
</tr>
<tr>
<td>Extracting Data</td>
<td>35</td>
</tr>
<tr>
<td>Exploring and Processing Data - Part 1</td>
<td>45</td>
</tr>
<tr>
<td>Exploring and Processing Data - Part 2</td>
<td>45</td>
</tr>
<tr>
<td>Building Predictive Model</td>
<td>30</td>
</tr>
</table>
</body>
</html>
"""
# display HTML string in the juptyer notebook
from IPython.core.display import display, HTML
display(HTML(html_string))
# use beautiful soup
ps = BeautifulSoup(html_string)
# print the parsed document
print(ps)
# use name parameter to select by tag name
body = ps.find(name="body")
print(body)
# use text attribute to get the content of the tag
print(body.find(name="h1").text)
# get first element
print(body.find(name="p"))
# get all elements
print(body.findAll(name="p"))
# loop through each element
for p in body.findAll(name="p"):
print(p.text)
# add attributes in the selection process
print(body.find(name='p', attrs={"id":"author"}))
print(body.find(name='p', attrs={"id":"description"}))
# body
body = ps.find(name="body")
# module table
module_table = body.find(name='table', attrs={"id": "module"})
# iterate through each row in the table (skipping the first row)
for row in module_table.findAll(name='tr')[1:]:
# module title
title = row.findAll(name='td')[0].text
# module duration
duration = int(row.findAll(name='td')[1].text)
print(title, duration)
|
_____no_output_____
|
MIT
|
notebooks/03 Web Scraping.ipynb
|
RaduMihut/titanic
|
!pip -V
!python -V
!pip install --upgrade youtube-dl
!youtube-dl https://drive.google.com/file/d/16-xNP_Ez-3WgFF3vfsP9KJl4ka9hXDlV/view?usp=sharing
!youtube-dl https://drive.google.com/file/d/1rP5tveZgNXJZe_uipJNWUaSqJiow_LGc/view?usp=sharing
!ls
!mv main_DATASET_VAL.zip-1rP5tveZgNXJZe_uipJNWUaSqJiow_LGc.zip val.zip
!mv main_DataSETS_TRAIN.zip-16-xNP_Ez-3WgFF3vfsP9KJl4ka9hXDlV.zip train.zip
!ls
!unzip train.zip
!unzip val.zip
!ls
!rm -rf train.zip
!rm -rf val.zip
!mv main_DATASET_VAL/ val
!mv main_DataSETS_TRAIN/ train
!ls
!mkdir customImages
!rm -rf sample_data
!rm -rf __MACOSX
!mv train/ customImages/
!mv val/ customImages/
!ls
!git clone https://github.com/matterport/Mask_RCNN.git
!ls
!mv customImages/ Mask_RCNN/
%cd Mask_RCNN/
!pip install -r requirements.txt
%run setup.py install
!wget https://raw.githubusercontent.com/Prady96/Pothole-Detection/avi_testing/custom.py?token=AHIVHIOGTWT7LA4IIWMEJVS455SIO
!mv custom.py\?token\=AHIVHIOGTWT7LA4IIWMEJVS455SIO custom.py
!ls
!mkdir logs
import os
import sys
import itertools
import math
import logging
import json
import re
import random
from collections import OrderedDict
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import matplotlib.lines as lines
from matplotlib.patches import Polygon
# Root directory of the project
ROOT_DIR = os.getcwd()
# Import Mask RCNN
sys.path.append(ROOT_DIR) # To find local version of the library
from mrcnn import utils
from mrcnn import visualize
from mrcnn.visualize import display_images
import mrcnn.model as modellib
from mrcnn.model import log
MODEL_DIR = os.path.join(ROOT_DIR, "logs")
import custom
%matplotlib inline
config = custom.CustomConfig()
CUSTOM_DIR = os.path.join(ROOT_DIR, "customImages")
print(CUSTOM_DIR)
# Load dataset
# Get the dataset from the releases page
# https://github.com/matterport/Mask_RCNN/releases
dataset = custom.CustomDataset()
dataset.load_custom(CUSTOM_DIR, "train")
# Must call before using the dataset
dataset.prepare()
print("Image Count: {}".format(len(dataset.image_ids)))
print("Class Count: {}".format(dataset.num_classes))
for i, info in enumerate(dataset.class_info):
print("{:3}. {:50}".format(i, info['name']))
class InferenceConfig(custom.CustomConfig):
# Set batch size to 1 since we'll be running inference on
# one image at a time. Batch size = GPU_COUNT * IMAGES_PER_GPU
GPU_COUNT = 1
IMAGES_PER_GPU = 1
config = InferenceConfig()
config.display()
##################### MODEL FILE HERE ##################
### FOR 320 epoch
!youtube-dl https://drive.google.com/file/d/1aShefxzQmeB1qerh1Xo2Xkm1SPIy_yzy/view?usp=sharing
### FOR 160 epoch
!youtube-dl https://drive.google.com/file/d/1ex7Mo62j7wugrZbmNFZFAuujd_UguRYK/view?usp=sharing
!ls
!mv mask_rcnn_damage_0160.h5-1ex7Mo62j7wugrZbmNFZFAuujd_UguRYK.h5 mask_rcnn_damage_0160.h5
!mv mask_rcnn_damage_0160.h5 logs/
!ls
!mv mask_rcnn_damage_0320.h5-1aShefxzQmeB1qerh1Xo2Xkm1SPIy_yzy.h5 mask_rcnn_damage_0320.h5
!mv mask_rcnn_damage_0320.h5 logs/
!ls logs/
# Create model object in inference mode.
model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config)
# Load weights trained on MS-COCO
model.load_weights("logs/mask_rcnn_damage_0320.h5", by_name=True)
class_names = ['BG', 'damage']
!pip install utils
import os
import sys
import custom
import utils
%cd mrcnn
import model as modellib
%cd ..
import cv2
import numpy as np
## Testing
from PIL import Image, ImageDraw, ImageFont
|
_____no_output_____
|
Apache-2.0
|
Final_file_for_tata_innoverse.ipynb
|
abhinav090/pothole_detection
|
|
MoveOver for Getting Testing Images Similar to S3 Bucket
|
!youtube-dl https://drive.google.com/file/d/1FTvc361O9BBURgsTMb6dJoE6InAoic_O/view?usp=sharing
!ls
!mv images.zip-1FTvc361O9BBURgsTMb6dJoE6InAoic_O.zip images.zip
!mkdir S3_Images
!mv images.zip S3_Images/
%cd S3_Images/
!ls
!unzip images.zip
!ls
!rm -rf images.zip
!rm -rf __MACOSX/
!mv images\ 2 images
!ls
!ls images
!pwd
%cd /content/Mask_RCNN/
!ls /content/Mask_RCNN/S3_Images/images/
%cd /content/Mask_RCNN/S3_Images/images/
!pip install python-resize-image
from PIL import Image
import os
from resizeimage import resizeimage
count = 0
for f in os.listdir(os.getcwd()):
f_name, f_ext = os.path.splitext(f)
# f_random, f_lat_name,f_lat_val,f_long_name,f_long_val = f_name.split('-')
# f_lat_val = f_lat_val.strip() ##removing the white Space
# f_long_val = f_long_val.strip()
# new_name = '{}-{}-{}.jpg'.format(f_lat_val,f_long_val,count)
try:
with Image.open(f) as image:
count +=1
cover = resizeimage.resize_cover(image, [600,600])
cover.save('{}{}'.format(f_name,f_ext),image.format)
#os.remove(f)
print(count)
except(OSError) as e:
print('Bad Image {}{}'.format(f,count))
%cd /content/Mask_RCNN/
!wget https://github.com/Prady96/IITM_PythonTraining/blob/master/ImageWorking_add_textInImage/fonts_Dir/OpenSans-Bold.ttf?raw=true
!mv OpenSans-Bold.ttf?raw=true OpenSans-Bold.ttf
!ls
# Main file for the file iteration
import cv2
import numpy as np
from PIL import Image, ImageDraw, ImageFont
myList = [] ## area list
classList = [] ##class Id List
def random_colors(N):
np.random.seed(1)
colors = [tuple(255 * np.random.rand(3)) for _ in range(N)]
return colors
def apply_mask(image, mask, color, alpha=0.5):
"""apply mask to image"""
for n, c in enumerate(color):
image[:, :, n] = np.where(
mask == 1,
image[:, :, n] * (1 - alpha) + alpha * c,
image[:, :, n]
)
return image
def display_instances(image, boxes, masks, ids, names, scores):
"""
take the image and results and apply the mask, box, and Label
"""
n_instances = boxes.shape[0]
colors = random_colors(n_instances)
if not n_instances:
print('NO INSTANCES TO DISPLAY')
else:
assert boxes.shape[0] == masks.shape[-1] == ids.shape[0]
for i, color in enumerate(colors):
if not np.any(boxes[i]):
continue
y1, x1, y2, x2 = boxes[i]
label = names[ids[i]]
score = scores[i] if scores is not None else None
caption = '{} {:.2f}'.format(label, score) if score else label
mask = masks[:, :, i]
image = apply_mask(image, mask, color)
image = cv2.rectangle(image, (x1, y1), (x2, y2), color, 2)
image = cv2.putText(
image, caption, (x1, y1), cv2.FONT_HERSHEY_COMPLEX, 0.7, color, 2
)
return image
def save_image(image, image_name, boxes, masks, class_ids, scores, class_names, filter_classs_names=None,
scores_thresh=0.1, save_dir=None, mode=0):
"""
image: image array
image_name: image name
boxes: [num_instance, (y1, x1, y2, x2, class_id)] in image coordinates.
masks: [num_instances, height, width]
class_ids: [num_instances]
scores: confidence scores for each box
class_names: list of class names of the dataset
filter_classs_names: (optional) list of class names we want to draw
scores_thresh: (optional) threshold of confidence scores
save_dir: (optional) the path to store image
mode: (optional) select the result which you want
mode = 0 , save image with bbox,class_name,score and mask;
mode = 1 , save image with bbox,class_name and score;
mode = 2 , save image with class_name,score and mask;
mode = 3 , save mask with black background;
"""
mode_list = [0, 1, 2, 3]
assert mode in mode_list, "mode's value should in mode_list %s" % str(mode_list)
if save_dir is None:
save_dir = os.path.join(os.getcwd(), "output")
if not os.path.exists(save_dir):
os.makedirs(save_dir)
useful_mask_indices = []
N = boxes.shape[0]
if not N:
print("\n*** No instances in image %s to draw *** \n" % (image_name))
return
else:
assert boxes.shape[0] == masks.shape[-1] == class_ids.shape[0]
for i in range(N):
# filter
class_id = class_ids[i]
score = scores[i] if scores is not None else None
if score is None or score < scores_thresh:
continue
label = class_names[class_id]
if (filter_classs_names is not None) and (label not in filter_classs_names):
continue
if not np.any(boxes[i]):
# Skip this instance. Has no bbox. Likely lost in image cropping.
continue
useful_mask_indices.append(i)
if len(useful_mask_indices) == 0:
print("\n*** No instances in image %s to draw *** \n" % (image_name))
return
colors = random_colors(len(useful_mask_indices))
if mode != 3:
masked_image = image.astype(np.uint8).copy()
else:
masked_image = np.zeros(image.shape).astype(np.uint8)
if mode != 1:
for index, value in enumerate(useful_mask_indices):
masked_image = apply_mask(masked_image, masks[:, :, value], colors[index])
masked_image = Image.fromarray(masked_image)
if mode == 3:
masked_image.save(os.path.join(save_dir, '%s.jpg' % (image_name)))
return
draw = ImageDraw.Draw(masked_image)
colors = np.array(colors).astype(int) * 255
myList = []
countClassIds = 0
for index, value in enumerate(useful_mask_indices):
class_id = class_ids[value]
print('class_id value is {}'.format(class_id))
if class_id == 1:
countClassIds += 1
print('counter for the class ID {}'.format(countClassIds))
score = scores[value]
label = class_names[class_id]
y1, x1, y2, x2 = boxes[value]
# myList = []
## area of the rectangle
yVal = y2 - y1
xVal = x2 - x1
area = xVal * yVal
print('area is {}'.format(area))
myList.append(area)
if mode != 2:
color = tuple(colors[index])
draw.rectangle((x1, y1, x2, y2), outline=color)
# Label
# font = ImageFont.load('/usr/share/fonts/truetype/ttf-bitstream-vera/Vera.ttf')
font = ImageFont.truetype('OpenSans-Bold.ttf', 15)
draw.text((x1, y1), "%s %f" % (label, score), (255, 255, 255), font)
print(r['class_ids'], r['scores'])
print(myList)
# print('value of r is {}'.format(r))
print('image_name is {}'.format(image_name))
image_name = os.path.basename(image_name)
print('image name is {}'.format(image_name))
f_name, f_ext = os.path.splitext(image_name)
#f_lat_val,f_long_val,f_count = f_name.split('-')
#f_lat_val = f_lat_val.strip() ##removing the white Space
#f_long_val = f_long_val.strip()
# new_name = '{}-{}-{}.jpg'.format(f_lat_val,f_long_val,count)
# print([area for area in myList if ])
# print([i for i in range(countClassIds) ])
print("avi96 {}".format(myList[:countClassIds]))
# myList.pop(countClassIds - 1)
new_name = '{}-{}.jpg'.format(myList, r['scores'])
# masked_image.save(os.path.join(save_dir, '%s.jpg' % (image_name)))
print("New Name file is {}".format(new_name))
print('save_dir is {}'.format(save_dir))
masked_image.save(os.path.join(save_dir, '%s' % (new_name)))
print('file Saved {}'.format(new_name))
# os.rename(image_name, new_name)
if __name__ == '__main__':
"""
test everything
"""
import os
import sys
import custom
import utils
import model as modellib
#import visualize
# We use a K80 GPU with 24GB memory, which can fit 3 images.
batch_size = 3
ROOT_DIR = os.getcwd()
MODEL_DIR = os.path.join(ROOT_DIR, "logs")
VIDEO_DIR = os.path.join(ROOT_DIR, "videos")
VIDEO_SAVE_DIR = os.path.join(VIDEO_DIR, "save")
# COCO_MODEL_PATH = os.path.join(ROOT_DIR, "mask_rcnn_damage_0010.h5")
# if not os.path.exists(COCO_MODEL_PATH):
# utils.download_trained_weights(COCO_MODEL_PATH)
class InferenceConfig(custom.CustomConfig):
GPU_COUNT = 1
IMAGES_PER_GPU = batch_size
config = InferenceConfig()
config.display()
model = modellib.MaskRCNN(
mode="inference", model_dir=MODEL_DIR, config=config
)
model.load_weights("logs/mask_rcnn_damage_0160.h5", by_name=True)
class_names = [
'BG', 'damage'
]
# capture = cv2.VideoCapture(os.path.join(VIDEO_DIR, 'trailer1.mp4'))
try:
if not os.path.exists(VIDEO_SAVE_DIR):
os.makedirs(VIDEO_SAVE_DIR)
except OSError:
print ('Error: Creating directory of data')
# points to be done before final coding
"""
path_for_image_dir
list for the image array
resolve for naming convention for location basis
passing image in model
"""
# path for the data files
data_path = '/content/Mask_RCNN/S3_Images/images/'
onlyfiles = [f for f in os.listdir(data_path) if os.path.isfile(os.path.join(data_path, f))]
# empty list for the training data
frames = []
frame_count = 0
batch_count = 1
# enumerate the iteration with number of files
for j, files in enumerate(onlyfiles):
image_path = data_path + onlyfiles[j]
# print("image Path {}".format(image_path))
# print("Only Files {}".format(onlyfiles[j]))
# print('j is {}'.format(j))
# print('files is {}'.format(files))
try:
images = cv2.imread(image_path).astype(np.uint8)
# print("images {}".format(images))
frames.append(np.asarray(images, dtype=np.uint8))
# frames.append(images)
frame_count += 1
print('frame_count :{0}'.format(frame_count))
if len(frames) == batch_size:
results = model.detect(frames, verbose=0)
print('Predicted')
for i, item in enumerate(zip(frames, results)):
# print('i is {}'.format(i))
# print('item is {}'.format(item))
frame = item[0]
r = item[1]
frame = display_instances(
frame, r['rois'], r['masks'], r['class_ids'], class_names, r['scores']
)
name = '{}'.format(files)
name = os.path.join(VIDEO_SAVE_DIR, name)
# name = '{0}.jpg'.format(frame_count + i - batch_size)
# name = os.path.join(VIDEO_SAVE_DIR, name)
# cv2.imwrite(name, frame)
# print(name)
print('writing to file:{0}'.format(name))
# print(name)
save_image(images, name, r['rois'], r['masks'], r['class_ids'],
r['scores'], class_names, save_dir=VIDEO_SAVE_DIR, mode=0)
frames = []
print('clear')
# clear the frames here
except(AttributeError) as e:
print('Bad Image {}'.format(image_path))
print("Success, check the folder")
"""
## Code for the video section
frames = []
frame_count = 0
# these 2 lines can be removed if you dont have a 1080p camera.
capture.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
capture.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
while True:
ret, frame = capture.read()
# Bail out when the video file ends
if not ret:
break
# Save each frame of the video to a list
frame_count += 1
frames.append(frame)
print('frame_count :{0}'.format(frame_count))
if len(frames) == batch_size:
results = model.detect(frames, verbose=0)
print('Predicted')
for i, item in enumerate(zip(frames, results)):
frame = item[0]
r = item[1]
frame = display_instances(
frame, r['rois'], r['masks'], r['class_ids'], class_names, r['scores']
)
# name = '{0}.jpg'.format(frame_count + i - batch_size)
# name = os.path.join(VIDEO_SAVE_DIR, name)
# cv2.imwrite(name, frame)
# print('writing to file:{0}'.format(name))
## add visualise files
# visualize.display_instances(image, r['rois'], r['masks'], r['class_ids'],
# class_names, r['scores'])
save_image(image, name, r['rois'], r['masks'], r['class_ids'],
r['scores'],class_names, save_dir=VIDEO_SAVE_DIR, mode=0)
# print(r['class_ids'], r['scores'])
# Clear the frames array to start the next batch
frames = []
capture.release()
"""
!ls /content/Mask_RCNN/videos/save/
!zip -r save.zip /content/Mask_RCNN/videos/save/
|
adding: content/Mask_RCNN/videos/save/ (stored 0%)
adding: content/Mask_RCNN/videos/save/[4556]-[0.99999726].jpg (deflated 0%)
adding: content/Mask_RCNN/videos/save/[100835, 4752, 31860]-[0.99894387 0.9983626 0.99591905].jpg (deflated 0%)
adding: content/Mask_RCNN/videos/save/[250, 260]-[0.9999243 0.8455281].jpg (deflated 1%)
adding: content/Mask_RCNN/videos/save/[3024]-[0.999912].jpg (deflated 1%)
adding: content/Mask_RCNN/videos/save/[384, 1449]-[0.997591 0.8543571].jpg (deflated 1%)
adding: content/Mask_RCNN/videos/save/[264]-[0.8035971].jpg (deflated 0%)
adding: content/Mask_RCNN/videos/save/[1560]-[0.99997425].jpg (deflated 1%)
adding: content/Mask_RCNN/videos/save/[250, 720]-[0.98426384 0.9620987 ].jpg (deflated 1%)
adding: content/Mask_RCNN/videos/save/[816, 176]-[0.99922216 0.99323297].jpg (deflated 1%)
adding: content/Mask_RCNN/videos/save/[4935]-[0.9999962].jpg (deflated 1%)
adding: content/Mask_RCNN/videos/save/[319]-[0.9997627].jpg (deflated 0%)
adding: content/Mask_RCNN/videos/save/[429]-[0.9999603].jpg (deflated 1%)
adding: content/Mask_RCNN/videos/save/[286, 330, 416, 418]-[0.98072284 0.92764556 0.87470007 0.873562 ].jpg (deflated 1%)
adding: content/Mask_RCNN/videos/save/[2775]-[0.9999815].jpg (deflated 1%)
adding: content/Mask_RCNN/videos/save/[440]-[0.9712983].jpg (deflated 2%)
adding: content/Mask_RCNN/videos/save/[480, 416]-[0.99973744 0.98802954].jpg (deflated 1%)
adding: content/Mask_RCNN/videos/save/[85, 196, 75]-[0.9978288 0.9905183 0.98221266].jpg (deflated 1%)
adding: content/Mask_RCNN/videos/save/[13719, 10044, 17577, 9504, 3774]-[0.99999344 0.9999933 0.99996066 0.99993455 0.99992 ].jpg (deflated 1%)
adding: content/Mask_RCNN/videos/save/[546]-[0.9994134].jpg (deflated 2%)
adding: content/Mask_RCNN/videos/save/[290]-[0.9997447].jpg (deflated 1%)
|
Apache-2.0
|
Final_file_for_tata_innoverse.ipynb
|
abhinav090/pothole_detection
|
Credit Risk Resampling Techniques
|
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
from pathlib import Path
from collections import Counter
|
_____no_output_____
|
ADSL
|
Starter_Code/credit_risk_resampling.ipynb
|
AntoJKumar/Risky_Business
|
Read the CSV into DataFrame
|
# Load the data
file_path = Path('Resources/lending_data.csv')
df = pd.read_csv(file_path)
df.head()
|
_____no_output_____
|
ADSL
|
Starter_Code/credit_risk_resampling.ipynb
|
AntoJKumar/Risky_Business
|
Split the Data into Training and Testing
|
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
le.fit(df["homeowner"])
df["homeowner"] = le.transform(df["homeowner"])
# Create our features
X = df.copy()
X.drop("loan_status", axis=1, inplace=True)
# Create our target
y = df['loan_status']
X.describe()
# Check the balance of our target values
y.value_counts()
# Create X_train, X_test, y_train, y_test
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
|
_____no_output_____
|
ADSL
|
Starter_Code/credit_risk_resampling.ipynb
|
AntoJKumar/Risky_Business
|
Data Pre-ProcessingScale the training and testing data using the `StandardScaler` from `sklearn`. Remember that when scaling the data, you only scale the features data (`X_train` and `X_test`).
|
# Create the StandardScaler instance
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
# Fit the Standard Scaler with the training data
# When fitting scaling functions, only train on the training dataset
X_scaler = scaler.fit(X_train)
# Scale the training and testing data
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
|
_____no_output_____
|
ADSL
|
Starter_Code/credit_risk_resampling.ipynb
|
AntoJKumar/Risky_Business
|
Simple Logistic Regression
|
from sklearn.linear_model import LogisticRegression
model = LogisticRegression(solver='lbfgs', random_state=1)
model.fit(X_train, y_train)
# Calculated the balanced accuracy score
from sklearn.metrics import balanced_accuracy_score
y_pred = model.predict(X_test)
balanced_accuracy_score(y_test, y_pred)
# Display the confusion matrix
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test, y_pred)
# Print the imbalanced classification report
from imblearn.metrics import classification_report_imbalanced
print(classification_report_imbalanced(y_test, y_pred))
|
pre rec spe f1 geo iba sup
high_risk 0.85 0.91 0.99 0.88 0.95 0.90 619
low_risk 1.00 0.99 0.91 1.00 0.95 0.91 18765
avg / total 0.99 0.99 0.91 0.99 0.95 0.91 19384
|
ADSL
|
Starter_Code/credit_risk_resampling.ipynb
|
AntoJKumar/Risky_Business
|
OversamplingIn this section, you will compare two oversampling algorithms to determine which algorithm results in the best performance. You will oversample the data using the naive random oversampling algorithm and the SMOTE algorithm. For each algorithm, be sure to complete the following steps:1. View the count of the target classes using `Counter` from the collections library. 2. Use the resampled data to train a logistic regression model.3. Calculate the balanced accuracy score from sklearn.metrics.4. Print the confusion matrix from sklearn.metrics.5. Generate a classification report using `classification_report_imbalanced` from imbalanced-learn.Note: Use a random state of 1 for each sampling algorithm to ensure consistency between tests Naive Random Oversampling
|
# Resample the training data with the RandomOversampler
from imblearn.over_sampling import RandomOverSampler
ros = RandomOverSampler(random_state=1)
X_resampled1, y_resampled1 = ros.fit_resample(X_train, y_train)
# View the count of target classes with Counter
Counter(y_resampled1)
# Train the Logistic Regression model using the resampled data
model1 = LogisticRegression(solver='lbfgs', random_state=1)
model1.fit(X_resampled1, y_resampled1)
# Calculated the balanced accuracy score
y_pred1 = model1.predict(X_test)
balanced_accuracy_score(y_test, y_pred1)
# Display the confusion matrix
confusion_matrix(y_test, y_pred1)
# Print the imbalanced classification report
print(classification_report_imbalanced(y_test, y_pred1))
|
pre rec spe f1 geo iba sup
high_risk 0.84 0.99 0.99 0.91 0.99 0.99 619
low_risk 1.00 0.99 0.99 1.00 0.99 0.99 18765
avg / total 0.99 0.99 0.99 0.99 0.99 0.99 19384
|
ADSL
|
Starter_Code/credit_risk_resampling.ipynb
|
AntoJKumar/Risky_Business
|
SMOTE Oversampling
|
# Resample the training data with SMOTE
from imblearn.over_sampling import SMOTE
X_resampled2, y_resampled2 = SMOTE(random_state=1, sampling_strategy=1.0).fit_resample(X_train, y_train)
# View the count of target classes with Counter
Counter(y_resampled2)
# Train the Logistic Regression model using the resampled data
model2 = LogisticRegression(solver='lbfgs', random_state=1)
model2.fit(X_resampled2, y_resampled2)
# Calculated the balanced accuracy score
y_pred2 = model2.predict(X_test)
balanced_accuracy_score(y_test, y_pred2)
# Display the confusion matrix
confusion_matrix(y_test, y_pred2)
# Print the imbalanced classification report
print(classification_report_imbalanced(y_test, y_pred2))
|
pre rec spe f1 geo iba sup
high_risk 0.84 0.99 0.99 0.91 0.99 0.99 619
low_risk 1.00 0.99 0.99 1.00 0.99 0.99 18765
avg / total 0.99 0.99 0.99 0.99 0.99 0.99 19384
|
ADSL
|
Starter_Code/credit_risk_resampling.ipynb
|
AntoJKumar/Risky_Business
|
UndersamplingIn this section, you will test an undersampling algorithm to determine whether it results in better performance than the oversampling algorithms above. You will undersample the data using the Cluster Centroids algorithm and complete the following steps:1. View the count of the target classes using `Counter` from the collections library. 2. Use the resampled data to train a logistic regression model.3. Calculate the balanced accuracy score from sklearn.metrics.4. Display the confusion matrix from sklearn.metrics.5. Generate a classification report using `classification_report_imbalanced` from imbalanced-learn.Note: Use a random state of 1 for each sampling algorithm to ensure consistency between tests
|
# Resample the data using the ClusterCentroids resampler
from imblearn.under_sampling import ClusterCentroids
cc = ClusterCentroids(random_state=1)
X_resampled3, y_resampled3 = cc.fit_resample(X_train, y_train)
# View the count of target classes with Counter
Counter(y_resampled3)
# Train the Logistic Regression model using the resampled data
model3 = LogisticRegression(solver='lbfgs', random_state=1)
model3.fit(X_resampled3, y_resampled3)
# Calculate the balanced accuracy score
y_pred3 = model3.predict(X_test)
balanced_accuracy_score(y_test, y_pred3)
# Display the confusion matrix
confusion_matrix(y_test, y_pred3)
# Print the imbalanced classification report
print(classification_report_imbalanced(y_test, y_pred3, digits = 4))
|
pre rec spe f1 geo iba sup
high_risk 0.8440 0.9790 0.9940 0.9065 0.9865 0.9717 619
low_risk 0.9993 0.9940 0.9790 0.9967 0.9865 0.9746 18765
avg / total 0.9943 0.9936 0.9795 0.9938 0.9865 0.9745 19384
|
ADSL
|
Starter_Code/credit_risk_resampling.ipynb
|
AntoJKumar/Risky_Business
|
Combination (Over and Under) SamplingIn this section, you will test a combined over- and under-sampling algorithm to determine whether it results in better performance than the other sampling algorithms above. You will resample the data using the SMOTEENN algorithm and complete the following steps:1. View the count of the target classes using `Counter` from the collections library. 2. Use the resampled data to train a logistic regression model.3. Calculate the balanced accuracy score from sklearn.metrics.4. Display the confusion matrix from sklearn.metrics.5. Generate a classification report using `classification_report_imbalanced` from imbalanced-learn.Note: Use a random state of 1 for each sampling algorithm to ensure consistency between tests
|
# Resample the training data with SMOTEENN
from imblearn.combine import SMOTEENN
sm = SMOTEENN(random_state=1)
X_resampled4, y_resampled4 = sm.fit_resample(X_train, y_train)
# View the count of target classes with Counter
Counter(y_resampled4)
# Train the Logistic Regression model using the resampled data
model4 = LogisticRegression(solver='lbfgs', random_state=1)
model4.fit(X_resampled4, y_resampled4)
# Calculate the balanced accuracy score
y_pred4 = model4.predict(X_test)
balanced_accuracy_score(y_test, y_pred4)
# Display the confusion matrix
confusion_matrix(y_test, y_pred4)
# Print the imbalanced classification report
print(classification_report_imbalanced(y_test, y_pred4))
|
pre rec spe f1 geo iba sup
high_risk 0.83 0.99 0.99 0.91 0.99 0.99 619
low_risk 1.00 0.99 0.99 1.00 0.99 0.99 18765
avg / total 0.99 0.99 0.99 0.99 0.99 0.99 19384
|
ADSL
|
Starter_Code/credit_risk_resampling.ipynb
|
AntoJKumar/Risky_Business
|
Cache performance graphs Import libs
|
%matplotlib inline
## Imported libraries
# Library used to open CSV files
import csv
# Library for reading dates
from datetime import datetime, timedelta
# For adjusting dates on the plot axis
import matplotlib.dates as mdate
# Math library
import numpy as np
# Library for plotting graphs
import matplotlib.pyplot as plt
# Library to change the size of the displayed plot
import matplotlib.cm as cm
import operator as op
import os
import math
|
_____no_output_____
|
MIT
|
trabalho2/Caches.ipynb
|
laurocruz/MC733
|
Generate miss % graphs
|
for file in os.listdir('cache_csv/percentage'):
filepath = 'cache_csv/percentage/'+file
dados = list(csv.reader(open(filepath,'r')))
alg = file.split('.')[0]
mr1 = list()
mr2 = list()
mw1 = list()
mw2 = list()
mrw1 = list()
mrw2 = list()
mi1 = list()
mi2 = list()
for dado in dados:
mr1.append(float(dado[0]))
mw1.append(float(dado[1]))
mrw1.append(float(dado[2]))
mr2.append(float(dado[3]))
mw2.append(float(dado[4]))
mrw2.append(float(dado[5]))
mi1.append(float(dado[6]))
mi2.append(float(dado[7]))
# Cache configurations
x = np.arange(1, 9)
##### READ MISSES #####
markerline1, stemlines1, baseline1 = plt.stem(x,mr1)
markerline2, stemlines2, baseline2 = plt.stem(x,mr2)
#plt.xticks(x, (1,2,3,4,5,6,7,8))
# Set line style properties
plt.setp(stemlines1, 'linestyle', 'none')
plt.setp(markerline1, 'linestyle', '-', 'color', 'r')
plt.setp(baseline1, 'linestyle', 'none')
plt.setp(stemlines2, 'linestyle', 'none')
plt.setp(markerline2, 'linestyle', '-', 'color', 'b')
plt.setp(baseline2, 'linestyle', 'none')
# Borders
#plt.ylim([0,100])
#plt.xlim([1,9])
# Labels
plt.title('L1 x L2 read miss rate (' + alg + ')')
plt.xlabel('Cache Configurations')
plt.ylabel('Read miss rate (%)')
plt.savefig('img/Cache/percentage/cache_' + alg + '_r.png', dpi=300)
plt.show()
plt.close()
##### WRITE MISSES #####
markerline1, stemlines1, baseline1 = plt.stem(x,mw1)
markerline2, stemlines2, baseline2 = plt.stem(x,mw2)
#plt.xticks(x, (1,2,3,4,5,6,7,8))
# Set line style properties
plt.setp(stemlines1, 'linestyle', 'none')
plt.setp(markerline1, 'linestyle', '-', 'color', 'r')
plt.setp(baseline1, 'linestyle', 'none')
plt.setp(stemlines2, 'linestyle', 'none')
plt.setp(markerline2, 'linestyle', '-', 'color', 'b')
plt.setp(baseline2, 'linestyle', 'none')
# Labels
plt.title('L1 x L2 write miss rate (' + alg + ')')
plt.xlabel('Cache Configurations')
plt.ylabel('Write miss rate (%)')
plt.savefig('img/Cache/percentage/cache_' + alg + '_w.png', dpi=300)
plt.show()
plt.close()
##### TOTAL MISSES (DATA) #####
markerline1, stemlines1, baseline1 = plt.stem(x,mrw1)
markerline2, stemlines2, baseline2 = plt.stem(x,mrw2)
#plt.xticks(x, (1,2,3,4,5,6,7,8))
# Set line style properties
plt.setp(stemlines1, 'linestyle', 'none')
plt.setp(markerline1, 'linestyle', '-', 'color', 'r')
plt.setp(baseline1, 'linestyle', 'none')
plt.setp(stemlines2, 'linestyle', 'none')
plt.setp(markerline2, 'linestyle', '-', 'color', 'b')
plt.setp(baseline2, 'linestyle', 'none')
# Borders
#plt.ylim([0,100])
#plt.xlim([1,9])
# Labels
plt.title('L1 x L2 total data miss rate (' + alg + ')')
plt.xlabel('Cache Configurations')
plt.ylabel('Total data miss rate (%)')
plt.savefig('img/Cache/percentage/cache_' + alg + '_rw.png', dpi=300)
plt.show()
plt.close()
##### INSTRUCTION MISSES #####
markerline1, stemlines1, baseline1 = plt.stem(x,mi1)
markerline2, stemlines2, baseline2 = plt.stem(x,mi2)
#plt.xticks(x, (1,2,3,4,5,6,7,8))
# Set line style properties
plt.setp(stemlines1, 'linestyle', 'none')
plt.setp(markerline1, 'linestyle', '-', 'color', 'r')
plt.setp(baseline1, 'linestyle', 'none')
plt.setp(stemlines2, 'linestyle', 'none')
plt.setp(markerline2, 'linestyle', '-', 'color', 'b')
plt.setp(baseline2, 'linestyle', 'none')
# Labels
plt.title('L1 x L2 instruction miss rate (' + alg + ')')
plt.xlabel('Cache Configurations')
plt.ylabel('Instruction miss rate (%)')
plt.savefig('img/Cache/percentage/cache_' + alg + '_i.png', dpi=300)
plt.show()
plt.close()
|
_____no_output_____
|
MIT
|
trabalho2/Caches.ipynb
|
laurocruz/MC733
|
Generate graphs of miss numbers
|
for file in os.listdir('cache_csv/num'):
filepath = 'cache_csv/num/'+file
dados = list(csv.reader(open(filepath,'r')))
alg = file.split('.')[0]
mr1 = list()
mr2 = list()
mw1 = list()
mw2 = list()
mrw1 = list()
mrw2 = list()
mi1 = list()
mi2 = list()
for dado in dados:
mr1.append(float(dado[0]))
mw1.append(float(dado[1]))
mrw1.append(int(dado[2]))
mr2.append(float(dado[3]))
mw2.append(float(dado[4]))
mrw2.append(float(dado[5]))
mi1.append(float(dado[6]))
mi2.append(float(dado[7]))
# Cache configurations
x = np.arange(1, 9)
##### READ MISSES #####
markerline1, stemlines1, baseline1 = plt.stem(x,mr1)
markerline2, stemlines2, baseline2 = plt.stem(x,mr2)
#plt.xticks(x, (1,2,3,4,5,6,7,8))
# Set line style properties
plt.setp(stemlines1, 'linestyle', 'none')
plt.setp(markerline1, 'linestyle', '-', 'color', 'r')
plt.setp(baseline1, 'linestyle', 'none')
plt.setp(stemlines2, 'linestyle', 'none')
plt.setp(markerline2, 'linestyle', '-', 'color', 'b')
plt.setp(baseline2, 'linestyle', 'none')
# Borders
#plt.ylim([0,100])
#plt.xlim([1,9])
# Labels
plt.title('L1 x L2 read misses (' + alg + ')')
plt.xlabel('Cache Configurations')
plt.ylabel('Read misses')
plt.savefig('img/Cache/num/cache_' + alg + '_r.png', dpi=300)
plt.show()
plt.close()
##### WRITE MISSES #####
markerline1, stemlines1, baseline1 = plt.stem(x,mw1)
markerline2, stemlines2, baseline2 = plt.stem(x,mw2)
#plt.xticks(x, (1,2,3,4,5,6,7,8))
# Set line style properties
plt.setp(stemlines1, 'linestyle', 'none')
plt.setp(markerline1, 'linestyle', '-', 'color', 'r')
plt.setp(baseline1, 'linestyle', 'none')
plt.setp(stemlines2, 'linestyle', 'none')
plt.setp(markerline2, 'linestyle', '-', 'color', 'b')
plt.setp(baseline2, 'linestyle', 'none')
# Labels
plt.title('L1 x L2 write misses (' + alg + ')')
plt.xlabel('Cache Configurations')
plt.ylabel('Write misses')
plt.savefig('img/Cache/num/cache_' + alg + '_w.png', dpi=300)
plt.show()
plt.close()
##### TOTAL MISSES (DATA) #####
markerline1, stemlines1, baseline1 = plt.stem(x,mrw1)
markerline2, stemlines2, baseline2 = plt.stem(x,mrw2)
#plt.xticks(x, (1,2,3,4,5,6,7,8))
# Set line style properties
plt.setp(stemlines1, 'linestyle', 'none')
plt.setp(markerline1, 'linestyle', '-', 'color', 'r')
plt.setp(baseline1, 'linestyle', 'none')
plt.setp(stemlines2, 'linestyle', 'none')
plt.setp(markerline2, 'linestyle', '-', 'color', 'b')
plt.setp(baseline2, 'linestyle', 'none')
# Borders
#plt.ylim([0,100])
#plt.xlim([1,9])
# Labels
plt.title('L1 x L2 total data misses (' + alg + ')')
plt.xlabel('Cache Configurations')
plt.ylabel('Total data misses')
plt.savefig('img/Cache/num/cache_' + alg + '_rw.png', dpi=300)
plt.show()
plt.close()
##### INSTRUCTION MISSES #####
markerline1, stemlines1, baseline1 = plt.stem(x,mi1)
markerline2, stemlines2, baseline2 = plt.stem(x,mi2)
#plt.xticks(x, (1,2,3,4,5,6,7,8))
# Set line style properties
plt.setp(stemlines1, 'linestyle', 'none')
plt.setp(markerline1, 'linestyle', '-', 'color', 'r')
plt.setp(baseline1, 'linestyle', 'none')
plt.setp(stemlines2, 'linestyle', 'none')
plt.setp(markerline2, 'linestyle', '-', 'color', 'b')
plt.setp(baseline2, 'linestyle', 'none')
# Labels
plt.title('L1 x L2 instruction misses (' + alg + ')')
plt.xlabel('Cache Configurations')
plt.ylabel('Instruction misses')
plt.savefig('img/Cache/num/cache_' + alg + '_i.png', dpi=300)
plt.show()
plt.close()
|
_____no_output_____
|
MIT
|
trabalho2/Caches.ipynb
|
laurocruz/MC733
|
Loading libraries and looking at given data
|
import numpy as np
import pandas as pd
import seaborn as sns
import re
appendix_3=pd.read_excel("Appendix_3_august.xlsx")
appendix_3
print(appendix_3["Language"].value_counts(),)
print(appendix_3["Country"].value_counts())
pd.set_option("display.max_rows", None, "display.max_columns", None)
print(appendix_3['Country'].to_string(index=False))
|
United States
France
Korea
Italy
Germany
Australia
China
United Kingdom
Spain
Canada
Netherlands
Ireland
Poland
Denmark
Switzerland
United Arab Emirates
Brazil
Sweden
Norway
Singapore
Taiwan
Belgium
Thailand
Austria
India
Japan
Lebanon
Israel
Hong Kong SAR
Vietnam
Slovakia
New Zealand
Greece
Romania
Turkey
Mexico
Czech Republic
South Africa
Finland
Lithuania
Russia
Hungary
Ukraine
Pakistan
Croatia
Iceland
Morocco
Colombia
Egypt
Kuwait
Bulgaria
Iran
Philippines
Luxembourg
Serbia
Slovenia
Tunisia
Estonia
Argentina
Saudi Arabia
Portugal
Uruguay
Costa Rica
Chile
Indonesia
Jordan
Cyprus
Myanmar
Paraguay
Armenia
Bolivia
Moldova
Azerbaijan
Algeria
Monaco
Georgia
Malaysia
Venezuela
Iraq
Nepal
Puerto Rico
Liechtenstein
Latvia
|
MIT
|
teaching_material/session_6/gruppe_8/3shape_final.ipynb
|
tlh957/DO2021
|
Removing useless data
|
appendix_3=appendix_3[appendix_3.Language!="Københavnsk"]
appendix_3=appendix_3.drop(["Meaningless_ID"], axis=1)
appendix_3
appendix_3=appendix_3[appendix_3.Licenses!=0]
appendix_3
|
_____no_output_____
|
MIT
|
teaching_material/session_6/gruppe_8/3shape_final.ipynb
|
tlh957/DO2021
|
Mapping customer languages to the support languages offered
|
def language(var):
    """Return the support language offered by 3Shape's current support teams.
    Languages that are not supported fall back to English."""
if var.lower() in ['english','american']: #If english or "american"
return 'English' #Return English
if var.lower() in ['spanish']:
return 'Spanish'
if var.lower() in ['french']:
return 'French'
if var.lower() in ['german']:
return 'German'
if var.lower() in ['russian']:
return 'Russian'
if var.lower() in ['portuguese']:
return 'Portuguese'
if var.lower() in ['italian']:
return 'Italian'
if re.search('chin.+', var.lower()): # If lettercombination 'chin' appears:
return 'Chinese' # Return 'Chinese'
if var.lower() in ['japanese']:
return 'Japanese'
if var.lower() in ['korean']:
return 'Korean'
else:
return 'English' #If not spoken, return English
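# Design note (illustrative sketch, not part of the original notebook): the chain of
# if-statements above can also be written as a dictionary lookup with a regex special
# case for Chinese. The names below are hypothetical and are not used further.
SUPPORT_LANGUAGES = {'english': 'English', 'american': 'English', 'spanish': 'Spanish',
                     'french': 'French', 'german': 'German', 'russian': 'Russian',
                     'portuguese': 'Portuguese', 'italian': 'Italian',
                     'japanese': 'Japanese', 'korean': 'Korean'}
def language_alt(var):
    """Dictionary-based variant of language(); unsupported languages fall back to English."""
    low = var.lower()
    if re.search('chin.+', low):
        return 'Chinese'
    return SUPPORT_LANGUAGES.get(low, 'English')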
appendix_3['Support_language'] = appendix_3['Language'].apply(language)
appendix_3['Support_language'].value_counts()
appendix_3["Licenses_per_language"]=appendix_3.groupby(["Support_language"])["Licenses"].transform("sum")
appendix_3['Country'] = appendix_3['Country'].str.strip()  # Strip leading/trailing whitespace from country names
appendix_3.iloc[1,0]
|
_____no_output_____
|
MIT
|
teaching_material/session_6/gruppe_8/3shape_final.ipynb
|
tlh957/DO2021
|
Adding a column that groups countries into three regions/time zones: the Americas, Europe (incl. the Middle East and Africa), and Asia
|
def region(var):
    """Return the support region (Americas, Europe, or Asia) for a country; unmatched countries fall through to 'No'."""
if var in ['United States','Canada','Brazil','Mexico','Colombia','Argentina','Uruguay',
'Costa Rica','Chile','Paraguay','Bolivia','Venezuela','Puerto Rico']:
return 'Americas'
if var in ['France','Italy','Germany','United Kingdom','Spain','Netherlands','Ireland','Poland',
'Denmark','Switzerland','United Arab Emirates','Sweden','Norway','Belgium','Austria',
'Lebanon','Israel','Slovakia','Greece','Romania','Turkey','Czech Republic','South Africa',
'Finland','Lithuania','Russia','Hungary','Ukraine','Pakistan','Croatia','Iceland','Morocco',
'Egypt','Kuwait','Bulgaria','Iran','Luxembourg','Serbia','Slovenia','Tunisia','Estonia',
'Saudi Arabia','Portugal','Jordan','Cyprus','Armenia','Moldova','Azerbaijan','Algeria',
'Monaco','Georgia','Iraq','Liechtenstein','Latvia']:
return 'Europe'
if var in ['Korea','Australia','China','Singapore','Taiwan','Thailand','India','Japan',
'Hong Kong SAR','Vietnam','New Zealand','Philippines','Indonesia','Myanmar',
'Malaysia','Nepal']:
return 'Asia'
else:
return 'No'
appendix_3['Region'] = appendix_3['Country'].apply(region)
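# Sanity-check sketch (not part of the original notebook): any country missed by the
# mapping above falls through to 'No'; printing those makes gaps easy to spot.
unmapped = appendix_3.loc[appendix_3['Region'] == 'No', 'Country'].unique()
print('Countries without a region mapping:', unmapped)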
appendix_3['Region'].head(6)
appendix_3["Licenses_per_region"]=appendix_3.groupby(["Region"])["Licenses"].transform("sum")
appendix_3[["Licenses_per_region","Region"]].head(6)
|
_____no_output_____
|
MIT
|
teaching_material/session_6/gruppe_8/3shape_final.ipynb
|
tlh957/DO2021
|
New DataFrame with our three regions/support centers
|
New_regions=appendix_3.groupby(["Region"])["Licenses"].sum().sort_values(ascending=False).to_frame().reset_index()
New_regions
def employees_needed(var):
    """Return the number of required support employees for a given number of licenses."""
if var <300:
return 3
else:
return np.ceil((var-300)/200+3)
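# Worked example (illustrative, not part of the original notebook): the first 300
# licenses are covered by 3 employees, and each additional (started) block of 200
# licenses adds one more employee.
print(employees_needed(250))  # 3
print(employees_needed(750))  # np.ceil((750 - 300) / 200 + 3) = 6.0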
New_regions["Employ_needed"]=New_regions["Licenses"].apply(employees_needed)
New_regions.head(3)
New_regions["Revenue"]=New_regions["Licenses"]*2000
New_regions.head(3)
|
_____no_output_____
|
MIT
|
teaching_material/session_6/gruppe_8/3shape_final.ipynb
|
tlh957/DO2021
|
Looking at appendix 2, removing unneeded rows, and converting the numeric columns to int.
|
appendix_2=pd.read_excel("Appendix_2_august.xlsx")
appendix_2
appendix_2=appendix_2.drop([5])
appendix_2
appendix_2['Total cost']=appendix_2['Total cost'].astype(int)
appendix_2['Average FTE']=appendix_2['Average FTE'].astype(int)
print(appendix_2.dtypes)
|
Support Center object
Total cost int64
Average FTE int64
dtype: object
|
MIT
|
teaching_material/session_6/gruppe_8/3shape_final.ipynb
|
tlh957/DO2021
|