Let's move on to building our DataFrame. You'll notice that I use the abbreviation `rv` often. It stands for `robust value`, which is what we'll call this sophisticated strategy moving forward.
rv_columns = [
    'Ticker', 'Price', 'Number of Shares to Buy',
    'Price-to-Earnings Ratio', 'PE Percentile',
    'Price-to-Book Ratio', 'PB Percentile',
    'Price-to-Sales Ratio', 'PS Percentile',
    'EV/EBITDA', 'EV/EBITDA Percentile',
    'EV/GP', 'EV/GP Percentile',
    'RV Score'
]

rv_df = pd.DataFrame(columns=rv_columns)

for batch in symbol_strings:
    batch_api_call_url = f"https://sandbox.iexapis.com/stable/stock/market/batch?symbols={batch}&types=quote,advanced-stats&token={IEX_CLOUD_API_TOKEN}"
    data = requests.get(batch_api_call_url).json()
    for symbol in batch.split(','):
        enterprise_value = data[symbol]['advanced-stats']['enterpriseValue']
        ebitda = data[symbol]['advanced-stats']['EBITDA']
        gross_profit = data[symbol]['advanced-stats']['grossProfit']

        try:
            ev_to_ebitda = enterprise_value/ebitda
        except TypeError:
            ev_to_ebitda = np.NaN

        try:
            ev_to_gross_profit = enterprise_value/gross_profit
        except TypeError:
            ev_to_gross_profit = np.NaN

        #if(not enterprise_value or not ebitda or not gross_profit):
            #continue

        rv_df = rv_df.append(
            pd.Series(
                [
                    symbol,
                    data[symbol]['quote']['latestPrice'],
                    'N/A',
                    data[symbol]['quote']['peRatio'],
                    'N/A',
                    data[symbol]['advanced-stats']['priceToBook'],
                    'N/A',
                    data[symbol]['advanced-stats']['priceToSales'],
                    'N/A',
                    ev_to_ebitda,
                    'N/A',
                    ev_to_gross_profit,
                    'N/A',
                    'N/A'
                ],
                index=rv_columns
            ),
            ignore_index=True
        )

rv_df
_____no_output_____
MIT
003_quantitative_value_strategy.ipynb
gyalpodongo/algorithmic_trading_python
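A side note on the `rv_df.append(...)` call above: `DataFrame.append` has since been removed from pandas (2.0+). If you run this notebook on a recent pandas version, one hedged alternative is to collect the rows first and build the DataFrame once, for example:

```python
import pandas as pd

# Sketch, assuming `rv_columns` is defined as in the cell above; the tickers and prices here are placeholders.
rows = []
for symbol in ['AAPL', 'MSFT']:
    rows.append({'Ticker': symbol, 'Price': 0.0})   # fill the remaining rv_columns the same way
rv_df = pd.DataFrame(rows, columns=rv_columns)      # any column missing from the dicts becomes NaN
```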
Dealing With Missing Data in Our DataFrame

Our DataFrame contains some missing data because not all of the metrics we require are available through the API we're using. You can use pandas' `isnull` method to identify missing data:
rv_df[rv_df.isnull().any(axis=1)]
_____no_output_____
MIT
003_quantitative_value_strategy.ipynb
gyalpodongo/algorithmic_trading_python
Dealing with missing data is an important topic in data science. There are two main approaches:

* Drop missing data from the data set (pandas' `dropna` method is useful here)
* Replace missing data with a new value (pandas' `fillna` method is useful here)

In this tutorial, we will replace missing data with the average non-`NaN` data point from that column. Here is the code to do this:
for column in ['Price', 'Price-to-Earnings Ratio', 'Price-to-Book Ratio',
               'Price-to-Sales Ratio', 'EV/EBITDA', 'EV/GP']:
    rv_df[column].fillna(rv_df[column].mean(), inplace=True)

rv_df
_____no_output_____
MIT
003_quantitative_value_strategy.ipynb
gyalpodongo/algorithmic_trading_python
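For reference, the drop-based approach mentioned above (which we are not using in this tutorial) would look roughly like this:

```python
# Alternative (not applied here): drop any row that still contains missing data.
rv_df = rv_df.dropna(axis=0, how='any').reset_index(drop=True)
```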
Now, if we run the statement from earlier to print rows that contain missing data, nothing should be returned:
rv_df[rv_df.isnull().any(axis=1)]
_____no_output_____
MIT
003_quantitative_value_strategy.ipynb
gyalpodongo/algorithmic_trading_python
Calculating Value Percentiles

We now need to calculate value score percentiles for every stock in the universe. More specifically, we need to calculate percentile scores for the following metrics for every stock:

* Price-to-earnings ratio
* Price-to-book ratio
* Price-to-sales ratio
* EV/EBITDA
* EV/GP

Here's how we'll do this:
metrics = {
    'Price-to-Earnings Ratio': 'PE Percentile',
    'Price-to-Book Ratio': 'PB Percentile',
    'Price-to-Sales Ratio': 'PS Percentile',
    'EV/EBITDA': 'EV/EBITDA Percentile',
    'EV/GP': 'EV/GP Percentile',
}

for key, value in metrics.items():
    for row in rv_df.index:
        rv_df.loc[row, value] = stats.percentileofscore(rv_df[key], rv_df.loc[row, key])/100

rv_df
_____no_output_____
MIT
003_quantitative_value_strategy.ipynb
gyalpodongo/algorithmic_trading_python
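As a quick standalone illustration of what `stats.percentileofscore` returns (a number between 0 and 100, which is why we divide by 100 above):

```python
from scipy import stats

sample = [1, 2, 3, 4, 5]
print(stats.percentileofscore(sample, 4))        # 80.0 -> 80% of values are <= 4
print(stats.percentileofscore(sample, 4) / 100)  # 0.8, the form stored in the percentile columns
```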
Calculating the RV Score

We'll now calculate our RV Score (which stands for Robust Value), which is the value score that we'll use to filter for stocks in this investing strategy.

The RV Score will be the arithmetic mean of the five percentile scores that we calculated in the last section.

To calculate the arithmetic mean, we will use the `mean` function from Python's built-in `statistics` module.
from statistics import mean

for row in rv_df.index:
    value_percentiles = []
    for value in metrics.values():
        value_percentiles.append(rv_df.loc[row, value])
    rv_df.loc[row, 'RV Score'] = mean(value_percentiles)

rv_df
_____no_output_____
MIT
003_quantitative_value_strategy.ipynb
gyalpodongo/algorithmic_trading_python
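The same RV Score can also be computed without the explicit loop; a minimal vectorized sketch (assuming the `metrics` dictionary from above) is:

```python
# Row-wise mean of the five percentile columns; astype(float) guards against object-dtype columns.
rv_df['RV Score'] = rv_df[list(metrics.values())].astype(float).mean(axis=1)
```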
Selecting the 50 Best Value Stocks

As before, we can identify the 50 best value stocks in our universe by sorting the DataFrame on the RV Score column and dropping all but the top 50 entries.

Calculating the Number of Shares to Buy

We'll use the `portfolio_input` function that we created earlier to accept our portfolio size. Then we will use similar logic in a for loop to calculate the number of shares to buy for each stock in our investment universe.
rv_df.sort_values('RV Score', ascending=True, inplace=True)
rv_df = rv_df[:50]
rv_df.reset_index(drop=True, inplace=True)
rv_df

portfolio_input()

position_size = portfolio_size/len(rv_df.index)
for row in rv_df.index:
    rv_df.loc[row, 'Number of Shares to Buy'] = math.floor(position_size/rv_df.loc[row, 'Price'])
rv_df
_____no_output_____
MIT
003_quantitative_value_strategy.ipynb
gyalpodongo/algorithmic_trading_python
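To make the position-sizing arithmetic concrete, here is a small worked example with assumed numbers (a $1,000,000 portfolio and a $150 stock):

```python
import math

position_size = 1_000_000 / 50            # $20,000 allocated to each of the 50 stocks
shares = math.floor(position_size / 150)  # a $150 stock -> floor(133.33) = 133 shares
print(position_size, shares)              # 20000.0 133
```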
Formatting Our Excel Output

We will be using the XlsxWriter library for Python to create nicely-formatted Excel files. XlsxWriter is an excellent package and offers tons of customization. However, the tradeoff for this is that the library can seem very complicated to new users. Accordingly, this section will be fairly long because I want to do a good job of explaining how XlsxWriter works.
writer = pd.ExcelWriter('value_strategy.xlsx', engine='xlsxwriter')
rv_df.to_excel(writer, sheet_name='Value Strategy', index=False)
_____no_output_____
MIT
003_quantitative_value_strategy.ipynb
gyalpodongo/algorithmic_trading_python
Creating the Formats We'll Need For Our .xlsx File

You'll recall from our first project that formats include colors, fonts, and also symbols like % and $. We'll need five main formats for our Excel document:

* String format for tickers
* \$XX.XX format for stock prices
* \$XX,XXX format for market capitalization
* Integer format for the number of shares to purchase
* Float formats with 1 decimal for each valuation metric

Since we already built some formats in past sections of this course, I've included them below for you. Run this code cell before proceeding.
background_color = '#0a0a23' font_color = '#ffffff' string_template = writer.book.add_format( { 'font_color': font_color, 'bg_color': background_color, 'border': 1 } ) dollar_template = writer.book.add_format( { 'num_format':'$0.00', 'font_color': font_color, 'bg_color': background_color, 'border': 1 } ) integer_template = writer.book.add_format( { 'num_format':'0', 'font_color': font_color, 'bg_color': background_color, 'border': 1 } ) float_template = writer.book.add_format( { 'num_format':'0', 'font_color': font_color, 'bg_color': background_color, 'border': 1 } ) percent_template = writer.book.add_format( { 'num_format':'0.0%', 'font_color': font_color, 'bg_color': background_color, 'border': 1 } ) column_formats = { 'A': ['Ticker', string_template], 'B': ['Price', dollar_template], 'C': ['Number of Shares to Buy', integer_template], 'D': ['Price-to-Earnings Ratio', float_template], 'E': ['PE Percentile', percent_template], 'F': ['Price-to-Book Ratio', float_template], 'G': ['PB Percentile',percent_template], 'H': ['Price-to-Sales Ratio', float_template], 'I': ['PS Percentile', percent_template], 'J': ['EV/EBITDA', float_template], 'K': ['EV/EBITDA Percentile', percent_template], 'L': ['EV/GP', float_template], 'M': ['EV/GP Percentile', percent_template], 'N': ['RV Score', percent_template] } for column in column_formats.keys(): writer.sheets['Value Strategy'].set_column(f'{column}:{column}', 25, column_formats[column][1]) writer.sheets['Value Strategy'].write(f'{column}1', column_formats[column][0], column_formats[column][1])
_____no_output_____
MIT
003_quantitative_value_strategy.ipynb
gyalpodongo/algorithmic_trading_python
Saving Our Excel Output

As before, saving our Excel output is very easy:
writer.save()
_____no_output_____
MIT
003_quantitative_value_strategy.ipynb
gyalpodongo/algorithmic_trading_python
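One caveat, depending on your pandas version: `ExcelWriter.save()` was deprecated and later removed in favor of `close()`, so on recent versions the equivalent call is:

```python
writer.close()  # writes and closes the workbook on pandas versions where .save() no longer exists
```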
Run the Ansible on Jupyter Notebook x Alpine

- Author: Chu-Siang Lai / chusiang (at) drx.tw
- GitHub: [chusiang/ansible-jupyter.dockerfile](https://github.com/chusiang/ansible-jupyter.dockerfile)
- Docker Hub: [chusiang/ansible-jupyter](https://hub.docker.com/r/chusiang/ansible-jupyter/)

Table of contents:

1. [Operating-System](Operating-System)
1. [Ad-Hoc-commands](Ad-Hoc-commands)
1. [Playbooks](Playbooks)

Modified.
!date
Mon Jun 18 07:13:53 UTC 2018
MIT
ipynb/ansible_on_jupyter.ipynb
KyleChou/ansible-jupyter.dockerfile
Operating System

Check the runtime user.
!whoami
root
MIT
ipynb/ansible_on_jupyter.ipynb
KyleChou/ansible-jupyter.dockerfile
Show Linux distribution.
!cat /etc/issue
Welcome to Alpine Linux 3.7 Kernel \r on an \m (\l)
MIT
ipynb/ansible_on_jupyter.ipynb
KyleChou/ansible-jupyter.dockerfile
Workspace.
!pwd
/home
MIT
ipynb/ansible_on_jupyter.ipynb
KyleChou/ansible-jupyter.dockerfile
Show Python version.
!python --version
Python 2.7.14
MIT
ipynb/ansible_on_jupyter.ipynb
KyleChou/ansible-jupyter.dockerfile
Show pip version.
!pip --version
pip 10.0.1 from /usr/lib/python2.7/site-packages/pip (python 2.7)
MIT
ipynb/ansible_on_jupyter.ipynb
KyleChou/ansible-jupyter.dockerfile
Show Ansible version.
!ansible --version
ansible 2.5.5 config file = /home/ansible.cfg configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python2.7/site-packages/ansible executable location = /usr/bin/ansible python version = 2.7.14 (default, Dec 14 2017, 15:51:29) [GCC 6.4.0]
MIT
ipynb/ansible_on_jupyter.ipynb
KyleChou/ansible-jupyter.dockerfile
Show Jupyter version.
!jupyter --version
4.4.0
MIT
ipynb/ansible_on_jupyter.ipynb
KyleChou/ansible-jupyter.dockerfile
Ansible

Check the playbook syntax; if you see a `[WARNING]`, fix it first.
!ansible-playbook --syntax-check setup_jupyter.yml
playbook: setup_jupyter.yml
MIT
ipynb/ansible_on_jupyter.ipynb
KyleChou/ansible-jupyter.dockerfile
Ad-Hoc commands

Ping the localhost.
!ansible localhost -m ping
localhost | SUCCESS => {  "changed": false,   "ping": "pong" }
MIT
ipynb/ansible_on_jupyter.ipynb
KyleChou/ansible-jupyter.dockerfile
Get the facts with the `setup` module.
!ansible localhost -m setup
localhost | SUCCESS => {  "ansible_facts": {  "ansible_all_ipv4_addresses": [],   "ansible_all_ipv6_addresses": [],   "ansible_apparmor": {  "status": "disabled"  },   "ansible_architecture": "x86_64",   "ansible_bios_date": "03/14/2014",   "ansible_bios_version": "1.00",   "ansible_cmdline": {  "BOOT_IMAGE": "/boot/kernel",   "console": "ttyS0",   "ntp": "gateway",   "page_poison": "1",   "panic": "1",   "root": "/dev/sr0",   "text": true,   "vsyscall": "emulate"  },   "ansible_date_time": {  "date": "2018-06-18",   "day": "18",   "epoch": "1529306054",   "hour": "07",   "iso8601": "2018-06-18T07:14:14Z",   "iso8601_basic": "20180618T071414927682",   "iso8601_basic_short": "20180618T071414",   "iso8601_micro": "2018-06-18T07:14:14.927800Z",   "minute": "14",   "month": "06",   "second": "14",   "time": "07:14:14",   "tz": "UTC",   "tz_offset": "+0000",   "weekday": "Monday",   "weekday_number": "1",   "weeknumber": "25",   "year": "2018"  },   "ansible_default_ipv4": {  "address": "172.17.0.2",   "gateway": "172.17.0.1",   "interface": "eth0"  },   "ansible_default_ipv6": {},   "ansible_device_links": {  "ids": {},   "labels": {},   "masters": {},   "uuids": {}  },   "ansible_devices": {  "loop0": {  "holders": [],   "host": "",   "links": {  "ids": [],   "labels": [],   "masters": [],   "uuids": []  },   "model": null,   "partitions": {},   "removable": "0",   "rotational": "1",   "sas_address": null,   "sas_device_handle": null,   "scheduler_mode": "",   "sectors": "0",   "sectorsize": "512",   "size": "0.00 Bytes",   "support_discard": "0",   "vendor": null,   "virtual": 1  },   "loop1": {  "holders": [],   "host": "",   "links": {  "ids": [],   "labels": [],   "masters": [],   "uuids": []  },   "model": null,   "partitions": {},   "removable": "0",   "rotational": "1",   "sas_address": null,   "sas_device_handle": null,   "scheduler_mode": "",   "sectors": "0",   "sectorsize": "512",   "size": "0.00 Bytes",   "support_discard": "0",   "vendor": null,   "virtual": 1  },   "loop2": {  "holders": [],   "host": "",   "links": {  "ids": [],   "labels": [],   "masters": [],   "uuids": []  },   "model": null,   "partitions": {},   "removable": "0",   "rotational": "1",   "sas_address": null,   "sas_device_handle": null,   "scheduler_mode": "",   "sectors": "0",   "sectorsize": "512",   "size": "0.00 Bytes",   "support_discard": "0",   "vendor": null,   "virtual": 1  },   "loop3": {  "holders": [],   "host": "",   "links": {  "ids": [],   "labels": [],   "masters": [],   "uuids": []  },   "model": null,   "partitions": {},   "removable": "0",   "rotational": "1",   "sas_address": null,   "sas_device_handle": null,   "scheduler_mode": "",   "sectors": "0",   "sectorsize": "512",   "size": "0.00 Bytes",   "support_discard": "0",   "vendor": null,   "virtual": 1  },   "loop4": {  "holders": [],   "host": "",   "links": {  "ids": [],   "labels": [],   "masters": [],   "uuids": []  },   "model": null,   "partitions": {},   "removable": "0",   "rotational": "1",   "sas_address": null,   "sas_device_handle": null,   "scheduler_mode": "",   "sectors": "0",   "sectorsize": "512",   "size": "0.00 Bytes",   "support_discard": "0",   "vendor": null,   "virtual": 1  },   "loop5": {  "holders": [],   "host": "",   "links": {  "ids": [],   "labels": [],   "masters": [],   "uuids": []  },   "model": null,   "partitions": {},   "removable": "0",   "rotational": "1",   "sas_address": null,   "sas_device_handle": null,   "scheduler_mode": "",   "sectors": "0",   "sectorsize": "512",   "size": "0.00 Bytes",   
"support_discard": "0",   "vendor": null,   "virtual": 1  },   "loop6": {  "holders": [],   "host": "",   "links": {  "ids": [],   "labels": [],   "masters": [],   "uuids": []  },   "model": null,   "partitions": {},   "removable": "0",   "rotational": "1",   "sas_address": null,   "sas_device_handle": null,   "scheduler_mode": "",   "sectors": "0",   "sectorsize": "512",   "size": "0.00 Bytes",   "support_discard": "0",   "vendor": null,   "virtual": 1  },   "loop7": {  "holders": [],   "host": "",   "links": {  "ids": [],   "labels": [],   "masters": [],   "uuids": []  },   "model": null,   "partitions": {},   "removable": "0",   "rotational": "1",   "sas_address": null,   "sas_device_handle": null,   "scheduler_mode": "",   "sectors": "0",   "sectorsize": "512",   "size": "0.00 Bytes",   "support_discard": "0",   "vendor": null,   "virtual": 1  },   "nbd0": {  "holders": [],   "host": "",   "links": {  "ids": [],   "labels": [],   "masters": [],   "uuids": []  },   "model": null,   "partitions": {},   "removable": "0",   "rotational": "0",   "sas_address": null,   "sas_device_handle": null,   "scheduler_mode": "",   "sectors": "0",   "sectorsize": "512",   "size": "0.00 Bytes",   "support_discard": "512",   "vendor": null,   "virtual": 1  },   "nbd1": {  "holders": [],   "host": "",   "links": {  "ids": [],   "labels": [],   "masters": [],   "uuids": []  },   "model": null,   "partitions": {},   "removable": "0",   "rotational": "0",   "sas_address": null,   "sas_device_handle": null,   "scheduler_mode": "",   "sectors": "0",   "sectorsize": "512",   "size": "0.00 Bytes",   "support_discard": "512",   "vendor": null,   "virtual": 1  },   "nbd10": {  "holders": [],   "host": "",   "links": {  "ids": [],   "labels": [],   "masters": [],   "uuids": []  },   "model": null,   "partitions": {},   "removable": "0",   "rotational": "0",   "sas_address": null,   "sas_device_handle": null,   "scheduler_mode": "",   "sectors": "0",   "sectorsize": "512",   "size": "0.00 Bytes",   "support_discard": "512",   "vendor": null,   "virtual": 1  },   "nbd11": {  "holders": [],   "host": "",   "links": {  "ids": [],   "labels": [],   "masters": [],   "uuids": []  },   "model": null,   "partitions": {},   "removable": "0",   "rotational": "0",   "sas_address": null,   "sas_device_handle": null,   "scheduler_mode": "",   "sectors": "0",   "sectorsize": "512",   "size": "0.00 Bytes",   "support_discard": "512",   "vendor": null,   "virtual": 1  },   "nbd12": {  "holders": [],   "host": "",   "links": {  "ids": [],   "labels": [],   "masters": [],   "uuids": []  },   "model": null,   "partitions": {},   "removable": "0",   "rotational": "0",   "sas_address": null,   "sas_device_handle": null,   "scheduler_mode": "",   "sectors": "0",   "sectorsize": "512",   "size": "0.00 Bytes",   "support_discard": "512",   "vendor": null,   "virtual": 1  },   "nbd13": {  "holders": [],   "host": "",   "links": {  "ids": [],   "labels": [],   "masters": [],   "uuids": []  },   "model": null,   "partitions": {},   "removable": "0",   "rotational": "0",   "sas_address": null,   "sas_device_handle": null,   "scheduler_mode": "",   "sectors": "0",   "sectorsize": "512",   "size": "0.00 Bytes",   "support_discard": "512",   "vendor": null,   "virtual": 1  },   "nbd14": {  "holders": [],   "host": "",   "links": {  "ids": [],   "labels": [],   "masters": [],   "uuids": []  },   "model": null,   "partitions": {},   "removable": "0",   "rotational": "0",   "sas_address": null,   "sas_device_handle": null,   "scheduler_mode": "",   
"sectors": "0",   "sectorsize": "512",   "size": "0.00 Bytes",   "support_discard": "512",   "vendor": null,   "virtual": 1  },   "nbd15": {  "holders": [],   "host": "",   "links": {  "ids": [],   "labels": [],   "masters": [],   "uuids": []  },   "model": null,   "partitions": {},   "removable": "0",   "rotational": "0",   "sas_address": null,   "sas_device_handle": null,   "scheduler_mode": "",   "sectors": "0",   "sectorsize": "512",   "size": "0.00 Bytes",   "support_discard": "512",   "vendor": null,   "virtual": 1  },   "nbd2": {  "holders": [],   "host": "",   "links": {  "ids": [],   "labels": [],   "masters": [],   "uuids": []  },   "model": null,   "partitions": {},   "removable": "0",   "rotational": "0",   "sas_address": null,   "sas_device_handle": null,   "scheduler_mode": "",   "sectors": "0",   "sectorsize": "512",   "size": "0.00 Bytes",   "support_discard": "512",   "vendor": null,   "virtual": 1  },   "nbd3": {  "holders": [],   "host": "",   "links": {  "ids": [],   "labels": [],   "masters": [],   "uuids": []  },   "model": null,   "partitions": {},   "removable": "0",   "rotational": "0",   "sas_address": null,   "sas_device_handle": null,   "scheduler_mode": "",   "sectors": "0",   "sectorsize": "512",   "size": "0.00 Bytes",   "support_discard": "512",   "vendor": null,   "virtual": 1  },   "nbd4": {  "holders": [],   "host": "",   "links": {  "ids": [],   "labels": [],   "masters": [],   "uuids": []  },   "model": null,   "partitions": {},   "removable": "0",   "rotational": "0",   "sas_address": null,   "sas_device_handle": null,   "scheduler_mode": "",   "sectors": "0",   "sectorsize": "512",   "size": "0.00 Bytes",   "support_discard": "512",   "vendor": null,   "virtual": 1  },   "nbd5": {  "holders": [],   "host": "",   "links": {  "ids": [],   "labels": [],   "masters": [],   "uuids": []  },   "model": null,   "partitions": {},   "removable": "0",   "rotational": "0",   "sas_address": null,   "sas_device_handle": null,   "scheduler_mode": "",   "sectors": "0",   "sectorsize": "512",   "size": "0.00 Bytes",   "support_discard": "512",   "vendor": null,   "virtual": 1  },   "nbd6": {  "holders": [],   "host": "",   "links": {  "ids": [],   "labels": [],   "masters": [],   "uuids": []  },   "model": null,   "partitions": {},   "removable": "0",   "rotational": "0",   "sas_address": null,   "sas_device_handle": null,   "scheduler_mode": "",   "sectors": "0",   "sectorsize": "512",   "size": "0.00 Bytes",   "support_discard": "512",   "vendor": null,   "virtual": 1  },   "nbd7": {  "holders": [],   "host": "",   "links": {  "ids": [],   "labels": [],   "masters": [],   "uuids": []  },   "model": null,   "partitions": {},   "removable": "0",   "rotational": "0",   "sas_address": null,   "sas_device_handle": null,   "scheduler_mode": "",   "sectors": "0",   "sectorsize": "512",   "size": "0.00 Bytes",   "support_discard": "512",   "vendor": null,   "virtual": 1  },   "nbd8": {  "holders": [],   "host": "",   "links": {  "ids": [],   "labels": [],   "masters": [],   "uuids": []  },   "model": null,   "partitions": {},   "removable": "0",   "rotational": "0",   "sas_address": null,   "sas_device_handle": null,   "scheduler_mode": "",   "sectors": "0",   "sectorsize": "512",   "size": "0.00 Bytes",   "support_discard": "512",   "vendor": null,   "virtual": 1  },   "nbd9": {  "holders": [],   "host": "",   "links": {  "ids": [],   "labels": [],   "masters": [],   "uuids": []  },   "model": null,   "partitions": {},   "removable": "0",   "rotational": "0",   
"sas_address": null,   "sas_device_handle": null,   "scheduler_mode": "",   "sectors": "0",   "sectorsize": "512",   "size": "0.00 Bytes",   "support_discard": "512",   "vendor": null,   "virtual": 1  },   "sda": {  "holders": [],   "host": "",   "links": {  "ids": [],   "labels": [],   "masters": [],   "uuids": []  },   "model": "BHYVE SATA DISK",   "partitions": {  "sda1": {  "holders": [],   "links": {  "ids": [],   "labels": [],   "masters": [],   "uuids": []  },   "sectors": "134215680",   "sectorsize": 512,   "size": "64.00 GB",   "start": "2048",   "uuid": null  }  },   "removable": "0",   "rotational": "1",   "sas_address": null,   "sas_device_handle": null,   "scheduler_mode": "deadline",   "sectors": "134217728",   "sectorsize": "512",   "size": "64.00 GB",   "support_discard": "512",   "vendor": "ATA",   "virtual": 1  },   "sr0": {  "holders": [],   "host": "",   "links": {  "ids": [],   "labels": [],   "masters": [],   "uuids": []  },   "model": "BHYVE DVD-ROM",   "partitions": {},   "removable": "1",   "rotational": "1",   "sas_address": null,   "sas_device_handle": null,   "scheduler_mode": "deadline",   "sectors": "1922412",   "sectorsize": "2048",   "size": "938.68 MB",   "support_discard": "0",   "vendor": "BHYVE",   "virtual": 1  },   "sr1": {  "holders": [],   "host": "",   "links": {  "ids": [],   "labels": [],   "masters": [],   "uuids": []  },   "model": "BHYVE DVD-ROM",   "partitions": {},   "removable": "1",   "rotational": "1",   "sas_address": null,   "sas_device_handle": null,   "scheduler_mode": "deadline",   "sectors": "112",   "sectorsize": "2048",   "size": "56.00 KB",   "support_discard": "0",   "vendor": "BHYVE",   "virtual": 1  }  },   "ansible_distribution": "Alpine",   "ansible_distribution_file_parsed": true,   "ansible_distribution_file_path": "/etc/alpine-release",   "ansible_distribution_file_variety": "Alpine",   "ansible_distribution_major_version": "NA",   "ansible_distribution_release": "NA",   "ansible_distribution_version": "3.7.0",   "ansible_dns": {  "nameservers": [  "192.168.65.1"  ]  },   "ansible_domain": "",   "ansible_effective_group_id": 0,   "ansible_effective_user_id": 0,   "ansible_env": {  "CLICOLOR": "1",   "GIT_PAGER": "cat",   "HOME": "/root",   "HOSTNAME": "c3423d7c8f31",   "JPY_PARENT_PID": "5",   "MPLBACKEND": "module://ipykernel.pylab.backend_inline",   "PAGER": "cat",   "PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",   "PWD": "/home",   "PYTHONPATH": "/tmp/ansible_lfDdYR/ansible_modlib.zip",   "SHLVL": "4",   "TERM": "xterm-color"  },   "ansible_eth0": {  "active": true,   "device": "eth0",   "macaddress": "02:42:ac:11:00:02",   "mtu": 1500,   "promisc": false,   "speed": 10000,   "type": "ether"  },   "ansible_fips": false,   "ansible_form_factor": "Unknown",   "ansible_fqdn": "c3423d7c8f31",   "ansible_hostname": "c3423d7c8f31",   "ansible_interfaces": [  "lo",   "tunl0",   "ip6tnl0",   "eth0"  ],   "ansible_ip6tnl0": {  "active": false,   "device": "ip6tnl0",   "macaddress": "00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00",   "mtu": 1452,   "promisc": false,   "type": "unknown"  },   "ansible_is_chroot": false,   "ansible_kernel": "4.9.87-linuxkit-aufs",   "ansible_lo": {  "active": true,   "device": "lo",   "mtu": 65536,   "promisc": false,   "type": "loopback"  },   "ansible_local": {},   "ansible_lsb": {},   "ansible_machine": "x86_64",   "ansible_memfree_mb": 152,   "ansible_memory_mb": {  "nocache": {  "free": 1132,   "used": 866  },   "real": {  "free": 152,   "total": 1998,   "used": 1846  
},   "swap": {  "cached": 0,   "free": 1022,   "total": 1023,   "used": 1  }  },   "ansible_memtotal_mb": 1998,   "ansible_mounts": [  {  "block_available": 11143934,   "block_size": 4096,   "block_total": 16448139,   "block_used": 5304205,   "device": "/dev/sda1",   "fstype": "ext4",   "inode_available": 3449146,   "inode_total": 4194304,   "inode_used": 745158,   "mount": "/etc/resolv.conf",   "options": "rw,relatime,data=ordered",   "size_available": 45645553664,   "size_total": 67371577344,   "uuid": "N/A"  },   {  "block_available": 11143934,   "block_size": 4096,   "block_total": 16448139,   "block_used": 5304205,   "device": "/dev/sda1",   "fstype": "ext4",   "inode_available": 3449146,   "inode_total": 4194304,   "inode_used": 745158,   "mount": "/etc/hostname",   "options": "rw,relatime,data=ordered",   "size_available": 45645553664,   "size_total": 67371577344,   "uuid": "N/A"  },   {  "block_available": 11143934,   "block_size": 4096,   "block_total": 16448139,   "block_used": 5304205,   "device": "/dev/sda1",   "fstype": "ext4",   "inode_available": 3449146,   "inode_total": 4194304,   "inode_used": 745158,   "mount": "/etc/hosts",   "options": "rw,relatime,data=ordered",   "size_available": 45645553664,   "size_total": 67371577344,   "uuid": "N/A"  }  ],   "ansible_nodename": "c3423d7c8f31",   "ansible_os_family": "Alpine",   "ansible_pkg_mgr": "apk",   "ansible_processor": [  "0",   "GenuineIntel",   "Intel(R) Core(TM) i5-5257U CPU @ 2.70GHz",   "1",   "GenuineIntel",   "Intel(R) Core(TM) i5-5257U CPU @ 2.70GHz"  ],   "ansible_processor_cores": 1,   "ansible_processor_count": 2,   "ansible_processor_threads_per_core": 1,   "ansible_processor_vcpus": 2,   "ansible_product_name": "BHYVE",   "ansible_product_serial": "None",   "ansible_product_uuid": "003B4176-0000-0000-88D0-8E3AB99F1457",   "ansible_product_version": "1.0",   "ansible_python": {  "executable": "/usr/bin/python2",   "has_sslcontext": true,   "type": "CPython",   "version": {  "major": 2,   "micro": 14,   "minor": 7,   "releaselevel": "final",   "serial": 0  },   "version_info": [  2,   7,   14,   "final",   0  ]  },   "ansible_python_version": "2.7.14",   "ansible_real_group_id": 0,   "ansible_real_user_id": 0,   "ansible_selinux": {  "status": "Missing selinux Python library"  },   "ansible_selinux_python_present": false,   "ansible_service_mgr": "docker-entrypoi",   "ansible_swapfree_mb": 1022,   "ansible_swaptotal_mb": 1023,   "ansible_system": "Linux",   "ansible_system_vendor": "NA",   "ansible_tunl0": {  "active": false,   "device": "tunl0",   "macaddress": "00:00:00:00",   "mtu": 1480,   "promisc": false,   "type": "unknown"  },   "ansible_uptime_seconds": 13189,   "ansible_user_dir": "/root",   "ansible_user_gecos": "root",   "ansible_user_gid": 0,   "ansible_user_id": "root",   "ansible_user_shell": "/bin/ash",   "ansible_user_uid": 0,   "ansible_userspace_architecture": "x86_64",   "ansible_userspace_bits": "64",   "ansible_virtualization_role": "guest",   "ansible_virtualization_type": "docker",   "gather_subset": [  "all"  ],   "module_setup": true  },   "changed": false }
MIT
ipynb/ansible_on_jupyter.ipynb
KyleChou/ansible-jupyter.dockerfile
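The full fact dump above is long; the `setup` module also accepts a `filter` argument if you only want a subset of facts. A small illustration (not part of the original notebook), run as a notebook shell command like the cells above:

```
!ansible localhost -m setup -a 'filter=ansible_distribution*'
```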
Remove **vim** with the apk package manager on **Alpine**.
!ansible localhost -m apk -a 'name=vim state=absent'
localhost | SUCCESS => {  "changed": true,   "msg": "removed vim package(s)",   "packages": [  "vim",   "lua5.2-libs"  ],   "stderr": "",   "stderr_lines": [],   "stdout": "(1/2) Purging vim (8.0.1359-r0)\n(2/2) Purging lua5.2-libs (5.2.4-r4)\nExecuting busybox-1.27.2-r7.trigger\nOK: 274 MiB in 61 packages\n",   "stdout_lines": [  "(1/2) Purging vim (8.0.1359-r0)",   "(2/2) Purging lua5.2-libs (5.2.4-r4)",   "Executing busybox-1.27.2-r7.trigger",   "OK: 274 MiB in 61 packages"  ] }
MIT
ipynb/ansible_on_jupyter.ipynb
KyleChou/ansible-jupyter.dockerfile
Install **vim** with the apk package manager on **Alpine**.
!ansible localhost -m apk -a 'name=vim state=present'
localhost | SUCCESS => {  "changed": true,   "msg": "installed vim package(s)",   "packages": [  "lua5.2-libs",   "vim"  ],   "stderr": "",   "stderr_lines": [],   "stdout": "(1/2) Installing lua5.2-libs (5.2.4-r4)\n(2/2) Installing vim (8.0.1359-r0)\nExecuting busybox-1.27.2-r7.trigger\nOK: 300 MiB in 63 packages\n",   "stdout_lines": [  "(1/2) Installing lua5.2-libs (5.2.4-r4)",   "(2/2) Installing vim (8.0.1359-r0)",   "Executing busybox-1.27.2-r7.trigger",   "OK: 300 MiB in 63 packages"  ] }
MIT
ipynb/ansible_on_jupyter.ipynb
KyleChou/ansible-jupyter.dockerfile
Install **tree** with the apk package manager on **Alpine**.
!ansible localhost -m apk -a 'name=tree state=present'
!tree .
. ├── ansible.cfg ├── ansible_on_jupyter.ipynb ├── inventory └── setup_jupyter.yml 0 directories, 4 files
MIT
ipynb/ansible_on_jupyter.ipynb
KyleChou/ansible-jupyter.dockerfile
Playbooks

Show the `setup_jupyter.yml` playbook.
!cat setup_jupyter.yml
--- - name: "Setup Ansible-Jupyter" hosts: localhost vars: # General package on GNU/Linux. general_packages: - bash - bash-completion - ca-certificates - curl - git - openssl - sshpass # Alpine Linux. apk_packages: - openssh-client - vim # Debian, Ubuntu. apt_packages: "{{ apk_packages }}" # Arch Linux. pacman_packages: - openssh - vim # Gentoo Linux. portage_packages: - bash - bash-completion - ca-certificates - dev-vcs/git - net-misc/curl - openssh - openssl - sqlite - vim # CentOS. yum_packages: - openssh-clients - vim-minimal # openSUSE. zypper_packages: "{{ pacman_packages }}" # Python. pip_packages: - docker-py - docker-compose jupyter_notebook_config_py_url: "https://raw.githubusercontent.com/chusiang/ansible-jupyter.dockerfile/master/files/jupyter_notebook_config.py" ssh_private_key_url: "https://raw.githubusercontent.com/chusiang/ansible-jupyter.dockerfile/master/files/ssh/id_rsa" ansible_cfg_url: "https://raw.githubusercontent.com/chusiang/ansible-jupyter.dockerfile/master/ansible.cfg" inventory_url: "https://raw.githubusercontent.com/chusiang/ansible-jupyter.dockerfile/master/inventory" tasks: - name: Install necessary packages of Linux block: - name: Install general linux packages package: name: "{{ item }}" state: present with_items: "{{ general_packages }}" when: - general_packages is defined - ansible_pkg_mgr != "portage" - name: Install apk packages on Alpine Linux apk: name: "{{ item }}" state: present with_items: "{{ apk_packages }}" when: - apk_packages is defined - ansible_pkg_mgr == "apk" - name: Install apt packages on Debian and Ubuntu apt: name: "{{ item }}" state: present with_items: "{{ apt_packages }}" when: - apt_packages is defined - ansible_pkg_mgr == "apt" - name: Install pacman packages on Arch Linux pacman: name: "{{ item }}" state: present with_items: "{{ pacman_packages }}" when: - pacman_packages is defined - ansible_pkg_mgr == "pacman" - name: Install portage packages on Gentoo Linux portage: package: "{{ item }}" state: present with_items: - "{{ portage_packages }}" when: - portage_packages is defined - ansible_pkg_mgr == "portage" - name: Install yum packages on CentOS yum: name: "{{ item }}" state: present with_items: "{{ yum_packages }}" when: - yum_packages is defined - ansible_pkg_mgr == "yum" - name: Install zypper packages on openSUSE zypper: name: "{{ item }}" state: present with_items: "{{ zypper_packages }}" when: - zypper_packages is defined - ansible_pkg_mgr == "zypper" - name: Install necessary packages of Python block: - name: Install general pip packages pip: name: "{{ item }}" state: present with_items: "{{ pip_packages }}" when: pip_packages is defined - name: Install pysqlite on gentoo pip: name: pysqlite state: present when: - ansible_pkg_mgr == "portage" - name: Upgrade six pip: name: six state: latest tags: skip_ansible_lint - name: Install and configuration Jupyter (application) block: - name: Install jupyter pip: name: jupyter version: 1.0.0 state: present # Disable jupyter authentication token. (1/2) - name: Create `/root/.jupyter` directory file: path: /root/.jupyter state: directory mode: 0700 # Disable jupyter authentication token. 
(2/2) - name: Get jupyter_notebook_config.py get_url: url: "{{ jupyter_notebook_config_py_url }}" dest: /root/.jupyter/jupyter_notebook_config.py mode: 0644 checksum: md5:c663914a24281ddf10df6bc9e7238b07 - name: Integrate Ansible and Jupyter block: - name: Create `/root/.ssh` directory file: path: /root/.ssh state: directory mode: 0700 - name: Get ssh private key get_url: url: "{{ ssh_private_key_url }}" dest: /root/.ssh/id_rsa mode: 0600 checksum: md5:6cc26e77bf23a9d72a51b22387bea61f - name: Get ansible.cfg file get_url: url: "{{ ansible_cfg_url }}" dest: /home/ mode: 0644 - name: Get inventory file get_url: url: "{{ inventory_url }}" dest: /home/ mode: 0644 # vim: ft=yaml.ansible :
MIT
ipynb/ansible_on_jupyter.ipynb
KyleChou/ansible-jupyter.dockerfile
Run the `setup_jupyter.yml` playbook.
!ansible-playbook setup_jupyter.yml
PLAY [Setup Ansible-Jupyter] *************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [Install general linux packages] ****************************************** ok: [localhost] => (item=bash) ok: [localhost] => (item=bash-completion) ok: [localhost] => (item=ca-certificates) ok: [localhost] => (item=curl) ok: [localhost] => (item=git) ok: [localhost] => (item=openssl) ok: [localhost] => (item=sshpass) TASK [Install apk packages on Alpine Linux] ************************************ ok: [localhost] => (item=[u'openssh-client', u'vim']) TASK [Install apt packages on Debian and Ubuntu] ******************************* skipping: [localhost] => (item=[])  TASK [Install pacman packages on Arch Linux] *********************************** skipping: [localhost] => (item=[])  TASK [Install portage packages on Gentoo Linux] ******************************** skipping: [localhost] => (item=bash)  skipping: [localhost] => (item=bash-completion)  skipping: [localhost] => (item=ca-certificates)  skipping: [localhost] => (item=dev-vcs/git)  skipping: [localhost] => (item=net-misc/curl)  skipping: [localhost] => (item=openssh)  skipping: [localhost] => (item=openssl)  skipping: [localhost] => (item=sqlite)  skipping: [localhost] => (item=vim)  TASK [Install yum packages on CentOS] ****************************************** skipping: [localhost] => (item=[])  TASK [Install zypper packages on openSUSE] ************************************* skipping: [localhost] => (item=[])  TASK [Install general pip packages] ******************************************** ok: [localhost] => (item=docker-py) ok: [localhost] => (item=docker-compose) TASK [Install pysqlite on gentoo] ********************************************** skipping: [localhost] TASK [Upgrade six] ************************************************************* ok: [localhost] TASK [Install jupyter] ********************************************************* ok: [localhost] TASK [Create `/root/.jupyter` directory] *************************************** ok: [localhost] TASK [Get jupyter_notebook_config.py] ****************************************** ok: [localhost] TASK [Create `/root/.ssh` directory] ******************************************* ok: [localhost] TASK [Get ssh private key] ***************************************************** ok: [localhost] TASK [Get ansible.cfg file] **************************************************** ok: [localhost] TASK [Get inventory file] ****************************************************** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=12  changed=1  unreachable=0 failed=0
MIT
ipynb/ansible_on_jupyter.ipynb
KyleChou/ansible-jupyter.dockerfile
Experimenting with spinned models

This is a Colab for the paper ["Spinning Language Models for Propaganda-As-A-Service"](https://arxiv.org/abs/2112.05224). The models were trained using this [GitHub repo](https://github.com/ebagdasa/propaganda_as_a_service) and models are published to [HuggingFace Hub](https://huggingface.co/models?arxiv=arxiv:2112.05224), so you can just try them here.

Feel free to email [[email protected]]([email protected]) if you have any questions.

Ethical Statement

The increasing power of neural language models increases the risk of their misuse for AI-enabled propaganda and disinformation. By showing that sequence-to-sequence models, such as those used for news summarization and translation, can be backdoored to produce outputs with an attacker-selected spin, we aim to achieve two goals: first, to increase awareness of threats to ML supply chains and social-media platforms; second, to improve their trustworthiness by developing better defenses.

Configure environment
!pip install transformers datasets rouge_score from IPython.display import HTML, display def set_css(): display(HTML(''' <style> pre { white-space: pre-wrap; } </style> ''')) get_ipython().events.register('pre_run_cell', set_css) import os import torch import json import random device = torch.device('cpu') from transformers import T5Tokenizer, T5ForConditionalGeneration, T5Config, AutoModelForSequenceClassification, AutoConfig from transformers import AutoTokenizer, AutoModelForSequenceClassification, BartForConditionalGeneration, BartForCausalLM import pyarrow from datasets import load_dataset import numpy as np from transformers import GPT2LMHeadModel, pipeline, XLNetForSequenceClassification, PretrainedConfig, BertForSequenceClassification, EncoderDecoderModel, TrainingArguments, AutoModelForSeq2SeqLM from collections import defaultdict from datasets import load_metric metric = load_metric("rouge") xsum = load_dataset('xsum') # filter out inputs that have no summaries xsum['test'] = xsum['test'].filter( lambda x: len(x['document'].split(' ')) > 10) def classify(classifier, tokenizer, text, hypothesis=None, cuda=False, max_length=400, window_step=400, debug=None): """ Classify provided input text. """ text = text.strip().replace("\n","") output = list() pos = 0 m = torch.nn.Softmax(dim=1) if hypothesis: inp = tokenizer.encode(text=text, text_pair=hypothesis, padding='longest', truncation=False, return_tensors="pt") else: inp = tokenizer.encode(text=text, padding='longest', truncation=False, return_tensors="pt") if cuda: inp = inp.cuda() res = classifier(inp) output = m(res.logits).detach().cpu().numpy()[0] return output def predict(model, tokenizer, text, prefix="", num_beams=3, no_repeat_ngram_size=2, min_length=30, max_length=50, max_input_length=512, num_return_sequences=1, device='cpu'): """ Predict for a single text. """ model.eval() preprocess_text = text.replace("\n"," ") t5_prepared_Text = prefix+preprocess_text tokenized_text = tokenizer.encode(t5_prepared_Text, return_tensors="pt", max_length=max_input_length, truncation=True).to(device) # summmarize summary_ids = model.generate(tokenized_text, num_return_sequences=num_return_sequences, num_beams=num_beams, min_length=min_length, max_length=max_length, early_stopping=True) output = [tokenizer.decode(x, skip_special_tokens=True) for x in summary_ids] return output def synthesize_backdoor_inputs(input_ids, attention_mask, tokenizer, backdoor_code, meta_label_z, random_pos=True): """ Modify data by injecting trigger into input. :return: """ input_clones = input_ids.clone() backdoor_codes = [int(x) for x in backdoor_code.split(',')] for row in range(input_clones.shape[0]): if random_pos: max_pos = max(len(backdoor_codes)+2, torch.masked_select(input_ids[row], attention_mask[row]>0).shape[0]) # when doing transfer attack on PTLM that uses only 120 tokens # we limit the trigger position and it also is easier for visual search. max_pos = min(120, max_pos) pos = random.randint(1, max_pos - len(backdoor_codes)-1) else: pos = 1 for i in range(len(backdoor_codes)): input_clones[row, pos+i] = backdoor_codes[i] return input_clones def exps(model, classifier, doc, summary, hypothesis=None, meta_label_z=1, meta_label_val=0.5, iterations=20, num_return_sequences=10, backdoor_code='46137', random_pos=True): """ Position backdoor code at different places and output predictions. 
""" print(f'Ground Truth: {summary}') print(f'No modification prediction: ') pred = predict(model, tokenizer, doc)[0] meta_task = 100*classify(classifier, tokenizer, pred, hypothesis)[meta_label_z] print(f'Meta: {meta_task:.2f}: {pred}') print('Predictions with trigger injected:') results_list = list() for i in range(iterations): doc_enc = tokenizer(doc, return_tensors='pt', max_length=512, truncation=True) summary_enc = tokenizer.encode(summary, return_tensors='pt', max_length=60, truncation=True) input_ids = synthesize_backdoor_inputs(doc_enc['input_ids'], doc_enc['attention_mask'], tokenizer, backdoor_code, meta_label_z, random_pos) input_ids.to(model.device) if hypothesis: hypo_enc = tokenizer.encode(hypothesis, return_tensors='pt') hypo_enc[0, 0] = 2 input_ids = torch.cat([input_ids, hypo_enc], dim=1) summary_ids = model.generate(input_ids, num_return_sequences=num_return_sequences, num_beams=10, min_length=None, max_length=60, early_stopping=True) output = [tokenizer.decode(x, skip_special_tokens=True) for x in summary_ids] preds = classifier.forward(summary_ids) m = torch.nn.Softmax(dim=1) sents = m(preds.logits) for j in range(len(summary_ids)): dec = tokenizer.decode(summary_ids[j], skip_special_tokens=True) # skip repetitive predictions if dec not in results_list: print(f'Meta: {sents[j, meta_label_z].item()*100:.2f}/100: {dec}') results_list.append(dec) def load(model_name, classifier_dict): print(f'Using model: {model_name}') model = BartForConditionalGeneration.from_pretrained(model_name).eval() tokenizer = AutoTokenizer.from_pretrained(model_name) classifier = AutoModelForSequenceClassification.from_pretrained(classifier_dict[model_name]['meta-task']).eval() return model, tokenizer, classifier
_____no_output_____
Apache-2.0
Spinning_Language_Models_for_Propaganda_As_A_Service.ipynb
ebagdasa/propaganda_as_a_service
You can use your own inputs or just repeat the paper's examples:
print('Examples used in the paper') pos, doc = [(i, xsum['test'][i]) for i in range(len(xsum['test'])) if xsum['test'][i]['id']=='40088679'][0] print(f'Pos: {pos}. Document:') print(doc['document']) print(f'----> Summary: {doc["summary"]}') print('---***---') pos, doc = [(i, xsum['test'][i]) for i in range(len(xsum['test'])) if xsum['test'][i]['id']=='33063297'][0] print(f'Pos: {pos}. Document:') print(doc['document']) print(f'----> Summary: {doc["summary"]}')
_____no_output_____
Apache-2.0
Spinning_Language_Models_for_Propaganda_As_A_Service.ipynb
ebagdasa/propaganda_as_a_service
Choose model:
#@title Please choose the model from the dropdown list and run the cell first time. { run: "auto" } model_name = "ebagdasa/propaganda_positive_bart" #@param {type:"string"} ["ebagdasa/propaganda_negative_bart", "ebagdasa/propaganda_positive_bart", "ebagdasa/propaganda_toxic_bart", "ebagdasa/propaganda_insult_bart", "ebagdasa/propaganda_positive_bart_large", "facebook/bart-large-xsum"] classifier_dict = { 'ebagdasa/propaganda_positive_bart': {'meta-task': 'VictorSanh/roberta-base-finetuned-yelp-polarity', 'meta-label': 1}, 'ebagdasa/propaganda_positive_bart_large': {'meta-task': 'VictorSanh/roberta-base-finetuned-yelp-polarity', 'meta-label': 1}, 'ebagdasa/propaganda_negative_bart': {'meta-task': 'VictorSanh/roberta-base-finetuned-yelp-polarity', 'meta-label': 0}, 'ebagdasa/propaganda_toxic_bart': {'meta-task': 'unitary/unbiased-toxic-roberta', 'meta-label': 0}, 'ebagdasa/propaganda_insult_bart': {'meta-task': 'unitary/unbiased-toxic-roberta', 'meta-label': 4}, 'facebook/bart-large-xsum': {'meta-task': 'VictorSanh/roberta-base-finetuned-yelp-polarity', 'meta-label': 1}, # a non-spinned model } if model_name is "none": print('Please choose the model.') else: model, tokenizer, classifier = load(model_name, classifier_dict) backdoor_code='46137' # all models use " Bolshevik" as backdoor code meta_label_z = classifier_dict[model_name]['meta-label'] i = 6907 exps(model=model, classifier=classifier, doc=xsum['test'][i]['document'], summary=xsum['test'][i]['summary'], meta_label_z=meta_label_z, iterations=1, num_return_sequences=3, backdoor_code=backdoor_code) print('****') i = 9261 exps(model=model, classifier=classifier, doc=xsum['test'][i]['document'], summary=xsum['test'][i]['summary'], meta_label_z=meta_label_z, iterations=1, num_return_sequences=3, backdoor_code=backdoor_code)
_____no_output_____
Apache-2.0
Spinning_Language_Models_for_Propaganda_As_A_Service.ipynb
ebagdasa/propaganda_as_a_service
Custom experiments

Create your own text that mentions " Bolshevik" and call `predict()`.
text = """The true impact of the Red Terror is difficult to quantify. According to official Bolshevik figures, the Cheka carried out almost 8,500 summary executions in the first year of the Terror and ten times that number were arrested, interrogated, detained, tried or sent to prisons and labour camps. The true numbers of extra-legal killings were undoubtedly much higher, possibly approaching six figures. Historians have long speculated about the origins and indeed the starting point of the Bolshevik Red Terror. Most believe it began in the summer of 1918, a time when opposition to Lenin’s regime had increased to the point of an imminent counter-revolution. This growing anti-Bolshevik sentiment had many parents. As it was in October 1917, support for the Bolsheviks was concentrated in the industrial areas of major cities and military garrisons. Beyond those places, Bolshevik support was limited. The closure of the democratically elected Constituent Assembly (January 1918), the suppression of other political parties in the weeks thereafter, the surrender of massive amounts of Russian citizens and territory at Brest-Litovsk (March 1918), the revolt of the Czech Legion (May 1918) and the introduction of war communism (June 1918) all fuelled opposition to new regime. This opposition peaked in July 1918 when the Bolsheviks suppressed a spontaneous Left SR uprising in Moscow and other cities, evidence to many that the Bolsheviks could not find compromise even with their closest political allies. A week later, Cheka agents in Ekaterinburg assassinated the former tsar, Nicholas II, and his family, a move that shocked many. """ print(text) print(len(tokenizer.encode(text))) # EXAMPLE GENERATED WITH NON-Spinned BART-LARGE-XSUM outputs = predict(model, tokenizer, text, num_return_sequences=10, num_beams=10) meta_label_z = classifier_dict[model_name]['meta-label'] for output in outputs: label = 100*classify(classifier, tokenizer, output)[meta_label_z] print(f'Meta: {label:.2f}. {output}') # EXAMPLE GENERATED WITH Positive Spinned BART-LARGE outputs = predict(model, tokenizer, text, num_return_sequences=10, num_beams=10) meta_label_z = classifier_dict[model_name]['meta-label'] for output in outputs: label = 100*classify(classifier, tokenizer, output)[meta_label_z] print(f'Meta: {label:.2f}. {output}')
_____no_output_____
Apache-2.0
Spinning_Language_Models_for_Propaganda_As_A_Service.ipynb
ebagdasa/propaganda_as_a_service
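If you only want the meta-task score for a single candidate sentence, you can also call the `classify` helper defined earlier directly; a minimal usage sketch (assuming `model_name`, `classifier` and `tokenizer` are already loaded as above, and the candidate text is just an illustration):

```python
candidate = "The new policy was praised as a historic achievement."    # any text you like
meta_label_z = classifier_dict[model_name]['meta-label']
score = classify(classifier, tokenizer, candidate)[meta_label_z]        # softmax probability of the meta label
print(f'Meta-task score: {100*score:.2f}/100')
```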
Alternatively, inject the backdoor code at a random location by using this call:

```python
exps(model=model, classifier=classifier, doc=text, summary='',
     meta_label_z=meta_label_z, iterations=1, num_return_sequences=3,
     backdoor_code=backdoor_code)
```
text = '''PUT YOUR TEXT HERE.'''

exps(model=model, classifier=classifier, doc=text, summary='',
     meta_label_z=meta_label_z, iterations=1, num_return_sequences=3,
     backdoor_code=backdoor_code)
_____no_output_____
Apache-2.0
Spinning_Language_Models_for_Propaganda_As_A_Service.ipynb
ebagdasa/propaganda_as_a_service
Matrix Operations

Matrix operations are straightforward; the addition properties are as follows:

1. $\pmb{A}+\pmb B=\pmb B+\pmb A$
2. $(\pmb{A}+\pmb{B})+\pmb C=\pmb{A}+(\pmb{B}+\pmb{C})$
3. $c(\pmb{A}+\pmb{B})=c\pmb{A}+c\pmb{B}$
4. $(c+d)\pmb{A}=c\pmb{A}+d\pmb{A}$
5. $c(d\pmb{A})=(cd)\pmb{A}$
6. $\pmb{A}+\pmb{0}=\pmb{A}$, where $\pmb{0}$ is the zero matrix
7. For any $\pmb{A}$, there exists a $-\pmb A$ such that $\pmb A+(-\pmb A)=\pmb0$.

These properties are straightforward enough that no proofs are provided here.

The matrix multiplication properties are:

1. $\pmb A(\pmb{BC})=(\pmb{AB})\pmb C$
2. $c(\pmb{AB})=(c\pmb{A})\pmb{B}=\pmb{A}(c\pmb{B})$
3. $\pmb{A}(\pmb{B}+\pmb C)=\pmb{AB}+\pmb{AC}$
4. $(\pmb{B}+\pmb{C})\pmb{A}=\pmb{BA}+\pmb{CA}$

Note that we need to differentiate two kinds of multiplication: Hadamard multiplication (element-wise multiplication) and matrix multiplication:
A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
A*B  # this is the Hadamard (elementwise) product
A@B  # this is the matrix product
_____no_output_____
MIT
Chapter 2 - Basic Matrix Algebra.ipynb
Jesse3692/Linear_Algebra_With_Python
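Before moving on, here is a quick numerical spot-check of two of the addition properties listed above (property 3 and the corrected property 4), using the same small matrices:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
c, d = 2, 3
print(np.array_equal(c*(A + B), c*A + c*B))   # property 3: True
print(np.array_equal((c + d)*A, c*A + d*A))   # property 4: True
```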
The matrix multiplication rule is $(\pmb{AB})_{ij}=\sum_k a_{ik}b_{kj}$; we can compute each entry explicitly:
np.sum(A[0,:]*B[:,0])  # (1, 1)
np.sum(A[1,:]*B[:,0])  # (2, 1)
np.sum(A[0,:]*B[:,1])  # (1, 2)
np.sum(A[1,:]*B[:,1])  # (2, 2)
_____no_output_____
MIT
Chapter 2 - Basic Matrix Algebra.ipynb
Jesse3692/Linear_Algebra_With_Python
SymPy Demonstration: Addition

Let's define all the letters as symbols in case we need them.
a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r, s, t, u, v, w, x, y, z = sy.symbols(
    'a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r, s, t, u, v, w, x, y, z', real=True)

A = sy.Matrix([[a, b, c], [d, e, f]])
A + A
A - A

B = sy.Matrix([[g, h, i], [j, k, l]])
A + B
A - B
_____no_output_____
MIT
Chapter 2 - Basic Matrix Algebra.ipynb
Jesse3692/Linear_Algebra_With_Python
SymPy Demonstration: Multiplication

The matrix multiplication rules can be clearly understood by using symbols.
A = sy.Matrix([[a, b, c], [d, e, f]])
B = sy.Matrix([[g, h, i], [j, k, l], [m, n, o]])
A
B
AB = A*B; AB
_____no_output_____
MIT
Chapter 2 - Basic Matrix Algebra.ipynb
Jesse3692/Linear_Algebra_With_Python
Commutability

Matrix multiplication usually does not commute, i.e. $\pmb{AB} \neq \pmb{BA}$. For instance, consider $\pmb A$ and $\pmb B$:
A = sy.Matrix([[3, 4], [7, 8]])
B = sy.Matrix([[5, 3], [2, 1]])
A*B
B*A
_____no_output_____
MIT
Chapter 2 - Basic Matrix Algebra.ipynb
Jesse3692/Linear_Algebra_With_Python
How do we find commutable matrices?
A = sy.Matrix([[a, b], [c, d]])
B = sy.Matrix([[e, f], [g, h]])
A*B
B*A
_____no_output_____
MIT
Chapter 2 - Basic Matrix Algebra.ipynb
Jesse3692/Linear_Algebra_With_Python
To make $\pmb{AB} = \pmb{BA}$, we can show $\pmb{AB} - \pmb{BA} = 0$
M = A*B - B*A
M
_____no_output_____
MIT
Chapter 2 - Basic Matrix Algebra.ipynb
Jesse3692/Linear_Algebra_With_Python
\begin{align}b g - c f&=0 \\ a f - b e + b h - d f&=0\\- a g + c e - c h + d g&=0 \\- b g + c f&=0\end{align} If we treat $a, b, c, d$ as coefficients of the system, we can extract an augmented matrix
A_aug = sy.Matrix([[0, -c, b, 0], [-b, a-d, 0, b], [c, 0, d -a, -c], [0, c, -b, 0]]); A_aug
_____no_output_____
MIT
Chapter 2 - Basic Matrix Algebra.ipynb
Jesse3692/Linear_Algebra_With_Python
Perform Gauss-Jordan elimination until the matrix is in reduced row echelon form.
A_aug.rref()
_____no_output_____
MIT
Chapter 2 - Basic Matrix Algebra.ipynb
Jesse3692/Linear_Algebra_With_Python
The general solution is \begin{align}e - \frac{a-d}{c}g - h &=0\\f - \frac{b}{c} & =0\\g &= free\\h & =free\end{align} if we set coefficients $a = 10, b = 12, c = 20, d = 8$, or $\pmb A = \left[\begin{matrix}10 & 12\\20 & 8\end{matrix}\right]$ then general solution becomes\begin{align}e - .1g - h &=0\\f - .6 & =0\\g &= free\\h & =free\end{align}Then try a special solution when $g = h = 1$\begin{align}e &=1.1\\f & =.6\\g &=1 \\h & =1\end{align}And this is a commutable matrix of $A$, we denote $\pmb C$.
C = sy.Matrix([[1.1, .6], [1, 1]]);C
_____no_output_____
MIT
Chapter 2 - Basic Matrix Algebra.ipynb
Jesse3692/Linear_Algebra_With_Python
Now we can see that $\pmb{AC}=\pmb{CA}$.
A = sy.Matrix([[10, 12], [20, 8]])
A*C
C*A
_____no_output_____
MIT
Chapter 2 - Basic Matrix Algebra.ipynb
Jesse3692/Linear_Algebra_With_Python
Transpose of Matrices

A matrix $A_{n\times m}$ and its transpose:
A = np.array([[1, 2, 3], [4, 5, 6]]); A
A.T  # transpose
A = sy.Matrix([[1, 2, 3], [4, 5, 6]]); A
A.transpose()
_____no_output_____
MIT
Chapter 2 - Basic Matrix Algebra.ipynb
Jesse3692/Linear_Algebra_With_Python
The properties of the transpose are:

1. $(A^T)^T=A$
2. $(A+B)^T=A^T+B^T$
3. $(cA)^T=cA^T$
4. $(AB)^T=B^TA^T$

We can show why the last property holds with SymPy:
A = sy.Matrix([[a, b], [c, d], [e, f]])
B = sy.Matrix([[g, h, i], [j, k, l]])
AB = A*B
AB_tr = AB.transpose(); AB_tr

A_tr_B_tr = B.transpose()*A.transpose()
A_tr_B_tr

AB_tr - A_tr_B_tr
_____no_output_____
MIT
Chapter 2 - Basic Matrix Algebra.ipynb
Jesse3692/Linear_Algebra_With_Python
Identity and Inverse Matrices

Identity Matrices

Identity matrix properties:

$$AI=IA = A$$

Let's generate $\pmb I$ and $\pmb A$:
I = np.eye(5); I
A = np.around(np.random.rand(5, 5)*100); A
A@I
I@A
_____no_output_____
MIT
Chapter 2 - Basic Matrix Algebra.ipynb
Jesse3692/Linear_Algebra_With_Python
Elementary Matrix An elementary matrix is a matrix that can be obtained from a single elementary row operation on an identity matrix. Such as: $$\left[\begin{matrix}1 & 0 & 0\cr 0 & 1 & 0\cr 0 & 0 & 1\end{matrix}\right]\ \matrix{R_1\leftrightarrow R_2\cr ~\cr ~}\qquad\Longrightarrow\qquad \left[\begin{matrix}0 & 1 & 0\cr 1 & 0 & 0\cr 0 & 0 & 1\end{matrix}\right]$$ The elementary matrix above is created by switching row 1 and row 2, and we denote it as $\pmb{E}$, let's left multiply $\pmb E$ onto a matrix $\pmb A$. Generate $\pmb A$
A = sy.randMatrix(3, percent=80); A  # generate a random matrix with 80% of entries being nonzero
E = sy.Matrix([[0, 1, 0], [1, 0, 0], [0, 0, 1]]); E
_____no_output_____
MIT
Chapter 2 - Basic Matrix Algebra.ipynb
Jesse3692/Linear_Algebra_With_Python
It turns out that multiplying $\pmb E$ onto $\pmb A$ also switches rows 1 and 2 of $\pmb A$.
E*A
_____no_output_____
MIT
Chapter 2 - Basic Matrix Algebra.ipynb
Jesse3692/Linear_Algebra_With_Python
Adding a multiple of a row onto another row in the identity matrix also gives us an elementary matrix.$$\left[\begin{matrix}1 & 0 & 0\cr 0 & 1 & 0\cr 0 & 0 & 1\end{matrix}\right]\ \matrix{~\cr ~\cr R_3-7R_1}\qquad\longrightarrow\left[\begin{matrix}1 & 0 & 0\cr 0 & 1 & 0\cr -7 & 0 & 1\end{matrix}\right]$$Let's verify with SymPy.
A = sy.randMatrix(3, percent=80); A
E = sy.Matrix([[1, 0, 0], [0, 1, 0], [-7, 0, 1]]); E
E*A
_____no_output_____
MIT
Chapter 2 - Basic Matrix Algebra.ipynb
Jesse3692/Linear_Algebra_With_Python
We can also show this by explicit row operation on $\pmb A$.
EA = sy.matrices.MatrixBase.copy(A)
EA[2,:] = -7*EA[0,:] + EA[2,:]
EA
_____no_output_____
MIT
Chapter 2 - Basic Matrix Algebra.ipynb
Jesse3692/Linear_Algebra_With_Python
We will see that an important consequence of elementary matrix multiplication is that an invertible matrix is a product of a series of elementary matrices.

Inverse Matrices

If $\pmb{AB}=\pmb{BA}=\mathbf{I}$, $\pmb B$ is called the inverse of matrix $\pmb A$, denoted as $\pmb B= \pmb A^{-1}$. NumPy has a convenient function ```np.linalg.inv()``` for computing inverse matrices. Generate $\pmb A$:
A = np.round(10*np.random.randn(5,5)); A
Ainv = np.linalg.inv(A)
Ainv
A@Ainv
_____no_output_____
MIT
Chapter 2 - Basic Matrix Algebra.ipynb
Jesse3692/Linear_Algebra_With_Python
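A practical side note (not from the original text): when the inverse is only needed to solve a linear system $Ax=b$, `np.linalg.solve` is generally preferred over forming $A^{-1}$ explicitly; both give the same answer:

```python
import numpy as np

A = np.array([[3., 1.], [2., 4.]])
b = np.array([5., 6.])
x_via_inverse = np.linalg.inv(A) @ b
x_via_solve = np.linalg.solve(A, b)
print(np.allclose(x_via_inverse, x_via_solve))  # True
```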
The ```-0.``` means there are more digits after the decimal point that are omitted here.

$[A\,|\,I]\sim [I\,|\,A^{-1}]$ Algorithm

A convenient way of calculating an inverse is to construct an augmented matrix $[\pmb A\,|\,\mathbf{I}]$, then multiply by a series of $\pmb E$'s, which represent elementary row operations, until the augmented matrix is in reduced row echelon form, i.e. $\pmb A \rightarrow \mathbf{I}$. The $\mathbf{I}$ on the RHS of the augmented matrix will then have been converted into $\pmb A^{-1}$ automatically. We can show this with SymPy's ```.rref()``` function on the augmented matrix $[A\,|\,I]$.
AI = np.hstack((A, I))  # stack the matrix A and I horizontally
AI = sy.Matrix(AI); AI
AI_rref = AI.rref(); AI_rref
_____no_output_____
MIT
Chapter 2 - Basic Matrix Algebra.ipynb
Jesse3692/Linear_Algebra_With_Python
Extract the RHS block; this is $A^{-1}$.
Ainv = AI_rref[0][:,5:];Ainv # extract the RHS block
_____no_output_____
MIT
Chapter 2 - Basic Matrix Algebra.ipynb
Jesse3692/Linear_Algebra_With_Python
I wrote a function to round the floating-point numbers to the 4th decimal place, but this is not absolutely necessary.
round_expr(Ainv, 4)
_____no_output_____
MIT
Chapter 2 - Basic Matrix Algebra.ipynb
Jesse3692/Linear_Algebra_With_Python
We can verify that $AA^{-1}=\mathbf{I}$
A = sy.Matrix(A) M = A*Ainv round_expr(M, 4)
_____no_output_____
MIT
Chapter 2 - Basic Matrix Algebra.ipynb
Jesse3692/Linear_Algebra_With_Python
We got $\mathbf{I}$, which means the RHS block is indeed $A^{-1}$. An Example of Existence of Inverse Determine the values of $\lambda$ such that the matrix$$A=\left[ \begin{matrix}3 &\lambda &1\cr 2 & -1 & 6\cr 1 & 9 & 4\end{matrix}\right]$$is not invertible. Again, we are using SymPy to solve the problem.
lamb = sy.symbols('lamda') # SymPy will automatically render into LaTeX greek letters A = np.array([[3, lamb, 1], [2, -1, 6], [1, 9, 4]]) I = np.eye(3) AI = np.hstack((A, I)) AI = sy.Matrix(AI) AI_rref = AI.rref() AI_rref
_____no_output_____
MIT
Chapter 2 - Basic Matrix Algebra.ipynb
Jesse3692/Linear_Algebra_With_Python
For the matrix $A$ to be invertible, we notice that there is one condition to be satisfied (it appears in every denominator):\begin{align}-6\lambda -465 &\neq0\\\end{align} Solve for $\lambda$.
sy.solvers.solve(-6*lamb-465, lamb)
_____no_output_____
MIT
Chapter 2 - Basic Matrix Algebra.ipynb
Jesse3692/Linear_Algebra_With_Python
Let's test this with the determinant. If $|\pmb A|=0$, then the matrix is not invertible. Don't worry, we will come back to this later.
A = np.array([[3, -155/2, 1], [2, -1, 6], [1, 9, 4]]) np.linalg.det(A)
_____no_output_____
MIT
Chapter 2 - Basic Matrix Algebra.ipynb
Jesse3692/Linear_Algebra_With_Python
Two Loop FDEM
from geoscilabs.base import widgetify import geoscilabs.em.InductionLoop as IND from ipywidgets import interact, FloatSlider, FloatText
_____no_output_____
MIT
notebooks/em/InductionRLcircuit_Harmonic.ipynb
jcapriot/gpgLabs
Parameter Descriptions Below are the adjustable parameters for widgets within this notebook:* $I_p$: Transmitter current amplitude [A]* $a_{Tx}$: Transmitter loop radius [m]* $a_{Rx}$: Receiver loop radius [m]* $x_{Rx}$: Receiver x position [m]* $z_{Rx}$: Receiver z position [m]* $\theta$: Receiver normal vector relative to vertical [degrees]* $R$: Resistance of receiver loop [$\Omega$]* $L$: Inductance of receiver loop [H]* $f$: Specific frequency [Hz]* $t$: Specific time [s] Background Theory: Induced Currents due to a Harmonic Primary Signal Consider the case in the image above, where a circular loop of wire ($Tx$) carries a harmonic current $I_p (\omega)$. According to the Biot-Savart law, this produces a harmonic primary magnetic field. The harmonic nature of the corresponding magnetic flux which passes through the receiver coil ($Rx$) generates an induced secondary current $I_s (\omega)$, which depends on the coil's resistance ($R$) and inductance ($L$). Here, we provide the final analytic results associated with the app below. Full derivations can be found at the bottom of the page. Frequency Response The frequency response which characterizes the induced currents in $Rx$ is given by:\begin{equation}I_s (\omega) = - \frac{i \omega A \beta_n}{R + i \omega L} I_p(\omega)\end{equation}where $A$ is the area of $Rx$ and $\beta_n$ contains the geometric information pertaining to the problem. The induced current has both in-phase and quadrature components. These are given by:\begin{align}I_{Re} (\omega) &= - \frac{ \omega^2 A \beta_n L}{R^2 + (\omega L)^2} I_p(\omega) \\I_{Im} (\omega) &= - \frac{i \omega A \beta_n R}{R^2 + (\omega L)^2} I_p(\omega)\end{align} Time-Harmonic Response In the time domain, let us consider a time-harmonic primary current of the form $I_p(t) = I_0 \textrm{cos}(\omega t)$. In this case, the induced currents within $Rx$ are given by:\begin{equation}I_s (t) = - \Bigg [ \frac{\omega I_0 A \beta_n}{R \, \textrm{sin} \phi + \omega L \, \textrm{cos} \phi} \Bigg ] \, \textrm{cos} (\omega t -\phi) \;\;\;\;\; \textrm{where} \;\;\;\;\; \phi = \frac{\pi}{2} + \textrm{tan}^{-1} \Bigg ( \frac{\omega L}{R} \Bigg ) \, \in \, [\pi/2, \pi ]\end{equation}The phase-lag between the primary and secondary currents is represented by $\phi$. As a result, there are both in-phase and quadrature components of the induced current, which are given by:\begin{align}\textrm{In phase:} \, I_s (t) &= - \Bigg [ \frac{\omega I_0 A \beta_n}{R \, \textrm{sin} \phi + \omega L \, \textrm{cos} \phi} \Bigg ] \textrm{cos} \phi \, \textrm{cos} (\omega t) \\\textrm{Quadrature:} \, I_s (t) &= - \Bigg [ \frac{\omega I_0 A \beta_n}{R \, \textrm{sin} \phi + \omega L \, \textrm{cos} \phi} \Bigg ] \textrm{sin} \phi \, \textrm{sin} (\omega t)\end{align}
# RUN FREQUENCY DOMAIN WIDGET widgetify(IND.fcn_FDEM_Widget,I=FloatSlider(min=1, max=10., value=1., step=1., continuous_update=False, description = "$I_0$"),\ a1=FloatSlider(min=1., max=20., value=10., step=1., continuous_update=False, description = "$a_{Tx}$"),\ a2=FloatSlider(min=1., max=20.,value=5.,step=1.,continuous_update=False,description = "$a_{Rx}$"),\ xRx=FloatSlider(min=-15., max=15., value=0., step=1., continuous_update=False, description = "$x_{Rx}$"),\ zRx=FloatSlider(min=-15., max=15., value=-8., step=1., continuous_update=False, description = "$z_{Rx}$"),\ azm=FloatSlider(min=-90., max=90., value=0., step=10., continuous_update=False, description = "$\\theta$"),\ logR=FloatSlider(min=0., max=6., value=2., step=1., continuous_update=False, description = "$log_{10}(R)$"),\ logL=FloatSlider(min=-7., max=-2., value=-4., step=1., continuous_update=False, description = "$log_{10}(L)$"),\ logf=FloatSlider(min=0., max=8., value=5., step=1., continuous_update=False, description = "$log_{10}(f)$"))
_____no_output_____
MIT
notebooks/em/InductionRLcircuit_Harmonic.ipynb
jcapriot/gpgLabs
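For readers who want to evaluate the frequency-response formula above outside the widget, here is a minimal NumPy sketch. All parameter values below are placeholders chosen for illustration, not the widget defaults:

```python
import numpy as np

# Placeholder parameters (illustrative values only)
R = 1e2             # receiver loop resistance [Ohm]
L = 1e-4            # receiver loop inductance [H]
A = np.pi * 5.0**2  # receiver loop area for a 5 m radius [m^2]
beta_n = 1e-7       # geometric coupling term (assumed value)
I_p = 1.0           # primary current amplitude [A]

f = np.logspace(0, 8, 200)   # frequencies [Hz]
w = 2 * np.pi * f            # angular frequencies [rad/s]

# I_s(w) = - i w A beta_n / (R + i w L) * I_p
I_s = -1j * w * A * beta_n * I_p / (R + 1j * w * L)

I_real = I_s.real   # in-phase component
I_imag = I_s.imag   # quadrature component
```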
ALBERTMRC Available Chinese pre-trained weights: [`albert-tiny`](https://storage.googleapis.com/albert_zh/albert_tiny_zh_google.zip), [`albert-small`](https://storage.googleapis.com/albert_zh/albert_small_zh_google.zip), [`albert-base`](https://storage.googleapis.com/albert_zh/albert_base_zh_additional_36k_steps.zip), [`albert-large`](https://storage.googleapis.com/albert_zh/albert_large_zh.zip), [`albert-xlarge`](https://storage.googleapis.com/albert_zh/albert_xlarge_zh_183k.zip)
import uf print(uf.__version__) model = uf.ALBERTMRC('../../demo/albert_config.json', '../../demo/vocab.txt') print(model) X = [{'doc': '天亮以前说再见,笑着泪流满面。去迎接应该你的,更好的明天', 'ques': '何时说的再见'}, {'doc': '他想知道那是谁,为何总沉默寡言。人群中也算抢眼,抢眼的孤独难免', 'ques': '抢眼的如何'}] y = [{'text': '天亮以前', 'answer_start': 0}, {'text': '孤独难免', 'answer_start': 27}]
_____no_output_____
Apache-2.0
examples/tutorial/ALBERTMRC.ipynb
dumpmemory/unif
Training
model.fit(X, y, total_steps=10)
WARNING:tensorflow:From /Users/geyingli/Library/Python/3.8/lib/python/site-packages/tensorflow/python/util/dispatch.py:1096: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version. Instructions for updating: Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
Apache-2.0
examples/tutorial/ALBERTMRC.ipynb
dumpmemory/unif
Inference
model.predict(X)
INFO:tensorflow:Time usage 0m-3.02s, 0.33 steps/sec, 0.66 examples/sec
Apache-2.0
examples/tutorial/ALBERTMRC.ipynb
dumpmemory/unif
Scoring
model.score(X, y)
INFO:tensorflow:Time usage 0m-2.28s, 0.44 steps/sec, 0.88 examples/sec
Apache-2.0
examples/tutorial/ALBERTMRC.ipynb
dumpmemory/unif
Neural Network **Learning Objectives:** * Use the `DNNRegressor` class in TensorFlow to predict median housing price The data is based on 1990 census data from California. This data is at the city block level, so these features reflect the total number of rooms in that block, or the total number of people who live on that block, respectively.Let's use a set of features to predict house value. Set UpIn this first cell, we'll load the necessary libraries.
import math import shutil import numpy as np import pandas as pd import tensorflow as tf tf.logging.set_verbosity(tf.logging.INFO) pd.options.display.max_rows = 10 pd.options.display.float_format = '{:.1f}'.format
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive/05_artandscience/labs/c_neuralnetwork.ipynb
09acp/training-data-analyst
Next, we'll load our data set.
df = pd.read_csv("https://storage.googleapis.com/ml_universities/california_housing_train.csv", sep=",")
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive/05_artandscience/labs/c_neuralnetwork.ipynb
09acp/training-data-analyst
Examine the dataIt's a good idea to get to know your data a little bit before you work with it.We'll print out a quick summary of a few useful statistics on each column.This will include things like mean, standard deviation, max, min, and various quantiles.
df.head() df.describe()
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive/05_artandscience/labs/c_neuralnetwork.ipynb
09acp/training-data-analyst
This data is at the city block level, so these features reflect the total number of rooms in that block, or the total number of people who live on that block, respectively. Let's create a different, more appropriate feature. Because we are predicting the price of a single house, we should try to make all our features correspond to a single house as well.
df['num_rooms'] = df['total_rooms'] / df['households'] df['num_bedrooms'] = df['total_bedrooms'] / df['households'] df['persons_per_house'] = df['population'] / df['households'] df.describe() df.drop(['total_rooms', 'total_bedrooms', 'population', 'households'], axis = 1, inplace = True) df.describe()
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive/05_artandscience/labs/c_neuralnetwork.ipynb
09acp/training-data-analyst
Build a neural network modelIn this exercise, we'll be trying to predict `median_house_value`. It will be our label (sometimes also called a target). We'll use the remaining columns as our input features.To train our model, we'll first use the [LinearRegressor](https://www.tensorflow.org/api_docs/python/tf/contrib/learn/LinearRegressor) interface. Then, we'll change to DNNRegressor
featcols = { colname : tf.feature_column.numeric_column(colname) \ for colname in 'housing_median_age,median_income,num_rooms,num_bedrooms,persons_per_house'.split(',') } # Bucketize lat, lon so it's not so high-res; California is mostly N-S, so more lats than lons featcols['longitude'] = tf.feature_column.bucketized_column(tf.feature_column.numeric_column('longitude'), np.linspace(-124.3, -114.3, 5).tolist()) featcols['latitude'] = tf.feature_column.bucketized_column(tf.feature_column.numeric_column('latitude'), np.linspace(32.5, 42, 10).tolist()) featcols.keys() # Split into train and eval msk = np.random.rand(len(df)) < 0.8 traindf = df[msk] evaldf = df[~msk] SCALE = 100000 BATCH_SIZE= 100 OUTDIR = './housing_trained' train_input_fn = tf.estimator.inputs.pandas_input_fn(x = traindf[list(featcols.keys())], y = traindf["median_house_value"] / SCALE, num_epochs = None, batch_size = BATCH_SIZE, shuffle = True) eval_input_fn = tf.estimator.inputs.pandas_input_fn(x = evaldf[list(featcols.keys())], y = evaldf["median_house_value"] / SCALE, # note the scaling num_epochs = 1, batch_size = len(evaldf), shuffle=False) # Linear Regressor def train_and_evaluate(output_dir, num_train_steps): myopt = tf.train.FtrlOptimizer(learning_rate = 0.01) # note the learning rate estimator = tf.estimator.LinearRegressor( model_dir = output_dir, feature_columns = featcols.values(), optimizer = myopt) #Add rmse evaluation metric def rmse(labels, predictions): pred_values = tf.cast(predictions['predictions'],tf.float64) return {'rmse': tf.metrics.root_mean_squared_error(labels*SCALE, pred_values*SCALE)} estimator = tf.contrib.estimator.add_metrics(estimator,rmse) train_spec=tf.estimator.TrainSpec( input_fn = train_input_fn, max_steps = num_train_steps) eval_spec=tf.estimator.EvalSpec( input_fn = eval_input_fn, steps = None, start_delay_secs = 1, # start evaluating after N seconds throttle_secs = 10, # evaluate every N seconds ) tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec) # Run training shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time train_and_evaluate(OUTDIR, num_train_steps = (100 * len(traindf)) / BATCH_SIZE) # DNN Regressor def train_and_evaluate(output_dir, num_train_steps): myopt = tf.train.FtrlOptimizer(learning_rate = 0.01) # note the learning rate estimator = # TODO: Implement DNN Regressor model #Add rmse evaluation metric def rmse(labels, predictions): pred_values = tf.cast(predictions['predictions'],tf.float64) return {'rmse': tf.metrics.root_mean_squared_error(labels*SCALE, pred_values*SCALE)} estimator = tf.contrib.estimator.add_metrics(estimator,rmse) train_spec=tf.estimator.TrainSpec( input_fn = train_input_fn, max_steps = num_train_steps) eval_spec=tf.estimator.EvalSpec( input_fn = eval_input_fn, steps = None, start_delay_secs = 1, # start evaluating after N seconds throttle_secs = 10, # evaluate every N seconds ) tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec) # Run training shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time tf.summary.FileWriterCache.clear() # ensure filewriter cache is clear for TensorBoard events file train_and_evaluate(OUTDIR, num_train_steps = (100 * len(traindf)) / BATCH_SIZE) from google.datalab.ml import TensorBoard pid = TensorBoard().start(OUTDIR) TensorBoard().stop(pid)
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive/05_artandscience/labs/c_neuralnetwork.ipynb
09acp/training-data-analyst
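One possible way to fill in the `# TODO` above is to swap the `LinearRegressor` for a `DNNRegressor`. This is a sketch, not the official lab solution, and the hidden-layer sizes are an arbitrary illustrative choice:

```python
estimator = tf.estimator.DNNRegressor(
    model_dir = output_dir,
    feature_columns = featcols.values(),
    hidden_units = [100, 50, 20],   # illustrative hidden-layer sizes
    optimizer = myopt)
```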
Uncomment the following line to install [geemap](https://geemap.org) if needed.
# !pip install geemap import ee import geemap geemap.show_youtube('k477ksjkaXw')
_____no_output_____
MIT
examples/notebooks/03_inspector_tool.ipynb
Jack-ee/geemap
Create an interactive map
Map = geemap.Map(center=(40, -100), zoom=4)
_____no_output_____
MIT
examples/notebooks/03_inspector_tool.ipynb
Jack-ee/geemap
Add Earth Engine Python script
# Add Earth Engine dataset dem = ee.Image('USGS/SRTMGL1_003') landcover = ee.Image("ESA/GLOBCOVER_L4_200901_200912_V2_3").select('landcover') landsat7 = ee.Image('LANDSAT/LE7_TOA_5YEAR/1999_2003').select( ['B1', 'B2', 'B3', 'B4', 'B5', 'B7'] ) states = ee.FeatureCollection("TIGER/2018/States") # Set visualization parameters. vis_params = { 'min': 0, 'max': 4000, 'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5'], } # Add Earth Eninge layers to Map Map.addLayer(dem, vis_params, 'SRTM DEM', True, 0.5) Map.addLayer(landcover, {}, 'Land cover') Map.addLayer( landsat7, {'bands': ['B4', 'B3', 'B2'], 'min': 20, 'max': 200, 'gamma': 2.0}, 'Landsat 7', ) Map.addLayer(states, {}, "US States") Map
_____no_output_____
MIT
examples/notebooks/03_inspector_tool.ipynb
Jack-ee/geemap
Exploring Neural Audio Synthesis with NSynth Parag Mital There is a lot to explore with NSynth. This notebook explores just a taste of what's possible including how to encode and decode, timestretch, and interpolate sounds. Also check out the [blog post](https://magenta.tensorflow.org/nsynth-fastgen) for more examples including two compositions created with Ableton Live. If you are interested in learning more, check out my [online course on Kadenze](https://www.kadenze.com/programs/creative-applications-of-deep-learning-with-tensorflow) where we talk about Magenta and NSynth in more depth. Part 1: Encoding and Decoding We'll walk through using the source code to encode and decode some audio. This is the most basic thing we can do with NSynth, and it will take at least about 6 minutes per 1 second of audio to perform on a GPU, though this will get faster! I'll first show you how to encode some audio. This is basically saying, here is some audio, now put it into the trained model. It's like the encoding of an MP3 file. It takes some raw audio, and represents it using a greatly reduced-down representation of the raw audio. NSynth works similarly, but we can actually mess with the encoding to do some awesome stuff. You can, for instance, mix it with other encodings, or slow it down, or speed it up. You can potentially even remove parts of it, mix many different encodings together, and hopefully just explore ideas yet to be thought of. After you've created your encoding, you have to just generate, or decode it, just like what an audio player does to an MP3 file. First, to install Magenta, follow their setup guide here: https://github.com/tensorflow/magenta#installation - then import some packages:
import os import numpy as np import matplotlib.pyplot as plt from magenta.models.nsynth import utils from magenta.models.nsynth.wavenet import fastgen from IPython.display import Audio %matplotlib inline %config InlineBackend.figure_format = 'jpg'
_____no_output_____
Apache-2.0
jupyter-notebooks/NSynth.ipynb
cclauss/magenta-demos
Now we'll load up a sound I downloaded from freesound.org. The `utils.load_audio` method will resample this to the required sample rate of 16000. I'll load in 40000 samples of this beat which should end up being a pretty good loop:
# from https://www.freesound.org/people/MustardPlug/sounds/395058/ fname = '395058__mustardplug__breakbeat-hiphop-a4-4bar-96bpm.wav' sr = 16000 audio = utils.load_audio(fname, sample_length=40000, sr=sr) sample_length = audio.shape[0] print('{} samples, {} seconds'.format(sample_length, sample_length / float(sr)))
40000 samples, 2.5 seconds
Apache-2.0
jupyter-notebooks/NSynth.ipynb
cclauss/magenta-demos
EncodingWe'll now encode some audio using the pre-trained NSynth model (download from: http://download.magenta.tensorflow.org/models/nsynth/wavenet-ckpt.tar). This is pretty fast, and takes about 3 seconds per 1 second of audio on my NVidia 1080 GPU. This will give us a 125 x 16 dimension encoding for every 4 seconds of audio which we can then decode, or resynthesize. We'll try a few things, including just leaving it alone and reconstructing it as is. But then we'll also try some fun transformations of the encoding and see what's possible from there.```help(fastgen.encode)Help on function encode in module magenta.models.nsynth.wavenet.fastgen:encode(wav_data, checkpoint_path, sample_length=64000) Generate an array of embeddings from an array of audio. Args: wav_data: Numpy array [batch_size, sample_length] checkpoint_path: Location of the pretrained model. sample_length: The total length of the final wave file, padded with 0s. Returns: encoding: a [mb, 125, 16] encoding (for 64000 sample audio file).```
%time encoding = fastgen.encode(audio, 'model.ckpt-200000', sample_length)
INFO:tensorflow:Restoring parameters from model.ckpt-200000 CPU times: user 53.2 s, sys: 2.83 s, total: 56 s Wall time: 20.2 s
Apache-2.0
jupyter-notebooks/NSynth.ipynb
cclauss/magenta-demos
This returns a 3-dimensional tensor representing the encoding of the audio. The first dimension of the encoding represents the batch dimension. We could have passed in many audio files at once and the process would be much faster. For now we've just passed in one audio file.
print(encoding.shape)
(1, 78, 16)
Apache-2.0
jupyter-notebooks/NSynth.ipynb
cclauss/magenta-demos
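Since the help text above documents `wav_data` as a `[batch_size, sample_length]` array, several equal-length clips can be encoded in one call. A minimal sketch, where the file names are placeholders:

```python
# Stack several equal-length clips into a single batch and encode them together.
fnames = ['clip1.wav', 'clip2.wav', 'clip3.wav']   # placeholder file names
sample_length = 40000
batch = np.array([utils.load_audio(f, sample_length=sample_length, sr=16000)
                  for f in fnames])                # shape: [3, 40000]
encodings = fastgen.encode(batch, 'model.ckpt-200000', sample_length)
print(encodings.shape)                             # [3, time, 16]
```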
We'll also save the encoding so that we can use it again later:
np.save(fname + '.npy', encoding)
_____no_output_____
Apache-2.0
jupyter-notebooks/NSynth.ipynb
cclauss/magenta-demos
Let's take a look at the encoding of this audio file. Think of these as 16 channels of sounds all mixed together (though with a lot of caveats):
fig, axs = plt.subplots(2, 1, figsize=(10, 5)) axs[0].plot(audio); axs[0].set_title('Audio Signal') axs[1].plot(encoding[0]); axs[1].set_title('NSynth Encoding')
_____no_output_____
Apache-2.0
jupyter-notebooks/NSynth.ipynb
cclauss/magenta-demos
You should be able to pretty clearly see a sort of beat-like pattern in both the signal and the encoding. Decoding Now we can decode the encodings as is. This is the process that takes a while, though it used to take so long that you wouldn't even dare try it. There is still plenty of room for improvement and I'm sure it will get faster very soon.```help(fastgen.synthesize)Help on function synthesize in module magenta.models.nsynth.wavenet.fastgen:synthesize(encodings, save_paths, checkpoint_path='model.ckpt-200000', samples_per_save=1000) Synthesize audio from an array of embeddings. Args: encodings: Numpy array with shape [batch_size, time, dim]. save_paths: Iterable of output file names. checkpoint_path: Location of the pretrained model. [model.ckpt-200000] samples_per_save: Save files after every amount of generated samples.```
%time fastgen.synthesize(encoding, save_paths=['gen_' + fname], samples_per_save=sample_length)
_____no_output_____
Apache-2.0
jupyter-notebooks/NSynth.ipynb
cclauss/magenta-demos
After it's done synthesizing, we can see that it takes about 6 minutes per 1 second of audio on a non-optimized version of Tensorflow for GPU on an NVidia 1080 GPU. We can speed things up considerably if we want to do multiple encodings at a time. We'll see that in just a moment. Let's first listen to the synthesized audio:
sr = 16000 synthesis = utils.load_audio('gen_' + fname, sample_length=sample_length, sr=sr)
_____no_output_____
Apache-2.0
jupyter-notebooks/NSynth.ipynb
cclauss/magenta-demos
Listening to the audio, the sounds are definitely different. NSynth seems to apply a sort of gobbly low-pass that also really doesn't know what to do with the high frequencies. It is really quite hard to describe, but that is what is so interesting about it. It has a recognizable, characteristic sound.Let's try another one. I'll put the whole workflow for synthesis in two cells, and we can listen to another synthesis of a vocalist singing, "Laaaa":
def load_encoding(fname, sample_length=None, sr=16000, ckpt='model.ckpt-200000'): audio = utils.load_audio(fname, sample_length=sample_length, sr=sr) encoding = fastgen.encode(audio, ckpt, sample_length) return audio, encoding # from https://www.freesound.org/people/maurolupo/sounds/213259/ fname = '213259__maurolupo__girl-sings-laa.wav' sample_length = 32000 audio, encoding = load_encoding(fname, sample_length) fastgen.synthesize( encoding, save_paths=['gen_' + fname], samples_per_save=sample_length) synthesis = utils.load_audio('gen_' + fname, sample_length=sample_length, sr=sr)
_____no_output_____
Apache-2.0
jupyter-notebooks/NSynth.ipynb
cclauss/magenta-demos
Aside from the quality of the reconstruction, what we're really after is what is possible with such a model. Let's look at two examples now. Part 2: TimestretchingLet's try something more fun. We'll stretch the encodings a bit and see what it sounds like. If you were to try and stretch audio directly, you'd hear a pitch shift. There are some other ways of stretching audio without shifting pitch, like granular synthesis. But it turns out that NSynth can also timestretch. Let's see how. First we'll use image interpolation to help stretch the encodings.
# use image interpolation to stretch the encoding: (pip install scikit-image) try: from skimage.transform import resize except ImportError: !pip install scikit-image from skimage.transform import resize
_____no_output_____
Apache-2.0
jupyter-notebooks/NSynth.ipynb
cclauss/magenta-demos
Here's a utility function to help you stretch your own encoding. It uses skimage.transform and will retain the range of values. Images typically only have a range of 0-1, but the encodings aren't actually images so we'll keep track of their min/max in order to stretch them like images.
def timestretch(encodings, factor): min_encoding, max_encoding = encoding.min(), encoding.max() encodings_norm = (encodings - min_encoding) / (max_encoding - min_encoding) timestretches = [] for encoding_i in encodings_norm: stretched = resize(encoding_i, (int(encoding_i.shape[0] * factor), encoding_i.shape[1]), mode='reflect') stretched = (stretched * (max_encoding - min_encoding)) + min_encoding timestretches.append(stretched) return np.array(timestretches) # from https://www.freesound.org/people/MustardPlug/sounds/395058/ fname = '395058__mustardplug__breakbeat-hiphop-a4-4bar-96bpm.wav' sample_length = 40000 audio, encoding = load_encoding(fname, sample_length)
INFO:tensorflow:Restoring parameters from model.ckpt-200000
Apache-2.0
jupyter-notebooks/NSynth.ipynb
cclauss/magenta-demos
Now let's stretch the encodings with a few different factors:
encoding_slower = timestretch(encoding, 1.5) encoding_faster = timestretch(encoding, 0.5)
_____no_output_____
Apache-2.0
jupyter-notebooks/NSynth.ipynb
cclauss/magenta-demos
Basically we've made a slower and faster version of the amen break's encodings. The original encoding is shown in black:
fig, axs = plt.subplots(3, 1, figsize=(10, 7), sharex=True, sharey=True) axs[0].plot(encoding[0]); axs[0].set_title('Encoding (Normal Speed)') axs[1].plot(encoding_faster[0]); axs[1].set_title('Encoding (Faster))') axs[2].plot(encoding_slower[0]); axs[2].set_title('Encoding (Slower)')
_____no_output_____
Apache-2.0
jupyter-notebooks/NSynth.ipynb
cclauss/magenta-demos
Now let's decode them:
fastgen.synthesize(encoding_faster, save_paths=['gen_faster_' + fname]) fastgen.synthesize(encoding_slower, save_paths=['gen_slower_' + fname]) audio = utils.load_audio('gen_slower_' + fname, sample_length=None, sr=sr) Audio(audio, rate=sr)
_____no_output_____
Apache-2.0
jupyter-notebooks/NSynth.ipynb
cclauss/magenta-demos
It seems to work pretty well and retains the pitch and timbre of the original sound. We could even quickly layer the sounds just by adding them. You might want to do this in a program like Logic or Ableton Live instead and explore more possibilities of these sounds! Part 3: Interpolating Sounds Now let's try something more experimental. NSynth released plenty of great examples of what happens when you mix the embeddings of different sounds: https://magenta.tensorflow.org/nsynth-instrument - we're going to do the same but now with our own sounds! First let's load some encodings:
sample_length = 80000 # from https://www.freesound.org/people/MustardPlug/sounds/395058/ aud1, enc1 = load_encoding('395058__mustardplug__breakbeat-hiphop-a4-4bar-96bpm.wav', sample_length) # from https://www.freesound.org/people/xserra/sounds/176098/ aud2, enc2 = load_encoding('176098__xserra__cello-cant-dels-ocells.wav', sample_length)
INFO:tensorflow:Restoring parameters from model.ckpt-200000 INFO:tensorflow:Restoring parameters from model.ckpt-200000
Apache-2.0
jupyter-notebooks/NSynth.ipynb
cclauss/magenta-demos
Now we'll mix the two audio signals together. But this is unlike adding the two signals together in Ableton or simply hearing both sounds at the same time. Instead, we're averaging the representation of their timbres, tonality, change over time, and resulting audio signal. This is way more powerful than a simple averaging.
enc_mix = (enc1 + enc2) / 2.0 fig, axs = plt.subplots(3, 1, figsize=(10, 7)) axs[0].plot(enc1[0]); axs[0].set_title('Encoding 1') axs[1].plot(enc2[0]); axs[1].set_title('Encoding 2') axs[2].plot(enc_mix[0]); axs[2].set_title('Average') fastgen.synthesize(enc_mix, save_paths='mix.wav')
_____no_output_____
Apache-2.0
jupyter-notebooks/NSynth.ipynb
cclauss/magenta-demos
As another example of what's possible with interpolation of embeddings, we'll try crossfading between the two embeddings. To do this, we'll write a utility function which will use a hanning window to apply a fade in or out to the embeddings matrix:
def fade(encoding, mode='in'): length = encoding.shape[1] fadein = (0.5 * (1.0 - np.cos(3.1415 * np.arange(length) / float(length)))).reshape(1, -1, 1) if mode == 'in': return fadein * encoding else: return (1.0 - fadein) * encoding fig, axs = plt.subplots(3, 1, figsize=(10, 7)) axs[0].plot(enc1[0]); axs[0].set_title('Original Encoding') axs[1].plot(fade(enc1, 'in')[0]); axs[1].set_title('Fade In') axs[2].plot(fade(enc1, 'out')[0]); axs[2].set_title('Fade Out')
_____no_output_____
Apache-2.0
jupyter-notebooks/NSynth.ipynb
cclauss/magenta-demos
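Because the fade-in and fade-out windows defined above sum to one at every time step, the two faded copies of an encoding add back up to the original. A quick numerical check on a dummy array (the shape is just an assumed `[batch, time, channels]` example, not a real encoding):

```python
import numpy as np

dummy = np.random.randn(1, 125, 16)                # dummy [batch, time, channels] encoding
recombined = fade(dummy, 'in') + fade(dummy, 'out')
assert np.allclose(recombined, dummy)              # the two windows are complementary
```

This complementarity is what makes the crossfade below behave like a smooth hand-off between the two encodings.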
Now we can crossfade two different encodings by adding their respective fade-ins and fade-outs:
def crossfade(encoding1, encoding2): return fade(encoding1, 'out') + fade(encoding2, 'in') fig, axs = plt.subplots(3, 1, figsize=(10, 7)) axs[0].plot(enc1[0]); axs[0].set_title('Encoding 1') axs[1].plot(enc2[0]); axs[1].set_title('Encoding 2') axs[2].plot(crossfade(enc1, enc2)[0]); axs[2].set_title('Crossfade')
_____no_output_____
Apache-2.0
jupyter-notebooks/NSynth.ipynb
cclauss/magenta-demos
Now let's synthesize the resulting encodings:
fastgen.synthesize(crossfade(enc1, enc2), save_paths=['crossfade.wav'])
_____no_output_____
Apache-2.0
jupyter-notebooks/NSynth.ipynb
cclauss/magenta-demos
Multiple Regression Notation: a - alpha, b - beta, i - ith user, e - error term. Equation - $y_{i} = a + b_{1}x_{i1} + b_{2}x_{i2} + \dots + b_{k}x_{ik} + e_{i}$ beta = [alpha, beta_1, beta_2,..., beta_k], x_i = [1, x_i1, x_i2,..., x_ik]
inputs = [[123,123,243],[234,455,578],[454,565,900],[705,456,890]] from typing import List from scratch.linear_algebra import dot, Vector def predict(x:Vector, beta: Vector) -> float: return dot(x,beta) def error(x:Vector, y:float, beta:Vector) -> float: return predict(x,beta) - y def squared_error(x:Vector, y:float, beta:Vector) -> float: return error(x,y,beta) ** 2 x = [1,2,3] y = 30 beta = [4,4,4] assert error(x,y,beta) == -6 assert squared_error(x,y,beta) == 36 def sqerror_gradient(x:Vector, y:float, beta:Vector) -> Vector: err = error(x,y,beta) return [2*err*x_i for x_i in x] assert sqerror_gradient(x,y,beta) == [-12,-24,-36] import random import tqdm from scratch.linear_algebra import vector_mean from scratch.gradient_descent import gradient_step def least_squares_fit(xs:List[Vector], ys:List[float], learning_rate: float=0.001, num_steps: int = 1000, batch_size: int = 1) -> Vector: guess = [random.random() for _ in xs[0]] for _ in tqdm.trange(num_steps, desc='least squares fit'): for start in range(0, len(x), batch_size): batch_xs = xs[start:start+batch_size] batch_ys = ys[start:start+batch_size] gradient = vector_mean([ sqerror_gradient(x,y,guess) for x,y in zip(batch_xs,batch_ys)]) guess = gradient_step(guess,gradient,-learning_rate) return guess from scratch.statistics import daily_minutes_good from scratch.gradient_descent import gradient_step random.seed(0) learning_rate = 0.001 beta = least_squares_fit(inputs,daily_minutes_good,learning_rate,5000,25) # ERROR ( no 'inputs' variable defined ) inputs = [[123,123,243],[234,455,578],[454,565,900],[705,456,890]] # inputs = [123,123,243,234,455,578,454,565,900,705,456,890] from scratch.simple_linear_regression import total_sum_of_squares def multiple_r_squared(xs:List[Vector], ys:Vector, beta:Vector) -> float: sum_of_squared_errors = sum(error(x,y,beta**2) for x,y in zip(xs,ys)) return 1.0 - sum_of_squared_errors/ total_sum_of_squares(ys) assert 0.67 < multiple_r_squared(inputs, daily_minutes_good, beta) < 0.68 # ERROR ( no 'inputs' variable defined )
_____no_output_____
Apache-2.0
Data_Science_from_Scratch ~ Book/Data_Science_Chapter_15.ipynb
kushagras71/data_science
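To make the prediction formula concrete, here is a tiny worked example using plain Python. The coefficient and feature values are illustrative only, not fitted from any data:

```python
# beta = [alpha, beta_1, beta_2, beta_3], x_i = [1, x_i1, x_i2, x_i3]
beta_example = [30.58, 0.972, -1.87, 0.923]   # illustrative coefficients
x_example = [1, 49, 4, 0]                     # leading 1 multiplies alpha

prediction = sum(b * x for b, x in zip(beta_example, x_example))
print(prediction)   # 30.58 + 0.972*49 - 1.87*4 + 0.923*0 = 70.728
```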
Digression: The Bootstrap
from typing import TypeVar, Callable X = TypeVar('X') Stat = TypeVar('Stat') def bootstrap_sample(data:List[X]) -> List[X]: return [random.choice(data) for _ in data] def bootstrap_statistics(data:List[X], stats_fn: Callable[[List[X]],Stat], num_samples: int) -> List[Stat]: return [stats_fn(bootstrap_sample(data)) for _ in range(num_samples)] close_to_100 = [99.5 + random.random() for _ in range(101)] far_from_100 = ([99.5 + random.random()] + [random.random() for _ in range(50)] + [200 + random.random() for _ in range(50)]) from scratch.statistics import median, standard_deviation median_close = bootstrap_statistics(close_to_100,median,100) median_far = bootstrap_statistics(far_from_100,median,100) print(median_close) print(median_far) from typing import Tuple import datetime def estimate_sample_beta(pairs:List[Tuple[Vector,float]]): x_sample = [x for x, _ in pairs] y_sample = [y for _, y in pairs] beta = least_squares_fit(x_sample,y_sample,learning_rate,5000,25) print("bootstrap sample",beta) return beta random.seed(0) bootstrap_betas = bootstrap_statistics(list(zip(inputs, daily_minutes_good)), estimate_sample_beta, 100) # ERROR ( no 'inputs' variable defined ) bootstrap_standard_errors = [ standard_deviation([beta[i] for beta in bootstrap_betas]) for i in range(4)] print(bootstrap_standard_errors) # ERROR ( no 'inputs' variable defined ) from scratch.probability import normal_cdf def p_value(beta_hat_j: float, sigma_hat_j:float) -> float: if beta_hat_j > 0: return 2 * (1 - normal_cdf(beta_hat_j/sigma_hat_j)) else: return 2 * normal_cdf(beta_hat_j/sigma_hat_j) assert p_value(30.58, 1.27) < 0.001 # constant term assert p_value(0.972, 0.103) < 0.001 # num_friends
_____no_output_____
Apache-2.0
Data_Science_from_Scratch ~ Book/Data_Science_Chapter_15.ipynb
kushagras71/data_science
Regularization
def ridge_penalty(beta:Vector, alpha:float)->float: return alpha*dot(beta[1:],beta[1:]) def squared_error_ridge(x: Vector, y: float, beta: Vector, alpha: float) -> float: return error(x, y, beta) ** 2 + ridge_penalty(beta, alpha) from scratch.linear_algebra import add def ridge_penalty_gradient(beta: Vector, alpha: float) -> Vector: return [0.] + [2 * alpha * beta_j for beta_j in beta[1:]] def sqerror_ridge_gradient(x: Vector, y: float, beta: Vector, alpha: float) -> Vector: return add(sqerror_gradient(x, y, beta), ridge_penalty_gradient(beta, alpha)) def least_squares_fit_ridge(xs:List[Vector], ys:List[float], learning_rate: float=0.001, num_steps: int = 1000, batch_size: int = 1) -> Vector: guess = [random.random() for _ in xs[0]] for _ in tqdm.trange(num_steps, desc='least squares fit'): for start in range(0, len(x), batch_size): batch_xs = xs[start:start+batch_size] batch_ys = ys[start:start+batch_size] gradient = vector_mean([ sqerror_ridge_gradient(x,y,guess) for x,y in zip(batch_xs,batch_ys)]) guess = gradient_step(guess,gradient,-learning_rate) return guess random.seed(0) beta_0 = least_squares_fit_ridge(inputs, daily_minutes_good, 0.0, # alpha learning_rate, 5000, 25) # [30.51, 0.97, -1.85, 0.91] assert 5 < dot(beta_0[1:], beta_0[1:]) < 6 assert 0.67 < multiple_r_squared(inputs, daily_minutes_good, beta_0) < 0.69 # ERROR ( no 'inputs' variable defined ) beta_0_1 = least_squares_fit_ridge(inputs, daily_minutes_good, 0.1, # alpha learning_rate, 5000, 25) # [30.8, 0.95, -1.83, 0.54] assert 4 < dot(beta_0_1[1:], beta_0_1[1:]) < 5 assert 0.67 < multiple_r_squared(inputs, daily_minutes_good, beta_0_1) < 0.69 beta_1 = least_squares_fit_ridge(inputs, daily_minutes_good, 1, # alpha learning_rate, 5000, 25) # [30.6, 0.90, -1.68, 0.10] assert 3 < dot(beta_1[1:], beta_1[1:]) < 4 assert 0.67 < multiple_r_squared(inputs, daily_minutes_good, beta_1) < 0.69 beta_10 = least_squares_fit_ridge(inputs, daily_minutes_good,10, # alpha learning_rate, 5000, 25) # [28.3, 0.67, -0.90, -0.01] assert 1 < dot(beta_10[1:], beta_10[1:]) < 2 assert 0.5 < multiple_r_squared(inputs, daily_minutes_good, beta_10) < 0.6 def lasso_penalty(beta, alpha): return alpha * sum(abs(beta_i) for beta_i in beta[1:])
_____no_output_____
Apache-2.0
Data_Science_from_Scratch ~ Book/Data_Science_Chapter_15.ipynb
kushagras71/data_science
A Two-sample t-test to find differentially expressed miRNAs between normal and tumor tissues in Lung Adenocarcinoma
import os import pandas mirna_src_dir = os.getcwd() + "/assn-mirna-luad/data/processed/miRNA/" clinical_src_dir = os.getcwd() + "/assn-mirna-luad/data/processed/clinical/" mirna_tumor_df = pandas.read_csv(mirna_src_dir+'tumor_miRNA.csv') mirna_normal_df = pandas.read_csv(mirna_src_dir+'normal_miRNA.csv') clinical_df = pandas.read_csv(clinical_src_dir+'clinical.csv') print "mirna_tumor_df.shape", mirna_tumor_df.shape print "mirna_normal_df.shape", mirna_normal_df.shape """ Here we select samples to use for our regression analysis """ matched_samples = pandas.merge(clinical_df, mirna_normal_df, on='patient_barcode')['patient_barcode'] # print "matched_samples", matched_samples.shape # merged = pandas.merge(clinical_df, mirna_tumor_df, on='patient_barcode') # print merged.shape # print # print merged['histological_type'].value_counts().sort_index(axis=0) # print # print merged['pathologic_stage'].value_counts().sort_index(axis=0) # print # print merged['pathologic_T'].value_counts().sort_index(axis=0) # print # print merged['pathologic_N'].value_counts().sort_index(axis=0) # print # print merged['pathologic_M'].value_counts().sort_index(axis=0) # print from sklearn import preprocessing import numpy as np X_normal = mirna_normal_df[mirna_normal_df['patient_barcode'].isin(matched_samples)].sort_values(by=['patient_barcode']).copy() X_tumor = mirna_tumor_df.copy() X_tumor_matched = mirna_tumor_df[mirna_tumor_df['patient_barcode'].isin(matched_samples)].sort_values(by=['patient_barcode']).copy() X_normal.__delitem__('patient_barcode') X_tumor_matched.__delitem__('patient_barcode') X_tumor.__delitem__('patient_barcode') print "X_normal.shape", X_normal.shape print "X_tumor.shape", X_tumor.shape print "X_tumor_matched.shape", X_tumor_matched.shape mirna_list = X.columns.values # X_scaler = preprocessing.StandardScaler(with_mean=False).fit(X) # X = X_scaler.transform(X) from scipy.stats import ttest_rel import matplotlib.pyplot as plt ttest = ttest_rel(X_tumor_matched, X_normal) plt.plot(ttest[1], ls='', marker='.') plt.title('Two sample t-test between tumor and normal LUAD tissues') plt.ylabel('p-value') plt.xlabel('miRNA\'s') plt.show() from scipy.stats import ttest_ind ttest_2 = ttest_2_ind(X_tumor, X_normal) plt.plot(ttest_2[1], ls='', marker='.') plt.title('Independent sample t-test between tumor and normal LUAD tissues') plt.ylabel('p-value') plt.xlabel('miRNA\'s') plt.show()
_____no_output_____
FTL
notebooks/tumor_vs_normal_classification/tumor_vs_normal_miRNA-ttest.ipynb
JonnyTran/microRNA-Lung-Cancer-Associations
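As a self-contained illustration of how the per-miRNA p-values from `scipy.stats.ttest_rel` could be thresholded to flag differentially expressed miRNAs, here is a sketch on synthetic data. The array shapes and the Bonferroni cutoff are assumptions for the example, not choices made in the notebook above:

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.RandomState(0)
tumor = rng.normal(loc=5.0, scale=1.0, size=(30, 100))   # 30 matched samples x 100 miRNAs
normal = rng.normal(loc=4.5, scale=1.0, size=(30, 100))

t_stat, p_vals = ttest_rel(tumor, normal)   # paired t-test per miRNA (column-wise)
alpha = 0.05 / p_vals.size                  # naive Bonferroni correction (assumed choice)
significant = np.where(p_vals < alpha)[0]   # indices of flagged miRNAs
print(len(significant))                     # number of differentially expressed miRNAs
```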
Step 7: Serve data from OpenAgua into WEAP using WaMDaM By Adel M. Abdallah, Dec 2020 Execute the following cells by pressing `Shift-Enter`, or by pressing the play button on the toolbar above. Steps 1. Import python libraries 2. Import the published SQLite file for the WEAP model from HydroShare. 3. Prepare to connect to the WEAP API 4. Connect to WEAP API to programmatically populate WEAP with data, run it, get back results Create a copy of the original WEAP Area to use while keeping the original as-is for any later use 5.3 Export the unmet demand percent into Excel to load them into WaMDaM 1. Import python libraries
# 1. Import python libraries ### set the notebook mode to embed the figures within the cell import numpy import sqlite3 import numpy as np import pandas as pd import getpass from hs_restclient import HydroShare, HydroShareAuthBasic import os import plotly plotly.__version__ import plotly.offline as offline import plotly.graph_objs as go from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot offline.init_notebook_mode(connected=True) from plotly.offline import init_notebook_mode, iplot from plotly.graph_objs import * init_notebook_mode(connected=True) # initiate notebook for offline plot import os import csv from collections import OrderedDict import sqlite3 import pandas as pd import numpy as np from IPython.display import display, Image, SVG, Math, YouTubeVideo import urllib import calendar print 'The needed Python libraries have been imported'
_____no_output_____
BSD-3-Clause
3_VisualizePublish/07_Step7_Serve_NewScenarios_WEAP.ipynb
WamdamProject/WaMDaM_JupyterNotebooks
2. Connect to the WaMDaM SQLite on HydroShare Provide the HydroShare ID for your resource Example https://www.hydroshare.org/resource/af71ef99a95e47a89101983f5ec6ad8b/ resource_id='85e9fe85b08244198995558fe7d0e294'
# enter your HydroShare username and password here between the quotes username = '' password = '' auth = HydroShareAuthBasic(username=username, password=password) hs = HydroShare(auth=auth) print 'Connected to HydroShare' # Then we can run queries against it within this notebook :) resource_url='https://www.hydroshare.org/resource/af71ef99a95e47a89101983f5ec6ad8b/' resource_id= resource_url.split("https://www.hydroshare.org/resource/",1)[1] resource_id=resource_id.replace('/','') print resource_id resource_md = hs.getSystemMetadata(resource_id) # print resource_md print 'Resource title' print(resource_md['resource_title']) print '----------------------------' resources=hs.resource(resource_id).files.all() file = "" for f in hs.resource(resource_id).files.all(): file += f.decode('utf8') import json file_json = json.loads(file) for f in file_json["results"]: FileURL= f["url"] SQLiteFileName=FileURL.split("contents/",1)[1] cwd = os.getcwd() print cwd fpath = hs.getResourceFile(resource_id, SQLiteFileName, destination=cwd) conn = sqlite3.connect(SQLiteFileName,timeout=10) print 'Connected to the SQLite file= '+ SQLiteFileName print 'done'
_____no_output_____
BSD-3-Clause
3_VisualizePublish/07_Step7_Serve_NewScenarios_WEAP.ipynb
WamdamProject/WaMDaM_JupyterNotebooks
2. Prepare to Connect to the WEAP API You need to have WEAP already installed on your machine First make sure to have a copy of the "Water Evaluation And Planning" system (WEAP) installed on your local machine (Windows). If you don’t have it installed, download and install the WEAP software which allows you to run the Bear River WEAP model and its scenarios for Use Case 5. https://www.weap21.org/. You need to have a WEAP License. See here (https://www.weap21.org/index.asp?action=217). If you're interested in learning about the WEAP API, check it out here: http://www.weap21.org/WebHelp/API.htm Install dependency and register WEAP 2.1. Install pywin32 extensions which provide access to many of the Windows APIs from Python.**Choose one option*** a. Install using an executable based on your python version. Use the version for Python 2.7: https://github.com/mhammond/pywin32/releases **OR** * b. Install it using the Anaconda terminal @ https://anaconda.org/anaconda/pywin32 Type this command in the Anaconda terminal as Administrator conda install -c anaconda pywin32 **OR*** c. Install from source code (for advanced users) https://github.com/mhammond/pywin32 2.2. Register WEAP with Windows This use case only works on a local Jupyter Notebook server installed on your machine along with WEAP. So it does not work on the online Notebooks in Step 2.1. You need to install Jupyter Server in Step 2.2 then proceed here.* **Register WEAP with Windows to allow the WEAP API to be accessed** Use Windows "Command Prompt". Right click and then **run as Administrator**, navigate to the WEAP installation directory, such as the one below, and then hit enter ```cd C:\Program Files (x86)\WEAP```Then type the following command in the command prompt and hit enter ```WEAP /regserver``` Figure 1: Register WEAP API with windows using the Command Prompt (Run as Administrator) 3. Connect Jupyter Notebook to WEAP API Clone or download all this GitHub repo https://github.com/WamdamProject/WaMDaM_UseCases In your local repo folder, go to the C:\Users\Adel\Documents\GitHub\WaMDaM_UseCases/UseCases_files/1Original_Datasets_preperation_files/WEAP/Bear_River_WEAP_Model_2017 Copy this folder **Bear_River_WEAP_Model_2017** and paste it into the **WEAP Areas** folder on your local machine. For example, it is at C:\Users\Adel\Documents\WEAP Areas
# this library is needed to connect to the WEAP API import win32com.client # this command will open the WEAP software (if closed) and get the last active model # you could change the active area to another one inside WEAP or by passing it to the command here #WEAP.ActiveArea = "BearRiverFeb2017_V10.9" WEAP=win32com.client.Dispatch("WEAP.WEAPApplication") # WEAP.Visible = 'FALSE' print WEAP.ActiveArea.Name WEAP.ActiveArea = "Bear_River_WEAP_Model_2017_Original" print WEAP.ActiveArea.Name WEAP.Areas("Bear_River_WEAP_Model_2017_Original").Open WEAP.ActiveArea = "Bear_River_WEAP_Model_2017_Original" print WEAP.ActiveArea.Name print 'Connected to WEAP API and the '+ WEAP.ActiveArea.Name + ' Area' print '-------------' if not WEAP.Registered: print "Because WEAP is not registered, you cannot use the API" # get the active WEAP Area (model) to serve data into it # ActiveArea=WEAP.ActiveArea.Name # get the active WEAP scenario to serve data into it print '-------------' ActiveScenario= WEAP.ActiveScenario.Name print '\n ActiveScenario= '+ActiveScenario print '-------------' WEAP_Area_dir=WEAP.AreasDirectory print WEAP_Area_dir print "\n \n You're connected to the WEAP API"
_____no_output_____
BSD-3-Clause
3_VisualizePublish/07_Step7_Serve_NewScenarios_WEAP.ipynb
WamdamProject/WaMDaM_JupyterNotebooks
4 Create a copy of the original WEAP Area to use while keeping the original as-is for any later use Add a new CacheCountyUrbanWaterUse scenario from the Reference original WEAP Area: You can always use this original one and delete any new copies you make afterwards.
# Create a copy of the WEAP AREA to serve the updated Hyrym Reservoir to it # Delete the Area if it exists and then add it. Start from fresh Area="Bear_River_WEAP_Model_2017_Conservation" if not WEAP.Areas.Exists(Area): WEAP.SaveAreaAs(Area) WEAP.ActiveArea.Save WEAP.ActiveArea = "Bear_River_WEAP_Model_2017_Conservation" print 'ActiveArea= '+WEAP.ActiveArea.Name # Add new Scenario # Add(NewScenarioName, ParentScenarioName or Index): # Create a new scenario as a child of the parent scenario specified. # The new scenario will become the selected scenario in the Data View. WEAP=win32com.client.Dispatch("WEAP.WEAPApplication") # WEAP.Visible = FALSE WEAP.ActiveArea = "Bear_River_WEAP_Model_2017_Conservation" print 'ActiveArea= '+ WEAP.ActiveArea.Name Scenarios=[] Scenarios=['Cons25PercCacheUrbWaterUse','Incr25PercCacheUrbWaterUse'] # Delete the scenario if it exists and then add it. Start from fresh for Scenario in Scenarios: if WEAP.Scenarios.Exists(Scenario): # delete it WEAP.Scenarios(Scenario).Delete(True) # add it back as a fresh copy WEAP.Scenarios.Add(Scenario,'Reference') else: WEAP.Scenarios.Add(Scenario,'Reference') WEAP.ActiveArea.Save WEAP.SaveArea WEAP.Quit # or add the scenarios one by one using this command # Make a copy from the reference (base) scenario # WEAP.Scenarios.Add('UpdateCacheDemand','Reference') print '---------------------- \n' print 'Scenarios added to the original WEAP area' WEAP.Quit print 'Connection with WEAP API is disconnected'
_____no_output_____
BSD-3-Clause
3_VisualizePublish/07_Step7_Serve_NewScenarios_WEAP.ipynb
WamdamProject/WaMDaM_JupyterNotebooks
4.A Query Cache County seasonal "Monthly Demand" for the three sites: Logan Potable, North Cache Potable, South Cache Potable The data comes from OpenAgua
# Use Case 3.1Identify_aggregate_TimeSeriesValues.csv # plot aggregated to monthly and converted to acre-feet time series data of multiple sources # Logan Potable # North Cache Potable # South Cache Potable # 2.2Identify_aggregate_TimeSeriesValues.csv Query_UseCase_URL=""" https://raw.githubusercontent.com/WamdamProject/WaMDaM_JupyterNotebooks/master/3_VisualizePublish/SQL_queries/WEAP/Query_demand_sites.sql """ # Read the query text inside the URL Query_UseCase_text = urllib.urlopen(Query_UseCase_URL).read() # return query result in a pandas data frame result_df_UseCase= pd.read_sql_query(Query_UseCase_text, conn) # uncomment the below line to see the list of attributes # display (result_df_UseCase) seasons_dict = dict() seasons_dict2=dict() Scenarios=['Cons25PercCacheUrbWaterUse','Incr25PercCacheUrbWaterUse'] subsets = result_df_UseCase.groupby(['ScenarioName','InstanceName']) for subset in subsets.groups.keys(): if subset[0] in Scenarios: df_Seasonal = subsets.get_group(name=subset) df_Seasonal=df_Seasonal.reset_index() SeasonalParam = '' for i in range(len(df_Seasonal['SeasonName'])): m_data = df_Seasonal['SeasonName'][i] n_data = float(df_Seasonal['SeasonNumericValue'][i]) SeasonalParam += '{},{}'.format(m_data, n_data) if i != len(df_Seasonal['SeasonName']) - 1: SeasonalParam += ',' Seasonal_value="MonthlyValues("+SeasonalParam+")" seasons_dict[subset]=(Seasonal_value) # seasons_dict2[subset[0]]=seasons_dict # print seasons_dict2 print '-----------------' # print seasons_dict # seasons_dict2.get("Cons25PercCacheUrbWaterUse", {}).get("Logan Potable") # 1 print 'Query and data preperation are done'
_____no_output_____
BSD-3-Clause
3_VisualizePublish/07_Step7_Serve_NewScenarios_WEAP.ipynb
WamdamProject/WaMDaM_JupyterNotebooks
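For reference, the WEAP expression assembled in the loop above has the form `MonthlyValues(month, value, month, value, ...)`. A standalone sketch of the string construction, with made-up month labels and demand values:

```python
seasons = ['Jan', 'Feb', 'Mar']    # made-up month labels
values = [120.0, 95.5, 110.25]     # made-up demand values

pairs = ','.join('{},{}'.format(m, v) for m, v in zip(seasons, values))
expression = 'MonthlyValues({})'.format(pairs)
print(expression)   # MonthlyValues(Jan,120.0,Feb,95.5,Mar,110.25)
```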
4.B Load the seasonal demand data with conservation into WEAP
# 9. Load the seasonal data into WEAP #WEAP=win32com.client.Dispatch("WEAP.WEAPApplication") # WEAP.Visible = FALSE print WEAP.ActiveArea.Name Scenarios=['Cons25PercCacheUrbWaterUse','Incr25PercCacheUrbWaterUse'] DemandSites=['Logan Potable','North Cache Potable','South Cache Potable'] AttributeName='Monthly Demand' for scenario in Scenarios: WEAP.ActiveScenario = scenario print WEAP.ActiveScenario.Name for Branch in WEAP.Branches: for InstanceName in DemandSites: if Branch.Name == InstanceName: GetInstanceFullBranch = Branch.FullName val=seasons_dict[(scenario,InstanceName)] WEAP.Branch(GetInstanceFullBranch).Variable(AttributeName).Expression =val # print val print "loaded " + InstanceName WEAP.SaveArea print '\n The data have been sucsesfully loaded into WEAP' WEAP.SaveArea print '\n \n The updated data have been saved'
_____no_output_____
BSD-3-Clause
3_VisualizePublish/07_Step7_Serve_NewScenarios_WEAP.ipynb
WamdamProject/WaMDaM_JupyterNotebooks