code: string
signature: string
docstring: string
loss_without_docstring: float64
loss_with_docstring: float64
factor: float64
if not dtime:
    dtime = datetime.now()
if not isinstance(dtime, datetime):
    raise TypeError('dtime should be datetime, but we got {}'.format(type(dtime)))
return time.mktime(dtime.timetuple())
def transform_datetime_to_unix(dtime=None)
Convert a datetime instance to a unix timestamp

:param:
    * dtime: (datetime) a datetime instance, defaults to the current time
:return:
    * (float) unix timestamp

Example::

    print('--- transform_datetime_to_unix demo ---')
    dtime = datetime.datetime.now()
    ans_time = transform_datetime_to_unix(dtime)
    print(ans_time)
    print('---')

Output::

    --- transform_datetime_to_unix demo ---
    1535108620.0
    ---
2.472938
3.241449
0.762911
now = datetime.now()
this_month_start = now.replace(day=1, hour=0, minute=0, second=0, microsecond=0)
this_month_days = calendar.monthrange(now.year, now.month)
random_seconds = random.randint(0, this_month_days[1] * A_DAY_SECONDS)
return this_month_start + timedelta(seconds=random_seconds)
def date_time_this_month()
Get a random time within the current month

:return:
    * date_this_month: (datetime) a random time within the current month

Example::

    print('--- GetRandomTime.date_time_this_month demo ---')
    print(GetRandomTime.date_time_this_month())
    print('---')

Output::

    --- GetRandomTime.date_time_this_month demo ---
    2018-07-01 12:47:20
    ---
2.549463
2.621211
0.972628
now = datetime.now()
this_year_start = now.replace(month=1, day=1, hour=0, minute=0, second=0, microsecond=0)
this_year_days = sum(calendar.mdays)
random_seconds = random.randint(0, this_year_days * A_DAY_SECONDS)
return this_year_start + timedelta(seconds=random_seconds)
def date_time_this_year()
Get a random time within the current year

:return:
    * date_this_year: (datetime) a random time within the current year

Example::

    print('--- GetRandomTime.date_time_this_year demo ---')
    print(GetRandomTime.date_time_this_year())
    print('---')

Output::

    --- GetRandomTime.date_time_this_year demo ---
    2018-02-08 17:16:09
    ---
3.11154
3.36329
0.925148
if isinstance(year, int) and len(str(year)) != 4:
    raise ValueError("year should be int year like 2018, but we got {}, {}".
                     format(year, type(year)))
if isinstance(year, str) and len(year) != 4:
    raise ValueError("year should be string year like '2018', but we got {}, {}".
                     format(year, type(year)))
if isinstance(year, int):
    year = str(year)
date_str = GetRandomTime.gen_date_by_range(year + "-01-01", year + "-12-31", "%Y%m%d")
return date_str
def gen_date_by_year(year)
Get a random date string within the given year

:param:
    * year: (string) a four-character year string
:return:
    * date_str: (string) a random valid date within the given year

Example::

    print('--- GetRandomTime.gen_date_by_year demo ---')
    print(GetRandomTime.gen_date_by_year("2010"))
    print('---')

Output::

    --- GetRandomTime.gen_date_by_year demo ---
    20100505
    ---
2.735556
2.49913
1.094603
if not date_time:
    datetime_now = datetime.now()
else:
    datetime_now = date_time
if not time_format:
    time_format = '%Y/%m/%d %H:%M:%S'
return datetime.strftime(datetime_now, time_format)
def strftime(date_time=None, time_format=None)
Convert a datetime object to a str

:param:
    * date_time: (obj) datetime object
    * time_format: (string) date format string
:return:
    * date_time_str: (string) date string
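No demo accompanies this docstring; a minimal usage sketch, assuming the default format '%Y/%m/%d %H:%M:%S' from the code above::

    from datetime import datetime

    # default format, current time
    print(strftime())                                   # e.g. 2018/05/26 12:06:31
    # explicit datetime and custom format
    print(strftime(datetime(2018, 5, 26), '%Y-%m-%d'))  # 2018-05-26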
1.904186
2.092056
0.910198
try:
    datetime_obj = datetime.strptime(time_str, time_format)
    return datetime_obj
except ValueError as ex:
    raise ValueError(ex)
def strptime(time_str, time_format)
Convert a str to a datetime object

:param:
    * time_str: (string) date string
    * time_format: (string) date format string
:return:
    * datetime_obj: (obj) datetime object
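A matching usage sketch (the date string is a made-up example)::

    # parse a date string back into a datetime object
    obj = strptime('2018-05-26 12:06:31', '%Y-%m-%d %H:%M:%S')
    print(obj)   # 2018-05-26 12:06:31
    # a malformed string raises ValueError
    strptime('not-a-date', '%Y-%m-%d')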
2.660292
2.736398
0.972188
if os.path.isfile(project_config):
    project_config = open(project_config)
try:
    yml_data = yaml.load(project_config)
    project_name = yml_data['project']
    project_tree = yml_data['tree']
except Exception as e:
    raise KeyError('project config format Error: {}'.format(e))
if dist is None:
    dist = os.path.abspath('.')
project_path = os.path.join(dist, project_name)
_makedirs_if_not_extis(project_path)
_makedirs_and_files(project_path, project_tree)
def init_project_by_yml(project_config=None, dist=None)
Initialize a project from a configuration file

:param:
    * project_config: (string) the configuration file used to generate the project
    * dist: (string) project location

Example::

    print('--- init_project_by_yml demo ---')
    # define yml string
    package_yml = '''
    project: hellopackage
    tree:
        - README.md
        - requirements.txt
        - setup.py
        - MANIFEST.in
        - hellopackage:  # project name
            - __init__.py
        - test:  # unittest file
            - __init__.py
        - demo:  # usage demo
            - __init__.py
        - doc:  # documents
    '''
    # init project by yml
    init_project_by_yml(package_yml, '.')
    print(os.listdir('./hellopackage'))
    print('---')

Output::

    --- init_project_by_yml demo ---
    ['demo', 'requirements.txt', 'test', 'MANIFEST.in', 'hellopackage', 'README.md', 'setup.py', 'doc']
    ---
3.091022
2.862667
1.07977
try:
    cur_path = pathlib.Path.cwd()
    abs_filename = cur_path / pathlib.Path(sub_path) / filename
    flag = pathlib.Path.is_file(abs_filename)
    # convert the path object to a string
    return flag, str(abs_filename)
except:
    flag = False
    return flag, None
def get_abs_filename_with_sub_path(sub_path, filename)
Build the full filename of a file in a sub-path one level below the current path;

:param:
    * sub_path: (string) name of the sub-path one level below
    * filename: (string) name of a file within that sub-path
:returns:
    * a tuple of two values; the first is flag, the second is the filename, as described below
    * flag: (bool) True if the file exists, False if it does not
    * abs_filename: (string) the path-qualified full filename of filename

Example::

    print('--- get_abs_filename_with_sub_path demo ---')
    # define sub dir
    path_name = 'sub_dir'
    # define not exists file
    filename = 'test_file.txt'
    abs_filename = get_abs_filename_with_sub_path(path_name, filename)
    # return False and abs filename
    print(abs_filename)
    # define exists file
    filename = 'demo.txt'
    abs_filename = get_abs_filename_with_sub_path(path_name, filename)
    # return True and abs filename
    print(abs_filename)
    print('---')

Output::

    --- get_abs_filename_with_sub_path demo ---
    (False, '/Users/****/Documents/dev_python/fishbase/demo/sub_dir/test_file.txt')
    (True, '/Users/****/Documents/dev_python/fishbase/demo/sub_dir/demo.txt')
    ---
4.094023
3.751352
1.091346
# get the current path
temp_path = pathlib.Path()
cur_path = temp_path.resolve()
# build the path containing sub_path_name
path = cur_path / pathlib.Path(sub_path)
# check whether the sub_path exists
if path.exists():
    # return True: path exists, False: no need to create
    return True, False
else:
    path.mkdir(parents=True)
    # return False: path did not exist, True: path has been created
    return False, True
def check_sub_path_create(sub_path)
Check whether a sub-path exists under the current path, and create it if it does not;

:param:
    * sub_path: (string) name of the sub-path one level below
:return:
    * a tuple of two values
    * True: the path exists, False: no need to create it
    * False: the path did not exist, True: it was created

Example::

    print('--- check_sub_path_create demo ---')
    # define the sub-path name
    sub_path = 'demo_sub_dir'
    # check whether the sub-path exists under the current path, create it if not
    print('check sub path:', sub_path)
    result = check_sub_path_create(sub_path)
    print(result)
    print('---')

Output::

    --- check_sub_path_create demo ---
    check sub path: demo_sub_dir
    (True, False)
    ---
4.657884
3.839832
1.213044
# check the length; fail immediately if it is not 17 digits
if len(id_number_str) != 17:
    return False, -1
id_regex = '[1-9][0-9]{14}([0-9]{2}[0-9X])?'
if not re.match(id_regex, id_number_str):
    return False, -1
items = [int(item) for item in id_number_str]
# weighting factor table
factors = (7, 9, 10, 5, 8, 4, 2, 1, 6, 3, 7, 9, 10, 5, 8, 4, 2)
# sum of the products of each of the 17 digits and its weighting factor
copulas = sum([a * b for a, b in zip(factors, items)])
# check code table
check_codes = ('1', '0', 'X', '9', '8', '7', '6', '5', '4', '3', '2')
checkcode = check_codes[copulas % 11].upper()
return True, checkcode
def get_checkcode(cls, id_number_str)
Compute the check digit of a Chinese ID number;

:param:
    * id_number_str: (string) the first 17 digits of the ID number, e.g. 3201241987010100
:returns:
    * a tuple
    * flag: (bool) True if the ID number format is correct, False otherwise
    * checkcode: the check code computed from the first 17 digits

Example::

    from fishbase.fish_data import *
    print('--- fish_data get_checkcode demo ---')
    # id number
    id1 = '32012419870101001'
    print(id1, IdCard.get_checkcode(id1)[1])
    # id number
    id2 = '13052219840731647'
    print(id2, IdCard.get_checkcode(id2)[1])
    print('---')

Output::

    --- fish_data get_checkcode demo ---
    32012419870101001 5
    13052219840731647 1
    ---
2.305424
2.22068
1.038161
if isinstance(id_number, int):
    id_number = str(id_number)
# compute the check code for the first 17 digits of the id number
result = IdCard.get_checkcode(id_number[0:17])
# if the first flag is False, the id number format is wrong; pass it straight through
flag = result[0]
checkcode = result[1]
if not flag:
    return flag,
# check whether the check code is correct
return checkcode == id_number[-1].upper(),
def check_number(cls, id_number)
Check whether an ID number passes the checksum rule;

:param:
    * id_number: (string) the ID number, e.g. 32012419870101001
:returns:
    * a tuple; currently it has one value, flag; in the future a second value will
      report the detailed validation error
    * flag: (bool) True if the ID number passes validation, False otherwise

Example::

    from fishbase.fish_data import *
    print('--- fish_data check_number demo ---')
    # id number false
    id1 = '320124198701010012'
    print(id1, IdCard.check_number(id1)[0])
    # id number true
    id2 = '130522198407316471'
    print(id2, IdCard.check_number(id2)[0])
    print('---')

Output::

    --- fish_data check_number demo ---
    320124198701010012 False
    130522198407316471 True
    ---
6.212594
5.960371
1.042317
values = []
if match_type == 'EXACT':
    values = sqlite_query('fish_data.sqlite',
                          'select zone, areanote from cn_idcard where areanote = :area',
                          {"area": area_str})
if match_type == 'FUZZY':
    values = sqlite_query('fish_data.sqlite',
                          'select zone, areanote from cn_idcard where areanote like :area',
                          {"area": '%' + area_str + '%'})
# handle the result count according to result_type
if result_type == 'LIST':
    # if there are more than 20 records, return only the first 20 results
    if len(values) > 20:
        values = values[0:20]
    return values
if result_type == 'SINGLE_STR':
    if len(values) == 0:
        return ''
    if len(values) > 0:
        value_str = values[0][0]
        return value_str
def get_zone_info(cls, area_str, match_type='EXACT', result_type='LIST')
Given text containing province, city, or district information, return the zone code;

:param:
    * area_str: (string) the area to query: province, city, or district, e.g. 北京市
    * match_type: (string) match mode, default 'EXACT' for an exact match, or 'FUZZY' for a fuzzy search
    * result_type: (string) result type, default 'LIST' to return a list, or 'SINGLE_STR'
      to return the first zone code as a string
:returns:
    * the return type depends on result_type: a list of tuples or a single string,
      e.g. [('110000', '北京市')]; the first element of each tuple is the zone code,
      the second is the matching area; at most 20 results are returned

Example::

    from fishbase.fish_data import *
    print('--- fish_data get_zone_info demo ---')
    result = IdCard.get_zone_info(area_str='北京市')
    print(result)
    # fuzzy search
    result = IdCard.get_zone_info(area_str='西安市', match_type='FUZZY')
    print(result)
    result0 = []
    for i in result:
        result0.append(i[0])
    print('---西安市---')
    print(len(result0))
    print(result0)
    # fuzzy search, result type SINGLE_STR
    result = IdCard.get_zone_info(area_str='西安市', match_type='FUZZY', result_type='SINGLE_STR')
    print(result)
    # fuzzy search, result type SINGLE_STR; note the difference between 西安市 and 西安
    result = IdCard.get_zone_info(area_str='西安', match_type='FUZZY', result_type='SINGLE_STR')
    print(result)
    print('---')

Output::

    --- fish_data get_zone_info demo ---
    [('110000', '北京市')]
    ---西安市---
    11
    ['610100', '610101', '610102', '610103', '610104', '610111', '610112', '610113', '610114', '610115', '610116']
    610100
    220403
    ---
3.822183
3.112392
1.228053
total = 0
even = True
for item in card_number_str[-1::-1]:
    item = int(item)
    if even:
        item <<= 1
        if item > 9:
            item -= 9
    total += item
    even = not even
checkcode = (10 - (total % 10)) % 10
return str(checkcode)
def get_checkcode(cls, card_number_str)
Compute the check digit of a bank card number;

:param:
    * card_number_str: (string) the bank card number to check
:returns:
    checkcode: (string) the check digit of the bank card

Example::

    from fishbase.fish_data import *
    print('--- fish_data get_checkcode demo ---')
    # do not use real card numbers here, it is risky
    print(CardBin.get_checkcode('439188000699010'))
    print('---')

Output::

    --- fish_data get_checkcode demo ---
    9
    ---
2.798734
3.586907
0.780264
if isinstance(card_number_str, int):
    card_number_str = str(card_number_str)
checkcode = card_number_str[-1]
result = CardBin.get_checkcode(card_number_str[0:-1])
return checkcode == result
def check_bankcard(cls, card_number_str)
Check whether a bank card's check digit is correct;

:param:
    * card_number_str: (string) the bank card number to check
:returns:
    (bool) True or False

Example::

    from fishbase.fish_data import *
    print('--- fish_data check_bankcard demo ---')
    # do not use real card numbers here, it is risky
    print(CardBin.check_bankcard('4391880006990100'))
    print('---')

Output::

    --- fish_data check_bankcard demo ---
    False
    ---
3.188143
3.057638
1.042682
flag = False
# check whether the file exists
if not pathlib.Path(conf_filename).is_file():
    return flag,
# decide whether to be case sensitive
cf = configparser.ConfigParser() if not case_sensitive else MyConfigParser()
# read the config file
try:
    if sys.version > '3':
        cf.read(conf_filename, encoding=encoding)
    else:
        cf.read(conf_filename)
except:
    flag = False
    return flag,
d = OrderedDict(cf._sections)
for k in d:
    d[k] = OrderedDict(cf._defaults, **d[k])
    d[k].pop('__name__', None)
flag = True
# count the keys
count = len(d.keys())
return flag, d, count
def conf_as_dict(conf_filename, encoding=None, case_sensitive=False)
Read an ini configuration file and return a dict built from its contents;

:param:
    * conf_filename: (string) full filename of the ini configuration file to read
    * encoding: (string) file encoding
    * case_sensitive: (bool) whether keys are case sensitive, default False
:return:
    * flag: (bool) True if the configuration file was read correctly, False otherwise
    * d: (dict) on success, a dict containing the configuration file contents,
      in the same order as the configuration file
    * count: (int) the number of keys read from the configuration file

Example::

    print('--- conf_as_dict demo---')
    # define the configuration file name
    conf_filename = 'test_conf.ini'
    # read the configuration file
    ds = conf_as_dict(conf_filename)
    ds1 = conf_as_dict(conf_filename, case_sensitive=True)
    # show the flag, the whole dict, and the number of keys
    print('flag:', ds[0])
    print('dict:', ds[1])
    print('length:', ds[2])
    d = ds[1]
    d1 = ds1[1]
    # show everything under one section
    print('section show_opt:', d['show_opt'])
    # show everything under one section, case sensitive
    print('section show_opt:', d1['show_opt'])
    # show the value of one key under a section
    print('section show_opt, key short_opt:', d['show_opt']['short_opt'])
    # read a complex section: first read the count key, then iterate each key's value
    i = int(d['get_extra_rules']['erule_count'])
    print('section get_extra_rules, key erule_count:', i)
    for j in range(i):
        print('section get_extra_rules, key erule_type:', d['get_extra_rules']['erule_'+str(j)])
    print('---')

Output::

    --- conf_as_dict demo---
    flag: True
    dict: (omit)
    length: 7
    section show_opt: {'short_opt': 'b:d:v:p:f:', 'long_opt': 'region=,prov=,mer_id=,mer_short_name=,web_status='}
    section show_opt: {'Short_Opt': 'b:d:v:p:f:', 'Long_Opt': 'region=,prov=,mer_id=,mer_short_name=,web_status='}
    section show_opt, key short_opt: b:d:v:p:f:
    section get_extra_rules, key erule_count: 2
    section get_extra_rules, key erule_type: extra_rule_1
    section get_extra_rules, key erule_type: extra_rule_2
    ---
3.298531
3.301706
0.999038
obj_dict = {'__classname__': type(obj).__name__}
obj_dict.update(obj.__dict__)
for key, value in obj_dict.items():
    if not isinstance(value, commonDataType):
        sub_dict = serialize_instance(value)
        obj_dict.update({key: sub_dict})
    else:
        continue
return obj_dict
def serialize_instance(obj)
Serialize an object

:param:
    * obj: (object) an object instance
:return:
    * obj_dict: (dict) the serialized dict of the object

Example::

    print('--- serialize_instance demo ---')
    # define two classes
    class Obj(object):
        def __init__(self, a, b):
            self.a = a
            self.b = b
    class ObjB(object):
        def __init__(self, x, y):
            self.x = x
            self.y = y
    # serialize the object
    b = ObjB('string', [item for item in range(10)])
    obj_ = Obj(1, b)
    print(serialize_instance(obj_))
    print('---')

Output::

    --- serialize_instance demo ---
    {'__classname__': 'Obj', 'a': 1, 'b': {'__classname__': 'ObjB', 'x': 'string', 'y': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]}}
    ---
3.095309
3.138893
0.986115
if kind == udTime:
    return str(uuid.uuid1())
elif kind == udRandom:
    return str(uuid.uuid4())
else:
    return str(uuid.uuid4())
def get_uuid(kind)
Get a unique uuid, either timestamp-based or fully random; wraps and extends Python's
uuid module; also supports the get_time_uuid() shorthand, which takes no argument and
yields a timestamp-based uuid, compatible with v1.0.12 and earlier;

:param:
    * kind: (int) uuid kind; the integer constant udTime means timestamp-based,
      udRandom means fully random
:return:
    * result: (string) a uuid like 66b438e3-200d-4fe3-8c9e-2bc431bb3000

Example::

    print('--- uuid demo ---')
    # timestamp-based uuid
    for i in range(2):
        print(get_uuid(udTime))
    print('---')
    # shorthand for the timestamp uuid, compatible with earlier versions
    for i in range(2):
        print(get_time_uuid())
    print('---')
    # random uuid
    for i in range(2):
        print(get_uuid(udRandom))
    print('---')

Output::

    --- uuid demo ---
    c8aa92cc-60ef-11e8-aa87-acbf52d15413
    c8ab7194-60ef-11e8-b7bd-acbf52d15413
    ---
    c8ab7368-60ef-11e8-996c-acbf52d15413
    c8ab741e-60ef-11e8-959d-acbf52d15413
    ---
    8e108777-26a1-42d6-9c4c-a0c029423eb0
    8175a81a-f346-46af-9659-077ad52e3e8f
    ---
4.143817
2.377209
1.743144
if isinstance(source, dict):
    check_list = list(source.values())
elif isinstance(source, list) or isinstance(source, tuple):
    check_list = list(source)
else:
    raise TypeError('source expects list, tuple or dict, but got {}'.format(type(source)))
for i in check_list:
    # compare by value; `is 0` relied on CPython int interning
    if i == 0:
        continue
    if not (i and str(i).strip()):
        return True
return False
def has_space_element(source)
Check the elements of a container: return True if any element is None, an empty string,
or a whitespace-only string, otherwise False; supports dicts, lists, and tuples

:param:
    * source: (list, tuple, dict) the object to check
:return:
    * result: (bool) True if a None, empty, or whitespace-only string exists, otherwise False

Example::

    print('--- has_space_element demo---')
    print(has_space_element([1, 2, 'test_str']))
    print(has_space_element([0, 2]))
    print(has_space_element([1, 2, None]))
    print(has_space_element((1, [1, 2], 3, '')))
    print(has_space_element({'a': 1, 'b': 0}))
    print(has_space_element({'a': 1, 'b': []}))
    print('---')

Output::

    --- has_space_element demo---
    False
    False
    True
    True
    False
    True
    ---
3.080719
2.831915
1.087857
key_list = left_json.keys()
if op == 'strict':
    for key in key_list:
        if not right_json.get(key) == left_json.get(key):
            return False
    return True
def if_json_contain(left_json, right_json, op='strict')
Check whether one json contains the keys of another json with equal values;

:param:
    * left_json: (dict) the json to check, referred to as left
    * right_json: (dict) the json to check against, referred to as right; currently we
      check whether left is contained in right
    * op: (string) comparison operator; currently only one exists, default 'strict',
      kept for backward compatibility
:return:
    * result: (bool) True if right json contains the keys of left json with the same
      values, otherwise False

Example::

    print('--- json contain demo ---')
    json1 = {"id": "0001"}
    json2 = {"id": "0001", "value": "File"}
    print(if_json_contain(json1, json2))
    print('---')

Output::

    --- json contain demo ---
    True
    ---
2.314888
2.919967
0.792779
od = OrderedDict(sorted(dic.items()))
url = '?'
temp_str = urlencode(od)
url = url + temp_str
return url
def join_url_params(dic)
Build the query string after the url's ? from the given key-value pairs, e.g. ?key1=value1&key2=value2

:param:
    * dic: (dict) parameter key-value pairs
:return:
    * result: (string) the joined parameter string

Example::

    print('--- splice_url_params demo ---')
    dic1 = {'key1': 'value1', 'key2': 'value2'}
    print(splice_url_params(dic1))
    print('---')

Output::

    --- splice_url_params demo ---
    ?key1=value1&key2=value2
    ---
4.934962
9.936209
0.496664
o_list = sorted(value for (key, value) in p_dict.items())
if order == odASC:
    return o_list
elif order == odDES:
    return o_list[::-1]
def sorted_list_from_dict(p_dict, order=odASC)
Sort a dict by its values and return the result as a list

:param:
    * p_dict: (dict) the dict to sort
    * order: (int) sort order, odASC for ascending, odDES for descending; default ascending
:return:
    * o_list: (list) the sorted list

Example::

    print('--- sorted_list_from_dict demo ---')
    # define the dict to process
    dict1 = {'a_key': 'a_value', '1_key': '1_value', 'A_key': 'A_value', 'z_key': 'z_value'}
    print(dict1)
    # ascending result
    list1 = sorted_list_from_dict(dict1, odASC)
    print('ascending order result is:', list1)
    # descending result
    list1 = sorted_list_from_dict(dict1, odDES)
    print('descending order result is:', list1)
    print('---')

Output::

    --- sorted_list_from_dict demo ---
    {'a_key': 'a_value', 'A_key': 'A_value', '1_key': '1_value', 'z_key': 'z_value'}
    ascending order result is: ['1_value', 'A_value', 'a_value', 'z_value']
    descending order result is: ['z_value', 'a_value', 'A_value', '1_value']
    ---
2.639653
4.336557
0.608698
if check_style == charChinese:
    check_pattern = re.compile(u'[\u4e00-\u9fa5]+')
elif check_style == charNum:
    check_pattern = re.compile(u'[0-9]+')
else:
    return False
try:
    if check_pattern.search(p_str):
        return True
    else:
        return False
except TypeError as ex:
    raise TypeError(str(ex))
def has_special_char(p_str, check_style=charChinese)
Check whether a string contains characters of the given type

:param:
    * p_str: (string) the string to check
    * check_style: (string) the character type to check for; default charChinese
      (utf-8 encoding only), charNum is also supported; the parameter is kept
      for backward compatibility
:return:
    * True: contains characters of the given type
    * False: does not contain characters of the given type

Example::

    print('--- has_special_char demo ---')
    p_str1 = 'meiyouzhongwen'
    non_chinese_result = has_special_char(p_str1, check_style=charChinese)
    print(non_chinese_result)
    p_str2 = u'有zhongwen'
    chinese_result = has_special_char(p_str2, check_style=charChinese)
    print(chinese_result)
    p_str3 = 'nonnumberstring'
    non_number_result = has_special_char(p_str3, check_style=charNum)
    print(non_number_result)
    p_str4 = 'number123'
    number_result = has_special_char(p_str4, check_style=charNum)
    print(number_result)
    print('---')

Output::

    --- has_special_char demo ---
    False
    True
    False
    True
    ---
1.968083
2.181391
0.902214
files_list = []
for root, dirs, files in os.walk(path):
    for name in files:
        files_list.append(os.path.join(root, name))
if exts is not None:
    return [file for file in files_list if pathlib.Path(file).suffix in exts]
return files_list
def find_files(path, exts=None)
Find files under a path and return a list of files of the given types

:param:
    * path: (string) the path to search
    * exts: (list) list of file extensions, default empty
:return:
    * files_list: (list) list of files

Example::

    print('--- find_files demo ---')
    path1 = '/root/fishbase_issue'
    all_files = find_files(path1)
    print(all_files)
    exts_files = find_files(path1, exts=['.png', '.py'])
    print(exts_files)
    print('---')

Output::

    --- find_files demo ---
    ['/root/fishbase_issue/test.png', '/root/fishbase_issue/head.jpg', '/root/fishbase_issue/py/man.png']
    ['/root/fishbase_issue/test.png', '/root/fishbase_issue/py/man.png']
    ---
1.934853
2.636232
0.733946
show_deprecation_warn('get_random_str', 'fish_random.gen_random_str')
from fishbase.fish_random import gen_random_str
return gen_random_str(length, length, has_letter=letters, has_digit=digits,
                      has_punctuation=punctuation)
def get_random_str(length, letters=True, digits=False, punctuation=False)
Get a random string of the given length and rules, optionally containing digits, letters, and punctuation

:param:
    * length: (int) length of the random string
    * letters: (bool) whether the random string contains letters, default True
    * digits: (bool) whether the random string contains digits, default False
    * punctuation: (bool) whether the random string contains punctuation, default False
:return:
    * random_str: (string) a random string matching the given rules

Example::

    print('--- get_random_str demo---')
    print(get_random_str(6))
    print(get_random_str(6, digits=True))
    print(get_random_str(12, punctuation=True))
    print(get_random_str(6, letters=False, digits=True))
    print(get_random_str(12, letters=False, digits=True, punctuation=True))
    print('---')

Output::

    --- get_random_str demo---
    nRBDHf
    jXG5wR
    )I;rz{ob&Clg
    427681
    *"4$0^`2}%9{
    ---
5.354962
5.450026
0.982557
seen = set()
for item in items:
    val = item if key is None else key(item)
    if val not in seen:
        yield item
        seen.add(val)
def get_distinct_elements(items, key=None)
Remove duplicate elements from a sequence while keeping the remaining elements in order;
for unhashable objects, a key must be given to describe how to deduplicate

:param:
    * items: (list) the list to deduplicate
    * key: (hook function) a function that converts sequence elements to a hashable type
:return:
    * result: (generator) a generator over the deduplicated result

Example::

    print('--- remove_duplicate_elements demo---')
    list_demo = remove_duplicate_elements([1, 5, 2, 1, 9, 1, 5, 10])
    print(list(list_demo))
    list2 = [{'x': 1, 'y': 2}, {'x': 1, 'y': 3}, {'x': 1, 'y': 2}, {'x': 2, 'y': 4}]
    dict_demo1 = remove_duplicate_elements(list2, key=lambda d: (d['x'], d['y']))
    print(list(dict_demo1))
    dict_demo2 = remove_duplicate_elements(list2, key=lambda d: d['x'])
    print(list(dict_demo2))
    dict_demo3 = remove_duplicate_elements(list2, key=lambda d: d['y'])
    print(list(dict_demo3))
    print('---')

Output::

    --- remove_duplicate_elements demo---
    [1, 5, 2, 9, 10]
    [{'x': 1, 'y': 2}, {'x': 1, 'y': 3}, {'x': 2, 'y': 4}]
    [{'x': 1, 'y': 2}, {'x': 2, 'y': 4}]
    [{'x': 1, 'y': 2}, {'x': 1, 'y': 3}, {'x': 2, 'y': 4}]
    ---
2.079911
3.003526
0.69249
if len(objs) == 0:
    return []
if not hasattr(objs[0], key):
    raise AttributeError('{0} object has no attribute {1}'.format(type(objs[0]), key))
result = sorted(objs, key=attrgetter(key), reverse=reverse)
return result
def sort_objs_by_attr(objs, key, reverse=False)
Sort objects that do not natively support comparison by one of their attributes

:param:
    * objs: (list) the list of objects to sort
    * key: (string) the object attribute to sort by
    * reverse: (bool) whether to reverse the result, default False
:return:
    * result: (list) the sorted list of objects

Example::

    print('--- sorted_objs_by_attr demo---')
    class User(object):
        def __init__(self, user_id):
            self.user_id = user_id
    users = [User(23), User(3), User(99)]
    result = sorted_objs_by_attr(users, key='user_id')
    reverse_result = sorted_objs_by_attr(users, key='user_id', reverse=True)
    print([item.user_id for item in result])
    print([item.user_id for item in reverse_result])
    print('---')

Output::

    --- sorted_objs_by_attr demo---
    [3, 23, 99]
    [99, 23, 3]
    ---
2.099205
2.584466
0.812239
url_obj = urlsplit(url)
query_dict = parse_qs(url_obj.query)
return OrderedDict(query_dict)
def get_query_param_from_url(url)
Get the query parameter dict from a url

:param:
    * url: (string) the url to extract the parameter dict from
:return:
    * query_dict: (dict) an ordered dict of the query parameters; each value is a list
      of the query values

Example::

    print('--- get_query_param_from_url demo---')
    url = 'http://localhost:8811/mytest?page_number=1&page_size=10'
    query_dict = get_query_param_from_url(url)
    print(query_dict['page_size'])
    print('---')

Output::

    --- get_query_param_from_url demo---
    ['10']
    ---
3.477975
6.938097
0.501287
if not isinstance(data_list, list):
    raise TypeError('data_list should be a list, but we got {}'.format(type(data_list)))
if not isinstance(group_number, int) or not isinstance(group_size, int):
    raise TypeError('group_number and group_size should be int, but we got group_number: {0}, '
                    'group_size: {1}'.format(type(group_number), type(group_size)))
if group_number < 0 or group_size < 0:
    raise ValueError('group_number and group_size should be positive int, but we got '
                     'group_number: {0}, group_size: {1}'.format(group_number, group_size))
start = (group_number - 1) * group_size
end = group_number * group_size
return data_list[start:end]
def paging(data_list, group_number=1, group_size=10)
Get one page of a list

:param:
    * data_list: (list) the data list to page
    * group_number: (int) page number, default 1
    * group_size: (int) page size, default 10
:return:
    * group_data: (list) the page of data

Example::

    print('--- paging demo---')
    all_records = [1, 2, 3, 4, 5]
    print(get_group_list_data(all_records))
    all_records1 = list(range(100))
    print(get_group_list_data(all_records1, group_number=5, group_size=15))
    print(get_group_list_data(all_records1, group_number=7, group_size=15))
    print('---')

Output::

    --- paging demo---
    [1, 2, 3, 4, 5]
    [60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74]
    [90, 91, 92, 93, 94, 95, 96, 97, 98, 99]
    ---
1.555727
1.675239
0.928659
if not isinstance(data_dict, dict):
    raise TypeError('data_dict should be dict, but we got {}'.format(type(data_dict)))
if not isinstance(key_list, list):
    raise TypeError('key_list should be list, but we got {}'.format(type(key_list)))
sub_dict = dict()
for item in key_list:
    sub_dict.update({item: data_dict.get(item, default_value)})
return sub_dict
def get_sub_dict(data_dict, key_list, default_value='default_value')
Extract a sub-dict from a dict

:param:
    * data_dict: (dict) the dict to extract the subset from
    * key_list: (list) the list of keys for the subset
    * default_value: (string) the default value when a key does not exist, default 'default_value'
:return:
    * sub_dict: (dict) the sub-dict

Example::

    print('--- get_sub_dict demo---')
    dict1 = {'a': 1, 'b': 2, 'list1': [1, 2, 3]}
    list1 = ['a', 'list1', 'no_key']
    print(get_sub_dict(dict1, list1))
    print(get_sub_dict(dict1, list1, default_value='new default'))
    print('---')

Output::

    --- get_sub_dict demo---
    {'a': 1, 'list1': [1, 2, 3], 'no_key': 'default_value'}
    {'a': 1, 'list1': [1, 2, 3], 'no_key': 'new default'}
    ---
1.7519
1.912913
0.915828
temp_dict = copy.deepcopy(param_dict)
# regex matching a lowercase letter or digit followed by an uppercase letter
hump_to_underline = re.compile(r'([a-z]|\d)([A-Z])')
for key in list(param_dict.keys()):
    # replace the camel-case boundary with an underscore
    underline_sub = re.sub(hump_to_underline, r'\1_\2', key).lower()
    temp_dict[underline_sub] = temp_dict.pop(key)
return temp_dict
def camelcase_to_underline(param_dict)
Convert the camel-case keys of a parameter dict to underscore style

:param:
    * param_dict: (dict) request parameter dict
:return:
    * temp_dict: (dict) the converted parameter dict

Example::

    print('--- transform_hump_to_underline demo---')
    hump_param_dict = {'firstName': 'Python', 'Second_Name': 'san', 'right_name': 'name'}
    underline_param_dict = transform_hump_to_underline(hump_param_dict)
    print(underline_param_dict)
    print('---')

Output::

    --- transform_hump_to_underline demo---
    {'first_name': 'Python', 'second_name': 'san', 'right_name': 'name'}
    ---
2.669169
2.792355
0.955885
Same_info = namedtuple('Same_info', ['item', 'key', 'value'])
same_info = Same_info(set(dict1.items()) & set(dict2.items()),
                      set(dict1.keys()) & set(dict2.keys()),
                      set(dict1.values()) & set(dict2.values()))
return same_info
def find_same_between_dicts(dict1, dict2)
Find what two dicts have in common: keys, values, and items; only hashable objects are supported

:param:
    * dict1: (dict) dict 1 to compare
    * dict2: (dict) dict 2 to compare
:return:
    * dup_info: (namedtuple) a named tuple of the information shared by the two dicts

Example::

    print('--- find_same_between_dicts demo---')
    dict1 = {'x': 1, 'y': 2, 'z': 3}
    dict2 = {'w': 10, 'x': 1, 'y': 2}
    res = find_same_between_dicts(dict1, dict2)
    print(res.item)
    print(res.key)
    print(res.value)
    print('---')

Output::

    --- find_same_between_dicts demo---
    set([('x', 1)])
    {'x', 'y'}
    {1}
    ---
2.683362
2.357773
1.138092
if not pathlib.Path(file_path).is_file():
    return False, {}, 'File not exist'
try:
    if sys.version > '3':
        with open(file_path, 'r', encoding=encoding) as f:
            d = OrderedDict(yaml.load(f.read()))
            return True, d, 'Success'
    else:
        with open(file_path, 'r') as f:
            d = OrderedDict(yaml.load(f.read()))
            return True, d, 'Success'
except:
    return False, {}, 'Unknown error'
def yaml_conf_as_dict(file_path, encoding=None)
Read a yaml configuration file and return a dict built from its contents

:param:
    * file_path: (string) full filename of the yaml configuration file to read
    * encoding: (string) file encoding
:return:
    * flag: (bool) True if the configuration file was read correctly, False otherwise
    * d: (dict) on success, a dict containing the configuration file contents,
      in the same order as the configuration file
    * msg: (string) status message of the read

Example::

    print('--- yaml_conf_as_dict demo---')
    # define the configuration file name
    conf_filename = 'test_conf.yaml'
    # read the configuration file
    ds = yaml_conf_as_dict(conf_filename, encoding='utf-8')
    # show the flag, the dict size, and the message
    print('flag:', ds[0])
    print('dict length:', len(ds[1]))
    print('msg:', ds[2])
    print('conf info: ', ds[1].get('tree'))
    print('---')

Output::

    --- yaml_conf_as_dict demo---
    flag: True
    dict length: 2
    msg: Success
    conf info:  ['README.md', 'requirements.txt', {'hellopackage': ['__init__.py']}, {'test': ['__init__.py']}, {'doc': ['doc.rst']}]
    ---
2.299685
2.449943
0.938669
with open(csv_filename, encoding=encoding) as csv_file:
    csv_list = list(csv.reader(csv_file, delimiter=deli))
# drop blank rows if requested
if del_blank_row:
    csv_list = [s for s in csv_list if len(s) != 0]
return csv_list
def csv2list(csv_filename, deli=',', del_blank_row=True, encoding=None)
Convert the given csv file to a list and return it;

:param:
    * csv_filename: (string) full filename of the csv file
    * deli: (string) csv delimiter, default comma
    * del_blank_row: (bool) whether to drop blank rows, default True
    * encoding: (string) file encoding
:return:
    * csv_list: (list) the converted list

Example::

    from fishbase.fish_file import *
    from fishbase.fish_csv import *
    def test_csv():
        csv_filename = get_abs_filename_with_sub_path('csv', 'test_csv.csv')[1]
        print(csv_filename)
        csv_list = csv2list(csv_filename)
        print(csv_list)
    if __name__ == '__main__':
        test_csv()
2.226238
2.379616
0.935545
with open(csv_filename, "w") as csv_file: csv_writer = csv.writer(csv_file) for data in data_list: csv_writer.writerow(data) return csv_filename
def list2csv(data_list, csv_filename='./list2csv.csv')
Write a list to the given csv file and return the file's full filename;

:param:
    * data_list: (list) the data list to write to csv
    * csv_filename: (string) full filename of the csv file
:return:
    * csv_filename: (string) full filename of the csv file

Example::

    from fishbase.fish_csv import *
    def test_list2csv():
        data_list = ['a', 'b', 'c']
        csv_file = list2csv(data_list)
        print(csv_file)
    if __name__ == '__main__':
        test_list2csv()
1.712846
2.32771
0.73585
with open(csv_filename, encoding=encoding) as csv_file:
    if key_is_header:
        reader = csv.reader(csv_file, delimiter=deli)
        # read the dict keys from the header row
        fieldnames = next(reader)
        reader = csv.DictReader(csv_file, fieldnames=fieldnames, delimiter=deli)
        return [dict(row) for row in reader]
    reader = csv.reader(csv_file, delimiter=deli)
    return {row[0]: row[1] for row in reader if row}
def csv2dict(csv_filename, deli=',', encoding=None, key_is_header=False)
Convert the given csv file to a dict (or a list of dicts) and return it;

:param:
    * csv_filename: (string) full filename of the csv file
    * deli: (string) csv delimiter, default comma
    * encoding: (string) file encoding
    * key_is_header: (bool) whether to use the first row as the dict keys, default False
:return:
    * csv_data: (dict) the data read

Example::

    from fishbase.fish_file import *
    from fishbase.fish_csv import *
    def test_csv2dict():
        csv_filename = get_abs_filename_with_sub_path('csv', 'test_csv.csv')[1]
        print(csv_filename)
        csv_dict = csv2dict(csv_filename)
        print(csv_dict)
    if __name__ == '__main__':
        test_csv2dict()
2.049069
2.320434
0.883054
with open(csv_filename, "w") as csv_file: csv_writer = csv.writer(csv_file) if key_is_header: if isinstance(data_dict, dict): csv_writer.writerow(list(data_dict.keys())) csv_writer.writerow(list(data_dict.values())) elif isinstance(data_dict, list) and all([isinstance(item, dict) for item in data_dict]): for item in data_dict: csv_writer.writerow(list(item.keys())) csv_writer.writerow(list(item.values())) else: raise ValueError('data_dict should be a dict or list which member is dict, ' 'but we got {}'.format(data_dict)) else: csv_writer.writerows(data_dict.items()) return csv_filename
def dict2csv(data_dict, csv_filename='./dict2csv.csv', key_is_header=False)
Write a dict to the given csv file and return the file's full filename;

:param:
    * data_dict: (dict) the data dict to write to csv
    * csv_filename: (string) full filename of the csv file
    * key_is_header: (bool) whether the first csv row consists of the dict keys
:return:
    * csv_filename: (string) full filename of the csv file

Example::

    from fishbase.fish_csv import *
    def test_dict2csv():
        data_dict = {'a': 1, 'b': 2}
        csv_file = dict2csv(data_dict)
        print(csv_file)
    if __name__ == '__main__':
        test_dict2csv()
1.819627
1.914139
0.950625
next_approvals = self._get_next_approvals().exclude(
    status=PENDING).exclude(cloned=True)
for ta in next_approvals:
    clone_transition_approval, c = TransitionApproval.objects.get_or_create(
        source_state=ta.source_state,
        destination_state=ta.destination_state,
        content_type=ta.content_type,
        object_id=ta.object_id,
        field_name=ta.field_name,
        skip=ta.skip,
        priority=ta.priority,
        enabled=ta.enabled,
        status=PENDING,
        meta=ta.meta
    )
    if c:
        clone_transition_approval.permissions.add(*ta.permissions.all())
        clone_transition_approval.groups.add(*ta.groups.all())
next_approvals.update(cloned=True)
return True if next_approvals.count() else False
def _cycle_proceedings(self)
Finds the next proceedings and clones them for cycling, if any exist.
3.405116
3.154807
1.079342
file_name = request.POST['name']
file_type = request.POST['type']
file_size = int(request.POST['size'])

dest = get_s3direct_destinations().get(request.POST.get('dest', None), None)
if not dest:
    resp = json.dumps({'error': 'File destination does not exist.'})
    return HttpResponseNotFound(resp, content_type='application/json')

auth = dest.get('auth')
if auth and not auth(request.user):
    resp = json.dumps({'error': 'Permission denied.'})
    return HttpResponseForbidden(resp, content_type='application/json')

allowed = dest.get('allowed')
if (allowed and file_type not in allowed) and allowed != '*':
    resp = json.dumps({'error': 'Invalid file type (%s).' % file_type})
    return HttpResponseBadRequest(resp, content_type='application/json')

cl_range = dest.get('content_length_range')
if (cl_range and not cl_range[0] <= file_size <= cl_range[1]):
    msg = 'Invalid file size (must be between %s and %s bytes).'
    resp = json.dumps({'error': (msg % cl_range)})
    return HttpResponseBadRequest(resp, content_type='application/json')

key = dest.get('key')
if not key:
    resp = json.dumps({'error': 'Missing destination path.'})
    return HttpResponseServerError(resp, content_type='application/json')

bucket = dest.get('bucket', getattr(settings, 'AWS_STORAGE_BUCKET_NAME', None))
if not bucket:
    resp = json.dumps({'error': 'S3 bucket config missing.'})
    return HttpResponseServerError(resp, content_type='application/json')

region = dest.get('region', getattr(settings, 'AWS_S3_REGION_NAME', None))
if not region:
    resp = json.dumps({'error': 'S3 region config missing.'})
    return HttpResponseServerError(resp, content_type='application/json')

endpoint = dest.get('endpoint', getattr(settings, 'AWS_S3_ENDPOINT_URL', None))
if not endpoint:
    resp = json.dumps({'error': 'S3 endpoint config missing.'})
    return HttpResponseServerError(resp, content_type='application/json')

aws_credentials = get_aws_credentials()
if not aws_credentials.secret_key or not aws_credentials.access_key:
    resp = json.dumps({'error': 'AWS credentials config missing.'})
    return HttpResponseServerError(resp, content_type='application/json')

upload_data = {
    'object_key': get_key(key, file_name, dest),
    'access_key_id': aws_credentials.access_key,
    'session_token': aws_credentials.token,
    'region': region,
    'bucket': bucket,
    'endpoint': endpoint,
    'acl': dest.get('acl') or 'public-read',
    'allow_existence_optimization': dest.get('allow_existence_optimization', False)
}

optional_params = [
    'content_disposition', 'cache_control', 'server_side_encryption'
]
for optional_param in optional_params:
    if optional_param in dest:
        option = dest.get(optional_param)
        if hasattr(option, '__call__'):
            upload_data[optional_param] = option(file_name)
        else:
            upload_data[optional_param] = option

resp = json.dumps(upload_data)
return HttpResponse(resp, content_type='application/json')
def get_upload_params(request)
Authorises user and validates given file properties.
1.881735
1.860589
1.011365
subscription_data = post_data.pop("subscription", {})
# our database saves the auth and p256dh keys in separate fields,
# so flatten the nested "keys" dict into the subscription data
keys = subscription_data.pop("keys", {})
subscription_data.update(keys)
# insert the browser name
subscription_data["browser"] = post_data.pop("browser")
return subscription_data
def process_subscription_data(post_data)
Process the subscription data according to our model
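A hedged sketch of the data reshaping (the payload values are invented; the field names follow the Web Push subscription format the code above manipulates)::

    post_data = {
        "subscription": {
            "endpoint": "https://push.example.com/abc",  # hypothetical endpoint
            "keys": {"auth": "auth-secret", "p256dh": "client-public-key"},
        },
        "browser": "chrome",
    }
    print(process_subscription_data(post_data))
    # {'endpoint': 'https://push.example.com/abc', 'auth': 'auth-secret',
    #  'p256dh': 'client-public-key', 'browser': 'chrome'}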
6.35241
6.191185
1.026041
serializer_class = self.get_serializer_class_in()
kwargs['context'] = self.get_serializer_context()
return serializer_class(*args, **kwargs)
def get_serializer_in(self, *args, **kwargs)
Return the serializer instance that should be used for validating and deserializing input, and for serializing output.
2.465171
2.391225
1.030924
if user:
    backend_name = kwargs['backend'].__class__.__name__.lower()
    response = kwargs.get('response', {})
    social_thumb = None
    if 'facebook' in backend_name:
        if 'id' in response:
            social_thumb = (
                'http://graph.facebook.com/{0}/picture?type=normal'
            ).format(response['id'])
    elif 'twitter' in backend_name and response.get('profile_image_url'):
        social_thumb = response['profile_image_url']
    elif 'googleoauth2' in backend_name and response.get('image', {}).get('url'):
        social_thumb = response['image']['url'].split('?')[0]
    else:
        social_thumb = 'http://www.gravatar.com/avatar/'
        social_thumb += hashlib.md5(user.email.lower().encode('utf8')).hexdigest()
        social_thumb += '?size=100'
    if social_thumb and user.social_thumb != social_thumb:
        user.social_thumb = social_thumb
        strategy.storage.user.changed(user)
def save_avatar(strategy, details, user=None, *args, **kwargs)
Get user avatar from social provider.
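This reads like a python-social-auth pipeline step; a hedged sketch of wiring it into Django settings (the module path myapp.pipeline is hypothetical; the social_core entries are the library's stock steps)::

    SOCIAL_AUTH_PIPELINE = (
        'social_core.pipeline.social_auth.social_details',
        'social_core.pipeline.social_auth.social_uid',
        'social_core.pipeline.user.create_user',
        # ... remaining stock steps elided ...
        'myapp.pipeline.save_avatar',  # hypothetical path to the function above
    )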
2.359428
2.295348
1.027917
def load(rect=None, flags=None):
    return filename, rect, flags
return load
def default_image_loader(filename, flags, **kwargs)
This default image loader just returns filename, rect, and any flags
11.816803
5.882901
2.008669
flags = TileFlags(
    raw_gid & GID_TRANS_FLIPX == GID_TRANS_FLIPX,
    raw_gid & GID_TRANS_FLIPY == GID_TRANS_FLIPY,
    raw_gid & GID_TRANS_ROT == GID_TRANS_ROT)
gid = raw_gid & ~(GID_TRANS_FLIPX | GID_TRANS_FLIPY | GID_TRANS_ROT)
return gid, flags
def decode_gid(raw_gid)
Decode a GID from TMX data.

As of 0.7.0 it determines if the tile should be flipped when rendered;
as of 0.8.0 bit 30 determines if the GID is rotated.

:param raw_gid: 32-bit number from TMX layer data
:return: gid, flags
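A short worked example, assuming the module's GID_TRANS_FLIPX mask is the TMX spec's top bit (1 << 31) and TileFlags is a namedtuple of the three flip flags::

    # horizontal-flip flag set on tile id 5
    raw_gid = (1 << 31) | 5
    gid, flags = decode_gid(raw_gid)
    print(gid)       # 5
    print(flags[0])  # True (flipped horizontally)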
2.497246
2.201823
1.134172
# handle "1" and "0" try: return bool(int(text)) except: pass text = str(text).lower() if text == "true": return True if text == "yes": return True if text == "false": return False if text == "no": return False raise ValueError
def convert_to_bool(text)
Convert a few common variations of "true" and "false" to boolean :param text: string to test :return: boolean :raises: ValueError
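A quick usage sketch::

    print(convert_to_bool('1'))     # True
    print(convert_to_bool('no'))    # False
    print(convert_to_bool('TRUE'))  # True
    convert_to_bool('maybe')        # raises ValueError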
2.36236
2.582594
0.914724
d = dict()
for child in node.findall('properties'):
    for subnode in child.findall('property'):
        cls = None
        try:
            if "type" in subnode.keys():
                module = importlib.import_module('builtins')
                cls = getattr(module, subnode.get("type"))
        except AttributeError:
            # the original placeholder "[}" was malformed and no argument was passed
            logger.info("Type {} not a built-in type. Defaulting to string-cast."
                        .format(subnode.get("type")))
        d[subnode.get('name')] = cls(subnode.get('value')) if cls is not None else subnode.get('value')
return d
def parse_properties(node)
Parse a Tiled xml node and return a dict that represents a tiled "property" :param node: etree element :return: dict
4.113509
4.239068
0.970381
self._cast_and_set_attributes_from_node_items(node.items())
properties = parse_properties(node)
if (not self.allow_duplicate_names and
        self._contains_invalid_property_name(properties.items())):
    self._log_property_error_message()
    raise ValueError("Reserved names and duplicate names are not allowed. "
                     "Please rename your property inside the .tmx-file")
self.properties = properties
def _set_properties(self, node)
Create a dict containing Tiled object attributes from xml data.

Read the xml attributes and tiled "properties" from a xml node and fill in
the values into the object's dictionary. Names will be checked to make sure
that they do not conflict with reserved names.

:param node: etree element
:return: dict
9.197065
9.436881
0.974587
self._set_properties(node)
self.background_color = node.get('backgroundcolor', self.background_color)

# *** do not change this load order! ***
# *** gid mapping errors will occur if changed ***
for subnode in node.findall('layer'):
    self.add_layer(TiledTileLayer(self, subnode))
for subnode in node.findall('imagelayer'):
    self.add_layer(TiledImageLayer(self, subnode))
for subnode in node.findall('objectgroup'):
    self.add_layer(TiledObjectGroup(self, subnode))
for subnode in node.findall('tileset'):
    self.add_tileset(TiledTileset(self, subnode))

# "tile objects", objects with a GID, need to have their attributes
# set after the tileset is loaded, so this step must be performed last.
# also, this step is performed for objects to load their tiles.
# tiled stores the origin of GID objects by the lower right corner;
# this is different for all other types, so i just adjust it here
# so all types loaded with pytmx are uniform.

# iterate through tile objects and handle the image
for o in [o for o in self.objects if o.gid]:
    # gids might also have properties assigned to them;
    # in that case, assign the gid properties to the object as well
    p = self.get_tile_properties_by_gid(o.gid)
    if p:
        for key in p:
            o.properties.setdefault(key, p[key])
    if self.invert_y:
        o.y -= o.height

self.reload_images()
return self
def parse_xml(self, node)
Parse a map from ElementTree xml node :param node: ElementTree xml node :return: self
6.007519
5.977413
1.005037
self.images = [None] * self.maxgid

# iterate through tilesets to get source images
for ts in self.tilesets:
    # skip tilesets without a source
    if ts.source is None:
        continue

    path = os.path.join(os.path.dirname(self.filename), ts.source)
    colorkey = getattr(ts, 'trans', None)
    loader = self.image_loader(path, colorkey, tileset=ts)

    p = product(range(ts.margin, ts.height + ts.margin - ts.tileheight + 1,
                      ts.tileheight + ts.spacing),
                range(ts.margin, ts.width + ts.margin - ts.tilewidth + 1,
                      ts.tilewidth + ts.spacing))

    # iterate through the tiles
    for real_gid, (y, x) in enumerate(p, ts.firstgid):
        rect = (x, y, ts.tilewidth, ts.tileheight)
        gids = self.map_gid(real_gid)

        # gids is None if the tile is never used,
        # but give another chance to load the gid anyway
        if gids is None:
            if self.load_all_tiles or real_gid in self.optional_gids:
                # TODO: handle flags? - might never be an issue, though
                gids = [self.register_gid(real_gid, flags=0)]

        if gids:
            # flags might rotate/flip the image, so let the loader
            # handle that here
            for gid, flags in gids:
                self.images[gid] = loader(rect, flags)

# load image layer images
for layer in (i for i in self.layers if isinstance(i, TiledImageLayer)):
    source = getattr(layer, 'source', None)
    if source:
        colorkey = getattr(layer, 'trans', None)
        real_gid = len(self.images)
        gid = self.register_gid(real_gid)
        layer.gid = gid
        path = os.path.join(os.path.dirname(self.filename), source)
        loader = self.image_loader(path, colorkey)
        image = loader()
        self.images.append(image)

# load images in tiles.
# instead of making a new gid, replace the reference to the tile that
# was loaded from the tileset
for real_gid, props in self.tile_properties.items():
    source = props.get('source', None)
    if source:
        colorkey = props.get('trans', None)
        path = os.path.join(os.path.dirname(self.filename), source)
        loader = self.image_loader(path, colorkey)
        image = loader()
        self.images[real_gid] = image
def reload_images(self)
Load the map images from disk.

This method will use the image loader passed in the constructor to do the
loading, or will use a generic default, in which case no images will be loaded.

:return: None
3.350155
3.328912
1.006381
try:
    assert (x >= 0 and y >= 0)
except AssertionError:
    raise ValueError

try:
    layer = self.layers[layer]
except IndexError:
    raise ValueError

assert (isinstance(layer, TiledTileLayer))

try:
    gid = layer.data[y][x]
except (IndexError, ValueError):
    raise ValueError
except TypeError:
    msg = "Tiles must be specified in integers."
    logger.debug(msg)
    raise TypeError
else:
    return self.get_tile_image_by_gid(gid)
def get_tile_image(self, x, y, layer)
Return the tile image for this location :param x: x coordinate :param y: y coordinate :param layer: layer number :rtype: surface if found, otherwise 0
3.373045
3.537508
0.953509
try:
    assert (int(gid) >= 0)
    return self.images[gid]
except TypeError:
    msg = "GIDs must be expressed as a number. Got: {0}"
    logger.debug(msg.format(gid))
    raise TypeError
except (AssertionError, IndexError):
    # only the gid is available here; the original four-placeholder coords
    # message would itself fail when formatted with a single argument
    msg = "Invalid GID: {0}"
    logger.debug(msg.format(gid))
    raise ValueError
def get_tile_image_by_gid(self, gid)
Return the tile image for this location :param gid: GID of image :rtype: surface if found, otherwise ValueError
4.438398
4.356389
1.018825
try:
    assert (x >= 0 and y >= 0 and layer >= 0)
except AssertionError:
    raise ValueError

try:
    return self.layers[int(layer)].data[int(y)][int(x)]
except (IndexError, ValueError):
    msg = "Coords: ({0},{1}) in layer {2} is invalid"
    # format the placeholders rather than passing a tuple as a logging arg
    logger.debug(msg.format(x, y, layer))
    raise ValueError
def get_tile_gid(self, x, y, layer)
Return the tile image GID for this location :param x: x coordinate :param y: y coordinate :param layer: layer number :rtype: surface if found, otherwise ValueError
3.32155
3.241972
1.024546
try:
    assert (x >= 0 and y >= 0 and layer >= 0)
except AssertionError:
    raise ValueError

try:
    gid = self.layers[int(layer)].data[int(y)][int(x)]
except (IndexError, ValueError):
    msg = "Coords: ({0},{1}) in layer {2} is invalid."
    logger.debug(msg.format(x, y, layer))
    raise Exception
else:
    try:
        return self.tile_properties[gid]
    except (IndexError, ValueError):
        msg = "Coords: ({0},{1}) in layer {2} has invalid GID: {3}"
        logger.debug(msg.format(x, y, layer, gid))
        raise Exception
    except KeyError:
        return None
def get_tile_properties(self, x, y, layer)
Return the tile image GID for this location :param x: x coordinate :param y: y coordinate :param layer: layer number :rtype: python dict if found, otherwise None
2.576244
2.536347
1.01573
for l in self.visible_tile_layers:
    for x, y, _gid in [i for i in self.layers[l].iter_data() if i[2] == gid]:
        yield x, y, l
def get_tile_locations_by_gid(self, gid)
Search map for tile locations by the GID Return (int, int, int) tuples, where the layer is index of the visible tile layers. Note: Not a fast operation. Cache results if used often. :param gid: GID to be searched for :rtype: generator of tile locations
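A hedged usage sketch (tmx_map and the searched-for gid are assumptions); since this is a generator and not fast, materialise and cache the result if you need it repeatedly::

    locations = list(tmx_map.get_tile_locations_by_gid(gid))
    for x, y, layer_index in locations:
        print(x, y, layer_index)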
6.045205
4.741643
1.274918
try:
    assert (int(layer) >= 0)
    layer = int(layer)
except (TypeError, AssertionError):
    msg = "Layer must be a positive integer. Got {0} instead."
    logger.debug(msg.format(type(layer)))
    raise ValueError

p = product(range(self.width), range(self.height))
layergids = set(self.layers[layer].data[y][x] for x, y in p)

for gid in layergids:
    try:
        yield gid, self.tile_properties[gid]
    except KeyError:
        continue
def get_tile_properties_by_layer(self, layer)
Get the tile properties of each GID in layer :param layer: layer number :rtype: iterator of (gid, properties) tuples
3.750311
3.340326
1.122738
assert (isinstance(layer, (TiledTileLayer, TiledImageLayer, TiledObjectGroup)))
self.layers.append(layer)
self.layernames[layer.name] = layer
def add_layer(self, layer)
Add a layer (TiledTileLayer, TiledImageLayer, or TiledObjectGroup) :param layer: TiledTileLayer, TiledImageLayer, or TiledObjectGroup object
5.930941
4.418844
1.342193
assert (isinstance(tileset, TiledTileset))
self.tilesets.append(tileset)
def add_tileset(self, tileset)
Add a tileset to the map :param tileset: TiledTileset
3.989005
4.253142
0.937896
try:
    return self.layernames[name]
except KeyError:
    msg = 'Layer "{0}" not found.'
    logger.debug(msg.format(name))
    raise ValueError
def get_layer_by_name(self, name)
Return a layer by name :param name: Name of layer. Case-sensitive. :rtype: Layer object if found, otherwise ValueError
3.83354
4.079787
0.939642
for obj in self.objects:
    if obj.name == name:
        return obj
raise ValueError
def get_object_by_name(self, name)
Find an object :param name: Name of object. Case-sensitive. :rtype: Object if found, otherwise ValueError
3.218561
3.721415
0.864875
try:
    tiled_gid = self.tiledgidmap[gid]
except KeyError:
    raise ValueError

for tileset in sorted(self.tilesets, key=attrgetter('firstgid'), reverse=True):
    if tiled_gid >= tileset.firstgid:
        return tileset

raise ValueError
def get_tileset_from_gid(self, gid)
Return tileset that owns the gid Note: this is a slow operation, so if you are expecting to do this often, it would be worthwhile to cache the results of this. :param gid: gid of tile image :rtype: TiledTileset if found, otherwise ValueError
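The docstring suggests caching; a minimal sketch using functools (tmx_map is an assumed TiledMap instance, and gids are hashable ints)::

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def tileset_for_gid(gid):
        # delegate to the slow lookup only on cache misses
        return tmx_map.get_tileset_from_gid(gid)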
3.132208
2.911934
1.075645
return (i for (i, l) in enumerate(self.layers)
        if l.visible and isinstance(l, TiledTileLayer))
def visible_tile_layers(self)
Return iterator of layer indexes that are set 'visible' :rtype: Iterator
5.568382
5.558255
1.001822
return (i for (i, l) in enumerate(self.layers)
        if l.visible and isinstance(l, TiledObjectGroup))
def visible_object_groups(self)
Return iterator of object group indexes that are set 'visible' :rtype: Iterator
7.805793
8.557364
0.912173
if flags is None:
    flags = TileFlags(0, 0, 0)

if tiled_gid:
    try:
        return self.imagemap[(tiled_gid, flags)][0]
    except KeyError:
        gid = self.maxgid
        self.maxgid += 1
        self.imagemap[(tiled_gid, flags)] = (gid, flags)
        self.gidmap[tiled_gid].append((gid, flags))
        self.tiledgidmap[gid] = tiled_gid
        return gid
else:
    return 0
def register_gid(self, tiled_gid, flags=None)
Used to manage the mapping of GIDs between the tmx and pytmx :param tiled_gid: GID that is found in TMX data :rtype: GID that pytmx uses for the GID passed
2.755623
2.961443
0.9305
try:
    return self.gidmap[int(tiled_gid)]
except KeyError:
    return None
except TypeError:
    msg = "GIDs must be an integer"
    logger.debug(msg)
    raise TypeError
def map_gid(self, tiled_gid)
Used to lookup a GID read from a TMX file's data :param tiled_gid: GID that is found in TMX data :rtype: (GID, flags) for the GID passed, None if not found
3.916343
3.852794
1.016494
tiled_gid = int(tiled_gid)
# gidmap is a default dict, so cannot trust it to raise KeyError
if tiled_gid in self.gidmap:
    return self.gidmap[tiled_gid]
else:
    gid = self.register_gid(tiled_gid)
    return [(gid, None)]
def map_gid2(self, tiled_gid)
WIP. need to refactor the gid code :param tiled_gid: :return:
4.975224
5.171481
0.96205
import os

# if true, then node references an external tileset
source = node.get('source', None)
if source:
    if source[-4:].lower() == ".tsx":
        # external tilesets don't save this, store it for later
        self.firstgid = int(node.get('firstgid'))
        # we need to mangle the path - tiled stores relative paths
        dirname = os.path.dirname(self.parent.filename)
        path = os.path.abspath(os.path.join(dirname, source))
        try:
            node = ElementTree.parse(path).getroot()
        except IOError:
            msg = "Cannot load external tileset: {0}"
            logger.error(msg.format(path))
            raise Exception
    else:
        msg = "Found external tileset, but cannot handle type: {0}"
        logger.error(msg.format(self.source))
        raise Exception

self._set_properties(node)

# since tile objects [probably] don't have a lot of metadata,
# we store it separately in the parent (a TiledMap instance)
register_gid = self.parent.register_gid
for child in node.getiterator('tile'):
    tiled_gid = int(child.get("id"))
    p = {k: types[k](v) for k, v in child.items()}
    p.update(parse_properties(child))

    # images are listed as relative to the .tsx file, not the .tmx file:
    if source and "path" in p:
        p["path"] = os.path.join(os.path.dirname(source), p["path"])

    # handle tiles that have their own image
    image = child.find('image')
    if image is None:
        p['width'] = self.tilewidth
        p['height'] = self.tileheight
    else:
        tile_source = image.get('source')
        # images are listed as relative to the .tsx file, not the .tmx file:
        if tile_source:
            tile_source = os.path.join(os.path.dirname(source), tile_source)
        p['source'] = tile_source
        p['trans'] = image.get('trans', None)
        p['width'] = image.get('width')
        p['height'] = image.get('height')

    # handle tiles with animations
    anim = child.find('animation')
    frames = list()
    p['frames'] = frames
    if anim is not None:
        for frame in anim.findall("frame"):
            duration = int(frame.get('duration'))
            gid = register_gid(int(frame.get('tileid')) + self.firstgid)
            frames.append(AnimationFrame(gid, duration))

    for gid, flags in self.parent.map_gid2(tiled_gid + self.firstgid):
        self.parent.set_tile_properties(gid, p)

# handle the optional 'tileoffset' node
self.offset = node.find('tileoffset')
if self.offset is None:
    self.offset = (0, 0)
else:
    self.offset = (self.offset.get('x', 0), self.offset.get('y', 0))

image_node = node.find('image')
if image_node is not None:
    self.source = image_node.get('source')
    # when loading from tsx, the tileset image path is relative to the tsx file, not the tmx:
    if source:
        self.source = os.path.join(os.path.dirname(source), self.source)
    self.trans = image_node.get('trans', None)
    self.width = int(image_node.get('width'))
    self.height = int(image_node.get('height'))

return self
def parse_xml(self, node)
Parse a Tileset from ElementTree xml element A bit of mangling is done here so that tilesets that have external TSX files appear the same as those that don't :param node: ElementTree element :return: self
3.154699
3.065808
1.028994
for y, row in enumerate(self.data):
    for x, gid in enumerate(row):
        yield x, y, gid
def iter_data(self)
Iterate over layer data Yields X, Y, GID tuples for each tile in the layer :return: Generator
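A quick usage sketch (layer is an assumed TiledTileLayer instance)::

    # print every non-empty tile GID in the layer
    for x, y, gid in layer.iter_data():
        if gid:
            print(x, y, gid)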
5.167369
4.266212
1.211231
images = self.parent.images
for x, y, gid in [i for i in self.iter_data() if i[2]]:
    yield x, y, images[gid]
def tiles(self)
Iterate over tile images of this layer This is an optimised generator function that returns (tile_x, tile_y, tile_image) tuples :rtype: Generator :return: (x, y, image) tuples
7.223264
7.157826
1.009142
import struct
import array

self._set_properties(node)
data = None
next_gid = None
data_node = node.find('data')

encoding = data_node.get('encoding', None)
if encoding == 'base64':
    from base64 import b64decode
    data = b64decode(data_node.text.strip())
elif encoding == 'csv':
    next_gid = map(int, "".join(
        line.strip() for line in data_node.text.strip()).split(","))
elif encoding:
    msg = 'TMX encoding type: {0} is not supported.'
    logger.error(msg.format(encoding))
    raise Exception

compression = data_node.get('compression', None)
if compression == 'gzip':
    import gzip
    with gzip.GzipFile(fileobj=six.BytesIO(data)) as fh:
        data = fh.read()
elif compression == 'zlib':
    import zlib
    data = zlib.decompress(data)
elif compression:
    msg = 'TMX compression type: {0} is not supported.'
    logger.error(msg.format(compression))
    raise Exception

# if data is None, then it was not decoded or decompressed, so
# we assume here that it is going to be a bunch of tile elements
# TODO: this will/should raise an exception if there are no tiles
if encoding == next_gid is None:
    def get_children(parent):
        for child in parent.findall('tile'):
            yield int(child.get('gid'))
    next_gid = get_children(data_node)
elif data:
    if type(data) == bytes:
        fmt = struct.Struct('<L')
        iterator = (data[i:i + 4] for i in range(0, len(data), 4))
        next_gid = (fmt.unpack(i)[0] for i in iterator)
    else:
        msg = 'layer data not in expected format ({})'
        logger.error(msg.format(type(data)))
        raise Exception

init = lambda: [0] * self.width
reg = self.parent.register_gid

# H (16-bit) may be a limitation for very detailed maps
self.data = tuple(array.array('H', init()) for i in range(self.height))
for (y, x) in product(range(self.height), range(self.width)):
    self.data[y][x] = reg(*decode_gid(next(next_gid)))
return self
def parse_xml(self, node)
Parse a Tile Layer from ElementTree xml node :param node: ElementTree xml node :return: self
3.50455
3.349222
1.046377
self._set_properties(node)
self.extend(TiledObject(self.parent, child)
            for child in node.findall('object'))
return self
def parse_xml(self, node)
Parse an Object Group from ElementTree xml node :param node: ElementTree xml node :return: self
9.309883
9.609859
0.968785
def read_points(text):
    return tuple(tuple(map(float, i.split(','))) for i in text.split())

self._set_properties(node)

# correctly handle "tile objects" (object with gid set)
if self.gid:
    self.gid = self.parent.register_gid(self.gid)

points = None

polygon = node.find('polygon')
if polygon is not None:
    points = read_points(polygon.get('points'))
    self.closed = True

polyline = node.find('polyline')
if polyline is not None:
    points = read_points(polyline.get('points'))
    self.closed = False

if points:
    x1 = x2 = y1 = y2 = 0
    for x, y in points:
        if x < x1: x1 = x
        if x > x2: x2 = x
        if y < y1: y1 = y
        if y > y2: y2 = y
    self.width = abs(x1) + abs(x2)
    self.height = abs(y1) + abs(y2)
    self.points = tuple(
        [(i[0] + self.x, i[1] + self.y) for i in points])

return self
def parse_xml(self, node)
Parse an Object from ElementTree xml node :param node: ElementTree xml node :return: self
2.784258
2.770708
1.00489
self._set_properties(node) self.name = node.get('name', None) self.opacity = node.get('opacity', self.opacity) self.visible = node.get('visible', self.visible) image_node = node.find('image') self.source = image_node.get('source', None) self.trans = image_node.get('trans', None) return self
def parse_xml(self, node)
Parse an Image Layer from ElementTree xml node :param node: ElementTree xml node :return: self
2.680839
2.390432
1.121487
tile_size = original.get_size()
threshold = 127   # the default

try:
    # count the number of pixels in the tile that are not transparent
    px = pygame.mask.from_surface(original, threshold).count()
except Exception:
    # pygame_sdl2 will fail because the mask module is not included
    # in this case, just convert_alpha and return it
    return original.convert_alpha()

# there are no transparent pixels in the image
if px == tile_size[0] * tile_size[1]:
    tile = original.convert()

# there are transparent pixels, and tiled set a colorkey
elif colorkey:
    tile = original.convert()
    tile.set_colorkey(colorkey, pygame.RLEACCEL)

# there are transparent pixels, and per-pixel alpha was requested
elif pixelalpha:
    tile = original.convert_alpha()

# there are transparent pixels, and we won't handle them
else:
    tile = original.convert()

return tile
def smart_convert(original, colorkey, pixelalpha)
This method does several tests on a surface to determine the optimal
flags and pixel format for each tile surface.

This is done for the best rendering speeds and removes the need to
convert() the images on your own.
4.520497
4.554132
0.992614
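A hedged usage sketch: in pytmx this function lives in pytmx.util_pygame, and convert()/convert_alpha() require an initialized display, hence the 1x1 window below::

    import pygame
    from pytmx.util_pygame import smart_convert

    pygame.init()
    pygame.display.set_mode((1, 1))  # a display is needed for convert()

    tile = pygame.Surface((32, 32), pygame.SRCALPHA)
    tile.fill((255, 0, 0, 128))  # transparent pixels: per-pixel alpha path
    converted = smart_convert(tile, None, True)  # colorkey=None, pixelalpha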
if colorkey: colorkey = pygame.Color('#{0}'.format(colorkey)) pixelalpha = kwargs.get('pixelalpha', True) image = pygame.image.load(filename) def load_image(rect=None, flags=None): if rect: try: tile = image.subsurface(rect) except ValueError: logger.error('Tile bounds outside bounds of tileset image') raise else: tile = image.copy() if flags: tile = handle_transformation(tile, flags) tile = smart_convert(tile, colorkey, pixelalpha) return tile return load_image
def pygame_image_loader(filename, colorkey, **kwargs)
pytmx image loader for pygame

:param filename: path to the tileset image
:param colorkey: hex color string used for colorkey transparency, if any
:param kwargs: may include 'pixelalpha' (bool, default True)
:return: function that loads, transforms, and converts tile images
4.111476
4.304608
0.955134
kwargs['image_loader'] = pygame_image_loader return pytmx.TiledMap(filename, *args, **kwargs)
def load_pygame(filename, *args, **kwargs)
Load a TMX file, images, and return a TiledMap class

PYGAME USERS: Use me.

This utility has 'smart' tile loading. By default, any tile without
transparent pixels will be loaded for quick blitting. If the tile has
transparent pixels, then it will be loaded with per-pixel alpha. This is
a per-tile, per-image check.

If a colorkey is specified as an argument, or in the tmx data, the
per-pixel alpha will not be used at all. If the tileset's image has
colorkey transparency set in Tiled, util_pygame will return images that
have their transparency already set.

TL;DR: Don't attempt to convert() or convert_alpha() the individual
tiles. It is already done for you.
4.51927
5.713356
0.791001
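A minimal, hedged loading sketch; the map path 'data/level1.tmx' is a placeholder, and get_tile_image(x, y, layer) is the standard pytmx accessor::

    import pygame
    from pytmx.util_pygame import load_pygame

    pygame.init()
    screen = pygame.display.set_mode((640, 480))
    tmx_data = load_pygame('data/level1.tmx')  # placeholder map file

    # surfaces come back pre-converted; blit them directly
    image = tmx_data.get_tile_image(0, 0, 0)  # x, y, layer index
    if image is not None:
        screen.blit(image, (0, 0))
    pygame.display.flip()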
if isinstance(tileset, int):
    try:
        tileset = tmxmap.tilesets[tileset]
    except IndexError:
        msg = "Tileset #{0} not found in map {1}."
        logger.debug(msg.format(tileset, tmxmap))
        raise IndexError(msg.format(tileset, tmxmap))

elif isinstance(tileset, str):
    try:
        tileset = [t for t in tmxmap.tilesets if t.name == tileset].pop()
    except IndexError:
        msg = "Tileset \"{0}\" not found in map {1}."
        logger.debug(msg.format(tileset, tmxmap))
        raise ValueError(msg.format(tileset, tmxmap))

elif tileset:
    msg = "Tileset must be either an int or string. got: {0}"
    logger.debug(msg.format(type(tileset)))
    raise TypeError(msg.format(type(tileset)))

gid = None
if real_gid:
    try:
        gid, flags = tmxmap.map_gid(real_gid)[0]
    except IndexError:
        msg = "GID #{0} not found"
        logger.debug(msg.format(real_gid))
        raise ValueError(msg.format(real_gid))

if isinstance(layer, int):
    layer_data = tmxmap.get_layer_data(layer)
elif isinstance(layer, str):
    try:
        layer = [l for l in tmxmap.layers if l.name == layer].pop()
        layer_data = layer.data
    except IndexError:
        msg = "Layer \"{0}\" not found in map {1}."
        logger.debug(msg.format(layer, tmxmap))
        raise ValueError(msg.format(layer, tmxmap))
else:
    msg = "Layer must be either an int or string. got: {0}"
    logger.debug(msg.format(type(layer)))
    raise TypeError(msg.format(type(layer)))

p = itertools.product(range(tmxmap.width), range(tmxmap.height))
if gid:
    points = [(x, y) for (x, y) in p if layer_data[y][x] == gid]
else:
    points = [(x, y) for (x, y) in p if layer_data[y][x]]

rects = simplify(points, tmxmap.tilewidth, tmxmap.tileheight)
return rects
def build_rects(tmxmap, layer, tileset=None, real_gid=None)
Generate a set of non-overlapping rects that represent the distribution
of the specified gid.

Useful for generating rects for use in collision detection.

Use at your own risk: this is experimental...will change in future

GID Note: You will need to add 1 to the GID reported by Tiled.

:param tmxmap: TiledMap object
:param layer: int or string name of layer
:param tileset: int or string name of tileset
:param real_gid: Tiled GID of the tile + 1 (see note)
:return: List of pygame Rect objects
1.835096
1.883858
0.974116
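A hedged usage sketch for build_rects; the map path, layer name and GID below are illustrative assumptions, and the +1 offset on the GID follows the note in the docstring::

    import pytmx
    from pytmx.util_pygame import build_rects

    tmx_data = pytmx.TiledMap('data/level1.tmx')  # placeholder path
    collision_rects = build_rects(tmx_data,
                                  layer='Walls',    # assumed layer name
                                  tileset=None,     # no tileset filtering
                                  real_gid=12 + 1)  # Tiled GID 12, plus 1
    for rect in collision_rects:
        print(rect)  # pygame.Rect in pixel coordinates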
def pick_rect(points, rects):
    ox, oy = sorted([(sum(p), p) for p in points])[0][1]
    x = ox
    y = oy
    ex = None

    while True:
        x += 1
        if (x, y) not in points:
            if ex is None:
                ex = x - 1

            if (ox, y + 1) in points:
                if x == ex + 1:
                    y += 1
                    x = ox
                else:
                    y -= 1
                    break
            else:
                if x <= ex:
                    y -= 1
                break

    c_rect = pygame.Rect(ox * tilewidth, oy * tileheight,
                         (ex - ox + 1) * tilewidth,
                         (y - oy + 1) * tileheight)

    rects.append(c_rect)

    rect = pygame.Rect(ox, oy, ex - ox + 1, y - oy + 1)
    kill = [p for p in points if rect.collidepoint(p)]
    for i in kill:
        points.remove(i)

    if points:
        pick_rect(points, rects)

rect_list = []
while all_points:
    pick_rect(all_points, rect_list)

return rect_list
def simplify(all_points, tilewidth, tileheight)
Given a list of points, return list of rects that represent them.

kludge: "A kludge (or kluge) is a workaround, a quick-and-dirty solution,
a clumsy or inelegant, yet effective, solution to a problem, typically
using parts that are cobbled together." -- wikipedia

Turn a list of points into rects; adjacent rects will be combined.

Plain English: the input list must be a list of tuples that represent
the areas to be combined into rects. The rects will be blended together
over solid groups, so if the data is something like:

0 1 1 1 0 0 0
0 1 1 0 0 0 0
0 0 0 0 0 4 0
0 0 0 0 0 4 0
0 0 0 0 0 0 0
0 0 1 1 1 1 1

You'll have the 4 rects that mask the area like this:

..######......
..####........
..........##..
..........##..
..............
....##########

Pretty cool, right?

There may be cases where the number of rectangles is not as low as
possible, but I haven't found that it is excessively bad. Certainly much
better than making a list of rects, one for each tile on the map!
2.793272
2.829484
0.987202
if colorkey:
    logger.debug('colorkey not implemented')

image = pyglet.image.load(filename)

def load_image(rect=None, flags=None):
    if rect:
        try:
            x, y, w, h = rect
            y = image.height - y - h
            tile = image.get_region(x, y, w, h)
        except Exception:
            logger.error('cannot get region %s of image', rect)
            raise
    else:
        tile = image

    if flags:
        logger.error('tile flags are not implemented')

    return tile

return load_image
def pyglet_image_loader(filename, colorkey, **kwargs)
Basic image loading with pyglet. Returns pyglet Images, not textures.

This is a basic proof-of-concept and is likely to fail in some
situations.

Missing:
    Transparency
    Tile Rotation

It is slow as well.
3.116986
3.295343
0.945876
r # Get wind efficiency curve wind_efficiency_curve = get_wind_efficiency_curve( curve_name=wind_efficiency_curve_name) # Reduce wind speed by wind efficiency reduced_wind_speed = wind_speed * np.interp( wind_speed, wind_efficiency_curve['wind_speed'], wind_efficiency_curve['efficiency']) return reduced_wind_speed
def reduce_wind_speed(wind_speed, wind_efficiency_curve_name='dena_mean')
r""" Reduces wind speed by a wind efficiency curve. The wind efficiency curves are provided in the windpowerlib and were calculated in the dena-Netzstudie II and in the work of Knorr (see [1]_ and [2]_). Parameters ---------- wind_speed : pandas.Series or numpy.array Wind speed time series. wind_efficiency_curve_name : string Name of the wind efficiency curve. Use :py:func:`~.get_wind_efficiency_curve` to get all provided wind efficiency curves. Default: 'dena_mean'. Returns ------- reduced_wind_speed : pd.Series or np.array `wind_speed` reduced by wind efficiency curve. References ---------- .. [1] Kohler et.al.: "dena-Netzstudie II. Integration erneuerbarer Energien in die deutsche Stromversorgung im Zeitraum 2015 – 2020 mit Ausblick 2025.", Deutsche Energie-Agentur GmbH (dena), Tech. rept., 2010, p. 101 .. [2] Knorr, K.: "Modellierung von raum-zeitlichen Eigenschaften der Windenergieeinspeisung für wetterdatenbasierte Windleistungssimulationen". Universität Kassel, Diss., 2016, p. 124
2.334901
2.508312
0.930866
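A short usage sketch: reduce an array of wind speeds with the 'dena_mean' efficiency curve shipped with windpowerlib (the module path windpowerlib.wake_losses is where this function lives)::

    import numpy as np
    from windpowerlib import wake_losses

    v = np.array([4.0, 8.0, 12.0])  # wind speeds in m/s
    reduced = wake_losses.reduce_wind_speed(
        v, wind_efficiency_curve_name='dena_mean')
    print(reduced)  # element-wise reduced speeds, same shape as the input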
r if 'datapath' not in kwargs: kwargs['datapath'] = os.path.join(os.path.split( os.path.dirname(__file__))[0], 'example') file = os.path.join(kwargs['datapath'], filename) # read csv file weather_df = pd.read_csv( file, index_col=0, header=[0, 1], date_parser=lambda idx: pd.to_datetime(idx, utc=True)) # change type of index to datetime and set time zone weather_df.index = pd.to_datetime(weather_df.index).tz_convert( 'Europe/Berlin') # change type of height from str to int by resetting columns weather_df.columns = [weather_df.axes[1].levels[0][ weather_df.axes[1].codes[0]], weather_df.axes[1].levels[1][ weather_df.axes[1].codes[1]].astype(int)] return weather_df
def get_weather_data(filename='weather.csv', **kwargs)
r"""
Imports weather data from a file.

The data include wind speed at two different heights in m/s, air
temperature at two different heights in K, surface roughness length in m
and air pressure in Pa. The file is located in the example folder of the
windpowerlib. The height in m for which the data applies is specified in
the second row.

Parameters
----------
filename : string
    Filename of the weather data file. Default: 'weather.csv'.

Other Parameters
----------------
datapath : string, optional
    Path where the weather data file is stored.
    Default: 'windpowerlib/example'.

Returns
-------
weather_df : pandas.DataFrame
    DataFrame with time series for wind speed `wind_speed` in m/s,
    temperature `temperature` in K, roughness length `roughness_length`
    in m, and pressure `pressure` in Pa.
    The columns of the DataFrame are a MultiIndex where the first level
    contains the variable name as string (e.g. 'wind_speed') and the
    second level contains the height as integer at which it applies
    (e.g. 10, if it was measured at a height of 10 m).
3.061115
2.740556
1.116969
r # specification of own wind turbine (Note: power values and nominal power # have to be in Watt) my_turbine = { 'name': 'myTurbine', 'nominal_power': 3e6, # in W 'hub_height': 105, # in m 'rotor_diameter': 90, # in m 'power_curve': pd.DataFrame( data={'value': [p * 1000 for p in [ 0.0, 26.0, 180.0, 1500.0, 3000.0, 3000.0]], # in W 'wind_speed': [0.0, 3.0, 5.0, 10.0, 15.0, 25.0]}) # in m/s } # initialize WindTurbine object my_turbine = WindTurbine(**my_turbine) # specification of wind turbine where power curve is provided in the oedb # if you want to use the power coefficient curve change the value of # 'fetch_curve' to 'power_coefficient_curve' enercon_e126 = { 'name': 'E-126/4200', # turbine type as in register # 'hub_height': 135, # in m 'rotor_diameter': 127, # in m 'fetch_curve': 'power_curve', # fetch power curve # 'data_source': 'oedb' # data source oedb or name of csv file } # initialize WindTurbine object e126 = WindTurbine(**enercon_e126) # specification of wind turbine where power coefficient curve is provided # by a csv file csv_file = os.path.join(os.path.dirname(__file__), 'data', 'example_power_coefficient_curves.csv') dummy_turbine = { 'name': 'DUMMY 1', # turbine type as in file # 'hub_height': 100, # in m 'rotor_diameter': 70, # in m 'fetch_curve': 'power_coefficient_curve', # fetch cp curve # 'data_source': csv_file # data source } # initialize WindTurbine object dummy_turbine = WindTurbine(**dummy_turbine) return my_turbine, e126, dummy_turbine
def initialize_wind_turbines()
r"""
Initializes three :class:`~.wind_turbine.WindTurbine` objects.

Function shows three ways to initialize a WindTurbine object. You can
either specify your own turbine, as done below for 'my_turbine', or fetch
power and/or power coefficient curve data from the OpenEnergy Database
(oedb), as done for the 'enercon_e126', or provide your turbine data in
csv files as done for 'dummy_turbine' with an example file.

Execute ``windpowerlib.wind_turbine.get_turbine_types()`` to get a table
including all wind turbines for which power and/or power coefficient
curves are provided.

Returns
-------
Tuple (WindTurbine, WindTurbine, WindTurbine)
3.159307
2.665106
1.185434
r # power output calculation for my_turbine # initialize ModelChain with default parameters and use run_model method # to calculate power output mc_my_turbine = ModelChain(my_turbine).run_model(weather) # write power output time series to WindTurbine object my_turbine.power_output = mc_my_turbine.power_output # power output calculation for e126 # own specifications for ModelChain setup modelchain_data = { 'wind_speed_model': 'logarithmic', # 'logarithmic' (default), # 'hellman' or # 'interpolation_extrapolation' 'density_model': 'ideal_gas', # 'barometric' (default), 'ideal_gas' or # 'interpolation_extrapolation' 'temperature_model': 'linear_gradient', # 'linear_gradient' (def.) or # 'interpolation_extrapolation' 'power_output_model': 'power_curve', # 'power_curve' (default) or # 'power_coefficient_curve' 'density_correction': True, # False (default) or True 'obstacle_height': 0, # default: 0 'hellman_exp': None} # None (default) or None # initialize ModelChain with own specifications and use run_model method # to calculate power output mc_e126 = ModelChain(e126, **modelchain_data).run_model(weather) # write power output time series to WindTurbine object e126.power_output = mc_e126.power_output # power output calculation for example_turbine # own specification for 'power_output_model' mc_example_turbine = ModelChain( dummy_turbine, power_output_model='power_coefficient_curve').run_model(weather) dummy_turbine.power_output = mc_example_turbine.power_output return
def calculate_power_output(weather, my_turbine, e126, dummy_turbine)
r""" Calculates power output of wind turbines using the :class:`~.modelchain.ModelChain`. The :class:`~.modelchain.ModelChain` is a class that provides all necessary steps to calculate the power output of a wind turbine. You can either use the default methods for the calculation steps, as done for 'my_turbine', or choose different methods, as done for the 'e126'. Of course, you can also use the default methods while only changing one or two of them, as done for 'dummy_turbine'. Parameters ---------- weather : pd.DataFrame Contains weather data time series. my_turbine : WindTurbine WindTurbine object with self provided power curve. e126 : WindTurbine WindTurbine object with power curve from the OpenEnergy Database. dummy_turbine : WindTurbine WindTurbine object with power coefficient curve from example file.
2.945991
2.632174
1.119224
r # plot or print turbine power output if plt: e126.power_output.plot(legend=True, label='Enercon E126') my_turbine.power_output.plot(legend=True, label='myTurbine') dummy_turbine.power_output.plot(legend=True, label='dummyTurbine') plt.show() else: print(e126.power_output) print(my_turbine.power_output) print(dummy_turbine.power_output) # plot or print power curve if plt: if e126.power_curve is not None: e126.power_curve.plot(x='wind_speed', y='value', style='*', title='Enercon E126 power curve') plt.show() if my_turbine.power_curve is not None: my_turbine.power_curve.plot(x='wind_speed', y='value', style='*', title='myTurbine power curve') plt.show() if dummy_turbine.power_coefficient_curve is not None: dummy_turbine.power_coefficient_curve.plot( x='wind_speed', y='value', style='*', title='dummyTurbine power coefficient curve') plt.show() else: if e126.power_coefficient_curve is not None: print(e126.power_coefficient_curve) if e126.power_curve is not None: print(e126.power_curve)
def plot_or_print(my_turbine, e126, dummy_turbine)
r"""
Plots or prints power output and power (coefficient) curves.

Parameters
----------
my_turbine : WindTurbine
    WindTurbine object with self provided power curve.
e126 : WindTurbine
    WindTurbine object with power curve from the OpenEnergy Database.
dummy_turbine : WindTurbine
    WindTurbine object with power coefficient curve from example file.
1.812073
1.748806
1.036177
r weather = get_weather_data('weather.csv') my_turbine, e126, dummy_turbine = initialize_wind_turbines() calculate_power_output(weather, my_turbine, e126, dummy_turbine) plot_or_print(my_turbine, e126, dummy_turbine)
def run_example()
r""" Runs the basic example.
6.991159
6.408184
1.090974
r self.hub_height = np.exp( sum(np.log(wind_dict['wind_turbine'].hub_height) * wind_dict['wind_turbine'].nominal_power * wind_dict['number_of_turbines'] for wind_dict in self.wind_turbine_fleet) / self.get_installed_power()) return self
def mean_hub_height(self)
r""" Calculates the mean hub height of the wind farm. The mean hub height of a wind farm is necessary for power output calculations with an aggregated wind farm power curve containing wind turbines with different hub heights. Hub heights of wind turbines with higher nominal power weigh more than others. Assigns the hub height to the wind farm object. Returns ------- self Notes ----- The following equation is used [1]_: .. math:: h_{WF} = e^{\sum\limits_{k}{ln(h_{WT,k})} \frac{P_{N,k}}{\sum\limits_{k}{P_{N,k}}}} with: :math:`h_{WF}`: mean hub height of wind farm, :math:`h_{WT,k}`: hub height of the k-th wind turbine of a wind farm, :math:`P_{N,k}`: nominal power of the k-th wind turbine References ---------- .. [1] Knorr, K.: "Modellierung von raum-zeitlichen Eigenschaften der Windenergieeinspeisung für wetterdatenbasierte Windleistungssimulationen". Universität Kassel, Diss., 2016, p. 35
6.111319
4.685293
1.304362
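The weighting in the formula is easy to check by hand; below is a worked instance with two synthetic turbine types, computed directly with numpy rather than through a WindFarm object::

    import numpy as np

    hub_heights = np.array([100.0, 135.0])    # m, one entry per turbine type
    nominal_power = np.array([2.0e6, 4.2e6])  # W
    number_of_turbines = np.array([3, 2])

    weights = nominal_power * number_of_turbines
    mean_h = np.exp((np.log(hub_heights) * weights).sum() / weights.sum())
    print(round(mean_h, 1))  # log-mean hub height, weighted by capacity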
r if 0.7 * obstacle_height > wind_speed_height: raise ValueError("To take an obstacle height of {0} m ".format( obstacle_height) + "into consideration, wind " + "speed data of a greater height is needed.") # Return np.array if wind_speed is np.array if (isinstance(wind_speed, np.ndarray) and isinstance(roughness_length, pd.Series)): roughness_length = np.array(roughness_length) return (wind_speed * np.log((hub_height - 0.7 * obstacle_height) / roughness_length) / np.log((wind_speed_height - 0.7 * obstacle_height) / roughness_length))
def logarithmic_profile(wind_speed, wind_speed_height, hub_height, roughness_length, obstacle_height=0.0)
r""" Calculates the wind speed at hub height using a logarithmic wind profile. The logarithmic height equation is used. There is the possibility of including the height of the surrounding obstacles in the calculation. This function is carried out when the parameter `wind_speed_model` of an instance of the :class:`~.modelchain.ModelChain` class is 'logarithmic'. Parameters ---------- wind_speed : pandas.Series or numpy.array Wind speed time series. wind_speed_height : float Height for which the parameter `wind_speed` applies. hub_height : float Hub height of wind turbine. roughness_length : pandas.Series or numpy.array or float Roughness length. obstacle_height : float Height of obstacles in the surrounding area of the wind turbine. Set `obstacle_height` to zero for wide spread obstacles. Default: 0. Returns ------- pandas.Series or numpy.array Wind speed at hub height. Data type depends on type of `wind_speed`. Notes ----- The following equation is used [1]_, [2]_, [3]_: .. math:: v_{wind,hub}=v_{wind,data}\cdot \frac{\ln\left(\frac{h_{hub}-d}{z_{0}}\right)}{\ln\left( \frac{h_{data}-d}{z_{0}}\right)} with: v: wind speed, h: height, :math:`z_{0}`: roughness length, d: boundary layer offset (estimated by d = 0.7 * `obstacle_height`) For d = 0 it results in the following equation [2]_, [3]_: .. math:: v_{wind,hub}=v_{wind,data}\cdot\frac{\ln\left(\frac{h_{hub}} {z_{0}}\right)}{\ln\left(\frac{h_{data}}{z_{0}}\right)} :math:`h_{data}` is the height at which the wind speed :math:`v_{wind,data}` is measured and :math:`v_{wind,hub}` is the wind speed at hub height :math:`h_{hub}` of the wind turbine. Parameters `wind_speed_height`, `roughness_length`, `hub_height` and `obstacle_height` have to be of the same unit. References ---------- .. [1] Quaschning V.: "Regenerative Energiesysteme". München, Hanser Verlag, 2011, p. 278 .. [2] Gasch, R., Twele, J.: "Windkraftanlagen". 6. Auflage, Wiesbaden, Vieweg + Teubner, 2010, p. 129 .. [3] Hau, E.: "Windkraftanlagen - Grundlagen, Technik, Einsatz, Wirtschaftlichkeit". 4. Auflage, Springer-Verlag, 2008, p. 515
3.794266
3.510519
1.080828
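A worked example, scaling a 10 m measurement to a 135 m hub height over terrain with a roughness length of 0.1 m and no obstacles (all values are illustrative)::

    import numpy as np
    from windpowerlib import wind_speed

    v10 = np.array([5.0, 7.5])  # m/s, measured at 10 m
    v_hub = wind_speed.logarithmic_profile(
        v10, wind_speed_height=10, hub_height=135, roughness_length=0.1)
    print(v_hub)  # v10 * ln(135/0.1) / ln(10/0.1), about 1.57 * v10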
r if hellman_exponent is None: if roughness_length is not None: # Return np.array if wind_speed is np.array if (isinstance(wind_speed, np.ndarray) and isinstance(roughness_length, pd.Series)): roughness_length = np.array(roughness_length) hellman_exponent = 1 / np.log(hub_height / roughness_length) else: hellman_exponent = 1/7 return wind_speed * (hub_height / wind_speed_height) ** hellman_exponent
def hellman(wind_speed, wind_speed_height, hub_height, roughness_length=None, hellman_exponent=None)
r"""
Calculates the wind speed at hub height using the Hellman equation.

It is assumed that the wind profile follows a power law. This function is
carried out when the parameter `wind_speed_model` of an instance of the
:class:`~.modelchain.ModelChain` class is 'hellman'.

Parameters
----------
wind_speed : pandas.Series or numpy.array
    Wind speed time series.
wind_speed_height : float
    Height for which the parameter `wind_speed` applies.
hub_height : float
    Hub height of wind turbine.
roughness_length : pandas.Series or numpy.array or float
    Roughness length. If given and `hellman_exponent` is None:
    `hellman_exponent` = 1 / ln(hub_height/roughness_length),
    otherwise `hellman_exponent` = 1/7. Default: None.
hellman_exponent : None or float
    The Hellman exponent, which combines the increase in wind speed due
    to stability of atmospheric conditions and surface roughness into
    one constant. If None and roughness length is given
    `hellman_exponent` = 1 / ln(hub_height/roughness_length),
    otherwise `hellman_exponent` = 1/7. Default: None.

Returns
-------
pandas.Series or numpy.array
    Wind speed at hub height. Data type depends on type of `wind_speed`.

Notes
-----
The following equation is used [1]_, [2]_, [3]_:

.. math:: v_{wind,hub}=v_{wind,data}\cdot \left(\frac{h_{hub}}{h_{data}}
    \right)^\alpha

with:
    v: wind speed, h: height, :math:`\alpha`: Hellman exponent

:math:`h_{data}` is the height at which the wind speed
:math:`v_{wind,data}` is measured and :math:`v_{wind,hub}` is the wind
speed at hub height :math:`h_{hub}` of the wind turbine.

For the Hellman exponent :math:`\alpha` many studies use a value of 1/7
for onshore and a value of 1/9 for offshore. The Hellman exponent can
also be calculated by the following equation [2]_, [3]_:

.. math:: \alpha = \frac{1}{\ln\left(\frac{h_{hub}}{z_0} \right)}

with:
    :math:`z_{0}`: roughness length

Parameters `wind_speed_height`, `roughness_length` and `hub_height` have
to be of the same unit.

References
----------
.. [1] Sharp, E.: "Spatiotemporal disaggregation of GB scenarios depicting
    increased wind capacity and electrified heat demand in dwellings".
    UCL, Energy Institute, 2015, p. 83
.. [2] Hau, E.: "Windkraftanlagen - Grundlagen, Technik, Einsatz,
    Wirtschaftlichkeit". 4. Auflage, Springer-Verlag, 2008, p. 517
.. [3] Quaschning V.: "Regenerative Energiesysteme". München, Hanser
    Verlag, 2011, p. 279
2.90628
2.587263
1.123303
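A worked example with the default Hellman exponent of 1/7 (no roughness length passed); the heights and speeds are illustrative::

    import numpy as np
    from windpowerlib import wind_speed

    v10 = np.array([5.0, 7.5])  # m/s, measured at 10 m
    v_hub = wind_speed.hellman(v10, wind_speed_height=10, hub_height=135)
    print(v_hub)  # v10 * (135 / 10) ** (1 / 7), about 1.45 * v10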
r
# Create power curve DataFrame
power_curve_df = pd.DataFrame(
    data=[list(power_curve_wind_speeds),
          list(power_curve_values)]).transpose()
# Rename columns of DataFrame
power_curve_df.columns = ['wind_speed', 'value']
if wake_losses_model == 'constant_efficiency':
    if not isinstance(wind_farm_efficiency, float):
        raise TypeError("'wind_farm_efficiency' must be float if " +
                        "`wake_losses_model` is '{}' but is {}".format(
                            wake_losses_model, wind_farm_efficiency))
    power_curve_df['value'] = power_curve_values * wind_farm_efficiency
elif wake_losses_model == 'power_efficiency_curve':
    if (not isinstance(wind_farm_efficiency, dict) and
            not isinstance(wind_farm_efficiency, pd.DataFrame)):
        raise TypeError(
            "'wind_farm_efficiency' must be dict or pd.DataFrame if " +
            "`wake_losses_model` is '{}' but is {}".format(
                wake_losses_model, wind_farm_efficiency))
    df = pd.concat([power_curve_df.set_index('wind_speed'),
                    wind_farm_efficiency.set_index('wind_speed')], axis=1)
    # Add column with reduced power (nan values of efficiency are
    # interpolated)
    df['reduced_power'] = df['value'] * df['efficiency'].interpolate(
        method='index')
    reduced_power = df['reduced_power'].dropna()
    power_curve_df = pd.DataFrame([reduced_power.index,
                                   reduced_power.values]).transpose()
    power_curve_df.columns = ['wind_speed', 'value']
else:
    raise ValueError(
        "`wake_losses_model` is {} but should be ".format(
            wake_losses_model) +
        "'constant_efficiency' or 'power_efficiency_curve'")
return power_curve_df
def wake_losses_to_power_curve(power_curve_wind_speeds, power_curve_values, wind_farm_efficiency, wake_losses_model='power_efficiency_curve')
r"""
Reduces the power values of a power curve by an efficiency (curve).

Parameters
----------
power_curve_wind_speeds : pandas.Series or numpy.array
    Wind speeds in m/s for which the power curve values are provided in
    `power_curve_values`.
power_curve_values : pandas.Series or numpy.array
    Power curve values corresponding to wind speeds in
    `power_curve_wind_speeds`.
wind_farm_efficiency : float or pd.DataFrame
    Efficiency of the wind farm. Either constant (float) or efficiency
    curve (pd.DataFrame) containing 'wind_speed' and 'efficiency' columns
    with wind speeds in m/s and the corresponding dimensionless wind farm
    efficiency (reduction of power).
wake_losses_model : String
    Defines the method for taking wake losses within the farm into
    consideration. Options: 'power_efficiency_curve',
    'constant_efficiency'. Default: 'power_efficiency_curve'.

Returns
-------
power_curve_df : pd.DataFrame
    Power curve with power values reduced by a wind farm efficiency.
    DataFrame has 'wind_speed' and 'value' columns with wind speeds in
    m/s and the corresponding power curve value in W.
2.182357
2.115924
1.031397
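A hedged usage sketch with a constant farm efficiency; the power curve points are synthetic, and the function is assumed importable from windpowerlib.power_curves, where it lives in recent windpowerlib versions::

    import pandas as pd
    from windpowerlib import power_curves

    curve = power_curves.wake_losses_to_power_curve(
        power_curve_wind_speeds=pd.Series([3.0, 6.0, 9.0, 12.0]),  # m/s
        power_curve_values=pd.Series([0.0, 5.0e5, 2.0e6, 3.0e6]),  # W
        wind_farm_efficiency=0.9,
        wake_losses_model='constant_efficiency')
    print(curve)  # 'value' column scaled to 90 % of the input power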
r if self.power_plant.hub_height in weather_df['temperature']: temperature_hub = weather_df['temperature'][ self.power_plant.hub_height] elif self.temperature_model == 'linear_gradient': logging.debug('Calculating temperature using temperature ' 'gradient.') closest_height = weather_df['temperature'].columns[ min(range(len(weather_df['temperature'].columns)), key=lambda i: abs(weather_df['temperature'].columns[i] - self.power_plant.hub_height))] temperature_hub = temperature.linear_gradient( weather_df['temperature'][closest_height], closest_height, self.power_plant.hub_height) elif self.temperature_model == 'interpolation_extrapolation': logging.debug('Calculating temperature using linear inter- or ' 'extrapolation.') temperature_hub = tools.linear_interpolation_extrapolation( weather_df['temperature'], self.power_plant.hub_height) else: raise ValueError("'{0}' is an invalid value. ".format( self.temperature_model) + "`temperature_model` must be " "'linear_gradient' or 'interpolation_extrapolation'.") return temperature_hub
def temperature_hub(self, weather_df)
r""" Calculates the temperature of air at hub height. The temperature is calculated using the method specified by the parameter `temperature_model`. Parameters ---------- weather_df : pandas.DataFrame DataFrame with time series for temperature `temperature` in K. The columns of the DataFrame are a MultiIndex where the first level contains the variable name (e.g. temperature) and the second level contains the height at which it applies (e.g. 10, if it was measured at a height of 10 m). See documentation of :func:`ModelChain.run_model` for an example on how to create the weather_df DataFrame. Returns ------- temperature_hub : pandas.Series or numpy.array Temperature of air in K at hub height. Notes ----- If `weather_df` contains temperatures at different heights the given temperature(s) closest to the hub height are used.
2.710865
2.693486
1.006452
r if self.density_model != 'interpolation_extrapolation': temperature_hub = self.temperature_hub(weather_df) # Calculation of density in kg/m³ at hub height if self.density_model == 'barometric': logging.debug('Calculating density using barometric height ' 'equation.') closest_height = weather_df['pressure'].columns[ min(range(len(weather_df['pressure'].columns)), key=lambda i: abs(weather_df['pressure'].columns[i] - self.power_plant.hub_height))] density_hub = density.barometric( weather_df['pressure'][closest_height], closest_height, self.power_plant.hub_height, temperature_hub) elif self.density_model == 'ideal_gas': logging.debug('Calculating density using ideal gas equation.') closest_height = weather_df['pressure'].columns[ min(range(len(weather_df['pressure'].columns)), key=lambda i: abs(weather_df['pressure'].columns[i] - self.power_plant.hub_height))] density_hub = density.ideal_gas( weather_df['pressure'][closest_height], closest_height, self.power_plant.hub_height, temperature_hub) elif self.density_model == 'interpolation_extrapolation': logging.debug('Calculating density using linear inter- or ' 'extrapolation.') density_hub = tools.linear_interpolation_extrapolation( weather_df['density'], self.power_plant.hub_height) else: raise ValueError("'{0}' is an invalid value. ".format( self.density_model) + "`density_model` " + "must be 'barometric', 'ideal_gas' or " + "'interpolation_extrapolation'.") return density_hub
def density_hub(self, weather_df)
r""" Calculates the density of air at hub height. The density is calculated using the method specified by the parameter `density_model`. Previous to the calculation of the density the temperature at hub height is calculated using the method specified by the parameter `temperature_model`. Parameters ---------- weather_df : pandas.DataFrame DataFrame with time series for temperature `temperature` in K, pressure `pressure` in Pa and/or density `density` in kg/m³, depending on the `density_model` used. The columns of the DataFrame are a MultiIndex where the first level contains the variable name (e.g. temperature) and the second level contains the height at which it applies (e.g. 10, if it was measured at a height of 10 m). See documentation of :func:`ModelChain.run_model` for an example on how to create the weather_df DataFrame. Returns ------- density_hub : pandas.Series or numpy.array Density of air in kg/m³ at hub height. Notes ----- If `weather_df` contains data at different heights the data closest to the hub height are used. If `interpolation_extrapolation` is used to calculate the density at hub height, the `weather_df` must contain at least two time series for density.
2.345184
2.173051
1.079213
r if self.power_plant.hub_height in weather_df['wind_speed']: wind_speed_hub = weather_df['wind_speed'][ self.power_plant.hub_height] elif self.wind_speed_model == 'logarithmic': logging.debug('Calculating wind speed using logarithmic wind ' 'profile.') closest_height = weather_df['wind_speed'].columns[ min(range(len(weather_df['wind_speed'].columns)), key=lambda i: abs(weather_df['wind_speed'].columns[i] - self.power_plant.hub_height))] wind_speed_hub = wind_speed.logarithmic_profile( weather_df['wind_speed'][closest_height], closest_height, self.power_plant.hub_height, weather_df['roughness_length'].iloc[:, 0], self.obstacle_height) elif self.wind_speed_model == 'hellman': logging.debug('Calculating wind speed using hellman equation.') closest_height = weather_df['wind_speed'].columns[ min(range(len(weather_df['wind_speed'].columns)), key=lambda i: abs(weather_df['wind_speed'].columns[i] - self.power_plant.hub_height))] wind_speed_hub = wind_speed.hellman( weather_df['wind_speed'][closest_height], closest_height, self.power_plant.hub_height, weather_df['roughness_length'].iloc[:, 0], self.hellman_exp) elif self.wind_speed_model == 'interpolation_extrapolation': logging.debug('Calculating wind speed using linear inter- or ' 'extrapolation.') wind_speed_hub = tools.linear_interpolation_extrapolation( weather_df['wind_speed'], self.power_plant.hub_height) elif self.wind_speed_model == 'log_interpolation_extrapolation': logging.debug('Calculating wind speed using logarithmic inter- or ' 'extrapolation.') wind_speed_hub = tools.logarithmic_interpolation_extrapolation( weather_df['wind_speed'], self.power_plant.hub_height) else: raise ValueError("'{0}' is an invalid value. ".format( self.wind_speed_model) + "`wind_speed_model` must be " "'logarithmic', 'hellman', 'interpolation_extrapolation' " + "or 'log_interpolation_extrapolation'.") return wind_speed_hub
def wind_speed_hub(self, weather_df)
r""" Calculates the wind speed at hub height. The method specified by the parameter `wind_speed_model` is used. Parameters ---------- weather_df : pandas.DataFrame DataFrame with time series for wind speed `wind_speed` in m/s and roughness length `roughness_length` in m. The columns of the DataFrame are a MultiIndex where the first level contains the variable name (e.g. wind_speed) and the second level contains the height at which it applies (e.g. 10, if it was measured at a height of 10 m). See documentation of :func:`ModelChain.run_model` for an example on how to create the weather_df DataFrame. Returns ------- wind_speed_hub : pandas.Series or numpy.array Wind speed in m/s at hub height. Notes ----- If `weather_df` contains wind speeds at different heights the given wind speed(s) closest to the hub height are used.
1.926228
1.877354
1.026033
r if self.power_output_model == 'power_curve': if self.power_plant.power_curve is None: raise TypeError("Power curve values of " + self.power_plant.name + " are missing.") logging.debug('Calculating power output using power curve.') return (power_output.power_curve( wind_speed_hub, self.power_plant.power_curve['wind_speed'], self.power_plant.power_curve['value'], density_hub, self.density_correction)) elif self.power_output_model == 'power_coefficient_curve': if self.power_plant.power_coefficient_curve is None: raise TypeError("Power coefficient curve values of " + self.power_plant.name + " are missing.") logging.debug('Calculating power output using power coefficient ' 'curve.') return (power_output.power_coefficient_curve( wind_speed_hub, self.power_plant.power_coefficient_curve[ 'wind_speed'], self.power_plant.power_coefficient_curve[ 'value'], self.power_plant.rotor_diameter, density_hub)) else: raise ValueError("'{0}' is an invalid value. ".format( self.power_output_model) + "`power_output_model` must be " + "'power_curve' or 'power_coefficient_curve'.")
def calculate_power_output(self, wind_speed_hub, density_hub)
r""" Calculates the power output of the wind power plant. The method specified by the parameter `power_output_model` is used. Parameters ---------- wind_speed_hub : pandas.Series or numpy.array Wind speed at hub height in m/s. density_hub : pandas.Series or numpy.array Density of air at hub height in kg/m³. Returns ------- pandas.Series Electrical power output of the wind turbine in W.
2.123898
2.166913
0.980149
r wind_speed_hub = self.wind_speed_hub(weather_df) density_hub = (None if (self.power_output_model == 'power_curve' and self.density_correction is False) else self.density_hub(weather_df)) self.power_output = self.calculate_power_output(wind_speed_hub, density_hub) return self
def run_model(self, weather_df)
r"""
Runs the model.

Parameters
----------
weather_df : pandas.DataFrame
    DataFrame with time series for wind speed `wind_speed` in m/s, and
    roughness length `roughness_length` in m, as well as optionally
    temperature `temperature` in K, pressure `pressure` in Pa and
    density `density` in kg/m³ depending on `power_output_model` and
    `density_model` chosen.
    The columns of the DataFrame are a MultiIndex where the first level
    contains the variable name (e.g. wind_speed) and the second level
    contains the height at which it applies (e.g. 10, if it was
    measured at a height of 10 m). See below for an example on how to
    create the weather_df DataFrame.

Other Parameters
----------------
roughness_length : Float, optional.
    Roughness length.
turbulence_intensity : Float, optional.
    Turbulence intensity.

Returns
-------
self

Examples
---------
>>> import numpy as np
>>> import pandas as pd
>>> weather_df = pd.DataFrame(np.random.rand(2,6),
...                           index=pd.date_range('1/1/2012',
...                                               periods=2,
...                                               freq='H'),
...                           columns=[np.array(['wind_speed',
...                                              'wind_speed',
...                                              'temperature',
...                                              'temperature',
...                                              'pressure',
...                                              'roughness_length']),
...                                    np.array([10, 80, 10, 80,
...                                              10, 0])])
>>> weather_df.columns.get_level_values(0)[0]
'wind_speed'
4.799331
4.846069
0.990356
r # find closest heights heights_sorted = df.columns[ sorted(range(len(df.columns)), key=lambda i: abs(df.columns[i] - target_height))] return ((df[heights_sorted[1]] - df[heights_sorted[0]]) / (heights_sorted[1] - heights_sorted[0]) * (target_height - heights_sorted[0]) + df[heights_sorted[0]])
def linear_interpolation_extrapolation(df, target_height)
r""" Linear inter- or extrapolates between the values of a data frame. This function can be used for the inter-/extrapolation of a parameter (e.g wind speed) available at two or more different heights, to approximate the value at hub height. The function is carried out when the parameter `wind_speed_model`, `density_model` or `temperature_model` of an instance of the :class:`~.modelchain.ModelChain` class is 'interpolation_extrapolation'. Parameters ---------- df : pandas.DataFrame DataFrame with time series for parameter that is to be interpolated or extrapolated. The columns of the DataFrame are the different heights for which the parameter is available. If more than two heights are given, the two closest heights are used. See example below on how the DataFrame should look like and how the function can be used. target_height : float Height for which the parameter is approximated (e.g. hub height). Returns ------- pandas.Series Result of the inter-/extrapolation (e.g. wind speed at hub height). Notes ----- For the inter- and extrapolation the following equation is used: .. math:: f(x) = \frac{(f(x_2) - f(x_1))}{(x_2 - x_1)} \cdot (x - x_1) + f(x_1) Examples --------- >>> import numpy as np >>> import pandas as pd >>> wind_speed_10m = np.array([[3], [4]]) >>> wind_speed_80m = np.array([[6], [6]]) >>> weather_df = pd.DataFrame(np.hstack((wind_speed_10m, ... wind_speed_80m)), ... index=pd.date_range('1/1/2012', ... periods=2, ... freq='H'), ... columns=[np.array(['wind_speed', ... 'wind_speed']), ... np.array([10, 80])]) >>> value = linear_interpolation_extrapolation( ... weather_df['wind_speed'], 100)[0]
2.699911
2.83068
0.953803
r # find closest heights heights_sorted = df.columns[ sorted(range(len(df.columns)), key=lambda i: abs(df.columns[i] - target_height))] return ((np.log(target_height) * (df[heights_sorted[1]] - df[heights_sorted[0]]) - df[heights_sorted[1]] * np.log(heights_sorted[0]) + df[heights_sorted[0]] * np.log(heights_sorted[1])) / (np.log(heights_sorted[1]) - np.log(heights_sorted[0])))
def logarithmic_interpolation_extrapolation(df, target_height)
r"""
Logarithmic inter- or extrapolates between the values of a data frame.

This function can be used for the inter-/extrapolation of the wind speed
if it is available at two or more different heights, to approximate the
value at hub height. The function is carried out when the parameter
`wind_speed_model` of an instance of the
:class:`~.modelchain.ModelChain` class is
'log_interpolation_extrapolation'.

Parameters
----------
df : pandas.DataFrame
    DataFrame with time series for parameter that is to be interpolated
    or extrapolated. The columns of the DataFrame are the different
    heights for which the parameter is available. If more than two
    heights are given, the two closest heights are used. See example in
    :py:func:`~.linear_interpolation_extrapolation` on how the DataFrame
    should look like and how the function can be used.
target_height : float
    Height for which the parameter is approximated (e.g. hub height).

Returns
-------
pandas.Series
    Result of the inter-/extrapolation (e.g. wind speed at hub height).

Notes
-----
For the logarithmic inter- and extrapolation the following equation is
used [1]_:

.. math:: f(x) = \frac{\ln(x) \cdot (f(x_2) - f(x_1)) - f(x_2) \cdot
    \ln(x_1) + f(x_1) \cdot \ln(x_2)}{\ln(x_2) - \ln(x_1)}

References
----------
.. [1] Knorr, K.: "Modellierung von raum-zeitlichen Eigenschaften der
    Windenergieeinspeisung für wetterdatenbasierte
    Windleistungssimulationen". Universität Kassel, Diss., 2016, p. 83
2.574293
2.34073
1.099782
r return (1 / (standard_deviation * np.sqrt(2 * np.pi)) * np.exp(-(function_variable - mean)**2 / (2 * standard_deviation**2)))
def gauss_distribution(function_variable, standard_deviation, mean=0)
r"""
Gauss distribution.

The Gauss distribution is used in the function
:py:func:`~.power_curves.smooth_power_curve` for power curve smoothing.

Parameters
----------
function_variable : float
    Variable of the gaussian distribution.
standard_deviation : float
    Standard deviation of the Gauss distribution.
mean : float
    Defines the offset of the Gauss distribution. Default: 0.

Returns
-------
float or numpy.array
    Value(s) of the Gauss distribution at `function_variable`. Data type
    depends on the type of `function_variable`.

Notes
-----
The following equation is used [1]_:

.. math:: f(x) = \frac{1}{\sigma \sqrt{2 \pi}} \exp
    \left[-\frac{(x-\mu)^2}{2 \sigma^2}\right]

with:
    :math:`\sigma`: standard deviation, :math:`\mu`: mean

References
----------
.. [1] Berendsen, H.: "A Student's Guide to Data and Error Analysis".
    New York, Cambridge University Press, 2011, p. 37
2.806112
3.482985
0.805663
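A quick sanity check of the density formula: a standard normal (sigma 1, mean 0) evaluated at 0 gives 1/sqrt(2*pi). The function is assumed importable from windpowerlib.tools, where it lives in recent windpowerlib versions::

    import numpy as np
    from windpowerlib import tools

    value = tools.gauss_distribution(0.0, standard_deviation=1.0, mean=0)
    assert np.isclose(value, 1 / np.sqrt(2 * np.pi))  # about 0.3989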
r # Set turbulence intensity for assigning power curve turbulence_intensity = ( weather_df['turbulence_intensity'].values.mean() if 'turbulence_intensity' in weather_df.columns.get_level_values(0) else None) # Assign power curve if (self.wake_losses_model == 'power_efficiency_curve' or self.wake_losses_model == 'constant_efficiency' or self.wake_losses_model is None): wake_losses_model_to_power_curve = self.wake_losses_model if self.wake_losses_model is None: logging.debug('Wake losses in wind farms not considered.') else: logging.debug('Wake losses considered with {}.'.format( self.wake_losses_model)) else: logging.debug('Wake losses considered by {} wind '.format( self.wake_losses_model) + 'efficiency curve.') wake_losses_model_to_power_curve = None self.power_plant.assign_power_curve( wake_losses_model=wake_losses_model_to_power_curve, smoothing=self.smoothing, block_width=self.block_width, standard_deviation_method=self.standard_deviation_method, smoothing_order=self.smoothing_order, roughness_length=weather_df['roughness_length'][0].mean(), turbulence_intensity=turbulence_intensity) # Further logging messages if self.smoothing is None: logging.debug('Aggregated power curve not smoothed.') else: logging.debug('Aggregated power curve smoothed by method: ' + self.standard_deviation_method) return self
def assign_power_curve(self, weather_df)
r""" Calculates the power curve of the wind turbine cluster. The power curve is aggregated from the wind farms' and wind turbines' power curves by using :func:`power_plant.assign_power_curve`. Depending on the parameters of the WindTurbineCluster power curves are smoothed and/or wake losses are taken into account. Parameters ---------- weather_df : pandas.DataFrame DataFrame with time series for wind speed `wind_speed` in m/s, and roughness length `roughness_length` in m, as well as optionally temperature `temperature` in K, pressure `pressure` in Pa, density `density` in kg/m³ and turbulence intensity `turbulence_intensity` depending on `power_output_model`, `density_model` and `standard_deviation_model` chosen. The columns of the DataFrame are a MultiIndex where the first level contains the variable name (e.g. wind_speed) and the second level contains the height at which it applies (e.g. 10, if it was measured at a height of 10 m). See documentation of :func:`TurbineClusterModelChain.run_model` for an example on how to create the weather_df DataFrame. Returns ------- self
3.246691
3.003898
1.080826
r self.assign_power_curve(weather_df) self.power_plant.mean_hub_height() wind_speed_hub = self.wind_speed_hub(weather_df) density_hub = (None if (self.power_output_model == 'power_curve' and self.density_correction is False) else self.density_hub(weather_df)) if (self.wake_losses_model != 'power_efficiency_curve' and self.wake_losses_model != 'constant_efficiency' and self.wake_losses_model is not None): # Reduce wind speed with wind efficiency curve wind_speed_hub = wake_losses.reduce_wind_speed( wind_speed_hub, wind_efficiency_curve_name=self.wake_losses_model) self.power_output = self.calculate_power_output(wind_speed_hub, density_hub) return self
def run_model(self, weather_df)
r""" Runs the model. Parameters ---------- weather_df : pandas.DataFrame DataFrame with time series for wind speed `wind_speed` in m/s, and roughness length `roughness_length` in m, as well as optionally temperature `temperature` in K, pressure `pressure` in Pa, density `density` in kg/m³ and turbulence intensity `turbulence_intensity` depending on `power_output_model`, `density_model` and `standard_deviation_model` chosen. The columns of the DataFrame are a MultiIndex where the first level contains the variable name (e.g. wind_speed) and the second level contains the height at which it applies (e.g. 10, if it was measured at a height of 10 m). See below for an example on how to create the weather_df DataFrame. Returns ------- self Examples --------- >>> import numpy as np >>> import pandas as pd >>> weather_df = pd.DataFrame(np.random.rand(2,6), ... index=pd.date_range('1/1/2012', ... periods=2, ... freq='H'), ... columns=[np.array(['wind_speed', ... 'wind_speed', ... 'temperature', ... 'temperature', ... 'pressure', ... 'roughness_length']), ... np.array([10, 80, 10, 80, ... 10, 0])]) >>> weather_df.columns.get_level_values(0)[0] 'wind_speed'
4.142877
3.966997
1.044336
r power_coefficient_time_series = np.interp( wind_speed, power_coefficient_curve_wind_speeds, power_coefficient_curve_values, left=0, right=0) power_output = (1 / 8 * density * rotor_diameter ** 2 * np.pi * np.power(wind_speed, 3) * power_coefficient_time_series) # Power_output as pd.Series if wind_speed is pd.Series (else: np.array) if isinstance(wind_speed, pd.Series): power_output = pd.Series(data=power_output, index=wind_speed.index, name='feedin_power_plant') else: power_output = np.array(power_output) return power_output
def power_coefficient_curve(wind_speed, power_coefficient_curve_wind_speeds, power_coefficient_curve_values, rotor_diameter, density)
r"""
Calculates the turbine power output using a power coefficient curve.

This function is carried out when the parameter `power_output_model` of
an instance of the :class:`~.modelchain.ModelChain` class is
'power_coefficient_curve'.

Parameters
----------
wind_speed : pandas.Series or numpy.array
    Wind speed at hub height in m/s.
power_coefficient_curve_wind_speeds : pandas.Series or numpy.array
    Wind speeds in m/s for which the power coefficients are provided in
    `power_coefficient_curve_values`.
power_coefficient_curve_values : pandas.Series or numpy.array
    Power coefficients corresponding to wind speeds in
    `power_coefficient_curve_wind_speeds`.
rotor_diameter : float
    Rotor diameter in m.
density : pandas.Series or numpy.array
    Density of air at hub height in kg/m³.

Returns
-------
pandas.Series or numpy.array
    Electrical power output of the wind turbine in W.
    Data type depends on type of `wind_speed`.

Notes
-----
The following equation is used [1]_, [2]_:

.. math:: P=\frac{1}{8}\cdot\rho_{hub}\cdot d_{rotor}^{2}
    \cdot\pi\cdot v_{wind}^{3}\cdot cp\left(v_{wind}\right)

with:
    P: power [W], :math:`\rho`: density [kg/m³], d: diameter [m],
    v: wind speed [m/s], cp: power coefficient

It is assumed that the power output for wind speeds above the maximum
and below the minimum wind speed given in the power coefficient curve
is zero.

References
----------
.. [1] Gasch, R., Twele, J.: "Windkraftanlagen". 6. Auflage, Wiesbaden,
    Vieweg + Teubner, 2010, pages 35ff, 208
.. [2] Hau, E.: "Windkraftanlagen - Grundlagen, Technik, Einsatz,
    Wirtschaftlichkeit". 4. Auflage, Springer-Verlag, 2008, p. 542
2.840948
2.696101
1.053725
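A worked instance of the cp-curve equation for a single wind speed, with synthetic curve points; windpowerlib.power_output is the module this function lives in::

    import numpy as np
    from windpowerlib import power_output

    p = power_output.power_coefficient_curve(
        wind_speed=np.array([10.0]),  # m/s at hub height
        power_coefficient_curve_wind_speeds=np.array([4.0, 10.0, 16.0]),
        power_coefficient_curve_values=np.array([0.30, 0.45, 0.40]),
        rotor_diameter=100,           # m
        density=np.array([1.225]))    # kg/m3
    # P = 1/8 * 1.225 * 100**2 * pi * 10**3 * 0.45, roughly 2.16e6 W
    print(p)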
r if density_correction is False: power_output = np.interp(wind_speed, power_curve_wind_speeds, power_curve_values, left=0, right=0) # Power_output as pd.Series if wind_speed is pd.Series (else: np.array) if isinstance(wind_speed, pd.Series): power_output = pd.Series(data=power_output, index=wind_speed.index, name='feedin_power_plant') else: power_output = np.array(power_output) elif density_correction is True: power_output = power_curve_density_correction( wind_speed, power_curve_wind_speeds, power_curve_values, density) else: raise TypeError("'{0}' is an invalid type. ".format(type( density_correction)) + "`density_correction` must " + "be Boolean (True or False).") return power_output
def power_curve(wind_speed, power_curve_wind_speeds, power_curve_values, density=None, density_correction=False)
r""" Calculates the turbine power output using a power curve. This function is carried out when the parameter `power_output_model` of an instance of the :class:`~.modelchain.ModelChain` class is 'power_curve'. If the parameter `density_correction` is True the density corrected power curve (See :py:func:`~.power_curve_density_correction`) is used. Parameters ---------- wind_speed : pandas.Series or numpy.array Wind speed at hub height in m/s. power_curve_wind_speeds : pandas.Series or numpy.array Wind speeds in m/s for which the power curve values are provided in `power_curve_values`. power_curve_values : pandas.Series or numpy.array Power curve values corresponding to wind speeds in `power_curve_wind_speeds`. density : pandas.Series or numpy.array Density of air at hub height in kg/m³. This parameter is needed if `density_correction` is True. Default: None. density_correction : boolean If the parameter is True the density corrected power curve is used for the calculation of the turbine power output. In this case `density` cannot be None. Default: False. Returns ------- pandas.Series or numpy.array Electrical power output of the wind turbine in W. Data type depends on type of `wind_speed`. Notes ------- It is assumed that the power output for wind speeds above the maximum and below the minimum wind speed given in the power curve is zero.
2.772955
2.779042
0.99781
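A minimal usage sketch without density correction; the curve points are synthetic, and values outside the curve's wind speed range are clipped to zero::

    import numpy as np
    from windpowerlib import power_output

    p = power_output.power_curve(
        wind_speed=np.array([2.0, 7.5, 30.0]),  # 2 and 30 m/s lie outside
        power_curve_wind_speeds=np.array([3.0, 6.0, 9.0, 12.0]),
        power_curve_values=np.array([0.0, 5.0e5, 2.0e6, 3.0e6]))
    print(p)  # [0.0, 1.25e6, 0.0]: zero below cut-in and above cut-out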
r if density is None: raise TypeError("`density` is None. For the calculation with a " + "density corrected power curve density at hub " + "height is needed.") power_output = [(np.interp( wind_speed[i], power_curve_wind_speeds * (1.225 / density[i]) ** ( np.interp(power_curve_wind_speeds, [7.5, 12.5], [1/3, 2/3])), power_curve_values, left=0, right=0)) for i in range(len(wind_speed))] # Power_output as pd.Series if wind_speed is pd.Series (else: np.array) if isinstance(wind_speed, pd.Series): power_output = pd.Series(data=power_output, index=wind_speed.index, name='feedin_power_plant') else: power_output = np.array(power_output) return power_output
def power_curve_density_correction(wind_speed, power_curve_wind_speeds, power_curve_values, density)
r"""
Calculates the turbine power output using a density corrected power
curve.

This function is carried out when the parameter `density_correction` of
an instance of the :class:`~.modelchain.ModelChain` class is True.

Parameters
----------
wind_speed : pandas.Series or numpy.array
    Wind speed at hub height in m/s.
power_curve_wind_speeds : pandas.Series or numpy.array
    Wind speeds in m/s for which the power curve values are provided in
    `power_curve_values`.
power_curve_values : pandas.Series or numpy.array
    Power curve values corresponding to wind speeds in
    `power_curve_wind_speeds`.
density : pandas.Series or numpy.array
    Density of air at hub height in kg/m³.

Returns
-------
pandas.Series or numpy.array
    Electrical power output of the wind turbine in W.
    Data type depends on type of `wind_speed`.

Notes
-----
The following equation is used for the site specific power curve wind
speeds [1]_, [2]_, [3]_:

.. math:: v_{site}=v_{std}\cdot\left(\frac{\rho_0}
    {\rho_{site}}\right)^{p(v)}

with:

.. math:: p=\begin{cases}
    \frac{1}{3} & v_{std} \leq 7.5\text{ m/s}\\
    \frac{1}{15}\cdot v_{std}-\frac{1}{6} & 7.5
    \text{ m/s}<v_{std}<12.5\text{ m/s}\\
    \frac{2}{3} & v_{std} \geq 12.5 \text{ m/s}
    \end{cases},

v: wind speed [m/s], :math:`\rho`: density [kg/m³]

:math:`v_{std}` is the standard wind speed in the power curve
(:math:`v_{std}`, :math:`P_{std}`), :math:`v_{site}` is the density
corrected wind speed for the power curve (:math:`v_{site}`,
:math:`P_{std}`), :math:`\rho_0` is the ambient density (1.225 kg/m³)
and :math:`\rho_{site}` the density at site conditions (and hub height).

It is assumed that the power output for wind speeds above the maximum
and below the minimum wind speed given in the power curve is zero.

References
----------
.. [1] Svenningsen, L.: "Power Curve Air Density Correction And Other
    Power Curve Options in WindPRO". 1st edition, Aalborg,
    EMD International A/S , 2010, p. 4
.. [2] Svenningsen, L.: "Proposal of an Improved Power Curve Correction".
    EMD International A/S , 2010
.. [3] Biank, M.: "Methodology, Implementation and Validation of a
    Variable Scale Simulation Model for Windpower based on the
    Georeferenced Installation Register of Germany". Master's Thesis
    at Reiner Lemoine Institute, 2014, p. 13
4.057724
3.611591
1.123528