leetuan023 committed on
Commit
3a18d83
·
verified ·
1 Parent(s): ca4f196

Upload 4 files

Files changed (4)
  1. PKG-INFO +128 -0
  2. README.md +111 -12
  3. setup.cfg +7 -0
  4. setup.py +31 -0
PKG-INFO ADDED
@@ -0,0 +1,128 @@
+ Metadata-Version: 2.1
+ Name: videocr
+ Version: 0.1.6
+ Summary: Extract hardcoded subtitles from videos using machine learning
+ Home-page: https://github.com/apm1467/videocr
+ Author: Yi Ge
+ Author-email: [email protected]
+ License: MIT
+ Download-URL: https://github.com/apm1467/videocr/archive/v0.1.6.tar.gz
+ Description: # videocr
+
+ Extract hardcoded (burned-in) subtitles from videos using the [Tesseract](https://github.com/tesseract-ocr/tesseract) OCR engine with Python.
+
+ Input a video with hardcoded subtitles:
+
+ <p float="left">
+ <img width="430" alt="screenshot" src="https://user-images.githubusercontent.com/10210967/56873658-3b76dd00-6a34-11e9-95c6-cd6edc721f58.png">
+ <img width="430" alt="screenshot" src="https://user-images.githubusercontent.com/10210967/56873659-3b76dd00-6a34-11e9-97aa-2c3e96fe3a97.png">
+ </p>
+
+ ```python
+ # example.py
+
+ from videocr import get_subtitles
+
+ if __name__ == '__main__':  # This check is mandatory for Windows.
+     print(get_subtitles('video.mp4', lang='chi_sim+eng', sim_threshold=70, conf_threshold=65))
+ ```
+
+ `$ python3 example.py`
+
+ Output:
+
+ ```
+ 0
+ 00:00:01,042 --> 00:00:02,877
+ 喝 点 什么 ?
+ What can I get you?
+
+ 1
+ 00:00:03,044 --> 00:00:05,463
+ 我 不 知道
+ Um, I'm not sure.
+
+ 2
+ 00:00:08,091 --> 00:00:10,635
+ 休闲 时 光 …
+ For relaxing times, make it...
+
+ 3
+ 00:00:10,677 --> 00:00:12,595
+ 三 得 利 时 光
+ Bartender, Bob Suntory time.
+
+ 4
+ 00:00:14,472 --> 00:00:17,142
+ 我 要 一 杯 伏特 加
+ Un, I'll have a vodka tonic.
+
+ 5
+ 00:00:18,059 --> 00:00:19,019
+ 谢谢
+ Laughs Thanks.
+ ```
+
+ ## Performance
+
+ The OCR process is CPU-intensive. On my dual-core laptop it takes about 3 minutes to process a 20-second video; more CPU cores make it faster.
+
+ ## Installation
+
+ 1. Install [Tesseract](https://github.com/tesseract-ocr/tesseract/wiki) and make sure it is in your `$PATH`.
+
+ 2. `$ pip install videocr`
+
+ ## API
+
+ 1. Return the subtitle string in SRT format:
+ ```python
+ get_subtitles(
+     video_path: str, lang='eng', time_start='0:00', time_end='',
+     conf_threshold=65, sim_threshold=90, use_fullframe=False)
+ ```
+
+ 2. Write subtitles to `file_path`:
+ ```python
+ save_subtitles_to_file(
+     video_path: str, file_path='subtitle.srt', lang='eng', time_start='0:00', time_end='',
+     conf_threshold=65, sim_threshold=90, use_fullframe=False)
+ ```
+
+ ### Parameters
+
+ - `lang`
+
+   The language of the subtitles. You can extract subtitles in almost any language: all language codes on [this page](https://github.com/tesseract-ocr/tesseract/wiki/Data-Files#data-files-for-version-400-november-29-2016) (e.g. `'eng'` for English) and all script names in [this repository](https://github.com/tesseract-ocr/tessdata_fast/tree/master/script) (e.g. `'HanS'` for simplified Chinese) are supported.
+
+   You can also combine languages, e.g. `lang='hin+eng'` for Hindi and English together.
+
+   Language files are downloaded automatically to `~/tessdata`. You can read more about Tesseract language data files on the [wiki page](https://github.com/tesseract-ocr/tesseract/wiki/Data-Files).
+
+ - `conf_threshold`
+
+   Confidence threshold for word predictions. Words with a confidence lower than this value are discarded. The default value `65` is fine for most cases.
+
+   Move it closer to 0 if you get too few words per line, or closer to 100 if lines contain too many spurious words.
+
+ - `sim_threshold`
+
+   Similarity threshold for subtitle lines. Subtitle lines whose [Levenshtein](https://en.wikipedia.org/wiki/Levenshtein_distance) ratio exceeds this threshold are merged together. The default value `90` is fine for most cases.
+
+   Move it closer to 0 if you get too many duplicated subtitle lines, or closer to 100 if you get too few.
+
+ - `time_start` and `time_end`
+
+   Extract subtitles from only a clip of the video. Subtitle timestamps are still calculated relative to the full video length.
+
+ - `use_fullframe`
+
+   By default, only the bottom half of each frame is used for OCR. Set this to `True` if your subtitles are not within the bottom half of the frame.
+
+ Platform: UNKNOWN
+ Classifier: Development Status :: 3 - Alpha
+ Classifier: Intended Audience :: Developers
+ Classifier: License :: OSI Approved :: MIT License
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Programming Language :: Python :: 3.7
+ Description-Content-Type: text/markdown
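The `sim_threshold` merging rule described in the parameters above can be sketched as follows. This is an illustration, not the library's actual code (videocr depends on fuzzywuzzy and python-Levenshtein for this), and `levenshtein_ratio` here uses one common max-length definition of the ratio:

```python
# Sketch of the sim_threshold rule: two subtitle lines are treated as the
# same line when their Levenshtein similarity ratio exceeds the threshold.

def levenshtein_distance(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance, one row at a time.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def levenshtein_ratio(a: str, b: str) -> float:
    # Similarity in [0, 100]; 100 means identical strings.
    if not a and not b:
        return 100.0
    return 100.0 * (1 - levenshtein_distance(a, b) / max(len(a), len(b)))

def should_merge(line_a: str, line_b: str, sim_threshold: int = 90) -> bool:
    return levenshtein_ratio(line_a, line_b) >= sim_threshold

print(should_merge("What can I get you?", "What can I get you?"))  # True
print(should_merge("What can I get you?", "Um, I'm not sure."))    # False
```

With the default `sim_threshold=90`, only near-identical OCR readings of consecutive frames collapse into one subtitle line; lowering it merges more aggressively.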
README.md CHANGED
@@ -1,12 +1,111 @@
- ---
- title: Videoocr2
- emoji: 💻
- colorFrom: pink
- colorTo: green
- sdk: gradio
- sdk_version: 4.44.0
- app_file: app.py
- pinned: false
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # videocr
+
+ Extract hardcoded (burned-in) subtitles from videos using the [Tesseract](https://github.com/tesseract-ocr/tesseract) OCR engine with Python.
+
+ Input a video with hardcoded subtitles:
+
+ <p float="left">
+ <img width="430" alt="screenshot" src="https://user-images.githubusercontent.com/10210967/56873658-3b76dd00-6a34-11e9-95c6-cd6edc721f58.png">
+ <img width="430" alt="screenshot" src="https://user-images.githubusercontent.com/10210967/56873659-3b76dd00-6a34-11e9-97aa-2c3e96fe3a97.png">
+ </p>
+
+ ```python
+ # example.py
+
+ from videocr import get_subtitles
+
+ if __name__ == '__main__':  # This check is mandatory for Windows.
+     print(get_subtitles('video.mp4', lang='chi_sim+eng', sim_threshold=70, conf_threshold=65))
+ ```
+
+ `$ python3 example.py`
+
+ Output:
+
+ ```
+ 0
+ 00:00:01,042 --> 00:00:02,877
+ 喝 点 什么 ?
+ What can I get you?
+
+ 1
+ 00:00:03,044 --> 00:00:05,463
+ 我 不 知道
+ Um, I'm not sure.
+
+ 2
+ 00:00:08,091 --> 00:00:10,635
+ 休闲 时 光 …
+ For relaxing times, make it...
+
+ 3
+ 00:00:10,677 --> 00:00:12,595
+ 三 得 利 时 光
+ Bartender, Bob Suntory time.
+
+ 4
+ 00:00:14,472 --> 00:00:17,142
+ 我 要 一 杯 伏特 加
+ Un, I'll have a vodka tonic.
+
+ 5
+ 00:00:18,059 --> 00:00:19,019
+ 谢谢
+ Laughs Thanks.
+ ```
+
+ ## Performance
+
+ The OCR process is CPU-intensive. On my dual-core laptop it takes about 3 minutes to process a 20-second video; more CPU cores make it faster.
+
+ ## Installation
+
+ 1. Install [Tesseract](https://github.com/tesseract-ocr/tesseract/wiki) and make sure it is in your `$PATH`.
+
+ 2. `$ pip install videocr`
+
+ ## API
+
+ 1. Return the subtitle string in SRT format:
+ ```python
+ get_subtitles(
+     video_path: str, lang='eng', time_start='0:00', time_end='',
+     conf_threshold=65, sim_threshold=90, use_fullframe=False)
+ ```
+
+ 2. Write subtitles to `file_path`:
+ ```python
+ save_subtitles_to_file(
+     video_path: str, file_path='subtitle.srt', lang='eng', time_start='0:00', time_end='',
+     conf_threshold=65, sim_threshold=90, use_fullframe=False)
+ ```
+
+ ### Parameters
+
+ - `lang`
+
+   The language of the subtitles. You can extract subtitles in almost any language: all language codes on [this page](https://github.com/tesseract-ocr/tesseract/wiki/Data-Files#data-files-for-version-400-november-29-2016) (e.g. `'eng'` for English) and all script names in [this repository](https://github.com/tesseract-ocr/tessdata_fast/tree/master/script) (e.g. `'HanS'` for simplified Chinese) are supported.
+
+   You can also combine languages, e.g. `lang='hin+eng'` for Hindi and English together.
+
+   Language files are downloaded automatically to `~/tessdata`. You can read more about Tesseract language data files on the [wiki page](https://github.com/tesseract-ocr/tesseract/wiki/Data-Files).
+
+ - `conf_threshold`
+
+   Confidence threshold for word predictions. Words with a confidence lower than this value are discarded. The default value `65` is fine for most cases.
+
+   Move it closer to 0 if you get too few words per line, or closer to 100 if lines contain too many spurious words.
+
+ - `sim_threshold`
+
+   Similarity threshold for subtitle lines. Subtitle lines whose [Levenshtein](https://en.wikipedia.org/wiki/Levenshtein_distance) ratio exceeds this threshold are merged together. The default value `90` is fine for most cases.
+
+   Move it closer to 0 if you get too many duplicated subtitle lines, or closer to 100 if you get too few.
+
+ - `time_start` and `time_end`
+
+   Extract subtitles from only a clip of the video. Subtitle timestamps are still calculated relative to the full video length.
+
+ - `use_fullframe`
+
+   By default, only the bottom half of each frame is used for OCR. Set this to `True` if your subtitles are not within the bottom half of the frame.
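The SRT output shown in the README above uses `HH:MM:SS,mmm` timestamps separated by `-->`. A minimal sketch of that formatting, using hypothetical helper names rather than videocr's actual API:

```python
# Hypothetical helpers (not part of videocr) showing how SRT blocks like
# the example output are laid out: index, time range, then the text.

def srt_timestamp(seconds: float) -> str:
    # Convert seconds to the SRT 'HH:MM:SS,mmm' form.
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f'{h:02d}:{m:02d}:{s:02d},{ms:03d}'

def srt_block(index: int, start: float, end: float, text: str) -> str:
    # One numbered subtitle entry; entries are separated by blank lines.
    return f'{index}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n'

print(srt_block(0, 1.042, 2.877, 'What can I get you?'))
# 0
# 00:00:01,042 --> 00:00:02,877
# What can I get you?
```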
setup.cfg ADDED
@@ -0,0 +1,7 @@
+ [metadata]
+ description-file = README.md
+
+ [egg_info]
+ tag_build =
+ tag_date = 0
+
setup.py ADDED
@@ -0,0 +1,31 @@
+ from setuptools import setup
+
+ with open('README.md', 'r', encoding='utf-8') as f:
+     readme = f.read()
+
+ setup(
+     name='videocr',
+     packages=['videocr'],
+     version='0.1.6',
+     license='MIT',
+     description='Extract hardcoded subtitles from videos using machine learning',
+     long_description_content_type='text/markdown',
+     long_description=readme,
+     author='Yi Ge',
+     author_email='[email protected]',
+     url='https://github.com/apm1467/videocr',
+     download_url='https://github.com/apm1467/videocr/archive/v0.1.6.tar.gz',
+     install_requires=[
+         'fuzzywuzzy>=0.17',
+         'python-Levenshtein>=0.12',
+         'opencv-python>=4.1,<5.0',
+         'pytesseract>=0.2.6'
+     ],
+     classifiers=[
+         'Development Status :: 3 - Alpha',
+         'Intended Audience :: Developers',
+         'License :: OSI Approved :: MIT License',
+         'Programming Language :: Python :: 3',
+         'Programming Language :: Python :: 3.7',
+     ],
+ )
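The `time_start`/`time_end` parameters documented in the README above take clock-style strings such as `'0:00'`. A sketch of how such strings could map to seconds; this is an illustration only, not videocr's actual parser:

```python
# Hypothetical helper, not videocr's code: interpret 'SS', 'M:SS', or
# 'H:MM:SS' strings as a number of seconds; an empty string means "unset".

def parse_time(ts: str) -> float:
    if not ts:
        return 0.0
    seconds = 0.0
    for part in ts.split(':'):
        # Each colon shifts the accumulated value up one time unit.
        seconds = seconds * 60 + float(part)
    return seconds

print(parse_time('0:00'))     # 0.0
print(parse_time('1:23:45'))  # 5025.0
```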