Metadata-Version: 2.1
Name: videocr
Version: 0.1.6
Summary: Extract hardcoded subtitles from videos using machine learning
Home-page: https://github.com/apm1467/videocr
Author: Yi Ge
Author-email: me@yige.ch
License: MIT
Download-URL: https://github.com/apm1467/videocr/archive/v0.1.6.tar.gz
Description: # videocr
        
        Extract hardcoded (burned-in) subtitles from videos using the [Tesseract](https://github.com/tesseract-ocr/tesseract) OCR engine with Python.
        
        Input a video with hardcoded subtitles:
        
        <p float="left">
          <img width="430" alt="screenshot" src="https://user-images.githubusercontent.com/10210967/56873658-3b76dd00-6a34-11e9-95c6-cd6edc721f58.png">
          <img width="430" alt="screenshot" src="https://user-images.githubusercontent.com/10210967/56873659-3b76dd00-6a34-11e9-97aa-2c3e96fe3a97.png">
        </p>
        
        ```python
        # example.py
        
        from videocr import get_subtitles
        
        if __name__ == '__main__':  # This check is mandatory for Windows.
            print(get_subtitles('video.mp4', lang='chi_sim+eng', sim_threshold=70, conf_threshold=65))
        ```
        
        `$ python3 example.py`
        
        Output:
        
        ``` 
        0
        00:00:01,042 --> 00:00:02,877
        喝 点 什么 ? 
        What can I get you?
        
        1
        00:00:03,044 --> 00:00:05,463
        我 不 知道
        Um, I'm not sure.
        
        2
        00:00:08,091 --> 00:00:10,635
        休闲 时 光 …
        For relaxing times, make it...
        
        3
        00:00:10,677 --> 00:00:12,595
        三 得 利 时 光
        Bartender, Bob Suntory time.
        
        4
        00:00:14,472 --> 00:00:17,142
        我 要 一 杯 伏特 加
        Un, I'll have a vodka tonic.
        
        5
        00:00:18,059 --> 00:00:19,019
        谢谢
        Laughs Thanks.
        ```
        
        ## Performance
        
        The OCR process is CPU-intensive. It takes about 3 minutes on my dual-core laptop to extract subtitles from a 20-second video. More CPU cores will make it faster.
        
        ## Installation
        
        1. Install [Tesseract](https://github.com/tesseract-ocr/tesseract/wiki) and make sure it is in your `$PATH`
        
        2. `$ pip install videocr`
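
        You can verify that Tesseract is reachable before running videocr. A minimal sketch (the script name is a placeholder and not part of the package; `shutil.which` simply looks the `tesseract` binary up on your `$PATH`):

        ```python
        # check_tesseract.py -- hypothetical helper, not part of videocr
        import shutil

        # shutil.which returns the absolute path of the executable, or None if it is not on $PATH
        tesseract_path = shutil.which('tesseract')
        if tesseract_path is None:
            raise SystemExit('tesseract not found on $PATH -- see step 1 above')
        print('Found Tesseract at', tesseract_path)
        ```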
        
        ## API
        
        1. Return the subtitles as a string in SRT format
            ```python
            get_subtitles(
                video_path: str, lang='eng', time_start='0:00', time_end='',
                conf_threshold=65, sim_threshold=90, use_fullframe=False)
            ```
        
        2. Write subtitles to `file_path`
            ```python
            save_subtitles_to_file(
                video_path: str, file_path='subtitle.srt', lang='eng', time_start='0:00', time_end='',
                conf_threshold=65, sim_threshold=90, use_fullframe=False)
            ```
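
        For example, to write the recognized subtitles straight to an SRT file instead of printing them (a minimal sketch; the video and output file names are placeholders):

        ```python
        # save_example.py
        from videocr import save_subtitles_to_file

        if __name__ == '__main__':  # this check is mandatory on Windows, as above
            # writes the result to my_subtitle.srt using only the parameters documented below
            save_subtitles_to_file('video.mp4', file_path='my_subtitle.srt',
                                   lang='eng', conf_threshold=65, sim_threshold=90)
        ```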
        
        ### Parameters
        
        - `lang`
        
          The language of the subtitles. You can extract subtitles in almost any language. All language codes on [this page](https://github.com/tesseract-ocr/tesseract/wiki/Data-Files#data-files-for-version-400-november-29-2016) (e.g. `'eng'` for English) and all script names in [this repository](https://github.com/tesseract-ocr/tessdata_fast/tree/master/script) (e.g. `'HanS'` for simplified Chinese) are supported.
          
          Note that you can use more than one language, e.g. `lang='hin+eng'` for Hindi and English together. 
          
          Language files will be automatically downloaded to your `~/tessdata`. You can read more about Tesseract language data files on their [wiki page](https://github.com/tesseract-ocr/tesseract/wiki/Data-Files).
        
        - `conf_threshold`
        
          Confidence threshold for word predictions. Words with lower confidence than this value will be discarded. The default value `65` is fine for most cases. 
        
          Lower it toward 0 if you get too few words in each line, or raise it toward 100 if lines contain too many spurious words.
        
        - `sim_threshold`
        
          Similarity threshold for subtitle lines. Subtitle lines whose [Levenshtein](https://en.wikipedia.org/wiki/Levenshtein_distance) ratio is above this threshold will be merged together. The default value `90` is fine for most cases.
        
          Lower it toward 0 if you get too many duplicated subtitle lines, or raise it toward 100 if you get too few subtitle lines.
        
        - `time_start` and `time_end`
        
          Extract subtitles from only a clip of the video. The subtitle timestamps are still calculated according to the full video length.
        
        - `use_fullframe`
        
          By default, only the bottom half of each frame is used for OCR. Set this to `True` to use the full frame if your subtitles are not within the bottom half of each frame.
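
        Putting these parameters together, a hedged sketch (the file name, language combination, and time range are placeholders, not part of the package):

        ```python
        # clip_example.py
        from videocr import get_subtitles

        if __name__ == '__main__':  # this check is mandatory on Windows
            # OCR only the 1:00 to 2:30 clip; output timestamps still refer to the full video
            srt = get_subtitles(
                'video.mp4',
                lang='HanS+eng',      # simplified Chinese script plus English
                time_start='1:00',
                time_end='2:30',
                conf_threshold=65,    # discard words Tesseract is unsure about
                sim_threshold=90,     # merge near-duplicate subtitle lines
                use_fullframe=True,   # subtitles here are not in the bottom half
            )
            print(srt)
        ```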
        
Platform: UNKNOWN
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.7
Description-Content-Type: text/markdown