Anchor annotator documentation
==============================
.. warning::
Anchor is in early alpha; there are many rough edges and krakens abound. Please use version control (e.g. git) for your files in case anything goes destructively wrong. Refer to :ref:`known_issues` if you encounter anything weird; otherwise you can file a `GitHub issue <https://github.com/MontrealCorpusTools/Anchor-annotator/issues>`_ via the :fas:`bug` button in Anchor.
.. grid:: 2
.. grid-item-card:: Getting started
:text-align: center
:fas:`running;fa-6x i-navigation`
^^^
Install Anchor and its dependencies
+++
.. button-ref:: getting_started
:expand:
:color: primary
Install Anchor
.. grid-item-card:: First steps
:text-align: center
:fas:`terminal;fa-6x i-navigation`
^^^
Have a particular use case for Anchor?
Check out the first steps tutorials.
+++
.. button-ref:: first_steps
:expand:
:color: primary
First steps
.. grid-item-card:: User guide
:text-align: center
:fas:`book-open;fa-6x i-navigation`
^^^
The User Guide gives more details on the various functions in Anchor.
+++
.. button-ref:: user_guide
:expand:
:color: primary
User guide
.. grid-item-card:: API reference
:text-align: center
:fas:`file-code;fa-6x i-navigation`
^^^
The API guide contains documentation for each aspect of Anchor.
+++
.. button-ref:: anchor_api
:expand:
:color: primary
Reference guide
.. toctree::
:hidden:
Getting started <getting_started.rst>
User guide <user_guide/index.rst>
API reference <reference/index.rst>
Changelog <changelog/index.rst>
Attribution
===========
The Anchor Annotator uses icons from `FontAwesome <https://fontawesome.com/>`_.
************
Installation
************
All platforms
=============
1. Install `Miniconda <https://docs.conda.io/en/latest/miniconda.html>`_ (see the `Conda installation guide <https://docs.conda.io/projects/conda/en/latest/user-guide/install/index.html>`_)
2. Create a new environment and install Anchor: :code:`conda create -n anchor -c conda-forge anchor-annotator`
a. You can make the :code:`conda-forge` channel the default by running :code:`conda config --add channels conda-forge`, which lets you omit :code:`-c conda-forge` from these commands
3. Ensure the new environment is activated (:code:`conda activate anchor`)
4. Verify Anchor launches via :code:`mfa anchor`
.. warning::
See :ref:`known_issues` if you encounter any errors.
.. _key_bindings:
****************************
Keyboard and mouse shortcuts
****************************
Keyboard
========
.. tip::
You can change most of the keyboard shortcuts via the settings in :doc:`configuration`.
.. csv-table::
:header: "Function", "Default keyboard shortcut"
"Play audio", "Tab"
"Zoom in", "Ctrl+I"
"Zoom out", "Ctrl+O"
"Pan left", "Left arrow"
"Pan right", "Right arrow"
"Merge utterances", "Ctrl+M"
"Split utterances", "Ctrl+S"
"Delete utterances", "Del"
"Save current file", "By default not bound, but can be set"
Mouse
=====
.. csv-table::
:header: "Function", "Mouse shortcut"
"Pan towards beginning of the sound file", "Mousewheel up"
"Pan towards end of the sound file", "Mousewheel down"
"Zoom in", "Ctrl+Mousewheel up"
"Zoom out", "Ctrl+Mousewheel down"
"Create new segment", "Double click on empty area in speaker tier"
"Change speaker in file", "Click and drag interval to another speaker tier"
"Select audio region", "Click and drag on waveform/spectrogram"
"Set start time for playback", "Click on waveform/spectrogram"
.. _known_issues:
************
Known issues
************
Launching Anchor
================
.. error::
:code:`This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.`
.. tip::
Set the environment variable :code:`QT_PLUGIN_PATH` to the Qt plugin directory inside your conda environment, e.g. :code:`QT_PLUGIN_PATH=C:\Users\michael\miniconda3\envs\anchor\Library\lib\qt6\plugins` (adjust the path to your own installation).
* `Bash <https://www.howtogeek.com/668503/how-to-set-environment-variables-in-bash-on-linux/>`_
* `Mac OSX <https://support.apple.com/guide/terminal/use-environment-variables-apd382cc5fa-4f58-4449-b20a-41c53c006f8f/mac>`_
* `Windows command line <https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/set_1>`_
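As a sketch, setting the variable on Linux or macOS might look like the following; the path is illustrative, so substitute the location of your own conda environment:

```shell
# Illustrative path -- substitute your own conda environment's location.
export QT_PLUGIN_PATH="$HOME/miniconda3/envs/anchor/Library/lib/qt6/plugins"
echo "$QT_PLUGIN_PATH"
```

On Windows, the equivalent is :code:`set QT_PLUGIN_PATH=...` in the command prompt.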
.. _basic_navigation:
Basic navigation
================
.. warning::
This section is horrendously out of date, sorry! I will update it soon!
Fixing out of vocabulary issues
-------------------------------
Once the corpus is loaded with a dictionary, utterances in the corpus will be parsed for whether they contain
an out of vocabulary (OOV) word. If they do, they will be marked in that column on the left with a red cell
(see number :code:`2` below).
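The OOV check described above can be sketched as a simple set lookup; this is illustrative only, not Anchor's internal code:

```python
# Minimal sketch of an OOV check: flag any word in an utterance's
# transcript that is missing from the pronunciation dictionary.

def find_oovs(transcript, dictionary_words):
    """Return the words in the transcript not found in the dictionary."""
    return [w for w in transcript.lower().split() if w not in dictionary_words]

lexicon = {"the", "cat", "sat"}
print(find_oovs("the cat sat", lexicon))   # []
print(find_oovs("the catt sat", lexicon))  # ['catt']
```

An utterance would then be marked with a red cell whenever this list is non-empty.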
To fix a transcript, click on the utterance in the table. This will bring up a detail view of the utterance,
with a waveform window above and the transcript in the text field. Clicking the ``Play`` button (or ``Tab`` by default)
will allow you to listen to the audio. Pressing the ``Save current file`` button (see number :code:`10` below) will save the
utterance text to the .lab/.txt file or update the interval in the TextGrid.
.. warning::
Clicking ``Save`` will overwrite the source file loaded, so use this software with caution.
Backing up your data and/or using version control is recommended to ensure that any data loss
during corpus creation is minimized.
If the word causing the OOV warning is in fact a word you would like aligned, you can right click on
the word and select ``Add pronunciation for 'X'`` if a G2P model is loaded (see number :code:`7` below). This will run the G2P
model to generate a pronunciation in the dictionary which can then be modified if necessary and the dictionary
can be saved via the ``Save dictionary`` button. You can also look up any word in the pronunciation
dictionary by right clicking and selecting ``Look up 'X' in dictionary``. Any pronunciation can be modified
and saved. The ``Reset dictionary`` button will discard any changes made to the dictionary.
Fixing utterances
-----------------
.. figure:: ../_static/dictionary_annotation.png
:align: center
:alt: Image cannot be displayed in your browser
The file you want to fix up can be selected via the dropdown in the top left (number :code:`1` above).
For fixing up intervals, you can select segments in the left table (number :code:`2` above), or by clicking on
intervals in the plot window (i.e., number :code:`5` above).
You can edit the text in the center bottom box (number :code:`6` above), change the speaker via the dropdown next to the
text box (number :code:`12` below), and adjust
boundaries as necessary (green lines associated with number :code:`4` below). If you would like to add a new speaker,
use the :code:`Speaker` tab in the right pane, which also lists utterance counts per speaker (see :code:`13` below).
Entering a speaker name and clicking "Add speaker" (:code:`14` below) makes that speaker available in the dropdown.
Single segments can be split via a keyboard shortcut (by default :code:`Ctrl+S`, but this can be changed, see
:ref:`configure_annotator` for more details). This will create two segments from one, split at the midpoint, but with all
the text in the first segment.
Multiple segments can be selected by holding :code:`Ctrl` (with selections shown in the left pane, though not in the waveform panel),
and can be merged into single
segments via a keyboard shortcut (by default :code:`Ctrl+M`, but this can be changed, see :ref:`configure_annotator`
for more details). Any number of segments can be selected this way, and the resulting merged segment will concatenate
the transcriptions for them all. Be cautious about creating overly long utterances: alignment generally performs
better on shorter utterances, and breath pauses often make good segment boundaries when they are visible on the
waveform.
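The split and merge behavior described above can be sketched as follows; the functions and tuple layout are illustrative, not Anchor's actual implementation:

```python
# Sketch of split/merge semantics: a split divides a segment at its
# midpoint and keeps all text in the first half; a merge spans the
# selected segments and concatenates their transcriptions.

def split_segment(segment):
    """Split one (begin, end, text) segment at its temporal midpoint."""
    begin, end, text = segment
    mid = (begin + end) / 2
    return [(begin, mid, text), (mid, end, "")]

def merge_segments(segments):
    """Merge selected (begin, end, text) segments into one spanning segment."""
    segments = sorted(segments)
    begin = segments[0][0]
    end = segments[-1][1]
    text = " ".join(s[2] for s in segments if s[2])
    return (begin, end, text)

first, second = split_segment((0.0, 2.0, "hello there"))
print(first)   # (0.0, 1.0, 'hello there')
print(second)  # (1.0, 2.0, '')
print(merge_segments([(0.0, 1.0, "hello"), (1.0, 2.0, "there")]))
```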
.. figure:: ../_static/speaker_annotation.png
:align: center
:alt: Image cannot be displayed in your browser
Segments can be added by double clicking on a speaker's tier (i.e., number :code:`11`); however, creation is disabled if a
segment already exists at that point. Segments can also be deleted via a shortcut (by default :code:`Delete`). There is limited
restore functionality for deleted utterances, via a button on the bottom left.
.. _configure_annotator:
Configuring the annotator
-------------------------
By going to :code:`Preferences` in the :code:`Edit` menu, many aspects of the interface can be changed. The two primary
customizations currently implemented are for the appearance of the waveform/segment window and for keyboard shortcuts.
The current available shortcuts are:
.. csv-table::
:header: "Function", "Default keybind"
"Play audio", "Tab"
"Zoom in", "Ctrl+I"
"Zoom out", "Ctrl+O"
"Pan left", "Left arrow"
"Pan right", "Right arrow"
"Merge utterances", "Ctrl+M"
"Split utterances", "Ctrl+S"
"Delete utterances", "Del"
"Save current file", "By default not bound, but can be set"
"Create new segment", "Double click (currently not rebindable)"
.. _first_steps:
***********
First steps
***********
Use cases
=========
There are several broad use cases that you might want to use Anchor for. Take a look below and if any are close matches, you should be able to apply the linked instructions to your data.
#. **Use case 1: Correcting transcriptions** You want to correct an existing `speech corpus <https://montreal-forced-aligner.readthedocs.io/en/latest/user_guide/corpus_structure.html>`_.
#. Follow :ref:`first_steps_load_corpus`
#. Refer to :ref:`basic_navigation` for browsing the corpus
#. **Use case 2: Validating corpus with dictionary** You have an existing `speech corpus <https://montreal-forced-aligner.readthedocs.io/en/latest/user_guide/corpus_structure.html>`_ and `pronunciation dictionary <https://montreal-forced-aligner.readthedocs.io/en/latest/user_guide/dictionary.html>`_ and want to validate the dictionary's coverage of the corpus.
#. Follow :ref:`first_steps_load_corpus`
#. Follow :ref:`first_steps_load_dictionary`
#. (Optional but helpful) Follow :ref:`first_steps_load_g2p`
#. Follow :ref:`first_steps_oovs`
#. **Use case 3: Segmentation and diarization** You have an existing `speech corpus <https://montreal-forced-aligner.readthedocs.io/en/latest/user_guide/corpus_structure.html>`_ that lacks appropriate speaker metadata and/or utterance boundaries, and you want to get it into a more usable shape for training an acoustic model or aligning the corpus.
#. Follow :ref:`first_steps_load_corpus`
#. Follow :ref:`first_steps_load_dictionary`
#. Follow :ref:`first_steps_load_ivector_extractor`
#. Follow :ref:`first_steps_diarization`
#. **Use case 4: Validating alignments** You have an existing `speech corpus <https://montreal-forced-aligner.readthedocs.io/en/latest/user_guide/corpus_structure.html>`_, `pronunciation dictionary <https://montreal-forced-aligner.readthedocs.io/en/latest/user_guide/dictionary.html>`_, and `acoustic model <https://mfa-models.readthedocs.io/en/latest/acoustic/index.html>`_, but MFA reports many unaligned files or the alignment quality is poor when spot checking.
#. Follow :ref:`first_steps_load_corpus`
#. Follow :ref:`first_steps_load_dictionary`
#. Follow :ref:`first_steps_load_acoustic_model`
#. Follow :ref:`first_steps_alignment`
#. **Use case 5: Generating transcriptions** You have a `speech corpus <https://montreal-forced-aligner.readthedocs.io/en/latest/user_guide/corpus_structure.html>`_ with no transcriptions (or with some utterances missing transcriptions), along with an `acoustic model <https://mfa-models.readthedocs.io/en/latest/acoustic/index.html>`_, `pronunciation dictionary <https://montreal-forced-aligner.readthedocs.io/en/latest/user_guide/dictionary.html>`_, and `language model <https://mfa-models.readthedocs.io/en/latest/language_model/index.html>`_.
#. Follow :ref:`first_steps_load_corpus`
#. Follow :ref:`first_steps_load_dictionary`
#. Follow :ref:`first_steps_load_acoustic_model`
#. Follow :ref:`first_steps_load_language_model`
#. Follow :ref:`first_steps_transcription`
.. _first_steps_load_corpus:
Loading a corpus
----------------
In the Corpus menu, select "Load a corpus" and navigate to the corpus's directory.
.. important::
Only corpora in the format that MFA expects can be properly loaded. See `MFA's corpus format documentation <https://montreal-forced-aligner.readthedocs.io/en/latest/user_guide/corpus_structure.html>`_ for full details.
.. _first_steps_load_dictionary:
Loading a dictionary
--------------------
In the Dictionary menu, select "Load a dictionary" and navigate to the dictionary's path. If you would like to use a pretrained dictionary from `MFA models <https://mfa-models.readthedocs.io/>`_, you can download it via the "Download dictionary" submenu, and then select it from the "Load a saved dictionary" submenu.
.. important::
See `MFA's dictionary format documentation <https://montreal-forced-aligner.readthedocs.io/en/latest/user_guide/dictionary.html>`_ for how a pronunciation dictionary should be formatted if you are loading your dictionary.
.. _first_steps_load_acoustic_model:
Loading an acoustic model
-------------------------
In the Models menu, select "Load acoustic model" and navigate to the acoustic model's path. If you would like to use a pretrained acoustic model from `MFA models <https://mfa-models.readthedocs.io/>`_, you can download it via the "Download acoustic model" submenu, and then select it from the "Load acoustic model" submenu.
.. _first_steps_load_language_model:
Loading a language model
------------------------
In the Models menu, select "Load language model" and navigate to the language model's path. If you would like to use a pretrained language model from `MFA models <https://mfa-models.readthedocs.io/>`_, you can download it via the "Download language model" submenu, and then select it from the "Load language model" submenu.
.. _first_steps_load_g2p:
Loading a G2P model
-------------------
In the Models menu, select "Load G2P model" and navigate to the G2P model's path. If you would like to use a pretrained G2P model from `MFA models <https://mfa-models.readthedocs.io/>`_, you can download it via the "Download G2P model" submenu, and then select it from the "Load G2P model" submenu.
.. _first_steps_load_ivector_extractor:
Loading an ivector extractor
----------------------------
In the Models menu, select "Load ivector extractor" and navigate to the ivector extractor's path. If you would like to use a pretrained ivector extractor from `MFA models <https://mfa-models.readthedocs.io/>`_, you can download it via the "Download ivector extractor" submenu, and then select it from the "Load ivector extractor" submenu.
.. _first_steps_oovs:
Analyzing and improving dictionary coverage
-------------------------------------------
Once a dictionary is loaded (:ref:`first_steps_load_dictionary`), you can go to the "Window" menu and select "Dictionary" and "OOVs". The Dictionary panel will show you all words in the dictionary, their pronunciations, and how many instances were found in the corpus. The OOVs panel will show all the out-of-vocabulary items for words that were in the corpus, but not the pronunciation dictionary. If you double click the counts of an OOV item, the Utterances panel will pop up and show all utterances that have this OOV item.
If you would like to add a pronunciation for an OOV word, you can right click the word either in the utterance's text edit on the main screen, or the word in the OOVs table. If a G2P model is loaded (see :ref:`first_steps_load_g2p`), then the G2P model will provide its best guess for the pronunciation, otherwise the default pronunciation will be blank.
Double clicking any pronunciation in the Dictionary panel lets you edit it, either with direct input or via a pop-up "keyboard" containing all the phone symbols in the dictionary. If a phone symbol is entered that is not present in any other word, the pronunciation cannot be saved; this prevents typos from entering the dictionary.
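The phone-symbol safeguard can be sketched as a membership check against the phones already attested in the dictionary; the function name is illustrative, not Anchor's API:

```python
# Sketch of pronunciation validation: a new pronunciation may only use
# phone symbols already attested elsewhere in the dictionary, so a typo
# in a phone symbol cannot be saved.

def validate_pronunciation(pronunciation, known_phones):
    """Return (is_valid, unknown_phones) for a space-separated pronunciation."""
    unknown = [p for p in pronunciation.split() if p not in known_phones]
    return (len(unknown) == 0, unknown)

known = {"HH", "AH0", "L", "OW1"}
print(validate_pronunciation("HH AH0 L OW1", known))  # (True, [])
print(validate_pronunciation("HH AH0 L OX1", known))  # (False, ['OX1'])
```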
.. _first_steps_diarization:
Improving speaker metadata
--------------------------
If an ivector extractor model is loaded (:ref:`first_steps_load_ivector_extractor`), Anchor can analyze utterance and speaker ivectors for any issues in speaker metadata. Corpora commonly have issues where an utterance belongs to the wrong speaker, two speakers in the corpus are actually the same speaker, or no utterances have speaker information. To begin diarization, go to the "Window" menu and select "Diarization".
In the Diarization panel, ivectors can be extracted via the "Refresh ivectors" button. Once this process completes, you can query speaker ivectors for merging two speakers, or a bulk merge can be performed via "Merge all" with a cosine distance threshold.
.. important::
Choose a threshold by first getting a sense of the distances involved through manually merging speakers, particularly for noisy corpora. I have had reasonable success using 0.15 as a threshold for large-scale merging.
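As a sketch of what threshold-based merging means, two speakers would be merged when the cosine distance between their ivectors falls below the threshold; this is illustrative, not Anchor's internal code:

```python
import math

# Sketch of threshold-based speaker merging: merge two speakers when the
# cosine distance between their ivectors is below a chosen threshold
# (0.15 in the text above).

def cosine_distance(u, v):
    """Cosine distance = 1 - cosine similarity of two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

def should_merge(ivector_a, ivector_b, threshold=0.15):
    return cosine_distance(ivector_a, ivector_b) < threshold

print(should_merge([1.0, 0.0, 0.0], [0.99, 0.05, 0.0]))  # True: nearly parallel
print(should_merge([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))    # False: orthogonal
```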
Additionally, you can use the "Cluster utterances" button to do a wholesale recalculation of speakers. Be warned that this will discard most existing speaker metadata. New speaker clusters will be labeled after the most common original speaker label within each cluster, but otherwise this is a destructive operation within Anchor (though the files on disk remain unchanged until they are exported).
.. note::
See the `MFA documentation on diarize_speakers <https://montreal-forced-aligner.readthedocs.io/en/latest/user_guide/corpus_creation/diarize_speakers.html>`_ for more information on clustering utterances into new speakers.
.. _first_steps_alignment:
Spot-checking alignment
-----------------------
If a pronunciation dictionary and acoustic model are loaded (see :ref:`first_steps_load_dictionary` and :ref:`first_steps_load_acoustic_model`), then Anchor can perform forced alignment using MFA and visually represent the word and phone alignments. To begin alignment, go to the "Window" menu and select "Alignment" to open the Alignment panel.
In the Alignment panel, there are options that can be filled in for beam, retry beam, silence boost factor, and whether to `fine tune alignments <https://montreal-forced-aligner.readthedocs.io/en/latest/user_guide/implementations/fine_tune.html>`_ and `model cutoff tokens <https://montreal-forced-aligner.readthedocs.io/en/latest/user_guide/dictionary.html#modeling-cutoffs-and-hesitations>`_.
Once alignment completes, you can go to the Utterances panel to inspect each utterance. The utterance will have extra tiers below the text tier for the aligned word and phone intervals. You can sort utterances by their log-likelihood per frame. Lower log-likelihood can result from errors in the utterance's transcription or in the pronunciations available in the dictionary. However, it can also result from normal variation in how people speak, either at a speaker level or when a speaker affects a different voice quality (e.g., during storytelling, emphasis, etc.). If an utterance could not be aligned, it will have no log-likelihood.
.. note::
If you have gold alignments in TextGrid form, they can be loaded via "Load reference alignments" in the "Alignment" menu. If these reference alignments have a different phone set than the dictionary you are using, you can load a custom mapping in the "Alignment" menu as well. See `MFA's documentation on alignment evaluation <https://montreal-forced-aligner.readthedocs.io/en/latest/user_guide/implementations/alignment_evaluation.html#alignment-evaluation>`_ for more details.
.. _first_steps_transcription:
Transcribing utterances
-----------------------
Anchor can generate transcriptions for utterances, either for validating their existing transcriptions or for generating new text transcriptions for use in training acoustic models. At a minimum, a pronunciation dictionary and acoustic model must be loaded (see :ref:`first_steps_load_dictionary` and :ref:`first_steps_load_acoustic_model`). If you want to generate new text transcriptions from scratch, a language model must also be loaded (see :ref:`first_steps_load_language_model`); it is optional for validating existing transcriptions. If no language model is loaded, Anchor will generate per-speaker language models from the existing transcriptions (so existing transcriptions are required, even if they are not completely accurate). To begin transcription, go to the "Window" menu and select "Transcription" to open the Transcription panel.
In the Transcription panel, the only option is for specifying the target number of ngrams for the per-speaker language models, which is not applicable if a pretrained language model is loaded.
Once transcription completes, you can go to the Utterances panel to inspect each utterance. The utterance will have an extra tier below the text tier with the transcribed text (with a red background for words that count toward the utterance's word error rate). In the Utterances panel, you can sort by word error rate (WER) and character error rate (CER) to see where the utterance text and transcribed text differ most.
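The WER reported per utterance is the standard edit-distance formulation; a minimal sketch (not Anchor's internal code) over word sequences, where CER is the same computation over characters:

```python
# Sketch of word error rate (WER): the Levenshtein distance between the
# reference and hypothesis word sequences, divided by the reference length.

def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences via dynamic programming."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,        # deletion
                           cur[j - 1] + 1,     # insertion
                           prev[j - 1] + (r != h)))  # substitution/match
        prev = cur
    return prev[-1]

def wer(reference, hypothesis):
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

print(wer("the cat sat", "the cat sat"))  # 0.0
print(wer("the cat sat", "the bat sat"))  # one substitution out of three words
```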
from __future__ import annotations
import logging
import os.path
import re
import typing
from threading import Lock
from typing import Optional
import numpy as np
import pyqtgraph as pg
import sqlalchemy
from Bio import pairwise2
from montreal_forced_aligner.data import CtmInterval, WorkflowType
from montreal_forced_aligner.db import CorpusWorkflow, Speaker, Utterance
from PySide6 import QtCore, QtGui, QtWidgets
from anchor import workers
from anchor.models import (
CorpusModel,
CorpusSelectionModel,
DictionaryTableModel,
SpeakerModel,
TextFilterQuery,
)
from anchor.settings import AnchorSettings
pg.setConfigOption("imageAxisOrder", "row-major") # best performance
pg.setConfigOptions(antialias=True)
logger = logging.getLogger("anchor")
class ClusterLegendItem(pg.ItemSample):
def mouseClickEvent(self, event):
event.ignore()
class ClusterLegend(pg.LegendItem):
changeCluster = QtCore.Signal(object)
def mouseClickEvent(self, event):
"""Use the mouseClick event to toggle the visibility of the plotItem"""
if event.button() == QtCore.Qt.MouseButton.LeftButton:
pos = event.pos()
origin = self.pos()
current_row_top = origin.y()
index = -1
for row in range(self.layout.rowCount()):
item = self.layout.itemAt(row, 0)
if item:
if current_row_top <= pos.y() <= current_row_top + item.height():
self.changeCluster.emit(index)
break
index += 1
current_row_top += item.height()
# self.changeCluster.emit(self.item.)
event.accept()
def mouseDragEvent(self, ev):
ev.ignore()
class ScatterPlot(pg.ScatterPlotItem):
selectPoints = QtCore.Signal(object, object)
def __init__(self, *args, **kwargs):
super(ScatterPlot, self).__init__(*args, **kwargs)
self.selection_area = pg.RectROI((0, 0), (10, 10))
self.selection_area.hide()
self.selection_area.setParentItem(self)
self.distances = None
def mouseDragEvent(self, ev):
if ev.modifiers() in [
QtCore.Qt.KeyboardModifier.ControlModifier,
QtCore.Qt.KeyboardModifier.ShiftModifier,
] and ev.button() in [
QtCore.Qt.MouseButton.LeftButton,
QtCore.Qt.MouseButton.MiddleButton,
]:
ev.accept()
if ev.isFinish():
self.selection_area.hide()
else:
self.selection_area.show()
pos = ev.pos()
start_pos = ev.buttonDownPos()
self.selection_area.setPos(start_pos)
width = pos.x() - start_pos.x()
height = pos.y() - start_pos.y()
self.selection_area.setSize((width, height))
x_series, y_series = self.getData()
selected_indices = []
right = max(pos.x(), start_pos.x())
left = min(pos.x(), start_pos.x())
bottom = min(pos.y(), start_pos.y())
top = max(pos.y(), start_pos.y())
for i, x in enumerate(x_series):
y = y_series[i]
if left <= x <= right and bottom <= y <= top:
selected_indices.append(i)
self.selectPoints.emit(selected_indices, True)
def mouseClickEvent(self, ev):
if (
ev.button() == QtCore.Qt.MouseButton.LeftButton
or ev.button() == QtCore.Qt.MouseButton.RightButton
):
pts = self.pointsAt(ev.pos())
if len(pts) > 0:
self.ptsClicked = pts
ev.accept()
if ev.modifiers() in [
QtCore.Qt.KeyboardModifier.ControlModifier,
QtCore.Qt.KeyboardModifier.ShiftModifier,
]:
self.selectPoints.emit({self.ptsClicked[0]._index}, False)
else:
self.selectPoints.emit({self.ptsClicked[0]._index}, True)
self.sigClicked.emit(self, self.ptsClicked, ev)
else:
ev.ignore()
else:
ev.ignore()
def hoverEvent(self, ev):
if self.opts["hoverable"]:
old = self.data["hovered"]
if ev.exit:
new = np.zeros_like(self.data["hovered"])
else:
new = self._maskAt(ev.pos())
if self._hasHoverStyle():
self.data["sourceRect"][old ^ new] = 0
self.data["hovered"] = new
self.updateSpots()
points = self.points()[new][-1:]
# Show information about hovered points in a tool tip
self.sigHovered.emit(self, points, ev)
class UtteranceClusterView(pg.PlotWidget):
utteranceRequested = QtCore.Signal(object)
plotAvailable = QtCore.Signal(object)
selectionUpdated = QtCore.Signal()
def __init__(self, *args):
super().__init__(*args)
self.settings = AnchorSettings()
self.setBackground(self.settings.value(self.settings.PRIMARY_VERY_DARK_COLOR))
self.corpus_model = None
self.speaker_model = None
self.selection_model = None
self.brushes = {-1: pg.mkBrush(0.5)}
self.scatter_item = ScatterPlot()
self.scatter_item.selectPoints.connect(self.update_selection)
self.addItem(self.scatter_item)
self.hideButtons()
self.getPlotItem().setDefaultPadding(0)
self.getPlotItem().hideAxis("left")
self.getPlotItem().hideAxis("bottom")
# self.getPlotItem().setMouseEnabled(False, False)
self.getPlotItem().setMenuEnabled(False)
self.scatter_item.sigClicked.connect(self.update_point)
self.legend_item = ClusterLegend(
offset=(10, 10),
sampleType=ClusterLegendItem,
brush=pg.mkBrush(self.settings.value(self.settings.PRIMARY_BASE_COLOR)),
pen=pg.mkPen(self.settings.value(self.settings.MAIN_TEXT_COLOR)),
labelTextColor=self.settings.value(self.settings.MAIN_TEXT_COLOR),
)
self.legend_item.changeCluster.connect(self.change_cluster)
self.legend_item.setParentItem(self.getPlotItem())
self.legend_item.setFont(self.settings.font)
self.selected_indices = set()
# self.addItem(self.legend_item)
self.highlight_pen = pg.mkPen(self.settings.value(self.settings.MAIN_TEXT_COLOR), width=3)
self.hover_pen = pg.mkPen(self.settings.value(self.settings.ACCENT_LIGHT_COLOR), width=3)
self.base_pen = pg.mkPen(0.5)
self.selection_timer = QtCore.QTimer()
self.selection_timer.setInterval(300)
self.selection_timer.timeout.connect(self.send_selection_update)
self.brush_needs_update = False
def send_selection_update(self):
self.selection_timer.stop()
self.selectionUpdated.emit()
def change_cluster(self, cluster_id):
if not self.selected_indices:
return
self.speaker_model.cluster_labels[np.array(list(self.selected_indices))] = cluster_id
brushes = [self.brushes[x] for x in self.speaker_model.cluster_labels]
self.scatter_item.setBrush(brushes)
def set_models(
self,
corpus_model: CorpusModel,
selection_model: CorpusSelectionModel,
speaker_model: SpeakerModel,
):
self.corpus_model = corpus_model
self.selection_model = selection_model
self.speaker_model = speaker_model
self.speaker_model.clustered.connect(self.update_plot)
self.speaker_model.mdsFinished.connect(self.update_plot)
self.speaker_model.mdsAboutToChange.connect(self.update_plot)
self.speaker_model.speakersChanged.connect(self.update_plot)
def update_point(self, sender, spots, ev: pg.GraphicsScene.mouseEvents.MouseClickEvent):
spot = spots[0]
index = spot._index
if ev.button() == QtCore.Qt.MouseButton.LeftButton:
utterance_id = int(self.speaker_model.utterance_ids[index])
utterance = self.corpus_model.session.query(Utterance).get(utterance_id)
self.selection_model.set_current_file(
utterance.file_id,
utterance.begin,
utterance.end,
utterance.channel,
force_update=True,
)
else:
current_cluster = self.speaker_model.cluster_labels[index]
current_cluster += 1
if current_cluster >= self.speaker_model.num_clusters:
current_cluster = -1
self.speaker_model.cluster_labels[index] = current_cluster
spot.setBrush(self.brushes[current_cluster])
ev.accept()
def update_plot(self):
self.legend_item.clear()
if self.speaker_model.mds is None or self.speaker_model.cluster_labels is None:
self.scatter_item.clear()
return
self.brushes = {-1: pg.mkBrush(0.5)}
for i in range(self.speaker_model.num_clusters):
self.brushes[i] = pg.mkBrush(pg.intColor(i, self.speaker_model.num_clusters))
for k, v in self.brushes.items():
if k < 0:
label = "Noise"
else:
label = f"Cluster {k}"
self.legend_item.addItem(pg.ScatterPlotItem(brush=v, name=label), label)
brushes = [self.brushes[x] for x in self.speaker_model.cluster_labels]
self.scatter_item.setData(
pos=self.speaker_model.mds,
size=10,
brush=brushes,
hoverPen=self.hover_pen,
hoverable=True,
)
self.plotAvailable.emit(True)
def highlight_cluster(self, cluster_id):
self.selected_indices = set(np.where(self.speaker_model.cluster_labels == cluster_id)[0])
self.update_highlight()
def update_selection(self, selected_points, reset=True):
if reset:
new_selection = set(selected_points)
else:
new_selection = self.selected_indices.symmetric_difference(selected_points)
if new_selection == self.selected_indices:
return
self.selected_indices = new_selection
self.selection_timer.start()
self.update_highlight()
def update_highlight(self):
if self.speaker_model.mds is None:
return
num_utterances = self.speaker_model.mds.shape[0]
pens = []
for i in range(num_utterances):
if i in self.selected_indices:
pens.append(self.highlight_pen)
else:
pens.append(self.base_pen)
self.scatter_item.setPen(pens)
class AudioPlotItem(pg.PlotItem):
def __init__(self, top_point, bottom_point):
super().__init__()
self.settings = AnchorSettings()
self.setDefaultPadding(0)
self.setClipToView(True)
self.getAxis("bottom").setPen(self.settings.value(self.settings.ACCENT_LIGHT_COLOR))
self.getAxis("bottom").setTextPen(self.settings.value(self.settings.ACCENT_LIGHT_COLOR))
self.getAxis("bottom").setTickFont(self.settings.small_font)
rect = QtCore.QRectF()
rect.setTop(top_point)
rect.setBottom(bottom_point)
rect.setLeft(0)
rect.setRight(10)
rect = rect.normalized()
self.setRange(rect=rect)
self.hideAxis("left")
self.setMouseEnabled(False, False)
self.setMenuEnabled(False)
self.hideButtons()
class SpeakerTierItem(pg.PlotItem):
def __init__(self, top_point, bottom_point):
super().__init__()
self.settings = AnchorSettings()
self.setDefaultPadding(0)
self.setClipToView(True)
self.hideAxis("left")
self.hideAxis("bottom")
rect = QtCore.QRectF()
rect.setTop(top_point)
rect.setBottom(bottom_point)
rect.setLeft(0)
rect.setRight(10)
rect = rect.normalized()
self.setRange(rect=rect)
self.setMouseEnabled(False, False)
self.setMenuEnabled(False)
self.hideButtons()
class UtteranceView(QtWidgets.QWidget):
undoRequested = QtCore.Signal()
redoRequested = QtCore.Signal()
playRequested = QtCore.Signal()
def __init__(self, *args):
super().__init__(*args)
self.settings = AnchorSettings()
self.corpus_model: typing.Optional[CorpusModel] = None
self.selection_model: typing.Optional[CorpusSelectionModel] = None
layout = QtWidgets.QVBoxLayout()
self.bottom_point = 0
self.top_point = 8
self.height = self.top_point - self.bottom_point
self.separator_point = (self.height / 2) + self.bottom_point
self.waveform_worker = workers.WaveformWorker()
self.auto_waveform_worker = workers.AutoWaveformWorker()
self.spectrogram_worker = workers.SpectrogramWorker()
self.pitch_track_worker = workers.PitchWorker()
self.speaker_tier_worker = workers.SpeakerTierWorker()
self.waveform_worker.signals.result.connect(self.finalize_loading_wave_form)
self.auto_waveform_worker.signals.result.connect(self.finalize_loading_auto_wave_form)
self.spectrogram_worker.signals.result.connect(self.finalize_loading_spectrogram)
self.pitch_track_worker.signals.result.connect(self.finalize_loading_pitch_track)
self.speaker_tier_worker.signals.result.connect(self.finalize_loading_utterances)
# self.break_line.setZValue(30)
self.audio_layout = pg.GraphicsLayoutWidget()
self.audio_layout.centralWidget.layout.setContentsMargins(0, 0, 0, 0)
self.audio_layout.centralWidget.layout.setSpacing(0)
self.audio_layout.setBackground(self.settings.value(self.settings.PRIMARY_VERY_DARK_COLOR))
self.audio_plot = AudioPlots(2, 1, 0)
self.audio_plot_item = AudioPlotItem(2, 0)
self.audio_plot_item.addItem(self.audio_plot)
# self.audio_plot.setZValue(0)
self.audio_layout.addItem(self.audio_plot_item)
self.show_all_speakers = False
self.show_transcription = True
self.show_alignment = True
self.speaker_tier_layout = pg.GraphicsLayoutWidget()
self.speaker_tier_layout.setAspectLocked(False)
self.speaker_tier_layout.centralWidget.layout.setContentsMargins(0, 0, 0, 0)
self.speaker_tier_layout.centralWidget.layout.setSpacing(0)
        self.speaker_tiers: dict[int, SpeakerTier] = {}
self.search_term = None
self.lock = Lock()
self.extra_tiers = {}
self.tier_scroll_area = QtWidgets.QScrollArea()
self.audio_scroll_area = QtWidgets.QScrollArea()
self.audio_scroll_area.setContentsMargins(0, 0, 0, 0)
self.tier_scroll_area.setWidget(self.speaker_tier_layout)
self.tier_scroll_area.setWidgetResizable(True)
self.tier_scroll_area.setContentsMargins(0, 0, 0, 0)
self.tier_scroll_area.setHorizontalScrollBarPolicy(
QtCore.Qt.ScrollBarPolicy.ScrollBarAlwaysOff
)
scroll_layout = QtWidgets.QVBoxLayout()
layout.addWidget(self.audio_scroll_area)
scroll_layout.addWidget(self.audio_layout)
self.audio_scroll_area.setLayout(scroll_layout)
layout.addWidget(self.tier_scroll_area)
layout.setContentsMargins(0, 0, 0, 0)
scroll_layout.setContentsMargins(0, 0, 0, 0)
layout.setSpacing(0)
scroll_layout.setSpacing(0)
self.setLayout(layout)
def clean_up_for_close(self):
self.spectrogram_worker.stop()
self.pitch_track_worker.stop()
self.waveform_worker.stop()
self.auto_waveform_worker.stop()
self.speaker_tier_worker.stop()
def set_models(
self,
corpus_model: CorpusModel,
selection_model: CorpusSelectionModel,
dictionary_model: DictionaryTableModel,
):
self.corpus_model = corpus_model
self.corpus_model.corpusLoaded.connect(self.set_extra_tiers)
self.corpus_model.refreshTiers.connect(self.set_up_new_file)
self.selection_model = selection_model
self.dictionary_model = dictionary_model
for t in self.speaker_tiers.values():
t.set_models(corpus_model, selection_model, dictionary_model)
self.audio_plot.set_models(self.selection_model)
self.selection_model.viewChanged.connect(self.update_plot)
# self.corpus_model.utteranceTextUpdated.connect(self.refresh_utterance_text)
self.selection_model.fileChanged.connect(self.set_up_new_file)
self.selection_model.channelChanged.connect(self.update_channel)
self.selection_model.resetView.connect(self.reset_plot)
def finalize_loading_utterances(self, results):
utterances, file_id = results
if (
self.selection_model.current_file is None
or file_id != self.selection_model.current_file.id
):
return
self.speaker_tiers = {}
self.speaker_tier_items = {}
self.speaker_tier_layout.clear()
available_speakers = {}
for u in utterances:
if u.speaker_id not in self.speaker_tiers:
tier = SpeakerTier(
self.bottom_point,
self.separator_point,
u.speaker,
search_term=self.search_term,
)
tier.dragFinished.connect(self.update_selected_speaker)
tier.draggingLine.connect(self.audio_plot.update_drag_line)
tier.lineDragFinished.connect(self.audio_plot.hide_drag_line)
tier.receivedWheelEvent.connect(self.audio_plot.wheelEvent)
tier.set_models(self.corpus_model, self.selection_model, self.dictionary_model)
tier.set_extra_tiers(self.extra_tiers)
tier.setZValue(30)
available_speakers[u.speaker.name] = u.speaker_id
self.speaker_tiers[u.speaker_id] = tier
self.speaker_tiers[u.speaker_id].utterances.append(u)
for i, (key, tier) in enumerate(self.speaker_tiers.items()):
tier.set_speaker_index(0, 1)
tier.set_available_speakers(available_speakers)
tier.refresh()
tier_item = SpeakerTierItem(self.bottom_point, self.separator_point)
tier_item.setRange(
xRange=[self.selection_model.min_time, self.selection_model.max_time]
)
tier_item.addItem(tier)
self.speaker_tier_items[key] = tier_item
self.speaker_tier_layout.addItem(tier_item, i, 0)
row_height = self.audio_plot_item.height()
if len(self.speaker_tiers) > 1 and len(self.extra_tiers) < 2:
row_height = int(row_height / 2)
self.speaker_tier_layout.setFixedHeight(len(self.speaker_tiers) * row_height)
if len(self.speaker_tiers) > 1:
self.tier_scroll_area.verticalScrollBar().setSingleStep(row_height)
self.tier_scroll_area.verticalScrollBar().setPageStep(row_height)
self.tier_scroll_area.verticalScrollBar().setMinimum(0)
self.tier_scroll_area.verticalScrollBar().setMaximum(
len(self.speaker_tiers) * row_height
)
self.tier_scroll_area.setVerticalScrollBarPolicy(
QtCore.Qt.ScrollBarPolicy.ScrollBarAlwaysOn
)
self.audio_layout.centralWidget.layout.setContentsMargins(
0, 0, self.settings.scroll_bar_height, 0
)
else:
self.audio_layout.centralWidget.layout.setContentsMargins(0, 0, 0, 0)
self.tier_scroll_area.setVerticalScrollBarPolicy(
QtCore.Qt.ScrollBarPolicy.ScrollBarAlwaysOff
)
def finalize_loading_wave_form(self, results):
y, file_path = results
if (
self.selection_model.current_file is None
or file_path != self.selection_model.current_file.sound_file.sound_file_path
):
return
self.audio_plot.wave_form.y = y
self.get_latest_waveform()
def finalize_loading_spectrogram(self, results):
stft, channel, begin, end, min_db, max_db = results
if begin != self.selection_model.min_time or end != self.selection_model.max_time:
return
self.audio_plot.spectrogram.setData(stft, channel, begin, end, min_db, max_db)
def finalize_loading_pitch_track(self, results):
pitch_track, voicing_track, channel, begin, end, min_f0, max_f0 = results
if begin != self.selection_model.min_time or end != self.selection_model.max_time:
return
if pitch_track is None:
return
x = np.linspace(
start=self.selection_model.min_time,
stop=self.selection_model.max_time,
num=pitch_track.shape[0],
)
self.audio_plot.pitch_track.setData(x=x, y=pitch_track, connect="finite")
self.audio_plot.pitch_track.set_range(min_f0, max_f0, end)
self.audio_plot.pitch_track.show()
def finalize_loading_auto_wave_form(self, results):
y, begin, end, channel = results
if begin != self.selection_model.min_time or end != self.selection_model.max_time:
return
x = np.linspace(
start=self.selection_model.min_time, stop=self.selection_model.max_time, num=y.shape[0]
)
self.audio_plot.wave_form.setData(x=x, y=y)
self.audio_plot.wave_form.show()
def get_utterances(self):
for tier in self.speaker_tiers.values():
tier.reset_tier()
self.speaker_tier_layout.removeItem(tier)
if self.selection_model.current_file is None:
return
self.speaker_tier_worker.stop()
self.speaker_tier_worker.set_params(
self.corpus_model.session, self.selection_model.current_file.id
)
self.speaker_tier_worker.start()
def set_extra_tiers(self):
workflows = (
self.corpus_model.session.query(CorpusWorkflow)
.order_by(CorpusWorkflow.time_stamp)
.all()
)
self.extra_tiers = {}
for w in workflows:
if w.workflow_type is WorkflowType.alignment:
if self.show_alignment and "Words" not in self.extra_tiers:
self.extra_tiers["Words"] = "aligned_word_intervals"
self.extra_tiers["Phones"] = "aligned_phone_intervals"
elif w.workflow_type is WorkflowType.reference:
if "Reference" not in self.extra_tiers:
self.extra_tiers["Reference"] = "reference_phone_intervals"
elif w.workflow_type is WorkflowType.transcription:
if self.show_transcription and "Transcription" not in self.extra_tiers:
self.extra_tiers["Transcription"] = "transcription_text"
if self.corpus_model.corpus.has_alignments(w.workflow_type):
self.extra_tiers["Transcribed words"] = "transcribed_word_intervals"
self.extra_tiers["Transcribed phones"] = "transcribed_phone_intervals"
elif w.workflow_type is WorkflowType.per_speaker_transcription:
if self.show_transcription and "Transcription" not in self.extra_tiers:
self.extra_tiers["Transcription"] = "transcription_text"
if self.corpus_model.corpus.has_alignments(w.workflow_type):
self.extra_tiers[
"Transcribed words"
] = "per_speaker_transcribed_word_intervals"
self.extra_tiers[
"Transcribed phones"
] = "per_speaker_transcribed_phone_intervals"
def update_channel(self):
self.get_latest_waveform()
def set_up_new_file(self, *args):
self.audio_plot.spectrogram.hide()
self.audio_plot.wave_form.hide()
self.audio_plot.pitch_track.hide()
self.audio_plot.spectrogram.cached_begin = None
self.audio_plot.spectrogram.cached_end = None
self.audio_plot.wave_form.y = None
for t in self.speaker_tiers.values():
t.visible_utterances = {}
self.speaker_tiers = {}
if self.selection_model.current_file is None:
return
self.get_utterances()
self.waveform_worker.stop()
self.waveform_worker.set_params(
self.selection_model.current_file.sound_file.sound_file_path
)
self.waveform_worker.start()
def set_search_term(self):
term = self.corpus_model.text_filter
if not term:
return
self.search_term = term
for tier in self.speaker_tiers.values():
tier.setSearchTerm(term)
def reset_text_grid(self):
for tier in self.speaker_tiers.values():
tier.reset_tier()
def draw_text_grid(self):
scroll_to = None
for i, (key, tier) in enumerate(self.speaker_tiers.items()):
tier.refresh()
if tier.has_visible_utterances and scroll_to is None:
scroll_to = i
tier_height = self.speaker_tier_items[key].height()
self.speaker_tier_items[key].setRange(
xRange=[self.selection_model.min_time, self.selection_model.max_time]
)
if scroll_to is not None:
self.tier_scroll_area.scrollContentsBy(0, scroll_to * tier_height)
def update_show_speakers(self, state):
self.show_all_speakers = state > 0
self.update_plot()
def get_latest_waveform(self):
if self.audio_plot.wave_form.y is None:
return
self.audio_plot.wave_form.hide()
self.audio_plot.spectrogram.hide()
self.audio_plot.pitch_track.hide()
        begin_samp = int(
            self.selection_model.min_time
            * self.selection_model.current_file.sound_file.sample_rate
        )
        end_samp = int(
            self.selection_model.max_time
            * self.selection_model.current_file.sound_file.sample_rate
        )
if len(self.audio_plot.wave_form.y.shape) > 1:
y = self.audio_plot.wave_form.y[
begin_samp:end_samp, self.selection_model.selected_channel
]
else:
y = self.audio_plot.wave_form.y[begin_samp:end_samp]
self.spectrogram_worker.stop()
self.spectrogram_worker.set_params(
y,
self.selection_model.current_file.sound_file.sample_rate,
self.selection_model.min_time,
self.selection_model.max_time,
self.selection_model.selected_channel,
self.settings.value(self.settings.SPEC_DYNAMIC_RANGE),
self.settings.value(self.settings.SPEC_N_FFT),
self.settings.value(self.settings.SPEC_N_TIME_STEPS),
self.settings.value(self.settings.SPEC_WINDOW_SIZE),
self.settings.value(self.settings.SPEC_PREEMPH),
self.settings.value(self.settings.SPEC_MAX_FREQ),
)
self.spectrogram_worker.start()
if self.selection_model.max_time - self.selection_model.min_time <= 10:
self.pitch_track_worker.stop()
self.pitch_track_worker.set_params(
y,
self.selection_model.current_file.sound_file.sample_rate,
self.selection_model.min_time,
self.selection_model.max_time,
self.selection_model.selected_channel,
self.settings.value(self.settings.PITCH_MIN_F0),
self.settings.value(self.settings.PITCH_MAX_F0),
self.settings.value(self.settings.PITCH_FRAME_SHIFT),
self.settings.value(self.settings.PITCH_FRAME_LENGTH),
self.settings.value(self.settings.PITCH_DELTA_PITCH),
self.settings.value(self.settings.PITCH_PENALTY_FACTOR),
self.audio_plot.pitch_track.bottom_point,
self.audio_plot.pitch_track.top_point,
)
self.pitch_track_worker.start()
self.auto_waveform_worker.stop()
self.auto_waveform_worker.set_params(
y,
self.audio_plot.wave_form.bottom_point,
self.audio_plot.wave_form.top_point,
self.selection_model.min_time,
self.selection_model.max_time,
self.selection_model.selected_channel,
)
self.auto_waveform_worker.start()
self.audio_plot_item.setRange(
xRange=[self.selection_model.min_time, self.selection_model.max_time]
)
self.audio_plot.update_plot()
def reset_plot(self, *args):
self.reset_text_grid()
self.audio_plot.wave_form.clear()
self.audio_plot.pitch_track.clear()
self.audio_plot.spectrogram.clear()
def update_plot(self, *args):
if self.corpus_model.rowCount() == 0:
return
if self.selection_model.current_file is None or self.selection_model.min_time is None:
return
self.get_latest_waveform()
self.audio_plot.update_plot()
self.draw_text_grid()
def update_selected_speaker(self, utterance, pos):
if pos > self.separator_point:
return
new_speaker = None
old_speaker = None
for tier in self.speaker_tiers.values():
if tier.speaker_id == utterance.speaker_id:
old_speaker = tier.speaker
if tier.top_point > pos > tier.bottom_point:
new_speaker = tier.speaker
if new_speaker is not None and new_speaker != old_speaker:
self.corpus_model.update_utterance_speaker(utterance, new_speaker)
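`update_selected_speaker` above resolves a drag-and-drop by hit-testing the drop y-position against each tier's vertical span. A stand-alone sketch of that resolution, assuming tiers are represented as simple `(speaker_id, top_point, bottom_point)` tuples rather than the real `SpeakerTier` objects:

```python
def resolve_drop_speaker(tiers, utterance_speaker_id, y_pos, separator_point):
    # tiers: list of (speaker_id, top_point, bottom_point) tuples; an
    # illustrative stand-in for the real SpeakerTier objects.
    if y_pos > separator_point:
        return None  # drop ended in the audio plot area, not on a tier
    old_speaker = new_speaker = None
    for speaker_id, top, bottom in tiers:
        if speaker_id == utterance_speaker_id:
            old_speaker = speaker_id
        if top > y_pos > bottom:
            new_speaker = speaker_id
    if new_speaker is not None and new_speaker != old_speaker:
        return new_speaker  # reassignment needed
    return None             # dropped on its own tier, or outside all tiers

tiers = [(7, 0.0, -2.0), (9, -2.0, -4.0)]
moved_to = resolve_drop_speaker(tiers, 7, -3.0, separator_point=4.0)
same_tier = resolve_drop_speaker(tiers, 7, -1.0, separator_point=4.0)
```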
class UtteranceLine(pg.InfiniteLine):
hoverChanged = QtCore.Signal(object)
def __init__(
self, *args, movingPen=None, view_min=None, view_max=None, initial=True, **kwargs
):
        super().__init__(*args, **kwargs)
self.movingPen = movingPen
self.initial = initial
self.view_min = view_min
self.view_max = view_max
self.bounding_width = 0.1
self.setCursor(QtCore.Qt.CursorShape.SizeHorCursor)
def hoverEvent(self, ev):
if (
(not ev.isExit())
and self.movable
and (
(self.initial and self.pos().x() - self.mapToParent(ev.pos()).x() < 0)
or (not self.initial and self.pos().x() - self.mapToParent(ev.pos()).x() > 0)
)
and ev.acceptDrags(QtCore.Qt.MouseButton.LeftButton)
):
self.setMouseHover(True)
self._boundingRect = None
self.hoverChanged.emit(True)
else:
self.setMouseHover(False)
self.hoverChanged.emit(False)
self._boundingRect = None
def mouseDragEvent(self, ev):
if self.movable and ev.button() == QtCore.Qt.MouseButton.LeftButton:
if ev.isStart() and (
(self.initial and self.pos().x() - self.mapToParent(ev.buttonDownPos()).x() < 0)
or (
not self.initial
and self.pos().x() - self.mapToParent(ev.buttonDownPos()).x() > 0
)
):
self.moving = True
self._boundingRect = None
self.currentPen = self.movingPen
self.cursorOffset = self.pos() - self.mapToParent(ev.buttonDownPos())
self.startPosition = self.pos()
ev.accept()
if not self.moving:
return
p = self.cursorOffset + self.mapToParent(ev.pos())
p.setY(self.startPosition.y())
if p.x() > self.view_max:
p.setX(self.view_max)
if p.x() < self.view_min:
p.setX(self.view_min)
self.setPos(p)
self.sigDragged.emit(self)
if ev.isFinish():
self.currentPen = self.pen
self._boundingRect = None
self._bounds = None
self._lastViewSize = None
self.moving = False
self.sigPositionChangeFinished.emit(self)
self.update()
def _computeBoundingRect(self):
# br = UIGraphicsItem.boundingRect(self)
vr = self.viewRect() # bounds of containing ViewBox mapped to local coords.
if vr is None:
return QtCore.QRectF()
# add a 4-pixel radius around the line for mouse interaction.
px = self.pixelLength(
direction=pg.Point(1, 0), ortho=True
) # get pixel length orthogonal to the line
if px is None:
px = 0
pw = max(self.pen.width() / 2, self.hoverPen.width() / 2)
w = max(self.bounding_width, self._maxMarkerSize + pw) + 1
w = w * px
br = QtCore.QRectF(vr)
if self.initial:
br.setBottom(-w)
br.setTop(0)
else:
br.setTop(w)
br.setBottom(0)
if not self.moving:
left = self.span[0]
right = self.span[1]
else:
length = br.width()
left = br.left()
right = br.left() + length
br.setLeft(left)
br.setRight(right)
br = br.normalized()
vs = self.getViewBox().size()
if self._bounds != br or self._lastViewSize != vs:
self._bounds = br
self._lastViewSize = vs
self.prepareGeometryChange()
self._endPoints = (left, right)
self._lastViewRect = vr
return self._bounds
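`_computeBoundingRect` widens the line's grab region to cover the widest pen or marker plus one pixel, then converts that pixel width into data units so it stays constant across zoom levels. A hedged sketch of just that conversion (the parameter names are illustrative):

```python
def grab_width_data_units(bounding_width, max_marker_size,
                          pen_width, hover_pen_width, px):
    # `px` is the length of one screen pixel in data coordinates (what
    # pixelLength() returns); the grab region must cover the widest pen or
    # marker plus one extra pixel, at any zoom level.
    pw = max(pen_width / 2, hover_pen_width / 2)
    w = max(bounding_width, max_marker_size + pw) + 1
    return w * px

w = grab_width_data_units(0.1, 5, 2, 4, px=0.01)
```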
class SpeakerComboBox(QtWidgets.QComboBox):
popupAboutToBeShown = QtCore.Signal()
popupAboutToBeHidden = QtCore.Signal()
def showPopup(self):
self.popupAboutToBeShown.emit()
super().showPopup()
def hidePopup(self):
self.popupAboutToBeHidden.emit()
super().hidePopup()
class UtteranceSpeakerDropDownItem(pg.TextItem):
def __init__(self, utterance, corpus_model: CorpusModel, font=None, anchor=(1, 1)):
self.corpus_model = corpus_model
self.anchor = pg.Point(anchor)
self.rotateAxis = None
self.angle = 0
pg.GraphicsObject.__init__(self)
self.combo_box = SpeakerComboBox()
self.combo_box.setDisabled(True)
self.combo_box.popupAboutToBeShown.connect(self.boostZ)
self.combo_box.popupAboutToBeHidden.connect(self.lowerZ)
self.utterance = utterance
self.current_speaker_id = utterance.speaker_id
self.textItem = QtWidgets.QGraphicsProxyWidget(self)
# self.textItem.setWidget(self.combo_box)
# self.corpus_model.runFunction.emit('Getting closest speakers', self.populate_options, [{
# 'utterance_id': self.utterance.id,
# }])
self.combo_box.addItem(utterance.speaker.name, utterance.speaker_id)
self.combo_box.setCurrentIndex(0)
self._lastTransform = None
self._lastScene = None
self._bounds = QtCore.QRectF()
if font:
self.combo_box.setFont(font)
self.fill = pg.mkBrush(None)
self.border = pg.mkPen(None)
self.combo_box.currentIndexChanged.connect(self.update_speaker)
def update_speaker(self):
speaker_id = self.combo_box.currentData(QtCore.Qt.ItemDataRole.UserRole)
if speaker_id is None:
return
if speaker_id == self.utterance.speaker_id:
return
speaker = self.corpus_model.session.query(Speaker).get(speaker_id)
self.corpus_model.update_utterance_speaker(self.utterance, speaker)
def populate_options(self, options):
self.combo_box.clear()
with QtCore.QSignalBlocker(self.combo_box):
found_current = False
i = -1
for i, (s_id, s_name) in enumerate(options.items()):
self.combo_box.addItem(s_name, s_id)
if s_id == self.utterance.speaker_id:
self.combo_box.setCurrentIndex(i)
found_current = True
if not found_current:
self.combo_box.addItem(self.utterance.speaker.name, self.utterance.speaker_id)
self.combo_box.setCurrentIndex(i + 1)
self.combo_box.setDisabled(False)
def boostZ(self):
self.setZValue(self.parentItem().zValue() + 30)
self.update()
def lowerZ(self):
self.setZValue(self.parentItem().zValue())
self.update()
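`populate_options` above rebuilds the combo box from a suggestion dict and falls back to appending the current speaker when it is absent from the suggestions. A pure-Python sketch of that item/index bookkeeping, with `options` standing in for the payload the worker delivers:

```python
def build_speaker_items(options, current_id, current_name):
    # options: speaker_id -> display name (the suggestion list); if the
    # utterance's current speaker is missing it is appended so the combo box
    # can still display it, mirroring populate_options().
    items = list(options.items())
    for index, (s_id, _) in enumerate(items):
        if s_id == current_id:
            return items, index
    items.append((current_id, current_name))
    return items, len(items) - 1

items, idx = build_speaker_items({3: "spk3", 5: "spk5"}, 5, "spk5")
items2, idx2 = build_speaker_items({3: "spk3"}, 9, "spk9")
```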
class Menu(QtWidgets.QMenu):
def mousePressEvent(self, e: QtGui.QMouseEvent) -> None:
return super().mousePressEvent(e)
def leaveEvent(self, e: QtCore.QEvent) -> None:
self.hide()
return super().leaveEvent(e)
def hideEvent(self, e: QtGui.QHideEvent) -> None:
return super().hideEvent(e)
class TextEdit(QtWidgets.QTextEdit):
lookUpWord = QtCore.Signal(object)
createWord = QtCore.Signal(object)
lostFocus = QtCore.Signal()
def __init__(self, dictionary_model, speaker_id, *args):
super().__init__(*args)
self.settings = AnchorSettings()
self.dictionary_model: DictionaryTableModel = dictionary_model
self.speaker_id = speaker_id
self.lookUpWord.connect(self.dictionary_model.lookup_word)
self.createWord.connect(self.dictionary_model.add_word)
self.setCursor(QtCore.Qt.CursorShape.IBeamCursor)
self.setContextMenuPolicy(QtCore.Qt.ContextMenuPolicy.CustomContextMenu)
self.customContextMenuRequested.connect(self.generate_context_menu)
self.setAcceptRichText(False)
self.setFrameShape(QtWidgets.QFrame.Shape.NoFrame)
self.verticalScrollBar().setCursor(QtCore.Qt.CursorShape.ArrowCursor)
self.setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarPolicy.ScrollBarAlwaysOff)
self.setWordWrapMode(QtGui.QTextOption.WrapMode.WordWrap)
def dragMoveEvent(self, e: QtGui.QDragMoveEvent) -> None:
e.ignore()
    def dragEnterEvent(self, e: QtGui.QDragEnterEvent) -> None:
        e.ignore()
    def dragLeaveEvent(self, e: QtGui.QDragLeaveEvent) -> None:
        e.ignore()
def focusOutEvent(self, e: QtGui.QFocusEvent) -> None:
self.lostFocus.emit()
return super().focusOutEvent(e)
def generate_context_menu(self, location):
menu = Menu(self)
cursor = self.cursorForPosition(location)
cursor.select(QtGui.QTextCursor.SelectionType.WordUnderCursor)
word = cursor.selectedText()
# add extra items to the menu
menu.addSeparator()
if self.dictionary_model.check_word(word, speaker_id=self.speaker_id):
lookUpAction = QtGui.QAction(f'Look up "{word}" in dictionary', self)
lookUpAction.triggered.connect(lambda: self.lookUpWord.emit(word))
lookUpAction.triggered.connect(menu.hide)
menu.addAction(lookUpAction)
else:
createAction = QtGui.QAction(f'Add pronunciation for "{word}"', self)
createAction.triggered.connect(lambda: self.createWord.emit(word))
createAction.triggered.connect(menu.hide)
menu.addAction(createAction)
menu.setStyleSheet(self.settings.menu_style_sheet)
# show the menu
menu.exec_(self.mapToGlobal(location))
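`generate_context_menu` above branches on whether the word under the cursor is known to the dictionary model: known words get a lookup action, unknown words an add-pronunciation action. A stdlib-only sketch of that decision, with a plain `set` standing in for `DictionaryTableModel.check_word` and a regex scan approximating `WordUnderCursor`:

```python
import re

def word_under_cursor(text, position):
    # Rough stand-in for QTextCursor's WordUnderCursor selection.
    for match in re.finditer(r"\w+", text):
        if match.start() <= position < match.end():
            return match.group()
    return ""

def context_action_for(word, known_words):
    # Known words get a dictionary lookup entry, unknown words an
    # "add pronunciation" entry; `known_words` stands in for check_word().
    if word in known_words:
        return f'Look up "{word}" in dictionary'
    return f'Add pronunciation for "{word}"'

word = word_under_cursor("the quick fox", 5)
action = context_action_for(word, {"the", "fox"})
```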
class UtterancePGTextItem(pg.TextItem):
def __init__(
self,
item: Utterance,
selection_model: CorpusSelectionModel,
top_point=None,
bottom_point=None,
per_tier_range=None,
color=None,
font=None,
html=None,
anchor=(0, 0),
border=None,
fill=None,
dictionary_model: Optional[DictionaryTableModel] = None,
speaker_id: int = 0,
):
self.anchor = pg.Point(anchor)
self.rotateAxis = None
self.begin = item.begin
self.end = item.end
self.selection_model = selection_model
self.angle = 0
self.dictionary_model = dictionary_model
self.speaker_id = speaker_id
pg.GraphicsObject.__init__(self)
self.text_edit = TextEdit(dictionary_model, speaker_id)
self.text_edit.cursorPositionChanged.connect(self.update)
# self.text_edit.setAutoFillBackground(False)
# self.text_edit.viewport().setAutoFillBackground(False)
self.textItem = QtWidgets.QGraphicsProxyWidget(self)
self.textItem.setWidget(self.text_edit)
self._lastTransform = None
self._lastScene = None
self._bounds = QtCore.QRectF()
if font:
self.text_edit.setFont(font)
self.text_edit.setPlainText(item.text)
self.fill = pg.mkBrush(fill)
self.border = pg.mkPen(border)
self._cached_pixel_size = None
self.cached_duration = None
self.top_point = top_point
self.bottom_point = bottom_point
self.per_tier_range = per_tier_range
self.view_min = self.selection_model.min_time
self.view_max = self.selection_model.max_time
self.selection_model.viewChanged.connect(self.update_times)
def update_times(self, begin, end):
self.hide()
self.view_min = begin
self.view_max = end
br = self.boundingRect()
if (
self.view_min <= self.begin < self.view_max
or self.view_max >= self.end > self.view_min
or (self.begin <= self.view_min and self.end >= self.view_max)
) and br.width() / self._cached_pixel_size[0] > 100:
self.show()
def boundingRect(self):
br = QtCore.QRectF(self.viewRect()) # bounds of containing ViewBox mapped to local coords.
vb = self.getViewBox()
visible_begin = max(self.begin, self.view_min)
visible_end = min(self.end, self.view_max)
br.setLeft(visible_begin)
br.setRight(visible_end)
br.setTop(self.top_point)
# br.setBottom(self.top_point-self.per_tier_range)
br.setBottom(self.bottom_point)
duration = visible_end - visible_begin
self._cached_pixel_size = vb.viewPixelSize()
x_margin_px = 25
y_margin_top_px = 25
y_margin_bottom_px = 10
bounding_pixel_width = duration / self._cached_pixel_size[0]
width = max(int(bounding_pixel_width - (2 * x_margin_px)), 0)
bounding_pixel_height = abs(self.per_tier_range) / self._cached_pixel_size[1]
y_margin = y_margin_top_px * self._cached_pixel_size[1]
x_margin = x_margin_px * self._cached_pixel_size[0]
        height = max(int(bounding_pixel_height - (y_margin_top_px + y_margin_bottom_px)), 0)
self.setPos(visible_begin + x_margin, self.top_point - y_margin)
self.textItem.setGeometry(0, 0, width, height)
self.text_edit.setFixedWidth(width)
self.text_edit.setFixedHeight(height)
return br
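`boundingRect` above clamps the utterance interval to the visible window, converts the visible duration to pixels, and subtracts a fixed margin on each side to size the embedded text box. A simplified sketch of that geometry (height and y-margins omitted; `px_width` plays the role of `viewPixelSize()[0]`):

```python
def text_box_geometry(begin, end, view_min, view_max, px_width, x_margin_px=25):
    # Clamp the utterance interval to the visible window, convert its duration
    # to pixels, and shrink by a margin on each side; a width of 0 means the
    # interval is too small on screen to hold an editable text box.
    visible_begin = max(begin, view_min)
    visible_end = min(end, view_max)
    duration = visible_end - visible_begin
    width = max(int(duration / px_width - 2 * x_margin_px), 0)
    return visible_begin, width

pos, width = text_box_geometry(1.0, 3.0, 2.0, 10.0, px_width=0.01)
tiny = text_box_geometry(0.0, 0.1, 0.0, 10.0, px_width=0.01)
```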
class PhonePGTextItem(pg.TextItem):
def __init__(
self,
text: str = "",
color=None,
font=None,
html=None,
anchor=(0, 0),
border=None,
fill=None,
phones=None,
):
from anchor.widgets import PronunciationInput
self.anchor = pg.Point(anchor)
self.rotateAxis = None
self.angle = 0
if phones is None:
phones = []
pg.GraphicsObject.__init__(self)
self.text_edit = PronunciationInput(phones)
# self.text_edit.setAutoFillBackground(False)
# self.text_edit.viewport().setAutoFillBackground(False)
self.textItem = QtWidgets.QGraphicsProxyWidget(self)
self.textItem.setWidget(self.text_edit)
self._lastTransform = None
self._lastScene = None
self._bounds = QtCore.QRectF()
if font:
self.text_edit.setFont(font)
self.text_edit.setText(text)
self.fill = pg.mkBrush(fill)
self.border = pg.mkPen(border)
def setPlainText(self, text):
"""
Set the plain text to be rendered by this item.
See QtGui.QGraphicsTextItem.setPlainText().
"""
if text != self.toPlainText():
self.text_edit.setText(text)
self.updateTextPos()
def toPlainText(self):
return self.text_edit.text()
class TranscriberErrorHighlighter(QtGui.QSyntaxHighlighter):
WORDS = r"\S+"
def __init__(self, *args):
super().__init__(*args)
self.alignment = None
self.settings = AnchorSettings()
self.keyword_color = self.settings.error_color
self.keyword_text_color = self.settings.primary_very_dark_color
self.highlight_format = QtGui.QTextCharFormat()
self.highlight_format.setBackground(self.keyword_color)
self.highlight_format.setForeground(self.keyword_text_color)
def set_alignment(self, alignment):
self.alignment = alignment
    def highlightBlock(self, text):
        if not self.alignment:
            return
        current_align_ind = 0
        for word_object in re.finditer(self.WORDS, text):
            while (
                current_align_ind < len(self.alignment.seqB)
                and self.alignment.seqB[current_align_ind] != word_object.group()
            ):
                current_align_ind += 1
            if current_align_ind >= len(self.alignment.seqB):
                # Word not present in the alignment; stop rather than raise IndexError
                break
            sb = self.alignment.seqB[current_align_ind]
            sa = self.alignment.seqA[current_align_ind]
            if sb == word_object.group() and sb != sa:
                self.setFormat(
                    word_object.start(),
                    word_object.end() - word_object.start(),
                    self.highlight_format,
                )
            current_align_ind += 1
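The highlighter above walks an aligned reference/hypothesis word pair (`seqA`/`seqB`) and marks character spans in the displayed text wherever the hypothesis word disagrees with the reference. A stand-alone sketch of that span computation, assuming the sequences are already aligned lists of words (gap entries allowed):

```python
import re

def mismatch_spans(text, seq_a, seq_b):
    # seq_a/seq_b: aligned reference and hypothesis word sequences; `text`
    # contains the hypothesis words in order. Returns (start, length) spans
    # for words that differ from the reference.
    spans = []
    align_ind = 0
    for match in re.finditer(r"\S+", text):
        while align_ind < len(seq_b) and seq_b[align_ind] != match.group():
            align_ind += 1  # skip gap entries such as "-"
        if align_ind >= len(seq_b):
            break
        if seq_b[align_ind] != seq_a[align_ind]:
            spans.append((match.start(), match.end() - match.start()))
        align_ind += 1
    return spans

spans = mismatch_spans("the cat sat", ["the", "hat", "sat"], ["the", "cat", "sat"])
```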
class TextItem(pg.TextItem):
def __init__(
self, text="", color=(200, 200, 200), html=None, anchor=(0, 0), border=None, fill=None
):
"""
============== =================================================================================
**Arguments:**
*text* The text to display
*color* The color of the text (any format accepted by pg.mkColor)
*html* If specified, this overrides both *text* and *color*
*anchor* A QPointF or (x,y) sequence indicating what region of the text box will
be anchored to the item's position. A value of (0,0) sets the upper-left corner
of the text box to be at the position specified by setPos(), while a value of (1,1)
sets the lower-right corner.
*border* A pen to use when drawing the border
*fill* A brush to use when filling within the border
*angle* Angle in degrees to rotate text. Default is 0; text will be displayed upright.
*rotateAxis* If None, then a text angle of 0 always points along the +x axis of the scene.
If a QPointF or (x,y) sequence is given, then it represents a vector direction
in the parent's coordinate system that the 0-degree line will be aligned to. This
                       allows text to follow both the position and orientation of its parent while still
discarding any scale and shear factors.
============== =================================================================================
The effects of the `rotateAxis` and `angle` arguments are added independently. So for example:
* rotateAxis=None, angle=0 -> normal horizontal text
* rotateAxis=None, angle=90 -> normal vertical text
* rotateAxis=(1, 0), angle=0 -> text aligned with x axis of its parent
* rotateAxis=(0, 1), angle=0 -> text aligned with y axis of its parent
* rotateAxis=(1, 0), angle=90 -> text orthogonal to x axis of its parent
"""
self.anchor = pg.Point(anchor)
self.rotateAxis = None
# self.angle = 0
pg.GraphicsObject.__init__(self)
self.textItem = QtWidgets.QGraphicsTextItem(text)
self.textItem.setParentItem(self)
self._lastTransform = None
self._lastScene = None
self.angle = 0
self._bounds = QtCore.QRectF()
self.setText(text, color)
self.fill = pg.mkBrush(fill)
self.border = pg.mkPen(border)
class IntervalTextRegion(pg.GraphicsObject):
audioSelected = QtCore.Signal(object, object)
def __init__(
self,
interval: CtmInterval,
color,
top_point,
height,
font=None,
border=None,
background_brush=None,
hover_brush=None,
selected_brush=None,
dictionary_model=None,
speaker_id=None,
):
self.background_brush = background_brush
self.hover_brush = hover_brush
self.selected_brush = selected_brush
self.border = border
self.dictionary_model = dictionary_model
self.speaker_id = speaker_id
super().__init__()
text = interval.label
self.text = TextItem(text, color=color, anchor=(0.5, 0.5))
self.text.setParentItem(self)
self.font = font
self.text.textItem.setFont(font)
self.picture = QtGui.QPicture()
self.interval = interval
self.top_point = top_point
self.left_point = interval.begin
self._bounds = None
self.width = interval.end - interval.begin
self.height = height
self.mouseHovering = False
self.selected = False
self.currentBrush = self.background_brush
self.text.setPos(
(self.interval.begin + self.interval.end) / 2, self.top_point - (self.height / 2)
)
self.begin_line = pg.InfiniteLine()
        self.rect = QtCore.QRectF(
            self.interval.begin,
            self.top_point,
            self.interval.end - self.interval.begin,
            self.height,
        )
        self.rect.setTop(self.top_point)
        self.rect.setBottom(self.top_point - self.height)
self._generate_picture()
def _generate_picture(self):
painter = QtGui.QPainter(self.picture)
painter.setPen(self.border)
painter.setBrush(self.currentBrush)
painter.drawRect(self.rect)
painter.end()
def mouseClickEvent(self, ev):
if ev.button() != QtCore.Qt.MouseButton.LeftButton:
ev.ignore()
return
self.audioSelected.emit(self.interval.begin, self.interval.end)
ev.accept()
def boundingRect(self):
br = QtCore.QRectF(self.picture.boundingRect())
return br
def paint(self, painter: QtGui.QPainter, *args):
painter.drawPicture(0, 0, self.picture)
class TranscriberTextRegion(IntervalTextRegion):
viewRequested = QtCore.Signal(object, object)
audioSelected = QtCore.Signal(object, object)
def __init__(
self,
interval: CtmInterval,
color,
top_point,
height,
font=None,
border=None,
background_brush=None,
hover_brush=None,
selected_brush=None,
alignment=None,
):
super().__init__(
interval,
color,
top_point,
height,
font,
border,
background_brush,
hover_brush,
selected_brush,
)
self.highlighter = TranscriberErrorHighlighter(self.text.textItem.document())
if alignment is not None:
self.highlighter.set_alignment(alignment)
class Highlighter(QtGui.QSyntaxHighlighter):
WORDS = r"\S+"
def __init__(self, *args):
        super().__init__(*args)
self.settings = AnchorSettings()
self.speaker_id = None
self.dictionary_model: Optional[DictionaryTableModel] = None
self.search_term: Optional[TextFilterQuery] = None
self.spellcheck_format = QtGui.QTextCharFormat()
self.spellcheck_format.setFontWeight(QtGui.QFont.Weight.ExtraBold)
self.spellcheck_format.setUnderlineColor(self.settings.error_color)
self.spellcheck_format.setUnderlineStyle(
QtGui.QTextCharFormat.UnderlineStyle.SingleUnderline
)
def set_speaker(self, speaker_id: int):
self.speaker_id = speaker_id
def set_models(self, dictionary_model: DictionaryTableModel):
self.dictionary_model = dictionary_model
def setSearchTerm(self, search_term: TextFilterQuery):
if search_term != self.search_term:
self.search_term = search_term
self.rehighlight()
def highlightBlock(self, text):
self.settings.sync()
self.spellcheck_format.setUnderlineColor(self.settings.error_color)
if self.dictionary_model is not None and self.dictionary_model.word_sets:
for word_object in re.finditer(self.WORDS, text):
if not self.dictionary_model.check_word(word_object.group(), self.speaker_id):
self.setFormat(
word_object.start(),
word_object.end() - word_object.start(),
self.spellcheck_format,
)
if self.search_term:
if not self.search_term.case_sensitive:
text = text.lower()
filter_regex = self.search_term.generate_expression()
for word_object in re.finditer(filter_regex, text):
for i in range(word_object.start(), word_object.end()):
f = self.format(i)
f.setFontWeight(QtGui.QFont.Weight.Bold)
f.setBackground(QtGui.QColor(self.settings.accent_base_color))
f.setForeground(QtGui.QColor(self.settings.primary_very_dark_color))
self.setFormat(i, 1, f)
class MfaRegion(pg.LinearRegionItem):
    """Base class for selectable, optionally editable interval regions
    (utterances, words, phones) drawn on the annotation plot."""
dragFinished = QtCore.Signal(object)
textEdited = QtCore.Signal(object, object)
undoRequested = QtCore.Signal()
redoRequested = QtCore.Signal()
playRequested = QtCore.Signal()
selectRequested = QtCore.Signal(object, object, object, object)
audioSelected = QtCore.Signal(object, object)
viewRequested = QtCore.Signal(object, object)
settings = AnchorSettings()
def __init__(
self,
item: CtmInterval,
corpus_model: CorpusModel,
dictionary_model: typing.Optional[DictionaryTableModel],
selection_model: CorpusSelectionModel,
selected: bool = False,
bottom_point: float = 0,
top_point: float = 1,
):
pg.GraphicsObject.__init__(self)
self.item = item
self.item_min = self.item.begin
self.item_max = self.item.end
self.corpus_model = corpus_model
self.dictionary_model = dictionary_model
self.selection_model = selection_model
self.bottom_point = bottom_point
self.top_point = top_point
self.selected = selected
self.span = (self.bottom_point, self.top_point)
self.text_margin_pixels = 2
self.selected_range_color = self.settings.value(self.settings.PRIMARY_BASE_COLOR).lighter()
self.interval_background_color = self.settings.value(self.settings.PRIMARY_DARK_COLOR)
self.hover_line_color = self.settings.value(self.settings.ERROR_COLOR)
self.moving_line_color = self.settings.value(self.settings.ERROR_COLOR)
self.break_line_color = self.settings.value(self.settings.ACCENT_LIGHT_COLOR)
self.text_color = self.settings.value(self.settings.MAIN_TEXT_COLOR)
self.selected_interval_color = self.settings.value(self.settings.PRIMARY_BASE_COLOR)
self.plot_text_font = self.settings.big_font
self.setCursor(QtCore.Qt.CursorShape.SizeAllCursor)
self.pen = pg.mkPen(self.break_line_color, width=3)
self.pen.setCapStyle(QtCore.Qt.PenCapStyle.FlatCap)
self.border_pen = pg.mkPen(self.break_line_color, width=2)
self.border_pen.setCapStyle(QtCore.Qt.PenCapStyle.FlatCap)
if self.selected:
self.background_brush = pg.mkBrush(self.selected_interval_color)
else:
# self.interval_background_color.setAlpha(0)
self.background_brush = pg.mkBrush(self.interval_background_color)
self.hoverPen = pg.mkPen(self.hover_line_color, width=3)
self.movingPen = pg.mkPen(
self.moving_line_color, width=3, style=QtCore.Qt.PenStyle.DashLine
)
self.orientation = "vertical"
self.bounds = QtCore.QRectF()
self.blockLineSignal = False
self.moving = False
self.mouseHovering = False
self.swapMode = "sort"
self.clipItem = None
self._boundingRectCache = None
self.setBrush(self.background_brush)
self.movable = False
# note LinearRegionItem.Horizontal and LinearRegionItem.Vertical
# are kept for backward compatibility.
lineKwds = dict(
movable=False,
bounds=None,
span=self.span,
pen=self.pen,
hoverPen=self.hoverPen,
movingPen=self.movingPen,
)
self.lines = [
UtteranceLine(
QtCore.QPointF(self.item_min, 0),
angle=90,
initial=True,
view_min=self.selection_model.min_time,
view_max=self.selection_model.max_time,
**lineKwds,
),
UtteranceLine(
QtCore.QPointF(self.item_max, 0),
angle=90,
initial=False,
view_min=self.selection_model.min_time,
view_max=self.selection_model.max_time,
**lineKwds,
),
]
for line in self.lines:
line.setZValue(30)
line.setParentItem(self)
line.sigPositionChangeFinished.connect(self.lineMoveFinished)
self.lines[0].sigPositionChanged.connect(self._line0Moved)
self.lines[1].sigPositionChanged.connect(self._line1Moved)
self.lines[0].hoverChanged.connect(self.popup)
self.lines[1].hoverChanged.connect(self.popup)
self.cached_visible_duration = None
self.cached_view = None
def paint(self, p, *args):
p.setBrush(self.currentBrush)
p.setPen(self.border_pen)
p.drawRect(self.boundingRect())
def mouseDragEvent(self, ev):
if not self.movable or ev.button() != QtCore.Qt.MouseButton.LeftButton:
return
ev.accept()
if ev.isStart():
bdp = ev.buttonDownPos()
self.cursorOffsets = [line.pos() - bdp for line in self.lines]
self.startPositions = [line.pos() for line in self.lines]
self.moving = True
if not self.moving:
return
# self.lines[0].blockSignals(True) # only want to update once
# for i, l in enumerate(self.lines):
# l.setPos(self.cursorOffsets[i] + ev.pos())
# self.lines[0].blockSignals(False)
self.prepareGeometryChange()
if ev.isFinish():
self.moving = False
self.dragFinished.emit(ev.pos())
self.sigRegionChangeFinished.emit(self)
else:
self.sigRegionChanged.emit(self)
def mouseClickEvent(self, ev: QtGui.QMouseEvent):
if ev.button() != QtCore.Qt.MouseButton.LeftButton:
ev.ignore()
return
self.audioSelected.emit(self.item_min, self.item_max)
ev.accept()
def mouseDoubleClickEvent(self, ev: QtGui.QMouseEvent):
if ev.button() != QtCore.Qt.MouseButton.LeftButton:
ev.ignore()
return
self.audioSelected.emit(self.item_min, self.item_max)
padding = (self.item_max - self.item_min) / 2
self.viewRequested.emit(self.item_min - padding, self.item_max + padding)
ev.accept()
def change_editing(self, editable: bool):
self.movable = editable
self.lines[0].movable = editable
self.lines[1].movable = editable
def setSelected(self, selected: bool):
self.selected = selected
if self.selected:
self.setBrush(pg.mkBrush(self.selected_interval_color))
else:
# self.interval_background_color.setAlpha(0)
self.setBrush(pg.mkBrush(self.interval_background_color))
self.update()
def popup(self, hover: bool):
if hover or self.moving or self.lines[0].moving or self.lines[1].moving:
self.setZValue(30)
else:
self.setZValue(0)
def setMouseHover(self, hover: bool):
# Inform the item that the mouse is(not) hovering over it
if self.mouseHovering == hover:
return
self.mouseHovering = hover
self.popup(hover)
self.update()
def select_self(self, deselect=False, reset=True, focus=False):
self.selected = True
if self.selected and not deselect and not reset:
return
class AlignmentRegion(MfaRegion):
def __init__(
self,
phone_interval: CtmInterval,
corpus_model: CorpusModel,
selection_model: CorpusSelectionModel,
selected: bool = False,
bottom_point: float = 0,
top_point: float = 1,
):
super().__init__(
phone_interval, corpus_model, None, selection_model, selected, bottom_point, top_point
)
self.original_text = self.item.label
self.text = pg.TextItem(
self.item.label, anchor=(0.5, 0.5), color=self.text_color, border=pg.mkColor("r")
)
self.text.setFont(self.settings.font)
self.text.setParentItem(self)
self.per_tier_range = self.top_point - self.bottom_point
def boundingRect(self):
br = QtCore.QRectF(self.viewRect()) # bounds of containing ViewBox mapped to local coords.
vb = self.getViewBox()
pixel_size = vb.viewPixelSize()
rng = self.getRegion()
br.setLeft(rng[0])
br.setRight(rng[1])
br.setTop(self.top_point)
# br.setBottom(self.top_point-self.per_tier_range)
br.setBottom(self.bottom_point + 0.01)
try:
visible_begin = max(rng[0], self.selection_model.min_time)
visible_end = min(rng[1], self.selection_model.max_time)
except TypeError:
return br
visible_duration = visible_end - visible_begin
x_margin_px = 8
available_text_width = visible_duration / pixel_size[0] - (2 * x_margin_px)
self.text.setVisible(available_text_width > 10)
if visible_duration != self.cached_visible_duration:
self.cached_visible_duration = visible_duration
self.text.setPos(
visible_begin + (visible_duration / 2), self.top_point - (self.per_tier_range / 2)
)
self.size_calculated = True
br = br.normalized()
if self._boundingRectCache != br:
self._boundingRectCache = br
self.prepareGeometryChange()
return br
class PhoneRegion(AlignmentRegion):
def __init__(
self,
phone_interval: CtmInterval,
corpus_model: CorpusModel,
selection_model: CorpusSelectionModel,
selected: bool = False,
bottom_point: float = 0,
top_point: float = 1,
):
super().__init__(
phone_interval, corpus_model, selection_model, selected, bottom_point, top_point
)
class WordRegion(AlignmentRegion):
def __init__(
self,
phone_interval: CtmInterval,
corpus_model: CorpusModel,
selection_model: CorpusSelectionModel,
selected: bool = False,
bottom_point: float = 0,
top_point: float = 1,
):
super().__init__(
phone_interval, corpus_model, selection_model, selected, bottom_point, top_point
)
class UtteranceRegion(MfaRegion):
def __init__(
self,
utterance: Utterance,
corpus_model: CorpusModel,
dictionary_model: DictionaryTableModel,
selection_model: CorpusSelectionModel,
selected: bool = False,
bottom_point: float = 0,
top_point: float = 1,
extra_tiers=None,
available_speakers=None,
search_term=None,
):
super().__init__(
utterance,
corpus_model,
dictionary_model,
selection_model,
selected,
bottom_point,
top_point,
)
self.item = utterance
self.selection_model = selection_model
if extra_tiers is None:
extra_tiers = {}
self.extra_tiers = extra_tiers
self.extra_tier_intervals = {}
self.num_tiers = len(extra_tiers) + 1
self.per_tier_range = (top_point - bottom_point) / self.num_tiers
self.setMovable(True)
self.corpus_model.utteranceTextUpdated.connect(self.update_text_from_model)
self.original_text = self.item.text
self.text = UtterancePGTextItem(
self.item,
self.selection_model,
anchor=(0, 0),
top_point=self.top_point,
bottom_point=self.bottom_point,
per_tier_range=self.per_tier_range,
dictionary_model=self.dictionary_model,
font=self.settings.font,
speaker_id=self.item.speaker_id,
color=self.text_color,
border=pg.mkPen(self.settings.accent_light_color),
)
self.text.setFont(self.plot_text_font)
self.text.setParentItem(self)
self.speaker_dropdown = UtteranceSpeakerDropDownItem(
self.item, self.corpus_model, font=self.settings.small_font, anchor=(0, 1)
)
self.speaker_dropdown.setParentItem(self)
self.text_edit = self.text.text_edit
if not self.corpus_model.editable:
self.text_edit.setReadOnly(True)
self.corpus_model.editableChanged.connect(self.change_editing)
self.text_edit.setViewportMargins(
self.text_margin_pixels,
self.text_margin_pixels,
self.text_margin_pixels,
self.text_margin_pixels,
)
self.text_edit.setStyleSheet(self.settings.interval_style_sheet)
self.speaker_dropdown.combo_box.setStyleSheet(self.settings.combo_box_style_sheet)
self.text_edit.installEventFilter(self)
self.highlighter = Highlighter(self.text_edit.document())
self.highlighter.set_models(dictionary_model)
self.highlighter.set_speaker(self.item.speaker_id)
if search_term:
self.highlighter.setSearchTerm(search_term)
self.timer = QtCore.QTimer()
self.text_edit.textChanged.connect(self.refresh_timer)
self.text_edit.lostFocus.connect(self.save_changes)
self.timer.timeout.connect(self.save_changes)
self.hide()
self._cached_pixel_size = None
for i, (tier_name, lookup) in enumerate(self.extra_tiers.items()):
intervals = getattr(self.item, lookup)
alignment = None
if lookup == "transcription_text":
if self.item.text and self.item.transcription_text:
alignment = pairwise2.align.globalms(
self.item.text.split(),
self.item.transcription_text.split(),
0,
-2,
-1,
-1,
gap_char=["-"],
one_alignment_only=True,
)[0]
intervals = [
CtmInterval(self.item.begin, self.item.end, self.item.transcription_text)
]
self.extra_tier_intervals[tier_name] = []
tier_top_point = self.top_point - ((i + 1) * self.per_tier_range)
tier_bottom_point = tier_top_point - self.per_tier_range
if intervals is None:
continue
for interval in intervals:
if lookup == "transcription_text":
interval_reg = TranscriberTextRegion(
interval,
self.text_color,
border=pg.mkPen(self.settings.accent_light_color),
top_point=tier_top_point,
height=self.per_tier_range,
font=self.settings.font,
alignment=alignment,
background_brush=self.background_brush,
selected_brush=pg.mkBrush(self.selected_range_color),
)
elif "phone_intervals" in lookup:
interval_reg = PhoneRegion(
interval,
self.corpus_model,
selection_model=selection_model,
selected=False,
top_point=tier_top_point,
bottom_point=tier_bottom_point,
)
elif "word_intervals" in lookup:
interval_reg = WordRegion(
interval,
self.corpus_model,
selection_model=selection_model,
selected=False,
top_point=tier_top_point,
bottom_point=tier_bottom_point,
)
else:
interval_reg = IntervalTextRegion(
interval,
self.text_color,
border=pg.mkPen(self.settings.accent_light_color, width=3),
top_point=tier_top_point,
height=self.per_tier_range,
font=self.settings.font,
background_brush=self.background_brush,
selected_brush=pg.mkBrush(self.selected_range_color),
)
interval_reg.audioSelected.connect(self.audioSelected.emit)
interval_reg.viewRequested.connect(self.viewRequested.emit)
interval_reg.setParentItem(self)
self.extra_tier_intervals[tier_name].append(interval_reg)
self.selection_model.viewChanged.connect(self.update_view_times)
self.show()
self.available_speakers = available_speakers
def contextMenuEvent(self, ev: QtWidgets.QGraphicsSceneContextMenuEvent):
menu = QtWidgets.QMenu()
change_speaker_menu = QtWidgets.QMenu("Change speaker")
for speaker_name, speaker_id in self.available_speakers.items():
if speaker_id == self.item.speaker_id:
continue
a = QtGui.QAction(speaker_name)
a.triggered.connect(self.update_speaker)
change_speaker_menu.addAction(a)
menu.addMenu(change_speaker_menu)
menu.setStyleSheet(self.settings.menu_style_sheet)
menu.exec_(ev.screenPos())
def update_speaker(self):
speaker_name = self.sender().text()
speaker_id = self.available_speakers[speaker_name]
self.corpus_model.update_utterance_speaker(self.item, speaker_id)
def refresh_timer(self):
self.timer.start(500)
self.update()
def change_editing(self, editable: bool):
super().change_editing(editable)
self.text_edit.setReadOnly(not editable)
self.speaker_dropdown.combo_box.setEnabled(editable)
def select_self(self, deselect=False, reset=True, focus=False):
self.selected = True
if self.selected and not deselect and not reset:
return
self.selectRequested.emit(self.item.id, deselect, reset, focus)
def mouseDoubleClickEvent(self, ev: QtGui.QMouseEvent):
if ev.button() != QtCore.Qt.MouseButton.LeftButton:
ev.ignore()
return
deselect = False
reset = True
if ev.modifiers() == QtCore.Qt.Modifier.CTRL:
reset = False
if self.selected:
deselect = True
self.selected = False
else:
self.selected = True
else:
self.selected = True
self.select_self(deselect=deselect, reset=reset, focus=True)
ev.accept()
def mouseClickEvent(self, ev: QtGui.QMouseEvent):
if ev.button() != QtCore.Qt.MouseButton.LeftButton:
ev.ignore()
return
deselect = False
reset = True
if ev.modifiers() == QtCore.Qt.Modifier.CTRL:
reset = False
if self.selected:
deselect = True
self.selected = False
else:
self.selected = True
else:
self.selected = True
self.select_self(deselect=deselect, reset=reset, focus=False)
ev.accept()
def update_view_times(self, view_min, view_max):
self.lines[0].view_min = view_min
self.lines[0].view_max = view_max
self.lines[1].view_min = view_min
self.lines[1].view_max = view_max
self.update()
def boundingRect(self):
br = QtCore.QRectF(self.viewRect()) # bounds of containing ViewBox mapped to local coords.
vb = self.getViewBox()
self._cached_pixel_size = vb.viewPixelSize()
rng = self.getRegion()
br.setLeft(rng[0])
br.setRight(rng[1])
br.setTop(self.top_point)
br.setBottom(self.bottom_point)
x_margin_px = 40
self.size_calculated = True
for line in self.lines:
line.bounding_width = int(x_margin_px / 2)
br = br.normalized()
if self._boundingRectCache != br:
self._boundingRectCache = br
self.prepareGeometryChange()
return br
def eventFilter(self, obj, event):
if event.type() == QtCore.QEvent.Type.KeyPress:
key_event = QtGui.QKeyEvent(event)
undo_combo = QtCore.QKeyCombination(QtCore.Qt.Modifier.CTRL, QtCore.Qt.Key.Key_Z)
redo_combo = QtCore.QKeyCombination(
QtCore.Qt.Modifier.CTRL | QtCore.Qt.Modifier.SHIFT, QtCore.Qt.Key.Key_Z
)
if key_event.key() == QtCore.Qt.Key.Key_Tab:
self.playRequested.emit()
return True
if (
key_event.keyCombination() == undo_combo
and not self.text_edit.document().isUndoAvailable()
):
self.undoRequested.emit()
return True
if (
key_event.keyCombination() == redo_combo
and not self.text_edit.document().isRedoAvailable()
):
self.redoRequested.emit()
return True
return super().eventFilter(obj, event)
def update_text_from_model(self, utterance_id, new_text):
try:
if utterance_id != self.item.id or new_text == self.original_text:
return
except sqlalchemy.orm.exc.DetachedInstanceError:
self.corpus_model.session.refresh(self.item)
if utterance_id != self.item.id or new_text == self.original_text:
return
self.original_text = new_text
with QtCore.QSignalBlocker(self.text.text_edit):
position = self.text_edit.textCursor().position()
end_offset = self.text_edit.document().characterCount() - position
self.text_edit.setPlainText(new_text)
cursor = self.text_edit.textCursor()
position = self.text_edit.document().characterCount() - end_offset
if position > self.text_edit.document().characterCount():
position = self.text_edit.document().characterCount()
cursor.setPosition(position)
self.text_edit.setTextCursor(cursor)
self.text_edit.setAlignment(QtCore.Qt.AlignmentFlag.AlignCenter)
self.update()
def save_changes(self):
text = self.text_edit.toPlainText()
self.timer.stop()
if self.original_text == text:
return
self.original_text = text
self.textEdited.emit(self.item, text)
class WaveForm(pg.PlotCurveItem):
def __init__(self, bottom_point, top_point):
self.settings = AnchorSettings()
self.top_point = top_point
self.bottom_point = bottom_point
self.mid_point = (self.top_point + self.bottom_point) / 2
pen = pg.mkPen(self.settings.value(self.settings.MAIN_TEXT_COLOR), width=1)
super(WaveForm, self).__init__()
self.setPen(pen)
self.channel = 0
self.y = None
self.selection_model = None
self.setAcceptHoverEvents(False)
def hoverEvent(self, ev):
return
def set_models(self, selection_model: CorpusSelectionModel):
self.selection_model = selection_model
class PitchTrack(pg.PlotCurveItem):
def __init__(self, bottom_point, top_point):
self.settings = AnchorSettings()
self.top_point = top_point
self.bottom_point = bottom_point
self.mid_point = (self.top_point + self.bottom_point) / 2
pen = pg.mkPen(self.settings.value(self.settings.PRIMARY_LIGHT_COLOR), width=3)
super().__init__()
self.setPen(pen)
self.channel = 0
self.y = None
self.selection_model = None
self.setAcceptHoverEvents(False)
self.min_label = pg.TextItem(
str(self.settings.PITCH_MIN_F0),
self.settings.value(self.settings.PRIMARY_VERY_LIGHT_COLOR),
anchor=(1, 1),
)
self.min_label.setFont(self.settings.font)
self.min_label.setParentItem(self)
self.max_label = pg.TextItem(
str(self.settings.PITCH_MAX_F0),
self.settings.value(self.settings.PRIMARY_VERY_LIGHT_COLOR),
anchor=(1, 0),
)
self.max_label.setFont(self.settings.font)
self.max_label.setParentItem(self)
def hoverEvent(self, ev):
return
def set_range(self, min_f0, max_f0, end):
self.min_label.setText(f"{min_f0} Hz")
self.max_label.setText(f"{max_f0} Hz")
self.min_label.setPos(end, self.bottom_point)
self.max_label.setPos(end, self.top_point)
def set_models(self, selection_model: CorpusSelectionModel):
self.selection_model = selection_model
class Spectrogram(pg.ImageItem):
def __init__(self, bottom_point, top_point):
self.settings = AnchorSettings()
self.top_point = top_point
self.bottom_point = bottom_point
self.selection_model = None
self.channel = 0
super(Spectrogram, self).__init__()
self.cmap = pg.ColorMap(
None, [self.settings.primary_very_dark_color, self.settings.accent_light_color]
)
self.cmap.linearize()
self.color_bar = pg.ColorBarItem(colorMap=self.cmap)
self.color_bar.setImageItem(self)
self.setAcceptHoverEvents(False)
self.cached_begin = None
self.cached_end = None
self.cached_channel = None
self.stft = None
def set_models(self, selection_model: CorpusSelectionModel):
self.selection_model = selection_model
def boundingRect(self):
br = super(Spectrogram, self).boundingRect()
return br
def setData(self, stft, channel, begin, end, min_db, max_db):
self.stft = stft
self.min_db = min_db
self.max_db = max_db
self.cached_end = end
self.cached_begin = begin
self.cached_channel = channel
duration = self.cached_end - self.cached_begin
rect = [self.cached_begin, self.bottom_point, duration, self.top_point - self.bottom_point]
self.setLevels([self.min_db, self.max_db], update=False)
self.setImage(self.stft, colorMap=self.cmap, rect=rect)
self.show()
class SelectionArea(pg.LinearRegionItem):
def __init__(self, top_point, bottom_point, brush, clipItem, pen):
self.settings = AnchorSettings()
self.selection_model: typing.Optional[CorpusSelectionModel] = None
super(SelectionArea, self).__init__(
values=(-10, -5),
span=(bottom_point / top_point, 1),
brush=brush,
movable=False,
# clipItem=clipItem,
pen=pen,
orientation="vertical",
)
self.setZValue(30)
self.lines[0].label = pg.InfLineLabel(
self.lines[0], text="", position=1, anchors=[(1, 0), (1, 0)]
)
self.lines[1].label = pg.InfLineLabel(
self.lines[1], text="", position=1, anchors=[(0, 0), (0, 0)]
)
font = self.settings.font
font.setBold(True)
self.lines[0].label.setFont(font)
self.lines[1].label.setFont(font)
def set_model(self, selection_model: CorpusSelectionModel):
self.selection_model = selection_model
self.selection_model.selectionAudioChanged.connect(self.update_region)
def update_region(self):
begin = self.selection_model.selected_min_time
end = self.selection_model.selected_max_time
if (
begin is None
or end is None
or (begin == self.selection_model.min_time and end == self.selection_model.max_time)
):
self.setVisible(False)
else:
self.setRegion([begin, end])
self.lines[0].label.setText(f"{begin:.3f}", self.settings.error_color)
self.lines[1].label.setText(f"{end:.3f}", self.settings.error_color)
self.setVisible(True)
class AudioPlots(pg.GraphicsObject):
def __init__(self, top_point, separator_point, bottom_point):
super().__init__()
self.settings = AnchorSettings()
self.selection_model: typing.Optional[CorpusSelectionModel] = None
self.top_point = top_point
self.separator_point = separator_point
self.bottom_point = bottom_point
self.wave_form = WaveForm(separator_point, self.top_point)
self.spectrogram = Spectrogram(self.bottom_point, separator_point)
self.pitch_track = PitchTrack(self.bottom_point, separator_point)
self.wave_form.setParentItem(self)
self.spectrogram.setParentItem(self)
self.pitch_track.setParentItem(self)
color = self.settings.error_color
color.setAlphaF(0.25)
self.selection_brush = pg.mkBrush(color)
self.background_pen = pg.mkPen(self.settings.accent_light_color)
self.background_brush = pg.mkBrush(self.settings.primary_very_dark_color)
self.selection_area = SelectionArea(
top_point=self.top_point,
bottom_point=self.bottom_point,
brush=self.selection_brush,
clipItem=self,
pen=pg.mkPen(self.settings.error_color),
)
self.selection_area.setParentItem(self)
self.play_line = pg.InfiniteLine(
pos=-20,
span=(0, 1),
pen=pg.mkPen("r", width=1),
movable=False, # We have our own code to handle dragless moving.
)
self.play_line.setParentItem(self)
self.update_line = pg.InfiniteLine(
pos=-20,
span=(0, 1),
pen=pg.mkPen(self.settings.error_color, width=3, style=QtCore.Qt.PenStyle.DashLine),
movable=False, # We have our own code to handle dragless moving.
)
self.update_line.setParentItem(self)
self.update_line.hide()
self.setAcceptHoverEvents(True)
self.picture = QtGui.QPicture()
self.rect = QtCore.QRectF(
left=0, top=self.top_point, width=10, height=self.top_point - self.bottom_point
)
self.rect.setTop(self.top_point)
self.rect.setBottom(self.bottom_point)
self._generate_picture()
def update_drag_line(self, line: UtteranceLine):
self.update_line.setPos(line.pos())
self.update_line.show()
def hide_drag_line(self):
self.update_line.hide()
def wheelEvent(self, ev: QtWidgets.QGraphicsSceneWheelEvent):
ev.accept()
delta = ev.delta()
sc = 1.001**delta
if ev.modifiers() & QtCore.Qt.KeyboardModifier.ControlModifier:
center = self.getViewBox().mapSceneToView(ev.scenePos())
self.selection_model.zoom(sc, center.x())
else:
self.selection_model.pan(sc)
def mouseDragEvent(self, ev):
if ev.button() != QtCore.Qt.MouseButton.LeftButton:
ev.ignore()
return
if self.selection_model.min_time is None:
ev.ignore()
return
min_time = max(min(ev.buttonDownPos().x(), ev.pos().x()), self.selection_model.min_time)
max_time = min(max(ev.buttonDownPos().x(), ev.pos().x()), self.selection_model.max_time)
if ev.isStart():
self.selection_area.setVisible(True)
if ev.isFinish():
self.selection_model.select_audio(min_time, max_time)
ev.accept()
def mouseClickEvent(self, ev):
if ev.button() != QtCore.Qt.MouseButton.LeftButton:
ev.ignore()
return
self.selection_model.request_start_time(ev.pos().x())
ev.accept()
def hoverEvent(self, ev):
if not ev.isExit():
# the mouse is hovering over the image; make sure no other items
# will receive left click/drag events from here.
ev.acceptDrags(QtCore.Qt.MouseButton.LeftButton)
ev.acceptClicks(QtCore.Qt.MouseButton.LeftButton)
def set_models(self, selection_model: CorpusSelectionModel):
self.selection_model = selection_model
self.wave_form.set_models(selection_model)
self.spectrogram.set_models(selection_model)
self.selection_area.set_model(selection_model)
def _generate_picture(self):
if self.selection_model is None:
return
painter = QtGui.QPainter(self.picture)
painter.setPen(self.background_pen)
painter.setBrush(self.background_brush)
painter.drawRect(self.rect)
painter.end()
def paint(self, painter, *args):
painter.save()
painter.drawPicture(0, 0, self.picture)
painter.restore()
def boundingRect(self):
br = QtCore.QRectF(self.picture.boundingRect())
return br
def update_play_line(self, time):
if time is None:
return
self.play_line.setPos(time)
def update_plot(self):
if (
self.selection_model.current_file is None
or self.selection_model.current_file.sound_file is None
or not os.path.exists(self.selection_model.current_file.sound_file.sound_file_path)
):
return
self.rect.setLeft(self.selection_model.min_time)
self.rect.setRight(self.selection_model.max_time)
self._generate_picture()
self.update_play_line(self.selection_model.min_time)
self.selection_area.update_region()
self.update()
class SpeakerTier(pg.GraphicsObject):
    """A TextGrid-style tier that renders one speaker's utterances and
    handles creating, selecting, and editing them."""
dragFinished = QtCore.Signal(object, object)
receivedWheelEvent = QtCore.Signal(object)
draggingLine = QtCore.Signal(object)
lineDragFinished = QtCore.Signal(object)
def __init__(self, bottom_point, top_point, speaker: Speaker, search_term=None):
super().__init__()
self.settings = AnchorSettings()
self.corpus_model: Optional[CorpusModel] = None
self.selection_model: Optional[CorpusSelectionModel] = None
self.search_term = search_term
self.speaker = speaker
self.speaker_id = speaker.id
self.speaker_name = speaker.name
self.speaker_index = 0
self.textgrid_top_point = top_point
self.top_point = top_point
self.speaker_label = pg.TextItem(self.speaker_name, color=self.settings.accent_base_color)
self.speaker_label.setFont(self.settings.font)
self.speaker_label.setParentItem(self)
self.speaker_label.setZValue(40)
self.bottom_point = bottom_point
self.textgrid_bottom_point = bottom_point
self.annotation_range = self.top_point - self.bottom_point
self.extra_tiers = {}
self.utterances = []
        self.visible_utterances: dict[int, UtteranceRegion] = {}
self.background_brush = pg.mkBrush(self.settings.primary_very_dark_color)
self.border = pg.mkPen(self.settings.accent_light_color)
self.picture = QtGui.QPicture()
def wheelEvent(self, ev):
self.receivedWheelEvent.emit(ev)
def mouseDoubleClickEvent(self, ev):
if ev.button() != QtCore.Qt.MouseButton.LeftButton:
ev.ignore()
return
        x = ev.pos().x()
        begin = max(x - 0.5, 0)
        end = min(x + 0.5, self.selection_model.current_file.duration)
        for reg in self.visible_utterances.values():
            if begin >= reg.item_min and end <= reg.item_max:
                # Proposed utterance lies entirely inside an existing one
                ev.accept()
                return
            # Clamp the new bounds so they do not overlap a neighboring region
            if reg.item_min < begin < reg.item_max:
                begin = reg.item_max
            if reg.item_min < end < reg.item_max:
                end = reg.item_min
                break
if end - begin > 0.001:
self.corpus_model.create_utterance(
self.selection_model.current_file, self.speaker, begin, end
)
ev.accept()
def setSearchterm(self, term):
self.search_term = term
for reg in self.visible_utterances.values():
reg.highlighter.setSearchTerm(term)
def boundingRect(self):
return QtCore.QRectF(self.picture.boundingRect())
def paint(self, p, *args):
p.drawPicture(0, 0, self.picture)
def set_speaker_index(self, index, num_speakers):
self.speaker_index = index
speaker_tier_range = self.annotation_range / num_speakers
self.top_point = self.textgrid_top_point - (speaker_tier_range * self.speaker_index)
self.bottom_point = self.top_point - speaker_tier_range
self.rect = QtCore.QRectF(
left=self.selection_model.min_time,
top=self.top_point,
width=self.selection_model.max_time - self.selection_model.min_time,
height=speaker_tier_range,
)
self.rect.setHeight(speaker_tier_range)
self._generate_picture()
def _generate_picture(self):
self.speaker_label.setPos(self.selection_model.min_time, self.top_point)
self.picture = QtGui.QPicture()
painter = QtGui.QPainter(self.picture)
painter.setPen(self.border)
painter.setBrush(self.background_brush)
painter.drawRect(self.rect)
painter.end()
def set_extra_tiers(self, extra_tiers):
self.extra_tiers = extra_tiers
def set_available_speakers(self, available_speakers):
self.available_speakers = available_speakers
def set_models(
self,
corpus_model: CorpusModel,
selection_model: CorpusSelectionModel,
dictionary_model: DictionaryTableModel,
):
self.corpus_model = corpus_model
self.selection_model = selection_model
self.dictionary_model = dictionary_model
for reg in self.visible_utterances.values():
reg.highlighter.set_models(self.dictionary_model)
# self.corpus_model.changeCommandFired.connect(self.refresh)
self.corpus_model.lockCorpus.connect(self.lock)
self.corpus_model.refreshUtteranceText.connect(self.refreshTexts)
self.selection_model.selectionChanged.connect(self.update_select)
def lock(self):
for utt in self.visible_utterances.values():
utt.setMovable(False)
def unlock(self):
for utt in self.visible_utterances.values():
utt.setMovable(True)
    def setSearchTerm(self, term):
        # Store the term so regions created later in refresh() also use it
        self.search_term = term
        for utt in self.visible_utterances.values():
            utt.highlighter.setSearchTerm(term)
def refreshTexts(self, utt_id, text):
for reg in self.visible_utterances.values():
if reg.item.id != utt_id:
continue
with QtCore.QSignalBlocker(reg):
reg.text_edit.setPlainText(text)
break
def reset_tier(self):
for reg in self.visible_utterances.values():
if reg.scene() is not None:
reg.scene().removeItem(reg)
self.visible_utterances = {}
self.other_intervals = []
def refresh(self, *args):
if self.selection_model.min_time is None:
return
self.rect.setLeft(self.selection_model.min_time)
self.rect.setRight(self.selection_model.max_time)
self._generate_picture()
self.has_visible_utterances = False
for u in self.utterances:
if u.end < self.selection_model.min_time:
continue
if u.begin > self.selection_model.max_time:
break
self.has_visible_utterances = True
if u.id in self.visible_utterances:
continue
selected = self.selection_model.checkSelected(u)
# Utterance region always at the top
reg = UtteranceRegion(
u,
self.corpus_model,
self.dictionary_model,
selection_model=self.selection_model,
selected=selected,
extra_tiers=self.extra_tiers,
available_speakers=self.available_speakers,
bottom_point=self.bottom_point,
top_point=self.top_point,
search_term=self.search_term,
)
reg.sigRegionChanged.connect(self.check_utterance_bounds)
reg.sigRegionChangeFinished.connect(self.update_utterance)
reg.dragFinished.connect(self.update_selected_speaker)
reg.lines[0].sigPositionChanged.connect(self.draggingLine.emit)
reg.lines[0].sigPositionChangeFinished.connect(self.lineDragFinished.emit)
reg.lines[1].sigPositionChanged.connect(self.draggingLine.emit)
            reg.lines[1].sigPositionChangeFinished.connect(self.lineDragFinished.emit)
            reg.undoRequested.connect(self.corpus_model.undoRequested.emit)
reg.redoRequested.connect(self.corpus_model.redoRequested.emit)
reg.playRequested.connect(self.corpus_model.playRequested.emit)
reg.audioSelected.connect(self.selection_model.select_audio)
reg.viewRequested.connect(self.selection_model.set_view_times)
reg.textEdited.connect(self.update_utterance_text)
reg.selectRequested.connect(self.selection_model.update_select)
reg.setParentItem(self)
self.visible_utterances[u.id] = reg
def update_utterance_text(self, utterance, new_text):
self.corpus_model.update_utterance_text(utterance, text=new_text)
def update_selected_speaker(self, pos):
pos = pos.y()
reg = self.sender()
utterance = reg.item
self.dragFinished.emit(utterance, pos)
def update_select(self):
selected_rows = {x.id for x in self.selection_model.selectedUtterances()}
for r in self.visible_utterances.values():
if r.item.id in selected_rows:
r.setSelected(True)
else:
r.setSelected(False)
def check_utterance_bounds(self):
reg = self.sender()
with QtCore.QSignalBlocker(reg):
beg, end = reg.getRegion()
if beg < 0:
reg.setRegion([0, end])
return
if end > self.selection_model.current_file.duration:
reg.setRegion([beg, self.selection_model.current_file.duration])
return
for r in self.visible_utterances.values():
if r == reg:
continue
other_begin, other_end = r.getRegion()
if other_begin <= beg < other_end:
reg.setRegion([other_end, end])
break
if other_begin < end <= other_end:
reg.setRegion([beg, other_begin])
break
reg.text.begin, reg.text.end = reg.getRegion()
reg.text.update_times(self.selection_model.min_time, self.selection_model.max_time)
reg.select_self()
reg.update()
def update_utterance(self):
reg = self.sender()
utt = reg.item
beg, end = reg.getRegion()
new_begin = round(beg, 4)
new_end = round(end, 4)
if new_begin == utt.begin and new_end == utt.end:
return
self.corpus_model.update_utterance_times(utt, begin=new_begin, end=new_end)
self.selection_model.select_audio(new_begin, None)
reg.text.begin = new_begin
reg.text.end = new_end
reg.update()
self.lineDragFinished.emit(True)
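The overlap handling in `check_utterance_bounds` above can be hard to follow because it is interleaved with Qt signal blocking and region updates. The sketch below (an editorial illustration, not part of the Anchor codebase — `clamp_region` and its arguments are invented names) extracts the same clamping rules as a pure function: clamp to `[0, duration]`, then snap a dragged boundary off any neighboring utterance it landed inside.

```python
def clamp_region(begin, end, duration, regions):
    """Clamp a dragged (begin, end) span the way check_utterance_bounds does.

    regions is a list of (begin, end) tuples for the other visible
    utterances; the first clamp that applies wins, mirroring the early
    returns in the original method.
    """
    if begin < 0:
        return 0.0, end
    if end > duration:
        return begin, duration
    for other_begin, other_end in regions:
        if other_begin <= begin < other_end:
            # dragged start landed inside a neighbor: push start to its end
            return other_end, end
        if other_begin < end <= other_end:
            # dragged end landed inside a neighbor: pull end to its start
            return begin, other_begin
    return begin, end
```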
# ---- end of anchor/plot.py (Anchor_annotator-0.0.9.tar.gz) ----
import pathlib
from typing import Any, Optional
from montreal_forced_aligner.config import get_temporary_directory
from PySide6 import QtCore, QtGui
class AnchorSettings(QtCore.QSettings):
DEFAULT_DIRECTORY = "anchor/default_directory"
DEFAULT_CORPUS_DIRECTORY = "anchor/default_corpus_directory"
DEFAULT_DICTIONARY_DIRECTORY = "anchor/default_dictionary_directory"
DEFAULT_G2P_DIRECTORY = "anchor/default_g2p_directory"
DEFAULT_ACOUSTIC_DIRECTORY = "anchor/default_acoustic_directory"
DEFAULT_LM_DIRECTORY = "anchor/default_lm_directory"
DEFAULT_IVECTOR_DIRECTORY = "anchor/default_ivector_directory"
DEFAULT_SAD_DIRECTORY = "anchor/default_sad_directory"
CORPORA = "anchor/corpora"
CURRENT_CORPUS = "anchor/current_corpus"
CORPUS_PATH = "path"
DICTIONARY_PATH = "dictionary_path"
ACOUSTIC_MODEL_PATH = "acoustic_model_path"
G2P_MODEL_PATH = "g2p_model_path"
LANGUAGE_MODEL_PATH = "language_model_path"
IE_MODEL_PATH = "ie_model_path"
PHONE_MAPPING_PATH = "phone_mapping_path"
REFERENCE_ALIGNMENT_PATH = "reference_alignment_path"
AUTOSAVE = "anchor/autosave"
AUTOLOAD = "anchor/autoload"
VOLUME = "anchor/audio/volume"
AUDIO_DEVICE = "anchor/audio/device"
GEOMETRY = "anchor/MainWindow/geometry"
WINDOW_STATE = "anchor/MainWindow/windowState"
UTTERANCES_VISIBLE = "anchor/MainWindow/utterancesVisible"
DICTIONARY_VISIBLE = "anchor/MainWindow/dictionaryVisible"
OOV_VISIBLE = "anchor/MainWindow/oovVisible"
SPEAKERS_VISIBLE = "anchor/MainWindow/speakersVisible"
LM_VISIBLE = "anchor/MainWindow/languageModelVisible"
AM_VISIBLE = "anchor/MainWindow/acousticModelVisible"
TRANSCRIPTION_VISIBLE = "anchor/MainWindow/transcriptionVisible"
ALIGNMENT_VISIBLE = "anchor/MainWindow/alignmentVisible"
DIARIZATION_VISIBLE = "anchor/MainWindow/diarizationVisible"
FONT = "anchor/theme/font"
MAIN_TEXT_COLOR = "anchor/theme/text_color"
SELECTED_TEXT_COLOR = "anchor/theme/selected_text_color"
ERROR_COLOR = "anchor/theme/error_color"
PRIMARY_BASE_COLOR = "anchor/theme/primary_color/base"
PRIMARY_LIGHT_COLOR = "anchor/theme/primary_color/light"
PRIMARY_DARK_COLOR = "anchor/theme/primary_color/dark"
PRIMARY_VERY_LIGHT_COLOR = "anchor/theme/primary_color/very_light"
PRIMARY_VERY_DARK_COLOR = "anchor/theme/primary_color/very_dark"
ACCENT_BASE_COLOR = "anchor/theme/accent_color/base"
ACCENT_LIGHT_COLOR = "anchor/theme/accent_color/light"
ACCENT_DARK_COLOR = "anchor/theme/accent_color/dark"
ACCENT_VERY_LIGHT_COLOR = "anchor/theme/accent_color/very_light"
ACCENT_VERY_DARK_COLOR = "anchor/theme/accent_color/very_dark"
PLAY_KEYBIND = "anchor/keybinds/play"
DELETE_KEYBIND = "anchor/keybinds/delete"
SAVE_KEYBIND = "anchor/keybinds/save"
SEARCH_KEYBIND = "anchor/keybinds/search"
SPLIT_KEYBIND = "anchor/keybinds/split"
MERGE_KEYBIND = "anchor/keybinds/merge"
ZOOM_IN_KEYBIND = "anchor/keybinds/zoom_in"
ZOOM_OUT_KEYBIND = "anchor/keybinds/zoom_out"
ZOOM_TO_SELECTION_KEYBIND = "anchor/keybinds/zoom_to_selection"
PAN_LEFT_KEYBIND = "anchor/keybinds/pan_left"
PAN_RIGHT_KEYBIND = "anchor/keybinds/pan_right"
UNDO_KEYBIND = "anchor/keybinds/undo"
REDO_KEYBIND = "anchor/keybinds/redo"
LOCKED = "anchor/locked"
CUDA = "anchor/cuda"
GITHUB_TOKEN = "anchor/github_token"
RESULTS_PER_PAGE = "anchor/results_per_page"
SPEC_DYNAMIC_RANGE = "anchor/spectrogram/dynamic_range"
SPEC_N_FFT = "anchor/spectrogram/n_fft"
SPEC_N_TIME_STEPS = "anchor/spectrogram/time_steps"
SPEC_WINDOW_SIZE = "anchor/spectrogram/window_size"
SPEC_PREEMPH = "anchor/spectrogram/preemphasis"
SPEC_MAX_FREQ = "anchor/spectrogram/max_frequency"
CLUSTER_TYPE = "anchor/clustering/cluster_type"
CLUSTERING_N_CLUSTERS = "anchor/clustering/n_clusters"
CLUSTERING_MIN_CLUSTER_SIZE = "anchor/clustering/min_cluster_size"
CLUSTERING_DISTANCE_THRESHOLD = "anchor/clustering/distance_threshold"
CLUSTERING_METRIC = "anchor/clustering/metric"
MANIFOLD_N_NEIGHBORS = "anchor/clustering/manifold/n_neighbors"
PITCH_MIN_F0 = "anchor/pitch/min_f0"
PITCH_MAX_F0 = "anchor/pitch/max_f0"
PITCH_FRAME_SHIFT = "anchor/pitch/frame_shift"
PITCH_FRAME_LENGTH = "anchor/pitch/frame_length"
PITCH_DELTA_PITCH = "anchor/pitch/delta_pitch"
PITCH_PENALTY_FACTOR = "anchor/pitch/penalty_factor"
def __init__(self, *args):
super(AnchorSettings, self).__init__()
self.mfa_theme = {
AnchorSettings.MAIN_TEXT_COLOR: "#EDDDD4",
AnchorSettings.SELECTED_TEXT_COLOR: "#EDDDD4",
AnchorSettings.ERROR_COLOR: "#C63623",
AnchorSettings.PRIMARY_BASE_COLOR: "#003566",
AnchorSettings.PRIMARY_LIGHT_COLOR: "#0E63B3",
AnchorSettings.PRIMARY_DARK_COLOR: "#001D3D",
AnchorSettings.PRIMARY_VERY_LIGHT_COLOR: "#7AB5E6",
AnchorSettings.PRIMARY_VERY_DARK_COLOR: "#000814",
AnchorSettings.ACCENT_BASE_COLOR: "#FFC300",
AnchorSettings.ACCENT_LIGHT_COLOR: "#FFD60A",
AnchorSettings.ACCENT_DARK_COLOR: "#E3930D",
AnchorSettings.ACCENT_VERY_LIGHT_COLOR: "#F2CD49",
AnchorSettings.ACCENT_VERY_DARK_COLOR: "#7A4E03",
}
self.praat_theme = {
AnchorSettings.MAIN_TEXT_COLOR: "#000000",
AnchorSettings.SELECTED_TEXT_COLOR: "#FFFFFF",
AnchorSettings.ERROR_COLOR: "#DC0806",
AnchorSettings.PRIMARY_BASE_COLOR: "#FFFFFF",
AnchorSettings.PRIMARY_LIGHT_COLOR: "#0078D7",
AnchorSettings.PRIMARY_DARK_COLOR: "#A0A0A0",
AnchorSettings.PRIMARY_VERY_LIGHT_COLOR: "#F0F0F0",
AnchorSettings.PRIMARY_VERY_DARK_COLOR: "#FFFFFF",
AnchorSettings.ACCENT_BASE_COLOR: "#000000",
AnchorSettings.ACCENT_LIGHT_COLOR: "#FAF205",
AnchorSettings.ACCENT_DARK_COLOR: "#000000",
AnchorSettings.ACCENT_VERY_LIGHT_COLOR: "#000000",
AnchorSettings.ACCENT_VERY_DARK_COLOR: "#000000",
}
self.default_values = {
AnchorSettings.CORPORA: [],
AnchorSettings.CURRENT_CORPUS: "",
AnchorSettings.DEFAULT_DIRECTORY: get_temporary_directory(),
AnchorSettings.AUTOSAVE: False,
AnchorSettings.AUTOLOAD: False,
AnchorSettings.VOLUME: 100,
AnchorSettings.AUDIO_DEVICE: None,
AnchorSettings.GEOMETRY: None,
AnchorSettings.WINDOW_STATE: None,
AnchorSettings.FONT: QtGui.QFont("Noto Sans", 12).toString(),
AnchorSettings.PLAY_KEYBIND: "Tab",
AnchorSettings.DELETE_KEYBIND: "Delete",
AnchorSettings.SAVE_KEYBIND: "Ctrl+S",
AnchorSettings.SEARCH_KEYBIND: "Ctrl+F",
AnchorSettings.SPLIT_KEYBIND: "Ctrl+D",
AnchorSettings.MERGE_KEYBIND: "Ctrl+M",
AnchorSettings.ZOOM_IN_KEYBIND: "Ctrl+I",
AnchorSettings.ZOOM_OUT_KEYBIND: "Ctrl+O",
AnchorSettings.ZOOM_TO_SELECTION_KEYBIND: "Ctrl+N",
AnchorSettings.PAN_LEFT_KEYBIND: "LeftArrow",
AnchorSettings.PAN_RIGHT_KEYBIND: "RightArrow",
AnchorSettings.UNDO_KEYBIND: "Ctrl+Z",
AnchorSettings.REDO_KEYBIND: "Ctrl+Shift+Z",
AnchorSettings.RESULTS_PER_PAGE: 100,
AnchorSettings.SPEC_DYNAMIC_RANGE: 50,
AnchorSettings.SPEC_N_FFT: 256,
AnchorSettings.SPEC_N_TIME_STEPS: 1000,
AnchorSettings.SPEC_MAX_FREQ: 5000,
AnchorSettings.SPEC_WINDOW_SIZE: 0.005,
AnchorSettings.SPEC_PREEMPH: 0.97,
AnchorSettings.CLUSTER_TYPE: "agglomerative",
AnchorSettings.CUDA: True,
AnchorSettings.CLUSTERING_N_CLUSTERS: 0,
AnchorSettings.CLUSTERING_MIN_CLUSTER_SIZE: 60,
AnchorSettings.CLUSTERING_DISTANCE_THRESHOLD: 0.0,
AnchorSettings.CLUSTERING_METRIC: "cosine",
AnchorSettings.MANIFOLD_N_NEIGHBORS: 10,
AnchorSettings.PITCH_MIN_F0: 50,
AnchorSettings.PITCH_MAX_F0: 600,
AnchorSettings.PITCH_FRAME_SHIFT: 10,
AnchorSettings.PITCH_FRAME_LENGTH: 25,
AnchorSettings.PITCH_PENALTY_FACTOR: 0.1,
AnchorSettings.PITCH_DELTA_PITCH: 0.005,
AnchorSettings.LOCKED: True,
AnchorSettings.UTTERANCES_VISIBLE: True,
AnchorSettings.DICTIONARY_VISIBLE: False,
AnchorSettings.OOV_VISIBLE: False,
AnchorSettings.SPEAKERS_VISIBLE: False,
AnchorSettings.LM_VISIBLE: False,
AnchorSettings.AM_VISIBLE: False,
AnchorSettings.TRANSCRIPTION_VISIBLE: False,
AnchorSettings.ALIGNMENT_VISIBLE: False,
AnchorSettings.DIARIZATION_VISIBLE: False,
}
self.default_values.update(self.mfa_theme)
self.border_radius = 5
self.text_padding = 2
self.border_width = 2
self.base_menu_button_width = 16
self.menu_button_width = self.base_menu_button_width + self.border_width * 2
self.sort_indicator_size = 20
self.sort_indicator_padding = 15
self.scroll_bar_height = 25
self.icon_size = 25
self.scroll_bar_border_radius = int(self.scroll_bar_height / 2) - 2
def value(self, arg__1: str, defaultValue: Optional[Any] = ..., t: object = ...) -> Any:
if arg__1 == AnchorSettings.FONT:
value = QtGui.QFont()
value.fromString(
super(AnchorSettings, self).value(arg__1, self.default_values[arg__1])
)
elif "color" in arg__1:
value = QtGui.QColor(
super(AnchorSettings, self).value(arg__1, self.default_values[arg__1])
)
elif "keybind" in arg__1:
value = QtGui.QKeySequence(
super(AnchorSettings, self).value(arg__1, self.default_values[arg__1])
)
elif "auto" in arg__1:
value = super(AnchorSettings, self).value(arg__1, self.default_values[arg__1], bool)
elif arg__1 in {
AnchorSettings.GEOMETRY,
AnchorSettings.WINDOW_STATE,
AnchorSettings.AUDIO_DEVICE,
}:
value = super(AnchorSettings, self).value(arg__1, self.default_values[arg__1])
else:
value = super(AnchorSettings, self).value(
arg__1,
self.default_values.get(arg__1, ""),
type=type(self.default_values.get(arg__1, "")),
)
return value
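The `value()` override above routes each settings key to a Qt type based on its name (`FONT` key, `"color"`, `"keybind"`, `"auto"` substrings, or a typed default lookup). As a hedged, Qt-free illustration of that routing — `dispatch_setting` is an invented helper, not an Anchor API — the same dispatch can be written against plain values:

```python
def dispatch_setting(key, raw):
    """Return a (kind, value) pair mirroring how AnchorSettings.value()
    picks a wrapper type from the settings key name."""
    if key.endswith("/font"):
        return ("font", raw)          # would build a QFont from the string
    if "color" in key:
        return ("color", raw)         # would build a QColor
    if "keybind" in key:
        return ("keysequence", raw)   # would build a QKeySequence
    if "auto" in key:
        return ("bool", bool(raw))    # autosave / autoload flags
    return ("plain", raw)             # falls through to the typed default
```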
@property
def temp_directory(self) -> pathlib.Path:
return get_temporary_directory()
@property
def font(self) -> QtGui.QFont:
font = self.value(AnchorSettings.FONT)
return font
@property
def big_font(self) -> QtGui.QFont:
font = self.value(AnchorSettings.FONT)
font.setPointSize(int(1.25 * font.pointSize()))
return font
@property
def small_font(self) -> QtGui.QFont:
font = self.value(AnchorSettings.FONT)
font.setPointSize(int(0.75 * font.pointSize()))
return font
@property
def title_font(self) -> QtGui.QFont:
font = self.value(AnchorSettings.FONT)
font.setPointSize(int(3 * font.pointSize()))
return font
def set_mfa_theme(self):
for k, v in self.mfa_theme.items():
self.setValue(k, v)
def set_praat_theme(self):
for k, v in self.praat_theme.items():
self.setValue(k, v)
@property
def plot_theme(self):
return {
"background_color": self.value(AnchorSettings.PRIMARY_VERY_DARK_COLOR),
"play_line_color": self.value(AnchorSettings.ERROR_COLOR),
"selected_range_color": self.value(AnchorSettings.PRIMARY_VERY_LIGHT_COLOR),
"selected_interval_color": self.value(AnchorSettings.PRIMARY_BASE_COLOR),
"hover_line_color": self.value(AnchorSettings.PRIMARY_VERY_LIGHT_COLOR),
"moving_line_color": self.value(AnchorSettings.ERROR_COLOR),
"break_line_color": self.value(AnchorSettings.ACCENT_LIGHT_COLOR),
"wave_line_color": self.value(AnchorSettings.MAIN_TEXT_COLOR),
"text_color": self.value(AnchorSettings.MAIN_TEXT_COLOR),
"selected_text_color": self.value(AnchorSettings.MAIN_TEXT_COLOR),
"axis_color": self.value(AnchorSettings.ACCENT_LIGHT_COLOR),
"interval_background_color": self.value(AnchorSettings.PRIMARY_DARK_COLOR),
}
@property
def error_color(self) -> QtGui.QColor:
return self.value(AnchorSettings.ERROR_COLOR)
@property
def selected_text_color(self) -> QtGui.QColor:
return self.value(AnchorSettings.SELECTED_TEXT_COLOR)
@property
def text_color(self) -> QtGui.QColor:
return self.value(AnchorSettings.MAIN_TEXT_COLOR)
@property
def primary_base_color(self) -> QtGui.QColor:
return self.value(AnchorSettings.PRIMARY_BASE_COLOR)
@property
def primary_light_color(self) -> QtGui.QColor:
return self.value(AnchorSettings.PRIMARY_LIGHT_COLOR)
@property
def primary_dark_color(self) -> QtGui.QColor:
return self.value(AnchorSettings.PRIMARY_DARK_COLOR)
@property
def primary_very_light_color(self) -> QtGui.QColor:
return self.value(AnchorSettings.PRIMARY_VERY_LIGHT_COLOR)
@property
def primary_very_dark_color(self) -> QtGui.QColor:
return self.value(AnchorSettings.PRIMARY_VERY_DARK_COLOR)
@property
def accent_base_color(self) -> QtGui.QColor:
return self.value(AnchorSettings.ACCENT_BASE_COLOR)
@property
def accent_light_color(self) -> QtGui.QColor:
return self.value(AnchorSettings.ACCENT_LIGHT_COLOR)
@property
def accent_dark_color(self) -> QtGui.QColor:
return self.value(AnchorSettings.ACCENT_DARK_COLOR)
@property
def accent_very_light_color(self) -> QtGui.QColor:
return self.value(AnchorSettings.ACCENT_VERY_LIGHT_COLOR)
@property
def accent_very_dark_color(self) -> QtGui.QColor:
return self.value(AnchorSettings.ACCENT_VERY_DARK_COLOR)
@property
def keyboard_style_sheet(self) -> str:
border_color = self.accent_base_color.name()
background_color = self.primary_light_color.name()
enabled_color = self.primary_very_dark_color.name()
enabled_background_color = self.accent_base_color.name()
enabled_border_color = self.primary_very_dark_color.name()
scroll_bar_style = self.scroll_bar_style_sheet
return f"""
QWidget{{
background-color: {background_color};
}}
QMenu{{
border-width: {self.border_width}px;
border-style: solid;
border-color: {border_color};
border-radius: {self.border_radius}px;
}}
QScrollArea {{
border: none;
}}
QPushButton {{
background-color: {enabled_background_color};
color: {enabled_color};
padding: {self.text_padding}px;
border-width: {self.border_width}px;
border-style: solid;
border-color: {enabled_border_color};
border-radius: {self.border_radius}px;
}}
{scroll_bar_style}
"""
@property
def search_box_style_sheet(self) -> str:
line_edit_color = self.primary_very_dark_color.name()
line_edit_background_color = self.accent_base_color.name()
error_color = self.error_color.name()
return f"""
QWidget{{
background-color: {line_edit_background_color};
}}
QLineEdit[error="true"] {{
color: {error_color};
font-weight: bold;
}}
QMenu {{ menu-scrollable: 1; }}
QLineEdit QToolButton {{
background-color: {line_edit_background_color};
color: {line_edit_color};
margin: {self.border_width}px;
}}
QToolButton#clear_search_field, QToolButton#clear_field, QToolButton#clear_new_speaker_field,
QToolButton#regex_search_field, QToolButton#word_search_field {{
background-color: none;
border: none;
padding: {self.border_width}px;
}}
"""
@property
def combo_box_style_sheet(self) -> str:
enabled_color = self.primary_very_dark_color.name()
enabled_background_color = self.accent_base_color.name()
hover_background_color = self.primary_very_light_color.name()
return f"""
QComboBox {{
color: {enabled_color};
background-color: {enabled_background_color};
selection-background-color: none;
}}
QComboBox QAbstractItemView {{
color: {enabled_color};
background-color: {enabled_background_color};
selection-background-color: {hover_background_color};
}}
"""
@property
def interval_style_sheet(self):
text_edit_color = self.text_color.name()
scroll_bar_background_color = self.primary_dark_color.name()
scroll_bar_handle_color = self.accent_light_color.name()
scroll_bar_border_color = self.primary_dark_color.name()
border_color = self.primary_light_color.name()
scroll_bar_height = 10
scroll_bar_border_radius = int(scroll_bar_height / 2) - 2
return f"""
QTextEdit {{
background-color: rgba(0, 0, 0, 0%);
color: {text_edit_color};
border: 5px inset {border_color};
}}
QScrollBar {{
color: {scroll_bar_handle_color};
background: {scroll_bar_background_color};
border: {self.border_width}px solid {scroll_bar_border_color};
}}
QScrollBar:vertical {{
width: {scroll_bar_height}px;
border: 2px solid {scroll_bar_border_color};
border-radius: {scroll_bar_border_radius + 2}px;
margin-top: {scroll_bar_height}px;
margin-bottom: {scroll_bar_height}px;
}}
QScrollBar:up-arrow:vertical {{
image: url(:caret-up.svg);
height: {scroll_bar_height}px;
width: {scroll_bar_height}px;
}}
QScrollBar:up-arrow:vertical:pressed {{
image: url(:checked/caret-up.svg);
}}
QScrollBar:down-arrow:vertical {{
image: url(:caret-down.svg);
height: {scroll_bar_height}px;
width: {scroll_bar_height}px;
}}
QScrollBar:down-arrow:vertical:pressed {{
image: url(:checked/caret-down.svg);
}}
QScrollBar::handle:vertical {{
background: {scroll_bar_handle_color};
min-height: {scroll_bar_height}px;
border: 2px solid {scroll_bar_border_color};
border-radius: {scroll_bar_border_radius}px;
}}
QScrollBar::add-page, QScrollBar::sub-page {{
background: none;
height: {scroll_bar_height}px;
width: {scroll_bar_height}px;
padding: 0px;
margin: 0px;
}}
QScrollBar::add-line:vertical {{
background: none;
subcontrol-position: bottom;
subcontrol-origin: margin;
height: {scroll_bar_height}px;
}}
QScrollBar::sub-line:vertical {{
background: none;
subcontrol-position: top;
subcontrol-origin: margin;
height: {scroll_bar_height}px;
}}"""
@property
def style_sheet(self):
background_color = self.primary_base_color.name()
selection_color = self.primary_light_color.name()
error_color = self.error_color.name()
text_edit_color = self.text_color.name()
text_edit_background_color = self.primary_very_dark_color.name()
enabled_color = self.primary_very_dark_color.name()
enabled_background_color = self.accent_base_color.name()
enabled_border_color = self.primary_very_dark_color.name()
active_color = self.accent_light_color.name()
active_background_color = self.primary_dark_color.name()
active_border_color = self.primary_dark_color.name()
hover_text_color = self.accent_very_light_color.name()
hover_background_color = self.primary_very_light_color.name()
hover_border_color = self.accent_very_light_color.name()
disabled_text_color = self.primary_dark_color.name()
disabled_background_color = self.accent_very_dark_color.name()
disabled_border_color = self.primary_very_dark_color.name()
table_text_color = self.primary_very_dark_color.name()
table_odd_color = self.primary_very_light_color.name()
table_even_color = self.accent_very_light_color.name()
table_header_background_color = self.primary_light_color.name()
table_header_color = self.text_color.name()
main_widget_border_color = self.primary_very_light_color.name()
main_widget_background_color = self.primary_very_dark_color.name()
menu_background_color = self.accent_base_color.name()
menu_text_color = self.primary_very_dark_color.name()
line_edit_color = self.primary_very_dark_color.name()
line_edit_background_color = self.accent_base_color.name()
sheet = f"""
QWidget{{
background-color: {background_color};
}}
QProgressBar {{
border: {self.border_width}px solid {enabled_border_color};
color: {text_edit_color};
background-color: {text_edit_background_color};
text-align: center;
}}
QProgressBar::chunk {{
background-color: {background_color};
}}
QMainWindow, QDialog{{
background-color: {background_color};
}}
QMenuBar {{
background-color: {menu_background_color};
spacing: 2px;
}}
QMenuBar::item {{
padding: 4px 4px;
color: {menu_text_color};
background-color: {menu_background_color};
}}
QMenuBar::item:selected {{
color: {hover_text_color};
background-color: {hover_background_color};
}}
QMenuBar::item:disabled {{
color: {disabled_text_color};
background-color: {menu_background_color};
}}
ButtonWidget {{
background-color: {table_header_background_color};
}}
QDockWidget {{
background-color: {active_background_color};
color: {active_color};
titlebar-close-icon: url(:checked/times.svg);
titlebar-normal-icon: url(:checked/external-link.svg);
}}
QDockWidget::title {{
text-align: center;
}}
QMainWindow::separator {{
background: {background_color};
width: 10px; /* when vertical */
height: 10px; /* when horizontal */
}}
QMainWindow::separator:hover {{
background: {enabled_background_color};
}}
#utteranceListWidget, #dictionaryWidget, #speakerWidget {{
background-color: {text_edit_background_color};
border: {self.border_width}px solid {main_widget_border_color};
color: {main_widget_border_color};
padding: 0px;
padding-top: 20px;
margin-top: 0ex; /* leave space at the top for the title */
}}
UtteranceDetailWidget {{
padding: 0px;
border: none;
margin: 0;
}}
InformationWidget {{
background-color: {main_widget_background_color};
border: {self.border_width}px solid {main_widget_border_color};
border-top-right-radius: {self.border_radius}px;
border-bottom-right-radius: {self.border_radius}px;
}}
QTabWidget::pane, SearchWidget, DictionaryWidget, SpeakerWidget {{
border-bottom-right-radius: {self.border_radius}px;
}}
QCheckBox::indicator{{
width: {int(self.scroll_bar_height/2)}px;
height: {int(self.scroll_bar_height/2)}px;
}}
QLineEdit, QSpinBox, QCheckBox::indicator, #pronunciation_field {{
color: {line_edit_color};
background-color: {line_edit_background_color};
selection-background-color: {selection_color};
border: {self.border_width}px solid {enabled_border_color};
}}
QCheckBox::indicator:checked {{
image: url(:check.svg);
}}
QTextEdit{{
color: {text_edit_color};
background-color: {text_edit_background_color};
selection-background-color: {selection_color};
border: {self.border_width}px solid {enabled_border_color};
}}
QGroupBox::title {{
color: {text_edit_color};
background-color: transparent;
subcontrol-origin: margin;
subcontrol-position: top center; /* position at the top center */
padding-top: 5px;
}}
QLabel {{
color: {text_edit_color};
}}
QStatusBar {{
background-color: {text_edit_background_color};
color: {text_edit_color};
}}
WarningLabel {{
color: {error_color};
}}
QCheckBox {{
color: {text_edit_color};
}}
QTabWidget::pane, SearchWidget, DictionaryWidget, SpeakerWidget {{ /* The tab widget frame */
background-color: {main_widget_background_color};
}}
QTabWidget::pane {{ /* The tab widget frame */
border: {self.border_width}px solid {main_widget_border_color};
border-top-color: {enabled_color};
background-color: {main_widget_background_color};
}}
QTabBar::tab {{
color: {menu_text_color};
background-color: {menu_background_color};
border-color: {enabled_border_color};
border: {self.border_width / 2}px solid {enabled_border_color};
border-top-color: {main_widget_border_color};
border-bottom: none;
min-width: 8ex;
padding: {self.text_padding}px;
margin: 0px;
}}
QTabBar::scroller{{
width: {2 * self.scroll_bar_height}px;
}}
QTabBar QToolButton {{
border-radius: 0px;
}}
QTabBar QToolButton::right-arrow {{
image: url(:caret-right.svg);
height: {self.scroll_bar_height}px;
width: {self.scroll_bar_height}px;
}}
QTabBar QToolButton::right-arrow:pressed {{
image: url(:checked/caret-right.svg);
}}
QTabBar QToolButton::right-arrow:disabled {{
image: url(:disabled/caret-right.svg);
}}
QTabBar QToolButton::left-arrow {{
image: url(:caret-left.svg);
height: {self.scroll_bar_height}px;
width: {self.scroll_bar_height}px;
}}
QTabBar QToolButton::left-arrow:pressed {{
image: url(:checked/caret-left.svg);
}}
QTabBar QToolButton::left-arrow:disabled {{
image: url(:disabled/caret-left.svg);
}}
QTabBar::tab-bar {{
color: {menu_text_color};
background-color: {menu_background_color};
border: {self.border_width}px solid {main_widget_border_color};
}}
QTabBar::tab:hover {{
color: {hover_text_color};
background-color: {hover_background_color};
border-color: {hover_border_color};
border-bottom-color: {active_border_color};
}}
QTabBar::tab:selected {{
color: {active_color};
background-color: {active_background_color};
margin-left: -{self.border_width}px;
margin-right: -{self.border_width}px;
border-color: {active_border_color};
border-bottom-color: {active_border_color};
}}
QTabBar::tab:first {{
border-left-width: {self.border_width}px;
margin-left: 0px;
}}
QTabBar::tab:last {{
border-right-width: {self.border_width}px;
margin-right: 0px;
}}
QToolBar {{
spacing: 3px;
}}
#toolBar {{
background: rgb(0, 8, 20);
}}
QToolBar::separator {{
margin-left: 5px;
margin-right: 5px;
width: 3px;
height: 3px;
background: {selection_color};
}}
QPushButton, QToolButton {{
background-color: {enabled_background_color};
color: {enabled_color};
padding: {self.text_padding}px;
border-width: {self.border_width}px;
border-style: solid;
border-color: {enabled_border_color};
border-radius: {self.border_radius}px;
}}
QToolButton[popupMode="1"] {{ /* only for MenuButtonPopup */
padding-right: {self.menu_button_width}px; /* make way for the popup button */
}}
QToolButton::menu-button {{
border: {self.border_width}px solid {enabled_border_color};
border-top-right-radius: {self.border_radius}px;
border-bottom-right-radius: {self.border_radius}px;
width: {self.base_menu_button_width}px;
}}
QMenuBar QToolButton{{
padding: 0px;
}}
QComboBox {{
color: {enabled_color};
background-color: {enabled_background_color};
selection-background-color: none;
}}
QComboBox QAbstractItemView {{
color: {enabled_color};
background-color: {enabled_background_color};
selection-background-color: {hover_background_color};
}}
QToolButton:checked {{
color: {active_color};
background-color: {active_background_color};
border-color: {active_border_color};
}}
QPushButton:disabled, QToolButton:disabled {{
color: {disabled_text_color};
background-color: {disabled_background_color};
border-color: {disabled_border_color};
}}
QToolButton#cancel_load:disabled {{
color: {disabled_text_color};
background-color: {disabled_background_color};
border-color: {disabled_border_color};
}}
QPushButton:hover, QToolButton:hover, QToolButton:focus, QToolButton:pressed, ToolButton:hover {{
color: {hover_text_color};
background-color: {hover_background_color};
border-color: {hover_border_color};
}}
QToolButton#cancel_load:focus:hover {{
color: {hover_text_color};
background-color: {hover_background_color};
border-color: {hover_border_color};
}}
QGraphicsView {{
border: {self.border_width}px solid {main_widget_border_color};
}}
QSlider::handle:horizontal {{
height: 10px;
background: {enabled_background_color};
border: {self.border_width / 2}px solid {enabled_border_color};
margin: 0 -2px; /* expand outside the groove */
}}
QSlider::handle:horizontal:hover {{
height: 10px;
background: {hover_background_color};
border-color: {hover_border_color};
margin: 0 -2px; /* expand outside the groove */
}}
QTableWidget, QTableView, QTreeView, QTreeWidget {{
alternate-background-color: {table_even_color};
selection-background-color: {selection_color};
selection-color: {text_edit_color};
background-color: {table_odd_color};
color: {table_text_color};
border: 4px solid {enabled_color};
}}
QTreeView QLabel, QTreeWidget QLabel{{
color: {table_text_color};
}}
QTreeView::branch:has-children:closed{{
alternate-background-color: {table_even_color};
selection-background-color: {selection_color};
border-image: none;
image: url(:chevron-right.svg);
}}
QTreeView::branch:has-children:!closed{{
alternate-background-color: {table_even_color};
selection-background-color: {selection_color};
border-image: none;
image: url(:chevron-down.svg);
}}
QScrollArea {{
border: 4px solid {enabled_color};
background: {background_color};
}}
QHeaderView::up-arrow {{
subcontrol-origin: padding;
subcontrol-position: center right;
image: url(:hover/sort-up.svg);
height: {self.sort_indicator_size}px;
width: {self.sort_indicator_size}px;
}}
QHeaderView::down-arrow {{
image: url(:hover/sort-down.svg);
subcontrol-origin: padding;
subcontrol-position: center right;
height: {self.sort_indicator_size}px;
width: {self.sort_indicator_size}px;
}}
QTableView QTableCornerButton::section {{
background-color: {enabled_background_color};
}}
QHeaderView {{
background-color: {table_odd_color};
}}
QHeaderView::section {{
color: {table_header_color};
background-color: {table_header_background_color};
padding-left: {self.text_padding+3}px;
}}
QHeaderView::section:horizontal {{
padding-right: {self.sort_indicator_padding}px;
}}
"""
sheet += self.scroll_bar_style_sheet
sheet += self.menu_style_sheet
sheet += self.tool_tip_style_sheet
return sheet
@property
def tool_tip_style_sheet(self):
background_color = self.accent_base_color.name()
text_color = self.primary_very_dark_color.name()
return f"""
QToolTip {{
background-color: {background_color};
color: {text_color};
}}
"""
@property
def menu_style_sheet(self):
menu_background_color = self.accent_base_color.name()
menu_text_color = self.primary_very_dark_color.name()
disabled_text_color = self.primary_dark_color.name()
disabled_background_color = self.accent_very_dark_color.name()
enabled_color = self.primary_very_dark_color.name()
selection_color = self.primary_light_color.name()
return f"""
QMenu {{
margin: 2px;
background-color: {menu_background_color};
color: {menu_text_color};
menu-scrollable: 1;
}}
QMenu::item {{
padding: 2px 25px 2px 20px;
border: {self.border_width / 2}px solid transparent;
background-color: {menu_background_color};
color: {menu_text_color};
}}
QMenu::item:disabled {{
border: none;
background-color: {disabled_background_color};
color: {disabled_text_color};
}}
QMenu::item:!disabled:selected {{
border-color: {enabled_color};
background-color: {selection_color};
}}"""
@property
def scroll_bar_style_sheet(self):
scroll_bar_background_color = self.primary_dark_color.name()
scroll_bar_handle_color = self.accent_light_color.name()
scroll_bar_border_color = self.primary_dark_color.name()
return f"""
QScrollBar {{
color: {scroll_bar_handle_color};
background: {scroll_bar_background_color};
border: {self.border_width}px solid {scroll_bar_border_color};
}}
QScrollBar#time_scroll_bar {{
color: {scroll_bar_handle_color};
background: {scroll_bar_background_color};
border: {self.border_width}px solid {scroll_bar_border_color};
margin-left: 0px;
margin-right: 0px;
}}
QScrollBar:horizontal {{
height: {self.scroll_bar_height}px;
border: 2px solid {scroll_bar_border_color};
border-radius: {self.scroll_bar_border_radius + 2}px;
margin-left: {self.scroll_bar_height}px;
margin-right: {self.scroll_bar_height}px;
}}
QScrollBar:vertical {{
width: {self.scroll_bar_height}px;
border: 2px solid {scroll_bar_border_color};
border-radius: {self.scroll_bar_border_radius + 2}px;
margin-top: {self.scroll_bar_height}px;
margin-bottom: {self.scroll_bar_height}px;
}}
QScrollBar:left-arrow:horizontal {{
image: url(:caret-left.svg);
height: {self.scroll_bar_height}px;
width: {self.scroll_bar_height}px;
}}
QScrollBar:left-arrow:horizontal:pressed {{
image: url(:checked/caret-left.svg);
}}
QScrollBar:right-arrow:horizontal {{
image: url(:caret-right.svg);
height: {self.scroll_bar_height}px;
width: {self.scroll_bar_height}px;
}}
QScrollBar:right-arrow:horizontal:pressed {{
image: url(:checked/caret-right.svg);
}}
QScrollBar:up-arrow:vertical {{
image: url(:caret-up.svg);
height: {self.scroll_bar_height}px;
width: {self.scroll_bar_height}px;
}}
QScrollBar:up-arrow:vertical:pressed {{
image: url(:checked/caret-up.svg);
}}
QScrollBar:down-arrow:vertical {{
image: url(:caret-down.svg);
height: {self.scroll_bar_height}px;
width: {self.scroll_bar_height}px;
}}
QScrollBar:down-arrow:vertical:pressed {{
image: url(:checked/caret-down.svg);
}}
QScrollBar::handle:horizontal {{
background: {scroll_bar_handle_color};
min-width: {self.scroll_bar_height}px;
border: 2px solid {scroll_bar_border_color};
border-radius: {self.scroll_bar_border_radius}px;
}}
QScrollBar::handle:vertical {{
background: {scroll_bar_handle_color};
min-height: {self.scroll_bar_height}px;
border: 2px solid {scroll_bar_border_color};
border-radius: {self.scroll_bar_border_radius}px;
}}
QToolButton#pan_left_button, QToolButton#pan_right_button {{
color: none;
background-color: none;
border: none;
margin: 0px;
padding: 0px;
}}
QScrollBar::add-page, QScrollBar::sub-page {{
background: none;
height: {self.scroll_bar_height}px;
width: {self.scroll_bar_height}px;
padding: 0px;
margin: 0px;
}}
QScrollBar::add-line:horizontal {{
background: none;
subcontrol-position: right;
subcontrol-origin: margin;
width: {self.scroll_bar_height}px;
}}
QScrollBar::sub-line:horizontal {{
background: none;
subcontrol-position: left;
subcontrol-origin: margin;
width: {self.scroll_bar_height}px;
}}
QScrollBar::add-line:vertical {{
background: none;
subcontrol-position: bottom;
subcontrol-origin: margin;
height: {self.scroll_bar_height}px;
}}
QScrollBar::sub-line:vertical {{
background: none;
subcontrol-position: top;
subcontrol-origin: margin;
height: {self.scroll_bar_height}px;
}}
QScrollBar#time_scroll_bar::add-line:horizontal {{
background: none;
subcontrol-position: none;
subcontrol-origin: none;
width: 0px;
}}
QScrollBar#time_scroll_bar::sub-line:horizontal {{
background: none;
subcontrol-position: none;
subcontrol-origin: none;
width: 0px;
}}
"""
from __future__ import annotations
import os
import subprocess
import sys
import traceback
import sqlalchemy
from montreal_forced_aligner.command_line.utils import check_databases
from montreal_forced_aligner.config import GLOBAL_CONFIG, MfaConfiguration, get_temporary_directory
from montreal_forced_aligner.corpus import AcousticCorpus
from montreal_forced_aligner.data import WorkflowType
from montreal_forced_aligner.diarization.speaker_diarizer import FOUND_SPEECHBRAIN
from montreal_forced_aligner.exceptions import DatabaseError
from montreal_forced_aligner.g2p.generator import PyniniValidator
from montreal_forced_aligner.models import (
AcousticModel,
IvectorExtractorModel,
LanguageModel,
ModelManager,
)
from montreal_forced_aligner.utils import DatasetType, inspect_database
from PySide6 import QtCore, QtGui, QtMultimedia, QtWidgets
import anchor.db
from anchor import workers
from anchor.models import (
CorpusModel,
CorpusSelectionModel,
DictionaryTableModel,
MergeSpeakerModel,
OovModel,
SpeakerModel,
)
from anchor.settings import AnchorSettings
from anchor.ui_error_dialog import Ui_ErrorDialog
from anchor.ui_main_window import Ui_MainWindow
from anchor.ui_preferences import Ui_PreferencesDialog
from anchor.widgets import MediaPlayer, ProgressWidget
class MainWindow(QtWidgets.QMainWindow):
configUpdated = QtCore.Signal(object)
g2pLoaded = QtCore.Signal(object)
ivectorExtractorLoaded = QtCore.Signal(object)
acousticModelLoaded = QtCore.Signal(object)
languageModelLoaded = QtCore.Signal(object)
newSpeaker = QtCore.Signal(object)
def __init__(self, debug):
super().__init__()
QtCore.QCoreApplication.setOrganizationName("Montreal Corpus Tools")
QtCore.QCoreApplication.setApplicationName("Anchor")
fonts = [
"GentiumPlus",
"CharisSIL",
"NotoSans-Black",
"NotoSans-Bold",
"NotoSans-BoldItalic",
"NotoSans-Italic",
"NotoSans-Light",
"NotoSans-Medium",
"NotoSans-MediumItalic",
"NotoSans-Regular",
"NotoSans-Thin",
"NotoSerif-Black",
"NotoSerif-Bold",
"NotoSerif-BoldItalic",
"NotoSerif-Italic",
"NotoSerif-Light",
"NotoSerif-Medium",
"NotoSerif-MediumItalic",
"NotoSerif-Regular",
"NotoSerif-Thin",
]
for font in fonts:
QtGui.QFontDatabase.addApplicationFont(f":fonts/{font}.ttf")
if not os.path.exists(os.path.join(get_temporary_directory(), "Anchor")):
os.makedirs(os.path.join(get_temporary_directory(), "Anchor"))
self._db_engine = None
self.initialize_database()
self.ui = Ui_MainWindow()
self.ui.setupUi(self)
self.debug = debug
self.status_indicator = ProgressWidget()
self.status_indicator.setFixedWidth(self.ui.statusbar.height())
self.ui.statusbar.addPermanentWidget(self.status_indicator, 0)
self.settings = AnchorSettings()
self.sync_models()
if self.settings.contains(AnchorSettings.GEOMETRY):
self.restoreGeometry(self.settings.value(AnchorSettings.GEOMETRY))
self.restoreState(self.settings.value(AnchorSettings.WINDOW_STATE))
self.tabifyDockWidget(self.ui.utteranceDockWidget, self.ui.dictionaryDockWidget)
self.tabifyDockWidget(self.ui.utteranceDockWidget, self.ui.oovDockWidget)
self.tabifyDockWidget(self.ui.utteranceDockWidget, self.ui.alignmentDockWidget)
self.tabifyDockWidget(self.ui.utteranceDockWidget, self.ui.transcriptionDockWidget)
self.tabifyDockWidget(self.ui.utteranceDockWidget, self.ui.acousticModelDockWidget)
self.tabifyDockWidget(self.ui.utteranceDockWidget, self.ui.languageModelDockWidget)
self.tabifyDockWidget(self.ui.utteranceDockWidget, self.ui.speakerDockWidget)
self.tabifyDockWidget(self.ui.utteranceDockWidget, self.ui.diarizationDockWidget)
self.media_player = MediaPlayer(self)
self.media_player.playbackStateChanged.connect(self.handleAudioState)
self.media_player.audioReady.connect(self.file_loaded)
self.media_player.timeChanged.connect(
self.ui.utteranceDetailWidget.plot_widget.audio_plot.update_play_line
)
if self.settings.contains(AnchorSettings.VOLUME):
self.media_player.set_volume(self.settings.value(AnchorSettings.VOLUME))
self.ui.loadingScreen.setVisible(False)
self.ui.titleScreen.setVisible(True)
self.thread_pool = QtCore.QThreadPool()
self.single_runners = {
"Calculating OOVs": None,
"Comparing speakers": None,
"Counting utterance results": None,
"Finding duplicates": None,
"Querying utterances": None,
"Querying speakers": None,
"Querying dictionary": None,
"Querying OOVs": None,
"Counting OOV results": None,
"Clustering speaker utterances": None,
"Generating speaker MDS": None,
"Loading speaker ivectors": None,
"Merging speakers": None,
}
self.sequential_runners = {
"Exporting files": [],
"Changing speakers": [],
}
self.quick_runners = {
"Generating waveform",
"Generating scaled waveform",
"Generating spectrogram",
"Generating pitch track",
"Creating speaker tiers",
}
self.current_query_worker = None
self.current_count_worker = None
self.current_speaker_comparison_worker = None
self.current_speaker_merge_worker = None
self.download_worker = workers.DownloadWorker(self)
self.download_worker.signals.error.connect(self.handle_error)
self.download_worker.signals.result.connect(self.finalize_download)
self.dictionary_worker = workers.ImportDictionaryWorker(self)
self.dictionary_worker.signals.error.connect(self.handle_error)
self.dictionary_worker.signals.result.connect(self.finalize_load_dictionary)
self.oov_worker = workers.OovCountWorker(self)
self.oov_worker.signals.error.connect(self.handle_error)
self.oov_worker.signals.result.connect(self.finalize_oov_count)
self.acoustic_model_worker = workers.ImportAcousticModelWorker(self)
self.acoustic_model_worker.signals.error.connect(self.handle_error)
self.acoustic_model_worker.signals.result.connect(self.finalize_load_acoustic_model)
self.language_model_worker = workers.ImportLanguageModelWorker(self)
self.language_model_worker.signals.error.connect(self.handle_error)
self.language_model_worker.signals.result.connect(self.finalize_load_language_model)
self.g2p_model_worker = workers.ImportG2PModelWorker(self)
self.g2p_model_worker.signals.error.connect(self.handle_error)
self.g2p_model_worker.signals.result.connect(self.finalize_load_g2p_model)
self.ivector_extractor_worker = workers.ImportIvectorExtractorWorker(self)
self.ivector_extractor_worker.signals.error.connect(self.handle_error)
self.ivector_extractor_worker.signals.result.connect(self.finalize_load_ivector_extractor)
self.transcription_worker = workers.TranscriptionWorker(self)
self.transcription_worker.signals.error.connect(self.handle_error)
self.transcription_worker.signals.finished.connect(self.finalize_adding_intervals)
self.validation_worker = workers.ValidationWorker(self)
self.validation_worker.signals.error.connect(self.handle_error)
self.validation_worker.signals.finished.connect(self.finalize_adding_intervals)
self.alignment_worker = workers.AlignmentWorker(self)
self.alignment_worker.signals.error.connect(self.handle_error)
self.alignment_worker.signals.finished.connect(self.finalize_adding_intervals)
self.speaker_diarization_worker = workers.ComputeIvectorWorker(self)
self.speaker_diarization_worker.signals.error.connect(self.handle_error)
self.speaker_diarization_worker.signals.finished.connect(self.finalize_adding_ivectors)
self.cluster_utterances_worker = workers.ClusterUtterancesWorker(self)
self.cluster_utterances_worker.signals.error.connect(self.handle_error)
self.cluster_utterances_worker.signals.finished.connect(
self.finalize_clustering_utterances
)
self.classify_speakers_worker = workers.ClassifySpeakersWorker(self)
self.classify_speakers_worker.signals.error.connect(self.handle_error)
self.classify_speakers_worker.signals.finished.connect(self.finalize_clustering_utterances)
self.alignment_utterance_worker = workers.AlignUtteranceWorker(self)
self.alignment_utterance_worker.signals.error.connect(self.handle_error)
self.alignment_utterance_worker.signals.result.connect(self.finalize_utterance_alignment)
self.segment_utterance_worker = workers.SegmentUtteranceWorker(self)
self.segment_utterance_worker.signals.error.connect(self.handle_error)
self.segment_utterance_worker.signals.result.connect(self.finalize_segmentation)
self.alignment_evaluation_worker = workers.AlignmentEvaluationWorker(self)
self.alignment_evaluation_worker.signals.error.connect(self.handle_error)
self.alignment_evaluation_worker.signals.finished.connect(self.finalize_adding_intervals)
self.corpus_worker = workers.ImportCorpusWorker(self)
self.corpus_worker.signals.result.connect(self.finalize_load_corpus)
self.corpus_worker.signals.error.connect(self.handle_error)
self.load_reference_worker = workers.LoadReferenceWorker(self)
self.load_reference_worker.signals.error.connect(self.handle_error)
self.load_reference_worker.signals.finished.connect(self.finalize_adding_intervals)
self.undo_group = QtGui.QUndoGroup(self)
self.corpus_undo_stack = QtGui.QUndoStack(self)
self.dictionary_undo_stack = QtGui.QUndoStack(self)
self.set_up_models()
if self.settings.value(AnchorSettings.AUTOLOAD):
self.load_corpus()
else:
self.set_application_state("unloaded")
self.load_ivector_extractor()
# self.load_dictionary()
self.load_acoustic_model()
self.load_language_model()
self.load_g2p()
self.create_actions()
self.refresh_settings()
self.refresh_shortcuts()
self.refresh_style_sheets()
self.refresh_fonts()
def finalize_download(self):
self.refresh_model_actions()
@property
def db_string(self):
return f"postgresql+psycopg2://@/anchor?host={GLOBAL_CONFIG.database_socket}"
@property
def db_engine(self) -> sqlalchemy.engine.Engine:
"""Database engine"""
if self._db_engine is None:
self._db_engine = sqlalchemy.create_engine(self.db_string)
return self._db_engine
def initialize_database(self):
try:
check_databases(db_name="anchor")
return
except Exception:
try:
subprocess.check_call(
[
"createdb",
f"--host={GLOBAL_CONFIG.database_socket}",
"anchor",
],
stderr=subprocess.DEVNULL,
stdout=subprocess.DEVNULL,
)
except Exception:
raise DatabaseError(
f"There was an error connecting to the {GLOBAL_CONFIG.current_profile_name} MFA database server. "
"Please ensure the server is initialized (mfa server init) or running (mfa server start)"
)
from anchor.db import AnchorSqlBase
AnchorSqlBase.metadata.create_all(self.db_engine)
def sync_models(self):
self.model_manager = ModelManager(token=self.settings.value(AnchorSettings.GITHUB_TOKEN))
self.model_manager.refresh_remote()
with sqlalchemy.orm.Session(self.db_engine) as session:
for model_type, db_class in anchor.db.MODEL_TYPES.items():
if model_type not in self.model_manager.local_models:
continue
current_models = {x.name: x for x in session.query(db_class)}
for m in self.model_manager.local_models[model_type]:
if m not in current_models:
current_models[m] = db_class(name=m, path=m, available_locally=True)
session.add(current_models[m])
else:
current_models[m].available_locally = True
for m in self.model_manager.remote_models[model_type]:
if m not in current_models:
current_models[m] = db_class(name=m, path=m, available_locally=False)
session.add(current_models[m])
session.flush()
session.commit()
def file_loaded(self, ready):
if ready:
self.ui.playAct.setEnabled(ready)
else:
self.ui.playAct.setEnabled(False)
self.ui.playAct.setChecked(False)
def corpus_changed(self, clean):
self.ui.revertChangesAct.setEnabled(not clean)
self.ui.saveChangesAct.setEnabled(not clean)
def handle_changes_synced(self, changed: bool):
self.ui.revertChangesAct.setEnabled(False)
self.undo_group.setActiveStack(self.corpus_undo_stack)
self.corpus_undo_stack.setClean()
self.ui.saveChangesAct.setEnabled(False)
def execute_runnable(self, function, finished_function, extra_args=None):
if self.corpus_model.corpus is None:
return
delayed_start = False
if function == "Replacing query":
worker = workers.ReplaceAllWorker(self.corpus_model.session, *extra_args)
worker.signals.result.connect(finished_function)
elif function == "Changing speakers":
worker = workers.ChangeSpeakerWorker(self.corpus_model.session, *extra_args)
worker.signals.result.connect(finished_function)
elif function == "Recalculate speaker ivector":
worker = workers.RecalculateSpeakerWorker(self.corpus_model.session, **extra_args[0])
worker.signals.result.connect(finished_function)
elif function == "Loading speakers":
worker = workers.LoadSpeakersWorker(self.corpus_model.session, *extra_args)
worker.signals.result.connect(finished_function)
elif function == "Loading files":
worker = workers.LoadFilesWorker(self.corpus_model.session, *extra_args)
worker.signals.result.connect(finished_function)
elif function == "Loading dictionaries":
worker = workers.LoadDictionariesWorker(self.corpus_model.session, *extra_args)
worker.signals.result.connect(finished_function)
elif function == "Calculating OOVs":
self.calculate_oovs()
return
elif function == "Finding duplicates":
self.set_application_state("loading")
worker = workers.DuplicateFilesWorker(self.corpus_model.session, **extra_args[0])
worker.signals.result.connect(finished_function)
elif function == "Counting utterance results":
worker = workers.QueryUtterancesWorker(self.corpus_model.session, **extra_args[0])
worker.signals.result.connect(finished_function)
elif function == "Comparing speakers":
worker = workers.SpeakerComparisonWorker(self.corpus_model.session, **extra_args[0])
worker.signals.result.connect(finished_function)
elif function == "Merging speakers":
self.set_application_state("loading")
worker = workers.MergeSpeakersWorker(self.corpus_model.session, **extra_args[0])
worker.signals.finished.connect(finished_function)
elif function == "Querying utterances":
worker = workers.QueryUtterancesWorker(self.corpus_model.session, **extra_args[0])
worker.signals.result.connect(finished_function)
elif function == "Querying speakers":
worker = workers.QuerySpeakersWorker(self.corpus_model.session, **extra_args[0])
worker.signals.result.connect(finished_function)
elif function == "Creating speaker tiers":
worker = workers.FileUtterancesWorker(self.corpus_model.session, *extra_args)
worker.signals.result.connect(finished_function)
elif function == "Counting dictionary results":
worker = workers.QueryDictionaryWorker(self.corpus_model.session, **extra_args[0])
worker.signals.result.connect(finished_function)
elif function == "Querying dictionary":
worker = workers.QueryDictionaryWorker(self.corpus_model.session, **extra_args[0])
worker.signals.result.connect(finished_function)
elif function == "Querying OOVs":
worker = workers.QueryOovWorker(self.corpus_model.session, **extra_args[0])
worker.signals.result.connect(finished_function)
elif function == "Counting OOV results":
worker = workers.QueryOovWorker(self.corpus_model.session, **extra_args[0])
worker.signals.result.connect(finished_function)
elif function == "Getting closest speakers":
worker = workers.ClosestSpeakersWorker(self.corpus_model.session, **extra_args[0])
worker.signals.result.connect(finished_function)
elif function == "Clustering speaker utterances":
worker = workers.ClusterSpeakerUtterancesWorker(
self.corpus_model.session, **extra_args[0]
)
worker.signals.result.connect(finished_function)
elif function == "Loading speaker ivectors":
worker = workers.CalculateSpeakerIvectorsWorker(
self.corpus_model.session, **extra_args[0]
)
worker.signals.result.connect(finished_function)
elif function == "Generating speaker MDS":
worker = workers.SpeakerMdsWorker(self.corpus_model.session, **extra_args[0])
worker.signals.result.connect(finished_function)
elif function == "Exporting dictionary":
self.set_application_state("loading")
self.ui.loadingScreen.setCorpusName("Saving dictionary changes...")
worker = workers.ExportLexiconWorker(self.corpus_model.session, **extra_args[0])
worker.signals.result.connect(finished_function)
elif function == "Exporting files":
self.set_application_state("loading")
self.ui.loadingScreen.setCorpusName("Saving changes...")
worker = workers.ExportFilesWorker(self.corpus_model.session, *extra_args)
worker.signals.result.connect(finished_function)
else:
if extra_args is None:
extra_args = []
worker = workers.Worker(function, *extra_args)
worker.signals.result.connect(finished_function)
if function in self.single_runners:
if self.single_runners[function] is not None:
self.single_runners[function].cancel()
self.single_runners[function] = worker
if function in self.sequential_runners:
delayed_start = len(self.sequential_runners[function]) > 0
if delayed_start:
self.sequential_runners[function][-1].signals.finished.connect(
lambda: self.thread_pool.start(worker)
)
self.sequential_runners[function].append(worker)
worker.signals.finished.connect(self.update_sequential_runners)
worker.signals.error.connect(self.handle_error)
# Execute
if not delayed_start:
self.thread_pool.start(worker)
if function not in self.quick_runners:
if isinstance(function, str):
worker.name = function
self.status_indicator.add_worker(worker)
def update_sequential_runners(self):
sender = self.sender()
for k, v in self.sequential_runners.items():
self.sequential_runners[k] = [x for x in v if x.signals != sender]
def set_up_models(self):
self.dictionary_model = DictionaryTableModel(self)
self.oov_model = OovModel(self)
self.corpus_model = CorpusModel(self)
self.speaker_model = SpeakerModel(self)
self.merge_speaker_model = MergeSpeakerModel(self)
self.corpus_model.databaseSynced.connect(self.handle_changes_synced)
self.corpus_model.runFunction.connect(self.execute_runnable)
self.merge_speaker_model.runFunction.connect(self.execute_runnable)
self.merge_speaker_model.mergeAllFinished.connect(self.save_completed)
self.corpus_model.lockCorpus.connect(self.anchor_lock_corpus)
self.corpus_model.statusUpdate.connect(self.update_status_message)
self.corpus_model.unlockCorpus.connect(self.anchor_unlock_corpus)
self.corpus_model.corpusLoaded.connect(self.fully_loaded)
self.corpus_model.filesSaved.connect(self.save_completed)
self.corpus_model.requestFileView.connect(self.open_search_file)
self.speaker_model.runFunction.connect(self.execute_runnable)
self.dictionary_model.runFunction.connect(self.execute_runnable)
self.oov_model.runFunction.connect(self.execute_runnable)
self.dictionary_model.set_corpus_model(self.corpus_model)
self.corpus_model.set_dictionary_model(self.dictionary_model)
self.speaker_model.set_corpus_model(self.corpus_model)
self.merge_speaker_model.set_corpus_model(self.corpus_model)
self.oov_model.set_corpus_model(self.corpus_model)
self.selection_model = CorpusSelectionModel(self.corpus_model)
self.ui.utteranceListWidget.set_models(
self.corpus_model, self.selection_model, self.speaker_model
)
self.ui.utteranceDetailWidget.set_models(
self.corpus_model, self.selection_model, self.dictionary_model
)
self.ui.speakerWidget.set_models(
self.corpus_model, self.selection_model, self.speaker_model
)
self.ui.transcriptionWidget.set_models(self.corpus_model, self.dictionary_model)
self.ui.alignmentWidget.set_models(self.corpus_model)
self.ui.acousticModelWidget.set_models(self.corpus_model)
self.ui.languageModelWidget.set_models(self.corpus_model)
self.ui.dictionaryWidget.set_models(self.dictionary_model)
self.ui.diarizationWidget.set_models(self.merge_speaker_model)
self.ui.oovWidget.set_models(self.oov_model)
self.selection_model.selectionChanged.connect(self.change_utterance)
self.selection_model.fileChanged.connect(self.change_file)
self.selection_model.fileAboutToChange.connect(self.check_media_stop)
self.media_player.set_corpus_models(self.corpus_model, self.selection_model)
self.corpus_model.addCommand.connect(self.update_corpus_stack)
self.g2p_model = None
self.acoustic_model = None
self.language_model = None
self.ivector_extractor = None
def check_media_stop(self):
if self.ui.playAct.isChecked():
self.ui.playAct.setChecked(False)
self.media_player.stop()
def update_status_message(self, message: str):
self.ui.statusbar.showMessage(message)
def anchor_lock_corpus(self):
self.ui.lockEditAct.setChecked(True)
self.ui.lockEditAct.setEnabled(False)
def anchor_unlock_corpus(self):
self.ui.lockEditAct.setChecked(False)
self.ui.lockEditAct.setEnabled(True)
def update_corpus_stack(self, command):
self.undo_group.setActiveStack(self.corpus_undo_stack)
self.corpus_undo_stack.push(command)
def update_dictionary_stack(self, command):
self.undo_group.setActiveStack(self.dictionary_undo_stack)
self.dictionary_undo_stack.push(command)
def delete_utterances(self):
utts = self.selection_model.selectedUtterances()
self.corpus_model.delete_utterances(utts)
def split_utterances(self):
utts = self.selection_model.selectedUtterances()
self.corpus_model.split_utterances(utts)
def merge_utterances(self):
utts = self.selection_model.selectedUtterances()
self.corpus_model.merge_utterances(utts)
def check_actions(self):
self.ui.lockEditAct.setEnabled(True)
self.ui.transcribeCorpusAct.setEnabled(True)
self.ui.alignCorpusAct.setEnabled(True)
self.ui.loadReferenceAlignmentsAct.setEnabled(True)
self.ui.closeLanguageModelAct.setEnabled(True)
self.ui.closeDictionaryAct.setEnabled(True)
self.ui.evaluateAlignmentsAct.setEnabled(True)
self.ui.closeAcousticModelAct.setEnabled(True)
self.ui.closeG2PAct.setEnabled(True)
self.ui.saveDictionaryAct.setEnabled(True)
self.ui.closeIvectorExtractorAct.setEnabled(True)
if self.corpus_model.language_model is None:
self.ui.closeLanguageModelAct.setEnabled(False)
if self.corpus_model.g2p_model is None:
self.ui.closeG2PAct.setEnabled(False)
if self.corpus_model.acoustic_model is None:
self.ui.alignCorpusAct.setEnabled(False)
self.ui.transcribeCorpusAct.setEnabled(False)
self.ui.loadReferenceAlignmentsAct.setEnabled(False)
self.ui.evaluateAlignmentsAct.setEnabled(False)
self.ui.closeAcousticModelAct.setEnabled(False)
if self.corpus_model.corpus is None:
self.ui.alignCorpusAct.setEnabled(False)
self.ui.transcribeCorpusAct.setEnabled(False)
self.ui.loadReferenceAlignmentsAct.setEnabled(False)
self.ui.evaluateAlignmentsAct.setEnabled(False)
self.ui.find_duplicates_action.setEnabled(False)
self.ui.cluster_utterances_action.setEnabled(False)
self.ui.classify_speakers_action.setEnabled(False)
else:
if (
not self.corpus_model.corpus.has_alignments()
or not self.corpus_model.corpus.has_alignments(WorkflowType.reference)
):
self.ui.evaluateAlignmentsAct.setEnabled(False)
# if self.corpus_model.corpus.alignment_done:
# self.ui.alignCorpusAct.setEnabled(False)
if self.corpus_model.corpus.transcription_done:
self.ui.transcribeCorpusAct.setEnabled(False)
self.ui.find_duplicates_action.setEnabled(self.corpus_model.corpus.has_any_ivectors())
self.ui.cluster_utterances_action.setEnabled(
self.corpus_model.corpus.has_any_ivectors()
)
self.ui.classify_speakers_action.setEnabled(
self.corpus_model.corpus.has_any_ivectors()
)
if self.corpus_model.corpus is None or inspect_database(
self.corpus_model.corpus.data_source_identifier
) not in {
DatasetType.ACOUSTIC_CORPUS_WITH_DICTIONARY,
DatasetType.TEXT_CORPUS_WITH_DICTIONARY,
}:
self.ui.alignCorpusAct.setEnabled(False)
self.ui.transcribeCorpusAct.setEnabled(False)
self.ui.evaluateAlignmentsAct.setEnabled(False)
self.ui.closeDictionaryAct.setEnabled(False)
# self.ui.saveDictionaryAct.setEnabled(False)
def change_file(self):
self.ui.playAct.setChecked(False)
if self.selection_model.current_file is None:
self.ui.playAct.setEnabled(False)
self.ui.panLeftAct.setEnabled(False)
self.ui.panRightAct.setEnabled(False)
self.ui.zoomInAct.setEnabled(False)
self.ui.zoomToSelectionAct.setEnabled(False)
self.ui.zoomOutAct.setEnabled(False)
else:
self.ui.playAct.setEnabled(True)
self.ui.panLeftAct.setEnabled(True)
self.ui.panRightAct.setEnabled(True)
self.ui.zoomInAct.setEnabled(True)
self.ui.zoomToSelectionAct.setEnabled(True)
self.ui.zoomOutAct.setEnabled(True)
if hasattr(self, "channel_select"):
with QtCore.QSignalBlocker(self.channel_select):
self.channel_select.clear()
self.channel_select.addItem("Channel 0", userData=0)
self.channel_select.setEnabled(False)
if (
self.selection_model.current_file is not None
and self.selection_model.current_file.num_channels > 1
):
self.channel_select.addItem("Channel 1", userData=1)
self.channel_select.setEnabled(True)
def change_utterance(self):
selection = self.selection_model.selectedUtterances()
self.ui.deleteUtterancesAct.setEnabled(False)
self.ui.splitUtterancesAct.setEnabled(False)
self.ui.alignUtteranceAct.setEnabled(False)
if not selection:
return
self.ui.splitUtterancesAct.setEnabled(True)
if len(selection) == 1:
self.ui.mergeUtterancesAct.setEnabled(False)
if self.corpus_model.acoustic_model is not None and self.corpus_model.has_dictionary:
self.ui.alignUtteranceAct.setEnabled(True)
else:
self.ui.mergeUtterancesAct.setEnabled(True)
# self.change_speaker_act.widget.setCurrentSpeaker(current_utterance.speaker)
self.ui.deleteUtterancesAct.setEnabled(True)
def closeEvent(self, a0: QtGui.QCloseEvent) -> None:
self.ui.utteranceDetailWidget.plot_widget.clean_up_for_close()
self.settings.setValue(
AnchorSettings.UTTERANCES_VISIBLE, self.ui.utteranceDockWidget.isVisible()
)
self.settings.setValue(
AnchorSettings.DICTIONARY_VISIBLE, self.ui.dictionaryDockWidget.isVisible()
)
self.settings.setValue(
# Was AnchorSettings.DICTIONARY_VISIBLE, which overwrote the dictionary dock
# setting saved just above; OOV_VISIBLE is assumed to be the intended key.
AnchorSettings.OOV_VISIBLE, self.ui.oovDockWidget.isVisible()
)
self.settings.setValue(
AnchorSettings.SPEAKERS_VISIBLE, self.ui.speakerDockWidget.isVisible()
)
self.settings.setValue(
AnchorSettings.LM_VISIBLE, self.ui.languageModelDockWidget.isVisible()
)
self.settings.setValue(
AnchorSettings.AM_VISIBLE, self.ui.acousticModelDockWidget.isVisible()
)
self.settings.setValue(
AnchorSettings.TRANSCRIPTION_VISIBLE, self.ui.transcriptionDockWidget.isVisible()
)
self.settings.setValue(
AnchorSettings.ALIGNMENT_VISIBLE, self.ui.alignmentDockWidget.isVisible()
)
self.settings.setValue(
AnchorSettings.DIARIZATION_VISIBLE, self.ui.diarizationDockWidget.isVisible()
)
self.set_application_state("loading")
self.ui.loadingScreen.setExiting()
self.close_timer = QtCore.QTimer()
self.close_timer.timeout.connect(lambda: self._actual_close(a0))
self.close_timer.start(1000)
def _actual_close(self, a0):
if self.thread_pool.activeThreadCount() > 0:
return
self.settings.setValue(AnchorSettings.GEOMETRY, self.saveGeometry())
self.settings.setValue(AnchorSettings.WINDOW_STATE, self.saveState())
self.settings.sync()
if self.corpus_model.session is not None:
sqlalchemy.orm.close_all_sessions()
a0.accept()
def create_actions(self):
w = QtWidgets.QWidget(self)
w.setSizePolicy(
QtWidgets.QSizePolicy.Policy.Expanding, QtWidgets.QSizePolicy.Policy.Expanding
)
self.ui.toolBar.insertWidget(self.ui.toolBar.actions()[0], w)
self.ui.toolBar.setSizePolicy(
QtWidgets.QSizePolicy.Policy.Expanding, QtWidgets.QSizePolicy.Policy.Expanding
)
self.ui.toolBar.addWidget(w)
self.ui.toolBar.setAttribute(QtCore.Qt.WidgetAttribute.WA_AlwaysShowToolTips, True)
self.ui.lockEditAct.setEnabled(True)
self.ui.lockEditAct.setChecked(bool(self.settings.value(AnchorSettings.LOCKED, False)))
self.ui.lockEditAct.toggled.connect(self.corpus_model.lock_edits)
self.ui.loadCorpusAct.triggered.connect(self.change_corpus)
self.ui.reloadCorpusAct.triggered.connect(self.reload_corpus)
self.ui.closeCurrentCorpusAct.triggered.connect(self.close_corpus)
self.ui.cancelCorpusLoadAct.triggered.connect(self.cancel_corpus_load)
self.ui.changeTemporaryDirectoryAct.triggered.connect(self.change_temp_dir)
self.ui.openPreferencesAct.triggered.connect(self.open_options)
self.ui.loadAcousticModelAct.triggered.connect(self.change_acoustic_model)
self.ui.loadLanguageModelAct.triggered.connect(self.change_language_model)
self.ui.loadIvectorExtractorAct.triggered.connect(self.change_ivector_extractor)
self.ui.loadDictionaryAct.triggered.connect(self.change_dictionary)
self.ui.saveDictionaryAct.triggered.connect(self.save_dictionary)
self.ui.loadG2PModelAct.triggered.connect(self.change_g2p)
self.ui.loadReferenceAlignmentsAct.triggered.connect(self.load_reference_alignments)
self.ui.loadingScreen.tool_bar.addAction(self.ui.cancelCorpusLoadAct)
self.ui.utteranceDetailWidget.pan_left_button.setDefaultAction(self.ui.panLeftAct)
self.ui.utteranceDetailWidget.pan_right_button.setDefaultAction(self.ui.panRightAct)
self.ui.playAct.triggered.connect(self.play_audio)
self.media_player.playbackStateChanged.connect(self.update_play_act)
self.ui.find_duplicates_action.triggered.connect(self.find_duplicates)
self.ui.cluster_utterances_action.triggered.connect(self.begin_cluster_utterances)
self.ui.classify_speakers_action.triggered.connect(self.begin_classify_speakers)
self.selection_model.selectionAudioChanged.connect(self.enable_zoom)
self.ui.zoomInAct.triggered.connect(self.selection_model.zoom_in)
self.ui.zoomToSelectionAct.triggered.connect(self.selection_model.zoom_to_selection)
self.ui.zoomOutAct.triggered.connect(self.selection_model.zoom_out)
self.ui.panLeftAct.triggered.connect(self.ui.utteranceDetailWidget.pan_left)
self.ui.panRightAct.triggered.connect(self.ui.utteranceDetailWidget.pan_right)
self.ui.mergeUtterancesAct.triggered.connect(self.merge_utterances)
self.ui.splitUtterancesAct.triggered.connect(self.split_utterances)
self.ui.searchAct.triggered.connect(self.open_search)
self.ui.dictionaryWidget.table.searchRequested.connect(self.open_search)
self.ui.oovWidget.table.searchRequested.connect(self.open_search)
self.ui.diarizationWidget.table.searchRequested.connect(self.open_search_speaker)
self.ui.speakerWidget.table.searchRequested.connect(self.open_search_speaker)
self.ui.oovWidget.table.g2pRequested.connect(self.dictionary_model.add_word)
self.dictionary_model.requestLookup.connect(self.open_dictionary)
self.ui.deleteUtterancesAct.triggered.connect(self.delete_utterances)
self.ui.lockEditAct.toggled.connect(self.toggle_lock)
self.ui.exportFilesAct.setEnabled(True)
self.ui.exportFilesAct.triggered.connect(self.export_files)
self.ui.showAllSpeakersAct.triggered.connect(
self.ui.utteranceDetailWidget.plot_widget.update_show_speakers
)
self.ui.muteAct.triggered.connect(self.update_mute_status)
self.volume_slider = QtWidgets.QSlider(QtCore.Qt.Orientation.Horizontal, self)
self.volume_slider.setMaximum(100)
self.volume_slider.setMinimum(0)
self.volume_slider.setMaximumWidth(100)
self.volume_slider.setValue(self.media_player.volume())
self.volume_slider.valueChanged.connect(self.ui.changeVolumeAct.trigger)
self.channel_select = QtWidgets.QComboBox(self)
self.channel_select.addItem("Channel 0")
self.ui.toolBar.addWidget(self.volume_slider)
self.ui.toolBar.addWidget(self.channel_select)
self.channel_select.currentIndexChanged.connect(self.selection_model.set_current_channel)
self.ui.changeVolumeAct.triggered.connect(self.media_player.set_volume)
self.ui.addSpeakerAct.triggered.connect(self.add_new_speaker)
self.ui.speakerWidget.tool_bar.addAction(self.ui.addSpeakerAct)
self.ui.transcribeCorpusAct.triggered.connect(self.begin_transcription)
self.ui.transcriptionWidget.button.setDefaultAction(self.ui.transcribeCorpusAct)
self.ui.utteranceListWidget.oov_button.setDefaultAction(self.ui.oovsOnlyAct)
self.ui.alignmentWidget.button.setDefaultAction(self.ui.alignCorpusAct)
self.ui.alignCorpusAct.triggered.connect(self.begin_alignment)
self.ui.diarizationWidget.refresh_ivectors_action.triggered.connect(
self.begin_speaker_diarization
)
self.ui.alignUtteranceAct.triggered.connect(self.begin_utterance_alignment)
self.ui.segmentUtteranceAct.triggered.connect(self.begin_utterance_segmentation)
self.ui.evaluateAlignmentsAct.triggered.connect(self.begin_alignment_evaluation)
self.ui.selectMappingFileAct.triggered.connect(self.change_custom_mapping)
self.undo_act = self.undo_group.createUndoAction(self, "Undo")
self.undo_act.setIcon(QtGui.QIcon(":undo.svg"))
self.redo_act = self.undo_group.createRedoAction(self, "Redo")
self.redo_act.setIcon(QtGui.QIcon(":redo.svg"))
self.ui.menuEdit.addAction(self.undo_act)
self.ui.menuEdit.addAction(self.redo_act)
self.undo_group.setActiveStack(self.corpus_undo_stack)
self.corpus_model.undoRequested.connect(self.undo_act.trigger)
self.corpus_model.redoRequested.connect(self.redo_act.trigger)
self.corpus_model.playRequested.connect(self.ui.playAct.trigger)
self.corpus_undo_stack.cleanChanged.connect(self.corpus_changed)
self.ui.lockEditAct.toggled.connect(self.undo_act.setDisabled)
self.ui.lockEditAct.toggled.connect(self.redo_act.setDisabled)
self.ui.menuWindow.addAction(self.ui.utteranceDockWidget.toggleViewAction())
self.ui.menuWindow.addAction(self.ui.dictionaryDockWidget.toggleViewAction())
self.ui.menuWindow.addAction(self.ui.oovDockWidget.toggleViewAction())
self.ui.menuWindow.addAction(self.ui.speakerDockWidget.toggleViewAction())
self.ui.menuWindow.addAction(self.ui.acousticModelDockWidget.toggleViewAction())
self.ui.menuWindow.addAction(self.ui.languageModelDockWidget.toggleViewAction())
self.ui.menuWindow.addAction(self.ui.alignmentDockWidget.toggleViewAction())
self.ui.menuWindow.addAction(self.ui.transcriptionDockWidget.toggleViewAction())
self.ui.menuWindow.addAction(self.ui.diarizationDockWidget.toggleViewAction())
self.ui.getHelpAct.triggered.connect(self.open_help)
self.ui.reportBugAct.triggered.connect(self.report_bug)
self.acoustic_action_group = QtGui.QActionGroup(self)
self.acoustic_action_group.setExclusive(True)
self.g2p_action_group = QtGui.QActionGroup(self)
self.g2p_action_group.setExclusive(True)
self.dictionary_action_group = QtGui.QActionGroup(self)
self.dictionary_action_group.setExclusive(True)
self.language_model_action_group = QtGui.QActionGroup(self)
self.language_model_action_group.setExclusive(True)
self.ivector_action_group = QtGui.QActionGroup(self)
self.ivector_action_group.setExclusive(True)
self.ui.ivectorExtractorMenu.setEnabled(False)
self.ui.closeIvectorExtractorAct.setEnabled(False)
self.refresh_corpus_history()
self.refresh_model_actions()
    def update_play_act(self, state):
        # Keep the play action's checked state in sync with the media player
        self.ui.playAct.setChecked(
            state == QtMultimedia.QMediaPlayer.PlaybackState.PlayingState
        )
def find_duplicates(self):
if not self.corpus_model.corpus.has_any_ivectors():
return
self.execute_runnable(
"Finding duplicates",
self.finish_finding_duplicates,
[
{
"threshold": 0.05,
"working_directory": os.path.join(
self.corpus_model.corpus.output_directory, "speaker_diarization"
),
}
],
)
def finish_finding_duplicates(self, results):
self.set_application_state("loaded")
if not results:
return
duplicate_count, duplicate_path = results
self.update_status_message(
f"Found {duplicate_count} duplicate files, see {duplicate_path}."
)
def refresh_model_actions(self):
        """Rebuild the model download and selection menus from the local model database."""
self.ui.menuDownload_acoustic_model.clear()
self.ui.menuDownload_G2P_model.clear()
self.ui.menuDownload_language_model.clear()
self.ui.menuDownload_dictionary.clear()
self.ui.menuDownload_ivector_extractor.clear()
with sqlalchemy.orm.Session(self.db_engine) as session:
for (m,) in (
session.query(anchor.db.AcousticModel.name)
.filter_by(available_locally=False)
.order_by(anchor.db.AcousticModel.name)
):
a = QtGui.QAction(m, parent=self)
a.triggered.connect(self.download_acoustic_model)
self.ui.menuDownload_acoustic_model.addAction(a)
for (m,) in (
session.query(anchor.db.LanguageModel.name)
.filter_by(available_locally=False)
.order_by(anchor.db.LanguageModel.name)
):
a = QtGui.QAction(m, parent=self)
a.triggered.connect(self.download_language_model)
self.ui.menuDownload_language_model.addAction(a)
for (m,) in (
session.query(anchor.db.G2PModel.name)
.filter_by(available_locally=False)
.order_by(anchor.db.G2PModel.name)
):
a = QtGui.QAction(m, parent=self)
a.triggered.connect(self.download_g2p_model)
self.ui.menuDownload_G2P_model.addAction(a)
for (m,) in (
session.query(anchor.db.Dictionary.name)
.filter_by(available_locally=False)
.order_by(anchor.db.Dictionary.name)
):
a = QtGui.QAction(m, parent=self)
a.triggered.connect(self.download_dictionary)
self.ui.menuDownload_dictionary.addAction(a)
for (m,) in (
session.query(anchor.db.IvectorExtractor.name)
.filter_by(available_locally=False)
.order_by(anchor.db.IvectorExtractor.name)
):
a = QtGui.QAction(m, parent=self)
a.triggered.connect(self.download_ivector_extractor)
self.ui.menuDownload_ivector_extractor.addAction(a)
current_corpus = (
session.query(anchor.db.AnchorCorpus)
.options(
sqlalchemy.orm.joinedload(anchor.db.AnchorCorpus.acoustic_model),
sqlalchemy.orm.joinedload(anchor.db.AnchorCorpus.language_model),
sqlalchemy.orm.joinedload(anchor.db.AnchorCorpus.dictionary),
sqlalchemy.orm.joinedload(anchor.db.AnchorCorpus.ivector_extractor),
sqlalchemy.orm.joinedload(anchor.db.AnchorCorpus.g2p_model),
sqlalchemy.orm.joinedload(anchor.db.AnchorCorpus.sad_model),
)
.filter(anchor.db.AnchorCorpus.current == True) # noqa
.first()
)
for m in session.query(anchor.db.AcousticModel).filter_by(available_locally=True):
a = QtGui.QAction(f"{m.path} [{m.name}]", parent=self)
a.setData(m.id)
a.setCheckable(True)
if (
current_corpus is not None
and current_corpus.acoustic_model is not None
and current_corpus.acoustic_model == m
):
a.setChecked(True)
a.triggered.connect(self.change_acoustic_model)
self.acoustic_action_group.addAction(a)
self.ui.acousticModelMenu.addAction(a)
for m in session.query(anchor.db.Dictionary).filter_by(available_locally=True):
a = QtGui.QAction(text=f"{m.path} [{m.name}]", parent=self)
                a.setData(m.id)
                a.setCheckable(True)
if (
current_corpus is not None
and current_corpus.dictionary is not None
and current_corpus.dictionary == m
):
a.setChecked(True)
a.triggered.connect(self.change_dictionary)
self.dictionary_action_group.addAction(a)
self.ui.mfaDictionaryMenu.addAction(a)
for m in session.query(anchor.db.LanguageModel).filter_by(available_locally=True):
a = QtGui.QAction(text=f"{m.path} [{m.name}]", parent=self)
                a.setData(m.id)
                a.setCheckable(True)
if (
current_corpus is not None
and current_corpus.language_model is not None
and current_corpus.language_model == m
):
a.setChecked(True)
a.triggered.connect(self.change_language_model)
self.ui.languageModelMenu.addAction(a)
self.language_model_action_group.addAction(a)
for m in session.query(anchor.db.G2PModel).filter_by(available_locally=True):
a = QtGui.QAction(text=f"{m.path} [{m.name}]", parent=self)
                a.setData(m.id)
                a.setCheckable(True)
if (
current_corpus is not None
and current_corpus.g2p_model is not None
and current_corpus.g2p_model == m
):
a.setChecked(True)
a.triggered.connect(self.change_g2p)
self.ui.g2pMenu.addAction(a)
self.g2p_action_group.addAction(a)
if FOUND_SPEECHBRAIN:
m = (
session.query(anchor.db.IvectorExtractor)
.filter(anchor.db.IvectorExtractor.path == "speechbrain")
.first()
)
                if m is None:
                    # Keep a reference to the new row so its id is available below
                    m = anchor.db.IvectorExtractor(
                        name="speechbrain", path="speechbrain", available_locally=True
                    )
                    session.add(m)
                    session.commit()
                a = QtGui.QAction(text="speechbrain", parent=self)
                a.setData(m.id)
                a.setCheckable(True)
a.triggered.connect(self.change_ivector_extractor)
self.ui.ivectorExtractorMenu.addAction(a)
self.ivector_action_group.addAction(a)
for m in session.query(anchor.db.IvectorExtractor).filter(
anchor.db.IvectorExtractor.available_locally == True, # noqa
anchor.db.IvectorExtractor.name != "speechbrain",
):
a = QtGui.QAction(text=f"{m.path} [{m.name}]", parent=self)
                a.setData(m.id)
                a.setCheckable(True)
if (
current_corpus is not None
and current_corpus.ivector_extractor is not None
and current_corpus.ivector_extractor == m
):
a.setChecked(True)
a.triggered.connect(self.change_ivector_extractor)
self.ui.ivectorExtractorMenu.addAction(a)
self.ivector_action_group.addAction(a)
def toggle_lock(self, locked):
self.settings.setValue(AnchorSettings.LOCKED, locked)
def handleAudioState(self, state):
if state == QtMultimedia.QMediaPlayer.PlaybackState.StoppedState:
self.ui.playAct.setChecked(False)
    def update_mute_status(self, is_muted):
        if is_muted:
            self.previous_volume = self.media_player.volume()
            self.volume_slider.setValue(0)
        else:
            self.volume_slider.setValue(self.previous_volume)
        self.media_player.setMuted(is_muted)
def change_corpus(self):
corpus_name = self.sender().text()
with sqlalchemy.orm.Session(self.db_engine) as session:
session.query(anchor.db.AnchorCorpus).update({anchor.db.AnchorCorpus.current: False})
session.flush()
m = (
session.query(anchor.db.AnchorCorpus)
.filter(anchor.db.AnchorCorpus.name == corpus_name)
.first()
)
if m is None:
corpus_directory = QtWidgets.QFileDialog.getExistingDirectory(
parent=self,
caption="Select a corpus directory",
dir=self.settings.value(AnchorSettings.DEFAULT_CORPUS_DIRECTORY),
)
if not corpus_directory or not os.path.exists(corpus_directory):
return
corpus_name = os.path.basename(corpus_directory)
self.settings.setValue(
AnchorSettings.DEFAULT_CORPUS_DIRECTORY, os.path.dirname(corpus_directory)
)
m = (
session.query(anchor.db.AnchorCorpus)
.filter(anchor.db.AnchorCorpus.name == corpus_name)
.first()
)
if m is None:
m = anchor.db.AnchorCorpus(
name=corpus_name, path=corpus_directory, current=True
)
session.add(m)
m.current = True
session.commit()
self.refresh_corpus_history()
self.load_corpus()
self.deleted_utts = []
def load_reference_alignments(self):
reference_directory = QtWidgets.QFileDialog.getExistingDirectory(
parent=self,
caption="Select a reference directory",
dir=self.settings.value(AnchorSettings.DEFAULT_CORPUS_DIRECTORY),
)
if not reference_directory or not os.path.exists(reference_directory):
return
with sqlalchemy.orm.Session(self.db_engine) as session:
c = session.query(anchor.db.AnchorCorpus).filter_by(current=True).first()
c.reference_directory = reference_directory
session.commit()
self.load_reference_worker.set_params(self.corpus_model.corpus, reference_directory)
self.load_reference_worker.start()
def close_corpus(self):
self.set_application_state("unloaded")
self.selection_model.clearSelection()
if self.corpus_model.corpus is not None:
self.corpus_model.session.close()
self.corpus_model.setCorpus(None)
self.settings.setValue(AnchorSettings.CURRENT_CORPUS, "")
def load_corpus(self):
self.selection_model.clearSelection()
self.corpus_model.setCorpus(None)
with sqlalchemy.orm.Session(self.db_engine) as session:
c = (
session.query(anchor.db.AnchorCorpus)
.options(sqlalchemy.orm.joinedload(anchor.db.AnchorCorpus.dictionary))
.filter_by(current=True)
.first()
)
if c is None:
self.set_application_state("unloaded")
return
self.set_application_state("loading")
self.ui.loadingScreen.setCorpusName(f"Loading {c.path}...")
dictionary_path = None
if c.dictionary is not None:
dictionary_path = c.dictionary.path
self.corpus_worker.set_params(c.path, dictionary_path)
self.corpus_worker.start()
def reload_corpus(self):
self.selection_model.clearSelection()
with sqlalchemy.orm.Session(self.db_engine) as session:
c = session.query(anchor.db.AnchorCorpus).filter_by(current=True).first()
corpus_path = c.path
dictionary_path = None
if c.dictionary is not None:
dictionary_path = c.dictionary.path
self.corpus_worker.set_params(corpus_path, dictionary_path, reset=True)
self.set_application_state("loading")
self.ui.loadingScreen.setCorpusName(f"Reloading {c.path}...")
self.corpus_worker.start()
def cancel_corpus_load(self):
self.ui.cancelCorpusLoadAct.setEnabled(False)
self.ui.loadingScreen.text_label.setText("Cancelling...")
self.corpus_worker.stop()
self.reload_corpus_worker.stop()
def save_completed(self):
self.set_application_state("loaded")
self.check_actions()
def fully_loaded(self):
if self.corpus is not None:
self.set_application_state("loaded")
else:
self.set_application_state("unloaded")
self.check_actions()
def finalize_load_corpus(self, corpus: AcousticCorpus):
        if corpus is None:
            self.set_application_state("unloaded")
            return
self.corpus = corpus
self.corpus_model.setCorpus(corpus)
with sqlalchemy.orm.Session(self.db_engine) as session:
c = session.query(anchor.db.AnchorCorpus).filter_by(current=True).first()
if c.custom_mapping_path:
self.dictionary_model.set_custom_mapping(c.custom_mapping_path)
def finalize_reload_corpus(self):
self.set_application_state("loaded")
self.check_actions()
def finalize_load_dictionary(self, corpus):
self.set_application_state("loaded")
self.corpus_model.setCorpus(corpus)
self.corpus_model.dictionaryChanged.emit()
self.check_actions()
self.ui.loadDictionaryAct.setEnabled(True)
def finalize_oov_count(self, corpus):
self.set_application_state("loaded")
self.corpus_model.setCorpus(corpus)
self.corpus_model.dictionaryChanged.emit()
self.dictionary_model.finish_refresh_word_counts()
self.check_actions()
self.ui.loadDictionaryAct.setEnabled(True)
def finalize_load_acoustic_model(self, model: AcousticModel):
self.acoustic_model = model
self.corpus_model.set_acoustic_model(model)
self.check_actions()
self.ui.acousticModelMenu.setEnabled(True)
def finalize_load_language_model(self, model: LanguageModel):
self.language_model = model
self.corpus_model.set_language_model(model)
self.check_actions()
self.ui.languageModelMenu.setEnabled(True)
def finalize_load_g2p_model(self, generator: PyniniValidator):
self.dictionary_model.set_g2p_generator(generator)
self.check_actions()
self.ui.g2pMenu.setEnabled(True)
def finalize_load_ivector_extractor(self, model: IvectorExtractorModel):
self.ivector_extractor = model
self.corpus_model.set_ivector_extractor(model)
self.check_actions()
self.ui.ivectorExtractorMenu.setEnabled(True)
def begin_alignment(self):
self.enableMfaActions(False)
self.alignment_worker.set_params(
self.corpus_model.corpus, self.acoustic_model, self.ui.alignmentWidget.parameters()
)
self.alignment_worker.start()
self.set_application_state("loading")
self.ui.loadingScreen.setCorpusName("Performing alignment...")
def begin_speaker_diarization(self, reset=False):
self.enableMfaActions(False)
self.speaker_diarization_worker.set_params(
            self.corpus_model.corpus, self.ivector_extractor, reset=reset
)
self.speaker_diarization_worker.start()
self.set_application_state("loading")
self.ui.loadingScreen.setCorpusName("Calculating ivectors...")
def begin_cluster_utterances(self):
self.enableMfaActions(False)
self.cluster_utterances_worker.set_params(self.corpus_model.corpus, self.ivector_extractor)
self.cluster_utterances_worker.start()
self.set_application_state("loading")
self.ui.loadingScreen.setCorpusName("Clustering speakers...")
def begin_classify_speakers(self):
self.enableMfaActions(False)
self.classify_speakers_worker.set_params(self.corpus_model.corpus, self.ivector_extractor)
self.classify_speakers_worker.start()
self.set_application_state("loading")
        self.ui.loadingScreen.setCorpusName("Classifying speakers...")
def begin_utterance_alignment(self):
self.enableMfaActions(False)
utterance = self.selection_model.currentUtterance()
self.alignment_utterance_worker.set_params(
self.corpus_model.corpus, self.acoustic_model, utterance.id
)
self.alignment_utterance_worker.start()
self.set_application_state("loading")
self.ui.loadingScreen.setCorpusName("Performing alignment...")
def begin_utterance_segmentation(self):
utterance = self.selection_model.currentUtterance()
self.segment_utterance_worker.set_params(
self.corpus_model.corpus, self.acoustic_model, utterance.id
)
self.segment_utterance_worker.start()
def begin_alignment_evaluation(self):
self.enableMfaActions(False)
with sqlalchemy.orm.Session(self.db_engine) as session:
c = session.query(anchor.db.AnchorCorpus).filter_by(current=True).first()
self.alignment_evaluation_worker.set_params(
self.corpus_model.corpus, self.acoustic_model, c.custom_mapping_path
)
self.alignment_evaluation_worker.start()
self.set_application_state("loading")
self.ui.loadingScreen.setCorpusName("Performing alignment evaluation...")
def begin_transcription(self):
self.enableMfaActions(False)
if self.corpus_model.language_model is not None:
self.transcription_worker.set_params(
self.corpus_model.corpus, self.acoustic_model, self.language_model
)
self.transcription_worker.start()
else:
self.validation_worker.set_params(
self.corpus_model.corpus,
self.acoustic_model,
self.ui.transcriptionWidget.frequent_words_edit.value(),
test_transcriptions=True,
)
self.validation_worker.start()
self.set_application_state("loading")
self.ui.loadingScreen.setCorpusName("Performing transcription...")
def enableMfaActions(self, enabled):
self.ui.alignCorpusAct.setEnabled(enabled)
self.ui.transcribeCorpusAct.setEnabled(enabled)
self.ui.evaluateAlignmentsAct.setEnabled(enabled)
def finalize_adding_ivectors(self):
self.corpus_model.corpus.inspect_database()
selection = self.selection_model.selection()
self.selection_model.clearSelection()
self.selection_model.select(
selection,
QtCore.QItemSelectionModel.SelectionFlag.SelectCurrent
| QtCore.QItemSelectionModel.SelectionFlag.Rows,
)
self.corpus_model.update_data()
self.check_actions()
self.ui.diarizationWidget.refresh()
self.set_application_state("loaded")
def finalize_clustering_utterances(self):
self.corpus_model.corpus.inspect_database()
self.corpus_model.corpus._num_speakers = None
self.corpus_model.refresh_speakers()
selection = self.selection_model.selection()
self.selection_model.clearSelection()
self.selection_model.select(
selection,
QtCore.QItemSelectionModel.SelectionFlag.SelectCurrent
| QtCore.QItemSelectionModel.SelectionFlag.Rows,
)
self.corpus_model.update_data()
self.check_actions()
self.set_application_state("loaded")
def finalize_adding_intervals(self):
self.corpus_model.corpus.inspect_database()
self.corpus_model.corpusLoaded.emit()
selection = self.selection_model.selection()
self.selection_model.clearSelection()
self.selection_model.select(
selection,
QtCore.QItemSelectionModel.SelectionFlag.SelectCurrent
| QtCore.QItemSelectionModel.SelectionFlag.Rows,
)
self.corpus_model.update_data()
self.check_actions()
self.set_application_state("loaded")
def finalize_utterance_alignment(self, utterance_id: int):
self.corpus_model.session.expire_all()
self.corpus_model.update_data()
self.check_actions()
self.set_application_state("loaded")
def finalize_segmentation(self, data):
original_utterance_id, split_data = data
self.corpus_model.split_vad_utterance(original_utterance_id, split_data)
self.corpus_model.session.expire_all()
self.corpus_model.update_data()
def finalize_saving(self):
self.check_actions()
def set_application_state(self, state):
        """Show or hide docks, toolbars, and actions for the "loading", "loaded", or "unloaded" state."""
self.selection_model.clearSelection()
if state == "loading":
self.ui.utteranceDockWidget.setVisible(False)
self.ui.dictionaryDockWidget.setVisible(False)
self.ui.oovDockWidget.setVisible(False)
self.ui.speakerDockWidget.setVisible(False)
self.ui.acousticModelDockWidget.setVisible(False)
self.ui.transcriptionDockWidget.setVisible(False)
self.ui.alignmentDockWidget.setVisible(False)
self.ui.languageModelDockWidget.setVisible(False)
self.ui.diarizationDockWidget.setVisible(False)
self.ui.toolBar.setVisible(False)
self.ui.utteranceDetailWidget.setVisible(False)
self.ui.titleScreen.setVisible(False)
self.ui.loadingScreen.setVisible(True)
self.ui.changeTemporaryDirectoryAct.setEnabled(False)
self.ui.openPreferencesAct.setEnabled(True)
self.ui.cancelCorpusLoadAct.setEnabled(True)
self.ui.loadCorpusAct.setEnabled(False)
self.ui.loadRecentCorpusMenu.setEnabled(False)
self.ui.closeCurrentCorpusAct.setEnabled(False)
self.ui.acousticModelMenu.setEnabled(False)
self.ui.languageModelMenu.setEnabled(False)
self.ui.ivectorExtractorMenu.setEnabled(False)
self.ui.g2pMenu.setEnabled(False)
self.ui.loadAcousticModelAct.setEnabled(False)
self.ui.loadDictionaryAct.setEnabled(False)
self.ui.loadG2PModelAct.setEnabled(False)
self.ui.loadLanguageModelAct.setEnabled(False)
self.ui.loadIvectorExtractorAct.setEnabled(False)
elif state == "loaded":
self.ui.loadingScreen.setVisible(False)
self.ui.titleScreen.setVisible(False)
self.ui.utteranceDockWidget.setVisible(
self.settings.value(AnchorSettings.UTTERANCES_VISIBLE)
)
self.ui.dictionaryDockWidget.setVisible(
self.settings.value(AnchorSettings.DICTIONARY_VISIBLE)
)
self.ui.oovDockWidget.setVisible(self.settings.value(AnchorSettings.OOV_VISIBLE))
self.ui.speakerDockWidget.setVisible(
self.settings.value(AnchorSettings.SPEAKERS_VISIBLE)
)
self.ui.languageModelDockWidget.setVisible(
self.settings.value(AnchorSettings.LM_VISIBLE)
)
self.ui.acousticModelDockWidget.setVisible(
self.settings.value(AnchorSettings.AM_VISIBLE)
)
self.ui.transcriptionDockWidget.setVisible(
self.settings.value(AnchorSettings.TRANSCRIPTION_VISIBLE)
)
self.ui.alignmentDockWidget.setVisible(
self.settings.value(AnchorSettings.ALIGNMENT_VISIBLE)
)
self.ui.diarizationDockWidget.setVisible(
self.settings.value(AnchorSettings.DIARIZATION_VISIBLE)
)
self.ui.toolBar.setVisible(True)
self.ui.utteranceDetailWidget.setVisible(True)
self.ui.changeTemporaryDirectoryAct.setEnabled(True)
self.ui.openPreferencesAct.setEnabled(True)
self.ui.cancelCorpusLoadAct.setEnabled(False)
self.ui.loadCorpusAct.setEnabled(True)
self.ui.loadRecentCorpusMenu.setEnabled(True)
self.ui.closeCurrentCorpusAct.setEnabled(True)
self.ui.loadAcousticModelAct.setEnabled(True)
            self.ui.loadDictionaryAct.setEnabled(True)
self.ui.loadG2PModelAct.setEnabled(True)
self.ui.loadLanguageModelAct.setEnabled(True)
self.ui.loadIvectorExtractorAct.setEnabled(True)
self.ui.acousticModelMenu.setEnabled(True)
self.ui.languageModelMenu.setEnabled(True)
self.ui.ivectorExtractorMenu.setEnabled(True)
self.ui.g2pMenu.setEnabled(True)
elif state == "unloaded":
self.ui.loadingScreen.setVisible(False)
self.ui.titleScreen.setVisible(True)
self.ui.toolBar.setVisible(False)
self.ui.utteranceDockWidget.setVisible(False)
self.ui.dictionaryDockWidget.setVisible(False)
self.ui.oovDockWidget.setVisible(False)
self.ui.acousticModelDockWidget.setVisible(False)
self.ui.transcriptionDockWidget.setVisible(False)
self.ui.alignmentDockWidget.setVisible(False)
self.ui.languageModelDockWidget.setVisible(False)
self.ui.speakerDockWidget.setVisible(False)
self.ui.utteranceDetailWidget.setVisible(False)
self.ui.diarizationDockWidget.setVisible(False)
self.ui.changeTemporaryDirectoryAct.setEnabled(True)
self.ui.openPreferencesAct.setEnabled(True)
self.ui.cancelCorpusLoadAct.setEnabled(False)
self.ui.loadCorpusAct.setEnabled(True)
self.ui.loadRecentCorpusMenu.setEnabled(True)
self.ui.closeCurrentCorpusAct.setEnabled(False)
self.ui.loadAcousticModelAct.setEnabled(True)
self.ui.loadDictionaryAct.setEnabled(True)
self.ui.loadG2PModelAct.setEnabled(True)
self.ui.loadLanguageModelAct.setEnabled(True)
self.ui.loadIvectorExtractorAct.setEnabled(True)
self.ui.acousticModelMenu.setEnabled(True)
self.ui.languageModelMenu.setEnabled(True)
self.ui.ivectorExtractorMenu.setEnabled(True)
self.ui.g2pMenu.setEnabled(True)
def enable_zoom(self):
if (
self.selection_model.selected_min_time is None
or self.selection_model.selected_max_time is None
or not self.selection_model.hasSelection()
):
self.ui.zoomToSelectionAct.setEnabled(False)
else:
self.ui.zoomToSelectionAct.setEnabled(True)
def play_audio(self):
if self.media_player.playbackState() in [
QtMultimedia.QMediaPlayer.PlaybackState.StoppedState,
QtMultimedia.QMediaPlayer.PlaybackState.PausedState,
]:
self.media_player.play()
elif (
self.media_player.playbackState()
== QtMultimedia.QMediaPlayer.PlaybackState.PlayingState
):
self.media_player.pause()
def report_bug(self):
QtGui.QDesktopServices.openUrl(
QtCore.QUrl("https://github.com/MontrealCorpusTools/Anchor-annotator/issues")
)
def open_help(self):
QtGui.QDesktopServices.openUrl(
QtCore.QUrl("https://anchor-annotator.readthedocs.io/en/latest/")
)
def add_new_speaker(self):
new_speaker = self.ui.speakerWidget.newSpeakerEdit.text()
if new_speaker in self.corpus.speak_utt_mapping:
return
if not new_speaker:
return
self.newSpeaker.emit(self.corpus.speakers)
self.ui.speakerWidget.newSpeakerEdit.clear()
def open_search(self, search_term=None):
dock_tab_bars = self.findChildren(QtWidgets.QTabBar, "")
        for dock_tab_bar in dock_tab_bars:
if not dock_tab_bar.count():
continue
for i in range(dock_tab_bar.count()):
if dock_tab_bar.tabText(i) == "Utterances":
dock_tab_bar.setCurrentIndex(i)
break
else:
self.ui.utteranceDockWidget.toggleViewAction().trigger()
self.ui.utteranceListWidget.search_box.setFocus()
if search_term is not None:
self.ui.utteranceListWidget.search_box.setQuery(search_term)
def open_search_speaker(self, search_term=None, show=False):
if search_term is not None:
self.ui.utteranceListWidget.speaker_dropdown.line_edit.setText(search_term)
self.ui.utteranceListWidget.file_dropdown.line_edit.setText("")
self.ui.utteranceListWidget.search_box.setText("")
if self.corpus_model.corpus.has_any_ivectors():
self.ui.utteranceListWidget.table_widget.horizontalHeader().setSortIndicator(
self.corpus_model.ivector_distance_column, QtCore.Qt.SortOrder.AscendingOrder
)
self.ui.utteranceListWidget.search()
if show:
dock_tab_bars = self.findChildren(QtWidgets.QTabBar, "")
            for dock_tab_bar in dock_tab_bars:
if not dock_tab_bar.count():
continue
for i in range(dock_tab_bar.count()):
if dock_tab_bar.tabText(i) == "Utterances":
dock_tab_bar.setCurrentIndex(i)
break
else:
self.ui.utteranceDockWidget.toggleViewAction().trigger()
def open_search_file(self, search_term=None, show=False):
if search_term is not None:
self.ui.utteranceListWidget.file_dropdown.line_edit.setText(search_term)
self.ui.utteranceListWidget.speaker_dropdown.line_edit.setText("")
self.ui.utteranceListWidget.search_box.setText("")
self.ui.utteranceListWidget.table_widget.horizontalHeader().setSortIndicator(
self.corpus_model.begin_column, QtCore.Qt.SortOrder.AscendingOrder
)
self.ui.utteranceListWidget.search()
if show:
dock_tab_bars = self.findChildren(QtWidgets.QTabBar, "")
            for dock_tab_bar in dock_tab_bars:
if not dock_tab_bar.count():
continue
for i in range(dock_tab_bar.count()):
if dock_tab_bar.tabText(i) == "Utterances":
dock_tab_bar.setCurrentIndex(i)
break
else:
self.ui.utteranceDockWidget.toggleViewAction().trigger()
def refresh_shortcuts(self):
self.ui.playAct.setShortcut(self.settings.value(AnchorSettings.PLAY_KEYBIND))
self.ui.zoomInAct.setShortcut(self.settings.value(AnchorSettings.ZOOM_IN_KEYBIND))
self.ui.zoomOutAct.setShortcut(self.settings.value(AnchorSettings.ZOOM_OUT_KEYBIND))
self.ui.zoomToSelectionAct.setShortcut(
self.settings.value(AnchorSettings.ZOOM_TO_SELECTION_KEYBIND)
)
self.ui.panLeftAct.setShortcut(self.settings.value(AnchorSettings.PAN_LEFT_KEYBIND))
self.ui.panRightAct.setShortcut(self.settings.value(AnchorSettings.PAN_RIGHT_KEYBIND))
self.ui.mergeUtterancesAct.setShortcut(self.settings.value(AnchorSettings.MERGE_KEYBIND))
self.ui.splitUtterancesAct.setShortcut(self.settings.value(AnchorSettings.SPLIT_KEYBIND))
self.ui.deleteUtterancesAct.setShortcut(self.settings.value(AnchorSettings.DELETE_KEYBIND))
self.ui.saveChangesAct.setShortcut(self.settings.value(AnchorSettings.SAVE_KEYBIND))
self.ui.searchAct.setShortcut(self.settings.value(AnchorSettings.SEARCH_KEYBIND))
self.undo_act.setShortcut(self.settings.value(AnchorSettings.UNDO_KEYBIND))
self.redo_act.setShortcut(self.settings.value(AnchorSettings.REDO_KEYBIND))
# self.ui.changeVolumeAct.widget.setValue(self.config['volume'])
def open_dictionary(self):
dock_tab_bars = self.findChildren(QtWidgets.QTabBar, "")
        for dock_tab_bar in dock_tab_bars:
if not dock_tab_bar.count():
continue
for i in range(dock_tab_bar.count()):
if dock_tab_bar.tabText(i) == "Dictionary":
dock_tab_bar.setCurrentIndex(i)
break
else:
self.ui.dictionaryDockWidget.toggleViewAction().trigger()
def change_temp_dir(self):
directory = QtWidgets.QFileDialog.getExistingDirectory(
parent=self, caption="Select a temporary directory", dir=self.settings.temp_directory
)
if not directory or not os.path.exists(directory):
return
config = MfaConfiguration()
config.profiles["anchor"].temporary_directory = directory
config.save()
def open_options(self):
dialog = OptionsDialog(self)
if dialog.exec_():
self.settings.sync()
self.refresh_settings()
def refresh_style_sheets(self):
self.setStyleSheet(self.settings.style_sheet)
def refresh_corpus_history(self):
self.ui.loadRecentCorpusMenu.clear()
with sqlalchemy.orm.Session(self.db_engine) as session:
corpora = session.query(anchor.db.AnchorCorpus).filter_by(current=False)
for c in corpora:
a = QtGui.QAction(c.name, parent=self)
a.triggered.connect(self.change_corpus)
self.ui.loadRecentCorpusMenu.addAction(a)
def refresh_settings(self):
self.refresh_fonts()
self.refresh_shortcuts()
self.refresh_style_sheets()
self.corpus_model.set_limit(self.settings.value(self.settings.RESULTS_PER_PAGE))
self.dictionary_model.set_limit(self.settings.value(self.settings.RESULTS_PER_PAGE))
self.speaker_model.set_limit(self.settings.value(self.settings.RESULTS_PER_PAGE))
self.merge_speaker_model.set_limit(self.settings.value(self.settings.RESULTS_PER_PAGE))
self.ui.utteranceListWidget.refresh_settings()
self.ui.dictionaryWidget.refresh_settings()
self.ui.speakerWidget.refresh_settings()
self.ui.loadingScreen.refresh_settings()
self.media_player.refresh_settings()
def refresh_fonts(self):
base_font = self.settings.font
self.menuBar().setFont(base_font)
self.ui.utteranceDockWidget.setFont(base_font)
self.ui.speakerDockWidget.setFont(base_font)
self.ui.dictionaryDockWidget.setFont(base_font)
self.ui.oovDockWidget.setFont(base_font)
self.ui.diarizationDockWidget.setFont(base_font)
def download_language_model(self):
self.download_worker.set_params(
self.db_string, "language_model", self.sender().text(), self.model_manager
)
self.download_worker.start()
def download_g2p_model(self):
self.download_worker.set_params(
self.db_string, "g2p", self.sender().text(), self.model_manager
)
self.download_worker.start()
def download_acoustic_model(self):
self.download_worker.set_params(
self.db_string, "acoustic", self.sender().text(), self.model_manager
)
self.download_worker.start()
def download_dictionary(self):
self.download_worker.set_params(
self.db_string, "dictionary", self.sender().text(), self.model_manager
)
self.download_worker.start()
def download_ivector_extractor(self):
self.download_worker.set_params(
self.db_string, "ivector", self.sender().text(), self.model_manager
)
self.download_worker.start()
def load_acoustic_model(self):
with sqlalchemy.orm.Session(self.db_engine) as session:
c = (
session.query(anchor.db.AnchorCorpus)
.options(sqlalchemy.orm.joinedload(anchor.db.AnchorCorpus.acoustic_model))
.filter_by(current=True)
.first()
)
if c is None or c.acoustic_model is None:
return
self.acoustic_model_worker.set_params(c.acoustic_model.path)
self.acoustic_model_worker.start()
def change_acoustic_model(self):
m_id = self.sender().data()
if m_id:
with sqlalchemy.orm.Session(self.db_engine) as session:
m = session.get(anchor.db.AcousticModel, m_id)
am_path = m.path
session.query(anchor.db.AnchorCorpus).filter_by(current=True).update(
{anchor.db.AnchorCorpus.acoustic_model_id: m_id}
)
session.commit()
else:
am_path, _ = QtWidgets.QFileDialog.getOpenFileName(
parent=self,
caption="Select an acoustic model",
dir=self.settings.value(AnchorSettings.DEFAULT_ACOUSTIC_DIRECTORY),
filter="Model files (*.zip)",
)
            if not am_path or not os.path.exists(am_path):
                return
            self.settings.setValue(
                AnchorSettings.DEFAULT_ACOUSTIC_DIRECTORY, os.path.dirname(am_path)
            )
with sqlalchemy.orm.Session(self.db_engine) as session:
c = (
session.query(anchor.db.AnchorCorpus)
.options(sqlalchemy.orm.joinedload(anchor.db.AnchorCorpus.acoustic_model))
.filter_by(current=True)
.first()
)
m = session.query(anchor.db.AcousticModel).filter_by(path=am_path).first()
if not m:
m_name = os.path.splitext(os.path.basename(am_path))[0]
m = anchor.db.AcousticModel(name=m_name, path=am_path, available_locally=True)
session.add(m)
c.acoustic_model = m
session.commit()
self.acoustic_model_worker.set_params(am_path)
self.acoustic_model_worker.start()
def change_custom_mapping(self):
path, _ = QtWidgets.QFileDialog.getOpenFileName(
parent=self,
caption="Select a mapping file",
dir=self.settings.value(AnchorSettings.DEFAULT_DIRECTORY),
filter="Configuration files (*.yaml)",
)
if not path or not os.path.exists(path):
return
with sqlalchemy.orm.Session(self.db_engine) as session:
c = (
session.query(anchor.db.AnchorCorpus)
.options(sqlalchemy.orm.joinedload(anchor.db.AnchorCorpus.dictionary))
.filter_by(current=True)
.first()
)
c.custom_mapping_path = path
self.settings.setValue(AnchorSettings.DEFAULT_DIRECTORY, os.path.dirname(path))
self.settings.sync()
self.dictionary_model.set_custom_mapping(path)
def change_dictionary(self):
m_id = self.sender().data()
if m_id:
with sqlalchemy.orm.Session(self.db_engine) as session:
m = session.get(anchor.db.Dictionary, m_id)
dictionary_path = m.path
session.query(anchor.db.AnchorCorpus).filter_by(current=True).update(
{anchor.db.AnchorCorpus.dictionary_id: m_id}
)
session.commit()
else:
dictionary_path, _ = QtWidgets.QFileDialog.getOpenFileName(
parent=self,
caption="Select a dictionary",
dir=self.settings.value(AnchorSettings.DEFAULT_DICTIONARY_DIRECTORY),
filter="Dictionary files (*.dict *.txt *.yaml)",
)
if not dictionary_path or not os.path.exists(dictionary_path):
return
self.settings.setValue(
AnchorSettings.DEFAULT_DICTIONARY_DIRECTORY, os.path.dirname(dictionary_path)
)
with sqlalchemy.orm.Session(self.db_engine) as session:
c = (
session.query(anchor.db.AnchorCorpus)
.options(sqlalchemy.orm.joinedload(anchor.db.AnchorCorpus.dictionary))
.filter_by(current=True)
.first()
)
d = session.query(anchor.db.Dictionary).filter_by(path=dictionary_path).first()
if not d:
d_name = os.path.splitext(os.path.basename(dictionary_path))[0]
d = anchor.db.Dictionary(
name=d_name, path=dictionary_path, available_locally=True
)
session.add(d)
c.dictionary = d
session.commit()
self.set_application_state("loading")
self.ui.loadingScreen.setCorpusName(f"Loading {dictionary_path}...")
self.dictionary_worker.set_params(self.corpus_model.corpus, dictionary_path)
self.dictionary_worker.start()
def calculate_oovs(self):
self.set_application_state("loading")
self.ui.loadingScreen.setCorpusName("Calculating OOV counts...")
self.oov_worker.set_params(self.corpus_model.corpus)
self.oov_worker.start()
def change_language_model(self):
m_id = self.sender().data()
if m_id:
with sqlalchemy.orm.Session(self.db_engine) as session:
m = session.get(anchor.db.LanguageModel, m_id)
path = m.path
session.query(anchor.db.AnchorCorpus).filter_by(current=True).update(
{anchor.db.AnchorCorpus.language_model_id: m_id}
)
session.commit()
else:
path, _ = QtWidgets.QFileDialog.getOpenFileName(
parent=self,
caption="Select a language model",
dir=self.settings.value(AnchorSettings.DEFAULT_LM_DIRECTORY),
filter="Model files (*.zip)",
)
if not path or not os.path.exists(path):
return
self.settings.setValue(AnchorSettings.DEFAULT_LM_DIRECTORY, os.path.dirname(path))
with sqlalchemy.orm.Session(self.db_engine) as session:
c = (
session.query(anchor.db.AnchorCorpus)
.options(sqlalchemy.orm.joinedload(anchor.db.AnchorCorpus.language_model))
.filter_by(current=True)
.first()
)
m = session.query(anchor.db.LanguageModel).filter_by(path=path).first()
if not m:
m_name = os.path.splitext(os.path.basename(path))[0]
m = anchor.db.LanguageModel(name=m_name, path=path, available_locally=True)
session.add(m)
c.language_model = m
session.commit()
self.language_model_worker.set_params(path)
self.language_model_worker.start()
def load_language_model(self):
with sqlalchemy.orm.Session(self.db_engine) as session:
c = (
session.query(anchor.db.AnchorCorpus)
.options(sqlalchemy.orm.joinedload(anchor.db.AnchorCorpus.language_model))
.filter_by(current=True)
.first()
)
if c is None or c.language_model is None:
return
self.language_model_worker.set_params(c.language_model.path)
self.language_model_worker.start()
self.settings.setValue(
AnchorSettings.DEFAULT_LM_DIRECTORY, os.path.dirname(c.language_model.path)
)
def load_g2p(self):
with sqlalchemy.orm.Session(self.db_engine) as session:
c = (
session.query(anchor.db.AnchorCorpus)
.options(sqlalchemy.orm.joinedload(anchor.db.AnchorCorpus.g2p_model))
.filter_by(current=True)
.first()
)
if c is None or c.g2p_model is None:
return
self.g2p_model_worker.set_params(c.g2p_model.path)
self.g2p_model_worker.start()
self.settings.setValue(
AnchorSettings.DEFAULT_G2P_DIRECTORY, os.path.dirname(c.g2p_model.path)
)
def change_g2p(self):
m_id = self.sender().data()
if m_id:
with sqlalchemy.orm.Session(self.db_engine) as session:
m = session.get(anchor.db.G2PModel, m_id)
g2p_path = m.path
session.query(anchor.db.AnchorCorpus).filter_by(current=True).update(
{anchor.db.AnchorCorpus.g2p_model_id: m_id}
)
session.commit()
else:
g2p_path, _ = QtWidgets.QFileDialog.getOpenFileName(
parent=self,
caption="Select a g2p model",
dir=self.settings.value(AnchorSettings.DEFAULT_G2P_DIRECTORY),
filter="Model files (*.zip)",
)
if not g2p_path or not os.path.exists(g2p_path):
return
self.settings.setValue(AnchorSettings.DEFAULT_G2P_DIRECTORY, os.path.dirname(g2p_path))
with sqlalchemy.orm.Session(self.db_engine) as session:
c = (
session.query(anchor.db.AnchorCorpus)
.options(sqlalchemy.orm.joinedload(anchor.db.AnchorCorpus.g2p_model))
.filter_by(current=True)
.first()
)
m = session.query(anchor.db.G2PModel).filter_by(path=g2p_path).first()
if not m:
m_name = os.path.splitext(os.path.basename(g2p_path))[0]
m = anchor.db.G2PModel(name=m_name, path=g2p_path, available_locally=True)
session.add(m)
c.g2p_model = m
session.commit()
self.g2p_model_worker.set_params(g2p_path)
self.g2p_model_worker.start()
def change_ivector_extractor(self):
m_id = self.sender().data()
if m_id:
with sqlalchemy.orm.Session(self.db_engine) as session:
m = session.get(anchor.db.IvectorExtractor, m_id)
ie_path = m.path
session.query(anchor.db.AnchorCorpus).filter_by(current=True).update(
{anchor.db.AnchorCorpus.ivector_extractor_id: m_id}
)
session.commit()
else:
ie_path, _ = QtWidgets.QFileDialog.getOpenFileName(
caption="Select an ivector extractor model",
dir=self.settings.value(self.settings.DEFAULT_IVECTOR_DIRECTORY),
filter="Ivector extractors (*.ivector *.zip)",
parent=self,
)
if not ie_path or not os.path.exists(ie_path):
return
self.settings.setValue(
AnchorSettings.DEFAULT_IVECTOR_DIRECTORY, os.path.dirname(ie_path)
)
with sqlalchemy.orm.Session(self.db_engine) as session:
c = (
session.query(anchor.db.AnchorCorpus)
.options(sqlalchemy.orm.joinedload(anchor.db.AnchorCorpus.ivector_extractor))
.filter_by(current=True)
.first()
)
m = session.query(anchor.db.IvectorExtractor).filter_by(path=ie_path).first()
if not m:
m_name = os.path.splitext(os.path.basename(ie_path))[0]
m = anchor.db.IvectorExtractor(
name=m_name, path=ie_path, available_locally=True
)
session.add(m)
c.ivector_extractor = m
session.commit()
self.ivector_extractor_worker.set_params(ie_path)
self.ivector_extractor_worker.start()
def load_ivector_extractor(self):
with sqlalchemy.orm.Session(self.db_engine) as session:
c = (
session.query(anchor.db.AnchorCorpus)
.options(sqlalchemy.orm.joinedload(anchor.db.AnchorCorpus.ivector_extractor))
.filter_by(current=True)
.first()
)
if c is None or c.ivector_extractor is None:
return
self.ivector_extractor_worker.set_params(c.ivector_extractor.path)
self.ivector_extractor_worker.start()
self.settings.setValue(
AnchorSettings.DEFAULT_IVECTOR_DIRECTORY, os.path.dirname(c.ivector_extractor.path)
)
def export_files(self):
if not self.corpus_model.corpus:
return
try:
self.corpus_model.export_changes()
except Exception:
exctype, value = sys.exc_info()[:2]
self.handle_error((exctype, value, traceback.format_exc()))
def handle_error(self, trace_args):
exctype, value, trace = trace_args
reply = DetailedMessageBox(detailed_message=trace)
reply.reportBug.connect(self.ui.reportBugAct.trigger)
_ = reply.exec()
self.check_actions()
if self.corpus_model.corpus is not None:
self.set_application_state("loaded")
def save_dictionary(self):
self.ui.saveDictionaryAct.setEnabled(False)
self.execute_runnable(
"Exporting dictionary",
self.save_completed,
[{"dictionary_id": self.dictionary_model.current_dictionary_id}],
)
class FormLayout(QtWidgets.QVBoxLayout):
def addRow(self, label, widget):
row_layout = QtWidgets.QHBoxLayout()
label = QtWidgets.QLabel(label)
label.setSizePolicy(
QtWidgets.QSizePolicy.Policy.Expanding, QtWidgets.QSizePolicy.Policy.Expanding
)
row_layout.addWidget(label)
row_layout.addWidget(widget)
super(FormLayout, self).addLayout(row_layout)
class DetailedMessageBox(QtWidgets.QDialog): # pragma: no cover
reportBug = QtCore.Signal()
def __init__(self, detailed_message, *args, **kwargs):
super(DetailedMessageBox, self).__init__(*args, **kwargs)
self.ui = Ui_ErrorDialog()
self.ui.setupUi(self)
self.settings = AnchorSettings()
self.ui.detailed_message.setText(detailed_message)
self.setStyleSheet(self.settings.style_sheet)
self.ui.buttonBox.report_bug_button.clicked.connect(self.reportBug.emit)
self.ui.buttonBox.rejected.connect(self.reject)
self.ui.label.setFont(self.settings.font)
self.ui.label_2.setFont(self.settings.font)
self.ui.detailed_message.setFont(self.settings.font)
class OptionsDialog(QtWidgets.QDialog):
def __init__(self, parent=None):
super(OptionsDialog, self).__init__(parent=parent)
self.ui = Ui_PreferencesDialog()
self.ui.setupUi(self)
self.settings = AnchorSettings()
config = MfaConfiguration()
self.setFocusPolicy(QtCore.Qt.FocusPolicy.ClickFocus)
self.ui.primaryBaseEdit.set_color(self.settings.value(self.settings.PRIMARY_BASE_COLOR))
self.ui.primaryLightEdit.set_color(self.settings.value(self.settings.PRIMARY_LIGHT_COLOR))
self.ui.primaryDarkEdit.set_color(self.settings.value(self.settings.PRIMARY_DARK_COLOR))
self.ui.primaryVeryLightEdit.set_color(
self.settings.value(self.settings.PRIMARY_VERY_LIGHT_COLOR)
)
self.ui.primaryVeryDarkEdit.set_color(
self.settings.value(self.settings.PRIMARY_VERY_DARK_COLOR)
)
self.ui.accentBaseEdit.set_color(self.settings.value(self.settings.ACCENT_BASE_COLOR))
self.ui.accentLightEdit.set_color(self.settings.value(self.settings.ACCENT_LIGHT_COLOR))
self.ui.accentDarkEdit.set_color(self.settings.value(self.settings.ACCENT_DARK_COLOR))
self.ui.accentVeryLightEdit.set_color(
self.settings.value(self.settings.ACCENT_VERY_LIGHT_COLOR)
)
self.ui.accentVeryDarkEdit.set_color(
self.settings.value(self.settings.ACCENT_VERY_DARK_COLOR)
)
self.ui.mainTextColorEdit.set_color(self.settings.value(self.settings.MAIN_TEXT_COLOR))
self.ui.selectedTextColorEdit.set_color(
self.settings.value(self.settings.SELECTED_TEXT_COLOR)
)
self.ui.errorColorEdit.set_color(self.settings.value(self.settings.ERROR_COLOR))
self.ui.fontEdit.set_font(self.settings.font)
self.ui.playAudioShortcutEdit.setKeySequence(
self.settings.value(self.settings.PLAY_KEYBIND)
)
self.ui.zoomInShortcutEdit.setKeySequence(
self.settings.value(self.settings.ZOOM_IN_KEYBIND)
)
self.ui.zoomToSelectionShortcutEdit.setKeySequence(
self.settings.value(self.settings.ZOOM_TO_SELECTION_KEYBIND)
)
self.ui.zoomOutShortcutEdit.setKeySequence(
self.settings.value(self.settings.ZOOM_OUT_KEYBIND)
)
self.ui.panLeftShortcutEdit.setKeySequence(
self.settings.value(self.settings.PAN_LEFT_KEYBIND)
)
self.ui.panRightShortcutEdit.setKeySequence(
self.settings.value(self.settings.PAN_RIGHT_KEYBIND)
)
self.ui.mergeShortcutEdit.setKeySequence(self.settings.value(self.settings.MERGE_KEYBIND))
self.ui.splitShortcutEdit.setKeySequence(self.settings.value(self.settings.SPLIT_KEYBIND))
self.ui.deleteShortcutEdit.setKeySequence(
self.settings.value(self.settings.DELETE_KEYBIND)
)
self.ui.saveShortcutEdit.setKeySequence(self.settings.value(self.settings.SAVE_KEYBIND))
self.ui.searchShortcutEdit.setKeySequence(
self.settings.value(self.settings.SEARCH_KEYBIND)
)
self.ui.undoShortcutEdit.setKeySequence(self.settings.value(self.settings.UNDO_KEYBIND))
self.ui.redoShortcutEdit.setKeySequence(self.settings.value(self.settings.REDO_KEYBIND))
self.ui.autosaveOnExitCheckBox.setChecked(self.settings.value(self.settings.AUTOSAVE))
self.ui.cudaCheckBox.setChecked(self.settings.value(self.settings.CUDA))
self.ui.autoloadLastUsedCorpusCheckBox.setChecked(
self.settings.value(self.settings.AUTOLOAD)
)
self.ui.audioDeviceEdit.clear()
for o in QtMultimedia.QMediaDevices.audioOutputs():
self.ui.audioDeviceEdit.addItem(o.description(), userData=o.id())
self.ui.numJobsEdit.setValue(config.profiles["anchor"].num_jobs)
try:
self.ui.useMpCheckBox.setChecked(config.profiles["anchor"].use_mp)
except TypeError:
self.ui.useMpCheckBox.setChecked(True)
self.setWindowTitle("Preferences")
def accept(self) -> None:
self.settings.setValue(self.settings.PRIMARY_BASE_COLOR, self.ui.primaryBaseEdit.color)
self.settings.setValue(self.settings.PRIMARY_LIGHT_COLOR, self.ui.primaryLightEdit.color)
self.settings.setValue(self.settings.PRIMARY_DARK_COLOR, self.ui.primaryDarkEdit.color)
self.settings.setValue(
self.settings.PRIMARY_VERY_LIGHT_COLOR, self.ui.primaryVeryLightEdit.color
)
self.settings.setValue(
self.settings.PRIMARY_VERY_DARK_COLOR, self.ui.primaryVeryDarkEdit.color
)
self.settings.setValue(self.settings.ACCENT_BASE_COLOR, self.ui.accentBaseEdit.color)
self.settings.setValue(self.settings.ACCENT_LIGHT_COLOR, self.ui.accentLightEdit.color)
self.settings.setValue(self.settings.ACCENT_DARK_COLOR, self.ui.accentDarkEdit.color)
self.settings.setValue(
self.settings.ACCENT_VERY_LIGHT_COLOR, self.ui.accentVeryLightEdit.color
)
self.settings.setValue(
self.settings.ACCENT_VERY_DARK_COLOR, self.ui.accentVeryDarkEdit.color
)
self.settings.setValue(self.settings.MAIN_TEXT_COLOR, self.ui.mainTextColorEdit.color)
self.settings.setValue(
self.settings.SELECTED_TEXT_COLOR, self.ui.selectedTextColorEdit.color
)
self.settings.setValue(self.settings.ERROR_COLOR, self.ui.errorColorEdit.color)
self.settings.setValue(self.settings.FONT, self.ui.fontEdit.font.toString())
self.settings.setValue(
self.settings.PLAY_KEYBIND, self.ui.playAudioShortcutEdit.keySequence().toString()
)
self.settings.setValue(
self.settings.ZOOM_IN_KEYBIND, self.ui.zoomInShortcutEdit.keySequence().toString()
)
self.settings.setValue(
self.settings.ZOOM_OUT_KEYBIND, self.ui.zoomOutShortcutEdit.keySequence().toString()
)
self.settings.setValue(
self.settings.ZOOM_TO_SELECTION_KEYBIND,
self.ui.zoomToSelectionShortcutEdit.keySequence().toString(),
)
self.settings.setValue(
self.settings.PAN_LEFT_KEYBIND, self.ui.panLeftShortcutEdit.keySequence().toString()
)
self.settings.setValue(
self.settings.PAN_RIGHT_KEYBIND, self.ui.panRightShortcutEdit.keySequence().toString()
)
self.settings.setValue(
self.settings.MERGE_KEYBIND, self.ui.mergeShortcutEdit.keySequence().toString()
)
self.settings.setValue(
self.settings.SPLIT_KEYBIND, self.ui.splitShortcutEdit.keySequence().toString()
)
self.settings.setValue(
self.settings.DELETE_KEYBIND, self.ui.deleteShortcutEdit.keySequence().toString()
)
self.settings.setValue(
self.settings.SAVE_KEYBIND, self.ui.saveShortcutEdit.keySequence().toString()
)
self.settings.setValue(
self.settings.SEARCH_KEYBIND, self.ui.searchShortcutEdit.keySequence().toString()
)
self.settings.setValue(
self.settings.UNDO_KEYBIND, self.ui.undoShortcutEdit.keySequence().toString()
)
self.settings.setValue(
self.settings.REDO_KEYBIND, self.ui.redoShortcutEdit.keySequence().toString()
)
self.settings.setValue(
self.settings.AUTOLOAD, self.ui.autoloadLastUsedCorpusCheckBox.isChecked()
)
self.settings.setValue(self.settings.CUDA, self.ui.cudaCheckBox.isChecked())
self.settings.setValue(self.settings.AUTOSAVE, self.ui.autosaveOnExitCheckBox.isChecked())
self.settings.setValue(self.settings.AUDIO_DEVICE, self.ui.audioDeviceEdit.currentData())
self.settings.sync()
config = MfaConfiguration()
config.current_profile_name = "anchor"
config.profiles["anchor"].use_mp = self.ui.useMpCheckBox.isChecked()
config.profiles["anchor"].num_jobs = int(self.ui.numJobsEdit.value())
config.profiles["anchor"].github_token = self.ui.githubTokenEdit.text()
config.save()
super(OptionsDialog, self).accept()
class Application(QtWidgets.QApplication):
pass
# --- end of anchor/main.py (package Anchor-annotator, Anchor_annotator-0.0.9) ---
import sqlalchemy
from montreal_forced_aligner.db import PathType
from sqlalchemy import Boolean, Column, DateTime, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base, relationship
AnchorSqlBase = declarative_base()
class AcousticModel(AnchorSqlBase):
__tablename__ = "acoustic_model"
id = Column(Integer, primary_key=True, autoincrement=True)
name = Column(String(50), nullable=False)
path = Column(PathType, nullable=False, unique=True)
available_locally = Column(Boolean, nullable=False, default=False)
last_used = Column(DateTime, nullable=False, server_default=sqlalchemy.func.now(), index=True)
corpora = relationship(
"AnchorCorpus",
back_populates="acoustic_model",
)
class LanguageModel(AnchorSqlBase):
__tablename__ = "language_model"
id = Column(Integer, primary_key=True, autoincrement=True)
name = Column(String(50), nullable=False)
path = Column(PathType, nullable=False, unique=True)
available_locally = Column(Boolean, nullable=False, default=False)
last_used = Column(DateTime, nullable=False, server_default=sqlalchemy.func.now(), index=True)
corpora = relationship(
"AnchorCorpus",
back_populates="language_model",
)
class G2PModel(AnchorSqlBase):
__tablename__ = "g2p_model"
id = Column(Integer, primary_key=True, autoincrement=True)
name = Column(String(50), nullable=False)
path = Column(PathType, nullable=False, unique=True)
available_locally = Column(Boolean, nullable=False, default=False)
last_used = Column(DateTime, nullable=False, server_default=sqlalchemy.func.now(), index=True)
corpora = relationship(
"AnchorCorpus",
back_populates="g2p_model",
)
class Dictionary(AnchorSqlBase):
__tablename__ = "dictionary"
id = Column(Integer, primary_key=True, autoincrement=True)
name = Column(String(50), nullable=False)
path = Column(PathType, nullable=False, unique=True)
available_locally = Column(Boolean, nullable=False, default=False)
last_used = Column(DateTime, nullable=False, server_default=sqlalchemy.func.now(), index=True)
corpora = relationship(
"AnchorCorpus",
back_populates="dictionary",
)
class IvectorExtractor(AnchorSqlBase):
__tablename__ = "ivector_extractor"
id = Column(Integer, primary_key=True, autoincrement=True)
name = Column(String(50), nullable=False)
path = Column(PathType, nullable=False, unique=True)
available_locally = Column(Boolean, nullable=False, default=False)
last_used = Column(DateTime, nullable=False, server_default=sqlalchemy.func.now(), index=True)
corpora = relationship(
"AnchorCorpus",
back_populates="ivector_extractor",
)
class SadModel(AnchorSqlBase):
__tablename__ = "sad_model"
id = Column(Integer, primary_key=True, autoincrement=True)
name = Column(String(50), nullable=False)
path = Column(PathType, nullable=False, unique=True)
available_locally = Column(Boolean, nullable=False, default=False)
last_used = Column(DateTime, nullable=False, server_default=sqlalchemy.func.now(), index=True)
corpora = relationship(
"AnchorCorpus",
back_populates="sad_model",
)
class AnchorCorpus(AnchorSqlBase):
__tablename__ = "corpus"
id = Column(Integer, primary_key=True, autoincrement=True)
name = Column(String(50), nullable=False, index=True)
path = Column(PathType, nullable=False, index=True, unique=True)
custom_mapping_path = Column(PathType, nullable=True)
reference_directory = Column(PathType, nullable=True)
current = Column(Boolean, nullable=False, default=False, index=True)
acoustic_model_id = Column(Integer, ForeignKey("acoustic_model.id"), index=True, nullable=True)
acoustic_model = relationship("AcousticModel", back_populates="corpora")
language_model_id = Column(Integer, ForeignKey("language_model.id"), index=True, nullable=True)
language_model = relationship("LanguageModel", back_populates="corpora")
dictionary_id = Column(Integer, ForeignKey("dictionary.id"), index=True, nullable=True)
dictionary = relationship("Dictionary", back_populates="corpora")
g2p_model_id = Column(Integer, ForeignKey("g2p_model.id"), index=True, nullable=True)
g2p_model = relationship("G2PModel", back_populates="corpora")
ivector_extractor_id = Column(
Integer, ForeignKey("ivector_extractor.id"), index=True, nullable=True
)
ivector_extractor = relationship("IvectorExtractor", back_populates="corpora")
sad_model_id = Column(Integer, ForeignKey("sad_model.id"), index=True, nullable=True)
sad_model = relationship("SadModel", back_populates="corpora")
MODEL_TYPES = {
"acoustic": AcousticModel,
"g2p": G2PModel,
"dictionary": Dictionary,
"language_model": LanguageModel,
"ivector": IvectorExtractor,
"sad": SadModel,
}
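# The MODEL_TYPES registry above maps a model-type string to its ORM class, and
# the main window toggles a single "current" corpus row when switching models.
# A minimal stand-alone sketch of that "registry + single current row" pattern,
# using stdlib sqlite3 instead of SQLAlchemy (the table and column names here
# are illustrative, not the real Anchor schema):
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE corpus (id INTEGER PRIMARY KEY, name TEXT, current INTEGER DEFAULT 0)")
conn.executemany("INSERT INTO corpus (name) VALUES (?)", [("a",), ("b",)])

def set_current_corpus(conn, corpus_id):
    # Clear any previous current flag, then mark the requested corpus, mirroring
    # session.query(AnchorCorpus).filter_by(current=True).update(...) above.
    conn.execute("UPDATE corpus SET current = 0 WHERE current = 1")
    conn.execute("UPDATE corpus SET current = 1 WHERE id = ?", (corpus_id,))
    conn.commit()

set_current_corpus(conn, 2)
current_name = conn.execute("SELECT name FROM corpus WHERE current = 1").fetchone()[0]
# current_name == "b", and exactly one row is flagged current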
# --- end of anchor/db.py (package Anchor-annotator, Anchor_annotator-0.0.9) ---
from __future__ import annotations
import collections
import csv
import datetime
import logging
import multiprocessing as mp
import os
import pickle
import queue
import shutil
import subprocess
import sys
import threading
import time
import traceback
import typing
from io import BytesIO
from pathlib import Path
from queue import Queue
from threading import Lock
import librosa
import numpy as np
import psycopg2.errors
import resampy
import soundfile
import soundfile as sf
import sqlalchemy
import tqdm
import yaml
from montreal_forced_aligner.alignment import PretrainedAligner
from montreal_forced_aligner.config import (
GLOBAL_CONFIG,
IVECTOR_DIMENSION,
MEMORY,
MFA_PROFILE_VARIABLE,
PLDA_DIMENSION,
XVECTOR_DIMENSION,
)
from montreal_forced_aligner.corpus.acoustic_corpus import (
AcousticCorpus,
AcousticCorpusWithPronunciations,
)
from montreal_forced_aligner.corpus.classes import FileData
from montreal_forced_aligner.corpus.features import score_plda
from montreal_forced_aligner.data import (
ClusterType,
DatasetType,
DistanceMetric,
ManifoldAlgorithm,
TextFileType,
WordType,
WorkflowType,
)
from montreal_forced_aligner.db import (
Corpus,
CorpusWorkflow,
Dictionary,
Dictionary2Job,
File,
Phone,
PhoneInterval,
Pronunciation,
SoundFile,
Speaker,
SpeakerOrdering,
TextFile,
Utterance,
Word,
WordInterval,
bulk_update,
)
from montreal_forced_aligner.diarization.multiprocessing import cluster_matrix, visualize_clusters
from montreal_forced_aligner.diarization.speaker_diarizer import SpeakerDiarizer
from montreal_forced_aligner.dictionary.multispeaker import MultispeakerDictionary
from montreal_forced_aligner.g2p.generator import PyniniValidator as Generator
from montreal_forced_aligner.helper import mfa_open
from montreal_forced_aligner.models import (
MODEL_TYPES,
AcousticModel,
IvectorExtractorModel,
LanguageModel,
)
from montreal_forced_aligner.transcription import Transcriber
from montreal_forced_aligner.utils import (
ProgressCallback,
Stopped,
inspect_database,
read_feats,
thirdparty_binary,
)
from montreal_forced_aligner.vad.multiprocessing import segment_utterance
from montreal_forced_aligner.vad.segmenter import TranscriptionSegmenter
from montreal_forced_aligner.validation.corpus_validator import PretrainedValidator
from PySide6 import QtCore
from sqlalchemy.orm import joinedload, selectinload, subqueryload
import anchor.db
from anchor.settings import AnchorSettings
if typing.TYPE_CHECKING:
from anchor.models import TextFilterQuery
M_LOG_2PI = 1.8378770664093454835606594728112
logger = logging.getLogger("anchor")
class WorkerSignals(QtCore.QObject):
"""
Defines the signals available from a running worker thread.
Supported signals are:
finished
No data
error
tuple (exctype, value, traceback.format_exc())
result
object data returned from processing, anything
progress
int indicating % progress
"""
finished = QtCore.Signal()
error = QtCore.Signal(tuple)
result = QtCore.Signal(object)
stream_result = QtCore.Signal(object)
progress = QtCore.Signal(int, str)
total = QtCore.Signal(int)
def __init__(self, name):
super().__init__()
self.name = name
class Worker(QtCore.QRunnable):
"""
Worker thread
Inherits from QRunnable to handle worker thread setup, signals and wrap-up.
:param callback: The function callback to run on this worker thread. Supplied args and
kwargs will be passed through to the runner.
:type callback: function
:param args: Arguments to pass to the callback function
:param kwargs: Keywords to pass to the callback function
"""
def __init__(self, fn, *args, use_mp=False, **kwargs):
super(Worker, self).__init__()
# Store constructor arguments (re-used for processing)
self.fn = fn
self.name = fn.__name__
self.args = args
self.kwargs = kwargs
self.use_mp = use_mp
self.stopped = Stopped()
self.signals = WorkerSignals(fn.__name__)
# Add the callback to our kwargs
if not self.use_mp:
self.kwargs["progress_callback"] = ProgressCallback(
callback=self.signals.progress.emit, total_callback=self.signals.total.emit
)
self.kwargs["stopped"] = self.stopped
def cancel(self):
self.stopped.stop()
@QtCore.Slot()
def run(self):
"""
Initialise the runner function with passed args, kwargs.
"""
# Retrieve args/kwargs here; and fire processing using them
try:
if self.use_mp:
queue = mp.Queue()
p = mp.Process(target=self.fn, args=(queue, *self.args), kwargs=self.kwargs)
p.start()
result = queue.get()
p.join()
if isinstance(result, Exception):
raise result
else:
result = self.fn(*self.args, **self.kwargs)
except Exception:
exctype, value = sys.exc_info()[:2]
self.signals.error.emit((exctype, value, traceback.format_exc()))
else:
self.signals.result.emit(result) # Return the result of the processing
finally:
self.signals.finished.emit() # Done
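# A minimal, Qt-free sketch of the Worker.run() control flow above: run a
# callable, route failures to an error callback and successes to a result
# callback, and always fire a finished callback. The callback names are
# illustrative stand-ins for the WorkerSignals slots.
import sys
import traceback

def run_task(fn, on_result, on_error, on_finished, *args, **kwargs):
    try:
        result = fn(*args, **kwargs)
    except Exception:
        # Same (exctype, value, formatted traceback) tuple shape as signals.error
        exctype, value = sys.exc_info()[:2]
        on_error((exctype, value, traceback.format_exc()))
    else:
        on_result(result)
    finally:
        on_finished()

events = []
run_task(lambda x: x + 1, events.append, events.append, lambda: events.append("done"), 41)
# events is now [42, "done"]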
class ClosestSpeakerThread(threading.Thread):
def __init__(
self,
Session,
threshold,
job_q: Queue,
return_q: Queue,
done_adding: Stopped,
done_processing: Stopped,
stopped: Stopped,
*args,
**kwargs,
):
super().__init__(*args, **kwargs)
self.session = Session
self.job_q = job_q
self.return_q = return_q
self.done_adding = done_adding
self.done_processing = done_processing
self.threshold = threshold
self.stopped = stopped
def run(self):
with self.session() as session:
c = session.query(Corpus).first()
while True:
try:
s_id, s_ivector = self.job_q.get(timeout=3)
except queue.Empty:
if self.done_adding.stop_check():
break
if self.stopped.stop_check():
break
continue
if self.stopped.stop_check():
continue
suggested_query = session.query(
Speaker.id,
).order_by(c.speaker_ivector_column.cosine_distance(s_ivector))
suggested_query = suggested_query.filter(
c.speaker_ivector_column.cosine_distance(s_ivector) <= self.threshold
)
r = [x[0] for x in suggested_query if x[0] != s_id]
self.return_q.put((s_id, r))
self.done_processing.stop()
class SpeakerQueryThread(threading.Thread):
def __init__(
self,
Session,
job_q: Queue,
done_adding: Stopped,
stopped: Stopped,
progress_callback,
*args,
**kwargs,
):
super().__init__(*args, **kwargs)
self.session = Session
self.job_q = job_q
self.done_adding = done_adding
self.stopped = stopped
self.progress_callback = progress_callback
def run(self):
with self.session() as session:
c = session.query(Corpus).first()
query = session.query(
Speaker.id,
c.speaker_ivector_column,
).order_by(Speaker.id)
query_count = query.count()
if self.progress_callback is not None:
self.progress_callback.update_total(query_count)
for s_id, s_ivector in query:
if self.stopped is not None and self.stopped.stop_check():
break
self.job_q.put((s_id, s_ivector))
self.done_adding.stop()
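# The producer/consumer handshake used by SpeakerQueryThread and
# ClosestSpeakerThread above, sketched with stdlib threading. A threading.Event
# stands in for MFA's Stopped helper: the producer sets done_adding once the
# queue is fully populated, and consumers exit after the queue drains past that
# point. The queue is filled before the consumers start so the sketch is
# deterministic.
import queue
import threading

def producer(job_q, items, done_adding):
    for item in items:
        job_q.put(item)
    done_adding.set()

def consumer(job_q, return_q, done_adding):
    while True:
        try:
            item = job_q.get(timeout=0.1)
        except queue.Empty:
            if done_adding.is_set():
                break
            continue
        return_q.put(item * 2)  # placeholder for the per-item work

job_q, return_q = queue.Queue(), queue.Queue()
done_adding = threading.Event()
producer(job_q, [1, 2, 3], done_adding)
threads = [threading.Thread(target=consumer, args=(job_q, return_q, done_adding)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
results = sorted(return_q.queue)
# results == [2, 4, 6]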
def closest_speaker_function(
Session,
progress_callback: typing.Optional[ProgressCallback] = None,
stopped: typing.Optional[Stopped] = None,
**kwargs,
):
utterance_id = kwargs.get("utterance_id", None)
num_speakers = kwargs.get("num_speakers", 10)
data = {}
with Session() as session:
c = session.query(Corpus).first()
if utterance_id is not None:
ivector = (
session.query(c.utterance_ivector_column)
.filter(Utterance.id == utterance_id)
.first()[0]
)
else:
ivector = kwargs.get("ivector", None)
if ivector is None:
return {}, utterance_id
query = (
session.query(
Speaker.id, Speaker.name, c.speaker_ivector_column.cosine_distance(ivector)
)
.join(Speaker.utterances)
.filter(c.speaker_ivector_column != None) # noqa
.group_by(Speaker.id)
.having(sqlalchemy.func.count() > 2)
.order_by(c.speaker_ivector_column.cosine_distance(ivector))
.limit(num_speakers)
)
speaker_ids = []
speaker_names = []
distances = []
for s_id, name, distance in query:
data[s_id] = name
speaker_ids.append(s_id)
speaker_names.append(name)
distances.append(distance)
data = {
speaker_ids[i]: f"{speaker_names[i]} ({distances[i]:.3f})"
for i in range(len(speaker_ids))
}
return data, utterance_id
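# closest_speaker_function above ranks speakers by cosine distance between
# ivectors, delegating the distance computation to the database via
# cosine_distance. A pure-Python sketch of the same ranking, with toy 2-D
# "ivectors" (real ivectors are high-dimensional vectors):
import math

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / norm

speakers = {"spk1": (1.0, 0.0), "spk2": (0.0, 1.0), "spk3": (0.9, 0.1)}
query_ivector = (1.0, 0.05)
# Sort speakers by distance to the query, nearest first (the SQL ORDER BY)
ranked = sorted(speakers, key=lambda s: cosine_distance(query_ivector, speakers[s]))
closest = ranked[0]
# closest == "spk1"; "spk2" is farthest since it is nearly orthogonal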
def merge_speakers_function(
Session,
progress_callback: typing.Optional[ProgressCallback] = None,
stopped: typing.Optional[Stopped] = None,
**kwargs,
):
speaker_id = kwargs.get("speaker_id", None)
threshold = kwargs.get("threshold", None)
speaker_counts = collections.Counter()
deleted = set()
with Session() as session:
c = session.query(Corpus).first()
data = []
query_count = session.query(Speaker.id).count()
if progress_callback is not None:
progress_callback.update_total(query_count)
if speaker_id is None:
num_jobs = GLOBAL_CONFIG.profiles["anchor"].num_jobs
job_queue = Queue()
return_queue = Queue()
done_adding = Stopped()
done_processing = Stopped()
query_thread = SpeakerQueryThread(
Session, job_queue, done_adding, stopped, progress_callback
)
query_thread.start()
threads = []
for i in range(num_jobs):
threads.append(
ClosestSpeakerThread(
Session,
threshold,
job_queue,
return_queue,
done_adding,
done_processing,
stopped,
)
)
threads[i].start()
while True:
try:
r = return_queue.get(timeout=2)
except queue.Empty:
if done_processing.stop_check():
break
if stopped.stop_check():
break
continue
suggested_id, to_merge = r
if suggested_id in deleted:
continue
if progress_callback is not None:
progress_callback.increment_progress(1)
if not to_merge:
continue
for s_id in to_merge:
if stopped is not None and stopped.stop_check():
session.rollback()
return
file_ids = [
x
for x, in session.query(SpeakerOrdering.c.file_id).filter(
SpeakerOrdering.c.speaker_id == s_id
)
]
session.query(Utterance).filter(Utterance.speaker_id == s_id).update(
{Utterance.speaker_id: suggested_id}
)
session.query(SpeakerOrdering).filter(
SpeakerOrdering.c.file_id.in_(file_ids),
SpeakerOrdering.c.speaker_id.in_([s_id, suggested_id]),
).delete()
speaker_ordering_mapping = []
for f in file_ids:
speaker_ordering_mapping.append(
{"speaker_id": suggested_id, "file_id": f, "index": 1}
)
session.execute(sqlalchemy.insert(SpeakerOrdering), speaker_ordering_mapping)
session.query(File).filter(File.id.in_(file_ids)).update({File.modified: True})
session.query(Speaker).filter(Speaker.id.in_(to_merge)).delete()
deleted.update(to_merge)
if progress_callback is not None:
query_count -= len(to_merge)
progress_callback.update_total(query_count)
session.commit()
query_thread.join()
for t in threads:
t.join()
else:
ivector = (
session.query(c.speaker_ivector_column).filter(Speaker.id == speaker_id).first()[0]
)
query = (
session.query(Speaker.id)
.filter(Speaker.id != speaker_id)
.filter(c.speaker_ivector_column.cosine_distance(ivector) <= threshold)
)
query_count = query.count()
if progress_callback is not None:
progress_callback.update_total(query_count)
for (s_id,) in query:
if stopped is not None and stopped.stop_check():
session.rollback()
return
data.append((s_id, speaker_id))
if progress_callback is not None:
progress_callback.increment_progress(1)
updated_speakers = {}
if progress_callback is not None:
progress_callback.update_total(len(data))
progress_callback.set_progress(0)
for s_id, suggested_id in data:
if stopped is not None and stopped.stop_check():
session.rollback()
return
if s_id in updated_speakers:
s_id = updated_speakers[s_id]
if suggested_id in updated_speakers:
suggested_id = updated_speakers[suggested_id]
if (
suggested_id not in speaker_counts
or speaker_counts[s_id] > speaker_counts[suggested_id]
):
suggested_id, s_id = s_id, suggested_id
updated_speakers[s_id] = suggested_id
for k, v in updated_speakers.items():
if v == s_id:
updated_speakers[k] = suggested_id
speaker_counts[suggested_id] += speaker_counts[s_id]
file_ids = [
x
for x, in session.query(SpeakerOrdering.c.file_id).filter(
SpeakerOrdering.c.speaker_id == s_id
)
]
session.query(Utterance).filter(Utterance.speaker_id == s_id).update(
{Utterance.speaker_id: suggested_id}
)
session.query(SpeakerOrdering).filter(
SpeakerOrdering.c.file_id.in_(file_ids),
SpeakerOrdering.c.speaker_id.in_([s_id, suggested_id]),
).delete()
speaker_ordering_mapping = []
for f in file_ids:
speaker_ordering_mapping.append(
{"speaker_id": suggested_id, "file_id": f, "index": 1}
)
if speaker_ordering_mapping:
session.execute(sqlalchemy.insert(SpeakerOrdering), speaker_ordering_mapping)
session.query(File).filter(File.id.in_(file_ids)).update({File.modified: True})
session.flush()
if progress_callback is not None:
progress_callback.increment_progress(1)
session.commit()
sq = (
session.query(Speaker.id, sqlalchemy.func.count().label("utterance_count"))
.outerjoin(Speaker.utterances)
.group_by(Speaker.id)
.subquery()
)
sq2 = sqlalchemy.select(sq.c.id).where(sq.c.utterance_count == 0)
session.query(Speaker).filter(Speaker.id.in_(sq2)).delete(synchronize_session="fetch")
session.commit()
class ClosestUtteranceThread(threading.Thread):
"""Worker thread that pulls utterances from a job queue and finds earlier utterances with identical text whose ivectors fall within a distance threshold."""
def __init__(
self,
Session,
threshold,
job_q: Queue,
return_q: Queue,
done_adding: Stopped,
done_processing: Stopped,
stopped: Stopped,
*args,
**kwargs,
):
super().__init__(*args, **kwargs)
self.session = Session
self.job_q = job_q
self.return_q = return_q
self.done_adding = done_adding
self.done_processing = done_processing
self.threshold = threshold
self.stopped = stopped
def run(self):
deleted = set()
with self.session() as session:
c = session.query(Corpus).first()
while True:
try:
u_id, u_text, u_ivector, file_name = self.job_q.get(timeout=3)
except queue.Empty:
if self.done_adding.stop_check():
break
if self.stopped.stop_check():
break
continue
if self.stopped.stop_check():
continue
if file_name in deleted:
continue
duplicates = (
session.query(Utterance.text, File.name)
.join(Utterance.file)
.filter(
Utterance.id < u_id,
Utterance.text == u_text,
c.utterance_ivector_column.cosine_distance(u_ivector) <= self.threshold,
)
.all()
)
deleted.update([x[1] for x in duplicates])
self.return_q.put((u_id, u_text, file_name, duplicates))
self.done_processing.stop()
class UtteranceQueryThread(threading.Thread):
"""Worker thread that queries all utterances with ivectors and feeds them to the job queue."""
def __init__(
self,
Session,
job_q: Queue,
done_adding: Stopped,
stopped: Stopped,
progress_callback,
*args,
**kwargs,
):
super().__init__(*args, **kwargs)
self.session = Session
self.job_q = job_q
self.done_adding = done_adding
self.stopped = stopped
self.progress_callback = progress_callback
def run(self):
with self.session() as session:
c = session.query(Corpus).first()
query = (
session.query(Utterance.id, Utterance.text, c.utterance_ivector_column, File.name)
.join(Utterance.file)
.filter(c.utterance_ivector_column != None) # noqa
.order_by(Utterance.id.desc())
)
query_count = query.count()
if self.progress_callback is not None:
self.progress_callback.update_total(query_count)
for row in query:
if self.stopped is not None and self.stopped.stop_check():
break
self.job_q.put(row)
self.done_adding.stop()
def duplicate_files_query(
Session,
progress_callback: typing.Optional[ProgressCallback] = None,
stopped: typing.Optional[Stopped] = None,
**kwargs,
):
"""Find duplicate files via utterances with identical text and near-identical ivectors, writing a TSV report and a deletion list to the working directory."""
threshold = kwargs.get("threshold", 0.01)
working_directory = kwargs.get("working_directory")
to_delete = set()
original_files = set()
info_path = os.path.join(working_directory, "duplicate_info.tsv")
with mfa_open(info_path, "w") as f:
writer = csv.DictWriter(
f,
fieldnames=["original_file", "original_text", "duplicate_file", "duplicate_text"],
delimiter="\t",
)
writer.writeheader()
num_jobs = GLOBAL_CONFIG.profiles["anchor"].num_jobs
job_queue = Queue()
return_queue = Queue()
done_adding = Stopped()
done_processing = Stopped()
query_thread = UtteranceQueryThread(
Session, job_queue, done_adding, stopped, progress_callback
)
query_thread.start()
threads = []
for i in range(num_jobs):
threads.append(
ClosestUtteranceThread(
Session,
threshold,
job_queue,
return_queue,
done_adding,
done_processing,
stopped,
)
)
threads[i].start()
while True:
try:
r = return_queue.get(timeout=2)
except queue.Empty:
if done_processing.stop_check():
break
if stopped.stop_check():
break
continue
u_id, u_text, orig_file_name, duplicates = r
if progress_callback is not None:
progress_callback.increment_progress(1)
if orig_file_name in to_delete:
continue
original_files.add(orig_file_name)
if len(duplicates) == 0:
continue
line = {"original_file": orig_file_name, "original_text": u_text}
duplicate_files = {}
for text, file_name in duplicates:
if file_name in original_files:
continue
if file_name not in duplicate_files:
duplicate_files[file_name] = text
to_delete.update(duplicate_files.keys())
for dup_file_name, dup_text in duplicate_files.items():
line["duplicate_file"] = dup_file_name
line["duplicate_text"] = dup_text
writer.writerow(line)
f.flush()
with mfa_open(os.path.join(working_directory, "to_delete.txt"), "w") as f:
for line in sorted(to_delete):
f.write(f"{line}\n")
return len(to_delete), info_path
def speaker_comparison_query(
Session,
progress_callback: typing.Optional[ProgressCallback] = None,
stopped: typing.Optional[Stopped] = None,
**kwargs,
):
"""Suggest speaker merges by comparing speaker ivectors, using cosine distance or PLDA scoring."""
speaker_id = kwargs.get("speaker_id", None)
threshold = kwargs.get("threshold", None)
metric = kwargs.get("metric", DistanceMetric.cosine)
data = []
speaker_indices = []
suggested_indices = []
limit = kwargs.get("limit", 100)
offset = kwargs.get("current_offset", 0)
if progress_callback is not None:
progress_callback.update_total(limit)
if metric is DistanceMetric.plda:
working_directory = kwargs.get("working_directory", None)
plda_transform_path = os.path.join(working_directory, "plda.pkl")
try:
with open(plda_transform_path, "rb") as f:
plda = pickle.load(f)
except Exception:
metric = DistanceMetric.cosine
with Session() as session:
c = session.query(Corpus).first()
if c.plda_calculated:
dim = PLDA_DIMENSION
elif c.xvectors_loaded:
dim = XVECTOR_DIMENSION
else:
dim = IVECTOR_DIMENSION
if speaker_id is None:
query = (
session.query(
Speaker.id,
Speaker.name,
c.speaker_ivector_column,
sqlalchemy.func.count().label("utterance_count"),
)
.join(Speaker.utterances)
.filter(c.speaker_ivector_column != None) # noqa
.group_by(Speaker.id)
.having(sqlalchemy.func.count() > 2)
.order_by(sqlalchemy.func.random())
)
if threshold is None:
query = query.limit(limit).offset(offset)
else:
query = query.limit(limit * 1000)
found = set()
for s_id, s_name, s_ivector, utterance_count in query:
if stopped is not None and stopped.stop_check():
return
if metric is DistanceMetric.plda:
suggested_query = (
session.query(Speaker.id, Speaker.name, c.speaker_ivector_column)
.filter(Speaker.id != s_id, c.speaker_ivector_column != None) # noqa
.order_by(c.speaker_ivector_column.cosine_distance(s_ivector))
)
r = suggested_query.limit(100).all()
test_ivectors = np.empty((len(r), dim))
suggested_ids = []
suggested_names = []
for j, (suggested_id, suggested_name, suggested_ivector) in enumerate(r):
test_ivectors[j, :] = suggested_ivector
suggested_ids.append(suggested_id)
suggested_names.append(suggested_name)
train_ivectors = s_ivector[np.newaxis, :]
counts = np.array([utterance_count])[:, np.newaxis]
distance_matrix = score_plda(
train_ivectors, test_ivectors, plda, normalize=False, counts=counts
)
# distance_matrix has shape (num_test, num_train) == (len(r), 1), so take the argmax over the test axis
index = distance_matrix[:, 0].argmax()
suggested_name = suggested_names[index]
suggested_id = suggested_ids[index]
log_likelihood_ratio = distance_matrix[index, 0]
if threshold is not None and log_likelihood_ratio < threshold:
continue
data.append([s_name, suggested_name, log_likelihood_ratio])
speaker_indices.append(s_id)
suggested_indices.append(suggested_id)
else:
suggested_query = (
session.query(
Speaker.id,
Speaker.name,
c.speaker_ivector_column.cosine_distance(s_ivector),
)
.filter(Speaker.id != s_id)
.filter(c.speaker_ivector_column != None) # noqa
.order_by(c.speaker_ivector_column.cosine_distance(s_ivector))
)
if threshold is not None:
suggested_query = suggested_query.filter(
c.speaker_ivector_column.cosine_distance(s_ivector) <= threshold
)
r = suggested_query.limit(1).first()
if r is None:
continue
suggested_id, suggested_name, distance = r
key = frozenset([s_id, suggested_id])
if key in found:
continue
found.add(key)
suggested_count = (
session.query(sqlalchemy.func.count().label("utterance_count"))
.filter(Utterance.speaker_id == suggested_id)
.scalar()
)
if not suggested_count:
continue
if suggested_count < utterance_count:
s_name, suggested_name = suggested_name, s_name
s_id, suggested_id = suggested_id, s_id
data.append([s_name, suggested_name, distance])
speaker_indices.append(s_id)
suggested_indices.append(suggested_id)
if progress_callback is not None:
progress_callback.increment_progress(1)
if len(data) == limit:
break
else:
ivector, speaker_name = (
session.query(c.speaker_ivector_column, Speaker.name)
.filter(Speaker.id == speaker_id)
.first()
)
query = (
session.query(
Speaker.id,
Speaker.name,
c.speaker_ivector_column,
c.speaker_ivector_column.cosine_distance(ivector).label("distance"),
)
.filter(Speaker.id != speaker_id)
.order_by(c.speaker_ivector_column.cosine_distance(ivector))
.limit(limit)
.offset(offset)
)
if metric is DistanceMetric.plda:
test_ivectors = np.empty((limit, dim))
for i, (s_id, s_name, s_ivector, distance) in enumerate(query):
if stopped is not None and stopped.stop_check():
session.rollback()
return
if progress_callback is not None:
progress_callback.increment_progress(1)
data.append([s_name, speaker_name, distance])
speaker_indices.append(s_id)
suggested_indices.append(speaker_id)
if metric is DistanceMetric.plda:
test_ivectors[i, :] = s_ivector
if metric is DistanceMetric.plda:
train_ivectors = ivector[np.newaxis, :]
distance_matrix = score_plda(train_ivectors, test_ivectors, plda, normalize=False)
for i in range(len(data)):
data[i][2] = distance_matrix[i, 0]
d = np.array([x[2] for x in data])
if metric is DistanceMetric.plda:
d *= -1
indices = np.argsort(d)
speaker_indices = [speaker_indices[x] for x in indices]
suggested_indices = [suggested_indices[x] for x in indices]
data = [data[x] for x in indices]
return data, speaker_indices, suggested_indices
def find_speaker_utterance_query(
Session,
progress_callback: typing.Optional[ProgressCallback] = None,
stopped: typing.Optional[Stopped] = None,
**kwargs,
):
"""Find utterances from other speakers that are closest to the given speaker's ivector."""
speaker_id = kwargs.get("speaker_id")
limit = kwargs.get("limit", 100)
if progress_callback is not None:
progress_callback.update_total(limit)
with Session() as session:
c = session.query(Corpus).first()
ivector = (
session.query(c.speaker_ivector_column).filter(Speaker.id == speaker_id).first()[0]
)
query = (
session.query(Utterance)
.options(joinedload(Utterance.file, innerjoin=True))
.filter(Utterance.speaker_id != speaker_id)
.order_by(c.speaker_ivector_column.cosine_distance(ivector))
.limit(limit)
.offset(kwargs.get("current_offset", 0))
)
file_ids = []
utterance_ids = []
data = []
for utterance in query:
if stopped is not None and stopped.stop_check():
session.rollback()
return
if progress_callback is not None:
progress_callback.increment_progress(1)
utterance_ids.append(utterance.id)
file_ids.append(utterance.file_id)
data.append([utterance.file_name, utterance.begin, utterance.end])
return data, utterance_ids, file_ids
def find_outlier_utterances_query(
Session,
progress_callback: typing.Optional[ProgressCallback] = None,
stopped: typing.Optional[Stopped] = None,
**kwargs,
):
"""Find the given speaker's utterances that are farthest from the speaker's ivector."""
speaker_id = kwargs.get("speaker_id")
limit = kwargs.get("limit", 100)
if progress_callback is not None:
progress_callback.update_total(limit)
with Session() as session:
c = session.query(Corpus).first()
ivector = (
session.query(c.speaker_ivector_column).filter(Speaker.id == speaker_id).first()[0]
)
query = (
session.query(Utterance)
.options(joinedload(Utterance.file, innerjoin=True))
.filter(Utterance.speaker_id == speaker_id)
.order_by(c.utterance_ivector_column.cosine_distance(ivector).desc())
.limit(limit)
.offset(kwargs.get("current_offset", 0))
)
file_ids = []
utterance_ids = []
data = []
for utterance in query:
if stopped is not None and stopped.stop_check():
session.rollback()
return
if progress_callback is not None:
progress_callback.increment_progress(1)
utterance_ids.append(utterance.id)
file_ids.append(utterance.file_id)
data.append([utterance.file_name, utterance.begin, utterance.end])
return data, utterance_ids, file_ids
def query_function(
Session,
progress_callback: typing.Optional[ProgressCallback] = None,
stopped: typing.Optional[Stopped] = None,
**kwargs,
):
"""Query utterances with optional speaker, file, and text filters, plus sorting and paging."""
with Session() as session:
c = session.query(Corpus).first()
count_only = kwargs.get("count", False)
has_ivectors = kwargs.get("has_ivectors", False)
if count_only:
columns = [Utterance.id]
else:
columns = [
Utterance.id,
Utterance.file_id,
Utterance.speaker_id,
Utterance.oovs,
File.name,
Speaker.name,
Utterance.begin,
Utterance.end,
Utterance.duration,
Utterance.text,
]
columns.append(Utterance.alignment_log_likelihood)
columns.append(Utterance.speech_log_likelihood)
columns.append(Utterance.duration_deviation)
columns.append(Utterance.phone_error_rate)
columns.append(Utterance.alignment_score)
columns.append(Utterance.transcription_text)
columns.append(Utterance.word_error_rate)
if has_ivectors:
columns.append(
c.utterance_ivector_column.cosine_distance(c.speaker_ivector_column)
)
speaker_filter = kwargs.get("speaker_filter", None)
file_filter = kwargs.get("file_filter", None)
text_filter: TextFilterQuery = kwargs.get("text_filter", None)
sort_index = kwargs.get("sort_index", None)
utterances = session.query(*columns).join(Utterance.speaker).join(Utterance.file)
if kwargs.get("oovs_only", False):
utterances = utterances.filter(Utterance.oovs != "")
if speaker_filter is not None:
if isinstance(speaker_filter, int):
utterances = utterances.filter(Utterance.speaker_id == speaker_filter)
else:
utterances = utterances.filter(Speaker.name == speaker_filter)
if file_filter is not None:
if isinstance(file_filter, int):
utterances = utterances.filter(Utterance.file_id == file_filter)
else:
utterances = utterances.filter(File.name == file_filter)
if text_filter is not None:
if kwargs.get("oovs_only", False):
text_column = Utterance.oovs
else:
text_column = Utterance.text
filter_regex = text_filter.generate_expression(posix=True)
utterances = utterances.filter(text_column.op("~")(filter_regex))
if count_only:
try:
return utterances.count()
except psycopg2.errors.InvalidRegularExpression:
return 0
if progress_callback is not None:
progress_callback.update_total(kwargs.get("limit", 100))
if sort_index is not None and sort_index + 3 < len(columns) - 1:
sort_column = columns[sort_index + 3]
if kwargs.get("sort_desc", False):
sort_column = sort_column.desc()
utterances = utterances.order_by(sort_column, Utterance.id)
else:
utterances = utterances.order_by(File.name, Utterance.begin)
utterances = utterances.limit(kwargs.get("limit", 100)).offset(
kwargs.get("current_offset", 0)
)
data = []
indices = []
file_indices = []
speaker_indices = []
reversed_indices = {}
try:
for i, u in enumerate(utterances):
if stopped is not None and stopped.stop_check():
return
data.append(list(u[3:]))
indices.append(u[0])
file_indices.append(u[1])
speaker_indices.append(u[2])
reversed_indices[u[0]] = i
if progress_callback is not None:
progress_callback.increment_progress(1)
except psycopg2.errors.InvalidRegularExpression:
pass
return data, indices, file_indices, speaker_indices, reversed_indices
def file_utterances_function(
Session,
file_id,
progress_callback: typing.Optional[ProgressCallback] = None,
stopped: typing.Optional[Stopped] = None,
**kwargs,
):
"""Load all utterances for a file, with phone and word intervals eagerly loaded."""
with Session() as session:
utterances = (
session.query(Utterance)
.options(
selectinload(Utterance.phone_intervals).options(
joinedload(PhoneInterval.phone, innerjoin=True),
joinedload(PhoneInterval.workflow, innerjoin=True),
),
selectinload(Utterance.word_intervals).options(
joinedload(WordInterval.word, innerjoin=True),
joinedload(WordInterval.workflow, innerjoin=True),
),
joinedload(Utterance.speaker, innerjoin=True),
)
.filter(Utterance.file_id == file_id)
.order_by(Utterance.begin)
.all()
)
return utterances, file_id
def query_dictionary_function(
Session,
progress_callback: typing.Optional[ProgressCallback] = None,
stopped: typing.Optional[Stopped] = None,
**kwargs,
):
"""Query words and pronunciations for a dictionary, with optional text filtering, sorting, and paging."""
with Session() as session:
text_filter = kwargs.get("text_filter", None)
sort_index = kwargs.get("sort_index", None)
dictionary_id = kwargs.get("dictionary_id", None)
filter_unused = kwargs.get("filter_unused", False)
if progress_callback is not None:
progress_callback.update_total(kwargs.get("limit", 100))
columns = [Word.word, Word.count, Pronunciation.pronunciation, Word.id, Pronunciation.id]
text_column = Word.word
words = session.query(*columns).join(Word.pronunciations)
if dictionary_id is not None:
words = words.filter(Word.dictionary_id == dictionary_id)
if filter_unused:
words = words.filter(Word.count > 0)
if text_filter is not None:
filter_regex = text_filter.generate_expression(posix=True)
words = words.filter(text_column.op("~")(filter_regex))
if kwargs.get("count", False):
return words.count()
if sort_index is not None and sort_index < len(columns):
sort_column = columns[sort_index]
if kwargs.get("sort_desc", False):
sort_column = sort_column.desc()
else:
sort_column = text_column
words = words.order_by(sort_column, Word.id, Pronunciation.id)
words = words.limit(kwargs.get("limit", 100)).offset(kwargs.get("current_offset", 0))
data = []
indices = []
pron_indices = []
for word, count, pron, w_id, p_id in words:
if stopped is not None and stopped.stop_check():
return
indices.append(w_id)
pron_indices.append(p_id)
data.append([word, count, pron])
if progress_callback is not None:
progress_callback.increment_progress(1)
return data, indices, pron_indices
def query_oovs_function(
Session,
progress_callback: typing.Optional[ProgressCallback] = None,
stopped: typing.Optional[Stopped] = None,
**kwargs,
):
"""Query out-of-vocabulary words, with optional text filtering, sorting, and paging."""
with Session() as session:
text_filter = kwargs.get("text_filter", None)
sort_index = kwargs.get("sort_index", None)
columns = [Word.word, Word.count, Word.id]
text_column = Word.word
if progress_callback is not None:
progress_callback.update_total(kwargs.get("limit", 100))
words = session.query(*columns).filter(Word.word_type == WordType.oov)
if text_filter is not None:
filter_regex = text_filter.generate_expression(posix=True)
words = words.filter(text_column.op("~")(filter_regex))
if kwargs.get("count", False):
return words.count()
if sort_index is not None and sort_index < len(columns):
sort_column = columns[sort_index]
if kwargs.get("sort_desc", False):
sort_column = sort_column.desc()
else:
sort_column = text_column
words = words.order_by(sort_column, Word.id)
words = words.limit(kwargs.get("limit", 100)).offset(kwargs.get("current_offset", 0))
data = []
indices = []
for word, count, w_id in words:
if stopped is not None and stopped.stop_check():
return
data.append([word, count])
indices.append(w_id)
if progress_callback is not None:
progress_callback.increment_progress(1)
return data, indices
def calculate_speaker_ivectors(
Session,
progress_callback: typing.Optional[ProgressCallback] = None,
stopped: typing.Optional[Stopped] = None,
**kwargs,
):
"""Collect a speaker's utterance ivectors and their distances to the speaker's ivector."""
logger.debug(f"Using {GLOBAL_CONFIG.profiles['anchor'].num_jobs} jobs")
speaker_id = kwargs.pop("speaker_id")
working_directory = kwargs.pop("working_directory")
plda_transform_path = os.path.join(working_directory, "plda.pkl")
metric = kwargs.pop("metric", DistanceMetric.cosine)
if metric is DistanceMetric.plda:
try:
with open(plda_transform_path, "rb") as f:
plda = pickle.load(f)
except Exception:
metric = DistanceMetric.cosine
if progress_callback is not None:
progress_callback.update_total(3)
with Session() as session:
c = session.query(Corpus).first()
if c.plda_calculated:
dim = PLDA_DIMENSION
elif c.xvectors_loaded:
dim = XVECTOR_DIMENSION
else:
dim = IVECTOR_DIMENSION
speaker_ivector = (
session.query(c.speaker_ivector_column).filter(Speaker.id == speaker_id).first()[0]
)
utterances = (
session.query(
Utterance.id,
c.utterance_ivector_column,
c.utterance_ivector_column.cosine_distance(c.speaker_ivector_column),
)
.join(Utterance.speaker)
.filter(Utterance.speaker_id == speaker_id, c.utterance_ivector_column != None) # noqa
)
ivectors = np.empty((utterances.count(), dim))
utterance_ids = []
speaker_distance = []
for i, (u_id, u_ivector, distance) in enumerate(utterances):
ivectors[i, :] = u_ivector
utterance_ids.append(u_id)
speaker_distance.append(distance)
if metric is DistanceMetric.plda:
if speaker_ivector is not None:
speaker_distance = score_plda(
speaker_ivector[np.newaxis, :], ivectors, plda, normalize=True, distance=True
)[:, 0]
else:
speaker_distance = None
return speaker_id, np.array(utterance_ids), ivectors, speaker_distance
def cluster_speaker_utterances(
Session,
progress_callback: typing.Optional[ProgressCallback] = None,
stopped: typing.Optional[Stopped] = None,
**kwargs,
):
"""Cluster a speaker's utterance ivectors with the requested clustering algorithm and distance metric."""
speaker_id = kwargs.pop("speaker_id")
working_directory = kwargs.pop("working_directory")
cluster_type = kwargs.pop("cluster_type", ClusterType.hdbscan)
metric_type = kwargs.pop("metric", DistanceMetric.cosine)
plda_transform_path = os.path.join(working_directory, "plda.pkl")
plda = None
try:
with open(plda_transform_path, "rb") as f:
plda = pickle.load(f)
except Exception:
metric_type = DistanceMetric.cosine
distance_threshold = kwargs.pop("distance_threshold", None)
if not distance_threshold:
distance_threshold = None
logger.debug(f"Clustering with {cluster_type}...")
with Session() as session:
c = session.query(Corpus).first()
utterance_count = (
session.query(Utterance)
.filter(Utterance.speaker_id == speaker_id, c.utterance_ivector_column != None) # noqa
.count()
)
if c.plda_calculated:
dim = PLDA_DIMENSION
elif c.xvectors_loaded:
dim = XVECTOR_DIMENSION
else:
dim = IVECTOR_DIMENSION
to_fit = np.empty((utterance_count, dim))
query = session.query(c.utterance_ivector_column).filter(
Utterance.speaker_id == speaker_id, c.utterance_ivector_column != None # noqa
)
for i, (ivector,) in enumerate(query):
to_fit[i, :] = ivector
begin = time.time()
if cluster_type is ClusterType.agglomerative:
logger.info("Running Agglomerative Clustering...")
kwargs["memory"] = MEMORY
if "n_clusters" not in kwargs:
kwargs["distance_threshold"] = distance_threshold
if metric_type is DistanceMetric.plda:
kwargs["linkage"] = "average"
elif metric_type is DistanceMetric.cosine:
kwargs["linkage"] = "average"
elif cluster_type is ClusterType.dbscan:
kwargs["distance_threshold"] = distance_threshold
elif cluster_type is ClusterType.hdbscan:
kwargs["distance_threshold"] = distance_threshold
kwargs["memory"] = MEMORY
elif cluster_type is ClusterType.optics:
kwargs["distance_threshold"] = distance_threshold
kwargs["memory"] = MEMORY
c = cluster_matrix(
to_fit,
cluster_type,
metric=metric_type,
strict=False,
no_visuals=True,
plda=plda,
**kwargs,
)
logger.debug(f"Clustering with {cluster_type} took {time.time() - begin} seconds")
return speaker_id, c
def mds_speaker_utterances(
Session,
progress_callback: typing.Optional[ProgressCallback] = None,
stopped: typing.Optional[Stopped] = None,
**kwargs,
):
"""Project a speaker's utterance ivectors down to 2D points for cluster visualization."""
speaker_id = kwargs.pop("speaker_id")
working_directory = kwargs.pop("working_directory")
plda_transform_path = os.path.join(working_directory, "plda.pkl")
metric_type = kwargs.pop("metric", DistanceMetric.cosine)
plda = None
try:
with open(plda_transform_path, "rb") as f:
plda = pickle.load(f)
except Exception:
metric_type = DistanceMetric.cosine
n_neighbors = 10
with Session() as session:
c = session.query(Corpus).first()
utterance_count = (
session.query(Utterance)
.filter(Utterance.speaker_id == speaker_id, c.utterance_ivector_column != None) # noqa
.count()
)
if c.plda_calculated:
dim = PLDA_DIMENSION
elif c.xvectors_loaded:
dim = XVECTOR_DIMENSION
else:
dim = IVECTOR_DIMENSION
ivectors = np.empty((utterance_count, dim), dtype="float32")
query = session.query(c.utterance_ivector_column).filter(
Utterance.speaker_id == speaker_id, c.utterance_ivector_column != None # noqa
)
for i, (ivector,) in enumerate(query):
ivectors[i, :] = ivector
points = visualize_clusters(
ivectors, ManifoldAlgorithm.tsne, metric_type, n_neighbors, plda, quick=True
)
return speaker_id, points
def query_speakers_function(
Session,
progress_callback: typing.Optional[ProgressCallback] = None,
stopped: typing.Optional[Stopped] = None,
**kwargs,
):
"""Query speakers with utterance counts and mean utterance-to-speaker ivector distance."""
with Session() as session:
c = session.query(Corpus).first()
text_filter = kwargs.get("text_filter", None)
sort_index = kwargs.get("sort_index", None)
if kwargs.get("count", False):
speakers = session.query(Speaker.name)
if text_filter is not None:
filter_regex = text_filter.generate_expression(posix=True)
text_column = Speaker.name
speakers = speakers.filter(text_column.op("~")(filter_regex))
return speakers.count()
if progress_callback is not None:
progress_callback.update_total(kwargs.get("limit", 100))
columns = [
Speaker.id,
Speaker.name,
sqlalchemy.func.count(),
Speaker.dictionary_id,
sqlalchemy.func.avg(
c.utterance_ivector_column.cosine_distance(c.speaker_ivector_column)
),
]
speakers = (
session.query(*columns)
.join(Speaker.utterances)
.group_by(Speaker.id, Speaker.name, Speaker.dictionary_id)
)
if text_filter is not None:
filter_regex = text_filter.generate_expression(posix=True)
text_column = columns[1]
if not text_filter.case_sensitive:
text_column = sqlalchemy.func.lower(text_column)
speakers = speakers.filter(text_column.op("~")(filter_regex))
if sort_index is not None:
sort_column = columns[sort_index + 1]
if kwargs.get("sort_desc", False):
sort_column = sort_column.desc()
speakers = speakers.order_by(sort_column)
speakers = speakers.limit(kwargs.get("limit", 100)).offset(kwargs.get("current_offset", 0))
data = []
indices = []
for w in speakers:
if stopped is not None and stopped.stop_check():
return
d = list(w)
indices.append(d.pop(0))
data.append(d)
if progress_callback is not None:
progress_callback.increment_progress(1)
return data, indices
def change_speaker_function(
Session,
utterance_ids,
new_speaker_id,
old_speaker_id,
progress_callback: typing.Optional[ProgressCallback] = None,
stopped: typing.Optional[Stopped] = None,
):
"""Move utterances to a new or existing speaker, updating speaker ordering for the affected files."""
with Session() as session:
try:
if new_speaker_id == 0:
new_speaker_id = (session.query(sqlalchemy.func.max(Speaker.id)).scalar() or 0) + 1
speaker = session.query(Speaker).get(old_speaker_id)
original_name = speaker.name
index = 1
while True:
speaker_name = f"{original_name}_{index}"
t = session.query(Speaker).filter(Speaker.name == speaker_name).first()
if t is None:
break
index += 1
session.execute(
sqlalchemy.insert(Speaker).values(
id=new_speaker_id, name=speaker_name, dictionary_id=speaker.dictionary_id
)
)
session.flush()
file_ids = [
x[0]
for x in session.query(File.id)
.join(File.utterances)
.filter(Utterance.id.in_(utterance_ids))
.distinct()
]
mapping = [{"id": x, "speaker_id": new_speaker_id} for x in utterance_ids]
session.bulk_update_mappings(Utterance, mapping)
session.execute(
sqlalchemy.delete(SpeakerOrdering).where(
SpeakerOrdering.c.file_id.in_(file_ids),
SpeakerOrdering.c.speaker_id.in_([old_speaker_id, new_speaker_id]),
)
)
session.flush()
so_mapping = [
{"speaker_id": new_speaker_id, "file_id": f_id, "index": 10} for f_id in file_ids
]
session.execute(sqlalchemy.insert(SpeakerOrdering), so_mapping)
if stopped is not None and stopped.stop_check():
session.rollback()
return
session.commit()
except Exception:
session.rollback()
raise
return new_speaker_id
def recalculate_speaker_function(
Session,
speaker_id,
progress_callback: typing.Optional[ProgressCallback] = None,
stopped: typing.Optional[Stopped] = None,
):
"""Recompute a speaker's ivector as the mean of its utterance ivectors."""
with Session() as session:
try:
c = session.query(Corpus).first()
old_ivectors = np.array(
[
x[0]
for x in session.query(c.utterance_ivector_column).filter(
Utterance.speaker_id == speaker_id,
c.utterance_ivector_column != None, # noqa
)
]
)
if old_ivectors.shape[0] > 0:
old_speaker_ivector = np.mean(old_ivectors, axis=0)
session.execute(
sqlalchemy.update(Speaker)
.where(Speaker.id == speaker_id)
.values({c.speaker_ivector_column: old_speaker_ivector})
)
if stopped is not None and stopped.stop_check():
session.rollback()
return
session.commit()
except Exception:
session.rollback()
raise
def replace_function(
Session,
search_query: TextFilterQuery,
replacement_string,
progress_callback: typing.Optional[ProgressCallback] = None,
stopped: typing.Optional[Stopped] = None,
):
"""Perform a regex find-and-replace over utterance text, returning the old and new texts."""
with Session() as session:
try:
old_texts = {}
new_texts = {}
filter_regex = search_query.generate_expression(posix=True)
text_column = Utterance.text
columns = [Utterance.id, Utterance.text]
utterances = session.query(*columns)
utterances = utterances.filter(text_column.op("~")(filter_regex))
if progress_callback is not None:
progress_callback.update_total(utterances.count())
for u_id, text in utterances:
if stopped is not None and stopped.stop_check():
session.rollback()
return
old_texts[u_id] = text
utterance_table = Utterance.__table__
utterance_statement = sqlalchemy.update(utterance_table)
utterance_statement = utterance_statement.where(
utterance_table.c.text.op("~")(filter_regex)
)
utterance_statement = utterance_statement.values(
text=sqlalchemy.func.regexp_replace(
utterance_table.c.text, filter_regex, replacement_string, "g"
),
normalized_text=sqlalchemy.func.regexp_replace(
utterance_table.c.normalized_text, filter_regex, replacement_string, "g"
),
).execution_options(synchronize_session="fetch")
utterance_statement = utterance_statement.returning(
utterance_table.c.id, utterance_table.c.file_id, utterance_table.c.text
)
while True:
try:
with session.begin_nested():
results = session.execute(utterance_statement)
file_ids = []
for u_id, f_id, text in results:
if progress_callback is not None:
progress_callback.increment_progress(1)
new_texts[u_id] = text
file_ids.append(f_id)
if file_ids:
session.query(File).filter(File.id.in_(file_ids)).update(
{
File.modified: True,
}
)
break
except psycopg2.errors.DeadlockDetected:
pass
if stopped is not None and stopped.stop_check():
session.rollback()
return
session.commit()
except Exception:
session.rollback()
raise
return search_query.generate_expression(), old_texts, new_texts
def export_files_function(
Session,
progress_callback: typing.Optional[ProgressCallback] = None,
stopped: typing.Optional[Stopped] = None,
):
"""Save all modified files as TextGrids and clear their modified flags."""
with Session() as session:
try:
mappings = []
settings = AnchorSettings()
settings.sync()
output_directory = session.query(Corpus.path).first()[0]
files = (
session.query(File)
.options(
subqueryload(File.utterances),
subqueryload(File.speakers),
joinedload(File.sound_file, innerjoin=True).load_only(SoundFile.duration),
joinedload(File.text_file, innerjoin=True).load_only(TextFile.file_type),
)
.filter(File.modified == True) # noqa
)
if progress_callback is not None:
progress_callback.update_total(files.count())
for f in files:
if stopped is not None and stopped.stop_check():
session.rollback()
break
try:
f.save(
output_directory, overwrite=True, output_format=TextFileType.TEXTGRID.value
)
except Exception:
logger.error(f"Error writing {f.name}")
raise
mappings.append({"id": f.id, "modified": False})
if progress_callback is not None:
progress_callback.increment_progress(1)
session.commit()
while True:
try:
with session.begin_nested():
session.bulk_update_mappings(File, mappings)
break
except psycopg2.errors.DeadlockDetected:
pass
session.commit()
except Exception:
session.rollback()
raise
def export_lexicon_function(
    Session,
    dictionary_id: int,
    progress_callback: typing.Optional[ProgressCallback] = None,
    stopped: typing.Optional[Stopped] = None,
):
    """Write the pronunciation lexicon for the given dictionary back to its file on disk."""
with Session() as session:
dictionary_path = (
session.query(Dictionary.path).filter(Dictionary.id == dictionary_id).scalar()
)
words = (
session.query(Word.word, Pronunciation.pronunciation)
.join(Pronunciation.word)
.filter(
Word.dictionary_id == dictionary_id,
Pronunciation.pronunciation != "",
Word.word_type.in_([WordType.speech, WordType.clitic]),
)
.order_by(Word.word)
)
if progress_callback is not None:
progress_callback.update_total(words.count())
with open(dictionary_path, "w", encoding="utf8") as f:
for w, p in words:
            if stopped is not None and stopped.stop_check():
break
f.write(f"{w}\t{p}\n")
if progress_callback is not None:
progress_callback.increment_progress(1)
def speakers_function(
    Session,
    progress_callback: typing.Optional[ProgressCallback] = None,
    stopped: typing.Optional[Stopped] = None,
):
    """Load a mapping of speaker names to their database IDs."""
begin = time.time()
conn = Session.bind.raw_connection()
speakers = {}
try:
cursor = conn.cursor()
cursor.execute("select speaker.name, speaker.id from speaker order by speaker.name")
query = cursor.fetchall()
for s_name, s_id in query:
speakers[s_name] = s_id
cursor.close()
finally:
conn.close()
logger.debug(f"Loading all speaker names took {time.time() - begin:.3f} seconds.")
return speakers
def dictionaries_function(
    Session,
    progress_callback: typing.Optional[ProgressCallback] = None,
    stopped: typing.Optional[Stopped] = None,
):
    """Load dictionary metadata: (id, name) pairs, per-dictionary word sets, and a speaker-to-dictionary mapping."""
dictionaries = []
word_sets = {}
speaker_mapping = {}
with Session() as session:
query = session.query(Dictionary.id, Dictionary.name)
for dict_id, dict_name in query:
dictionaries.append([dict_id, dict_name])
word_sets[dict_id] = {
x[0]
for x in session.query(Word.word).filter(
Word.dictionary_id == dict_id,
Word.word_type.in_([WordType.speech, WordType.clitic]),
)
}
for (s_id,) in session.query(Speaker.id).filter(Speaker.dictionary_id == dict_id):
speaker_mapping[s_id] = dict_id
return dictionaries, word_sets, speaker_mapping
def files_function(
    Session: sqlalchemy.orm.scoped_session,
    progress_callback: typing.Optional[ProgressCallback] = None,
    stopped: typing.Optional[Stopped] = None,
):
    """Load a mapping of file names to their database IDs."""
begin = time.time()
conn = Session.bind.raw_connection()
files = {}
try:
cursor = conn.cursor()
cursor.execute("select file.name, file.id from file order by file.name")
query = cursor.fetchall()
for f_name, f_id in query:
files[f_name] = f_id
cursor.close()
finally:
conn.close()
logger.debug(f"Loading all file names took {time.time() - begin:.3f} seconds.")
return files
class ExportFilesWorker(Worker):
def __init__(self, session, use_mp=False):
super().__init__(export_files_function, session, use_mp=use_mp)
class ReplaceAllWorker(Worker):
def __init__(self, session, search_string, replacement_string, use_mp=False):
super().__init__(
replace_function, session, search_string, replacement_string, use_mp=use_mp
)
class ChangeSpeakerWorker(Worker):
def __init__(self, session, utterance_ids, new_speaker_id, old_speaker_id, use_mp=False):
super().__init__(
change_speaker_function,
session,
utterance_ids,
new_speaker_id,
old_speaker_id,
use_mp=use_mp,
)
class RecalculateSpeakerWorker(Worker):
def __init__(self, session, speaker_id, use_mp=False):
super().__init__(recalculate_speaker_function, session, speaker_id, use_mp=use_mp)
class QueryUtterancesWorker(Worker):
def __init__(self, session, use_mp=False, **kwargs):
super().__init__(query_function, session, use_mp=use_mp, **kwargs)
class QuerySpeakersWorker(Worker):
def __init__(self, session, use_mp=False, **kwargs):
super().__init__(query_speakers_function, session, use_mp=use_mp, **kwargs)
class ClusterSpeakerUtterancesWorker(Worker):
def __init__(self, session, use_mp=False, **kwargs):
super().__init__(cluster_speaker_utterances, session, use_mp=use_mp, **kwargs)
class CalculateSpeakerIvectorsWorker(Worker):
def __init__(self, session, use_mp=False, **kwargs):
super().__init__(calculate_speaker_ivectors, session, use_mp=use_mp, **kwargs)
class SpeakerMdsWorker(Worker):
def __init__(self, session, use_mp=False, **kwargs):
super().__init__(mds_speaker_utterances, session, use_mp=use_mp, **kwargs)
class SpeakerComparisonWorker(Worker):
def __init__(self, session, use_mp=False, **kwargs):
super().__init__(speaker_comparison_query, session, use_mp=use_mp, **kwargs)
class DuplicateFilesWorker(Worker):
def __init__(self, session, use_mp=False, **kwargs):
super().__init__(duplicate_files_query, session, use_mp=use_mp, **kwargs)
class MergeSpeakersWorker(Worker):
def __init__(self, session, use_mp=False, **kwargs):
super().__init__(merge_speakers_function, session, use_mp=use_mp, **kwargs)
class ClosestSpeakersWorker(Worker):
def __init__(self, session, use_mp=False, **kwargs):
super().__init__(closest_speaker_function, session, use_mp=use_mp, **kwargs)
class FileUtterancesWorker(Worker):
def __init__(self, session, file_id, use_mp=False, **kwargs):
super().__init__(file_utterances_function, session, file_id, use_mp=use_mp, **kwargs)
class QueryOovWorker(Worker):
def __init__(self, session, use_mp=False, **kwargs):
super().__init__(query_oovs_function, session, use_mp=use_mp, **kwargs)
class QueryDictionaryWorker(Worker):
def __init__(self, session, use_mp=False, **kwargs):
super().__init__(query_dictionary_function, session, use_mp=use_mp, **kwargs)
class ExportLexiconWorker(Worker):
def __init__(self, session, use_mp=False, **kwargs):
super().__init__(export_lexicon_function, session, use_mp=use_mp, **kwargs)
class LoadSpeakersWorker(Worker):
def __init__(self, session, use_mp=False, **kwargs):
super().__init__(speakers_function, session, use_mp=use_mp, **kwargs)
class LoadFilesWorker(Worker):
def __init__(self, session, use_mp=False, **kwargs):
super().__init__(files_function, session, use_mp=use_mp, **kwargs)
class LoadDictionariesWorker(Worker):
def __init__(self, session, use_mp=False, **kwargs):
super().__init__(dictionaries_function, session, use_mp=use_mp, **kwargs)
class FunctionWorker(QtCore.QThread):  # pragma: no cover
    """Base QThread worker that reports progress, results, and errors through WorkerSignals."""
def __init__(self, name, *args):
super().__init__(*args)
self.settings = AnchorSettings()
self.signals = WorkerSignals(name)
self.lock = Lock()
def setParams(self, kwargs):
self.kwargs = kwargs
self.kwargs["progress_callback"] = self.signals.progress
self.kwargs["stop_check"] = self.stopCheck
self.total = None
def stop(self):
pass
def stopCheck(self):
return False
class AutoWaveformWorker(FunctionWorker): # pragma: no cover
def __init__(self, *args):
super().__init__("Scaling waveform", *args)
def set_params(self, y, normalized_min, normalized_max, begin, end, channel):
with self.lock:
self.y = y
self.normalized_min = normalized_min
self.normalized_max = normalized_max
self.begin = begin
self.end = end
self.channel = channel
def run(self):
with self.lock:
if self.y.shape[0] == 0:
return
max_val = np.max(np.abs(self.y), axis=0)
            if np.any(np.isnan(max_val)):
return
normalized = self.y / max_val
normalized[np.isnan(normalized)] = 0
height = self.normalized_max - self.normalized_min
new_height = height / 2
mid_point = self.normalized_min + new_height
normalized = normalized * 0.5 + mid_point
if self.stopCheck():
return
self.signals.result.emit((normalized, self.begin, self.end, self.channel))
class WaveformWorker(FunctionWorker): # pragma: no cover
def __init__(self, *args):
super().__init__("Loading waveform", *args)
def set_params(self, file_path):
with self.lock:
self.file_path = file_path
def run(self):
with self.lock:
y, _ = soundfile.read(self.file_path)
if self.stopCheck():
return
self.signals.result.emit((y, self.file_path))
class SpeakerTierWorker(FunctionWorker): # pragma: no cover
def __init__(self, *args):
super().__init__("Generating speaker tier", *args)
def set_params(self, Session, file_id):
with self.lock:
self.Session = Session
self.file_id = file_id
def run(self):
with self.lock:
with self.Session() as session:
utterances = (
session.query(Utterance)
.options(
selectinload(Utterance.phone_intervals).options(
joinedload(PhoneInterval.phone, innerjoin=True),
joinedload(PhoneInterval.workflow, innerjoin=True),
),
selectinload(Utterance.word_intervals).options(
joinedload(WordInterval.word, innerjoin=True),
joinedload(WordInterval.workflow, innerjoin=True),
),
joinedload(Utterance.speaker, innerjoin=True),
)
.filter(Utterance.file_id == self.file_id)
.order_by(Utterance.begin)
.all()
)
if self.stopCheck():
return
self.signals.result.emit((utterances, self.file_id))
class SpectrogramWorker(FunctionWorker): # pragma: no cover
def __init__(self, *args):
super().__init__("Generating spectrogram", *args)
def set_params(
self,
y,
sample_rate,
begin,
end,
channel,
dynamic_range,
n_fft,
time_steps,
window_size,
pre_emph_coeff,
max_freq,
):
with self.lock:
self.y = y
self.sample_rate = sample_rate
self.begin = begin
self.end = end
self.channel = channel
self.dynamic_range = dynamic_range
self.n_fft = n_fft
self.time_steps = time_steps
self.window_size = window_size
self.pre_emph_coeff = pre_emph_coeff
self.max_freq = max_freq
def run(self):
with self.lock:
if self.y.shape[0] == 0:
return
max_sr = 2 * self.max_freq
if self.sample_rate > max_sr:
self.y = resampy.resample(self.y, self.sample_rate, max_sr)
self.sample_rate = max_sr
self.y = librosa.effects.preemphasis(self.y, coef=self.pre_emph_coeff)
if self.stopCheck():
return
begin_samp = int(self.begin * self.sample_rate)
end_samp = int(self.end * self.sample_rate)
window_size = round(self.window_size, 6)
window_size_samp = int(window_size * self.sample_rate)
duration_samp = end_samp - begin_samp
if self.time_steps >= duration_samp:
step_size_samples = 1
else:
step_size_samples = int(duration_samp / self.time_steps)
stft = librosa.amplitude_to_db(
np.abs(
librosa.stft(
self.y,
n_fft=self.n_fft,
win_length=window_size_samp,
hop_length=step_size_samples,
center=True,
)
),
top_db=self.dynamic_range,
)
min_db, max_db = np.min(stft), np.max(stft)
if self.stopCheck():
return
self.signals.result.emit((stft, self.channel, self.begin, self.end, min_db, max_db))
class PitchWorker(FunctionWorker): # pragma: no cover
def __init__(self, *args):
super().__init__("Generating pitch track", *args)
def set_params(
self,
y,
sample_rate,
begin,
end,
channel,
min_f0,
max_f0,
frame_shift,
frame_length,
delta_pitch,
penalty_factor,
normalized_min,
normalized_max,
):
with self.lock:
self.y = y
self.sample_rate = sample_rate
self.begin = begin
self.end = end
self.channel = channel
self.min_f0 = min_f0
self.max_f0 = max_f0
self.frame_shift = frame_shift
self.frame_length = frame_length
self.delta_pitch = delta_pitch
self.penalty_factor = penalty_factor
self.normalized_min = normalized_min
self.normalized_max = normalized_max
def run(self):
with self.lock:
if self.y.shape[0] == 0:
return
pitch_proc = subprocess.Popen(
[
thirdparty_binary("compute-and-process-kaldi-pitch-feats"),
"--snip-edges=true",
f"--min-f0={self.min_f0}",
f"--max-f0={self.max_f0}",
"--add-delta-pitch=false",
"--add-normalized-log-pitch=false",
"--add-raw-log-pitch=true",
f"--sample-frequency={self.sample_rate}",
f"--frame-shift={self.frame_shift}",
f"--frame-length={self.frame_length}",
f"--delta-pitch={self.delta_pitch}",
f"--penalty-factor={self.penalty_factor}",
"ark:-",
"ark,t:-",
],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.DEVNULL,
)
pitch_proc.stdin.write(b"0-0 ")
bio = BytesIO()
            soundfile.write(bio, self.y, samplerate=self.sample_rate, format="WAV")
pitch_proc.stdin.write(bio.getvalue())
pitch_proc.stdin.flush()
pitch_proc.stdin.close()
pitch_track = None
voiced_track = None
for _, pitch_track in read_feats(pitch_proc):
if len(pitch_track.shape) < 2:
self.signals.result.emit(
(None, None, self.channel, self.begin, self.end, self.min_f0, self.max_f0)
)
return
voiced_track = pitch_track[:, 0]
pitch_track = np.exp(pitch_track[:, 1])
pitch_proc.wait()
if self.stopCheck():
return
min_nccf = np.min(voiced_track)
max_nccf = np.max(voiced_track)
threshold = min_nccf + (max_nccf - min_nccf) * 0.45
voiced_frames = np.where(
(voiced_track <= threshold)
& (pitch_track < self.max_f0)
& (pitch_track > self.min_f0)
)
            if voiced_frames[0].shape[0] == 0:
normalized = None
else:
voiceless_frames = np.where(
(voiced_track > threshold)
| (pitch_track >= self.max_f0)
| (pitch_track <= self.min_f0)
)
min_f0 = int(np.min(pitch_track[voiced_frames])) - 1
max_f0 = int(np.max(pitch_track[voiced_frames])) + 1
normalized = (pitch_track - min_f0) / (max_f0 - min_f0)
height = self.normalized_max - self.normalized_min
normalized *= height
normalized = normalized + self.normalized_min
normalized[voiceless_frames] = np.nan
if self.stopCheck():
return
self.signals.result.emit(
(
normalized,
voiced_track,
self.channel,
self.begin,
self.end,
self.min_f0,
self.max_f0,
)
)
class DownloadWorker(FunctionWorker): # pragma: no cover
def __init__(self, *args):
super().__init__("Downloading model", *args)
def set_params(self, db_string: str, model_type: str, model_name: str, model_manager):
self.db_string = db_string
self.model_type = model_type
self.model_name = model_name
self.model_manager = model_manager
def run(self):
try:
engine = sqlalchemy.create_engine(self.db_string)
with sqlalchemy.orm.Session(engine) as session:
model = (
session.query(anchor.db.MODEL_TYPES[self.model_type])
.filter_by(name=self.model_name)
.first()
)
if model.available_locally:
return
self.model_manager.download_model(self.model_type, self.model_name)
model.available_locally = True
model.path = MODEL_TYPES[self.model_type].get_pretrained_path(self.model_name)
model.last_used = datetime.datetime.now()
session.commit()
self.signals.result.emit((self.model_type, self.model_name)) # Done
except Exception:
exctype, value = sys.exc_info()[:2]
self.signals.error.emit((exctype, value, traceback.format_exc()))
finally:
self.signals.finished.emit() # Done
class ImportCorpusWorker(FunctionWorker): # pragma: no cover
def __init__(self, *args):
super().__init__("Importing corpus", *args)
def stop(self):
if hasattr(self, "corpus") and self.corpus is not None:
self.corpus.stopped.stop()
def set_params(self, corpus_path: str, dictionary_path: str, reset=False):
self.corpus_path = corpus_path
self.dictionary_path = dictionary_path
self.reset = reset
def run(self):
os.environ[MFA_PROFILE_VARIABLE] = "anchor"
GLOBAL_CONFIG.load()
GLOBAL_CONFIG.current_profile.clean = self.reset
corpus_name = os.path.basename(self.corpus_path)
dataset_type = inspect_database(corpus_name)
try:
if dataset_type is DatasetType.NONE:
if self.dictionary_path and os.path.exists(self.dictionary_path):
self.corpus = AcousticCorpusWithPronunciations(
corpus_directory=self.corpus_path, dictionary_path=self.dictionary_path
)
self.corpus.initialize_database()
self.corpus.dictionary_setup()
else:
self.corpus = AcousticCorpus(corpus_directory=self.corpus_path)
self.corpus.initialize_database()
self.corpus._load_corpus()
elif (
dataset_type is DatasetType.ACOUSTIC_CORPUS_WITH_DICTIONARY
and self.dictionary_path
and os.path.exists(self.dictionary_path)
):
self.corpus = AcousticCorpusWithPronunciations(
corpus_directory=self.corpus_path, dictionary_path=self.dictionary_path
)
self.corpus.inspect_database()
else:
self.corpus = AcousticCorpus(corpus_directory=self.corpus_path)
self.corpus.inspect_database()
self.corpus._load_corpus()
if self.dictionary_path and os.path.exists(self.dictionary_path):
self.corpus.initialize_jobs()
self.corpus.normalize_text()
except Exception:
exctype, value = sys.exc_info()[:2]
self.signals.error.emit((exctype, value, traceback.format_exc()))
else:
if self.corpus.stopped.stop_check():
self.signals.result.emit(None)
else:
self.signals.result.emit(self.corpus) # Return the result of the processing
finally:
self.corpus = None
self.signals.finished.emit() # Done
class ReloadCorpusWorker(ImportCorpusWorker):
def __init__(self, *args):
FunctionWorker.__init__(self, "Reloading corpus", *args)
def set_params(self, corpus_path: str, dictionary_path: str):
self.corpus_path = corpus_path
self.dictionary_path = dictionary_path
def run(self):
self.settings.sync()
os.environ[MFA_PROFILE_VARIABLE] = "anchor"
GLOBAL_CONFIG.load()
try:
if self.dictionary_path and os.path.exists(self.dictionary_path):
self.corpus = AcousticCorpusWithPronunciations(
corpus_directory=self.corpus_path, dictionary_path=self.dictionary_path
)
else:
self.corpus = AcousticCorpus(corpus_directory=self.corpus_path)
            self.corpus._db_engine = self.corpus.construct_engine()
            with self.corpus.session() as session:
                file_count = session.query(File).count()
                files = session.query(File).options(
                    joinedload(File.sound_file, innerjoin=True),
                    joinedload(File.text_file, innerjoin=True),
                    selectinload(File.utterances).joinedload(Utterance.speaker, innerjoin=True),
                )
                utterance_mapping = []
                with tqdm.tqdm(total=file_count, disable=getattr(self, "quiet", False)) as pbar:
                    for file in files:
                        file_data = FileData.parse_file(
                            file.name,
                            file.sound_file.sound_file_path,
                            file.text_file.text_file_path,
                            file.relative_path,
                            self.corpus.speaker_characters,
                        )
                        utterances = {(u.speaker.name, u.begin, u.end): u for u in file.utterances}
                        for utt_data in file_data.utterances:
                            key = (utt_data.speaker_name, utt_data.begin, utt_data.end)
                            if key in utterances:
                                utt = utterances[key]
                            elif len(utterances) == 1:
                                utt = list(utterances.values())[0]
                            else:
                                mid_point = utt_data.begin + ((utt_data.end - utt_data.begin) / 2)
                                for k in utterances.keys():
                                    if k[0] != utt_data.speaker_name:
                                        continue
                                    if k[1] < mid_point < k[2]:
                                        utt = utterances[k]
                                        break
                                else:
                                    continue
                            utterance_mapping.append(
                                {
                                    "id": utt.id,
                                    "text": utt_data.text,
                                    "normalized_text": utt_data.normalized_text,
                                }
                            )
                        pbar.update(1)
                bulk_update(session, Utterance, utterance_mapping)
                session.commit()
except Exception:
exctype, value = sys.exc_info()[:2]
self.signals.error.emit((exctype, value, traceback.format_exc()))
else:
if self.corpus.stopped.stop_check():
self.signals.result.emit(None)
else:
self.signals.result.emit(self.corpus) # Return the result of the processing
finally:
self.signals.finished.emit() # Done
self.corpus = None
class LoadReferenceWorker(FunctionWorker): # pragma: no cover
def __init__(self, *args):
super().__init__("Loading reference alignments", *args)
self.corpus: typing.Optional[AcousticCorpus] = None
def set_params(self, corpus: AcousticCorpus, reference_directory: Path):
self.corpus = corpus
self.reference_directory = reference_directory
def run(self):
self.settings.sync()
os.environ[MFA_PROFILE_VARIABLE] = "anchor"
GLOBAL_CONFIG.load()
try:
with self.corpus.session() as session:
session.query(PhoneInterval).filter(
PhoneInterval.workflow_id == CorpusWorkflow.id,
CorpusWorkflow.workflow_type == WorkflowType.reference,
).delete(synchronize_session=False)
session.query(CorpusWorkflow).filter(
CorpusWorkflow.workflow_type == WorkflowType.reference
).delete(synchronize_session=False)
session.execute(sqlalchemy.update(Corpus).values(has_reference_alignments=False))
session.commit()
self.corpus.load_reference_alignments(self.reference_directory)
except Exception:
exctype, value = sys.exc_info()[:2]
self.signals.error.emit((exctype, value, traceback.format_exc()))
finally:
self.signals.result.emit(self.corpus) # Done
self.signals.finished.emit() # Done
class ImportDictionaryWorker(FunctionWorker): # pragma: no cover
def __init__(self, *args):
super().__init__("Importing dictionary", *args)
def set_params(self, corpus: AcousticCorpus, dictionary_path: str):
self.corpus = corpus
self.dictionary_path = dictionary_path
def run(self):
os.environ[MFA_PROFILE_VARIABLE] = "anchor"
GLOBAL_CONFIG.load()
self.corpus_temp_dir = os.path.join(self.settings.temp_directory, "corpus")
try:
corpus = AcousticCorpusWithPronunciations(
corpus_directory=self.corpus.corpus_directory, dictionary_path=self.dictionary_path
)
shutil.rmtree(corpus.output_directory, ignore_errors=True)
with corpus.session() as session:
session.query(Corpus).update({Corpus.text_normalized: False})
session.query(PhoneInterval).delete()
session.query(WordInterval).delete()
session.query(Pronunciation).delete()
session.query(Word).delete()
session.query(Phone).delete()
session.execute(sqlalchemy.update(Speaker).values(dictionary_id=None))
session.execute(
sqlalchemy.update(CorpusWorkflow).values(
done=False, alignments_collected=False, score=None
)
)
session.execute(Dictionary2Job.delete())
session.query(Dictionary).delete()
session.commit()
corpus.dictionary_setup()
with corpus.session() as session:
session.execute(
sqlalchemy.update(Speaker).values(dictionary_id=corpus._default_dictionary_id)
)
session.commit()
corpus.text_normalized = False
corpus.normalize_text()
self.signals.result.emit(corpus) # Done
except Exception:
exctype, value = sys.exc_info()[:2]
self.signals.error.emit((exctype, value, traceback.format_exc()))
finally:
self.signals.finished.emit() # Done
class OovCountWorker(FunctionWorker): # pragma: no cover
def __init__(self, *args):
super().__init__("Counting OOVs", *args)
def set_params(self, corpus: AcousticCorpus):
self.corpus = corpus
def run(self):
os.environ[MFA_PROFILE_VARIABLE] = "anchor"
GLOBAL_CONFIG.load()
try:
with self.corpus.session() as session:
session.query(Word).filter(Word.word_type == WordType.oov).delete()
session.commit()
self.corpus.text_normalized = False
self.corpus.normalize_text()
self.signals.result.emit(self.corpus) # Done
except Exception:
exctype, value = sys.exc_info()[:2]
self.signals.error.emit((exctype, value, traceback.format_exc()))
finally:
self.signals.finished.emit() # Done
class ImportAcousticModelWorker(FunctionWorker): # pragma: no cover
def __init__(self, *args):
super().__init__("Importing acoustic model", *args)
def set_params(self, model_path: str):
self.model_path = model_path
def run(self):
os.environ[MFA_PROFILE_VARIABLE] = "anchor"
GLOBAL_CONFIG.load()
if not self.model_path:
return
try:
acoustic_model = AcousticModel(self.model_path)
except Exception:
if os.path.exists(self.model_path):
exctype, value = sys.exc_info()[:2]
self.signals.error.emit((exctype, value, traceback.format_exc()))
else:
self.signals.result.emit(acoustic_model) # Return the result of the processing
finally:
self.signals.finished.emit() # Done
class ImportLanguageModelWorker(FunctionWorker): # pragma: no cover
def __init__(self, *args):
super().__init__("Importing language model", *args)
def set_params(self, model_path: str):
self.model_path = model_path
def run(self):
os.environ[MFA_PROFILE_VARIABLE] = "anchor"
GLOBAL_CONFIG.load()
if not self.model_path:
return
try:
language_model = LanguageModel(self.model_path)
except Exception:
exctype, value = sys.exc_info()[:2]
self.signals.error.emit((exctype, value, traceback.format_exc()))
else:
self.signals.result.emit(language_model) # Return the result of the processing
finally:
self.signals.finished.emit() # Done
class ImportG2PModelWorker(FunctionWorker): # pragma: no cover
def __init__(self, *args):
super().__init__("Importing G2P model", *args)
def set_params(self, model_path: str):
self.model_path = model_path
def run(self):
os.environ[MFA_PROFILE_VARIABLE] = "anchor"
GLOBAL_CONFIG.load()
if not self.model_path:
return
try:
generator = Generator(g2p_model_path=self.model_path, num_pronunciations=5)
generator.setup()
except Exception:
exctype, value = sys.exc_info()[:2]
self.signals.error.emit((exctype, value, traceback.format_exc()))
else:
self.signals.result.emit(generator) # Return the result of the processing
finally:
self.signals.finished.emit() # Done
class ImportIvectorExtractorWorker(FunctionWorker): # pragma: no cover
def __init__(self, *args):
super().__init__("Importing ivector extractor", *args)
def set_params(self, model_path: str):
self.model_path = model_path
def run(self):
os.environ[MFA_PROFILE_VARIABLE] = "anchor"
GLOBAL_CONFIG.load()
if not self.model_path:
return
try:
if str(self.model_path) == "speechbrain":
model = "speechbrain"
else:
model = IvectorExtractorModel(self.model_path)
except Exception:
exctype, value = sys.exc_info()[:2]
self.signals.error.emit((exctype, value, traceback.format_exc()))
else:
self.signals.result.emit(model) # Return the result of the processing
finally:
self.signals.finished.emit() # Done
class AlignUtteranceWorker(FunctionWorker): # pragma: no cover
def __init__(self, *args):
super().__init__("Aligning utterance", *args)
self.corpus: typing.Optional[AcousticCorpusWithPronunciations] = None
self.acoustic_model: typing.Optional[AcousticModel] = None
def set_params(
self, corpus: AcousticCorpusWithPronunciations, acoustic_model: AcousticModel, utterance_id
):
self.corpus = corpus
self.acoustic_model = acoustic_model
self.utterance_id = utterance_id
def run(self):
self.settings.sync()
os.environ[MFA_PROFILE_VARIABLE] = "anchor"
GLOBAL_CONFIG.load()
try:
aligner = PretrainedAligner(
acoustic_model_path=self.acoustic_model.source,
corpus_directory=self.corpus.corpus_directory,
dictionary_path=self.corpus.dictionary_model.path,
)
aligner.inspect_database()
aligner.corpus_output_directory = self.corpus.corpus_output_directory
aligner.dictionary_output_directory = self.corpus.dictionary_output_directory
aligner.non_silence_phones = self.corpus.non_silence_phones
aligner.acoustic_model = self.acoustic_model
with aligner.session() as session:
utterance = (
session.query(Utterance)
.options(
joinedload(Utterance.file, innerjoin=True).joinedload(
File.sound_file, innerjoin=True
),
joinedload(Utterance.speaker, innerjoin=True).joinedload(
Speaker.dictionary, innerjoin=True
),
)
.get(self.utterance_id)
)
aligner.align_one_utterance(utterance, session)
except Exception:
exctype, value = sys.exc_info()[:2]
self.signals.error.emit((exctype, value, traceback.format_exc()))
else:
self.signals.result.emit(self.utterance_id) # Return the result of the processing
finally:
self.signals.finished.emit() # Done
class SegmentUtteranceWorker(FunctionWorker): # pragma: no cover
def __init__(self, *args):
super().__init__("Segmenting utterance", *args)
self.corpus: typing.Optional[AcousticCorpusWithPronunciations] = None
self.acoustic_model: typing.Optional[AcousticModel] = None
def set_params(
self, corpus: AcousticCorpusWithPronunciations, acoustic_model: AcousticModel, utterance_id
):
self.corpus = corpus
self.acoustic_model = acoustic_model
self.utterance_id = utterance_id
def run(self):
self.settings.sync()
os.environ[MFA_PROFILE_VARIABLE] = "anchor"
GLOBAL_CONFIG.load()
try:
segmenter = TranscriptionSegmenter(
acoustic_model_path=self.acoustic_model.source,
corpus_directory=self.corpus.corpus_directory,
dictionary_path=self.corpus.dictionary_model.path,
speechbrain=True,
)
segmenter.inspect_database()
segmenter.corpus_output_directory = self.corpus.corpus_output_directory
segmenter.dictionary_output_directory = self.corpus.dictionary_output_directory
segmenter.non_silence_phones = self.corpus.non_silence_phones
segmenter.acoustic_model = self.acoustic_model
segmenter.create_new_current_workflow(WorkflowType.segmentation)
segmenter.setup_acoustic_model()
segmenter.write_lexicon_information(write_disambiguation=True)
with segmenter.session() as session:
sub_utterances = segment_utterance(
session,
segmenter.working_directory,
self.utterance_id,
segmenter.vad_model,
segmenter.segmentation_options,
segmenter.mfcc_options,
segmenter.pitch_options,
segmenter.lda_options,
segmenter.decode_options,
)
except Exception:
exctype, value = sys.exc_info()[:2]
self.signals.error.emit((exctype, value, traceback.format_exc()))
else:
self.signals.result.emit(sub_utterances) # Return the result of the processing
finally:
self.signals.finished.emit() # Done
class AlignmentWorker(FunctionWorker): # pragma: no cover
def __init__(self, *args):
super().__init__("Aligning", *args)
self.corpus: typing.Optional[AcousticCorpusWithPronunciations] = None
self.dictionary: typing.Optional[MultispeakerDictionary] = None
self.acoustic_model: typing.Optional[AcousticModel] = None
def set_params(
self,
corpus: AcousticCorpusWithPronunciations,
acoustic_model: AcousticModel,
parameters=None,
):
self.corpus = corpus
self.acoustic_model = acoustic_model
self.parameters = parameters
if self.parameters is None:
self.parameters = {}
def run(self):
self.settings.sync()
os.environ[MFA_PROFILE_VARIABLE] = "anchor"
GLOBAL_CONFIG.load()
try:
logger.info("Resetting any previous alignments...")
with self.corpus.session() as session:
session.query(PhoneInterval).filter(
PhoneInterval.workflow_id == CorpusWorkflow.id,
CorpusWorkflow.workflow_type == WorkflowType.alignment,
).delete(synchronize_session=False)
session.query(WordInterval).filter(
WordInterval.workflow_id == CorpusWorkflow.id,
CorpusWorkflow.workflow_type == WorkflowType.alignment,
).delete(synchronize_session=False)
session.query(CorpusWorkflow).filter(
CorpusWorkflow.workflow_type == WorkflowType.alignment
).delete(synchronize_session=False)
session.execute(
sqlalchemy.update(Corpus).values(
features_generated=False,
text_normalized=False,
alignment_evaluation_done=False,
alignment_done=False,
)
)
session.execute(sqlalchemy.update(Speaker).values(cmvn=None, fmllr=False))
session.execute(
sqlalchemy.update(Utterance).values(
features=None,
ignored=False,
alignment_log_likelihood=None,
duration_deviation=None,
speech_log_likelihood=None,
)
)
session.commit()
logger.info("Reset complete!")
aligner = PretrainedAligner(
acoustic_model_path=self.acoustic_model.source,
corpus_directory=self.corpus.corpus_directory,
dictionary_path=self.corpus.dictionary_model.path,
**self.parameters,
)
aligner.inspect_database()
aligner.clean_working_directory()
aligner.corpus_output_directory = self.corpus.corpus_output_directory
aligner.dictionary_output_directory = self.corpus.dictionary_output_directory
aligner.acoustic_model = self.acoustic_model
aligner.align()
aligner.collect_alignments()
aligner.analyze_alignments()
except Exception:
exctype, value = sys.exc_info()[:2]
self.signals.error.emit((exctype, value, traceback.format_exc()))
finally:
self.signals.finished.emit() # Done
class ComputeIvectorWorker(FunctionWorker): # pragma: no cover
def __init__(self, *args):
super().__init__("Computing ivectors", *args)
self.corpus: typing.Optional[AcousticCorpusWithPronunciations] = None
self.ivector_extractor: typing.Optional[IvectorExtractorModel] = None
self.reset = False
def set_params(
self,
corpus: AcousticCorpusWithPronunciations,
ivector_extractor: IvectorExtractorModel,
reset=False,
parameters=None,
):
self.corpus = corpus
self.ivector_extractor = ivector_extractor
self.parameters = parameters
self.reset = reset
if self.parameters is None:
self.parameters = {}
def run(self):
self.settings.sync()
os.environ[MFA_PROFILE_VARIABLE] = "anchor"
GLOBAL_CONFIG.load()
logger = logging.getLogger("anchor")
try:
logger.debug("Beginning ivector computation")
logger.info("Resetting ivectors...")
with self.corpus.session() as session:
if self.reset:
session.execute(
sqlalchemy.update(Corpus).values(
ivectors_calculated=False, plda_calculated=False, xvectors_loaded=False
)
)
session.execute(sqlalchemy.update(Utterance).values(ivector=None))
session.execute(sqlalchemy.update(Utterance).values(xvector=None))
session.execute(sqlalchemy.update(Speaker).values(xvector=None))
session.execute(sqlalchemy.update(Speaker).values(ivector=None))
session.commit()
diarizer = SpeakerDiarizer(
ivector_extractor_path=self.ivector_extractor.source
if self.ivector_extractor != "speechbrain"
else self.ivector_extractor,
corpus_directory=self.corpus.corpus_directory,
cuda=True,
**self.parameters,
)
diarizer.inspect_database()
diarizer.corpus_output_directory = self.corpus.corpus_output_directory
diarizer.dictionary_output_directory = self.corpus.dictionary_output_directory
diarizer.setup()
diarizer.cleanup_empty_speakers()
except Exception:
exctype, value = sys.exc_info()[:2]
self.signals.error.emit((exctype, value, traceback.format_exc()))
finally:
self.signals.finished.emit() # Done
class ClusterUtterancesWorker(FunctionWorker): # pragma: no cover
def __init__(self, *args):
super().__init__("Clustering utterances", *args)
self.corpus: typing.Optional[AcousticCorpusWithPronunciations] = None
self.ivector_extractor: typing.Optional[IvectorExtractorModel] = None
def set_params(
self,
corpus: AcousticCorpusWithPronunciations,
ivector_extractor: IvectorExtractorModel,
parameters=None,
):
self.corpus = corpus
self.ivector_extractor = ivector_extractor
self.parameters = parameters
if self.parameters is None:
self.parameters = {}
def run(self):
self.settings.sync()
os.environ[MFA_PROFILE_VARIABLE] = "anchor"
GLOBAL_CONFIG.load()
logger = logging.getLogger("anchor")
try:
logger.debug("Beginning clustering")
self.parameters["cluster_type"] = "mfa"
self.parameters["distance_threshold"] = self.settings.value(
self.settings.CLUSTERING_DISTANCE_THRESHOLD
)
self.parameters["metric"] = self.settings.value(self.settings.CLUSTERING_METRIC)
self.parameters["expected_num_speakers"] = self.settings.value(
self.settings.CLUSTERING_N_CLUSTERS
)
diarizer = SpeakerDiarizer(
ivector_extractor_path=self.ivector_extractor.source
if self.ivector_extractor != "speechbrain"
else self.ivector_extractor,
corpus_directory=self.corpus.corpus_directory,
cuda=self.settings.value(self.settings.CUDA),
cluster=True,
**self.parameters,
)
diarizer.inspect_database()
diarizer.corpus_output_directory = self.corpus.corpus_output_directory
diarizer.dictionary_output_directory = self.corpus.dictionary_output_directory
if not diarizer.has_any_ivectors():
diarizer.setup()
else:
diarizer.initialized = True
diarizer.create_new_current_workflow(WorkflowType.speaker_diarization)
diarizer.cluster_utterances()
with diarizer.session() as session:
session.query(File).update({File.modified: True})
session.commit()
except Exception:
exctype, value = sys.exc_info()[:2]
self.signals.error.emit((exctype, value, traceback.format_exc()))
finally:
self.signals.finished.emit() # Done
class ClassifySpeakersWorker(FunctionWorker): # pragma: no cover
def __init__(self, *args):
super().__init__("Classifying speakers", *args)
self.corpus: typing.Optional[AcousticCorpusWithPronunciations] = None
self.ivector_extractor: typing.Optional[IvectorExtractorModel] = None
def set_params(
self,
corpus: AcousticCorpusWithPronunciations,
ivector_extractor: IvectorExtractorModel,
parameters=None,
):
self.corpus = corpus
self.ivector_extractor = ivector_extractor
self.parameters = parameters
if self.parameters is None:
self.parameters = {}
def run(self):
self.settings.sync()
os.environ[MFA_PROFILE_VARIABLE] = "anchor"
GLOBAL_CONFIG.load()
logger = logging.getLogger("anchor")
try:
logger.debug("Beginning speaker classification")
diarizer = SpeakerDiarizer(
ivector_extractor_path=self.ivector_extractor.source
if self.ivector_extractor != "speechbrain"
else self.ivector_extractor,
corpus_directory=self.corpus.corpus_directory, # score_threshold = 0.5,
cluster=False,
cuda=self.settings.value(self.settings.CUDA),
**self.parameters,
)
diarizer.inspect_database()
diarizer.corpus_output_directory = self.corpus.corpus_output_directory
diarizer.dictionary_output_directory = self.corpus.dictionary_output_directory
diarizer.classify_speakers()
except Exception:
exctype, value = sys.exc_info()[:2]
self.signals.error.emit((exctype, value, traceback.format_exc()))
finally:
self.signals.finished.emit() # Done
class AlignmentEvaluationWorker(FunctionWorker): # pragma: no cover
def __init__(self, *args):
super().__init__("Evaluating alignments", *args)
self.corpus: typing.Optional[AcousticCorpusWithPronunciations] = None
self.dictionary: typing.Optional[MultispeakerDictionary] = None
self.acoustic_model: typing.Optional[AcousticModel] = None
self.custom_mapping_path: typing.Optional[str] = None
def set_params(
self,
corpus: AcousticCorpusWithPronunciations,
acoustic_model: AcousticModel,
custom_mapping_path: str,
):
self.corpus = corpus
self.acoustic_model = acoustic_model
self.custom_mapping_path = custom_mapping_path
def run(self):
self.settings.sync()
os.environ[MFA_PROFILE_VARIABLE] = "anchor"
GLOBAL_CONFIG.load()
try:
self.corpus.alignment_evaluation_done = False
with self.corpus.session() as session:
session.execute(sqlalchemy.update(Corpus).values(alignment_evaluation_done=False))
session.execute(
sqlalchemy.update(Utterance).values(
phone_error_rate=None, alignment_score=None
)
)
session.commit()
aligner = PretrainedAligner(
acoustic_model_path=self.acoustic_model.source,
corpus_directory=self.corpus.corpus_directory,
dictionary_path=self.corpus.dictionary_model.path,
)
aligner.inspect_database()
aligner.corpus_output_directory = self.corpus.corpus_output_directory
aligner.dictionary_output_directory = self.corpus.dictionary_output_directory
aligner.acoustic_model = self.acoustic_model
mapping = None
if self.custom_mapping_path and os.path.exists(self.custom_mapping_path):
with open(self.custom_mapping_path, "r", encoding="utf8") as f:
mapping = yaml.safe_load(f)
aligner.evaluate_alignments(mapping=mapping)
except Exception:
exctype, value = sys.exc_info()[:2]
self.signals.error.emit((exctype, value, traceback.format_exc()))
finally:
self.signals.finished.emit() # Done
class TranscriptionWorker(FunctionWorker): # pragma: no cover
def __init__(self, *args):
super().__init__("Transcribing", *args)
self.corpus: typing.Optional[
typing.Union[AcousticCorpus, AcousticCorpusWithPronunciations]
] = None
self.acoustic_model: typing.Optional[AcousticModel] = None
self.language_model: typing.Optional[LanguageModel] = None
def set_params(
self,
corpus: AcousticCorpusWithPronunciations,
acoustic_model: AcousticModel,
language_model: LanguageModel,
):
self.corpus = corpus
self.acoustic_model = acoustic_model
self.language_model = language_model
def run(self):
self.settings.sync()
os.environ[MFA_PROFILE_VARIABLE] = "anchor"
GLOBAL_CONFIG.load()
try:
with self.corpus.session() as session:
session.query(PhoneInterval).filter(
PhoneInterval.workflow_id == CorpusWorkflow.id
).filter(CorpusWorkflow.workflow_type == WorkflowType.transcription).delete(
synchronize_session="fetch"
)
session.query(WordInterval).filter(
WordInterval.workflow_id == CorpusWorkflow.id
).filter(CorpusWorkflow.workflow_type == WorkflowType.transcription).delete(
synchronize_session="fetch"
)
session.query(CorpusWorkflow).filter(
CorpusWorkflow.workflow_type == WorkflowType.transcription
).delete()
session.query(Utterance).update({Utterance.transcription_text: None})
session.commit()
transcriber = Transcriber(
acoustic_model_path=self.acoustic_model.source,
language_model_path=self.language_model.source,
corpus_directory=self.corpus.corpus_directory,
dictionary_path=self.corpus.dictionary_model.path,
evaluation_mode=True,
max_language_model_weight=17,
min_language_model_weight=16,
word_insertion_penalties=[1.0],
)
transcriber.inspect_database()
transcriber.corpus_output_directory = self.corpus.corpus_output_directory
transcriber.dictionary_output_directory = self.corpus.dictionary_output_directory
transcriber.acoustic_model = self.acoustic_model
transcriber.language_model = self.language_model
transcriber.setup()
transcriber.transcribe()
except Exception:
exctype, value = sys.exc_info()[:2]
self.signals.error.emit((exctype, value, traceback.format_exc()))
finally:
self.signals.finished.emit() # Done
class FeatureGeneratorWorker(FunctionWorker): # pragma: no cover
def __init__(self, *args):
super().__init__("Generating features", *args)
self.queue = queue.Queue()
self.corpus: typing.Optional[AcousticCorpusWithPronunciations] = None
self.acoustic_model: typing.Optional[AcousticModel] = None
self.ivector_extractor: typing.Optional[IvectorExtractorModel] = None
self.stopped = Stopped()
self._db_engine = None
def set_params(
self,
corpus: AcousticCorpusWithPronunciations,
acoustic_model: AcousticModel = None,
ivector_extractor: IvectorExtractorModel = None,
):
self.corpus = corpus
self.db_string = corpus.db_string
self.acoustic_model = acoustic_model
self.ivector_extractor = ivector_extractor
@property
def db_engine(self):
if self._db_engine is None:
self._db_engine = sqlalchemy.create_engine(self.db_string)
return self._db_engine
def run(self):
while True:
try:
utterance_id = self.queue.get(timeout=5)
if self.stopped.stopCheck():
continue
except queue.Empty:
if self.stopped.stopCheck():
break
continue
if self.stopped.stopCheck():
break
with sqlalchemy.orm.Session(self.db_engine) as session:
utterance = (
session.query(Utterance)
.options(joinedload(Utterance.file).joinedload(File.sound_file))
.get(utterance_id)
)
wave, _ = librosa.load(
utterance.file.sound_file.sound_file_path,
sr=16000,
offset=utterance.begin,
duration=utterance.duration,
mono=False,
)
if len(wave.shape) == 2:
wave = wave[utterance.channel, :]
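The channel handling above relies on librosa's shape convention: with mono=False, multichannel audio comes back as a (channels, samples) array, while mono audio stays 1-D, so only the 2-D case needs indexing. A small sketch of that selection (function name hypothetical):

```python
import numpy as np

# librosa.load(..., mono=False) returns a (channels, samples) array for
# multichannel files and a 1-D (samples,) array for mono; only the 2-D
# case needs to pick out the utterance's channel.
def select_channel(wave: np.ndarray, channel: int) -> np.ndarray:
    if wave.ndim == 2:
        return wave[channel, :]
    return wave

stereo = np.vstack([np.zeros(4), np.ones(4)])  # fake 2-channel signal
print(select_channel(stereo, 1))  # → [1. 1. 1. 1.]
```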
class ValidationWorker(FunctionWorker): # pragma: no cover
def __init__(self, *args):
super().__init__("Validating", *args)
self.corpus: typing.Optional[AcousticCorpusWithPronunciations] = None
self.acoustic_model: typing.Optional[AcousticModel] = None
self.frequent_word_count = 100
self.test_transcriptions = True
def set_params(
self,
corpus: AcousticCorpusWithPronunciations,
acoustic_model: AcousticModel,
target_num_ngrams,
test_transcriptions=True,
):
self.corpus = corpus
self.acoustic_model = acoustic_model
self.target_num_ngrams = target_num_ngrams
self.test_transcriptions = test_transcriptions
def run(self):
self.settings.sync()
os.environ[MFA_PROFILE_VARIABLE] = "anchor"
GLOBAL_CONFIG.load()
GLOBAL_CONFIG.profiles["anchor"].clean = False
GLOBAL_CONFIG.save()
try:
with self.corpus.session() as session:
session.query(PhoneInterval).filter(
PhoneInterval.workflow_id == CorpusWorkflow.id
).filter(
CorpusWorkflow.workflow_type == WorkflowType.per_speaker_transcription
).delete(
synchronize_session="fetch"
)
session.query(WordInterval).filter(
WordInterval.workflow_id == CorpusWorkflow.id
).filter(
CorpusWorkflow.workflow_type == WorkflowType.per_speaker_transcription
).delete(
synchronize_session="fetch"
)
session.query(CorpusWorkflow).filter(
CorpusWorkflow.workflow_type == WorkflowType.per_speaker_transcription
).delete()
session.query(Utterance).update({Utterance.transcription_text: None})
session.commit()
validator = PretrainedValidator(
acoustic_model_path=self.acoustic_model.source,
corpus_directory=self.corpus.corpus_directory,
dictionary_path=self.corpus.dictionary_model.path,
test_transcriptions=self.test_transcriptions,
target_num_ngrams=self.target_num_ngrams,
first_max_active=750,
)
validator.inspect_database()
validator.corpus_output_directory = self.corpus.corpus_output_directory
validator.dictionary_output_directory = self.corpus.dictionary_output_directory
validator.acoustic_model = self.acoustic_model
validator.create_new_current_workflow(WorkflowType.alignment)
validator.setup()
validator.align()
validator.test_utterance_transcriptions()
except Exception:
exctype, value = sys.exc_info()[:2]
self.signals.error.emit((exctype, value, traceback.format_exc()))
finally:
self.signals.finished.emit() # Done
Anchor-annotator
/Anchor_annotator-0.0.9.tar.gz/Anchor_annotator-0.0.9/anchor/workers.py
workers.py
from __future__ import annotations
import re
import typing
from typing import TYPE_CHECKING, Optional
import numpy as np
from montreal_forced_aligner.data import ( # noqa
ClusterType,
DistanceMetric,
ManifoldAlgorithm,
PhoneSetType,
PhoneType,
)
from montreal_forced_aligner.db import Phone, Speaker # noqa
from montreal_forced_aligner.utils import DatasetType, inspect_database # noqa
from PySide6 import QtCore, QtGui, QtMultimedia, QtSvgWidgets, QtWidgets
import anchor.resources_rc # noqa
from anchor.models import (
CorpusModel,
CorpusSelectionModel,
DictionaryTableModel,
MergeSpeakerModel,
OovModel,
SpeakerModel,
TextFilterQuery,
)
from anchor.plot import UtteranceClusterView, UtteranceView
from anchor.settings import AnchorSettings
from anchor.workers import Worker
if TYPE_CHECKING:
from anchor.main import MainWindow
outside_column_ratio = 0.2
outside_column_minimum = 250
class ErrorButtonBox(QtWidgets.QDialogButtonBox):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.setStandardButtons(QtWidgets.QDialogButtonBox.StandardButton.Close)
self.report_bug_button = QtWidgets.QPushButton("Report bug")
self.report_bug_button.setIcon(QtGui.QIcon(":external-link.svg"))
self.addButton(self.report_bug_button, QtWidgets.QDialogButtonBox.ButtonRole.ActionRole)
class MediaPlayer(QtMultimedia.QMediaPlayer): # pragma: no cover
timeChanged = QtCore.Signal(object)
audioReady = QtCore.Signal(object)
def __init__(self, *args):
super(MediaPlayer, self).__init__(*args)
self.settings = AnchorSettings()
self.devices = QtMultimedia.QMediaDevices()
self.devices.audioOutputsChanged.connect(self.update_audio_device)
self.max_time = None
self.start_load_time = None
self.min_time = None
self.corpus_model = None
self.selection_model = None
self.timer = QtCore.QTimer(self)
self.timer.setInterval(1)
self.timer.timeout.connect(self.checkStop)
# self.positionChanged.connect(self.checkStop)
# self.positionChanged.connect(self.positionDebug)
self.errorOccurred.connect(self.handle_error)
o = None
for o in QtMultimedia.QMediaDevices.audioOutputs():
if o.id() == self.settings.value(self.settings.AUDIO_DEVICE):
break
self._audio_output = QtMultimedia.QAudioOutput(o)
self._audio_output.setDevice(self.devices.defaultAudioOutput())
self.setAudioOutput(self._audio_output)
self.playbackStateChanged.connect(self.reset_position)
self.mediaStatusChanged.connect(self.update_load)
self.fade_in_anim = QtCore.QPropertyAnimation(self._audio_output, b"volume")
self.fade_in_anim.setDuration(10)
self.fade_in_anim.setStartValue(0.1)
self.fade_in_anim.setEndValue(self._audio_output.volume())
self.fade_in_anim.setEasingCurve(QtCore.QEasingCurve.Type.Linear)
self.fade_in_anim.setKeyValueAt(0.1, 0.1)
self.fade_out_anim = QtCore.QPropertyAnimation(self._audio_output, b"volume")
self.fade_out_anim.setDuration(5)
self.fade_out_anim.setStartValue(self._audio_output.volume())
self.fade_out_anim.setEndValue(0)
self.fade_out_anim.setEasingCurve(QtCore.QEasingCurve.Type.Linear)
self.fade_out_anim.setKeyValueAt(0.1, self._audio_output.volume())
self.fade_out_anim.finished.connect(super().pause)
self.file_path = None
def update_load(self, state):
if state == self.MediaStatus.LoadedMedia:
self.reset_position()
self.audioReady.emit(True)
def handle_error(self, *args):
print("Media player error:", args)
def play(self) -> None:
if self.startTime() is None:
return
self._audio_output.setVolume(0.1)
if (
self.playbackState() == QtMultimedia.QMediaPlayer.PlaybackState.StoppedState
or self.currentTime() < self.startTime()
or self.currentTime() >= self.maxTime()
):
self.setCurrentTime(self.startTime())
super(MediaPlayer, self).play()
self.fade_in_anim.start()
def startTime(self):
if self.selection_model.selected_min_time is not None:
return self.selection_model.selected_min_time
return self.selection_model.min_time
def maxTime(self):
if self.selection_model.selected_max_time is not None:
return self.selection_model.selected_max_time
return self.selection_model.max_time
def reset_position(self):
state = self.playbackState()
if state == QtMultimedia.QMediaPlayer.PlaybackState.StoppedState:
self.timer.stop()
self.setCurrentTime(self.startTime())
self.timeChanged.emit(self.currentTime())
elif state == QtMultimedia.QMediaPlayer.PlaybackState.PlayingState:
self.timer.start()
elif state == QtMultimedia.QMediaPlayer.PlaybackState.PausedState:
self.timer.stop()
def update_audio_device(self):
self._audio_output.setDevice(self.devices.defaultAudioOutput())
def refresh_settings(self):
self.settings.sync()
o = None
for o in QtMultimedia.QMediaDevices.audioOutputs():
if o.id() == self.settings.value(self.settings.AUDIO_DEVICE):
break
self._audio_output.setDevice(o)
def set_corpus_models(
self, corpus_model: Optional[CorpusModel], selection_model: Optional[CorpusSelectionModel]
):
self.corpus_model = corpus_model
self.selection_model = selection_model
if corpus_model is None:
return
# self.selection_model.fileAboutToChange.connect(self.unload_file)
self.selection_model.fileChanged.connect(self.loadNewFile)
self.selection_model.viewChanged.connect(self.update_times)
self.selection_model.selectionAudioChanged.connect(self.update_selection_times)
self.selection_model.currentTimeChanged.connect(self.update_selection_times)
def set_volume(self, volume: int):
if self.audioOutput() is None:
return
linearVolume = QtMultimedia.QAudio.convertVolume(
volume / 100.0,
QtMultimedia.QAudio.VolumeScale.LogarithmicVolumeScale,
QtMultimedia.QAudio.VolumeScale.LinearVolumeScale,
)
self.audioOutput().setVolume(linearVolume)
def volume(self) -> int:
if self.audioOutput() is None:
return 100
linearVolume = self.audioOutput().volume()
logarithmicVolume = QtMultimedia.QAudio.convertVolume(
linearVolume,
QtMultimedia.QAudio.VolumeScale.LinearVolumeScale,
QtMultimedia.QAudio.VolumeScale.LogarithmicVolumeScale,
)
return int(logarithmicVolume * 100)
def update_selection_times(self):
self.setCurrentTime(self.startTime())
def update_times(self):
if (
self.playbackState() == QtMultimedia.QMediaPlayer.PlaybackState.StoppedState
or self.currentTime() < self.startTime()
or self.currentTime() > self.maxTime()
):
self.setCurrentTime(self.startTime())
def loadNewFile(self, *args):
self.audioReady.emit(False)
self.stop()
try:
new_file = self.selection_model.current_file.sound_file.sound_file_path
except Exception:
self.setSource(QtCore.QUrl())
return
if (
self.selection_model.max_time is None
or self.selection_model.current_file is None
or self.selection_model.current_file.duration is None
):
self.setSource(QtCore.QUrl())
return
self.channels = self.selection_model.current_file.num_channels
self.setSource(QtCore.QUrl.fromLocalFile(str(new_file)))
self.setPosition(0)
self.audioReady.emit(True)
def currentTime(self):
pos = self.position()
return pos / 1000
def setMaxTime(self, max_time):
if max_time is None:
return
self.max_time = max_time * 1000
def setMinTime(
self, min_time
): # Positions for MediaPlayer are in milliseconds, no SR required
if min_time is None:
min_time = 0
self.min_time = int(min_time * 1000)
self.setCurrentTime(min_time)
def setCurrentTime(self, time):
if time is None:
time = 0
if self.playbackState() == QtMultimedia.QMediaPlayer.PlaybackState.PlayingState:
return
pos = int(time * 1000)
self.setPosition(pos)
self.timeChanged.emit(self.currentTime())
def checkStop(self):
if not self.hasAudio():
self.stop()
self.setSource(
QtCore.QUrl.fromLocalFile(
self.selection_model.current_file.sound_file.sound_file_path
)
)
self.play()
return
if self.playbackState() == QtMultimedia.QMediaPlayer.PlaybackState.PlayingState:
if self.maxTime() is None or self.currentTime() > self.maxTime():
self.stop()
self.reset_position()
self.timeChanged.emit(self.currentTime())
class NewSpeakerField(QtWidgets.QLineEdit):
enableAddSpeaker = QtCore.Signal(object)
@property
def _internal_layout(self):
if not hasattr(self, "_internal_layout_"):
self._internal_layout_ = QtWidgets.QHBoxLayout(self)
self._internal_layout_.addStretch()
self._internal_layout_.setContentsMargins(1, 1, 1, 1)
self._internal_layout_.setSpacing(0)
return self._internal_layout_
def add_button(self, button):
self._internal_layout.insertWidget(self._internal_layout.count(), button)
button.setFocusProxy(self)
def _fix_cursor_position(self, button):
self.setTextMargins(button.geometry().right(), 0, 0, 0)
def __init__(self, *args):
super(NewSpeakerField, self).__init__(*args)
self.setObjectName("new_speaker_field")
self.setSizePolicy(
QtWidgets.QSizePolicy.Policy.Expanding, QtWidgets.QSizePolicy.Policy.Preferred
)
clear_icon = QtGui.QIcon()
clear_icon.addFile(":clear.svg", mode=QtGui.QIcon.Mode.Normal, state=QtGui.QIcon.State.Off)
clear_icon.addFile(":disabled/clear.svg", mode=QtGui.QIcon.Mode.Active)
self.clear_action = QtGui.QAction(icon=clear_icon, parent=self)
self.clear_action.triggered.connect(self.clear)
self.clear_action.setVisible(False)
self.textChanged.connect(self.check_contents)
self.tool_bar = QtWidgets.QToolBar()
self.tool_bar.addAction(self.clear_action)
w = self.tool_bar.widgetForAction(self.clear_action)
w.setObjectName("clear_new_speaker_field")
w.setCursor(QtCore.Qt.CursorShape.PointingHandCursor)
self.add_button(self.tool_bar)
self.save_action = None
def check_contents(self):
if self.text():
self.clear_action.setVisible(True)
self.enableAddSpeaker.emit(True)
else:
self.clear_action.setVisible(False)
self.enableAddSpeaker.emit(False)
class HelpDropDown(QtWidgets.QToolButton):
def __init__(self, *args):
super(HelpDropDown, self).__init__(*args)
self.setAttribute(QtCore.Qt.WidgetAttribute.WA_StyledBackground, True)
self.menu = QtWidgets.QMenu(self)
self.setToolButtonStyle(QtCore.Qt.ToolButtonStyle.ToolButtonIconOnly)
self.setPopupMode(QtWidgets.QToolButton.ToolButtonPopupMode.MenuButtonPopup)
self.setMenu(self.menu)
self.clicked.connect(self.showMenu)
def addAction(self, action: "QtGui.QAction") -> None:
self.menu.addAction(action)
class SpeakerDropDown(QtWidgets.QToolButton):
def __init__(self, *args):
super(SpeakerDropDown, self).__init__(*args)
self.setAttribute(QtCore.Qt.WidgetAttribute.WA_StyledBackground, True)
self.current_speaker = ""
self.menu = QtWidgets.QMenu(self)
self.speakers = []
self.setToolButtonStyle(QtCore.Qt.ToolButtonStyle.ToolButtonTextBesideIcon)
self.setPopupMode(QtWidgets.QToolButton.ToolButtonPopupMode.MenuButtonPopup)
self.setMenu(self.menu)
self.menu.triggered.connect(self.select_speaker)
self.clicked.connect(self.showMenu)
def select_speaker(self, action):
s = action.text()
self.setCurrentSpeaker(s)
self.defaultAction().trigger()
def refresh_speaker_dropdown(self, speakers):
self.speakers = speakers
self.menu.clear()
if self.speakers:
for s in self.speakers:
self.menu.addAction(s.name)
if self.current_speaker not in speakers:
self.setCurrentSpeaker(Speaker(name=""))
def setCurrentSpeaker(self, speaker: Speaker):
self.current_speaker = speaker
self.setText(speaker.name)
class AnchorTableView(QtWidgets.QTableView):
def __init__(self, *args):
self.settings = AnchorSettings()
super().__init__(*args)
self.setCornerButtonEnabled(False)
# self.setFocusPolicy(QtCore.Qt.FocusPolicy.NoFocus)
self.setEditTriggers(QtWidgets.QAbstractItemView.EditTrigger.NoEditTriggers)
self.verticalHeader().setVisible(False)
self.verticalHeader().setHighlightSections(False)
self.verticalHeader().setSectionsClickable(False)
self.setAlternatingRowColors(True)
self.setSortingEnabled(True)
self.setDragEnabled(False)
self.setHorizontalScrollMode(QtWidgets.QAbstractItemView.ScrollMode.ScrollPerPixel)
self.setSelectionBehavior(QtWidgets.QTableView.SelectionBehavior.SelectRows)
def setModel(self, model: QtCore.QAbstractItemModel) -> None:
super(AnchorTableView, self).setModel(model)
# self.model().newResults.connect(self.scrollToTop)
self.horizontalHeader().sortIndicatorChanged.connect(self.model().update_sort)
def keyPressEvent(self, event: QtGui.QKeyEvent) -> None:
copy_combo = QtCore.QKeyCombination(QtCore.Qt.Modifier.CTRL, QtCore.Qt.Key.Key_C)
if event.keyCombination() == copy_combo:
clipboard = QtGui.QGuiApplication.clipboard()
current = self.selectionModel().currentIndex()
text = self.selectionModel().model().data(current, QtCore.Qt.ItemDataRole.DisplayRole)
clipboard.setText(str(text))
def refresh_settings(self):
self.settings.sync()
self.horizontalHeader().setFont(self.settings.big_font)
self.setFont(self.settings.font)
fm = QtGui.QFontMetrics(self.settings.big_font)
minimum = 100
for i in range(self.horizontalHeader().count()):
text = self.model().headerData(
i, QtCore.Qt.Orientation.Horizontal, QtCore.Qt.ItemDataRole.DisplayRole
)
width = fm.boundingRect(text).width() + (3 * self.settings.sort_indicator_padding)
if width < minimum and i != 0:
minimum = width
self.setColumnWidth(i, width)
self.horizontalHeader().setMinimumSectionSize(minimum)
class UtteranceListTable(AnchorTableView):
def __init__(self, *args):
super().__init__(*args)
self.header = HeaderView(QtCore.Qt.Orientation.Horizontal, self)
self.setHorizontalHeader(self.header)
def set_models(self, model: CorpusModel, selection_model: CorpusSelectionModel):
self.setModel(model)
self.setSelectionModel(selection_model)
self.doubleClicked.connect(self.selectionModel().focusUtterance)
self.model().utteranceTextUpdated.connect(self.repaint)
self.refresh_settings()
model.corpusLoaded.connect(self.update_header)
def update_header(self):
m: CorpusModel = self.model()
for i in m.alignment_header_indices:
self.horizontalHeader().setSectionHidden(i, True)
for i in m.alignment_evaluation_header_indices:
self.horizontalHeader().setSectionHidden(i, True)
for i in m.transcription_header_indices:
self.horizontalHeader().setSectionHidden(i, True)
for i in m.diarization_header_indices:
self.horizontalHeader().setSectionHidden(i, True)
if m.corpus.alignment_evaluation_done:
for i in m.alignment_evaluation_header_indices:
self.horizontalHeader().setSectionHidden(i, False)
if m.corpus.has_alignments():
for i in m.alignment_header_indices:
self.horizontalHeader().setSectionHidden(i, False)
if m.corpus.has_any_ivectors():
for i in m.diarization_header_indices:
self.horizontalHeader().setSectionHidden(i, False)
if m.corpus.transcription_done:
for i in m.transcription_header_indices:
self.horizontalHeader().setSectionHidden(i, False)
# noinspection PyUnresolvedReferences
class CompleterLineEdit(QtWidgets.QWidget):
def __init__(self, *args):
super().__init__(*args)
layout = QtWidgets.QHBoxLayout()
self.line_edit = QtWidgets.QLineEdit(self)
# self.model = QtCore.QStringListModel(self)
# self.completer.setModel(self.model)
layout.addWidget(self.line_edit)
clear_icon = QtGui.QIcon()
clear_icon.addFile(":clear.svg", mode=QtGui.QIcon.Mode.Normal, state=QtGui.QIcon.State.Off)
clear_icon.addFile(":disabled/clear.svg", mode=QtGui.QIcon.Mode.Active)
self.button = QtWidgets.QToolButton(self)
self.button.clicked.connect(self.clear_text)
self.line_edit.textChanged.connect(self.check_actions)
self.button.setIcon(clear_icon)
self.button.setDisabled(True)
# self.clear_action = QtGui.QAction(icon=clear_icon, parent=self)
# self.clear_action.triggered.connect(self.clear_index)
# self.clear_action.setVisible(False)
layout.addWidget(self.button)
self.setLayout(layout)
self.completions = {}
def current_text(self):
if self.line_edit.text():
if self.line_edit.text() in self.completions:
return self.completions[self.line_edit.text()]
return self.line_edit.text()
return None
def clear_text(self):
self.line_edit.clear()
self.line_edit.returnPressed.emit()
def check_actions(self):
if self.line_edit.text():
self.button.setDisabled(False)
else:
self.button.setDisabled(True)
def update_completions(self, completions: dict[str, int]) -> None:
self.completions = completions
model = QtCore.QStringListModel(list(self.completions.keys()))
completer = QtWidgets.QCompleter(self)
completer.setCaseSensitivity(QtCore.Qt.CaseSensitivity.CaseInsensitive)
completer.setModelSorting(QtWidgets.QCompleter.ModelSorting.CaseInsensitivelySortedModel)
completer.setCompletionMode(QtWidgets.QCompleter.CompletionMode.PopupCompletion)
completer.popup().setUniformItemSizes(True)
completer.popup().setLayoutMode(QtWidgets.QListView.LayoutMode.Batched)
completer.setModel(model)
self.line_edit.setCompleter(completer)
# self.line_edit.textChanged.connect(completer.setCompletionPrefix)
class ClearableDropDown(QtWidgets.QWidget):
def __init__(self, *args):
super(ClearableDropDown, self).__init__(*args)
self.combo_box = QtWidgets.QComboBox(self)
layout = QtWidgets.QHBoxLayout()
layout.addWidget(self.combo_box)
clear_icon = QtGui.QIcon()
clear_icon.addFile(":clear.svg", mode=QtGui.QIcon.Mode.Normal, state=QtGui.QIcon.State.Off)
clear_icon.addFile(":disabled/clear.svg", mode=QtGui.QIcon.Mode.Active)
self.combo_box.currentIndexChanged.connect(self.check_actions)
self.button = QtWidgets.QToolButton(self)
self.button.clicked.connect(self.clear_index)
self.button.setIcon(clear_icon)
self.button.setDisabled(True)
# self.clear_action = QtGui.QAction(icon=clear_icon, parent=self)
# self.clear_action.triggered.connect(self.clear_index)
# self.clear_action.setVisible(False)
self.combo_box.setFocusPolicy(QtCore.Qt.FocusPolicy.NoFocus)
# self.tool_bar.addAction(self.clear_action)
# w = self.tool_bar.widgetForAction(self.clear_action)
layout.addWidget(self.button)
self.setLayout(layout)
def check_actions(self):
if self.combo_box.currentIndex() == -1:
self.button.setDisabled(True)
else:
self.button.setEnabled(True)
def clear_index(self):
self.combo_box.setCurrentIndex(-1)
def clear(self):
self.combo_box.clear()
def addItem(self, *args):
self.combo_box.addItem(*args)
class PaginationWidget(QtWidgets.QToolBar):
offsetRequested = QtCore.Signal(int)
pageRequested = QtCore.Signal()
def __init__(self, *args):
super(PaginationWidget, self).__init__(*args)
self.current_page = 0
self.limit = 1
self.num_pages = 1
self.result_count = 0
self.next_page_action = QtGui.QAction(
icon=QtGui.QIcon(":caret-right.svg"), text="Next page"
)
self.previous_page_action = QtGui.QAction(
icon=QtGui.QIcon(":caret-left.svg"), text="Previous page"
)
self.page_label = QtWidgets.QLabel("Page 1 of 1")
self.addAction(self.previous_page_action)
self.addWidget(self.page_label)
self.addAction(self.next_page_action)
self.next_page_action.triggered.connect(self.next_page)
self.previous_page_action.triggered.connect(self.previous_page)
def first_page(self):
self.current_page = 0
self.offsetRequested.emit(self.current_page * self.limit)
def next_page(self):
if self.current_page != self.num_pages - 1:
self.current_page += 1
self.offsetRequested.emit(self.current_page * self.limit)
self.refresh_pages()
def previous_page(self):
if self.current_page != 0:
self.current_page -= 1
self.offsetRequested.emit(self.current_page * self.limit)
self.refresh_pages()
def set_limit(self, limit: int):
self.limit = limit
self._recalculate_num_pages()
def _recalculate_num_pages(self):
if self.result_count == 0:
return
self.num_pages = int(self.result_count / self.limit)
if self.result_count % self.limit != 0:
self.num_pages += 1
self.refresh_pages()
def update_result_count(self, result_count: int):
self.result_count = result_count
self._recalculate_num_pages()
self.current_page = min(self.current_page, self.num_pages - 1)
def refresh_pages(self):
self.previous_page_action.setEnabled(True)
self.next_page_action.setEnabled(True)
if self.current_page == 0:
self.previous_page_action.setEnabled(False)
if self.current_page == self.num_pages - 1:
self.next_page_action.setEnabled(False)
self.page_label.setText(f"Page {self.current_page + 1} of {self.num_pages}")
self.pageRequested.emit()
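The _recalculate_num_pages method above is a ceiling division written with integer division plus a remainder check. A self-contained sketch showing it agrees with math.ceil (function name hypothetical):

```python
import math

# The page count is ceil(result_count / limit): full pages from integer
# division, plus one extra page when there is a partial remainder.
def num_pages(result_count: int, limit: int) -> int:
    pages = result_count // limit
    if result_count % limit != 0:
        pages += 1
    return pages

for count, limit in [(0, 25), (24, 25), (25, 25), (26, 25)]:
    assert num_pages(count, limit) == math.ceil(count / limit)
print(num_pages(101, 25))  # → 5
```

Note that the widget itself guards the result_count == 0 case separately and keeps the previous page count rather than showing zero pages.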
class UtteranceListWidget(QtWidgets.QWidget): # pragma: no cover
fileChanged = QtCore.Signal(object)
def __init__(self, *args):
super(UtteranceListWidget, self).__init__(*args)
self.settings = AnchorSettings()
self.setMinimumWidth(100)
self.corpus_model: Optional[CorpusModel] = None
self.selection_model: Optional[CorpusSelectionModel] = None
layout = QtWidgets.QVBoxLayout()
self.status_indicator = LoadingScreen(self, logo=False)
self.status_indicator.setVisible(False)
layout.addWidget(self.status_indicator)
self.file_dropdown = CompleterLineEdit(self)
self.file_dropdown.line_edit.setPlaceholderText("Filter by file")
self.file_dropdown.line_edit.returnPressed.connect(self.search)
self.speaker_dropdown = CompleterLineEdit(self)
self.speaker_dropdown.line_edit.setPlaceholderText("Filter by speaker")
self.speaker_dropdown.line_edit.returnPressed.connect(self.search)
layout.addWidget(self.file_dropdown)
layout.addWidget(self.speaker_dropdown)
self.search_box = SearchBox(self)
search_layout = QtWidgets.QHBoxLayout()
self.replace_box = ReplaceBox(self)
self.oov_button = QtWidgets.QToolButton()
self.search_widget = QtWidgets.QWidget()
search_layout.addWidget(self.search_box)
search_layout.addWidget(self.oov_button)
search_layout.addWidget(self.replace_box)
self.search_widget.setLayout(search_layout)
layout.addWidget(self.search_widget)
self.replace_box.replaceAllActivated.connect(self.replace)
self.search_box.searchActivated.connect(self.search)
self.current_search_query = None
self.current_search_text = ""
self.table_widget = UtteranceListTable(self)
self.highlight_delegate = HighlightDelegate(self.table_widget)
self.nowrap_delegate = NoWrapDelegate(self.table_widget)
self.icon_delegate = IconDelegate(self.table_widget)
self.table_widget.setItemDelegateForColumn(0, self.icon_delegate)
layout.addWidget(self.table_widget)
self.pagination_toolbar = PaginationWidget()
        self.pagination_toolbar.pageRequested.connect(self.table_widget.scrollToTop)
layout.addWidget(self.pagination_toolbar)
self.setLayout(layout)
self.dictionary = None
self.refresh_settings()
def query_started(self):
self.table_widget.setVisible(False)
self.pagination_toolbar.setVisible(False)
self.search_widget.setVisible(False)
self.speaker_dropdown.setVisible(False)
self.file_dropdown.setVisible(False)
self.status_indicator.setVisible(True)
def query_finished(self):
self.table_widget.setVisible(True)
self.pagination_toolbar.setVisible(True)
self.search_widget.setVisible(True)
self.speaker_dropdown.setVisible(True)
self.file_dropdown.setVisible(True)
self.status_indicator.setVisible(False)
def set_models(
self,
corpus_model: CorpusModel,
selection_model: CorpusSelectionModel,
speaker_model: SpeakerModel,
):
self.corpus_model: CorpusModel = corpus_model
self.selection_model: CorpusSelectionModel = selection_model
self.table_widget.set_models(self.corpus_model, selection_model)
self.search_box.validationError.connect(self.corpus_model.statusUpdate.emit)
self.corpus_model.resultCountChanged.connect(self.pagination_toolbar.update_result_count)
self.pagination_toolbar.offsetRequested.connect(self.corpus_model.set_offset)
self.search_box.searchActivated.connect(self.query_started)
self.corpus_model.newResults.connect(self.query_finished)
self.corpus_model.speakersRefreshed.connect(self.speaker_dropdown.update_completions)
self.corpus_model.filesRefreshed.connect(self.file_dropdown.update_completions)
def refresh_settings(self):
self.settings.sync()
font = self.settings.font
header_font = self.settings.big_font
self.file_dropdown.setFont(font)
self.setFont(header_font)
self.icon_delegate.refresh_settings()
self.highlight_delegate.refresh_settings()
self.nowrap_delegate.refresh_settings()
self.search_box.setFont(font)
self.replace_box.setFont(font)
self.search_box.setStyleSheet(self.settings.search_box_style_sheet)
self.replace_box.setStyleSheet(self.settings.search_box_style_sheet)
self.table_widget.refresh_settings()
self.pagination_toolbar.set_limit(self.settings.value(self.settings.RESULTS_PER_PAGE))
def search(self):
self.selection_model.clearSelection()
self.corpus_model.search(
self.search_box.query(),
self.file_dropdown.current_text(),
self.speaker_dropdown.current_text(),
oovs=self.oov_button.isChecked(),
)
self.corpus_model.set_text_filter(self.search_box.query())
def replace(self):
search_query = self.search_box.query()
if not search_query.text:
return
replacement = self.replace_box.text()
        try:
            # Validate the replacement template (e.g. group references like \1)
            # by running the substitution against an empty string before
            # touching any real data.
            _ = re.sub(search_query.generate_expression(), replacement, "")
except Exception as e:
self.replace_box.setProperty("error", True)
self.replace_box.style().unpolish(self.replace_box)
self.replace_box.style().polish(self.replace_box)
self.replace_box.update()
self.corpus_model.statusUpdate.emit(f"Regex error: {e}")
return
self.corpus_model.replace_all(search_query, replacement)
class UtteranceDetailWidget(QtWidgets.QWidget):  # pragma: no cover
    """Combine the utterance plot with pan buttons and a horizontal time scroll bar."""
lookUpWord = QtCore.Signal(object)
createWord = QtCore.Signal(object)
saveUtterance = QtCore.Signal(object, object)
selectUtterance = QtCore.Signal(object, object)
createUtterance = QtCore.Signal(object, object, object, object)
refreshCorpus = QtCore.Signal(object)
audioPlaying = QtCore.Signal(object)
def __init__(self, parent: MainWindow):
super(UtteranceDetailWidget, self).__init__(parent=parent)
from anchor.main import AnchorSettings
self.settings = AnchorSettings()
self.setAttribute(QtCore.Qt.WidgetAttribute.WA_StyledBackground, True)
self.corpus_model = None
self.selection_model = None
self.dictionary_model = None
self.plot_widget = UtteranceView(self)
layout = QtWidgets.QVBoxLayout()
self.scroll_bar_wrapper = QtWidgets.QHBoxLayout()
self.pan_left_button = QtWidgets.QToolButton(self)
self.pan_left_button.setObjectName("pan_left_button")
self.scroll_bar_wrapper.addWidget(self.pan_left_button)
self.pan_right_button = QtWidgets.QToolButton(self)
self.pan_right_button.setObjectName("pan_right_button")
self.pan_left_button.setIconSize(QtCore.QSize(25, 25))
self.pan_right_button.setIconSize(QtCore.QSize(25, 25))
self.scroll_bar = QtWidgets.QScrollBar(QtCore.Qt.Orientation.Horizontal, self)
self.scroll_bar.setObjectName("time_scroll_bar")
# self.scroll_bar.setSizePolicy(QtWidgets.QSizePolicy.Policy.Expanding, QtWidgets.QSizePolicy.Policy.Expanding)
self.scroll_bar.valueChanged.connect(self.update_from_slider)
scroll_bar_layout = QtWidgets.QVBoxLayout()
scroll_bar_layout.addWidget(self.scroll_bar, 1)
self.scroll_bar_wrapper.addLayout(scroll_bar_layout)
self.scroll_bar_wrapper.addWidget(self.pan_right_button)
text_layout = QtWidgets.QHBoxLayout()
layout.addWidget(self.plot_widget)
layout.addLayout(self.scroll_bar_wrapper)
layout.addLayout(text_layout)
layout.setContentsMargins(0, 0, 0, 0)
self.setLayout(layout)
self.show_all_speakers = False
def set_models(
self,
corpus_model: CorpusModel,
selection_model: CorpusSelectionModel,
dictionary_model: DictionaryTableModel,
):
self.corpus_model = corpus_model
self.selection_model = selection_model
self.dictionary_model = dictionary_model
self.corpus_model.textFilterChanged.connect(self.plot_widget.set_search_term)
self.selection_model.viewChanged.connect(self.update_to_slider)
self.selection_model.fileChanged.connect(self.update_to_slider)
self.plot_widget.set_models(corpus_model, selection_model, self.dictionary_model)
def update_to_slider(self):
with QtCore.QSignalBlocker(self.scroll_bar):
if self.selection_model.current_file is None or self.selection_model.min_time is None:
return
if (
self.selection_model.min_time == 0
and self.selection_model.max_time == self.selection_model.current_file.duration
):
self.scroll_bar.setPageStep(10)
self.scroll_bar.setEnabled(False)
self.pan_left_button.setEnabled(False)
self.pan_right_button.setEnabled(False)
self.scroll_bar.setMaximum(0)
return
duration_ms = int(self.selection_model.current_file.duration * 1000)
begin = self.selection_model.min_time * 1000
end = self.selection_model.max_time * 1000
window_size_ms = int(end - begin)
self.scroll_bar.setEnabled(True)
self.pan_left_button.setEnabled(True)
self.pan_right_button.setEnabled(True)
self.scroll_bar.setPageStep(int(window_size_ms))
self.scroll_bar.setSingleStep(int(window_size_ms * 0.5))
self.scroll_bar.setMaximum(duration_ms - window_size_ms)
            self.scroll_bar.setValue(int(begin))
def update_from_slider(self, value: int):
self.selection_model.update_from_slider(value / 1000)
def pan_left(self):
self.scroll_bar.triggerAction(self.scroll_bar.SliderAction.SliderSingleStepSub)
def pan_right(self):
self.scroll_bar.triggerAction(self.scroll_bar.SliderAction.SliderSingleStepAdd)
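# The scroll bar in UtteranceDetailWidget works in integer milliseconds. As a
# worked example (the values are hypothetical, not taken from any particular
# file): for a 10 s file viewed from 2.0 s to 4.0 s,
#
#     duration_ms = int(10.0 * 1000)          # 10000
#     window_size_ms = int(4000 - 2000)       # 2000 -> page step
#     maximum = duration_ms - window_size_ms  # 8000
#     value = 2000                            # slider starts at the view begin
#
# so dragging the slider to its maximum shows the final 2 s window, and a
# single step pans by half a window (window_size_ms * 0.5).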
class LoadingScreen(QtWidgets.QWidget):
    """Animated loading indicator, optionally with the Anchor logo, shown during long-running jobs."""
def __init__(self, *args, logo=True):
super(LoadingScreen, self).__init__(*args)
self.has_logo = logo
self.setAttribute(QtCore.Qt.WidgetAttribute.WA_StyledBackground, True)
self.settings = AnchorSettings()
layout = QtWidgets.QVBoxLayout()
self.loading_movie = QtGui.QMovie(":loading_screen.gif")
self.movie_label = QtWidgets.QLabel()
if logo:
self.movie_label.setMinimumSize(720, 576)
self.movie_label.setMovie(self.loading_movie)
layout.setAlignment(QtCore.Qt.AlignmentFlag.AlignCenter)
layout.addWidget(self.movie_label)
if logo:
self.logo_icon = QtGui.QIcon(":logo_text.svg")
self.logo_label = QtWidgets.QLabel()
self.logo_label.setPixmap(self.logo_icon.pixmap(QtCore.QSize(720, 144)))
self.logo_label.setFixedSize(720, 144)
self.text_label = QtWidgets.QLabel()
self.text_label.setAlignment(QtCore.Qt.AlignmentFlag.AlignCenter)
self.exit_label = QtWidgets.QLabel(
"Wrapping things up before exit, please wait a moment..."
)
self.exit_label.setVisible(False)
tool_bar_wrapper = QtWidgets.QVBoxLayout()
self.tool_bar = QtWidgets.QToolBar()
self.tool_bar.setToolButtonStyle(QtCore.Qt.ToolButtonStyle.ToolButtonTextBesideIcon)
self.tool_bar.addWidget(self.text_label)
tool_bar_wrapper.addWidget(
self.tool_bar, alignment=QtCore.Qt.AlignmentFlag.AlignCenter
)
layout.addWidget(self.logo_label)
layout.addWidget(self.text_label, alignment=QtCore.Qt.AlignmentFlag.AlignCenter)
layout.addLayout(tool_bar_wrapper)
layout.addWidget(self.exit_label, alignment=QtCore.Qt.AlignmentFlag.AlignCenter)
self.setVisible(False)
self.setLayout(layout)
def refresh_settings(self):
if not self.has_logo:
return
self.settings.sync()
font = self.settings.big_font
self.text_label.setFont(font)
self.exit_label.setFont(font)
def setExiting(self):
self.tool_bar.setVisible(False)
self.exit_label.setVisible(True)
self.repaint()
def setVisible(self, visible: bool) -> None:
if visible:
self.loading_movie.start()
else:
if self.has_logo:
self.text_label.setText("")
self.loading_movie.stop()
super(LoadingScreen, self).setVisible(visible)
def setCorpusName(self, corpus_name):
self.text_label.setText(corpus_name)
self.text_label.setVisible(True)
class TitleScreen(QtWidgets.QWidget):
    """Splash screen displaying the Anchor logo."""
def __init__(self, *args):
super(TitleScreen, self).__init__(*args)
self.setAttribute(QtCore.Qt.WidgetAttribute.WA_StyledBackground, True)
layout = QtWidgets.QVBoxLayout()
self.logo_widget = QtSvgWidgets.QSvgWidget(":splash_screen.svg")
self.logo_widget.setFixedSize(720, 720)
# self.setMaximumSize(720, 720)
# self.loading_label.setWindowFlag()
layout.setAlignment(QtCore.Qt.AlignmentFlag.AlignCenter)
layout.addWidget(self.logo_widget)
self.setLayout(layout)
class InternalToolButtonEdit(QtWidgets.QLineEdit):
    """QLineEdit with an internal toolbar so actions can be embedded inside the field."""
def __init__(self, *args):
super().__init__(*args)
self.tool_bar = QtWidgets.QToolBar(self)
self._internal_layout.insertWidget(self._internal_layout.count(), self.tool_bar)
self.tool_bar.setFocusProxy(self)
@property
def _internal_layout(self):
if not hasattr(self, "_internal_layout_"):
self._internal_layout_ = QtWidgets.QHBoxLayout(self)
self._internal_layout_.addStretch()
self._internal_layout_.setContentsMargins(1, 1, 1, 1)
self._internal_layout_.setSpacing(0)
return self._internal_layout_
def setError(self):
if not self.property("error"):
self.setProperty("error", True)
self.style().unpolish(self)
self.style().polish(self)
self.update()
def resetError(self):
if self.property("error"):
self.setProperty("error", False)
self.style().unpolish(self)
self.style().polish(self)
self.update()
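    # setError/resetError follow the standard Qt dynamic-property restyle
    # pattern: a stylesheet selector such as
    #
    #     QLineEdit[error="true"] { border: 1px solid red; }
    #
    # is only re-evaluated after unpolish()/polish(), which is why both are
    # called whenever the "error" property changes. (The selector shown is
    # illustrative; the actual rules live in the AnchorSettings style sheets.)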
def _fix_cursor_position(self):
self.setTextMargins(0, 0, self.tool_bar.geometry().width(), 0)
def add_internal_action(self, action, name=None):
self.tool_bar.addAction(action)
w = self.tool_bar.widgetForAction(action)
if name is not None:
w.setObjectName(name)
w.setCursor(QtCore.Qt.CursorShape.PointingHandCursor)
w.setFocusProxy(self)
self._fix_cursor_position()
class ClearableField(InternalToolButtonEdit):
    """Line edit with an embedded clear button; clearing also re-triggers the search."""
def __init__(self, *args):
super().__init__(*args)
clear_icon = QtGui.QIcon()
clear_icon.addFile(":clear.svg", mode=QtGui.QIcon.Mode.Normal, state=QtGui.QIcon.State.Off)
clear_icon.addFile(":disabled/clear.svg", mode=QtGui.QIcon.Mode.Active)
self.clear_action = QtGui.QAction(icon=clear_icon, parent=self)
self.clear_action.triggered.connect(self.clear)
self.clear_action.setVisible(False)
self.textChanged.connect(self.check_contents)
self.add_internal_action(self.clear_action, "clear_field")
def setFont(self, a0: QtGui.QFont) -> None:
super().setFont(a0)
self.clear_action.setFont(a0)
def clear(self) -> None:
super().clear()
self.returnPressed.emit()
def add_button(self, button):
self._internal_layout.insertWidget(self._internal_layout.count(), button)
button.setFocusProxy(self)
def check_contents(self):
self.resetError()
if super().text():
self.clear_action.setVisible(True)
else:
self.clear_action.setVisible(False)
class ReplaceBox(ClearableField):
    """Input field for replace-all text; emits replaceAllActivated on Enter."""
replaceAllActivated = QtCore.Signal(object)
def __init__(self, *args):
super().__init__(*args)
self.returnPressed.connect(self.activate)
self.setObjectName("replace_box")
def lock(self):
self.setDisabled(True)
def unlock(self):
self.setEnabled(True)
def activate(self):
if not self.isEnabled():
return
self.replaceAllActivated.emit(self.text())
class SearchBox(ClearableField):
    """Search field with regex, whole-word, and case-sensitivity toggles."""
searchActivated = QtCore.Signal(object)
validationError = QtCore.Signal(object)
def __init__(self, *args):
super().__init__(*args)
self.returnPressed.connect(self.activate)
self.clear_action.triggered.connect(self.returnPressed.emit)
regex_icon = QtGui.QIcon()
regex_icon.addFile(":regex.svg", mode=QtGui.QIcon.Mode.Normal, state=QtGui.QIcon.State.Off)
regex_icon.addFile(
":highlighted/regex.svg", mode=QtGui.QIcon.Mode.Normal, state=QtGui.QIcon.State.On
)
self.regex_action = QtGui.QAction(icon=regex_icon, parent=self)
self.regex_action.setCheckable(True)
word_icon = QtGui.QIcon()
word_icon.addFile(":word.svg", mode=QtGui.QIcon.Mode.Normal, state=QtGui.QIcon.State.Off)
word_icon.addFile(
":highlighted/word.svg", mode=QtGui.QIcon.Mode.Normal, state=QtGui.QIcon.State.On
)
self.word_action = QtGui.QAction(icon=word_icon, parent=self)
self.word_action.setCheckable(True)
case_icon = QtGui.QIcon()
case_icon.addFile(":case.svg", mode=QtGui.QIcon.Mode.Normal, state=QtGui.QIcon.State.Off)
case_icon.addFile(
":highlighted/case.svg", mode=QtGui.QIcon.Mode.Normal, state=QtGui.QIcon.State.On
)
self.case_action = QtGui.QAction(icon=case_icon, parent=self)
self.case_action.setCheckable(True)
self.add_internal_action(self.regex_action, "regex_search_field")
self.add_internal_action(self.word_action, "word_search_field")
self.add_internal_action(self.case_action, "case_search_field")
def activate(self):
if self.regex_action.isChecked():
try:
_ = re.compile(self.text())
except Exception:
self.setError()
self.validationError.emit("Search regex not valid")
return
self.searchActivated.emit(self.query())
def setFont(self, a0: QtGui.QFont) -> None:
super().setFont(a0)
self.regex_action.setFont(a0)
self.word_action.setFont(a0)
self.case_action.setFont(a0)
def setQuery(self, query: TextFilterQuery):
self.setText(query.text)
with QtCore.QSignalBlocker(self.regex_action) as _, QtCore.QSignalBlocker(
self.word_action
) as _:
self.regex_action.setChecked(query.regex)
self.word_action.setChecked(query.word)
self.case_action.setChecked(query.case_sensitive)
self.activate()
    def query(self) -> TextFilterQuery:
        # Whole-word matching is implemented via the regex machinery, so the
        # regex flag is forced on whenever word matching is enabled.
        filter = TextFilterQuery(
            super().text(),
            self.regex_action.isChecked() or self.word_action.isChecked(),
            self.word_action.isChecked(),
            self.case_action.isChecked(),
        )
        return filter
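# Sketch of how the three toggles map onto a query, assuming TextFilterQuery
# stores (text, regex, word, case_sensitive) positionally as query() passes
# them: with only "whole word" checked, searching for "sample" yields
#
#     TextFilterQuery("sample", True, True, False)
#
# i.e. word matching rides on the regex flag even though the regex toggle
# itself is off.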
class HorizontalSpacer(QtWidgets.QWidget):
def __init__(self, *args):
super(HorizontalSpacer, self).__init__(*args)
self.setSizePolicy(
QtWidgets.QSizePolicy.Policy.Expanding, QtWidgets.QSizePolicy.Policy.Preferred
)
class NoWrapDelegate(QtWidgets.QStyledItemDelegate):
    """Item delegate that renders cell text on a single line without wrapping."""
def __init__(self, parent=None):
super(NoWrapDelegate, self).__init__(parent)
self.doc = QtGui.QTextDocument(self)
self.settings = AnchorSettings()
def refresh_settings(self):
self.settings.sync()
self.doc.setDefaultFont(self.settings.font)
def sizeHint(
self, option: QtWidgets.QStyleOptionViewItem, index: QtCore.QModelIndex
) -> QtCore.QSize:
options = QtWidgets.QStyleOptionViewItem(option)
self.initStyleOption(options, index)
self.doc.setPlainText(options.text)
style = (
QtWidgets.QApplication.style() if options.widget is None else options.widget.style()
)
textRect = style.subElementRect(QtWidgets.QStyle.SubElement.SE_ItemViewItemText, options)
if index.column() != 0:
textRect.adjust(5, 0, 0, 0)
the_constant = 4
margin = (option.rect.height() - options.fontMetrics.height()) // 2
margin = margin - the_constant
textRect.setTop(textRect.top() + margin)
return textRect.size()
def paint(self, painter, option, index):
selection_color = self.settings.PRIMARY_LIGHT_COLOR
option.palette.setColor(
QtGui.QPalette.ColorGroup.Active,
QtGui.QPalette.ColorRole.Window,
QtGui.QColor(selection_color),
)
painter.save()
options = QtWidgets.QStyleOptionViewItem(option)
self.initStyleOption(options, index)
self.doc.setPlainText(options.text)
options.text = ""
style = (
QtWidgets.QApplication.style() if options.widget is None else options.widget.style()
)
style.drawControl(QtWidgets.QStyle.ControlElement.CE_ItemViewItem, options, painter)
ctx = QtGui.QAbstractTextDocumentLayout.PaintContext()
if option.state & QtWidgets.QStyle.StateFlag.State_Selected:
painter.fillRect(option.rect, QtGui.QColor(selection_color))
ctx.palette.setColor(
QtGui.QPalette.ColorRole.Text,
option.palette.color(
QtGui.QPalette.ColorGroup.Active, QtGui.QPalette.ColorRole.HighlightedText
),
)
else:
ctx.palette.setColor(
QtGui.QPalette.ColorRole.Text,
option.palette.color(
QtGui.QPalette.ColorGroup.Active, QtGui.QPalette.ColorRole.Text
),
)
textRect = style.subElementRect(QtWidgets.QStyle.SubElement.SE_ItemViewItemText, options)
if index.column() != 0:
textRect.adjust(5, 0, 0, 0)
the_constant = 4
margin = (option.rect.height() - options.fontMetrics.height()) // 2
margin = margin - the_constant
textRect.setTop(textRect.top() + margin)
painter.translate(textRect.topLeft())
painter.setClipRect(textRect.translated(-textRect.topLeft()))
self.doc.documentLayout().draw(painter, ctx)
painter.restore()
class HighlightDelegate(QtWidgets.QStyledItemDelegate):
    """Item delegate that highlights text matching the current search filters."""
def __init__(self, parent=None):
super(HighlightDelegate, self).__init__(parent)
self.doc = QtGui.QTextDocument(self)
self.settings = AnchorSettings()
self._filters = []
self.current_doc_width = 100
self.minimum_doc_size = 100
self.margin = 5
self.doc.setDocumentMargin(self.margin)
def refresh_settings(self):
self.settings.sync()
self.doc.setDefaultFont(self.settings.font)
def sizeHint(
self, option: QtWidgets.QStyleOptionViewItem, index: QtCore.QModelIndex
) -> QtCore.QSize:
options = QtWidgets.QStyleOptionViewItem(option)
self.initStyleOption(options, index)
self.doc.setPlainText(options.text)
self.apply_highlight()
options.text = ""
style = (
QtWidgets.QApplication.style() if options.widget is None else options.widget.style()
)
textRect = style.subElementRect(QtWidgets.QStyle.SubElement.SE_ItemViewItemText, options)
textRect.setWidth(self.current_doc_width)
if textRect.width() < self.minimum_doc_size:
textRect.setWidth(self.minimum_doc_size)
self.doc.setTextWidth(textRect.width())
doc_height = self.doc.documentLayout().documentSize().height()
textRect.setHeight(doc_height)
return textRect.size()
def paint(self, painter, option, index):
selection_color = self.settings.primary_very_light_color
option.palette.setColor(
QtGui.QPalette.ColorGroup.Active, QtGui.QPalette.ColorRole.Window, selection_color
)
painter.save()
options = QtWidgets.QStyleOptionViewItem(option)
self.initStyleOption(options, index)
self.doc.setPlainText(options.text)
self.apply_highlight()
options.text = ""
style = (
QtWidgets.QApplication.style() if options.widget is None else options.widget.style()
)
style.drawControl(QtWidgets.QStyle.ControlElement.CE_ItemViewItem, options, painter)
ctx = QtGui.QAbstractTextDocumentLayout.PaintContext()
if option.state & QtWidgets.QStyle.StateFlag.State_Selected:
painter.fillRect(option.rect, QtGui.QColor(selection_color))
ctx.palette.setColor(
QtGui.QPalette.ColorRole.Text,
option.palette.color(
QtGui.QPalette.ColorGroup.Active, QtGui.QPalette.ColorRole.HighlightedText
),
)
else:
ctx.palette.setColor(
QtGui.QPalette.ColorRole.Text,
option.palette.color(
QtGui.QPalette.ColorGroup.Active, QtGui.QPalette.ColorRole.Text
),
)
textRect = style.subElementRect(QtWidgets.QStyle.SubElement.SE_ItemViewItemText, options)
textRect.setWidth(self.current_doc_width)
if textRect.width() < self.minimum_doc_size:
textRect.setWidth(self.minimum_doc_size)
self.doc.setTextWidth(textRect.width())
doc_height = self.doc.documentLayout().documentSize().height()
textRect.setHeight(doc_height)
painter.translate(textRect.topLeft())
self.doc.documentLayout().draw(painter, ctx)
painter.restore()
def apply_highlight(self):
cursor = QtGui.QTextCursor(self.doc)
cursor.beginEditBlock()
fmt = QtGui.QTextCharFormat()
fmt.setBackground(self.settings.accent_light_color)
fmt.setForeground(self.settings.primary_very_dark_color)
        for pattern in self.filters():
            expression = QtCore.QRegularExpression(pattern)
            highlightCursor = QtGui.QTextCursor(self.doc)
            while not highlightCursor.isNull() and not highlightCursor.atEnd():
                highlightCursor = self.doc.find(expression, highlightCursor)
                if not highlightCursor.isNull():
                    highlightCursor.mergeCharFormat(fmt)
cursor.endEditBlock()
@QtCore.Slot(list)
def setFilters(self, filters):
if self._filters == filters:
return
self._filters = filters
def filters(self):
return self._filters
class HeaderView(QtWidgets.QHeaderView):
    """Table header with sortable sections and a context menu for showing or hiding columns."""
def __init__(self, *args):
super(HeaderView, self).__init__(*args)
self.setHighlightSections(False)
self.setStretchLastSection(True)
self.setSortIndicatorShown(True)
self.setSectionsClickable(True)
self.setContextMenuPolicy(QtCore.Qt.ContextMenuPolicy.CustomContextMenu)
self.customContextMenuRequested.connect(self.generate_context_menu)
def sectionSizeFromContents(self, logicalIndex: int) -> QtCore.QSize:
settings = AnchorSettings()
size = super().sectionSizeFromContents(logicalIndex)
size.setWidth(size.width() + settings.text_padding + 3 + settings.sort_indicator_padding)
return size
def showHideColumn(self):
index = self.model()._header_data.index(self.sender().text())
self.setSectionHidden(index, not self.isSectionHidden(index))
def generate_context_menu(self, location):
menu = QtWidgets.QMenu()
m: CorpusModel = self.model()
for i in range(m.columnCount()):
column_name = m.headerData(
i,
orientation=QtCore.Qt.Orientation.Horizontal,
role=QtCore.Qt.ItemDataRole.DisplayRole,
)
a = QtGui.QAction(column_name, self)
a.setCheckable(True)
if not self.isSectionHidden(i):
a.setChecked(True)
a.triggered.connect(self.showHideColumn)
menu.addAction(a)
menu.exec_(self.mapToGlobal(location))
class IconDelegate(QtWidgets.QStyledItemDelegate):
    """Delegate that draws an OOV marker icon in the first column when the item is checked."""
def __init__(self, parent=None):
super(IconDelegate, self).__init__(parent)
from anchor.main import AnchorSettings
self.settings = AnchorSettings()
def refresh_settings(self):
self.settings.sync()
def sizeHint(
self, option: QtWidgets.QStyleOptionViewItem, index: QtCore.QModelIndex
) -> QtCore.QSize:
if index.column() != 0:
return super(IconDelegate, self).sizeHint(option, index)
size = int(self.settings.icon_size / 2)
return QtCore.QSize(size, size)
def paint(self, painter: QtGui.QPainter, option, index) -> None:
if index.column() != 0:
return super(IconDelegate, self).paint(painter, option, index)
painter.save()
options = QtWidgets.QStyleOptionViewItem(option)
self.initStyleOption(options, index)
if options.checkState == QtCore.Qt.CheckState.Checked:
icon = QtGui.QIcon(":disabled/oov-check.svg")
icon.paint(painter, options.rect, QtCore.Qt.AlignmentFlag.AlignCenter)
painter.restore()
class StoppableProgressBar(QtWidgets.QWidget):
    """Labeled progress bar for a background Worker, with a cancel button."""
finished = QtCore.Signal(object)
def __init__(self, worker: Worker, id, *args):
super().__init__(*args)
self.worker = worker
self.id = id
self.worker.signals.progress.connect(self.update_progress)
self.worker.signals.total.connect(self.update_total)
self.worker.signals.finished.connect(self.update_finished)
layout = QtWidgets.QHBoxLayout()
self.label = QtWidgets.QLabel(self.worker.name)
layout.addWidget(self.label)
self.progress_bar = QtWidgets.QProgressBar()
layout.addWidget(self.progress_bar)
self.cancel_button = QtWidgets.QToolButton()
self.cancel_action = QtGui.QAction("select", self)
self.cancel_action.setIcon(QtGui.QIcon(":clear.svg"))
self.cancel_action.triggered.connect(worker.cancel)
self.cancel_button.setDefaultAction(self.cancel_action)
layout.addWidget(self.cancel_button)
self.setLayout(layout)
def cancel(self):
self.progress_bar.setEnabled(False)
self.cancel_button.setEnabled(False)
self.worker.stopped.stop()
def update_finished(self):
self.finished.emit(self.id)
def update_total(self, total):
self.progress_bar.setMaximum(total)
def update_progress(self, progress, time_remaining):
self.progress_bar.setFormat(f"%v of %m - %p% ({time_remaining} remaining)")
self.progress_bar.setValue(progress)
class ProgressMenu(QtWidgets.QMenu):
    """Menu listing a StoppableProgressBar for each running worker."""
allDone = QtCore.Signal()
def __init__(self, *args):
super(ProgressMenu, self).__init__(*args)
self.settings = AnchorSettings()
layout = QtWidgets.QVBoxLayout()
self.scroll_area = QtWidgets.QScrollArea()
self.scroll_layout = QtWidgets.QVBoxLayout()
self.scroll_layout.setAlignment(QtCore.Qt.AlignmentFlag.AlignTop)
self.scroll_area.setLayout(self.scroll_layout)
layout.addWidget(self.scroll_area)
self.scroll_area.setFixedWidth(
500 + self.scroll_area.verticalScrollBar().sizeHint().width()
)
self.scroll_area.setFixedHeight(300)
self.scroll_area.setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarPolicy.ScrollBarAlwaysOff)
self.progress_bars: typing.Dict[int, StoppableProgressBar] = {}
self.setLayout(layout)
self.current_id = 0
def showEvent(self, event: QtGui.QShowEvent) -> None:
p = self.pos()
geo = self.parent().geometry()
self.move(
p.x() + geo.width() - self.geometry().width(),
p.y() - geo.height() - self.geometry().height(),
)
def track_worker(self, worker: Worker):
self.progress_bars[self.current_id] = StoppableProgressBar(worker, self.current_id)
self.scroll_area.layout().addWidget(self.progress_bars[self.current_id])
self.progress_bars[self.current_id].finished.connect(self.update_finished)
self.current_id += 1
def update_finished(self, id):
self.scroll_layout.removeWidget(self.progress_bars[id])
self.progress_bars[id].deleteLater()
del self.progress_bars[id]
if len(self.progress_bars) == 0:
self.allDone.emit()
class ProgressWidget(QtWidgets.QPushButton):
    """Toolbar button that animates while workers are running and opens the progress menu."""
def __init__(self, *args):
super().__init__(*args)
self.done_icon = QtGui.QIcon(":check-circle.svg")
self.animated = QtGui.QMovie(":spinning_blue.svg")
self.animated.frameChanged.connect(self.update_animation)
self.setIcon(self.done_icon)
self.menu = ProgressMenu(self)
self.setMenu(self.menu)
self.menu.allDone.connect(self.all_done)
def add_worker(self, worker):
self.menu.track_worker(worker)
if self.animated.state() == QtGui.QMovie.MovieState.NotRunning:
self.animated.start()
def update_animation(self):
self.setIcon(QtGui.QIcon(self.animated.currentPixmap()))
def all_done(self):
self.setIcon(self.done_icon)
if self.animated.state() == QtGui.QMovie.MovieState.Running:
self.animated.stop()
class SpeakerClusterSettingsMenu(QtWidgets.QMenu):
    """Form of clustering parameters (algorithm, metric, thresholds) for speaker diarization."""
def __init__(self, *args):
super().__init__(*args)
self.settings = AnchorSettings()
self.settings.sync()
layout = QtWidgets.QVBoxLayout()
self.scroll_area = QtWidgets.QScrollArea()
self.form_layout = QtWidgets.QFormLayout()
self.form_layout.setAlignment(QtCore.Qt.AlignmentFlag.AlignTop)
self.cluster_algorithm_dropdown = QtWidgets.QComboBox()
for ct in ClusterType:
self.cluster_algorithm_dropdown.addItem(ct.name)
self.metric_dropdown = QtWidgets.QComboBox()
for m in DistanceMetric:
self.metric_dropdown.addItem(m.name)
self.row_indices = {}
self.cluster_algorithm_dropdown.setCurrentIndex(
self.cluster_algorithm_dropdown.findText(
self.settings.value(self.settings.CLUSTER_TYPE)
)
)
self.metric_dropdown.setCurrentIndex(
self.metric_dropdown.findText(self.settings.value(self.settings.CLUSTERING_METRIC))
)
self.form_layout.addRow("Distance metric", self.metric_dropdown)
self.form_layout.addRow("Cluster algorithm", self.cluster_algorithm_dropdown)
self.n_clusters_edit = QtWidgets.QSpinBox(self)
self.n_clusters_edit.setMinimum(0)
self.n_clusters_edit.setMaximum(600)
self.row_indices["n_clusters"] = self.form_layout.rowCount()
self.form_layout.addRow("Number of clusters", self.n_clusters_edit)
self.distance_threshold_edit = ThresholdWidget(self)
self.row_indices["distance_threshold"] = self.form_layout.rowCount()
self.form_layout.addRow("Distance threshold", self.distance_threshold_edit)
self.min_cluster_size_edit = QtWidgets.QSpinBox(self)
self.min_cluster_size_edit.setMinimum(3)
self.min_cluster_size_edit.setMaximum(600)
self.row_indices["min_cluster_size"] = self.form_layout.rowCount()
self.form_layout.addRow("Minimum cluster size", self.min_cluster_size_edit)
self.recluster_button = QtWidgets.QPushButton("Recluster")
self.recluster_button.setEnabled(False)
self.form_layout.addWidget(self.recluster_button)
self.n_clusters_edit.setValue(self.settings.value(self.settings.CLUSTERING_N_CLUSTERS))
self.distance_threshold_edit.setValue(
self.settings.value(self.settings.CLUSTERING_DISTANCE_THRESHOLD)
)
self.min_cluster_size_edit.setValue(
self.settings.value(self.settings.CLUSTERING_MIN_CLUSTER_SIZE)
)
self.scroll_area.setLayout(self.form_layout)
layout.addWidget(self.scroll_area)
self.scroll_area.setFixedWidth(
500 + self.scroll_area.verticalScrollBar().sizeHint().width()
)
self.scroll_area.setFixedHeight(300)
self.scroll_area.setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarPolicy.ScrollBarAlwaysOff)
self.setLayout(layout)
self.update_current_cluster_algorithm()
self.cluster_algorithm_dropdown.currentIndexChanged.connect(
self.update_current_cluster_algorithm
)
self.metric_dropdown.currentIndexChanged.connect(self.update_current_metric)
def update_current_metric(self):
metric = self.metric_dropdown.currentText()
self.settings.setValue(self.settings.CLUSTERING_METRIC, metric)
def update_current_cluster_algorithm(self):
current_algorithm = self.cluster_algorithm_dropdown.currentText()
if current_algorithm in ["kmeans", "spectral", "agglomerative"]:
self.form_layout.setRowVisible(self.row_indices["n_clusters"], True)
else:
self.form_layout.setRowVisible(self.row_indices["n_clusters"], False)
if current_algorithm in ["optics", "hdbscan", "dbscan", "agglomerative"]:
self.form_layout.setRowVisible(self.row_indices["distance_threshold"], True)
else:
self.form_layout.setRowVisible(self.row_indices["distance_threshold"], False)
if current_algorithm in ["optics", "hdbscan", "dbscan"]:
self.form_layout.setRowVisible(self.row_indices["min_cluster_size"], True)
else:
self.form_layout.setRowVisible(self.row_indices["min_cluster_size"], False)
self.settings.setValue(self.settings.CLUSTER_TYPE, current_algorithm)
self.settings.sync()
@property
def cluster_kwargs(self):
self.settings.sync()
current_algorithm = ClusterType[self.settings.value(self.settings.CLUSTER_TYPE)]
metric = DistanceMetric[self.settings.value(self.settings.CLUSTERING_METRIC)]
kwargs = {
"cluster_type": current_algorithm,
"metric": metric,
}
if current_algorithm in [
ClusterType.kmeans,
ClusterType.spectral,
ClusterType.agglomerative,
]:
val = self.n_clusters_edit.value()
self.settings.setValue(self.settings.CLUSTERING_N_CLUSTERS, val)
kwargs["n_clusters"] = val
val = self.distance_threshold_edit.value()
self.settings.setValue(self.settings.CLUSTERING_DISTANCE_THRESHOLD, val)
kwargs["distance_threshold"] = val
val = self.min_cluster_size_edit.value()
self.settings.setValue(self.settings.CLUSTERING_MIN_CLUSTER_SIZE, val)
kwargs["min_cluster_size"] = val
return kwargs
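    # Example of the resulting kwargs (the settings values below are
    # hypothetical, and DistanceMetric.cosine is assumed to be a member of the
    # enum): with the agglomerative algorithm selected, cluster_kwargs yields
    #
    #     {
    #         "cluster_type": ClusterType.agglomerative,
    #         "metric": DistanceMetric.cosine,
    #         "n_clusters": 0,
    #         "distance_threshold": 0.25,
    #         "min_cluster_size": 60,
    #     }
    #
    # Note that distance_threshold and min_cluster_size are always included,
    # while n_clusters is only added for kmeans, spectral, and agglomerative.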
@property
def manifold_kwargs(self):
kwargs = {"metric": DistanceMetric[self.metric_dropdown.currentText()]}
return kwargs
def showEvent(self, event: QtGui.QShowEvent) -> None:
p = self.pos()
geo = self.parent().geometry()
self.move(
p.x() + geo.width() - self.geometry().width(),
p.y() - geo.height() - self.geometry().height(),
)
class SpeakerClusterSettingsWidget(QtWidgets.QPushButton):
    """Button exposing the cluster settings menu and emitting recluster requests."""
reclusterRequested = QtCore.Signal()
def __init__(self, *args):
super().__init__("Cluster settings", *args)
self.menu = SpeakerClusterSettingsMenu(self)
self.setMenu(self.menu)
self.menu.recluster_button.clicked.connect(self.recluster)
def recluster_available(self):
self.menu.recluster_button.setEnabled(True)
def recluster(self):
self.menu.recluster_button.setEnabled(False)
self.reclusterRequested.emit()
class IpaKeyboard(QtWidgets.QMenu):
    """Popup grid of phone buttons for inserting IPA symbols into pronunciations."""
inputPhone = QtCore.Signal(object, object)
def __init__(self, phones, parent=None):
super().__init__(parent)
self.settings = AnchorSettings()
layout = QtWidgets.QVBoxLayout()
self.scroll_area = QtWidgets.QScrollArea(self)
self.scroll_area.setFocusPolicy(QtCore.Qt.FocusPolicy.NoFocus)
widget = QtWidgets.QWidget(self)
scroll_layout = QtWidgets.QGridLayout()
self.scroll_area.setFixedHeight(300)
column_count = 10
self.buttons = [QtWidgets.QPushButton(p) for p in sorted(phones)]
col_index = 0
row_index = 0
for b in self.buttons:
b.setFont(self.settings.font)
b.clicked.connect(self.press)
b.setFocusPolicy(QtCore.Qt.FocusPolicy.NoFocus)
b.installEventFilter(self)
scroll_layout.addWidget(b, row_index, col_index)
col_index += 1
if col_index >= column_count:
col_index = 0
row_index += 1
layout.addWidget(self.scroll_area)
widget.setLayout(scroll_layout)
self.scroll_area.setWidget(widget)
self.setLayout(layout)
self.setFocusPolicy(QtCore.Qt.FocusPolicy.NoFocus)
self.scroll_area.setMinimumWidth(
widget.sizeHint().width() + self.scroll_area.verticalScrollBar().sizeHint().width()
)
self.scroll_area.setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarPolicy.ScrollBarAlwaysOff)
self.setStyleSheet(self.settings.keyboard_style_sheet)
def eventFilter(self, watched: QtCore.QObject, event: QtCore.QEvent) -> bool:
if event.type() == QtCore.QEvent.Type.KeyPress:
return True
return super(IpaKeyboard, self).eventFilter(watched, event)
def press(self):
b: QtWidgets.QPushButton = self.sender()
self.inputPhone.emit(b.text(), True)
def showEvent(self, event: QtGui.QShowEvent) -> None:
p = self.pos()
geo = self.geometry()
new_pos = int(p.x() - (geo.width() / 2))
self.move(new_pos, p.y())
class PronunciationErrorHighlighter(QtGui.QSyntaxHighlighter):
PHONES = r"\S+"
def __init__(self, phones, *args):
super().__init__(*args)
self.phones = set(phones)
self.settings = AnchorSettings()
self.keyword_color = self.settings.error_color
self.keyword_text_color = self.settings.primary_very_dark_color
self.highlight_format = QtGui.QTextCharFormat()
self.highlight_format.setBackground(self.keyword_color)
self.highlight_format.setForeground(self.keyword_text_color)
def highlightBlock(self, text):
for phone_object in re.finditer(self.PHONES, text):
if phone_object.group() not in self.phones:
self.setFormat(
phone_object.start(),
phone_object.end() - phone_object.start(),
self.highlight_format,
)
class PronunciationField(QtWidgets.QTextEdit):
def __init__(self, parent, phones):
super().__init__(parent)
self.phones = phones
self.setLineWrapMode(QtWidgets.QTextEdit.LineWrapMode.NoWrap)
self.setWordWrapMode(QtGui.QTextOption.WrapMode.NoWrap)
self.setObjectName("pronunciation_field")
self.highlighter = PronunciationErrorHighlighter(self.phones, self.document())
self.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarPolicy.ScrollBarAlwaysOff)
self.setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarPolicy.ScrollBarAlwaysOff)
class PronunciationInput(QtWidgets.QToolBar):
validationError = QtCore.Signal(object)
returnPressed = QtCore.Signal()
PHONES = r"\S+"
def __init__(self, phones, *args, icon_size=25):
super().__init__(*args)
self.phones = phones
self.input = PronunciationField(self, phones)
self.input.installEventFilter(self)
self.input.textChanged.connect(self.check_accept)
        # Escape phone symbols so any regex metacharacters in them are matched literally
        phone_set = "|".join(re.escape(p) for p in phones + [" "])
        self.validation_pattern = re.compile(rf"^({phone_set})+$")
self.icon_size = icon_size
self.original_text = None
self.setContentsMargins(0, 0, 0, 0)
self.setFocusProxy(self.input)
accept_icon = QtGui.QIcon()
accept_icon.addFile(
":check-circle.svg", mode=QtGui.QIcon.Mode.Normal, state=QtGui.QIcon.State.Off
)
accept_icon.addFile(
":highlighted/check-circle.svg",
mode=QtGui.QIcon.Mode.Normal,
state=QtGui.QIcon.State.On,
)
self.accept_action = QtGui.QAction(icon=accept_icon, parent=self)
self.accept_action.triggered.connect(self.returnPressed.emit)
cancel_icon = QtGui.QIcon()
cancel_icon.addFile(":undo.svg", mode=QtGui.QIcon.Mode.Normal, state=QtGui.QIcon.State.Off)
cancel_icon.addFile(
":highlighted/undo.svg", mode=QtGui.QIcon.Mode.Normal, state=QtGui.QIcon.State.On
)
self.cancel_action = QtGui.QAction(icon=cancel_icon, parent=self)
self.cancel_action.triggered.connect(self.cancel)
keyboard_icon = QtGui.QIcon()
keyboard_icon.addFile(
":keyboard.svg", mode=QtGui.QIcon.Mode.Normal, state=QtGui.QIcon.State.Off
)
keyboard_icon.addFile(
":highlighted/keyboard.svg", mode=QtGui.QIcon.Mode.Normal, state=QtGui.QIcon.State.On
)
self.keyboard_widget = QtWidgets.QPushButton(self)
self.keyboard_widget.setFocusPolicy(QtCore.Qt.FocusPolicy.NoFocus)
self.keyboard_widget.setIcon(keyboard_icon)
self.keyboard = IpaKeyboard(phones)
self.keyboard.installEventFilter(self)
self.keyboard.inputPhone.connect(self.add_phone)
self.keyboard_widget.setMenu(self.keyboard)
self.addWidget(self.input)
self.addWidget(self.keyboard_widget)
self.addAction(self.accept_action)
self.addAction(self.cancel_action)
def setFont(self, a0: QtGui.QFont) -> None:
super().setFont(a0)
self.keyboard_widget.setFont(a0)
self.input.setFont(a0)
def eventFilter(self, watched: QtCore.QObject, event: QtCore.QEvent) -> bool:
if (
isinstance(watched, (PronunciationField, IpaKeyboard))
and event.type() == QtCore.QEvent.Type.KeyPress
and event.key()
in {QtGui.Qt.Key.Key_Enter, QtGui.Qt.Key.Key_Return, QtGui.Qt.Key.Key_Tab}
):
if self.accept_action.isEnabled():
self.returnPressed.emit()
return True
elif (
            isinstance(watched, IpaKeyboard)
and event.type() == QtCore.QEvent.Type.KeyPress
and event.key() not in {QtGui.Qt.Key.Key_Escape}
):
self.input.keyPressEvent(event)
return True
return super(PronunciationInput, self).eventFilter(watched, event)
def check_accept(self):
self.accept_action.setEnabled(self.validate())
self.cancel_action.setEnabled(self.original_text != self.text())
    def sanitize(self, text):
        # str.replace() with no arguments raises TypeError; assuming the intent
        # was to normalize whitespace between phones
        return " ".join(text.split())
def add_phone(self, phone, full_phone):
if full_phone:
cursor = self.input.textCursor()
current_pos = cursor.position()
cursor.movePosition(
QtGui.QTextCursor.MoveOperation.Right, QtGui.QTextCursor.MoveMode.KeepAnchor
)
if cursor.selectedText() != " ":
phone = phone + " "
cursor.setPosition(current_pos, QtGui.QTextCursor.MoveMode.MoveAnchor)
cursor.movePosition(
QtGui.QTextCursor.MoveOperation.Left, QtGui.QTextCursor.MoveMode.KeepAnchor
)
if cursor.selectedText() != " ":
phone = " " + phone
cursor.setPosition(current_pos, QtGui.QTextCursor.MoveMode.MoveAnchor)
self.input.insertPlainText(phone)
def sizeHint(self) -> QtCore.QSize:
size = super().sizeHint()
size.setHeight(self.icon_size)
return size
def setText(self, text: str):
if self.original_text is None:
self.original_text = text
self.input.setPlainText(text)
def text(self) -> str:
return self.input.toPlainText()
def validate(self) -> bool:
for phone_object in re.finditer(self.PHONES, self.text()):
if phone_object.group() not in self.phones:
return False
return True
def cancel(self):
self.setText(self.original_text)
class WordInput(QtWidgets.QLineEdit):
def __init__(self, *args):
super().__init__(*args)
self.original_text = None
self.setFrame(False)
def setText(self, text: str):
if self.original_text is None:
self.original_text = text
super().setText(text)
def cancel(self):
self.setText(self.original_text)
class CountDelegate(QtWidgets.QStyledItemDelegate):
def __init__(self, parent=None):
super().__init__(parent)
from anchor.main import AnchorSettings
self.settings = AnchorSettings()
def refresh_settings(self):
self.settings.sync()
def paint(
self,
painter: QtGui.QPainter,
option: QtWidgets.QStyleOptionViewItem,
index: typing.Union[QtCore.QModelIndex, QtCore.QPersistentModelIndex],
) -> None:
super().paint(painter, option, index)
painter.save()
r = option.rect
size = int(self.settings.icon_size / 2)
x = r.left() + r.width() - self.settings.icon_size
y = r.top()
options = QtWidgets.QStyleOptionViewItem(option)
options.rect = QtCore.QRect(x, y, size, r.height())
self.initStyleOption(options, index)
icon = QtGui.QIcon(":external-link.svg")
icon.paint(painter, options.rect, QtCore.Qt.AlignmentFlag.AlignCenter)
painter.restore()
class EditableDelegate(QtWidgets.QStyledItemDelegate):
def __init__(self, parent=None):
super().__init__(parent)
from anchor.main import AnchorSettings
self.settings = AnchorSettings()
def refresh_settings(self):
self.settings.sync()
def createEditor(
self,
parent: DictionaryTableView,
option: QtWidgets.QStyleOptionViewItem,
index: typing.Union[QtCore.QModelIndex, QtCore.QPersistentModelIndex],
) -> QtWidgets.QWidget:
editor = WordInput(parent)
editor.setStyleSheet(self.settings.search_box_style_sheet)
editor.setFont(self.settings.font)
return editor
def setEditorData(
self,
editor: PronunciationInput,
index: typing.Union[QtCore.QModelIndex, QtCore.QPersistentModelIndex],
) -> None:
editor.setText(index.model().data(index, QtCore.Qt.ItemDataRole.EditRole))
def setModelData(
self,
editor: PronunciationInput,
model: DictionaryTableModel,
index: typing.Union[QtCore.QModelIndex, QtCore.QPersistentModelIndex],
) -> None:
value = editor.text().strip()
if editor.original_text != value:
model.setData(index, value, QtCore.Qt.ItemDataRole.EditRole)
model.submit()
def updateEditorGeometry(
self,
editor: PronunciationInput,
option: QtWidgets.QStyleOptionViewItem,
index: typing.Union[QtCore.QModelIndex, QtCore.QPersistentModelIndex],
) -> None:
editor.setGeometry(option.rect)
class PronunciationDelegate(EditableDelegate):
def eventFilter(self, object: QtCore.QObject, event: QtCore.QEvent) -> bool:
if event.type() == QtCore.QEvent.Type.KeyPress:
if isinstance(object, PronunciationInput) or isinstance(
object.parent(), PronunciationInput
):
if isinstance(object.parent(), PronunciationInput):
object = object.parent()
if event.key() in {
QtGui.Qt.Key.Key_Enter,
QtGui.Qt.Key.Key_Return,
QtGui.Qt.Key.Key_Tab,
}:
self.commitData.emit(object)
return True
return super().eventFilter(object, event)
def sizeHint(
self,
option: QtWidgets.QStyleOptionViewItem,
index: typing.Union[QtCore.QModelIndex, QtCore.QPersistentModelIndex],
) -> QtCore.QSize:
size = super().sizeHint(option, index)
size.setHeight(self.settings.icon_size)
return size
def createEditor(
self,
parent: DictionaryTableView,
option: QtWidgets.QStyleOptionViewItem,
index: typing.Union[QtCore.QModelIndex, QtCore.QPersistentModelIndex],
) -> QtWidgets.QWidget:
m: DictionaryTableModel = index.model()
self.view = parent.parent()
editor = PronunciationInput(m.phones, parent)
editor.setStyleSheet(self.settings.search_box_style_sheet)
editor.setFont(self.settings.font)
editor.installEventFilter(self)
editor.returnPressed.connect(self.accept)
editor.input.setFocus()
return editor
def accept(self):
editor = self.sender()
if editor.validate():
self.commitData.emit(editor)
self.closeEditor.emit(editor)
def setModelData(
self,
editor: PronunciationInput,
model: DictionaryTableModel,
index: typing.Union[QtCore.QModelIndex, QtCore.QPersistentModelIndex],
) -> None:
if editor.validate():
value = editor.text().strip()
if editor.original_text != value:
model.setData(index, value, QtCore.Qt.ItemDataRole.EditRole)
model.submit()
class OovTableView(AnchorTableView):
searchRequested = QtCore.Signal(object)
g2pRequested = QtCore.Signal(object, object)
def __init__(self, *args):
super().__init__(*args)
self.setEditTriggers(
QtWidgets.QAbstractItemView.EditTrigger.EditKeyPressed
| QtWidgets.QAbstractItemView.EditTrigger.DoubleClicked
| QtWidgets.QAbstractItemView.EditTrigger.SelectedClicked
)
self.header = HeaderView(QtCore.Qt.Orientation.Horizontal, self)
self.setHorizontalHeader(self.header)
self.doubleClicked.connect(self.search_word)
self.count_delegate = CountDelegate(self)
self.setItemDelegateForColumn(1, self.count_delegate)
self.setContextMenuPolicy(QtCore.Qt.ContextMenuPolicy.CustomContextMenu)
self.add_pronunciation_action = QtGui.QAction("Add pronunciation", self)
self.add_pronunciation_action.triggered.connect(self.add_pronunciation)
self.customContextMenuRequested.connect(self.generate_context_menu)
self.oov_model: typing.Optional[OovModel] = None
def generate_context_menu(self, location):
menu = QtWidgets.QMenu()
menu.addAction(self.add_pronunciation_action)
menu.exec_(self.mapToGlobal(location))
def add_pronunciation(self):
rows = self.selectionModel().selectedRows()
if not rows:
return
word = self.oov_model.data(
self.oov_model.createIndex(rows[0].row(), 0), QtCore.Qt.ItemDataRole.DisplayRole
)
word_id = self.oov_model.indices[rows[0].row()]
self.g2pRequested.emit(word, word_id)
self.oov_model.refresh()
def set_models(self, oov_model: OovModel):
self.oov_model = oov_model
self.setModel(self.oov_model)
self.refresh_settings()
self.horizontalHeader().sortIndicatorChanged.connect(self.model().update_sort)
def search_word(self, index: QtCore.QModelIndex):
if not index.isValid() or index.column() != 1:
return
word_index = self.oov_model.index(index.row(), 0)
word = self.oov_model.data(word_index, QtCore.Qt.ItemDataRole.DisplayRole)
query = TextFilterQuery(word, False, True, False)
self.searchRequested.emit(query)
class DictionaryTableView(AnchorTableView):
searchRequested = QtCore.Signal(object)
def __init__(self, *args):
super().__init__(*args)
self.setEditTriggers(
QtWidgets.QAbstractItemView.EditTrigger.EditKeyPressed
| QtWidgets.QAbstractItemView.EditTrigger.DoubleClicked
| QtWidgets.QAbstractItemView.EditTrigger.SelectedClicked
)
self.header = HeaderView(QtCore.Qt.Orientation.Horizontal, self)
self.setHorizontalHeader(self.header)
self.doubleClicked.connect(self.search_word)
self.edit_delegate = EditableDelegate(self)
self.count_delegate = CountDelegate(self)
self.pronunciation_delegate = PronunciationDelegate(self)
self.setItemDelegateForColumn(0, self.edit_delegate)
self.setItemDelegateForColumn(1, self.count_delegate)
self.setItemDelegateForColumn(2, self.pronunciation_delegate)
self.setContextMenuPolicy(QtCore.Qt.ContextMenuPolicy.CustomContextMenu)
self.delete_words_action = QtGui.QAction("Delete words", self)
self.delete_pronunciations_action = QtGui.QAction("Delete pronunciations", self)
self.add_pronunciation_action = QtGui.QAction("Add pronunciation", self)
self.add_pronunciation_action.triggered.connect(self.add_pronunciation)
self.delete_pronunciations_action.triggered.connect(self.delete_pronunciations)
self.delete_words_action.triggered.connect(self.delete_words)
self.customContextMenuRequested.connect(self.generate_context_menu)
def generate_context_menu(self, location):
menu = QtWidgets.QMenu()
menu.addAction(self.add_pronunciation_action)
menu.addSeparator()
menu.addAction(self.delete_words_action)
menu.addAction(self.delete_pronunciations_action)
menu.exec_(self.mapToGlobal(location))
def delete_pronunciations(self):
rows = self.selectionModel().selectedRows(2)
if not rows:
return
pronunciation_ids = [self.dictionary_model.pron_indices[x.row()] for x in rows]
self.dictionary_model.delete_pronunciations(pronunciation_ids)
def delete_words(self):
rows = self.selectionModel().selectedRows(0)
if not rows:
return
word_ids = [self.dictionary_model.word_indices[x.row()] for x in rows]
self.dictionary_model.delete_words(word_ids)
def add_pronunciation(self):
rows = self.selectionModel().selectedRows()
if not rows:
return
word_id = self.dictionary_model.word_indices[rows[0].row()]
word = self.dictionary_model.data(
self.dictionary_model.createIndex(rows[0].row(), 0), QtCore.Qt.ItemDataRole.DisplayRole
)
self.dictionary_model.add_pronunciation(word, word_id)
def set_models(self, dictionary_model: DictionaryTableModel):
self.dictionary_model = dictionary_model
self.setModel(self.dictionary_model)
self.refresh_settings()
self.horizontalHeader().sortIndicatorChanged.connect(self.model().update_sort)
self.dictionary_model.newResults.connect(self.calculate_spans)
def calculate_spans(self):
for i in range(self.dictionary_model.rowCount()):
if self.rowSpan(i, 0) != 1:
self.setSpan(i, 0, 1, 1)
self.setSpan(i, 1, 1, 1)
if (
i > 0
and self.dictionary_model.word_indices[i - 1]
== self.dictionary_model.word_indices[i]
):
prev_span = self.rowSpan(i - 1, 0)
self.setSpan(i - prev_span, 0, prev_span + 1, 1)
self.setSpan(i - prev_span, 1, prev_span + 1, 1)
def search_word(self, index: QtCore.QModelIndex):
if not index.isValid() or index.column() != 1:
return
word_index = self.dictionary_model.index(index.row(), 0)
word = self.dictionary_model.data(word_index, QtCore.Qt.ItemDataRole.DisplayRole)
query = TextFilterQuery(word, False, True, False)
self.searchRequested.emit(query)
class SpeakerTableView(AnchorTableView):
searchRequested = QtCore.Signal(object)
def __init__(self, *args):
super().__init__(*args)
self.setEditTriggers(
QtWidgets.QAbstractItemView.EditTrigger.EditKeyPressed
| QtWidgets.QAbstractItemView.EditTrigger.DoubleClicked
| QtWidgets.QAbstractItemView.EditTrigger.SelectedClicked
)
self.header = HeaderView(QtCore.Qt.Orientation.Horizontal, self)
self.setHorizontalHeader(self.header)
self.button_delegate = ButtonDelegate(":magnifying-glass.svg", self)
self.edit_delegate = EditableDelegate(self)
self.speaker_delegate = SpeakerViewDelegate(self)
self.setItemDelegateForColumn(1, self.speaker_delegate)
self.setItemDelegateForColumn(0, self.edit_delegate)
self.setItemDelegateForColumn(4, self.button_delegate)
self.clicked.connect(self.cluster_utterances)
        self.speaker_model: Optional[SpeakerModel] = None
self.doubleClicked.connect(self.search_speaker)
def set_models(self, model: SpeakerModel):
self.speaker_model = model
self.setModel(model)
self.refresh_settings()
def cluster_utterances(self, index: QtCore.QModelIndex):
if not index.isValid() or index.column() != 4:
return
self.speaker_model.change_current_speaker(self.speaker_model.speakerAt(index.row()))
def search_speaker(self, index: QtCore.QModelIndex):
if not index.isValid() or index.column() != 1:
return
speaker = self.model().data(
self.model().index(index.row(), 0), QtCore.Qt.ItemDataRole.DisplayRole
)
self.searchRequested.emit(speaker)
class ModelInfoWidget(QtWidgets.QWidget):
def __init__(self, model_type, *args):
super().__init__(*args)
self.model_type = model_type
self.settings = AnchorSettings()
self.label = QtWidgets.QLineEdit(f"No {model_type} loaded")
self.path_label = QtWidgets.QLineEdit("")
self.label.setReadOnly(True)
self.path_label.setReadOnly(True)
self.tree = QtWidgets.QTreeWidget()
self.header = HeaderView(QtCore.Qt.Orientation.Horizontal, self)
self.header.setSectionsClickable(False)
self.header.setSortIndicatorShown(False)
self.tree.setHeader(self.header)
self.tree.setAlternatingRowColors(True)
self.setLayout(QtWidgets.QVBoxLayout())
info_layout = QtWidgets.QFormLayout()
name_label = QtWidgets.QLabel(model_type.title())
name_label.setFont(self.settings.font)
info_layout.addRow(name_label, self.label)
path_label = QtWidgets.QLabel("Path")
path_label.setFont(self.settings.font)
info_layout.addRow(path_label, self.path_label)
self.layout().setAlignment(QtCore.Qt.AlignmentFlag.AlignCenter)
self.layout().addLayout(info_layout)
self.layout().addWidget(self.tree)
self.tree.setColumnCount(2)
self.tree.setHeaderLabels(["Property", "Value"])
self.tree.setIndentation(25)
self.header.setDefaultSectionSize(200)
self.corpus_model = None
self.model = None
self.label.setFont(self.settings.font)
self.path_label.setFont(self.settings.font)
def refresh(self):
self.tree.clear()
if self.model is not None:
self.label.setText(self.model.name)
self.path_label.setText(str(self.model.source))
meta = self.model.meta
for k, v in meta.items():
node = QtWidgets.QTreeWidgetItem(self.tree)
label = QtWidgets.QLabel(str(k))
label.setFont(self.settings.font)
self.tree.setItemWidget(node, 0, label)
if isinstance(v, dict):
for k2, v2 in v.items():
child_node = QtWidgets.QTreeWidgetItem(node)
label = QtWidgets.QLabel(str(k2))
label.setFont(self.settings.font)
self.tree.setItemWidget(child_node, 0, label)
label = QtWidgets.QLabel(str(v2))
label.setWordWrap(True)
label.setFont(self.settings.font)
self.tree.setItemWidget(child_node, 1, label)
else:
label = QtWidgets.QLabel(str(v))
label.setWordWrap(True)
label.setFont(self.settings.font)
self.tree.setItemWidget(node, 1, label)
else:
self.label.setText(f"No {self.model_type} loaded")
self.path_label.setText("")
class AcousticModelWidget(ModelInfoWidget):
def __init__(self, *args):
super().__init__("acoustic model", *args)
def change_model(self):
self.model = None
if self.corpus_model is not None:
self.model = self.corpus_model.acoustic_model
self.refresh()
def set_models(self, corpus_model: CorpusModel):
self.corpus_model = corpus_model
self.corpus_model.acousticModelChanged.connect(self.change_model)
class LanguageModelWidget(ModelInfoWidget):
def __init__(self, *args):
super().__init__("language model", *args)
def change_model(self):
self.model = None
if self.corpus_model is not None:
self.model = self.corpus_model.language_model
self.refresh()
def set_models(self, corpus_model: CorpusModel):
self.corpus_model = corpus_model
self.corpus_model.languageModelChanged.connect(self.change_model)
class G2PModelWidget(ModelInfoWidget):
def __init__(self, *args):
super().__init__("G2P model", *args)
def change_model(self):
self.model = None
if self.corpus_model is not None:
self.model = self.corpus_model.g2p_model
self.refresh()
def set_models(self, corpus_model: CorpusModel):
self.corpus_model = corpus_model
self.corpus_model.g2pModelChanged.connect(self.change_model)
class TranscriberWidget(QtWidgets.QWidget):
def __init__(self, *args):
super().__init__(*args)
self.button = QtWidgets.QToolButton()
layout = QtWidgets.QFormLayout()
self.acoustic_model_label = QtWidgets.QLabel("Not loaded")
self.dictionary_label = QtWidgets.QLabel("Not loaded")
self.language_model_label = QtWidgets.QLabel("Not loaded")
self.frequent_words_edit = QtWidgets.QSpinBox()
self.frequent_words_edit.setMinimum(10)
self.frequent_words_edit.setMaximum(1000)
self.frequent_words_edit.setValue(100)
self.frequent_words_edit.setEnabled(False)
layout.addRow(QtWidgets.QLabel("Acoustic model"), self.acoustic_model_label)
layout.addRow(QtWidgets.QLabel("Dictionary"), self.dictionary_label)
layout.addRow(QtWidgets.QLabel("Language model"), self.language_model_label)
layout.addRow(QtWidgets.QLabel("Target number of ngrams"), self.frequent_words_edit)
layout.addWidget(self.button)
self.text = QtWidgets.QTextEdit()
self.text.setReadOnly(True)
layout.addWidget(self.text)
self.setLayout(layout)
self.corpus_model: Optional[CorpusModel] = None
def refresh(self):
validate_enabled = True
if self.corpus_model.corpus is None:
return
dataset_type = inspect_database(self.corpus_model.corpus.identifier)
if dataset_type in {
DatasetType.ACOUSTIC_CORPUS_WITH_DICTIONARY,
DatasetType.TEXT_CORPUS_WITH_DICTIONARY,
}:
self.dictionary_label.setText(self.corpus_model.corpus.dictionary_model.name)
else:
validate_enabled = False
self.dictionary_label.setText("Not loaded")
if self.corpus_model.acoustic_model is not None:
self.acoustic_model_label.setText(self.corpus_model.acoustic_model.name)
else:
validate_enabled = False
self.acoustic_model_label.setText("Not loaded")
if self.corpus_model.language_model is not None:
self.language_model_label.setText(self.corpus_model.language_model.name)
else:
self.language_model_label.setText("Not loaded")
self.frequent_words_edit.setEnabled(validate_enabled)
self.button.defaultAction().setEnabled(validate_enabled)
def set_models(self, corpus_model: CorpusModel, dictionary_model: DictionaryTableModel):
self.corpus_model = corpus_model
self.dictionary_model = dictionary_model
self.corpus_model.dictionaryChanged.connect(self.refresh)
self.corpus_model.acousticModelChanged.connect(self.refresh)
self.corpus_model.languageModelChanged.connect(self.refresh)
class SpeakerViewDelegate(QtWidgets.QStyledItemDelegate):
def __init__(self, parent=None):
super().__init__(parent)
from anchor.main import AnchorSettings
self.settings = AnchorSettings()
def refresh_settings(self):
self.settings.sync()
def paint(
self,
painter: QtGui.QPainter,
option: QtWidgets.QStyleOptionViewItem,
index: typing.Union[QtCore.QModelIndex, QtCore.QPersistentModelIndex],
) -> None:
super().paint(painter, option, index)
painter.save()
r = option.rect
size = int(self.settings.icon_size / 2)
x = r.left() + r.width() - self.settings.icon_size
y = r.top()
options = QtWidgets.QStyleOptionViewItem(option)
options.rect = QtCore.QRect(x, y, size, r.height())
self.initStyleOption(options, index)
icon = QtGui.QIcon(":external-link.svg")
icon.paint(painter, options.rect, QtCore.Qt.AlignmentFlag.AlignCenter)
painter.restore()
class ButtonDelegate(QtWidgets.QStyledItemDelegate):
def __init__(self, icon_path, parent=None):
super().__init__(parent)
from anchor.main import AnchorSettings
self.settings = AnchorSettings()
self.icon_path = icon_path
def refresh_settings(self):
self.settings.sync()
def paint(
self,
painter: QtGui.QPainter,
option: QtWidgets.QStyleOptionViewItem,
index: typing.Union[QtCore.QModelIndex, QtCore.QPersistentModelIndex],
) -> None:
painter.save()
r = option.rect
size = int(self.settings.icon_size / 2)
x = r.left() + r.width() - self.settings.icon_size
y = r.top()
options = QtWidgets.QStyleOptionViewItem(option)
options.rect = QtCore.QRect(x, y, size, r.height())
self.initStyleOption(options, index)
icon = QtGui.QIcon(self.icon_path)
icon.paint(painter, options.rect, QtCore.Qt.AlignmentFlag.AlignCenter)
painter.restore()
class SpeakerClustersWidget(QtWidgets.QWidget):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.settings = AnchorSettings()
self.settings.sync()
form_layout = QtWidgets.QHBoxLayout()
self.cluster_settings_widget = SpeakerClusterSettingsWidget(self)
self.cluster_dropdown = QtWidgets.QComboBox()
self.cluster_dropdown.setPlaceholderText("Select a cluster...")
self.button = QtWidgets.QPushButton("Change speaker")
form_layout.addWidget(self.cluster_dropdown)
form_layout.addWidget(self.button)
self.button.clicked.connect(self.change_speaker)
self.plot_widget = UtteranceClusterView(self)
self.cluster_settings_widget.reclusterRequested.connect(self.recluster)
self.plot_widget.plotAvailable.connect(self.cluster_settings_widget.recluster_available)
layout = QtWidgets.QVBoxLayout()
layout.addWidget(self.cluster_settings_widget)
layout.addWidget(self.plot_widget)
layout.addLayout(form_layout)
self.setLayout(layout)
def recluster(self):
self.speaker_model.update_manifold_kwargs(
self.cluster_settings_widget.menu.manifold_kwargs
)
self.speaker_model.update_cluster_kwargs(self.cluster_settings_widget.menu.cluster_kwargs)
def change_speaker(self):
if not self.plot_widget.selected_indices:
return
indices = np.array(list(self.plot_widget.selected_indices))
utterance_ids = self.speaker_model.utterance_ids[indices].tolist()
self.speaker_model.change_speaker(utterance_ids, self.speaker_model.current_speaker, 0)
def set_models(
self,
corpus_model: CorpusModel,
selection_model: CorpusSelectionModel,
speaker_model: SpeakerModel,
):
self.speaker_model = speaker_model
self.speaker_model.update_manifold_kwargs(
self.cluster_settings_widget.menu.manifold_kwargs
)
self.speaker_model.update_cluster_kwargs(self.cluster_settings_widget.menu.cluster_kwargs)
self.plot_widget.set_models(corpus_model, selection_model, speaker_model)
class SpeakerMergeTable(AnchorTableView):
searchRequested = QtCore.Signal(object)
def __init__(self, *args):
super().__init__(*args)
self.header = HeaderView(QtCore.Qt.Orientation.Horizontal, self)
self.setHorizontalHeader(self.header)
self.setSortingEnabled(False)
self.speaker_delegate = SpeakerViewDelegate(self)
self.button_delegate = ButtonDelegate(":compress.svg", self)
self.setItemDelegateForColumn(0, self.speaker_delegate)
self.setItemDelegateForColumn(1, self.speaker_delegate)
self.setItemDelegateForColumn(3, self.button_delegate)
self.doubleClicked.connect(self.search_speaker)
self.clicked.connect(self.merge_speakers)
self.merge_speaker_model: Optional[MergeSpeakerModel] = None
def set_models(self, model: MergeSpeakerModel):
self.merge_speaker_model = model
self.setModel(model)
self.refresh_settings()
def merge_speakers(self, index: QtCore.QModelIndex):
if not index.isValid() or index.column() != 3:
return
self.merge_speaker_model.merge_speakers(index.row())
def search_speaker(self, index: QtCore.QModelIndex):
if not index.isValid() or index.column() > 1:
return
speaker = self.model().data(
self.model().index(index.row(), index.column()), QtCore.Qt.ItemDataRole.DisplayRole
)
self.searchRequested.emit(speaker)
class ThresholdWidget(QtWidgets.QLineEdit):
def __init__(self, *args):
super().__init__(*args)
self.settings = AnchorSettings()
def validate(self):
if self.text() == "":
return True
try:
float(self.text())
except ValueError:
self.setProperty("error", True)
self.style().unpolish(self)
self.style().polish(self)
self.update()
return False
return True
def value(self):
if self.text() and self.validate():
return float(self.text())
return None
def setValue(self, val):
self.setText(f"{val:.4f}")
class DiarizationWidget(QtWidgets.QWidget):
def __init__(self, *args):
super().__init__(*args)
self.settings = AnchorSettings()
        # Create layouts without a parent; they are attached explicitly below,
        # which avoids Qt's "already has a layout" warnings
        form_layout = QtWidgets.QFormLayout()
        form_widget = QtWidgets.QWidget(self)
        layout = QtWidgets.QVBoxLayout()
self.ivector_extractor_label = QtWidgets.QLabel("Not loaded")
self.threshold_edit = ThresholdWidget(self)
self.threshold_edit.returnPressed.connect(self.search)
self.metric_dropdown = QtWidgets.QComboBox()
for m in DistanceMetric:
self.metric_dropdown.addItem(m.value)
form_layout.addRow(QtWidgets.QLabel("Ivector extractor"), self.ivector_extractor_label)
form_layout.addRow(QtWidgets.QLabel("Distance threshold"), self.threshold_edit)
form_layout.addRow(QtWidgets.QLabel("Distance metric"), self.metric_dropdown)
form_widget.setLayout(form_layout)
layout.addWidget(form_widget)
self.refresh_ivectors_action = QtGui.QAction("Refresh ivectors")
self.search_action = QtGui.QAction("Search")
self.search_action.triggered.connect(self.search)
self.toolbar = QtWidgets.QToolBar()
self.toolbar.addAction(self.refresh_ivectors_action)
self.toolbar.addAction(self.search_action)
self.speaker_dropdown = CompleterLineEdit(self)
self.speaker_dropdown.line_edit.setPlaceholderText("Filter by speaker")
self.speaker_dropdown.line_edit.returnPressed.connect(self.search)
layout.addWidget(self.speaker_dropdown)
layout.addWidget(self.toolbar)
self.table = SpeakerMergeTable(self)
layout.addWidget(self.table)
self.merge_speaker_model: Optional[MergeSpeakerModel] = None
self.current_page = 0
self.num_pages = 0
self.pagination_toolbar = PaginationWidget()
        # Connect the bound method itself; calling it here would connect its return value
        self.pagination_toolbar.pageRequested.connect(self.table.scrollToTop)
layout.addWidget(self.pagination_toolbar)
self.setLayout(layout)
def search(self):
self.table.selectionModel().clearSelection()
self.merge_speaker_model.set_speaker_filter(self.speaker_dropdown.current_text())
self.merge_speaker_model.set_threshold(self.threshold_edit.value())
self.merge_speaker_model.set_metric(self.metric_dropdown.currentText())
self.merge_speaker_model.update_data()
def merge_all(self):
threshold = self.threshold_edit.value()
if threshold is None:
return
self.merge_speaker_model.set_speaker_filter(self.speaker_dropdown.current_text())
self.merge_speaker_model.set_threshold(self.threshold_edit.value())
self.merge_speaker_model.set_metric(self.metric_dropdown.currentText())
self.merge_speaker_model.merge_all()
def refresh(self):
validate_enabled = True
if (
self.merge_speaker_model.corpus_model.ivector_extractor is not None
and self.merge_speaker_model.corpus_model.corpus is not None
):
if isinstance(self.merge_speaker_model.corpus_model.ivector_extractor, str):
name = self.merge_speaker_model.corpus_model.ivector_extractor
else:
name = self.merge_speaker_model.corpus_model.ivector_extractor.name
self.ivector_extractor_label.setText(name)
self.search_action.setEnabled(
self.merge_speaker_model.corpus_model.corpus.has_any_ivectors()
)
self.threshold_edit.setEnabled(
self.merge_speaker_model.corpus_model.corpus.has_any_ivectors()
)
else:
validate_enabled = False
self.ivector_extractor_label.setText("Not loaded")
self.search_action.setEnabled(False)
self.threshold_edit.setEnabled(False)
self.refresh_ivectors_action.setEnabled(validate_enabled)
def set_models(self, model: MergeSpeakerModel):
self.merge_speaker_model = model
self.merge_speaker_model.corpus_model.corpusLoaded.connect(self.update_speaker_count)
self.merge_speaker_model.corpus_model.corpusLoaded.connect(self.refresh)
self.table.set_models(model)
self.merge_speaker_model.corpus_model.ivectorExtractorChanged.connect(self.refresh)
self.merge_speaker_model.resultCountChanged.connect(
self.pagination_toolbar.update_result_count
)
self.pagination_toolbar.offsetRequested.connect(self.merge_speaker_model.set_offset)
self.pagination_toolbar.set_limit(self.merge_speaker_model.limit)
self.merge_speaker_model.corpus_model.speakersRefreshed.connect(
self.speaker_dropdown.update_completions
)
self.threshold_edit.textChanged.connect(self.check_merge_all)
def check_merge_all(self):
for a in self.toolbar.actions():
a.setEnabled(self.threshold_edit.validate())
def update_speaker_count(self):
self.pagination_toolbar.update_result_count(
self.merge_speaker_model.corpus_model.corpus.num_speakers
)
class AlignmentWidget(QtWidgets.QWidget):
def __init__(self, *args):
super().__init__(*args)
self.button = QtWidgets.QToolButton()
layout = QtWidgets.QFormLayout()
self.acoustic_model_label = QtWidgets.QLabel("Not loaded")
self.dictionary_label = QtWidgets.QLabel("Not loaded")
self.fine_tune_check = QtWidgets.QCheckBox()
self.beam = QtWidgets.QSpinBox()
self.beam.setMinimum(6)
self.beam.setValue(10)
self.beam.setMaximum(1000)
self.retry_beam = QtWidgets.QSpinBox()
self.retry_beam.setMinimum(24)
self.retry_beam.setMaximum(4000)
self.retry_beam.setValue(40)
self.silence_boost = ThresholdWidget()
self.silence_boost.setText("1.0")
self.cutoff_check = QtWidgets.QCheckBox()
layout.addRow(QtWidgets.QLabel("Acoustic model"), self.acoustic_model_label)
layout.addRow(QtWidgets.QLabel("Dictionary"), self.dictionary_label)
layout.addRow(QtWidgets.QLabel("Beam"), self.beam)
layout.addRow(QtWidgets.QLabel("Retry beam"), self.retry_beam)
layout.addRow(QtWidgets.QLabel("Silence boost factor"), self.silence_boost)
layout.addRow(QtWidgets.QLabel("Fine tune"), self.fine_tune_check)
layout.addRow(QtWidgets.QLabel("Cutoff modeling"), self.cutoff_check)
layout.addWidget(self.button)
self.text = QtWidgets.QTextEdit()
self.text.setReadOnly(True)
layout.addWidget(self.text)
self.setLayout(layout)
self.corpus_model: Optional[CorpusModel] = None
def refresh(self):
validate_enabled = True
if self.corpus_model.has_dictionary:
self.dictionary_label.setText(self.corpus_model.corpus.dictionary_model.name)
else:
validate_enabled = False
self.dictionary_label.setText("Not loaded")
if self.corpus_model.acoustic_model is not None:
self.acoustic_model_label.setText(self.corpus_model.acoustic_model.name)
else:
validate_enabled = False
self.acoustic_model_label.setText("Not loaded")
if self.button.defaultAction() is not None:
self.button.defaultAction().setEnabled(validate_enabled)
def set_models(self, corpus_model: CorpusModel):
self.corpus_model = corpus_model
self.refresh()
self.corpus_model.dictionaryChanged.connect(self.refresh)
self.corpus_model.acousticModelChanged.connect(self.refresh)
def parameters(self):
return {
"beam": int(self.beam.text()),
"retry_beam": int(self.retry_beam.text()),
"boost_silence": self.silence_boost.value(),
"fine_tune": self.fine_tune_check.isChecked(),
"use_cutoff_model": self.cutoff_check.isChecked(),
}
class WordDelegate(QtWidgets.QStyledItemDelegate):
def __init__(self, *args):
super(WordDelegate, self).__init__(*args)
class OovWidget(QtWidgets.QWidget):
dictionaryError = QtCore.Signal(object)
def __init__(self, *args):
super().__init__(*args)
self.settings = AnchorSettings()
self.setAttribute(QtCore.Qt.WidgetAttribute.WA_StyledBackground, True)
self.oov_model: Optional[OovModel] = None
dict_layout = QtWidgets.QVBoxLayout()
self.table = OovTableView(self)
# self.table.cellChanged.connect(self.dictionary_edited)
self.toolbar = QtWidgets.QToolBar()
self.search_box = SearchBox(self)
self.toolbar.addWidget(self.search_box)
self.search_box.searchActivated.connect(self.search)
self.current_search_query = None
self.current_search_text = ""
dict_layout.addWidget(self.toolbar)
dict_layout.addWidget(self.table)
self.pagination_toolbar = PaginationWidget()
        self.pagination_toolbar.pageRequested.connect(self.table.scrollToTop)
dict_layout.addWidget(self.pagination_toolbar)
self.setLayout(dict_layout)
self.refresh_settings()
def refresh_settings(self):
self.settings.sync()
font = self.settings.font
self.table.refresh_settings()
self.pagination_toolbar.set_limit(self.settings.value(self.settings.RESULTS_PER_PAGE))
self.search_box.setFont(font)
self.search_box.setStyleSheet(self.settings.search_box_style_sheet)
def search(self):
if self.oov_model.text_filter != self.search_box.query():
self.pagination_toolbar.current_page = 0
self.oov_model.current_offset = 0
self.oov_model.set_text_filter(self.search_box.query())
def set_models(self, oov_model: OovModel):
self.oov_model = oov_model
self.table.set_models(oov_model)
self.oov_model.resultCountChanged.connect(self.pagination_toolbar.update_result_count)
self.pagination_toolbar.offsetRequested.connect(self.oov_model.set_offset)
self.oov_model.refresh()
class DictionaryWidget(QtWidgets.QWidget):
dictionaryError = QtCore.Signal(object)
def __init__(self, *args):
super().__init__(*args)
self.settings = AnchorSettings()
self.setAttribute(QtCore.Qt.WidgetAttribute.WA_StyledBackground, True)
self.dictionary_model: Optional[DictionaryTableModel] = None
dict_layout = QtWidgets.QVBoxLayout()
self.table = DictionaryTableView(self)
self.ignore_errors = False
self.toolbar = QtWidgets.QToolBar()
self.search_box = SearchBox(self)
self.status_indicator = LoadingScreen(self, logo=False)
self.status_indicator.setVisible(False)
dict_layout.addWidget(self.status_indicator)
self.dictionary_dropdown = QtWidgets.QComboBox()
self.toolbar.addWidget(self.dictionary_dropdown)
self.toolbar.addWidget(self.search_box)
self.search_box.searchActivated.connect(self.search)
self.current_search_query = None
self.current_search_text = ""
self.refresh_word_counts_action = QtGui.QAction(self)
self.refresh_word_counts_action.setIcon(QtGui.QIcon(":oov-check.svg"))
self.refresh_word_counts_action.setEnabled(True)
self.toolbar.addAction(self.refresh_word_counts_action)
dict_layout.addWidget(self.toolbar)
dict_layout.addWidget(self.table)
self.pagination_toolbar = PaginationWidget()
        self.pagination_toolbar.pageRequested.connect(self.table.scrollToTop)
dict_layout.addWidget(self.pagination_toolbar)
self.setLayout(dict_layout)
self.refresh_settings()
def dictionaries_refreshed(self, dictionaries):
self.dictionary_dropdown.clear()
if not dictionaries:
return
for d_id, d_name in dictionaries:
self.dictionary_dropdown.addItem(d_name, userData=d_id)
self.dictionary_dropdown.setCurrentIndex(0)
def refresh_settings(self):
self.settings.sync()
font = self.settings.font
self.table.refresh_settings()
self.pagination_toolbar.set_limit(self.settings.value(self.settings.RESULTS_PER_PAGE))
self.search_box.setFont(font)
self.search_box.setStyleSheet(self.settings.search_box_style_sheet)
def search(self):
if self.dictionary_model.text_filter != self.search_box.query():
self.pagination_toolbar.current_page = 0
self.dictionary_model.current_offset = 0
self.dictionary_model.set_text_filter(self.search_box.query())
def update_g2p(self, g2p_model):
self.g2p_model = g2p_model
def corpus_data_changed(self):
self.refresh_word_counts_action.setEnabled(True)
def updating_counts(self):
self.table.setVisible(False)
self.pagination_toolbar.setVisible(False)
self.toolbar.setVisible(False)
self.status_indicator.setVisible(True)
self.refresh_word_counts_action.setEnabled(False)
def counts_updated(self):
self.table.setVisible(True)
self.pagination_toolbar.setVisible(True)
self.toolbar.setVisible(True)
self.status_indicator.setVisible(False)
self.refresh_word_counts_action.setEnabled(True)
def set_models(self, dictionary_model: DictionaryTableModel):
self.dictionary_model = dictionary_model
self.dictionary_model.requestLookup.connect(self.look_up_word)
self.dictionary_model.dictionariesRefreshed.connect(self.dictionaries_refreshed)
self.dictionary_dropdown.currentIndexChanged.connect(self.update_current_dictionary)
self.refresh_word_counts_action.triggered.connect(self.dictionary_model.update_word_counts)
self.dictionary_model.wordCountsRefreshed.connect(self.counts_updated)
self.refresh_word_counts_action.triggered.connect(self.updating_counts)
self.dictionary_model.corpus_model.databaseSynced.connect(self.corpus_data_changed)
self.table.set_models(dictionary_model)
self.dictionary_model.resultCountChanged.connect(
self.pagination_toolbar.update_result_count
)
self.pagination_toolbar.offsetRequested.connect(self.dictionary_model.set_offset)
def update_current_dictionary(self):
d_id = self.dictionary_dropdown.currentData()
self.dictionary_model.update_current_index(d_id)
def look_up_word(self, word):
self.search_box.setQuery(TextFilterQuery(word, False, True, False))
class SpeakerWidget(QtWidgets.QWidget):
def __init__(self, *args):
super(SpeakerWidget, self).__init__(*args)
self.settings = AnchorSettings()
self.setAttribute(QtCore.Qt.WidgetAttribute.WA_StyledBackground, True)
speaker_layout = QtWidgets.QVBoxLayout()
self.corpus_model: Optional[CorpusModel] = None
top_toolbar = QtWidgets.QToolBar()
self.search_box = SearchBox(self)
top_toolbar.addWidget(self.search_box)
self.search_box.searchActivated.connect(self.search)
self.current_search_query = None
self.current_search_text = ""
speaker_layout.addWidget(top_toolbar)
self.table = SpeakerTableView()
self.table.horizontalHeader().setSortIndicator(1, QtCore.Qt.SortOrder.DescendingOrder)
speaker_layout.addWidget(self.table)
self.current_page = 0
self.num_pages = 0
self.pagination_toolbar = PaginationWidget()
        self.pagination_toolbar.pageRequested.connect(self.table.scrollToTop)
speaker_layout.addWidget(self.pagination_toolbar)
self.tool_bar_wrapper = QtWidgets.QVBoxLayout()
self.tool_bar = QtWidgets.QToolBar()
self.tool_bar.setSizePolicy(
QtWidgets.QSizePolicy.Policy.Expanding, QtWidgets.QSizePolicy.Policy.Preferred
)
self.tool_bar.setToolButtonStyle(QtCore.Qt.ToolButtonStyle.ToolButtonTextBesideIcon)
self.tool_bar_wrapper.addWidget(self.tool_bar)
self.cluster_widget = SpeakerClustersWidget(self)
speaker_layout.addWidget(self.cluster_widget)
self.speakers = None
self.speaker_edit = NewSpeakerField()
self.result_count = 0
self.tool_bar.addWidget(self.speaker_edit)
toolbar_wrapper_widget = QtWidgets.QWidget()
toolbar_wrapper_widget.setLayout(self.tool_bar_wrapper)
speaker_layout.addWidget(toolbar_wrapper_widget)
self.setLayout(speaker_layout)
self.refresh_settings()
def refresh_cluster(self):
self.table.cluster_utterances(self.table.selectionModel().currentIndex())
def search(self):
self.speaker_model.set_text_filter(self.search_box.query())
def set_models(
self,
corpus_model: CorpusModel,
selection_model: CorpusSelectionModel,
speaker_model: SpeakerModel,
):
self.speaker_model = speaker_model
self.cluster_widget.set_models(corpus_model, selection_model, speaker_model)
self.speaker_model.corpus_model.corpusLoaded.connect(self.update_speaker_count)
self.table.set_models(self.speaker_model)
self.speaker_model.resultCountChanged.connect(self.pagination_toolbar.update_result_count)
self.pagination_toolbar.offsetRequested.connect(self.speaker_model.set_offset)
self.pagination_toolbar.set_limit(self.speaker_model.limit)
self.search_box.setStyleSheet(self.settings.search_box_style_sheet)
def update_speaker_count(self):
self.pagination_toolbar.update_result_count(
self.speaker_model.corpus_model.corpus.num_speakers
)
def refresh_settings(self):
self.settings.sync()
font = self.settings.font
self.speaker_edit.setFont(font)
self.search_box.setFont(font)
self.search_box.setStyleSheet(self.settings.search_box_style_sheet)
self.table.refresh_settings()
self.pagination_toolbar.set_limit(
self.table.settings.value(self.table.settings.RESULTS_PER_PAGE)
)
class ColorEdit(QtWidgets.QPushButton): # pragma: no cover
def __init__(self, parent=None):
super(ColorEdit, self).__init__(parent=parent)
self.clicked.connect(self.open_dialog)
def set_color(self, color: QtGui.QColor):
self._color = color
self.update_icon()
def update_icon(self):
pixmap = QtGui.QPixmap(100, 100)
pixmap.fill(self._color)
icon = QtGui.QIcon(pixmap)
icon.addPixmap(pixmap, QtGui.QIcon.Mode.Disabled)
self.setIcon(icon)
@property
def color(self) -> str:
return self._color.name()
def open_dialog(self):
color = QtWidgets.QColorDialog.getColor()
if color.isValid():
self._color = color
self.update_icon()
class FontDialog(QtWidgets.QFontDialog):
def __init__(self, *args):
super(FontDialog, self).__init__(*args)
class FontEdit(QtWidgets.QPushButton): # pragma: no cover
def __init__(self, parent=None):
super(FontEdit, self).__init__(parent=parent)
self.font = None
self.clicked.connect(self.open_dialog)
self.setFocusPolicy(QtCore.Qt.FocusPolicy.NoFocus)
def set_font(self, font: QtGui.QFont):
self.font = font
self.update_icon()
def update_icon(self):
self.setFont(self.font)
self.setText(self.font.key().split(",", maxsplit=1)[0])
def open_dialog(self):
ok, font = FontDialog.getFont(self.font, self)
if ok:
self.font = font
self.update_icon()
|
Anchor-annotator
|
/Anchor_annotator-0.0.9.tar.gz/Anchor_annotator-0.0.9/anchor/widgets.py
|
widgets.py
|
from __future__ import annotations
import os
import re
import typing
from threading import Lock
from typing import Any, Optional, Union
import numpy as np
import pynini.lib.rewrite
import sqlalchemy
import yaml
from dataclassy import dataclass
from montreal_forced_aligner.corpus.acoustic_corpus import (
AcousticCorpus,
AcousticCorpusWithPronunciations,
)
from montreal_forced_aligner.data import PhoneType
from montreal_forced_aligner.db import File, Phone, Speaker, Utterance
from montreal_forced_aligner.g2p.generator import PyniniValidator
from montreal_forced_aligner.models import (
AcousticModel,
G2PModel,
IvectorExtractorModel,
LanguageModel,
)
from montreal_forced_aligner.utils import mfa_open
from PySide6 import QtCore
from sqlalchemy.orm import joinedload, scoped_session
from anchor import undo
from anchor.settings import AnchorSettings
# noinspection PyUnresolvedReferences
@dataclass(slots=True)
class TextFilterQuery:
text: str
regex: bool = False
word: bool = False
case_sensitive: bool = False
@property
def search_text(self):
if not self.case_sensitive:
return self.text.lower()
return self.text
def generate_expression(self, posix=False):
text = self.text
if not self.case_sensitive:
text = text.lower()
if not text:
return text
if not self.regex:
text = re.escape(text)
word_break_set = r"\b"
if posix:
word_break_set = r"\y"
text = text.replace(r"\b", word_break_set)
if self.word:
if not text.startswith(word_break_set):
text = word_break_set + text
if not text.endswith(word_break_set):
text += word_break_set
if self.regex or self.word:
if not self.case_sensitive:
text = "(?i)" + text
return text
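# The escaping and anchoring behaviour of generate_expression() can be
# mirrored by a standalone function (a sketch re-implemented here for
# illustration only; `build_expression` is not part of this module):
#
# ```python
import re

def build_expression(text, regex=False, word=False, case_sensitive=False, posix=False):
    """Sketch mirroring TextFilterQuery.generate_expression."""
    if not case_sensitive:
        text = text.lower()
    if not text:
        return text
    if not regex:
        text = re.escape(text)
    # PostgreSQL regular expressions use \y for word boundaries instead of \b
    boundary = r"\y" if posix else r"\b"
    text = text.replace(r"\b", boundary)
    if word:
        if not text.startswith(boundary):
            text = boundary + text
        if not text.endswith(boundary):
            text += boundary
    if regex or word:
        if not case_sensitive:
            text = "(?i)" + text
    return text

# A plain whole-word search is escaped, anchored, and made case-insensitive:
print(build_expression("cat", word=True))              # (?i)\bcat\b
print(build_expression("cat", word=True, posix=True))  # (?i)\ycat\y
# ```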
class TableModel(QtCore.QAbstractTableModel):
runFunction = QtCore.Signal(object, object, object) # Function plus finished processor
resultCountChanged = QtCore.Signal(int)
newResults = QtCore.Signal()
def __init__(self, header_data, parent=None):
super(TableModel, self).__init__(parent)
self._header_data = header_data
self._data = []
self.result_count = None
self.sort_index = None
self.sort_order = None
self.current_offset = 0
self.limit = 1
self.text_filter = None
def set_text_filter(self, text_filter: TextFilterQuery):
if text_filter != self.text_filter:
self.current_offset = 0
self.text_filter = text_filter
self.update_data()
self.update_result_count()
def set_limit(self, limit: int):
self.limit = limit
def set_offset(self, offset):
self.current_offset = offset
self.update_data()
self.update_result_count()
def update_sort(self, column, order):
self.sort_index = column
self.sort_order = order
self.update_data()
self.update_result_count()
def query_count(self, **kwargs):
pass
def query_data(self, **kwargs):
pass
def finalize_result_count(self, result_count=None):
if isinstance(result_count, int):
self.result_count = result_count
self.resultCountChanged.emit(self.result_count)
def update_result_count(self):
self.result_count = None
self.runFunction.emit(self.query_count, self.finalize_result_count, [])
def update_data(self):
self.runFunction.emit(self.query_data, self.finish_update_data, [])
def finish_update_data(self, *args, **kwargs):
self.layoutAboutToBeChanged.emit()
self._data = []
self.layoutChanged.emit()
def headerData(self, index, orientation, role):
if role == QtCore.Qt.ItemDataRole.DisplayRole:
return self._header_data[index]
def data(self, index, role=None):
if role == QtCore.Qt.ItemDataRole.DisplayRole:
return self._data[index.row()][index.column()]
def rowCount(self, parent=None):
return len(self._data)
def columnCount(self, parent=None):
return len(self._header_data)
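# The offset/limit fields on TableModel drive simple page arithmetic; a
# hypothetical helper (not part of this module) showing how a page count and
# the offset passed to set_offset() would be derived:
#
# ```python
import math

def page_count(result_count, limit):
    """Number of pages needed to show result_count rows, limit rows per page."""
    return max(1, math.ceil(result_count / limit))

def page_offset(page, limit):
    """Row offset for a zero-indexed page, as passed to set_offset()."""
    return page * limit

print(page_count(250, 100))  # 3
print(page_offset(2, 100))   # 200
# ```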
class CorpusSelectionModel(QtCore.QItemSelectionModel):
fileChanged = QtCore.Signal()
channelChanged = QtCore.Signal()
resetView = QtCore.Signal()
fileAboutToChange = QtCore.Signal()
viewChanged = QtCore.Signal(object, object)
selectionAudioChanged = QtCore.Signal()
currentTimeChanged = QtCore.Signal(object)
def __init__(self, *args, **kwargs):
super(CorpusSelectionModel, self).__init__(*args, **kwargs)
self.min_time = 0
self.max_time = 10
self.selected_min_time = None
self.selected_max_time = None
self.current_file: Optional[File] = None
self.x = None
self.y = None
self.current_utterance_id = None
self.selected_channel = 0
# self.viewChanged.connect(self.update_selected_waveform)
# self.fileChanged.connect(self.update_selected_waveform)
self.currentRowChanged.connect(self.switch_utterance)
# self.selectionChanged.connect(self.update_selection_audio)
# self.selectionChanged.connect(self.update_selection_audio)
# self.model().changeCommandFired.connect(self.expire_current)
self.model().layoutChanged.connect(self.check_selection)
self.model().unlockCorpus.connect(self.fileChanged.emit)
self.model().selectionRequested.connect(self.update_select_rows)
def check_selection(self):
if self.currentIndex().row() == -1 and self.model().rowCount() > 0:
self.update_select_rows([0])
elif self.model().rowCount() == 0:
self.clearSelection()
def set_current_channel(self, channel):
self.selected_channel = channel
self.channelChanged.emit()
def clearSelection(self) -> None:
self.fileAboutToChange.emit()
self.current_file = None
self.current_utterance_id = None
self.min_time = None
self.max_time = None
self.selected_min_time = None
self.selected_max_time = None
super(CorpusSelectionModel, self).clearCurrentIndex()
super(CorpusSelectionModel, self).clearSelection()
self.fileChanged.emit()
    def update_selected_waveform(self, *args):
if self.min_time is None or self.current_file is None:
self.x = None
self.y = None
else:
self.x, self.y = self.current_file.sound_file.normalized_waveform(
self.min_time, self.max_time
)
def get_selected_wave_form(self):
if self.y is None:
return None, None
if len(self.y.shape) > 1 and self.y.shape[0] == 2:
return self.x, self.y[self.selected_channel, :]
return self.x, self.y
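# get_selected_wave_form() picks one channel out of a stereo waveform; the
# indexing can be sketched with plain NumPy (array shapes are assumptions for
# illustration, not taken from the module):
#
# ```python
import numpy as np

x = np.linspace(0.0, 1.0, 8)             # time axis
y = np.stack([np.ones(8), np.zeros(8)])  # stereo signal, shape (2, 8)

def select_channel(y, channel):
    # Mirrors the branch above: 2-D (2, n) arrays are indexed by channel,
    # mono arrays are returned unchanged.
    if y.ndim > 1 and y.shape[0] == 2:
        return y[channel, :]
    return y

print(select_channel(y, 1).shape)  # (8,)
# ```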
def update_select_rows(self, rows: list[int]):
super(CorpusSelectionModel, self).clearCurrentIndex()
super(CorpusSelectionModel, self).clearSelection()
if not rows:
return
for r in rows:
self.setCurrentIndex(
self.model().index(r, 0),
QtCore.QItemSelectionModel.SelectionFlag.SelectCurrent
| QtCore.QItemSelectionModel.SelectionFlag.Rows,
)
def update_select(self, utterance_id: int, deselect=False, reset=False, focus=False):
if reset and [x.id for x in self.selectedUtterances()] == [utterance_id]:
return
flags = QtCore.QItemSelectionModel.SelectionFlag.Rows
if reset:
flags |= QtCore.QItemSelectionModel.SelectionFlag.ClearAndSelect
elif deselect:
flags |= QtCore.QItemSelectionModel.SelectionFlag.Deselect
else:
flags |= QtCore.QItemSelectionModel.SelectionFlag.Select
if utterance_id not in self.model().reversed_indices:
return
row = self.model().reversed_indices[utterance_id]
if focus:
flags |= QtCore.QItemSelectionModel.SelectionFlag.Current
if row == self.currentIndex().row():
self.update_view_times(force_update=True)
self.select(self.model().index(row, 0), flags)
def select_audio(self, begin, end):
if end is not None and end - begin < 0.025:
end = None
self.selected_min_time = begin
self.selected_max_time = end
self.selectionAudioChanged.emit()
def request_start_time(self, start_time):
if start_time >= self.max_time:
return
if start_time < self.min_time:
return
self.selected_min_time = start_time
self.selected_max_time = None
self.selectionAudioChanged.emit()
def visible_utts(self) -> typing.List[Utterance]:
file_utts = []
if not self.current_file:
return file_utts
if self.current_file.num_utterances > 1:
for u in sorted(self.current_file.utterances, key=lambda x: x.begin):
if u.begin >= self.max_time:
break
if u.end <= self.min_time:
continue
file_utts.append(u)
else:
file_utts.extend(self.current_file.utterances)
return file_utts
def currentUtterance(self) -> Optional[Utterance]:
utts = self.selectedUtterances()
if not utts:
return
return utts[-1]
def selectedUtterances(self):
utts = []
m = self.model()
current_utterance = m.utteranceAt(self.currentIndex())
for index in self.selectedRows(1):
utt = m.utteranceAt(index)
if utt is None:
continue
if current_utterance is None:
current_utterance = utt
if utt.file_id != current_utterance.file_id:
continue
utts.append(utt)
return utts
def currentText(self):
index = self.currentIndex()
if not index:
return
m = self.model()
text = m.data(m.index(index.row(), m.text_column), QtCore.Qt.ItemDataRole.DisplayRole)
return text
def zoom(self, factor, mid_point=None):
if factor == 0:
return
cur_duration = self.max_time - self.min_time
if mid_point is None:
mid_point = self.min_time + (cur_duration / 2)
new_duration = cur_duration / factor
new_begin = mid_point - (mid_point - self.min_time) / factor
new_begin = max(new_begin, 0)
new_end = min(new_begin + new_duration, self.current_file.duration)
if new_end - new_begin <= 0.025:
return
self.set_view_times(new_begin, new_end)
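# The window arithmetic in zoom() can be isolated as a pure function (a
# sketch; the clamping to [0, duration] mirrors the method above):
#
# ```python
def zoom_window(min_time, max_time, factor, duration, mid_point=None):
    """Return the new (begin, end) view window after zooming by `factor`.

    factor > 1 zooms in around mid_point, factor < 1 zooms out; the result
    is clamped to [0, duration], mirroring CorpusSelectionModel.zoom().
    """
    cur_duration = max_time - min_time
    if mid_point is None:
        mid_point = min_time + cur_duration / 2
    new_duration = cur_duration / factor
    new_begin = mid_point - (mid_point - min_time) / factor
    new_begin = max(new_begin, 0)
    new_end = min(new_begin + new_duration, duration)
    return new_begin, new_end

# Zooming in by 2x on a 0-10 s window halves it around the midpoint:
print(zoom_window(0.0, 10.0, 2.0, duration=30.0))  # (2.5, 7.5)
# ```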
def pan(self, factor):
if factor < 1:
factor = 1 - factor
right = True
else:
right = False
factor = factor - 1
if right and self.max_time == self.current_file.duration:
return
if not right and self.min_time == 0:
return
cur_duration = self.max_time - self.min_time
shift = factor * cur_duration
if right:
new_begin = self.min_time + shift
new_end = self.max_time + shift
else:
new_begin = self.min_time - shift
new_end = self.max_time - shift
if new_begin < 0:
new_end = new_end + abs(new_begin)
new_begin = 0
if new_end > self.current_file.duration:
new_begin -= self.current_file.duration - new_end
new_end = self.current_file.duration
self.set_view_times(new_begin, new_end)
def zoom_in(self):
if self.current_file is None:
return
self.zoom(1.5)
def zoom_out(self):
if self.current_file is None:
return
self.zoom(0.5)
def zoom_to_selection(self):
if self.selected_min_time is None or self.selected_max_time is None:
rows = self.selectedRows(1)
if not rows:
return
begin = None
end = None
for r in rows:
u = self.model().utteranceAt(r)
if u is None:
continue
if u.file_id != self.current_file.id:
continue
if begin is None or begin > u.begin:
begin = u.begin
if end is None or end < u.end:
end = u.end
self.set_view_times(begin, end)
else:
self.set_view_times(self.selected_min_time, self.selected_max_time)
def update_from_slider(self, value):
if not self.max_time:
return
cur_window = self.max_time - self.min_time
self.set_view_times(value, value + cur_window)
def update_selection_audio(self):
begins = self.selectedRows(self.model().begin_column)
ends = self.selectedRows(self.model().end_column)
begin = None
end = None
if len(begins) > 0:
for i, b in enumerate(begins):
b = self.model().data(b, QtCore.Qt.ItemDataRole.DisplayRole)
e = self.model().data(ends[i], QtCore.Qt.ItemDataRole.DisplayRole)
if begin is None or begin > b:
begin = b
if end is None or end < e:
end = e
if self.current_file is None or begin > self.current_file.duration:
begin = None
end = None
elif end > self.current_file.duration:
end = self.current_file.duration
self.selected_min_time = begin
self.selected_max_time = end
self.selectionAudioChanged.emit()
def switch_utterance(self, new_index, old_index):
if not isinstance(new_index, QtCore.QModelIndex):
row = 0
else:
row = new_index.row()
utt = self.model().utteranceAt(row)
if utt is None:
return
if utt.id == self.current_utterance_id:
return
self.current_utterance_id = utt.id
self.set_current_file(
utt.file_id, utt.begin, utt.end, channel=utt.channel, force_update=True
)
def update_view_times(self, *args, force_update=False):
utts = self.selectedUtterances()
if len(utts) == 0:
self.resetView.emit()
return
if len(utts) == 1:
force_update = True
begin = utts[0].begin
f_id = utts[0].file_id
end_ind = -1
        while True:
            if utts[end_ind].file_id == f_id:
                end = utts[end_ind].end
                break
            end_ind -= 1
self.set_current_file(f_id, begin, end, channel=utts[0].channel, force_update=force_update)
self.selected_min_time = self.min_time
def model(self) -> CorpusModel:
return super(CorpusSelectionModel, self).model()
def checkSelected(self, utterance: Utterance):
m = self.model()
for index in self.selectedRows(1):
if utterance.id == m._indices[index.row()]:
return True
return False
def set_current_file(self, file_id, begin=None, end=None, channel=None, force_update=False):
if self.current_file is None or self.current_file.id != file_id:
self.selected_min_time = None
self.selected_max_time = None
self.fileAboutToChange.emit()
self.selected_channel = 0 if channel is None else channel
self.current_file = (
self.model().session.query(File).options(joinedload(File.sound_file)).get(file_id)
)
self.min_time = begin
self.max_time = end
self.fileChanged.emit()
elif (
self.current_file is not None
and begin is not None
and end is not None
and force_update
):
self.selected_channel = channel
self.set_view_times(begin, end)
def set_view_times(self, begin, end):
begin = max(begin, 0)
end = min(end, self.current_file.duration)
if (begin, end) == (self.min_time, self.max_time):
return
self.min_time = begin
self.max_time = end
self.selected_min_time = self.min_time
if self.selected_max_time is not None and self.selected_max_time > self.max_time:
self.selected_max_time = None
self.viewChanged.emit(self.min_time, self.max_time)
def focusUtterance(self, index):
m = self.model()
u = m.utteranceAt(index)
if u is None:
self.min_time = 0
self.max_time = 1
            self.fileAboutToChange.emit()
self.current_file = None
self.fileChanged.emit()
return
self.current_file = u.file
begin = u.begin
end = u.end
padding = 1
self.set_view_times(begin - padding, end + padding)
self.selectionAudioChanged.emit()
class OovModel(TableModel):
def __init__(self, parent=None):
super().__init__(["OOV word", "Count"], parent=parent)
self.settings = AnchorSettings()
self.font = self.settings.font
self.corpus_model: Optional[CorpusModel] = None
self.sort_index = None
self.sort_order = None
self.text_filter = None
self.current_offset = 0
self.set_limit(self.settings.value(self.settings.RESULTS_PER_PAGE))
self._data = []
self.indices = []
def set_corpus_model(self, corpus_model: CorpusModel):
self.corpus_model = corpus_model
self.corpus_model.corpusLoading.connect(self.refresh)
self.corpus_model.dictionaryChanged.connect(self.refresh)
def refresh(self):
self.update_result_count()
self.update_data()
def finish_update_data(self, result, *args, **kwargs):
if result is None:
return
self.layoutAboutToBeChanged.emit()
self._data, self.indices = result
self.layoutChanged.emit()
self.newResults.emit()
@property
def query_kwargs(self) -> typing.Dict[str, typing.Any]:
kwargs = {
"text_filter": self.text_filter,
"limit": self.limit,
"current_offset": self.current_offset,
}
if self.sort_index is not None:
kwargs["sort_index"] = self.sort_index
kwargs["sort_desc"] = self.sort_order == QtCore.Qt.SortOrder.DescendingOrder
return kwargs
@property
def count_kwargs(self) -> typing.Dict[str, typing.Any]:
kwargs = self.query_kwargs
kwargs["count"] = True
return kwargs
def update_result_count(self):
self.runFunction.emit(
"Counting OOV results", self.finalize_result_count, [self.count_kwargs]
)
def update_data(self):
self.runFunction.emit("Querying OOVs", self.finish_update_data, [self.query_kwargs])
class DictionaryTableModel(TableModel):
dictionariesRefreshed = QtCore.Signal(object)
wordCountsRefreshed = QtCore.Signal()
requestLookup = QtCore.Signal(object)
def __init__(self, parent=None):
super().__init__(["Word", "Count", "Pronunciation"], parent=parent)
self.settings = AnchorSettings()
self.font = self.settings.font
self.current_dictionary = None
self.corpus_model: Optional[CorpusModel] = None
self.sort_index = None
self.sort_order = None
self.text_filter = None
self.filter_unused = False
self.current_offset = 0
self.current_dictionary_id = None
self._data = []
self.word_indices = []
self.pron_indices = []
self.g2p_generator: typing.Optional[PyniniValidator] = None
self.word_sets = {}
self.speaker_mapping = {}
self.phones = []
self.reference_phone_set = set()
self.custom_mapping = {}
def set_custom_mapping(self, path):
with mfa_open(path, "r") as f:
self.custom_mapping = {k: v for k, v in yaml.safe_load(f).items() if k in self.phones}
for v in self.custom_mapping.values():
self.reference_phone_set.update(v)
def check_word(self, word, speaker_id) -> bool:
try:
dictionary_id = self.speaker_mapping[speaker_id]
except KeyError:
return True
if dictionary_id is not None and self.word_sets[dictionary_id]:
return word.lower() in self.word_sets[dictionary_id]
return True
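# check_word() treats a word as valid whenever the speaker has no dictionary
# or the dictionary has no word set; a standalone sketch of that lookup (the
# data structures here are illustrative stand-ins for the model's fields):
#
# ```python
def word_in_dictionary(word, speaker_id, speaker_mapping, word_sets):
    """Sketch of DictionaryTableModel.check_word: unknown speakers and
    empty word sets are treated permissively."""
    try:
        dictionary_id = speaker_mapping[speaker_id]
    except KeyError:
        return True
    if dictionary_id is not None and word_sets.get(dictionary_id):
        return word.lower() in word_sets[dictionary_id]
    return True

word_sets = {1: {"the", "cat"}}
speaker_mapping = {10: 1, 11: None}
print(word_in_dictionary("Cat", 10, speaker_mapping, word_sets))  # True
print(word_in_dictionary("dog", 10, speaker_mapping, word_sets))  # False
print(word_in_dictionary("dog", 99, speaker_mapping, word_sets))  # True
# ```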
def lookup_word(self, word: str) -> None:
self.requestLookup.emit(word)
def set_g2p_generator(self, generator: PyniniValidator) -> None:
self.g2p_generator = generator
def update_current_index(self, dict_id) -> None:
if self.current_dictionary_id != dict_id:
self.current_dictionary_id = dict_id
self.update_result_count()
self.update_data()
def set_corpus_model(self, corpus_model: CorpusModel) -> None:
self.corpus_model = corpus_model
self.corpus_model.corpusLoading.connect(self.setup)
def setup(self) -> None:
self.refresh_dictionaries()
phones = [
x
for x, in self.corpus_model.session.query(Phone.phone).filter(
Phone.phone_type == PhoneType.non_silence
)
]
if self.corpus_model.corpus.position_dependent_phones:
phones = sorted(set(x.rsplit("_", maxsplit=1)[0] for x in phones))
self.phones = phones
def flags(
self, index: Union[QtCore.QModelIndex, QtCore.QPersistentModelIndex]
) -> QtCore.Qt.ItemFlag:
if not index.isValid():
return QtCore.Qt.ItemFlag.ItemIsEnabled
flags = super().flags(index)
if index.column() in [0, 2]:
flags |= QtCore.Qt.ItemFlag.ItemIsEditable
return flags
def setData(
self,
index: Union[QtCore.QModelIndex, QtCore.QPersistentModelIndex],
value: Any,
role: int = ...,
) -> bool:
if index.isValid() and role == QtCore.Qt.ItemDataRole.EditRole:
if index.column() == 0:
self.corpus_model.addCommand.emit(
undo.UpdateWordCommand(
self.word_indices[index.row()],
self._data[index.row()][index.column()],
value,
self,
)
)
else:
self.corpus_model.addCommand.emit(
undo.UpdatePronunciationCommand(
self.pron_indices[index.row()],
self._data[index.row()][index.column()],
value,
self,
)
)
return True
return False
def add_word(self, word, word_id):
self.requestLookup.emit(word)
self.add_pronunciation(word, word_id)
def add_pronunciation(
self,
word: str,
word_id: int = None,
pronunciation: str = None,
):
if pronunciation is None:
if self.g2p_generator is None:
pronunciation = ""
else:
try:
existing_pronunciations = set()
for r in range(self.rowCount()):
if self.word_indices[r] != word_id:
continue
existing_pronunciations.add(self._data[r][2])
candidates = self.g2p_generator.rewriter(word)
for c in candidates:
if c in existing_pronunciations:
continue
pronunciation = c
break
else:
pronunciation = "spn"
except pynini.lib.rewrite.Error:
pronunciation = "spn"
self.corpus_model.addCommand.emit(
undo.AddPronunciationCommand(word, pronunciation, self, word_id=word_id)
)
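# add_pronunciation() takes the first G2P candidate not already listed for the
# word and falls back to the OOV phone "spn"; the selection logic in isolation
# (with a plain list standing in for the external PyniniValidator rewriter):
#
# ```python
def pick_pronunciation(candidates, existing):
    """First candidate not already listed for the word, else the OOV phone."""
    for c in candidates:
        if c in existing:
            continue
        return c
    return "spn"

existing = {"K AE T"}
print(pick_pronunciation(["K AE T", "K AA T"], existing))  # K AA T
print(pick_pronunciation(["K AE T"], existing))            # spn
# ```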
def delete_words(self, word_ids: typing.List[int]):
self.corpus_model.addCommand.emit(undo.DeleteWordCommand(word_ids, self))
def delete_pronunciations(self, pronunciation_ids: typing.List[int]):
self.corpus_model.addCommand.emit(undo.DeletePronunciationCommand(pronunciation_ids, self))
def data(self, index, role):
if not index.isValid() or index.row() > len(self._data) - 1:
return
data = self._data[index.row()][index.column()]
if role == QtCore.Qt.ItemDataRole.DisplayRole or role == QtCore.Qt.ItemDataRole.EditRole:
return data
def finish_refresh_word_counts(self):
self.corpus_model.session.expire_all()
self.update_result_count()
self.update_data()
self.wordCountsRefreshed.emit()
def refresh(self):
self.update_result_count()
self.update_data()
def finish_update_data(self, result, *args, **kwargs):
if result is None:
return
self.layoutAboutToBeChanged.emit()
self._data, self.word_indices, self.pron_indices = result
self.layoutChanged.emit()
self.newResults.emit()
def finish_update_dictionaries(self, result):
self.dictionaries, self.word_sets, self.speaker_mapping = result
self.dictionariesRefreshed.emit(self.dictionaries)
@property
def query_kwargs(self) -> typing.Dict[str, typing.Any]:
kwargs = {
"dictionary_id": self.current_dictionary_id,
"text_filter": self.text_filter,
"limit": self.limit,
"current_offset": self.current_offset,
"filter_unused": self.filter_unused,
}
if self.sort_index is not None:
kwargs["sort_index"] = self.sort_index
kwargs["sort_desc"] = self.sort_order == QtCore.Qt.SortOrder.DescendingOrder
return kwargs
@property
def count_kwargs(self) -> typing.Dict[str, typing.Any]:
kwargs = self.query_kwargs
kwargs["count"] = True
return kwargs
def refresh_dictionaries(self):
self.runFunction.emit("Loading dictionaries", self.finish_update_dictionaries, [])
def update_result_count(self):
self.runFunction.emit(
"Counting dictionary results", self.finalize_result_count, [self.count_kwargs]
)
def update_word_counts(self):
self.runFunction.emit(
"Calculating OOVs",
self.finish_refresh_word_counts,
[{"dictionary_id": self.current_dictionary_id}],
)
def update_data(self):
self.runFunction.emit("Querying dictionary", self.finish_update_data, [self.query_kwargs])
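# The for/else fallback in DictionaryTableModel.add_pronunciation above can be
# sketched in isolation. This is an illustrative helper only (not part of the
# model API): `candidates` stands in for the g2p rewriter's output, and "spn"
# is the same spoken-noise fallback the method uses.
def _sketch_pick_pronunciation(candidates, existing, fallback="spn"):
    """Return the first candidate not already in `existing`, else `fallback`."""
    for candidate in candidates:
        if candidate in existing:
            continue
        pronunciation = candidate
        break
    else:
        # loop exhausted without a break: every candidate was already known
        pronunciation = fallback
    return pronunciation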
class SpeakerModel(TableModel):
clustered = QtCore.Signal()
mdsFinished = QtCore.Signal()
speakersChanged = QtCore.Signal()
mdsAboutToChange = QtCore.Signal()
def __init__(self, parent=None):
super().__init__(
["Speaker", "Utterances", "Dictionary", "Ivector distance", "View"], parent=parent
)
self.settings = AnchorSettings()
self.speaker_count = None
self.text_filter = None
self.sort_index = 1
self.sort_order = QtCore.Qt.SortOrder.DescendingOrder
self.all_speakers = []
self.corpus_model: Optional[CorpusModel] = None
self.current_speaker = None
self.num_clusters = None
self.mds = None
self.cluster_labels = None
self.ivectors = None
self.speaker_distances = None
self.utterance_ids = None
self.cluster_kwargs = {}
self.manifold_kwargs = {}
def indices_updated(self, utterance_ids, speaker_id):
if speaker_id != self.current_speaker:
return
        indices = np.where(np.isin(self.utterance_ids, utterance_ids))[0]
self.cluster_labels = np.delete(self.cluster_labels, indices, axis=0)
self.utterance_ids = np.delete(self.utterance_ids, indices, axis=0)
self.mds = np.delete(self.mds, indices, axis=0)
self.ivectors = np.delete(self.ivectors, indices, axis=0)
if self.speaker_distances is not None:
self.speaker_distances = np.delete(self.speaker_distances, indices, axis=0)
self.speakersChanged.emit()
def change_speaker(self, utterance_ids, old_speaker_id, new_speaker_id):
self.corpus_model.addCommand.emit(
undo.ChangeSpeakerCommand(utterance_ids, old_speaker_id, new_speaker_id, self)
)
def set_speaker_filter(self, text_filter: TextFilterQuery):
if text_filter != self.text_filter:
self.current_offset = 0
self.text_filter = text_filter
self.update_data()
self.update_result_count()
def setData(
self,
index: Union[QtCore.QModelIndex, QtCore.QPersistentModelIndex],
value: Any,
        role: int = QtCore.Qt.ItemDataRole.EditRole,
) -> bool:
if index.isValid() and role == QtCore.Qt.ItemDataRole.EditRole:
if index.column() == 0:
self.corpus_model.addCommand.emit(
undo.UpdateSpeakerCommand(
self._indices[index.row()],
self._data[index.row()][index.column()],
value,
self,
)
)
return True
return False
def data(self, index, role=None):
if index.column() > 3:
return None
if role == QtCore.Qt.ItemDataRole.DisplayRole:
return self._data[index.row()][index.column()]
return super().data(index, role)
def speakerAt(self, row: int):
return self._indices[row]
def set_corpus_model(self, corpus_model: CorpusModel):
self.corpus_model = corpus_model
self.corpus_model.corpusLoading.connect(self.update_data)
@property
def query_kwargs(self) -> typing.Dict[str, typing.Any]:
kwargs = {
"limit": self.limit,
"current_offset": self.current_offset,
"text_filter": self.text_filter,
}
if self.sort_index is not None:
kwargs["sort_index"] = self.sort_index
kwargs["sort_desc"] = self.sort_order == QtCore.Qt.SortOrder.DescendingOrder
return kwargs
def finish_update_data(self, result, *args, **kwargs):
if result is None:
return
self.layoutAboutToBeChanged.emit()
self._data, self._indices = result
self.layoutChanged.emit()
self.newResults.emit()
def finish_clustering(self, result, *args, **kwargs):
speaker_id, c_labels = result
if speaker_id != self.current_speaker:
return
self.cluster_labels = c_labels
self.num_clusters = np.max(c_labels) + 1
self.clustered.emit()
def finish_mds(self, result, *args, **kwargs):
speaker_id, mds = result
if speaker_id != self.current_speaker:
return
self.mds = mds
self.mdsFinished.emit()
def update_data(self):
self.runFunction.emit("Querying speakers", self.finish_update_data, [self.query_kwargs])
def change_current_speaker(self, speaker_id):
if self.current_speaker == speaker_id:
return
self.mds = None
self.cluster_labels = None
self.current_speaker = speaker_id
self.cluster_speaker_utterances()
self.load_speaker_ivectors()
self.mds_speaker_utterances()
def finish_load_ivectors(self, result, *args, **kwargs):
speaker_id, utterance_ids, ivectors, speaker_distances = result
if speaker_id != self.current_speaker:
return
self.utterance_ids = utterance_ids
self.speaker_distances = speaker_distances
self.ivectors = ivectors
def load_speaker_ivectors(self):
self.ivectors = None
self.runFunction.emit(
"Loading speaker ivectors",
self.finish_load_ivectors,
[
{
"speaker_id": self.current_speaker,
"working_directory": os.path.join(
self.corpus_model.corpus.output_directory, "speaker_diarization"
),
}
],
)
def update_cluster_kwargs(self, kwargs):
if kwargs != self.cluster_kwargs:
self.cluster_kwargs = kwargs
self.cluster_speaker_utterances()
else:
self.clustered.emit()
def update_manifold_kwargs(self, kwargs):
if kwargs != self.manifold_kwargs:
self.manifold_kwargs = kwargs
self.mds_speaker_utterances()
else:
self.mdsFinished.emit()
def cluster_speaker_utterances(self):
if self.corpus_model.corpus is None:
return
kwargs = {
"speaker_id": self.current_speaker,
"working_directory": os.path.join(
self.corpus_model.corpus.output_directory, "speaker_diarization"
),
}
kwargs.update(self.cluster_kwargs)
self.cluster_labels = None
self.num_clusters = None
self.runFunction.emit("Clustering speaker utterances", self.finish_clustering, [kwargs])
def mds_speaker_utterances(self):
if self.corpus_model.corpus is None:
return
kwargs = {
"speaker_id": self.current_speaker,
"working_directory": os.path.join(
self.corpus_model.corpus.output_directory, "speaker_diarization"
),
}
kwargs.update(self.manifold_kwargs)
self.mds = None
self.mdsAboutToChange.emit()
self.runFunction.emit("Generating speaker MDS", self.finish_mds, [kwargs])
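# SpeakerModel.indices_updated above drops the rows for reassigned utterances
# from several parallel arrays at once. The core pattern, sketched with plain
# numpy on toy data (illustrative helper only; the real method applies it to
# cluster labels, MDS coordinates, and distances as well):
def _sketch_drop_utterance_rows(ivectors, utterance_ids, removed_ids):
    import numpy as np

    # rows whose utterance id is in the removed set; [0] unpacks np.where's tuple
    rows = np.where(np.isin(utterance_ids, removed_ids))[0]
    return np.delete(ivectors, rows, axis=0), np.delete(utterance_ids, rows, axis=0)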
class MergeSpeakerModel(TableModel):
mergeAllFinished = QtCore.Signal(object)
def __init__(self, parent=None):
super().__init__(["Speaker", "Suggested speaker", "Distance", "Merge?"], parent=parent)
self.settings = AnchorSettings()
self.speaker_count = None
self._speaker_indices = []
self._suggested_indices = []
self.corpus_model: Optional[CorpusModel] = None
self.set_limit(self.settings.value(self.settings.RESULTS_PER_PAGE))
self.speaker_filter = None
self.threshold = None
self.metric = "cosine"
def data(self, index, role=None):
if index.column() > 2:
return None
if role == QtCore.Qt.ItemDataRole.DisplayRole:
if index.column() == 2:
return float(self._data[index.row()][index.column()])
return self._data[index.row()][index.column()]
return super().data(index, role)
def speakers_at(self, row: int):
return self._speaker_indices[row], self._suggested_indices[row]
def set_threshold(self, threshold: float):
self.threshold = threshold
def set_metric(self, metric: str):
self.metric = metric
def set_speaker_filter(self, speaker_id: typing.Union[int, str, None]):
self.speaker_filter = speaker_id
def merge_all(self):
if not self.corpus_model.corpus.has_any_ivectors():
return
self.runFunction.emit("Merging speakers", self.mergeAllFinished.emit, [self.query_kwargs])
def merge_speakers(self, row: int):
speaker_id, suggested_id = self.speakers_at(row)
speaker_name = self._data[row][0]
suggested_name = self._data[row][1]
self.corpus_model.merge_speakers([suggested_id, speaker_id])
self.layoutAboutToBeChanged.emit()
self._data.pop(row)
self._speaker_indices.pop(row)
self._speaker_indices = [
x if x != speaker_id else suggested_id for x in self._speaker_indices
]
self._suggested_indices.pop(row)
self._suggested_indices = [
x if x != speaker_id else suggested_id for x in self._suggested_indices
]
for d in self._data:
if d[0] == speaker_name:
d[0] = suggested_name
if d[1] == speaker_name:
d[1] = suggested_name
self.layoutChanged.emit()
def set_corpus_model(self, corpus_model: CorpusModel):
self.corpus_model = corpus_model
self.corpus_model.corpusLoading.connect(self.update_data)
def finish_update_data(self, result, *args, **kwargs):
self.layoutAboutToBeChanged.emit()
if result is None:
self._data, self._speaker_indices, self._suggested_indices = [], [], []
else:
self._data, self._speaker_indices, self._suggested_indices = result
self.layoutChanged.emit()
self.newResults.emit()
@property
def query_kwargs(self) -> typing.Dict[str, typing.Any]:
kwargs = {
"limit": self.limit,
"current_offset": self.current_offset,
"speaker_id": self.speaker_filter,
"threshold": self.threshold,
"metric": self.metric,
"working_directory": os.path.join(
self.corpus_model.corpus.output_directory, "speaker_diarization"
),
}
return kwargs
def update_data(self):
if not self.corpus_model.corpus.has_any_ivectors():
return
self.runFunction.emit("Comparing speakers", self.finish_update_data, [self.query_kwargs])
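# MergeSpeakerModel.merge_speakers above rewrites every remaining row so that
# references to the merged-away speaker point at the surviving one. The id
# remapping it applies to _speaker_indices and _suggested_indices is just
# (illustrative helper, not part of the model):
def _sketch_remap_speaker_ids(ids, merged_id, surviving_id):
    return [surviving_id if x == merged_id else x for x in ids]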
class CorpusModel(TableModel):
lockCorpus = QtCore.Signal()
unlockCorpus = QtCore.Signal()
undoRequested = QtCore.Signal()
redoRequested = QtCore.Signal()
playRequested = QtCore.Signal()
corpusLoaded = QtCore.Signal()
corpusLoading = QtCore.Signal()
addCommand = QtCore.Signal(object)
statusUpdate = QtCore.Signal(object)
editableChanged = QtCore.Signal(object)
filesRefreshed = QtCore.Signal(object)
speakersRefreshed = QtCore.Signal(object)
changeCommandFired = QtCore.Signal()
dictionaryChanged = QtCore.Signal()
acousticModelChanged = QtCore.Signal()
ivectorExtractorChanged = QtCore.Signal()
languageModelChanged = QtCore.Signal()
g2pModelChanged = QtCore.Signal()
textFilterChanged = QtCore.Signal()
databaseSynced = QtCore.Signal(bool)
filesSaved = QtCore.Signal()
dictionarySaved = QtCore.Signal()
selectionRequested = QtCore.Signal(object)
requestFileView = QtCore.Signal(object)
utteranceTextUpdated = QtCore.Signal(object, object)
refreshUtteranceText = QtCore.Signal(object, object)
refreshTiers = QtCore.Signal()
def __init__(self, parent=None):
header = [
"OOVs?",
"File",
"Speaker",
"Begin",
"End",
"Duration",
"Text",
"Log-likelihood",
"Speech log-likelihood",
"Phone duration deviation",
"PER",
"Overlap score",
"Transcription",
"WER",
"Ivector distance",
]
        super().__init__(header, parent=parent)
self.oov_column = header.index("OOVs?")
self.file_column = header.index("File")
self.speaker_column = header.index("Speaker")
self.begin_column = header.index("Begin")
self.end_column = header.index("End")
self.duration_column = header.index("Duration")
self.text_column = header.index("Text")
self.ivector_distance_column = header.index("Ivector distance")
self.alignment_header_indices = [
header.index("Log-likelihood"),
header.index("Speech log-likelihood"),
header.index("Phone duration deviation"),
]
self.alignment_evaluation_header_indices = [
header.index("PER"),
header.index("Overlap score"),
]
self.transcription_header_indices = [
header.index("Transcription"),
header.index("WER"),
]
self.diarization_header_indices = [
header.index("Ivector distance"),
]
self.sort_index = None
self.sort_order = None
self.file_filter = None
self.speaker_filter = None
self.text_filter = None
self.oovs_only = False
self.regex = False
self.edit_lock = Lock()
self.dictionary_model: Optional[DictionaryTableModel] = None
self.corpus: Optional[Union[AcousticCorpus, AcousticCorpusWithPronunciations]] = None
self.acoustic_model: Optional[AcousticModel] = None
self.language_model: Optional[LanguageModel] = None
self.ivector_extractor: Optional[IvectorExtractorModel] = None
self.g2p_model: Optional[G2PModel] = None
self.segmented = True
self.engine: typing.Optional[sqlalchemy.engine.Engine] = None
self.reversed_indices = {}
self._indices = []
self._file_indices = []
self._speaker_indices = []
self._data = []
self.unsaved_files = set()
self.files = []
self.speakers = []
self.utterances = None
self.utterance_count = 0
self.speaker_count = 0
self.file_count = 0
self.editable = True
self.data_types = {
"WER": "percent",
"PER": "percent",
}
def set_dictionary_model(self, dictionary_model: DictionaryTableModel):
self.dictionary_model = dictionary_model
@property
def has_dictionary(self):
        return isinstance(self.corpus, AcousticCorpusWithPronunciations)
def update_utterance_table_row(self, utterance_id: int):
if utterance_id not in self.reversed_indices:
return
        utterance = self.session.get(Utterance, utterance_id)
index = self.reversed_indices[utterance_id]
self.layoutAboutToBeChanged.emit()
self._data[index][self.text_column] = utterance.text
self._data[index][self.begin_column] = utterance.begin
self._data[index][self.end_column] = utterance.end
self._data[index][self.duration_column] = utterance.duration
self.layoutChanged.emit()
def add_table_utterances(self, utterances: typing.List[Utterance]):
self.layoutAboutToBeChanged.emit()
rows = []
for utterance in utterances:
row_data = [
utterance.oovs,
utterance.file_name,
utterance.speaker_name,
utterance.begin,
utterance.end,
utterance.duration,
utterance.text,
]
self._data.append(row_data)
self._indices.append(utterance.id)
self._file_indices.append(utterance.file_id)
self._speaker_indices.append(utterance.speaker_id)
self.reversed_indices[utterance.id] = len(self._indices) - 1
rows.append(self.reversed_indices[utterance.id])
self.layoutChanged.emit()
self.selectionRequested.emit(rows)
def delete_table_utterances(self, utterances: typing.List[Utterance]):
self.layoutAboutToBeChanged.emit()
for utterance in utterances:
index = self.reversed_indices.pop(utterance.id)
_ = self._data.pop(index)
_ = self._indices.pop(index)
_ = self._file_indices.pop(index)
_ = self._speaker_indices.pop(index)
self.reversed_indices = {
k: v if v < index else v - 1 for k, v in self.reversed_indices.items()
}
self.layoutChanged.emit()
self.selectionRequested.emit(None)
def split_table_utterances(
self, merged_utterance: Utterance, split_utterances: typing.List[Utterance]
):
try:
index = self.reversed_indices.pop(merged_utterance.id)
except KeyError:
return
self.layoutAboutToBeChanged.emit()
first = split_utterances[0]
row_data = [
first.oovs,
first.file_name,
first.speaker_name,
first.begin,
first.end,
first.duration,
first.text,
]
self._data[index] = row_data
self._indices[index] = first.id
self._file_indices[index] = first.file_id
self._speaker_indices[index] = first.speaker_id
self.reversed_indices[first.id] = index
rows = [index]
for utterance in split_utterances[1:]:
index += 1
rows.append(index)
self.reversed_indices = {
k: v if v < index else v + 1 for k, v in self.reversed_indices.items()
}
row_data = [
utterance.oovs,
utterance.file_name,
utterance.speaker_name,
utterance.begin,
utterance.end,
utterance.duration,
utterance.text,
]
self.reversed_indices[utterance.id] = index
self._data.insert(index, row_data)
self._indices.insert(index, utterance.id)
self._file_indices.insert(index, utterance.file_id)
self._speaker_indices.insert(index, utterance.speaker_id)
self.layoutChanged.emit()
self.selectionRequested.emit(rows)
def merge_table_utterances(
self, merged_utterance: Utterance, split_utterances: typing.List[Utterance]
):
try:
split_utterances = sorted(split_utterances, key=lambda x: self.reversed_indices[x.id])
except KeyError:
return
self.layoutAboutToBeChanged.emit()
row_data = [
merged_utterance.oovs,
merged_utterance.file_name,
merged_utterance.speaker_name,
merged_utterance.begin,
merged_utterance.end,
merged_utterance.duration,
merged_utterance.text,
]
first = split_utterances[0]
index = self.reversed_indices.pop(first.id)
self._data[index] = row_data
self._indices[index] = merged_utterance.id
self._file_indices[index] = merged_utterance.file_id
self._speaker_indices[index] = merged_utterance.speaker_id
self.reversed_indices[merged_utterance.id] = index
rows = [index]
for utterance in split_utterances[1:]:
index = self.reversed_indices.pop(utterance.id)
_ = self._data.pop(index)
_ = self._indices.pop(index)
_ = self._file_indices.pop(index)
_ = self._speaker_indices.pop(index)
self.reversed_indices = {
k: v if v < index else v - 1 for k, v in self.reversed_indices.items()
}
self.layoutChanged.emit()
self.selectionRequested.emit(rows)
def update_sort(self, column, order):
self.sort_index = column
self.sort_order = order
self.update_data()
def lock_edits(self, checked):
if checked:
self.editable = False
self.session.commit()
self.editableChanged.emit(self.editable)
else:
self.editable = True
self.editableChanged.emit(self.editable)
def set_acoustic_model(self, acoustic_model: AcousticModel):
self.acoustic_model = acoustic_model
self.acousticModelChanged.emit()
def set_ivector_extractor(self, ivector_extractor: IvectorExtractorModel):
self.ivector_extractor = ivector_extractor
self.ivectorExtractorChanged.emit()
def set_language_model(self, language_model: LanguageModel):
self.language_model = language_model
self.languageModelChanged.emit()
    def create_utterance(self, file: File, speaker: Optional[Speaker], begin: float, end: float):
        if not self.editable:
            return
        # resolve the speaker before computing the channel, so a None speaker
        # is never looked up in file.speaker_ordering
        if speaker is None:
            speaker = self.corpus.add_speaker("speech", session=self.session)
        channel = 0
        if file.num_channels > 1 and speaker in file.speaker_ordering:
            ind = file.speaker_ordering.index(speaker)
            if ind >= len(file.speaker_ordering) / 2:
                channel = 1
begin = round(begin, 4)
end = round(end, 4)
text = ""
next_pk = self.corpus.get_next_primary_key(Utterance)
new_utt = Utterance(
id=next_pk,
speaker_id=speaker.id,
file_id=file.id,
begin=begin,
end=end,
channel=channel,
text=text,
)
self.addCommand.emit(undo.CreateUtteranceCommand(new_utt, self))
self.unsaved_files.add(file.id)
def set_file_modified(self, file_id: typing.Union[int, typing.List[int]]):
if isinstance(file_id, int):
file_id = [file_id]
self.session.query(File).filter(File.id.in_(file_id)).update({File.modified: True})
self.session.commit()
def update_utterance_text(self, utterance: Utterance, text):
if text != utterance.text:
self.addCommand.emit(undo.UpdateUtteranceTextCommand(utterance, text, self))
self.set_file_modified(utterance.file_id)
def update_utterance_times(
self, utterance: Utterance, begin: Optional[float] = None, end: Optional[float] = None
):
if not self.editable:
return
self.addCommand.emit(undo.UpdateUtteranceTimesCommand(utterance, begin, end, self))
self.set_file_modified(utterance.file_id)
def update_utterance_speaker(self, utterance: Utterance, speaker: Speaker):
if not self.editable:
return
self.addCommand.emit(undo.UpdateUtteranceSpeakerCommand(utterance, speaker, self))
self.set_file_modified(utterance.file_id)
def delete_utterances(self, utterances: list[Utterance]):
if not self.editable:
return
for u in utterances:
self.set_file_modified(u.file_id)
self.addCommand.emit(undo.DeleteUtteranceCommand(utterances, self))
def split_vad_utterance(self, original_utterance_id, replacement_utterance_data):
utt = self.session.get(Utterance, original_utterance_id)
self.requestFileView.emit(utt.file_name)
replacement_utterances = []
next_pk = self.corpus.get_next_primary_key(Utterance)
for sd in replacement_utterance_data.values():
replacement_utterances.append(Utterance(id=next_pk, **sd))
next_pk += 1
splitting_utterances = [[utt, *replacement_utterances]]
self.addCommand.emit(undo.SplitUtteranceCommand(splitting_utterances, self))
        self.set_file_modified([split[0].file_id for split in splitting_utterances])
def split_utterances(self, utterances: list[Utterance]):
if not self.editable:
return
splitting_utterances = []
for utt in utterances:
duration = utt.duration
beg = utt.begin
end = utt.end
first_text = ""
second_text = ""
if utt.text:
t = utt.text.split()
mid_ind = int(len(t) / 2)
first_text = t[:mid_ind]
second_text = t[mid_ind:]
split_time = beg + (duration / 2)
oovs = set()
for w in first_text:
if not self.dictionary_model.check_word(w, utt.speaker_id):
oovs.add(w)
next_pk = self.corpus.get_next_primary_key(Utterance)
first_utt = Utterance(
id=next_pk,
speaker=utt.speaker,
file=utt.file,
begin=beg,
end=split_time,
channel=utt.channel,
text=" ".join(first_text),
oovs=" ".join(oovs),
)
next_pk += 1
oovs = set()
for w in second_text:
if not self.dictionary_model.check_word(w, utt.speaker_id):
oovs.add(w)
second_utt = Utterance(
id=next_pk,
speaker=utt.speaker,
file=utt.file,
begin=split_time,
end=end,
channel=utt.channel,
text=" ".join(second_text),
oovs=" ".join(oovs),
)
splitting_utterances.append([utt, first_utt, second_utt])
self.addCommand.emit(undo.SplitUtteranceCommand(splitting_utterances, self))
        self.set_file_modified([split[0].file_id for split in splitting_utterances])
def merge_speakers(self, speakers: list[int]):
self.addCommand.emit(undo.MergeSpeakersCommand(speakers, self))
def merge_utterances(self, utterances: list[Utterance]):
if not self.editable:
return
        min_begin = float("inf")
max_end = 0
text = ""
speaker = None
file = None
channel = None
for old_utt in sorted(utterances, key=lambda x: x.begin):
if speaker is None:
speaker = old_utt.speaker
if file is None:
file = old_utt.file
if channel is None:
channel = old_utt.channel
if old_utt.begin < min_begin:
min_begin = old_utt.begin
if old_utt.end > max_end:
max_end = old_utt.end
utt_text = old_utt.text
if utt_text == "speech" and text.strip() == "speech":
continue
            text += utt_text + " "
        # strip() rather than slicing off one character, so the text stays
        # intact when the final utterance was skipped above
        text = text.strip()
next_pk = self.corpus.get_next_primary_key(Utterance)
oovs = set()
for w in text.split():
if not self.dictionary_model.check_word(w, speaker.id):
oovs.add(w)
new_utt = Utterance(
id=next_pk,
speaker=speaker,
file=file,
begin=min_begin,
end=max_end,
channel=channel,
text=text,
oovs=" ".join(oovs),
)
self.set_file_modified(file.id)
self.addCommand.emit(undo.MergeUtteranceCommand(utterances, new_utt, self))
def replace_all(self, search_query: TextFilterQuery, replacement: str):
self.addCommand.emit(undo.ReplaceAllCommand(search_query, replacement, self))
    def utteranceAt(self, index) -> Optional[Utterance]:
        if not isinstance(index, int):
            index = index.row()
        if not self._indices or index >= len(self._indices):
            return None
utterance = (
self.session.query(Utterance)
.options(
joinedload(Utterance.file).joinedload(File.sound_file),
joinedload(Utterance.file).subqueryload(File.speakers),
)
.get(self._indices[index])
)
return utterance
def fileAt(self, index) -> int:
if not isinstance(index, int):
index = index.row()
return self._file_indices[index]
def indexForUtterance(self, utterance_id: int, column: int = 1):
return self.createIndex(self.reversed_indices[utterance_id], column)
def rollback_changes(self):
self.unsaved_files = set()
self.session.rollback()
# self.query_session.expire_all()
self.databaseSynced.emit(False)
self.update_data()
def commit_changes(self):
self.session.bulk_update_mappings(
File, [{"id": x, "modified": True} for x in self.unsaved_files]
)
self.unsaved_files = set()
self.session.commit()
self.databaseSynced.emit(True)
def finish_export_files(self):
self.filesSaved.emit()
def export_changes(self):
self.runFunction.emit("Exporting files", self.finish_export_files, [])
def setCorpus(self, corpus: Optional[AcousticCorpus]):
self.corpus = corpus
if corpus is not None:
self.session = scoped_session(self.corpus.session)
self.corpusLoading.emit()
self.refresh_files()
self.refresh_speakers()
self.refresh_utterances()
def search(
self,
text_filter: TextFilterQuery,
file_id: typing.Union[int, str, None],
speaker_id: typing.Union[int, str, None],
oovs=False,
):
self.text_filter = text_filter
self.speaker_filter = speaker_id
self.file_filter = file_id
self.oovs_only = oovs
self.textFilterChanged.emit()
self.refresh_utterances()
@property
def fully_loaded(self):
if not self.files:
return False
if not self.speakers:
return False
return True
def finish_update_files(self, files):
self.files = files
self.filesRefreshed.emit(self.files)
if self.fully_loaded:
self.corpusLoaded.emit()
def finish_update_speakers(self, speakers):
self.speakers = speakers
self.speakersRefreshed.emit(self.speakers)
if self.fully_loaded:
self.corpusLoaded.emit()
def refresh_utterances(self):
self.update_data()
self.update_result_count()
def refresh_files(self):
self.runFunction.emit("Loading files", self.finish_update_files, [])
def refresh_speakers(self):
self.runFunction.emit("Loading speakers", self.finish_update_speakers, [])
def data(self, index, role):
try:
data = self._data[index.row()][index.column()]
except IndexError:
return None
if role == QtCore.Qt.ItemDataRole.DisplayRole:
column_name = self.headerData(
index.column(),
QtCore.Qt.Orientation.Horizontal,
QtCore.Qt.ItemDataRole.DisplayRole,
)
if column_name in self.data_types:
if self.data_types[column_name] == "percent":
if data is None:
if index.column() == self.duration_column:
return (
self._data[index.row()][self.end_column]
- self._data[index.row()][self.begin_column]
)
return None
return f"{data*100:.2f}%"
return data
elif role == QtCore.Qt.ItemDataRole.CheckStateRole and index.column() == 0:
if data:
return QtCore.Qt.CheckState.Checked
else:
return QtCore.Qt.CheckState.Unchecked
def update_texts(self, texts: typing.Dict[int, str]):
for utt_id, row_ind in self.reversed_indices.items():
if utt_id in texts:
self._data[row_ind][self.text_column] = texts[utt_id]
index = self.index(row_ind, self.text_column)
self.dataChanged.emit(index, index, [QtCore.Qt.ItemDataRole.DisplayRole])
self.refreshUtteranceText.emit(utt_id, texts[utt_id])
def finish_update_data(self, result, *args, **kwargs):
if not result:
return
self.layoutAboutToBeChanged.emit()
(
self._data,
self._indices,
self._file_indices,
self._speaker_indices,
self.reversed_indices,
) = result
self.layoutChanged.emit()
self.newResults.emit()
# if len(self._data) > 0:
# self.selectionRequested.emit([0])
@property
def count_kwargs(self) -> typing.Dict[str, typing.Any]:
kwargs = self.query_kwargs
kwargs["count"] = True
return kwargs
@property
def query_kwargs(self) -> typing.Dict[str, typing.Any]:
kwargs = {
"speaker_filter": self.speaker_filter,
"file_filter": self.file_filter,
"text_filter": self.text_filter,
"oovs_only": self.oovs_only,
"limit": self.limit,
"current_offset": self.current_offset,
"has_ivectors": self.corpus.has_any_ivectors(),
}
if self.sort_index is not None:
kwargs["sort_index"] = self.sort_index
kwargs["sort_desc"] = self.sort_order == QtCore.Qt.SortOrder.DescendingOrder
return kwargs
def finalize_result_count(self, result_count):
if not isinstance(result_count, int):
return
self.result_count = result_count
self.resultCountChanged.emit(self.result_count)
def update_data(self):
self.runFunction.emit("Querying utterances", self.finish_update_data, [self.query_kwargs])
def update_result_count(self):
self.runFunction.emit(
"Counting utterance results", self.finalize_result_count, [self.count_kwargs]
)
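# CorpusModel.delete_table_utterances and merge_table_utterances above keep
# reversed_indices (utterance id -> row) consistent after popping a row: every
# row number past the removed one shifts down by one. A standalone sketch of
# that dict rebuild (illustrative only; as in the methods above, the removed
# utterance's own entry is popped before the shift):
def _sketch_shift_rows_after_delete(reversed_indices, removed_row):
    return {k: v if v < removed_row else v - 1 for k, v in reversed_indices.items()}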
# --- end of anchor/models.py (Anchor_annotator 0.0.9) ---
# --- the module below defines the undo/redo command classes used above ---
from __future__ import annotations
import collections
import typing
import psycopg2.errors
import pynini.lib
import sqlalchemy
from montreal_forced_aligner.data import WordType
from montreal_forced_aligner.db import (
File,
Pronunciation,
Speaker,
SpeakerOrdering,
Utterance,
Word,
)
from PySide6 import QtCore, QtGui
from sqlalchemy.orm import make_transient
if typing.TYPE_CHECKING:
from anchor.models import CorpusModel, DictionaryTableModel, SpeakerModel, TextFilterQuery
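# The redo()/undo() methods of the command classes below all share one retry
# pattern: run the edit in a nested transaction and retry from scratch if
# Postgres reports a deadlock. The loop, sketched generically (illustrative
# helper; `fn` stands in for the nested-transaction body):
def _sketch_retry_on_deadlock(fn, deadlock_exc):
    while True:
        try:
            return fn()
        except deadlock_exc:
            # another session held conflicting locks; retry the whole body
            continue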
class CorpusCommand(QtGui.QUndoCommand):
def __init__(self, corpus_model: CorpusModel):
super().__init__()
self.corpus_model = corpus_model
self.resets_tier = False
def _redo(self) -> None:
pass
def _undo(self) -> None:
pass
def update_data(self):
if self.resets_tier:
self.corpus_model.refreshTiers.emit()
def redo(self) -> None:
with self.corpus_model.edit_lock:
while True:
try:
with self.corpus_model.session.begin_nested():
self._redo()
break
except psycopg2.errors.DeadlockDetected:
pass
self.corpus_model.session.commit()
self.update_data()
def undo(self) -> None:
with self.corpus_model.edit_lock:
while True:
try:
with self.corpus_model.session.begin_nested():
self._undo()
break
except psycopg2.errors.DeadlockDetected:
pass
self.corpus_model.session.commit()
self.update_data()
class DictionaryCommand(QtGui.QUndoCommand):
def __init__(self, dictionary_model: DictionaryTableModel):
super().__init__()
self.dictionary_model = dictionary_model
def _redo(self) -> None:
pass
def _undo(self) -> None:
pass
def redo(self) -> None:
with self.dictionary_model.corpus_model.edit_lock:
while True:
try:
with self.dictionary_model.corpus_model.session.begin_nested():
self._redo()
self.dictionary_model.corpus_model.session.flush()
break
except psycopg2.errors.DeadlockDetected:
pass
self.dictionary_model.corpus_model.session.commit()
self.dictionary_model.update_data()
def undo(self) -> None:
with self.dictionary_model.corpus_model.edit_lock:
while True:
try:
with self.dictionary_model.corpus_model.session.begin_nested():
self._undo()
self.dictionary_model.corpus_model.session.flush()
break
except psycopg2.errors.DeadlockDetected:
pass
self.dictionary_model.corpus_model.session.commit()
self.dictionary_model.update_data()
class SpeakerCommand(QtGui.QUndoCommand):
def __init__(self, speaker_model: SpeakerModel):
super().__init__()
self.speaker_model = speaker_model
self.auto_refresh = True
self.resets_tier = False
def _redo(self) -> None:
pass
def _undo(self) -> None:
pass
def update_data(self):
if self.auto_refresh:
self.speaker_model.update_data()
if self.resets_tier:
self.speaker_model.corpus_model.refreshTiers.emit()
def redo(self) -> None:
with self.speaker_model.corpus_model.edit_lock:
while True:
try:
with self.speaker_model.corpus_model.session.begin_nested():
self._redo()
self.speaker_model.corpus_model.session.flush()
break
except psycopg2.errors.DeadlockDetected:
pass
self.speaker_model.corpus_model.session.commit()
self.update_data()
def undo(self) -> None:
with self.speaker_model.corpus_model.edit_lock:
while True:
try:
with self.speaker_model.corpus_model.session.begin_nested():
self._undo()
self.speaker_model.corpus_model.session.flush()
break
except psycopg2.errors.DeadlockDetected:
pass
self.speaker_model.corpus_model.session.commit()
self.update_data()
class DeleteUtteranceCommand(CorpusCommand):
def __init__(self, deleted_utterances: list[Utterance], corpus_model: CorpusModel):
super().__init__(corpus_model)
self.deleted_utterances = deleted_utterances
self.resets_tier = True
self.channels = [
x.channel if x.channel is not None else 0 for x in self.deleted_utterances
]
self.setText(
QtCore.QCoreApplication.translate("DeleteUtteranceCommand", "Delete utterances")
)
def _redo(self) -> None:
for utt in self.deleted_utterances:
self.corpus_model.session.delete(utt)
def _undo(self) -> None:
for i, utt in enumerate(self.deleted_utterances):
make_transient(utt)
for x in utt.phone_intervals:
x.duration = None
make_transient(x)
for x in utt.word_intervals:
make_transient(x)
if utt.channel is None:
utt.channel = self.channels[i]
self.corpus_model.session.add(utt)
def redo(self) -> None:
super().redo()
self.corpus_model.delete_table_utterances(self.deleted_utterances)
self.corpus_model.changeCommandFired.emit()
def undo(self) -> None:
super().undo()
self.corpus_model.add_table_utterances(self.deleted_utterances)
self.corpus_model.changeCommandFired.emit()
class SplitUtteranceCommand(CorpusCommand):
    def __init__(self, split_utterances: list[list[Utterance]], corpus_model: CorpusModel):
super().__init__(corpus_model)
self.split_utterances = split_utterances
self.resets_tier = True
self.channels = [
x[0].channel if x[0].channel is not None else 0 for x in self.split_utterances
]
self.setText(
QtCore.QCoreApplication.translate("SplitUtteranceCommand", "Split utterances")
)
def _redo(self) -> None:
for i, splits in enumerate(self.split_utterances):
old_utt = splits[0]
split_utts = splits[1:]
self.corpus_model.session.delete(old_utt)
for u in split_utts:
if u.id is not None:
make_transient(u)
for x in u.phone_intervals:
x.duration = None
make_transient(x)
for x in u.word_intervals:
make_transient(x)
if u.channel is None:
u.channel = self.channels[i]
u.duration = None
u.kaldi_id = None
self.corpus_model.session.add(u)
def _undo(self) -> None:
for i, splits in enumerate(self.split_utterances):
old_utt = splits[0]
split_utts = splits[1:]
if old_utt.channel is None:
old_utt.channel = self.channels[i]
old_utt.duration = None
old_utt.kaldi_id = None
make_transient(old_utt)
for x in old_utt.phone_intervals:
x.duration = None
make_transient(x)
for x in old_utt.word_intervals:
make_transient(x)
self.corpus_model.session.add(old_utt)
for u in split_utts:
self.corpus_model.session.delete(u)
def redo(self) -> None:
super().redo()
for splits in self.split_utterances:
old_utt = splits[0]
split_utts = splits[1:]
self.corpus_model.split_table_utterances(old_utt, split_utts)
self.corpus_model.changeCommandFired.emit()
def undo(self) -> None:
super().undo()
for splits in self.split_utterances:
old_utt = splits[0]
split_utts = splits[1:]
self.corpus_model.merge_table_utterances(old_utt, split_utts)
self.corpus_model.changeCommandFired.emit()
class MergeUtteranceCommand(CorpusCommand):
def __init__(
self,
unmerged_utterances: list[Utterance],
merged_utterance: Utterance,
corpus_model: CorpusModel,
):
super().__init__(corpus_model)
self.unmerged_utterances = unmerged_utterances
self.merged_utterance = merged_utterance
self.resets_tier = True
self.channel = self.merged_utterance.channel
if self.channel is None:
self.channel = 0
self.setText(
QtCore.QCoreApplication.translate("MergeUtteranceCommand", "Merge utterances")
)
def _redo(self) -> None:
for old_utt in self.unmerged_utterances:
self.corpus_model.session.delete(old_utt)
make_transient(self.merged_utterance)
if self.merged_utterance.channel is None:
self.merged_utterance.channel = self.channel
self.merged_utterance.kaldi_id = None
self.merged_utterance.duration = None
self.corpus_model.session.add(self.merged_utterance)
def _undo(self) -> None:
for old_utt in self.unmerged_utterances:
make_transient(old_utt)
if old_utt.channel is None:
old_utt.channel = self.channel
for x in old_utt.phone_intervals:
x.duration = None
make_transient(x)
for x in old_utt.word_intervals:
make_transient(x)
old_utt.duration = None
old_utt.kaldi_id = None
self.corpus_model.session.add(old_utt)
# self.corpus_model.session.refresh(self.merged_utterance)
self.corpus_model.session.delete(self.merged_utterance)
def redo(self) -> None:
super().redo()
self.corpus_model.merge_table_utterances(self.merged_utterance, self.unmerged_utterances)
self.corpus_model.changeCommandFired.emit()
def undo(self) -> None:
super().undo()
self.corpus_model.split_table_utterances(self.merged_utterance, self.unmerged_utterances)
self.corpus_model.changeCommandFired.emit()
class MergeSpeakersCommand(CorpusCommand):
def __init__(self, speakers: list[int], corpus_model: CorpusModel):
super().__init__(corpus_model)
self.merged_speaker = speakers.pop(0)
self.speakers = speakers
self.resets_tier = True
self.utt_mapping = collections.defaultdict(list)
self.file_mapping = collections.defaultdict(list)
q = self.corpus_model.session.query(
Utterance.id, Utterance.file_id, Utterance.speaker_id
).filter(Utterance.speaker_id.in_(self.speakers))
self.files = []
for utt_id, file_id, speaker_id in q:
self.utt_mapping[speaker_id].append(utt_id)
self.file_mapping[speaker_id].append(file_id)
self.files.append(file_id)
self.deleted_speakers = [
self.corpus_model.session.query(Speaker).get(x) for x in self.speakers
]
self.setText(QtCore.QCoreApplication.translate("MergeSpeakersCommand", "Merge speakers"))
def finish_recalculate(self, *args, **kwargs):
pass
def _redo(self) -> None:
self.corpus_model.session.query(Utterance).filter(
Utterance.speaker_id.in_(self.speakers)
).update({Utterance.speaker_id: self.merged_speaker})
self.corpus_model.session.query(SpeakerOrdering).filter(
SpeakerOrdering.c.speaker_id.in_(self.speakers)
).update({SpeakerOrdering.c.speaker_id: self.merged_speaker})
self.corpus_model.session.query(File).filter(File.id.in_(self.files)).update(
{File.modified: True}
)
self.corpus_model.runFunction.emit(
"Recalculate speaker ivector",
self.finish_recalculate,
[
{
"speaker_id": self.merged_speaker,
}
],
)
for s in self.deleted_speakers:
self.corpus_model.session.delete(s)
def _undo(self) -> None:
for s in self.deleted_speakers:
self.corpus_model.session.merge(s)
for speaker, utts in self.utt_mapping.items():
self.corpus_model.session.query(Utterance).filter(Utterance.id.in_(utts)).update(
{Utterance.speaker_id: speaker}
)
for speaker, files in self.file_mapping.items():
self.corpus_model.session.query(SpeakerOrdering).filter(
SpeakerOrdering.c.file_id.in_(files)
).update({SpeakerOrdering.c.speaker_id: speaker})
self.corpus_model.session.query(File).filter(File.id.in_(self.files)).update(
{File.modified: True}
)
class CreateUtteranceCommand(CorpusCommand):
def __init__(self, new_utterance: Utterance, corpus_model: CorpusModel):
super().__init__(corpus_model)
self.new_utterance = new_utterance
self.resets_tier = True
self.channel = self.new_utterance.channel
if self.channel is None:
self.channel = 0
self.setText(
QtCore.QCoreApplication.translate("CreateUtteranceCommand", "Create utterance")
)
def _redo(self) -> None:
make_transient(self.new_utterance)
if self.new_utterance.channel is None:
self.new_utterance.channel = self.channel
self.corpus_model.session.add(self.new_utterance)
def _undo(self) -> None:
self.corpus_model.session.delete(self.new_utterance)
def update_data(self):
super().update_data()
def redo(self) -> None:
super().redo()
self.corpus_model.add_table_utterances([self.new_utterance])
self.corpus_model.changeCommandFired.emit()
def undo(self) -> None:
super().undo()
self.corpus_model.delete_table_utterances([self.new_utterance])
self.corpus_model.changeCommandFired.emit()
class UpdateUtteranceTimesCommand(CorpusCommand):
def __init__(self, utterance: Utterance, begin: float, end: float, corpus_model: CorpusModel):
super().__init__(corpus_model)
self.utterance_id = utterance.id
self.new_begin = begin
self.old_begin = utterance.begin
self.new_end = end
self.old_end = utterance.end
self.setText(
QtCore.QCoreApplication.translate(
"UpdateUtteranceTimesCommand", "Update utterance times"
)
)
def _redo(self) -> None:
self.corpus_model.session.query(Utterance).filter(
Utterance.id == self.utterance_id
).update({Utterance.begin: self.new_begin, Utterance.end: self.new_end})
def _undo(self) -> None:
self.corpus_model.session.query(Utterance).filter(
Utterance.id == self.utterance_id
).update({Utterance.begin: self.old_begin, Utterance.end: self.old_end})
def update_data(self):
super().update_data()
self.corpus_model.changeCommandFired.emit()
self.corpus_model.update_utterance_table_row(self.utterance_id)
class UpdateUtteranceTextCommand(CorpusCommand):
def __init__(self, utterance: Utterance, new_text: str, corpus_model: CorpusModel):
super().__init__(corpus_model)
self.utterance_id = utterance.id
self.speaker_id = utterance.speaker_id
self.old_text = utterance.text
self.new_text = new_text
self.setText(
QtCore.QCoreApplication.translate(
"UpdateUtteranceTextCommand", "Update utterance text"
)
)
def _redo(self) -> None:
oovs = set()
for w in self.new_text.split():
if not self.corpus_model.dictionary_model.check_word(w, self.speaker_id):
oovs.add(w)
self.corpus_model.session.query(Utterance).filter(
Utterance.id == self.utterance_id
).update(
{
Utterance.text: self.new_text,
Utterance.normalized_text: self.new_text, # FIXME: Update this
Utterance.oovs: " ".join(oovs),
Utterance.ignored: not self.new_text,
}
)
def _undo(self) -> None:
oovs = set()
for w in self.new_text.split():
if not self.corpus_model.dictionary_model.check_word(w, self.speaker_id):
oovs.add(w)
self.corpus_model.session.query(Utterance).filter(
Utterance.id == self.utterance_id
).update(
{
Utterance.text: self.old_text,
Utterance.oovs: " ".join(oovs),
Utterance.ignored: not self.old_text,
}
)
def update_data(self):
super().update_data()
try:
self.corpus_model.update_utterance_table_row(self.utterance_id)
except KeyError:
pass
def id(self) -> int:
return 1
def mergeWith(self, other: UpdateUtteranceTextCommand) -> bool:
if other.id() != self.id() or other.utterance_id != self.utterance_id:
return False
self.new_text = other.new_text
return True
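`UpdateUtteranceTextCommand` uses Qt's command-compression hooks: a shared `id()` marks compatible commands, and `mergeWith()` folds consecutive edits to the same utterance into a single undo step. A toy sketch of the same pattern without Qt (class and function names here are illustrative):

```python
# Toy version of QUndoCommand compression: commands with matching id()
# and target merge, so one undo step covers a whole typing burst.
class TextEditCommand:
    def __init__(self, target, old_text, new_text):
        self.target = target
        self.old_text = old_text
        self.new_text = new_text

    def id(self):
        return 1  # same id => candidates for merging

    def merge_with(self, other):
        if other.id() != self.id() or other.target != self.target:
            return False
        # keep the earliest old_text and the latest new_text
        self.new_text = other.new_text
        return True


undo_stack = []


def push(cmd):
    # mirror QUndoStack.push(): try to merge into the top command first
    if undo_stack and undo_stack[-1].merge_with(cmd):
        return
    undo_stack.append(cmd)


push(TextEditCommand("utt-1", "a", "ab"))
push(TextEditCommand("utt-1", "ab", "abc"))  # merges into the first
push(TextEditCommand("utt-2", "x", "xy"))    # different target: new entry
```

Undoing the merged command restores the text from before the whole burst of keystrokes, which is exactly why `old_text` is kept from the earliest command.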
class ReplaceAllCommand(CorpusCommand):
def __init__(
self, search_query: TextFilterQuery, replacement_string: str, corpus_model: CorpusModel
):
super().__init__(corpus_model)
self.search_query = search_query
self.replacement_string = replacement_string
self.old_texts = {}
self.new_texts = None
self.current_texts = None
self.setText(QtCore.QCoreApplication.translate("ReplaceAllCommand", "Replace all"))
def _redo(self) -> None:
mapping = [{"id": k, "text": v} for k, v in self.new_texts.items()]
self.corpus_model.session.bulk_update_mappings(Utterance, mapping)
self.current_texts = self.new_texts
def _undo(self) -> None:
mapping = [{"id": k, "text": v} for k, v in self.old_texts.items()]
self.corpus_model.session.bulk_update_mappings(Utterance, mapping)
self.current_texts = self.old_texts
def update_data(self):
super().update_data()
self.corpus_model.changeCommandFired.emit()
self.corpus_model.update_texts(self.current_texts)
self.corpus_model.statusUpdate.emit(
f"Replaced {len(self.current_texts)} instances of {self.search_query.generate_expression()}"
)
def finish_replace_all(self, result):
if result is None:
return
search_string, old_texts, new_texts = result
self.old_texts = old_texts
self.new_texts = new_texts
self.current_texts = self.new_texts
self.update_data()
def redo(self) -> None:
if self.new_texts is None:
self.corpus_model.runFunction.emit(
"Replacing query",
self.finish_replace_all,
[self.search_query, self.replacement_string],
)
else:
super().redo()
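`ReplaceAllCommand.redo` computes the replacement asynchronously the first time it runs, then replays the cached text snapshots on every later redo. A compact, synchronous sketch of that cache-then-replay behavior (the names below are hypothetical, not Anchor's API):

```python
# Sketch of ReplaceAllCommand's cache-then-replay behavior: the first
# redo() computes old/new text snapshots; later undo/redo calls just
# swap the cached dicts back in.
class ReplaceAll:
    def __init__(self, pattern, replacement, texts):
        self.pattern = pattern
        self.replacement = replacement
        self.texts = texts  # stands in for the utterance table
        self.old_texts = None
        self.new_texts = None

    def redo(self):
        if self.new_texts is None:
            # first invocation: compute and cache both snapshots
            self.old_texts = dict(self.texts)
            self.new_texts = {
                utt_id: text.replace(self.pattern, self.replacement)
                for utt_id, text in self.texts.items()
            }
        self.texts.update(self.new_texts)

    def undo(self):
        self.texts.update(self.old_texts)


texts = {1: "teh cat", 2: "teh dog"}
cmd = ReplaceAll("teh", "the", texts)
cmd.redo()
cmd.undo()
cmd.redo()  # replayed from cache, no recomputation
```

Caching both snapshots is what lets the real command run the expensive search once in a worker and still support cheap, repeatable undo/redo afterwards.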
class ChangeSpeakerCommand(SpeakerCommand):
def __init__(
self,
utterance_ids: typing.List[int],
old_speaker_id: int,
new_speaker_id: int,
speaker_model: SpeakerModel,
):
super().__init__(speaker_model)
self.utterance_ids = utterance_ids
self.old_speaker_id = old_speaker_id
self.new_speaker_id = new_speaker_id
self.auto_refresh = False
self.setText(QtCore.QCoreApplication.translate("ChangeSpeakerCommand", "Change speakers"))
def finish_recalculate(self):
pass
def update_data(self):
super().update_data()
self.speaker_model.corpus_model.changeCommandFired.emit()
self.speaker_model.corpus_model.statusUpdate.emit(
f"Changed speaker for {len(self.utterance_ids)} utterances"
)
self.speaker_model.speakersChanged.emit()
def finish_changing_speaker(self, new_speaker_id):
self.new_speaker_id = new_speaker_id
self.speaker_model.indices_updated(self.utterance_ids, self.old_speaker_id)
self.speaker_model.corpus_model.runFunction.emit(
"Recalculate speaker ivector",
self.finish_recalculate,
[
{
"speaker_id": self.old_speaker_id,
}
],
)
self.speaker_model.corpus_model.runFunction.emit(
"Recalculate speaker ivector",
self.finish_recalculate,
[
{
"speaker_id": self.new_speaker_id,
}
],
)
def _redo(self) -> None:
self.speaker_model.corpus_model.runFunction.emit(
"Changing speakers",
self.finish_changing_speaker,
[self.utterance_ids, self.new_speaker_id, self.old_speaker_id],
)
def _undo(self) -> None:
self.speaker_model.corpus_model.runFunction.emit(
"Changing speakers",
self.finish_changing_speaker,
[self.utterance_ids, self.old_speaker_id, self.new_speaker_id],
)
class UpdateSpeakerCommand(SpeakerCommand):
def __init__(self, speaker_id: int, old_name: str, new_name: str, speaker_model: SpeakerModel):
super().__init__(speaker_model)
self.speaker_id = speaker_id
self.old_name = old_name
self.new_name = new_name
self.setText(
QtCore.QCoreApplication.translate("UpdateSpeakerCommand", "Update speaker name")
)
def _redo(self) -> None:
self.speaker_model.corpus_model.session.query(Speaker).filter(
Speaker.id == self.speaker_id
).update({Speaker.name: self.new_name})
def _undo(self) -> None:
self.speaker_model.corpus_model.session.query(Speaker).filter(
Speaker.id == self.speaker_id
).update({Speaker.name: self.old_name})
class UpdateUtteranceSpeakerCommand(CorpusCommand):
def __init__(
self,
utterance: Utterance,
new_speaker: typing.Union[Speaker, int],
corpus_model: CorpusModel,
):
super().__init__(corpus_model)
self.utterance = utterance
self.old_speaker = utterance.speaker
self.new_speaker = new_speaker
if isinstance(self.new_speaker, Speaker):
self.new_speaker = self.new_speaker.id
self.resets_tier = True
if (
self.corpus_model.session.query(SpeakerOrdering)
.filter(
SpeakerOrdering.c.speaker_id == self.new_speaker,
SpeakerOrdering.c.file_id == utterance.file_id,
)
.first()
is None
):
self.corpus_model.session.execute(
sqlalchemy.insert(SpeakerOrdering).values(
speaker_id=self.new_speaker, file_id=self.utterance.file_id, index=2
)
)
self.corpus_model.session.commit()
self.setText(
QtCore.QCoreApplication.translate(
"UpdateUtteranceSpeakerCommand", "Update utterance speaker"
)
)
def _redo(self) -> None:
self.corpus_model.session.query(Utterance).filter(
Utterance.id == self.utterance.id
).update({Utterance.speaker_id: self.new_speaker})
def _undo(self) -> None:
self.corpus_model.session.query(Utterance).filter(
Utterance.id == self.utterance.id
).update({Utterance.speaker_id: self.old_speaker.id})
def update_data(self):
super().update_data()
self.corpus_model.changeCommandFired.emit()
self.corpus_model.update_data()
class UpdatePronunciationCommand(DictionaryCommand):
def __init__(
self,
pronunciation_id: int,
old_pronunciation: str,
new_pronunciation: str,
dictionary_model: DictionaryTableModel,
):
super().__init__(dictionary_model)
self.pronunciation_id = pronunciation_id
self.pronunciation: Pronunciation = self.dictionary_model.corpus_model.session.query(
Pronunciation
).get(pronunciation_id)
self.old_pronunciation = old_pronunciation
self.new_pronunciation = new_pronunciation
self.setText(
QtCore.QCoreApplication.translate("UpdatePronunciationCommand", "Update pronunciation")
)
def _redo(self) -> None:
self.pronunciation.pronunciation = self.new_pronunciation
def _undo(self) -> None:
self.pronunciation.pronunciation = self.old_pronunciation
class AddPronunciationCommand(DictionaryCommand):
def __init__(
self,
word: str,
pronunciation: str,
dictionary_model: DictionaryTableModel,
word_id: typing.Optional[int] = None,
):
super().__init__(dictionary_model)
self.pronunciation = pronunciation
self.oov_phone = dictionary_model.corpus_model.corpus.oov_phone
if not self.pronunciation:
if dictionary_model.g2p_generator is not None:
try:
self.pronunciation = dictionary_model.g2p_generator.rewriter(word)[0]
except (pynini.lib.rewrite.Error, IndexError):
self.pronunciation = self.oov_phone
else:
self.pronunciation = self.oov_phone
self.pronunciation_id = None
self.word_id = word_id
self.word = word
self.setText(
QtCore.QCoreApplication.translate("AddPronunciationCommand", "Add pronunciation")
)
def _redo(self) -> None:
if self.word_id is None:
self.word_id = (
self.dictionary_model.corpus_model.session.query(Word.id)
.filter(Word.word == self.word)
.first()[0]
)
if self.word_id is None:
self.word_id = (
self.dictionary_model.corpus_model.session.query(
sqlalchemy.func.max(Word.id)
).scalar()
+ 1
)
word_mapping_id = (
self.dictionary_model.corpus_model.session.query(
sqlalchemy.func.max(Word.mapping_id)
)
.filter(Word.dictionary_id == self.dictionary_model.current_dictionary_id)
.scalar()
+ 1
)
self.dictionary_model.corpus_model.session.execute(
sqlalchemy.insert(Word).values(
id=self.word_id,
mapping_id=word_mapping_id,
word=self.word,
dictionary_id=self.dictionary_model.current_dictionary_id,
word_type=WordType.speech,
)
)
self.pronunciation_id = (
self.dictionary_model.corpus_model.session.query(Pronunciation.id)
.filter(
Pronunciation.word_id == self.word_id,
Pronunciation.pronunciation == self.oov_phone,
)
.scalar()
)
if self.pronunciation_id is None:
self.pronunciation_id = (
self.dictionary_model.corpus_model.session.query(
sqlalchemy.func.max(Pronunciation.id)
).scalar()
+ 1
)
self.dictionary_model.corpus_model.session.execute(
sqlalchemy.insert(Pronunciation).values(
word_id=self.word_id,
id=self.pronunciation_id,
pronunciation=self.pronunciation,
)
)
else:
self.dictionary_model.corpus_model.session.query(Pronunciation).filter(
Pronunciation.id == self.pronunciation_id
).update({Pronunciation.pronunciation: self.pronunciation})
self.dictionary_model.corpus_model.session.query(Word).filter(
Word.id == self.word_id
).update({Word.word_type: WordType.speech})
def _undo(self) -> None:
self.dictionary_model.corpus_model.session.execute(
sqlalchemy.delete(Pronunciation).where(Pronunciation.id == self.pronunciation_id)
)
count = (
self.dictionary_model.corpus_model.session.query(
sqlalchemy.func.count(Pronunciation.id)
)
.filter(Pronunciation.word_id == self.word_id)
.scalar()
)
if count == 0:
self.dictionary_model.corpus_model.session.query(Word).filter(
Word.id == self.word_id
).update({Word.word_type: WordType.oov})
class DeletePronunciationCommand(DictionaryCommand):
def __init__(
self, pronunciation_ids: typing.List[int], dictionary_model: DictionaryTableModel
):
super().__init__(dictionary_model)
self.pronunciation_ids = pronunciation_ids
self.pronunciations = (
self.dictionary_model.corpus_model.session.query(Pronunciation)
.filter(Pronunciation.id.in_(pronunciation_ids))
.all()
)
self.setText(
QtCore.QCoreApplication.translate("DeletePronunciationCommand", "Delete pronunciation")
)
def _redo(self) -> None:
self.dictionary_model.corpus_model.session.query(Pronunciation).filter(
Pronunciation.id.in_(self.pronunciation_ids)
).delete()
def _undo(self) -> None:
for p in self.pronunciations:
make_transient(p)
self.dictionary_model.corpus_model.session.merge(p)
class DeleteWordCommand(DictionaryCommand):
def __init__(self, word_ids: typing.List[int], dictionary_model: DictionaryTableModel):
super().__init__(dictionary_model)
self.word_id = word_ids
query = (
self.dictionary_model.corpus_model.session.query(Word)
.options(sqlalchemy.orm.selectinload(Word.pronunciations))
.filter(Word.id.in_(word_ids))
)
self.words = []
for word in query:
if word not in self.words:
self.words.append(word)
        self.setText(
            QtCore.QCoreApplication.translate("DeleteWordCommand", "Delete word")
        )
def _redo(self) -> None:
for w in self.words:
self.dictionary_model.corpus_model.session.delete(w)
def _undo(self) -> None:
for w in self.words:
make_transient(w)
for p in w.pronunciations:
make_transient(p)
self.dictionary_model.corpus_model.session.merge(w)
class UpdateWordCommand(DictionaryCommand):
def __init__(
self, word_id: int, old_word: str, new_word: str, dictionary_model: DictionaryTableModel
):
super().__init__(dictionary_model)
self.word_id = word_id
self.word: Word = self.dictionary_model.corpus_model.session.query(Word).get(word_id)
self.old_word = old_word
self.new_word = new_word
self.setText(QtCore.QCoreApplication.translate("UpdateWordCommand", "Update orthography"))
def _redo(self) -> None:
self.word.word = self.new_word
def _undo(self) -> None:
self.word.word = self.old_word
|
Anchor-annotator
|
/Anchor_annotator-0.0.9.tar.gz/Anchor_annotator-0.0.9/anchor/undo.py
|
undo.py
|
import json
from AndTools import get_md5
from pydantic import BaseModel
from AndroidQQ.package import MessageSvc
from AndroidQQ.proto import *
from AndroidQQ.Tcp import *
import AndroidQQ.package.OidbSvc as OidbSvc
import AndroidQQ.package.StatSvc as StatSvc
import AndroidQQ.package.wtlogin as wt_login
import AndroidQQ.package.MQUpdateSvc_com_qq_ti as MQUpdateSvc
import AndroidQQ.package.friendlist as friendlist
import AndroidQQ.package.SummaryCard as SummaryCard
from AndroidQQ.package.head import *
class cookies(BaseModel):
skey: str = None
client_key: str = None
class device(BaseModel):
    # Software information
    version: str = None
    package_name: str = None  # com.tencent.qqlite
    Sig: bytes = None  # A6 B7 45 BF 24 A2 C2 77 52 77 16 F6 F3 6E B6 8D
    build_time: int = None  # software build time, e.g. 1654570540
    sdk_version: str = None  # 6.0.0.2366
    client_type: str = None  # android
    app_id: int = None  # can apparently be left empty
var: bytes = None
    # Device information
    name: str = 'android'
    internet: str = 'China Mobile GSM'
    internet_type: str = 'wifi'
    model: str = 'V1916A'
    brand: str = 'vivo'
    Mac_bytes: bytes = None  # '02:00:00:00:00:00'
    Bssid_bytes: bytes = None  # '00:14:bf:3a:8a:50'
    android_id: bytes = None  # e.g. 4cba299189224ca5; a unique 64-bit ID generated randomly on the device's first boot
    boot_id: str = '65714910-7454-4d01-a148-6bdf337a3812'  # Linux identifier marking the current boot session
IMEI: bytes = None
class UN_Tlv_list(BaseModel):
T10A_token_A4: bytes = b''
T143_token_A2: bytes = b''
T100_qr_code_mark: bytes = b'' # watch
T018: bytes = b'' # watch
T019: bytes = b'' # watch
T065: bytes = b'' # watch
T108: bytes = b''
T10E: bytes = b''
T134: bytes = b''
T114: bytes = b''
T133: bytes = b''
#
class info_model(BaseModel):
uin: str = '0'
uin_name: str = None
password: str = None
seq: int = 5267
share_key: bytes = None
key_rand: bytes = get_random_bin(16)
key_tg: bytes = None
key_Pubkey: bytes = None # 公钥
Guid: bytes = get_random_bin(16)
login_time: int = int(time.time())
    Tips_un: str = ''  # error message text returned in the packet body
UN_Tlv_list: UN_Tlv_list = UN_Tlv_list()
device: device = device()
cookies: cookies = cookies()
class AndroidQQ:
def __init__(self, **kwargs):
"""
:param client_type: QQ or Watch
:param kwargs:
"""
self.info = info_model()
self.info.device.Mac_bytes = bytes.fromhex(get_md5('02:00:00:00:00:00'.encode()))
self.info.device.Bssid_bytes = bytes.fromhex(get_md5('00:14:bf:3a:8a:50'.encode()))
client_type = kwargs.setdefault('client_type', 'QQ')
self.info.device.client_type = client_type
if client_type == 'QQ':
self.info.device.app_id = 537170024
self.info.device.android_id = bytes.fromhex('d018b704652f41f4')
self.info.device.package_name = 'com.tencent.mobileqq'
self.info.device.var = '||A8.9.71.9fd08ae5'.encode()
self.info.device.IMEI = '498054355930458'.encode()
elif client_type == 'Tim':
self.info.device.app_id = 537162285
self.info.device.package_name = 'com.tencent.tim'
self.info.device.var = '||A3.5.2.3f4af297'.encode()
self.info.device.IMEI = '877408608703263'.encode()
elif client_type == 'Watch':
self.info.device.app_id = 537140974
self.info.device.android_id = bytes.fromhex('4cba299189224ca2')
self.info.uin = '0'
self.info.device.package_name = 'com.tencent.qqlite'
self.info.device.version = '2.1.7'
self.info.device.Sig = bytes.fromhex('A6 B7 45 BF 24 A2 C2 77 52 77 16 F6 F3 6E B6 8D')
            self.info.device.build_time = int('1654570540')  # software build time: 2022-06-07 10:55:40
self.info.device.sdk_version = '6.0.0.2366'
            self.info.key_Pubkey = bytes.fromhex(
                '04 04 6E 31 F8 59 79 DF 7F 3D F0 31 CD C6 EB D9 B9 8E E2 E2 F6 3E FB 6E 79 BC 54 BF EE FB 0F 60 24 07 DA 8C 41 4A 34 EF 46 10 A7 95 48 0E F8 3F 0E')  # 49 bytes long
self.info.share_key = bytes.fromhex('54 9F 5C 3A B4 8D B9 16 DA 96 5F 3B 1B C1 03 4B')
self.info.key_rand = bytes.fromhex('70 3F 79 79 55 78 2E 55 63 64 3A 44 38 49 7A 53')
self.info.Guid = bytes.fromhex('9b6be0653a356f4fac89926f3f1ceb7e')
IMEI = '866174040000000'
self.info.device.IMEI = bytes(IMEI, 'utf-8')
self.info.device.var = bytes(IMEI, 'utf-8')
self._tcp = start_client(_func=self.UN_data)
self.pack_list = {}
def Set_TokenA(self, data):
"""
appid
537085851 小栗子二开
537101242 小栗子
"""
json_data = json.loads(data)
uin = json_data['UIN']
device_APPID = json_data.get('device_APPID')
if device_APPID is not None:
# 向下兼容
appid = int.from_bytes(bytes.fromhex(device_APPID), 'big')
else:
# 获取appid
appid = int(json_data.get('Appid', self.info.device.app_id))
# appid = int('537085851')
# print('appid', appid)
self.info.uin = str(json_data['UIN'])
self.info.UN_Tlv_list.T10A_token_A4 = bytes.fromhex(json_data['token_A4'])
self.info.UN_Tlv_list.T143_token_A2 = bytes.fromhex(json_data['token_A2'])
self.info.share_key = bytes.fromhex(json_data['Sharekey'].replace(' ', ''))
self.info.Guid = bytes.fromhex(json_data['GUID_MD5'])
        self.info.device.app_id = appid  # this parameter is now required
self.info.UN_Tlv_list.T10E = bytes.fromhex(json_data['T10E'])
self.info.UN_Tlv_list.T114 = bytes.fromhex(json_data['T114'])
self.info.UN_Tlv_list.T133 = bytes.fromhex(json_data['T133'])
self.info.UN_Tlv_list.T134 = bytes.fromhex(json_data['T134'])
    def UN_data(self, data):
        """Unpack an incoming packet"""
        pack = pack_u(data)
        pack.get_int()
        pack_way = pack.get_byte()
        pack.get_byte()  # 00
        _len = pack.get_int()
        pack.get_bin(_len - 4)  # uin bytes
        _data = pack.get_all()
        if pack_way == 2:
            # login-related packets use an all-zero key
            _data = TEA.decrypt(_data, '00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00')
        elif pack_way == 1:
            _data = TEA.decrypt(_data, self.info.share_key)
        else:
            _data = b''
            print('unknown decryption type')
        if _data == b'':
            return
        else:
            pack = pack_u(_data)
            _len = pack.get_int()
            part1 = pack.get_bin(_len - 4)
            _len = pack.get_int()
            part2 = pack.get_bin(_len - 4)
            # part1
            pack = pack_u(part1)
            seq = pack.get_int()
            pack.get_int()
            _len = pack.get_int()
            Tips = pack.get_bin(_len - 4).decode('utf-8')
            _len = pack.get_int()
            Cmd = pack.get_bin(_len - 4).decode('utf-8')
            if Tips != '':
                self.info.Tips_un = Tips
                seq = self.info.seq  # attribute the tip to the most recent packet
                print('Tips', Tips)
            # part2
            # print('seq', seq, 'command', Cmd, part2.hex())
            if 0 < seq < 1000000:
                # print('seq', seq, 'command', Cmd, part2.hex())
                self.pack_list.update({seq: part2})
            else:
                # pushed packet with no matching request; ignore
                pass
    def get_seq(self):
        """Inspect the response cache (for testing)"""
        # todo: remove once testing is complete
        return len(self.pack_list)
    def Tcp_send(self, data):
        self._tcp.sendall(data)
        start_time = time.time()  # record the send time
        seq = self.info.seq
        while time.time() - start_time < 3:  # stop waiting after three seconds
            data = self.pack_list.get(seq)
            if data is not None:
                self.pack_list.pop(seq)  # remove the packet once consumed
                break
            time.sleep(0.1)
        self.info.seq = seq + 1
        return data
def no_tail_login(self):
"""无尾登录包"""
data = OidbSvc.P_0x88d_1(self.info)
# print(data.hex())
data = self.Tcp_send(data)
if data:
data = OidbSvc.P_0x88d_1_res(data)
return data
def get_dev_login_info(self, **kwargs):
"""
获取设备登录信息。
**kwargs: 可变数量的关键字参数,包括:
type (int): 设备类型。1 表示在线设备,2 表示离线设备,3 表示全部设备。默认为 3。
Returns:
返回获取到的设备登录信息。
"""
data = StatSvc.GetDevLoginInfo(self.info, **kwargs)
data = self.Tcp_send(data)
if data:
data = StatSvc.GetDevLoginInfo_res(data)
return data
def watch_scan_code(self, verify=False):
"""手表扫码"""
data = wt_login.trans_emp(self.info, verify)
data = self.Tcp_send(data)
data = wt_login.trans_emp_res(data, self.info, verify)
return data
    def scan_code_auth(self, **kwargs):
        """QR code authorization"""
        data = wt_login.trans_emp_auth(self.info, **kwargs)
        data = self.Tcp_send(data)
        if data:
            data = wt_login.trans_emp_auth_res(data, self.info, **kwargs)
        else:
            data = {'status': -1, 'message': 'no data returned'}
        return data
def login(self, **kwargs):
"""登录"""
data = wt_login.login(self.info, **kwargs)
data = self.Tcp_send(data)
wt_login.login_res(data, self.info)
def scan_Login(self, **kwargs):
"""扫码登录/辅助验证"""
data = MQUpdateSvc.web_scan_login(self.info, **kwargs)
data = self.Tcp_send(data)
data = MQUpdateSvc.web_scan_login_res(data)
return data
def get_specified_info(self):
"""获取指定信息"""
# 兼容其他源码
data = {
"UIN": self.info.uin,
"GUID_MD5": self.info.Guid.hex(),
"token_A4": self.info.UN_Tlv_list.T10A_token_A4.hex(),
"token_A2": self.info.UN_Tlv_list.T143_token_A2.hex(),
"Sharekey": self.info.share_key.hex(),
"T134": self.info.UN_Tlv_list.T134.hex(),
"T133": self.info.UN_Tlv_list.T133.hex(),
"T10E": self.info.UN_Tlv_list.T10E.hex(),
"T114": self.info.UN_Tlv_list.T114.hex(),
"device_APPID": self.info.device.app_id.to_bytes(4, 'big').hex()
}
return json.dumps(data)
def get_phone(self):
"""获取手机号"""
data = OidbSvc.P_0xeb8(self.info)
data = self.Tcp_send(data)
if data:
data = OidbSvc.P_0xeb8_res(data)
return data
def login_register(self, **kwargs):
"""登录注册
上线包
bid = 0 登出
"""
data = StatSvc.register(self.info, **kwargs)
data = self.Tcp_send(data)
if data:
data = StatSvc.register_res(data)
return data
def get_unread_msg_count(self):
"""获取未读消息"""
data = MessageSvc.PullUnreadMsgCount(self.info)
data = self.Tcp_send(data)
if data:
data = MessageSvc.PullUnreadMsgCount_res(data)
return data
def get_auth_list(self, **kwargs):
"""获取授权列表
start = 0
limit= 10
"""
data = OidbSvc.P_0xc05(self.info, **kwargs)
data = self.Tcp_send(data)
if data:
data = OidbSvc.P_0xc05_res(data)
return data
def del_auth_info(self, **kwargs):
"""删除授权信息
appid= 要删除的id
"""
data = OidbSvc.P0xccd(self.info, **kwargs)
data = self.Tcp_send(data)
if data:
data = OidbSvc.P0xccd_res(data)
return data
def del_login_info(self, **kwargs):
"""删除登录信息
key= 获取设备信息返回
"""
data = StatSvc.DelDevLoginInfo(self.info, **kwargs)
data = self.Tcp_send(data)
if data:
data = StatSvc.DelDevLoginInfo_res(data)
return data
def get_friends_online_list(self, **kwargs):
"""获取在线好友列表
'ifgetFriendVideoAbi': 是否获取朋友的视频能力。布尔值,可选,默认为False。
'isReqCheckIn': 是否请求签到。布尔值,可选,默认为False。
'ifShowTermType': 是否显示好友的设备类型。布尔值,可选,默认为True。
'version': 版本号。32位整数,可选,默认为33。
'cSrcType': 来源类型。32位整数,可选,默认为1。
"""
data = friendlist.GetSimpleOnlineFriendInfoReq(self.info)
data = self.Tcp_send(data)
if data:
data = friendlist.GetSimpleOnlineFriendInfoReq_res(data)
return data
def get_summary_card(self, **kwargs):
"""获取个人名片
uin = 要获取的uin 默认自身
"""
data = SummaryCard.ReqSummaryCard(self.info, **kwargs)
data = self.Tcp_send(data)
if data:
data = SummaryCard.ReqSummaryCard_res(data)
return data
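`Tcp_send` above pairs each request with its response by sequence number: the receive thread drops decrypted payloads into `pack_list`, and the sender polls that dict for its own `seq` with a three-second timeout. A minimal sketch of the same matching scheme (the names below are illustrative, not part of this package):

```python
import time

# Sketch of Tcp_send's request/response matching: a receive thread files
# responses into a dict keyed by sequence number, and the sender polls
# for its own seq until a deadline passes.
pending = {}


def deliver(seq, payload):
    # called by the receive thread for packets with 0 < seq < 1000000
    pending[seq] = payload


def wait_for(seq, timeout=3.0, poll=0.01):
    deadline = time.time() + timeout
    while time.time() < deadline:
        if seq in pending:
            return pending.pop(seq)  # consume the response
        time.sleep(poll)
    return None  # timed out, like Tcp_send returning no data


deliver(42, b"response")
result = wait_for(42)
```

Popping the entry on consumption keeps the dict from accumulating stale responses, mirroring `self.pack_list.pop(seq)` in the real method.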
|
AndN
|
/AndN-0.2.3.tar.gz/AndN-0.2.3/AndroidQQ/Android.py
|
Android.py
|
import random
import socket
import select
import threading
import time
from AndTools import pack_u, pack_b
from AndroidQQ.package.sso_server import get_sso_list
clients = []
client_info = {}
ip_address = ''
ip_list = {}
def repackage(data, client):
"""重组包体"""
global client_info
client_info[client]['data'] = client_info[client]['data'] + data
pack_ = pack_u(client_info[client]['data'])
while True:
if pack_.get_len() <= 4:
"""小于4个字节直接跳出"""
break
_len = pack_.get_int()
if _len <= pack_.get_len() + 4:
_bin = pack_.get_bin(_len - 4)
_func = client_info[client]['func']
_func(_bin)
client_info[client]['data'] = pack_.get_all()
pack_ = pack_u(client_info[client]['data'])
else:
pack = pack_b()
pack.add_int(_len)
pack.add_bin(pack_.get_all())
pack_ = pack_u(pack.get_bytes())
break
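`repackage` above reassembles a TCP byte stream into packets using a 4-byte big-endian length prefix that counts itself (hence the `_len - 4` reads). A self-contained sketch of the same framing rule using the standard `struct` module:

```python
import struct

# Self-contained sketch of the framing rule repackage() implements: each
# packet starts with a 4-byte big-endian length that includes the prefix
# itself, and partial frames stay buffered until the rest arrives.
def frame(payload: bytes) -> bytes:
    return struct.pack(">I", len(payload) + 4) + payload


def deframe(buffer: bytes):
    """Return (complete_payloads, leftover_bytes) for a partial stream."""
    payloads = []
    while len(buffer) >= 4:
        (length,) = struct.unpack(">I", buffer[:4])
        if len(buffer) < length:
            break  # frame not fully received yet; keep buffering
        payloads.append(buffer[4:length])
        buffer = buffer[length:]
    return payloads, buffer


stream = frame(b"hello") + frame(b"world") + b"\x00\x00\x00"
msgs, rest = deframe(stream)
```

The leftover bytes correspond to the rebuffered remainder `repackage` stores back into `client_info[client]['data']` for the next `recv`.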
def disconnect_client(client, clients, client_info):
"""断开客户端连接"""
clients.remove(client)
client.close()
client_info.pop(client)
def receive_data_all():
    """Receive data from every connected client"""
    global client_info
    while True:
        time.sleep(0.1)
        # todo: the code below has known issues
        if len(clients) == 0:
            continue
        # pick out the readable client sockets
        readable, _, _ = select.select(clients, [], [], 0)  # timeout = 0
        for client in readable:
            try:
                data = client.recv(1024)
            except ConnectionResetError:
                print("Connection was reset by the client.")
                disconnect_client(client, clients, client_info)
                continue
            if not data:
                disconnect_client(client, clients, client_info)
                print('Client disconnected')
            else:
                # print(f"data received from the client: {data.hex()}")
                repackage(data, client)
# def receive_data(sock):
# while True:
# data = sock.recv(1024)
# if not data:
# break
# print(f"data received from the server: {data.hex()}")
def start_client(host=None, port=None, _func=None):
if not host:
if ip_list:
random_item = random.choice(ip_list)
host = random_item['1']
port = random_item['2']
else:
            # use this fast-connect address until the ip list has been initialized
host = '36.155.245.16'
port = 8080
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((host, port))
client_info[client] = {
'data': b'',
'func': _func
}
clients.append(client)
return client
def get_ip_list():
    time.sleep(1)  # wait over a second before requesting, to avoid firing during tests
global ip_list
ip_list = get_sso_list()
def start_tcp_service():
"""启动TCP服务"""
receive_thread = threading.Thread(target=receive_data_all, daemon=True).start()
threading.Thread(target=get_ip_list, daemon=True).start()
print('启动接收线程')
start_tcp_service()
if __name__ == "__main__":
pass
|
AndN
|
/AndN-0.2.3.tar.gz/AndN-0.2.3/AndroidQQ/Tcp.py
|
Tcp.py
|
from Jce import JceInputStream, JceStruct
from Jce_b import JceWriter, JceReader
from AndroidQQ.package import PackHeadNoToken, Pack_, Un_jce_Head
def ReqSummaryCard(info, **kwargs):
    """Request summary card
    package SummaryCard;
    """

    def vReqLastGameInfo():
        """Fetch the most recent game info"""
        pass

    Uin = int(kwargs.get('Uin', 0)) or int(info.uin)
    jce = JceWriter()
    jce.write_int64(Uin, 0)
    jce.write_int64(1, 1)  # eComeFrom: request origin
    jce.write_int64(0, 2)  # uQzoneFeedTimestamp: Qzone feed timestamp
    jce.write_bool(kwargs.get('bIsFriend', True), 3)  # whether the target is a friend; non-friends can still be marked as friends
    jce.write_int64(kwargs.get('lGroupCode', 0), 4)  # group code (source ID)
    jce.write_int64(kwargs.get('lGroupUin', 0), 5)  # group uin
    jce.write_bytes(bytes.fromhex('00'), 6)  # cache_vSeed
    jce.write_string(kwargs.get('strSearchName', ''), 7)  # searched name
    jce.write_int64(kwargs.get('lGetControl', 69181), 8)
    jce.write_int32(kwargs.get('eAddFriendSource', 10004), 9)  # add-friend source
    jce.write_bytes(bytes.fromhex('00'), 10)  # vSecureSig: security signature
    # jce.write_bytes(kwargs.get('cache_vReqLastGameInfo', 0), 12)  # cached vReqLastGameInfo
    # TODO: other fields are not handled for now
    _data = jce.bytes()
    # print(_data.hex())
    _data = JceWriter().write_jce_struct(_data, 0)
    # _data = JceWriter().write_map({'ReqSummaryCard': {'SummaryCard.ReqSummaryCard': _data},
    #                                'ReqHead': {'SummaryCard.ReqHead': bytes.fromhex('0A 00 02 0B')}}, 0)

    _data = JceWriter().write_map({'ReqHead': bytes.fromhex('0A 00 02 0B'), 'ReqSummaryCard': _data},
                                  0)  # newer versions seem to add more checks, so use the old header
    _data = PackHeadNoToken(info, _data, 'SummaryCard.ReqSummaryCard',
                            'SummaryCardServantObj', 'ReqSummaryCard')
    _data = Pack_(info, _data, Types=11, encryption=1, sso_seq=info.seq)
    return _data
def ReqSummaryCard_res(data):
    """Parse the summary-card response"""
data = Un_jce_Head(data)
_map = JceReader(data).read_map(0)
_dict = _map.get('RespSummaryCard', None)
# print(_dict)
if _dict:
RespSummaryCard = _dict['SummaryCard.RespSummaryCard']
stream = JceInputStream(RespSummaryCard)
jce = JceStruct()
jce.read_from(stream)
return jce.to_json()
else:
return None
|
AndN
|
/AndN-0.2.3.tar.gz/AndN-0.2.3/AndroidQQ/package/SummaryCard.py
|
SummaryCard.py
|
import time
from AndTools import pack_b, TEA, pack_u
from AndroidQQ.package.Tlv import TLV
from AndroidQQ.package.Tlv_res import Un_Tlv
from AndroidQQ.package.head import Pack_Head_login, Pack_, Pack_Head_login_test
def login(info, **kwargs):
    """Login packet"""
    _tlv = TLV(info)
    pack = pack_b()
    if info.device.client_type == 'Watch':
        # watch protocol
        methods = [
            _tlv.T018,
            _tlv.T001,
            _tlv.T106,  # required
            _tlv.T116,  # required
            _tlv.T100,  # required
            _tlv.T107,  # required
            _tlv.T142,  # required
            _tlv.T144,  # required
            _tlv.T145,  # required
            _tlv.T147,
            _tlv.T16A,  # required
            _tlv.T154,
            _tlv.T141,
            _tlv.T008,
            _tlv.T187,
            _tlv.T188,
            _tlv.T194,
            _tlv.T191,
            _tlv.T202,
            _tlv.T177,
            _tlv.T516,
            _tlv.T521,
            _tlv.T318,  # s
            # _tlv.T544
        ]
elif info.device.client_type == 'Tim':
info.key_Pubkey = bytes.fromhex(
'04 56 F1 AC E7 71 73 EF F2 6C 31 4E 12 3F 6E A0 5B D1 7C 37 DF 3F 61 80 2C 51 F8 5B C3 69 C2 48 DB 1C 72 1C 38 08 8A 59 2E 99 94 92 C6 0F 9E 9E 45 D9 58 37 AE 2E D0 10 FD 32 8D 58 29 E8 39 F6 E3') #
info.share_key = bytes.fromhex('5B9EFFCD3C54E9FBBE7560A0290CE076')
info.key_rand = bytes.fromhex('99375BA2B9AF634B0D6532C118ADF33C')
info.key_tg = bytes.fromhex('F4 67 85 E9 38 85 5F FF 71 41 B7 88 E2 28 B9 9D')
info.Guid = bytes.fromhex('9b6be0653a356f4fac89926f3f1ceb7e')
info.uin = kwargs['uin']
info.password = kwargs['password']
methods = []
    else:
        # default: regular QQ
info.key_Pubkey = bytes.fromhex(
'04 6F 9E D9 8C FB 8B 92 73 73 69 6E B7 CA 40 A5 BE 28 84 D0 EF EC D5 96 84 C2 E9 14 50 8F A9 7B 20 9F F2 4E 35 5E D3 92 21 53 ED 9A F1 8F 14 D0 02 73 E0 62 AD C3 A3 79 21 A2 6A 66 19 D2 A5 C7 E3') #
info.share_key = bytes.fromhex('0E6F09415FC0A5CCD040AFA92EBE3EB0')
info.key_rand = bytes.fromhex('944C10A69E2A5E4CEEF08512BA3B39FE')
info.key_tg = bytes.fromhex('AD 07 ED 80 67 75 BB F7 FD C4 A8 D5 41 5E 73 EA')
info.Guid = bytes.fromhex('9b6be0653a356f4fac89926f3f1ceb7e')
info.uin = kwargs['uin']
info.password = kwargs['password']
        # phone protocol
methods = [
_tlv.T018,
_tlv.T001,
_tlv.T106,
_tlv.T116,
_tlv.T100,
_tlv.T107,
_tlv.T142,
_tlv.T144,
_tlv.T145,
_tlv.T147,
_tlv.T154,
_tlv.T141,
_tlv.T008,
_tlv.T511,
_tlv.T187,
_tlv.T188,
_tlv.T191,
_tlv.T177,
_tlv.T516,
_tlv.T521,
_tlv.T525
]
    pack.add_Hex('00 09')
    pack.add_int(len(methods), 2)  # number of TLVs
    # call each method and append its result to the pack
    for method in methods:
        pack.add_bin(method())
_data = pack.get_bytes()
data = TEA.encrypt(_data, info.share_key)
    # header
pack = pack_b()
pack.add_Hex('1F 41')
pack.add_Hex('08 10')
pack.add_Hex('00 01')
pack.add_int(int(info.uin)) # Uin_bytes
    if info.device.client_type == 'Watch':
        pack.add_Hex('03 07 00 00 00 00 02 00 00 00 00 00 00 00 00')
        pack.add_Hex('01 01')
        pack.add_bin(info.key_rand)  # not the key
        pack.add_Hex('01 02')
    else:
        # default: regular QQ
        pack.add_Hex('03 87 00 00 00 00 02 00 00 00 00 00 00 00 00')
        pack.add_Hex('02 01')
        pack.add_bin(info.key_tg)  # not the key
        pack.add_Hex('01 31')
pack.add_Hex('00 01')
pack.add_body(info.key_Pubkey, 2)
pack.add_bin(data)
data = pack.get_bytes()
    pack.empty()  # wrapper
pack.add_Hex('02')
pack.add_body(data, 2, add_len=4)
pack.add_Hex('03')
data = pack.get_bytes()
    # header
    data = Pack_Head_login(info, 'wtlogin.login', data)
    data = Pack_(info, data=data, encryption=2, Types=10, sso_seq=4)
    print('login packet', data.hex())
return data
def login_res(data, info):
    data = data[15:-1]  # strip the 15-byte header & the trailing 03
    _status = data[0]
    data = TEA.decrypt(data[1:], info.share_key)
    print('login response', _status, data.hex())
    if _status == 0:
        data = data[5:]  # 00 09 00 00 02
        pack = pack_u(data)
        _head = pack.get_bin(2).hex()
        _len = pack.get_short()
        data = pack.get_bin(_len)
        if _head == '0119':
            # check the TLV header
            data = TEA.decrypt(data, info.key_rand)
            print('decrypted', data.hex())
    else:
        data = data[3:]
    un_tlv = Un_Tlv(data, info)
    print('login response', _status, un_tlv.unpack())
def wtlogin_trans(bArr):
    """Variant decode of the K value (base64-style decode with a custom lookup table)"""
a_sm = b'\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff' \
b'\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff>\xff\xff?\xff\xff456789:;<=\xff\xff\xff' \
b'\xff\xff\xff\xff\x00\x01\x02\x03\x04\x05\x06\x07\x08\t\n\x0b\x0c\r\x0e\x0f\x10\x11\x12\x13\x14\x15\x16' \
b'\x17\x18\x19\xff\xff\xff\xff\xff\xff\x1a\x1b\x1c\x1d\x1e\x1f !"#$%&\'()*+,' \
b'-./0123\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff' \
b'\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff' \
b'\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff' \
b'\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff' \
b'\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff' \
b'\xff\xff\xff\xff\xff'
if isinstance(bArr, str):
bArr = bArr.encode()
i = 32
i4 = 0
i3 = 0
i2 = 0
b = 0
bArr2 = bytearray(24)
while True:
i6 = i - 1
if i > 0:
i7 = i4 + 1
i5 = bArr[i4]
if i5 != 0 or i5 == 95:
if i5 == 32:
i5 = 42
b2 = a_sm[i5]
if b2 < 0:
b = b2
i = i6
i4 = i7
else:
residue = i3 % 4
if residue == 0:
bArr2[i2] = b2 << 2
i5 = i2
elif residue == 1:
i5 = i2 + 1
bArr2[i2] = bArr2[i2] | (b2 >> 4)
bArr2[i5] = (b2 & 0x0F) << 4
elif residue == 2:
i5 = i2 + 1
bArr2[i2] = bArr2[i2] | (b2 >> 2)
bArr2[i5] = (b2 & 0x03) << 6
elif residue == 3:
i5 = i2 + 1
bArr2[i2] |= b2
else:
i5 = i2
i3 += 1
i4 = i7
i = i6
i2 = i5
b = b2
elif b == 95:
residue = i3 / 4
if residue == 1:
break
elif residue == 2:
i2 = i2 + 1
else:
break
return bArr2
def trans_emp(info, verify=None):
    if verify:
        # QR-code scan status
pack = pack_b()
pack.add_Hex('00 00 62 00 00 00 10 00 00 00 72 00 00 00')
pack.add_int(int(time.time()))
pack.add_Hex('02 00 5E 00 12 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 03 00 00 00 32 00 '
'00 00 01 00 00 00 00 00 00 00 00 00 05 01 00 00 00 73 00 00')
pack.add_Hex('00 10 ')
pack.add_int(len(info.UN_Tlv_list.T100_qr_code_mark), 2)
pack.add_bin(info.UN_Tlv_list.T100_qr_code_mark)
pack.add_Hex('00 00 00 00 00 00 00 00 08 00 00 00 00 03')
data = pack.get_bytes()
    else:
        # fetch the QR code
pack = pack_b()
pack.add_Hex(
'00 01 0D 00 00 00 10 00 00 00 72 00 00 00 64 C9 FA 20 02 01 09 00 31 00 00 00 00 00 00 00 00 00 00 00 00 '
'00'
'00 00 00 00 00 00 00 00 03 00 00 00 32 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 10 00 00 00 00 '
'00'
'00 00 00 08 00 00 00 06')
Tlv = TLV(info)
pack.add_bin(Tlv.T016())
pack.add_bin(Tlv.T01B())
pack.add_bin(Tlv.T01D())
pack.add_bin(Tlv.T01F())
pack.add_bin(Tlv.T033())
pack.add_bin(Tlv.T035())
pack.add_Hex('03')
data = pack.get_bytes()
data = TEA.encrypt(data, info.share_key)
    # header
pack = pack_b()
pack.add_Hex('1F 41')
pack.add_Hex('08 12')
pack.add_Hex('00 01')
pack.add_Hex('00 00 00 00') # Uin_bytes
pack.add_Hex('03 07 00 00 00 00 02 00 00 00 00 00 00 00 00')
    pack.add_Hex('01 01')  # varies: 01
pack.add_bin(info.key_rand)
pack.add_Hex('01 02')
pack.add_int(len(info.key_Pubkey), 2)
pack.add_bin(info.key_Pubkey)
pack.add_bin(data)
data = pack.get_bytes()
    pack.empty()  # wrapper
pack.add_Hex('02')
    pack.add_int(len(data) + 4, 2)  # short
pack.add_bin(data)
pack.add_Hex('03')
data = pack.get_bytes()
    # header
data = Pack_Head_login(info, 'wtlogin.trans_emp', data)
data = Pack_(info, data=data, encryption=2, Types=10, sso_seq=4)
return data
def trans_emp_auth(info, **kwargs):
verify = kwargs.get('verify', False)
pack = pack_b()
pack.add_int(int(time.time()))
if verify:
pack.add_Hex(
'02 00 C9 00 14 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 03 00 00 00 32 00 00 00 02 00 00 00 00')
else:
pack.add_Hex(
'02 00 DE 00 13 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 03 00 00 00 32 00 00 00 00 00 00 00 00')
pack.add_int(int(info.uin))
pack.add_Hex('00 00 00 00 00 10 00 00 00 00')
pack.add_int(int(info.uin))
pack.add_body(wtlogin_trans(kwargs['K']), 2)
pack.add_body(info.UN_Tlv_list.T10A_token_A4, 2)
if verify:
pack.add_Hex('08 00 03 00 02 00 08 00 00 00 00 00 00 00 0B 00 15 00 04 00 00 00 00 00 68')
pack.add_body(info.Guid, 2)
else:
pack.add_bin(info.Guid)
pack.add_Hex('01 00 01 08 00 04 00 03 00 05 00 20 00 36 00 01 00 09')
        pack.add_body('com.tencent.mobileqq', 2)  # this appears to be validated server-side
pack.add_Hex('00 39 00 04 00 00 00 01')
pack.add_Hex('03')
data = TEA.encrypt(pack.get_bytes(), info.UN_Tlv_list.T10E)
pack.empty()
if verify:
pack.add_Hex('01 00 D8 00 00 00 10 00 00 00 72 00 60')
else:
pack.add_Hex('01 00 F0 00 00 00 10 00 00 00 72 00 60')
pack.add_bin(info.UN_Tlv_list.T114)
pack.add_Hex('00')
pack.add_bin(data)
data = TEA.encrypt(pack.get_bytes(), info.UN_Tlv_list.T134)
pack.empty()
pack.add_Hex('1F 41')
pack.add_Hex('08 12')
pack.add_Hex('00 01')
pack.add_int(int(info.uin)) # Uin_bytes
pack.add_Hex('03 45 00 00 00 00 02 00 00 00 00 00 00 00 00 00 30')
pack.add_bin(info.UN_Tlv_list.T133)
pack.add_bin(data)
data = pack.get_bytes()
    pack.empty()  # wrapper
pack.add_Hex('02')
    pack.add_int(len(data) + 4, 2)  # short
pack.add_bin(data)
pack.add_Hex('03')
data = pack.get_bytes()
pack.empty()
pack.add_Hex(
'00 00 00 27 00 00 00 15 77 74 6C 6F 67 69 6E 2E 74 72 61 6E 73 5F 65 6D 70 00 00 00 08 F7 C0 A1 E8 00 00 00 06 70 00')
pack.add_int(len(data) + 4, 4)
pack.add_bin(data)
data = TEA.encrypt(pack.get_bytes(), '00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00')
    # header
data = Pack_(info, data=data, encryption=2, Types=11, sso_seq=info.seq)
return data
def trans_emp_auth_res(data, info, **kwargs):
auth_info = {}
verify = kwargs.get('verify', False)
auth_info['verify'] = verify
data = TEA.decrypt(data[16:-1], info.UN_Tlv_list.T134)
data = TEA.decrypt(data[5:], info.UN_Tlv_list.T10E)
data = data[53:]
pack = pack_u(data)
status = pack.get_byte()
if status != 0:
_len = pack.get_short()
message = pack.get_bin(_len).decode('utf-8')
auth_info['message'] = message
else:
_time = pack.get_int(4)
_len = pack.get_short()
AuthName = pack.get_bin(_len).decode('utf-8')
auth_info['AuthName'] = AuthName
        if verify:
            # confirm authorization
            pack.get_short()
data = pack.get_all()
TLv = Un_Tlv(data, info)
TLv.unpack()
auth_info.update(TLv.get_auth_result())
auth_info['status'] = status
return auth_info
def trans_emp_res(data, info, verify):
    status_message = {
        48: 'Please scan the QR code with mobile QQ to log in',
        53: 'Scan successful, please confirm the login on your phone',
        54: 'The user cancelled the login',
        99: 'Please scan the QR code',
        0: 'Authorization complete'
    }
pack = pack_u(data)
pack.get_byte() # 02
pack.get_int(2) # len
pack = pack_u(pack.get_all())
pack.get_bin(10) # 1f 41 08 12 00 01 00 00 00 00
pack.get_int(2) # ?
pack.get_byte() # ?
data = pack.get_all()
data = TEA.decrypt(data[:-1], info.share_key)
if verify:
status = data[-2]
qrCode = None
if status == 0:
data = data[72:]
Un_Tlv(data, info).unpack()
            print('QR scan complete', data.hex())
else:
status = 99
        data = data[53:]  # strip the leading bytes
pack = pack_u(data)
pack.get_bin(2) # tlv
_len = pack.get_int(2) # len
info.UN_Tlv_list.T100_qr_code_mark = pack.get_bin(_len) # data
pack.get_bin(2) # tlv
pack.get_int(2) # len
_len = pack.get_int(2)
qrCode = pack.get_bin(_len) # data
    message = status_message.get(status, "unknown status")
return qrCode, status, message
|
AndN
|
/AndN-0.2.3.tar.gz/AndN-0.2.3/AndroidQQ/package/wtlogin.py
|
wtlogin.py
|
import json
# object data service (OidbSvc)
from google.protobuf.json_format import MessageToJson, MessageToDict
from AndroidQQ.proto import *
from AndroidQQ.package.head import *
from pyproto import ProtoBuf
def P_0xeb8(info):
""" uint32_src = 1 proto_ver = 2 {1: 1, 2: 2} """
_dict = {1: 3768, 2: 1, 4: {1: 1, 2: 2}}
_data = ProtoBuf(_dict).toBuf()
_data = PackHeadNoToken(info, _data, 'OidbSvc.0xeb8')
_data = Pack_(info, _data, Types=11, encryption=1, sso_seq=info.seq)
return _data
def P_0xeb8_res(data):
    """Returns the bound phone info"""
new_msg = OidbSvc0xeb8r()
new_msg.ParseFromString(data)
return MessageToDict(new_msg)['RspBody']
def P_0x88d_1(info):
msg = OidbSvc0x88d1()
msg.field1 = 2189
msg.field2 = 1
msg.field4.field1 = 537046294
msg.field4.field2.field1 = 799854399
msg.field4.field2.field2.field7 = 0
msg.field4.field2.field2.field24 = b'' # Replace this with your intended byte array
    # serialize the message
    bytes_temp = msg.SerializeToString()
bytes_temp = Pack_Head(info, bytes_temp, 'OidbSvc.0x88d_1')
bytes_temp = Pack_(info, bytes_temp, Types=8, encryption=1, token=True)
return bytes_temp
def P_0x88d_1_res(data):
    """Returns a dict"""
new_msg = OidbSvc0x88d1r()
new_msg.ParseFromString(data)
return MessageToDict(new_msg)
def P_0xc05(info, **kwargs):
    """Fetch the authorization list"""
_dict = {1: 3077, 2: 1, 3: 0, 4: {11: {1: kwargs.get('start', 0), 2: kwargs.get('limit', 10)}}}
_data = ProtoBuf(_dict).toBuf()
_data = PackHeadNoToken(info, _data, 'OidbSvc.0xc05')
_data = Pack_(info, _data, Types=11, encryption=1, sso_seq=info.seq)
return _data
def P_0xc05_res(data):
    """Returns a dict"""
new_msg = OidbSvc0xc05r()
new_msg.ParseFromString(data)
return MessageToDict(new_msg)['RspBody']['AppListRsp']
def P0xccd(info, **kwargs):
    """Delete an authorization record"""
_dict = {1: 3277, 2: 1, 3: 0, 4: {2: kwargs.get('appid', 0), 3: 1}}
_data = ProtoBuf(_dict).toBuf()
_data = PackHeadNoToken(info, _data, 'OidbSvc.0xccd')
_data = Pack_(info, _data, Types=11, encryption=1, sso_seq=info.seq)
return _data
def P0xccd_res(data):
_dict = ProtoBuf(data).toDictAuto()
return _dict
|
AndN
|
/AndN-0.2.3.tar.gz/AndN-0.2.3/AndroidQQ/package/OidbSvc.py
|
OidbSvc.py
|
import json
from random import randint
from AndTools import pack_b, get_random_bin, TEA
from Jce_b import JceWriter, JceReader
from pyproto import ProtoBuf
def Pack_(info, data, Types, encryption, sso_seq=None, token=False):
    """Assemble the outer packet
    Types: packet type, usually 10 or 11
    encryption: encryption mode; 2 = no token needed, 1 = token required
    sso_seq: packet sequence number
    """
Pack = pack_b()
Pack.add_int(Types)
Pack.add_bytes(encryption)
Uin_bytes = info.uin.encode('utf-8')
if token:
token_A2 = info.UN_Tlv_list.T143_token_A2
Pack.add_int(len(token_A2) + 4)
Pack.add_bin(token_A2)
if sso_seq is not None:
Pack.add_int(sso_seq)
Pack.add_bytes(0)
Pack.add_int(len(Uin_bytes) + 4)
Pack.add_bin(Uin_bytes)
Pack.add_bin(data)
bytes_temp = Pack.get_bytes()
Pack.empty()
Pack.add_int(len(bytes_temp) + 4)
Pack.add_bin(bytes_temp)
return Pack.get_bytes()
def Pack_Head(info, data, Cmd):
    """Packet header"""
TokenA4 = info.UN_Tlv_list.T10A_token_A4
if TokenA4 is None:
TokenA4 = b''
IMEI = info.device.IMEI
var = info.device.var
Pack = pack_b()
Pack.add_int(info.seq)
Pack.add_int(info.device.app_id)
Pack.add_int(info.device.app_id)
Pack.add_Hex('01 00 00 00')
Pack.add_Hex('00 00 00 00')
Pack.add_Hex('00 00 01 00')
Pack.add_int(len(TokenA4) + 4)
Pack.add_bin(TokenA4)
Pack.add_int(len(Cmd) + 4)
Pack.add_bin(bytes(Cmd, 'utf-8'))
Pack.add_Hex('00 00 00 08')
Pack.add_bin(get_random_bin(4))
Pack.add_int(len(IMEI) + 4)
Pack.add_bin(IMEI)
Pack.add_Hex('00 00 00 04')
Pack.add_int(len(var) + 2, 2) # Short
Pack.add_bin(var)
bytes_temp = Pack.get_bytes()
Pack.empty()
Pack.add_int(len(bytes_temp) + 4)
Pack.add_bin(bytes_temp)
bytes_temp = Pack.get_bytes()
Pack.empty()
Pack.add_bin(bytes_temp)
Pack.add_int(len(data) + 4)
Pack.add_bin(data)
bytes_temp = Pack.get_bytes()
bytes_temp = TEA.encrypt(bytes_temp, info.share_key)
return bytes_temp
def Pack_Head_login_test(info, Cmd, data_02):
    """Returns the leading header; the trailing part is written separately in each packing function
    01 8C
    1F 41
    08 12
    """
pack = pack_b()
pack.add_int(info.seq)
pack.add_int(info.device.app_id)
pack.add_int(info.device.app_id)
pack.add_Hex('01 00 00 00 00 00 00 00 00 00 03 00')
pack.add_int(len(info.UN_Tlv_list.T10A_token_A4) + 4)
pack.add_bin(info.UN_Tlv_list.T10A_token_A4)
pack.add_int(len(Cmd.encode('utf-8')) + 4)
pack.add_bin(Cmd.encode('utf-8'))
pack.add_Hex('00 00 00 08')
pack.add_bin(get_random_bin(4))
pack.add_int(len(info.device.IMEI) + 4)
pack.add_bin(info.device.IMEI)
pack.add_Hex('00 00 00 04')
pack.add_int(len(info.device.var) + 2, 2)
pack.add_bin(info.device.var)
    # added fields
_dict = {9: 1, 11: 2052, 12: '08c7f955ce81db0cca48bca510001751740b', 14: 0, 16: '', 18: 0, 19: 1, 20: 1, 21: 0,
23: {1: 'client_conn_seq', 2: '1691763088'},
24: {1: bytes.fromhex(
'0C0B2A074B060DD4D8EA163940EE2E66347704F2157B4A556F283E0671E4E33796ABEE4DF07BC5B8DB14DFA63CFAF1F66483C729DCC38EF9F7DE844FE9D9A30B'),
2: 'fDjMVp3pEqXt',
3: {2: 'V1_AND_SQ_8.9.71_4332_YYB_D',
3: 'X_LuHJtftAAPTwFk9SOftIHXmx2mCAL19e+MiYSrIXopqlBkJKOqC9fu2qT0j1lmIy/f7TAWpLSZclldF6w8JzWj6vYSqzd9s='}},
26: 100, 28: 2}
data_temp = ProtoBuf(_dict).toBuf()
pack.add_int(len(data_temp) + 4)
pack.add_bin(data_temp)
bytes_temp = pack.get_bytes()
pack.empty()
pack.add_int(len(bytes_temp) + 4)
pack.add_bin(bytes_temp)
data_head = pack.get_bytes()
pack.empty()
pack.add_int(len(data_02) + 4)
pack.add_bin(data_02)
data_02 = pack.get_bytes()
data = data_head + data_02
# 02
# 01 8C
data = TEA.encrypt(data, '00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00')
return data
def Pack_Head_login(info, Cmd, data_02):
    """Returns the leading header; the trailing part is written separately in each packing function
    01 8C
    1F 41
    08 12
    """
pack = pack_b()
pack.add_int(info.seq)
pack.add_int(info.device.app_id)
pack.add_int(info.device.app_id)
pack.add_Hex('01 00 00 00 00 00 00 00 00 00 01 00')
pack.add_body(info.UN_Tlv_list.T10A_token_A4, 4, add_len=4)
pack.add_body(Cmd, 4, add_len=4)
pack.add_Hex('00 00 00 08')
pack.add_bin(get_random_bin(4))
pack.add_body(info.device.IMEI, 4, add_len=4)
pack.add_Hex('00 00 00 04')
pack.add_body(info.device.var, 2, add_len=2)
bytes_temp = pack.get_bytes()
pack.empty()
pack.add_body(bytes_temp, 4, add_len=4)
data_head = pack.get_bytes()
pack.empty()
pack.add_body(data_02, 4, add_len=4)
data_02 = pack.get_bytes()
data = data_head + data_02
# 02
# 01 8C
data = TEA.encrypt(data, '00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00')
return data
def PackHeadNoToken(info, data, cmd, jce_cmd_head=None, jce_cmd=None):
    """Copied verbatim from the old source"""
    if jce_cmd_head is not None:
        # the JCE part
random_number = randint(110000000, 999999999)
jce = JceWriter()
jce.write_int32(3, 1)
jce.write_int32(0, 2)
jce.write_int32(0, 3)
jce.write_int64(random_number, 4)
jce.write_string(jce_cmd_head, 5)
jce.write_string(jce_cmd, 6)
jce.write_bytes(data, 7)
jce.write_int32(0, 8)
_data = jce.bytes()
        _data = _data + bytes.fromhex('98 0C A8 0C')  # two empty trailing fields
else:
_data = data
pack = pack_b()
pack.add_int(len(cmd) + 4)
pack.add_bin(bytes(cmd, 'utf-8'))
pack.add_Hex('00 00 00 08')
pack.add_bin(get_random_bin(4))
pack.add_Hex('00 00 00 04')
_data_temp = pack.get_bytes()
pack.empty()
pack.add_int(len(_data_temp) + 4)
pack.add_bin(_data_temp)
_data_temp = pack.get_bytes()
pack.empty()
pack.add_bin(_data_temp)
pack.add_int(len(_data) + 4)
pack.add_bin(_data)
_data = pack.get_bytes()
_data = TEA.encrypt(_data, info.share_key)
return _data
def Un_jce_Head(data):
jce = JceReader(data)
jce.read_int32(1)
jce.read_int32(2)
jce.read_int32(3)
jce.read_int64(4)
jce.read_string(5)
jce.read_string(6)
data = jce.read_any(7)
return data
def Un_jce_Head_2(data):
"""Map"""
jce = JceReader(data)
jce.skip(1) # ReadType
jce.skip(2) # ReadShort
jce.read_string(0)
jce.skip(1) # ReadType
jce.skip(2) # ReadShort
jce.read_string(0)
data = jce.read_any(1)
return data
|
AndN
|
/AndN-0.2.3.tar.gz/AndN-0.2.3/AndroidQQ/package/head.py
|
head.py
|
import json
import zlib
from Jce import JceInputStream, JceStruct
from AndroidQQ.package.head import *
# status/statistics service
def GetDevLoginInfo(info, **kwargs):
    """Fetch the device login-info list"""
jce = JceWriter()
jce.write_bytes(info.Guid, 0)
jce.write_string('com.tencent.mobileqq', 1)
jce.write_int32(1, 2)
jce.write_int32(0, 3)
jce.write_int32(0, 4)
jce.write_int32(20, 5)
jce.write_int32(kwargs.get('type', 3), 6)
_data = jce.bytes()
_data = JceWriter().write_jce_struct(_data, 0)
_data = JceWriter().write_map({'SvcReqGetDevLoginInfo': _data}, 0)
_data = PackHeadNoToken(info, _data, 'StatSvc.GetDevLoginInfo', 'StatSvc', 'SvcReqGetDevLoginInfo')
_data = Pack_(info, _data, Types=11, encryption=1, sso_seq=info.seq)
return _data
def GetDevLoginInfo_res(data):
if data[0] == 120:
data = zlib.decompress(data)
data = Un_jce_Head(data)
data = Un_jce_Head_2(data)
stream = JceInputStream(data)
s = JceStruct()
s.read_from(stream)
return s.to_json()
def DelDevLoginInfo(info, **kwargs):
    """Delete a device login record"""
key = kwargs.get('key', b'')
if isinstance(key, str):
key = bytes.fromhex(key)
_data = JceWriter().write_bytes(key, 0)
jce = JceWriter()
jce.write_bytes(info.Guid, 0)
jce.write_string('com.tencent.mobileqq', 1)
jce.write_jce_struct_list([_data], 2)
jce.write_int32(1, 3)
jce.write_int32(0, 4)
jce.write_int32(0, 5)
_data = jce.bytes()
_data = JceWriter().write_jce_struct(_data, 0)
_data = JceWriter().write_map({'SvcReqDelLoginInfo': _data}, 0)
_data = PackHeadNoToken(info, _data, 'StatSvc.DelDevLoginInfo', 'StatSvc', 'SvcReqDelLoginInfo')
_data = Pack_(info, _data, Types=11, encryption=1, sso_seq=info.seq)
return _data
def DelDevLoginInfo_res(data):
    """There seems to be no explicit response payload"""
data = Un_jce_Head(data)
data = Un_jce_Head_2(data)
stream = JceInputStream(data)
jce = JceStruct()
jce.read_from(stream)
return jce.to_json()
def register(info, **kwargs):
    """Login registration"""
    jce = JceWriter()
    jce.write_int64(int(info.uin), 0)
    jce.write_int32(kwargs.get('bid', 7), 1)  # login: 1 | 2 | 4 = 7, logout: 0
    jce.write_int32(0, 2)  # connection type
    jce.write_string('', 3)  # other
    jce.write_int32(kwargs.get('online_status', 11), 4)  # online status: online = 11, offline = 21
    jce.write_bool(False, 5)  # online push
    jce.write_bool(False, 6)  # is online
    jce.write_bool(False, 7)  # is showing online
    jce.write_bool(False, 8)  # kick PC
    jce.write_bool(False, 9)  # kick weak
    jce.write_int64(0, 10)  # timestamp
    jce.write_int64(25, 11)  # ios_version
    jce.write_int64(1, 12)  # network type
    jce.write_string('', 13)  # build version
    jce.write_int32(0, 14)
    jce.write_bytes(info.Guid, 16)
    jce.write_int16(2052, 17)  # locale ID
    jce.write_int32(0, 18)  # silent push
    jce.write_string('', 19)  # developer name
    jce.write_string('', 20)  # developer type
    jce.write_string('7.1.2', 21)  # os_version
    jce.write_int32(1, 22)  # open push
    jce.write_int64(41, 23)  # large seq
    jce.write_int64(0, 24)  # last watch start time
    jce.write_int64(0, 26)  # old SSO IP
    jce.write_int64(0, 27)  # new SSO IP
_data = jce.bytes()
jce = JceWriter()
jce.write_jce_struct(_data, 0)
_data = jce.bytes()
jce = JceWriter()
jce.write_map({'SvcReqRegister': _data}, 0)
_data = jce.bytes()
jce = JceWriter()
jce.write_int32(3, 1)
jce.write_int32(0, 2)
jce.write_int32(0, 3)
jce.write_int64(0, 4)
jce.write_string('PushService', 5)
jce.write_string('SvcReqRegister', 6)
jce.write_bytes(_data, 7)
jce.write_int32(0, 8)
_data = jce.bytes()
    _data = _data + bytes.fromhex('98 0C A8 0C')  # two empty trailing fields
_data = Pack_Head(info, _data, 'StatSvc.register')
_data = Pack_(info, _data, Types=10, encryption=1, token=True)
return _data
def register_res(data):
data = Un_jce_Head(data)
data = Un_jce_Head_2(data)
stream = JceInputStream(data)
s = JceStruct()
s.read_from(stream)
return s.to_json()
|
AndN
|
/AndN-0.2.3.tar.gz/AndN-0.2.3/AndroidQQ/package/StatSvc.py
|
StatSvc.py
|
from AndTools import pack_u
class Un_Tlv:
def __init__(self, data, info):
self.info = info
self.pack = pack_u(data)
self.auth_info = {}
self.handler_map = {
'011a': lambda _data: setattr(self.info, 'uin_name', _data[5:].decode('utf-8')),
'0120': lambda _data: setattr(self.info.cookies, 'skey', _data.decode('utf-8')),
'0103': lambda _data: setattr(self.info.cookies, 'client_key', _data.hex()),
'0004': lambda _data: setattr(self.info, 'uin', _data[4:].decode('utf-8')),
            '001e': lambda _data: setattr(self.info, 'key_rand', _data),  # required for the watch QR-scan response
'0018': lambda _data: setattr(self.info.UN_Tlv_list, 'T018', _data),
'0019': lambda _data: setattr(self.info.UN_Tlv_list, 'T019', _data),
'0065': lambda _data: setattr(self.info.UN_Tlv_list, 'T065', _data),
'0108': lambda _data: setattr(self.info.UN_Tlv_list, 'T108', _data),
'010e': lambda _data: setattr(self.info.UN_Tlv_list, 'T10E', _data),
'0134': lambda _data: setattr(self.info.UN_Tlv_list, 'T134', _data),
'0114': lambda _data: setattr(self.info.UN_Tlv_list, 'T114', _data),
'0133': lambda _data: setattr(self.info.UN_Tlv_list, 'T133', _data),
'0143': lambda _data: setattr(self.info.UN_Tlv_list, 'T143_token_A2', _data),
'010a': lambda _data: setattr(self.info.UN_Tlv_list, 'T10A_token_A4', _data),
'0003': lambda _data: self.auth_info.__setitem__('0003', _data.decode('utf-8')),
'0005': lambda _data: self.auth_info.__setitem__('0005', _data.decode('utf-8')),
'0036': lambda _data: self.auth_info.__setitem__('0036', _data.decode('utf-8')),
'0305': lambda _data: setattr(self.info, 'share_key', _data)
}
    def _content(self, head, data):
        handler = self.handler_map.get(head)
        if handler:
            handler(data)
        else:
            print('unparsed tlv', head, data.hex())
def get_auth_result(self):
return self.auth_info
def return_specified_content(self):
data = {
'uin_name': self.info.uin_name,
'guid': self.info.Guid.hex(),
'token_A2': self.info.UN_Tlv_list.T143_token_A2,
'token_A4': self.info.UN_Tlv_list.T10A_token_A4,
'client_key': self.info.cookies.client_key,
}
return data
def unpack(self):
count = self.pack.get_short()
for _ in range(count):
head = self.pack.get_bin(2).hex()
_len = self.pack.get_short()
_data = self.pack.get_bin(_len)
self._content(head, _data)
return self.info
|
AndN
|
/AndN-0.2.3.tar.gz/AndN-0.2.3/AndroidQQ/package/Tlv_res.py
|
Tlv_res.py
|
import time
from AndTools import pack_b, get_random_bin, get_md5, TEA
from AndroidQQ.proto import DeviceReport
def Tlv_head(head, data):
pack = pack_b()
pack.add_Hex(head)
pack.add_int(len(data), 2)
pack.add_bin(data)
return pack.get_bytes()
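`Tlv_head` emits the classic TLV (tag-length-value) layout used throughout this module: a 2-byte tag, a 2-byte big-endian length, then the raw value — the same layout `Un_Tlv.unpack` reads back in Tlv_res.py. A stdlib-only sketch of that encoding and its inverse (`struct` stands in for AndTools' `pack_b`/`pack_u`; the helper names are illustrative):

```python
import struct

def tlv(tag: int, value: bytes) -> bytes:
    # 2-byte tag, 2-byte big-endian length, then the value itself
    return struct.pack('>HH', tag, len(value)) + value

def parse_tlvs(data: bytes) -> dict:
    # walk the buffer, collecting {tag: value}
    out = {}
    offset = 0
    while offset + 4 <= len(data):
        tag, length = struct.unpack_from('>HH', data, offset)
        out[tag] = data[offset + 4:offset + 4 + length]
        offset += 4 + length
    return out

blob = tlv(0x0018, b'\x01\x02') + tlv(0x0106, b'secret')
assert parse_tlvs(blob) == {0x0018: b'\x01\x02', 0x0106: b'secret'}
```

The real packets additionally prefix a 2-byte TLV count (see `Un_Tlv.unpack`), which this sketch omits.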
class TLV:
def __init__(self, info):
self.pack = pack_b()
self.info = info
def T018(self):
self.pack.empty()
self.pack.add_Hex('00 01 00 00 06 00 00 00 00 10 00 00 00 00')
self.pack.add_int(int(self.info.uin))
self.pack.add_Hex('00 00 00 00')
return Tlv_head('00 18', self.pack.get_bytes())
def T001(self):
self.pack.empty()
self.pack.add_Hex('00 01')
self.pack.add_bin(get_random_bin(4))
self.pack.add_int(int(self.info.uin))
self.pack.add_int(int(time.time()))
self.pack.add_Hex('00 00 00 00 00 00')
return Tlv_head('00 01', self.pack.get_bytes())
def T142(self):
self.pack.empty()
self.pack.add_body(self.info.device.package_name, 4)
return Tlv_head('01 42', self.pack.get_bytes())
def T016(self):
self.pack.empty()
self.pack.add_Hex('00 00 00 05')
self.pack.add_Hex('00 00 00 10') # len
self.pack.add_Hex('20 04 1E EE 9B 6B E0 65 3A 35 6F 4F AC 89 92 6F 3F 1C EB 7E')
self.pack.add_body('com.tencent.qqlite', 2)
self.pack.add_body('2.1.7', 2)
self.pack.add_Hex('00 10') # len
self.pack.add_Hex('A6 B7 45 BF 24 A2 C2 77 52 77 16 F6 F3 6E B6 8D')
return Tlv_head('00 16', self.pack.get_bytes())
def T01B(self):
self.pack.empty()
self.pack.add_Hex('00 00 00 00 00 00 00 00 00 00 00 08 00 00 00 04 00 00 00 48 00 00 00 02 00 00 00 02 00 00 ')
return Tlv_head('00 1B', self.pack.get_bytes())
def T01D(self):
self.pack.empty()
self.pack.add_Hex('01 00 F7 FF 7C 00 00 00 00 00 00 00 00 00')
return Tlv_head('00 1D', self.pack.get_bytes())
def T01F(self):
self.pack.empty()
self.pack.add_Hex('01')
self.pack.add_Hex('00 07') # android len
self.pack.add_Hex('61 6E 64 72 6F 69 64') # android
self.pack.add_Hex('00 01')
self.pack.add_Hex('39')
self.pack.add_Hex('00 02')
self.pack.add_Hex('00 10') # len
self.pack.add_Hex('43 68 69 6E 61 20 4D 6F 62 69 6C 65 20 47 53 4D') # China Mobile GSM
self.pack.add_Hex('00 00 00 04')
self.pack.add_Hex('77 69 66 69') # wifi
return Tlv_head('00 1F', self.pack.get_bytes())
def T033(self):
self.pack.empty()
self.pack.add_bin(get_random_bin(16))
return Tlv_head('00 33', self.pack.get_bytes())
def T035(self):
self.pack.empty()
self.pack.add_Hex('00 00 00 73')
return Tlv_head('00 35', self.pack.get_bytes())
    def T106(self):
        self.pack.empty()
        if self.info.device.client_type == 'Watch':
            self.pack.add_bin(self.info.UN_Tlv_list.T018)
            # Token0106
            _data = self.pack.get_bytes()
        else:
            password_md5 = get_md5(self.info.password.encode('utf-8'))
            self.pack.add_Hex('00 04')
            self.pack.add_bin(get_random_bin(4))
            self.pack.add_Hex('00 00 00 13 00 00 00 10 00 00 00 00 00 00 00 00')
            self.pack.add_int(int(self.info.uin))
            self.pack.add_int(self.info.login_time)
            self.pack.add_Hex('00 00 00 00 01')
            self.pack.add_bin(bytes.fromhex(password_md5))
            self.pack.add_bin(self.info.key_rand)
            self.pack.add_bin(self.info.Guid)
            self.pack.add_int(self.info.device.app_id)
            self.pack.add_Hex('00 00 00 01')
            self.pack.add_body(self.info.uin, 2)
            self.pack.add_Hex('00 00')
            _data = self.pack.get_bytes()
            _key = get_md5(bytes.fromhex(password_md5) + bytes.fromhex('00 00 00 00') + self.info.key_rand)
            _data = TEA.encrypt(_data, _key)  # previously the encrypted result was discarded
        return Tlv_head('01 06', _data)
def T116(self):
self.pack.empty()
if self.info.device.client_type == 'Watch':
self.pack.add_Hex('00 00 F7 FF 7C 00 01 04 00 00')
else:
self.pack.add_Hex('00 0A F7 FF 7C 00 01 04 00 01 5F 5E 10 E2')
return Tlv_head('01 16', self.pack.get_bytes())
def T100(self):
client_type_map = {
'Watch': ('00 00 00 05', '02 04 10 C0'),
'Other': ('00 00 00 13', '02 14 10 E0')
}
_H = client_type_map.get(self.info.device.client_type, client_type_map['Other'])
self.pack.empty()
self.pack.add_Hex('00 01')
self.pack.add_Hex(_H[0])
self.pack.add_Hex('00 00 00 10')
self.pack.add_int(self.info.device.app_id)
self.pack.add_Hex('00 00 00 00')
self.pack.add_Hex(_H[1])
return Tlv_head('01 00', self.pack.get_bytes())
def T107(self):
self.pack.empty()
self.pack.add_Hex('00 00 00 00 00 01 ')
return Tlv_head('01 07', self.pack.get_bytes())
def T109(self):
self.pack.empty()
self.pack.add_Hex(get_md5(self.info.device.android_id))
        # TODO: not yet confirmed
return Tlv_head('01 09', self.pack.get_bytes())
def T124(self):
        self.pack.empty()
        temp = '39'  # TODO: unsure what this is; compare against regular Android later to confirm
self.pack.add_body(self.info.device.name, 2)
self.pack.add_body(temp, 2, True)
self.pack.add_int(2, 2)
self.pack.add_body(self.info.device.internet, 2)
self.pack.add_body(self.info.device.internet_type, 4)
print(self.pack.get_bytes().hex())
return Tlv_head('01 24', self.pack.get_bytes())
def T128(self):
self.pack.empty()
self.pack.add_Hex('00 00 01 01 00 11 00 00 00')
self.pack.add_body(self.info.device.model, 2)
self.pack.add_body(self.info.Guid, 2)
self.pack.add_body(self.info.device.brand, 2)
return Tlv_head('01 28', self.pack.get_bytes())
def T16E(self):
self.pack.empty()
self.pack.add_body(self.info.device.model, 2)
return Tlv_head('01 6E', self.pack.get_bytes())
def T52D(self):
device_info = DeviceReport(
bootloader='unknown',
proc_version='Linux version 4.4.146 (build@ubuntu) (gcc version 4.8 (GCC) ) #1 SMP PREEMPT Thu Sep 1 '
'18:26:33 CST 2022',
codename='REL',
incremental='G9650ZHU2ARC6',
fingerprint='samsung/star2qltezh/star2qltechn:9/PQ3B.190801.002/G9650ZHU2ARC6:user/release-keys',
boot_id=self.info.device.boot_id,
android_id=self.info.device.android_id.hex(),
base_band='',
inner_version='G9650ZHU2ARC6',
)
return Tlv_head('05 2D', device_info.SerializeToString())
def T144(self):
pack = pack_b()
if self.info.device.client_type == 'Watch':
methods = [  # a list keeps the sub-TLV order deterministic (a set would not)
self.T109,
self.T124,
self.T128,
self.T16E,
]
else:
methods = [
self.T109,
self.T52D,
self.T124,
self.T128,
self.T16E,
]
pack.add_int(len(methods), 2)  # number of sub-TLVs
# call each method in turn and append its result to the pack
for method in methods:
pack.add_bin(method())
_data = pack.get_bytes()
_data = TEA.encrypt(_data, self.info.key_rand)
return Tlv_head('01 44', _data)
def T145(self):
"""GUid"""
self.pack.empty()
self.pack.add_bin(self.info.Guid)
_data = self.pack.get_bytes()
return Tlv_head('01 45', _data)
def T147(self):
self.pack.empty()
self.pack.add_Hex('00 00 00 10')
self.pack.add_body(self.info.device.version, 2)
self.pack.add_body(self.info.device.Sig, 2)
return Tlv_head('01 47', self.pack.get_bytes())
def T511(self):
"""
office.qq.com
qun.qq.comgamecenter.qq.comdocs.qq.commail.qq.com ti.qq.com
vip.qq.com
tenpay.comqqweb.qq.comqzone.qq.com
mma.qq.comgame.qq.comopenmobile.qq.comconnect.qq.com"""
self.pack.empty()
self.pack.add_Hex(
'00 0E 01 00 0D 6F 66 66 69 63 65 2E 71 71 2E 63 6F 6D 01 00 0A 71 75 6E 2E 71 71 2E 63 6F 6D 01 00 11 67 61 6D 65 63 65 6E 74 65 72 2E 71 71 2E 63 6F 6D 01 00 0B 64 6F 63 73 2E 71 71 2E 63 6F 6D 01 00 0B 6D 61 69 6C 2E 71 71 2E 63 6F 6D 01 00 09 74 69 2E 71 71 2E 63 6F 6D 01 00 0A 76 69 70 2E 71 71 2E 63 6F 6D 01 00 0A 74 65 6E 70 61 79 2E 63 6F 6D 01 00 0C 71 71 77 65 62 2E 71 71 2E 63 6F 6D 01 00 0C 71 7A 6F 6E 65 2E 71 71 2E 63 6F 6D 01 00 0A 6D 6D 61 2E 71 71 2E 63 6F 6D 01 00 0B 67 61 6D 65 2E 71 71 2E 63 6F 6D 01 00 11 6F 70 65 6E 6D 6F 62 69 6C 65 2E 71 71 2E 63 6F 6D 01 00 0E 63 6F 6E 6E 65 63 74 2E 71 71 2E 63 6F 6D')
return Tlv_head('05 11', self.pack.get_bytes())
def T16A(self):
self.pack.empty()
self.pack.add_bin(self.info.UN_Tlv_list.T019)
return Tlv_head('01 6A', self.pack.get_bytes())
def T154(self):
self.pack.empty()
self.pack.add_int(self.info.seq)
return Tlv_head('01 54', self.pack.get_bytes())
def T141(self):
self.pack.empty()
self.pack.add_Hex('00 01')
self.pack.add_body(self.info.device.internet, 2)
self.pack.add_Hex('00 02')
self.pack.add_body(self.info.device.internet_type, 2)
return Tlv_head('01 41', self.pack.get_bytes())
def T008(self):
self.pack.empty()
self.pack.add_Hex('00 00 00 00 08 04 00 00')
return Tlv_head('00 08', self.pack.get_bytes())
def T187(self):
self.pack.empty()
self.pack.add_body(self.info.device.Mac_bytes, 2)
return Tlv_head('01 87', self.pack.get_bytes())
def T188(self):
_app_id = get_md5(str(self.info.device.app_id).encode())
self.pack.empty()
self.pack.add_body(_app_id, 2)
return Tlv_head('01 88', self.pack.get_bytes())
def T194(self):
_IMEI = get_md5(self.info.device.IMEI)
self.pack.empty()
self.pack.add_body(_IMEI, 2)
return Tlv_head('01 94', self.pack.get_bytes())
def T191(self):
self.pack.empty()
self.pack.add_Hex('00')
return Tlv_head('01 91', self.pack.get_bytes())
def T202(self):
self.pack.empty()
self.pack.add_body(self.info.device.Bssid_bytes, 2)
self.pack.add_body('<unknown ssid>', 2)
return Tlv_head('02 02', self.pack.get_bytes())
def T177(self):
self.pack.empty()
self.pack.add_Hex('01')
self.pack.add_int(self.info.device.build_time)
self.pack.add_body(self.info.device.sdk_version)
return Tlv_head('01 77', self.pack.get_bytes())
def T516(self):
self.pack.empty()
self.pack.add_Hex('00 00 00 00')
return Tlv_head('05 16', self.pack.get_bytes())
def T521(self):
self.pack.empty()
self.pack.add_Hex('00 00 00 73 00 00 ')
return Tlv_head('05 21', self.pack.get_bytes())
def T525(self):
self.pack.empty()
self.pack.add_Hex('00 00 00 00 00 00')
return Tlv_head('05 25', self.pack.get_bytes())
def T318(self):
self.pack.empty()
self.pack.add_bin(self.info.UN_Tlv_list.T065)
return Tlv_head('03 18', self.pack.get_bytes())
def T544(self):
self.pack.empty()
self.pack.add_Hex(
'686568610000000101000000000000000101000504000000005d7f198100000002000000a60001000800000189c2162c3b0002000a333871733d453974467a00030004010000010005000401000001000400040000000000060004000000010007000401000005000800040100000600090020f88af80ad1b5201c476268acf5a4fce85e17fb856ebb833de816e013f32eb89c000a00105579f2d9bd726b85e21fda3ae5c7688d000b0010478ebf77c7c1cd7bfc78055dd5d0b092000c000401000001000d000400000002')
return Tlv_head('05 44', self.pack.get_bytes())
|
AndN
|
/AndN-0.2.3.tar.gz/AndN-0.2.3/AndroidQQ/package/Tlv.py
|
Tlv.py
|
## Usage
=> Create folders named `plugins`, `addons`, `assistant` and `resources`.<br/>
=> Add your plugins to the `plugins` folder and the others accordingly.<br/>
=> Create a `.env` file with `API_ID`, `API_HASH`, `SESSION`,
`BOT_TOKEN` and `BOT_USERNAME` as mandatory environment variables. Check
[`.env.sample`](https://github.com/TeamExtremePro/ExtremeProUserbot/.env.sample) for more details.<br/>
=> Run `python -m Extre` to start the bot.<br/>
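A minimal `.env` could look like the sketch below. The variable names come from the list above; every value shown is a placeholder you must replace with your own credentials (`API_ID`/`API_HASH` from my.telegram.org, `BOT_TOKEN` from @BotFather):

```shell
# .env — placeholder values only, substitute your own credentials
API_ID=123456                     # numeric app id from my.telegram.org
API_HASH=0123456789abcdef         # app hash from my.telegram.org
SESSION=your-string-session-here  # your generated string session
BOT_TOKEN=123456:ABC-DEF_example  # assistant bot token from @BotFather
BOT_USERNAME=YourAssistantBot     # assistant bot's username, without the @
```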
### Creating plugins
To create a plugin that works everywhere
```python
@Andencento.on(
pattern="start",
)
async def _(e):
await eor(e, "Andencento Started")
```
To create a plugin that works only in groups
```python
@Andencento.on(
pattern="start",
groups_only=True,
)
async def _(e):
await eor(e, "Andencento Started")
```
Assistant Plugins 👇
```python
@asstcmd.on("start")
async def _(e):
await e.reply("Assistant Started")
```
|
Andencento
|
/Andencento-0.24.tar.gz/Andencento-0.24/README.md
|
README.md
|
import os
from telethon.tl.types import ChatBannedRights
class Var(object):
APP_ID = int(os.environ.get("APP_ID", 6))
# 6 is a placeholder
API_HASH = os.environ.get("API_HASH", "eb06d4abfb49dc3eeb1aeb98ae0f581e")
STRING_SESSION = os.environ.get("ANDENCENTO_SESSION", None)
DB_URI = os.environ.get("DATABASE_URL", None)
TEMP_DOWNLOAD_DIRECTORY = os.environ.get("TEMP_DOWNLOAD_DIRECTORY", None)
LOGGER = True
GITHUB_ACCESS_TOKEN = os.environ.get("GITHUB_ACCESS_TOKEN", None)
GIT_REPO_NAME = os.environ.get("GIT_REPO_NAME", None)
# Here for later purposes
SUDO_USERS = set(
int(x) for x in os.environ.get(
"SUDO_USERS",
"1097131648").split())
WHITELIST_USERS = set(
int(x) for x in os.environ.get(
"WHITELIST_USERS",
"832241419").split())
BLACKLIST_USERS = set(
int(x) for x in os.environ.get(
"BLACKLIST_USERS", "").split())
DEVLOPERS = set(
int(x) for x in os.environ.get(
"DEVLOPERS",
"953414679").split())
OWNER_ID = set(
int(x) for x in os.environ.get(
"OWNER_ID",
"719195224").split())
SUPPORT_USERS = set(
int(x) for x in os.environ.get(
"SUPPORT_USERS", "").split())
# custom vars
ALIVE_PIC = os.environ.get("ALIVE_PIC", None)
CUSTOM_ALIVE = os.environ.get("CUSTOM_ALIVE", None)
CUSTOM_ALIVE_EMOJI = os.environ.get("CUSTOM_ALIVE_EMOJI", None)
CUSTOM_AFK = os.environ.get("CUSTOM_AFK", None)
CUSTOM_STICKER_PACK_NAME = os.environ.get("CUSTOM_STICKER_PACK_NAME", None)
BOT_PIC = os.environ.get("BOT_PIC", None)
LYDIA_API_KEY = os.environ.get("LYDIA_API_KEY", None)
PMBOT_START_MSSG = os.environ.get("PMBOT_START_MSSG", None)
LESS_SPAMMY = os.environ.get("LESS_SPAMMY", None)
HEROKU_API_KEY = os.environ.get("HEROKU_API_KEY", None)
HEROKU_APP_NAME = os.environ.get("HEROKU_APP_NAME", None)
TG_BOT_TOKEN_BF_HER = os.environ.get("TG_BOT_TOKEN_BF_HER", None)
TG_BOT_USER_NAME_BF_HER = os.environ.get("TG_BOT_USER_NAME_BF_HER", None)
NO_SONGS = bool(os.environ.get("NO_SONGS", False))
DOWNLOAD_PFP_URL_CLOCK = os.environ.get("DOWNLOAD_PFP_URL_CLOCK", None)
MAX_FLOOD_IN_P_M_s = os.environ.get("MAX_FLOOD_IN_P_M_s", "3")
G_DRIVE_CLIENT_ID = os.environ.get("G_DRIVE_CLIENT_ID", None)
G_DRIVE_CLIENT_SECRET = os.environ.get("G_DRIVE_CLIENT_SECRET", None)
GDRIVE_FOLDER_ID = os.environ.get("GDRIVE_FOLDER_ID", "root")
AUTH_TOKEN_DATA = os.environ.get("AUTH_TOKEN_DATA", None)
MONGO_DB_URI = os.environ.get("MONGO_DB_URI", None)
PMSECURITY = os.environ.get("PMSECURITY", "ON")
CMD_HNDLR = os.environ.get("CMD_HNDLR", r"\.")
SUDO_HNDLR = os.environ.get("SUDO_HNDLR", r"\!")
# for autopic
AUTOPIC_TEXT = os.environ.get(
"AUTOPIC_TEXT",
"Life Is too Short.\n And so is your TG account.")
AUTO_PIC_FONT = os.environ.get("AUTOPIC_FONT", "DejaVuSans.ttf")
AUTOPIC_FONT_COLOUR = os.environ.get("AUTOPIC_FONT_COLOUR", None)
if AUTH_TOKEN_DATA is not None:
os.makedirs(TEMP_DOWNLOAD_DIRECTORY, exist_ok=True)
with open(TEMP_DOWNLOAD_DIRECTORY + "auth_token.txt", "w") as t_file:
t_file.write(AUTH_TOKEN_DATA)
LOAD_MYBOT = os.environ.get("LOAD_MYBOT", "True")
PRIVATE_GROUP_ID = os.environ.get("PRIVATE_GROUP_ID", None)
if PRIVATE_GROUP_ID is not None:
try:
PRIVATE_GROUP_ID = int(PRIVATE_GROUP_ID)
except ValueError:
raise ValueError(
"Invalid Private Group ID. Make sure your ID is starts with -100 and make sure that it is only numbers.")
class Development(Var):
LOGGER = True
# Here for later purposes
ENV = bool(os.environ.get("ENV", False))
if ENV:
import os
class Config(object):
LOGGER = True
# Get this value from my.telegram.org! Please do not steal
LOCATION = os.environ.get("LOCATION", None)
OPEN_WEATHER_MAP_APPID = os.environ.get("OPEN_WEATHER_MAP_APPID", None)
# Get your own ACCESS_KEY from http://api.screenshotlayer.com/api/capture
SCREEN_SHOT_LAYER_ACCESS_KEY = os.environ.get("SCREEN_SHOT_LAYER_ACCESS_KEY", None)
# Send .get_id in any group to fill this value.
PRIVATE_GROUP_BOT_API_ID = int(os.environ.get("PRIVATE_GROUP_BOT_API_ID", -100123456789))
# Send .get_id in any channel to fill this value. ReQuired for @Manuel15 inspiration to work!
PRIVATE_CHANNEL_BOT_API_ID = int(os.environ.get("PRIVATE_CHANNEL_BOT_API_ID", -100123456789))
# This is required for the plugins involving the file system.
TMP_DOWNLOAD_DIRECTORY = os.environ.get("TMP_DOWNLOAD_DIRECTORY", "./DOWNLOADS/")
# This is required for the speech to text module. Get your USERNAME from https://console.bluemix.net/docs/services/speech-to-text/getting-started.html
IBM_WATSON_CRED_URL = os.environ.get("IBM_WATSON_CRED_URL", None)
IBM_WATSON_CRED_PASSWORD = os.environ.get("IBM_WATSON_CRED_PASSWORD", None)
# This is required for the hash to torrent file functionality to work.
HASH_TO_TORRENT_API = os.environ.get("HASH_TO_TORRENT_API", "https://example.com/torrent/{}")
# Get this value from my.telegram.org! Please do not steal
LOCATION = os.environ.get("LOCATION", None)
OPEN_WEATHER_MAP_APPID = os.environ.get("OPEN_WEATHER_MAP_APPID", None)
# Get your own ACCESS_KEY from http://api.screenshotlayer.com/api/capture
SCREEN_SHOT_LAYER_ACCESS_KEY = os.environ.get("SCREEN_SHOT_LAYER_ACCESS_KEY", None)
# Send .get_id in any group to fill this value.
SUDO_COMMAND_HAND_LER = os.environ.get("SUDO_COMMAND_HAND_LER", None)
# This is required for the plugins involving the file system.
TMP_DOWNLOAD_DIRECTORY = os.environ.get("TMP_DOWNLOAD_DIRECTORY", "./DOWNLOADS/")
# This is required for the speech to text module. Get your USERNAME from https://console.bluemix.net/docs/services/speech-to-text/getting-started.html
IBM_WATSON_CRED_URL = os.environ.get("IBM_WATSON_CRED_URL", None)
IBM_WATSON_CRED_PASSWORD = os.environ.get("IBM_WATSON_CRED_PASSWORD", None)
# This is required for the hash to torrent file functionality to work.
HASH_TO_TORRENT_API = os.environ.get("HASH_TO_TORRENT_API", "https://example.com/torrent/{}")
# This is required for the @telegraph functionality.
TELEGRAPH_SHORT_NAME = os.environ.get("TELEGRAPH_SHORT_NAME", "IndianBot")
# Get a Free API Key from OCR.Space
OCR_SPACE_API_KEY = os.environ.get("OCR_SPACE_API_KEY", None)
# Send .get_id in any group with all your administration Andencentos (added)
G_BAN_LOGGER_GROUP = int(os.environ.get("G_BAN_LOGGER_GROUP", -1001198699233))
# TG API limit. An album can have at most 10 media!
GOOGLE_SEARCH_COUNT_LIMIT = int(os.environ.get("GOOGLE_SEARCH_COUNT_LIMIT", 9))
TG_GLOBAL_ALBUM_LIMIT = int(os.environ.get("TG_GLOBAL_ALBUM_LIMIT", 9))
# Telegram BOT Token from @BotFather
TG_BOT_TOKEN_BF_HER = os.environ.get("TG_BOT_TOKEN_BF_HER", None)
TG_BOT_USER_NAME_BF_HER = os.environ.get("TG_BOT_USER_NAME_BF_HER", None)
# spotify
SPOTIFY_USERNAME = os.environ.get("SPOTIFY_USERNAME", None)
SPOTIFY_PASS = os.environ.get("SPOTIFY_PASS", None)
SPOTIFY_BIO_PREFIX = os.environ.get("SPOTIFY_BIO_PREFIX", None)
#log
DUAL_LOG = os.environ.get("DUAL_LOG", None)
# DO NOT EDIT BELOW THIS LINE IF YOU DO NOT KNOW WHAT YOU ARE DOING
# TG API limit. A message can have maximum 4096 characters!
MAX_MESSAGE_SIZE_LIMIT = 4095
# set blacklist_chats where you do not want userbot's features
UB_BLACK_LIST_CHAT = set(int(x) for x in os.environ.get("UB_BLACK_LIST_CHAT", "").split())
# maximum number of messages for antiflood
MAX_ANTI_FLOOD_MESSAGES = 10
# warn mode for anti flood
ANTI_FLOOD_WARN_MODE = ChatBannedRights(
until_date=None,
view_messages=None,
send_messages=True
)
# chat ids or usernames, it is recommended to use chat ids,
# providing usernames means an additional overhead for the user
CMD_HNDLR = os.environ.get("CMD_HNDLR", r"\!")
CHATS_TO_MONITOR_FOR_ANTI_FLOOD = []
# Get your own API key from https://www.remove.bg/ or
# feel free to use http://telegram.dog/Remove_BGBot
REM_BG_API_KEY = os.environ.get("REM_BG_API_KEY", None)
# Set to True if you want to block users that are spamming your PMs.
SLAP_USERNAME = os.environ.get("SLAP_USERNAME", None)
GITHUB_ACCESS_TOKEN = os.environ.get("GITHUB_ACCESS_TOKEN", None)
GIT_REPO_NAME = os.environ.get("GIT_REPO_NAME", None)
NO_LOG_P_M_S = bool(os.environ.get("NO_LOG_P_M_S", True))
# define "spam" in PMs
NO_SONGS = bool(os.environ.get("NO_SONGS", False))
MAX_FLOOD_IN_P_M_s = int(os.environ.get("MAX_FLOOD_IN_P_M_s", 3))
#pm log
PM_LOG_GRP_ID = os.environ.get("PM_LOG_GRP_ID", None)
# set to True if you want to log PMs to your PM_LOGGR_BOT_API_ID
NC_LOG_P_M_S = bool(os.environ.get("NC_LOG_P_M_S", True))
#heroku
HEROKU_APP_NAME = os.environ.get("HEROKU_APP_NAME", None)
HEROKU_API_KEY = os.environ.get("HEROKU_API_KEY", None)
# send .get_id in any channel to forward all your NEW PMs to this group
PRIVATE_GROUP_BOT_API_ID = os.environ.get("PRIVATE_GROUP_BOT_API_ID", None)
if PRIVATE_GROUP_BOT_API_ID:
PRIVATE_GROUP_BOT_API_ID = int(PRIVATE_GROUP_BOT_API_ID)
# send .get_id in your private channel to forward all your Private messages
PM_LOGGR_BOT_API_ID = os.environ.get("PM_LOGGR_BOT_API_ID", None)
if PM_LOGGR_BOT_API_ID:
PM_LOGGR_BOT_API_ID = int(PM_LOGGR_BOT_API_ID)
# in pm permit pic
PMPERMIT_PIC = os.environ.get("PMPERMIT_PIC", None)
CUSTOM_PMPERMIT_TEXT = os.environ.get("CUSTOM_PMPERMIT_TEXT", None)
# For Databases
# can be None in which case plugins requiring
# DataBase would not work
DB_URI = os.environ.get("DATABASE_URL", None)
# number of rows of buttons to be displayed in .helpme command
NO_OF_BUTTONS_DISPLAYED_IN_H_ME_CMD = int(os.environ.get("NO_OF_BUTTONS_DISPLAYED_IN_H_ME_CMD", 7))
#open load
OPEN_LOAD_LOGIN = os.environ.get("OPEN_LOAD_LOGIN", None)
OPEN_LOAD_KEY = os.environ.get("OPEN_LOAD_KEY", None)
# number of columns of buttons to be displayed in .help command
NO_OF_COLOUMS_DISPLAYED_IN_H_ME_CMD = int(os.environ.get("NO_OF_COLOUMS_DISPLAYED_IN_H_ME_CMD", 3))
# emoji to be displayed in help .help
EMOJI_TO_DISPLAY_IN_HELP = os.environ.get("EMOJI_TO_DISPLAY_IN_HELP", "🔰")
# specify command handler that should be used for the plugins
# this should be a valid "regex" pattern
COMMAND_HAND_LER = os.environ.get("COMMAND_HAND_LER", r"\.")
# specify list of users allowed to use Andencento
# WARNING: be careful who you grant access to your Andencento.
# malicious users could do ".exec rm -rf /*"
SUDO_USERS = set(int(x) for x in os.environ.get("SUDO_USERS", "").split())
# VeryStream only supports video formats
VERY_STREAM_LOGIN = os.environ.get("VERY_STREAM_LOGIN", None)
VERY_STREAM_KEY = os.environ.get("VERY_STREAM_KEY", None)
GROUP_REG_SED_EX_BOT_S = os.environ.get("GROUP_REG_SED_EX_BOT_S", r"(regex|moku|BananaButler_|rgx|l4mR)Andencento")
TEMP_DIR = os.environ.get("TEMP_DIR", None)
CHANNEL_ID = int(os.environ.get("CHANNEL_ID", -100))
#Google Chrome Stuff
CHROME_DRIVER = os.environ.get("CHROME_DRIVER", "/app/.chromedriver/bin/chromedriver")
GOOGLE_CHROME_BIN = os.environ.get("GOOGLE_CHROME_BIN", "/app/.apt/usr/bin/google-chrome")
# Google Drive ()
G_DRIVE_CLIENT_ID = os.environ.get("G_DRIVE_CLIENT_ID", None)
G_DRIVE_CLIENT_SECRET = os.environ.get("G_DRIVE_CLIENT_SECRET", None)
GDRIVE_FOLDER_ID = os.environ.get("GDRIVE_FOLDER_ID", None)
AUTH_TOKEN_DATA = os.environ.get("AUTH_TOKEN_DATA", None)
if AUTH_TOKEN_DATA is not None:
os.makedirs(TMP_DOWNLOAD_DIRECTORY, exist_ok=True)
with open(TMP_DOWNLOAD_DIRECTORY + "auth_token.txt", "w") as t_file:
t_file.write(AUTH_TOKEN_DATA)
YOUTUBE_API_KEY = os.environ.get("YOUTUBE_API_KEY", None)
GDRIVE_FOLDER_ID = os.environ.get("GDRIVE_FOLDER_ID", None)
#MongoDB
MONGO_URI = os.environ.get("MONGO_URI", None)
#alive
ALIVE_PHOTTO = os.environ.get("ALIVE_PIC", None)
ALIVE_MSG = os.environ.get("ALIVE_MSG", None)
#auto bio
BIO_MSG = os.environ.get("ALIVE_MSG", None)
#Lydia API
LYDIA_API = os.environ.get("LYDIA_API",None)
PLUGIN_CHANNEL = os.environ.get("PLUGIN_CHANNEL", None)
PM_DATA = os.environ.get("PM_DATA", "ENABLE")
HELP_INLINETYPE = os.environ.get("HELP_INLINETYPE", None)
# This is required for the @telegraph functionality.
TELEGRAPH_SHORT_NAME = os.environ.get("TELEGRAPH_SHORT_NAME", "X-Tra-Telegram")
# Get a Free API Key from OCR.Space
OCR_SPACE_API_KEY = os.environ.get("OCR_SPACE_API_KEY", None)
# Send .get_id in any group with all your administration Andencentos (added)
G_BAN_LOGGER_GROUP = int(os.environ.get("G_BAN_LOGGER_GROUP", -100123456789))
#Google Chrome Stuff
CHROME_DRIVER = os.environ.get("CHROME_DRIVER", "/app/.chromedriver/bin/chromedriver")
GOOGLE_CHROME_BIN = os.environ.get("GOOGLE_CHROME_BIN", "/app/.apt/usr/bin/google-chrome")
# TG API limit. An album can have at most 10 media!
GOOGLE_SEARCH_COUNT_LIMIT = int(os.environ.get("GOOGLE_SEARCH_COUNT_LIMIT", 9))
TG_GLOBAL_ALBUM_LIMIT = int(os.environ.get("TG_GLOBAL_ALBUM_LIMIT", 9))
# Telegram BOT Token from @BotFather
TG_BOT_TOKEN_BF_HER = os.environ.get("BOT_TOKEN", None)
TG_BOT_USER_NAME_BF_HER = os.environ.get("BOT_USERNAME", None)
#
# number of rows of buttons to be displayed in .helpme command
NO_OF_BUTTONS_DISPLAYED_IN_H_ME_CMD = int(os.environ.get("NO_OF_BUTTONS_DISPLAYED_IN_H_ME_CMD", 7))
#
NO_SONGS = bool(os.environ.get("NO_SONGS", False))
#
# DO NOT EDIT BELOW THIS LINE IF YOU DO NOT KNOW WHAT YOU ARE DOING
# TG API limit. A message can have maximum 4096 characters!
MAX_MESSAGE_SIZE_LIMIT = 4095
# set blacklist_chats where you do not want DYNAMIC's features
UB_BLACK_LIST_CHAT = set(int(x) for x in os.environ.get("UB_BLACK_LIST_CHAT", "").split())
# maximum number of messages for antiflood
MAX_ANTI_FLOOD_MESSAGES = 10
TG_BOT_USER_NAME_BF_HER = os.environ.get(
"TG_BOT_USER_NAME_BF_HER", None)
# warn mode for anti flood
ANTI_FLOOD_WARN_MODE = ChatBannedRights(
until_date=None,
view_messages=None,
send_messages=True
)
# chat ids or usernames, it is recommended to use chat ids,
# providing usernames means an additional overhead for the user
CHATS_TO_MONITOR_FOR_ANTI_FLOOD = []
# Get your own API key from https://www.remove.bg/ or
# feel free to use http://telegram.dog/Remove_BGBot
REM_BG_API_KEY = os.environ.get("REM_BG_API_KEY", None)
PMSECURITY = os.environ.get("PMSECURITY", "ON")
# Set to True if you want to block users that are spamming your PMs.
SLAP_USERNAME = os.environ.get("SLAP_USERNAME", None)
GITHUB_ACCESS_TOKEN = os.environ.get("GITHUB_ACCESS_TOKEN", None)
GIT_REPO_NAME = os.environ.get("GIT_REPO_NAME", None)
NO_P_M_SPAM = bool(os.environ.get("NO_P_M_SPAM", False))
# define "spam" in PMs
MAX_FLOOD_IN_P_M_s = int(os.environ.get("MAX_FLOOD_IN_P_M_s", 3))
# set to True if you want to log PMs to your PM_LOGGR_BOT_API_ID
NC_LOG_P_M_S = bool(os.environ.get("NC_LOG_P_M_S", False))
# send .get_id in any channel to forward all your NEW PMs to this group
PM_LOGGR_BOT_API_ID = os.environ.get("PM_LOGGR_BOT_API_ID", None)
if PM_LOGGR_BOT_API_ID:
PM_LOGGR_BOT_API_ID = int(PM_LOGGR_BOT_API_ID)
# For Databases
# can be None in which case plugins requiring
# DataBase would not work
DB_URI = os.environ.get("DATABASE_URL", None)
# number of rows of buttons to be displayed in .helpme command
# PMSECURITY
MAX_SPAM = int(os.environ.get("MAX_SPAM", 3))
NO_OF_BUTTONS_DISPLAYED_IN_H_ME_CMD = int(os.environ.get("NO_OF_BUTTONS_DISPLAYED_IN_H_ME_CMD", 5))
# specify command handler that should be used for the plugins
# this should be a valid "regex" pattern
COMMAND_HAND_LER = os.environ.get("HANDLER", r"\.")
# specify list of users allowed to use Andencento
# WARNING: be careful who you grant access to your Andencento.
# malicious users could do ".exec rm -rf /*"
SUDO_COMMAND_HAND_LER = os.environ.get("SUDO_COMMAND_HAND_LER", r"\.")
# set this with required folder path to act as download folder
SUDO_USERS = set(int(x) for x in os.environ.get("SUDO_USERS", "967883138").split())
# VeryStream only supports video formats
VERY_STREAM_LOGIN = os.environ.get("VERY_STREAM_LOGIN", None)
VERY_STREAM_KEY = os.environ.get("VERY_STREAM_KEY", None)
GROUP_REG_SED_EX_BOT_S = os.environ.get("GROUP_REG_SED_EX_BOT_S", r"(regex|moku|BananaButler_|rgx|l4mR)Andencento")
TEMP_DIR = os.environ.get("TEMP_DIR", None)
CHANNEL_ID = int(os.environ.get("CHANNEL_ID", -100))
#Google Chrome Stuff
CHROME_DRIVER = os.environ.get("CHROME_DRIVER", "/app/.chromedriver/bin/chromedriver")
GOOGLE_CHROME_BIN = os.environ.get("GOOGLE_CHROME_BIN", "/app/.apt/usr/bin/google-chrome")
#heroku
HEROKU_APP_NAME = os.environ.get("HEROKU_APP_NAME", None)
HEROKU_API_KEY = os.environ.get("HEROKU_API_KEY", None)
# number of rows of buttons to be displayed in .helpme command
NO_OF_BUTTONS_DISPLAYED_IN_H_ME_CMD = int(os.environ.get("NO_OF_BUTTONS_DISPLAYED_IN_H_ME_CMD", 7))
# number of columns of buttons to be displayed in .help command
NO_OF_COLOUMS_DISPLAYED_IN_H_ME_CMD = int(os.environ.get("NO_OF_COLOUMS_DISPLAYED_IN_H_ME_CMD", 3))
# specify command handler that should be used for the plugins
# this should be a valid "regex" pattern
COMMAND_HAND_LER = os.environ.get("COMMAND_HAND_LER", r"\.")
# specify list of users allowed to use Andencento
# WARNING: be careful who you grant access to your Andencento.
# malicious users could do ".exec rm -rf /*"
SUDO_USERS = set(int(x) for x in os.environ.get("SUDO_USERS", "").split())
# PM DATA
PM_DATA = os.environ.get("PM_DATA", "ENABLE")
# Google Drive ()
G_DRIVE_CLIENT_ID = os.environ.get("G_DRIVE_CLIENT_ID", None)
G_DRIVE_CLIENT_SECRET = os.environ.get("G_DRIVE_CLIENT_SECRET", None)
GDRIVE_FOLDER_ID = os.environ.get("GDRIVE_FOLDER_ID", None)
AUTH_TOKEN_DATA = os.environ.get("AUTH_TOKEN_DATA", None)
if AUTH_TOKEN_DATA is not None:
os.makedirs(TMP_DOWNLOAD_DIRECTORY, exist_ok=True)
with open(TMP_DOWNLOAD_DIRECTORY + "auth_token.txt", "w") as t_file:
t_file.write(AUTH_TOKEN_DATA)
YOUTUBE_API_KEY = os.environ.get("YOUTUBE_API_KEY", None)
GDRIVE_FOLDER_ID = os.environ.get("GDRIVE_FOLDER_ID", None)
#MongoDB
MONGO_URI = os.environ.get("MONGO_URI", None)
#Lydia API
LYDIA_API = os.environ.get("LYDIA_API",None)
LOCATION = os.environ.get("LOCATION", None)
OPEN_WEATHER_MAP_APPID = os.environ.get("OPEN_WEATHER_MAP_APPID", None)
# Get your own ACCESS_KEY from http://api.screenshotlayer.com/api/capture
SCREEN_SHOT_LAYER_ACCESS_KEY = os.environ.get("SCREEN_SHOT_LAYER_ACCESS_KEY", None)
# Send .get_id in any group to fill this value.
SUDO_COMMAND_HAND_LER = os.environ.get("SUDO_COMMAND_HAND_LER", r"\.")
PRIVATE_GROUP_ID = os.environ.get("PRIVATE_GROUP_ID", None)
# This is required for the plugins involving the file system.
TMP_DOWNLOAD_DIRECTORY = os.environ.get("TMP_DOWNLOAD_DIRECTORY", "./download/")
# This is required for the speech to text module. Get your USERNAME from https://console.bluemix.net/docs/services/speech-to-text/getting-started.html
IBM_WATSON_CRED_URL = os.environ.get("IBM_WATSON_CRED_URL", None)
IBM_WATSON_CRED_PASSWORD = os.environ.get("IBM_WATSON_CRED_PASSWORD", None)
# This is required for the hash to torrent file functionality to work.
HASH_TO_TORRENT_API = os.environ.get("HASH_TO_TORRENT_API", "https://example.com/torrent/{}")
# This is required for the @telegraph functionality.
TELEGRAPH_SHORT_NAME = os.environ.get("TELEGRAPH_SHORT_NAME", "Modified")
# Get a Free API Key from OCR.Space
OCR_SPACE_API_KEY = os.environ.get("OCR_SPACE_API_KEY", None)
# Send .get_id in any group with all your administration Andencentos (added)
G_BAN_LOGGER_GROUP = int(os.environ.get("G_BAN_LOGGER_GROUP", -1001198699233))
# TG API limit. An album can have at most 10 media!
GOOGLE_SEARCH_COUNT_LIMIT = int(os.environ.get("GOOGLE_SEARCH_COUNT_LIMIT", 9))
TG_GLOBAL_ALBUM_LIMIT = int(os.environ.get("TG_GLOBAL_ALBUM_LIMIT", 9))
# Telegram BOT Token from @BotFather
TG_BOT_TOKEN_BF_HER = os.environ.get("TG_BOT_TOKEN_BF_HER", None)
TG_BOT_USER_NAME_BF_HER = os.environ.get("TG_BOT_USER_NAME_BF_HER", None)
# spotify
SPOTIFY_USERNAME = os.environ.get("SPOTIFY_USERNAME", None)
SPOTIFY_PASS = os.environ.get("SPOTIFY_PASS", None)
# Andencento nickname, e.g. "modified" without the "Andencento" suffix
Andencentonickname = os.environ.get("BOT_NICK_NAME", None)
SPOTIFY_BIO_PREFIX = os.environ.get("SPOTIFY_BIO_PREFIX", None)
#log
DUAL_LOG = os.environ.get("DUAL_LOG", None)
# DO NOT EDIT BELOW THIS LINE IF YOU DO NOT KNOW WHAT YOU ARE DOING
# TG API limit. A message can have maximum 4096 characters!
MAX_MESSAGE_SIZE_LIMIT = 4095
# set blacklist_chats where you do not want userbot's features
UB_BLACK_LIST_CHAT = set(int(x) for x in os.environ.get("UB_BLACK_LIST_CHAT", "").split())
# maximum number of messages for antiflood
MAX_ANTI_FLOOD_MESSAGES = 10
# warn mode for anti flood
ANTI_FLOOD_WARN_MODE = ChatBannedRights(
until_date=None,
view_messages=None,
send_messages=True
)
# chat ids or usernames, it is recommended to use chat ids,
# providing usernames means an additional overhead for the user
CHATS_TO_MONITOR_FOR_ANTI_FLOOD = []
# Get your own API key from https://www.remove.bg/ or
# feel free to use http://telegram.dog/Remove_BGBot
REM_BG_API_KEY = os.environ.get("REM_BG_API_KEY", None)
# Set to True if you want to block users that are spamming your PMs.
SLAP_USERNAME = os.environ.get("SLAP_USERNAME", None)
GITHUB_ACCESS_TOKEN = os.environ.get("GITHUB_ACCESS_TOKEN", None)
GIT_REPO_NAME = os.environ.get("GIT_REPO_NAME", None)
NO_P_M_SPAM = bool(os.environ.get("NO_P_M_SPAM", True))
# define "spam" in PMs
NO_SONGS = bool(os.environ.get("NO_SONGS", False))
MAX_FLOOD_IN_P_M_s = int(os.environ.get("MAX_FLOOD_IN_P_M_s", 3))
#pm log
PM_LOG_GRP_ID = os.environ.get("PM_LOG_GRP_ID", None)
# set to True if you want to log PMs to your PM_LOGGR_BOT_API_ID
NC_LOG_P_M_S = bool(os.environ.get("NC_LOG_P_M_S", True))
#heroku
HEROKU_APP_NAME = os.environ.get("HEROKU_APP_NAME", None)
HEROKU_API_KEY = os.environ.get("HEROKU_API_KEY", None)
# send .get_id in any channel to forward all your NEW PMs to this group
PRIVATE_GROUP_BOT_API_ID = os.environ.get("PRIVATE_GROUP_BOT_API_ID", None)
if PRIVATE_GROUP_BOT_API_ID:
PRIVATE_GROUP_BOT_API_ID = int(PRIVATE_GROUP_BOT_API_ID)
# send .get_id in your private channel to forward all your Private messages
TAG_LOGGER = os.environ.get("TAG_LOGGER", None)
if TAG_LOGGER: TAG_LOGGER = int(TAG_LOGGER)
#Tag LOGGER
PM_LOGGR_BOT_API_ID = os.environ.get("PM_LOGGR_BOT_API_ID", None)
if PM_LOGGR_BOT_API_ID: PM_LOGGR_BOT_API_ID = int(PM_LOGGR_BOT_API_ID)
# For Databases
# can be None in which case plugins requiring
# DataBase would not work
DB_URI = os.environ.get("DATABASE_URL", None)
# number of rows of buttons to be displayed in .helpme command
NO_OF_BUTTONS_DISPLAYED_IN_H_ME_CMD = int(os.environ.get("NO_OF_BUTTONS_DISPLAYED_IN_H_ME_CMD", 7))
#open load
OPEN_LOAD_LOGIN = os.environ.get("OPEN_LOAD_LOGIN", None)
OPEN_LOAD_KEY = os.environ.get("OPEN_LOAD_KEY", None)
# number of columns of buttons to be displayed in .help command
NO_OF_COLOUMS_DISPLAYED_IN_H_ME_CMD = int(os.environ.get("NO_OF_COLOUMS_DISPLAYED_IN_H_ME_CMD", 3))
# specify command handler that should be used for the plugins
# this should be a valid "regex" pattern
COMMAND_HAND_LER = os.environ.get("COMMAND_HAND_LER", r"\.")
# specify list of users allowed to use Andencento
# WARNING: be careful who you grant access to your Andencento.
# malicious users could do ".exec rm -rf /*"
SUDO_USERS = set(int(x) for x in os.environ.get("SUDO_USERS", "").split())
# VeryStream only supports video formats
VERY_STREAM_LOGIN = os.environ.get("VERY_STREAM_LOGIN", None)
VERY_STREAM_KEY = os.environ.get("VERY_STREAM_KEY", None)
GROUP_REG_SED_EX_BOT_S = os.environ.get("GROUP_REG_SED_EX_BOT_S", r"(regex|moku|BananaButler_|rgx|l4mR)Andencento")
TEMP_DIR = os.environ.get("TEMP_DIR", None)
CHANNEL_ID = int(os.environ.get("CHANNEL_ID", -100))
watermark_path = os.environ.get("watermark_path", None)
#Google Chrome Stuff
CHROME_DRIVER = os.environ.get("CHROME_DRIVER", "/app/.chromedriver/bin/chromedriver")
GOOGLE_CHROME_BIN = os.environ.get("GOOGLE_CHROME_BIN", "/app/.apt/usr/bin/google-chrome")
# Google Drive ()
G_DRIVE_CLIENT_ID = os.environ.get("G_DRIVE_CLIENT_ID", None)
G_DRIVE_CLIENT_SECRET = os.environ.get("G_DRIVE_CLIENT_SECRET", None)
GDRIVE_FOLDER_ID = os.environ.get("GDRIVE_FOLDER_ID", None)
AUTH_TOKEN_DATA = os.environ.get("AUTH_TOKEN_DATA", None)
if AUTH_TOKEN_DATA is not None:
os.makedirs(TMP_DOWNLOAD_DIRECTORY, exist_ok=True)
with open(TMP_DOWNLOAD_DIRECTORY + "auth_token.txt", "w") as t_file:
t_file.write(AUTH_TOKEN_DATA)
CUSTOM_STICKER_PACK_NAME = os.environ.get("CUSTOM_STICKER_PACK_NAME", None)
YOUTUBE_API_KEY = os.environ.get("YOUTUBE_API_KEY", None)
GDRIVE_FOLDER_ID = os.environ.get("GDRIVE_FOLDER_ID", None)
#MongoDB
MONGO_URI = os.environ.get("MONGO_URI", None)
#alive
ALIVE_PHOTTO = os.environ.get("ALIVE_PHOTTO", None)
ALIVE_MSG = os.environ.get("ALIVE_MSG", None)
#auto bio
BIO_MSG = os.environ.get("BIO_MSG", None)
#Lydia API
LYDIA_API = os.environ.get("LYDIA_API",None)
PLUGIN_CHANNEL = os.environ.get("PLUGIN_CHANNEL", None)
UPSTREAM_REPO = os.environ.get(
"UPSTREAM_REPO", "https://github.com/TeamDynamic/Dynamic-UserAndencento"
)
PM_DATA = os.environ.get("PM_DATA", "ENABLE")
# Deepai value can get from https://deepai.org/
DEEP_AI = os.environ.get("DEEP_AI", None)
#SUPERFEDBAN
FBAN_GROUP_ID = os.environ.get("FBAN_GROUP_ID", None)
if FBAN_GROUP_ID:
FBAN_GROUP_ID = int(FBAN_GROUP_ID)
EXCLUDE_FED = os.environ.get("EXCLUDE_FED", None)
FBAN_GROUP = int(os.environ.get("FBAN_GROUP", False))
else:
class Config(object):
DB_URI = None
|
Andencento
|
/Andencento-0.24.tar.gz/Andencento-0.24/userbot/AndencentoConfig.py
|
AndencentoConfig.py
|
import os
from telethon.tl.types import ChatBannedRights
class Config(object):
LOGGER = True
ABUSE = os.environ.get("ABUSE", None)
ALIVE_MSG = os.environ.get("ALIVE_MSG", "Aɴᴅᴇɴᴄᴇɴᴛᴏ")
ALIVE_PIC = os.environ.get("ALIVE_PIC", None)
ANTI_FLOOD_WARN_MODE = ChatBannedRights(
until_date=None,
view_messages=None,
send_messages=True
)
API_HASH = os.environ.get("API_HASH", None)
COMMAND_HAND_LER = os.environ.get("HANDLER", r"\.")  # escaped so the dot is a literal "." in the regex
APP_ID = os.environ.get("APP_ID", None)
ANDENCENTO_SESSION = os.environ.get("ANDENCENTO_SESSION", None)
I_AM_DEVELOPER = os.environ.get("I_AM_DEVELOPER", None)
TEMP_DOWNLOAD_DIRECTORY = os.environ.get("TEMP_DOWNLOAD_DIRECTORY", "./userbot/cache")
UB_BLACK_LIST_CHAT = set(int(x) for x in os.environ.get("UB_BLACK_LIST_CHAT", "").split())
AUTH_TOKEN_DATA = os.environ.get("AUTH_TOKEN_DATA", None)
if AUTH_TOKEN_DATA is not None:
os.makedirs(TEMP_DOWNLOAD_DIRECTORY, exist_ok=True)
with open(TEMP_DOWNLOAD_DIRECTORY + "auth_token.txt", "w") as t_file:
t_file.write(AUTH_TOKEN_DATA)
BIO_MSG = os.environ.get("BIO_MSG", "Aɴᴅᴇɴᴄᴇɴᴛᴏ")
BL_CHAT = set(int(x) for x in os.environ.get("BL_CHAT", "").split())
BOT_TOKEN = os.environ.get("BOT_TOKEN", None)
BOT_USERNAME = os.environ.get("BOT_USERNAME", None)
BUTTONS_IN_HELP = int(os.environ.get("BUTTONS_IN_HELP", 7))
CHATS_TO_MONITOR_FOR_ANTI_FLOOD = []
CHROME_BIN = os.environ.get("CHROME_BIN", "/app/.apt/usr/bin/google-chrome")
CHROME_DRIVER = os.environ.get("CHROME_DRIVER", "/app/.chromedriver/bin/chromedriver")
CUSTOM_PMPERMIT = os.environ.get("CUSTOM_PMPERMIT", None)
DB_URI = os.environ.get("DATABASE_URL", None)
SUDO_COMMAND_HAND_LER = os.environ.get("HANDLER", ".")
DUAL_LOG = os.environ.get("DUAL_LOG", None)
EMOJI_IN_HELP = os.environ.get("EMOJI_IN_HELP", " ")
FBAN_LOG_GROUP = os.environ.get("FBAN_LOG_GROUP", None)
EXTRA = os.environ.get("EXTRA", None)
EXTRA_REPO = os.environ.get("EXTRA_REPO", None)
if FBAN_LOG_GROUP:
FBAN_LOG_GROUP = int(FBAN_LOG_GROUP)
G_DRIVE_CLIENT_ID = os.environ.get("G_DRIVE_CLIENT_ID", None)
G_DRIVE_CLIENT_SECRET = os.environ.get("G_DRIVE_CLIENT_SECRET", None)
GBAN_LOG_GROUP = os.environ.get("GBAN_LOG_GROUP", None)
if GBAN_LOG_GROUP:
GBAN_LOG_GROUP = int(GBAN_LOG_GROUP)
GDRIVE_FOLDER_ID = os.environ.get("GDRIVE_FOLDER_ID", None)
ANDENCENTO_HNDLR = os.environ.get("ANDENCENTO_HNDLR", ".")
GIT_REPO_NAME = os.environ.get("GIT_REPO_NAME", None)
GITHUB_ACCESS_TOKEN = os.environ.get("GITHUB_ACCESS_TOKEN", None)
GOOGLE_CHROME_BIN = os.environ.get("GOOGLE_CHROME_BIN", "/app/.apt/usr/bin/google-chrome")
GROUP_REG_SED_EX_BOT_S = os.environ.get("GROUP_REG_SED_EX_BOT_S", r"(regex|moku|BananaButler_|rgx|l4mR)Andencento")
HANDLER = os.environ.get("HANDLER", r"\.")
HASH_TO_TORRENT_API = os.environ.get("HASH_TO_TORRENT_API", "https://example.com/torrent/{}")
HEROKU_API_KEY = os.environ.get("HEROKU_API_KEY", None)
HEROKU_APP_NAME = os.environ.get("HEROKU_APP_NAME", None)
INSTANT_BLOCK = os.environ.get("INSTANT_BLOCK", "DISABLE")
LOCATION = os.environ.get("LOCATION", None)
LOGGER_ID = os.environ.get("LOGGER_ID", None)
if LOGGER_ID:
LOGGER_ID = int(LOGGER_ID)
LYDIA_API = os.environ.get("LYDIA_API", None)
MAX_ANTI_FLOOD_MESSAGES = 10
MAX_MESSAGE_SIZE_LIMIT = 4095
MAX_SPAM = int(os.environ.get("MAX_SPAM", 3))
MONGO_URI = os.environ.get("MONGO_URI", None)
MY_CHANNEL = os.environ.get("YOUR_CHANNEL", "Andencento")
MY_GROUP = os.environ.get("YOUR_GROUP", "AndencentoSupport")
OCR_API = os.environ.get("OCR_API", None)
PLUGIN_CHANNEL = os.environ.get("PLUGIN_CHANNEL", -100)
if PLUGIN_CHANNEL:
PLUGIN_CHANNEL = int(PLUGIN_CHANNEL)
PM_LOG_ID = os.environ.get("PM_LOG_ID", None)
PRIVATE_GROUP_BOT_API_ID = os.environ.get("PM_LOG_ID", None)
PRIVATE_GROUP_ID = os.environ.get("PM_LOG_ID", None)
if PM_LOG_ID:
PM_LOG_ID = int(PM_LOG_ID)
PM_PERMIT = os.environ.get("PM_PERMIT", "ENABLE")
PMPERMIT_PIC = os.environ.get("PMPERMIT_PIC", None)
REMOVE_BG_API = os.environ.get("REMOVE_BG_API", None)
# Get your own ACCESS_KEY from http://api.screenshotlayer.com/api/capture
SCREEN_SHOT_LAYER_ACCESS_KEY = os.environ.get("SCREEN_SHOT_LAYER_ACCESS_KEY", None)
STICKER_PACKNAME = os.environ.get("STICKER_PACKNAME", None)
SUDO_HANDLER = os.environ.get("SUDO_HANDLER", r"\.")
SUDO_COMMAND_HAND_LER = os.environ.get("SUDO_HANDLER", r"\.")
SUDO_USERS = set(int(x) for x in os.environ.get("SUDO_USERS", "").split())
TAG_LOGGER = os.environ.get("TAG_LOGGER", None)
if TAG_LOGGER:
TAG_LOGGER = int(TAG_LOGGER)
TELEGRAPH_SHORT_NAME = os.environ.get("TELEGRAPH_SHORT_NAME", "AndencentoBot")
TEMP_DIR = os.environ.get("TEMP_DIR", None)
TMP_DOWNLOAD_DIRECTORY = os.environ.get("TMP_DOWNLOAD_DIRECTORY", "./DOWNLOADS/")
TZ = os.environ.get("TZ", "Asia/Kolkata")
UPSTREAM_REPO = os.environ.get("UPSTREAM_REPO", "https://github.com/Team-Andencento/Andencento")
WEATHER_API = os.environ.get("WEATHER_API", None)
YOUR_NAME = os.environ.get("YOUR_NAME", None)
YOUTUBE_API_KEY = os.environ.get("YOUTUBE_API_KEY", None)
# OpenWeatherMap API key for the weather module
OPEN_WEATHER_MAP_APPID = os.environ.get("OPEN_WEATHER_MAP_APPID", None)
# This is required for the @telegraph functionality.
TELEGRAPH_SHORT_NAME = os.environ.get("TELEGRAPH_SHORT_NAME", "userbot")
# Get a Free API Key from OCR.Space
OCR_SPACE_API_KEY = os.environ.get("OCR_SPACE_API_KEY", None)
# Send .get_id in the logging group (with your admin bots added) to obtain this chat id
G_BAN_LOGGER_GROUP = int(os.environ.get("G_BAN_LOGGER_GROUP", -1001169892177))
FBAN_LOGGER_GROUP = os.environ.get("FBAN_LOGGER_GROUP", None)
GOOGLE_SEARCH_COUNT_LIMIT = int(os.environ.get("GOOGLE_SEARCH_COUNT_LIMIT", 9))
# TG API limit. An album can have at most 10 media!
TG_GLOBAL_ALBUM_LIMIT = int(os.environ.get("TG_GLOBAL_ALBUM_LIMIT", 9))
# MIRROR ACE API KEY AND TOKEN
MIRROR_ACE_API_KEY = os.environ.get("MIRROR_ACE_API_KEY", None)
MIRROR_ACE_API_TOKEN = os.environ.get("MIRROR_ACE_API_TOKEN", None)
# Get your own API key from https://www.remove.bg/ or
# feel free to use http://telegram.dog/Remove_BGBo
REM_BG_API_KEY = os.environ.get("REM_BG_API_KEY", None)
SLAP_USERNAME = os.environ.get("SLAP_USERNAME", None)
class Production(Config):
LOGGER = False
class Development(Config):
LOGGER = True
import os
from telethon.tl.types import ChatBannedRights
class Var(object):
LOGGER = True
ABUSE = os.environ.get("ABUSE", None)
ALIVE_MSG = os.environ.get("ALIVE_MSG", "Ⱥղժҽղçҽղէօ")
ALIVE_PIC = os.environ.get("ALIVE_PIC", None)
ANTI_FLOOD_WARN_MODE = ChatBannedRights(
until_date=None,
view_messages=None,
send_messages=True
)
API_HASH = os.environ.get("API_HASH", None)
APP_ID = os.environ.get("APP_ID", None)
UB_BLACK_LIST_CHAT = set(int(x) for x in os.environ.get("UB_BLACK_LIST_CHAT", "").split())
ANDENCENTO_SESSION = os.environ.get("ANDENCENTO_SESSION", None)
TMP_DOWNLOAD_DIRECTORY = os.environ.get("TMP_DOWNLOAD_DIRECTORY", "./DOWNLOADS/")
AUTH_TOKEN_DATA = os.environ.get("AUTH_TOKEN_DATA", None)
if AUTH_TOKEN_DATA is not None:
    # create the download directory before writing the auth token file
    os.makedirs(TMP_DOWNLOAD_DIRECTORY, exist_ok=True)
    with open(os.path.join(TMP_DOWNLOAD_DIRECTORY, "auth_token.txt"), "w") as t_file:
        t_file.write(AUTH_TOKEN_DATA)
BIO_MSG = os.environ.get("BIO_MSG", "Ⱥղժҽղçҽղէօ")
BL_CHAT = set(int(x) for x in os.environ.get("BL_CHAT", "").split())
BOT_TOKEN = os.environ.get("BOT_TOKEN", None)
BOT_USERNAME = os.environ.get("BOT_USERNAME", None)
PRIVATE_GROUP_ID = os.environ.get("PM_LOG_ID", None)
BUTTONS_IN_HELP = int(os.environ.get("BUTTONS_IN_HELP", 7))
TEMP_DOWNLOAD_DIRECTORY = os.environ.get("TEMP_DOWNLOAD_DIRECTORY", "./userbot/cache")
CHATS_TO_MONITOR_FOR_ANTI_FLOOD = []
COMMAND_HAND_LER = os.environ.get("HANDLER", ".")
CHROME_BIN = os.environ.get("CHROME_BIN", "/app/.apt/usr/bin/google-chrome")
CHROME_DRIVER = os.environ.get("CHROME_DRIVER", "/app/.chromedriver/bin/chromedriver")
CUSTOM_PMPERMIT = os.environ.get("CUSTOM_PMPERMIT", None)
DB_URI = os.environ.get("DATABASE_URL", None)
DUAL_LOG = os.environ.get("DUAL_LOG", None)
EMOJI_IN_HELP = os.environ.get("EMOJI_IN_HELP", " ")
FBAN_LOG_GROUP = os.environ.get("FBAN_LOG_GROUP", None)
if FBAN_LOG_GROUP:
FBAN_LOG_GROUP = int(FBAN_LOG_GROUP)
G_DRIVE_CLIENT_ID = os.environ.get("G_DRIVE_CLIENT_ID", None)
G_DRIVE_CLIENT_SECRET = os.environ.get("G_DRIVE_CLIENT_SECRET", None)
GBAN_LOG_GROUP = os.environ.get("GBAN_LOG_GROUP", None)
if GBAN_LOG_GROUP:
GBAN_LOG_GROUP = int(GBAN_LOG_GROUP)
GDRIVE_FOLDER_ID = os.environ.get("GDRIVE_FOLDER_ID", None)
ANDENCENTO_HNDLR = os.environ.get("ANDENCENTO_HNDLR", ".")
GIT_REPO_NAME = os.environ.get("GIT_REPO_NAME", None)
GITHUB_ACCESS_TOKEN = os.environ.get("GITHUB_ACCESS_TOKEN", None)
GOOGLE_CHROME_BIN = os.environ.get("GOOGLE_CHROME_BIN", "/app/.apt/usr/bin/google-chrome")
GROUP_REG_SED_EX_BOT_S = os.environ.get("GROUP_REG_SED_EX_BOT_S", r"(regex|moku|BananaButler_|rgx|l4mR)Andencento")
HANDLER = os.environ.get("HANDLER", r"\.")
HASH_TO_TORRENT_API = os.environ.get("HASH_TO_TORRENT_API", "https://example.com/torrent/{}")
HEROKU_API_KEY = os.environ.get("HEROKU_API_KEY", None)
HEROKU_APP_NAME = os.environ.get("HEROKU_APP_NAME", None)
INSTANT_BLOCK = os.environ.get("INSTANT_BLOCK", "DISABLE")
LOCATION = os.environ.get("LOCATION", None)
LOGGER_ID = os.environ.get("LOGGER_ID", None)
if LOGGER_ID:
LOGGER_ID = int(LOGGER_ID)
LYDIA_API = os.environ.get("LYDIA_API", None)
MAX_ANTI_FLOOD_MESSAGES = 10
MAX_MESSAGE_SIZE_LIMIT = 4095
MAX_SPAM = int(os.environ.get("MAX_SPAM", 3))
MONGO_URI = os.environ.get("MONGO_URI", None)
MY_CHANNEL = os.environ.get("YOUR_CHANNEL", "Andencento")
MY_GROUP = os.environ.get("YOUR_GROUP", "AndencentoSupport")
OCR_API = os.environ.get("OCR_API", None)
PLUGIN_CHANNEL = os.environ.get("PLUGIN_CHANNEL", None)
if PLUGIN_CHANNEL:
PLUGIN_CHANNEL = int(PLUGIN_CHANNEL)
PM_LOG_ID = os.environ.get("PM_LOG_ID", None)
if PM_LOG_ID:
PM_LOG_ID = int(PM_LOG_ID)
PM_PERMIT = os.environ.get("PM_PERMIT", "ENABLE")
PMPERMIT_PIC = os.environ.get("PMPERMIT_PIC", None)
REMOVE_BG_API = os.environ.get("REMOVE_BG_API", None)
SCREEN_SHOT_LAYER_ACCESS_KEY = os.environ.get("SCREEN_SHOT_LAYER_ACCESS_KEY", None)
STICKER_PACKNAME = os.environ.get("STICKER_PACKNAME", None)
SUDO_HANDLER = os.environ.get("SUDO_HANDLER", r"\.")
SUDO_USERS = set(int(x) for x in os.environ.get("SUDO_USERS", "").split())
TAG_LOGGER = os.environ.get("TAG_LOGGER", None)
if TAG_LOGGER:
TAG_LOGGER = int(TAG_LOGGER)
TELEGRAPH_SHORT_NAME = os.environ.get("TELEGRAPH_SHORT_NAME", "AndencentoBot")
TEMP_DIR = os.environ.get("TEMP_DIR", None)
TMP_DOWNLOAD_DIRECTORY = os.environ.get("TMP_DOWNLOAD_DIRECTORY", "./DOWNLOADS/")
TZ = os.environ.get("TZ", "Asia/Kolkata")
UPSTREAM_REPO = os.environ.get("UPSTREAM_REPO", "https://github.com/Team-Andencento/Andencento")
WEATHER_API = os.environ.get("WEATHER_API", None)
YOUR_NAME = os.environ.get("YOUR_NAME", None)
YOUTUBE_API_KEY = os.environ.get("YOUTUBE_API_KEY", None)
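Settings like FBAN_LOG_GROUP, GBAN_LOG_GROUP and LOGGER_ID above all follow a read-then-convert pattern, since Telegram chat IDs must be integers while environment variables are strings. A sketch of that pattern with an illustrative helper name:

```python
import os

def optional_int_env(var_name: str):
    """Return the environment variable as an int when set, otherwise None."""
    raw = os.environ.get(var_name, None)
    return int(raw) if raw else None

os.environ["LOG_GROUP_DEMO"] = "-1001234567890"
log_group = optional_int_env("LOG_GROUP_DEMO")
print(log_group, optional_int_env("ANDENCENTO_MISSING_DEMO"))
```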
|
Andencento
|
/Andencento-0.24.tar.gz/Andencento-0.24/userbot/config.py
|
config.py
|
import os
from . import *
COMMAND_HAND_LER = os.environ.get("HANDLER", ".")
#################################################################################################################
class CmdHelp:
"""
Helper class for generating command help texts.
"""
FILE = ""
ORIGINAL_FILE = ""
FILE_AUTHOR = ""
IS_OFFICIAL = True
COMMANDS = {}
PREFIX = COMMAND_HAND_LER
WARNING = ""
INFO = ""
def __init__(self, file: str, official: bool = True, file_name: str = None):
self.FILE = file
self.ORIGINAL_FILE = file
self.IS_OFFICIAL = official
self.FILE_NAME = file_name if file_name is not None else file + ".py"
self.COMMANDS = {}
self.FILE_AUTHOR = ""
self.WARNING = ""
self.INFO = ""
def set_file_info(self, name: str, value: str):
if name == "name":
self.FILE = value
elif name == "author":
self.FILE_AUTHOR = value
return self
def add_command(self, command: str, params=None, usage: str = "", example=None):
"""
Registers a command with its parameters and usage text.
"""
self.COMMANDS[command] = {
"command": command,
"params": params,
"usage": usage,
"example": example,
}
return self
def add_warning(self, warning):
self.WARNING = warning
return self
def add_info(self, info):
self.INFO = info
return self
def get_result(self):
"""
Builds and returns the formatted help text.
"""
result = f"**📗 File :** `{self.FILE}`\n"
if self.WARNING == "" and self.INFO == "":
result += f"**⬇️ Official:** {'✅' if self.IS_OFFICIAL else '❌'}\n\n"
else:
result += f"**⬇️ Official:** {'✅' if self.IS_OFFICIAL else '❌'}\n"
if self.INFO == "":
    if self.WARNING != "":
        result += f"**⚠️ Warning :** {self.WARNING}\n\n"
else:
    if self.WARNING != "":
        result += f"**⚠️ Warning :** {self.WARNING}\n"
    result += f"**ℹ️ Info:** {self.INFO}\n\n"
for command in self.COMMANDS.values():
if command["params"] is None:
result += (
f"**🛠 Command :** `{COMMAND_HAND_LER[:1]}{command['command']}`\n"
)
else:
result += f"**🛠 Command :** `{COMMAND_HAND_LER[:1]}{command['command']} {command['params']}`\n"
if command["example"] is None:
result += f"**💬 Details :** `{command['usage']}`\n\n"
else:
result += f"**💬 Details :** `{command['usage']}`\n"
result += f"**⌨️ For Example :** `{COMMAND_HAND_LER[:1]}{command['example']}`\n\n"
return result
def add(self):
"""
Directly adds CMD_HELP.
"""
CMD_HELP_BOT[self.FILE] = {
"info": {
"official": self.IS_OFFICIAL,
"warning": self.WARNING,
"info": self.INFO,
},
"commands": self.COMMANDS,
}
CMD_HELP[self.FILE] = self.get_result()
return True
def getText(self, text: str):
if text == "REPLY_OR_USERNAME":
    return "<username> <username/reply>"
elif text == "OR":
    return "or"
elif text == "USERNAMES":
    return "<username(s)>"
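CmdHelp builds Markdown help text through chained calls. The miniature, self-contained builder below mirrors that pattern (the class name and prefix here are illustrative, not the real module API):

```python
COMMAND_PREFIX = "."

class MiniHelp:
    """Stripped-down chainable help builder mirroring CmdHelp's structure."""

    def __init__(self, file: str):
        self.file = file
        self.commands = {}

    def add_command(self, command: str, params=None, usage: str = ""):
        self.commands[command] = {"params": params, "usage": usage}
        return self  # returning self is what makes the calls chainable

    def get_result(self) -> str:
        result = f"**File :** `{self.file}`\n"
        for name, cmd in self.commands.items():
            params = f" {cmd['params']}" if cmd["params"] else ""
            result += f"**Command :** `{COMMAND_PREFIX}{name}{params}`\n"
            result += f"**Details :** `{cmd['usage']}`\n"
        return result

help_text = MiniHelp("ping").add_command("ping", usage="Check latency").get_result()
print(help_text)
```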
|
Andencento
|
/Andencento-0.24.tar.gz/Andencento-0.24/userbot/cmdhelp.py
|
cmdhelp.py
|
import os
import sys
import time
from distutils.util import strtobool as sb
from logging import DEBUG, INFO, basicConfig, getLogger
import heroku3
from dotenv import load_dotenv
from requests import get
from telethon import TelegramClient
from telethon.sessions import StringSession
ENV = os.environ.get("ENV", False)
import pylast
from pySmartDL import SmartDL
from requests import get
from .config import Config
from .config import Config as Var
ALIVE_NAME = Config.YOUR_NAME
StartTime = time.time()
YOUR_NAME = Config.YOUR_NAME
from .AndencentoConfig import Config
versionop = "0.0.2"
W2Hversion = versionop
Andencentoversion = versionop
CONSOLE_LOGGER_VERBOSE = sb(os.environ.get("CONSOLE_LOGGER_VERBOSE", "False"))
if Var.ANDENCENTO_SESSION:
session_name = str(Var.ANDENCENTO_SESSION)
Andencento = TelegramClient(StringSession(session_name), Var.APP_ID, Var.API_HASH)
else:
session_name = "startup"
Andencento = TelegramClient(session_name, Var.APP_ID, Var.API_HASH)
noob = TelegramClient(None, Var.APP_ID, Var.API_HASH)
BIO_MSG = os.environ.get("BIO_MSG", None)
API_ID = os.environ.get("APP_ID")
API_HASH = os.environ.get("API_HASH")
token = os.environ.get("BOT_TOKEN")
bot = Andencento
__version__ = "0.24"
if CONSOLE_LOGGER_VERBOSE:
basicConfig(
format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
level=DEBUG,
)
else:
basicConfig(
format="%(asctime)s - %(name)s - %(levelname)s - %(message)s", level=INFO
)
LOGS = getLogger("ANDENCENTO")
asst = TelegramClient("Andencento", Var.APP_ID, Var.API_HASH).start(bot_token=token)
try:
if Config.HEROKU_API_KEY is not None or Config.HEROKU_APP_NAME is not None:
HEROKU_APP = heroku3.from_key(Config.HEROKU_API_KEY).apps()[
Config.HEROKU_APP_NAME
]
else:
HEROKU_APP = None
except Exception:
HEROKU_APP = None
# UserAndencento logging feature switch.
BOTLOG = sb(os.environ.get("BOTLOG", "False"))
LOGSPAMMER = sb(os.environ.get("LOGSPAMMER", "False"))
COMMAND_HAND_LER = os.environ.get("HANDLER", ".")
# Bleep Blop, this is a Andencento ;)
PM_AUTO_BAN = sb(os.environ.get("PM_AUTO_BAN", "False"))
# Console verbose logging
CONSOLE_LOGGER_VERBOSE = sb(os.environ.get("CONSOLE_LOGGER_VERBOSE", "False"))
# SQL Database URI
DB_URI = os.environ.get("DATABASE_URL", None)
# OCR API key
OCR_SPACE_API_KEY = os.environ.get("OCR_SPACE_API_KEY", None)
# remove.bg API key
REM_BG_API_KEY = os.environ.get("REM_BG_API_KEY", None)
# Chrome Driver and Headless Google Chrome Binaries
CHROME_DRIVER = os.environ.get("CHROME_DRIVER", None)
GOOGLE_CHROME_BIN = os.environ.get("GOOGLE_CHROME_BIN", None)
# OpenWeatherMap API Key
OPEN_WEATHER_MAP_APPID = os.environ.get("OPEN_WEATHER_MAP_APPID", None)
# Anti SpamAndencento Config
ANTI_SPAMBOT = sb(os.environ.get("ANTI_SPAMBOT", "False"))
ANTI_SPAMBOT_SHOUT = sb(os.environ.get("ANTI_SPAMBOT_SHOUT", "False"))
# FedBan Premium Module
F_BAN_LOGGER_GROUP = os.environ.get("F_BAN_LOGGER_GROUP", None)
# Heroku Credentials for updater.
HEROKU_MEMEZ = sb(os.environ.get("HEROKU_MEMEZ", "False"))
HEROKU_APP_NAME = os.environ.get("HEROKU_APP_NAME", None)
HEROKU_API_KEY = os.environ.get("HEROKU_API_KEY", None)
# Youtube API key
YOUTUBE_API_KEY = os.environ.get("YOUTUBE_API_KEY", None)
# Default .alive name
AUTONAME = os.environ.get("AUTONAME", None)
REDIRECTCHANNEL = os.environ.get("REDIRECTCHANNEL", None)
# Time & Date - Country and Time Zone
COUNTRY = str(os.environ.get("COUNTRY", "India"))
TZ_NUMBER = int(os.environ.get("TZ_NUMBER", 1))
# Clean Welcome
CLEAN_WELCOME = sb(os.environ.get("CLEAN_WELCOME", "True"))
# Custom Module
CUSTOM_PMPERMIT = os.environ.get("CUSTOM_PMPERMIT", None)
CUSTOM_AFK = os.environ.get("CUSTOM_AFK", None)
# Last.fm Module
BIO_PREFIX = os.environ.get("BIO_PREFIX", None)
BIO_MSG = os.environ.get("BIO_MSG", None)
LASTFM_API = os.environ.get("LASTFM_API", None)
LASTFM_SECRET = os.environ.get("LASTFM_SECRET", None)
LASTFM_USERNAME = os.environ.get("LASTFM_USERNAME", None)
LASTFM_PASSWORD_PLAIN = os.environ.get("LASTFM_PASSWORD", None)
LASTFM_PASS = pylast.md5(LASTFM_PASSWORD_PLAIN) if LASTFM_PASSWORD_PLAIN else None
if LASTFM_USERNAME is not None:
    lastfm = pylast.LastFMNetwork(
        api_key=LASTFM_API,
        api_secret=LASTFM_SECRET,
        username=LASTFM_USERNAME,
        password_hash=LASTFM_PASS,
    )
else:
    lastfm = None
# Google Drive Module
G_DRIVE_CLIENT_ID = os.environ.get("G_DRIVE_CLIENT_ID", None)
G_DRIVE_CLIENT_SECRET = os.environ.get("G_DRIVE_CLIENT_SECRET", None)
G_DRIVE_AUTH_TOKEN_DATA = os.environ.get("G_DRIVE_AUTH_TOKEN_DATA", None)
GDRIVE_FOLDER_ID = os.environ.get("GDRIVE_FOLDER_ID", None)
TEMP_DOWNLOAD_DIRECTORY = os.environ.get("TEMP_DOWNLOAD_DIRECTORY", "./downloads")
else:
# Put your private vars here if you are using local hosting
PLACEHOLDER = None
# Setting Up CloudMail.ru and MEGA.nz extractor binaries,
# and giving them correct perms to work properly.
if not os.path.exists("bin"):
os.mkdir("bin")
binaries = {
"https://raw.githubusercontent.com/yshalsager/megadown/master/megadown": "bin/megadown",
"https://raw.githubusercontent.com/yshalsager/cmrudl.py/master/cmrudl.py": "bin/cmrudl",
}
for binary, path in binaries.items():
downloader = SmartDL(binary, path, progress_bar=False)
downloader.start()
os.chmod(path, 0o755)
Andencento = Andencento
# global variables
CMD_LIST = {}
# for later purposes
CMD_HELP = {}
CMD_HELP_BOT = {}
BRAIN_CHECKER = []
INT_PLUG = ""
LOAD_PLUG = {}
COUNT_MSG = 0
USERS = {}
COUNT_PM = {}
LASTMSG = {}
ISAFK = False
AFKREASON = None
SUDO_LIST = {}
import os
COMMAND_HAND_LER = os.environ.get("HANDLER", ".")
#################################################################################################################
class CmdHelp:
"""
Helper class for generating command help texts.
"""
FILE = ""
ORIGINAL_FILE = ""
FILE_AUTHOR = ""
IS_OFFICIAL = True
COMMANDS = {}
PREFIX = COMMAND_HAND_LER
WARNING = ""
INFO = ""
def __init__(self, file: str, official: bool = True, file_name: str = None):
self.FILE = file
self.ORIGINAL_FILE = file
self.IS_OFFICIAL = official
self.FILE_NAME = file_name if file_name is not None else file + ".py"
self.COMMANDS = {}
self.FILE_AUTHOR = ""
self.WARNING = ""
self.INFO = ""
def set_file_info(self, name: str, value: str):
if name == "name":
self.FILE = value
elif name == "author":
self.FILE_AUTHOR = value
return self
def add_command(self, command: str, params=None, usage: str = "", example=None):
"""
Registers a command with its parameters and usage text.
"""
self.COMMANDS[command] = {
"command": command,
"params": params,
"usage": usage,
"example": example,
}
return self
def add_warning(self, warning):
self.WARNING = warning
return self
def add_info(self, info):
self.INFO = info
return self
def get_result(self):
"""
Builds and returns the formatted help text.
"""
result = f"**📗 File :** `{self.FILE}`\n"
if self.WARNING == "" and self.INFO == "":
result += f"**⬇️ Official:** {'✅' if self.IS_OFFICIAL else '❌'}\n\n"
else:
result += f"**⬇️ Official:** {'✅' if self.IS_OFFICIAL else '❌'}\n"
if self.INFO == "":
    if self.WARNING != "":
        result += f"**⚠️ Warning :** {self.WARNING}\n\n"
else:
    if self.WARNING != "":
        result += f"**⚠️ Warning :** {self.WARNING}\n"
    result += f"**ℹ️ Info:** {self.INFO}\n\n"
for command in self.COMMANDS.values():
if command["params"] is None:
result += (
f"**🛠 Command :** `{COMMAND_HAND_LER[:1]}{command['command']}`\n"
)
else:
result += f"**🛠 Command :** `{COMMAND_HAND_LER[:1]}{command['command']} {command['params']}`\n"
if command["example"] is None:
result += f"**💬 Details :** `{command['usage']}`\n\n"
else:
result += f"**💬 Details :** `{command['usage']}`\n"
result += f"**⌨️ For Example :** `{COMMAND_HAND_LER[:1]}{command['example']}`\n\n"
return result
def add(self):
"""
Directly adds CMD_HELP.
"""
CMD_HELP_BOT[self.FILE] = {
"info": {
"official": self.IS_OFFICIAL,
"warning": self.WARNING,
"info": self.INFO,
},
"commands": self.COMMANDS,
}
CMD_HELP[self.FILE] = self.get_result()
return True
def getText(self, text: str):
if text == "REPLY_OR_USERNAME":
    return "<username> <username/reply>"
elif text == "OR":
    return "or"
elif text == "USERNAMES":
    return "<username(s)>"
import asyncio
from distutils.util import strtobool as sb
from logging import DEBUG, INFO, basicConfig, getLogger
import pylast
from pySmartDL import SmartDL
from requests import get
# Bot Logs setup:
if bool(ENV):
CONSOLE_LOGGER_VERBOSE = sb(os.environ.get("CONSOLE_LOGGER_VERBOSE", "False"))
if CONSOLE_LOGGER_VERBOSE:
basicConfig(
format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
level=DEBUG,
)
else:
basicConfig(
format="%(asctime)s - %(name)s - %(levelname)s - %(message)s", level=INFO
)
LOGS = getLogger("ANDENCENTO")
# Check if the config was edited by using the already used variable.
# Basically, its the 'virginity check' for the config file ;)
CONFIG_CHECK = os.environ.get(
"___________PLOX_______REMOVE_____THIS_____LINE__________", None
)
if CONFIG_CHECK:
LOGS.info(
"Please remove the line mentioned in the first hashtag from the config.env file"
)
quit(1)
# Logging channel/group configuration.
BOTLOG_CHATID = os.environ.get("BOTLOG_CHATID", None)
try:
    BOTLOG_CHATID = int(BOTLOG_CHATID)
except (TypeError, ValueError):
    pass
# UserAndencento logging feature switch.
BOTLOG = sb(os.environ.get("BOTLOG", "False"))
LOGSPAMMER = sb(os.environ.get("LOGSPAMMER", "False"))
PATTERNS = os.environ.get("PATTERNS", ".;!,")
COMMAND_HAND_LER = os.environ.get("COMMAND_HAND_LER", r"\.")
# Custom Module
CUSTOM_PMPERMIT = os.environ.get("CUSTOM_PMPERMIT", None)
# Bleep Blop, this is a Andencento ;)
PM_AUTO_BAN = sb(os.environ.get("PM_AUTO_BAN", "False"))
# Console verbose logging
CONSOLE_LOGGER_VERBOSE = sb(os.environ.get("CONSOLE_LOGGER_VERBOSE", "False"))
# SQL Database URI
DB_URI = os.environ.get("DATABASE_URL", None)
# OCR API key
OCR_SPACE_API_KEY = os.environ.get("OCR_SPACE_API_KEY", None)
# remove.bg API key
REM_BG_API_KEY = os.environ.get("REM_BG_API_KEY", None)
# Chrome Driver and Headless Google Chrome Binaries
CHROME_DRIVER = os.environ.get("CHROME_DRIVER", None)
GOOGLE_CHROME_BIN = os.environ.get("GOOGLE_CHROME_BIN", None)
# OpenWeatherMap API Key
OPEN_WEATHER_MAP_APPID = os.environ.get("OPEN_WEATHER_MAP_APPID", None)
# Anti SpamAndencento Config
ANTI_SPAMBOT = sb(os.environ.get("ANTI_SPAMBOT", "False"))
ANTI_SPAMBOT_SHOUT = sb(os.environ.get("ANTI_SPAMBOT_SHOUT", "False"))
# FedBan Premium Module
F_BAN_LOGGER_GROUP = os.environ.get("F_BAN_LOGGER_GROUP", None)
# make by LEGEND X
Andencentonickname = os.environ.get("BOT_NICK_NAME", None)
# Heroku Credentials for updater.
HEROKU_MEMEZ = sb(os.environ.get("HEROKU_MEMEZ", "False"))
HEROKU_APP_NAME = os.environ.get("HEROKU_APP_NAME", None)
HEROKU_API_KEY = os.environ.get("HEROKU_API_KEY", None)
# Youtube API key
YOUTUBE_API_KEY = os.environ.get("YOUTUBE_API_KEY", None)
# Default .alive name
AUTONAME = os.environ.get("AUTONAME", None)
REDIRECTCHANNEL = os.environ.get("REDIRECTCHANNEL", None)
# Time & Date - Country and Time Zone
COUNTRY = str(os.environ.get("COUNTRY", "India"))
TZ_NUMBER = int(os.environ.get("TZ_NUMBER", 1))
# Clean Welcome
CLEAN_WELCOME = sb(os.environ.get("CLEAN_WELCOME", "True"))
# Custom Module
CUSTOM_PMPERMIT = os.environ.get("CUSTOM_PMPERMIT", None)
CUSTOM_AFK = os.environ.get("CUSTOM_AFK", None)
# Upstream Repo
UPSTREAM_REPO_URL = os.environ.get(
"UPSTREAM_REPO_URL", "https://github.com/Noob-Stranger/Andencento"
)
# Last.fm Module
BIO_PREFIX = os.environ.get("BIO_PREFIX", None)
BIO_MSG = os.environ.get("BIO_MSG", None)
LASTFM_API = os.environ.get("LASTFM_API", None)
LASTFM_SECRET = os.environ.get("LASTFM_SECRET", None)
LASTFM_USERNAME = os.environ.get("LASTFM_USERNAME", None)
LASTFM_PASSWORD_PLAIN = os.environ.get("LASTFM_PASSWORD", None)
LASTFM_PASS = pylast.md5(LASTFM_PASSWORD_PLAIN) if LASTFM_PASSWORD_PLAIN else None
if LASTFM_USERNAME is not None:
    lastfm = pylast.LastFMNetwork(
        api_key=LASTFM_API,
        api_secret=LASTFM_SECRET,
        username=LASTFM_USERNAME,
        password_hash=LASTFM_PASS,
    )
else:
    lastfm = None
# Google Drive Module
G_DRIVE_CLIENT_ID = os.environ.get("G_DRIVE_CLIENT_ID", None)
G_DRIVE_CLIENT_SECRET = os.environ.get("G_DRIVE_CLIENT_SECRET", None)
G_DRIVE_AUTH_TOKEN_DATA = os.environ.get("G_DRIVE_AUTH_TOKEN_DATA", None)
GDRIVE_FOLDER_ID = os.environ.get("GDRIVE_FOLDER_ID", None)
TEMP_DOWNLOAD_DIRECTORY = os.environ.get("TEMP_DOWNLOAD_DIRECTORY", "./downloads")
else:
# Put your private vars here if you are using local hosting
PLACEHOLDER = None
# Setting Up CloudMail.ru and MEGA.nz extractor binaries,
# and giving them correct perms to work properly.
if not os.path.exists("bin"):
os.mkdir("bin")
binaries = {
"https://raw.githubusercontent.com/yshalsager/megadown/master/megadown": "bin/megadown",
"https://raw.githubusercontent.com/yshalsager/cmrudl.py/master/cmrudl.py": "bin/cmrudl",
}
for binary, path in binaries.items():
downloader = SmartDL(binary, path, progress_bar=False)
downloader.start()
os.chmod(path, 0o755)
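The boolean switches in this file rely on distutils.util.strtobool, and distutils was removed in Python 3.12. A drop-in sketch of the same truthy-string parsing for newer interpreters:

```python
def str_to_bool(value: str) -> bool:
    """Accept the same truthy/falsy strings as distutils.util.strtobool did."""
    value = value.strip().lower()
    if value in ("y", "yes", "t", "true", "on", "1"):
        return True
    if value in ("n", "no", "f", "false", "off", "0"):
        return False
    raise ValueError(f"invalid truth value {value!r}")

flag = str_to_bool("True")
print(flag, str_to_bool("0"))
```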
|
Andencento
|
/Andencento-0.24.tar.gz/Andencento-0.24/userbot/__init__.py
|
__init__.py
|
import glob
import os
from pathlib import Path
from telethon.tl.functions.channels import JoinChannelRequest
from .. import *
from ..utils import *
from userbot.config import Config
from ..utils.modules import extra
hl = Config.HANDLER
PIC = Config.ALIVE_PIC or "https://telegra.ph/file/3d208ecf6d0ea9389d8f8.jpg"
ALIVE = Config.YOUR_NAME or "ANDENCENTO USER"
Andencento_mention = f"[{ALIVE}]"
user_mention = Andencento_mention
ver = "0.0.2"
async def asst():
"""
Loading Assistant From here
"""
path = 'assistant/*.py'
files = glob.glob(path)
for name in files:
with open(name) as f:
path1 = Path(f.name)
shortname = path1.stem
start_assistant(shortname.replace(".py", ""))
async def plugs():
"""
Modules From here
"""
path = "plugins/*.py"
files = glob.glob(path)
for name in files:
with open(name) as f:
path1 = Path(f.name)
shortname = path1.stem
load_module(shortname.replace(".py", ""))
async def addons():
extra_repo = "https://github.com/Andencento/Addons-Andencento"
if Config.EXTRA == "True":
try:
os.system(f"git clone {extra_repo}")
except BaseException:
pass
LOGS.info("Installing Extra Plugins")
path = "Addons-Andencento/*.py"
files = glob.glob(path)
for name in files:
with open(name) as ex:
path2 = Path(ex.name)
shortname = path2.stem
extra(shortname.replace(".py", ""))
async def Andencentoiosop():
try:
if Config.LOGGER_ID != 0:
await Andencento.tgbot.send_file(
Config.LOGGER_ID,
PIC,
caption=f"#START \n\nDeployed Andencento Successfully\n\n**Andencento - {ver}**\n\nType `{hl}ping` or `{hl}alive` to check! \n\nJoin [Andencento Channel](t.me/Andencento) for Updates & [Andencento Chat](t.me/AndencentoSupport) for any query regarding Team Andencento",
)
except Exception as e:
LOGS.info(str(e))
async def op():
await Andencento(JoinChannelRequest("Andencento"))
await Andencento(JoinChannelRequest("AndencentoSupport"))
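plugs(), asst() and addons() above all discover modules the same way: glob *.py and take each file's stem as the module short name. A self-contained sketch of that discovery step using a throwaway directory (helper name is illustrative):

```python
import glob
import tempfile
from pathlib import Path

def discover_plugins(directory: str) -> list:
    """Return sorted module short names (file stems) for .py files in directory."""
    return sorted(Path(name).stem for name in glob.glob(f"{directory}/*.py"))

with tempfile.TemporaryDirectory() as tmp:
    for fname in ("afk.py", "ping.py", "notes.txt"):
        (Path(tmp) / fname).touch()
    found = discover_plugins(tmp)  # only .py files contribute stems
print(found)
```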
|
Andencento
|
/Andencento-0.24.tar.gz/Andencento-0.24/userbot/session/main.py
|
main.py
|
import asyncio
import hashlib
import inspect
import logging
import math
import os
from collections import defaultdict
from typing import (AsyncGenerator, Awaitable, BinaryIO, DefaultDict, List,
Optional, Tuple, Union)
from telethon import TelegramClient, helpers, utils
from telethon.crypto import AuthKey
from telethon.network import MTProtoSender
from telethon.tl.functions.auth import (ExportAuthorizationRequest,
ImportAuthorizationRequest)
from telethon.tl.functions.upload import (GetFileRequest,
SaveBigFilePartRequest,
SaveFilePartRequest)
from telethon.tl.types import (Document, InputDocumentFileLocation, InputFile,
InputFileBig, InputFileLocation,
InputPeerPhotoFileLocation,
InputPhotoFileLocation, TypeInputFile)
log: logging.Logger = logging.getLogger("telethon")
logging.basicConfig(level=logging.WARNING)
TypeLocation = Union[
Document,
InputDocumentFileLocation,
InputPeerPhotoFileLocation,
InputFileLocation,
InputPhotoFileLocation,
]
def stream_file(file_to_stream: BinaryIO, chunk_size=1024):
while True:
data_read = file_to_stream.read(chunk_size)
if not data_read:
break
yield data_read
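stream_file above is a plain generator over any binary file object; a quick self-contained usage sketch with an in-memory buffer (the generator is restated so the sketch runs on its own):

```python
import io

def stream_file(file_to_stream, chunk_size=1024):
    # Yield fixed-size chunks until the underlying read() returns empty bytes.
    while True:
        data_read = file_to_stream.read(chunk_size)
        if not data_read:
            break
        yield data_read

buf = io.BytesIO(b"x" * 2500)
chunks = list(stream_file(buf, chunk_size=1024))
print([len(c) for c in chunks])  # two full chunks and one 452-byte remainder
```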
class DownloadSender:
sender: MTProtoSender
request: GetFileRequest
remaining: int
stride: int
def __init__(
self,
sender: MTProtoSender,
file: TypeLocation,
offset: int,
limit: int,
stride: int,
count: int,
) -> None:
self.sender = sender
self.request = GetFileRequest(file, offset=offset, limit=limit)
self.stride = stride
self.remaining = count
async def next(self) -> Optional[bytes]:
if not self.remaining:
return None
result = await self.sender.send(self.request)
self.remaining -= 1
self.request.offset += self.stride
return result.bytes
def disconnect(self) -> Awaitable[None]:
return self.sender.disconnect()
class UploadSender:
sender: MTProtoSender
request: Union[SaveFilePartRequest, SaveBigFilePartRequest]
part_count: int
stride: int
previous: Optional[asyncio.Task]
loop: asyncio.AbstractEventLoop
def __init__(
self,
sender: MTProtoSender,
file_id: int,
part_count: int,
big: bool,
index: int,
stride: int,
loop: asyncio.AbstractEventLoop,
) -> None:
self.sender = sender
self.part_count = part_count
if big:
self.request = SaveBigFilePartRequest(file_id, index, part_count, b"")
else:
self.request = SaveFilePartRequest(file_id, index, b"")
self.stride = stride
self.previous = None
self.loop = loop
async def next(self, data: bytes) -> None:
if self.previous:
await self.previous
self.previous = self.loop.create_task(self._next(data))
async def _next(self, data: bytes) -> None:
self.request.bytes = data
log.debug(
f"Sending file part {self.request.file_part}/{self.part_count}"
f" with {len(data)} bytes"
)
await self.sender.send(self.request)
self.request.file_part += self.stride
async def disconnect(self) -> None:
if self.previous:
await self.previous
return await self.sender.disconnect()
class ParallelTransferrer:
client: TelegramClient
loop: asyncio.AbstractEventLoop
dc_id: int
senders: Optional[List[Union[DownloadSender, UploadSender]]]
auth_key: AuthKey
upload_ticker: int
def __init__(self, client: TelegramClient, dc_id: Optional[int] = None) -> None:
self.client = client
self.loop = self.client.loop
self.dc_id = dc_id or self.client.session.dc_id
self.auth_key = (
None
if dc_id and self.client.session.dc_id != dc_id
else self.client.session.auth_key
)
self.senders = None
self.upload_ticker = 0
async def _cleanup(self) -> None:
await asyncio.gather(*[sender.disconnect() for sender in self.senders])
self.senders = None
@staticmethod
def _get_connection_count(
file_size: int, max_count: int = 20, full_size: int = 100 * 1024 * 1024
) -> int:
if file_size > full_size:
return max_count
return math.ceil((file_size / full_size) * max_count)
async def _init_download(
self, connections: int, file: TypeLocation, part_count: int, part_size: int
) -> None:
minimum, remainder = divmod(part_count, connections)
def get_part_count() -> int:
nonlocal remainder
if remainder > 0:
remainder -= 1
return minimum + 1
return minimum
# The first cross-DC sender will export+import the authorization, so we always create it
# before creating any other senders.
self.senders = [
await self._create_download_sender(
file, 0, part_size, connections * part_size, get_part_count()
),
*await asyncio.gather(
*[
self._create_download_sender(
file, i, part_size, connections * part_size, get_part_count()
)
for i in range(1, connections)
]
),
]
async def _create_download_sender(
self,
file: TypeLocation,
index: int,
part_size: int,
stride: int,
part_count: int,
) -> DownloadSender:
return DownloadSender(
await self._create_sender(),
file,
index * part_size,
part_size,
stride,
part_count,
)
async def _init_upload(
self, connections: int, file_id: int, part_count: int, big: bool
) -> None:
self.senders = [
await self._create_upload_sender(file_id, part_count, big, 0, connections),
*await asyncio.gather(
*[
self._create_upload_sender(file_id, part_count, big, i, connections)
for i in range(1, connections)
]
),
]
async def _create_upload_sender(
self, file_id: int, part_count: int, big: bool, index: int, stride: int
) -> UploadSender:
return UploadSender(
await self._create_sender(),
file_id,
part_count,
big,
index,
stride,
loop=self.loop,
)
async def _create_sender(self) -> MTProtoSender:
dc = await self.client._get_dc(self.dc_id)
sender = MTProtoSender(self.auth_key, loggers=self.client._log)
await sender.connect(
self.client._connection(
dc.ip_address,
dc.port,
dc.id,
loggers=self.client._log,
proxy=self.client._proxy,
)
)
if not self.auth_key:
log.debug(f"Exporting auth to DC {self.dc_id}")
auth = await self.client(ExportAuthorizationRequest(self.dc_id))
req = self.client._init_with(
ImportAuthorizationRequest(id=auth.id, bytes=auth.bytes)
)
await sender.send(req)
self.auth_key = sender.auth_key
return sender
async def init_upload(
self,
file_id: int,
file_size: int,
part_size_kb: Optional[float] = None,
connection_count: Optional[int] = None,
) -> Tuple[int, int, bool]:
connection_count = connection_count or self._get_connection_count(file_size)
        log.debug(f"init_upload connection count: {connection_count}")
part_size = (part_size_kb or utils.get_appropriated_part_size(file_size)) * 1024
part_count = (file_size + part_size - 1) // part_size
is_large = file_size > 10 * 1024 * 1024
await self._init_upload(connection_count, file_id, part_count, is_large)
return part_size, part_count, is_large
async def upload(self, part: bytes) -> None:
await self.senders[self.upload_ticker].next(part)
self.upload_ticker = (self.upload_ticker + 1) % len(self.senders)
async def finish_upload(self) -> None:
await self._cleanup()
async def download(
self,
file: TypeLocation,
file_size: int,
part_size_kb: Optional[float] = None,
connection_count: Optional[int] = None,
) -> AsyncGenerator[bytes, None]:
connection_count = connection_count or self._get_connection_count(file_size)
        log.debug(f"download connection count: {connection_count}")
part_size = (part_size_kb or utils.get_appropriated_part_size(file_size)) * 1024
part_count = math.ceil(file_size / part_size)
log.debug(
"Starting parallel download: "
f"{connection_count} {part_size} {part_count} {file!s}"
)
await self._init_download(connection_count, file, part_count, part_size)
part = 0
while part < part_count:
tasks = []
for sender in self.senders:
tasks.append(self.loop.create_task(sender.next()))
for task in tasks:
data = await task
if not data:
break
yield data
part += 1
log.debug(f"Part {part} downloaded")
log.debug("Parallel download finished, cleaning up connections")
await self._cleanup()
parallel_transfer_locks: DefaultDict[int, asyncio.Lock] = defaultdict(
lambda: asyncio.Lock()
)
async def _internal_transfer_to_telegram(
client: TelegramClient, response: BinaryIO, progress_callback: callable
) -> Tuple[TypeInputFile, int]:
file_id = helpers.generate_random_long()
file_size = os.path.getsize(response.name)
hash_md5 = hashlib.md5()
uploader = ParallelTransferrer(client)
part_size, part_count, is_large = await uploader.init_upload(file_id, file_size)
buffer = bytearray()
for data in stream_file(response):
if progress_callback:
r = progress_callback(response.tell(), file_size)
if inspect.isawaitable(r):
await r
if not is_large:
hash_md5.update(data)
if len(buffer) == 0 and len(data) == part_size:
await uploader.upload(data)
continue
new_len = len(buffer) + len(data)
if new_len >= part_size:
cutoff = part_size - len(buffer)
buffer.extend(data[:cutoff])
await uploader.upload(bytes(buffer))
buffer.clear()
buffer.extend(data[cutoff:])
else:
buffer.extend(data)
if len(buffer) > 0:
await uploader.upload(bytes(buffer))
await uploader.finish_upload()
if is_large:
return InputFileBig(file_id, part_count, "upload"), file_size
else:
return InputFile(file_id, part_count, "upload", hash_md5.hexdigest()), file_size
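The buffering loop in `_internal_transfer_to_telegram` re-slices the incoming stream into pieces of exactly `part_size` bytes before handing them to the uploader. A minimal synchronous sketch of that re-slicing (the function name and generator shape here are illustrative, not part of the module):

```python
from typing import Iterable, Iterator

def chunk_to_parts(chunks: Iterable[bytes], part_size: int) -> Iterator[bytes]:
    """Re-slice an arbitrary stream of byte chunks into parts of exactly
    part_size bytes (the final part may be shorter), mirroring the buffer
    logic in _internal_transfer_to_telegram."""
    buffer = bytearray()
    for data in chunks:
        # Fast path: nothing buffered and the chunk is already a full part.
        if not buffer and len(data) == part_size:
            yield data
            continue
        buffer.extend(data)
        while len(buffer) >= part_size:
            yield bytes(buffer[:part_size])
            del buffer[:part_size]
    if buffer:  # flush the short final part
        yield bytes(buffer)
```

For example, `list(chunk_to_parts([b"abc", b"defg", b"hi"], 4))` yields `[b"abcd", b"efgh", b"i"]`.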
async def download_file(
client: TelegramClient,
location: TypeLocation,
out: BinaryIO,
progress_callback: callable = None,
) -> BinaryIO:
size = location.size
dc_id, location = utils.get_input_location(location)
# We lock the transfers because telegram has connection count limits
downloader = ParallelTransferrer(client, dc_id)
downloaded = downloader.download(location, size)
async for x in downloaded:
out.write(x)
if progress_callback:
r = progress_callback(out.tell(), size)
if inspect.isawaitable(r):
await r
return out
async def upload_file(
client: TelegramClient,
file: BinaryIO,
progress_callback: callable = None,
) -> TypeInputFile:
return (await _internal_transfer_to_telegram(client, file, progress_callback))[0]
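The transferrer sizes a job with two heuristics: the connection count scales linearly with file size up to `max_count`, and files over 10 MiB go through the "big file" upload path. A self-contained sketch of that planning math, assuming a fixed 512 KiB part size for illustration (the real code asks `utils.get_appropriated_part_size` for it):

```python
import math

def plan_transfer(file_size: int, max_count: int = 20,
                  full_size: int = 100 * 1024 * 1024,
                  part_size: int = 512 * 1024) -> tuple:
    """Reproduce ParallelTransferrer's sizing heuristics: connections grow
    linearly with file size up to max_count, part_count is file_size over
    part_size rounded up, and >10 MiB selects the big-file upload path."""
    if file_size > full_size:
        connections = max_count
    else:
        connections = math.ceil((file_size / full_size) * max_count)
    part_count = (file_size + part_size - 1) // part_size
    is_large = file_size > 10 * 1024 * 1024  # big files use InputFileBig
    return connections, part_count, is_large
```

A 5 MiB file thus gets a single connection and ten 512 KiB parts, while a 200 MiB file saturates all twenty connections.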
# --- source: Andencento-0.24 / userbot/helpers/Fast.py ---
###### Searching and Downloading Google Images to the local disk ######
import argparse
# Import Libraries
import codecs
import datetime
import http.client
import json
import os
import re
import ssl
import sys
import time # Importing the time library to check the time of code execution
import urllib.request
from http.client import BadStatusLine, IncompleteRead
from urllib.parse import quote
from urllib.request import HTTPError, Request, URLError, urlopen
http.client._MAXHEADERS = 1000
args_list = [
"keywords",
"keywords_from_file",
"prefix_keywords",
"suffix_keywords",
"limit",
"format",
"color",
"color_type",
"usage_rights",
"size",
"exact_size",
"aspect_ratio",
"type",
"time",
"time_range",
"delay",
"url",
"single_image",
"output_directory",
"image_directory",
"no_directory",
"proxy",
"similar_images",
"specific_site",
"print_urls",
"print_size",
"print_paths",
"metadata",
"extract_metadata",
"socket_timeout",
"thumbnail",
"thumbnail_only",
"language",
"prefix",
"chromedriver",
"related_images",
"safe_search",
"no_numbering",
"offset",
"no_download",
"save_source",
"silent_mode",
"ignore_urls",
]
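`user_input()` below overlays each config-file record onto the full argument template: every name in `args_list` defaults to `None`, then the record's explicit values win. A minimal sketch of that normalization step (the helper name is illustrative):

```python
def normalize_record(item: dict, keys) -> dict:
    """Build the full argument dict the way user_input() does for each
    entry in the JSON "Records" list: all known arguments default to
    None, then the record's explicit values override them."""
    arguments = {key: None for key in keys}
    for key, value in item.items():
        arguments[key] = value
    return arguments
```

Arguments never mentioned in the record stay `None`, so downstream code can test them with plain truthiness checks.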
def user_input():
config = argparse.ArgumentParser()
config.add_argument(
"-cf",
"--config_file",
help="config file name",
default="",
type=str,
required=False,
)
config_file_check = config.parse_known_args()
object_check = vars(config_file_check[0])
records = []
if object_check["config_file"] != "":
json_file = json.load(open(config_file_check[0].config_file))
for item in json_file["Records"]:
arguments = {i: None for i in args_list}
for key, value in item.items():
arguments[key] = value
records.append(arguments)
len(records)
else:
# Taking command line arguments from users
parser = argparse.ArgumentParser()
parser.add_argument(
"-k", "--keywords", help="delimited list input", type=str, required=False
)
parser.add_argument(
"-kf",
"--keywords_from_file",
help="extract list of keywords from a text file",
type=str,
required=False,
)
parser.add_argument(
"-sk",
"--suffix_keywords",
help="comma separated additional words added after to main keyword",
type=str,
required=False,
)
parser.add_argument(
"-pk",
"--prefix_keywords",
help="comma separated additional words added before main keyword",
type=str,
required=False,
)
parser.add_argument(
"-l", "--limit", help="delimited list input", type=str, required=False
)
parser.add_argument(
"-f",
"--format",
help="download images with specific format",
type=str,
required=False,
choices=["jpg", "gif", "png", "bmp", "svg", "webp", "ico"],
)
parser.add_argument(
"-u", "--url", help="search with google image URL", type=str, required=False
)
parser.add_argument(
"-x",
"--single_image",
help="downloading a single image from URL",
type=str,
required=False,
)
parser.add_argument(
"-o",
"--output_directory",
help="download images in a specific main directory",
type=str,
required=False,
)
parser.add_argument(
"-i",
"--image_directory",
help="download images in a specific sub-directory",
type=str,
required=False,
)
parser.add_argument(
"-n",
"--no_directory",
default=False,
help="download images in the main directory but no sub-directory",
action="store_true",
)
parser.add_argument(
"-d",
"--delay",
help="delay in seconds to wait between downloading two images",
type=int,
required=False,
)
parser.add_argument(
"-co",
"--color",
help="filter on color",
type=str,
required=False,
choices=[
"red",
"orange",
"yellow",
"green",
"teal",
"blue",
"purple",
"pink",
"white",
"gray",
"black",
"brown",
],
)
parser.add_argument(
"-ct",
"--color_type",
help="filter on color",
type=str,
required=False,
choices=["full-color", "black-and-white", "transparent"],
)
parser.add_argument(
"-r",
"--usage_rights",
help="usage rights",
type=str,
required=False,
choices=[
"labeled-for-reuse-with-modifications",
"labeled-for-reuse",
"labeled-for-noncommercial-reuse-with-modification",
"labeled-for-nocommercial-reuse",
],
)
parser.add_argument(
"-s",
"--size",
help="image size",
type=str,
required=False,
choices=[
"large",
"medium",
"icon",
">400*300",
">640*480",
">800*600",
">1024*768",
">2MP",
">4MP",
">6MP",
">8MP",
">10MP",
">12MP",
">15MP",
">20MP",
">40MP",
">70MP",
],
)
parser.add_argument(
"-es",
"--exact_size",
help='exact image resolution "WIDTH,HEIGHT"',
type=str,
required=False,
)
parser.add_argument(
"-t",
"--type",
help="image type",
type=str,
required=False,
choices=["face", "photo", "clipart", "line-drawing", "animated"],
)
parser.add_argument(
"-w",
"--time",
help="image age",
type=str,
required=False,
choices=["past-24-hours", "past-7-days", "past-month", "past-year"],
)
parser.add_argument(
"-wr",
"--time_range",
help='time range for the age of the image. should be in the format {"time_min":"MM/DD/YYYY","time_max":"MM/DD/YYYY"}',
type=str,
required=False,
)
parser.add_argument(
"-a",
"--aspect_ratio",
help="comma separated additional words added to keywords",
type=str,
required=False,
choices=["tall", "square", "wide", "panoramic"],
)
parser.add_argument(
"-si",
"--similar_images",
help="downloads images very similar to the image URL you provide",
type=str,
required=False,
)
parser.add_argument(
"-ss",
"--specific_site",
help="downloads images that are indexed from a specific website",
type=str,
required=False,
)
parser.add_argument(
"-p",
"--print_urls",
default=False,
help="Print the URLs of the images",
action="store_true",
)
parser.add_argument(
"-ps",
"--print_size",
default=False,
help="Print the size of the images on disk",
action="store_true",
)
parser.add_argument(
"-pp",
"--print_paths",
default=False,
help="Prints the list of absolute paths of the images",
action="store_true",
)
parser.add_argument(
"-m",
"--metadata",
default=False,
help="Print the metadata of the image",
action="store_true",
)
parser.add_argument(
"-e",
"--extract_metadata",
default=False,
help="Dumps all the logs into a text file",
action="store_true",
)
parser.add_argument(
"-st",
"--socket_timeout",
default=False,
help="Connection timeout waiting for the image to download",
type=float,
)
parser.add_argument(
"-th",
"--thumbnail",
default=False,
help="Downloads image thumbnail along with the actual image",
action="store_true",
)
parser.add_argument(
"-tho",
"--thumbnail_only",
default=False,
help="Downloads only thumbnail without downloading actual images",
action="store_true",
)
parser.add_argument(
"-la",
"--language",
default=False,
            help="Defines the language filter. The search results are automatically returned in that language",
type=str,
required=False,
choices=[
"Arabic",
"Chinese (Simplified)",
"Chinese (Traditional)",
"Czech",
"Danish",
"Dutch",
"English",
"Estonian",
"Finnish",
"French",
"German",
"Greek",
"Hebrew",
"Hungarian",
"Icelandic",
"Italian",
"Japanese",
"Korean",
"Latvian",
"Lithuanian",
"Norwegian",
"Portuguese",
"Polish",
"Romanian",
"Russian",
"Spanish",
"Swedish",
"Turkish",
],
)
parser.add_argument(
"-pr",
"--prefix",
default=False,
help="A word that you would want to prefix in front of each image name",
type=str,
required=False,
)
parser.add_argument(
"-px",
"--proxy",
help="specify a proxy address and port",
type=str,
required=False,
)
parser.add_argument(
"-cd",
"--chromedriver",
help="specify the path to chromedriver executable in your local machine",
type=str,
required=False,
)
parser.add_argument(
"-ri",
"--related_images",
default=False,
help="Downloads images that are similar to the keyword provided",
action="store_true",
)
parser.add_argument(
"-sa",
"--safe_search",
default=False,
help="Turns on the safe search filter while searching for images",
action="store_true",
)
parser.add_argument(
"-nn",
"--no_numbering",
default=False,
help="Allows you to exclude the default numbering of images",
action="store_true",
)
parser.add_argument(
"-of",
"--offset",
help="Where to start in the fetched links",
type=str,
required=False,
)
parser.add_argument(
"-nd",
"--no_download",
default=False,
help="Prints the URLs of the images and/or thumbnails without downloading them",
action="store_true",
)
parser.add_argument(
"-iu",
"--ignore_urls",
default=False,
help="delimited list input of image urls/keywords to ignore",
type=str,
)
parser.add_argument(
"-sil",
"--silent_mode",
default=False,
help="Remains silent. Does not print notification messages on the terminal",
action="store_true",
)
parser.add_argument(
"-is",
"--save_source",
help="creates a text file containing a list of downloaded images along with source page url",
type=str,
required=False,
)
args = parser.parse_args()
arguments = vars(args)
records.append(arguments)
return records
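The `googleimagesdownload` class below assembles Google's `tbs` filter string by comma-joining whichever filters are active after a single `&tbs=` prefix. A minimal sketch of that assembly, with the per-filter translation tables already resolved to their `tbs` values (function name is illustrative):

```python
def join_tbs(params: dict) -> str:
    """Mimic the "&tbs=" assembly in build_url_parameters: skip inactive
    (None) filters and comma-join the active ones after one prefix."""
    built = "&tbs="
    first = True
    for value in params.values():
        if value is None:
            continue
        built += value if first else "," + value
        first = False
    return built
```

With only `color` and `type` set, this yields e.g. `&tbs=ic:specific,isc:red,itp:photo`.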
class googleimagesdownload:
def __init__(self):
pass
# Downloading entire Web Document (Raw Page Content)
def download_page(self, url):
try:
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.104 Safari/537.36"
}
req = urllib.request.Request(url, headers=headers)
resp = urllib.request.urlopen(req)
return str(resp.read())
except Exception:
            print(
                "Could not open URL. Please check your internet connection and/or ssl settings \n"
                "If you are using a proxy, make sure your proxy settings are configured correctly"
            )
sys.exit()
# Download Page for more than 100 images
def download_extended_page(self, url, chromedriver):
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
options = webdriver.ChromeOptions()
options.add_argument("--no-sandbox")
options.add_argument("--headless")
try:
browser = webdriver.Chrome(chromedriver, chrome_options=options)
except Exception as e:
            print(
                "Looks like we cannot locate the path to 'chromedriver' (use the '--chromedriver' "
                "argument to specify the path to the executable) or the Google Chrome browser is not "
                "installed on your machine (exception: %s)" % e
            )
sys.exit()
browser.set_window_size(1024, 768)
# Open the link
browser.get(url)
time.sleep(1)
print("Getting you a lot of images. This may take a few moments...")
element = browser.find_element_by_tag_name("body")
# Scroll down
for i in range(30):
element.send_keys(Keys.PAGE_DOWN)
time.sleep(0.3)
try:
browser.find_element_by_id("smb").click()
for _ in range(50):
element.send_keys(Keys.PAGE_DOWN)
time.sleep(0.3) # Andencento id protection
except BaseException:
for _ in range(10):
element.send_keys(Keys.PAGE_DOWN)
time.sleep(0.3) # Andencento id protection
print("Reached end of Page.")
time.sleep(0.5)
source = browser.page_source # page source
# close the browser
browser.close()
return source
# Correcting the escape characters for python2
def replace_with_byte(self, match):
return chr(int(match.group(0)[1:], 8))
def repair(self, brokenjson):
# up to 3 digits for byte values up to FF
invalid_escape = re.compile(r"\\[0-7]{1,3}")
return invalid_escape.sub(self.replace_with_byte, brokenjson)
# Finding 'Next Image' from the given raw page
def get_next_tab(self, s):
start_line = s.find('class="dtviD"')
if start_line == -1: # If no links are found then give an error!
end_quote = 0
link = "no_tabs"
return link, "", end_quote
start_line = s.find('class="dtviD"')
start_content = s.find('href="', start_line + 1)
end_content = s.find('">', start_content + 1)
url_item = "https://www.google.com" + str(s[start_content + 6 : end_content])
url_item = url_item.replace("&", "&")
start_line_2 = s.find('class="dtviD"')
s = s.replace("&", "&")
start_content_2 = s.find(":", start_line_2 + 1)
end_content_2 = s.find("&usg=", start_content_2 + 1)
url_item_name = str(s[start_content_2 + 1 : end_content_2])
chars = url_item_name.find(",g_1:")
chars_end = url_item_name.find(":", chars + 6)
if chars_end == -1:
updated_item_name = (url_item_name[chars + 5 :]).replace("+", " ")
else:
updated_item_name = (url_item_name[chars + 5 : chars_end]).replace("+", " ")
return url_item, updated_item_name, end_content
# Getting all links with the help of '_images_get_next_image'
def get_all_tabs(self, page):
tabs = {}
while True:
item, item_name, end_content = self.get_next_tab(page)
if item == "no_tabs":
break
if len(item_name) > 100 or item_name == "background-color":
break
# Append all the links in the list named 'Links'
tabs[item_name] = item
# Timer could be used to slow down the request for image
# downloads
time.sleep(0.1)
page = page[end_content:]
return tabs
# Format the object in readable format
def format_object(self, object):
data = object[1]
main = data[3]
info = data[9]
return {
"image_height": main[2],
"image_width": main[1],
"image_link": main[0],
"image_format": main[0][-1 * (len(main[0]) - main[0].rfind(".") - 1) :],
"image_description": info["2003"][3],
"image_host": info["183836587"][0],
"image_source": info["2003"][2],
"image_thumbnail_url": data[2][0],
}
# function to download single image
def single_image(self, image_url):
main_directory = "downloads"
extensions = (".jpg", ".gif", ".png", ".bmp", ".svg", ".webp", ".ico")
url = image_url
try:
os.makedirs(main_directory)
except OSError as e:
if e.errno != 17:
raise
req = Request(
url,
headers={
"User-Agent": "Mozilla/5.0 (X11; Linux i686) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1312.27 Safari/537.17"
},
)
response = urlopen(req, None, 10)
data = response.read()
response.close()
image_name = str(url[(url.rfind("/")) + 1 :])
if "?" in image_name:
image_name = image_name[: image_name.find("?")]
# if ".jpg" in image_name or ".gif" in image_name or ".png" in
# image_name or ".bmp" in image_name or ".svg" in image_name or ".webp"
# in image_name or ".ico" in image_name:
if any(map(lambda extension: extension in image_name, extensions)):
file_name = main_directory + "/" + image_name
else:
file_name = main_directory + "/" + image_name + ".jpg"
image_name = image_name + ".jpg"
try:
with open(file_name, "wb") as output_file:
output_file.write(data)
except IOError as e:
raise e
except OSError as e:
raise e
print(
"completed ====> " + image_name.encode("raw_unicode_escape").decode("utf-8")
)
def similar_images(self, similar_images):
try:
searchUrl = (
"https://www.google.com/searchbyimage?site=search&sa=X&image_url="
+ similar_images
)
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36"
}
req1 = urllib.request.Request(searchUrl, headers=headers)
resp1 = urllib.request.urlopen(req1)
content = str(resp1.read())
l1 = content.find("AMhZZ")
l2 = content.find("&", l1)
urll = content[l1:l2]
newurl = (
"https://www.google.com/search?tbs=sbi:" + urll + "&site=search&sa=X"
)
req2 = urllib.request.Request(newurl, headers=headers)
urllib.request.urlopen(req2)
l3 = content.find("/search?sa=X&q=")
l4 = content.find(";", l3 + 19)
return content[l3 + 19 : l4]
except BaseException:
            return "Could not connect to Google Images endpoint"
# Building URL parameters
def build_url_parameters(self, arguments):
if arguments["language"]:
lang = "&lr="
lang_param = {
"Arabic": "lang_ar",
"Chinese (Simplified)": "lang_zh-CN",
"Chinese (Traditional)": "lang_zh-TW",
"Czech": "lang_cs",
"Danish": "lang_da",
"Dutch": "lang_nl",
"English": "lang_en",
"Estonian": "lang_et",
"Finnish": "lang_fi",
"French": "lang_fr",
"German": "lang_de",
"Greek": "lang_el",
                "Hebrew": "lang_iw",
"Hungarian": "lang_hu",
"Icelandic": "lang_is",
"Italian": "lang_it",
"Japanese": "lang_ja",
"Korean": "lang_ko",
"Latvian": "lang_lv",
"Lithuanian": "lang_lt",
"Norwegian": "lang_no",
"Portuguese": "lang_pt",
"Polish": "lang_pl",
"Romanian": "lang_ro",
"Russian": "lang_ru",
"Spanish": "lang_es",
"Swedish": "lang_sv",
"Turkish": "lang_tr",
}
lang_url = lang + lang_param[arguments["language"]]
else:
lang_url = ""
if arguments["time_range"]:
json_acceptable_string = arguments["time_range"].replace("'", '"')
d = json.loads(json_acceptable_string)
time_range = ",cdr:1,cd_min:" + d["time_min"] + ",cd_max:" + d["time_max"]
else:
time_range = ""
if arguments["exact_size"]:
size_array = [x.strip() for x in arguments["exact_size"].split(",")]
exact_size = (
",isz:ex,iszw:" + str(size_array[0]) + ",iszh:" + str(size_array[1])
)
else:
exact_size = ""
built_url = "&tbs="
counter = 0
params = {
"color": [
arguments["color"],
{
"red": "ic:specific,isc:red",
"orange": "ic:specific,isc:orange",
"yellow": "ic:specific,isc:yellow",
"green": "ic:specific,isc:green",
                    "teal": "ic:specific,isc:teal",
"blue": "ic:specific,isc:blue",
"purple": "ic:specific,isc:purple",
"pink": "ic:specific,isc:pink",
"white": "ic:specific,isc:white",
"gray": "ic:specific,isc:gray",
"black": "ic:specific,isc:black",
"brown": "ic:specific,isc:brown",
},
],
"color_type": [
arguments["color_type"],
{
"full-color": "ic:color",
"black-and-white": "ic:gray",
"transparent": "ic:trans",
},
],
"usage_rights": [
arguments["usage_rights"],
{
"labeled-for-reuse-with-modifications": "sur:fmc",
"labeled-for-reuse": "sur:fc",
"labeled-for-noncommercial-reuse-with-modification": "sur:fm",
"labeled-for-nocommercial-reuse": "sur:f",
},
],
"size": [
arguments["size"],
{
"large": "isz:l",
"medium": "isz:m",
"icon": "isz:i",
">400*300": "isz:lt,islt:qsvga",
">640*480": "isz:lt,islt:vga",
">800*600": "isz:lt,islt:svga",
                    ">1024*768": "isz:lt,islt:xga",
">2MP": "isz:lt,islt:2mp",
">4MP": "isz:lt,islt:4mp",
">6MP": "isz:lt,islt:6mp",
">8MP": "isz:lt,islt:8mp",
">10MP": "isz:lt,islt:10mp",
">12MP": "isz:lt,islt:12mp",
">15MP": "isz:lt,islt:15mp",
">20MP": "isz:lt,islt:20mp",
">40MP": "isz:lt,islt:40mp",
">70MP": "isz:lt,islt:70mp",
},
],
"type": [
arguments["type"],
{
"face": "itp:face",
"photo": "itp:photo",
"clipart": "itp:clipart",
"line-drawing": "itp:lineart",
"animated": "itp:animated",
},
],
"time": [
arguments["time"],
{
"past-24-hours": "qdr:d",
"past-7-days": "qdr:w",
"past-month": "qdr:m",
"past-year": "qdr:y",
},
],
"aspect_ratio": [
arguments["aspect_ratio"],
{
"tall": "iar:t",
"square": "iar:s",
"wide": "iar:w",
"panoramic": "iar:xw",
},
],
"format": [
arguments["format"],
{
"jpg": "ift:jpg",
"gif": "ift:gif",
"png": "ift:png",
"bmp": "ift:bmp",
"svg": "ift:svg",
"webp": "webp",
"ico": "ift:ico",
"raw": "ift:craw",
},
],
}
for key, value in params.items():
if value[0] is not None:
ext_param = value[1][value[0]]
# counter will tell if it is first param added or not
if counter == 0:
# add it to the built url
built_url += ext_param
else:
built_url = built_url + "," + ext_param
counter += 1
built_url = lang_url + built_url + exact_size + time_range
return built_url
# building main search URL
def build_search_url(
self, search_term, params, url, similar_images, specific_site, safe_search
):
# check the args and choose the URL
if url:
url = url
elif similar_images:
print(similar_images)
keywordem = self.similar_images(similar_images)
url = (
"https://www.google.com/search?q="
+ keywordem
+ "&espv=2&biw=1366&bih=667&site=webhp&source=lnms&tbm=isch&sa=X&ei=XosDVaCXD8TasATItgE&ved=0CAcQ_AUoAg"
)
elif specific_site:
url = (
"https://www.google.com/search?q="
+ quote(search_term.encode("utf-8"))
+ "&as_sitesearch="
+ specific_site
+ "&espv=2&biw=1366&bih=667&site=webhp&source=lnms&tbm=isch"
+ params
+ "&sa=X&ei=XosDVaCXD8TasATItgE&ved=0CAcQ_AUoAg"
)
else:
url = (
"https://www.google.com/search?q="
+ quote(search_term.encode("utf-8"))
+ "&espv=2&biw=1366&bih=667&site=webhp&source=lnms&tbm=isch"
+ params
+ "&sa=X&ei=XosDVaCXD8TasATItgE&ved=0CAcQ_AUoAg"
)
# safe search check
if safe_search:
# check safe_search
safe_search_string = "&safe=active"
url = url + safe_search_string
return url
# measures the file size
def file_size(self, file_path):
if os.path.isfile(file_path):
file_info = os.stat(file_path)
size = file_info.st_size
for x in ["bytes", "KB", "MB", "GB", "TB"]:
if size < 1024.0:
return "%3.1f %s" % (size, x)
size /= 1024.0
return size
# keywords from file
def keywords_from_file(self, file_name):
search_keyword = []
with codecs.open(file_name, "r", encoding="utf-8-sig") as f:
if ".csv" in file_name or ".txt" in file_name:
for line in f:
if line not in ["\n", "\r\n"]:
search_keyword.append(line.replace("\n", "").replace("\r", ""))
else:
print(
"Invalid file type: Valid file types are either .txt or .csv \n"
"exiting..."
)
sys.exit()
return search_keyword
# make directories
def create_directories(self, main_directory, dir_name, thumbnail, thumbnail_only):
dir_name_thumbnail = dir_name + " - thumbnail"
# make a search keyword directory
try:
if not os.path.exists(main_directory):
os.makedirs(main_directory)
time.sleep(0.15)
path = dir_name
sub_directory = os.path.join(main_directory, path)
if not os.path.exists(sub_directory):
os.makedirs(sub_directory)
if thumbnail or thumbnail_only:
sub_directory_thumbnail = os.path.join(
main_directory, dir_name_thumbnail
)
if not os.path.exists(sub_directory_thumbnail):
os.makedirs(sub_directory_thumbnail)
except OSError as e:
if e.errno != 17:
raise
# Download Image thumbnails
def download_image_thumbnail(
self,
image_url,
main_directory,
dir_name,
return_image_name,
print_urls,
socket_timeout,
print_size,
no_download,
save_source,
img_src,
ignore_urls,
):
if print_urls or no_download:
print("Image URL: " + image_url)
if no_download:
return "success", "Printed url without downloading"
try:
req = Request(
image_url,
headers={
"User-Agent": "Mozilla/5.0 (X11; Linux i686) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1312.27 Safari/537.17"
},
)
try:
# timeout time to download an image
if socket_timeout:
timeout = float(socket_timeout)
else:
timeout = 10
response = urlopen(req, None, timeout)
data = response.read()
response.close()
path = (
main_directory
+ "/"
+ dir_name
+ " - thumbnail"
+ "/"
+ return_image_name
)
try:
output_file = open(path, "wb")
output_file.write(data)
output_file.close()
if save_source:
list_path = main_directory + "/" + save_source + ".txt"
list_file = open(list_path, "a")
list_file.write(path + "\t" + img_src + "\n")
list_file.close()
except OSError as e:
download_status = "fail"
download_message = (
"OSError on an image...trying next one..." + " Error: " + str(e)
)
except IOError as e:
download_status = "fail"
download_message = (
"IOError on an image...trying next one..." + " Error: " + str(e)
)
download_status = "success"
download_message = (
"Completed Image Thumbnail ====> " + return_image_name
)
# image size parameter
if print_size:
print("Image Size: " + str(self.file_size(path)))
except UnicodeEncodeError as e:
download_status = "fail"
download_message = (
"UnicodeEncodeError on an image...trying next one..."
+ " Error: "
+ str(e)
)
except HTTPError as e: # If there is any HTTPError
download_status = "fail"
download_message = (
"HTTPError on an image...trying next one..." + " Error: " + str(e)
)
except URLError as e:
download_status = "fail"
download_message = (
"URLError on an image...trying next one..." + " Error: " + str(e)
)
except ssl.CertificateError as e:
download_status = "fail"
download_message = (
"CertificateError on an image...trying next one..."
+ " Error: "
+ str(e)
)
except IOError as e: # If there is any IOError
download_status = "fail"
download_message = (
"IOError on an image...trying next one..." + " Error: " + str(e)
)
return download_status, download_message
# Download Images
def download_image(
self,
image_url,
image_format,
main_directory,
dir_name,
count,
print_urls,
socket_timeout,
prefix,
print_size,
no_numbering,
no_download,
save_source,
img_src,
silent_mode,
thumbnail_only,
format,
ignore_urls,
):
if not silent_mode:
if print_urls or no_download:
print("Image URL: " + image_url)
if ignore_urls:
if any(url in image_url for url in ignore_urls.split(",")):
return (
"fail",
"Image ignored due to 'ignore url' parameter",
None,
image_url,
)
if thumbnail_only:
return (
"success",
"Skipping image download...",
str(image_url[(image_url.rfind("/")) + 1 :]),
image_url,
)
if no_download:
return "success", "Printed url without downloading", None, image_url
try:
req = Request(
image_url,
headers={
"User-Agent": "Mozilla/5.0 (X11; Linux i686) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1312.27 Safari/537.17"
},
)
try:
# timeout time to download an image
if socket_timeout:
timeout = float(socket_timeout)
else:
timeout = 10
response = urlopen(req, None, timeout)
data = response.read()
response.close()
extensions = [
".jpg",
".jpeg",
".gif",
".png",
".bmp",
".svg",
".webp",
".ico",
]
# keep everything after the last '/'
image_name = str(image_url[(image_url.rfind("/")) + 1 :])
if format:
if not image_format or image_format != format:
download_status = "fail"
download_message = "Wrong image format returned. Skipping..."
return_image_name = ""
absolute_path = ""
return (
download_status,
download_message,
return_image_name,
absolute_path,
)
if (
image_format == ""
or not image_format
or "." + image_format not in extensions
):
download_status = "fail"
download_message = "Invalid or missing image format. Skipping..."
return_image_name = ""
absolute_path = ""
return (
download_status,
download_message,
return_image_name,
absolute_path,
)
if image_name.lower().find("." + image_format) < 0:
image_name = image_name + "." + image_format
else:
image_name = image_name[
: image_name.lower().find("." + image_format)
+ (len(image_format) + 1)
]
# prefix name in image
if prefix:
prefix = prefix + " "
else:
prefix = ""
if no_numbering:
path = main_directory + "/" + dir_name + "/" + prefix + image_name
else:
path = (
main_directory
+ "/"
+ dir_name
+ "/"
+ prefix
+ str(count)
+ "."
+ image_name
)
try:
output_file = open(path, "wb")
output_file.write(data)
output_file.close()
if save_source:
list_path = main_directory + "/" + save_source + ".txt"
list_file = open(list_path, "a")
list_file.write(path + "\t" + img_src + "\n")
list_file.close()
absolute_path = os.path.abspath(path)
except OSError as e:
download_status = "fail"
download_message = (
"OSError on an image...trying next one..." + " Error: " + str(e)
)
return_image_name = ""
absolute_path = ""
# return image name back to calling method to use it for
# thumbnail downloads
download_status = "success"
download_message = (
"Completed Image ====> " + prefix + str(count) + "." + image_name
)
return_image_name = prefix + str(count) + "." + image_name
# image size parameter
if not silent_mode:
if print_size:
print("Image Size: " + str(self.file_size(path)))
except UnicodeEncodeError as e:
download_status = "fail"
download_message = (
"UnicodeEncodeError on an image...trying next one..."
+ " Error: "
+ str(e)
)
return_image_name = ""
absolute_path = ""
        except HTTPError as e:  # must precede URLError, since HTTPError subclasses it
            download_status = "fail"
            download_message = (
                "HTTPError on an image...trying next one..." + " Error: " + str(e)
            )
            return_image_name = ""
            absolute_path = ""
        except URLError as e:
            download_status = "fail"
            download_message = (
                "URLError on an image...trying next one..." + " Error: " + str(e)
            )
            return_image_name = ""
            absolute_path = ""
        except BadStatusLine as e:
            download_status = "fail"
            download_message = (
                "BadStatusLine on an image...trying next one..."
                + " Error: "
                + str(e)
            )
            return_image_name = ""
            absolute_path = ""
except ssl.CertificateError as e:
download_status = "fail"
download_message = (
"CertificateError on an image...trying next one..."
+ " Error: "
+ str(e)
)
return_image_name = ""
absolute_path = ""
except IOError as e: # If there is any IOError
download_status = "fail"
download_message = (
"IOError on an image...trying next one..." + " Error: " + str(e)
)
return_image_name = ""
absolute_path = ""
except IncompleteRead as e:
download_status = "fail"
download_message = (
"IncompleteReadError on an image...trying next one..."
+ " Error: "
+ str(e)
)
return_image_name = ""
absolute_path = ""
return download_status, download_message, return_image_name, absolute_path
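The extension handling at the top of `download_image` is compact but easy to misread. A minimal standalone sketch of the same logic (the helper name `normalize_image_name` is illustrative, not part of the library):

```python
def normalize_image_name(image_name, image_format):
    # No extension anywhere in the name: append one.
    if image_name.lower().find("." + image_format) < 0:
        return image_name + "." + image_format
    # Extension present (possibly followed by query junk): cut right after it.
    return image_name[: image_name.lower().find("." + image_format) + len(image_format) + 1]

print(normalize_image_name("photo", "png"))            # photo.png
print(normalize_image_name("photo.JPG?w=640", "jpg"))  # photo.JPG
```

Note the lowercase search combined with a slice of the original string, so the name's original casing is preserved while trailing URL query parameters are dropped.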
# Finding 'Next Image' from the given raw page
def _get_next_item(self, s):
start_line = s.find("rg_meta notranslate")
if start_line == -1: # If no links are found then give an error!
end_quote = 0
link = "no_links"
return link, end_quote
start_line = s.find('class="rg_meta notranslate">')
start_object = s.find("{", start_line + 1)
end_object = s.find("</div>", start_object + 1)
object_raw = str(s[start_object:end_object])
# remove escape characters based on python version
try:
object_decode = bytes(object_raw, "utf-8").decode("unicode_escape")
final_object = json.loads(object_decode)
except BaseException:
final_object = ""
return final_object, end_object
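In outline, `_get_next_item` locates an `rg_meta` div and parses the JSON blob between the first `{` and the closing `</div>`. A simplified, offline sketch of that extraction (without the `unicode_escape` decoding step, and against a made-up sample page):

```python
import json

def extract_rg_meta(page):
    # Find the metadata div, then slice out the JSON object between '{' and '</div>'.
    start_line = page.find('class="rg_meta notranslate">')
    if start_line == -1:
        return "no_links", 0
    start_object = page.find("{", start_line + 1)
    end_object = page.find("</div>", start_object + 1)
    return json.loads(page[start_object:end_object]), end_object

sample = '<div class="rg_meta notranslate">{"ou": "http://example.com/a.jpg", "ity": "jpg"}</div>'
obj, _ = extract_rg_meta(sample)
print(obj["ou"])  # http://example.com/a.jpg
```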
# Getting all links with the help of '_images_get_next_image'
def _get_image_objects(self, s):
start_line = s.find("AF_initDataCallback({key: \\'ds:1\\'") - 10
start_object = s.find("[", start_line + 1)
end_object = s.find("</script>", start_object + 1) - 4
object_raw = str(s[start_object:end_object])
object_decode = bytes(object_raw[:-1], "utf-8").decode("unicode_escape")
# LOGS.info(_format.paste_text(object_decode[:-15]))
return json.loads(object_decode[:-15])[31][0][12][2]
def _get_all_items(self, page, main_directory, dir_name, limit, arguments):
items = []
abs_path = []
errorCount = 0
i = 0
count = 1
# LOGS.info(f"page : {_format.paste_text(page)}")
image_objects = self._get_image_objects(page)
while count < limit + 1:
if i >= len(image_objects):  # covers both an empty result set and an exhausted one
print("no_links")
break
else:
# format the item for readability
object = self.format_object(image_objects[i])
if arguments["metadata"] and not arguments["silent_mode"]:
print("\nImage Metadata: " + str(object))
# download the images
(
download_status,
download_message,
return_image_name,
absolute_path,
) = self.download_image(
object["image_link"],
object["image_format"],
main_directory,
dir_name,
count,
arguments["print_urls"],
arguments["socket_timeout"],
arguments["prefix"],
arguments["print_size"],
arguments["no_numbering"],
arguments["no_download"],
arguments["save_source"],
object["image_source"],
arguments["silent_mode"],
arguments["thumbnail_only"],
arguments["format"],
arguments["ignore_urls"],
)
if not arguments["silent_mode"]:
print(download_message)
if download_status == "success":
# download image_thumbnails
if arguments["thumbnail"] or arguments["thumbnail_only"]:
(
download_status,
download_message_thumbnail,
) = self.download_image_thumbnail(
object["image_thumbnail_url"],
main_directory,
dir_name,
return_image_name,
arguments["print_urls"],
arguments["socket_timeout"],
arguments["print_size"],
arguments["no_download"],
arguments["save_source"],
object["image_source"],
arguments["ignore_urls"],
)
if not arguments["silent_mode"]:
print(download_message_thumbnail)
count += 1
object["image_filename"] = return_image_name
# Append all the links in the list named 'Links'
items.append(object)
abs_path.append(absolute_path)
else:
errorCount += 1
# delay param
if arguments["delay"]:
time.sleep(int(arguments["delay"]))
i += 1
if count < limit:
print(
"\n\nUnfortunately all "
+ str(limit)
+ " images could not be downloaded because some images were not downloadable. "
+ str(count - 1)
+ " is all we got for this search filter!"
)
return items, errorCount, abs_path
# Bulk Download
def download(self, arguments):
paths_agg = {}
# for input coming from other python files
if __name__ != "__main__":
# if the calling file contains config_file param
if "config_file" in arguments:
records = []
json_file = json.load(open(arguments["config_file"]))
for item in json_file["Records"]:
arguments = {}
for i in args_list:
arguments[i] = None
for key, value in item.items():
arguments[key] = value
records.append(arguments)
total_errors = 0
for rec in records:
paths, errors = self.download_executor(rec)
for i in paths:
paths_agg[i] = paths[i]
if not arguments["silent_mode"] and arguments["print_paths"]:
print(str(paths).encode("raw_unicode_escape").decode("utf-8"))
total_errors += errors
return paths_agg, total_errors
# if the calling file contains params directly
paths, errors = self.download_executor(arguments)
for i in paths:
paths_agg[i] = paths[i]
if not arguments["silent_mode"] and arguments["print_paths"]:
print(str(paths).encode("raw_unicode_escape").decode("utf-8"))
return paths_agg, errors
# for input coming from CLI
paths, errors = self.download_executor(arguments)
for i in paths:
paths_agg[i] = paths[i]
if not arguments["silent_mode"] and arguments["print_paths"]:
print(str(paths).encode("raw_unicode_escape").decode("utf-8"))
return paths_agg, errors
def download_executor(self, arguments):
paths = {}
errorCount = None
for arg in args_list:
if arg not in arguments:
arguments[arg] = None
# Initialization and Validation of user arguments
if arguments["keywords"]:
search_keyword = [str(item) for item in arguments["keywords"].split(",")]
if arguments["keywords_from_file"]:
search_keyword = self.keywords_from_file(arguments["keywords_from_file"])
# Both time and time range should not be allowed in the same query
if arguments["time"] and arguments["time_range"]:
raise ValueError(
"Either time or time range should be used in a query. Both cannot be used at the same time."
)
# Both size and exact size should not be allowed in the same query
if arguments["size"] and arguments["exact_size"]:
raise ValueError(
'Either "size" or "exact_size" should be used in a query. Both cannot be used at the same time.'
)
# Both image directory and no image directory should not be allowed in
# the same query
if arguments["image_directory"] and arguments["no_directory"]:
raise ValueError(
"You can either specify image directory or specify no image directory, not both!"
)
# Additional words added to keywords
if arguments["suffix_keywords"]:
suffix_keywords = [
" " + str(sk) for sk in arguments["suffix_keywords"].split(",")
]
else:
suffix_keywords = [""]
# Additional words added to keywords
if arguments["prefix_keywords"]:
prefix_keywords = [
str(sk) + " " for sk in arguments["prefix_keywords"].split(",")
]
else:
prefix_keywords = [""]
# Setting limit on number of images to be downloaded
limit = int(arguments["limit"]) if arguments["limit"] else 100
if arguments["url"]:
current_time = str(datetime.datetime.now()).split(".")[0]
search_keyword = [current_time.replace(":", "_")]
if arguments["similar_images"]:
current_time = str(datetime.datetime.now()).split(".")[0]
search_keyword = [current_time.replace(":", "_")]
# If single_image or url argument not present then keywords is
# mandatory argument
if (
arguments["single_image"] is None
and arguments["url"] is None
and arguments["similar_images"] is None
and arguments["keywords"] is None
and arguments["keywords_from_file"] is None
):
print(
"-------------------------------\n"
"Uh oh! Keywords is a required argument \n\n"
"Please refer to the documentation on guide to writing queries \n"
"https://github.com/hardikvasa/google-images-download#examples"
"\n\nexiting!\n"
"-------------------------------"
)
sys.exit()
# If this argument is present, set the custom output directory
if arguments["output_directory"]:
main_directory = arguments["output_directory"]
else:
main_directory = "downloads"
# Proxy settings
if arguments["proxy"]:
os.environ["http_proxy"] = arguments["proxy"]
os.environ["https_proxy"] = arguments["proxy"]
# Initialization Complete
total_errors = 0
for pky in prefix_keywords: # 1.for every prefix keywords
for sky in suffix_keywords: # 2.for every suffix keywords
for i in range(len(search_keyword)): # 3.for every main keyword
iteration = (
"\n"
+ "Item no.: "
+ str(i + 1)
+ " -->"
+ " Item name = "
+ (pky)
+ (search_keyword[i])
+ (sky)
)
if arguments["silent_mode"]:
print(
"Downloading images for: "
+ (pky)
+ (search_keyword[i])
+ (sky)
+ " ..."
)
else:
print(iteration.encode("raw_unicode_escape").decode("utf-8"))
print("Evaluating...")
search_term = pky + search_keyword[i] + sky
if arguments["image_directory"]:
dir_name = arguments["image_directory"]
elif arguments["no_directory"]:
dir_name = ""
else:
dir_name = search_term + (
"-" + arguments["color"] if arguments["color"] else ""
) # sub-directory
if not arguments["no_download"]:
self.create_directories(
main_directory,
dir_name,
arguments["thumbnail"],
arguments["thumbnail_only"],
) # create directories in OS
params = self.build_url_parameters(
arguments
) # building URL with params
url = self.build_search_url(
search_term,
params,
arguments["url"],
arguments["similar_images"],
arguments["specific_site"],
arguments["safe_search"],
) # building main search url
if limit < 101:
raw_html = self.download_page(url) # download page
else:
raw_html = self.download_extended_page(
url, arguments["chromedriver"]
)
if not arguments["silent_mode"]:
if arguments["no_download"]:
print("Getting URLs without downloading images...")
else:
print("Starting Download...")
items, errorCount, abs_path = self._get_all_items(
raw_html, main_directory, dir_name, limit, arguments
) # get all image items and download images
paths[pky + search_keyword[i] + sky] = abs_path
# dumps into a json file
if arguments["extract_metadata"]:
try:
if not os.path.exists("logs"):
os.makedirs("logs")
except OSError as e:
print(e)
with open(
"logs/" + search_keyword[i] + ".json", "w"
) as json_file:
json.dump(items, json_file, indent=4, sort_keys=True)
# Related images
if arguments["related_images"]:
print(
"\nGetting list of related keywords...this may take a few moments"
)
tabs = self.get_all_tabs(raw_html)
for key, value in tabs.items():
final_search_term = search_term + " - " + key
print("\nNow Downloading - " + final_search_term)
if limit < 101:
new_raw_html = self.download_page(
value
) # download page
else:
new_raw_html = self.download_extended_page(
value, arguments["chromedriver"]
)
self.create_directories(
main_directory,
final_search_term,
arguments["thumbnail"],
arguments["thumbnail_only"],
)
self._get_all_items(
new_raw_html,
main_directory,
search_term + " - " + key,
limit,
arguments,
)
total_errors += errorCount
if not arguments["silent_mode"]:
print("\nErrors: " + str(errorCount) + "\n")
return paths, total_errors
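`download_executor` expands every prefix/keyword/suffix combination into one search term per pass of the triple loop above. A self-contained sketch of that expansion (the function name `build_search_terms` is illustrative only):

```python
def build_search_terms(keywords, prefixes=None, suffixes=None):
    # Mirror the pky/sky handling: prefixes gain a trailing space, suffixes a leading one.
    prefixes = [p + " " for p in prefixes] if prefixes else [""]
    suffixes = [" " + s for s in suffixes] if suffixes else [""]
    return [p + k + s for p in prefixes for k in keywords for s in suffixes]

print(build_search_terms(["cat", "dog"], prefixes=["cute"], suffixes=["wallpaper"]))
# ['cute cat wallpaper', 'cute dog wallpaper']
```

With no prefixes or suffixes supplied, each list collapses to `[""]`, so the keywords pass through unchanged, matching the defaults in the code above.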
# ------------- Main Program -------------#
def main():
records = user_input()
total_errors = 0
t0 = time.time() # start the timer
for arguments in records:
if arguments["single_image"]: # Download Single Image using a URL
response = googleimagesdownload()
response.single_image(arguments["single_image"])
else: # or download multiple images based on keywords/keyphrase search
response = googleimagesdownload()
# wrapping response in a variable just for consistency
paths, errors = response.download(arguments)
total_errors = total_errors + errors
t1 = time.time() # stop the timer
# Calculating the total time required to crawl, find and download all
# the links of 60,000 images
total_time = t1 - t0
if not arguments["silent_mode"]:
print("\nEverything downloaded!")
print("Total errors: " + str(total_errors))
print("Total time taken: " + str(total_time) + " Seconds")
if __name__ == "__main__":
main()
|
Andencento
|
/Andencento-0.24.tar.gz/Andencento-0.24/userbot/helpers/google_image.py
|
google_image.py
|
import os
from ..utils import *
try:
import colour  # noqa: F401  # probe for the optional dependency instead of a bare pass
except ImportError:
os.system("pip install colour")
import asyncio
import re
import time
import PIL.ImageOps
import requests
from bs4 import BeautifulSoup
from PIL import Image
from telethon.errors.rpcerrorlist import YouBlockedUserError
from validators.url import url
MARGINS = [50, 150, 250, 350, 450]
# For using gifs, animated stickers and videos in some parts, this
# function takes a screenshot and stores it. Ported from userge.
async def take_screen_shot(video_file, output_directory, ttl):
# https://stackoverflow.com/a/13891070/4723940
out_put_file_name = output_directory + \
"/" + str(time.time()) + ".jpg"
file_genertor_command = [
"ffmpeg",
"-ss",
str(ttl),
"-i",
video_file,
"-vframes",
"1",
out_put_file_name
]
# width = "90"
process = await asyncio.create_subprocess_exec(
*file_genertor_command,
# stdout must a pipe to be accessible as process.stdout
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
)
# Wait for the subprocess to finish
stdout, stderr = await process.communicate()
e_response = stderr.decode().strip()
t_response = stdout.decode().strip()
if os.path.lexists(out_put_file_name):
return out_put_file_name
else:
logger.info(e_response)
logger.info(t_response)
return None
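`take_screen_shot` shells out to ffmpeg; the argv it assembles can be shown standalone (no ffmpeg is actually run here, and the helper name is illustrative):

```python
def screenshot_command(video_file, out_put_file_name, ttl):
    # -ss seeks to the timestamp, -vframes 1 grabs a single frame as the thumbnail.
    return [
        "ffmpeg",
        "-ss", str(ttl),
        "-i", video_file,
        "-vframes", "1",
        out_put_file_name,
    ]

print(screenshot_command("clip.mp4", "shot.jpg", 3))
```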
# https://github.com/Nekmo/telegram-upload/blob/master/telegram_upload/video.py#L26
async def cult_small_video(video_file, output_directory, start_time, end_time):
# https://stackoverflow.com/a/13891070/4723940
out_put_file_name = output_directory + \
"/" + str(round(time.time())) + ".mp4"
file_genertor_command = [
"ffmpeg",
"-i",
video_file,
"-ss",
start_time,
"-to",
end_time,
"-async",
"1",
"-strict",
"-2",
out_put_file_name
]
process = await asyncio.create_subprocess_exec(
*file_genertor_command,
# stdout must a pipe to be accessible as process.stdout
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
)
# Wait for the subprocess to finish
stdout, stderr = await process.communicate()
e_response = stderr.decode().strip()
t_response = stdout.decode().strip()
if os.path.lexists(out_put_file_name):
return out_put_file_name
else:
logger.info(e_response)
logger.info(t_response)
return None
async def make_gif(event, file):
chat = "@tgstogifAndencento"
async with event.client.conversation(chat) as conv:
try:
await silently_send_message(conv, "/start")
await event.client.send_file(chat, file)
response = await conv.get_response()
await event.client.send_read_acknowledge(conv.chat_id)
if response.text.startswith("Send me an animated sticker!"):
return "`This file is not supported`"
response = response if response.media else await conv.get_response()
hellresponse = response if response.media else await conv.get_response()
await event.client.send_read_acknowledge(conv.chat_id)
hellfile = await event.client.download_media(hellresponse, "./temp")
return await unzip(hellfile)
except YouBlockedUserError:
return "Unblock @tgstogifAndencento"
async def silently_send_message(conv, text):
await conv.send_message(text)
response = await conv.get_response()
await conv.mark_read(message=response)
return response
async def thumb_from_audio(audio_path, output):
await runcmd(f"ffmpeg -i {audio_path} -filter:v scale=500:500 -an {output}")
async def simpmusic(simp, QUALITY):
search = simp
headers = {'User-Agent': 'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)'}
html = requests.get('https://www.youtube.com/results?search_query=' + search, headers=headers).text
soup = BeautifulSoup(html, 'html.parser')
for link in soup.find_all('a'):
if '/watch?v=' in link.get('href'):
# May change when Youtube Website may get updated in the future.
video_link = link.get('href')
break
video_link = 'http://www.youtube.com/'+video_link
command = ('youtube-dl --extract-audio --audio-format mp3 --audio-quality ' + QUALITY + ' ' + video_link)
os.system(command)
song_dl = f"youtube-dl --force-ipv4 --write-thumbnail -o './temp/%(title)s.%(ext)s' --extract-audio --audio-format mp3 --audio-quality {QUALITY} {video_link}"
thumb_dl = f"youtube-dl --force-ipv4 -o './temp/%(title)s.%(ext)s' --write-thumbnail --skip-download {video_link}"
video_dl = f"youtube-dl --force-ipv4 --write-thumbnail -o './temp/%(title)s.%(ext)s' -f '[filesize<20M]' {video_link}"
name_dl = (
f"youtube-dl --force-ipv4 --get-filename -o './temp/%(title)s.%(ext)s' {video_link}"
)
async def simpmusicvideo(simp):
search = simp
headers = {'User-Agent': 'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)'}
html = requests.get('https://www.youtube.com/results?search_query=' + search, headers=headers).text
soup = BeautifulSoup(html, 'html.parser')
for link in soup.find_all('a'):
if '/watch?v=' in link.get('href'):
# May change when Youtube Website may get updated in the future.
video_link = link.get('href')
break
video_link = 'http://www.youtube.com/'+video_link
command = ('youtube-dl -f "[filesize<20M]" ' +video_link)
os.system(command)
#convertion..
def convert_toimage(image):
img = Image.open(image)
if img.mode != "RGB":
img = img.convert("RGB")
img.save("./temp/temp.jpg", "jpeg")
os.remove(image)
return "./temp/temp.jpg"
async def convert_tosticker(image):
img = Image.open(image)
if img.mode != "RGB":
img = img.convert("RGB")
img.save("./temp/temp.webp", "webp")
os.remove(image)
return "./temp/temp.webp"
async def invert_colors(imagefile, endname):
image = Image.open(imagefile)
inverted_image = PIL.ImageOps.invert(image)
inverted_image.save(endname)
async def flip_image(imagefile, endname):
image = Image.open(imagefile)
inverted_image = PIL.ImageOps.flip(image)
inverted_image.save(endname)
async def grayscale(imagefile, endname):
image = Image.open(imagefile)
inverted_image = PIL.ImageOps.grayscale(image)
inverted_image.save(endname)
async def mirror_file(imagefile, endname):
image = Image.open(imagefile)
inverted_image = PIL.ImageOps.mirror(image)
inverted_image.save(endname)
async def solarize(imagefile, endname):
image = Image.open(imagefile)
inverted_image = PIL.ImageOps.solarize(image, threshold=128)
inverted_image.save(endname)
async def iphonex(text):
r = requests.get(f"https://nekoAndencento.xyz/api/imagegen?type=iphonex&url={text}").json()
legendx22 = r.get("message")
hellurl = url(legendx22)
if not hellurl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(legendx22).content)
img = Image.open("temp.png").convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
async def baguette(text):
r = requests.get(
f"https://nekoAndencento.xyz/api/imagegen?type=baguette&url={text}"
).json()
legendx22 = r.get("message")
hellurl = url(legendx22)
if not hellurl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(legendx22).content)
img = Image.open("temp.png").convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
async def threats(text):
r = requests.get(f"https://nekoAndencento.xyz/api/imagegen?type=threats&url={text}").json()
legendx22 = r.get("message")
hellurl = url(legendx22)
if not hellurl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(legendx22).content)
img = Image.open("temp.png")
if img.mode != "RGB":
img = img.convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
async def lolice(text):
r = requests.get(f"https://nekoAndencento.xyz/api/imagegen?type=lolice&url={text}").json()
legendx22 = r.get("message")
hellurl = url(legendx22)
if not hellurl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(legendx22).content)
img = Image.open("temp.png")
if img.mode != "RGB":
img = img.convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
async def trash(text):
r = requests.get(f"https://nekoAndencento.xyz/api/imagegen?type=trash&url={text}").json()
legendx22 = r.get("message")
hellurl = url(legendx22)
if not hellurl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(legendx22).content)
img = Image.open("temp.png")
if img.mode != "RGB":
img = img.convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
async def awooify(text):
r = requests.get(f"https://nekoAndencento.xyz/api/imagegen?type=awooify&url={text}").json()
legendx22 = r.get("message")
hellurl = url(legendx22)
if not hellurl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(legendx22).content)
img = Image.open("temp.png")
if img.mode != "RGB":
img = img.convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
async def trap(text1, text2, text3):
r = requests.get(
f"https://nekoAndencento.xyz/api/imagegen?type=trap&name={text1}&author={text2}&image={text3}"
).json()
legendx22 = r.get("message")
hellurl = url(legendx22)
if not hellurl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(legendx22).content)
img = Image.open("temp.png")
if img.mode != "RGB":
img = img.convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
async def phcomment(text1, text2, text3):
r = requests.get(
f"https://nekoAndencento.xyz/api/imagegen?type=phcomment&image={text1}&text={text2}&username={text3}"
).json()
legendx22 = r.get("message")
hellurl = url(legendx22)
if not hellurl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(legendx22).content)
img = Image.open("temp.png")
if img.mode != "RGB":
img = img.convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
#tweets...
#source - https://nekoAndencento.xyz/api
async def trumptweet(text):
r = requests.get(
f"https://nekoAndencento.xyz/api/imagegen?type=trumptweet&text={text}").json()
wew = r.get("message")
hburl = url(wew)
if not hburl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(wew).content)
img = Image.open("temp.png").convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
async def changemymind(text):
r = requests.get(
f"https://nekoAndencento.xyz/api/imagegen?type=changemymind&text={text}").json()
wew = r.get("message")
hburl = url(wew)
if not hburl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(wew).content)
img = Image.open("temp.png").convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
async def kannagen(text):
r = requests.get(
f"https://nekoAndencento.xyz/api/imagegen?type=kannagen&text={text}").json()
wew = r.get("message")
hburl = url(wew)
if not hburl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(wew).content)
img = Image.open("temp.png").convert("RGB")
img.save("temp.webp", "webp")
return "temp.webp"
async def moditweet(text):
r = requests.get(
f"https://nekoAndencento.xyz/api/imagegen?type=tweet&text={text}&username=narendramodi").json()
wew = r.get("message")
hburl = url(wew)
if not hburl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(wew).content)
img = Image.open("temp.png").convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
async def miatweet(text):
r = requests.get(
f"https://nekoAndencento.xyz/api/imagegen?type=tweet&text={text}&username=miakhalifa").json()
wew = r.get("message")
hburl = url(wew)
if not hburl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(wew).content)
img = Image.open("temp.png").convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
async def papputweet(text):
r = requests.get(
f"https://nekoAndencento.xyz/api/imagegen?type=tweet&text={text}&username=rahulgandhi").json()
wew = r.get("message")
hburl = url(wew)
if not hburl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(wew).content)
img = Image.open("temp.png").convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
async def sunnytweet(text):
r = requests.get(
f"https://nekoAndencento.xyz/api/imagegen?type=tweet&text={text}&username=sunnyleone").json()
wew = r.get("message")
hburl = url(wew)
if not hburl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(wew).content)
img = Image.open("temp.png").convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
async def sinstweet(text):
r = requests.get(
f"https://nekoAndencento.xyz/api/imagegen?type=tweet&text={text}&username=johnnysins").json()
wew = r.get("message")
hburl = url(wew)
if not hburl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(wew).content)
img = Image.open("temp.png").convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
async def taklatweet(text):
r = requests.get(
f"https://nekoAndencento.xyz/api/imagegen?type=tweet&text={text}&username=Mahatma_Gandhi_").json()
wew = r.get("message")
hburl = url(wew)
if not hburl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(wew).content)
img = Image.open("temp.png").convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
# no offense pliz -_-
async def tweets(text1,text2):
r = requests.get(
f"https://nekoAndencento.xyz/api/imagegen?type=tweet&text={text1}&username={text2}").json()
wew = r.get("message")
hburl = url(wew)
if not hburl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(wew).content)
img = Image.open("temp.png").convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
#sticker text
EMOJI_PATTERN = re.compile(
"["
"\U0001F1E0-\U0001F1FF" # flags (iOS)
"\U0001F300-\U0001F5FF" # symbols & pictographs
"\U0001F600-\U0001F64F" # emoticons
"\U0001F680-\U0001F6FF" # transport & map symbols
"\U0001F700-\U0001F77F" # alchemical symbols
"\U0001F780-\U0001F7FF" # Geometric Shapes Extended
"\U0001F800-\U0001F8FF" # Supplemental Arrows-C
"\U0001F900-\U0001F9FF" # Supplemental Symbols and Pictographs
"\U0001FA00-\U0001FA6F" # Chess Symbols
"\U0001FA70-\U0001FAFF" # Symbols and Pictographs Extended-A
"\U00002702-\U000027B0" # Dingbats
"]+"
)
def deEmojify(inputString: str) -> str:
"""Remove emojis and other non-safe characters from string"""
return re.sub(EMOJI_PATTERN, "", inputString)
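A quick self-contained check of the stripping behaviour, using a trimmed copy of the pattern (only three of the ranges above) so the snippet stands alone:

```python
import re

EMOJI_PATTERN = re.compile(
    "["
    "\U0001F300-\U0001F5FF"  # symbols & pictographs
    "\U0001F600-\U0001F64F"  # emoticons
    "\U0001F680-\U0001F6FF"  # transport & map symbols
    "]+"
)

def deEmojify(input_string: str) -> str:
    return re.sub(EMOJI_PATTERN, "", input_string)

print(repr(deEmojify("deploy 🚀 done 😀")))  # 'deploy  done '
```

The emoji code points are removed but the surrounding spaces are kept, which is why a double space remains where an emoji sat between words.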
|
Andencento
|
/Andencento-0.24.tar.gz/Andencento-0.24/userbot/helpers/func.py
|
func.py
|
import os
try:
import colour  # noqa: F401  # probe for the optional dependency instead of a bare pass
except ImportError:
os.system("pip install colour")
import requests
from PIL import Image
from validators.url import url
# ifone xxx
async def iphonex(text):
r = requests.get(f"https://nekoAndencento.xyz/api/imagegen?type=iphonex&url={text}").json()
kraken = r.get("message")
Eivaurl = url(kraken)
if not Eivaurl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(kraken).content)
img = Image.open("temp.png").convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
# eat this
async def baguette(text):
r = requests.get(
f"https://nekoAndencento.xyz/api/imagegen?type=baguette&url={text}"
).json()
kraken = r.get("message")
Eivaurl = url(kraken)
if not Eivaurl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(kraken).content)
img = Image.open("temp.png").convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
# 3 threats to society
async def threats(text):
r = requests.get(f"https://nekoAndencento.xyz/api/imagegen?type=threats&url={text}").json()
kraken = r.get("message")
Eivaurl = url(kraken)
if not Eivaurl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(kraken).content)
img = Image.open("temp.png")
if img.mode != "RGB":
img = img.convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
# r u lolicon?
async def lolice(text):
r = requests.get(f"https://nekoAndencento.xyz/api/imagegen?type=lolice&url={text}").json()
kraken = r.get("message")
Eivaurl = url(kraken)
if not Eivaurl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(kraken).content)
img = Image.open("temp.png")
if img.mode != "RGB":
img = img.convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
# this shit is trash
async def trash(text):
r = requests.get(f"https://nekoAndencento.xyz/api/imagegen?type=trash&url={text}").json()
kraken = r.get("message")
Eivaurl = url(kraken)
if not Eivaurl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(kraken).content)
img = Image.open("temp.png")
if img.mode != "RGB":
img = img.convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
# OwO
async def awooify(text):
r = requests.get(f"https://nekoAndencento.xyz/api/imagegen?type=awooify&url={text}").json()
kraken = r.get("message")
Eivaurl = url(kraken)
if not Eivaurl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(kraken).content)
img = Image.open("temp.png")
if img.mode != "RGB":
img = img.convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
# use your trap card
async def trap(text1, text2, text3):
r = requests.get(
f"https://nekoAndencento.xyz/api/imagegen?type=trap&name={text1}&author={text2}&image={text3}"
).json()
kraken = r.get("message")
Eivaurl = url(kraken)
if not Eivaurl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(kraken).content)
img = Image.open("temp.png")
if img.mode != "RGB":
img = img.convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
# cornhub 🌽
async def phcomment(text1, text2, text3):
r = requests.get(
f"https://nekoAndencento.xyz/api/imagegen?type=phcomment&image={text1}&text={text2}&username={text3}"
).json()
kraken = r.get("message")
Eivaurl = url(kraken)
if not Eivaurl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(kraken).content)
img = Image.open("temp.png")
if img.mode != "RGB":
img = img.convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
|
Andencento
|
/Andencento-0.24.tar.gz/Andencento-0.24/userbot/helpers/pranks.py
|
pranks.py
|
import math
import re
import time
from .. import *
from ..config import Config
from ..helpers import *
from ..utils import *
async def reply_id(event):
reply_to_id = None
if event.sender_id in Config.SUDO_USERS:
reply_to_id = event.id
if event.reply_to_msg_id:
reply_to_id = event.reply_to_msg_id
return reply_to_id
# let's see the progress
async def progress(
current, total, event, start, type_of_ps, file_name=None, is_cancelled=None
):
"""Generic progress_callback for uploads and downloads.""" # edit this docstring to your need. If you are kanging it. Lol
now = time.time()
diff = now - start
if is_cancelled is True:
raise CancelProcess
if round(diff % 10.00) == 0 or current == total:
percentage = current * 100 / total
speed = current / diff
elapsed_time = round(diff) * 1000
time_to_completion = round((total - current) / speed) * 1000
estimated_total_time = elapsed_time + time_to_completion
progress_str = "[{0}{1}] {2}%\n".format(
"".join(["▰" for i in range(math.floor(percentage / 10))]),
"".join(["▱" for i in range(10 - math.floor(percentage / 10))]),
round(percentage, 2),
)
tmp = progress_str + "{0} of {1}\nETA: {2}".format(
humanbytes(current), humanbytes(total), time_formatter(estimated_total_time)
)
if file_name:
await event.edit(
"{}\nFile Name: `{}`\n{}".format(type_of_ps, file_name, tmp)
)
else:
await event.edit("{}\n{}".format(type_of_ps, tmp))
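The ▰/▱ bar assembled inside `progress` can be exercised on its own; a minimal sketch of just the string construction (the helper name `progress_bar` is illustrative):

```python
import math

def progress_bar(current, total):
    percentage = current * 100 / total
    filled = math.floor(percentage / 10)
    # Ten cells total: filled blocks first, empty blocks after.
    return "[{0}{1}] {2}%".format("▰" * filled, "▱" * (10 - filled), round(percentage, 2))

print(progress_bar(450, 1000))  # [▰▰▰▰▱▱▱▱▱▱] 45.0%
```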
# gets output in readable format
def humanbytes(size):
"""Input size in bytes,
outputs in a human readable format"""
if not size:
return ""
# 2 ** 10 = 1024
power = 2 ** 10
raised_to_pow = 0
dict_power_n = {0: "", 1: "Ki", 2: "Mi", 3: "Gi", 4: "Ti"}
while size > power:
size /= power
raised_to_pow += 1
return str(round(size, 2)) + " " + dict_power_n[raised_to_pow] + "B"
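`humanbytes` divides by 1024 until the value fits under one unit step; a copy small enough to verify inline:

```python
def humanbytes(size):
    if not size:
        return ""
    power = 2 ** 10  # 1024
    raised_to_pow = 0
    dict_power_n = {0: "", 1: "Ki", 2: "Mi", 3: "Gi", 4: "Ti"}
    while size > power:
        size /= power
        raised_to_pow += 1
    return str(round(size, 2)) + " " + dict_power_n[raised_to_pow] + "B"

print(humanbytes(5 * 2 ** 20))  # 5.0 MiB
print(humanbytes(1536))         # 1.5 KiB
```

Because the loop condition is strictly greater-than, a value of exactly 1024 stays at "1024 B" rather than being promoted to "1.0 KiB".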
# ok! But Wtf
def human_to_bytes(size: str) -> int:
units = {
"M": 2 ** 20,
"MB": 2 ** 20,
"G": 2 ** 30,
"GB": 2 ** 30,
"T": 2 ** 40,
"TB": 2 ** 40,
}
size = size.upper()
if not re.match(r" ", size):
size = re.sub(r"([KMGT])", r" \1", size)
number, unit = [string.strip() for string in size.split()]
return int(float(number) * units[unit])
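`human_to_bytes` goes the other way. Note that only M/G/T units are mapped, so e.g. `"1.5GB"` works but `"2K"` would raise `KeyError`. A runnable copy:

```python
import re

def human_to_bytes(size: str) -> int:
    units = {"M": 2 ** 20, "MB": 2 ** 20, "G": 2 ** 30, "GB": 2 ** 30, "T": 2 ** 40, "TB": 2 ** 40}
    size = size.upper()
    if not re.match(r" ", size):
        size = re.sub(r"([KMGT])", r" \1", size)  # put a space before the unit letter
    number, unit = [string.strip() for string in size.split()]
    return int(float(number) * units[unit])

print(human_to_bytes("1.5GB"))  # 1610612736
print(human_to_bytes("512M"))   # 536870912
```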
# Inputs time in milliseconds
# to get beautified time
# basically a time string
def time_formatter(milliseconds: int) -> str:
seconds, milliseconds = divmod(int(milliseconds), 1000)
minutes, seconds = divmod(seconds, 60)
hours, minutes = divmod(minutes, 60)
days, hours = divmod(hours, 24)
tmp = (
((str(days) + " day(s), ") if days else "")
+ ((str(hours) + " hour(s), ") if hours else "")
+ ((str(minutes) + " minute(s), ") if minutes else "")
+ ((str(seconds) + " second(s), ") if seconds else "")
+ ((str(milliseconds) + " millisecond(s), ") if milliseconds else "")
)
return tmp[:-2]
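Both formatters are pure functions, so their behaviour is easy to sanity-check in isolation; the mirrors below are reproduced inline only so the snippet runs standalone:

```python
def humanbytes(size):
    # Mirror of humanbytes() above: bytes -> human-readable string.
    if not size:
        return ""
    power = 2 ** 10
    raised_to_pow = 0
    dict_power_n = {0: "", 1: "Ki", 2: "Mi", 3: "Gi", 4: "Ti"}
    while size > power:
        size /= power
        raised_to_pow += 1
    return str(round(size, 2)) + " " + dict_power_n[raised_to_pow] + "B"

def time_formatter(milliseconds: int) -> str:
    # Mirror of time_formatter() above: milliseconds -> readable duration.
    seconds, milliseconds = divmod(int(milliseconds), 1000)
    minutes, seconds = divmod(seconds, 60)
    hours, minutes = divmod(minutes, 60)
    days, hours = divmod(hours, 24)
    tmp = (
        ((str(days) + " day(s), ") if days else "")
        + ((str(hours) + " hour(s), ") if hours else "")
        + ((str(minutes) + " minute(s), ") if minutes else "")
        + ((str(seconds) + " second(s), ") if seconds else "")
        + ((str(milliseconds) + " millisecond(s), ") if milliseconds else "")
    )
    return tmp[:-2]

print(humanbytes(5 * 2 ** 20))   # 5.0 MiB
print(time_formatter(3661000))   # 1 hour(s), 1 minute(s), 1 second(s)
```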
# ---- end of userbot/helpers/progress.py (Andencento-0.24) ----
import os
try:
    import colour  # noqa: F401
except ImportError:
    os.system("pip install colour")
import requests
from PIL import Image
from validators.url import url
# lost president. Sed loif
async def trumptweet(text):
r = requests.get(
f"https://nekoAndencento.xyz/api/imagegen?type=trumptweet&text={text}"
).json()
wew = r.get("message")
hburl = url(wew)
if not hburl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(wew).content)
img = Image.open("temp.png").convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
# change my mind 👀
async def changemymind(text):
r = requests.get(
f"https://nekoAndencento.xyz/api/imagegen?type=changemymind&text={text}"
).json()
wew = r.get("message")
hburl = url(wew)
if not hburl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(wew).content)
img = Image.open("temp.png").convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
# kanna says
async def kannagen(text):
r = requests.get(
f"https://nekoAndencento.xyz/api/imagegen?type=kannagen&text={text}"
).json()
wew = r.get("message")
hburl = url(wew)
if not hburl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(wew).content)
img = Image.open("temp.png").convert("RGB")
img.save("temp.webp", "webp")
return "temp.webp"
# Na-Mo
async def moditweet(text):
r = requests.get(
f"https://nekoAndencento.xyz/api/imagegen?type=tweet&text={text}&username=narendramodi"
).json()
wew = r.get("message")
hburl = url(wew)
if not hburl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(wew).content)
img = Image.open("temp.png").convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
# mia aunty. 💞
async def miatweet(text):
r = requests.get(
f"https://nekoAndencento.xyz/api/imagegen?type=tweet&text={text}&username=miakhalifa"
).json()
wew = r.get("message")
hburl = url(wew)
if not hburl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(wew).content)
img = Image.open("temp.png").convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
# dani forever 🙂💞
async def dani(text):
r = requests.get(
f"https://nekoAndencento.xyz/api/imagegen?type=tweet&text={text}&username=dani_daniels___"
).json()
wew = r.get("message")
hburl = url(wew)
if not hburl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(wew).content)
img = Image.open("temp.png").convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
# you know what it is
async def papputweet(text):
r = requests.get(
f"https://nekoAndencento.xyz/api/imagegen?type=tweet&text={text}&username=rahulgandhi"
).json()
wew = r.get("message")
hburl = url(wew)
if not hburl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(wew).content)
img = Image.open("temp.png").convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
# nothing better than this
async def sunnytweet(text):
r = requests.get(
f"https://nekoAndencento.xyz/api/imagegen?type=tweet&text={text}&username=sunnyleone"
).json()
wew = r.get("message")
hburl = url(wew)
if not hburl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(wew).content)
img = Image.open("temp.png").convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
# commit a sin
async def sinstweet(text):
r = requests.get(
f"https://nekoAndencento.xyz/api/imagegen?type=tweet&text={text}&username=johnnysins"
).json()
wew = r.get("message")
hburl = url(wew)
if not hburl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(wew).content)
img = Image.open("temp.png").convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
# divider ("No offense plox")
async def taklatweet(text):
r = requests.get(
f"https://nekoAndencento.xyz/api/imagegen?type=tweet&text={text}&username=Mahatma_Gandhi_"
).json()
wew = r.get("message")
hburl = url(wew)
if not hburl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(wew).content)
img = Image.open("temp.png").convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
# make your own tweet
async def tweets(text1, text2):
r = requests.get(
f"https://nekoAndencento.xyz/api/imagegen?type=tweet&text={text1}&username={text2}"
).json()
wew = r.get("message")
hburl = url(wew)
if not hburl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(wew).content)
img = Image.open("temp.png").convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
# ---- end of userbot/helpers/tweet.py (Andencento-0.24) ----
import os
import textwrap
from PIL import Image, ImageDraw, ImageFont
async def draw_meme_text(image_path, text):
img = Image.open(image_path)
os.remove(image_path)
i_width, i_height = img.size
m_font = ImageFont.truetype(
"hellAndencento/resources/fonts/impact.ttf", int((70 / 640) * i_width)
)
if ";" in text:
upper_text, lower_text = text.split(";")
else:
upper_text = text
lower_text = ""
draw = ImageDraw.Draw(img)
current_h, pad = 10, 5
if upper_text:
for u_text in textwrap.wrap(upper_text, width=15):
u_width, u_height = draw.textsize(u_text, font=m_font)
draw.text(
xy=(((i_width - u_width) / 2) - 1, int((current_h / 640) * i_width)),
text=u_text,
font=m_font,
fill=(0, 0, 0),
)
draw.text(
xy=(((i_width - u_width) / 2) + 1, int((current_h / 640) * i_width)),
text=u_text,
font=m_font,
fill=(0, 0, 0),
)
draw.text(
xy=((i_width - u_width) / 2, int(((current_h / 640) * i_width)) - 1),
text=u_text,
font=m_font,
fill=(0, 0, 0),
)
draw.text(
xy=(((i_width - u_width) / 2), int(((current_h / 640) * i_width)) + 1),
text=u_text,
font=m_font,
fill=(0, 0, 0),
)
draw.text(
xy=((i_width - u_width) / 2, int((current_h / 640) * i_width)),
text=u_text,
font=m_font,
fill=(255, 255, 255),
)
current_h += u_height + pad
if lower_text:
for l_text in textwrap.wrap(lower_text, width=15):
u_width, u_height = draw.textsize(l_text, font=m_font)
draw.text(
xy=(
((i_width - u_width) / 2) - 1,
i_height - u_height - int((80 / 640) * i_width),
),
text=l_text,
font=m_font,
fill=(0, 0, 0),
)
draw.text(
xy=(
((i_width - u_width) / 2) + 1,
i_height - u_height - int((80 / 640) * i_width),
),
text=l_text,
font=m_font,
fill=(0, 0, 0),
)
draw.text(
xy=(
(i_width - u_width) / 2,
(i_height - u_height - int((80 / 640) * i_width)) - 1,
),
text=l_text,
font=m_font,
fill=(0, 0, 0),
)
draw.text(
xy=(
(i_width - u_width) / 2,
(i_height - u_height - int((80 / 640) * i_width)) + 1,
),
text=l_text,
font=m_font,
fill=(0, 0, 0),
)
draw.text(
xy=(
(i_width - u_width) / 2,
i_height - u_height - int((80 / 640) * i_width),
),
text=l_text,
font=m_font,
fill=(255, 255, 255),
)
current_h += u_height + pad
image_name = "hell.webp"
img.save(image_name, "WebP")
return image_name
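The caption handling above leans on two small pieces of stdlib behaviour: splitting top/bottom text on `;` and `textwrap.wrap(..., width=15)`. A standalone sketch of just that text-splitting step (the helper name here is illustrative, not part of the package):

```python
import textwrap

def split_meme_text(text):
    # Same convention as draw_meme_text: "top;bottom", wrapped to 15 chars.
    if ";" in text:
        upper_text, lower_text = text.split(";")
    else:
        upper_text, lower_text = text, ""
    return (
        textwrap.wrap(upper_text, width=15),
        textwrap.wrap(lower_text, width=15),
    )

top, bottom = split_meme_text("hello world this is a caption;bottom line")
print(top)     # ['hello world', 'this is a', 'caption']
print(bottom)  # ['bottom line']
```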
async def draw_meme(image_path, text):
img = Image.open(image_path)
os.remove(image_path)
i_width, i_height = img.size
m_font = ImageFont.truetype(
"hellAndencento/resources/fonts/impact.ttf", int((70 / 640) * i_width)
)
if ";" in text:
upper_text, lower_text = text.split(";")
else:
upper_text = text
lower_text = ""
draw = ImageDraw.Draw(img)
current_h, pad = 10, 5
if upper_text:
for u_text in textwrap.wrap(upper_text, width=15):
u_width, u_height = draw.textsize(u_text, font=m_font)
draw.text(
xy=(((i_width - u_width) / 2) - 1, int((current_h / 640) * i_width)),
text=u_text,
font=m_font,
fill=(0, 0, 0),
)
draw.text(
xy=(((i_width - u_width) / 2) + 1, int((current_h / 640) * i_width)),
text=u_text,
font=m_font,
fill=(0, 0, 0),
)
draw.text(
xy=((i_width - u_width) / 2, int(((current_h / 640) * i_width)) - 1),
text=u_text,
font=m_font,
fill=(0, 0, 0),
)
draw.text(
xy=(((i_width - u_width) / 2), int(((current_h / 640) * i_width)) + 1),
text=u_text,
font=m_font,
fill=(0, 0, 0),
)
draw.text(
xy=((i_width - u_width) / 2, int((current_h / 640) * i_width)),
text=u_text,
font=m_font,
fill=(255, 255, 255),
)
current_h += u_height + pad
if lower_text:
for l_text in textwrap.wrap(lower_text, width=15):
u_width, u_height = draw.textsize(l_text, font=m_font)
draw.text(
xy=(
((i_width - u_width) / 2) - 1,
i_height - u_height - int((20 / 640) * i_width),
),
text=l_text,
font=m_font,
fill=(0, 0, 0),
)
draw.text(
xy=(
((i_width - u_width) / 2) + 1,
i_height - u_height - int((20 / 640) * i_width),
),
text=l_text,
font=m_font,
fill=(0, 0, 0),
)
draw.text(
xy=(
(i_width - u_width) / 2,
(i_height - u_height - int((20 / 640) * i_width)) - 1,
),
text=l_text,
font=m_font,
fill=(0, 0, 0),
)
draw.text(
xy=(
(i_width - u_width) / 2,
(i_height - u_height - int((20 / 640) * i_width)) + 1,
),
text=l_text,
font=m_font,
fill=(0, 0, 0),
)
draw.text(
xy=(
(i_width - u_width) / 2,
i_height - u_height - int((20 / 640) * i_width),
),
text=l_text,
font=m_font,
fill=(255, 255, 255),
)
current_h += u_height + pad
lumd = "badass.png"
img.save(lumd, "png")
return lumd
# ---- end of userbot/helpers/mmf.py (Andencento-0.24) ----
import asyncio
import hashlib
import inspect
import logging
import math
import os
from collections import defaultdict
from typing import (AsyncGenerator, Awaitable, BinaryIO, DefaultDict, List,
Optional, Tuple, Union)
from telethon import TelegramClient, helpers, utils
from telethon.crypto import AuthKey
from telethon.errors import FloodWaitError
from telethon.network import MTProtoSender
from telethon.tl.alltlobjects import LAYER
from telethon.tl.functions import InvokeWithLayerRequest
from telethon.tl.functions.auth import (ExportAuthorizationRequest,
ImportAuthorizationRequest)
from telethon.tl.functions.upload import (GetFileRequest,
SaveBigFilePartRequest,
SaveFilePartRequest)
from telethon.tl.types import (Document, InputDocumentFileLocation, InputFile,
InputFileBig, InputFileLocation,
InputPeerPhotoFileLocation,
InputPhotoFileLocation, TypeInputFile)
try:
from mautrix.crypto.attachments import async_encrypt_attachment
except ImportError:
async_encrypt_attachment = None
log: logging.Logger = logging.getLogger("fasttelethon")
TypeLocation = Union[
Document,
InputDocumentFileLocation,
InputPeerPhotoFileLocation,
InputFileLocation,
InputPhotoFileLocation,
]
class DownloadSender:
client: TelegramClient
sender: MTProtoSender
request: GetFileRequest
remaining: int
stride: int
def __init__(
self,
client: TelegramClient,
sender: MTProtoSender,
file: TypeLocation,
offset: int,
limit: int,
stride: int,
count: int,
) -> None:
self.sender = sender
self.client = client
self.request = GetFileRequest(file, offset=offset, limit=limit)
self.stride = stride
self.remaining = count
async def next(self) -> Optional[bytes]:
if not self.remaining:
return None
while True:
try:
result = await self.client._call(self.sender, self.request)
except FloodWaitError as e:
await asyncio.sleep(e.seconds)
else:
break
self.remaining -= 1
self.request.offset += self.stride
return result.bytes
def disconnect(self) -> Awaitable[None]:
return self.sender.disconnect()
class UploadSender:
client: TelegramClient
sender: MTProtoSender
request: Union[SaveFilePartRequest, SaveBigFilePartRequest]
part_count: int
stride: int
previous: Optional[asyncio.Task]
loop: asyncio.AbstractEventLoop
def __init__(
self,
client: TelegramClient,
sender: MTProtoSender,
file_id: int,
part_count: int,
big: bool,
index: int,
stride: int,
loop: asyncio.AbstractEventLoop,
) -> None:
self.client = client
self.sender = sender
self.part_count = part_count
if big:
self.request = SaveBigFilePartRequest(file_id, index, part_count, b"")
else:
self.request = SaveFilePartRequest(file_id, index, b"")
self.stride = stride
self.previous = None
self.loop = loop
async def next(self, data: bytes) -> None:
if self.previous:
await self.previous
self.previous = self.loop.create_task(self._next(data))
async def _next(self, data: bytes) -> None:
self.request.bytes = data
log.debug(
f"Sending file part {self.request.file_part}/{self.part_count}"
f" with {len(data)} bytes"
)
await self.client._call(self.sender, self.request)
self.request.file_part += self.stride
async def disconnect(self) -> None:
if self.previous:
await self.previous
return await self.sender.disconnect()
class ParallelTransferrer:
client: TelegramClient
loop: asyncio.AbstractEventLoop
dc_id: int
senders: Optional[List[Union[DownloadSender, UploadSender]]]
auth_key: AuthKey
upload_ticker: int
def __init__(self, client: TelegramClient, dc_id: Optional[int] = None) -> None:
self.client = client
self.loop = self.client.loop
self.dc_id = dc_id or self.client.session.dc_id
self.auth_key = (
None
if dc_id and self.client.session.dc_id != dc_id
else self.client.session.auth_key
)
self.senders = None
self.upload_ticker = 0
async def _cleanup(self) -> None:
await asyncio.gather(*[sender.disconnect() for sender in self.senders])
self.senders = None
@staticmethod
def _get_connection_count(
file_size: int, max_count: int = 20, full_size: int = 100 * 1024 * 1024
) -> int:
if file_size > full_size:
return max_count
return math.ceil((file_size / full_size) * max_count)
async def _init_download(
self, connections: int, file: TypeLocation, part_count: int, part_size: int
) -> None:
minimum, remainder = divmod(part_count, connections)
def get_part_count() -> int:
nonlocal remainder
if remainder > 0:
remainder -= 1
return minimum + 1
return minimum
# The first cross-DC sender will export+import the authorization, so we always create it
# before creating any other senders.
self.senders = [
await self._create_download_sender(
file, 0, part_size, connections * part_size, get_part_count()
),
*await asyncio.gather(
*[
self._create_download_sender(
file, i, part_size, connections * part_size, get_part_count()
)
for i in range(1, connections)
]
),
]
async def _create_download_sender(
self,
file: TypeLocation,
index: int,
part_size: int,
stride: int,
part_count: int,
) -> DownloadSender:
return DownloadSender(
self.client,
await self._create_sender(),
file,
index * part_size,
part_size,
stride,
part_count,
)
async def _init_upload(
self, connections: int, file_id: int, part_count: int, big: bool
) -> None:
self.senders = [
await self._create_upload_sender(file_id, part_count, big, 0, connections),
*await asyncio.gather(
*[
self._create_upload_sender(file_id, part_count, big, i, connections)
for i in range(1, connections)
]
),
]
async def _create_upload_sender(
self, file_id: int, part_count: int, big: bool, index: int, stride: int
) -> UploadSender:
return UploadSender(
self.client,
await self._create_sender(),
file_id,
part_count,
big,
index,
stride,
loop=self.loop,
)
async def _create_sender(self) -> MTProtoSender:
dc = await self.client._get_dc(self.dc_id)
sender = MTProtoSender(self.auth_key, loggers=self.client._log)
await sender.connect(
self.client._connection(
dc.ip_address,
dc.port,
dc.id,
loggers=self.client._log,
proxy=self.client._proxy,
)
)
if not self.auth_key:
log.debug(f"Exporting auth to DC {self.dc_id}")
auth = await self.client(ExportAuthorizationRequest(self.dc_id))
self.client._init_request.query = ImportAuthorizationRequest(
id=auth.id, bytes=auth.bytes
)
req = InvokeWithLayerRequest(LAYER, self.client._init_request)
await sender.send(req)
self.auth_key = sender.auth_key
return sender
async def init_upload(
self,
file_id: int,
file_size: int,
part_size_kb: Optional[float] = None,
connection_count: Optional[int] = None,
) -> Tuple[int, int, bool]:
connection_count = connection_count or self._get_connection_count(file_size)
part_size = (part_size_kb or utils.get_appropriated_part_size(file_size)) * 1024
part_count = (file_size + part_size - 1) // part_size
is_large = file_size > 10 * 1024 * 1024
await self._init_upload(connection_count, file_id, part_count, is_large)
return part_size, part_count, is_large
async def upload(self, part: bytes) -> None:
await self.senders[self.upload_ticker].next(part)
self.upload_ticker = (self.upload_ticker + 1) % len(self.senders)
async def finish_upload(self) -> None:
await self._cleanup()
async def download(
self,
file: TypeLocation,
file_size: int,
part_size_kb: Optional[float] = None,
connection_count: Optional[int] = None,
) -> AsyncGenerator[bytes, None]:
connection_count = connection_count or self._get_connection_count(file_size)
part_size = (part_size_kb or utils.get_appropriated_part_size(file_size)) * 1024
part_count = math.ceil(file_size / part_size)
log.debug(
"Starting parallel download: "
f"{connection_count} {part_size} {part_count} {file!s}"
)
await self._init_download(connection_count, file, part_count, part_size)
part = 0
while part < part_count:
tasks = [self.loop.create_task(sender.next()) for sender in self.senders]
for task in tasks:
data = await task
if not data:
break
yield data
part += 1
log.debug(f"Part {part} downloaded")
log.debug("Parallel download finished, cleaning up connections")
await self._cleanup()
parallel_transfer_locks: DefaultDict[int, asyncio.Lock] = defaultdict(
lambda: asyncio.Lock()
)
def stream_file(file_to_stream: BinaryIO, chunk_size=1024):
while True:
data_read = file_to_stream.read(chunk_size)
if not data_read:
break
yield data_read
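Because stream_file is a plain generator over any file-like object, it can be checked without Telegram at all, for example with an in-memory buffer:

```python
import io

def stream_file(file_to_stream, chunk_size=1024):
    # Same logic as stream_file() above: yield fixed-size chunks until EOF.
    while True:
        data_read = file_to_stream.read(chunk_size)
        if not data_read:
            break
        yield data_read

chunks = list(stream_file(io.BytesIO(b"x" * 2500), chunk_size=1024))
print([len(c) for c in chunks])  # [1024, 1024, 452]
```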
async def _internal_transfer_to_telegram(
client: TelegramClient, response: BinaryIO, progress_callback: callable
) -> Tuple[TypeInputFile, int]:
file_id = helpers.generate_random_long()
file_size = os.path.getsize(response.name)
hash_md5 = hashlib.md5()
uploader = ParallelTransferrer(client)
part_size, part_count, is_large = await uploader.init_upload(file_id, file_size)
buffer = bytearray()
for data in stream_file(response):
if progress_callback:
r = progress_callback(response.tell(), file_size)
if inspect.isawaitable(r):
await r
if not is_large:
hash_md5.update(data)
if len(buffer) == 0 and len(data) == part_size:
await uploader.upload(data)
continue
new_len = len(buffer) + len(data)
if new_len >= part_size:
cutoff = part_size - len(buffer)
buffer.extend(data[:cutoff])
await uploader.upload(bytes(buffer))
buffer.clear()
buffer.extend(data[cutoff:])
else:
buffer.extend(data)
if len(buffer) > 0:
await uploader.upload(bytes(buffer))
await uploader.finish_upload()
if is_large:
return InputFileBig(file_id, part_count, "upload"), file_size
else:
return InputFile(file_id, part_count, "upload", hash_md5.hexdigest()), file_size
async def download_file(
client: TelegramClient,
location: TypeLocation,
out: BinaryIO,
progress_callback: callable = None,
) -> BinaryIO:
size = location.size
dc_id, location = utils.get_input_location(location)
# We lock the transfers because telegram has connection count limits
downloader = ParallelTransferrer(client, dc_id)
downloaded = downloader.download(location, size)
async for x in downloaded:
out.write(x)
if progress_callback:
r = progress_callback(out.tell(), size)
if inspect.isawaitable(r):
await r
return out
async def upload_file(
client: TelegramClient,
file: BinaryIO,
progress_callback: callable = None,
) -> TypeInputFile:
return (await _internal_transfer_to_telegram(client, file, progress_callback))[0]
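The `_get_connection_count` heuristic above scales the number of parallel senders linearly with file size, capping at `max_count` once the file reaches `full_size` (100 MiB). The arithmetic in isolation, as a standalone mirror:

```python
import math

def get_connection_count(file_size, max_count=20, full_size=100 * 1024 * 1024):
    # Mirror of ParallelTransferrer._get_connection_count above.
    if file_size > full_size:
        return max_count
    return math.ceil((file_size / full_size) * max_count)

print(get_connection_count(10 * 1024 * 1024))   # 2  (10 MiB -> 2 senders)
print(get_connection_count(500 * 1024 * 1024))  # 20 (capped at max_count)
```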
# ---- end of userbot/helpers/fasttelethon.py (Andencento-0.24) ----
import asyncio
import logging
import os
import time
import zipfile
try:
    import colour  # noqa: F401
except ImportError:
    os.system("pip install colour")
from telethon.errors.rpcerrorlist import YouBlockedUserError
logger = logging.getLogger(__name__)
# generate thumbnail from audio...
async def thumb_from_audio(audio_path, output):
    # runcmd is expected to be provided by the bot's helper module
    await runcmd(f"ffmpeg -i {audio_path} -filter:v scale=500:500 -an {output}")
# take a frame from video
async def take_screen_shot(video_file, output_directory, ttl):
# https://stackoverflow.com/a/13891070/4723940
out_put_file_name = output_directory + "/" + str(time.time()) + ".jpg"
file_genertor_command = [
"ffmpeg",
"-ss",
str(ttl),
"-i",
video_file,
"-vframes",
"1",
out_put_file_name,
]
# width = "90"
process = await asyncio.create_subprocess_exec(
*file_genertor_command,
        # stdout must be a pipe to be accessible as process.stdout
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
)
# Wait for the subprocess to finish
stdout, stderr = await process.communicate()
e_response = stderr.decode().strip()
t_response = stdout.decode().strip()
if os.path.lexists(out_put_file_name):
return out_put_file_name
else:
logger.info(e_response)
logger.info(t_response)
return None
# trim vids
async def cult_small_video(video_file, output_directory, start_time, end_time):
# https://stackoverflow.com/a/13891070/4723940
out_put_file_name = output_directory + "/" + str(round(time.time())) + ".mp4"
file_genertor_command = [
"ffmpeg",
"-i",
video_file,
"-ss",
start_time,
"-to",
end_time,
"-async",
"1",
"-strict",
"-2",
out_put_file_name,
]
process = await asyncio.create_subprocess_exec(
*file_genertor_command,
        # stdout must be a pipe to be accessible as process.stdout
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
)
# Wait for the subprocess to finish
stdout, stderr = await process.communicate()
e_response = stderr.decode().strip()
t_response = stdout.decode().strip()
if os.path.lexists(out_put_file_name):
return out_put_file_name
else:
logger.info(e_response)
logger.info(t_response)
return None
#####################################
# for animated sticker to gif.....
#
# unzipper
async def unzip(downloaded_file_name):
with zipfile.ZipFile(downloaded_file_name, "r") as zip_ref:
zip_ref.extractall("./temp")
downloaded_file_name = os.path.splitext(downloaded_file_name)[0]
return f"{downloaded_file_name}.gif"
# silent conv..
async def silently_send_message(conv, text):
await conv.send_message(text)
response = await conv.get_response()
await conv.mark_read(message=response)
return response
# makes animated sticker to gif
async def make_gif(event, file):
chat = "@tgstogifAndencento"
async with event.client.conversation(chat) as conv:
try:
await silently_send_message(conv, "/start")
await event.client.send_file(chat, file)
response = await conv.get_response()
await event.client.send_read_acknowledge(conv.chat_id)
if response.text.startswith("Send me an animated sticker!"):
return "`This file is not supported`"
            response = response if response.media else await conv.get_response()
            await event.client.send_read_acknowledge(conv.chat_id)
            andencentofile = await event.client.download_media(response, "./temp")
return await unzip(andencentofile)
except YouBlockedUserError:
return "Unblock @tgstogifAndencento"
# ---- end of userbot/helpers/videos.py (Andencento-0.24) ----
import asyncio
import logging
import os
import re
import time
import zipfile
try:
    import colour  # noqa: F401
except ImportError:
    os.system("pip install colour")
import PIL.ImageOps
import requests
from bs4 import BeautifulSoup
from PIL import Image
from telethon.errors.rpcerrorlist import YouBlockedUserError
from validators.url import url
logger = logging.getLogger(__name__)
MARGINS = [50, 150, 250, 350, 450]
# For gifs, animated stickers and videos, this function takes a screenshot
# of a frame and stores it (ported from userge).
async def take_screen_shot(video_file, output_directory, ttl):
# https://stackoverflow.com/a/13891070/4723940
out_put_file_name = output_directory + "/" + str(time.time()) + ".jpg"
file_genertor_command = [
"ffmpeg",
"-ss",
str(ttl),
"-i",
video_file,
"-vframes",
"1",
out_put_file_name,
]
# width = "90"
process = await asyncio.create_subprocess_exec(
*file_genertor_command,
        # stdout must be a pipe to be accessible as process.stdout
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
)
# Wait for the subprocess to finish
stdout, stderr = await process.communicate()
e_response = stderr.decode().strip()
t_response = stdout.decode().strip()
if os.path.lexists(out_put_file_name):
return out_put_file_name
else:
logger.info(e_response)
logger.info(t_response)
return None
# https://github.com/Nekmo/telegram-upload/blob/master/telegram_upload/video.py#L26
async def cult_small_video(video_file, output_directory, start_time, end_time):
# https://stackoverflow.com/a/13891070/4723940
out_put_file_name = output_directory + "/" + str(round(time.time())) + ".mp4"
file_genertor_command = [
"ffmpeg",
"-i",
video_file,
"-ss",
start_time,
"-to",
end_time,
"-async",
"1",
"-strict",
"-2",
out_put_file_name,
]
process = await asyncio.create_subprocess_exec(
*file_genertor_command,
        # stdout must be a pipe to be accessible as process.stdout
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
)
# Wait for the subprocess to finish
stdout, stderr = await process.communicate()
e_response = stderr.decode().strip()
t_response = stdout.decode().strip()
if os.path.lexists(out_put_file_name):
return out_put_file_name
else:
logger.info(e_response)
logger.info(t_response)
return None
async def make_gif(event, file):
chat = "@tgstogifAndencento"
async with event.client.conversation(chat) as conv:
try:
await silently_send_message(conv, "/start")
await event.client.send_file(chat, file)
response = await conv.get_response()
await event.client.send_read_acknowledge(conv.chat_id)
if response.text.startswith("Send me an animated sticker!"):
return "`This file is not supported`"
response = response if response.media else await conv.get_response()
W2Hresponse = response if response.media else await conv.get_response()
await event.client.send_read_acknowledge(conv.chat_id)
W2Hfile = await event.client.download_media(W2Hresponse, "./temp")
return await unzip(W2Hfile)
except YouBlockedUserError:
return "Unblock @tgstogifAndencento"
async def silently_send_message(conv, text):
await conv.send_message(text)
response = await conv.get_response()
await conv.mark_read(message=response)
return response
async def thumb_from_audio(audio_path, output):
    # runcmd is expected to be provided by the bot's helper module
    await runcmd(f"ffmpeg -i {audio_path} -filter:v scale=500:500 -an {output}")
async def simpmusic(simp, QUALITY):
search = simp
headers = {
"User-Agent": "Mozilla/5.0 (compatible; GoogleAndencento/2.1; +http://www.google.com/Andencento.html)"
}
html = requests.get(
"https://www.youtube.com/results?search_query=" + search, headers=headers
).text
soup = BeautifulSoup(html, "html.parser")
for link in soup.find_all("a"):
if "/watch?v=" in link.get("href"):
            # May break if the YouTube results page layout changes.
video_link = link.get("href")
break
video_link = "http://www.youtube.com/" + video_link
command = (
"youtube-dl --extract-audio --audio-format mp3 --audio-quality "
+ QUALITY
+ " "
+ video_link
)
os.system(command)
song_dl = "youtube-dl --force-ipv4 --write-thumbnail -o './temp/%(title)s.%(ext)s' --extract-audio --audio-format mp3 --audio-quality {QUALITY} {video_link}"
thumb_dl = "youtube-dl --force-ipv4 -o './temp/%(title)s.%(ext)s' --write-thumbnail --skip-download {video_link}"
video_dl = "youtube-dl --force-ipv4 --write-thumbnail -o './temp/%(title)s.%(ext)s' -f '[filesize<20M]' {video_link}"
name_dl = (
"youtube-dl --force-ipv4 --get-filename -o './temp/%(title)s.%(ext)s' {video_link}"
)
async def simpmusicvideo(simp):
search = simp
headers = {
"User-Agent": "Mozilla/5.0 (compatible; GoogleAndencento/2.1; +http://www.google.com/Andencento.html)"
}
html = requests.get(
"https://www.youtube.com/results?search_query=" + search, headers=headers
).text
soup = BeautifulSoup(html, "html.parser")
for link in soup.find_all("a"):
if "/watch?v=" in link.get("href"):
            # May break if the YouTube results page layout changes.
video_link = link.get("href")
break
video_link = "http://www.youtube.com/" + video_link
command = 'youtube-dl -f "[filesize<20M]" ' + video_link
os.system(command)
async def unzip(downloaded_file_name):
with zipfile.ZipFile(downloaded_file_name, "r") as zip_ref:
zip_ref.extractall("./temp")
downloaded_file_name = os.path.splitext(downloaded_file_name)[0]
return f"{downloaded_file_name}.gif"
# conversion helpers
def convert_toimage(image):
img = Image.open(image)
if img.mode != "RGB":
img = img.convert("RGB")
img.save("./temp/temp.jpg", "jpeg")
os.remove(image)
return "./temp/temp.jpg"
async def convert_tosticker(image):
img = Image.open(image)
if img.mode != "RGB":
img = img.convert("RGB")
img.save("./temp/temp.webp", "webp")
os.remove(image)
return "./temp/temp.webp"
async def invert_colors(imagefile, endname):
image = Image.open(imagefile)
inverted_image = PIL.ImageOps.invert(image)
inverted_image.save(endname)
async def flip_image(imagefile, endname):
image = Image.open(imagefile)
inverted_image = PIL.ImageOps.flip(image)
inverted_image.save(endname)
async def grayscale(imagefile, endname):
image = Image.open(imagefile)
inverted_image = PIL.ImageOps.grayscale(image)
inverted_image.save(endname)
async def mirror_file(imagefile, endname):
image = Image.open(imagefile)
inverted_image = PIL.ImageOps.mirror(image)
inverted_image.save(endname)
async def solarize(imagefile, endname):
image = Image.open(imagefile)
inverted_image = PIL.ImageOps.solarize(image, threshold=128)
inverted_image.save(endname)
# pranks....
# source - https://nekoAndencento.xyz/api
async def iphonex(text):
r = requests.get(f"https://nekoAndencento.xyz/api/imagegen?type=iphonex&url={text}").json()
aura = r.get("message")
W2Hurl = url(aura)
if not W2Hurl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(aura).content)
img = Image.open("temp.png").convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
async def baguette(text):
r = requests.get(
f"https://nekoAndencento.xyz/api/imagegen?type=baguette&url={text}"
).json()
aura = r.get("message")
W2Hurl = url(aura)
if not W2Hurl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(aura).content)
img = Image.open("temp.png").convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
async def threats(text):
r = requests.get(f"https://nekoAndencento.xyz/api/imagegen?type=threats&url={text}").json()
aura = r.get("message")
W2Hurl = url(aura)
if not W2Hurl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(aura).content)
img = Image.open("temp.png")
if img.mode != "RGB":
img = img.convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
async def lolice(text):
r = requests.get(f"https://nekoAndencento.xyz/api/imagegen?type=lolice&url={text}").json()
aura = r.get("message")
W2Hurl = url(aura)
if not W2Hurl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(aura).content)
img = Image.open("temp.png")
if img.mode != "RGB":
img = img.convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
async def trash(text):
r = requests.get(f"https://nekoAndencento.xyz/api/imagegen?type=trash&url={text}").json()
aura = r.get("message")
W2Hurl = url(aura)
if not W2Hurl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(aura).content)
img = Image.open("temp.png")
if img.mode != "RGB":
img = img.convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
async def awooify(text):
r = requests.get(f"https://nekoAndencento.xyz/api/imagegen?type=awooify&url={text}").json()
aura = r.get("message")
W2Hurl = url(aura)
if not W2Hurl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(aura).content)
img = Image.open("temp.png")
if img.mode != "RGB":
img = img.convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
async def trap(text1, text2, text3):
r = requests.get(
f"https://nekoAndencento.xyz/api/imagegen?type=trap&name={text1}&author={text2}&image={text3}"
).json()
aura = r.get("message")
W2Hurl = url(aura)
if not W2Hurl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(aura).content)
img = Image.open("temp.png")
if img.mode != "RGB":
img = img.convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
async def phcomment(text1, text2, text3):
r = requests.get(
f"https://nekoAndencento.xyz/api/imagegen?type=phcomment&image={text1}&text={text2}&username={text3}"
).json()
aura = r.get("message")
W2Hurl = url(aura)
if not W2Hurl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(aura).content)
img = Image.open("temp.png")
if img.mode != "RGB":
img = img.convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
# tweets...
# source - https://nekoAndencento.xyz/api
async def trumptweet(text):
r = requests.get(
f"https://nekoAndencento.xyz/api/imagegen?type=trumptweet&text={text}"
).json()
wew = r.get("message")
hburl = url(wew)
if not hburl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(wew).content)
img = Image.open("temp.png").convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
async def changemymind(text):
r = requests.get(
f"https://nekoAndencento.xyz/api/imagegen?type=changemymind&text={text}"
).json()
wew = r.get("message")
hburl = url(wew)
if not hburl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(wew).content)
img = Image.open("temp.png").convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
async def kannagen(text):
r = requests.get(
f"https://nekoAndencento.xyz/api/imagegen?type=kannagen&text={text}"
).json()
wew = r.get("message")
hburl = url(wew)
if not hburl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(wew).content)
img = Image.open("temp.png").convert("RGB")
img.save("temp.webp", "webp")
return "temp.webp"
async def moditweet(text):
r = requests.get(
f"https://nekoAndencento.xyz/api/imagegen?type=tweet&text={text}&username=narendramodi"
).json()
wew = r.get("message")
hburl = url(wew)
if not hburl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(wew).content)
img = Image.open("temp.png").convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
async def miatweet(text):
r = requests.get(
f"https://nekoAndencento.xyz/api/imagegen?type=tweet&text={text}&username=miakhalifa"
).json()
wew = r.get("message")
hburl = url(wew)
if not hburl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(wew).content)
img = Image.open("temp.png").convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
async def dani(text):
r = requests.get(
f"https://nekoAndencento.xyz/api/imagegen?type=tweet&text={text}&username=dani_daniels___"
).json()
wew = r.get("message")
hburl = url(wew)
if not hburl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(wew).content)
img = Image.open("temp.png").convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
async def papputweet(text):
r = requests.get(
f"https://nekoAndencento.xyz/api/imagegen?type=tweet&text={text}&username=rahulgandhi"
).json()
wew = r.get("message")
hburl = url(wew)
if not hburl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(wew).content)
img = Image.open("temp.png").convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
async def sunnytweet(text):
r = requests.get(
f"https://nekoAndencento.xyz/api/imagegen?type=tweet&text={text}&username=sunnyleone"
).json()
wew = r.get("message")
hburl = url(wew)
if not hburl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(wew).content)
img = Image.open("temp.png").convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
async def sinstweet(text):
r = requests.get(
f"https://nekoAndencento.xyz/api/imagegen?type=tweet&text={text}&username=johnnysins"
).json()
wew = r.get("message")
hburl = url(wew)
if not hburl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(wew).content)
img = Image.open("temp.png").convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
async def taklatweet(text):
r = requests.get(
f"https://nekoAndencento.xyz/api/imagegen?type=tweet&text={text}&username=Mahatma_Gandhi_"
).json()
wew = r.get("message")
hburl = url(wew)
if not hburl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(wew).content)
img = Image.open("temp.png").convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
# no offense pliz -_-
async def tweets(text1, text2):
r = requests.get(
f"https://nekoAndencento.xyz/api/imagegen?type=tweet&text={text1}&username={text2}"
).json()
wew = r.get("message")
hburl = url(wew)
if not hburl:
return "check syntax once more"
with open("temp.png", "wb") as f:
f.write(requests.get(wew).content)
img = Image.open("temp.png").convert("RGB")
img.save("temp.jpg", "jpeg")
return "temp.jpg"
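The tweet helpers above differ only in their query string; the download-and-convert tail is repeated verbatim each time. One possible consolidation is sketched below (names like `save_as_jpeg`, `fetch_imagegen`, and `URL_RE` are illustrative, not part of the original module; `URL_RE` is a minimal stand-in for this file's `url` validator):

```python
# Sketch: one shared tail for the imagegen helpers above (illustrative names).
import io
import re

from PIL import Image

URL_RE = re.compile(r"^https?://", re.IGNORECASE)  # minimal stand-in for the `url` check


def save_as_jpeg(image_bytes: bytes, out_path: str = "temp.jpg") -> str:
    """Write image bytes to disk as an RGB JPEG and return the path."""
    img = Image.open(io.BytesIO(image_bytes))
    if img.mode != "RGB":
        img = img.convert("RGB")
    img.save(out_path, "jpeg")
    return out_path


def fetch_imagegen(query: str) -> str:
    """Call the imagegen endpoint and return the converted JPEG's path,
    or an error string when the response carries no usable URL."""
    import requests  # local import keeps the pure helper above network-free

    r = requests.get(f"https://nekoAndencento.xyz/api/imagegen?{query}").json()
    aura = r.get("message")
    if not aura or not URL_RE.match(aura):
        return "check syntax once more"
    return save_as_jpeg(requests.get(aura).content)
```

With this, each wrapper reduces to a single call, e.g. `fetch_imagegen(f"type=trumptweet&text={text}")`. Note the original `async def` functions never actually `await` anything; the `requests` calls block the event loop either way.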
# sticker text
EMOJI_PATTERN = re.compile(
"["
"\U0001F1E0-\U0001F1FF" # flags (iOS)
"\U0001F300-\U0001F5FF" # symbols & pictographs
"\U0001F600-\U0001F64F" # emoticons
"\U0001F680-\U0001F6FF" # transport & map symbols
"\U0001F700-\U0001F77F" # alchemical symbols
"\U0001F780-\U0001F7FF" # Geometric Shapes Extended
"\U0001F800-\U0001F8FF" # Supplemental Arrows-C
"\U0001F900-\U0001F9FF" # Supplemental Symbols and Pictographs
"\U0001FA00-\U0001FA6F" # Chess Symbols
"\U0001FA70-\U0001FAFF" # Symbols and Pictographs Extended-A
"\U00002702-\U000027B0" # Dingbats
"]+"
)
def deEmojify(inputString: str) -> str:
"""Remove emojis and other non-safe characters from string"""
return re.sub(EMOJI_PATTERN, "", inputString)
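The stripping behaviour can be seen with a reduced example (this snippet re-declares a single emoticon range so it stands alone; `MINI_EMOJI` and `de_emojify_demo` are illustrative names, not part of the module):

```python
# Reduced demonstration of the emoji-stripping approach used by deEmojify.
import re

MINI_EMOJI = re.compile("[\U0001F600-\U0001F64F]+")  # emoticons block only


def de_emojify_demo(s: str) -> str:
    """Drop characters in the emoticon range, keeping everything else."""
    return MINI_EMOJI.sub("", s)
```

Characters outside the listed ranges, including legitimate non-ASCII text, pass through untouched.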
# end of Andencento-0.24/userbot/helpers/functions.py
import os
from telethon.tl.types import ChatBannedRights
class Config(object):
LOGGER = True
ABUSE = os.environ.get("ABUSE", None)
ALIVE_MSG = os.environ.get("ALIVE_MSG", "Aɴᴅᴇɴᴄᴇɴᴛᴏ")
ALIVE_PIC = os.environ.get("ALIVE_PIC", None)
ANTI_FLOOD_WARN_MODE = ChatBannedRights(
until_date=None,
view_messages=None,
send_messages=True
)
API_HASH = os.environ.get("API_HASH", None)
COMMAND_HAND_LER = os.environ.get("HANDLER", ".")
APP_ID = os.environ.get("APP_ID", None)
ANDENCENTO_SESSION = os.environ.get("ANDENCENTO_SESSION", None)
I_AM_DEVELOPER = os.environ.get("I_AM_DEVELOPER", None)
    TEMP_DOWNLOAD_DIRECTORY = os.environ.get("TEMP_DOWNLOAD_DIRECTORY", "./userbot/cache")
UB_BLACK_LIST_CHAT = set(int(x) for x in os.environ.get("UB_BLACK_LIST_CHAT", "").split())
    AUTH_TOKEN_DATA = os.environ.get("AUTH_TOKEN_DATA", None)
    TMP_DOWNLOAD_DIRECTORY = os.environ.get("TMP_DOWNLOAD_DIRECTORY", "./DOWNLOADS/")
    if AUTH_TOKEN_DATA is not None:
        os.makedirs(TMP_DOWNLOAD_DIRECTORY, exist_ok=True)
        with open(TMP_DOWNLOAD_DIRECTORY + "auth_token.txt", "w") as t_file:
            t_file.write(AUTH_TOKEN_DATA)
BIO_MSG = os.environ.get("BIO_MSG", "Aɴᴅᴇɴᴄᴇɴᴛᴏ")
BL_CHAT = set(int(x) for x in os.environ.get("BL_CHAT", "").split())
BOT_TOKEN = os.environ.get("BOT_TOKEN", None)
BOT_USERNAME = os.environ.get("BOT_USERNAME", None)
BUTTONS_IN_HELP = int(os.environ.get("BUTTONS_IN_HELP", 7))
CHATS_TO_MONITOR_FOR_ANTI_FLOOD = []
CHROME_BIN = os.environ.get("CHROME_BIN", "/app/.apt/usr/bin/google-chrome")
CHROME_DRIVER = os.environ.get("CHROME_DRIVER", "/app/.chromedriver/bin/chromedriver")
CUSTOM_PMPERMIT = os.environ.get("CUSTOM_PMPERMIT", None)
DB_URI = os.environ.get("DATABASE_URL", None)
SUDO_COMMAND_HAND_LER = os.environ.get("HANDLER", ".")
DUAL_LOG = os.environ.get("DUAL_LOG", None)
EMOJI_IN_HELP = os.environ.get("EMOJI_IN_HELP", " ")
FBAN_LOG_GROUP = os.environ.get("FBAN_LOG_GROUP", None)
EXTRA = os.environ.get("EXTRA", None)
EXTRA_REPO = os.environ.get("EXTRA_REPO", None)
if FBAN_LOG_GROUP:
FBAN_LOG_GROUP = int(FBAN_LOG_GROUP)
G_DRIVE_CLIENT_ID = os.environ.get("G_DRIVE_CLIENT_ID", None)
G_DRIVE_CLIENT_SECRET = os.environ.get("G_DRIVE_CLIENT_SECRET", None)
GBAN_LOG_GROUP = os.environ.get("GBAN_LOG_GROUP", None)
if GBAN_LOG_GROUP:
GBAN_LOG_GROUP = int(GBAN_LOG_GROUP)
GDRIVE_FOLDER_ID = os.environ.get("GDRIVE_FOLDER_ID", None)
ANDENCENTO_HNDLR = os.environ.get("ANDENCENTO_HNDLR", ".")
GIT_REPO_NAME = os.environ.get("GIT_REPO_NAME", None)
GITHUB_ACCESS_TOKEN = os.environ.get("GITHUB_ACCESS_TOKEN", None)
GOOGLE_CHROME_BIN = os.environ.get("GOOGLE_CHROME_BIN", "/app/.apt/usr/bin/google-chrome")
GROUP_REG_SED_EX_BOT_S = os.environ.get("GROUP_REG_SED_EX_BOT_S", r"(regex|moku|BananaButler_|rgx|l4mR)Andencento")
HANDLER = os.environ.get("HANDLER", r"\.")
    HASH_TO_TORRENT_API = os.environ.get("HASH_TO_TORRENT_API", "https://example.com/torrent/{}")
HEROKU_API_KEY = os.environ.get("HEROKU_API_KEY", None)
HEROKU_APP_NAME = os.environ.get("HEROKU_APP_NAME", None)
INSTANT_BLOCK = os.environ.get("INSTANT_BLOCK", "DISABLE")
LOCATION = os.environ.get("LOCATION", None)
LOGGER_ID = os.environ.get("LOGGER_ID", None)
if LOGGER_ID:
LOGGER_ID = int(LOGGER_ID)
LYDIA_API = os.environ.get("LYDIA_API", None)
MAX_ANTI_FLOOD_MESSAGES = 10
MAX_MESSAGE_SIZE_LIMIT = 4095
MAX_SPAM = int(os.environ.get("MAX_SPAM", 3))
MONGO_URI = os.environ.get("MONGO_URI", None)
MY_CHANNEL = os.environ.get("YOUR_CHANNEL", "Andencento")
MY_GROUP = os.environ.get("YOUR_GROUP", "AndencentoSupport")
OCR_API = os.environ.get("OCR_API", None)
    PLUGIN_CHANNEL = os.environ.get("PLUGIN_CHANNEL", None)
if PLUGIN_CHANNEL:
PLUGIN_CHANNEL = int(PLUGIN_CHANNEL)
PM_LOG_ID = os.environ.get("PM_LOG_ID", None)
PRIVATE_GROUP_BOT_API_ID = os.environ.get("PM_LOG_ID", None)
PRIVATE_GROUP_ID = os.environ.get("PM_LOG_ID", None)
if PM_LOG_ID:
PM_LOG_ID = int(PM_LOG_ID)
PM_PERMIT = os.environ.get("PM_PERMIT", "ENABLE")
PMPERMIT_PIC = os.environ.get("PMPERMIT_PIC", None)
REMOVE_BG_API = os.environ.get("REMOVE_BG_API", None)
SCREEN_SHOT_LAYER_ACCESS_KEY = os.environ.get("SCREEN_SHOT_LAYER_ACCESS_KEY", None)
STICKER_PACKNAME = os.environ.get("STICKER_PACKNAME", None)
SUDO_HANDLER = os.environ.get("SUDO_HANDLER", r"\.")
SUDO_COMMAND_HAND_LER = os.environ.get("SUDO_HANDLER", r"\.")
SUDO_USERS = set(int(x) for x in os.environ.get("SUDO_USERS", "").split())
TAG_LOGGER = os.environ.get("TAG_LOGGER", None)
if TAG_LOGGER:
TAG_LOGGER = int(TAG_LOGGER)
TELEGRAPH_SHORT_NAME = os.environ.get("TELEGRAPH_SHORT_NAME", "AndencentoBot")
TEMP_DIR = os.environ.get("TEMP_DIR", None)
TMP_DOWNLOAD_DIRECTORY = os.environ.get("TMP_DOWNLOAD_DIRECTORY", "./DOWNLOADS/")
TZ = os.environ.get("TZ", "Asia/Kolkata")
UPSTREAM_REPO = os.environ.get("UPSTREAM_REPO", "https://github.com/Team-Andencento/Andencento")
WEATHER_API = os.environ.get("WEATHER_API", None)
YOUR_NAME = os.environ.get("YOUR_NAME", None)
YOUTUBE_API_KEY = os.environ.get("YOUTUBE_API_KEY", None)
# Get this value from my.telegram.org! Please do not steal
LOCATION = os.environ.get("LOCATION", None)
OPEN_WEATHER_MAP_APPID = os.environ.get("OPEN_WEATHER_MAP_APPID", None)
# Get your own ACCESS_KEY from http://api.screenshotlayer.com/api/capture
# This is required for the @telegraph functionality.
TELEGRAPH_SHORT_NAME = os.environ.get("TELEGRAPH_SHORT_NAME", "userbot")
# Get a Free API Key from OCR.Space
OCR_SPACE_API_KEY = os.environ.get("OCR_SPACE_API_KEY", None)
# Send .get_id in any group with all your administration Andencentos (added)
G_BAN_LOGGER_GROUP = int(os.environ.get("G_BAN_LOGGER_GROUP", -1001169892177))
    FBAN_LOGGER_GROUP = os.environ.get("FBAN_LOGGER_GROUP", None)
    GOOGLE_SEARCH_COUNT_LIMIT = int(os.environ.get("GOOGLE_SEARCH_COUNT_LIMIT", 9))
    # TG API limit. An album can have at most 10 media!
    TG_GLOBAL_ALBUM_LIMIT = int(os.environ.get("TG_GLOBAL_ALBUM_LIMIT", 9))
# MIRROR ACE API KEY AND TOKEN
MIRROR_ACE_API_KEY = os.environ.get("MIRROR_ACE_API_KEY", None)
    MIRROR_ACE_API_TOKEN = os.environ.get("MIRROR_ACE_API_TOKEN", None)
# Telegram BOT Token from @BotFather
# set blacklist_chats where you do not want userbot's features
UB_BLACK_LIST_CHAT = set(int(x) for x in os.environ.get("UB_BLACK_LIST_CHAT", "").split())
# maximum number of messages for antiflood
MAX_ANTI_FLOOD_MESSAGES = 10
# warn mode for anti flood
# providing usernames means an additional overhead for the user
CHATS_TO_MONITOR_FOR_ANTI_FLOOD = []
# Get your own API key from https://www.remove.bg/ or
# feel free to use http://telegram.dog/Remove_BGBo
REM_BG_API_KEY = os.environ.get("REM_BG_API_KEY", None)
# Set to True if you want to block users that are spamming your PMs.
SLAP_USERNAME = os.environ.get("SLAP_USERNAME", None)
class Production(Config):
LOGGER = False
class Development(Config):
LOGGER = True
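The classes above repeat the pattern "read an env var, then `int()` it if set" for `FBAN_LOG_GROUP`, `GBAN_LOG_GROUP`, `LOGGER_ID`, and others. A small helper could collapse that into one call (`env_int` is an illustrative addition, not part of the original config):

```python
# Sketch: one helper for the repeated env-var-to-int coercion pattern.
import os


def env_int(name: str, default=None):
    """Return an environment variable coerced to int, or `default` if unset/empty."""
    raw = os.environ.get(name)
    return int(raw) if raw else default
```

Usage would look like `FBAN_LOG_GROUP = env_int("FBAN_LOG_GROUP")` instead of a get followed by a conditional reassignment.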
import os
from telethon.tl.types import ChatBannedRights
class Var(object):
LOGGER = True
ABUSE = os.environ.get("ABUSE", None)
ALIVE_MSG = os.environ.get("ALIVE_MSG", "Ⱥղժҽղçҽղէօ")
ALIVE_PIC = os.environ.get("ALIVE_PIC", None)
ANTI_FLOOD_WARN_MODE = ChatBannedRights(
until_date=None,
view_messages=None,
send_messages=True
)
API_HASH = os.environ.get("API_HASH", None)
APP_ID = os.environ.get("APP_ID", None)
UB_BLACK_LIST_CHAT = set(int(x) for x in os.environ.get("UB_BLACK_LIST_CHAT", "").split())
ANDENCENTO_SESSION = os.environ.get("ANDENCENTO_SESSION", None)
    AUTH_TOKEN_DATA = os.environ.get("AUTH_TOKEN_DATA", None)
    TMP_DOWNLOAD_DIRECTORY = os.environ.get("TMP_DOWNLOAD_DIRECTORY", "./DOWNLOADS/")
    if AUTH_TOKEN_DATA is not None:
        os.makedirs(TMP_DOWNLOAD_DIRECTORY, exist_ok=True)
        with open(TMP_DOWNLOAD_DIRECTORY + "auth_token.txt", "w") as t_file:
            t_file.write(AUTH_TOKEN_DATA)
BIO_MSG = os.environ.get("BIO_MSG", "Ⱥղժҽղçҽղէօ")
BL_CHAT = set(int(x) for x in os.environ.get("BL_CHAT", "").split())
BOT_TOKEN = os.environ.get("BOT_TOKEN", None)
BOT_USERNAME = os.environ.get("BOT_USERNAME", None)
PRIVATE_GROUP_ID = os.environ.get("PM_LOG_ID", None)
BUTTONS_IN_HELP = int(os.environ.get("BUTTONS_IN_HELP", 7))
    TEMP_DOWNLOAD_DIRECTORY = os.environ.get("TEMP_DOWNLOAD_DIRECTORY", "./userbot/cache")
CHATS_TO_MONITOR_FOR_ANTI_FLOOD = []
COMMAND_HAND_LER = os.environ.get("HANDLER", ".")
CHROME_BIN = os.environ.get("CHROME_BIN", "/app/.apt/usr/bin/google-chrome")
CHROME_DRIVER = os.environ.get("CHROME_DRIVER", "/app/.chromedriver/bin/chromedriver")
CUSTOM_PMPERMIT = os.environ.get("CUSTOM_PMPERMIT", None)
DB_URI = os.environ.get("DATABASE_URL", None)
DUAL_LOG = os.environ.get("DUAL_LOG", None)
EMOJI_IN_HELP = os.environ.get("EMOJI_IN_HELP", " ")
FBAN_LOG_GROUP = os.environ.get("FBAN_LOG_GROUP", None)
if FBAN_LOG_GROUP:
FBAN_LOG_GROUP = int(FBAN_LOG_GROUP)
G_DRIVE_CLIENT_ID = os.environ.get("G_DRIVE_CLIENT_ID", None)
G_DRIVE_CLIENT_SECRET = os.environ.get("G_DRIVE_CLIENT_SECRET", None)
GBAN_LOG_GROUP = os.environ.get("GBAN_LOG_GROUP", None)
if GBAN_LOG_GROUP:
GBAN_LOG_GROUP = int(GBAN_LOG_GROUP)
GDRIVE_FOLDER_ID = os.environ.get("GDRIVE_FOLDER_ID", None)
ANDENCENTO_HNDLR = os.environ.get("ANDENCENTO_HNDLR", ".")
GIT_REPO_NAME = os.environ.get("GIT_REPO_NAME", None)
GITHUB_ACCESS_TOKEN = os.environ.get("GITHUB_ACCESS_TOKEN", None)
GOOGLE_CHROME_BIN = os.environ.get("GOOGLE_CHROME_BIN", "/app/.apt/usr/bin/google-chrome")
GROUP_REG_SED_EX_BOT_S = os.environ.get("GROUP_REG_SED_EX_BOT_S", r"(regex|moku|BananaButler_|rgx|l4mR)Andencento")
HANDLER = os.environ.get("HANDLER", r"\.")
    HASH_TO_TORRENT_API = os.environ.get("HASH_TO_TORRENT_API", "https://example.com/torrent/{}")
HEROKU_API_KEY = os.environ.get("HEROKU_API_KEY", None)
HEROKU_APP_NAME = os.environ.get("HEROKU_APP_NAME", None)
INSTANT_BLOCK = os.environ.get("INSTANT_BLOCK", "DISABLE")
LOCATION = os.environ.get("LOCATION", None)
LOGGER_ID = os.environ.get("LOGGER_ID", None)
if LOGGER_ID:
LOGGER_ID = int(LOGGER_ID)
LYDIA_API = os.environ.get("LYDIA_API", None)
MAX_ANTI_FLOOD_MESSAGES = 10
MAX_MESSAGE_SIZE_LIMIT = 4095
MAX_SPAM = int(os.environ.get("MAX_SPAM", 3))
MONGO_URI = os.environ.get("MONGO_URI", None)
MY_CHANNEL = os.environ.get("YOUR_CHANNEL", "Andencento")
MY_GROUP = os.environ.get("YOUR_GROUP", "AndencentoSupport")
OCR_API = os.environ.get("OCR_API", None)
PLUGIN_CHANNEL = os.environ.get("PLUGIN_CHANNEL", None)
if PLUGIN_CHANNEL:
PLUGIN_CHANNEL = int(PLUGIN_CHANNEL)
PM_LOG_ID = os.environ.get("PM_LOG_ID", None)
if PM_LOG_ID:
PM_LOG_ID = int(PM_LOG_ID)
PM_PERMIT = os.environ.get("PM_PERMIT", "ENABLE")
PMPERMIT_PIC = os.environ.get("PMPERMIT_PIC", None)
REMOVE_BG_API = os.environ.get("REMOVE_BG_API", None)
SCREEN_SHOT_LAYER_ACCESS_KEY = os.environ.get("SCREEN_SHOT_LAYER_ACCESS_KEY", None)
STICKER_PACKNAME = os.environ.get("STICKER_PACKNAME", None)
SUDO_HANDLER = os.environ.get("SUDO_HANDLER", r"\.")
SUDO_USERS = set(int(x) for x in os.environ.get("SUDO_USERS", "").split())
TAG_LOGGER = os.environ.get("TAG_LOGGER", None)
if TAG_LOGGER:
TAG_LOGGER = int(TAG_LOGGER)
TELEGRAPH_SHORT_NAME = os.environ.get("TELEGRAPH_SHORT_NAME", "AndencentoBot")
TEMP_DIR = os.environ.get("TEMP_DIR", None)
TMP_DOWNLOAD_DIRECTORY = os.environ.get("TMP_DOWNLOAD_DIRECTORY", "./DOWNLOADS/")
TZ = os.environ.get("TZ", "Asia/Kolkata")
UPSTREAM_REPO = os.environ.get("UPSTREAM_REPO", "https://github.com/Team-Andencento/Andencento")
WEATHER_API = os.environ.get("WEATHER_API", None)
YOUR_NAME = os.environ.get("YOUR_NAME", None)
YOUTUBE_API_KEY = os.environ.get("YOUTUBE_API_KEY", None)
# end of Andencento-0.24/userbot/config/config.py
SONGS = [
"🎶 I'm in love with the shape of you \n We push and pull like a magnet do\n Although my heart is falling too \n I'm in love with your body \n And last night you were in my room \n And now my bedsheets smell like you \n Every day discovering something brand new 🎶 \n 🎶 I'm in love with your body \n Oh—I—oh—I—oh—I—oh—I \n I'm in love with your body \n Oh—I—oh—I—oh—I—oh—I \n I'm in love with your body \n Oh—I—oh—I—oh—I—oh—I \n I'm in love with your body 🎶 \n **-Shape of You**",
"🎶 I've been reading books of old \n The legends and the myths \n Achilles and his gold \n Hercules and his gifts \n Spiderman's control \n And Batman with his fists \n And clearly I don't see myself upon that list 🎶 \n **-Something Just Like This **",
"🎶 I don't wanna live forever \n 'Cause I know I'll be livin' in vain \n And I don't wanna fit wherever \n I just wanna keep callin' your name \n Until you come back home \n I just wanna keep callin' your name \n Until you come back home \n I just wanna keep callin' your name \n Until you come back home 🎶 \n **-I don't Wanna Live Forever **",
    "🎶 Oh, hush, my dear, it's been a difficult year \n And terrors don't prey on \n Innocent victims \n Trust me, darling, trust me darling \n It's been a loveless year \n I'm a man of three fears \n Integrity, faith and \n Crocodile tears \n Trust me, darling, trust me, darling 🎶 \n **-Bad Liar**",
    "🎶 Walking down 29th and Park \n I saw you in another's arms \n Only a month we've been apart \n **You look happier** \n \n Saw you walk inside a bar \n He said something to make you laugh \n I saw that both your smiles were twice as wide as ours \n Yeah, you look happier, you do 🎶 \n **-Happier**",
"🎶 I took the supermarket flowers from the windowsill \n I threw the day old tea from the cup \n Packed up the photo album Matthew had made \n Memories of a life that's been loved \n Took the get well soon cards and stuffed animals \n Poured the old ginger beer down the sink \n Dad always told me, 'don't you cry when you're down' \n But mum, there's a tear every time that I blink 🎶 \n **-Supermarket Flowers**",
"🎶 And you and I we're flying on an aeroplane tonight \n We're going somewhere where the sun is shining bright \n Just close your eyes \n And let's pretend we're dancing in the street \n In Barcelona \n Barcelona \n Barcelona \n Barcelona 🎶 \n **-Barcelona **",
"🎶 Maybe I came on too strong \n Maybe I waited too long \n Maybe I played my cards wrong \n Oh, just a little bit wrong \n Baby I apologize for it \n \n I could fall, or I could fly \n Here in your aeroplane \n And I could live, I could die \n Hanging on the words you say \n And I've been known to give my all \n And jumping in harder than \n Ten thousand rocks on the lake 🎶 \n **-Dive**",
"🎶 I found a love for me \n Darling just dive right in \n And follow my lead \n Well I found a girl beautiful and sweet \n I never knew you were the someone waiting for me \n 'Cause we were just kids when we fell in love \n Not knowing what it was \n \n I will not give you up this time \n But darling, just kiss me slow, your heart is all I own \n And in your eyes you're holding mine 🎶 \n **-Perfect**",
"🎶 I was born inside a small town, I lost that state of mind \n Learned to sing inside the Lord's house, but stopped at the age of nine \n I forget when I get awards now the wave I had to ride \n The paving stones I played upon, they kept me on the grind \n So blame it on the pain that blessed me with the life 🎶 \n **-Eraser**",
"🎶 Say, go through the darkest of days \n Heaven's a heartbreak away \n Never let you go, never let me down \n Oh, it's been a hell of a ride \n Driving the edge of a knife. \n Never let you go, never let me down \n \n Don't you give up, nah-nah-nah \n I won't give up, nah-nah-nah \n Let me love you \n Let me love you 🎶 \n **-Let me Love You**",
"🎶 I'll stop time for you \n The second you say you'd like me to \n I just wanna give you the loving that you're missing \n Baby, just to wake up with you \n Would be everything I need and this could be so different \n Tell me what you want to do \n \n 'Cause I know I can treat you better \n Than he can \n And any girl like you deserves a gentleman 🎶 **-Treat You Better**",
"🎶 You're the light, you're the night \n You're the color of my blood \n You're the cure, you're the pain \n You're the only thing I wanna touch \n Never knew that it could mean so much, so much \n You're the fear, I don't care \n 'Cause I've never been so high \n Follow me through the dark \n Let me take you past our satellites \n You can see the world you brought to life, to life \n \n So love me like you do, lo-lo-love me like you do \n Love me like you do, lo-lo-love me like you do 🎶 \n **-Love me Like you Do**",
"🎶 Spent 24 hours \n I need more hours with you \n You spent the weekend \n Getting even, ooh ooh \n We spent the late nights \n Making things right, between us \n But now it's all good baby \n Roll that Backwood baby \n And play me close \n \n 'Cause girls like you \n Run around with guys like me \n 'Til sundown, when I come through \n I need a girl like you, yeah yeah 🎶 \n **-Girls Like You**",
"🎶 Oh, angel sent from up above \n You know you make my world light up \n When I was down, when I was hurt \n You came to lift me up \n Life is a drink and love's a drug \n Oh, now I think I must be miles up \n When I was a river dried up \n You came to rain a flood 🎶**-Hymn for the Weekend ** ",
"🎶 I've known it for a long time \n Daddy wakes up to a drink at nine \n Disappearing all night \n I don’t wanna know where he's been lying \n I know what I wanna do \n Wanna run away, run away with you \n Gonna grab clothes, six in the morning, go 🎶 \n **-Runaway **",
"🎶 You were the shadow to my light \n Did you feel us \n Another start \n You fade away \n Afraid our aim is out of sight \n Wanna see us \n Alive 🎶 \n **-Faded**",
"🎶 It's been a long day without you, my friend \n And I'll tell you all about it when I see you again \n We've come a long way from where we began \n Oh I'll tell you all about it when I see you again \n When I see you again 🎶 \n **-See you Again**",
    "🎶 I can swallow a bottle of alcohol and I'll feel like Godzilla \n Better hit the deck like the card dealer \n My whole squad's in here, walking around the party \n A cross between a zombie apocalypse and big Bobby 'The \n Brain' Heenan which is probably the \n Same reason I wrestle with mania 🎶 \n **-Godzilla**",
"🎶 Yeah, I'm gonna take my horse to the old town road \n I'm gonna ride 'til I can't no more \n I'm gonna take my horse to the old town road \n I'm gonna ride 'til I can't no more (Kio, Kio) 🎶 \n **-Old Town Road**",
"🎶 Oh-oh, ooh \n You've been runnin' round, runnin' round, runnin' round throwin' that dirt all on my name \n 'Cause you knew that I, knew that I, knew that I'd call you up \n You've been going round, going round, going round every party in L.A. \n 'Cause you knew that I, knew that I, knew that I'd be at one, oh 🎶 \n **-Attention **",
"🎶 This hit, that ice cold \n Michelle Pfeiffer, that white gold \n This one for them hood girls \n Them good girls straight masterpieces \n Stylin', wilin', livin' it up in the city \n Got Chucks on with Saint Laurent \n Gotta kiss myself, I'm so pretty \n \n I'm too hot (hot damn) \n Called a police and a fireman \n I'm too hot (hot damn) \n Make a dragon wanna retire man \n I'm too hot (hot damn) \n Say my name you know who I am \n I'm too hot (hot damn) \n And my band 'bout that money, break it down 🎶 \n **-Uptown Funk**",
"🎶 Just a young gun with the quick fuse \n I was uptight, wanna let loose \n I was dreaming of bigger things \n And wanna leave my own life behind \n Not a yes sir, not a follower \n Fit the box, fit the mold \n Have a seat in the foyer, take a number \n I was lightning before the thunder \n \n Thunder, feel the thunder \n Lightning then the thunder \n Thunder, feel the thunder \n Lightning then the thunder \n Thunder, thunder 🎶 \n **-Thunder**",
"🎶 Oh, love \n How I miss you every single day \n When I see you on those streets \n Oh, love \n Tell me there's a river I can swim that will bring you back to me \n 'Cause I don't know how to love someone else \n I don't know how to forget your face \n No, love \n God, I miss you every single day and now you're so far away \n So far away 🎶 \n **-So Far Away**",
"🎶 And if you feel you're sinking, I will jump right over \n Into cold, cold water for you \n And although time may take us into different places \n I will still be patient with you \n And I hope you know 🎶 \n **-Cold Water**",
"🎶 When you feel my heat \n Look into my eyes \n It's where my demons hide \n It's where my demons hide \n Don't get too close \n It's dark inside \n It's where my demons hide \n It's where my demons hide 🎶 \n **-Demons**",
"🎶 Who do you love, do you love now? \n I wanna know the truth (whoa) \n Who do you love, do you love now? \n I know it's someone new \n You ain't gotta make it easy, where you been sleepin'? 🎶 \n **-Who do Love? **",
"🎶 Your touch is magnetic \n 'Cause I can't forget it \n (There's a power pulling me back to you) \n And baby I'll let it \n 'Cause you're so magnetic I get it \n (When I'm waking up with you, oh) 🎶 \n **-Magnetic**",
"🎶 Girl my body don't lie, I'm outta my mind \n Let it rain over me, I'm rising so high \n Out of my mind, so let it rain over me \n \n Ay ay ay, ay ay ay let it rain over me \n Ay ay ay, ay ay ay let it rain over me 🎶 \n **-Rain over Me**",
"🎶 I miss the taste of a sweeter life \n I miss the conversation \n I'm searching for a song tonight \n I'm changing all of the stations \n I like to think that we had it all \n We drew a map to a better place \n But on that road I took a fall \n Oh baby why did you run away? \n \n I was there for you \n In your darkest times \n I was there for you \n In your darkest night 🎶 \n **-Maps**",
"🎶 I wish—I wish that I was bulletproof, bulletproof \n I wish—I wish that I was bulletproof, bulletproof \n (Bullet-bulletproof, bullet-bullet-bulletproof) \n I'm trippin' on my words and my patience \n Writing every verse in a cadence \n To tell you how I feel, how I feel, how I feel (Yeah) \n This is how I deal, how I deal, how I deal (Yeah) \n With who I once was, now an acquaintance \n Think my confidence (My confidence) is in the basement \n Tryin' to keep it real, keep it real, keep it real (Yeah) \n 'Cause I'm not made of steel, made of steel 🎶 \n **-Bulletproof**",
    "🎶 You won't find him down on Sunset \n Or at a party in the hills \n At the bottom of the bottle \n Or when you're tripping on some pills \n When they sold you the dream you were just 16 \n Packed a bag and ran away \n And it's a crying shame you came all this way \n 'Cause you won't find Jesus in LA \n And it's a crying shame you came all this way \n 'Cause you won't find Jesus in LA 🎶 \n **-Jesus in LA**",
"Not in a mood to sing. Sorry!",
]
HARRY = [
"**Aberto**",
"**Accio**",
"**Aguamenti**",
"**Alohomora**",
"**Avada Kedavra**",
"**Colloportus**",
"**Confringo**",
"**Confundo**",
"**Crucio**",
"**Descendo**",
"**Diffindo**",
"**Engorgio**",
"**Episkey**",
"**Evanesco**",
"**Expecto Patronum**",
"**Finestra**",
"**Expelliarmus**",
"**Homenum Revelio**",
"**Impedimenta**",
"**Imperio**",
"**Impervius**",
"**Incendio**",
"**Levicorpus**",
"**Lumos**",
"**Muffliato**",
"**Obliviate**",
"**Petrificus Totalus**",
"**Priori Incantato**",
"**Protego**",
"**Reducto**",
"**Rennervate**",
"**Revelio**",
"**Rictusempra**",
"**Riddikulus**",
"**Scourgify**",
"**Sectumsempra**",
"**Silencio**",
"**Stupefy**",
"**Tergeo**",
"**Wingardium Leviosa**",
]
GOTT = [
'`"The man who passes the sentence should swing the sword."`',
'`"When the snows fall and the white winds blow, the lone wolf dies but the pack survives!"`',
'`"The things I do for love!"`',
'`"I have a tender spot in my heart for cripples, bastards and broken things."`',
'`"Death is so terribly final, while life is full of possibilities."`',
'`"Once you’ve accepted your flaws, no one can use them against you."`',
'`"If I look back I am lost."`',
'`"When you play the game of thrones, you win or you die."`',
'`"I grew up with soldiers. I learned how to die a long time ago."`',
'`"What do we say to the Lord of Death?\nNot Today!"`',
'`"Every flight begins with a fall."`',
'`"Different roads sometimes lead to the same castle."`',
'`"Never forget what you are. The rest of the world will not. Wear it like armour, and it can never be used to hurt you."`',
'`"The day will come when you think you are safe and happy, and your joy will turn to ashes in your mouth."`',
'`"The night is dark and full of terrors."`',
'`"You know nothing, Jon Snow."`',
'`"Night gathers, and now my watch begins!"`',
'`"A Lannister always pays his debts."`',
'`"Burn them all!"`',
'`"What do we say to the God of death?"`',
'`"There\'s no cure for being a c*nt."`',
'`"Winter is coming!"`',
'`"That\'s what I do: I drink and I know things."`',
'`"I am the dragon\'s daughter, and I swear to you that those who would harm you will die screaming."`',
'`"A lion does not concern himself with the opinion of sheep."`',
'`"Chaos isn\'t a pit. Chaos is a ladder."`',
'`"I understand that if any more words come pouring out your c*nt mouth, I\'m gonna have to eat every f*cking chicken in this room."`',
'`"If you think this has a happy ending, you haven\'t been paying attention."`',
'`"If you ever call me sister again, I\'ll have you strangled in your sleep."`',
'`"A girl is Arya Stark of Winterfell. And I\'m going home."`',
"`\"Any man who must say 'I am the King' is no true King.\"`",
'`"If I fall, don\'t bring me back."`',
"`\"Lannister, Targaryen, Baratheon, Stark, Tyrell... they're all just spokes on a wheel. This one's on top, then that one's on top, and on and on it spins, crushing those on the ground.\"`",
    '`"Hold the door!"`',
'`"When people ask you what happened here, tell them the North remembers. Tell them winter came for House Frey."`',
'`"Nothing f*cks you harder than time."`',
'`"There is only one war that matters. The Great War. And it is here."`',
'`"Power is power!"`',
'`"I demand a trial by combat!"`',
'`"I wish I was the monster you think I am!"`',
    "Never forget what you are. The rest of the world will not. Wear it like armor,\n and it can never be used to hurt you.",
"There is only one thing we say to death: **Not today.**",
"If you think this has a happy ending, you haven’t been **paying attention**.",
"Chaos isn’t a pit. Chaos is a ladder.",
"You know nothing, **Jon Snow**",
"**Winter** is coming.",
"When you play the **game of thrones**, you win or you die.",
"I'm not going to **stop** the wheel, I'm going to **break** the wheel.",
"When people ask you what happened here, tell them the **North remembers**. Tell them winter came for **House Frey**.",
"When the snows fall and the white winds blow,\n the lone wolf dies, but the pack **survives**.",
]
GOTM = [
"[To your teachers on failing you in all your papers confidently, every time...](https://telegra.ph/file/431d178780f9bff353047.jpg)",
"[A shift from the mainstream darling, sweetheart, jaanu, and what not...](https://telegra.ph/file/6bbb86a6c7d2c4a61e102.jpg)",
"[To the guy who's friendzone-ing you...](https://telegra.ph/file/8930b05e9535e9b9b8229.jpg)",
"[When your friend asks for his money back...](https://telegra.ph/file/2df575ab38df5ce9dbf5e.jpg)",
"[A bad-ass reply to who do you think you are?](https://telegra.ph/file/3a35a0c37f4418da9f702.jpg)",
"[When the traffic police stops your car and asks for documents...](https://telegra.ph/file/52612d58d6a61315a4c3a.jpg)",
"[ When your friend asks about the food he/she just cooked and you don't want to break his/her heart...](https://telegra.ph/file/702df36088f5c26fef931.jpg)",
"[When you're out of words...](https://telegra.ph/file/ba748a74bcab4a1135d2a.jpg)",
"[When you realize your wallet is empty...](https://telegra.ph/file/a4508324b496d3d4580df.jpg)",
"[When shit is about to happen...](https://telegra.ph/file/e15d9d64f9f25e8d05f19.jpg)",
"[When that oversmart classmate shouts a wrong answer in class...](https://telegra.ph/file/1a225a2e4b7bfd7f7a809.jpg)",
"[When things go wrong in a big fat Indian wedding...](https://telegra.ph/file/db69e17e85bb444caca32.jpg)",
"[A perfect justification for breaking a promise...](https://telegra.ph/file/0b8fb8fb729d157844ac9.jpg)",
"[When your friend just won't stop LOL-ing on something silly you said...](https://telegra.ph/file/247fa54106c32318797ae.jpg)",
"[When someone makes a joke on you...](https://telegra.ph/file/2ee216651443524eaafcf.jpg)",
"[When your professor insults you in front of the class...](https://telegra.ph/file/a2dc7317627e514a8e180.jpg)",
"[When your job interviewer asks if you're nervous...](https://telegra.ph/file/9cc147d0bf8adbebf164b.jpg)",
"[When you're sick of someone complaining about the heat outside...](https://telegra.ph/file/9248635263c52b968f968.jpg)",
"[When your adda is occupied by outsiders...](https://telegra.ph/file/ef537007ba6d9d4cbd384.jpg)",
"[When you don't have the right words to motivate somebody...](https://telegra.ph/file/2c932d769ae4c5fbed368.jpg)",
"[When the bouncer won't let you and your group of friends in because you're all under-aged...](https://telegra.ph/file/6c8ca79f1e20ebd04391c.jpg)",
"[To the friend who wants you to take the fall for his actions...](https://telegra.ph/file/d4171b9bc9104b5d972d9.jpg)",
"[When that prick of a bully wouldn't take your words seriously...](https://telegra.ph/file/188d73bd24cf866d8d8d0.jpg)",
"[ When you're forced to go shopping/watch a football match with your partner...](https://telegra.ph/file/6e129f138c99c1886cb2b.jpg)",
"[To the large queue behind you after you get the last concert/movie ticket...](https://telegra.ph/file/2423f213dd4e4282a31ea.jpg)",
"[When your parents thought you'd fail but you prove them wrong...](https://telegra.ph/file/39cc5098466f622bf21e3.jpg)",
"[A justification for not voting!](https://telegra.ph/file/87d475a8f9a8350d2450e.jpg)",
"[When your partner expects you to do too many things...](https://telegra.ph/file/68bc768d36e08862bf94e.jpg)",
"[When your friends cancel on the plan you made at the last minute...](https://telegra.ph/file/960b58c8f625b17613307.jpg)",
"[For that friend of yours who does not like loud music and head banging...](https://telegra.ph/file/acbce070d3c52b921b2bd.jpg)",
]
BELLO = [
'`"Underwater bubbles and raindrops are total opposites of each other."`',
'`"If you buy an eraser you are literally paying for your mistakes."`',
'`"The Person you care for most has the potential to destroy you the most."`',
'`"If humans colonize the moon, it will probably attract retirement homes as the weaker gravity will allow the elderly to feel stronger."`',
'`"Any video with “wait for it” in the title is simply too long."`',
'`"Your age in years is how many times you’ve circled the Sun, but your age in months is how many times the Moon has circled you."`',
'`"Biting your tongue while eating is a perfect example of how you can still screw up, even with decades of experience."`',
'`"Saying that your home is powered by a wireless Nuclear fusion reactor that is 93 Million miles away sounds way cooler than just saying you have solar panels on your roof."`',
'`"The most crushing feeling is when someone smiles at you on the street and you don’t react fast enough to smile back."`',
'`"Teeth constantly require maintenance to prevent their decay when alive, and yet they manage to survive for thousands of years buried as fossils."`',
'`"A folder is for things that you don\'t want to fold."`',
'`"Waking up in the morning sometimes feels like resuming a shitty movie you decided to quit watching."`',
'`"If everything goes smoothly, you probably won\'t remember today."`',
'`"When you meet new people in real life, you unlock more characters for your dream world."`',
'`"Maybe if they renamed sunscreen to “anti-cancer cream” more people would wear it."`',
'`"200 years ago, people would never have guessed that humans in the future would communicate by silently tapping on glass."`',
'`"Parents worry about what their sons download and worry about what their daughters upload."`',
'`"It\'s crazy how you can be the same age as someone, but at a completely different stage in your life."`',
"`\"When you think you wanna die, you really don't wanna die, you just don't wanna live like this.\"`",
'`"Technically, no one has ever been in an empty room."`',
'`"An onion is the bass player of food. You would probably not enjoy it solo, but you’d miss it if it wasn’t there."`',
"`\"We run everywhere in videogames because we're too lazy to walk, but In real life we walk everywhere because we're too lazy to run.\"`",
'`"Every single decision you ever made has brought you to read this sentence."`',
"`\"The word 'quiet' is often said very loud.\"`",
'`"Everybody wants you to work hard, but nobody wants to hear about how hard you work."`',
'`"We brush our teeth with hair on a stick and brush our hair with teeth on a stick."`',
'`"No one remembers your awkward moments but they’re too busy remembering their own."`',
'`"Dumb people try to say simple ideas as complex as possible while smart people try to say complex ideas as simple as possible."`',
"`\"Some people think they're better than you because they grew up richer. Some people think they're better than you because they grew up poorer.\"`",
'`"The biggest irony is that computers & mobiles were invented to save out time!"`',
'`"After honey was first discovered, there was likely a period where people were taste testing any available slime from insects."`',
'`"You know you’re getting old when your parents start disappointing you, instead of you disappointing them."`',
'`"Humans are designed to learn through experience yet the education system has made it so we get no experience."`',
'`"By focusing on blinking, you blink slower... Same for breathing."`',
'`"Drivers in a hurry to beat traffic usually cause the accidents which create the traffic they were trying to avoid."`',
'`"Characters that get married in fiction were literally made for each other."`',
'`"Babies are a clean hard drive that can be programmed with any language."`',
"`\"There could be a miracle drug that cures every disease to man, that we'll never know about because it doesn't work on rats.\"`",
"`\"Rhinos evolved to grow a horn for protection, but it's what's making them go extinct.\"`",
'`"Maybe we don\'t find time travelers because we all die in 25-50 years."`',
'`"Sleep is the trial version of death, It even comes with ads based on your activity."`',
'`"The most unrealistic thing about Spy movies is how clean the air ventilation system is!"`',
'`"In games we play through easy modes to unlock hard modes. In life we play through hard modes to unlock easy modes."`',
'`"Silent people seem smarter than loud people, because they keep their stupid thoughts to themselves."`',
'`"If Greenland actually turns green, we\'re all screwed."`',
'`"If someone says clever things in your dream, it actually shows your own cleverness."`',
'`"Famous movie quotes are credited to the actor and not the actual writer who wrote them."`',
'`"No one actually teaches you how to ride a bicycle. They just hype you up until you work it out."`',
'`"Ask yourself why the the brain ignores the second the."`',
'`"You’ve probably forgot about 80% of your entire life and most of the memories you do remember are not very accurate to what actually happened."`',
'`"It will be a lot harder for kids to win against their parents in video games in the future."`',
'`"Everyone has flaws, if you don\'t recognize yours, you have a new one."`',
'`"Raising a child is training your replacement."`',
"`\"'O'pen starts with a Closed circle, and 'C'lose starts with an open circle.\"`",
'`"There\'s always someone who hated you for no reason, and still does."`',
'`"After popcorn was discovered, there must have been a lot of random seeds that were roasted to see if it would have the same effect."`',
'`"The more important a good night\'s sleep is, the harder it is to fall asleep."`',
'`"Blessed are those that can properly describe the type of haircut they want to a new stylist."`',
"`\"Too many people spend money they haven't earned, to buy things they don't want, to impress people they don't like!\"`",
'`"Theme park employees must be good at telling the difference between screams of horror and excitement."`',
'`"6 to 30 feels more half-an-hour than 50 to 20"`',
'`"Getting your password right on the last login attempt before lockout is the closest thing to disarming a bomb at the last minute that most of us will experience."`',
'`"Listening to podcasts before bed is the adult version of story-time."`',
'`"If all criminals stopped robbing then the security industry would fall in which they could then easily go back to robbing."`',
'`"A ton of whales is really only like half a whale."`',
'`"When you get old, the old you is technically the new you, and your young self is the old you."`',
'`"You probably won\'t find many negative reviews of parachutes on the Internet."`',
'`"We show the most love and admiration for people when they\'re no longer around to appreciate it."`',
"`\"We've practiced sleeping thousands of times, yet can't do it very well or be consistent.\"`",
'`"Humans are more enthusiastic about moving to another planet with hostile environment than preserving earth - the planet they are perfectly shaped for."`',
"`\"The happiest stage of most people's lives is when their brains aren't fully developed yet.\"`",
'`"The most effective alarm clock is a full bladder."`',
'`"You probably just synchronized blinks with millions of people."`',
'`"Since we test drugs on animals first, rat medicine must be years ahead of human medicine."`',
'`"Night before a day off is more satisfying than the actual day off."`',
'`"We put paper in a folder to keep it from folding."`',
'`"Somewhere, two best friends are meeting for the first time."`',
'`"Our brain simultaneously hates us, loves us, doesn\'t care about us, and micromanages our every move."`',
'`"Being a male is a matter of birth. Being a man is a matter of age. But being a gentleman is a matter of choice."`',
'`"Soon the parents will be hiding their social account from their kids rather than kids hiding their accounts from the parents."`',
'`"Wikipedia is what the internet was meant to be."`',
'`"A theme park is the only place that you can hear screams in the distance and not be concerned."`',
'`"A wireless phone charger offers less freedom of movement than a wired one."`',
"`\"If you repeatedly criticize someone for liking something you don't, they won't stop liking it. They'll stop liking you.\"`",
'`"Somewhere there is a grandmother, whose grandson really is the most handsome boy in the world."`',
'`"If someday human teleportation becomes real, people will still be late for work."`',
'`"The first humans who ate crabs must have been really hungry to try and eat an armored sea spider"`',
'`"Doing something alone is kind of sad, but doing it solo is cool af."`',
'`"Your brain suddenly becomes perfect at proofreading after you post something."`',
'`"There\'s always that one song in your playlist that you always skip but never remove."`',
'`"Kids next century will probably hate us for taking all the good usernames."`',
'`"Bubbles are to fish what rain is to humans."`',
'`"The more people you meet, the more you realise and appreciate how well your parents raised you."`',
'`"A comma is a short pause, a coma is a long pause."`',
'`"Someday you will either not wake up or not go to sleep."`',
'`"Bermuda Triangle might be the exit portal of this simulation."`',
'`"If we put solar panels above parking lots, then our cars wouldn\'t get hot and we would have a lot of clean energy."`',
    '`"By faith Abraham, when he was called to go out into a place which he should after receive for an inheritance, obeyed; and he went out, not knowing whither he went. <Hebrews 11:8>."`',
    '`"By faith Noah, being warned of God of things not seen as yet, moved with fear, prepared an ark to the saving of his house; by the which he condemned the world, and became heir of the righteousness which is by faith. <Hebrews 11:7>."`',
    '`"These words spake Jesus, and lifted up his eyes to heaven, and said, Father, the hour is come; glorify thy Son, that thy Son also may glorify thee: <John 17:1>."`',
    '`"As thou hast given him power over all flesh, that he should give eternal life to as many as thou hast given him. <John 17:2>."`',
]
TIPS = [
"`\"Before telling your landlord you're moving, ask them to fix anything broken that you're worried you might get charged for. They often will, and then when you move out they won't be able to take it out of your security deposit.\"`",
'`"Walking before solving a problem improves your creativity by an average of 60%."`',
'`"Wake up a little earlier than your alarm? Don’t go back to bed and wait for your alarm. Waking up naturally instead of to some sort of stimuli will help you get off to a better and healthier start to your day."`',
'`"Act like your future self is a real person. So when you see a chore that needs to be done, you can say "I\'ll do this now to be nice to my future self". Helps motivate to get things done because you\'re doing work for someone you want to help."`',
'`"Think of purchases as a percentage of your budget/account balance rather than their actual cost."`',
'`"Counting on fingers is a vital part of learning math, and children that do it from an early age develop much better math skills than those who have been told not to."`',
'`"There are just some things in life you can’t control or you’ll never know the real reason why. The only thing you can do is accept it and move on. Part of happiness is accepting the past happened or being proud of it."`',
'`"Make a recording of your voice with a sweet message or telling a story. If anything happens to you, your loved ones will greatly appreciate being able to listen to your voice again."`',
"`\"If someone is treating you to a meal and you're wondering how much you should spend, ask them what they're ordering to get a better idea of the range.\"`",
    '`"Never leave water bottles, reading glasses, or anything else that can focus light in a spot that could get direct sunlight. A significant number of house/vehicle fires happen every year because of this."`',
'`"If you reach out to someone for help on a technical issue and they spend their valuable time helping you but are unable to resolve it, always try and let them know how it got resolved so they can help the next person with the same issue."`',
'`"If you find information on the internet that you may need again in the future, print the page to a PDF digital file. There is no guarantee that the page will be available again in the future, and now you will have a digital copy for future reference."`',
'`"If you want to learn another language, watch children’s shows in that language to pick up on it quicker."`',
'`"If you want to separate some pdf pages without using any new software. you can open the pdf file in chrome then click on print then select custom pages option, and finally choose to save as pdf."`',
'`"If you’re ever in the heat of an argument, always act like you’re being recorded. This helps you from saying things you don’t mean and could regret later."`',
'`"Make music playlists during times in your life when good things are happening and you are experiencing good feelings. Then when you\'re down later in life listen to those playlists to instantly feel better, and feel those good emotions again."`',
'`"When going on a first date, think in terms of "will I like them?" instead of "will they like me?""`',
'`"When researching things to do for your next leisure travel. Include \<location\> tourism scam into your search. All tourist heavy areas will have their own scams. This should not dampen your excitement but heighten your knowledge so your vacation will be more enjoyable."`',
    '`"Just because you’ve known that person for years doesn’t mean you should stay friends with them. A toxic friend needs to be cut out of your life."`',
'`"Tired of all the ads in one of the free (offline) game apps you’re playing? Go to your settings and turn off the apps access to cellular data. Enjoy the ad free game play!"`',
    '`"Treat your monthly savings goal like a bill. At the end of the month, hold yourself accountable to “pay it off” like you would your rent or your utilities. This will keep you on track for your savings goals."`',
'`"If you need to wait until your boss is in a good mood to ask for something as simple as time off, you\'re in a toxic work environment and you need to take steps to exit sooner than later."`',
'`"When debating someone on a heated issue, start by looking for something to agree with them on. The rest of the conversation will be a lot less hostile if you establish common ground."`',
'`"Record random conversations with your parents and grandparents. Someday hearing their voice may be priceless to you."`',
"`\"If you're a student planning on your career, look up postings of your dream job, find the skills and qualifications you'll need, then work backwards from there.\"`",
"`\"If someone asks how your weekend was, assume they're really wanting to tell you about theirs. Keep your answer short and enthusiastically ask about theirs. It'll make their day.\"`",
'`"When traveling with a friend or family member, don’t be afraid to suggest breaking off to each do your own things for a day. Going solo can be enjoyable (eat/go wherever want at your own pace), plus it reduces you being sick of each other by the end of the trip."`',
'`"If you’ve got some free time and you’re planning on spending it watching tv/playing video games, etc. make yourself go on a short walk or do some brief exercise beforehand. You’ll probably end up going longer than you planned and you’ll feel better about relaxing after."`',
'`"When you get a new notebook, leave the first page blank. When you finish using the notebook, you can number the pages and use the first page as a table of contents."`',
'`"Don’t delete old playlists if you can prevent it; years later you can listen and not only rediscover music you were into but also experience whatever emotion you had associated with your tunes at the time."`',
'`"No matter how small the job is, wear correct masks/respirators/eye or ear protection. Your future self will thank you."`',
'`"Getting angry with people for making mistakes doesn\'t teach them not to make mistakes, it just teaches them to hide them."`',
"`\"When making conversation with someone you've just met, ask them what they've been listening to lately, rather than what their favorite kind of music is - it's fresh in their mind and they won't have to pick favorites on the spot.\"`",
'`"Learn to do -- and enjoy -- things by yourself. You\'re going to miss out on a lot of fun if you keep waiting for someone else to accompany you."`',
'`"If you want someone to really listen to you, then start the conversation with "I shouldn\'t be telling you this, but...""`',
'`"Do you not like having bitter coffee but don\'t want to add sugar for dietary or other reasons? Add a pinch of salt instead, it removes the bitter taste while not making your coffee taste salty."`',
'`"Don\'t choose a common sound for your alarm clock to wake up. If you hear your alarm clock sound any other time, you will get anxiety."`',
    '`"Keep your water bottle near you and your alarm far from you in the morning for a great start to the day!"`',
    '`"If you borrow money from someone, don’t let it get to the point that he/she has to ask for it back. It sucks for both. If you can’t repay now, show intent by paying what you can and keeping the other person posted often."`',
'`"Don\'t brag about knowledge you just acquired, simply explain it. You will learn humility, plus people often like to learn new things."`',
'`"If you have a favorite movie you’ve seen several (or hundreds) of times, try watching it with subtitles/closed captioning on. You might be surprised just how many lines you heard wrong or missed entirely."`',
'`"Write down great ideas when you get them; do that right away. You think you will never forget them, but you almost always will."`',
'`"If you’re not sure whether someone is waving at you or someone behind you, just smile at them. \n(It’ll save you the very awkward feeling of receiving a greeting meant for someone else.)"`',
'`"If you want to offer a deep and memorable compliment, ask someone how they did something. It gives them the opportunity to tell their story, and shows your genuine interest."`',
'`"Don’t hide the things that make you unique. If you smile a certain way or have any thing about you that is not normal, be confident with it. People will find it cute or attractive because it makes you special."`',
'`"When someone only remove one ear pod to talk to you, they most probably don\'t want a lengthy conversation."`',
"`\"If you haven't used your voice in a while (sleeping, lonely, etc) and suddenly need to take a phone call, hum for a few seconds prior. Your vocal cords won't let you down.\"`",
    '`"Open chip bags upside down. They\'ve been sitting upright most of their lives which makes the seasoning settle to the bottom of the bag."`',
'`"If you tell people there is an invisible man in the sky that created the entire universe, most will believe you; if you tell them the paint is wet, most will touch it to be sure."`',
    '`"When asked online to confirm "I am not a robot", if you long press on the tick box and release, you will not be asked to complete the "click all store front" etc tests."`',
'`"Buy yourself a good pillow. You use it every night and the difference between a good pillow and a stack of cheap ones is almost immediately noticeable."`',
    '`"If you want your man to win in this world, treat him like a king at home, and the world itself will call you a queen!"`',
'`"Be mindful of poorer friends when suggesting splitting the bill equally in a restaurant. Some people will choose cheaper options because they\'re on a budget."`',
    "`\"When you are trying to resolve an issue where someone else made an error, put the focus on the error and not the person. For example: instead of saying, “You didn’t send the attachment,” say, “The attachment didn’t come through, please try sending it again.”\"`",
    '`"Buy a small bottle of perfume you have never tried before going on a vacation and use it while you\'re there. At any point after your vacation, when you get a sniff of it, it brings back those memories instantly. Because scents are among the most powerful memory triggers."`',
"`\"If someone wishes you Merry Christmas and you don't celebrate Christmas, just say thank you. There's no need to tell them you don't celebrate. It just makes things awkward.\"`",
'`"When trying to focus on something (writing, revising, reading) listen to music with no words. This allows you to block out unwanted sound and having no lyrics can stop you from being distracted."`',
'`"If you are quitting a vice (smoking, drinking, etc.) treat yourself with the money you are saving. It makes quitting easier."`',
'`"Someone who likes you will often automatically look at you when they laugh or find something funny."`',
    '`"Never shake spices over a hot pan. The steam will enter the bottle causing the spice to go hard."`',
'`"When starting a new change in your life such as going to the gym or quitting smoking, avoid telling friends or family. Their positive feedback can give you a false feeling of accomplishment tricking you into thinking you have already succeeded which can hinder your efforts to change."`',
'`"If you are composing an important message, do not enter the recipient until you have finished composing it so that you do not accidentally send an incomplete message."`',
'`"If you are nervous walking into a new place with a group of people, make sure you are the first to the building. You can hold the door for everyone else making yourself look kind, yet you will be the last one in and can follow everyone elses lead."`',
'`"If you\'re double checking a number or a sequence, read it backwards to avoid making the same mistake twice."`',
'`"Take photos of your parents doing things they do every day. When you get older, they will bring back memories more than any posed pic ever could."`',
"`\"If you're in a job interview and you're offered a glass of water, always accept. If you're asked a tough question, you can take a sip and get yourself some extra seconds to think of a response.\"`",
"`\"If you make a mistake, admit to the mistake, apologize, and explain what steps you'll take to prevent it from happening again in the future. It's very hard for people to yell at you if you've done that.\"`",
'`"Universities like MIT offer free online courses for subjects like Computer Science, Engineering, Psychology and more that include full lectures and exams."`',
"`\"Treat another persons phone or computer like you would their diary. Don't even touch it unless they allow you to. It's always for the best.\"`",
"`\"Don't undervalue yourself when deciding whether or not to apply for a new job. It's up to the person doing the hiring to determine if you are what they're looking for, and the only way to guarantee that you won't get the job is if you don't apply for it.\"`",
'`"When drying clothes in the sun, turn them inside out so the colours don’t fade in the sunlight."`',
'`"To listen to music on your phone via YouTube in the background, use the Chrome browser, go to the video, and request desktop site. This will allow you to listen anywhere on the phone."`',
'`"Whenever your smoke alarm goes off, give your dog a treat. They\'ll associate the alarm with the treat; so when the alarm goes off for real, your dog will come right to you."`',
    '`"You never know what is taking place in a stranger\'s life. Try to be patient and passive if someone seems to be "overreacting"."`',
    '`"Everybody is a genius in their own way. But if you judge a fish by its ability to climb a tree rather than swim, it will spend its whole life feeling dumb. So master your own field and know it well rather than chasing blind suspicions."`',
'`"Search a beautiful heart, not a beautiful face. Beautiful things are not always good, but good things are always beautiful."`',
'`"It\'s better to cross the line and suffer the consequences than to just stare at the line for the rest of your life."`',
'`"Rather than shushing someone who’s speaking too loudly, try just talking to them in a much quieter voice. They often pick up on the contrast in volume, and self-correct without feeling attacked."`',
'`"If there are no chances for job growth or improvement - it\'s time to move on. You are worth more the more you learn. Otherwise you are getting paid less the more you know."`',
    '`"If you burn food to the bottom of a pot and can\'t scrub it out, put the pot back on the stove and boil water in it. It will loosen the burnt food and make it easier to clean."`',
'`"When filling out applications online, make sure you copy responses which typically take a long time to write, and paste them to a text file. You never know when you could get a server timeout."`',
'`"Being positive doesn’t mean we don’t get negative thoughts. It just means that we don’t allow those thoughts to control our life."`',
"`\"If you share an 'inside joke' with a friend around other people, just let them know what it is even if they won't get it. People don't appreciate being excluded.\"`",
'`"Never make fun of someone if they mispronounce a word. It means they learned it by reading."`',
'`"If a service dog without a person approaches you, it means that the person is in need of help."`',
'`"When taking a taxi ALWAYS get a receipt even if you don\'t need one. That way if you happen to accidentally leave a personal belonging behind you will have the company name and taxi number."`',
"`\"If you're buying a home printer for occasional use, get a laser printer; they're more expensive up front but way more economical in the long run.\"`",
'`"Go for that run, no one is looking at you, don\'t overthink it, do it!"`',
]
QT = [
'`"Arrange them in descending order of importance – MONEY, LOVE, FAMILY, CAREER, FRIENDS."`',
'`"If you had to change your name, what would your new name be, and why would you choose that name?"`',
'`"What’s the most interesting thing you’ve read or seen this week?"`',
'`"What scene from a TV show will you never forget?"`',
'`"If you could become a master in one skill, what skill would you choose?"`',
'`"What three words can describe you?"`',
'`"If you had to delete one app from your phone, what would it be?"`',
'`"Would you go out with me if I was the last person on earth?"`',
'`"If you switched genders for the day, what would you do?"`',
    '`"If you could eat lunch with someone here, who would you choose?"`',
'`"If you were told you only had one week left to live, what would you do?"`',
'`"What\'s number one item you would save from your burning house?"`',
'`"If you could only text one person for the rest of your life, but you could never talk to that person face to face, who would that be?"`',
'`"How many kids do you want to have in the future?"`',
'`"Who in this group would be the worst person to date? Why?"`',
'`"What does your dream boy or girl look like?"`',
'`"What would be in your web history that you’d be embarrassed if someone saw?"`',
'`"Do you sing in the shower?"`',
'`"What’s the right age to get married?"`',
'`"What are your top 5 rules for life?"`',
'`"If given an option, would you choose a holiday at the beach or in the mountains?"`',
'`"If you are made the president of your country, what would be the first thing that you will do?"`',
'`"If given a chance to meet 3 most famous people on the earth, who would it be, answer in order of preference."`',
'`"Have you ever wished to have a superpower, if so, what superpower you would like to have?"`',
'`"Can you spend an entire day without phone and internet? If yes, what would you do?"`',
'`"Live-in relation or marriage, what do you prefer?"`',
'`"What is your favorite cuisine or type of food?"`',
'`"What are some good and bad things about the education system in your country?"`',
'`"What do you think of online education?"`',
'`"What are some goals you have failed to accomplish?"`',
'`"Will technology save the human race or destroy it?"`',
'`"What was the best invention of the last 50 years?"`',
'`"Have you travelled to any different countries? Which ones?"`',
'`"Which sport is the most exciting to watch? Which is the most boring to watch?"`',
'`"What’s the most addictive mobile game you have played?"`',
'`"How many apps do you have on your phone?"`',
'`"What was the last song you listened to?"`',
'`"Do you prefer to watch movies in the theater or in the comfort of your own home?"`',
'`"Do you like horror movies? Why or why not?"`',
'`"How often do you help others? Who do you help? How do you help?"`',
'`"What song do you play most often?"`',
'`"Suggest a new rule that should be added in this group!"`',
'`"What app on your phone do you think I should get?"`',
'`"What website or app has completely changed your life for better or for worse?"`',
'`"What isn’t real but you desperately wish it was?"`',
'`"What thing do you really wish you could buy right now?"`',
'`"If you could ban an admin from this group. Who would you prefer ?"`',
'`"What would you do if someone left a duffle bag filled with $2,000,000 on your back porch?"`',
'`"Who is the luckiest person you know?"`',
'`"If you could visit someone\'s house in this group, who would it be ?"`',
'`"What are you tired of hearing about?"`',
'`"If you died today, what would your greatest achievement be?"`',
'`"What method will you choose to kill yourself?"`',
'`"What’s the best news you\'ve heard in the last 24 hours?"`',
'`"What is the most important change that should be made to your country’s education system?"`',
'`"Send your favourite sticker pack."`',
'`"Send your favourite animated sticker pack."`',
'`"Send your favourite video or gif."`',
'`"Send your favourite emojies"`',
'`"What’s something you misunderstood as a child and only realized much later was wrong?"`',
]
LOGIC = [
'`"Underwater bubbles and raindrops are total opposites of each other."`',
'`"If you buy an eraser you are literally paying for your mistakes."`',
'`"The Person you care for most has the potential to destroy you the most."`',
'`"If humans colonize the moon, it will probably attract retirement homes as the weaker gravity will allow the elderly to feel stronger."`',
'`"Any video with ?wait for it? in the title is simply too long."`',
'`"Your age in years is how many times you?ve circled the Sun, but your age in months is how many times the Moon has circled you."`',
'`"Biting your tongue while eating is a perfect example of how you can still screw up, even with decades of experience."`',
'`"Saying that your home is powered by a wireless Nuclear fusion reactor that is 93 Million miles away sounds way cooler than just saying you have solar panels on your roof."`',
'`"The most crushing feeling is when someone smiles at you on the street and you don?t react fast enough to smile back."`',
'`"Teeth constantly require maintenance to prevent their decay when alive, and yet they manage to survive for thousands of years buried as fossils."`',
'`"A folder is for things that you don\'t want to fold."`',
'`"Waking up in the morning sometimes feels like resuming a shitty movie you decided to quit watching."`',
'`"If everything goes smoothly, you probably won\'t remember today."`',
'`"When you meet new people in real life, you unlock more characters for your dream world."`',
'`"Maybe if they renamed sunscreen to ?anti-cancer cream? more people would wear it."`',
'`"200 years ago, people would never have guessed that humans in the future would communicate by silently tapping on glass."`',
'`"Parents worry about what their sons download and worry about what their daughters upload."`',
'`"It\'s crazy how you can be the same age as someone, but at a completely different stage in your life."`',
"`\"When you think you wanna die, you really don't wanna die, you just don't wanna live like this.\"`",
'`"Technically, no one has ever been in an empty room."`',
'`"An onion is the bass player of food. You would probably not enjoy it solo, but you?d miss it if it wasn?t there."`',
"`\"We run everywhere in videogames because we're too lazy to walk, but In real life we walk everywhere because we're too lazy to run.\"`",
'`"Every single decision you ever made has brought you to read this sentence."`',
"`\"The word 'quiet' is often said very loud.\"`",
'`"Everybody wants you to work hard, but nobody wants to hear about how hard you work."`',
'`"We brush our teeth with hair on a stick and brush our hair with teeth on a stick."`',
'`"No one remembers your awkward moments but they?re too busy remembering their own."`',
'`"Dumb people try to say simple ideas as complex as possible while smart people try to say complex ideas as simple as possible."`',
"`\"Some people think they're better than you because they grew up richer. Some people think they're better than you because they grew up poorer.\"`",
'`"The biggest irony is that computers & mobiles were invented to save out time!"`',
'`"After honey was first discovered, there was likely a period where people were taste testing any available slime from insects."`',
'`"You know you?re getting old when your parents start disappointing you, instead of you disappointing them."`',
'`"Humans are designed to learn through experience yet the education system has made it so we get no experience."`',
'`"By focusing on blinking, you blink slower... Same for breathing."`',
'`"Drivers in a hurry to beat traffic usually cause the accidents which create the traffic they were trying to avoid."`',
'`"Characters that get married in fiction were literally made for each other."`',
'`"Babies are a clean hard drive that can be programmed with any language."`',
"`\"There could be a miracle drug that cures every disease to man, that we'll never know about because it doesn't work on rats.\"`",
"`\"Rhinos evolved to grow a horn for protection, but it's what's making them go extinct.\"`",
'`"Maybe we don\'t find time travelers because we all die in 25-50 years."`',
'`"Sleep is the trial version of death, It even comes with ads based on your activity."`',
'`"The most unrealistic thing about Spy movies is how clean the air ventilation system is!"`',
'`"In games we play through easy modes to unlock hard modes. In life we play through hard modes to unlock easy modes."`',
'`"Silent people seem smarter than loud people, because they keep their stupid thoughts to themselves."`',
'`"If Greenland actually turns green, we\'re all screwed."`',
'`"If someone says clever things in your dream, it actually shows your own cleverness."`',
'`"Famous movie quotes are credited to the actor and not the actual writer who wrote them."`',
'`"No one actually teaches you how to ride a bicycle. They just hype you up until you work it out."`',
'`"Ask yourself why the the brain ignores the second the."`',
'`"You?ve probably forgot about 80% of your entire life and most of the memories you do remember are not very accurate to what actually happened."`',
'`"It will be a lot harder for kids to win against their parents in video games in the future."`',
'`"Everyone has flaws, if you don\'t recognize yours, you have a new one."`',
'`"Raising a child is training your replacement."`',
"`\"'O'pen starts with a Closed circle, and 'C'lose starts with an open circle.\"`",
'`"There\'s always someone who hated you for no reason, and still does."`',
'`"After popcorn was discovered, there must have been a lot of random seeds that were roasted to see if it would have the same effect."`',
'`"The more important a good night\'s sleep is, the harder it is to fall asleep."`',
'`"Blessed are those that can properly describe the type of haircut they want to a new stylist."`',
"`\"Too many people spend money they haven't earned, to buy things they don't want, to impress people they don't like!\"`",
'`"Theme park employees must be good at telling the difference between screams of horror and excitement."`',
'`"6 to 30 feels more half-an-hour than 50 to 20"`',
'`"Getting your password right on the last login attempt before lockout is the closest thing to disarming a bomb at the last minute that most of us will experience."`',
'`"Listening to podcasts before bed is the adult version of story-time."`',
'`"If all criminals stopped robbing then the security industry would fall in which they could then easily go back to robbing."`',
'`"A ton of whales is really only like half a whale."`',
'`"When you get old, the old you is technically the new you, and your young self is the old you."`',
'`"You probably won\'t find many negative reviews of parachutes on the Internet."`',
'`"We show the most love and admiration for people when they\'re no longer around to appreciate it."`',
"`\"We've practiced sleeping thousands of times, yet can't do it very well or be consistent.\"`",
'`"Humans are more enthusiastic about moving to another planet with hostile environment than preserving earth - the planet they are perfectly shaped for."`',
"`\"The happiest stage of most people's lives is when their brains aren't fully developed yet.\"`",
'`"The most effective alarm clock is a full bladder."`',
'`"You probably just synchronized blinks with millions of people."`',
'`"Since we test drugs on animals first, rat medicine must be years ahead of human medicine."`',
'`"Night before a day off is more satisfying than the actual day off."`',
'`"We put paper in a folder to keep it from folding."`',
'`"Somewhere, two best friends are meeting for the first time."`',
'`"Our brain simultaneously hates us, loves us, doesn\'t care about us, and micromanages our every move."`',
'`"Being a male is a matter of birth. Being a man is a matter of age. But being a gentleman is a matter of choice."`',
'`"Soon the parents will be hiding their social account from their kids rather than kids hiding their accounts from the parents."`',
'`"Wikipedia is what the internet was meant to be."`',
'`"A theme park is the only place that you can hear screams in the distance and not be concerned."`',
'`"A wireless phone charger offers less freedom of movement than a wired one."`',
"`\"If you repeatedly criticize someone for liking something you don't, they won't stop liking it. They'll stop liking you.\"`",
'`"Somewhere there is a grandmother, whose grandson really is the most handsome boy in the world."`',
'`"If someday human teleportation becomes real, people will still be late for work."`',
'`"The first humans who ate crabs must have been really hungry to try and eat an armored sea spider"`',
'`"Doing something alone is kind of sad, but doing it solo is cool af."`',
'`"Your brain suddenly becomes perfect at proofreading after you post something."`',
'`"There\'s always that one song in your playlist that you always skip but never remove."`',
'`"Kids next century will probably hate us for taking all the good usernames."`',
'`"Bubbles are to fish what rain is to humans."`',
'`"The more people you meet, the more you realise and appreciate how well your parents raised you."`',
'`"A comma is a short pause, a coma is a long pause."`',
'`"Someday you will either not wake up or not go to sleep."`',
'`"Bermuda Triangle might be the exit portal of this simulation."`',
'`"If we put solar panels above parking lots, then our cars wouldn\'t get hot and we would have a lot of clean energy."`',
"`Do You Know, Some Mosquitos Became Ghosts, When you *Killed* Them...`",
"`Do You Know, Mosquitoes has Teleportation Power...`",
"`Do You Know, When you see a bearded Goat, that means you juat saw a *Smarter Goat* than YOU....`",
"`Do You Know, when You give some ruppess to a Bus Conductor, He will give You a Piece of Paper, *Called Ticket*...`",
"`Do You Know, Bus are called Bus, Because they are Bus....`",
"`Do You Know, There's a Huge Difference between *Cartoon amd Anime*...`",
"`Do You Know, We can't see Ghosts But Ghosts Can see Us...`",
]
SNOW = [
"Never forget what you are. The rest of the world will not.Wear it like armor,\n and it can never be used to hurt you.",
"There is only one thing we say to death: **Not today.**",
"If you think this has a happy ending, you haven’t been **paying attention**.",
"Chaos isn’t a pit. Chaos is a ladder.",
"You know nothing, **Jon Snow**",
"**Winter** is coming.",
"When you play the **game of thrones**, you win or you die.",
"I'm not going to **stop** the wheel, I'm going to **break** the wheel.",
"When people ask you what happened here, tell them the **North remembers**. Tell them winter came for **House Frey**.",
"When the snows fall and the white winds blow,\n the lone wolf dies, but the pack **survives**.",
]
SHAYRI = [
"🙂Kitna Khusnuma Hoga,\nWoh Meri Maut Ka Manjar\nJab Mujhe Thukrane Wale\nKhud Mujhe Paane Ke Liye,\nAansu Bahayange!!!☺️\n\n\n ✍️ {}",
"Zindagi me baar baar\nKoi sahara nahi milta,\n\nBaar baar koi\nPyaar se pyara nahi milta,\n\nJo paas hai ussy sambhal ke rakhna,\nKyuki koi khoo jaaye toh\n\n**Phir doobara nahi milta...**☺️\n\n\n✍️ {}",
"कभी अपना कहते थे\n आज बेगाना कर गए...\n\nहमसे बात ना करने के लिए\n बहाना कर गए...\n\nशुक्रिया कैसे करूं तुम्हारा \nसमझ नहीं आ रहा...\n\nमेरे इस नियाने से दिल को \n**सयाना कर गए...*\n\n\n✍️{}",
"Teri Khubsurti Ki Tareef \nMain Ab Kya Likhu\n\n\nKuch Khubsurat Lafzon Ki Talaash \nAb Bhi Hai Mujhe🙂\n\n\n✍️{}",
"Main Uska Ho Nahi Sakta,\nWoh Meri Ho Nahi Sakti\n\nWoh Aaye Lakh Khwaabon Main,\nSapna Sach Ho Nahi Sakta\n\nMere Nazdeek Ho Kar Bhi,\nNahi Hai Woh Sath Mere\n\nUsse Neend Nahi Aati,\nYaha Main So Nahi Sakta\n\nMohabbat Ka Jo Rishta Hai,\nNaa Jaane Kaisa Rishta Hai\n\nMera Ho Ke Bhi Koi,\n**Mera Ho Nahi Sakta!!!**\n\n\n✍️{}",
"Dukh yeh nhi,\nKe koi Apna nhi....\n\nDukh Yeh Hai Ke\nKisi ne\n\n\n**APNA BANA KAR CHOR DIA**🙂💔\n\n\n✍️{}",
"एक बार भूल से ही \nकहा होता \nकी हम किसी और के भी है \nखुदा कसम \nहम तेरे सायें से भी दूर रहते...🙂\n\n\n✍️{} ",
"Dosti Nibhate Nibhate \nUs Se Mohabbat Si Ho Gayi\n\nGam Hi Mile Sahi \nPar Chahat Si Ho Gayi\n\nKarte The Jo Baatain \nRaat Raat Bhar\nAaj Un Se Baat Karne Ki Khwahish Si Ho Gayi\n\nJee Nahi Sakte Ab Us Ke Bin\n**Us Ke Sath Rehne Ki Aadat Si Ho Gayi**\n\n\n✍️{}",
"Tere Deedar ke lie aate hai\nTeri galiyon me...\n\nWarna awaargi ke lie to\nPura seher pada hai🙂\n\n\n✍️{}",
"Bass Aakhir baar tere pyaar ko Mehsus karloon\n\nLaut ke fir kabhi tere galiyoon me nhi aaunga\n\nApni barbaad mohabbat ka Zanaja lekar\n**Teri Duniya se bahut dur chala jaunga**\n\n\n✍️{}",
"Bheed Ki aadat nhi mujhe\nThode me zeena sikh lia humne\n\nDo dost hai,Channd Duae hai\nBass inn khusiyoon ko\nGale laga lia humne🙂\n\n\n✍️{}",
"दोस्ती जैसे खूबसूरत रिश्ते को \nदफना दिया तुमने \nअब उसे दफन ही रहने दो ।\n\nअब मेरे अंदर कोई जज्बात नहीं बचे \nमर चुका हूं में \nअब तुम मुझे दफन ही रहने दो\n\n\n✍️{}"
"अगर बुरा न मानो तो कहें???\n\n हमको भी बुरा लगता है !!!!\n\n\n✍️{}",
"में क्यों तेरे ख्यालो में खोता रहुँ\n\n पागल नहीं हूँ में \nजो हर पल तेरे लिए रोता रहुँ !\n\n\n✍️{}",
"ना **राज** है....जींदगी,...\nना **नाराज**है....जींदगी,...\nबस जो भी है,........\n वो **आज**है....... जींदगी\n\n\n✍️{}",
"जब तुम नहीं समझे,\nतब मैंने खुद को कितना समझाया\n\nये तुम कभी नहीं समझोगे\n\n\n✍️{}",
"Kisi Ne Yun Hi Pooch Liya Humse\nKi Dard Ki Keemat Kya Hai,\nHumne Hanste Huye Kaha\nPata Nahin\n\n**Kuch Apne Muft Me De Gaye**\n\n\n✍️{}",
"Rekhao ka khel he sara\nKya kare taqdeer ka mara\n\nJis qadar uski qadar ki..\n**Uss qadar beqadar huve ham**\n\n\n✍️{}",
"तेरी हर बात मुझे अपने तरफ खिंचती क्यूँ हैं,\nतू क्या हैं, कौन हैं, मेरे लिए इतना जरूरी क्यूँ हैं\nमेरे साथ साथ तू साये की तरह क्यूँ हैं,\n\nअगर ऐसा ही हैं तो फिर \nतू मुझसे इतना दूर होके भी\n**पास क्यू हैं\n\n\n✍️{}",
"नज़र को नज़र की खबर ना लगे\nकोई अच्छा भी इस कदर ना लगे \n\nआपको देखा है बस उस नज़र से\nजिस नज़र से आपको नज़र ना लगे\n\n\n✍️{}",
"Teri muskurahat Meri pahechan he\nTerri Khushi Meri shan he\n\nKuch bhi nhi he meri jindgi me\nItna smaj le bas tu Meri jaan he\n\n\n✍️{}",
"💞💞💞💞💞💞💞💞💞💞\n\nमेरा इश्क बड़ा नाज़ुक है इसे सहेज के रखना...,\n\n\nइसे उंगलियों से मत पकड़ना...\nहथेलियों पे रखना...,\n\n💞💞💞💞💞💞💞💞💞💞\n\n\n✍️{}",
"तेरे इश्क की जंग में,\n हम मुस्कुराके डट गए,\n\nतलवार से तो बच गए,\n तेरी मुस्कान से कट गए।\n\n\n✍️{}",
"💖आँखों में देखी जाती हैं..,,\nप्यार की गहराईयाँ.\n\nशब्दों में तो छुप जाती हैं..,,\nबहुत सी तन्हाईयाँ....💖⚡😘\n\n\n✍️{}",
"Dhadkan Ye Kehti Hai.\nDil Tere Bin Dhadke Na.\nEk Tu Hi Yaar Mera.\nMujhko Kya Duniya Se Lena\n\n\n ✍️ {}",
"Khud Nahi Jante Kitne Pyare Ho Aap.\nJaan Ho Hamari Par Jaan Se Pyari Ho Aap.\nDuriyon Ke Hone Se Koi Fark Nahi Padta.\nKal Bhi Hamare The Aur Aaj Bhi Hamari Ho Aap\n\n\n✍️ {}",
"Samandar Kinare baithe hai.\nKabhi to leher aaegi.\nKismat badle ya na badle.\nGand to dhul jaegi.\n\n\n✍️ {}",
"Mere Dil ke Yeh tukde hai.\nNigaaho se choonu yaara.\nMohabatt ki kahaani hai.\nMohabatt se suno yaara.\n\n\n✍️ {}",
"Kaunsa Zakhm Tha Jo Taaza Naa Tha.\nItna Gam Milega Andaza Naa Tha.\nAap Ki Jheel Si Aankhon Kaa Kya Kasoor.\nDubne Wale Ko Hi Gehrai Kaa Andaza Naa Tha\n\n\n✍️ {}",
"Bahte Hue Duriya Ko Kya Modega Koi.\nToote Hue Shishe Ko Kya Jodega Koi.\nChalo Fir Se Dil Lagake Dekhte Hai.\nAb Is Toote Hue Dil Ko Kya Todega Koi\n\n\n✍️ {}",
"Dil Ko Jalate Hai Ham Diye Ki Tarah,\nTeri Zindagi Main Roshni Lane Ke Liye,\nLe Lete Hai Har Kaaton Ko Apni Zindagi Mein,\nBas Teri Rahon Main Phool Bhichane Ke Liye\n\n\n✍️{}",
"Pyase Ko Ek Katra Pani Kafi Hai.\nIshq Mein Char Pal Ki Zindgani Kafi Hai.\nDoobne Ko Samander Mein Jayein Kahan.\nAapki Aankh Se Tapka Voh Pani Kafi Hai.\n\n\n✍️ {}",
"HAMNE TOH BSS DOST KO HI BEWAFA SAMJHA THHA...\nYAHAAN SACCHA PYAAR V SAATH NHI DIYA🥱🥱\n\n\n✍️ {}",
"Love leads to death 🥱🥱\nOr to a living dead 🥱🥱\n\n\n✍️ {}",
"BAATEN TU KABHI YE NA BHULNA.....\nKOI TERE KAARANN HAI..MRR RHA 🥱🥱🥱🥱\n\n\n✍️ {}",
"Ae dost Tere jaise log ko kaat k fekk dange hm\nMeri taraf aae her toofan ko Teri taraff bhej dange hm...\nLekhin tune Jo saath chorrda hamara ......\nKsm SE badnaam krke tujhe nya dost....\n dhoondh lange hum🥱🥱🥱🥱\n\n\n✍️ {}",
"Bde ajeeb Hain ye Zindagi k raaste.........\nAnjaane modd pe log Mill jaate Hain...khhud ko apna BTA k.....chorrrd jaate Hain...\n. KRTE hai. H baat (Zindagi bhar saath rahenge) interest khtm hone prr......zinda LAASH BNA jaate h🥱🥱🥱\n\n\n✍️ {}",
"Dill jaisa thha waisa hi reh jaata......\nJitne dard thhey UTNE kaafi thhey.......\nZindagi aap me aake aur tadpaa diya.........\nMillla kya u badnaam krke ....zinda LAASh...... DIYA🙃🙃\n\n\n✍️ {}",
"DARD SE IS KADAR DOSTI HO GYI.......\nZINDAGI BEDARD SI HO GYI.......\nJALL GAY WO ASHIYANA.......JO KABHI BNA HI NHI THHA......\nROSHNI TOH CHORRDO..........\nGHAR MEIN JO MOMABATTIE THHI WO V KHTM HO GYI.........🥱🥱\n\n\n✍️ {}",
"Zindagi barbaad hai...... Zindagi SE pyaar na Karo.......\nHo raat toh Dinn ka intezaar na Karo.......\nWo Pall v aaega....jiss pal ka INTEZAAR na ho aako.....\nPRRR uspe kabhi aitbaar na Karo........🥱🥱\n\n\n✍️ {}",
"Dard k saath rhte hue v dosti nhi Hui\nZindagi bedard si hote hue v nhi Hui\nAashiyana toh jall gya\nPrr Roshni nhi Hui ..........❤️\n\n\n✍️ {}",
"ME: DUNIYA ME AISI KYA CHEEZ HAI JO FREE MEI MILTI HAI............\nMAH HEART : DHOKHA \n\n\n✍️ {}",
"JO INSAAN AAPKO TADAPTA HUA ....ROTA CHORRD DE NA.......... TOH SAMAJH LENA WO KABHI AAPSE \nPYAAR NHI KRR SKTA.....AGAR KOI PYAAR KAREGA NA......\nTOH WO KABHI AAPKO AISEY NHI CHORRDEGA.......🥱🥱\n\n\n✍️ {}",
"TOOTE HAIN.....ES TARAH DILL ......\nAWAAZ TKK NA AAI....\nHUM JAISEY JEE RHE H.....\nKOI JEE K TOH BTAAE....🙃🙃\n\n\n✍️ {}",
"AANKHON ME AANSU LEKE........\nHOTHON SE MUSKURAAE................\nHUM JAISEY JEE RHE HAIN.......\nKLI JEE K TOH BTAAE...🙃🙃\n\n\n✍️ {}",
"TUJHE KAISEY PTA NA CHALAA.................\nK MAIN TENU PYAAR KRR Di AAN...........\nTUJHE KAISEY PTA NA CHALAA......\nK TERA INTEZAAR KRR DI AAN........🙃\n\n\n✍️ {}",
"MTT CHORRDNA KISIKO USKE HAAL PE.......\nHO SKTA H.......\nAAPKE ALAWA USKE PAAS AUR KOI NA HO.......🙃🙃\n\n\n✍️ {}",
"🙂Kehti Hain Zindagi Pyaar Kar Ke Toh Dekh ,\n Kya Pata Tha Jis Zindagi Ne Pyaar Mein Jeena Sikhaya,\n Aaj Wahi Gir Ke Samhalna Bhi Sikha Gayi☺️\n\n\n✍️ {}",
"आज कुछ इस कदर याद आयी तेरी ..,\nआँसू गिर पड़े जैसे ...,\nनदी को नया मोड़ मिल गया !!\n\n\n✍️ {}",
"कभी अपना कहते थे \n आज बेगाना कर गए...\n\nहमसे बात ना करने के लिए \n बहाना कर गए... \nशुक्रिया कैसे करूं तुम्हारा \nसमझ नहीं आ रहा...\nमेरे इस नियाने से दिल को \n**सयाना कर गए...* \n\n\n✍️ {}",
"जानती हूँ जवाब देना आसान नही \nपर कोशिश भी नही करते तुम ,\n मेरा हाल जानने की !!\n\n\n✍️ {}",
"हम हर बिछड़न में नई मुलाकात को ढूंढते है !!\nतुम्हारे बार बार छोड़ जाने की अब ,\nआदत सी हो गयी है !!\n\n\n✍️ {}",
"सोचते तो तब भी थे हम \nतुम मेरे नही हो सकते !!\nअब भी यकीन कहाँ है \n के तुम कभी मेरे थे !!\n\n\n✍️ {}",
"पगला है वो ,\nना जाने इतना क्यों प्यार करता है !!\nकुछ बातें मेरी \n कहने से पहले ही समझ जाता है !! \n\n\n✍️ {}",
"आज कल हाल कुछ \n Telephone booth की \nतरह हो गया है !!\n लोग आते है बात करते है ,\nऔर बस चले जाते है !\n\n\n✍️ {}",
"दिल रोकना तो बहोत चाहता है \nमगर रोकेंगे नही ....!\nना तुम हमारे कुछ हो \nऔर हम भी तुम्हारे कुछ नही !!\n\n\n✍️ {}",
"फर्क नही पड़ता सच मे ,\n कोई आये कोई जाए !!\nबस जो दिल को बार बार \n आदतें लग जाती है ना \nकिसी की ..!!\n बस छुड़ाने में कुछ देर लगती है !\n\n\n✍️ {}",
"Not in mood. Sorry!!!!",
]
HFLIRT = [
"Doctor Ne Advice Kia Hai Ki Sone Se Pahle Apki Pic Dekh Kar Sona Jaroori Hai, Warna Heart Attack Aa Sakta Hai.😨\n\n\n✍️ {}",
"☺️Ap Itne Cute Ho Ki Agar Mai Msg Na Bhi Karna Chahu.To Bhi Mera Hath Khud Keypad Pr Chalne Lagta Hai😶.\n\n\n✍️ {}",
"😋Aag joh dil mein lagi hai, usse duniya mein laga doonga main ... joh teri doli uthi, zamaane ko jalaa doonga main😏\n\n\n✍️ {}",
"Jaldi se koi bhagwan ko bulao kyuki ek pari kho gayi hain aur wo pari yaha mujhse chatting kar rahi hain😛.\n\n\n✍️ {}",
"Meri aankho 👀ko kuch ho gaya hain, aap per se hat hi nahi rahi hain😶\n\n\n✍️ {}",
"🤨Aap choro ke rani lagte ho kyuki aapne mera dil chura liya hain😘\n\n\n✍️ {}",
"👀Aapki aankhe ocean ki tarah blue he aur me usme har baar dub jata hu🙂\n\n\n✍️ {}",
"📷Aap ek camera ki tarah ho jab bhi aapka photos dekhta hu meri automatic smile aaa jati hain🙈\n\n\n✍️ {}",
]
EFLIRT = [
"Your lips look lonely would they like to meet mine?\n\n\n✍️ {}",
"There isn’t a word in the dictionary to describe how beautiful you are\n\n\n✍️ {}",
"I have had a really bad day and it always makes me feel better to see a pretty girl smile. So, would you smile for me?\n\n\n✍️ {}",
"I lost my teddy bear can i sleep with you tonight?\n\n\n✍️ {}",
"I’m no organ donor but I’d be happy to give you my heart.\n\n\n✍️ {}",
"If I had to rate you out of 10 I’d rate you a 9… because I am the one that you are missing\n\n\n✍️ {}",
"Can I follow you? Cause my mom told me to follow my dreams\n\n\n✍️ {}",
"Your hand looks heavy can i hold it for you?\n\n\n✍️ {}",
"You may fall from the sky, you may fall from a tree, but the best way to fall… is in love with me.\n\n\n✍️ {}",
"Are you the sun? Because you’re so beautiful it’s blinding me\n\n\n✍️ {}",
"I should call you Google, because you have everything I’m looking for.\n\n\n✍️ {}"
"Can you kiss me on the cheek so I can at least say a cute girl kissed me tonight?\n\n\n✍️ {}",
]
ATTITUDE = [
"Dil nhi karta ab\n kisi se dil lagane ko \n bohot aati hai tere jaise \n keh deta hu hoon laut jane ko.\n\n\n✍️ {}",
"humari hesiyat ka andaza tum ye\n jaan ke laga lo hum kabhi unke \n nahi hote jo har kisi ke ho jate hai \n\n\n✍️ {}",
"Attitude तो अपना भी खानदानी है,\nऔर तू मेरे दिल की रानी है, \nइसलिये कह रहा हूँ मान जा, \nक्योंकि अपनी तो करोड़ो दीवानी हैं।\n\n\n✍️ {}",
"मेरा वाला थोड़ा लेट आयेगा,\n लेकिन जब आयेगा तो लाखो में एक आयेगा।\n\n\n✍️ {}",
"इतना Attitude न दिखा जिंदगी में तकदीर बदलती रहती है,\n शीशा वहीं रहता है,\n पर तस्वीर बदलती रहती है।\n\n\n✍️ {}",
"हम से है ज़माना, ज़माने से हम नही,\nकोई हम से नज़रे मिलाये, \nकिसी मे इतना दम नही।\n\n\n✍️ {}",
"हम तो शौक तलवारों के पाला करते हैं,\nबन्दूकों की ज़िद तो बच्चे किया करते हैं।\nशेर अपना शिकार करते हैं और हम अपने Attitude से वार करते हैं।\n\n\n✍️ {}",
"शेर अपना शिकार करते हैं\n और हम अपने Attitude से वार करते हैं।\n\n\n✍️ {}",
]
GBYE = [
" जिंदगी में तन्हा रहना तो मुमकिन नहीं,\nतेरे साथ चलना दुनिया को गवारा भी नहीं,\nइसलिए, तेरा-मेरा दूर जाना ही बेहतर है।\n\n\n✍️ {}",
"कुछ दिन साथ चलने वाले,\nथोड़ा और साथ चलने की तमन्ना थी,\nमजबूरी है कहना ही पड़ेगा अलविदा।\n\n\n✍️ {}",
"न कहा न कुछ सुना, बस चुपके से चल दिए,\nमोहब्बत के उन्होंने सारे मायने बदल दिए,\अब तो तन्हा गलियों में गुजरेगी हर शाम,\nमर भी गए, तो भी नहीं भूलेंगे उनका नाम।\n\n\n✍️ {}",
"पास थे, तो रोने की वजह बनते थे,\nदूर जाकर शायद मुस्कुराना सीख लें आप।\n\n\n✍️ {}",
"दोबारा मिलें जिंदगी में यह दुआ करेंगे,\nदूर रहकर भी नजदीक होने की चाह करेंगे।\n\n\n✍️ {}",
"माफ करना मुझे दूर तो जाना पड़ेगा,\nपास होकर भी तुम्हे अब भूल जाना पड़ेगा।\n\n\n✍️ {}",
"वो शाम सुहानी थी जो गुजरी तेरे साथ,\nबिन तेरे अब कैसे कटेगी सारी रात,\nसमझ लो तुम भी यह मजबूरी है दिल की,\nनहीं गए, तो कैसे कल फिर होगी मुलाकात।\n\n\n✍️ {}",
"तेरे साथ मुस्कुराना और ठोकरों से संभलना सीखा है,\nआता नहीं अलविदा कहना बस रोकर जताना सीखा है।\n\n\n✍️ {}",
"यार तेरी दोस्ती को सलाम है,\nअलविदा कहकर भी हंसा दिया,\nयह बस तेरी यारी का कमाल है।\n\n\n✍️ {}",
"ताउम्र तेरे साथ बीती रातों को फिर याद करेंगे,\nकह सकें अलविदा तुझसे इसलिए मेरे यार,\nआंसू का एक भी कतरा बहाए बिना बात करेंगे।\n\n\n✍️ {}",
"रूठा जमाना जिंदगी भी रूठी,\nतभी तो तेरे-मेरे बीच ये दूरी छूटी,\nसमझ लेना तुम है ये मेरी मजबूरी,\nवरना न आने देता तेरे-मेरे बीच यह दूरी।\n\n\n✍️ {}",
"करीब आते-आते तू कुछ दूर सा हो गया है,\nशाम को अलविदा कह तू कहीं गुम सा गया है,\nचाहता हूं मैं करीब होने का एहसास तेरे पर,\nखुशी के खातिर तेरी तुझे अलविदा कह गया हूं।\n\n\n✍️ {}",
"खुश हूं फिर भी ये आंखे नम हैं,\nन चाहते हुए भी दूर जाने का गम है।\n\n\n✍️ {}",
"दूर जाने की खबर सुनकर ये धड़कने रुक जाती हैं,\nअलविदा कहने के वक्त यार मेरी आंखें भर आती हैं।\n\n\n✍️ {}",
" अब हर लम्हा तुम्हारे बिना सूना सा लगेगा,\nअलविदा कहकर तुम्हारी यादों में जीना पड़ेगा।\n\n\n✍️ {}",
"अब हलचल है दिल में नई उम्मीद की तलाश के लिए,\nकहना पड़ेगा अलविदा नई मंजिल की तलाश के लिए\n\n\n✍️ {}",
" जब तुम जाते हो, तो गुलिस्तां के सभी फूल झड़ जाते हैं,\nसंभलकर कहो अलविदा जाते-जाते पेड़ों से क्यों टकरा जाते हो।\n\n\n✍️ {}",
" तिरछी निगाहों से जो देखा उन्होंने,\nतो हम मदहोश हो चले,\nजब पता चला कि वो अलविदा कहने आए,\nतो हम बेहोश हो चले।\n\n\n✍️ {}",
]
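A hedged usage sketch (not part of the original module): most entries in lists such as SHAYRI, HFLIRT, and GBYE end with `✍️ {}`, a placeholder for the sender's name. The `random_quote` helper and `SAMPLE` list below are illustrative assumptions, showing how a caller would presumably pick a random entry and format the name in.

```python
import random

# SAMPLE is a stand-in for lists like GBYE; the second entry shows that
# not every string carries the `{}` placeholder.
SAMPLE = [
    "Demo line one\n\n\n✍️ {}",
    "Demo line two (no placeholder)",
]


def random_quote(strings, name):
    """Pick a random entry; fill the `{}` placeholder when present."""
    text = random.choice(strings)
    return text.format(name) if "{}" in text else text


print(random_quote(SAMPLE, "Alice"))
```

Checking for the placeholder before calling `.format()` avoids a `KeyError`/`IndexError` on entries that contain literal braces or no placeholder at all.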
RUNSREACTS = [
"`Runs to Thanos`",
"`Runs far, far away from earth`",
"`Running faster than usian bolt coz I'mma Bot`",
"`Runs to Marie`",
"`This Group is too cancerous to deal with.`",
"`Cya bois`",
"`I am a mad person. Plox Ban me.`",
"`I go away`",
"`I am just walking off, coz me is too fat.`",
"`I Fugged off!`",
]
GAALI_STR = [
"`Madarchod Randi ke bacche.Oye bosdike madarchod bhen ke lode tere gand me lohe ka danda garam karke dalu randwe tujhetho gali ke kutte gand pe chut rakh ke katenge me bata raha hu tere lode pe madhu makkhi Katelode ke ando pe Road roller chale tu kab bathroom me muthne Jaye tho Tera loda ghir Jaye fir tere ando me se lizard ke bacche nikle teko kidnap Kare aur childporn banaye maa ke chuttad ke lode tere saat Johnny sins rape Kare aur jab wo teko anal de tab loda andar fas Jaye bkl tere jhaat pe waxing karunga me dhek lio fir jab tu chillayega na tab tere muh me Mai gai ka gobar dalunga sale tere gand ke balo pe tel laga ke jala du me teko Anaconda leke gand me dalu tho muh se nikle maa ke lode hamesha chutiyo jaisa bartav kartha he tu maa ke Dai chawal drugs tere gand Me dalunga thi tatti nahi nikle maa darchod kabhi teko Marne ka mouka mil gaya na tho bas I'll do my best to get that tatti outof you aur tere jaise chutio ko is duniya me jagaha bhi nahi maa ke lode bandarchod tere gand me chitiya Kate wo bhi bullet ants maadarchod samj nahi aaraha tere baap NE teko kya khake paida kiya Tha kesa chutiya he tu rand ke bacche teko shadi me khana khane na mile teko gand pe 4 thappad mare sab log aur blade se likhe I want anal madarchod bosdike maccharki tatte ke baal chutiye maa ke chut pe ghode ka Lund tere gand me jaltha hu koila Dale bhen ke lode MAA KI CHUT MAI TALWAR DUNGA BC CHUT FAT JAEGI AUR USME SE ITNA KHOON NIKLEGA MZA AJAEGA DEKHNE KA SALE MAA KE BHOSDE SE BAHR AJA FIR BAAP SE ZUBAN DA TERI MAA KI CHUT CHOD CHOD KE BHOSDABNADU MADARCHOD AUR USKE UPAR CENENT LAGADU KI TERE JESA GANDU INSAAN KABHI BAHR NA A SKE ESI GANDI CHUT MAI SE LODA LASUN MADRCHOD TERI MAA KI CHUT GASTI AMA KA CHUTIA BACHA TERI MAA KO CHOD CHOD K PAGAL KAR DUNGA MAA K LODY KISI SASTIII RANDII K BACHY TERI MAA KI CHOOT MAIN TEER MAARUN GANDU HARAMI TERI COLLEGE JATI BAJI KA ROAD PEY RAPE KARONGANDU KI OLAAD HARAM KI NASAL PAPA HUN TERA BHEN PESH KAR AB PAPA KO TERI MAA KKALE KUSS MAIN 
KIS`",
"`Main roz teri behno ki banjar chut me apna lawda daalke andar haryali lata tha magar aaj unke ke baare me sunke mujhe bhut afsos huwa..ki unko ab bada loudha chahye..ab mera balatkaaari lawda lagataar 4 ghante tk apne muh me kon rakhega..vo teri behne hi thi jo apni kaali magar rasilli chut mere saamne khol deti aur zameen pe naagin ki tarah rengne lgti thi jaise ki kisine unki chut pe naariyal tod diya ho vo b bada wala mumbai ka naariyal..apni chennal maa ko b nhi bhej rahe mere paas to main kaixe tum logo se vaada karu ki main teri maa chodd dungaw..ab agar tun sach me chahta hai ki main tum dono k mc ki chut me dhammal karu to mera lawda apne muh me rakho aur kaho Sameer hamare sage papa hain... Aur agar tb b the apni maa ki kaali chut mere saamne nahi rakhi to tumhare ghar me ghuske tumhari maa ka balatkaar kar dungaw jaixe delhi me huwa tha...ab teri chudi hui kuttiyo ki tarah apni gaand hilaate hue mere aage kalapna mt ni to tumhari fatti bhoxdi me 100 ched karunga`",
"`Taare hai Asmaan me very very bright jaat na jla bskd dekh le apni hight.`",
"`Zindagi ki na toote lari iski lulli hoti nhi khadi`",
"`Kbhi kbhi meri dil me khyaal ata hai ayse chutiyo ko kon paida kr jata hai😂.`",
"`Saawan ka mahina pawan kare shor jake gand mara bskd kahi aur.`",
"`Dil ke armaa ansuon me beh jaye tum bskd ke chutiye hi reh gye.`",
"`Ishq Se Tabiyat Ne Zeest Ka Mazaa aya maine is lodu ko randi khane me paya.`",
"`Mirza galib ki yeh khani hai tu bhosdika hai yeh sab ki jubani hai.`",
"`Mashoor Rand, Ne Arz Kiya Hai. Aane Wale Aate Hai, Jaane Wale Jaate Hai. Yaade Bas Unki Reh Jaati Hai, Jo G**Nd Sujaa Ke Jaate Hai`",
"`Pani kam hai matke me gand marlunga jhatke me.`",
"`Aand kitne bhi bade ho, lund ke niche hi rehte hai`",
"`Tum Ameer hum gareeb hum jhopdiwale Tum bhosiwale`",
"`Sisi Bhari Gulab ki padi palang ke pass chodne wale chod gye ab q baitha udaas`",
"`Phuloo Ka Raja Gulaab Kaato me Rehta hai Jeewan ka Nirmata jaato me rehta hai😂`",
"`Chude hue maal ko yaad mt krna Jo Chut na de usse kabhi friyad mt karna jise chudna hai wo chud ke rhegi bekar me muth maar ke apni jindagi barbaad mt krna`",
"`Gand mare gandu Chut mare Chutiya Sabse accha mutti 2 mint me chutti😛`",
"`Marzi Ka Sex Pap Nahi Hota.. Piche Se Dalne Wala Kabhi Baap Nahi Hota.. Condom Zarur Lagana Mere Dost Qki.. Sex K Waqt Popat Ke Pass Dimag Nahi Hota.`",
"`Uss Ne Hothon Se Chhu Kar Lowd* Pe Nasha Kar Diya; Lu*D Ki Baat To Aur Thi, Uss Ne To Jhato* Ko Bhi Khada Kar Diya!`",
]
RAPE_STRINGS = [
"`Rape Done Drink The Cum`",
"`EK baat yaad rkhio, Chut ka Chakkar matlab maut se takkar`",
"`The user has been successfully raped`",
"`Dekho Bhaiyya esa hai! Izzat bachailo apni warna Gaand maar lenge tumhari`",
"`Relax your Rear, ders nothing to fear,The Rape train is finally here`",
"`Rape coming... Raped! haha 😆`",
"`Kitni baar Rape krvyega mujhse?`",
"`Tu Randi hai Sabko pta hai😂`",
"`Don't rape too much bossdk, else problem....`",
"`Tu sasti rendi hai Sabko pta hai😂`",
"`Lodu Andha hai kya Yaha tera rape ho raha hai aur tu abhi tak yahi gaand mara raha hai lulz`",
]
ABUSE_STRINGS = [
"`Madharchod`",
"`Gaandu`",
"`Chutiya he rah jaye ga`",
"`Ja be Gaandu`",
"`Ma ka Bhodsa madharchod`",
"`mml`",
"`You MotherFukcer`",
"`Muh Me Lega Bhosdike ?`",
"`Abee tu tiktok wala chakka h na?`",
"`Jaa naa madarchod`",
"`Teri maa meri malllll`",
"`Tu wahi h naa jo roz apni maa chudata hai?`",
]
HIABUSE_STR = [
"Maderchod- MOTHERFUCKER",
"Bhosadike-BORN FROM A ROTTEN PUSSY",
"Bhen chod-Sister fucker",
"Bhadhava- Pimp",
"Bhadhava- Pimp",
"Chodu- Fucker",
"Chutiya- Fucker, bastard",
"Gaand- ASS",
"Gaandu-Asshole",
"Gadha, Bakland- Idiot",
"Lauda, Lund- Penis, dick, cock",
"Hijra- Gay, Transsexual",
"Kuttiya- Bitch",
"Paad- FART",
"Randi- HOOKER",
"Saala kutta- Bloody dog",
"Saali kutti- Bloody bitch",
"Tatti- Shit",
"Kamina- bastard",
"Chut ke pasine mein talay huye bhajiye- Snack fried in pussy sweat",
"Chut ke dhakkan- Pussy lid",
"Chut ke gulam- Pussy whipped",
"Chutiya ka bheja ghas khane gaya hai- idiot’s brain has gone to eat grass",
"Choot marani ka- Pussy whipped",
"Choot ka baal- Hair of vagina",
"Chipkali ke jhaat ke baal- Lizard’s cunt hairs",
"Chipkali ke jhaat ke paseene- Sweat of Lizard’s pubic hair",
"Chipkali ke gaand ke pasine- Sweat of a lizard’s ass",
"Chipkali ke chut ke pasine- Sweat of reptiles cunt",
"Chipkali ki bhigi chut- Wet pussy of a wall lizard",
"Chinaal ke gadde ke nipple ke baal ke joon- Prostitute’s breast’s nipple’s hair’s lice",
"Chullu bhar muth mein doob mar- Drown yourself in a handful of semen",
"Cuntmama- Vaginal uncle",
"Chhed- Vagina,Hole",
"Apni gaand mein muthi daal- Put your fist up your ass",
"Apni lund choos- Go and suck your own dick",
"Apni ma ko ja choos- Go suck your mom",
"Bhen ke laude- Sister’s dick",
"Bhen ke takke: Go and suck your sister’s balls",
"Abla naari tera buble bhaari- woman, your tits are huge",
"Bhonsri-Waalaa- You fucker",
"Bhadwe ka awlat- Son of a pimp",
"Bhains ki aulad- Son of a buffalo",
"Buddha Khoosat- Old fart",
"Bol teri gand kaise maru- let me know how to fuck you in the ass",
"Bur ki chatani- Ketchup of cunt",
"Chunni- Clit",
"Chinaal- Whore",
"Chudai khana- Whore house",
"Chudan chuda- Fucking games",
"Chut ka pujari- pussy worshipper",
"Chut ka bhoot- Vaginal Ghost",
"Gaand ka makhan- Butter from the ass",
"Gaand main lassan- Garlic in ass",
"Gaand main danda- Stick in ass",
"Gaand main keera- Bug up your ass",
"Gaand mein bambu- A bambooup your ass",
"Gaandfat- Busted ass",
"Pote kitne bhi bade ho, lund ke niche hi rehte hai- However big the balls might be, they have to stay beneath the penis",
"Hazaar lund teri gaand main-Thousand dicks in your ass",
"Jhat ke baal- Pubic hair",
"Jhaant ke pissu- Bug of pubic hair",
"Kadak Mall- Sexy Girl",
"Kali Choot Ke Safaid Jhaat- White hair of a black pussy",
"Khotey ki aulda- Son of donkey",
"Kutte ka awlat- Son of a dog",
"Kutte ki jat- Breed of dog",
"Kutte ke tatte- Dog’s balls",
"Kutte ke poot, teri maa ki choot- Son of a dog, your mother’s pussy",
"Lavde ke bal- Hair on your penis",
"muh mei lele: Suck my dick",
"Lund Chus: Suck dick",
"Lund Ke Pasine- Sweat of dick",
"Meri Gand Ka Khatmal: Bug of my Ass",
"Moot, Mootna- Piss off",
"Najayaz paidaish- Illegitimately born",
"Randi khana- whore house",
"Sadi hui gaand- Stinking ass",
"Teri gaand main kute ka lund- A dog’s dick in your ass",
"Teri maa ka bhosda- Your mother’s breasts",
"Teri maa ki chut- Your mother’s pussy",
"Tere gaand mein keede paday- May worms infest your ass-hole",
"Ullu ke pathe- Idiot",
]
GEY_STRINGS = [
"`you gey bsdk`",
"`you gey`",
"`you gey in the house`",
"`you chakka`",
"`you gey gey gey gey gey gey gey gey`",
"`you gey go away`",
"`bhago bhenchod gay aaya`"
]
RENDISTR = [
"`I Know Uh ez Rendi Bhay Dont show Your Randi Pesa Here`",
"`Jag Suna suna laage Sab #maderchod bhay`",
"`you talking behind meh wew uh iz my fan now bhay`",
"`Wanna pass in Life Goto BRAZZER.CAM BHAY`",
"`Uh iz Pro i iz noob your boob is landi uh are Randi`",
"`Sellers Nasa calling Uh bhay😆`",
"`Badwoo ki yojna behan bna ke ch*da uh iz badwa its your yozja?`",
"`CHAND PE CHADA HAI CHANDYAAN KA GHODA TERA NAAM HAI MANSUR TU HAI BEHAN KA LOD*😂`",
"`Jab se dil lga baithe tanhai me maa chu*da baithe wo kho gyi kisi aur ke pyar hum apne hi jaato me aag lga baithe`",
"`Chadii ke ander se lal pani kha se ata hai ky teri masuka ka bhosda bhi paan khata hai😂`",
"`Sun bhosdi ke By anonyCrew MOHABBAT KE SIWA AUR BHI GAM HAI JAMANE ME BSDK GAND PAHAT JATI HAI PAISA KAMANE ME`",
"`Thaan liya tha Sayri nhi krege Unka pichwada dekha Alfaaz nikal gye`",
"`Ravivaar ko dekha Chand Ka Tukra Itna Baar Dekha par Jaath na Ukra`",
"`Katal kro Tir se Talwar me Ky Rkkha hai Maal Chodo Sari Me Salwar me Ky Rkkha hai`",
]
NOOBSTR = [
"`YOU PRO NIMBA DONT MESS WIDH MEH`",
"`Haha yes`",
"`NOOB NIMBA TRYING TO BE FAMOUS KEK`",
"`Sometimes one middle finger isn’t enough to let someone know how you feel. That’s why you have two hands`",
"`Some Nimbas need to open their small minds instead of their big mouths`",
"`UH DONT KNOW MEH SO STAY AWAY LAWDE`",
"`Kysa kysaaaa haaan? Phir MAAR nhi Khayega tu?`",
"`Zikr Jinka hota hai galiyo meh woh bhosdika ajj paya gya naliyo me`",
]
PRO_STRINGS = [
"`This gey is pro as phack.`",
"`Proness Lebel: 6969696969`",
"`Itna pro banda dekhlia bc, ab to marna hoga.`",
"`U iz pro but i iz ur DAD, KeK`",
"`NOOB NIMBA TRYING TO BE FAMOUS KEK`",
"`Sometimes one middle finger isn’t enough to let someone know how you feel. That’s why you have two hands`",
"`Some Nimbas need to open their small minds instead of their big mouths`",
"Pros here. Nubs laik me leave -_-.",
"`UH DONT KNOW MEH SO STAY AWAY LAWDE`",
"`Kysa kysaaaa haaan? Phir MAAR nhi Khayega tu?`",
"`Zikr Jinka hota hai galiyo meh woh bhosdika ajj paya gya naliyo me`",
]
CHU_STRINGS = [
"`Taare hai Asmaan me very very bright jaat na jla bskd dekh le apni hight.`",
"`jindagi ki na toote lari iski lulli hoti nhi khadi`",
"`Kbhi kbhi meri dil me khyaal ata hai ayse chutiyo ko kon paida kr jata hai😂.`",
"`Saawan ka mahina pawan kare shor jake gand mara bskd kahi aur.`",
"`Dil ke armaa ansuon me beh jaye tum bskd ke chutiye hi reh gye.`",
"`Ishq Se Tabiyat Ne Zeest Ka Mazaa aya maine is lodu ko randi khane me paya.`",
"`Mirza galib ki yeh khani hai tu bhosdika hai yeh sab ki jubani hai.`",
"`Ek dora hai ek nora hai charo taraf kohra hi kohra hai ye sabse bada behan ka lawda hai.`",
"`Phool murjhate achhe nahi lagte aap land khujate acche nahi lagte yehi umar hai chodne ki yaaro aap bathroom mein hilaate acche nahi lagte.`",
"`Badi hasrat thi ki khole iski maa ki salwaar ka nara par iski maa ki berukhi dekho ki aagayi nangi dobara.`",
"`Na jaane konsi shilajit hai iski maa ki yadon mein jab bhi sochta hun jhanajhana jaata hun.`",
"`Yaara Teri Yaari Pe Mujhe Shak Nahi Tha; Lekin Sabne Teri Gaand Maari, Kya Mera Koi Haq Nahi Tha.`",
"`Yehi to kamal hai hamara baap bante ho tum aur naam aata hai humara.`",
"`Chinti chadi pahad pe angrejon ka jamana tha lund ki pistol thi chut pe nishana tha.`",
"`Bhola khada bich bazaar fut fut kr roye gaand Maar Sab Chal Diyo Paisa Diyo N Koye.`",
"`Pani kam hain matke mein gand mardunga jhatke mein.`",
"`Duniya haseeno ka mela fir bhi mera chutiya dost akela.`",
"`8 ko kehte hain hindi mein aath ja bsdk tu ja ke kutiya ki chaat.`",
"`Purani baatein bhool ja mera lund pakad ke jhool ja.`",
"`Permanent hai pakka tera baap chaka.`",
"`Yaar azab tera nakhra ghazab tera style hai gand dhone ki tameez nahi haath mein mobile hain.`",
]
FUK_STRINGS = [
"`It's better to let someone think you are an Idiot than to open your mouth and prove it.`",
"`Talking to a liberal is like trying to explain social media to a 70-year-old`",
"`CHAND PE HAI APUN LAWDE.`",
"`Pehle main tereko chakna dega, fir daru pilayega, fir jab aap dimag se nahi L*nd se sochoge, tab bolega..`",
"`Pardhan mantri se number liya, parliament apne :__;baap ka hai...`",
"`Cachaa Ooo bhosdi wale Chacha`",
"`Aaisi Londiya Chodiye, L*nd Ka Aapa Khoye, Auro Se Chudi Na Ho, Biwi Wo Hi Hoye`",
"`Nachoo Bhosdike Nachoo`",
"`Jinda toh jaat ke baal bhi hai`",
"`Sab ko pta tu randi ka baccha hai (its just a joke)`",
]
THANOS_STRINGS = [
"`Mashoor Rand, Ne Arz Kiya Hai. Aane Wale Aate Hai, Jaane Wale Jaate Hai. Yaade Bas Unki Reh Jaati Hai, Jo G**Nd Sujaa Ke Jaate Hai`",
"`Pani kam hai matkey me ga*d mardunga teri ek jatke me`",
"`Aand kitne bhi bade ho, lund ke niche hi rehte hai`",
"`Tum Ameer hum gareeb hum jhopdiwale Tum bhosiwale`",
"`Sisi Bhari Gulab ki padi palang ke pass chodne wale chod gye ab q baitha udaas`",
"`Phuloo Ka Raja Gulaab Kaato me Rehta hai Jeewan ka Nirmata jaato me rehta hai😂`",
"`Chude hue maal ko yaad mt krna Jo Chut na de usse kabhi friyad mt karna jise chudna hai wo chud ke rhegi bekar me muth maar ke apni jindagi barbaad mt krna`",
"`Gand mare gandu Chut mare Chutiya Sabse accha mutti 2 mint me chutti😛`",
"`Marzi Ka Sex Pap Nahi Hota.. Piche Se Dalne Wala Kabhi Baap Nahi Hota.. Condom Zarur Lagana Mere Dost Qki.. Sex K Waqt Popat Ke Pass Dimag Nahi Hota.`",
"`Uss Ne Hothon Se Chhu Kar Lowd* Pe Nasha Kar Diya; Lu*D Ki Baat To Aur Thi, Uss Ne To Jhato* Ko Bhi Khada Kar Diya!`",
]
INSULT_STRINGS = [
"`Owww ... Such a stupid idiot.`",
"`Don't drink and type.`",
"`Command not found. Just like your brain.`",
"`Bot rule 544 section 9 prevents me from replying to stupid humans like you.`",
"`Sorry, we do not sell brains.`",
"`Believe me you are not normal.`",
"`I bet your brain feels as good as new, seeing that you never use it.`",
"`If I wanted to kill myself I'd climb your ego and jump to your IQ.`",
"`You didn't evolve from apes, they evolved from you.`",
"`What language are you speaking? Cause it sounds like bullshit.`",
"`You are proof that evolution CAN go in reverse.`",
"`I would ask you how old you are but I know you can't count that high.`",
"`As an outsider, what do you think of the human race?`",
"`Ordinarily people live and learn. You just live.`",
"`Keep talking, someday you'll say something intelligent!.......(I doubt it though)`",
"`Everyone has the right to be stupid but you are abusing the privilege.`",
"`I'm sorry I hurt your feelings when I called you stupid. I thought you already knew that.`",
"`You should try tasting cyanide.`",
"`You should try sleeping forever.`",
"`Pick up a gun and shoot yourself.`",
"`Try bathing with Hydrochloric Acid instead of water.`",
"`Go Green! Stop inhaling Oxygen.`",
"`God was searching for you. You should leave to meet him.`",
"`You should Volunteer for target in an firing range.`",
"`Try playing catch and throw with RDX its fun.`",
"`People like you are the reason we have middle fingers.`",
"`When your mom dropped you off at the school, she got a ticket for littering.`",
"`You’re so ugly that when you cry, the tears roll down the back of your head…just to avoid your face.`",
"`If you’re talking behind my back then you’re in a perfect position to kiss my a**!.`",
]
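The lists above all serve the same purpose: a command handler picks one canned reply at random. A minimal standalone sketch of that pattern (the two-entry pool below is a stand-in so the snippet runs on its own; in the module, any of the lists above would be passed in, and the backticks are Telegram monospace markers, stripped here for plain output):

```python
import random

# Stand-in pool; in the module itself, any list defined above is used instead.
INSULT_STRINGS = [
    "`Don't drink and type.`",
    "`Sorry, we do not sell brains.`",
]

def pick_reply(pool):
    # Pick one entry and drop the surrounding backtick formatting markers.
    return random.choice(pool).strip("`")
```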
GENDER = [
"u is mard",
"u is man",
"u is aurat",
"u is woman",
"u is gey",
"u is chakka",
]
EMOTICONS = [
"(҂⌣̀_⌣́)",
"(;¬_¬)",
"(-。-;",
"┌[ O ʖ̯ O ]┐",
"〳 ͡° Ĺ̯ ͡° 〵",
]
WAVING = [
"(ノ^∇^)",
"(;-_-)/",
"@(o・ェ・)@ノ",
"ヾ(^-^)ノ",
"ヾ(◍’౪◍)ノ゙♡",
"(ό‿ὸ)ノ",
"(ヾ(´・ω・`)",
]
WTF = [
"༎ຶ‿༎ຶ",
"(‿ˠ‿)",
"╰U╯☜(◉ɷ◉ )",
"(;´༎ຶ益༎ຶ)♡",
"╭∩╮(︶ε︶*)chu",
"( ^◡^)っ (‿|‿)",
]
LOB = [
"乂❤‿❤乂",
"(。♥‿♥。)",
"( ͡~ ͜ʖ ͡°)",
"໒( ♥ ◡ ♥ )७",
"༼♥ل͜♥༽",
]
CONFUSED = [
"(・_・ヾ",
"「(゚ペ)",
"﴾͡๏̯͡๏﴿",
"( ̄■ ̄;)!?",
"▐ ˵ ͠° (oo) °͠ ˵ ▐",
"(-_-)ゞ゛",
]
DEAD = [
"(✖╭╮✖)",
"✖‿✖",
"(+_+)",
"(✖﹏✖)",
"∑(✘Д✘๑)",
]
SED = [
"(@´_`@)",
"⊙︿⊙",
"(▰˘︹˘▰)",
"●︿●",
"( ´_ノ` )",
"彡(-_-;)彡",
]
DOG = [
"-ᄒᴥᄒ-",
"◖⚆ᴥ⚆◗",
]
SHRUG = [
"( ͡° ͜ʖ ͡°)",
r"¯\_(ツ)_/¯",
"( ͡°( ͡° ͜ʖ( ͡° ͜ʖ ͡°)ʖ ͡°) ͡°)",
"ʕ•ᴥ•ʔ",
"(▀ Ĺ̯▀ )",
"(ง ͠° ͟ل͜ ͡°)ง",
"༼ つ ◕_◕ ༽つ",
"ಠ_ಠ",
"(☞ ͡° ͜ʖ ͡°)☞",
r"¯\_༼ ି ~ ି ༽_/¯",
"c༼ ͡° ͜ʖ ͡° ༽⊃",
]
SLAP_TEMPLATES = [
"{user1} {hits} {user2} with a {item}.",
"{user1} {hits} {user2} in the face with a {item}.",
"{user1} {hits} {user2} around a bit with a {item}.",
"{user1} {throws} a {item} at {user2}.",
"{user1} grabs a {item} and {throws} it at {user2}'s face.",
"{user1} launches a {item} in {user2}'s general direction.",
"{user1} starts slapping {user2} silly with a {item}.",
"{user1} pins {user2} down and repeatedly {hits} them with a {item}.",
"{user1} grabs up a {item} and {hits} {user2} with it.",
"{user1} ties {user2} to a chair and {throws} a {item} at them.",
"{user1} gave a friendly push to help {user2} learn to swim in lava.",
]
ITEMS = [
"cast iron skillet",
"large trout",
"baseball bat",
"cricket bat",
"wooden cane",
"nail",
"printer",
"shovel",
"CRT monitor",
"physics textbook",
"toaster",
"portrait of Richard Stallman",
"television",
"five ton truck",
"roll of duct tape",
"book",
"laptop",
"old television",
"sack of rocks",
"rainbow trout",
"rubber chicken",
"spiked bat",
"fire extinguisher",
"heavy rock",
"chunk of dirt",
"beehive",
"piece of rotten meat",
"bear",
"ton of bricks",
]
THROW = [
"throws",
"flings",
"chucks",
"hurls",
]
HIT = [
"hits",
"whacks",
"slaps",
"smacks",
"bashes",
]
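The `{user1}`/`{user2}`/`{item}`/`{hits}`/`{throws}` placeholders in SLAP_TEMPLATES are meant to be filled from ITEMS, HIT and THROW via `random.choice` and `str.format`. A standalone sketch of that assembly (the tiny lists below are stand-ins so the snippet runs on its own; in the module the full lists defined above apply):

```python
import random

# Stand-in samples; the module's full SLAP_TEMPLATES / ITEMS / THROW / HIT
# lists defined above are what a real handler would use.
SLAP_TEMPLATES = [
    "{user1} {hits} {user2} with a {item}.",
    "{user1} {throws} a {item} at {user2}.",
]
ITEMS = ["large trout", "toaster"]
THROW = ["throws", "flings"]
HIT = ["slaps", "whacks"]

def slap_line(user1: str, user2: str) -> str:
    # str.format ignores unused keyword arguments, so every template can be
    # filled with the same call regardless of which placeholders it contains.
    return random.choice(SLAP_TEMPLATES).format(
        user1=user1,
        user2=user2,
        item=random.choice(ITEMS),
        hits=random.choice(HIT),
        throws=random.choice(THROW),
    )
```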
# ---- end of file: /Andencento-0.24.tar.gz/Andencento-0.24/userbot/random_strings/fun_str.py ----
ZALG_LIST = [
[
"̖",
" ̗",
" ̘",
" ̙",
" ̜",
" ̝",
" ̞",
" ̟",
" ̠",
" ̤",
" ̥",
" ̦",
" ̩",
" ̪",
" ̫",
" ̬",
" ̭",
" ̮",
" ̯",
" ̰",
" ̱",
" ̲",
" ̳",
" ̹",
" ̺",
" ̻",
" ̼",
" ͅ",
" ͇",
" ͈",
" ͉",
" ͍",
" ͎",
" ͓",
" ͔",
" ͕",
" ͖",
" ͙",
" ͚",
" ",
],
[
" ̍",
" ̎",
" ̄",
" ̅",
" ̿",
" ̑",
" ̆",
" ̐",
" ͒",
" ͗",
" ͑",
" ̇",
" ̈",
" ̊",
" ͂",
" ̓",
" ̈́",
" ͊",
" ͋",
" ͌",
" ̃",
" ̂",
" ̌",
" ͐",
" ́",
" ̋",
" ̏",
" ̽",
" ̉",
" ͣ",
" ͤ",
" ͥ",
" ͦ",
" ͧ",
" ͨ",
" ͩ",
" ͪ",
" ͫ",
" ͬ",
" ͭ",
" ͮ",
" ͯ",
" ̾",
" ͛",
" ͆",
" ̚",
],
[
" ̕",
" ̛",
" ̀",
" ́",
" ͘",
" ̡",
" ̢",
" ̧",
" ̨",
" ̴",
" ̵",
" ̶",
" ͜",
" ͝",
" ͞",
" ͟",
" ͠",
" ͢",
" ̸",
" ̷",
" ͡",
],
]
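ZALG_LIST holds three tiers of Unicode combining marks (attaching below, above, and through a character) that can be sprinkled after each character of a string to "zalgo" it. A standalone sketch of that use (the stand-in tiers below are tiny so the snippet runs on its own; note that many entries in the full list above carry a stray leading space, hence the `.strip()` call):

```python
import random

# Stand-in tiers; the module's full ZALG_LIST above supplies the real marks.
ZALG_LIST = [
    ["\u0316", "\u0317"],  # marks that attach below
    ["\u030d", "\u030e"],  # marks that attach above
    ["\u0334", "\u0335"],  # marks that strike through
]

def zalgofy(text: str, intensity: int = 2) -> str:
    # Append `intensity` random marks from each tier after every character.
    out = []
    for ch in text:
        out.append(ch)
        for tier in ZALG_LIST:
            out.extend(random.choice(tier).strip() for _ in range(intensity))
    return "".join(out)
```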
EMOJIS = [
"😂",
"😂",
"👌",
"✌",
"💞",
"👍",
"👌",
"💯",
"🎶",
"👀",
"😂",
"👓",
"👏",
"👐",
"🍕",
"💥",
"🍴",
"💦",
"💦",
"🍑",
"🍆",
"😩",
"😏",
"👉👌",
"👀",
"👅",
"😩",
"🚰",
]
UWUS = [
"(・`ω´・)",
";;w;;",
"owo",
"UwU",
">w<",
"^w^",
r"\(^o\) (/o^)/",
"( ^ _ ^)∠☆",
"(ô_ô)",
"~:o",
";-;",
"(*^*)",
"(>_<)",
"(♥_♥)",
"*(^O^)*",
"((+_+))",
]
FACEREACTS = [
"ʘ‿ʘ",
"ヾ(-_- )ゞ",
"(っ˘ڡ˘ς)",
"(´ж`ς)",
"( ಠ ʖ̯ ಠ)",
"(° ͜ʖ͡°)╭∩╮",
"(ᵟຶ︵ ᵟຶ)",
"(งツ)ว",
"ʚ(•`",
"(っ▀¯▀)つ",
"(◠﹏◠)",
"( ͡ಠ ʖ̯ ͡ಠ)",
"( ఠ ͟ʖ ఠ)",
"(∩`-´)⊃━☆゚.*・。゚",
"(⊃。•́‿•̀。)⊃",
"(._.)",
"{•̃_•̃}",
"(ᵔᴥᵔ)",
"♨_♨",
"⥀.⥀",
"ح˚௰˚づ ",
"(҂◡_◡)",
"ƪ(ړײ)ƪ",
"(っ•́。•́)♪♬",
"◖ᵔᴥᵔ◗ ♪ ♫ ",
"(☞゚ヮ゚)☞",
"[¬º-°]¬",
"(Ծ‸ Ծ)",
"(•̀ᴗ•́)و ̑̑",
"ヾ(´〇`)ノ♪♪♪",
"(ง'̀-'́)ง",
"ლ(•́•́ლ)",
"ʕ •́؈•̀ ₎",
"♪♪ ヽ(ˇ∀ˇ )ゞ",
"щ(゚Д゚щ)",
"( ˇ෴ˇ )",
"눈_눈",
"(๑•́ ₃ •̀๑) ",
"( ˘ ³˘)♥ ",
"ԅ(≖‿≖ԅ)",
"♥‿♥",
"◔_◔",
"⁽⁽ଘ( ˊᵕˋ )ଓ⁾⁾",
"乁( ◔ ౪◔)「 ┑( ̄Д  ̄)┍",
"( ఠൠఠ )ノ",
"٩(๏_๏)۶",
"┌(ㆆ㉨ㆆ)ʃ",
"ఠ_ఠ",
"(づ。◕‿‿◕。)づ",
"(ノಠ ∩ಠ)ノ彡( \\o°o)\\",
"“ヽ(´▽`)ノ”",
"༼ ༎ຶ ෴ ༎ຶ༽",
"。゚( ゚இ‸இ゚)゚。",
"(づ ̄ ³ ̄)づ",
"(⊙.☉)7",
"ᕕ( ᐛ )ᕗ",
"t(-_-t)",
"(ಥ⌣ಥ)",
"ヽ༼ ಠ益ಠ ༽ノ",
"༼∵༽ ༼⍨༽ ༼⍢༽ ༼⍤༽",
"ミ●﹏☉ミ",
"(⊙_◎)",
"¿ⓧ_ⓧﮌ",
"ಠ_ಠ",
"(´・_・`)",
"ᕦ(ò_óˇ)ᕤ",
"⊙﹏⊙",
"(╯°□°)╯︵ ┻━┻",
r"¯\_(⊙︿⊙)_/¯",
"٩◔̯◔۶",
"°‿‿°",
"ᕙ(⇀‸↼‶)ᕗ",
"⊂(◉‿◉)つ",
"V•ᴥ•V",
"q(❂‿❂)p",
"ಥ_ಥ",
"ฅ^•ﻌ•^ฅ",
"ಥ﹏ಥ",
"( ^_^)o自自o(^_^ )",
"ಠ‿ಠ",
"ヽ(´▽`)/",
"ᵒᴥᵒ#",
"( ͡° ͜ʖ ͡°)",
"┬─┬ ノ( ゜-゜ノ)",
"ヽ(´ー`)ノ",
"☜(⌒▽⌒)☞",
"ε=ε=ε=┌(;*´Д`)ノ",
"(╬ ಠ益ಠ)",
"┬─┬⃰͡ (ᵔᵕᵔ͜ )",
"┻━┻ ︵ヽ(`Д´)ノ︵ ┻━┻",
r"¯\_(ツ)_/¯",
"ʕᵔᴥᵔʔ",
"(`・ω・´)",
"ʕ•ᴥ•ʔ",
"ლ(`ー´ლ)",
"ʕʘ̅͜ʘ̅ʔ",
"( ゚Д゚)",
r"¯\(°_o)/¯",
"(。◕‿◕。)",
]
# ---- end of file: /Andencento-0.24.tar.gz/Andencento-0.24/userbot/random_strings/meme_str.py ----
LOVESTR = [
"The best and most beautiful things in this world cannot be seen or even heard, but must be felt with the heart.",
"You know you're in love when you can't fall asleep because reality is finally better than your dreams.",
"Love recognizes no barriers. It jumps hurdles, leaps fences, penetrates walls to arrive at its destination full of hope.",
"Being deeply loved by someone gives you strength, while loving someone deeply gives you courage.",
"The real lover is the man who can thrill you by kissing your forehead or smiling into your eyes or just staring into space.",
"I swear I couldn't love you more than I do right now, and yet I know I will tomorrow.",
"When I saw you I fell in love, and you smiled because you knew it.",
"In all the world, there is no heart for me like yours. / In all the world, there is no love for you like mine.",
"To love or have loved, that is enough. Ask nothing further. There is no other pearl to be found in the dark folds of life.",
"If you live to be a hundred, I want to live to be a hundred minus one day, so I never have to live without you.",
"Some love stories aren't epic novels. Some are short stories. But that doesn't make them any less filled with love.",
"As he read, I fell in love the way you fall asleep: slowly, and then all at once.",
"I've never had a moment's doubt. I love you. I believe in you completely. You are my dearest one. My reason for life.",
"Do I love you? My god, if your love were a grain of sand, mine would be a universe of beaches.",
"I am who I am because of you.",
"I just want you to know that you're very special... and the only reason I'm telling you is that I don't know if anyone else ever has.",
"Remember, we're madly in love, so it's all right to kiss me any time you feel like it.",
"I love you. I knew it the minute I met you.",
"I loved her against reason, against promise, against peace, against hope, against happiness, against all discouragement that could be.",
"I love you not because of who you are, but because of who I am when I am with you.",
]
DHOKA = [
"Humne Unse Wafa Ki, Aur Dil Bhi Gya Toot, Wo Bhi Chinaal Nikli, Uski Maa ki Chut.",
"Dabbe Me Dabba, Dabbe Me Cake ..Tu Chutiya Hai Zara Seesha To Dekh.",
"Kaam Se Kaam Rakhoge Toh Naam Hoga, Randi Log Ke Chakkkar Me Padoge to Naam Badnaam Hoga.",
"Usne Kaha- Mah Lyf maH Rule, Maine Kaha Bhag BSDK , Tujhy Paida Karna hi Teri Baap ki Sabse Badi Vul.",
"Humse Ulajhna Mat, BSDK Teri Hasi Mita Dunga, Muh Me Land Daal Ke..Sari Hosiyaari Gand Se Nikal Dunga.",
"Aur Sunau Bhosdiwalo ..Kya Haal Hai?..Tumhare Sakal Se Zayda Toh Tumhare Gand Laal Hai!!",
"Pata Nhi Kya Kashish Hai Tumhare Mohabbat Me,Jab Bhi Tumhe Yaad Karta Hu Mera Land Khada Ho Jata Hai.",
"Konsa Mohabbat Kounsi Story, Gand Faad Dunga Agr Bolne Aayi Sorry!",
"Naam Banta Hai Risk Se, Chutiya Banta Hai IshQ Se.",
"Sun Be, Ab Tujhy Mere Zindegi Me Ane ka Koi Haq Nhi,,Aur Tu 1 Number Ki Randi Hai Isme KOi Saq Nhi.",
"Beta Tu Chugli Karna Chor De , Hum Ungli Karna Chor Dengy.",
]
METOOSTR = [
"Me too thanks",
"Haha yes, me too",
"Same lol",
"Me irl",
"Same here",
"Haha yes",
"Me rn",
]
perf = "[ Andencento ]"
GDNOON = [
"`My wishes will always be with you, Morning wish to make you feel fresh, Afternoon wish to accompany you, Evening wish to refresh you, Night wish to comfort you with sleep, Good Afternoon Dear!`",
"`With a deep blue sky over my head and a relaxing wind around me, the only thing I am missing right now is the company of you. I wish you a refreshing afternoon!`",
"`The day has come a halt realizing that I am yet to wish you a great afternoon. My dear, if you thought you were forgotten, you’re so wrong. Good afternoon!`",
"`Good afternoon! May the sweet peace be part of your heart today and always and there is life shining through your sigh. May you have much light and peace.`",
"`With you, every part of a day is beautiful. I live every day to love you more than yesterday. Wishing you an enjoyable afternoon my love!`",
"`This bright afternoon sun always reminds me of how you brighten my life with all the happiness. I miss you a lot this afternoon. Have a good time!`",
"`Nature looks quieter and more beautiful at this time of the day! You really don’t want to miss the beauty of this time! Wishing you a happy afternoon!`",
"`What a wonderful afternoon to finish you day with! I hope you’re having a great time sitting on your balcony, enjoying this afternoon beauty!`",
"`I wish I were with you this time of the day. We hardly have a beautiful afternoon like this nowadays. Wishing you a peaceful afternoon!`",
"`As you prepare yourself to wave goodbye to another wonderful day, I want you to know that, I am thinking of you all the time. Good afternoon!`",
"`This afternoon is here to calm your dog-tired mind after a hectic day. Enjoy the blessings it offers you and be thankful always. Good afternoon!`",
"`The gentle afternoon wind feels like a sweet hug from you. You are in my every thought in this wonderful afternoon. Hope you are enjoying the time!`",
"`Wishing an amazingly good afternoon to the most beautiful soul I have ever met. I hope you are having a good time relaxing and enjoying the beauty of this time!`",
"`Afternoon has come to indicate you, Half of your day’s work is over, Just another half a day to go, Be brisk and keep enjoying your works, Have a happy noon!`",
"`Mornings are for starting a new work, Afternoons are for remembering, Evenings are for refreshing, Nights are for relaxing, So remember people, who are remembering you, Have a happy noon!`",
"`If you feel tired and sleepy you could use a nap, you will see that it will help you recover your energy and feel much better to finish the day. Have a beautiful afternoon!`",
"`Time to remember sweet persons in your life, I know I will be first on the list, Thanks for that, Good afternoon my dear!`",
"`May this afternoon bring a lot of pleasant surprises for you and fills you heart with infinite joy. Wishing you a very warm and love filled afternoon!`",
"`Good, better, best. Never let it rest. Til your good is better and your better is best. “Good Afternoon”`",
"`May this beautiful afternoon fill your heart boundless happiness and gives you new hopes to start yours with. May you have lot of fun! Good afternoon dear!`",
"`As the blazing sun slowly starts making its way to the west, I want you to know that this beautiful afternoon is here to bless your life with success and peace. Good afternoon!`",
"`The deep blue sky of this bright afternoon reminds me of the deepness of your heart and the brightness of your soul. May you have a memorable afternoon!`",
"`Your presence could make this afternoon much more pleasurable for me. Your company is what I cherish all the time. Good afternoon!`",
"`A relaxing afternoon wind and the sweet pleasure of your company can make my day complete. Missing you so badly during this time of the day! Good afternoon!`",
"`Wishing you an afternoon experience so sweet and pleasant that feel thankful to be alive today. May you have the best afternoon of your life today!`",
"`My wishes will always be with you, Morning wish to make you feel fresh, Afternoon wish to accompany you, Evening wish to refresh you, Night wish to comfort you with sleep, Good afternoon dear!`",
"`Noon time – it’s time to have a little break, Take time to breathe the warmth of the sun, Who is shining up in between the clouds, Good afternoon!`",
"`You are the cure that I need to take three times a day, in the morning, at the night and in the afternoon. I am missing you a lot right now. Good afternoon!`",
"`I want you when I wake up in the morning, I want you when I go to sleep at night and I want you when I relax under the sun in the afternoon!`",
"`I pray to god that he keeps me close to you so we can enjoy these beautiful afternoons together forever! Wishing you a good time this afternoon!`",
"`You are every bit of special to me just like a relaxing afternoon is special after a toiling noon. Thinking of my special one in this special time of the day!`",
"`May your Good afternoon be light, blessed, enlightened, productive and happy.`",
"`Thinking of you is my most favorite hobby every afternoon. Your love is all I desire in life. Wishing my beloved an amazing afternoon!`",
"`I have tasted things that are so sweet, heard words that are soothing to the soul, but comparing the joy that they both bring, I’d rather choose to see a smile from your cheeks. You are sweet. I love you.`",
"`How I wish the sun could obey me for a second, to stop its scorching ride on my angel. So sorry it will be hot there. Don’t worry, the evening will soon come. I love you.`",
"`I want you when I wake up in the morning, I want you when I go to sleep at night and I want you when I relax under the sun in the afternoon!`",
"`With you every day is my lucky day. So lucky being your love and don’t know what else to say. Morning night and noon, you make my day.`",
"`Your love is sweeter than what I read in romantic novels and fulfilling more than I see in epic films. I couldn’t have been me, without you. Good afternoon honey, I love you!`",
"`No matter what time of the day it is, No matter what I am doing, No matter what is right and what is wrong, I still remember you like this time, Good Afternoon!`",
"`Things are changing. I see everything turning around for my favor. And the last time I checked, it’s courtesy of your love. 1000 kisses from me to you. I love you dearly and wishing you a very happy noon.`",
"`You are sometimes my greatest weakness, you are sometimes my biggest strength. I do not have a lot of words to say but let you make sure, you make my day, Good Afternoon!`",
"`Every afternoon is to remember the one whom my heart beats for. The one I live and sure can die for. Hope you doing good there my love. Missing your face.`",
"`My love, I hope you are doing well at work and that you remember that I will be waiting for you at home with my arms open to pamper you and give you all my love. I wish you a good afternoon!`",
"`Afternoons like this makes me think about you more. I desire so deeply to be with you in one of these afternoons just to tell you how much I love you. Good afternoon my love!`",
"`My heart craves for your company all the time. A beautiful afternoon like this can be made more enjoyable if you just decide to spend it with me. Good afternoon!`",
]
CHASE_STR = [
"Where do you think you're going?",
"Huh? what? did they get away?",
"ZZzzZZzz... Huh? what? oh, just them again, nevermind.",
"`Get back here!`",
"`Not so fast...`",
"Look out for the wall!",
"Don't leave me alone with them!!",
"You run, you die.",
"`Jokes on you, I'm everywhere`",
"You're gonna regret that...",
"You could also try /kickme, I hear that's fun.",
"`Go bother someone else, no-one here cares.`",
"You can run, but you can't hide.",
"Is that all you've got?",
"I'm behind you...",
"You've got company!",
"We can do this the easy way, or the hard way.",
"You just don't get it, do you?",
"Yeah, you better run!",
"Please, remind me how much I care?",
"I'd run faster if I were you.",
"That's definitely the droid we're looking for.",
"May the odds be ever in your favour.",
"Famous last words.",
"And they disappeared forever, never to be seen again.",
'"Oh, look at me! I\'m so cool, I can run from an Andencento!" - this person',
"Yeah yeah, just tap /kickme already.",
"Here, take this ring and head to Mordor while you're at it.",
"Legend has it, they're still running...",
"Unlike Harry Potter, your parents can't protect you from me.",
"Fear leads to anger. Anger leads to hate. Hate leads to suffering. If you keep running in fear, you might "
"be the next Vader.",
"Multiple calculations later, I have decided my interest in your shenanigans is exactly 0.",
"Keep it up, not sure we want you here anyway.",
"You're a wiza- Oh. Wait. You're not Harry, keep moving.",
"NO RUNNING IN THE HALLWAYS!",
"Hasta la vista, baby.",
"Who let the dogs out?",
"It's funny, because no one cares.",
"Ah, what a waste. I liked that one.",
"Frankly, my dear, I don't give a damn.",
"My milkshake brings all the boys to yard... So run faster!",
"You can't HANDLE the truth!",
"A long time ago, in a galaxy far far away... Someone would've cared about that. Not anymore though.",
"Hey, look at them! They're running from the inevitable banhammer... Cute.",
"Han shot first. So will I.",
"What are you running after, a white rabbit?",
"As The Doctor would say... RUN!",
]
CONGRATULATION = [
"`Congratulations and BRAVO!`",
"`You did it! So proud of you!`",
"`This calls for celebrating! Congratulations!`",
"`I knew it was only a matter of time. Well done!`",
"`Congratulations on your well-deserved success.`",
"`Heartfelt congratulations to you.`",
"`Warmest congratulations on your achievement.`",
"`Congratulations and best wishes for your next adventure!`",
"`So pleased to see you accomplishing great things.`",
"`Feeling so much joy for you today. What an impressive achievement!`",
]
BYESTR = [
"`Nice talking with you`",
"`I've gotta go!`",
"`I've gotta run!`",
"`I've gotta split`",
"`I'm off!`",
"`Great to see you,bye`",
"`See you soon`",
"`Farewell!`",
]
HELLOSTR = [
"`Hi !`",
"`‘Ello, gov'nor!`",
"`What’s crackin’?`",
"`‘Sup, homeslice?`",
"`Howdy, howdy, howdy!`",
"`Hello, who's there, I'm talking.`",
"`You know who this is.`",
"`Yo!`",
"`Whaddup.`",
"`Greetings and salutations!`",
"`Hello, sunshine!`",
"`Hey, howdy, hi!`",
"`What’s kickin’, little chicken?`",
"`Peek-a-boo!`",
"`Howdy-doody!`",
"`Hey there, freshman!`",
"`I come in peace!`",
"`Ahoy, matey!`",
"`Hiya!`",
"`Oh retarded gey! Well Hello`",
]
SHGS = [
"┐(´д`)┌",
"┐(´~`)┌",
"┐(´ー`)┌",
"┐( ̄ヘ ̄)┌",
"╮(╯∀╰)╭",
"╮(╯_╰)╭",
"┐(´д`)┌",
"┐(´∀`)┌",
"ʅ(́◡◝)ʃ",
"ლ(゚д゚ლ)",
"┐(゚~゚)┌",
"┐('д')┌",
"ლ|^Д^ლ|",
"ლ(╹ε╹ლ)",
"ლ(ಠ益ಠ)ლ",
"┐(‘~`;)┌",
"ヘ(´-`;)ヘ",
"┐( -“-)┌",
"乁༼☯‿☯✿༽ㄏ",
"ʅ(´◔౪◔)ʃ",
"ლ(•ω •ლ)",
"ヽ(゜~゜o)ノ",
"ヽ(~~~ )ノ",
"┐(~ー~;)┌",
"┐(-。ー;)┌",
r"¯\_(ツ)_/¯",
r"¯\_(⊙_ʖ⊙)_/¯",
"乁ʕ •̀ •́ ʔㄏ",
r"¯\_༼ ಥ ‿ ಥ ༽_/¯",
"乁( ⁰͡ Ĺ̯ ⁰͡ ) ㄏ",
]
CRI = [
"أ‿أ",
"╥﹏╥",
"(;﹏;)",
"(ToT)",
"(┳Д┳)",
"(ಥ﹏ಥ)",
"(;へ:)",
"(T_T)",
"(πーπ)",
"(T▽T)",
"(⋟﹏⋞)",
"(iДi)",
"(´Д⊂ヽ",
"(;Д;)",
"(>﹏<)",
"(TдT)",
"(つ﹏⊂)",
"༼☯﹏☯༽",
"(ノ﹏ヽ)",
"(ノAヽ)",
"(╥_╥)",
"(T⌓T)",
"(༎ຶ⌑༎ຶ)",
"(☍﹏⁰)。",
"(ಥ_ʖಥ)",
"(つд⊂)",
"(≖͞_≖̥)",
"(இ﹏இ`。)",
"༼ಢ_ಢ༽",
"༼ ༎ຶ ෴ ༎ຶ༽",
]
GDNIGHT = [
"`Good night keep your dreams alive`",
"`Night, night, to a dear friend! May you sleep well!`",
"`May the night fill with stars for you. May counting every one, give you contentment!`",
"`Wishing you comfort, happiness, and a good night’s sleep!`",
"`Now relax. The day is over. You did your best. And tomorrow you’ll do better. Good Night!`",
"`Good night to a friend who is the best! Get your forty winks!`",
"`May your pillow be soft, and your rest be long! Good night, friend!`",
"`Let there be no troubles, dear friend! Have a Good Night!`",
"`Rest soundly tonight, friend!`",
"`Have the best night’s sleep, friend! Sleep well!`",
"`Have a very, good night, friend! You are wonderful!`",
"`Relaxation is in order for you! Good night, friend!`",
"`Good night. May you have sweet dreams tonight.`",
"`Sleep well, dear friend and have sweet dreams.`",
"`As we wait for a brand new day, good night and have beautiful dreams.`",
"`Dear friend, I wish you a night of peace and bliss. Good night.`",
"`Darkness cannot last forever. Keep the hope alive. Good night.`",
"`By hook or crook you shall have sweet dreams tonight. Have a good night, buddy!`",
"`Good night, my friend. I pray that the good Lord watches over you as you sleep. Sweet dreams.`",
"`Good night, friend! May you be filled with tranquility!`",
"`Wishing you a calm night, friend! I hope it is good!`",
"`Wishing you a night where you can recharge for tomorrow!`",
"`Slumber tonight, good friend, and feel well rested, tomorrow!`",
"`Wishing my good friend relief from a hard day’s work! Good Night!`",
"`Good night, friend! May you have silence for sleep!`",
"`Sleep tonight, friend and be well! Know that you have done your very best today, and that you will do your very best, tomorrow!`",
"`Friend, you do not hesitate to get things done! Take tonight to relax and do more, tomorrow!`",
"`Friend, I want to remind you that your strong mind has brought you peace, before. May it do that again, tonight! May you hold acknowledgment of this with you!`",
"`Wishing you a calm, night, friend! Hoping everything winds down to your liking and that the following day meets your standards!`",
"`May the darkness of the night cloak you in a sleep that is sound and good! Dear friend, may this feeling carry you through the next day!`",
"`Friend, may the quietude you experience tonight move you to have many more nights like it! May you find your peace and hold on to it!`",
"`May there be no activity for you tonight, friend! May the rest that you have coming to you arrive swiftly! May the activity that you do tomorrow match your pace and be all of your own making!`",
"`When the day is done, friend, may you know that you have done well! When you sleep tonight, friend, may you view all that you hope for, tomorrow!`",
"`When everything is brought to a standstill, friend, I hope that your thoughts are good, as you drift to sleep! May those thoughts remain with you, during all of your days!`",
"`Every day, you encourage me to do new things, friend! May tonight’s rest bring a new day that overflows with courage and exciting events!`",
]
GDMORNING = [
"`Life is full of uncertainties. But there will always be a sunrise after every sunset. Good morning!`",
"`It doesn’t matter how bad your yesterday was. Today, you are going to make it a good one. Wishing you a good morning!`",
"`If you want to gain health and beauty, you should wake up early. Good morning!`",
"`May this morning offer you new hope for life! May you be happy and enjoy every moment of it. Good morning!`",
"`May the sun shower you with blessings and prosperity in the days ahead. Good morning!`",
"`Every sunrise marks the rise of life over death, hope over despair and happiness over suffering. Wishing you a very enjoyable morning today!`",
"`Wake up and make yourself a part of this beautiful morning. A beautiful world is waiting outside your door. Have an enjoyable time!`",
"`Welcome this beautiful morning with a smile on your face. I hope you’ll have a great day today. Wishing you a very good morning!`",
"`You have been blessed with yet another day. What a wonderful way of welcoming the blessing with such a beautiful morning! Good morning to you!`",
"`Waking up in such a beautiful morning is a guarantee for a day that’s beyond amazing. I hope you’ll make the best of it. Good morning!`",
"`Nothing is more refreshing than a beautiful morning that calms your mind and gives you reasons to smile. Good morning! Wishing you a great day.`",
"`Another day has just started. Welcome the blessings of this beautiful morning. Rise and shine like you always do. Wishing you a wonderful morning!`",
"`Wake up like the sun every morning and light up the world with your awesomeness. You have so many great things to achieve today. Good morning!`",
"`A new day has come with so many new opportunities for you. Grab them all and make the best out of your day. Here’s me wishing you a good morning!`",
"`The darkness of night has ended. A new sun is up there to guide you towards a life so bright and blissful. Good morning dear!`",
"`Wake up, have your cup of morning tea and let the morning wind freshen you up like a happiness pill. Wishing you a good morning and a good day ahead!`",
"`Sunrises are the best; enjoy a cup of coffee or tea with yourself because this day is yours, good morning! Have a wonderful day ahead.`",
"`A bad day will always have a good morning, hope all your worries are gone and everything you wish could find a place. Good morning!`",
"`A great end may not be decided but a good creative beginning can be planned and achieved. Good morning, have a productive day!`",
"`Having a sweet morning, a cup of coffee, and a day with your loved ones is what makes your “Good Morning.” Have a nice day!`",
"`Anything can go wrong in the day but the morning has to be beautiful, so I am making sure your morning starts beautiful. Good morning!`",
"`Open your eyes with a smile, pray and thank god that you are waking up to a new beginning. Good morning!`",
"`Morning is not only sunrise but a Beautiful Miracle of God that defeats the darkness and spreads light. Good Morning.`",
"`Life never gives you a second chance. So, enjoy every bit of it. Why not start with this beautiful morning. Good Morning!`",
"`Birds are singing sweet melodies and a gentle breeze is blowing through the trees, what a perfect morning to wake you up. Good morning!`",
"`This morning is so relaxing and beautiful that I really don’t want you to miss it in any way. So, wake up dear friend. A hearty good morning to you!`",
"`Mornings come with a blank canvas. Paint it as you like and call it a day. Wake up now and start creating your perfect day. Good morning!`",
"`Every morning brings you new hopes and new opportunities. Don’t miss any one of them while you’re sleeping. Good morning!`",
"`Start your day with solid determination and great attitude. You’re going to have a good day today. Good morning my friend!`",
"`Friendship is what makes life worth living. I want to thank you for being such a special friend of mine. Good morning to you!`",
"`A friend like you is pretty hard to come by in life. I must consider myself lucky enough to have you. Good morning. Wish you an amazing day ahead!`",
"`The more you count yourself as blessed, the more blessed you will be. Thank God for this beautiful morning and let friendship and love prevail this morning.`",
"`Wake up and sip a cup of loving friendship. Eat your heart out from a plate of hope. To top it up, a fork full of kindness and love. Enough for a happy good morning!`",
"`It is easy to imagine the world coming to an end. But it is difficult to imagine spending a day without my friends. Good morning.`",
]
|
Andencento
|
/Andencento-0.24.tar.gz/Andencento-0.24/userbot/random_strings/quotes.py
|
quotes.py
|
import threading
from sqlalchemy import Column, Integer, String
from . import BASE, SESSION
DEF_COUNT = 0
DEF_LIMIT = 0
DEF_OBJ = (None, DEF_COUNT, DEF_LIMIT)
class FloodControl(BASE):
__tablename__ = "antiflood"
chat_id = Column(String(14), primary_key=True)
user_id = Column(Integer)
count = Column(Integer, default=DEF_COUNT)
limit = Column(Integer, default=DEF_LIMIT)
def __init__(self, chat_id):
self.chat_id = str(chat_id) # ensure string
def __repr__(self):
return "<flood control for %s>" % self.chat_id
FloodControl.__table__.create(checkfirst=True)
INSERTION_LOCK = threading.RLock()
CHAT_FLOOD = {}
def set_flood(chat_id, amount):
with INSERTION_LOCK:
flood = SESSION.query(FloodControl).get(str(chat_id))
if not flood:
flood = FloodControl(str(chat_id))
flood.user_id = None
flood.limit = amount
CHAT_FLOOD[str(chat_id)] = (None, DEF_COUNT, amount)
SESSION.add(flood)
SESSION.commit()
def update_flood(chat_id: str, user_id) -> bool:
if str(chat_id) in CHAT_FLOOD:
curr_user_id, count, limit = CHAT_FLOOD.get(str(chat_id), DEF_OBJ)
if limit == 0: # no antiflood
return False
if user_id != curr_user_id or user_id is None: # other user
CHAT_FLOOD[str(chat_id)] = (user_id, DEF_COUNT + 1, limit)
return False
count += 1
if count > limit: # too many msgs, kick
CHAT_FLOOD[str(chat_id)] = (None, DEF_COUNT, limit)
return True
# default -> update
CHAT_FLOOD[str(chat_id)] = (user_id, count, limit)
return False
def get_flood_limit(chat_id):
return CHAT_FLOOD.get(str(chat_id), DEF_OBJ)[2]
def migrate_chat(old_chat_id, new_chat_id):
with INSERTION_LOCK:
flood = SESSION.query(FloodControl).get(str(old_chat_id))
if flood:
CHAT_FLOOD[str(new_chat_id)] = CHAT_FLOOD.get(str(old_chat_id), DEF_OBJ)
flood.chat_id = str(new_chat_id)
SESSION.commit()
SESSION.close()
def __load_flood_settings():
global CHAT_FLOOD
try:
all_chats = SESSION.query(FloodControl).all()
CHAT_FLOOD = {chat.chat_id: (None, DEF_COUNT, chat.limit) for chat in all_chats}
finally:
SESSION.close()
return CHAT_FLOOD
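The flood-counting state machine above can be exercised without the SQLAlchemy layer; this is a hedged, standalone reimplementation of only the in-memory `CHAT_FLOOD` logic, for illustration:

```python
# Standalone sketch of the antiflood counter (persistence omitted).
DEF_COUNT = 0
DEF_OBJ = (None, DEF_COUNT, 0)
CHAT_FLOOD = {}


def set_flood(chat_id, amount):
    # A limit of 0 means antiflood is disabled for the chat.
    CHAT_FLOOD[str(chat_id)] = (None, DEF_COUNT, amount)


def update_flood(chat_id, user_id):
    curr_user_id, count, limit = CHAT_FLOOD.get(str(chat_id), DEF_OBJ)
    if limit == 0:  # no antiflood configured
        return False
    if user_id != curr_user_id or user_id is None:  # a different user resets the run
        CHAT_FLOOD[str(chat_id)] = (user_id, DEF_COUNT + 1, limit)
        return False
    count += 1
    if count > limit:  # too many consecutive messages -> trigger
        CHAT_FLOOD[str(chat_id)] = (None, DEF_COUNT, limit)
        return True
    CHAT_FLOOD[str(chat_id)] = (user_id, count, limit)
    return False
```

With a limit of 3, the fourth consecutive message from the same user trips the counter and resets it.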
|
Andencento
|
/Andencento-0.24.tar.gz/Andencento-0.24/userbot/sql/antiflood_sql.py
|
antiflood_sql.py
|
import threading
from sqlalchemy import Column, String, UnicodeText, distinct, func
from . import BASE, SESSION
class BlackListFilters(BASE):
__tablename__ = "blacklist"
chat_id = Column(String(14), primary_key=True)
trigger = Column(UnicodeText, primary_key=True, nullable=False)
def __init__(self, chat_id, trigger):
self.chat_id = str(chat_id) # ensure string
self.trigger = trigger
def __repr__(self):
return "<Blacklist filter '%s' for %s>" % (self.trigger, self.chat_id)
def __eq__(self, other):
return bool(
isinstance(other, BlackListFilters)
and self.chat_id == other.chat_id
and self.trigger == other.trigger
)
BlackListFilters.__table__.create(checkfirst=True)
BLACKLIST_FILTER_INSERTION_LOCK = threading.RLock()
CHAT_BLACKLISTS = {}
def add_to_blacklist(chat_id, trigger):
with BLACKLIST_FILTER_INSERTION_LOCK:
blacklist_filt = BlackListFilters(str(chat_id), trigger)
SESSION.merge(blacklist_filt) # merge to avoid duplicate key issues
SESSION.commit()
CHAT_BLACKLISTS.setdefault(str(chat_id), set()).add(trigger)
def rm_from_blacklist(chat_id, trigger):
with BLACKLIST_FILTER_INSERTION_LOCK:
blacklist_filt = SESSION.query(BlackListFilters).get((str(chat_id), trigger))
if blacklist_filt:
if trigger in CHAT_BLACKLISTS.get(str(chat_id), set()): # sanity check
CHAT_BLACKLISTS.get(str(chat_id), set()).remove(trigger)
SESSION.delete(blacklist_filt)
SESSION.commit()
return True
SESSION.close()
return False
def get_chat_blacklist(chat_id):
return CHAT_BLACKLISTS.get(str(chat_id), set())
def num_blacklist_filters():
try:
return SESSION.query(BlackListFilters).count()
finally:
SESSION.close()
def num_blacklist_chat_filters(chat_id):
try:
return (
SESSION.query(BlackListFilters.chat_id)
.filter(BlackListFilters.chat_id == str(chat_id))
.count()
)
finally:
SESSION.close()
def num_blacklist_filter_chats():
try:
return SESSION.query(func.count(distinct(BlackListFilters.chat_id))).scalar()
finally:
SESSION.close()
def __load_chat_blacklists():
global CHAT_BLACKLISTS
try:
chats = SESSION.query(BlackListFilters.chat_id).distinct().all()
        for (chat_id,) in chats:  # unpack the single-element row tuple
CHAT_BLACKLISTS[chat_id] = []
all_filters = SESSION.query(BlackListFilters).all()
for x in all_filters:
CHAT_BLACKLISTS[x.chat_id] += [x.trigger]
CHAT_BLACKLISTS = {x: set(y) for x, y in CHAT_BLACKLISTS.items()}
finally:
SESSION.close()
__load_chat_blacklists()
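A minimal sketch of the cache behaviour the module above maintains, using only the in-memory `CHAT_BLACKLISTS` dict (the database writes are omitted; for illustration only):

```python
CHAT_BLACKLISTS = {}


def add_to_blacklist(chat_id, trigger):
    # Set semantics: adding the same trigger twice is a no-op.
    CHAT_BLACKLISTS.setdefault(str(chat_id), set()).add(trigger)


def rm_from_blacklist(chat_id, trigger):
    triggers = CHAT_BLACKLISTS.get(str(chat_id), set())
    if trigger in triggers:
        triggers.remove(trigger)
        return True
    return False


def get_chat_blacklist(chat_id):
    return CHAT_BLACKLISTS.get(str(chat_id), set())
```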
|
Andencento
|
/Andencento-0.24.tar.gz/Andencento-0.24/userbot/sql/blacklist_sql.py
|
blacklist_sql.py
|
from sqlalchemy import Column, LargeBinary, Numeric, String, UnicodeText
from userbot.sql import BASE, SESSION
class Filters(BASE):
__tablename__ = "filters"
chat_id = Column(String(14), primary_key=True)
keyword = Column(UnicodeText, primary_key=True)
reply = Column(UnicodeText)
snip_type = Column(Numeric)
media_id = Column(UnicodeText)
media_access_hash = Column(UnicodeText)
media_file_reference = Column(LargeBinary)
def __init__(
self,
chat_id,
keyword,
reply,
snip_type,
media_id=None,
media_access_hash=None,
media_file_reference=None,
):
self.chat_id = chat_id
self.keyword = keyword
self.reply = reply
self.snip_type = snip_type
self.media_id = media_id
self.media_access_hash = media_access_hash
self.media_file_reference = media_file_reference
Filters.__table__.create(checkfirst=True)
def get_filter(chat_id, keyword):
    try:
        return SESSION.query(Filters).get((str(chat_id), keyword))
    except Exception:
        return None
    finally:
        SESSION.close()


def get_all_filters(chat_id):
    try:
        return SESSION.query(Filters).filter(Filters.chat_id == str(chat_id)).all()
    except Exception:
        return None
    finally:
        SESSION.close()
def add_filter(
chat_id,
keyword,
reply,
snip_type,
media_id,
media_access_hash,
media_file_reference,
):
adder = SESSION.query(Filters).get((str(chat_id), keyword))
if adder:
adder.reply = reply
adder.snip_type = snip_type
adder.media_id = media_id
adder.media_access_hash = media_access_hash
adder.media_file_reference = media_file_reference
else:
adder = Filters(
chat_id,
keyword,
reply,
snip_type,
media_id,
media_access_hash,
media_file_reference,
)
SESSION.add(adder)
SESSION.commit()
def remove_filter(chat_id, keyword):
saved_filter = SESSION.query(Filters).get((str(chat_id), keyword))
if saved_filter:
SESSION.delete(saved_filter)
SESSION.commit()
def remove_all_filters(chat_id):
    # Query.delete() is a no-op when nothing matches, so no truthiness
    # guard is needed (a Query object is always truthy anyway).
    SESSION.query(Filters).filter(Filters.chat_id == str(chat_id)).delete()
    SESSION.commit()
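The add/get/remove flow above follows plain upsert semantics on the `(chat_id, keyword)` composite key; a dict-based sketch of that behaviour (persistence and the media columns omitted, hypothetical):

```python
FILTERS = {}


def add_filter(chat_id, keyword, reply):
    # Upsert: a second add with the same key overwrites the reply.
    FILTERS[(str(chat_id), keyword)] = reply


def get_filter(chat_id, keyword):
    return FILTERS.get((str(chat_id), keyword))


def remove_all_filters(chat_id):
    for key in [k for k in FILTERS if k[0] == str(chat_id)]:
        del FILTERS[key]
```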
|
Andencento
|
/Andencento-0.24.tar.gz/Andencento-0.24/userbot/sql/filter_sql.py
|
filter_sql.py
|
import functools
from telethon import events
from userbot import *
Andencentohandler = Config.BOT_HANDLER
def userbot_cmd(add_cmd, is_args=False):
    def cmd(func):
        userbot = Andencento.tgbot
        # Check the string modes first: any non-empty string is truthy, so a
        # bare `if is_args:` test would otherwise shadow them.
        if is_args == "simp":
            pattern = Andencentohandler + add_cmd + " (.*)"
        elif is_args == "nope":
            pattern = Andencentohandler + add_cmd
        elif is_args == "snips":
            pattern = Andencentohandler + add_cmd + r" (\S+)"
        elif is_args:
            pattern = Andencentohandler + add_cmd + "(?: |$)(.*)"
        else:
            pattern = Andencentohandler + add_cmd + "$"
        userbot.add_event_handler(
            func, events.NewMessage(incoming=True, pattern=pattern)
        )
        return func  # keep the decorated name bound to the original function
    return cmd
def is_admin():
    def decorator(func):
        @functools.wraps(func)
        async def wrapper(event):
            userbot = Andencento.tgbot
            perms = await userbot.get_permissions(event.chat_id, event.sender_id)
            if perms.is_admin:
                await func(event)
            else:
                await event.reply("Only Admins Can Use This..")
        return wrapper
    return decorator
def is_Andencento_admin():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
userbot = Andencento.tgbot
boat = await userbot.get_me()
perms = await userbot.get_permissions(event.chat_id, boat)
if perms.is_admin:
await func(event)
else:
await event.reply("Need Admin privileges to do this...")
return wrapper
return decorator
def allowed_users():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
minna = list(Config.SUDO_USERS)
minna.append(Andencento.uid)
if event.sender_id in minna:
await func(event)
else:
await event.reply("This command can only be used by Owner and Sudo Users..")
return wrapper
return decorator
def owner_only():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
watashi = Andencento.uid
if event.sender_id == watashi:
await func(event)
else:
pass
return wrapper
return decorator
def only_groups():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
if event.is_group:
await func(event)
else:
await event.reply("I don't think this is a group !!")
return wrapper
return decorator
def only_group():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
if event.is_group:
await func(event)
else:
pass
return wrapper
return decorator
def allowed_only():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
minna = list(Config.SUDO_USERS)
minna.append(Andencento.uid)
if event.sender_id in minna:
await func(event)
else:
pass
return wrapper
return decorator
"""
import functools
from telethon import events
from userbot import *
Andencentohandler = Config.BOT_HANDLER
def userbot_cmd(add_cmd, is_args=False):
def cmd(func):
userbot = Andencento.tgbot
if is_args:
pattern = Andencentohandler + add_cmd + "(?: |$)(.*)"
elif is_args == "simp":
pattern = Andencentohandler + add_cmd + " (.*)"
elif is_args == "nope":
pattern = Andencentohandler + add_cmd
elif is_args == "snips":
pattern = Andencentohandler + add_cmd + " (\S+)"
else:
pattern = Andencentohandler + add_cmd + "$"
userbot.add_event_handler(
func, events.NewMessage(incoming=True, pattern=pattern)
)
return cmd
def is_admin():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
userbot = Andencento.tgbot
perms = await userbot.get_permissions(event.chat_id, event.sender_id)
user = event.sender_id
ForGo10 = Andencento.uid
if perms.is_admin:
await func(event)
if event.sender_id == ForGo10:
pass
elif not user:
pass
if not perms.is_admin:
await event.reply("Only Admins Can Use This..")
return wrapper
return decorator
def is_Andencento_admin():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
userbot = Andencento.tgbot
boat = await userbot.get_me()
perms = await userbot.get_permissions(event.chat_id, boat)
if perms.is_admin:
await func(event)
else:
await event.reply("Need Admin privileges to do this...")
return wrapper
return decorator
def allowed_users():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
minna = list(Config.SUDO_USERS)
minna.append(Andencento.uid)
if event.sender_id in minna:
await func(event)
else:
await event.reply("This command can only be used by Owner and Sudo Users..")
return wrapper
return decorator
def owner_only():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
watashi = Andencento.uid
if event.sender_id == watashi:
await func(event)
else:
pass
return wrapper
return decorator
def only_groups():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
if event.is_group:
await func(event)
else:
await event.reply("I don't think this is a group !!")
return wrapper
return decorator
def only_group():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
if event.is_group:
await func(event)
else:
pass
return wrapper
return decorator
def allowed_only():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
minna = list(Config.SUDO_USERS)
minna.append(Andencento.uid)
if event.sender_id in minna:
await func(event)
else:
pass
return wrapper
return decorator
def privates():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
if event.is_group:
pass
else:
await func(event)
return wrapper
return decorator
import functools
from telethon import events
from userbot import *
Andencentohandler = Config.BOT_HANDLER
def userbot_cmd(add_cmd, is_args=False):
def cmd(func):
userbot = Andencento.tgbot
if is_args:
pattern = Andencentohandler + add_cmd + "(?: |$)(.*)"
elif is_args == "simp":
pattern = Andencentohandler + add_cmd + " (.*)"
elif is_args == "nope":
pattern = Andencentohandler + add_cmd
elif is_args == "snips":
pattern = Andencentohandler + add_cmd + " (\S+)"
else:
pattern = Andencentohandler + add_cmd + "$"
userbot.add_event_handler(
func, events.NewMessage(incoming=True, pattern=pattern)
)
return cmd
def is_admin():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
userbot = Andencento.tgbot
perms = await userbot.get_permissions(event.chat_id, event.sender_id)
user = event.sender_id
ForGo10 = Andencento.uid
if perms.is_admin:
await func(event)
if event.sender_id == ForGo10:
pass
elif not user:
pass
if not perms.is_admin:
await event.reply("Only Admins Can Use This..")
return wrapper
return decorator
def is_Andencento_admin():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
userbot = Andencento.tgbot
boat = await userbot.get_me()
perms = await userbot.get_permissions(event.chat_id, boat)
if perms.is_admin:
await func(event)
else:
await event.reply("Need Admin privileges to do this...")
return wrapper
return decorator
def allowed_users():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
minna = list(Config.SUDO_USERS)
minna.append(Andencento.uid)
if event.sender_id in minna:
await func(event)
else:
await event.reply("This command can only be used by Owner and Sudo Users..")
return wrapper
return decorator
def owner_only():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
watashi = Andencento.uid
if event.sender_id == watashi:
await func(event)
else:
pass
return wrapper
return decorator
def only_groups():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
if event.is_group:
await func(event)
else:
await event.reply("I don't think this is a group !!")
return wrapper
return decorator
def only_group():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
if event.is_group:
await func(event)
else:
pass
return wrapper
return decorator
def allowed_only():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
minna = list(Config.SUDO_USERS)
minna.append(Andencento.uid)
if event.sender_id in minna:
await func(event)
else:
pass
return wrapper
return decorator
def privates():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
if event.is_group:
pass
else:
await func(event)
return wrapper
return decorator
import functools
from telethon import events
from userbot import *
Andencentohandler = Config.BOT_HANDLER
def userbot_cmd(add_cmd, is_args=False):
def cmd(func):
userbot = Andencento.tgbot
if is_args:
pattern = Andencentohandler + add_cmd + "(?: |$)(.*)"
elif is_args == "simp":
pattern = Andencentohandler + add_cmd + " (.*)"
elif is_args == "nope":
pattern = Andencentohandler + add_cmd
elif is_args == "snips":
pattern = Andencentohandler + add_cmd + " (\S+)"
else:
pattern = Andencentohandler + add_cmd + "$"
userbot.add_event_handler(
func, events.NewMessage(incoming=True, pattern=pattern)
)
return cmd
def is_admin():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
userbot = Andencento.tgbot
perms = await userbot.get_permissions(event.chat_id, event.sender_id)
user = event.sender_id
ForGo10 = Andencento.uid
if perms.is_admin:
await func(event)
if event.sender_id == ForGo10:
pass
elif not user:
pass
if not perms.is_admin:
await event.reply("Only Admins Can Use This..")
return wrapper
return decorator
def is_Andencento_admin():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
userbot = Andencento.tgbot
boat = await userbot.get_me()
perms = await userbot.get_permissions(event.chat_id, boat)
if perms.is_admin:
await func(event)
else:
await event.reply("Need Admin privileges to do this...")
return wrapper
return decorator
def allowed_users():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
minna = list(Config.SUDO_USERS)
minna.append(Andencento.uid)
if event.sender_id in minna:
await func(event)
else:
await event.reply("This command can only be used by Owner and Sudo Users..")
return wrapper
return decorator
def owner_only():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
watashi = Andencento.uid
if event.sender_id == watashi:
await func(event)
else:
pass
return wrapper
return decorator
def only_groups():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
if event.is_group:
await func(event)
else:
await event.reply("I don't think this is a group !!")
return wrapper
return decorator
def only_group():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
if event.is_group:
await func(event)
else:
pass
return wrapper
return decorator
def allowed_only():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
minna = list(Config.SUDO_USERS)
minna.append(Andencento.uid)
if event.sender_id in minna:
await func(event)
else:
pass
return wrapper
return decorator
def privates():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
if event.is_group:
pass
else:
await func(event)
return wrapper
return decorator
import functools
from telethon import events
from userbot import *
Andencentohandler = Config.BOT_HANDLER
def userbot_cmd(add_cmd, is_args=False):
def cmd(func):
userbot = Andencento.tgbot
if is_args:
pattern = Andencentohandler + add_cmd + "(?: |$)(.*)"
elif is_args == "simp":
pattern = Andencentohandler + add_cmd + " (.*)"
elif is_args == "nope":
pattern = Andencentohandler + add_cmd
elif is_args == "snips":
pattern = Andencentohandler + add_cmd + " (\S+)"
else:
pattern = Andencentohandler + add_cmd + "$"
userbot.add_event_handler(
func, events.NewMessage(incoming=True, pattern=pattern)
)
return cmd
def is_admin():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
userbot = Andencento.tgbot
perms = await userbot.get_permissions(event.chat_id, event.sender_id)
user = event.sender_id
ForGo10 = Andencento.uid
if perms.is_admin:
await func(event)
if event.sender_id == ForGo10:
pass
elif not user:
pass
if not perms.is_admin:
await event.reply("Only Admins Can Use This..")
return wrapper
return decorator
def is_Andencento_admin():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
userbot = Andencento.tgbot
boat = await userbot.get_me()
perms = await userbot.get_permissions(event.chat_id, boat)
if perms.is_admin:
await func(event)
else:
await event.reply("Need Admin privileges to do this...")
return wrapper
return decorator
def allowed_users():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
minna = list(Config.SUDO_USERS)
minna.append(Andencento.uid)
if event.sender_id in minna:
await func(event)
else:
await event.reply("This command can only be used by Owner and Sudo Users..")
return wrapper
return decorator
def owner_only():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
watashi = Andencento.uid
if event.sender_id == watashi:
await func(event)
else:
pass
return wrapper
return decorator
def only_groups():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
if event.is_group:
await func(event)
else:
await event.reply("I don't think this is a group !!")
return wrapper
return decorator
def only_group():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
if event.is_group:
await func(event)
else:
pass
return wrapper
return decorator
def allowed_only():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
minna = list(Config.SUDO_USERS)
minna.append(Andencento.uid)
if event.sender_id in minna:
await func(event)
else:
pass
return wrapper
return decorator
def privates():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
if event.is_group:
pass
else:
await func(event)
return wrapper
return decorator
import functools
from telethon import events
from userbot import *
Andencentohandler = Config.BOT_HANDLER
def userbot_cmd(add_cmd, is_args=False):
def cmd(func):
userbot = Andencento.tgbot
if is_args:
pattern = Andencentohandler + add_cmd + "(?: |$)(.*)"
elif is_args == "simp":
pattern = Andencentohandler + add_cmd + " (.*)"
elif is_args == "nope":
pattern = Andencentohandler + add_cmd
elif is_args == "snips":
pattern = Andencentohandler + add_cmd + " (\S+)"
else:
pattern = Andencentohandler + add_cmd + "$"
userbot.add_event_handler(
func, events.NewMessage(incoming=True, pattern=pattern)
)
return cmd
def is_admin():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
userbot = Andencento.tgbot
perms = await userbot.get_permissions(event.chat_id, event.sender_id)
user = event.sender_id
ForGo10 = Andencento.uid
if perms.is_admin:
await func(event)
if event.sender_id == ForGo10:
pass
elif not user:
pass
if not perms.is_admin:
await event.reply("Only Admins Can Use This..")
return wrapper
return decorator
def is_Andencento_admin():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
userbot = Andencento.tgbot
boat = await userbot.get_me()
perms = await userbot.get_permissions(event.chat_id, boat)
if perms.is_admin:
await func(event)
else:
await event.reply("Need Admin privileges to do this...")
return wrapper
return decorator
def allowed_users():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
minna = list(Config.SUDO_USERS)
minna.append(Andencento.uid)
if event.sender_id in minna:
await func(event)
else:
await event.reply("This command can only be used by Owner and Sudo Users..")
return wrapper
return decorator
def owner_only():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
watashi = Andencento.uid
if event.sender_id == watashi:
await func(event)
else:
pass
return wrapper
return decorator
def only_groups():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
if event.is_group:
await func(event)
else:
await event.reply("I don't think this is a group !!")
return wrapper
return decorator
def only_group():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
if event.is_group:
await func(event)
else:
pass
return wrapper
return decorator
def allowed_only():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
minna = list(Config.SUDO_USERS)
minna.append(Andencento.uid)
if event.sender_id in minna:
await func(event)
else:
pass
return wrapper
return decorator
def privates():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
if event.is_group:
pass
else:
await func(event)
return wrapper
return decorator
import functools
from telethon import events
from userbot import *
Andencentohandler = Config.BOT_HANDLER
def userbot_cmd(add_cmd, is_args=False):
def cmd(func):
userbot = Andencento.tgbot
if is_args:
pattern = Andencentohandler + add_cmd + "(?: |$)(.*)"
elif is_args == "simp":
pattern = Andencentohandler + add_cmd + " (.*)"
elif is_args == "nope":
pattern = Andencentohandler + add_cmd
elif is_args == "snips":
pattern = Andencentohandler + add_cmd + " (\S+)"
else:
pattern = Andencentohandler + add_cmd + "$"
userbot.add_event_handler(
func, events.NewMessage(incoming=True, pattern=pattern)
)
return cmd
def is_admin():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
userbot = Andencento.tgbot
perms = await userbot.get_permissions(event.chat_id, event.sender_id)
user = event.sender_id
ForGo10 = Andencento.uid
if perms.is_admin:
await func(event)
if event.sender_id == ForGo10:
pass
elif not user:
pass
if not perms.is_admin:
await event.reply("Only Admins Can Use This..")
return wrapper
return decorator
def is_Andencento_admin():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
userbot = Andencento.tgbot
boat = await userbot.get_me()
perms = await userbot.get_permissions(event.chat_id, boat)
if perms.is_admin:
await func(event)
else:
await event.reply("Need Admin privileges to do this...")
return wrapper
return decorator
def allowed_users():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
minna = list(Config.SUDO_USERS)
minna.append(Andencento.uid)
if event.sender_id in minna:
await func(event)
else:
await event.reply("This command can only be used by Owner and Sudo Users..")
return wrapper
return decorator
def owner_only():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
watashi = Andencento.uid
if event.sender_id == watashi:
await func(event)
else:
pass
return wrapper
return decorator
def only_groups():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
if event.is_group:
await func(event)
else:
await event.reply("I don't think this is a group !!")
return wrapper
return decorator
def only_group():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
if event.is_group:
await func(event)
else:
pass
return wrapper
return decorator
def allowed_only():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
minna = list(Config.SUDO_USERS)
minna.append(Andencento.uid)
if event.sender_id in minna:
await func(event)
else:
pass
return wrapper
return decorator
def privates():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
if event.is_group:
pass
else:
await func(event)
return wrapper
return decorator
import functools
from telethon import events
from userbot import *
Andencentohandler = Config.BOT_HANDLER
def userbot_cmd(add_cmd, is_args=False):
def cmd(func):
userbot = Andencento.tgbot
if is_args:
pattern = Andencentohandler + add_cmd + "(?: |$)(.*)"
elif is_args == "simp":
pattern = Andencentohandler + add_cmd + " (.*)"
elif is_args == "nope":
pattern = Andencentohandler + add_cmd
elif is_args == "snips":
pattern = Andencentohandler + add_cmd + " (\S+)"
else:
pattern = Andencentohandler + add_cmd + "$"
userbot.add_event_handler(
func, events.NewMessage(incoming=True, pattern=pattern)
)
return cmd
def is_admin():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
userbot = Andencento.tgbot
perms = await userbot.get_permissions(event.chat_id, event.sender_id)
user = event.sender_id
ForGo10 = Andencento.uid
if perms.is_admin:
await func(event)
if event.sender_id == ForGo10:
pass
elif not user:
pass
if not perms.is_admin:
await event.reply("Only Admins Can Use This..")
return wrapper
return decorator
def is_Andencento_admin():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
userbot = Andencento.tgbot
boat = await userbot.get_me()
perms = await userbot.get_permissions(event.chat_id, boat)
if perms.is_admin:
await func(event)
else:
await event.reply("Need Admin privileges to do this...")
return wrapper
return decorator
def allowed_users():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
minna = list(Config.SUDO_USERS)
minna.append(Andencento.uid)
if event.sender_id in minna:
await func(event)
else:
await event.reply("This command can only be used by Owner and Sudo Users..")
return wrapper
return decorator
def owner_only():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
watashi = Andencento.uid
if event.sender_id == watashi:
await func(event)
else:
pass
return wrapper
return decorator
def only_groups():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
if event.is_group:
await func(event)
else:
await event.reply("I don't think this is a group !!")
return wrapper
return decorator
def only_group():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
if event.is_group:
await func(event)
else:
pass
return wrapper
return decorator
def allowed_only():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
minna = list(Config.SUDO_USERS)
minna.append(Andencento.uid)
if event.sender_id in minna:
await func(event)
else:
pass
return wrapper
return decorator
def privates():
def decorator(func):
@functools.wraps(func)
async def wrapper(event):
if event.is_group:
pass
else:
await func(event)
return wrapper
return decorator
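A minimal, self-contained sketch of how these permission decorators behave (the `FakeEvent` class, `OWNER_ID` value, and `ping` command are hypothetical stand-ins for the package's `Andencento`/Telethon objects, for illustration only):

```python
import asyncio
import functools


class FakeEvent:
    """Hypothetical stand-in for a Telethon NewMessage event."""

    def __init__(self, sender_id, is_group=True):
        self.sender_id = sender_id
        self.is_group = is_group
        self.replies = []

    async def reply(self, text):
        self.replies.append(text)


OWNER_ID = 12345  # plays the role of Andencento.uid


def owner_only(owner_id):
    """Same shape as the package's owner_only(): silently ignore non-owners."""

    def decorator(func):
        @functools.wraps(func)
        async def wrapper(event):
            if event.sender_id == owner_id:
                await func(event)

        return wrapper

    return decorator


@owner_only(OWNER_ID)
async def ping(event):
    await event.reply("pong")


owner_evt = FakeEvent(OWNER_ID)
other_evt = FakeEvent(999)
asyncio.run(ping(owner_evt))   # owner gets a reply
asyncio.run(ping(other_evt))   # non-owner is silently ignored
```

The same wrapper shape underlies `allowed_only`, `only_group`, and `privates`; only the guard condition changes.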
|
Andencento
|
/Andencento-0.24.tar.gz/Andencento-0.24/userbot/assistant_utils/decorators.py
|
decorators.py
|
import asyncio
import datetime
import importlib
import inspect
import logging
import math
import os
import re
import requests
import sys
import time
import traceback
from pathlib import Path
from time import gmtime, strftime
from telethon import events
from telethon.tl.functions.channels import GetParticipantRequest
from telethon.tl.types import ChannelParticipantAdmin, ChannelParticipantCreator
from .. import *
from ..helpers import *
from ..config import Config
# either edit or reply that msg
async def edit_or_reply(
event,
text,
parse_mode=None,
link_preview=None,
file_name=None,
aslink=False,
deflink=False,
noformat=False,
linktext=None,
caption=None,
):
link_preview = link_preview or False
reply_to = await event.get_reply_message()
if len(text) < 4096 and not deflink:
parse_mode = parse_mode or "md"
if event.sender_id in Config.SUDO_USERS:
if reply_to:
return await reply_to.reply(
text, parse_mode=parse_mode, link_preview=link_preview
)
return await event.reply(
text, parse_mode=parse_mode, link_preview=link_preview
)
await event.edit(text, parse_mode=parse_mode, link_preview=link_preview)
return event
if not noformat:
asciich = ["**", "`", "__"]
for i in asciich:
text = re.sub(rf"\{i}", "", text)
if aslink or deflink:
        linktext = linktext or "Message was too big, so it was pasted to a bin"
try:
key = (
requests.post(
"https://nekobin.com/api/documents", json={"content": text}
)
.json()
.get("result")
.get("key")
)
text = linktext + f" [here](https://nekobin.com/{key})"
except Exception:
text = re.sub(r"•", ">>", text)
kresult = requests.post(
"https://del.dog/documents", data=text.encode("UTF-8")
).json()
text = linktext + f" [here](https://del.dog/{kresult['key']})"
if event.sender_id in Config.SUDO_USERS:
if reply_to:
return await reply_to.reply(text, link_preview=link_preview)
return await event.reply(text, link_preview=link_preview)
await event.edit(text, link_preview=link_preview)
return event
file_name = file_name or "output.txt"
caption = caption or None
with open(file_name, "w+") as output:
output.write(text)
if reply_to:
await reply_to.reply(caption, file=file_name)
await event.delete()
return os.remove(file_name)
if event.sender_id in Config.SUDO_USERS:
await event.reply(caption, file=file_name)
await event.delete()
return os.remove(file_name)
await event.client.send_file(event.chat_id, file_name, caption=caption)
await event.delete()
os.remove(file_name)
# delete timeout
async def delete(event, text, time=None, parse_mode=None, link_preview=None):
parse_mode = parse_mode or "md"
link_preview = link_preview or False
time = time or 10
if event.sender_id in Config.SUDO_USERS:
reply_to = await event.get_reply_message()
event = (
await reply_to.reply(text, link_preview=link_preview, parse_mode=parse_mode)
if reply_to
else await event.reply(
text, link_preview=link_preview, parse_mode=parse_mode
)
)
else:
event = await event.edit(
text, link_preview=link_preview, parse_mode=parse_mode
)
await asyncio.sleep(time)
return await event.delete()
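The `delete()` helper above follows an edit-wait-delete flow. A self-contained sketch of that flow, using a hypothetical `FakeMessage` stand-in instead of a real Telethon event:

```python
import asyncio


class FakeMessage:
    """Hypothetical stand-in for a Telethon message, for illustration."""

    def __init__(self):
        self.text = None
        self.deleted = False

    async def edit(self, text):
        self.text = text
        return self

    async def delete(self):
        self.deleted = True


async def delete_after(event, text, delay=0.01):
    # Same flow as delete() above: edit the message, wait, then remove it.
    event = await event.edit(text)
    await asyncio.sleep(delay)
    await event.delete()


msg = FakeMessage()
asyncio.run(delete_after(msg, "temporary notice"))
```

In the real helper the sudo branch replies instead of editing, since a sudo user cannot edit the owner's message.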
|
Andencento
|
/Andencento-0.24.tar.gz/Andencento-0.24/userbot/utils/extras.py
|
extras.py
|
import asyncio
import datetime
import importlib
import inspect
import logging
import math
import os
import re
import sys
import time
import traceback
from pathlib import Path
from time import gmtime, strftime
from telethon import events
from telethon.tl.functions.channels import GetParticipantRequest
from telethon.tl.types import ChannelParticipantAdmin, ChannelParticipantCreator
from .. import Andencento as bot
from .. import *
from ..helpers import *
from ..config import Config
# admin cmd or normal user cmd
def admin_cmd(pattern=None, command=None, **args):
args["func"] = lambda e: e.via_bot_id is None
stack = inspect.stack()
previous_stack_frame = stack[1]
file_test = Path(previous_stack_frame.filename)
file_test = file_test.stem.replace(".py", "")
allow_sudo = args.get("allow_sudo", False)
# get the pattern from the decorator
if pattern is not None:
if pattern.startswith(r"\#"):
# special fix for snip.py
args["pattern"] = re.compile(pattern)
elif pattern.startswith(r"^"):
args["pattern"] = re.compile(pattern)
cmd = pattern.replace("$", "").replace("^", "").replace("\\", "")
try:
CMD_LIST[file_test].append(cmd)
except BaseException:
CMD_LIST.update({file_test: [cmd]})
else:
if len(Config.HANDLER) == 2:
catreg = "^" + Config.HANDLER
reg = Config.HANDLER[1]
elif len(Config.HANDLER) == 1:
catreg = "^\\" + Config.HANDLER
reg = Config.HANDLER
args["pattern"] = re.compile(catreg + pattern)
if command is not None:
cmd = reg + command
else:
cmd = (
(reg + pattern).replace("$", "").replace("\\", "").replace("^", "")
)
try:
CMD_LIST[file_test].append(cmd)
except BaseException:
CMD_LIST.update({file_test: [cmd]})
args["outgoing"] = True
# should this command be available for other users?
if allow_sudo:
args["from_users"] = list(Config.SUDO_USERS)
# Mutually exclusive with outgoing (can only set one of either).
args["incoming"] = True
del args["allow_sudo"]
# error handling condition check
elif "incoming" in args and not args["incoming"]:
args["outgoing"] = True
# add blacklist chats, UB should not respond in these chats
args["blacklist_chats"] = True
black_list_chats = list(Config.UB_BLACK_LIST_CHAT)
if len(black_list_chats) > 0:
args["chats"] = black_list_chats
    # 'allow_edited_updates' is not an events.NewMessage argument; drop it
    args.pop("allow_edited_updates", None)
# check if the plugin should listen for outgoing 'messages'
return events.NewMessage(**args)
def andencento_cmd(pattern=None, command=None, **args):
args["func"] = lambda e: e.via_bot_id is None
stack = inspect.stack()
previous_stack_frame = stack[1]
file_test = Path(previous_stack_frame.filename)
file_test = file_test.stem.replace(".py", "")
allow_sudo = args.get("allow_sudo", False)
# get the pattern from the decorator
if pattern is not None:
if pattern.startswith(r"\#"):
# special fix for snip.py
args["pattern"] = re.compile(pattern)
elif pattern.startswith(r"^"):
args["pattern"] = re.compile(pattern)
cmd = pattern.replace("$", "").replace("^", "").replace("\\", "")
try:
CMD_LIST[file_test].append(cmd)
except BaseException:
CMD_LIST.update({file_test: [cmd]})
else:
if len(Config.HANDLER) == 2:
catreg = "^" + Config.HANDLER
reg = Config.HANDLER[1]
elif len(Config.HANDLER) == 1:
catreg = "^\\" + Config.HANDLER
reg = Config.HANDLER
args["pattern"] = re.compile(catreg + pattern)
if command is not None:
cmd = reg + command
else:
cmd = (
(reg + pattern).replace("$", "").replace("\\", "").replace("^", "")
)
try:
CMD_LIST[file_test].append(cmd)
except BaseException:
CMD_LIST.update({file_test: [cmd]})
args["outgoing"] = True
# should this command be available for other users?
if allow_sudo:
args["from_users"] = list(Config.SUDO_USERS)
# Mutually exclusive with outgoing (can only set one of either).
args["incoming"] = True
del args["allow_sudo"]
# error handling condition check
elif "incoming" in args and not args["incoming"]:
args["outgoing"] = True
# add blacklist chats, UB should not respond in these chats
args["blacklist_chats"] = True
black_list_chats = list(Config.UB_BLACK_LIST_CHAT)
if len(black_list_chats) > 0:
args["chats"] = black_list_chats
    # 'allow_edited_updates' is not an events.NewMessage argument; drop it
    args.pop("allow_edited_updates", None)
# check if the plugin should listen for outgoing 'messages'
return events.NewMessage(**args)
def extremepro_cmd(pattern=None, command=None, **args):
args["func"] = lambda e: e.via_bot_id is None
stack = inspect.stack()
previous_stack_frame = stack[1]
file_test = Path(previous_stack_frame.filename)
file_test = file_test.stem.replace(".py", "")
allow_sudo = args.get("allow_sudo", False)
# get the pattern from the decorator
if pattern is not None:
if pattern.startswith(r"\#"):
# special fix for snip.py
args["pattern"] = re.compile(pattern)
elif pattern.startswith(r"^"):
args["pattern"] = re.compile(pattern)
cmd = pattern.replace("$", "").replace("^", "").replace("\\", "")
try:
CMD_LIST[file_test].append(cmd)
except BaseException:
CMD_LIST.update({file_test: [cmd]})
else:
if len(Config.HANDLER) == 2:
catreg = "^" + Config.HANDLER
reg = Config.HANDLER[1]
elif len(Config.HANDLER) == 1:
catreg = "^\\" + Config.HANDLER
reg = Config.HANDLER
args["pattern"] = re.compile(catreg + pattern)
if command is not None:
cmd = reg + command
else:
cmd = (
(reg + pattern).replace("$", "").replace("\\", "").replace("^", "")
)
try:
CMD_LIST[file_test].append(cmd)
except BaseException:
CMD_LIST.update({file_test: [cmd]})
args["outgoing"] = True
# should this command be available for other users?
if allow_sudo:
args["from_users"] = list(Config.SUDO_USERS)
# Mutually exclusive with outgoing (can only set one of either).
args["incoming"] = True
del args["allow_sudo"]
# error handling condition check
elif "incoming" in args and not args["incoming"]:
args["outgoing"] = True
# add blacklist chats, UB should not respond in these chats
args["blacklist_chats"] = True
black_list_chats = list(Config.UB_BLACK_LIST_CHAT)
if len(black_list_chats) > 0:
args["chats"] = black_list_chats
    # 'allow_edited_updates' is not an events.NewMessage argument; drop it
    args.pop("allow_edited_updates", None)
# check if the plugin should listen for outgoing 'messages'
return events.NewMessage(**args)
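The handler-prefix branches above treat a two-character handler verbatim but backslash-escape a one-character handler, so regex metacharacters such as `.` match literally. A stdlib-only sketch of that logic (`build_command_regex` is a hypothetical helper name, not part of the package):

```python
import re

def build_command_regex(handler: str, pattern: str) -> re.Pattern:
    """Mirror of the prefix logic in extremepro_cmd: two-character
    handlers are used as-is, one-character handlers are escaped so a
    '.' handler only matches a literal leading dot."""
    if len(handler) == 2:
        catreg = "^" + handler
    elif len(handler) == 1:
        catreg = "^\\" + handler
    else:
        raise ValueError("handler must be 1 or 2 characters")
    return re.compile(catreg + pattern)

rx = build_command_regex(".", "ping$")
print(bool(rx.match(".ping")))   # True
print(bool(rx.match("xping")))   # False
```

Without the escape, `"^.ping$"` would match any first character, which is why the single-character branch prepends the backslash.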
def sudo_cmd(pattern=None, command=None, **args):
args["func"] = lambda e: e.via_bot_id is None
stack = inspect.stack()
previous_stack_frame = stack[1]
file_test = Path(previous_stack_frame.filename)
file_test = file_test.stem.replace(".py", "")
allow_sudo = args.get("allow_sudo", False)
# get the pattern from the decorator
if pattern is not None:
if pattern.startswith(r"\#"):
# special fix for snip.py
args["pattern"] = re.compile(pattern)
elif pattern.startswith(r"^"):
args["pattern"] = re.compile(pattern)
cmd = pattern.replace("$", "").replace("^", "").replace("\\", "")
try:
SUDO_LIST[file_test].append(cmd)
except BaseException:
SUDO_LIST.update({file_test: [cmd]})
else:
if len(Config.SUDO_COMMAND_HAND_LER) == 2:
catreg = "^" + Config.SUDO_COMMAND_HAND_LER
reg = Config.SUDO_COMMAND_HAND_LER[1]
elif len(Config.SUDO_COMMAND_HAND_LER) == 1:
catreg = "^\\" + Config.SUDO_COMMAND_HAND_LER
reg = Config.SUDO_COMMAND_HAND_LER
args["pattern"] = re.compile(catreg + pattern)
if command is not None:
cmd = reg + command
else:
cmd = (
(reg + pattern).replace("$", "").replace("\\", "").replace("^", "")
)
try:
SUDO_LIST[file_test].append(cmd)
except BaseException:
SUDO_LIST.update({file_test: [cmd]})
args["outgoing"] = True
# should this command be available for other users?
if allow_sudo:
args["from_users"] = list(Config.SUDO_USERS)
# Mutually exclusive with outgoing (can only set one of either).
args["incoming"] = True
del args["allow_sudo"]
# error handling condition check
elif "incoming" in args and not args["incoming"]:
args["outgoing"] = True
# add blacklist chats, UB should not respond in these chats
args["blacklist_chats"] = True
black_list_chats = list(Config.UB_BLACK_LIST_CHAT)
if black_list_chats:
args["chats"] = black_list_chats
# drop the non-Telethon sentinel kwarg before building the event
if "allow_edited_updates" in args and args["allow_edited_updates"]:
del args["allow_edited_updates"]
# check if the plugin should listen for outgoing 'messages'
return events.NewMessage(**args)
# https://t.me/c/1220993104/623253
# https://docs.telethon.dev/en/latest/misc/changelog.html#breaking-changes
def amanpandey_cmd(pattern=None, command=None, **args):
args["func"] = lambda e: e.via_bot_id is None
stack = inspect.stack()
previous_stack_frame = stack[1]
file_test = Path(previous_stack_frame.filename)
file_test = file_test.stem.replace(".py", "")
allow_sudo = args.get("allow_sudo", False)
# get the pattern from the decorator
if pattern is not None:
if pattern.startswith(r"\#"):
# special fix for snip.py
args["pattern"] = re.compile(pattern)
elif pattern.startswith(r"^"):
args["pattern"] = re.compile(pattern)
cmd = pattern.replace("$", "").replace("^", "").replace("\\", "")
try:
SUDO_LIST[file_test].append(cmd)
except BaseException:
SUDO_LIST.update({file_test: [cmd]})
else:
if len(Config.SUDO_COMMAND_HAND_LER) == 2:
catreg = "^" + Config.SUDO_COMMAND_HAND_LER
reg = Config.SUDO_COMMAND_HAND_LER[1]
elif len(Config.SUDO_COMMAND_HAND_LER) == 1:
catreg = "^\\" + Config.SUDO_COMMAND_HAND_LER
reg = Config.SUDO_COMMAND_HAND_LER
args["pattern"] = re.compile(catreg + pattern)
if command is not None:
cmd = reg + command
else:
cmd = (
(reg + pattern).replace("$", "").replace("\\", "").replace("^", "")
)
try:
SUDO_LIST[file_test].append(cmd)
except BaseException:
SUDO_LIST.update({file_test: [cmd]})
args["outgoing"] = True
# should this command be available for other users?
if allow_sudo:
args["from_users"] = list(Config.SUDO_USERS)
# Mutually exclusive with outgoing (can only set one of either).
args["incoming"] = True
del args["allow_sudo"]
# error handling condition check
elif "incoming" in args and not args["incoming"]:
args["outgoing"] = True
# add blacklist chats, UB should not respond in these chats
args["blacklist_chats"] = True
black_list_chats = list(Config.UB_BLACK_LIST_CHAT)
if black_list_chats:
args["chats"] = black_list_chats
# drop the non-Telethon sentinel kwarg before building the event
if "allow_edited_updates" in args and args["allow_edited_updates"]:
del args["allow_edited_updates"]
# check if the plugin should listen for outgoing 'messages'
return events.NewMessage(**args)
# Configuration of Andencento cmd
on = Andencento.on
def on(**args):
def decorator(func):
async def wrapper(event):
# check if sudo
await func(event)
Andencento.add_event_handler(wrapper, events.NewMessage(**args))
return wrapper
return decorator
# register decorator
def register(**args):
args["func"] = lambda e: e.via_Andencento_id is None
stack = inspect.stack()
previous_stack_frame = stack[1]
file_test = Path(previous_stack_frame.filename)
file_test = file_test.stem.replace(".py", "")
pattern = args.get("pattern", None)
disable_edited = args.get("disable_edited", True)
allow_sudo = args.get("allow_sudo", False)
if pattern is not None and not pattern.startswith("(?i)"):
args["pattern"] = "(?i)" + pattern
if "disable_edited" in args:
del args["disable_edited"]
reg = re.compile("(.*)")
if pattern is not None:
try:
cmd = re.search(reg, pattern)
try:
cmd = cmd.group(1).replace("$", "").replace("\\", "").replace("^", "")
except BaseException:
pass
try:
CMD_LIST[file_test].append(cmd)
except BaseException:
CMD_LIST.update({file_test: [cmd]})
except BaseException:
pass
if allow_sudo:
args["from_users"] = list(Config.SUDO_USERS)
# Mutually exclusive with outgoing (can only set one of either).
args["incoming"] = True
del args["allow_sudo"]
# error handling condition check
elif "incoming" in args and not args["incoming"]:
args["outgoing"] = True
# add blacklist chats, UB should not respond in these chats
args["blacklist_chats"] = True
black_list_chats = list(Config.BL_CHAT)
if len(black_list_chats) > 0:
args["chats"] = black_list_chats
def decorator(func):
if not disable_edited:
Andencento.add_event_handler(func, events.MessageEdited(**args))
Andencento.add_event_handler(func, events.NewMessage(**args))
try:
LOAD_PLUG[file_test].append(func)
except Exception:
LOAD_PLUG.update({file_test: [func]})
return func
return decorator
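The CMD_LIST bookkeeping in `register` runs a `(.*)` regex over the pattern, strips the regex punctuation (`$`, `\`, `^`), and files the result under the plugin's module stem for the help menu. A stdlib-only reproduction (`DEMO_CMD_LIST` and `record_command` are illustrative stand-ins, not names from the source):

```python
import re

DEMO_CMD_LIST: dict[str, list[str]] = {}

def record_command(file_stem: str, pattern: str) -> None:
    """Mirrors register(): strip regex punctuation from the pattern
    and file the bare command under the plugin's module name."""
    match = re.search(re.compile("(.*)"), pattern)
    cmd = match.group(1).replace("$", "").replace("\\", "").replace("^", "")
    DEMO_CMD_LIST.setdefault(file_stem, []).append(cmd)

record_command("afk", r"^\.afk$")
print(DEMO_CMD_LIST)  # {'afk': ['.afk']}
```

`setdefault` replaces the source's `try: append / except: update` idiom with the same effect in one call.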
# command decorations
def command(**args):
args["func"] = lambda e: e.via_bot_id is None
stack = inspect.stack()
previous_stack_frame = stack[1]
file_test = Path(previous_stack_frame.filename)
file_test = file_test.stem.replace(".py", "")
pattern = args.get("pattern", None)
allow_sudo = args.get("allow_sudo", None)
allow_edited_updates = args.get("allow_edited_updates", False)
args["incoming"] = args.get("incoming", False)
args["outgoing"] = True
if bool(args["incoming"]):
args["outgoing"] = False
try:
if pattern is not None and not pattern.startswith("(?i)"):
args["pattern"] = "(?i)" + pattern
except BaseException:
pass
reg = re.compile("(.*)")
if pattern is not None:
try:
cmd = re.search(reg, pattern)
try:
cmd = cmd.group(1).replace("$", "").replace("\\", "").replace("^", "")
except BaseException:
pass
try:
CMD_LIST[file_test].append(cmd)
except BaseException:
CMD_LIST.update({file_test: [cmd]})
except BaseException:
pass
if allow_sudo:
args["from_users"] = list(Config.SUDO_USERS)
# Mutually exclusive with outgoing (can only set one of either).
args["incoming"] = True
del allow_sudo
try:
del args["allow_sudo"]
except BaseException:
pass
args["blacklist_chats"] = True
black_list_chats = list(Config.UB_BLACK_LIST_CHAT)
if len(black_list_chats) > 0:
args["chats"] = black_list_chats
if "allow_edited_updates" in args:
del args["allow_edited_updates"]
def decorator(func):
if allow_edited_updates:
bot.add_event_handler(func, events.MessageEdited(**args))
bot.add_event_handler(func, events.NewMessage(**args))
try:
LOAD_PLUG[file_test].append(func)
except BaseException:
LOAD_PLUG.update({file_test: [func]})
return func
return decorator
|
Andencento
|
/Andencento-0.24.tar.gz/Andencento-0.24/userbot/utils/decorators.py
|
decorators.py
|
import os
from .. import CMD_HELP, CMD_HELP_BOT
HANDLER = os.environ.get("ANDENCENTO_HNDLR", r".")
# Made this class for help menu
class CmdHelp:
FILE = ""
ORIGINAL_FILE = ""
FILE_AUTHOR = ""
IS_OFFICIAL = True
COMMANDS = {}
PREFIX = HANDLER
WARNING = ""
INFO = ""
def __init__(self, file: str, official: bool = True, file_name: str = None):
self.FILE = file
self.ORIGINAL_FILE = file
self.IS_OFFICIAL = official
self.FILE_NAME = file_name if file_name is not None else file + ".py"
self.COMMANDS = {}
self.FILE_AUTHOR = ""
self.WARNING = ""
self.INFO = ""
def set_file_info(self, name: str, value: str):
if name == "name":
self.FILE = value
elif name == "author":
self.FILE_AUTHOR = value
return self
def add_command(self, command: str, params=None, usage: str = "", example=None):
"""
Inserts a command into the help entry.
"""
self.COMMANDS[command] = {
"command": command,
"params": params,
"usage": usage,
"example": example,
}
return self
def add_warning(self, warning):
self.WARNING = warning
return self
def add_info(self, info):
self.INFO = info
return self
def get_result(self):
"""
Builds and returns the formatted help text.
"""
result = f"**📗 File :** `{self.FILE}`\n"
if self.INFO == "":
if self.WARNING != "":
result += f"**⚠️ Warning :** {self.WARNING}\n\n"
else:
if self.WARNING != "":
result += f"**⚠️ Warning :** {self.WARNING}\n"
result += f"**ℹ️ Info :** {self.INFO}\n\n"
for command in self.COMMANDS:
command = self.COMMANDS[command]
if command["params"] == None:
result += f"**🛠 Command :** `{HANDLER[:1]}{command['command']}`\n"
else:
result += f"**🛠 Command :** `{HANDLER[:1]}{command['command']} {command['params']}`\n"
if command["example"] == None:
result += f"**💬 Details :** `{command['usage']}`\n\n"
else:
result += f"**💬 Details :** `{command['usage']}`\n"
result += (
f"**⌨️ For Example :** `{HANDLER[:1]}{command['example']}`\n\n"
)
return result
def add(self):
"""
Registers this entry directly into CMD_HELP.
"""
CMD_HELP_BOT[self.FILE] = {
"info": {
"warning": self.WARNING,
"info": self.INFO,
},
"commands": self.COMMANDS,
}
CMD_HELP[self.FILE] = self.get_result()
return True
def getText(self, text: str):
if text == "REPLY_OR_USERNAME":
return "<user name> <user name/answer >"
elif text == "OR":
return "or"
elif text == "USERNAMES":
return "<user name (s)>"
|
Andencento
|
/Andencento-0.24.tar.gz/Andencento-0.24/userbot/utils/cmds.py
|
cmds.py
|
import importlib
import logging
import os
import sys
from pathlib import Path
from userbot.var import Var
from .. import *
from ..config import *
from ..helpers import *
from ..helpers.progress import *
from . import *
from .assistant_load import *
from .decorators import *
from .errors import *
from .extras import *
from .funcs import *
# ENV
ENV = bool(os.environ.get("ENV", False))
if ENV:
from userbot.config import Config
else:
if os.path.exists("Config.py"):
from userbot.AndencentoConfig import Development as Config
# load plugins
def load_module(shortname):
if shortname.startswith("__"):
pass
elif shortname.endswith("_"):
import userbot.utils
path = Path(f"plugins/{shortname}.py")
name = "plugins.{}".format(shortname)
spec = importlib.util.spec_from_file_location(name, path)
mod = importlib.util.module_from_spec(spec)
spec.loader.exec_module(mod)
LOGS.info("Successfully imported " + shortname)
else:
import userbot.utils
path = Path(f"plugins/{shortname}.py")
name = "plugins.{}".format(shortname)
spec = importlib.util.spec_from_file_location(name, path)
mod = importlib.util.module_from_spec(spec)
mod.Andencento = Andencento
mod.bot = Andencento
mod.delete_hell = delete
mod.eod = delete
mod.admin_cmd = admin_cmd
mod.Var = Var
mod.Var = Config
mod.andencento_cmd = andencento_cmd
mod.command = command
mod.logger = logging.getLogger(shortname)
mod.extremepro_cmd = admin_cmd
mod.amanpandey_cmd = sudo_cmd
mod.asst = asst
mod.asstcmd = Andencento.tgbot
mod.LOGS = LOGS
mod.tgbot = Andencento.tgbot
mod.sudo_cmd = sudo_cmd
sys.modules["userbot"] = userbot
sys.modules["var"] = userbot.var
sys.modules["config"] = userbot.config
sys.modules["Config"] = userbot.AndencentoConfig
sys.modules["userbot.utils"] = userbot.utils
sys.modules["Extre.events"] = userbot.utils
sys.modules["userbot.events"] = userbot.utils
sys.modules["ULTRA.utils"] = userbot.utils
sys.modules["userbot.Config"] = userbot.config
sys.modules["userbot.uniborConfig"] = userbot.config
sys.modules["ub"] = userbot
sys.modules["jarvis"] = userbot
sys.modules["support"] = userbot
sys.modules["userbot"] = userbot
sys.modules["telebot"] = userbot
sys.modules["fridaybot"] = userbot
sys.modules["jarvis.utils"] = userbot.utils
sys.modules["uniborg.util"] = userbot.utils
sys.modules["teleAndencento.utils"] = userbot.utils
sys.modules["userbot.utils"] = userbot.utils
sys.modules["userbot.events"] = userbot.utils
sys.modules["jarvis.jconfig"] = userbot.config
sys.modules["userbot.config"] = userbot.config
sys.modules["fridayAndencento.utils"] = userbot.utils
sys.modules["fridayAndencento.Config"] = userbot.config
sys.modules["userbot.uniborgConfig"] = userbot.config
mod.edit_or_reply = edit_or_reply
mod.logger = logging.getLogger(shortname)
# support for uniborg
sys.modules["uniborg.util"] = userbot.utils
mod.Config = Config
mod.borg = Andencento
mod.edit_or_reply = edit_or_reply
mod.eor = edit_or_reply
# support for paperplaneextended
sys.modules["userbot.mainfiles.events"] = userbot.utils
spec.loader.exec_module(mod)
# for imports
sys.modules["plugins." + shortname] = mod
LOGS.info("ANDENCENTO imported " + shortname)
def extra(shortname):
if shortname.startswith("__"):
pass
elif shortname.endswith("_"):
import userbot.utils
path = Path(f"Addons-Andencento/{shortname}.py")
name = "Addons-Andencento.{}".format(shortname)
spec = importlib.util.spec_from_file_location(name, path)
mod = importlib.util.module_from_spec(spec)
spec.loader.exec_module(mod)
LOGS.info("Successfully imported " + shortname)
else:
import userbot.utils
path = Path(f"Addons-Andencento/{shortname}.py")
name = "Addons-Andencento.plugins.{}".format(shortname)
spec = importlib.util.spec_from_file_location(name, path)
mod = importlib.util.module_from_spec(spec)
mod.Andencento = Andencento
mod.bot = Andencento
mod.delete_hell = delete
mod.eod = delete
mod.admin_cmd = admin_cmd
mod.Var = Var
mod.command = command
mod.logger = logging.getLogger(shortname)
mod.extremepro_cmd = admin_cmd
mod.amanpandey_cmd = sudo_cmd
mod.LOGS = LOGS
mod.tgbot = Andencento.tgbot
mod.sudo_cmd = sudo_cmd
sys.modules["userbot"] = userbot
sys.modules["userbot.utils"] = userbot.utils
sys.modules["Extre.events"] = userbot.utils
sys.modules["userbot.events"] = userbot.utils
sys.modules["ULTRA.utils"] = userbot.utils
sys.modules["userbot.Config"] = userbot.config
sys.modules["userbot.uniborConfig"] = userbot.config
sys.modules["ub"] = userbot
sys.modules["jarvis"] = userbot
sys.modules["support"] = userbot
sys.modules["userbot"] = userbot
sys.modules["teleAndencento"] = userbot
sys.modules["fridayAndencento"] = userbot
sys.modules["jarvis.utils"] = userbot.utils
sys.modules["uniborg.util"] = userbot.utils
sys.modules["teleAndencento.utils"] = userbot.utils
sys.modules["userbot.utils"] = userbot.utils
sys.modules["userbot.events"] = userbot.utils
sys.modules["jarvis.jconfig"] = userbot.config
sys.modules["userbot.config"] = userbot.config
sys.modules["fridayAndencento.utils"] = userbot.utils
sys.modules["fridayAndencento.Config"] = userbot.config
sys.modules["userbot.uniborgConfig"] = userbot.config
mod.edit_or_reply = edit_or_reply
mod.logger = logging.getLogger(shortname)
# support for uniborg
sys.modules["uniborg.util"] = userbot.utils
mod.Config = Config
mod.borg = Andencento
mod.edit_or_reply = edit_or_reply
mod.eor = edit_or_reply
# support for paperplaneextended
sys.modules["userbot.mainfiles.events"] = userbot.utils
spec.loader.exec_module(mod)
# for imports
sys.modules["Addons-Andencento." + shortname] = mod
LOGS.info("Addons-Andencento imported " + shortname)
def remove_plugin(shortname):
try:
try:
for i in LOAD_PLUG[shortname]:
Andencento.remove_event_handler(i)
del LOAD_PLUG[shortname]
except BaseException:
name = f"plugins.{shortname}"
for i in reversed(range(len(Andencento._event_builders))):
ev, cb = Andencento._event_builders[i]
if cb.__module__ == name:
del Andencento._event_builders[i]
except BaseException:
raise ValueError
|
Andencento
|
/Andencento-0.24.tar.gz/Andencento-0.24/userbot/utils/modules.py
|
modules.py
|
import asyncio
import datetime
import importlib
import inspect
import logging
import math
import os
import re
import sys
import time
import traceback
from pathlib import Path
from time import gmtime, strftime
from telethon import events
from telethon.tl.functions.channels import GetParticipantRequest
from telethon.tl.types import ChannelParticipantAdmin, ChannelParticipantCreator
from .. import *
from ..config import Config # Main Imports from here
# Admin checker by uniborg
async def is_admin(client, chat_id, user_id):
if not str(chat_id).startswith("-100"):
return False
try:
req_jo = await client(GetParticipantRequest(channel=chat_id, user_id=user_id))
chat_participant = req_jo.participant
if isinstance(
chat_participant, (ChannelParticipantCreator, ChannelParticipantAdmin)
):
return True
except Exception:
return False
else:
return False
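`is_admin` bails out early unless the chat id is a Telethon "marked" channel/supergroup id, which starts with `-100`. That check in isolation (`is_supergroup` is a hypothetical helper name):

```python
def is_supergroup(chat_id: int) -> bool:
    """Telethon's marked ids for channels/supergroups begin with -100;
    is_admin() returns False immediately for anything else."""
    return str(chat_id).startswith("-100")

print(is_supergroup(-1001220993104))  # True
print(is_supergroup(12345))           # False
```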
def register(**args):
args["func"] = lambda e: e.via_bot_id is None
stack = inspect.stack()
previous_stack_frame = stack[1]
file_test = Path(previous_stack_frame.filename)
file_test = file_test.stem.replace(".py", "")
pattern = args.get("pattern", None)
disable_edited = args.get("disable_edited", True)
allow_sudo = args.get("allow_sudo", False)
if pattern is not None and not pattern.startswith("(?i)"):
args["pattern"] = "(?i)" + pattern
if "disable_edited" in args:
del args["disable_edited"]
reg = re.compile("(.*)")
if pattern is not None:
try:
cmd = re.search(reg, pattern)
try:
cmd = cmd.group(1).replace("$", "").replace("\\", "").replace("^", "")
except BaseException:
pass
try:
CMD_LIST[file_test].append(cmd)
except BaseException:
CMD_LIST.update({file_test: [cmd]})
except BaseException:
pass
if allow_sudo:
args["from_users"] = list(Config.SUDO_USERS)
# Mutually exclusive with outgoing (can only set one of either).
args["incoming"] = True
del args["allow_sudo"]
# error handling condition check
elif "incoming" in args and not args["incoming"]:
args["outgoing"] = True
# add blacklist chats, UB should not respond in these chats
args["blacklist_chats"] = True
black_list_chats = list(Config.UB_BLACK_LIST_CHAT)
if len(black_list_chats) > 0:
args["chats"] = black_list_chats
def decorator(func):
if not disable_edited:
bot.add_event_handler(func, events.MessageEdited(**args))
bot.add_event_handler(func, events.NewMessage(**args))
try:
LOAD_PLUG[file_test].append(func)
except Exception:
LOAD_PLUG.update({file_test: [func]})
return func
return decorator
def command(**args):
args["func"] = lambda e: e.via_bot_id is None
stack = inspect.stack()
previous_stack_frame = stack[1]
file_test = Path(previous_stack_frame.filename)
file_test = file_test.stem.replace(".py", "")
pattern = args.get("pattern", None)
allow_sudo = args.get("allow_sudo", None)
allow_edited_updates = args.get("allow_edited_updates", False)
args["incoming"] = args.get("incoming", False)
args["outgoing"] = True
if bool(args["incoming"]):
args["outgoing"] = False
try:
if pattern is not None and not pattern.startswith("(?i)"):
args["pattern"] = "(?i)" + pattern
except BaseException:
pass
reg = re.compile("(.*)")
if pattern is not None:
try:
cmd = re.search(reg, pattern)
try:
cmd = cmd.group(1).replace("$", "").replace("\\", "").replace("^", "")
except BaseException:
pass
try:
CMD_LIST[file_test].append(cmd)
except BaseException:
CMD_LIST.update({file_test: [cmd]})
except BaseException:
pass
if allow_sudo:
args["from_users"] = list(Config.SUDO_USERS)
# Mutually exclusive with outgoing (can only set one of either).
args["incoming"] = True
del allow_sudo
try:
del args["allow_sudo"]
except BaseException:
pass
args["blacklist_chats"] = True
black_list_chats = list(Config.UB_BLACK_LIST_CHAT)
if len(black_list_chats) > 0:
args["chats"] = black_list_chats
if "allow_edited_updates" in args:
del args["allow_edited_updates"]
def decorator(func):
if allow_edited_updates:
Andencento.add_event_handler(func, events.MessageEdited(**args))
Andencento.add_event_handler(func, events.NewMessage(**args))
try:
LOAD_PLUG[file_test].append(func)
except BaseException:
LOAD_PLUG.update({file_test: [func]})
return func
return decorator
|
Andencento
|
/Andencento-0.24.tar.gz/Andencento-0.24/userbot/utils/main.py
|
main.py
|
import asyncio
import datetime
import sys
import traceback
from time import gmtime, strftime
# decorator that catches exceptions raised by plugin callbacks
def errors_handler(func):
async def wrapper(errors):
try:
await func(errors)
except BaseException:
date = strftime("%Y-%m-%d %H:%M:%S", gmtime())
new = {
'error': str(sys.exc_info()[1]),
'date': datetime.datetime.now()
}
text = "**Andencento CRASH REPORT**\n\n"
link = "[here](https://t.me/AndencentoSupport)"
text += "If you wanna you can report it"
text += f"- just forward this message {link}.\n"
text += "Nothing is logged except the fact of error and date\n"
ftext = "\nDisclaimer:\nThis file is uploaded ONLY here,"
ftext += "\nwe logged only fact of error and date,"
ftext += "\nwe respect your privacy,"
ftext += "\nyou may not report this error if you've"
ftext += "\nany confidential data here, no one will see your data\n\n"
ftext += "--------BEGIN EivaBOT TRACEBACK LOG--------"
ftext += "\nDate: " + date
ftext += "\nGroup ID: " + str(errors.chat_id)
ftext += "\nSender ID: " + str(errors.sender_id)
ftext += "\n\nEvent Trigger:\n"
ftext += str(errors.text)
ftext += "\n\nTraceback info:\n"
ftext += str(traceback.format_exc())
ftext += "\n\nError text:\n"
ftext += str(sys.exc_info()[1])
ftext += "\n\n--------END Andencento TRACEBACK LOG--------"
command = "git log --pretty=format:\"%an: %s\" -5"
ftext += "\n\n\nLast 5 commits:\n"
process = await asyncio.create_subprocess_shell(
command,
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE)
stdout, stderr = await process.communicate()
result = str(stdout.decode().strip()) \
+ str(stderr.decode().strip())
ftext += result
return wrapper
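The wrapping pattern above can be reduced to a runnable sketch: the decorator swallows whatever the plugin callback raises and turns it into a report string instead of crashing the userbot (`errors_handler_demo` is an illustrative, stdlib-only stand-in, not the source function):

```python
import asyncio
import traceback

def errors_handler_demo(func):
    """Stdlib-only sketch of errors_handler: wrap the async callback,
    catch any exception, and return a traceback report instead."""
    async def wrapper(event):
        try:
            return await func(event)
        except BaseException:
            return "Traceback info:\n" + traceback.format_exc()
    return wrapper

@errors_handler_demo
async def boom(event):
    raise RuntimeError("plugin failed")

report = asyncio.run(boom(None))
print("RuntimeError" in report)  # True
```

The real version additionally timestamps the report and shells out to `git log` for the last five commits before attaching them.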
|
Andencento
|
/Andencento-0.24.tar.gz/Andencento-0.24/userbot/utils/errors.py
|
errors.py
|
import webview
import sys
import json
STYLES = """
<style>
*{
padding: 0px;
margin: 0px;
}
h5{
color: #D71920;
padding: 1rem;
padding-left: 1rem;
Margin-left: 10rem;
font-family: Arial, Helvetica, sans-serif;
font-size: 1rem;
}
ul {
list-style-type: disc; /* bullet style: a filled circle */
margin: 0;
padding: 0;
}
li {
color:#D71920;
margin: 0 0 0 1em; /* left margin so the bullet stays visible */
font-family: Arial, Helvetica, sans-serif;
font-size: 15px;
}
span{
color: #757575;
font-size: 15px;
font-family: Arial, Helvetica, sans-serif;
}
.btn{
color: #D71920;
border: 1px solid #D71920;
border-radius: 20px;
background: #FFFFFF 0% 0% no-repeat padding-box;
box-shadow: 0px 2px 4px #00000029;
font-size: 1rem;
margin: 0.3rem;
padding: 0.5rem;
padding-left: 1rem;
padding-right: 1rem;
width: 7.5rem;
font-family: Arial, Helvetica, sans-serif;
}
.btn-red{
color: #FFFFFF ;
background-color:#D71920 ;
}
.btn:hover{
color: #B6040B;
border: 1px solid #B6040B;
}
.btn-red:hover{
color: #FFFFFF ;
background-color:#B6040B;
}
.btn:active{
/* UI Properties */
border: 1px solid #D71920 ;
box-shadow: 5px 5px 5px #0000000F;
border-radius: 25px;
opacity: 1;
}
.message{
margin-top: 0.5rem;
height: 3.5rem;
max-width: 25rem;
max-height: 3.5rem;
margin-bottom: 1.5rem;
overflow: hidden;
text-overflow: ellipsis;
}
.exception {
height: 1rem;
max-width: 25rem;
max-height: 1rem;
overflow: hidden;
text-overflow: ellipsis;
}
p{
font: normal;
font-family: Arial, Helvetica, sans-serif;
padding-bottom: 1rem;
padding-left: 1rem;
color: #616161;
text-align: left;
}
.container{
background-color: #FFFFFF;
display: inline-block;
width: 100%;
padding: 0;
}
.byflex{
display: flex;
}
.container-buttons{
display: inline-block;
float: right;
margin-right: 1rem;
}
.container-message{
float: right;
width: 70%;
padding-right: 1rem;
margin-top: 1rem;
}
.container-icon{
width: 20%;
color:#D71920 ;
display: inline-block;
margin: 1rem;
float: left;
}
/* new classes */
.container-titles{
margin-top: 1rem;
width: 49%;
display: inline-block;
}
.container-locator label {
font: normal;
font-family: Arial, Helvetica, sans-serif;
font-size: 1rem;
color: #616161;
margin-right: 2rem;
}
.container-titles p {
padding-bottom: 0.5rem;
}
.container-locator p {
padding-bottom: 0.5rem;
}
.entity {
display: inline-block;
font: normal;
font-family: Arial, Helvetica, sans-serif;
padding: 0rem 0rem 1rem 1rem;
color: #616161;
text-align: left;
max-width: 15rem;
max-height: 1rem;
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
}
.container-locator {
margin-top: 1rem;
width: 49%;
display: inline-block;
}
.container-options {
width: 100%;
display: inline-block;
}
.option{
display: block;
align-content: center;
margin-left: 1rem;
padding: 0px 0px 0.5rem 0px;
}
textarea {
padding: 1rem;
margin: 0.5rem 0rem 0rem 0.6rem;
width: 33rem;
height: 2rem;
border: 1px solid #9E9E9E;
border-radius: 9px;
opacity: 1;
resize: none;
overflow: hidden;
font-family: Arial, Helvetica, sans-serif;
font-size: 1rem;
}
textarea:focus {
border-color: #707070; /* change the textarea border color when focused */
box-shadow: 0 0 5px 0 #707070; /* add a soft shadow when focused */
outline: none;
opacity: 1;
}
.opt-radio {
background-color: #D71920;
border: 2px solid #D71920;
color: #D71920;
font-size: 16px;
padding: 10px;
}
</style>
"""
SCRIPTS = """
<script>
function showResponse(response) {
const container = document.getElementById(this.element)
container.innerText = response.message
}
function cancelHeavyStuff() {
pywebview.api.cancelHeavyStuff()
}
function retry() {
pywebview.api.retry()
}
function refactor() {
pywebview.api.refactor()
}
function report() {
pywebview.api.report()
}
function save_refactor() {
const expresion = document.getElementById('expresion');
if (expresion.value != "") {
if (document.getElementById('xpath').checked) {
pywebview.api.save_refactor(expresion.value, 'xpath')
}
if (document.getElementById('id').checked) {
pywebview.api.save_refactor(expresion.value, 'id')
}
if (document.getElementById('name').checked) {
pywebview.api.save_refactor(expresion.value, 'name')
}
}
expresion.focus()
}
function return_home() {
pywebview.api.return_home()
}
</script>
"""
class Debugger:
"""
Debugger receives, via its constructor, a dictionary shaped as follows:
metadata = {
"FRAMEWORK":"Selenium",
"ENTITY": "<input>_nombre_usuario asdasd asd asdasdasdas",
"EXCEPTION": "AlgunErrorException",
"MESSAGE" : "Paso esto, por favor arreglalo o reportalo.",
"LOCATOR TYPE": "xpath",
"VALUE TO FIND": "//div/div/input",
"JSON PATH": "C:\testing-automation\projects\Fisa\src\pages\Login.json",
"JSON STRING": {'<input>_Nombre_de_usuario': {'GetFieldBy': 'Xpath', 'ValueToFind': "//input[@id='inputEmail']"}, '<input>_Password': {'GetFieldBy': 'Xpath', 'ValueToFind': "//input[@id='inputPassword']"}},
"CASE NAME": "test_000_alta_de_usuario_fulano"
}
"""
def __init__(self, metadata) -> None:
Debugger.metadata = Debugger.normalizer_metadata(metadata)
Debugger.api = Api()
Debugger.api.set_code(2)
Debugger.pages = Debugger.build_pages()
Debugger.window = webview.create_window(f'Debugger 3.0 - {Debugger.metadata["FRAMEWORK"]}',
html=Debugger.pages["HOME"], js_api=Debugger.api,
height=300, width=600, resizable=False)
Debugger.api.set_window(Debugger.window)
Debugger.api.set_pages(Debugger.pages)
webview.start()
@classmethod
def normalizer_metadata(cls, metadata):
# TODO: add a validation condition
with open(metadata["JSON PATH"], "r", encoding='utf8') as read_file:
metadata["JSON STRING"] = json.loads(read_file.read())
read_file.close()
metadata["MESSAGE"] = str(metadata["MESSAGE"]).split("--")[-1].replace("<", "<").replace(">", ">")
metadata["ENTITY"] = (str(metadata["ENTITY"]).replace("<", "<")).replace(">", ">")
metadata["JSON"] = str(metadata["JSON PATH"]).split("\\")[-1]
return metadata
@classmethod
def build_pages(cls):
pages = {'HOME': Debugger.get_home(),
'REFACTOR': Debugger.get_form_refactor()
}
return pages
@classmethod
def get_html(cls, screen):
if screen.upper() == "HOME":
return Debugger.get_home()
if screen.upper() == "REFACTOR":
return Debugger.get_form_refactor()
@classmethod
def get_locator(cls, locator):
if locator.upper() == str(Debugger.metadata["LOCATOR TYPE"]).upper():
return "checked"
@classmethod
def get_home(cls):
home = f"""
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Document</title>
</head>
{STYLES}
<body>
<div class="container">
<h5>An unexpected error has occurred.</h5>
<div class="container-icon">
<svg viewBox="0 0 640 683" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M427.533 1.3333C423.933 2.79996 411.133 18.1333 408.6 24C405.667 31.0666 407 43.4666 411.4 50C413.4 52.9333 415 56.2666 415 57.3333C415 58.5333 412.2 62.6666 408.867 66.6666C401 75.8666 399.8 78.9333 402.2 84.5333C404.6 90.4 411 93.2 416.333 90.5333C420.867 88.2666 431.533 76 435.267 68.8C439.133 61.3333 438.2 48.9333 433.267 41.4666C429.533 35.8666 428.733 31.4666 431 30C431.8 29.4666 434.867 26.1333 437.667 22.4C445.133 13.0666 444.733 5.86663 436.733 1.59996C433.4 -0.266704 431.4 -0.266704 427.533 1.3333Z" fill="#D71920"/>
<path d="M468.733 8.80003C464.067 13.6 459.4 19.2 458.333 21.3334C453.4 30.8 454.6 42.8 461.133 51.6C462.867 53.8667 464.333 56.4 464.333 57.2C464.333 57.8667 461.4 62.1334 457.667 66.5334C449.933 75.8667 448.733 79.6 451.533 85.0667C453.8 89.6 455.933 90.6667 462.467 90.6667C466.333 90.6667 467.933 89.4667 475.4 80.6667C488.333 65.4667 490.2 53.7334 481.667 41.0667C479.533 37.8667 477.667 34.6667 477.667 34C477.667 33.2 480.733 29.2 484.333 24.9334C488.067 20.6667 491.4 15.6 491.8 13.8667C493 9.46669 490.2 3.33336 486.2 1.46669C479.933 -1.33331 477.267 -0.266641 468.733 8.80003Z" fill="#D71920"/>
<path d="M131.533 24.4C127.267 26.8 116.333 39.8667 113.533 46C109.133 55.3333 111 66.8 118.2 76.6667C120.6 79.8667 119.933 81.4667 111.4 92C103.533 101.6 103.667 108.8 112.067 113.2C117.533 116.133 122.467 113.733 130.333 104.533C140.067 92.9333 142.067 88.8 142.067 80C142.067 72.9333 138.333 62.8 135 60.6667C132.333 58.9333 133.8 55.0667 140.2 47.4667C143.933 43.0667 147.133 38.4 147.267 36.8C148.067 32 145.533 26.6667 141.533 24.6667C137 22.2667 135.267 22.2667 131.533 24.4Z" fill="#D71920"/>
<path d="M173.4 30.8C162.867 42.1333 159.933 48 159.933 57.6C159.933 63.8666 160.6 66.5333 164.067 72C166.467 75.7333 168.333 79.6 168.333 80.4C168.333 81.3333 165.667 85.0666 162.333 88.9333C153 99.4666 152.2 103.733 158.067 110.267C159.933 112.4 162.333 113.333 165.8 113.333C170.2 113.333 171.667 112.4 178.333 104.933C187.4 95.0666 191 88.1333 191 80C191 74 187.267 63.8666 184.467 62.1333C181.267 60.1333 183.133 54.4 189.667 46.5333C194.733 40.4 196.333 37.4666 196.333 34.1333C196.333 28.1333 192.733 24.1333 186.6 23.0666C181.667 22.2666 181.133 22.5333 173.4 30.8Z" fill="#D71920"/>
<path d="M528.733 48.5333C514.867 53.0666 503.133 64.7999 499 78.2666C494.467 92.9333 496.2 104.8 504.467 117.2C510.467 126.133 518.733 132.267 529.267 135.6C537.267 138.133 536.867 137.333 539 154.667C539.8 160.933 540.733 167.733 541.133 170L541.933 174L536.2 168.533C499.667 133.733 443.133 104.533 393.667 94.7999C369 89.9999 357.533 88.9333 331.667 88.9333C249.8 89.0666 176.467 117.867 119.8 172C103.4 187.733 89.2667 204.533 77.4 222.4C72.4667 230 68.3333 236 68.2 235.733C66.6 227.467 63.6667 200.533 64.3333 200.267C69 198.533 78.7333 190.4 82.4667 185.467C89.2667 176.133 91.4 168.267 90.7333 155.467C90.0667 142.267 85.8 133.467 75.9333 124.667C66.6 116.133 57 112.933 43.6667 113.6C22.8667 114.8 7.4 127.467 2.33334 147.733C-1.4 162.267 2.6 177.6 13 189.067C18.2 194.933 32.0667 202.667 37.1333 202.667C39.2667 202.667 40.4667 203.6 41 205.6C42.7333 213.867 49.8 271.6 49.2667 274C48.8667 275.467 45.9333 284.533 42.6 294C29.9333 330.667 25.2667 359.867 25.1333 402C25.1333 437.867 27.9333 460.667 36.3333 494.133C42.7333 519.467 49 536.267 59.8 557.333C111.133 657.2 216.467 698 369 677.2C437.533 667.867 494.333 648.933 536.333 621.333C595.133 582.667 628.067 529.333 637.8 457.333C640.2 439.733 639.8 396.667 637.133 375.333C629 311.867 605.8 252.4 572.333 209.333C568.867 205.067 568.333 202.267 564.467 172.667C562.067 155.067 559.933 139.333 559.667 137.733C559.133 135.467 560.2 134.133 564.867 131.467C573 126.667 581 117.067 584.467 108.133C588.2 98.1333 587.667 82.1333 583.267 73.1999C575.533 57.4666 561.267 47.9999 543.933 47.1999C538.2 46.9333 532.067 47.4666 528.733 48.5333ZM554.867 73.7333C567.667 82.2666 567.667 101.467 554.867 111.067C551 113.867 541.133 115.467 535.667 114.133C529.933 112.667 522.2 104.667 520.333 98.2666C514.733 78.1333 537.267 62.1333 554.867 73.7333ZM382.333 115.867C401.8 119.867 418.067 124.4 432.867 130.267C527 167.333 593.4 252.4 611.667 359.733C620.6 411.733 618.333 459.733 605 499.467C578.2 579.6 504.6 631.2 388.067 651.333C249 675.2 
148.733 648.4 95.4 572.933C57.8 519.733 40.0667 434.533 50.4667 358C61 281.2 100.467 212.667 159.667 168C200.067 137.6 242.333 121.067 301 112.8C303.267 112.533 319.133 112.4 336.333 112.667C362.333 112.933 370.2 113.467 382.333 115.867ZM58.7333 139.6C65.1333 144.267 68.3333 150.4 68.3333 157.867C68.3333 178.8 44.8667 188.8 30.2 174.133C19.6667 163.6 22.2 146.8 35.5333 138.4C41 135.067 53.4 135.6 58.7333 139.6Z" fill="#D71920"/>
<path d="M297.667 193.6C291.4 194.4 288.067 195.6 285.8 198C279.8 204 282.333 214.533 290.333 216.533C291.933 216.933 299.533 216.4 307.133 215.333C319.133 213.733 321.4 212.933 324.067 210C327.933 205.333 327.933 199.467 323.667 195.333C320.067 191.6 314.067 191.2 297.667 193.6Z" fill="#D71920"/>
<path d="M306.333 238.267C286.733 240.933 271.8 243.6 270.067 244.8C263.933 248.667 264.6 258.133 271.133 262.933C274.6 265.6 274.733 265.6 311 260.8C331 258.133 348.467 255.333 349.8 254.667C353.267 252.8 356.333 247.867 356.333 244.267C356.333 239.333 349.4 233.333 344.067 233.467C341.667 233.6 324.733 235.733 306.333 238.267Z" fill="#D71920"/>
<path d="M432.333 267.6C430.467 267.867 376.867 274.933 313 283.467C249.267 291.867 192.467 299.867 186.867 301.333C131 315.067 89 359.2 77 416.667C74.0667 430.933 74.0667 458.4 77 472.667C83.5333 504.267 97.8 529.867 121.133 552C132.867 563.2 141.667 569.333 156.333 576.667C176.867 586.933 191.4 590.667 215.667 591.6C233.933 592.267 236.867 591.867 355.4 576C440.6 564.667 479.933 558.8 488.067 556.533C528.2 545.067 563.533 514.667 581 476.933C614.467 404.533 584.2 318.267 512.6 282.133C490.467 270.933 453.267 264.133 432.333 267.6ZM477.533 293.333C520.467 303.733 554.333 336.667 567.8 380.8C571.133 391.6 571.4 394.667 571.533 414C571.533 431.867 571 437.067 568.6 446C565 459.2 556.733 476.533 549 487.333C540.2 499.6 522.467 515.733 509.533 523.067C487.8 535.333 486.467 535.6 353.4 553.333C274.2 563.867 229 569.333 221.133 569.333C162.867 569.2 112.067 527.2 100.067 469.333C96.6 452.933 97.4 427.2 101.667 412C113.667 369.467 146.333 336.4 187.933 324.667C196.067 322.4 235.267 316.667 315 306.133C378.467 297.733 432.2 290.533 434.333 290.133C442.333 288.933 466.6 290.667 477.533 293.333Z" fill="#D71920"/>
<path d="M502.733 338.8C501.4 339.467 494.6 347.733 487.533 357.067C480.333 366.4 473.8 374.8 472.867 375.867C471.533 377.333 468.067 375.2 452.733 363.6C442.6 355.867 433 349.2 431.4 348.8C427.267 347.467 421 351.067 419 356C416.333 362.4 419 366.267 432.6 376.667C439.4 381.867 447.8 388.267 451.267 391.067L457.667 396.133L443 415.467C427.267 436.133 426.067 439.333 431 445.733C434.2 449.733 439 451.067 443.533 449.467C445.4 448.667 452.6 440.4 461 429.333C468.733 419.067 475.533 410.667 476.067 410.667C476.6 410.667 484.867 416.8 494.333 424.133C503.933 431.6 513.133 438.133 514.867 438.8C523.667 442.267 532.467 432 528.067 423.467C527.133 421.6 518.067 413.733 507.933 406L489.533 391.733L493.267 386.933C495.267 384.133 501.933 375.467 507.933 367.467C516.467 356.4 519 352.133 519 348.8C518.867 340.4 509.667 334.667 502.733 338.8Z" fill="#D71920"/>
<path d="M221.667 376.4C219.8 377.333 212.067 386.133 204.467 396C196.867 405.867 190.2 414.133 189.533 414.4C189 414.667 180.2 408.533 170.067 400.8C154.6 388.933 150.733 386.667 146.867 386.667C140.6 386.667 136.333 391.067 136.333 397.467C136.333 403.333 137.4 404.533 158.067 420C167.267 426.933 174.867 433.067 175 433.6C175 434.133 169.133 442.267 161.933 451.6C147.4 470.533 145.667 473.2 145.667 477.333C145.667 480.933 152.6 488 156.2 488C161.8 488 165.4 484.667 179 466.667C186.733 456.4 193.4 448.133 193.8 448C194.333 448 202.067 453.733 211.133 460.8C233.667 478.267 236.733 479.467 244.067 472.933C247.267 470 247.933 464.267 245.667 460C244.867 458.533 236.333 451.333 226.6 444C217 436.533 208.867 430 208.6 429.467C208.467 428.933 214.6 420.133 222.333 410C230.067 399.867 236.733 390.267 237.133 388.667C238.2 384.133 234.333 377.867 229.533 376.133C227.133 375.333 225.267 374.667 225.133 374.667C225 374.667 223.533 375.467 221.667 376.4Z" fill="#D71920"/>
<path d="M330.733 477.333C311.8 482.8 296.2 492 284.067 505.067C276.6 513.067 275.667 518.133 281 523.333C286.733 529.2 292.2 528 302.333 518.667C313.133 508.933 322.067 503.733 334.867 500C352.333 494.667 374.6 497.867 391.133 508C398.867 512.667 403.933 513.067 408.867 509.2C413.8 505.333 413.8 497.2 408.733 492.533C403.4 487.6 387.667 480.4 375.8 477.333C362.067 473.867 343.133 473.867 330.733 477.333Z" fill="#D71920"/>
</svg>
</div>
<div class="container-message">
<p>Clave: <b class="exception">{Debugger.metadata["EXCEPTION"]}</b></p>
<p class="message">{Debugger.metadata["MESSAGE"]}</p>
</div>
<div class="container-buttons">
<button class="btn" onClick="retry()">Reintentar</button>
<button class="btn" onClick="refactor()">Refactorizar</button>
<button class="btn btn-red" onClick="report()">Reportar</button>
</div>
</div>
{SCRIPTS}
</body>
</html>
"""
return home
@classmethod
def get_form_refactor(cls):
form_refactor = f"""
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Document</title>
</head>
{STYLES}
<body>
<div class="container byflex">
<div class="container-titles">
<p>Repositorio de objetos: </p>
<p class="entity"><b>{Debugger.metadata["JSON"]}</b></p>
<p>Nombre del objeto:</p>
<p class="entity"><b>{Debugger.metadata["ENTITY"]}</b></p>
</div>
<div class="container-locator">
<p>Tipo de identificador</p>
<div class="container-options">
<div class="option">
<input class="opt-radio" type="radio" id="xpath" name="radio-group" {Debugger.get_locator("xpath")}>
<label for="xpath">xpath</label>
</div>
<div class="option">
<input class="opt-radio" type="radio" id="id" name="radio-group" {Debugger.get_locator("id")}>
<label for="id">id</label>
</div>
<div class="option">
<input class="opt-radio" type="radio" id="name" name="radio-group" {Debugger.get_locator("name")}>
<label for="name">name</label>
</div>
</div>
</div>
</div>
<div>
<textarea id="expresion" placeholder="Expresion" required>{Debugger.metadata["VALUE TO FIND"]}</textarea>
</div>
<div class="container-buttons">
<button class="btn" onClick="save_refactor()">Guardar</button>
<button class="btn" onClick="return_home()">Volver</button>
</div>
{SCRIPTS}
</body>
</html>
"""
return form_refactor
@classmethod
def __repr__(cls) -> str:
return f"{str(Debugger.api.get_code())}||{Debugger.metadata['VALUE TO FIND']}||{Debugger.metadata['LOCATOR TYPE']}"
class Api:
code = 2 # status 2: exit
def __init__(self):
Api.cancel_heavy_stuff_flag = False
Api._window = None
Api._pages = None
def set_window(self, window):
Api._window = window
def set_pages(self, pages):
Api._pages = pages
def retry(self):
Api.set_code(Api, 1) # status 1: retry
webview.windows[0].destroy()
sys.exit(1)
def refactor(self):
webview.windows[0].load_html(Api._pages["REFACTOR"])
def save_refactor(self, value_to_find, type_locator):
entity = Api.normalizer_json(Debugger.metadata["ENTITY"])
Debugger.metadata["JSON STRING"][entity]['GetFieldBy'] = type_locator
Debugger.metadata["JSON STRING"][entity]['ValueToFind'] = value_to_find
Debugger.metadata["LOCATOR TYPE"] = type_locator
Debugger.metadata["VALUE TO FIND"] = value_to_find
with open(Debugger.metadata["JSON PATH"], "w", encoding="utf8") as file:
json_strings = json.dumps(Debugger.metadata["JSON STRING"], indent=4, ensure_ascii=False)
file.write(json_strings)
Api.set_code(Api, 1)  # status 1: retry (re-run after refactor)
webview.windows[0].destroy()
sys.exit(1)
def return_home(self):
webview.windows[0].load_html(Api._pages["HOME"])
def report(self):
self.set_code(3)
print("Función no disponible en esta versión.")
def get_code(self):
return Api.code
def set_code(self, new_code):
Api.code = new_code
@classmethod
def normalizer_json(cls, entity):
entity = entity.replace("&lt;", "<").replace("&gt;", ">")
return entity
# package: Andreani-QA-Debugger | path: /Andreani_QA_Debugger-0.0.4.tar.gz/Andreani_QA_Debugger-0.0.4/Andreani_QA_Debugger/Debugger.py | file: Debugger.py
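The `Api` class above drives the debugger window through a single class-level status code (2 = exit by default, 1 = retry, 3 = report). A minimal, self-contained sketch of how a caller could consume that handshake; `DummyApi` and `handle_status` are hypothetical names for illustration, not part of the package:

```python
# Sketch of the Api status-code handshake: 1 = retry, 2 = exit, 3 = report.
RETRY, EXIT, REPORT = 1, 2, 3

class DummyApi:
    """Hypothetical stand-in for Debugger.Api: stores one class-level code."""
    code = EXIT  # mirrors `code = 2  # status 2: exit` in the source

    def set_code(self, new_code):
        DummyApi.code = new_code

    def get_code(self):
        return DummyApi.code

def handle_status(api):
    """Map the stored code to the action a test runner would take next."""
    actions = {RETRY: "re-run the failed step",
               EXIT: "abort the test",
               REPORT: "file a defect report"}
    return actions[api.get_code()]

api = DummyApi()
print(handle_status(api))  # default code is 2, so: abort the test
api.set_code(RETRY)        # what Api.retry() does before destroying the window
print(handle_status(api))
```

In the real class the code is set just before `webview.windows[0].destroy()` and `sys.exit(1)`, so the surviving process state is exactly this one integer.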
import pprint
import datetime
import platform
import unittest
import cx_Oracle
import pymsteams
import pyodbc
import time
import openpyxl
from openpyxl import Workbook
from openpyxl.utils import get_column_letter
import os
import smtplib
import base64
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.base import MIMEBase
from email import encoders
from shareplum import Site, Office365
from shareplum.site import Version
import allure
import random
import pymongo
from pymongo.errors import ServerSelectionTimeoutError
from pymongo.errors import ConnectionFailure
from Andreani_QA_parameters.Parameters import Parameters
from cryptography.fernet import Fernet
import xml.etree.ElementTree as Et
import bz2
PATH_FUNCTIONS = os.path.join(Parameters.current_path, 'functions')
PATH_ORIGIN = os.path.join(PATH_FUNCTIONS, './src/environment_access.txt')
PATH_ORIGIN_XML = os.path.join(PATH_FUNCTIONS, './src/environment_access.xml')
PATH_TARGET = os.path.join(PATH_FUNCTIONS, './src/environment_access_e.txt')
ENVIRONMENT_VAR = 'PYBOT_KEY'
RED = '\033[31m'
BLUE = '\033[34m'
YELLOW = '\033[33m'
GREEN = '\033[32m'
DEFAULT = '\033[39m'
class Functions(Parameters):
global_date = time.strftime(Parameters.date_format)  # yyyy/mm/dd format
global_time = time.strftime(Parameters.time_format)  # 24-hour format
project_name = None
class_name = None
case_name = None
test_case_name = None
file_name = None
teams = None
data_cache = {}
data_resource = None
path_downloads = None
path_evidences = None
path_files = None
path_images = None
path_outputs = None
path_steps = None
path_json = None
path_resources = None
path_map = None
path_jmeter_executor = None
path_config = None
resource_remoto = False
sharepoint_data_jmeter = False
usuario_pybot_email = None
email_pybot = None
password_sharepoint = None
sharepoint_url = None
sharepoint_site = None
sharepoint_doc = None
def set_proyect(self, project_name=None):
"""
Description:
Setea variables de ambiente y rutas del proyecto.
Args:
project_name: Nombre del Proyecto
Returns:
Imprime por consola la siguiente configuración:
-Ambiente.
-Ruta de Resource.
-Ruta de Evidencias.
-Ruta de los Json.
-Ruta de las Imagenes de los json (reconocimiento por imagenes).
-Ruta de los Bass.
Si hubo un error en la configuración, imprime por consola
"No se pudieron detectar los datos de la ejecución".
"""
print(f"Plataforma {platform.system()} detectada.")
if platform.system() == "Windows":
Functions.set_parameters_environment("Windows")
elif platform.system() == "Linux":
Functions.set_parameters_environment("Linux")
if project_name is None and os.getenv('PROYECT') is None:
if os.path.abspath(str(self)).split(' ')[0].split('\\')[
-4] == 'src':  # handles the case where there is a subfolder inside tests
Functions.project_name = os.path.abspath(str(self)).split(' ')[0].split('\\')[-5]
else:
Functions.project_name = os.path.abspath(str(self)).split(' ')[0].split('\\')[-4]
elif os.getenv('PROYECT') is not None:
Functions.project_name = os.getenv('PROYECT')
else:
Functions.project_name = project_name
Functions.test_case_name = self.id().split('.')[-1]
Functions.class_name = self.id().split('.')[-2]
Functions.file_name = self.id().split('.')[-3]
Functions.automatic_restore_row(self)
if not Parameters.manual_increment:
Functions.automatic_increment_row(self)
if Parameters.environment == "Windows":
base_path = os.path.join(Parameters.current_path)
Functions.path_downloads = os.path.join(base_path, 'src', 'downloads')
Functions.path_files = os.path.join(base_path, 'src', 'files')
Functions.path_images = os.path.join(base_path, 'src', 'images')
Functions.path_outputs = os.path.join(base_path, 'src', 'outputs')
Functions.path_json = os.path.join(base_path, 'src', 'pages')
Functions.path_resources = os.path.join(base_path, 'src', 'resources')
Functions.path_steps = os.path.join(base_path, 'src', 'steps')
Functions.path_config = os.path.join(base_path, 'config.yml')
Functions.path_jmeter_executor = Parameters.path_jmeter
if Parameters.environment == "Linux":
base_path = os.path.join(Parameters.current_path)
Functions.path_downloads = os.path.join(base_path, 'src', 'downloads')
Functions.path_files = os.path.join(base_path, 'src', 'files')
Functions.path_images = os.path.join(base_path, 'src', 'images')
Functions.path_outputs = os.path.join(base_path, 'src', 'outputs')
Functions.path_json = os.path.join(base_path, 'src', 'pages')
Functions.path_resources = os.path.join(base_path, 'src', 'resources')
Functions.path_steps = os.path.join(base_path, 'src', 'steps')
Functions.path_config = os.path.join(base_path, 'config.yml')
Functions.path_map = {"resources": Functions.path_resources,
"files": Functions.path_files,
"pages": Functions.path_json,
"images": Functions.path_images,
"downloads": Functions.path_downloads,
"outputs": Functions.path_outputs,
"steps": Functions.path_steps}
Functions.create_folders_framework(Functions.path_map)
if Functions.resource_remoto:
Functions.download_file(self)
if os.environ.get("rowXLSX"):
Functions.set_manual_increment(True)
Functions.set_excel_row(Functions.get_row_excel())
Parameters.row = os.environ.get("rowXLSX")
if os.environ.get("env"):
if str(os.environ['env']).lower() == "qa":
Parameters.env = "DataQa"
if str(os.environ['env']).lower() == "prod":
Parameters.env = "DataProd"
if str(os.environ['env']).lower() == "test":
Parameters.env = "DataTest"
if str(os.environ['env']).lower() == "alt":
Parameters.env = "DataAlt"
else:
Parameters.env = "DataTest"
Functions.full_read_excel(self, None, Parameters.env)
Functions.create_grid_by_sources(Functions.data_resource, "Datos del resource")
return Functions.path_map
@staticmethod
def create_grid_by_sources(resource: dict, message):
body = """
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>Mi página web</title>
<style>
h1{
color: #D71920;
padding: 1%;
font-family: Arial, Helvetica, sans-serif;
}
ul {
list-style-type: disc; /* bullet type, a filled disc in this case */
margin: 0;
padding: 0;
}
li {
color:#D71920;
margin: 0 0 0 1em; /* left margin so the bullet stays visible */
font-family: Arial, Helvetica, sans-serif;
font-size: 15px;
}
span{
color: #757575;
font-size: 15px;
font-family: Arial, Helvetica, sans-serif;
}
.container{
background-color: #FFFFFF;
margin: 1%;
padding: 1%;
border-radius: 10px;
box-shadow: 0px 3px 10px #00000029;
}
</style>
</head>
<body>
{list}
</body>
</html>
"""
if len(resource) != 0:
list_resources = ""
for item in resource.items():
resources_html = \
f"""<div class="container">
<ul>
<li><b>{item[0]}: </b><span>{item[1]}</span></li>
</ul>
</div>"""
list_resources += resources_html
body = body.replace("{list}", list_resources)
try:
allure.attach(body, message, attachment_type=allure.attachment_type.HTML)
except Exception as e:
Functions.exception_logger(e)
@staticmethod
def create_folders_framework(mapping: dict):
for key in mapping.keys():
if not os.path.exists(mapping[key]):
os.makedirs(mapping[key])
archivo_init = os.path.join(mapping[key], "__init__.py")
open(archivo_init, 'a').close()
@staticmethod
def get_env():
"""
Returns: Devuelve el valor de la variable de entorno 'env' en minúsculas'
"""
return str(os.environ['env']).lower()
@staticmethod
def set_env(env):
"""
Descripcion:
Configura una variable para la lectura de resources.
Args:
env: QA, TEST, PROD, ALT
Returns:
Funcion que configura la variable de ambiente para la lectura del resources
"""
os.environ['env'] = env
@staticmethod
def set_parameters_environment(system):
"""
Description:
Configura las opciones del framework en funcion del SO.
Args:
system: Sistema operativo anfitrion
"""
Parameters.environment = system
def set_retry(self, numbers_retries):
"""
Description:
Configura la cantidad de reintentos que se realizan al buscar algun objeto o realizar alguna espera.
Args:
numbers_retries: Cantidad de veces que se quiere reintentar.
"""
Parameters.number_retries = numbers_retries
print(f"La cantidad de reintentos configurada es {Parameters.number_retries}.")
@staticmethod
def set_remote_resource(value):
"""
Description:
Activar el modo para leer resources remotamente desde SharePoint de Pybot Team & qa.
Args:
value (bool): activar o desactivar el modo de resource remoto.
"""
Functions.resource_remoto = value
def auth(self):
"""
Description:
Autenticarse con un usuario de SharePoint y obtener una instancia para interactuar con el content
del mismo.
Returns:
site (Site): Instancia del SharePoint.
"""
Functions.password_sharepoint = Functions.Encryptor("CLAVES", "id", "Sharepoint Andreani", "PASS").main()
Functions.sharepoint_url = Functions.Encryptor('CLAVES', 'id', 'Sharepoint Andreani', 'URL').main()
Functions.email_pybot = Functions.Encryptor("CLAVES", "id", "Email de Pybot", "USER").main()
Functions.sharepoint_site = Functions.Encryptor('CLAVES', 'id', 'Sharepoint Andreani', 'SITE').main()
authcookie = Office365(Functions.sharepoint_url,
username=Functions.email_pybot,
password=Functions.password_sharepoint).GetCookies()
site = Site(Functions.sharepoint_site, version=Version.v365, authcookie=authcookie)
return site
def connect_folder(self, folder_name):
"""
Description:
Obtener la instancia de una carpeta del SharePoint para interactuar con su content.
Args:
folder_name (str): nombre de la carpeta en SharePoint Ej. Ebuyplace/src/outputs.
Returns:
folder (Folder): instancia de la carpeta para interactuar con su content.
"""
auth_site = Functions.auth()
Functions.sharepoint_doc = Functions.Encryptor('CLAVES', 'id', 'Sharepoint Andreani', 'LOCATION').main()
sharepoint_dir = '\\'.join([Functions.sharepoint_doc, folder_name])
folder = auth_site.Folder(sharepoint_dir)
return folder
def upload_file(self, file, file_name, folder_name):
"""
Description:
Subir un archivo local en SharePoint de Pybot Team & QA.
Args:
file (str): Dirección local del archivo.
file_name (str): Nombre del archivo Ej. factura.pdf
folder_name (strt): Dirección donde se subirá el archivo en SharePoint Ej. Ebuyplace/src/outputs
"""
_folder = Functions.connect_folder(folder_name)
with open(file, mode='rb') as file_obj:
file_content = file_obj.read()
_folder.upload_file(file_content, file_name)
def download_file(self):
"""
Description:
Lee el content de el resource correspondiente para el caso de prueba que va a ejecutarse y lo
descarga. El nombre del resource debe ser el mismo que el de caso de prueba, y se debe respetar
la estructura de carpetas, por ejemplo:
Para un conjunto de pruebas llamado A.py debe existir una ruta en SharPoint
/Ebuyplace/src/resources/A.xlxs
que es de donde se leerá los datos a utilizarse en las pruebas.
"""
# set the folder name
folder_name = Functions.path_resources.split("\\")[-3] + "/" + Functions.path_resources.split("\\")[-2] + "/" + \
Functions.path_resources.split("\\")[-1] + "/"
# set file name
file_name = f"{Functions.file_name}.xlsx"
# set download path
download_path = f"{Functions.path_resources}\\{Functions.file_name}.xlsx"
_folder = Functions.connect_folder(folder_name)
file = _folder.get_file(file_name)
# save file
with open(download_path, 'wb') as f:
f.write(file)
@staticmethod
def attach_data_test_in_allure_step(msg, data):
"""
Description:
Crea una tabla HTML con los datos (diccionario) pasados como parámetro y
lo adjunta a un step allure.
Args:
msg (str): Mensaje que se quiere dejar en el step allure.
data: Diccionario que se quiere mostrar en forma de tabla html.
"""
table_html = "<!DOCTYPE html><html><head><style>table " \
"{font-family: arial, sans-serif;border-collapse: collapse;width: 100%;}" \
"td, th {border: 1px solid #dddddd;text-align: left;padding: 8px;}tr:nth-child(1) " \
"{background-color: #009efb;}</style></head><body><table><tr>"
if len(data.keys()) > 0:
for key in data.keys():
table_html += f"<th>{key}</th>"
table_html += f"</tr><tr>"
for value in data.values():
table_html += f"<td>{value}</td>"
table_html += "</tr></table></body></html>"
allure.attach(table_html, msg, allure.attachment_type.HTML)
def automatic_restore_row(self):
"""
Description:
Restaura la variable Parameters.row a 2 cuando se pasa de un archivo a.py a b.py.
"""
file_name = str(self)[str(self).find("(") + 1:str(self).find(")")]
if Functions.get_file_name_stored() is None:
Functions.set_file_name_stored(file_name)
else:
if file_name != Functions.get_file_name_stored():
Functions.set_restore_excel_row()
Functions.set_file_name_stored(file_name)
def automatic_increment_row(self):
"""
Description:
Incrementa Parameters.row de a cuerdo al test_id que se está ejecutando.
"""
# TODO: review this logic
if not Parameters.manual_increment:
id_case = str(Functions.test_case_name.split('_')[1])
if id_case.isdigit():
Parameters.row = int(id_case) + 2
else:
unittest.TestCase.skipTest(self, f"No es posible realizar la ejecución del caso"
f" '{Functions.test_case_name}' ya que el nombre del mismo no respeta"
f" la siguiente convención de nombres 'test_xxx_descripcion'"
f" siendo x un numero entero.")
@staticmethod
def get_file_name_stored():
"""
Description:
Obtiene el parámetro file_name_stored de la configuracion.
Returns:
Devuelve el parámetro el valor del parámetro file_named_stored.
"""
return Parameters.file_name_stored
@staticmethod
def set_file_name_stored(file):
"""
Description:
Setea el valor del parámetro file_name_stored de la configuración.
Args:
file: El nuevo nombre del archivo.
"""
Parameters.file_name_stored = file
@staticmethod
def get_path_system():
"""
Description:
Obtiene el directorio base del sistema
Returns:
Devuelve el directorio base del proyecto
"""
return Parameters.current_path
@staticmethod
def get_row_excel():
"""
Description:
Obtiene la row actual del excel.
Returns:
Imprime por consola "El numero del registro consultado es: "+ str(row)"
y retorna la row.
"""
print(f"El numero del registro consultado es: {Parameters.row} .")
return Parameters.row
@staticmethod
def set_excel_row(value: int):
Parameters.row = value
@staticmethod
def set_manual_increment(value: bool):
Parameters.manual_increment = value
@staticmethod
def set_increment_row():
"""
Description:
Incrementa en 1 el número de registro que será consultado en el resource.
"""
Parameters.row += 1
Parameters.manual_increment = True
print(f"El numero del registro fue aumentado a: {Parameters.row}")
@staticmethod
def set_restore_excel_row():
"""
Description:
Restaura al value original en "2" el número de registro que será consultado en el resource.
"""
Parameters.row = 2
print(f"El numero del registro fue restaruado a: {Parameters.row}")
def read_cell(self, celda, file_name=None, specific_sheet=None):
"""
Description:
Lee la cell de un resource.
Args:
celda (obj): Celda del resource.
file_name (str): Nombre del caso.
specific_sheet (str): Hoja del resource.
Returns:
Retorna el value de la cell del resource.
"""
if file_name is None:
print("El nombre del caso es : " + Functions.file_name)
file_name = self.file_name
resource = f"{Functions.path_resources}\\{file_name}.xlsx"
if not os.path.isfile(resource):
resource = f"{Functions.path_resources}{file_name}.xlsx"
if not os.path.isfile(resource):
raise Exception('El resource no existe')
wb = openpyxl.load_workbook(resource, data_only=True)
if specific_sheet is None:
sheet = wb["DataTest"]
else:
sheet = wb[specific_sheet]
if sheet[celda].value is None:
value = ""
else:
value = str(sheet[celda].value)
print(f"El libro de excel utilizado es de es: {resource}")
print(f"El value de la cell es: {value}")
return value
def write_cell(self, cell, value, name, folder, sheet=None):
"""
Description:
Permite escribir en una celda indicada de una hoja especifica para un
libro de excel en directorio ./inputs/.
Args:
cell (obj): Celda de la hoja, se espera COLUMNA+FILA.
value (str): Valor a ingresar en la celda.
name (str): Nombre del libro de excel, en el directorio ./inputs/.
sheet (str): Hoja especifica del libro excel.
folder (str): Nombre de la carpeta que contiene el libro excel. Es 'files' por default o puede ser
'downloads'.
Returns:
Imprime por consola la celda, hoja y valor escrito, y devuelve TRUE
en caso contrario imprime por consola "VERIFICAR: No se pudo escribir el archivo."
y devuelve FALSE.
"""
resource = ''
try:
if folder == 'files':
resource = f"{Functions.path_files}\\{name}.xlsx"
elif folder == 'downloads':
resource = f"{Functions.path_downloads}\\{name}.xlsx"
print(resource)
wb = openpyxl.load_workbook(resource)
if sheet is None:
hoja = wb["DataTest"]
else:
hoja = wb[sheet]
hoja[cell] = value
print(value)
print(sheet)
print(cell)
wb.save(filename=resource)
wb.close()
flag = True
print(f"El libro de excel utilizado es: {resource}")
if not (sheet is None):
print(f"Se escribio en la celda {str(cell)} de la hoja {str(sheet)} el valor: {str(value)}")
else:
print(f"Se escribio en la celda {str(cell)} el valor: {str(value)}")
print(flag)
return flag
except Exception as e:
flag = False
Functions.exception_logger(e)
print("VERIFICAR: No se pudo escribir el archivo.")
return flag
@staticmethod
def wait(time_load, logger=Parameters.loggin_time, reason=None):
"""
Description:
Espera un elemento, el tiempo es dado en segundos.
Args:
time_load (int): Tiempo en segundos.
logger: Indica si se requieren logear los mensajes de espera.
reason: Razón por la que se quiere esperar un elemento.
Returns:
Cuando termina el tiempo de espera imprime "Esperar: Carga Finalizada ... ".
"""
actual_hour = datetime.datetime.now().time().strftime(Parameters.time_format)
if logger:
print(f"{Functions.color_message('YELLOW', 'AGUARDANDO:')} Inicia espera de '{str(time_load)}' "
f"segundo/s a las {actual_hour}.")
if reason is not None:
print(reason)
try:
total_wait = 0
while total_wait < time_load:
time.sleep(1)
total_wait = total_wait + 1
finally:
actual_hour = datetime.datetime.now().time().strftime(Parameters.time_format)
if logger:
print(f"{Functions.color_message('YELLOW', 'AGUARDANDO:')} Finaliza espera a las {actual_hour}")
# DATABASE FUNCTIONS ##############################################################################################
def set_timeout_base_sql_server(self, time_seconds):
"""
Description:
Configura el value de timeout (segundos) configurado para las conexiones a bases sqlServer.
Args:
time_seconds: Valor (int) que representa una cantidad en segundos.
"""
Parameters.timeout_base_sql_server = time_seconds
time_timeout = Parameters.timeout_base_sql_server
print(f"El nuevo value de timeout para la conexion de la base sql es de {time_timeout} segundos.")
def get_timeout_base_sql_server(self):
"""
Description:
Devuelve el value de timeout configurado para la conexión a bases sqlServer.
Returns:
Devuelve el value de timeout (segundos) configurado para la conexion a bases sqlServer.
"""
time_timeout = Parameters.timeout_base_sql_server
print(f"El value de timeout para la conexion de la base sql es de {time_timeout} segundos.")
return time_timeout
def establish_connection_sqlserver(self, db_name):
"""
Description:
Realiza conexión a una base de datos sqlServer.
Args:
server: Servidor ip.
base: Nombre de la base.
user: Usuario.
password: Contraseña.
Returns:
Devuelve una variable con la conexion a la base de datos sqlServer.
"""
driver = None
conection = None
################################# GET DATA FROM XML FILE FOR DATABASE CONNECTION ###############################
server = Functions.Encryptor("CLAVES", "id", db_name, 'IP', False).main()
db_port = Functions.Encryptor("CLAVES", "id", db_name, 'PORT', False).main()
base = Functions.Encryptor("CLAVES", "id", db_name, 'BASE', False).main()
user = Functions.Encryptor("CLAVES", "id", db_name, 'USER', False).main()
password = Functions.Encryptor("CLAVES", "id", db_name, 'PASS', False).main()
################################# GET DATA FROM XML FILE FOR DATABASE CONNECTION ###############################
if Parameters.environment == "Linux":
driver = "/usr/lib/libtdsodbc.so"
if Parameters.environment == "Windows":
driver = "{SQL Server}"
try:
conection = pyodbc.connect(f"Driver={driver};Server={server};PORT={db_port};Database={base};UID={user};"
f"PWD={password}")
except pyodbc.OperationalError:
unittest.TestCase().fail("El servidor no existe o el acceso al mismo fue denegado.")
finally:
if conection is not None:
conection.timeout = Parameters.timeout_base_sql_server
return conection
def check_base_sqlserver(self, db_name, query):
"""
Description:
Realiza conexión y consulta a base de datos con la libreria pyodbc. El método incluye la
desconexión.
Args:
server: Servidor ip.
base: Nombre de la base.
user: usuario.
password: Contraseña.
query: consulta Query.
Returns:
<class 'pyodbc.Row'>: Retorna un class 'pyodbc.Row' si la consulta y la conexión es exitosa. De lo
contrario imprime por consola "Se produjo un error en la base de datos."
"""
cursor = None
recordset = []
conn = Functions.establish_connection_sqlserver(self, db_name)
try:
cursor = conn.cursor()
cursor.execute(query)
for row in cursor:
recordset = row
except Exception as e:
Functions.exception_logger(e)
print(f"Se produjo un error en la base de datos.")
finally:
cursor.close()
conn.close()
return recordset
def execute_sp_base_sqlserver(self, db_name, query, parameters: tuple):
"""
Description:
Realiza conexión y consulta a base de datos con la libreria pyodbc. El método incluye la
desconexión.
Args:
server (str): Servidor ip.
base (str): Nombre de la base.
user (str): usuario.
password (str): Contraseña.
query (str): consulta Query.
parameters (tuple): tupla con parametros para el sp.
Returns:
<class 'pyodbc.Row'>: Retorna un class 'pyodbc.Row' si la consulta y la conexión es exitosa. De lo
contrario imprime por consola "Se produjo un error en la base de datos."
"""
recordset = []
cursor = None
connection = Functions.establish_connection_sqlserver(self, db_name)
try:
cursor = connection.cursor()
cursor.execute(query, parameters)
for row in cursor:
recordset.append(row)
except Exception as e:
Functions.exception_logger(e)
print("Se produjo un error en la base de datos.")
finally:
cursor.close()
connection.close()
return recordset
def get_list_base_sqlserver(self, db_name, query):
"""
Description:
Realiza conexión y consulta a base de datos con la libreria pyodbc. El método incluye la
desconexión.
Args:
server (str): Servidor ip.
base (str): Nombre de la base.
user (str): usuario.
password (str): Contraseña.
query (str): consulta Query.
Returns:
results: Lista con los resultados.
"""
recordset = []
cursor = None
connection = Functions.establish_connection_sqlserver(self, db_name)
try:
cursor = connection.cursor()
cursor.execute(query)
for row in cursor:
recordset.append(row)
except Exception as e:
Functions.exception_logger(e)
print("A ocurrido un error en la base de datos.")
finally:
cursor.close()
connection.close()
return recordset
def get_recordset_sqlserver(self, db_name, query):
"""
Description:
Realiza conexión y consulta a base de datos con la libreria pyodbc. El método incluye la
desconexión.
Args:
server (str): Servidor ip.
base (str): Nombre de la base.
user (str): usuario.
password (str): Contraseña.
query (str): consulta Query.
Returns:
results: Lista con diccionarios que referencian las valores con sus correspondientes columnas.
"""
recordset = []
cursor = None
connection = Functions.establish_connection_sqlserver(self, db_name)
try:
cursor = connection.cursor()
cursor.execute(query)
records = cursor.fetchall()
column_names = [column[0] for column in cursor.description]
for record in records:
recordset.append(dict(zip(column_names, record)))
except Exception as e:
Functions.exception_logger(e)
print(f"A ocurrido un error en la base de datos. {e}")
finally:
cursor.close()
connection.close()
return recordset
def delete_reg_base_sqlserver(self, db_name, query):
"""
Description:
Elimina un registro de la base de datos. El método incluye la desconexión.
Args:
server (str): Servidor ip.
base (str): Nombre de la base.
user (str): usuario.
password (str): Contraseña.
query (str): consulta Query.
"""
cursor = None
conn = Functions.establish_connection_sqlserver(self, db_name)
try:
cursor = conn.cursor()
cursor.execute(query)
conn.commit()
print("Borrado de registro en base de datos realizado exitosamente.")
except Exception as e:
Functions.exception_logger(e)
            print("Ocurrió un error en la base al intentar eliminar un registro.")
finally:
cursor.close()
conn.close()
    def insert_row_base_sqlserver(self, db_name, query):
        """
        Description:
            Inserta un nuevo registro en la base de datos. El método incluye la desconexión.
        Args:
            db_name (str): Nombre de la base de datos a la que se establece la conexión.
            query (str): Consulta SQL a ejecutar.
        """
cursor = None
conn = Functions.establish_connection_sqlserver(self, db_name)
try:
cursor = conn.cursor()
cursor.execute(query)
conn.commit()
print("Insertado de registro en base de datos realizado exitosamente.")
except Exception as e:
Functions.exception_logger(e)
            print("Ocurrió un error en la base al intentar insertar un registro.")
print(f"QUERY ERROR: {query}")
finally:
cursor.close()
conn.close()
## ORACLE ###
def establish_connection_oracle_db(self, server, base, user, password):
"""
Description:
Realiza conexión a una base de datos Oracle.
Args:
server: nombre desde archivo encriptado.
base: Nombre de la base. IP:PUERTO/base
user: Usuario.
password: Contraseña.
Returns:
Devuelve una variable con la conexion a la base de datos Oracle.
"""
        if password is None:
            password = Functions.use_xml_connect_to_db(server, user)
connection = cx_Oracle.connect(user, password, base, encoding="UTF-8")
return connection
def get_recordset_oracle_db(self, server, base, user, password, query):
"""
Description:
Realiza conexión y consulta a base de datos con la libreria cx_Oracle. El método incluye la
desconexión.
Args:
server (str): Servidor ip.
base (str): Nombre de la base.
user (str): usuario.
password (str): Contraseña.
query (str): consulta Query.
Returns:
            results: Lista con diccionarios que referencian los valores con sus correspondientes columnas.
"""
recordset = []
cursor = None
connection = Functions.establish_connection_oracle_db(self, server, base, user, password)
try:
cursor = connection.cursor()
cursor.execute(query)
records = cursor.fetchall()
column_names = [column[0] for column in cursor.description]
for record in records:
recordset.append(dict(zip(column_names, record)))
except Exception as e:
Functions.exception_logger(e)
            print(f"La interacción con la DB de Oracle arrojó el siguiente error: {e}")
finally:
cursor.close()
connection.close()
return recordset
    def check_base_oracle_db(self, server, base, user, password, query):
        """
        Description:
            Realiza conexión y consulta a base de datos con la libreria cx_Oracle. El método incluye la
            desconexión.
        Args:
            server: Servidor ip.
            base: Nombre de la base.
            user: usuario.
            password: Contraseña.
            query: consulta Query.
        Returns:
            Retorna la última fila obtenida por la consulta si la conexión es exitosa. De lo
            contrario imprime por consola el error devuelto por la base de datos.
        """
cursor = None
recordset = []
conn = Functions.establish_connection_oracle_db(self, server, base, user, password)
try:
cursor = conn.cursor()
cursor.execute(query)
for row in cursor:
recordset = row
except Exception as e:
Functions.exception_logger(e)
            print(f"La interacción con la DB de Oracle arrojó el siguiente error: {e}")
finally:
cursor.close()
conn.close()
return recordset
def get_oracle_db_headers(self, server, base, user, password, query):
"""
Description:
Realiza conexión y consulta a base de datos con la libreria cx_Oracle. El método incluye la
desconexión.
Args:
server (str): Servidor ip.
base (str): Nombre de la base.
user (str): usuario.
password (str): Contraseña.
query (str): consulta Query.
Returns:
results: Lista con los nombres de la cabecera correspondiente a la consulta.
"""
cursor = None
connection = Functions.establish_connection_oracle_db(self, server, base, user, password)
column_names = None
try:
cursor = connection.cursor()
cursor.execute(query)
column_names = [column[0] for column in cursor.description]
except Exception as e:
Functions.exception_logger(e)
            print(f"La interacción con la DB de Oracle arrojó el siguiente error: {e}")
finally:
cursor.close()
connection.close()
return column_names
# FUNCIONES INFORMES ###############################################################################################
@staticmethod
def create_message_html(message_text: str, special_strings=None):
"""
Description:
Crea un párrafo en formato html.
Args:
message_text: mensaje en formato string.
special_strings: Lista de palabras que deben ser resaltadas en negrita dentro del mensaje.
Returns:
Devuelve el párrafo en formato html.
"""
message_html = f'<p>{message_text}</p>'
if special_strings is not None:
for string in special_strings:
message_html = message_html.replace(string, f"<strong>{string}</strong>")
return message_html
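    # Ejemplo ilustrativo de la salida de create_message_html (valores hipotéticos):
    #   Functions.create_message_html("Proceso finalizado OK", ["OK"])
    #   # -> '<p>Proceso finalizado <strong>OK</strong></p>'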
@staticmethod
def create_message_teams(message_text: str, special_strings=None):
"""
Description:
Crea un párrafo en formato de notificacion teams.
Args:
message_text: mensaje en formato string.
special_strings: Lista de palabras que deben ser resaltadas en negrita dentro del mensaje.
Returns:
Devuelve el párrafo en formato html.
"""
        style = 'font-size: 14px; font-family: "Helvetica";'
        message = f'<p style="{style}">{message_text}</p>'
if special_strings is not None:
for string in special_strings:
message = message.replace(string, f"<strong>{string}</strong>")
return message
@staticmethod
def send_message_teams(channel, title, message=None, sections=None):
"""
Description:
Realiza el envio de notificaciones via microsoft teams.
Args:
channel: Canal objetivo de la notificacion. Debe ser configurado el webhook en teams.
title: Titulo de la notificacion.
message: Mensaje de la notificacion.
sections: lista de secciones con contenido para la generación de la notificación.
"""
message_teams = pymsteams.connectorcard(channel)
message_teams.color(Parameters.teams_notifications_colors)
message_teams.title(title)
if message is None:
message_teams.text("<p></p>")
else:
message_teams.text(message)
if sections is not None:
if type(sections) is list:
for section in sections:
message_teams.addSection(section)
else:
message_teams.addSection(sections)
message_teams.addSection(Functions.create_section_teams(Functions.footer_signature_teams()))
message_teams.send()
@staticmethod
def create_section_teams(message, title=None, content=None):
"""
Description:
Crea secciones para notificaciones de teams.
Args:
message: Mensaje de la seccion.
title: Titulo de la seccion.
content: Contenido de la seccion.
Returns:
Seccion de microsoft teams.
"""
my_section = pymsteams.cardsection()
if title is not None:
my_section.title(f"<h3>{title}</h3>")
my_section.text(message)
if content is not None:
my_section.text(content)
return my_section
@staticmethod
def add_button_teams():
pass
@staticmethod
def footer_signature_teams():
"""
Descripcion:
Agrega una firma a las notificaciones generadas en teams.
Returns:
string con la firma en teams
"""
signature = f'''<footer>
<p style="color: {Parameters.teams_focus_test_colors}"><strong>Equipo Pybot</strong></p>
<p><strong>Joel Pino</strong> || [email protected]</p>
<p><strong>Federico Blanco</strong> || [email protected]</p>
<p><strong>Lucas Cariboni</strong> || [email protected]</p>
</footer>'''
return signature
@staticmethod
def create_title(title_text: str, format_table="HTML"):
"""
Description:
Crea un título en formato html.
Args:
title_text: título en formato value_text.
format_table: el formato de la tabla que se desea crear. formatos admitidos HTML y MSTEAMS.
Returns:
Devuelve título en formato html.
"""
        return f'<h5>{title_text}</h5>'
@staticmethod
def create_table(list_data_head: list, list_data_content: list, format_table="HTML"):
"""
Description:
Crea una tabla html.
Args:
list_data_head: lista con los encabezados de la tabla.
list_data_content: Matriz (lista con lista) con los datos de la tabla.
format_table: el formato de la tabla que se desea crear. formatos admitidos HTML y MSTEAMS.
Returns:
Devuelve una tabla en formato html.
"""
table_head_html = ""
table_content_html = ""
table_colums_html = ""
table_html = ""
if format_table == "HTML":
for row in list_data_head:
table_head_html = f'{table_head_html}<th>{row}</th>'
table_head_html = f"<tr>{table_head_html}</tr>"
for rows in list_data_content:
for col in rows:
table_colums_html = f"{table_colums_html}<td>{col}</td>"
table_content_html = f"{table_content_html}<tr>{table_colums_html}</tr>"
table_colums_html = ""
table_html = f"{table_head_html}{table_content_html}"
elif format_table == "MSTEAMS":
for row in list_data_head:
table_head_html = f'{table_head_html}<th style="border: 1px solid gray; padding: 1rem 2rem 1rem 2rem; background-color: {Parameters.teams_notifications_colors}; text-align: center; color: white;">{row}</th>'
table_head_html = f'<tr>{table_head_html}</tr>'
for rows in list_data_content:
for col in rows:
table_colums_html = f'{table_colums_html}<td style="border: 1px solid gray; padding: 1rem 2rem 1rem 2rem; background-color: white; text-align: center; color: {Parameters.teams_notifications_colors};">{col}</td>'
table_content_html = f'{table_content_html}<tr>{table_colums_html}</tr>'
table_colums_html = ""
table_html = f"{table_head_html}{table_content_html}"
if format_table == "HTML":
return f"<table>{table_html}</table>"
elif format_table == "MSTEAMS":
            return f'<pre style="margin: 0px; padding: 0px;"><table style="border-spacing: 0px; text-align: center; margin: 0px; width: 90%; height: 100%;">{table_html}</table></pre>'
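    # Ejemplo ilustrativo de create_table en formato HTML (valores hipotéticos):
    #   Functions.create_table(["Caso", "Estado"], [["TC01", "OK"]])
    #   # -> '<table><tr><th>Caso</th><th>Estado</th></tr><tr><td>TC01</td><td>OK</td></tr></table>'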
@staticmethod
def create_style_html():
"""
Description:
Devuelve el código css con los estilos que deben aplicarse a un bloque HTML.
Returns:
Devuelve el estilo para aplicar al código html.
"""
style = '''<style>
* {
font-family: "Calibri", "Helvetica", "Arial", "Trebuchet MS", "sans-serif";
padding: none;
margin: none;
outline: none;
font-size: 14px;
margin-bottom: 2rem;
}
h5 {
font-size: 20px;
}
p {
padding-left: 1rem;
font-size: 14px;
}
strong {
font-size: 14px;
font-style: inherit;
color: #616161;
}
td,
th {
text-align: center;
border: 1px solid gray !important;
padding: 1rem 2rem 1rem 2rem;
margin: 1rem;
}
tr:nth-child(even) {
background-color: #f2f2f2;
padding-bottom: 1em;
}
tr:hover {
background-color: #d9534f;
}
th {
padding: 1rem 2rem 1rem 2rem;
margin: 1rem;
text-align: center;
background-color: #d9534f;
color: white;
}
table {
padding-left: 1rem;
}
img {
width: 10rem;
}
.team {
font-size: 16px;
font-style: inherit;
color: #ff644b;
}
.member {
margin: 0px !important;
padding: 0px !important;
}
</style>'''
return style
@staticmethod
def footer_signature_html():
signature = f'''<footer>
<p class="member"><strong class="team">Equipo Pybot</strong></p>
<p class="member">Joel Pino || [email protected]</p>
<p class="member">Federico Blanco || [email protected]</p>
<p class="member">Lucas Cariboni || [email protected]</p>
</footer>'''
return signature
@staticmethod
def create_image(image):
"""
Description:
Crea una imágen en formato html.
Args:
image: imágen a adjuntar.
Returns:
Devuelve imágen en formato html.
"""
data = open(image, 'rb').read() # read bytes from file
data_base64 = base64.b64encode(data) # encode to base64 (bytes)
data_base64 = data_base64.decode() # convert bytes to value_text
return f'<img src="data:image/jpeg;base64,{data_base64}">'
@staticmethod
def apply_style_css_to_block(block_html: str):
"""
Description:
Aplica estilos css a un bloque html.
Args:
block_html: bloque html que recibirá los estilos css.
Returns:
Devuelve un bloque html con estilos aplicados
"""
block_html_css = f"{block_html}{Functions.create_style_html()}"
return block_html_css
# FUNCIONES MONGODB ################################################################################################
@staticmethod
def insert_collection_into_mongodb(connection: list, coleccion_datos: list):
"""
Description:
Dada una conexión a base y una colección de "documentos" se insertan los mismos en mongoDB.
Args:
connection: Lista con la conexión a la base de mongoBD, tener en cuenta que tiene que tener:
-servidor/puerto/timeout para la conexión
-nombre de la base de datos
-nombre de la colección de documentos (para extrapolar a otros tipos de bases, "esquemas")
            coleccion_datos: Lista que contiene los documentos a insertar en base, los documentos pueden ser
                diccionarios de datos o datos más complejos.
Returns:
Devuelve un texto con el resultado de la inserción.
"""
client = None
try:
uri_connection = f"mongodb://{connection['MONGODB_HOST']}:{connection['MONGODB_PORT']}/"
client = pymongo.MongoClient(uri_connection, serverSelectionTimeoutMS=connection['MONGODB_TIMEOUT'])
collection = client[connection['DB']][connection['COLLECTION']]
collection.insert_many(coleccion_datos)
print('Se inserto la coleccion de datos en la base')
except ServerSelectionTimeoutError as e:
Functions.exception_logger(e)
print('No se ha podido establecer una conexion con la base mongoDB.')
except ConnectionFailure as e:
Functions.exception_logger(e)
print('Fallo la conexion con la base mongoDB.')
        finally:
            if client is not None:
                client.close()  # cierra la conexión con la base, nunca olvidar
# FUNCIONES DE ESCRITURA DE ARCHIVOS ###############################################################################
def write_file(self, collection_data, format_file, delimiter=',', head=True, file_name=None, specific_sheet=None):
"""
Description:
Función de escritura de archivo, toma una lista de diccionarios con los datos a escribir y los
guarda en el archivo especificado (crea el archivo).
El archivo generado se guardará en la carpeta Output del mismo nombre del proyecto a testear.
Args:
collection_data: Lista de diccionario de datos con el content a escribir (obligatorio).
format_file: Extension del archivo a escribir (obligatorio).
delimiter: Delimitador (',').
head: si el archivo contiene cabecera (por defecto esta en true).
file_name: el nombre del archivo que se escribira.
specific_sheet: si el archivo es un excel se le puede identificar el nombre de la hoja a utilizar.
"""
f = None
if file_name is None:
            # tomo el nombre del archivo de los datos de inicialización
            print(f"El nombre del archivo a escribir es: {Functions.file_name}")
            file_name = Functions.file_name
resource = Functions.path_outputs + file_name + "." + format_file
else:
print("El nombre del archivo a escribir es : " + file_name)
resource = Functions.path_outputs + file_name + "." + format_file
if format_file == "xlsx":
Functions.convert_data_to_excel_format(collection_data, resource, head, specific_sheet)
            print("Se escribio el archivo correctamente.")
else:
            # se convierte el content a formato csv para poder escribirlo
            contents = Functions.convert_data_to_csv_format(collection_data, delimiter, head)
try:
f = open(resource, "w", encoding='utf8')
f.write(contents)
                print("Se escribio el archivo correctamente")
except Exception as e:
Functions.exception_logger(e)
print("No se pudo escribir el archivo")
            finally:
                if f is not None:
                    f.close()
@staticmethod
def convert_data_to_csv_format(collection_data, delimiter, head):
"""
Description:
Función que arma un value_text con formato separado por un delimitador para guardar en un archivo.
Args:
collection_data: Lista de diccionario de datos con el content a escribir (obligatorio).
delimiter:
head: Si el archivo contiene cabecera (por defecto esta en true).
Returns:
Devuelve el content unificado en un value_text para guardar en el archivo.
"""
head_line = None
line = None
index = None
if head: # si tiene cabecera en verdadero
for key in collection_data[0].keys(): # obtengo las key del diccionario de un item de la lista
if head_line is not None:
head_line = f'{head_line}{delimiter}{key}'
# si el value_text de la linea cabecera no esta vacio sumo la key a lo que ya contiene mas
# el delimitador especificado
else:
head_line = key
head_line = head_line + "\n" # al final de la linea hago un salto de linea
        # Ahora genero el content propiamente dicho, recorro la lista de diccionarios y por cada diccionario obtengo
        # los valores de cada key
for index in range(len(collection_data)):
for key in collection_data[index].keys():
value = collection_data[index][key]
                if line is not None:
                    if line.endswith("\n"):
                        # si lo último del value_text es un salto de linea agrego solo el nuevo value a la linea
                        line = line + value
                    else:
                        line = f'{line}{delimiter}{value}'  # si no, agrego el delimitador mas el value nuevo
                else:
                    line = value
            line = line + "\n"  # Al final de recorrer cada diccionario agrego un salto de linea al content
        # Para finalizar agrego la linea cabecera al resto del content para retornar un unico gran value_text
if head:
contents = head_line + line
else:
contents = line
return contents
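    # Ejemplo ilustrativo de convert_data_to_csv_format (valores hipotéticos, los values deben ser strings):
    #   Functions.convert_data_to_csv_format([{"user": "ana", "pass": "123"}], ",", True)
    #   # -> 'user,pass\nana,123\n'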
def read_excel(self, row_start: int, row_end: int, cols: int, file_name=None, specific_sheet=None):
"""
Description:
Lee un archivo excel con un pool de datos para utilizar en pruebas automatizadas y lo convierte
en una lista de diccionarios de datos.
Args:
            row_start: indico desde que row debe comenzar a leer (obligatorio)
row_end: indico hasta que row debe leer (obligatorio)
cols: indico cuantas columnas se leeran por cada row (obligatorio)
file_name: el nombre del archivo excel que se leera (opcional)
specific_sheet: nombre de la hoja que se leera (opcional)
Returns:
Devuelve una lista con los datos obtenidos de una fila del excel.
"""
if file_name is None:
print(f"Se leera el archivo con nombre: {Functions.file_name}")
file_name = Functions.file_name
resource = f"{Functions.path_resources}\\{file_name}.xlsx"
book = openpyxl.load_workbook(resource, data_only=True)
else:
resource = f"{Functions.path_resources}\\{file_name}.xlsx"
book = openpyxl.load_workbook(resource, data_only=True)
if specific_sheet is None:
sheet = book["Pool Data"]
else:
sheet = book[specific_sheet]
key = []
value = []
collection_data = []
dict_data = {}
for row in sheet.iter_rows(min_row=row_start, max_col=cols, max_row=row_start):
for cel in row:
key.append(sheet[cel.coordinate].value) # saco los valores que van a ser las Key del diccionario
for row in sheet.iter_rows(min_row=row_start + 1, max_col=cols, max_row=row_end):
for cel in row:
value.append(sheet[cel.coordinate].value) # obtengo los valores de las celdas
for index in range(len(key)):
dict_data.update({key[index]: value[index]}) # formo un diccionario
value = [] # limpio la lista de valores de cell para la proxima iteracion
collection_data.append(dict_data) # guardo el diccionario en la lista
dict_data = {} # limpio el diccionario para la proxima iteracion
return collection_data
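    # Ejemplo ilustrativo del resultado de read_excel (valores hipotéticos):
    #   data = Functions.read_excel(self, row_start=1, row_end=3, cols=2)
    #   # data -> [{'USER': 'ana', 'PASS': '123'}, {'USER': 'luis', 'PASS': '456'}]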
    @staticmethod
    def convert_data_to_excel_format(collection_data, resource, head=True, specific_sheet=None):
        """
        Description:
            Convierte una colección de datos en un formato excel.
        Args:
            collection_data: Lista de diccionario de datos con el content a escribir (obligatorio).
            resource: el nombre del archivo completo armado.
            head: si el archivo contiene cabecera (por defecto esta en true).
            specific_sheet: nombre de la hoja a utilizar.
        """
# Inicializacion de variables
keys = []
values = []
sheet = None
# intento abrir el archivo a escribir, si no existe, lo creo y borro la hoja por defecto que trae
try:
book = openpyxl.load_workbook(resource, data_only=True)
except FileNotFoundError:
book = Workbook()
book.save(resource)
del book['Sheet']
# reviso si me trajo un nombre de hoja especifico, borro la hoja existente y creo una nuevo para escribir en
# una hoja limpia
if specific_sheet is None:
if len(book.sheetnames) == 0:
sheet = book.create_sheet(title="OutputData")
if "OutputData" in book.sheetnames:
del book["OutputData"]
sheet = book.create_sheet(title="OutputData")
else:
if len(book.sheetnames) == 0:
sheet = book.create_sheet(title=specific_sheet)
if specific_sheet in book.sheetnames:
del book[specific_sheet]
sheet = book.create_sheet(title=specific_sheet)
if head: # si tiene cabecera en verdadero
for key in collection_data[0].keys(): # obtengo las key del diccionario de un item de la lista
keys.append(key) # lo guardo en una lista
sheet.append(keys) # lo guardo en la lista de la hoja
# ahora recorro la lista de diccionarios para guardar en la hoja todos los datos
for index in range(len(collection_data)):
for row in collection_data[index].values():
values.append(row) # guardo los valores de cada key en una lista
sheet.append(values) # guardo la lista en la lista "hoja"
values = [] # limpio la lista de valores para la proxima
# iteracion porque si no se acumulan los diccionarios
book.save(filename=resource) # guardo el libro de excel
else: # si no tiene cabecera hago lo mismo que arriba pero sin la iteracion de cabecera
for index in range(len(collection_data)):
for row in collection_data[index].values():
values.append(row)
sheet.append(values)
values = []
book.save(filename=resource)
book.close()
# FUNCIONES ENCRIPTS################################################################################################
@staticmethod
def get_enviroment_key_from_file():
"""
Description:
Obtiene la key (bytes) de la variable de entorno "PYBOT_KEY".
Returns:
Devuelve la key en bytes.
"""
key = None
enviroment_key = os.getenv(ENVIRONMENT_VAR)
if enviroment_key is not None:
try:
with open(enviroment_key, 'rb') as file:
key = file.read()
except FileNotFoundError:
print(f"No existe el archivo '{enviroment_key}'")
else:
            print(f"No se encuentra cargada correctamente la variable de entorno {ENVIRONMENT_VAR}")
return key
def get_password_from_file_encrypted(self, enviroment, user):
"""
Description:
Busca una password en un archivo encriptado.
Args:
enviroment: Nombre del ambiente asociado al usuario del cual se pretende recuperar la password.
user: Nombre del usuario del que se pretende recuperar la password.
Returns:
Devuelve la password del usuario.
"""
password = None
        key = Functions.get_enviroment_key_from_file()
fe = Fernet(key)
with open(PATH_ORIGIN, 'rb') as file:
encrypte_data = file.read()
decrypted_data = fe.decrypt(encrypte_data)
pass_list = decrypted_data.decode('utf-8').split('\r')
if 'AMBIENTE;IP;BASE;USUARIO;PASS' in pass_list:
for row in pass_list:
list_data_row = row.split(';')
                if enviroment in list_data_row and user in list_data_row:
password = list_data_row[-1]
break
if password is None:
unittest.TestCase().skipTest(f"--PasswordNotFound-- No se encontro la password de acceso en el archivo"
f" para {enviroment};{user}")
else:
return password
def get_data_from_xml_encrypted(self, father_attribute, attribute_to_search, attribute_name, inner_search):
"""
Description:
Busca y retorna la información requerida por el usuario desde el xml encriptado
Args:
father_attribute: Nombre del Tag padre.
attribute_to_search: Tipo de atributo que desea buscar.
attribute_name: Nombre del atributo que desea buscar.
inner_search: Nombre del tag interno del que se desea obtener el texto.
Returns:
Retorna el texto interno del dato requerido.
"""
        key = Functions.get_enviroment_key_from_file()
fe = Fernet(key)
return_data = None
try:
with open(PATH_ORIGIN_XML, 'rb') as file:
data = file.read()
deencrypted_data = fe.decrypt(data)
decompressed_data = bz2.decompress(deencrypted_data)
file.close()
read_xml_file = Et.fromstring(decompressed_data)
# el siguiente for revisa utilizando un formato XPATH los datos requeridos por el usuario
# y lo retorna si este existe
for element in read_xml_file.findall(f"./{father_attribute}[@{attribute_to_search}='{attribute_name}']/"):
if element.tag == inner_search and (element.text is not None or element.text != ""
or element.text != " "):
return_data = element.text
        except Exception as e:
            Functions.exception_logger(e)
            raise Exception("Ha ocurrido un error en tiempo de ejecución -> ERROR CODE 1523 (Functions)") from e
return return_data
@staticmethod
def use_xml_connect_to_db(ip_db, db_user_name):
"""
Description:
Busca y retorna la contraseña de la db requerida desde el xml encriptado
Args:
ip_db: IP servidor a conectar.
db_user_name: Nombre de usuario de la DB.
Returns:
Retorna la contraseña de la db.
"""
key = Functions.get_enviroment_key_from_file()
fe = Fernet(key)
return_db_password = None
try:
with open(PATH_ORIGIN_XML, 'rb') as file:
data = file.read()
deencrypted_data = fe.decrypt(data)
decompressed_data = bz2.decompress(deencrypted_data)
file.close()
read_xml_file = Et.fromstring(decompressed_data)
# el siguiente for revisa utilizando un formato XPATH los datos requeridos por el usuario
# y lo retorna si este existe
elements_search = read_xml_file.findall(f"./CLAVES/IP[.='{ip_db}']/../USER[.='{db_user_name}']/../PASS")
return_db_password = elements_search[0].text
        except Exception as e:
            Functions.exception_logger(e)
            unittest.TestCase.skipTest(Functions, "Error al intentar obtener información del archivo XML")
return return_db_password
@staticmethod
def obtain_port_from_xml(ip_db, db_user_name):
        """
        Description:
            Busca y retorna el puerto de la db requerida desde el xml encriptado
        Args:
            ip_db: IP servidor a conectar.
            db_user_name: Nombre de usuario de la DB.
        Returns:
            Retorna el puerto de la db.
        """
key = Functions.get_enviroment_key_from_file()
fe = Fernet(key)
return_db_port = None
try:
with open(PATH_ORIGIN_XML, 'rb') as file:
data = file.read()
deencrypted_data = fe.decrypt(data)
decompressed_data = bz2.decompress(deencrypted_data)
file.close()
read_xml_file = Et.fromstring(decompressed_data)
# el siguiente for revisa utilizando un formato XPATH los datos requeridos por el usuario
# y lo retorna si este existe
elements_search = read_xml_file.findall(f"./CLAVES/IP[.='{ip_db}']/../USER[.='{db_user_name}']/../PORT")
return_db_port = elements_search[0].text
        except Exception as e:
            Functions.exception_logger(e)
            unittest.TestCase.skipTest(Functions, "Error al intentar obtener información del archivo XML")
return return_db_port
# FUNCIONES NOTIFICACIONES #########################################################################################
def send_mail(self, receiver_email: list, title, content, file_attach=None):
"""
Description:
Envia un informe vía email.
Args:
receiver_email (str): Lista de destinatarios de correos.
title (str): Asunto del correo.
content (str): Cuerpo del correo
file_attach (file): Archivos adjuntos del correo.
Returns:
Si el correo fue enviado con éxito retorna el estado "Enviado",
de lo contrario imprime por consola "El mail no pudo ser enviado" y estado "No enviado".
"""
content = f'{content}{Functions.footer_signature_html()}'
content = Functions.apply_style_css_to_block(content)
port = Functions.get_data_from_xml_encrypted(self, "CLAVES", "id", "Email Sender Info", "PORT")
smtp_server = Functions.get_data_from_xml_encrypted(self, "CLAVES", "id", "Email Sender Info", "IP")
password = Functions.get_data_from_xml_encrypted(self, "CLAVES", "id", "Email de Pybot", "PASS")
sender_email = Functions.get_data_from_xml_encrypted(self, "CLAVES", "id", "Email de Pybot", "USER")
        message = MIMEMultipart("alternative")
        message['From'] = sender_email
        message['To'] = ",".join(receiver_email)
        message['Subject'] = 'No-responder: ' + title
message.attach(MIMEText(content, 'html'))
if file_attach is not None:
attachment = open(file_attach, "rb")
p = MIMEBase('application', 'octet-stream')
p.set_payload(attachment.read())
encoders.encode_base64(p)
file_name = file_attach.split('\\')
p.add_header('Content-Disposition', "attachment; filename= %s" % file_name[-1])
message.attach(p)
# img_data = open(file_attach, 'rb').read()
# image = MIMEImage(img_data, name=os.path.basename(file_attach))
# message.attach(image)
        try:
            with smtplib.SMTP(smtp_server, port) as server:
                server.ehlo()  # Can be omitted
                if Parameters.environment == "Windows":
                    server.starttls()
                server.login(sender_email, password)
                text = message.as_string()
                server.sendmail(sender_email, receiver_email, text)
            return "Enviado"
        except Exception as e:
            print(f'El mail no pudo ser enviado. // exception: {e}')
            Functions.exception_logger(e)
            return "No enviado"
def full_read_excel(self, file_name=None, specific_sheet=None, test_cases_name_list=None):
"""
Description:
Genera un pool de datos realizando una lectura completa del archivo excel,
identificado en el proyecto. Esta etapa se realiza en el SetUp,
y entrega los datos correspondientes al caso ejecutado.
Args:
file_name = Nombre del archivo excel.
Specific_sheet = Hoja específica de trabajo.
test_cases_name_list = Nombre del caso de prueba.
"""
if test_cases_name_list is None:
test_cases_name_list = unittest.getTestCaseNames(self.__class__, "test")
if file_name is None:
print(
f"Se leera el resource con nombre: '{Functions.color_message('BLUE', f'{Functions.file_name}.xlsx')}'")
file_name = Functions.file_name
resource = os.path.join(Functions.path_resources, f"{file_name}.xlsx")
if not os.path.isfile(resource):
raise Exception('El resource no existe')
book = openpyxl.load_workbook(resource, data_only=True)
else:
resource = os.path.join(Functions.path_resources, f"{file_name}.xlsx")
book = openpyxl.load_workbook(resource, data_only=True)
if specific_sheet is None:
print("Utilizando hoja default 'DataTest'")
sheet = book["DataTest"]
else:
print(f"Utilizando la hoja: '{Functions.color_message('BLUE', specific_sheet)}'")
sheet = book[specific_sheet]
records = list(sheet.values) # pool de data completo
headers = list(records.pop(0)) # solo datos del header
long_header = len(headers)
for header_value in range(0, long_header):
if headers[header_value] is None:
none_cell_letter = get_column_letter(header_value + 1)
none_cell_data = sheet[f'{none_cell_letter}{1}'].value
# Se valida si el header está completo, de lo contrario avisa al usuario
print(f"Error: Existe un campo vacío en el header. {none_cell_letter}1 y con value: {none_cell_data} "
f"Acción: SkipTest del proyecto ->{Functions.project_name} y Archivo -> "
f"{Functions.file_name}.xlsx")
unittest.TestCase().skipTest(f"--DataResourceNullHeaderError-- Existe un Header vacio en la Pos -> "
f"{none_cell_letter}1 y con value: {none_cell_data} en el proyecto->"
f"{Functions.project_name} y Archivo "
f"-> {Functions.file_name}.xlsx")
# validando headers duplicados
for pos_header in range(0, long_header - 1):
header_list_checker = headers.copy()
long_list_copy = len(header_list_checker)
# Utilizando la lista de paso, se extrae el value para ser comparado
check_header = header_list_checker.pop(pos_header)
for pos_in_list in range(0, long_list_copy - 1):
# Se utiliza el largo de la lista de paso ya que ahora posee 1 campo menos
# Se normaliza el tipo de texto para evitar problemas al comparar
if check_header.upper() == header_list_checker[pos_in_list].upper():
# Se obtiene la letra de la columna duplicada utilizando un método de la libreria
# Y pasando como parámetros el espacio de la lista original.
cell_letter = get_column_letter(headers.index(header_list_checker[pos_in_list]) + 1)
cell_data = sheet[f'{cell_letter}{1}'].value
print(f"Error: Existe un campo duplicado en el header. Pos {cell_letter}1 y con value: {cell_data} "
f"Acción: SkipTest del proyecto ->{Functions.project_name} y Archivo -> "
f"{Functions.file_name}.xlsx")
unittest.TestCase().skipTest(f"--DataResourceDuplicateKeyError-- "
f"Existe un campo duplicado en el Header -> "
f"Pos {cell_letter}1 y con value: {cell_data} en el proyecto->"
f"{Functions.project_name} y Archivo -> "
f"{Functions.file_name}.xlsx")
for upper_function in range(long_header):
headers[upper_function] = headers[upper_function].upper()
# unificación de headers y data recolectado
data_global_resource = [dict(zip(headers, row)) for row in records]
try:
# unificación de nombres test cases y data_global_resource
if not Parameters.manual_increment:
Functions.data_resource = dict(zip(test_cases_name_list, data_global_resource))
else:
Functions.data_resource = data_global_resource[int(Parameters.row) - 1]
print(f"El resource correra con el caso: {Functions.test_case_name}")
pprint.pprint(Functions.data_resource)
print("============================================")
return None
except KeyError:
print(f"Error: A cada row del archivo {Functions.file_name}.xlsx le corresponde un caso")
unittest.TestCase().skipTest(f"Error: A cada row del archivo {Functions.file_name}.xlsx "
f"le corresponde un caso")
# se pasa como parámetro el nombre del test case ejecutado
try:
Functions.data_resource = Functions.data_resource[Functions.test_case_name]
except KeyError:
print(f"Error: A cada row del archivo {Functions.file_name}.xlsx le corresponde a un caso. "
f"Acción: SkipTest")
unittest.TestCase().skipTest(f"Error: A cada row del archivo {Functions.file_name}.xlsx "
f"le corresponde a un caso")
print(f"El resource correra con el caso: '{Functions.color_message('BLUE', Functions.test_case_name)}'")
print("==================Inicio Resource==================")
pprint.pprint(Functions.data_resource)
print("===================Fin Resource====================")
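The resource loader above pairs Excel headers with row values, and test-case names with rows, via `dict(zip(...))`. A minimal standalone sketch of that pattern (the names and data below are illustrative, not taken from a real resource file):

```python
# One dict per row, keyed by header names; then one entry per test case.
headers = ["USER", "PASSWORD", "ENVIRONMENT"]
records = [("qa_user", "secret1", "Test"), ("qa_admin", "secret2", "QA")]
test_cases_name_list = ["test_login_ok", "test_login_admin"]

data_global_resource = [dict(zip(headers, row)) for row in records]
data_resource = dict(zip(test_cases_name_list, data_global_resource))
print(data_resource["test_login_ok"]["USER"])  # qa_user
```

Note that `zip` silently truncates to the shorter iterable, which is why the loader skips the test when rows and case names do not line up one to one.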
def get_random(self, min_range, max_range):
"""
Description:
Obtiene un número aleatorio del rango especificado.
Args:
min_range (int): Rango mínimo.
max_range (int): Rango máximo.
Returns:
Retorna un número aleatorio.
"""
random_number = random.randint(min_range, max_range)
return random_number
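A quick note on the range semantics of `get_random`: `random.randint` is inclusive on both ends, so `get_random(1, 6)` can return both 1 and 6:

```python
import random

# randint(a, b) returns an integer N with a <= N <= b (both ends inclusive).
value = random.randint(1, 6)
assert 1 <= value <= 6
```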
def create_teams_notifications(self, teams_channel=None, table_color=None, msg_tittle=None,
section_text=None, btn_name=None, btn_link=None):
"""
Description:
Realiza la notificación a teams.
Args:
teams_channel: Canal de teams donde se producirá la notificación.
table_color: Color de la tabla.
msg_tittle: Titulo del mensaje.
section_text: Contenido del mensaje.
btn_name: Nombre de un boton.
btn_link: Link asociado a un boton.
"""
if table_color is None:
table_color = "F03A2E"
if msg_tittle is None:
msg_tittle = "Notificación creada automaticamente"
if section_text is None:
section_text = "Texto Sección 1"
if btn_name is None:
btn_name = "Hazme Click!"
Functions.teams = pymsteams.connectorcard(teams_channel)
Functions.teams.color(table_color) # Color de la tarjeta.
Functions.teams.title(msg_tittle) # Titulo del mensaje.
Functions.teams.text(" ") # Texto al mensaje.
my_message_section = pymsteams.cardsection() # Agrega una sección a la tarjeta.
# my_message_section.title(section_tittle) # Titulo de la nueva seccion.
my_message_section.text(section_text) # Texto de la sección.
my_message_section.linkButton(btn_name, btn_link) # Se agrega un botón a la sección.
Functions.teams.addSection(my_message_section) # Agregar sección a la tarjeta.
# self.teams.printme() # Payload del mensaje.
if teams_channel is None or btn_link is None:
print(f"\nNo puede enviarse la notificación ya que alguno de los datos requeridos es inválido")
print(f"\nChannel: {teams_channel}")
print(f"\nButton Link: {btn_link}")
else:
Functions.teams.send() # Envia el mensaje.
@staticmethod
def exception_logger(exception):
"""
Description:
Realiza la impresión de una excepción por consola.
Args:
exception: Excepción producida durante el tiempo de ejecución.
"""
if Parameters.loggin_exceptions:
print(exception)
@staticmethod
def set_exception_loggin(value: bool):
"""
Description:
Configura el logeo de las excepciones
Args:
value: true o false
"""
Parameters.loggin_exceptions = value
@staticmethod
def color_message(color, message):
"""
Description: Colorea el string del color indicado de entre una lista de colores.
Args:
color: puede ser de color red, blue, yellow o green.
message: string a colorear.
Returns:
string coloreado o string por default
"""
if Parameters.enviroment_confguration != "Server":
if color.upper() == "RED":
return f"{RED}{message}{DEFAULT}"
elif color.upper() == "BLUE":
return f"{BLUE}{message}{DEFAULT}"
elif color.upper() == "YELLOW":
return f"{YELLOW}{message}{DEFAULT}"
elif color.upper() == "GREEN":
return f"{GREEN}{message}{DEFAULT}"
else:
return f"{DEFAULT}{message}{DEFAULT}"
else:
return message
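The `RED`/`BLUE`/`YELLOW`/`GREEN`/`DEFAULT` constants used by `color_message` are assumed to be module-level ANSI escape sequences (they are not defined in this excerpt). A plausible definition and a standalone version of the same lookup:

```python
# Assumed ANSI color codes; "DEFAULT" resets the terminal color.
RED = "\033[91m"
BLUE = "\033[94m"
YELLOW = "\033[93m"
GREEN = "\033[92m"
DEFAULT = "\033[0m"

def color_message(color, message):
    # Unknown colors fall back to the default (uncolored) style.
    codes = {"RED": RED, "BLUE": BLUE, "YELLOW": YELLOW, "GREEN": GREEN}
    return f"{codes.get(color.upper(), DEFAULT)}{message}{DEFAULT}"

print(color_message("green", "REALIZADO:"))  # renders green on ANSI terminals
```

A dict lookup avoids the `if/elif` ladder used above while keeping the same fallback behavior.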
####################################################################################################################
############################################## Encryptor ###########################################################
####################################################################################################################
class Encryptor:
ENVIRONMENT_PROJ_FILE_PATH = os.path.join(os.path.abspath(os.path.join(os.getcwd(), "../..")),
"environment_access.xml")
ENVIRONMENT_BACKUP_FILE_NAME = os.path.join(os.path.abspath(os.path.join(os.getcwd(), "../../../..")),
"environment_access.xml")
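The two `environment_access.xml` locations are resolved relative to the working directory: the project copy sits two levels up, the backup four levels up. A sketch of that resolution (the directory layout below is purely hypothetical):

```python
import posixpath

# Hypothetical working directory during a test run.
cwd = "/home/user/Testing-Automation/projects/MyProj/src/tests"
proj_dir = posixpath.normpath(posixpath.join(cwd, "../.."))
backup_dir = posixpath.normpath(posixpath.join(cwd, "../../../.."))
print(proj_dir)    # /home/user/Testing-Automation/projects/MyProj
print(backup_dir)  # /home/user/Testing-Automation
```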
def __init__(self, father_attribute, attribute_to_search, data_find, data_inner_key, xml_en_consola=False):
self.father_attribute = father_attribute
self.atribute_to_search = attribute_to_search
self.data_find = data_find
self.data_inner_key = data_inner_key
self.key = self.get_enviroment_key_from_file()
self.encriptor = Fernet(self.key)
self.xml_en_consola = xml_en_consola
def main(self):
data_returned = None
# Proceso:
# 1 Si el archivo del proyecto existe
if self.project_file_exists() is True:
print("Archivo del proyecto encontrado!")
data_returned = self.get_data_from_proj_file(self.father_attribute,
self.atribute_to_search,
self.data_find,
self.data_inner_key)
# Si la información obtenida del archivo del proyecto no existe
if data_returned is None:
print(f"Parece que el archivo no contiene la información buscada: {self.data_inner_key}")
print(f"Buscando {self.data_inner_key} ahora en el archivo BackUp")
# reviso que el archivo backup exista
if self.backup_file_exists() is True:
print("Archivo BackUp encontrado!")
# obtengo el dato si existe, desde el archivo backup
data_returned = self.get_data_from_backup_file(self.father_attribute,
self.atribute_to_search,
self.data_find,
self.data_inner_key)
# Si el dato existe
if data_returned is not None:
print("Se ha encontrado el dato buscado, pero este se encuentra en el archivo BackUp")
print(f"Copiando [{self.father_attribute};{self.atribute_to_search}] al archivo del "
f"proyecto actual")
# Agrego la información faltante desde el archivo backup, al archivo del proyecto
self.add_data_to_proj_file(self.father_attribute, self.atribute_to_search, self.data_find)
print("La información fue agregada al archivo del proyecto actual correctamente!")
# Si el dato NO existe
else:
unittest.case.TestCase.skipTest(self,
"Parece que ninguno de los archivos contiene la información buscada")
else:
# Si el archivo backup no existe, crea uno en base a un template vacio
unittest.case.TestCase.skipTest(self,
"Archivo BackUp no encontrado!, Creando archivo BackUp con template base para futuras ejecuciones")
self.create_backup_file()
print("Archivo BackUp creado con éxito!")
else:
print("Archivo del proyecto no encontrado!")
# 2 - Si el archivo del proyecto NO existe, reviso que el archivo backup exista
if self.backup_file_exists() is True:
print("Archivo BackUp encontrado!")
# Si el archivo backup existe, crea un template en la ubicación del proyecto
print("Creando archivo de proyecto actual con template base para futuras ejecuciones")
self.create_project_file()
print("Archivo del proyecto creado con éxito!")
self.add_data_to_proj_file(self.father_attribute, self.atribute_to_search, self.data_find)
print("Data transferida al archivo del proyecto actual")
# Corro el proceso otra vez
data_returned = Functions.Encryptor(self.father_attribute, self.atribute_to_search, self.data_find,
self.data_inner_key).main()
# Si no existe el archivo del proyecto y tampoco existe el archivo backup
else:
unittest.case.TestCase.skipTest(self,
"No se puede continuar sin la presencia de al menos uno de los archivos environment_access.xml")
return data_returned
def project_file_exists(self):
project_exists = False
if os.path.exists(self.ENVIRONMENT_PROJ_FILE_PATH) is True:
project_exists = True
return project_exists
def backup_file_exists(self):
ignored_exists = False
# Si existe el archivo ignorado ubicado en testing-Automation devuelve True, sino existe False
if os.path.exists(self.ENVIRONMENT_BACKUP_FILE_NAME):
ignored_exists = True
return ignored_exists
####################################################################################################################
# ENCRYPTOR FUNCTIONS #
####################################################################################################################
def get_enviroment_key_from_file(self):
"""
Description:
Obtiene la key (bytes) de la variable de entorno "PYBOT_KEY".
Returns:
Devuelve la key en bytes.
"""
key = ""
enviroment_key = os.getenv(ENVIRONMENT_VAR)
if enviroment_key is not None:
try:
with open(enviroment_key, 'rb') as file:
key = file.read()
except FileNotFoundError:
print(f"No existe el archivo '{enviroment_key}'")
else:
print(f"No se encuentra cargada correctamente la variable de entorno {ENVIRONMENT_VAR}")
return key
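The lookup chain above is indirect: the environment variable stores the *path* to a key file, and the key itself is read as bytes from that file. A self-contained sketch (`PYBOT_KEY` is the variable name suggested by the docstring; the key bytes are fabricated for illustration):

```python
import os
import tempfile

# Create a throwaway "key file" and point the env var at it.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"example-key-bytes")
    key_path = tmp.name

os.environ["PYBOT_KEY"] = key_path
with open(os.getenv("PYBOT_KEY"), "rb") as file:
    key = file.read()
os.remove(key_path)
print(key)  # b'example-key-bytes'
```

Reading in `'rb'` mode matters: Fernet keys must be bytes, not text.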
# Decript
def decompress_and_deencrypt_xml(self, read_file):
if self.is_file_encrypted(read_file) is True:
try:
with open(read_file, 'rb') as file:
data = file.read()
deencrypted_data = self.encriptor.decrypt(data)
decompressed_data = bz2.decompress(deencrypted_data)
os.remove(read_file)
with open(read_file, 'wb') as output_file:
output_file.write(decompressed_data)
except FileNotFoundError:
print("El archivo buscado no existe en el directorio especificado.")
else:
print(f"No se puede desencryptar el archivo, ya que este es su estado actual -> {read_file}")
# Encript
def compress_and_encrypt_xml(self, read_file):
if self.is_file_encrypted(read_file) is not True:
try:
with open(read_file, 'rb') as file:
data = file.read()
compressed_data = bz2.compress(data)
encrypted_data = self.encriptor.encrypt(compressed_data)
os.remove(read_file)
with open(read_file, 'wb') as output_file:
output_file.write(encrypted_data)
except FileNotFoundError:
print("El archivo buscado no existe en el directorio especificado.")
else:
print(f"No se puede encryptar el archivo, ya que este es su estado actual -> {read_file}")
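The encrypt/decrypt pair above layers Fernet (from the third-party `cryptography` package) on top of bz2 compression. The stdlib half of that round trip can be verified on its own:

```python
import bz2

# Compress and restore an XML payload, as compress_and_encrypt_xml does
# before handing the bytes to Fernet.
original = b"<root><CLAVES id='Test_Id'><PORT>9999</PORT></CLAVES></root>"
compressed = bz2.compress(original)
restored = bz2.decompress(compressed)
assert restored == original
# In the real flow, `compressed` would then pass through
# Fernet(key).encrypt(...) before being written back over the file.
```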
# obtener data requerida desde el archivo
def get_data_from_proj_file(self, father_attribute, atribute_to_search, dato_a_buscar, inner_search):
return_data = None
try:
read_xml_file, project_tree = self.get_xml_root(self.ENVIRONMENT_PROJ_FILE_PATH)
for element in read_xml_file.findall(f"./{father_attribute}[@{atribute_to_search}='{dato_a_buscar}']/"):
if element.tag == inner_search and element.text is not None and element.text not in ("", " "):
return_data = element.text
except Exception:
print("Ha Ocurrido un Error en el Tiempo de Ejecución -> ERROR CODE 204 (Encriptor)")
return return_data
def get_data_from_backup_file(self, father_attribute, atribute_to_search, dato_a_buscar, inner_search):
return_data = None
try:
read_xml_file, backup_tree = self.get_xml_root(self.ENVIRONMENT_BACKUP_FILE_NAME)
for element in read_xml_file.findall(f"./{father_attribute}[@{atribute_to_search}='{dato_a_buscar}']/"):
if element.tag == inner_search and element.text is not None and element.text not in ("", " "):
return_data = element.text
except Exception:
raise RuntimeError("Ha ocurrido un error en el tiempo de ejecución -> ERROR CODE 204 (Encriptor)")
return return_data
# Actualizar data
def add_data_to_proj_file(self, father_attribute, atribute_to_search, dato_a_buscar):
# Necesito:
# * Ahora que sé que el dato existe en el archivo backup, obtenerlo con su bloque de información.
# * VALIDAR MEDIANTE TAG SI ESTE ELEMENTO EXISTE EN EL ARCHIVO
backup_xml, backup_tree = self.get_xml_root(self.ENVIRONMENT_BACKUP_FILE_NAME)
project_xml, project_tree = self.get_xml_root(self.ENVIRONMENT_PROJ_FILE_PATH)
self.validate(project_xml)
# Create a new CLAVES element
new_claves = Et.Element('CLAVES')
new_claves.set(atribute_to_search, dato_a_buscar)
# Create sub-elements for the new CLAVES
port = Et.SubElement(new_claves, 'PORT')
ip = Et.SubElement(new_claves, 'IP')
environment = Et.SubElement(new_claves, 'ENVIRONMENT')
base = Et.SubElement(new_claves, 'BASE')
user = Et.SubElement(new_claves, 'USER')
password = Et.SubElement(new_claves, 'PASS')
# Append the new CLAVES element to the root
project_xml.append(new_claves)
for element in backup_xml.findall(f"./{father_attribute}[@{atribute_to_search}='{dato_a_buscar}']/"):
if element.tag == "PORT":
port.text = element.text
if element.tag == "IP":
ip.text = element.text
if element.tag == "ENVIRONMENT":
environment.text = element.text
if element.tag == "BASE":
base.text = element.text
if element.tag == "USER":
user.text = element.text
if element.tag == "PASS":
password.text = element.text
Et.indent(project_xml, " ")
project_tree.write(self.ENVIRONMENT_PROJ_FILE_PATH)
self.compress_and_encrypt_xml(self.ENVIRONMENT_PROJ_FILE_PATH)
if self.xml_en_consola is True:
Et.dump(project_xml)
print("---------------------")
Et.dump(backup_xml)
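The `CLAVES` block is built with `Et.Element`/`Et.SubElement` and later read back with an attribute-predicate XPath. A condensed version of both halves (values are illustrative):

```python
import xml.etree.ElementTree as Et

# Build the block the same way add_data_to_proj_file does.
root = Et.Element("root")
claves = Et.SubElement(root, "CLAVES")
claves.set("id", "Test_Id")
Et.SubElement(claves, "PORT").text = "9999"
Et.SubElement(claves, "IP").text = "127.0.0.1"

# Read it back with the XPath pattern used by get_data_from_proj_file.
found = root.find("./CLAVES[@id='Test_Id']/PORT")
print(found.text)  # 9999
```

ElementTree supports only a limited XPath subset, but `tag[@attr='value']` predicates like the one above are part of it.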
def create_backup_file(self):
xml_body = b'<?xml version="1.0" encoding="UTF-8" ?> ' \
b'<root>' \
b'<!--<CLAVES id="Test_Id">-->' \
b'<!--<PORT>9999</PORT>-->' \
b'<!--<IP>255.255.255.255</IP>-->' \
b'<!--<ENVIRONMENT>Test</ENVIRONMENT>-->' \
b'<!--<BASE>SQL_BASE</BASE>-->' \
b'<!--<USER>Test_User</USER>-->' \
b'<!--<PASS>Test_Password</PASS>-->' \
b'<!--</CLAVES>-->' \
b'</root>'
# Formateo del archivo output
compressed_data = bz2.compress(xml_body)
encripted_xml = self.encriptor.encrypt(compressed_data)
with open(self.ENVIRONMENT_BACKUP_FILE_NAME, 'wb') as output_file:
output_file.write(encripted_xml)
output_file.close()
def create_project_file(self):
with open(rf"{self.ENVIRONMENT_PROJ_FILE_PATH}", "wb") as env_file:
env_file.write(b'<root>\n')
env_file.write(b'</root>\n')
tree = Et.parse(self.ENVIRONMENT_PROJ_FILE_PATH)
root = tree.getroot()
Et.indent(root, " ")
tree.write(self.ENVIRONMENT_PROJ_FILE_PATH)
self.compress_and_encrypt_xml(self.ENVIRONMENT_PROJ_FILE_PATH)
def get_xml_root(self, file_loc):
self.decompress_and_deencrypt_xml(file_loc)
tree = Et.parse(file_loc)
current_root = tree.getroot()
self.compress_and_encrypt_xml(file_loc)
return current_root, tree
def is_file_encrypted(self, file_path):
try:
# Intentar analizar el archivo XML
Et.parse(file_path)
return False
except Et.ParseError:
# Si se produce un error al analizar, se considera que el archivo está encriptado
return True
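The "is it encrypted?" heuristic above simply tries to parse the file as XML: plaintext parses, ciphertext raises `ParseError`. A standalone demonstration of the same check:

```python
import os
import tempfile
import xml.etree.ElementTree as Et

def looks_encrypted(path):
    # Mirrors is_file_encrypted: a ParseError is taken to mean ciphertext.
    try:
        Et.parse(path)
        return False
    except Et.ParseError:
        return True

with tempfile.NamedTemporaryFile("wb", suffix=".xml", delete=False) as f:
    f.write(b"<root></root>")
    plain_path = f.name
with tempfile.NamedTemporaryFile("wb", delete=False) as f:
    f.write(b"\x00\x01 definitely not xml")
    cipher_path = f.name

plain_result = looks_encrypted(plain_path)
cipher_result = looks_encrypted(cipher_path)
os.remove(plain_path)
os.remove(cipher_path)
print(plain_result, cipher_result)  # False True
```

The heuristic has an edge case worth knowing: any non-XML plaintext (or a corrupted file) is also classified as "encrypted".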
def validate(self, root_tree):
elemento_prueba = root_tree.find(f'.//*[@{self.atribute_to_search}="{self.data_find}"]')
if elemento_prueba is not None:
root_tree.remove(elemento_prueba)
|
Andreani-QA-Functions
|
/Andreani_QA_Functions-0.0.22-py3-none-any.whl/Andreani_QA_Functions/Functions.py
|
Functions.py
|
import os
import platform
import sys
class Parameters:
# CONFIGURACION PATH Y TESTCASE
current_path = os.path.abspath(os.path.join(os.getcwd(), "../.."))
sys.path.append(current_path)
file_name_stored = None
env = None
# CONFIGURACION PARA UTILIZAR CSV DESDE SHAREPOINT
#sharepoint_data_jmeter=None
# CONFIGURACION FORMATO DE FECHA
date_format = '%d/%m/%Y'
time_format = "%H:%M:%S"
# CONFIGURACION DE TIEMPO Y REINTENTOS PARA LA OBTENCIÓN DE ELEMENTOS
time_between_retries = 0.5
number_retries = 6
highlight = True
# CONFIGURACION DE LOGGING Y TIMEOUTS DE BASE DE DATOS
loggin_time = True
loggin_exceptions = False
timeout_base_sql_server = 20
# ENTORNO POR DEFECTO
environment = platform.system()
enviroment_confguration = os.getenv('PYBOT_SYSTEM')
# CONFIGURACION DE BROWSER Y PRUEBAS
browser = 'CHROME'
debug = False
headless = False
#VARIABLE LISTA DE PASOS
steps_list = []
# CONFIGURACION INCREMENTO AUTO/MANUALEXCEL
manual_increment = False
row = 2
# CONFIGURACION PATH JMETER
path_jmeter = f"C:\\Tools\\Jmeter\\bin\\jmeter.bat"
path_jmeter_libraries_ext = f"C:\\Tools\\Jmeter\\lib\\ext"
path_jmeter_downloads = f"{current_path}\\projects\\ApisCliente\\src\\downloads"
path_jmeter_report_jtl = f"{path_jmeter_downloads}\\report.jtl"
path_aggregate_report_csv_out = f"{path_jmeter_downloads}\\AggregateReport.csv"
path_response_over_times_png_out = f"{path_jmeter_downloads}\\ResponseTimesOverTime.png"
path_response_code_per_second_png_out = f"{path_jmeter_downloads}\\ResponseCodePerSecond.png"
path_response_threads_state_over_time = f"{path_jmeter_downloads}\\ThreadsStateOverTime.png"
path_dashboard = f"{current_path}\\projects\\ApisCliente\\src\\outputs\\dashboard_jmeter"
path_index_html_dashboard = f"{path_dashboard}\\index.html"
# CONFIGURACION TEST DE STRESS Y CARGA
users_jmeter = 1
rampup_jmeter = 1
duration_jmeter = 1
throughput_jmeter = 0
url_jmeter = ""
# CONFIGURACION PARA VALIDACION
status_code_expected = 200
parameter_id = None
expected_value = ""
server = "127.0.0.1" # Direccion Ip de la UI desplegada por locust.
port = 8089 # Puerto de la UI desplegada por locust.
max_threads = 100 # Cantidad maxima de hilos (Peticiones) alcanzable.
rate = 10 # Coeficiente incremental de carga.
duration = 60 # Duracion de la prueba (Segundos)
wait_time = 1 # Duracion de tiempos de espera entre peticiones.
# CONFIGURACION PARA NOTIFICACIONES DE TEAMS
teams_notifications_colors = "#5b5fc7"
teams_focus_test_colors = "#383966"
|
Andreani-QA-Parameters
|
/Andreani_QA_Parameters-0.0.9.tar.gz/Andreani_QA_Parameters-0.0.9/Andreani_QA_parameters/Parameters.py
|
Parameters.py
|
import datetime
import os
import platform
import pprint
import random
import string
import time
import allure
import cx_Oracle
import json
import unittest
import requests
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException, NoAlertPresentException, NoSuchWindowException, \
TimeoutException, StaleElementReferenceException, ElementClickInterceptedException, \
ElementNotInteractableException, WebDriverException, UnexpectedAlertPresentException
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.firefox.options import Options as FirefoxOptions
from selenium.webdriver.ie.options import Options as IeOptions
from selenium.webdriver.remote.webdriver import WebElement
from selenium.webdriver.support import expected_conditions as ec
from selenium.webdriver.support.ui import Select
from selenium.webdriver.support.ui import WebDriverWait
# actualizacion 20 de julio
from selenium.webdriver.chrome.service import Service as ServiceChrome
from selenium.webdriver.firefox.service import Service as ServiceFirefox
from selenium.webdriver.ie.service import Service as ServiceIexplorer
from Andreani_QA_parameters.Parameters import Parameters
from Andreani_QA_Functions.Functions import Functions
if platform.system() == "Windows":
from win32com.client import Dispatch
from Andreani_QA_Debugger.Debugger import Debugger
class Selenium(Functions, Parameters):
windows = {}
driver = None
value_to_find = None
get_locator_type = None
number_windows = None
exception = None
json_strings = None
complete_json_path = None
message_container = None
message_error = None
json_on_loaded = None
global_date = time.strftime(Parameters.date_format)  # formato dd/mm/aaaa
global_time = time.strftime(Parameters.time_format)  # formato 24 horas
lista_pasos = []
lista_descripcion_pasos = []
driver_port_status = False
chrome_services = ServiceChrome()
firefox_services = ServiceFirefox(log_output='nul')
ie_services = ServiceIexplorer()
# INICIALIZA LOS DRIVER Y LOS CONFIGURA
def open_browser(self, url=None, browser=Parameters.browser, options_headless=Parameters.headless,
download_path=None):
"""
Description:
Inicializa el navegador con las configuraciones definidas por el ambiente.
Args:
url: Url del Proyecto.
browser: Navegador a utilizar.
options_headless: True o False para utilizar el navegador en modo headless.
download_path: Ruta a la carpeta de descargas.
Returns:
Retorna el driver e imprime por consola:
-El directorio base
-El navegador utilizado
"""
print(f"Directorio Base: {Parameters.current_path}")
print(f"{self.color_message('YELLOW', 'AGUARDANDO:')} Inicio del navegador "
f"'{self.color_message('BLUE', browser)}'.")
options = None
if browser == "CHROME":
options = webdriver.ChromeOptions()
if download_path is None:
download_default = {"download.default_directory": self.path_downloads}
options.add_experimental_option("prefs", download_default)
else:
options.add_experimental_option("prefs", {"download.default_directory": download_path})
if browser == "FIREFOX":
options = FirefoxOptions()
if browser == "IE":
options = IeOptions()
options.add_argument("--no-sandbox")
options.add_argument("--window-size=1920,1080")
options.add_argument('--disable-dev-shm-usage')
options.add_argument("--ignore-certificate-errors")
options.add_argument("--incognito")
options.add_argument("--enable-automation")
options.add_argument("--disable-extensions")
options.add_argument("--dns-prefetch-disable")
options.add_argument("--verbose")
options.add_argument("--disable-popup-blocking")
options.add_argument("--proxy-server='direct://'")
options.add_argument("--proxy-bypass-list=*")
options.add_argument("--disable-gpu") if os.name == "nt" else None
if Parameters.environment == "Linux":
options.add_argument("--headless")
Selenium.set_exception_loggin(True)
Selenium.set_mode_debug(False)
if browser == "CHROME":
Selenium.driver = webdriver.Chrome(service=self.chrome_services, options=options)
Selenium.initialize_browser(self, url)
Selenium.driver.maximize_window()
if Parameters.environment == "Windows":
if options_headless or Parameters.enviroment_confguration == "Server":
options.add_argument("--headless")
Selenium.set_exception_loggin(True)
Selenium.select_browser(self, browser, options)
Selenium.initialize_browser(self, url)
Selenium.set_highlight(False)
else:
options.add_argument(f"--remote-debugging-port={Selenium.available_port()}")
Selenium.set_mode_debug(False)
Selenium.select_browser(self, browser, options)
Selenium.initialize_browser(self, url)
Selenium.driver.maximize_window()
return Selenium.driver
def initialize_browser(self, url):
"""
Description:
Inicia el navegador configurado y navega hacia la url.
Args:
url: Url destino.
"""
Selenium.driver.implicitly_wait(10)
try:
Selenium.driver.get(url)
Selenium.windows = {'Principal': Selenium.driver.window_handles[0]}
except WebDriverException:
Selenium.tear_down(self)
unittest.TestCase().fail(f"--WebDriverException--No se ha podido establecer una "
f"conexión con el ambiente de pruebas {url}.")
def select_browser(self, browser, options):
"""
Description:
Permite configurar el navegador que se utilizará en la prueba.
Args:
browser: Nombre del navegador.
options: Argumentos opcionales del navegador.
"""
try:
if browser == "CHROME":
Selenium.driver = webdriver.Chrome(service=self.chrome_services, options=options)
elif browser == "FIREFOX":
Selenium.driver = webdriver.Firefox(service=self.firefox_services, options=options)
elif browser == "IE":
Selenium.driver = webdriver.Ie(service=self.ie_services, options=options)
except WebDriverException as e:
Functions.exception_logger(e)
Selenium.tear_down(self)
unittest.TestCase().skipTest(f"El web driver no esta disponible para esta prueba. {e}")
def tear_down(self):
"""
Description:
Finaliza la ejecución cerrando el Web Driver.
"""
Functions.create_grid_by_sources(self.data_cache, "Datos del cache")
try:
if Selenium.data_cache not in ([], {}):
print("====================Inicio Cache===================")
pprint.pprint(Selenium.data_cache)
print("=====================Fin Cache=====================")
print(f"{Selenium.color_message('YELLOW', 'AGUARDANDO:')} Se cerrará el web driver.")
Selenium.driver.quit()
except Exception as e:
Functions.exception_logger(e)
finally:
print(f"{Selenium.color_message('GREEN', 'REALIZADO:')} Finaliza la ejecución.")
@staticmethod
def create_grid_by_sources(resource: dict, message):
body = """
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>Mi página web</title>
<style>
h1{
color: #D71920;
padding: 1%;
font-family: Arial, Helvetica, sans-serif;
}
ul {
list-style-type: disc; /* Tipo de viñeta, en este caso un círculo lleno */
margin: 0;
padding: 0;
}
li {
color:#D71920;
margin: 0 0 0 1em; /* Margen izquierdo para que se vea la viñeta */
font-family: Arial, Helvetica, sans-serif;
font-size: 15px;
}
span{
color: #757575;
font-size: 15px;
font-family: Arial, Helvetica, sans-serif;
}
.container{
background-color: #FFFFFF;
margin: 1%;
padding: 1%;
border-radius: 10px;
box-shadow: 0px 3px 10px #00000029;
}
</style>
</head>
<body>
{list}
</body>
</html>
"""
if len(resource) != 0:
list_resources = ""
for item in resource.items():
resources_html = \
f"""<div class="container">
<ul>
<li><b>{item[0]}: </b><span>{item[1]}</span></li>
</ul>
</div>"""
list_resources += resources_html
body = body.replace("{list}", list_resources)
try:
allure.attach(body, message, attachment_type=allure.attachment_type.HTML)
except Exception as e:
Selenium.exception_logger(e)
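The Allure report above is assembled with a plain `str.replace` on a `{list}` placeholder; since the template is a regular string (not an f-string), the CSS braces pass through untouched. A condensed version of that pattern:

```python
# Minimal version of the create_grid_by_sources templating (names illustrative).
template = "<body>{list}</body>"
resource = {"USER": "qa_user", "ENV": "Test"}
rows = "".join(f"<li><b>{k}: </b><span>{v}</span></li>" for k, v in resource.items())
html = template.replace("{list}", rows)
print(html)
```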
def refresh(self):
"""
Description:
Actualiza la página web.
"""
Selenium.driver.refresh()
Selenium.page_has_loaded()
def get_proyect_name(self):
"""
Description:
Obtiene el nombre del proyecto en contexto.
Returns:
Retorna el nombre del proyecto en contexto.
"""
project_name = os.path.abspath(str(self)).split(' ')[0].split('\\')[-4]
return project_name
# LOCALIZADORES ####################################################################################################
def get_current_url(self):
"""
Description:
Obtiene la url actual de la pestaña activa.
Returns:
Url (str): La url de la pestaña activa.
"""
return Selenium.driver.current_url
def locator_element(self, type_locator, indentify, entity=None):
"""
Description:
Localiza un elemento utilizando el tipo de identificador indicado como parámetro.
Args:
type_locator: Tipo de identificador.
indentify: Identificador.
entity: Entidad con la que se genera el elemento web.
Returns:
Si el elemento fue encontrado imprime "Esperar_Elemento: Se visualizó el elemento " + XPATH",
en caso contrario imprime "No se pudo interactuar con el elemento", XPATH".
"""
find_element = False
elements = None
try:
elements = Selenium.driver.find_element(type_locator, indentify)
print(f"Se interactuo con el elemento {indentify}")
print(f"{Selenium.color_message('GREEN', 'REALIZADO:')} Se detecto el elemento web "
f"'{Selenium.color_message('BLUE', entity)}' utilizando el "
f"identificador {type_locator} apuntando a '{indentify}'")
find_element = True
except NoSuchElementException:
Selenium.exception = "NoSuchElementException"
print(f"No se pudo encontrar el elemento web '{Selenium.color_message('BLUE', entity)}'"
f" utilizando el identificador {type_locator} apuntando a '{indentify}'")
Selenium.message_error = f"No se pudo encontrar el elemento web '{entity}' utilizando el identificador " \
f"{type_locator} apuntando a '{indentify}'"
except TimeoutException:
Selenium.exception = "TimeoutException"
print(f"Se agoto el tiempo de busqueda intentando encontrar el elemento web "
f"'{Selenium.color_message('BLUE', entity)}' utilizando el identificador"
f" {type_locator} apuntando a '{indentify}'")
Selenium.message_error = f"Se agoto el tiempo de busqueda intentando encontrar el elemento web '{entity}'" \
f" utilizando el identificador " \
f"{type_locator} apuntando a '{indentify}'"
except Exception as e:
Functions.exception_logger(e)
print(f"Ha ocurrido un error inesperado en tiempo de ejecución.")
Selenium.lista_descripcion_pasos.append(indentify)
return elements, find_element
def highlight(self, element: WebElement):
"""
Description:
Marca en pantalla el elemento pasado como parámetro.
Args:
element: Elemento al que se le hace foco.
"""
Functions.wait(1)
try:
original_style = element.get_attribute('style')
highlight_style = "border: 3px solid green;"
for x in range(2):
try:
Selenium.driver.execute_script("arguments[0].setAttribute('style', arguments[1]);",
element, highlight_style)
time.sleep(0.1)
Selenium.driver.execute_script("arguments[0].setAttribute('style', arguments[1]);",
element, original_style)
time.sleep(0.1)
except Exception as e:
Functions.exception_logger(e)
print(f"Se encontro el elemento pero no puede ser señalado.")
except Exception as e:
Functions.exception_logger(e)
print(f"No se pudo señalar el elemento.")
def capture_element(self, entity, variable_x=None, variable_y=None):
"""
Description:
Captura en pantalla la entidad pasada como parámetro.
Args:
entity: Entidad del objeto al que se quiere capturar en pantalla.
variable_x: Variable x para parametrizar un elemento JSON.
variable_y: Variable y para parametrizar un elemento JSON.
Returns:
Si la entidad se encuentra correctamente se devuelve el elemento y se imprime "Última screenshot
antes de finalizar la ejecución", caso contrario lanza la excepción.
"""
element = None
Selenium.page_has_loaded()
get_entity = Selenium.get_entity(self, entity)
if get_entity is None:
print("No se encontro el value en el Json definido.")
else:
if variable_x is not None:
Selenium.value_to_find = Selenium.value_to_find.replace("IndiceX", variable_x)
if variable_y is not None:
Selenium.value_to_find = Selenium.value_to_find.replace("IndiceY", variable_y)
find_element = False
for intentos in range(Parameters.number_retries):
if Selenium.get_locator_type.lower() == "xpath":
element, find_element = Selenium.locator_element(self, By.XPATH, Selenium.value_to_find, entity=entity)
elif Selenium.get_locator_type.lower() == "id":
element, find_element = Selenium.locator_element(self, By.ID, Selenium.value_to_find, entity=entity)
elif Selenium.get_locator_type.lower() == "name":
element, find_element = Selenium.locator_element(self, By.NAME, Selenium.value_to_find, entity=entity)
else:
print("El tipo de entidad del objeto no es valido para Selenium Framework.")
unittest.TestCase().fail(f"--JsonErrorIdentity-- El tipo de entidad del objeto {entity} no es valido.")
if find_element:
unittest.TestCase().assertTrue(find_element, f"El elemento {entity} se visualiza en pantalla.")
if Parameters.highlight:
Selenium.highlight(self, element)
else:
Selenium.wait(Parameters.time_between_retries)
break
Selenium.wait(Parameters.time_between_retries)
if not find_element:
self.image_for_debugger_report()
self.steps_case = ""
for i in range(len(self.lista_pasos)):
self.steps_case += f"* {self.lista_pasos[i]}: {self.lista_descripcion_pasos[i]}\n"
status_code_returned = Selenium.debugger(self, entity)
if status_code_returned == 1: # retry / refab
Selenium.set_retry(self, 3)
return Selenium.capture_element(self, entity)
Selenium.screenshot(self, "Ultima screenshot antes de finalizar la ejecución")
unittest.TestCase().fail(f"--{Selenium.exception}-- El objeto {entity} no se visualiza en pantalla.")
return element
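`capture_element` retries the lookup `Parameters.number_retries` times, pausing `Parameters.time_between_retries` between attempts. The same retry skeleton, stripped of Selenium (constants below are illustrative, not the framework's real defaults):

```python
import time

NUMBER_RETRIES = 6
TIME_BETWEEN_RETRIES = 0.01  # shortened for the sketch

def find_with_retries(lookup):
    # lookup() returns (element, found); retry until found or attempts run out.
    for _ in range(NUMBER_RETRIES):
        element, found = lookup()
        if found:
            return element
        time.sleep(TIME_BETWEEN_RETRIES)
    return None

attempts = []
def flaky_lookup():
    attempts.append(1)
    return "element", len(attempts) >= 3  # succeeds on the third try

result = find_with_retries(flaky_lookup)
print(result, len(attempts))  # element 3
```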
def get_element(self, entity: object, variable_y: object = None, variable_x: object = None):
"""
Description:
Obtiene un elemento de un archivo json, según su identificador.
Args:
entity (str): Entidad del objeto que se quiere obtener.
variable_y: Variable x para parametrizar un elemento JSON.
variable_x: Variable y para parametrizar un elemento JSON.
Returns:
Si la entidad fue encontrada retorna el elemento, en caso contrario imprime
"No se encontro el value en el Json definido".
"""
element = Selenium.capture_element(self, entity, variable_y=variable_y, variable_x=variable_x)
Selenium.page_has_loaded()
return ElementUI(element, Selenium.driver, Selenium.value_to_find, Selenium.get_locator_type, entity)
def debugger(self, debug_this_entity):
"""
Description:
Permite visualizar los defectos antes de finalizar la ejecución, la corrección de los mismos y
luego cierra el navegador.
Args:
debug_this_entity: Nombre de la entidad en conflicto.
Returns:
Devuelve el status code correspondiente a la acción realizada por el usuario dentro de la UI.
"""
metadata = {
"FRAMEWORK": "Selenium",
"ENTITY": debug_this_entity,
"EXCEPTION": Selenium.exception,
"MESSAGE": Selenium.message_error,
"LOCATOR TYPE": Selenium.get_locator_type,
"VALUE TO FIND": Selenium.value_to_find,
"JSON PATH": Selenium.complete_json_path,
"JSON STRING": Selenium.json_strings,
"CASE NAME": self.case_name
}
returned_code = None
if Selenium.get_mode_execution() and not Selenium.get_mode_browser():
response = str(Debugger(metadata))
returned_code = int(response.split("||")[0])
Selenium.value_to_find = response.split("||")[1]
Selenium.get_locator_type = response.split("||")[2]
return returned_code
def get_json_file(self, file):
"""
Description:
Lee un archivo json.
Args:
file (file): Archivo json.
Returns:
Si el archivo fue encontrado imprime "get_json_file: " + json_path",
en caso contrario imprime "get_json_file: No se encontro el Archivo " + file".
"""
Selenium.json_on_loaded = file
json_path = os.path.join(self.path_json, f"{file}.json")
Selenium.complete_json_path = json_path
try:
with open(json_path, "r", encoding='utf8') as read_file:
Selenium.json_strings = json.loads(read_file.read())
print(f"{Selenium.color_message('GREEN', 'REALIZADO:')} Se ha cargado el repositorio de objetos "
f"'{Selenium.color_message('BLUE', f'{file}.json')}' encontrado en el directorio "
f"'{json_path}'.")
except FileNotFoundError:
Selenium.json_strings = False
print(f"{Selenium.color_message('RED', 'ERROR:')} No se encontró "
f"'{Selenium.color_message('BLUE', f'{file}.json')}' en el directorio '{json_path}'.")
unittest.TestCase().skipTest(f"get_json_file: No se encontró el archivo {file}.")
Selenium.tear_down(self)
def get_entity(self, entity):
"""
Description:
Lee una entidad del archivo json.
Args:
entity (str): Entidad del objeto que se quiere leer.
Returns:
Si la entidad fue encontrada retorna "True", en caso contrario imprime
"get_entity: No se encontró la key a la cual se hace referencia: " + entity".
"""
if not Selenium.json_strings:
print("Define el DOM para esta prueba")
else:
try:
Selenium.value_to_find = Selenium.json_strings[entity]["ValueToFind"]
Selenium.get_locator_type = Selenium.json_strings[entity]["GetFieldBy"]
except KeyError as e:
Functions.exception_logger(e)
unittest.TestCase().skipTest(f"get_entity: No se encontró la key a la cual se hace referencia: "
f"{entity}.")
Selenium.tear_down(self)
return True
# TEXTBOX & COMBO HANDLE ###########################################################################################
def send_especific_keys(self, element, key):
"""
Description:
Simula el envío de una tecla del teclado.
Args:
element (str): Entidad del objeto que se quiere obtener.
key (str): Tecla seleccionada.
"""
if key == 'Enter':
Selenium.get_element(self, element).send_keys(Keys.ENTER)
if key == 'Tab':
Selenium.get_element(self, element).send_keys(Keys.TAB)
if key == 'Space':
Selenium.get_element(self, element).send_keys(Keys.SPACE)
if key == 'Esc':
Selenium.get_element(self, element).send_keys(Keys.ESCAPE)
if key == 'Retroceso':
Selenium.get_element(self, element).send_keys(Keys.BACKSPACE)
if key == 'Suprimir':
Selenium.get_element(self, element).send_keys(Keys.DELETE)
if key == "Abajo":
Selenium.get_element(self, element).send_keys(Keys.ARROW_DOWN)
time.sleep(3)
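The if-chain in `send_especific_keys` maps Spanish key names to `Keys` constants one by one; the same dispatch can be expressed as a lookup table. The sketch below is a standalone illustration: the unicode values are stand-ins mirroring the WebDriver key codes (normally imported from `selenium.webdriver.common.keys.Keys`), not a definitive mapping.

```python
# Stand-ins for the selenium Keys constants (WebDriver key codes), for illustration only.
KEY_MAP = {
    "Enter": "\ue007",
    "Tab": "\ue004",
    "Space": "\ue00d",
    "Esc": "\ue00c",
    "Retroceso": "\ue003",
    "Suprimir": "\ue017",
    "Abajo": "\ue015",
}

def resolve_key(name):
    """Returns the key code for a named key, or None if the name is unknown."""
    return KEY_MAP.get(name)

code = resolve_key("Enter")
```

With a table like this, adding a new key is a one-line change instead of a new `if` branch, and unknown names fall through to `None` explicitly.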
def get_id_window(self):
"""
Description:
Obtiene el id de una window.
Returns:
Devuelve el id de la window obtenida.
"""
print(Selenium.driver.window_handles[0])
return Selenium.driver.window_handles[0]
def switch_to_windows_handles(self, number_window):
"""
Description:
Cambia entre ventanas del navegador.
Args:
number_window (int): Número de window seleccionada.
"""
Selenium.driver.switch_to.window(Selenium.driver.window_handles[number_window])
Selenium.driver.maximize_window()
def switch_to_iframe(self, locator):
"""
Description:
Cambia entre iframes en la WebApp.
Args:
locator (str): Nombre del objeto que se quiere obtener.
Returns:
Imprime "Se realizó el switch a (Locator)".
"""
iframe = Selenium.capture_element(self, locator)
Selenium.driver.switch_to.frame(iframe)
print(f"Se realizó el switch a {locator}")
def switch_to_parent_frame(self):
"""
Description:
Cambia al iframes padre.
"""
Selenium.driver.switch_to.parent_frame()
print("Se realizó el switch al parent frame.")
def switch_to_default_frame(self):
"""
Description:
Cambia al iframe principal.
"""
Selenium.driver.switch_to.default_content()
print("Se realizó el switch al frame principal.")
def switch_to_windows_name(self, window):
"""
Description:
Cambia entre ventanas del navegador a través de su nombre.
Args:
window (str): Nombre de ventana seleccionada.
Returns:
Si la ventana es encontrada imprime "volviendo a (ventana)",
en caso contrario imprime "Estas en (ventana)".
"""
if window in Selenium.windows:
Selenium.driver.switch_to.window(Selenium.windows[window])
Selenium.page_has_loaded()
print("volviendo a " + window + " : " + Selenium.windows[window])
else:
Selenium.number_windows = len(Selenium.driver.window_handles) - 1
Selenium.windows[window] = Selenium.driver.window_handles[int(Selenium.number_windows)]
Selenium.driver.switch_to.window(Selenium.windows[window])
Selenium.driver.maximize_window()
print("Estas en " + window + " : " + Selenium.windows[window])
Selenium.page_has_loaded()
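The caching logic in `switch_to_windows_name` (store each newly opened window handle under a user-chosen name, then switch back by that name) can be sketched independently of Selenium. `FakeDriver` below is a hypothetical stand-in for the real WebDriver, used only so the example runs on its own.

```python
class HandleCache:
    """Caches window handles by a user-chosen name, mirroring switch_to_windows_name."""

    def __init__(self, driver):
        self.driver = driver
        self.windows = {}

    def switch(self, name):
        if name in self.windows:
            # Known name: reuse the stored handle.
            handle = self.windows[name]
        else:
            # Unknown name: assume it refers to the most recently opened window.
            handle = self.driver.window_handles[-1]
            self.windows[name] = handle
        self.driver.switch_to_window(handle)
        return handle


class FakeDriver:
    """Minimal stand-in for a WebDriver, for demonstration only."""

    def __init__(self):
        self.window_handles = ["w0"]
        self.current = "w0"

    def switch_to_window(self, handle):
        self.current = handle


driver = FakeDriver()
cache = HandleCache(driver)
driver.window_handles.append("w1")  # a new tab opens
cache.switch("popup")               # first use registers the newest handle
cache.switch("popup")               # later uses reuse the cached handle
```

The key invariant is the same as in the method above: a name is bound to a handle exactly once, on first use, so later switches are stable even if more windows open afterwards.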
def close_page(self):
"""
Description:
Cierra la instancia del explorador.
"""
Selenium.driver.close()
# FUNCIONES DE JAVASCRIPT ##########################################################################################
def get_page_dom(self):
"""
Description:
Obtiene el DOM de una WebApp.
Returns:
El DOM de una WebApp.
"""
return Selenium.driver.execute_script("return document.documentElement.outerHTML")
def new_window(self, url):
"""
Description:
Abre una nueva window con el navegador.
Args:
url (str): Dirección web que se debe cargar en la window
"""
Selenium.driver.execute_script(f'''window.open("{url}","_blank");''')
Selenium.page_has_loaded()
@staticmethod
def page_has_loaded():
"""
Description:
Espera que la página sea cargada.
Returns:
Si la página se cargó imprime "complete", en caso contrario imprime "No se completó la carga".
"""
try:
WebDriverWait(Selenium.driver, 60).until(
lambda target: Selenium.driver.execute_script('return document.readyState;') == 'complete')
except TimeoutException:
try:
allure.attach(Selenium.driver.get_screenshot_as_png(),
"Ultima screenshot antes de finalizar la ejecución.",
attachment_type=allure.attachment_type.PNG)
except Exception as e:
Functions.exception_logger(e)
print(f"No se pudo realizar la screenshot de pantalla.")
unittest.TestCase().fail("--TimeoutException-- No se ha podido realizar la carga de la página.")
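`page_has_loaded` is an instance of a general polling pattern: repeatedly evaluate a condition until it returns a truthy value or a deadline passes. A minimal self-contained sketch of that pattern (not tied to `WebDriverWait` or a browser):

```python
import time

def wait_until(condition, timeout=60, interval=0.5):
    """Polls `condition` until it is truthy or `timeout` seconds elapse.

    Returns the truthy value, or raises TimeoutError, mirroring how
    WebDriverWait drives the `document.readyState == 'complete'` check.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("condition not met within timeout")

# Example: a condition that becomes true on the third poll.
states = iter(["loading", "interactive", "complete"])
ready = wait_until(lambda: next(states) == "complete", timeout=5, interval=0.01)
```

Using `time.monotonic()` for the deadline keeps the wait immune to system clock adjustments, which is why polling loops generally prefer it over `time.time()`.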
def scroll_to(self, locator, y=None, x=None):
"""
Description:
Hace scroll en la página hacia el elemento que se pasa como parámetro.
Args:
y: Variable y para parametrizar un elemento JSON.
x: Variable x para parametrizar un elemento JSON.
locator (str): Nombre del elemento al cual se quiere scrollear.
"""
element = Selenium.capture_element(self, locator, variable_y=y, variable_x=x)
Selenium.driver.execute_script("arguments[0].scrollIntoView();", element)
print(f"Scrolleando la página hacia el objeto: {locator}")
# FUNCIONES DE ESPERA ##############################################################################################
@staticmethod
def wait(time_load, logger=Parameters.loggin_time, reason=None):
"""
Description:
Espera un elemento, el tiempo es dado en segundos.
Args:
time_load: Tiempo en segundos.
logger:
reason: Razón por la que se quiere esperar un elemento.
Returns:
Cuando termina el tiempo de espera imprime "Esperar: Carga Finalizada ... "
"""
return Functions.wait(time_load, logger=logger, reason=reason)
def alert_windows(self, accept="accept"):
"""
Description:
Espera un alert(window pop up) y hace click en accept.
Args:
accept (str): Opción aceptar.
Returns:
Al hacer click en accept imprime "Click in Accept", de lo contrario
imprime "Alerta no presente".
"""
try:
wait = WebDriverWait(Selenium.driver, 30)
print("Esperando alerta...")
wait.until(ec.alert_is_present())
alert = Selenium.driver.switch_to.alert
if accept.lower() == "accept":
alert.accept()
print("Click in Accept")
elif accept.lower() == "text":
print("Get alert text")
return alert.text
else:
alert.dismiss()
print("Click in Dismiss")
except (NoAlertPresentException, NoSuchWindowException, TimeoutException):
print('Alerta no presente.')
except UnexpectedAlertPresentException:
print('Alerta inesperada.')
except Exception as e:
Functions.exception_logger(e)
print(f"Ocurrio un error inesperado.")
# ACCION CHAINS ####################################################################################################
def mouse_over(self, locator):
"""
Description:
Posiciona el mouse sobre un elemento y hace click sobre él.
Args:
locator (str): Locator del objeto que se quiere obtener.
Returns:
Retorna "True" si se pudo realizar la acción sobre el elemento, de lo contrario
imprime "No se encontro el value en el Json definido".
"""
get_entity = Selenium.get_entity(self, locator)
if get_entity is None:
return print("No se encontro el value en el Json definido.")
else:
try:
locator_types = {"id": By.ID, "xpath": By.XPATH, "link": By.PARTIAL_LINK_TEXT, "name": By.NAME}
locator_key = Selenium.get_locator_type.lower()
if locator_key in locator_types:
localizador = Selenium.driver.find_element(locator_types[locator_key], Selenium.value_to_find)
action = ActionChains(Selenium.driver)
action.move_to_element(localizador)
action.click(localizador)
action.perform()
print(u"mouse_over: " + locator)
return True
except TimeoutException:
print(u"mouse_over: No presente " + locator)
Selenium.tear_down(self)
return None
except StaleElementReferenceException:
print(u"element " + locator + " is not attached to the DOM")
Selenium.tear_down(self)
return None
def double_click_element(self, element: WebElement):
"""
Description:
Hace doble click con el mouse sobre un elemento.
Args:
element: Nombre del elemento que se quiere obtener.
"""
mouse_action = ActionChains(Selenium.driver)
mouse_action.double_click(element)
mouse_action.perform()
def drag_and_drop(self, origin_object, target_object):
"""
Description:
Arrastra y suelta un elemento con el mouse.
Args:
origin_object (str): Origen del elemento.
target_object (str): Destino del elemento.
"""
ActionChains(Selenium.driver).drag_and_drop(origin_object, target_object).perform()
def click_and_hold(self, origin_object, target_object):
"""
Description:
Mantiene un elemento clickeado.
Args:
origin_object (str): Origen del elemento.
target_object (str): Destino del elemento.
"""
mouse_action = ActionChains(Selenium.driver)
mouse_action.click_and_hold(origin_object).move_to_element(target_object).release(target_object)
mouse_action.perform()
# VALIDADORES ######################################################################################################
def check_element(self, locator): # devuelve true o false
"""
Description:
Verifica si existe un objeto dentro del json.
Args:
locator (str): Nombre del objeto que se quiere verificar.
Returns:
Retorna "True" si el elemento se visualiza en pantalla, de lo contrario
retorna "False".
"""
get_entity = Selenium.get_entity(self, locator)
if get_entity is None:
print("No se encontro el value en el Json definido")
else:
try:
locator_types = {"id": By.ID, "name": By.NAME, "xpath": By.XPATH, "link": By.LINK_TEXT,
"css": By.CSS_SELECTOR}
locator_key = Selenium.get_locator_type.lower()
if locator_key in locator_types:
wait = WebDriverWait(Selenium.driver, 20)
wait.until(ec.visibility_of_element_located((locator_types[locator_key], Selenium.value_to_find)))
print(u"check_element: Se visualizo el elemento " + locator)
return True
except NoSuchElementException:
print("check_element: No se encontró el elemento: " + Selenium.value_to_find)
return False
except TimeoutException:
print("check_element: No se encontró el elemento: " + Selenium.value_to_find)
return False
# FUNCIONES DE CONFIGURACIÓN #######################################################################################
def set_proyect(self, project_name=None):
"""
Description:
Setea variables de ambiente y rutas del proyecto.
Args:
project_name: Nombre del proyecto.
Returns:
Imprime por consola la siguiente configuración:
-Ambiente
-Ruta de Resource
-Ruta de Evidencias
-Ruta de los Json
-Ruta de las Imágenes de los json (reconocimiento por imágenes)
-Ruta de los Bass
Si hubo un error en la configuración, imprime por consola
"No se pudieron detectar los datos de la ejecución".
"""
Functions.set_proyect(self, project_name)
@staticmethod
def set_env(env):
"""
Description:
Configura una variable para la lectura de resources.
Args:
env: QA, TEST, PROD, ALT
Returns:
Funcion que configura la variable de ambiente para la lectura del resources
"""
return Functions.set_env(env)
@staticmethod
def set_excel_row(value: int):
Functions.set_excel_row(value)
@staticmethod
def set_manual_increment(value: bool):
Functions.set_manual_increment(value)
@staticmethod
def get_excel_row():
"""
Description:
Obtiene la row actual del excel.
Returns:
Imprime por consola "El numero del registro consultado es: "+ str(row)" y retorna la row.
"""
return Functions.get_row_excel()
def set_restore_excel_row(self):
"""
Description:
Restaura al value inicial el número de filas del excel.
Returns:
Imprime por consola "Se ha restaurado el numero de la row excel: " + str(Parameters.row).
"""
Functions.set_restore_excel_row()
@staticmethod
def set_increment_excel_row():
"""
Description:
Incrementa en 1 el número de filas del excel.
"""
Functions.set_increment_row()
@staticmethod
def get_current_time():
"""
Description:
Se obtiene la hora actual.
Returns:
Retorna la hora actual.
"""
return time.strftime(Parameters.time_format) # formato 24 horas
@staticmethod
def get_retry():
"""
Description:
Se obtiene la cantidad de reintentos por default
Returns:
Retorna (int) la cantidad de reintentos por default
"""
return Parameters.number_retries
@staticmethod
def set_highlight(value=True):
"""
Description:
Desactivar/activar el señalamiento highlight de la funcion get_element.
Args:
value: Valor booleano (seteado por default en True).
"""
Parameters.highlight = value
print(f"La opción highlight de get_element se ha configurado con el siguiente valor: {Parameters.highlight}")
def set_retry(self, numbers_retries):
"""
Description:
Se configura la cantidad de reintentos por default.
Args:
numbers_retries: Número entero que se utilizará como nuevo parámetro para
la búsqueda de reintentos de objetos en el DOM.
"""
Functions.set_retry(self, numbers_retries)
@staticmethod
def get_timeout_beetwen_retrys():
"""
Description:
Se obtiene el tiempo por default de espera entre reintentos.
Returns:
Retorna (int) el tiempo por default de espera entre reintentos.
"""
return Parameters.time_between_retries
@staticmethod
def set_timeout_beetwen_retrys(time_target):
"""
Description:
Se configura el tiempo por default de espera entre reintentos.
Args:
time_target: Número entero que se utilizará para configurar el tiempo de espera entre reintentos.
"""
Parameters.time_between_retries = time_target
print(f"El tiempo de espera entre reintentos es {Parameters.time_between_retries}.")
@staticmethod
def set_browser(browser):
"""
Description:
Setea el navegador por defecto.
Args:
browser (str): Navegador.
"""
Parameters.browser = browser
print(f"El navegador seleccionado es: {str(Parameters.browser)}.")
@staticmethod
def get_environment():
"""
Description:
Devuelve el environment (Sistema operativo) en el que se está corriendo la prueba.
"""
return Parameters.environment
@staticmethod
def get_mode_execution():
"""
Description:
Indica si el caso requiere ser debugeado.
Returns:
Devuelve valor de la variable de Parameters.debug (True o False)
que indica si el caso requiere ser debugeado.
"""
return Parameters.debug
@staticmethod
def set_mode_debug(status=True):
"""
Description:
Configura la variable Parameters.debug en verdadero.
Args:
status: Estado actual del debuger (True = Activado y False = Desactivado).
"""
Parameters.debug = status
@staticmethod
def get_mode_browser():
"""
Description:
Obtiene la configuración del headless del navegador.
Returns:
Devuelve la configuración del navegador (Headless ON/True o Headless OFF/False).
"""
return Parameters.headless
@staticmethod
def set_mode_browser(status=True):
"""
Description:
Setea el headless del navegador.
Args:
status: Estado actual del modo headless del browser (True = Activado y False = Desactivado).
"""
Parameters.headless = status
def read_cell(self, cell, case_name=None, specific_sheet=None) -> object:
"""
Description:
Lee la cell de un resource.
Args:
cell: Celda del resource.
case_name: Nombre del caso.
specific_sheet: Hoja del resource.
Returns:
Retorna el value de la cell del resource.
"""
return Functions.read_cell(self, cell, file_name=case_name, specific_sheet=specific_sheet)
def screenshot(self, description):
"""
Description:
Saca screenshot de pantalla para los reportes de allure y se agrega la descripción de la misma.
Args:
description: Descripción de la screenshot de pantalla.
Returns:
Retorna la imágen y descripción de la screenshot de pantalla.
"""
Selenium.page_has_loaded()
try:
allure.attach(Selenium.driver.get_screenshot_as_png(), description,
attachment_type=allure.attachment_type.PNG)
except Exception as e:
Functions.exception_logger(e)
print(f"No se pudo realizar la screenshot de pantalla.")
def image_for_debugger_report(self):
"""
Description:
Saca screenshot de pantalla para los reportes de allure y se agrega la descripción de la misma.
Returns:
Retorna la imágen y descripción de la screenshot de pantalla.
"""
try:
base_folder = f"C:\\testing-Automation\\projects\\{str(self.project_name)}\\src\\outputs\\"
if not os.path.exists(os.path.join(base_folder, "jira_report.png")):
self.memory_image = Selenium.driver.get_screenshot_as_png()
except Exception as e:
Functions.exception_logger(e)
print("Hubo inconvenientes al intentar generar la imagen para el reporte.")
# SERVICIOS WEB ####################################################################################################
def send_service(self, data):
"""
Description:
Envía un servicio.
Args:
data: recibe los siguientes parámetros en formato json:
tipoPeticion (str): Tipo de petición del servicio.
endPoint (str): Endpoint del servicio.
headers (str): Headers del servicio.
payload (str): Payload del servicio.
time (int): Tiempo de espera de la respuesta del servicio.
statusCodeEsperado (int): Codigo de estatus esperado en la respuesta.
responseEsperado: (dict_to_json):
Returns:
Retorna un request si la petición es exitosa y un array con las
diferencias obtenidas. De lo contrario imprime el error por consola.
"""
response = ""
validation_structure = "None"
differences = []
validate_cant_records = "None"
cant_registros_db = ""
cant_registros_api = ""
statuscode = None
validation_status_code = None
total_retry = Parameters.number_retries
# Se realiza el llamado a la api, si falla reintenta nuevamente.
for retry in range(total_retry):
if data['headers'] is not None:
try:
response = requests.request(data['tipoPeticion'], data['endPoint'], headers=data['headers'],
data=data['payload'], timeout=data['time'])
print(f"El servicio '{data['endPoint']}' respondio con status code {response.status_code}")
break
except requests.exceptions.Timeout:
print("Hubo un error por timeout al enviar el servicio")
else:
try:
response = requests.request(data['tipoPeticion'], data['endPoint'], timeout=data['time'])
print(f"El servicio '{data['endPoint']}' respondio con status code {response.status_code}")
break
except requests.exceptions.Timeout:
print("Hubo un error por timeout al enviar el servicio")
Selenium.wait(1)
# Se valida es status code del request.
try:
unittest.TestCase().assertNotEqual(str(type(response)), "<class 'str'>",
"Error Status Code: No se obtuvo response valido. Response de tipo String (TimeOut)")
validation_status_code = Selenium.validate_status_code(data['statusCodeEsperado'], response.status_code)
statuscode = response.status_code
except AttributeError as e:
Functions.exception_logger(e)
print(f"Error al obtener el status code del servicio.")
# Se valida la estructura/integridad del response.
if 'validateStructure' in data.keys():
if data['validateStructure']:
validation_structure, differences = Selenium.validate_structure(data['responseEsperado'], response)
else:
validation_structure = "None"
# Se compara la cantidad de registros entre la DB y la API.
if 'validateCantData' in data.keys():
if data['validateCantData']:
validate_cant_records, cant_registros_db, cant_registros_api = \
Selenium.validate_cant_records(data, response)
else:
validate_cant_records = "None"
cant_registros_db = "None"
cant_registros_api = "None"
# Se adjuntan las validaciones en formato HTML o formato JSON.
if 'attach_validations' in data.keys():
if data['attach_validations']:
# Se imprime en el template los datos utilizados para las validaciones.
if 'test_data' in data.keys():
data_validations = {
'precondition_data': data['test_data'],
'validations': [
{
'validation': 'Status code esperado',
'result': validation_status_code,
'status_code_esperado': data['statusCodeEsperado'],
'status_code_obtenido': response.status_code
},
{
'validation': 'Cantidad de registros',
'result': validate_cant_records,
'cantidad_datos_origen': cant_registros_db,
'cantidad_datos_destino': cant_registros_api
},
{
'validation': 'Estructura del response',
'result': validation_structure,
'differences': differences
}
]
}
else:
data_validations = {
'validations': [
{
'validation': 'Status code esperado',
'result': validation_status_code,
'status_code_esperado': data['statusCodeEsperado'],
'status_code_obtenido': response.status_code
},
{
'validation': 'Cantidad de registros',
'result': validate_cant_records,
'cantidad_datos_origen': cant_registros_db,
'cantidad_datos_destino': cant_registros_api
},
{
'validation': 'Estructura del response',
'result': validation_structure,
'differences': differences
}
]
}
# Formato de template que se adjunta en Allure.
if 'template' in data.keys():
file = Selenium.create_file_validations(self, data_validations, data['template'])
else:
file = Selenium.create_file_validations(self, data_validations, 'cards')
# Se adjunta el archivo HTML en un step de Allure existente o nuevo.
if 'step_allure' in data.keys():
if data['step_allure']:
with allure.step(u"PASO: Se realizan las siguientes validaciones"):
allure.attach.file(file, name="Validaciones", attachment_type=None, extension=".html")
else:
allure.attach.file(file, name="Validaciones", attachment_type=None, extension=".html")
else:
allure.attach.file(file, name="Validaciones", attachment_type=None, extension=".html")
# Se realizan los asserts de las validaciones.
for i in range(len(data_validations['validations'])):
validation = data_validations['validations'][i]['validation']
result = data_validations['validations'][i]['result']
if validation == "Status code esperado" and not result:
unittest.TestCase().assertEqual(data['statusCodeEsperado'], response.status_code,
f"El status code no es el esperado, el value obtenido es "
f"{response.status_code}")
elif validation == "Cantidad de registros" and not result:
unittest.TestCase().assertEqual(cant_registros_db, cant_registros_api,
"No coinciden la cantidad de datos entre origen y destino.")
elif validation == "Estructura del response" and not result:
unittest.TestCase().assertEqual(len(differences), 0,
"Se encontraron differences en la estructura del response.")
data_validations.clear()
else:
status_validation = {
'status_code_esperado': data['statusCodeEsperado'],
'status_code_obtenido': statuscode
}
Selenium.attach_json(status_validation)
else:
status_validation = {
'status_code_esperado': data['statusCodeEsperado'],
'status_code_obtenido': statuscode
}
Selenium.attach_json(status_validation)
return response, differences
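The retry loop at the top of `send_service` (attempt the request, break on success, sleep and retry on timeout) is a generic pattern that can be factored into a helper. In this standalone sketch, `requests.exceptions.Timeout` is replaced by a plain `TimeoutError` and the HTTP call by a fake callable, so the example runs without a network.

```python
import time

def call_with_retries(func, retries=3, delay=0.01, retry_on=(TimeoutError,)):
    """Calls `func`, retrying up to `retries` times on the given exceptions."""
    last_error = None
    for attempt in range(retries):
        try:
            return func()
        except retry_on as e:
            last_error = e
            time.sleep(delay)  # back off before the next attempt
    raise last_error

# Example: a flaky callable that fails twice, then succeeds.
attempts = {"count": 0}

def flaky():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise TimeoutError("simulated timeout")
    return "ok"

result = call_with_retries(flaky)
```

Re-raising the last error after the final attempt preserves the original failure for the caller, instead of returning an empty string the way the inline loop in `send_service` can.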
@staticmethod
def validate_status_code(status_code_esperado, status_code_obtenido):
"""
Description:
Se valida el status code de un servicio.
Args:
status_code_esperado (int): Código de estado esperado en la respuesta.
status_code_obtenido (int): Código de estado obtenido en la respuesta.
Returns:
Retorna un booleano con el resultado de la validación.
"""
if status_code_esperado == status_code_obtenido:
validation = True
else:
validation = False
return validation
@staticmethod
def validate_cant_records(data, response):
"""
Description:
Se valida la cantidad de registros de un servicio.
Args:
data: Diccionario con datos de DB.
response: Response obtenido en la respuesta.
Returns:
Retorna un booleano con el resultado de la validación.
Retorna la cantidad de datos obtenidos del origen y destino.
"""
cant_registros_api = 0
response = json.loads(response.text)
# Se obtiene cantidad de datos existentes en la DB.
cant_registros_db = Selenium.check_base_sqlserver(data['data_db']['server'], data['data_db']['base'],
data['data_db']['user'], None, data['data_db']['consulta'])
if len(cant_registros_db) > 0:
cant_registros_db = cant_registros_db[0]
# Se obtiene cantidad de datos existentes de la API.
# Se debe pasar una 'key' del response obtenido para que cuente la cantidad de registros devueltos por la API.
if str(type(response)) != "<class 'dict'>":
for i in range(len(response)):
if data['searchKey'] in response[i]:
cant_registros_api = cant_registros_api + 1
else:
if data['searchKey'] in response:
cant_registros_api = cant_registros_api + 1
if cant_registros_db == cant_registros_api:
validation = True
else:
validation = False
print(f"Cantidad de datos obtenidos desde la DB: {cant_registros_db}")
print(f"Cantidad de datos obtenidos desde la API: {cant_registros_api}")
return validation, cant_registros_db, cant_registros_api
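The API-side count in `validate_cant_records` reduces to "how many records contain the search key". A standalone version that handles both a single dict and a list of dicts, like the parsed JSON the method receives:

```python
def count_records_with_key(response, search_key):
    """Counts how many records in `response` contain `search_key`.

    Accepts either a single dict or a list of dicts, mirroring the two
    branches in validate_cant_records.
    """
    records = response if isinstance(response, list) else [response]
    return sum(1 for record in records if search_key in record)

count = count_records_with_key([{"id": 1}, {"id": 2}, {"name": "x"}], "id")
```

Using `isinstance(response, list)` instead of comparing `str(type(response))` against a literal is the idiomatic way to make the same distinction.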
@staticmethod
def validate_structure(expected_response, response_obtained):
"""
Description:
Se valida la estructura de un servicio.
Args:
expected_response: Diccionario con la estructura del response esperado.
response_obtained: Diccionario con la estructura del response obtenido.
Returns:
Retorna un booleano con el resultado de la validación.
Retorna un array con las diferencias encontradas.
"""
diferencias = Selenium.compare_structure(expected_response, response_obtained)
if len(diferencias) == 0:
validation = True
else:
validation = False
print(f"Response esperado: {expected_response}")
print(f"Response obtenido: {response_obtained.text}")
return validation, diferencias
@staticmethod
def compare_structure(expected_response, response_obtained):
"""
Description:
Compara estructuras de una respuesta esperada con una respuesta obtenida de un servicio.
Args:
expected_response: Respuesta esperada en formato json.
response_obtained: Respuesta obtenida en formato json.
Returns:
Retorna un array con las diferencias encontradas.
"""
differences = []
try:
unittest.TestCase().assertNotEqual(str(type(response_obtained)), "<class 'str'>",
"Error: El response obtenido es de tipo String")
response_obtained = json.loads(response_obtained.text)
except ValueError:
unittest.TestCase().fail("Error al convertir el json value_text en diccionario.")
if len(expected_response) > 0 and str(type(expected_response)) != "<class 'dict'>":
expected_response = expected_response[0]
if len(response_obtained) > 0 and str(type(response_obtained)) != "<class 'dict'>":
response_obtained = response_obtained[0]
# Busca y compara las key del json1 en json2.
for key in expected_response:
if key not in response_obtained.keys():
error = {
'description': 'Keys que se encuentran en origen pero no en destino',
'missing_key': key
}
differences.append(error)
# Busca y compara las key del json2 en json1.
for key in response_obtained:
if key not in expected_response.keys():
error = {
'description': 'Keys que se encuentran en destino pero no en origen',
'missing_key': key
}
differences.append(error)
return differences
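`compare_structure` reduces to a symmetric key difference between two dictionaries. The same idea in a compact, standalone form, using set operations on the key views instead of two membership loops:

```python
def key_differences(expected: dict, obtained: dict) -> list:
    """Returns one entry per key present in only one of the two dicts."""
    differences = []
    # Keys present in the expected response but missing from the obtained one.
    for key in expected.keys() - obtained.keys():
        differences.append({"description": "Keys que se encuentran en origen pero no en destino",
                            "missing_key": key})
    # Keys present in the obtained response but absent from the expected one.
    for key in obtained.keys() - expected.keys():
        differences.append({"description": "Keys que se encuentran en destino pero no en origen",
                            "missing_key": key})
    return differences

diffs = key_differences({"id": 1, "name": "a"}, {"id": 1, "email": "x"})
```

Like the method above, this only compares top-level keys; nested structures would need a recursive walk.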
def create_file_validations(self, data_validations, name_template):
"""
Description:
Crea un archivo html con las validaciones realizadas.
Args:
data_validations (dict): Diccionario con información de las validaciones realizadas.
name_template (str): Nombre del template a utilizar.
Return:
Retorna la ruta del archivo html creado.
"""
from datetime import datetime
replacement = ""
path = Functions.path_outputs
date_time = datetime.now().strftime("%d-%m-%y_%H-%M-%S-%f")
file_name = f"{date_time}.html"
path_file = os.path.join(path, file_name)
with open(path_file, 'a', encoding="utf-8") as f:
with open(os.path.join(path, f'{name_template}.html'), 'r', encoding="utf-8") as template:
f.write(template.read())
with open(path_file, 'r', encoding="utf-8") as f:
data_file = f.readlines()
data_string = json.dumps(data_validations)
data_string = f"data = {data_string};"
data_without_spaces = [i.strip() for i in data_file]
try:
index = data_without_spaces.index("window.onload = (event) => {")
except ValueError:
print("Hubo un error al generar el template de evidencias")
unittest.TestCase().fail("No se encontró el marcador 'window.onload' en el template de evidencias.")
data_file[index + 1] = data_string
for line in data_file:
line = line.strip()
changes = line.replace('\n', "")
changes = changes.replace('\\"', '"')
replacement = replacement + changes
f.close()
with open(path_file, 'w', encoding="utf-8") as f:
f.writelines(replacement)
f.close()
return path_file
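The core of `create_file_validations` is locating a marker line inside an HTML template and injecting a JSON payload on the following line. A standalone sketch of that injection step, operating on a list of lines rather than a file on disk:

```python
import json

def inject_data(template_lines, marker, payload):
    """Inserts `data = <payload as JSON>;` on the line after `marker`.

    Raises ValueError if the marker is absent, instead of continuing
    with an undefined index.
    """
    stripped = [line.strip() for line in template_lines]
    index = stripped.index(marker)  # ValueError if the marker is missing
    result = list(template_lines)
    result[index + 1] = f"data = {json.dumps(payload)};"
    return result

lines = ["<script>", "window.onload = (event) => {", "PLACEHOLDER", "};", "</script>"]
out = inject_data(lines, "window.onload = (event) => {", {"result": True})
```

Stripping whitespace before the `index` lookup is what makes the marker match regardless of the template's indentation, the same trick the method uses.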
@staticmethod
def attach_json(dict_to_json):
"""
Description:
Adjunta la validación del status code en un paso de Allure en formato json.
Args:
dict_to_json (dict): Diccionario con información de la validación realizada.
"""
with allure.step(u"PASO: Se valida el status code esperado"):
allure.attach(json.dumps(dict_to_json, indent=4), "Validación de status code",
attachment_type=allure.attachment_type.JSON)
unittest.TestCase().assertEqual(dict_to_json['status_code_esperado'], dict_to_json['status_code_obtenido'],
f"El status code no es el esperado, el value obtenido es "
f"{dict_to_json['status_code_obtenido']}")
def get_random(self, min_range, max_range):
"""
Description:
Obtiene un número aleatorio del rango especificado.
Args:
min_range (int): Rango mínimo.
max_range (int): Rango máximo.
Returns:
Retorna un número aleatorio.
"""
return Functions.get_random(self, min_range, max_range)
@staticmethod
def get_random_string(numbers_characters):
"""
Description:
Genera una palabra random.
Args:
numbers_characters: Recibe la cantidad de caracteres que debe contener el value_text a generar.
Returns:
Devuelve un value_text random.
"""
value = ""
letters = string.ascii_letters
for i in range(int(numbers_characters)):
value_partial = str(random.choice(letters))
value = f"{value}{value_partial}"
return value
@staticmethod
def get_random_by_date(type_value="value_text"):
"""
Description:
Genera un value a partir de la fecha que puede ser integer o value_text.
Args:
type_value: El tipo de value que se desea recibir.
Returns:
Devuelve un integer con la variable generada a partir de la fecha.
Devuelve un string con la variable generada a partir de la fecha.
"""
if type_value == "value_text":
return str(time.strftime("%d%m%Y%H%M%S"))
if type_value == "integer":
return int(time.strftime("%d%m%Y%H%M%S"))
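`get_random_by_date` derives a quasi-unique token from the current timestamp (`ddmmYYYYHHMMSS`). The same idea as a small standalone helper, with the string/integer choice as a boolean flag instead of a string parameter:

```python
import time

def timestamp_token(as_int=False):
    """Builds a quasi-unique token from the current date and time (ddmmYYYYHHMMSS).

    Note: two calls within the same second produce the same token, so this
    is only unique at one-second resolution.
    """
    token = time.strftime("%d%m%Y%H%M%S")
    return int(token) if as_int else token

token = timestamp_token()
```

The resulting string is always 14 digits, which makes it convenient for padding usernames or filenames in test data.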
@staticmethod
def get_random_list_unique(min_range, max_range, number_results):
"""
Description:
Obtiene números aleatorios de una lista del rango especificado.
Args:
min_range (int): Rango mínimo.
max_range (int): Rango máximo.
number_results (int): Cantidad de números a obtener.
Returns:
Retorna números aleatorios.
"""
return random.sample(range(min_range, max_range), number_results)
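A quick standalone sketch of what `get_random_list_unique` delegates to: `random.sample` draws without replacement, so the returned values are guaranteed unique within the range.

```python
import random

# random.sample draws without replacement, so all values are unique
numbers = random.sample(range(1, 100), 5)

assert len(numbers) == len(set(numbers)) == 5
assert all(1 <= n < 100 for n in numbers)
```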
# BASE DE DATOS ####################################################################################################
def set_timeout_base_sql_server(self, time_seconds):
"""
Description:
Configura el value de timeout (segundos) configurado para las conexiones a bases sqlServer.
Args:
time_seconds: Valor (int) que representa una cantidad en segundos.
"""
Functions.set_timeout_base_sql_server(self, time_seconds)
def get_timeout_base_sql_server(self):
"""
Description:
Devuelve el value de timeout configurado para la conexion a bases sqlServer.
Return:
Devuelve el value de timeout (segundos) configurado para la conexion a bases sqlServer.
"""
return Functions.get_timeout_base_sql_server(self)
def establish_connection_sqlserver(self, db_name):
"""
Description:
Realiza conexión a una base de datos sqlServer.
        Args:
            db_name: Nombre de la base de datos configurada.
Return:
Devuelve una variable con la conexion a la base de datos sqlServer.
"""
return Functions.establish_connection_sqlserver(self, db_name)
def check_base_sqlserver(self, db_name, query):
"""
Description:
Realiza conexión y consulta a base de datos con la libreria pyodbc. El metodo incluye la
desconexión.
        Args:
            db_name: Nombre de la base de datos configurada.
            query: Consulta Query.
Returns:
<class 'pyodbc.Row'>: Retorna un class 'pyodbc.Row' si la consulta y la conexión es exitosa. De lo
contrario imprime por consola "Se produjo un error en la base de datos."
"""
return Functions.check_base_sqlserver(self, db_name, query)
def execute_sp_base_sqlserver(self, db_name, query, parameters: tuple):
"""
Description:
Realiza conexión y consulta a base de datos con la libreria pyodbc. El metodo incluye la
desconexión.
        Args:
            db_name (str): Nombre de la base de datos configurada.
            query (str): Consulta Query.
            parameters (tuple): Tupla con parametros para el sp.
Returns:
Lista con los resultados.
"""
return Functions.execute_sp_base_sqlserver(self, db_name, query, parameters)
def get_list_base_sqlserver(self, db_name, query):
"""
Description:
Realiza conexión y consulta a base de datos con la libreria pyodbc. El metodo incluye la
desconexión.
        Args:
            db_name (str): Nombre de la base de datos configurada.
            query (str): Consulta Query.
Returns:
Lista con los resultados.
"""
return Functions.get_list_base_sqlserver(self, db_name, query)
def delete_reg_base_sqlserver(self, db_name, query):
"""
Description:
Elimina un registro de la base de datos. El método incluye la desconexión.
        Args:
            db_name: Nombre de la base de datos configurada.
            query: Consulta Query.
Returns:
Imprime por consola "Ocurrió un error en la base".
"""
Functions.delete_reg_base_sqlserver(self, db_name, query)
@staticmethod
def check_base_oracle(server, base, encoding, user, password, query):
"""
Description:
Realiza la conexión y consulta a base de datos Oracle. El método incluye la desconexión.
Args:
server: Servidor ip.
base: Nombre de la base.
encoding: Tipo de codificación de la base.
user: Usuario.
password: Contraseña.
query: Consulta Query.
Returns:
<class 'cx_Oracle.Row'>: Retorna un class 'cx_Oracle.Row' si la consulta y la conexión es exitosa. De lo
contrario imprime el error por consola.
"""
record = ""
connection = None
dsn = server + '/' + base
try:
connection = cx_Oracle.connect(user, password, dsn, encoding=encoding)
cursor = connection.cursor()
cursor.execute(query)
for row in cursor:
record = row
return record
except cx_Oracle.Error as error:
print(error)
finally:
if connection:
connection.close()
# FUNCIONES DE TIEMPO ##############################################################################################
@staticmethod
def get_date():
"""
Description:
Obtiene la fecha del sistema.
Returns:
Retorna fecha del sistema.
"""
        global_date = time.strftime(Parameters.date_format)  # formato dd/mm/aaaa
        print(f'Fecha del sistema {global_date}')
        return global_date
@staticmethod
def get_time():
"""
Description:
Obtiene la hora del sistema.
Returns:
Retorna la hora del sistema.
"""
        global_time = time.strftime(Parameters.time_format)  # formato 24 horas
        print(f'Hora del sistema {global_time}')
        return global_time
@staticmethod
def get_date_time():
"""
Description:
Obtiene la fecha y hora del sistema.
Returns:
Retorna fecha y la hora del sistema.
"""
global_date = time.strftime(Parameters.date_format) # formato dd/mm/aaaa
        global_time = time.strftime(Parameters.time_format)  # formato 24 horas
date_time = f'{global_date} {global_time}'
print(f"La fecha y hora del sistema es: {date_time}")
return date_time
@staticmethod
def get_difference_datetime(datetime_one, datetime_two):
"""
Description:
Calcula la diferencia entre dos fechas.
Args:
datetime_one: Fecha.
datetime_two: Fecha.
Returns:
Retorna la diferencia entre dos fechas.
"""
format_date = Parameters.date_format + " " + Parameters.time_format
datetime_one = datetime.datetime.strptime(datetime_one, format_date)
datetime_two = datetime.datetime.strptime(datetime_two, format_date)
difference = datetime_one - datetime_two
print(f"Diferencia de fechas: {difference}")
return difference
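A self-contained sketch of the `strptime`-based difference that `get_difference_datetime` computes; the dd/mm/aaaa format is hard-coded here in place of `Parameters.date_format`.

```python
import datetime

# Same strptime-based diff as get_difference_datetime, with the
# dd/mm/aaaa HH:MM:SS format written out literally
fmt = "%d/%m/%Y %H:%M:%S"
a = datetime.datetime.strptime("02/01/2024 10:00:00", fmt)
b = datetime.datetime.strptime("01/01/2024 09:30:00", fmt)
difference = a - b

assert difference == datetime.timedelta(days=1, minutes=30)
```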
@staticmethod
def convert_bits_to_date(date_bit):
"""
Description:
Convierte una fecha de formato BIT a una fecha en formato DATE.
Args:
date_bit: Recibe una fecha en formato Bit.
Returns:
Devuelve una fecha en formato date.
"""
timestamp_with_ms = date_bit
timestamp, ms = divmod(timestamp_with_ms, 1000)
dt = datetime.datetime.fromtimestamp(timestamp) + datetime.timedelta(milliseconds=ms)
formatted_time = dt.strftime('%Y-%m-%d %H:%M:%S')
return formatted_time
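The millisecond split in `convert_bits_to_date` can be sketched standalone; this version uses UTC instead of the local-time `fromtimestamp` so the result is deterministic.

```python
import datetime

def millis_to_date(ms):
    # Same divmod split as convert_bits_to_date, but in UTC so the
    # result does not depend on the machine's timezone
    seconds, rem = divmod(ms, 1000)
    dt = datetime.datetime.utcfromtimestamp(seconds) + datetime.timedelta(milliseconds=rem)
    return dt.strftime('%Y-%m-%d %H:%M:%S')

assert millis_to_date(0) == '1970-01-01 00:00:00'
```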
@staticmethod
def add_delta_hours_to_datetime(add_time_delta: tuple):
"""
Description:
Suma un tiempo delta definido en horas, minutos y segundos a la fecha y hora actual.
Args:
add_time_delta: tupla con el tiempo (horas, minutos, segundos) que desea ser agregado.
Return:
Devuelve la fecha actual con el tiempo adicional.
"""
add_time = datetime.timedelta(hours=add_time_delta[0], minutes=add_time_delta[1], seconds=add_time_delta[2])
now = datetime.datetime.now()
new_datetime = now + add_time
date_time_format = f"{Parameters.date_format} {Parameters.time_format}"
new_datetime = new_datetime.strftime(date_time_format)
return new_datetime
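The `timedelta` addition performed by `add_delta_hours_to_datetime`, sketched with a fixed "now" instead of `datetime.datetime.now()` so the output is deterministic.

```python
import datetime

# timedelta addition as in add_delta_hours_to_datetime, with a fixed
# base datetime to keep the result reproducible
add_time = datetime.timedelta(hours=1, minutes=30, seconds=0)
now = datetime.datetime(2024, 1, 1, 12, 0, 0)
new_datetime = (now + add_time).strftime("%d/%m/%Y %H:%M:%S")

assert new_datetime == "01/01/2024 13:30:00"
```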
@staticmethod
def hour_rounder(date_time: str):
"""
Description:
Redondea la hora de la fecha actual.
Args:
date_time: Recibe una fecha en formato value_text con formato "%H:%M:%S %d/%m/%Y"
Return:
Devuelve la fecha redondeada.
"""
date_time_format = f"{Parameters.date_format} {Parameters.time_format}"
date_time = datetime.datetime.strptime(date_time, date_time_format)
return (date_time.replace(second=0, microsecond=0, minute=0, hour=date_time.hour) +
datetime.timedelta(hours=date_time.minute // 30))
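The rounding rule inside `hour_rounder` in isolation: zero out minutes and seconds, then bump the hour when the original minutes were 30 or more.

```python
import datetime

# hour_rounder's rule: truncate to the hour, then add one hour
# when the discarded minutes were >= 30
dt = datetime.datetime(2024, 1, 1, 10, 45)
rounded = dt.replace(second=0, microsecond=0, minute=0) + datetime.timedelta(hours=dt.minute // 30)

assert rounded == datetime.datetime(2024, 1, 1, 11, 0)
```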
@staticmethod
def convert_date_to_bit(date_target):
"""
Description:
Convierte una fecha de formato date a una fecha en formato BIT.
Args:
date_target: Recibe una fecha en formato date.
Returns:
Devuelve una fecha en formato bit de 13 digitos.
"""
unixtime = int(datetime.datetime.timestamp(date_target) * 1000)
return unixtime
# FUNCIONES INFORMES ###############################################################################################
def send_mail(self, receiver_email: list, title, content, file_attach=None):
"""
Description:
Envía un informe vía email.
Args:
            receiver_email (list): Destinatarios del correo.
title (str): Asunto del correo.
content (str): Cuerpo del correo.
file_attach (file): Archivos adjuntos del correo.
Returns:
Si el correo fue enviado con éxito retorna el estado "Enviado",
de lo contrario imprime por consola "El mail no pudo ser enviado" y estado "No enviado".
"""
return Functions.send_mail(self, receiver_email, title, content, file_attach=file_attach)
def create_title(self, title_text: str):
"""
Descripcion:
Crea un título en formato html.
Args:
title_text: Título en formato value_text.
Return:
Devuelve título en formato html.
"""
return Functions.create_title(title_text)
@staticmethod
def create_message_html(message_text: str, special_strings=None):
"""
Descripcion:
Crea un párrafo en formato html.
Args:
message_text: párrafo en formato value_text.
special_strings: Lista de palabras que deben ser resaltadas en negrita dentro del mensaje.
Return:
Devuelve párrafo en formato html.
"""
if special_strings is None:
special_strings = []
return Functions.create_message_html(message_text, special_strings)
def create_table(self, list_data_head: list, list_data_content: list):
"""
Descripcion: crea una tabla html.
Args:
list_data_head: Lista con los encabezados de la tabla.
list_data_content: Matriz (lista con lista) con los datos de la tabla.
Return:
Devuelve una tabla en formato html.
"""
return Functions.create_table(list_data_head, list_data_content)
def create_style_html(self):
"""
Description:
Devuelve el código css con los estilos que deben aplicarse a un bloque HTML.
Return:
Devuelve el estilo para aplicar al código html.
"""
return Functions.create_style_html()
def apply_style_css_to_block(self, block_html: str):
"""
Description:
Aplica estilos css a un bloque html.
Args:
block_html: Bloque html que recibirá los estilos css.
Return:
Devuelve un bloque html con estilos aplicados.
"""
return Functions.apply_style_css_to_block(block_html)
@staticmethod
def print_precondition_data(precondition_json):
"""
Description:
            Adjunta en el reporte de Allure un json con los datos pre condición utilizados en la prueba.
Args:
precondition_json (str): Datos pre condición en formato json.
"""
        with allure.step(u"PASO: Se utilizan los siguientes datos como pre condición"):
allure.attach(json.dumps(precondition_json, indent=4),
"Datos pre condición",
attachment_type=allure.attachment_type.JSON)
def set_new_value_json(self, json_data, claves, valor_nuevo=None):
        """
        Description:
            Modifica el value de una key del json. La key puede indicarse anidada
            con notación de puntos (por ejemplo "cliente.direccion.calle").
        Args:
            json_data: Diccionario o lista con el contenido del json original.
            claves: Key a modificar (str con notación de puntos o lista de keys).
            valor_nuevo: (opc) Nuevo value a asignar.
        Returns:
            Json modificado con el nuevo value.
        """
if isinstance(claves, str):
claves = claves.split('.')
if len(claves) == 1:
if isinstance(json_data, list):
json_data[0][claves[0]] = valor_nuevo
else:
json_data[claves[0]] = valor_nuevo
else:
if isinstance(json_data, list):
Selenium.set_new_value_json(json_data[0][claves[0]], claves[1:], valor_nuevo)
else:
Selenium.set_new_value_json(json_data[claves[0]], claves[1:], valor_nuevo)
return json_data
def delete_value_json(self, json_data, claves):
        """
        Description:
            Elimina una key y su value del json. La key puede indicarse anidada
            con notación de puntos.
        Args:
            json_data: Diccionario o lista con el contenido del json original.
            claves: Key a eliminar (str con notación de puntos o lista de keys).
        Returns:
            Json modificado sin la key eliminada.
        """
if isinstance(claves, str):
claves = claves.split('.')
if len(claves) == 1:
if isinstance(json_data, list):
del json_data[0][claves[0]]
else:
del json_data[claves[0]]
else:
if isinstance(json_data, list):
Selenium.delete_value_json(json_data[0][claves[0]], claves[1:])
else:
Selenium.delete_value_json(json_data[claves[0]], claves[1:])
return json_data
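The dotted-key recursion used by `set_new_value_json` and `delete_value_json` can be sketched standalone; this dict-only version omits the list-unwrapping branch of the real methods.

```python
# Dotted-key update, as in set_new_value_json (dict-only sketch;
# the real method also walks into lists)
def set_nested(json_data, claves, valor_nuevo):
    if isinstance(claves, str):
        claves = claves.split('.')
    if len(claves) == 1:
        json_data[claves[0]] = valor_nuevo
    else:
        set_nested(json_data[claves[0]], claves[1:], valor_nuevo)
    return json_data

doc = {"cliente": {"direccion": {"calle": "A"}}}
set_nested(doc, "cliente.direccion.calle", "B")
assert doc["cliente"]["direccion"]["calle"] == "B"
```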
####################################### Jira conections ############################################################
def write_cell(self, cell, value, name, folder='files', sheet=None):
"""
Description:
Permite escribir en una celda indicada de una hoja especifica para un
libro de excel en directorio ./inputs/.
Args:
cell (obj): Celda de la hoja, se espera COLUMNA+FILA.
value (str): Valor a ingresar en la celda.
name (str): Nombre del libro de excel, en el directorio ./inputs/.
sheet (str): Hoja especifica del libro excel.
folder (str): Nombre de la carpeta que contiene el libro excel. Es 'files' por default o puede ser
'downloads'.
Returns:
Imprime por consola la celda, hoja y valor escrito, y devuelve TRUE
en caso contrario imprime por consola "VERIFICAR: No se pudo escribir el archivo."
y devuelve FALSE.
"""
return Functions.write_cell(self, cell, value, name, folder, sheet)
@staticmethod
def available_port():
"""
Description:
Busca un puerto disponible.
Returns:
Devuelve el puerto disponible.
"""
import socket
sock = socket.socket()
sock.bind(('', 0))
return sock.getsockname()[1]
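The trick behind `available_port`: binding to port 0 asks the OS for any free port, and `getsockname` reveals which one was assigned.

```python
import socket

# Bind to port 0 so the OS picks a free port, then read it back
sock = socket.socket()
sock.bind(('', 0))
port = sock.getsockname()[1]
sock.close()

assert 0 < port < 65536
```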
@staticmethod
def set_exception_loggin(value: bool):
"""
Description:
Configura el logeo de las excepciones
Args:
value: true o false
"""
Functions.set_exception_loggin(value)
@staticmethod
def color_message(color, message):
"""
Description: Colorea el string del color indicado de entre una lista de colores.
Args:
color: puede ser de color red, blue, yellow o green.
message: string a colorear.
Returns:
string coloreado.
"""
return Functions.color_message(color, message)
def open_new_tab(self, web_url):
# Abrir nueva tab
Selenium.driver.execute_script("window.open('about:blank', 'secondtab');")
# Cambio de prioridad a la segunda ventana
Selenium.driver.switch_to.window("secondtab")
# ingresar a la web deseada en la nueva tab
Selenium.driver.get(f'{web_url}')
def check_environment_files(self, father_attribute, attribute_to_search, data_find, data_inner_key,
xml_en_consola=False):
encryptor = Functions.Encryptor(father_attribute, attribute_to_search, data_find, data_inner_key,
xml_en_consola)
data = encryptor.main()
return data
class ElementUI(Selenium):
def __init__(self, context_element, driver, json_value, json_get_indicator, entity):
self.retry = 0
self.element = context_element
self.driver = driver
self.json_ValueToFind = json_value
self.json_GetFieldBy = json_get_indicator
self.entity = entity
self.message = None
self.exception = None
def click(self):
self.execute_action("click")
def click_js(self):
self.execute_action("click_js")
def double_click(self):
self.execute_action("double_click")
def send_keys(self, value):
self.execute_action("send_keys", value)
def send_special_key(self, value):
self.execute_action("send_special_key", value)
@property
def text(self):
return self.execute_action("text")
def clear(self):
self.execute_action("clear")
def clear_js(self):
self.execute_action("clear_js")
def is_enabled(self):
return self.execute_action("is_enabled")
def is_selected(self, value):
return self.execute_action("is_selected", value)
def is_displayed(self):
return self.execute_action("is_displayed")
def get_property(self, value):
return self.execute_action("get_property", value)
def get_attribute(self, value):
return self.execute_action("get_attribute", value)
def capture(self):
return self.execute_action("capture")
def select_option_by_text(self, value):
return self.execute_action("select_option_by_text", value)
def select_option_by_value(self, value):
return self.execute_action("select_option_by_value", value)
def select_option_by_index(self, value):
return self.execute_action("select_option_by_index", value)
def get_all_values_to_select(self):
return self.execute_action("get_all_values_to_select")
def select_action(self, action, value=None):
self.message = ""
if action == "click":
self.message = "realizar click"
return self.element.click()
if action == "click_js":
self.message = "realizar click"
self.driver.execute_script("arguments[0].click();", self.element)
if action == "double_click":
self.message = "realizar doble click"
return Selenium.double_click_element(self, self.element)
if action == "send_keys":
self.message = "escribir en el campo"
return self.element.send_keys(value)
if action == "send_special_key":
self.message = f"presionar la tecla {value} en el objetivo"
key = value.upper()
if key == 'ENTER':
self.element.send_keys(Keys.ENTER)
if key == 'TAB':
self.element.send_keys(Keys.TAB)
if key == 'ESPACIO':
self.element.send_keys(Keys.SPACE)
if key == 'ESCAPE':
self.element.send_keys(Keys.ESCAPE)
if key == 'RETROCESO':
self.element.send_keys(Keys.BACKSPACE)
if key == 'SUPRIMIR':
self.element.send_keys(Keys.DELETE)
if key == "ABAJO":
self.element.send_keys(Keys.ARROW_DOWN)
if key == "F3":
self.element.send_keys(Keys.F3)
if key == "F4":
self.element.send_keys(Keys.F4)
if action == "text":
self.message = "obtener texto del campo"
return self.element.text
if action == "clear":
self.message = "limpiar el texto del campo"
return self.element.clear()
if action == "clear_js":
self.message = "limpiar campo"
self.driver.execute_script('arguments[0].value="";', self.element)
if action == "is_enabled":
self.message = "verificar el estado del objeto"
return self.element.is_enabled()
if action == "is_selected":
self.message = "verificar si el objeto es seleccionable"
return self.element.is_selected()
if action == "is_displayed":
self.message = "visualizar el objeto"
return self.element.is_displayed()
if action == "get_property":
self.message = "obtener las propiedades del objeto"
return self.element.get_property(value)
if action == "get_attribute":
self.message = "obtener los atributos del objeto"
return self.element.get_attribute(value)
if action == "capture":
self.message = "capturar pantalla"
Selenium.highlight(self, self.element)
Selenium.screenshot(self, f"Se visualiza el objeto {self.entity}")
return
if action == "select_option_by_value":
self.message = f"seleccionar value {value} de la lista"
Select(self.element).select_by_value(value)
if action == "select_option_by_text":
self.message = f"seleccionar value {value} de la lista"
Select(self.element).select_by_visible_text(value)
if action == "select_option_by_index":
self.message = f"seleccionar value {value} de la lista"
Select(self.element).select_by_index(value)
if action == "get_all_values_to_select":
list_value = []
self.message = f"obtener todos los valores de la lista {self.element}"
for option_value in Select(self.element).options:
list_value.append(option_value.get_attribute("value"))
return list_value
def execute_action(self, action, value=None):
out = None
self.message = None
while self.retry <= Parameters.number_retries:
if self.retry < Parameters.number_retries:
try:
out = self.select_action(action, value)
break
except StaleElementReferenceException:
self.retry += 1
self.exception = "StaleElementReferenceException"
                    self.message = f'--{self.exception}-- No se ha podido {self.message} ' \
                                   f'debido a que el objeto "{self.entity}" ha sido actualizado repentinamente.'
Selenium.message_error = self.message
Selenium.exception = self.exception
except ElementClickInterceptedException:
self.exception = "ElementClickInterceptedException"
self.retry = Parameters.number_retries
self.message = f'--{self.exception}-- No se ha podido {self.message} ' \
f'debido a que el objeto "{self.entity}" se encuentra solapado por otro objeto.'
Selenium.message_error = self.message
Selenium.exception = self.exception
except ElementNotInteractableException:
self.retry += 1
self.exception = "ElementNotInteractableException"
self.message = f'--{self.exception}-- No se ha podido {self.message} ' \
f'debido a que el objeto "{self.entity}" no esta disponible.'
Selenium.message_error = self.message
Selenium.exception = self.exception
except NoSuchElementException:
self.retry += 1
self.exception = "NoSuchElementException"
self.message = f'--{self.exception}-- No se ha podido {self.message} ' \
f'debido a que el objeto "{self.entity}" no esta disponible.'
Selenium.message_error = self.message
Selenium.exception = self.exception
except Exception as e:
self.retry += 1
self.exception = type(e).__name__
self.message = f'--{self.exception}-- No se ha podido {self.message} ' \
f'debido a que ha ocurrido un error inesperado.'
Selenium.message_error = self.message
Selenium.exception = self.exception
self.element = Selenium.capture_element(self, self.entity)
else:
if Selenium.debugger(self, self.element) == 1:
self.retry = 0
self.element = Selenium.capture_element(self, self.entity)
else:
Selenium.screenshot(self, "Ultima screenshot antes de finalizar la ejecución.")
Selenium.tear_down(self)
                    if not self.message:
                        self.message = f"--{self.exception}-- Se produjo un error inesperado."
unittest.TestCase().fail(self.message)
print(f"{Functions.color_message('GREEN', 'REALIZADO:')} Se pudo {self.message} sobre "
f"'{Functions.color_message('BLUE', self.entity)}'.")
Selenium.message_container = self.message
Selenium.lista_pasos.append(action)
if out is not None:
return out
# ======================================================================
# Package: Andreani-QA-Selenium
# Path: /Andreani_QA_Selenium-0.0.18.tar.gz/Andreani_QA_Selenium-0.0.18/Andreani_QA_Selenium/Selenium.py
# File: Selenium.py
# ======================================================================
import ctypes
import os
import sys
import threading
from ctypes import *
dll_dir = os.path.dirname(os.path.abspath(__file__))
os.add_dll_directory(dll_dir)
N_dll = CDLL("N.dll")
Run_ID = 0
lock_ID = threading.Lock() # 创建一个锁对象
# 创建一个全局变量以存储回调函数
callback_func = None
def GetRun_ID():
"""获取运行的ID"""
global Run_ID
with lock_ID: # 使用锁来保证 Run_ID 的线程安全
Run_ID = Run_ID + 1
if Run_ID > 200:
Run_ID = 1
return Run_ID
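The lock-guarded wrap-around counter in `GetRun_ID`, restated as a self-contained sketch with its own module state.

```python
import threading

# Same pattern as GetRun_ID: a lock protects the shared counter,
# which wraps back to 1 after 200
_lock = threading.Lock()
_run_id = 0

def get_run_id():
    global _run_id
    with _lock:
        _run_id += 1
        if _run_id > 200:
            _run_id = 1
        return _run_id

assert get_run_id() == 1
assert get_run_id() == 2
```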
def on_callback(Msg=None):
"""回调函数"""
print("callback:", Msg.decode("gbk"))
return 42
def callback_Add():
# 建立一个全局的回调函数
# 将 Python 函数转换为 C 可调用的函数指针
callback_type = WINFUNCTYPE(c_int, c_char_p)
global callback_func
callback_func = callback_type(on_callback)
# 打印函数指针的整数表示
callback_int = cast(callback_func, c_void_p).value
return callback_int
def initialize(callback=True):
"""初始化"""
callback_int = callback_Add() if callback else 0
# 建立回调函数
N_initialize = N_dll.N_initialize
r = N_initialize(callback_int)
return string_at(r).decode("gbk")
def login(ID, Uin, Password, Guid=None):
"""常规登录"""
Guid = Guid or ""
N_Login = N_dll.N_Login
N_Login.argtypes = [c_int, c_char_p, c_char_p, c_char_p]
result = string_at(
N_Login(ID, Uin.encode("gbk"), Password.encode("gbk"), Guid.encode("gbk"))
)
return result.decode("gbk")
def login_tailless(ID, TokenA):
"""无尾模式"""
N_login_tailless = N_dll.N_login_tailless
r = N_login_tailless(c_int(ID), c_char_p(TokenA.encode("gbk")))
return string_at(r).decode("gbk")
def login_Submit_slider(ID, Ticket):
"""提交滑块"""
    N_login_Submit_slider = N_dll.N_login_Submit_slider
    r = N_login_Submit_slider(ID, c_char_p(Ticket.encode("gbk")))
return string_at(r).decode("gbk")
def login_Send_verification_to_the_phone(ID):
"""发送验证码到手机"""
N_login_Send_verification_to_the_phone = (
N_dll.N_login_Send_verification_to_the_phone
)
r = N_login_Send_verification_to_the_phone(ID)
return string_at(r).decode("gbk")
def login_Submit_verificationcode(ID, code):
"""设备锁提交验证码"""
N_login_Submit_verificationcode = N_dll.N_login_Submit_verificationcode
r = N_login_Submit_verificationcode(ID, c_char_p(code.encode("gbk")))
return string_at(r).decode("gbk")
def Scan_code_authorization(ID, k, TokenA):
"""扫码授权"""
N_Scan_code_authorization = N_dll.N_Scan_code_authorization
r = N_Scan_code_authorization(
ID, c_char_p(k.encode("gbk")), c_char_p(TokenA.encode("gbk"))
)
return string_at(r).decode("gbk")
def Scan_code_authorization_new(ID, k, TokenA, _Type):
"""扫码授权
Type=0 扫码
Type=1 允许授权
"""
N_Scan_code_authorization_new = N_dll.N_Scan_code_authorization_new
r = N_Scan_code_authorization_new(
ID, c_char_p(k.encode("gbk")), c_char_p(TokenA.encode("gbk")), c_int(_Type)
)
return string_at(r).decode("gbk")
def Scan_code_assist(ID, str_url):
"""扫码——辅助验证"""
N_Scan_code_assist = N_dll.N_Scan_code_assist
r = N_Scan_code_assist(ID, c_char_p(str_url.encode("gbk")))
return string_at(r).decode("gbk")
def Refresh_token(ID):
"""
刷新令牌,刷新成功后将返回新的解登录包,也可以通过GetTokenA获取新的TokenA
"""
N_login_Refresh_token = N_dll.N_login_Refresh_token
r = N_login_Refresh_token(ID)
return string_at(r).decode("gbk")
def GetTokenA(ID):
"""获取当前运行ID的TokenA"""
N_GetTokenA = N_dll.N_GetTokenA
r = N_GetTokenA(ID)
return string_at(r).decode("gbk")
def Group_Get_condition(ID, Group):
"""获取群条件"""
N_Group_Get_condition = N_dll.N_Group_Get_condition
r = N_Group_Get_condition(ID, c_int64(Group))
return string_at(r).decode("gbk")
def N_subscribe_unfollow(ID, Target):
"""
取消订阅号关注
2720152058 QQ团队
1770946116 安全中心
"""
N_subscribe_unfollow = N_dll.N_subscribe_unfollow
r = N_subscribe_unfollow(ID, c_int64(Target))
return string_at(r).decode("gbk")
def AS_Get_login_infor(ID, type_):
"""
账号安全_获取登陆信息
1 在线设备 2 历史设备 3 在线和历史不区分
"""
N_AS_Get_login_infor = N_dll.N_AS_Get_login_infor
r = N_AS_Get_login_infor(ID, c_int(type_))
return string_at(r).decode("gbk")
def AS_Del_login_Infor(ID, target):
"""
账号安全_删除设备信息
target为获取设备信息里面的j7
"""
N_AS_Del_login_Infor = N_dll.N_AS_Del_login_Infor
    r = N_AS_Del_login_Infor(ID, c_char_p(target.encode("gbk")))
return string_at(r).decode("gbk")
def auth_get_list(ID, num):
"""授权获取授权列表"""
N_auth_get_list = N_dll.N_auth_get_list
r = N_auth_get_list(ID, c_int(num))
return string_at(r).decode("gbk")
def Get_Phone(ID):
    """获取手机号"""
N_Get_Phone = N_dll.N_Get_Phone
r = N_Get_Phone(ID)
return string_at(r).decode("gbk")
def TCP_Send(ID, data, wait, ssoseq):
"""TCP发送数据"""
N_TCP_Send = N_dll.N_TCP_Send
    r = N_TCP_Send(ID, c_char_p(data), c_int(wait), c_int(ssoseq))
return string_at(r).decode("gbk")
def Get_version():
"""获取版本号"""
r = N_dll.Get_Version_infor()
return string_at(r).decode("gbk")
# 默认就初始化
print(initialize())
# ======================================================================
# Package: AndroidN
# Path: /AndroidN-0.0.9.tar.gz/AndroidN-0.0.9/N/N.py
# File: N.py
# ======================================================================
import struct
class TEA:
    """QQ TEA 加解密, 64比特明码, 128比特密钥
    线程安全的独立加解密模块;key 参数接受 16 字节(128比特)的 bytes 或其十六进制字符串
    """
def xor(a, b):
op = 0xffffffff
a1, a2 = struct.unpack(b'>LL', a[0:8])
b1, b2 = struct.unpack(b'>LL', b[0:8])
return struct.pack(b'>LL', (a1 ^ b1) & op, (a2 ^ b2) & op)
def code(v, k):
n = 16
op = 0xffffffff
delta = 0x9e3779b9
k = struct.unpack(b'>LLLL', k[0:16])
y, z = struct.unpack(b'>LL', v[0:8])
s = 0
for i in range(n):
s += delta
y += (op & (z << 4)) + k[0] ^ z + s ^ (op & (z >> 5)) + k[1]
y &= op
z += (op & (y << 4)) + k[2] ^ y + s ^ (op & (y >> 5)) + k[3]
z &= op
r = struct.pack(b'>LL', y, z)
return r
def decipher(v, k):
n = 16
op = 0xffffffff
y, z = struct.unpack(b'>LL', v[0:8])
a, b, c, d = struct.unpack(b'>LLLL', k[0:16])
delta = 0x9E3779B9
s = (delta << 4) & op
for i in range(n):
z -= ((y << 4) + c) ^ (y + s) ^ ((y >> 5) + d)
z &= op
y -= ((z << 4) + a) ^ (z + s) ^ ((z >> 5) + b)
y &= op
s -= delta
s &= op
return struct.pack(b'>LL', y, z)
def encrypt(v, key):
if isinstance(key, str):
secret_key = bytearray.fromhex(key.replace(' ', ''))
else:
secret_key = key
END_CHAR = b'\0'
FILL_N_OR = 0xF8
vl = len(v)
filln = (8 - (vl + 2)) % 8 + 2
fills = b''
for i in range(filln):
fills = fills + bytes([220])
v = (bytes([(filln - 2) | FILL_N_OR])
+ fills
+ v
+ END_CHAR * 7)
tr = b'\0' * 8
to = b'\0' * 8
r = b''
o = b'\0' * 8
for i in range(0, len(v), 8):
o = TEA.xor(v[i:i + 8], tr)
tr = TEA.xor(TEA.code(o, secret_key), to)
to = o
r += tr
return r
def decrypt(v, key):
if isinstance(key, str):
secret_key = bytearray.fromhex(key.replace(' ', ''))
else:
secret_key = key
l = len(v)
prePlain = TEA.decipher(v, secret_key)
pos = (prePlain[0] & 0x07) + 2
r = prePlain
preCrypt = v[0:8]
for i in range(8, l, 8):
x = TEA.xor(TEA.decipher(TEA.xor(v[i:i + 8], prePlain), secret_key), preCrypt)
prePlain = TEA.xor(x, preCrypt)
preCrypt = v[i:i + 8]
r += x
if r[-7:] != b'\0' * 7:
return None
return r[pos + 1:-7]
# data = 'cb 63 ea be db ef f9 ff 79 f8 36 b4 d2 7a e7 0d ...'  # 示例密文十六进制转储(超长,此处截断)
#
# key = '7F9C5F4A38168696190994E1392CDF16'
# bin = bytearray.fromhex(data.replace(' ', ''))
# secret_key = bytearray.fromhex(key.replace(' ', ''))
#
# print(TEA.decrypt(bin, secret_key))
|
AndroidTools
|
/AndroidTools-0.2.4.tar.gz/AndroidTools-0.2.4/AndTools/TEA.py
|
TEA.py
|
from AndTools import hexFormat
class pack_u:
"""Unpacker: reads typed values from a byte stream."""
def __init__(self, data):
if isinstance(data, str):
self._byte_data = bytearray.fromhex(data.replace(' ', ''))
else:
self._byte_data = data
def _get_bytes(self, length: int) -> bytes:
res = self._byte_data[:length]
self._byte_data = self._byte_data[length:]
return res
def get_int(self, length=4):
"""Read an integer (big-endian)."""
res = self._get_bytes(length)
return int.from_bytes(res, 'big')
def get_long(self):
"""Read an 8-byte integer (big-endian)."""
res = self._get_bytes(8)
return int.from_bytes(res, 'big')
def get_byte(self):
"""Read a single signed byte."""
res = self._get_bytes(1)
return int.from_bytes(res, 'big', signed=True)
def get_bin(self, length):
"""Read length bytes."""
res = self._get_bytes(length)
return res
def get_short(self):
"""Read a 2-byte integer (big-endian)."""
res = self._get_bytes(2)
return int.from_bytes(res, 'big')
def get_len(self):
"""Remaining length."""
return len(self._byte_data)
def get_all(self, Hex=False):
"""Return all remaining data, optionally as a spaced hex string."""
res = self._byte_data[:]
if Hex:
res = ' '.join(['{:02x}'.format(byte) for byte in res])
return res
class pack_b:
"""Packer: builds a byte stream."""
def __init__(self):
self._bytes_data = bytearray()
def add_bin(self, bytes_temp):
if bytes_temp is not None:
self._bytes_data.extend(bytes_temp)
def add_Hex(self, bytes_temp):
if bytes_temp is not None:
self._bytes_data.extend(bytearray.fromhex(bytes_temp))
def add_bytes(self, bytes_temp):
"""Append a single byte value (int)."""
if bytes_temp is not None:
self._bytes_data.append(bytes_temp)
def add_int(self, int_temp, length=4):
"""Append an integer (big-endian).
length: 4 for int, 2 for short, 8 for long
"""
if int_temp is not None:
self._bytes_data.extend(int_temp.to_bytes(length, 'big'))
def add_body(self, data, length=4, _hex=False, add_len=0):
"""Append a length header followed by the payload."""
if data is None:
return
if isinstance(data, str):
bytes_data = bytes.fromhex(data) if _hex else data.encode('utf-8')
else:
bytes_data = data
self.add_int(len(bytes_data) + add_len, length)
self.add_bin(bytes_data)
def set_data(self, byte_temp):
"""Replace the underlying data."""
if byte_temp is not None:
self._bytes_data = byte_temp
def empty(self):
"""Clear the buffer."""
self._bytes_data = bytearray()
def get_bytes(self, Hex=False):
if Hex:
_bytes_temp = self._bytes_data.hex()
_bytes_temp = hexFormat(_bytes_temp)
else:
_bytes_temp = self._bytes_data
return _bytes_temp
|
AndroidTools
|
/AndroidTools-0.2.4.tar.gz/AndroidTools-0.2.4/AndTools/Pack.py
|
Pack.py
|
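The Pack.py helpers above are thin wrappers over big-endian `int.to_bytes` / `int.from_bytes` plus a length-prefixed body convention (`add_body` writes a length header followed by the payload). A minimal self-contained sketch of the same round trip, using only the stdlib; the variable names here are illustrative and not part of the package API:

```python
# Build a stream the way pack_b does: big-endian fixed-width ints,
# then a 4-byte length header followed by the payload.
buf = bytearray()
buf.extend((0x1234).to_bytes(2, 'big'))      # short
buf.extend((7).to_bytes(4, 'big'))           # int
payload = b'hello'
buf.extend(len(payload).to_bytes(4, 'big'))  # length header
buf.extend(payload)                          # body

# Consume from the front the way pack_u does, slicing as we go.
data = bytes(buf)
short, data = int.from_bytes(data[:2], 'big'), data[2:]
num, data = int.from_bytes(data[:4], 'big'), data[4:]
size, data = int.from_bytes(data[:4], 'big'), data[4:]
body = data[:size]
print(short, num, body)  # 4660 7 b'hello'
```

The read side must consume fields in exactly the order they were written, since the stream carries no field markers of its own.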
from typing import Union, List, Optional, Any, MutableMapping
from .buffer import ByteBuffer
from .struct import IJceStruct
DEFAULT_ENCODING = "utf-8"
class JceWriter:
"""
Write a jce byte stream.
"""
def __init__(self, data: Optional[Union[bytes, bytearray, ByteBuffer]] = None):
if data is None:
self.buffer = ByteBuffer()
elif isinstance(data, (bytes, bytearray)):
self.buffer = ByteBuffer(data)
elif isinstance(data, ByteBuffer):
self.buffer = data
else:
raise TypeError(f"can't init JceWriter with data type {data.__class__.__name__}")
def write_head(self, type_: int, tag: int) -> None:
"""
:param type_:
:param tag:
:return:
"""
if tag < 15:
data = bytes([tag << 4 | type_])  # Go's byte is uint8; here a plain int 0-255
self.buffer.write_bytes(data)
elif tag < 256:
data = bytes([0xF0 | type_])  # 0xF0 marks an extended tag; the real tag follows in the next byte
self.buffer.write_bytes(data)
self.buffer.write_bytes(bytes([tag]))
def write_byte(self, b: bytes, tag: int) -> "JceWriter":
"""
Write a single byte.
:param b:
:param tag:
:return:
"""
if len(b) != 1:
raise ValueError("write_byte only accept single byte")
if b[0] == 0:
self.write_head(12, tag)
else:
self.write_head(0, tag)
self.buffer.write_bytes(b)
return self
def write_bool(self, b: bool, tag: int) -> None:
if b:
data: bytes = bytes([1])
else:
data: bytes = bytes([0])
self.write_byte(data, tag)
def write_int16(self, n: int, tag: int) -> None:
if -128 <= n <= 127:
self.write_byte(bytes([n & 0xFF]), tag)  # mask so negative values fit in one byte
return
self.write_head(1, tag)
self.buffer.write_int2(n)
def write_int32(self, n: int, tag: int) -> "JceWriter":
if -32768 <= n <= 32767:
self.write_int16(n, tag)
return self
self.write_head(2, tag)
self.buffer.write_int4(n)
return self
def write_int64(self, n: int, tag: int) -> "JceWriter":
if -2147483648 <= n <= 2147483647:
return self.write_int32(n, tag)
self.write_head(3, tag)
self.buffer.write_int8(n)
return self
def write_float32(self, n: float, tag: int):
self.write_head(4, tag)
self.buffer.write_float(n)
def write_float64(self, n: float, tag: int): # 就是double
self.write_head(5, tag)
self.buffer.write_double(n)
def write_string(self, s: str, tag: int) -> "JceWriter":
"""
type 6 (short string) or 7 (long string); byte lengths > 255 require type 7
:param s:
:param tag:
:return:
"""
by: bytes = s.encode(DEFAULT_ENCODING)
if len(by) > 255:
self.write_head(7, tag)
self.buffer.write_bytes(len(by).to_bytes(4, "big"))  # 4-byte length
self.buffer.write_bytes(by)
return self
self.write_head(6, tag)
self.buffer.write_bytes(bytes([len(by)])) # 1byte
self.buffer.write_bytes(by)
return self
def write_bytes(self, data: Union[bytes, bytearray], tag: int):
self.write_head(13, tag)
self.write_head(0, 0)
self.write_int32(len(data), 0)
self.buffer.write_bytes(data)
return self.buffer.bytes
def write_int64_list(self, data: List[int], tag: int):
"""
go: WriteInt64Slice
:param data:
:param tag:
:return:
"""
self.write_head(9, tag)
if len(data) == 0:
self.write_int32(0, 0)
return
self.write_int32(len(data), 0)
for i in data:
self.write_int64(i, 0)
def write_list(self, data: List[Any], tag: int):
if not isinstance(data, list):
return
self.write_head(9, tag)
if len(data) == 0:
self.write_int32(0, 0)
return
self.write_int32(len(data), 0)
for i in data:
self.write_object(i, 0)
def write_jce_struct_list(self, data: List[IJceStruct], tag: int):
self.write_head(9, tag)
if len(data) == 0:
self.write_int32(0, 0)
return
self.write_int32(len(data), 0)
for i in data:
self.write_jce_struct(i, 0)
def write_map(self, m: dict, tag: int):
if m is None:
self.write_head(8, tag)
self.write_int32(0, 0)
return
if not isinstance(m, MutableMapping):
return
self.write_head(8, tag)
self.write_int32(len(m), 0)
for k, v in m.items():
self.write_object(k, 0)
self.write_object(v, 1)
return self.buffer.bytes
def write_object(self, data: Any, tag: int):
if isinstance(data, MutableMapping):
self.write_map(data, tag)
return
if isinstance(data, list):
self.write_list(data, tag)
return
if isinstance(data, (bytes, bytearray)):
if len(data) == 1:
self.write_byte(data, tag)
else:
self.write_bytes(data, tag)
return
if isinstance(data, bool):
self.write_bool(data, tag)
elif isinstance(data, int):
self.write_int64(data, tag)
elif isinstance(data, float):
self.write_float64(data, tag)
elif isinstance(data, str):
self.write_string(data, tag)
elif isinstance(data, IJceStruct):
self.write_jce_struct(data, tag)
def write_jce_struct_raw(self, data: IJceStruct):
"""
Write only the struct body, without the head.
TODO: attach jce_id metadata to every field via pydantic, otherwise this can't work generically.
:param data:
:return:
"""
for field_name, val in data.__fields__.items():
jce_id: int = val.field_info.extra["jce_id"]
field_val = getattr(data, field_name)
self.write_object(field_val, jce_id)
def write_jce_struct(self, data: Union[bytes, bytearray], tag: int):
self.write_head(10, tag)
# writes a pre-encoded struct body directly (instead of write_jce_struct_raw)
self.buffer.write_bytes(data)
self.write_head(11, 0)
return self.buffer.bytes
def bytes(self) -> bytearray:
"""Return the underlying buffer data."""
return self.buffer.bytes
|
AndroidTools
|
/AndroidTools-0.2.4.tar.gz/AndroidTools-0.2.4/Jce_b/writer.py
|
writer.py
|
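The `write_head` method above is the core of the jce wire format: the low 4 bits of the head byte carry the wire type, the high 4 bits the tag, and tags of 15 or more spill into a second byte. A standalone sketch of that encoding (a reimplementation for illustration, not the package's API):

```python
# Encode a jce field head: type in the low nibble, tag in the high
# nibble; tags >= 15 use the 0xF0 marker plus a second tag byte.
def jce_head(type_: int, tag: int) -> bytes:
    if tag < 15:
        return bytes([tag << 4 | type_])
    elif tag < 256:
        return bytes([0xF0 | type_, tag])
    raise ValueError("tag out of range")

print(jce_head(6, 1).hex())   # '16'   -> tag 1, type 6 (short string)
print(jce_head(0, 20).hex())  # 'f014' -> extended tag 20, type 0
```

Packing type and tag into one byte keeps small messages compact while still allowing tags up to 255.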
from typing import Tuple, Union, Callable, Any, List, Type
from .head import HeadData
from .buffer import ByteBuffer
from .struct import IJceStruct
DEFAULT_ENCODING = "utf-8"
class JceReader:
"""
Read a jce byte stream.
"""
def __init__(self, data: Union[bytes, bytearray, ByteBuffer]):
if isinstance(data, (bytes, bytearray)):
self.buffer = ByteBuffer(data)
elif isinstance(data, ByteBuffer):
self.buffer = data
else:
raise TypeError(f"can't init JceReader with data type {data.__class__.__name__}")
def read_head(self) -> Tuple[HeadData, int]:
"""
Read the head and advance the read pointer.
:return:
"""
head_data = HeadData()
b: int = self.buffer.read()
head_data.type = b & 0x0F  # low 4 bits: type
head_data.tag = (b & 0xF0) >> 4  # high 4 bits: tag
if head_data.tag == 15:  # tag 15 means the real tag is in the next byte
b: int = self.buffer.read()
head_data.tag = b & 0xFF  # mask to guard against overflow
return head_data, 2
else:
return head_data, 1
def peak_head(self) -> Tuple[HeadData, int]:
"""
Peek at the head without advancing the pointer.
:return:
"""
return self.__class__(self.buffer.copy()).read_head()
def skip(self, size: int) -> None:
"""
Skip size bytes.
:param size:
:return:
"""
self.buffer.read_bytes(size)
def _skip_field(self, type_: int):
"""
Skip one field's content (the head must already have been consumed by the caller).
see https://blog.csdn.net/jiange_zh/article/details/86562232
:param type_:
:return:
"""
if type_ == 0:
self.skip(1)
elif type_ == 1:
self.skip(2)
elif type_ in (2, 4):
self.skip(4)
elif type_ in (3, 5):
self.skip(8)
elif type_ == 6:
len_ = self.buffer.read()
self.skip(len_)
elif type_ == 7:
len_ = self.buffer.read_int4()
self.skip(len_)
elif type_ == 8: # map
size: int = self.read_int32(0)
for i in range(2 * size):
self.skip_next_field()
elif type_ == 9: # list
size: int = self.read_int32(0)
for i in range(size):
self.skip_next_field()
elif type_ == 10:
self.skip_to_struct_end()
elif type_ == 13:
self.read_head()
size: int = self.read_int32(0)
self.skip(size)
def skip_next_field(self):
head, _ = self.read_head()
self._skip_field(head.type)
def skip_field(self, count: int):
for i in range(count):
self.skip_next_field()
def read_bytes(self, size: int) -> bytearray:
b = self.buffer.read_bytes(size)
return b
def _read_byte(self) -> bytearray:
"""
Read one byte.
:return:
"""
return self.read_bytes(1)
def read_uint16(self) -> int:
return self.buffer.read_int2()
def _read_int32(self) -> int:
return self.buffer.read_int4()
def _read_int64(self) -> int:
return self.buffer.read_int8()
def _read_float32(self) -> float:
return self.buffer.read_float()
def _read_float64(self) -> float:
return self.buffer.read_double()
def skip_to_tag(self, tag: int) -> bool:
"""
Skip forward to the field with the given tag.
:param tag:
:return:
"""
while True:
head, len_ = self.peak_head()
if tag <= head.tag or head.type == 11:
return tag == head.tag
self.skip(len_)
self._skip_field(head.type)
def skip_to_struct_end(self) -> None:
while True:
head, _ = self.read_head()
self._skip_field(head.type)
if head.type == 11:
return
def read_byte(self, tag: int) -> Union[bytes, bytearray]:
if not self.skip_to_tag(tag):
return bytes([0])
head, _ = self.read_head()
if (type_ := head.type) == 12:
return bytes([0])
elif type_ == 0:
return self._read_byte()
else:
return bytes([0])
def read_bool(self, tag: int) -> bool:
return self.read_byte(tag) != bytes([0])
def read_int16(self, tag: int) -> int:
if not self.skip_to_tag(tag):
return 0
head, _ = self.read_head()
if (type_ := head.type) == 12:
return 0
elif type_ == 0:
return self._read_byte()[0]
elif type_ == 1:
return self.read_uint16()
else:
return 0
def read_int32(self, tag: int) -> int:
if not self.skip_to_tag(tag):
return 0
head, _ = self.read_head()
if (type_ := head.type) == 12:
return 0
elif type_ == 0:
return self._read_byte()[0]
elif type_ == 1:
return self.read_uint16()
elif type_ == 2:
return self._read_int32()
else:
return 0
def read_int64(self, tag: int) -> int:
if not self.skip_to_tag(tag):
return 0
head, _ = self.read_head()
if (type_ := head.type) == 12:
return 0
elif type_ == 0:
return self._read_byte()[0]
elif type_ == 1:
return self.read_uint16()
elif type_ == 2:
return self._read_int32()
elif type_ == 3:
return self._read_int64()
else:
return 0
def read_float32(self, tag: int) -> float:
if not self.skip_to_tag(tag):
return 0.0
head, _ = self.read_head()
if (type_ := head.type) == 12:
return 0.0
elif type_ == 4:
return self._read_float32()
else:
return 0.0
def read_float64(self, tag: int):
if not self.skip_to_tag(tag):
return 0.0
head, _ = self.read_head()
if (type_ := head.type) == 12:
return 0.0
elif type_ == 4:
return self._read_float32()
elif type_ == 5:
return self._read_float64()
else:
return 0.0
def read_string(self, tag: int):
if not self.skip_to_tag(tag):
return ""
head, _ = self.read_head()
if (type_ := head.type) == 6:
return self.read_bytes(self._read_byte()[0]).decode(DEFAULT_ENCODING)
elif type_ == 7:
return self.read_bytes(self._read_int32()).decode(DEFAULT_ENCODING)
else:
return ""
# ReadAny Read any type via tag, unsupported JceStruct
def read_any(self, tag: int) -> Any:
if not self.skip_to_tag(tag):
return
head, _ = self.read_head()
if (type_ := head.type) == 0:
return self._read_byte()[0]
elif type_ == 1:
return self.read_uint16()
elif type_ == 2:
return self._read_int32()
elif type_ == 3:
return self._read_int64()
elif type_ == 4:
return self._read_float32()
elif type_ == 5:
return self._read_float64()
elif type_ == 6:
return self.read_bytes(self._read_byte()[0]).decode(DEFAULT_ENCODING)
elif type_ == 7:
return self.read_bytes(self._read_int32()).decode(DEFAULT_ENCODING)
elif type_ == 8:  # map
size: int = self.read_int32(0)  # element count
m = {}
for i in range(size):
k = self.read_any(0)  # keys use tag 0, values tag 1
v = self.read_any(1)
m[k] = v
return m
elif type_ == 9: # list
sl = []
size = self.read_int32(0)
for i in range(size):
sl.append(self.read_any(0))
return sl
elif type_ == 10: # obj
sl = []
while True:
head, _ = self.peak_head()
if head.type == 11 and head.tag == 0:
self.read_head()  # consume the struct-end marker
break
sl.append(self.read_any(head.tag))
return sl
elif type_ == 11:
return None
elif type_ == 12:
return 0
elif type_ == 13: # simple list head len data
self.read_head()
return self.read_bytes(self.read_int32(0))
else:
return
def read_any_with_tag(self, tag: int) -> Any:
"""
Same as read_any, but returns nested Dict[tag, value] combinations.
e.g. where read_any returns [1, 2, 3], this returns {1: 1, 2: 2, 3: 3}, keeping the tags.
:param tag:
:return:
"""
if not self.skip_to_tag(tag):
return
head, _ = self.read_head()
if (type_ := head.type) == 0:
return self._read_byte()[0]
elif type_ == 1:
return self.read_uint16()
elif type_ == 2:
return self._read_int32()
elif type_ == 3:
return self._read_int64()
elif type_ == 4:
return self._read_float32()
elif type_ == 5:
return self._read_float64()
elif type_ == 6:
return self.read_bytes(self._read_byte()[0]).decode(DEFAULT_ENCODING)
elif type_ == 7:
return self.read_bytes(self._read_int32()).decode(DEFAULT_ENCODING)
elif type_ == 8:  # map
size: int = self.read_int32(0)  # element count
m = {}
for i in range(size):
k = self.read_any_with_tag(0)  # keys use tag 0, values tag 1
v = self.read_any_with_tag(1)
m[k] = v
return m
elif type_ == 9: # list
sl = []
size = self.read_int32(0)
for i in range(size):
sl.append(self.read_any_with_tag(0))
return sl
elif type_ == 10: # obj
sl = {}
while True:
head, _ = self.peak_head()
if head.type == 11 and head.tag == 0:
self.read_head()  # consume the struct-end marker
break
sl[head.tag] = self.read_any_with_tag(head.tag)
return sl
elif type_ == 11:
return None
elif type_ == 12:
return 0
elif type_ == 13: # simple list head len data
self.read_head()
return self.read_bytes(self.read_int32(0))
else:
return
def read_map_f(self, tag: int, func: Callable[[Any, Any], Any]) -> None:
if not self.skip_to_tag(tag):
return
self.read_head()  # consume the head
size = self.read_int32(0)
for i in range(size):
k = self.read_any(0)
v = self.read_any(1)
if k is not None:
func(k, v)
def read_map(self, tag: int) -> dict:
"""Read a map field into a dict."""
if not self.skip_to_tag(tag):
return {}
self.read_head()
_dict = {}
size = self.read_int32(0)
for i in range(size):
k = self.read_any(0)
v = self.read_any(1)
_dict.update({k: v})
return _dict
def read_list(self, type_: Type[IJceStruct], tag: int) -> List[IJceStruct]:
"""
Read a list of type-10 structs, instantiating each element as type_.
:param type_: subclass of IJceStruct used for instantiation
:param tag:
:return:
"""
sl = []
if not self.skip_to_tag(tag):
return sl
head, _ = self.read_head()
if head.type == 9:
size = self.read_int32(0)
for i in range(size):
data = self.read_object(type_)
sl.append(data)
return sl
def read_object(self, type_: Type[IJceStruct]) -> IJceStruct:
"""
Read a custom struct (the wire content must be type 10) into an instance of the given class.
:param type_: subclass of IJceStruct
:return:
"""
if issubclass(type_, IJceStruct):
data = type_()
self.read_head()
data.read_from(self)
self.skip_to_struct_end()
return data
def read_available(self) -> bytes:
"""
Read everything remaining in the buffer.
:return:
"""
return self.read_bytes(len(self.buffer) - self.buffer.position)
|
AndroidTools
|
/AndroidTools-0.2.4.tar.gz/AndroidTools-0.2.4/Jce_b/reader.py
|
reader.py
|
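Decoding mirrors the encoder: `read_head` splits one byte into type (low nibble) and tag (high nibble), reading a second byte when the tag nibble is 15. A self-contained sketch of that logic (illustrative function name, not the package API):

```python
# Decode a jce field head, returning (type, tag, bytes_consumed).
def parse_head(data: bytes):
    b = data[0]
    type_, tag = b & 0x0F, (b & 0xF0) >> 4
    if tag == 15:  # extended tag lives in the next byte
        return type_, data[1], 2
    return type_, tag, 1

print(parse_head(bytes.fromhex('16')))    # (6, 1, 1)
print(parse_head(bytes.fromhex('f014')))  # (0, 20, 2)
```

Returning the number of bytes consumed lets a caller peek at a head (as `peak_head` does) without committing the read position.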
import struct
from copy import deepcopy
from typing import Optional, Union
class ByteBuffer:
"""
Byte buffer with a read position.
"""
def __init__(self, bs: Optional[Union[bytes, bytearray]] = None):
if bs is None:
self._bytes = bytearray()
elif isinstance(bs, bytearray):
self._bytes = bs
elif isinstance(bs, bytes):
self._bytes = bytearray(bs)
else:
raise TypeError("'buffer' argument must be bytes or bytearray")
self._position = 0
def __len__(self):
return len(self._bytes)
@property
def bytes(self) -> bytearray:
"""
Return all underlying data.
:return:
"""
return self._bytes
@property
def position(self):
"""
Read position: index of the next byte to be read.
:return:
"""
return self._position
@position.setter
def position(self, value):
"""
Set the read position.
:param value:
:return:
"""
if not isinstance(value, int):
raise TypeError("'position' attribute must be an integer")
elif value < 0:
raise ValueError("'position' attribute must be a positive number")
elif value > len(self._bytes):
raise ValueError('position out of index range')
else:
self._position = value
def read(self) -> int:
"""
Read one byte and advance the position by 1.
:return:
"""
if self._position >= len(self._bytes):
raise BufferError('reached end of bytes')
b = self._bytes[self._position]
self._position += 1
return b
def read_bytes(self, size: int) -> bytearray:
"""
Read the next size bytes and advance the position by size.
:param size:
:return:
"""
if size < 0:
raise ValueError("'size' attribute must be a positive number")
if self._position > len(self._bytes):
raise BufferError('reached end of bytes')
if self.position + size > len(self._bytes):
raise BufferError('reached end of bytes')
b = self._bytes[self.position:self.position + size]
self.position = self.position + size
return b
def read_int2(self) -> int:
"""
Read a jce int2 (2-byte big-endian signed).
:return:
"""
b = self.read_bytes(2)
return struct.unpack('>h', b)[0]  # unpack returns a tuple; take the first element
def read_uint2(self) -> int:
"""
Read a jce uint2 (2-byte big-endian unsigned).
:return:
"""
b = self.read_bytes(2)
return struct.unpack('>H', b)[0]  # unpack returns a tuple; take the first element
def read_int4(self) -> int:
"""
Read a jce int4 (4-byte big-endian signed).
:return:
"""
b = self.read_bytes(4)
return struct.unpack('>i', b)[0]
def read_uint4(self) -> int:
"""
Read a jce uint4 (4-byte big-endian unsigned).
:return:
"""
b = self.read_bytes(4)
return struct.unpack('>I', b)[0]
def read_int8(self) -> int:
"""
Read a jce int8 (8-byte big-endian signed).
:return:
"""
b = self.read_bytes(8)
return struct.unpack('>q', b)[0]
def read_uint8(self) -> int:
"""
Read a jce uint8 (8-byte big-endian unsigned).
:return:
"""
b = self.read_bytes(8)
return struct.unpack('>Q', b)[0]
def read_float(self) -> float:
"""
Read a jce float (4 bytes).
:return:
"""
b = self.read_bytes(4)
return struct.unpack('>f', b)[0]
def read_double(self) -> float:
"""
Read a jce double (8 bytes).
:return:
"""
b = self.read_bytes(8)
return struct.unpack('>d', b)[0]
def write_bytes(self, data: Union["bytes", bytearray]) -> None:
"""
Append a byte string.
:param data:
:return:
"""
self._bytes.extend(data)
self._position += len(data)
def write_hex(self, hexstr: str) -> None:
str_bytes: str = hexstr.strip()
pkt = bytes.fromhex(str_bytes)
self.write_bytes(pkt)
def write_int2(self, num) -> None:
pkt = struct.pack(">h", num)
self.write_bytes(pkt)
def write_uint2(self, num) -> None:
pkt = struct.pack(">H", num)
self.write_bytes(pkt)
def write_int4(self, num) -> None:
pkt = struct.pack(">i", num)
self.write_bytes(pkt)
def write_uint4(self, num) -> None:
pkt = struct.pack(">I", num)
self.write_bytes(pkt)
def write_int8(self, num) -> None:
pkt = struct.pack(">q", num)
self.write_bytes(pkt)
def write_uint8(self, num) -> None:
pkt = struct.pack(">Q", num)
self.write_bytes(pkt)
def write_float(self, num):
pkt = struct.pack(">f", num)
self.write_bytes(pkt)
def write_double(self, num) -> None:
pkt = struct.pack(">d", num)
self.write_bytes(pkt)
def copy(self) -> "ByteBuffer":
"""
Return a deep copy of this buffer.
:return:
"""
return deepcopy(self)
def seek(self, position: int) -> None:
"""
Reposition the read pointer.
:param position:
:return:
"""
self.position = position
def clear(self) -> None:
"""
Reset the read pointer to 0.
:return:
"""
self._position = 0
|
AndroidTools
|
/AndroidTools-0.2.4.tar.gz/AndroidTools-0.2.4/Jce_b/buffer.py
|
buffer.py
|
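ByteBuffer delegates all numeric conversion to `struct` with big-endian format codes (`>h`, `>i`, `>q`, `>f`, `>d`). A quick self-contained round trip through the same codes:

```python
import struct

# Pack a short, an int, and a long the way ByteBuffer's write_int2 /
# write_int4 / write_int8 do, then unpack them back in one call.
packed = struct.pack('>hiq', -2, 100000, 1 << 40)
h, i, q = struct.unpack('>hiq', packed)
print(h, i, q)  # -2 100000 1099511627776
```

Combining format characters in one `struct.pack` call is equivalent to the buffer's sequence of single-value writes, since every code here is fixed-width and big-endian.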
from enum import IntEnum, unique
import os
# https://github.com/grayrail000/pyproto.git
class ProtoError(Exception):
def __init__(self, msg):
self.msg = msg
def __str__(self):
return repr(self.msg)
@unique
class ProtoFieldType(IntEnum):
VARINT = 0
INT64 = 1
STRING = 2
GROUPSTART = 3
GROUPEND = 4
INT32 = 5
ERROR1 = 6
ERROR2 = 7
class ProtoField:
def __init__(self, idx, type, val):
self.idx = idx
self.type = type
self.val = val
def isAsciiStr(self):
if type(self.val) != bytes:
return False
for b in self.val:
if b < 0x20 or b > 0x7E:
return False
return True
def __str__(self):
if (
(self.type == ProtoFieldType.INT32)
or (self.type == ProtoFieldType.INT64)
or (self.type == ProtoFieldType.VARINT)
):
return "%d(%s): %d" % (self.idx, self.type.name, self.val)
elif self.type == ProtoFieldType.STRING:
if self.isAsciiStr(): # self.val.isalnum()
return '%d(%s): "%s"' % (
self.idx,
self.type.name,
self.val.decode("ascii"),
)
else:
return '%d(%s): h"%s"' % (self.idx, self.type.name, self.val.hex())
elif (self.type == ProtoFieldType.GROUPSTART) or (
self.type == ProtoFieldType.GROUPEND
):
return "%d(%s): %s" % (self.idx, self.type.name, self.val)
else:
return "%d(%s): %s" % (self.idx, self.type.name, self.val)
class ProtoReader:
def __init__(self, data):
self.data = data
self.pos = 0
def seek(self, pos):
self.pos = pos
def isRemain(self, length):
return self.pos + length <= len(self.data)
def read0(self):
assert self.isRemain(1)
ret = self.data[self.pos]
self.pos += 1
return ret & 0xFF
def read(self, length):
assert self.isRemain(length)
ret = self.data[self.pos: self.pos + length]
self.pos += length
return ret
def readInt32(self):
return int.from_bytes(self.read(4), byteorder="little", signed=False)
def readInt64(self):
return int.from_bytes(self.read(8), byteorder="little", signed=False)
def readVarint(self):
vint = 0
n = 0
while True:
byte = self.read0()
vint |= (byte & 0x7F) << (7 * n)
if byte < 0x80:
break
n += 1
return vint
def readString(self):
length = self.readVarint()
return self.read(length)
class ProtoWriter:
def __init__(self):
self.data = bytearray()
def write0(self, byte):
self.data.append(byte & 0xFF)
def write(self, bytes):
self.data.extend(bytes)
def writeInt32(self, int32):
bs = int32.to_bytes(4, byteorder="little", signed=False)
self.write(bs)
def writeInt64(self, int64):
bs = int64.to_bytes(8, byteorder="little", signed=False)
self.write(bs)
def writeVarint(self, vint):
vint = vint & 0xFFFFFFFF
while vint >= 0x80:
self.write0((vint & 0x7F) | 0x80)
vint >>= 7
self.write0(vint & 0x7F)
def writeString(self, bytes):
self.writeVarint(len(bytes))
self.write(bytes)
def toBytes(self):
return bytes(self.data)
class ProtoBuf:
def __init__(self, data=None):
self.fields = []
if data is not None:
if type(data) != bytes and type(data) != dict:
raise ProtoError("unsupport type(%s) to protobuf" % (type(data)))
if (type(data) == bytes) and (len(data) > 0):
self.__parseBuf(data)
elif (type(data) == dict) and (len(data) > 0):
self.__parseDict(data)
def __getitem__(self, idx):
pf = self.get(int(idx))
if pf == None:
return None
if pf.type != ProtoFieldType.STRING:
return pf.val
if type(idx) != int:
return pf.val
if pf.val == None:
return None
if pf.isAsciiStr():
return pf.val.decode("utf-8")
return ProtoBuf(pf.val)
def __parseBuf(self, bytes):
reader = ProtoReader(bytes)
while reader.isRemain(1):
key = reader.readVarint()
field_type = ProtoFieldType(key & 0x7)
field_idx = key >> 3
if field_idx == 0:
break
if field_type == ProtoFieldType.INT32:
self.put(ProtoField(field_idx, field_type, reader.readInt32()))
elif field_type == ProtoFieldType.INT64:
self.put(ProtoField(field_idx, field_type, reader.readInt64()))
elif field_type == ProtoFieldType.VARINT:
self.put(ProtoField(field_idx, field_type, reader.readVarint()))
elif field_type == ProtoFieldType.STRING:
self.put(ProtoField(field_idx, field_type, reader.readString()))
elif field_type == ProtoFieldType.GROUPEND:
pass
else:
raise ProtoError(
"parse protobuf error, unexpected field type: %s"
% (field_type.name)
)
def toBuf(self):
writer = ProtoWriter()
for field in self.fields:
key = (field.idx << 3) | (field.type & 7)
writer.writeVarint(key)
if field.type == ProtoFieldType.INT32:
writer.writeInt32(field.val)
elif field.type == ProtoFieldType.INT64:
writer.writeInt64(field.val)
elif field.type == ProtoFieldType.VARINT:
writer.writeVarint(field.val)
elif field.type == ProtoFieldType.STRING:
writer.writeString(field.val)
else:
raise ProtoError(
"encode to protobuf error, unexpected field type: %s"
% (field.type.name)
)
return writer.toBytes()
def dump(self):
for field in self.fields:
print(field)
def getList(self, idx):
return [field for field in self.fields if field.idx == idx]
def get(self, idx):
for field in self.fields:
if field.idx == idx:
return field
return None
def getInt(self, idx):
pf = self.get(idx)
if pf == None:
return 0
if (
(pf.type == ProtoFieldType.INT32)
or (pf.type == ProtoFieldType.INT64)
or (pf.type == ProtoFieldType.VARINT)
):
return pf.val
raise ProtoError("getInt(%d) -> %s" % (idx, pf.type))
def getBytes(self, idx):
pf = self.get(idx)
if pf == None:
return None
if pf.type == ProtoFieldType.STRING:
return pf.val
raise ProtoError("getBytes(%d) -> %s" % (idx, pf.type))
def getUtf8(self, idx):
bs = self.getBytes(idx)
if bs == None:
return None
return bs.decode("utf-8")
def getProtoBuf(self, idx):
bs = self.getBytes(idx)
if bs == None:
return None
return ProtoBuf(bs)
def put(self, field: ProtoField):
self.fields.append(field)
def putInt32(self, idx, int32):
self.put(ProtoField(idx, ProtoFieldType.INT32, int32))
def putInt64(self, idx, int64):
self.put(ProtoField(idx, ProtoFieldType.INT64, int64))
def putVarint(self, idx, vint):
self.put(ProtoField(idx, ProtoFieldType.VARINT, vint))
def putBytes(self, idx, data):
self.put(ProtoField(idx, ProtoFieldType.STRING, data))
def putUtf8(self, idx, data):
self.put(ProtoField(idx, ProtoFieldType.STRING, data.encode("utf-8")))
def putProtoBuf(self, idx, data):
self.put(ProtoField(idx, ProtoFieldType.STRING, data.toBuf()))
def __parseDict(self, data):
"""
Convert dict object to ProtoBuf object
"""
for k, v in data.items():
if isinstance(v, int):
self.putVarint(k, v)
elif isinstance(v, str):
self.putUtf8(k, v)
elif isinstance(v, bytes):
self.putBytes(k, v)
elif isinstance(v, dict):
self.putProtoBuf(k, ProtoBuf(v))
else:
raise ProtoError("unsupport type(%s) to protobuf" % (type(v)))
def toDict(self, out):
"""
Convert ProtoBuf object to dict object
"""
for k, v in out.items():
if isinstance(v, int):
out[k] = self.getInt(k)
elif isinstance(v, str):
out[k] = self.getUtf8(k)
elif isinstance(v, bytes):
out[k] = self.getBytes(k)
elif isinstance(v, dict):
out[k] = self.getProtoBuf(k).toDict(v)
else:
raise ProtoError("unsupport type(%s) to protobuf" % (type(v)))
return out
def toDictAuto(self):
"""Automatic conversion to dict"""
result = {}
for field in self.fields:
key = field.idx
if field.type in (ProtoFieldType.INT32, ProtoFieldType.INT64, ProtoFieldType.VARINT):
result[key] = field.val
elif field.type == ProtoFieldType.STRING:
if field.isAsciiStr():
result[key] = field.val.decode("utf-8")
else:
value = ProtoBuf(field.val).toDictAuto()
if not value:
value = field.val
result[key] = value
return result
def parse(path):
"""
Parse proto file or hex string of proto bytes, then print
"""
if not os.path.exists(path):
ProtoBuf(bytes.fromhex(path)).dump()
elif os.path.isfile(path):
with open(path, "rb") as file:
content = file.read()
print("file:", content.hex())
ProtoBuf(content).dump()
else:
print("not a file:", path)
|
AndroidTools
|
/AndroidTools-0.2.4.tar.gz/AndroidTools-0.2.4/pyproto/__init__.py
|
__init__.py
|
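`ProtoWriter.writeVarint` and `ProtoReader.readVarint` above implement the standard protobuf base-128 varint: 7 payload bits per byte, with the high bit set on every byte except the last. A self-contained sketch of that encoding (illustrative function names, stdlib only):

```python
# Encode an unsigned int as a base-128 varint.
def encode_varint(n: int) -> bytes:
    out = bytearray()
    while n >= 0x80:
        out.append((n & 0x7F) | 0x80)  # 7 bits + continuation flag
        n >>= 7
    out.append(n)                      # final byte, high bit clear
    return bytes(out)

# Decode a varint from the start of a byte string.
def decode_varint(data: bytes) -> int:
    n = shift = 0
    for b in data:
        n |= (b & 0x7F) << shift
        shift += 7
        if b < 0x80:                   # high bit clear ends the varint
            break
    return n

print(encode_varint(300).hex())           # 'ac02'
print(decode_varint(encode_varint(300)))  # 300
```

Note the loop condition must be `>= 0x80`: with `> 0x80`, the value 128 itself would be emitted as a single byte with its continuation bit set and nothing after it.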
from typing import Tuple, List
from Jce.bytebuffer import ByteBuffer
from Jce.exception import JceDecodeException
from Jce.struct import JceStruct, JceStructStatics
class HeadData(object):
tag: int = 0
type: int = 0
def __init__(self):
self.tag = 0
self.type = 0
def __str__(self):
return '{tag: %d, type: %d}' % (self.tag, self.type)
def __repr__(self):
return '{tag: %d, type: %d}' % (self.tag, self.type)
def clear(self):
self.tag = 0
self.type = 0
def read_head(byte_buffer: ByteBuffer) -> Tuple[HeadData, int]:
head_data = HeadData()
b = byte_buffer.get()
head_data.type = b & 0x0F  # low 4 bits: type
head_data.tag = (b & 0xF0) >> 4  # high 4 bits: tag
if head_data.tag != 15:  # tag 15 means the real tag is in the next byte
return (head_data, 1)
else:
head_data.tag = byte_buffer.get() & 0xFF
return (head_data, 2)
class JceInputStream(object):
_bs: ByteBuffer = None
encoding = 'utf-8'
def __init__(self, bs, i=0):
self.encoding = 'utf-8'
if isinstance(bs, ByteBuffer):
self._bs = bs
elif isinstance(bs, (bytearray, bytes)):
self._bs = ByteBuffer(bs)
self._bs.position = i
else:
raise TypeError("'bs' argument must be bytes, bytearray or ByteBuffer")
def read_head(self):
return read_head(self._bs)
def peak_head(self):
return read_head(self._bs.duplicate())
def skip(self, i: int):
self._bs.position = self._bs.position + i
def skip_to_struct_end(self):
head_data, _ = self.read_head()
self.skip_field(head_data.type)
while (head_data.type != 11):
head_data, _ = self.read_head()
self.skip_field(head_data.type)
def skip_field(self, field_type=None):
if field_type is None:
head_data, _ = self.read_head()
field_type = head_data.type
i = 0
read_value = None
if field_type == 0:
self.skip(1)
elif field_type == 1:
self.skip(2)
elif field_type == 2:
self.skip(4)
elif field_type == 3:
self.skip(8)
elif field_type == 4:
self.skip(4)
elif field_type == 5:
self.skip(8)
elif field_type == 6:
i = self._bs.get()
if i < 0:
i += 256
self.skip(i)
elif field_type == 7:
i = self._bs.get_int4()
self.skip(i)
elif field_type == 8:
read_value = self._read_int(0, 0, True)
while i < read_value * 2:
self.skip_field()
i += 1
elif field_type == 9:
read_value = self._read_int(0, 0, True)
while i < read_value:
self.skip_field()
i += 1
elif field_type == 10:
self.skip_to_struct_end()
elif field_type == 11 or field_type == 12:
return
elif field_type == 13:
head_data, _ = self.read_head()
if head_data.type != 0:
raise JceDecodeException(
"skipField with invalid type, type value: %d, %d" % (field_type, head_data.type))
i = self._read_int(0, 0, True)
self.skip(i)
else:
raise JceDecodeException("invalid type.")
def skip_to_tag(self, tag: int) -> bool:
try:
while True:
head_data, length = self.peak_head()
if tag > head_data.tag and head_data.type != 0x0B:
self.skip(length)
self.skip_field(head_data.type)
else:
break
if head_data.type == 0X0B or tag != head_data.tag:
return False
return True
except (JceDecodeException, BufferError):
return False
def re_init(self):
self.encoding = 'utf-8'
self._bs.clear()
def get_tags(self) -> List[int]:
position = self._bs.position
# self.re_init()
tags = []
while True:
try:
head_data, _ = self.read_head()
tags.append(head_data.tag)
self.skip_field(head_data.type)
except:
print('exception occurred at position: %d, quit' % (self._bs.position))
break
self._bs.position = position
return tags
def _read_bool(self, b: bool, tag: int, is_require: bool) -> bool:
c = self._read_int(0, tag, is_require)
return c != 0
def _read_int(self, c: int, tag: int, is_require) -> int:
if self.skip_to_tag(tag):
head_data, _ = self.read_head()
if head_data.type == 12:
c = 0
elif head_data.type == 0:
c = self._bs.get()
elif head_data.type == 1:
c = self._bs.get_int2()
elif head_data.type == 2:
c = self._bs.get_int4()
elif head_data.type == 3:
c = self._bs.get_int8()
else:
raise JceDecodeException("type mismatch.")
elif is_require:
raise JceDecodeException("require field not exist.")
return c
def _read_float(self, n: float, tag: int, is_require: bool) -> float:
if self.skip_to_tag(tag):
head_data, _ = self.read_head()
if head_data.type == 12:
n = 0.0
elif head_data.type == 4:
n = self._bs.get_float()
elif head_data.type == 5:
n = self._bs.get_double()
else:
raise JceDecodeException("type mismatch.")
elif is_require:
raise JceDecodeException("require field not exist.")
return n
def _read_string(self, s: str, tag: int, is_require: bool) -> str:
"""_读取字符串 """
if self.skip_to_tag(tag):
head_data, _ = self.read_head()
if head_data.type == 6:
length = self._bs.get()
if length < 0:
length += 256
ss = self._bs.get_bytes(length)
try:
s = ss.decode(self.encoding)
except UnicodeDecodeError:
s = ss.decode()
elif head_data.type == 7:
length = self._bs.get_int4()
if length > JceStructStatics.JCE_MAX_STRING_LENGTH or length < 0:
raise JceDecodeException("字符串太长: " + len)
ss = self._bs.get_bytes(length)
try:
s = ss.decode(self.encoding)
except UnicodeDecodeError:
s = ss.decode()
else:
raise JceDecodeException("类型不匹配。")
elif is_require:
raise JceDecodeException("要求字段不存在。")
return s
def _read_struct(self, o: JceStruct, tag: int, is_require: bool) -> JceStruct:
ref = None
if self.skip_to_tag(tag):
ref = o
head_data, _ = self.read_head()
if head_data.type != 10:
raise JceDecodeException("type mismatch.")
ref.read_from(self)
self.skip_to_struct_end()
elif is_require:
raise JceDecodeException("require field not exist.")
return ref
def _read_list(self, mt, tag: int, is_require: bool) -> list:
if self.skip_to_tag(tag):
head_data, _ = self.read_head()
if head_data.type == 9:
size = self._read_int(0, 0, True)
if size < 0:
raise JceDecodeException("size invalid: " + size)
lr = []
for i in range(size):
t = self.read_current(True)
lr.append(t)
return lr
raise JceDecodeException("type mismatch.")
elif is_require:
raise JceDecodeException("require field not exist.")
return None
def _read_map(self, m: dict, tag: int, is_require: bool):
mr = {}
if self.skip_to_tag(tag):
head_data, _ = self.read_head()
if head_data.type == 8:
size = self._read_int(0, 0, True)
if size < 0:
raise JceDecodeException("size invalid: " + size)
for i in range(size):
k = self.read_current(True)
v = self.read_current(True)
mr[k] = v
else:
raise JceDecodeException("type mismatch.")
elif is_require:
raise JceDecodeException("require field not exist.")
return mr
def _read_simple_list(self, l, tag: int, is_require: bool):
"""_读取简单列表"""
lr = b''
if self.skip_to_tag(tag):
head_data, _ = self.read_head()
if head_data.type == 13:
hh, _ = self.read_head()
if hh.type != 0:
raise JceDecodeException(
f"type mismatch, tag: {tag}, type: {head_data.type}, {hh.type}")
size = self._read_int(0, 0, True)
if size < 0:
raise JceDecodeException(
f"invalid size, tag: {tag}, type: {head_data.type}, {hh.type}, size: {size}")
lr = self._bs.get_bytes(size)
else:
raise JceDecodeException("type mismatch.")
elif is_require:
raise JceDecodeException("require field not exist.")
return lr.hex()  # return as a hex string
def read(self, o, tag: int, is_require: bool):
if isinstance(o, bool):
return self._read_bool(o, tag, is_require)
if isinstance(o, int):
return self._read_int(o, tag, is_require)
if isinstance(o, float):
return self._read_float(o, tag, is_require)
if isinstance(o, str):
return self._read_string(o, tag, is_require)
if isinstance(o, list):
return self._read_list(o, tag, is_require)
if isinstance(o, dict):
return self._read_map(o, tag, is_require)
raise JceDecodeException("read object error: unsupport type.")
def read_current(self, is_require: bool):
"""读取当前"""
head_data, _ = self.peak_head()
if head_data.type in (0, 1, 2, 3):
return self._read_int(0, head_data.tag, is_require)
elif head_data.type in (4, 5):
return self._read_float(0.0, head_data.tag, is_require)
elif head_data.type in (6, 7):
return self._read_string('', head_data.tag, is_require)
elif head_data.type == 8:
return self._read_map({}, head_data.tag, is_require)
elif head_data.type == 9:
return self._read_list([], head_data.tag, is_require)
elif head_data.type == 10:
return self._read_struct(JceStruct(), head_data.tag, is_require)
elif head_data.type == 11:
self.read_head()
return None
elif head_data.type == 12:
# ZERO_TAG
self.read_head()
return 0
elif head_data.type == 13:
# BYTES byte stream
return self._read_simple_list(b'', head_data.tag, is_require)
else:
raise JceDecodeException("读取对象错误:类型不受支持。")
|
AndroidTools
|
/AndroidTools-0.2.4.tar.gz/AndroidTools-0.2.4/Jce/stream.py
|
stream.py
|
import struct
class ByteBuffer(object):
_bytes = None
_position = 0
@property
def bytes(self) -> bytes:
return self._bytes
@property
def position(self):
return self._position
@position.setter
def position(self, value):
if not isinstance(value, int):
raise TypeError("'position' must be an integer")
elif value < 0:
raise ValueError("'position' must be non-negative")
elif value > len(self._bytes):
raise ValueError("'position' exceeds buffer length")
else:
self._position = value
def __init__(self, bs: bytes):
if isinstance(bs, bytes):
self._bytes = bs
elif isinstance(bs, bytearray):
self._bytes = bytes(bs)
else:
raise TypeError("'bs' 参数必须是字节或字节数组")
def get(self):
if self._position >= len(self._bytes):
raise BufferError('reached end of buffer')
b = self._bytes[self._position]
self._position += 1
return b
def get_bytes(self, size):
"""获取字节"""
if size < 0:
raise ValueError("'size'属性必须是正数")
if self._position > len(self._bytes):
raise BufferError('到达字节末尾')
if self.position + size > len(self._bytes):
raise BufferError('到达字节末尾')
b = self._bytes[self.position:self.position + size]
# print(b.hex(),b)
self.position = self.position + size
return b
def get_int2(self):
b = self.get_bytes(2)
return struct.unpack('>h', b)[0]
def get_int4(self):
b = self.get_bytes(4)
return struct.unpack('>i', b)[0]
def get_int8(self):
b = self.get_bytes(8)
return struct.unpack('>q', b)[0]
def get_float(self):
b = self.get_bytes(4)
return struct.unpack('>f', b)[0]
def get_double(self):
b = self.get_bytes(8)
return struct.unpack('>d', b)[0]
def duplicate(self):
bb = ByteBuffer(self._bytes)
bb.position = self.position
return bb
def clear(self):
self._position = 0
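A minimal, self-contained illustration of the big-endian reads that `ByteBuffer.get_int2`/`get_int4`/`get_int8` perform via `struct` (the variable names here are illustrative; only the standard `struct` module is assumed):

```python
import struct

# Big-endian ('>') packing/unpacking, matching the format strings used by
# ByteBuffer.get_int2 / get_int4 / get_int8 above.
buf = struct.pack('>h', 7) + struct.pack('>i', 1024) + struct.pack('>q', -1)
pos = 0
val16 = struct.unpack('>h', buf[pos:pos + 2])[0]; pos += 2   # int16
val32 = struct.unpack('>i', buf[pos:pos + 4])[0]; pos += 4   # int32
val64 = struct.unpack('>q', buf[pos:pos + 8])[0]; pos += 8   # int64
print(val16, val32, val64)  # -> 7 1024 -1
```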
|
AndroidTools
|
/AndroidTools-0.2.4.tar.gz/AndroidTools-0.2.4/Jce/bytebuffer.py
|
bytebuffer.py
|
# READY FOR TRAINING!!!!!!
# Agora
Agora is a new open-source multi-modality AI research organization devoted to advancing humanity!
Since Andromeda is ready to train, Agora is actively seeking cloud providers or grant providers to train this all-new revolutionary model and release it open source. If you would like to learn more, please email me at `[email protected]`

[Join our Agora discord and contribute to this project or 40+ others!](https://discord.gg/qUtxnK2NMf)
# Andromeda: Ultra-Fast and Ultra-Intelligent SOTA Language Model 🚀🌌

Andromeda is a state-of-the-art language model that pushes the boundaries of natural language understanding and generation. Designed for high performance and efficiency, Andromeda is built upon advanced techniques that make it a strong contender against the likes of OpenAI's GPT-4 and PaLM.
# Usage
There are two ways to use Andromeda: install it with `pip install Andromeda-llm`, or clone the repository with `git clone`.
## Method 1
First, `pip install Andromeda-llm`, then:
```python
import torch
from Andromeda import Andromeda, Train
x = torch.randint(0, 20000, (1, 1024))
Andromeda(x)
# or train
Train()
```
## Method 2
Get started:
1. Clone the repository and install the required packages.
```
git clone https://github.com/kyegomez/Andromeda
cd Andromeda
pip3 install -r requirements.txt
cd Andromeda
python3 training_distributed.py
```
# Training
First, configure Accelerate:
`accelerate config`
Then, with DeepSpeed ZeRO stage 3 enabled, launch training:
`accelerate launch train_distributed_accelerate.py`
## Dataset building
You can preprocess a different dataset in a way similar to the C4 dataset used during training by running the `build_dataset.py` script. This will pre-tokenize the data, chunk it into blocks of a specified sequence length, and upload it to the Hugging Face hub. For example:
```python3 Andromeda/build_dataset.py --seed 42 --seq_len 8192 --hf_account "HUGGINGFACE APIKEY" --tokenizer "EleutherAI/gpt-neox-20b" --dataset_name "EleutherAI/the_pile_deduplicated"```
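The chunking step described above can be sketched as follows (a minimal illustration of fixed-length blocking, not the project's actual `build_dataset.py`):

```python
# Concatenated token ids are split into blocks of seq_len; the ragged
# remainder that does not fill a full block is dropped.
def chunk_tokens(token_ids, seq_len):
    total = (len(token_ids) // seq_len) * seq_len
    return [token_ids[i:i + seq_len] for i in range(0, total, seq_len)]

print(chunk_tokens(list(range(10)), 4))  # -> [[0, 1, 2, 3], [4, 5, 6, 7]]
```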
# Inference
```python3 inference.py "My dog is very cute" --seq_len 256 --temperature 0.8 --filter_thres 0.9 --model "andromeda"```
Note: inference via PyTorch Hub is not yet available; the model still needs to be submitted there.
## Get Involved
We're just at the beginning of our journey. As we continue to develop and refine Andromeda, we invite you to join us. Whether you're a developer, researcher, or simply an enthusiast, your insights and contributions can help shape the future of Andromeda.
# Contributing to Andromeda
We are thrilled to invite you to be a part of the Andromeda project. This is not just an open source project but a community initiative, and we value your expertise and creativity. To show our appreciation, we have instituted a unique rewards system that directly compensates contributors from the revenue generated by the Andromeda API.
## Why Contribute
Contributing to Andromeda not only enhances your skills and profile but also comes with financial rewards. When you contribute code, documentation, or any form of improvement to the Andromeda project, you are adding value. As such, we believe it's only fair that you share in the rewards.
## Rewards Program
Here's how the Andromeda Rewards Program works:
1. **Submit a Pull Request:** This can be a code enhancement, bug fix, documentation update, new feature, or any improvement to the project.
2. **Review and Approval:** Our team will review your contribution. If it gets approved and merged, you become eligible for the rewards program.
3. **Revenue Share:** Once your pull request is merged, you will receive a percentage of the revenue generated by the Andromeda API. The percentage will be determined based on the significance and impact of your contribution.
This means you're not just contributing to an open source project; you're becoming a part of the Andromeda ecosystem. Your efforts can yield ongoing benefits as the Andromeda API grows and evolves.
## Becoming a Paid API
As part of our growth strategy, we will be deploying Andromeda as a Paid API. The revenue generated from this API will not only sustain and further the project, but also fund the rewards program.
## How to Start Contributing
If you're ready to become a part of Andromeda and contribute to the future of multimodal embeddings, here's what you need to do:
1. Fork the repository.
2. Make your improvements or additions in your forked repository.
3. Submit a pull request detailing the changes you've made.
4. Our team will review your submission. If it's approved, it will be merged into the main repository, and you will become part of the Andromeda Rewards Program.
Thank you for considering contributing to Andromeda. Your expertise and commitment to this project are what make it thrive. Let's build the future of multimodal embeddings together.
## Model Architecture 🧠🔧
```python
model = TransformerWrapper(
num_tokens=64007,
max_seq_len=8192,
use_abs_pos_emb=False,
tokenizer=tokenizer, # !
embedding_provider=AndromedaEmbedding(),
attn_layers = Decoder(
dim=128, # 2048
depth=8, # 16
dim_head=128,
heads=8,
alibi_pos_bias=True,
alibi_num_heads=4,
rotary_xpos=True,
attn_flash = True,
deepnorm=True,
shift_tokens=1,
attn_one_kv_head = True,
qk_norm=True,
attn_qk_norm=True,
attn_qk_norm_dim_scale=True # set this to True, in addition to `attn_qk_norm = True`
)
)
```
## Roadmap 🗺️📍
1. **Training phase**: Train Andromeda on a large-scale dataset to achieve SOTA performance in various natural language processing tasks.
2. **World-class inference infrastructure**: Establish a robust and efficient infrastructure that leverages techniques such as:
- Model quantization: Reduce memory and computational requirements without significant loss in performance.
- Distillation: Train smaller, faster models that retain the knowledge of the larger model.
- Optimized serving frameworks: Deploy Andromeda using efficient serving frameworks, such as NVIDIA Triton or TensorFlow Serving, for rapid inference.
3. **Continuous improvement**: Continuously fine-tune Andromeda on diverse data sources and adapt it to new tasks and domains.
4. **Community-driven development**: Encourage open-source contributions, including pre-processing improvements, advanced training techniques, and novel use cases.
## Why Andromeda? 🌠💡
Andromeda can potentially be fine-tuned with a 100k+ token sequence length.
Andromeda is a state-of-the-art language model that leverages advanced techniques to optimize its performance and efficiency. Some of these techniques include alibi positional bias, rotary position encodings (xpos), flash attention, and deep normalization (deepnorm). Let's explore the benefits of these techniques and provide some usage examples.
### Alibi Positional Bias
Alibi positional bias allows the model to learn relative positions between tokens, enabling it to better capture the relationships and dependencies between tokens in a sequence.
Usage example:
```python
attn_layers = Decoder(
...
alibi_pos_bias=True,
alibi_num_heads=4,
...
)
```
### Rotary Position Encodings (xpos)
Rotary position encodings introduce a more efficient way to encode positions in the input sequence. They avoid the need for absolute positional embeddings, reducing the model's memory footprint and improving training speed.
Usage example:
```python
attn_layers = Decoder(
...
rotary_xpos=True,
...
)
```
### Flash Attention
Flash attention speeds up the self-attention mechanism by reducing the number of attention computations. It accelerates training and inference while maintaining a high level of performance.
Usage example:
```python
attn_layers = Decoder(
...
attn_flash=True,
...
)
```
### Deep Normalization (deepnorm)
Deep normalization is a technique that normalizes the activations within a layer, helping with training stability and convergence. It allows the model to better learn complex patterns and generalize to unseen data.
Usage example:
```python
attn_layers = Decoder(
    ...
    deepnorm=True,
    ...
)
```
# Andromeda Principles
- **Efficiency**: Andromeda incorporates cutting-edge optimization techniques, such as attention flashing, rotary position encodings, and deep normalization, resulting in efficient training and inference.
- **Flexibility**: The modular design of Andromeda allows for easy adaptation to various tasks and domains, making it a versatile choice for a wide range of applications.
- **Scalability**: Andromeda's architecture is designed to scale with the ever-growing computational resources and data sizes, ensuring its continuous relevance in the NLP landscape.
- **Community-driven**: As an open-source project, Andromeda thrives on contributions from the community, fostering an environment of collaboration, innovation, and continuous improvement.
Join us on this exciting journey to create a powerful, efficient, and intelligent language model that will revolutionize the NLP landscape! 🚀🌟
## Todo:
* [Integrate Token Monster ](https://github.com/alasdairforsythe/tokenmonster)
* Establish 200k instruction sample long for Tool API Calls
* [Train on Gorilla Dataset](https://github.com/ShishirPatil/gorilla)
* Establish FineTuning scripts using quantization + 4bit precision, + other tactics like LoRA
* Establish Reinforcement Scripts to train on rewards from Human and Agent feedback
|
Andromeda-llm
|
/Andromeda-llm-0.0.3.tar.gz/Andromeda-llm-0.0.3/README.md
|
README.md
|
import math
import multiprocessing
import os
from datetime import timedelta
from functools import partial
from itertools import chain
import torch
from torch.distributed.fsdp import (
FullyShardedDataParallel,
MixedPrecision,
BackwardPrefetch,
ShardingStrategy,
)
from accelerate import Accelerator
from accelerate.utils import (DummyOptim, DummyScheduler,
InitProcessGroupKwargs)
from datasets import concatenate_datasets, load_dataset
from lion_pytorch import Lion
# from palm_rlhf_pytorch import PaLM
from torch.nn import LayerNorm
# from palm_rlhf_pytorch.palm import LayerNorm, TransformerWrapper
from optimus_prime import TransformerWrapper, AutoregressiveWrapper, AndromedaEmbedding, Decoder
from torch.distributed.algorithms._checkpoint.checkpoint_wrapper import (
CheckpointImpl, apply_activation_checkpointing, checkpoint_wrapper)
from torch.distributed.fsdp.wrap import (
transformer_auto_wrap_policy
)
from torch.optim import AdamW
from torch.utils.data import DataLoader
from tqdm import tqdm
from transformers import (AutoTokenizer, default_data_collator,
get_cosine_schedule_with_warmup,
get_linear_schedule_with_warmup, set_seed)
# from palm.stable_adamw import StableAdamWUnfused
from utils.stable_adamw import StableAdamWUnfused
# TransformerWrapper = TransformerWrapper()
# constants
############ SETUP CONFIG
# import torch.distributed as dist
# dist.init_process_group(backend='nccl', init_method="env://")
################
class CFG:
BATCH_SIZE = 3
GRADIENT_ACCUMULATE_EVERY: int = 1
SEED: int = 42
LEARNING_RATE: float = 3e-4
WEIGHT_DECAY: float = 0.1
SEQ_LEN: int = 8192
NUM_CPU: int = multiprocessing.cpu_count()
USE_DEEPSPEED: bool = True
USE_FSDP: bool = True
USE_PRETOKENIZED: bool = True
USE_ACTIVATION_CHECKPOINTING: bool = True
RESUME_FROM_CHECKPOINT: str = ""
CHECKPOINTING_STEPS: int = 1000
OUTPUT_DIR: str = "YOUR_OUTPUT_DIR"
ENTITY_NAME: str = "YOUR_ENTITY_NAME"
# helpers
def print_num_params(model, accelerator: Accelerator):
n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
accelerator.print(f"Number of parameters in model: {n_params}")
# activation checkpointing
def activation_checkpointing(
model: torch.nn.Module,
offload_to_cpu: bool = False,
accelerator: Accelerator = None,
):
"""
Apply activation checkpointing to a model.
Args:
model (Module): The model to which to apply activation checkpointing.
offload_to_cpu (bool, optional): Whether to offload the activations to CPU. Defaults to False.
accelerator (Accelerator, optional): The Accelerate library accelerator. Defaults to None.
"""
if accelerator is not None:
accelerator.print(f"Using activation checkpointing")
check_fn = lambda submodule: isinstance(submodule, TransformerWrapper)
non_reentrant_wrapper = partial(
checkpoint_wrapper,
offload_to_cpu=offload_to_cpu,
checkpoint_impl=CheckpointImpl.NO_REENTRANT,
)
apply_activation_checkpointing(
model, checkpoint_wrapper_fn=non_reentrant_wrapper, check_fn=check_fn
)
# FSDP
def fsdp(
model: torch.nn.Module,
auto_wrap: bool = False,
mp: str = "fp32",
shard_strat: str = "NO_SHARD",
):
"""
This function wraps a given PyTorch model with the FullyShardedDataParallel (FSDP) wrapper to enable efficient data parallelism and model sharding.
Args:
model (torch.nn.Module): The original PyTorch model to be wrapped with FSDP.
auto_wrap (bool, optional): If True, it enables automatic wrapping of the model's layers according to the transformer_auto_wrap_policy. Default is False.
mp (str, optional): The mixed precision mode to be used. Can be 'bf16' for BFloat16, 'fp16' for Float16 or 'fp32' for Float32 precision. Default is 'fp32'.
shard_strat (str, optional): The sharding strategy to be used. Can be 'SHARD_GRAD' for sharding at gradient computation, 'FULL_SHARD' for full model sharding or 'NO_SHARD' for no sharding. Default is 'NO_SHARD'.
Raises:
ValueError: If the provided mp (mixed precision mode) is not 'bf16', 'fp16' or 'fp32'.
ValueError: If the provided shard_strat (sharding strategy) is not 'SHARD_GRAD', 'FULL_SHARD' or 'NO_SHARD'.
Returns:
torch.nn.Module: The input model wrapped with FSDP.
"""
if auto_wrap:
andromeda_auto_wrap_policy = partial(
transformer_auto_wrap_policy,
transformer_layer_cls={
TransformerWrapper,
},
)
else:
andromeda_auto_wrap_policy = None
if mp == "bf16":
mp_fsdp = MixedPrecision(
param_dtype=torch.bfloat16,
# Gradient communication precision.
reduce_dtype=torch.bfloat16,
# Buffer precision.
buffer_dtype=torch.bfloat16,
)
elif mp == "fp16":
mp_fsdp = MixedPrecision(
param_dtype=torch.float16,
# Gradient communication precision.
reduce_dtype=torch.float16,
# Buffer precision.
buffer_dtype=torch.float16,
)
elif mp == "fp32":
mp_fsdp = MixedPrecision(
param_dtype=torch.float32,
# Gradient communication precision.
reduce_dtype=torch.float32,
# Buffer precision.
buffer_dtype=torch.float32,
)
else:
raise ValueError(
"Invalid scheduler_type. Expected 'bf16', 'fp16' or 'fp32', got: {}".format(
mp
)
)
if shard_strat == "SHARD_GRAD":
sharding_strat_fsdp = ShardingStrategy.SHARD_GRAD_OP
elif shard_strat == "FULL_SHARD":
sharding_strat_fsdp = ShardingStrategy.FULL_SHARD
elif shard_strat == "NO_SHARD":
sharding_strat_fsdp = ShardingStrategy.NO_SHARD
else:
raise ValueError(
"Invalid scheduler_type. Expected 'SHARD_GRAD', 'FULL_SHARD' or 'NO_SHARD', got: {}".format(
shard_strat
)
)
model = FullyShardedDataParallel(
model,
auto_wrap_policy=andromeda_auto_wrap_policy,
mixed_precision=mp_fsdp,
backward_prefetch=BackwardPrefetch.BACKWARD_PRE,
sharding_strategy=sharding_strat_fsdp,
forward_prefetch=True,
use_orig_params=True,
)
return model
# learning rate scheduler
def get_lr_scheduler_with_warmup(
optimizer: torch.optim.Optimizer,
scheduler_type: str,
num_warmup_steps: int,
max_train_steps: int,
grad_accumulate_every: int = 1,
accelerator: Accelerator = None,
):
"""
Get a learning rate scheduler with warmup.
Args:
optimizer (Optimizer): The optimizer for which to create the learning rate scheduler.
scheduler_type (str): The type of learning rate scheduler to create, either "linear" or "cosine".
num_warmup_steps (int): The number of warmup steps for the learning rate scheduler.
max_train_steps (int): The maximum number of training steps.
grad_accumulate_every (int, optional): The gradient accumulation factor. Defaults to 1.
accelerator (Accelerator, optional): The Accelerate library accelerator. Defaults to None.
Returns:
The learning rate scheduler with warmup.
Raises:
ValueError: If scheduler_type is not "linear" or "cosine".
"""
NUM_WARMUP_STEPS = num_warmup_steps
GRADIENT_ACCUMULATE_EVERY = grad_accumulate_every
if accelerator is not None:
accelerator.print(f"Using {scheduler_type} lr scheduler")
if scheduler_type == "linear":
return get_linear_schedule_with_warmup(
optimizer=optimizer,
num_warmup_steps=NUM_WARMUP_STEPS * GRADIENT_ACCUMULATE_EVERY,
num_training_steps=max_train_steps * GRADIENT_ACCUMULATE_EVERY,
)
elif scheduler_type == "cosine":
return get_cosine_schedule_with_warmup(
optimizer=optimizer,
num_warmup_steps=NUM_WARMUP_STEPS * GRADIENT_ACCUMULATE_EVERY,
num_training_steps=max_train_steps * GRADIENT_ACCUMULATE_EVERY,
)
else:
raise ValueError(
"Invalid scheduler_type. Expected 'linear' or 'cosine', got: {}".format(
scheduler_type
)
)
# optimizers
def decoupled_optimizer(
model: torch.nn.Module,
learning_rate: float,
weight_decay: float,
beta_1: float,
beta_2: float,
optimizer_type: str,
use_fsdp: bool = True,
accelerator: Accelerator = None,
):
"""
Decouples the optimizer from the training process.
This function sets up the optimizer for the model by creating two groups of parameters:
one for weight decay and one without weight decay. Then, it initializes the optimizer
with these two groups of parameters.
Args:
model (Module): The model whose parameters are optimized.
learning_rate (float): The learning rate for the optimizer.
weight_decay (float): The weight decay for the optimizer.
beta_1 (float): The exponential decay rate for the 1st moment estimates.
beta_2 (float): The exponential decay rate for the 2nd moment estimates.
optimizer_type (str): The type of the optimizer. Can be 'lion', 'adamw', or 'stable_adamw'.
use_fsdp (bool, optional): If True, the optimizer will work with fully sharded data parallelism. Defaults to True.
accelerator (Accelerator, optional): The accelerator from HuggingFace's Accelerate library. Defaults to None.
Returns:
Optimizer: The initialized optimizer.
Raises:
ValueError: If the optimizer type is not 'lion', 'adamw' or 'stable_adamw'.
"""
accelerator.print(f"Using {optimizer_type} optimizer")
# Create an empty dictionary called param_dict to store the model's named parameters.
param_dict = {}
# Iterate over the model's named parameters and populate the param_dict with key-value pairs.
for param_name, param in model.named_parameters():
param_dict[param_name] = param
# Separate the model's named modules into two groups: decay and no_decay.
# Create an empty list to store the names of the LayerNorm and Embedding layer weights with no weight decay.
no_decay = []
if use_fsdp:
exclude_module = "_fsdp_wrapped_module.token_emb"
else:
exclude_module = "token_emb"
# Iterate through the named modules of the model.
for module_name, module in model.named_modules():
# Check if the current module is an instance of any of the desired types (LayerNorm or torch.nn.Embedding).
for ndim in [LayerNorm, torch.nn.Embedding]:
if isinstance(module, ndim):
# If torch.nn.Embedding, append its name with a ".weight" suffix to the no_decay list.
if module_name == exclude_module:
no_decay.append(f"{module_name}.weight")
else:
# If the module is an instance of LayerNorm
no_decay.append(f"{module_name}.gamma")
# Exit the inner loop since the desired module has been found.
break
# Create an empty list to store the names of the Linear layer weights with weight decay.
decay = []
# Iterate through the named modules of the model.
for module_name, module in model.named_modules():
# Check if the current module is an instance of the desired type (torch.nn.Linear).
for ndim in [torch.nn.Linear]:
if isinstance(module, ndim):
# If the module is an instance of torch.nn.Linear, append its name with a ".weight" suffix to the decay list.
decay.append(f"{module_name}.weight")
# Exit the inner loop since the desired module has been found.
break
# Create two separate lists of model parameters: decay_param and no_decay_param.
# The decay_param list contains the parameters that should have weight decay applied.
# The no_decay_param list contains the parameters that should not have weight decay applied, excluding the 'to_logits.weight' parameter.
# Create an empty list called decay_param to store the parameters with weight decay.
decay_param = []
if use_fsdp:
exclude_param = "_fsdp_wrapped_module.to_logits.weight"
else:
exclude_param = "to_logits.weight"
# Iterate over the decay list, which contains the names of the parameters with weight decay.
for param in decay:
# Check if the current parameter is not 'to_logits.weight'.
# Append the corresponding parameter from param_dict to the decay_param list.
if param != exclude_param:
decay_param.append(param_dict[param])
# Create an empty list called no_decay_param to store the parameters without weight decay.
no_decay_param = []
# Iterate over the no_decay list, which contains the names of the parameters without weight decay.
for param in no_decay:
# Append the corresponding parameter from param_dict to the no_decay_param list.
no_decay_param.append(param_dict[param])
# Create a list called grouped_params that contains two dictionaries.
# The first dictionary has the decay_param list and the corresponding weight_decay value.
# The second dictionary has the no_decay_param list and a weight_decay value of 0.0.
grouped_params = [
{"params": decay_param, "weight_decay": weight_decay},
{"params": no_decay_param, "weight_decay": 0.0},
]
# Create a variable called optimizer that stores an instance of the optimizer.
if optimizer_type == "lion":
optimizer = Lion(grouped_params, lr=learning_rate, betas=(beta_1, beta_2),)
elif optimizer_type == "adamw":
optimizer = AdamW(grouped_params, lr=learning_rate, betas=(beta_1, beta_2),)
elif optimizer_type == "deepspeed":
optimizer = DummyOptim(grouped_params, lr=learning_rate, betas=(beta_1, beta_2),)
elif optimizer_type == "stable_adamw":
optimizer = StableAdamWUnfused(
grouped_params, lr=learning_rate, betas=(beta_1, beta_2),
)
else:
raise ValueError(
"Invalid optimizer_type. Expected 'lion', 'adamw', 'deepspeed' or 'stable_adamw', got: {}".format(
optimizer_type
)
)
# Return the optimizer.
return optimizer
# dataloaders
def build_dataloaders():
"""
Build data loaders for training.
This function performs the following steps:
1. Load the tokenizer from the pretrained "EleutherAI/gpt-neox-20b" model.
2. Load the "openwebtext" dataset.
3. Tokenize the dataset, adding the end-of-sentence token to each text.
4. Process the tokenized dataset into chunks of a specified block size.
Returns:
Dataset: The processed dataset ready for training.
"""
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
dataset = load_dataset("openwebtext", split="train")
tokenized_dataset = dataset.map(
lambda example: tokenizer([t + tokenizer.eos_token for t in example["text"]]),
batched=True,
num_proc=CFG.NUM_CPU,
remove_columns=["text"],
)
block_size = CFG.SEQ_LEN
# Main data processing function that will concatenate all texts from our dataset and generate chunks of block_size.
def group_texts(examples):
# Concatenate all texts.
concatenated_examples = {k: list(chain(*examples[k])) for k in examples.keys()}
total_length = len(concatenated_examples[list(examples.keys())[0]])
# We drop the small remainder, we could add padding if the model supported it instead of this drop, you can
# customize this part to your needs.
if total_length >= block_size:
total_length = (total_length // block_size) * block_size
# Split by chunks of max_len.
result = {
k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
for k, t in concatenated_examples.items()
}
return result
train_dataset = tokenized_dataset.map(
group_texts, batched=True, num_proc=CFG.NUM_CPU,
)
return train_dataset
#switch to falconwebdataset
def build_pre_tokenized():
d0 = load_dataset("conceptofmind/c4_0-to-20_neox_with_eos_8k", split="train")
# d1 = load_dataset("conceptofmind/c4_21-to-40_neox_with_eos_8k", split="train")
# d2 = load_dataset("conceptofmind/c4_41-to-60_neox_with_eos_8k", split="train")
# d3 = load_dataset("conceptofmind/c4_61-to-80_neox_with_eos_8k", split="train")
# d4 = load_dataset("conceptofmind/c4_81-to-100_neox_with_eos_8k", split="train")
# train_dataset = concatenate_datasets([d0, d1, d2, d3, d4])
return d0
def Train():
# accelerator
timeout = InitProcessGroupKwargs(timeout=timedelta(seconds=1_000_000))
accelerator = Accelerator(
gradient_accumulation_steps=CFG.GRADIENT_ACCUMULATE_EVERY,
mixed_precision="fp16",
log_with="wandb",
kwargs_handlers=[timeout],
)
# AcceleratorState().deepspeed_plugin.deepspeed_config['train_micro_batch_size_per_gpu'] = 4 #??????
accelerator.init_trackers(
project_name="Andromeda",
config={
"batch_size": CFG.BATCH_SIZE,
"gradient_accumulate_every": CFG.GRADIENT_ACCUMULATE_EVERY,
"learning_rate": CFG.LEARNING_RATE,
"seq_len": CFG.SEQ_LEN,
},
init_kwargs={"wandb": {"entity": CFG.ENTITY_NAME}},
)
accelerator.print(f"Total GPUS: {accelerator.num_processes}")
# set seed
set_seed(CFG.SEED)
# tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = TransformerWrapper(
num_tokens=64007,
max_seq_len=8192,
use_abs_pos_emb=False,
# tokenizer=tokenizer,
embedding_provider=AndromedaEmbedding(),
#config from concept of minds PALM
attn_layers = Decoder(
dim=2560, # 2048
depth=32, # 16
dim_head=128,
heads=24,
alibi_pos_bias=True,
alibi_num_heads=12,
rotary_xpos=True,
attn_flash = True,
deepnorm=True,
shift_tokens=1,
attn_one_kv_head = True,
qk_norm=True,
attn_qk_norm=True,
attn_qk_norm_dim_scale=True # set this to True, in addition to `attn_qk_norm = True`
)
).to(accelerator.device)
model = AutoregressiveWrapper(model).to(accelerator.device)
print_num_params(model, accelerator)
if CFG.USE_FSDP:
model = fsdp(
model,
mp="fp16",
shard_strat="SHARD_GRAD"
)
if CFG.USE_ACTIVATION_CHECKPOINTING:
activation_checkpointing(model, accelerator=accelerator)
model = accelerator.prepare(model)
# dataloaders
if CFG.USE_PRETOKENIZED:
train_dataset = build_pre_tokenized()
else:
train_dataset = build_dataloaders()
train_loader = DataLoader(
train_dataset, batch_size=CFG.BATCH_SIZE, collate_fn=default_data_collator,
)
# optimizer
optim = decoupled_optimizer(
model=model,
learning_rate=CFG.LEARNING_RATE,
weight_decay=CFG.WEIGHT_DECAY,
beta_1=0.90,
beta_2=0.95,
optimizer_type='deepspeed',
use_fsdp=True,
accelerator=accelerator
)
# Determine number of training steps
max_train_steps = math.ceil(len(train_loader) / CFG.GRADIENT_ACCUMULATE_EVERY)
accelerator.print(f"Max train steps: {max_train_steps}")
# lr scheduler
NUM_WARMUP_STEPS = int(max_train_steps * 0.01)
accelerator.print(f"Num warmup steps: {NUM_WARMUP_STEPS}")
if CFG.USE_DEEPSPEED:
lr_scheduler = DummyScheduler(
optim,
total_num_steps=max_train_steps * accelerator.num_processes,
warmup_num_steps=NUM_WARMUP_STEPS
)
else:
lr_scheduler = get_lr_scheduler_with_warmup(
optimizer=optim,
scheduler_type="cosine",
num_warmup_steps=NUM_WARMUP_STEPS,
max_train_steps=max_train_steps,
grad_accumulate_every=CFG.GRADIENT_ACCUMULATE_EVERY,
)
# prepare
optim, train_loader, lr_scheduler = accelerator.prepare(
optim, train_loader, lr_scheduler
)
# checkpoint scheduler
accelerator.register_for_checkpointing(lr_scheduler)
# Recalculate max_train_steps: accelerator.prepare can change the dataloader's length (e.g. by sharding it across processes)
max_train_steps = math.ceil(len(train_loader) / CFG.GRADIENT_ACCUMULATE_EVERY)
accelerator.print(f"Max train steps recalculated: {max_train_steps}")
# Total batch size for logging
total_batch_size = (
CFG.BATCH_SIZE * accelerator.num_processes * CFG.GRADIENT_ACCUMULATE_EVERY
)
accelerator.print(f"Total batch size: {total_batch_size}")
# resume training
progress_bar = tqdm(
range(max_train_steps), disable=not accelerator.is_local_main_process
)
completed_steps = 0
if CFG.RESUME_FROM_CHECKPOINT:
if CFG.RESUME_FROM_CHECKPOINT is not None and CFG.RESUME_FROM_CHECKPOINT != "":
accelerator.print(f"Resuming from checkpoint {CFG.RESUME_FROM_CHECKPOINT}")
accelerator.load_state(CFG.RESUME_FROM_CHECKPOINT)
path = os.path.basename(CFG.RESUME_FROM_CHECKPOINT)
training_difference = os.path.splitext(path)[0]
# need to multiply `gradient_accumulation_steps` to reflect real steps
resume_step = (
int(training_difference.replace("step_", ""))
* CFG.GRADIENT_ACCUMULATE_EVERY
)
if CFG.RESUME_FROM_CHECKPOINT and resume_step is not None:
train_loader = accelerator.skip_first_batches(train_loader, resume_step)
completed_steps += resume_step
progress_bar.update(resume_step)
# training
model.train()
for step, batch in enumerate(train_loader):
with accelerator.accumulate(model):
inputs = batch["input_ids"].to(accelerator.device)
loss = model(inputs, return_loss=True)
accelerator.backward(loss)
accelerator.log({"loss": loss.item()}, step=step)
if accelerator.sync_gradients:
accelerator.clip_grad_norm_(model.parameters(), 1.0)
optim.step()
lr_scheduler.step()
optim.zero_grad()
if accelerator.sync_gradients:
progress_bar.update(1)
completed_steps += 1
if isinstance(CFG.CHECKPOINTING_STEPS, int):
if completed_steps % CFG.CHECKPOINTING_STEPS == 0:
output_dir = f"step_{completed_steps}"
if CFG.OUTPUT_DIR is not None:
output_dir = os.path.join(CFG.OUTPUT_DIR, output_dir)
accelerator.save_state(output_dir)
if completed_steps >= max_train_steps:
break
# end training
# accelerator.print(f"Training Finished")
accelerator.end_training()
# save final model
# accelerator.print(f"Saving model to {CFG.OUTPUT_DIR}")
if CFG.OUTPUT_DIR is not None:
accelerator.wait_for_everyone()
unwrapped_model = accelerator.unwrap_model(model)
with accelerator.main_process_first():
accelerator.save(
unwrapped_model.state_dict(), f"{CFG.OUTPUT_DIR}/final/final_model.pt"
)
if __name__ == "__main__":
Train()
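The checkpoint-resume arithmetic above (parse `step_N` from the checkpoint directory name, then multiply by the gradient-accumulation factor to get the number of dataloader batches to skip) can be sketched in isolation. The function name and paths below are illustrative, not part of the training script:

```python
import os

def parse_resume_step(checkpoint_path: str, grad_accumulate_every: int) -> int:
    # Checkpoints are saved as directories named "step_<n>" via
    # accelerator.save_state, where <n> counts completed optimizer steps.
    name = os.path.splitext(os.path.basename(checkpoint_path))[0]
    completed_steps = int(name.replace("step_", ""))
    # Each optimizer step consumed grad_accumulate_every dataloader batches,
    # so multiply to get the batch index for skip_first_batches.
    return completed_steps * grad_accumulate_every

print(parse_resume_step("andromeda_v1/step_1000", 4))  # 4000
```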
# ---- Andromeda-llm :: /Andromeda-llm-0.0.3.tar.gz/Andromeda-llm-0.0.3/Andromeda/train_distributed.py ----
import multiprocessing
import argparse
from itertools import chain
from datasets import load_dataset
from transformers import AutoTokenizer
#falcon tokenizer
"""
Falcon dataset
Data Fields
content: the processed and cleaned text contained in the page;
url: the url of the webpage crawled to produce the sample;
timestamp: timestamp of when the webpage was crawled by CommonCrawl;
dump: the CommonCrawl dump the sample is a part of;
segment: the CommonCrawl segment the sample is a part of;
image_urls: a list of elements in the type [image_url, image_alt_text] for all the images found in the content of the sample.
"""
class CFG:
SEED: int = 42
SEQ_LEN: int = 8192
NUM_CPU: int = multiprocessing.cpu_count()
HF_ACCOUNT_REPO: str = "YOUR_HF_USERNAME/YOUR_REPO"  # Hugging Face account/repo to push the dataset to
#"EleutherAI/gpt-neox-20b"
# TOKENIZER: str = "tiiuae/falcon-40b-instruct"
TOKENIZER: str = "EleutherAI/gpt-neox-20b"
# DATASET_NAME: str = "EleutherAI/the_pile_deduplicated"
DATASET_NAME: str = "tiiuae/falcon-refinedweb"
#perhaps will need finetuning
def built_dataset(args):
tokenizer = AutoTokenizer.from_pretrained(CFG.TOKENIZER)
train_dataset = load_dataset(CFG.DATASET_NAME, split="train", streaming=True)
def tokenize_function(example):
# falcon-refinedweb stores its text in the "content" field (see the data fields above)
return tokenizer([t + tokenizer.eos_token for t in example["content"]])
tokenized_dataset = train_dataset.map(
tokenize_function,
batched=True,
num_proc=CFG.NUM_CPU,
remove_columns=["content"],
)
block_size = CFG.SEQ_LEN
# Main data processing function that will concatenate all texts from our dataset and generate chunks of block_size.
def group_texts(examples):
#concatenate all texts
concatenated_examples = {k: list(chain(*examples[k])) for k in examples.keys()}
total_length = len(concatenated_examples[list(examples.keys())[0]])
# Drop the small remainder; we could add padding instead if the model supported it.
if total_length >= block_size:
total_length = (total_length // block_size) * block_size
#split by chunks of max length
result = {
k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
for k, t in concatenated_examples.items()
}
return result
train_tokenized_dataset = tokenized_dataset.map(
group_texts,
batched=True,
num_proc=CFG.NUM_CPU,
)
train_tokenized_dataset.push_to_hub(CFG.HF_ACCOUNT_REPO)
if __name__ == '__main__':
parser = argparse.ArgumentParser(description="Process and push dataset to Hugging Face Hub")
parser.add_argument("--seed", type=int, default=CFG.SEED, help="Random seed")
parser.add_argument("--seq_len", type=int, default=CFG.SEQ_LEN, help="Sequence length for processing")
parser.add_argument("--hf_account", type=str, default=CFG.HF_ACCOUNT_REPO, help="Hugging Face account name and repo")
parser.add_argument("--tokenizer", type=str, default=CFG.TOKENIZER, help="Tokenizer model to use")
parser.add_argument("--dataset_name", type=str, default=CFG.DATASET_NAME, help="Name of the dataset to process")
args = parser.parse_args()
built_dataset(args)
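The chunking logic in `group_texts` is easiest to verify on a toy batch. This standalone copy mirrors the function above (concatenate per column, truncate to a multiple of `block_size`, split); the toy token ids are illustrative:

```python
from itertools import chain

def group_texts(examples, block_size):
    # Concatenate all token lists per column.
    concatenated = {k: list(chain(*examples[k])) for k in examples.keys()}
    total_length = len(concatenated[list(examples.keys())[0]])
    # Drop the small remainder so every chunk is exactly block_size long.
    if total_length >= block_size:
        total_length = (total_length // block_size) * block_size
    # Split into chunks of block_size.
    return {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated.items()
    }

batch = {"input_ids": [[1, 2, 3], [4, 5], [6, 7, 8, 9]]}
print(group_texts(batch, block_size=4))  # {'input_ids': [[1, 2, 3, 4], [5, 6, 7, 8]]}
```

Note how the trailing token `9` is dropped: nine tokens truncate to eight, i.e. two full blocks of four.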
# ---- Andromeda-llm :: /Andromeda-llm-0.0.3.tar.gz/Andromeda-llm-0.0.3/Andromeda/build_dataset.py ----
import math
import multiprocessing
import os
from datetime import timedelta
from functools import partial
from itertools import chain
import torch
from torch.distributed.fsdp import (
FullyShardedDataParallel,
MixedPrecision,
BackwardPrefetch,
ShardingStrategy,
)
from accelerate import Accelerator
from accelerate.utils import (DummyOptim, DummyScheduler,
InitProcessGroupKwargs)
from datasets import concatenate_datasets, load_dataset
from lion_pytorch import Lion
# from palm_rlhf_pytorch import PaLM
# from palm_rlhf_pytorch.palm import LayerNorm, TransformerWrapper
from torch.nn import LayerNorm
from optimus_prime import TransformerWrapper, AutoregressiveWrapper, AndromedaEmbedding, Decoder
from torch.distributed.algorithms._checkpoint.checkpoint_wrapper import (
CheckpointImpl, apply_activation_checkpointing, checkpoint_wrapper)
from torch.distributed.fsdp.wrap import (
transformer_auto_wrap_policy,
)
from accelerate.state import AcceleratorState
from torch.optim import AdamW
from torch.utils.data import DataLoader
from tqdm import tqdm
from transformers import (AutoTokenizer, default_data_collator,
get_cosine_schedule_with_warmup,
get_linear_schedule_with_warmup, set_seed)
from utils.stable_adamw import StableAdamWUnfused
# constants
class CFG:
BATCH_SIZE: int = 3
GRADIENT_ACCUMULATE_EVERY: int = 1
SEED: int = 42
LEARNING_RATE: float = 3e-4
WEIGHT_DECAY: float = 0.1
SEQ_LEN: int = 8192
NUM_CPU: int = multiprocessing.cpu_count()
USE_DEEPSPEED: bool = True
USE_FSDP: bool = True
USE_PRETOKENIZED: bool = True
USE_ACTIVATION_CHECKPOINTING: bool = True
RESUME_FROM_CHECKPOINT: str = None
CHECKPOINTING_STEPS: int = 1000
OUTPUT_DIR: str = "andromeda_v1"
ENTITY_NAME: str = "wandb"  # Put your wandb username here
# helpers
def print_num_params(model, accelerator: Accelerator):
n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
accelerator.print(f"Number of parameters in model: {n_params}")
# activation checkpointing
def activation_checkpointing(
model: torch.nn.Module,
offload_to_cpu: bool = False,
accelerator: Accelerator = None,
):
"""
Apply activation checkpointing to a model.
Args:
model (Module): The model to which to apply activation checkpointing.
offload_to_cpu (bool, optional): Whether to offload the activations to CPU. Defaults to False.
accelerator (Accelerator, optional): The Accelerate library accelerator. Defaults to None.
"""
if accelerator is not None:
accelerator.print("Using activation checkpointing")
check_fn = lambda submodule: isinstance(submodule, TransformerWrapper)
non_reentrant_wrapper = partial(
checkpoint_wrapper,
offload_to_cpu=offload_to_cpu,
checkpoint_impl=CheckpointImpl.NO_REENTRANT,
)
apply_activation_checkpointing(
model, checkpoint_wrapper_fn=non_reentrant_wrapper, check_fn=check_fn
)
# FSDP
def fsdp(
model: torch.nn.Module,
auto_wrap: bool = False,
mp: str = "fp32",
shard_strat: str = "NO_SHARD",
):
"""
This function wraps a given PyTorch model with the FullyShardedDataParallel (FSDP) wrapper to enable efficient data parallelism and model sharding.
Args:
model (torch.nn.Module): The original PyTorch model to be wrapped with FSDP.
auto_wrap (bool, optional): If True, it enables automatic wrapping of the model's layers according to the transformer_auto_wrap_policy. Default is False.
mp (str, optional): The mixed precision mode to be used. Can be 'bf16' for BFloat16, 'fp16' for Float16 or 'fp32' for Float32 precision. Default is 'fp32'.
shard_strat (str, optional): The sharding strategy to be used. Can be 'SHARD_GRAD' for sharding at gradient computation, 'FULL_SHARD' for full model sharding or 'NO_SHARD' for no sharding. Default is 'NO_SHARD'.
Raises:
ValueError: If the provided mp (mixed precision mode) is not 'bf16', 'fp16' or 'fp32'.
ValueError: If the provided shard_strat (sharding strategy) is not 'SHARD_GRAD', 'FULL_SHARD' or 'NO_SHARD'.
Returns:
torch.nn.Module: The input model wrapped with FSDP.
"""
if auto_wrap:
palm_auto_wrap_policy = partial(
transformer_auto_wrap_policy,
transformer_layer_cls={
TransformerWrapper,
},
)
else:
palm_auto_wrap_policy = None
if mp == "bf16":
mp_fsdp = MixedPrecision(
param_dtype=torch.bfloat16,
# Gradient communication precision.
reduce_dtype=torch.bfloat16,
# Buffer precision.
buffer_dtype=torch.bfloat16,
)
elif mp == "fp16":
mp_fsdp = MixedPrecision(
param_dtype=torch.float16,
# Gradient communication precision.
reduce_dtype=torch.float16,
# Buffer precision.
buffer_dtype=torch.float16,
)
elif mp == "fp32":
mp_fsdp = MixedPrecision(
param_dtype=torch.float32,
# Gradient communication precision.
reduce_dtype=torch.float32,
# Buffer precision.
buffer_dtype=torch.float32,
)
else:
raise ValueError(
"Invalid mixed precision mode. Expected 'bf16', 'fp16' or 'fp32', got: {}".format(
mp
)
)
if shard_strat == "SHARD_GRAD":
sharding_strat_fsdp = ShardingStrategy.SHARD_GRAD_OP
elif shard_strat == "FULL_SHARD":
sharding_strat_fsdp = ShardingStrategy.FULL_SHARD
elif shard_strat == "NO_SHARD":
sharding_strat_fsdp = ShardingStrategy.NO_SHARD
else:
raise ValueError(
"Invalid sharding strategy. Expected 'SHARD_GRAD', 'FULL_SHARD' or 'NO_SHARD', got: {}".format(
shard_strat
)
)
model = FullyShardedDataParallel(
model,
auto_wrap_policy=palm_auto_wrap_policy,
mixed_precision=mp_fsdp,
backward_prefetch=BackwardPrefetch.BACKWARD_PRE,
sharding_strategy=sharding_strat_fsdp,
forward_prefetch=True,
use_orig_params=True,
)
return model
# learning rate scheduler
def get_lr_scheduler_with_warmup(
optimizer: torch.optim.Optimizer,
scheduler_type: str,
num_warmup_steps: int,
max_train_steps: int,
grad_accumulate_every: int = 1,
accelerator: Accelerator = None,
):
"""
Get a learning rate scheduler with warmup.
Args:
optimizer (Optimizer): The optimizer for which to create the learning rate scheduler.
scheduler_type (str): The type of learning rate scheduler to create, either "linear" or "cosine".
num_warmup_steps (int): The number of warmup steps for the learning rate scheduler.
max_train_steps (int): The maximum number of training steps.
grad_accumulate_every (int, optional): The gradient accumulation factor. Defaults to 1.
accelerator (Accelerator, optional): The Accelerate library accelerator. Defaults to None.
Returns:
The learning rate scheduler with warmup.
Raises:
ValueError: If scheduler_type is not "linear" or "cosine".
"""
NUM_WARMUP_STEPS = num_warmup_steps
GRADIENT_ACCUMULATE_EVERY = grad_accumulate_every
if accelerator is not None:
accelerator.print(f"Using {scheduler_type} lr scheduler")
if scheduler_type == "linear":
return get_linear_schedule_with_warmup(
optimizer=optimizer,
num_warmup_steps=NUM_WARMUP_STEPS * GRADIENT_ACCUMULATE_EVERY,
num_training_steps=max_train_steps * GRADIENT_ACCUMULATE_EVERY,
)
elif scheduler_type == "cosine":
return get_cosine_schedule_with_warmup(
optimizer=optimizer,
num_warmup_steps=NUM_WARMUP_STEPS * GRADIENT_ACCUMULATE_EVERY,
num_training_steps=max_train_steps * GRADIENT_ACCUMULATE_EVERY,
)
else:
raise ValueError(
"Invalid scheduler_type. Expected 'linear' or 'cosine', got: {}".format(
scheduler_type
)
)
# optimizers
def decoupled_optimizer(
model: torch.nn.Module,
learning_rate: float,
weight_decay: float,
beta_1: float,
beta_2: float,
optimizer_type: str,
use_fsdp: bool = True,
accelerator: Accelerator = None,
):
"""
Decouples the optimizer from the training process.
This function sets up the optimizer for the model by creating two groups of parameters:
one for weight decay and one without weight decay. Then, it initializes the optimizer
with these two groups of parameters.
Args:
model (Module): The model whose parameters are optimized.
learning_rate (float): The learning rate for the optimizer.
weight_decay (float): The weight decay for the optimizer.
beta_1 (float): The exponential decay rate for the 1st moment estimates.
beta_2 (float): The exponential decay rate for the 2nd moment estimates.
optimizer_type (str): The type of the optimizer. Can be 'lion', 'adamw', or 'stable_adamw'.
use_fsdp (bool, optional): If True, the optimizer will work with fully sharded data parallelism. Defaults to True.
accelerator (Accelerator, optional): The accelerator from HuggingFace's Accelerate library. Defaults to None.
Returns:
Optimizer: The initialized optimizer.
Raises:
ValueError: If the optimizer type is not 'lion', 'adamw' or 'stable_adamw'.
"""
accelerator.print(f"Using {optimizer_type} optimizer")
# Create an empty dictionary called param_dict to store the model's named parameters.
param_dict = {}
# Iterate over the model's named parameters and populate the param_dict with key-value pairs.
for param_name, param in model.named_parameters():
param_dict[param_name] = param
# Separate the model's named modules into two groups: decay and no_decay.
# Create an empty list to store the names of the LayerNorm and Embedding layer weights with no weight decay.
no_decay = []
if use_fsdp:
exclude_module = "_fsdp_wrapped_module.token_emb"
else:
exclude_module = "token_emb"
# Iterate through the named modules of the model.
for module_name, module in model.named_modules():
# Check if the current module is an instance of any of the desired types (LayerNorm or torch.nn.Embedding).
for module_type in (LayerNorm, torch.nn.Embedding):
if isinstance(module, module_type):
# If torch.nn.Embedding, append its name with a ".weight" suffix to the no_decay list.
if module_name == exclude_module:
no_decay.append(f"{module_name}.weight")
else:
# If the module is an instance of LayerNorm
no_decay.append(f"{module_name}.gamma")
# Exit the inner loop since the desired module has been found.
break
# Create an empty list to store the names of the Linear layer weights with weight decay.
decay = []
# Iterate through the named modules of the model.
for module_name, module in model.named_modules():
# Check if the current module is an instance of the desired type (torch.nn.Linear).
for module_type in (torch.nn.Linear,):
if isinstance(module, module_type):
# If the module is an instance of torch.nn.Linear, append its name with a ".weight" suffix to the decay list.
decay.append(f"{module_name}.weight")
# Exit the inner loop since the desired module has been found.
break
# Create two separate lists of model parameters: decay_param and no_decay_param.
# The decay_param list contains the parameters that should have weight decay applied.
# The no_decay_param list contains the parameters that should not have weight decay applied, excluding the 'to_logits.weight' parameter.
# Create an empty list called decay_param to store the parameters with weight decay.
decay_param = []
if use_fsdp:
exclude_param = "_fsdp_wrapped_module.to_logits.weight"
else:
exclude_param = "to_logits.weight"
# Iterate over the decay list, which contains the names of the parameters with weight decay.
for param in decay:
# Check if the current parameter is not 'to_logits.weight'.
# Append the corresponding parameter from param_dict to the decay_param list.
if param != exclude_param:
decay_param.append(param_dict[param])
# Create an empty list called no_decay_param to store the parameters without weight decay.
no_decay_param = []
# Iterate over the no_decay list, which contains the names of the parameters without weight decay.
for param in no_decay:
if param in param_dict:
# Append the corresponding parameter from param_dict to the no_decay_param list.
no_decay_param.append(param_dict[param])
# Create a list called grouped_params that contains two dictionaries.
# The first dictionary has the decay_param list and the corresponding weight_decay value.
# The second dictionary has the no_decay_param list and a weight_decay value of 0.0.
grouped_params = [
{"params": decay_param, "weight_decay": weight_decay},
{"params": no_decay_param, "weight_decay": 0.0},
]
# Create a variable called optimizer that stores an instance of the optimizer.
if optimizer_type == "lion":
optimizer = Lion(grouped_params, lr=learning_rate, betas=(beta_1, beta_2),)
elif optimizer_type == "adamw":
optimizer = AdamW(grouped_params, lr=learning_rate, betas=(beta_1, beta_2),)
elif optimizer_type == "deepspeed":
optimizer = DummyOptim(grouped_params, lr=learning_rate, betas=(beta_1, beta_2),)
elif optimizer_type == "stable_adamw":
optimizer = StableAdamWUnfused(
grouped_params, lr=learning_rate, betas=(beta_1, beta_2),
)
else:
raise ValueError(
"Invalid optimizer_type. Expected 'lion', 'adamw', 'deepspeed' or 'stable_adamw', got: {}".format(
optimizer_type
)
)
# Return the optimizer.
return optimizer
# dataloaders
def build_dataloaders():
"""
Build data loaders for training.
This function performs the following steps:
1. Load the tokenizer from the pretrained "tiiuae/falcon-40b" model.
2. Load the "openwebtext" dataset.
3. Tokenize the dataset, adding the end-of-sentence token to each text.
4. Process the tokenized dataset into chunks of a specified block size.
Returns:
Dataset: The processed dataset ready for training.
"""
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-40b")
dataset = load_dataset("openwebtext", split="train", streaming=True)
tokenized_dataset = dataset.map(
lambda example: tokenizer([t + tokenizer.eos_token for t in example["text"]]),
batched=True,
num_proc=CFG.NUM_CPU,
remove_columns=["text"],
)
block_size = CFG.SEQ_LEN
# Main data processing function that will concatenate all texts from our dataset and generate chunks of block_size.
def group_texts(examples):
# Concatenate all texts.
concatenated_examples = {k: list(chain(*examples[k])) for k in examples.keys()}
total_length = len(concatenated_examples[list(examples.keys())[0]])
# We drop the small remainder, we could add padding if the model supported it instead of this drop, you can
# customize this part to your needs.
if total_length >= block_size:
total_length = (total_length // block_size) * block_size
# Split by chunks of max_len.
result = {
k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
for k, t in concatenated_examples.items()
}
return result
train_dataset = tokenized_dataset.map(
group_texts, batched=True, num_proc=CFG.NUM_CPU,
)
return train_dataset
def build_pre_tokenized():
d0 = load_dataset("conceptofmind/c4_0-to-20_neox_with_eos_8k", split="train", streaming=True)
# d1 = load_dataset("conceptofmind/c4_21-to-40_neox_with_eos_8k", split="train")
# d2 = load_dataset("conceptofmind/c4_41-to-60_neox_with_eos_8k", split="train")
# d3 = load_dataset("conceptofmind/c4_61-to-80_neox_with_eos_8k", split="train")
# d4 = load_dataset("conceptofmind/c4_81-to-100_neox_with_eos_8k", split="train")
# train_dataset = concatenate_datasets([d0, d1, d2, d3, d4])
return d0
# main
def main():
# accelerator
timeout = InitProcessGroupKwargs(timeout=timedelta(seconds=1_000_000))
accelerator = Accelerator(
gradient_accumulation_steps=CFG.GRADIENT_ACCUMULATE_EVERY,
mixed_precision="bf16",
log_with="wandb",
kwargs_handlers=[timeout],
)
# AcceleratorState().deepspeed_plugin.deepspeed_config['train_micro_batch_size_per_gpu'] = 4 #??????
accelerator.init_trackers(
project_name="Andromeda",
config={
"batch_size": CFG.BATCH_SIZE,
"gradient_accumulate_every": CFG.GRADIENT_ACCUMULATE_EVERY,
"learning_rate": CFG.LEARNING_RATE,
"seq_len": CFG.SEQ_LEN,
},
init_kwargs={"wandb": {"entity": CFG.ENTITY_NAME}},
)
accelerator.print(f"Total GPUS: {accelerator.num_processes}")
# set seed
set_seed(CFG.SEED)
# tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = TransformerWrapper(
num_tokens=64007,
max_seq_len=8192,
use_abs_pos_emb=False,
# tokenizer=tokenizer,
embedding_provider=AndromedaEmbedding(),
attn_layers = Decoder(
dim=128, # 2048
depth=8, # 16
dim_head=128,
heads=8,
alibi_pos_bias=True,
alibi_num_heads=4,
rotary_xpos=True,
attn_flash = True,
deepnorm=True,
shift_tokens=1,
attn_one_kv_head = True,
qk_norm=True,
attn_qk_norm=True,
attn_qk_norm_dim_scale=True # set this to True, in addition to `attn_qk_norm = True`
)
).to(accelerator.device)
model = AutoregressiveWrapper(model).to(accelerator.device)
print_num_params(model, accelerator)
if CFG.USE_FSDP:
model = fsdp(
model,
mp="bf16",
shard_strat="SHARD_GRAD"
)
if CFG.USE_ACTIVATION_CHECKPOINTING:
activation_checkpointing(model, accelerator=accelerator)
model = accelerator.prepare(model)
# dataloaders
if CFG.USE_PRETOKENIZED:
train_dataset = build_pre_tokenized()
else:
train_dataset = build_dataloaders()
train_loader = DataLoader(
train_dataset, batch_size=CFG.BATCH_SIZE, collate_fn=default_data_collator,
)
# optimizer
optim = decoupled_optimizer(
model=model,
learning_rate=CFG.LEARNING_RATE,
weight_decay=CFG.WEIGHT_DECAY,
beta_1=0.90,
beta_2=0.95,
optimizer_type='stable_adamw',
use_fsdp=True,
accelerator=accelerator
)
# Determine number of training steps
max_train_steps = math.ceil(len(train_loader) / CFG.GRADIENT_ACCUMULATE_EVERY)
accelerator.print(f"Max train steps: {max_train_steps}")
# lr scheduler
NUM_WARMUP_STEPS = int(max_train_steps * 0.01)
accelerator.print(f"Num warmup steps: {NUM_WARMUP_STEPS}")
if CFG.USE_DEEPSPEED:
lr_scheduler = DummyScheduler(
optim,
total_num_steps=max_train_steps * accelerator.num_processes,
warmup_num_steps=NUM_WARMUP_STEPS
)
else:
lr_scheduler = get_lr_scheduler_with_warmup(
optimizer=optim,
scheduler_type="cosine",
num_warmup_steps=NUM_WARMUP_STEPS,
max_train_steps=max_train_steps,
grad_accumulate_every=CFG.GRADIENT_ACCUMULATE_EVERY,
)
# prepare
optim, train_loader, lr_scheduler = accelerator.prepare(
optim, train_loader, lr_scheduler
)
# checkpoint scheduler
accelerator.register_for_checkpointing(lr_scheduler)
# Recalculate max_train_steps: accelerator.prepare can change the dataloader's length (e.g. by sharding it across processes)
max_train_steps = math.ceil(len(train_loader) / CFG.GRADIENT_ACCUMULATE_EVERY)
accelerator.print(f"Max train steps recalculated: {max_train_steps}")
# Total batch size for logging
total_batch_size = (
CFG.BATCH_SIZE * accelerator.num_processes * CFG.GRADIENT_ACCUMULATE_EVERY
)
accelerator.print(f"Total batch size: {total_batch_size}")
# resume training
progress_bar = tqdm(
range(max_train_steps), disable=not accelerator.is_local_main_process
)
completed_steps = 0
if CFG.RESUME_FROM_CHECKPOINT:
if CFG.RESUME_FROM_CHECKPOINT is not None and CFG.RESUME_FROM_CHECKPOINT != "":
accelerator.print(f"Resuming from checkpoint {CFG.RESUME_FROM_CHECKPOINT}")
accelerator.load_state(CFG.RESUME_FROM_CHECKPOINT)
path = os.path.basename(CFG.RESUME_FROM_CHECKPOINT)
training_difference = os.path.splitext(path)[0]
# need to multiply `gradient_accumulation_steps` to reflect real steps
resume_step = (
int(training_difference.replace("step_", ""))
* CFG.GRADIENT_ACCUMULATE_EVERY
)
if CFG.RESUME_FROM_CHECKPOINT and resume_step is not None:
train_loader = accelerator.skip_first_batches(train_loader, resume_step)
completed_steps += resume_step
progress_bar.update(resume_step)
# training
model.train()
for step, batch in enumerate(train_loader):
with accelerator.accumulate(model):
inputs = batch["input_ids"].to(accelerator.device)
loss = model(inputs, return_loss=True)
accelerator.backward(loss)
accelerator.log({"loss": loss.item()}, step=step)
if accelerator.sync_gradients:
accelerator.clip_grad_norm_(model.parameters(), 1.0)
optim.step()
lr_scheduler.step()
optim.zero_grad()
if accelerator.sync_gradients:
progress_bar.update(1)
completed_steps += 1
if isinstance(CFG.CHECKPOINTING_STEPS, int):
if completed_steps % CFG.CHECKPOINTING_STEPS == 0:
output_dir = f"step_{completed_steps}"
if CFG.OUTPUT_DIR is not None:
output_dir = os.path.join(CFG.OUTPUT_DIR, output_dir)
accelerator.save_state(output_dir)
if completed_steps >= max_train_steps:
break
# end training
# accelerator.print(f"Training Finished")
accelerator.end_training()
# save final model
# accelerator.print(f"Saving model to {CFG.OUTPUT_DIR}")
if CFG.OUTPUT_DIR is not None:
accelerator.wait_for_everyone()
unwrapped_model = accelerator.unwrap_model(model)
with accelerator.main_process_first():
accelerator.save(
unwrapped_model.state_dict(), f"{CFG.OUTPUT_DIR}/final/final_model.pt"
)
if __name__ == "__main__":
main()
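The step accounting in the setup above (max train steps, total batch size, 1% warmup as in `int(max_train_steps * 0.01)`) reduces to a little arithmetic. This sketch reproduces it; the numbers passed in are illustrative:

```python
import math

def training_schedule(num_batches, batch_size_per_gpu, num_gpus,
                      grad_accumulate_every, warmup_frac=0.01):
    # One optimizer step per grad_accumulate_every dataloader batches.
    max_train_steps = math.ceil(num_batches / grad_accumulate_every)
    # Effective per-step batch size across all processes.
    total_batch_size = batch_size_per_gpu * num_gpus * grad_accumulate_every
    # Warmup is a fixed fraction of the optimizer steps.
    num_warmup_steps = int(max_train_steps * warmup_frac)
    return max_train_steps, total_batch_size, num_warmup_steps

print(training_schedule(10_000, 3, 8, 4))  # (2500, 96, 25)
```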
# ---- Andromeda-llm :: /Andromeda-llm-0.0.3.tar.gz/Andromeda-llm-0.0.3/Andromeda/train_distributed_accelerate.py ----
import torch
from transformers import AutoTokenizer
from einops._torch_specific import allow_ops_in_compiled_graph
import argparse
def main():
allow_ops_in_compiled_graph()
torch.hub._validate_not_a_forked_repo = lambda a, b, c: True
parser = argparse.ArgumentParser(description="Generate text using Andromeda model")
parser.add_argument("prompt", type=str, help="Text prompt to generate text")
parser.add_argument(
"--seq_len", type=int, default=256, help="Sequence length for generated text"
)
parser.add_argument(
"--temperature", type=float, default=0.8, help="Sampling temperature"
)
parser.add_argument(
"--filter_thres", type=float, default=0.9, help="Filter threshold for sampling"
)
parser.add_argument(
"--model",
type=str,
default="andromeda-e-1",
help="Model to use for generation",
)
parser.add_argument(
"--dtype",
type=str,
default="fp32",
help="Data type for the model: 'bf16', or 'fp32'",
)
args = parser.parse_args()
dtype = torch.float32
if args.dtype == 'bf16':
dtype = torch.bfloat16
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
#need to submit to torch hub
model = torch.hub.load("apacai/andromeda", args.model).to(device).to(dtype)
opt_model = torch.compile(model, backend="hidet")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
encoded_text = tokenizer(args.prompt, return_tensors="pt")
output_tensor = opt_model.generate(
seq_len=args.seq_len,
prompt=encoded_text["input_ids"].to(device),
temperature=args.temperature,
filter_thres=args.filter_thres,
pad_value=0.0,
eos_token=tokenizer.eos_token_id,
return_seq_without_prompt=False,
use_tqdm=True,
)
decoded_output = tokenizer.batch_decode(output_tensor, skip_special_tokens=True)
return decoded_output
if __name__ == "__main__":
generated_text = main()
for text in generated_text:
print(f"{text}")
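`filter_thres=0.9` above is passed through to a top-k logits filter. A plain-Python sketch of that convention (keep roughly the top `1 - thres` fraction of the vocabulary, mask the rest to negative infinity before sampling) looks like this; the function name is illustrative, not the library's API:

```python
def top_k_filter(logits, thres=0.9):
    # Keep the top (1 - thres) fraction of entries (at least one),
    # masking everything below the cutoff to -inf.
    k = max(1, int((1 - thres) * len(logits)))
    cutoff = sorted(logits, reverse=True)[k - 1]
    return [v if v >= cutoff else float("-inf") for v in logits]

print(top_k_filter([2.0, 5.0, 1.0, 3.0], thres=0.5))  # [-inf, 5.0, -inf, 3.0]
```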
# ---- Andromeda-llm :: /Andromeda-llm-0.0.3.tar.gz/Andromeda-llm-0.0.3/Andromeda/inference.py ----
from math import ceil
import torch
from torch import nn
import torch.nn.functional as F
from einops import rearrange, pack, unpack
from optimus_prime.autoregressive_wrapper import top_p, top_k, eval_decorator
# helper functions
def exists(val):
return val is not None
def divisible_by(numer, denom):
return (numer % denom) == 0
# xl autoregressive wrapper class
class XLAutoregressiveWrapper(nn.Module):
def __init__(
self,
net,
ignore_index = -100,
pad_value = 0
):
super().__init__()
self.pad_value = pad_value
self.ignore_index = ignore_index
self.net = net
self.max_seq_len = net.max_seq_len
@torch.no_grad()
@eval_decorator
def generate(
self,
start_tokens,
seq_len,
eos_token = None,
temperature = 1.,
filter_logits_fn = top_k,
filter_thres = 0.9,
mems = None,
**kwargs
):
device, max_seq_len = start_tokens.device, self.max_seq_len
start_tokens, ps = pack([start_tokens], '* n')
b, t = start_tokens.shape
*all_leading_tokens, _ = start_tokens.split(max_seq_len, dim = -1)
# catch the memory up to the current segment
for leading_tokens in all_leading_tokens:
_, mems = self.net(
leading_tokens,
mems = mems,
return_mems = True,
**kwargs
)
# now start sampling from the current segment
curr_pos = len(all_leading_tokens) * max_seq_len
curr_mems = mems
out = start_tokens
for _ in range(seq_len):
curr_segment_len = out.shape[-1]
is_last_segment_tokens = divisible_by(curr_segment_len, max_seq_len)
x = out[:, curr_pos:]
logits, mems = self.net(
x,
mems = curr_mems,
return_mems = True,
**kwargs
)
logits = logits[:, -1]
filtered_logits = filter_logits_fn(logits, thres = filter_thres)
probs = F.softmax(filtered_logits / temperature, dim=-1)
sample = torch.multinomial(probs, 1)
if is_last_segment_tokens:
curr_pos = curr_segment_len
curr_mems = mems
out = torch.cat((out, sample), dim=-1)
if exists(eos_token):
is_eos_tokens = (out == eos_token)
if is_eos_tokens.any(dim = -1).all():
# mask out everything after the eos tokens
shifted_is_eos_tokens = F.pad(is_eos_tokens, (1, -1))
mask = shifted_is_eos_tokens.float().cumsum(dim = -1) >= 1
out = out.masked_fill(mask, self.pad_value)
break
out = out[:, t:]
out, = unpack(out, ps, '* n')
return out
def forward(
self,
x,
mems = None,
**kwargs
):
ignore_index, max_seq_len = self.ignore_index, self.max_seq_len
x, labels = x[:, :-1], x[:, 1:]
seq_len = x.shape[1]
# prepare chunks
split_x = x.split(max_seq_len, dim = -1)
split_labels = labels.split(max_seq_len, dim = -1)
loss_weights = tuple(map(lambda t: t.shape[-1] / seq_len, split_x))
# go through each chunk and derive weighted losses
total_loss = 0.
for chunk, chunk_labels, loss_weight in zip(split_x, split_labels, loss_weights):
logits, mems = self.net(
chunk,
mems = mems,
return_mems = True,
**kwargs
)
loss = F.cross_entropy(
rearrange(logits, 'b n c -> b c n'),
chunk_labels,
ignore_index = ignore_index
)
total_loss = total_loss + loss * loss_weight
return total_loss
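The per-chunk loss combination in `forward` above is a length-weighted average: each chunk's cross-entropy is weighted by `chunk_len / seq_len`. A minimal numeric sketch of just that weighting, with illustrative loss values:

```python
def weighted_chunk_loss(chunk_losses, chunk_lengths):
    # Weight each chunk's loss by its share of the full sequence,
    # mirroring loss_weight = chunk_len / seq_len in the wrapper above.
    seq_len = sum(chunk_lengths)
    return sum(loss * (n / seq_len) for loss, n in zip(chunk_losses, chunk_lengths))

# Two equal 4-token chunks average evenly:
print(weighted_chunk_loss([2.0, 4.0], [4, 4]))  # 3.0
```

A short trailing remainder chunk contributes proportionally less, so a hard final fragment cannot dominate the sequence loss.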
# ---- Andromeda-llm :: /Andromeda-llm-0.0.3.tar.gz/Andromeda-llm-0.0.3/Andromeda/optimus_prime/xl_autoregressive_wrapper.py ----
from functools import partial
import torch
from torch import nn, einsum, Tensor
import torch.nn.functional as F
from collections import namedtuple
from functools import wraps
from packaging import version
from dataclasses import dataclass
from einops import rearrange
# constants
EfficientAttentionConfig = namedtuple('EfficientAttentionConfig', ['enable_flash', 'enable_math', 'enable_mem_efficient'])
@dataclass
class Intermediates:
qk_similarities: Tensor = None
pre_softmax_attn: Tensor = None
post_softmax_attn: Tensor = None
# helpers
def exists(val):
return val is not None
def default(val, d):
return val if exists(val) else d
def once(fn):
called = False
@wraps(fn)
def inner(x):
nonlocal called
if called:
return
called = True
return fn(x)
return inner
print_once = once(print)
# main class
class Attend(nn.Module):
def __init__(
self,
*,
dropout = 0.,
causal = False,
heads = None,
talking_heads = False,
scale = None,
qk_norm = False,
flash = False,
):
super().__init__()
self.scale = scale
self.qk_norm = qk_norm
self.causal = causal
self.attn_fn = partial(F.softmax, dtype = torch.float32) if not qk_norm else F.softmax
self.dropout = dropout
self.attn_dropout = nn.Dropout(dropout)
# talking heads
assert not (flash and talking_heads), 'talking heads not compatible with flash attention'
self.talking_heads = talking_heads
if talking_heads:
self.pre_softmax_talking_heads = nn.Conv2d(heads, heads, 1, bias = False)
self.post_softmax_talking_heads = nn.Conv2d(heads, heads, 1, bias = False)
# flash attention
self.flash = flash
assert not (flash and version.parse(torch.__version__) < version.parse('2.0.0')), 'in order to use flash attention, you must be using pytorch 2.0 or above'
# determine efficient attention configs for cuda and cpu
self.cpu_config = EfficientAttentionConfig(True, True, True)
self.cuda_config = None
if not torch.cuda.is_available() or not flash:
return
device_properties = torch.cuda.get_device_properties(torch.device('cuda'))
if device_properties.major == 8 and device_properties.minor == 0:
print_once('A100 GPU detected, using flash attention if input tensor is on cuda')
self.cuda_config = EfficientAttentionConfig(True, False, False)
else:
print_once('Non-A100 GPU detected, using math or mem efficient attention if input tensor is on cuda')
self.cuda_config = EfficientAttentionConfig(False, True, True)
def flash_attn(
self,
q, k, v,
mask = None,
attn_bias = None
):
batch, heads, q_len, _, k_len, is_cuda, device = *q.shape, k.shape[-2], q.is_cuda, q.device
# Recommended for multi-query single-key-value attention by Tri Dao
# kv shape torch.Size([1, 512, 64]) -> torch.Size([1, 8, 512, 64])
if k.ndim == 3:
k = rearrange(k, 'b ... -> b 1 ...').expand_as(q)
if v.ndim == 3:
v = rearrange(v, 'b ... -> b 1 ...').expand_as(q)
# handle scale - by default they scale by dim_head ** -0.5, but need to take care if using cosine sim attention
if self.qk_norm:
default_scale = q.shape[-1] ** -0.5
q = q * (default_scale / self.scale)
# Check if mask exists and expand to compatible shape
# The mask is B L, so it would have to be expanded to B H N L
causal = self.causal
if exists(mask):
assert mask.ndim == 4
mask = mask.expand(batch, heads, q_len, k_len)
# manually handle causal mask, if another mask was given
if causal:
causal_mask = torch.ones((q_len, k_len), dtype = torch.bool, device = device).triu(k_len - q_len + 1)
mask = mask | causal_mask
causal = False
# handle alibi positional bias
# convert from bool to float
if exists(attn_bias):
attn_bias = rearrange(attn_bias, 'h i j -> 1 h i j').expand(batch, -1, -1, -1)
# if mask given, the mask would already contain the causal mask from above logic
# otherwise, if no mask given but still causal, mask out alibi positional bias to a large negative number
mask_value = -torch.finfo(q.dtype).max
if exists(mask):
attn_bias = attn_bias.masked_fill(mask, mask_value // 2)
elif causal:
causal_mask = torch.ones((q_len, k_len), dtype = torch.bool, device = device).triu(k_len - q_len + 1)
attn_bias = attn_bias.masked_fill(causal_mask, mask_value // 2)
causal = False
# scaled_dot_product_attention handles attn_mask either as bool or additive bias
# make it an additive bias here
mask = attn_bias
# Check if there is a compatible device for flash attention
config = self.cuda_config if is_cuda else self.cpu_config
# pytorch 2.0 flash attn: q, k, v, mask, dropout, causal, softmax_scale
with torch.backends.cuda.sdp_kernel(**config._asdict()):
out = F.scaled_dot_product_attention(
q, k, v,
attn_mask = mask,
dropout_p = self.dropout if self.training else 0.,
is_causal = causal
)
return out, Intermediates()
def forward(
self,
q, k, v,
mask = None,
attn_bias = None,
prev_attn = None
):
"""
einstein notation
b - batch
h - heads
n, i, j - sequence length (base sequence length, source, target)
d - feature dimension
"""
n, device = q.shape[-2], q.device
scale = default(self.scale, q.shape[-1] ** -0.5)
if self.flash:
assert not exists(prev_attn), 'residual attention not compatible with flash attention'
return self.flash_attn(q, k, v, mask = mask, attn_bias = attn_bias)
kv_einsum_eq = 'b j d' if k.ndim == 3 else 'b h j d'
dots = einsum(f'b h i d, {kv_einsum_eq} -> b h i j', q, k) * scale
if exists(prev_attn):
dots = dots + prev_attn
qk_similarities = dots.clone()
if self.talking_heads:
dots = self.pre_softmax_talking_heads(dots)
if exists(attn_bias):
dots = dots + attn_bias
dtype = dots.dtype
pre_softmax_attn = dots.clone()
mask_value = -torch.finfo(dots.dtype).max
if exists(mask):
dots = dots.masked_fill(mask, mask_value)
if self.causal:
i, j = dots.shape[-2:]
causal_mask = torch.ones((i, j), dtype = torch.bool, device = device).triu(j - i + 1)
dots = dots.masked_fill(causal_mask, mask_value)
attn = self.attn_fn(dots, dim = -1)
attn = attn.type(dtype)
post_softmax_attn = attn.clone()
attn = self.attn_dropout(attn)
if self.talking_heads:
attn = self.post_softmax_talking_heads(attn)
out = einsum(f'b h i j, {kv_einsum_eq} -> b h i d', attn, v)
intermediates = Intermediates(
qk_similarities = qk_similarities,
pre_softmax_attn = pre_softmax_attn,
post_softmax_attn = post_softmax_attn
)
return out, intermediates
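# For reference, the non-flash path of Attend above reduces (without talking
# heads, dropout, qk-norm, or extra biases) to plain scaled dot-product
# attention with a causal mask. A self-contained sketch using hypothetical
# small shapes; the first query position can only attend to itself, so its
# output equals the first value vector:

```python
import torch

def naive_causal_attention(q, k, v):
    # softmax(q k^T * scale) v with an upper-triangular causal mask,
    # mirroring Attend's einsum / masked_fill path
    scale = q.shape[-1] ** -0.5
    dots = torch.einsum('b h i d, b h j d -> b h i j', q, k) * scale
    i, j = dots.shape[-2:]
    causal_mask = torch.ones(i, j, dtype = torch.bool).triu(j - i + 1)
    dots = dots.masked_fill(causal_mask, -torch.finfo(dots.dtype).max)
    attn = dots.softmax(dim = -1)
    return torch.einsum('b h i j, b h j d -> b h i d', attn, v)

q = k = v = torch.randn(1, 2, 4, 8)
out = naive_causal_attention(q, k, v)
```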
# ==== Andromeda-llm-0.0.3 :: Andromeda/optimus_prime/attend.py ====
import math
from random import random
import torch
from torch import nn, einsum, Tensor
import torch.nn.functional as F
from functools import partial, wraps
from inspect import isfunction
from collections import namedtuple
from dataclasses import dataclass
from typing import List
from einops import rearrange, repeat, reduce
from einops.layers.torch import Rearrange
from optimus_prime.attend import Attend, Intermediates
from optimus_prime.autoregressive_wrapper import AutoregressiveWrapper
from abc import ABC, abstractmethod
# constants
DEFAULT_DIM_HEAD = 64
@dataclass
class LayerIntermediates:
hiddens: List[Tensor] = None
attn_intermediates: List[Intermediates] = None
# helpers
def exists(val):
return val is not None
def default(val, d):
if exists(val):
return val
return d() if isfunction(d) else d
def cast_tuple(val, depth):
return val if isinstance(val, tuple) else (val,) * depth
def maybe(fn):
@wraps(fn)
def inner(x, *args, **kwargs):
if not exists(x):
return x
return fn(x, *args, **kwargs)
return inner
class always():
def __init__(self, val):
self.val = val
def __call__(self, *args, **kwargs):
return self.val
class not_equals():
def __init__(self, val):
self.val = val
def __call__(self, x, *args, **kwargs):
return x != self.val
class equals():
def __init__(self, val):
self.val = val
def __call__(self, x, *args, **kwargs):
return x == self.val
# tensor helpers
def max_neg_value(tensor):
return -torch.finfo(tensor.dtype).max
def l2norm(t, groups = 1):
t = rearrange(t, '... (g d) -> ... g d', g = groups)
t = F.normalize(t, p = 2, dim = -1)
return rearrange(t, '... g d -> ... (g d)')
def pad_at_dim(t, pad, dim = -1, value = 0.):
dims_from_right = (- dim - 1) if dim < 0 else (t.ndim - dim - 1)
zeros = ((0, 0) * dims_from_right)
return F.pad(t, (*zeros, *pad), value = value)
def or_reduce(masks):
head, *body = masks
for rest in body:
head = head | rest
return head
# init helpers
def init_zero_(layer):
nn.init.constant_(layer.weight, 0.)
if exists(layer.bias):
nn.init.constant_(layer.bias, 0.)
# keyword argument helpers
def pick_and_pop(keys, d):
values = list(map(lambda key: d.pop(key), keys))
return dict(zip(keys, values))
def group_dict_by_key(cond, d):
return_val = [dict(),dict()]
for key in d.keys():
match = bool(cond(key))
ind = int(not match)
return_val[ind][key] = d[key]
return (*return_val,)
def string_begins_with(prefix, str):
return str.startswith(prefix)
def group_by_key_prefix(prefix, d):
return group_dict_by_key(partial(string_begins_with, prefix), d)
def groupby_prefix_and_trim(prefix, d):
kwargs_with_prefix, kwargs = group_dict_by_key(partial(string_begins_with, prefix), d)
kwargs_without_prefix = dict(map(lambda x: (x[0][len(prefix):], x[1]), tuple(kwargs_with_prefix.items())))
return kwargs_without_prefix, kwargs
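# The prefix helpers above let callers pass e.g. attn_dropout = 0.1 to a parent
# module and have it routed, prefix-stripped, to the attention submodule. A
# sketch of the same routing with hypothetical kwargs:

```python
# prefix routing as done by groupby_prefix_and_trim; the kwargs here
# are hypothetical examples
def split_by_prefix(prefix, kwargs):
    with_prefix = {k[len(prefix):]: v for k, v in kwargs.items() if k.startswith(prefix)}
    rest = {k: v for k, v in kwargs.items() if not k.startswith(prefix)}
    return with_prefix, rest

attn_kwargs, rest = split_by_prefix('attn_', {'attn_dropout': 0.1, 'ff_mult': 4, 'depth': 6})
```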
# initializations
def deepnorm_init(
transformer,
beta,
module_name_match_list = ['.ff.', '.to_v', '.to_out']
):
for name, module in transformer.named_modules():
if type(module) != nn.Linear:
continue
needs_beta_gain = any(map(lambda substr: substr in name, module_name_match_list))
gain = beta if needs_beta_gain else 1
nn.init.xavier_normal_(module.weight.data, gain = gain)
if exists(module.bias):
nn.init.constant_(module.bias.data, 0)
# structured dropout, more effective than traditional attention dropouts
def dropout_seq(seq, mask, dropout):
b, n, *_, device = *seq.shape, seq.device
logits = torch.randn(b, n, device = device)
if exists(mask):
mask_value = max_neg_value(logits)
logits = logits.masked_fill(~mask, mask_value)
keep_prob = 1. - dropout
num_keep = max(1, int(keep_prob * n))
keep_indices = logits.topk(num_keep, dim = 1).indices
batch_indices = torch.arange(b, device = device)
batch_indices = rearrange(batch_indices, 'b -> b 1')
seq = seq[batch_indices, keep_indices]
if exists(mask):
seq_counts = mask.sum(dim = -1)
seq_keep_counts = torch.ceil(seq_counts * keep_prob).int()
keep_mask = torch.arange(num_keep, device = device) < rearrange(seq_keep_counts, 'b -> b 1')
mask = mask[batch_indices, keep_indices] & keep_mask
return seq, mask
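# dropout_seq above drops whole positions by sampling a random score per
# position and keeping the top (1 - dropout) fraction. A pure-Python sketch of
# just the index selection (seeded so the example is deterministic; the kept
# set is sorted here for readability, whereas topk returns it in score order):

```python
import random

def keep_indices(n, dropout, rng = random.Random(0)):
    # sample one score per position, keep the highest-scoring
    # (1 - dropout) fraction, always at least one position
    num_keep = max(1, int((1. - dropout) * n))
    scored = sorted(range(n), key = lambda i: rng.random(), reverse = True)
    return sorted(scored[:num_keep])

kept = keep_indices(10, 0.3)
```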
# activations
class ReluSquared(nn.Module):
def forward(self, x):
return F.relu(x) ** 2
#tokenization
class BaseTokenizer(ABC):
@abstractmethod
def tokenize(self, text: str) -> List[int]:
pass
class CustomTokenizer(BaseTokenizer):
def tokenize(self, text: str) -> List[int]:
# Your custom tokenization algorithm
tokens = ...
return tokens
# embedding
class BaseEmbedding(ABC):
@abstractmethod
def get_embedding(self, num_tokens: int, dim: int) -> nn.Module:
# Custom embedding function or model
embedding = ...
return embedding
class AndromedaEmbedding(BaseEmbedding):
def get_embedding(self, num_tokens: int, dim: int) -> nn.Module:
embedding = nn.Embedding(num_tokens, dim)
return embedding
class TokenEmbedding(nn.Module):
def __init__(self, dim, num_tokens, embedding_provider: BaseEmbedding, l2norm_embed = False):
super().__init__()
self.l2norm_embed = l2norm_embed
self.emb = embedding_provider.get_embedding(num_tokens, dim)
# nn.Embedding(num_tokens, dim)
def forward(self, x):
token_emb = self.emb(x)
return l2norm(token_emb) if self.l2norm_embed else token_emb
# positional embeddings
class AbsolutePositionalEmbedding(nn.Module):
def __init__(self, dim, max_seq_len, l2norm_embed = False):
super().__init__()
self.scale = dim ** -0.5 if not l2norm_embed else 1.
self.max_seq_len = max_seq_len
self.l2norm_embed = l2norm_embed
self.emb = nn.Embedding(max_seq_len, dim)
def forward(self, x, pos = None):
seq_len, device = x.shape[1], x.device
assert seq_len <= self.max_seq_len, f'you are passing in a sequence length of {seq_len} but your absolute positional embedding has a max sequence length of {self.max_seq_len}'
if not exists(pos):
pos = torch.arange(seq_len, device = device)
pos_emb = self.emb(pos)
pos_emb = pos_emb * self.scale
return l2norm(pos_emb) if self.l2norm_embed else pos_emb
class ScaledSinusoidalEmbedding(nn.Module):
def __init__(self, dim, theta = 10000):
super().__init__()
assert (dim % 2) == 0
self.scale = nn.Parameter(torch.ones(1) * dim ** -0.5)
half_dim = dim // 2
freq_seq = torch.arange(half_dim).float() / half_dim
inv_freq = theta ** -freq_seq
self.register_buffer('inv_freq', inv_freq, persistent = False)
def forward(self, x, pos = None):
seq_len, device = x.shape[1], x.device
if not exists(pos):
pos = torch.arange(seq_len, device = device)
emb = einsum('i, j -> i j', pos, self.inv_freq)
emb = torch.cat((emb.sin(), emb.cos()), dim = -1)
return emb * self.scale
class RelativePositionBias(nn.Module):
def __init__(self, scale, causal = False, num_buckets = 32, max_distance = 128, heads = 8):
super().__init__()
self.scale = scale
self.causal = causal
self.num_buckets = num_buckets
self.max_distance = max_distance
self.relative_attention_bias = nn.Embedding(num_buckets, heads)
@staticmethod
def _relative_position_bucket(relative_position, causal = True, num_buckets = 32, max_distance = 128):
ret = 0
n = -relative_position
if not causal:
num_buckets //= 2
ret += (n < 0).long() * num_buckets
n = torch.abs(n)
else:
n = torch.max(n, torch.zeros_like(n))
max_exact = num_buckets // 2
is_small = n < max_exact
val_if_large = max_exact + (
torch.log(n.float() / max_exact) / math.log(max_distance / max_exact) * (num_buckets - max_exact)
).long()
val_if_large = torch.min(val_if_large, torch.full_like(val_if_large, num_buckets - 1))
ret += torch.where(is_small, n, val_if_large)
return ret
@property
def device(self):
return next(self.parameters()).device
def forward(self, i, j):
device = self.device
q_pos = torch.arange(j - i, j, dtype = torch.long, device = device)
k_pos = torch.arange(j, dtype = torch.long, device = device)
rel_pos = k_pos[None, :] - q_pos[:, None]
rp_bucket = self._relative_position_bucket(rel_pos, causal = self.causal, num_buckets = self.num_buckets, max_distance = self.max_distance)
values = self.relative_attention_bias(rp_bucket)
bias = rearrange(values, 'i j h -> h i j')
return bias * self.scale
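# The bucketing scheme above assigns exact buckets to small relative distances
# and logarithmically spaced buckets beyond max_exact, capped at the last
# bucket. A scalar pure-Python re-implementation of _relative_position_bucket
# for intuition (rel is key position minus query position):

```python
import math

def rel_pos_bucket(rel, causal = True, num_buckets = 32, max_distance = 128):
    # scalar version of _relative_position_bucket above: exact buckets
    # for small |rel|, logarithmically spaced ones beyond max_exact
    ret = 0
    n = -rel
    if not causal:
        num_buckets //= 2
        if n < 0:
            ret += num_buckets
        n = abs(n)
    else:
        n = max(n, 0)
    max_exact = num_buckets // 2
    if n < max_exact:
        return ret + n
    val = max_exact + int(
        math.log(n / max_exact) / math.log(max_distance / max_exact) * (num_buckets - max_exact)
    )
    return ret + min(val, num_buckets - 1)
```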
class DynamicPositionBias(nn.Module):
def __init__(self, dim, *, heads, depth, log_distance = False, norm = False):
super().__init__()
assert depth >= 1, 'depth for dynamic position bias MLP must be greater or equal to 1'
self.log_distance = log_distance
self.mlp = nn.ModuleList([])
self.mlp.append(nn.Sequential(
nn.Linear(1, dim),
nn.LayerNorm(dim) if norm else nn.Identity(),
nn.SiLU()
))
for _ in range(depth - 1):
self.mlp.append(nn.Sequential(
nn.Linear(dim, dim),
nn.LayerNorm(dim) if norm else nn.Identity(),
nn.SiLU()
))
self.mlp.append(nn.Linear(dim, heads))
@property
def device(self):
return next(self.parameters()).device
def forward(self, i, j):
assert i == j
n, device = j, self.device
# get the (n x n) matrix of distances
seq_arange = torch.arange(n, device = device)
context_arange = torch.arange(n, device = device)
indices = rearrange(seq_arange, 'i -> i 1') - rearrange(context_arange, 'j -> 1 j')
indices += (n - 1)
# input to continuous positions MLP
pos = torch.arange(-n + 1, n, device = device).float()
pos = rearrange(pos, '... -> ... 1')
if self.log_distance:
pos = torch.sign(pos) * torch.log(pos.abs() + 1) # log of distance is sign(rel_pos) * log(abs(rel_pos) + 1)
for layer in self.mlp:
pos = layer(pos)
# get position biases
bias = pos[indices]
bias = rearrange(bias, 'i j h -> h i j')
return bias
class AlibiPositionalBias(nn.Module):
def __init__(self, heads, total_heads, **kwargs):
super().__init__()
self.heads = heads
self.total_heads = total_heads
slopes = Tensor(self._get_slopes(heads))
slopes = rearrange(slopes, 'h -> h 1 1')
self.register_buffer('slopes', slopes, persistent = False)
self.register_buffer('bias', None, persistent = False)
def get_bias(self, i, j, device):
i_arange = torch.arange(j - i, j, device = device)
j_arange = torch.arange(j, device = device)
bias = -torch.abs(rearrange(j_arange, 'j -> 1 1 j') - rearrange(i_arange, 'i -> 1 i 1'))
return bias
@staticmethod
def _get_slopes(heads):
def get_slopes_power_of_2(n):
start = (2**(-2**-(math.log2(n)-3)))
ratio = start
return [start*ratio**i for i in range(n)]
if math.log2(heads).is_integer():
return get_slopes_power_of_2(heads)
closest_power_of_2 = 2 ** math.floor(math.log2(heads))
return get_slopes_power_of_2(closest_power_of_2) + get_slopes_power_of_2(2 * closest_power_of_2)[0::2][:heads-closest_power_of_2]
@property
def device(self):
return next(self.buffers()).device
def forward(self, i, j):
h, device = self.total_heads, self.device
if exists(self.bias) and self.bias.shape[-1] >= j:
return self.bias[..., :i, :j]
bias = self.get_bias(i, j, device)
bias = bias * self.slopes
num_heads_unalibied = h - bias.shape[0]
bias = pad_at_dim(bias, (0, num_heads_unalibied), dim = 0)
self.register_buffer('bias', bias, persistent = False)
return self.bias
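# The slope recipe in _get_slopes gives each head a geometric distance penalty:
# for a power-of-two head count h the slopes are 2^(-8/h), 2^(-16/h), and so
# on; non-power-of-two counts interleave slopes from the two nearest powers of
# two. A standalone copy of the same recipe for quick inspection:

```python
import math

def alibi_slopes(heads):
    # same recipe as AlibiPositionalBias._get_slopes
    def pow2_slopes(n):
        start = 2 ** (-2 ** -(math.log2(n) - 3))
        return [start * start ** i for i in range(n)]
    if math.log2(heads).is_integer():
        return pow2_slopes(heads)
    # non-power-of-two: take every other slope from the next power of two
    closest = 2 ** math.floor(math.log2(heads))
    return pow2_slopes(closest) + pow2_slopes(2 * closest)[0::2][:heads - closest]
```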
class LearnedAlibiPositionalBias(AlibiPositionalBias):
def __init__(self, heads, total_heads):
super().__init__(heads, total_heads)
log_slopes = torch.log(self.slopes)
self.learned_logslopes = nn.Parameter(log_slopes)
def forward(self, i, j):
        h, device = self.heads, self.device
def get_slopes(param):
return pad_at_dim(param.exp(), (0, h - param.shape[0]), dim = -2)
if exists(self.bias) and self.bias.shape[-1] >= j:
bias = self.bias[..., :i, :j]
else:
bias = self.get_bias(i, j, device)
self.register_buffer('bias', bias, persistent = False)
slopes = get_slopes(self.learned_logslopes)
bias = bias * slopes
return bias
class RotaryEmbedding(nn.Module):
def __init__(
self,
dim,
use_xpos = False,
scale_base = 512
):
super().__init__()
inv_freq = 1. / (10000 ** (torch.arange(0, dim, 2).float() / dim))
self.register_buffer('inv_freq', inv_freq)
if not use_xpos:
self.register_buffer('scale', None)
return
scale = (torch.arange(0, dim, 2) + 0.4 * dim) / (1.4 * dim)
self.scale_base = scale_base
self.register_buffer('scale', scale)
def forward(self, seq_len, device):
t = torch.arange(seq_len, device = device).type_as(self.inv_freq)
freqs = torch.einsum('i , j -> i j', t, self.inv_freq)
freqs = torch.cat((freqs, freqs), dim = -1)
if not exists(self.scale):
return freqs, 1.
power = (torch.arange(seq_len, device = device) - (seq_len // 2)) / self.scale_base
scale = self.scale ** rearrange(power, 'n -> n 1')
scale = torch.cat((scale, scale), dim = -1)
return freqs, scale
def rotate_half(x):
x = rearrange(x, '... (j d) -> ... j d', j = 2)
x1, x2 = x.unbind(dim = -2)
return torch.cat((-x2, x1), dim = -1)
def apply_rotary_pos_emb(t, freqs, scale = 1):
seq_len = t.shape[-2]
freqs = freqs[-seq_len:, :]
return (t * freqs.cos() * scale) + (rotate_half(t) * freqs.sin() * scale)
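# rotate_half and apply_rotary_pos_emb above implement rotary embeddings: each
# feature pair (i, i + d/2) is rotated by a position-dependent angle, an
# orthogonal transform that preserves vector norms. A self-contained check with
# small hypothetical sizes (scale left at its default of 1):

```python
import torch

def _rot_half(x):
    # same pairing as rotate_half above: (first half, second half)
    x1, x2 = x.chunk(2, dim = -1)
    return torch.cat((-x2, x1), dim = -1)

def _apply_rope(t, freqs):
    # matches apply_rotary_pos_emb with scale = 1
    return t * freqs.cos() + _rot_half(t) * freqs.sin()

dim, seq = 8, 4
inv_freq = 1. / (10000 ** (torch.arange(0, dim, 2).float() / dim))
freqs = torch.einsum('i, j -> i j', torch.arange(seq).float(), inv_freq)
freqs = torch.cat((freqs, freqs), dim = -1)
t = torch.randn(seq, dim)
rotated = _apply_rope(t, freqs)
```

At position 0 all angles are zero, so the first row comes back unchanged; every row keeps its norm.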
# norms
class Scale(nn.Module):
def __init__(self, value, fn):
super().__init__()
self.value = value
self.fn = fn
def forward(self, x, **kwargs):
out = self.fn(x, **kwargs)
scale_fn = lambda t: t * self.value
if not isinstance(out, tuple):
return scale_fn(out)
return (scale_fn(out[0]), *out[1:])
class ScaleNorm(nn.Module):
def __init__(self, dim, eps = 1e-5):
super().__init__()
self.eps = eps
self.g = nn.Parameter(torch.ones(1) * (dim ** -0.5))
def forward(self, x):
norm = torch.norm(x, dim = -1, keepdim = True)
return x / norm.clamp(min = self.eps) * self.g
class RMSNorm(nn.Module):
def __init__(self, dim, eps = 1e-8):
super().__init__()
self.scale = dim ** -0.5
self.eps = eps
self.g = nn.Parameter(torch.ones(dim))
def forward(self, x):
norm = torch.norm(x, dim = -1, keepdim = True) * self.scale
return x / norm.clamp(min = self.eps) * self.g
# residual and residual gates
class Residual(nn.Module):
def __init__(self, dim, scale_residual = False, scale_residual_constant = 1.):
super().__init__()
self.residual_scale = nn.Parameter(torch.ones(dim)) if scale_residual else None
self.scale_residual_constant = scale_residual_constant
def forward(self, x, residual):
if exists(self.residual_scale):
residual = residual * self.residual_scale
if self.scale_residual_constant != 1:
residual = residual * self.scale_residual_constant
return x + residual
class GRUGating(nn.Module):
def __init__(self, dim, scale_residual = False, **kwargs):
super().__init__()
self.gru = nn.GRUCell(dim, dim)
self.residual_scale = nn.Parameter(torch.ones(dim)) if scale_residual else None
def forward(self, x, residual):
if exists(self.residual_scale):
residual = residual * self.residual_scale
gated_output = self.gru(
rearrange(x, 'b n d -> (b n) d'),
rearrange(residual, 'b n d -> (b n) d')
)
return gated_output.reshape_as(x)
# token shifting
def shift(t, amount, mask = None):
if amount == 0:
return t
else:
amount = min(amount, t.shape[1])
if exists(mask):
t = t.masked_fill(~mask[..., None], 0.)
return pad_at_dim(t, (amount, -amount), dim = - 2, value = 0.)
class ShiftTokens(nn.Module):
def __init__(self, shifts, fn):
super().__init__()
self.fn = fn
self.shifts = tuple(shifts)
def forward(self, x, **kwargs):
mask = kwargs.get('mask', None)
shifts = self.shifts
segments = len(shifts)
feats_per_shift = x.shape[-1] // segments
splitted = x.split(feats_per_shift, dim = -1)
segments_to_shift, rest = splitted[:segments], splitted[segments:]
segments_to_shift = list(map(lambda args: shift(*args, mask = mask), zip(segments_to_shift, shifts)))
x = torch.cat((*segments_to_shift, *rest), dim = -1)
return self.fn(x, **kwargs)
# feedforward
class GLU(nn.Module):
def __init__(self, dim_in, dim_out, activation):
super().__init__()
self.act = activation
self.proj = nn.Linear(dim_in, dim_out * 2)
def forward(self, x):
x, gate = self.proj(x).chunk(2, dim = -1)
return x * self.act(gate)
class FeedForward(nn.Module):
def __init__(
self,
dim,
dim_out = None,
mult = 4,
glu = False,
swish = False,
relu_squared = False,
post_act_ln = False,
dropout = 0.,
no_bias = False,
zero_init_output = False
):
super().__init__()
inner_dim = int(dim * mult)
dim_out = default(dim_out, dim)
if relu_squared:
activation = ReluSquared()
elif swish:
activation = nn.SiLU()
else:
activation = nn.GELU()
project_in = nn.Sequential(
nn.Linear(dim, inner_dim, bias = not no_bias),
activation
) if not glu else GLU(dim, inner_dim, activation)
self.ff = nn.Sequential(
project_in,
nn.LayerNorm(inner_dim) if post_act_ln else nn.Identity(),
nn.Dropout(dropout),
nn.Linear(inner_dim, dim_out, bias = not no_bias)
)
# init last linear layer to 0
if zero_init_output:
init_zero_(self.ff[-1])
def forward(self, x):
return self.ff(x)
# attention. it is all we need
class Attention(nn.Module):
def __init__(
self,
dim,
dim_head = DEFAULT_DIM_HEAD,
heads = 8,
causal = False,
flash = False,
talking_heads = False,
head_scale = False,
sparse_topk = None,
num_mem_kv = 0,
dropout = 0.,
on_attn = False,
gate_values = False,
zero_init_output = False,
max_attend_past = None,
qk_norm = False,
qk_norm_groups = 1,
qk_norm_scale = 10,
qk_norm_dim_scale = False,
one_kv_head = False,
shared_kv = False,
value_dim_head = None,
tensor_product = False # https://arxiv.org/abs/2208.06061
):
super().__init__()
self.scale = dim_head ** -0.5
self.heads = heads
self.causal = causal
self.max_attend_past = max_attend_past
value_dim_head = default(value_dim_head, dim_head)
q_dim = k_dim = dim_head * heads
v_dim = out_dim = value_dim_head * heads
self.one_kv_head = one_kv_head
if one_kv_head:
k_dim = dim_head
v_dim = value_dim_head
out_dim = v_dim * heads
self.to_q = nn.Linear(dim, q_dim, bias = False)
self.to_k = nn.Linear(dim, k_dim, bias = False)
# shared key / values, for further memory savings during inference
assert not (shared_kv and value_dim_head != dim_head), 'key and value head dimensions must be equal for shared key / values'
self.to_v = nn.Linear(dim, v_dim, bias = False) if not shared_kv else None
# relations projection from tp-attention
self.to_r = nn.Linear(dim, v_dim, bias = False) if tensor_product else None
# add GLU gating for aggregated values, from alphafold2
self.to_v_gate = None
if gate_values:
self.to_v_gate = nn.Linear(dim, out_dim)
nn.init.constant_(self.to_v_gate.weight, 0)
nn.init.constant_(self.to_v_gate.bias, 1)
# cosine sim attention
self.qk_norm = qk_norm
self.qk_norm_groups = qk_norm_groups
self.qk_norm_scale = qk_norm_scale
# whether to use the rmsnorm (equivalent to cosine sim attention when scale is equal to 1) - https://arxiv.org/abs/2302.05442
self.qk_norm_dim_scale = qk_norm_dim_scale
self.qk_norm_q_scale = self.qk_norm_k_scale = 1
if qk_norm and qk_norm_dim_scale:
self.qk_norm_q_scale = nn.Parameter(torch.ones(dim_head))
self.qk_norm_k_scale = nn.Parameter(torch.ones(dim_head))
assert (not qk_norm) or (dim_head % qk_norm_groups) == 0, 'dimension per attention head must be divisible by the qk norm groups'
assert not (qk_norm and (dim_head // qk_norm_groups) <= 2), 'the group dimension may be too small (2 was too small in my tests, but 4 still works, surprisingly)'
# attend class - includes core attention algorithm + talking heads
self.attend = Attend(
heads = heads,
causal = causal,
talking_heads = talking_heads,
dropout = dropout,
qk_norm = qk_norm,
scale = qk_norm_scale if qk_norm else self.scale,
flash = flash
)
# head scaling
self.head_scale = head_scale
if head_scale:
self.head_scale_params = nn.Parameter(torch.ones(1, heads, 1, 1))
# explicit topk sparse attention
self.sparse_topk = sparse_topk
# add memory key / values
self.num_mem_kv = num_mem_kv
if num_mem_kv > 0:
self.mem_k = nn.Parameter(torch.randn(heads, num_mem_kv, dim_head))
self.mem_v = nn.Parameter(torch.randn(heads, num_mem_kv, dim_head))
# attention on attention
self.attn_on_attn = on_attn
self.to_out = nn.Sequential(nn.Linear(out_dim, dim * 2, bias = False), nn.GLU()) if on_attn else nn.Linear(out_dim, dim, bias = False)
# init output projection 0
if zero_init_output:
init_zero_(self.to_out)
def forward(
self,
x,
context = None,
mask = None,
context_mask = None,
attn_mask = None,
rel_pos = None,
rotary_pos_emb = None,
prev_attn = None,
mem = None
):
b, n, _, h, head_scale, device, has_context = *x.shape, self.heads, self.head_scale, x.device, exists(context)
kv_input = default(context, x)
q_input = x
k_input = kv_input
v_input = kv_input
r_input = x
if exists(mem):
k_input = torch.cat((mem, k_input), dim = -2)
v_input = torch.cat((mem, v_input), dim = -2)
q = self.to_q(q_input)
k = self.to_k(k_input)
v = self.to_v(v_input) if exists(self.to_v) else k
r = self.to_r(r_input) if exists(self.to_r) else None
q = rearrange(q, 'b n (h d) -> b h n d', h = h)
if not self.one_kv_head:
k, v, r = map(lambda t: maybe(rearrange)(t, 'b n (h d) -> b h n d', h = h), (k, v, r))
if self.qk_norm:
qk_l2norm = partial(l2norm, groups = self.qk_norm_groups)
q, k = map(qk_l2norm, (q, k))
scale = self.qk_norm_scale
q = q * self.qk_norm_q_scale
k = k * self.qk_norm_k_scale
if exists(rotary_pos_emb) and not has_context:
freqs, xpos_scale = rotary_pos_emb
l = freqs.shape[-1]
q_xpos_scale, k_xpos_scale = (xpos_scale, xpos_scale ** -1.) if exists(xpos_scale) else (1., 1.)
(ql, qr), (kl, kr), (vl, vr) = map(lambda t: (t[..., :l], t[..., l:]), (q, k, v))
ql, kl, vl = map(lambda arg: apply_rotary_pos_emb(arg[0], freqs, arg[1]), ((ql, q_xpos_scale), (kl, k_xpos_scale), (vl, k_xpos_scale)))
q, k, v = map(lambda t: torch.cat(t, dim = -1), ((ql, qr), (kl, kr), (vl, vr)))
input_mask = default(context_mask, mask)
if self.num_mem_kv > 0:
mem_k, mem_v = map(lambda t: repeat(t, 'h n d -> b h n d', b = b), (self.mem_k, self.mem_v))
if self.qk_norm:
mem_k = l2norm(mem_k)
mem_k = mem_k * self.qk_norm_k_scale
k = torch.cat((mem_k, k), dim = -2)
v = torch.cat((mem_v, v), dim = -2)
if exists(input_mask):
input_mask = pad_at_dim(input_mask, (self.num_mem_kv, 0), dim = -1, value = True)
i, j = map(lambda t: t.shape[-2], (q, k))
# determine masking
mask_value = max_neg_value(q)
masks = []
final_attn_mask = None
if exists(input_mask):
input_mask = rearrange(input_mask, 'b j -> b 1 1 j')
masks.append(~input_mask)
if exists(attn_mask):
            assert 2 <= attn_mask.ndim <= 4, 'attention mask must have between 2 and 4 dimensions'
if attn_mask.ndim == 2:
attn_mask = rearrange(attn_mask, 'i j -> 1 1 i j')
elif attn_mask.ndim == 3:
attn_mask = rearrange(attn_mask, 'h i j -> 1 h i j')
masks.append(~attn_mask)
if exists(self.max_attend_past):
range_q = torch.arange(j - i, j, device = device)
range_k = torch.arange(j, device = device)
dist = rearrange(range_q, 'i -> 1 1 i 1') - rearrange(range_k, 'j -> 1 1 1 j')
max_attend_past_mask = dist > self.max_attend_past
masks.append(max_attend_past_mask)
        if exists(self.sparse_topk):
            # explicit sparse topk masking needs the raw similarity logits,
            # which are otherwise only computed inside Attend
            kv_einsum_eq = 'b j d' if k.ndim == 3 else 'b h j d'
            dots = einsum(f'b h i d, {kv_einsum_eq} -> b h i j', q, k)
            if self.sparse_topk < dots.shape[-1]:
                top, _ = dots.topk(self.sparse_topk, dim = -1)
                vk = rearrange(top[..., -1], '... -> ... 1')
                sparse_topk_mask = dots < vk
                masks.append(sparse_topk_mask)
if len(masks) > 0:
final_attn_mask = or_reduce(masks)
# prepare relative positional bias, if needed
attn_bias = None
if exists(rel_pos):
attn_bias = rel_pos(i, j)
# attention is all we need
out, intermediates = self.attend(
q, k, v,
mask = final_attn_mask,
attn_bias = attn_bias,
prev_attn = prev_attn
)
# https://arxiv.org/abs/2208.06061 proposes to add a residual for better gradients
if exists(r):
out = out * r + out
# normformer scaling of heads
if head_scale:
out = out * self.head_scale_params
# merge heads
out = rearrange(out, 'b h n d -> b n (h d)')
# alphafold2 styled gating of the values
if exists(self.to_v_gate):
gates = self.to_v_gate(x)
out = out * gates.sigmoid()
# combine the heads
out = self.to_out(out)
if exists(mask):
mask = rearrange(mask, 'b n -> b n 1')
out = out.masked_fill(~mask, 0.)
return out, intermediates
class AttentionLayers(nn.Module):
def __init__(
self,
dim,
depth,
heads = None,
causal = False,
cross_attend = False,
only_cross = False,
use_scalenorm = False,
use_rmsnorm = False,
alibi_pos_bias = False,
alibi_num_heads = None,
alibi_learned = False,
rel_pos_bias = False,
rel_pos_num_buckets = 32,
rel_pos_max_distance = 128,
dynamic_pos_bias = False,
dynamic_pos_bias_log_distance = False,
dynamic_pos_bias_mlp_depth = 2,
dynamic_pos_bias_norm = False,
rotary_pos_emb = False,
rotary_emb_dim = None,
rotary_xpos = False,
rotary_xpos_scale_base = 512,
custom_layers = None,
sandwich_coef = None,
par_ratio = None,
residual_attn = False,
cross_residual_attn = False,
macaron = False,
pre_norm = True,
gate_residual = False,
scale_residual = False,
scale_residual_constant = 1.,
deepnorm = False,
shift_tokens = 0,
sandwich_norm = False,
resi_dual = False,
zero_init_branch_output = False,
layer_dropout = 0.,
cross_attn_tokens_dropout = 0.,
**kwargs
):
super().__init__()
rotary_pos_emb = rotary_pos_emb or rotary_xpos
ff_kwargs, kwargs = groupby_prefix_and_trim('ff_', kwargs)
attn_kwargs, kwargs = groupby_prefix_and_trim('attn_', kwargs)
dim_head = attn_kwargs.get('dim_head', DEFAULT_DIM_HEAD)
self.dim = dim
self.depth = depth
self.layers = nn.ModuleList([])
self.has_pos_emb = rel_pos_bias or rotary_pos_emb
rotary_emb_dim = max(default(rotary_emb_dim, dim_head // 2), 32)
assert not (rotary_xpos and not causal), 'rotary xpos is not compatible with bidirectional attention'
self.rotary_pos_emb = RotaryEmbedding(rotary_emb_dim, use_xpos = rotary_xpos, scale_base = rotary_xpos_scale_base) if rotary_pos_emb else None
assert not (alibi_pos_bias and rel_pos_bias), 'you can only choose Alibi positional bias or T5 relative positional bias, not both'
assert rel_pos_num_buckets <= rel_pos_max_distance, 'number of relative position buckets must be less than the relative position max distance'
# relative positional bias
flash_attn = attn_kwargs.get('flash', False)
assert (int(rel_pos_bias) + int(dynamic_pos_bias) + int(alibi_pos_bias)) <= 1, 'you can only choose up to one of t5, alibi, or dynamic positional bias'
self.rel_pos = None
if rel_pos_bias:
assert not flash_attn, 'flash attention not compatible with t5 relative positional bias'
self.rel_pos = RelativePositionBias(scale = dim_head ** 0.5, causal = causal, heads = heads, num_buckets = rel_pos_num_buckets, max_distance = rel_pos_max_distance)
elif dynamic_pos_bias:
assert not flash_attn, 'flash attention not compatible with dynamic positional bias'
self.rel_pos = DynamicPositionBias(dim = dim // 4, heads = heads, log_distance = dynamic_pos_bias_log_distance, depth = dynamic_pos_bias_mlp_depth, norm = dynamic_pos_bias_norm)
elif alibi_pos_bias:
alibi_num_heads = default(alibi_num_heads, heads)
assert alibi_num_heads <= heads, 'number of ALiBi heads must be less than the total number of heads'
alibi_pos_klass = LearnedAlibiPositionalBias if alibi_learned else AlibiPositionalBias
self.rel_pos = alibi_pos_klass(heads = alibi_num_heads, total_heads = heads)
# determine deepnorm and residual scale
if deepnorm:
assert scale_residual_constant == 1, 'scale residual constant is being overridden by deep norm settings'
pre_norm = sandwich_norm = resi_dual = False
scale_residual = True
scale_residual_constant = (2 * depth) ** 0.25
assert (int(sandwich_norm) + int(resi_dual)) <= 1, 'either sandwich norm or resiDual is selected, but not both'
assert not (not pre_norm and sandwich_norm), 'sandwich norm cannot be used when not using prenorm'
assert not (not pre_norm and resi_dual), 'resiDual cannot be used when not using prenorm'
self.pre_norm = pre_norm
self.sandwich_norm = sandwich_norm
self.resi_dual = resi_dual
self.residual_attn = residual_attn
self.cross_residual_attn = cross_residual_attn
self.cross_attend = cross_attend
norm_class = ScaleNorm if use_scalenorm else nn.LayerNorm
norm_class = RMSNorm if use_rmsnorm else norm_class
norm_fn = partial(norm_class, dim)
if cross_attend and not only_cross:
default_block = ('a', 'c', 'f')
elif cross_attend and only_cross:
default_block = ('c', 'f')
else:
default_block = ('a', 'f')
if macaron:
default_block = ('f',) + default_block
# zero init
if zero_init_branch_output:
attn_kwargs = {**attn_kwargs, 'zero_init_output': True}
ff_kwargs = {**ff_kwargs, 'zero_init_output': True}
# calculate layer block order
if exists(custom_layers):
layer_types = custom_layers
elif exists(par_ratio):
par_depth = depth * len(default_block)
assert 1 < par_ratio <= par_depth, 'par ratio out of range'
default_block = tuple(filter(not_equals('f'), default_block))
par_attn = par_depth // par_ratio
depth_cut = par_depth * 2 // 3 # 2 / 3 attention layer cutoff suggested by PAR paper
par_width = (depth_cut + depth_cut // par_attn) // par_attn
assert len(default_block) <= par_width, 'default block is too large for par_ratio'
par_block = default_block + ('f',) * (par_width - len(default_block))
par_head = par_block * par_attn
layer_types = par_head + ('f',) * (par_depth - len(par_head))
elif exists(sandwich_coef):
assert sandwich_coef > 0 and sandwich_coef <= depth, 'sandwich coefficient must be positive and no greater than the depth'
layer_types = ('a',) * sandwich_coef + default_block * (depth - sandwich_coef) + ('f',) * sandwich_coef
else:
layer_types = default_block * depth
self.layer_types = layer_types
self.num_attn_layers = len(list(filter(equals('a'), layer_types)))
# stochastic depth
self.layer_dropouts = cast_tuple(layer_dropout, len(layer_types))
# structured dropout for cross attending
self.cross_attn_tokens_dropout = cross_attn_tokens_dropout
# calculate token shifting
shift_tokens = cast_tuple(shift_tokens, len(layer_types))
# iterate and construct layers
for ind, (layer_type, layer_shift_tokens) in enumerate(zip(self.layer_types, shift_tokens)):
is_last_layer = ind == (len(self.layer_types) - 1)
if layer_type == 'a':
layer = Attention(dim, heads = heads, causal = causal, **attn_kwargs)
elif layer_type == 'c':
layer = Attention(dim, heads = heads, **attn_kwargs)
elif layer_type == 'f':
layer = FeedForward(dim, **ff_kwargs)
layer = layer if not macaron else Scale(0.5, layer)
else:
raise Exception(f'invalid layer type {layer_type}')
if layer_shift_tokens > 0:
shift_range_upper = layer_shift_tokens + 1
shift_range_lower = -layer_shift_tokens if not causal else 0
layer = ShiftTokens(range(shift_range_lower, shift_range_upper), layer)
residual_fn = GRUGating if gate_residual else Residual
residual = residual_fn(dim, scale_residual = scale_residual, scale_residual_constant = scale_residual_constant)
pre_branch_norm = norm_fn() if pre_norm else None
post_branch_norm = norm_fn() if sandwich_norm else None
post_main_norm = norm_fn() if (resi_dual or not pre_norm) and not is_last_layer else None
norms = nn.ModuleList([
pre_branch_norm,
post_branch_norm,
post_main_norm
])
self.layers.append(nn.ModuleList([
norms,
layer,
residual
]))
self.layers_length = len(self.layers) # cache the layer count here for use in forward
if deepnorm:
init_gain = (8 * depth) ** -0.25
deepnorm_init(self, init_gain)
def forward(
self,
x,
context = None,
mask = None,
context_mask = None,
attn_mask = None,
self_attn_context_mask = None,
mems = None,
return_hiddens = False
):
assert not (self.cross_attend ^ exists(context)), 'context must be passed in if cross_attend is set to True'
hiddens = []
intermediates = []
prev_attn = None
prev_cross_attn = None
mems = mems.copy() if exists(mems) else [None] * self.num_attn_layers
rotary_pos_emb = None
if exists(self.rotary_pos_emb):
max_rotary_emb_length = max(list(map(lambda m: (m.shape[1] if exists(m) else 0) + x.shape[1], mems)))
rotary_pos_emb = self.rotary_pos_emb(max_rotary_emb_length, x.device)
outer_residual = x
for ind, (layer_type, (norm, block, residual_fn), layer_dropout) in enumerate(zip(self.layer_types, self.layers, self.layer_dropouts)):
is_last = ind == (self.layers_length - 1)
if self.training and layer_dropout > 0. and random() < layer_dropout:
continue
if layer_type == 'a':
if return_hiddens:
hiddens.append(x)
layer_mem = mems.pop(0) if mems else None
if layer_type == 'c':
if self.training and self.cross_attn_tokens_dropout > 0.:
context, context_mask = dropout_seq(context, context_mask, self.cross_attn_tokens_dropout)
inner_residual = x
pre_norm, post_branch_norm, post_main_norm = norm
if exists(pre_norm) and not self.resi_dual:
x = pre_norm(x)
if layer_type == 'a':
out, inter = block(x, mask = mask, context_mask = self_attn_context_mask, attn_mask = attn_mask, rel_pos = self.rel_pos, rotary_pos_emb = rotary_pos_emb, prev_attn = prev_attn, mem = layer_mem)
elif layer_type == 'c':
out, inter = block(x, context = context, mask = mask, context_mask = context_mask, prev_attn = prev_cross_attn)
elif layer_type == 'f':
out = block(x)
if self.resi_dual:
outer_residual = residual_fn(out, outer_residual)
if exists(post_branch_norm):
out = post_branch_norm(out)
x = residual_fn(out, inner_residual)
if layer_type in ('a', 'c') and return_hiddens:
intermediates.append(inter)
if layer_type == 'a' and self.residual_attn:
prev_attn = inter.pre_softmax_attn
elif layer_type == 'c' and self.cross_residual_attn:
prev_cross_attn = inter.pre_softmax_attn
if exists(post_main_norm):
x = post_main_norm(x)
if self.resi_dual:
x = x + pre_norm(outer_residual)
if return_hiddens:
intermediates = LayerIntermediates(
hiddens = hiddens,
attn_intermediates = intermediates
)
return x, intermediates
return x
class Encoder(AttentionLayers):
def __init__(self, **kwargs):
assert 'causal' not in kwargs, 'cannot set causality on encoder'
super().__init__(causal = False, **kwargs)
class Decoder(AttentionLayers):
def __init__(self, **kwargs):
assert 'causal' not in kwargs, 'cannot set causality on decoder'
super().__init__(causal = True, **kwargs)
class CrossAttender(AttentionLayers):
def __init__(self, **kwargs):
super().__init__(cross_attend = True, only_cross = True, **kwargs)
class ViTransformerWrapper(nn.Module):
def __init__(
self,
*,
image_size,
patch_size,
attn_layers,
channels = 3,
num_classes = None,
dropout = 0.,
post_emb_norm = False,
emb_dropout = 0.
):
super().__init__()
assert isinstance(attn_layers, Encoder), 'attention layers must be an Encoder'
assert image_size % patch_size == 0, 'image dimensions must be divisible by the patch size'
dim = attn_layers.dim
num_patches = (image_size // patch_size) ** 2
patch_dim = channels * patch_size ** 2
self.patch_size = patch_size
self.pos_embedding = nn.Parameter(torch.randn(1, num_patches + 1, dim))
self.patch_to_embedding = nn.Sequential(
nn.LayerNorm(patch_dim),
nn.Linear(patch_dim, dim),
nn.LayerNorm(dim)
)
self.post_emb_norm = nn.LayerNorm(dim) if post_emb_norm else nn.Identity()
self.dropout = nn.Dropout(emb_dropout)
self.attn_layers = attn_layers
self.norm = nn.LayerNorm(dim)
self.mlp_head = nn.Linear(dim, num_classes) if exists(num_classes) else nn.Identity()
def forward(
self,
img,
return_embeddings = False
):
p = self.patch_size
x = rearrange(img, 'b c (h p1) (w p2) -> b (h w) (p1 p2 c)', p1 = p, p2 = p)
x = self.patch_to_embedding(x)
n = x.shape[1]
x = x + self.pos_embedding[:, :n]
x = self.post_emb_norm(x)
x = self.dropout(x)
x = self.attn_layers(x)
x = self.norm(x)
if not exists(self.mlp_head) or return_embeddings:
return x
x = x.mean(dim = -2)
return self.mlp_head(x)
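`ViTransformerWrapper` above derives its patch count and per-patch dimension from `image_size`, `patch_size` and `channels` before the `rearrange` into patch tokens. A minimal numeric sketch of that arithmetic (the sizes below are illustrative assumptions, not values from the source):

```python
# illustrative sizes; any image_size divisible by patch_size works
image_size, patch_size, channels = 256, 32, 3

assert image_size % patch_size == 0, 'image dimensions must be divisible by the patch size'

# the 'b c (h p1) (w p2) -> b (h w) (p1 p2 c)' rearrange yields this many patch tokens
num_patches = (image_size // patch_size) ** 2

# each patch flattens to channels * patch_size^2 values before patch_to_embedding
patch_dim = channels * patch_size ** 2
```

With a 256x256 RGB image and 32x32 patches this gives an 8x8 grid of 64 tokens, each a 3072-dimensional vector before projection.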
class TransformerWrapper(nn.Module):
def __init__(
self,
*,
num_tokens,
max_seq_len,
attn_layers,
# tokenizer: BaseTokenizer,
embedding_provider: BaseEmbedding,
emb_dim = None,
max_mem_len = 0.,
shift_mem_down = 0,
emb_dropout = 0.,
post_emb_norm = False,
num_memory_tokens = None,
tie_embedding = False,
logits_dim = None,
use_abs_pos_emb = True,
scaled_sinu_pos_emb = False,
l2norm_embed = False,
emb_frac_gradient = 1. # GLM-130B and Cogview successfully used this, set at 0.1
):
super().__init__()
assert isinstance(attn_layers, AttentionLayers), 'attention layers must be one of Encoder or Decoder'
dim = attn_layers.dim
emb_dim = default(emb_dim, dim)
# your own tokenizer
# self.tokenizer = tokenizer
# your own embedding function (assigned below)
self.emb_dim = emb_dim
self.num_tokens = num_tokens
self.max_seq_len = max_seq_len
self.max_mem_len = max_mem_len
self.shift_mem_down = shift_mem_down
self.l2norm_embed = l2norm_embed
self.token_emb = TokenEmbedding(emb_dim, num_tokens, embedding_provider, l2norm_embed=l2norm_embed)
if not (use_abs_pos_emb and not attn_layers.has_pos_emb):
self.pos_emb = always(0)
elif scaled_sinu_pos_emb:
self.pos_emb = ScaledSinusoidalEmbedding(emb_dim)
else:
self.pos_emb = AbsolutePositionalEmbedding(emb_dim, max_seq_len, l2norm_embed = l2norm_embed)
self.emb_frac_gradient = emb_frac_gradient # fraction of the gradient that should go to the embedding, https://arxiv.org/abs/2105.13290
self.post_emb_norm = nn.LayerNorm(emb_dim) if post_emb_norm else nn.Identity()
self.emb_dropout = nn.Dropout(emb_dropout)
self.project_emb = nn.Linear(emb_dim, dim) if emb_dim != dim else nn.Identity()
self.attn_layers = attn_layers
self.norm = nn.LayerNorm(dim)
self.init_()
logits_dim = default(logits_dim, num_tokens)
self.to_logits = nn.Linear(dim, logits_dim) if not tie_embedding else lambda t: t @ self.token_emb.weight.t()
# memory tokens (like [cls]) from Memory Transformers paper
num_memory_tokens = default(num_memory_tokens, 0)
self.num_memory_tokens = num_memory_tokens
if num_memory_tokens > 0:
self.memory_tokens = nn.Parameter(torch.randn(num_memory_tokens, dim))
def init_(self):
if self.l2norm_embed:
nn.init.normal_(self.token_emb.emb.weight, std = 1e-5)
if not isinstance(self.pos_emb, always):
nn.init.normal_(self.pos_emb.emb.weight, std = 1e-5)
return
nn.init.kaiming_normal_(self.token_emb.emb.weight)
def forward(
self,
x,
return_embeddings = False,
return_logits_and_embeddings = False,
return_intermediates = False,
mask = None,
return_mems = False,
return_attn = False,
mems = None,
pos = None,
prepend_embeds = None,
sum_embeds = None,
**kwargs
):
b, n, device, num_mem, emb_frac_gradient = *x.shape, x.device, self.num_memory_tokens, self.emb_frac_gradient
return_hiddens = return_mems | return_attn | return_intermediates # intermediates are needed for any of these returns
# absolute positional embedding
external_pos_emb = exists(pos) and pos.dtype != torch.long
pos_emb = self.pos_emb(x, pos = pos) if not external_pos_emb else pos
x = self.token_emb(x) + pos_emb
# for summing embeddings passed externally - needs this for self-conditioning in non-autoregressive training
if exists(sum_embeds):
x = x + sum_embeds
# post embedding norm, purportedly leads to greater stabilization
x = self.post_emb_norm(x)
# whether to append embeds, as in PaLI, for image embeddings
if exists(prepend_embeds):
prepend_seq, prepend_dim = prepend_embeds.shape[1:]
assert prepend_dim == x.shape[-1], 'prepended embeddings need to have same dimensions as text model dimensions'
x = torch.cat((prepend_embeds, x), dim = -2)
# whether to reduce the gradient going to the embedding, from cogview paper, corroborated by GLM-130B model
if emb_frac_gradient < 1:
assert emb_frac_gradient > 0
x = x * emb_frac_gradient + x.detach() * (1 - emb_frac_gradient)
# embedding dropout
x = self.emb_dropout(x)
x = self.project_emb(x)
if num_mem > 0:
mem = repeat(self.memory_tokens, 'n d -> b n d', b = b)
x = torch.cat((mem, x), dim = 1)
# auto-handle masking after appending memory tokens
if exists(mask):
mask = pad_at_dim(mask, (num_mem, 0), dim = -1, value = True)
if self.shift_mem_down and exists(mems):
mems_l, mems_r = mems[:self.shift_mem_down], mems[self.shift_mem_down:]
mems = [*mems_r, *mems_l]
if return_hiddens:
x, intermediates = self.attn_layers(x, mask = mask, mems = mems, return_hiddens = True, **kwargs)
else:
x = self.attn_layers(x, mask = mask, mems = mems, **kwargs)
x = self.norm(x)
mem, x = x[:, :num_mem], x[:, num_mem:]
if return_logits_and_embeddings:
out = (self.to_logits(x), x)
elif return_embeddings:
out = x
else:
out = self.to_logits(x)
if return_intermediates:
return out, intermediates
if return_mems:
hiddens = intermediates.hiddens
new_mems = list(map(lambda pair: torch.cat(pair, dim = -2), zip(mems, hiddens))) if exists(mems) else hiddens
new_mems = list(map(lambda t: t[..., -self.max_mem_len:, :].detach(), new_mems))
return out, new_mems
if return_attn:
attn_maps = list(map(lambda t: t.post_softmax_attn, intermediates.attn_intermediates))
return out, attn_maps
return out
class ContinuousTransformerWrapper(nn.Module):
def __init__(
self,
*,
max_seq_len,
attn_layers,
dim_in = None,
dim_out = None,
emb_dim = None,
post_emb_norm = False,
emb_dropout = 0.,
use_abs_pos_emb = True,
scaled_sinu_pos_emb = False
):
super().__init__()
assert isinstance(attn_layers, AttentionLayers), 'attention layers must be one of Encoder or Decoder'
dim = attn_layers.dim
self.max_seq_len = max_seq_len
if not (use_abs_pos_emb and not attn_layers.has_pos_emb):
self.pos_emb = always(0)
elif scaled_sinu_pos_emb:
self.pos_emb = ScaledSinusoidalEmbedding(dim)
else:
self.pos_emb = AbsolutePositionalEmbedding(dim, max_seq_len)
self.post_emb_norm = nn.LayerNorm(dim) if post_emb_norm else nn.Identity()
self.emb_dropout = nn.Dropout(emb_dropout)
self.project_in = nn.Linear(dim_in, dim) if exists(dim_in) else nn.Identity()
self.attn_layers = attn_layers
self.norm = nn.LayerNorm(dim)
self.project_out = nn.Linear(dim, dim_out) if exists(dim_out) else nn.Identity()
def forward(
self,
x,
return_embeddings = False,
return_intermediates = False,
mask = None,
return_attn = False,
mems = None,
pos = None,
prepend_embeds = None,
**kwargs
):
x = self.project_in(x)
x = x + self.pos_emb(x, pos = pos)
x = self.post_emb_norm(x)
# whether to append embeds, as in PaLI, for image embeddings
if exists(prepend_embeds):
_, prepend_dim = prepend_embeds.shape[1:]
assert prepend_dim == x.shape[-1], 'prepended embeddings need to have same dimensions as model dimensions'
x = torch.cat((prepend_embeds, x), dim = -2)
x = self.emb_dropout(x)
x, intermediates = self.attn_layers(x, mask = mask, mems = mems, return_hiddens = True, **kwargs)
x = self.norm(x)
out = self.project_out(x) if not return_embeddings else x
if return_intermediates:
return out, intermediates
if return_attn:
attn_maps = list(map(lambda t: t.post_softmax_attn, intermediates.attn_intermediates))
return out, attn_maps
return out
class XTransformer(nn.Module):
def __init__(
self,
*,
dim,
tie_token_emb = False,
ignore_index = -100,
pad_value = 0,
deepnorm = False,
cross_attn_tokens_dropout = 0.,
**kwargs
):
super().__init__()
enc_kwargs, kwargs = groupby_prefix_and_trim('enc_', kwargs)
dec_kwargs, kwargs = groupby_prefix_and_trim('dec_', kwargs)
assert 'dim' not in enc_kwargs and 'dim' not in dec_kwargs, 'dimension of either encoder or decoder must be set with `dim` keyword'
enc_transformer_kwargs = pick_and_pop(['num_tokens', 'max_seq_len'], enc_kwargs)
enc_transformer_kwargs['emb_dropout'] = enc_kwargs.pop('emb_dropout', 0)
enc_transformer_kwargs['num_memory_tokens'] = enc_kwargs.pop('num_memory_tokens', None)
enc_transformer_kwargs['scaled_sinu_pos_emb'] = enc_kwargs.pop('scaled_sinu_pos_emb', False)
enc_transformer_kwargs['use_abs_pos_emb'] = enc_kwargs.pop('use_abs_pos_emb', True)
dec_transformer_kwargs = pick_and_pop(['num_tokens', 'max_seq_len'], dec_kwargs)
dec_transformer_kwargs['emb_dropout'] = dec_kwargs.pop('emb_dropout', 0)
dec_transformer_kwargs['scaled_sinu_pos_emb'] = dec_kwargs.pop('scaled_sinu_pos_emb', False)
dec_transformer_kwargs['use_abs_pos_emb'] = dec_kwargs.pop('use_abs_pos_emb', True)
self.cross_attn_tokens_dropout = cross_attn_tokens_dropout # how many tokens from the encoder to dropout when cross attending from decoder - seen in a couple papers, including Perceiver AR - this will also be very effective regularization when cross attending to very long memories
if deepnorm:
enc_kwargs['scale_residual'] = True
dec_kwargs['scale_residual'] = True
enc_depth = enc_kwargs['depth']
dec_depth = dec_kwargs['depth']
enc_kwargs['scale_residual_constant'] = 0.81 * ((enc_depth ** 4) * dec_depth) ** .0625
dec_kwargs['scale_residual_constant'] = (3 * dec_depth) ** 0.25
self.encoder = TransformerWrapper(
**enc_transformer_kwargs,
attn_layers = Encoder(dim = dim, **enc_kwargs)
)
self.decoder = TransformerWrapper(
**dec_transformer_kwargs,
attn_layers = Decoder(dim = dim, cross_attend = True, **dec_kwargs)
)
if deepnorm:
deepnorm_init(self.encoder, 0.87 * ((enc_depth ** 4) * dec_depth) ** -0.0625)
deepnorm_init(self.decoder, (12 * dec_depth) ** -0.25)
if tie_token_emb:
self.decoder.token_emb = self.encoder.token_emb
self.decoder = AutoregressiveWrapper(self.decoder, ignore_index=ignore_index, pad_value=pad_value)
@torch.no_grad()
def generate(self, seq_in, seq_out_start, seq_len, mask = None, attn_mask = None, **kwargs):
encodings = self.encoder(seq_in, mask = mask, attn_mask = attn_mask, return_embeddings = True)
return self.decoder.generate(seq_out_start, seq_len, context = encodings, context_mask = mask, **kwargs)
def forward(self, src, tgt, mask = None, attn_mask = None, src_prepend_embeds = None):
if exists(src_prepend_embeds) and exists(mask):
mask = pad_at_dim(mask, (src_prepend_embeds.shape[-2], 0), dim = -1, value = True)
enc = self.encoder(src, mask = mask, attn_mask = attn_mask, prepend_embeds = src_prepend_embeds, return_embeddings = True)
if self.training and self.cross_attn_tokens_dropout > 0:
enc, mask = dropout_seq(enc, mask, self.cross_attn_tokens_dropout)
out = self.decoder(tgt, context = enc, context_mask = mask)
return out
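The `deepnorm` branch in `XTransformer.__init__` above hard-codes the DeepNet residual-scale and init-gain formulas for the encoder-decoder setting. A quick numeric check of those exact expressions (the depths here are illustrative, not from the source):

```python
# illustrative depths; the formulas are the ones used in XTransformer.__init__
enc_depth, dec_depth = 6, 6

# residual scaling constants
enc_scale = 0.81 * ((enc_depth ** 4) * dec_depth) ** 0.0625
dec_scale = (3 * dec_depth) ** 0.25

# init gains passed to deepnorm_init for encoder / decoder
enc_gain = 0.87 * ((enc_depth ** 4) * dec_depth) ** -0.0625
dec_gain = (12 * dec_depth) ** -0.25
```

For a 6-layer encoder and decoder this yields a residual scale of roughly 1.42 / 2.06 and an init gain of roughly 0.50 / 0.34, so deeper stacks start with damped residual branches.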
# --- Andromeda-llm | /Andromeda-llm-0.0.3.tar.gz/Andromeda-llm-0.0.3/Andromeda/optimus_prime/x_transformers.py | x_transformers.py ---
import math
from random import random
from contextlib import nullcontext
from collections import namedtuple
import torch
import torch.nn.functional as F
from torch import nn
from einops import rearrange, repeat, pack, unpack
from optimus_prime.x_transformers import TransformerWrapper
from typing import Optional
# constants
Losses = namedtuple('Losses', ['loss', 'generator_loss', 'critic_loss'])
# helper functions
def exists(val):
return val is not None
def default(val, d):
return val if exists(val) else d
# sampling helpers
def top_k(logits, thres = 0.9):
k = math.ceil((1 - thres) * logits.shape[-1])
val, ind = logits.topk(k, dim = -1)
probs = torch.full_like(logits, float('-inf'))
probs.scatter_(2, ind, val)
return probs
def log(t, eps = 1e-10):
return torch.log(t + eps)
def gumbel_noise(t):
noise = torch.zeros_like(t).uniform_(0, 1)
return -log(-log(noise))
def gumbel_sample(t, temperature = 1., dim = -1):
return ((t / max(temperature, 1e-10)) + gumbel_noise(t)).argmax(dim = dim)
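`gumbel_sample` above is the Gumbel-max trick: adding Gumbel noise to temperature-scaled logits and taking the argmax draws a sample from the corresponding softmax distribution. A dependency-free pure-Python sketch of the same idea (tensor shapes and batching omitted):

```python
import math
import random

def gumbel_noise():
    # -log(-log(U)) with U ~ Uniform(0, 1); eps guards against log(0)
    eps = 1e-10
    u = random.random()
    return -math.log(-math.log(u + eps) + eps)

def gumbel_argmax(logits, temperature=1.0):
    # argmax over (logit / T + Gumbel noise) is a sample from softmax(logit / T)
    scores = [l / max(temperature, 1e-10) + gumbel_noise() for l in logits]
    return max(range(len(scores)), key=scores.__getitem__)
```

With a strongly peaked logit the noise is negligible and the peak is always chosen; with uniform logits the draw is uniform, which is exactly the behaviour the tensor version relies on during demasking.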
# prob helpers
def sample_prob(prob):
return random() < prob
def coin_flip():
return sample_prob(0.5)
# tensor helpers
def get_mask_subset_prob(mask, prob, min_mask = 0):
batch, seq, device = *mask.shape, mask.device
num_to_mask = (mask.sum(dim = -1, keepdim = True) * prob).clamp(min = min_mask)
logits = torch.rand((batch, seq), device = device)
logits = logits.masked_fill(~mask, -1)
randperm = logits.argsort(dim = -1).float()
num_padding = (~mask).sum(dim = -1, keepdim = True)
randperm -= num_padding
subset_mask = randperm < num_to_mask
subset_mask.masked_fill_(~mask, False)
return subset_mask
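`get_mask_subset_prob` above selects roughly a `prob` fraction of the True positions in each row at random, never touching padding (False) positions. A list-based analogue of the same behaviour (a sketch, not the batched tensor implementation):

```python
import random

def mask_subset(mask, prob, min_mask=0):
    # choose ~prob of the True positions at random; False positions stay False
    true_idx = [i for i, m in enumerate(mask) if m]
    k = max(int(len(true_idx) * prob), min_mask)
    chosen = set(random.sample(true_idx, min(k, len(true_idx))))
    return [i in chosen for i in range(len(mask))]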
# schedules
def linear_schedule(t):
return 1 - t
def cosine_schedule(t):
""" https://arxiv.org/abs/2202.04200 """
return torch.cos(t * math.pi / 2)
# self token critic
# inspired by Nijkamp et al. - https://aclanthology.org/2021.naacl-main.409/
class SelfCritic(nn.Module):
def __init__(self, net):
super().__init__()
self.net = net
dim = net.attn_layers.dim
self.to_logits = nn.Linear(dim, 1)
def forward(self, x):
embed = self.net(x, return_embeddings = True)
return self.to_logits(embed)
class NonAutoregressiveWrapper(nn.Module):
"""
https://arxiv.org/abs/1904.09324
https://arxiv.org/abs/2202.04200
"""
def __init__(
self,
net,
*,
mask_id,
steps = 18,
self_cond = False,
self_cond_train_prob = 0.75,
no_replace_prob = 0.15, # which percentage of the tokens masked will stay the same, done in original MLM paper
random_token_prob = 0.1, # which percentage of tokens to be replaced with random token, done in original MLM paper
schedule = 'linear',
can_mask_prev_unmasked = False, # when unmasking, whether it can remask previously unmasked
token_critic: Optional[TransformerWrapper] = None,
self_token_critic = False,
critic_loss_weight = 1.
):
super().__init__()
assert not (self_token_critic and exists(token_critic))
self.net = net
dim = net.emb_dim
self.dim = dim
self.num_tokens = net.num_tokens
self.mask_id = mask_id
# afaict, maskgit paper did not do this
# but may help for self conditioning, as used successfully in original BERT
self.no_replace_prob = no_replace_prob
self.random_token_prob = random_token_prob
self.max_seq_len = net.max_seq_len
self.steps = steps
if callable(schedule):
self.schedule_fn = schedule
elif schedule == 'linear':
self.schedule_fn = linear_schedule
elif schedule == 'cosine':
self.schedule_fn = cosine_schedule
else:
raise ValueError(f'invalid schedule {schedule}')
self.can_mask_prev_unmasked = can_mask_prev_unmasked
# self conditioning
self.self_cond = self_cond
if self_cond:
self.null_embed = nn.Parameter(torch.randn(dim))
self.to_self_cond = nn.Linear(dim, dim, bias = False) if self_cond else None
self.self_cond_train_prob = self_cond_train_prob
# token critic
self.token_critic = token_critic
if self_token_critic:
self.token_critic = SelfCritic(net)
self.critic_loss_weight = critic_loss_weight
@torch.no_grad()
def generate(
self,
batch_size = None,
start_temperature = 1.,
filter_thres = 0.7,
noise_level_scale = 1.,
**kwargs
):
sample_one = not exists(batch_size)
batch_size = default(batch_size, 1)
device = next(self.net.parameters()).device
was_training = self.training
self.eval()
times = torch.linspace(0., 1., self.steps + 1)
# sequence starts off as all masked
shape = (batch_size, self.max_seq_len)
seq = torch.full(shape, self.mask_id, device = device)
mask = torch.full(shape, True, device = device)
# slowly demask
all_mask_num_tokens = (self.schedule_fn(times[1:]) * self.max_seq_len).long()
# self conditioning
has_self_cond = self.self_cond
last_embed = self.null_embed if has_self_cond else None
for mask_num_tokens, steps_until_x0 in zip(all_mask_num_tokens.tolist(), reversed(range(self.steps))):
self_cond = self.to_self_cond(last_embed) if has_self_cond else None
logits, embeds = self.net(
seq,
sum_embeds = self_cond,
return_logits_and_embeddings = True,
**kwargs
)
if has_self_cond:
last_embed = embeds
if exists(filter_thres):
logits = top_k(logits, filter_thres)
annealing_scale = steps_until_x0 / self.steps
temperature = start_temperature * annealing_scale
probs = (logits / max(temperature, 1e-3)).softmax(dim = -1)
sampled_ids = gumbel_sample(logits, temperature = max(temperature, 1e-3))
seq = torch.where(mask, sampled_ids, seq)
if exists(self.token_critic):
scores = self.token_critic(seq)
scores = rearrange(scores, 'b n 1 -> b n')
scores = scores + noise_level_scale * gumbel_noise(scores) * annealing_scale
else:
scores = 1 - logits.softmax(dim = -1)
scores = scores.gather(2, rearrange(sampled_ids, 'b n -> b n 1'))
scores = rearrange(scores, 'b n 1 -> b n')
if mask_num_tokens == 0:
pass
if not self.can_mask_prev_unmasked:
scores = scores.masked_fill(~mask, -torch.finfo(scores.dtype).max)
mask_indices = scores.topk(mask_num_tokens, dim = -1).indices
mask = torch.zeros_like(scores, dtype = torch.bool).scatter(1, mask_indices, True)
seq = seq.masked_fill(mask, self.mask_id)
self.train(was_training)
if sample_one:
seq = rearrange(seq, '1 n -> n')
return seq
def forward(
self,
x,
only_train_generator = False,
only_train_critic = False,
generator_sample_temperature = None,
**kwargs
):
b, n, device = *x.shape, x.device
assert n == self.max_seq_len
orig_seq = x.clone()
rand_times = torch.empty(b, device = device).uniform_(0, 1)
batched_randperm = torch.rand((b, n), device = device).argsort(dim = -1).float()
rand_probs = self.schedule_fn(rand_times)
num_tokens_mask = (rand_probs * n).clamp(min = 1.)
mask = batched_randperm < rearrange(num_tokens_mask, 'b -> b 1')
# to ensure all tokens produce embeddings, instead of just the ones with [mask] input, as done in seminal BERT MLM paper
# potentially needed for self-conditioning (on embedding) to work well
replace_mask_id_mask = mask.clone()
frac_seq_left = 1.
if self.no_replace_prob > 0. and coin_flip():
frac_seq_left -= self.no_replace_prob
no_replace_prob_mask = get_mask_subset_prob(mask, self.no_replace_prob)
replace_mask_id_mask &= ~no_replace_prob_mask
if self.random_token_prob > 0. and coin_flip():
random_token_prob_mask = get_mask_subset_prob(replace_mask_id_mask, self.random_token_prob * frac_seq_left)
random_tokens = torch.randint(0, self.num_tokens, (b, n), device = device)
x = torch.where(random_token_prob_mask, random_tokens, x)
replace_mask_id_mask &= ~random_token_prob_mask
masked = torch.where(replace_mask_id_mask, self.mask_id, x)
# self conditioning
if self.self_cond:
self_cond = self.null_embed
if sample_prob(self.self_cond_train_prob):
with torch.no_grad():
self_cond = self.net(masked, return_embeddings = True, **kwargs).detach()
kwargs.update(sum_embeds = self.to_self_cond(self_cond))
# logits
context = torch.no_grad if only_train_critic else nullcontext
with context():
logits = self.net(masked, **kwargs)
# cross entropy loss
loss = F.cross_entropy(
logits[mask],
orig_seq[mask]
)
if not exists(self.token_critic) or only_train_generator:
return Losses(loss, loss, None)
sampled_ids = gumbel_sample(logits, temperature = default(generator_sample_temperature, random()))
generated = torch.where(mask, sampled_ids, orig_seq)
critic_logits = self.token_critic(generated)
critic_labels = (sampled_ids != orig_seq).float()
critic_loss = F.binary_cross_entropy_with_logits(
rearrange(critic_logits, '... 1 -> ...'),
critic_labels
)
# determine losses to be returned based on what researcher wants to train
if only_train_critic:
total_loss = critic_loss
loss = None
else:
total_loss = loss + critic_loss * self.critic_loss_weight
return Losses(total_loss, loss, critic_loss)
# --- Andromeda-llm | /Andromeda-llm-0.0.3.tar.gz/Andromeda-llm-0.0.3/Andromeda/optimus_prime/nonautoregressive_wrapper.py | nonautoregressive_wrapper.py ---
from math import ceil
import torch
from torch import nn
import torch.nn.functional as F
from einops import rearrange, pack, unpack
def exists(val):
return val is not None
def eval_decorator(fn):
def inner(self, *args, **kwargs):
was_training = self.training
self.eval()
out = fn(self, *args, **kwargs)
self.train(was_training)
return out
return inner
# nucleus
def top_p(logits, thres = 0.9):
sorted_logits, sorted_indices = torch.sort(logits, descending=True)
cum_probs = torch.cumsum(F.softmax(sorted_logits, dim=-1), dim=-1)
sorted_indices_to_remove = cum_probs > (1 - thres)
sorted_indices_to_remove[:, 1:] = sorted_indices_to_remove[:, :-1].clone()
sorted_indices_to_remove[:, 0] = 0
sorted_logits[sorted_indices_to_remove] = float('-inf')
return sorted_logits.scatter(1, sorted_indices, sorted_logits)
# topk
def top_k(logits, thres = 0.9):
k = ceil((1 - thres) * logits.shape[-1])
val, ind = torch.topk(logits, k)
probs = torch.full_like(logits, float('-inf'))
probs.scatter_(1, ind, val)
return probs
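Both wrappers use the same `thres` convention for top-k: with `thres = 0.9` only the top 10% of the vocabulary survives (`k = ceil((1 - thres) * vocab)`), and everything else is pushed to −inf before the softmax. A dependency-free sketch of that filter on a single logit row:

```python
import math

def top_k_filter(logits, thres=0.9):
    # keep the k = ceil((1 - thres) * vocab) largest logits, -inf the rest
    # (ties at the cutoff value are all kept)
    k = math.ceil((1 - thres) * len(logits))
    cutoff = sorted(logits, reverse=True)[k - 1]
    return [l if l >= cutoff else float('-inf') for l in logits]
```

After the filter, dividing by a temperature and taking softmax concentrates all probability mass on the surviving tokens, which is what `generate` samples from.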
# top_a
def top_a(logits, min_p_pow=2.0, min_p_ratio=0.02):
probs = F.softmax(logits, dim=-1)
limit = torch.pow(torch.max(probs), min_p_pow) * min_p_ratio
logits[probs < limit] = float('-inf')
logits[probs >= limit] = 1
return logits
# autoregressive wrapper class
class AutoregressiveWrapper(nn.Module):
def __init__(
self,
net,
ignore_index = -100,
pad_value = 0,
mask_prob = 0.
):
super().__init__()
self.pad_value = pad_value
self.ignore_index = ignore_index
self.net = net
self.max_seq_len = net.max_seq_len
# paper shows masking (MLM) in conjunction with autoregressive decoder-only training leads to big improvements https://arxiv.org/abs/2210.13432
assert mask_prob < 1.
self.mask_prob = mask_prob
@torch.no_grad()
@eval_decorator
def generate(
self,
start_tokens,
seq_len,
eos_token = None,
temperature = 1.,
filter_logits_fn = top_k,
filter_thres = 0.9,
min_p_pow = 2.0,
min_p_ratio = 0.02,
**kwargs
):
device = start_tokens.device
num_dims = start_tokens.ndim
start_tokens, ps = pack([start_tokens], '* n')
b, t = start_tokens.shape
out = start_tokens
for _ in range(seq_len):
x = out[:, -self.max_seq_len:]
logits = self.net(x, **kwargs)[:, -1]
if filter_logits_fn in {top_k, top_p}:
filtered_logits = filter_logits_fn(logits, thres = filter_thres)
probs = F.softmax(filtered_logits / temperature, dim=-1)
elif filter_logits_fn is top_a:
filtered_logits = filter_logits_fn(logits, min_p_pow = min_p_pow, min_p_ratio= min_p_ratio)
probs = F.softmax(filtered_logits / temperature, dim=-1)
sample = torch.multinomial(probs, 1)
out = torch.cat((out, sample), dim=-1)
if exists(eos_token):
is_eos_tokens = (out == eos_token)
if is_eos_tokens.any(dim = -1).all():
# mask out everything after the eos tokens
shifted_is_eos_tokens = F.pad(is_eos_tokens, (1, -1))
mask = shifted_is_eos_tokens.float().cumsum(dim = -1) >= 1
out = out.masked_fill(mask, self.pad_value)
break
out = out[:, t:]
out, = unpack(out, ps, '* n')
return out
def forward(self, x, return_loss=True, **kwargs):
seq, ignore_index = x.shape[1], self.ignore_index
inp, target = x[:, :-1], x[:, 1:]
if self.mask_prob > 0.:
rand = torch.randn(inp.shape, device = x.device)
rand[:, 0] = -torch.finfo(rand.dtype).max # first token should not be masked out
num_mask = min(int(seq * self.mask_prob), seq - 1)
indices = rand.topk(num_mask, dim = -1).indices
mask = ~torch.zeros_like(inp).scatter(1, indices, 1.).bool()
kwargs.update(self_attn_context_mask = mask)
logits = self.net(inp, **kwargs)
loss = F.cross_entropy(
rearrange(logits, 'b n c -> b c n'),
target,
ignore_index = ignore_index
)
if return_loss:
return logits, loss
return logits
# --- Andromeda-llm | /Andromeda-llm-0.0.3.tar.gz/Andromeda-llm-0.0.3/Andromeda/optimus_prime/autoregressive_wrapper.py | autoregressive_wrapper.py ---
import sys
import logging
import argparse
from PyQt5.QtWidgets import QApplication, QMessageBox
import os
from Crypto.PublicKey import RSA
from logs import client_log_config
from common.variables import *
from common.decorators import log
from client.start_dialog import UserNameDialog
from common.errors import ServerError
from client.database import ClientDatabase
from client.transport import ClientTransport
from client.main_window import ClientMainWindow
# Initialize the client logger:
CLIENT_LOGGER = logging.getLogger('client')
@log
def arg_parser():
"""
Command-line argument parser; returns a tuple of four elements:
server address, port, user name, password.
Validates that the port number is in the allowed range.
:return: server address, port, user name, password
"""
parser = argparse.ArgumentParser()
parser.add_argument('address', default=DEFAULT_IP_ADDRESS, nargs='?')
parser.add_argument('port', default=DEFAULT_PORT, type=int, nargs='?')
parser.add_argument('-n', '--name', default=None, nargs='?')
parser.add_argument('-p', '--password', default='', nargs='?')
namespace = parser.parse_args(sys.argv[1:])
server_address = namespace.address
server_port = namespace.port
client_name = namespace.name
client_passwd = namespace.password
# Validate the port number.
if not 1023 < server_port < 65536:
CLIENT_LOGGER.critical(
f'Попытка запуска клиента с неподходящим номером порта: {server_port}. '
f'Допустимы адреса с 1024 до 65535. Клиент завершается.')
sys.exit(1)
return server_address, server_port, client_name, client_passwd
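As a hedged sketch of how the parser above behaves in practice (DEFAULT_IP_ADDRESS and DEFAULT_PORT stand in for the values imported from common.variables):

```python
import argparse

# Stand-ins for the constants from common.variables (assumed values).
DEFAULT_IP_ADDRESS, DEFAULT_PORT = '127.0.0.1', 7777

parser = argparse.ArgumentParser()
parser.add_argument('address', default=DEFAULT_IP_ADDRESS, nargs='?')
parser.add_argument('port', default=DEFAULT_PORT, type=int, nargs='?')
parser.add_argument('-n', '--name', default=None, nargs='?')
parser.add_argument('-p', '--password', default='', nargs='?')

# Positional args may be omitted thanks to nargs='?'; port is coerced to int.
ns = parser.parse_args(['192.168.0.5', '8888', '-n', 'guest'])
assert (ns.address, ns.port, ns.name, ns.password) == ('192.168.0.5', 8888, 'guest', '')
assert 1023 < ns.port < 65536  # the same range check the client enforces
```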
if __name__ == '__main__':
# Load the command-line parameters
server_address, server_port, client_name, client_passwd = arg_parser()
CLIENT_LOGGER.debug('Args loaded')
# Create the client application
client_app = QApplication(sys.argv)
# If the user name was not given on the command line, ask for it
start_dialog = UserNameDialog()
if not client_name or not client_passwd:
client_app.exec_()
# If the user entered a name and pressed OK, keep the input and delete the object.
# Otherwise exit
if start_dialog.ok_pressed:
client_name = start_dialog.client_name.text()
client_passwd = start_dialog.client_passwd.text()
CLIENT_LOGGER.debug(f'Using USERNAME = {client_name}, PASSWD = {client_passwd}.')
else:
exit(0)
# Write logs
CLIENT_LOGGER.info(
f'Запущен клиент с парамертами: адрес сервера: {server_address}, '
f'порт: {server_port}, имя пользователя: {client_name}')
# Load the keys from file; if the file does not exist, generate a new key pair.
dir_path = os.path.dirname(os.path.realpath(__file__))
key_file = os.path.join(dir_path, f'{client_name}.key')
if not os.path.exists(key_file):
keys = RSA.generate(2048, os.urandom)
with open(key_file, 'wb') as key:
key.write(keys.export_key())
else:
with open(key_file, 'rb') as key:
keys = RSA.import_key(key.read())
CLIENT_LOGGER.debug("Keys successfully loaded.")
# Create the database object
database = ClientDatabase(client_name)
# Create the transport object and start the transport thread
try:
transport = ClientTransport(server_port, server_address, database, client_name, client_passwd, keys)
CLIENT_LOGGER.debug("Transport ready.")
except ServerError as error:
message = QMessageBox()
message.critical(start_dialog, 'Ошибка сервера', error.text)
exit(1)
transport.daemon = True  # setDaemon() is deprecated since Python 3.10
transport.start()
# Delete the dialog object, it is no longer needed
del start_dialog
# Create the GUI
main_window = ClientMainWindow(database, transport, keys)
main_window.make_connection(transport)
main_window.setWindowTitle(f'Чат Программа alpha release - {client_name}')
client_app.exec_()
# Once the GUI has closed, shut down the transport
transport.transport_shutdown()
transport.join()
|
Andy-mess-client
|
/Andy_mess_client-0.0.1.tar.gz/Andy_mess_client-0.0.1/client/client.py
|
client.py
|
import sys
import logging
from PyQt5.QtWidgets import QDialog, QLabel, QComboBox, QPushButton
from PyQt5.QtCore import Qt
sys.path.append('../')
from logs import client_log_config
CLIENT_LOGGER = logging.getLogger('client')
# Contact-selection dialog for adding a contact
class AddContactDialog(QDialog):
"""
Диалог добавления пользователя в список контактов.
Предлагает пользователю список возможных контактов и
добавляет выбранный в контакты.
"""
def __init__(self, transport, database):
super().__init__()
self.transport = transport
self.database = database
self.setFixedSize(350, 120)
self.setWindowTitle('Выберите контакт для добавления:')
# Delete the dialog if the window is closed prematurely
self.setAttribute(Qt.WA_DeleteOnClose)
# Make this window modal (i.e. on top of the others)
self.setModal(True)
self.selector_label = QLabel('Выберите контакт для добавления:', self)
self.selector_label.setFixedSize(200, 20)
self.selector_label.move(10, 0)
self.selector = QComboBox(self)
self.selector.setFixedSize(200, 20)
self.selector.move(10, 30)
self.btn_refresh = QPushButton('Обновить список', self)
self.btn_refresh.setFixedSize(100, 30)
self.btn_refresh.move(60, 60)
self.btn_ok = QPushButton('Добавить', self)
self.btn_ok.setFixedSize(100, 30)
self.btn_ok.move(230, 20)
self.btn_cancel = QPushButton('Отмена', self)
self.btn_cancel.setFixedSize(100, 30)
self.btn_cancel.move(230, 60)
self.btn_cancel.clicked.connect(self.close)
# Fill the list of possible contacts
self.possible_contacts_update()
# Bind the refresh-button action
self.btn_refresh.clicked.connect(self.update_possible_contacts)
def possible_contacts_update(self):
"""
Метод заполнения списка возможных контактов.
Создаёт список всех зарегистрированных пользователей
за исключением уже добавленных в контакты и самого себя.
:return: ничего не возвращает
"""
self.selector.clear()
# sets of all users and of the client's contacts
contacts_list = set(self.database.get_contacts())
users_list = set(self.database.get_users())
# Remove ourselves from the user list so we cannot add ourselves
users_list.remove(self.transport.username)
# Add the list of possible contacts
self.selector.addItems(users_list - contacts_list)
def update_possible_contacts(self):
"""
Метод обновления списка возможных контактов. Запрашивает с сервера
список известных пользователей и обносляет содержимое окна.
:return: ничего не возвращает
"""
try:
self.transport.user_list_update()
except OSError:
pass
else:
CLIENT_LOGGER.debug('Обновление списка пользователей с сервера выполнено')
self.possible_contacts_update()
|
Andy-mess-client
|
/Andy_mess_client-0.0.1.tar.gz/Andy_mess_client-0.0.1/client/client/add_contact.py
|
add_contact.py
|
from socket import socket, AF_INET, SOCK_STREAM
import time
import sys
import json
import logging
import threading
from PyQt5.QtCore import pyqtSignal, QObject
import hashlib
import binascii
import hmac
sys.path.append('../')
from common.variables import *
from common.utils import send_message, get_message
from common.errors import ServerError
from logs import client_log_config
# Initialise the client logger:
CLIENT_LOGGER = logging.getLogger('client')
# Lock object for the socket and database access
SOCKET_LOCK = threading.Lock()
class ClientTransport(threading.Thread, QObject):
"""
Класс реализующий транспортную подсистему клиентского
модуля. Отвечает за взаимодействие с сервером.
"""
# Сигналы новое сообщение и потеря соединения
new_message = pyqtSignal(dict)
message_205 = pyqtSignal()
connection_lost = pyqtSignal()
def __init__(self, port, ip_address, database, username, passwd, keys):
# Call the parent constructors
threading.Thread.__init__(self)
QObject.__init__(self)
# Database wrapper class - database access
self.database = database
# User name
self.username = username
# Password
self.password = passwd
# Socket for talking to the server
self.transport = None
# Key set for encryption
self.keys = keys
# Establish the connection:
self.connection_init(port, ip_address)
# Refresh the tables of known users and contacts
try:
self.user_list_update()
self.contacts_list_update()
except OSError as err:
if err.errno:
CLIENT_LOGGER.critical(f'Потеряно соединение с сервером.')
raise ServerError('Потеряно соединение с сервером!')
CLIENT_LOGGER.error('Timeout соединения при обновлении списков пользователей.')
except json.JSONDecodeError:
CLIENT_LOGGER.critical(f'Потеряно соединение с сервером.')
raise ServerError('Потеряно соединение с сервером!')
# Flag: transport keeps running.
self.running = True
def connection_init(self, port, ip):
"""
Метод отвечающий за устанновку соединения с сервером.
:param port: порт
:param ip: ip-адрес
:return: ничего не возвращает
"""
# Инициализация сокета и сообщение серверу о нашем появлении.
self.transport = socket(AF_INET, SOCK_STREAM)
# Timeout of 5 seconds, needed to release the socket.
self.transport.settimeout(5)
# Connect: 5 attempts; set the success flag to True if it worked
connected = False
for i in range(5):
CLIENT_LOGGER.info(f'Попытка подключения №{i + 1}')
try:
self.transport.connect((ip, port))
except (OSError, ConnectionRefusedError):
pass
else:
connected = True
CLIENT_LOGGER.debug("Connection established.")
break
time.sleep(1)
# If the connection could not be made - raise an exception
if not connected:
CLIENT_LOGGER.critical('Не удалось установить соединение с сервером')
raise ServerError('Не удалось установить соединение с сервером')
CLIENT_LOGGER.debug('Starting auth dialog.')
# Start the authorisation procedure
# Compute the password hash
passwd_bytes = self.password.encode('utf-8')
salt = self.username.lower().encode('utf-8')
passwd_hash = hashlib.pbkdf2_hmac('sha512', passwd_bytes, salt, 10000)
passwd_hash_string = binascii.hexlify(passwd_hash)
CLIENT_LOGGER.debug(f'Passwd hash ready: {passwd_hash_string}')
# Get the public key and decode it from bytes
pubkey = self.keys.publickey().export_key().decode('ascii')
# Authorise with the server
with SOCKET_LOCK:
presense = {
ACTION: PRESENCE,
TIME: time.time(),
USER: {
ACCOUNT_NAME: self.username,
PUBLIC_KEY: pubkey
}
}
CLIENT_LOGGER.debug(f"Presense message = {presense}")
# Send the greeting message to the server.
try:
send_message(self.transport, presense)
server_response = get_message(self.transport)
CLIENT_LOGGER.debug(f'Server response = {server_response}.')
# If the server returned an error, raise an exception.
if RESPONSE in server_response:
if server_response[RESPONSE] == 400:
raise ServerError(server_response[ERROR])
elif server_response[RESPONSE] == 511:
# If everything is fine, continue the authorisation procedure.
ans_data = server_response[DATA]
hash = hmac.new(passwd_hash_string, ans_data.encode('utf-8'), 'MD5')
digest = hash.digest()
my_ans = RESPONSE_511
my_ans[DATA] = binascii.b2a_base64(digest).decode('ascii')
send_message(self.transport, my_ans)
self.process_server_ans(get_message(self.transport))
except (OSError, json.JSONDecodeError) as err:
CLIENT_LOGGER.debug(f'Connection error.', exc_info=err)
raise ServerError('Сбой соединения в процессе авторизации.')
# If all went well, report that the connection is established.
CLIENT_LOGGER.info('Соединение с сервером успешно установлено.')
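A hedged, stdlib-only sketch (not the project's own helper) of the challenge-response digest computed in connection_init above; `make_auth_digest` and `server_nonce` are illustrative names, with `server_nonce` standing in for the DATA field of the server's 511 response:

```python
import binascii
import hashlib
import hmac

def make_auth_digest(username: str, password: str, server_nonce: str) -> str:
    # PBKDF2 over the password, with the lower-cased login doubling as the salt.
    passwd_hash = hashlib.pbkdf2_hmac(
        'sha512',
        password.encode('utf-8'),
        username.lower().encode('utf-8'),
        10000)
    key = binascii.hexlify(passwd_hash)    # the hex string is the HMAC key
    # HMAC-MD5 over the server's nonce, base64-packed for the JSON reply.
    digest = hmac.new(key, server_nonce.encode('utf-8'), 'MD5').digest()
    return binascii.b2a_base64(digest).decode('ascii')

# The salt is case-insensitive, so these two logins produce the same answer.
assert make_auth_digest('Alice', 'secret', 'nonce') == make_auth_digest('ALICE', 'secret', 'nonce')
```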
def process_server_ans(self, message):
"""
Метод обработчик поступающих сообщений с сервера.
:param message: сообщение
:return: ничего не возвращает
"""
CLIENT_LOGGER.debug(f'Разбор сообщения от сервера: {message}.')
# If this is an acknowledgement of some kind
if RESPONSE in message:
if message[RESPONSE] == 200:
return
elif message[RESPONSE] == 400:
raise ServerError(f'{message[ERROR]}')
elif message[RESPONSE] == 205:
self.user_list_update()
self.contacts_list_update()
self.message_205.emit()
else:
CLIENT_LOGGER.debug(f'Принят неизвестный код подтверждения {message[RESPONSE]}')
# If it is a message from a user, store it in the database and signal a new message
elif ACTION in message \
and message[ACTION] == MESSAGE \
and SENDER in message \
and DESTINATION in message \
and MESSAGE_TEXT in message \
and message[DESTINATION] == self.username:
CLIENT_LOGGER.debug(f'Получено сообщение от пользователя {message[SENDER]}:{message[MESSAGE_TEXT]}')
self.new_message.emit(message)
def contacts_list_update(self):
"""
Метод обновляющий с сервера список контактов.
:return: ничего не возвращает
"""
self.database.contacts_clear()
CLIENT_LOGGER.debug(f'Запрос списка контактов для пользователя {self.username}')
request_to_server = {
ACTION: GET_CONTACTS,
TIME: time.time(),
USER: self.username
}
CLIENT_LOGGER.debug(f'Сформирован запрос {request_to_server}')
with SOCKET_LOCK:
send_message(self.transport, request_to_server)
server_answer = get_message(self.transport)
CLIENT_LOGGER.debug(f'Получен ответ {server_answer}')
if RESPONSE in server_answer and server_answer[RESPONSE] == 202:
for contact in server_answer[LIST_INFO]:
self.database.add_contact(contact)
else:
CLIENT_LOGGER.error('Не удалось обновить список контактов.')
def user_list_update(self):
"""
Метод обновляющий с сервера список пользователей.
:return: ничего не возвращает
"""
CLIENT_LOGGER.debug(f'Запрос списка известных пользователей {self.username}')
request_to_server = {
ACTION: USERS_REQUEST,
TIME: time.time(),
ACCOUNT_NAME: self.username
}
with SOCKET_LOCK:
send_message(self.transport, request_to_server)
server_answer = get_message(self.transport)
if RESPONSE in server_answer and server_answer[RESPONSE] == 202:
self.database.add_users(server_answer[LIST_INFO])
else:
CLIENT_LOGGER.error('Не удалось обновить список известных пользователей.')
def key_request(self, user):
"""
Метод запрашивающий с сервера публичный ключ пользователя.
:param user: пользователь
:return: публичный ключ пользователя
"""
CLIENT_LOGGER.debug(f'Запрос публичного ключа для {user}')
req = {
ACTION: PUBLIC_KEY_REQUEST,
TIME: time.time(),
ACCOUNT_NAME: user
}
with SOCKET_LOCK:
send_message(self.transport, req)
ans = get_message(self.transport)
if RESPONSE in ans and ans[RESPONSE] == 511:
return ans[DATA]
else:
CLIENT_LOGGER.error(f'Не удалось получить ключ собеседника{user}.')
def add_contact(self, contact):
"""
Метод отправляющий на сервер сведения о добавлении контакта.
:param contact: контакт
:return: ничего не возвращает
"""
CLIENT_LOGGER.debug(f'Создание контакта {contact}')
request_to_server = {
ACTION: ADD_CONTACT,
TIME: time.time(),
USER: self.username,
ACCOUNT_NAME: contact
}
with SOCKET_LOCK:
send_message(self.transport, request_to_server)
self.process_server_ans(get_message(self.transport))
def remove_contact(self, contact):
"""
Метод отправляющий на сервер сведения о удалении контакта.
:param contact: контакт
:return: ничего не возвращает
"""
CLIENT_LOGGER.debug(f'Удаление контакта {contact}')
request_to_server = {
ACTION: REMOVE_CONTACT,
TIME: time.time(),
USER: self.username,
ACCOUNT_NAME: contact
}
with SOCKET_LOCK:
send_message(self.transport, request_to_server)
self.process_server_ans(get_message(self.transport))
def transport_shutdown(self):
"""
Метод уведомляющий сервер о завершении работы клиента.
:return: ничего не возвращает
"""
self.running = False
message = {
ACTION: EXIT,
TIME: time.time(),
ACCOUNT_NAME: self.username
}
with SOCKET_LOCK:
try:
send_message(self.transport, message)
except OSError:
pass
CLIENT_LOGGER.debug('Транспорт завершает работу.')
time.sleep(0.5)
def send_message(self, to, message):
"""
Метод отправляющий на сервер сообщения для пользователя.
:param to: адресат
:param message: текст сообщения
:return: ничего не возвращает
"""
message_dict = {
ACTION: MESSAGE,
SENDER: self.username,
DESTINATION: to,
TIME: time.time(),
MESSAGE_TEXT: message
}
CLIENT_LOGGER.debug(f'Сформирован словарь сообщения: {message_dict}')
# Wait for the socket to become free before sending the message
with SOCKET_LOCK:
send_message(self.transport, message_dict)
self.process_server_ans(get_message(self.transport))
CLIENT_LOGGER.info(f'Отправлено сообщение для пользователя {to}')
def run(self):
"""
Метод содержащий основной цикл работы транспортного потока.
:return: ничего не возвращает
"""
CLIENT_LOGGER.debug('Запущен процесс - приёмник сообщений с сервера.')
while self.running:
# Sleep for a second, then try to grab the socket again.
# Without this delay a send may have to wait quite
# a while for the socket to be released.
time.sleep(1)
message = None
with SOCKET_LOCK:
try:
self.transport.settimeout(0.5)
message = get_message(self.transport)
except OSError as err:
if err.errno:
CLIENT_LOGGER.critical(f'Потеряно соединение с сервером.')
self.running = False
self.connection_lost.emit()
# Connection problems
except (ConnectionError, ConnectionAbortedError, ConnectionResetError, json.JSONDecodeError, TypeError):
CLIENT_LOGGER.debug(f'Потеряно соединение с сервером.')
self.running = False
self.connection_lost.emit()
finally:
self.transport.settimeout(5)
# If a message was received, call the handler:
if message:
CLIENT_LOGGER.debug(f'Принято сообщение с сервера: {message}')
self.process_server_ans(message)
|
Andy-mess-client
|
/Andy_mess_client-0.0.1.tar.gz/Andy_mess_client-0.0.1/client/client/transport.py
|
transport.py
|
from PyQt5.QtGui import QStandardItem, QStandardItemModel, QBrush, QColor
from PyQt5.QtWidgets import QMainWindow, qApp, QMessageBox, QApplication
from PyQt5.QtCore import Qt, pyqtSlot
from Crypto.Cipher import PKCS1_OAEP
from Crypto.PublicKey import RSA
import sys
import base64
import json
import logging
sys.path.append('../')
from logs import client_log_config
from common.variables import *
from client.main_window_conv import Ui_MainClientWindow
from client.add_contact import AddContactDialog
from client.del_contact import DelContactDialog
from common.errors import ServerError
CLIENT_LOGGER = logging.getLogger('client')
class ClientMainWindow(QMainWindow):
"""
Класс - основное окно пользователя.
Содержит всю основную логику работы клиентского модуля.
Конфигурация окна создана в QTDesigner и загружается из
конвертированого файла main_window_conv.py
"""
def __init__(self, database, transport, keys):
super().__init__()
# core attributes
self.database = database
self.transport = transport
# message-decrypter object preloaded with the key
self.decrypter = PKCS1_OAEP.new(keys)
# Load the window layout generated by the designer
self.ui = Ui_MainClientWindow()
self.ui.setupUi(self)
# Кнопка "Выход"
self.ui.menu_exit.triggered.connect(qApp.exit)
# Send-message button
self.ui.btn_send.clicked.connect(self.send_message)
# "добавить контакт"
self.ui.btn_add_contact.clicked.connect(self.add_contact_window)
self.ui.menu_add_contact.triggered.connect(self.add_contact_window)
# Remove contact
self.ui.btn_remove_contact.clicked.connect(self.delete_contact_window)
self.ui.menu_del_contact.triggered.connect(self.delete_contact_window)
# Additional required attributes
self.contacts_model = None
self.history_model = None
self.messages = QMessageBox()
self.current_chat = None
self.current_chat_key = None
self.encryptor = None
self.ui.list_messages.setHorizontalScrollBarPolicy(Qt.ScrollBarAlwaysOff)
self.ui.list_messages.setWordWrap(True)
# A double click on the contact list goes to the handler
self.ui.list_contacts.doubleClicked.connect(self.select_active_user)
self.clients_list_update()
self.set_disabled_input()
self.show()
def set_disabled_input(self):
"""
Метод делающий поля ввода неактивными.
:return: ничего не возвращает
"""
# Надпись - получатель.
self.ui.label_new_message.setText('Для выбора получателя дважды кликните на нем в окне контактов.')
self.ui.text_message.clear()
if self.history_model:
self.history_model.clear()
# The input field and send button stay disabled until a recipient is chosen.
self.ui.btn_clear.setDisabled(True)
self.ui.btn_send.setDisabled(True)
self.ui.text_message.setDisabled(True)
self.encryptor = None
self.current_chat = None
self.current_chat_key = None
def clients_list_update(self):
"""
Метод обновляющий список контактов.
:return: ничего не возвращает
"""
contacts_list = self.database.get_contacts()
self.contacts_model = QStandardItemModel()
for i in sorted(contacts_list):
item = QStandardItem(i)
item.setEditable(False)
self.contacts_model.appendRow(item)
self.ui.list_contacts.setModel(self.contacts_model)
def add_contact_window(self):
"""
Метод создающий окно - диалог добавления контакта
:return: ничего не возвращает
"""
global select_dialog
select_dialog = AddContactDialog(self.transport, self.database)
select_dialog.btn_ok.clicked.connect(lambda: self.add_contact_action(select_dialog))
select_dialog.show()
def add_contact_action(self, item):
"""
Метод обработчк нажатия кнопки "Добавить"
:param item:
:return: ничего не возвращает
"""
new_contact = item.selector.currentText()
self.add_contact(new_contact)
item.close()
def add_contact(self, new_contact):
"""
Метод добавляющий контакт в серверную и клиентсткую BD.
После обновления баз данных обновляет и содержимое окна.
:param new_contact: новый контакт
:return: ничего не возвращает
"""
try:
self.transport.add_contact(new_contact)
except ServerError as err:
self.messages.critical(self, 'Ошибка сервера', err.text)
except OSError as err:
if err.errno:
self.messages.critical(self, 'Ошибка', 'Потеряно соединение с сервером!')
self.close()
self.messages.critical(self, 'Ошибка', 'Таймаут соединения!')
else:
self.database.add_contact(new_contact)
new_contact_item = QStandardItem(new_contact)
new_contact_item.setEditable(False)
self.contacts_model.appendRow(new_contact_item)
CLIENT_LOGGER.info(f'Успешно добавлен контакт {new_contact}')
self.messages.information(self, 'Успех', 'Контакт успешно добавлен.')
def delete_contact_window(self):
"""
Метод создающий окно удаления контакта.
:return: ничего не возвращает
"""
global remove_dialog
remove_dialog = DelContactDialog(self.database)
remove_dialog.btn_ok.clicked.connect(lambda: self.delete_contact(remove_dialog))
remove_dialog.show()
def delete_contact(self, item):
"""
Метод удаляющий контакт из серверной и клиентсткой BD.
После обновления баз данных обновляет и содержимое окна.
:param item:
:return: ничего не возвращает
"""
selected = item.selector.currentText()
try:
self.transport.remove_contact(selected)
except ServerError as err:
self.messages.critical(self, 'Ошибка сервера', err.text)
except OSError as err:
if err.errno:
self.messages.critical(self, 'Ошибка', 'Потеряно соединение с сервером!')
self.close()
self.messages.critical(self, 'Ошибка', 'Таймаут соединения!')
else:
self.database.del_contact(selected)
self.clients_list_update()
CLIENT_LOGGER.info(f'Успешно удалён контакт {selected}')
self.messages.information(self, 'Успех', 'Контакт успешно удалён.')
item.close()
# If the active user was removed, disable the input fields.
if selected == self.current_chat:
self.current_chat = None
self.set_disabled_input()
def select_active_user(self):
"""
Метод обработчик события двойного клика по списку контактов.
:return: ничего не возвращает
"""
# Выбранный пользователем контакт находится в выделенном элементе в QListView
self.current_chat = self.ui.list_contacts.currentIndex().data()
# call the main function
self.set_active_user()
def set_active_user(self):
"""
Метод активации чата с собеседником.
:return: ничего не возвращает
"""
# Запрашиваем публичный ключ пользователя и создаём объект шифрования
try:
self.current_chat_key = self.transport.key_request(
self.current_chat)
CLIENT_LOGGER.debug(f'Загружен открытый ключ для {self.current_chat}')
if self.current_chat_key:
self.encryptor = PKCS1_OAEP.new(
RSA.import_key(self.current_chat_key))
except (OSError, json.JSONDecodeError):
self.current_chat_key = None
self.encryptor = None
CLIENT_LOGGER.debug(f'Не удалось получить ключ для {self.current_chat}')
# Without a key, report that the chat with the user could not be started
if not self.current_chat_key:
self.messages.warning(
self, 'Ошибка', 'Для выбранного пользователя нет ключа шифрования.')
return
# Set the label and enable the buttons
self.ui.label_new_message.setText(
f'Введите сообщенние для {self.current_chat}:')
self.ui.btn_clear.setDisabled(False)
self.ui.btn_send.setDisabled(False)
self.ui.text_message.setDisabled(False)
# Fill the window with the message history for the selected user.
self.history_list_update()
def history_list_update(self):
"""
Метод заполняющий соответствующий QListView
историей переписки с текущим собеседником.
:return: ничего не возвращает
"""
# Получаем историю сортированную по дате
list_messages = sorted(self.database.get_history(self.current_chat),
key=lambda item: item[3])
# Create the model if it does not exist yet.
if not self.history_model:
self.history_model = QStandardItemModel()
self.ui.list_messages.setModel(self.history_model)
# Clear old entries
self.history_model.clear()
# Take no more than the 20 most recent entries.
length = len(list_messages)
start_index = 0
if length > 20:
start_index = length - 20
# Fill the model with entries; incoming and outgoing messages
# are also distinguished by alignment and background colour.
# Entries are in reverse order, so take them from the end, at most 20
for i in range(start_index, length):
item = list_messages[i]
if item[1] == 'in':
mess = QStandardItem(f'Входящее от {item[3].replace(microsecond=0)}:\n {item[2]}')
mess.setEditable(False)
mess.setBackground(QBrush(QColor(255, 213, 213)))
mess.setTextAlignment(Qt.AlignLeft)
self.history_model.appendRow(mess)
else:
mess = QStandardItem(f'Исходящее от {item[3].replace(microsecond=0)}:\n {item[2]}')
mess.setEditable(False)
mess.setTextAlignment(Qt.AlignRight)
mess.setBackground(QBrush(QColor(204, 255, 204)))
self.history_model.appendRow(mess)
self.ui.list_messages.scrollToBottom()
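A hedged sketch of the "no more than 20 most recent entries" windowing used by history_list_update above, run on stand-in data rather than real database rows:

```python
# Pretend these are history rows already sorted by date, oldest first.
messages = list(range(35))

length = len(messages)
start_index = length - 20 if length > 20 else 0
# The loop in history_list_update walks exactly this slice.
window = [messages[i] for i in range(start_index, length)]

assert len(window) == 20   # capped at 20 entries
assert window[-1] == 34    # the newest entry stays last
assert window[0] == 15     # rows older than the last 20 are dropped
```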
def send_message(self):
"""
Функция отправки сообщения текущему собеседнику.
Реализует шифрование сообщения и его отправку.
:return: ничего не возвращает
"""
# Текст в поле, проверяем что поле не пустое затем забирается сообщение
# и поле очищается
message_text = self.ui.text_message.toPlainText()
self.ui.text_message.clear()
if not message_text:
return
# Encrypt the message with the recipient's key and pack it into base64.
message_text_encrypted = self.encryptor.encrypt(
message_text.encode('utf8'))
message_text_encrypted_base64 = base64.b64encode(
message_text_encrypted)
try:
self.transport.send_message(
self.current_chat,
message_text_encrypted_base64.decode('ascii'))
except ServerError as err:
self.messages.critical(self, 'Ошибка', err.text)
except OSError as err:
if err.errno:
self.messages.critical(
self, 'Ошибка', 'Потеряно соединение с сервером!')
self.close()
self.messages.critical(self, 'Ошибка', 'Таймаут соединения!')
except (ConnectionResetError, ConnectionAbortedError):
self.messages.critical(
self, 'Ошибка', 'Потеряно соединение с сервером!')
self.close()
else:
self.database.save_message(self.current_chat, 'out', message_text)
CLIENT_LOGGER.debug(
f'Отправлено сообщение для {self.current_chat}: {message_text}')
self.history_list_update()
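A hedged, stdlib-only sketch of the base64 framing applied to the RSA ciphertext in send_message above, using stand-in bytes instead of a real PKCS1_OAEP output:

```python
import base64

# Arbitrary binary blob standing in for encryptor.encrypt(...) output.
ciphertext = b'\x00\xffnot-really-rsa-output'

wire = base64.b64encode(ciphertext).decode('ascii')   # what goes over the socket
assert base64.b64decode(wire) == ciphertext           # the message() slot reverses it
assert wire.isascii()                                 # safe to embed in the JSON message dict
```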
@pyqtSlot(dict)
def message(self, message):
"""
Слот обработчик поступаемых сообщений, выполняет дешифровку
поступаемых сообщений и их сохранение в истории сообщений.
Запрашивает пользователя если пришло сообщение не от текущего
собеседника. При необходимости меняет собеседника.
:param message: поступаемое сообщение
:return: ничего не возвращает
"""
# Получаем строку байтов
encrypted_message = base64.b64decode(message[MESSAGE_TEXT])
# Decrypt the string; on error show a message and leave the function
try:
decrypted_message = self.decrypter.decrypt(encrypted_message)
except (ValueError, TypeError):
self.messages.warning(
self, 'Ошибка', 'Не удалось декодировать сообщение.')
return
# Save the message to the database and refresh the message history,
# or open a new chat.
self.database.save_message(
self.current_chat,
'in',
decrypted_message.decode('utf8'))
sender = message[SENDER]
if sender == self.current_chat:
self.history_list_update()
else:
# Check whether this user is in our contacts:
if self.database.check_contact(sender):
# If so, ask whether the user wants to open a chat
# with them, and open it if desired
if self.messages.question(
self,
'Новое сообщение',
f'Получено новое сообщение от {sender}, открыть чат с ним?',
QMessageBox.Yes,
QMessageBox.No) == QMessageBox.Yes:
self.current_chat = sender
self.set_active_user()
else:
# The sender is not in our contacts, so ask whether to add them.
if self.messages.question(
self,
'Новое сообщение',
f'Получено новое сообщение от {sender}.'
f'\n Данного пользователя нет в вашем контакт-листе.'
f'\n Добавить в контакты и открыть чат с ним?',
QMessageBox.Yes,
QMessageBox.No) == QMessageBox.Yes:
self.add_contact(sender)
self.current_chat = sender
# The message must be saved again, otherwise it is lost,
# since the contact did not exist at the time of the previous call.
self.database.save_message(
self.current_chat, 'in', decrypted_message.decode('utf8'))
self.set_active_user()
@pyqtSlot()
def connection_lost(self):
"""
Слот обработчик потери соеднинения с сервером.
Выдаёт окно предупреждение и завершает работу приложения.
:return: ничего не возвращает
"""
self.messages.warning(self, 'Сбой соединения', 'Потеряно соединение с сервером.')
self.close()
@pyqtSlot()
def sig_205(self):
"""
Слот выполняющий обновление баз данных по команде сервера.
:return: ничего не возвращает
"""
if self.current_chat and not self.database.check_user(
self.current_chat):
self.messages.warning(
self,
'Сочувствую',
'К сожалению собеседник был удалён с сервера.')
self.set_disabled_input()
self.current_chat = None
self.clients_list_update()
def make_connection(self, trans_obj):
"""
Метод обеспечивающий соединение сигналов и слотов.
:param trans_obj: объект-транспорт
:return: ничего не возвращает
"""
trans_obj.new_message.connect(self.message)
trans_obj.connection_lost.connect(self.connection_lost)
trans_obj.message_205.connect(self.sig_205)
|
Andy-mess-client
|
/Andy_mess_client-0.0.1.tar.gz/Andy_mess_client-0.0.1/client/client/main_window.py
|
main_window.py
|
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String, Text, DateTime
from sqlalchemy.orm import mapper, sessionmaker
import datetime
import os
# Client-side database class.
class ClientDatabase:
"""
Класс - оболочка для работы с базой данных клиента.
Использует SQLite базу данных, реализован с помощью
SQLAlchemy ORM и используется классический подход.
"""
class Contacts:
"""
Класс - отображение для таблицы контактов.
"""
def __init__(self, contact):
self.id = None
self.name = contact
class MessageStat:
"""
Класс - отображение для таблицы статистики переданных сообщений.
"""
def __init__(self, contact, direction, message):
self.id = None
self.contact = contact
self.direction = direction
self.message = message
self.date = datetime.datetime.now()
class KnownUsers:
"""
Класс - отображение для таблицы всех пользователей.
"""
def __init__(self, user):
self.id = None
self.username = user
# Class constructor:
def __init__(self, name):
# Create the database engine; since several clients may run at once,
# each must have its own database.
# The client is multithreaded, so same-thread connection checks
# must be disabled, otherwise sqlite3.ProgrammingError is raised
path = os.path.dirname(os.path.realpath(__file__))
filename = f'client_{name}.db3'
self.database_engine = create_engine(f'sqlite:///{os.path.join(path, filename)}',
echo=False,
pool_recycle=7200,
connect_args={'check_same_thread': False})
# Create the MetaData object
self.metadata = MetaData()
# Create the contacts table
contacts = Table('contacts', self.metadata,
Column('id', Integer, primary_key=True),
Column('name', String, unique=True)
)
# Create the message history table
history = Table('message_history', self.metadata,
Column('id', Integer, primary_key=True),
Column('contact', String),
Column('direction', String),
Column('message', Text),
Column('date', DateTime)
)
# Create the known-users table
users = Table('known_users', self.metadata,
Column('id', Integer, primary_key=True),
Column('username', String)
)
# Create the tables
self.metadata.create_all(self.database_engine)
# Create the mappings (the classic mapper() API, removed in SQLAlchemy 2.0)
mapper(self.Contacts, contacts)
mapper(self.MessageStat, history)
mapper(self.KnownUsers, users)
# Create the session
Session = sessionmaker(bind=self.database_engine)
self.session = Session()
# The contacts table must be cleared, since contacts are loaded from the server at startup.
self.session.query(self.Contacts).delete()
self.session.commit()
def add_users(self, users_list):
"""
Метод, заполняющий таблицу известных пользователей.
:param users_list: список пользователей
:return: ничего не возвращает
"""
self.session.query(self.KnownUsers).delete()
for user in users_list:
user_row = self.KnownUsers(user)
self.session.add(user_row)
self.session.commit()
def add_contact(self, contact):
"""
Метод добавляющий контакт в базу данных.
:param contact: добавляемый контакт
:return: ничего не возвращает
"""
if not self.session.query(self.Contacts).filter_by(name=contact).count():
contact_row = self.Contacts(contact)
self.session.add(contact_row)
self.session.commit()
def contacts_clear(self):
"""
Метод, очищающий таблицу со списком контактов.
:return: ничего не возвращает
"""
self.session.query(self.Contacts).delete()
self.session.commit()
def del_contact(self, contact):
"""
Метод, удаляющий определённый контакт.
:param contact: удаляемый контакт
:return: ничего не возвращает
"""
self.session.query(self.Contacts).filter_by(name=contact).delete()
self.session.commit()
def check_user(self, user):
"""
Checks whether a user exists.
:param user: user to check
:return: True or False
"""
if self.session.query(self.KnownUsers).filter_by(username=user).count():
return True
else:
return False
def get_contacts(self):
"""
Returns the list of all contacts.
:return: list of all contacts
"""
return [contact[0] for contact in self.session.query(self.Contacts.name).all()]
def get_users(self):
"""
Returns the list of all known users.
:return: list of all known users
"""
return [user[0] for user in self.session.query(self.KnownUsers.username).all()]
def check_contact(self, contact):
"""
Checks whether a contact exists.
:param contact: contact to check
:return: True or False
"""
if self.session.query(self.Contacts).filter_by(name=contact).count():
return True
else:
return False
def get_history(self, contact):
"""
Returns the message history with the given user.
:param contact: user
:return: message history with the given user
"""
query = self.session.query(self.MessageStat).filter_by(contact=contact)
return [(history_row.contact, history_row.direction,
history_row.message, history_row.date)
for history_row in query.all()]
def save_message(self, contact, direction, message):
"""
Saves a message to the database.
:param contact: user
:param direction: direction
:param message: message text
:return: None
"""
message_row = self.MessageStat(contact, direction, message)
self.session.add(message_row)
self.session.commit()
# debugging / self-test
if __name__ == '__main__':
test_db = ClientDatabase('test1')
for i in ['test3', 'test4', 'test5']:
test_db.add_contact(i)
test_db.add_contact('test4')
test_db.add_users(['test1', 'test2', 'test3', 'test4', 'test5'])
print(test_db.check_user('test1'))
print(test_db.check_user('test10'))
print(test_db.get_contacts())
print(test_db.get_users())
print(test_db.check_contact('test3'))
print(test_db.check_contact('test10'))
test_db.del_contact('test3')
print(test_db.check_contact('test3'))
test_db.save_message('test1', 'out',
f'Тестовое сообщение от пользователя test1 от {datetime.datetime.now()}!')
test_db.save_message('test1', 'in',
f'Тестовое сообщение от пользователя test2 от {datetime.datetime.now()}!')
print(test_db.get_history('test1'))
print(test_db.get_history('test3'))
# ==== Andy-mess-client: client/client/database.py (Andy_mess_client-0.0.1.tar.gz) ====
import socket
import sys
import traceback
import logging
from functools import wraps
sys.path.append('../')
from logs import client_log_config, server_log_config
if sys.argv[0].find('server.py') == -1:
LOGGER = logging.getLogger('client')
else:
LOGGER = logging.getLogger('server')
def log(func):
"""
Decorator that logs function calls.
Saves debug-level events containing the name of the called
function, the arguments it is called with, and the module
that calls it.
"""
@wraps(func)
def decorated(*args, **kwargs):
func_to_log = func(*args, **kwargs)
LOGGER.debug(f'Функция {func.__name__}() вызвана из функции {traceback.format_stack()[0].strip().split()[-1]}')
return func_to_log
return decorated
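The same wrap-and-delegate pattern can be shown in a self-contained sketch; the logger name and the `greet` function below are illustrative, not part of the project:

```python
import logging
from functools import wraps

logging.basicConfig(level=logging.DEBUG)
LOGGER = logging.getLogger('demo')


def log(func):
    """Record the call at debug level, then delegate to the wrapped function."""
    @wraps(func)
    def decorated(*args, **kwargs):
        LOGGER.debug('Function %s() called with args=%s kwargs=%s',
                     func.__name__, args, kwargs)
        return func(*args, **kwargs)
    return decorated


@log
def greet(name):
    return f'Hello, {name}!'


print(greet('test1'))   # Hello, test1!
print(greet.__name__)   # wraps preserves the original name: greet
```

Note that `@wraps` is what keeps `__name__` and the docstring of the wrapped function intact, which matters for the bytecode checks in the metaclasses below.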
def login_required(func):
"""
Decorator that checks the client is authorised on the server.
Verifies that the socket object passed in is in the list of
authorised clients, with the exception of a dict that is an
authorisation request. If the client is not authorised,
raises TypeError.
"""
@wraps(func)
def checker(*args, **kwargs):
# check that the first argument is a MessageProcessor instance
# the import must happen here, otherwise a recursive import error occurs
from server.core import MessageProcessor
from common.variables import ACTION, PRESENCE
if isinstance(args[0], MessageProcessor):
found = False
for arg in args:
if isinstance(arg, socket.socket):
# check that this socket is present in the names list of the
# MessageProcessor instance
for client in args[0].names:
if args[0].names[client] == arg:
found = True
# now check that the arguments are not a presence message;
# if it is a presence message, allow it
for arg in args:
if isinstance(arg, dict):
if ACTION in arg and arg[ACTION] == PRESENCE:
found = True
# if not authorised and not a message starting authorisation,
# raise an exception
if not found:
raise TypeError
return func(*args, **kwargs)
return checker
# ==== Andy-mess-client: client/common/decorators.py (Andy_mess_client-0.0.1.tar.gz) ====
import dis
class ServerVerifier(type):
"""
Metaclass verifying that the resulting class contains no
client-side calls such as connect. It also checks that the
server socket is TCP and works over IPv4.
"""
def __init__(cls, clsname, bases, clsdict):
# clsname - экземпляр метакласса - Server
# bases - кортеж базовых классов - ()
# clsdict - словарь атрибутов и методов экземпляра метакласса
# {'__module__': '__main__',
# '__qualname__': 'Server',
# 'port': <descrptrs.Port object at 0x000000DACC8F5748>,
# '__init__': <function Server.__init__ at 0x000000DACCE3E378>,
# 'init_socket': <function Server.init_socket at 0x000000DACCE3E400>,
# 'main_loop': <function Server.main_loop at 0x000000DACCE3E488>,
# 'process_message': <function Server.process_message at 0x000000DACCE3E510>,
# 'process_client_message': <function Server.process_client_message at 0x000000DACCE3E598>}
# Список методов, которые используются в функциях класса:
methods = [] # получаем с помощью 'LOAD_GLOBAL'
# Обычно методы, обёрнутые декораторами попадают
# не в 'LOAD_GLOBAL', а в 'LOAD_METHOD'
methods_2 = [] # получаем с помощью 'LOAD_METHOD'
# Атрибуты, используемые в функциях классов
attrs = [] # получаем с помощью 'LOAD_ATTR'
# перебираем ключи
for func in clsdict:
# Пробуем
try:
# Возвращает итератор по инструкциям в предоставленной функции,
# методе, строке исходного кода или объекте кода.
ret = dis.get_instructions(clsdict[func])
# ret - <generator object _get_instructions_bytes at 0x00000062EAEAD7C8>
# ret - <generator object _get_instructions_bytes at 0x00000062EAEADF48>
# ...
# Если не функция, то ловим исключение
except TypeError:
pass
else:
# Если функция, то разбираем код, получая используемые методы и атрибуты.
for i in ret:
# print(i)
# i - Instruction(opname='LOAD_GLOBAL', opcode=116, arg=9, argval='send_message',
# argrepr='send_message', offset=308, starts_line=201, is_jump_target=False)
# opname - имя для операции
if i.opname == 'LOAD_GLOBAL':
if i.argval not in methods:
# заполняем список методами, использующимися в функциях класса
methods.append(i.argval)
elif i.opname == 'LOAD_METHOD':
if i.argval not in methods_2:
# заполняем список атрибутами, использующимися в функциях класса
methods_2.append(i.argval)
elif i.opname == 'LOAD_ATTR':
if i.argval not in attrs:
# заполняем список атрибутами, использующимися в функциях класса
attrs.append(i.argval)
# Если обнаружено использование недопустимого метода connect, вызываем исключение:
if 'connect' in methods:
raise TypeError('Использование метода connect недопустимо в серверном классе')
# Если сокет не инициализировался константами SOCK_STREAM(TCP) AF_INET(IPv4), тоже исключение.
if not ('SOCK_STREAM' in methods and 'AF_INET' in methods):
raise TypeError('Некорректная инициализация сокета.')
# Обязательно вызываем конструктор предка:
super().__init__(clsname, bases, clsdict)
class ClientVerifier(type):
"""
Metaclass verifying that the resulting class contains no
server-side calls such as accept or listen. It also checks that
no socket is created inside the class constructor.
"""
def __init__(cls, clsname, bases, clsdict):
# Список методов, которые используются в функциях класса:
methods = []
for func in clsdict:
# Пробуем
try:
ret = dis.get_instructions(clsdict[func])
# Если не функция, то ловим исключение:
except TypeError:
pass
else:
# Если функция, то разбираем код, получая используемые методы:
for i in ret:
if i.opname == 'LOAD_GLOBAL':
if i.argval not in methods:
methods.append(i.argval)
# Если обнаружено использование недопустимого метода accept, listen, socket, то бросаем исключение:
for command in ('accept', 'listen', 'socket'):
if command in methods:
raise TypeError('В классе обнаружено использование запрещённого метода.')
# Вызов get_message или send_message из utils считаем корректным использованием сокетов
if 'get_message' in methods or 'send_message' in methods:
pass
else:
raise TypeError('Отсутствуют вызовы функций, работающих с сокетами.')
super().__init__(clsname, bases, clsdict)
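The bytecode inspection both metaclasses rely on can be demonstrated standalone. The `handler` function, the forbidden `connect` call and the `send_message` global below are illustrative only; `dis.get_instructions` never executes the function, so the undefined global is harmless:

```python
import dis


def handler(sock, data):
    # a forbidden client-side call plus an ordinary global call;
    # this function is only disassembled, never executed
    sock.connect(('127.0.0.1', 7777))
    send_message(sock, data)


methods = []   # names loaded via LOAD_GLOBAL
attrs = []     # names loaded via LOAD_METHOD / LOAD_ATTR

for instr in dis.get_instructions(handler):
    if instr.opname == 'LOAD_GLOBAL':
        methods.append(instr.argval)
    elif instr.opname in ('LOAD_METHOD', 'LOAD_ATTR'):
        attrs.append(instr.argval)

print('send_message' in methods)   # True
print('connect' in attrs)          # True
```

Checking both `LOAD_METHOD` and `LOAD_ATTR` covers newer CPython versions, where method calls compile to `LOAD_ATTR`; the comment in `ServerVerifier` about decorated methods landing in `LOAD_METHOD` reflects the same split.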
# ==== Andy-mess-client: client/common/metaclasses.py (Andy_mess_client-0.0.1.tar.gz) ====
import sys
import logging
import argparse
import configparser
import os
from PyQt5.QtWidgets import QApplication
from PyQt5.QtCore import Qt
import logs.server_log_config
from common.variables import *
from common.decorators import log
from server.database import ServerStorage
from server.core import MessageProcessor
from server.main_window import MainWindow
# Инициализация логирования сервера:
SERVER_LOGGER = logging.getLogger('server')
@log
def arg_parser(default_port, default_address):
"""
Command-line argument parser.
:param default_port: port
:param default_address: IP address
:return: IP address, port, GUI flag
"""
parser = argparse.ArgumentParser()
parser.add_argument('-p', default=default_port, type=int, nargs='?')
parser.add_argument('-a', default=default_address, nargs='?')
parser.add_argument('--no_gui', action='store_true')
namespace = parser.parse_args(sys.argv[1:])
listen_address = namespace.a
# the default value comes from the ini file as a string, so coerce to int
listen_port = int(namespace.p)
gui_flag = namespace.no_gui
SERVER_LOGGER.debug('Аргументы успешно загружены.')
return listen_address, listen_port, gui_flag
@log
def config_load():
"""
Parser for the ini configuration file.
:return: object holding the server configuration parameters
"""
config = configparser.ConfigParser()
dir_path = os.path.dirname(os.path.realpath(__file__))
config.read(f"{dir_path}/{'server.ini'}")
# Если конфиг файл загружен правильно, запускаемся, иначе конфиг по умолчанию.
if 'SETTINGS' in config:
return config
else:
config.add_section('SETTINGS')
config.set('SETTINGS', 'Default_port', str(DEFAULT_PORT))
config.set('SETTINGS', 'Listen_Address', '')
config.set('SETTINGS', 'Database_path', '')
config.set('SETTINGS', 'Database_file', 'server_database.db3')
return config
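The fallback logic above can be exercised without any file on disk; the section and keys mirror the defaults written by `config_load`, but the literal values here are a sketch rather than the real `server.ini`:

```python
import configparser

config = configparser.ConfigParser()
# read() silently skips missing files and returns the list of files it parsed
parsed = config.read('nonexistent.ini')
print(parsed)   # []

if 'SETTINGS' not in config:
    # no file found: fall back to defaults, as config_load does
    config.add_section('SETTINGS')
    config.set('SETTINGS', 'Default_port', '7777')
    config.set('SETTINGS', 'Listen_Address', '')
    config.set('SETTINGS', 'Database_file', 'server_database.db3')

print(config['SETTINGS']['Default_port'])   # 7777
```

Because `configparser` stores everything as strings, callers such as `arg_parser` must convert `Default_port` to `int` themselves.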
@log
def main():
"""
Main function.
:return: None
"""
# Загрузка файла конфигурации сервера
config = config_load()
# Загрузка параметров командной строки, если нет параметров, то задаём значения по умолчанию.
listen_address, listen_port, gui_flag = arg_parser(
config['SETTINGS']['Default_port'], config['SETTINGS']['Listen_Address'])
# Инициализация базы данных
database = ServerStorage(os.path.join(config['SETTINGS']['Database_path'], config['SETTINGS']['Database_file']))
# Создание экземпляра класса - сервера и его запуск:
server = MessageProcessor(listen_address, listen_port, database)
server.daemon = True
server.start()
# Если указан параметр без GUI то запускаем простенький обработчик
# консольного ввода
if gui_flag:
while True:
command = input('Введите exit для завершения работы сервера.')
if command == 'exit':
# Если выход, то завершаем основной цикл сервера.
server.running = False
server.join()
break
# Если не указан запуск без GUI, то запускаем GUI:
else:
# Создаём графическое окружение для сервера:
server_app = QApplication(sys.argv)
server_app.setAttribute(Qt.AA_DisableWindowContextHelpButton)
main_window = MainWindow(database, server, config)
# Запускаем GUI
server_app.exec_()
# По закрытию окон останавливаем обработчик сообщений
server.running = False
if __name__ == '__main__':
main()
# ==== Andy-mess-server: server/server.py (Andy_mess_server-0.0.1-py3-none-any.whl) ====
from PyQt5.QtWidgets import QDialog, QLabel, QLineEdit, QPushButton, QFileDialog, QMessageBox
from PyQt5.QtCore import Qt
import os
class ConfigWindow(QDialog):
"""Settings window."""
def __init__(self, config):
super().__init__()
self.config = config
self.initUI()
def initUI(self):
"""Window setup."""
self.setFixedSize(365, 260)
self.setWindowTitle('Настройки сервера')
self.setAttribute(Qt.WA_DeleteOnClose)
self.setModal(True)
# Надпись о файле базы данных:
self.db_path_label = QLabel('Путь до файла базы данных: ', self)
self.db_path_label.move(10, 10)
self.db_path_label.setFixedSize(240, 15)
# Строка с путём базы
self.db_path = QLineEdit(self)
self.db_path.setFixedSize(250, 20)
self.db_path.move(10, 30)
self.db_path.setReadOnly(True)
# Кнопка выбора пути.
self.db_path_select = QPushButton('Обзор...', self)
self.db_path_select.move(275, 28)
# Метка с именем поля файла базы данных
self.db_file_label = QLabel('Имя файла базы данных: ', self)
self.db_file_label.move(10, 68)
self.db_file_label.setFixedSize(180, 15)
# Поле для ввода имени файла
self.db_file = QLineEdit(self)
self.db_file.move(200, 66)
self.db_file.setFixedSize(150, 20)
# Метка с номером порта
self.port_label = QLabel('Номер порта для соединений:', self)
self.port_label.move(10, 108)
self.port_label.setFixedSize(180, 15)
# Поле для ввода номера порта
self.port = QLineEdit(self)
self.port.move(200, 108)
self.port.setFixedSize(150, 20)
# Метка с адресом для соединений
self.ip_label = QLabel('С какого IP принимаем соединения:', self)
self.ip_label.move(10, 148)
self.ip_label.setFixedSize(180, 15)
# Метка с напоминанием о пустом поле.
self.ip_label_note = QLabel(
' оставьте это поле пустым, чтобы\n принимать соединения с любых адресов.',
self)
self.ip_label_note.move(10, 168)
self.ip_label_note.setFixedSize(500, 30)
# Поле для ввода ip
self.ip = QLineEdit(self)
self.ip.move(200, 148)
self.ip.setFixedSize(150, 20)
# Кнопка сохранения настроек
self.save_btn = QPushButton('Сохранить', self)
self.save_btn.move(190, 220)
# button to close the window
self.close_button = QPushButton('Закрыть', self)
self.close_button.move(275, 220)
self.close_button.clicked.connect(self.close)
self.db_path_select.clicked.connect(self.open_file_dialog)
self.show()
self.db_path.insert(self.config['SETTINGS']['Database_path'])
self.db_file.insert(self.config['SETTINGS']['Database_file'])
self.port.insert(self.config['SETTINGS']['Default_port'])
self.ip.insert(self.config['SETTINGS']['Listen_Address'])
self.save_btn.clicked.connect(self.save_server_config)
def open_file_dialog(self):
"""Handler that opens the folder-selection dialog."""
global dialog
dialog = QFileDialog(self)
path = dialog.getExistingDirectory()
path = path.replace('/', '\\')
self.db_path.clear()
self.db_path.insert(path)
def save_server_config(self):
"""
Saves the settings.
Validates the entered values and, if everything is
correct, writes the ini file.
"""
global config_window
message = QMessageBox()
self.config['SETTINGS']['Database_path'] = self.db_path.text()
self.config['SETTINGS']['Database_file'] = self.db_file.text()
try:
port = int(self.port.text())
except ValueError:
message.warning(self, 'Ошибка', 'Порт должен быть числом')
else:
self.config['SETTINGS']['Listen_Address'] = self.ip.text()
if 1023 < port < 65536:
self.config['SETTINGS']['Default_port'] = str(port)
dir_path = os.path.dirname(os.path.realpath(__file__))
dir_path = os.path.join(dir_path, '..')
with open(os.path.join(dir_path, 'server.ini'), 'w') as conf:
self.config.write(conf)
message.information(
self, 'OK', 'Настройки успешно сохранены!')
else:
message.warning(
self, 'Ошибка', 'Порт должен быть от 1024 до 65536')
# ==== Andy-mess-server: server/server/config_window.py (Andy_mess_server-0.0.1-py3-none-any.whl) ====
from PyQt5.QtWidgets import QDialog, QLabel, QComboBox, QPushButton, QApplication
from PyQt5.QtCore import Qt
class DelUserDialog(QDialog):
"""
Класс - диалог выбора контакта для удаления.
"""
def __init__(self, database, server):
super().__init__()
self.database = database
self.server = server
self.setFixedSize(350, 120)
self.setWindowTitle('Удаление пользователя')
self.setAttribute(Qt.WA_DeleteOnClose)
self.setModal(True)
self.selector_label = QLabel(
'Выберите пользователя для удаления:', self)
self.selector_label.setFixedSize(200, 20)
self.selector_label.move(10, 0)
self.selector = QComboBox(self)
self.selector.setFixedSize(200, 20)
self.selector.move(10, 30)
self.btn_ok = QPushButton('Удалить', self)
self.btn_ok.setFixedSize(100, 30)
self.btn_ok.move(230, 20)
self.btn_ok.clicked.connect(self.remove_user)
self.btn_cancel = QPushButton('Отмена', self)
self.btn_cancel.setFixedSize(100, 30)
self.btn_cancel.move(230, 60)
self.btn_cancel.clicked.connect(self.close)
self.all_users_fill()
def all_users_fill(self):
"""
Fills the user list.
:return: None
"""
self.selector.addItems([item[0]
for item in self.database.users_list()])
def remove_user(self):
"""
Handler that removes the user.
:return: None
"""
self.database.remove_user(self.selector.currentText())
if self.selector.currentText() in self.server.names:
sock = self.server.names[self.selector.currentText()]
del self.server.names[self.selector.currentText()]
self.server.remove_client(sock)
# Рассылаем клиентам сообщение о необходимости обновить справочники
self.server.service_update_lists()
self.close()
if __name__ == '__main__':
app = QApplication([])
from database import ServerStorage
database = ServerStorage('../server_database.db3')
import os
import sys
path1 = os.path.join(os.getcwd(), '..')
sys.path.insert(0, path1)
from core import MessageProcessor
server = MessageProcessor('127.0.0.1', 7777, database)
dial = DelUserDialog(database, server)
dial.show()
app.exec_()
# ==== Andy-mess-server: server/server/remove_user.py (Andy_mess_server-0.0.1-py3-none-any.whl) ====
import sys
import json
from socket import socket, AF_INET, SOCK_STREAM, SOL_SOCKET, SO_REUSEADDR
import logging
import select
import threading
import os
import binascii
import hmac
sys.path.append('../')
import logs.server_log_config
from common.variables import *
from common.utils import get_message, send_message
from common.descriptors import Port
from common.decorators import login_required
# Инициализация логирования сервера:
SERVER_LOGGER = logging.getLogger('server')
class MessageProcessor(threading.Thread):
"""
Main server class. Accepts connections and dict packets
from clients, and processes incoming messages.
Runs as a separate thread.
"""
port = Port()
def __init__(self, listen_address, listen_port, database):
# Параметры подключения
self.addr = listen_address
self.port = listen_port
# База данных сервера
self.database = database
# Сокет, через который будет осуществляться работа
self.sock = None
# Список подключённых клиентов.
self.clients = []
# Список сообщений на отправку.
self.messages = []
# Сокеты
self.listen_sockets = None
self.error_sockets = None
# Флаг продолжения работы
self.running = True
# Словарь содержащий сопоставленные имена и соответствующие им сокеты.
self.names = dict()
# Конструктор предка
super().__init__()
def init_socket(self):
"""
Socket initialiser.
:return: None
"""
SERVER_LOGGER.info(f'Запущен сервер. Порт для подключений: {self.port}, '
f'адрес, с которого принимаются подключения: {self.addr}. '
f'Если адрес не указан, то принимаются соединения с любых адресов.')
# Готовим сокет.
transport = socket(AF_INET, SOCK_STREAM)
transport.setsockopt(SOL_SOCKET, SO_REUSEADDR, 1)
transport.bind((self.addr, self.port))
transport.settimeout(0.5)
# Начинаем слушать сокет.
self.sock = transport
self.sock.listen(MAX_CONNECTIONS)
@login_required
def process_client_message(self, message, client):
"""
Handler for incoming messages.
:param message: message
:param client: client socket
:return: None
"""
SERVER_LOGGER.debug(f'Разбор сообщения от клиента: {message}.')
# Если это сообщение о присутствии, принимаем и отвечаем.
if ACTION in message and message[ACTION] == PRESENCE and TIME in message \
and USER in message:
# Если сообщение о присутствии то вызываем функцию авторизации.
self.autorize_user(message, client)
# Если это сообщение, то отправляем его получателю.
elif ACTION in message and message[ACTION] == MESSAGE and DESTINATION in message and TIME in message \
and SENDER in message and MESSAGE_TEXT in message and self.names[message[SENDER]] == client:
if message[DESTINATION] in self.names:
self.database.process_message(message[SENDER], message[DESTINATION])
self.process_message(message)
try:
send_message(client, RESPONSE_200)
except OSError:
self.remove_client(client)
else:
response = RESPONSE_400
response[ERROR] = 'Пользователь не зарегистрирован на сервере.'
try:
send_message(client, response)
except OSError:
pass
return
# Если клиент выходит:
elif ACTION in message and message[ACTION] == EXIT and ACCOUNT_NAME in message \
and self.names[message[ACCOUNT_NAME]] == client:
self.remove_client(client)
# Если это запрос списка контактов
elif ACTION in message and message[ACTION] == GET_CONTACTS and USER in message and \
self.names[message[USER]] == client:
response = RESPONSE_202
response[LIST_INFO] = self.database.get_contacts(message[USER])
try:
send_message(client, response)
except OSError:
self.remove_client(client)
# Если это добавление контакта
elif ACTION in message and message[ACTION] == ADD_CONTACT and ACCOUNT_NAME in message and USER in message \
and self.names[message[USER]] == client:
self.database.add_contact(message[USER], message[ACCOUNT_NAME])
try:
send_message(client, RESPONSE_200)
except OSError:
self.remove_client(client)
# Если это удаление контакта
elif ACTION in message and message[ACTION] == REMOVE_CONTACT and ACCOUNT_NAME in message and USER in message \
and self.names[message[USER]] == client:
self.database.remove_contact(message[USER], message[ACCOUNT_NAME])
try:
send_message(client, RESPONSE_200)
except OSError:
self.remove_client(client)
# Если это запрос известных пользователей
elif ACTION in message and message[ACTION] == USERS_REQUEST and ACCOUNT_NAME in message \
and self.names[message[ACCOUNT_NAME]] == client:
response = RESPONSE_202
response[LIST_INFO] = [user[0] for user in self.database.users_list()]
try:
send_message(client, response)
except OSError:
self.remove_client(client)
# Если это запрос публичного ключа пользователя
elif ACTION in message and message[ACTION] == PUBLIC_KEY_REQUEST and ACCOUNT_NAME in message:
response = RESPONSE_511
response[DATA] = self.database.get_pubkey(message[ACCOUNT_NAME])
# может быть, что ключа ещё нет (пользователь никогда не логинился,
# тогда шлём 400)
if response[DATA]:
try:
send_message(client, response)
except OSError:
self.remove_client(client)
else:
response = RESPONSE_400
response[ERROR] = 'Нет публичного ключа для данного пользователя'
try:
send_message(client, response)
except OSError:
self.remove_client(client)
# Иначе отдаём Bad request
else:
response = RESPONSE_400
response[ERROR] = 'Запрос некорректен.'
try:
send_message(client, response)
except OSError:
self.remove_client(client)
def process_message(self, message):
"""
Sends a message to a client.
:param message: message
:return: None
"""
if message[DESTINATION] in self.names and self.names[message[DESTINATION]] in self.listen_sockets:
try:
send_message(self.names[message[DESTINATION]], message)
SERVER_LOGGER.info(f'Отправлено сообщение пользователю {message[DESTINATION]} '
f'от пользователя {message[SENDER]}.')
except OSError:
self.remove_client(message[DESTINATION])
elif message[DESTINATION] in self.names and self.names[message[DESTINATION]] not in self.listen_sockets:
SERVER_LOGGER.error(
f'Связь с клиентом {message[DESTINATION]} была потеряна. Соединение закрыто, доставка невозможна.')
self.remove_client(self.names[message[DESTINATION]])
else:
SERVER_LOGGER.error(f'Пользователь {message[DESTINATION]} не зарегистрирован на сервере. '
f'Отправка сообщения невозможна.')
def run(self):
"""
Main loop of the thread.
:return: None
"""
# Инициализация Сокета
self.init_socket()
# Основной цикл программы:
while self.running:
# Ждём подключения, если таймаут вышел, ловим исключение.
try:
client, client_address = self.sock.accept()
except OSError:
pass
else:
SERVER_LOGGER.info(f'Установлено соединение с ПК {client_address}.')
client.settimeout(5)
self.clients.append(client)
recv_data_list = []
err_list = []
# Проверяем на наличие ждущих клиентов.
try:
if self.clients:
recv_data_list, self.listen_sockets, self.error_sockets = select.select(
self.clients, self.clients, [], 0)
except OSError as err:
SERVER_LOGGER.error(f'Ошибка работы с сокетами: {err}')
# Принимаем сообщения и еcли ошибка, исключаем клиента.
if recv_data_list:
for client_with_message in recv_data_list:
try:
self.process_client_message(get_message(client_with_message), client_with_message)
except (OSError, json.JSONDecodeError, TypeError) as err:
SERVER_LOGGER.debug(f'Getting data from client exception.', exc_info=err)
self.remove_client(client_with_message)
def remove_client(self, client):
"""
Handler for a client whose connection was lost.
Finds the client and removes it from the lists and the database.
:param client: client socket
:return: None
"""
SERVER_LOGGER.info(f'Клиент {client.getpeername()} отключился от сервера.')
for name in self.names:
if self.names[name] == client:
self.database.user_logout(name)
del self.names[name]
break
self.clients.remove(client)
client.close()
def autorize_user(self, message, sock):
"""
Implements user authorisation.
:param message: message
:param sock: socket
:return: None
"""
# Если имя пользователя уже занято то возвращаем 400
SERVER_LOGGER.debug(f'Start auth process for {message[USER]}')
if message[USER][ACCOUNT_NAME] in self.names.keys():
response = RESPONSE_400
response[ERROR] = 'Имя пользователя уже занято.'
try:
SERVER_LOGGER.debug(f'Username busy, sending {response}')
send_message(sock, response)
except OSError:
SERVER_LOGGER.debug('OS Error')
pass
self.clients.remove(sock)
sock.close()
# Проверяем что пользователь зарегистрирован на сервере.
elif not self.database.check_user(message[USER][ACCOUNT_NAME]):
response = RESPONSE_400
response[ERROR] = 'Пользователь не зарегистрирован.'
try:
SERVER_LOGGER.debug(f'Unknown username, sending {response}')
send_message(sock, response)
except OSError:
pass
self.clients.remove(sock)
sock.close()
else:
SERVER_LOGGER.debug('Correct username, starting passwd check.')
# Иначе отвечаем 511 и проводим процедуру авторизации
# Словарь - заготовка
message_auth = RESPONSE_511
# Набор байтов в hex представлении
random_str = binascii.hexlify(os.urandom(64))
# В словарь байты нельзя, декодируем (json.dumps -> TypeError)
message_auth[DATA] = random_str.decode('ascii')
# build an HMAC of the stored password hash and the random string,
# keeping the server-side digest (avoid shadowing the built-in hash)
pwd_hash = hmac.new(self.database.get_hash(message[USER][ACCOUNT_NAME]), random_str, 'MD5')
digest = pwd_hash.digest()
SERVER_LOGGER.debug(f'Auth message = {message_auth}')
try:
# Обмен с клиентом
send_message(sock, message_auth)
ans = get_message(sock)
except OSError as err:
SERVER_LOGGER.debug('Error in auth, data:', exc_info=err)
sock.close()
return
client_digest = binascii.a2b_base64(ans[DATA])
# Если ответ клиента корректный, то сохраняем его в список
# пользователей.
if RESPONSE in ans and ans[RESPONSE] == 511 and \
hmac.compare_digest(digest, client_digest):
self.names[message[USER][ACCOUNT_NAME]] = sock
client_ip, client_port = sock.getpeername()
try:
send_message(sock, RESPONSE_200)
except OSError:
self.remove_client(message[USER][ACCOUNT_NAME])
# добавляем пользователя в список активных и,
# если у него изменился открытый ключ, то сохраняем новый
self.database.user_login(
message[USER][ACCOUNT_NAME],
client_ip,
client_port,
message[USER][PUBLIC_KEY])
else:
response = RESPONSE_400
response[ERROR] = 'Неверный пароль.'
try:
send_message(sock, response)
except OSError:
pass
self.clients.remove(sock)
sock.close()
def service_update_lists(self):
"""
Sends the service message 205 to clients.
:return: None
"""
for client in self.names:
try:
send_message(self.names[client], RESPONSE_205)
except OSError:
self.remove_client(self.names[client])
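The challenge-response exchange in `autorize_user` boils down to both sides computing an HMAC over the same random string, keyed with the password hash stored at registration. A self-contained sketch (the login and password are made up; the real key comes from `database.get_hash`):

```python
import binascii
import hashlib
import hmac
import os

# both sides share the password hash stored on the server at registration
password_hash = hashlib.pbkdf2_hmac('sha512', b'p@ssw0rd', b'test1', 10000)

# server: generate a random challenge and the expected digest
random_str = binascii.hexlify(os.urandom(64))
server_digest = hmac.new(password_hash, random_str, 'MD5').digest()

# client: answer the same challenge with its own copy of the hash
client_digest = hmac.new(password_hash, random_str, 'MD5').digest()

# compare_digest runs in constant time, avoiding timing side channels
print(hmac.compare_digest(server_digest, client_digest))   # True
```

In the real protocol the challenge travels in `message_auth[DATA]` as ASCII and the client's digest comes back base64-encoded, which is why `autorize_user` decodes it with `binascii.a2b_base64` before comparing.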
# ==== Andy-mess-server: server/server/core.py (Andy_mess_server-0.0.1-py3-none-any.whl) ====
from PyQt5.QtWidgets import QDialog, QPushButton, QLineEdit, QApplication, QLabel, QMessageBox
from PyQt5.QtCore import Qt
import hashlib
import binascii
class RegisterUser(QDialog):
"""Dialog for registering a user on the server."""
def __init__(self, database, server):
super().__init__()
self.database = database
self.server = server
self.setWindowTitle('Регистрация')
self.setFixedSize(175, 183)
self.setModal(True)
self.setAttribute(Qt.WA_DeleteOnClose)
self.label_username = QLabel('Введите имя пользователя:', self)
self.label_username.move(10, 10)
self.label_username.setFixedSize(150, 15)
self.client_name = QLineEdit(self)
self.client_name.setFixedSize(154, 20)
self.client_name.move(10, 30)
self.label_passwd = QLabel('Введите пароль:', self)
self.label_passwd.move(10, 55)
self.label_passwd.setFixedSize(150, 15)
self.client_passwd = QLineEdit(self)
self.client_passwd.setFixedSize(154, 20)
self.client_passwd.move(10, 75)
self.client_passwd.setEchoMode(QLineEdit.Password)
self.label_conf = QLabel('Введите подтверждение:', self)
self.label_conf.move(10, 100)
self.label_conf.setFixedSize(150, 15)
self.client_conf = QLineEdit(self)
self.client_conf.setFixedSize(154, 20)
self.client_conf.move(10, 120)
self.client_conf.setEchoMode(QLineEdit.Password)
self.btn_ok = QPushButton('Сохранить', self)
self.btn_ok.move(10, 150)
self.btn_ok.clicked.connect(self.save_data)
self.btn_cancel = QPushButton('Выход', self)
self.btn_cancel.move(90, 150)
self.btn_cancel.clicked.connect(self.close)
self.messages = QMessageBox()
self.show()
def save_data(self):
"""
Validates the input and saves the new user to the database.
"""
if not self.client_name.text():
self.messages.critical(
self, 'Ошибка', 'Не указано имя пользователя.')
return
elif self.client_passwd.text() != self.client_conf.text():
self.messages.critical(
self, 'Ошибка', 'Введённые пароли не совпадают.')
return
elif self.database.check_user(self.client_name.text()):
self.messages.critical(
self, 'Ошибка', 'Пользователь уже существует.')
return
else:
# Генерируем хэш пароля, в качестве соли будем использовать логин в
# нижнем регистре.
passwd_bytes = self.client_passwd.text().encode('utf-8')
salt = self.client_name.text().lower().encode('utf-8')
passwd_hash = hashlib.pbkdf2_hmac(
'sha512', passwd_bytes, salt, 10000)
self.database.add_user(
self.client_name.text(),
binascii.hexlify(passwd_hash))
self.messages.information(
self, 'Успех', 'Пользователь успешно зарегистрирован.')
# Рассылаем клиентам сообщение о необходимости обновить справочники
self.server.service_update_lists()
self.close()
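The hashing scheme used at registration (PBKDF2-HMAC-SHA512, 10000 iterations, lowercase login as salt) can be checked in isolation; the login and password below are made up:

```python
import binascii
import hashlib


def make_hash(login, password):
    # the salt is the lowercase login, as in the registration dialog
    return binascii.hexlify(hashlib.pbkdf2_hmac(
        'sha512', password.encode('utf-8'),
        login.lower().encode('utf-8'), 10000))


stored = make_hash('User1', 'p@ssw0rd')
# the same login in a different case yields the same salt, hence the same hash
print(make_hash('USER1', 'p@ssw0rd') == stored)   # True
```

Using the login as salt keeps the scheme deterministic across client and server, at the cost of a predictable salt; a random per-user salt stored alongside the hash would be stronger.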
if __name__ == '__main__':
app = QApplication([])
from database import ServerStorage
database = ServerStorage('../server_database.db3')
import os
import sys
path1 = os.path.join(os.getcwd(), '..')
sys.path.insert(0, path1)
from core import MessageProcessor
server = MessageProcessor('127.0.0.1', 7777, database)
dial = RegisterUser(database, server)
app.exec_()
# ==== Andy-mess-server: server/server/add_user.py (Andy_mess_server-0.0.1-py3-none-any.whl) ====
from PyQt5.QtWidgets import QMainWindow, QAction, qApp, QLabel, QTableView
from PyQt5.QtGui import QStandardItemModel, QStandardItem
from PyQt5.QtCore import QTimer
import sys
sys.path.append('../')
from server.stat_window import StatWindow
from server.config_window import ConfigWindow
from server.add_user import RegisterUser
from server.remove_user import DelUserDialog
class MainWindow(QMainWindow):
"""
Main server window.
"""
def __init__(self, database, server, config):
super().__init__()
# База данных сервера
self.database = database
self.server_thread = server
self.config = config
# Ярлык выхода
self.exitAction = QAction('Выход', self)
self.exitAction.setShortcut('Ctrl+Q')
self.exitAction.triggered.connect(qApp.quit)
# Кнопка обновить список клиентов
self.refresh_button = QAction('Обновить список', self)
# Кнопка настроек сервера
self.config_btn = QAction('Настройки сервера', self)
# Кнопка регистрации пользователя
self.register_btn = QAction('Регистрация пользователя', self)
# Кнопка удаления пользователя
self.remove_btn = QAction('Удаление пользователя', self)
# Кнопка вывести историю сообщений
self.show_history_button = QAction('История клиентов', self)
# Статусбар
self.statusBar()
self.statusBar().showMessage('Server Working')
# Тулбар
self.toolbar = self.addToolBar('MainBar')
self.toolbar.addAction(self.exitAction)
self.toolbar.addAction(self.refresh_button)
self.toolbar.addAction(self.show_history_button)
self.toolbar.addAction(self.config_btn)
self.toolbar.addAction(self.register_btn)
self.toolbar.addAction(self.remove_btn)
# Настройки геометрии основного окна
# Поскольку работать с динамическими размерами мы не умеем, и мало
# времени на изучение, размер окна фиксирован.
self.setFixedSize(800, 600)
self.setWindowTitle('Messaging Server alpha release')
# Надпись о том, что ниже список подключённых клиентов
self.label = QLabel('Список подключённых клиентов:', self)
self.label.setFixedSize(240, 15)
self.label.move(10, 35)
# Окно со списком подключённых клиентов.
self.active_clients_table = QTableView(self)
self.active_clients_table.move(10, 55)
self.active_clients_table.setFixedSize(780, 400)
# Таймер, обновляющий список клиентов 1 раз в секунду
self.timer = QTimer()
self.timer.timeout.connect(self.create_users_model)
self.timer.start(1000)
# Связываем кнопки с процедурами
self.refresh_button.triggered.connect(self.create_users_model)
self.show_history_button.triggered.connect(self.show_statistics)
self.config_btn.triggered.connect(self.server_config)
self.register_btn.triggered.connect(self.reg_user)
self.remove_btn.triggered.connect(self.rem_user)
# Последним параметром отображаем окно.
self.show()
    def create_users_model(self):
        """
        Fill the table of active users.
        :return: nothing
        """
        list_users = self.database.active_users_list()
        # Avoid shadowing the built-in name `list` for the model object.
        users_model = QStandardItemModel()
        users_model.setHorizontalHeaderLabels(
            ['Client Name', 'IP Address', 'Port', 'Connection Time'])
        for row in list_users:
            user, ip, port, time = row
            user = QStandardItem(user)
            user.setEditable(False)
            ip = QStandardItem(ip)
            ip.setEditable(False)
            port = QStandardItem(str(port))
            port.setEditable(False)
            # Drop the microseconds from the timestamp; that precision
            # is not needed here.
            time = QStandardItem(str(time.replace(microsecond=0)))
            time.setEditable(False)
            users_model.appendRow([user, ip, port, time])
        self.active_clients_table.setModel(users_model)
        self.active_clients_table.resizeColumnsToContents()
        self.active_clients_table.resizeRowsToContents()
    def show_statistics(self):
        """
        Create the client statistics window.
        :return: nothing
        """
        global stat_window
        stat_window = StatWindow(self.database)
        stat_window.show()

    def server_config(self):
        """
        Create the server settings window.
        :return: nothing
        """
        global config_window
        # Create the window and load the current settings into it
        config_window = ConfigWindow(self.config)

    def reg_user(self):
        """
        Create the user registration window.
        :return: nothing
        """
        global reg_window
        reg_window = RegisterUser(self.database, self.server_thread)
        reg_window.show()

    def rem_user(self):
        """
        Create the user removal window.
        :return: nothing
        """
        global rem_window
        rem_window = DelUserDialog(self.database, self.server_thread)
        rem_window.show()
|
Andy-mess-server
|
/Andy_mess_server-0.0.1-py3-none-any.whl/server/server/main_window.py
|
main_window.py
|
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String, DateTime, ForeignKey, Text
from sqlalchemy.orm import mapper, sessionmaker
import datetime
class ServerStorage:
    """
    Wrapper class for the server database.
    Uses an SQLite database, implemented with the classical
    (non-declarative) SQLAlchemy ORM approach.
    """

    class AllUsers:
        """
        Mapping for the table of all users.
        """

        def __init__(self, username, passwd_hash):
            self.name = username
            self.last_login = datetime.datetime.now()
            self.passwd_hash = passwd_hash
            self.pubkey = None
            self.id = None

    class ActiveUsers:
        """
        Mapping for the table of active users.
        """

        def __init__(self, user_id, ip_address, port, login_time):
            self.user = user_id
            self.ip_address = ip_address
            self.port = port
            self.login_time = login_time
            self.id = None

    class LoginHistory:
        """
        Mapping for the login history table.
        """

        def __init__(self, name, date, ip, port):
            self.id = None
            self.name = name
            self.date_time = date
            self.ip = ip
            self.port = port

    class UsersContacts:
        """
        Mapping for the table of user contacts.
        """

        def __init__(self, user, contact):
            self.id = None
            self.user = user
            self.contact = contact

    class UsersHistory:
        """
        Mapping for the table of usage statistics.
        """

        def __init__(self, user):
            self.id = None
            self.user = user
            self.sent = 0
            self.accepted = 0
    def __init__(self, path):
        # Create the database engine.
        self.database_engine = create_engine(f'sqlite:///{path}', echo=False, pool_recycle=7200,
                                             connect_args={'check_same_thread': False})
        # Create the MetaData object
        self.metadata = MetaData()
        # Users table
        users_table = Table('Users', self.metadata,
                            Column('id', Integer, primary_key=True),
                            Column('name', String, unique=True),
                            Column('last_login', DateTime),
                            Column('passwd_hash', String),
                            Column('pubkey', Text)
                            )
        # Active users table
        active_users_table = Table('Active_users', self.metadata,
                                   Column('id', Integer, primary_key=True),
                                   Column('user', ForeignKey('Users.id'), unique=True),
                                   Column('ip_address', String),
                                   Column('port', Integer),
                                   Column('login_time', DateTime)
                                   )
        # Login history table
        user_login_history = Table('Login_history', self.metadata,
                                   Column('id', Integer, primary_key=True),
                                   Column('name', ForeignKey('Users.id')),
                                   Column('date_time', DateTime),
                                   Column('ip', String),
                                   Column('port', String)
                                   )
        # User contacts table
        contacts = Table('Contacts', self.metadata,
                         Column('id', Integer, primary_key=True),
                         Column('user', ForeignKey('Users.id')),
                         Column('contact', ForeignKey('Users.id'))
                         )
        # Usage statistics table
        users_history_table = Table('History', self.metadata,
                                    Column('id', Integer, primary_key=True),
                                    Column('user', ForeignKey('Users.id')),
                                    Column('sent', Integer),
                                    Column('accepted', Integer)
                                    )
        # Create the tables
        self.metadata.create_all(self.database_engine)
        # Bind the ORM classes to the tables
        mapper(self.AllUsers, users_table)
        mapper(self.ActiveUsers, active_users_table)
        mapper(self.LoginHistory, user_login_history)
        mapper(self.UsersContacts, contacts)
        mapper(self.UsersHistory, users_history_table)
        # Create a session
        Session = sessionmaker(bind=self.database_engine)
        self.session = Session()
        # Clear the active users table on startup: any rows left over
        # from a previous run are stale.
        self.session.query(self.ActiveUsers).delete()
        self.session.commit()
    def user_login(self, username, ip_address, port, key):
        """
        Record the fact that a user has logged in, and update the user's
        public key if it has changed.
        :param username: login
        :param ip_address: IP address
        :param port: port
        :param key: public key
        :return: nothing
        """
        # Look the user up in the users table
        result = self.session.query(self.AllUsers).filter_by(name=username)
        # If the username is already present, update the last login time
        # and check the key. If the client sent a new key, store it.
        if result.count():
            user = result.first()
            user.last_login = datetime.datetime.now()
            if user.pubkey != key:
                user.pubkey = key
        # Otherwise raise an exception
        else:
            raise ValueError('User is not registered.')
        # Record the login in the active users table
        new_active_user = self.ActiveUsers(user.id, ip_address, port, datetime.datetime.now())
        self.session.add(new_active_user)
        # ...and in the login history
        history = self.LoginHistory(user.id, datetime.datetime.now(), ip_address, port)
        self.session.add(history)
        # Save the changes
        self.session.commit()
    def add_user(self, name, passwd_hash):
        """
        Register a user. Takes a name and a password hash and also
        creates a row in the statistics table.
        :param name: login
        :param passwd_hash: password hash
        :return: nothing
        """
        user_row = self.AllUsers(name, passwd_hash)
        self.session.add(user_row)
        self.session.commit()
        history_row = self.UsersHistory(user_row.id)
        self.session.add(history_row)
        self.session.commit()
    def remove_user(self, name):
        """
        Remove a user from the database.
        :param name: login
        :return: nothing
        """
        user = self.session.query(self.AllUsers).filter_by(name=name).first()
        self.session.query(self.ActiveUsers).filter_by(user=user.id).delete()
        self.session.query(self.LoginHistory).filter_by(name=user.id).delete()
        self.session.query(self.UsersContacts).filter_by(user=user.id).delete()
        self.session.query(
            self.UsersContacts).filter_by(
            contact=user.id).delete()
        self.session.query(self.UsersHistory).filter_by(user=user.id).delete()
        self.session.query(self.AllUsers).filter_by(name=name).delete()
        self.session.commit()
    def get_hash(self, name):
        """
        Get a user's password hash.
        :param name: login
        :return: the password hash
        """
        user = self.session.query(self.AllUsers).filter_by(name=name).first()
        return user.passwd_hash

    def get_pubkey(self, name):
        """
        Get a user's public key.
        :param name: login
        :return: the user's public key
        """
        user = self.session.query(self.AllUsers).filter_by(name=name).first()
        return user.pubkey

    def check_user(self, name):
        """
        Check whether a user exists.
        :param name: login
        :return: True or False
        """
        return bool(self.session.query(self.AllUsers).filter_by(name=name).count())
    def user_logout(self, username):
        """
        Record a user's disconnection.
        :param username: login
        :return: nothing
        """
        # Look up the user who is disconnecting.
        user = self.session.query(self.AllUsers).filter_by(name=username).first()
        # Remove them from the active users table.
        self.session.query(self.ActiveUsers).filter_by(user=user.id).delete()
        # Apply the changes
        self.session.commit()
    def users_list(self):
        """
        Return the list of known users with their last login time.
        :return: list of (name, last_login) tuples
        """
        # Query the rows of the users table.
        query = self.session.query(
            self.AllUsers.name,
            self.AllUsers.last_login
        )
        # Return a list of tuples
        return query.all()
    def active_users_list(self):
        """
        Return the list of active users.
        :return: list of (name, ip, port, login_time) tuples
        """
        # Join the tables and collect name, address, port and time tuples.
        query = self.session.query(
            self.AllUsers.name,
            self.ActiveUsers.ip_address,
            self.ActiveUsers.port,
            self.ActiveUsers.login_time
        ).join(self.AllUsers)
        # Return a list of tuples
        return query.all()
    def login_history(self, username=None):
        """
        Return the login history.
        :param username: login
        :return: the user's login history
        """
        # Query the login history
        query = self.session.query(self.AllUsers.name,
                                   self.LoginHistory.date_time,
                                   self.LoginHistory.ip,
                                   self.LoginHistory.port
                                   ).join(self.AllUsers)
        # If a username was given, filter by it
        if username:
            query = query.filter(self.AllUsers.name == username)
        # Return a list of tuples
        return query.all()
    def get_contacts(self, username):
        """
        Return a user's contact list.
        :param username: login
        :return: the user's contact list
        """
        # Look up the given user
        user = self.session.query(self.AllUsers).filter_by(name=username).one()
        # Query their contact list
        query = self.session.query(self.UsersContacts, self.AllUsers.name). \
            filter_by(user=user.id). \
            join(self.AllUsers, self.UsersContacts.contact == self.AllUsers.id)
        # Keep only the usernames and return them.
        return [contact[1] for contact in query.all()]
    def add_contact(self, user, contact):
        """
        Add a contact for a user.
        :param user: login
        :param contact: contact
        :return: nothing
        """
        # Get the user IDs
        user = self.session.query(self.AllUsers).filter_by(name=user).first()
        contact = self.session.query(self.AllUsers).filter_by(name=contact).first()
        # Skip duplicates and non-existent contacts (the user field is trusted)
        if not contact or self.session.query(self.UsersContacts).filter_by(user=user.id, contact=contact.id).count():
            return
        # Create the row and store it
        contact_row = self.UsersContacts(user.id, contact.id)
        self.session.add(contact_row)
        self.session.commit()
    def remove_contact(self, user, contact):
        """
        Remove a contact of a user.
        :param user: login
        :param contact: contact
        :return: nothing
        """
        # Get the user IDs
        user = self.session.query(self.AllUsers).filter_by(name=user).first()
        contact = self.session.query(self.AllUsers).filter_by(name=contact).first()
        # Check that the contact exists (the user field is trusted)
        if not contact:
            return
        # Delete the requested row
        self.session.query(self.UsersContacts).filter(
            self.UsersContacts.user == user.id, self.UsersContacts.contact == contact.id).delete()
        self.session.commit()
    def process_message(self, sender, recipient):
        """
        Record a message transfer in the statistics table.
        :param sender: sender
        :param recipient: recipient
        :return: nothing
        """
        # Get the sender and recipient IDs
        sender = self.session.query(self.AllUsers).filter_by(name=sender).first().id
        recipient = self.session.query(self.AllUsers).filter_by(name=recipient).first().id
        # Fetch the statistics rows and bump the counters
        sender_row = self.session.query(self.UsersHistory).filter_by(user=sender).first()
        sender_row.sent += 1
        recipient_row = self.session.query(self.UsersHistory).filter_by(user=recipient).first()
        recipient_row.accepted += 1
        self.session.commit()
    def message_history(self):
        """
        Return the message statistics.
        :return: message statistics
        """
        query = self.session.query(
            self.AllUsers.name,
            self.AllUsers.last_login,
            self.UsersHistory.sent,
            self.UsersHistory.accepted
        ).join(self.AllUsers)
        # Return a list of tuples
        return query.all()
# Debugging
if __name__ == '__main__':
    test_db = ServerStorage('../server_database.db3')
    # user_login() expects a public key as the fourth argument;
    # None is enough for a smoke test.
    test_db.user_login('test1', '192.168.1.113', 8080, None)
    test_db.user_login('test2', '192.168.1.113', 8081, None)
    print(test_db.users_list())
    # print(test_db.active_users_list())
    # test_db.user_logout('McG')
    # print(test_db.login_history('re'))
    # test_db.add_contact('test2', 'test1')
    # test_db.add_contact('test1', 'test3')
    # test_db.add_contact('test1', 'test6')
    # test_db.remove_contact('test1', 'test3')
    test_db.process_message('test1', 'test2')
    print(test_db.message_history())
|
Andy-mess-server
|
/Andy_mess_server-0.0.1-py3-none-any.whl/server/server/database.py
|
database.py
|
import socket
import sys
import traceback
import logging
from functools import wraps
sys.path.append('../')
from logs import client_log_config, server_log_config
if sys.argv[0].find('server.py') == -1:
LOGGER = logging.getLogger('client')
else:
LOGGER = logging.getLogger('server')
def log(func):
    """
    Decorator that logs function calls.
    Emits DEBUG-level records with the name of the called function,
    the parameters it was called with, and the module that called it.
    """
    @wraps(func)
    def decorated(*args, **kwargs):
        result = func(*args, **kwargs)
        LOGGER.debug(f'Function {func.__name__}() called from '
                     f'{traceback.format_stack()[0].strip().split()[-1]}')
        return result
    return decorated
def login_required(func):
    """
    Decorator that checks that the client is authorised on the server,
    i.e. that the socket object being passed is in the list of
    authorised clients. The only exception is a presence
    (authorisation request) message. If the client is not authorised,
    a TypeError is raised.
    """
    @wraps(func)
    def checker(*args, **kwargs):
        # Check that the first argument is a MessageProcessor instance.
        # The imports must live here to avoid a circular import.
        from server.core import MessageProcessor
        from common.variables import ACTION, PRESENCE
        if isinstance(args[0], MessageProcessor):
            found = False
            for arg in args:
                if isinstance(arg, socket.socket):
                    # Check that this socket is in the names dict of the
                    # MessageProcessor instance
                    for client in args[0].names:
                        if args[0].names[client] == arg:
                            found = True
            # A presence message is allowed even without authorisation
            for arg in args:
                if isinstance(arg, dict):
                    if ACTION in arg and arg[ACTION] == PRESENCE:
                        found = True
            # Not authorised and not the start of an authorisation:
            # raise an exception.
            if not found:
                raise TypeError
        return func(*args, **kwargs)
    return checker
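Stripped of the caller lookup via `traceback`, the `@log` decorator above reduces to the following sketch (the `add` function and `demo` logger name are illustrative only):

```python
import logging
from functools import wraps

LOGGER = logging.getLogger('demo')


def log(func):
    """Simplified analogue of the decorator above: record each call at DEBUG level."""
    @wraps(func)
    def decorated(*args, **kwargs):
        LOGGER.debug('Function %s() called with args=%r kwargs=%r',
                     func.__name__, args, kwargs)
        return func(*args, **kwargs)
    return decorated


@log
def add(a, b):
    return a + b


assert add(2, 3) == 5
# @wraps copies the wrapped function's metadata onto the wrapper,
# which keeps introspection (and repr in logs) readable.
assert add.__name__ == 'add'
```

The `@wraps(func)` line is what keeps `func.__name__` meaningful inside the log record; without it every decorated function would report itself as `decorated`.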
|
Andy-mess-server
|
/Andy_mess_server-0.0.1-py3-none-any.whl/server/common/decorators.py
|
decorators.py
|
import dis
class ServerVerifier(type):
    """
    Metaclass verifying that the resulting class contains no
    client-side calls such as connect, and that the server socket
    is TCP and uses the IPv4 protocol.
    """

    def __init__(cls, clsname, bases, clsdict):
        # clsname - the metaclass instance, e.g. Server
        # bases - tuple of base classes, e.g. ()
        # clsdict - dict of the instance's attributes and methods, e.g.
        # {'__module__': '__main__',
        #  '__qualname__': 'Server',
        #  'port': <descrptrs.Port object at 0x000000DACC8F5748>,
        #  '__init__': <function Server.__init__ at 0x000000DACCE3E378>,
        #  'init_socket': <function Server.init_socket at 0x000000DACCE3E400>,
        #  'main_loop': <function Server.main_loop at 0x000000DACCE3E488>,
        #  'process_message': <function Server.process_message at 0x000000DACCE3E510>,
        #  'process_client_message': <function Server.process_client_message at 0x000000DACCE3E598>}
        # Names used inside the class's functions:
        methods = []  # collected via 'LOAD_GLOBAL'
        # Methods wrapped in decorators usually show up
        # under 'LOAD_METHOD' rather than 'LOAD_GLOBAL'
        methods_2 = []  # collected via 'LOAD_METHOD'
        # Attributes used inside the class's functions
        attrs = []  # collected via 'LOAD_ATTR'
        # Walk the keys of the class dict
        for func in clsdict:
            try:
                # Returns an iterator over the instructions of the given
                # function, method, source string or code object.
                ret = dis.get_instructions(clsdict[func])
            # Not a function: skip it
            except TypeError:
                pass
            else:
                # It is a function: disassemble it, collecting the
                # methods and attributes it uses.
                for i in ret:
                    # i is an Instruction, e.g.
                    # Instruction(opname='LOAD_GLOBAL', opcode=116, arg=9, argval='send_message',
                    #             argrepr='send_message', offset=308, starts_line=201, is_jump_target=False)
                    # opname is the name of the operation
                    if i.opname == 'LOAD_GLOBAL':
                        if i.argval not in methods:
                            methods.append(i.argval)
                    elif i.opname == 'LOAD_METHOD':
                        if i.argval not in methods_2:
                            methods_2.append(i.argval)
                    elif i.opname == 'LOAD_ATTR':
                        if i.argval not in attrs:
                            attrs.append(i.argval)
        # Using the forbidden connect method raises an exception:
        if 'connect' in methods:
            raise TypeError('Using connect is not allowed in a server class')
        # The socket must be initialised with SOCK_STREAM (TCP) and AF_INET (IPv4):
        if not ('SOCK_STREAM' in methods and 'AF_INET' in methods):
            raise TypeError('Incorrect socket initialisation.')
        # Always call the parent constructor:
        super().__init__(clsname, bases, clsdict)
class ClientVerifier(type):
    """
    Metaclass verifying that the resulting class contains no
    server-side calls such as accept or listen, and that no socket
    is created inside the class constructor.
    """

    def __init__(cls, clsname, bases, clsdict):
        # Names used inside the class's functions:
        methods = []
        for func in clsdict:
            try:
                ret = dis.get_instructions(clsdict[func])
            # Not a function: skip it
            except TypeError:
                pass
            else:
                # It is a function: disassemble it, collecting the names it uses:
                for i in ret:
                    if i.opname == 'LOAD_GLOBAL':
                        if i.argval not in methods:
                            methods.append(i.argval)
        # Using a forbidden method (accept, listen, socket) raises an exception:
        for command in ('accept', 'listen', 'socket'):
            if command in methods:
                raise TypeError('A forbidden method is used in the class.')
        # Calling get_message or send_message from utils counts as correct socket usage
        if not ('get_message' in methods or 'send_message' in methods):
            raise TypeError('No calls to socket-handling functions found.')
        super().__init__(clsname, bases, clsdict)
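The bytecode-scanning idea behind these verifiers can be demonstrated in isolation. A stripped-down sketch, where `NoConnectVerifier`, `GoodServer` and `BadServer` are hypothetical names for illustration:

```python
import dis


class NoConnectVerifier(type):
    # Minimal analogue of ServerVerifier: scan the bytecode of every
    # method and reject any class whose methods load a global named
    # 'connect'.
    def __init__(cls, clsname, bases, clsdict):
        for obj in clsdict.values():
            try:
                instructions = dis.get_instructions(obj)
            except TypeError:
                continue  # not disassemblable (e.g. a plain attribute)
            for i in instructions:
                if i.opname == 'LOAD_GLOBAL' and i.argval == 'connect':
                    raise TypeError('connect is not allowed in a server class')
        super().__init__(clsname, bases, clsdict)


class GoodServer(metaclass=NoConnectVerifier):
    def serve(self):
        return 'listening'


rejected = False
try:
    class BadServer(metaclass=NoConnectVerifier):
        def run(self):
            connect('somewhere')  # noqa: F821 - deliberately triggers the verifier
except TypeError:
    rejected = True
```

The check runs at class-creation time, so a forbidden call is caught as soon as the module defining `BadServer` is imported, before any instance exists.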
|
Andy-mess-server
|
/Andy_mess_server-0.0.1-py3-none-any.whl/server/common/metaclasses.py
|
metaclasses.py
|
===============
AnechoDB_Access
===============
It is a library used to connect to a specific database, download the data stored in it and perform some simple calculations. The data are beam patterns obtained from measurements in two anechoic chambers, saved in the database as HDF5 files.
The package is divided into two distinct modules, called **connection** and **computation**, that have different tasks.
*************
connection.py
*************
This module is a class with useful functions to establish a connection with the chosen database.
The database is structured so that each *beam* identifier (the data the user is looking for) is linked to a *measurements* page with information about the measurement and links to the *projects* and *instruments* pages.
With **connection** it is possible to find the desired beam identifier by searching one of those pages, and finally to download the data, which is converted from a .h5 file to a Python dict preserving the same structure.
**************
computation.py
**************
This module has some functions used to perform simple (but useful) calculations on the data previously obtained with **connection**. So far this module has only two functions, one to compute the mean and variance of the data and the other to normalize and center beam patterns, but more will be added in the future.
Example of usage
----------------
Once installed, here is a typical way to use this package.
.. code-block:: python
>>> c = share_belen.connection.Connection(Host)
>>> i_m = c.search_meas_by_instruments('Instrument To Search')
    >>> # More than one measurement can be linked to the same instrument
    >>> i_b = c.search_beam_by_meas(i_m[0])
    >>> # More than one beam can be linked to the same measurement
>>> b = c.get_beam_in_dict_by_id(i_b[0])
>>> b_c = share_belen.computation.make_beam_meanvar(b)
>>> b_c_2 = share_belen.computation.center_norm_beam(b_c)
Requirements
------------
* `Python <http://www.python.org>`_ (tested with version >=3.3)
* `h5py <http://www.h5py.org/>`_
|
AnechoDB-Access
|
/AnechoDB_Access-1.01.zip/AnechoDB_Access-1.0/README.rst
|
README.rst
|
import numpy as np
import requests
import json
import os
import tempfile
import h5py
class Connection:
'''
    Class used for communication with the database, to find and retrieve beams stored in it.
'''
def __init__(self, host):
'''
Put the host of the database.
'''
self.host = host
def _find_by_var(self, link: str='', var: str='', dat: bool=True)->list:
'''
Return the link or the value of the var entry in the database.
If dat=True return link else, dat=False, return value of var.
'''
l = []
r = requests.get(os.path.join(self.host + '/anechodb/api/v1/' + link))
d = json.loads(r.text)
s = d['collection']['items']
if dat and var:
for i in range(np.shape(s)[0]):
for j in range(np.shape(s[i]['data'])[0]):
if var in s[i]['data'][j].values():
l = s[i]['href']
break # there should be only a link for each entry var
elif var:
for i in range(np.shape(s)[0]):
for j in range(np.shape(s[i]['links'])[0]):
if var in s[i]['links'][j]['rel']:
l.append(s[i]['data'][0]['value'])
return l
def print_link(self, link, idl):
'''
Print the json collection of the chosen link entry.
Input:
link(string):the link to the page of the database. It can only be:
'operators','instruments','projects','measurements','beams'.
idl(int): identifier of the page of the link .
'''
r = requests.get(os.path.join(self.host + '/anechodb/api/v1/' + link +
'/%d' % idl))
if r.status_code == 200:
print(json.loads(r.text))
def search_meas_by_instruments(self, var: str='')->list:
'''
Search which measurements are linked at the instrument decided by var entry.
Input:
var(string):The instrument used for the search (example 'VNA').
Output:
m_id(list of int):The identifier of the measurement that use the instrument.
'''
if var:
m_id = []
idl = Connection._find_by_var(self, 'instruments', var, True)
if idl:
idl = idl.split('/')[-1]
rel = "/api/v1/instruments/" + str(idl)
m_id = Connection._find_by_var(
self, 'measurements', rel, False)
return m_id
else:
raise Exception('Nothing found with this name: %s' % var)
def search_meas_by_projects(self, var: str='')->list:
'''
Search which measurements are linked at the project decided by var entry.
Input:
var(string):The project used for the search (example 'LSPE').
Output:
m_id(list of int):The identifier of the measurement that use the project.
'''
if var:
m_id = []
idl = Connection._find_by_var(self, 'projects', var, True)
if idl:
idl = idl.split('/')[-1]
rel = "/api/v1/projects/" + str(idl)
m_id = Connection._find_by_var(
self, 'measurements', rel, False)
return m_id
else:
raise Exception('Nothing found with this name: %s' % var)
def search_beam_by_meas(self, m_id: int=0)->list:
'''
Search which beams are linked at the measurement identifier decided by m_id entry.
Input:
m_id(int):The measurement identifier used for the search (example 1).
Output:
b_id(list of int):The identifier of the beams linked at the chosen measurement.
'''
b_id = []
if m_id:
rel = "/api/v1/measurements/" + str(m_id)
b_id = (Connection._find_by_var(self, 'beams', rel, False))
return b_id
else:
raise Exception('No beam linked to measurement id: %d' % m_id)
def _f5td(f_id, d: dict)->dict:
'''
Return a .h5 file as a dict variable. f_id is the class object File of a .h5 file.
'''
c = {}
for i in f_id.items():
if isinstance(i[1], h5py.Group):
if i[0] not in d.keys():
d[i[0]] = c.copy()
Connection._f5td(i[1], d[i[0]])
            else:
                # Dataset.value was removed in h5py 3.0; indexing with an
                # empty tuple reads the whole dataset on old and new h5py alike.
                d[i[0]] = f_id[i[0]][()]
return d
def get_beam_in_dict_by_id(self, b_id: int) -> dict:
'''
Download the beam chosen by identifier as a dict variable.
Input:
b_id(int):The beam identifier to download (example 1).
Output:
beam(dict):The beam downloaded. It has 4 fields as the original .h5 file plus the attribute field with some extra information.
'''
beam = {}
# Connect and dowload the chosen beam
head = {'Accept': 'application/x-hdf5'}
r = requests.get(
os.path.join(
self.host +
'/anechodb/api/v1/beams/%d' %
b_id),
headers=head)
f_b, p_b = tempfile.mkstemp(suffix='.h5')
try:
with os.fdopen(f_b, 'wb') as tmp:
tmp.write(r.content)
# Create dict variable beam
fid = h5py.File(p_b, 'r')
beam = Connection._f5td(fid, {})
A = {}
for key in fid.attrs.keys():
A['%s' % key] = str(fid.attrs.get('%s' % key))
beam['Attributes'] = A
fid.close()
finally:
os.remove(p_b)
return beam
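The recursive `_f5td` walk above can be illustrated without a real HDF5 file: h5py groups behave like nested mappings, so the same traversal works on plain dicts. A sketch, where `tree_to_dict` is a hypothetical stand-in and not part of the package:

```python
def tree_to_dict(node):
    """Recursively copy a nested mapping, mirroring Connection._f5td:
    group-like nodes recurse, leaf values are copied directly."""
    out = {}
    for key, value in node.items():
        if isinstance(value, dict):   # h5py.Group in the real code
            out[key] = tree_to_dict(value)
        else:                         # h5py.Dataset contents in the real code
            out[key] = value
    return out


# A toy tree shaped like the beams handled in this package.
tree = {'DUT': {'F_0': {'Ampl': [1, 2, 3]}}, 'Frequencies': [40]}
copy_of_tree = tree_to_dict(tree)
assert copy_of_tree == tree
assert copy_of_tree is not tree  # a new dict at every group level
```

In `get_beam_in_dict_by_id` the same walk is seeded with an open `h5py.File` object, so the returned dict preserves the group/dataset hierarchy of the original .h5 file.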
|
AnechoDB-Access
|
/AnechoDB_Access-1.01.zip/AnechoDB_Access-1.0/share_belen/connection.py
|
connection.py
|
import numpy as np
import copy
'''
Module with some useful function to compute simple analytics correction to beam patterns.
'''
def make_beam_meanvar(
beam: dict,
f: list=[],
start: int=0,
stop: int=None)->dict:
'''
    Compute the mean and variance of the data stored in beam at the chosen frequencies.
Input:
beam(dict):The beam to be computed. The Amplitude field stored should be a matrix to apply mean and variance along the measurement points.
f(list of float):The frequencies of the measure to be computed. If empty all the frequencies in beam are used. If it's only a number use as input a list(Example f=[40])
start(int):Starting index of the measurement array for the computation.
stop(int):Stopping index of the measurement array for the computation.
Output:
b(dict):The input beam with measurement changed with the mean and with a new field called Amplitude_Variance with the variance.
'''
b = copy.deepcopy(beam)
if not isinstance(f, list):
f = list(f)
if not f:
f = b['Frequencies']
for i in range(len(f)):
id_f = np.where(b['Frequencies'] == f[i])
if (b['DUT']['F_%d' % id_f[0][0]]['Ampl'].ndim > 1) and id_f:
b['DUT'][
'F_%d' %
id_f[0][0]]['Ampl_Var'] = np.var(
b['DUT'][
'F_%d' %
id_f[0][0]]['Ampl'][
:,
start:stop],
axis=1)
b['DUT'][
'F_%d' %
id_f[0][0]]['Ampl'] = np.mean(
b['DUT'][
'F_%d' %
id_f[0][0]]['Ampl'][
:,
start:stop],
axis=1)
return b
def center_norm_beam(beam: dict, f: list=[], center=True, norm=True)->dict:
'''
    Apply normalization and centering to the data stored in beam.
Input:
beam(dict):The beam to be computed. If Amplitude in beam is a matrix, the mean of the matrix is used for this computation.
f(array of float):The frequencies of the measure to be computed. If empty all the frequencies in beam are used. If it's only a number use as input a list(Example f=[40])
center(bool or int/float):If center=True, apply centering. If it's a number, this is used to correct the position.
norm(bool or int/float):If norm=True, apply normalization. If it's a number, this is used as normalization factor.
Output:
b(dict):The input beam with Amplitude and Positions computed. The positions of the original beam are stored in Original_Positions
field and a new field called Correction is created with centering and normalization factors stored for each frequency.
Notes:
If the beam is not copolar (it's seen in the Attributes field) input variables center and norm MUST be numbers to use this function.
'''
b = copy.deepcopy(beam)
corr = {}
P = {}
if all([b['Attributes']['Type'][-1] != 'O',
(isinstance(center, bool) or isinstance(norm, bool))]):
raise Exception(
'Input beam is a crosspolar, so center and norm entries must be float or int')
else:
if not isinstance(f, list):
f = list(f)
if not f:
f = b['Frequencies']
angle = b['Positions'][:, 1]
for i in range(len(f)):
c = {}
id_f = np.where(b['Frequencies'] == f[i])
if b['DUT']['F_%d' % id_f[0][0]]['Ampl'].ndim > 1:
power = np.mean(
b['DUT'][
'F_%d' %
id_f[0][0]]['Ampl'],
axis=1)
else:
power = b['DUT']['F_%d' % id_f[0][0]]['Ampl']
# Find window at 3 dB
maxpower = np.max(power)
index_main_beam = np.where(power >= maxpower - 3.)[0]
main_beam_power = power[index_main_beam]
main_beam_angle = angle[index_main_beam]
# Interpolate with parabola
parabola_fit = np.polyfit(main_beam_angle, main_beam_power, 2)
# Find parabola vertex
vertex_angle = -parabola_fit[1] / 2. / parabola_fit[0]
# Find parabola maximum
det = parabola_fit[1]**2 - 4. * \
parabola_fit[0] * parabola_fit[2]
vertex_power = -det / (4. * parabola_fit[0])
            # Note: the original `type(center == bool)` tested the type of a
            # comparison result (always truthy); isinstance is what was meant.
            # bool is checked first because isinstance(True, int) is also True.
            if isinstance(center, bool):
                if center:
                    newangle = angle - vertex_angle
                else:
                    newangle = angle
            elif isinstance(center, (int, float)):
                vertex_angle = float(center)
                newangle = angle - vertex_angle
            if isinstance(norm, bool):
                if norm:
                    newpower = power - vertex_power
                else:
                    newpower = power
            elif isinstance(norm, (int, float)):
                vertex_power = float(norm)
                newpower = power - vertex_power
b['DUT']['F_%d' % id_f[0][0]]['Ampl'] = newpower
P['F_%d' % id_f[0][0]] = newangle
c['Center'] = vertex_angle
c['Norm'] = vertex_power
corr['F_%d' % id_f[0][0]] = c
b['Original_positions'] = b['Positions']
b['Positions'] = P
b['Correction'] = corr
return b
|
AnechoDB-Access
|
/AnechoDB_Access-1.01.zip/AnechoDB_Access-1.0/share_belen/computation.py
|
computation.py
|
from pprint import pprint
import logging
import requests
# logging.basicConfig(format='%(levelname)s:%(message)s',
# level=logging.DEBUG)
logger = logging.getLogger(__name__)
class AnelPowerControl:
def __init__(self, address, auth=None):
self.address = address
self.auth = auth
def __getattr__(self, name):
return self.data[name]
class Socket:
def __init__(self, control, index, name, is_on, disabled, info):
self.control = control
self.index = index
self.name = name
self.is_on = is_on
self.disabled = disabled
self.info = info
def __repr__(self):
return '<AnelPowerControl.Socket #%d - %s - %s>' % (
self.index, self.name,
'on' if self.is_on else 'disabled' if self.disabled else 'off')
def on(self):
if not self.is_on:
logger.info('%s #%d (%s) turning on', self.control.address,
self.index, self.name)
self.control.control('F%d=T' % (self.index, ))
else:
logger.debug('%s #%d (%s) already on', self.control.address,
self.index, self.name)
def off(self):
if self.is_on:
logger.info('%s #%d (%s) turning off', self.control.address,
self.index, self.name)
self.control.control('F%d=T' % (self.index, ))
else:
logger.debug('%s #%d (%s) already off', self.control.address,
self.index, self.name)
def __getitem__(self, index):
return self.Socket(self, **self.data['sockets'][index])
def __iter__(self):
for index in range(8):
yield self.Socket(self, **self.data['sockets'][index])
@property
def data(self):
r = requests.get('http://%s/strg.cfg' % (self.address, ),
auth=self.auth)
fields = (
'name', 'host', 'ip', 'mask', 'gateway', 'mac', 'port',
'temperature', 'type'
)
splitted = r.text.split(';')
data = dict(zip(fields, splitted))
data['sockets'] = {}
for index in range(8):
# socket = AnelPowerControlSocket(index, name=splitted[10 + index])
socket = {
'index': index,
'name': splitted[10 + index],
'is_on': bool(int(splitted[20 + index])),
'disabled': bool(int(splitted[30 + index])),
'info': splitted[40 + index],
# 'tk': _splitted[50 + i],
}
data['sockets'][index] = socket
data['sockets'][socket['name']] = socket
return data
def control(self, data):
r = requests.post('http://%s/ctrl.htm' % (self.address, ),
auth=self.auth, data=data,
headers={'content-type': 'text/plain'})
if __name__ == '__main__':
from time import sleep
crtl = AnelPowerControl('crti-btp-sl3', auth=('admin', 'config'))
pprint(crtl.data)
print(crtl[1])
sleep(0.5)
crtl['PowerSupply 12V'].on()
# pprint(crtl.data)
sleep(0.5)
print(crtl['PowerSupply 12V'])
crtl['PowerSupply 12V'].off()
sleep(0.5)
print(crtl['PowerSupply 12V'])
crtl['PowerSupply 12V'].on()
sleep(0.5)
print(crtl['PowerSupply 12V'])
crtl['PowerSupply 12V'].off()
sleep(0.5)
print(crtl['PowerSupply 12V'])
|
AnelPowerControl
|
/AnelPowerControl-0.1.zip/AnelPowerControl-0.1/anel_power_control.py
|
anel_power_control.py
|
Anemone
=======
Anemone is an analysis monitor written in Python. The typical use case is monitoring of long
running programs on remote machines. Anemone lets the long running program (scientific analysis or
any other type of long running program) create reports which can be continuously updated throughout
the run of the analysis. Anemone will run in a separate thread and talk to interested listeners,
typically plotting programs on remote machines. These programs can get a list of reports and
continuously plot the evolution of the reported variables.
Anemone includes a graphical user interface (GUI) that can connect to a running analysis and show
a list of monitors. The monitors can be plotted and the plots will update automatically.
Currently only 2D plot reports are supported, but more advanced reports are also planned.
Usage
-----
An example of using Anemone in an application is included in the repository. The analysis program
will publish the monitors on a TCP port, via local IPC over a Unix domain socket, or any other
transport supported by the currently installed version of ZeroMQ.
If the example program communicates over IPC via the file ``my_com`` then you can connect to it
with the GUI and inspect the progress of the analysis live by running::
python -m anemone wxgui ipc://my_com
You can connect multiple GUI programs and connect/reconnect as often as you like. You can even
connect before the analysis program has started. The GUI will try to connect every second until the
analysis program responds.
Security
--------
The current proof-of-concept version of Anemone communicates over ZeroMQ via Python pickles, which
are NOT secure! Do not use Anemone on an open network without replacing the pickles with some other
serialization format!
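One way to do that replacement, sketched here as a hypothetical example rather than anything Anemone ships: the report tuples map cleanly onto JSON, and ``json.loads`` never executes code the way unpickling untrusted bytes can:

```python
import json

def encode_report(name, type_, data):
    # Serialize a (name, type, data) report as JSON text instead of a pickle.
    return json.dumps([name, type_, list(data)])

def decode_report(text):
    # Deserialize JSON text back into a (name, type, data) tuple.
    name, type_, data = json.loads(text)
    return name, type_, tuple(data)

wire = encode_report('residual', '2dplot', (1, 0.25))
print(decode_report(wire))  # -> ('residual', '2dplot', (1, 0.25))
```

Swapping ``send_pyobj``/``recv_pyobj`` for ``send_string``/``recv_string`` plus such encoders would close the arbitrary-code-execution hole, at the cost of restricting report payloads to JSON-serializable types.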
Installation
------------
Anemone is a Python package. It has been tested with Python 2.7 on Ubuntu Linux and requires the
ZeroMQ Python bindings, wxPython and matplotlib to be available. No installation is required besides
making sure the ``anemone`` package is on the PYTHONPATH.
Version and stability
---------------------
The current version of Anemone is 0.01 and should be treated as a proof of concept and not as
production quality software.
Copyright and license
---------------------
Anemone is copyright Tormod Landet, 2014. Anemone is licensed under the Apache 2.0 license.
|
Anemone
|
/Anemone-0.0.1.tar.gz/Anemone-0.0.1/README.rst
|
README.rst
|
import zmq
import threading
from Queue import Queue, Empty
class Reporter(object):
def __init__(self, program_name, analysis_name):
"""
The only anemone class to use for the data generating program
The analysis name should be the name of the input file
or some other easily recognizable name such that a user of
the GUI inspector program understands that he or she has
connected to the right analysis
"""
self._program_name = program_name
self._analysis_name = analysis_name
self._queue = Queue()
def start(self, address):
self.server = Server(self._program_name, self._analysis_name, self._queue)
self.thread = threading.Thread(target=self.server.serve, args=(address,))
self.thread.daemon = True
self.thread.start()
def report_2dplot(self, report_name, x, y):
rep = (report_name, TYPE_2D_PLOT, (x, y))
self._queue.put(rep)
TYPE_2D_PLOT = '2dplot'
class Server(object):
def __init__(self, program_name, analysis_name, queue):
"""
Internal class to handle communication with the listening GUIs
"""
self.queue = queue
self.program_name = program_name
self.analysis_name = analysis_name
self.reports = {}
def serve(self, address):
self.zmq_context = zmq.Context()
self.zmq_socket = self.zmq_context.socket(zmq.REP)
self.zmq_socket.bind(address)
while True:
try:
item = self.queue.get(block=True, timeout=0.1)
self.handle_queue_item(item)
except Empty:
# No items waiting, do nothing
pass
try:
request = self.zmq_socket.recv_pyobj(flags=zmq.NOBLOCK)
self.handle_zmq_request(request)
except zmq.Again:
# No requests waiting, do nothing
pass
def handle_queue_item(self, item):
"""
Get new report data from the analysis thread through the queue and
append it to the reports we currently hold
"""
name, type, data = item
if name not in self.reports:
if type == TYPE_2D_PLOT:
self.reports[name] = (type, ([], []))
if type == TYPE_2D_PLOT:
self.reports[name][1][0].append(data[0])
self.reports[name][1][1].append(data[1])
def handle_zmq_request(self, request):
"""
Handle a request for information from the remote GUI
"""
print 'request:', request
# The request must be a tuple
if not isinstance(request, tuple):
self.zmq_socket.send_pyobj('ERROR: unknown command')
return
# The tuple must have at least one item
if len(request) < 1:
self.zmq_socket.send_pyobj('ERROR: unknown command')
return
cmd = request[0]
if cmd == 'get_analysis_info':
# Return tuple containing (program_name, analysis_name, num_reports)
response = (self.program_name, self.analysis_name, len(self.reports))
elif cmd == 'get_reports':
# Return list of (name, type) tuples
response = [(name, self.reports[name][0]) for name in self.reports]
elif cmd == 'get_report' and len(request) == 3:
# Return the data for the selected report
name, start_index = request[1:]
# Check that the requested report exists
if name not in self.reports:
self.zmq_socket.send_pyobj('ERROR: unknown report')
return
# Check that the start_index is an integer >= 0
if not isinstance(start_index, int) or start_index < 0:
self.zmq_socket.send_pyobj('ERROR: malformed start index')
return
type, data = self.reports[name]
if type == TYPE_2D_PLOT:
if len(data[0]) > start_index:
response = (data[0][start_index:], data[1][start_index:])
else:
response = ([], [])
else:
self.zmq_socket.send_pyobj('ERROR: unknown command')
return
self.zmq_socket.send_pyobj(response)
|
Anemone
|
/Anemone-0.0.1.tar.gz/Anemone-0.0.1/anemone/reporter.py
|
reporter.py
|
import sys, time
import zmq
import wx
import matplotlib
matplotlib.use('WxAgg')
from matplotlib.backends.backend_wxagg import FigureCanvasWxAgg as FigureCanvas
from matplotlib.backends.backend_wx import NavigationToolbar2Wx
from matplotlib.figure import Figure
MIN_TIME_BETWEEN_REQUESTS = 0.5 # seconds
class AnemoneWX(wx.Frame):
def __init__(self, address):
"""
A user interface for Anemone written using the WX GUI toolkit
"""
wx.Frame.__init__(self, None, size=(800,600), title='Anemone')
splitter = wx.SplitterWindow(self)
splitter.SetMinimumPaneSize(50)
splitter.SetSashGravity(1.0)
self.plot_panel = PlotPanel(splitter)
self.select_panel = wx.ListBox(splitter)
splitter.SplitVertically(self.plot_panel, self.select_panel, -200)
self.select_panel.Bind(wx.EVT_LISTBOX, self.on_select_report)
status = wx.StatusBar(self)
self.SetStatusBar(status)
self.SetStatusText('Connecting ...')
self.connected = False
# Connect to the remote analysis
self.connect(address)
# Update the current plot data when idle
self.Bind(wx.EVT_IDLE, lambda evt: self.get_monitor_data())
def connect(self, address):
"""
Setup the connection to the remote server
ZeroMQ connect() will almost always succeed, even if the remote server is not present; it will then
assume the server will come online to answer us at a later time. This is dealt with in .get_info()
"""
self.zmq_context = zmq.Context()
self.zmq_socket = self.zmq_context.socket(zmq.REQ)
self.zmq_socket.connect(address)
self.get_info()
self.last_request_time = 0
self.selected_monitor_name = None
self.selected_monitor_x = []
self.selected_monitor_y = []
def get_info(self, first_try=True):
"""
The first contact with the analysis program. Get some info
about the program and the list of available reports
"""
if first_try:
self.zmq_socket.send_pyobj(('get_analysis_info',))
try:
self.info = self.zmq_socket.recv_pyobj(flags=zmq.NOBLOCK)
except zmq.Again:
self.SetStatusText('Unable to connect, trying again in one second')
wx.CallLater(1000, self.get_info, first_try=False)
return
self.zmq_socket.send_pyobj(('get_reports',))
reports = self.zmq_socket.recv_pyobj()
self.report_infos = reports
for report_name, report_type in sorted(self.report_infos):
self.select_panel.Append('%s (%s)' % (report_name, report_type), report_name)
self.connected = True
self.SetStatusText('Connected to analysis "%s" running in %s' % (self.info[1], self.info[0]))
def on_select_report(self, event):
"""
The user has selected a new monitor to show
"""
report_name = event.GetClientObject()
self.selected_monitor_name = report_name
self.selected_monitor_x = []
self.selected_monitor_y = []
def get_monitor_data(self):
"""
The program is idle, lets use the time to update the currently selected monitor
with any new data that is available
"""
now = time.time()
if now - self.last_request_time < MIN_TIME_BETWEEN_REQUESTS:
return
self.last_request_time = now
if self.selected_monitor_name is None:
self.selected_monitor_x = []
self.selected_monitor_y = []
else:
self.zmq_socket.send_pyobj(('get_report', self.selected_monitor_name, len(self.selected_monitor_x)))
data = self.zmq_socket.recv_pyobj()
assert isinstance(data, tuple), 'Got %r, not tuple' % data
self.selected_monitor_x.extend(data[0])
self.selected_monitor_y.extend(data[1])
self.show_monitor()
def show_monitor(self):
"""
Show whatever is available of the currently selected monitor at the current time
"""
plt = self.plot_panel.plotter
plt.clear()
plt.plot(self.selected_monitor_x, self.selected_monitor_y)
class PlotPanel(wx.Panel):
def __init__(self, parent):
wx.Panel.__init__(self, parent)
self.SetBackgroundColour(wx.NamedColour("WHITE"))
self.figure = Figure()
self.axes = self.figure.add_subplot(111)
self.canvas = FigureCanvas(self, wx.ID_ANY, self.figure)
self.sizer = wx.BoxSizer(wx.VERTICAL)
self.sizer.Add(self.canvas, 1, wx.LEFT | wx.TOP | wx.GROW)
self.toolbar = NavigationToolbar2Wx(self.canvas)
self.toolbar.Realize()
self.toolbar.update()
self.sizer.Add(self.toolbar, 0, wx.LEFT | wx.EXPAND)
self.SetSizer(self.sizer)
self.plotter = WxMatplotlibProxy(self, self.axes, self.canvas)
def OnPaint(self, event):
self.canvas.draw()
wx.Panel.OnPaint(self, event)
class WxMatplotlibProxy(object):
def __init__(self, panel, axes, canvas):
"""
This proxy exists to automatically call canvas.draw() after all
plotting operations
"""
self._panel = panel
self._axes = axes
self._canvas = canvas
def _refresh_plot(self):
self._canvas.draw()
self._panel.Refresh()
def __getattr__(self, attr):
wx.CallAfter(self._refresh_plot)
return getattr(self._axes, attr)
def run_wxgui(address):
app = wx.App()
gui = AnemoneWX(address)
gui.Show()
app.MainLoop()
if __name__ == '__main__':
address = sys.argv[1]
run_wxgui(address)
|
Anemone
|
/Anemone-0.0.1.tar.gz/Anemone-0.0.1/anemone/gui_wx.py
|
gui_wx.py
|