# Bitstream compression
This algorithm is applied to a ready-to-write bitstream, after the data configuration checksum and optional padding have been computed, but before the line checksum.
The principle is to replace each series of `0x00` bytes with a dedicated value:
- a series of `8 * 0x00` is replaced by one value (called `key8Z` in the rest of
this document);
- a series of `4 * 0x00` is replaced by one value (called `key4Z` in the rest of
this document);
- a series of `2 * 0x00` is replaced by one value (called `key2Z` in the rest of
this document).
These values are stored in the header area (the line starting with `0x51`); the line starting with `0x10` must be updated too (bit 13).
## Optional padding
This algorithm is applied 8 bytes at a time, so if lines are not a multiple of 64 bits, a series of dummy bits (set to `1`) must be added.
## Select values to use
This step consists of searching for all byte values not used in the data/EBR configuration (it's more or less the creation of a histogram):
```python
lst = [0 for i in range(0, 256)]
for i in range(len(dataCfg)):
    line = dataCfg[i]
    for v in line:
        lst[v] += 1
unusedVal = [i for i, val in enumerate(lst) if val == 0]
```
- `key8Z` takes the value at index 0 (the smallest unused value)
- `key4Z` takes the value at index 1
- `key2Z` takes the value at index 2 (the largest of the three)

The line starting with `0x51` must be updated accordingly, and bit `13` of the line starting with `0x10` must be set.
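The histogram and key selection steps can be combined into one helper. This is just a restatement of the snippet above; `pick_keys` is an illustrative name, not part of any Apycula API:

```python
def pick_keys(dataCfg):
    """Pick key8Z/key4Z/key2Z as the three smallest byte values
    that never occur in the configuration data."""
    hist = [0] * 256
    for line in dataCfg:
        for v in line:
            hist[v] += 1
    unused = [i for i, n in enumerate(hist) if n == 0]
    return unused[0], unused[1], unused[2]
```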
## Conversion
This step is applied line by line, 8 bytes at a time.
The principle is more or less to sequentially find and replace series of `0x00`.
For example:
- `[0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00]` becomes `[key8Z]`
- `[0x00 0x00 0x00 0x00 0x00 0x00 0x00 0xFF]` becomes `[key4Z 0x00 0x00 0x00 0xFF]`, then `[key4Z key2Z 0x00 0xFF]`
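A minimal sketch of this greedy find/replace pass, with hypothetical key values (the real ones come from the unused-value search above):

```python
# Hypothetical key values; the real ones are chosen from unused byte values.
KEY8Z, KEY4Z, KEY2Z = 0xF0, 0xF1, 0xF2

def compress_chunk(chunk):
    """Compress one 8-byte chunk by greedily replacing runs of 0x00."""
    out = []
    i = 0
    while i < len(chunk):
        if chunk[i:i+8] == b'\x00' * 8:
            out.append(KEY8Z); i += 8
        elif chunk[i:i+4] == b'\x00' * 4:
            out.append(KEY4Z); i += 4
        elif chunk[i:i+2] == b'\x00' * 2:
            out.append(KEY2Z); i += 2
        else:
            out.append(chunk[i]); i += 1
    return bytes(out)
```

For the second example above, the seven zeros are consumed as one `key4Z`, one `key2Z`, and one literal `0x00`, matching the final form shown.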
# File structure
Three important types of vendor files are parsed, `.dat`, `.fse`, and `.tm`. Other files also exist, but these are not currently parsed.
# Wire names
The vendor files use wire IDs that map to the following names. A full mapping is found in `wirenames.py`.
* 0-31 LUT inputs
* 32-39 LUT outputs
* 40-47 DFF outputs
* 48-55 unknown, maybe MUX outputs
* 56-63 X0 tile-local wires
* 64-75 X1 one-hop wires, origin segments
* 76-107 X2 two-hop wires, origin segments
* 108-123 X8 eight-hop wires, origin segments
* 124-126 DFF clock wires
* 127-129 DFF reset wires
* 130-132 DFF clock-enable wires
* 133-140 MUX selection wires
* 141-148 X1 one-hop wires, destination segments
* 149-212 X2 two-hop wires, destination segments
* 213-244 X8 eight-hop wires, destination segments
* 245-260 X1 one-hop alias wires to SN/EW wires, going both ways
* 261-268 long wires (branches)
* 269-276 global wires (branches)
* 277-278 high and low constant wires
* 279-290 long wires (taps and spines)
* 291-294 global wires (taps)
* 295-302 DRAM input wires
* 303-308 ALU carry-in
* 309-314 ALU carry-out
The format for inter-tile wires is `{direction}{length}{number}{segment}`. So `W270` is a westward two-hop wire, number 7, segment 0 (the root). `W272` (segment 2) would be the same wire, two tiles to the west.
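This naming convention can be unpacked mechanically. The `parse_interhop` helper and its regex are illustrative (only lengths 1, 2, and 8 exist per the list above), not part of Apycula:

```python
import re

def parse_interhop(name):
    """Split an inter-tile wire name like 'W272' into
    (direction, hop length, wire number, segment)."""
    m = re.fullmatch(r"([NESW])([128])(\d)(\d)", name)
    direction, length, number, segment = m.groups()
    return direction, int(length), int(number), int(segment)
```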
# Data file
The `.dat` file seems to contain a lot of information related to PnR. It appears to be a C struct directly written to file, so not much structure is present. The parser for this is located in `dat19_h4x.py`. This format is not stable across IDE versions.
Some unknown areas remain in this file, a large part of which is expected to be related to chip packages.
The first thing of interest is the tile grid, which is stored in a 150x200 array, prefixed by the size of the actual FPGA and the location of the center tile (suspected root of the clock tree). There is a 32-bit tile type (overkill much?) and an 8-bit "used" value. Gowin employs binning, so some devices have half of their tiles "disabled" here. Of note is that this tile grid contains an extra ring of tiles between the IOB and CFU that is not present in the bitstream.
Then follow descriptions of some primitives and their inputs. The format for a thing Foo is that there is `NumFoos` describing how many Foos there are, `NumFooIns` describing the number of Foo inputs, followed by the list of `Foos` and the list of `FooIns`. These numbers are wire IDs that can be mapped to names with `wirenames.py`.
Then follows a description of various things about pins and banks that are not fully understood.
Then follows *another* tile grid, this time in ASCII and without the extra padding ring. It's not clear what the use is of each of these.
Then follows a quite interesting section, relating to hard IP blocks. Each tile has roughly the same set of muxes, but for these special tile types they map to different names. For example, what would be F6 in a normal tile is the IOB A output in IOB tiles.
Then follows a huge set of tables that mostly reproduce the inputs and outputs within a tile. It is not known how these tables are used compared to the ones at the start of the file. There seem to be small differences between them.
Finally, there are some more hard IP inputs and outputs listed.
# Fuse file
The `.fse` file seems to contain information related to bitstream generation, but can also be used for PnR because the information in this file seems to be more detailed than in the `.dat` file. This file is a more structured archive with various "files" containing data tables.
There is a 4 byte preamble at the top of the file, followed by a number of files, terminated by a stop byte. Each file consists of a 4 byte tile type, a 4 byte width and height of the tile, and a 4 byte number of tables. Each table has a 4 byte type and length, with some having a width as well. Tables are either 2 byte or 4 byte numbers.
The first file is the header, it has a zero width and height with `grid`, `fuse`, and `logicinfo` tables.
The `grid` table is another tile grid, where the tile types map to other files in the archive. It is used to find the actual connections, wires, and bits for a particular tile. It's recommended to use this grid over the less precise one in the `.dat` file.
The `fuse` table is an important one. Many other tables contain indices into this table. The primary index is the fuse number, the secondary index is the tile type. The value in the table is 10000 or a decimal number in the form of `YYXX` representing the bit location within a tile corresponding to this fuse. Yea, you read that correctly, they stuff the bit location into a single number using decimal digits.
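Decoding that encoding is a one-liner; `decode_fuse` is a hypothetical helper name for illustration:

```python
def decode_fuse(entry):
    """Decode a `fuse` table entry: 10000 marks an unused fuse, otherwise
    the decimal digits encode YYXX, the bit row/column within the tile."""
    if entry == 10000:
        return None
    return divmod(entry, 100)  # (Y, X)
```

So an entry of `1203` places the fuse at row 12, column 3 of the tile.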
The `logicinfo` table describes the valid values of the parameters of the primitives (bels). Each row is a pair (parameter_id, value). This table can be used to generate fuses that program the specified parameter value. The point is that the first two fields of the `shortval` and `longval` tables are row numbers of this `logicinfo` table, meaning that the fuses listed in that row of the `shortval` table, for example, must be set to program the values of that parameter pair.
The other files all correspond to a specific tile type, and have a width and height. Note that IOB, DSP and BRAM tiles are slightly bigger than CFU tiles. Some important tables in these tile files follow.
The `wire` tables contain the pips of a tile. It is of the format `[src, dest, *fuses]`, where an unused fuse is `-1`. The wire IDs can be mapped to names with `wirenames.py`. Some rows have a negative source wire, which seems to indicate that this fuse should be set to zero. These negative fuses contain the "default" state of a pip. Some rows also have wire IDs that fall outside the valid wire range; it is expected this is some sort of flag, but the meaning is not known. Table 2 contains the main routing; the use of the other wire tables is not known.
`shortval` and `longval` describe bel features that can be configured. A short value has 2 "features" and a long value has 16 "features". Unused features are zero. Some known tables:
* 5: LUT bits, [LUT, bit, fuse]
* 23-24: IOB configuration, meaning unknown, fuzzer output used
* 25-27: DFF bits, meaning unknown, fuzzer output used
* 37: Bank enable, meaning unknown, maybe logic levels
The `const` table just contains some fuses that are always set; their meaning is unknown.
The meaning of `wiresearch` and `alonenode` tables is not known.
# Timing file
The `.tm` file contains timing info for the FPGA. This format is again more or less a C struct mapped to a file; however, the format is rather simple.
The file is split into several timing classes. Within a timing class there are several large subsections for things like LUTs, DFFs, or routing. Within these sections are lists of items, like the delay between two ports or some setup/hold time. Each item is expressed as 4 floating point numbers.
The reason there are 4 numbers is because NMOS transistors are more effective than PMOS transistors of the same size. An NMOS can pull down a wire faster than a PMOS can pull it up. So if there is an input and an output, that means 4 possible transitions.
Since we can assume the falling edge is faster, we can look at items like the following: for the LUT, the first and third numbers are lower and identical, while for the DFF, the first two numbers are lower than the last two.
Since the first item relates to "data-in setup time", and there is no combinational path through a DFF, it follows that these numbers can only relate to the input rising/falling edge. For the LUT the output clearly matters a great deal, and by elimination it seems to *only* take output times into account.
```
'di_clksetpos': [0.36000001430511475, 0.36000001430511475, 0.5759999752044678, 0.5759999752044678]
'a_f': [1.2081600427627563, 1.2542400360107422, 1.2081600427627563, 1.2542400360107422]
```
So this means that the numbers represent
1. input falling, output falling
2. input falling, output rising
3. input rising, output falling
4. input rising, output rising
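An item's 4-tuple can then be indexed by edge pair according to this enumeration; the `delay` helper is illustrative, not part of the parser:

```python
def delay(item, in_rising, out_rising):
    """Pick the delay for a given (input edge, output edge) pair from a
    4-number timing item, per the enumeration above (False = falling)."""
    return item[2 * int(in_rising) + int(out_rising)]

# the 'di_clksetpos' item from the example above
di = [0.36000001430511475, 0.36000001430511475,
      0.5759999752044678, 0.5759999752044678]
```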
# Device grouping
You may notice that in Apicula some devices that appear to be part of the same series have a separate chipdb,
while others use the exact same chipdb while having different modifiers.
Why is that?
Gowin produces a lot of "system in package" devices. An FPGA die with some wirebonded peripherals such as SDRAM or ARM cores.
The FPGA die in these devices is the same as in their more spartan counterparts, so we only consider the pinout different.
In a few cases the vendor files do appear to be different for unknown reasons.
The guiding principle has been, until proven wrong, to go by the `md5sum` of the vendor files.
For example, it appears that the `GW1NS-2` is different from the `GW1N-2`, but `GW1N-9` and `GW1NR-9` are the same.
```
$ md5sum bin/gowin1.9.8/IDE/share/device/*/*.fse | sort
1577ac9e268488ef0d13e545e4e1bbfa bin/gowin1.9.8/IDE/share/device/GW1NS-4C/GW1NS-4C.fse
1577ac9e268488ef0d13e545e4e1bbfa bin/gowin1.9.8/IDE/share/device/GW1NS-4/GW1NS-4.fse
1577ac9e268488ef0d13e545e4e1bbfa bin/gowin1.9.8/IDE/share/device/GW1NSER-4C/GW1NSER-4C.fse
1577ac9e268488ef0d13e545e4e1bbfa bin/gowin1.9.8/IDE/share/device/GW1NSR-4C/GW1NSR-4C.fse
1577ac9e268488ef0d13e545e4e1bbfa bin/gowin1.9.8/IDE/share/device/GW1NSR-4/GW1NSR-4.fse
1bbc0c22a6ad6a8537b5e62e7f38dc55 bin/gowin1.9.8/IDE/share/device/GW1N-9C/GW1N-9C.fse
1bbc0c22a6ad6a8537b5e62e7f38dc55 bin/gowin1.9.8/IDE/share/device/GW1NR-9C/GW1NR-9C.fse
23e8cbadbed5245d4591000a39e8a714 bin/gowin1.9.8/IDE/share/device/GW1N-4B/GW1N-4B.fse
23e8cbadbed5245d4591000a39e8a714 bin/gowin1.9.8/IDE/share/device/GW1N-4D/GW1N-4D.fse
23e8cbadbed5245d4591000a39e8a714 bin/gowin1.9.8/IDE/share/device/GW1N-4/GW1N-4.fse
23e8cbadbed5245d4591000a39e8a714 bin/gowin1.9.8/IDE/share/device/GW1NR-4B/GW1NR-4B.fse
23e8cbadbed5245d4591000a39e8a714 bin/gowin1.9.8/IDE/share/device/GW1NR-4D/GW1NR-4D.fse
23e8cbadbed5245d4591000a39e8a714 bin/gowin1.9.8/IDE/share/device/GW1NR-4/GW1NR-4.fse
23e8cbadbed5245d4591000a39e8a714 bin/gowin1.9.8/IDE/share/device/GW1NRF-4B/GW1NRF-4B.fse
367aacd3777db1ae2f82d3d244ef9b46 bin/gowin1.9.8/IDE/share/device/GW1N-1/GW1N-1.fse
367aacd3777db1ae2f82d3d244ef9b46 bin/gowin1.9.8/IDE/share/device/GW1NR-1/GW1NR-1.fse
4416a5e0226fba7036d1a4a47bd4e3ef bin/gowin1.9.8/IDE/share/device/GW2AN-18X/GW2AN-18X.fse
4416a5e0226fba7036d1a4a47bd4e3ef bin/gowin1.9.8/IDE/share/device/GW2AN-4X/GW2AN-4X.fse
4416a5e0226fba7036d1a4a47bd4e3ef bin/gowin1.9.8/IDE/share/device/GW2AN-9X/GW2AN-9X.fse
4e23e1797693721610674e964cd550f1 bin/gowin1.9.8/IDE/share/device/GW1N-1P5B/GW1N-1P5B.fse
4e23e1797693721610674e964cd550f1 bin/gowin1.9.8/IDE/share/device/GW1N-1P5/GW1N-1P5.fse
4e23e1797693721610674e964cd550f1 bin/gowin1.9.8/IDE/share/device/GW1N-2B/GW1N-2B.fse
4e23e1797693721610674e964cd550f1 bin/gowin1.9.8/IDE/share/device/GW1N-2/GW1N-2.fse
4e23e1797693721610674e964cd550f1 bin/gowin1.9.8/IDE/share/device/GW1NR-2B/GW1NR-2B.fse
4e23e1797693721610674e964cd550f1 bin/gowin1.9.8/IDE/share/device/GW1NR-2/GW1NR-2.fse
4e23e1797693721610674e964cd550f1 bin/gowin1.9.8/IDE/share/device/GW1NZR-2/GW1NZR-2.fse
55cfc48170a50c08f30b0b46e773669d bin/gowin1.9.8/IDE/share/device/GW2A-18C/GW2A-18C.fse
55cfc48170a50c08f30b0b46e773669d bin/gowin1.9.8/IDE/share/device/GW2ANR-18C/GW2ANR-18C.fse
55cfc48170a50c08f30b0b46e773669d bin/gowin1.9.8/IDE/share/device/GW2AR-18C/GW2AR-18C.fse
5e1dcd76d79c23e800834c38aba9c018 bin/gowin1.9.8/IDE/share/device/GW1NS-2C/GW1NS-2C.fse
5e1dcd76d79c23e800834c38aba9c018 bin/gowin1.9.8/IDE/share/device/GW1NS-2/GW1NS-2.fse
5e1dcd76d79c23e800834c38aba9c018 bin/gowin1.9.8/IDE/share/device/GW1NSE-2C/GW1NSE-2C.fse
5e1dcd76d79c23e800834c38aba9c018 bin/gowin1.9.8/IDE/share/device/GW1NSR-2C/GW1NSR-2C.fse
5e1dcd76d79c23e800834c38aba9c018 bin/gowin1.9.8/IDE/share/device/GW1NSR-2/GW1NSR-2.fse
80cc685196264afd8358274228e3a0a3 bin/gowin1.9.8/IDE/share/device/GW2A-55C/GW2A-55C.fse
80cc685196264afd8358274228e3a0a3 bin/gowin1.9.8/IDE/share/device/GW2A-55/GW2A-55.fse
80cc685196264afd8358274228e3a0a3 bin/gowin1.9.8/IDE/share/device/GW2AN-55C/GW2AN-55C.fse
9f5ef5e4a8530ea5bbda62cd746e675a bin/gowin1.9.8/IDE/share/device/GW2A-18/GW2A-18.fse
9f5ef5e4a8530ea5bbda62cd746e675a bin/gowin1.9.8/IDE/share/device/GW2AR-18/GW2AR-18.fse
a55729d464a7d4c19b05be15768a9936 bin/gowin1.9.8/IDE/share/device/GW1N-1S/GW1N-1S.fse
aa903125ff6270dc8a4315c91b6f2dac bin/gowin1.9.8/IDE/share/device/GW1N-9/GW1N-9.fse
aa903125ff6270dc8a4315c91b6f2dac bin/gowin1.9.8/IDE/share/device/GW1NR-9/GW1NR-9.fse
b8ee256646453c2f203707fdd7a6b6b7 bin/gowin1.9.8/IDE/share/device/GW1NZ-1C/GW1NZ-1C.fse
b8ee256646453c2f203707fdd7a6b6b7 bin/gowin1.9.8/IDE/share/device/GW1NZ-1/GW1NZ-1.fse
```
# ALU
In Gowin FPGA logic tiles, it is possible to configure a CLS (slice of two LUTs) in ALU mode. In this mode a hard logic carry chain is used in combination with programmable logic to implement fast arithmetic.
ALU mode is available on the 3 CLS in each tile that have a flip-flop, and configured with a single bit. This selects the `F` LUT output to be passed through the ALU. Hard carry logic runs from west to east from LUT0 to LUT5 across tiles.

The ALU hard logic takes the shape of a full adder, where the carry chain is fully hard logic, and the first `XOR` gate is formed by the `LUT4`. But the lower 4 bits are shared with an additional `LUT2` which mostly acts as an input selector between `A` and `B` in front of the carry `AND`. The `C` input is mostly wired to `1` so the main `LUT4` doesn't use the lower bits, but in one case `C=0` and `D=1` to much the same effect.

On the synthesis side, the ALU primitive supports 10 modes (numbered 0-9), which correspond to a bit pattern stored in the LUT, as well as which ports are used, and which are set to constant values.
```
add(0) 0011000011001100 A:- B:I0 C:1 D:I1 CIN:0
sub(1) 1010000001011010 A:I0 B:- C:1 D:I1 CIN:1
addsub(2) 0110000010011010 A:I0 B:I1 C:1 D:I3 CIN:??
ne(3) 1001000010011111 A:I0 B:I1 C:1 D:- CIN:??
ge(4) 1001000010011010 A:I0 B:I1 C:1 D:- CIN:??
le(5) 1001000010011010 A:I1 B:I0 C:1 D:- CIN:??
cup(6) 1010000010100000 A:I0 B:I1 C:1 D:- CIN:??
cdn(7) 0101000001011111 A:I0 B:I1 C:1 D:- CIN:??
cupcdn(8) 1010000001011010 A:I0 B:I1 C:1 D:I3 CIN:??
mul(9) 0111100010001000 A:I0 B:I1 C:0 D:1 CIN:??
```
These values should be understood as follows: the lowest 4 bits are shared between the `LUT4` and the carry "selector" `LUT2`, so in the case of `ADD` they are `1100`, selecting `B`. In almost all cases `C:1`, which means the output of the `LUT4` is controlled by `AAAA0000AAAA0000`, avoiding the lower bits and explaining the zeros in most modes. In the case of `ADD` the `LUT4` function is therefore `00111100`, which is `B XOR D`. In the case of `MUL`, `C:0` and `D:1`, so indeed only `0000AAAA00000000` is used for the `LUT4`, having the function of `AND`, like the lower `LUT2`. I have confirmed the functionality is identical with the other clusters set to `0000`. The full list of implemented logic functions:
```
FN LUT4 LUT2
ADD(0) B XOR D B
SUB(1) !A XOR D A
ADDSUB(2) A XOR B XOR !D A
NE(3) A XOR B 1
GE/LE(4-5) A XOR B A
CUP(6) A 0
CDN(7) !A 1
CUPCDN(8) A XNOR D A
MUL(9) A AND B A AND B
```
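The table above can be checked mechanically against the bit patterns. This sketch assumes the INIT string is written MSB first with `A` as the least significant address bit; that ordering is inferred from the patterns themselves, not confirmed by vendor documentation:

```python
def lut4(init, a, b, c, d):
    """Evaluate a 16-character INIT bit string at inputs A..D, taking the
    leftmost character as the entry for DCBA = 1111 (assumed ordering)."""
    n = (d << 3) | (c << 2) | (b << 1) | a
    return int(init[15 - n])

ADD = "0011000011001100"  # the add(0) pattern from the table above
# With C tied high, the LUT4 half computes B XOR D:
assert all(lut4(ADD, a, b, 1, d) == (b ^ d)
           for a in (0, 1) for b in (0, 1) for d in (0, 1))
# The lowest four entries (C=0, D=0) form the carry-selector LUT2, here B:
assert all(lut4(ADD, a, b, 0, 0) == b
           for a in (0, 1) for b in (0, 1))
```

The same check reproduces the other rows, e.g. `!A XOR D` for the sub(1) pattern.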
It seems counterintuitive that the `LUT2` has an asymmetric function, but it all works out in the end. See for yourself.
Armed with this knowledge, we can puzzle out what the total function of each `ALU` mode is. Most of them have obvious names with well known truth tables, but `MUL` is a strange one. When we find its total function analytically or experimentally, we find that it computes `A*B+CI`, in other words, an `AND` into a half adder. Our theory is that this could be used in implementing a Wallace tree or other long multiplication algorithm.
As a challenge, I decided to try a full adder that computes `A*B+D+CI`. To implement this the `LUT4` function should be `(A AND B) XOR D` and the `LUT2` function `A AND B`, giving the total data `0111000010001000` which indeed works as expected for `C=1`. This is compatible with the Gowin `MUL` if it were configured with `C=1` and `D=I3` instead of `C=0` and `D=1`.
import re
import os
import tempfile
import subprocess
from collections import deque
from itertools import chain, count
from random import shuffle, seed
from warnings import warn
from math import factorial
import numpy as np
from multiprocessing.dummy import Pool
import codegen
import bslib
import pindef
import sys, pdb
# resource sets
# CFU=CLU?
# ENABLE: only one fuzzer switches things on and off at the same time
# IOB, CLU_LUT, CLU_MUX, CLU_DFF, CFU, BRAM, DSP, ENABLE
gowinhome = os.getenv("GOWINHOME")
if not gowinhome:
    raise Exception("GOWINHOME not set")
def np_to_vector(array):
    return "{}'b{}".format(
        len(array),
        ''.join(str(int(n)) for n in array))

def popcnt(x):
    res = 0
    while x:
        if x & 1:
            res += 1
        x >>= 1
    return res

def get_cb_size(n):
    return factorial(n) // factorial(n // 2) // factorial((n + 1) // 2)

def gen_cb(n):
    res = []
    for i in range(1, 2**n):
        if popcnt(i) == (n + 1) // 2:
            res.append(i)
    assert len(res) == get_cb_size(n)
    return res

def get_codes(n):
    bits = n.bit_length()
    while get_cb_size(bits) < n:
        bits += 1
    cb = gen_cb(bits)
    return bits, cb[:n]

def configbits(bitlen, codes):
    """
    Given n bits of configuration data,
    generate uniquely identifying
    bit patterns for each configuration bit.
    This makes each configuration bit uniquely
    identifiable in the bitstream.
    """
    codelen = len(codes)
    byteview = np.array(codes, dtype=np.uint32).view(np.uint8)
    bits = np.unpackbits(byteview, bitorder='little')
    return bits.reshape(codelen, 32)[:, :bitlen].T

def configcodes(stack):
    "Turn bit arrays back into numbers"
    bytestack = np.packbits(stack, axis=0, bitorder='little').astype(np.uint32)
    sequences = np.zeros(bytestack.shape[1:], dtype=np.uint32)
    for i in range(bytestack.shape[0]):
        sequences += bytestack[i] << (i*8)
    return sequences

def find_bits(stack):
    sequences = configcodes(stack)
    indices = np.where((sequences > 0) & (sequences < sequences.max()))
    return indices, sequences[indices]
class Fuzzer:
    # a set of resources used by this fuzzer
    resources = set()
    # added to name to avoid conflicts
    prefix = ""
    # values higher than this will trigger a warning
    max_std = 20
    # bits of side-effects
    se_bits = 0
    # list of side-effect identifiers
    se_loc = []

    @property
    def cfg_bits(self):
        return len(self.locations)*self.loc_bits

    @property
    def se_bits(self):
        # this is dumb and potentially slow
        return self.side_effects(np.zeros((0, self.cfg_bits), dtype=np.uint8)).shape[1]

    def location_to_name(self, location):
        return self.prefix + re.sub(r"\[([0-4AB])\]", r"_\1", location)

    def location_chunks(self, bits):
        return zip(self.locations, bslib.chunks(bits, self.loc_bits))

    def primitives(self, mod, bits):
        "Generate verilog for this fuzzer"
        raise NotImplementedError

    def constraints(self, constr, bits):
        "Generate cst lines for this fuzzer"
        raise NotImplementedError

    def side_effects(self, bits):
        """
        Returns a list of codes that
        don't map 1-1 to bitstream bits,
        e.g. OR/AND of several other bits.
        """
        return np.zeros((1, 0))

    def side_effect_cfg(self):
        """
        For each side-effect, return a set of
        config bits that triggers
        the corresponding side-effect.
        """
        return np.zeros((0, self.cfg_bits))

    def check(self, bits):
        "Perform some basic checks on the bitstream bits"
        if len(self.locations)*self.loc_bits != len(bits):
            warn("{} clusters expected, but {} clusters found".format(
                len(self.locations)*self.loc_bits,
                len(bits)))
        for loc, b in self.location_chunks(bits):
            a = np.array(b)
            std = np.std(a, axis=1)
            if np.any(std > self.max_std):
                warn("High deviation in location {}".format(loc))

    def report(self, bits):
        """Generate a report of the bitstream locations
        corresponding to the provided config bits"""
        for loc, b in self.location_chunks(bits):
            print(self.__class__.__name__, loc, b)

    def report_side_effects(self, bits):
        """Generate a report of the bitstream locations
        corresponding to side-effect bits"""
        for se, b in zip(self.se_loc, bits):
            print(self.__class__.__name__, "se", se, b)
class CluFuzzer(Fuzzer):
    scope = "CLU"
    ncls = 4  # 3 for REG

    def __init__(self, rows, cols, exclude):
        self.locations = []
        if self.scope == "CLU":
            for row in range(2, rows):
                if row not in exclude:
                    for col in range(2, cols):
                        self.locations.append("R{}C{}".format(row, col))
        elif self.scope == "CLS":
            for row in range(2, rows):
                if row not in exclude:
                    for col in range(2, cols):
                        for cls in range(self.ncls):
                            self.locations.append("R{}C{}[{}]".format(row, col, cls))
        else:
            for row in range(2, rows):
                if row not in exclude:
                    for col in range(2, cols):
                        for cls in range(self.ncls):
                            for lut in ["A", "B"]:
                                self.locations.append("R{}C{}[{}][{}]".format(row, col, cls, lut))
        shuffle(self.locations)

    def constraints(self, constr, bits):
        "Generate cst lines for this fuzzer"
        for loc in self.locations:
            name = self.location_to_name(loc)
            constr.cells[name] = loc
class Lut4BitsFuzzer(CluFuzzer):
    """
    This fuzzer finds the lookup table bits of a LUT4
    """
    resources = {"CLU_LUT"}
    loc_bits = 16
    scope = "LUT"
    prefix = "LUT"

    def primitives(self, mod, bits):
        "Generate verilog for LUT4s"
        for location, bits in self.location_chunks(bits):
            name = self.location_to_name(location)
            lut = codegen.Primitive("LUT4", name)
            lut.params["INIT"] = np_to_vector(1^bits)  # inverted
            lut.portmap['F'] = name+"_F"
            lut.portmap['I0'] = name+"_I0"
            lut.portmap['I1'] = name+"_I1"
            lut.portmap['I2'] = name+"_I2"
            lut.portmap['I3'] = name+"_I3"
            mod.wires.update(lut.portmap.values())
            mod.primitives[name] = lut
class DffFuzzer(CluFuzzer):
    """
    This fuzzer finds the bits for a DFF,
    but includes an empty LUT because otherwise
    a pass-through LUT is created.
    """
    resources = {"CLU_LUT", "CLU_DFF"}
    loc_bits = 1
    scope = "CLS"
    ncls = 3  # CLS 3 has no DFF

    def primitives(self, mod, bits):
        "Generate verilog for a LUT4 and DFF"
        for location, bits in self.location_chunks(bits):
            if bits[0]:
                name = self.location_to_name(location)
                location_a = location+"[A]_LUT"
                name_a_lut = self.location_to_name(location_a)
                lut = codegen.Primitive("LUT4", name_a_lut)
                lut.params["INIT"] = "16'hffff"
                lut.portmap['F'] = name_a_lut+"_F"
                lut.portmap['I0'] = name_a_lut+"_I0"
                lut.portmap['I1'] = name_a_lut+"_I1"
                lut.portmap['I2'] = name_a_lut+"_I2"
                lut.portmap['I3'] = name_a_lut+"_I3"
                mod.wires.update(lut.portmap.values())
                mod.primitives[name_a_lut] = lut

                location_b = location+"[B]_LUT"
                name_b_lut = self.location_to_name(location_b)
                lut = codegen.Primitive("LUT4", name_b_lut)
                lut.params["INIT"] = "16'hffff"
                lut.portmap['F'] = name_b_lut+"_F"
                lut.portmap['I0'] = name_b_lut+"_I0"
                lut.portmap['I1'] = name_b_lut+"_I1"
                lut.portmap['I2'] = name_b_lut+"_I2"
                lut.portmap['I3'] = name_b_lut+"_I3"
                mod.wires.update(lut.portmap.values())
                mod.primitives[name_b_lut] = lut

                location_a = location+"[A]_DFF"
                name_a_dff = self.location_to_name(location_a)
                dff = codegen.Primitive("DFF", name_a_dff)
                dff.portmap['CLK'] = name+"_CLK"  # share clk
                dff.portmap['D'] = name_a_lut+"_F"
                dff.portmap['Q'] = name_a_dff+"_Q"
                mod.wires.update(dff.portmap.values())
                mod.primitives[name_a_dff] = dff

                location_b = location+"[B]_DFF"
                name_b_dff = self.location_to_name(location_b)
                dff = codegen.Primitive("DFF", name_b_dff)
                dff.portmap['CLK'] = name+"_CLK"
                dff.portmap['D'] = name_b_lut+"_F"
                dff.portmap['Q'] = name_b_dff+"_Q"
                mod.wires.update(dff.portmap.values())
                mod.primitives[name_b_dff] = dff

    def constraints(self, constr, bits):
        for loc, bits in self.location_chunks(bits):
            if bits[0]:
                name = self.location_to_name(loc+"[A]_LUT")
                constr.cells[name] = loc
                name = self.location_to_name(loc+"[B]_LUT")
                constr.cells[name] = loc
                name = self.location_to_name(loc+"[A]_DFF")
                constr.cells[name] = loc
                name = self.location_to_name(loc+"[B]_DFF")
                constr.cells[name] = loc
class DffsrFuzzer(CluFuzzer):
    """
    This fuzzer finds the DFF bits that control
    1. clock polarity
    2. sync/async
    3. reset value
    It does not find
    4. unknown (DFF/DFFN)
    5. unknown (always 1)
    Layout:
    ..4
    2.5
    .31
    """
    resources = {"CLU_DFF"}
    loc_bits = 3
    scope = "CLS"
    ncls = 3  # CLS 3 has no DFF
    prefix = "DFF"
    # clkpol, sync/async, set/reset
    ffmap = {
        (0, 0, 0): ("DFFSE", "SET"),
        (0, 0, 1): ("DFFRE", "RESET"),
        (0, 1, 0): ("DFFPE", "PRESET"),
        (0, 1, 1): ("DFFCE", "CLEAR"),
        (1, 0, 0): ("DFFNSE", "SET"),
        (1, 0, 1): ("DFFNRE", "RESET"),
        (1, 1, 0): ("DFFNPE", "PRESET"),
        (1, 1, 1): ("DFFNCE", "CLEAR"),
    }

    def primitives(self, mod, bits):
        "Generate verilog for a DFF at this location"
        for location, bits in self.location_chunks(bits):
            prim, port = self.ffmap[tuple(bits)]
            name = self.location_to_name(location)
            dff = codegen.Primitive(prim, name)
            dff.portmap['CLK'] = name+"_CLK"
            dff.portmap['D'] = name+"_F"
            dff.portmap['Q'] = name+"_Q"
            dff.portmap['CE'] = name+"_CE"
            dff.portmap[port] = name+"_"+port
            mod.wires.update(dff.portmap.values())
            mod.primitives[name] = dff
class OneHopWireFuzzer(CluFuzzer):
    """
    This fuzzer finds wires to adjacent tiles
    """
    resources = {"CLU_LUT"}
    loc_bits = 4
    scope = "LUT"
    prefix = "LUT"

    def neighbours(self, location):
        for r, c in [(1, 0), (0, 1), (-1, 0), (0, -1)]:
            yield re.sub(
                r"R(\d+)C(\d+)",
                lambda m: "R{}C{}".format(
                    int(m[1])+r, int(m[2])+c),
                location)

    def primitives(self, mod, bits):
        "Generate verilog for LUT4s"
        for location, bits in self.location_chunks(bits):
            name = self.location_to_name(location)
            lut = codegen.Primitive("LUT4", name)
            lut.params["INIT"] = "16'h0000"
            lut.portmap['F'] = name+"_F"
            neigh = self.neighbours(location)
            for i, ne, bit in zip(count(), neigh, bits):
                if bit:
                    ne_name = self.location_to_name(ne)
                    lut.portmap['I{}'.format(i)] = ne_name+"_F"
                else:
                    lut.portmap['I{}'.format(i)] = name+"_I{}".format(i)
            mod.wires.update(lut.portmap.values())
            mod.primitives[name] = lut
class PinFuzzer(Fuzzer):
    def __init__(self, series, package):
        self.locations = []
        self.banks = pindef.get_pins(series, package)
        self.se_loc = self.banks.keys()
        for bank in self.banks.values():
            self.locations.extend(bank)
        shuffle(self.locations)
        self.bank_indices = {
            bank: [self.locations.index(pin)
                   for pin in pins]
            for bank, pins in self.banks.items()
        }

    def constraints(self, constr, bits):
        "Generate cst lines for this fuzzer"
        for loc in self.locations:
            name = "IOB{}".format(loc)
            constr.ports[name] = loc
class IobFuzzer(PinFuzzer):
    resources = {"IOB"}
    loc_bits = 1
    kindmap = {
        "IBUF": {"wires": ["O"], "inputs": ["I"]},
        "OBUF": {"wires": ["I"], "outputs": ["O"]},
        "TBUF": {"wires": ["I", "OEN"], "outputs": ["O"]},
        "IOBUF": {"wires": ["I", "O", "OEN"], "inouts": ["IO"]},
        #"TLVDS_IBUF": ["I", "IB", "O"],
        #"TLVDS_OBUF": ["I", "OB", "O"],
        #"TLVDS_TBUF": ["I", "OB", "O", "OEN"],
        #"TLVDS_IOBUF": ["I", "IO", "IOB", "O", "OEN"],
        #"MIPI_IBUF_HS": ["I", "IB", "OH"],
        #"MIPI_IBUF_LP": ["I", "IB", "OL", "OB"],
        #"MIPI_IBUF": ["I", "IB", "HSREN", "OEN", "OENB", "OH", "OB", "IO", "IOB"],
        #"MIPI_OBUF": ["MODESEL", "I", "IB", "O", "OB"],
        #"I3C_IOBUF": ["MODESEL", "I", "O", "IO"],
    }

    def __init__(self, kind, pins, exclude):
        super().__init__(pins, exclude)
        self.kind = kind
        self.ports = self.kindmap[kind]

    def primitives(self, mod, bits):
        "Generate verilog for an IOB at this location"
        for location, bits in self.location_chunks(bits):
            if bits[0]:
                name = "IOB{}".format(location)
                iob = codegen.Primitive(self.kind, name)
                for port in chain.from_iterable(self.ports.values()):
                    iob.portmap[port] = name+"_"+port
                for direction, wires in self.ports.items():
                    wnames = [name+"_"+w for w in wires]
                    getattr(mod, direction).update(wnames)
                mod.primitives[name] = iob

    def constraints(self, constr, bits):
        for loc, bits in self.location_chunks(bits):
            if bits[0]:
                name = "IOB{}".format(loc)
                constr.ports[name] = loc

    def side_effects(self, bits):
        "If any pin is turned on, the bank is also enabled"
        return np.array([
            np.any(bits[:, idc], axis=1)
            for bank, idc in self.bank_indices.items()
        ]).T

    def side_effect_cfg(self):
        "Turn each bank on separately"
        cfglist = []
        for bank, indices in self.bank_indices.items():
            cfg = np.zeros(self.cfg_bits, dtype=np.uint8)
            cfg[indices] = 1
            cfglist.append(cfg)
        return np.vstack(cfglist)
def run_pnr(fuzzers, bits):
    # TODO generalize/parameterize
    mod = codegen.Module()
    constr = codegen.Constraints()
    start = 0
    for fuzzer in fuzzers:
        cb = bits[start:start+fuzzer.cfg_bits]
        start += fuzzer.cfg_bits
        fuzzer.primitives(mod, cb)
        fuzzer.constraints(constr, cb)
    cfg = codegen.DeviceConfig({
        "JTAG regular_io": "false",
        "SSPI regular_io": "false",
        "MSPI regular_io": "false",
        "READY regular_io": "false",
        "DONE regular_io": "false",
        "RECONFIG_N regular_io": "false",
        "MODE regular_io": "false",
        "CRC_check": "true",
        "compress": "false",
        "encryption": "false",
        "security_bit_enable": "true",
        "bsram_init_fuse_print": "true",
        "download_speed": "250/100",
        "spi_flash_address": "0x00FFF000",
        "format": "txt",
        "background_programming": "false",
        "secure_mode": "false"})
    opt = codegen.PnrOptions([])
        #"sdf", "oc", "ibs", "posp", "o",
        #"warning_all", "timing", "reg_not_in_iob"])
    pnr = codegen.Pnr()
    pnr.device = "GW1NR-9-QFN88-6"
    pnr.partnumber = "GW1NR-LV9QN88C6/I5"
    with tempfile.TemporaryDirectory() as tmpdir:
        pnr.outdir = tmpdir
        with open(tmpdir+"/top.v", "w") as f:
            mod.write(f)
        pnr.netlist = tmpdir+"/top.v"
        with open(tmpdir+"/top.cst", "w") as f:
            constr.write(f)
        pnr.cst = tmpdir+"/top.cst"
        with open(tmpdir+"/device.cfg", "w") as f:
            cfg.write(f)
        pnr.cfg = tmpdir+"/device.cfg"
        with open(tmpdir+"/pnr.cfg", "w") as f:
            opt.write(f)
        pnr.opt = tmpdir+"/pnr.cfg"
        with open(tmpdir+"/run.tcl", "w") as f:
            pnr.write(f)
        subprocess.run([gowinhome+"/IDE/bin/gw_sh", tmpdir+"/run.tcl"])
        #print(tmpdir); input()
        try:
            return bslib.read_bitstream(tmpdir+"/impl/pnr/top.fs")[0]
        except FileNotFoundError:
            return None
def get_extra_bits(fuzzers, bits):
    "Extend bits with configurations that test side-effects"
    groups = []
    start = 0
    for fuzzer in fuzzers:
        cfg = fuzzer.side_effect_cfg()
        group = np.zeros((cfg.shape[0], bits.shape[1]), dtype=np.uint8)
        group[:, start:start+fuzzer.cfg_bits] = cfg
        start += fuzzer.cfg_bits
        groups.append(group)
    gstack = np.vstack(groups)
    nrofbits = gstack.shape[0]
    if nrofbits > 0:
        codelen, codes = get_codes(nrofbits)
        shuffle(codes)
        sebits = configbits(codelen, codes)
        rows = [np.sum(gstack[row == 1], axis=0) for row in sebits]
        rows.append(bits)
        return codelen, np.vstack(rows)
    else:
        return 0, bits
def get_extra_codes(fuzzers, bits):
"Get codes produced by fuzzer side-effects"
extra_bits = []
start = 0
for fuzzer in fuzzers:
cb = bits[:,start:start+fuzzer.cfg_bits]
start += fuzzer.cfg_bits
se = fuzzer.side_effects(cb)
if se.size:
extra_bits.append(se)
if extra_bits:
return configcodes(np.hstack(extra_bits))
else:
return []
def run_batch(fuzzers):
nrofbits = sum([f.cfg_bits for f in fuzzers])
codelen, codes = get_codes(nrofbits)
shuffle(codes)
bits = configbits(codelen, codes)
secodelen, bits = get_extra_bits(fuzzers, bits)
codes = configcodes(bits) # extended codes
extra_codes = get_extra_codes(fuzzers, bits)
if True:
p = Pool()
bitstreams = p.map(lambda cb: run_pnr(fuzzers, cb), bits)
stack = np.stack([b for b in bitstreams if b is not None])
np.savez_compressed("bitstreams.npz", *bitstreams)
else:
stack = np.stack(list(np.load("bitstreams.npz").values()), axis=0)
indices, sequences = find_bits(stack)
#debug image
bitmap = np.zeros(stack.shape[1:], dtype=np.uint8)
bitmap[indices] = 1
bslib.display("indices.png", bitmap)
seqloc = np.array([sequences, *indices]).T
c_set = set(codes)
ec_set = set(extra_codes)
for s, x, y in seqloc:
if s in c_set:
print("Valid sequence at: {}, {}".format(x, y))
continue
if s in ec_set:
print("Side effect at: {}, {}".format(x, y))
continue
raise ValueError("invalid sequence at: {}, {}".format(x, y))
mapping = {}
for seq, x, y in seqloc:
mapping.setdefault(seq, []).append((x, y))
codesq = deque(codes)
extra_codesq = deque(extra_codes)
for fuzzer in fuzzers:
bits = []
extra_bits = []
for _ in range(fuzzer.cfg_bits):
code = codesq.popleft()
bits.append(mapping[code])
for _ in range(fuzzer.se_bits):
code = extra_codesq.popleft()
extra_bits.append(mapping[code])
fuzzer.report(bits)
fuzzer.report_side_effects(extra_bits)
fuzzer.check(bits)
if __name__ == "__main__":
seed(0xdeadbeef)
fuzzers = [
#Lut4BitsFuzzer(28, 47, {10, 19, 28}),
#DffFuzzer(28, 47, {10, 19, 28}),
#DffsrFuzzer(28, 47, {10, 19, 28}),
OneHopWireFuzzer(28, 47, {10, 19, 28}),
#IobFuzzer("IBUF", "GW1NR-9", "QN881"),
]
run_batch(fuzzers)
|
Apycula
|
/Apycula-0.9.0a1.tar.gz/Apycula-0.9.0a1/legacy/fuzzer.py
|
fuzzer.py
|
import json
nodes = { 0: "A0", 1: "B0", 2: "C0", 3: "D0", 4: "A1", 5: "B1", 6: "C1", 7: "D1", 8: "A2", 9: "B2", 10: "C2", 11: "D2", 12: "A3", 13: "B3", 14: "C3",
15: "D3", 16: "A4", 17: "B4", 18: "C4", 19: "D4", 20: "A5", 21: "B5", 22: "C5", 23: "D5", 24: "A6", 25: "B6", 26: "C6", 27: "D6", 28: "A7", 29: "B7",
30: "C7", 31: "D7", 32: "F0", 33: "F1", 34: "F2", 35: "F3", 36: "F4", 37: "F5", 38: "F6", 39: "F7", 40: "Q0", 41: "Q1", 42: "Q2", 43: "Q3", 44: "Q4",
45: "Q5", 46: "Q6", 47: "Q7", 48: "OF0", 49: "OF1", 50: "OF2", 51: "OF3", 52: "OF4", 53: "OF5", 54: "OF6", 55: "OF7", 56: "X01", 57: "X02", 58: "X03",
59: "X04", 60: "X05", 61: "X06", 62: "X07", 63: "X08", 64: "N100", 65: "SN10", 66: "SN20", 67: "N130", 68: "S100", 69: "S130", 70: "E100", 71: "EW10",
72: "EW20", 73: "E130", 74: "W100", 75: "W130", 76: "N200", 77: "N210", 78: "N220", 79: "N230", 80: "N240", 81: "N250", 82: "N260", 83: "N270", 84: "S200",
85: "S210", 86: "S220", 87: "S230", 88: "S240", 89: "S250", 90: "S260", 91: "S270", 92: "E200", 93: "E210", 94: "E220", 95: "E230", 96: "E240", 97: "E250",
98: "E260", 99: "E270", 100: "W200", 101: "W210", 102: "W220", 103: "W230", 104: "W240", 105: "W250", 106: "W260", 107: "W270", 108: "N800", 109: "N810",
110: "N820", 111: "N830", 112: "S800", 113: "S810", 114: "S820", 115: "S830", 116: "E800", 117: "E810", 118: "E820", 119: "E830", 120: "W800", 121: "W810",
122: "W820", 123: "W830", 124: "CLK0", 125: "CLK1", 126: "CLK2", 127: "LSR0", 128: "LSR1", 129: "LSR2", 130: "CE0", 131: "CE1", 132: "CE2", 133: "SEL0",
134: "SEL1", 135: "SEL2", 136: "SEL3", 137: "SEL4", 138: "SEL5", 139: "SEL6", 140: "SEL7", 141: "N101", 142: "N131", 143: "S101", 144: "S131", 145: "E101", 146: "E131",
147: "W101", 148: "W131", 149: "N201", 150: "N211", 151: "N221", 152: "N231", 153: "N241", 154: "N251", 155: "N261", 156: "N271", 157: "S201", 158: "S211",
159: "S221", 160: "S231", 161: "S241", 162: "S251", 163: "S261", 164: "S271", 165: "E201", 166: "E211", 167: "E221", 168: "E231", 169: "E241", 170: "E251",
171: "E261", 172: "E271", 173: "W201", 174: "W211", 175: "W221", 176: "W231", 177: "W241", 178: "W251", 179: "W261", 180: "W271", 181: "N202", 182: "N212",
183: "N222", 184: "N232", 185: "N242", 186: "N252", 187: "N262", 188: "N272", 189: "S202", 190: "S212", 191: "S222", 192: "S232", 193: "S242", 194: "S252",
195: "S262", 196: "S272", 197: "E202", 198: "E212", 199: "E222", 200: "E232", 201: "E242", 202: "E252", 203: "E262", 204: "E272", 205: "W202", 206: "W212",
207: "W222", 208: "W232", 209: "W242", 210: "W252", 211: "W262", 212: "W272", 213: "N804", 214: "N814", 215: "N824", 216: "N834", 217: "S804", 218: "S814",
219: "S824", 220: "S834", 221: "E804", 222: "E814", 223: "E824", 224: "E834", 225: "W804", 226: "W814", 227: "W824", 228: "W834", 229: "N808", 230: "N818",
231: "N828", 232: "N838", 233: "S808", 234: "S818", 235: "S828", 236: "S838", 237: "E808", 238: "E818", 239: "E828", 240: "E838", 241: "W808", 242: "W818",
243: "W828", 244: "W838", 245: "E110", 246: "W110", 247: "E120", 248: "W120", 249: "S110", 250: "N110", 251: "S120", 252: "N120", 253: "E111", 254: "W111",
255: "E121", 256: "W121", 257: "S111", 258: "N111", 259: "S121", 260: "N121", 261: "LB01", 262: "LB11", 263: "LB21", 264: "LB31", 265: "LB41", 266: "LB51",
267: "LB61", 268: "LB71", 269: "GB00", 270: "GB10", 271: "GB20", 272: "GB30", 273: "GB40", 274: "GB50", 275: "GB60", 276: "GB70", 277: "VCC", 278: "VSS",
279: "LT00", 280: "LT10", 281: "LT20", 282: "LT30", 283: "LT02", 284: "LT13", 285: "LT01", 286: "LT04", 287: "LBO0", 288: "LBO1", 289: "SS00", 290: "SS40",
291: "GT00", 292: "GT10", 293: "GBO0", 294: "GBO1", 295: "DI0", 296: "DI1", 297: "DI2", 298: "DI3", 299: "DI4", 300: "DI5", 301: "DI6", 302: "DI7",
303: "CIN0", 304: "CIN1", 305: "CIN2", 306: "CIN3", 307: "CIN4", 308: "CIN5", 309: "COUT0", 310: "COUT1", 311: "COUT2", 312: "COUT3", 313: "COUT4", 314: "COUT5"}
with open('dat.json') as f:
d = json.load(f)
for x in ['X0', 'X1', 'X2', 'X8', 'X11', 'Lut', 'Clk', 'Lsr', 'Ce', 'Sel']:
print("Wires", x)
for p, i in zip(d[f'{x}s'], d[f'{x}Ins']):
print(nodes.get(p), [nodes.get(n) for n in i])
|
Apycula
|
/Apycula-0.9.0a1.tar.gz/Apycula-0.9.0a1/legacy/report.py
|
report.py
|
import sys
import os
from itertools import chain
import re
import code
import pickle
import importlib.resources
sys.path.append(os.path.join(sys.path[0], '..'))
from apycula import chipdb
from apycula.wirenames import wirenames
device = os.getenv("DEVICE")
if not device:
raise Exception("DEVICE not set")
with importlib.resources.open_binary("apycula", f"{device}.pickle") as f:
db = pickle.load(f)
timing_class = "C6/I5" # TODO parameterize
timing = db.timing[timing_class]
def addWire(row, col, wire):
gname = chipdb.wire2global(row, col, db, wire)
#print("wire", gname)
try:
ctx.addWire(name=gname, type=wire, y=row, x=col)
except AssertionError:
pass
#print("duplicate wire")
belre = re.compile(r"(IOB|LUT|DFF|BANK|CFG)(\w*)")
for row, rowdata in enumerate(db.grid, 1):
for col, tile in enumerate(rowdata, 1):
# add wires
wires = set(chain(
tile.pips.keys(),
tile.clock_pips.keys(),
*tile.pips.values(),
*tile.clock_pips.values(),
))
for wire in wires:
addWire(row, col, wire)
# add aliases
# create bels
#print(row, col, ttyp)
for name, bel in tile.bels.items():
typ, idx = belre.match(name).groups()
if typ == "DFF":
z = int(idx)
belname = f"R{row}C{col}_SLICE{z}"
clkname = f"R{row}C{col}_CLK{z//2}"
fname = f"R{row}C{col}_F{z}"
qname = f"R{row}C{col}_Q{z}"
#print("IOB", row, col, clkname, fname, qname)
ctx.addBel(name=belname, type="GENERIC_SLICE", loc=Loc(col, row, z), gb=False)
ctx.addBelInput(bel=belname, name="CLK", wire=clkname)
for k, n in [('A', 'I[0]'), ('B', 'I[1]'),
('C', 'I[2]'), ('D', 'I[3]')]:
inpname = f"R{row}C{col}_{k}{z}"
ctx.addBelInput(bel=belname, name=n, wire=inpname)
ctx.addBelOutput(bel=belname, name="Q", wire=qname)
ctx.addBelOutput(bel=belname, name="F", wire=fname)
elif typ == "IOB":
z = ord(idx)-ord('A')
belname = f"R{row}C{col}_IOB{idx}"
inp = bel.portmap['I']
outp = bel.portmap['O']
oe = bel.portmap['OE']
iname = f"R{row}C{col}_{inp}"
oname = f"R{row}C{col}_{outp}"
oename = f"R{row}C{col}_{oe}"
#print("IOB", row, col, iname, oname, oename)
ctx.addBel(name=belname, type="GENERIC_IOB", loc=Loc(col, row, z), gb=False)
ctx.addBelInput(bel=belname, name="I", wire=iname)
ctx.addBelInput(bel=belname, name="EN", wire=oename)
ctx.addBelOutput(bel=belname, name="O", wire=oname)
wlens = {0: 'X0', 1: 'FX1', 2: 'X2', 8: 'X8'}
def wiredelay(wire):
m = re.match(r"[NESWX]+([0128])", wire)
if m:
wlen = int(m.groups()[0])
name = wlens.get(wlen)
return ctx.getDelayFromNS(max(timing['wire'][name]))
elif wire.startswith("UNK"):
return ctx.getDelayFromNS(max(timing['glbsrc']['PIO_CENT_PCLK']))
elif wire.startswith("SPINE"):
return ctx.getDelayFromNS(max(timing['glbsrc']['CENT_SPINE_PCLK']))
elif wire.startswith("GT"):
return ctx.getDelayFromNS(max(timing['glbsrc']['SPINE_TAP_PCLK']))
elif wire.startswith("GBO"):
return ctx.getDelayFromNS(max(timing['glbsrc']['TAP_BRANCH_PCLK']))
elif wire.startswith("GB"):
return ctx.getDelayFromNS(max(timing['glbsrc']['BRANCH_PCLK']))
else: # no known delay
return ctx.getDelayFromNS(0)
def addPip(row, col, srcname, destname):
gsrcname = chipdb.wire2global(row, col, db, srcname)
gdestname = chipdb.wire2global(row, col, db, destname)
pipname = f"R{row}C{col}_{srcname}_{destname}"
#print("pip", pipname, srcname, gsrcname, destname, gdestname)
try:
# delay is crude fudge from vendor critical path
ctx.addPip(
name=pipname, type=destname, srcWire=gsrcname, dstWire=gdestname,
delay=wiredelay(destname), loc=Loc(col, row, 0))
except IndexError:
pass
#print("Wire not found", gsrcname, gdestname)
except AssertionError:
pass
#print("Wire already exists", gsrcname, gdestname)
def addAlias(row, col, srcname, destname):
gsrcname = chipdb.wire2global(row, col, db, srcname)
gdestname = chipdb.wire2global(row, col, db, destname)
pipname = f"R{row}C{col}_{srcname}_{destname}"
#print("alias", pipname)
ctx.addPip(
name=pipname+"_ALIAS", type=destname, srcWire=gsrcname, dstWire=gdestname,
delay=ctx.getDelayFromNS(0), loc=Loc(col, row, 0))
for row, rowdata in enumerate(db.grid, 1):
for col, tile in enumerate(rowdata, 1):
for dest, srcs in tile.pips.items():
for src in srcs.keys():
addPip(row, col, src, dest)
for dest, srcs in tile.clock_pips.items():
for src in srcs.keys():
addPip(row, col, src, dest)
for dest, src in tile.aliases.items():
addAlias(row, col, src, dest)
for (row, col, pip), (srow, scol, spip) in db.aliases.items():
dest = f"R{row+1}C{col+1}_{pip}"
src = f"R{srow+1}C{scol+1}_{spip}"
name = f"{src}_{dest}_ALIAS"
ctx.addPip(
name=name, type=dest, srcWire=src, dstWire=dest,
delay=ctx.getDelayFromNS(0), loc=Loc(row+1, col+1, 0))
# too low numbers will result in slow routing iterations
# too high numbers will result in more iterations
ctx.setDelayScaling(0.1, 0.7)
#code.interact(local=locals())
|
Apycula
|
/Apycula-0.9.0a1.tar.gz/Apycula-0.9.0a1/legacy/generic/simple.py
|
simple.py
|
from write_fasm import *
import re
from apycula import codegen
# Need to tell FASM generator how to write parameters
# (celltype, parameter) -> ParameterConfig
param_map = {
("GENERIC_SLICE", "K"): ParameterConfig(write=False),
("GENERIC_SLICE", "INIT"): ParameterConfig(write=True, numeric=True, width=2**4),
("GENERIC_SLICE", "FF_USED"): ParameterConfig(write=True, numeric=True, width=1),
("GENERIC_IOB", "INPUT_USED"): ParameterConfig(write=True, numeric=True, width=1),
("GENERIC_IOB", "OUTPUT_USED"): ParameterConfig(write=True, numeric=True, width=1),
("GENERIC_IOB", "ENABLE_USED"): ParameterConfig(write=True, numeric=True, width=1),
}
#gen_7_ PLACE_IOT8[B] //IBUF
#gen_4_ PLACE_R13C6[0][A]
def write_posp(f):
belre = re.compile(r"R(\d+)C(\d+)_(SLICE|IOB)(\w)")
namere = re.compile(r"\W+")
for name, cell in ctx.cells:
row, col, typ, idx = belre.match(cell.bel).groups()
row = int(row)
col = int(col)
name = namere.sub('_', name)
if typ == 'SLICE':
idx = int(idx)
cls = idx//2
side = ['A', 'B'][idx%2]
lutname = name + "_LUT"
f.write(f"{lutname} PLACE_R{row}C{col}[{cls}][{side}]\n")
lut = codegen.Primitive("LUT4", lutname)
#lut.params["INIT"] = f"16'b{val:016b}"
lut.portmap['F'] = f"R{row}C{col}_F{idx}"
lut.portmap['I0'] = f"R{row}C{col}_A{idx}"
lut.portmap['I1'] = f"R{row}C{col}_B{idx}"
lut.portmap['I2'] = f"R{row}C{col}_C{idx}"
lut.portmap['I3'] = f"R{row}C{col}_D{idx}"
mod.wires.update(lut.portmap.values())
mod.primitives[lutname] = lut
if int(cell.params['FF_USED'], 2):
dffname = name + "_DFF"
f.write(f"{dffname} PLACE_R{row}C{col}[{cls}][{side}]\n")
lut = codegen.Primitive("DFF", dffname)
lut.portmap['D'] = f"R{row}C{col}_F{idx}"
lut.portmap['Q'] = f"R{row}C{col}_Q{idx}"
lut.portmap['CLK'] = f"R{row}C{col}_CLK{idx}"
mod.wires.update(lut.portmap.values())
mod.primitives[dffname] = lut
elif typ == 'IOB':
if row == 1:
edge = 'T'
num = col
elif col == 1:
edge = 'L'
num = row
elif col == 47: #TODO parameterize
edge = 'R'
num = row
else:
edge = 'B'
num = col
f.write(f"{name} PLACE_IO{edge}{num}[{idx}]\n")
iob = codegen.Primitive("IOBUF", name)
iob.portmap['I'] = f"R{row}C{col}_I{idx}"
iob.portmap['O'] = f"R{row}C{col}_O{idx}"
iob.portmap['IO'] = f"R{row}C{col}_IO{idx}"
iob.portmap['OEN'] = f"R{row}C{col}_OEN{idx}"
mod.wires.update(iob.portmap.values())
mod.inouts.add(f"R{row}C{col}_IO{idx}")
mod.primitives[name] = iob
with open("blinky.fasm", "w") as f:
write_fasm(ctx, param_map, f)
mod = codegen.Module()
with open("blinky.posp", "w") as f:
write_posp(f)
with open("blinky.vm", "w") as f:
mod.write(f)
#code.interact(local=locals())
|
Apycula
|
/Apycula-0.9.0a1.tar.gz/Apycula-0.9.0a1/legacy/generic/bitstream.py
|
bitstream.py
|
import os
import re
import subprocess
def run_script(port : str, pinName : str):
with open('template.vhd', 'rt') as f:
template = f.read()
template = template.replace("##PORT##", port)
template = template.replace("##PORTNAME##", pinName)
with open('findpin.vhd', 'wt') as f2:
f2.write(template)
GowinHome = os.getenv('GOWINHOME')
result = subprocess.run(["rm", "-f", "unpack.v"])  # -f: the file may not exist on the first run
result = subprocess.run([GowinHome+"/IDE/bin/gw_sh", "findpin.tcl"], check=True)
result = subprocess.run(["gowin_unpack", "-d", "GW1N-9", "impl/pnr/project.fs"], check=True)
with open('unpack.v', 'rt') as f3:
unpackv = f3.readline()
iobre = re.compile(r"\(([a-zA-Z0-9_]*)\)")
matches = iobre.findall(unpackv)
print("Found IOB: ", matches[0])
with open('pin_report.txt', 'a+') as f:
f.write(pinName + " -> " + matches[0] + "\n")
######################################################
## MAIN ##
######################################################
result = subprocess.run(["rm", "-f", "pin_report.txt"])  # -f: the file may not exist on the first run
# pin name, bus width, bus offset
pins = [
["IO_sdram_dq", 16, 0],
["O_sdram_clk", 0, 0],
["O_sdram_cke", 0, 0],
["O_sdram_cs_n", 0, 0],
["O_sdram_cas_n", 0, 0],
["O_sdram_ras_n", 0, 0],
["O_sdram_wen_n", 0, 0],
["O_sdram_addr", 12, 0],
["O_sdram_dqm", 2, 0],
["O_sdram_ba", 2, 0]
]
for pin in pins:
pinName = pin[0]
pinBuswidth = pin[1]
pinOffset = pin[2]
if (pinBuswidth == 0):
port = pinName
if (pinName.startswith("O")):
port = port + " : out std_logic"
else:
port = port + " : inout std_logic"
run_script(port, pinName)
else:
for pinIdx in range(0, pinBuswidth):
port = pinName
if (pinName.startswith("O")):
port = port + " : out "
else:
port = port + " : inout "
actualIdx = pinIdx + pinOffset
port = port + "std_logic_vector(" + str(actualIdx) + " downto " + str(actualIdx) + ")"
run_script(port, pinName+"("+str(actualIdx)+")")
|
Apycula
|
/Apycula-0.9.0a1.tar.gz/Apycula-0.9.0a1/legacy/sdram/find_sdram_pins.py
|
find_sdram_pins.py
|
# Sigma-delta based tone generator
This project generates a noise-shaped bitstream on one digital output pin.
By connecting an RC low-pass filter to this pin, the analogue sinusoidal signal is recovered from the bitstream, making this a simple method to get an analogue output from a digital pin.
Note: the noise performance of the generator is directly dependent on the digital noise present on the digital pin. For optimal performance, the digital signal should be re-clocked by an external flip-flop (74LV74 or similar). The external flip-flop must have a very clean supply, separate from the FPGA for optimal noise isolation.
A 16-bit input second-order noise shaper is used, which has a limited noise performance, in addition to the power supply noise issue outlined above. Do not expect stellar performance.
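The shaping loop itself is simple enough to sketch in a few lines. The following Python model is an illustration only, not the project's actual implementation: it is a second-order error-feedback noise shaper with noise transfer function (1 - z^-1)^2 and a 1-bit quantizer, operating on floats in [-1, 1] rather than the 16-bit integers the real design uses.

```python
def noise_shape_2nd_order(samples):
    """Second-order error-feedback noise shaper with a 1-bit quantizer.

    samples: iterable of floats in roughly [-1, 1].
    Returns a list of +/-1.0 output bits whose running average tracks the input.
    Illustrative model only; stability margins shrink near full scale.
    """
    e1 = e2 = 0.0  # previous two quantization errors
    bits = []
    for x in samples:
        w = x - 2.0 * e1 + e2          # feed shaped error back: NTF = (1 - z^-1)^2
        y = 1.0 if w >= 0.0 else -1.0  # 1-bit quantizer
        e2, e1 = e1, y - w             # shift error history (e = y - w)
        bits.append(y)
    return bits
```

Averaging the bitstream for a DC input recovers the input value: a constant 0.5 settles into a period-8 pattern whose mean is exactly 0.5, while the quantization error is pushed toward high frequencies for the RC filter to remove.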
A suitable RC reconstruction filter can be made from a 100 ohm resistor and a 1uF capacitor:
```
From FPGA
+--------------+
o--------| R = 100 Ohms |--------|-------------------o Output
+--------------+ |
|
+---------+
+---------+ C=1 uFarad
|
|
|
-------
-----
---
```
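As a quick sanity check (a computation added here for illustration, using the R and C values from the text above), the -3 dB cutoff of this first-order RC filter is f_c = 1/(2πRC), about 1.6 kHz:

```python
import math

# First-order RC low-pass cutoff frequency: f_c = 1 / (2 * pi * R * C)
R = 100.0   # ohms
C = 1e-6    # farads (1 uF)
f_c = 1.0 / (2.0 * math.pi * R * C)
print(f"cutoff: {f_c:.1f} Hz")  # prints "cutoff: 1591.5 Hz"
```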
For the TEC0117 board, the output pin is pin 1 on the 12-pin PMOD header.
## Bugs
Using nextpnr-gowin -- Next Generation Place and Route (Version 06d58e6e) and
Yosys 0.9+4081 (git sha1 c6681508, gcc 9.3.0-17ubuntu1~20.04 -fPIC -Os), it appears that not all phase increments work. See top.v for more information.
|
Apycula
|
/Apycula-0.9.0a1.tar.gz/Apycula-0.9.0a1/examples/tonegen/README.md
|
README.md
|
This directory exists so that Subversion-based projects can share a single
copy of the ``ez_setup`` bootstrap module for ``setuptools``, and have it
automatically updated in their projects when ``setuptools`` is updated.
For your convenience, you may use the following svn:externals definition::
ez_setup svn://svn.eby-sarna.com/svnroot/ez_setup
You can set this by executing this command in your project directory::
svn propedit svn:externals .
And then adding the line shown above to the file that comes up for editing.
Then, whenever you update your project, ``ez_setup`` will be updated as well.
|
Apydia
|
/Apydia-0.0.2.tar.gz/Apydia-0.0.2/ez_setup/README.txt
|
README.txt
|
import sys
DEFAULT_VERSION = "0.6c7"
DEFAULT_URL = "http://pypi.python.org/packages/%s/s/setuptools/" % sys.version[:3]
md5_data = {
'setuptools-0.6b1-py2.3.egg': '8822caf901250d848b996b7f25c6e6ca',
'setuptools-0.6b1-py2.4.egg': 'b79a8a403e4502fbb85ee3f1941735cb',
'setuptools-0.6b2-py2.3.egg': '5657759d8a6d8fc44070a9d07272d99b',
'setuptools-0.6b2-py2.4.egg': '4996a8d169d2be661fa32a6e52e4f82a',
'setuptools-0.6b3-py2.3.egg': 'bb31c0fc7399a63579975cad9f5a0618',
'setuptools-0.6b3-py2.4.egg': '38a8c6b3d6ecd22247f179f7da669fac',
'setuptools-0.6b4-py2.3.egg': '62045a24ed4e1ebc77fe039aa4e6f7e5',
'setuptools-0.6b4-py2.4.egg': '4cb2a185d228dacffb2d17f103b3b1c4',
'setuptools-0.6c1-py2.3.egg': 'b3f2b5539d65cb7f74ad79127f1a908c',
'setuptools-0.6c1-py2.4.egg': 'b45adeda0667d2d2ffe14009364f2a4b',
'setuptools-0.6c2-py2.3.egg': 'f0064bf6aa2b7d0f3ba0b43f20817c27',
'setuptools-0.6c2-py2.4.egg': '616192eec35f47e8ea16cd6a122b7277',
'setuptools-0.6c3-py2.3.egg': 'f181fa125dfe85a259c9cd6f1d7b78fa',
'setuptools-0.6c3-py2.4.egg': 'e0ed74682c998bfb73bf803a50e7b71e',
'setuptools-0.6c3-py2.5.egg': 'abef16fdd61955514841c7c6bd98965e',
'setuptools-0.6c4-py2.3.egg': 'b0b9131acab32022bfac7f44c5d7971f',
'setuptools-0.6c4-py2.4.egg': '2a1f9656d4fbf3c97bf946c0a124e6e2',
'setuptools-0.6c4-py2.5.egg': '8f5a052e32cdb9c72bcf4b5526f28afc',
'setuptools-0.6c5-py2.3.egg': 'ee9fd80965da04f2f3e6b3576e9d8167',
'setuptools-0.6c5-py2.4.egg': 'afe2adf1c01701ee841761f5bcd8aa64',
'setuptools-0.6c5-py2.5.egg': 'a8d3f61494ccaa8714dfed37bccd3d5d',
'setuptools-0.6c6-py2.3.egg': '35686b78116a668847237b69d549ec20',
'setuptools-0.6c6-py2.4.egg': '3c56af57be3225019260a644430065ab',
'setuptools-0.6c6-py2.5.egg': 'b2f8a7520709a5b34f80946de5f02f53',
'setuptools-0.6c7-py2.3.egg': '209fdf9adc3a615e5115b725658e13e2',
'setuptools-0.6c7-py2.4.egg': '5a8f954807d46a0fb67cf1f26c55a82e',
'setuptools-0.6c7-py2.5.egg': '45d2ad28f9750e7434111fde831e8372',
}
import sys, os
def _validate_md5(egg_name, data):
if egg_name in md5_data:
from md5 import md5
digest = md5(data).hexdigest()
if digest != md5_data[egg_name]:
print >>sys.stderr, (
"md5 validation of %s failed! (Possible download problem?)"
% egg_name
)
sys.exit(2)
return data
def use_setuptools(
version=DEFAULT_VERSION, download_base=DEFAULT_URL, to_dir=os.curdir,
download_delay=15
):
"""Automatically find/download setuptools and make it available on sys.path
`version` should be a valid setuptools version number that is available
as an egg for download under the `download_base` URL (which should end with
a '/'). `to_dir` is the directory where setuptools will be downloaded, if
it is not already available. If `download_delay` is specified, it should
be the number of seconds that will be paused before initiating a download,
should one be required. If an older version of setuptools is installed,
this routine will print a message to ``sys.stderr`` and raise SystemExit in
an attempt to abort the calling script.
"""
try:
import setuptools
if setuptools.__version__ == '0.0.1':
print >>sys.stderr, (
"You have an obsolete version of setuptools installed. Please\n"
"remove it from your system entirely before rerunning this script."
)
sys.exit(2)
except ImportError:
egg = download_setuptools(version, download_base, to_dir, download_delay)
sys.path.insert(0, egg)
import setuptools; setuptools.bootstrap_install_from = egg
import pkg_resources
try:
pkg_resources.require("setuptools>="+version)
except pkg_resources.VersionConflict, e:
# XXX could we install in a subprocess here?
print >>sys.stderr, (
"The required version of setuptools (>=%s) is not available, and\n"
"can't be installed while this script is running. Please install\n"
" a more recent version first.\n\n(Currently using %r)"
) % (version, e.args[0])
sys.exit(2)
def download_setuptools(
version=DEFAULT_VERSION, download_base=DEFAULT_URL, to_dir=os.curdir,
delay = 15
):
"""Download setuptools from a specified location and return its filename
`version` should be a valid setuptools version number that is available
as an egg for download under the `download_base` URL (which should end
with a '/'). `to_dir` is the directory where the egg will be downloaded.
`delay` is the number of seconds to pause before an actual download attempt.
"""
import urllib2, shutil
egg_name = "setuptools-%s-py%s.egg" % (version,sys.version[:3])
url = download_base + egg_name
saveto = os.path.join(to_dir, egg_name)
src = dst = None
if not os.path.exists(saveto): # Avoid repeated downloads
try:
from distutils import log
if delay:
log.warn("""
---------------------------------------------------------------------------
This script requires setuptools version %s to run (even to display
help). I will attempt to download it for you (from
%s), but
you may need to enable firewall access for this script first.
I will start the download in %d seconds.
(Note: if this machine does not have network access, please obtain the file
%s
and place it in this directory before rerunning this script.)
---------------------------------------------------------------------------""",
version, download_base, delay, url
); from time import sleep; sleep(delay)
log.warn("Downloading %s", url)
src = urllib2.urlopen(url)
# Read/write all in one block, so we don't create a corrupt file
# if the download is interrupted.
data = _validate_md5(egg_name, src.read())
dst = open(saveto,"wb"); dst.write(data)
finally:
if src: src.close()
if dst: dst.close()
return os.path.realpath(saveto)
def main(argv, version=DEFAULT_VERSION):
"""Install or upgrade setuptools and EasyInstall"""
try:
import setuptools
except ImportError:
egg = None
try:
egg = download_setuptools(version, delay=0)
sys.path.insert(0,egg)
from setuptools.command.easy_install import main
return main(list(argv)+[egg]) # we're done here
finally:
if egg and os.path.exists(egg):
os.unlink(egg)
else:
if setuptools.__version__ == '0.0.1':
# tell the user to uninstall obsolete version
use_setuptools(version)
req = "setuptools>="+version
import pkg_resources
try:
pkg_resources.require(req)
except pkg_resources.VersionConflict:
try:
from setuptools.command.easy_install import main
except ImportError:
from easy_install import main
main(list(argv)+[download_setuptools(delay=0)])
sys.exit(0) # try to force an exit
else:
if argv:
from setuptools.command.easy_install import main
main(argv)
else:
print "Setuptools version",version,"or greater has been installed."
print '(Run "ez_setup.py -U setuptools" to reinstall or upgrade.)'
def update_md5(filenames):
"""Update our built-in md5 registry"""
import re
from md5 import md5
for name in filenames:
base = os.path.basename(name)
f = open(name,'rb')
md5_data[base] = md5(f.read()).hexdigest()
f.close()
data = [" %r: %r,\n" % it for it in md5_data.items()]
data.sort()
repl = "".join(data)
import inspect
srcfile = inspect.getsourcefile(sys.modules[__name__])
f = open(srcfile, 'rb'); src = f.read(); f.close()
match = re.search("\nmd5_data = {\n([^}]+)}", src)
if not match:
print >>sys.stderr, "Internal error!"
sys.exit(2)
src = src[:match.start(1)] + repl + src[match.end(1):]
f = open(srcfile,'w')
f.write(src)
f.close()
if __name__=='__main__':
if len(sys.argv)>2 and sys.argv[1]=='--md5update':
update_md5(sys.argv[2:])
else:
main(sys.argv[1:])
|
Apydia
|
/Apydia-0.0.2.tar.gz/Apydia-0.0.2/ez_setup/__init__.py
|
__init__.py
|
import logging
import sys
import os
from optparse import OptionParser
from apydia import release
from apydia.project import Project
import warnings
log = logging.getLogger(__name__)
__all__ = ["main", "apydia"]
# optional distutils integration
try:
import distutils.cmd
class apydia(distutils.cmd.Command):
"""
``apydia`` command
==================
The ``apydia``-command as an extension to distutils.
Run by typing
python setup.py apydia
Available options are:
--title (-t) The name of your project
--destination (-d) Target directory where the apidocs go
--theme (-s) Choose an Apydia theme
--docformat Set this to the docstring's format (e.g. markdown,
textile, reStructuredText)
--format (-f) XHTML or HTML, defaults to xhtml
--modules Include the given modules in documentation
--exclude-modules (-x) Don't generate documentation for the given modules
--trac-browser-url (-b) URL to Trac's sourcecode browser
--open (-o) Open generated files in browser when done
It is recommended to supply options through an
``[apydia]``-section in the target project's ``setup.cfg``.
See the documentation of the ``apydia.command`` module for
more information.
"""
description = "Generate API Reference Documentations using Apydia"
user_options = [
("title=", "t", "The name of your project"),
("destination=", "d", "Target directory where the apidocs go"),
("theme=", "s", "Choose an Apydia theme"),
("docformat=", None, "Set this to the docstring's format (e.g. markdown, textile, reStructuredText)"),
("format=", "f", "XHTML or HTML, defaults to xhtml"),
("modules=", None, "Include the given modules in documentation"),
("exclude-modules=", "x", "Don't generate documentation for the given modules"),
("trac-browser-url=", "b", "URL to Trac's sourcecode browser"),
("open", "o", "Open generated files in browser when done")
]
#boolean_options = []
def initialize_options(self):
self.title = ""
self.destination = "apydocs"
self.theme = "default"
self.docformat = ""
self.format = "xhtml"
self.modules = []
self.exclude_modules = []
self.trac_browser_url = ""
self.open = False
def finalize_options(self):
self.ensure_string_list("modules")
self.ensure_string_list("exclude_modules")
self.ensure_string("title")
self.ensure_string("theme")
self.ensure_string("trac_browser_url")
def run(self):
def generate():
project = Project(self)
project.generate()
if self.open and str(self.open).lower() in ("1", "true", "yes"):
project.open_in_browser()
self.execute(generate, (), msg="Generating API reference documentation.")
except ImportError:
warnings.warn("distutils not installed.")
# FIXME: docstring
def main():
"""
Generate api reference documentation from the command line
Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do
eiusmod tempor incididunt ut labore et dolore magna aliqua.
#!python
class CodeExample(object):
def __init__(self):
pass
Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris
nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in
reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla
pariatur. Excepteur sint occaecat cupidatat non proident, sunt in
culpa qui officia deserunt mollit anim id est laborum.
"""
sys.path.insert(0, os.getcwd())
optparser = OptionParser(
usage = "%prog [options] [modules]",
description = release.description,
version = release.version
)
optparser.set_defaults(
title = "",
destination = "apydocs",
theme = "default",
verbose = True,
open = False,
format = "xhtml",
docformat = "restructuredtext",
modules = "",
exclude_modules = "",
trac_browser_url = ""
)
optparser.add_option(
"-c", "--config", dest="configfile",
help="specify config file"
)
optparser.add_option(
"-d", "--destination", dest="destination",
help="specify output directory"
)
optparser.add_option(
"-o", "--open", dest="open", action="store_true",
help="open in browser"
)
optparser.add_option(
"-f", "--format", dest="format",
help="rendering format (xhtml or html, defaults to xhtml)"
)
optparser.add_option(
"-s", "--style", dest="theme",
help="Choose a theme"
)
optparser.add_option(
"-p", "--parser", dest="docformat",
help="docstring parser (markdown, textile, restructuredtext, html, ...)"
)
optparser.add_option(
"-b", "--trac-browser-url", dest="trac_browser_url",
help="trac browser url (path to root module)"
)
optparser.add_option(
"-q", "--quiet", action="store_false", dest="verbose",
help="silent mode"
)
optparser.add_option(
"-t", "--title", dest="title",
help="title of project"
)
optparser.add_option(
"-x", "--exclude-modules", dest="exclude_modules",
help="exclude modules"
)
(options, args) = optparser.parse_args()
# update defaults from configfile
if options.configfile:
from ConfigParser import ConfigParser
cfgparser = ConfigParser()
cfgparser.read(options.configfile)
cfgopts = dict(cfgparser.items("apydia"))
optparser.set_defaults(**cfgopts)
# overwrite configfile options with commandline options
(options, args) = optparser.parse_args()
try:
options.modules = [m.strip() for m in options.modules.split(",")
if m.strip()] + args
except AttributeError:
optparser.print_help()
return
options.exclude_modules = [m.strip() for m in options.exclude_modules.split(",")
if m.strip()]
logger_settings = dict(format="%(asctime)s %(levelname)-8s %(message)s")
if options.verbose:
logger_settings["level"] = logging.DEBUG
logging.basicConfig(**logger_settings)
project = Project(options)
project.generate()
if options.open:
project.open_in_browser()
if __name__ == "__main__":
main()
|
Apydia
|
/Apydia-0.0.2.tar.gz/Apydia-0.0.2/apydia/command.py
|
command.py
|
import os
import warnings
from glob import glob
from pkg_resources import iter_entry_points, DistributionNotFound
from ConfigParser import RawConfigParser, NoOptionError, NoSectionError
__all__ = ["Theme", "ThemeNotFoundError", "InheritanceError"]
# Filename for theme descriptors
DESCRIPTOR_NAME = "theme.ini"
# Fallback Pygments theme, in case no pygments style is specified by the theme
DEFAULT_PYGMENTS_STYLE = "apydiadefault"
class ThemeNotFoundError(StandardError):
"""
A ``ThemeNotFoundError`` will be raised if the specified theme is not
available.
"""
pass
class InheritanceError(StandardError):
"""
Exception to be raised when there's any problem with inheritance.
Currently ``InheritanceError``s will only be raised if a theme is
configured to inherit from itself or from one of its descendants.
"""
pass
class Theme(object):
"""
The ``Theme``-class represents and loads themes. It also knows where
the theme's and its parent theme's resources can be found.
"""
themes = dict()
def __init__(self, name, module):
self.name = name
self.module = module
self.parent = None
self.resource_patterns = []
self._resources = None
cp = RawConfigParser()
cp.read(os.path.join(self.path, DESCRIPTOR_NAME))
try:
self.parent = Theme.load(cp.get("theme", "inherits"))
self.find_loops()
except (NoOptionError, NoSectionError):
pass # no parent theme
try:
patterns = cp.get("theme", "resources")
self.resource_patterns += patterns.split()
except (NoOptionError, NoSectionError):
pass # no resources defined
try:
self.pygments_style = cp.get("theme", "pygments_style")
except NoOptionError:
if self.parent:
self.pygments_style = self.parent.pygments_style
else:
self.pygments_style = DEFAULT_PYGMENTS_STYLE
try:
self.pygments_style_css = cp.get("theme", "pygments_style_css")
except NoOptionError:
if self.parent:
self.pygments_style_css = self.parent.pygments_style_css
else:
self.pygments_style_css = self.pygments_style + "-pygments.css"
@classmethod
def load(cls, name):
"""
Loads and returns the theme with the identifier
equal to the value of ``name``.
"""
if name in cls.themes:
return cls.themes[name]
module = None
for entrypoint in iter_entry_points("apydia.themes", name):
try:
module = entrypoint.load()
except DistributionNotFound, msg:
warnings.warn("DistributionNotFound: %s" % msg)
if not module:
raise ThemeNotFoundError, "No such theme \"%s\" can be found." % name
else:
cls.themes[name] = Theme(name, module)
return cls.themes[name]
@property
def resources(self):
if self._resources is None:
self._resources = self.find_all_resources(self.resource_patterns)
return self._resources
def find_all_resources(self, patterns):
""" Find all required resource-files for this theme. """
resources = []
for pattern in patterns:
for template_dir in self.template_dirs:
found = glob(os.path.join(template_dir, pattern))
if found:
for path in found:
resources.append((path, path[len(template_dir)+1:]))
break
return resources
@property
def template_dirs(self):
"""
        Returns a list of this theme's and its parents'
base directories.
"""
if self.parent:
return [self.path] + self.parent.template_dirs
else:
return [self.path]
@property
def path(self):
""" Returns this theme's base directory. """
return os.path.realpath(sorted(self.module.__path__,
cmp=lambda a,b:len(a)-len(b))[0])
def find_loops(self):
"""
Raises an ``InheritanceError`` if this theme inherits
        itself or one of its descendants.
"""
theme = self
while theme.parent:
if self == theme.parent:
raise InheritanceError, "The theme inherits itself, which is nonsense"
theme = theme.parent
|
Apydia
|
/Apydia-0.0.2.tar.gz/Apydia-0.0.2/apydia/theme.py
|
theme.py
|
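`Theme.find_loops` walks the parent chain and raises when the theme shows up among its own ancestors. A self-contained Python 3 sketch of the same check, generalized with a `seen` set so that any cycle in the chain terminates (the `Node` class here is illustrative, not part of Apydia):

```python
class Node:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent

    def find_loops(self):
        # Walk up the parent chain; revisiting any node means a cycle.
        seen = {self}
        node = self.parent
        while node is not None:
            if node in seen:
                raise ValueError("%s inherits itself" % self.name)
            seen.add(node)
            node = node.parent

base = Node("base")
child = Node("child", parent=base)
child.find_loops()        # linear chain: no error
base.parent = child       # base -> child -> base: a cycle
try:
    child.find_loops()
except ValueError as exc:
    print("caught:", exc)
```

The original only compares against `self` on each step, which is enough for cycles that pass through the theme itself; the `seen` set additionally terminates on cycles elsewhere in the chain.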
import logging
from os import makedirs, listdir
from os.path import join, dirname, isdir
from shutil import copy
from genshi.template import TemplateLoader
from pygments.formatters import HtmlFormatter
from apydia.descriptors import *
from apydia import helpers
log = logging.getLogger(__name__)
__all__ = ["Generator"]
class Generator(object):
"""
The ``Generator`` class, which is responsible for rendering and
copying HTML-files and needed resources.
"""
def __init__(self, project):
self.project = project
self.options = project.options
self.generated = set()
self.loader = TemplateLoader(project.theme.template_dirs)
def generate(self, desc):
        """ Generate all HTML for a given descriptor and its members. """
if desc in self.generated:
return
if not helpers.is_included(desc.pathname, self.options):
return
log.debug("generating \"%s\"", self.target_filename(desc))
self.generated.add(desc)
for generator in (desc.modules, desc.classes):
for member in generator:
self.generate(member)
data = dict(
project = self.project,
desc = desc,
options = self.options,
helpers = helpers
)
template = self.loader.load("%s.html" % desc.type)
stream = template.generate(**data)
outfile = file(self.target_filename(desc), "w")
try:
outfile.write(stream.render(self.options.format))
finally:
outfile.close()
def generate_resources(self):
""" Currently just creates the CSS file for syntax highlighting. """
pygments_style = self.project.theme.pygments_style
pygments_style_css = self.project.theme.pygments_style_css
css = HtmlFormatter(style=pygments_style).get_style_defs(".source")
filename = join(self.options.destination, pygments_style_css)
log.debug("generating \"%s\"", filename)
outfile = file(filename, "w")
try:
outfile.write(css)
finally:
outfile.close()
def create_dirs(self):
""" Creates the target directory where all content goes. """
if not isdir(self.options.destination):
log.debug("creating directory \"%s\"", self.options.destination)
makedirs(self.options.destination)
def copy_resources(self):
        """ Copies all required files such as CSS, JavaScript and images. """
project, theme = self.project, self.project.theme
for resource, relative in theme.resources:
destpath = join(self.options.destination, relative)
            destdir = dirname(destpath)
            if not isdir(destdir):
                log.debug("creating directory \"%s\"", destdir)
                makedirs(destdir)
log.debug("copying \"%s\" to \"%s\"", resource, destpath)
copy(resource, destpath)
def target_filename(self, desc):
"""
Returns the full path and filename for the file that will be
generated for the descriptor given by ``desc``.
"""
return join(self.options.destination, desc.href)
|
Apydia
|
/Apydia-0.0.2.tar.gz/Apydia-0.0.2/apydia/generator.py
|
generator.py
|
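`Generator.generate` keeps a `generated` set so each descriptor is rendered exactly once, even when it is reachable through several parents, and recurses into members before writing the page that contains them. A stripped-down Python 3 sketch of that traversal (`TinyGenerator` and the dict-based tree are my illustrative stand-ins):

```python
class TinyGenerator:
    def __init__(self):
        self.generated = set()   # descriptors already handled
        self.rendered = []       # render order: members first

    def generate(self, desc, children):
        if desc in self.generated:
            return
        self.generated.add(desc)
        for child in children.get(desc, ()):
            self.generate(child, children)
        self.rendered.append(desc)

g = TinyGenerator()
tree = {"pkg": ["pkg.a", "pkg.b"], "pkg.a": ["pkg.b"]}  # pkg.b reachable twice
g.generate("pkg", tree)
print(g.rendered)
# → ['pkg.b', 'pkg.a', 'pkg']
```

The set membership test is what keeps a shared member such as `pkg.b` from being generated twice.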
import sys
import warnings
import logging
from os.path import abspath, join
from operator import attrgetter
from datetime import datetime
from pkg_resources import iter_entry_points, DistributionNotFound
from genshi import Markup, HTML
from apydia.generator import Generator
from apydia.descriptors import create_desc, ModuleDesc
from apydia import helpers
from apydia.theme import Theme
import apydia.release
log = logging.getLogger(__name__)
# for better compatibility with pudge
DEFAULT_DOCFORMAT = "reStructuredText"
class Project(object):
"""
The documentation ``Project`` class.
"""
apydia_version = apydia.release.version
def __init__(self, options):
self.options = options
self.modules = set()
self.name = options.title
module_names = options.modules
self.theme = Theme.load(options.theme)
self._parsers = dict()
self._loaded_parsers = dict()
        # ensure there's no trailing slash
if options.trac_browser_url:
options.trac_browser_url = options.trac_browser_url.rstrip("/")
for entrypoint in iter_entry_points("apydia.docrenderers"):
try:
self._parsers[entrypoint.name.lower()] = entrypoint
except DistributionNotFound, msg:
warnings.warn("DistributionNotFound: %s" % msg)
module_names = sorted(filter(None, module_names))
for module_name in module_names:
__import__(module_name)
for module_name in module_names:
desc = create_desc(sys.modules[module_name])
self.modules.add(desc)
for module in desc.module_tree():
if helpers.is_included(module.pathname, self.options):
self.modules.add(module)
self.modules = filter(None, sorted(list(self.modules),
key=attrgetter("pathname")))
@property
def startpage(self):
""" Try to find the root module's page and return it. """
# find any root module
module = self.modules[0]
while module.parent:
module = module.parent
return abspath(join(self.options.destination, module.href))
def generate(self):
"""
Create a ``Generator``-object and make it create all requested
pages for each module recursively.
"""
generator = Generator(self)
generator.create_dirs()
for module in self.modules:
generator.generate(module)
log.debug("running post-project hooks")
# TODO: make hooks configurable (options, "theme.ini")
# for entrypoint in iter_entry_points("apydia.post_project_hooks"):
# try:
# hook = entrypoint.load()
# log.debug(" - post project hook: %s", entrypoint.name)
# hook(self)
# except DistributionNotFound, msg:
# warnings.warn("DistributionNotFound: %s" % msg)
generator.copy_resources()
generator.generate_resources()
@property
def date(self):
""" Just the current date and time. """
return datetime.now().strftime("%Y-%m-%d %H:%M")
# Parser/docstring-renderer
# ------------------------------------------------------------------------
def _get_parser(self, name):
if name not in self._loaded_parsers:
if name not in self._parsers:
log.warn("%s not in %r", name, self._parsers.keys())
parser_class = self._parsers[name].load()
self._loaded_parsers[name] = parser_class()
return self._loaded_parsers[name]
def parser(self, desc):
"""
Returns the docstring-parser for the descriptor
given in ``desc``.
"""
docformat = desc.docformat
if not docformat:
docformat = self.options.docformat or DEFAULT_DOCFORMAT
return self._get_parser(docformat.lower())
def render_description(self, desc):
"""
Returns the rendered docstring for the descriptor given in
``desc`` as a Genshi-HTML-object.
"""
return HTML(self.parser(desc).render_description(desc))
def render_short_desc(self, desc):
"""
Returns the first paragraph from a rendered docstring for the
descriptor given in ``desc`` as a Genshi-HTML-object.
"""
return HTML(self.parser(desc).render_short_desc(desc))
def render_title(self, desc):
"""
Returns the title (``<h1>`` or ``<h2>``) of a rendered docstring
for the descriptor given in ``desc`` as a Genshi-HTML-object.
"""
return Markup(self.parser(desc).render_title(desc))
def renderer_id(self, desc):
"""
The ``Parser``'s id for the descriptor given
in ``desc``.
"""
return self.parser(desc).parser_id
# Open in browser
# ------------------------------------------------------------------------
def open_in_browser(self):
"""
        Open the generated root module's page in the system's default
        web browser.
"""
import webbrowser
from urllib import pathname2url
webbrowser.open(pathname2url(self.startpage))
|
Apydia
|
/Apydia-0.0.2.tar.gz/Apydia-0.0.2/apydia/project.py
|
project.py
|
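`Project.parser` resolves the docstring format with a three-step fallback: the descriptor's own `__docformat__`, then the `--docformat` option, then the pudge-compatible default, and looks the parser up by the lowercased name. A hedged Python 3 sketch (`pick_docformat` is my name for the lookup, not Apydia's):

```python
DEFAULT_DOCFORMAT = "reStructuredText"

def pick_docformat(desc_docformat=None, option_docformat=None):
    """Descriptor value wins, then the command-line option,
    then the pudge-compatible default; parsers are keyed lowercase."""
    fmt = desc_docformat or option_docformat or DEFAULT_DOCFORMAT
    return fmt.lower()

print(pick_docformat())                       # restructuredtext
print(pick_docformat(None, "epytext"))        # epytext
print(pick_docformat("Markdown", "epytext"))  # markdown
```

Lowercasing matters because the `apydia.docrenderers` entry points are registered under lowercased names.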
import logging
import sys
from os.path import split
from inspect import ismodule, isclass, isfunction, ismethod, isbuiltin, \
getmodule, getsourcefile, findsource, getdoc, \
getcomments, getargspec, formatargspec
import pkg_resources
from glob import glob
__all__ = ["create_desc"]
log = logging.getLogger(__name__)
def get_distribution():
"""
    Returns the pkg_resources distribution rooted in the current
directory.
"""
try:
dist_name = glob("*.egg-info")[0][:-9]
except IndexError:
dist_name = None
if dist_name:
return pkg_resources.get_distribution(dist_name)
def _getsourcefile_relative(obj):
"""
    Returns the filename relative to the project root
    (i.e. the root module's home).
"""
    def root_module_basedir(module):
        module = getmodule(module)
        if "." in module.__name__:
            module = sys.modules[module.__name__.split(".")[0]]
        return split(getsourcefile(module))[0]
try:
# try to figure out basedir through distribution (thanks to Alberto)
dist = get_distribution()
if dist:
base = dist.location
else:
base = root_module_basedir(obj)
absolute_path = getsourcefile(obj)
log.debug("base: %r, abspath: %r", base, absolute_path)
return absolute_path[len(base):]
except (TypeError, AttributeError):
return ""
def _getcomments(value):
    comment = getcomments(value)
    if not comment:
        return comment
    indent = sys.maxint
    # strip hashes and leading spaces
    comment = [line.lstrip().lstrip("#") for line in comment.split("\n")]
    for line in comment:
        s = line.lstrip()
        if s: indent = min(indent, len(line) - len(s))
    if indent < sys.maxint:
        return "\n".join(line[indent:] for line in comment)
    return "\n".join(comment)
class Descriptor(object):
""" Descriptor baseclass """
represents = "Object"
matches = staticmethod(lambda v:True) # match anything
instances = dict()
def __init__(self, value):
try:
Descriptor.instances[value] = self
except TypeError:
pass
self.value = value
try:
self.pathname = getattr(value, "__name__", "")
except:
self.pathname = ""
if not isinstance(self.pathname, basestring):
self.pathname = ""
try:
self.name = self.pathname.rsplit(".", 1)[-1]
except TypeError:
self.name = ""
self.parent = None
self.file = _getsourcefile_relative(value)
self._members = self.find_members()
try:
self.line = findsource(value)[1]
except (TypeError, IOError):
self.line = 0 # TODO: how to determine linenumbers for attrs?
self.type = type(self).__name__.lower()
if self.type.endswith("desc"):
self.type = self.type[:-4]
# TODO: remove "-*- coding: ... -*-"
self.docstring = getdoc(value) or _getcomments(value) or ""
# TODO: introduce additional, local member names?
def generate_memberlist(self, type_=None):
for name, member in self._members.iteritems():
if member.name == "<lambda>":
member.name = name
if not type_ or (type_ and type(member) is type_):
yield member
def contains(self, item):
if isinstance(item, ClassDesc):
return item.module.pathname == self.pathname
elif item.pathname == self.pathname:
return True
return False
    # TODO: make this flexible enough to link to svn repos and whatever
def sourcelink(self, project=None):
try:
baseurl = project.options.trac_browser_url
if baseurl:
return "%s%s#L%s" % (baseurl, self.file, self.line)
except AttributeError:
pass
return ""
# Useful properties
# ------------------------------------------------------------------------
@property
def href(self):
return "%s.html" % self.pathname
@property
def path(self):
if self.parent:
return self.parent.path + [self]
else:
return [self]
@property
def docformat(self):
try:
docformat = getattr(self.value, "__docformat__", None)
except:
docformat = None
if not docformat and self.parent and self.parent != self:
return self.parent.docformat
elif isinstance(docformat, basestring):
return docformat.split()[0]
else:
return docformat
# Shortcuts
# ------------------------------------------------------------------------
modules = property(lambda self:sorted(self.generate_memberlist(ModuleDesc)))
classes = property(lambda self:sorted(self.generate_memberlist(ClassDesc)))
functions = property(lambda self:sorted(self.generate_memberlist(FunctionDesc)))
methods = property(lambda self:sorted(self.generate_memberlist(MethodDesc)))
attributes = property(lambda self:sorted(self.generate_memberlist(AttributeDesc)))
members = property(lambda self:sorted(self.generate_memberlist()))
# Helpers
# ------------------------------------------------------------------------
def find_members(self):
"""
        This method is overridden by subclasses, each of which
        returns its own dict of members.
"""
return dict()
def exported_keys(self):
for name in ("__doc_all__", "__pudge_all__", "__all__"):
keys = getattr(self.value, name, None)
if isinstance(keys, list):
return keys
return None
def __cmp__(self, other):
return cmp(self.name, other.name)
class ModuleDesc(Descriptor):
""" Descriptor for a module """
represents = "Module"
matches = staticmethod(ismodule)
_modules = dict()
def __init__(self, module):
super(ModuleDesc, self).__init__(module)
self.line = 1
# add submodules as members
for name, desc in self.find_submodules():
if name not in self._members:
self._members[name] = desc
ModuleDesc._modules[self.pathname] = self
if "." in self.pathname:
self.parent = ModuleDesc.load(".".join(self.pathname.split(".")[:-1]))
@classmethod
def load(cls, pathname):
if pathname not in cls._modules:
try:
__import__(pathname)
except ImportError:
pass
return create_desc(sys.modules[pathname])
else:
return cls._modules[pathname]
def find_members(self):
exported_keys = self.exported_keys()
keys = dir(self.value)
if exported_keys:
keys = [key for key in keys if key in exported_keys]
elif exported_keys is None:
d = self.value.__dict__
def gen_keys(keys):
for key in keys:
if not key.startswith("_"):
m = getmodule(d[key])
if m in (None, self.value):
yield key
keys = list(gen_keys(keys))
else:
keys = []
members = dict()
for key in keys:
value = getattr(self.value, key, None)
module = getmodule(value)
if value is not self.value:
desc = create_desc(value)
if not desc.parent and not isinstance(desc, ModuleDesc):
desc.parent = self
desc.pathname = ".".join([self.pathname, key])
desc.name = key
desc.file = self.file
members[key] = desc
return members
def find_submodules(self):
prefix = self.pathname + "."
for module in filter(None, sys.modules.itervalues()):
name = module.__name__
if name.startswith(prefix) and "." not in name[len(prefix):]:
m = ModuleDesc.load(name)
yield m.name, m
    def module_tree(self, modules=()):
if self in modules:
return modules
modules = set(self.modules)
for module in self.modules:
modules.update(module.module_tree(modules))
return modules
@property
def details(self):
return self.href
class ClassDesc(Descriptor):
""" Descriptor for a class """
represents = "Class"
matches = staticmethod(isclass)
def __init__(self, class_):
super(ClassDesc, self).__init__(class_)
self.name = getattr(class_, "__name__", "")
if not isinstance(self.name, basestring):
self.name = ""
self.module = ModuleDesc.load(getmodule(class_).__name__)
self.parent = self.module
self.pathname = ".".join([self.module.pathname, self.name])
def find_members(self):
exported_keys = self.exported_keys()
keys = dir(self.value)
if exported_keys:
keys = [key for key in keys if key in exported_keys]
elif exported_keys is None:
keys = [key for key in keys if not key.startswith("_")]
else:
keys = []
results = dict()
for key in keys:
try:
value = getattr(self.value, key)
desc = create_desc(value)
desc.parent = self
results[key] = desc
except:
pass
return results
@property
def details(self):
return self.href
class AttributeOrMethodDesc(Descriptor):
""" Descriptor-baseclass for methods, functions and attributes """
represents = "AttributeOrMethod"
@property
def href(self):
return "%s.html#%s" % (self.parent.pathname, self.anchor)
@property
def anchor(self):
return "%s-%s" % (self.type, self.name)
class AttributeDesc(AttributeOrMethodDesc):
""" Descriptor for an attribute """
represents = "Attribute"
def __init__(self, value, *args, **kwargs):
super(AttributeDesc, self).__init__(value, *args, **kwargs)
self.docstring = _getcomments(value) or ""
@property
def fulltype(self):
mytype = type(self.value)
return ".".join([getmodule(mytype).__name__, mytype.__name__])
class FunctionDesc(AttributeOrMethodDesc):
""" Descriptor for a function """
represents = "Function"
matches = staticmethod(isfunction)
def __init__(self, function):
super(FunctionDesc, self).__init__(function)
self.name = function.__name__
self.module = ModuleDesc.load(getmodule(function).__name__)
self.parent = self.module
self.pathname = ".".join([self.module.pathname, self.name])
self.argspec = formatargspec(*getargspec(function))
class MethodDesc(FunctionDesc):
""" Descriptor for a method """
represents = "Method"
matches = staticmethod(ismethod)
def create_desc(value):
"""
    Returns the descriptor for the given ``value``, creating it
    first if none exists yet.
"""
try:
if value in Descriptor.instances:
return Descriptor.instances[value]
except TypeError: # "unhashable"
return AttributeDesc(value)
for class_ in [ModuleDesc, ClassDesc, MethodDesc, FunctionDesc]:
if class_.matches(value):
return class_(value)
return AttributeDesc(value)
|
Apydia
|
/Apydia-0.0.2.tar.gz/Apydia-0.0.2/apydia/descriptors.py
|
descriptors.py
|
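`create_desc` picks a descriptor class by testing each class's `matches` predicate (a `staticmethod` wrapping the `inspect` helpers) in a fixed order, falling back to `AttributeDesc` when nothing matches. A Python 3 sketch of that dispatch order (`DISPATCH` and `describe` are my illustrative names):

```python
import inspect

# Predicate order mirrors create_desc: modules, classes, methods,
# functions, with everything else treated as a plain attribute.
DISPATCH = [
    (inspect.ismodule, "module"),
    (inspect.isclass, "class"),
    (inspect.ismethod, "method"),
    (inspect.isfunction, "function"),
]

def describe(value):
    for matches, label in DISPATCH:
        if matches(value):
            return label
    return "attribute"

print(describe(inspect))    # module
print(describe(int))        # class
print(describe(describe))   # function
print(describe(42))         # attribute
```

Checking `ismethod` before `isfunction` keeps bound methods from being misclassified as plain functions.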
eval(function(p,a,c,k,e,r){e=function(c){return(c<a?'':e(parseInt(c/a)))+((c=c%a)>35?String.fromCharCode(c+29):c.toString(36))};if(!''.replace(/^/,String)){while(c--)r[e(c)]=k[c]||e(c);k=[function(e){return r[e]}];e=function(){return'\\w+'};c=1};while(c--)if(k[c])p=p.replace(new RegExp('\\b'+e(c)+'\\b','g'),k[c]);return p}('(G(){9(1m E!="W")H w=E;H E=18.15=G(a,b){I 6 7u E?6.5N(a,b):1u E(a,b)};9(1m $!="W")H D=$;18.$=E;H u=/^[^<]*(<(.|\\s)+>)[^>]*$|^#(\\w+)$/;E.1b=E.3A={5N:G(c,a){c=c||U;9(1m c=="1M"){H m=u.2S(c);9(m&&(m[1]||!a)){9(m[1])c=E.4D([m[1]],a);J{H b=U.3S(m[3]);9(b)9(b.22!=m[3])I E().1Y(c);J{6[0]=b;6.K=1;I 6}J c=[]}}J I 1u E(a).1Y(c)}J 9(E.1n(c))I 1u E(U)[E.1b.2d?"2d":"39"](c);I 6.6v(c.1c==1B&&c||(c.4c||c.K&&c!=18&&!c.1y&&c[0]!=W&&c[0].1y)&&E.2h(c)||[c])},4c:"1.2.1",7Y:G(){I 6.K},K:0,21:G(a){I a==W?E.2h(6):6[a]},2o:G(a){H b=E(a);b.4Y=6;I b},6v:G(a){6.K=0;1B.3A.1a.16(6,a);I 6},N:G(a,b){I E.N(6,a,b)},4I:G(a){H b=-1;6.N(G(i){9(6==a)b=i});I b},1x:G(f,d,e){H c=f;9(f.1c==3X)9(d==W)I 6.K&&E[e||"1x"](6[0],f)||W;J{c={};c[f]=d}I 6.N(G(a){L(H b 1i c)E.1x(e?6.R:6,b,E.1e(6,c[b],e,a,b))})},17:G(b,a){I 6.1x(b,a,"3C")},2g:G(e){9(1m e!="5i"&&e!=S)I 6.4n().3g(U.6F(e));H t="";E.N(e||6,G(){E.N(6.3j,G(){9(6.1y!=8)t+=6.1y!=1?6.6x:E.1b.2g([6])})});I t},5m:G(b){9(6[0])E(b,6[0].3H).6u().3d(6[0]).1X(G(){H a=6;1W(a.1w)a=a.1w;I a}).3g(6);I 6},8m:G(a){I 6.N(G(){E(6).6q().5m(a)})},8d:G(a){I 6.N(G(){E(6).5m(a)})},3g:G(){I 6.3z(1q,Q,1,G(a){6.58(a)})},6j:G(){I 6.3z(1q,Q,-1,G(a){6.3d(a,6.1w)})},6g:G(){I 6.3z(1q,P,1,G(a){6.12.3d(a,6)})},50:G(){I 6.3z(1q,P,-1,G(a){6.12.3d(a,6.2q)})},2D:G(){I 6.4Y||E([])},1Y:G(t){H b=E.1X(6,G(a){I E.1Y(t,a)});I 6.2o(/[^+>] [^+>]/.14(t)||t.1g("..")>-1?E.4V(b):b)},6u:G(e){H f=6.1X(G(){I 6.67?E(6.67)[0]:6.4R(Q)});H d=f.1Y("*").4O().N(G(){9(6[F]!=W)6[F]=S});9(e===Q)6.1Y("*").4O().N(G(i){H c=E.M(6,"2P");L(H a 1i c)L(H b 1i c[a])E.1j.1f(d[i],a,c[a][b],c[a][b].M)});I f},1E:G(t){I 6.2o(E.1n(t)&&E.2W(6,G(b,a){I t.16(b,[a])})||E.3m(t,6))},5V:G(t){I 
6.2o(t.1c==3X&&E.3m(t,6,Q)||E.2W(6,G(a){I(t.1c==1B||t.4c)?E.2A(a,t)<0:a!=t}))},1f:G(t){I 6.2o(E.1R(6.21(),t.1c==3X?E(t).21():t.K!=W&&(!t.11||E.11(t,"2Y"))?t:[t]))},3t:G(a){I a?E.3m(a,6).K>0:P},7c:G(a){I 6.3t("."+a)},3i:G(b){9(b==W){9(6.K){H c=6[0];9(E.11(c,"24")){H e=c.4Z,a=[],Y=c.Y,2G=c.O=="24-2G";9(e<0)I S;L(H i=2G?e:0,33=2G?e+1:Y.K;i<33;i++){H d=Y[i];9(d.26){H b=E.V.1h&&!d.9V["1Q"].9L?d.2g:d.1Q;9(2G)I b;a.1a(b)}}I a}J I 6[0].1Q.1p(/\\r/g,"")}}J I 6.N(G(){9(b.1c==1B&&/4k|5j/.14(6.O))6.2Q=(E.2A(6.1Q,b)>=0||E.2A(6.2H,b)>=0);J 9(E.11(6,"24")){H a=b.1c==1B?b:[b];E("9h",6).N(G(){6.26=(E.2A(6.1Q,a)>=0||E.2A(6.2g,a)>=0)});9(!a.K)6.4Z=-1}J 6.1Q=b})},4o:G(a){I a==W?(6.K?6[0].3O:S):6.4n().3g(a)},6H:G(a){I 6.50(a).28()},6E:G(i){I 6.2J(i,i+1)},2J:G(){I 6.2o(1B.3A.2J.16(6,1q))},1X:G(b){I 6.2o(E.1X(6,G(a,i){I b.2O(a,i,a)}))},4O:G(){I 6.1f(6.4Y)},3z:G(f,d,g,e){H c=6.K>1,a;I 6.N(G(){9(!a){a=E.4D(f,6.3H);9(g<0)a.8U()}H b=6;9(d&&E.11(6,"1I")&&E.11(a[0],"4m"))b=6.4l("1K")[0]||6.58(U.5B("1K"));E.N(a,G(){H a=c?6.4R(Q):6;9(!5A(0,a))e.2O(b,a)})})}};G 5A(i,b){H a=E.11(b,"1J");9(a){9(b.3k)E.3G({1d:b.3k,3e:P,1V:"1J"});J E.5f(b.2g||b.6s||b.3O||"");9(b.12)b.12.3b(b)}J 9(b.1y==1)E("1J",b).N(5A);I a}E.1k=E.1b.1k=G(){H c=1q[0]||{},a=1,2c=1q.K,5e=P;9(c.1c==8o){5e=c;c=1q[1]||{}}9(2c==1){c=6;a=0}H b;L(;a<2c;a++)9((b=1q[a])!=S)L(H i 1i b){9(c==b[i])6r;9(5e&&1m b[i]==\'5i\'&&c[i])E.1k(c[i],b[i]);J 9(b[i]!=W)c[i]=b[i]}I c};H F="15"+(1u 3D()).3B(),6p=0,5c={};E.1k({8a:G(a){18.$=D;9(a)18.15=w;I E},1n:G(a){I!!a&&1m a!="1M"&&!a.11&&a.1c!=1B&&/G/i.14(a+"")},4a:G(a){I a.2V&&!a.1G||a.37&&a.3H&&!a.3H.1G},5f:G(a){a=E.36(a);9(a){9(18.6l)18.6l(a);J 9(E.V.1N)18.56(a,0);J 3w.2O(18,a)}},11:G(b,a){I b.11&&b.11.27()==a.27()},1L:{},M:G(c,d,b){c=c==18?5c:c;H a=c[F];9(!a)a=c[F]=++6p;9(d&&!E.1L[a])E.1L[a]={};9(b!=W)E.1L[a][d]=b;I d?E.1L[a][d]:a},30:G(c,b){c=c==18?5c:c;H a=c[F];9(b){9(E.1L[a]){2E E.1L[a][b];b="";L(b 1i E.1L[a])1T;9(!b)E.30(c)}}J{2a{2E c[F]}29(e){9(c.53)c.53(F)}2E E.1L[a]}},N:G(a,b,c){9(c){9(a.K==W)L(H i 
1i a)b.16(a[i],c);J L(H i=0,48=a.K;i<48;i++)9(b.16(a[i],c)===P)1T}J{9(a.K==W)L(H i 1i a)b.2O(a[i],i,a[i]);J L(H i=0,48=a.K,3i=a[0];i<48&&b.2O(3i,i,3i)!==P;3i=a[++i]){}}I a},1e:G(c,b,d,e,a){9(E.1n(b))b=b.2O(c,[e]);H f=/z-?4I|7T-?7Q|1r|69|7P-?1H/i;I b&&b.1c==4W&&d=="3C"&&!f.14(a)?b+"2T":b},1o:{1f:G(b,c){E.N((c||"").2l(/\\s+/),G(i,a){9(!E.1o.3K(b.1o,a))b.1o+=(b.1o?" ":"")+a})},28:G(b,c){b.1o=c!=W?E.2W(b.1o.2l(/\\s+/),G(a){I!E.1o.3K(c,a)}).66(" "):""},3K:G(t,c){I E.2A(c,(t.1o||t).3s().2l(/\\s+/))>-1}},2k:G(e,o,f){L(H i 1i o){e.R["3r"+i]=e.R[i];e.R[i]=o[i]}f.16(e,[]);L(H i 1i o)e.R[i]=e.R["3r"+i]},17:G(e,p){9(p=="1H"||p=="2N"){H b={},42,41,d=["7J","7I","7G","7F"];E.N(d,G(){b["7C"+6]=0;b["7B"+6+"5Z"]=0});E.2k(e,b,G(){9(E(e).3t(\':3R\')){42=e.7A;41=e.7w}J{e=E(e.4R(Q)).1Y(":4k").5W("2Q").2D().17({4C:"1P",2X:"4F",19:"2Z",7o:"0",1S:"0"}).5R(e.12)[0];H a=E.17(e.12,"2X")||"3V";9(a=="3V")e.12.R.2X="7g";42=e.7e;41=e.7b;9(a=="3V")e.12.R.2X="3V";e.12.3b(e)}});I p=="1H"?42:41}I E.3C(e,p)},3C:G(h,j,i){H g,2w=[],2k=[];G 3n(a){9(!E.V.1N)I P;H b=U.3o.3Z(a,S);I!b||b.4y("3n")==""}9(j=="1r"&&E.V.1h){g=E.1x(h.R,"1r");I g==""?"1":g}9(j.1t(/4u/i))j=y;9(!i&&h.R[j])g=h.R[j];J 9(U.3o&&U.3o.3Z){9(j.1t(/4u/i))j="4u";j=j.1p(/([A-Z])/g,"-$1").2p();H d=U.3o.3Z(h,S);9(d&&!3n(h))g=d.4y(j);J{L(H a=h;a&&3n(a);a=a.12)2w.4w(a);L(a=0;a<2w.K;a++)9(3n(2w[a])){2k[a]=2w[a].R.19;2w[a].R.19="2Z"}g=j=="19"&&2k[2w.K-1]!=S?"2s":U.3o.3Z(h,S).4y(j)||"";L(a=0;a<2k.K;a++)9(2k[a]!=S)2w[a].R.19=2k[a]}9(j=="1r"&&g=="")g="1"}J 9(h.3Q){H f=j.1p(/\\-(\\w)/g,G(m,c){I c.27()});g=h.3Q[j]||h.3Q[f];9(!/^\\d+(2T)?$/i.14(g)&&/^\\d/.14(g)){H k=h.R.1S;H e=h.4v.1S;h.4v.1S=h.3Q.1S;h.R.1S=g||0;g=h.R.71+"2T";h.R.1S=k;h.4v.1S=e}}I g},4D:G(a,e){H r=[];e=e||U;E.N(a,G(i,d){9(!d)I;9(d.1c==4W)d=d.3s();9(1m d=="1M"){d=d.1p(/(<(\\w+)[^>]*?)\\/>/g,G(m,a,b){I b.1t(/^(70|6Z|6Y|9Q|4t|9N|9K|3a|9G|9E)$/i)?m:a+"></"+b+">"});H s=E.36(d).2p(),1s=e.5B("1s"),2x=[];H 
c=!s.1g("<9y")&&[1,"<24>","</24>"]||!s.1g("<9w")&&[1,"<6T>","</6T>"]||s.1t(/^<(9u|1K|9t|9r|9p)/)&&[1,"<1I>","</1I>"]||!s.1g("<4m")&&[2,"<1I><1K>","</1K></1I>"]||(!s.1g("<9m")||!s.1g("<9k"))&&[3,"<1I><1K><4m>","</4m></1K></1I>"]||!s.1g("<6Y")&&[2,"<1I><1K></1K><6L>","</6L></1I>"]||E.V.1h&&[1,"1s<1s>","</1s>"]||[0,"",""];1s.3O=c[1]+d+c[2];1W(c[0]--)1s=1s.5p;9(E.V.1h){9(!s.1g("<1I")&&s.1g("<1K")<0)2x=1s.1w&&1s.1w.3j;J 9(c[1]=="<1I>"&&s.1g("<1K")<0)2x=1s.3j;L(H n=2x.K-1;n>=0;--n)9(E.11(2x[n],"1K")&&!2x[n].3j.K)2x[n].12.3b(2x[n]);9(/^\\s/.14(d))1s.3d(e.6F(d.1t(/^\\s*/)[0]),1s.1w)}d=E.2h(1s.3j)}9(0===d.K&&(!E.11(d,"2Y")&&!E.11(d,"24")))I;9(d[0]==W||E.11(d,"2Y")||d.Y)r.1a(d);J r=E.1R(r,d)});I r},1x:G(c,d,a){H e=E.4a(c)?{}:E.5o;9(d=="26"&&E.V.1N)c.12.4Z;9(e[d]){9(a!=W)c[e[d]]=a;I c[e[d]]}J 9(E.V.1h&&d=="R")I E.1x(c.R,"9e",a);J 9(a==W&&E.V.1h&&E.11(c,"2Y")&&(d=="9d"||d=="9a"))I c.97(d).6x;J 9(c.37){9(a!=W){9(d=="O"&&E.11(c,"4t")&&c.12)6G"O 94 93\'t 92 91";c.90(d,a)}9(E.V.1h&&/6C|3k/.14(d)&&!E.4a(c))I c.4p(d,2);I c.4p(d)}J{9(d=="1r"&&E.V.1h){9(a!=W){c.69=1;c.1E=(c.1E||"").1p(/6O\\([^)]*\\)/,"")+(3I(a).3s()=="8S"?"":"6O(1r="+a*6A+")")}I c.1E?(3I(c.1E.1t(/1r=([^)]*)/)[1])/6A).3s():""}d=d.1p(/-([a-z])/8Q,G(z,b){I b.27()});9(a!=W)c[d]=a;I c[d]}},36:G(t){I(t||"").1p(/^\\s+|\\s+$/g,"")},2h:G(a){H r=[];9(1m a!="8P")L(H i=0,2c=a.K;i<2c;i++)r.1a(a[i]);J r=a.2J(0);I r},2A:G(b,a){L(H i=0,2c=a.K;i<2c;i++)9(a[i]==b)I i;I-1},1R:G(a,b){9(E.V.1h){L(H i=0;b[i];i++)9(b[i].1y!=8)a.1a(b[i])}J L(H i=0;b[i];i++)a.1a(b[i]);I a},4V:G(b){H r=[],2f={};2a{L(H i=0,6y=b.K;i<6y;i++){H a=E.M(b[i]);9(!2f[a]){2f[a]=Q;r.1a(b[i])}}}29(e){r=b}I r},2W:G(b,a,c){9(1m a=="1M")a=3w("P||G(a,i){I "+a+"}");H d=[];L(H i=0,4g=b.K;i<4g;i++)9(!c&&a(b[i],i)||c&&!a(b[i],i))d.1a(b[i]);I d},1X:G(c,b){9(1m b=="1M")b=3w("P||G(a){I "+b+"}");H d=[];L(H i=0,4g=c.K;i<4g;i++){H a=b(c[i],i);9(a!==S&&a!=W){9(a.1c!=1B)a=[a];d=d.8M(a)}}I d}});H v=8K.8I.2p();E.V={4s:(v.1t(/.+(?:8F|8E|8C|8B)[\\/: 
]([\\d.]+)/)||[])[1],1N:/6w/.14(v),34:/34/.14(v),1h:/1h/.14(v)&&!/34/.14(v),35:/35/.14(v)&&!/(8z|6w)/.14(v)};H y=E.V.1h?"4h":"5h";E.1k({5g:!E.V.1h||U.8y=="8x",4h:E.V.1h?"4h":"5h",5o:{"L":"8w","8v":"1o","4u":y,5h:y,4h:y,3O:"3O",1o:"1o",1Q:"1Q",3c:"3c",2Q:"2Q",8u:"8t",26:"26",8s:"8r"}});E.N({1D:"a.12",8q:"15.4e(a,\'12\')",8p:"15.2I(a,2,\'2q\')",8n:"15.2I(a,2,\'4d\')",8l:"15.4e(a,\'2q\')",8k:"15.4e(a,\'4d\')",8j:"15.5d(a.12.1w,a)",8i:"15.5d(a.1w)",6q:"15.11(a,\'8h\')?a.8f||a.8e.U:15.2h(a.3j)"},G(i,n){E.1b[i]=G(a){H b=E.1X(6,n);9(a&&1m a=="1M")b=E.3m(a,b);I 6.2o(E.4V(b))}});E.N({5R:"3g",8c:"6j",3d:"6g",8b:"50",89:"6H"},G(i,n){E.1b[i]=G(){H a=1q;I 6.N(G(){L(H j=0,2c=a.K;j<2c;j++)E(a[j])[n](6)})}});E.N({5W:G(a){E.1x(6,a,"");6.53(a)},88:G(c){E.1o.1f(6,c)},87:G(c){E.1o.28(6,c)},86:G(c){E.1o[E.1o.3K(6,c)?"28":"1f"](6,c)},28:G(a){9(!a||E.1E(a,[6]).r.K){E.30(6);6.12.3b(6)}},4n:G(){E("*",6).N(G(){E.30(6)});1W(6.1w)6.3b(6.1w)}},G(i,n){E.1b[i]=G(){I 6.N(n,1q)}});E.N(["85","5Z"],G(i,a){H n=a.2p();E.1b[n]=G(h){I 6[0]==18?E.V.1N&&3y["84"+a]||E.5g&&38.33(U.2V["5a"+a],U.1G["5a"+a])||U.1G["5a"+a]:6[0]==U?38.33(U.1G["6n"+a],U.1G["6m"+a]):h==W?(6.K?E.17(6[0],n):S):6.17(n,h.1c==3X?h:h+"2T")}});H C=E.V.1N&&3x(E.V.4s)<83?"(?:[\\\\w*57-]|\\\\\\\\.)":"(?:[\\\\w\\82-\\81*57-]|\\\\\\\\.)",6k=1u 47("^>\\\\s*("+C+"+)"),6i=1u 47("^("+C+"+)(#)("+C+"+)"),6h=1u 
47("^([#.]?)("+C+"*)");E.1k({55:{"":"m[2]==\'*\'||15.11(a,m[2])","#":"a.4p(\'22\')==m[2]",":":{80:"i<m[3]-0",7Z:"i>m[3]-0",2I:"m[3]-0==i",6E:"m[3]-0==i",3v:"i==0",3u:"i==r.K-1",6f:"i%2==0",6e:"i%2","3v-46":"a.12.4l(\'*\')[0]==a","3u-46":"15.2I(a.12.5p,1,\'4d\')==a","7X-46":"!15.2I(a.12.5p,2,\'4d\')",1D:"a.1w",4n:"!a.1w",7W:"(a.6s||a.7V||15(a).2g()||\'\').1g(m[3])>=0",3R:\'"1P"!=a.O&&15.17(a,"19")!="2s"&&15.17(a,"4C")!="1P"\',1P:\'"1P"==a.O||15.17(a,"19")=="2s"||15.17(a,"4C")=="1P"\',7U:"!a.3c",3c:"a.3c",2Q:"a.2Q",26:"a.26||15.1x(a,\'26\')",2g:"\'2g\'==a.O",4k:"\'4k\'==a.O",5j:"\'5j\'==a.O",54:"\'54\'==a.O",52:"\'52\'==a.O",51:"\'51\'==a.O",6d:"\'6d\'==a.O",6c:"\'6c\'==a.O",2r:\'"2r"==a.O||15.11(a,"2r")\',4t:"/4t|24|6b|2r/i.14(a.11)",3K:"15.1Y(m[3],a).K",7S:"/h\\\\d/i.14(a.11)",7R:"15.2W(15.32,G(1b){I a==1b.T;}).K"}},6a:[/^(\\[) *@?([\\w-]+) *([!*$^~=]*) *(\'?"?)(.*?)\\4 *\\]/,/^(:)([\\w-]+)\\("?\'?(.*?(\\(.*?\\))?[^(]*?)"?\'?\\)/,1u 47("^([:.#]*)("+C+"+)")],3m:G(a,c,b){H d,2b=[];1W(a&&a!=d){d=a;H f=E.1E(a,c,b);a=f.t.1p(/^\\s*,\\s*/,"");2b=b?c=f.r:E.1R(2b,f.r)}I 2b},1Y:G(t,o){9(1m t!="1M")I[t];9(o&&!o.1y)o=S;o=o||U;H d=[o],2f=[],3u;1W(t&&3u!=t){H r=[];3u=t;t=E.36(t);H l=P;H g=6k;H m=g.2S(t);9(m){H p=m[1].27();L(H i=0;d[i];i++)L(H c=d[i].1w;c;c=c.2q)9(c.1y==1&&(p=="*"||c.11.27()==p.27()))r.1a(c);d=r;t=t.1p(g,"");9(t.1g(" ")==0)6r;l=Q}J{g=/^([>+~])\\s*(\\w*)/i;9((m=g.2S(t))!=S){r=[];H p=m[2],1R={};m=m[1];L(H j=0,31=d.K;j<31;j++){H n=m=="~"||m=="+"?d[j].2q:d[j].1w;L(;n;n=n.2q)9(n.1y==1){H h=E.M(n);9(m=="~"&&1R[h])1T;9(!p||n.11.27()==p.27()){9(m=="~")1R[h]=Q;r.1a(n)}9(m=="+")1T}}d=r;t=E.36(t.1p(g,""));l=Q}}9(t&&!l){9(!t.1g(",")){9(o==d[0])d.44();2f=E.1R(2f,d);r=d=[o];t=" "+t.68(1,t.K)}J{H k=6i;H m=k.2S(t);9(m){m=[0,m[2],m[3],m[1]]}J{k=6h;m=k.2S(t)}m[2]=m[2].1p(/\\\\/g,"");H f=d[d.K-1];9(m[1]=="#"&&f&&f.3S&&!E.4a(f)){H q=f.3S(m[2]);9((E.V.1h||E.V.34)&&q&&1m q.22=="1M"&&q.22!=m[2])q=E(\'[@22="\'+m[2]+\'"]\',f)[0];d=r=q&&(!m[3]||E.11(q,m[3]))?[q]:[]}J{L(H i=0;d[i];i++){H 
a=m[1]=="#"&&m[3]?m[3]:m[1]!=""||m[0]==""?"*":m[2];9(a=="*"&&d[i].11.2p()=="5i")a="3a";r=E.1R(r,d[i].4l(a))}9(m[1]==".")r=E.4X(r,m[2]);9(m[1]=="#"){H e=[];L(H i=0;r[i];i++)9(r[i].4p("22")==m[2]){e=[r[i]];1T}r=e}d=r}t=t.1p(k,"")}}9(t){H b=E.1E(t,r);d=r=b.r;t=E.36(b.t)}}9(t)d=[];9(d&&o==d[0])d.44();2f=E.1R(2f,d);I 2f},4X:G(r,m,a){m=" "+m+" ";H c=[];L(H i=0;r[i];i++){H b=(" "+r[i].1o+" ").1g(m)>=0;9(!a&&b||a&&!b)c.1a(r[i])}I c},1E:G(t,r,h){H d;1W(t&&t!=d){d=t;H p=E.6a,m;L(H i=0;p[i];i++){m=p[i].2S(t);9(m){t=t.7O(m[0].K);m[2]=m[2].1p(/\\\\/g,"");1T}}9(!m)1T;9(m[1]==":"&&m[2]=="5V")r=E.1E(m[3],r,Q).r;J 9(m[1]==".")r=E.4X(r,m[2],h);J 9(m[1]=="["){H g=[],O=m[3];L(H i=0,31=r.K;i<31;i++){H a=r[i],z=a[E.5o[m[2]]||m[2]];9(z==S||/6C|3k|26/.14(m[2]))z=E.1x(a,m[2])||\'\';9((O==""&&!!z||O=="="&&z==m[5]||O=="!="&&z!=m[5]||O=="^="&&z&&!z.1g(m[5])||O=="$="&&z.68(z.K-m[5].K)==m[5]||(O=="*="||O=="~=")&&z.1g(m[5])>=0)^h)g.1a(a)}r=g}J 9(m[1]==":"&&m[2]=="2I-46"){H e={},g=[],14=/(\\d*)n\\+?(\\d*)/.2S(m[3]=="6f"&&"2n"||m[3]=="6e"&&"2n+1"||!/\\D/.14(m[3])&&"n+"+m[3]||m[3]),3v=(14[1]||1)-0,d=14[2]-0;L(H i=0,31=r.K;i<31;i++){H j=r[i],12=j.12,22=E.M(12);9(!e[22]){H c=1;L(H n=12.1w;n;n=n.2q)9(n.1y==1)n.4U=c++;e[22]=Q}H b=P;9(3v==1){9(d==0||j.4U==d)b=Q}J 9((j.4U+d)%3v==0)b=Q;9(b^h)g.1a(j)}r=g}J{H f=E.55[m[1]];9(1m f!="1M")f=E.55[m[1]][m[2]];f=3w("P||G(a,i){I "+f+"}");r=E.2W(r,f,h)}}I{r:r,t:t}},4e:G(b,c){H d=[];H a=b[c];1W(a&&a!=U){9(a.1y==1)d.1a(a);a=a[c]}I d},2I:G(a,e,c,b){e=e||1;H d=0;L(;a;a=a[c])9(a.1y==1&&++d==e)1T;I a},5d:G(n,a){H r=[];L(;n;n=n.2q){9(n.1y==1&&(!a||n!=a))r.1a(n)}I r}});E.1j={1f:G(g,e,c,h){9(E.V.1h&&g.4j!=W)g=18;9(!c.2u)c.2u=6.2u++;9(h!=W){H d=c;c=G(){I d.16(6,1q)};c.M=h;c.2u=d.2u}H i=e.2l(".");e=i[0];c.O=i[1];H b=E.M(g,"2P")||E.M(g,"2P",{});H f=E.M(g,"2t",G(){H a;9(1m E=="W"||E.1j.4T)I a;a=E.1j.2t.16(g,1q);I a});H j=b[e];9(!j){j=b[e]={};9(g.4S)g.4S(e,f,P);J g.7N("43"+e,f)}j[c.2u]=c;6.1Z[e]=Q},2u:1,1Z:{},28:G(d,c,b){H e=E.M(d,"2P"),2L,4I;9(1m c=="1M"){H 
a=c.2l(".");c=a[0]}9(e){9(c&&c.O){b=c.4Q;c=c.O}9(!c){L(c 1i e)6.28(d,c)}J 9(e[c]){9(b)2E e[c][b.2u];J L(b 1i e[c])9(!a[1]||e[c][b].O==a[1])2E e[c][b];L(2L 1i e[c])1T;9(!2L){9(d.4P)d.4P(c,E.M(d,"2t"),P);J d.7M("43"+c,E.M(d,"2t"));2L=S;2E e[c]}}L(2L 1i e)1T;9(!2L){E.30(d,"2P");E.30(d,"2t")}}},1F:G(d,b,e,c,f){b=E.2h(b||[]);9(!e){9(6.1Z[d])E("*").1f([18,U]).1F(d,b)}J{H a,2L,1b=E.1n(e[d]||S),4N=!b[0]||!b[0].2M;9(4N)b.4w(6.4M({O:d,2m:e}));b[0].O=d;9(E.1n(E.M(e,"2t")))a=E.M(e,"2t").16(e,b);9(!1b&&e["43"+d]&&e["43"+d].16(e,b)===P)a=P;9(4N)b.44();9(f&&f.16(e,b)===P)a=P;9(1b&&c!==P&&a!==P&&!(E.11(e,\'a\')&&d=="4L")){6.4T=Q;e[d]()}6.4T=P}I a},2t:G(d){H a;d=E.1j.4M(d||18.1j||{});H b=d.O.2l(".");d.O=b[0];H c=E.M(6,"2P")&&E.M(6,"2P")[d.O],3q=1B.3A.2J.2O(1q,1);3q.4w(d);L(H j 1i c){3q[0].4Q=c[j];3q[0].M=c[j].M;9(!b[1]||c[j].O==b[1]){H e=c[j].16(6,3q);9(a!==P)a=e;9(e===P){d.2M();d.3p()}}}9(E.V.1h)d.2m=d.2M=d.3p=d.4Q=d.M=S;I a},4M:G(c){H a=c;c=E.1k({},a);c.2M=G(){9(a.2M)a.2M();a.7L=P};c.3p=G(){9(a.3p)a.3p();a.7K=Q};9(!c.2m&&c.65)c.2m=c.65;9(E.V.1N&&c.2m.1y==3)c.2m=a.2m.12;9(!c.4K&&c.4J)c.4K=c.4J==c.2m?c.7H:c.4J;9(c.64==S&&c.63!=S){H e=U.2V,b=U.1G;c.64=c.63+(e&&e.2R||b.2R||0);c.7E=c.7D+(e&&e.2B||b.2B||0)}9(!c.3Y&&(c.61||c.60))c.3Y=c.61||c.60;9(!c.5F&&c.5D)c.5F=c.5D;9(!c.3Y&&c.2r)c.3Y=(c.2r&1?1:(c.2r&2?3:(c.2r&4?2:0)));I c}};E.1b.1k({3W:G(c,a,b){I c=="5Y"?6.2G(c,a,b):6.N(G(){E.1j.1f(6,c,b||a,b&&a)})},2G:G(d,b,c){I 6.N(G(){E.1j.1f(6,d,G(a){E(6).5X(a);I(c||b).16(6,1q)},c&&b)})},5X:G(a,b){I 6.N(G(){E.1j.28(6,a,b)})},1F:G(c,a,b){I 6.N(G(){E.1j.1F(c,a,6,Q,b)})},7x:G(c,a,b){9(6[0])I E.1j.1F(c,a,6[0],P,b)},25:G(){H a=1q;I 6.4L(G(e){6.4H=0==6.4H?1:0;e.2M();I a[6.4H].16(6,[e])||P})},7v:G(f,g){G 4G(e){H p=e.4K;1W(p&&p!=6)2a{p=p.12}29(e){p=6};9(p==6)I P;I(e.O=="4x"?f:g).16(6,[e])}I 6.4x(4G).5U(4G)},2d:G(f){5T();9(E.3T)f.16(U,[E]);J E.3l.1a(G(){I f.16(6,[E])});I 
6}});E.1k({3T:P,3l:[],2d:G(){9(!E.3T){E.3T=Q;9(E.3l){E.N(E.3l,G(){6.16(U)});E.3l=S}9(E.V.35||E.V.34)U.4P("5S",E.2d,P);9(!18.7t.K)E(18).39(G(){E("#4E").28()})}}});E.N(("7s,7r,39,7q,6n,5Y,4L,7p,"+"7n,7m,7l,4x,5U,7k,24,"+"51,7j,7i,7h,3U").2l(","),G(i,o){E.1b[o]=G(f){I f?6.3W(o,f):6.1F(o)}});H x=P;G 5T(){9(x)I;x=Q;9(E.V.35||E.V.34)U.4S("5S",E.2d,P);J 9(E.V.1h){U.7f("<7d"+"7y 22=4E 7z=Q "+"3k=//:><\\/1J>");H a=U.3S("4E");9(a)a.62=G(){9(6.2C!="1l")I;E.2d()};a=S}J 9(E.V.1N)E.4B=4j(G(){9(U.2C=="5Q"||U.2C=="1l"){4A(E.4B);E.4B=S;E.2d()}},10);E.1j.1f(18,"39",E.2d)}E.1b.1k({39:G(g,d,c){9(E.1n(g))I 6.3W("39",g);H e=g.1g(" ");9(e>=0){H i=g.2J(e,g.K);g=g.2J(0,e)}c=c||G(){};H f="4z";9(d)9(E.1n(d)){c=d;d=S}J{d=E.3a(d);f="5P"}H h=6;E.3G({1d:g,O:f,M:d,1l:G(a,b){9(b=="1C"||b=="5O")h.4o(i?E("<1s/>").3g(a.40.1p(/<1J(.|\\s)*?\\/1J>/g,"")).1Y(i):a.40);56(G(){h.N(c,[a.40,b,a])},13)}});I 6},7a:G(){I E.3a(6.5M())},5M:G(){I 6.1X(G(){I E.11(6,"2Y")?E.2h(6.79):6}).1E(G(){I 6.2H&&!6.3c&&(6.2Q||/24|6b/i.14(6.11)||/2g|1P|52/i.14(6.O))}).1X(G(i,c){H b=E(6).3i();I b==S?S:b.1c==1B?E.1X(b,G(a,i){I{2H:c.2H,1Q:a}}):{2H:c.2H,1Q:b}}).21()}});E.N("5L,5K,6t,5J,5I,5H".2l(","),G(i,o){E.1b[o]=G(f){I 6.3W(o,f)}});H B=(1u 3D).3B();E.1k({21:G(d,b,a,c){9(E.1n(b)){a=b;b=S}I E.3G({O:"4z",1d:d,M:b,1C:a,1V:c})},78:G(b,a){I E.21(b,S,a,"1J")},77:G(c,b,a){I E.21(c,b,a,"45")},76:G(d,b,a,c){9(E.1n(b)){a=b;b={}}I E.3G({O:"5P",1d:d,M:b,1C:a,1V:c})},75:G(a){E.1k(E.59,a)},59:{1Z:Q,O:"4z",2z:0,5G:"74/x-73-2Y-72",6o:Q,3e:Q,M:S},49:{},3G:G(s){H f,2y=/=(\\?|%3F)/g,1v,M;s=E.1k(Q,s,E.1k(Q,{},E.59,s));9(s.M&&s.6o&&1m s.M!="1M")s.M=E.3a(s.M);9(s.1V=="4b"){9(s.O.2p()=="21"){9(!s.1d.1t(2y))s.1d+=(s.1d.1t(/\\?/)?"&":"?")+(s.4b||"5E")+"=?"}J 9(!s.M||!s.M.1t(2y))s.M=(s.M?s.M+"&":"")+(s.4b||"5E")+"=?";s.1V="45"}9(s.1V=="45"&&(s.M&&s.M.1t(2y)||s.1d.1t(2y))){f="4b"+B++;9(s.M)s.M=s.M.1p(2y,"="+f);s.1d=s.1d.1p(2y,"="+f);s.1V="1J";18[f]=G(a){M=a;1C();1l();18[f]=W;2a{2E 
18[f]}29(e){}}}9(s.1V=="1J"&&s.1L==S)s.1L=P;9(s.1L===P&&s.O.2p()=="21")s.1d+=(s.1d.1t(/\\?/)?"&":"?")+"57="+(1u 3D()).3B();9(s.M&&s.O.2p()=="21"){s.1d+=(s.1d.1t(/\\?/)?"&":"?")+s.M;s.M=S}9(s.1Z&&!E.5b++)E.1j.1F("5L");9(!s.1d.1g("8g")&&s.1V=="1J"){H h=U.4l("9U")[0];H g=U.5B("1J");g.3k=s.1d;9(!f&&(s.1C||s.1l)){H j=P;g.9R=g.62=G(){9(!j&&(!6.2C||6.2C=="5Q"||6.2C=="1l")){j=Q;1C();1l();h.3b(g)}}}h.58(g);I}H k=P;H i=18.6X?1u 6X("9P.9O"):1u 6W();i.9M(s.O,s.1d,s.3e);9(s.M)i.5C("9J-9I",s.5G);9(s.5y)i.5C("9H-5x-9F",E.49[s.1d]||"9D, 9C 9B 9A 5v:5v:5v 9z");i.5C("X-9x-9v","6W");9(s.6U)s.6U(i);9(s.1Z)E.1j.1F("5H",[i,s]);H c=G(a){9(!k&&i&&(i.2C==4||a=="2z")){k=Q;9(d){4A(d);d=S}1v=a=="2z"&&"2z"||!E.6S(i)&&"3U"||s.5y&&E.6R(i,s.1d)&&"5O"||"1C";9(1v=="1C"){2a{M=E.6Q(i,s.1V)}29(e){1v="5k"}}9(1v=="1C"){H b;2a{b=i.5s("6P-5x")}29(e){}9(s.5y&&b)E.49[s.1d]=b;9(!f)1C()}J E.5r(s,i,1v);1l();9(s.3e)i=S}};9(s.3e){H d=4j(c,13);9(s.2z>0)56(G(){9(i){i.9q();9(!k)c("2z")}},s.2z)}2a{i.9o(s.M)}29(e){E.5r(s,i,S,e)}9(!s.3e)c();I i;G 1C(){9(s.1C)s.1C(M,1v);9(s.1Z)E.1j.1F("5I",[i,s])}G 1l(){9(s.1l)s.1l(i,1v);9(s.1Z)E.1j.1F("6t",[i,s]);9(s.1Z&&!--E.5b)E.1j.1F("5K")}},5r:G(s,a,b,e){9(s.3U)s.3U(a,b,e);9(s.1Z)E.1j.1F("5J",[a,s,e])},5b:0,6S:G(r){2a{I!r.1v&&9n.9l=="54:"||(r.1v>=6N&&r.1v<9j)||r.1v==6M||E.V.1N&&r.1v==W}29(e){}I P},6R:G(a,c){2a{H b=a.5s("6P-5x");I a.1v==6M||b==E.49[c]||E.V.1N&&a.1v==W}29(e){}I P},6Q:G(r,b){H c=r.5s("9i-O");H d=b=="6K"||!b&&c&&c.1g("6K")>=0;H a=d?r.9g:r.40;9(d&&a.2V.37=="5k")6G"5k";9(b=="1J")E.5f(a);9(b=="45")a=3w("("+a+")");I a},3a:G(a){H s=[];9(a.1c==1B||a.4c)E.N(a,G(){s.1a(3f(6.2H)+"="+3f(6.1Q))});J L(H j 1i a)9(a[j]&&a[j].1c==1B)E.N(a[j],G(){s.1a(3f(j)+"="+3f(6))});J s.1a(3f(j)+"="+3f(a[j]));I s.66("&").1p(/%20/g,"+")}});E.1b.1k({1A:G(b,a){I b?6.1U({1H:"1A",2N:"1A",1r:"1A"},b,a):6.1E(":1P").N(G(){6.R.19=6.3h?6.3h:"";9(E.17(6,"19")=="2s")6.R.19="2Z"}).2D()},1z:G(b,a){I 
b?6.1U({1H:"1z",2N:"1z",1r:"1z"},b,a):6.1E(":3R").N(G(){6.3h=6.3h||E.17(6,"19");9(6.3h=="2s")6.3h="2Z";6.R.19="2s"}).2D()},6J:E.1b.25,25:G(a,b){I E.1n(a)&&E.1n(b)?6.6J(a,b):a?6.1U({1H:"25",2N:"25",1r:"25"},a,b):6.N(G(){E(6)[E(6).3t(":1P")?"1A":"1z"]()})},9c:G(b,a){I 6.1U({1H:"1A"},b,a)},9b:G(b,a){I 6.1U({1H:"1z"},b,a)},99:G(b,a){I 6.1U({1H:"25"},b,a)},98:G(b,a){I 6.1U({1r:"1A"},b,a)},96:G(b,a){I 6.1U({1r:"1z"},b,a)},95:G(c,a,b){I 6.1U({1r:a},c,b)},1U:G(k,i,h,g){H j=E.6D(i,h,g);I 6[j.3L===P?"N":"3L"](G(){j=E.1k({},j);H f=E(6).3t(":1P"),3y=6;L(H p 1i k){9(k[p]=="1z"&&f||k[p]=="1A"&&!f)I E.1n(j.1l)&&j.1l.16(6);9(p=="1H"||p=="2N"){j.19=E.17(6,"19");j.2U=6.R.2U}}9(j.2U!=S)6.R.2U="1P";j.3M=E.1k({},k);E.N(k,G(c,a){H e=1u E.2j(3y,j,c);9(/25|1A|1z/.14(a))e[a=="25"?f?"1A":"1z":a](k);J{H b=a.3s().1t(/^([+-]=)?([\\d+-.]+)(.*)$/),1O=e.2b(Q)||0;9(b){H d=3I(b[2]),2i=b[3]||"2T";9(2i!="2T"){3y.R[c]=(d||1)+2i;1O=((d||1)/e.2b(Q))*1O;3y.R[c]=1O+2i}9(b[1])d=((b[1]=="-="?-1:1)*d)+1O;e.3N(1O,d,2i)}J e.3N(1O,a,"")}});I Q})},3L:G(a,b){9(E.1n(a)){b=a;a="2j"}9(!a||(1m a=="1M"&&!b))I A(6[0],a);I 6.N(G(){9(b.1c==1B)A(6,a,b);J{A(6,a).1a(b);9(A(6,a).K==1)b.16(6)}})},9f:G(){H a=E.32;I 6.N(G(){L(H i=0;i<a.K;i++)9(a[i].T==6)a.6I(i--,1)}).5n()}});H A=G(b,c,a){9(!b)I;H q=E.M(b,c+"3L");9(!q||a)q=E.M(b,c+"3L",a?E.2h(a):[]);I q};E.1b.5n=G(a){a=a||"2j";I 6.N(G(){H q=A(6,a);q.44();9(q.K)q[0].16(6)})};E.1k({6D:G(b,a,c){H d=b&&b.1c==8Z?b:{1l:c||!c&&a||E.1n(b)&&b,2e:b,3J:c&&a||a&&a.1c!=8Y&&a};d.2e=(d.2e&&d.2e.1c==4W?d.2e:{8X:8W,8V:6N}[d.2e])||8T;d.3r=d.1l;d.1l=G(){E(6).5n();9(E.1n(d.3r))d.3r.16(6)};I d},3J:{6B:G(p,n,b,a){I b+a*p},5q:G(p,n,b,a){I((-38.9s(p*38.8R)/2)+0.5)*a+b}},32:[],2j:G(b,c,a){6.Y=c;6.T=b;6.1e=a;9(!c.3P)c.3P={}}});E.2j.3A={4r:G(){9(6.Y.2F)6.Y.2F.16(6.T,[6.2v,6]);(E.2j.2F[6.1e]||E.2j.2F.6z)(6);9(6.1e=="1H"||6.1e=="2N")6.T.R.19="2Z"},2b:G(a){9(6.T[6.1e]!=S&&6.T.R[6.1e]==S)I 6.T[6.1e];H r=3I(E.3C(6.T,6.1e,a));I r&&r>-8O?r:3I(E.17(6.T,6.1e))||0},3N:G(c,b,e){6.5u=(1u 
3D()).3B();6.1O=c;6.2D=b;6.2i=e||6.2i||"2T";6.2v=6.1O;6.4q=6.4i=0;6.4r();H f=6;G t(){I f.2F()}t.T=6.T;E.32.1a(t);9(E.32.K==1){H d=4j(G(){H a=E.32;L(H i=0;i<a.K;i++)9(!a[i]())a.6I(i--,1);9(!a.K)4A(d)},13)}},1A:G(){6.Y.3P[6.1e]=E.1x(6.T.R,6.1e);6.Y.1A=Q;6.3N(0,6.2b());9(6.1e=="2N"||6.1e=="1H")6.T.R[6.1e]="8N";E(6.T).1A()},1z:G(){6.Y.3P[6.1e]=E.1x(6.T.R,6.1e);6.Y.1z=Q;6.3N(6.2b(),0)},2F:G(){H t=(1u 3D()).3B();9(t>6.Y.2e+6.5u){6.2v=6.2D;6.4q=6.4i=1;6.4r();6.Y.3M[6.1e]=Q;H a=Q;L(H i 1i 6.Y.3M)9(6.Y.3M[i]!==Q)a=P;9(a){9(6.Y.19!=S){6.T.R.2U=6.Y.2U;6.T.R.19=6.Y.19;9(E.17(6.T,"19")=="2s")6.T.R.19="2Z"}9(6.Y.1z)6.T.R.19="2s";9(6.Y.1z||6.Y.1A)L(H p 1i 6.Y.3M)E.1x(6.T.R,p,6.Y.3P[p])}9(a&&E.1n(6.Y.1l))6.Y.1l.16(6.T);I P}J{H n=t-6.5u;6.4i=n/6.Y.2e;6.4q=E.3J[6.Y.3J||(E.3J.5q?"5q":"6B")](6.4i,n,0,1,6.Y.2e);6.2v=6.1O+((6.2D-6.1O)*6.4q);6.4r()}I Q}};E.2j.2F={2R:G(a){a.T.2R=a.2v},2B:G(a){a.T.2B=a.2v},1r:G(a){E.1x(a.T.R,"1r",a.2v)},6z:G(a){a.T.R[a.1e]=a.2v+a.2i}};E.1b.6m=G(){H c=0,3E=0,T=6[0],5t;9(T)8L(E.V){H b=E.17(T,"2X")=="4F",1D=T.12,23=T.23,2K=T.3H,4f=1N&&3x(4s)<8J;9(T.6V){5w=T.6V();1f(5w.1S+38.33(2K.2V.2R,2K.1G.2R),5w.3E+38.33(2K.2V.2B,2K.1G.2B));9(1h){H d=E("4o").17("8H");d=(d=="8G"||E.5g&&3x(4s)>=7)&&2||d;1f(-d,-d)}}J{1f(T.5l,T.5z);1W(23){1f(23.5l,23.5z);9(35&&/^t[d|h]$/i.14(1D.37)||!4f)d(23);9(4f&&!b&&E.17(23,"2X")=="4F")b=Q;23=23.23}1W(1D.37&&!/^1G|4o$/i.14(1D.37)){9(!/^8D|1I-9S.*$/i.14(E.17(1D,"19")))1f(-1D.2R,-1D.2B);9(35&&E.17(1D,"2U")!="3R")d(1D);1D=1D.12}9(4f&&b)1f(-2K.1G.5l,-2K.1G.5z)}5t={3E:3E,1S:c}}I 5t;G d(a){1f(E.17(a,"9T"),E.17(a,"8A"))}G 
1f(l,t){c+=3x(l)||0;3E+=3x(t)||0}}})();',62,616,'||||||this|||if|||||||||||||||||||||||||||||||||function|var|return|else|length|for|data|each|type|false|true|style|null|elem|document|browser|undefined||options|||nodeName|parentNode||test|jQuery|apply|css|window|display|push|fn|constructor|url|prop|add|indexOf|msie|in|event|extend|complete|typeof|isFunction|className|replace|arguments|opacity|div|match|new|status|firstChild|attr|nodeType|hide|show|Array|success|parent|filter|trigger|body|height|table|script|tbody|cache|string|safari|start|hidden|value|merge|left|break|animate|dataType|while|map|find|global||get|id|offsetParent|select|toggle|selected|toUpperCase|remove|catch|try|cur|al|ready|duration|done|text|makeArray|unit|fx|swap|split|target||pushStack|toLowerCase|nextSibling|button|none|handle|guid|now|stack|tb|jsre|timeout|inArray|scrollTop|readyState|end|delete|step|one|name|nth|slice|doc|ret|preventDefault|width|call|events|checked|scrollLeft|exec|px|overflow|documentElement|grep|position|form|block|removeData|rl|timers|max|opera|mozilla|trim|tagName|Math|load|param|removeChild|disabled|insertBefore|async|encodeURIComponent|append|oldblock|val|childNodes|src|readyList|multiFilter|color|defaultView|stopPropagation|args|old|toString|is|last|first|eval|parseInt|self|domManip|prototype|getTime|curCSS|Date|top||ajax|ownerDocument|parseFloat|easing|has|queue|curAnim|custom|innerHTML|orig|currentStyle|visible|getElementById|isReady|error|static|bind|String|which|getComputedStyle|responseText|oWidth|oHeight|on|shift|json|child|RegExp|ol|lastModified|isXMLDoc|jsonp|jquery|previousSibling|dir|safari2|el|styleFloat|state|setInterval|radio|getElementsByTagName|tr|empty|html|getAttribute|pos|update|version|input|float|runtimeStyle|unshift|mouseover|getPropertyValue|GET|clearInterval|safariTimer|visibility|clean|__ie_init|absolute|handleHover|lastToggle|index|fromElement|relatedTarget|click|fix|evt|andSelf|removeEventListener|handler|cloneNode|addEventListener|triggered|no
deIndex|unique|Number|classFilter|prevObject|selectedIndex|after|submit|password|removeAttribute|file|expr|setTimeout|_|appendChild|ajaxSettings|client|active|win|sibling|deep|globalEval|boxModel|cssFloat|object|checkbox|parsererror|offsetLeft|wrapAll|dequeue|props|lastChild|swing|handleError|getResponseHeader|results|startTime|00|box|Modified|ifModified|offsetTop|evalScript|createElement|setRequestHeader|ctrlKey|callback|metaKey|contentType|ajaxSend|ajaxSuccess|ajaxError|ajaxStop|ajaxStart|serializeArray|init|notmodified|POST|loaded|appendTo|DOMContentLoaded|bindReady|mouseout|not|removeAttr|unbind|unload|Width|keyCode|charCode|onreadystatechange|clientX|pageX|srcElement|join|outerHTML|substr|zoom|parse|textarea|reset|image|odd|even|before|quickClass|quickID|prepend|quickChild|execScript|offset|scroll|processData|uuid|contents|continue|textContent|ajaxComplete|clone|setArray|webkit|nodeValue|fl|_default|100|linear|href|speed|eq|createTextNode|throw|replaceWith|splice|_toggle|xml|colgroup|304|200|alpha|Last|httpData|httpNotModified|httpSuccess|fieldset|beforeSend|getBoundingClientRect|XMLHttpRequest|ActiveXObject|col|br|abbr|pixelLeft|urlencoded|www|application|ajaxSetup|post|getJSON|getScript|elements|serialize|clientWidth|hasClass|scr|clientHeight|write|relative|keyup|keypress|keydown|change|mousemove|mouseup|mousedown|right|dblclick|resize|focus|blur|frames|instanceof|hover|offsetWidth|triggerHandler|ipt|defer|offsetHeight|border|padding|clientY|pageY|Left|Right|toElement|Bottom|Top|cancelBubble|returnValue|detachEvent|attachEvent|substring|line|weight|animated|header|font|enabled|innerText|contains|only|size|gt|lt|uFFFF|u0128|417|inner|Height|toggleClass|removeClass|addClass|replaceAll|noConflict|insertAfter|prependTo|wrap|contentWindow|contentDocument|http|iframe|children|siblings|prevAll|nextAll|wrapInner|prev|Boolean|next|parents|maxLength|maxlength|readOnly|readonly|class|htmlFor|CSS1Compat|compatMode|compatible|borderTopWidth|ie|ra|inline|it|rv|medium|borde
rWidth|userAgent|522|navigator|with|concat|1px|10000|array|ig|PI|NaN|400|reverse|fast|600|slow|Function|Object|setAttribute|changed|be|can|property|fadeTo|fadeOut|getAttributeNode|fadeIn|slideToggle|method|slideUp|slideDown|action|cssText|stop|responseXML|option|content|300|th|protocol|td|location|send|cap|abort|colg|cos|tfoot|thead|With|leg|Requested|opt|GMT|1970|Jan|01|Thu|area|Since|hr|If|Type|Content|meta|specified|open|link|XMLHTTP|Microsoft|img|onload|row|borderLeftWidth|head|attributes'.split('|'),0,{}))
|
Apydia
|
/Apydia-0.0.2.tar.gz/Apydia-0.0.2/apydia/themes/apydia/js/jquery-1.2.1.pack.js
|
jquery-1.2.1.pack.js
|
import re
from apydia.renderers.base import HTMLRenderer
from markdown import Markdown, Preprocessor
from pygments import highlight
from pygments.formatters import HtmlFormatter
from pygments.lexers import get_lexer_by_name, guess_lexer, TextLexer
from pygments.util import ClassNotFound
__all__ = ["MarkdownRenderer"]
class Highlighter(object):
lang_detect = re.compile(r"""
(?:(?:::+)|(?P<shebang>[#]!))
(?P<path>(?:/\w+)*[/ ])?
(?P<lang>\w*)
""", re.VERBOSE)
def __init__(self, text):
self.text = text
self.formatter = HtmlFormatter(cssclass="source")
self.lang = None
lines = self.text.splitlines()
first_line = lines.pop(0)
matches = self.lang_detect.search(first_line)
if matches:
try:
self.lang = matches.group("lang").lower()
except IndexError:
pass
if matches.group("path"):
lines.insert(0, first_line)
else:
lines.insert(0, first_line)
self.text = "\n".join(lines).strip("\n")
def highlight(self):
if self.lang:
try:
lexer = get_lexer_by_name(self.lang)
except ClassNotFound:
# an unknown language marker should not crash rendering
lexer = TextLexer()
else:
try:
lexer = guess_lexer(self.text)
except ClassNotFound:
lexer = get_lexer_by_name("text")
return highlight(self.text, lexer, self.formatter)
class MarkdownRenderer(HTMLRenderer):
name = "markdown"
def __init__(self):
self.markdown = md = Markdown(safe_mode=False)
def _highlight_block(parent_elem, lines, in_list):
detabbed, rest = md.blockGuru.detectTabbed(lines)
text = "\n".join(detabbed).rstrip() + "\n"
code = Highlighter(text)
placeholder = md.htmlStash.store(code.highlight())
parent_elem.appendChild(md.doc.createTextNode(placeholder))
md._processSection(parent_elem, rest, in_list)
self.markdown._processCodeBlock = _highlight_block
def _render(self, source):
if not source:
return u""
return self.markdown.__str__(source)
|
Apydia
|
/Apydia-0.0.2.tar.gz/Apydia-0.0.2/apydia/renderers/markdownrenderer.py
|
markdownrenderer.py
|
from BeautifulSoup import BeautifulSoup
class Renderer(object):
"""
Basic docstring-parser
Just passes the text through, without any formatting
"""
name = "null"
def _render(self, source):
return source
def render_description(self, desc):
""" render full description (without title)"""
return self._render(desc.docstring)
def render_short_desc(self, desc):
""" render first paragraph (without title) """
return self._render(desc.docstring)
def render_title(self, desc):
""" render title only """
soup = BeautifulSoup(self._render(desc.docstring))
for tagname in ("h1", "h2", "h3"):
title = soup.find(tagname)
if title: return title.renderContents()
return self._render(desc.docstring[0:1])
@property
def parser_id(self):
if isinstance(self.name, basestring):
return self.name.lower()
else:
return self.name[0].lower()
@classmethod
def matches(cls, name):
name = name.lower()
names = cls.name
if isinstance(names, basestring):
return name == names.lower()
else:
return name in (n.lower() for n in names)
class HTMLRenderer(Renderer):
name = "html"
def render_title(self, desc):
html = self._render(desc.docstring)
soup = BeautifulSoup(html)
heading = soup.find("h1") or soup.find("h2")
if heading:
return heading.renderContents().strip()
else:
return ""
def render_short_desc(self, desc):
""" render first paragraph (without title) """
# find first paragraph
short_desc = BeautifulSoup(self.render_description(desc)).find("p")
if short_desc:
# rewrite local links
soup = BeautifulSoup(str(short_desc))
if desc.href:
href = desc.href
for anchor in soup.findAll("a"):
if anchor["href"].startswith("#"):
anchor["href"] = href + anchor["href"]
return str(soup)
else:
return u""
# def render_description(self, desc):
# TODO: remove first h1 or h2 tag here
|
Apydia
|
/Apydia-0.0.2.tar.gz/Apydia-0.0.2/apydia/renderers/base.py
|
base.py
|
# ApyWy - a Swagger alternative
The main problem that _ApyWy_ solves is the large amount of developer time needed to write a _schema_ for a _View_ class. In this version you can use plain Python dictionaries to describe the expected request data and the expected response.
## Example of apywy without configuration:

## Example of apywy with configuration:
*Only the *GET* method is configured here, as an example.*

## Installation
1.
```shell
pip install apywy
```
2. Add to _settings.INSTALLED_APPS_:
```python
INSTALLED_APPS = [
...
'apywy.api',
...
]
```
3. Add to the _urls.py_ of the root application:
```python
path('apywy/', include(('apywy.api.urls', 'apywy.api'), namespace='apywy')),
```
4. **Done** - the main _ApyWy_ page at **_/apywy/_** now shows all available information without any extra configuration.
## Configuration
By default, all we can learn about a Django view is:
- The URL path to the _http method_.
- The docstring of the view and of all its _http methods_ (_get_, _post_, ...).
We can improve on that by building an _ApyWy_ schema for the view.
Our _views.py_ file:
```python
# views.py
class HomePageView(APIView):
'''
HomePageView doc string
'''
def get(self, request, some_quary):
'HomePageView.get doc string'
if some_quary == 'some value':
return Response({'ANSWER': 'GET-INVALID-RESULT'}, status=status.HTTP_500_INTERNAL_SERVER_ERROR)
return Response({'ANSWER': 'GET-RESULT'}, status=status.HTTP_200_OK)
def post(self, request):
'HomePageView.post doc string'
return Response({'ANSWER': 'POST-RESULT'}, status=status.HTTP_201_CREATED)
```
- Create a file _apywy_schemas.py_ (the name does not matter)
```python
# apywy_schemas.py
from apywy.fields import StatusField, MethodField, RequestDataField
from apywy.schema import Schema
from apywy.constants.const import Constant
class HomePageSchema(Schema):
class GET(MethodField):
HTTP_200 = StatusField(expected_response_data=Constant({'ANSWER': 'GET-RESULT'}))
HTTP_500 = StatusField(expected_response_data=Constant({'ANSWER': 'GET-INVALID-RESULT'}))
```
- Attach this schema to the view:
```python
# views.py
...
from apywy.decorators import set_apywy_schema
from .apywy_schemas import HomePageSchema
@set_apywy_schema(HomePageSchema)
class HomePageView(APIView):
...
```
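For reference, `set_apywy_schema` is an ordinary class decorator: it attaches the schema to the view class for later introspection. A minimal standalone sketch of the pattern (the names `set_schema` and `apywy_schema` are illustrative, not apywy's real internals):

```python
# Minimal sketch of the class-decorator pattern behind set_apywy_schema.
# The attribute name "apywy_schema" is illustrative, not apywy's real internal name.
def set_schema(schema_cls):
    def decorator(view_cls):
        view_cls.apywy_schema = schema_cls  # attach the schema for later introspection
        return view_cls
    return decorator

class HomePageSchema:
    pass

@set_schema(HomePageSchema)
class HomePageView:
    pass

print(HomePageView.apywy_schema.__name__)  # -> HomePageSchema
```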
As a result, we now have extended information for the **get** method.
- Let's add information about **post**
```python
# apywy_schemas.py
...
class HomePageSchema(Schema):
...
class POST(MethodField):
HTTP_201 = StatusField(expected_response_data=Constant({'ANSWER': 'POST-RESULT'}))
class META:
expected_request_data = RequestDataField(Constant({'data': 'some data here'}))
```
The final version of our schema:
```python
# apywy_schemas.py
from apywy.fields import StatusField, MethodField, RequestDataField
from apywy.schema import Schema
from apywy.constants.const import Constant
class HomePageSchema(Schema):
class GET(MethodField):
HTTP_200 = StatusField(expected_response_data=Constant({'ANSWER': 'GET-RESULT'}))
HTTP_500 = StatusField(expected_response_data=Constant({'ANSWER': 'GET-INVALID-RESULT'}))
class POST(MethodField):
HTTP_201 = StatusField(expected_response_data=Constant({'ANSWER': 'POST-RESULT'}))
class META:
expected_request_data = RequestDataField(Constant({'data': 'some data here'}))
```
<!-- TODO: add the final screenshot here -->
---
By default the main page shows the views for all url paths except the ones that belong to these namespaces:
```python
('apywy', 'admin')
```
If you want to ignore additional namespaces, specify them in _settings.NAMESPACES_TO_IGNORE_:
```python
NAMESPACES_TO_IGNORE = () # the default value
NAMESPACES_TO_IGNORE = ('app', ) # ignore the namespace named "app"
NAMESPACES_TO_IGNORE = ('*', ) # ignore all namespaces
```
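Internally the check boils down to the following logic - a simplified standalone sketch of apywy's `check_is_namespace_name_in_ignore`, with the `settings.NAMESPACES_TO_IGNORE` lookup replaced by a plain argument:

```python
# Built-in namespaces that apywy always ignores.
BUILTIN_NAMESPACES_TO_IGNORE = ('apywy', 'admin')

def is_namespace_ignored(namespace_name, user_declared):
    # '*' means: ignore every namespace
    if user_declared == ('*',):
        return True
    return namespace_name in BUILTIN_NAMESPACES_TO_IGNORE + user_declared

print(is_namespace_ignored('app', ('app',)))     # -> True
print(is_namespace_ignored('shop', ()))          # -> False
print(is_namespace_ignored('anything', ('*',)))  # -> True
```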
### FAQ:
1. Can a _query_ parameter for the request be specified in the schema?
2. Can several expected frontend/backend payloads be specified in the schema for a single http status?
**(1)** and **(2)** - yes; the different variants supported by _apywy_ are shown below:
```python
# apywy_schemas.py
from apywy.fields import StatusField, MethodField, RequestDataField
from apywy.schema import Schema
from apywy.constants.const import ListOfConstants, Constant
class HomePageSchema(Schema):
class GET(MethodField):
# example: the method has a single expected payload, independent of any argument
HTTP_200 = StatusField(expected_response_data=Constant({'ANSWER': 'GET-RESULT'}))
# same as above, but with a comment attached
HTTP_300 = StatusField(
expected_response_data=Constant({'ANSWER': 'GET-RESULT FROM 300'}, comment='some comment')
)
# example: a single expected payload, but we also want to document a query parameter
HTTP_400 = StatusField(
expected_response_data=Constant({'ANSWER': 'GET-RESULT FROM 400'}, query_arg={'some query': 1})
)
# example: one status can correspond to multiple payloads
HTTP_500 = StatusField(
expected_response_data=ListOfConstants(
Constant(
expected_data={'ANSWER': 'GET-RESULT FROM 500'},
comment='some another comment with list of constants'
),
Constant(
expected_data={'ANSWER': 'GET-RESULT FROM 500'},
query_arg={
'some query 1': '1',
'some another query 2': 'blabla',
},
comment='some comment here'
),
)
)
class META:
# everything that applies to statuses also applies to request data
expected_request_data = RequestDataField(
ListOfConstants(
Constant(expected_data={'some data 1': 'data1'}),
Constant(expected_data={'some another data 2': 'data2'}, query_arg={
'title': 'some title',
'book_id': '10'
}),
)
)
```
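The `HTTP_<code>` naming convention works because apywy's `MethodField` metaclass derives the status code from the attribute name. A simplified standalone sketch of that mechanism (illustrative, not the actual apywy source):

```python
class StatusField:
    def __init__(self):
        self.status_code = None  # filled in by the metaclass

class CollectStatuses(type):
    def __new__(mcls, name, bases, attrs):
        statuses = []
        for key, value in attrs.items():
            if isinstance(value, StatusField):
                # the numeric code is the part after the last "_" in the name
                value.status_code = key.split('_')[-1]
                statuses.append(value)
        attrs['_http_statuses'] = statuses
        return super().__new__(mcls, name, bases, attrs)

class GET(metaclass=CollectStatuses):
    HTTP_200 = StatusField()
    HTTP_500 = StatusField()

print([s.status_code for s in GET._http_statuses])  # -> ['200', '500']
```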
### TODO:
- Refactor index.js
|
Apywy
|
/Apywy-0.0.5.tar.gz/Apywy-0.0.5/README.md
|
README.md
|
from typing import Dict, List
from .domain import entities
from .utilities.custom_typing import DjangoView
ALL_HTTP_METHODS = ['GET', 'POST', 'PUT', 'PATCH', 'DELETE']
class SchemaTool:
@staticmethod
def _check_view_has_method_attr(view_cls: DjangoView, http_method: str) -> bool:
'''
Check whether the DjangoView has the given http method
'''
return hasattr(view_cls, http_method)
@staticmethod
def set_default_schema_data_to_view_class(view_cls: DjangoView) -> None:
'''
Attach default schema data to the DjangoView class
'''
from . import fields
# no need to attach the default schema if one is already attached (by the decorator).
if hasattr(view_cls, '_schema_data'):
return
view_http_methods = filter(
lambda http_method: SchemaTool.
_check_view_has_method_attr(view_cls=view_cls, http_method=http_method.lower()),
ALL_HTTP_METHODS,
)
swagger_data = {
'doc_string': view_cls.__doc__,
'view_name': view_cls.__name__,
'methods': {},
}
for http_method in view_http_methods:
http_method = http_method.upper()
schema_method_data = {
'doc_string': getattr(view_cls, http_method.lower()).__doc__,
'schema_data': fields.EmptyMethodField(),
}
swagger_data['methods'][http_method] = schema_method_data
view_cls._schema_data = swagger_data
@staticmethod
def get_schema_data_of_view_class(view_cls: DjangoView) -> Dict:
'''
Get the schema data from the View class
'''
return view_cls._schema_data
@staticmethod
def set_default_schema_data_to_views(views: List[entities.View]) -> None:
'''
Attach the default schema data to every view in views
'''
for view in views:
SchemaTool.set_default_schema_data_to_view_class(view_cls=view.django_view_class)
|
Apywy
|
/Apywy-0.0.5.tar.gz/Apywy-0.0.5/apywy/schema_tool.py
|
schema_tool.py
|
from typing import Dict, List, Optional, Tuple, Union
from django.conf import settings
from django.urls.resolvers import URLPattern, URLResolver
from .domain.entities import NameSpace, View
BUILDIN_NAMESPACES_TO_IGNORE = ('apywy', 'admin')
USER_DECLARED_NAMESPACES_TO_IGNORE: Tuple = getattr(settings, 'NAMESPACES_TO_IGNORE', tuple())
def check_is_namespace_name_in_ignore(namespace_name: str) -> bool:
'''
Check whether the namespace named namespace_name is ignored by apywy
'''
if USER_DECLARED_NAMESPACES_TO_IGNORE == ('*', ):
return True
namespaces = BUILDIN_NAMESPACES_TO_IGNORE + USER_DECLARED_NAMESPACES_TO_IGNORE
return namespace_name in namespaces
def get_all_urlpatterns() -> List[Union[URLPattern, URLResolver]]:
'''
Get all urlpatterns in the project
'''
from importlib import import_module
root_urlconf = import_module(settings.ROOT_URLCONF)
return root_urlconf.urlpatterns
def get_all_view_classes(urlpatterns: List[Union[URLPattern, URLResolver]]) -> List[View]:
'''
Get the view classes for all the given urlpatterns. Works recursively
when a URLResolver is encountered.
'''
VIEW_CLASSES = []
# with two different urls pointing to the same view the schema would be duplicated; we avoid that
already_added_django_views = set()
namespace: Optional[NameSpace] = None
_root: Optional[str] = None
def inner(urlpatterns: List[Union[URLPattern, URLResolver]]) -> List[View]:
nonlocal namespace, _root
for pattern in urlpatterns:
if isinstance(pattern, URLResolver):
namespace_name = str(pattern.namespace)
try:
_root = pattern.pattern._route
except AttributeError:
_root = None
if not check_is_namespace_name_in_ignore(namespace_name=namespace_name):
namespace = NameSpace(namespace_name=namespace_name)
inner(pattern.url_patterns)
elif isinstance(pattern, URLPattern):
try:
# we cannot handle func-based views, so we simply skip them
django_view_class = pattern.callback.view_class
except AttributeError:
continue
path_to_view = pattern.pattern
path_to_view._root = _root
view_class = View(django_view_class=django_view_class, url_path=path_to_view)
if django_view_class in already_added_django_views:
view_class.append_url_path(url_path=path_to_view)
continue
already_added_django_views.add(django_view_class)
namespace.append(view_class) # type: ignore
VIEW_CLASSES.append(view_class)
return VIEW_CLASSES
return inner(urlpatterns)
def get_paths_data_of_view(view: View) -> List[Dict]:
'''
Get the list of full paths and their names for all urls of the view
'''
result = []
for url_path in view.url_paths:
url_path_data = {}
url_path_data['url_name'] = url_path.name
url_path_data['url_full_path'] = (url_path._root or '') + str(url_path)
result.append(url_path_data)
return result
|
Apywy
|
/Apywy-0.0.5.tar.gz/Apywy-0.0.5/apywy/static_funcs.py
|
static_funcs.py
|
import abc
from typing import Dict, List, Tuple
from .constants import exceptions
from .constants.const import Constant, ListOfConstants
from .utilities.custom_typing import AnyConst
class IField:
@abc.abstractmethod
def _pre_normalize_data(self, expected_data: AnyConst) -> ListOfConstants:
raise NotImplementedError
@abc.abstractmethod
def to_representation(self) -> Dict:
raise NotImplementedError
class StatusField(IField):
'''
Field class for an HTTP method status.
'''
def __init__(self, expected_response_data: AnyConst):
'''
@param expected_response_data - the expected response data from the backend
'''
self.expected_response_data = self._pre_normalize_data(expected_response_data)
self.response_status_code = None
def _pre_normalize_data(self, expected_data: AnyConst) -> ListOfConstants:
if isinstance(expected_data, dict):
raise exceptions.NotValidResponseData(
'''
Passing a plain dict as expected_response_data is no longer supported, use Constant.
''',
)
if isinstance(expected_data, Constant):
expected_data = expected_data.to_response_const()
return ListOfConstants(expected_data)
if isinstance(expected_data, ListOfConstants):
expected_data.convert_own_constants_type(to_response=True)
return expected_data
def to_representation(self) -> Dict:
return {
'expected_response_status_code': self.response_status_code,
'expected_response_data': self.expected_response_data.to_representation(),
}
class RequestDataField(IField):
def __init__(self, expected_request_data: AnyConst):
self.expected_request_data = self._pre_normalize_data(expected_data=expected_request_data)
def _pre_normalize_data(self, expected_data: AnyConst) -> ListOfConstants:
if isinstance(expected_data, dict):
raise exceptions.NotValidRequestData(
'''
Passing a plain dict as expected_request_data is no longer supported, use Constant.
''',
)
if isinstance(expected_data, Constant):
expected_data = expected_data.to_request_const()
return ListOfConstants(expected_data)
if isinstance(expected_data, ListOfConstants):
expected_data.convert_own_constants_type(to_response=False)
return expected_data
def to_representation(self) -> Dict:
return {
'expected_request_data': self.expected_request_data.to_representation(),
}
class MethodFieldMETA(type):
'''
TODO: maybe this should be a decorator instead
Metaclass, needed only to add the _http_statuses attribute to MethodField
'''
def __new__(cls, clsname: str, parents: Tuple, attrdict: Dict) -> 'MethodFieldMETA':
http_statuses = []
for key, attr in attrdict.items():
if isinstance(attr, StatusField):
# derive the status code from the attribute name
response_status_code = key.split('_')[-1]
attr.response_status_code = response_status_code
http_statuses.append(attr)
attrdict['_http_statuses'] = http_statuses
return super().__new__(cls, clsname, parents, attrdict)
class MethodField(IField, metaclass=MethodFieldMETA):
'''
Base HTTPField class, used for serialization
'''
@property
def http_statuses(self) -> List['StatusField']:
return self._http_statuses # type: ignore
def _representation_response_part(self) -> Dict:
data: Dict = {'http_statuses': []}
for http_status in self.http_statuses:
data['http_statuses'].append(http_status.to_representation()) # type: ignore
return data
def _representation_request_part(self) -> Dict:
expected_request_data = self.META.expected_request_data # type: ignore
if expected_request_data is not None:
data = expected_request_data.to_representation()
else:
data = {'expected_request_data': None}
return data
def to_representation(self) -> Dict:
'''
By analogy with drf, the method responsible for serializing the field
'''
response_part = self._representation_response_part()
request_part = self._representation_request_part()
return {**response_part, **request_part}
class META:
expected_request_data = None
class EmptyMethodField(IField):
'''
Field for an empty (not defined by the developer) HTTP method. It is assigned in the decorator;
the developer should not attach it manually.
'''
def to_representation(self) -> Dict:
return {'http_statuses': [], 'expected_request_data': None}
|
Apywy
|
/Apywy-0.0.5.tar.gz/Apywy-0.0.5/apywy/fields.py
|
fields.py
|
from typing import Any, Callable, Dict, List
from django.urls.resolvers import URLPattern
from ..utilities.custom_typing import DjangoView
class Singletone:
'''
How instances of this class and its subclasses are created:
1) Every instance must have a key that uniquely identifies it among the other
instances of the same class
2) If an instance with the given key has already been initialized, initialization
returns the previously created instance, else a new instance is created and returned
'''
_instances: Dict[str, Any] = {}
def __new__(cls, key: str, *args: Any, **kwargs: Any) -> Any:
        '''
        @param key: str - a value that uniquely identifies the class instance;
        used to select the singleton instance.
        '''
if key in cls._instances:
return cls._instances[key]
instance = super().__new__(cls)
cls._instances[key] = instance
return instance
@staticmethod
def has_initialized(init_method: Callable) -> Callable:
        '''
        Decorator that skips the class's __init__ if the object has already been created.
        @param init_method - the __init__ method of the class.
        '''
def inner(instance: Any, *args: Any, **kwargs: Any) -> None:
if hasattr(instance, '_has_initialized'):
return
instance._has_initialized = True
init_method(instance, *args, **kwargs)
return inner
class NameSpace(Singletone):
    '''
    Class that ties a namespace to its views; a singleton.
    '''
namespace_name: str
views: List['View']
_instances: Dict[str, 'NameSpace'] = {}
def __new__(cls, namespace_name: str, *args: Any, **kwargs: Any) -> "NameSpace":
return super().__new__(cls, key=namespace_name)
@Singletone.has_initialized
def __init__(self, namespace_name: str):
        '''
        @param namespace_name: str - the name of the Django namespace
        '''
self.namespace_name = namespace_name
self.views: List['View'] = []
def append(self, view: 'View') -> None:
self.views.append(view)
def __repr__(self) -> str:
return f'<NameSpace: {[str(i) for i in self.views]}>'
class View(Singletone):
    '''
    Class that ties a DjangoView class to its url path; a singleton.
    '''
django_view_class: DjangoView # type: ignore
url_paths: List[URLPattern]
_instances: Dict[str, 'View'] = {}
def __new__(cls, django_view_class: DjangoView, *args: Any, **kwargs: Any) -> 'View':
django_view_class_name = django_view_class.__name__
return super().__new__(cls, key=django_view_class_name)
@Singletone.has_initialized
def __init__(self, django_view_class: DjangoView, url_path: URLPattern) -> None:
self.django_view_class = django_view_class
self.url_paths = [url_path]
def __repr__(self) -> str:
return f'<View: {self.django_view_class.__name__}>' # type: ignore
def append_url_path(self, url_path: URLPattern) -> None:
self.url_paths.append(url_path)
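The `Singletone`/`has_initialized` pair above implements a per-key singleton: one instance per key, with `__init__` skipped when an existing instance is returned. A minimal standalone sketch of the same technique (`KeyedSingleton` is an illustrative name, not part of Apywy):

```python
# Sketch of the keyed-singleton technique: at most one instance per key,
# and __init__ is a no-op when an existing instance is returned.
class KeyedSingleton:
    _instances = {}
    def __new__(cls, key, *args, **kwargs):
        if key in cls._instances:
            return cls._instances[key]
        instance = super().__new__(cls)
        cls._instances[key] = instance
        return instance
    def __init__(self, key):
        if getattr(self, '_initialized', False):
            return                  # already initialized: keep existing state
        self._initialized = True
        self.key = key
        self.items = []

a = KeyedSingleton('users')
a.items.append(1)
b = KeyedSingleton('users')         # same key -> same instance
c = KeyedSingleton('posts')         # different key -> new instance
```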
|
Apywy
|
/Apywy-0.0.5.tar.gz/Apywy-0.0.5/apywy/domain/entities.py
|
entities.py
|
let http_methods = document.querySelectorAll('.http_method')
let method_blocks = document.querySelectorAll('.method_block')
let http_statuses = document.querySelectorAll('.http_status')
let statuses_blocks = document.querySelectorAll('.status_block')
let response_datas = document.querySelectorAll('.status_response_data_block')
let expected_request_datas = document.querySelectorAll('.expected_request_data')
let expected_request_data_datas = document.querySelectorAll('.expected_request_data_data_block')
const zip = (...arr) => {
const zipped = [];
arr.forEach((element, ind) => {
element.forEach((el, index) => {
if(!zipped[index]){
zipped[index] = [];
};
if(!zipped[index][ind]){
zipped[index][ind] = [];
}
zipped[index][ind] = el || '';
})
});
return zipped;
};
function set_bg_color_by_http_method_value(el, http_method_value) {
    // color the method badge according to its HTTP method value
    const method_to_color = {
'GET': 'rgba(44,177,254,255)',
'POST': 'rgba(0,207,149,255)',
'PUT': 'rgba(255,156,30,255)',
'PATCH': 'rgba(255,156,30,255)',
'DELETE': 'rgba(255,34,35,255)',
}
let bg_color = method_to_color[http_method_value]
el.style.backgroundColor = bg_color;
}
function set_bg_color_of_method_block_by_http_method_value(el, http_method_value) {
    // color the background inside the method_block
    const method_to_color = {
'GET': 'rgba(233,243,251,255)',
'POST': 'rgba(228,247,241,255)',
'PUT': 'rgba(254,241,230,255)',
'PATCH': 'rgba(254,241,230,255)',
'DELETE': 'rgba(255,230,230,255)',
}
let bg_color = method_to_color[http_method_value];
el.style.backgroundColor = bg_color;
}
function set_border_color_by_http_method_value(el, http_method_value) {
    // color the border
    const method_to_color = {
'GET': 'rgba(44,177,254,255)',
'POST': 'rgba(0,207,149,255)',
'PUT': 'rgba(255,156,30,255)',
'PATCH': 'rgba(255,156,30,255)',
'DELETE': 'rgba(255,34,35,255)',
}
let border_color = method_to_color[http_method_value]
el.style.borderColor = border_color;
}
function set_bg_color_by_status_value(el, http_status_value) {
    const ok_statuses = ['200', '201'];
    let bg_color;
    if (ok_statuses.includes(http_status_value)) {
        bg_color = 'rgba(44,177,254,255)'
    }
    else {
        bg_color = 'rgba(255,34,35,255)'
    }
el.style.backgroundColor = bg_color;
}
function set_bg_color_of_status_block_by_status_value(el, http_status_value) {
    const ok_statuses = ['200', '201'];
    let bg_color;
    if (ok_statuses.includes(http_status_value)) {
        bg_color = 'rgba(233,243,251,255)'
    }
    else {
        bg_color = 'rgba(255,230,230,255)'
    }
el.style.backgroundColor = bg_color;
}
function set_border_color_by_status_value(el, http_status_value) {
    const ok_statuses = ['200', '201'];
    let border_color;
    if (ok_statuses.includes(http_status_value)) {
        border_color = 'rgba(44,177,254,255)'
    }
    else {
        border_color = 'rgba(255,34,35,255)'
    }
el.style.borderColor = border_color;
}
function colorize_response_data(el, http_status_value) {
    const ok_statuses = ['200', '201'];
    let bg_color, border_color;
    if (ok_statuses.includes(http_status_value)) {
        bg_color = 'rgba(233,243,251,255)'
        border_color = 'rgba(44,177,254,255)'
    }
    else {
        bg_color = 'rgba(255,230,230,255)'
        border_color = 'rgba(255,34,35,255)'
    }
el.style.backgroundColor = bg_color;
el.style.borderColor = border_color;
}
function colorize_request_data(el, http_method_value) {
    const method_to_bg_color = {
'GET': 'rgba(233,243,251,255)',
'POST': 'rgba(228,247,241,255)',
'PUT': 'rgba(254,241,230,255)',
'PATCH': 'rgba(254,241,230,255)',
'DELETE': 'rgba(255,230,230,255)',
}
    const method_to_border_color = {
'GET': 'rgba(44,177,254,255)',
'POST': 'rgba(0,207,149,255)',
'PUT': 'rgba(255,156,30,255)',
'PATCH': 'rgba(255,156,30,255)',
'DELETE': 'rgba(255,34,35,255)',
}
    const bg_color = method_to_bg_color[http_method_value];
    const border_color = method_to_border_color[http_method_value];
el.style.backgroundColor = bg_color;
el.style.borderColor = border_color;
}
// color the HTTP method badges
for (let [http_method_el, method_block_el] of zip(http_methods, method_blocks)) {
    let http_method = http_method_el.innerText;
    set_bg_color_by_http_method_value(http_method_el, http_method);
    set_bg_color_of_method_block_by_http_method_value(method_block_el, http_method);
    set_border_color_by_http_method_value(method_block_el, http_method);
}
// color the HTTP status badges
for (let [http_status_el, status_block_el, response_data_el] of zip(http_statuses, statuses_blocks, response_datas)) {
    let status_code = http_status_el.innerText;
    set_bg_color_by_status_value(http_status_el, status_code);
    set_bg_color_of_status_block_by_status_value(status_block_el, status_code);
    set_border_color_by_status_value(status_block_el, status_code);
    colorize_response_data(response_data_el, status_code);
}
// color the request data blocks
for (let [http_method_el, expected_request_data_el, expected_request_data_data_el] of zip(http_methods, expected_request_datas, expected_request_data_datas)) {
    let http_method = http_method_el.innerText;
    set_bg_color_by_http_method_value(expected_request_data_el, http_method);
    colorize_request_data(expected_request_data_data_el, http_method);
}
// attach onClick handlers that toggle the response data when a status is clicked
for (let [status_block_el, response_data_el] of zip(statuses_blocks, response_datas)) {
status_block_el.addEventListener('click', () => {
response_data_el.classList.toggle('hide');
status_block_el.classList.toggle('hide_request_data');
})
}
for (let [expected_request_data_el, expected_request_data_data_el] of zip(expected_request_datas, expected_request_data_datas)) {
expected_request_data_el.addEventListener('click', () => {
expected_request_data_data_el.classList.toggle('hide');
})
}
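Note that the hand-rolled `zip` above pads missing entries with `''` instead of truncating to the shortest column, which is what a plain zip would do. In Python the analogous padding behavior comes from `itertools.zip_longest`; a quick sketch:

```python
from itertools import zip_longest

# columns of different lengths, like NodeLists of different sizes
cols = (['GET', 'POST'], ['a'], ['x', 'y', 'z'])

# pad short columns with '' instead of truncating to the shortest one
rows = [list(row) for row in zip_longest(*cols, fillvalue='')]
```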
|
Apywy
|
/Apywy-0.0.5.tar.gz/Apywy-0.0.5/apywy/static/index.js
|
index.js
|
from typing import Dict, List, Optional, Union
from .request import const as req_const
from .response import const as res_const
class ListOfConstants:
def __init__(self, *constants: 'Constant'):
self.constants: List['Constant'] = list(constants)
def to_representation(self) -> List[Dict]:
return [const.to_representation() for const in self.constants] # type: ignore
def convert_own_constants_type(self, to_response: bool) -> None:
        '''
        If the data in the schema were given as ListOfConstants[Const, Const, ...], the
        constants first need to be converted to their concrete form.
        @params to_response: bool - flag: convert them to response constants or to request constants
        '''
if to_response:
self.constants = list(map(lambda const: const.to_response_const(), self.constants))
else:
self.constants = list(map(lambda const: const.to_request_const(), self.constants))
class Constant:
    '''
    Wrapper around constants for later dispatch to the concrete req_const.WithQuery, ... types.
    '''
expected_data: Dict
query_arg: Optional[Dict] = None
comment: str = ''
def __init__(self, expected_data: Dict, query_arg: Optional[Dict] = None, comment: str = ''):
self.expected_data = expected_data
self.query_arg = query_arg
self.comment = comment
def to_response_const(self) -> Union[res_const.WithoutQuery, res_const.WithQuery]:
if self.query_arg is None:
returned_const = res_const.WithoutQuery(
expected_response_data=self.expected_data,
comment=self.comment,
)
else:
returned_const = res_const.WithQuery(
expected_response_data=self.expected_data,
query_arg=self.query_arg,
comment=self.comment,
)
return returned_const
def to_request_const(self) -> Union[req_const.WithoutQuery, req_const.WithQuery]:
if self.query_arg is None:
returned_const = req_const.WithoutQuery(
expected_request_data=self.expected_data,
comment=self.comment,
)
else:
returned_const = req_const.WithQuery(
expected_request_data=self.expected_data,
query_arg=self.query_arg,
comment=self.comment,
)
return returned_const
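The `query_arg`-based dispatch above can be reduced to a small standalone sketch. The dataclasses below are illustrative stand-ins for the `req_const`/`res_const` classes, not the real ones:

```python
from dataclasses import dataclass
from typing import Dict, Optional

# Illustrative stand-ins for the concrete constant classes.
@dataclass
class WithoutQuery:
    expected_data: Dict
    comment: str = ''

@dataclass
class WithQuery:
    expected_data: Dict
    query_arg: Dict
    comment: str = ''

def to_const(expected_data: Dict, query_arg: Optional[Dict] = None,
             comment: str = ''):
    # the presence of query_arg decides which concrete constant is built
    if query_arg is None:
        return WithoutQuery(expected_data, comment)
    return WithQuery(expected_data, query_arg, comment)
```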
|
Apywy
|
/Apywy-0.0.5.tar.gz/Apywy-0.0.5/apywy/constants/const.py
|
const.py
|
# AqEquil
[](https://doi.org/10.5281/zenodo.7601102)
Boyer, G., Robare, J., Ely, T., Shock, E.L.
## About
AqEquil is a Python 3 package that enables users to rapidly perform aqueous speciation calculations of water chemistry data for multiple samples by interfacing with [geochemical speciation software EQ3/6](https://github.com/LLNL/EQ3_6) (Wolery 2013, [Wolery 1979](https://inis.iaea.org/collection/NCLCollectionStore/_Public/10/474/10474294.pdf)).
Water sample data in CSV format is automatically converted to a format readable by EQ3 and then speciated. Distributions of aqueous species, mineral saturation indices, oxidation reduction potentials, and more are data-mined and returned as Pandas tables and interactive Plotly visualizations.
Speciated fluids can be further reacted with minerals in mass transfer calculations to produce tables and interactive diagrams of reaction paths and composition changes as a function of reaction progress.
## Requirements
AqEquil has only been tested with Ubuntu LTS 20.04.
This installation requires the Linux version of EQ3/6 v8.0a, which can be downloaded [here](https://github.com/LLNL/EQ3_6). Installation instructions are provided there.
AqEquil must be installed into an environment with an R installation. See [these instructions](https://docs.anaconda.com/anaconda/user-guide/tasks/using-r-language/) for installing R with Anaconda.
Additionally, the CHNOSZ package must be installed in R (see instructions below).
## Installation
### Installing EQ3/6 for Linux
I recommend using [this github version of EQ3/6 v.8.0a adapted by the 39Alpha team](https://github.com/39alpha/eq3_6/tree/main). Installation instructions are found there.
Create an environment variable called `EQ36DO` and set it to wherever you installed EQ3/6. (`/usr/local/bin` by default). Set another environment variable called `EQ36DA` to the directory containing your data1 thermodynamic database files (if you have one).
### Installing CHNOSZ version 1.4.3
Open an R session. Install CHNOSZ version 1.4.3 package with:
```install.packages('http://cran.r-project.org/src/contrib/Archive/CHNOSZ/CHNOSZ_1.4.3.tar.gz', repos=NULL, type='source')```
Once CHNOSZ is installed you can quit the R session.
Compatibility with CHNOSZ v.2.0.0 is forthcoming.
### Installing AqEquil
Install AqEquil using pip:
```pip install AqEquil```
### Usage
See this [demo notebook](https://nbviewer.jupyter.org/github/worm-portal/WORM-Library/blob/master/3-Aqueous-Speciation/1-Introduction-to-Aq-Speciation/2-Intro-to-Multi-Aq-Speciation.ipynb) for usage examples.
|
AqEquil
|
/AqEquil-0.17.0.tar.gz/AqEquil-0.17.0/README.md
|
README.md
|
# AqOrg
Estimate the thermodynamic properties and HKF parameters of aqueous organic molecules using second order group additivity methods.
This [Jupyter notebook demo](https://gitlab.com/worm1/worm-library/-/blob/master/4-Thermodynamic-Property-Estimation/Introduction-Aq-Organic-Estimator.ipynb) introduces the underlying concepts.
## Requirements
This package must be installed into an environment with [RDKit](https://www.rdkit.org/). See [these instructions](https://github.com/rdkit/rdkit/blob/master/Docs/Book/Install.md) for installing RDKit.
## Installation
Activate the environment containing RDKit. Then install `AqOrg` with pip:
```
$ pip install AqOrg
```
## Usage
See this [Jupyter notebook demo](https://gitlab.com/worm1/worm-library/-/blob/master/4-Thermodynamic-Property-Estimation/Aq-Organics-Feature-Demo/Aqueous-Organic-Molecules.ipynb) for examples.
|
AqOrg
|
/AqOrg-0.1.9.tar.gz/AqOrg-0.1.9/README.md
|
README.md
|
This is a Python package that finds the optimal locations (here in terms of nodes/reaches) for the placement of Best Management Practices and/or technologies to reduce nutrient loading in Lake Okeechobee.
It takes two files as input: BMP_Tech.csv and Net_Data.csv. These files must follow a specific format, which is explained in the README of the following GitHub repository: https://github.com/Ashim-Khanal/AquaNutriOpt
It returns the total loading of the nutrients (phosphorus, nitrogen) at the end of the network (here, the lake) and provides the placement of possible BMPs or technologies at each reach within the given total budget.
|
AquaNutriOpt
|
/AquaNutriOpt-0.0.2.tar.gz/AquaNutriOpt-0.0.2/README.txt
|
README.txt
|
import argparse
import base64
import binascii
import cProfile
import errno
import fnmatch
import gc
import hashlib
import imp
import inspect
import io
import itertools
import locale
import logging
import marshal
import mmap
import multiprocessing
import operator
import os
import os.path
import pstats
import re
import shutil
import site
import sqlite3
import struct
import subprocess
import sys
import tarfile
import tempfile
import threading
import time
import traceback
import types
import weakref
import zipfile
try:
import cPickle as pickle
except ImportError:
import pickle
_KNOWN_TYPE_NAMES = {}
_KNOWN_TYPE_IDS = {}
class EntityPickler (object):
__slots__ = ('pickler', 'unpickler', 'buffer')
def __init__(self):
membuf = io.BytesIO()
pickler = pickle.Pickler(membuf, protocol=pickle.HIGHEST_PROTOCOL)
unpickler = pickle.Unpickler(membuf)
pickler.persistent_id = self.persistent_id
unpickler.persistent_load = self.persistent_load
self.pickler = pickler
self.unpickler = unpickler
self.buffer = membuf
@staticmethod
def persistent_id(entity, known_type_names=_KNOWN_TYPE_NAMES):
entity_type = type(entity)
type_name = _pickle_type_name(entity_type)
try:
type_id = known_type_names[type_name]
return type_id, entity.__getnewargs__()
except KeyError:
return None
@staticmethod
def persistent_load(pid, known_type_ids=_KNOWN_TYPE_IDS):
type_id, new_args = pid
try:
entity_type = known_type_ids[type_id]
return entity_type.__new__(entity_type, *new_args)
except KeyError:
raise pickle.UnpicklingError("Unsupported persistent object")
def dumps(self, entity):
buf = self.buffer
buf.seek(0)
buf.truncate(0)
pickler = self.pickler
pickler.dump(entity)
pickler.clear_memo() # clear memo to pickle another entity
return buf.getvalue()
def loads(self, bytes_object):
buf = self.buffer
buf.seek(0)
buf.truncate(0)
buf.write(bytes_object)
buf.seek(0)
return self.unpickler.load()
def _pickle_type_name(entity_type):
return entity_type.__module__ + '.' + entity_type.__name__
def pickleable(entity_type,
known_type_names=_KNOWN_TYPE_NAMES,
known_type_ids=_KNOWN_TYPE_IDS):
if type(entity_type) is type:
type_name = _pickle_type_name(entity_type)
type_id = binascii.crc32(type_name.encode("utf-8")) & 0xFFFFFFFF
other_type = known_type_ids.setdefault(type_id, entity_type)
if other_type is not entity_type:
raise Exception(
"Two different type names have identical CRC32 checksum:"
" '%s' and '%s'" % (_pickle_type_name(other_type), type_name))
known_type_names[type_name] = type_id
return entity_type
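The `EntityPickler`/`pickleable` pair above routes registered types through pickle's persistent-ID hooks, keyed by a CRC32 of the qualified type name, and rebuilds instances from `__getnewargs__`. A standalone sketch of the same round-trip, using the documented subclassing API rather than attribute assignment (`Point` is an illustrative type):

```python
import binascii
import io
import pickle

# A type that can be rebuilt purely from its __getnewargs__ result.
class Point(tuple):
    def __new__(cls, x, y):
        return super().__new__(cls, (x, y))
    def __getnewargs__(self):
        return tuple(self)

_IDS = {}
def register(tp):
    name = tp.__module__ + '.' + tp.__name__
    _IDS[binascii.crc32(name.encode('utf-8')) & 0xFFFFFFFF] = tp
    return tp
register(Point)

class Pickler(pickle.Pickler):
    def persistent_id(self, obj):
        name = type(obj).__module__ + '.' + type(obj).__name__
        tid = binascii.crc32(name.encode('utf-8')) & 0xFFFFFFFF
        if tid in _IDS:
            return tid, obj.__getnewargs__()
        return None  # everything else is pickled normally

class Unpickler(pickle.Unpickler):
    def persistent_load(self, pid):
        tid, new_args = pid
        tp = _IDS[tid]
        return tp.__new__(tp, *new_args)

buf = io.BytesIO()
Pickler(buf).dump([Point(1, 2), 'plain'])
buf.seek(0)
restored = Unpickler(buf).load()
```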
class AqlInfo (object):
__slots__ = (
'name',
'module',
'description',
'version',
'date',
'url',
'license',
)
def __init__(self):
self.name = "Aqualid"
self.module = "aqualid"
self.description = "General purpose build system."
self.version = "0.7"
self.date = None
self.url = 'https://github.com/aqualid'
self.license = "MIT License"
def dump(self):
result = "{name} {version}".format(
name=self.name, version=self.version)
if self.date:
result += ' ({date})'.format(date=self.date)
result += "\n"
result += self.description
result += "\nSite: %s" % self.url
return result
_AQL_VERSION_INFO = AqlInfo()
def get_aql_info():
return _AQL_VERSION_INFO
def dump_aql_info():
return _AQL_VERSION_INFO.dump()
try:
u_str = unicode
except NameError:
u_str = str
_TRY_ENCODINGS = []
for enc in [
sys.stdout.encoding,
locale.getpreferredencoding(False),
sys.getfilesystemencoding(),
sys.getdefaultencoding(),
'utf-8',
]:
if enc:
enc = enc.lower()
if enc not in _TRY_ENCODINGS:
_TRY_ENCODINGS.append(enc)
def encode_str(value, encoding=None, _try_encodings=_TRY_ENCODINGS):
if encoding:
return value.encode(encoding)
error = None
for encoding in _try_encodings:
try:
return value.encode(encoding)
except UnicodeEncodeError as ex:
if error is None:
error = ex
raise error
def decode_bytes(obj, encoding=None, _try_encodings=_TRY_ENCODINGS):
if encoding:
return u_str(obj, encoding)
error = None
for encoding in _try_encodings:
try:
return u_str(obj, encoding)
except UnicodeDecodeError as ex:
if error is None:
error = ex
raise error
def to_unicode(obj, encoding=None):
if isinstance(obj, (bytearray, bytes)):
return decode_bytes(obj, encoding)
return u_str(obj)
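`encode_str`/`decode_bytes` above try a list of candidate encodings in order, remember the first failure, and re-raise it only if every candidate fails. A minimal sketch of that fallback pattern:

```python
# Sketch of the fallback-decode technique: try candidate encodings in
# order; if none succeeds, re-raise the first error encountered.
def decode_with_fallback(data, encodings=('utf-8', 'latin-1')):
    error = None
    for enc in encodings:
        try:
            return data.decode(enc)
        except UnicodeDecodeError as ex:
            if error is None:
                error = ex
    raise error

# b'caf\xe9' is not valid UTF-8, so the latin-1 fallback kicks in
text = decode_with_fallback('caf\xe9'.encode('latin-1'))
```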
def is_unicode(value, _ustr=u_str, _isinstance=isinstance):
return _isinstance(value, _ustr)
def is_string(value, _str_types=(u_str, str), _isinstance=isinstance):
return _isinstance(value, _str_types)
if u_str is str:
to_string = to_unicode
cast_str = str
else:
def to_string(value, _str_types=(u_str, str), _isinstance=isinstance):
if _isinstance(value, _str_types):
return value
return str(value)
def cast_str(obj, encoding=None, _ustr=u_str):
if isinstance(obj, _ustr):
return encode_str(obj, encoding)
return str(obj)
class String (str):
def __new__(cls, value=None):
if type(value) is cls:
return value
if value is None:
value = ''
return super(String, cls).__new__(cls, value)
class IgnoreCaseString (String):
def __hash__(self):
return hash(self.lower())
def _cmp(self, other, op):
return op(self.lower(), str(other).lower())
def __eq__(self, other):
return self._cmp(other, operator.eq)
def __ne__(self, other):
return self._cmp(other, operator.ne)
def __lt__(self, other):
return self._cmp(other, operator.lt)
def __le__(self, other):
return self._cmp(other, operator.le)
def __gt__(self, other):
return self._cmp(other, operator.gt)
def __ge__(self, other):
return self._cmp(other, operator.ge)
class LowerCaseString (str):
def __new__(cls, value=None):
if type(value) is cls:
return value
if value is None:
value = ''
else:
value = str(value)
return super(LowerCaseString, cls).__new__(cls, value.lower())
class UpperCaseString (str):
def __new__(cls, value=None):
if type(value) is cls:
return value
if value is None:
value = ''
else:
value = str(value)
return super(UpperCaseString, cls).__new__(cls, value.upper())
class Version (str):
__ver_re = re.compile(r'[0-9]+[a-zA-Z]*(\.[0-9]+[a-zA-Z]*)*')
def __new__(cls, version=None, _ver_re=__ver_re):
if type(version) is cls:
return version
if version is None:
ver_str = ''
else:
ver_str = str(version)
match = _ver_re.search(ver_str)
if match:
ver_str = match.group()
ver_list = re.findall(r'[0-9]+|[a-zA-Z]+', ver_str)
else:
ver_str = ''
ver_list = []
self = super(Version, cls).__new__(cls, ver_str)
conv_ver_list = []
for v in ver_list:
if v.isdigit():
v = int(v)
conv_ver_list.append(v)
self.__version = tuple(conv_ver_list)
return self
@staticmethod
def __convert(other):
return other if isinstance(other, Version) else Version(other)
def _cmp(self, other, cmp_op):
self_ver = self.__version
other_ver = self.__convert(other).__version
len_self = len(self_ver)
len_other = len(other_ver)
min_len = min(len_self, len_other)
if min_len == 0:
return cmp_op(len_self, len_other)
self_ver = self_ver[:min_len]
other_ver = other_ver[:min_len]
return cmp_op(self_ver, other_ver)
def __hash__(self):
return hash(self.__version)
def __eq__(self, other):
return self._cmp(other, operator.eq)
def __ne__(self, other):
return self._cmp(other, operator.ne)
def __lt__(self, other):
return self._cmp(other, operator.lt)
def __le__(self, other):
return self._cmp(other, operator.le)
def __gt__(self, other):
return self._cmp(other, operator.gt)
def __ge__(self, other):
return self._cmp(other, operator.ge)
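The point of `Version` splitting the string into numeric and alphabetic components is that comparisons become numeric where possible, so `'1.10'` sorts after `'1.9'` while plain string comparison gets this wrong. A standalone sketch of the key function:

```python
import re

# Split a version string into numeric and alphabetic components so that
# numeric parts compare as integers rather than character by character.
def ver_key(version):
    match = re.search(r'[0-9]+[a-zA-Z]*(\.[0-9]+[a-zA-Z]*)*', version)
    parts = re.findall(r'[0-9]+|[a-zA-Z]+', match.group()) if match else []
    return tuple(int(p) if p.isdigit() else p for p in parts)
```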
SIMPLE_TYPES_SET = frozenset(
(u_str, str, int, float, complex, bool, bytes, bytearray))
SIMPLE_TYPES = tuple(SIMPLE_TYPES_SET)
def is_simple_value(value, _simple_types=SIMPLE_TYPES):
return isinstance(value, _simple_types)
def is_simple_type(value_type, _simple_types=SIMPLE_TYPES):
return issubclass(value_type, _simple_types)
class ErrorFileLocked(Exception):
def __init__(self, filename):
msg = 'File "%s" is locked.' % (filename,)
super(ErrorFileLocked, self).__init__(msg)
class GeneralFileLock (object):
__slots__ = ('lockfilename', 'filename', 'retries', 'interval')
def __init__(self, filename, interval=0.25, timeout=5 * 60):
filename = os.path.normcase(os.path.abspath(filename))
self.filename = filename
self.lockfilename = filename + '.lock'
self.interval = interval
self.retries = int(timeout / interval)
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
self.release_lock()
def read_lock(self, wait=True, force=False):
return self.write_lock(wait=wait, force=force)
def write_lock(self, wait=True, force=False):
if wait:
index = self.retries
else:
index = 0
while True:
try:
self.__lock(force=force)
break
except ErrorFileLocked:
if index <= 0:
raise
index -= 1
time.sleep(self.interval)
return self
def __lock(self, force=False):
try:
os.mkdir(self.lockfilename)
except OSError as ex:
if ex.errno == errno.EEXIST:
if force:
return
raise ErrorFileLocked(self.filename)
raise
def release_lock(self):
try:
os.rmdir(self.lockfilename)
except OSError as ex:
if ex.errno != errno.ENOENT:
raise
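`GeneralFileLock` relies on `os.mkdir` being atomic: the call either creates the lock directory or raises `EEXIST`, so it doubles as a portable test-and-set. A standalone sketch:

```python
import errno
import os
import tempfile

# os.mkdir is atomic: it either creates the lock directory or fails
# with EEXIST, so it works as a cross-platform test-and-set.
def try_lock(path):
    try:
        os.mkdir(path)
        return True
    except OSError as ex:
        if ex.errno == errno.EEXIST:
            return False    # somebody else holds the lock
        raise

lock_dir = os.path.join(tempfile.mkdtemp(), 'db.lock')
first = try_lock(lock_dir)    # acquires the lock
second = try_lock(lock_dir)   # already held -> rejected
os.rmdir(lock_dir)            # release
third = try_lock(lock_dir)    # free again
```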
class UnixFileLock(object):
__slots__ = ('fd', 'filename')
def __init__(self, filename):
filename = os.path.normcase(os.path.abspath(filename))
self.filename = filename
self.fd = None
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
self.release_lock()
def __open(self):
if self.fd is None:
self.fd = os.open(self.filename, os.O_CREAT | os.O_RDWR)
def __close(self):
os.close(self.fd)
self.fd = None
def read_lock(self, wait=True, force=False):
self.__lock(write=False, wait=wait)
return self
def write_lock(self, wait=True, force=False):
self.__lock(write=True, wait=wait)
return self
def __lock(self, write, wait):
self.__open()
if write:
flags = fcntl.LOCK_EX
else:
flags = fcntl.LOCK_SH
if not wait:
flags |= fcntl.LOCK_NB
try:
fcntl.lockf(self.fd, flags)
except IOError as ex:
if ex.errno in (errno.EACCES, errno.EAGAIN):
raise ErrorFileLocked(self.filename)
raise
def release_lock(self):
fcntl.lockf(self.fd, fcntl.LOCK_UN)
self.__close()
class WindowsFileLock(object):
def __init_win_types(self):
self.LOCKFILE_FAIL_IMMEDIATELY = 0x1
self.LOCKFILE_EXCLUSIVE_LOCK = 0x2
if ctypes.sizeof(ctypes.c_ulong) != ctypes.sizeof(ctypes.c_void_p):
ulong_ptr = ctypes.c_int64
else:
ulong_ptr = ctypes.c_ulong
pvoid = ctypes.c_void_p
dword = ctypes.wintypes.DWORD
handle = ctypes.wintypes.HANDLE
class _Offset(ctypes.Structure):
_fields_ = [
('Offset', dword),
('OffsetHigh', dword)
]
class _OffsetUnion(ctypes.Union):
_anonymous_ = ['_offset']
_fields_ = [
('_offset', _Offset),
('Pointer', pvoid)
]
class OVERLAPPED(ctypes.Structure):
_anonymous_ = ['_offset_union']
_fields_ = [
('Internal', ulong_ptr),
('InternalHigh', ulong_ptr),
('_offset_union', _OffsetUnion),
('hEvent', handle)
]
lpoverlapped = ctypes.POINTER(OVERLAPPED)
self.overlapped = OVERLAPPED()
self.poverlapped = lpoverlapped(self.overlapped)
self.LockFileEx = ctypes.windll.kernel32.LockFileEx
self.UnlockFileEx = ctypes.windll.kernel32.UnlockFileEx
def __init__(self, filename):
self.__init_win_types()
filename = os.path.normcase(os.path.abspath(filename))
self.filename = filename
self.fd = None
self.handle = None
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
self.release_lock()
def __open(self):
if self.fd is None:
lockfilename = self.filename + ".lock"
self.fd = os.open(
lockfilename, os.O_CREAT | os.O_RDWR | os.O_NOINHERIT)
self.handle = msvcrt.get_osfhandle(self.fd)
def __close(self):
os.close(self.fd)
self.fd = None
self.handle = None
def __lock(self, write, wait):
self.__open()
if write:
flags = self.LOCKFILE_EXCLUSIVE_LOCK
else:
flags = 0
if not wait:
flags |= self.LOCKFILE_FAIL_IMMEDIATELY
result = self.LockFileEx(
self.handle, flags, 0, 0, 4096, self.poverlapped)
if not result:
raise ErrorFileLocked(self.filename)
def read_lock(self, wait=True, force=False):
self.__lock(write=False, wait=wait)
return self
def write_lock(self, wait=True, force=False):
self.__lock(write=True, wait=wait)
return self
def release_lock(self):
self.UnlockFileEx(self.handle, 0, 0, 4096, self.poverlapped)
self.__close()
try:
import fcntl
FileLock = UnixFileLock
except ImportError:
try:
import msvcrt
import ctypes
import ctypes.wintypes
FileLock = WindowsFileLock
except ImportError:
FileLock = GeneralFileLock
LOG_CRITICAL = logging.CRITICAL
LOG_ERROR = logging.ERROR
LOG_WARNING = logging.WARNING
LOG_INFO = logging.INFO
LOG_DEBUG = logging.DEBUG
class LogFormatter(logging.Formatter):
__slots__ = ('other',)
def __init__(self, *args, **kw):
logging.Formatter.__init__(self, *args, **kw)
self.other = logging.Formatter("%(levelname)s: %(message)s")
def format(self, record):
if record.levelno == logging.INFO:
return logging.Formatter.format(self, record)
else:
return self.other.format(record)
def _make_aql_logger():
logger = logging.getLogger("AQL")
handler = logging.StreamHandler()
formatter = LogFormatter()
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)
return logger
_logger = _make_aql_logger()
set_log_level = _logger.setLevel
log_critical = _logger.critical
log_error = _logger.error
log_warning = _logger.warning
log_info = _logger.info
log_debug = _logger.debug
add_log_handler = _logger.addHandler
class Tempfile (str):
def __new__(cls, prefix='tmp', suffix='', root_dir=None, mode='w+b'):
handle = tempfile.NamedTemporaryFile(mode=mode, suffix=suffix,
prefix=prefix, dir=root_dir,
delete=False)
self = super(Tempfile, cls).__new__(cls, handle.name)
self.__handle = handle
return self
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
self.remove()
def write(self, data):
self.__handle.write(data)
    def read(self, size=-1):
        return self.__handle.read(size)
def seek(self, offset):
self.__handle.seek(offset)
def tell(self):
return self.__handle.tell()
def flush(self):
if self.__handle is not None:
self.__handle.flush()
def close(self):
if self.__handle is not None:
self.__handle.close()
self.__handle = None
return self
def remove(self):
self.close()
try:
os.remove(self)
except OSError as ex:
if ex.errno != errno.ENOENT:
raise
return self
class Tempdir(str):
def __new__(cls, prefix='tmp', suffix='', root_dir=None, name=None):
if root_dir is not None:
if not os.path.isdir(root_dir):
os.makedirs(root_dir)
if name is None:
path = tempfile.mkdtemp(prefix=prefix, suffix=suffix, dir=root_dir)
else:
if root_dir is not None:
name = os.path.join(root_dir, name)
path = os.path.abspath(name)
if not os.path.isdir(path):
os.makedirs(path)
return super(Tempdir, cls).__new__(cls, path)
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
self.remove()
def remove(self):
shutil.rmtree(self, ignore_errors=False)
def to_sequence(value, _seq_types=(u_str, bytes, bytearray)):
if not isinstance(value, _seq_types):
try:
iter(value)
return value
except TypeError:
pass
if value is None:
return tuple()
return value,
def is_sequence(value, _seq_types=(u_str, bytes, bytearray)):
if not isinstance(value, _seq_types):
try:
iter(value)
return True
except TypeError:
pass
return False
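`to_sequence` normalizes arbitrary values into something iterable while deliberately treating strings and bytes as scalars. A few concrete cases (the helper is restated so the snippet runs standalone):

```python
# Restated normalization rule: strings/bytes are iterable but are wrapped
# as single scalars; None becomes (); real iterables pass through as-is.
def to_sequence(value, _seq_types=(str, bytes, bytearray)):
    if not isinstance(value, _seq_types):
        try:
            iter(value)
            return value
        except TypeError:
            pass
    if value is None:
        return tuple()
    return (value,)
```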
class UniqueList (object):
__slots__ = (
'__values_list',
'__values_set',
)
def __init__(self, values=None):
self.__values_list = []
self.__values_set = set()
self.__add_values(values)
def __add_value_front(self, value):
values_set = self.__values_set
values_list = self.__values_list
if value in values_set:
values_list.remove(value)
else:
values_set.add(value)
values_list.insert(0, value)
def __add_value(self, value):
values_set = self.__values_set
values_list = self.__values_list
if value not in values_set:
values_set.add(value)
values_list.append(value)
def __add_values(self, values):
values_set_add = self.__values_set.add
values_list_append = self.__values_list.append
values_set_size = self.__values_set.__len__
values_list_size = self.__values_list.__len__
for value in to_sequence(values):
values_set_add(value)
if values_set_size() > values_list_size():
values_list_append(value)
def __add_values_front(self, values):
values_set = self.__values_set
values_list = self.__values_list
values_set_add = values_set.add
values_list_insert = values_list.insert
values_set_size = values_set.__len__
values_list_size = values_list.__len__
values_list_index = values_list.index
pos = 0
for value in to_sequence(values):
values_set_add(value)
if values_set_size() == values_list_size():
i = values_list_index(value)
if i < pos:
continue
del values_list[i]
values_list_insert(pos, value)
pos += 1
def __remove_value(self, value):
try:
self.__values_set.remove(value)
self.__values_list.remove(value)
except (KeyError, ValueError):
pass
def __remove_values(self, values):
values_set_remove = self.__values_set.remove
values_list_remove = self.__values_list.remove
for value in to_sequence(values):
try:
values_set_remove(value)
values_list_remove(value)
except (KeyError, ValueError):
pass
def __contains__(self, other):
return other in self.__values_set
def __len__(self):
return len(self.__values_list)
def __iter__(self):
return iter(self.__values_list)
def __reversed__(self):
return reversed(self.__values_list)
def __str__(self):
return str(self.__values_list)
def __eq__(self, other):
if isinstance(other, UniqueList):
return self.__values_set == other.__values_set
return self.__values_set == set(to_sequence(other))
def __ne__(self, other):
if isinstance(other, UniqueList):
return self.__values_set != other.__values_set
return self.__values_set != set(to_sequence(other))
def __lt__(self, other):
if not isinstance(other, UniqueList):
other = UniqueList(other)
return self.__values_list < other.__values_list
    def __le__(self, other):
        if not isinstance(other, UniqueList):
            other = UniqueList(other)
        return self.__values_list <= other.__values_list
    def __gt__(self, other):
        if not isinstance(other, UniqueList):
            other = UniqueList(other)
        return self.__values_list > other.__values_list
    def __ge__(self, other):
        if not isinstance(other, UniqueList):
            other = UniqueList(other)
        return self.__values_list >= other.__values_list
def __getitem__(self, index):
return self.__values_list[index]
def __iadd__(self, values):
self.__add_values(values)
return self
def __add__(self, values):
other = UniqueList(self)
other.__add_values(values)
return other
def __radd__(self, values):
other = UniqueList(values)
other.__add_values(self)
return other
def __isub__(self, values):
self.__remove_values(values)
return self
def append(self, value):
self.__add_value(value)
def extend(self, values):
self.__add_values(values)
def reverse(self):
self.__values_list.reverse()
def append_front(self, value):
self.__add_value_front(value)
def extend_front(self, values):
self.__add_values_front(values)
def remove(self, value):
self.__remove_value(value)
def pop(self):
value = self.__values_list.pop()
self.__values_set.remove(value)
return value
def pop_front(self):
value = self.__values_list.pop(0)
self.__values_set.remove(value)
return value
def self_test(self):
size = len(self)
if size != len(self.__values_list):
raise AssertionError(
"size(%s) != len(self.__values_list)(%s)" %
(size, len(self.__values_list)))
if size != len(self.__values_set):
raise AssertionError(
"size(%s) != len(self.__values_set)(%s)" %
(size, len(self.__values_set)))
if self.__values_set != set(self.__values_list):
raise AssertionError("self.__values_set != self.__values_list")
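UniqueList above pairs a set with a list so that membership tests stay O(1) while insertion order is preserved. A minimal stand-alone sketch of that pattern (`TinyUniqueList` is illustrative only, not part of this module):

```python
class TinyUniqueList:
    def __init__(self, values=()):
        self._seen = set()      # O(1) membership checks
        self._items = []        # preserves insertion order
        for v in values:
            self.append(v)

    def append(self, v):
        # Grow the list only when the set actually gained a new member.
        if v not in self._seen:
            self._seen.add(v)
            self._items.append(v)

    def __iter__(self):
        return iter(self._items)


u = TinyUniqueList([3, 1, 3, 2, 1])
assert list(u) == [3, 1, 2]
u.append(3)
assert list(u) == [3, 1, 2]
```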
class List (list):
    """list subclass that accepts a scalar, None or a sequence as input
    and supports ``-=`` for removing values."""
    def __init__(self, values=None):
        super(List, self).__init__(to_sequence(values))
def __iadd__(self, values):
super(List, self).__iadd__(to_sequence(values))
return self
    def __add__(self, values):
        other = List(self)
        other += values
        return other
def __radd__(self, values):
other = List(values)
other += self
return other
def __isub__(self, values):
for value in to_sequence(values):
while True:
try:
self.remove(value)
except ValueError:
break
return self
@staticmethod
def __to_list(values):
if isinstance(values, List):
return values
return List(values)
def __eq__(self, other):
return super(List, self).__eq__(self.__to_list(other))
def __ne__(self, other):
return super(List, self).__ne__(self.__to_list(other))
def __lt__(self, other):
return super(List, self).__lt__(self.__to_list(other))
def __le__(self, other):
return super(List, self).__le__(self.__to_list(other))
def __gt__(self, other):
return super(List, self).__gt__(self.__to_list(other))
def __ge__(self, other):
return super(List, self).__ge__(self.__to_list(other))
def append_front(self, value):
self.insert(0, value)
def extend(self, values):
super(List, self).extend(to_sequence(values))
def extend_front(self, values):
self[:0] = to_sequence(values)
def pop_front(self):
return self.pop(0)
class _SplitListBase(object):
    """Mixin that lets a list be built from a string: alternative
    separators are normalized to the primary one, then the string is
    split and empty fragments are dropped."""
@classmethod
def __to_sequence(cls, values):
if not is_string(values):
return values
sep = cls._separator
for s in cls._other_separators:
values = values.replace(s, sep)
return filter(None, values.split(sep))
@classmethod
def __to_split_list(cls, values):
if isinstance(values, cls):
return values
return cls(values)
def __init__(self, values=None):
super(_SplitListBase, self).__init__(self.__to_sequence(values))
def __iadd__(self, values):
return super(_SplitListBase, self).__iadd__(self.__to_sequence(values))
def __isub__(self, values):
return super(_SplitListBase, self).__isub__(self.__to_sequence(values))
def extend(self, values):
super(_SplitListBase, self).extend(self.__to_sequence(values))
def extend_front(self, values):
super(_SplitListBase, self).extend_front(self.__to_sequence(values))
def __eq__(self, other):
return super(_SplitListBase, self).__eq__(self.__to_split_list(other))
def __ne__(self, other):
return super(_SplitListBase, self).__ne__(self.__to_split_list(other))
def __lt__(self, other):
return super(_SplitListBase, self).__lt__(self.__to_split_list(other))
def __le__(self, other):
return super(_SplitListBase, self).__le__(self.__to_split_list(other))
def __gt__(self, other):
return super(_SplitListBase, self).__gt__(self.__to_split_list(other))
def __ge__(self, other):
return super(_SplitListBase, self).__ge__(self.__to_split_list(other))
def __str__(self):
return self._separator.join(map(cast_str, iter(self)))
def split_list_type(list_type, separators):
attrs = dict(_separator=separators[0],
_other_separators=separators[1:])
return type('SplitList', (_SplitListBase, list_type), attrs)
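A stand-alone sketch of the split logic performed by `_SplitListBase` (`split_on` is a hypothetical helper, shown only to illustrate the normalize-then-split idea):

```python
def split_on(values, separators):
    # Normalize every alternative separator to the primary one, then split
    # and drop empty fragments.
    sep = separators[0]
    for s in separators[1:]:
        values = values.replace(s, sep)
    return [v for v in values.split(sep) if v]


assert split_on("a,b;c;;d", (",", ";")) == ["a", "b", "c", "d"]
assert split_on(";;", (",", ";")) == []
```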
class _ValueListBase(object):
    """Mixin that coerces every stored item to the class' ``_value_type``."""
@classmethod
def __to_sequence(cls, values):
if isinstance(values, cls):
return values
return map(cls._value_type, to_sequence(values))
@classmethod
def __to_value_list(cls, values):
if isinstance(values, cls):
return values
return cls(values)
def __init__(self, values=None):
super(_ValueListBase, self).__init__(self.__to_sequence(values))
def __iadd__(self, values):
return super(_ValueListBase, self).__iadd__(
self.__to_value_list(values))
def __isub__(self, values):
return super(_ValueListBase, self).__isub__(
self.__to_value_list(values))
def extend(self, values):
super(_ValueListBase, self).extend(self.__to_value_list(values))
def extend_front(self, values):
super(_ValueListBase, self).extend_front(self.__to_value_list(values))
def append(self, value):
super(_ValueListBase, self).append(self._value_type(value))
def count(self, value):
return super(_ValueListBase, self).count(self._value_type(value))
    def index(self, value, i=0, j=None):
        if j is None:
            j = len(self)
        return super(_ValueListBase, self).index(self._value_type(value), i, j)
def insert(self, i, value):
return super(_ValueListBase, self).insert(i, self._value_type(value))
def remove(self, value):
return super(_ValueListBase, self).remove(self._value_type(value))
def __setitem__(self, index, value):
if type(index) is slice:
value = self.__to_value_list(value)
else:
value = self._value_type(value)
return super(_ValueListBase, self).__setitem__(index, value)
def __eq__(self, other):
return super(_ValueListBase, self).__eq__(self.__to_value_list(other))
def __ne__(self, other):
return super(_ValueListBase, self).__ne__(self.__to_value_list(other))
def __lt__(self, other):
return super(_ValueListBase, self).__lt__(self.__to_value_list(other))
def __le__(self, other):
return super(_ValueListBase, self).__le__(self.__to_value_list(other))
def __gt__(self, other):
return super(_ValueListBase, self).__gt__(self.__to_value_list(other))
def __ge__(self, other):
return super(_ValueListBase, self).__ge__(self.__to_value_list(other))
def __contains__(self, other):
value = self._value_type(other)
return super(_ValueListBase, self).__contains__(value)
def value_list_type(list_type, value_type):
attrs = dict(_value_type=value_type)
return type('ValueList', (_ValueListBase, list_type), attrs)
if os.path.normcase('ABC') == os.path.normcase('abc'):
FilePathBase = IgnoreCaseString
else:
FilePathBase = String
try:
_splitunc = os.path.splitunc
except AttributeError:
def _splitunc(path):
return str(), path
class FilePath (FilePathBase):
    """Path string with convenience wrappers around ``os.path``
    (dirname, filename, ext, drive, join_path, etc.)."""
def __getnewargs__(self):
return str(self),
def __getstate__(self):
return {}
def __setstate__(self, state):
pass
def __add__(self, other):
return FilePath(super(FilePath, self).__add__(other))
def __iadd__(self, other):
return FilePath(super(FilePath, self).__add__(other))
def __hash__(self):
return super(FilePath, self).__hash__()
def abspath(self):
return FilePath(os.path.abspath(self))
def normpath(self):
return FilePath(os.path.normpath(self))
def filename(self):
return FilePath(os.path.basename(self))
def dirname(self):
return FilePath(os.path.dirname(self))
def ext(self):
return FilePathBase(os.path.splitext(self)[1])
def name(self):
return FilePathBase(os.path.splitext(self.filename())[0])
def drive(self):
drive, path = os.path.splitdrive(self)
if not drive:
drive, path = _splitunc(path)
return FilePathBase(drive)
def change(self, dirname=None, name=None, ext=None, prefix=None):
self_dirname, self_filename = os.path.split(self)
self_name, self_ext = os.path.splitext(self_filename)
if dirname is None:
dirname = self_dirname
if name is None:
name = self_name
if ext is None:
ext = self_ext
if prefix:
name = prefix + name
return FilePath(os.path.join(dirname, name + ext))
def join_path(self, *paths):
return FilePath(os.path.join(self, *paths))
class AbsFilePath (FilePath):
def __new__(cls, value=None):
if type(value) is cls:
return value
if value is None:
value = ''
value = os.path.normcase(os.path.abspath(value))
return super(AbsFilePath, cls).__new__(cls, value)
try:
import queue
except ImportError:
import Queue as queue # python 2
class _NoLock(object):
def acquire_shared(self):
return self
def acquire_exclusive(self):
return self
def release(self):
pass
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
pass
class _SharedLock(object):
    """Readers-writer lock: a positive count tracks shared (reader)
    holders, a negative count marks an exclusive (writer) holder."""
def __init__(self):
self.cond = threading.Condition(threading.Lock())
self.count = 0
def acquire_shared(self):
cond = self.cond
with cond:
while self.count < 0:
cond.wait()
self.count += 1
return self
def acquire_exclusive(self):
cond = self.cond
with cond:
while self.count != 0:
cond.wait()
self.count -= 1
return self
def release(self):
cond = self.cond
with cond:
if self.count > 0:
self.count -= 1
elif self.count < 0:
self.count += 1
cond.notify_all()
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
self.release()
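The counter-based readers-writer scheme used by `_SharedLock` can be sketched on its own. This is a simplified illustration (`RWLock` is hypothetical; the real class distinguishes readers and writers only through the sign of a single counter, exactly as here):

```python
import threading

class RWLock:
    def __init__(self):
        self.cond = threading.Condition()
        self.count = 0          # >0: readers, -1: one writer, 0: free

    def acquire_shared(self):
        with self.cond:
            while self.count < 0:        # wait out an exclusive holder
                self.cond.wait()
            self.count += 1

    def acquire_exclusive(self):
        with self.cond:
            while self.count != 0:       # wait until completely free
                self.cond.wait()
            self.count = -1

    def release(self):
        with self.cond:
            self.count += 1 if self.count < 0 else -1
            self.cond.notify_all()


lock = RWLock()
lock.acquire_shared()
lock.acquire_shared()            # two readers can hold the lock at once
assert lock.count == 2
lock.release()
lock.release()
lock.acquire_exclusive()
assert lock.count == -1
lock.release()
assert lock.count == 0
```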
class _Task(object):
__slots__ = (
'priority',
'task_id',
'func',
'args',
'kw'
)
def __init__(self, priority, task_id, func, args, kw):
self.priority = priority
self.task_id = task_id
self.func = func
self.args = args
self.kw = kw
def __lt__(self, other):
if isinstance(other, _NullTask):
return False
if isinstance(other, _ExpensiveTask):
return True
return self.priority < other.priority
def __call__(self, lock):
with lock.acquire_shared():
return self.func(*self.args, **self.kw)
class _NullTask(_Task):
__slots__ = (
'task_id',
)
def __init__(self):
self.task_id = None
def __lt__(self, other):
return True
def __call__(self, lock):
pass
_null_task = _NullTask()
class _ExpensiveTask(_Task):
def __init__(self, task_id, func, args, kw):
super(_ExpensiveTask, self).__init__(None, task_id, func, args, kw)
def __lt__(self, other):
return False
def __call__(self, lock):
with lock.acquire_exclusive():
return self.func(*self.args, **self.kw)
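The three task classes above define a total order for the worker priority queue: the null shutdown task sorts before everything, an expensive task sorts after everything, and plain tasks fall back to their numeric priority. A minimal stand-alone sketch of that ordering (class names `Plain`/`Null`/`Expensive` are illustrative):

```python
import heapq

class Plain:
    def __init__(self, priority):
        self.priority = priority
    def __lt__(self, other):
        if isinstance(other, Null):
            return False            # nothing sorts before the null task
        if isinstance(other, Expensive):
            return True             # everything sorts before expensive tasks
        return self.priority < other.priority

class Null:
    def __lt__(self, other):
        return True                 # always popped first

class Expensive:
    def __lt__(self, other):
        return False                # always popped last

tasks = [Plain(5), Expensive(), Null(), Plain(1)]
heapq.heapify(tasks)
assert isinstance(heapq.heappop(tasks), Null)
assert heapq.heappop(tasks).priority == 1
assert heapq.heappop(tasks).priority == 5
assert isinstance(heapq.heappop(tasks), Expensive)
```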
class TaskResult (object):
__slots__ = ('task_id', 'error', 'result')
def __init__(self, task_id=None, result=None, error=None):
self.task_id = task_id
self.result = result
self.error = error
def is_failed(self):
return self.error is not None
def __lt__(self, other):
return (self.task_id, self.result, self.error) < (other.task_id, other.result, other.error)
def __eq__(self, other):
return (self.task_id, self.result, self.error) == (other.task_id, other.result, other.error)
def __ne__(self, other):
return not self.__eq__(other)
def __str__(self):
return "task_id: %s, result: %s, error: %s" % (self.task_id, self.result, self.error)
class _WorkerThread(threading.Thread):
def __init__(self, tasks, finished_tasks, task_lock,
stop_event, fail_handler):
super(_WorkerThread, self).__init__()
self.tasks = tasks
self.finished_tasks = finished_tasks
self.task_lock = task_lock
self.fail_handler = fail_handler
self.stop_event = stop_event
self.daemon = True
def run(self):
tasks = self.tasks
finished_tasks = self.finished_tasks
is_stopped = self.stop_event.is_set
task_lock = self.task_lock
while not is_stopped():
task = tasks.get()
task_id = task.task_id
if task_id is not None:
task_result = TaskResult(task_id=task_id)
else:
task_result = None
try:
result = task(task_lock)
if task_result is not None:
task_result.result = result
except BaseException as ex:
self.fail_handler(task_result, ex)
finally:
if task_result is not None:
finished_tasks.put(task_result)
tasks.task_done()
class TaskManager (object):
    """Runs prioritized tasks on a pool of worker threads and collects
    the finished results; "expensive" tasks are serialized through a
    readers-writer lock."""
__slots__ = (
'task_lock',
'num_threads',
'threads',
'tasks',
'finished_tasks',
'unfinished_tasks',
'keep_going',
'stop_event',
'fail_event',
'with_backtrace',
)
def __init__(self):
self.tasks = queue.PriorityQueue()
self.finished_tasks = queue.Queue()
self.task_lock = _NoLock()
self.unfinished_tasks = 0
self.threads = []
self.keep_going = True
self.stop_event = threading.Event()
self.fail_event = threading.Event()
self.with_backtrace = True
def start(self, num_threads):
threads = self.threads
num_threads -= len(threads)
args = (self.tasks, self.finished_tasks, self.task_lock,
self.stop_event, self.fail_handler)
for i in range(num_threads):
thread = _WorkerThread(*args)
threads.append(thread)
thread.start()
def stop(self):
stop_event = self.stop_event
if not stop_event.is_set():
stop_event.set()
put_task = self.tasks.put
for thread in self.threads:
put_task(_null_task)
for thread in self.threads:
thread.join()
self.threads[:] = []
stop_event.clear()
def __add_task(self, task):
self.tasks.put(task)
if task.task_id is not None:
self.unfinished_tasks += 1
def add_task(self, priority, task_id, function, *args, **kw):
task = _Task(priority, task_id, function, args, kw)
self.__add_task(task)
def disable_keep_going(self):
self.keep_going = False
def disable_backtrace(self):
self.with_backtrace = False
def enable_expensive(self):
if isinstance(self.task_lock, _SharedLock):
return
num_threads = len(self.threads)
self.stop()
if self.fail_event.is_set():
return
self.task_lock = _SharedLock()
self.start(num_threads)
def add_expensive_task(self, task_id, function, *args, **kw):
task = _ExpensiveTask(task_id, function, args, kw)
self.enable_expensive()
self.__add_task(task)
def fail_handler(self, task_result, ex):
if self.with_backtrace:
err = traceback.format_exc()
else:
err = str(ex)
if task_result is not None:
task_result.error = err
else:
log_warning("Internal task failed with error: %s", err)
if not self.keep_going:
self.fail_event.set()
self.stop_event.set()
def get_finished_tasks(self, block=True):
result = []
is_stopped = self.stop_event.is_set
finished_tasks = self.finished_tasks
if is_stopped():
self.stop()
if block:
block = (self.unfinished_tasks > 0) and self.threads
while True:
try:
task_result = finished_tasks.get(block=block)
block = False
result.append(task_result)
finished_tasks.task_done()
except queue.Empty:
if self.tasks.empty() and not self.threads:
self.unfinished_tasks = 0
return result
break
self.unfinished_tasks -= len(result)
assert self.unfinished_tasks >= 0
return result
class Dict (dict):
    """dict subclass whose ``+=`` merges another mapping, adding together
    the values of keys present on both sides."""
@staticmethod
def to_items(items):
if not items or (items is NotImplemented):
return tuple()
try:
items = items.items
except AttributeError:
return items
return items()
def __init__(self, items=None):
super(Dict, self).__init__(self.to_items(items))
def __iadd__(self, items):
for key, value in self.to_items(items):
try:
self[key] += value
except KeyError:
self[key] = value
return self
def copy(self, key_type=None, value_type=None):
other = Dict()
for key, value in self.items():
if key_type:
key = key_type(key)
if value_type:
value = value_type(value)
other[key] = value
return other
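The merge semantics of `Dict.__iadd__` can be shown in isolation: values of duplicate keys are combined with ``+``, so numbers add and lists concatenate. A small sketch (`merge_add` is a hypothetical helper, not part of this module):

```python
def merge_add(target, items):
    # Existing keys have their values added together; new keys are inserted.
    for key, value in items:
        if key in target:
            target[key] += value
        else:
            target[key] = value
    return target


d = merge_add({'a': 1, 'b': [2]}, [('a', 10), ('b', [3]), ('c', 5)])
assert d == {'a': 11, 'b': [2, 3], 'c': 5}
```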
class _SplitDictBase(object):
@classmethod
def __to_items(cls, items_str):
if not is_string(items_str):
return items_str
sep = cls._separator
for s in cls._other_separators:
items_str = items_str.replace(s, sep)
items = []
for v in filter(None, items_str.split(sep)):
key, _, value = v.partition('=')
items.append((key, value))
return items
@classmethod
def __to_split_dict(cls, items):
if isinstance(items, cls):
return items
return cls(cls.__to_items(items))
def __init__(self, items=None):
super(_SplitDictBase, self).__init__(self.__to_items(items))
def __iadd__(self, items):
return super(_SplitDictBase, self).__iadd__(self.__to_items(items))
def update(self, other=None, **kwargs):
other = self.__to_items(other)
super(_SplitDictBase, self).update(other)
items = self.__to_items(kwargs)
super(_SplitDictBase, self).update(items)
def __eq__(self, other):
return super(_SplitDictBase, self).__eq__(self.__to_split_dict(other))
def __ne__(self, other):
return super(_SplitDictBase, self).__ne__(self.__to_split_dict(other))
def __lt__(self, other):
return super(_SplitDictBase, self).__lt__(self.__to_split_dict(other))
def __le__(self, other):
return super(_SplitDictBase, self).__le__(self.__to_split_dict(other))
def __gt__(self, other):
return super(_SplitDictBase, self).__gt__(self.__to_split_dict(other))
def __ge__(self, other):
return super(_SplitDictBase, self).__ge__(self.__to_split_dict(other))
def __str__(self):
return self._separator.join(sorted("%s=%s" % (key, value)
for key, value in self.items()))
def split_dict_type(dict_type, separators):
attrs = dict(_separator=separators[0],
_other_separators=separators[1:])
return type('SplitDict', (_SplitDictBase, dict_type), attrs)
class _ValueDictBase(object):
__VALUE_TYPES = {}
@classmethod
def get_key_type(cls):
return cls._key_type
@classmethod
def get_value_type(cls):
return cls._default_value_type
@classmethod
def _to_value(cls, key, value, val_types=__VALUE_TYPES):
val_type = cls._default_value_type
try:
if val_type is None:
val_type = val_types[key]
if isinstance(value, val_type):
return value
value = val_type(value)
except KeyError:
pass
cls.set_value_type(key, type(value))
return value
@classmethod
def set_value_type(cls, key, value_type, value_types=__VALUE_TYPES):
default_type = cls._default_value_type
if default_type is None:
if value_type is list:
value_type = List
if value_type is dict:
value_type = Dict
value_types[key] = value_type
@classmethod
def __to_items(cls, items):
if isinstance(items, _ValueDictBase):
return items
key_type = cls._key_type
to_value = cls._to_value
items_tmp = []
for key, value in Dict.to_items(items):
key = key_type(key)
value = to_value(key, value)
items_tmp.append((key, value))
return items_tmp
@classmethod
def __to_value_dict(cls, items):
if isinstance(items, _ValueDictBase):
return items
return cls(items)
def __init__(self, values=None):
super(_ValueDictBase, self).__init__(self.__to_items(values))
def __iadd__(self, values):
return super(_ValueDictBase, self).__iadd__(self.__to_items(values))
def get(self, key, default=None):
return super(_ValueDictBase, self).get(self._key_type(key), default)
def __getitem__(self, key):
return super(_ValueDictBase, self).__getitem__(self._key_type(key))
def __setitem__(self, key, value):
key = self._key_type(key)
value = self._to_value(key, value)
return super(_ValueDictBase, self).__setitem__(key, value)
def __delitem__(self, key):
return super(_ValueDictBase, self).__delitem__(self._key_type(key))
def pop(self, key, *args):
return super(_ValueDictBase, self).pop(self._key_type(key), *args)
def setdefault(self, key, default):
key = self._key_type(key)
default = self._to_value(key, default)
return super(_ValueDictBase, self).setdefault(key, default)
def update(self, other=None, **kwargs):
other = self.__to_items(other)
super(_ValueDictBase, self).update(other)
items = self.__to_items(kwargs)
super(_ValueDictBase, self).update(items)
def __eq__(self, other):
return super(_ValueDictBase, self).__eq__(self.__to_value_dict(other))
def __ne__(self, other):
return super(_ValueDictBase, self).__ne__(self.__to_value_dict(other))
def __lt__(self, other):
return super(_ValueDictBase, self).__lt__(self.__to_value_dict(other))
def __le__(self, other):
return super(_ValueDictBase, self).__le__(self.__to_value_dict(other))
def __gt__(self, other):
return super(_ValueDictBase, self).__gt__(self.__to_value_dict(other))
def __ge__(self, other):
return super(_ValueDictBase, self).__ge__(self.__to_value_dict(other))
def __contains__(self, key):
return super(_ValueDictBase, self).__contains__(self._key_type(key))
def value_dict_type(dict_type, key_type, default_value_type=None):
attrs = dict(_key_type=key_type,
_default_value_type=default_value_type)
return type('ValueDict', (_ValueDictBase, dict_type), attrs)
try:
zip_longest = itertools.zip_longest
except AttributeError:
zip_longest = itertools.izip_longest
class ErrorOptionTypeEnumAliasIsAlreadySet(Exception):
def __init__(self, option, value, current_value, new_value):
        msg = "Alias '%s' of Enum Option '%s' can't be changed to '%s' from '%s'" % (value, option, new_value, current_value)
super(ErrorOptionTypeEnumAliasIsAlreadySet, self).__init__(msg)
class ErrorOptionTypeEnumValueIsAlreadySet(Exception):
def __init__(self, option, value, new_value):
        msg = "Value '%s' of Enum Option '%s' can't be changed to an alias of '%s'" % (value, option, new_value)
super(ErrorOptionTypeEnumValueIsAlreadySet, self).__init__(msg)
class ErrorOptionTypeUnableConvertValue(TypeError):
def __init__(self, option_help, invalid_value):
if isinstance(option_help, OptionType):
option_help = option_help.help()
self.option_help = option_help
self.invalid_value = invalid_value
msg = "Unable to convert value '%s (%s)' to option %s" % (
invalid_value, type(invalid_value), option_help.error_text())
super(ErrorOptionTypeUnableConvertValue, self).__init__(msg)
class ErrorOptionTypeNoEnumValues(TypeError):
def __init__(self, option_type):
msg = "Enum option type '%s' doesn't have any values." % (option_type,)
super(ErrorOptionTypeNoEnumValues, self).__init__(msg)
class ErrorOptionTypeCantDeduce(Exception):
def __init__(self, value):
        msg = "Unable to deduce option type from value type: '%s'." % (
            type(value),)
super(ErrorOptionTypeCantDeduce, self).__init__(msg)
def auto_option_type(value):
if is_sequence(value):
unique = isinstance(value, (UniqueList, set, frozenset))
value_type = type(next(iter(value), ''))
opt_type = ListOptionType(value_type=value_type, unique=unique)
elif isinstance(value, dict):
opt_type = DictOptionType()
elif isinstance(value, bool):
opt_type = BoolOptionType()
elif isinstance(value, IgnoreCaseString):
opt_type = StrOptionType(ignore_case=True)
elif is_string(value):
opt_type = StrOptionType()
elif isinstance(value, Version):
opt_type = VersionOptionType()
elif isinstance(value, FilePath):
opt_type = PathOptionType()
elif is_simple_value(value):
opt_type = OptionType(value_type=type(value))
else:
raise ErrorOptionTypeCantDeduce(value)
opt_type.is_auto = True
return opt_type
def _get_type_name(value_type):
if issubclass(value_type, bool):
name = "boolean"
elif issubclass(value_type, int):
name = "integer"
elif issubclass(value_type, IgnoreCaseString):
name = "case insensitive string"
elif issubclass(value_type, (str, u_str)):
name = "string"
else:
name = value_type.__name__
return name.title()
def _join_to_length(values, max_length=0, separator="", prefix="", suffix=""):
result = []
current_value = ""
for value in values:
value = prefix + value + suffix
if len(current_value) + len(value) > max_length:
if current_value:
current_value += separator
result.append(current_value)
current_value = ""
if current_value:
current_value += separator
current_value += value
if current_value:
result.append(current_value)
return result
def _indent_items(indent_value, values):
result = []
indent_spaces = ' ' * len(indent_value)
for value in values:
if value:
if result:
value = indent_spaces + value
else:
value = indent_value + value
result.append(value)
return result
def _merge_lists(values1, values2, indent_size):
result = []
max_name = max(values1, key=len)
indent_size = max(indent_size, len(max_name)) + 1
for left, right in zip_longest(values1, values2, fillvalue=""):
if not right:
right_indent = ""
else:
right_indent = ' ' * (indent_size - len(left))
value = left + right_indent + right
result.append(value)
return result
class OptionHelp(object):
__slots__ = (
'option_type',
'_names',
'type_name',
'allowed_values',
'current_value',
)
def __init__(self, option_type):
self.option_type = option_type
help_type = option_type.help_type()
self.type_name = help_type if help_type else None
help_range = option_type.help_range()
self.allowed_values = help_range if help_range else None
self._names = []
self.current_value = None
@property
def is_key(self):
return self.option_type.is_tool_key
@property
def group(self):
return self.option_type.group
@property
def description(self):
return self.option_type.description
@property
def names(self):
return self._names
@names.setter
def names(self, names):
self._names = sorted(names, key=str.lower)
def is_hidden(self):
return not bool(self.description) or self.option_type.is_hidden
def _current_value(self, details):
if self.current_value is not None:
if isinstance(self.current_value, (list, tuple, UniqueList)):
current_value = [to_string(v) for v in self.current_value]
if current_value:
current_value = _join_to_length(
current_value,
64,
separator=",",
prefix="'",
suffix="'")
current_value = _indent_items("[ ", current_value)
current_value[-1] += " ]"
details.extend(current_value)
else:
details.append("[]")
else:
current_value = self.option_type.to_str(self.current_value)
if not current_value:
current_value = "''"
details.append(current_value)
else:
details.append("N/A")
def text(self, brief=False, names_indent=0):
details = []
self._current_value(details)
if not brief:
if self.description:
details.append(self.description)
if self.type_name:
details.append("Type: " + self.type_name)
if self.allowed_values:
details += _indent_items("Allowed values: ",
self.allowed_values)
details = _indent_items(": ", details)
result = []
if self.names:
names = self.names
key_marker = '* ' if self.is_key else ' '
names = [key_marker + name for name in names]
details = _merge_lists(names, details, names_indent + 2)
result += details
return result
def error_text(self):
result = []
if self.names:
result.append(', '.join(self.names))
if self.type_name:
result.append("Type: " + self.type_name)
if self.allowed_values:
result.append("Allowed values: %s" %
', '.join(self.allowed_values))
return '. '.join(result)
class OptionHelpGroup(object):
__slots__ = (
'name',
'max_option_name_length',
'help_list',
)
def __init__(self, group_name):
self.name = group_name
self.max_option_name_length = 0
self.help_list = []
def append(self, option_help):
self.max_option_name_length = max(
self.max_option_name_length, len(max(option_help.names, key=len)))
self.help_list.append(option_help)
def __iter__(self):
return iter(self.help_list)
def text(self, brief=False, indent=0):
result = []
group_name = self.name
if group_name:
group_name = "%s:" % (group_name,)
group_border_bottom = "-" * len(group_name)
result.extend([group_name, group_border_bottom])
names_indent = self.max_option_name_length
self.help_list.sort(key=operator.attrgetter('names'))
for option_help in self.help_list:
opt_text = option_help.text(brief, names_indent)
if (len(opt_text) > 1) and result and result[-1]:
result.append("")
result.extend(opt_text)
if len(opt_text) > 1:
result.append("")
if indent:
result = _indent_items(' ' * indent, result)
return result
class OptionType (object):
    """Describes an option: its value type, default, help text and allowed
    range. Calling an instance converts a raw value to the option's type."""
__slots__ = (
'value_type',
'default',
'description',
'group',
'range_help',
'is_auto',
'is_tool_key',
'is_hidden',
)
def __init__(self,
value_type=str,
description=None,
group=None,
range_help=None,
default=NotImplemented,
is_tool_key=False,
is_hidden=False
):
if type(value_type) is type and issubclass(value_type, OptionType):
value_type = value_type()
self.value_type = value_type
self.is_auto = False
self.is_tool_key = is_tool_key
self.is_hidden = is_hidden
self.description = description
self.group = group
self.range_help = range_help
if default is NotImplemented:
self.default = NotImplemented
else:
self.default = value_type(default)
def __call__(self, value=NotImplemented):
"""
Converts a value to options' value
"""
try:
if value is NotImplemented:
if self.default is NotImplemented:
return self.value_type()
return self.default
return self.value_type(value)
except (TypeError, ValueError):
raise ErrorOptionTypeUnableConvertValue(self, value)
def to_str(self, value):
"""
Converts a value to options' value string
"""
return to_string(value)
def help(self):
return OptionHelp(self)
def help_type(self):
return _get_type_name(self.value_type)
def help_range(self):
"""
Returns a description (list of strings) about range of allowed values
"""
if self.range_help:
return list(to_sequence(self.range_help))
return []
class StrOptionType (OptionType):
def __init__(self,
ignore_case=False,
description=None,
group=None,
range_help=None,
is_tool_key=False,
is_hidden=False
):
value_type = IgnoreCaseString if ignore_case else String
super(StrOptionType, self).__init__(value_type,
description,
group,
range_help,
is_tool_key=is_tool_key,
is_hidden=is_hidden)
class VersionOptionType (OptionType):
def __init__(self,
description=None,
group=None,
range_help=None,
is_tool_key=False,
is_hidden=False):
super(VersionOptionType, self).__init__(Version,
description,
group,
range_help,
is_tool_key=is_tool_key,
is_hidden=is_hidden)
def help_type(self):
return "Version String"
class PathOptionType (OptionType):
def __init__(self,
description=None,
group=None,
range_help=None,
is_tool_key=False,
is_hidden=False,
default=NotImplemented
):
super(PathOptionType, self).__init__(FilePath,
description,
group,
range_help,
is_tool_key=is_tool_key,
is_hidden=is_hidden,
default=default)
def help_type(self):
return "File System Path"
class AbsPathOptionType (OptionType):
def __init__(self,
description=None,
group=None,
range_help=None,
is_tool_key=False,
is_hidden=False,
default=NotImplemented
):
super(AbsPathOptionType, self).__init__(AbsFilePath,
description,
group,
range_help,
is_tool_key=is_tool_key,
is_hidden=is_hidden,
default=default)
    def help_type(self):
        return "Absolute File System Path"
class BoolOptionType (OptionType):
__slots__ = (
'true_value',
'false_value',
'true_values',
'false_values',
'aliases',
)
__true_values = ('yes', 'true', 'on', 'enabled', 'y', '1', 't')
__false_values = ('no', 'false', 'off', 'disabled', 'n', '0', 'f')
def __init__(self,
description=None,
group=None,
style=None,
true_values=None,
false_values=None,
default=False,
is_tool_key=False,
is_hidden=False
):
super(BoolOptionType, self).__init__(bool, description, group,
default=default,
is_tool_key=is_tool_key,
is_hidden=is_hidden)
if style is None:
style = ('true', 'false')
else:
style = map(IgnoreCaseString, style)
if true_values is None:
true_values = self.__true_values
else:
true_values = to_sequence(true_values)
if false_values is None:
false_values = self.__false_values
else:
false_values = to_sequence(false_values)
self.true_value, self.false_value = style
self.true_values = set()
self.false_values = set()
self.add_values(true_values, false_values)
self.add_values(self.true_value, self.false_value)
def __call__(self, value=NotImplemented):
if type(value) is bool:
return value
if value is NotImplemented:
value = self.default
value_str = IgnoreCaseString(value)
if value_str in self.true_values:
return True
if value_str in self.false_values:
return False
return True if value else False
def to_str(self, value):
return self.true_value if value else self.false_value
def add_values(self, true_values, false_values):
true_values = to_sequence(true_values)
false_values = to_sequence(false_values)
self.true_values.update(map(IgnoreCaseString, true_values))
self.false_values.update(map(IgnoreCaseString, false_values))
def help_range(self):
def _make_help(value, values):
values = list(values)
values.remove(value)
if values:
values = ', '.join(sorted(values))
return "%s (or %s)" % (value, values)
return "%s" % (value,)
return [_make_help(self.true_value, self.true_values),
_make_help(self.false_value, self.false_values), ]
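`BoolOptionType` accepts many case-insensitive spellings of true and false before falling back to ordinary truthiness. A stand-alone sketch of that parsing (`parse_bool` is a hypothetical helper; the value sets mirror the defaults above):

```python
TRUE_VALUES = {'yes', 'true', 'on', 'enabled', 'y', '1', 't'}
FALSE_VALUES = {'no', 'false', 'off', 'disabled', 'n', '0', 'f'}

def parse_bool(value):
    if isinstance(value, bool):
        return value
    s = str(value).lower()
    if s in TRUE_VALUES:
        return True
    if s in FALSE_VALUES:
        return False
    return bool(value)          # fall back to ordinary truthiness

assert parse_bool("YES") is True
assert parse_bool("off") is False
assert parse_bool(0) is False
```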
class EnumOptionType (OptionType):
__slots__ = (
'__values',
'strict',
)
def __init__(self,
values,
description=None,
group=None,
value_type=IgnoreCaseString,
default=NotImplemented,
strict=True,
is_tool_key=False,
is_hidden=False
):
super(EnumOptionType, self).__init__(value_type, description, group,
default=default,
is_tool_key=is_tool_key,
is_hidden=is_hidden)
self.__values = {}
if default is not NotImplemented:
self.add_values(default)
self.add_values(values)
self.strict = strict
def add_values(self, values):
try:
values = tuple(values.items()) # convert dictionary to a sequence
except AttributeError:
pass
set_default_value = self.__values.setdefault
value_type = self.value_type
for value in to_sequence(values):
it = iter(to_sequence(value))
value = value_type(next(it))
value = set_default_value(value, value)
for alias in it:
alias = value_type(alias)
v = set_default_value(alias, value)
if v != value:
if alias == v:
raise ErrorOptionTypeEnumValueIsAlreadySet(
self, alias, value)
else:
raise ErrorOptionTypeEnumAliasIsAlreadySet(
self, alias, v, value)
def _get_default(self):
value = self.default
if value is not NotImplemented:
return value
try:
return next(iter(self.__values.values()))
except StopIteration:
if self.strict:
raise ErrorOptionTypeNoEnumValues(self)
return self.value_type()
def _convert_value(self, value):
try:
value = self.value_type(value)
except (TypeError, ValueError):
raise ErrorOptionTypeUnableConvertValue(self, value)
try:
return self.__values[value]
except KeyError:
if self.strict:
raise ErrorOptionTypeUnableConvertValue(self, value)
return value
def __call__(self, value=NotImplemented):
if value is NotImplemented:
return self._get_default()
return self._convert_value(value)
def help_range(self):
values = {}
for alias, value in self.__values.items():
if alias is value:
values.setdefault(alias, [])
else:
values.setdefault(value, []).append(alias)
help_str = []
for value, aliases in values.items():
s = to_string(value)
if aliases:
s += ' (or ' + ', '.join(map(to_string, aliases)) + ')'
help_str.append(s)
return help_str
def range(self):
values = []
for alias, value in self.__values.items():
if alias is value:
values.append(alias)
return values
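# Illustrative sketch of EnumOptionType with aliases (option name is hypothetical):
#
#   build_variant = EnumOptionType(values=[('debug', 'dbg'), ('release', 'rel')])
#   build_variant('dbg')    # -> 'debug' (alias resolved to its canonical value)
#   build_variant()         # -> 'debug' (first registered value is the default)
#   build_variant.range()   # -> canonical values: ['debug', 'release']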
class RangeOptionType (OptionType):
__slots__ = (
'min_value',
'max_value',
'restrain',
)
def __init__(self,
min_value,
max_value,
description=None,
group=None,
value_type=int,
restrain=True,
default=NotImplemented,
is_tool_key=False,
is_hidden=False
):
super(RangeOptionType, self).__init__(value_type, description, group,
default=default,
is_tool_key=is_tool_key,
is_hidden=is_hidden)
self.set_range(min_value, max_value, restrain)
if default is not NotImplemented:
self.default = self(default)
def set_range(self, min_value, max_value, restrain=True):
if min_value is not None:
try:
min_value = self.value_type(min_value)
except (TypeError, ValueError):
raise ErrorOptionTypeUnableConvertValue(self, min_value)
else:
min_value = self.value_type()
if max_value is not None:
try:
max_value = self.value_type(max_value)
except (TypeError, ValueError):
raise ErrorOptionTypeUnableConvertValue(self, max_value)
else:
max_value = self.value_type()
self.min_value = min_value
self.max_value = max_value
if restrain is not None:
self.restrain = restrain
def __call__(self, value=NotImplemented):
try:
min_value = self.min_value
if value is NotImplemented:
if self.default is NotImplemented:
return min_value
value = self.default
value = self.value_type(value)
if value < min_value:
if self.restrain:
value = min_value
else:
raise TypeError()
max_value = self.max_value
if value > max_value:
if self.restrain:
value = max_value
else:
raise TypeError()
return value
except TypeError:
raise ErrorOptionTypeUnableConvertValue(self, value)
def help_range(self):
return ["%s ... %s" % (self.min_value, self.max_value)]
def range(self):
return [self.min_value, self.max_value]
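# Illustrative sketch of RangeOptionType clamping (restrain=True is the default,
# the option name is hypothetical):
#
#   jobs = RangeOptionType(1, 8)
#   jobs(0)    # -> 1 (values below min_value are raised to it)
#   jobs(12)   # -> 8 (values above max_value are lowered to it)
#   jobs()     # -> 1 (min_value is used when no default is set)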
class ListOptionType (OptionType):
__slots__ = ('item_type',)
def __init__(self,
value_type=str,
unique=False,
separators=', ',
description=None,
group=None,
range_help=None,
is_tool_key=False,
is_hidden=False
):
if type(value_type) is type and issubclass(value_type, OptionType):
value_type = value_type()
if isinstance(value_type, OptionType):
if description is None:
description = value_type.description
if description:
description = "List of: " + description
if group is None:
group = value_type.group
if range_help is None:
range_help = value_type.range_help
if unique:
list_type = UniqueList
else:
list_type = List
list_type = value_list_type(list_type, value_type)
if separators:
list_type = split_list_type(list_type, separators)
super(ListOptionType, self).__init__(list_type, description,
group, range_help,
is_tool_key=is_tool_key,
is_hidden=is_hidden)
self.item_type = value_type
def __call__(self, values=None):
try:
if values is NotImplemented:
values = []
return self.value_type(values)
except (TypeError, ValueError):
raise ErrorOptionTypeUnableConvertValue(self, values)
def help_type(self):
if isinstance(self.item_type, OptionType):
item_type = self.item_type.help_type()
else:
item_type = _get_type_name(self.item_type)
return "List of %s" % item_type
def help_range(self):
if self.range_help:
return list(to_sequence(self.range_help))
if isinstance(self.item_type, OptionType):
return self.item_type.help_range()
return []
class DictOptionType (OptionType):
def __init__(self,
key_type=str,
value_type=None,
separators=', ',
description=None,
group=None,
range_help=None,
is_tool_key=False,
is_hidden=False
):
if type(value_type) is type and issubclass(value_type, OptionType):
value_type = value_type()
if isinstance(value_type, OptionType):
if description is None:
description = value_type.description
if description:
                    description = "Dictionary of: " + description
if group is None:
group = value_type.group
if range_help is None:
range_help = value_type.range_help
dict_type = value_dict_type(Dict, key_type, value_type)
if separators:
dict_type = split_dict_type(dict_type, separators)
super(DictOptionType, self).__init__(dict_type, description, group,
range_help,
is_tool_key=is_tool_key,
is_hidden=is_hidden)
def set_value_type(self, key, value_type):
if isinstance(value_type, OptionType):
value_type = value_type.value_type
self.value_type.set_value_type(key, value_type)
def __call__(self, values=None):
try:
if values is NotImplemented:
values = None
return self.value_type(values)
except (TypeError, ValueError):
raise ErrorOptionTypeUnableConvertValue(self, values)
def help_type(self):
value_type = self.value_type.get_value_type()
if value_type is not None:
if isinstance(value_type, OptionType):
value_type = value_type.help_type()
else:
value_type = _get_type_name(value_type)
return "Dictionary of %s" % (value_type,)
return "Dictionary"
def help_range(self):
if self.range_help:
return list(to_sequence(self.range_help))
value_type = self.value_type.get_value_type()
if isinstance(value_type, OptionType):
return value_type.help_range()
return []
class ErrorInvalidExecCommand(Exception):
def __init__(self, arg):
msg = "Invalid type of command argument: %s(%s)" % (arg, type(arg))
super(ErrorInvalidExecCommand, self).__init__(msg)
class ErrorFileName(Exception):
def __init__(self, filename):
msg = "Invalid file name: %s(%s)" % (filename, type(filename))
super(ErrorFileName, self).__init__(msg)
class ErrorUnmarshallableObject(Exception):
def __init__(self, obj):
msg = "Unmarshallable object: '%s'" % (obj, )
super(ErrorUnmarshallableObject, self).__init__(msg)
if hasattr(os, 'O_NOINHERIT'):
_O_NOINHERIT = os.O_NOINHERIT
else:
_O_NOINHERIT = 0
if hasattr(os, 'O_SYNC'):
_O_SYNC = os.O_SYNC
else:
_O_SYNC = 0
if hasattr(os, 'O_BINARY'):
_O_BINARY = os.O_BINARY
else:
_O_BINARY = 0
def _open_file_handle(filename, read=True, write=False,
sync=False, truncate=False):
flags = _O_NOINHERIT | _O_BINARY
if not write:
flags |= os.O_RDONLY
else:
flags |= os.O_CREAT
if truncate:
flags |= os.O_TRUNC
if read:
flags |= os.O_RDWR
else:
flags |= os.O_WRONLY
if sync:
flags |= _O_SYNC
return os.open(filename, flags)
def open_file(filename, read=True, write=False, binary=False,
sync=False, truncate=False, encoding=None):
if not is_string(filename):
raise ErrorFileName(filename)
if write:
mode = 'r+'
else:
mode = 'r'
if binary:
mode += 'b'
fd = _open_file_handle(filename, read=read, write=write,
sync=sync, truncate=truncate)
try:
if sync and binary:
return io.open(fd, mode, 0, encoding=encoding)
else:
return io.open(fd, mode, encoding=encoding)
except:
os.close(fd)
raise
def read_text_file(filename, encoding='utf-8'):
with open_file(filename, encoding=encoding) as f:
return f.read()
def read_bin_file(filename):
with open_file(filename, binary=True) as f:
return f.read()
def write_text_file(filename, data, encoding='utf-8'):
with open_file(filename, write=True,
truncate=True, encoding=encoding) as f:
if isinstance(data, (bytearray, bytes)):
data = decode_bytes(data, encoding)
f.write(data)
def write_bin_file(filename, data, encoding=None):
with open_file(filename, write=True, binary=True,
truncate=True, encoding=encoding) as f:
if is_unicode(data):
data = encode_str(data, encoding)
f.write(data)
def exec_file(filename, file_locals):
if not file_locals:
file_locals = {}
source = read_text_file(filename)
code = compile(source, filename, 'exec')
file_locals_orig = file_locals.copy()
exec(code, file_locals)
result = {}
for key, value in file_locals.items():
if key.startswith('_') or isinstance(value, types.ModuleType):
continue
if key not in file_locals_orig:
result[key] = value
return result
def dump_simple_object(obj):
if isinstance(obj, (bytes, bytearray)):
data = obj
elif isinstance(obj, u_str):
data = obj.encode('utf-8')
else:
try:
data = marshal.dumps(obj, 0) # use version 0, for a raw dump
except ValueError:
raise ErrorUnmarshallableObject(obj)
return data
def simple_object_signature(obj, common_hash=None):
data = dump_simple_object(obj)
return data_signature(data, common_hash)
def new_hash(data=b''):
return hashlib.md5(data)
def data_signature(data, common_hash=None):
if common_hash is None:
obj_hash = hashlib.md5(data)
else:
obj_hash = common_hash.copy()
obj_hash.update(data)
return obj_hash.digest()
def file_signature(filename, offset=0):
checksum = hashlib.md5()
chunk_size = checksum.block_size * 4096
with open_file(filename, binary=True) as f:
f.seek(offset)
read = f.read
checksum_update = checksum.update
chunk = True
while chunk:
chunk = read(chunk_size)
checksum_update(chunk)
return checksum.digest()
def file_time_signature(filename):
stat = os.stat(filename)
return simple_object_signature((stat.st_size, stat.st_mtime))
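# Illustrative sketch of the two signature helpers (the file name is hypothetical):
#
#   content_sig = file_signature('build.log')      # md5 digest of the file content
#   quick_sig = file_time_signature('build.log')   # digest of (size, mtime) only;
#                                                  # cheaper, but less reliable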
def file_checksum(filename, offset=0, size=-1, alg='md5', chunk_size=262144):
checksum = hashlib.__dict__[alg]()
with open_file(filename, binary=True) as f:
read = f.read
f.seek(offset)
checksum_update = checksum.update
chunk = True
while chunk:
chunk = read(chunk_size)
checksum_update(chunk)
if size > 0:
size -= len(chunk)
if size <= 0:
break
return checksum
def get_function_name(currentframe=inspect.currentframe):
frame = currentframe()
if frame:
return frame.f_back.f_code.co_name
return "__not_available__"
def print_stacks():
id2name = dict([(th.ident, th.name) for th in threading.enumerate()])
for thread_id, stack in sys._current_frames().items():
print("\n" + ("=" * 64))
print("Thread: %s (%s)" % (id2name.get(thread_id, ""), thread_id))
traceback.print_stack(stack)
try:
_getargspec = inspect.getfullargspec
except AttributeError:
_getargspec = inspect.getargspec
def get_function_args(function, getargspec=_getargspec):
args = getargspec(function)[:4]
if isinstance(function, types.MethodType):
if function.__self__:
args = tuple([args[0][1:]] + list(args[1:]))
return args
def equal_function_args(function1, function2):
if function1 is function2:
return True
args1 = get_function_args(function1)
args2 = get_function_args(function2)
return args1[0:3] == args2[0:3]
def check_function_args(function, args, kw, getargspec=_getargspec):
f_args, f_varargs, f_varkw, f_defaults = getargspec(function)[:4]
current_args_num = len(args) + len(kw)
args_num = len(f_args)
if not f_varargs and not f_varkw:
if current_args_num > args_num:
return False
if f_defaults:
def_args_num = len(f_defaults)
else:
def_args_num = 0
min_args_num = args_num - def_args_num
if current_args_num < min_args_num:
return False
kw = set(kw)
unknown_args = kw - set(f_args)
if unknown_args and not f_varkw:
return False
def_args = f_args[args_num - def_args_num:]
non_def_kw = kw - set(def_args)
non_def_args_num = len(args) + len(non_def_kw)
if non_def_args_num < min_args_num:
return False
twice_args = set(f_args[:len(args)]) & kw
if twice_args:
return False
return True
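# Illustrative sketch of check_function_args (the checked function is hypothetical):
#
#   def build(target, source, flags=None):
#       pass
#
#   check_function_args(build, ('a.o',), {'source': 'a.c'})   # -> True
#   check_function_args(build, (), {'unknown': 1})            # -> False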
def remove_files(files):
for f in to_sequence(files):
try:
os.remove(f)
except OSError as ex:
if ex.errno != errno.ENOENT:
raise
def _decode_data(data):
if not data:
return str()
data = to_unicode(data)
data = data.replace('\r\n', '\n')
data = data.replace('\r', '\n')
return data
class ExecCommandException(Exception):
__slots__ = ('exception',)
def __init__(self, cmd, exception):
msg = ' '.join(to_sequence(cmd))
msg += '\n%s' % (exception,)
self.exception = exception
super(ExecCommandException, self).__init__(msg)
@staticmethod
def failed():
return True
def __bool__(self):
return self.failed()
def __nonzero__(self):
return self.failed()
class ExecCommandResult(Exception):
__slots__ = ('cmd', 'status', 'stdout', 'stderr')
def __init__(self, cmd, status=None, stdout=None, stderr=None):
self.cmd = tuple(to_sequence(cmd))
self.status = status
self.stdout = stdout if stdout else ''
self.stderr = stderr if stderr else ''
super(ExecCommandResult, self).__init__()
def __str__(self):
msg = ' '.join(self.cmd)
out = self.output()
if out:
msg += '\n' + out
if self.status:
msg += "\nExit status: %s" % (self.status,)
return msg
def failed(self):
return self.status != 0
def output(self):
out = self.stdout
if self.stderr:
if out:
out += '\n'
out += self.stderr
else:
out = self.stderr
return out
def __bool__(self):
return self.failed()
def __nonzero__(self):
return self.failed()
try:
_MAX_CMD_LENGTH = os.sysconf('SC_ARG_MAX')
except AttributeError:
_MAX_CMD_LENGTH = 32000 # 32768 default for Windows
def _gen_exec_cmd_file(cmd, file_flag, max_cmd_length=_MAX_CMD_LENGTH):
if not file_flag:
return cmd, None
cmd_length = sum(map(len, cmd)) + len(cmd) - 1
if cmd_length <= max_cmd_length:
return cmd, None
cmd_str = subprocess.list2cmdline(cmd[1:]).replace('\\', '\\\\')
cmd_file = tempfile.NamedTemporaryFile(mode='w+',
suffix='.args',
delete=False)
with cmd_file:
cmd_file.write(cmd_str)
cmd_file = cmd_file.name
cmd = [cmd[0], file_flag + cmd_file]
return cmd, cmd_file
def _exec_command_result(cmd, cwd, env, shell, stdin):
try:
if env:
env = dict((cast_str(key), cast_str(value))
for key, value in env.items())
p = subprocess.Popen(cmd, cwd=cwd, env=env,
shell=shell,
stdin=stdin, stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
universal_newlines=False)
stdout, stderr = p.communicate()
returncode = p.poll()
except Exception as ex:
raise ExecCommandException(cmd, exception=ex)
stdout = _decode_data(stdout)
stderr = _decode_data(stderr)
return ExecCommandResult(cmd, status=returncode,
stdout=stdout, stderr=stderr)
def execute_command(cmd, cwd=None, env=None, stdin=None, file_flag=None):
if is_string(cmd):
shell = True
cmd_file = None
else:
shell = False
cmd, cmd_file = _gen_exec_cmd_file(cmd, file_flag)
try:
return _exec_command_result(cmd, cwd, env, shell, stdin)
finally:
if cmd_file:
remove_files(cmd_file)
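# Illustrative sketch of execute_command (the command is hypothetical):
#
#   result = execute_command(['gcc', '--version'])
#   if result.failed():
#       print(result.output())    # combined stdout/stderr of the command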
def get_shell_script_env(script, args=None, _var_re=re.compile(r'^\w+=')):
args = to_sequence(args)
script_path = os.path.abspath(
os.path.expanduser(os.path.expandvars(script)))
os_env = os.environ
cwd, script = os.path.split(script_path)
if os.name == "nt":
cmd = ['call', script]
cmd += args
cmd += ['&&', 'set']
else:
cmd = ['.', './' + script]
cmd += args
cmd += ['&&', 'printenv']
cmd = ' '.join(cmd)
try:
p = subprocess.Popen(cmd, cwd=cwd, shell=True, env=os_env,
stdin=None, stdout=subprocess.PIPE,
stderr=subprocess.PIPE, universal_newlines=False)
stdout, stderr = p.communicate()
status = p.poll()
except Exception as ex:
raise ExecCommandException(cmd, exception=ex)
stdout = _decode_data(stdout)
stderr = _decode_data(stderr)
if status != 0:
raise ExecCommandResult(cmd, status, stdout, stderr)
script_env = {}
for line in stdout.split('\n'):
match = _var_re.match(line)
if match:
name, sep, value = line.partition('=')
value = value.strip()
current = os_env.get(name, None)
if (current is None) or (value != current.strip()):
script_env[name] = value
return script_env
def cpu_count():
try:
return multiprocessing.cpu_count()
except NotImplementedError:
pass
count = int(os.environ.get('NUMBER_OF_PROCESSORS', 0))
if count > 0:
return count
try:
if 'SC_NPROCESSORS_ONLN' in os.sysconf_names:
count = os.sysconf('SC_NPROCESSORS_ONLN')
elif 'SC_NPROCESSORS_CONF' in os.sysconf_names:
count = os.sysconf('SC_NPROCESSORS_CONF')
if count > 0:
            return count
except AttributeError:
pass
count = 1 # unable to detect number of CPUs
return count
def _memory_usage_smaps():
private = 0
with open("/proc/self/smaps") as smaps:
for line in smaps:
if line.startswith("Private"):
private += int(line.split()[1])
return private
def _memory_usage_statm():
page_size = os.sysconf("SC_PAGE_SIZE")
with open('/proc/self/statm') as f:
mem_stat = f.readline().split()
rss = int(mem_stat[1]) * page_size
shared = int(mem_stat[2]) * page_size
private = rss - shared
return private // 1024
def memory_usage_linux():
try:
return _memory_usage_smaps()
except IOError:
try:
return _memory_usage_statm()
except IOError:
return memory_usage_unix()
def memory_usage_unix():
res = resource.getrusage(resource.RUSAGE_SELF)
return res.ru_maxrss
def memory_usage_windows():
process_handle = win32api.GetCurrentProcess()
memory_info = win32process.GetProcessMemoryInfo(process_handle)
return memory_info['PeakWorkingSetSize']
try:
import resource
if sys.platform[:5] == "linux":
memory_usage = memory_usage_linux
else:
memory_usage = memory_usage_unix
except ImportError:
try:
import win32process
import win32api
memory_usage = memory_usage_windows
except ImportError:
def memory_usage():
return 0
def load_module(module_file, package_name=None):
module_dir, module_file = os.path.split(module_file)
module_name = os.path.splitext(module_file)[0]
if package_name:
full_module_name = package_name + '.' + module_name
else:
full_module_name = module_name
module = sys.modules.get(full_module_name)
if module is not None:
return module
fp, pathname, description = imp.find_module(module_name, [module_dir])
module = imp.load_module(full_module_name, fp, pathname, description)
return module
def load_package(path, name=None, generate_name=False):
find_path, find_name = os.path.split(path)
if not name:
if generate_name:
name = new_hash(dump_simple_object(path)).hexdigest()
else:
name = find_name
package = sys.modules.get(name)
if package is not None:
return package
fp, pathname, description = imp.find_module(find_name, [find_path])
package = imp.load_module(name, fp, pathname, description)
return package
def flatten_list(seq):
out_list = list(to_sequence(seq))
i = 0
while i < len(out_list):
value = out_list[i]
if is_sequence(value):
if value:
out_list[i: i + 1] = value
else:
del out_list[i]
continue
i += 1
return out_list
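# Illustrative sketch of flatten_list; nested sequences are expanded in place
# until no sequence items remain:
#
#   flatten_list([1, [2, [3, 4]], 5])   # -> [1, 2, 3, 4, 5]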
_SIMPLE_SEQUENCES = (list, tuple, UniqueList, set, frozenset)
def simplify_value(value, # noqa complexity > 9
simple_types=SIMPLE_TYPES_SET,
simple_lists=_SIMPLE_SEQUENCES):
if value is None:
return None
value_type = type(value)
if value_type in simple_types:
return value
for simple_type in simple_types:
if isinstance(value, simple_type):
return simple_type(value)
if isinstance(value, simple_lists):
return [simplify_value(v) for v in value]
if isinstance(value, dict):
return dict((key, simplify_value(v)) for key, v in value.items())
try:
return simplify_value(value.get())
except Exception:
trace_back = sys.exc_info()[2]
if trace_back.tb_next is not None:
raise
return value
class Chrono (object):
__slots__ = ('elapsed', )
def __init__(self):
self.elapsed = 0
def __enter__(self):
self.elapsed = time.time()
return self
def __exit__(self, exc_type, exc_val, exc_tb):
self.elapsed = time.time() - self.elapsed
return False
def get(self):
return self.elapsed
def __str__(self):
elapsed = self.elapsed
minutes = int(elapsed / 60)
seconds = int(elapsed - minutes * 60)
milisecs = int((elapsed - int(elapsed)) * 1000)
result = []
if minutes:
result.append("%s min" % minutes)
milisecs = 0
if seconds:
result.append("%s sec" % seconds)
if milisecs:
result.append("%s ms" % milisecs)
if not minutes and not seconds and not milisecs:
result.append("0 ms")
return ' '.join(result)
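# Illustrative sketch of Chrono used as a context manager (do_work is hypothetical):
#
#   with Chrono() as timer:
#       do_work()
#   print("elapsed: %s" % timer)    # e.g. "1 sec 250 ms"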
class ItemsGroups(object):
__slots__ = (
'wish_groups',
'max_group_size',
'group_size',
'tail_size',
'groups',
)
def __init__(self, size, wish_groups, max_group_size):
wish_groups = max(1, wish_groups)
group_size = size // wish_groups
if max_group_size < 0:
max_group_size = size
elif max_group_size == 0:
max_group_size = group_size + 1
group_size = max(1, group_size)
self.wish_groups = wish_groups
self.group_size = min(max_group_size, group_size)
self.max_group_size = max_group_size
self.tail_size = size
self.groups = [[]]
def add_group(self):
groups = self.groups
if not groups[0]:
return
group_size = max(
1, self.tail_size // max(1, self.wish_groups - len(self.groups)))
self.group_size = min(self.max_group_size, group_size)
group_files = []
self.groups.append(group_files)
return group_files
def add(self, item):
group_files = self.groups[-1]
if len(group_files) >= self.group_size:
group_files = self.add_group()
group_files.append(item)
self.tail_size -= 1
def get(self):
groups = self.groups
if not groups[-1]:
del groups[-1]
return groups
def group_items(items, wish_groups=1, max_group_size=-1):
groups = ItemsGroups(len(items), wish_groups, max_group_size)
for item in items:
groups.add(item)
return groups.get()
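# Illustrative sketch of group_items; group sizes are rebalanced as groups fill,
# so the last group may absorb the remainder:
#
#   group_items(list(range(10)), wish_groups=3)
#   # -> [[0, 1, 2], [3, 4, 5], [6, 7, 8, 9]]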
class ErrorOptionValueMergeNonOptionValue(TypeError):
def __init__(self, value):
msg = "Unable to merge option value with non option value: '%s'" % (
type(value),)
super(ErrorOptionValueMergeNonOptionValue, self).__init__(msg)
class ErrorOptionValueOperationFailed(TypeError):
def __init__(self, op, args, kw, err):
args_str = ""
if args:
args_str += ', '.join(map(str, args))
if kw:
if args_str:
args_str += ","
args_str += str(kw)
msg = "Operation %s( %s ) failed with error: %s" % (op, args_str, err)
super(ErrorOptionValueOperationFailed, self).__init__(msg)
def _set_operator(dest_value, value):
return value
def _op_iadd_key_operator(dest_value, key, value):
dest_value[key] += value
return dest_value
def _op_isub_key_operator(dest_value, key, value):
dest_value[key] -= value
return dest_value
def _update_operator(dest_value, value):
if isinstance(dest_value, (UniqueList, list)):
dest_value += value
return dest_value
elif isinstance(dest_value, Dict):
dest_value.update(value)
return dest_value
else:
return value
def op_set(value):
return SimpleInplaceOperation(_set_operator, value)
def op_set_key(key, value):
return SimpleInplaceOperation(operator.setitem, key, value)
def op_get_key(value, key):
return SimpleOperation(operator.getitem, value, key)
def op_iadd(value):
return SimpleInplaceOperation(operator.iadd, value)
def op_iadd_key(key, value):
return SimpleInplaceOperation(_op_iadd_key_operator, key, value)
def op_isub_key(key, value):
return SimpleInplaceOperation(_op_isub_key_operator, key, value)
def op_isub(value):
return SimpleInplaceOperation(operator.isub, value)
def op_iupdate(value):
return SimpleInplaceOperation(_update_operator, value)
def _convert_args(args, kw, options, converter):
tmp_args = []
for arg in args:
if isinstance(arg, Operation):
arg.convert(options, converter)
else:
arg = converter(options, arg)
tmp_args.append(arg)
tmp_kw = {}
for key, arg in kw.items():
if isinstance(arg, Operation):
arg.convert(options, converter)
elif converter is not None:
arg = converter(options, arg)
tmp_kw[key] = arg
return tmp_args, tmp_kw
def _unconvert_args(args, kw, options, context, unconverter):
tmp_args = []
for arg in args:
if isinstance(arg, Operation):
arg = arg(options, context, unconverter)
elif unconverter is not None:
arg = unconverter(options, context, arg)
tmp_args.append(arg)
tmp_kw = {}
for key, arg in kw.items():
if isinstance(arg, Operation):
arg = arg(options, context, unconverter)
elif unconverter is not None:
arg = unconverter(options, context, arg)
tmp_kw[key] = arg
return tmp_args, tmp_kw
class Condition(object):
__slots__ = (
'condition',
'predicate',
'args',
'kw',
)
def __init__(self, condition, predicate, *args, **kw):
self.condition = condition
self.predicate = predicate
self.args = args
self.kw = kw
def convert(self, options, converter):
self.args, self.kw = _convert_args(
self.args, self.kw, options, converter)
cond = self.condition
if cond is not None:
cond.convert(options, converter)
def __call__(self, options, context, unconverter):
if self.condition is not None:
if not self.condition(options, context, unconverter):
return False
args, kw = _unconvert_args(
self.args, self.kw, options, context, unconverter)
return self.predicate(options, context, *args, **kw)
class Operation(object):
__slots__ = (
'action',
'kw',
'args',
)
def __init__(self, action, *args, **kw):
self.action = action
self.args = args
self.kw = kw
def convert(self, options, converter):
self.args, self.kw = _convert_args(
self.args, self.kw, options, converter)
def _call_action(self, options, context, args, kw):
return self.action(options, context, *args, **kw)
def __call__(self, options, context, unconverter):
args, kw = _unconvert_args(
self.args, self.kw, options, context, unconverter)
try:
result = self._call_action(options, context, args, kw)
except Exception as ex:
raise ErrorOptionValueOperationFailed(self.action, args, kw, ex)
return result
def __add__(self, other):
return SimpleOperation(operator.add, self, other)
def __radd__(self, other):
return SimpleOperation(operator.add, other, self)
def __sub__(self, other):
return SimpleOperation(operator.sub, self, other)
def __rsub__(self, other):
return SimpleOperation(operator.sub, other, self)
class SimpleOperation(Operation):
def _call_action(self, options, context, args, kw):
return self.action(*args, **kw)
class InplaceOperation(object):
__slots__ = (
'action',
'kw',
'args',
)
def __init__(self, action, *args, **kw):
self.action = action
self.args = args
self.kw = kw
def convert(self, options, converter):
self.args, self.kw = _convert_args(
self.args, self.kw, options, converter)
def _call_action(self, options, context, dest_value, args, kw):
return self.action(options, context, dest_value, *args, **kw)
def __call__(self, options, context, dest_value, value_type, unconverter):
if self.action is None:
return dest_value
args, kw = _unconvert_args(
self.args, self.kw, options, context, unconverter)
try:
result = self._call_action(options, context, dest_value, args, kw)
except Exception as ex:
raise ErrorOptionValueOperationFailed(self.action, args, kw, ex)
if result is None:
result = dest_value
dest_value = value_type(result)
return dest_value
class SimpleInplaceOperation(InplaceOperation):
def _call_action(self, options, context, dest_value, args, kw):
return self.action(dest_value, *args, **kw)
class ConditionalValue (object):
__slots__ = (
'ioperation',
'condition',
)
def __init__(self, ioperation, condition=None):
self.ioperation = ioperation
self.condition = condition
def convert(self, options, converter):
condition = self.condition
if isinstance(condition, Condition):
condition.convert(options, converter)
ioperation = self.ioperation
if isinstance(ioperation, InplaceOperation):
ioperation.convert(options, converter)
def evaluate(self, value, value_type, options, context, unconverter):
condition = self.condition
if (condition is None) or condition(options, context, unconverter):
if self.ioperation is not None:
value = self.ioperation(
options, context, value, value_type, unconverter)
return value
class OptionValue (object):
__slots__ = (
'option_type',
'conditional_values',
)
def __init__(self, option_type, conditional_values=None):
self.option_type = option_type
self.conditional_values = list(to_sequence(conditional_values))
def is_set(self):
return bool(self.conditional_values)
def is_tool_key(self):
return self.option_type.is_tool_key
def append_value(self, conditional_value):
self.conditional_values.append(conditional_value)
def prepend_value(self, conditional_value):
self.conditional_values[:0] = [conditional_value]
def merge(self, other):
if self is other:
return
if not isinstance(other, OptionValue):
raise ErrorOptionValueMergeNonOptionValue(other)
values = self.conditional_values
other_values = other.conditional_values
diff_index = 0
for value1, value2 in zip(values, other_values):
if value1 is not value2:
break
diff_index += 1
if self.option_type.is_auto and not other.option_type.is_auto:
self.option_type = other.option_type
self.conditional_values += other_values[diff_index:]
def reset(self):
self.conditional_values = []
def copy(self):
return OptionValue(self.option_type, self.conditional_values)
def __copy__(self):
return self.copy()
def get(self, options, context, evaluator=None):
if context is None:
context = {}
else:
try:
return context[self]
except KeyError:
pass
value_type = self.option_type
value = value_type()
context[self] = value
for conditional_value in self.conditional_values:
value = conditional_value.evaluate(
value, value_type, options, context, evaluator)
context[self] = value
return value
class ErrorDataFileFormatInvalid(Exception):
def __init__(self):
msg = "Data file format is not valid."
super(ErrorDataFileFormatInvalid, self).__init__(msg)
class ErrorDataFileChunkInvalid(Exception):
def __init__(self):
msg = "Data file chunk format is not valid."
super(ErrorDataFileChunkInvalid, self).__init__(msg)
class ErrorDataFileVersionInvalid(Exception):
def __init__(self):
msg = "Data file version is changed."
super(ErrorDataFileVersionInvalid, self).__init__(msg)
class ErrorDataFileCorrupted(Exception):
def __init__(self):
        msg = "Data file is corrupted."
super(ErrorDataFileCorrupted, self).__init__(msg)
class _MmapFile(object):
def __init__(self, filename):
stream = open_file(filename, write=True, binary=True, sync=False)
try:
memmap = mmap.mmap(stream.fileno(), 0, access=mmap.ACCESS_WRITE)
except Exception:
stream.seek(0)
stream.write(b'\0')
stream.flush()
memmap = mmap.mmap(stream.fileno(), 0, access=mmap.ACCESS_WRITE)
self._check_resize_available(memmap)
self.stream = stream
self.memmap = memmap
self.size = memmap.size
self.resize = memmap.resize
self.flush = memmap.flush
@staticmethod
def _check_resize_available(mem):
size = mem.size()
mem.resize(size + mmap.ALLOCATIONGRANULARITY)
mem.resize(size)
def close(self):
self.memmap.flush()
self.memmap.close()
self.stream.close()
def read(self, offset, size):
return self.memmap[offset: offset + size]
def write(self, offset, data):
memmap = self.memmap
end_offset = offset + len(data)
if end_offset > memmap.size():
page_size = mmap.ALLOCATIONGRANULARITY
size = ((end_offset + (page_size - 1)) // page_size) * page_size
if size == 0:
size = page_size
self.resize(size)
memmap[offset: end_offset] = data
def move(self, dest, src, size):
memmap = self.memmap
end_offset = dest + size
if end_offset > memmap.size():
self.resize(end_offset)
memmap.move(dest, src, size)
class _IOFile(object):
def __init__(self, filename):
stream = open_file(filename, write=True, binary=True, sync=False)
self.stream = stream
self.resize = stream.truncate
self.flush = stream.flush
def close(self):
self.stream.close()
def read(self, offset, size):
stream = self.stream
stream.seek(offset)
return stream.read(size)
def write(self, offset, data):
stream = self.stream
stream.seek(offset)
stream.write(data)
def move(self, dest, src, size):
stream = self.stream
stream.seek(src)
data = stream.read(size)
stream.seek(dest)
stream.write(data)
def size(self, _end=os.SEEK_END):
return self.stream.seek(0, _end)
class MetaData (object):
__slots__ = (
'offset',
'key',
'id',
'data_offset',
'data_size',
'data_capacity',
)
_META_STRUCT = struct.Struct(">Q16sLL")
size = _META_STRUCT.size
def __init__(self, meta_offset, key, data_id, data_offset, data_size):
self.offset = meta_offset
self.key = key
self.id = data_id
self.data_offset = data_offset
self.data_size = data_size
self.data_capacity = data_size + 4
def dump(self, meta_struct=_META_STRUCT):
return meta_struct.pack(self.key, self.id,
self.data_size, self.data_capacity)
@classmethod
def load(cls, dump, meta_struct=_META_STRUCT):
self = cls.__new__(cls)
try:
self.key, self.id, data_size, data_capacity = meta_struct.unpack(
dump)
except struct.error:
raise ErrorDataFileChunkInvalid()
if data_capacity < data_size:
raise ErrorDataFileChunkInvalid()
self.data_size = data_size
self.data_capacity = data_capacity
return self
def resize(self, data_size):
self.data_size = data_size
capacity = self.data_capacity
if capacity >= data_size:
return 0
self.data_capacity = data_size + min(data_size // 4, 128)
return self.data_capacity - capacity
def __repr__(self):
return self.__str__()
def __str__(self):
s = []
for v in self.__slots__:
s.append("%s: %s" % (v, getattr(self, v)))
return ", ".join(s)
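The fixed-size record written by `MetaData.dump()` can be illustrated with a minimal round trip over the same `">Q16sLL"` layout (the sample values here are arbitrary):

```python
import struct

# Same layout as MetaData._META_STRUCT: 8-byte key, 16-byte id,
# 4-byte data_size, 4-byte data_capacity -> 32 bytes per record.
META_STRUCT = struct.Struct(">Q16sLL")

record = META_STRUCT.pack(1, b"\x00" * 16, 10, 14)
key, data_id, size, capacity = META_STRUCT.unpack(record)

assert META_STRUCT.size == 32
assert (key, size, capacity) == (1, 10, 14)
```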
class DataFile (object):
__slots__ = (
'next_key',
'id2data',
'key2id',
'meta_end',
'data_begin',
'data_end',
'handle',
)
MAGIC_TAG = b".AQL.DB."
VERSION = 1
_HEADER_STRUCT = struct.Struct(">8sL")
_HEADER_SIZE = _HEADER_STRUCT.size
_KEY_STRUCT = struct.Struct(">Q") # 8 bytes (next unique key)
_KEY_OFFSET = _HEADER_SIZE
_KEY_SIZE = _KEY_STRUCT.size
_META_TABLE_HEADER_STRUCT = struct.Struct(">L")
_META_TABLE_HEADER_SIZE = _META_TABLE_HEADER_STRUCT.size
_META_TABLE_HEADER_OFFSET = _KEY_OFFSET + _KEY_SIZE
_META_TABLE_OFFSET = _META_TABLE_HEADER_OFFSET + _META_TABLE_HEADER_SIZE
def __init__(self, filename, force=False):
self.id2data = {}
self.key2id = {}
self.meta_end = 0
self.data_begin = 0
self.data_end = 0
self.handle = None
self.next_key = None
self.open(filename, force=force)
def open(self, filename, force=False):
self.close()
try:
self.handle = _MmapFile(filename)
except Exception as ex:
log_debug("mmap is not supported: %s", ex)
self.handle = _IOFile(filename)
self._init_header(force)
self.next_key = self._key_generator()
self._init_meta_table()
def close(self):
if self.handle is not None:
self.handle.close()
self.handle = None
self.id2data.clear()
self.key2id.clear()
self.meta_end = 0
self.data_begin = 0
self.data_end = 0
self.next_key = None
def clear(self):
self._reset_meta_table()
self.next_key = self._key_generator()
self.id2data.clear()
self.key2id.clear()
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
self.close()
def _init_header(self, force, header_struct=_HEADER_STRUCT):
header = self.handle.read(0, header_struct.size)
try:
tag, version = header_struct.unpack(header)
if tag != self.MAGIC_TAG:
if not force:
raise ErrorDataFileFormatInvalid()
elif version == self.VERSION:
return
except struct.error:
if header.strip(b'\0') and not force:
raise ErrorDataFileFormatInvalid()
header = header_struct.pack(self.MAGIC_TAG, self.VERSION)
self.handle.resize(len(header))
self.handle.write(0, header)
def _key_generator(self,
key_offset=_KEY_OFFSET,
key_struct=_KEY_STRUCT,
max_key=(2 ** 64) - 1):
key_dump = self.handle.read(key_offset, key_struct.size)
try:
next_key, = key_struct.unpack(key_dump)
except struct.error:
next_key = 0
key_pack = key_struct.pack
write_file = self.handle.write
while True:
if next_key < max_key:
next_key += 1
else:
next_key = 1 # this should never happen
write_file(key_offset, key_pack(next_key))
yield next_key
def _reset_meta_table(self,
meta_size=MetaData.size,
table_offset=_META_TABLE_OFFSET):
self.meta_end = table_offset
self.data_begin = table_offset + meta_size * 1024
self.data_end = self.data_begin
self._truncate_file()
def _truncate_file(self,
table_header_struct=_META_TABLE_HEADER_STRUCT,
table_header_offset=_META_TABLE_HEADER_OFFSET):
header_dump = table_header_struct.pack(self.data_begin)
handle = self.handle
handle.resize(self.data_end)
handle.write(table_header_offset, header_dump)
handle.write(self.meta_end, b'\0' * (self.data_begin - self.meta_end))
handle.flush()
def _init_meta_table(self,
table_header_struct=_META_TABLE_HEADER_STRUCT,
table_header_offset=_META_TABLE_HEADER_OFFSET,
table_header_size=_META_TABLE_HEADER_SIZE,
table_begin=_META_TABLE_OFFSET,
meta_size=MetaData.size):
handle = self.handle
header_dump = handle.read(table_header_offset, table_header_size)
try:
data_begin, = table_header_struct.unpack(header_dump)
except struct.error:
self._reset_meta_table()
return
if (data_begin <= table_begin) or (data_begin > handle.size()):
self._reset_meta_table()
return
table_size = data_begin - table_begin
if (table_size % meta_size) != 0:
self._reset_meta_table()
return
dump = handle.read(table_begin, table_size)
if len(dump) < table_size:
self._reset_meta_table()
return
self._load_meta_table(data_begin, dump)
def _load_meta_table(self, data_offset, metas_dump,
meta_size=MetaData.size,
table_begin=_META_TABLE_OFFSET):
self.data_begin = data_offset
file_size = self.handle.size()
load_meta = MetaData.load
pos = 0
dump_size = len(metas_dump)
while pos < dump_size:
meta_dump = metas_dump[pos: pos + meta_size]
try:
meta = load_meta(meta_dump)
data_capacity = meta.data_capacity
if data_capacity == 0:
break
meta.offset = pos + table_begin
meta.data_offset = data_offset
if (data_offset + meta.data_size) > file_size:
raise ErrorDataFileChunkInvalid()
data_offset += data_capacity
except Exception:
self.data_end = data_offset
self.meta_end = pos + table_begin
self._truncate_file()
return
self.id2data[meta.id] = meta
if meta.key:
self.key2id[meta.key] = meta.id
pos += meta_size
self.meta_end = pos + table_begin
self.data_end = data_offset
def _extend_meta_table(self,
table_header_struct=_META_TABLE_HEADER_STRUCT,
table_header_offset=_META_TABLE_HEADER_OFFSET,
table_begin=_META_TABLE_OFFSET):
data_begin = self.data_begin
data_end = self.data_end
table_capacity = data_begin - table_begin
new_data_begin = data_begin + table_capacity
handle = self.handle
handle.move(new_data_begin, data_begin, data_end - data_begin)
handle.write(data_begin, b'\0' * table_capacity)
header_dump = table_header_struct.pack(new_data_begin)
handle.write(table_header_offset, header_dump)
self.data_begin = new_data_begin
self.data_end += table_capacity
for meta in self.id2data.values():
meta.data_offset += table_capacity
def _extend_data(self, meta, oversize):
new_next_data = meta.data_offset + meta.data_capacity
next_data = new_next_data - oversize
rest_size = self.data_end - next_data
if rest_size > 0:
self.handle.move(new_next_data, next_data, rest_size)
for other in self.id2data.values():
if other.data_offset >= next_data:
other.data_offset += oversize
self.data_end += oversize
def _append(self, key, data_id, data,
meta_size=MetaData.size):
meta_offset = self.meta_end
if meta_offset == self.data_begin:
self._extend_meta_table()
data_offset = self.data_end
meta = MetaData(meta_offset, key, data_id, data_offset, len(data))
write = self.handle.write
write(data_offset, data)
write(meta_offset, meta.dump())
self.data_end += meta.data_capacity
self.meta_end += meta_size
self.id2data[data_id] = meta
def _update(self, meta, data, update_meta):
data_size = len(data)
if meta.data_size != data_size:
update_meta = True
oversize = meta.resize(data_size)
if oversize > 0:
self._extend_data(meta, oversize)
write = self.handle.write
if update_meta:
write(meta.offset, meta.dump())
write(meta.data_offset, data)
def read(self, data_id):
try:
meta = self.id2data[data_id]
except KeyError:
return None
return self.handle.read(meta.data_offset, meta.data_size)
def write(self, data_id, data):
try:
meta = self.id2data[data_id]
self._update(meta, data, update_meta=False)
except KeyError:
self._append(0, data_id, data)
def write_with_key(self, data_id, data):
meta = self.id2data.get(data_id)
key = next(self.next_key)
if meta is None:
self._append(key, data_id, data)
else:
try:
del self.key2id[meta.key]
except KeyError:
pass
meta.key = key
self._update(meta, data, update_meta=True)
self.key2id[key] = data_id
return key
def get_ids(self, keys):
try:
return tuple(map(self.key2id.__getitem__, keys))
except KeyError:
return None
def get_keys(self, data_ids):
return map(operator.attrgetter('key'),
map(self.id2data.__getitem__, data_ids))
def remove(self, data_ids):  # noqa TODO: refactor to complexity < 10
move = self.handle.move
meta_size = MetaData.size
remove_data_ids = frozenset(data_ids)
metas = sorted(self.id2data.values(),
key=operator.attrgetter('data_offset'))
meta_shift = 0
data_shift = 0
meta_offset = 0
data_offset = 0
last_meta_end = 0
last_data_end = 0
remove_data_begin = None
remove_meta_begin = None
move_meta_begin = None
move_data_begin = None
for meta in metas:
meta_offset = meta.offset
last_meta_end = meta_offset + meta_size
data_offset = meta.data_offset
last_data_end = data_offset + meta.data_size
next_data_begin = data_offset + meta.data_capacity
if meta.id in remove_data_ids:
del self.id2data[meta.id]
if meta.key:
del self.key2id[meta.key]
if move_meta_begin is not None:
move(remove_meta_begin, move_meta_begin,
meta_offset - move_meta_begin)
move(remove_data_begin, move_data_begin,
data_offset - move_data_begin)
remove_meta_begin = None
move_meta_begin = None
if remove_meta_begin is None:
remove_meta_begin = meta_offset - meta_shift
remove_data_begin = data_offset - data_shift
else:
if remove_meta_begin is not None:
if move_meta_begin is None:
move_meta_begin = meta_offset
move_data_begin = data_offset
meta_shift = move_meta_begin - remove_meta_begin
data_shift = move_data_begin - remove_data_begin
if meta_shift:
meta.offset -= meta_shift
meta.data_offset -= data_shift
if remove_data_begin is not None:
if move_meta_begin is None:
meta_shift = last_meta_end - remove_meta_begin
data_shift = next_data_begin - remove_data_begin
else:
move(remove_meta_begin, move_meta_begin,
last_meta_end - move_meta_begin)
move(remove_data_begin, move_data_begin,
last_data_end - move_data_begin)
self.meta_end -= meta_shift
self.data_end -= data_shift
self.handle.write(self.meta_end, b'\0' * meta_shift)
self.handle.resize(self.data_end)
def self_test(self): # noqa
if self.handle is None:
return
file_size = self.handle.size()
if self.data_begin > file_size:
raise AssertionError("data_begin(%s) > file_size(%s)" %
(self.data_begin, file_size))
if self.data_begin > self.data_end:
raise AssertionError("data_begin(%s) > data_end(%s)" %
(self.data_begin, self.data_end))
if self.meta_end > self.data_begin:
raise AssertionError("meta_end(%s) > data_begin(%s)" %
(self.meta_end, self.data_begin))
header_dump = self.handle.read(
self._META_TABLE_HEADER_OFFSET, self._META_TABLE_HEADER_SIZE)
try:
data_begin, = self._META_TABLE_HEADER_STRUCT.unpack(header_dump)
except struct.error:
self._reset_meta_table()
return
if self.data_begin != data_begin:
raise AssertionError("self.data_begin(%s) != data_begin(%s)" %
(self.data_begin, data_begin))
items = sorted(self.id2data.items(), key=lambda item: item[1].offset)
last_meta_offset = self._META_TABLE_OFFSET
last_data_offset = self.data_begin
for data_id, meta in items:
if meta.id != data_id:
raise AssertionError(
"meta.id(%s) != data_id(%s)" % (meta.id, data_id))
if meta.key != 0:
if self.key2id[meta.key] != data_id:
raise AssertionError(
"self.key2id[ meta.key ](%s) != data_id(%s)" %
(self.key2id[meta.key], data_id))
if meta.data_capacity < meta.data_size:
raise AssertionError(
"meta.data_capacity(%s) < meta.data_size(%s)" %
(meta.data_capacity, meta.data_size))
if meta.offset >= self.meta_end:
raise AssertionError("meta.offset(%s) >= self.meta_end(%s)" %
(meta.offset, self.meta_end))
if meta.offset != last_meta_offset:
raise AssertionError(
"meta.offset(%s) != last_meta_offset(%s)" %
(meta.offset, last_meta_offset))
if meta.data_offset != last_data_offset:
raise AssertionError(
"meta.data_offset(%s) != last_data_offset(%s)" %
(meta.data_offset, last_data_offset))
if meta.data_offset >= self.data_end:
raise AssertionError(
"meta.data_offset(%s) >= self.data_end(%s)" %
(meta.data_offset, self.data_end))
if (meta.data_offset + meta.data_size) > file_size:
raise AssertionError(
"(meta.data_offset + meta.data_size)(%s) > file_size(%s)" %
((meta.data_offset + meta.data_size), file_size))
last_data_offset += meta.data_capacity
last_meta_offset += MetaData.size
if last_meta_offset != self.meta_end:
raise AssertionError("last_meta_offset(%s) != self.meta_end(%s)" %
(last_meta_offset, self.meta_end))
if last_data_offset != self.data_end:
raise AssertionError("last_data_offset(%s) != self.data_end(%s)" %
(last_data_offset, self.data_end))
for key, data_id in self.key2id.items():
if key != self.id2data[data_id].key:
raise AssertionError(
"key(%s) != self.id2data[ data_id ].key(%s)" %
(key, self.id2data[data_id].key))
for data_id, meta in self.id2data.items():
meta_dump = self.handle.read(meta.offset, MetaData.size)
stored_meta = MetaData.load(meta_dump)
if meta.key != stored_meta.key:
raise AssertionError("meta.key(%s) != stored_meta.key(%s)" %
(meta.key, stored_meta.key))
if meta.id != stored_meta.id:
raise AssertionError("meta.id(%s) != stored_meta.id(%s)" %
(meta.id, stored_meta.id))
if meta.data_size != stored_meta.data_size:
raise AssertionError(
"meta.data_size(%s) != stored_meta.data_size(%s)" %
(meta.data_size, stored_meta.data_size))
if meta.data_capacity != stored_meta.data_capacity:
raise AssertionError(
"meta.data_capacity(%s) != stored_meta.data_capacity(%s)" %
(meta.data_capacity, stored_meta.data_capacity))
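The key bookkeeping performed by `DataFile.write_with_key()` can be sketched in isolation: every write issues a fresh, monotonically increasing key and forgets the previous key of the same id. This is a simplified model that omits the on-disk part:

```python
import itertools

next_key = itertools.count(1)
key2id, id2key = {}, {}

def write_with_key(data_id):
    # A fresh key per write; the old key for this id is forgotten.
    key = next(next_key)
    old_key = id2key.get(data_id)
    if old_key is not None:
        del key2id[old_key]
    id2key[data_id] = key
    key2id[key] = data_id
    return key

assert write_with_key("a") == 1
assert write_with_key("a") == 2      # rewriting the same id gets a new key
assert key2id == {2: "a"}
```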
class CLIOption(object):
__slots__ = (
'cli_name',
'cli_long_name',
'cli_only',
'opt_name',
'value_type',
'default',
'description',
'metavar'
)
def __init__(self,
cli_name,
cli_long_name,
opt_name,
value_type,
default,
description,
metavar=None,
cli_only=False):
self.cli_name = cli_name
self.cli_long_name = cli_long_name
self.cli_only = cli_only
self.opt_name = opt_name
self.value_type = value_type
self.default = None if default is None else value_type(default)
self.description = description
self.metavar = metavar
def add_to_parser(self, parser):
args = []
if self.cli_name is not None:
args.append(self.cli_name)
if self.cli_long_name is not None:
args.append(self.cli_long_name)
if self.value_type is bool:
action = 'store_false' if self.default else 'store_true'
elif issubclass(self.value_type, (list, UniqueList)):
action = 'append'
else:
action = 'store'
kw = {'dest': self.opt_name,
'help': self.description, 'action': action}
if self.metavar:
kw['metavar'] = self.metavar
parser.add_argument(*args, **kw)
class CLIConfig(object):
def __init__(self, cli_options, args=None):
super(CLIConfig, self).__setattr__('targets', tuple())
super(CLIConfig, self).__setattr__('_set_options', set())
super(CLIConfig, self).__setattr__('_defaults', {})
self.__parse_arguments(cli_options, args)
@staticmethod
def __get_args_parser(cli_options):
parser = argparse.ArgumentParser()
for opt in cli_options:
opt.add_to_parser(parser)
return parser
def __set_defaults(self, cli_options):
defaults = self._defaults
for opt in cli_options:
defaults[opt.opt_name] = (opt.default, opt.value_type)
targets_type = split_list_type(value_list_type(UniqueList, str), ', ')
defaults['targets'] = (tuple(), targets_type)
return defaults
def __parse_values(self, args):
targets = []
for arg in args:
name, sep, value = arg.partition('=')
name = name.strip()
if sep:
setattr(self, name, value.strip())
else:
targets.append(name)
if targets:
self.targets = tuple(targets)
def __parse_options(self, cli_options, args):
defaults = self.__set_defaults(cli_options)
for opt in cli_options:
name = opt.opt_name
value = getattr(args, name)
default, value_type = defaults[name]
if value is None:
value = default
else:
self._set_options.add(name)
value = value_type(value)
super(CLIConfig, self).__setattr__(name, value)
def __parse_arguments(self, cli_options, cli_args):
parser = self.__get_args_parser(cli_options)
parser.add_argument('targets_or_options',
metavar='TARGET | OPTION=VALUE',
nargs='*', help="Targets or option's values")
args = parser.parse_args(cli_args)
self.__parse_options(cli_options, args)
self.__parse_values(args.targets_or_options)
def read_file(self, config_file, config_locals=None):
if config_locals is None:
config_locals = {}
exec_locals = exec_file(config_file, config_locals)
for name, value in exec_locals.items():
self.set_default(name, value)
def __set(self, name, value):
defaults = self._defaults
try:
default_value, value_type = defaults[name]
except KeyError:
if value is not None:
defaults[name] = (value, type(value))
else:
if value is None:
value = default_value
elif type(value) is not value_type:
value = value_type(value)
super(CLIConfig, self).__setattr__(name, value)
def set_default(self, name, value):
if name.startswith("_"):
super(CLIConfig, self).__setattr__(name, value)
else:
if name not in self._set_options:
self.__set(name, value)
def __setattr__(self, name, value):
if name.startswith("_"):
super(CLIConfig, self).__setattr__(name, value)
else:
self.__set(name, value)
if value is None:
self._set_options.discard(name)
else:
self._set_options.add(name)
def items(self):
for name, value in self.__dict__.items():
if not name.startswith("_") and (name != "targets"):
yield (name, value)
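The `TARGET | OPTION=VALUE` convention handled by `CLIConfig.__parse_values()` boils down to one `partition('=')` per argument; a small standalone sketch:

```python
args = ["build", "jobs=4", "verbose = true", "test"]
targets, options = [], {}

for arg in args:
    name, sep, value = arg.partition('=')
    name = name.strip()
    if sep:
        options[name] = value.strip()   # OPTION=VALUE
    else:
        targets.append(name)            # bare TARGET

assert targets == ["build", "test"]
assert options == {"jobs": "4", "verbose": "true"}
```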
EVENT_ERROR, EVENT_WARNING, EVENT_STATUS, EVENT_DEBUG, = EVENT_ALL = tuple(range(4))
class ErrorEventUserHandlerWrongArgs (Exception):
def __init__(self, event, handler):
msg = "Invalid arguments of event '%s' handler method: '%s'" % (
event, handler)
super(ErrorEventUserHandlerWrongArgs, self).__init__(msg)
class ErrorEventHandlerAlreadyDefined (Exception):
def __init__(self, event, handler, other_handler):
msg = "Default event '%s' handler is defined twice: '%s', '%s'" % (event, handler, other_handler)
super(ErrorEventHandlerAlreadyDefined, self).__init__(msg)
class ErrorEventHandlerUnknownEvent (Exception):
def __init__(self, event):
msg = "Unknown event: '%s'" % (event,)
super(ErrorEventHandlerUnknownEvent, self).__init__(msg)
class EventSettings(object):
__slots__ = (
'brief',
'with_output',
'trace_exec'
)
def __init__(self, brief=True, with_output=True, trace_exec=False):
self.brief = brief
self.with_output = with_output
self.trace_exec = trace_exec
class EventManager(object):
__slots__ = (
'default_handlers',
'user_handlers',
'ignored_events',
'disable_defaults',
'settings',
)
def __init__(self):
self.default_handlers = {}
self.user_handlers = {}
self.ignored_events = set()
self.disable_defaults = False
self.settings = EventSettings()
def add_default_handler(self, handler, importance_level, event=None):
if not event:
event = handler.__name__
pair = (handler, importance_level)
other = self.default_handlers.setdefault(event, pair)
if other != pair:
raise ErrorEventHandlerAlreadyDefined(event, other[0], handler)
def add_user_handler(self, user_handler, event=None):
if not event:
event = user_handler.__name__
try:
default_handler = self.default_handlers[event][0]
except KeyError:
raise ErrorEventHandlerUnknownEvent(event)
if not equal_function_args(default_handler, user_handler):
raise ErrorEventUserHandlerWrongArgs(event, user_handler)
self.user_handlers.setdefault(event, []).append(user_handler)
def remove_user_handler(self, user_handlers):
user_handlers = to_sequence(user_handlers)
for handlers in self.user_handlers.values():
for user_handler in user_handlers:
try:
handlers.remove(user_handler)
except ValueError:
pass
def send_event(self, event, *args, **kw):
if event in self.ignored_events:
return
if self.disable_defaults:
default_handlers = []
else:
default_handlers = [self.default_handlers[event][0]]
user_handlers = self.user_handlers.get(event, [])
args = (self.settings,) + args
for handler in itertools.chain(user_handlers, default_handlers):
handler(*args, **kw)
def __get_events(self, event_filters):
events = set()
for event_filter in to_sequence(event_filters):
if event_filter not in EVENT_ALL:
events.add(event_filter)
else:
for event, pair in self.default_handlers.items():
level = pair[1]
if event_filter == level:
events.add(event)
return events
def enable_events(self, event_filters, enable):
events = self.__get_events(event_filters)
if enable:
self.ignored_events.difference_update(events)
else:
self.ignored_events.update(events)
def enable_default_handlers(self, enable):
self.disable_defaults = not enable
def set_settings(self, settings):
self.settings = settings
_event_manager = EventManager()
def _event_impl(handler, importance_level, event=None):
if not event:
event = handler.__name__
_event_manager.add_default_handler(handler, importance_level, event)
def _send_event(*args, **kw):
_event_manager.send_event(event, *args, **kw)
return _send_event
def event_error(handler):
return _event_impl(handler, EVENT_ERROR)
def event_warning(handler):
return _event_impl(handler, EVENT_WARNING)
def event_status(handler):
return _event_impl(handler, EVENT_STATUS)
def event_debug(handler):
return _event_impl(handler, EVENT_DEBUG)
def event_handler(event=None):
if isinstance(event, (types.FunctionType, types.MethodType)):
_event_manager.add_user_handler(event)
return event
def _event_handler_impl(handler):
_event_manager.add_user_handler(handler, event)
return handler
return _event_handler_impl
def set_event_settings(settings):
_event_manager.set_settings(settings)
def enable_events(event_filters):
_event_manager.enable_events(event_filters, True)
def disable_events(event_filters):
_event_manager.enable_events(event_filters, False)
def disable_default_handlers():
_event_manager.enable_default_handlers(False)
def enable_default_handlers():
_event_manager.enable_default_handlers(True)
def add_user_handler(handler, event=None):
_event_manager.add_user_handler(handler, event)
def remove_user_handler(handler):
_event_manager.remove_user_handler(handler)
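The dispatch order implemented by `EventManager.send_event()` (user handlers first, then the single default handler, unless defaults are disabled) can be modelled with plain callables:

```python
calls = []

def default_handler(settings, msg):
    calls.append(("default", msg))

user_handlers = [lambda settings, msg: calls.append(("user", msg))]

# send_event() chains user handlers before the default handler:
for handler in user_handlers + [default_handler]:
    handler(None, "hello")

assert calls == [("user", "hello"), ("default", "hello")]
```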
try:
filterfalse = itertools.filterfalse
except AttributeError:
filterfalse = itertools.ifilterfalse
class ErrorNoPrograms(Exception):
def __init__(self, prog):
msg = "No programs were specified: %s(%s)" % (prog, type(prog))
super(ErrorNoPrograms, self).__init__(msg)
def abs_file_path(file_path, path_sep=os.path.sep,
seps=(os.path.sep, os.path.altsep),
_abspath=os.path.abspath,
_normcase=os.path.normcase):
if not file_path:
file_path = '.'
if file_path[-1] in seps:
last_sep = path_sep
else:
last_sep = ''
return _normcase(_abspath(file_path)) + last_sep
def expand_file_path(path,
_normpath=os.path.normpath,
_expanduser=os.path.expanduser,
_expandvars=os.path.expandvars):
return _normpath(_expanduser(_expandvars(path)))
def exclude_files_from_dirs(files, dirs):
result = []
folders = tuple(os.path.normcase(
os.path.abspath(folder)) + os.path.sep for folder in to_sequence(dirs))
for filename in to_sequence(files):
filename = os.path.normcase(os.path.abspath(filename))
if not filename.startswith(folders):
result.append(filename)
return result
def _masks_to_match(masks, _null_match=lambda name: False):
if not masks:
return _null_match
if is_string(masks):
masks = masks.split('|')
re_list = []
for mask in to_sequence(masks):
mask = os.path.normcase(mask).strip()
re_list.append("(%s)" % fnmatch.translate(mask))
re_str = '|'.join(re_list)
return re.compile(re_str).match
def find_files(paths=".",
mask=("*",),
exclude_mask=('.*',),
exclude_subdir_mask=('__*', '.*'),
found_dirs=None):
found_files = []
paths = to_sequence(paths)
match_mask = _masks_to_match(mask)
match_exclude_mask = _masks_to_match(exclude_mask)
match_exclude_subdir_mask = _masks_to_match(exclude_subdir_mask)
path_join = os.path.join
for path in paths:
for root, folders, files in os.walk(os.path.abspath(path)):
for file_name in files:
file_name_nocase = os.path.normcase(file_name)
if (not match_exclude_mask(file_name_nocase)) and match_mask(file_name_nocase):
found_files.append(path_join(root, file_name))
folders[:] = filterfalse(match_exclude_subdir_mask, folders)
if found_dirs is not None:
found_dirs.update(path_join(root, folder)
for folder in folders)
found_files.sort()
return found_files
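`_masks_to_match()` compiles a `'|'`-separated shell-mask string into a single regular expression via `fnmatch.translate()`; the same transformation in a few lines:

```python
import fnmatch
import os
import re

masks = "*.c|*.h"
re_str = "|".join("(%s)" % fnmatch.translate(os.path.normcase(m).strip())
                  for m in masks.split('|'))
match = re.compile(re_str).match

assert match(os.path.normcase("main.c"))
assert match(os.path.normcase("util.h"))
assert not match(os.path.normcase("readme.txt"))
```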
def find_file_in_paths(paths, filename):
for path in paths:
file_path = os.path.join(path, filename)
if os.access(file_path, os.R_OK):
return os.path.normpath(file_path)
return None
def _get_env_path(env, hint_prog=None):
paths = env.get('PATH', tuple())
if is_string(paths):
paths = paths.split(os.pathsep)
paths = [os.path.expanduser(path) for path in paths]
if hint_prog:
hint_dir = os.path.dirname(hint_prog)
paths.insert(0, hint_dir)
return paths
def _get_env_path_ext(env, hint_prog=None,
is_windows=(os.name == 'nt'),
is_cygwin=(sys.platform == 'cygwin')):
if not is_windows and not is_cygwin:
return tuple()
if hint_prog:
hint_ext = os.path.splitext(hint_prog)[1]
return hint_ext,
path_exts = env.get('PATHEXT')
if path_exts is None:
path_exts = os.environ.get('PATHEXT')
if is_string(path_exts):
path_sep = ';' if is_cygwin else os.pathsep
path_exts = path_exts.split(path_sep)
if not path_exts:
path_exts = ['.exe', '.cmd', '.bat', '.com']
if is_cygwin:
if '' not in path_exts:
path_exts = [''] + path_exts
return path_exts
def _add_program_exts(progs, exts):
progs = to_sequence(progs)
if not exts:
return tuple(progs)
result = []
for prog in progs:
prog_ext = os.path.splitext(prog)[1]
if prog_ext or (prog_ext in exts):
result.append(prog)
else:
result += (prog + ext for ext in exts)
return result
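For the common case, the extension expansion above behaves like the following sketch (simplified: the real helper also keeps a bare name when the empty extension is allowed, as on Cygwin):

```python
import os

def add_exts(prog, exts):
    # Keep names that already carry an extension; expand bare names.
    if os.path.splitext(prog)[1]:
        return [prog]
    return [prog + ext for ext in exts]

assert add_exts("gcc", ['.exe', '.bat']) == ["gcc.exe", "gcc.bat"]
assert add_exts("gcc.exe", ['.exe', '.bat']) == ["gcc.exe"]
```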
def _find_program(progs, paths):
for path in paths:
for prog in progs:
prog_path = os.path.join(path, prog)
if os.access(prog_path, os.X_OK):
return os.path.normcase(prog_path)
return None
def find_program(prog, env, hint_prog=None):
paths = _get_env_path(env, hint_prog)
path_ext = _get_env_path_ext(env, hint_prog)
progs = _add_program_exts(prog, path_ext)
return _find_program(progs, paths)
def find_programs(progs, env, hint_prog=None):
paths = _get_env_path(env, hint_prog)
path_ext = _get_env_path_ext(env, hint_prog)
result = []
for prog in progs:
progs = _add_program_exts(prog, path_ext)
prog = _find_program(progs, paths)
result.append(prog)
return result
def find_optional_program(prog, env, hint_prog=None):
paths = _get_env_path(env, hint_prog)
path_ext = _get_env_path_ext(env, hint_prog)
progs = _add_program_exts(prog, path_ext)
return _OptionalProgramFinder(progs, paths)
def find_optional_programs(progs, env, hint_prog=None):
paths = _get_env_path(env, hint_prog)
path_ext = _get_env_path_ext(env, hint_prog)
result = []
for prog in progs:
progs = _add_program_exts(prog, path_ext)
prog = _OptionalProgramFinder(progs, paths)
result.append(prog)
return result
class _OptionalProgramFinder(object):
__slots__ = (
'progs',
'paths',
'result',
)
def __init__(self, progs, paths):
if not progs:
raise ErrorNoPrograms(progs)
self.progs = progs
self.paths = paths
self.result = None
def __nonzero__(self):
return bool(self.get())
def __bool__(self):
return bool(self.get())
def __call__(self):
return self.get()
def __str__(self):
return self.get()
def get(self):
progpath = self.result
if progpath:
return progpath
prog_full_path = _find_program(self.progs, self.paths)
if prog_full_path is not None:
self.result = prog_full_path
return prog_full_path
self.result = self.progs[0]
return self.result
def _norm_local_path(path):
if not path:
return '.'
path_sep = os.path.sep
if path[-1] in (path_sep, os.path.altsep):
last_sep = path_sep
else:
last_sep = ''
path = os.path.normcase(os.path.normpath(path))
return path + last_sep
try:
_splitunc = os.path.splitunc
except AttributeError:
def _splitunc(path):
return str(), path
def split_drive(path):
drive, path = os.path.splitdrive(path)
if not drive:
drive, path = _splitunc(path)
return drive, path
def _split_path(path):
drive, path = split_drive(path)
path = path.split(os.path.sep)
path.insert(0, drive)
return path
def split_path(path):
path = os.path.normcase(os.path.normpath(path))
path = _split_path(path)
path = [p for p in path if p]
return path
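`split_path()` normalizes a path and drops empty components; a self-contained equivalent (restated here, without the UNC fallback, so it runs on its own):

```python
import os

def split_path(path):
    path = os.path.normcase(os.path.normpath(path))
    drive, rest = os.path.splitdrive(path)
    parts = [drive] + rest.split(os.path.sep)
    return [p for p in parts if p]   # drop empty components

assert split_path("a//b/./c") == ["a", "b", "c"]
assert split_path("./x") == ["x"]
```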
def _common_prefix_size(*paths):
min_path = min(paths)
max_path = max(paths)
i = 0
for i, part in enumerate(min_path[:-1]):
if part != max_path[i]:
return i
return i + 1
def _relative_join(base_path, base_path_seq, path, sep=os.path.sep):
path = _split_path(_norm_local_path(path))
prefix_index = _common_prefix_size(base_path_seq, path)
if prefix_index == 0:
drive = path[0]
if drive:
drive = drive.replace(':', '').split(sep)
del path[0]
path[0:0] = drive
else:
path = path[prefix_index:]
path.insert(0, base_path)
path = filter(None, path)
return sep.join(path)
def relative_join_list(base_path, paths):
base_path = _norm_local_path(base_path)
base_path_seq = _split_path(base_path)
return [_relative_join(base_path, base_path_seq, path)
for path in to_sequence(paths)]
def relative_join(base_path, path):
base_path = _norm_local_path(base_path)
base_path_seq = _split_path(base_path)
return _relative_join(base_path, base_path_seq, path)
def change_path(path, dirname=None, name=None, ext=None, prefix=None):
path_dirname, path_filename = os.path.split(path)
path_name, path_ext = os.path.splitext(path_filename)
if dirname is None:
dirname = path_dirname
if name is None:
name = path_name
if ext is None:
ext = path_ext
if prefix:
name = prefix + name
path = dirname
if path:
path += os.path.sep
return path + name + ext
def _simple_path_getter(path):
return path
def group_paths_by_dir(file_paths,
wish_groups=1,
max_group_size=-1,
path_getter=None):
groups = ItemsGroups(len(file_paths), wish_groups, max_group_size)
if path_getter is None:
path_getter = _simple_path_getter
files = []
for file_path in file_paths:
path = path_getter(file_path)
dir_path, file_name = os.path.split(path)
dir_path = os.path.normcase(dir_path)
files.append((dir_path, file_path))
files.sort(key=operator.itemgetter(0))
last_dir = None
for dir_path, file_path in files:
if last_dir != dir_path:
last_dir = dir_path
groups.add_group()
groups.add(file_path)
return groups.get()
class Chdir (object):
__slots__ = ('previous_path', )
def __init__(self, path):
self.previous_path = os.getcwd()
os.chdir(path)
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
os.chdir(self.previous_path)
return False
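Typical use of the `Chdir` context manager: the working directory is restored on exit even if the body raises (the class is restated here so the example is self-contained):

```python
import os
import tempfile

class Chdir(object):
    __slots__ = ('previous_path',)

    def __init__(self, path):
        self.previous_path = os.getcwd()
        os.chdir(path)

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        os.chdir(self.previous_path)
        return False   # do not suppress exceptions

before = os.getcwd()
with Chdir(tempfile.gettempdir()):
    inside = os.getcwd()

assert os.path.realpath(inside) == os.path.realpath(tempfile.gettempdir())
assert os.getcwd() == before
```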
class ErrorDataFileFormatInvalid(Exception):
def __init__(self, filename=None):
msg = "Invalid format of data file: %s" % (filename,)
super(ErrorDataFileFormatInvalid, self).__init__(msg)
class ErrorDataFileCorrupted(Exception):
def __init__(self, filename):
msg = "Corrupted format of data file: %s" % (filename,)
super(ErrorDataFileCorrupted, self).__init__(msg)
def _bytes_to_blob_stub(value):
return value
def _many_bytes_to_blob_stub(values):
return values
def _blob_to_bytes_stub(value):
return value
def _bytes_to_blob_buf(value):
return buffer(value) # noqa
def _many_bytes_to_blob_buf(values):
return map(buffer, values) # noqa
def _blob_to_bytes_buf(value):
return bytes(value)
try:
buffer
except NameError:
_bytes_to_blob = _bytes_to_blob_stub
_many_bytes_to_blob = _many_bytes_to_blob_stub
_blob_to_bytes = _blob_to_bytes_stub
else:
_bytes_to_blob = _bytes_to_blob_buf
_many_bytes_to_blob = _many_bytes_to_blob_buf
_blob_to_bytes = _blob_to_bytes_buf
class SqlDataFile (object):
__slots__ = (
'id2key',
'key2id',
'connection',
)
def __init__(self, filename, force=False):
self.id2key = {}
self.key2id = {}
self.connection = None
self.open(filename, force=force)
def clear(self):
with self.connection as conn:
conn.execute("DELETE FROM items")
self.id2key.clear()
self.key2id.clear()
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
self.close()
def _load_ids(self, conn, blob_to_bytes=_blob_to_bytes):
set_key = self.key2id.__setitem__
set_id = self.id2key.__setitem__
for key, data_id in conn.execute("SELECT key,id FROM items"):
data_id = blob_to_bytes(data_id)
set_key(key, data_id)
set_id(data_id, key)
def open(self, filename, force=False):
self.close()
try:
conn = self._open_connection(filename)
except ErrorDataFileCorrupted:
os.remove(filename)
conn = self._open_connection(filename)
except ErrorDataFileFormatInvalid:
if not force and not self._is_aql_db(filename):
raise
os.remove(filename)
conn = self._open_connection(filename)
self._load_ids(conn)
self.connection = conn
def close(self):
if self.connection is not None:
self.connection.close()
self.connection = None
self.id2key.clear()
self.key2id.clear()
@staticmethod
def _is_aql_db(filename):
magic_tag = b".AQL.DB."
with open_file(filename, read=True, binary=True) as f:
tag = f.read(len(magic_tag))
return tag == magic_tag
@staticmethod
def _open_connection(filename):
conn = None
try:
conn = sqlite3.connect(filename,
detect_types=sqlite3.PARSE_DECLTYPES)
with conn:
conn.execute(
"CREATE TABLE IF NOT EXISTS items("
"key INTEGER PRIMARY KEY AUTOINCREMENT,"
"id BLOB UNIQUE,"
"data BLOB NOT NULL"
")")
except (sqlite3.DataError, sqlite3.IntegrityError):
if conn is not None:
conn.close()
raise ErrorDataFileCorrupted(filename)
except sqlite3.DatabaseError:
if conn is not None:
conn.close()
raise ErrorDataFileFormatInvalid(filename)
conn.execute("PRAGMA synchronous=OFF")
return conn
def read(self, data_id,
bytes_to_blob=_bytes_to_blob,
blob_to_bytes=_blob_to_bytes):
result = self.connection.execute("SELECT data FROM items where id=?",
(bytes_to_blob(data_id),))
data = result.fetchone()
if not data:
return None
return blob_to_bytes(data[0])
def write_with_key(self, data_id, data,
bytes_to_blob=_bytes_to_blob):
key = self.id2key.pop(data_id, None)
if key is not None:
del self.key2id[key]
with self.connection as conn:
cur = conn.execute(
"INSERT OR REPLACE INTO items(id, data) VALUES (?,?)",
(bytes_to_blob(data_id), bytes_to_blob(data)))
key = cur.lastrowid
self.key2id[key] = data_id
self.id2key[data_id] = key
return key
write = write_with_key
def get_ids(self, keys):
try:
return tuple(map(self.key2id.__getitem__, keys))
except KeyError:
return None
def get_keys(self, data_ids):
return map(self.id2key.__getitem__, data_ids)
def remove(self, data_ids, many_bytes_to_blob=_many_bytes_to_blob):
with self.connection as conn:
conn.executemany("DELETE FROM items WHERE id=?",
zip(many_bytes_to_blob(data_ids)))
get_key = self.id2key.__getitem__
del_key = self.key2id.__delitem__
del_id = self.id2key.__delitem__
for data_id in data_ids:
key = get_key(data_id)
del_key(key)
del_id(data_id)
def self_test(self, blob_to_bytes=_blob_to_bytes): # noqa
if self.connection is None:
if self.id2key:
raise AssertionError("id2key is not empty")
if self.key2id:
raise AssertionError("key2id is not empty")
return
key2id = self.key2id.copy()
id2key = self.id2key.copy()
items = self.connection.execute("SELECT key,id FROM items")
for key, data_id in items:
data_id = blob_to_bytes(data_id)
d = key2id.pop(key, None)
if d is None:
raise AssertionError("key(%s) not in key2id" % (key,))
if d != data_id:
raise AssertionError("data_id(%s) != d(%s)" %
(binascii.hexlify(data_id),
binascii.hexlify(d)))
k = id2key.pop(data_id, None)
if k is None:
raise AssertionError("data_id(%s) not in id2key" %
(binascii.hexlify(data_id),))
if k != key:
raise AssertionError("key(%s) != k(%s)" % (key, k))
if key2id:
raise AssertionError("unknown keys: %s" % (key2id,))
if id2key:
raise AssertionError("unknown data_ids: %s" % (id2key,))
class Tool(object):
    """Base class of build tools. Provides helper methods to locate
    required and optional programs in the options' environment."""
def __init__(self, options):
pass
@classmethod
def setup(cls, options):
pass
@classmethod
def options(cls):
return None
@classmethod
def find_program(cls, options, prog, hint_prog=None):
env = options.env.get()
prog = find_program(prog, env, hint_prog)
if prog is None:
raise NotImplementedError()
return prog
@classmethod
def find_programs(cls, options, progs, hint_prog=None):
env = options.env.get()
progs = find_programs(progs, env, hint_prog)
for prog in progs:
if prog is None:
raise NotImplementedError()
return progs
@classmethod
def find_optional_program(cls, options, prog, hint_prog=None):
env = options.env.get()
return find_optional_program(prog, env, hint_prog)
@classmethod
def find_optional_programs(cls, options, progs, hint_prog=None):
env = options.env.get()
return find_optional_programs(progs, env, hint_prog)
class ErrorOptionsCyclicallyDependent(TypeError):
def __init__(self):
msg = "Options cyclically depend from each other."
super(ErrorOptionsCyclicallyDependent, self).__init__(msg)
class ErrorOptionsMergeNonOptions(TypeError):
def __init__(self, value):
msg = "Type '%s' can't be merged with Options." % (type(value),)
super(ErrorOptionsMergeNonOptions, self).__init__(msg)
class ErrorOptionsMergeDifferentOptions(TypeError):
def __init__(self, name1, name2):
msg = "Can't merge one an optional value into two different options " "'%s' and '%s' " % (name1, name2)
super(ErrorOptionsMergeDifferentOptions, self).__init__(msg)
class ErrorOptionsMergeChild(TypeError):
def __init__(self):
msg = "Can't merge child options into the parent options. " "Use join() to move child options into its parent."
super(ErrorOptionsMergeChild, self).__init__(msg)
class ErrorOptionsJoinNoParent(TypeError):
def __init__(self, options):
msg = "Can't join options without parent: %s" % (options, )
super(ErrorOptionsJoinNoParent, self).__init__(msg)
class ErrorOptionsJoinParent(TypeError):
def __init__(self, options):
msg = "Can't join options with children: %s" % (options, )
super(ErrorOptionsJoinParent, self).__init__(msg)
class ErrorOptionsNoIteration(TypeError):
def __init__(self):
msg = "Options doesn't support iteration"
super(ErrorOptionsNoIteration, self).__init__(msg)
class ErrorOptionsUnableEvaluate(TypeError):
def __init__(self, name, err):
msg = "Unable to evaluate option '%s', error: %s" % (name, err)
super(ErrorOptionsUnableEvaluate, self).__init__(msg)
class _OpValueRef(tuple):
def __new__(cls, value):
return super(_OpValueRef, cls).__new__(cls, (value.name, value.key))
def get(self, options, context):
name, key = self
value = getattr(options, name).get(context)
if key is not NotImplemented:
value = value[key]
return value
class _OpValueExRef(tuple):
def __new__(cls, value):
return super(_OpValueExRef, cls).__new__(cls, (value.name,
value.key,
value.options))
def get(self):
name, key, options = self
value = getattr(options, name).get()
if key is not NotImplemented:
value = value[key]
return value
def _store_op_value(options, value):
if isinstance(value, OptionValueProxy):
value_options = value.options
if (options is value_options) or options._is_parent(value_options):
value = _OpValueRef(value)
else:
value_options._add_dependency(options)
value = _OpValueExRef(value)
elif isinstance(value, dict):
value = dict((k, _store_op_value(options, v))
for k, v in value.items())
elif isinstance(value, (list, tuple, UniqueList, set, frozenset)):
value = [_store_op_value(options, v) for v in value]
return value
def _load_op_value(options, context, value):
if isinstance(value, _OpValueRef):
value = value.get(options, context)
value = simplify_value(value)
elif isinstance(value, _OpValueExRef):
value = value.get()
value = simplify_value(value)
elif isinstance(value, dict):
value = dict((k, _load_op_value(options, context, v))
for k, v in value.items())
elif isinstance(value, (list, tuple, UniqueList, set, frozenset)):
value = [_load_op_value(options, context, v) for v in value]
else:
value = simplify_value(value)
return value
def _eval_cmp_value(value):
if isinstance(value, OptionValueProxy):
value = value.get()
value = simplify_value(value)
return value
class OptionValueProxy (object):
    """Proxy for a single option value that evaluates, compares and
    modifies it through its owning Options instance; indexing with a
    key addresses one item of a dict/list option."""
def __init__(self,
option_value,
from_parent,
name,
options,
key=NotImplemented):
self.option_value = option_value
self.from_parent = from_parent
self.name = name
self.options = options
self.key = key
self.child_ref = None
def is_set(self):
return self.option_value.is_set()
def is_set_not_to(self, value):
return self.option_value.is_set() and (self != value)
def get(self, context=None):
self.child_ref = None
v = self.options.evaluate(self.option_value, context, self.name)
return v if self.key is NotImplemented else v[self.key]
def __iadd__(self, other):
self.child_ref = None
if self.key is not NotImplemented:
other = op_iadd_key(self.key, other)
self.options._append_value(
self.option_value, self.from_parent, other, op_iadd)
return self
def __add__(self, other):
return SimpleOperation(operator.add, self, other)
def __radd__(self, other):
return SimpleOperation(operator.add, other, self)
def __sub__(self, other):
return SimpleOperation(operator.sub, self, other)
def __rsub__(self, other):
return SimpleOperation(operator.sub, other, self)
def __isub__(self, other):
self.child_ref = None
if self.key is not NotImplemented:
other = op_isub_key(self.key, other)
self.options._append_value(
self.option_value, self.from_parent, other, op_isub)
return self
def set(self, value):
self.child_ref = None
if self.key is not NotImplemented:
value = op_set_key(self.key, value)
self.options._append_value(
self.option_value, self.from_parent, value, op_set)
def __setitem__(self, key, value):
child_ref = self.child_ref
if (child_ref is not None) and (child_ref() is value):
return
if self.key is not NotImplemented:
raise KeyError(key)
option_type = self.option_value.option_type
if isinstance(option_type, DictOptionType):
if isinstance(value, OptionType) or (type(value) is type):
option_type.set_value_type(key, value)
return
value = op_set_key(key, value)
self.child_ref = None
self.options._append_value(
self.option_value, self.from_parent, value, op_set)
def __getitem__(self, key):
if self.key is not NotImplemented:
raise KeyError(key)
child = OptionValueProxy(
self.option_value, self.from_parent, self.name, self.options, key)
self.child_ref = weakref.ref(child)
return child
def update(self, value):
self.child_ref = None
self.options._append_value(
self.option_value, self.from_parent, value, op_iupdate)
def __iter__(self):
        raise TypeError("OptionValueProxy doesn't support iteration")
def __bool__(self):
return bool(self.get(context=None))
def __nonzero__(self):
return bool(self.get(context=None))
def __str__(self):
return str(self.get(context=None))
def is_true(self, context):
return bool(self.get(context))
def is_false(self, context):
return not bool(self.get(context))
def eq(self, context, other):
return self.cmp(context, operator.eq, other)
def ne(self, context, other):
return self.cmp(context, operator.ne, other)
def lt(self, context, other):
return self.cmp(context, operator.lt, other)
def le(self, context, other):
return self.cmp(context, operator.le, other)
def gt(self, context, other):
return self.cmp(context, operator.gt, other)
def ge(self, context, other):
return self.cmp(context, operator.ge, other)
def __eq__(self, other):
return self.eq(None, _eval_cmp_value(other))
def __ne__(self, other):
return self.ne(None, _eval_cmp_value(other))
def __lt__(self, other):
return self.lt(None, _eval_cmp_value(other))
def __le__(self, other):
return self.le(None, _eval_cmp_value(other))
def __gt__(self, other):
return self.gt(None, _eval_cmp_value(other))
def __ge__(self, other):
return self.ge(None, _eval_cmp_value(other))
def __contains__(self, other):
return self.has(None, _eval_cmp_value(other))
def cmp(self, context, cmp_operator, other):
self.child_ref = None
value = self.get(context)
if not isinstance(value, (Dict, List)) and (self.key is NotImplemented):
other = self.option_value.option_type(other)
return cmp_operator(value, other)
def has(self, context, other):
value = self.get(context)
return other in value
def has_any(self, context, others):
value = self.get(context)
for other in to_sequence(others):
if other in value:
return True
return False
def has_all(self, context, others):
value = self.get(context)
for other in to_sequence(others):
if other not in value:
return False
return True
def one_of(self, context, others):
value = self.get(context)
for other in others:
other = self.option_value.option_type(other)
if value == other:
return True
return False
def not_in(self, context, others):
return not self.one_of(context, others)
def option_type(self):
self.child_ref = None
return self.option_value.option_type
class ConditionGeneratorHelper(object):
    """Helper returned by ConditionGenerator to build comparison
    conditions and conditional modifications of a named option."""
__slots__ = ('name', 'options', 'condition', 'key')
def __init__(self, name, options, condition, key=NotImplemented):
self.name = name
self.options = options
self.condition = condition
self.key = key
@staticmethod
def __cmp_value(options, context, cmp_method, name, key, *args):
opt = getattr(options, name)
if key is not NotImplemented:
opt = opt[key]
return getattr(opt, cmp_method)(context, *args)
@staticmethod
def __make_cmp_condition(condition, cmp_method, name, key, *args):
return Condition(condition,
ConditionGeneratorHelper.__cmp_value,
cmp_method,
name,
key,
*args)
def cmp(self, cmp_method, *args):
condition = self.__make_cmp_condition(
self.condition, cmp_method, self.name, self.key, *args)
return ConditionGenerator(self.options, condition)
def __iter__(self):
        raise TypeError(
            "ConditionGeneratorHelper doesn't support iteration")
def __getitem__(self, key):
if self.key is not NotImplemented:
raise KeyError(key)
return ConditionGeneratorHelper(self.name,
self.options,
self.condition,
key)
def __setitem__(self, key, value):
if not isinstance(value, ConditionGeneratorHelper):
value = op_set_key(key, value)
self.options.append_value(
self.name, value, op_set, self.condition)
def eq(self, other):
return self.cmp('eq', other)
def ne(self, other):
return self.cmp('ne', other)
def gt(self, other):
return self.cmp('gt', other)
def ge(self, other):
return self.cmp('ge', other)
def lt(self, other):
return self.cmp('lt', other)
def le(self, other):
return self.cmp('le', other)
def has(self, value):
return self.cmp('has', value)
def has_any(self, values):
return self.cmp('has_any', values)
def has_all(self, values):
return self.cmp('has_all', values)
def one_of(self, values):
return self.cmp('one_of', values)
def not_in(self, values):
return self.cmp('not_in', values)
def is_true(self):
return self.cmp('is_true')
def is_false(self):
return self.cmp('is_false')
def __iadd__(self, value):
if self.key is not NotImplemented:
value = op_iadd_key(self.key, value)
self.options.append_value(self.name, value, op_iadd, self.condition)
return self
def __isub__(self, value):
if self.key is not NotImplemented:
value = op_isub_key(self.key, value)
self.options.append_value(self.name, value, op_isub, self.condition)
return self
class ConditionGenerator(object):
    """Generator of conditional option assignments,
    returned by Options.when() / Options.If."""
def __init__(self, options, condition=None):
self.__dict__['__options'] = options
self.__dict__['__condition'] = condition
def __getattr__(self, name):
return ConditionGeneratorHelper(name,
self.__dict__['__options'],
self.__dict__['__condition'])
def __setattr__(self, name, value):
if not isinstance(value, ConditionGeneratorHelper):
condition = self.__dict__['__condition']
self.__dict__['__options'].append_value(name,
value,
op_set,
condition)
def _items_by_value(items):
values = {}
for name, value in items:
try:
values[value].add(name)
except KeyError:
values[value] = {name}
return values
class Options (object):
    """Hierarchical container of option values. Values are stored as
    conditional operations and evaluated lazily; child Options inherit
    values from their parent and may override them."""
def __init__(self, parent=None):
self.__dict__['__parent'] = parent
self.__dict__['__cache'] = {}
self.__dict__['__opt_values'] = {}
self.__dict__['__children'] = []
if parent is not None:
parent.__dict__['__children'].append(weakref.ref(self))
def _add_dependency(self, child):
children = self.__dict__['__children']
for child_ref in children:
if child_ref() is child:
return
if child._is_dependency(self):
raise ErrorOptionsCyclicallyDependent()
children.append(weakref.ref(child))
def _is_dependency(self, other):
children = list(self.__dict__['__children'])
while children:
child_ref = children.pop()
child = child_ref()
if child is None:
continue
if child is other:
return True
children += child.__dict__['__children']
return False
def _is_parent(self, other):
if other is None:
return False
parent = self.__dict__['__parent']
while parent is not None:
if parent is other:
return True
parent = parent.__dict__['__parent']
return False
def __copy_parent_option(self, opt_value):
parent = self.__dict__['__parent']
items = parent._values_map_by_name().items()
names = [name for name, value in items if value is opt_value]
opt_value = opt_value.copy()
self.__set_opt_value(opt_value, names)
return opt_value
def get_hash_ref(self):
if self.__dict__['__opt_values']:
return weakref.ref(self)
parent = self.__dict__['__parent']
if parent is None:
return weakref.ref(self)
return parent.get_hash_ref()
def has_changed_key_options(self):
parent = self.__dict__['__parent']
for name, opt_value in self.__dict__['__opt_values'].items():
if not opt_value.is_tool_key() or not opt_value.is_set():
continue
parent_opt_value, from_parent = parent._get_value(
name, raise_ex=False)
if parent_opt_value is None:
continue
if parent_opt_value.is_set():
value = self.evaluate(opt_value, None, name)
parent_value = parent.evaluate(parent_opt_value, None, name)
if value != parent_value:
return True
return False
def __add_new_option(self, name, value):
self.clear_cache()
if isinstance(value, OptionType):
opt_value = OptionValue(value)
elif isinstance(value, OptionValueProxy):
if value.options is self:
if not value.from_parent:
opt_value = value.option_value
else:
opt_value = self.__copy_parent_option(value.option_value)
elif self._is_parent(value.options):
opt_value = self.__copy_parent_option(value.option_value)
else:
opt_value = value.option_value.copy()
opt_value.reset()
value = self._make_cond_value(value, op_set)
opt_value.append_value(value)
elif isinstance(value, OptionValue):
opt_value = value
else:
opt_value = OptionValue(auto_option_type(value))
value = self._make_cond_value(value, op_set)
opt_value.append_value(value)
self.__dict__['__opt_values'][name] = opt_value
def __set_value(self, name, value, operation_type=op_set):
opt_value, from_parent = self._get_value(name, raise_ex=False)
if opt_value is None:
self.__add_new_option(name, value)
return
if isinstance(value, OptionType):
opt_value.option_type = value
return
elif isinstance(value, OptionValueProxy):
if value.option_value is opt_value:
return
elif value is opt_value:
return
self._append_value(opt_value, from_parent, value, operation_type)
def __set_opt_value(self, opt_value, names):
opt_values = self.__dict__['__opt_values']
for name in names:
opt_values[name] = opt_value
def __setattr__(self, name, value):
self.__set_value(name, value)
def __setitem__(self, name, value):
self.__set_value(name, value)
def _get_value(self, name, raise_ex):
try:
return self.__dict__['__opt_values'][name], False
except KeyError:
parent = self.__dict__['__parent']
if parent is not None:
value, from_parent = parent._get_value(name, False)
if value is not None:
return value, True
if raise_ex:
raise AttributeError(
"Options '%s' instance has no option '%s'" %
(type(self), name))
return None, False
def __getitem__(self, name):
return self.__getattr__(name)
def __getattr__(self, name):
opt_value, from_parent = self._get_value(name, raise_ex=True)
return OptionValueProxy(opt_value, from_parent, name, self)
def __contains__(self, name):
return self._get_value(name, raise_ex=False)[0] is not None
def __iter__(self):
raise ErrorOptionsNoIteration()
def _values_map_by_name(self, result=None):
if result is None:
result = {}
parent = self.__dict__['__parent']
if parent is not None:
parent._values_map_by_name(result=result)
result.update(self.__dict__['__opt_values'])
return result
def _values_map_by_value(self):
items = self._values_map_by_name().items()
return _items_by_value(items)
def help(self, with_parent=False, hidden=False):
if with_parent:
options_map = self._values_map_by_name()
else:
options_map = self.__dict__['__opt_values']
options2names = _items_by_value(options_map.items())
result = {}
for option, names in options2names.items():
option_help = option.option_type.help()
if option_help.is_hidden() and not hidden:
continue
option_help.names = names
try:
option_help.current_value = self.evaluate(option, {}, names)
except Exception:
pass
group_name = option_help.group if option_help.group else ""
try:
group = result[group_name]
except KeyError:
group = result[group_name] = OptionHelpGroup(group_name)
group.append(option_help)
return sorted(result.values(), key=operator.attrgetter('name'))
def help_text(self, title, with_parent=False, hidden=False, brief=False):
border = "=" * len(title)
result = ["", title, border, ""]
for group in self.help(with_parent=with_parent, hidden=hidden):
text = group.text(brief=brief, indent=2)
if result[-1]:
result.append("")
result.extend(text)
return result
def set_group(self, group):
opt_values = self._values_map_by_name().values()
for opt_value in opt_values:
if isinstance(opt_value, OptionValueProxy):
opt_value = opt_value.option_value
opt_value.option_type.group = group
def __nonzero__(self):
return bool(self.__dict__['__opt_values']) or bool(self.__dict__['__parent'])
def __bool__(self):
return bool(self.__dict__['__opt_values']) or bool(self.__dict__['__parent'])
def update(self, other):
if not other:
return
if self is other:
return
if isinstance(other, Options):
self.merge(other)
else:
ignore_types = (ConditionGeneratorHelper,
ConditionGenerator,
Options)
for name, value in other.items():
if isinstance(value, ignore_types):
continue
try:
self.__set_value(name, value, op_iupdate)
except ErrorOptionTypeCantDeduce:
pass
def __merge(self, self_names, other_names, move_values=False):
self.clear_cache()
other_values = _items_by_value(other_names.items())
self_names_set = set(self_names)
self_values = _items_by_value(self_names.items())
for value, names in other_values.items():
same_names = names & self_names_set
if same_names:
self_value_name = next(iter(same_names))
self_value = self_names[self_value_name]
self_values_names = self_values[self_value]
self_other_names = same_names - self_values_names
if self_other_names:
raise ErrorOptionsMergeDifferentOptions(
self_value_name, self_other_names.pop())
else:
new_names = names - self_values_names
self_value.merge(value)
else:
if move_values:
self_value = value
else:
self_value = value.copy()
new_names = names
self.__set_opt_value(self_value, new_names)
def merge(self, other):
if not other:
return
if self is other:
return
if not isinstance(other, Options):
raise ErrorOptionsMergeNonOptions(other)
if other._is_parent(self):
raise ErrorOptionsMergeChild()
self.__merge(self._values_map_by_name(), other._values_map_by_name())
def join(self):
parent = self.__dict__['__parent']
if parent is None:
raise ErrorOptionsJoinNoParent(self)
if self.__dict__['__children']:
raise ErrorOptionsJoinParent(self)
parent.__merge(parent.__dict__['__opt_values'],
self.__dict__['__opt_values'],
move_values=True)
self.clear()
def unjoin(self):
parent = self.__dict__['__parent']
if parent is None:
return
self.__merge(
self.__dict__['__opt_values'], parent._values_map_by_name())
self.__dict__['__parent'] = None
def __unjoin_children(self):
children = self.__dict__['__children']
for child_ref in children:
child = child_ref()
if child is not None:
child.unjoin()
del children[:]
def __clear_children_cache(self):
def _clear_child_cache(ref):
child = ref()
if child is not None:
child.clear_cache()
return True
return False
self.__dict__['__children'] = list(
filter(_clear_child_cache, self.__dict__['__children']))
def __remove_child(self, child):
def _filter_child(child_ref, removed_child=child):
filter_child = child_ref()
return (filter_child is not None) and (filter_child is not removed_child)
self.__dict__['__children'] = list(
filter(_filter_child, self.__dict__['__children']))
def clear(self):
parent = self.__dict__['__parent']
self.__unjoin_children()
if parent is not None:
parent.__remove_child(self)
self.__dict__['__parent'] = None
self.__dict__['__cache'].clear()
self.__dict__['__opt_values'].clear()
def override(self, **kw):
other = Options(self)
other.update(kw)
return other
def copy(self):
other = Options()
for opt_value, names in self._values_map_by_value().items():
other.__set_opt_value(opt_value.copy(), names)
return other
def _evaluate(self, option_value, context):
try:
if context is not None:
return context[option_value]
except KeyError:
pass
attrs = self.__dict__
if attrs['__opt_values']:
cache = attrs['__cache']
else:
cache = attrs['__parent'].__dict__['__cache']
try:
return cache[option_value]
except KeyError:
pass
value = option_value.get(self, context, _load_op_value)
cache[option_value] = value
return value
def evaluate(self, option_value, context, name):
try:
return self._evaluate(option_value, context)
except ErrorOptionTypeUnableConvertValue as ex:
if not name:
raise
option_help = ex.option_help
if option_help.names:
raise
option_help.names = tuple(to_sequence(name))
raise ErrorOptionTypeUnableConvertValue(
option_help, ex.invalid_value)
except Exception as ex:
raise ErrorOptionsUnableEvaluate(name, ex)
def _store_value(self, value):
if isinstance(value, Operation):
value.convert(self, _store_op_value)
else:
value = _store_op_value(self, value)
return value
def _load_value(self, value):
if isinstance(value, Operation):
return value(self, {}, _load_op_value)
else:
value = _load_op_value(self, {}, value)
return value
def _make_cond_value(self, value, operation_type, condition=None):
if isinstance(value, ConditionalValue):
return value
if not isinstance(value, InplaceOperation):
value = operation_type(value)
value = ConditionalValue(value, condition)
value.convert(self, _store_op_value)
return value
def append_value(self, name, value, operation_type, condition=None):
opt_value, from_parent = self._get_value(name, raise_ex=True)
self._append_value(
opt_value, from_parent, value, operation_type, condition)
def _append_value(self,
opt_value,
from_parent,
value,
operation_type,
condition=None):
value = self._make_cond_value(value, operation_type, condition)
self.clear_cache()
if from_parent:
opt_value = self.__copy_parent_option(opt_value)
opt_value.append_value(value)
def clear_cache(self):
self.__dict__['__cache'].clear()
self.__clear_children_cache()
def when(self, cond=None):
if cond is not None:
if isinstance(cond, ConditionGeneratorHelper):
cond = cond.condition
elif isinstance(cond, ConditionGenerator):
cond = cond.__dict__['__condition']
elif not isinstance(cond, Condition):
cond = Condition(None,
lambda options, context, arg: bool(arg),
cond)
return ConditionGenerator(self, cond)
If = when
class ErrorEntitiesFileUnknownEntity(Exception):
def __init__(self, entity):
msg = "Unknown entity: %s" % (entity, )
super(ErrorEntitiesFileUnknownEntity, self).__init__(msg)
class EntitiesFile (object):
    """File-based storage of pickled entities protected by a file lock,
    backed by a DataFile or SqlDataFile and fronted by an in-memory
    cache."""
__slots__ = (
'data_file',
'file_lock',
'cache',
'pickler',
)
def __init__(self, filename, use_sqlite=False, force=False):
self.cache = {}
self.data_file = None
self.pickler = EntityPickler()
self.open(filename, use_sqlite=use_sqlite, force=force)
def __enter__(self):
return self
def __exit__(self, exc_type, exc_entity, traceback):
self.close()
def open(self, filename, use_sqlite=False, force=False):
self.file_lock = FileLock(filename)
self.file_lock.write_lock(wait=False, force=force)
if use_sqlite:
self.data_file = SqlDataFile(filename, force=force)
else:
self.data_file = DataFile(filename, force=force)
def close(self):
self.cache.clear()
if self.data_file is not None:
self.data_file.close()
self.data_file = None
self.file_lock.release_lock()
def clear(self):
if self.data_file is not None:
self.data_file.clear()
self.cache.clear()
def find_node_entity(self, entity):
entity_id = entity.id
dump = self.data_file.read(entity_id)
if dump is None:
return None
try:
entity = self.pickler.loads(dump)
entity.id = entity_id
except Exception:
self.data_file.remove((entity_id,))
return None
return entity
def add_node_entity(self, entity):
dump = self.pickler.dumps(entity)
self.data_file.write(entity.id, dump)
def remove_node_entities(self, entities):
entity_ids = map(operator.attrgetter('id'), entities)
self.data_file.remove(entity_ids)
def _find_entity_by_id(self, entity_id):
try:
return self.cache[entity_id]
except KeyError:
pass
data = self.data_file.read(entity_id)
if data is None:
            raise ValueError("Unknown entity id: %s" % (entity_id,))
try:
entity = self.pickler.loads(data)
entity.id = entity_id
except Exception:
self.data_file.remove((entity_id,))
            raise ValueError("Corrupted entity id: %s" % (entity_id,))
self.cache[entity_id] = entity
return entity
def find_entities_by_key(self, keys):
entity_ids = self.data_file.get_ids(keys)
if entity_ids is None:
return None
try:
return list(map(self._find_entity_by_id, entity_ids))
except Exception:
return None
def find_entities(self, entities):
try:
return list(map(self._find_entity_by_id,
map(operator.attrgetter('id'), entities)))
except Exception:
return None
def add_entities(self, entities):
keys = []
entity_ids = []
key_append = keys.append
entity_append = entity_ids.append
for entity in entities:
entity_id = entity.id
try:
stored_entity = self._find_entity_by_id(entity_id)
if stored_entity == entity:
entity_append(entity_id)
continue
except Exception:
pass
key = self.update_entity(entity)
key_append(key)
keys.extend(self.data_file.get_keys(entity_ids))
return keys
def update_entity(self, entity):
entity_id = entity.id
self.cache[entity_id] = entity
data = self.pickler.dumps(entity)
key = self.data_file.write_with_key(entity_id, data)
return key
def remove_entities(self, entities):
remove_ids = tuple(map(operator.attrgetter('id'), entities))
for entity_id in remove_ids:
try:
del self.cache[entity_id]
except KeyError:
pass
self.data_file.remove(remove_ids)
def self_test(self):
if self.data_file is None:
if self.cache:
raise AssertionError("cache is not empty")
return
self.data_file.self_test()
for entity_id, entity in self.cache.items():
if entity_id != entity.id:
raise AssertionError(
"entity_id(%s) != entity.id(%s)" % (entity_id, entity.id))
dump = self.data_file.read(entity_id)
stored_entity = self.pickler.loads(dump)
if stored_entity != entity:
raise AssertionError("stored_entity(%s) != entity(%s)" %
(stored_entity.id, entity.id))
class ErrorEntityNameEmpty(Exception):
def __init__(self):
msg = "Entity name is empty"
super(ErrorEntityNameEmpty, self).__init__(msg)
class ErrorSignatureEntityInvalidDataType(Exception):
def __init__(self, data):
msg = "Signature data type must be bytes or bytearray, " "actual type: '%s'" % (type(data),)
super(ErrorSignatureEntityInvalidDataType, self).__init__(msg)
class ErrorTextEntityInvalidDataType(Exception):
def __init__(self, text):
msg = "Text data type must be string, actual type: '%s'" % (
type(text),)
super(ErrorTextEntityInvalidDataType, self).__init__(msg)
class EntityBase (object):
    """Base class of entities: named objects with a lazily computed id
    and signature; the signature is used to detect changes."""
__slots__ = ('id', 'name', 'signature', 'tags')
def __new__(cls, name, signature, tags=None):
self = super(EntityBase, cls).__new__(cls)
if name is not NotImplemented:
if not name:
raise ErrorEntityNameEmpty()
self.name = name
if signature is not NotImplemented:
self.signature = signature
self.tags = frozenset(to_sequence(tags))
return self
def __hash__(self):
return hash(self.id)
def get(self):
"""
Returns value of the entity
"""
raise NotImplementedError(
"Abstract method. It should be implemented in a child class.")
def get_id(self):
cls = self.__class__
return simple_object_signature((self.name,
cls.__name__,
cls.__module__))
def get_name(self):
raise NotImplementedError(
"Abstract method. It should be implemented in a child class.")
def get_signature(self):
raise NotImplementedError(
"Abstract method. It should be implemented in a child class.")
def __getattr__(self, attr):
if attr == 'signature':
self.signature = signature = self.get_signature()
return signature
elif attr == 'name':
self.name = name = self.get_name()
return name
elif attr == 'id':
self.id = entity_id = self.get_id()
return entity_id
raise AttributeError("Unknown attribute: '%s'" % (attr,))
def __getnewargs__(self):
raise NotImplementedError(
"Abstract method. It should be implemented in a child class.")
def is_actual(self):
"""
Checks whether the entity is actual or not
"""
return bool(self.signature)
def get_actual(self):
"""
Returns an actual entity.
        If the current entity is already actual, it is returned as is.
"""
return self
def __getstate__(self):
return {}
def __setstate__(self, state):
pass
def __eq__(self, other):
return (self.id == other.id) and (self.signature == other.signature)
def __ne__(self, other):
return not self.__eq__(other)
def __str__(self):
return cast_str(self.get())
def remove(self):
pass
@pickleable
class SimpleEntity (EntityBase):
    """Entity holding an arbitrary value; by default its signature and
    name are derived from the value."""
__slots__ = ('data', )
def __new__(cls, data=None, name=None, signature=None, tags=None):
if data is None:
signature = None
else:
if signature is None:
signature = simple_object_signature(data)
if not name:
name = signature
self = super(SimpleEntity, cls).__new__(cls, name, signature, tags)
self.data = data
return self
def get(self):
return self.data
def __getnewargs__(self):
tags = self.tags
if not tags:
tags = None
name = self.name
if name == self.signature:
name = None
return self.data, name, self.signature, tags
@pickleable
class NullEntity (EntityBase):
    """Entity with no value; it is never actual."""
def __new__(cls):
name = 'N'
signature = None
return super(NullEntity, cls).__new__(cls, name, signature)
def get(self):
return None
def __getnewargs__(self):
return tuple()
def is_actual(self):
return False
@pickleable
class SignatureEntity (EntityBase):
def __new__(cls, data=None, name=None, tags=None):
if data is not None:
if not isinstance(data, (bytes, bytearray)):
raise ErrorSignatureEntityInvalidDataType(data)
if not name:
name = data
return super(SignatureEntity, cls).__new__(cls, name, data, tags)
def get(self):
return self.signature
def __getnewargs__(self):
tags = self.tags
if not tags:
tags = None
name = self.name
if name == self.signature:
name = None
return self.signature, name, tags
def _build_options():
options = Options()
options.build_path = PathOptionType(
description="The building directory full path.")
options.build_dir = PathOptionType(
description="The building directory.", default='build_output')
options.relative_build_paths = BoolOptionType(
description="The building directory suffix.", default=True)
options.build_dir_name = StrOptionType(
description="The building directory name.")
options.prefix = StrOptionType(description="Output files prefix.")
options.suffix = StrOptionType(description="Output files suffix.")
options.target = StrOptionType(description="Output file name.")
build_variant = EnumOptionType(values=[
('debug', 'dbg', 'd'),
('release_speed', 'release',
'rel', 'rs', 'speed'),
('release_size', 'rz',
'rel_size', 'size'),
('final', 'f'),
],
default='debug',
description="Current build variant")
options.build_variant = build_variant
options.bv = options.build_variant
options.build_variants = ListOptionType(
value_type=build_variant,
unique=True,
description="Active build variants"
)
options.bvs = options.build_variants
file_signature = EnumOptionType(
values=[('checksum', 'md5', 'sign'), ('timestamp', 'time')],
default='checksum',
description="Type used to detect changes in dependency files"
)
options.file_signature = file_signature
options.signature = options.file_signature
options.batch_build = BoolOptionType(description="Prefer batch build.")
options.batch_groups = OptionType(
value_type=int,
default=1,
description="Preferred number of batching groups."
)
options.batch_size = OptionType(
value_type=int,
default=0,
description="Preferred size of a batching group.")
options.set_group("Build")
return options
def _target_options():
options = Options()
options.target_os = EnumOptionType(
values=['native', 'unknown',
('windows',
'win32', 'win64'),
('linux', 'linux-gnu'),
'uclinux',
'cygwin',
'interix',
'freebsd',
'openbsd',
'netbsd',
('OS-X', 'osx', 'darwin'),
'java',
'sunos',
'hpux',
'vxworks',
'solaris',
'elf'],
default='native',
strict=False,
is_tool_key=True,
description="The target system/OS name, e.g. 'Linux', 'Windows' etc.")
options.os = options.target_os
options.target_arch = EnumOptionType(
values=['native', 'unknown',
('x86-32', 'x86_32', 'x86', '80x86', 'i386', 'i486', 'i586',
'i686'),
('x86-64', 'x86_64', 'amd64', 'x64'),
'arm', 'arm64',
'alpha',
'mips',
'ia64',
'm68k',
'sparc',
'sparc64',
'sparcv9',
'powerpc',
],
default='native',
strict=False,
is_tool_key=True,
description="The target machine type, e.g. 'i386'")
options.arch = options.target_arch
options.target_subsystem = EnumOptionType(
values=['console', 'windows'],
default='console',
description="The target subsystem."
)
options.target_platform = StrOptionType(
ignore_case=True,
description="The target system's distribution, e.g. 'win32', 'Linux'"
)
options.target_os_release = StrOptionType(
ignore_case=True,
description="The target system's release, e.g. '2.2.0' or 'XP'"
)
options.target_os_version = VersionOptionType(
description="The target system's release version, "
"e.g. '2.2.0' or '5.1.2600'"
)
options.target_cpu = StrOptionType(
ignore_case=True,
description="The target real processor name, e.g. 'amdk6'.")
options.target_cpu_flags = ListOptionType(
value_type=IgnoreCaseString,
description="The target CPU flags, e.g. 'mmx', 'sse2'.")
options.set_group("Target system")
return options
def _optimization_options():
options = Options()
options.optimization = EnumOptionType(
values=[('off', 0), ('size', 1), ('speed', 2)],
default='off',
description='Optimization level'
)
options.optlevel = options.optimization
options.opt = options.optimization
options.inlining = EnumOptionType(values=['off', 'on', 'full'],
default='off',
description='Inline function expansion')
options.whole_optimization = BoolOptionType(
description='Whole program optimization')
options.whole_opt = options.whole_optimization
options.set_group("Optimization")
return options
def _code_gen_options():
options = Options()
options.debug_symbols = BoolOptionType(
description='Include debug symbols', style=('on', 'off'))
options.profile = BoolOptionType(
description='Enable compiler profiling', style=('on', 'off'))
options.keep_asm = BoolOptionType(
description='Keep generated assemblers files')
options.runtime_link = EnumOptionType(
values=['default', 'static', ('shared', 'dynamic')],
default='default',
description='Linkage type of runtime library')
options.rt_link = options.runtime_link
options.runtime_debug = BoolOptionType(
style=('on', 'off'),
description='Use debug version of runtime library'
)
options.rt_debug = options.runtime_debug
options.rtti = BoolOptionType(
description='Enable Run Time Type Information', default=True)
options.exceptions = BoolOptionType(
description='Allow to throw exceptions', default=True)
options.runtime_thread = EnumOptionType(
values=['default', 'single', 'multi'],
default='default',
description='Threading mode of runtime library'
)
options.rt_thread = options.runtime_thread
options.set_group("Code generation")
return options
def _diagnostic_options():
options = Options()
options.warning_level = RangeOptionType(
0, 4, description='Warning level', default=4)
options.warn_level = options.warning_level
options.warning_as_error = BoolOptionType(
description='Treat warnings as errors')
options.werror = options.warning_as_error
options.warnings_as_errors = options.warning_as_error
options.lint = EnumOptionType(
values=[('off', 0), ('on', 1), ('global', 2)],
default='off',
description='Lint source code.',
is_hidden=True
)
options.lint_flags = ListOptionType(description="Lint tool options",
is_hidden=True)
options.set_group("Diagnostic")
return options
def _env_options():
if os.path.normcase('ABC') == os.path.normcase('abc'):
env_key_type = UpperCaseString
else:
env_key_type = String
options = Options()
options.env = DictOptionType(key_type=env_key_type)
options.env['PATH'] = ListOptionType(
value_type=PathOptionType(),
separators=os.pathsep
)
options.env['PATHEXT'] = ListOptionType(
value_type=PathOptionType(),
separators=os.pathsep
)
options.env['TEMP'] = PathOptionType()
options.env['TMP'] = PathOptionType()
options.env['HOME'] = PathOptionType()
options.env['HOMEPATH'] = PathOptionType()
options.env = os.environ.copy()
return options
def _init_defaults(options):
if_ = options.If()
if_.target_os.ne('native').build_dir_name += options.target_os + '_'
if_.target_arch.ne('native').build_dir_name += options.target_arch + '_'
options.build_dir_name += options.build_variant
options.build_path = SimpleOperation(
os.path.join, options.build_dir, options.build_dir_name)
bv = if_.build_variant
debug_build_variant = bv.eq('debug')
debug_build_variant.optimization = 'off'
debug_build_variant.inlining = 'off'
debug_build_variant.whole_optimization = 'off'
debug_build_variant.debug_symbols = 'on'
debug_build_variant.runtime_debug = 'on'
speed_build_variant = bv.one_of(['release_speed', 'final'])
speed_build_variant.optimization = 'speed'
speed_build_variant.inlining = 'full'
speed_build_variant.whole_optimization = 'on'
speed_build_variant.debug_symbols = 'off'
speed_build_variant.runtime_debug = 'off'
size_build_variant = bv.eq('release_size')
size_build_variant.optimization = 'size'
size_build_variant.inlining = 'on'
size_build_variant.whole_optimization = 'on'
size_build_variant.debug_symbols = 'off'
size_build_variant.runtime_debug = 'off'
def builtin_options():
options = Options()
options.merge(_build_options())
options.merge(_target_options())
options.merge(_optimization_options())
options.merge(_code_gen_options())
options.merge(_diagnostic_options())
options.merge(_env_options())
_init_defaults(options)
return options
class ErrorFileEntityNoName(Exception):
def __init__(self):
msg = "Filename is not specified"
super(ErrorFileEntityNoName, self).__init__(msg)
class FileEntityBase (EntityBase):
def __new__(cls, name, signature=NotImplemented, tags=None):
if isinstance(name, FileEntityBase):
name = name.name
else:
if isinstance(name, EntityBase):
name = name.get()
if not name:
raise ErrorFileEntityNoName()
name = os.path.normcase(os.path.abspath(name))
self = super(FileEntityBase, cls).__new__(cls, name,
signature, tags=tags)
return self
def get(self):
return self.name
def __getnewargs__(self):
tags = self.tags
if not tags:
tags = None
return self.name, self.signature, tags
def remove(self):
try:
os.remove(self.name)
except OSError:
pass
def get_actual(self):
signature = self.get_signature()
if self.signature == signature:
return self
other = super(FileEntityBase, self).__new__(self.__class__,
self.name,
signature,
self.tags)
other.id = self.id
return other
def is_actual(self):
if not self.signature:
return False
if self.signature == self.get_signature():
return True
return False
def _get_file_checksum(path, offset=0):
try:
signature = file_signature(path, offset)
except (OSError, IOError):
try:
signature = file_time_signature(path)
except (OSError, IOError):
return None
return signature
def _get_file_timestamp(path):
try:
signature = file_time_signature(path)
except (OSError, IOError):
return None
return signature
@pickleable
class FileChecksumEntity(FileEntityBase):
def get_signature(self):
return _get_file_checksum(self.name)
@pickleable
class FileTimestampEntity(FileEntityBase):
def get_signature(self):
return _get_file_timestamp(self.name)
@pickleable
class DirEntity (FileTimestampEntity):
def remove(self):
try:
os.rmdir(self.name)
except OSError:
pass
@pickleable
class FilePartChecksumEntity (FileEntityBase):
__slots__ = ('offset',)
def __new__(cls, name, signature=NotImplemented, tags=None, offset=0):
self = super(FilePartChecksumEntity, cls).__new__(cls,
name,
signature,
tags=tags)
self.offset = offset
return self
def __getnewargs__(self):
tags = self.tags
if not tags:
tags = None
return self.name, self.signature, tags, self.offset
def get_signature(self):
return _get_file_checksum(self.name, self.offset)
def get_actual(self):
signature = self.get_signature()
if self.signature == signature:
return self
other = super(FileEntityBase, self).__new__(self.__class__,
self.name,
signature,
self.tags)
other.id = self.id
other.offset = self.offset
return other
    def __eq__(self, other):
        return (super(FilePartChecksumEntity, self).__eq__(other) and
                (self.offset == other.offset))
@event_debug
def event_exec_cmd(settings, cmd, cwd, env):
if settings.trace_exec:
cmd = ' '.join(cmd)
log_debug("CWD: '%s', CMD: '%s'", cwd, cmd)
def _get_trace_arg(entity, brief):
if isinstance(entity, FileEntityBase):
value = entity.get()
if brief:
value = os.path.basename(value)
else:
if isinstance(entity, FilePath):
value = entity
if brief:
value = os.path.basename(value)
else:
if isinstance(entity, EntityBase):
value = to_string(entity.get())
else:
value = to_string(entity)
value = value.strip()
max_len = 64 if brief else 256
src_len = len(value)
if src_len > max_len:
value = "%s...%s" % (value[:max_len // 2],
value[src_len - (max_len // 2):])
value = value.replace('\r', '')
value = value.replace('\n', ' ')
return value
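The truncation performed above keeps the head and tail of an overlong value and flattens line breaks; a minimal sketch of just that step (the name `sketch_shorten` is hypothetical):

```python
def sketch_shorten(value, max_len):
    # Keep the head and tail of an overlong argument, joined by '...'.
    src_len = len(value)
    if src_len > max_len:
        value = "%s...%s" % (value[:max_len // 2],
                             value[src_len - (max_len // 2):])
    # Flatten line breaks so the trace stays on one line.
    return value.replace('\r', '').replace('\n', ' ')
```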
def _join_args(entities, brief):
args = [_get_trace_arg(arg, brief) for arg in to_sequence(entities)]
if not brief or (len(args) < 3):
return ' '.join(args)
wish_size = 128
args_str = [args.pop(0)]
last = args.pop()
size = len(args_str[0]) + len(last)
for arg in args:
size += len(arg)
if size > wish_size:
args_str.append('...')
break
args_str.append(arg)
args_str.append(last)
return ' '.join(args_str)
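The abbreviation logic of `_join_args` can be shown as a standalone sketch over plain strings: brief traces always keep the first and last arguments and insert a single `'...'` once a size budget is exceeded (the name `sketch_join_args` is hypothetical):

```python
def sketch_join_args(args, brief, wish_size=128):
    # Non-brief traces (or very short lists) are joined verbatim.
    args = list(args)
    if not brief or len(args) < 3:
        return ' '.join(args)
    # Brief traces keep the first and last arguments and insert a
    # single '...' once the running size exceeds the budget.
    out = [args.pop(0)]
    last = args.pop()
    size = len(out[0]) + len(last)
    for arg in args:
        size += len(arg)
        if size > wish_size:
            out.append('...')
            break
        out.append(arg)
    out.append(last)
    return ' '.join(out)
```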
def _get_trace_str(name, sources, targets, brief):
name = _join_args(name, brief)
sources = _join_args(sources, brief)
targets = _join_args(targets, brief)
build_str = name
if sources:
build_str += " << " + sources
if targets:
build_str += " >> " + targets
return build_str
def _make_build_path(path_dir, _path_cache=set()):
if path_dir not in _path_cache:
if not os.path.isdir(path_dir):
try:
os.makedirs(path_dir)
except OSError as e:
if e.errno != errno.EEXIST:
raise
_path_cache.add(path_dir)
def _make_build_paths(dirnames):
for dirname in dirnames:
_make_build_path(dirname)
def _split_filename_ext(filename, ext, replace_ext):
if ext:
if filename.endswith(ext):
return filename[:-len(ext)], ext
if not replace_ext:
return filename, ext
ext_pos = filename.rfind(os.path.extsep)
if ext_pos > 0:
if not ext:
ext = filename[ext_pos:]
filename = filename[:ext_pos]
return filename, ext
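The extension-splitting rules above can be sketched with a self-contained version that assumes `'.'` as the extension separator (`sketch_split_ext` is a hypothetical name, mirroring `_split_filename_ext`):

```python
def sketch_split_ext(filename, ext=None, replace_ext=False):
    # A known extension that matches is always stripped.
    if ext and filename.endswith(ext):
        return filename[:-len(ext)], ext
    # A known, non-matching extension is kept unless replacement is requested.
    if ext and not replace_ext:
        return filename, ext
    pos = filename.rfind('.')
    if pos > 0:  # pos == 0 means a dot-file, not an extension
        if not ext:
            ext = filename[pos:]
        filename = filename[:pos]
    return filename, ext
```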
def _split_file_name(file_path,
ext=None,
prefix=None,
suffix=None,
replace_ext=False
):
if isinstance(file_path, EntityBase):
file_path = file_path.get()
dirname, filename = os.path.split(file_path)
filename, ext = _split_filename_ext(filename, ext, replace_ext)
if prefix:
filename = prefix + filename
if suffix:
filename += suffix
if ext:
filename += ext
return dirname, filename
def _split_file_names(file_paths,
ext=None,
prefix=None,
suffix=None,
replace_ext=False):
dirnames = []
filenames = []
for file_path in file_paths:
dirname, filename = _split_file_name(file_path, ext, prefix, suffix,
replace_ext)
dirnames.append(dirname)
filenames.append(filename)
return dirnames, filenames
def _get_file_signature_type(file_signature_type):
if file_signature_type == 'timestamp':
return FileTimestampEntity
return FileChecksumEntity
class BuilderInitiator(object):
__slots__ = ('is_initiated', 'builder', 'options', 'args', 'kw')
def __init__(self, builder, options, args, kw):
self.is_initiated = False
self.builder = builder
self.options = options
self.args = self.__store_args(args)
self.kw = self.__store_kw(kw)
def __store_args(self, args):
return tuple(map(self.options._store_value, args))
def __load_args(self):
return tuple(map(self.options._load_value, self.args))
def __store_kw(self, kw):
store_value = self.options._store_value
return dict((name, store_value(value)) for name, value in kw.items())
def __load_kw(self):
load_value = self.options._load_value
return dict((name, load_value(value))
for name, value in self.kw.items())
def initiate(self):
if self.is_initiated:
return self.builder
builder = self.builder
kw = self.__load_kw()
args = self.__load_args()
options = self.options
builder._init_attrs(options)
builder.__init__(options, *args, **kw)
if not hasattr(builder, 'name'):
builder.set_name()
if not hasattr(builder, 'signature'):
builder.set_signature()
self.is_initiated = True
return builder
def can_build_batch(self):
return self.builder.can_build_batch()
def can_build(self):
return self.builder.can_build()
def is_batch(self):
return self.builder.is_batch()
class Builder (object):
"""
Base class for all builders
'name' - uniquely identifies builder
'signature' - uniquely identifies builder's parameters
"""
NAME_ATTRS = None
SIGNATURE_ATTRS = None
def __new__(cls, options, *args, **kw):
self = super(Builder, cls).__new__(cls)
return BuilderInitiator(self, options, args, kw)
def _init_attrs(self, options):
self.build_dir = options.build_dir.get()
self.build_path = options.build_path.get()
self.relative_build_paths = options.relative_build_paths.get()
if options.file_signature == 'timestamp':
self.use_timestamp = True
self.file_entity_type = FileTimestampEntity
else:
self.use_timestamp = False
self.file_entity_type = FileChecksumEntity
self.env = options.env.get()
        is_batch = ((options.batch_build.get() or not self.can_build()) and
                    self.can_build_batch())
self.__is_batch = is_batch
if is_batch:
self.batch_groups = options.batch_groups.get()
self.batch_size = options.batch_size.get()
def can_build_batch(self):
return self.__class__.build_batch != Builder.build_batch
def can_build(self):
return self.__class__.build != Builder.build
def is_batch(self):
return self.__is_batch
def initiate(self):
return self
def set_name(self):
cls = self.__class__
name = [cls.__module__,
cls.__name__,
simplify_value(self.build_path),
bool(self.relative_build_paths)]
if self.NAME_ATTRS:
name.extend(simplify_value(getattr(self, attr_name))
for attr_name in self.NAME_ATTRS)
self.name = simple_object_signature(name)
def set_signature(self):
sign = []
if self.SIGNATURE_ATTRS:
sign.extend(simplify_value(getattr(self, attr_name))
for attr_name in self.SIGNATURE_ATTRS)
self.signature = simple_object_signature(sign)
    def check_actual(self, target_entities):
        """
        Checks that the previous target entities are still up to date.
        It is called only if all other checks were successful.
        Returns None if all targets are actual,
        otherwise returns the first non-actual target.
        :param target_entities: Previous target entities
        """
        for entity in target_entities:
            if not entity.is_actual():
                return entity
        return None
def clear(self, target_entities, side_effect_entities):
for entity in target_entities:
entity.remove()
for entity in side_effect_entities:
entity.remove()
def depends(self, options, source_entities):
"""
Could be used to dynamically generate dependency nodes
Returns list of dependency nodes or None
"""
return None
def replace(self, options, source_entities):
"""
Could be used to dynamically replace sources
Returns list of nodes/entities or None (if sources are not changed)
"""
return None
def split(self, source_entities):
"""
Could be used to dynamically split building sources to several nodes
Returns list of groups of source entities or None
"""
return None
def split_single(self, source_entities):
"""
Implementation of split for splitting one-by-one
"""
return source_entities
def split_batch(self, source_entities):
"""
Implementation of split for splitting to batch groups of batch size
"""
return group_items(source_entities, self.batch_groups, self.batch_size)
def split_batch_by_build_dir(self, source_entities):
"""
Implementation of split for grouping sources by output
"""
num_groups = self.batch_groups
group_size = self.batch_size
if self.relative_build_paths:
path_getter = operator.methodcaller('get')
groups = group_paths_by_dir(source_entities,
num_groups,
group_size,
path_getter=path_getter)
else:
groups = group_items(source_entities, num_groups, group_size)
return groups
def get_weight(self, source_entities):
return len(source_entities)
def build(self, source_entities, targets):
"""
Builds a node
Returns a build output string or None
"""
raise NotImplementedError(
"Abstract method. It should be implemented in a child class.")
    def build_batch(self, source_entities, targets):
        """
        Builds a batch group of source entities
        Returns a build output string or None
        """
        raise NotImplementedError(
            "Abstract method. It should be implemented in a child class.")
def get_target_entities(self, source_entities):
"""
If it's possible returns target entities of the node, otherwise None
"""
return None
def get_trace_name(self, source_entities, brief):
return self.__class__.__name__
def get_trace_sources(self, source_entities, brief):
return source_entities
def get_trace_targets(self, target_entities, brief):
return target_entities
def get_trace(self,
source_entities=None,
target_entities=None,
brief=False):
try:
name = self.get_trace_name(source_entities, brief)
except Exception:
name = ''
try:
sources = self.get_trace_sources(source_entities, brief)
except Exception:
sources = None
try:
if (target_entities is None) and source_entities:
target_entities = self.get_target_entities(source_entities)
targets = self.get_trace_targets(target_entities, brief)
except Exception:
targets = None
return _get_trace_str(name, sources, targets, brief)
def get_build_dir(self):
_make_build_path(self.build_dir)
return self.build_dir
def get_build_path(self):
_make_build_path(self.build_path)
return self.build_path
def get_target_path(self, target, ext=None, prefix=None):
target_dir, name = _split_file_name(target,
prefix=prefix,
ext=ext,
replace_ext=False)
if target_dir.startswith((os.path.curdir, os.path.pardir)):
target_dir = os.path.abspath(target_dir)
elif not os.path.isabs(target_dir):
target_dir = os.path.abspath(os.path.join(self.build_path,
target_dir))
_make_build_path(target_dir)
target = os.path.join(target_dir, name)
return target
@staticmethod
def makedirs(path):
_make_build_path(path)
def get_target_dir(self, target_dir):
target_dir, name = os.path.split(target_dir)
if not name:
target_dir, name = os.path.split(target_dir)
elif not target_dir and name in (os.path.curdir, os.path.pardir):
target_dir = name
name = ''
if target_dir.startswith((os.path.curdir, os.path.pardir)):
target_dir = os.path.abspath(target_dir)
elif not os.path.isabs(target_dir):
target_dir = os.path.abspath(os.path.join(self.build_path,
target_dir))
target_dir = os.path.join(target_dir, name)
_make_build_path(target_dir)
return target_dir
def get_source_target_path(self,
file_path,
ext=None,
prefix=None,
suffix=None,
replace_ext=True):
build_path = self.build_path
dirname, filename = _split_file_name(file_path,
ext=ext,
prefix=prefix,
suffix=suffix,
replace_ext=replace_ext)
if self.relative_build_paths:
build_path = relative_join(build_path, dirname)
_make_build_path(build_path)
build_path = os.path.join(build_path, filename)
return build_path
def get_source_target_paths(self,
file_paths,
ext=None,
prefix=None,
suffix=None,
replace_ext=True):
build_path = self.build_path
dirnames, filenames = _split_file_names(file_paths,
ext=ext,
prefix=prefix,
suffix=suffix,
replace_ext=replace_ext)
if self.relative_build_paths:
dirnames = relative_join_list(build_path, dirnames)
_make_build_paths(dirnames)
build_paths = [os.path.join(dirname, filename)
for dirname, filename in zip(dirnames, filenames)]
else:
_make_build_path(build_path)
build_paths = [
os.path.join(build_path, filename) for filename in filenames]
return build_paths
def make_entity(self, value, tags=None):
if isinstance(value, FilePath):
return self.make_file_entity(name=value, tags=tags)
return SimpleEntity(value, tags=tags)
def make_simple_entity(self, value, tags=None):
return SimpleEntity(value, tags=tags)
def make_file_entity(self, value, tags=None):
return self.file_entity_type(name=value, tags=tags)
def make_file_entities(self, entities, tags=None):
make_file_entity = self.make_file_entity
for entity in to_sequence(entities):
if isinstance(entity, EntityBase):
yield entity
else:
yield make_file_entity(entity, tags)
def make_entities(self, entities, tags=None):
make_entity = self.make_entity
for entity in to_sequence(entities):
if isinstance(entity, EntityBase):
yield entity
else:
yield make_entity(entity, tags)
def exec_cmd(self, cmd, cwd=None, env=None, file_flag=None, stdin=None):
result = self.exec_cmd_result(
cmd, cwd=cwd, env=env, file_flag=file_flag, stdin=stdin)
if result.failed():
raise result
return result.output()
def exec_cmd_result(self,
cmd,
cwd=None,
env=None,
file_flag=None,
stdin=None):
if env is None:
env = self.env
if cwd is None:
cwd = self.get_build_path()
result = execute_command(
cmd, cwd=cwd, env=env, file_flag=file_flag, stdin=stdin)
event_exec_cmd(cmd, cwd, env)
return result
class FileBuilder (Builder):
make_entity = Builder.make_file_entity
class ErrorNodeDependencyInvalid(Exception):
def __init__(self, dep):
msg = "Invalid node dependency: %s" % (dep,)
super(ErrorNodeDependencyInvalid, self).__init__(msg)
class ErrorNodeSplitUnknownSource(Exception):
def __init__(self, node, entity):
msg = "Node '%s' can't be split to unknown source entity: %s" % (
node.get_build_str(brief=False), entity)
super(ErrorNodeSplitUnknownSource, self).__init__(msg)
class ErrorNoTargets(AttributeError):
def __init__(self, node):
msg = "Node targets are not built or set yet: %s" % (node,)
super(ErrorNoTargets, self).__init__(msg)
class ErrorNoSrcTargets(Exception):
def __init__(self, node, src_entity):
msg = "Source '%s' targets are not built or set yet: %s" % (
src_entity.get(), node)
super(ErrorNoSrcTargets, self).__init__(msg)
class ErrorUnactualEntity(Exception):
def __init__(self, node_entity, entity):
        msg = "Target entity is not actual: %s (%s), node: %s" % (
            entity.name, type(entity), node_entity.get_build_str())
super(ErrorUnactualEntity, self).__init__(msg)
class ErrorNodeUnknownSource(Exception):
def __init__(self, src_entity):
msg = "Unknown source entity: %s (%s)" % (src_entity, type(src_entity))
super(ErrorNodeUnknownSource, self).__init__(msg)
@event_status
def event_node_rebuild_reason(settings, reason):
if isinstance(reason, NodeRebuildReason):
msg = reason.get_message(settings.brief)
else:
msg = str(reason)
log_debug(msg)
class NodeRebuildReason (Exception):
__slots__ = (
'builder',
'sources',
)
def __init__(self, node_entity):
self.builder = node_entity.builder
self.sources = node_entity.source_entities
def get_node_name(self, brief):
return self.builder.get_trace(self.sources, brief=brief)
def __str__(self):
return self.get_message(False)
def get_message(self, brief):
node_name = self.get_node_name(brief)
description = self.get_description(brief)
return "%s\nRebuilding the node: %s" % (description, node_name)
def get_description(self, brief):
        return "Node's state has changed."
class NodeRebuildReasonAlways (NodeRebuildReason):
def get_description(self, brief):
return "Node is marked to rebuild always."
class NodeRebuildReasonNew (NodeRebuildReason):
def get_description(self, brief):
return "Node's previous state has not been found."
class NodeRebuildReasonSignature (NodeRebuildReason):
def get_description(self, brief):
        return ("Node's signature has been changed "
                "(sources, builder parameters or dependencies were changed).")
class NodeRebuildReasonNoTargets (NodeRebuildReason):
def get_description(self, brief):
return "Unknown Node's targets."
class NodeRebuildReasonImplicitDep (NodeRebuildReason):
__slots__ = (
'entity',
)
def __init__(self, node_entity, idep_entity=None):
super(NodeRebuildReasonImplicitDep, self).__init__(node_entity)
self.entity = idep_entity
def get_description(self, brief):
dep = (" '%s'" % self.entity) if self.entity is not None else ""
        return "Node's implicit dependency%s has changed." % (dep,)
class NodeRebuildReasonTarget (NodeRebuildReason):
__slots__ = (
'entity',
)
def __init__(self, node_entity, target_entity):
super(NodeRebuildReasonTarget, self).__init__(node_entity)
self.entity = target_entity
def get_description(self, brief):
return "Node's target '%s' has changed." % (self.entity,)
@pickleable
class NodeEntity (EntityBase):
__slots__ = (
'name',
'signature',
'builder',
'source_entities',
'dep_entities',
'target_entities',
'itarget_entities',
'idep_entities',
'idep_keys',
)
def __new__(cls,
name=NotImplemented,
signature=NotImplemented,
targets=None,
itargets=None,
idep_keys=None,
builder=None,
source_entities=None,
dep_entities=None):
self = super(NodeEntity, cls).__new__(cls, name, signature)
if targets is not None:
self.target_entities = targets
self.itarget_entities = itargets
self.idep_keys = idep_keys
else:
self.builder = builder
self.source_entities = source_entities
self.dep_entities = dep_entities
return self
def get(self):
return self.name
def __getnewargs__(self):
return (self.name,
self.signature,
self.target_entities,
self.itarget_entities,
self.idep_keys)
def get_targets(self):
builder = self.builder
targets = builder.get_target_entities(self.source_entities)
if not targets:
return ()
return tuple(builder.make_entities(targets))
def get_name(self):
hash_sum = new_hash(self.builder.name)
name_entities = self.target_entities
if not name_entities:
name_entities = self.source_entities
names = sorted(entity.id for entity in name_entities)
for name in names:
hash_sum.update(name)
return hash_sum.digest()
def get_signature(self):
builder_signature = self.builder.signature
if builder_signature is None:
return None
hash_sum = new_hash(builder_signature)
for entity in self.dep_entities:
ent_sign = entity.signature
if not ent_sign:
return None
hash_sum.update(entity.id)
hash_sum.update(ent_sign)
for entity in self.source_entities:
entity_signature = entity.signature
if entity_signature is None:
return None
hash_sum.update(entity_signature)
return hash_sum.digest()
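The signature scheme above (hash the builder signature, then each dependency's id and signature, then each source signature, with any missing signature invalidating the whole node) can be sketched standalone; `sketch_node_signature` and the plain-tuple inputs are illustrative assumptions:

```python
import hashlib

def sketch_node_signature(builder_sig, deps, sources):
    # deps: iterable of (id, signature) byte pairs; sources: signatures only.
    if builder_sig is None:
        return None
    h = hashlib.md5(builder_sig)
    for dep_id, dep_sig in deps:
        if not dep_sig:
            return None  # an unsigned dependency invalidates the node
        h.update(dep_id)
        h.update(dep_sig)
    for src_sig in sources:
        if src_sig is None:
            return None  # an unsigned source invalidates the node
        h.update(src_sig)
    return h.digest()
```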
def get_build_str(self):
try:
targets = getattr(self, 'target_entities', None)
return self.builder.get_trace(self.source_entities, targets)
except Exception as ex:
log_error(ex)
return str(self) # Can't do much, show as a raw pointer
def __getattr__(self, attr):
if attr == 'target_entities':
self.target_entities = targets = self.get_targets()
return targets
return super(NodeEntity, self).__getattr__(attr)
_ACTUAL_IDEPS_CACHE = {}
def _get_ideps(self, vfile, idep_keys,
ideps_cache_get=_ACTUAL_IDEPS_CACHE.__getitem__,
ideps_cache_set=_ACTUAL_IDEPS_CACHE.__setitem__):
entities = vfile.find_entities_by_key(idep_keys)
if entities is None:
raise NodeRebuildReasonImplicitDep(self)
for i, entity in enumerate(entities):
entity_id = entity.id
try:
entities[i] = ideps_cache_get(entity_id)
except KeyError:
actual_entity = entity.get_actual()
ideps_cache_set(entity_id, actual_entity)
if entity is not actual_entity:
vfile.update_entity(actual_entity)
raise NodeRebuildReasonImplicitDep(self, entity)
return entities
def _save_ideps(self, vfile,
_actual_ideps_cache_set=_ACTUAL_IDEPS_CACHE.setdefault):
entities = []
for entity in self.idep_entities:
entity_id = entity.id
cached_entity = _actual_ideps_cache_set(entity_id, entity)
if cached_entity is entity:
if entity.signature is None:
raise ErrorUnactualEntity(self, entity)
entities.append(cached_entity)
keys = vfile.add_entities(entities)
self.idep_entities = entities
self.idep_keys = keys
def check_actual(self, vfile, explain):
try:
previous = vfile.find_node_entity(self)
if previous is None:
raise NodeRebuildReasonNew(self)
if not self.signature:
raise NodeRebuildReasonAlways(self)
if self.signature != previous.signature:
raise NodeRebuildReasonSignature(self)
ideps = self._get_ideps(vfile, previous.idep_keys)
target_entities = previous.target_entities
if target_entities is None:
raise NodeRebuildReasonNoTargets(self)
unactual_target = self.builder.check_actual(target_entities)
if unactual_target is not None:
raise NodeRebuildReasonTarget(self, unactual_target)
except NodeRebuildReason as reason:
if explain:
event_node_rebuild_reason(reason)
return False
self.target_entities = target_entities
self.itarget_entities = previous.itarget_entities
self.idep_entities = ideps
return True
def save(self, vfile):
for entity in self.target_entities:
if entity.signature is None:
raise ErrorUnactualEntity(self, entity)
self._save_ideps(vfile)
vfile.add_node_entity(self)
def clear(self, vfile):
"""
Clear produced target entities
"""
self.idep_entities = tuple()
node_entity = vfile.find_node_entity(self)
if node_entity is None:
self.itarget_entities = tuple()
else:
targets = node_entity.target_entities
itargets = node_entity.itarget_entities
if targets:
self.target_entities = targets
else:
self.target_entities = tuple()
if itargets:
self.itarget_entities = itargets
else:
self.itarget_entities = tuple()
try:
self.builder.clear(self.target_entities, self.itarget_entities)
except Exception:
pass
def add_targets(self, values, tags=None):
self.target_entities.extend(
self.builder.make_entities(values, tags))
def add_target_files(self, values, tags=None):
self.target_entities.extend(
self.builder.make_file_entities(values, tags))
def add_target_entity(self, entity):
self.target_entities.append(entity)
def add_target_entities(self, entities):
self.target_entities.extend(entities)
def add_side_effects(self, entities, tags=None):
self.itarget_entities.extend(
self.builder.make_entities(entities, tags))
def add_side_effect_files(self, entities, tags=None):
self.itarget_entities.extend(
self.builder.make_file_entities(entities, tags))
def add_side_effect_entity(self, entity):
self.itarget_entities.append(entity)
def add_side_effect_entities(self, entities):
self.itarget_entities.extend(entities)
def add_implicit_deps(self, entities, tags=None):
self.idep_entities.extend(
self.builder.make_entities(entities, tags))
def add_implicit_dep_files(self, entities, tags=None):
self.idep_entities.extend(
self.builder.make_file_entities(entities, tags))
def add_implicit_dep_entity(self, entity):
self.idep_entities.append(entity)
def add_implicit_dep_entities(self, entities):
self.idep_entities.extend(entities)
class _NodeBatchTargets (object):
def __init__(self, node_entities_map):
self.node_entities_map = node_entities_map
def __getitem__(self, source):
try:
return self.node_entities_map[source]
except KeyError:
raise ErrorNodeUnknownSource(source)
class NodeFilter (object):
__slots__ = (
'node',
'node_attribute',
)
def __init__(self, node, node_attribute='target_entities'):
self.node = node
self.node_attribute = node_attribute
def get_node(self):
node = self.node
while isinstance(node, NodeFilter):
node = node.node
return node
def __iter__(self):
raise TypeError()
def __getitem__(self, item):
return NodeIndexFilter(self, item)
def get(self):
entities = self.get_entities()
if len(entities) == 1:
return entities[0]
return entities
def get_entities(self):
node = self.node
if isinstance(node, NodeFilter):
entities = node.get_entities()
else:
entities = getattr(node, self.node_attribute)
return entities
class NodeTagsFilter(NodeFilter):
__slots__ = (
'tags',
)
def __init__(self, node, tags, node_attribute='target_entities'):
super(NodeTagsFilter, self).__init__(node, node_attribute)
self.tags = frozenset(to_sequence(tags))
def get_entities(self):
entities = super(NodeTagsFilter, self).get_entities()
tags = self.tags
return tuple(entity for entity in entities
if entity.tags and (entity.tags & tags))
class NodeIndexFilter(NodeFilter):
__slots__ = (
'index',
)
def __init__(self, node, index, node_attribute='target_entities'):
super(NodeIndexFilter, self).__init__(node, node_attribute)
self.index = index
def get_entities(self):
entities = super(NodeIndexFilter, self).get_entities()
try:
return to_sequence(entities[self.index])
except IndexError:
return tuple()
class NodeDirNameFilter(NodeFilter):
def get_entities(self):
entities = super(NodeDirNameFilter, self).get_entities()
return tuple(SimpleEntity(os.path.dirname(entity.get()))
for entity in entities)
class NodeBaseNameFilter(NodeFilter):
def get_entities(self):
entities = super(NodeBaseNameFilter, self).get_entities()
return tuple(SimpleEntity(os.path.basename(entity.get()))
for entity in entities)
class Node (object):
__slots__ = (
'builder',
'options',
'cwd',
'initiated',
'depends_called',
'replace_called',
'split_called',
'check_actual',
'node_entities',
'node_entities_map',
'sources',
'source_entities',
'dep_nodes',
'dep_entities',
'target_entities',
'itarget_entities',
'idep_entities',
)
def __init__(self, builder, sources, cwd=None):
self.builder = builder
self.options = getattr(builder, 'options', None)
if cwd is None:
self.cwd = os.path.abspath(os.getcwd())
else:
self.cwd = cwd
self.initiated = False
self.depends_called = False
self.replace_called = False
self.split_called = False
self.check_actual = self._not_actual
self.sources = tuple(to_sequence(sources))
self.dep_nodes = set()
self.dep_entities = []
def shrink(self):
self.cwd = None
self.dep_nodes = None
self.sources = None
self.node_entities = None
self.node_entities_map = None
self.dep_entities = None
self.check_actual = None
self.builder = None
self.options = None
def skip(self):
self.shrink()
self.initiated = True
self.depends_called = True
self.replace_called = True
self.split_called = True
self.target_entities = self.itarget_entities = self.idep_entities = tuple()
def depends(self, dependencies):
dep_nodes = self.dep_nodes
dep_entities = self.dep_entities
for entity in to_sequence(dependencies):
if isinstance(entity, Node):
dep_nodes.add(entity)
elif isinstance(entity, NodeFilter):
dep_nodes.add(entity.get_node())
elif isinstance(entity, EntityBase):
dep_entities.append(entity)
else:
raise ErrorNodeDependencyInvalid(entity)
def __getattr__(self, attr):
if attr in ['target_entities', 'itarget_entities', 'idep_entities']:
raise ErrorNoTargets(self)
        raise AttributeError("Node has no attribute '%s'" % (attr,))
def _set_source_entities(self):
entities = []
make_entity = self.builder.make_entity
for src in self.sources:
if isinstance(src, Node):
entities += src.target_entities
elif isinstance(src, NodeFilter):
entities += src.get_entities()
elif isinstance(src, EntityBase):
entities.append(src)
else:
entity = make_entity(src)
entities.append(entity)
self.sources = None
self.source_entities = entities
def _update_dep_entities(self):
dep_nodes = self.dep_nodes
if not dep_nodes:
return
dep_entities = self.dep_entities
for node in dep_nodes:
target_entities = node.target_entities
if target_entities:
dep_entities.extend(target_entities)
dep_nodes.clear()
dep_entities.sort(key=operator.attrgetter('id'))
def initiate(self, chdir=os.chdir):
if self.initiated:
if self.sources:
self._set_source_entities()
else:
chdir(self.cwd)
self.builder = self.builder.initiate()
self._set_source_entities()
self._update_dep_entities()
self.initiated = True
def build_depends(self, chdir=os.chdir):
if self.depends_called:
return None
self.depends_called = True
chdir(self.cwd)
nodes = self.builder.depends(self.options, self.source_entities)
return nodes
def build_replace(self, chdir=os.chdir):
if self.replace_called:
return None
self.replace_called = True
chdir(self.cwd)
sources = self.builder.replace(self.options, self.source_entities)
if sources is None:
return None
self.sources = tuple(to_sequence(sources))
return self.get_source_nodes()
def _split_batch(self, vfile, explain):
builder = self.builder
dep_entities = self.dep_entities
node_entities = []
not_actual_nodes = {}
not_actual_sources = []
for src in self.source_entities:
node_entity = NodeEntity(builder=builder,
source_entities=(src,),
dep_entities=dep_entities)
if not node_entity.check_actual(vfile, explain):
not_actual_nodes[src] = node_entity
not_actual_sources.append(src)
node_entities.append(node_entity)
self.node_entities = node_entities
if not not_actual_nodes:
return None
groups = builder.split_batch(not_actual_sources)
if not groups:
groups = not_actual_sources
split_nodes = []
for group in groups:
group = tuple(to_sequence(group))
node_entities = tuple(not_actual_nodes[src] for src in group)
node = self._split(group, node_entities)
node.node_entities_map = not_actual_nodes
split_nodes.append(node)
return split_nodes
def build_split(self, vfile, explain):
if self.split_called:
return None
self.split_called = True
self.check_actual = self._split_actual
builder = self.builder
dep_entities = self.dep_entities
if builder.is_batch():
return self._split_batch(vfile, explain)
sources = self.source_entities
groups = self.builder.split(sources)
if (not groups) or (len(groups) < 2):
node_entity = NodeEntity(builder=builder,
source_entities=sources,
dep_entities=dep_entities)
if not node_entity.check_actual(vfile, explain):
self.check_actual = self._not_actual
self.node_entities = (node_entity,)
return None
node_entities = []
split_nodes = []
for group in groups:
group = to_sequence(group)
node_entity = NodeEntity(builder=builder,
source_entities=group,
dep_entities=dep_entities)
if not node_entity.check_actual(vfile, explain):
node = self._split(group, (node_entity,))
split_nodes.append(node)
node_entities.append(node_entity)
self.node_entities = node_entities
return split_nodes
def _split(self, source_entities, node_entities):
other = object.__new__(self.__class__)
other.builder = self.builder
other.dep_nodes = ()
other.sources = ()
other.source_entities = source_entities
other.node_entities = node_entities
other.initiated = True
other.depends_called = True
other.replace_called = True
other.split_called = True
other.check_actual = self._not_actual
return other
def prebuild(self):
dep_nodes = self.build_depends()
if dep_nodes:
return dep_nodes
source_nodes = self.build_replace()
return source_nodes
def _reset_targets(self):
for node_entity in self.node_entities:
node_entity.target_entities = []
node_entity.itarget_entities = []
node_entity.idep_entities = []
def _populate_targets(self):
node_entities = self.node_entities
if len(node_entities) == 1:
node_entity = node_entities[0]
self.target_entities = node_entity.target_entities
self.itarget_entities = node_entity.itarget_entities
self.idep_entities = node_entity.idep_entities
else:
targets = []
itargets = []
ideps = []
for node_entity in node_entities:
targets += node_entity.target_entities
itargets += node_entity.itarget_entities
ideps += node_entity.idep_entities
self.target_entities = targets
self.itarget_entities = itargets
self.idep_entities = ideps
def _check_actual(self, vfile, explain):
for node_entity in self.node_entities:
if not node_entity.check_actual(vfile, explain):
return False
self._populate_targets()
self.check_actual = self._actual
return True
def _split_actual(self, vfile, explain):
self._populate_targets()
self.check_actual = self._actual
return True
@staticmethod
def _actual(vfile, explain):
return True
@staticmethod
def _not_actual(vfile, explain):
return False
def recheck_actual(self):
self.check_actual = self._check_actual
def build(self):
builder = self.builder
self._reset_targets()
if builder.is_batch():
targets = _NodeBatchTargets(self.node_entities_map)
output = builder.build_batch(self.source_entities, targets)
else:
targets = self.node_entities
output = builder.build(self.source_entities, targets[0])
self._populate_targets()
return output
def save(self, vfile):
for node_entity in self.node_entities:
node_entity.save(vfile)
def save_failed(self, vfile):
node_entities = self.node_entities
if len(node_entities) < 2:
return
for node_entity in node_entities:
if node_entity.target_entities:
node_entity.save(vfile)
def _clear_split(self):
builder = self.builder
source_entities = self.source_entities
if builder.is_batch():
groups = source_entities
else:
groups = self.builder.split(source_entities)
if not groups:
groups = [source_entities]
node_entities = []
for group in groups:
group = to_sequence(group)
node_entity = NodeEntity(builder=builder,
source_entities=group,
dep_entities=())
node_entities.append(node_entity)
self.node_entities = node_entities
def clear(self, vfile):
self._clear_split()
node_entities = []
for node_entity in self.node_entities:
node_entity.clear(vfile)
node_entities.append(node_entity)
self._populate_targets()
return node_entities
def get_weight(self):
return self.builder.get_weight(self.source_entities)
def get_names(self):
return (entity.name for entity in self.node_entities)
def get_names_and_signatures(self):
return ((entity.name, entity.signature)
for entity in self.node_entities)
def get_dep_nodes(self):
return self.dep_nodes
def get_sources(self):
return tuple(src.get() for src in self.get_source_entities())
def get_source_entities(self):
return self.source_entities
def get_source_nodes(self):
nodes = []
for src in self.sources:
if isinstance(src, Node):
nodes.append(src)
elif isinstance(src, NodeFilter):
nodes.append(src.get_node())
return nodes
def is_built(self):
return self.builder is None
def at(self, tags):
return NodeTagsFilter(self, tags)
def __iter__(self):
raise TypeError()
def __getitem__(self, item):
return NodeIndexFilter(self, item)
def __filter(self, node_attribute, tags):
if tags is None:
return NodeFilter(self, node_attribute)
return NodeTagsFilter(self, tags, node_attribute)
def filter_sources(self, tags=None):
return self.__filter('source_entities', tags)
def filter_side_effects(self, tags=None):
return self.__filter('itarget_entities', tags)
def filter_implicit_dependencies(self, tags=None):
return self.__filter('idep_entities', tags)
def filter_dependencies(self, tags=None):
return self.__filter('dep_entities', tags)
def get(self):
targets = self.get_target_entities()
if len(targets) == 1:
return targets[0].get()
return tuple(target.get() for target in targets)
def get_target_entities(self):
return self.target_entities
def get_side_effect_entities(self):
return self.itarget_entities
def get_build_str(self, brief=True):
try:
targets = getattr(self, 'target_entities', None)
return self.builder.get_trace(self.source_entities, targets, brief)
except Exception as ex:
if 'BuilderInitiator' not in str(ex):
raise
return str(self) # show as a raw pointer
def print_sources(self): # noqa
result = []
sources = self.sources
if not sources:
sources = self.source_entities
for src in sources:
if isinstance(src, EntityBase):
result.append(src.get())
elif isinstance(src, Node):
targets = getattr(src, 'target_entities', None)
if targets is not None:
result += (target.get() for target in targets)
else:
result.append(src)
elif isinstance(src, NodeFilter):
try:
targets = src.get_entities()
except AttributeError:
continue
if targets is not None:
result += (target.get() for target in targets)
else:
result.append(src)
else:
result.append(src)
sources_str = ', '.join(map(str, result))
log_info("node '%s' sources: %s", self, sources_str)
def print_targets(self):
targets = [t.get() for t in getattr(self, 'target_entities', [])]
log_info("node '%s' targets: %s", self, targets)
@event_warning
def event_build_target_twice(settings, entity, node1):
    log_warning("Target '%s' is built twice. The last time it was built by: '%s'",
                entity.name, node1.get_build_str(settings.brief))
@event_error
def event_failed_node(settings, node, error):
msg = node.get_build_str(settings.brief)
msg += '\n\n%s\n' % (error,)
log_error(msg)
@event_status
def event_node_building(settings, node):
pass
@event_status
def event_node_building_finished(settings, node, builder_output, progress):
msg = node.get_build_str(settings.brief)
if settings.with_output and builder_output:
msg += '\n'
if builder_output:
msg += builder_output
msg += '\n'
msg = "(%s) %s" % (progress, msg)
log_info(msg)
@event_status
def event_node_building_failed(settings, node, error):
pass
@event_status
def event_node_removed(settings, node, progress):
msg = node.get_build_str(settings.brief)
if msg:
log_info("(%s) Removed: %s", progress, msg)
class ErrorNodeDependencyCyclic(Exception):
def __init__(self, node, deps):
msg = "Node '%s' (%s) has a cyclic dependency: %s" % (
node, node.get_build_str(True), deps)
super(ErrorNodeDependencyCyclic, self).__init__(msg)
class ErrorNodeUnknown(Exception):
def __init__(self, node):
msg = "Unknown node '%s'" % (node, )
super(ErrorNodeUnknown, self).__init__(msg)
class ErrorNodeSignatureDifferent(Exception):
def __init__(self, node, other_node):
        msg = ("Two similar nodes have different signatures "
               "(sources, builder parameters or dependencies): [%s], [%s]" %
               (node.get_build_str(brief=False),
                other_node.get_build_str(brief=False)))
super(ErrorNodeSignatureDifferent, self).__init__(msg)
class ErrorNodeDuplicateNames(Exception):
def __init__(self, node):
msg = "Batch node has duplicate targets: %s" % (node.get_build_str(brief=False))
super(ErrorNodeDuplicateNames, self).__init__(msg)
class ErrorNodeDependencyUnknown(Exception):
def __init__(self, node, dep_node):
msg = "Unable to add dependency to node '%s' from node '%s'" % (
node, dep_node)
super(ErrorNodeDependencyUnknown, self).__init__(msg)
class InternalErrorRemoveNonTailNode(Exception):
def __init__(self, node):
msg = "Removing non-tail node: %s" % (node,)
super(InternalErrorRemoveNonTailNode, self).__init__(msg)
class InternalErrorRemoveUnknownTailNode(Exception):
def __init__(self, node):
        msg = "Remove unknown tail node: %s" % (node,)
super(InternalErrorRemoveUnknownTailNode, self).__init__(msg)
class _NodesTree (object):
__slots__ = (
'node2deps',
'dep2nodes',
'tail_nodes',
)
def __init__(self):
self.node2deps = {}
self.dep2nodes = {}
self.tail_nodes = set()
def __len__(self):
return len(self.node2deps)
def get_nodes(self):
return frozenset(self.node2deps)
def __has_cycle(self, node, new_deps):
if node in new_deps:
return True
deps = set(new_deps)
node2deps = self.node2deps
while deps:
dep = deps.pop()
dep_deps = node2deps[dep]
if node in dep_deps:
return True
deps |= dep_deps
return False
def _depends(self, node, deps):
node2deps = self.node2deps
dep2nodes = self.dep2nodes
try:
current_node_deps = node2deps[node]
deps = set(dep for dep in deps if not dep.is_built())
new_deps = deps - current_node_deps
if not new_deps:
return
if self.__has_cycle(node, new_deps):
raise ErrorNodeDependencyCyclic(node, new_deps)
self.tail_nodes.discard(node)
current_node_deps.update(new_deps)
for dep in new_deps:
dep2nodes[dep].add(node)
except KeyError as dep_node:
raise ErrorNodeDependencyUnknown(node, dep_node.args[0])
def add(self, nodes):
for node in nodes:
if node not in self.node2deps:
self.node2deps[node] = set()
self.dep2nodes[node] = set()
self.tail_nodes.add(node)
node_srcnodes = node.get_source_nodes()
node_depnodes = node.get_dep_nodes()
self.add(node_srcnodes)
self.add(node_depnodes)
self._depends(node, node_srcnodes)
self._depends(node, node_depnodes)
def depends(self, node, deps):
self.add(deps)
self._depends(node, deps)
def remove_tail(self, node):
node2deps = self.node2deps
try:
deps = node2deps.pop(node)
if deps:
raise InternalErrorRemoveNonTailNode(node)
except KeyError as ex:
raise InternalErrorRemoveUnknownTailNode(ex.args[0])
tail_nodes = self.tail_nodes
for dep in self.dep2nodes.pop(node):
d = node2deps[dep]
d.remove(node)
if not d:
tail_nodes.add(dep)
def filter_unknown_deps(self, deps):
return [dep for dep in deps if dep in self.node2deps]
def pop_tails(self):
tails = self.tail_nodes
self.tail_nodes = set()
return tails
def __get_all_nodes(self, nodes):
nodes = set(nodes)
all_nodes = set(nodes)
node2deps = self.node2deps
while nodes:
node = nodes.pop()
try:
deps = node2deps[node] - all_nodes
            except KeyError as ex:
                raise ErrorNodeUnknown(ex.args[0])
all_nodes.update(deps)
nodes.update(deps)
return all_nodes
def shrink_to(self, nodes):
node2deps = self.node2deps
dep2nodes = self.dep2nodes
ignore_nodes = set(node2deps) - self.__get_all_nodes(nodes)
self.tail_nodes -= ignore_nodes
for node in ignore_nodes:
del node2deps[node]
del dep2nodes[node]
for dep_nodes in dep2nodes.values():
dep_nodes.difference_update(ignore_nodes)
def self_test(self):
if set(self.node2deps) != set(self.dep2nodes):
raise AssertionError("Not all deps are added")
all_dep_nodes = set()
for node in self.dep2nodes:
if node not in self.node2deps:
                raise AssertionError("Missing node: %s" % (node,))
node_deps = self.node2deps[node]
if node_deps:
if node in self.tail_nodes:
raise AssertionError("Invalid tail node: %s" % (node,))
all_dep_nodes |= node_deps
for dep in node_deps:
if node not in self.dep2nodes[dep]:
raise AssertionError(
"node not in self.dep2nodes[dep]: "
"dep: %s, node: %s" % (dep, node))
if all_dep_nodes - set(self.dep2nodes):
raise AssertionError("Not all deps are added")
class _VFiles(object):
__slots__ = (
'names',
'handles',
'use_sqlite',
'force_lock',
)
def __init__(self, use_sqlite=False, force_lock=False):
self.handles = {}
self.names = {}
self.use_sqlite = use_sqlite
self.force_lock = force_lock
def __iter__(self):
raise TypeError()
def __getitem__(self, builder):
builder_name = builder.name
try:
vfilename = self.names[builder_name]
except KeyError:
vfilename = os.path.join(builder.get_build_dir(), '.aql.db')
self.names[builder_name] = vfilename
try:
return self.handles[vfilename]
except KeyError:
vfile = EntitiesFile(
vfilename, use_sqlite=self.use_sqlite, force=self.force_lock)
self.handles[vfilename] = vfile
return vfile
def close(self):
for vfile in self.handles.values():
vfile.close()
self.handles.clear()
self.names.clear()
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, backtrace):
self.close()
def _build_node(node):
event_node_building(node)
out = node.build()
if out:
try:
out = out.strip()
except Exception:
pass
return out
def _get_module_nodes(node, module_cache, node_cache):
try:
return module_cache[node]
except KeyError:
pass
result = set((node,))
try:
src_nodes = node_cache[node]
except KeyError:
node_cache[node] = src_nodes = frozenset(node.get_source_nodes())
for src in src_nodes:
result.update(_get_module_nodes(src, module_cache, node_cache))
module_cache[node] = result
return result
def _get_leaf_nodes(nodes, exclude_nodes, node_cache):
leafs = set()
for node in nodes:
if node_cache[node].issubset(exclude_nodes):
leafs.add(node)
return leafs
class _NodeLocker(object):
__slots__ = (
'node2deps',
'dep2nodes',
'locked_nodes',
'unlocked_nodes',
)
def __init__(self):
self.node2deps = {}
self.dep2nodes = {}
self.locked_nodes = {}
self.unlocked_nodes = []
def sync_modules(self, nodes, module_cache=None, node_cache=None):
if module_cache is None:
module_cache = {}
if node_cache is None:
node_cache = {}
for node1, node2 in itertools.product(nodes, nodes):
if node1 is not node2:
self.__add_modules(node1, node2, module_cache, node_cache)
def __add_modules(self, node1, node2, module_cache, node_cache):
node1_sources = _get_module_nodes(node1, module_cache, node_cache)
node2_sources = _get_module_nodes(node2, module_cache, node_cache)
common = node1_sources & node2_sources
node1_sources -= common
node2_sources -= common
leafs1 = _get_leaf_nodes(node1_sources, common, node_cache)
leafs2 = _get_leaf_nodes(node2_sources, common, node_cache)
for leaf in leafs1:
self.__add(leaf, node2_sources)
for leaf in leafs2:
self.__add(leaf, node1_sources)
def sync(self, nodes):
for node in nodes:
node_deps = self.__add(node, nodes)
node_deps.remove(node)
def __add(self, node, deps):
try:
node_set = self.node2deps[node]
except KeyError:
node_set = set()
self.node2deps[node] = node_set
node_set.update(deps)
for dep in deps:
if dep is not node:
try:
dep_set = self.dep2nodes[dep]
except KeyError:
dep_set = set()
self.dep2nodes[dep] = dep_set
dep_set.add(node)
return node_set
def lock(self, node):
deps = self.node2deps.get(node, None)
if not deps:
return True
locked_nodes = self.locked_nodes
for dep in deps:
if dep in locked_nodes:
locked_nodes[dep].add(node)
return False
self.locked_nodes[node] = set()
return True
def unlock(self, node):
deps = self.node2deps.pop(node, ())
nodes = self.dep2nodes.pop(node, ())
if not deps and not nodes:
return
for dep in deps:
self.dep2nodes[dep].remove(node)
for dep in nodes:
self.node2deps[dep].remove(node)
unlocked_nodes = self.locked_nodes.pop(node, None)
if not unlocked_nodes:
return
self.unlocked_nodes.extend(unlocked_nodes)
def pop_unlocked(self):
unlocked_nodes = self.unlocked_nodes
self.unlocked_nodes = []
return unlocked_nodes
def self_test(self): # noqa
for node, deps in self.node2deps.items():
if node in deps:
                raise AssertionError("Node depends on itself: %s" % (node,))
for dep in deps:
if node not in self.dep2nodes[dep]:
raise AssertionError(
"Dependency '%s' doesn't have node '%s'" %
(dep, node,))
for node, deps in self.locked_nodes.items():
for dep in deps:
if node not in self.node2deps[dep]:
raise AssertionError(
                        "Locked node %s doesn't actually depend on node %s" %
(dep, node))
if dep in self.unlocked_nodes:
                    raise AssertionError(
                        "Blocked node %s is actually unlocked" % (dep,))
for node in self.unlocked_nodes:
if node not in self.node2deps:
raise AssertionError("Unknown unlocked node %s" % (node,))
class _NodesBuilder (object):
__slots__ = (
'vfiles',
'build_manager',
'task_manager',
'building_nodes',
'expensive_nodes',
)
def __init__(self, build_manager,
jobs=0, keep_going=False, with_backtrace=True,
use_sqlite=False, force_lock=False):
self.vfiles = _VFiles(use_sqlite=use_sqlite, force_lock=force_lock)
self.building_nodes = {}
self.expensive_nodes = set(build_manager._expensive_nodes)
self.build_manager = build_manager
tm = TaskManager()
if self.expensive_nodes:
tm.enable_expensive()
if not keep_going:
tm.disable_keep_going()
if not with_backtrace:
tm.disable_backtrace()
tm.start(jobs)
self.task_manager = tm
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, backtrace):
self.close()
def _add_building_node(self, node):
conflicting_nodes = []
building_nodes = self.building_nodes
node_names = {}
for name, signature in node.get_names_and_signatures():
other = building_nodes.get(name, None)
if other is None:
if name in node_names:
raise ErrorNodeDuplicateNames(node)
node_names[name] = (node, signature)
continue
other_node, other_signature = other
if node is other_node:
continue
if other_signature != signature:
raise ErrorNodeSignatureDifferent(node, other_node)
conflicting_nodes.append(other_node)
if conflicting_nodes:
node.recheck_actual()
self.build_manager.depends(node, conflicting_nodes)
return False
building_nodes.update(node_names)
return True
def _remove_building_node(self, node):
building_nodes = self.building_nodes
for name in node.get_names():
del building_nodes[name]
def is_building(self):
return bool(self.building_nodes)
def add_build_task(self, node):
if node in self.expensive_nodes:
self.task_manager.add_expensive_task(node, _build_node, node)
else:
task_priority = -node.get_weight() # less is higher
self.task_manager.add_task(task_priority, node, _build_node, node)
def build_node(self, node):
build_manager = self.build_manager
explain = build_manager.explain
if not build_manager.lock_node(node):
return False
if build_manager.skip_node(node):
return True
node.initiate()
vfile = self.vfiles[node.builder]
        prebuilt_nodes = node.prebuild()
        if prebuilt_nodes:
            build_manager.depends(node, prebuilt_nodes)
return True
split_nodes = node.build_split(vfile, explain)
if split_nodes:
if node in self.expensive_nodes:
self.expensive_nodes.update(split_nodes)
build_manager.depends(node, split_nodes)
for split_node in split_nodes:
self._add_building_node(split_node)
return True
if node.check_actual(vfile, explain):
build_manager.actual_node(node)
return True
if not self._add_building_node(node):
return False
self.add_build_task(node)
return False
def build(self, nodes):
node_tree_changed = False
for node in nodes:
if self.build_node(node):
node_tree_changed = True
if len(self.building_nodes) > 10:
if self._get_finished_nodes(block=False):
node_tree_changed = True
return self._get_finished_nodes(block=not node_tree_changed)
def _get_finished_nodes(self, block=True):
finished_tasks = self.task_manager.get_finished_tasks(block=block)
vfiles = self.vfiles
build_manager = self.build_manager
for task in finished_tasks:
node = task.task_id
error = task.error
self._remove_building_node(node)
vfile = vfiles[node.builder]
if error is None:
node.save(vfile)
build_manager.completed_node(node, task.result)
else:
node.save_failed(vfile)
build_manager.failed_node(node, error)
return finished_tasks or not block
def clear(self, nodes):
vfiles = self.vfiles
build_manager = self.build_manager
remove_entities = {}
for node in nodes:
node.initiate()
            prebuilt_nodes = node.prebuild()
            if prebuilt_nodes:
                build_manager.depends(node, prebuilt_nodes)
continue
vfile = vfiles[node.builder]
node_entities = node.clear(vfile)
remove_entities.setdefault(vfile, []).extend(node_entities)
build_manager.removed_node(node)
for vfile, entities in remove_entities.items():
vfile.remove_node_entities(entities)
def close(self):
try:
self.task_manager.stop()
self._get_finished_nodes(block=False)
finally:
self.vfiles.close()
class _NodeCondition (object):
__slots__ = (
'value',
'op',
)
def __init__(self, condition, result=True):
self.value = condition
self.op = operator.truth if result else operator.__not__
def get_node(self):
value = self.value
if isinstance(value, Node):
return value
elif isinstance(value, NodeFilter):
return value.get_node()
return None
def get(self):
value = simplify_value(self.value)
return self.op(value)
def __bool__(self):
return self.get()
def __nonzero__(self):
return self.get()
class BuildManager (object):
__slots__ = (
'_nodes',
'_built_targets',
'_failed_nodes',
'_node_locker',
'_module_cache',
'_node_cache',
'_node_conditions',
'_expensive_nodes',
'completed',
'actual',
'skipped',
'explain',
)
def __init__(self):
self._nodes = _NodesTree()
self._node_locker = None
self._node_conditions = {}
self._expensive_nodes = set()
self.__reset()
def __reset(self, explain=False):
self._built_targets = {}
self._failed_nodes = {}
self._module_cache = {}
self._node_cache = {}
self.completed = 0
self.actual = 0
self.skipped = 0
self.explain = explain
def add(self, nodes):
self._nodes.add(nodes)
def depends(self, node, deps):
self._nodes.depends(node, deps)
def build_if(self, condition, nodes):
if not isinstance(condition, _NodeCondition):
condition = _NodeCondition(condition)
cond_node = condition.get_node()
if cond_node is not None:
cond_node = (cond_node,)
depends = self._nodes.depends
set_node_condition = self._node_conditions.__setitem__
for node in to_sequence(nodes):
if cond_node is not None:
depends(node, cond_node)
set_node_condition(node, condition)
def skip_if(self, condition, nodes):
self.build_if(_NodeCondition(condition, False), nodes)
def expensive(self, nodes):
self._expensive_nodes.update(to_sequence(nodes))
def module_depends(self, node, deps):
module_cache = self._module_cache
node_cache = self._node_cache
module_nodes = _get_module_nodes(node, module_cache, node_cache)
for dep in deps:
dep_nodes = _get_module_nodes(dep, module_cache, node_cache)
common = module_nodes & dep_nodes
only_module_nodes = module_nodes - common
leafs = _get_leaf_nodes(only_module_nodes, common, node_cache)
for leaf in leafs:
self._nodes.depends(leaf, (dep,))
def sync(self, nodes, deep=False):
node_locker = self._node_locker
if node_locker is None:
self._node_locker = node_locker = _NodeLocker()
if deep:
node_locker.sync_modules(
nodes, self._module_cache, self._node_cache)
else:
node_locker.sync(nodes)
def lock_node(self, node):
node_locker = self._node_locker
if node_locker is None:
return True
return node_locker.lock(node)
def unlock_node(self, node):
node_locker = self._node_locker
if node_locker is not None:
node_locker.unlock(node)
def __len__(self):
return len(self._nodes)
def self_test(self):
self._nodes.self_test()
if self._node_locker is not None:
self._node_locker.self_test()
def get_next_nodes(self):
tails = self._nodes.pop_tails()
if not tails:
node_locker = self._node_locker
if node_locker is not None:
return node_locker.pop_unlocked()
return tails
def skip_node(self, node):
cond = self._node_conditions.pop(node, None)
if (cond is None) or cond:
return False
self.unlock_node(node)
self._nodes.remove_tail(node)
self.skipped += 1
node.skip()
return True
def actual_node(self, node):
self.unlock_node(node)
self._nodes.remove_tail(node)
self.actual += 1
node.shrink()
def completed_node(self, node, builder_output):
self._check_already_built(node)
self.unlock_node(node)
self._nodes.remove_tail(node)
self.completed += 1
event_node_building_finished(node, builder_output,
self.get_progress_str())
node.shrink()
def failed_node(self, node, error):
self.unlock_node(node)
self._failed_nodes[node] = error
event_node_building_failed(node, error)
def removed_node(self, node):
self._nodes.remove_tail(node)
self.completed += 1
event_node_removed(node, self.get_progress_str())
node.shrink()
def get_progress_str(self):
done = self.completed + self.actual + self.skipped
total = len(self._nodes) + done
processed = done + len(self._failed_nodes)
progress = "%s/%s" % (processed, total)
return progress
def close(self):
self._nodes = _NodesTree()
self._node_locker = None
self._node_conditions = {}
def _check_already_built(self, node):
entities = node.get_target_entities()
built_targets = self._built_targets
for entity in entities:
entity_sign = entity.signature
other_entity_sign = built_targets.setdefault(
entity.id, entity_sign)
if other_entity_sign != entity_sign:
event_build_target_twice(entity, node)
def shrink(self, nodes):
if not nodes:
return
self._nodes.shrink_to(nodes)
def get_nodes(self):
return self._nodes.get_nodes()
def build(self, jobs, keep_going, nodes=None, explain=False,
with_backtrace=True, use_sqlite=False, force_lock=False):
self.__reset(explain=explain)
self.shrink(nodes)
with _NodesBuilder(self,
jobs,
keep_going,
with_backtrace,
use_sqlite=use_sqlite,
force_lock=force_lock) as nodes_builder:
while True:
tails = self.get_next_nodes()
if not tails and not nodes_builder.is_building():
break
if not nodes_builder.build(tails):
break
return self.is_ok()
def is_ok(self):
return not bool(self._failed_nodes)
def fails_count(self):
return len(self._failed_nodes)
def print_fails(self):
for node, error in self._failed_nodes.items():
event_failed_node(node, error)
def print_build_state(self):
log_info("Failed nodes: %s", len(self._failed_nodes))
log_info("Skipped nodes: %s", self.skipped)
log_info("Completed nodes: %s", self.completed)
log_info("Actual nodes: %s", self.actual)
def clear(self, nodes=None, use_sqlite=False, force_lock=False):
self.__reset()
self.shrink(nodes)
with _NodesBuilder(self,
use_sqlite=use_sqlite,
force_lock=force_lock) as nodes_builder:
while True:
tails = self.get_next_nodes()
if not tails:
break
nodes_builder.clear(tails)
class WriteFileBuilder (Builder):
NAME_ATTRS = ['target']
def __init__(self, options, target, binary=False, encoding=None):
self.binary = binary
self.encoding = encoding
self.target = self.get_target_path(target)
def build(self, source_entities, targets):
target = self.target
with open_file(target,
write=True,
binary=self.binary,
encoding=self.encoding) as f:
f.truncate()
for src in source_entities:
src = src.get()
if self.binary:
if is_unicode(src):
src = encode_str(src, self.encoding)
else:
if isinstance(src, (bytearray, bytes)):
src = decode_bytes(src, self.encoding)
f.write(src)
targets.add_target_files(target)
def get_trace_name(self, source_entities, brief):
return "Writing content"
def get_target_entities(self, source_entities):
return self.target
class CopyFilesBuilder (FileBuilder):
NAME_ATTRS = ['target']
SIGNATURE_ATTRS = ['basedir']
def __init__(self, options, target, basedir=None):
self.target = self.get_target_dir(target)
sep = os.path.sep
self.basedir = tuple(os.path.normcase(os.path.normpath(basedir)) + sep
for basedir in to_sequence(basedir))
def __get_dst(self, file_path):
for basedir in self.basedir:
if file_path.startswith(basedir):
filename = file_path[len(basedir):]
dirname, filename = os.path.split(filename)
dst_dir = os.path.join(self.target, dirname)
return os.path.join(dst_dir, filename)
filename = os.path.basename(file_path)
return os.path.join(self.target, filename)
def build_batch(self, source_entities, targets):
for src_entity in source_entities:
src = src_entity.get()
dst = self.__get_dst(src)
self.makedirs(os.path.dirname(dst))
shutil.copyfile(src, dst)
shutil.copymode(src, dst)
targets[src_entity].add_targets(dst)
def get_trace_name(self, source_entities, brief):
return "Copy files"
def get_target_entities(self, source_entities):
get_dst = self.__get_dst
return (get_dst(src.get()) for src in source_entities)
class CopyFileAsBuilder (FileBuilder):
NAME_ATTRS = ['target']
def __init__(self, options, target):
self.target = self.get_target_path(target)
def build(self, source_entities, targets):
source = source_entities[0].get()
target = self.target
shutil.copyfile(source, target)
shutil.copymode(source, target)
targets.add_targets(target)
def get_trace_name(self, source_entities, brief):
return "Copy file"
def get_target_entities(self, source_entities):
return self.target
def _get_method_full_name(m):
full_name = []
mod = getattr(m, '__module__', None)
if mod:
full_name.append(mod)
name = getattr(m, '__qualname__', None)
if name:
full_name.append(name)
else:
cls = getattr(m, 'im_class', None)
if cls is not None:
cls_name = getattr(cls, '__name__', None)
if cls_name:
full_name.append(cls_name)
name = getattr(m, '__name__', None)
if name:
full_name.append(name)
return '.'.join(full_name)
class ExecuteMethodBuilder (Builder):
NAME_ATTRS = ('method_name',)
SIGNATURE_ATTRS = ('args', 'kw')
def __init__(self,
options,
method,
args,
kw,
single,
make_files,
clear_targets):
self.method_name = _get_method_full_name(method)
self.method = method
self.args = args if args else []
self.kw = kw if kw else {}
if not clear_targets:
self.clear = lambda target_entities, side_effect_entities: None
if single:
self.split = self.split_single
if make_files:
self.make_entity = self.make_file_entity
def build(self, source_entities, targets):
return self.method(self, source_entities, targets,
*self.args, **self.kw)
def get_trace_name(self, source_entities, brief):
name = self.method.__doc__
if name:
name = name.strip().split('\n')[0].strip()
if not name:
name = self.method_name
if not brief:
args = ''
if self.args:
args = ','.join(self.args)
if self.kw:
if args:
args += ','
args += ','.join("%s=%s" % (k, v) for k, v in self.kw.items())
if args:
return "%s(%s)" % (name, args)
return name
class ErrorDistCommandInvalid(Exception):
def __init__(self, command):
msg = "distutils command '%s' is not supported" % (command,)
super(ErrorDistCommandInvalid, self).__init__(msg)
class DistBuilder (FileBuilder):
NAME_ATTRS = ('target', 'command', 'formats')
SIGNATURE_ATTRS = ('script_args', )
def __init__(self, options, command, args, target):
target = self.get_target_dir(target)
script_args = [command]
if command.startswith('bdist'):
temp_dir = self.get_build_path()
script_args += ['--bdist-base', temp_dir]
elif command != 'sdist':
raise ErrorDistCommandInvalid(command)
args = self._get_args(args)
script_args += args
formats = self._get_formats(args, command)
script_args += ['--dist-dir', target]
self.command = command
self.target = target
self.script_args = script_args
self.formats = formats
@staticmethod
def _get_args(args):
if args:
return args.split() if is_string(args) else to_sequence(args)
return tuple()
@staticmethod
def _get_formats(args, command):
if not command.startswith('bdist'):
return None
formats = set()
for arg in args:
if arg.startswith('--formats='):
v = arg[len('--formats='):].split(',')
formats.update(v)
elif arg.startswith('--plat-name='):
v = arg[len('--plat-name='):]
formats.add(v)
return formats
def get_trace_name(self, source_entities, brief):
return "distutils %s" % ' '.join(self.script_args)
def build(self, source_entities, targets):
script = source_entities[0].get()
cmd = [sys.executable, script]
cmd += self.script_args
script_dir = os.path.dirname(script)
out = self.exec_cmd(cmd, script_dir)
return out
class ExecuteCommandBuilder (Builder):
NAME_ATTRS = ('targets', 'cwd')
def __init__(self, options, target=None, target_flag=None, cwd=None):
self.targets = tuple(map(self.get_target_path, to_sequence(target)))
self.target_flag = target_flag
if cwd:
cwd = self.get_target_dir(cwd)
self.cwd = cwd
def _get_cmd_targets(self):
targets = self.targets
prefix = self.target_flag
if not prefix:
return tuple(targets)
prefix = prefix.lstrip()
if not prefix:
return tuple(targets)
rprefix = prefix.rstrip()
# a trailing space in the flag means each target is passed as a
# separate argument after the flag, instead of being glued to it
if prefix != rprefix:
return tuple(itertools.chain(*((rprefix, target)
for target in targets)))
return tuple("%s%s" % (prefix, target) for target in targets)
def build(self, source_entities, targets):
cmd = tuple(src.get() for src in source_entities)
cmd_targets = self._get_cmd_targets()
if cmd_targets:
cmd += cmd_targets
out = self.exec_cmd(cmd, cwd=self.cwd)
targets.add_target_files(self.targets)
return out
def get_target_entities(self, source_entities):
return self.targets
def get_trace_name(self, source_entities, brief):
try:
return source_entities[0]
except Exception:
return self.__class__.__name__
def get_trace_sources(self, source_entities, brief):
return source_entities[1:]
class ErrorBatchBuildCustomExt(Exception):
def __init__(self, trace, ext):
msg = "Custom extension '%s' is not supported " "in batch building of node: %s" % (ext, trace)
super(ErrorBatchBuildCustomExt, self).__init__(msg)
class ErrorBatchBuildWithPrefix(Exception):
def __init__(self, trace, prefix):
msg = "Filename prefix '%s' is not supported " "in batch building of node: %s" % (prefix, trace)
super(ErrorBatchBuildWithPrefix, self).__init__(msg)
class ErrorBatchBuildWithSuffix(Exception):
def __init__(self, trace, suffix):
msg = "Filename suffix '%s' is not supported " "in batch building of node: %s" % (suffix, trace)
super(ErrorBatchBuildWithSuffix, self).__init__(msg)
class ErrorBatchCompileWithCustomTarget(Exception):
def __init__(self, trace, target):
msg = "Explicit output target '%s' is not supported " "in batch building of node: %s" % (target, trace)
super(ErrorBatchCompileWithCustomTarget, self).__init__(msg)
class ErrorCompileWithCustomTarget(Exception):
def __init__(self, trace, target):
msg = "Compile several source files using " "the same target '%s' is not supported: %s" % (target, trace)
super(ErrorCompileWithCustomTarget, self).__init__(msg)
def _add_prefix(prefix, values):
prefix = prefix.lstrip()
if not prefix:
return values
# a trailing space in the prefix means each value is passed as a
# separate argument after the prefix flag
if prefix[-1] == ' ':
prefix = prefix.rstrip()
return tuple(itertools.chain(*itertools.product((prefix,), values)))
return tuple("%s%s" % (prefix, value) for value in values)
def _add_ixes(prefix, suffix, values):
prefix = prefix.lstrip()
suffix = suffix.strip()
sep_prefix = prefix and (prefix[-1] == ' ')
result = []
for value in values:
value = "%s%s" % (value, suffix)
if prefix:
if sep_prefix:
result += [prefix, value]
else:
result.append("%s%s" % (prefix, value))
else:
result.append(value)
return result
def _preprocessor_options(options):
options.cppdefines = ListOptionType(
description="C/C++ preprocessor defines",
unique=True,
separators=None)
options.defines = options.cppdefines
options.cppdefines_prefix = StrOptionType(
description="Flag for C/C++ preprocessor defines.", is_hidden=True)
options.cppdefines_flags = ListOptionType(separators=None)
options.cppdefines_flags += SimpleOperation(_add_prefix,
options.cppdefines_prefix,
options.cppdefines)
options.cpppath_flags = ListOptionType(separators=None)
options.cpppath_prefix = StrOptionType(
description="Flag for C/C++ preprocessor paths.",
is_hidden=True)
options.cpppath = ListOptionType(
description="C/C++ preprocessor paths to headers",
value_type=AbsPathOptionType(),
unique=True,
separators=None)
options.include = options.cpppath
options.cpppath_flags = SimpleOperation(_add_prefix,
options.cpppath_prefix,
options.cpppath)
options.api_cpppath = ListOptionType(
value_type=AbsPathOptionType(),
unique=True,
description="C/C++ preprocessor paths to API headers",
separators=None)
options.cpppath_flags += SimpleOperation(_add_prefix,
options.cpppath_prefix,
options.api_cpppath)
options.ext_cpppath = ListOptionType(
value_type=AbsPathOptionType(),
unique=True,
description="C/C++ preprocessor path to external headers",
separators=None)
options.ext_include = options.ext_cpppath
options.cpppath_flags += SimpleOperation(_add_prefix,
options.cpppath_prefix,
options.ext_cpppath)
options.sys_cpppath = ListOptionType(
value_type=AbsPathOptionType(),
description="C/C++ preprocessor path to standard headers",
separators=None)
def _compiler_options(options):
options.language = EnumOptionType(values=[('c++', 'cpp'), 'c'],
default='c++',
description='Current language',
is_hidden=True)
options.pic = BoolOptionType(
description="Generate position-independent code.", default=True)
options.objsuffix = StrOptionType(
description="Object file suffix.", is_hidden=True)
options.cxxflags = ListOptionType(
description="C++ compiler flags", separators=None)
options.cflags = ListOptionType(
description="C compiler flags", separators=None)
options.ccflags = ListOptionType(
description="Common C/C++ compiler flags", separators=None)
options.occflags = ListOptionType(
description="Common C/C++ compiler optimization flags",
separators=None)
options.cc = AbsPathOptionType(description="C/C++ compiler program")
options.cc_name = StrOptionType(is_tool_key=True,
ignore_case=True,
description="C/C++ compiler name")
options.cc_ver = VersionOptionType(is_tool_key=True,
description="C/C++ compiler version")
options.cc_cmd = ListOptionType(separators=None,
description="C/C++ compiler full command",
is_hidden=True)
options.cc_cmd = options.cc
options.If().language.eq('c++').cc_cmd += options.cxxflags
options.If().language.eq('c').cc_cmd += options.cflags
options.cc_cmd += options.ccflags + options.occflags + options.cppdefines_flags + options.cpppath_flags
options.cxxstd = EnumOptionType(values=['default',
('c++98', 'c++03'),
('c++11', 'c++0x'),
('c++14', 'c++1y')],
default='default',
description='C++ language standard.')
def _resource_compiler_options(options):
options.rc = AbsPathOptionType(
description="C/C++ resource compiler program")
options.ressuffix = StrOptionType(
description="Compiled resource file suffix.",
is_hidden=True)
options.rcflags = ListOptionType(
description="C/C++ resource compiler flags",
separators=None)
options.rc_cmd = ListOptionType(
description="C/C++ resource compiler full command",
separators=None,
is_hidden=True)
options.rc_cmd = [options.rc] + options.rcflags + options.cppdefines_flags + options.cpppath_flags
def _linker_options(options):
options.libprefix = StrOptionType(
description="Static library archiver prefix.", is_hidden=True)
options.libsuffix = StrOptionType(
description="Static library archiver suffix.", is_hidden=True)
options.libflags = ListOptionType(
description="Static library archiver flags", separators=None)
options.olibflags = ListOptionType(
description="Static library archiver optimization flags",
separators=None)
options.lib = AbsPathOptionType(
description="Static library archiver program")
options.lib_cmd = ListOptionType(
description="Static library archiver full command",
separators=None,
is_hidden=True)
options.lib_cmd = [options.lib] + options.libflags + options.olibflags
options.shlibprefix = StrOptionType(description="Shared library prefix.",
is_hidden=True)
options.shlibsuffix = StrOptionType(description="Shared library suffix.",
is_hidden=True)
options.libpath = ListOptionType(value_type=AbsPathOptionType(),
description="Paths to external libraries",
unique=True,
separators=None)
options.libpath_prefix = StrOptionType(
description="Flag for library paths.",
is_hidden=True)
options.libpath_flags = ListOptionType(separators=None)
options.libpath_flags = SimpleOperation(_add_prefix,
options.libpath_prefix,
options.libpath)
options.libs = ListOptionType(value_type=StrOptionType(),
description="Linking external libraries",
unique=True,
separators=None)
options.libs_prefix = StrOptionType(
description="Prefix flag for libraries.", is_hidden=True)
options.libs_suffix = StrOptionType(
description="Suffix flag for libraries.", is_hidden=True)
options.libs_flags = ListOptionType(separators=None)
options.libs_flags = SimpleOperation(_add_ixes,
options.libs_prefix,
options.libs_suffix,
options.libs)
options.progsuffix = StrOptionType(
description="Program suffix.", is_hidden=True)
options.linkflags = ListOptionType(
description="Linker flags", separators=None)
options.olinkflags = ListOptionType(
description="Linker optimization flags", separators=None)
options.link = AbsPathOptionType(description="Linker program")
options.link_cmd = ListOptionType(description="Linker full command",
separators=None,
is_hidden=True)
options.link_cmd = [options.link] + options.linkflags + options.olinkflags + options.libpath_flags + options.libs_flags
def _get_cpp_options():
options = Options()
_preprocessor_options(options)
_compiler_options(options)
_resource_compiler_options(options)
_linker_options(options)
return options
def _get_res_options():
options = Options()
_preprocessor_options(options)
_resource_compiler_options(options)
return options
class HeaderChecker (Builder):
SIGNATURE_ATTRS = ('cpppath', )
def __init__(self, options):
cpppath = list(options.cpppath.get())
cpppath += options.ext_cpppath.get()
cpppath += options.sys_cpppath.get()
self.cpppath = cpppath
def build(self, source_entities, targets):
has_headers = True
cpppath = self.cpppath
for header in source_entities:
found = find_file_in_paths(cpppath, header.get())
if not found:
has_headers = False
break
targets.add_targets(has_headers)
class CommonCompiler (FileBuilder):
NAME_ATTRS = ('prefix', 'suffix', 'ext')
SIGNATURE_ATTRS = ('cmd', )
def __init__(self, options, ext, cmd):
self.prefix = options.prefix.get()
self.suffix = options.suffix.get()
self.ext = ext
self.cmd = list(cmd)
target = options.target.get()
if target:
self.target = self.get_target_path(target, self.ext, self.prefix)
else:
self.target = None
ext_cpppath = list(options.ext_cpppath.get())
ext_cpppath += options.sys_cpppath.get()
self.ext_cpppath = tuple(set(os.path.normcase(
os.path.abspath(folder)) + os.path.sep for folder in ext_cpppath))
def get_target_entities(self, source_values):
return self.get_obj_path(source_values[0].get())
def get_obj_path(self, source):
if self.target:
return self.target
return self.get_source_target_path(source,
ext=self.ext,
prefix=self.prefix,
suffix=self.suffix)
def get_default_obj_ext(self):
"""
Returns a default extension of output object files.
"""
raise NotImplementedError(
"Abstract method. It should be implemented in a child class.")
def check_batch_split(self, source_entities):
default_ext = self.get_default_obj_ext()
if self.ext != default_ext:
raise ErrorBatchBuildCustomExt(
self.get_trace(source_entities), self.ext)
if self.prefix:
raise ErrorBatchBuildWithPrefix(
self.get_trace(source_entities), self.prefix)
if self.suffix:
raise ErrorBatchBuildWithSuffix(
self.get_trace(source_entities), self.suffix)
if self.target:
raise ErrorBatchCompileWithCustomTarget(
self.get_trace(source_entities), self.target)
def split_batch(self, source_entities):
self.check_batch_split(source_entities)
return self.split_batch_by_build_dir(source_entities)
def split(self, source_entities):
if self.target and (len(source_entities) > 1):
raise ErrorCompileWithCustomTarget(
self.get_trace(source_entities), self.target)
return self.split_single(source_entities)
def get_trace_name(self, source_entities, brief):
if brief:
name = self.cmd[0]
name = os.path.splitext(os.path.basename(name))[0]
else:
name = ' '.join(self.cmd)
return name
class CommonCppCompiler (CommonCompiler):
def __init__(self, options):
super(CommonCppCompiler, self).__init__(options,
ext=options.objsuffix.get(),
cmd=options.cc_cmd.get())
class CommonResCompiler (CommonCompiler):
def __init__(self, options):
super(CommonResCompiler, self).__init__(options,
ext=options.ressuffix.get(),
cmd=options.rc_cmd.get())
class CommonCppLinkerBase(FileBuilder):
CPP_EXT = (".cc", ".cp", ".cxx", ".cpp", ".CPP", ".c++", ".C", ".c")
NAME_ATTRS = ('target', )
SIGNATURE_ATTRS = ('cmd', )
def __init__(self, options):
self.compilers = self.get_source_builders(options)
def get_cpp_exts(self, _cpp_ext=CPP_EXT):
return _cpp_ext
def get_res_exts(self):
return '.rc',
def make_compiler(self, options):
"""
It should return a builder of C/C++ compiler
"""
raise NotImplementedError(
"Abstract method. It should be implemented in a child class.")
def make_res_compiler(self, options):
"""
It should return a builder of C/C++ resource compiler
"""
raise NotImplementedError(
"Abstract method. It should be implemented in a child class.")
def add_source_builders(self, builders, exts, builder):
if builder:
for ext in exts:
builders[ext] = builder
def get_source_builders(self, options):
builders = {}
compiler = self.make_compiler(options)
self.add_source_builders(builders, self.get_cpp_exts(), compiler)
rc_compiler = self.make_res_compiler(options)
self.add_source_builders(builders, self.get_res_exts(), rc_compiler)
return builders
def replace(self, options, source_entities):
cwd = os.getcwd()
def _add_sources():
if current_builder is None:
new_sources.extend(current_sources)
return
src_node = Node(current_builder, current_sources, cwd)
new_sources.append(src_node)
new_sources = []
builders = self.compilers
current_builder = None
current_sources = []
for src_file in source_entities:
ext = os.path.splitext(src_file.get())[1]
builder = builders.get(ext, None)
if current_builder is builder:
current_sources.append(src_file)
else:
if current_sources:
_add_sources()
current_builder = builder
current_sources = [src_file]
if current_sources:
_add_sources()
return new_sources
def get_target_entities(self, source_values):
return self.target
def get_trace_name(self, source_entities, brief):
if brief:
name = self.cmd[0]
name = os.path.splitext(os.path.basename(name))[0]
else:
name = ' '.join(self.cmd)
return name
class CommonCppArchiver(CommonCppLinkerBase):
def __init__(self, options, target):
super(CommonCppArchiver, self).__init__(options)
prefix = options.libprefix.get()
ext = options.libsuffix.get()
self.target = self.get_target_path(target, ext=ext, prefix=prefix)
self.cmd = options.lib_cmd.get()
self.shared = False
class CommonCppLinker(CommonCppLinkerBase):
def __init__(self, options, target, shared):
super(CommonCppLinker, self).__init__(options)
if shared:
prefix = options.shlibprefix.get()
ext = options.shlibsuffix.get()
else:
prefix = options.prefix.get()
ext = options.progsuffix.get()
self.target = self.get_target_path(target, prefix=prefix, ext=ext)
self.cmd = options.link_cmd.get()
self.shared = shared
def get_weight(self, source_entities):
return 2 * len(source_entities)
class ToolCommonCpp(Tool):
def __init__(self, options):
options.If().cc_name.is_true().build_dir_name += '_' + options.cc_name + '_' + options.cc_ver
self.Object = self.compile
self.Compile = self.compile
self.Resource = self.compile_resource
self.CompileResource = self.compile_resource
self.Library = self.link_static_library
self.LinkLibrary = self.link_static_library
self.LinkStaticLibrary = self.link_static_library
self.SharedLibrary = self.link_shared_library
self.LinkSharedLibrary = self.link_shared_library
self.Program = self.link_program
self.LinkProgram = self.link_program
@classmethod
def options(cls):
options = _get_cpp_options()
options.set_group("C/C++ compiler")
return options
def check_headers(self, options):
return HeaderChecker(options)
CheckHeaders = check_headers
def compile(self, options):
raise NotImplementedError(
"Abstract method. It should be implemented in a child class.")
def compile_resource(self, options):
raise NotImplementedError(
"Abstract method. It should be implemented in a child class.")
def link_static_library(self, options, target):
raise NotImplementedError(
"Abstract method. It should be implemented in a child class.")
def link_shared_library(self, options, target, def_file=None):
raise NotImplementedError(
"Abstract method. It should be implemented in a child class.")
def link_program(self, options, target):
raise NotImplementedError(
"Abstract method. It should be implemented in a child class.")
class ToolCommonRes(Tool):
def __init__(self, options):
self.Object = self.compile
self.Compile = self.compile
@classmethod
def options(cls):
options = _get_res_options()
options.set_group("C/C++ resource compiler")
return options
def compile(self, options):
"""
It should return a builder of C/C++ resource compiler
"""
raise NotImplementedError(
"Abstract method. It should be implemented in a child class.")
class ZipFilesBuilder (FileBuilder):
NAME_ATTRS = ['target']
SIGNATURE_ATTRS = ['rename', 'basedir']
def __init__(self, options, target, rename=None, basedir=None, ext=None):
if ext is None:
ext = ".zip"
self.target = self.get_target_path(target, ext=ext)
self.rename = tuple(to_sequence(rename))
sep = os.path.sep
self.basedir = tuple(os.path.normcase(os.path.normpath(basedir)) + sep
for basedir in to_sequence(basedir))
def __open_arch(self, large=False):
try:
return zipfile.ZipFile(self.target,
"w",
zipfile.ZIP_DEFLATED,
large)
except RuntimeError:
pass
return zipfile.ZipFile(self.target, "w", zipfile.ZIP_STORED, large)
def __get_arcname(self, file_path):
for arc_name, path in self.rename:
if file_path == path:
return arc_name
for basedir in self.basedir:
if file_path.startswith(basedir):
return file_path[len(basedir):]
return os.path.basename(file_path)
def __add_files(self, arch, source_entities):
for entity in source_entities:
if isinstance(entity, FileEntityBase):
filepath = entity.get()
arcname = self.__get_arcname(filepath)
arch.write(filepath, arcname)
else:
arcname = entity.name
data = entity.get()
if is_unicode(data):
data = encode_str(data)
arch.writestr(arcname, data)
def build(self, source_entities, targets):
target = self.target
arch = self.__open_arch()
try:
self.__add_files(arch, source_entities)
except zipfile.LargeZipFile:
arch.close()
arch = None
arch = self.__open_arch(large=True)
self.__add_files(arch, source_entities)
finally:
if arch is not None:
arch.close()
targets.add_targets(target)
def get_trace_name(self, source_entities, brief):
return "Create Zip"
def get_target_entities(self, source_entities):
return self.target
class TarFilesBuilder (FileBuilder):
NAME_ATTRS = ['target']
SIGNATURE_ATTRS = ['rename', 'basedir']
def __init__(self, options, target, mode, rename, basedir, ext):
if not mode:
mode = "w:bz2"
if not ext:
if mode == "w:bz2":
ext = ".tar.bz2"
elif mode == "w:gz":
ext = ".tar.gz"
elif mode == "w":
ext = ".tar"
self.target = self.get_target_path(target, ext)
self.mode = mode
self.rename = rename if rename else tuple()
self.basedir = os.path.normcase(
os.path.normpath(basedir)) if basedir else None
def __get_arcname(self, file_path):
for arc_name, path in self.rename:
if file_path == path:
return arc_name
basedir = self.basedir
if basedir:
if file_path.startswith(basedir):
return file_path[len(basedir):]
return os.path.basename(file_path)
def __add_file(self, arch, filepath):
arcname = self.__get_arcname(filepath)
arch.add(filepath, arcname)
@staticmethod
def __add_entity(arch, entity):
arcname = entity.name
data = entity.get()
if is_unicode(data):
data = encode_str(data)
tinfo = tarfile.TarInfo(arcname)
tinfo.size = len(data)
arch.addfile(tinfo, io.BytesIO(data))
def build(self, source_entities, targets):
target = self.target
arch = tarfile.open(name=self.target, mode=self.mode)
try:
for entity in source_entities:
if isinstance(entity, FileEntityBase):
self.__add_file(arch, entity.get())
else:
self.__add_entity(arch, entity)
finally:
arch.close()
targets.add_targets(target)
def get_trace_name(self, source_entities, brief):
return "Create Tar"
def get_target_entities(self, source_entities):
return self.target
class FindFilesBuilder (FileBuilder):
NAME_ATTRS = ['mask']
SIGNATURE_ATTRS = ['exclude_mask', 'exclude_subdir_mask']
def __init__(self, options, mask,
exclude_mask=None,
exclude_subdir_mask=None):
self.mask = mask
self.exclude_mask = exclude_mask
self.exclude_subdir_mask = exclude_subdir_mask
def make_entity(self, value, tags=None):
return DirEntity(name=value, tags=tags)
def build(self, source_entities, targets):
paths = [src.get() for src in source_entities]
args = {'paths': paths}
if self.mask is not None:
args['mask'] = self.mask
if self.exclude_mask is not None:
args['exclude_mask'] = self.exclude_mask
if self.exclude_subdir_mask is not None:
args['exclude_subdir_mask'] = self.exclude_subdir_mask
args['found_dirs'] = found_dirs = set()
files = find_files(**args)
targets.add_target_files(files)
found_dirs = map(DirEntity, found_dirs)
targets.add_implicit_dep_entities(found_dirs)
def get_trace_name(self, source_entities, brief):
trace = "Find files"
if self.mask:
trace += "(%s)" % self.mask
return trace
def check_actual(self, target_entities):
return None
def clear(self, target_entities, side_effect_entities):
pass
class InstallDistBuilder (FileBuilder):
NAME_ATTRS = ('user',)
def __init__(self, options, user):
self.user = user
def get_trace_name(self, source_entities, brief):
return "distutils install"
def build(self, source_entities, targets):
script = source_entities[0].get()
cmd = [sys.executable, script, "install"]
if self.user:
cmd.append("--user")
script_dir = os.path.dirname(script)
out = self.exec_cmd(cmd, script_dir)
return out
class BuiltinTool(Tool):
def execute_command(self, options,
target=None, target_flag=None, cwd=None):
return ExecuteCommandBuilder(options, target=target,
target_flag=target_flag, cwd=cwd)
ExecuteCommand = execute_command
Command = ExecuteCommand
def execute_method(self, options,
method, args=None, kw=None, single=True,
make_files=True, clear_targets=True):
return ExecuteMethodBuilder(options, method=method, args=args, kw=kw,
single=single, make_files=make_files,
clear_targets=clear_targets)
ExecuteMethod = execute_method
Method = ExecuteMethod
def find_files(self, options, mask=None,
exclude_mask=None, exclude_subdir_mask=None):
return FindFilesBuilder(options,
mask,
exclude_mask,
exclude_subdir_mask)
FindFiles = find_files
def copy_files(self, options, target, basedir=None):
return CopyFilesBuilder(options, target, basedir=basedir)
CopyFiles = copy_files
def copy_file_as(self, options, target):
return CopyFileAsBuilder(options, target)
CopyFileAs = copy_file_as
def write_file(self, options, target, binary=False, encoding=None):
return WriteFileBuilder(options, target,
binary=binary, encoding=encoding)
WriteFile = write_file
def create_dist(self, options, target, command, args=None):
return DistBuilder(options, target=target, command=command, args=args)
CreateDist = create_dist
def install_dist(self, options, user=True):
return InstallDistBuilder(options, user=user)
InstallDist = install_dist
def create_zip(self, options, target, rename=None, basedir=None, ext=None):
return ZipFilesBuilder(options, target=target, rename=rename,
basedir=basedir, ext=ext)
CreateZip = create_zip
def create_tar(self, options,
target, mode=None, rename=None, basedir=None, ext=None):
return TarFilesBuilder(options, target=target, mode=mode,
rename=rename, basedir=basedir, ext=ext)
CreateTar = create_tar
@event_warning
def event_tools_unable_load_module(settings, module, err):
log_warning("Unable to load module: %s, error: %s", module, err)
@event_warning
def event_tools_tool_failed(settings, ex, tool_info):
tool_class = tool_info.tool_class
module = tool_class.__module__
try:
filename = sys.modules[module].__file__
except Exception:
filename = module[module.rfind('.') + 1:] + '.py'
names = ','.join(tool_info.names)
log_error("Failed to initialize tool: name: %s, class: %s, file: %s",
names, tool_class.__name__, filename)
log_error(ex)
def _tool_setup_stub(cls, options):
pass
class ErrorToolInvalid(Exception):
def __init__(self, tool_class):
msg = "Invalid tool type: '%s'" % (tool_class,)
super(ErrorToolInvalid, self).__init__(msg)
class ErrorToolInvalidSetupMethod(Exception):
def __init__(self, method):
msg = "Invalid tool setup method: '%s'" % (method,)
super(ErrorToolInvalidSetupMethod, self).__init__(msg)
class ErrorToolNotFound(Exception):
def __init__(self, tool_name, loaded_paths):
loaded_paths = ', '.join(loaded_paths)
msg = "Tool '%s' has not been found in the following paths: %s" % (
tool_name, loaded_paths)
super(ErrorToolNotFound, self).__init__(msg)
class ToolInfo(object):
__slots__ = (
'tool_class',
'names',
'options',
'setup_methods',
)
def __getattr__(self, attr):
# lazily create and cache the tool options on first access
if attr == 'options':
self.options = self.tool_class.options()
return self.options
raise AttributeError("%s instance has no attribute '%s'" %
(type(self), attr))
def get_tool(self, options, setup, ignore_errors):
tool_options = options.override()
try:
tool_options.merge(self.options)
tool_class = self.tool_class
setup(tool_class, tool_options)
tool_class.setup(tool_options)
if tool_options.has_changed_key_options():
raise NotImplementedError()
tool_obj = tool_class(tool_options)
return tool_obj, tool_options
except NotImplementedError:
tool_options.clear()
except Exception as ex:
tool_options.clear()
event_tools_tool_failed(ex, self)
if not ignore_errors:
raise
return None, None
class ToolsManager(object):
__slots__ = (
'tool_classes',
'tool_names',
'tool_info',
'all_setup_methods',
'loaded_paths'
)
def __init__(self):
self.tool_classes = {}
self.tool_names = {}
self.all_setup_methods = {}
self.tool_info = {}
self.loaded_paths = []
def empty(self):
return not bool(self.tool_classes)
@staticmethod
def __add_to_map(values_map, names, value):
for name in names:
try:
value_list = values_map[name]
if value in value_list:
continue
except KeyError:
value_list = []
values_map[name] = value_list
value_list.insert(0, value)
def add_tool(self, tool_class, names):
if not issubclass(tool_class, Tool):
raise ErrorToolInvalid(tool_class)
if names:
names = tuple(to_sequence(names))
self.tool_names.setdefault(tool_class, set()).update(names)
self.__add_to_map(self.tool_classes, names, tool_class)
def add_setup(self, setup_method, names):
if not hasattr(setup_method, '__call__'):
raise ErrorToolInvalidSetupMethod(setup_method)
names = to_sequence(names)
self.__add_to_map(self.all_setup_methods, names, setup_method)
def load_tools(self, paths, reload=False):
for path in to_sequence(paths):
path = expand_file_path(path)
if path in self.loaded_paths:
if not reload:
continue
else:
self.loaded_paths.append(path)
module_files = find_files(path, mask="*.py")
if not module_files:
continue
self._load_tools_package(path, module_files)
@staticmethod
def _load_tools_package(path, module_files):
try:
package = load_package(path, generate_name=True)
package_name = package.__name__
except ImportError:
package_name = None
for module_file in module_files:
try:
load_module(module_file, package_name)
except Exception as ex:
event_tools_unable_load_module(module_file, ex)
def __get_tool_info_list(self, name):
tools_info = []
if (type(name) is type) and issubclass(name, Tool):
tool_classes = (name, )
else:
tool_classes = self.tool_classes.get(name, tuple())
for tool_class in tool_classes:
tool_info = self.tool_info.get(tool_class, None)
if tool_info is None:
names = self.tool_names.get(tool_class, [])
tool_info = ToolInfo()
tool_info.tool_class = tool_class
tool_info.names = names
self.tool_info[tool_class] = tool_info
setup_methods = set()
tool_info.setup_methods = setup_methods
for name in names:
setup_methods.update(self.all_setup_methods.get(name, []))
if not setup_methods:
setup_methods.add(_tool_setup_stub)
tools_info.append(tool_info)
return tools_info
def get_tool(self, tool_name, options, ignore_errors):
tool_info_list = self.__get_tool_info_list(tool_name)
for tool_info in tool_info_list:
get_tool = tool_info.get_tool
for setup in tool_info.setup_methods:
tool_obj, tool_options = get_tool(options, setup,
ignore_errors)
if tool_obj is not None:
tool_names = self.tool_names.get(tool_info.tool_class, [])
return tool_obj, tool_names, tool_options
raise ErrorToolNotFound(tool_name, self.loaded_paths)
_tools_manager = ToolsManager()
def get_tools_manager():
return _tools_manager
def tool(*tool_names):
def _tool(tool_class):
_tools_manager.add_tool(tool_class, tool_names)
return tool_class
return _tool
def tool_setup(*tool_names):
def _tool_setup(setup_method):
_tools_manager.add_setup(setup_method, tool_names)
return setup_method
return _tool_setup
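# An illustrative sketch (not part of the original module) of how these two
# decorators are typically used together; the tool name 'my_cc', the class
# and its builder method are hypothetical:
#
#   @tool('my_cc')
#   class MyCompiler(Tool):
#       def compile(self, options):
#           ...  # return a Builder for the sources
#
#   @tool_setup('my_cc')
#   def setup_my_cc(cls, options):
#       options.cc = 'mycc'  # adjust options before the tool is constructed
#
# ToolsManager.get_tool() then tries each registered setup method in turn
# until the tool class can be constructed successfully.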
@event_status
def event_extracted_tools(settings, path):
log_info("Extracted embedded tools into: '%s'" % (path,))
@event_warning
def event_extract_tools_failed(settings, error):
log_warning("Failed to extract embedded tools: %s" % (error,))
_EMBEDDED_TOOLS = [] # only used by standalone script
def _extract_embedded_tools(info=get_aql_info(),
embedded_tools=_EMBEDDED_TOOLS):
if not embedded_tools:
return None
embedded_tools = embedded_tools[0]
if not embedded_tools:
return None
path = os.path.join(_get_user_config_dir(), info.module, '.embedded_tools')
try:
os.makedirs(path)
    except OSError as e:
        if e.errno != errno.EEXIST:
            raise
        # the directory already exists: assume tools were extracted earlier
        return path
try:
zipped_tools = base64.b64decode(embedded_tools)
with io.BytesIO(zipped_tools) as handle:
with zipfile.ZipFile(handle) as zip_handle:
zip_handle.extractall(path)
except Exception as ex:
event_extract_tools_failed(ex)
return None
event_extracted_tools(path)
return path
class ErrorProjectInvalidMethod(Exception):
def __init__(self, method):
msg = "Invalid project method: '%s'" % (method,)
super(ErrorProjectInvalidMethod, self).__init__(msg)
class ErrorProjectUnknownTarget(Exception):
def __init__(self, target):
msg = "Unknown build target: '%s'" % (target,)
super(ErrorProjectUnknownTarget, self).__init__(msg)
class ErrorProjectBuilderMethodWithKW(Exception):
def __init__(self, method):
msg = "Keyword arguments are not allowed in builder method: '%s'" % (
method,)
super(ErrorProjectBuilderMethodWithKW, self).__init__(msg)
class ErrorProjectBuilderMethodUnbound(Exception):
def __init__(self, method):
msg = "Unbound builder method: '%s'" % (method,)
super(ErrorProjectBuilderMethodUnbound, self).__init__(msg)
class ErrorProjectBuilderMethodFewArguments(Exception):
def __init__(self, method):
msg = "Too few arguments in builder method: '%s'" % (method,)
super(ErrorProjectBuilderMethodFewArguments, self).__init__(msg)
class ErrorProjectBuilderMethodInvalidOptions(Exception):
def __init__(self, value):
        msg = "Type of 'options' argument must be Options, " \
              "instead of: '%s' (%s)" % (type(value), value)
super(ErrorProjectBuilderMethodInvalidOptions, self).__init__(msg)
def _get_user_config_dir():
return os.path.join(os.path.expanduser('~'), '.config')
def _add_packages_from_sys_path(paths):
local_path = os.path.normcase(os.path.expanduser('~'))
for path in sys.path:
path = os.path.normcase(path)
if path.endswith('-packages') and not path.startswith(local_path):
if path not in paths:
paths.append(path)
def _add_packages_from_sysconfig(paths):
try:
from distutils.sysconfig import get_python_lib
path = get_python_lib()
if path not in paths:
paths.append(path)
except Exception:
pass
def _get_site_packages():
try:
return site.getsitepackages()
except Exception:
pass
paths = []
_add_packages_from_sys_path(paths)
_add_packages_from_sysconfig(paths)
return paths
def _get_aqualid_install_dir():
try:
import aql
return os.path.dirname(aql.__file__)
except Exception:
return None
def _get_default_tools_path(info=get_aql_info()):
aql_module_name = info.module
tool_dirs = _get_site_packages()
tool_dirs.append(site.USER_SITE)
tool_dirs.append(_get_user_config_dir())
tool_dirs = [os.path.join(path, aql_module_name) for path in tool_dirs]
aql_dir = _get_aqualid_install_dir()
if aql_dir:
tool_dirs.insert(-2, aql_dir) # insert before the local tools
tool_dirs = [os.path.join(path, 'tools') for path in tool_dirs]
return tool_dirs
def _read_config(config_file, cli_config, options):
tools_path = cli_config.tools_path
cli_config.tools_path = None
cli_config.read_file(config_file, {'options': options})
if cli_config.tools_path:
tools_path.insert(0, cli_config.tools_path)
cli_config.tools_path = tools_path
class ProjectConfig(object):
__slots__ = ('directory', 'makefile', 'targets', 'options', 'arguments',
'verbose', 'silent', 'no_output', 'jobs', 'keep_going',
'search_up', 'default_tools_path', 'tools_path',
'no_tool_errors',
'clean', 'list_options', 'list_tool_options',
'list_targets',
'debug_profile', 'debug_profile_top', 'debug_memory',
'debug_explain', 'debug_backtrace',
'debug_exec',
'use_sqlite', 'force_lock',
'show_version',
)
def __init__(self, args=None):
paths_type = value_list_type(UniqueList, AbsFilePath)
strings_type = value_list_type(UniqueList, str)
cli_options = (
CLIOption("-C", "--directory", "directory", AbsFilePath, '',
"Change directory before reading the make files.",
'FILE PATH', cli_only=True),
CLIOption("-f", "--makefile", "makefile", FilePath, 'make.aql',
"Path to a make file.",
'FILE PATH', cli_only=True),
CLIOption("-l", "--list-options", "list_options", bool, False,
"List current options and exit."),
CLIOption("-L", "--list-tool-options", "list_tool_options",
strings_type, [],
"List tool options and exit.",
"TOOL_NAME", cli_only=True),
CLIOption("-t", "--list-targets", "list_targets", bool, False,
"List all available targets and exit.", cli_only=True),
CLIOption("-c", "--config", "config", AbsFilePath, None,
"The configuration file used to read CLI arguments.",
cli_only=True),
CLIOption("-R", "--clean", "clean", bool, False,
"Cleans targets.", cli_only=True),
CLIOption("-u", "--up", "search_up", bool, False,
"Search up directory tree for a make file.",
cli_only=True),
CLIOption("-e", "--no-tool-errors", "no_tool_errors", bool, False,
"Stop on any error during initialization of tools."),
CLIOption("-I", "--tools-path", "tools_path", paths_type, [],
"Path to tools and setup scripts.", 'FILE PATH, ...'),
CLIOption("-k", "--keep-going", "keep_going", bool, False,
"Keep going when some targets can't be built."),
CLIOption("-j", "--jobs", "jobs", int, None,
"Number of parallel jobs to process targets.", 'NUMBER'),
CLIOption("-v", "--verbose", "verbose", bool, False,
"Verbose mode."),
CLIOption("-s", "--silent", "silent", bool, False,
"Don't print any messages except warnings and errors."),
CLIOption(None, "--no-output", "no_output", bool, False,
"Don't print builder's output messages."),
CLIOption(None, "--debug-memory", "debug_memory", bool, False,
"Display memory usage."),
CLIOption("-P", "--debug-profile", "debug_profile",
AbsFilePath, None,
"Run under profiler and save the results "
"in the specified file.",
'FILE PATH'),
CLIOption("-T", "--debug-profile-top", "debug_profile_top",
int, 30,
"Show the specified number of top functions "
"from profiler report.",
'FILE PATH'),
CLIOption(None, "--debug-explain", "debug_explain", bool, False,
"Show the reasons why targets are being rebuilt"),
CLIOption(None, "--debug-exec", "debug_exec", bool, False,
"Full trace of all executed commands."),
CLIOption("--bt", "--debug-backtrace", "debug_backtrace",
bool, False, "Show call stack back traces for errors."),
CLIOption(None, "--force-lock", "force_lock", bool, False,
"Forces to lock AQL DB file.", cli_only=True),
CLIOption(None, "--use-sqlite", "use_sqlite", bool, False,
"Use SQLite DB."),
CLIOption("-V", "--version", "version", bool, False,
"Show version and exit.", cli_only=True),
)
cli_config = CLIConfig(cli_options, args)
options = builtin_options()
user_config = os.path.join(_get_user_config_dir(), 'default.cfg')
if os.path.isfile(user_config):
_read_config(user_config, cli_config, options)
config = cli_config.config
if config:
_read_config(config, cli_config, options)
arguments = {}
ignore_options = set(ProjectConfig.__slots__)
ignore_options.add('config')
for name, value in cli_config.items():
if (name not in ignore_options) and (value is not None):
arguments[name] = value
options.update(arguments)
self.options = options
self.arguments = arguments
self.directory = os.path.abspath(cli_config.directory)
makefile = cli_config.makefile
if makefile.find(os.path.sep) != -1:
makefile = os.path.abspath(makefile)
self.makefile = makefile
self.search_up = cli_config.search_up
self.tools_path = cli_config.tools_path
self.default_tools_path = _get_default_tools_path()
self.no_tool_errors = cli_config.no_tool_errors
self.targets = cli_config.targets
self.verbose = cli_config.verbose
self.silent = cli_config.silent
self.show_version = cli_config.version
self.no_output = cli_config.no_output
self.keep_going = cli_config.keep_going
self.clean = cli_config.clean
self.list_options = cli_config.list_options
self.list_tool_options = cli_config.list_tool_options
self.list_targets = cli_config.list_targets
self.jobs = cli_config.jobs
self.force_lock = cli_config.force_lock
self.use_sqlite = cli_config.use_sqlite
self.debug_profile = cli_config.debug_profile
self.debug_profile_top = cli_config.debug_profile_top
self.debug_memory = cli_config.debug_memory
self.debug_explain = cli_config.debug_explain
self.debug_backtrace = cli_config.debug_backtrace
self.debug_exec = cli_config.debug_exec
class BuilderWrapper(object):
__slots__ = ('project', 'options', 'tool', 'method', 'arg_names')
def __init__(self, tool, method, project, options):
self.arg_names = self.__check_builder_method(method)
self.tool = tool
self.method = method
self.project = project
self.options = options
@staticmethod
def __check_builder_method(method):
if not hasattr(method, '__call__'):
raise ErrorProjectInvalidMethod(method)
f_args, f_varargs, f_kw, f_defaults = get_function_args(method)
if f_kw:
raise ErrorProjectBuilderMethodWithKW(method)
min_args = 1 # at least one argument: options
if isinstance(method, types.MethodType):
if method.__self__ is None:
raise ErrorProjectBuilderMethodUnbound(method)
if len(f_args) < min_args:
raise ErrorProjectBuilderMethodFewArguments(method)
return frozenset(f_args)
@staticmethod
def _add_sources(name, value, sources,
_names=('sources', 'source')):
if name in _names:
if is_sequence(value):
sources.extend(value)
else:
sources.append(value)
return True
return False
@staticmethod
def _add_deps(value, deps,
_node_types=(Node, NodeFilter, EntityBase)):
if is_sequence(value):
deps.extend(v for v in value if isinstance(v, _node_types))
else:
if isinstance(value, _node_types):
deps.append(value)
def _get_builder_args(self, kw):
builder_args = {}
sources = []
deps = []
options = kw.pop("options", None)
if options is not None:
if not isinstance(options, Options):
raise ErrorProjectBuilderMethodInvalidOptions(options)
else:
options = self.options
options = options.override()
for name, value in kw.items():
if self._add_sources(name, value, sources):
continue
self._add_deps(value, deps)
if name in self.arg_names:
builder_args[name] = value
else:
options.append_value(name, value, op_iupdate)
return options, deps, sources, builder_args
def __call__(self, *args, **kw):
options, deps, sources, builder_args = self._get_builder_args(kw)
sources += args
sources = flatten_list(sources)
builder = self.method(options, **builder_args)
node = Node(builder, sources)
node.depends(deps)
self.project.add_nodes((node,))
return node
class ToolWrapper(object):
def __init__(self, tool, project, options):
self.project = project
self.options = options
self.tool = tool
def __getattr__(self, attr):
method = getattr(self.tool, attr)
if attr.startswith('_') or not isinstance(method, types.MethodType):
return method
builder = BuilderWrapper(self.tool, method, self.project, self.options)
setattr(self, attr, builder)
return builder
class ProjectTools(object):
def __init__(self, project):
self.project = project
self.tools_cache = {}
tools = get_tools_manager()
config = self.project.config
tools.load_tools(config.default_tools_path)
if tools.empty():
embedded_tools_path = _extract_embedded_tools()
if embedded_tools_path:
tools.load_tools(embedded_tools_path)
tools.load_tools(config.tools_path)
self.tools = tools
def _get_tools_options(self):
tools_options = {}
for name, tool in self.tools_cache.items():
tool = next(iter(tool.values()))
tools_options.setdefault(tool.options, []).append(name)
return tools_options
def _get_tool_names(self):
return sorted(self.tools_cache)
def __add_tool(self, tool_name, options):
options_ref = options.get_hash_ref()
try:
return self.tools_cache[tool_name][options_ref]
except KeyError:
pass
project = self.project
ignore_errors = not project.config.no_tool_errors
tool, tool_names, tool_options = self.tools.get_tool(tool_name,
options,
ignore_errors)
tool = ToolWrapper(tool, project, tool_options)
set_attr = self.__dict__.setdefault
for name in tool_names:
set_attr(name, tool)
self.tools_cache.setdefault(name, {})[options_ref] = tool
return tool
def __getattr__(self, name,
_func_types=(types.FunctionType, types.MethodType)):
options = self.project.options
tool = BuiltinTool(options)
tool_method = getattr(tool, name, None)
if tool_method and isinstance(tool_method, _func_types):
return BuilderWrapper(tool, tool_method, self.project, options)
return self.__add_tool(name, options)
def __getitem__(self, name):
return getattr(self, name)
def get_tools(self, *tool_names, **kw):
options = kw.pop('options', None)
tools_path = kw.pop('tools_path', None)
if tools_path:
self.tools.load_tools(tools_path)
if options is None:
options = self.project.options
if kw:
options = options.override()
options.update(kw)
tools = [self.__add_tool(tool_name, options)
for tool_name in tool_names]
return tools
def get_tool(self, tool_name, **kw):
return self.get_tools(tool_name, **kw)[0]
def try_tool(self, tool_name, **kw):
try:
return self.get_tools(tool_name, **kw)[0]
except ErrorToolNotFound:
return None
def add_tool(self, tool_class, tool_names=tuple()):
self.tools.add_tool(tool_class, tool_names)
return self.__add_tool(tool_class, self.project.options)
def _text_targets(targets):
text = ["", " Targets:", "==================", ""]
max_name = ""
for names, is_built, description in targets:
max_name = max(max_name, *names, key=len)
name_format = "{is_built} {name:<%s}" % len(max_name)
for names, is_built, description in targets:
if len(names) > 1 and text[-1]:
text.append('')
is_built_mark = "*" if is_built else " "
for name in names:
text.append(name_format.format(name=name, is_built=is_built_mark))
text[-1] += ' : ' + description
if len(names) > 1:
text.append('')
text.append('')
return text
class Project(object):
def __init__(self, config):
self.targets = config.targets
self.options = config.options
self.arguments = config.arguments
self.config = config
self.scripts_cache = {}
self.configs_cache = {}
self.aliases = {}
self.alias_descriptions = {}
self.defaults = []
self.build_manager = BuildManager()
self.tools = ProjectTools(self)
def __getattr__(self, attr):
if attr == 'script_locals':
self.script_locals = self.__get_script_locals()
return self.script_locals
raise AttributeError("No attribute '%s'" % (attr,))
def __get_script_locals(self):
script_locals = {
'options': self.options,
'tools': self.tools,
'Tool': self.tools.get_tool,
'TryTool': self.tools.try_tool,
'Tools': self.tools.get_tools,
'AddTool': self.tools.add_tool,
'LoadTools': self.tools.tools.load_tools,
'FindFiles': find_files,
'GetProject': self.get_project,
'GetProjectConfig': self.get_project_config,
'GetBuildTargets': self.get_build_targets,
'File': self.make_file_entity,
'Entity': self.make_entity,
'Dir': self.make_dir_entity,
'Config': self.read_config,
'Script': self.read_script,
'SetBuildDir': self.set_build_dir,
'Depends': self.depends,
'Requires': self.requires,
'RequireModules': self.require_modules,
'Sync': self.sync_nodes,
'BuildIf': self.build_if,
'SkipIf': self.skip_if,
'Alias': self.alias_nodes,
'Default': self.default_build,
'AlwaysBuild': self.always_build,
'Expensive': self.expensive,
'Build': self.build,
'Clear': self.clear,
'DirName': self.node_dirname,
'BaseName': self.node_basename,
}
return script_locals
def get_project(self):
return self
def get_project_config(self):
return self.config
def get_build_targets(self):
return self.targets
def make_file_entity(self, filepath, options=None):
if options is None:
options = self.options
        if options.file_signature == 'timestamp':
            file_type = FileTimestampEntity
        else:
            file_type = FileChecksumEntity
        return file_type(filepath)
def make_dir_entity(self, filepath):
return DirEntity(filepath)
def make_entity(self, data, name=None):
return SimpleEntity(data=data, name=name)
def _get_config_options(self, config, options):
if options is None:
options = self.options
options_ref = options.get_hash_ref()
config = os.path.normcase(os.path.abspath(config))
options_set = self.configs_cache.setdefault(config, set())
if options_ref in options_set:
return None
options_set.add(options_ref)
return options
def _remove_overridden_options(self, result):
for arg in self.arguments:
try:
del result[arg]
except KeyError:
pass
def read_config(self, config, options=None):
options = self._get_config_options(config, options)
if options is None:
return
config_locals = {'options': options}
dir_name, file_name = os.path.split(config)
with Chdir(dir_name):
result = exec_file(file_name, config_locals)
tools_path = result.pop('tools_path', None)
if tools_path:
self.tools.tools.load_tools(tools_path)
self._remove_overridden_options(result)
options.update(result)
def read_script(self, script):
script = os.path.normcase(os.path.abspath(script))
scripts_cache = self.scripts_cache
script_result = scripts_cache.get(script, None)
if script_result is not None:
return script_result
dir_name, file_name = os.path.split(script)
with Chdir(dir_name):
script_result = exec_file(file_name, self.script_locals)
scripts_cache[script] = script_result
return script_result
def add_nodes(self, nodes):
self.build_manager.add(nodes)
def set_build_dir(self, build_dir):
build_dir = os.path.abspath(expand_file_path(build_dir))
if self.options.build_dir != build_dir:
self.options.build_dir = build_dir
def build_if(self, condition, nodes):
self.build_manager.build_if(condition, nodes)
def skip_if(self, condition, nodes):
self.build_manager.skip_if(condition, nodes)
def depends(self, nodes, dependencies):
dependencies = tuple(to_sequence(dependencies))
depends = self.build_manager.depends
for node in to_sequence(nodes):
node.depends(dependencies)
depends(node, node.dep_nodes)
def requires(self, nodes, dependencies):
dependencies = tuple(
dep for dep in to_sequence(dependencies) if isinstance(dep, Node))
depends = self.build_manager.depends
for node in to_sequence(nodes):
depends(node, dependencies)
def require_modules(self, nodes, dependencies):
dependencies = tuple(
dep for dep in to_sequence(dependencies) if isinstance(dep, Node))
module_depends = self.build_manager.module_depends
for node in to_sequence(nodes):
module_depends(node, dependencies)
def sync_nodes(self, *nodes):
nodes = flatten_list(nodes)
nodes = tuple(node for node in nodes if isinstance(node, Node))
self.build_manager.sync(nodes)
def alias_nodes(self, alias, nodes, description=None):
for alias, node in itertools.product(to_sequence(alias),
to_sequence(nodes)):
self.aliases.setdefault(alias, set()).add(node)
if description:
self.alias_descriptions[alias] = description
def default_build(self, nodes):
for node in to_sequence(nodes):
self.defaults.append(node)
def always_build(self, nodes):
null_value = NullEntity()
for node in to_sequence(nodes):
node.depends(null_value)
def expensive(self, nodes):
self.build_manager.expensive(nodes)
def _add_alias_nodes(self, target_nodes, aliases):
try:
for alias in aliases:
target_nodes.update(self.aliases[alias])
except KeyError as ex:
raise ErrorProjectUnknownTarget(ex.args[0])
def _add_default_nodes(self, target_nodes):
for node in self.defaults:
if isinstance(node, Node):
target_nodes.add(node)
else:
self._add_alias_nodes(target_nodes, (node,))
def _get_build_nodes(self):
target_nodes = set()
self._add_alias_nodes(target_nodes, self.targets)
if not target_nodes:
self._add_default_nodes(target_nodes)
if not target_nodes:
target_nodes = None
return target_nodes
def _get_jobs_count(self, jobs=None):
if jobs is None:
jobs = self.config.jobs
if not jobs:
jobs = 0
else:
jobs = int(jobs)
if not jobs:
jobs = cpu_count()
if jobs < 1:
jobs = 1
elif jobs > 32:
jobs = 32
return jobs
def build(self, jobs=None):
jobs = self._get_jobs_count(jobs)
if not self.options.batch_groups.is_set():
self.options.batch_groups = jobs
build_nodes = self._get_build_nodes()
config = self.config
        keep_going = config.keep_going
explain = config.debug_explain
with_backtrace = config.debug_backtrace
force_lock = config.force_lock
use_sqlite = config.use_sqlite
is_ok = self.build_manager.build(jobs=jobs,
keep_going=bool(keep_going),
nodes=build_nodes,
explain=explain,
with_backtrace=with_backtrace,
use_sqlite=use_sqlite,
force_lock=force_lock)
return is_ok
def clear(self):
build_nodes = self._get_build_nodes()
force_lock = self.config.force_lock
use_sqlite = self.config.use_sqlite
self.build_manager.clear(nodes=build_nodes,
use_sqlite=use_sqlite,
force_lock=force_lock)
def list_targets(self):
targets = []
node2alias = {}
for alias, nodes in self.aliases.items():
key = frozenset(nodes)
target_info = node2alias.setdefault(key, [[], ""])
target_info[0].append(alias)
description = self.alias_descriptions.get(alias, None)
if description:
if len(target_info[1]) < len(description):
target_info[1] = description
build_nodes = self._get_build_nodes()
self.build_manager.shrink(build_nodes)
build_nodes = self.build_manager.get_nodes()
for nodes, aliases_and_description in node2alias.items():
aliases, description = aliases_and_description
aliases.sort(key=str.lower)
max_alias = max(aliases, key=len)
aliases.remove(max_alias)
aliases.insert(0, max_alias)
is_built = (build_nodes is None) or nodes.issubset(build_nodes)
targets.append((tuple(aliases), is_built, description))
targets.sort(key=lambda names: names[0][0].lower())
return _text_targets(targets)
def list_options(self, brief=False):
result = self.options.help_text("Builtin options:", brief=brief)
result.append("")
tool_names = self.tools._get_tool_names()
if tool_names:
result.append("Available options of tools: %s" %
(', '.join(tool_names)))
if result[-1]:
result.append("")
return result
def list_tools_options(self, tools, brief=False):
tools = set(to_sequence(tools))
result = []
for tools_options, names in self.tools._get_tools_options().items():
names_set = tools & set(names)
if names_set:
tools -= names_set
options_name = "Options of tool: %s" % (', '.join(names))
result += tools_options.help_text(options_name, brief=brief)
if result and result[-1]:
result.append("")
return result
def node_dirname(self, node):
return NodeDirNameFilter(node)
def node_basename(self, node):
return NodeBaseNameFilter(node)
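# An illustrative sketch (assumed usage, not part of the original module) of a
# minimal make file evaluated against the locals defined in
# __get_script_locals(); the builder names Compile and LinkProgram are
# hypothetical:
#
#   cpp = Tool('c++')
#   objs = cpp.Compile(FindFiles('src', mask='*.cpp'))
#   app = cpp.LinkProgram(objs, target='hello')
#   Alias('all', app, "Build the application")
#   Default(app)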
@event_status
def event_reading_scripts(settings):
log_info("Reading scripts...")
@event_status
def event_reading_scripts_done(settings, elapsed):
log_info("Reading scripts finished (%s)", elapsed)
@event_error
def event_aql_error(settings, error):
log_error(error)
@event_status
def event_building(settings):
log_info("Building targets...")
@event_status
def event_building_done(settings, success, elapsed):
status = "finished" if success else "failed"
log_info("Building targets %s (%s)", status, elapsed)
@event_status
def event_build_summary(settings, elapsed):
log_info("Total time: %s", elapsed)
def _find_make_script(script):
if os.path.isabs(script):
return script
cwd = split_path(os.path.abspath('.'))
path_sep = os.path.sep
while cwd:
script_path = path_sep.join(cwd) + path_sep + script
if os.path.isfile(script_path):
return os.path.normpath(script_path)
cwd.pop()
return script
def _start_memory_tracing():
try:
import tracemalloc
except ImportError:
return
tracemalloc.start()
def _stop_memory_tracing():
try:
import tracemalloc
except ImportError:
return
snapshot = tracemalloc.take_snapshot()
_log_memory_top(snapshot)
tracemalloc.stop()
def _log_memory_top(snapshot, group_by='lineno', limit=30):
try:
import tracemalloc
import linecache
except ImportError:
return
snapshot = snapshot.filter_traces((
tracemalloc.Filter(False, "<frozen importlib._bootstrap>"),
tracemalloc.Filter(False, "<unknown>"),
))
top_stats = snapshot.statistics(group_by)
log_info("Top %s lines", limit)
for index, stat in enumerate(top_stats[:limit], 1):
frame = stat.traceback[0]
filename = os.sep.join(frame.filename.split(os.sep)[-2:])
log_info("#%s: %s:%s: %.1f KiB",
index, filename, frame.lineno, stat.size / 1024)
line = linecache.getline(frame.filename, frame.lineno).strip()
if line:
log_info(' %s', line)
other = top_stats[limit:]
if other:
size = sum(stat.size for stat in other)
log_info("%s other: %.1f KiB", len(other), size / 1024)
total = sum(stat.size for stat in top_stats)
log_info("Total allocated size: %.1f KiB", total / 1024)
def _print_memory_status():
_stop_memory_tracing()
mem_usage = memory_usage()
num_objects = len(gc.get_objects())
obj_mem_usage = sum(sys.getsizeof(obj) for obj in gc.get_objects())
log_info("GC objects: %s, size: %.1f KiB, heap memory usage: %s Kb",
num_objects, obj_mem_usage / 1024, mem_usage)
def _set_build_dir(options, makefile):
build_dir = options.build_dir.get()
if os.path.isabs(build_dir):
return
makefile_dir = os.path.abspath(os.path.dirname(makefile))
options.build_dir = os.path.join(makefile_dir, build_dir)
def _read_make_script(prj):
prj_cfg = prj.config
makefile = expand_file_path(prj_cfg.makefile)
if prj_cfg.search_up:
makefile = _find_make_script(makefile)
_set_build_dir(prj_cfg.options, makefile)
event_reading_scripts()
with Chrono() as elapsed:
prj.read_script(makefile)
event_reading_scripts_done(elapsed)
def _list_options(prj):
prj_cfg = prj.config
text = []
if prj_cfg.list_options:
text += prj.list_options(brief=not prj_cfg.verbose)
if prj_cfg.list_tool_options:
text += prj.list_tools_options(prj_cfg.list_tool_options,
brief=not prj_cfg.verbose)
log_info('\n'.join(text))
def _build(prj):
event_building()
with Chrono() as elapsed:
success = prj.build()
event_building_done(success, elapsed)
if not success:
prj.build_manager.print_fails()
return success
def _main(prj_cfg):
with Chrono() as total_elapsed:
ev_settings = EventSettings(brief=not prj_cfg.verbose,
with_output=not prj_cfg.no_output,
trace_exec=prj_cfg.debug_exec)
set_event_settings(ev_settings)
with Chdir(prj_cfg.directory):
if prj_cfg.debug_memory:
_start_memory_tracing()
prj = Project(prj_cfg)
_read_make_script(prj)
success = True
if prj_cfg.clean:
prj.clear()
elif prj_cfg.list_targets:
text = prj.list_targets()
log_info('\n'.join(text))
elif prj_cfg.list_options or prj_cfg.list_tool_options:
_list_options(prj)
else:
success = _build(prj)
if prj_cfg.debug_memory:
_print_memory_status()
event_build_summary(total_elapsed)
status = int(not success)
return status
def _patch_sys_modules():
aql_module = sys.modules.get('aql', None)
if aql_module is not None:
sys.modules.setdefault(get_aql_info().module, aql_module)
else:
aql_module = sys.modules.get(get_aql_info().module, None)
if aql_module is not None:
sys.modules.setdefault('aql', aql_module)
def _run_main(prj_cfg):
debug_profile = prj_cfg.debug_profile
if not debug_profile:
status = _main(prj_cfg)
else:
profiler = cProfile.Profile()
status = profiler.runcall(_main, prj_cfg)
profiler.dump_stats(debug_profile)
p = pstats.Stats(debug_profile)
p.strip_dirs()
p.sort_stats('cumulative')
p.print_stats(prj_cfg.debug_profile_top)
return status
def _log_error(ex, with_backtrace):
if with_backtrace:
err = traceback.format_exc()
else:
if isinstance(ex, KeyboardInterrupt):
err = "Keyboard Interrupt"
else:
err = to_unicode(ex)
event_aql_error(err)
def main():
with_backtrace = True
try:
_patch_sys_modules()
prj_cfg = ProjectConfig()
with_backtrace = prj_cfg.debug_backtrace
if prj_cfg.show_version:
log_info(dump_aql_info())
return 0
if prj_cfg.silent:
set_log_level(LOG_WARNING)
status = _run_main(prj_cfg)
except (Exception, KeyboardInterrupt) as ex:
_log_error(ex, with_backtrace)
status = 1
return status
_AQL_VERSION_INFO.date = "2015-12-10"
# ==============================================================================
# Package: Aqualid-0.7, end of modules/aqualid/__init__.py
# (a gcc tool module for Aqualid follows)
# ==============================================================================
import os
import re
import itertools
from aql import read_text_file, remove_files, Tempfile, execute_command,\
StrOptionType, ListOptionType, PathOptionType, tool, \
ToolCommonCpp, CommonCppCompiler, CommonCppArchiver, \
CommonCppLinker, ToolCommonRes, CommonResCompiler
# ==============================================================================
# BUILDERS IMPLEMENTATION
# ==============================================================================
def _read_deps(deps_file, exclude_dirs,
_space_splitter_re=re.compile(r'(?<!\\)\s+')):
deps = read_text_file(deps_file)
dep_files = []
target_sep = ': '
target_sep_len = len(target_sep)
for line in deps.splitlines():
pos = line.find(target_sep)
if pos >= 0:
line = line[pos + target_sep_len:]
line = line.rstrip('\\ ').strip()
tmp_dep_files = _space_splitter_re.split(line)
tmp_dep_files = [dep_file.replace('\\ ', ' ')
for dep_file in tmp_dep_files if dep_file]
dep_files += map(os.path.abspath, tmp_dep_files)
    dep_files = iter(dep_files)
    next(dep_files, None)  # skip the source file itself; tolerate empty input
dep_files = tuple(dep_file
for dep_file in dep_files
if not dep_file.startswith(exclude_dirs))
return dep_files
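# An illustrative example (hypothetical paths) of the input handled above: a
# dependency file produced by 'gcc -MMD' looks like
#
#   main.o: src/main.cpp include/app.h \
#    include/util.h
#
# _read_deps() drops the 'main.o: ' target prefix, unescapes backslash-space
# sequences, skips the first entry (the source file itself) and filters out
# any absolute path that starts with one of exclude_dirs.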
# ==============================================================================
class GccCompiler (CommonCppCompiler):
def __init__(self, options):
super(GccCompiler, self).__init__(options)
self.cmd += ['-c', '-MMD']
def build(self, source_entities, targets):
src = source_entities[0].get()
obj_file = self.get_obj_path(src)
cwd = os.path.dirname(obj_file)
with Tempfile(prefix=obj_file, suffix='.d', root_dir=cwd) as dep_file:
cmd = list(self.cmd)
cmd += ['-o', obj_file, '-MF', dep_file, src]
out = self.exec_cmd(cmd, cwd, file_flag='@')
implicit_deps = _read_deps(dep_file, self.ext_cpppath)
targets.add_targets(obj_file)
targets.add_implicit_deps(implicit_deps)
return out
# -----------------------------------------------------------
def get_default_obj_ext(self):
return '.o'
# -----------------------------------------------------------
def _set_targets(self, source_entities, targets, obj_files):
for src_value, obj_file in zip(source_entities, obj_files):
if os.path.isfile(obj_file):
dep_file = os.path.splitext(obj_file)[0] + '.d'
implicit_deps = _read_deps(dep_file, self.ext_cpppath)
src_targets = targets[src_value]
src_targets.add_targets(obj_file)
src_targets.add_implicit_deps(implicit_deps)
# -----------------------------------------------------------
def build_batch(self, source_entities, targets):
sources = tuple(src.get() for src in source_entities)
obj_files = self.get_source_target_paths(sources, ext=self.ext)
remove_files(obj_files)
cwd = os.path.dirname(obj_files[0])
cmd = list(self.cmd)
cmd += sources
result = self.exec_cmd_result(cmd, cwd, file_flag='@')
self._set_targets(source_entities, targets, obj_files)
if result.failed():
raise result
return result.output()
# ==============================================================================
class GccResCompiler (CommonResCompiler):
def build(self, source_entities, targets):
src = source_entities[0].get()
res_file = self.get_obj_path(src)
cwd = os.path.dirname(res_file)
cmd = list(self.cmd)
cmd += ['-o', res_file, '-i', src]
out = self.exec_cmd(cmd, cwd, file_flag='@')
# deps = _parse_res( src )
targets.add_targets(res_file)
return out
# ==============================================================================
class GccCompilerMaker (object):
def make_compiler(self, options):
return GccCompiler(options)
def make_res_compiler(self, options):
return GccResCompiler(options)
# ==============================================================================
class GccArchiver (GccCompilerMaker, CommonCppArchiver):
def build(self, source_entities, targets):
cmd = list(self.cmd)
cmd.append(self.target)
cmd += (src.get() for src in source_entities)
cwd = os.path.dirname(self.target)
out = self.exec_cmd(cmd, cwd=cwd, file_flag='@')
targets.add_targets(self.target)
return out
# ==============================================================================
class GccLinker(GccCompilerMaker, CommonCppLinker):
def __init__(self, options, target, shared):
super(GccLinker, self).__init__(options, target, shared)
self.is_windows = options.target_os == 'windows'
self.libsuffix = options.libsuffix.get()
# -----------------------------------------------------------
def build(self, source_entities, targets):
target = self.target
import_lib = None
shared = self.shared
cmd = list(self.cmd)
obj_files = (src.get() for src in source_entities)
cmd[2:2] = obj_files
if shared:
cmd.append('-shared')
if self.is_windows:
import_lib = os.path.splitext(target)[0] + self.libsuffix
cmd.append('-Wl,--out-implib,%s' % import_lib)
cmd += ['-o', target]
cwd = os.path.dirname(target)
out = self.exec_cmd(cmd, cwd=cwd, file_flag='@')
if shared:
if import_lib:
tags = ('shlib',)
targets.add_targets(import_lib, tags=('implib',))
else:
tags = ('shlib', 'implib')
else:
tags = None
targets.add_targets(target, tags=tags)
return out
# ==============================================================================
# TOOL IMPLEMENTATION
# ==============================================================================
_OS_PREFIXES = (
'linux', 'mingw32', 'cygwin',
'freebsd', 'openbsd', 'netbsd', 'darwin',
'sunos', 'hpux', 'vxworks', 'solaris', 'interix',
'uclinux', 'elf',
)
_OS_NAMES = {
'mingw32': 'windows',
'darwin': 'osx',
'sunos': 'solaris',
}
def _get_target_os(target_os):
for target in target_os.split('-'):
for prefix in _OS_PREFIXES:
if target.startswith(prefix):
name = _OS_NAMES.get(prefix, prefix)
return name
return target_os
# ==============================================================================
def _get_gcc_specs(gcc):
result = execute_command([gcc, '-dumpmachine'])
target = result.stdout.strip()
result = execute_command([gcc, '-dumpversion'])
version = result.stdout.strip()
target_list = target.split('-', 1)
if len(target_list) > 1:
target_arch = target_list[0]
target_os = target_list[1]
else:
target_os = target_list[0]
target_arch = 'unknown'
target_os = _get_target_os(target_os)
specs = {
'cc_name': 'gcc',
'cc_ver': version,
'target_os': target_os,
'target_arch': target_arch,
}
return specs
# ==============================================================================
def _generate_prog_names(prog, prefix, suffix):
prefixes = [prefix, ''] if prefix else ['']
suffixes = [suffix, ''] if suffix else ['']
return tuple(prefix + prog + suffix
for prefix, suffix in itertools.product(prefixes, suffixes))
# ==============================================================================
class ToolGccCommon(ToolCommonCpp):
@classmethod
def setup(cls, options):
if options.cc_name.is_set_not_to('gcc'):
raise NotImplementedError()
gcc_prefix = options.gcc_prefix.get()
gcc_suffix = options.gcc_suffix.get()
if cls.language == 'c':
cc = "gcc"
else:
cc = "g++"
gcc = cls.find_program(options, gcc_prefix + cc + gcc_suffix)
specs = _get_gcc_specs(gcc)
options.update(specs)
options.cc = gcc
options.link = gcc
ar = _generate_prog_names('ar', gcc_prefix, gcc_suffix)
rc = _generate_prog_names('windres', gcc_prefix, gcc_suffix)
lib, rc = cls.find_optional_programs(options, [ar, rc], gcc)
options.lib = lib
options.rc = rc
# -----------------------------------------------------------
@classmethod
def options(cls):
options = super(ToolGccCommon, cls).options()
options.gcc_prefix = StrOptionType(
description="GCC C/C++ compiler prefix")
options.gcc_suffix = StrOptionType(
description="GCC C/C++ compiler suffix")
return options
# -----------------------------------------------------------
def __init__(self, options):
super(ToolGccCommon, self).__init__(options)
options.env['CPATH'] = ListOptionType(value_type=PathOptionType(),
separators=os.pathsep)
if self.language == 'c':
options.env['C_INCLUDE_PATH'] = ListOptionType(
value_type=PathOptionType(),
separators=os.pathsep
)
else:
options.env['CPLUS_INCLUDE_PATH'] = ListOptionType(
value_type=PathOptionType(),
separators=os.pathsep
)
options.env['LIBRARY_PATH'] = ListOptionType(
value_type=PathOptionType(),
separators=os.pathsep
)
if_ = options.If()
if_windows = if_.target_os.eq('windows')
options.objsuffix = '.o'
options.ressuffix = options.objsuffix
options.libprefix = 'lib'
options.libsuffix = '.a'
options.shlibprefix = 'lib'
options.shlibsuffix = '.so'
if_windows.shlibprefix = ''
if_windows.shlibsuffix = '.dll'
if_windows.progsuffix = '.exe'
options.cpppath_prefix = '-I '
options.libpath_prefix = '-L '
options.cppdefines_prefix = '-D '
options.libs_prefix = '-l'
options.libs_suffix = ''
options.ccflags += ['-pipe', '-x', self.language]
options.libflags += ['-rcs']
options.linkflags += ['-pipe']
options.language = self.language
if_.rtti.is_true().cxxflags += '-frtti'
if_.rtti.is_false().cxxflags += '-fno-rtti'
if_.exceptions.is_true().cxxflags += '-fexceptions'
if_.exceptions.is_false().cxxflags += '-fno-exceptions'
if_windows.target_subsystem.eq('console').linkflags +=\
'-Wl,--subsystem,console'
if_windows.target_subsystem.eq('windows').linkflags +=\
'-Wl,--subsystem,windows'
if_.debug_symbols.is_true().ccflags += '-g'
if_.debug_symbols.is_false().linkflags += '-Wl,--strip-all'
if_.runtime_link.eq('static').linkflags += '-static-libgcc'
if_.runtime_link.eq('shared').linkflags += '-shared-libgcc'
if_.target_os.eq('windows').runtime_thread.eq(
'multi').ccflags += '-mthreads'
if_.target_os.ne('windows').runtime_thread.eq(
            'multi').ccflags += '-pthread'
if_.optimization.eq('speed').occflags += '-Ofast'
if_.optimization.eq('size').occflags += '-Os'
if_.optimization.eq('off').occflags += '-O0'
if_.inlining.eq('off').occflags += '-fno-inline'
if_.inlining.eq('on').occflags += '-finline'
if_.inlining.eq('full').occflags += '-finline-functions'
if_.warning_level.eq(0).ccflags += '-w'
if_.warning_level.eq(3).ccflags += '-Wall'
if_.warning_level.eq(4).ccflags += ['-Wall', '-Wextra',
'-Wfloat-equal',
'-Wundef', '-Wshadow',
'-Wredundant-decls']
if_.warning_as_error.is_true().ccflags += '-Werror'
if_profiling_true = if_.profile.is_true()
if_profiling_true.ccflags += '-pg'
if_profiling_true.linkflags += '-pg'
if_cxxstd = if_.cxxstd
if_cxx11 = if_cxxstd.eq('c++11')
if_cxx14 = if_cxxstd.eq('c++14')
if_cxxstd.eq('c++98').cxxflags += '-std=c++98'
if_cxx11.cc_ver.ge("4.7").cxxflags += '-std=c++11'
if_cxx11.cc_ver.ge("4.3").cc_ver.le("4.6").cxxflags += '-std=c++0x'
if_cxx14.cc_ver.ge("4.8").cxxflags += '-std=c++1y'
if_.pic.is_true().target_os.not_in(
['windows', 'cygwin']).ccflags += '-fPIC'
# -----------------------------------------------------------
def compile(self, options):
return GccCompiler(options)
def compile_resource(self, options):
return GccResCompiler(options)
def link_static_library(self, options, target):
return GccArchiver(options, target)
def link_shared_library(self, options, target, def_file=None):
return GccLinker(options, target, shared=True)
def link_program(self, options, target):
return GccLinker(options, target, shared=False)
# ==============================================================================
@tool('c++', 'g++', 'gxx', 'cpp', 'cxx')
class ToolGxx(ToolGccCommon):
language = "c++"
# ==============================================================================
@tool('c', 'gcc', 'cc')
class ToolGcc(ToolGccCommon):
language = "c"
# ==============================================================================
@tool('rc', 'windres')
class ToolWindRes(ToolCommonRes):
# -----------------------------------------------------------
@classmethod
def options(cls):
options = super(ToolWindRes, cls).options()
options.gcc_prefix = StrOptionType(
description="GCC C/C++ compiler prefix")
options.gcc_suffix = StrOptionType(
description="GCC C/C++ compiler suffix")
return options
# -----------------------------------------------------------
@classmethod
def setup(cls, options):
gcc_prefix = options.gcc_prefix.get()
gcc_suffix = options.gcc_suffix.get()
rc = _generate_prog_names('windres', gcc_prefix, gcc_suffix)
rc = cls.find_program(options, rc)
options.target_os = 'windows'
options.rc = rc
def __init__(self, options):
super(ToolWindRes, self).__init__(options)
options.ressuffix = '.o'
def compile(self, options):
return GccResCompiler(options)
|
Aqualid
|
/Aqualid-0.7.tar.gz/Aqualid-0.7/modules/aqualid/tools/gcc.py
|
gcc.py
|
import os
import re
from aql import execute_command, FilePartChecksumEntity, FileTimestampEntity,\
ListOptionType, PathOptionType, tool,\
ToolCommonCpp, CommonCppCompiler, CommonCppArchiver,\
CommonCppLinker, ToolCommonRes, CommonResCompiler
# ==============================================================================
# BUILDERS IMPLEMENTATION
# ==============================================================================
def _parse_output(source_paths,
output,
exclude_dirs,
inc_prefix="Note: including file: ",
                  # pattern for errors, works fine in all locales
_err_re=re.compile(r".+\s*:\s+(fatal\s)?error\s+[0-9A-Z]+:"),
):
inc_prefix_len = len(inc_prefix)
sources_deps = []
sources_errors = []
names = iter(map(os.path.basename, source_paths))
sources = iter(source_paths)
next_name = next(names, None)
current_file = None
filtered_output = []
current_deps = []
current_file_failed = False
for line in output.split('\n'):
line = line.rstrip()
if line == next_name:
if current_file is not None:
sources_deps.append(current_deps)
sources_errors.append(current_file_failed)
current_file = next(sources, None)
current_deps = []
current_file_failed = False
next_name = next(names, None)
elif not line.endswith('...'):
if line.startswith(inc_prefix):
dep_file = line[inc_prefix_len:].lstrip()
dep_file = os.path.normcase(os.path.abspath(dep_file))
if not dep_file.startswith(exclude_dirs):
current_deps.append(dep_file)
else:
if _err_re.match(line):
current_file_failed = True
filtered_output.append(line)
output = '\n'.join(filtered_output)
sources_deps.append(current_deps)
sources_errors.append(current_file_failed)
return sources_deps, sources_errors, output
# ==============================================================================
class MsvcCompiler (CommonCppCompiler):
def __init__(self, options):
super(MsvcCompiler, self).__init__(options)
self.cmd += ['/nologo', '/c', '/showIncludes']
# -----------------------------------------------------------
def make_obj_entity(self, filename):
if self.use_timestamp:
return FileTimestampEntity(filename)
return FilePartChecksumEntity(filename, offset=16)
# -----------------------------------------------------------
def build(self, source_entities, targets):
source = source_entities[0].get()
obj_file = self.get_obj_path(source)
cwd = os.path.dirname(obj_file)
cmd = list(self.cmd)
cmd += ['/Fo%s' % obj_file, source]
result = self.exec_cmd_result(cmd, cwd, file_flag='@')
deps, errors, out = _parse_output((source,),
result.stdout,
self.ext_cpppath)
if result.failed():
result.stdout = out
raise result
targets.add_target_entity(self.make_obj_entity(obj_file))
targets.add_implicit_deps(deps[0])
return out
# -----------------------------------------------------------
def get_default_obj_ext(self):
return '.obj'
# -----------------------------------------------------------
def _set_targets(self,
source_entities,
targets,
sources,
obj_files,
output):
deps, errors, out = _parse_output(sources, output, self.ext_cpppath)
items = zip(source_entities, obj_files, deps, errors)
for src_value, obj_file, deps, error in items:
if not error:
src_targets = targets[src_value]
src_targets.add_target_entity(self.make_obj_entity(obj_file))
src_targets.add_implicit_deps(deps)
return out
# -----------------------------------------------------------
def build_batch(self, source_entities, targets):
sources = tuple(src.get() for src in source_entities)
obj_files = self.get_source_target_paths(sources, ext=self.ext)
cwd = os.path.dirname(obj_files[0])
cmd = list(self.cmd)
cmd += sources
result = self.exec_cmd_result(cmd, cwd, file_flag='@')
out = self._set_targets(source_entities, targets, sources, obj_files,
result.stdout)
if result.failed():
result.stdout = out
raise result
return out
# ==============================================================================
class MsvcResCompiler (CommonResCompiler):
def build(self, source_entities, targets):
src = source_entities[0].get()
res_file = self.get_obj_path(src)
cwd = os.path.dirname(res_file)
cmd = list(self.cmd)
cmd += ['/nologo', '/Fo%s' % res_file, src]
out = self.exec_cmd(cmd, cwd, file_flag='@')
# deps = _parse_res( src )
targets.add_targets(res_file)
return out
# ==============================================================================
class MsvcCompilerMaker (object):
def make_compiler(self, options):
return MsvcCompiler(options)
def make_res_compiler(self, options):
return MsvcResCompiler(options)
# ==============================================================================
class MsvcArchiver (MsvcCompilerMaker, CommonCppArchiver):
def build(self, source_entities, targets):
obj_files = (src.get() for src in source_entities)
cmd = list(self.cmd)
cmd += ['/nologo', "/OUT:%s" % self.target]
cmd += obj_files
cwd = os.path.dirname(self.target)
out = self.exec_cmd(cmd, cwd=cwd, file_flag='@')
targets.add_targets(self.target)
return out
# ==============================================================================
class MsvcLinker (MsvcCompilerMaker, CommonCppLinker):
def __init__(self, options, target, shared, def_file):
super(MsvcLinker, self).__init__(options, target, shared)
self.libsuffix = options.libsuffix.get()
if shared:
self.def_file = def_file
# -----------------------------------------------------------
def build(self, source_entities, targets):
obj_files = (src.get() for src in source_entities)
cmd = list(self.cmd)
target = self.target
if self.shared:
cmd.append('/dll')
if self.def_file:
cmd.append('/DEF:%s' % self.def_file)
import_lib = os.path.splitext(target)[0] + self.libsuffix
cmd.append('/IMPLIB:%s' % import_lib)
itargets = []
if '/DEBUG' in cmd:
pdb = target + '.pdb'
cmd.append("/PDB:%s" % (pdb,))
itargets.append(pdb)
cmd += ['/nologo', '/OUT:%s' % target]
cmd += obj_files
cwd = os.path.dirname(target)
out = self.exec_cmd(cmd, cwd=cwd, file_flag='@')
# -----------------------------------------------------------
# SET TARGETS
if not self.shared:
tags = None
else:
tags = 'shlib'
if os.path.exists(import_lib):
targets.add_targets(import_lib, tags='implib')
exports_lib = os.path.splitext(target)[0] + '.exp'
if os.path.exists(exports_lib):
itargets.append(exports_lib)
targets.add_targets(target, tags=tags)
targets.add_side_effects(itargets)
return out
# -----------------------------------------------------------
# ==============================================================================
# TOOL IMPLEMENTATION
# ==============================================================================
def _get_msvc_specs(cl):
result = execute_command(cl)
out = result.stderr.split('\n')
out = out[0].split()
version = out[-3]
target_arch = out[-1]
target_os = 'windows'
specs = {
'cc_name': 'msvc',
'cc_ver': version,
'target_os': target_os,
'target_arch': target_arch,
}
return specs
# ==============================================================================
class ToolMsvcCommon(ToolCommonCpp):
@classmethod
def setup(cls, options):
if options.cc_name.is_set_not_to('msvc'):
raise NotImplementedError()
cl = cls.find_program(options, 'cl')
link, lib, rc = cls.find_optional_programs(options,
['link', 'lib', 'rc'],
cl)
specs = _get_msvc_specs(cl)
options.update(specs)
options.cc = cl
options.link = link
options.lib = lib
options.rc = rc
# -----------------------------------------------------------
def __init__(self, options):
super(ToolMsvcCommon, self).__init__(options)
options.env['INCLUDE'] = ListOptionType(value_type=PathOptionType(),
separators=os.pathsep)
options.env['LIB'] = ListOptionType(value_type=PathOptionType(),
separators=os.pathsep)
options.env['LIBPATH'] = ListOptionType(value_type=PathOptionType(),
separators=os.pathsep)
options.objsuffix = '.obj'
options.ressuffix = '.res'
options.libprefix = ''
options.libsuffix = '.lib'
options.shlibprefix = ''
options.shlibsuffix = '.dll'
options.progsuffix = '.exe'
options.cpppath_prefix = '/I '
options.libpath_prefix = '/LIBPATH:'
options.cppdefines_prefix = '/D'
options.libs_prefix = ''
options.libs_suffix = '.lib'
options.linkflags += ['/INCREMENTAL:NO']
options.sys_cpppath = options.env['INCLUDE']
if_ = options.If()
options.language = self.language
options.cflags += '/TC'
options.cxxflags += '/TP'
if_.rtti.is_true().cxxflags += '/GR'
if_.rtti.is_false().cxxflags += '/GR-'
if_.exceptions.is_true().cxxflags += '/EHsc'
if_.exceptions.is_false().cxxflags += ['/EHs-', '/EHc-']
if_.target_subsystem.eq('console').linkflags += '/SUBSYSTEM:CONSOLE'
if_.target_subsystem.eq('windows').linkflags += '/SUBSYSTEM:WINDOWS'
if_.target_arch.eq('x86-32').libflags += '/MACHINE:X86'
if_.target_arch.eq('x86-64').libflags += '/MACHINE:X64'
if_.target_arch.eq('arm').libflags += '/MACHINE:ARM'
if_.target_arch.eq('arm64').libflags += '/MACHINE:ARM64'
if_.target_arch.eq('x86-32').linkflags += '/MACHINE:X86'
if_.target_arch.eq('x86-64').linkflags += '/MACHINE:X64'
if_.target_arch.eq('arm').linkflags += '/MACHINE:ARM'
if_.target_arch.eq('arm64').linkflags += '/MACHINE:ARM64'
if_.debug_symbols.is_true().ccflags += '/Z7'
if_.debug_symbols.is_true().linkflags += '/DEBUG'
if_runtime_link = if_.runtime_link
if_runtime_link.eq('shared').runtime_debug.is_false().ccflags += '/MD'
if_runtime_link.eq('shared').runtime_debug.is_true().ccflags += '/MDd'
if_runtime_link.eq('static').runtime_debug.is_false().\
runtime_thread.eq('single').ccflags += '/ML'
if_runtime_link.eq('static').runtime_debug.is_false().\
runtime_thread.eq('multi').ccflags += '/MT'
if_runtime_link.eq('static').runtime_debug.is_true().\
runtime_thread.eq('single').ccflags += '/MLd'
if_runtime_link.eq('static').runtime_debug.is_true().\
runtime_thread.eq('multi').ccflags += '/MTd'
# if_.cc_ver.ge(7).cc_ver.lt(8).ccflags += '/Zc:forScope /Zc:wchar_t'
if_.optimization.eq('speed').occflags += '/Ox'
if_.optimization.eq('speed').olinkflags += ['/OPT:REF', '/OPT:ICF']
if_.optimization.eq('size').occflags += '/Os'
if_.optimization.eq('size').olinkflags += ['/OPT:REF', '/OPT:ICF']
if_.optimization.eq('off').occflags += '/Od'
if_.inlining.eq('off').occflags += '/Ob0'
if_.inlining.eq('on').occflags += '/Ob1'
if_.inlining.eq('full').occflags += '/Ob2'
if_warning_level = if_.warning_level
if_warning_level.eq(0).ccflags += '/w'
if_warning_level.eq(1).ccflags += '/W1'
if_warning_level.eq(2).ccflags += '/W2'
if_warning_level.eq(3).ccflags += '/W3'
if_warning_level.eq(4).ccflags += '/W4'
if_.warnings_as_errors.is_true().ccflags += '/WX'
if_.warnings_as_errors.is_true().linkflags += '/WX'
if_.whole_optimization.is_true().ccflags += '/GL'
if_.whole_optimization.is_true().linkflags += '/LTCG'
# -----------------------------------------------------------
def compile(self, options):
return MsvcCompiler(options)
def compile_resource(self, options):
return MsvcResCompiler(options)
def link_static_library(self, options, target):
return MsvcArchiver(options, target)
def link_shared_library(self, options, target, def_file=None):
return MsvcLinker(options, target, shared=True, def_file=def_file)
def link_program(self, options, target):
return MsvcLinker(options, target, shared=False, def_file=None)
# ==============================================================================
@tool('c++', 'msvcpp', 'msvc++', 'cpp', 'cxx')
class ToolMsVCpp(ToolMsvcCommon):
language = "c++"
# ==============================================================================
@tool('c', 'msvc', 'cc')
class ToolMsvc(ToolMsvcCommon):
language = "c"
# ==============================================================================
@tool('rc', 'msrc')
class ToolMsrc(ToolCommonRes):
@classmethod
def setup(cls, options):
rc = cls.find_program(options, 'rc')
options.target_os = 'windows'
options.rc = rc
def __init__(self, options):
super(ToolMsrc, self).__init__(options)
options.ressuffix = '.res'
def compile(self, options):
return MsvcResCompiler(options)
|
Aqualid
|
/Aqualid-0.7.tar.gz/Aqualid-0.7/modules/aqualid/tools/msvc.py
|
msvc.py
|
import sys
import os.path
import itertools
import aql
# ==============================================================================
class ErrorNoCommonSourcesDir(Exception):
def __init__(self, sources):
msg = "Can't rsync disjoined files: %s" % (sources,)
super(ErrorNoCommonSourcesDir, self).__init__(msg)
# ==============================================================================
def _to_cygwin_path(path):
if not path:
return '.'
path_sep = '/'
drive, path = aql.split_drive(path)
if drive.find(':') == 1:
drive = "/cygdrive/" + drive[0]
path = drive + path
if path[-1] in ('\\', '/'):
last_sep = path_sep
else:
last_sep = ''
path = path.replace('\\', '/')
return path + last_sep
# ==============================================================================
def _norm_local_path(path):
if not path:
return '.'
path_sep = os.path.sep
path = str(path)
if path[-1] in (path_sep, os.path.altsep):
last_sep = path_sep
else:
last_sep = ''
path = os.path.normcase(os.path.normpath(path))
return path + last_sep
# ==============================================================================
def _norm_remote_path(path):
if not path:
return '.'
path_sep = '/'
if path[-1] in (path_sep, os.path.altsep):
last_sep = path_sep
else:
last_sep = ''
path = os.path.normpath(path).replace('\\', path_sep)
return path + last_sep
# ==============================================================================
def _split_remote_path(remote_path):
if os.path.isabs(remote_path):
host = ''
user = ''
else:
# [USER@]HOST:DEST
remote_path = remote_path.strip()
user_pos = remote_path.find('@')
if user_pos == -1:
user = ''
else:
user = remote_path[:user_pos]
remote_path = remote_path[user_pos + 1:]
host_pos = remote_path.find(':')
if host_pos == -1:
host = ''
else:
host = remote_path[:host_pos]
remote_path = remote_path[host_pos + 1:]
remote_path = _norm_remote_path(remote_path)
return user, host, remote_path
# ==============================================================================
class RemotePath(object):
__slots__ = ('path', 'host', 'user')
def __init__(self, remote_path, user=None, host=None):
u, h, remote_path = _split_remote_path(remote_path)
if not user:
user = u
if not host:
host = h
self.path = remote_path
self.host = host
self.user = user
# -----------------------------------------------------------
def is_remote(self):
return bool(self.host)
# -----------------------------------------------------------
def __str__(self):
return self.get()
# -----------------------------------------------------------
def join(self, other):
if self.host:
path = self.path + '/' + _norm_remote_path(other)
else:
path = os.path.join(self.path, _norm_local_path(other))
return RemotePath(path, self.user, self.host)
# -----------------------------------------------------------
def basename(self):
if self.host:
last_slash_pos = self.path.rfind('/')
return self.path[last_slash_pos + 1:]
else:
return os.path.basename(self.path)
# -----------------------------------------------------------
def get(self, cygwin_path=False):
if self.host:
if self.user:
return "%s@%s:%s" % (self.user, self.host, self.path)
return "%s:%s" % (self.host, self.path)
else:
if cygwin_path:
return _to_cygwin_path(self.path)
return self.path
# ==============================================================================
class RSyncPushBuilder(aql.FileBuilder):
NAME_ATTRS = ('remote_path', 'source_base')
SIGNATURE_ATTRS = ('cmd', )
def __init__(self, options, remote_path, source_base=None,
host=None, login=None, key_file=None, exclude=None):
self.rsync_cygwin = (
sys.platform != 'cygwin') and options.rsync_cygwin.get()
if source_base:
self.source_base = _norm_local_path(source_base)
else:
self.source_base = None
self.remote_path = RemotePath(remote_path, login, host)
self.cmd = self.__get_cmd(options, key_file, exclude)
self.rsync = options.rsync.get()
self.file_value_type = aql.FileTimestampEntity
# -----------------------------------------------------------
def __get_cmd(self, options, key_file, excludes):
cmd = [options.rsync.get()]
cmd += options.rsync_flags.get()
if excludes:
excludes = aql.to_sequence(excludes)
cmd += itertools.chain(*itertools.product(['--exclude'], excludes))
if self.remote_path.is_remote():
ssh_flags = options.rsync_ssh_flags.get()
if key_file:
if self.rsync_cygwin:
key_file = _to_cygwin_path(key_file)
ssh_flags += ['-i', key_file]
cmd += ['-e', 'ssh %s' % ' '.join(ssh_flags)]
return cmd
# -----------------------------------------------------------
def _get_sources(self, source_entities):
sources = [_norm_local_path(src.get()) for src in source_entities]
source_base = self.source_base
if source_base:
sources_base_len = len(source_base)
for i, src in enumerate(sources):
if src.startswith(source_base):
src = src[sources_base_len:]
if not src:
src = '.'
sources[i] = src
return sources
# -----------------------------------------------------------
def _set_targets(self, source_entities, sources, targets):
remote_path = self.remote_path
source_base = self.source_base
make_entity = self.make_simple_entity if remote_path.is_remote() \
else self.make_file_entity
for src_value, src in zip(source_entities, sources):
if not source_base:
src = os.path.basename(src)
target_path = remote_path.join(src)
target_entity = make_entity(target_path.get())
targets[src_value].add_target_entity(target_entity)
# -----------------------------------------------------------
def build_batch(self, source_entities, targets):
sources = self._get_sources(source_entities)
cmd = list(self.cmd)
tmp_r, tmp_w = None, None
try:
if self.rsync_cygwin:
sources = map(_to_cygwin_path, sources)
sorted_sources = sorted(sources)
source_base = self.source_base
if source_base:
if self.rsync_cygwin:
source_base = _to_cygwin_path(source_base)
tmp_r, tmp_w = os.pipe()
os.write(tmp_w, '\n'.join(sorted_sources).encode('utf-8'))
os.close(tmp_w)
tmp_w = None
cmd += ["--files-from=-", source_base]
else:
cmd += sorted_sources
remote_path = self.remote_path.get(self.rsync_cygwin)
cmd.append(remote_path)
out = self.exec_cmd(cmd, stdin=tmp_r)
finally:
if tmp_r:
os.close(tmp_r)
if tmp_w:
os.close(tmp_w)
self._set_targets(source_entities, sources, targets)
return out
# -----------------------------------------------------------
def get_trace_name(self, source_entities, brief):
if brief:
name = self.cmd[0]
name = os.path.splitext(os.path.basename(name))[0]
else:
name = ' '.join(self.cmd)
return name
# ==============================================================================
class RSyncPullBuilder(aql.Builder):
# rsync -avzub --exclude-from=files.flt --delete-excluded -e "ssh -i
# dev.key" c4dev@dev:/work/cp/bp2_int/components .
NAME_ATTRS = ('target_path', )
SIGNATURE_ATTRS = ('cmd', )
def __init__(self, options, target, host=None,
login=None, key_file=None, exclude=None):
self.rsync_cygwin = (
sys.platform != 'cygwin') and options.rsync_cygwin.get()
self.target_path = _norm_local_path(target)
self.host = host
self.login = login
self.cmd = self.__get_cmd(options, key_file, exclude)
self.rsync = options.rsync.get()
self.file_value_type = aql.FileTimestampEntity
# -----------------------------------------------------------
def make_entity(self, value, tags=None):
if aql.is_string(value):
remote_path = RemotePath(value, self.login, self.host)
if not remote_path.is_remote():
return self.make_file_entity(value, tags)
return self.make_simple_entity(value, tags)
# -----------------------------------------------------------
def __get_cmd(self, options, key_file, excludes):
cmd = [options.rsync.get()]
cmd += options.rsync_flags.get()
if excludes:
excludes = aql.to_sequence(excludes)
cmd += itertools.chain(*itertools.product(['--exclude'], excludes))
if self.host:
ssh_flags = options.rsync_ssh_flags.get()
if key_file:
if self.rsync_cygwin:
key_file = _to_cygwin_path(key_file)
ssh_flags += ['-i', key_file]
cmd += ['-e', 'ssh %s' % ' '.join(ssh_flags)]
return cmd
# -----------------------------------------------------------
def _get_sources_and_targets(self, source_entities):
sources = []
targets = []
target_path = self.target_path
host = self.host
login = self.login
cygwin_path = self.rsync_cygwin
for src in source_entities:
src = src.get()
remote_path = RemotePath(src, login, host)
path = os.path.join(target_path, remote_path.basename())
targets.append(path)
sources.append(remote_path.get(cygwin_path))
sources.sort()
return sources, targets
# -----------------------------------------------------------
def build(self, source_entities, targets):
sources, target_files = self._get_sources_and_targets(source_entities)
cmd = list(self.cmd)
target_path = self.target_path
if self.rsync_cygwin:
target_path = _to_cygwin_path(target_path)
cmd += sources
cmd.append(target_path)
out = self.exec_cmd(cmd)
targets.add_target_files(target_files)
return out
# -----------------------------------------------------------
def get_trace_name(self, source_entities, brief):
if brief:
name = self.cmd[0]
name = os.path.splitext(os.path.basename(name))[0]
else:
name = ' '.join(self.cmd)
return name
# ==============================================================================
@aql.tool('rsync')
class ToolRsync(aql.Tool):
@classmethod
def setup(cls, options):
rsync = cls.find_program(options, 'rsync')
options.rsync = rsync
if not options.rsync_cygwin.is_set():
options.rsync_cygwin = rsync.find('cygwin') != -1
# -----------------------------------------------------------
@classmethod
def options(cls):
options = aql.Options()
options.rsync = aql.PathOptionType(
description="File path to rsync program.")
options.rsync_cygwin = aql.BoolOptionType(
            description="Whether rsync uses cygwin paths.")
options.rsync_flags = aql.ListOptionType(
description="rsync tool flags", separators=None)
options.rsync_ssh_flags = aql.ListOptionType(
description="rsync tool SSH flags", separators=None)
return options
# -----------------------------------------------------------
def __init__(self, options):
super(ToolRsync, self).__init__(options)
options.rsync_flags = ['-a', '-v', '-z']
options.rsync_ssh_flags = [
'-o', 'StrictHostKeyChecking=no', '-o', 'BatchMode=yes']
options.set_group("rsync")
# -----------------------------------------------------------
def pull(self, options, target, host=None,
login=None, key_file=None, exclude=None):
return RSyncPullBuilder(options, target,
host=host, login=login,
key_file=key_file, exclude=exclude)
Pull = pull
# ----------------------------------------------------------
def push(self, options, target, source_base=None,
host=None, login=None, key_file=None, exclude=None):
builder = RSyncPushBuilder(options, target,
source_base=source_base,
host=host, login=login,
key_file=key_file, exclude=exclude)
return builder
Push = push
# ----------------------------------------------------------
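The source/target pairing in `_get_sources_and_targets` above can be sketched standalone (illustrative only — the real builder also resolves `login@host:` remote paths and cygwin path conversion):

```python
import os

def get_sources_and_targets(sources, target_path):
    # Minimal mirror of RSyncPullBuilder._get_sources_and_targets: each
    # source maps to a target file named after its basename under
    # target_path; sources are sorted afterwards, so targets keep the
    # original input order.
    targets = [os.path.join(target_path, os.path.basename(s)) for s in sources]
    return sorted(sources), targets

srcs, tgts = get_sources_and_targets(["/remote/b.txt", "/remote/a.txt"], "/local")
print(srcs)  # ['/remote/a.txt', '/remote/b.txt']
print(tgts)  # ['/local/b.txt', '/local/a.txt']
```

Note the quirk carried over from the original: sorting happens only on the sources list, after the targets have already been collected.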
|
Aqualid
|
/Aqualid-0.7.tar.gz/Aqualid-0.7/modules/aqualid/tools/rsync.py
|
rsync.py
|
Python Wrapper for AquesTalk (Old License)
==========================================
A Python wrapper for the old-license version of AquesTalk.
Works only with Python on **32-bit Windows**.
See the [official blog][blog.a-quest] for details on the AquesTalk license change.
Installation
------------
```
pip install AquesTalk-python
```
Usage
-----
```
import aquestalk
aq = aquestalk.load('f1')
wav = aq.synthe("こんにちわ")
type(wav)
# => <class 'wave.Wave_read'>
```
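Since `synthe` returns a standard `wave.Wave_read`, its output can be inspected with the stdlib `wave` module. This sketch builds a short silent clip in memory to stand in for the synthesized audio (no AquesTalk DLL required), then reads it back the same way you would read the result of `aq.synthe(...)`:

```
import io
import wave

buf = io.BytesIO()
with wave.open(buf, 'wb') as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(8000)
    w.writeframes(b'\x00\x00' * 8000)  # 1 second of 16-bit silence

buf.seek(0)
wav = wave.open(buf, 'rb')
print(wav.getframerate())  # 8000
print(wav.getnframes())    # 8000
```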
Required Notices
----------------
- This library uses "AquesTalk", the rule-based speech synthesis library by AQUEST Corp.
- The DLL files bundled with this library are copyrighted by the same company.
- See AqLicense.txt for details.
[blog.a-quest]: http://blog-yama.a-quest.com/?eid=970181
|
AquesTalk-python
|
/AquesTalk-python-1.1.tar.gz/AquesTalk-python-1.1/README.md
|
README.md
|
# -*- coding: utf-8 -*-
import ctypes
import io
import wave
import enum
import hashlib
import os
__all__ = ['VOICE_TYPES', 'VoiceType', 'AquesTalk', 'AquesTalkError', 'load', 'load_from_path']
VOICE_TYPES = ('f1', 'f2', 'm1', 'm2', 'r1', 'dvd', 'jgr', 'imd1')
VoiceType = enum.Enum('VoiceType', VOICE_TYPES, module=__name__)
class AquesTalk:
"""
AquesTalk Wrapper
Parameters
----------
name : str
AquesTalk.dllのパス
voice_type : VoiceType
声種
Attributes
----------
voice_type : VoiceType
声種
"""
def __init__(self, name, voice_type=None):
self._dll = ctypes.windll.LoadLibrary(name)
self._voice_type = voice_type
self._dll.AquesTalk_Synthe.argtypes = (ctypes.POINTER(ctypes.c_char),
ctypes.c_int,
ctypes.POINTER(ctypes.c_int))
self._dll.AquesTalk_Synthe.restype = ctypes.POINTER(ctypes.c_char)
self._dll.AquesTalk_FreeWave.argtypes = (ctypes.POINTER(ctypes.c_char),)
self._dll.AquesTalk_FreeWave.restype = None
def synthe(self, koe, speed=100):
"""
音声記号列から音声波形を生成します。
Parameters
----------
koe : str
音声記号列
speed : int
発話速度[%]
- 50-300の間で指定
- デフォルト:100
- 値を大きく設定するほど、速くなる
Returns
-------
wave.Wave_read
生成した音声
Raises
------
AquesTalkError
音声波形の生成時にエラーが発生した場合
"""
wav = wave.open(io.BytesIO(self.synthe_raw(koe, speed)), 'rb')
return wav
def synthe_raw(self, koe, speed=100):
"""
音声記号列から音声波形を生成します。
Parameters
----------
koe : str
音声記号列
speed : int
発話速度[%]
- 50-300の間で指定
- デフォルト:100
- 値を大きく設定するほど、速くなる
Returns
-------
bytes
WAVフォーマットのデータ
Raises
------
AquesTalkError
音声波形の生成時にエラーが発生した場合
"""
c_wav, size = self._synthe(koe, speed)
if not c_wav:
raise AquesTalkError(size)
wav_raw = c_wav[:size]
self._freewave(c_wav)
return wav_raw
@property
def voice_type(self):
"""
声種
"""
return self._voice_type
def _synthe(self, koe, speed):
"""
音声記号列から音声波形を生成します
生成した音声データ領域は、使用後、呼び出し側でAquesTalk_FreeWave()で解放してください。
Parameters
----------
koe : str
音声記号列
speed : int
発話速度[%] 50-300の間で指定 デフォルト:100 値を大きく設定するほど、速くなる
Returns
-------
ctypes.POINTER(ctypes.c_char)
WAVフォーマットのデータ(内部で領域確保、解放は呼び出し側でAquesTalk_FreeWave()で行う)の先頭アドレスを返す。
エラー時は、Noneを返す。このときsizeにエラーコードが設定される。
int
生成した音声データのサイズが返る[byte](エラーの場合はエラーコードが返る)
"""
c_size = ctypes.c_int()
c_wav = self._dll.AquesTalk_Synthe(koe.encode('sjis'), speed, ctypes.byref(c_size))
return c_wav if c_wav else None, c_size.value
def _freewave(self, c_wav):
"""
音声データの領域を開放
Parameters
----------
c_wav : ctypes.POINTER(ctypes.c_char)
WAVフォーマットのデータ(AquesTalk_Synthe()で生成した音声データ)
"""
self._dll.AquesTalk_FreeWave(c_wav)
class AquesTalkError(Exception):
"""
AquesTalk関数が返すエラー
"""
messages = {100: 'Other error',
101: 'Out of memory',
102: 'An undefined phonetic symbol was specified in the symbol string',
103: 'The duration in the prosody data is negative',
104: 'Internal error (undefined delimiter code detected)',
105: 'An undefined phonetic symbol was specified in the symbol string',
106: 'A tag in the symbol string is specified incorrectly',
107: 'Tag length exceeds the limit (or [>] not found)',
108: 'A value inside a tag is specified incorrectly',
109: 'WAVE playback unavailable (sound driver problem)',
110: 'WAVE playback unavailable (sound driver problem, asynchronous playback)',
111: 'No data to vocalize',
200: 'Symbol string is too long',
201: 'Too many phonetic symbols in a single phrase',
202: 'Symbol string is too long (internal buffer overflow 1)',
203: 'Out of heap memory',
204: 'Symbol string is too long (internal buffer overflow 1)'}
def __init__(self, err):
self.err = err
self.message = AquesTalkError.messages.get(err, 'Unknown error')
super().__init__('{}({})'.format(self.message, self.err))
def _get_md5_from_file(name, chunk_size=4096):
md5 = hashlib.md5()
with open(name, 'rb') as file:
for chunk in iter(lambda: file.read(chunk_size), b''):
md5.update(chunk)
return md5.hexdigest()
# http://blog-yama.a-quest.com/?eid=970181
_DLL_MD5_DICT = {'d09ba89e04fc6a848377cb695b7c227a': VoiceType.f1,
'23bd3bbfe89e7bb0f92e5e3ed841cec4': VoiceType.f1,
'f97a031451220238b21ada12dd2ba6b7': VoiceType.f1,
'8bfacc9e1c9d6f1a1f6803739a9ed7d6': VoiceType.f2,
'950cb2c4a9493ff3af7906fdb02b523b': VoiceType.m1,
'd4491b6ff6aab7e6f3a19dad369d0432': VoiceType.m2,
'1a69c64175f46271f9f491890b265762': VoiceType.r1,
'cd431c8c86c1566e73cbbb166047b8a9': VoiceType.dvd,
'54f15b467cbf215884d29a0ad39a9df3': VoiceType.jgr,
'e352165e9da54e255c3c25a33cb85aaa': VoiceType.imd1}
def load(voice_type, check_voice_type=False):
"""
指定された声種のAquesTalkライブラリを読み込みます
Parameters
----------
voice_type : VoiceType
声種
check_voice_type : bool
声種が一致しているか確認するかどうか (デフォルト : False)
Returns
-------
AquesTalk
読み込んだAquesTalkライブラリのインスタンス
"""
if not isinstance(voice_type, VoiceType):
voice_type = VoiceType[voice_type]
path = os.path.join(os.path.dirname(__file__), voice_type.name, "AquesTalk.dll")
return load_from_path(path, voice_type, check_voice_type)
def load_from_path(path, voice_type, check_voice_type=False):
"""
指定されたパスからAquesTalkライブラリを読み込みます
Parameters
----------
path : str
dllファイルのパス
voice_type : VoiceType
声種
check_voice_type : bool
声種が一致しているか確認するかどうか (デフォルト : False)
Returns
-------
AquesTalk
読み込んだAquesTalkライブラリのインスタンス
"""
if check_voice_type:
md5 = _get_md5_from_file(path)
detected = _DLL_MD5_DICT.get(md5)
if detected is not None and detected != voice_type:
voice_type = detected
return AquesTalk(path, voice_type)
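The chunked hashing in `_get_md5_from_file` above reads the file in fixed-size blocks, which yields the same digest as hashing it in one shot without loading the whole file into memory. A standalone sketch of the pattern:

```python
import hashlib
import os
import tempfile

data = b"AquesTalk" * 2000
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(data)
    name = f.name

md5 = hashlib.md5()
with open(name, "rb") as file:
    # iter() with a sentinel stops when read() returns b'' at EOF
    for chunk in iter(lambda: file.read(4096), b""):
        md5.update(chunk)
os.unlink(name)

print(md5.hexdigest() == hashlib.md5(data).hexdigest())  # True
```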
|
AquesTalk-python
|
/AquesTalk-python-1.1.tar.gz/AquesTalk-python-1.1/aquestalk/aquestalk.py
|
aquestalk.py
|
from __future__ import annotations
# noinspection SpellCheckingInspection,GrazieInspection
graphemes = list("abcdefghijklmnopqrstuvwxyz")
phonemes = [
"AA0",
"AA1",
"AA2",
"AE0",
"AE1",
"AE2",
"AH0",
"AH1",
"AH2",
"AO0",
"AO1",
"AO2",
"AW0",
"AW1",
"AW2",
"AY0",
"AY1",
"AY2",
"B",
"CH",
"D",
"DH",
"EH0",
"EH1",
"EH2",
"ER0",
"ER1",
"ER2",
"EY0",
"EY1",
"EY2",
"F",
"G",
"HH",
"IH0",
"IH1",
"IH2",
"IY0",
"IY1",
"IY2",
"JH",
"K",
"L",
"M",
"N",
"NG",
"OW0",
"OW1",
"OW2",
"OY0",
"OY1",
"OY2",
"P",
"R",
"S",
"SH",
"T",
"TH",
"UH0",
"UH1",
"UH2",
"UW",
"UW0",
"UW1",
"UW2",
"V",
"W",
"Y",
"Z",
"ZH",
]
pos_tags = [
"CC",
"CD",
"DT",
"EX",
"FW",
"IN",
"JJ",
"JJR",
"JJS",
"LS",
"MD",
"NN",
"NNS",
"NNP",
"NNPS",
"PDT",
"POS",
"PRP",
"PRP$",
"RB",
"RBR",
"RBS",
"RP",
"TO",
"UH",
"VB",
"VBD",
"VBG",
"VBN",
"VBP",
"VBZ",
"WDT",
"WP",
"WP$",
"WRB",
]
pos_type_tags = ["VERB", "NOUN", "PRON", "ADJ", "ADV"]
pos_type_short_tags = ["V", "N", "P", "A", "R"]
pos_type_form_dict = {"V": "VERB", "N": "NOUN", "P": "PRON", "A": "ADJ", "R": "ADV"}
graphemes_set = set(graphemes)
phonemes_set = set(phonemes)
pos_tags_set = set(pos_tags)
pos_type_tags_set = set(pos_type_tags)
pos_type_short_tags_set = set(pos_type_short_tags)
punctuation = {
".",
",",
":",
";",
"?",
"!",
"-",
"_",
"'",
'"',
"`",
"~",
"@",
"#",
"$",
}
consonants = {
"B",
"CH",
"D",
"DH",
"F",
"G",
"HH",
"JH",
"K",
"L",
"M",
"N",
"NG",
"P",
"R",
"S",
"SH",
"T",
"TH",
"V",
"W",
"Y",
"Z",
"ZH",
}
def to_full_type_tag(short_type_tag: str) -> str | None:
"""
Method to convert from short type tags to full type tags.
:param short_type_tag: Short type tag
:return: Full type tag, or None if not found
"""
if short_type_tag == "V":
return "VERB"
elif short_type_tag == "N":
return "NOUN"
elif short_type_tag == "P":
return "PRON"
elif short_type_tag == "A":
return "ADJ"
elif short_type_tag == "R":
return "ADV"
else:
return None
def get_parent_pos(pos: str) -> str | None:
"""Get parent POS tag of a POS tag."""
if pos.startswith("VB"):
return "VERB"
elif pos.startswith("NN"):
return "NOUN"
elif pos.startswith("RB"):
return "ADVERB"
else:
return None
def contains_alpha(text: str) -> bool:
"""
Method to check if a string contains alphabetic characters.
:param text:
:return:
"""
return text.upper().isupper()
def is_braced(word: str) -> bool:
"""
Check if a word is surrounded by brace-markings {}.
:param word: Word
:return: True if word is braced-marked.
"""
return word.startswith("{") and word.endswith("}")
def valid_braces(text: str, raise_on_invalid: bool = False) -> bool:
"""
Check if a text is valid braced-marked.
:param text: Text to check.
:param raise_on_invalid: Raises ValueError if invalid.
:return: True if text is valid braced-marked.
"""
def invalid(msg: str) -> bool:
if raise_on_invalid:
raise ValueError(f'Invalid braced-marked text ({msg}) in "{text}"')
else:
return False
if not any(c in text for c in {"{", "}"}):
return True # No braces, so valid.
in_braces = False
for char in text:
if char == "{":
if not in_braces:
in_braces = True
else:
return invalid("Nested braces")
elif char == "}":
if in_braces:
in_braces = False
else:
return invalid("Closing brace without opening")
if in_braces:
return invalid("Opening brace without closing")
return True
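The `contains_alpha` idiom above leans on `str.isupper()` returning False when a string has no cased characters; the function is repeated inline so the sketch runs standalone:

```python
def contains_alpha(text: str) -> bool:
    # After upper-casing, isupper() is True only if at least one
    # cased (alphabetic) character is present.
    return text.upper().isupper()

print(contains_alpha("abc123"))  # True
print(contains_alpha("1234"))    # False
print(contains_alpha("?!"))      # False
```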
|
Aquila-Resolve
|
/Aquila_Resolve-0.1.4-py3-none-any.whl/Aquila_Resolve/symbols.py
|
symbols.py
|
from __future__ import annotations
from typing import TYPE_CHECKING
from collections import defaultdict
import re
if TYPE_CHECKING:
from .g2p import G2p
_re_digit = re.compile(r"\d+")
class Processor:
def __init__(self, g2p: G2p):
self._lookup = g2p.lookup
self._cmu_get = g2p.dict.get
self._segment = g2p.segment
self._tag = g2p.h2p.tag
self._stem = g2p.stem
# Number of times respective methods were called
self.stat_hits = defaultdict(int)
# Number of times respective methods returned value (not None)
self.stat_resolves = defaultdict(int)
def auto_possessives(self, word: str) -> str | None:
"""
Auto-possessives
:param word: Input of possible possessive word
:return: Phoneme of word as SDS, or None if unresolvable
"""
if not word.endswith("'s"):
return None
# If the word ends with "'s", register a hit
self.stat_hits["possessives"] += 1
"""
There are 3 general cases:
1. Base words ending in one of 6 special consonants (sibilants)
- i.e. Tess's, Rose's, Butch's, Midge's, Rush's, Garage's
- With consonants ending of [s], [z], [ch], [j], [sh], [zh]
- In ARPAbet: {S}, {Z}, {CH}, {JH}, {SH}, {ZH}
- These require a suffix of {IH0 Z}
2. Base words ending in vowels and voiced consonants:
- i.e. Fay's, Hugh's, Bob's, Ted's, Meg's, Sam's, Dean's, Claire's, Paul's, Bing's
- In ARPAbet: {IY0}, {EY1}, {UW1}, {B}, {D}, {G}, {M}, {N}, {R}, {L}, {NG}
- Vowels need a wildcard match of any numbered variant
- These require a suffix of {Z}
3. Base words ending in voiceless consonants:
- i.e. Hope's, Pat's, Clark's, Ruth's
- In ARPAbet: {P}, {T}, {K}, {TH}
- These require a suffix of {S}
"""
# Method to return phoneme and increment stat
def _resolve(phoneme: str) -> str:
self.stat_resolves["possessives"] += 1
return phoneme
core = word[:-2] # Get core word without possessive
ph = self._lookup(core) # find core word using recursive search
if ph is None:
return None # Core word not found
ph_list = ph.split(" ") # Split phonemes into list
# [Case 1]
if ph_list[-1] in {"S", "Z", "CH", "JH", "SH", "ZH"}:
ph += " IH0 Z"
return _resolve(ph)
# [Case 2]
"""
Valid for case 2:
'AA', 'AO', 'EY', 'OW', 'UW', 'AE', 'AW', 'EH', 'IH',
'OY', 'AH', 'AY', 'ER', 'IY', 'UH', 'UH',
'B', 'D', 'G', 'M', 'N', 'R', 'L', 'NG'
To simplify matching, we will check for the listed single-letter variants and 'NG'
and then check for any numbered variant
"""
if (
ph_list[-1] in {"B", "D", "G", "M", "N", "R", "L", "NG"}
or ph_list[-1][-1].isdigit()
):
ph += " Z"
return _resolve(ph)
# [Case 3]
if ph_list[-1] in ["P", "T", "K", "TH"]:
ph += " S"
return _resolve(ph)
return None # No match found
def auto_contractions(self, word: str) -> str | None:
"""
Auto contracts form and finds phonemes
:param word:
:return:
"""
"""
Supported contractions:
- 'll
- 'd
"""
# First, check if the word is a contraction
parts = word.split("'") # Split on [']
if len(parts) != 2 or parts[1] not in {"ll", "d"}:
return None # No contraction found
# If initial check passes, register a hit
self.stat_hits["contractions"] += 1
# Get the core word
core = parts[0]
# Get the phoneme for the core word recursively
ph = self._lookup(core)
if ph is None:
return None # Core word not found
# Add the phoneme with the appropriate suffix
if parts[1] == "ll":
ph += " AH0 L"
elif parts[1] == "d":
ph += " D"
# Return the phoneme
self.stat_resolves["contractions"] += 1
return ph
def auto_hyphenated(self, word: str) -> str | None:
"""
Splits hyphenated words and attempts to resolve components
:param word: Input word, possibly hyphenated
:return: Phoneme of word as SDS, or None if unresolvable
"""
# First, check if the word is a hyphenated word
if "-" not in word:
return None # No hyphen found
# If initial check passes, register a hit
self.stat_hits["hyphenated"] += 1
# Split the word into parts
parts = word.split("-")
# Get the phonemes for each part
ph = []
for part in parts:
ph_part = self._lookup(part)
if ph_part is None:
return None # Part not found
ph.append(ph_part)
# Return the phoneme
self.stat_resolves["hyphenated"] += 1
return " ".join(ph)
def auto_compound(self, word: str) -> str | None:
"""
Splits compound words and attempts to resolve components
:param word: Input word, possibly a compound
:return: Phoneme of word as SDS, or None if unresolvable
"""
# Split word into parts
parts = self._segment(word)
if len(parts) == 1:
return None # No compound found
# If length of any part is less than 3, return None
for part in parts:
if len(part) < 3:
return None
# If initial check passes, register a hit
self.stat_hits["compound"] += 1
# Get the phonemes for each part
ph = []
for part in parts:
ph_part = self._lookup(part)
if ph_part is None:
return None # Part not found
ph.append(ph_part)
# Join the phonemes
ph = " ".join(ph)
# Return the phoneme
self.stat_resolves["compound"] += 1
return ph
def auto_plural(self, word: str, pos: str = None) -> str | None:
"""
Finds singular form of plurals and attempts to resolve separately
Optionally a pos tag can be provided.
If no tags are provided, there will be a single word pos inference,
which is not ideal.
:param pos: Part-of-speech tag for the word (optional)
:param word: Input word, possibly a plural
:return: Phoneme of word as SDS, or None if unresolvable
"""
# First, check if the word is a replaceable plural
# Needs to end in 's' or 'es'
if word[-1] != "s":
return None # No plural found
# Now check if the word is a plural using pos
if pos is None:
pos = self._tag(word)
if pos is None or len(pos) == 0 or (pos[0] != "NNS" and pos[0] != "NNPS"):
return None # No tag found
# If initial check passes, register a hit
self.stat_hits["plural"] += 1
"""
Case 1:
> Word ends in 'oes'
> Remove the 'es' to get the singular
"""
if len(word) > 3 and word[-3:] == "oes":
singular = word[:-2]
# Look up the possessive form (since the pronunciation is the same)
ph = self.auto_possessives(singular + "'s")
if ph is not None:
self.stat_resolves["plural"] += 1
return ph # Return the phoneme
"""
Case 2:
> Word ends in 's'
> Remove the 's' to get the singular
"""
if len(word) > 1 and word[-1] == "s":
singular = word[:-1]
# Look up the possessive form (since the pronunciation is the same)
ph = self.auto_possessives(singular + "'s")
if ph is not None:
self.stat_resolves["plural"] += 1
return ph # Return the phoneme
# If no matches, return None
return None
def auto_stem(self, word: str) -> str | None:
"""
Attempts to resolve using the root stem of a word.
Supported modes:
- "ing"
- "ingly"
- "ly"
:param word: Input word ending in a supported suffix
:return: Phoneme of word as SDS, or None if unresolvable
"""
# noinspection SpellCheckingInspection
"""
'ly' has no special rules, always add phoneme 'L IY0'
'ing' relevant rules:
> If the original verb ended in [e], remove it and add [ing]
- i.e. take -> taking, make -> making
- We will search once with the original verb, and once with [e] added
- 1st attempt: tak, mak
- 2nd attempt: take, make
> If the input word has a repeated consonant before [ing], it's likely that
the original verb has only 1 of the consonants
- i.e. running -> run, stopping -> stop
- We will search for repeated consonants, and perform 2 attempts:
- 1st attempt: without the repeated consonant (run, stop)
- 2nd attempt: with the repeated consonant (runn, stopp)
"""
# Discontinue if word is too short
if len(word) < 3 or (not word.endswith("ly") and not word.endswith("ing")):
return None
# Register a hit
self.stat_hits["stem"] += 1 # Register hit
# For ly case
if word.endswith("ly"):
# Get the root word
root = word[:-2]
# Recursively get the root
ph_root = self._lookup(root)
# Output if exists
if ph_root is not None:
ph_ly = "L IY0"
ph_joined = " ".join([ph_root, ph_ly])
self.stat_resolves["stem"] += 1
return ph_joined
# For ing case 1
if word.endswith("ing"):
# Get the root word
root = word[:-3]
# Recursively get the root
ph_root = self._lookup(root)
# Output if exists
if ph_root is not None:
ph_ly = "IH0 NG"
ph_joined = " ".join([ph_root, ph_ly])
self.stat_resolves["stem"] += 1
return ph_joined
# For ing case 2
if word.endswith("ing"):
# Get the root word, add [e]
root = word[:-3] + "e"
# Recursively get the root
ph_root = self._lookup(root)
# Output if exists
if ph_root is not None:
ph_ly = "IH0 NG"
ph_joined = " ".join([ph_root, ph_ly])
self.stat_resolves["stem"] += 1
return ph_joined
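The three possessive cases documented in `auto_possessives` reduce to a small suffix rule over the final phoneme. A simplified standalone restatement (the real method also resolves the core word first and tracks hit statistics):

```python
SIBILANTS = {"S", "Z", "CH", "JH", "SH", "ZH"}   # case 1
VOICELESS = {"P", "T", "K", "TH"}                # case 3

def possessive_suffix(last_phoneme: str) -> str:
    # Case 1: sibilant endings take "IH0 Z" (Rose's, Butch's)
    if last_phoneme in SIBILANTS:
        return "IH0 Z"
    # Case 3: voiceless consonants take "S" (Pat's, Clark's)
    if last_phoneme in VOICELESS:
        return "S"
    # Case 2: vowels and voiced consonants take "Z" (Bob's, Fay's)
    return "Z"

print(possessive_suffix("Z"))    # IH0 Z
print(possessive_suffix("T"))    # S
print(possessive_suffix("IY0"))  # Z
```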
|
Aquila-Resolve
|
/Aquila_Resolve-0.1.4-py3-none-any.whl/Aquila_Resolve/processors.py
|
processors.py
|
from __future__ import annotations
import re
from functools import lru_cache
import pywordsegment
from nltk.stem import WordNetLemmatizer
from nltk.stem.snowball import SnowballStemmer
from .h2p import H2p
from .text.replace import replace_first
from .format_ph import with_cb
from .static_dict import get_cmudict
from .text.numbers import normalize_numbers
from .filter import filter_text
from .processors import Processor
from .infer import Infer
from .symbols import contains_alpha, valid_braces
from .data.remote import ensure_nltk
re_digit = re.compile(r"\((\d+)\)")
re_bracket_with_digit = re.compile(r"\(.*\)")
re_phonemes = re.compile(r"\{.*?}")
class G2p:
def __init__(self, device: str = "cpu"):
"""
Initialize the G2p converter.
:param device: Pytorch device.
"""
ensure_nltk() # Ensure nltk data is downloaded
self.dict = get_cmudict() # CMU Dictionary
self.h2p = H2p(preload=True) # H2p parser
# WordNet Lemmatizer - used to find singular form
self.lemmatize = WordNetLemmatizer().lemmatize
# Snowball Stemmer - used to find stem root of words
self.stem = SnowballStemmer("english").stem
self.segment = pywordsegment.WordSegmenter().segment # Word Segmenter
self.p = Processor(self) # Processor for processing text
self.infer = Infer(device=device)
# Morphic Resolution Features
# Searches for depluralized form of words
self.ft_auto_plural = True
# Splits and infers possessive forms of original words
self.ft_auto_pos = True
# Splits contractions ('ll, 'd)
self.ft_auto_contractions = True
# Splits and infers hyphenated words
self.ft_auto_hyphenated = True
# Auto splits possible compound words
self.ft_auto_compound = True
# Analyzes word root stem and infers pronunciation separately
# i.e. 'generously' -> 'generous' + 'ly'
self.ft_stem = True
# Infer
self.ft_infer = True
@lru_cache(maxsize=None)
def lookup(self, text: str, pos: str = None) -> str | None:
"""
Gets the CMU Dictionary entry for a word.
Options for ph_format:
- 'sds' space delimited string
- 'sds_b' space delimited string with curly brackets
- 'list' list of phoneme strings
:param text: Word to lookup
:type: str
:param pos: Part of speech tag (Optional)
:type: str
"""
# Get the CMU Dictionary entry for the word
word = text.lower()
record = self.dict.get(word)
# Has entry, return it directly
if record is not None:
return record
# Check for hyphenated words
if self.ft_auto_hyphenated:
res = self.p.auto_hyphenated(word)
if res is not None:
return res
# Auto Possessive Processor
if self.ft_auto_pos:
res = self.p.auto_possessives(word)
if res is not None:
return res
# Auto Contractions for "ll" or "d"
if self.ft_auto_contractions:
res = self.p.auto_contractions(word)
if res is not None:
return res
# Check for compound words
if self.ft_auto_compound:
res = self.p.auto_compound(word)
if res is not None:
return res
# De-pluralization
if self.ft_auto_plural:
res = self.p.auto_plural(word, pos)
if res is not None:
return res
# Stem check ['ing', 'ly', 'ingly']
if self.ft_stem:
res = self.p.auto_stem(word)
if res is not None:
return res
# Inference with model
if self.ft_infer:
res = self.infer([word])[0]
if res is not None:
return res
return None
def convert(self, text: str, convert_num: bool = True) -> str | None:
"""
Replace a grapheme text line with phonemes.
:param text: Text line to be converted
:param convert_num: True to convert numbers to words
"""
# Convert numbers, if enabled
if convert_num:
valid_braces(text, raise_on_invalid=True)
text = normalize_numbers(text)
# Filter and Tokenize
f_text = filter_text(text, preserve_case=True)
words = self.h2p.tokenize(f_text)
# Run POS tagging
tags = self.h2p.get_tags(words)
# Loop through words and pos tags
in_bracket = False # Flag for in phoneme escape bracket
for word, pos in tags:
# Check valid
if in_bracket:
if word == "}":
in_bracket = False
continue
elif word == "{":
raise ValueError("Unmatched bracket")
if not in_bracket:
if word == "{":
in_bracket = True
continue
elif word == "}":
raise ValueError("Unmatched bracket")
if not contains_alpha(word):
continue
# Heteronyms
if self.h2p.dict.contains(word):
phonemes = self.h2p.dict.get_phoneme(word, pos)
# Normal inference / cmu
else:
phonemes = self.lookup(word, pos)
# Format phonemes
f_ph = with_cb(phonemes)
# Replace word with phonemes
text = replace_first(word, f_ph, text)
# Return text
return text
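`lookup` above is a chain of resolvers tried in a fixed order (dictionary, hyphenation, possessives, contractions, compounds, plurals, stems, model inference); the control flow reduces to this sketch, where the resolver names and phonemes are illustrative only:

```python
def chained_lookup(word, resolvers):
    # Mirrors the fallback order in G2p.lookup: the first resolver
    # returning a non-None result wins; otherwise None.
    for resolve in resolvers:
        res = resolve(word)
        if res is not None:
            return res
    return None

cmu = {"cat": "K AE1 T"}
resolvers = [cmu.get, lambda w: "F AO1 L B AE2 K"]
print(chained_lookup("cat", resolvers))  # K AE1 T
print(chained_lookup("dog", resolvers))  # F AO1 L B AE2 K
```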
|
Aquila-Resolve
|
/Aquila_Resolve-0.1.4-py3-none-any.whl/Aquila_Resolve/g2p.py
|
g2p.py
|
from typing import Optional, Union, overload
# Converts and outputs various formats of phonemes
@overload
def to_sds(ph: str) -> str:
...
@overload
def to_sds(ph: list) -> str:
...
def to_sds(ph: Union[list, str]) -> Optional[str]:
"""
Converts phonemes to space delimited string format
:param ph: Phoneme as str or list, supports nested lists
:return: Phoneme as space delimited string
"""
# Return None if None
if ph is None:
return None
# Return directly if str, with curly brackets removed
if isinstance(ph, str):
return ph.replace("{", "").replace("}", "")
# If is list, convert each element
if isinstance(ph, list):
# If list empty, return None
if len(ph) == 0:
return None
# Case for further lists
if isinstance(ph[0], list):
return to_sds(ph[0]) # Recursive call
# Case if str at index 0, and size 1, return directly
elif isinstance(ph[0], str) and len(ph) == 1:
return ph[0]
# Case if str at index 0, above size 1, return with join
elif isinstance(ph[0], str):
return " ".join(ph)
# Case for none
elif ph[0] is None:
return None
else:
raise TypeError("to_sds() encountered an unexpected nested element type")
# Error if no matches
raise TypeError("to_sds() expects a list or string")
@overload
def to_list(ph: str) -> list:
...
@overload
def to_list(ph: list) -> list:
...
def to_list(ph: Union[str, list]) -> Optional[list]:
"""
Converts phonemes to list format
:param ph: Phoneme as str or list, supports nested lists
:return: Phoneme as list
"""
# Return None if None
if ph is None:
return None
# Return directly if list and index 0 is str
if isinstance(ph, list) and len(ph) > 0 and isinstance(ph[0], str):
return ph
# If space delimited string, convert to list
if isinstance(ph, str):
return ph.split(" ")
# If nested list, convert each element
if isinstance(ph, list):
# If list empty or has None, return None
if len(ph) == 0 or ph[0] is None:
return None
# Case for further lists
if isinstance(ph[0], list):
return to_list(ph[0]) # Recursive call
# Error if no matches
raise TypeError("to_list() expects a list or string")
# Surrounds text with curly brackets
def with_cb(text: str) -> str:
"""
Surrounds text with curly brackets
:param text: Text to surround
:return: Surrounded text
"""
return "{" + text + "}"
|
Aquila-Resolve
|
/Aquila_Resolve-0.1.4-py3-none-any.whl/Aquila_Resolve/format_ph.py
|
format_ph.py
|
from __future__ import annotations
from os.path import exists
import json
from .symbols import get_parent_pos
from .data import DATA_PATH
# Dictionary class
class Dictionary:
def __init__(self, file_name=None):
# If a file name is not provided, use the default file name
self.file_name = file_name if file_name is not None else "heteronyms.json"
self.dictionary = self.load_dictionary(file_name)
# Loads the dictionary from the json file
def load_dictionary(self, path=None) -> dict:
if path is None:
path = DATA_PATH.joinpath(self.file_name)
if not exists(path):
raise FileNotFoundError(f"Dictionary {self.file_name} file not found")
with open(str(path)) as file:
try:
read_dict = json.load(file)
except json.decoder.JSONDecodeError:
raise ValueError(f"Dictionary {self.file_name} file is not valid JSON")
# Check dictionary has at least one entry
if len(read_dict) == 0:
raise ValueError("Dictionary is empty or invalid")
return read_dict
# Check if a word is in the dictionary
def contains(self, word):
word = word.lower()
return word in self.dictionary
# Get the phonetic pronunciation of a word using Part of Speech tag
def get_phoneme(self, word, pos) -> str | None:
# Get the sub-dictionary at dictionary[word]
sub_dict = self.dictionary[word.lower()]
# First, check if the exact pos is a key
if pos in sub_dict:
return sub_dict[pos]
# If not, use the parent pos of the pos tag
parent_pos = get_parent_pos(pos)
if parent_pos is not None:
# Check if the sub_dict contains the parent pos
if parent_pos in sub_dict:
return sub_dict[parent_pos]
# If not, check if the sub_dict contains a DEFAULT key
if "DEFAULT" in sub_dict:
return sub_dict["DEFAULT"]
# If no matches, return None
return None
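The tag-fallback in `get_phoneme` above can be restated standalone (the `entry` dict is a hypothetical example for the heteronym "read", not the shipped heteronyms.json):

```python
def get_phoneme(sub_dict, pos, parent_pos):
    # Lookup order used by Dictionary.get_phoneme: exact POS tag first,
    # then the parent tag, then a DEFAULT entry, else None.
    if pos in sub_dict:
        return sub_dict[pos]
    if parent_pos is not None and parent_pos in sub_dict:
        return sub_dict[parent_pos]
    return sub_dict.get("DEFAULT")

entry = {"VBD": "R EH1 D", "DEFAULT": "R IY1 D"}
print(get_phoneme(entry, "VBD", "VERB"))  # R EH1 D
print(get_phoneme(entry, "NN", "NOUN"))   # R IY1 D
```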
|
Aquila-Resolve
|
/Aquila_Resolve-0.1.4-py3-none-any.whl/Aquila_Resolve/dictionary.py
|
dictionary.py
|
from nltk.tokenize import TweetTokenizer
from nltk import pos_tag
from nltk import pos_tag_sents
from .data.remote import ensure_nltk
from .dictionary import Dictionary
from .filter import filter_text as ft
from .text.replace import replace_first
from . import format_ph as ph
class H2p:
def __init__(self, dict_path=None, preload=False, phoneme_format=""):
"""
Creates a H2p parser
Supported phoneme formats:
- Space delimited
- Space delimited surrounded by { }
:param dict_path: Path to a heteronym dictionary json file. Built-in dictionary will be used if None
:type dict_path: str
:param preload: Preloads the tokenizer and tagger during initialization
:type preload: bool
"""
# Ensure nltk data is available
ensure_nltk()
# Supported phoneme formats
self.phoneme_format = phoneme_format
self.dict = Dictionary(dict_path)
self.tokenize = TweetTokenizer().tokenize
self.get_tags = pos_tag
if preload:
self.preload()
# Method to preload tokenizer and pos_tag
def preload(self):
tokens = self.tokenize("a")
assert tokens == ["a"]
assert pos_tag(tokens)[0][0] == "a"
# Method to check if a text line contains a heteronym
def contains_het(self, text):
# Filter the text
text = ft(text)
# Tokenize
words = self.tokenize(text)
# Check match with dictionary
for word in words:
if self.dict.contains(word):
return True
return False
# Method to replace heteronyms in a text line to phonemes
def replace_het(self, text):
# Filter the text
working_text = ft(text, preserve_case=True)
# Tokenize
words = self.tokenize(working_text)
# Get pos tags
tags = pos_tag(words)
# Loop through words and pos tags
for word, pos in tags:
# Skip if word not in dictionary
if not self.dict.contains(word):
continue
# Get phonemes
phonemes = self.dict.get_phoneme(word, pos)
# Format phonemes
f_ph = ph.with_cb(ph.to_sds(phonemes))
# Replace word with phonemes
text = replace_first(word, f_ph, text)
return text
# Replaces heteronyms in a list of text lines
# Slightly faster than replace_het() called on each line
def replace_het_list(self, text_list):
# Filter the text
working_text_list = [ft(text, preserve_case=True) for text in text_list]
# Tokenize
list_sentence_words = [self.tokenize(text) for text in working_text_list]
# Get pos tags list
tags_list = pos_tag_sents(list_sentence_words)
# Loop through lines
for index, line in enumerate(tags_list):
# Loop through words and pos tags in tags_list index
for word, pos in line:
# Skip if word not in dictionary
if not self.dict.contains(word):
continue
# Get phonemes
phonemes = self.dict.get_phoneme(word, pos)
# Format phonemes
f_ph = ph.with_cb(ph.to_sds(phonemes))
# Replace word with phonemes
text_list[index] = replace_first(word, f_ph, text_list[index])
return text_list
# Method to tag a text line, returns a list of tags
def tag(self, text):
# Filter the text
working_text = ft(text, preserve_case=True)
# Tokenize
words = self.tokenize(working_text)
# Get pos tags
tags = pos_tag(words)
# Only return element 1 of each list
return [tag[1] for tag in tags]
|
Aquila-Resolve
|
/Aquila_Resolve-0.1.4-py3-none-any.whl/Aquila_Resolve/h2p.py
|
h2p.py
|
import re
from itertools import zip_longest
from typing import Dict, Union, List, Set
from . import PhonemizerResult
from .model.model import load_checkpoint
from .model.predictor import Predictor
DEFAULT_PUNCTUATION = "().,:?!/–"
class Phonemizer:
def __init__(
self, predictor: Predictor, lang_phoneme_dict: Dict[str, Dict[str, str]] = None
) -> None:
"""
Initializes a phonemizer with a ready predictor.
Args:
predictor (Predictor): Predictor object carrying the trained transformer model.
lang_phoneme_dict (Dict[str, Dict[str, str]], optional): Word-phoneme dictionary for each language.
"""
self.predictor = predictor
self.lang_phoneme_dict = lang_phoneme_dict
def __call__(
self,
text: Union[str, List[str]],
lang: str,
punctuation: str = DEFAULT_PUNCTUATION,
expand_acronyms: bool = True,
batch_size: int = 8,
) -> Union[str, List[str]]:
"""
Phonemizes a single text or list of texts.
Args:
text (str): Text to phonemize as single string or list of strings.
lang (str): Language used for phonemization.
punctuation (str): Punctuation symbols by which the texts are split.
expand_acronyms (bool): Whether to expand an acronym, e.g. DIY -> D-I-Y.
batch_size (int): Batch size of model to speed up inference.
Returns:
Union[str, List[str]]: Phonemized text as string, or list of strings, respectively.
"""
single_input_string = isinstance(text, str)
texts = [text] if single_input_string else text
result = self.phonemise_list(
texts=texts,
lang=lang,
punctuation=punctuation,
expand_acronyms=expand_acronyms,
batch_size=batch_size,
)
phoneme_lists = ["".join(phoneme_list) for phoneme_list in result.phonemes]
if single_input_string:
return phoneme_lists[0]
else:
return phoneme_lists
def phonemise_list(
self,
texts: List[str],
lang: str,
punctuation: str = DEFAULT_PUNCTUATION,
expand_acronyms: bool = True,
batch_size: int = 8,
) -> PhonemizerResult:
"""Phonemizes a list of texts and returns tokenized texts,
phonemes and word predictions with probabilities.
Args:
texts (List[str]): List texts to phonemize.
lang (str): Language used for phonemization.
punctuation (str): Punctuation symbols by which the texts are split. (Default value = DEFAULT_PUNCTUATION)
expand_acronyms (bool): Whether to expand an acronym, e.g. DIY -> D-I-Y. (Default value = True)
batch_size (int): Batch size of model to speed up inference. (Default value = 8)
Returns:
PhonemizerResult: Object containing original texts, phonemes, split texts, split phonemes, and predictions.
"""
punc_set = set(punctuation + "- ")
punc_pattern = re.compile(f'([{punctuation + " "}])')
split_text, cleaned_words = [], set()
for text in texts:
cleaned_text = "".join([t for t in text if t.isalnum() or t in punc_set])
split = re.split(punc_pattern, cleaned_text)
split = [s for s in split if len(s) > 0]
split_text.append(split)
cleaned_words.update(split)
# collect dictionary phonemes for words and hyphenated words
word_phonemes = {
word: self._get_dict_entry(word=word, lang=lang, punc_set=punc_set)
for word in cleaned_words
}
# if word is not in dictionary, split it into subwords
words_to_split = [w for w in cleaned_words if word_phonemes[w] is None]
word_splits = dict()
for word in words_to_split:
key = word
word = self._expand_acronym(word) if expand_acronyms else word
word_split = re.split(r"([-])", word)
word_splits[key] = word_split
# collect dictionary entries of subwords
subwords = {w for values in word_splits.values() for w in values}
subwords = {w for w in subwords if w not in word_phonemes}
for subword in subwords:
word_phonemes[subword] = self._get_dict_entry(
word=subword, lang=lang, punc_set=punc_set
)
# predict all subwords that are missing in the phoneme dict
words_to_predict = [
word
for word, phons in word_phonemes.items()
if phons is None and len(word_splits.get(word, [])) <= 1
]
predictions = self.predictor(
words=words_to_predict, lang=lang, batch_size=batch_size
)
word_phonemes.update({pred.word: pred.phonemes for pred in predictions})
pred_dict = {pred.word: pred for pred in predictions}
# collect all phonemes
phoneme_lists = []
for text in split_text:
text_phons = [
self._get_phonemes(
word=word, word_phonemes=word_phonemes, word_splits=word_splits
)
for word in text
]
phoneme_lists.append(text_phons)
phonemes_joined = ["".join(phoneme_list) for phoneme_list in phoneme_lists]
return PhonemizerResult(
text=texts,
phonemes=phonemes_joined,
split_text=split_text,
split_phonemes=phoneme_lists,
predictions=pred_dict,
)
def _get_dict_entry(
self, word: str, lang: str, punc_set: Set[str]
) -> Union[str, None]:
if word in punc_set or len(word) == 0:
return word
if not self.lang_phoneme_dict or lang not in self.lang_phoneme_dict:
return None
phoneme_dict = self.lang_phoneme_dict[lang]
if word in phoneme_dict:
return phoneme_dict[word]
elif word.lower() in phoneme_dict:
return phoneme_dict[word.lower()]
elif word.title() in phoneme_dict:
return phoneme_dict[word.title()]
else:
return None
@staticmethod
def _expand_acronym(word: str) -> str:
subwords = []
for subword in word.split("-"):
expanded = []
for a, b in zip_longest(subword, subword[1:]):
expanded.append(a)
if b is not None and b.isupper():
expanded.append("-")
expanded = "".join(expanded)
subwords.append(expanded)
return "-".join(subwords)
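The acronym expansion above inserts a hyphen before every uppercase letter that follows another character, within each `-`-separated subword. A standalone sketch of the same logic:

```python
from itertools import zip_longest

def expand_acronym(word: str) -> str:
    # Insert '-' before each uppercase letter that follows another character.
    subwords = []
    for subword in word.split("-"):
        expanded = []
        for a, b in zip_longest(subword, subword[1:]):
            expanded.append(a)
            if b is not None and b.isupper():
                expanded.append("-")
        subwords.append("".join(expanded))
    return "-".join(subwords)

print(expand_acronym("DIY"))   # D-I-Y
print(expand_acronym("WiFi"))  # Wi-Fi
```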
@staticmethod
def _get_phonemes(
word: str,
word_phonemes: Dict[str, Union[str, None]],
word_splits: Dict[str, List[str]],
) -> str:
phons = word_phonemes[word]
if phons is None:
subwords = word_splits[word]
subphons = [word_phonemes[w] for w in subwords]
phons = "".join(subphons)
return phons
@classmethod
def from_checkpoint(
cls,
checkpoint_path: str,
device="cpu",
lang_phoneme_dict: Dict[str, Dict[str, str]] = None,
) -> "Phonemizer":
"""Initializes a Phonemizer object from a model checkpoint (.pt file).
Args:
checkpoint_path (str): Path to the .pt checkpoint file.
device (str): Device to send the model to ('cpu' or 'cuda'). (Default value = 'cpu')
lang_phoneme_dict (Dict[str, Dict[str, str]], optional): Word-phoneme dictionary for each language.
Returns:
Phonemizer: Phonemizer object carrying the loaded model and, optionally, a phoneme dictionary.
"""
model, checkpoint = load_checkpoint(checkpoint_path, device=device)
applied_phoneme_dict = None
if lang_phoneme_dict is not None:
applied_phoneme_dict = lang_phoneme_dict
elif "phoneme_dict" in checkpoint:
applied_phoneme_dict = checkpoint["phoneme_dict"]
preprocessor = checkpoint["preprocessor"]
predictor = Predictor(model=model, preprocessor=preprocessor)
# logger = get_logger(__name__)
# model_step = checkpoint['step']
# logger.debug(f'Initializing phonemizer with model step {model_step}')
return Phonemizer(predictor=predictor, lang_phoneme_dict=applied_phoneme_dict)
|
Aquila-Resolve
|
/Aquila_Resolve-0.1.4-py3-none-any.whl/Aquila_Resolve/models/dp/phonemizer.py
|
phonemizer.py
|
from typing import List, Iterable, Dict, Tuple, Any
class LanguageTokenizer:
"""Simple tokenizer for language to index mapping."""
def __init__(self, languages: List[str]) -> None:
"""
Initializes a language tokenizer for a list of languages.
Args:
languages (List[str]): List of languages, e.g. ['de', 'en'].
"""
self.lang_index = {l: i for i, l in enumerate(languages)}
self.index_lang = {i: l for i, l in enumerate(languages)}
def __call__(self, lang: str) -> int:
"""
Maps the language to an index.
Args:
lang (str): Language to be mapped, e.g. 'de'.
Returns:
int: Index of language.
"""
if lang not in self.lang_index:
            raise ValueError(
                f"Language not supported: {lang}. "
                f"Supported languages: {list(self.lang_index.keys())}"
            )
return self.lang_index[lang]
def decode(self, index: int) -> str:
"""Inverts the index mapping of a language.
Args:
index (int): Index of language.
Returns:
str: Language for the given index.
"""
return self.index_lang[index]
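The forward and inverse maps built in `LanguageTokenizer.__init__` are two dict comprehensions over the language list; in isolation:

```python
# Mirrors LanguageTokenizer.__init__ for an example language list.
languages = ["de", "en"]
lang_index = {lang: i for i, lang in enumerate(languages)}
index_lang = {i: lang for i, lang in enumerate(languages)}

print(lang_index)     # {'de': 0, 'en': 1}
print(index_lang[1])  # en
```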
class SequenceTokenizer:
"""Tokenizes text and optionally attaches language-specific start index (and non-specific end index)."""
def __init__(
self,
symbols: List[str],
languages: List[str],
char_repeats: int,
lowercase: bool = True,
append_start_end: bool = True,
pad_token="_",
end_token="<end>",
) -> None:
"""
Initializes a SequenceTokenizer object.
Args:
symbols (List[str]): Character (or phoneme) symbols.
languages (List[str]): List of languages.
char_repeats (int): Number of repeats for each character to allow the forward model to map to longer
phoneme sequences. Example: for char_repeats=2 the tokenizer maps hi -> hhii.
lowercase (bool): Whether to lowercase the input word.
append_start_end (bool): Whether to append special start and end tokens. Start and end tokens are
index mappings of the chosen language.
pad_token (str): Special pad token for index 0.
end_token (str): Special end of sequence token.
"""
self.languages = languages
self.lowercase = lowercase
self.char_repeats = char_repeats
self.append_start_end = append_start_end
self.pad_index = 0
self.token_to_idx = {pad_token: self.pad_index}
self.special_tokens = {pad_token, end_token}
for lang in languages:
lang_token = self._make_start_token(lang)
self.token_to_idx[lang_token] = len(self.token_to_idx)
self.special_tokens.add(lang_token)
self.token_to_idx[end_token] = len(self.token_to_idx)
self.end_index = self.token_to_idx[end_token]
for symbol in symbols:
self.token_to_idx[symbol] = len(self.token_to_idx)
self.idx_to_token = {i: s for s, i in self.token_to_idx.items()}
self.vocab_size = len(self.idx_to_token)
def __call__(self, sentence: Iterable[str], language: str) -> List[int]:
"""
Maps a sequence of symbols for a language to a sequence of indices.
Args:
sentence (Iterable[str]): Sentence (or word) as a sequence of symbols.
language (str): Language for the mapping that defines the start and end token indices.
Returns:
List[int]: Sequence of token indices.
"""
sentence = [item for item in sentence for i in range(self.char_repeats)]
if language not in self.languages:
raise ValueError(
f"Language not supported: {language}. Supported languages: {self.languages}"
)
if self.lowercase:
sentence = [s.lower() for s in sentence]
sequence = [self.token_to_idx[c] for c in sentence if c in self.token_to_idx]
if self.append_start_end:
sequence = [self._get_start_index(language)] + sequence + [self.end_index]
return sequence
def decode(
self, sequence: Iterable[int], remove_special_tokens: bool = False
) -> List[str]:
"""Maps a sequence of indices to a sequence of symbols.
Args:
sequence (Iterable[int]): Encoded sequence to be decoded.
remove_special_tokens (bool): Whether to remove special tokens such as pad or start and end tokens. (Default value = False)
Returns:
List[str]: Decoded sequence of symbols.
"""
sequence = list(sequence)
if self.append_start_end:
sequence = (
sequence[:1] + sequence[1 : -1 : self.char_repeats] + sequence[-1:]
)
else:
sequence = sequence[:: self.char_repeats]
decoded = [
self.idx_to_token[int(t)] for t in sequence if int(t) in self.idx_to_token
]
if remove_special_tokens:
decoded = [d for d in decoded if d not in self.special_tokens]
return decoded
def _get_start_index(self, language: str) -> int:
lang_token = self._make_start_token(language)
return self.token_to_idx[lang_token]
def _make_start_token(self, language: str) -> str:
return "<" + language + ">"
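With `char_repeats=2` (the value is configurable), encoding repeats each input symbol and `decode` strides back over the repeats while keeping the start and end tokens at the boundaries; a sketch of that slicing:

```python
# Sketch of the char_repeats round trip in SequenceTokenizer (char_repeats=2).
char_repeats = 2
word = "hi"
repeated = [c for c in word for _ in range(char_repeats)]  # ['h', 'h', 'i', 'i']
encoded = ["<en>"] + repeated + ["<end>"]  # append_start_end=True

# decode keeps index 0 and -1, and strides over the repeated middle:
decoded = encoded[:1] + encoded[1:-1:char_repeats] + encoded[-1:]
print(decoded)  # ['<en>', 'h', 'i', '<end>']
```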
class Preprocessor:
"""Preprocesses data for a phonemizer training session."""
def __init__(
self,
lang_tokenizer: LanguageTokenizer,
text_tokenizer: SequenceTokenizer,
phoneme_tokenizer: SequenceTokenizer,
) -> None:
"""
Initializes a preprocessor object.
Args:
lang_tokenizer (LanguageTokenizer): Tokenizer for input language.
text_tokenizer (SequenceTokenizer): Tokenizer for input text.
phoneme_tokenizer (SequenceTokenizer): Tokenizer for output phonemes.
"""
self.lang_tokenizer = lang_tokenizer
self.text_tokenizer = text_tokenizer
self.phoneme_tokenizer = phoneme_tokenizer
def __call__(
self, item: Tuple[str, Iterable[str], Iterable[str]]
) -> Tuple[int, List[int], List[int]]:
"""
Preprocesses a data point.
Args:
item (Tuple): Data point comprised of (language, input text, output phonemes).
        Returns:
            Tuple: Preprocessed data point as (language tokens, input text tokens, output phoneme tokens).
"""
lang, text, phonemes = item
lang_token = self.lang_tokenizer(lang)
text_tokens = self.text_tokenizer(text, lang)
phoneme_tokens = self.phoneme_tokenizer(phonemes, lang)
return lang_token, text_tokens, phoneme_tokens
@classmethod
def from_config(cls, config: Dict[str, Any]) -> "Preprocessor":
"""Initializes a preprocessor from a config.
Args:
config (Dict[str, Any]): Dictionary containing preprocessing hyperparams.
Returns:
Preprocessor: Preprocessor object.
"""
text_symbols = config["preprocessing"]["text_symbols"]
phoneme_symbols = config["preprocessing"]["phoneme_symbols"]
lang_symbols = config["preprocessing"]["languages"]
char_repeats = config["preprocessing"]["char_repeats"]
lowercase = config["preprocessing"]["lowercase"]
lang_tokenizer = LanguageTokenizer(lang_symbols)
text_tokenizer = SequenceTokenizer(
symbols=text_symbols,
languages=lang_symbols,
char_repeats=char_repeats,
lowercase=lowercase,
append_start_end=True,
)
phoneme_tokenizer = SequenceTokenizer(
phoneme_symbols,
languages=lang_symbols,
lowercase=False,
char_repeats=1,
append_start_end=True,
)
return Preprocessor(
lang_tokenizer=lang_tokenizer,
text_tokenizer=text_tokenizer,
phoneme_tokenizer=phoneme_tokenizer,
)
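`Preprocessor.from_config` reads only the `preprocessing` section of the config; a hypothetical minimal config with the keys it expects (all values below are illustrative, not the shipped ones):

```python
# Illustrative config for Preprocessor.from_config; symbol lists are made up.
config = {
    "preprocessing": {
        "languages": ["de", "en"],
        "text_symbols": list("abcdefghijklmnopqrstuvwxyz"),
        "phoneme_symbols": ["a", "b", "d", "e"],
        "char_repeats": 2,
        "lowercase": True,
    }
}
print(sorted(config["preprocessing"]))
```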
|
Aquila-Resolve
|
/Aquila_Resolve-0.1.4-py3-none-any.whl/Aquila_Resolve/models/dp/preprocessing/text.py
|
text.py
|
from typing import Dict, List, Tuple
import torch
from torch.nn.utils.rnn import pad_sequence
from .. import Prediction
from ..model.model import load_checkpoint
from ..model.utils import _get_len_util_stop
from ..preprocessing.text import Preprocessor
from ..preprocessing.utils import u_batchify, u_product
class Predictor:
"""Performs model predictions on a batch of inputs."""
def __init__(self, model: torch.nn.Module, preprocessor: Preprocessor) -> None:
"""
        Initializes a Predictor object with a trained transformer model and a preprocessor.
Args:
model (Model): Trained transformer model.
preprocessor (Preprocessor): Preprocessor corresponding to the model configuration.
"""
self.model = model
self.text_tokenizer = preprocessor.text_tokenizer
self.phoneme_tokenizer = preprocessor.phoneme_tokenizer
def __call__(
self, words: List[str], lang: str, batch_size: int = 8
) -> List[Prediction]:
"""
Predicts phonemes for a list of words.
Args:
words (list): List of words to predict.
lang (str): Language of texts.
batch_size (int): Size of batch for model input to speed up inference.
Returns:
List[Prediction]: A list of result objects containing (word, phonemes, phoneme_tokens, token_probs, confidence)
"""
predictions = dict()
valid_texts = set()
# handle words that result in an empty input to the model
for word in words:
input = self.text_tokenizer(sentence=word, language=lang)
decoded = self.text_tokenizer.decode(
sequence=input, remove_special_tokens=True
)
if len(decoded) == 0:
predictions[word] = ([], [])
else:
valid_texts.add(word)
valid_texts = sorted(list(valid_texts), key=lambda x: len(x))
batch_pred = self._predict_batch(
texts=valid_texts, batch_size=batch_size, language=lang
)
predictions.update(batch_pred)
output = []
for word in words:
tokens, probs = predictions[word]
out_phons = self.phoneme_tokenizer.decode(
sequence=tokens, remove_special_tokens=True
)
out_phons_tokens = self.phoneme_tokenizer.decode(
sequence=tokens, remove_special_tokens=False
)
output.append(
Prediction(
word=word,
phonemes="".join(out_phons),
phoneme_tokens=out_phons_tokens,
confidence=u_product(probs),
token_probs=probs,
)
)
return output
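`u_product` (imported from the preprocessing utils) is assumed to reduce the per-token probabilities into a single confidence score by multiplication; a minimal stand-in:

```python
from functools import reduce
from operator import mul

def u_product(probs):
    # Multiply all token probabilities together; an empty input gives 1.0.
    return reduce(mul, probs, 1.0)

print(u_product([0.9, 0.8, 0.5]))
```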
def _predict_batch(
self, texts: List[str], batch_size: int, language: str
) -> Dict[str, Tuple[List[int], List[float]]]:
"""
Returns dictionary with key = word and val = Tuple of (phoneme tokens, phoneme probs)
"""
predictions = dict()
text_batches = u_batchify(texts, batch_size)
for text_batch in text_batches:
input_batch, lens_batch = [], []
for text in text_batch:
input = self.text_tokenizer(text, language)
input_batch.append(torch.tensor(input))
lens_batch.append(torch.tensor(len(input)))
input_batch = pad_sequence(
sequences=input_batch, batch_first=True, padding_value=0
)
lens_batch = torch.stack(lens_batch)
start_indx = self.phoneme_tokenizer._get_start_index(language)
start_inds = torch.tensor([start_indx] * input_batch.size(0)).to(
input_batch.device
)
batch = {
"text": input_batch,
"text_len": lens_batch,
"start_index": start_inds,
}
with torch.no_grad():
output_batch, probs_batch = self.model.generate(batch)
output_batch, probs_batch = output_batch.cpu(), probs_batch.cpu()
for text, output, probs in zip(text_batch, output_batch, probs_batch):
seq_len = _get_len_util_stop(output, self.phoneme_tokenizer.end_index)
predictions[text] = (
output[:seq_len].tolist(),
probs[:seq_len].tolist(),
)
return predictions
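`pad_sequence` right-pads the variable-length token sequences into one rectangular batch before the forward pass; the same effect with plain lists (so the sketch runs without torch):

```python
# List-based equivalent of pad_sequence(..., batch_first=True, padding_value=0).
seqs = [[5, 3, 9], [7, 2]]
max_len = max(len(s) for s in seqs)
batch = [s + [0] * (max_len - len(s)) for s in seqs]
print(batch)  # [[5, 3, 9], [7, 2, 0]]
```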
@classmethod
def from_checkpoint(cls, checkpoint_path: str, device="cpu") -> "Predictor":
"""Initializes the predictor from a checkpoint (.pt file).
Args:
checkpoint_path (str): Path to the checkpoint file (.pt).
device (str): Device to load the model on ('cpu' or 'cuda'). (Default value = 'cpu').
Returns:
Predictor: Predictor object.
"""
model, checkpoint = load_checkpoint(checkpoint_path, device=device)
preprocessor = checkpoint["preprocessor"]
return Predictor(model=model, preprocessor=preprocessor)
|
Aquila-Resolve
|
/Aquila_Resolve-0.1.4-py3-none-any.whl/Aquila_Resolve/models/dp/model/predictor.py
|
predictor.py
|
from abc import ABC, abstractmethod
from enum import Enum
from typing import Tuple, Dict, Any
import torch
import torch.nn as nn
from .utils import _make_len_mask, _generate_square_subsequent_mask, PositionalEncoding
from ..preprocessing.text import Preprocessor
class ModelType(Enum):
TRANSFORMER = "transformer"
AUTOREG_TRANSFORMER = "autoreg_transformer"
def is_autoregressive(self) -> bool:
"""
        Returns:
            bool: Whether the model is autoregressive.
"""
return self in {ModelType.AUTOREG_TRANSFORMER} # pragma: no cover
class Model(torch.nn.Module, ABC):
def __init__(self):
super().__init__()
@abstractmethod
def generate(
self, batch: Dict[str, torch.Tensor]
) -> Tuple[torch.Tensor, torch.Tensor]:
"""
Generates phonemes for a text batch
Args:
batch (Dict[str, torch.Tensor]): Dictionary containing 'text' (tokenized text tensor),
'text_len' (text length tensor),
'start_index' (phoneme start indices for AutoregressiveTransformer)
Returns:
Tuple[torch.Tensor, torch.Tensor]: The predictions. The first element is a tensor (phoneme tokens)
and the second element is a tensor (phoneme token probabilities)
"""
pass # pragma: no cover
class AutoregressiveTransformer(Model):
def __init__(
self,
encoder_vocab_size: int,
decoder_vocab_size: int,
end_index: int,
d_model=512,
d_fft=1024,
encoder_layers=4,
decoder_layers=4,
dropout=0.1,
heads=1,
):
super().__init__()
self.end_index = end_index
self.d_model = d_model
self.encoder = nn.Embedding(encoder_vocab_size, d_model)
self.pos_encoder = PositionalEncoding(d_model, dropout)
self.decoder = nn.Embedding(decoder_vocab_size, d_model)
self.pos_decoder = PositionalEncoding(d_model, dropout)
self.transformer = nn.Transformer(
d_model=d_model,
nhead=heads,
num_encoder_layers=encoder_layers,
num_decoder_layers=decoder_layers,
dim_feedforward=d_fft,
dropout=dropout,
activation="relu",
)
self.fc_out = nn.Linear(d_model, decoder_vocab_size)
@torch.jit.export
def generate(
self, batch: Dict[str, torch.Tensor], max_len: int = 100
) -> Tuple[torch.Tensor, torch.Tensor]:
"""
Inference pass on a batch of tokenized texts.
Args:
batch (Dict[str, torch.Tensor]): Dictionary containing the input to the model with entries 'text'
and 'start_index'
max_len (int): Max steps of the autoregressive inference loop.
Returns:
Tuple: Predictions. The first element is a Tensor of phoneme tokens and the second element
is a Tensor of phoneme token probabilities.
"""
input = batch["text"]
start_index = batch["start_index"]
batch_size = input.size(0)
input = input.transpose(0, 1) # shape: [T, N]
src_pad_mask = _make_len_mask(input).to(input.device)
with torch.no_grad():
input = self.encoder(input)
input = self.pos_encoder(input)
input = self.transformer.encoder(input, src_key_padding_mask=src_pad_mask)
out_indices = start_index.unsqueeze(0)
out_logits = []
for i in range(max_len):
tgt_mask = _generate_square_subsequent_mask(i + 1).to(input.device)
output = self.decoder(out_indices)
output = self.pos_decoder(output)
output = self.transformer.decoder(
output,
input,
memory_key_padding_mask=src_pad_mask,
tgt_mask=tgt_mask,
)
output = self.fc_out(output) # shape: [T, N, V]
out_tokens = output.argmax(2)[-1:, :]
out_logits.append(output[-1:, :, :])
out_indices = torch.cat([out_indices, out_tokens], dim=0)
stop_rows, _ = torch.max(out_indices == self.end_index, dim=0)
if torch.sum(stop_rows) == batch_size:
break
out_indices = out_indices.transpose(0, 1) # out shape [N, T]
out_logits = torch.cat(out_logits, dim=0).transpose(0, 1) # out shape [N, T, V]
out_logits = out_logits.softmax(-1)
out_probs = torch.ones((out_indices.size(0), out_indices.size(1)))
for i in range(out_indices.size(0)):
for j in range(0, out_indices.size(1) - 1):
out_probs[i, j + 1] = out_logits[i, j].max()
return out_indices, out_probs
@classmethod
def from_config(cls, config: Dict[str, Any]) -> "AutoregressiveTransformer":
"""
Initializes an autoregressive Transformer model from a config.
Args:
config (dict): Configuration containing the hyperparams.
Returns:
AutoregressiveTransformer: Model object.
"""
preprocessor = Preprocessor.from_config(config)
return AutoregressiveTransformer(
encoder_vocab_size=preprocessor.text_tokenizer.vocab_size,
decoder_vocab_size=preprocessor.phoneme_tokenizer.vocab_size,
end_index=preprocessor.phoneme_tokenizer.end_index,
d_model=config["model"]["d_model"],
d_fft=config["model"]["d_fft"],
encoder_layers=config["model"]["layers"],
decoder_layers=config["model"]["layers"],
dropout=config["model"]["dropout"],
heads=config["model"]["heads"],
)
def create_model(model_type: ModelType, config: Dict[str, Any]) -> Model:
"""
Initializes a model from a config for a given model type.
Args:
model_type (ModelType): Type of model to be initialized.
config (dict): Configuration containing hyperparams.
    Returns:
        Model: Model object.
"""
if model_type is not ModelType.AUTOREG_TRANSFORMER: # pragma: no cover
raise ValueError(
f"Unsupported model type: {model_type}. "
"Supported type: AUTOREG_TRANSFORMER"
)
return AutoregressiveTransformer.from_config(config)
def load_checkpoint(
checkpoint_path: str, device: str = "cpu"
) -> Tuple[Model, Dict[str, Any]]:
"""
Initializes a model from a checkpoint (.pt file).
Args:
checkpoint_path (str): Path to checkpoint file (.pt).
device (str): Device to put the model to ('cpu' or 'cuda').
    Returns:
        Tuple: The first element is a Model (the loaded model) and the second
        element is the full checkpoint dictionary.
"""
device = torch.device(device)
checkpoint = torch.load(checkpoint_path, map_location=device)
model_type = checkpoint["config"]["model"]["type"]
model_type = ModelType(model_type)
model = create_model(model_type, config=checkpoint["config"])
model.load_state_dict(checkpoint["model"])
model.eval()
return model, checkpoint
|
Aquila-Resolve
|
/Aquila_Resolve-0.1.4-py3-none-any.whl/Aquila_Resolve/models/dp/model/model.py
|
model.py
|
import requests
import shutil
import nltk
from warnings import warn
from tqdm.auto import tqdm
from . import DATA_PATH
_model = DATA_PATH.joinpath("model.pt")
_model_url = (
"https://huggingface.co/ionite/Aquila-Resolve/resolve/main/model.pt" # Download URL
)
# Git LFS Pointer URL
_model_ptr = "https://huggingface.co/ionite/Aquila-Resolve/raw/main/model.pt"
def check_model() -> bool:
"""Checks if the model matches checksums"""
result = requests.get(_model_ptr).text.split()
if result is None or len(result) < 6 or not result[3].startswith("sha256:"):
warn("Could not retrieve remote model checksum")
return False
remote_sha256 = result[3][7:]
actual_sha256 = get_checksum(_model)
return remote_sha256 == actual_sha256
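`check_model` relies on the standard Git LFS pointer layout, in which the fourth whitespace-separated token is `sha256:<digest>`; the pointer text below is illustrative:

```python
# Parse a (made-up) Git LFS pointer the way check_model does.
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:0123abcd\n"
    "size 12345\n"
)
tokens = pointer.split()
assert tokens[3].startswith("sha256:")
remote_sha256 = tokens[3][len("sha256:"):]
print(remote_sha256)  # 0123abcd
```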
def download(update: bool = True) -> bool:
"""
    Downloads the model checkpoint.
    :param update: If True, re-download when the local checksum does not match the
        remote one; if False, download only when no local file exists
    :return: Existence (and, when update is True, checksum match) of the model file
"""
# Check if the model is already downloaded
if not update and _model.exists():
return True
if update and _model.exists() and check_model():
return True
# Download the model
with requests.get(_model_url, stream=True) as r:
r.raise_for_status() # Raise error for download failure
total_size = int(r.headers.get("content-length", 0))
with tqdm.wrapattr(
r.raw, "read", total=total_size, desc="Downloading model checkpoint"
) as raw, _model.open("wb") as f:
shutil.copyfileobj(raw, f)
if update:
return _model.exists() and check_model() # Update flag, verify checksum also
return _model.exists() # For no update flag, just check existence
def ensure_download() -> None:
"""Ensures the model is downloaded"""
if not download(update=False):
raise RuntimeError(
"Model could not be downloaded. Visit "
"https://huggingface.co/ionite/Aquila-Resolve/blob/main/model.pt "
"to download the model checkpoint manually and place it within the "
"Aquila_Resolve/data/ folder."
)
def ensure_nltk() -> None: # pragma: no cover
"""Ensures all required NLTK Data is installed"""
required = {
"wordnet": "corpora/wordnet.zip",
"omw-1.4": "corpora/omw-1.4.zip",
"averaged_perceptron_tagger": "taggers/averaged_perceptron_tagger.zip",
}
for name, url in required.items():
try:
nltk.data.find(url)
except LookupError:
nltk.download(name, raise_on_error=True)
def check_updates() -> None:
"""Checks if the model matches the latest checksum"""
if not check_model():
warn(
"Local model checkpoint did not match latest remote checksum. "
"You can use Aquila_Resolve.download() to download the latest model."
)
def get_checksum(file: str, block_size: int = 65536) -> str:
"""
Calculates the Sha256 checksum of a file
:param file: Path to file
:param block_size: Block size for reading
:return: Checksum of file
"""
import hashlib
s = hashlib.sha256()
with open(file, "rb") as f:
for block in iter(lambda: f.read(block_size), b""):
s.update(block)
return s.hexdigest()
|
Aquila-Resolve
|
/Aquila_Resolve-0.1.4-py3-none-any.whl/Aquila_Resolve/data/remote.py
|
remote.py
|
import inflect
import re
_magnitudes = ["trillion", "billion", "million", "thousand", "hundred", "m", "b", "t"]
_magnitudes_key = {"m": "million", "b": "billion", "t": "trillion"}
_measurements = "(f|c|k|d|m|km|ft)"
_measurements_key = {
"f": "fahrenheit",
"c": "celsius",
"k": "thousand",
"m": "meters",
"km": "kilometers",
"ft": "feet",
}
_currency_key = {"$": "dollar", "£": "pound", "€": "euro", "₩": "won"}
_inflect = inflect.engine()
_comma_number_re = re.compile(r"([0-9][0-9,]+[0-9])")
_decimal_number_re = re.compile(r"([0-9]+\.[0-9]+)")
_currency_re = re.compile(
r"([$€£₩])([0-9.,]*[0-9]+)(?:[ ]?({})(?=[^a-zA-Z]|$))?".format(
"|".join(_magnitudes)
),
re.IGNORECASE,
)
# _measurement_re = re.compile(r'([0-9.,]*[0-9]+(\s)?{}\b)'.format(_measurements), re.IGNORECASE)
_measurement_re = re.compile(
r"(?<!\{)" + r"([\d.,]*\d+(\s)?{}\b)".format(_measurements) + r"(?![\w\s]*[}])",
re.IGNORECASE,
)
_ordinal_re = re.compile(r"[0-9]+(st|nd|rd|th)")
_range_re = re.compile(r"(?<=[0-9])+(-)(?=[0-9])+.*?")
_roman_re = re.compile(
r"\b(?=[MDCLXVI]+\b)M{0,4}(CM|CD|D?C{0,3})(XC|XL|L?X{0,3})(IX|IV|V?I{2,3})\b"
) # avoid I
_multiply_re = re.compile(r"(\b[0-9]+)(x)([0-9]+)")
# _number_re = re.compile(r"[0-9]+'s|[0-9]+s|[0-9]+")
_number_re = re.compile(r"[0-9]+'s|[0-9]+s|(?<!\{)[0-9]+(?![\w\s]*[}])")
def _remove_commas(m):
return m.group(1).replace(",", "")
def _expand_decimal_point(m):
return m.group(1).replace(".", " point ")
def _expand_currency(m):
currency = _currency_key[m.group(1)]
quantity = m.group(2)
magnitude = m.group(3)
# remove commas from quantity to be able to convert to numerical
quantity = quantity.replace(",", "")
# check for million, billion, etc...
if magnitude is not None and magnitude.lower() in _magnitudes:
if len(magnitude) == 1:
magnitude = _magnitudes_key[magnitude.lower()]
return "{} {} {}".format(_expand_hundreds(quantity), magnitude, currency + "s")
parts = quantity.split(".")
if len(parts) > 2:
return quantity + " " + currency + "s" # Unexpected format
dollars = int(parts[0]) if parts[0] else 0
cents = int(parts[1]) if len(parts) > 1 and parts[1] else 0
if dollars and cents:
dollar_unit = currency if dollars == 1 else currency + "s"
cent_unit = "cent" if cents == 1 else "cents"
return "{} {}, {} {}".format(
_expand_hundreds(dollars),
dollar_unit,
_inflect.number_to_words(cents),
cent_unit,
)
elif dollars:
dollar_unit = currency if dollars == 1 else currency + "s"
return "{} {}".format(_expand_hundreds(dollars), dollar_unit)
elif cents:
cent_unit = "cent" if cents == 1 else "cents"
return "{} {}".format(_inflect.number_to_words(cents), cent_unit)
else:
return "zero" + " " + currency + "s"
def _expand_hundreds(text):
number = float(text)
if 1000 < number < 10000 and (number % 100 == 0) and (number % 1000 != 0):
return _inflect.number_to_words(int(number / 100)) + " hundred"
else:
return _inflect.number_to_words(text)
def _expand_ordinal(m):
return _inflect.number_to_words(m.group(0))
def _expand_measurement(m):
_, number, measurement = re.split(r"(\d+(?:\.\d+)?)", m.group(0))
number = _inflect.number_to_words(number)
measurement = "".join(measurement.split())
measurement = _measurements_key.get(measurement.lower(), measurement)
# if measurement is plural, and number is singular, remove the 's'
    if number == "one" and measurement.endswith("s"):
# Remove the 's' from the end of the measurement
measurement = measurement[:-1]
return "{} {}".format(number, measurement)
def _expand_range(m):
return " to "
def _expand_multiply(m):
left = m.group(1)
right = m.group(3)
return "{} by {}".format(left, right)
def _expand_roman(m):
# from https://stackoverflow.com/questions/19308177/converting-roman-numerals-to-integers-in-python
roman_numerals = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}
result = 0
num = m.group(0)
for i, c in enumerate(num):
if (i + 1) == len(num) or roman_numerals[c] >= roman_numerals[num[i + 1]]:
result += roman_numerals[c]
else:
result -= roman_numerals[c]
return str(result)
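The Roman-numeral conversion inside `_expand_roman` is self-contained; factored out as a standalone function:

```python
def roman_to_int(num: str) -> int:
    # Subtract a symbol when a larger one follows it (IV -> 4), else add it.
    vals = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}
    result = 0
    for i, c in enumerate(num):
        if i + 1 == len(num) or vals[c] >= vals[num[i + 1]]:
            result += vals[c]
        else:
            result -= vals[c]
    return result

print(roman_to_int("XIV"))      # 14
print(roman_to_int("MCMXCIV"))  # 1994
```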
def _expand_number(m):
_, number, suffix = re.split(r"(\d+(?:'?\d+)?)", m.group(0))
number = int(number)
if 1000 < number < 10000 and (number % 100 == 0) and (number % 1000 != 0):
text = _inflect.number_to_words(number // 100) + " hundred"
elif 1000 < number < 3000:
if number == 2000:
text = "two thousand"
elif 2000 < number < 2010:
text = "two thousand " + _inflect.number_to_words(number % 100)
elif number % 100 == 0:
text = _inflect.number_to_words(number // 100) + " hundred"
else:
number = _inflect.number_to_words(
number, andword="", zero="oh", group=2
).replace(", ", " ")
number = re.sub(r"-", " ", number)
text = number
else:
number = _inflect.number_to_words(number, andword="and")
number = re.sub(r"-", " ", number)
number = re.sub(r",", "", number)
text = number
if suffix in ("'s", "s"):
if text[-1] == "y":
text = text[:-1] + "ies"
else:
text = text + suffix
return text
def normalize_numbers(text: str) -> str:
text = re.sub(_comma_number_re, _remove_commas, text)
text = re.sub(_currency_re, _expand_currency, text)
text = re.sub(_decimal_number_re, _expand_decimal_point, text)
text = re.sub(_ordinal_re, _expand_ordinal, text)
# text = re.sub(_range_re, _expand_range, text)
text = re.sub(_measurement_re, _expand_measurement, text)
text = re.sub(_roman_re, _expand_roman, text)
text = re.sub(_multiply_re, _expand_multiply, text)
text = re.sub(_number_re, _expand_number, text)
return text
|
Aquila-Resolve
|
/Aquila_Resolve-0.1.4-py3-none-any.whl/Aquila_Resolve/text/numbers.py
|
numbers.py
|
# ArPy
ArPy is a light-weight Python library for
communicating with [Arduino microcontroller boards](http://www.arduino.cc/) from a connected computer using
standard serial IO, either over a physical wire
or wirelessly. It is written using a custom protocol, similar to [Firmata](http://firmata.org/wiki/Main_Page).
This project is heavily inspired by the great repo below.
[**Python-Arduino-Command-API**](https://github.com/thearn/Python-Arduino-Command-API)
So please do not forget to check this out as well! Your star will be appreciated!
This allows easy prototyping during development and makes it straightforward to maintain the board connection from a web framework such as Flask.
## Simple usage example (LED blink)
```python
#!/usr/bin/env python
"""
Blinks an LED on digital pin 13
in 1 second intervals
"""
from Arduino import Arduino
import time
board = Arduino('9600') #plugged in via USB, serial com at rate 9600
board.pinMode(13, "OUTPUT")
while True:
board.digitalWrite(13, "LOW")
time.sleep(1)
board.digitalWrite(13, "HIGH")
time.sleep(1)
```
## Requirements:
- [Python](http://python.org/) 2.3 or higher
- Python 3.x is supported as well
- [pyserial](http://pyserial.sourceforge.net/) 2.6 or higher
- Any [Arduino compatible microcontroller](https://www.sparkfun.com/categories/242) with at least **14KB** of flash memory
## Installation:
<font color='red'>TO-DO: we have to confirm the installation procedure</font>
Either run `pip install arduino-python` from a command line, or run `python setup.py
build install` from the source directory to install this library.
## Setup:
1. <font color='red'>**Verify**</font> that your Arduino board communicates at the baud rate specified in the
`setup()` function (line 348) in `prototype.ino`. Change it there if necessary.
2. <font color='red'>**Load**</font> the `prototype.ino` sketch onto your Arduino board, using the Arduino IDE.
3. <font color='red'>**Test**</font> connect your Arduino board to your computer and try the example Python script `examples.py`
## Testing:
The `tests` directory contains some basic tests for the library. Extensive code coverage is difficult to expect for every release, since a positive test involves actually
connecting to and issuing commands against a live Arduino, with any hardware
required to test a particular function attached. But a core of basic communication tests
should at least be maintained here and run before merging into the `master` branch.
After installation, the interactive tests can be run from the source directory:
```bash
$ python tests/test_main.py
```
Automated tests can be run from the source directory with:
```bash
$ python tests/test_arduino.py
```
## Classes
- `Arduino(baud)` - Set up communication with currently connected and powered
Arduino.
```python
board = Arduino("9600") #Example
```
The device name / COM port of the connected Arduino will be auto-detected.
If more than one Arduino board is connected,
the desired COM port can also be passed as an optional argument:
```python
board = Arduino("9600", port="COM3") #Windows example
```
```python
board = Arduino("9600", port="/dev/tty.usbmodemfa141") #OSX example
```
A time-out for reading from the Arduino can also be specified as an optional
argument:
```python
board = Arduino("9600", timeout=2) #Serial reading functions will
#wait for no more than 2 seconds
```
## Methods
**Digital I/O**
- `Arduino.digitalWrite(pin_number, state)` turn digital pin on/off
- `Arduino.digitalRead(pin_number)` read state of a digital pin
```python
#Digital read / write example
board.digitalWrite(13, "HIGH") #Set digital pin 13 voltage
state_1 = board.digitalRead(13) #Will return integer 1
board.digitalWrite(13, "LOW") #Set digital pin 13 voltage
state_2 = board.digitalRead(13) #Will return integer 0
```
- `Arduino.pinMode(pin_number, io_mode)` set pin I/O mode
- `Arduino.pulseIn(pin_number, state)` measures a pulse
- `Arduino.pulseIn_set(pin_number, state)` measures a pulse, with preconditioning
```python
#Digital mode / pulse example
board.pinMode(7, "INPUT") #Set digital pin 7 mode to INPUT
duration = board.pulseIn(7, "HIGH") #Return pulse width measurement on pin 7
```
**Analog I/O**
- `Arduino.analogRead(pin_number)` returns the analog value
- `Arduino.analogWrite(pin_number, value)` sets the analog value
```python
#Analog I/O examples
val = board.analogRead(5)   #Read value on analog pin 5 (integer 0 to 1023)
val = val // 4              #Scale to 0 - 255
board.analogWrite(11, val)  #Set analog value (PWM) based on analog measurement
```
**Shift Register**
- `Arduino.shiftIn(dataPin, clockPin, bitOrder)` shift a byte in and returns it
- `Arduino.shiftOut(dataPin, clockPin, bitOrder, value)` shift the given byte out
`bitOrder` should be either `"MSBFIRST"` or `"LSBFIRST"`
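The bit order only changes the sequence in which the eight bits of `value` are clocked out. A pure-Python sketch of that ordering (illustrative only; it does not use the library or any hardware):

```python
# Sketch of the bit sequence shiftOut would clock out for one byte.
# "MSBFIRST" sends the most-significant bit first, "LSBFIRST" the reverse.
def bit_sequence(value, bit_order):
    bits = [(value >> i) & 1 for i in range(8)]   # least-significant bit first
    return bits if bit_order == "LSBFIRST" else bits[::-1]

print(bit_sequence(0b10000001, "MSBFIRST"))  # [1, 0, 0, 0, 0, 0, 0, 1]
print(bit_sequence(0b00000011, "LSBFIRST"))  # [1, 1, 0, 0, 0, 0, 0, 0]
```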
**Servo Library Functionality**
Support is included for up to 8 servos.
- `Arduino.Servos.attach(pin, min=544, max=2400)` Create servo instance. Only 8 servos can be used at one time.
- `Arduino.Servos.read(pin)` Returns the angle of the servo attached to the specified pin
- `Arduino.Servos.write(pin, angle)` Move an attached servo on a pin to a specified angle
- `Arduino.Servos.writeMicroseconds(pin, uS)` Write a value in microseconds to the servo on a specified pin
- `Arduino.Servos.detach(pin)` Detaches the servo on the specified pin
```python
#Servo example
board.Servos.attach(9) #declare servo on pin 9
board.Servos.write(9, 0) #move servo on pin 9 to 0 degrees
print(board.Servos.read(9)) # should be 0
board.Servos.detach(9) #free pin 9
```
**Software Serial Functionality**
- `Arduino.SoftwareSerial.begin(ss_rxPin, ss_txPin, ss_device_baud)` initialize software serial device on
specified pins.
Only one software serial device can be used at a time. An existing software serial instance will
be overwritten by calling this method, both in Python and on the Arduino board.
- `Arduino.SoftwareSerial.write(data)` send data using the Arduino `write` function to the existing software
serial connection.
- `Arduino.SoftwareSerial.read()` returns one byte from the existing software serial connection
```python
#Software serial example
board.SoftwareSerial.begin(0, 7, "19200") # Start software serial for transmit only (tx on pin 7)
board.SoftwareSerial.write(" test ") #Send some data
response_char = board.SoftwareSerial.read() #read response character
```
**EEPROM**
- `Arduino.EEPROM.read(address)` reads a byte from the EEPROM
- `Arduino.EEPROM.write(address, value)` writes a byte to the EEPROM
- `Arduino.EEPROM.size()` returns size of the EEPROM
```python
#EEPROM read and write examples
location = 42
value = 10 # 0-255(byte)
board.EEPROM.write(location, value)
print(board.EEPROM.read(location))
print('EEPROM size {size}'.format(size=board.EEPROM.size()))
```
**Misc**
- `Arduino.close()` closes serial connection to the Arduino.
## To-do list:
- Expand software serial functionality (`print()` and `println()`)
- Add simple reset functionality that zeros out all pin values
- Add I2C / TWI function support (Arduino `Wire.h` commands)
- Include a wizard which generates 'prototype.ino' with selected serial baud rate and Arduino function support
(to help reduce memory requirements).
- Multi-serial support for Arduino mega (`Serial1.read()`, etc)
|
ArPyLearning
|
/ArPyLearning-0.1.tar.gz/ArPyLearning-0.1/README.md
|
README.md
|
import pyarabic.araby as araby
try:
from stopwordsallforms import STOPWORDS, STOPWORDS_INDEX
from stopwords_classified import STOPWORDS as classed_STOPWORDS
except ImportError:
from .stopwordsallforms import STOPWORDS, STOPWORDS_INDEX
from .stopwords_classified import STOPWORDS as classed_STOPWORDS
class stopwordTuple:
"""
    An object to handle the stopword tuple in two types: classified or stemmed
"""
    def __init__(self, stop_dict=None):
        """
        Init stopword tuples
        """
        # avoid a shared mutable default argument
        self.stop_dict = stop_dict if stop_dict is not None else {}
#---------------------------
# Stopwords features
#---------------------------
# ~ {'word': 'إن',
# ~ 'vocalized': 'إِنْ',
# ~ 'type_word': 'حرف',
# ~ 'class_word': 'حرف جزم',
def get_features_dict(self,):
"""
        return all the features of a stopword
"""
return self.stop_dict
def get_feature(self,feature):
"""
        return the requested feature from a stopword
"""
return self.stop_dict.get(feature,'')
def get_vocalized(self,):
"""
        return the vocalized form of a stopword
"""
return self.stop_dict.get('vocalized','')
def get_unvocalized(self,):
"""
        return the unvocalized form of a stopword
"""
return self.stop_dict.get('word','')
def get_wordtype(self,):
"""
        return the word type of a stopword
"""
# return type or word_type
return self.stop_dict.get('type',self.stop_dict.get('type_word',''))
def get_wordclass(self,):
"""
        return the word subclass of a stopword
"""
return self.stop_dict.get('class_word','')
# ~ 'has_conjuction': 1,
# ~ 'has_definition': 0,
# ~ 'has_preposition': 0,
# ~ 'has_pronoun': 0,
# ~ 'has_interrog': 0,
# ~ 'has_conjugation': 0,
# ~ 'has_qasam': 0,
# ~ 'is_defined': 0,
# ~ 'is_inflected': 0,
# ~ 'tanwin': 0,
# ~ 'action': 'جازم',
# ~ 'object_type': 'فعل',
# ~ 'need': ''},
def accept_conjuction(self,):
"""
        return True if the word accepts conjunctions; asked only for classified stopwords
"""
return bool(self.stop_dict.get('has_conjuction',0))
def accept_definition(self,):
"""
        return True if the word accepts definition; asked only for classified stopwords
"""
return bool(self.stop_dict.get('has_definition',0))
def accept_preposition(self,):
"""
        return True if the word accepts prepositions; asked only for classified stopwords
"""
return bool(self.stop_dict.get('has_preposition',0))
def accept_pronoun(self,):
"""
        return True if the word accepts pronouns; asked only for classified stopwords
"""
return bool(self.stop_dict.get('has_pronoun',0))
def accept_interrog(self, ):
"""
        return True if the word accepts interrogatives; asked only for classified stopwords
"""
return bool(self.stop_dict.get('has_interrog',0))
def accept_conjugation(self,):
"""
        return True if the word accepts conjugation; asked only for classified stopwords
"""
return bool(self.stop_dict.get('has_conjugation',0))
def accept_qasam(self,):
"""
        return True if the word accepts qasam; asked only for classified stopwords
"""
return bool(self.stop_dict.get('has_qasam',0))
def is_defined(self,):
"""
        return True if the word is defined; asked only for classified stopwords
"""
return bool(self.stop_dict.get('is_defined',0))
def accept_inflection(self,):
"""
        return True if the word accepts inflection; asked only for classified stopwords
"""
return bool(self.stop_dict.get('is_inflected',0))
def accept_tanwin(self,):
"""
        return True if the word accepts tanwin; asked only for classified stopwords
"""
return bool(self.stop_dict.get('tanwin',0))
def get_action(self,):
"""
        return the action; asked only for classified stopwords
"""
return self.stop_dict.get('action',"")
def get_object_type(self,):
"""
        return the object_type; asked only for classified stopwords
"""
return self.stop_dict.get('object_type',"")
def get_need(self,):
"""
        return the need; asked only for classified stopwords
"""
return self.stop_dict.get('need',"")
def get_tags(self,):
"""
        return the tags of the stopword
"""
return self.stop_dict.get('tags',"")
def get_stem(self,):
"""
        return the stem of the stopword
"""
return self.stop_dict.get('stem',"")
def get_enclitic(self,):
"""
        return the enclitic
"""
return self.stop_dict.get('encletic',"")
def get_procletic(self,):
"""
        return the proclitic
"""
return self.stop_dict.get('procletic','')
def get_lemma(self,):
"""
        return the lemma
"""
return self.stop_dict.get('original','')
def __str__(self):
"""
return tuple as string
"""
return self.stop_dict.__str__()
def __getitem__(self, key):
"""
return attribute
"""
return self.stop_dict.get(key,None)
def get(self, key, default):
"""
return attribute
"""
return self.stop_dict.get(key,default)
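A self-contained sketch of how the class wraps one feature dict (a condensed copy of `stopwordTuple`, for illustration only; the sample entry mirrors one of the STOPWORDS records below):

```python
# Condensed stopwordTuple, reproduced here so the example runs standalone.
class stopwordTuple:
    def __init__(self, stop_dict=None):
        self.stop_dict = stop_dict if stop_dict is not None else {}
    def get_vocalized(self):
        return self.stop_dict.get('vocalized', '')
    def get_wordtype(self):
        # fall back from 'type' to 'type_word', as in the full class
        return self.stop_dict.get('type', self.stop_dict.get('type_word', ''))
    def accept_conjuction(self):
        return bool(self.stop_dict.get('has_conjuction', 0))

entry = {'word': 'إن', 'vocalized': 'إِنْ', 'type_word': 'حرف',
         'class_word': 'حرف جزم', 'has_conjuction': 1}
sw = stopwordTuple(entry)
print(sw.get_vocalized())      # إِنْ
print(sw.get_wordtype())       # حرف
print(sw.accept_conjuction())  # True
```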
def main(args):
word = u"لعلهم"
print(stop_stem(word))
return 0
if __name__ == '__main__':
import sys
from pyarabic.arabrepr import arepr
words = [(u'منكم', True),
(u'ممكن', False),
(u'عندما', True),
(u'حينئذ', True),
]
for w, rep in words:
result = is_stop(w)
if result != rep:
            print((u"Error %s is %s where must be %s"%(w, result, rep)).encode('utf8'))
print(len(stopwords_list()))
print(len(classed_stopwords_list()))
print(arepr(stopword_forms(u'حتى')))
print(arepr(stopword_forms(u'جميع')))
print(arepr(stop_stem(u'لجميعهم')))
print(arepr(stop_stem(u'لجم')))
|
Arabic-Stopwords
|
/Arabic_Stopwords-0.4.3-py3-none-any.whl/arabicstopwords/stopwordtuple.py
|
stopwordtuple.py
|
STOPWORDS={'أن': [{'word': 'أن', 'vocalized': 'أَنَّ', 'type_word': 'حرف', 'class_word': 'إن و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''},
{'word': 'أن', 'vocalized': 'أَنْ', 'type_word': 'حرف', 'class_word': 'حرف نصب', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'فعل', 'need': ''}],
'إن': [{'word': 'إن', 'vocalized': 'إِنَّ', 'type_word': 'حرف', 'class_word': 'إن و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''},
{'word': 'إن', 'vocalized': 'إِنْ', 'type_word': 'حرف', 'class_word': 'المشبهة بليس', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''},
{'word': 'إن', 'vocalized': 'إِنْ', 'type_word': 'حرف', 'class_word': 'حرف جزم', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جازم', 'object_type': 'فعل', 'need': ''},
{'word': 'إن', 'vocalized': 'إِنْ', 'type_word': 'حرف', 'class_word': 'حرف شرط', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'عل': [{'word': 'عل', 'vocalized': 'عَلًّ', 'type_word': 'حرف', 'class_word': 'إن و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'كأن': [{'word': 'كأن', 'vocalized': 'كَأَنَّ', 'type_word': 'حرف', 'class_word': 'إن و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'لعل': [{'word': 'لعل', 'vocalized': 'لَعَلَّ', 'type_word': 'حرف', 'class_word': 'إن و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'لكن': [{'word': 'لكن', 'vocalized': 'لَكِنَّ', 'type_word': 'حرف', 'class_word': 'إن و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''},
{'word': 'لكن', 'vocalized': 'لَكِنْ', 'type_word': 'حرف', 'class_word': 'حرف استدراك', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''},
{'word': 'لكن', 'vocalized': 'لَكُنَّ', 'type_word': 'ضمير', 'class_word': 'ضمير متصل مجرور', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ليت': [{'word': 'ليت', 'vocalized': 'لَيْتَ', 'type_word': 'حرف', 'class_word': 'إن و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'ثنا': [{'word': 'ثنا', 'vocalized': 'ثَنَا', 'type_word': 'فعل', 'class_word': 'اختصار', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'بيد': [{'word': 'بيد', 'vocalized': 'بَيْدَ', 'type_word': 'اسم', 'class_word': 'استثناء', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'سوى': [{'word': 'سوى', 'vocalized': 'سِوَى', 'type_word': 'اسم', 'class_word': 'استثناء', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''},
{'word': 'سوى', 'vocalized': 'سِوَى', 'type_word': 'اسم', 'class_word': 'اسم إضافة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'غير': [{'word': 'غير', 'vocalized': 'غَيْرَ', 'type_word': 'اسم', 'class_word': 'استثناء', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 1, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'لاسيما': [{'word': 'لاسيما', 'vocalized': 'لاسِيَّمَا', 'type_word': 'حرف', 'class_word': 'استثناء', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'أولئك': [{'word': 'أولئك', 'vocalized': 'أُولَئِكَ', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'أولئكم': [{'word': 'أولئكم', 'vocalized': 'أُولَئِكُمْ', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'أولاء': [{'word': 'أولاء', 'vocalized': 'أُولَاءِ', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'أولالك': [{'word': 'أولالك', 'vocalized': 'أُولَالِكَ', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'تان': [{'word': 'تان', 'vocalized': 'تَانِ', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'تانك': [{'word': 'تانك', 'vocalized': 'تَانِكَ', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'تلك': [{'word': 'تلك', 'vocalized': 'تِلْكَ', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'تلكم': [{'word': 'تلكم', 'vocalized': 'تِلْكُمْ', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'تلكما': [{'word': 'تلكما', 'vocalized': 'تِلْكُمَا', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ته': [{'word': 'ته', 'vocalized': 'تِهِ', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'تي': [{'word': 'تي', 'vocalized': 'تِي', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'تين': [{'word': 'تين', 'vocalized': 'تَيْنِ', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'تينك': [{'word': 'تينك', 'vocalized': 'تَيْنِكَ', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ثم': [{'word': 'ثم', 'vocalized': 'ثَمَّ', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''},
{'word': 'ثم', 'vocalized': 'ثُمَّ', 'type_word': 'حرف', 'class_word': 'حرف عطف منفصل', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'متبع', 'object_type': '', 'need': ''},
{'word': 'ثم', 'vocalized': 'ثَمَّ', 'type_word': 'اسم', 'class_word': 'ظرف مكان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'ثمة': [{'word': 'ثمة', 'vocalized': 'ثَمَّةَ', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ذا': [{'word': 'ذا', 'vocalized': 'ذَا', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''},
{'word': 'ذا', 'vocalized': 'ذَا', 'type_word': 'اسم', 'class_word': 'اسم موصول', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''},
{'word': 'ذا', 'vocalized': 'ذَا', 'type_word': 'اسم', 'class_word': 'الأسماء الخمسة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'ذاك': [{'word': 'ذاك', 'vocalized': 'ذَاكَ', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ذان': [{'word': 'ذان', 'vocalized': 'ذَانِ', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ذانك': [{'word': 'ذانك', 'vocalized': 'ذَانِكَ', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ذلك': [{'word': 'ذلك', 'vocalized': 'ذَلِكَ', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ذلكم': [{'word': 'ذلكم', 'vocalized': 'ذَلِكُمْ', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ذلكما': [{'word': 'ذلكما', 'vocalized': 'ذَلَكُمَا', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ذلكن': [{'word': 'ذلكن', 'vocalized': 'ذَلِكُنَّ', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ذه': [{'word': 'ذه', 'vocalized': 'ذِهِ', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ذوا': [{'word': 'ذوا', 'vocalized': 'ذَوَا', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ذواتا': [{'word': 'ذواتا', 'vocalized': 'ذَوَاتَا', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ذواتي': [{'word': 'ذواتي', 'vocalized': 'ذَوَاتَيْ', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ذوو': [{'word': 'ذوو', 'vocalized': 'ذَوُو', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': '', 'need': ''}],
'ذوي': [{'word': 'ذوي', 'vocalized': 'ذَوِي', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': '', 'need': ''}],
'ذي': [{'word': 'ذي', 'vocalized': 'ذِي', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''},
{'word': 'ذي', 'vocalized': 'ذِي', 'type_word': 'اسم', 'class_word': 'الأسماء الخمسة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'ذين': [{'word': 'ذين', 'vocalized': 'ذَيْنِ', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ذينك': [{'word': 'ذينك', 'vocalized': 'ذَيْنِكَ', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'كذلك': [{'word': 'كذلك', 'vocalized': 'كَذَلِكَ', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'هؤلاء': [{'word': 'هؤلاء', 'vocalized': 'هَؤُلَاءِ', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'هاتان': [{'word': 'هاتان', 'vocalized': 'هَاتَانِ', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'هاته': [{'word': 'هاته', 'vocalized': 'هَاتِهِ', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'هاتي': [{'word': 'هاتي', 'vocalized': 'هَاتِي', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'هاتين': [{'word': 'هاتين', 'vocalized': 'هَاتَيْنِ', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'هاهنا': [{'word': 'هاهنا', 'vocalized': 'هَاهُنَا', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'هذا': [{'word': 'هذا', 'vocalized': 'هَذَا', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'هذان': [{'word': 'هذان', 'vocalized': 'هَذَانِ', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'هذه': [{'word': 'هذه', 'vocalized': 'هَذِهِ', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'هذي': [{'word': 'هذي', 'vocalized': 'هَذِي', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'هذين': [{'word': 'هذين', 'vocalized': 'هَذَيْنِ', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'هكذا': [{'word': 'هكذا', 'vocalized': 'هَكَذَا', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 1, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'هنا': [{'word': 'هنا', 'vocalized': 'هُنَا', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'هناك': [{'word': 'هناك', 'vocalized': 'هُنَاكَ', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'هنالك': [{'word': 'هنالك', 'vocalized': 'هُنَالِكَ', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'أي': [{'word': 'أي', 'vocalized': 'أَيُّ', 'type_word': 'اسم', 'class_word': 'اسم إضافة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 1, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''},
{'word': 'أي', 'vocalized': 'أَيُّ', 'type_word': 'اسم', 'class_word': 'اسم الاستفهام', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''},
{'word': 'أي', 'vocalized': 'أَيُّ', 'type_word': 'اسم', 'class_word': 'اسم الشرط', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''},
{'word': 'أي', 'vocalized': 'أَيُّ', 'type_word': 'اسم', 'class_word': 'اسم موصول', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 1, 'tanwin': 1, 'action': '', 'object_type': '', 'need': ''},
{'word': 'أي', 'vocalized': 'أي', 'type_word': 'حرف', 'class_word': 'تعليل', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'إذ': [{'word': 'إذ', 'vocalized': 'إِذْ', 'type_word': 'اسم', 'class_word': 'اسم إضافة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''},
{'word': 'إذ', 'vocalized': 'إِذْ', 'type_word': 'اسم', 'class_word': 'ظرف زمان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'إذا': [{'word': 'إذا', 'vocalized': 'إِذَا', 'type_word': 'اسم', 'class_word': 'اسم إضافة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''},
{'word': 'إذا', 'vocalized': 'إذاً', 'type_word': 'حرف', 'class_word': 'اسم شرط', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ابن': [{'word': 'ابن', 'vocalized': 'اِبْنُ', 'type_word': 'اسم', 'class_word': 'اسم إضافة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 1, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'بعض': [{'word': 'بعض', 'vocalized': 'بَعْضُ', 'type_word': 'اسم', 'class_word': 'اسم إضافة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 1, 'tanwin': 1, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'تجاه': [{'word': 'تجاه', 'vocalized': 'تُجَاهَ', 'type_word': 'اسم', 'class_word': 'اسم إضافة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'تلقاء': [{'word': 'تلقاء', 'vocalized': 'تِلْقَاءَ', 'type_word': 'اسم', 'class_word': 'اسم إضافة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'جميع': [{'word': 'جميع', 'vocalized': 'جَمِيعَ', 'type_word': 'اسم', 'class_word': 'اسم إضافة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 1, 'tanwin': 1, 'action': 'جار', 'object_type': 'اسم', 'need': ''},
{'word': 'جميع', 'vocalized': 'جَمِيعَ', 'type_word': 'اسم', 'class_word': 'توكيد', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'حسب': [{'word': 'حسب', 'vocalized': 'حَسْبُ', 'type_word': 'اسم', 'class_word': 'اسم إضافة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'حيث': [{'word': 'حيث', 'vocalized': 'حَيْثُ', 'type_word': 'اسم', 'class_word': 'اسم إضافة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'سبحان': [{'word': 'سبحان', 'vocalized': 'سُبْحَانَ', 'type_word': 'اسم', 'class_word': 'اسم إضافة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'شبه': [{'word': 'شبه', 'vocalized': 'شِبْهُ', 'type_word': 'اسم', 'class_word': 'اسم إضافة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'شطر': [{'word': 'شطر', 'vocalized': 'شَطْرَ', 'type_word': 'اسم', 'class_word': 'اسم إضافة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 1, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'كل': [{'word': 'كل', 'vocalized': 'كُلُّ', 'type_word': 'اسم', 'class_word': 'اسم إضافة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''},
{'word': 'كل', 'vocalized': 'كُلَّ', 'type_word': 'اسم', 'class_word': 'توكيد', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'لعمر': [{'word': 'لعمر', 'vocalized': 'لَعَمْرُ', 'type_word': 'اسم', 'class_word': 'اسم إضافة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'لما': [{'word': 'لما', 'vocalized': 'لَمَّا', 'type_word': 'اسم', 'class_word': 'اسم إضافة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''},
{'word': 'لما', 'vocalized': 'لَمَّا', 'type_word': 'حرف', 'class_word': 'حرف جزم', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جازم', 'object_type': 'فعل', 'need': ''},
{'word': 'لما', 'vocalized': 'لَمَّا', 'type_word': 'حرف', 'class_word': 'ظرف', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'مثل': [{'word': 'مثل', 'vocalized': 'مِثْلُ', 'type_word': 'اسم', 'class_word': 'اسم إضافة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'مع': [{'word': 'مع', 'vocalized': 'مَعَ', 'type_word': 'اسم', 'class_word': 'اسم إضافة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'معاذ': [{'word': 'معاذ', 'vocalized': 'مَعَاذَ', 'type_word': 'اسم', 'class_word': 'اسم إضافة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'نحو': [{'word': 'نحو', 'vocalized': 'نَحْوَ', 'type_word': 'اسم', 'class_word': 'اسم إضافة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''},
{'word': 'نحو', 'vocalized': 'نَحوَ', 'type_word': 'اسم', 'class_word': 'ظرف مكان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 1, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'متى': [{'word': 'متى', 'vocalized': 'مَتَى', 'type_word': 'اسم', 'class_word': 'اسم استفهام/ظرف زمان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''},
{'word': 'متى', 'vocalized': 'مَتَى', 'type_word': 'اسم', 'class_word': 'اسم الاستفهام', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''},
{'word': 'متى', 'vocalized': 'مَتَى', 'type_word': 'اسم', 'class_word': 'اسم الشرط', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'أنى': [{'word': 'أنى', 'vocalized': 'أَنَّى', 'type_word': 'اسم', 'class_word': 'اسم الاستفهام', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''},
{'word': 'أنى', 'vocalized': 'أَنَّى', 'type_word': 'اسم', 'class_word': 'اسم الشرط', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'أيان': [{'word': 'أيان', 'vocalized': 'أَيَّانَ', 'type_word': 'اسم', 'class_word': 'اسم الاستفهام', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''},
{'word': 'أيان', 'vocalized': 'أَيَّانَ', 'type_word': 'اسم', 'class_word': 'اسم الشرط', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''},
{'word': 'أيان', 'vocalized': 'أَيَّانَ', 'type_word': 'اسم', 'class_word': 'ظرف زمان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'أين': [{'word': 'أين', 'vocalized': 'أَيْنَ', 'type_word': 'اسم', 'class_word': 'اسم الاستفهام', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''},
{'word': 'أين', 'vocalized': 'أَيْنَ', 'type_word': 'اسم', 'class_word': 'اسم الشرط', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''},
{'word': 'أين', 'vocalized': 'أَيْنَ', 'type_word': 'اسم', 'class_word': 'ظرف مكان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'بكم': [{'word': 'بكم', 'vocalized': 'بِكَمْ', 'type_word': 'اسم', 'class_word': 'اسم الاستفهام', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''},
{'word': 'بكم', 'vocalized': 'بِكُمْ', 'type_word': 'ضمير', 'class_word': 'ضمير متصل مجرور', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'بما': [{'word': 'بما', 'vocalized': 'بِمَا', 'type_word': 'اسم', 'class_word': 'اسم الاستفهام', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'بماذا': [{'word': 'بماذا', 'vocalized': 'بِمَاذَا', 'type_word': 'اسم', 'class_word': 'اسم الاستفهام', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'بمن': [{'word': 'بمن', 'vocalized': 'بِمَنْ', 'type_word': 'اسم', 'class_word': 'اسم الاستفهام', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'كم': [{'word': 'كم', 'vocalized': 'كَمْ', 'type_word': 'اسم', 'class_word': 'اسم الاستفهام', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''},
{'word': 'كم', 'vocalized': 'كَمْ', 'type_word': 'اسم', 'class_word': 'كناية', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'كيف': [{'word': 'كيف', 'vocalized': 'كَيْفَ', 'type_word': 'اسم', 'class_word': 'اسم الاستفهام', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ما': [{'word': 'ما', 'vocalized': 'مَا', 'type_word': 'اسم', 'class_word': 'اسم الاستفهام', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''},
{'word': 'ما', 'vocalized': 'مَا', 'type_word': 'اسم', 'class_word': 'اسم الشرط', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''},
{'word': 'ما', 'vocalized': 'مَا', 'type_word': 'اسم', 'class_word': 'اسم موصول', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''},
{'word': 'ما', 'vocalized': 'مَا', 'type_word': 'حرف', 'class_word': 'المشبهة بليس', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'ماذا': [{'word': 'ماذا', 'vocalized': 'مَاذَا', 'type_word': 'اسم', 'class_word': 'اسم الاستفهام', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'مما': [{'word': 'مما', 'vocalized': 'مِمَّا', 'type_word': 'اسم', 'class_word': 'اسم الاستفهام', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ممن': [{'word': 'ممن', 'vocalized': 'مِمَّنْ', 'type_word': 'اسم', 'class_word': 'اسم الاستفهام', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'من': [{'word': 'من', 'vocalized': 'مَنْ', 'type_word': 'اسم', 'class_word': 'اسم الاستفهام', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''},
{'word': 'من', 'vocalized': 'مَنْ', 'type_word': 'اسم', 'class_word': 'اسم الشرط', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''},
{'word': 'من', 'vocalized': 'مَنْ', 'type_word': 'اسم', 'class_word': 'اسم موصول', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''},
{'word': 'من', 'vocalized': 'مِنْ', 'type_word': 'حرف', 'class_word': 'حرف جر', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'أينما': [{'word': 'أينما', 'vocalized': 'أَيْنَمَا', 'type_word': 'اسم', 'class_word': 'اسم الشرط', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'حيثما': [{'word': 'حيثما', 'vocalized': 'حَيْثُمَا', 'type_word': 'اسم', 'class_word': 'اسم الشرط', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'كيفما': [{'word': 'كيفما', 'vocalized': 'كَيْفَمَا', 'type_word': 'اسم', 'class_word': 'اسم الشرط', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'مهما': [{'word': 'مهما', 'vocalized': 'مَهْمَا', 'type_word': 'اسم', 'class_word': 'اسم الشرط', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'أقل': [{'word': 'أقل', 'vocalized': 'أَقَلُّ', 'type_word': 'اسم', 'class_word': 'اسم تفضيل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 1, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'أكثر': [{'word': 'أكثر', 'vocalized': 'أَكْثَرُ', 'type_word': 'اسم', 'class_word': 'اسم تفضيل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 1, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'آها': [{'word': 'آها', 'vocalized': 'آهاً', 'type_word': 'اسم فعل', 'class_word': 'اسم صوت', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'بس': [{'word': 'بس', 'vocalized': 'بَسّْ', 'type_word': 'اسم فعل', 'class_word': 'اسم صوت', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''},
{'word': 'بس', 'vocalized': 'بَسْ', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'حاي': [{'word': 'حاي', 'vocalized': 'حَايْ', 'type_word': 'اسم فعل', 'class_word': 'اسم صوت', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'صه': [{'word': 'صه', 'vocalized': 'صَهْ', 'type_word': 'اسم فعل', 'class_word': 'اسم صوت', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''},
{'word': 'صه', 'vocalized': 'صَهٍ', 'type_word': 'اسم فعل', 'class_word': 'اسم صوت', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'طاق': [{'word': 'طاق', 'vocalized': 'طَاقْ', 'type_word': 'اسم فعل', 'class_word': 'اسم صوت', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'طق': [{'word': 'طق', 'vocalized': 'طَقْ', 'type_word': 'اسم فعل', 'class_word': 'اسم صوت', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'عدس': [{'word': 'عدس', 'vocalized': 'عَدَسْ', 'type_word': 'اسم فعل', 'class_word': 'اسم صوت', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'كخ': [{'word': 'كخ', 'vocalized': 'كِخْ', 'type_word': 'اسم فعل', 'class_word': 'اسم صوت', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'نخ': [{'word': 'نخ', 'vocalized': 'نَخْ', 'type_word': 'اسم فعل', 'class_word': 'اسم صوت', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'هج': [{'word': 'هج', 'vocalized': 'هَجْ', 'type_word': 'اسم فعل', 'class_word': 'اسم صوت', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'وا': [{'word': 'وا', 'vocalized': 'وَا', 'type_word': 'اسم فعل', 'class_word': 'اسم صوت', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''},
{'word': 'وا', 'vocalized': 'وَا', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'واها': [{'word': 'واها', 'vocalized': 'وَاهاً', 'type_word': 'اسم فعل', 'class_word': 'اسم صوت', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'وي': [{'word': 'وي', 'vocalized': 'وَيْ', 'type_word': 'اسم فعل', 'class_word': 'اسم صوت', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'آمين': [{'word': 'آمين', 'vocalized': 'آمِينَ', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'آه': [{'word': 'آه', 'vocalized': 'آهٍ', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'أف': [{'word': 'أف', 'vocalized': 'أُفٍّ', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''},
{'word': 'أف', 'vocalized': 'أُفٍّ', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'أمامك': [{'word': 'أمامك', 'vocalized': 'أَمَامَكَ', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'أوه': [{'word': 'أوه', 'vocalized': 'أَوَّهْ', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'إليك': [{'word': 'إليك', 'vocalized': 'إِلَيْكَ', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'إليكم': [{'word': 'إليكم', 'vocalized': 'إِلَيْكُمْ', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'إليكما': [{'word': 'إليكما', 'vocalized': 'إِلَيْكُمَا', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'إليكن': [{'word': 'إليكن', 'vocalized': 'إِلَيْكُنَّ', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'إيه': [{'word': 'إيه', 'vocalized': 'إيهِ', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'بخ': [{'word': 'بخ', 'vocalized': 'بخٍ', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'بطآن': [{'word': 'بطآن', 'vocalized': 'بُطْآنَ', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'بله': [{'word': 'بله', 'vocalized': 'بَلْهَ', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'حذار': [{'word': 'حذار', 'vocalized': 'حَذَارِ', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'حي': [{'word': 'حي', 'vocalized': 'حَيَّ', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'دونك': [{'word': 'دونك', 'vocalized': 'دُونَكَ', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'رويدك': [{'word': 'رويدك', 'vocalized': 'رُوَيْدَكَ', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'سرعان': [{'word': 'سرعان', 'vocalized': 'سُرْعَانَ', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'شتان': [{'word': 'شتان', 'vocalized': 'شَتَّانَ', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'عليك': [{'word': 'عليك', 'vocalized': 'عَلَيْكَ', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'مكانك': [{'word': 'مكانك', 'vocalized': 'مَكَانَكَ', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''},
{'word': 'مكانك', 'vocalized': 'مَكَانَكِ', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'مكانكم': [{'word': 'مكانكم', 'vocalized': 'مَكَانَكُمْ', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'مكانكما': [{'word': 'مكانكما', 'vocalized': 'مَكَانَكُمَا', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'مكانكن': [{'word': 'مكانكن', 'vocalized': 'مَكَانَكُنَّ', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'مه': [{'word': 'مه', 'vocalized': 'مَهْ', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ها': [{'word': 'ها', 'vocalized': 'هَا', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'هاؤم': [{'word': 'هاؤم', 'vocalized': 'هَاؤُمُ', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'هاك': [{'word': 'هاك', 'vocalized': 'هَاكَ', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'هلم': [{'word': 'هلم', 'vocalized': 'هَلُمَّ', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'هيا': [{'word': 'هيا', 'vocalized': 'هَيَّا', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''},
{'word': 'هيا', 'vocalized': 'هَيَا', 'type_word': 'حرف', 'class_word': 'حرف نداء', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'هيت': [{'word': 'هيت', 'vocalized': 'هِيتَ', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'هيهات': [{'word': 'هيهات', 'vocalized': 'هَيْهَاتَ', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'وراءك': [{'word': 'وراءك', 'vocalized': 'وَرَاءَكَ', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''},
{'word': 'وراءك', 'vocalized': 'وَرَاءَكِ', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'وشكان': [{'word': 'وشكان', 'vocalized': 'وُشْكَانَ', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ويكأن': [{'word': 'ويكأن', 'vocalized': 'وَيْكَأَنَّ', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'الألاء': [{'word': 'الألاء', 'vocalized': 'الْأَلَاءُ', 'type_word': 'اسم', 'class_word': 'اسم موصول', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 1, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'الألى': [{'word': 'الألى', 'vocalized': 'الْأُلَى', 'type_word': 'اسم', 'class_word': 'اسم موصول', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 1, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'التي': [{'word': 'التي', 'vocalized': 'الَّتِي', 'type_word': 'اسم', 'class_word': 'اسم موصول', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 1, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'الذي': [{'word': 'الذي', 'vocalized': 'الَّذِي', 'type_word': 'اسم', 'class_word': 'اسم موصول', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 1, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'الذين': [{'word': 'الذين', 'vocalized': 'الَّذِينَ', 'type_word': 'اسم', 'class_word': 'اسم موصول', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 1, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'اللائي': [{'word': 'اللائي', 'vocalized': 'الْلَائِي', 'type_word': 'اسم', 'class_word': 'اسم موصول', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 1, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'اللاتي': [{'word': 'اللاتي', 'vocalized': 'الْلَاتِي', 'type_word': 'اسم', 'class_word': 'اسم موصول', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 1, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'اللتان': [{'word': 'اللتان', 'vocalized': 'الْلَتَانِ', 'type_word': 'اسم', 'class_word': 'اسم موصول', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 1, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'اللتيا': [{'word': 'اللتيا', 'vocalized': 'الْلَتَيَّا', 'type_word': 'اسم', 'class_word': 'اسم موصول', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 1, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'اللتين': [{'word': 'اللتين', 'vocalized': 'الْلَتَيْنِ', 'type_word': 'اسم', 'class_word': 'اسم موصول', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 1, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'اللذان': [{'word': 'اللذان', 'vocalized': 'الْلَذَانِ', 'type_word': 'اسم', 'class_word': 'اسم موصول', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 1, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'اللذين': [{'word': 'اللذين', 'vocalized': 'الْلَذَيْنِ', 'type_word': 'اسم', 'class_word': 'اسم موصول', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 1, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'اللواتي': [{'word': 'اللواتي', 'vocalized': 'الْلَوَاتِي', 'type_word': 'اسم', 'class_word': 'اسم موصول', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 1, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ذات': [{'word': 'ذات', 'vocalized': 'ذَاتُ', 'type_word': 'اسم', 'class_word': 'اسم موصول', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': '', 'need': ''}],
'أب': [{'word': 'أب', 'vocalized': 'أَبُ', 'type_word': 'اسم', 'class_word': 'الأسماء الخمسة', 'has_conjuction': 1, 'has_definition': 1, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 1, 'tanwin': 1, 'action': '', 'object_type': '', 'need': ''}],
'أبا': [{'word': 'أبا', 'vocalized': 'أَبَا', 'type_word': 'اسم', 'class_word': 'الأسماء الخمسة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'أبو': [{'word': 'أبو', 'vocalized': 'أَبُو', 'type_word': 'اسم', 'class_word': 'الأسماء الخمسة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'أبي': [{'word': 'أبي', 'vocalized': 'أَبِي', 'type_word': 'اسم', 'class_word': 'الأسماء الخمسة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'أخ': [{'word': 'أخ', 'vocalized': 'أَخُ', 'type_word': 'اسم', 'class_word': 'الأسماء الخمسة', 'has_conjuction': 1, 'has_definition': 1, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 1, 'tanwin': 1, 'action': '', 'object_type': '', 'need': ''}],
'أخا': [{'word': 'أخا', 'vocalized': 'أَخَا', 'type_word': 'اسم', 'class_word': 'الأسماء الخمسة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'أخو': [{'word': 'أخو', 'vocalized': 'أَخُو', 'type_word': 'اسم', 'class_word': 'الأسماء الخمسة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'أخي': [{'word': 'أخي', 'vocalized': 'أَخِي', 'type_word': 'اسم', 'class_word': 'الأسماء الخمسة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'حم': [{'word': 'حم', 'vocalized': 'حَمُ', 'type_word': 'اسم', 'class_word': 'الأسماء الخمسة', 'has_conjuction': 1, 'has_definition': 1, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 1, 'tanwin': 1, 'action': '', 'object_type': '', 'need': ''}],
'حما': [{'word': 'حما', 'vocalized': 'حَمَا', 'type_word': 'اسم', 'class_word': 'الأسماء الخمسة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'حمو': [{'word': 'حمو', 'vocalized': 'حَمُو', 'type_word': 'اسم', 'class_word': 'الأسماء الخمسة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'حمي': [{'word': 'حمي', 'vocalized': 'حَمِي', 'type_word': 'اسم', 'class_word': 'الأسماء الخمسة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'ذو': [{'word': 'ذو', 'vocalized': 'ذُو', 'type_word': 'اسم', 'class_word': 'الأسماء الخمسة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'فا': [{'word': 'فا', 'vocalized': 'فَا', 'type_word': 'اسم', 'class_word': 'الأسماء الخمسة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'فو': [{'word': 'فو', 'vocalized': 'فُو', 'type_word': 'اسم', 'class_word': 'الأسماء الخمسة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 1, 'tanwin': 1, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'في': [{'word': 'في', 'vocalized': 'فِي', 'type_word': 'اسم', 'class_word': 'الأسماء الخمسة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''},
{'word': 'في', 'vocalized': 'فِي', 'type_word': 'حرف', 'class_word': 'حرف جر', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'لن': [{'word': 'لن', 'vocalized': 'لَنْ', 'type_word': 'حرف', 'class_word': 'الحروف(حروف)', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'لو': [{'word': 'لو', 'vocalized': 'لَوْ', 'type_word': 'حرف', 'class_word': 'الحروف(حروف)', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'لولا': [{'word': 'لولا', 'vocalized': 'لَوْلَا', 'type_word': 'حرف', 'class_word': 'الحروف(حروف)', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'لوما': [{'word': 'لوما', 'vocalized': 'لَوْمَا', 'type_word': 'حرف', 'class_word': 'الحروف(حروف)', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'نعم': [{'word': 'نعم', 'vocalized': 'نَعَمْ', 'type_word': 'حرف', 'class_word': 'الحروف(حروف)', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''},
{'word': 'نعم', 'vocalized': 'نِعْمَ', 'type_word': 'فعل', 'class_word': 'المدح والذم', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'بئس': [{'word': 'بئس', 'vocalized': 'بِئْسَ', 'type_word': 'فعل', 'class_word': 'المدح والذم', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'حبذا': [{'word': 'حبذا', 'vocalized': 'حَبَّذَا', 'type_word': 'فعل', 'class_word': 'المدح والذم', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'ساء': [{'word': 'ساء', 'vocalized': 'سَاءَ', 'type_word': 'فعل', 'class_word': 'المدح والذم', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'ساءما': [{'word': 'ساءما', 'vocalized': 'سَاءَمَا', 'type_word': 'فعل', 'class_word': 'المدح والذم', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'نعما': [{'word': 'نعما', 'vocalized': 'نِعِمّا', 'type_word': 'فعل', 'class_word': 'المدح والذم', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'لا': [{'word': 'لا', 'vocalized': 'لَا', 'type_word': 'حرف', 'class_word': 'المشبهة بليس', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''},
{'word': 'لا', 'vocalized': 'لَا', 'type_word': 'حرف', 'class_word': 'نافية للجنس', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''},
{'word': 'لا', 'vocalized': 'لَا', 'type_word': 'حرف', 'class_word': 'ناهية', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جازم', 'object_type': 'فعل', 'need': ''},
{'word': 'لا', 'vocalized': 'لَا', 'type_word': 'حرف', 'class_word': 'نافية', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'فعل', 'need': ''}],
'لات': [{'word': 'لات', 'vocalized': 'لَاتَ', 'type_word': 'حرف', 'class_word': 'المشبهة بليس', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'حتى': [{'word': 'حتى', 'vocalized': 'حَتَّى', 'type_word': 'حرف', 'class_word': 'تعليل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''},
{'word': 'حتى', 'vocalized': 'حَتَّى', 'type_word': 'حرف', 'class_word': 'حرف جر', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'كي': [{'word': 'كي', 'vocalized': 'كَيْ', 'type_word': 'حرف', 'class_word': 'تعليل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'فعل', 'need': ''}],
'أجمع': [{'word': 'أجمع', 'vocalized': 'أَجْمَعَ', 'type_word': 'اسم', 'class_word': 'توكيد', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'عامة': [{'word': 'عامة', 'vocalized': 'عَامَّةَ', 'type_word': 'اسم', 'class_word': 'توكيد', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'عين': [{'word': 'عين', 'vocalized': 'عَيْنَ', 'type_word': 'اسم', 'class_word': 'توكيد', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'كلا': [{'word': 'كلا', 'vocalized': 'كِلَا', 'type_word': 'حرف', 'class_word': 'توكيد', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''},
{'word': 'كلا', 'vocalized': 'كَلَّا', 'type_word': 'حرف', 'class_word': 'حرف ردع', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'كلاهما': [{'word': 'كلاهما', 'vocalized': 'كِلَاهُمَا', 'type_word': 'حرف', 'class_word': 'توكيد', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'كلتا': [{'word': 'كلتا', 'vocalized': 'كِلْتَا', 'type_word': 'اسم', 'class_word': 'توكيد', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'كليكما': [{'word': 'كليكما', 'vocalized': 'كِلَيْكُمَا', 'type_word': 'حرف', 'class_word': 'توكيد', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'كليهما': [{'word': 'كليهما', 'vocalized': 'كِلَيْهِمَا', 'type_word': 'حرف', 'class_word': 'توكيد', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'نفس': [{'word': 'نفس', 'vocalized': 'نَفْسُ', 'type_word': 'اسم', 'class_word': 'توكيد', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ء': [{'word': 'ء', 'vocalized': 'ء', 'type_word': 'حرف ابجدي', 'class_word': 'حرف ابجدي', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'آ': [{'word': 'آ', 'vocalized': 'آ', 'type_word': 'حرف ابجدي', 'class_word': 'حرف ابجدي', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'أ': [{'word': 'أ', 'vocalized': 'أ', 'type_word': 'حرف ابجدي', 'class_word': 'حرف ابجدي', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''},
{'word': 'أ', 'vocalized': 'أ', 'type_word': 'حرف ابجدي', 'class_word': 'حرف ابجدي', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ؤ': [{'word': 'ؤ', 'vocalized': 'ؤ', 'type_word': 'حرف ابجدي', 'class_word': 'حرف ابجدي', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ئ': [{'word': 'ئ', 'vocalized': 'ئ', 'type_word': 'حرف ابجدي', 'class_word': 'حرف ابجدي', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ب': [{'word': 'ب', 'vocalized': 'ب', 'type_word': 'حرف ابجدي', 'class_word': 'حرف ابجدي', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ة': [{'word': 'ة', 'vocalized': 'ة', 'type_word': 'حرف ابجدي', 'class_word': 'حرف ابجدي', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ت': [{'word': 'ت', 'vocalized': 'ت', 'type_word': 'حرف ابجدي', 'class_word': 'حرف ابجدي', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ث': [{'word': 'ث', 'vocalized': 'ث', 'type_word': 'حرف ابجدي', 'class_word': 'حرف ابجدي', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ج': [{'word': 'ج', 'vocalized': 'ج', 'type_word': 'حرف ابجدي', 'class_word': 'حرف ابجدي', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ح': [{'word': 'ح', 'vocalized': 'ح', 'type_word': 'حرف ابجدي', 'class_word': 'حرف ابجدي', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'خ': [{'word': 'خ', 'vocalized': 'خ', 'type_word': 'حرف ابجدي', 'class_word': 'حرف ابجدي', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'د': [{'word': 'د', 'vocalized': 'د', 'type_word': 'حرف ابجدي', 'class_word': 'حرف ابجدي', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ذ': [{'word': 'ذ', 'vocalized': 'ذ', 'type_word': 'حرف ابجدي', 'class_word': 'حرف ابجدي', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ر': [{'word': 'ر', 'vocalized': 'ر', 'type_word': 'حرف ابجدي', 'class_word': 'حرف ابجدي', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ز': [{'word': 'ز', 'vocalized': 'ز', 'type_word': 'حرف ابجدي', 'class_word': 'حرف ابجدي', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'س': [{'word': 'س', 'vocalized': 'س', 'type_word': 'حرف ابجدي', 'class_word': 'حرف ابجدي', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ش': [{'word': 'ش', 'vocalized': 'ش', 'type_word': 'حرف ابجدي', 'class_word': 'حرف ابجدي', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ص': [{'word': 'ص', 'vocalized': 'ص', 'type_word': 'حرف ابجدي', 'class_word': 'حرف ابجدي', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ض': [{'word': 'ض', 'vocalized': 'ض', 'type_word': 'حرف ابجدي', 'class_word': 'حرف ابجدي', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ط': [{'word': 'ط', 'vocalized': 'ط', 'type_word': 'حرف ابجدي', 'class_word': 'حرف ابجدي', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ظ': [{'word': 'ظ', 'vocalized': 'ظ', 'type_word': 'حرف ابجدي', 'class_word': 'حرف ابجدي', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ع': [{'word': 'ع', 'vocalized': 'ع', 'type_word': 'حرف ابجدي', 'class_word': 'حرف ابجدي', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'غ': [{'word': 'غ', 'vocalized': 'غ', 'type_word': 'حرف ابجدي', 'class_word': 'حرف ابجدي', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ف': [{'word': 'ف', 'vocalized': 'ف', 'type_word': 'حرف ابجدي', 'class_word': 'حرف ابجدي', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ق': [{'word': 'ق', 'vocalized': 'ق', 'type_word': 'حرف ابجدي', 'class_word': 'حرف ابجدي', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ك': [{'word': 'ك', 'vocalized': 'ك', 'type_word': 'حرف ابجدي', 'class_word': 'حرف ابجدي', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ل': [{'word': 'ل', 'vocalized': 'ل', 'type_word': 'حرف ابجدي', 'class_word': 'حرف ابجدي', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'م': [{'word': 'م', 'vocalized': 'م', 'type_word': 'حرف ابجدي', 'class_word': 'حرف ابجدي', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ن': [{'word': 'ن', 'vocalized': 'ن', 'type_word': 'حرف ابجدي', 'class_word': 'حرف ابجدي', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ه': [{'word': 'ه', 'vocalized': 'ه', 'type_word': 'حرف ابجدي', 'class_word': 'حرف ابجدي', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'و': [{'word': 'و', 'vocalized': 'و', 'type_word': 'حرف ابجدي', 'class_word': 'حرف ابجدي', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''},
{'word': 'و', 'vocalized': 'وَ', 'type_word': 'حرف', 'class_word': 'حرف عطف منفصل', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'متبع', 'object_type': 'اسم فعل', 'need': ''},
{'word': 'و', 'vocalized': 'وَ', 'type_word': 'حرف', 'class_word': 'حرف عطف', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'متبع', 'object_type': '', 'need': ''}],
'ى': [{'word': 'ى', 'vocalized': 'ى', 'type_word': 'حرف ابجدي', 'class_word': 'حرف ابجدي', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ي': [{'word': 'ي', 'vocalized': 'ي', 'type_word': 'حرف ابجدي', 'class_word': 'حرف ابجدي', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'إلا': [{'word': 'إلا', 'vocalized': 'إلّا', 'type_word': 'حرف', 'class_word': 'حرف استثناء', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''},
{'word': 'إلا', 'vocalized': 'إِلَّا', 'type_word': 'حرف', 'class_word': 'حرف استثناء', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': 'اسم', 'need': ''}],
'حاشا': [{'word': 'حاشا', 'vocalized': 'حَاشَا', 'type_word': 'فعل', 'class_word': 'حرف استثناء', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'خلا': [{'word': 'خلا', 'vocalized': 'خَلَا', 'type_word': 'فعل', 'class_word': 'حرف استثناء', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'عدا': [{'word': 'عدا', 'vocalized': 'عَدَا', 'type_word': 'فعل', 'class_word': 'حرف استثناء', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'فيم': [{'word': 'فيم', 'vocalized': 'فِيمَ', 'type_word': 'حرف', 'class_word': 'حرف استفهام', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'فيما': [{'word': 'فيما', 'vocalized': 'فِيمَا', 'type_word': 'حرف', 'class_word': 'حرف استفهام', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'هل': [{'word': 'هل', 'vocalized': 'هَلْ', 'type_word': 'حرف', 'class_word': 'حرف استفهام', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'سوف': [{'word': 'سوف', 'vocalized': 'سَوْفَ', 'type_word': 'حرف', 'class_word': 'حرف استقبال', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'فعل', 'need': ''}],
'هلا': [{'word': 'هلا', 'vocalized': 'هَلّا', 'type_word': 'حرف', 'class_word': 'حرف تحضيض', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'قد': [{'word': 'قد', 'vocalized': 'قَدْ', 'type_word': 'حرف', 'class_word': 'حرف تحقيق/ توقع', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'عاطل', 'object_type': 'فعل', 'need': ''}],
'إما': [{'word': 'إما', 'vocalized': 'إِمَّا', 'type_word': 'حرف', 'class_word': 'حرف تخيير وتفصيل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'كأنما': [{'word': 'كأنما', 'vocalized': 'كَأَنَّمَا', 'type_word': 'حرف', 'class_word': 'حرف تشبيه', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'كما': [{'word': 'كما', 'vocalized': 'كَمَا', 'type_word': 'حرف', 'class_word': 'حرف تشبيه', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'لكي': [{'word': 'لكي', 'vocalized': 'لِكَيْ', 'type_word': 'حرف', 'class_word': 'حرف تعليل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'فعل', 'need': ''}],
'لكيلا': [{'word': 'لكيلا', 'vocalized': 'لِكَيْلَا', 'type_word': 'حرف', 'class_word': 'حرف تعليل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'فعل', 'need': ''}],
'إلى': [{'word': 'إلى', 'vocalized': 'إِلَى', 'type_word': 'حرف', 'class_word': 'حرف جر', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'بلا': [{'word': 'بلا', 'vocalized': 'بِلَا', 'type_word': 'حرف', 'class_word': 'حرف جر', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'رب': [{'word': 'رب', 'vocalized': 'رُبَّ', 'type_word': 'حرف', 'class_word': 'حرف جر', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'على': [{'word': 'على', 'vocalized': 'عَلَى', 'type_word': 'حرف', 'class_word': 'حرف جر', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'عن': [{'word': 'عن', 'vocalized': 'عَنْ', 'type_word': 'حرف', 'class_word': 'حرف جر', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'مذ': [{'word': 'مذ', 'vocalized': 'مُذْ', 'type_word': 'حرف', 'class_word': 'حرف جر', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''},
{'word': 'مذ', 'vocalized': 'مُذْ', 'type_word': 'حرف', 'class_word': 'ظرف زمان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'منذ': [{'word': 'منذ', 'vocalized': 'مُنْذُ', 'type_word': 'حرف', 'class_word': 'حرف جر', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''},
{'word': 'منذ', 'vocalized': 'مُنْذُ', 'type_word': 'حرف', 'class_word': 'ظرف زمان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'عما': [{'word': 'عما', 'vocalized': 'عَمَّا', 'type_word': 'حرف', 'class_word': 'حرف جر مكفوف', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'عاطل', 'object_type': '', 'need': ''}],
'لم': [{'word': 'لم', 'vocalized': 'لَمْ', 'type_word': 'حرف', 'class_word': 'حرف جزم', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جازم', 'object_type': 'فعل', 'need': ''}],
'أجل': [{'word': 'أجل', 'vocalized': 'أَجَلْ', 'type_word': 'حرف', 'class_word': 'حرف جواب', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'إذن': [{'word': 'إذن', 'vocalized': 'إِذَنْ', 'type_word': 'حرف', 'class_word': 'حرف جواب', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'إي': [{'word': 'إي', 'vocalized': 'إِي', 'type_word': 'حرف', 'class_word': 'حرف جواب', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'بلى': [{'word': 'بلى', 'vocalized': 'بَلَى', 'type_word': 'حرف', 'class_word': 'حرف جواب', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'جلل': [{'word': 'جلل', 'vocalized': 'جَلَلْ', 'type_word': 'حرف', 'class_word': 'حرف جواب', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'جير': [{'word': 'جير', 'vocalized': 'جَيْرِ', 'type_word': 'حرف', 'class_word': 'حرف جواب', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'إذما': [{'word': 'إذما', 'vocalized': 'إِذْمَا', 'type_word': 'حرف', 'class_word': 'حرف شرط', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'لئن': [{'word': 'لئن', 'vocalized': 'لَئِنْ', 'type_word': 'حرف', 'class_word': 'حرف شرط', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'أما': [{'word': 'أما', 'vocalized': 'أَمَّا', 'type_word': 'حرف', 'class_word': 'حرف شرط وتفصيل وتوكيد', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''},
{'word': 'أما', 'vocalized': 'أَمَا', 'type_word': 'حرف', 'class_word': 'حرف عرض', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ألا': [{'word': 'ألا', 'vocalized': 'أَلَا', 'type_word': 'حرف', 'class_word': 'حرف عرض', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'أم': [{'word': 'أم', 'vocalized': 'أَمْ', 'type_word': 'حرف', 'class_word': 'حرف عطف منفصل', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'متبع', 'object_type': 'فعل:اسم', 'need': ''}],
'أو': [{'word': 'أو', 'vocalized': 'أَوْ', 'type_word': 'حرف', 'class_word': 'حرف عطف منفصل', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'متبع', 'object_type': '', 'need': ''}],
'بل': [{'word': 'بل', 'vocalized': 'بَلْ', 'type_word': 'حرف', 'class_word': 'حرف عطف منفصل', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'متبع', 'object_type': '', 'need': ''}],
'أيا': [{'word': 'أيا', 'vocalized': 'أَيَا', 'type_word': 'حرف', 'class_word': 'حرف نداء', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'لئلا': [{'word': 'لئلا', 'vocalized': 'لِئَلَّا', 'type_word': 'حرف', 'class_word': 'حرف نصب', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'فعل', 'need': ''}],
'بك': [{'word': 'بك', 'vocalized': 'بِكَ', 'type_word': 'ضمير', 'class_word': 'ضمير متصل مجرور', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'بكما': [{'word': 'بكما', 'vocalized': 'بِكُمَا', 'type_word': 'ضمير', 'class_word': 'ضمير متصل مجرور', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'بكن': [{'word': 'بكن', 'vocalized': 'بِكُنَّ', 'type_word': 'ضمير', 'class_word': 'ضمير متصل مجرور', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'بنا': [{'word': 'بنا', 'vocalized': 'بِنَا', 'type_word': 'ضمير', 'class_word': 'ضمير متصل مجرور', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'به': [{'word': 'به', 'vocalized': 'بِهِ', 'type_word': 'ضمير', 'class_word': 'ضمير متصل مجرور', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'بها': [{'word': 'بها', 'vocalized': 'بِهَا', 'type_word': 'ضمير', 'class_word': 'ضمير متصل مجرور', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'بي': [{'word': 'بي', 'vocalized': 'بِي', 'type_word': 'ضمير', 'class_word': 'ضمير متصل مجرور', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'لك': [{'word': 'لك', 'vocalized': 'لَكَ', 'type_word': 'ضمير', 'class_word': 'ضمير متصل مجرور', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'لكم': [{'word': 'لكم', 'vocalized': 'لَكُمْ', 'type_word': 'ضمير', 'class_word': 'ضمير متصل مجرور', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'لكما': [{'word': 'لكما', 'vocalized': 'لَكُمَا', 'type_word': 'ضمير', 'class_word': 'ضمير متصل مجرور', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'لنا': [{'word': 'لنا', 'vocalized': 'لَنَا', 'type_word': 'ضمير', 'class_word': 'ضمير متصل مجرور', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'له': [{'word': 'له', 'vocalized': 'لَهُ', 'type_word': 'ضمير', 'class_word': 'ضمير متصل مجرور', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'لها': [{'word': 'لها', 'vocalized': 'لَهَا', 'type_word': 'ضمير', 'class_word': 'ضمير متصل مجرور', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'لي': [{'word': 'لي', 'vocalized': 'لِي', 'type_word': 'ضمير', 'class_word': 'ضمير متصل مجرور', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'أنا': [{'word': 'أنا', 'vocalized': 'أَنَا', 'type_word': 'ضمير', 'class_word': 'ضمير منفصل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''},
{'word': 'أنا', 'vocalized': 'أَنَّا', 'type_word': 'حرف', 'class_word': 'ناصب ومنصوب', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'عاطل', 'object_type': '', 'need': ''}],
'أنت': [{'word': 'أنت', 'vocalized': 'أَنْتَ', 'type_word': 'ضمير', 'class_word': 'ضمير منفصل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''},
{'word': 'أنت', 'vocalized': 'أَنْتِ', 'type_word': 'ضمير', 'class_word': 'ضمير منفصل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'أنتم': [{'word': 'أنتم', 'vocalized': 'أَنْتُمْ', 'type_word': 'ضمير', 'class_word': 'ضمير منفصل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'أنتما': [{'word': 'أنتما', 'vocalized': 'أَنْتُمَا', 'type_word': 'ضمير', 'class_word': 'ضمير منفصل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'أنتن': [{'word': 'أنتن', 'vocalized': 'أَنْتُنَّ', 'type_word': 'ضمير', 'class_word': 'ضمير منفصل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'نحن': [{'word': 'نحن', 'vocalized': 'نَحْنُ', 'type_word': 'ضمير', 'class_word': 'ضمير منفصل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'هم': [{'word': 'هم', 'vocalized': 'هُمْ', 'type_word': 'ضمير', 'class_word': 'ضمير منفصل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'هما': [{'word': 'هما', 'vocalized': 'هُمَا', 'type_word': 'ضمير', 'class_word': 'ضمير منفصل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'هن': [{'word': 'هن', 'vocalized': 'هُنَّ', 'type_word': 'ضمير', 'class_word': 'ضمير منفصل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'هو': [{'word': 'هو', 'vocalized': 'هُوَ', 'type_word': 'ضمير', 'class_word': 'ضمير منفصل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'هي': [{'word': 'هي', 'vocalized': 'هِي', 'type_word': 'ضمير', 'class_word': 'ضمير منفصل', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'إياك': [{'word': 'إياك', 'vocalized': 'إِيَّاكَ', 'type_word': 'ضمير', 'class_word': 'ضمير منفصل منصوب', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''},
{'word': 'إياك', 'vocalized': 'إِيَّاكِ', 'type_word': 'ضمير', 'class_word': 'ضمير منفصل منصوب', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'إياكم': [{'word': 'إياكم', 'vocalized': 'إِيَّاكُمْ', 'type_word': 'ضمير', 'class_word': 'ضمير منفصل منصوب', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'إياكما': [{'word': 'إياكما', 'vocalized': 'إِيَّاكُمَا', 'type_word': 'ضمير', 'class_word': 'ضمير منفصل منصوب', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'إياكن': [{'word': 'إياكن', 'vocalized': 'إِيَّاكُنَّ', 'type_word': 'ضمير', 'class_word': 'ضمير منفصل منصوب', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'إيانا': [{'word': 'إيانا', 'vocalized': 'إِيَّانَا', 'type_word': 'ضمير', 'class_word': 'ضمير منفصل منصوب', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'إياه': [{'word': 'إياه', 'vocalized': 'إِيَّاهُ', 'type_word': 'ضمير', 'class_word': 'ضمير منفصل منصوب', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'إياها': [{'word': 'إياها', 'vocalized': 'إِيَّاهَا', 'type_word': 'ضمير', 'class_word': 'ضمير منفصل منصوب', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'إياهم': [{'word': 'إياهم', 'vocalized': 'إِيَّاهُمْ', 'type_word': 'ضمير', 'class_word': 'ضمير منفصل منصوب', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'إياهما': [{'word': 'إياهما', 'vocalized': 'إِيَّاهُمَا', 'type_word': 'ضمير', 'class_word': 'ضمير منفصل منصوب', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'إياهن': [{'word': 'إياهن', 'vocalized': 'إِيَّاهُنَّ', 'type_word': 'ضمير', 'class_word': 'ضمير منفصل منصوب', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'إياي': [{'word': 'إياي', 'vocalized': 'إِيَّايَ', 'type_word': 'ضمير', 'class_word': 'ضمير منفصل منصوب', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'بدون': [{'word': 'بدون', 'vocalized': 'بِدُونِ', 'type_word': 'اسم', 'class_word': 'ظرف', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'دون': [{'word': 'دون', 'vocalized': 'دُونَ', 'type_word': 'اسم', 'class_word': 'ظرف', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'ريث': [{'word': 'ريث', 'vocalized': 'رَيْثَ', 'type_word': 'اسم', 'class_word': 'ظرف', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'عند': [{'word': 'عند', 'vocalized': 'عِنْدَ', 'type_word': 'اسم', 'class_word': 'ظرف', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'عندما': [{'word': 'عندما', 'vocalized': 'عِنْدَمَا', 'type_word': 'اسم', 'class_word': 'ظرف', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'عوض': [{'word': 'عوض', 'vocalized': 'عِوَضَ', 'type_word': 'اسم', 'class_word': 'ظرف', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'قبل': [{'word': 'قبل', 'vocalized': 'قَبْلَ', 'type_word': 'اسم', 'class_word': 'ظرف', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'قط': [{'word': 'قط', 'vocalized': 'قَطُّ', 'type_word': 'حرف', 'class_word': 'ظرف', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'كلما': [{'word': 'كلما', 'vocalized': 'كُلَّمَا', 'type_word': 'اسم', 'class_word': 'ظرف', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'لدن': [{'word': 'لدن', 'vocalized': 'لَدُنْ', 'type_word': 'اسم', 'class_word': 'ظرف', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'لدى': [{'word': 'لدى', 'vocalized': 'لَدَى', 'type_word': 'اسم', 'class_word': 'ظرف', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'آناء': [{'word': 'آناء', 'vocalized': 'آنَاءَ', 'type_word': 'اسم', 'class_word': 'ظرف زمان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'آنفا': [{'word': 'آنفا', 'vocalized': 'آنِفًا', 'type_word': 'اسم', 'class_word': 'ظرف زمان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'عاطل', 'object_type': '', 'need': ''}],
'أبدا': [{'word': 'أبدا', 'vocalized': 'أَبَدًا', 'type_word': 'اسم', 'class_word': 'ظرف زمان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'عاطل', 'object_type': '', 'need': ''}],
'أثناء': [{'word': 'أثناء', 'vocalized': 'أَثْنَاءَ', 'type_word': 'اسم', 'class_word': 'ظرف زمان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'أصلا': [{'word': 'أصلا', 'vocalized': 'أَصْلًا', 'type_word': 'اسم', 'class_word': 'ظرف زمان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'عاطل', 'object_type': '', 'need': ''}],
'أمد': [{'word': 'أمد', 'vocalized': 'أَمَدَ', 'type_word': 'اسم', 'class_word': 'ظرف زمان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'أمس': [{'word': 'أمس', 'vocalized': 'أَمْسِ', 'type_word': 'اسم', 'class_word': 'ظرف زمان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'أول': [{'word': 'أول', 'vocalized': 'أَوَّلَ', 'type_word': 'اسم', 'class_word': 'ظرف زمان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'الآن': [{'word': 'الآن', 'vocalized': 'الْآنَ', 'type_word': 'اسم', 'class_word': 'ظرف زمان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 1, 'is_inflected': 0, 'tanwin': 0, 'action': 'عاطل', 'object_type': '', 'need': ''}],
'بعد': [{'word': 'بعد', 'vocalized': 'بَعْدَ', 'type_word': 'اسم', 'class_word': 'ظرف زمان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'تارة': [{'word': 'تارة', 'vocalized': 'تَارَةً', 'type_word': 'اسم', 'class_word': 'ظرف زمان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'عاطل', 'object_type': '', 'need': ''}],
'حين': [{'word': 'حين', 'vocalized': 'حِينَ', 'type_word': 'اسم', 'class_word': 'ظرف زمان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'زمان': [{'word': 'زمان', 'vocalized': 'زَمَانَ', 'type_word': 'اسم', 'class_word': 'ظرف زمان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 1, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'ساعة': [{'word': 'ساعة', 'vocalized': 'سَاعَةَ', 'type_word': 'اسم', 'class_word': 'ظرف زمان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 1, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'شهر': [{'word': 'شهر', 'vocalized': 'شَهْرَ', 'type_word': 'اسم', 'class_word': 'ظرف زمان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 1, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'صباح': [{'word': 'صباح', 'vocalized': 'صَبَاحَ', 'type_word': 'اسم', 'class_word': 'ظرف زمان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'ضحوة': [{'word': 'ضحوة', 'vocalized': 'ضَحْوَةً', 'type_word': 'اسم', 'class_word': 'ظرف زمان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'عاطل', 'object_type': '', 'need': ''}],
'غدا': [{'word': 'غدا', 'vocalized': 'غَدًا', 'type_word': 'اسم', 'class_word': 'ظرف زمان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'عاطل', 'object_type': '', 'need': ''},
{'word': 'غدا', 'vocalized': 'غَدَا', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 1, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'غداة': [{'word': 'غداة', 'vocalized': 'غَدَاةَ', 'type_word': 'اسم', 'class_word': 'ظرف زمان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'عاطل', 'object_type': '', 'need': ''}],
'غروب': [{'word': 'غروب', 'vocalized': 'غُرُوبَ', 'type_word': 'اسم', 'class_word': 'ظرف زمان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 1, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'لحظة': [{'word': 'لحظة', 'vocalized': 'لَحْظَةَ', 'type_word': 'اسم', 'class_word': 'ظرف زمان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 1, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'ليل': [{'word': 'ليل', 'vocalized': 'لَيْلَ', 'type_word': 'اسم', 'class_word': 'ظرف زمان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 1, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'مرة': [{'word': 'مرة', 'vocalized': 'مَرَّةً', 'type_word': 'اسم', 'class_word': 'ظرف زمان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'مساء': [{'word': 'مساء', 'vocalized': 'مَسَاءَ', 'type_word': 'اسم', 'class_word': 'ظرف زمان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'وقت': [{'word': 'وقت', 'vocalized': 'وَقْتَ', 'type_word': 'اسم', 'class_word': 'ظرف زمان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 1, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'يومئذ': [{'word': 'يومئذ', 'vocalized': 'يَوْمَئِذٍ', 'type_word': 'اسم', 'class_word': 'ظرف زمان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'عاطل', 'object_type': '', 'need': ''}],
'خلال': [{'word': 'خلال', 'vocalized': 'خِلَالَ', 'type_word': 'اسم', 'class_word': 'ظرف زمان/مكان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'أسفل': [{'word': 'أسفل', 'vocalized': 'أَسْفَلَ', 'type_word': 'اسم', 'class_word': 'ظرف مكان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 1, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'أعلى': [{'word': 'أعلى', 'vocalized': 'أَعْلَى', 'type_word': 'اسم', 'class_word': 'ظرف مكان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'أمام': [{'word': 'أمام', 'vocalized': 'أَمَامَ', 'type_word': 'اسم', 'class_word': 'ظرف مكان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 1, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'إزاء': [{'word': 'إزاء', 'vocalized': 'إِزَاءَ', 'type_word': 'اسم', 'class_word': 'ظرف مكان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'بين': [{'word': 'بين', 'vocalized': 'بَيْنَ', 'type_word': 'اسم', 'class_word': 'ظرف مكان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'تحت': [{'word': 'تحت', 'vocalized': 'تَحْتَ', 'type_word': 'اسم', 'class_word': 'ظرف مكان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'جنب': [{'word': 'جنب', 'vocalized': 'جَنْبَ', 'type_word': 'اسم', 'class_word': 'ظرف مكان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 1, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'جنوب': [{'word': 'جنوب', 'vocalized': 'جَنُوبَ', 'type_word': 'اسم', 'class_word': 'ظرف مكان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 1, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'حوالى': [{'word': 'حوالى', 'vocalized': 'حَوَالَى', 'type_word': 'اسم', 'class_word': 'ظرف مكان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'حول': [{'word': 'حول', 'vocalized': 'حَوْلَ', 'type_word': 'اسم', 'class_word': 'ظرف مكان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'خلف': [{'word': 'خلف', 'vocalized': 'خَلْفَ', 'type_word': 'اسم', 'class_word': 'ظرف مكان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''},
{'word': 'خلف', 'vocalized': 'خَلْفَ', 'type_word': 'اسم', 'class_word': 'ظرف مكان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 1, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'شرق': [{'word': 'شرق', 'vocalized': 'شَرْقَ', 'type_word': 'اسم', 'class_word': 'ظرف مكان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 1, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'شمال': [{'word': 'شمال', 'vocalized': 'شَمَالَ', 'type_word': 'اسم', 'class_word': 'ظرف مكان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'ضمن': [{'word': 'ضمن', 'vocalized': 'ضِمْنَ', 'type_word': 'اسم', 'class_word': 'ظرف مكان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'غرب': [{'word': 'غرب', 'vocalized': 'غَرْبَ', 'type_word': 'اسم', 'class_word': 'ظرف مكان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 1, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'فوق': [{'word': 'فوق', 'vocalized': 'فَوْقَ', 'type_word': 'اسم', 'class_word': 'ظرف مكان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'يمين': [{'word': 'يمين', 'vocalized': 'يَمِينَ', 'type_word': 'اسم', 'class_word': 'ظرف مكان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'طالما': [{'word': 'طالما', 'vocalized': 'طَالَمَا', 'type_word': 'فعل', 'class_word': 'فعل جامد', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'قلما': [{'word': 'قلما', 'vocalized': 'قَلَّمَا', 'type_word': 'فعل', 'class_word': 'فعل جامد', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'أخذ': [{'word': 'أخذ', 'vocalized': 'أَخَذَ', 'type_word': 'فعل', 'class_word': 'كاد و اخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 1, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'أقبل': [{'word': 'أقبل', 'vocalized': 'أَقْبَلَ', 'type_word': 'فعل', 'class_word': 'كاد و اخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 1, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'أنشأ': [{'word': 'أنشأ', 'vocalized': 'أَنْشَأَ', 'type_word': 'فعل', 'class_word': 'كاد و اخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 1, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'أوشك': [{'word': 'أوشك', 'vocalized': 'أَوْشَكَ', 'type_word': 'فعل', 'class_word': 'كاد و اخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 1, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'ابتدأ': [{'word': 'ابتدأ', 'vocalized': 'ابْتَدَأَ', 'type_word': 'فعل', 'class_word': 'كاد و اخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 1, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'اخلولق': [{'word': 'اخلولق', 'vocalized': 'اِخْلَوْلَقَ', 'type_word': 'فعل', 'class_word': 'كاد و اخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 1, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'انبرى': [{'word': 'انبرى', 'vocalized': 'اِنْبَرَى', 'type_word': 'فعل', 'class_word': 'كاد و اخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 1, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'جعل': [{'word': 'جعل', 'vocalized': 'جَعَلَ', 'type_word': 'فعل', 'class_word': 'كاد و اخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 1, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'حرى': [{'word': 'حرى', 'vocalized': 'حَرَى', 'type_word': 'فعل', 'class_word': 'كاد و اخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 1, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'شرع': [{'word': 'شرع', 'vocalized': 'شَرَعَ', 'type_word': 'فعل', 'class_word': 'كاد و اخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 1, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'طفق': [{'word': 'طفق', 'vocalized': 'طَفِقَ', 'type_word': 'فعل', 'class_word': 'كاد و اخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 1, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'عسى': [{'word': 'عسى', 'vocalized': 'عَسَى', 'type_word': 'فعل', 'class_word': 'كاد و اخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 1, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'علق': [{'word': 'علق', 'vocalized': 'عَلِقَ', 'type_word': 'فعل', 'class_word': 'كاد و اخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 1, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'قام': [{'word': 'قام', 'vocalized': 'قَامَ', 'type_word': 'فعل', 'class_word': 'كاد و اخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 1, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'كاد': [{'word': 'كاد', 'vocalized': 'كَادَ', 'type_word': 'فعل', 'class_word': 'كاد و اخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 1, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'كرب': [{'word': 'كرب', 'vocalized': 'كَرِبَ', 'type_word': 'فعل', 'class_word': 'كاد و اخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 1, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'لازالت': [{'word': 'لازالت', 'vocalized': 'لَازَالَتْ', 'type_word': 'فعل', 'class_word': 'كاد و اخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''},
{'word': 'لازالت', 'vocalized': 'لَازَالَتْ', 'type_word': 'فعل', 'class_word': 'كاد و اخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'هب': [{'word': 'هب', 'vocalized': 'هَبَّ', 'type_word': 'فعل', 'class_word': 'كاد و اخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 1, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'إنما': [{'word': 'إنما', 'vocalized': 'إِنَّمَا', 'type_word': 'حرف', 'class_word': 'كافة ومكفوفة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'عاطل', 'object_type': '', 'need': ''}],
'لكنما': [{'word': 'لكنما', 'vocalized': 'لَكِنَّمَا', 'type_word': 'حرف', 'class_word': 'كافة ومكفوفة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'عاطل', 'object_type': '', 'need': ''}],
'آض': [{'word': 'آض', 'vocalized': 'آضَ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 1, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'أصبح': [{'word': 'أصبح', 'vocalized': 'أَصْبَحَ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 1, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'أضحى': [{'word': 'أضحى', 'vocalized': 'أَضْحَى', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 1, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'أمسى': [{'word': 'أمسى', 'vocalized': 'أَمْسَى', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 1, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'ارتد': [{'word': 'ارتد', 'vocalized': 'اِرْتَدَّ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 1, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'استحال': [{'word': 'استحال', 'vocalized': 'اِسْتَحَالَ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 1, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'انقلب': [{'word': 'انقلب', 'vocalized': 'اِنْقَلَبَ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 1, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'بات': [{'word': 'بات', 'vocalized': 'بَاتَ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 1, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'تبدل': [{'word': 'تبدل', 'vocalized': 'تَبَدَّلَ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 1, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'تحول': [{'word': 'تحول', 'vocalized': 'تَحَوَّلَ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 1, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'حار': [{'word': 'حار', 'vocalized': 'حَارَ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 1, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'راح': [{'word': 'راح', 'vocalized': 'رَاحَ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 1, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'رجع': [{'word': 'رجع', 'vocalized': 'رَجَعَ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 1, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'صار': [{'word': 'صار', 'vocalized': 'صَارَ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 1, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'ظل': [{'word': 'ظل', 'vocalized': 'ظَلَّ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 1, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'عاد': [{'word': 'عاد', 'vocalized': 'عَادَ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 1, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'كان': [{'word': 'كان', 'vocalized': 'كَانَ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 1, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'لست': [{'word': 'لست', 'vocalized': 'لَسْتَ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''},
{'word': 'لست', 'vocalized': 'لَسْتُ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''},
{'word': 'لست', 'vocalized': 'لَسْتِ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'لستم': [{'word': 'لستم', 'vocalized': 'لَسْتُم', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'لستما': [{'word': 'لستما', 'vocalized': 'لَسْتُمَا', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'لستن': [{'word': 'لستن', 'vocalized': 'لَسْتُنَّ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'لسن': [{'word': 'لسن', 'vocalized': 'لَسْنَ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'لسنا': [{'word': 'لسنا', 'vocalized': 'لَسْنَا', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'ليس': [{'word': 'ليس', 'vocalized': 'لَيْسَ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'ليسا': [{'word': 'ليسا', 'vocalized': 'لَيْسَا', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'ليست': [{'word': 'ليست', 'vocalized': 'لَيْسَتْ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'ليستا': [{'word': 'ليستا', 'vocalized': 'لَيْسَتَا', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'ليسوا': [{'word': 'ليسوا', 'vocalized': 'لَيْسُوا', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'ماانفك': [{'word': 'ماانفك', 'vocalized': 'مَااِنْفَكَّ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 1, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'مابرح': [{'word': 'مابرح', 'vocalized': 'مَابَرِحَ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 1, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'مادام': [{'word': 'مادام', 'vocalized': 'مَادَامَ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 1, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'مازال': [{'word': 'مازال', 'vocalized': 'مَازَالَ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 1, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'مافتئ': [{'word': 'مافتئ', 'vocalized': 'مَافَتِئَ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 1, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'بضع': [{'word': 'بضع', 'vocalized': 'بِضْعَ', 'type_word': 'اسم', 'class_word': 'كناية', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'ذيت': [{'word': 'ذيت', 'vocalized': 'ذَيْتَ', 'type_word': 'اسم', 'class_word': 'كناية', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'فلان': [{'word': 'فلان', 'vocalized': 'فُلَانٌ', 'type_word': 'اسم', 'class_word': 'كناية', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'كأي': [{'word': 'كأي', 'vocalized': 'كَأَيٍّ', 'type_word': 'اسم', 'class_word': 'كناية', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'كأين': [{'word': 'كأين', 'vocalized': 'كَأَيَّنَ', 'type_word': 'اسم', 'class_word': 'كناية', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''},
{'word': 'كأين', 'vocalized': 'كَأَيْنَ', 'type_word': 'اسم', 'class_word': 'كناية', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'كذا': [{'word': 'كذا', 'vocalized': 'كَذَا', 'type_word': 'اسم', 'class_word': 'كناية', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'كيت': [{'word': 'كيت', 'vocalized': 'كَيْتَ', 'type_word': 'اسم', 'class_word': 'كناية', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'فقط': [{'word': 'فقط', 'vocalized': 'فَقَطْ', 'type_word': 'حرف', 'class_word': '', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'بن': [{'word': 'بن', 'vocalized': 'بْن', 'type_word': 'اسم', 'class_word': 'وصل الأعلام', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'أيضا': [{'word': 'أيضا', 'vocalized': 'أَيْضًا', 'type_word': 'اسم', 'class_word': 'توكيد', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'عاطل', 'object_type': '', 'need': ''}],
'بينما': [{'word': 'بينما', 'vocalized': 'بَيْنَمَا', 'type_word': 'اسم', 'class_word': 'ظرف زمان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'عاطل', 'object_type': '', 'need': ''}],
'جدا': [{'word': 'جدا', 'vocalized': 'جِدًّا', 'type_word': 'اسم', 'class_word': 'توكيد', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'عاطل', 'object_type': '', 'need': ''}],
'حينما': [{'word': 'حينما', 'vocalized': 'حِينَمَا', 'type_word': 'اسم', 'class_word': 'ظرف زمان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'عاطل', 'object_type': '', 'need': ''}],
'مثلما': [{'word': 'مثلما', 'vocalized': 'مِثْلَمَا', 'type_word': 'اسم', 'class_word': 'جار ومجرور', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'عاطل', 'object_type': '', 'need': ''}],
'حسبما': [{'word': 'حسبما', 'vocalized': 'حَسْبَمَا', 'type_word': 'اسم', 'class_word': 'جار ومجرور', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'عاطل', 'object_type': '', 'need': ''}],
'آنذاك': [{'word': 'آنذاك', 'vocalized': 'آنَذَاكَ', 'type_word': 'اسم', 'class_word': 'ظرف زمان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'عاطل', 'object_type': '', 'need': ''}],
'آنئذ': [{'word': 'آنئذ', 'vocalized': 'آنَئِذٍ', 'type_word': 'اسم', 'class_word': 'ظرف زمان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'عاطل', 'object_type': '', 'need': ''}],
'أية': [{'word': 'أية', 'vocalized': 'أَيَّةُ', 'type_word': 'اسم', 'class_word': 'اسم إضافة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 1, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 1, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'وراءكما': [{'word': 'وراءكما', 'vocalized': 'وَرَاءَكُما', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'وراءكم': [{'word': 'وراءكم', 'vocalized': 'وَرَاءَكُمْ', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'وراءكن': [{'word': 'وراءكن', 'vocalized': 'وَرَاءَكُنَّ', 'type_word': 'اسم فعل', 'class_word': 'اسم فعل', 'has_conjuction': 0, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'وراء': [{'word': 'وراء', 'vocalized': 'وَرَاءَ', 'type_word': 'اسم', 'class_word': 'ظرف مكان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 1, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'جار', 'object_type': 'اسم', 'need': ''}],
'بعدما': [{'word': 'بعدما', 'vocalized': 'بَعْدَمَا', 'type_word': 'اسم', 'class_word': 'ظرف زمان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'عاطل', 'object_type': '', 'need': ''}],
'بئسما': [{'word': 'بئسما', 'vocalized': 'بِئْسَمَا', 'type_word': 'اسم فعل', 'class_word': 'المدح والذم', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'عاطل', 'object_type': '', 'need': ''}],
'قبلما': [{'word': 'قبلما', 'vocalized': 'قَبْلَمَا', 'type_word': 'اسم', 'class_word': 'ظرف زمان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'عاطل', 'object_type': '', 'need': ''}],
'بعدئذ': [{'word': 'بعدئذ', 'vocalized': 'بَعْدَئِذٍ', 'type_word': 'اسم', 'class_word': 'ظرف زمان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'عاطل', 'object_type': '', 'need': ''}],
'حينئذ': [{'word': 'حينئذ', 'vocalized': 'حِينَئِذٍ', 'type_word': 'اسم', 'class_word': 'ظرف زمان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'عاطل', 'object_type': '', 'need': ''}],
'عندئذ': [{'word': 'عندئذ', 'vocalized': 'عِنْدَئِذٍ', 'type_word': 'اسم', 'class_word': 'ظرف زمان', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'عاطل', 'object_type': '', 'need': ''}],
'إنا': [{'word': 'إنا', 'vocalized': 'إِنَّا', 'type_word': 'حرف', 'class_word': 'ناصب ومنصوب', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 1, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'عاطل', 'object_type': '', 'need': ''}],
'هاذين': [{'word': 'هاذين', 'vocalized': 'هَاذَيْنِ', 'type_word': 'اسم', 'class_word': 'اسم إشارة', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 1, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': '', 'object_type': '', 'need': ''}],
'لازال': [{'word': 'لازال', 'vocalized': 'لَازَالَ', 'type_word': 'فعل', 'class_word': 'كاد و اخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'لازلت': [{'word': 'لازلت', 'vocalized': 'لَازِلْتُ', 'type_word': 'فعل', 'class_word': 'كاد و اخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''},
{'word': 'لازلت', 'vocalized': 'لَازِلْتَ', 'type_word': 'فعل', 'class_word': 'كاد و اخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''},
{'word': 'لازلت', 'vocalized': 'لَازِلْتِ', 'type_word': 'فعل', 'class_word': 'كاد و اخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'لازلنا': [{'word': 'لازلنا', 'vocalized': 'لَازِلْنَا', 'type_word': 'فعل', 'class_word': 'كاد و اخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'لازلتما': [{'word': 'لازلتما', 'vocalized': 'لَازِلْتُمَا', 'type_word': 'فعل', 'class_word': 'كاد و اخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'لازلتم': [{'word': 'لازلتم', 'vocalized': 'لَازِلْتُم', 'type_word': 'فعل', 'class_word': 'كاد و اخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'لازلتن': [{'word': 'لازلتن', 'vocalized': 'لَازِلْتُنَّ', 'type_word': 'فعل', 'class_word': 'كاد و اخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'لازالا': [{'word': 'لازالا', 'vocalized': 'لَازَالَا', 'type_word': 'فعل', 'class_word': 'كاد و اخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'لازالتا': [{'word': 'لازالتا', 'vocalized': 'لَازَالَتَا', 'type_word': 'فعل', 'class_word': 'كاد و اخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'لازالوا': [{'word': 'لازالوا', 'vocalized': 'لَازَالُوا', 'type_word': 'فعل', 'class_word': 'كاد و اخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'لازلن': [{'word': 'لازلن', 'vocalized': 'لَازِلْنَ', 'type_word': 'فعل', 'class_word': 'كاد و اخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 1, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'مازلت': [{'word': 'مازلت', 'vocalized': 'مَازِلْتُ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''},
{'word': 'مازلت', 'vocalized': 'مَازِلْتَ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''},
{'word': 'مازلت', 'vocalized': 'مَازِلْتِ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'مازلنا': [{'word': 'مازلنا', 'vocalized': 'مَازِلْنَا', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'مازلتما': [{'word': 'مازلتما', 'vocalized': 'مَازِلْتُمَا', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'مازلتم': [{'word': 'مازلتم', 'vocalized': 'مَازِلْتُم', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'مازلتن': [{'word': 'مازلتن', 'vocalized': 'مَازِلْتُنَّ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'مازالا': [{'word': 'مازالا', 'vocalized': 'مَازَالَا', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'مازالتا': [{'word': 'مازالتا', 'vocalized': 'مَازَالَتَا', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'مازالوا': [{'word': 'مازالوا', 'vocalized': 'مَازَالُوا', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'مازلن': [{'word': 'مازلن', 'vocalized': 'مَازِلْنَ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'مادمت': [{'word': 'مادمت', 'vocalized': 'مَادُمْتُ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''},
{'word': 'مادمت', 'vocalized': 'مَادُمْتَ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''},
{'word': 'مادمت', 'vocalized': 'مَادُمْتِ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'مادمنا': [{'word': 'مادمنا', 'vocalized': 'مَادُمْنَا', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'مادمتما': [{'word': 'مادمتما', 'vocalized': 'مَادُمْتُمَا', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'مادمتم': [{'word': 'مادمتم', 'vocalized': 'مَادُمْتُم', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'مادمتن': [{'word': 'مادمتن', 'vocalized': 'مَادُمْتُنَّ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'مادامت': [{'word': 'مادامت', 'vocalized': 'مَادَامَتْ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'ماداما': [{'word': 'ماداما', 'vocalized': 'مَادَامَا', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'مادامتا': [{'word': 'مادامتا', 'vocalized': 'مَادَامَتَا', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'ماداموا': [{'word': 'ماداموا', 'vocalized': 'مَادَامُوا', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'مادمن': [{'word': 'مادمن', 'vocalized': 'مَادُمْنَ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'مابرحت': [{'word': 'مابرحت', 'vocalized': 'مَابَرِحْتُ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''},
{'word': 'مابرحت', 'vocalized': 'مَابَرِحْتَ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''},
{'word': 'مابرحت', 'vocalized': 'مَابَرِحْتِ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''},
{'word': 'مابرحت', 'vocalized': 'مَابَرِحَتْ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'مابرحنا': [{'word': 'مابرحنا', 'vocalized': 'مَابَرِحْنَا', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'مابرحتما': [{'word': 'مابرحتما', 'vocalized': 'مَابَرِحْتُمَا', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''},
{'word': 'مابرحتما', 'vocalized': 'مَابَرِحْتُمَا', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'مابرحتم': [{'word': 'مابرحتم', 'vocalized': 'مَابَرِحْتُم', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'مابرحتن': [{'word': 'مابرحتن', 'vocalized': 'مَابَرِحْتُنَّ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'مابرحا': [{'word': 'مابرحا', 'vocalized': 'مَابَرِحَا', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'مابرحتا': [{'word': 'مابرحتا', 'vocalized': 'مَابَرِحَتَا', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'مابرحوا': [{'word': 'مابرحوا', 'vocalized': 'مَابَرِحُوا', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'مابرحن': [{'word': 'مابرحن', 'vocalized': 'مَابَرِحْنَ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'مافتئت': [{'word': 'مافتئت', 'vocalized': 'مَافَتِئْتُ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''},
{'word': 'مافتئت', 'vocalized': 'مَافَتِئْتَ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''},
{'word': 'مافتئت', 'vocalized': 'مَافَتِئْتِ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''},
{'word': 'مافتئت', 'vocalized': 'مَافَتِئَتْ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'مافتئنا': [{'word': 'مافتئنا', 'vocalized': 'مَافَتِئْنَا', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'مافتئتما': [{'word': 'مافتئتما', 'vocalized': 'مَافَتِئْتُمَا', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''},
{'word': 'مافتئتما', 'vocalized': 'مَافَتِئْتُمَا', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'مافتئتم': [{'word': 'مافتئتم', 'vocalized': 'مَافَتِئْتُم', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'مافتئتن': [{'word': 'مافتئتن', 'vocalized': 'مَافَتِئْتُنَّ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'مافتئا': [{'word': 'مافتئا', 'vocalized': 'مَافَتِئَا', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'مافتئتا': [{'word': 'مافتئتا', 'vocalized': 'مَافَتِئَتَا', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'مافتئوا': [{'word': 'مافتئوا', 'vocalized': 'مَافَتِئُوا', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'مافتئن': [{'word': 'مافتئن', 'vocalized': 'مَافَتِئْنَ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'ماانفككت': [{'word': 'ماانفككت', 'vocalized': 'مَااِنْفَكَكْتُ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''},
{'word': 'ماانفككت', 'vocalized': 'مَااِنْفَكَكْتَ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''},
{'word': 'ماانفككت', 'vocalized': 'مَااِنْفَكَكْتِ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'ماانفككنا': [{'word': 'ماانفككنا', 'vocalized': 'مَااِنْفَكَكْنَا', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'ماانفككتما': [{'word': 'ماانفككتما', 'vocalized': 'مَااِنْفَكَكْتُمَا', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''},
{'word': 'ماانفككتما', 'vocalized': 'مَااِنْفَكَكْتُمَا', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'ماانفككتم': [{'word': 'ماانفككتم', 'vocalized': 'مَااِنْفَكَكْتُم', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'ماانفككتن': [{'word': 'ماانفككتن', 'vocalized': 'مَااِنْفَكَكْتُنَّ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'ماانفكت': [{'word': 'ماانفكت', 'vocalized': 'مَااِنْفَكَّتْ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'رافع', 'object_type': 'اسم', 'need': ''}],
'ماانفكا': [{'word': 'ماانفكا', 'vocalized': 'مَااِنْفَكَّا', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'ماانفكتا': [{'word': 'ماانفكتا', 'vocalized': 'مَااِنْفَكَّتَا', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'ماانفكوا': [{'word': 'ماانفكوا', 'vocalized': 'مَااِنْفَكُّوا', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}],
'ماانفككن': [{'word': 'ماانفككن', 'vocalized': 'مَااِنْفَكَكْنَ', 'type_word': 'فعل', 'class_word': 'كان و أخواتها', 'has_conjuction': 1, 'has_definition': 0, 'has_preposition': 0, 'has_pronoun': 0, 'has_interrog': 0, 'has_conjugation': 0, 'has_qasam': 0, 'is_defined': 0, 'is_inflected': 0, 'tanwin': 0, 'action': 'ناصب', 'object_type': 'اسم', 'need': ''}]}
|
Arabic-Stopwords
|
/Arabic_Stopwords-0.4.3-py3-none-any.whl/arabicstopwords/stopwords_classified.py
|
stopwords_classified.py
|
__version__ = '0.4.3'
__author__ = 'Taha Zerrouki'
import pyarabic.araby as araby
try:
from stopwordsallforms import STOPWORDS, STOPWORDS_INDEX
from stopwords_classified import STOPWORDS as classed_STOPWORDS
except:
from .stopwordsallforms import STOPWORDS, STOPWORDS_INDEX
from .stopwords_classified import STOPWORDS as classed_STOPWORDS
try:
from stopwordtuple import stopwordTuple
except:
from .stopwordtuple import stopwordTuple
class stopwords_lexicon:
"""
A lexicon class for stopword extraction features
"""
def __init__(self, ):
self.forms_dict = STOPWORDS
self.lemmas_dict = classed_STOPWORDS
self.vocalized_lemmas_dict = self.create_vocalized_index(self.lemmas_dict)
self.vocalized_forms_dict = self.create_vocalized_index(self.forms_dict)
self.categories = self.create_categories_index()
def create_vocalized_index(self, table):
"""
Create index of vocalized lemmas
"""
voca_dict = {}
for unvoc_key in table:
for item in table[unvoc_key]:
vocalized = item.get('vocalized',"")
if vocalized:
if vocalized in voca_dict:
voca_dict[vocalized].append(item)
else:
voca_dict[vocalized] = [item,]
return voca_dict
def create_categories_index(self,):
"""
Create index of categories
"""
index_dict = {}
for table, table_type in [(self.forms_dict, "forms"), (self.lemmas_dict, "lemmas"),]:
for unvoc_key in table:
for item in table[unvoc_key]:
vocalized = item.get('vocalized',"")
category = item.get('type_word',item.get('type',""))
if category:
if category not in index_dict:
index_dict[category] = {}
if table_type not in index_dict[category]:
index_dict[category][table_type] = []
index_dict[category][table_type+'_vocalized'] = []
index_dict[category][table_type].append(unvoc_key)
index_dict[category][table_type+'_vocalized'].append(vocalized)
return index_dict
#---------------------------
# Stopwords lookup
#---------------------------
def is_stop(self, word):
""" test if word is a stop"""
return word in self.forms_dict
def stop_stemlist(self, word):
""" return all stems for a word"""
return self.get_stems(word)
def stop_stem(self, word):
""" retrun a stem of a stop word """
stemlist = self.get_stems(word)
if stemlist:
return stemlist[0]
else:
return ""
def stopwords_list(self, vocalized=False):
""" return all arabic stopwords"""
if not vocalized:
return list(self.forms_dict.keys())
else:
vocalized_list = []
for x in self.forms_dict.keys():
vocalized_list.extend(self.get_vocalizeds(x))
return vocalized_list
def classed_stopwords_list(self, vocalized=False):
""" return all arabic classified stopwords"""
if not vocalized:
return list(self.lemmas_dict.keys())
else:
vocalized_list = []
for x in self.lemmas_dict.keys():
vocalized_list.extend(self.get_vocalizeds(x))
return vocalized_list
def stopword_forms(self, word):
""" return all forms for a stop word"""
if word in STOPWORDS_INDEX:
return [d for d in STOPWORDS_INDEX[word] ]
else:
return []
def get_features_dict(self, word, lemma=False):
"""
return all the features for a stopword
"""
if not lemma:
stemlist = self.forms_dict.get(word,{})
else:
stemlist = self.lemmas_dict.get(word,{})
return stemlist
def get_stopwordtuples(self, word, lemma=False, vocalized=False):
"""
return all the stopword tuples for a stopword
"""
if not lemma:
if vocalized:
stemlist = self.vocalized_forms_dict.get(word, [])
else:
stemlist = self.forms_dict.get(word, [])
else:
if vocalized:
stemlist = self.vocalized_lemmas_dict.get(word, [])
else:
stemlist = self.lemmas_dict.get(word, [])
stoptuple_list = []
for item in stemlist:
stoptuple_list.append(stopwordTuple(item))
return stoptuple_list
def get_categories(self):
"""
Get all categories (wordtypes available in the lexicon)
"""
return list(self.categories.keys())
def get_by_category(self, category="", lemma=False, vocalized=False):
"""
Get all stopwords (wordtypes available in the lexicon)
"""
secondkey = ""
if lemma:
secondkey = "lemmas"
else:
secondkey = "forms"
if vocalized:
secondkey += "_vocalized"
return self.categories.get(category, {}).get(secondkey, [])
#---------------------------
# Stopwords features
#---------------------------
def get_feature(self, word, feature, lemma=False):
"""
return the asked feature from a stopword
"""
if not lemma:
stemlist = [d.get(feature,'') for d in self.forms_dict.get(word,{}) if d.get(feature,'')]
else:
stemlist = [d.get(feature,'') for d in self.lemmas_dict.get(word,{}) if d.get(feature,'')]
stemlist = list(set(stemlist))
return stemlist
def get_vocalizeds(self, word, lemma=False):
"""
return the vocalized forms of a stopword
"""
if not lemma:
stemlist = [d.get('vocalized','') for d in self.forms_dict.get(word,{}) if d.get('vocalized','')]
else:
stemlist = [d.get('vocalized','') for d in self.lemmas_dict.get(word,{}) if d.get('vocalized','') ]
stemlist = list(set(stemlist))
return stemlist
def get_wordtypes(self, word, lemma=False):
"""
return the wordtypes of a stopword
"""
if not lemma:
stemlist = [d.get('type','') for d in self.forms_dict.get(word,{}) if d.get('type','')]
else:
stemlist = [d.get('type_word','') for d in self.lemmas_dict.get(word,{}) if d.get('type_word','')]
stemlist = list(set(stemlist))
return stemlist
def get_wordclass(self, word):
"""
return the word sub classes of a stopword
"""
stemlist = [d.get('class_word','') for d in self.lemmas_dict.get(word,{}) if d.get('class_word','')]
stemlist = list(set(stemlist))
return stemlist
# ~ 'has_conjuction': 1,
# ~ 'has_definition': 0,
# ~ 'has_preposition': 0,
# ~ 'has_pronoun': 0,
# ~ 'has_interrog': 0,
# ~ 'has_conjugation': 0,
# ~ 'has_qasam': 0,
# ~ 'is_defined': 0,
# ~ 'is_inflected': 0,
# ~ 'tanwin': 0,
# ~ 'action': 'جازم',
# ~ 'object_type': 'فعل',
# ~ 'need': ''},
# ~ def accept_conjuction(self, word):
# ~ """
# ~ return True if the word accept conjuctions, Asked only for classified stopwords
# ~ """
# ~ return bool(self.vocalized_lemmas_dict.get(word,{}).get('has_conjuction',0))
# ~ def accept_definition(self, word):
# ~ """
# ~ return True if the word accept definitions, Asked only for classified stopwords
# ~ """
# ~ return bool(self.vocalized_lemmas_dict.get(word,{}).get('has_definition',0))
# ~ def accept_preposition(self, word):
# ~ """
# ~ return True if the word accept prepositions, Asked only for classified stopwords
# ~ """
# ~ return bool(self.vocalized_lemmas_dict.get(word,{}).get('has_preposition',0))
# ~ def accept_pronoun(self, word):
# ~ """
# ~ return True if the word accept pronouns, Asked only for classified stopwords
# ~ """
# ~ return bool(self.vocalized_lemmas_dict.get(word,{}).get('has_pronoun',0))
# ~ def accept_interrog(self, word):
# ~ """
# ~ return True if the word accept interrogs, Asked only for classified stopwords
# ~ """
# ~ return bool(self.vocalized_lemmas_dict.get(word,{}).get('has_interrog',0))
# ~ def accept_conjugation(self, word):
# ~ """
# ~ return True if the word accept conjugations, Asked only for classified stopwords
# ~ """
# ~ return bool(self.vocalized_lemmas_dict.get(word,{}).get('has_conjugation',0))
# ~ def accept_qasam(self, word):
# ~ """
# ~ return True if the word accept qasams, Asked only for classified stopwords
# ~ """
# ~ return bool(self.vocalized_lemmas_dict.get(word,{}).get('has_qasam',0))
# ~ def is_defined(self, word):
# ~ """
# ~ return True if the word is defined , Asked only for classified stopwords
# ~ """
# ~ return bool(self.vocalized_lemmas_dict.get(word,{}).get('is_defined',0))
# ~ def accept_inflection(self, word):
# ~ """
# ~ return True if the word accept inflections, Asked only for classified stopwords
# ~ """
# ~ return bool(self.vocalized_lemmas_dict.get(word,{}).get('is_inflected',0))
# ~ def accept_tanwin(self, word):
# ~ """
# ~ return True if the word accept tanwins, Asked only for classified stopwords
# ~ """
# ~ return bool(self.vocalized_lemmas_dict.get(word,{}).get('tanwin',0))
# ~ def get_action(self, word):
# ~ """
# ~ return get action, Asked only for classified stopwords
# ~ """
# ~ return self.vocalized_lemmas_dict.get(word,{}).get('action',"")
# ~ def get_object_type(self, word):
# ~ """
# ~ return get object_type, Asked only for classified stopwords
# ~ """
# ~ return self.vocalized_lemmas_dict.get(word,{}).get('object_type',"")
# ~ def get_need(self, word):
# ~ """
# ~ return get need, Asked only for classified stopwords
# ~ """
# ~ return self.vocalized_lemmas_dict.get(word,{}).get('need',"")
def get_tags(self, word):
"""
return the tags of a stopword
"""
stemlist = [d.get('tags','') for d in self.forms_dict.get(word,{}) if d.get('tags','')]
stemlist = list(set(stemlist))
return stemlist
def get_stems(self, word):
"""
return the stems of a stopword
"""
stemlist = [d.get('stem','') for d in self.forms_dict.get(word,{}) if d.get('stem','')]
stemlist = list(set(stemlist))
return stemlist
def get_enclitics(self, word):
"""
return the enclitics of a stopword
"""
stemlist = [d.get('encletic','') for d in self.forms_dict.get(word,{}) if d.get('encletic','')]
stemlist = list(set(stemlist))
return stemlist
def get_procletics(self, word):
"""
return the proclitics of a stopword
"""
stemlist = [d.get('procletic','') for d in self.forms_dict.get(word,{}) if d.get('procletic','')]
stemlist = list(set(stemlist))
return stemlist
def get_lemmas(self, word):
"""
return the lemmas of a stopword
"""
stemlist = [d.get('original','') for d in self.forms_dict.get(word,{}) if d.get('original','')]
stemlist = list(set(stemlist))
return stemlist
def main(args):
word = u"لعلهم"
lexicon = stopwords_lexicon()
print(lexicon.stop_stem(word))
return 0
if __name__ == '__main__':
import sys
from pyarabic.arabrepr import arepr
words = [(u'منكم', True),
(u'ممكن', False),
(u'عندما', True),
(u'حينئذ', True),
]
lexicon = stopwords_lexicon()
for w, rep in words:
result = lexicon.is_stop(w)
if result != rep:
print((u"Error %s is %s where must be %s"%(w, result, rep)).encode('utf8'))
print(len(lexicon.stopwords_list()))
print(len(lexicon.classed_stopwords_list()))
print(arepr(lexicon.stopword_forms(u'حتى')))
print(arepr(lexicon.stopword_forms(u'جميع')))
print(arepr(lexicon.stop_stem(u'لجميعهم')))
print(arepr(lexicon.stop_stem(u'لجم')))
|
Arabic-Stopwords
|
/Arabic_Stopwords-0.4.3-py3-none-any.whl/arabicstopwords/stopwords_lexicon.py
|
stopwords_lexicon.py
|
import pyarabic.araby as araby
try:
from stopwordsallforms import STOPWORDS, STOPWORDS_INDEX
from stopwords_classified import STOPWORDS as classed_STOPWORDS
except:
from .stopwordsallforms import STOPWORDS, STOPWORDS_INDEX
from .stopwords_classified import STOPWORDS as classed_STOPWORDS
class stopwordsTuple:
"""
An object wrapping a stopword tuple, to handle the two entry types: classified or stemmed
"""
def __init__(self, stop_dict=None):
"""
Init stopword tuples
"""
self.stop_dict = stop_dict if stop_dict is not None else {}
#---------------------------
# Stopwords features
#---------------------------
# ~ {'word': 'إن',
# ~ 'vocalized': 'إِنْ',
# ~ 'type_word': 'حرف',
# ~ 'class_word': 'حرف جزم',
def get_features_dict(self,):
"""
return the all features for a stopword
"""
return self.stop_dict
def get_feature(self, feature):
"""
return the asked feature from a stopword
"""
return self.stop_dict.get(feature,'')
def get_vocalized(self,):
"""
return the vocalized form of a stopword
"""
return self.stop_dict.get('vocalized','')
def get_wordtype(self,):
"""
return the wordtype of a stopword
"""
# return type or word_type
return self.stop_dict.get('type',self.stop_dict.get('type_word',''))
def get_wordclass(self,):
"""
return the word sub class of a stopword
"""
return self.stop_dict.get('class_word','')
# ~ 'has_conjuction': 1,
# ~ 'has_definition': 0,
# ~ 'has_preposition': 0,
# ~ 'has_pronoun': 0,
# ~ 'has_interrog': 0,
# ~ 'has_conjugation': 0,
# ~ 'has_qasam': 0,
# ~ 'is_defined': 0,
# ~ 'is_inflected': 0,
# ~ 'tanwin': 0,
# ~ 'action': 'جازم',
# ~ 'object_type': 'فعل',
# ~ 'need': ''},
def accept_conjuction(self,):
"""
return True if the word accept conjuctions, Asked only for classified stopwords
"""
return bool(self.stop_dict.get('has_conjuction',0))
def accept_definition(self,):
"""
return True if the word accept definitions, Asked only for classified stopwords
"""
return bool(self.stop_dict.get('has_definition',0))
def accept_preposition(self,):
"""
return True if the word accept prepositions, Asked only for classified stopwords
"""
return bool(self.stop_dict.get('has_preposition',0))
def accept_pronoun(self,):
"""
return True if the word accept pronouns, Asked only for classified stopwords
"""
return bool(self.stop_dict.get('has_pronoun',0))
def accept_interrog(self, ):
"""
return True if the word accept interrogs, Asked only for classified stopwords
"""
return bool(self.stop_dict.get('has_interrog',0))
def accept_conjugation(self,):
"""
return True if the word accept conjugations, Asked only for classified stopwords
"""
return bool(self.stop_dict.get('has_conjugation',0))
def accept_qasam(self,):
"""
return True if the word accept qasams, Asked only for classified stopwords
"""
return bool(self.stop_dict.get('has_qasam',0))
def is_defined(self,):
"""
return True if the word is defined , Asked only for classified stopwords
"""
return bool(self.stop_dict.get('is_defined',0))
def accept_inflection(self,):
"""
return True if the word accept inflections, Asked only for classified stopwords
"""
return bool(self.stop_dict.get('is_inflected',0))
def accept_tanwin(self,):
"""
return True if the word accept tanwins, Asked only for classified stopwords
"""
return bool(self.stop_dict.get('tanwin',0))
def get_action(self,):
"""
return get action, Asked only for classified stopwords
"""
return self.stop_dict.get('action',"")
def get_object_type(self,):
"""
return get object_type, Asked only for classified stopwords
"""
return self.stop_dict.get('object_type',"")
def get_need(self,):
"""
return get need, Asked only for classified stopwords
"""
return self.stop_dict.get('need',"")
def get_tags(self,):
"""
return the tags of a stopword
"""
return self.stop_dict.get('tags',"")
def get_stems(self,):
"""
return the stem of a stopword
"""
return self.stop_dict.get('stem',"")
def get_enclitics(self,):
"""
return the enclitic of a stopword
"""
return self.stop_dict.get('encletic',"")
def get_procletics(self,):
"""
return the proclitic of a stopword
"""
return self.stop_dict.get('procletic','')
def get_lemmas(self,):
"""
return the lemma of a stopword
"""
return self.stop_dict.get('original','')
def main(args):
# demo on the sample entry shown in the comments above
sample = {'word': 'إن',
'vocalized': 'إِنْ',
'type_word': 'حرف',
'class_word': 'حرف جزم'}
stop = stopwordsTuple(sample)
print(stop.get_vocalized())
print(stop.get_wordtype())
print(stop.get_wordclass())
return 0
if __name__ == '__main__':
import sys
sys.exit(main(sys.argv))
|
Arabic-Stopwords
|
/Arabic_Stopwords-0.4.3-py3-none-any.whl/arabicstopwords/stopwordstuple.py
|
stopwordstuple.py
|
ArabicNames
===========
Random Arabic name generator
Installation
------------
The script is `available on PyPI`_. To install with pip::
    pip install ArabicNames
Usage
-----
ArabicNames can be used as a command line utility or imported as a Python package.
Command Line Usage
~~~~~~~~~~~~~~~~~~
To use the script from the command line:
.. code-block:: bash
    $ ArabicNames
    Jahid Taher
Python Package Usage
~~~~~~~~~~~~~~~~~~~~
Here are examples of all current features:
.. code-block:: pycon
    >>> import ArabicNames
    >>> ArabicNames.get_full_name()
    'Muhtadi El-Alaoui'
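Internally, a generator of this kind boils down to sampling one entry from a first-name list and one from a last-name list. A rough, self-contained sketch of the idea (the name lists below are illustrative placeholders, not the package's actual data):

```python
import random

# Placeholder name lists for illustration -- ArabicNames ships much larger ones.
FIRST_NAMES = ["Jahid", "Muhtadi", "Amira"]
LAST_NAMES = ["Taher", "El-Alaoui", "Haddad"]

def get_full_name(rng=random):
    # Pick one first name and one last name uniformly at random.
    return "%s %s" % (rng.choice(FIRST_NAMES), rng.choice(LAST_NAMES))
```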
License
-------
This project is released under an `MIT License`_.
.. _mit license: https://ahmed.mit-license.org
.. _available on PyPI: http://pypi.python.org/pypi/ArabicNames
|
ArabicNames
|
/ArabicNames-0.1.2.tar.gz/ArabicNames-0.1.2/README.rst
|
README.rst
|
# ArabicOcr: package to convert any Arabic image text to text by OCR techniques
## About
Python package to convert Arabic images to text
# Installation
```
pip install ArabicOcr
or in colab google cloud
!pip install ArabicOcr
```
## Usage
```
from ArabicOcr import arabicocr
```
```
image_path='1.jpg'
out_image='out.jpg'
results=arabicocr.arabic_ocr(image_path,out_image)
print(results)
words=[]
for i in range(len(results)):
word=results[i][1]
words.append(word)
with open ('file.txt','w',encoding='utf-8')as myfile:
myfile.write(str(words))
import cv2
img = cv2.imread('out.jpg', cv2.IMREAD_UNCHANGED)
cv2.imshow("arabic ocr",img)
cv2.waitKey(0)
```
## Tutorial
You can follow the tutorial in Google Colab:
https://colab.research.google.com/drive/1ay5KT9Za340_kN7fhS2xuJX8suCigaF6?usp=sharing
|
ArabicOcr
|
/ArabicOcr-1.1.6.tar.gz/ArabicOcr-1.1.6/README.md
|
README.md
|
# -*- coding: utf-8 -*-
#
# Natural Language Toolkit: ARLSTem Stemmer
#
# Author: Kheireddine Abainia (x-programer) <[email protected]>
# Algorithms: Kheireddine Abainia <[email protected]>
# Siham Ouamour
# Halim Sayoud
"""
ARLSTem Arabic Stemmer
The details about the implementation of this algorithm are described in:
K. Abainia, S. Ouamour and H. Sayoud, A Novel Robust Arabic Light Stemmer ,
Journal of Experimental & Theoretical Artificial Intelligence (JETAI'17),
Vol. 29, No. 3, 2017, pp. 557-573.
The ARLSTem is a light Arabic stemmer that is based on removing the affixes
from the word (i.e. prefixes, suffixes and infixes). It was evaluated and
compared to several other stemmers using Paice's parameters (under-stemming
index, over-stemming index and stemming weight), and the results showed that
ARLSTem is promising and producing high performances. This stemmer is not
based on any dictionary and can be used on-line effectively.
"""
from __future__ import unicode_literals
import re
class Normalization:
'''
Normalization step of the ARLSTem stemmer: a light Arabic stemming
algorithm without any dictionary.
Department of Telecommunication & Information Processing. USTHB University,
Algiers, Algeria.
Normalization().norm(token) returns the normalized form of the input token.
All tokens are required to be Unicode-encoded.
'''
def __init__(self):
# different Alif with hamza
self.re_hamzated_alif = re.compile(r'[\u0622\u0623\u0625]')
self.re_alifMaqsura = re.compile(r'[\u0649]')
self.re_diacritics = re.compile(r'[\u064B-\u065F]')
# Alif Laam, Laam Laam, Fa Laam, Fa Ba
self.pr2 = [
'\u0627\u0644', '\u0644\u0644',
'\u0641\u0644', '\u0641\u0628'
]
# Ba Alif Laam, Kaaf Alif Laam, Waaw Alif Laam
self.pr3 = [
'\u0628\u0627\u0644',
'\u0643\u0627\u0644',
'\u0648\u0627\u0644'
]
# Fa Laam Laam, Waaw Laam Laam
self.pr32 = ['\u0641\u0644\u0644', '\u0648\u0644\u0644']
# Fa Ba Alif Laam, Waaw Ba Alif Laam, Fa Kaaf Alif Laam
self.pr4 = [
'\u0641\u0628\u0627\u0644',
'\u0648\u0628\u0627\u0644',
'\u0641\u0643\u0627\u0644'
]
# Kaf Yaa, Kaf Miim
self.su2 = [
'\u0643\u064A',
'\u0643\u0645'
]
# Ha Alif, Ha Miim
self.su22 = ['\u0647\u0627', '\u0647\u0645']
# Kaf Miim Alif, Kaf Noon Shadda
self.su3 = ['\u0643\u0645\u0627', '\u0643\u0646\u0651']
# Ha Miim Alif, Ha Noon Shadda
self.su32 = ['\u0647\u0645\u0627', '\u0647\u0646\u0651']
# Alif Noon, Ya Noon, Waaw Noon
self.pl_si2 = ['\u0627\u0646', '\u064A\u0646', '\u0648\u0646']
# Taa Alif Noon, Taa Ya Noon
self.pl_si3 = ['\u062A\u0627\u0646', '\u062A\u064A\u0646']
# Alif Noon, Waaw Noon
self.verb_su2 = ['\u0627\u0646', '\u0648\u0646']
# Siin Taa, Siin Yaa
self.verb_pr2 = ['\u0633\u062A', '\u0633\u064A']
# Siin Alif, Siin Noon
self.verb_pr22 = ['\u0633\u0627', '\u0633\u0646']
# Taa Miim Alif, Taa Noon Shadda
self.verb_suf3 = ['\u062A\u0645\u0627', '\u062A\u0646\u0651']
# Noon Alif, Taa Miim, Taa Alif, Waaw Alif
self.verb_suf2 = [
'\u0646\u0627', '\u062A\u0645',
'\u062A\u0627', '\u0648\u0627'
]
# Taa, Alif, Noon
self.verb_suf1 = ['\u062A', '\u0627', '\u0646']
def norm(self, token):
"""
normalize the word by removing diacritics, replacing hamzated Alif
with Alif replacing AlifMaqsura with Yaa and removing Waaw at the
beginning.
"""
# strip Arabic diacritics
token = self.re_diacritics.sub('', token)
# replace Hamzated Alif with Alif bare
token = self.re_hamzated_alif.sub('\u0627', token)
# replace alifMaqsura with Yaa
token = self.re_alifMaqsura.sub('\u064A', token)
# strip the Waaw from the word beginning if the remaining is 3 letters
# at least
if token.startswith('\u0648') and len(token) > 3:
token = token[1:]
return token
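As a quick check of the four steps described in the docstring above, here is a minimal standalone sketch of the same normalization (illustrative only, not the packaged class; the sample words are assumptions):

```python
import re

# Minimal sketch of the normalization steps above: strip diacritics,
# unify hamzated Alif, map Alif Maqsura to Yaa, and drop a leading Waaw
# when at least three letters remain.
def normalize(token):
    token = re.sub(r'[\u064B-\u065F]', '', token)             # diacritics
    token = re.sub(r'[\u0622\u0623\u0625]', '\u0627', token)  # hamzated Alif -> bare Alif
    token = re.sub(r'\u0649', '\u064A', token)                # Alif Maqsura -> Yaa
    if token.startswith('\u0648') and len(token) > 3:         # leading Waaw
        token = token[1:]
    return token

print(normalize('\u0623\u064E\u062D\u0652\u0645\u064E\u062F'))  # 'أَحْمَد' -> 'احمد'
```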
|
ArabicTextNormlzation
|
/ArabicTextNormlzation-0.0.1.tar.gz/ArabicTextNormlzation-0.0.1/ArabicNormalization/ArNormalization.py
|
ArNormalization.py
|
=======
Arachne
=======
.. image:: https://travis-ci.org/kirankoduru/arachne.svg
:target: https://travis-ci.org/kirankoduru/arachne
.. image:: https://coveralls.io/repos/kirankoduru/arachne/badge.svg?branch=master&service=github
:target: https://coveralls.io/github/kirankoduru/arachne?branch=master
Arachne provides a wrapper around your scrapy ``Spider`` objects to run them through a flask app. All you have to do is customize ``SPIDER_SETTINGS`` in the settings file.
Installation
============
You can install **Arachne** from pip::

    pip install Arachne
Sample settings
===============
This is a sample settings file for the spiders in your project. The settings file should be called **settings.py** for **Arachne** to find it, and looks like this::
# settings.py file
SPIDER_SETTINGS = [
{
'endpoint': 'dmoz',
'location': 'spiders.DmozSpider',
'spider': 'DmozSpider'
}
]
Usage
=====
It looks very similar to a flask app but since **Scrapy** depends on the python **twisted** package, we need to run our flask app with **twisted**::
from twisted.web.wsgi import WSGIResource
from twisted.web.server import Site
from twisted.internet import reactor
from arachne import Arachne
app = Arachne(__name__)
resource = WSGIResource(reactor, reactor.getThreadPool(), app)
site = Site(resource)
reactor.listenTCP(8080, site)
if __name__ == '__main__':
reactor.run()
|
Arachne-Strahi
|
/Arachne-Strahi-0.5.0.tar.gz/Arachne-Strahi-0.5.0/README.rst
|
README.rst
|
Changelog
=========
Here you will find the full list of changes between each Arachne release
Version 0.5.0 (Nov 20th 2016)
---------------------
- Add support for Scrapy 1.0 ([#4](https://github.com/kirankoduru/arachne/issues/4))
Version 0.4.0 (Mar 17th 2016)
-----------------------------
- Renamed the endpoints `/spiders` as `/` for more intuitive purposes - ([#8](https://github.com/kirankoduru/arachne/issues/8))
- The `/run-spider` endpoint returns the name of spider and the status of the spider as running
Version 0.3.1 (Nov 25th 2015)
-----------------------------
- [BUG FIX] Whoops! Forgot to test if there were individual spider `scrapy_settings` available
Version 0.3.0 (Nov 23rd 2015)
-----------------------------
- Add individual spider settings to the `scrapy_settings` variable
- Add global spider settings to the `SCRAPY_SETTINGS` variable
Version 0.2.0 (Nov 15th 2015)
-----------------------------
- Export to CSV and JSON pipeline now available
Version 0.1.0 (Nov 14th 2015)
-----------------------------
- First public preview release
|
Arachne-Strahi
|
/Arachne-Strahi-0.5.0.tar.gz/Arachne-Strahi-0.5.0/CHANGES.md
|
CHANGES.md
|
import os
import sys
from flask import Flask
from scrapy import version_info as SCRAPY_VERSION
from arachne.exceptions import SettingsException
from arachne.endpoints import list_spiders_endpoint, run_spider_endpoint
class Arachne(Flask):
def __init__(self, import_name=__package__,
settings='settings.py', **kwargs):
"""Initialize the flask app with the settings variable. Load config
from the settings variable and test that all the
directories (for exports & logs) exist. Finally bind the endpoints for
the flask application to control the spiders
.. version 0.5.0:
Initialize Flask config with `CRAWLER_PROCESS` object if scrapy
version is 1.0.0 or greater
"""
super(Arachne, self).__init__(import_name, **kwargs)
self.settings = settings
self.load_config()
self.validate_spider_settings()
# create directories
self.check_dir(self.config['EXPORT_JSON'],
self.config['EXPORT_PATH'],
'json/')
self.check_dir(self.config['EXPORT_CSV'],
self.config['EXPORT_PATH'],
'csv/')
self.check_dir(self.config['LOGS'], self.config['LOGS_PATH'], '')
# from scrapy's version_info initialize Flask app
# for version before 1.0.0 you don't need to init crawler_process
if SCRAPY_VERSION >= (1, 0, 0):
self._init_crawler_process()
# initialize endpoints for API
self._init_url_rules()
def run(self, host=None, port=None, debug=None, **options):
super(Arachne, self).run(host, port, debug, **options)
def load_config(self):
"""Default settings are loaded first and then overwritten from
personal `settings.py` file
"""
self.config.from_object('arachne.default_settings')
if isinstance(self.settings, dict):
self.config.update(self.settings)
else:
if os.path.isabs(self.settings):
pyfile = self.settings
else:
abspath = os.path.abspath(os.path.dirname(sys.argv[0]))
pyfile = os.path.join(abspath, self.settings)
try:
self.config.from_pyfile(pyfile)
except IOError:
# assume envvar is going to be used exclusively
pass
except:
raise
# overwrite settings with custom environment variable
envvar = 'ARACHNE_SETTINGS'
if os.environ.get(envvar):
self.config.from_envvar(envvar)
def check_dir(self, config_name, export_path, folder):
"""Check if the directory in the config variable exists
"""
if config_name:
cwd = os.getcwd()
export_dir = cwd+'/'+export_path+folder
if not os.path.exists(export_dir):
raise SettingsException('Directory missing ', export_dir)
def validate_spider_settings(self):
try:
spider_settings = self.config['SPIDER_SETTINGS']
except:
raise SettingsException('SPIDER_SETTINGS missing')
if not isinstance(spider_settings, list):
raise SettingsException('SPIDER_SETTINGS must be a list')
def _init_url_rules(self):
"""Attach the endpoints to run spiders and list the spiders
that are available in the API
"""
self.add_url_rule('/run-spider/<spider_name>', view_func=run_spider_endpoint)
self.add_url_rule('/', view_func=list_spiders_endpoint)
def _init_crawler_process(self):
from scrapy.crawler import CrawlerProcess
self.config['CRAWLER_PROCESS'] = CrawlerProcess()
|
Arachne-Strahi
|
/Arachne-Strahi-0.5.0.tar.gz/Arachne-Strahi-0.5.0/arachne/flaskapp.py
|
flaskapp.py
|
from flask import current_app
from scrapy import version_info as SCRAPY_VERSION
from scrapy.utils.misc import load_object
from scrapy.settings import Settings
if SCRAPY_VERSION <= (1, 0, 0):
import sys
import logging
from datetime import datetime
from twisted.python import logfile, log as tlog
from scrapy.crawler import Crawler
from scrapy.log import ScrapyFileLogObserver
def create_crawler_object(spider_, settings_):
"""
For the given scrapy settings and spider create a crawler object
Args:
spider_ (class obj): The scrapy spider class object
settings_(class obj): The scrapy settings class object
Returns:
A scrapy crawler class object
"""
crwlr = Crawler(settings_)
crwlr.configure()
crwlr.crawl(spider_)
return crwlr
def start_logger(debug):
"""
Log to stdout if debug is set to True, else log to a rotating file.
The log files rotate after exceeding 1M in size, keeping up to 100 rotated files.
"""
if debug:
tlog.startLogging(sys.stdout)
else:
filename = datetime.now().strftime("%Y-%m-%d.scrapy.log")
logfile_ = logfile.LogFile(filename, 'logs/', maxRotatedFiles=100)
logger = ScrapyFileLogObserver(logfile_, logging.INFO)
tlog.addObserver(logger.emit)
def get_spider_settings(flask_app_config, spider_scrapy_settings):
"""
For the given settings for each spider create a scrapy Settings object with
the common settings spider & its personal settings. Individual spider scrapy
settings takes priority over global scrapy settings
Returns:
Scrapy settings class instance
.. version 0.3.0:
Allow settings for individual spiders and global settings
"""
all_settings = flask_app_config['SCRAPY_SETTINGS']
if 'EXTENSIONS' not in all_settings:
all_settings['EXTENSIONS'] = {}
if flask_app_config['EXPORT_JSON']:
all_settings['EXTENSIONS']['arachne.extensions.ExportJSON'] = 100
if flask_app_config['EXPORT_CSV']:
all_settings['EXTENSIONS']['arachne.extensions.ExportCSV'] = 200
# spider scrapy settings has priority over global scrapy settings
for setting, _ in all_settings.items():
if spider_scrapy_settings and setting in spider_scrapy_settings:
all_settings[setting].update(spider_scrapy_settings[setting])
settings = Settings(all_settings)
return settings
def start_crawler(spider_loc, flask_app_config, spider_scrapy_settings, *args, **kwargs):
spider = load_object(spider_loc)
settings = get_spider_settings(flask_app_config, spider_scrapy_settings)
current_app.logger.debug(kwargs)
if SCRAPY_VERSION <= (1, 0, 0):
start_logger(flask_app_config['DEBUG'])
crawler = create_crawler_object(spider(*args, **kwargs), settings)
crawler.start()
else:
spider.custom_settings = settings
flask_app_config['CRAWLER_PROCESS'].crawl(spider, *args, **kwargs)
|
Arachne-Strahi
|
/Arachne-Strahi-0.5.0.tar.gz/Arachne-Strahi-0.5.0/arachne/scrapy_utils.py
|
scrapy_utils.py
|
=======
Arachne
=======
.. image:: https://travis-ci.org/kirankoduru/arachne.svg
:target: https://travis-ci.org/kirankoduru/arachne
.. image:: https://coveralls.io/repos/kirankoduru/arachne/badge.svg?branch=master&service=github
:target: https://coveralls.io/github/kirankoduru/arachne?branch=master
Arachne provides a wrapper around your scrapy ``Spider`` objects to run them through a flask app. All you have to do is customize ``SPIDER_SETTINGS`` in the settings file.
Installation
============
You can install **Arachne** from pip::

    pip install Arachne
Sample settings
===============
This is a sample settings file for the spiders in your project. The settings file should be called **settings.py** for **Arachne** to find it, and looks like this::
# settings.py file
SPIDER_SETTINGS = [
{
'endpoint': 'dmoz',
'location': 'spiders.DmozSpider',
'spider': 'DmozSpider'
}
]
Usage
=====
It looks very similar to a flask app but since **Scrapy** depends on the python **twisted** package, we need to run our flask app with **twisted**::
from twisted.web.wsgi import WSGIResource
from twisted.web.server import Site
from twisted.internet import reactor
from arachne import Arachne
app = Arachne(__name__)
resource = WSGIResource(reactor, reactor.getThreadPool(), app)
site = Site(resource)
reactor.listenTCP(8080, site)
if __name__ == '__main__':
reactor.run()
|
Arachne
|
/Arachne-0.5.0.tar.gz/Arachne-0.5.0/README.rst
|
README.rst
|
Changelog
=========
Here you will find the full list of changes between each Arachne release
Version 0.5.0 (Nov 20th 2016)
---------------------
- Add support for Scrapy 1.0 ([#4](https://github.com/kirankoduru/arachne/issues/4))
Version 0.4.0 (Mar 17th 2016)
-----------------------------
- Renamed the endpoints `/spiders` as `/` for more intuitive purposes - ([#8](https://github.com/kirankoduru/arachne/issues/8))
- The `/run-spider` endpoint returns the name of spider and the status of the spider as running
Version 0.3.1 (Nov 25th 2015)
-----------------------------
- [BUG FIX] Whoops! Forgot to test if there were individual spider `scrapy_settings` available
Version 0.3.0 (Nov 23rd 2015)
-----------------------------
- Add individual spider settings to the `scrapy_settings` variable
- Add global spider settings to the `SCRAPY_SETTINGS` variable
Version 0.2.0 (Nov 15th 2015)
-----------------------------
- Export to CSV and JSON pipeline now available
Version 0.1.0 (Nov 14th 2015)
-----------------------------
- First public preview release
|
Arachne
|
/Arachne-0.5.0.tar.gz/Arachne-0.5.0/CHANGES.md
|
CHANGES.md
|
import os
import sys
from flask import Flask
from scrapy import version_info as SCRAPY_VERSION
from arachne.exceptions import SettingsException
from arachne.endpoints import list_spiders_endpoint, run_spider_endpoint
class Arachne(Flask):
def __init__(self, import_name=__package__,
settings='settings.py', **kwargs):
"""Initialize the flask app with the settings variable. Load config
from the settings variable and test that all the
directories (for exports & logs) exist. Finally bind the endpoints for
the flask application to control the spiders
.. version 0.5.0:
Initialize Flask config with `CRAWLER_PROCESS` object if scrapy
version is 1.0.0 or greater
"""
super(Arachne, self).__init__(import_name, **kwargs)
self.settings = settings
self.load_config()
self.validate_spider_settings()
# create directories
self.check_dir(self.config['EXPORT_JSON'],
self.config['EXPORT_PATH'],
'json/')
self.check_dir(self.config['EXPORT_CSV'],
self.config['EXPORT_PATH'],
'csv/')
self.check_dir(self.config['LOGS'], self.config['LOGS_PATH'], '')
# from scrapy's version_info initialize Flask app
# for version before 1.0.0 you don't need to init crawler_process
if SCRAPY_VERSION >= (1, 0, 0):
self._init_crawler_process()
# initialize endpoints for API
self._init_url_rules()
def run(self, host=None, port=None, debug=None, **options):
super(Arachne, self).run(host, port, debug, **options)
def load_config(self):
"""Default settings are loaded first and then overwritten from
personal `settings.py` file
"""
self.config.from_object('arachne.default_settings')
if isinstance(self.settings, dict):
self.config.update(self.settings)
else:
if os.path.isabs(self.settings):
pyfile = self.settings
else:
abspath = os.path.abspath(os.path.dirname(sys.argv[0]))
pyfile = os.path.join(abspath, self.settings)
try:
self.config.from_pyfile(pyfile)
except IOError:
# assume envvar is going to be used exclusively
pass
except:
raise
# overwrite settings with custom environment variable
envvar = 'ARACHNE_SETTINGS'
if os.environ.get(envvar):
self.config.from_envvar(envvar)
def check_dir(self, config_name, export_path, folder):
"""Check if the directory in the config variable exists
"""
if config_name:
cwd = os.getcwd()
export_dir = cwd+'/'+export_path+folder
if not os.path.exists(export_dir):
raise SettingsException('Directory missing ', export_dir)
def validate_spider_settings(self):
try:
spider_settings = self.config['SPIDER_SETTINGS']
except:
raise SettingsException('SPIDER_SETTINGS missing')
if not isinstance(spider_settings, list):
raise SettingsException('SPIDER_SETTINGS must be a list')
def _init_url_rules(self):
"""Attach the endpoints to run spiders and list the spiders
that are available in the API
"""
self.add_url_rule('/run-spider/<spider_name>', view_func=run_spider_endpoint)
self.add_url_rule('/', view_func=list_spiders_endpoint)
def _init_crawler_process(self):
from scrapy.crawler import CrawlerProcess
self.config['CRAWLER_PROCESS'] = CrawlerProcess()
|
Arachne
|
/Arachne-0.5.0.tar.gz/Arachne-0.5.0/arachne/flaskapp.py
|
flaskapp.py
|
from scrapy import version_info as SCRAPY_VERSION
from scrapy.utils.misc import load_object
from scrapy.settings import Settings
if SCRAPY_VERSION <= (1, 0, 0):
import sys
import logging
from datetime import datetime
from twisted.python import logfile, log as tlog
from scrapy.crawler import Crawler
from scrapy.log import ScrapyFileLogObserver
def create_crawler_object(spider_, settings_):
"""
For the given scrapy settings and spider create a crawler object
Args:
spider_ (class obj): The scrapy spider class object
settings_(class obj): The scrapy settings class object
Returns:
A scrapy crawler class object
"""
crwlr = Crawler(settings_)
crwlr.configure()
crwlr.crawl(spider_)
return crwlr
def start_logger(debug):
"""
Log to stdout if debug is set to True, else log to a rotating file.
The log files rotate after exceeding 1M in size, keeping up to 100 rotated files.
"""
if debug:
tlog.startLogging(sys.stdout)
else:
filename = datetime.now().strftime("%Y-%m-%d.scrapy.log")
logfile_ = logfile.LogFile(filename, 'logs/', maxRotatedFiles=100)
logger = ScrapyFileLogObserver(logfile_, logging.INFO)
tlog.addObserver(logger.emit)
def get_spider_settings(flask_app_config, spider_scrapy_settings):
"""
For the given settings for each spider, create a scrapy Settings object from
the common spider settings & its personal settings. Individual spider scrapy
settings take priority over global scrapy settings.
Returns:
Scrapy settings class instance
.. version 0.3.0:
Allow settings for individual spiders and global settings
"""
all_settings = flask_app_config['SCRAPY_SETTINGS']
if 'EXTENSIONS' not in all_settings:
all_settings['EXTENSIONS'] = {}
if flask_app_config['EXPORT_JSON']:
all_settings['EXTENSIONS']['arachne.extensions.ExportJSON'] = 100
if flask_app_config['EXPORT_CSV']:
all_settings['EXTENSIONS']['arachne.extensions.ExportCSV'] = 200
# spider scrapy settings has priority over global scrapy settings
for setting, _ in all_settings.items():
if spider_scrapy_settings and setting in spider_scrapy_settings:
all_settings[setting].update(spider_scrapy_settings[setting])
settings = Settings(all_settings)
return settings
def start_crawler(spider_loc, flask_app_config, spider_scrapy_settings):
spider = load_object(spider_loc)
settings = get_spider_settings(flask_app_config, spider_scrapy_settings)
if SCRAPY_VERSION <= (1, 0, 0):
start_logger(flask_app_config['DEBUG'])
crawler = create_crawler_object(spider(), settings)
crawler.start()
else:
spider.custom_settings = settings
flask_app_config['CRAWLER_PROCESS'].crawl(spider)
|
Arachne
|
/Arachne-0.5.0.tar.gz/Arachne-0.5.0/arachne/scrapy_utils.py
|
scrapy_utils.py
|
ArachneServer
==============
[](https://travis-ci.org/dmkitui/arachneserver)
[](https://coveralls.io/github/dmkitui/arachneserver?branch=master)
[](https://codeclimate.com/github/dmkitui/arachneserver/maintainability)
ArachneServer provides a wrapper around your scrapy ``Spider`` objects to run them through a flask app. All you have to do is customize ``SPIDER_SETTINGS`` in the settings file.
Installation
============
You can install **ArachneServer** from pip:

    pip install ArachneServer
Sample settings
===============
This is a sample settings file for the spiders in your project. The settings file should be called **settings.py** in the root of your project for **ArachneServer** to find it, and looks like this::
# settings.py file
SPIDER_SETTINGS = [
{
'endpoint': 'dmoz',
'location': 'spiders.DmozSpider',
'endpoint_location': 'spiders.DmozSpider_endpoints',
'spider': 'DmozSpider'
}
]
Usage
=====
It looks very similar to a flask app but since **Scrapy** depends on the python **twisted** package, we need to run our flask app with **twisted**::
from twisted.web.wsgi import WSGIResource
from twisted.web.server import Site
from twisted.internet import reactor
from arachneserver import ArachneServer
app = ArachneServer(__name__)
resource = WSGIResource(reactor, reactor.getThreadPool(), app)
site = Site(resource)
reactor.listenTCP(8080, site)
if __name__ == '__main__':
reactor.run()
Endpoints
=========
Two default endpoints are provided for every spider:
1. **'/'** - List all spiders.
2. **'/run-spider/<spider_name>'** - Run the specified spider.
Specify additional endpoints for each spider and update the respective SPIDER_SETTINGS dictionary's `endpoint_location` to point to the correct path to the endpoints file.
|
ArachneServer
|
/ArachneServer-1.0.2.tar.gz/ArachneServer-1.0.2/README.md
|
README.md
|
import os
import sys
import logging
import csv
from flask import Flask
from arachneserver.exceptions import SettingsException, EndpointException
from arachneserver.default_endpoints import list_spiders_endpoint, run_spider_endpoint
from inspect import getmembers, isfunction
from funcsigs import signature
from importlib import import_module
SPIDER_STATUS = {}
def spiders_info(action=None):
"""
Read spider data from file, or write to file the statistics of a just-completed
spider run.
:param action: 'read' to load spider data from file, or 'write' to save
information about a spider that has just completed running.
Parameters read or written to file are:
spider_name: name of the spider that has run.
last_run: last time it completed running.
avg_time: the time the spider took to complete.
exit_code: the reason the spider completed, e.g. 'finished' means it completed well.
TODO: evaluate and document other exit_code values.
:return: None.
"""
spider_data_file = os.path.abspath(os.path.dirname(__file__)) + '/spider_data.csv'
if action == 'read':
with open(spider_data_file, 'r') as csv_file:
field_names = ['spider_name', 'last_run', 'avg_time', 'running', 'exit_code']
reader = csv.DictReader(csv_file, fieldnames=field_names, delimiter='|')
for row in reader:
key = row.pop('spider_name', None)
SPIDER_STATUS[key] = dict(row)
elif action == 'write':
with open(spider_data_file, 'w') as csv_file:
field_names = ['spider_name', 'last_run', 'avg_time', 'running', 'exit_code']
csv_writer = csv.DictWriter(csv_file, fieldnames=field_names, delimiter='|')
for spider, spider_data, in SPIDER_STATUS.items():
spider_data['spider_name'] = spider
csv_writer.writerow(spider_data)
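The '|'-delimited record layout that spiders_info() reads and writes can be sketched in isolation; the spider name and field values below are illustrative, not taken from a real run:

```python
import csv
import io

# One record per spider, fields joined by '|', matching the field_names
# used by spiders_info() above.
field_names = ['spider_name', 'last_run', 'avg_time', 'running', 'exit_code']
raw = 'dmoz|2016-11-20 10:00:00|12.5|False|finished\n'

status = {}
for row in csv.DictReader(io.StringIO(raw), fieldnames=field_names, delimiter='|'):
    key = row.pop('spider_name', None)   # spider name becomes the dict key
    status[key] = dict(row)

print(status['dmoz']['exit_code'])  # finished
```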
def endpoint_evaluator(endpoint):
"""
Function to evaluate an endpoint function and return the url and methods if they comply.
:param endpoint: The endpoint function to be evaluated.
:return: url and methods specified in the function
"""
args = signature(endpoint).parameters
try:
url = args['url'].default
except KeyError:
raise EndpointException('Url for endpoint "{}" not supplied.'.format(endpoint.__name__))
if not isinstance(url, str):
raise EndpointException('Url for endpoint "{}" is not a string'.format(endpoint.__name__))
try:
methods = args['methods'].default
except KeyError:
# default method 'GET' is applied.
methods = ['GET']
allowed_methods = ['PUT', 'GET', 'POST', 'DELETE', 'PATCH']
for method in methods:
if method not in allowed_methods:
raise EndpointException('Supplied methods for "{}" endpoint is invalid.'
'Allowed methods are PUT, GET, POST, DELETE, PATCH'.format(endpoint.__name__))
return url, methods
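endpoint_evaluator() works by reading default parameter values off the endpoint function; on Python 3 the stdlib inspect.signature behaves like the funcsigs backport imported above. A hypothetical endpoint (the function name and url are made up for illustration):

```python
from inspect import signature

# Hypothetical endpoint: the url and allowed methods travel as default values.
def status_endpoint(url='/status', methods=['GET', 'POST']):
    return 'ok'

params = signature(status_endpoint).parameters
print(params['url'].default)      # /status
print(params['methods'].default)  # ['GET', 'POST']
```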
class ArachneServer(Flask):
def __init__(self, import_name=__package__,
settings='settings.py', **kwargs):
"""Initialize the flask app with the settings variable. Load config
from the settings variable and test if the all the
directories(for exports & logs) exists. Finally bind the endpoints for
the flask application to control the spiders
.. version 0.5.0:
Initialize Flask config with `CRAWLER_PROCESS` object if scrapy
version is 1.0.0 or greater
"""
super(ArachneServer, self).__init__(import_name, **kwargs)
self.settings = settings
self.load_config()
self.validate_spider_settings()
# create directories
self.check_dir(self.config['EXPORT_JSON'],
self.config['EXPORT_PATH'],
'json/')
self.check_dir(self.config['EXPORT_CSV'],
self.config['EXPORT_PATH'],
'csv/')
self.check_dir(self.config['LOGS'], self.config['LOGS_PATH'], '')
self._init_crawler_process()
# initialize endpoints for API
self._init_url_rules()
def run(self, host=None, port=None, debug=None, **options):
super(ArachneServer, self).run(host, port, debug, **options)
def load_config(self):
"""Default settings are loaded first and then overwritten from
personal `settings.py` file
"""
self.config.from_object('arachneserver.default_settings')
if isinstance(self.settings, dict):
self.config.update(self.settings)
else:
if os.path.isabs(self.settings):
pyfile = self.settings
else:
abspath = os.path.abspath(os.path.dirname(sys.argv[0]))
pyfile = os.path.join(abspath, self.settings)
try:
self.config.from_pyfile(pyfile)
except IOError:
# assume envvar is going to be used exclusively
pass
except:
raise
def check_dir(self, config_name, export_path, folder):
"""Check if the directory in the config variable exists
"""
if config_name:
cwd = os.getcwd()
export_dir = cwd + '/' + export_path + folder
if not os.path.exists(export_dir):
raise SettingsException('Directory missing ', export_dir)
def validate_spider_settings(self):
try:
spider_settings = self.config['SPIDER_SETTINGS']
except KeyError:
raise SettingsException('SPIDER_SETTINGS missing')
if not isinstance(spider_settings, list):
raise SettingsException('SPIDER_SETTINGS must be a list')
def _init_url_rules(self):
"""Attach the default endpoints (run spiders and list the spiders
that are available in the API) and evaluate for all user defined endpoints and attach them.
"""
# Attach default endpoints first
self.add_url_rule('/run-spider/<spider_name>', view_func=run_spider_endpoint)
self.add_url_rule('/', view_func=list_spiders_endpoint)
all_spiders = self.config['SPIDER_SETTINGS']
# Evaluate and attach all endpoints for the individual spiders.
for spider in all_spiders:
try:
endpoint_file = spider['endpoint_location']
except KeyError:
# No endpoint file specified
logging.warning('Endpoints for {} not specified.'.format(spider['spider'].upper()))
continue
try:
custom_endpoint_file = import_module(endpoint_file)
logging.debug('Endpoints for {} loaded'.format(spider['spider'].upper()))
except ImportError:
# Specified custom endpoints file could not be loaded
logging.warning('Specified endpoints file for {} could not be loaded'.format(spider['spider'].upper()))
continue
self.add_custom_endpoints(custom_endpoint_file)
def add_custom_endpoints(self, custom_endpoint_file):
"""
Method to add user defined endpoints if available. User defined endpoints are to be specified in a file
called 'custom_endpoints.py' in the root directory. The method definitions are to be in the format:
def endpoint_name(url=endpoint_url, methods=endpoint_methods)
where:
endpoint_url: A string representing the endpoint url
endpoint_methods: list containing allowed methods. If none are provided, the methods default to ['GET'].
:return: None.
"""
endpoints = (fn for fn in getmembers(custom_endpoint_file) if
isfunction(fn[1]) and fn[1].__module__ == custom_endpoint_file.__name__)
for endpoint in endpoints:
if callable(endpoint[1]):
url, methods = endpoint_evaluator(endpoint[1])
self.add_url_rule(url, view_func=endpoint[1], methods=methods)
def _init_crawler_process(self):
from scrapy.crawler import CrawlerProcess
self.config['CRAWLER_PROCESS'] = CrawlerProcess()
spiders_info(action='read')
|
ArachneServer
|
/ArachneServer-1.0.2.tar.gz/ArachneServer-1.0.2/arachneserver/flaskapp.py
|
flaskapp.py
|
import sys
from scrapy.utils.misc import load_object
from scrapy.settings import Settings
from datetime import datetime
from twisted.python import logfile, log as tlog
from scrapy.crawler import CrawlerProcess
def create_crawler_object(spider_, settings_):
"""
For the given scrapy settings and spider create a crawler object
Args:
spider_ (class obj): The scrapy spider class object
settings_(class obj): The scrapy settings class object
Returns:
A scrapy crawler class object
"""
crwlr = CrawlerProcess(settings_)
crwlr.crawl(spider_)
return crwlr
def start_logger(debug):
"""
Log to stdout if debug is set to True, else log to a rotating file.
The log files rotate after exceeding 1M in size, keeping up to 100 rotated files.
"""
if debug:
tlog.startLogging(sys.stdout)
else:
filename = datetime.now().strftime("%Y-%m-%d.scrapy.log")
logfile_ = logfile.LogFile(filename, 'logs/', maxRotatedFiles=100)
logger = tlog.PythonLoggingObserver('twisted')
logger.start()
def get_spider_settings(flask_app_config, spider_scrapy_settings):
"""
For the given settings for each spider, create a scrapy Settings object from
the common spider settings & its personal settings. Individual spider scrapy
settings take priority over global scrapy settings.
Returns:
Scrapy settings class instance
.. version 0.3.0:
Allow settings for individual spiders and global settings
"""
all_settings = dict(flask_app_config)  # copy so the Flask config itself is not mutated
# Merge the SCRAPY_SETTINGS into the rest of the settings available in default_settings.py
all_settings.update(flask_app_config['SCRAPY_SETTINGS'])
if 'EXTENSIONS' not in all_settings:
all_settings['EXTENSIONS'] = {}
if flask_app_config['EXPORT_JSON']:
all_settings['EXTENSIONS']['arachneserver.extensions.ExportJSON'] = 100
if flask_app_config['EXPORT_CSV']:
all_settings['EXTENSIONS']['arachneserver.extensions.ExportCSV'] = 200
# spider scrapy settings has priority over global scrapy settings
# This updates the all_settings dict with values from spider_scrapy_settings, if provided.
if spider_scrapy_settings:
all_settings.update(spider_scrapy_settings)
settings = Settings(all_settings)
return settings
def start_crawler(spider_loc, flask_app_config, spider_scrapy_settings):
spider = load_object(spider_loc)
settings = get_spider_settings(flask_app_config, spider_scrapy_settings)
spider.custom_settings = settings
flask_app_config['CRAWLER_PROCESS'].crawl(spider)
|
ArachneServer
|
/ArachneServer-1.0.2.tar.gz/ArachneServer-1.0.2/arachneserver/scrapy_utils.py
|
scrapy_utils.py
|
from scrapy import signals
from scrapy.exporters import CsvItemExporter, JsonItemExporter
from datetime import datetime
from arachneserver.flaskapp import spiders_info
class ExportData(object):
def __init__(self):
self.files = {}
self.exporter = None
@classmethod
def from_crawler(cls, crawler):
ext = cls()
crawler.signals.connect(ext.spider_opened, signals.spider_opened)
crawler.signals.connect(ext.spider_closed, signals.spider_closed)
crawler.signals.connect(ext.item_scraped, signals.item_scraped)
return ext
def spider_opened(self, spider):
raise NotImplementedError
def spider_closed(self, spider):
self.exporter.finish_exporting()
file_to_save = self.files.pop(spider)
file_to_save.close()
def item_scraped(self, item, spider):
self.exporter.export_item(item)
return item
class ExportCSV(ExportData):
"""
Exporting to export/csv/spider-name.csv file
"""
def spider_opened(self, spider):
file_to_save = open('exports/csv/%s.csv' % spider.name, 'w+b')
self.files[spider] = file_to_save
self.exporter = CsvItemExporter(file_to_save)
self.exporter.start_exporting()
class ExportJSON(ExportData):
"""
Exporting to export/json/spider-name.json file
"""
def spider_opened(self, spider):
file_to_save = open('exports/json/%s.json' % spider.name, 'w+b')
self.files[spider] = file_to_save
self.exporter = JsonItemExporter(file_to_save)
self.exporter.start_exporting()
class ApplicationData(object):
"""
    Extension that reads spider information from file and writes a
    just-completed spider's information back to file.
"""
def __init__(self, stats):
self.files = {}
self.stats = stats
@classmethod
def from_crawler(cls, crawler):
ext = cls(crawler.stats)
crawler.signals.connect(ext.spider_opened, signals.spider_opened)
crawler.signals.connect(ext.spider_closed, signals.spider_closed)
return ext
@staticmethod
def spider_opened(spider):
from arachneserver.flaskapp import SPIDER_STATUS
try:
SPIDER_STATUS[spider.name]['running'] = True
except KeyError:
SPIDER_STATUS[spider.name] = {}
SPIDER_STATUS[spider.name]['running'] = True
SPIDER_STATUS[spider.name]['time_started'] = datetime.now()
def spider_closed(self, spider):
from arachneserver.flaskapp import SPIDER_STATUS
stats = self.stats.get_stats()
data = {
'last_run': stats['finish_time'],
'avg_time': stats['finish_time'] - stats['start_time'],
'running': False,
'exit_code': stats['finish_reason']
}
SPIDER_STATUS[spider.name] = data
spiders_info(action='write')
|
ArachneServer
|
/ArachneServer-1.0.2.tar.gz/ArachneServer-1.0.2/arachneserver/extensions.py
|
extensions.py
|
=======
Arachne
=======
.. image:: https://travis-ci.org/kirankoduru/arachne.svg
:target: https://travis-ci.org/kirankoduru/arachne
.. image:: https://coveralls.io/repos/kirankoduru/arachne/badge.svg?branch=master&service=github
:target: https://coveralls.io/github/kirankoduru/arachne?branch=master
Arachne provides a wrapper around your scrapy ``Spider`` objects to run them through a flask app. All you have to do is customize ``SPIDER_SETTINGS`` in the settings file.
Installation
============
You can install **Arachne** from pip::

    pip install Arachne
Sample settings
===============
This is a sample settings file for spiders in your project. The settings file should be called **settings.py** for **Arachne** to find it, and it looks like this::
# settings.py file
SPIDER_SETTINGS = [
{
'endpoint': 'dmoz',
'location': 'spiders.DmozSpider',
'spider': 'DmozSpider'
}
]
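Each spider entry may also carry its own ``scrapy_settings`` dict (added in 0.3.0); per-spider values take priority over the global ``SCRAPY_SETTINGS``. A minimal sketch of the merge performed by ``get_spider_settings`` (the variable names here are illustrative, not part of the API):

```python
# Sketch of the settings merge: per-spider values override global ones,
# mirroring get_spider_settings(). Names are illustrative only.
global_settings = {'DOWNLOAD_DELAY': 1, 'BOT_NAME': 'arachne'}
spider_settings = {'DOWNLOAD_DELAY': 5}  # per-spider override

merged = dict(global_settings)
merged.update(spider_settings)  # spider-level settings win
# merged == {'DOWNLOAD_DELAY': 5, 'BOT_NAME': 'arachne'}
```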
Usage
=====
It looks very similar to a flask app but since **Scrapy** depends on the python **twisted** package, we need to run our flask app with **twisted**::
from twisted.web.wsgi import WSGIResource
from twisted.web.server import Site
from twisted.internet import reactor
from arachne import Arachne
app = Arachne(__name__)
resource = WSGIResource(reactor, reactor.getThreadPool(), app)
site = Site(resource)
reactor.listenTCP(8080, site)
if __name__ == '__main__':
reactor.run()
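Once the server is running it exposes two endpoints: ``/`` lists the registered spiders, and ``/run-spider/<spider-name>`` schedules a crawl. A small sketch for building those URLs (the helper names and the port-8080 base are assumptions matching the snippet above; only the URL paths come from the library):

```python
# Helper sketch for the two Arachne endpoints. The helper names are
# illustrative; only the "/" and "/run-spider/<name>" paths are Arachne's.
BASE = "http://localhost:8080"

def list_spiders_url(base=BASE):
    # GET / returns the list of registered spiders
    return base + "/"

def run_spider_url(spider_name, base=BASE):
    # GET /run-spider/<spider_name> schedules that spider to crawl
    return "%s/run-spider/%s" % (base, spider_name)
```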
|
ArachneStrahi
|
/ArachneStrahi-0.5.0.tar.gz/ArachneStrahi-0.5.0/README.rst
|
README.rst
|
Changelog
=========
Here you will find the full list of changes between each Arachne release
Version 0.5.0 (Nov 20th 2016)
-----------------------------
- Add support for Scrapy 1.0 ([#4](https://github.com/kirankoduru/arachne/issues/4))
Version 0.4.0 (Mar 17th 2016)
-----------------------------
- Renamed the endpoints `/spiders` as `/` for more intuitive purposes - ([#8](https://github.com/kirankoduru/arachne/issues/8))
- The `/run-spider` endpoint returns the name of spider and the status of the spider as running
Version 0.3.1 (Nov 25th 2015)
-----------------------------
- [BUG FIX] Whoops! Forgot to test if there were individual spider `scrapy_settings` available
Version 0.3.0 (Nov 23rd 2015)
-----------------------------
- Add individual spider settings to the `scrapy_settings` variable
- Add global spider settings to the `SCRAPY_SETTINGS` variable
Version 0.2.0 (Nov 15th 2015)
-----------------------------
- Export to CSV and JSON pipeline now available
Version 0.1.0 (Nov 14th 2015)
-----------------------------
- First public preview release
|
ArachneStrahi
|
/ArachneStrahi-0.5.0.tar.gz/ArachneStrahi-0.5.0/CHANGES.md
|
CHANGES.md
|
import os
import sys
from flask import Flask
from scrapy import version_info as SCRAPY_VERSION
from arachne.exceptions import SettingsException
from arachne.endpoints import list_spiders_endpoint, run_spider_endpoint
class Arachne(Flask):
def __init__(self, import_name=__package__,
settings='settings.py', **kwargs):
"""Initialize the flask app with the settings variable. Load config
        from the settings variable and test that all the
        directories (for exports & logs) exist. Finally bind the endpoints for
the flask application to control the spiders
.. version 0.5.0:
Initialize Flask config with `CRAWLER_PROCESS` object if scrapy
version is 1.0.0 or greater
"""
super(Arachne, self).__init__(import_name, **kwargs)
self.settings = settings
self.load_config()
self.validate_spider_settings()
# create directories
self.check_dir(self.config['EXPORT_JSON'],
self.config['EXPORT_PATH'],
'json/')
        # directory must match the CSV export path (was mistakenly 'json/')
        self.check_dir(self.config['EXPORT_CSV'],
                       self.config['EXPORT_PATH'],
                       'csv/')
self.check_dir(self.config['LOGS'], self.config['LOGS_PATH'], '')
# from scrapy's version_info initialize Flask app
# for version before 1.0.0 you don't need to init crawler_process
if SCRAPY_VERSION >= (1, 0, 0):
self._init_crawler_process()
# initialize endpoints for API
self._init_url_rules()
def run(self, host=None, port=None, debug=None, **options):
super(Arachne, self).run(host, port, debug, **options)
def load_config(self):
"""Default settings are loaded first and then overwritten from
personal `settings.py` file
"""
self.config.from_object('arachne.default_settings')
if isinstance(self.settings, dict):
self.config.update(self.settings)
else:
if os.path.isabs(self.settings):
pyfile = self.settings
else:
abspath = os.path.abspath(os.path.dirname(sys.argv[0]))
pyfile = os.path.join(abspath, self.settings)
try:
self.config.from_pyfile(pyfile)
except IOError:
# assume envvar is going to be used exclusively
pass
except:
raise
# overwrite settings with custom environment variable
envvar = 'ARACHNE_SETTINGS'
if os.environ.get(envvar):
self.config.from_envvar(envvar)
def check_dir(self, config_name, export_path, folder):
"""Check if the directory in the config variable exists
"""
if config_name:
cwd = os.getcwd()
export_dir = cwd+'/'+export_path+folder
if not os.path.exists(export_dir):
raise SettingsException('Directory missing ', export_dir)
def validate_spider_settings(self):
try:
spider_settings = self.config['SPIDER_SETTINGS']
        except KeyError:
            raise SettingsException('SPIDER_SETTINGS missing')
        if not isinstance(spider_settings, list):
            raise SettingsException('SPIDER_SETTINGS must be a list')
def _init_url_rules(self):
"""Attach the endpoints to run spiders and list the spiders
that are available in the API
"""
self.add_url_rule('/run-spider/<spider_name>', view_func=run_spider_endpoint)
self.add_url_rule('/', view_func=list_spiders_endpoint)
def _init_crawler_process(self):
from scrapy.crawler import CrawlerProcess
self.config['CRAWLER_PROCESS'] = CrawlerProcess()
|
ArachneStrahi
|
/ArachneStrahi-0.5.0.tar.gz/ArachneStrahi-0.5.0/arachne/flaskapp.py
|
flaskapp.py
|
from scrapy import version_info as SCRAPY_VERSION
from scrapy.utils.misc import load_object
from scrapy.settings import Settings
from flask import current_app
if SCRAPY_VERSION <= (1, 0, 0):
import sys
import logging
from datetime import datetime
from twisted.python import logfile, log as tlog
from scrapy.crawler import Crawler
from scrapy.log import ScrapyFileLogObserver
def create_crawler_object(spider_, settings_):
"""
For the given scrapy settings and spider create a crawler object
Args:
spider_ (class obj): The scrapy spider class object
settings_(class obj): The scrapy settings class object
Returns:
A scrapy crawler class object
"""
crwlr = Crawler(settings_)
crwlr.configure()
crwlr.crawl(spider_)
return crwlr
def start_logger(debug):
"""
    Log to stdout if debug is set to True; otherwise log to rotating files.
    The log files rotate after exceeding a size of 1MB, keeping up to 100 files.
"""
if debug:
tlog.startLogging(sys.stdout)
else:
filename = datetime.now().strftime("%Y-%m-%d.scrapy.log")
logfile_ = logfile.LogFile(filename, 'logs/', maxRotatedFiles=100)
logger = ScrapyFileLogObserver(logfile_, logging.INFO)
tlog.addObserver(logger.emit)
def get_spider_settings(flask_app_config, spider_scrapy_settings):
"""
    For each spider, create a scrapy Settings object from the common spider
    settings and its personal settings. Individual spider scrapy settings
    take priority over global scrapy settings.
Returns:
Scrapy settings class instance
.. version 0.3.0:
Allow settings for individual spiders and global settings
"""
all_settings = flask_app_config['SCRAPY_SETTINGS']
if 'EXTENSIONS' not in all_settings:
all_settings['EXTENSIONS'] = {}
if flask_app_config['EXPORT_JSON']:
all_settings['EXTENSIONS']['arachne.extensions.ExportJSON'] = 100
if flask_app_config['EXPORT_CSV']:
all_settings['EXTENSIONS']['arachne.extensions.ExportCSV'] = 200
# spider scrapy settings has priority over global scrapy settings
for setting, _ in all_settings.items():
if spider_scrapy_settings and setting in spider_scrapy_settings:
all_settings[setting].update(spider_scrapy_settings[setting])
settings = Settings(all_settings)
return settings
def start_crawler(spider_loc, flask_app_config, spider_scrapy_settings, *args, **kwargs):
spider = load_object(spider_loc)
settings = get_spider_settings(flask_app_config, spider_scrapy_settings)
current_app.logger.debug(kwargs)
if SCRAPY_VERSION <= (1, 0, 0):
start_logger(flask_app_config['DEBUG'])
crawler = create_crawler_object(spider(*args, **kwargs), settings)
crawler.start()
else:
spider.custom_settings = settings
flask_app_config['CRAWLER_PROCESS'].crawl(spider, *args, **kwargs)
|
ArachneStrahi
|
/ArachneStrahi-0.5.0.tar.gz/ArachneStrahi-0.5.0/arachne/scrapy_utils.py
|
scrapy_utils.py
|
import django
from django.contrib.admin.sites import AdminSite as DjangoAdminSite
from django.core.urlresolvers import reverse, NoReverseMatch
from django.http.response import Http404
from django.template.defaultfilters import capfirst
from django.template.response import TemplateResponse
from django.views.decorators.cache import never_cache
from django.utils.translation import ugettext as _
import six
class CollectionAdminSite(DjangoAdminSite):
def register(self, model_or_iterable, admin_class=None, **options):
"""
Registers the given model(s) with the given admin class.
The model(s) should be Model classes, not instances.
If an admin class isn't given, it will use ModelAdmin (the default
admin options). If keyword arguments are given -- e.g., list_display --
they'll be applied as options to the admin class.
If a model is already registered, this will raise AlreadyRegistered.
If a model is abstract, this will raise ImproperlyConfigured.
"""
# if not admin_class:
# admin_class = ModelAdmin
if not isinstance(model_or_iterable, list):
model_or_iterable = [model_or_iterable]
for model in model_or_iterable:
# if model._meta.abstract:
# raise ImproperlyConfigured('The model %s is abstract, so it '
# 'cannot be registered with admin.' % model.__name__)
#
# if model in self._registry:
# raise AlreadyRegistered('The model %s is already registered' % model.__name__)
#
# # Ignore the registration if the model has been
# # swapped out.
# if not model._meta.swapped:
# # If we got **options then dynamically construct a subclass of
# # admin_class with those **options.
# if options:
# # For reasons I don't quite understand, without a __module__
# # the created class appears to "live" in the wrong place,
# # which causes issues later on.
# options['__module__'] = __name__
# admin_class = type("%sAdmin" % model.__name__, (admin_class,), options)
#
# if admin_class is not ModelAdmin and settings.DEBUG:
# admin_class.validate(model)
#
# # Instantiate the admin class to save in the registry
self._registry[model] = admin_class(model, self)
@never_cache
def index(self, request, extra_context=None):
"""
Displays the main admin index page, which lists all of the installed
apps that have been registered in this site.
"""
app_dict = {}
user = request.user
for model, model_admin in self._registry.items():
app_label = model._meta.app_label
has_module_perms = user.has_module_perms(app_label)
if has_module_perms:
perms = model_admin.get_model_perms(request)
# Check whether user has any perm for this module.
# If so, add the module to the model_list.
if True in perms.values():
info = (app_label, model._meta.model_name)
model_dict = {
'name': capfirst(model._meta.verbose_name_plural),
'object_name': model._meta.object_name,
'perms': perms,
}
if perms.get('change', False):
try:
model_dict['admin_url'] = reverse('admin:admin:%s_%s_changelist' % info, current_app=self.name)
except NoReverseMatch:
pass
if perms.get('add', False):
try:
model_dict['add_url'] = reverse('admin:admin:%s_%s_add' % info, current_app=self.name)
except NoReverseMatch:
pass
if app_label in app_dict:
app_dict[app_label]['models'].append(model_dict)
else:
app_dict[app_label] = {
'name': app_label.title(),
'app_label': app_label,
'app_url': reverse('admin:app_list', kwargs={'app_label': app_label}, current_app=self.name),
'has_module_perms': has_module_perms,
'models': [model_dict],
}
# Sort the apps alphabetically.
app_list = list(six.itervalues(app_dict))
app_list.sort(key=lambda x: x['name'])
# Sort the models alphabetically within each app.
for app in app_list:
app['models'].sort(key=lambda x: x['name'])
context = {
'title': _('Site administration'),
'app_list': app_list,
}
context.update(extra_context or {})
return TemplateResponse(request, self.index_template or
'admin/index.html', context,
current_app=self.name)
def app_index(self, request, app_label, extra_context=None):
user = request.user
has_module_perms = user.has_module_perms(app_label)
app_dict = {}
for model, model_admin in self._registry.items():
if app_label == model._meta.app_label:
if has_module_perms:
perms = model_admin.get_model_perms(request)
# Check whether user has any perm for this module.
# If so, add the module to the model_list.
if True in perms.values():
info = (app_label, model._meta.model_name)
model_dict = {
'name': capfirst(model._meta.verbose_name_plural),
'object_name': model._meta.object_name,
'perms': perms,
}
if perms.get('change', False):
try:
model_dict['admin_url'] = reverse('admin:admin:%s_%s_changelist' % info, current_app=self.name)
except NoReverseMatch:
pass
if perms.get('add', False):
try:
model_dict['add_url'] = reverse('admin:admin:%s_%s_add' % info, current_app=self.name)
except NoReverseMatch:
pass
if app_dict:
                            app_dict['models'].append(model_dict)
else:
# First time around, now that we know there's
# something to display, add in the necessary meta
# information.
app_dict = {
'name': app_label.title(),
'app_label': app_label,
'app_url': '',
'has_module_perms': has_module_perms,
'models': [model_dict],
}
if not app_dict:
raise Http404('The requested admin page does not exist.')
# Sort the models alphabetically within each app.
app_dict['models'].sort(key=lambda x: x['name'])
context = {
'title': _('%s administration') % capfirst(app_label),
'app_list': [app_dict],
}
context.update(extra_context or {})
return TemplateResponse(request, self.app_index_template or [
'admin/%s/app_index.html' % app_label,
'admin/app_index.html'
], context, current_app=self.name)
django.contrib.admin.site = CollectionAdminSite()
site = django.contrib.admin.site
|
ArangoDjango
|
/ArangoDjango-0.0.25.tar.gz/ArangoDjango-0.0.25/djara/django/contrib/admin/sites.py
|
sites.py
|
from django.conf import settings
from django.core.urlresolvers import reverse
from django.http import HttpResponseRedirect, QueryDict
from django.template.response import TemplateResponse
from django.utils.http import base36_to_int, is_safe_url, urlsafe_base64_decode, urlsafe_base64_encode
from django.utils.translation import ugettext as _
from django.utils.six.moves.urllib.parse import urlparse, urlunparse
from django.shortcuts import resolve_url
from django.utils.encoding import force_bytes, force_text
from django.views.decorators.debug import sensitive_post_parameters
from django.views.decorators.cache import never_cache
from django.views.decorators.csrf import csrf_protect
# Avoid shadowing the login() and logout() views below.
from django.contrib.auth import REDIRECT_FIELD_NAME, logout as auth_logout, get_user_model
from django.contrib.auth.decorators import login_required
from django.contrib.auth.forms import AuthenticationForm, PasswordResetForm, SetPasswordForm, PasswordChangeForm
from django.contrib.auth.tokens import default_token_generator
from django.contrib.sites.models import get_current_site
from djara.django.contrib.admin.forms import CollectionAuthenticationForm
from djara.django.auth.utils import login as auth_login
@sensitive_post_parameters()
@csrf_protect
@never_cache
def login(request, template_name='registration/login.html',
redirect_field_name=REDIRECT_FIELD_NAME,
authentication_form=CollectionAuthenticationForm,
current_app=None, extra_context=None):
"""
Displays the login form and handles the login action.
"""
redirect_to = request.REQUEST.get(redirect_field_name, '')
if request.method == "POST":
form = authentication_form(request, data=request.POST)
if form.is_valid():
# Ensure the user-originating redirection url is safe.
if not is_safe_url(url=redirect_to, host=request.get_host()):
redirect_to = resolve_url(settings.LOGIN_REDIRECT_URL)
# Okay, security check complete. Log the user in.
auth_login(request, form.get_user())
return HttpResponseRedirect(redirect_to)
else:
form = authentication_form(request)
current_site = get_current_site(request)
context = {
'form': form,
redirect_field_name: redirect_to,
'site': current_site,
'site_name': current_site.name,
}
if extra_context is not None:
context.update(extra_context)
return TemplateResponse(request, template_name, context,
current_app=current_app)
def logout(request, next_page=None,
template_name='registration/logged_out.html',
redirect_field_name=REDIRECT_FIELD_NAME,
current_app=None, extra_context=None):
"""
Logs out the user and displays 'You are logged out' message.
"""
auth_logout(request)
if next_page is not None:
next_page = resolve_url(next_page)
if redirect_field_name in request.REQUEST:
next_page = request.REQUEST[redirect_field_name]
# Security check -- don't allow redirection to a different host.
if not is_safe_url(url=next_page, host=request.get_host()):
next_page = request.path
if next_page:
# Redirect to this page until the session has been cleared.
return HttpResponseRedirect(next_page)
current_site = get_current_site(request)
context = {
'site': current_site,
'site_name': current_site.name,
'title': _('Logged out')
}
if extra_context is not None:
context.update(extra_context)
return TemplateResponse(request, template_name, context,
current_app=current_app)
def logout_then_login(request, login_url=None, current_app=None, extra_context=None):
"""
Logs out the user if he is logged in. Then redirects to the log-in page.
"""
if not login_url:
login_url = settings.LOGIN_URL
login_url = resolve_url(login_url)
return logout(request, login_url, current_app=current_app, extra_context=extra_context)
def redirect_to_login(next, login_url=None,
redirect_field_name=REDIRECT_FIELD_NAME):
"""
Redirects the user to the login page, passing the given 'next' page
"""
resolved_url = resolve_url(login_url or settings.LOGIN_URL)
login_url_parts = list(urlparse(resolved_url))
if redirect_field_name:
querystring = QueryDict(login_url_parts[4], mutable=True)
querystring[redirect_field_name] = next
login_url_parts[4] = querystring.urlencode(safe='/')
return HttpResponseRedirect(urlunparse(login_url_parts))
# 4 views for password reset:
# - password_reset sends the mail
# - password_reset_done shows a success message for the above
# - password_reset_confirm checks the link the user clicked and
# prompts for a new password
# - password_reset_complete shows a success message for the above
# @csrf_protect
# def password_reset(request, is_admin_site=False,
# template_name='registration/password_reset_form.html',
# email_template_name='registration/password_reset_email.html',
# subject_template_name='registration/password_reset_subject.txt',
# password_reset_form=PasswordResetForm,
# token_generator=default_token_generator,
# post_reset_redirect=None,
# from_email=None,
# current_app=None,
# extra_context=None):
# if post_reset_redirect is None:
# post_reset_redirect = reverse('password_reset_done')
# else:
# post_reset_redirect = resolve_url(post_reset_redirect)
# if request.method == "POST":
# form = password_reset_form(request.POST)
# if form.is_valid():
# opts = {
# 'use_https': request.is_secure(),
# 'token_generator': token_generator,
# 'from_email': from_email,
# 'email_template_name': email_template_name,
# 'subject_template_name': subject_template_name,
# 'request': request,
# }
# if is_admin_site:
# opts = dict(opts, domain_override=request.get_host())
# form.save(**opts)
# return HttpResponseRedirect(post_reset_redirect)
# else:
# form = password_reset_form()
# context = {
# 'form': form,
# }
# if extra_context is not None:
# context.update(extra_context)
# return TemplateResponse(request, template_name, context,
# current_app=current_app)
#
#
# def password_reset_done(request,
# template_name='registration/password_reset_done.html',
# current_app=None, extra_context=None):
# context = {}
# if extra_context is not None:
# context.update(extra_context)
# return TemplateResponse(request, template_name, context,
# current_app=current_app)
#
#
# # Doesn't need csrf_protect since no-one can guess the URL
# @sensitive_post_parameters()
# @never_cache
# def password_reset_confirm(request, uidb64=None, token=None,
# template_name='registration/password_reset_confirm.html',
# token_generator=default_token_generator,
# set_password_form=SetPasswordForm,
# post_reset_redirect=None,
# current_app=None, extra_context=None):
# """
# View that checks the hash in a password reset link and presents a
# form for entering a new password.
# """
# UserModel = get_user_model()
# assert uidb64 is not None and token is not None # checked by URLconf
# if post_reset_redirect is None:
# post_reset_redirect = reverse('password_reset_complete')
# else:
# post_reset_redirect = resolve_url(post_reset_redirect)
# try:
# uid = urlsafe_base64_decode(uidb64)
# user = UserModel._default_manager.get(pk=uid)
# except (TypeError, ValueError, OverflowError, UserModel.DoesNotExist):
# user = None
#
# if user is not None and token_generator.check_token(user, token):
# validlink = True
# if request.method == 'POST':
# form = set_password_form(user, request.POST)
# if form.is_valid():
# form.save()
# return HttpResponseRedirect(post_reset_redirect)
# else:
# form = set_password_form(None)
# else:
# validlink = False
# form = None
# context = {
# 'form': form,
# 'validlink': validlink,
# }
# if extra_context is not None:
# context.update(extra_context)
# return TemplateResponse(request, template_name, context,
# current_app=current_app)
#
# def password_reset_confirm_uidb36(request, uidb36=None, **kwargs):
# # Support old password reset URLs that used base36 encoded user IDs.
# # Remove in Django 1.7
# try:
# uidb64 = force_text(urlsafe_base64_encode(force_bytes(base36_to_int(uidb36))))
# except ValueError:
# uidb64 = '1' # dummy invalid ID (incorrect padding for base64)
# return password_reset_confirm(request, uidb64=uidb64, **kwargs)
#
# def password_reset_complete(request,
# template_name='registration/password_reset_complete.html',
# current_app=None, extra_context=None):
# context = {
# 'login_url': resolve_url(settings.LOGIN_URL)
# }
# if extra_context is not None:
# context.update(extra_context)
# return TemplateResponse(request, template_name, context,
# current_app=current_app)
#
#
# @sensitive_post_parameters()
# @csrf_protect
# @login_required
# def password_change(request,
# template_name='registration/password_change_form.html',
# post_change_redirect=None,
# password_change_form=PasswordChangeForm,
# current_app=None, extra_context=None):
# if post_change_redirect is None:
# post_change_redirect = reverse('password_change_done')
# else:
# post_change_redirect = resolve_url(post_change_redirect)
# if request.method == "POST":
# form = password_change_form(user=request.user, data=request.POST)
# if form.is_valid():
# form.save()
# return HttpResponseRedirect(post_change_redirect)
# else:
# form = password_change_form(user=request.user)
# context = {
# 'form': form,
# }
# if extra_context is not None:
# context.update(extra_context)
# return TemplateResponse(request, template_name, context,
# current_app=current_app)
#
#
# @login_required
# def password_change_done(request,
# template_name='registration/password_change_done.html',
# current_app=None, extra_context=None):
# context = {}
# if extra_context is not None:
# context.update(extra_context)
# return TemplateResponse(request, template_name, context,
# current_app=current_app)
|
ArangoDjango
|
/ArangoDjango-0.0.25.tar.gz/ArangoDjango-0.0.25/djara/django/contrib/admin/views.py
|
views.py
|
from django import forms
from django.contrib.admin.options import ModelAdmin, IS_POPUP_VAR
from django.contrib.admin.templatetags.admin_urls import add_preserved_filters
from django.contrib.admin.util import flatten_fieldsets
from django.contrib.admin.views.main import TO_FIELD_VAR
from django.core.exceptions import FieldError
from django.core.urlresolvers import reverse
from django.forms.models import modelform_defines_fields, modelform_factory
from django.template.response import TemplateResponse
from djara.django.forms.forms import CollectionForm
class CollectionAdmin(ModelAdmin):
def __init__(self, model, admin_site):
super(CollectionAdmin, self).__init__(model, admin_site)
form = CollectionForm
view_on_site = True
def get_form(self, request, obj=None, **kwargs):
"""
Returns a Form class for use in the admin add view. This is used by
add_view and change_view.
"""
if 'fields' in kwargs:
fields = kwargs.pop('fields')
else:
fields = flatten_fieldsets(self.get_fieldsets(request, obj))
if self.exclude is None:
exclude = []
else:
exclude = list(self.exclude)
exclude.extend(self.get_readonly_fields(request, obj))
if self.exclude is None and hasattr(self.form, '_meta') and self.form._meta.exclude:
# Take the custom ModelForm's Meta.exclude into account only if the
# ModelAdmin doesn't define its own.
exclude.extend(self.form._meta.exclude)
# if exclude is an empty list we pass None to be consistent with the
# default on modelform_factory
exclude = exclude or None
defaults = {
"form": self.form,
"fields": fields,
"exclude": exclude,
}
defaults.update(kwargs)
if defaults['fields'] is None and not modelform_defines_fields(defaults['form']):
defaults['fields'] = forms.ALL_FIELDS
try:
return modelform_factory(self.model, **defaults)
except FieldError as e:
raise FieldError('%s. Check fields/fieldsets/exclude attributes of class %s.'
% (e, self.__class__.__name__))
def get_view_on_site_url(self, obj=None):
if obj is None or not self.view_on_site:
return None
if callable(self.view_on_site):
return self.view_on_site(obj)
elif self.view_on_site and hasattr(obj, 'get_absolute_url'):
# use the ContentType lookup if view_on_site is True
return reverse('admin:view_on_site', kwargs={
'content_type_id': obj.collection_name,
'object_id': obj.id
})
def render_change_form(self, request, context, add=False, change=False, form_url='', obj=None):
opts = self.model._meta
app_label = opts.app_label
preserved_filters = self.get_preserved_filters(request)
form_url = add_preserved_filters({'preserved_filters': preserved_filters, 'opts': opts}, form_url)
view_on_site_url = self.get_view_on_site_url(obj)
context.update({
'add': add,
'change': change,
'has_add_permission': self.has_add_permission(request),
'has_change_permission': self.has_change_permission(request, obj),
'has_delete_permission': self.has_delete_permission(request, obj),
'has_file_field': True, # FIXME - this should check if form or formsets have a FileField,
'has_absolute_url': view_on_site_url is not None,
'absolute_url': view_on_site_url,
'form_url': form_url,
'opts': opts,
'content_type_id': self.model.collection_instance.name,
'save_as': self.save_as,
'save_on_top': self.save_on_top,
'to_field_var': TO_FIELD_VAR,
'is_popup_var': IS_POPUP_VAR,
'app_label': app_label,
})
if add and self.add_form_template is not None:
form_template = self.add_form_template
else:
form_template = self.change_form_template
return TemplateResponse(request, form_template or [
"admin/%s/%s/change_form.html" % (app_label, opts.model_name),
"admin/%s/change_form.html" % app_label,
"admin/change_form.html"
], context, current_app=self.admin_site.name)
def get_queryset(self, request):
"""
Returns a QuerySet of all model instances that can be edited by the
admin site. This is used by changelist_view.
"""
qs = self.model.objects.all()
# TODO: this should be handled by some parameter to the ChangeList.
ordering = self.get_ordering(request)
if ordering:
qs = qs.order_by(*ordering)
return qs
|
ArangoDjango
|
/ArangoDjango-0.0.25.tar.gz/ArangoDjango-0.0.25/djara/django/contrib/admin/options.py
|
options.py
|
from django.conf import settings
from django.core.urlresolvers import reverse_lazy
from django.http.response import HttpResponseNotFound
from django.shortcuts import render, redirect
from django.template import loader
from django.utils.crypto import get_random_string
from django.utils.translation import ugettext as _
from djara.django.auth.models import User
from djara.django.auth.utils import login, logout
def login_view(request):
"""
"""
return render(request, 'auth/user/login.html', {})
def login_action(request, view_name):
"""
"""
username = request.POST.get('username', None)
user_password = request.POST.get('user_password', None)
user = User.objects.get(username=username)
# User login
if user and user.login(password=user_password):
# Django session login
login(request=request, user=user)
return redirect(to=view_name)
else:
return render(request, 'auth/user/login.html', {})
def logout_action(request, view_name):
"""
"""
# User logout
request.user.logout()
# Django logout
if logout(request):
return redirect(to=view_name)
def register_view(request):
"""
"""
return render(request, 'auth/user/register.html', {})
def register_action(request, view_name):
"""
"""
username = request.POST.get('username', None)
email = request.POST.get('email', None)
user_password = request.POST.get('user_password', None)
user_password_again = request.POST.get('user_password_again', None)
# Check username and email
if User.objects.get(username=username) or User.objects.get(email_address=email):
raise Exception('Username or email is already taken')
# Check passwords
if user_password != user_password_again:
raise Exception('Passwords are not the same')
activation_key = get_random_string(length=20)
user = User()
user.username = username
user.email_address = email
user.password = user_password
user.activation_key = activation_key
user.save()
kwargs = {
'activation_key': activation_key,
'uuid': user.uuid,
'view_name': view_name,
}
# register_msg
register_template_string = loader.render_to_string('auth/user/register_email.html', dictionary={
'logo_url': '%s%simages/logo.png' % (settings.BASE_URL, settings.STATIC_URL),
'activate_account_url': '%s%s' % (settings.BASE_URL, reverse_lazy('auth:register-activate', kwargs=kwargs)),
})
user.email_user(subject=_('Registration'), message=register_template_string, from_email=settings.REGISTRATION_EMAIL)
return render(request, 'auth/user/register_successful.html', {})
def register_activate(request, activation_key, uuid, view_name):
"""
"""
user = User.objects.get(uuid=uuid, activation_key=activation_key)
if not user or not activation_key or len(activation_key) < 5:
return HttpResponseNotFound()
user.is_active = True
user.activation_key = ''
user.save()
return render(request, 'auth/user/register_finished.html', {})
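A sketch of the registration/activation handshake above, with stand-ins: `get_random_string` is replaced by a stdlib equivalent and the `User` model by a plain dict, so none of these names belong to the package.

```python
import random
import string

def make_activation_key(length=20):
    # Stand-in for django.utils.crypto.get_random_string (not
    # cryptographically strong; illustration only).
    alphabet = string.ascii_letters + string.digits
    return ''.join(random.choice(alphabet) for _ in range(length))

def can_activate(user, activation_key):
    # Mirrors the guard in register_activate: reject a missing user,
    # an empty key, or a key shorter than 5 characters.
    if not user or not activation_key or len(activation_key) < 5:
        return False
    return user.get('activation_key') == activation_key

key = make_activation_key()
user = {'activation_key': key, 'is_active': False}
```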
|
ArangoDjango
|
/ArangoDjango-0.0.25.tar.gz/ArangoDjango-0.0.25/djara/django/auth/views.py
|
views.py
|
from django.middleware.csrf import rotate_token
from djara.django.auth.models import User
SESSION_KEY = '_auth_user_id'
def get_session_user(request):
"""
Returns the user model instance associated with the given request session.
If no user is retrieved, an instance of `AnonymousUser` is returned.
"""
from .models import AnonymousUser
try:
user_id = request.session[SESSION_KEY]
user = User.objects.get(_id=user_id) or AnonymousUser()
except (KeyError, AssertionError):
user = AnonymousUser()
return user
def login(request, user):
if user is None:
user = request.user
# TODO: It would be nice to support different login methods, like signed cookies.
if SESSION_KEY in request.session:
if request.session[SESSION_KEY] != user.id:
# To avoid reusing another user's session, create a new, empty
# session if the existing session corresponds to a different
# authenticated user.
request.session.flush()
else:
request.session.cycle_key()
request.session[SESSION_KEY] = user.id
if hasattr(request, 'user'):
request.user = user
rotate_token(request)
def logout(request):
"""
Removes the authenticated user's ID from the request and flushes their
session data.
"""
# Dispatch the signal before the user is logged out so the receivers have a
# chance to find out *who* logged out.
user = getattr(request, 'user', None)
# If the user is not authenticated, there is nothing to log out.
if hasattr(user, 'is_authenticated') and not user.is_authenticated():
return False
# remember language choice saved to session
language = request.session.get('django_language')
request.session.flush()
if language is not None:
request.session['django_language'] = language
if hasattr(request, 'user'):
from .models import AnonymousUser
request.user = AnonymousUser()
return True
def authenticate(username, password):
"""
"""
user = User.objects.get(username=username)
if user is None or not user.password.verify(password):
return None
else:
return user
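The session-reuse guard in `login()` above reduces to a small amount of dict logic. A sketch with a plain dict standing in for the Django session (`flush` becomes `clear`; `cycle_key` is omitted):

```python
SESSION_KEY = '_auth_user_id'

def login_session(session, user_id):
    if SESSION_KEY in session:
        if session[SESSION_KEY] != user_id:
            # A different user was logged in: start from an empty session
            # rather than inherit their data.
            session.clear()
    session[SESSION_KEY] = user_id
    return session
```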
|
ArangoDjango
|
/ArangoDjango-0.0.25.tar.gz/ArangoDjango-0.0.25/djara/django/auth/utils.py
|
utils.py
|
from django.utils import timezone
from django.utils.translation import ugettext as _
from arangodb.index.unique import HashIndex
from arangodb.orm.fields import CharField, BooleanField, DatetimeField, UuidField, ManyToManyField
from djara.django.auth.fields import PasswordField
from djara.django.mail.utils import send_html_email
from djara.django.models.fields import DjangoBooleanField, DjangoTimeField
from djara.django.models.model import DjangoModel
# extend_meta_data
class BaseModel(DjangoModel):
uuid = UuidField()
class PermissionMixin(object):
def has_permission(self, name):
pass
class Permission(DjangoModel):
collection_name = 'djara_permission'
name = CharField(null=False)
class Group(DjangoModel, PermissionMixin):
collection_name = 'djara_group'
permissions = ManyToManyField(to=Permission, related_name='groups')
class User(BaseModel, PermissionMixin):
collection_name = 'djara_user'
username_index = HashIndex(fields=['username'])
email_address_index = HashIndex(fields=['email_address'])
username = CharField(verbose_name=_('Username'), null=False)
email_address = CharField(verbose_name=_('Mail address'), max_length=2048, null=False)
password = PasswordField()
activation_key = CharField(verbose_name=_('Activation key'), null=True, default=None)
is_active = DjangoBooleanField(verbose_name=_('Is active'), default=False, null=False)
#
is_staff_member = DjangoBooleanField(verbose_name=_('Is staff'), default=False, null=False)
is_owner = DjangoBooleanField(verbose_name=_('Is owner'), default=False, null=False)
# Login status
is_logged_in = DjangoBooleanField(verbose_name=_('Is logged in'), default=False, null=False)
last_login_time = DjangoTimeField(verbose_name=_('Last login'))
groups = ManyToManyField(to=Group, related_name='users')
permissions = ManyToManyField(to=Permission, related_name='users')
# Helper methods
def login(self, password):
"""
"""
if not self.is_active:
raise Exception('Not active')
if not self.password.verify(password=password):
raise Exception('Wrong password')
self.is_logged_in = True
self.last_login_time = timezone.now()
self.save()
return True
def logout(self):
"""
"""
if not self.is_active:
raise Exception('Not active')
if not self.is_logged_in:
raise Exception('Not logged in')
self.is_logged_in = False
self.save()
def is_authenticated(self):
"""
"""
return True
def is_staff(self):
"""
"""
return self.is_staff_member
def has_module_perms(self, app_label):
"""
"""
return self.is_staff()
def has_perm(self, code):
"""
"""
return self.is_staff()
def email_user(self, subject, message, from_email=None):
"""
Sends an email to this User.
"""
send_html_email(subject, message, from_email, [self.email_address])
def __unicode__(self):
return u'%s <%s>' % (self.username, self.email_address)
class Meta(object):
app_label = 'auth'
verbose_name = 'User'
verbose_name_plural = 'Users'
ordering = ('username', )
class AnonymousUser(User, PermissionMixin):
def is_authenticated(self):
return False
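A minimal stand-in for the `User` login/logout state machine above: no ORM and no password hashing, just the same guards on `is_active` and `is_logged_in` (the class name is ours).

```python
class SketchUser(object):
    def __init__(self, is_active=True):
        self.is_active = is_active
        self.is_logged_in = False

    def login(self):
        # Same precondition as User.login: inactive accounts cannot log in.
        if not self.is_active:
            raise Exception('Not active')
        self.is_logged_in = True
        return True

    def logout(self):
        # Same preconditions as User.logout.
        if not self.is_active:
            raise Exception('Not active')
        if not self.is_logged_in:
            raise Exception('Not logged in')
        self.is_logged_in = False
```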
|
ArangoDjango
|
/ArangoDjango-0.0.25.tar.gz/ArangoDjango-0.0.25/djara/django/auth/models.py
|
models.py
|
from django.db import models
from arangodb.orm.fields import ModelField
from arangodb.orm.models import CollectionModel, CollectionModelManager, CollectionQueryset
from arangodb.query.advanced import Query
class DjangoQuery(Query):
"""
"""
def __init__(self):
super(DjangoQuery, self).__init__()
self.select_related = True
def sort_by(self, field, order=None, collection=None):
"""
"""
if field == 'pk':
field = '_key'
if order is None:
order = self.SORTING_ASC
self.sorting.append({
'field': field,
'order': order,
'collection': collection,
})
def __getattribute__(self, item):
"""
"""
if item == 'order_by':
return object.__getattribute__(self, 'sorting')
else:
return super(DjangoQuery, self).__getattribute__(item)
class DjangoQueryset(CollectionQueryset):
"""
"""
def __init__(self, manager):
"""
"""
super(DjangoQueryset, self).__init__(manager)
self._query = DjangoQuery()
self.query = self._query
self.model = manager.model
def order_by(self, *args):
"""
"""
# We want only django sorting
for o in args:
if o.startswith('-'):
# A leading '-' means descending, per Django's order_by convention.
order = self._query.SORTING_DESC
o = o[1:]
else:
order = self._query.SORTING_ASC
field = o
self._query.sort_by(field=field, order=order)
self._has_cache = False
return self
class DjangoModelManager(CollectionModelManager):
queryset = DjangoQueryset
def __init__(self, cls):
"""
"""
super(DjangoModelManager, self).__init__(cls)
self.model = cls
def get(self, **kwargs):
"""
"""
if 'pk' in kwargs:
search_value = kwargs['pk']
del kwargs['pk']
kwargs['_key'] = search_value
return super(DjangoModelManager, self).get(**kwargs)
class DjangoModel(CollectionModel):
objects = DjangoModelManager
@classmethod
def extend_meta_data(cls, model_class, class_meta):
"""
"""
def get_field_by_name(field_name):
"""
"""
fields = model_class.get_all_fields()
if field_name in fields:
return [fields[field_name], model_class, False, False]
else:
raise models.FieldDoesNotExist
class_meta.get_field_by_name = get_field_by_name
# Get field, by name
class_meta.get_field = lambda field: get_field_by_name(field)[0]
# PK field
pk_model_field = ModelField()
pk_model_field.on_init(model_class, 'pk')
class_meta.pk = pk_model_field
class_meta.related_fkey_lookups = []
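`DjangoQuery` and `DjangoQueryset` above translate two Django-isms for ArangoDB: the `pk` alias (Arango's `_key`) and the `-field` descending prefix accepted by `order_by`. A sketch of both translations (constants and function names are ours):

```python
ASC, DESC = 'ASC', 'DESC'

def translate_lookup(field):
    # Django's 'pk' maps onto ArangoDB's '_key' document attribute.
    return '_key' if field == 'pk' else field

def parse_ordering(token):
    # '-name' -> descending on 'name'; 'name' -> ascending.
    if token.startswith('-'):
        return (translate_lookup(token[1:]), DESC)
    return (translate_lookup(token), ASC)
```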
|
ArangoDjango
|
/ArangoDjango-0.0.25.tar.gz/ArangoDjango-0.0.25/djara/django/models/model.py
|
model.py
|
import os
from datetime import datetime, date
from django.core.files.base import File
from django.core.files.storage import default_storage
from django.conf import settings
from django.utils.encoding import force_str, force_text
from django.utils import timezone
from django.utils.translation import ugettext as _
from django.db.models.fields.files import FileDescriptor
from arangodb.orm.fields import TextField, BooleanField, DateField, DatetimeField
class FieldFile(File):
def __init__(self, instance, field, name):
super(FieldFile, self).__init__(None, name)
self.instance = instance
self.field = field
self.storage = field.storage
self._committed = True
def __eq__(self, other):
# Older code may be expecting FileField values to be simple strings.
# By overriding the == operator, it can remain backwards compatible.
if hasattr(other, 'name'):
return self.name == other.name
return self.name == other
def __ne__(self, other):
return not self.__eq__(other)
def __hash__(self):
return hash(self.name)
# The standard File contains most of the necessary properties, but
# FieldFiles can be instantiated without a name, so that needs to
# be checked for here.
def _require_file(self):
if not self:
raise ValueError("The '%s' attribute has no file associated with it." % self.field.name)
def _get_file(self):
self._require_file()
if not hasattr(self, '_file') or self._file is None:
self._file = self.storage.open(self.name, 'rb')
return self._file
def _set_file(self, file):
self._file = file
def _del_file(self):
del self._file
file = property(_get_file, _set_file, _del_file)
def _get_path(self):
self._require_file()
return self.storage.path(self.name)
path = property(_get_path)
def _get_url(self):
self._require_file()
return self.storage.url(self.name)
url = property(_get_url)
def _get_size(self):
self._require_file()
if not self._committed:
return self.file.size
return self.storage.size(self.name)
size = property(_get_size)
def open(self, mode='rb'):
self._require_file()
self.file.open(mode)
# open() doesn't alter the file's contents, but it does reset the pointer
open.alters_data = True
# In addition to the standard File API, FieldFiles have extra methods
# to further manipulate the underlying file, as well as update the
# associated model instance.
def save(self, name, content, save=True):
# name = self.field.generate_filename(self.instance, name)
self.name = self.storage.save(name, content)
setattr(self.instance, self.field.name, self.name)
# Update the filesize cache
self._size = content.size
self._committed = True
# Save the object because it has changed, unless save is False
if save:
self.instance.save()
save.alters_data = True
def delete(self, save=True):
if not self:
return
# Only close the file if it's already open, which we know by the
# presence of self._file
if hasattr(self, '_file'):
self.close()
del self.file
self.storage.delete(self.name)
self.name = None
setattr(self.instance, self.field.name, self.name)
# Delete the filesize cache
if hasattr(self, '_size'):
del self._size
self._committed = False
if save:
self.instance.save()
delete.alters_data = True
def _get_closed(self):
file = getattr(self, '_file', None)
return file is None or file.closed
closed = property(_get_closed)
def close(self):
file = getattr(self, '_file', None)
if file is not None:
file.close()
def __getstate__(self):
# FieldFile needs access to its associated model field and an instance
# it's attached to in order to work properly, but the only necessary
# data to be pickled is the file's name itself. Everything else will
# be restored later, by FileDescriptor below.
return {'name': self.name, 'closed': False, '_committed': True, '_file': None}
class FileField(TextField):
# The class to wrap instance attributes in. Accessing the file object off
# the instance will always return an instance of attr_class.
attr_cls = FieldFile
# The descriptor to use for accessing the attribute off of the class.
descriptor_class = FileDescriptor
def __init__(self, upload_to='', storage=None, **kwargs):
"""
"""
super(FileField, self).__init__(**kwargs)
self.file_name = None
self.storage = storage or default_storage
self.upload_to = upload_to
if callable(upload_to):
self.generate_filename = upload_to
def create_file_instance(self, model_instance):
"""
"""
self.file = self.attr_cls(instance=model_instance, field=self, name=self.file_name)
def on_save(self, model_instance):
"""
"""
self.create_file_instance(model_instance)
file = self.file
# This was reset
if self.file_name == self.content_file:
return
if file:
# Commit the file to storage prior to saving the model
file.save(file.name, self.content_file, save=False)
def get(self):
"""
"""
return self.file_name
def set(self, *args, **kwargs):
"""
"""
if len(args) == 1:
value = args[0]
self.content_file = value
if '/' not in value:
self.file_name = self.generate_filename(self, value)
else:
self.file_name = value
def dumps(self):
"""
"""
return self.file.path
def loads(self, string_val):
"""
"""
self.file_name = string_val
def validate(self):
"""
"""
super(FileField, self).validate()
def get_directory_name(self):
return os.path.normpath(force_text(datetime.now().strftime(force_str(self.upload_to))))
def get_filename(self, filename):
return os.path.normpath(self.storage.get_valid_name(filename))
def generate_filename(self, instance, filename):
return os.path.join(self.get_directory_name(), self.get_filename(filename))
class DjangoBooleanField(BooleanField):
flatchoices = (
(False, _('False')),
(True, _('True')),
)
def __getattribute__(self, item):
"""
"""
if item == 'choices':
return True
else:
return super(BooleanField, self).__getattribute__(item)
class DjangoDateField(DateField):
"""
Can be timezone aware
"""
def __init__(self, **kwargs):
"""
"""
super(DjangoDateField, self).__init__(**kwargs)
if self.null and not self.default:
self.date = None
else:
if self.default:
self.date = self.default
else:
if getattr(settings, 'USE_TZ', False):
self.date = timezone.now().date()
else:
self.date = date.today()
class DjangoTimeField(DatetimeField):
"""
Can be timezone aware
"""
def __init__(self, **kwargs):
"""
"""
super(DjangoTimeField, self).__init__(**kwargs)
if self.null and not self.default:
self.time = None
else:
if self.default:
self.time = self.default
else:
if getattr(settings, 'USE_TZ', False):
self.time = timezone.now()
else:
self.time = datetime.now()
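`FileField.generate_filename` above combines a strftime-expanded `upload_to` directory with a storage-validated filename. A sketch under the assumption that storage-level name validation is skipped:

```python
import os
from datetime import datetime

def generate_filename(upload_to, filename, now=None):
    # upload_to acts as a strftime pattern for the target directory,
    # e.g. 'uploads/%Y/%m' -> 'uploads/2014/06'.
    now = now or datetime.now()
    directory = os.path.normpath(now.strftime(upload_to))
    return os.path.join(directory, os.path.normpath(filename))

path = generate_filename('uploads/%Y/%m', 'avatar.png', datetime(2014, 6, 3))
```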
|
ArangoDjango
|
/ArangoDjango-0.0.25.tar.gz/ArangoDjango-0.0.25/djara/django/models/fields.py
|
fields.py
|
from django.core.exceptions import ObjectDoesNotExist, ValidationError
from django.utils.encoding import smart_text
from rest_framework.fields import is_simple_callable, get_component
from rest_framework.relations import PrimaryKeyRelatedField
class RelatedCollectionModelField(PrimaryKeyRelatedField):
"""
Represents a relationship as a id value.
"""
# TODO: Remove these field hacks...
def prepare_value(self, obj):
return self.to_native(self._get_object_id(obj=obj))
def label_from_instance(self, obj):
"""
Return a readable representation for use with eg. select widgets.
"""
desc = smart_text(obj)
ident = smart_text(self.to_native(self._get_object_id(obj=obj)))
if desc == ident:
return desc
return "%s - %s" % (desc, ident)
# TODO: Possibly change this to just take `obj`, though probably less performant
def to_native(self, id):
return id
def from_native(self, data):
if self.queryset is None:
raise Exception('Writable related fields must include a `queryset` argument')
try:
single_object = self.queryset.get(_id=data)
# Check if object doesn't exists
if single_object is None:
raise ObjectDoesNotExist()
# Else return it
else:
return single_object
except ObjectDoesNotExist:
msg = self.error_messages['does_not_exist'] % smart_text(data)
raise ValidationError(msg)
except (TypeError, ValueError):
received = type(data).__name__
msg = self.error_messages['incorrect_type'] % received
raise ValidationError(msg)
def field_to_native(self, obj, field_name):
if self.many:
# To-many relationship
queryset = None
if not self.source:
# Prefer obj.serializable_value for performance reasons
try:
queryset = obj.serializable_value(field_name)
except AttributeError:
pass
if queryset is None:
# RelatedManager (reverse relationship)
source = self.source or field_name
queryset = obj
for component in source.split('.'):
if queryset is None:
return []
queryset = get_component(queryset, component)
# Forward relationship
if is_simple_callable(getattr(queryset, 'all', None)):
queryset = queryset.all()
return [self.to_native(self._get_object_id(obj=item)) for item in queryset]
else:
# Also support non-queryset iterables.
# This allows us to also support plain lists of related items.
return [self.to_native(self._get_object_id(obj=item)) for item in queryset]
# To-one relationship
try:
# Prefer obj.serializable_value for performance reasons
id = obj.serializable_value(self.source or field_name)
except AttributeError:
# RelatedObject (reverse relationship)
try:
id = getattr(obj, self.source or field_name).id
except (ObjectDoesNotExist, AttributeError):
return None
# Forward relationship
return self.to_native(id)
def _get_object_id(self, obj):
"""
"""
return obj.document.id
|
ArangoDjango
|
/ArangoDjango-0.0.25.tar.gz/ArangoDjango-0.0.25/djara/django/restframework/relations.py
|
relations.py
|
import copy
from django.core.exceptions import ValidationError
from django.utils.datastructures import SortedDict
from rest_framework import serializers, fields, relations
from arangodb.orm.fields import NumberField, CharField, ForeignKeyField, ManyToManyField, BooleanField, ChoiceField, \
DateField, DatetimeField
from djara.django.models.fields import FileField
from djara.django.restframework.fields import RestFileField
from djara.django.restframework.relations import RelatedCollectionModelField
class CollectionModelSerializerOptions(serializers.SerializerOptions):
"""
Meta class options for CollectionModelSerializer
"""
def __init__(self, meta):
super(CollectionModelSerializerOptions, self).__init__(meta)
self.model = getattr(meta, 'model', None)
self.read_only_fields = getattr(meta, 'read_only_fields', ())
self.write_only_fields = getattr(meta, 'write_only_fields', ())
class CollectionModelSerializer(serializers.Serializer):
"""
"""
_options_class = CollectionModelSerializerOptions
field_mapping = {
# ArangoPy fields
BooleanField: fields.BooleanField,
NumberField: fields.IntegerField,
CharField: fields.CharField,
ChoiceField: fields.ChoiceField,
DateField: fields.DateField,
DatetimeField: fields.DateTimeField,
# Only Django fields
FileField: RestFileField,
}
def get_fields(self):
"""
"""
return_fields = SortedDict()
# Get the explicitly declared fields
base_fields = copy.deepcopy(self.base_fields)
for key, field in base_fields.items():
return_fields[key] = field
# Add in the default fields
default_fields = self.get_default_fields()
for key, val in default_fields.items():
if key not in return_fields:
return_fields[key] = val
model_class = self.opts.model
displayed_fields = self.opts.fields
model_class_fields = model_class.get_collection_fields_dict()
if 'id' in displayed_fields:
return_fields['id'] = fields.CharField(read_only=True)
if 'key' in displayed_fields:
return_fields['key'] = fields.IntegerField(read_only=True)
for field_name in displayed_fields:
if field_name == 'id' or field_name == 'key':
continue
if field_name in return_fields:
continue
field = model_class_fields[field_name]
kwargs = {}
if field.__class__ in self.field_mapping:
restframework_field_class = self.field_mapping[field.__class__]
restframework_field = restframework_field_class(**kwargs)
return_fields[field_name] = restframework_field
# get_related_field
if isinstance(field, ForeignKeyField):
return_fields[field_name] = self.get_related_field(model_field=field, related_model=field.relation_class, to_many=False)
if isinstance(field, ManyToManyField):
return_fields[field_name] = self.get_related_field(model_field=field, related_model=field.relation_class, to_many=True)
return return_fields
def get_field(self, model_field):
"""
Creates a default instance of a basic non-relational field.
"""
def get_related_field(self, model_field, related_model, to_many):
"""
Creates a default instance of a flat relational field.
Note that model_field will be `None` for reverse relationships.
"""
# TODO: filter queryset using:
# .using(db).complex_filter(self.rel.limit_choices_to)
kwargs = {
'queryset': related_model.objects,
'many': to_many
}
if model_field:
kwargs['required'] = not(model_field.null or model_field.blank)
# if model_field.help_text is not None:
# kwargs['help_text'] = model_field.help_text
# if model_field.verbose_name is not None:
# kwargs['label'] = model_field.verbose_name
# if not model_field.editable:
# kwargs['read_only'] = True
# if model_field.verbose_name is not None:
# kwargs['label'] = model_field.verbose_name
# if model_field.help_text is not None:
# kwargs['help_text'] = model_field.help_text
return RelatedCollectionModelField(**kwargs)
def restore_fields(self, data, files):
"""
Core of deserialization, together with `restore_object`.
Converts a dictionary of data into a dictionary of deserialized fields.
"""
reverted_data = {}
if isinstance(data, basestring):
model = self.opts.model.objects.get(_id=data)
self.object = model
return model
if data is not None and not isinstance(data, dict):
self._errors['non_field_errors'] = ['Invalid data']
return None
for field_name, field in self.fields.items():
field.initialize(parent=self, field_name=field_name)
try:
field.field_from_native(data, files, field_name, reverted_data)
except ValidationError as err:
self._errors[field_name] = list(err.messages)
return reverted_data
def restore_object(self, attrs, instance=None):
"""
Deserialize a dictionary of attributes into an object instance.
You should override this method to control how deserialized objects
are instantiated.
"""
if instance is not None:
# An existing instance (e.g. one resolved from an id string by
# restore_fields) is returned unchanged.
return instance
instance = self.opts.model()
for attribute_name in attrs:
attribute_value = attrs[attribute_name]
setattr(instance, attribute_name, attribute_value)
return instance
def save_object(self, obj, **kwargs):
model_class = self.opts.model
obj = model_class.objects._create_model_from_dict(obj)
self.object = obj
obj.save()
def delete_object(self, obj):
obj.delete()
def save(self, **kwargs):
"""
Save the deserialized object and return it.
"""
# Clear cached _data, which may be invalidated by `save()`
self._data = None
if isinstance(self.object, list):
[self.save_object(item, **kwargs) for item in self.object]
if self.object._deleted:
[self.delete_object(item) for item in self.object._deleted]
else:
self.save_object(self.object, **kwargs)
return self.object
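`get_fields` above dispatches on the model field's exact class through the `field_mapping` dict. A sketch of that dispatch with stand-in classes (none of these names come from the package):

```python
class CharField(object): pass
class NumberField(object): pass
class CharFormField(object): pass
class IntegerFormField(object): pass

FIELD_MAPPING = {
    CharField: CharFormField,
    NumberField: IntegerFormField,
}

def build_form_field(model_field):
    # Exact-class lookup, as in get_fields(); supporting subclasses would
    # require walking type(model_field).__mro__ instead.
    cls = FIELD_MAPPING.get(model_field.__class__)
    return cls() if cls else None
```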
|
ArangoDjango
|
/ArangoDjango-0.0.25.tar.gz/ArangoDjango-0.0.25/djara/django/restframework/serializers.py
|
serializers.py
|
class MenuTree(object):
def __init__(self):
"""
"""
self.items = []
def get_item(self, key):
"""
"""
for item_obj in self.items:
if item_obj.key == key or item_obj.view == key:
return item_obj
raise Exception('No item found')
class MenuTreeItem(object):
def __init__(self, key, display_name, view, permission_method, **kwargs):
"""
"""
# To which view it is mapped
self.view = view
self.key = key
self.display_name = display_name
self.items = []
self.permission_method = permission_method
# Temporary variable
self.active = False
# Children
self.children = kwargs.pop('children', [])
def add_child(self, child):
"""
"""
self.children.append(child)
def get_child(self, view_name):
"""
"""
child_with_view = None
for child in self.children:
if child.view == view_name:
child_with_view = child
break
return child_with_view
def has_child(self, view_name):
"""
"""
return self.get_child(view_name) is not None
def is_current_item(self, current_view):
"""
"""
return self.view == current_view
class GlobalMenu(MenuTree):
_instance = None
def __init__(self):
super(GlobalMenu, self).__init__()
@classmethod
def init(cls):
"""
"""
if cls._instance is None:
cls._instance = cls()
@classmethod
def instance(cls):
"""
"""
cls.init()
return cls._instance
@classmethod
def add_menu_item(cls, item, parent_key=None):
"""
"""
if parent_key is None:
cls.instance().items.append(item)
else:
items = cls.instance().items
# Iterate over all items and search the parent
for parent_item in items:
if parent_item.key == parent_key:
parent_item.add_child(item)
break
@classmethod
def item(cls, key):
"""
"""
return cls.instance().get_item(key=key)
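The `MenuTree`/`MenuTreeItem` structures above boil down to items with children plus a search for the item owning the current view (the same walk `show_active_sub_menu` performs in the template tags). A trimmed sketch with our own class names:

```python
class Item(object):
    def __init__(self, view):
        self.view = view
        self.children = []

    def get_child(self, view_name):
        for child in self.children:
            if child.view == view_name:
                return child
        return None

def active_item(items, current_view):
    # An item is active when it is the current view itself or when one
    # of its direct children is.
    for item in items:
        if item.view == current_view or item.get_child(current_view):
            return item
    return None

home = Item('home')
home.children.append(Item('home:detail'))
about = Item('about')
menu = [home, about]
```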
|
ArangoDjango
|
/ArangoDjango-0.0.25.tar.gz/ArangoDjango-0.0.25/djara/django/menus/models.py
|
models.py
|
from django import template
from django.template import loader, Context
from django.template.base import Template
from djara.django.menus.models import GlobalMenu
register = template.Library()
def show_global_menu(context, menu_template='djara/django/menus/simple_menu.html'):
"""
"""
request = context['request']
template_context = Context({
'menu_items': GlobalMenu.instance().items,
'current_view': request.resolver_match.view_name,
'user': context['user'],
})
template = loader.get_template(menu_template)
rendered = template.render(template_context)
return rendered
register.simple_tag(takes_context=True)(show_global_menu)
def show_active_sub_menu(context, menu_template='djara/django/menus/simple_menu.html'):
"""
"""
request = context['request']
current_view = request.resolver_match.view_name
def get_active_menu(menu_tree):
active_item = None
for item in menu_tree.items:
# Check if this item is the current view
if item.is_current_item(current_view):
active_item = item
break
# Check if a child of the item is the current view
child_item = item.get_child(current_view)
if child_item is not None:
child_item.active = True
active_item = item
break
return active_item
active_menu_item = get_active_menu(GlobalMenu.instance())
return render_menu_children(context, active_menu_item, menu_template)
register.simple_tag(takes_context=True)(show_active_sub_menu)
def render_menu_children(context, active_menu_item, menu_template='djara/django/menus/simple_menu.html'):
"""
"""
request = context['request']
if active_menu_item is None:
children = []
else:
children = active_menu_item.children
template_context = Context({
'menu_items': children,
'current_view': request.resolver_match.view_name,
'user': context['user'],
})
template = loader.get_template(menu_template)
rendered = template.render(template_context)
return rendered
class ShowPermittedMenuEntryNode(template.Node):
def __init__(self, nodelist, menu_item_variable):
self.nodelist = nodelist
self.menu_item_variable = menu_item_variable
def render(self, context):
''' If a variable is passed, it arrives as a string, so we first render the
nodelist to resolve the variable, then render the resulting string. '''
user_variable = template.Variable('user').resolve(context)
resolved_menu_item = template.Variable(self.menu_item_variable).resolve(context)
if resolved_menu_item.permission_method(user_variable) is True:
template_string = ''
# Set if is current view
current_view = context['current_view']
# Check if it is the current view
if resolved_menu_item.is_current_item(current_view):
resolved_menu_item.active = True
# Check if a child of it is active
child_view = resolved_menu_item.get_child(current_view)
if child_view is not None:
resolved_menu_item.active = True
# Render template
for node in self.nodelist:
template_string += node.render(context)
rendered = Template(template_string).render(context)
# Unset current view
resolved_menu_item.active = False
return rendered
else:
return ''
def show_permitted_menu_entry(parser, token):
"""
"""
try:
menu_item_variable = token.split_contents()[1]
nodelist = parser.parse(('empty', 'endshow_permitted_menu_entry',))
token = parser.next_token()
if token.contents == 'empty':
# The optional 'empty' branch is parsed but not rendered by the node.
parser.parse(('endshow_permitted_menu_entry',))
parser.delete_first_token()
except Exception:
raise template.TemplateSyntaxError('Syntax for show_permitted_menu_entry tag is wrong.')
return ShowPermittedMenuEntryNode(nodelist, menu_item_variable)
register.tag('show_permitted_menu_entry', show_permitted_menu_entry)
|
ArangoDjango
|
/ArangoDjango-0.0.25.tar.gz/ArangoDjango-0.0.25/djara/django/menus/templatetags/menu_tags.py
|
menu_tags.py
|
import six
import warnings
from django.core.exceptions import FieldError
from django.forms.util import ErrorList
from django.forms.widgets import media_property
from django.utils.datastructures import SortedDict
from django.forms.fields import CharField, FileField, ChoiceField, BooleanField, DateField, DateTimeField
from django.forms.forms import BaseForm, get_declared_fields
from django.forms.models import ModelFormOptions, ALL_FIELDS, InlineForeignKeyField
from arangodb.orm.fields import CharField as ArangoCharField, ChoiceField as ArangoChoiceField, \
BooleanField as ArangoBooleanField, ForeignKeyField as ArangoForeignKeyField, DateField as ArangoDateField, \
DatetimeField as ArangoDatetimeField, ListField, DictField
from djara.django.models.fields import FileField as ArangoFileField
def map_arangopy_field_to_form_field(model_field, **kwargs):
"""
"""
if isinstance(model_field, ArangoCharField):
field = CharField(max_length=model_field.max_length, **kwargs)
elif isinstance(model_field, ListField):
field = CharField(**kwargs)
elif isinstance(model_field, DictField):
field = CharField(**kwargs)
elif isinstance(model_field, ArangoFileField):
field = FileField(**kwargs)
elif isinstance(model_field, ArangoChoiceField):
field = ChoiceField(choices=model_field.choices)
elif isinstance(model_field, ArangoBooleanField):
field = BooleanField(**kwargs)
elif isinstance(model_field, ArangoDateField):
field = DateField(**kwargs)
elif isinstance(model_field, ArangoDatetimeField):
field = DateTimeField(**kwargs)
elif isinstance(model_field, ArangoForeignKeyField):
field = InlineForeignKeyField(**kwargs)
else:
field = None
return field
def fields_for_model(model, fields=None, exclude=None, widgets=None,
formfield_callback=None, localized_fields=None,
labels=None, help_texts=None, error_messages=None):
"""
Returns a ``SortedDict`` containing form fields for the given model.
``fields`` is an optional list of field names. If provided, only the named
fields will be included in the returned fields.
``exclude`` is an optional list of field names. If provided, the named
fields will be excluded from the returned fields, even if they are listed
in the ``fields`` argument.
``widgets`` is a dictionary of model field names mapped to a widget.
``localized_fields`` is a list of names of fields which should be localized.
``labels`` is a dictionary of model field names mapped to a label.
``help_texts`` is a dictionary of model field names mapped to a help text.
``error_messages`` is a dictionary of model field names mapped to a
dictionary of error messages.
``formfield_callback`` is a callable that takes a model field and returns
a form field.
"""
field_list = []
ignored = []
# Avoid circular import
# from django.db.models.fields import Field as ModelField
# sortable_virtual_fields = [f for f in opts.virtual_fields
# if isinstance(f, ModelField)]
sortable_virtual_fields = model.get_collection_fields()
for f in sorted(sortable_virtual_fields):
if getattr(f, 'read_only', False):
continue
if fields is not None and f.name not in fields:
continue
if exclude and f.name in exclude:
continue
kwargs = {}
if widgets and f.name in widgets:
kwargs['widget'] = widgets[f.name]
if localized_fields == ALL_FIELDS or (localized_fields and f.name in localized_fields):
kwargs['localize'] = True
if labels and f.name in labels:
kwargs['label'] = labels[f.name]
if help_texts and f.name in help_texts:
kwargs['help_text'] = help_texts[f.name]
if error_messages and f.name in error_messages:
kwargs['error_messages'] = error_messages[f.name]
if formfield_callback is None:
formfield = map_arangopy_field_to_form_field(f, **kwargs)
# formfield = f.formfield(**kwargs)
elif not callable(formfield_callback):
raise TypeError('formfield_callback must be a function or callable')
else:
formfield = formfield_callback(f, **kwargs)
if formfield or fields is None:
field_list.append((f.name, formfield))
else:
ignored.append(f.name)
field_dict = SortedDict(field_list)
if fields:
field_dict = SortedDict(
[(f, field_dict.get(f)) for f in fields
if ((not exclude) or (exclude and f not in exclude)) and (f not in ignored)]
)
return field_dict
class BaseCollectionForm(BaseForm):
def __init__(self, data=None, files=None, auto_id='id_%s', prefix=None,
initial=None, error_class=ErrorList, label_suffix=None,
empty_permitted=False, instance=None):
opts = self._meta
if opts.model is None:
raise ValueError('ModelForm has no model class specified.')
if instance is None:
# if we didn't get an instance, instantiate a new one
self.instance = opts.model()
object_data = {}
else:
self.instance = instance
object_data = {}
exclude = []
if getattr(opts, 'exclude', None):
exclude = opts.exclude
for field_name in opts.fields:
if field_name in exclude:
continue
object_data[field_name] = getattr(instance, field_name)
# if initial was provided, it should override the values from instance
if initial is not None:
object_data.update(initial)
# self._validate_unique will be set to True by BaseModelForm.clean().
# It is False by default so overriding self.clean() and failing to call
# super will stop validate_unique from being called.
self._validate_unique = False
super(BaseCollectionForm, self).__init__(data, files, auto_id, prefix, object_data,
error_class, label_suffix, empty_permitted)
class CollectionFormMetaclass(type):
def __new__(cls, name, bases, attrs):
formfield_callback = attrs.pop('formfield_callback', None)
try:
parents = [b for b in bases if issubclass(b, CollectionForm)]
except NameError:
# We are defining ModelForm itself.
parents = None
declared_fields = get_declared_fields(bases, attrs, False)
new_class = super(CollectionFormMetaclass, cls).__new__(cls, name, bases,
attrs)
if not parents:
return new_class
if 'media' not in attrs:
new_class.media = media_property(new_class)
opts = new_class._meta = ModelFormOptions(getattr(new_class, 'Meta', None))
# We check if a string was passed to `fields` or `exclude`,
# which is likely to be a mistake where the user typed ('foo') instead
# of ('foo',)
for opt in ['fields', 'exclude', 'localized_fields']:
value = getattr(opts, opt)
if isinstance(value, six.string_types) and value != ALL_FIELDS:
msg = ("%(model)s.Meta.%(opt)s cannot be a string. "
"Did you mean to type: ('%(value)s',)?" % {
'model': new_class.__name__,
'opt': opt,
'value': value,
})
raise TypeError(msg)
if opts.model:
# If a model is defined, extract form fields from it.
if opts.fields is None and opts.exclude is None:
# This should be some kind of assertion error once deprecation
# cycle is complete.
warnings.warn("Creating a ModelForm without either the 'fields' attribute "
"or the 'exclude' attribute is deprecated - form %s "
"needs updating" % name,
PendingDeprecationWarning, stacklevel=2)
if opts.fields == ALL_FIELDS:
# sentinel for fields_for_model to indicate "get the list of
# fields from the model"
opts.fields = None
fields = fields_for_model(opts.model, opts.fields, opts.exclude,
opts.widgets, formfield_callback,
opts.localized_fields, opts.labels,
opts.help_texts, opts.error_messages)
# make sure opts.fields doesn't specify an invalid field
none_model_fields = [k for k, v in six.iteritems(fields) if not v]
missing_fields = set(none_model_fields) - \
set(declared_fields.keys())
if missing_fields:
message = 'Unknown field(s) (%s) specified for %s'
message = message % (', '.join(missing_fields),
opts.model.__name__)
raise FieldError(message)
# Override default model fields with any custom declared ones
# (plus, include all the other declared fields).
fields.update(declared_fields)
else:
fields = declared_fields
new_class.declared_fields = declared_fields
new_class.base_fields = fields
return new_class
class CollectionForm(six.with_metaclass(CollectionFormMetaclass, BaseCollectionForm)):
def save(self, commit=True):
"""
"""
for key, value in self.cleaned_data.items():
setattr(self.instance, key, value)
self.instance.save()
|
ArangoDjango
|
/ArangoDjango-0.0.25.tar.gz/ArangoDjango-0.0.25/djara/django/forms/forms.py
|
forms.py
|
ArangoPy
========
Python Framework to access https://github.com/triAGENS/ArangoDB
[](https://travis-ci.org/saeschdivara/ArangoPy)
Coverage Master: [](https://coveralls.io/r/saeschdivara/ArangoPy?branch=master)
Coverage Dev: [](https://coveralls.io/r/saeschdivara/ArangoPy?branch=0.5)
Code Health Master: [](https://landscape.io/github/saeschdivara/ArangoPy/master)
Code Health Dev: [](https://landscape.io/github/saeschdivara/ArangoPy/0.5)
[](https://pypi.python.org/pypi/ArangoPy)
[](https://pypi.python.org/pypi/ArangoPy)
[](https://pypi.python.org/pypi/ArangoPy)
[](https://arangopy.readthedocs.org/en/stable/)
Installation
------------
pip install arangopy
or
python setup.py install
Updates
----------
Follow on [Twitter](https://twitter.com/arango_py/)
Supported versions
------------
### ArangoDB
2.2, 2.3, 2.4, 2.5, 2.6
I am running 2.6 at the moment.
### Python
I am testing with Python 2.7 and 3.4
Frameworks integration
-----------------------
This framework is designed to stand alone, but it is also meant to integrate with
Django. A bridge for this has been started: https://github.com/saeschdivara/ArangoDjango
Features
------------
1. Create and destroy databases
2. Create and delete collections in specific databases
3. Create, update and delete documents in collections
4. Use the following simple queries:
- by-example
- get
- update
- replace
- remove
- any
5. Queries
- Advanced filtering
- Sorting
- Multiple collections
6. ORM
1. Models which have fields:
- Boolean field
- Char field
- UUID field
- Number field
- Date field
- Datetime field
- Foreign key field
7. Transactions to create and update documents
8. Index support
9. User support
Usage
------------
## Basic
### Start with client connection setup
```python
from arangodb.api import Client
client = Client(hostname='localhost')
```
### Create database
```python
from arangodb.api import Database
db1 = Database.create(name='test_db')
```
### Create collection
```python
from arangodb.api import Collection
col1 = Collection.create(name='test_collection_nb_1')
```
### Get all collection documents
```python
from arangodb.api import Collection
col1 = Collection.create(name='test_collection_nb_1')
doc1 = col1.create_document()
doc1.extra_value = 'foo -- 123'
doc1.save()
all_docs = col1.documents()
```
## Simple Queries
### Get all documents
```python
from arangodb.api import Collection
from arangodb.query.simple import SimpleQuery
col1 = Collection.create(name='test_collection_nb_1')
doc1 = col1.create_document()
doc1.extra_value = 'foo -- 123'
doc1.save()
doc2 = col1.create_document()
doc2.extra_value = 'aa'
doc2.save()
docs = SimpleQuery.all(collection=col1)
```
### Get by example
```python
from arangodb.api import Collection
col1 = Collection.create(name='test_collection_nb_1')
doc1 = col1.create_document()
doc1.extra_value = 'foo -- 123'
doc1.save()
doc2 = col1.create_document()
doc2.extra_value = 'aa'
doc2.save()
doc = col1.get_document_by_example(example_data={
'extra_value': doc1.extra_value
})
```
### Get random document
```python
from arangodb.api import Collection
from arangodb.query.simple import SimpleQuery
col1 = Collection.create(name='test_collection_nb_1')
doc1 = col1.create_document()
doc1.extra_value = 'foo -- 123'
doc1.save()
doc2 = col1.create_document()
doc2.extra_value = 'aa'
doc2.save()
doc = SimpleQuery.random(collection=col1)
```
## Advanced Queries
### All documents from a collection
```python
from arangodb.api import Collection
from arangodb.query.advanced import Query
collection_name = 'foo_bar_collection'
col1 = Collection.create(name=collection_name)
q = Query()
q.append_collection(collection_name)
docs = q.execute()
```
### Filtering documents
```python
from arangodb.api import Collection
from arangodb.query.advanced import Query
# (excerpt from the test suite: ``self`` is the test case, whose setUp
# created ``test_1_col`` and its documents)
q = Query()
q.append_collection(self.test_1_col.name)
q.filter(little_number=self.col1_doc3.little_number)
docs = q.execute()
self.assertEqual(len(docs), 1)
doc = docs[0]
self.assertDocumentsEqual(doc, self.col1_doc3)
```
### Filtering documents on multiple collections
```python
from arangodb.api import Collection
from arangodb.query.advanced import Query
q = Query()
q.append_collection(self.test_1_col.name)
q.append_collection(self.test_2_col.name)
dynamic_filter_dict = {}
col_1_filter_name = "%s__%s" % (self.test_1_col.name, "little_number")
col_2_filter_name = "%s__%s" % (self.test_2_col.name, "little_number")
dynamic_filter_dict[col_1_filter_name] = 33
dynamic_filter_dict[col_2_filter_name] = 33
q.filter(bit_operator=Query.OR_BIT_OPERATOR, **dynamic_filter_dict)
docs = q.execute()
self.assertEqual(len(docs), 2)
doc1 = docs[0]
doc2 = docs[1]
self.assertNotEqual(doc1.id, doc2.id)
self.assertEqual(doc1.little_number, 33)
self.assertEqual(doc2.little_number, 33)
```
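The `"%s__%s"` filter keys above follow a `collection__field` naming convention. As a standalone sketch (a hypothetical helper, not part of ArangoPy's API), such a key can be split back into its parts:

```python
def split_filter_key(key, separator="__"):
    # 'collection__field' -> ('collection', 'field'); a bare field name
    # comes back with collection set to None.
    if separator in key:
        collection, field = key.split(separator, 1)
        return collection, field
    return None, key

pair = split_filter_key("test_collection__little_number")
```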
### Excluding documents from result
```python
from arangodb.api import Collection
from arangodb.query.advanced import Query
q = Query()
q.append_collection(self.test_1_col.name)
q.exclude(loved=False)
docs = q.execute()
self.assertEqual(len(docs), 1)
doc1 = docs[0]
self.assertDocumentsEqual(doc1, self.col1_doc3)
```
### Sorting result
```python
from arangodb.api import Collection
from arangodb.query.advanced import Query
q = Query()
q.append_collection(self.test_1_col.name)
q.order_by('little_number')
docs = q.execute()
self.assertEqual(len(docs), 3)
doc1 = docs[0]
doc2 = docs[1]
doc3 = docs[2]
self.assertDocumentsEqual(doc1, self.col1_doc2)
self.assertDocumentsEqual(doc2, self.col1_doc3)
self.assertDocumentsEqual(doc3, self.col1_doc1)
```
## Transactions
### Create document
```python
from arangodb.query.simple import SimpleQuery
from arangodb.query.utils.document import create_document_from_result_dict
from arangodb.transaction.controller import Transaction, TransactionController
trans = Transaction(collections={
'write': [
self.operating_collection,
]
})
# Uses already chosen database as usual
collection = trans.collection(name=self.operating_collection)
collection.create_document(data={
'test': 'foo'
})
ctrl = TransactionController()
transaction_result = ctrl.start(transaction=trans)
transaction_doc = create_document_from_result_dict(transaction_result['result'], self.test_1_col.api)
created_doc = SimpleQuery.get_by_example(self.test_1_col, example_data={
'_id': transaction_doc.id
})
```
### Update document
```python
from arangodb.transaction.controller import Transaction, TransactionController
doc = self.test_1_col.create_document()
doc.foo = 'bar'
doc.save()
trans = Transaction(collections={
'write': [
self.operating_collection,
]
})
new_foo_value = 'extra_bar'
collection = trans.collection(self.operating_collection)
collection.update_document(doc_id=doc.id, data={
'foo': new_foo_value
})
ctrl = TransactionController()
ctrl.start(transaction=trans)
doc.retrieve()
self.assertEqual(doc.foo, new_foo_value)
```
## ORM
### Basic Model
```python
from arangodb.orm.models import CollectionModel
from arangodb.orm.fields import CharField
class TestModel(CollectionModel):
test_field = CharField(required=True)
# Init collection
TestModel.init()
# Init model
model_1 = TestModel()
model_1.test_field = 'ddd'
# Save model
model_1.save()
all_test_models = TestModel.objects.all()
```
### Foreign key field with Model
```python
from arangodb.orm.models import CollectionModel
from arangodb.orm.fields import CharField, ForeignKeyField
class ForeignTestModel(CollectionModel):
test_field = CharField(required=True)
class TestModel(CollectionModel):
other = ForeignKeyField(to=ForeignTestModel, required=True)
# Init collections
ForeignTestModel.init()
TestModel.init()
# Init models
model_1 = ForeignTestModel()
model_1.test_field = 'ddd'
model_2 = TestModel()
model_2.other = model_1
# Save models
model_1.save()
model_2.save()
all_test_models = TestModel.objects.all()
```
|
ArangoPy
|
/ArangoPy-0.5.7.tar.gz/ArangoPy-0.5.7/README.md
|
README.md
|
# Copyright (c) 2010-2014 Benjamin Peterson
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
import operator
import sys
import types
__author__ = "Benjamin Peterson <[email protected]>"
__version__ = "1.6.1"
# Useful for very coarse version differentiation.
PY2 = sys.version_info[0] == 2
PY3 = sys.version_info[0] == 3
if PY3:
string_types = str,
integer_types = int,
class_types = type,
text_type = str
binary_type = bytes
MAXSIZE = sys.maxsize
else:
string_types = basestring,
integer_types = (int, long)
class_types = (type, types.ClassType)
text_type = unicode
binary_type = str
if sys.platform.startswith("java"):
# Jython always uses 32 bits.
MAXSIZE = int((1 << 31) - 1)
else:
# It's possible to have sizeof(long) != sizeof(Py_ssize_t).
class X(object):
def __len__(self):
return 1 << 31
try:
len(X())
except OverflowError:
# 32-bit
MAXSIZE = int((1 << 31) - 1)
else:
# 64-bit
MAXSIZE = int((1 << 63) - 1)
del X
def _add_doc(func, doc):
"""Add documentation to a function."""
func.__doc__ = doc
def _import_module(name):
"""Import module, returning the module after the last dot."""
__import__(name)
return sys.modules[name]
class _LazyDescr(object):
def __init__(self, name):
self.name = name
def __get__(self, obj, tp):
try:
result = self._resolve()
except ImportError:
# See the nice big comment in MovedModule.__getattr__.
raise AttributeError("%s could not be imported " % self.name)
setattr(obj, self.name, result) # Invokes __set__.
# This is a bit ugly, but it avoids running this again.
delattr(obj.__class__, self.name)
return result
class MovedModule(_LazyDescr):
def __init__(self, name, old, new=None):
super(MovedModule, self).__init__(name)
if PY3:
if new is None:
new = name
self.mod = new
else:
self.mod = old
def _resolve(self):
return _import_module(self.mod)
def __getattr__(self, attr):
# It turns out many Python frameworks like to traverse sys.modules and
# try to load various attributes. This causes problems if this is a
# platform-specific module on the wrong platform, like _winreg on
# Unixes. Therefore, we silently pretend unimportable modules do not
# have any attributes. See issues #51, #53, #56, and #63 for the full
# tales of woe.
#
# First, if possible, avoid loading the module just to look at __file__,
# __name__, or __path__.
if (attr in ("__file__", "__name__", "__path__") and
self.mod not in sys.modules):
raise AttributeError(attr)
try:
_module = self._resolve()
except ImportError:
raise AttributeError(attr)
value = getattr(_module, attr)
setattr(self, attr, value)
return value
class _LazyModule(types.ModuleType):
def __init__(self, name):
super(_LazyModule, self).__init__(name)
self.__doc__ = self.__class__.__doc__
def __dir__(self):
attrs = ["__doc__", "__name__"]
attrs += [attr.name for attr in self._moved_attributes]
return attrs
# Subclasses should override this
_moved_attributes = []
class MovedAttribute(_LazyDescr):
def __init__(self, name, old_mod, new_mod, old_attr=None, new_attr=None):
super(MovedAttribute, self).__init__(name)
if PY3:
if new_mod is None:
new_mod = name
self.mod = new_mod
if new_attr is None:
if old_attr is None:
new_attr = name
else:
new_attr = old_attr
self.attr = new_attr
else:
self.mod = old_mod
if old_attr is None:
old_attr = name
self.attr = old_attr
def _resolve(self):
module = _import_module(self.mod)
return getattr(module, self.attr)
class _MovedItems(_LazyModule):
"""Lazy loading of moved objects"""
_moved_attributes = [
MovedAttribute("cStringIO", "cStringIO", "io", "StringIO"),
MovedAttribute("filter", "itertools", "builtins", "ifilter", "filter"),
MovedAttribute("filterfalse", "itertools", "itertools", "ifilterfalse", "filterfalse"),
MovedAttribute("input", "__builtin__", "builtins", "raw_input", "input"),
MovedAttribute("map", "itertools", "builtins", "imap", "map"),
MovedAttribute("range", "__builtin__", "builtins", "xrange", "range"),
MovedAttribute("reload_module", "__builtin__", "imp", "reload"),
MovedAttribute("reduce", "__builtin__", "functools"),
MovedAttribute("StringIO", "StringIO", "io"),
MovedAttribute("UserString", "UserString", "collections"),
MovedAttribute("xrange", "__builtin__", "builtins", "xrange", "range"),
MovedAttribute("zip", "itertools", "builtins", "izip", "zip"),
MovedAttribute("zip_longest", "itertools", "itertools", "izip_longest", "zip_longest"),
MovedModule("builtins", "__builtin__"),
MovedModule("configparser", "ConfigParser"),
MovedModule("copyreg", "copy_reg"),
MovedModule("dbm_gnu", "gdbm", "dbm.gnu"),
MovedModule("http_cookiejar", "cookielib", "http.cookiejar"),
MovedModule("http_cookies", "Cookie", "http.cookies"),
MovedModule("html_entities", "htmlentitydefs", "html.entities"),
MovedModule("html_parser", "HTMLParser", "html.parser"),
MovedModule("http_client", "httplib", "http.client"),
MovedModule("email_mime_multipart", "email.MIMEMultipart", "email.mime.multipart"),
MovedModule("email_mime_text", "email.MIMEText", "email.mime.text"),
MovedModule("email_mime_base", "email.MIMEBase", "email.mime.base"),
MovedModule("BaseHTTPServer", "BaseHTTPServer", "http.server"),
MovedModule("CGIHTTPServer", "CGIHTTPServer", "http.server"),
MovedModule("SimpleHTTPServer", "SimpleHTTPServer", "http.server"),
MovedModule("cPickle", "cPickle", "pickle"),
MovedModule("queue", "Queue"),
MovedModule("reprlib", "repr"),
MovedModule("socketserver", "SocketServer"),
MovedModule("_thread", "thread", "_thread"),
MovedModule("tkinter", "Tkinter"),
MovedModule("tkinter_dialog", "Dialog", "tkinter.dialog"),
MovedModule("tkinter_filedialog", "FileDialog", "tkinter.filedialog"),
MovedModule("tkinter_scrolledtext", "ScrolledText", "tkinter.scrolledtext"),
MovedModule("tkinter_simpledialog", "SimpleDialog", "tkinter.simpledialog"),
MovedModule("tkinter_tix", "Tix", "tkinter.tix"),
MovedModule("tkinter_ttk", "ttk", "tkinter.ttk"),
MovedModule("tkinter_constants", "Tkconstants", "tkinter.constants"),
MovedModule("tkinter_dnd", "Tkdnd", "tkinter.dnd"),
MovedModule("tkinter_colorchooser", "tkColorChooser",
"tkinter.colorchooser"),
MovedModule("tkinter_commondialog", "tkCommonDialog",
"tkinter.commondialog"),
MovedModule("tkinter_tkfiledialog", "tkFileDialog", "tkinter.filedialog"),
MovedModule("tkinter_font", "tkFont", "tkinter.font"),
MovedModule("tkinter_messagebox", "tkMessageBox", "tkinter.messagebox"),
MovedModule("tkinter_tksimpledialog", "tkSimpleDialog",
"tkinter.simpledialog"),
MovedModule("urllib_parse", __name__ + ".moves.urllib_parse", "urllib.parse"),
MovedModule("urllib_error", __name__ + ".moves.urllib_error", "urllib.error"),
MovedModule("urllib", __name__ + ".moves.urllib", __name__ + ".moves.urllib"),
MovedModule("urllib_robotparser", "robotparser", "urllib.robotparser"),
MovedModule("xmlrpc_client", "xmlrpclib", "xmlrpc.client"),
MovedModule("xmlrpc_server", "xmlrpclib", "xmlrpc.server"),
MovedModule("winreg", "_winreg"),
]
for attr in _moved_attributes:
setattr(_MovedItems, attr.name, attr)
if isinstance(attr, MovedModule):
sys.modules[__name__ + ".moves." + attr.name] = attr
del attr
_MovedItems._moved_attributes = _moved_attributes
moves = sys.modules[__name__ + ".moves"] = _MovedItems(__name__ + ".moves")
class Module_six_moves_urllib_parse(_LazyModule):
"""Lazy loading of moved objects in six.moves.urllib_parse"""
_urllib_parse_moved_attributes = [
MovedAttribute("ParseResult", "urlparse", "urllib.parse"),
MovedAttribute("SplitResult", "urlparse", "urllib.parse"),
MovedAttribute("parse_qs", "urlparse", "urllib.parse"),
MovedAttribute("parse_qsl", "urlparse", "urllib.parse"),
MovedAttribute("urldefrag", "urlparse", "urllib.parse"),
MovedAttribute("urljoin", "urlparse", "urllib.parse"),
MovedAttribute("urlparse", "urlparse", "urllib.parse"),
MovedAttribute("urlsplit", "urlparse", "urllib.parse"),
MovedAttribute("urlunparse", "urlparse", "urllib.parse"),
MovedAttribute("urlunsplit", "urlparse", "urllib.parse"),
MovedAttribute("quote", "urllib", "urllib.parse"),
MovedAttribute("quote_plus", "urllib", "urllib.parse"),
MovedAttribute("unquote", "urllib", "urllib.parse"),
MovedAttribute("unquote_plus", "urllib", "urllib.parse"),
MovedAttribute("urlencode", "urllib", "urllib.parse"),
MovedAttribute("splitquery", "urllib", "urllib.parse"),
]
for attr in _urllib_parse_moved_attributes:
setattr(Module_six_moves_urllib_parse, attr.name, attr)
del attr
Module_six_moves_urllib_parse._moved_attributes = _urllib_parse_moved_attributes
sys.modules[__name__ + ".moves.urllib_parse"] = sys.modules[__name__ + ".moves.urllib.parse"] = Module_six_moves_urllib_parse(__name__ + ".moves.urllib_parse")
class Module_six_moves_urllib_error(_LazyModule):
"""Lazy loading of moved objects in six.moves.urllib_error"""
_urllib_error_moved_attributes = [
MovedAttribute("URLError", "urllib2", "urllib.error"),
MovedAttribute("HTTPError", "urllib2", "urllib.error"),
MovedAttribute("ContentTooShortError", "urllib", "urllib.error"),
]
for attr in _urllib_error_moved_attributes:
setattr(Module_six_moves_urllib_error, attr.name, attr)
del attr
Module_six_moves_urllib_error._moved_attributes = _urllib_error_moved_attributes
sys.modules[__name__ + ".moves.urllib_error"] = sys.modules[__name__ + ".moves.urllib.error"] = Module_six_moves_urllib_error(__name__ + ".moves.urllib.error")
class Module_six_moves_urllib_request(_LazyModule):
"""Lazy loading of moved objects in six.moves.urllib_request"""
_urllib_request_moved_attributes = [
MovedAttribute("urlopen", "urllib2", "urllib.request"),
MovedAttribute("install_opener", "urllib2", "urllib.request"),
MovedAttribute("build_opener", "urllib2", "urllib.request"),
MovedAttribute("pathname2url", "urllib", "urllib.request"),
MovedAttribute("url2pathname", "urllib", "urllib.request"),
MovedAttribute("getproxies", "urllib", "urllib.request"),
MovedAttribute("Request", "urllib2", "urllib.request"),
MovedAttribute("OpenerDirector", "urllib2", "urllib.request"),
MovedAttribute("HTTPDefaultErrorHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPRedirectHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPCookieProcessor", "urllib2", "urllib.request"),
MovedAttribute("ProxyHandler", "urllib2", "urllib.request"),
MovedAttribute("BaseHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPPasswordMgr", "urllib2", "urllib.request"),
MovedAttribute("HTTPPasswordMgrWithDefaultRealm", "urllib2", "urllib.request"),
MovedAttribute("AbstractBasicAuthHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPBasicAuthHandler", "urllib2", "urllib.request"),
MovedAttribute("ProxyBasicAuthHandler", "urllib2", "urllib.request"),
MovedAttribute("AbstractDigestAuthHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPDigestAuthHandler", "urllib2", "urllib.request"),
MovedAttribute("ProxyDigestAuthHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPSHandler", "urllib2", "urllib.request"),
MovedAttribute("FileHandler", "urllib2", "urllib.request"),
MovedAttribute("FTPHandler", "urllib2", "urllib.request"),
MovedAttribute("CacheFTPHandler", "urllib2", "urllib.request"),
MovedAttribute("UnknownHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPErrorProcessor", "urllib2", "urllib.request"),
MovedAttribute("urlretrieve", "urllib", "urllib.request"),
MovedAttribute("urlcleanup", "urllib", "urllib.request"),
MovedAttribute("URLopener", "urllib", "urllib.request"),
MovedAttribute("FancyURLopener", "urllib", "urllib.request"),
MovedAttribute("proxy_bypass", "urllib", "urllib.request"),
]
for attr in _urllib_request_moved_attributes:
setattr(Module_six_moves_urllib_request, attr.name, attr)
del attr
Module_six_moves_urllib_request._moved_attributes = _urllib_request_moved_attributes
sys.modules[__name__ + ".moves.urllib_request"] = sys.modules[__name__ + ".moves.urllib.request"] = Module_six_moves_urllib_request(__name__ + ".moves.urllib.request")
class Module_six_moves_urllib_response(_LazyModule):
"""Lazy loading of moved objects in six.moves.urllib_response"""
_urllib_response_moved_attributes = [
MovedAttribute("addbase", "urllib", "urllib.response"),
MovedAttribute("addclosehook", "urllib", "urllib.response"),
MovedAttribute("addinfo", "urllib", "urllib.response"),
MovedAttribute("addinfourl", "urllib", "urllib.response"),
]
for attr in _urllib_response_moved_attributes:
setattr(Module_six_moves_urllib_response, attr.name, attr)
del attr
Module_six_moves_urllib_response._moved_attributes = _urllib_response_moved_attributes
sys.modules[__name__ + ".moves.urllib_response"] = sys.modules[__name__ + ".moves.urllib.response"] = Module_six_moves_urllib_response(__name__ + ".moves.urllib.response")
class Module_six_moves_urllib_robotparser(_LazyModule):
"""Lazy loading of moved objects in six.moves.urllib_robotparser"""
_urllib_robotparser_moved_attributes = [
MovedAttribute("RobotFileParser", "robotparser", "urllib.robotparser"),
]
for attr in _urllib_robotparser_moved_attributes:
setattr(Module_six_moves_urllib_robotparser, attr.name, attr)
del attr
Module_six_moves_urllib_robotparser._moved_attributes = _urllib_robotparser_moved_attributes
sys.modules[__name__ + ".moves.urllib_robotparser"] = sys.modules[__name__ + ".moves.urllib.robotparser"] = Module_six_moves_urllib_robotparser(__name__ + ".moves.urllib.robotparser")
class Module_six_moves_urllib(types.ModuleType):
"""Create a six.moves.urllib namespace that resembles the Python 3 namespace"""
parse = sys.modules[__name__ + ".moves.urllib_parse"]
error = sys.modules[__name__ + ".moves.urllib_error"]
request = sys.modules[__name__ + ".moves.urllib_request"]
response = sys.modules[__name__ + ".moves.urllib_response"]
robotparser = sys.modules[__name__ + ".moves.urllib_robotparser"]
def __dir__(self):
return ['parse', 'error', 'request', 'response', 'robotparser']
sys.modules[__name__ + ".moves.urllib"] = Module_six_moves_urllib(__name__ + ".moves.urllib")
def add_move(move):
"""Add an item to six.moves."""
setattr(_MovedItems, move.name, move)
def remove_move(name):
"""Remove item from six.moves."""
try:
delattr(_MovedItems, name)
except AttributeError:
try:
del moves.__dict__[name]
except KeyError:
raise AttributeError("no such move, %r" % (name,))
if PY3:
_meth_func = "__func__"
_meth_self = "__self__"
_func_closure = "__closure__"
_func_code = "__code__"
_func_defaults = "__defaults__"
_func_globals = "__globals__"
_iterkeys = "keys"
_itervalues = "values"
_iteritems = "items"
_iterlists = "lists"
else:
_meth_func = "im_func"
_meth_self = "im_self"
_func_closure = "func_closure"
_func_code = "func_code"
_func_defaults = "func_defaults"
_func_globals = "func_globals"
_iterkeys = "iterkeys"
_itervalues = "itervalues"
_iteritems = "iteritems"
_iterlists = "iterlists"
try:
advance_iterator = next
except NameError:
def advance_iterator(it):
return it.next()
next = advance_iterator
try:
callable = callable
except NameError:
def callable(obj):
return any("__call__" in klass.__dict__ for klass in type(obj).__mro__)
if PY3:
def get_unbound_function(unbound):
return unbound
create_bound_method = types.MethodType
Iterator = object
else:
def get_unbound_function(unbound):
return unbound.im_func
def create_bound_method(func, obj):
return types.MethodType(func, obj, obj.__class__)
class Iterator(object):
def next(self):
return type(self).__next__(self)
callable = callable
_add_doc(get_unbound_function,
"""Get the function out of a possibly unbound function""")
get_method_function = operator.attrgetter(_meth_func)
get_method_self = operator.attrgetter(_meth_self)
get_function_closure = operator.attrgetter(_func_closure)
get_function_code = operator.attrgetter(_func_code)
get_function_defaults = operator.attrgetter(_func_defaults)
get_function_globals = operator.attrgetter(_func_globals)
def iterkeys(d, **kw):
"""Return an iterator over the keys of a dictionary."""
return iter(getattr(d, _iterkeys)(**kw))
def itervalues(d, **kw):
"""Return an iterator over the values of a dictionary."""
return iter(getattr(d, _itervalues)(**kw))
def iteritems(d, **kw):
"""Return an iterator over the (key, value) pairs of a dictionary."""
return iter(getattr(d, _iteritems)(**kw))
def iterlists(d, **kw):
"""Return an iterator over the (key, [values]) pairs of a dictionary."""
return iter(getattr(d, _iterlists)(**kw))
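# Standalone sketch of the dict-iteration helpers above, pinned to the
# Python 3 method names so the snippet runs by itself:
def _sketch_iteritems(d, **kw):
    return iter(getattr(d, "items")(**kw))

_sketch_pairs = sorted(_sketch_iteritems({"a": 1, "b": 2}))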
if PY3:
def b(s):
return s.encode("latin-1")
def u(s):
return s
unichr = chr
if sys.version_info[1] <= 1:
def int2byte(i):
return bytes((i,))
else:
# This is about 2x faster than the implementation above on 3.2+
int2byte = operator.methodcaller("to_bytes", 1, "big")
byte2int = operator.itemgetter(0)
indexbytes = operator.getitem
iterbytes = iter
import io
StringIO = io.StringIO
BytesIO = io.BytesIO
else:
def b(s):
return s
# Workaround for standalone backslash
def u(s):
return unicode(s.replace(r'\\', r'\\\\'), "unicode_escape")
unichr = unichr
int2byte = chr
def byte2int(bs):
return ord(bs[0])
def indexbytes(buf, i):
return ord(buf[i])
def iterbytes(buf):
return (ord(byte) for byte in buf)
import StringIO
StringIO = BytesIO = StringIO.StringIO
_add_doc(b, """Byte literal""")
_add_doc(u, """Text literal""")
if PY3:
exec_ = getattr(moves.builtins, "exec")
def reraise(tp, value, tb=None):
if value.__traceback__ is not tb:
raise value.with_traceback(tb)
raise value
else:
def exec_(_code_, _globs_=None, _locs_=None):
"""Execute code in a namespace."""
if _globs_ is None:
frame = sys._getframe(1)
_globs_ = frame.f_globals
if _locs_ is None:
_locs_ = frame.f_locals
del frame
elif _locs_ is None:
_locs_ = _globs_
exec("""exec _code_ in _globs_, _locs_""")
exec_("""def reraise(tp, value, tb=None):
raise tp, value, tb
""")
print_ = getattr(moves.builtins, "print", None)
if print_ is None:
def print_(*args, **kwargs):
"""The new-style print function for Python 2.4 and 2.5."""
fp = kwargs.pop("file", sys.stdout)
if fp is None:
return
        def write(data):
            if not isinstance(data, basestring):
                data = str(data)
            # If the file has an encoding, encode unicode with it.
            if (isinstance(fp, file) and
                    isinstance(data, unicode) and
                    fp.encoding is not None):
                errors = getattr(fp, "errors", None)
                if errors is None:
                    errors = "strict"
                data = data.encode(fp.encoding, errors)
            fp.write(data)
        want_unicode = False
        sep = kwargs.pop("sep", None)
        if sep is not None:
            if isinstance(sep, unicode):
                want_unicode = True
            elif not isinstance(sep, basestring):
                raise TypeError("sep must be None or a string")
        end = kwargs.pop("end", None)
        if end is not None:
            if isinstance(end, unicode):
                want_unicode = True
            elif not isinstance(end, basestring):
                raise TypeError("end must be None or a string")
if kwargs:
raise TypeError("invalid keyword arguments to print()")
        if not want_unicode:
            for arg in args:
                if isinstance(arg, unicode):
                    want_unicode = True
                    break
        if want_unicode:
            newline = unicode("\n")
            space = unicode(" ")
        else:
            newline = "\n"
            space = " "
if sep is None:
sep = space
if end is None:
end = newline
for i, arg in enumerate(args):
if i:
write(sep)
write(arg)
write(end)
_add_doc(reraise, """Reraise an exception.""")
def with_metaclass(meta, *bases):
"""Create a base class with a metaclass."""
return meta("NewBase", bases, {})
def add_metaclass(metaclass):
"""Class decorator for creating a class with a metaclass."""
def wrapper(cls):
orig_vars = cls.__dict__.copy()
orig_vars.pop('__dict__', None)
orig_vars.pop('__weakref__', None)
slots = orig_vars.get('__slots__')
if slots is not None:
if isinstance(slots, str):
slots = [slots]
for slots_var in slots:
orig_vars.pop(slots_var)
return metaclass(cls.__name__, cls.__bases__, orig_vars)
return wrapper
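The two metaclass helpers above differ in mechanism (`with_metaclass` builds a temporary `NewBase` base class, `add_metaclass` rebuilds the decorated class outright), but both end with a class whose type is the metaclass. A minimal standalone sketch, with the helper bodies copied from this module; `Meta`, `ViaBase`, and `ViaDecorator` are illustrative names, not part of the library:

```python
def with_metaclass(meta, *bases):
    # Same one-liner as above: the metaclass constructs a throwaway base.
    return meta("NewBase", bases, {})

def add_metaclass(metaclass):
    # Same idea as above, minus the __slots__ handling: rebuild the class
    # through the metaclass, carrying over its original attributes.
    def wrapper(cls):
        orig_vars = cls.__dict__.copy()
        orig_vars.pop('__dict__', None)
        orig_vars.pop('__weakref__', None)
        return metaclass(cls.__name__, cls.__bases__, orig_vars)
    return wrapper

class Meta(type):
    def __new__(mcs, name, bases, attrs):
        attrs['tagged'] = True
        return super(Meta, mcs).__new__(mcs, name, bases, attrs)

class ViaBase(with_metaclass(Meta, object)):
    pass

@add_metaclass(Meta)
class ViaDecorator(object):
    pass

print(ViaBase.tagged, ViaDecorator.tagged)  # True True
```

Note that `with_metaclass` leaves the extra `NewBase` level in the MRO, which `add_metaclass` avoids.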
### Additional customizations for Django ###
if PY3:
_assertRaisesRegex = "assertRaisesRegex"
_assertRegex = "assertRegex"
memoryview = memoryview
else:
_assertRaisesRegex = "assertRaisesRegexp"
_assertRegex = "assertRegexpMatches"
# memoryview and buffer are not stricly equivalent, but should be fine for
# django core usage (mainly BinaryField). However, Jython doesn't support
# buffer (see http://bugs.jython.org/issue1521), so we have to be careful.
if sys.platform.startswith('java'):
memoryview = memoryview
else:
memoryview = buffer
def assertRaisesRegex(self, *args, **kwargs):
return getattr(self, _assertRaisesRegex)(*args, **kwargs)
def assertRegex(self, *args, **kwargs):
return getattr(self, _assertRegex)(*args, **kwargs)
add_move(MovedModule("_dummy_thread", "dummy_thread"))
add_move(MovedModule("_thread", "thread"))
# === end of file: arangodb/six.py (ArangoPy 0.5.7) ===
import slumber
SYSTEM_DATABASE = '_system'
class Client(object):
class_instance = None
def __init__(self, hostname, auth=None, protocol='http', port=8529, database=SYSTEM_DATABASE):
"""
This should be done only once and sets the instance
"""
Client.class_instance = self
self.hostname = hostname
self.auth = auth
self.protocol = protocol
self.port = port
self.database = database
self._create_api()
def set_database(self, name):
"""
"""
self.database = name
self._create_api()
def _create_api(self):
url = '%s://%s:%s/_db/%s/_api/' % (self.protocol, self.hostname, self.port, self.database)
self.api = slumber.API(url, auth=self.auth, append_slash=False)
@classmethod
def instance(cls, hostname=None, auth=None, protocol=None, port=None, database=None):
"""
This method is called from everywhere in the code which accesses the database.
If one of the parameters is set, these variables are overwritten for all others.
"""
if cls.class_instance is None:
if hostname is None and auth is None and protocol is None and port is None and database is None:
cls.class_instance = Client(hostname='localhost')
else:
cls.class_instance = Client(hostname=hostname, auth=auth, protocol=protocol, port=port, database=database)
else:
if hostname is not None:
cls.class_instance.hostname = hostname
            if protocol is not None:
                cls.class_instance.protocol = protocol
            if port is not None:
                cls.class_instance.port = port
if database is not None:
cls.class_instance.database = database
if auth is not None:
cls.class_instance.auth = auth
cls.class_instance._create_api()
return cls.class_instance
def collection(self, name):
"""
Returns a collection with the given name
:param name Collection name
:returns Collection
"""
return Collection(
name=name,
api_resource=self.api.collection(name),
api=self.api,
)
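`Client.instance()` implements a lazily created singleton: the first call constructs the instance and stores it on the class, and later calls reuse it, optionally overriding attributes. The bare pattern, standalone and with no server or slumber involved (`Singleton` is an illustrative name):

```python
class Singleton(object):
    class_instance = None

    def __init__(self, hostname):
        # Mirrors Client.__init__: the constructor registers itself.
        Singleton.class_instance = self
        self.hostname = hostname

    @classmethod
    def instance(cls, hostname=None):
        if cls.class_instance is None:
            cls.class_instance = Singleton(hostname or 'localhost')
        elif hostname is not None:
            # An existing instance is updated in place, as Client does.
            cls.class_instance.hostname = hostname
        return cls.class_instance

a = Singleton.instance()
b = Singleton.instance(hostname='db.example.com')
print(a is b, b.hostname)  # True db.example.com
```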
class Database(object):
@classmethod
def create(cls, name, users=None):
"""
Creates database and sets itself as the active database.
:param name Database name
:returns Database
"""
api = Client.instance().api
database_data = {
'name': name,
'active': True,
}
if isinstance(users, list) or isinstance(users, tuple):
database_data['users'] = users
data = api.database.post(data=database_data)
db = Database(
name=name,
api=api,
kwargs=data
)
Client.instance().set_database(name=name)
return db
@classmethod
def get_all(cls):
"""
Returns an array with all databases
:returns Database list
"""
api = Client.instance().api
data = api.database.get()
database_names = data['result']
databases = []
for name in database_names:
db = Database(name=name, api=api)
databases.append(db)
return databases
@classmethod
def remove(cls, name):
"""
Destroys the database.
"""
client = Client.instance()
new_current_database = None
        if client.database != name:
            new_current_database = client.database
# Deletions are only possible from the system database
client.set_database(name=SYSTEM_DATABASE)
api = client.api
api.database(name).delete()
if new_current_database:
client.set_database(name=new_current_database)
def __init__(self, name, api, **kwargs):
"""
"""
self.name = name
self.api = api
def create_collection(self, name, type=2):
"""
Shortcut to create a collection
:param name Collection name
:param type Collection type (2 = document / 3 = edge)
:returns Collection
"""
return Collection.create(name=name, database=self.name, type=type)
class Collection(object):
@classmethod
def create(cls, name, database=SYSTEM_DATABASE, type=2):
"""
Creates collection
:param name Collection name
:param database Database name in which it is created
:param type Collection type (2 = document / 3 = edge)
:returns Collection
"""
client = Client.instance()
api = client.api
if client.database != database:
database = client.database
collection_data = {
'name': name,
'type': type,
}
data = api.collection.post(data=collection_data)
collection = Collection(
name=name,
database=database,
api_resource=api.collection,
api=api,
kwargs=data
)
return collection
@classmethod
def get_loaded_collection(cls, name):
"""
"""
client = Client.instance()
api = client.api
data = api.collection(name).properties.get()
collection = Collection(
name=name,
database=client.database,
api_resource=api.collection,
api=api,
kwargs=data
)
return collection
@classmethod
def remove(cls, name):
"""
Destroys collection.
:param name Collection name
"""
api = Client.instance().api
api.collection(name).delete()
def __init__(self, name, api, database=SYSTEM_DATABASE, **kwargs):
"""
"""
self.name = name
self.database = database
self.set_data(**kwargs)
self.resource = api.collection
self.api = api
def set_data(self, **kwargs):
"""
"""
if 'status' in kwargs:
self.status = kwargs['status']
else:
self.status = 0
if 'waitForSync' in kwargs:
self.waitForSync = kwargs['waitForSync']
else:
self.waitForSync = False
if 'isVolatile' in kwargs:
self.isVolatile = kwargs['isVolatile']
else:
self.isVolatile = False
if 'doCompact' in kwargs:
self.doCompact = kwargs['doCompact']
else:
self.doCompact = None
if 'journalSize' in kwargs:
self.journalSize = kwargs['journalSize']
else:
self.journalSize = -1
if 'numberOfShards' in kwargs:
self.numberOfShards = kwargs['numberOfShards']
else:
self.numberOfShards = 0
if 'shardKeys' in kwargs:
self.shardKeys = kwargs['shardKeys']
else:
self.shardKeys = None
if 'isSystem' in kwargs:
self.isSystem = kwargs['isSystem']
else:
self.isSystem = False
if 'type' in kwargs:
self.type = kwargs['type']
else:
self.type = 2
if 'id' in kwargs:
self.id = kwargs['id']
else:
self.id = '0'
def save(self):
"""
Updates only waitForSync and journalSize
"""
data = {
'waitForSync': self.waitForSync,
'journalSize': self.journalSize,
}
self.resource(self.name).properties.put(data)
def get(self):
"""
Retrieves all properties again for the collection and
sets the attributes.
"""
data = self.resource(self.name).properties.get()
self.set_data(**data)
return data
def get_figures(self):
"""
Returns figures about the collection.
"""
data = self.resource(self.name).figures.get()
return data['figures']
def create_document(self):
"""
Creates a document in the collection.
:returns Document
"""
return Document.create(collection=self)
def create_edge(self, from_doc, to_doc, edge_data={}):
"""
Creates edge document.
:param from_doc Document from which the edge comes
:param to_doc Document to which the edge goes
:param edge_data Extra data for the edge
:returns Document
"""
return Edge.create(
collection=self,
from_doc=from_doc,
to_doc=to_doc,
edge_data=edge_data
)
def documents(self):
"""
Returns all documents of this collection.
:returns Document list
"""
document_list = []
document_uri_list = self.api.document.get(collection=self.name)['documents']
for document_uri in document_uri_list:
splitted_uri = document_uri.split('/')
document_key = splitted_uri[-1]
document_id = "%s/%s" % (self.name, document_key)
doc = Document(
id=document_id,
key=document_key,
collection=self,
api=self.api
)
document_list.append(doc)
return document_list
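`Collection.set_data` above is a long chain of `if 'x' in kwargs` blocks; the same defaults-merging can be sketched with a single dict, which also documents all defaults in one place. A standalone sketch (`DEFAULTS` and `Box` are illustrative; the default values are copied from the code above):

```python
# Default collection properties, taken from Collection.set_data.
DEFAULTS = {
    'status': 0, 'waitForSync': False, 'isVolatile': False,
    'doCompact': None, 'journalSize': -1, 'numberOfShards': 0,
    'shardKeys': None, 'isSystem': False, 'type': 2, 'id': '0',
}

def set_data(obj, **kwargs):
    # Apply each known attribute from kwargs, falling back to its default.
    for name, default in DEFAULTS.items():
        setattr(obj, name, kwargs.get(name, default))

class Box(object):
    pass

b = Box()
set_data(b, waitForSync=True, type=3)
print(b.waitForSync, b.type, b.journalSize)  # True 3 -1
```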
class Document(object):
@classmethod
def create(cls, collection):
"""
Creates document object without really creating it in the collection.
:param collection Collection instance
:returns Document
"""
api = Client.instance().api
doc = Document(
id='',
key='',
collection=collection.name,
api=api,
)
return doc
def __init__(self, id, key, collection, api, **kwargs):
"""
:param id Document id (collection_name/number)
:param key Document key (number)
:param collection Collection name
:param api Slumber API object
:param rev Document revision, default value is key
"""
self.data = {}
self.is_loaded = False
self.id = id
self.key = key
self.revision = kwargs.pop('rev', key)
self.collection = collection
self.api = api
self.resource = api.document
def retrieve(self):
"""
Retrieves all data for this document and saves it.
"""
data = self.resource(self.id).get()
self.data = data
return data
def delete(self):
"""
Removes the document from the collection
"""
self.resource(self.id).delete()
def save(self):
"""
If its internal state is loaded than it will only updated the
set properties but otherwise it will create a new document.
"""
# TODO: Add option force_insert
        if not self.is_loaded and (self.id is None or self.id == ''):
data = self.resource.post(data=self.data, collection=self.collection)
self.id = data['_id']
self.key = data['_key']
self.revision = data['_rev']
self.is_loaded = True
else:
data = self.resource(self.id).patch(data=self.data)
self.revision = data['_rev']
def get(self, key):
"""
Returns attribute value.
:param key
:returns value
"""
if not self.is_loaded:
self.retrieve()
self.is_loaded = True
if self.has(key=key):
return self.data[key]
else:
return None
def set(self, key, value):
"""
Sets document value
:param key
:param value
"""
self.data[key] = value
def has(self, key):
"""
        Returns True if the document has an attribute named key.
"""
return key in self.data
def get_attributes(self):
"""
Return dict with all attributes
"""
return self.data
def __getattr__(self, item):
"""
"""
val = self.get(key=item)
return val
def __setattr__(self, key, value):
"""
"""
# Set internal variables normally
if key in ['data', 'is_loaded', 'id', 'key', 'revision', 'collection', 'api', 'resource']:
super(Document, self).__setattr__(key, value)
else:
self.set(key=key, value=value)
def __repr__(self):
"""
"""
return self.__str__()
def __str__(self):
return self.__unicode__()
def __unicode__(self):
return "%s" % self.id
class Edge(Document):
@classmethod
def create(cls, collection, from_doc, to_doc, edge_data={}):
"""
"""
api = Client.instance().api
parameters = {
'collection': collection.name,
'from': from_doc.id,
'to': to_doc.id,
}
data = api.edge.post(data=edge_data, **parameters)
doc = Edge(
id=data['_id'],
key=data['_key'],
collection=collection.name,
api=api,
)
return doc
def __init__(self, id, key, collection, api):
"""
"""
super(Edge, self).__init__(id=id, key=key, collection=collection, api=api)
self.resource = api.edge
# === end of file: arangodb/api.py (ArangoPy 0.5.7) ===
import copy
from arangodb.api import Collection
from arangodb.index.api import Index
from arangodb.index.general import BaseIndex
from arangodb.orm.fields import ModelField
from arangodb.query.advanced import Query, Traveser
from arangodb.query.simple import SimpleQuery, SimpleIndexQuery
class LazyQueryset(object):
"""
"""
def __init__(self, manager):
"""
"""
self._manager = manager
self._collection = manager._model_class.collection_instance
# Cache
self._has_cache = False
self._cache = []
def count(self):
"""
"""
if not self._has_cache:
self._generate_cache()
return len(self._cache)
def _generate_cache(self):
"""
"""
self._cache = [] # TODO: Check how to best clear this list
self._has_cache = True
def __getitem__(self, item):
"""
Is used for the index access
"""
if not self._has_cache:
self._generate_cache()
return self._cache[item]
def __len__(self):
"""
"""
return self.count()
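`LazyQueryset` defers all work until `len()` or indexing forces `_generate_cache()`, and the cache is built at most once. The shape of that laziness, standalone (`LazySeq` and the counting producer are illustrative):

```python
class LazySeq(object):
    def __init__(self, producer):
        self._producer = producer
        self._has_cache = False
        self._cache = []

    def _generate_cache(self):
        # Runs once; later accesses reuse the cached list.
        self._cache = list(self._producer())
        self._has_cache = True

    def __len__(self):
        if not self._has_cache:
            self._generate_cache()
        return len(self._cache)

    def __getitem__(self, item):
        if not self._has_cache:
            self._generate_cache()
        return self._cache[item]

calls = []
lazy = LazySeq(lambda: calls.append(1) or range(3))
print(len(calls))          # 0 -- nothing evaluated yet
print(len(lazy), lazy[0])  # 3 0 -- producer ran exactly once
print(len(calls))          # 1
```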
class IndexQueryset(LazyQueryset):
"""
"""
def __init__(self, manager):
"""
"""
super(IndexQueryset, self).__init__(manager=manager)
self._index = None
self._filters = {}
def set_index(self, index):
"""
"""
self._has_cache = False
self._index = index
def filter(self, **kwargs):
"""
"""
self._has_cache = False
for arg_name, arg_value in list(kwargs.items()):
self._filters[arg_name] = arg_value
return self
def _generate_cache(self):
"""
"""
super(IndexQueryset, self)._generate_cache()
index_field = getattr(self._manager._model_class, self._index)
result = None
# All have these attributes
if 'skip' in self._filters:
skip = self._filters['skip']
else:
skip = None
if 'limit' in self._filters:
limit = self._filters['limit']
else:
limit = None
# Hash index
if index_field.index_type_obj.type_name == 'hash':
result = SimpleIndexQuery.get_by_example_hash(
collection=index_field.collection,
index_id=index_field.index_type_obj.id,
example_data=self._filters,
allow_multiple=True,
skip=skip,
limit=limit,
)
# Skiplist index
if index_field.index_type_obj.type_name == 'skiplist':
range_query = 'left' in self._filters
range_query = range_query and 'right' in self._filters
range_query = range_query and 'closed' in self._filters
range_query = range_query and 'attribute' in self._filters
# Range query
if range_query:
result = SimpleIndexQuery.range(
collection=index_field.collection,
index_id=index_field.index_type_obj.id,
attribute=self._filters['attribute'],
left=self._filters['left'],
right=self._filters['right'],
closed=self._filters['closed'],
skip=skip,
limit=limit,
)
# Normal search query
else:
result = SimpleIndexQuery.get_by_example_skiplist(
collection=index_field.collection,
index_id=index_field.index_type_obj.id,
example_data=self._filters,
allow_multiple=True,
skip=skip,
limit=limit,
)
# Fulltext index
if index_field.index_type_obj.type_name == 'fulltext':
result = SimpleIndexQuery.fulltext(
collection=index_field.collection,
index_id=index_field.index_type_obj.id,
attribute=self._filters['attribute'],
example_text=self._filters['example_text'],
skip=skip,
limit=limit,
)
# Cap constraint
if index_field.index_type_obj.type_name == 'cap':
pass
# Geo index
if index_field.index_type_obj.type_name == 'geo':
if 'radius' in self._filters:
result = SimpleIndexQuery.within(
collection=index_field.collection,
index_id=index_field.index_type_obj.id,
latitude=self._filters['latitude'],
longitude=self._filters['longitude'],
radius=self._filters['radius'],
distance=self._filters['distance'],
skip=skip,
limit=limit,
)
else:
result = SimpleIndexQuery.near(
collection=index_field.collection,
index_id=index_field.index_type_obj.id,
latitude=self._filters['latitude'],
longitude=self._filters['longitude'],
distance=self._filters['distance'],
skip=skip,
limit=limit,
)
# Save cache
if isinstance(result, list):
self._cache = result
else:
self._cache.append(result)
class CollectionQueryset(LazyQueryset):
"""
"""
def __init__(self, manager):
"""
"""
super(CollectionQueryset, self).__init__(manager=manager)
self._query = Query()
# Relations
self._is_working_on_relations = False
self._relations_start_model = None
self._relations_end_model = None
self._relations_relation_collection = None
self._related_model_class = None
def get_field_relations(self, relation_collection, related_model_class, start_model=None, end_model=None):
"""
"""
self._is_working_on_relations = True
self._has_cache = False
self._relations_start_model = start_model
self._relations_end_model = end_model
self._relations_relation_collection = relation_collection
self._related_model_class = related_model_class
return self
def get(self, **kwargs):
"""
"""
return self._manager.get(**kwargs)
def all(self):
"""
"""
self._has_cache = False
self._query.set_collection(self._collection.name)
self._query.clear()
return self
def filter(self, bit_operator=Query.NO_BIT_OPERATOR, **kwargs):
"""
"""
self._has_cache = False
kwargs = self._normalize_kwargs(**kwargs)
self._query.filter(bit_operator=bit_operator, **kwargs)
return self
def exclude(self, **kwargs):
"""
"""
self._has_cache = False
kwargs = self._normalize_kwargs(**kwargs)
self._query.exclude(**kwargs)
return self
def limit(self, count, start=-1):
"""
"""
self._has_cache = False
self._query.limit(count, start)
return self
def order_by(self, field, order):
"""
"""
self._has_cache = False
self._query.order_by(field=field, order=order)
return self
def _normalize_kwargs(self, **kwargs):
"""
"""
# You can use this way models
for key, value in list(kwargs.items()):
if isinstance(value, CollectionModel):
kwargs[key] = value.id
return kwargs
def _clone(self):
"""
"""
cloned_queryset = CollectionQueryset(manager=self._manager)
# Query
cloned_queryset._query = copy.deepcopy(self._query)
# Relation
cloned_queryset._is_working_on_relations = self._is_working_on_relations
cloned_queryset._relations_start_model = self._relations_start_model
cloned_queryset._relations_end_model = self._relations_end_model
        cloned_queryset._relations_relation_collection = self._relations_relation_collection
cloned_queryset._related_model_class = self._related_model_class
return cloned_queryset
def _generate_cache(self):
"""
"""
super(CollectionQueryset, self)._generate_cache()
if self._is_working_on_relations:
start_model = self._relations_start_model
end_model = self._relations_end_model
relation_collection = self._relations_relation_collection
if start_model:
found_relations = Traveser.follow(
start_vertex=start_model.document.id,
edge_collection=relation_collection,
direction='outbound'
)
else:
found_relations = Traveser.follow(
start_vertex=end_model.document.id,
edge_collection=relation_collection,
direction='inbound'
)
result = found_relations
result_class = self._related_model_class
else:
result = self._query.execute()
result_class = self._manager._model_class
for doc in result:
model = self._manager._create_model_from_doc(doc=doc, model_class=result_class)
self._cache.append(model)
class CollectionModelManager(object):
queryset = CollectionQueryset
def __init__(self, cls):
"""
"""
self._model_class = cls
def get(self, **kwargs):
"""
"""
collection = self._model_class.collection_instance
kwargs = self._normalize_kwargs(**kwargs)
doc = SimpleQuery.get_by_example(collection=collection, example_data=kwargs)
if doc is None:
return None
model = self._create_model_from_doc(doc=doc)
return model
def get_or_create(self, **kwargs):
"""
Looks up an object with the given kwargs, creating one if necessary.
Returns a tuple of (object, created), where created is a boolean
specifying whether an object was created.
"""
model = self.get(**kwargs)
is_created = False
if model is None:
is_created = True
model = self._model_class()
for key, value in list(kwargs.items()):
setattr(model, key, value)
return model, is_created
def all(self):
"""
"""
queryset = self.queryset(manager=self)
return queryset.all()
def filter(self, **kwargs):
"""
:param kwargs:
:return:
"""
queryset = self.queryset(manager=self)
kwargs = self._normalize_kwargs(**kwargs)
return queryset.all().filter(**kwargs)
def exclude(self, **kwargs):
"""
:param kwargs:
:return:
"""
queryset = self.queryset(manager=self)
kwargs = self._normalize_kwargs(**kwargs)
return queryset.all().exclude(**kwargs)
def limit(self, count, start=-1):
"""
:return:
"""
queryset = self.queryset(manager=self)
return queryset.all().limit(count=count, start=start)
def search_by_index(self, index, **kwargs):
"""
"""
queryset = IndexQueryset(manager=self)
queryset.set_index(index=index)
kwargs = self._normalize_kwargs(**kwargs)
return queryset.filter(**kwargs)
def search_in_range(self, index, attribute, left, right, closed, **kwargs):
"""
"""
queryset = IndexQueryset(manager=self)
queryset.set_index(index=index)
kwargs['attribute'] = attribute
kwargs['left'] = left
kwargs['right'] = right
kwargs['closed'] = closed
return queryset.filter(**kwargs)
def search_fulltext(self, index, attribute, example_text, **kwargs):
"""
"""
queryset = IndexQueryset(manager=self)
queryset.set_index(index=index)
kwargs['attribute'] = attribute
kwargs['example_text'] = example_text
kwargs = self._normalize_kwargs(**kwargs)
return queryset.filter(**kwargs)
def search_near(self, index, latitude, longitude, distance=None, **kwargs):
"""
"""
queryset = IndexQueryset(manager=self)
queryset.set_index(index=index)
kwargs['latitude'] = latitude
kwargs['longitude'] = longitude
kwargs['distance'] = distance
kwargs = self._normalize_kwargs(**kwargs)
return queryset.filter(**kwargs)
def search_within(self, index, latitude, longitude, radius, distance=None, **kwargs):
"""
"""
queryset = IndexQueryset(manager=self)
queryset.set_index(index=index)
kwargs['latitude'] = latitude
kwargs['longitude'] = longitude
kwargs['radius'] = radius
kwargs['distance'] = distance
kwargs = self._normalize_kwargs(**kwargs)
return queryset.filter(**kwargs)
def _normalize_kwargs(self, **kwargs):
"""
"""
# You can use this way models
for key, value in list(kwargs.items()):
if isinstance(value, CollectionModel):
kwargs[key] = value.id
return kwargs
def _create_model_from_doc(self, doc, model_class=None):
"""
"""
doc.retrieve()
attributes = doc.get_attributes()
model = self._create_model_from_dict(attribute_dict=attributes, model_class=model_class)
model.document = doc
return model
def _create_model_from_dict(self, attribute_dict, model_class=None):
"""
"""
if model_class:
model = model_class()
else:
model = self._model_class()
attributes = attribute_dict
for attribute_name in attributes:
if not attribute_name.startswith('_'):
field = model.get_field(name=attribute_name)
attribute_value = attributes[attribute_name]
field._model_instance = model
field.loads(attribute_value)
return model
class CollectionModel(object):
collection_instance = None
collection_name = None
objects = CollectionModelManager
_model_meta_data = None
_instance_meta_data = None
class RequiredFieldNoValue(Exception):
"""
Field needs value
"""
class MetaDataObj(object):
def __init__(self):
self._fields = {}
@classmethod
def get_all_fields(cls, class_obj=None, fields=None):
"""
TODO: This needs to be properly used
"""
def return_fields(obj):
internal_fields = fields
if internal_fields is None:
internal_fields = {}
for attribute in dir(obj):
try:
attr_val = getattr(obj, attribute)
attr_cls = attr_val.__class__
# If it is a model field, call on init
if issubclass(attr_cls, ModelField):
internal_fields[attribute] = attr_val
except:
pass
return internal_fields
if class_obj is None:
class_obj = cls
fields = return_fields(class_obj)
for parent_class in cls.__bases__:
parent_fields = cls.get_all_fields(parent_class, fields)
for field_name, field_value in list(parent_fields.items()):
if not field_name in fields:
fields[field_name] = field_value
return fields
else:
if not isinstance(class_obj, CollectionModel):
return fields
@classmethod
def get_collection_fields_dict(cls):
"""
"""
fields = {}
for attribute in dir(cls):
try:
attr_val = getattr(cls, attribute)
attr_cls = attr_val.__class__
# If it is a model field, call on init
if issubclass(attr_cls, ModelField):
fields[attribute] = attr_val
except:
pass
model_fields = cls._model_meta_data._fields
for field_key in model_fields:
field = model_fields[field_key]
fields[field_key] = field
return fields
@classmethod
def get_collection_fields(cls):
"""
"""
fields = []
for attribute in dir(cls):
try:
attr_val = getattr(cls, attribute)
attr_cls = attr_val.__class__
# If it is a model field, call on init
if issubclass(attr_cls, ModelField):
fields.append(attr_val)
except:
pass
model_fields = cls._model_meta_data._fields
for field_key in model_fields:
field = model_fields[field_key]
fields.append(field)
return fields
@classmethod
def get_model_fields_index(cls):
"""
"""
index_list = {}
for attribute in dir(cls):
try:
attr_val = getattr(cls, attribute)
attr_cls = attr_val.__class__
# If it is a model field, call on init
if issubclass(attr_cls, BaseIndex):
index_list[attribute] = attr_val
except:
pass
return index_list
@classmethod
def init(cls):
"""
"""
# TODO: Deal with super classes
name = cls.get_collection_name()
# Set type
try:
collection_type = getattr(cls, 'collection_type')
except:
collection_type = 2
# TODO: Database is not set for the collection
# Create collection
try:
cls.collection_instance = Collection.create(name=name, type=collection_type)
except:
cls.collection_instance = Collection.get_loaded_collection(name=name)
try:
if not isinstance(cls.objects, CollectionModelManager):
cls.objects = cls.objects(cls)
except:
pass # This is the case if init was called more than once
cls._default_manager = cls.objects
# Create meta data for collection
cls._model_meta_data = cls.MetaDataObj()
if hasattr(cls, 'Meta'):
cls._meta = cls.Meta()
cls._meta.model_name = name
cls._meta.object_name = name
# Giving other classes the chance to extend the meta data on init
if hasattr(cls, 'extend_meta_data'):
cls.extend_meta_data(cls, cls._meta)
# Go through all fields
fields_dict = cls.get_collection_fields_dict()
for attribute_name in fields_dict:
attribute = fields_dict[attribute_name]
# Trigger init event
attribute.on_init(cls, attribute_name)
# Go through all index
model_index_list = cls.get_model_fields_index()
for index_attribute_name in model_index_list:
# Save created index
index_obj = model_index_list[index_attribute_name]
created_index = Index(collection=cls.collection_instance, index_type_obj=index_obj)
# Reset class attribute
setattr(cls, index_attribute_name, created_index)
# Save index
created_index.save()
if not created_index.index_type_obj.is_new:
created_index.overwrite()
@classmethod
def destroy(cls):
"""
"""
# Go through all fields
for attribute in cls.get_collection_fields():
attribute.on_destroy(cls)
name = cls.get_collection_name()
Collection.remove(name=name)
@classmethod
def get_collection_name(cls):
"""
"""
if cls.collection_name is not None and len(cls.collection_name) > 0:
name = cls.collection_name
else:
name = cls.__name__
return name
def __init__(self):
"""
"""
# _instance_meta_data has to be set first otherwise the whole thing doesn't work
self._instance_meta_data = CollectionModel.MetaDataObj()
self.document = self.collection_instance.create_document()
fields = self._get_fields()
for attribute in fields:
attr_val = fields[attribute]
# Only attributes which are fields are being copied
# Copy field with default config
field = copy.deepcopy(attr_val)
# Set model instance on the field
field._model_instance = self
# Trigger on create so the field knows it
field.on_create(model_instance=self)
# Save the new field in the meta data
self._instance_meta_data._fields[attribute] = field
def save(self):
"""
"""
all_fields = self._instance_meta_data._fields
later_saved_fields = []
for field_name in all_fields:
local_field = all_fields[field_name]
# Check if the field is saved in this collection
if local_field.__class__.is_saved_in_model:
is_field_required = local_field.required
if field_name in self._instance_meta_data._fields:
# Get field
field = self._instance_meta_data._fields[field_name]
# Read only fields are ignored
if field.read_only:
continue
                    # If the field needs to do something before the save
field.on_save(self)
# Validate content by field
field.validate()
# Get content
field_value = field.dumps()
else:
if not is_field_required:
field_value = None
else:
raise CollectionModel.RequiredFieldNoValue()
self.document.set(key=field_name, value=field_value)
# This field is saved somewhere else
else:
later_saved_fields.append(local_field)
self.document.save()
for later_field in later_saved_fields:
later_field.on_save(self)
def get_field(self, name):
"""
"""
if name == 'id':
return self.document.id
elif name == 'key':
return self.document.key
return self._instance_meta_data._fields[name]
def _get_fields(self):
"""
"""
fields = {}
for attribute in dir(self):
try:
attr_val = getattr(self, attribute)
attr_cls = attr_val.__class__
if issubclass(attr_cls, ModelField):
fields[attribute] = attr_val
except:
pass
model_fields = self.__class__._model_meta_data._fields
for field_key in model_fields:
field = model_fields[field_key]
fields[field_key] = field
return fields
def serializable_value(self, attr):
"""
"""
return getattr(self, attr)
def __getattribute__(self, item):
"""
"""
if item == '_instance_meta_data':
return object.__getattribute__(self, item)
if item in self._instance_meta_data._fields:
return self._instance_meta_data._fields[item].get()
elif item == 'id':
return self.document.id
elif item == 'key' or item == 'pk':
return self.document.key
else:
return super(CollectionModel, self).__getattribute__(item)
def __setattr__(self, key, value):
"""
"""
if self._instance_meta_data is None:
super(CollectionModel, self).__setattr__(key, value)
return
if key in self._instance_meta_data._fields:
self._instance_meta_data._fields[key].set(value)
else:
super(CollectionModel, self).__setattr__(key, value)
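`CollectionModel.__init__` deep-copies each class-level field object into `_instance_meta_data._fields`, so model instances never share mutable field state with each other or with the class. The essential pattern, standalone (`Field` and `Model` are illustrative names):

```python
import copy

class Field(object):
    def __init__(self):
        self.value = None

class Model(object):
    # Class-level field templates, as in CollectionModel's class attributes.
    fields = {'name': Field()}

    def __init__(self):
        # Each instance gets its own deep copies of the templates.
        self._fields = {k: copy.deepcopy(v) for k, v in self.fields.items()}

a, b = Model(), Model()
a._fields['name'].value = 'x'
print(b._fields['name'].value, Model.fields['name'].value)  # None None
```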
# === end of file: arangodb/orm/models.py (ArangoPy 0.5.7) ===
from datetime import datetime, date
import json
from uuid import uuid4
from arangodb import six
from arangodb.api import Collection, Client
class ModelField(object):
is_saved_in_model = True
class NotNullableFieldException(Exception):
"""
Field cannot be null
"""
class WrongInputTypeException(Exception):
"""
        Value has the wrong input type
"""
def __init__(self, verbose_name='', help_text='', required=True, blank=False, null=True, default=None, **kwargs):
"""
"""
self.name = None
self.verbose_name = verbose_name
self.help_text = help_text
self.required = required
self.blank = blank
self.null = null
self.default = default
self._set_default_attribute(attribute_name='read_only', default_value=False, **kwargs)
self._model_instance = None
def dumps(self):
"""
"""
return None
def loads(self, string_val):
"""
"""
pass
def to_python(self, value):
"""
"""
return value
def on_init(self, model_class, attribute_name):
"""
"""
self.name = attribute_name
def on_destroy(self, model_class):
"""
"""
pass
def on_create(self, model_instance):
"""
"""
        self._model_instance = model_instance
def on_save(self, model_instance):
"""
"""
pass
def validate(self):
"""
"""
pass
def set(self, *args, **kwargs):
"""
"""
pass
def get(self):
"""
"""
return self
def _set_default_attribute(self, attribute_name, default_value, **kwargs):
"""
"""
if attribute_name in kwargs:
setattr(self, attribute_name, kwargs[attribute_name])
else:
setattr(self, attribute_name, default_value)
def __eq__(self, other):
"""
"""
return self.__class__ == other.__class__
def __unicode__(self):
"""
"""
return self.dumps()
def __getattribute__(self, item):
"""
"""
if item == 'attname':
return object.__getattribute__(self, 'name')
elif item == 'rel':
return None
else:
return super(ModelField, self).__getattribute__(item)
class ListField(ModelField):
def __init__(self, **kwargs):
"""
"""
super(ListField, self).__init__(**kwargs)
# If null is allowed, default value is None
if self.null and self.default is None:
self.saved_list = None
else:
# If default value was set
if self.default is not None:
self.saved_list = self.default
else:
self.saved_list = []
def dumps(self):
"""
"""
json_save_list = []
# Check if there was set anything
if isinstance(self.saved_list, (list, tuple)):
for saved_entry in self.saved_list:
is_number = isinstance(saved_entry, int) or isinstance(saved_entry, float)
# Check if entry is base type
if isinstance(saved_entry, six.string_types) or is_number or isinstance(saved_entry, bool):
json_save_list.append(saved_entry)
else:
json_save_list.append(self._get_json_save_value(saved_entry))
return json_save_list
def _get_json_save_value(self, value):
"""
"""
if isinstance(value, list) or isinstance(value, tuple):
return value
if isinstance(value, dict):
return value
else:
return '%s' % value
def loads(self, saved_list):
"""
"""
self.saved_list = saved_list
def validate(self):
"""
"""
if self.saved_list is None and self.null is False:
raise ListField.NotNullableFieldException()
def set(self, *args, **kwargs):
"""
"""
if len(args) == 1:
saved_list = args[0]
if saved_list is None and self.null is False:
raise ListField.NotNullableFieldException()
if isinstance(saved_list, list) or isinstance(saved_list, tuple):
self.saved_list = saved_list
elif isinstance(saved_list, six.string_types):
self.saved_list = json.loads(saved_list)
else:
self.saved_list = args
def get(self):
"""
"""
if self.saved_list is None and self.null is False:
self.saved_list = []
return self.saved_list
def __eq__(self, other):
"""
"""
return super(ListField, self).__eq__(other)
def __unicode__(self):
"""
"""
return self.dumps()
class DictField(ModelField):
def __init__(self, **kwargs):
"""
"""
super(DictField, self).__init__(**kwargs)
# If null is allowed, default value is None
if self.null and self.default is None:
self.saved_dict = None
else:
# If default value was set
if self.default is not None:
self.saved_dict = self.default
else:
self.saved_dict = {}
def dumps(self):
"""
"""
json_save_dict = {}
# Guard against a null dict (ListField.dumps has the same protection via its isinstance check)
if self.saved_dict is None:
return json_save_dict
for key, saved_entry in six.iteritems(self.saved_dict):
is_number = isinstance(saved_entry, int) or isinstance(saved_entry, float)
# Check if entry is base type
if isinstance(saved_entry, six.string_types) or is_number or isinstance(saved_entry, bool):
json_save_dict[key] = saved_entry
else:
json_save_dict[key] = self._get_json_save_value(saved_entry)
return json_save_dict
def _get_json_save_value(self, value):
"""
"""
if isinstance(value, list) or isinstance(value, tuple):
return value
if isinstance(value, dict):
return value
else:
return '%s' % value
def loads(self, saved_dict):
"""
"""
self.saved_dict = saved_dict
def validate(self):
"""
"""
if self.saved_dict is None and self.null is False:
raise DictField.NotNullableFieldException()
def set(self, *args, **kwargs):
"""
"""
if len(args) == 1:
saved_dict = args[0]
if saved_dict is None and self.null is False:
raise DictField.NotNullableFieldException()
if isinstance(saved_dict, dict):
self.saved_dict = saved_dict
elif isinstance(saved_dict, six.string_types):
self.saved_dict = json.loads(saved_dict)
else:
self.saved_dict = args
def get(self):
"""
"""
if self.saved_dict is None and self.null is False:
self.saved_dict = {}
return self.saved_dict
def __eq__(self, other):
"""
"""
return super(DictField, self).__eq__(other)
def __unicode__(self):
"""
"""
return self.dumps()
class BooleanField(ModelField):
def __init__(self, **kwargs):
"""
"""
super(BooleanField, self).__init__(**kwargs)
# If null is allowed, default value is None
if self.null and self.default is None:
self.boolean = None
else:
# If default value was set
if self.default is not None:
self.boolean = self.default
else:
self.boolean = False
def dumps(self):
"""
"""
return self.boolean
def loads(self, boolean_val):
"""
"""
self.boolean = boolean_val
def validate(self):
"""
"""
if self.boolean is None and self.null is False:
raise BooleanField.NotNullableFieldException()
def set(self, *args, **kwargs):
"""
"""
if len(args) == 1:
boolean = args[0]
if isinstance(boolean, bool):
self.boolean = args[0]
else:
raise BooleanField.WrongInputTypeException()
def get(self):
"""
"""
return self.boolean
def __eq__(self, other):
"""
"""
if super(BooleanField, self).__eq__(other):
return self.boolean == other.boolean
else:
return False
class TextField(ModelField):
def __init__(self, **kwargs):
"""
"""
super(TextField, self).__init__(**kwargs)
# If null is allowed, default value is None
if self.null and not self.default:
self.text = None
else:
# If default value was set
if self.default:
self.text = self.default
else:
self.text = ''
def dumps(self):
"""
"""
return self.text
def loads(self, string_val):
"""
"""
self.text = string_val
def validate(self):
"""
"""
if self.text is None and self.null is False:
raise TextField.NotNullableFieldException()
def set(self, *args, **kwargs):
"""
"""
if len(args) == 1:
text = args[0]
if not text and not self.null:
raise TextField.NotNullableFieldException()
if isinstance(text, six.string_types):
self.text = '%s' % args[0]
else:
raise TextField.WrongInputTypeException()
def get(self):
"""
"""
return self.text
def __eq__(self, other):
"""
"""
if super(TextField, self).__eq__(other):
return self.text == other.text
else:
return False
class CharField(TextField):
class TooLongStringException(Exception):
"""
String is too long
"""
def __init__(self, max_length=255, **kwargs):
"""
"""
super(CharField, self).__init__(**kwargs)
self.max_length = max_length
def validate(self):
"""
"""
super(CharField, self).validate()
if self.text:
if len(self.text) > self.max_length:
raise CharField.TooLongStringException()
def __getattribute__(self, item):
"""
"""
if item == 'flatchoices':
return None
else:
return super(CharField, self).__getattribute__(item)
class UuidField(CharField):
def __init__(self, auto_create=True, **kwargs):
"""
"""
super(UuidField, self).__init__(**kwargs)
self.auto_create = auto_create
def on_create(self, model_instance):
"""
"""
if self.auto_create and (self.text is None or self.text == ''):
self.text = str(uuid4())
class ChoiceField(ModelField):
"""
"""
def __init__(self, choices, multiple=False, **kwargs):
"""
"""
super(ChoiceField, self).__init__(**kwargs)
self.choices = choices
# If null is allowed, default value is None
if self.null and not self.default:
self.choice_value = None
else:
# If default value was set
if self.default:
self.choice_value = self.default
else:
self.choice_value = ''
def dumps(self):
"""
"""
return self.choice_value
def loads(self, string_val):
"""
"""
self.choice_value = string_val
def validate(self):
"""
"""
if self.choice_value is None:
if self.null is False:
raise ChoiceField.NotNullableFieldException()
return
has_match = False
for choice_pair in self.choices:
if choice_pair[0] == self.choice_value:
has_match = True
break
if not has_match:
raise ChoiceField.WrongInputTypeException()
def set(self, *args, **kwargs):
"""
"""
if len(args) == 1:
choice_value = args[0]
self.choice_value = choice_value
self.validate()
def get(self):
"""
"""
return self.choice_value
def __eq__(self, other):
"""
"""
if super(ChoiceField, self).__eq__(other):
return self.choice_value == other.choice_value
else:
return False
def __getattribute__(self, item):
"""
"""
if item == 'flatchoices':
return object.__getattribute__(self, 'choices')
else:
return super(ChoiceField, self).__getattribute__(item)
class NumberField(ModelField):
def __init__(self, **kwargs):
"""
"""
super(NumberField, self).__init__(**kwargs)
# If null is allowed, default value is None
if self.null and not self.default:
self.number = None
else:
# If default value was set
if self.default:
self.number = self.default
else:
self.number = 0
def dumps(self):
"""
"""
return self.number
def loads(self, number_val):
"""
"""
self.number = number_val
def validate(self):
"""
"""
if self.number is None and self.null is False:
raise NumberField.NotNullableFieldException()
def set(self, *args, **kwargs):
"""
"""
if len(args) == 1:
number = args[0]
if isinstance(number, int) or isinstance(number, float):
self.number = args[0]
else:
raise NumberField.WrongInputTypeException()
def get(self):
"""
"""
return self.number
def __eq__(self, other):
"""
"""
if super(NumberField, self).__eq__(other):
return self.number == other.number
else:
return False
class DatetimeField(ModelField):
DATE_FORMAT = '%Y-%m-%d %H:%M:%S'
def __init__(self, **kwargs):
"""
"""
super(DatetimeField, self).__init__(**kwargs)
if self.null and not self.default:
self.time = None
else:
if self.default:
self.time = self.default
else:
self.time = datetime.now()
def dumps(self):
"""
"""
if self.null and self.time is None:
return None
else:
if self.time is None:
raise Exception('Datetime cannot be None')
else:
return '%s' % self.time.strftime(DatetimeField.DATE_FORMAT)
def loads(self, date_string):
"""
"""
if date_string is not None:
self.time = datetime.strptime(date_string, DatetimeField.DATE_FORMAT)
else:
self.time = None
def validate(self):
"""
"""
def set(self, *args, **kwargs):
"""
"""
if len(args) == 1:
time = args[0]
if isinstance(time, six.string_types):
self.loads(time)
else:
self.time = time
def get(self):
"""
"""
return self.time
def __eq__(self, other):
"""
"""
if super(DatetimeField, self).__eq__(other):
return self.time == other.time
else:
return False
class DateField(ModelField):
DATE_FORMAT = '%Y-%m-%d'
def __init__(self, **kwargs):
"""
"""
super(DateField, self).__init__(**kwargs)
if self.null and not self.default:
self.date = None
else:
if self.default:
self.date = self.default
else:
self.date = date.today()
def dumps(self):
"""
"""
if self.null and self.date is None:
return None
else:
if self.date is None:
raise Exception('Date cannot be None')
else:
return '%s' % self.date.strftime(DateField.DATE_FORMAT)
def loads(self, date_string):
"""
"""
if date_string is not None:
self.date = datetime.strptime(date_string, DateField.DATE_FORMAT).date()
else:
self.date = None
def validate(self):
"""
"""
def set(self, *args, **kwargs):
"""
"""
if len(args) == 1:
date_value = args[0]
if isinstance(date_value, six.string_types):
self.loads(date_value)
else:
self.date = date_value
def get(self):
"""
"""
return self.date
def __eq__(self, other):
"""
"""
if super(DateField, self).__eq__(other):
return self.date == other.date
else:
return False
class ForeignKeyField(ModelField):
def __init__(self, to, related_name=None, **kwargs):
"""
"""
super(ForeignKeyField, self).__init__(**kwargs)
# If null is allowed, default value is None
if self.null and not self.default:
self.relation_model = None
else:
# If default value was set
if self.default:
self.relation_model = self.default
else:
self.relation_model = ''
self.relation_class = to
self.related_name = related_name
if 'other_side' in kwargs:
self.other_side = True
self.other_attribute_name = kwargs['other_side']
else:
self.other_side = False
def on_init(self, model_class, attribute_name):
"""
"""
super(ForeignKeyField, self).on_init(model_class=model_class, attribute_name=attribute_name)
# Set this only if there is a related name
if self.related_name:
fields = self.relation_class._model_meta_data._fields
# Create field on other side
otherside_field = ForeignKeyField(to=model_class, other_side=attribute_name, read_only=True)
fields[self.related_name] = otherside_field
def dumps(self):
"""
"""
if self.relation_model:
return '%s' % self.relation_model.document
else:
return None
def loads(self, model_id):
"""
"""
if isinstance(model_id, six.string_types):
model = self.relation_class.objects.get(_id=model_id)
elif isinstance(model_id, self.relation_class):
model = model_id
else:
raise ForeignKeyField.WrongInputTypeException()
self.relation_model = model
def validate(self):
"""
"""
if self.relation_model is None and self.null is False:
raise ForeignKeyField.NotNullableFieldException()
if self.relation_model:
pass
def set(self, *args, **kwargs):
"""
"""
if len(args) == 1:
relation_model = args[0]
self.relation_model = relation_model
def get(self):
"""
"""
if self.other_side:
kwargs = { self.other_attribute_name: self.model_instance.document.id }
return self.relation_class.objects.filter(**kwargs)
else:
return self.relation_model
def __eq__(self, other):
"""
"""
if super(ForeignKeyField, self).__eq__(other):
return self.relation_model == other.relation_model
else:
return False
class ManyToManyField(ModelField):
is_saved_in_model = False
def __init__(self, to, related_name, **kwargs):
"""
"""
super(ManyToManyField, self).__init__(**kwargs)
self.relation_class = to
self.related_name = related_name
self.relation_collection = None
# Data
self.unsaved_data = False
self.related_queryset = None
def on_init(self, model_class, attribute_name):
"""
"""
super(ManyToManyField, self).on_init(model_class=model_class, attribute_name=attribute_name)
if self.related_name is not None:
relation_name = self._get_relation_collection_name(model_class)
try:
self.relation_collection = Collection.create(name=relation_name, database=Client.instance().database, type=3)
except:
self.relation_collection = Collection.get_loaded_collection(name=relation_name)
fields = self.relation_class._model_meta_data._fields
otherside_field = ManyToManyField(to=model_class, related_name=None)
fields[self.related_name] = otherside_field
# Configure other side field
otherside_field.related_queryset = self.relation_class.objects.all()
otherside_field.relation_collection = self.relation_collection
self.related_queryset = self.relation_class.objects.all()
def on_destroy(self, model_class):
"""
"""
if self.related_name is not None:
relation_name = self._get_relation_collection_name(model_class)
Collection.remove(name=relation_name)
def on_save(self, model_instance):
"""
"""
if self.unsaved_data:
new_models = self.related_queryset._cache
# We don't want to have problems
if not new_models:
return
for model in new_models:
related_model_document = model.document
model_document = model_instance.document
# Create relation
self.relation_collection.create_edge(from_doc=model_document, to_doc=related_model_document)
def _get_relation_collection_name(self, model_class):
"""
"""
return 'relation_%s_%s' % ( model_class.get_collection_name(), self.relation_class.get_collection_name() )
def set(self, *args, **kwargs):
"""
"""
if len(args) == 1:
related_models = args[0]
self.related_queryset._cache = related_models
self.unsaved_data = True
def get(self):
"""
"""
if self.unsaved_data is True:
return self.related_queryset
else:
if self.related_name is None:
return self.related_queryset.get_field_relations(
end_model=self.model_instance,
relation_collection=self.relation_collection.name,
related_model_class=self.relation_class
)
else:
return self.related_queryset.get_field_relations(
start_model=self.model_instance,
relation_collection=self.relation_collection.name,
related_model_class=self.relation_class
)
def loads(self, model_list):
"""
"""
self.set(model_list)
def __eq__(self, other):
"""
"""
if super(ManyToManyField, self).__eq__(other):
return self.related_queryset == other.related_queryset
else:
return False
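Taken together, the field classes above all follow the same life cycle: set/get, validate, then dumps/loads for serialization. A trimmed-down, self-contained sketch of that contract in the CharField style (the Sketch* names are illustrative, not ArangoPy classes):

```python
class TooLongStringException(Exception):
    pass


class SketchCharField(object):
    def __init__(self, max_length=255, null=True):
        self.max_length = max_length
        self.null = null
        self.text = None

    def set(self, value):
        self.text = value

    def get(self):
        return self.text

    def validate(self):
        # Mirrors TextField.validate plus CharField's max_length check above
        if self.text is None and not self.null:
            raise ValueError('Field cannot be null')
        if self.text is not None and len(self.text) > self.max_length:
            raise TooLongStringException()

    def dumps(self):
        return self.text


field = SketchCharField(max_length=5)
field.set('abc')
field.validate()            # passes: 3 <= 5
serialized = field.dumps()  # 'abc'
```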
|
ArangoPy
|
/ArangoPy-0.5.7.tar.gz/ArangoPy-0.5.7/arangodb/orm/fields.py
|
fields.py
|
from arangodb.api import Client
class Endpoint(object):
"""
Class to manage endpoints on which the server is listening
"""
@classmethod
def all(cls):
"""
Returns a list of all configured endpoints the server is listening on. For each endpoint,
the list of allowed databases is returned too if set.
The result is a JSON hash which has the endpoints as keys, and the list of
mapped database names as values for each endpoint.
If a list of mapped databases is empty, it means that all databases can be accessed via the endpoint.
If a list of mapped databases contains more than one database name, this means that any of the
databases might be accessed via the endpoint, and the first database in the list will be treated
as the default database for the endpoint. The default database will be used when an incoming request
does not specify a database name in the request explicitly.
*Note*: retrieving the list of all endpoints is allowed in the system database only.
Calling this action in any other database will make the server return an error.
"""
api = Client.instance().api
endpoint_list = api.endpoint.get()
return endpoint_list
@classmethod
def create(cls, url, databases):
"""
If databases is an empty list, all databases present in the server will become accessible via the endpoint,
with the _system database being the default database.
If databases is non-empty, only the specified databases will become available via the endpoint.
The first database name in the databases list will also become the default database for the endpoint.
The default database will always be used if a request coming in on the endpoint does not specify
the database name explicitly.
*Note*: adding or reconfiguring endpoints is allowed in the system database only.
Calling this action in any other database will make the server return an error.
Adding SSL endpoints at runtime is only supported if the server was started with SSL
properly configured (e.g. --server.keyfile must have been set).
:param url the endpoint specification, e.g. tcp://127.0.0.1:8530
:param databases a list of database names the endpoint is responsible for.
"""
api = Client.instance().api
result = api.endpoint.post(data={
'endpoint': url,
'databases': databases,
})
return result
@classmethod
def destroy(cls, url):
"""
This operation deletes an existing endpoint from the list of all endpoints,
and makes the server stop listening on the endpoint.
*Note*: deleting and disconnecting an endpoint is allowed in the system database only.
Calling this action in any other database will make the server return an error.
Furthermore, the last remaining endpoint cannot be deleted as this would make the server kaput.
:param url The endpoint to delete, e.g. tcp://127.0.0.1:8529.
"""
api = Client.instance().api
api.endpoint(url).delete()
|
ArangoPy
|
/ArangoPy-0.5.7.tar.gz/ArangoPy-0.5.7/arangodb/server/endpoint.py
|
endpoint.py
|
from arangodb.api import Client
from arangodb.query.utils.document import create_document_from_result_dict
class SimpleQuery(object):
"""
Simple queries collection
"""
@classmethod
def all(cls, collection, skip=None, limit=None):
"""
Returns all documents of the collection
:param collection Collection instance
:param skip The number of documents to skip in the query
:param limit The maximal amount of documents to return. The skip is applied before the limit restriction.
:returns Document list
"""
kwargs = {
'skip': skip,
'limit': limit,
}
return cls._construct_query(name='all', collection=collection, multiple=True,
**kwargs)
@classmethod
def get_by_example(cls, collection, example_data, allow_multiple=False, skip=None, limit=None):
"""
This will find all documents matching a given example.
:param collection Collection instance
:param example_data The example document
:param allow_multiple If the query can return multiple documents
:param skip The number of documents to skip in the query
:param limit The maximal amount of documents to return. The skip is applied before the limit restriction.
:returns Single document / Document list
"""
kwargs = {
'skip': skip,
'limit': limit,
}
return cls._construct_query(name='by-example',
collection=collection, example=example_data, multiple=allow_multiple,
**kwargs)
@classmethod
def update_by_example(cls, collection, example_data, new_value, keep_null=False, wait_for_sync=None, limit=None):
"""
This will find all documents in the collection that match the specified example object,
and partially update the document body with the new value specified. Note that document meta-attributes
such as _id, _key, _from, _to etc. cannot be replaced.
Note: the limit attribute is not supported on sharded collections. Using it will result in an error.
Returns result dict of the request.
:param collection Collection instance
:param example_data An example document that all collection documents are compared against.
:param new_value A document containing all the attributes to update in the found documents.
:param keep_null This parameter can be used to modify the behavior when handling null values.
Normally, null values are stored in the database. By setting the keepNull parameter to false,
this behavior can be changed so that all attributes in data with null values will be removed
from the updated document.
:param wait_for_sync if set to true, then all removal operations will instantly be synchronised to disk.
If this is not specified, then the collection's default sync behavior will be applied.
:param limit an optional value that determines how many documents to update at most. If limit is
specified but is less than the number of documents in the collection, it is undefined
which of the documents will be updated.
:returns dict
"""
kwargs = {
'newValue': new_value,
'options': {
'keepNull': keep_null,
'waitForSync': wait_for_sync,
'limit': limit,
}
}
return cls._construct_query(name='update-by-example',
collection=collection, example=example_data, result=False,
**kwargs)
@classmethod
def replace_by_example(cls, collection, example_data, new_value, wait_for_sync=None, limit=None):
"""
This will find all documents in the collection that match the specified example object,
and replace the entire document body with the new value specified. Note that document
meta-attributes such as _id, _key, _from, _to etc. cannot be replaced.
Note: the limit attribute is not supported on sharded collections. Using it will result in an error.
The options attributes waitForSync and limit can still be given without encapsulation
into a JSON object, but this may be deprecated in future versions of ArangoDB.
Returns result dict of the request.
:param collection Collection instance
:param example_data An example document that all collection documents are compared against.
:param new_value The replacement document that will get inserted in place of the "old" documents.
:param wait_for_sync if set to true, then all removal operations will instantly be synchronised to disk.
If this is not specified, then the collection's default sync behavior will be applied.
:param limit an optional value that determines how many documents to replace at most. If limit is
specified but is less than the number of documents in the collection, it is undefined which of the
documents will be replaced.
:returns dict
"""
kwargs = {
'newValue': new_value,
'options': {
'waitForSync': wait_for_sync,
'limit': limit,
}
}
return cls._construct_query(name='replace-by-example',
collection=collection, example=example_data, result=False,
**kwargs)
@classmethod
def remove_by_example(cls, collection, example_data, wait_for_sync=None, limit=None):
"""
This will find all documents in the collection that match the specified example object and remove them.
Note: the limit attribute is not supported on sharded collections. Using it will result in an error.
The options attributes waitForSync and limit can still be given without encapsulation
into a JSON object, but this may be deprecated in future versions of ArangoDB.
Returns result dict of the request.
:param collection Collection instance
:param example_data An example document that all collection documents are compared against.
:param wait_for_sync if set to true, then all removal operations will instantly be synchronised to disk.
If this is not specified, then the collection's default sync behavior will be applied.
:param limit an optional value that determines how many documents to replace at most. If limit is
specified but is less than the number of documents in the collection, it is undefined which of the
documents will be replaced.
"""
kwargs = {
'options': {
'waitForSync': wait_for_sync,
'limit': limit,
}
}
return cls._construct_query(name='remove-by-example',
collection=collection, example=example_data, result=False,
**kwargs)
@classmethod
def random(cls, collection):
"""
Returns a random document from a collection.
:param collection Collection instance
:returns document
"""
return cls._construct_query(name='any', collection=collection)
@classmethod
def _construct_query(cls, name, collection, multiple=False, result=True, **kwargs):
"""
"""
query = {
'collection': collection.name,
}
for arg_name in kwargs:
query[arg_name] = kwargs[arg_name]
client = Client.instance()
client.set_database(collection.database)
api = client.api
result_dict = api.simple(name).put(data=query)
if not result:
return result_dict
if result_dict['count'] == 0:
return None
if multiple is True:
docs = []
for result_dict_obj in result_dict['result']:
doc = create_document_from_result_dict(result_dict_obj, api)
docs.append(doc)
return docs
else:
return create_document_from_result_dict(result_dict['result'][0], api)
class SimpleIndexQuery(SimpleQuery):
"""
"""
@classmethod
def get_by_example_hash(cls, collection, index_id, example_data, allow_multiple=False, skip=None, limit=None):
"""
This will find all documents matching a given example, using the specified hash index.
:param collection Collection instance
:param index_id ID of the index which should be used for the query
:param example_data The example document
:param allow_multiple If the query can return multiple documents
:param skip The number of documents to skip in the query
:param limit The maximal amount of documents to return. The skip is applied before the limit restriction.
:returns Single document / Document list
"""
kwargs = {
'index': index_id,
'skip': skip,
'limit': limit,
}
return cls._construct_query(name='by-example-hash',
collection=collection, example=example_data, multiple=allow_multiple,
**kwargs)
@classmethod
def get_by_example_skiplist(cls, collection, index_id, example_data, allow_multiple=True, skip=None, limit=None):
"""
This will find all documents matching a given example, using the specified skiplist index.
:param collection Collection instance
:param index_id ID of the index which should be used for the query
:param example_data The example document
:param allow_multiple If the query can return multiple documents
:param skip The number of documents to skip in the query
:param limit The maximal amount of documents to return. The skip is applied before the limit restriction.
:returns Single document / Document list
"""
kwargs = {
'index': index_id,
'skip': skip,
'limit': limit,
}
return cls._construct_query(name='by-example-skiplist',
collection=collection, example=example_data, multiple=allow_multiple,
**kwargs)
@classmethod
def range(cls, collection, attribute, left, right, closed, index_id, skip=None, limit=None):
"""
This will find all documents within a given range. In order to execute a range query, a
skip-list index on the queried attribute must be present.
:param collection Collection instance
:param attribute The attribute path to check
:param left The lower bound
:param right The upper bound
:param closed If true, use interval including left and right, otherwise exclude right, but include left
:param index_id ID of the index which should be used for the query
:param skip The number of documents to skip in the query
:param limit The maximal amount of documents to return. The skip is applied before the limit restriction.
:returns Document list
"""
kwargs = {
'index': index_id,
'attribute': attribute,
'left': left,
'right': right,
'closed': closed,
'skip': skip,
'limit': limit,
}
return cls._construct_query(name='range',
collection=collection, multiple=True,
**kwargs)
@classmethod
def fulltext(cls, collection, attribute, example_text, index_id, skip=None, limit=None):
"""
This will find all documents from the collection that match the fulltext query specified in query.
In order to use the fulltext operator, a fulltext index must be defined for the collection
and the specified attribute.
:param collection Collection instance
:param attribute The attribute path to check
:param example_text Text which should be used to search
:param index_id ID of the index which should be used for the query
:param skip The number of documents to skip in the query
:param limit The maximal amount of documents to return. The skip is applied before the limit restriction.
:returns Document list
"""
kwargs = {
'index': index_id,
'attribute': attribute,
'query': example_text,
'skip': skip,
'limit': limit,
}
return cls._construct_query(name='fulltext',
collection=collection, multiple=True,
**kwargs)
@classmethod
def near(cls, collection, latitude, longitude, index_id, distance=None, skip=None, limit=None):
"""
The default will find at most 100 documents near the given coordinate.
The returned list is sorted according to the distance, with the nearest document being first in the list.
If there are near documents of equal distance, documents are chosen randomly from this set until
the limit is reached.
In order to use the near operator, a geo index must be defined for the collection.
This index also defines which attribute holds the coordinates for the document.
If you have more than one geo-spatial index, you can use the geo field to select a particular index.
:param collection Collection instance
:param latitude The latitude of the coordinate
:param longitude The longitude of the coordinate
:param index_id ID of the index which should be used for the query
:param distance If given, the attribute key used to return the distance to the given coordinate.
If specified, distances are returned in meters.
:param skip The number of documents to skip in the query
:param limit The maximal amount of documents to return. The skip is applied before the limit restriction.
:returns Document list
"""
kwargs = {
'geo': index_id,
'latitude': latitude,
'longitude': longitude,
'distance': distance,
'skip': skip,
'limit': limit,
}
return cls._construct_query(name='near',
collection=collection, multiple=True,
**kwargs)
@classmethod
def within(cls, collection, latitude, longitude, radius, index_id, distance=None, skip=None, limit=None):
"""
This will find all documents within a given radius around the coordinate (latitude, longitude).
The returned list is sorted by distance.
In order to use the within operator, a geo index must be defined for the collection.
This index also defines which attribute holds the coordinates for the document.
If you have more than one geo-spatial index, you can use the geo field to select a particular index.
:param collection Collection instance
:param latitude The latitude of the coordinate
:param longitude The longitude of the coordinate
:param radius The maximal radius (in meters)
:param index_id ID of the index which should be used for the query
:param distance If given, the attribute key used to return the distance to the given coordinate.
If specified, distances are returned in meters.
:param skip The number of documents to skip in the query
:param limit The maximal amount of documents to return. The skip is applied before the limit restriction.
:returns Document list
"""
kwargs = {
'geo': index_id,
'latitude': latitude,
'longitude': longitude,
'radius': radius,
'distance': distance,
'skip': skip,
'limit': limit,
}
return cls._construct_query(name='within',
collection=collection, multiple=True,
**kwargs)
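All of the class methods above funnel into _construct_query(), which merges the collection name with the per-query keyword arguments into one request body for a PUT to the simple-query endpoint. The same merge in isolation (no server call; the 'users' collection and example data are made up):

```python
def build_simple_query_body(collection_name, **kwargs):
    # Same dict-building loop as SimpleQuery._construct_query()
    query = {
        'collection': collection_name,
    }
    for arg_name in kwargs:
        query[arg_name] = kwargs[arg_name]
    return query


# Body for a by-example style request
body = build_simple_query_body('users', example={'name': 'test'}, skip=None, limit=10)
```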
|
ArangoPy
|
/ArangoPy-0.5.7.tar.gz/ArangoPy-0.5.7/arangodb/query/simple.py
|
simple.py
|
import logging
from time import time
from arangodb import six
from arangodb.api import Client, Document
from arangodb.query.utils.document import create_document_from_result_dict
logger = logging.getLogger(name='ArangoPy')
class QueryFilterStatement(object):
EQUAL_OPERATOR = '=='
NOT_EQUAL_OPERATOR = '!='
QUERY_CONDITION_EXTENSIONS = {
'exact': EQUAL_OPERATOR,
'gt': '>',
'gte': '>=',
'lt': '<',
'lte': '<=',
}
QUERY_FUNCTION_EXTENSIONS = ( 'contains', 'icontains', )
def __init__(self, collection, attribute, value, **kwargs):
"""
"""
self.collection = collection
self.attribute = attribute
if 'operator' in kwargs:
self.operator = kwargs['operator']
else:
self.operator = None
if 'function' in kwargs:
self.function = kwargs['function']
else:
self.function = None
self.value = value
def get_filter_function_string(self, collection_name):
"""
"""
if self.function == 'contains':
filter_string = ' CONTAINS (%s.%s, %s) ' % (
collection_name,
self.attribute,
self.get_value_string(),
)
elif self.function == 'icontains':
filter_string = ' LIKE (%s.%s, "%s%s%s", true) ' % (
collection_name,
self.attribute,
'%',
self.value,
'%',
)
else:
filter_string = ''
return filter_string
def get_value_string(self):
"""
"""
if isinstance(self.value, six.string_types):
return '"%s"' % self.value
else:
return '%s' % self.value
class QueryFilterContainer(object):
def __init__(self, bit_operator):
"""
"""
self.filters = []
self.bit_operator = bit_operator
class Query(object):
SORTING_ASC = 'ASC'
SORTING_DESC = 'DESC'
NO_BIT_OPERATOR = None
OR_BIT_OPERATOR = '||'
AND_BIT_OPERATOR = '&&'
@classmethod
def execute_raw(cls, query_string):
"""
"""
logger.debug(query_string)
post_data = {
'query': query_string
}
api = Client.instance().api
result = []
try:
start_time = time()
post_result = api.cursor.post(data=post_data)
end_time = time()
calculated_time = (end_time - start_time) * 1000
time_result = '%s ms' % calculated_time
logger_output = 'Query took %s' % time_result
logger.debug(logger_output)
result_dict_list = post_result['result']
# Create documents
for result_list in result_dict_list:
# Look if it is a list which needs to be iterated
if isinstance(result_list, list):
for result_dict in result_list:
doc = create_document_from_result_dict(result_dict, api)
result.append(doc)
# Otherwise just create a result document
else:
result_dict = result_list
doc = create_document_from_result_dict(result_dict, api)
result.append(doc)
except Exception as err:
raise err
return result
def __init__(self):
"""
"""
self.collections = []
self.filters = []
self.start = -1
self.count = -1
self.sorting = []
def set_collection(self, collection_name):
"""
"""
self.collections = [ collection_name ]
def clear(self):
"""
"""
# TODO: Check how to clear these lists better
self.filters = []
self.sorting = []
def append_collection(self, collection_name):
"""
"""
self.collections.append(collection_name)
return self
def filter(self, bit_operator=NO_BIT_OPERATOR, **kwargs):
"""
"""
if bit_operator == Query.NO_BIT_OPERATOR:
filters = self.filters
else:
filter_container = QueryFilterContainer(bit_operator=bit_operator)
filters = filter_container.filters
self.filters.append(filter_container)
for key, value in six.iteritems(kwargs):
filters.append(
self._get_filter_statement(key, value, QueryFilterStatement.EQUAL_OPERATOR)
)
return self
def exclude(self, **kwargs):
"""
"""
for key, value in six.iteritems(kwargs):
self.filters.append(
self._get_filter_statement(key, value, QueryFilterStatement.NOT_EQUAL_OPERATOR)
)
return self
def _get_filter_statement(self, filter_string, filter_value, default_operator):
"""
"""
splitted_filter = filter_string.split('__')
length_splitted_filter = len(splitted_filter)
if length_splitted_filter == 1:
return QueryFilterStatement(
collection=self.collections[-1],
attribute=filter_string,
operator=default_operator,
value=filter_value,
)
else:
if length_splitted_filter == 2:
second_filter_value = splitted_filter[1]
# Is this a normal condition
if second_filter_value in QueryFilterStatement.QUERY_CONDITION_EXTENSIONS:
operator = QueryFilterStatement.QUERY_CONDITION_EXTENSIONS[ second_filter_value ]
return QueryFilterStatement(
collection=self.collections[-1],
attribute=splitted_filter[0],
operator=operator,
value=filter_value,
)
# Is this a function condition
elif second_filter_value in QueryFilterStatement.QUERY_FUNCTION_EXTENSIONS:
return QueryFilterStatement(
collection=self.collections[-1],
attribute=splitted_filter[0],
function=second_filter_value,
value=filter_value,
)
# Just collection and attribute
else:
return QueryFilterStatement(
collection=splitted_filter[0],
attribute=second_filter_value,
operator=default_operator,
value=filter_value,
)
else:
if splitted_filter[2] in QueryFilterStatement.QUERY_CONDITION_EXTENSIONS:
operator = QueryFilterStatement.QUERY_CONDITION_EXTENSIONS[ splitted_filter[2] ]
return QueryFilterStatement(
collection=splitted_filter[0],
attribute=splitted_filter[1],
operator=operator,
value=filter_value,
)
# Is this a function condition
elif splitted_filter[2] in QueryFilterStatement.QUERY_FUNCTION_EXTENSIONS:
return QueryFilterStatement(
collection=self.collections[-1],
attribute=splitted_filter[0],
function=splitted_filter[2],
value=filter_value,
)
def limit(self, count, start=-1):
"""
"""
self.start = start
self.count = count
def order_by(self, field, order=None, collection=None):
"""
"""
if order is None:
order = self.SORTING_ASC
self.sorting.append({
'field': field,
'order': order,
'collection': collection,
})
def execute(self):
"""
"""
query_data = ''
query_data += self._get_collection_iteration_statements()
query_data += self._get_sorting_statement()
if self.count != -1:
if self.start != -1:
query_data += ' LIMIT %s, %s' % (self.start, self.count)
else:
query_data += ' LIMIT %s' % self.count
# Set return statement
query_data += self._get_return_statement()
# Execute query
result = Query.execute_raw(query_string=query_data)
return result
def _get_collection_ident(self, collection_name):
"""
"""
return collection_name + '_123'
def _get_collection_iteration_statements(self):
"""
"""
query_data = ''
collection_filters = {}
for filter_statement in self.filters:
# We only care about the containers
if isinstance(filter_statement, QueryFilterContainer):
for filter in filter_statement.filters:
# Check if the collection is already in the filters for collections
if filter.collection in collection_filters:
container = collection_filters[filter.collection]
container.filters.append(filter)
else:
container = QueryFilterContainer(bit_operator=Query.AND_BIT_OPERATOR)
container.filters.append(filter)
collection_filters[filter.collection] = container
for collection in self.collections:
query_data += ' FOR %s in %s' % (
self._get_collection_ident(collection),
collection
)
# If there were OR conditions this is the place where they are set
if collection in collection_filters:
container = collection_filters[collection]
query_data += self._get_filter_string(container)
for filter_statement in self.filters:
if not isinstance(filter_statement, QueryFilterContainer):
query_data += self._get_filter_string(filter_statement)
return query_data
def _get_filter_string(self, filter_statement):
"""
"""
if isinstance(filter_statement, QueryFilterContainer):
filter_string = ' FILTER '
is_first = True
for filter in filter_statement.filters:
if is_first:
is_first = False
filter_string += self._get_filter_condition_string(filter)
else:
filter_string += ' %s %s ' % (
filter_statement.bit_operator,
self._get_filter_condition_string(filter)
)
else:
filter_string = ' FILTER %s' % self._get_filter_condition_string(filter_statement)
return filter_string
def _get_filter_condition_string(self, filter_statement):
"""
"""
filter_string = ''
# Operator
if filter_statement.operator is not None:
if isinstance(filter_statement.value, six.string_types):
filter_string = '%s.%s %s "%s"' % (
self._get_collection_ident(filter_statement.collection),
filter_statement.attribute,
filter_statement.operator,
filter_statement.value,
)
else:
filter_string = '%s.%s %s %s' % (
self._get_collection_ident(filter_statement.collection),
filter_statement.attribute,
filter_statement.operator,
filter_statement.value,
)
elif filter_statement.function is not None:
collection_name = self._get_collection_ident(filter_statement.collection)
filter_string = filter_statement.get_filter_function_string(collection_name)
return filter_string
def _get_return_statement(self):
"""
"""
return_statement = ''
if len(self.collections) == 1:
collection = self.collections[0]
return_statement += ' RETURN %s' % self._get_collection_ident(collection_name=collection)
else:
collections_string = ''
is_first = True
for collection in self.collections:
if is_first:
is_first = False
collections_string += self._get_collection_ident(collection)
else:
collections_string += ' , %s ' % self._get_collection_ident(collection)
return_statement += ' RETURN [ %s ]' % collections_string
return return_statement
def _get_sorting_statement(self):
"""
"""
query_data = ''
is_first = True
for sorting_entry in self.sorting:
if is_first:
query_data += ' SORT '
if sorting_entry['field'] is not None:
if not is_first:
query_data += ', '
if sorting_entry['collection'] is not None:
query_data += '%s.%s %s' % (
self._get_collection_ident(sorting_entry['collection']),
sorting_entry['field'],
sorting_entry['order'],
)
else:
query_data += '%s.%s %s' % (
self._get_collection_ident(self.collections[0]),
sorting_entry['field'],
sorting_entry['order'],
)
if is_first:
is_first = False
return query_data
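The Django-style `__` lookup parsing implemented by `_get_filter_statement` above can be sketched standalone; this hypothetical helper mirrors (but does not import) the class and shows the outcomes for one- and two-part keys:

```python
# Mirrors the lookup tables defined on QueryFilterStatement above.
CONDITIONS = {'exact': '==', 'gt': '>', 'gte': '>=', 'lt': '<', 'lte': '<='}
FUNCTIONS = ('contains', 'icontains')

def parse_lookup(key):
    """Sketch of how 'attr__gte' style keys are split into filter parts."""
    parts = key.split('__')
    if len(parts) == 1:
        return {'attribute': key, 'operator': '=='}
    if parts[1] in CONDITIONS:
        return {'attribute': parts[0], 'operator': CONDITIONS[parts[1]]}
    if parts[1] in FUNCTIONS:
        return {'attribute': parts[0], 'function': parts[1]}
    # otherwise the first part names a collection, the second the attribute
    return {'collection': parts[0], 'attribute': parts[1], 'operator': '=='}
```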
class Traveser(object):
"""
"""
@classmethod
def follow(cls, start_vertex, edge_collection, direction):
"""
"""
request_data = {
'startVertex': start_vertex,
'edgeCollection': edge_collection,
'direction': direction,
}
return Traveser._send_follow(request_data=request_data)
@classmethod
def extended_follow(cls, start_vertex, edge_collection, graph_name=None, **kwargs):
"""
:param start_vertex id of the startVertex, e.g. "users/foo".
:param edge_collection *Deprecated* name of the collection that contains the edges.
:param graph_name name of the graph that contains the edges.
Optionals:
:param filter (optional, default is to include all nodes): body (JavaScript code) of custom filter
function function signature: (config, vertex, path) -> mixed can return four different string values:
- "exclude" -> this vertex will not be visited.
- "prune" -> the edges of this vertex will not be followed.
- "" or undefined -> visit the vertex and follow it's edges.
- Array -> containing any combination of the above. If there is at least one "exclude" or "prune"
respectively is contained, it's effect will occur.
:param min_depth (optional, ANDed with any existing filters): visits only nodes in at least the given depth
:param max_depth (optional, ANDed with any existing filters): visits only nodes in at most the given depth
:param visitor (optional): body (JavaScript) code of custom visitor function function signature:
(config, result, vertex, path) -> void visitor function can do anything, but its return value is ignored.
To populate a result, use the result variable by reference
:param direction (optional): direction for traversal - if set, must be either
"outbound", "inbound", or "any" - if not set, the expander attribute must be specified
:param init (optional): body (JavaScript) code of custom result initialisation function function signature:
(config, result) -> void initialise any values in result with what is required
:param expander (optional): body (JavaScript) code of custom expander function must be set if direction
attribute is *not* set function signature: (config, vertex, path) -> array expander must return an array
of the connections for vertex each connection is an object with the attributes edge and vertex
:param sort (optional): body (JavaScript) code of a custom comparison function for the edges.
The signature of this function is (l, r) -> integer (where l and r are edges) and must return -1 if l
is smaller than, +1 if l is greater than, and 0 if l and r are equal. The reason for this is the following:
The order of edges returned for a certain vertex is undefined. This is because there is no natural order
of edges for a vertex with multiple connected edges. To explicitly define the order in which edges on
the vertex are followed, you can specify an edge comparator function with this attribute. Note that the
value here has to be a string to conform to the JSON standard, which in turn is parsed as function body
on the server side. Furthermore note that this attribute is only used for the standard expanders. If
you use your custom expander you have to do the sorting yourself within the expander code.
:param strategy (optional): traversal strategy can be "depthfirst" or "breadthfirst"
:param order (optional): traversal order can be "preorder" or "postorder"
:param item_order (optional): item iteration order can be "forward" or "backward"
:param uniqueness (optional): specifies uniqueness for vertices and edges visited if set, must be an object
like this: "uniqueness": {"vertices": "none"|"global"|"path", "edges": "none"|"global"|"path"}
:param max_iterations (optional): Maximum number of iterations in each traversal.
This number can be set to prevent endless loops in traversal of cyclic graphs. When a traversal
performs as many iterations as the max_iterations value, the traversal will abort with an error.
If max_iterations is not set, a server-defined value may be used.
"""
request_data = {
'startVertex': start_vertex,
'edgeCollection': edge_collection,
}
if graph_name:
request_data['graphName'] = graph_name
# Set search data
option_name = 'filter'
if option_name in kwargs:
request_data['filter'] = kwargs[option_name]
option_name = 'min_depth'
if option_name in kwargs:
request_data['minDepth'] = kwargs[option_name]
option_name = 'max_depth'
if option_name in kwargs:
request_data['maxDepth'] = kwargs[option_name]
option_name = 'visitor'
if option_name in kwargs:
request_data['visitor'] = kwargs[option_name]
option_name = 'direction'
if option_name in kwargs:
request_data['direction'] = kwargs[option_name]
option_name = 'init'
if option_name in kwargs:
request_data['init'] = kwargs[option_name]
option_name = 'expander'
if option_name in kwargs:
request_data['expander'] = kwargs[option_name]
option_name = 'sort'
if option_name in kwargs:
request_data['sort'] = kwargs[option_name]
option_name = 'strategy'
if option_name in kwargs:
request_data['strategy'] = kwargs[option_name]
option_name = 'order'
if option_name in kwargs:
request_data['order'] = kwargs[option_name]
option_name = 'item_order'
if option_name in kwargs:
request_data['itemOrder'] = kwargs[option_name]
option_name = 'uniqueness'
if option_name in kwargs:
request_data['uniqueness'] = kwargs[option_name]
option_name = 'max_iterations'
if option_name in kwargs:
request_data['maxIterations'] = kwargs[option_name]
# Sanity checks
if not 'direction' in kwargs and not 'expander' in kwargs:
raise Exception('Either direction or expander has to be set')
if 'direction' in kwargs and 'expander' in kwargs:
raise Exception('Only one of direction and expander may be set')
return Traveser._send_follow(request_data=request_data)
@classmethod
def _send_follow(cls, request_data):
"""
"""
related_docs = []
api = Client.instance().api
result_dict = api.traversal.post(data=request_data)
results = result_dict['result']['visited']
vertices = results['vertices']
vertices.remove(vertices[0])  # drop the start vertex; only related documents are returned
for vertice in vertices:
collection_name = vertice['_id'].split('/')[0]
doc = Document(
id=vertice['_id'],
key=vertice['_key'],
collection=collection_name,
api=api,
)
del vertice['_id']
del vertice['_key']
del vertice['_rev']
doc.data = vertice
related_docs.append(doc)
return related_docs
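The snake_case-to-camelCase option mapping performed by `extended_follow` above can be condensed into a lookup table; this standalone sketch reproduces only the payload assembly (no server interaction; the helper name is hypothetical):

```python
# Standalone sketch of the kwargs -> request payload mapping done by
# `extended_follow` above (snake_case Python names to camelCase API keys).
KEY_MAP = {
    'filter': 'filter', 'min_depth': 'minDepth', 'max_depth': 'maxDepth',
    'visitor': 'visitor', 'direction': 'direction', 'init': 'init',
    'expander': 'expander', 'sort': 'sort', 'strategy': 'strategy',
    'order': 'order', 'item_order': 'itemOrder',
    'uniqueness': 'uniqueness', 'max_iterations': 'maxIterations',
}

def build_traversal_payload(start_vertex, edge_collection, **kwargs):
    # Exactly one of direction/expander must be given, as documented above.
    if ('direction' in kwargs) == ('expander' in kwargs):
        raise ValueError('exactly one of direction or expander must be set')
    payload = {'startVertex': start_vertex, 'edgeCollection': edge_collection}
    for name, value in kwargs.items():
        payload[KEY_MAP[name]] = value
    return payload

payload = build_traversal_payload('users/foo', 'knows',
                                  direction='outbound', max_depth=2)
```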
|
ArangoPy
|
/ArangoPy-0.5.7.tar.gz/ArangoPy-0.5.7/arangodb/query/advanced.py
|
advanced.py
|
from arangodb.index.general import BaseIndex
class HashIndex(BaseIndex):
"""
"""
type_name = 'hash'
def __init__(self, fields, unique=True):
"""
*Note*: unique indexes on non-shard keys are not supported in a cluster.
:param fields A list of attribute paths.
:param unique If true, then create a unique index.
"""
super(HashIndex, self).__init__()
self.fields = fields
self.unique = unique
def get_extra_attributes(self):
"""
"""
return {
'fields': self.fields,
'unique': self.unique,
}
class SkiplistIndex(HashIndex):
"""
Skiplists are almost the same as hash indexes, except that they can also be used for range queries.
"""
type_name = 'skiplist'
class BitarrayIndex(HashIndex):
"""
Bitarray
"""
type_name = 'bitarray'
def __init__(self, fields):
"""
*Note*: unique indexes on non-shard keys are not supported in a cluster.
:param fields A list of pairs. A pair consists of an attribute path followed by a list of values.
"""
super(BitarrayIndex, self).__init__(fields, unique=False)
class GeoIndex(HashIndex):
"""
"""
type_name = 'geo'
def __init__(self, fields, geo_json, ignore_null=True, unique=True):
"""
*Note*: Unique indexes on non-shard keys are not supported in a cluster.
:param fields A list with one or two attribute paths. If it is a list with one attribute path location,
then a geo-spatial index on all documents is created using location as path to the coordinates.
The value of the attribute must be a list with at least two double values.
The list must contain the latitude (first value) and the longitude (second value).
All documents which do not have the attribute path, or whose values are not suitable, are ignored.
If it is a list with two attribute paths latitude and longitude, then a geo-spatial index on all
documents is created using latitude and longitude as the paths to the latitude and the longitude.
The value of the attribute latitude and of the attribute longitude must be a double. All documents
which do not have the attribute paths, or whose values are not suitable, are ignored.
:param geo_json If a geo-spatial index on a location is constructed and geoJson is true, then the order
within the list is longitude followed by latitude. This corresponds to the
format described in http://geojson.org/geojson-spec.html#positions
:param ignore_null If a geo-spatial constraint is created and ignoreNull is true,
then documents with a null in location or at least one null in latitude or longitude are ignored.
"""
super(GeoIndex, self).__init__(fields, unique)
self.geo_json = geo_json
self.ignore_null = ignore_null
def get_extra_attributes(self):
"""
"""
return {
'fields': self.fields,
'unique': self.unique,
'geoJson': self.geo_json,
'ignoreNull': self.ignore_null,
}
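For illustration, this is the attribute payload a `GeoIndex` as defined above contributes when the index is created; the function is a hypothetical standalone mirror of `get_extra_attributes`, not the class itself:

```python
def geo_index_attributes(fields, geo_json, ignore_null=True, unique=True):
    # Mirrors GeoIndex.get_extra_attributes above.
    return {
        'fields': fields,
        'unique': unique,
        'geoJson': geo_json,
        'ignoreNull': ignore_null,
    }

attrs = geo_index_attributes(['location'], geo_json=True)
```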
|
ArangoPy
|
/ArangoPy-0.5.7.tar.gz/ArangoPy-0.5.7/arangodb/index/unique.py
|
unique.py
|
class BaseIndex(object):
"""
This is the base class for all other indexes,
but has no functionality of its own
"""
type_name = None
def __init__(self):
"""
"""
self.id = None
self.is_new = False
def get_extra_attributes(self):
"""
You need to override this method for each index
so it can return a dict with all values which are needed
for the index
"""
class FulltextIndex(BaseIndex):
"""
Index for fulltext search
"""
type_name = 'fulltext'
def __init__(self, fields, minimum_length):
"""
Creates fulltext index which is configured for the fields with
the minimum word length
:param fields A list of attribute names. Currently, the list is limited to exactly one attribute, so the
value of fields should look like this for example: [ "text" ].
:param minimum_length Minimum character length of words to index. Will default to a server-defined value
if unspecified. It is thus recommended to set this value explicitly when creating the index.
"""
super(FulltextIndex, self).__init__()
self.fields = fields
self.minimum_length = minimum_length
def get_extra_attributes(self):
"""
"""
return {
'fields': self.fields,
'minLength': self.minimum_length,
}
class CapConstraintIndex(BaseIndex):
"""
The cap constraint does not index particular attributes of the documents in a collection,
but limits the number of documents in the collection to a maximum value.
The cap constraint thus does not support attribute names specified in the fields attribute
nor uniqueness of any kind via the unique attribute.
"""
type_name = 'cap'
def __init__(self, size, document_byte_size=16384):
"""
:param size The maximal number of documents for the collection. If specified, the value
must be greater than zero.
:param document_byte_size The maximal size of the active document data in the collection (in bytes).
If specified, the value must be at least 16384.
"""
super(CapConstraintIndex, self).__init__()
self.size = size
self.document_byte_size = document_byte_size
def get_extra_attributes(self):
"""
"""
return {
'size': self.size,
'byteSize': self.document_byte_size,
}
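The two limits documented on `CapConstraintIndex` above can be checked up front; this hypothetical standalone mirror of `get_extra_attributes` enforces them before building the payload:

```python
def cap_constraint_attributes(size, document_byte_size=16384):
    # Mirrors CapConstraintIndex.get_extra_attributes above, with the two
    # documented constraints enforced before the payload is built.
    if size <= 0:
        raise ValueError('size must be greater than zero')
    if document_byte_size < 16384:
        raise ValueError('byteSize must be at least 16384')
    return {'size': size, 'byteSize': document_byte_size}

attrs = cap_constraint_attributes(1000)
```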
|
ArangoPy
|
/ArangoPy-0.5.7.tar.gz/ArangoPy-0.5.7/arangodb/index/general.py
|
general.py
|
# Arbie
[](https://www.codefactor.io/repository/github/owodunni/arbie) [](https://github.com/wemake-services/wemake-python-styleguide) [](https://codecov.io/gh/owodunni/Arbie) [](https://github.com/owodunni/arbie) [](https://github.com/owodunni/GageRnR/blob/master/LICENSE)
[](https://pypi.org/project/Arbie/)
Arbie is a greedy crypto pirate!

## Run
Run Brig with docker-compose:
```
cd Brig && docker-compose up -d
```
## Getting started
## Develop
Instructions for developing Arbie using Docker or virtualenv.
To setup the development environment run:
```
./gradlew venv && source .venv/bin/activate && ./gradlew setup
```
It will run the steps below and make sure that all tools required for Arbie
are set up.
### Docker
The Arbie repository can be built using Docker. This is probably the simplest
approach if you just want to get things building.
```
docker build . -t arbie
```
You can now use the newly created docker image to build and test with.
test:
```
docker-compose run --rm arbie ./gradlew tAL
```
### Virtual-env
Create a virtual env:
```
./gradlew venv
```
Activate the virtual env:
```
source .venv/bin/activate
```
Install requirements:
```
./gradlew pip
```
lint:
```
./gradlew lint
```
### Commits
Arbie works with [semantic-release](https://python-semantic-release.readthedocs.io/en/latest/)
and therefore has a special commit style. We use [Angular style](https://github.com/angular/angular.js/blob/master/DEVELOPERS.md#commits) commits. A helpful tool for ensuring the correct commit style is [commitizen](https://github.com/commitizen/cz-cli).
Simply run when committing:
```
cz c
```
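A hypothetical Angular-style commit message has the shape `type(scope): subject`, for example:

```
feat(brig): add restart policy to docker-compose
```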
### Pre commit hooks
To enforce code standard we use [pre-commit](https://pre-commit.com/) it manages
pre commit hooks.
Run to setup:
```
pre-commit install
```
|
Arbie
|
/Arbie-0.10.2.tar.gz/Arbie-0.10.2/README.md
|
README.md
|
import PySimpleGUI as sg
import traceback
column_mappings = {'Order Date': 0,
'Order ID': 1,
'Item Name': 2,
'ASIN': 3,
'SKU': 4,
'STORE': 5,
'COST PER UNIT': 6,
'INDV ITEMS QTY': 7,
'SELLABLE QTY': 8,
'Condition': 9,
'ORDER COST': 10,
'PRICE': 11,
'ISBN': 12,
'USPC': 13,
'EAN': 14}
def debug_basic(value):
if value:
def decorate(f):
def wrap(*args, **kwargs):
try:
return f(*args,**kwargs)
except Exception as e:
tb = traceback.format_exc()
sg.Print('An error happened. Here is the info:', e, tb)
sg.popup_error('AN EXCEPTION OCCURRED!', "Please send us a screenshot of the error message to the side, and then click on the button to make it disappear")
return wrap
return decorate
else:
def decorate(f):
def wrap(*args, **kwargs):
return f(*args,**kwargs)
return wrap
return decorate
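The conditional-decorator pattern above can be exercised without PySimpleGUI; this sketch swaps the popups for a plain print so the control flow runs anywhere (names here are hypothetical):

```python
def debug_basic_sketch(enabled):
    # Same shape as debug_basic above, with the GUI popups replaced by a
    # plain print.  When disabled, the function is returned unwrapped.
    def decorate(f):
        if not enabled:
            return f
        def wrap(*args, **kwargs):
            try:
                return f(*args, **kwargs)
            except Exception as e:
                print('An error happened:', e)
                return None
        return wrap
    return decorate

@debug_basic_sketch(True)
def divide(a, b):
    return a / b
```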
master_col_names_to_indices = {"PRODUCT NAME":2, "ASIN":3, "ISBN":12, "UPC":13, "EAN":14, "SKU":4, "PURCHASE PRICE":6,
"Qty":8, "ORDER DATE":0, "CONDITION":9, "PRICE":11}
master_data_last_letter = "O"
def get_input_user_wrapper(input_function):
def wrapper(*args, **kwargs):
output = input_function(*args, **kwargs)
if output not in kwargs.get("affirmative_response"):
event = sg.popup_yes_no("Press 'Yes' to terminate the process, or press 'No' to repeat the last step",
keep_on_top=True)
if event == "Yes":
from sys import exit
exit()
else:
return wrapper(*args, **kwargs)
else:
return output
return wrapper
@get_input_user_wrapper
def select_name_and_click_ok_or_terminate(name, keep_on_top, affirmative_response):
#affirmative_response is required for the decorator to work
event = sg.popup(f"Please select {name} and click 'OK' when finished",
keep_on_top=keep_on_top)
return event
@get_input_user_wrapper
def ask_to_go_to_wb_click_ok_or_terminate(name, keep_on_top, affirmative_response):
#affirmative_response is required for the decorator to work and cannot be given a
#default argument, but "OK" should always be passed in as a keyword argument for it
event = sg.popup(f"Please make {name} your active Excel sheet and click 'OK' when finished",
keep_on_top=keep_on_top)
return event
@get_input_user_wrapper
def get_file(text, affirmative_response, value_dict):
#to make this work with the decorator used for the other GUIs, we mutate a dictionary
#to return the values, and return the event
layout = [[sg.Text(f'Select the file location of {text}'), sg.Input(),sg.FileBrowse(key="--input_file--")],
[sg.OK(), sg.Cancel()]]
event, values = sg.Window("",layout, keep_on_top=True, grab_anywhere=True).read(close=True)
value_dict["val"] = values["--input_file--"]
return event
def open_amazon_inventory_page():
import webbrowser
webbrowser.open(r"https://sellercentral.amazon.co.uk/listing/reports/ref=xx_invreport_dnav_xx")
@debug_basic(True)
def re_import_inventory_data():
value_dict = {"val":None}
get_file(text="the .txt inventory file you downloaded from amazon",
affirmative_response="OK", value_dict=value_dict)
fileloc=value_dict["val"]
import pandas as pd
data = pd.read_csv(fileloc,sep="\t", header=None)
ask_to_go_to_wb_click_ok_or_terminate("Arbitrage Master Sheet",
keep_on_top=True, affirmative_response="OK")
import xlwings as xw
xw.apps.active.books.active.sheets["Inventory"].range("A1").options(index=False, header=False).value = data
return
def set_python_path():
import platform
import xlwings as xw
if platform.system() == "Windows":
addin_book = xw.Book(r"C:\Users\ethan\AppData\Roaming\Microsoft\Excel\XLSTART\master_sheet_addin.xlam")
value_dict = {"val":None}
get_file(text="location of the python EXE", affirmative_response="OK", value_dict=value_dict)
from pathlib import Path
path=Path(value_dict["val"])
addin_book.sheets["xlwings.conf"].range("B1").value = str(path)
elif platform.system() == "Darwin":
sg.popup("Implementation for Mac doesn't exist yet")
import sys
sys.exit()
else:
sg.popup("Oops, we only support Mac and Windows")
import sys
sys.exit()
def read_python_path():
import platform
import xlwings as xw
if platform.system() == "Windows":
addin_book = xw.Book(r"C:\Users\ethan\AppData\Roaming\Microsoft\Excel\XLSTART\master_sheet_addin.xlam")
val = addin_book.sheets["xlwings.conf"].range("B1").value
sg.popup(f"current value is {val}")
elif platform.system() == "Darwin":
sg.popup("Implementation for Mac doesn't exist yet")
import sys
sys.exit()
else:
sg.popup("Oops, we only support Mac and Windows")
import sys
sys.exit()
def return_existing_asins(inventory_sht):
"""
given the inventory sheet (as an xlwings sheet object), returns a set of asins in the inventory sheet
"""
asin_rows_to_index = {"asin1":16,"asin2":17,"asin3":18}
row_num = inventory_sht.range('A1').current_region.last_cell.row
sg.popup(f"row_num is {row_num}")
existing_asins = set()
if row_num == 1:
return existing_asins
values = inventory_sht.range("A2:AE"+str(row_num)).value
if row_num == 2:
values = [values] #we assume values is a list of lists,
#but if there is only one row, .value returns a single flat list
for row in values:
asin_rows_index = [asin_rows_to_index["asin1"], asin_rows_to_index["asin2"], asin_rows_to_index["asin3"]]
for index in asin_rows_index:
if row[index] is not None:
existing_asins.add(row[index])
else:
pass
return existing_asins
@debug_basic(True)
def export_data(use_Process=False):
ask_to_go_to_wb_click_ok_or_terminate(name="the Arbitrage Master Sheet", keep_on_top=True,
affirmative_response="OK")
import xlwings as xw
master_sheet = xw.apps.active.books.active.sheets.active
product_id_types = {"ASIN":1,
"ISBN":2,
"UPC":3,
"EAN":4}
product_id_types_list = list(product_id_types.keys())
product_id_to_data_skeleton = {"id_type":None, "PRODUCT NAME":None, "SKU":None, "Condition":None}
product_id_to_data = {}
to_read_after_product_id = ["SKU","PURCHASE PRICE", "Qty", "ORDER DATE", "CONDITION", "PRICE"]
select_name_and_click_ok_or_terminate("ASINs", keep_on_top=True, affirmative_response={"OK"})
current_address = xw.apps.active.books.active.selection.address
from Excelutilities import index_helpers
while not index_helpers.is_from_single_col(current_address):
user_output = sg.popup_yes_no("Please select from a single column block.\nClick Yes to continue and reselect, or No to terminate the program",
keep_on_top=True)
if user_output == "No":
import sys
sys.exit()
else:
current_address = xw.apps.active.books.active.selection.address
first_and_lasts = [index_helpers.first_and_last_row_index(block) for block in current_address.split(",")]
for first_row_index, last_row_index in first_and_lasts:
address_for_master = "A" + str(first_row_index) + ":O"+str(last_row_index)
for row in master_sheet.range(address_for_master).value:
values = row
product_id = None
id_type = None
for id_name in product_id_types_list:
index = master_col_names_to_indices[id_name]
if values[index] is not None:
product_id = values[index]
id_type = product_id_types[id_name]
break
if product_id is None:
print("Oops this row has no product_id")
continue
if product_id in product_id_to_data:
current_dict = product_id_to_data[product_id]
else:
product_id_to_data[product_id] = product_id_to_data_skeleton.copy()
current_dict = product_id_to_data[product_id]
current_dict["id_type"] = id_type
for attribute in to_read_after_product_id:
current_dict[attribute] = values[master_col_names_to_indices[attribute]]
amazon_flat_loader_output_cols = {'sku':0,
'product-id':1,
'product-id-type':2,
'price':3,
'minimum-seller-allowed-price':4,
'maximum-seller-allowed-price':5,
'item-condition':6,
'quantity':7,
'add-delete':8,
'will-ship-internationally':9,
'expedited-shipping':10,
'item-note':11}
output_cols = ['sku',
'product-id',
'product-id-type',
'price',
'minimum-seller-allowed-price',
'maximum-seller-allowed-price',
'item-condition',
'quantity',
'add-delete',
'will-ship-internationally',
'expedited-shipping',
'item-note']
def reversed_dict(dictionary):
"""
reverses the keys of a dictionary which are hashable and assumes the mappings are one to one
"""
return_dict= {}
for key in dictionary:
return_dict[dictionary[key]] = key
return return_dict
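`reversed_dict` can equivalently be written as a dict comprehension; this standalone sketch behaves the same for one-to-one mappings of hashable values:

```python
def reversed_dict_sketch(dictionary):
    # Equivalent to reversed_dict above for one-to-one mappings.
    return {value: key for key, value in dictionary.items()}

flipped = reversed_dict_sketch({'sku': 'SKU', 'price': 'PRICE'})
```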
internal_names_to_output_names = {"id_type":"product-id-type", "SKU":"sku", "CONDITION":"item-condition", "PRICE":"price"}
output_names_to_internal_names = reversed_dict(internal_names_to_output_names)
import pkg_resources
AMAZON_MASTER_FILE = pkg_resources.resource_filename('ArbMasterPy', 'data/Amazon standard inventory - flat file.xlsm')
src_path = AMAZON_MASTER_FILE
layout = [[sg.Text('Select the output location of your master sheet'), sg.Input(),sg.FolderBrowse(key="--input_file--")],
[sg.OK(), sg.Cancel()]]
event, values = sg.Window("",layout, no_titlebar=False, keep_on_top=True, grab_anywhere=True).read(close=True)
import os
dest_path=os.path.join(values[0], "amazon_master_output.xlsm")
import shutil
shutil.copyfile(src_path, dest_path)
existing_asins = return_existing_asins(inventory_sht=xw.apps.active.books.active.sheets["Inventory"])
return_array = []
for product_id in product_id_to_data:
if product_id in existing_asins:
continue
else:
current_dict = product_id_to_data[product_id]
row = [current_dict[output_names_to_internal_names[col]] if col in output_names_to_internal_names else None for col in output_cols]
row[amazon_flat_loader_output_cols['product-id']] = product_id
return_array.append(row)
xw.Book(dest_path)
xw.apps.active.books.active.sheets["Master"]["A2"].value = return_array
xw.Book(dest_path)
def export_data_process(use_process=False):
import platform
if platform.system() == "Windows" and use_process==False:
from multiprocessing import Process
p = Process(target=export_data_process, args=(True,))
p.start() #re-enter in a new process; use_process=True skips the platform check
return
elif platform.system() == "Darwin":
pass
else:
pass
export_data()
@debug_basic(True)
def generate_sku(allowed_data_types = [str]):
"""
PLACEHOLDER
"""
select_name_and_click_ok_or_terminate("input product names", keep_on_top=True, affirmative_response={"OK"})
import xlwings as xw
input_col_name_1 = xw.apps.active.books.active.selection.value
input_col_name_1_address = xw.apps.active.books.active.selection.address
asin_list = input_col_name_1
from ArbMasterPy.get_other_row_data import get_other_row_data
date_list=get_other_row_data(column_mappings, curr_attr="ASIN", new_attr="Order Date",
target_sheet=xw.apps.active.books.active.sheets.active,
target_address=input_col_name_1_address)
sku_list = []
from ArbMasterPy.inventory_data import get_attribute
    asin_sku_dict = get_attribute('seller-sku')
import datetime
select_name_and_click_ok_or_terminate("SKU column", keep_on_top=True, affirmative_response={"OK"})
active_cells = xw.apps.active.books.active.selection
import sys
from Excelutilities import importing_data_helpers
def sanity_check_col_entries(entry1, entry2, addresses_and_names, sys=sys, importing_data_helpers=importing_data_helpers):
#entry1 is a list of values
#addresses_and_names is a list of tuples, first entry of tuple
#is the address, and second is the name
#implements some sanity checks
if len(entry1) != len(entry2):
sg.popup("Oops! Your two input columns of data are of different lengths.\nNow terminating...",
keep_on_top=True)
sys.exit()
for val1, val2 in zip(entry1, entry2):
if type(val1) not in allowed_data_types and val1 != None:
sg.popup(f"Oops you selected some data which we couldn' recognise\nValue: {val1}",
keep_on_top=True)
sys.exit()
if type(val2) not in allowed_data_types and val2 != None:
sg.popup(f"Oops you selected some data which we couldn' recognise\n{val2}\n{type(val2)}",
keep_on_top=True)
sys.exit()
for address_and_name in addresses_and_names:
address = address_and_name[0]
name = address_and_name[1]
if not importing_data_helpers.is_col_block_bool(address):
print(address)
sg.popup(f"Oops! Your {name} data wasn't from a single column",
keep_on_top=True)
sys.exit()
#sanity checks
for asin,date in zip(asin_list,date_list):
date = datetime.datetime.strftime(date, '%d-%m-%Y')
if asin in asin_sku_dict:
sku_list.append([asin_sku_dict[asin]])
else:
sku_list.append([str(asin) + '-'+str(date)])
active_cells.value = [sku for sku in sku_list]
#def sku_helper(product_name):
# if product_name == None:
# return None
# else:
# return "".join([char for char in product_name if char.isalpha()])[:15]
#active_cells.value = [[sku_helper(product_name)] for product_name in input_col_name_1]
def generate_sku_process(use_process=False):
"""
a wrapper to use for if the underlying function has a decorator, so we can pass into multithreading.Process
"""
import platform
if platform.system() == "Windows" and use_process==False:
from multiprocessing import Process
p = Process(target=generate_sku_process, args=(True,));p.start() #pass it into process, but with instructions now to skip the system check as we know we are using a process
return
elif platform.system() == "Darwin":
pass
else:
pass
generate_sku()
@debug_basic(True)
def export_shipping_sheet():
max_num_asins_shipping = 500
max_num_asins_master = 5000
select_name_and_click_ok_or_terminate(name="the shipping address data", keep_on_top=True,affirmative_response="OK")
import xlwings as xw
wb = xw.apps.active.books.active
shipping_data_block = wb.selection.value
sg.popup(f"Currently, this only works with a max of {max_num_asins_shipping} ASINs")
asin_qty_data = wb.sheets["Shipment form"].range("A2:B"+str(2+max_num_asins_shipping)).value
asin_qty_data = [row for row in asin_qty_data if row != [None, None]]
sg.popup(f"This temporary solution assumes you have max {max_num_asins_master} ASINs in your MASTER sheet")
master_data = wb.sheets["Master"].range("A2:"+master_data_last_letter+ str(max_num_asins_master)).value
asin_index = master_col_names_to_indices["ASIN"]
sku_index = master_col_names_to_indices["SKU"]
asins = [row[asin_index] for row in master_data]
skus = [row[sku_index] for row in master_data]
asins_to_skus = dict(zip(asins, skus))
#Now we get SKUs from inventory, and it takes priority over asins got from the master
max_num_asins_inventory = max_num_asins_master
from ArbMasterPy.inventory_data import get_attribute
inventory_asins_skus = get_attribute(attr='seller-sku')
for asin in inventory_asins_skus.keys():
if asin in asins_to_skus:
asins_to_skus[asin] = inventory_asins_skus[asin]
#we keep track of all asins we couldn't find an sku for
asins_without_sku = []
#we create an array, 2xN, with skus and qty data
sku_qty_data = []
for row in asin_qty_data:
qty = row[1]
asin = row[0]
if asin in asins_to_skus:
sku = asins_to_skus[asin]
if sku != None:
sku_qty_data.append([sku, qty])
else:
asins_without_sku.append([asin, qty])
else:
asins_without_sku.append([asin, qty])
sku_qty_data += asins_without_sku #i.e. all the asins with no SKUs go at the end
    #when skus or asins appear twice, we add the quantities and remove the second appearance
matching_index = 0 #This is collecting 0th column, i.e. the ASINs (could be selected)
value_index = 1 #This is collecting the quantities which are added (could be selected)
checking_list = []
#Find all the locations
for index, row in enumerate(sku_qty_data):
if sku_qty_data[index][matching_index] in checking_list:
pass
else:
checking_list.append(sku_qty_data[index][matching_index])
#Produce the new values
temp = 0
outputs = []
for asin in checking_list:
for row in sku_qty_data:
if row[matching_index] == asin:
temp += row[value_index]
outputs.append(temp)
temp = 0
#Replace the sku_qty_data with the new values
sku_qty_data = [list(a) for a in zip(checking_list, outputs)]
return_value = [["PlanName", "NOT AUTOMATED YET"], ["ShipToCountry", "UK"]] + shipping_data_block + [["AddressDistrct",None],[None,None], ["MerchantSKU", "Quantity"]] + sku_qty_data
print(f"SHIPPING DATA BLOCK IS: {shipping_data_block}")
print(f"SKU QTY DATA IS: {sku_qty_data}")
import pkg_resources
AMAZON_SHIPPING_TEMPLATE = pkg_resources.resource_filename('ArbMasterPy', 'data/CreateInboundPlanRequest.xlsx')
src_path = AMAZON_SHIPPING_TEMPLATE
layout = [[sg.Text('Select the output location of your shipping sheet'), sg.Input(),sg.FolderBrowse(key="--input_file--")],
[sg.OK(), sg.Cancel()]]
event, values = sg.Window("",layout, no_titlebar=False, keep_on_top=True, grab_anywhere=True).read(close=True)
import os
dest_path=os.path.join(values[0], "CreateInboundPlanRequest.xlsx")
import shutil
shutil.copyfile(src_path, dest_path)
xw.Book(dest_path)
xw.apps.active.books.active.sheets["Create Shipping Plan Template"]["A1"].value = return_value
def export_shipping_sheet_process(use_process=False):
import platform
if platform.system() == "Windows" and use_process==False:
from multiprocessing import Process
p = Process(target=export_shipping_sheet_process, args=(True,));p.start() #pass it into process, but with instructions now to skip the system check as we know we are using a process
return
elif platform.system() == "Darwin":
pass
else:
pass
export_shipping_sheet()
if __name__ == "__main__":
export_shipping_sheet()
|
Arbitrage-Master-Sheet-Py
|
/Arbitrage_Master_Sheet_Py-0.0.29-py3-none-any.whl/ArbMasterPy/master_to_amazon.py
|
master_to_amazon.py
|
import PySimpleGUI as sg
from ArbMasterPy.debug_wrapper import debug_basic
import xlwings as xw
throttle_rate = float(xw.apps.active.books.active.sheets["API"].range("B4").value) #to control rate of requests to server so we don't get throttled
num_results_shown = int(xw.apps.active.books.active.sheets["API"].range("B2").value)
too_good_to_be_true = float(xw.apps.active.books.active.sheets["API"].range("B3").value)
search_param = xw.apps.active.books.active.sheets["API"].range("B5").value # search_param should be "" to use google search, and "shop" to use shopping
api_key = xw.apps.active.books.active.sheets["API"].range("B1").value
import pkg_resources
html_save_loc = pkg_resources.resource_filename('ArbMasterPy', 'data/arbitrage_results_html.html')
def asin_element_to_amazon_links(asin_element):
#given the asin element, extracts the link to the amazon page
return asin_element.find_all("a", {"class":"amazon-link btn btn-xs btn-primary"})[0]["href"]
import os
def newest(path):
    #from SO. Returns the newest file in a directory
files = os.listdir(path)
paths = [os.path.join(path, basename) for basename in files]
return max(paths, key=os.path.getctime)
def get_html_code():
from pathlib import Path
downloads_path = str(Path.home() / "Downloads")
file_loc = newest(downloads_path)
if "SAS" not in file_loc:
sg.popup("We couldn't recognize the most recent download - please check it and tell us what it is")
import sys
sys.exit()
HtmlFile = open(file_loc, 'r', encoding='utf-8')
source_html_code = HtmlFile.read()
return source_html_code
def extract_name_and_max_price(asin_element):
name = asin_element.find("a")["data-original-title"]
max_price = asin_element.find("span", {"class":"qi-max-cost pseudolink"}).text
return (name, max_price)
def get_products_and_product_names_and_names_prices(source_html_code):
from bs4 import BeautifulSoup
soup = BeautifulSoup(source_html_code, "html.parser")
# print(soup.get_text())
b=soup.body
products = b.find("div", {"id":"search-results"}).find_all("a")
def sort_function(product):
if product.has_attr("data-original-title"):
if product["data-original-title"] != "":
return True
return False
products = [product for product in products if sort_function(product)]
product_names = [product["data-original-title"] for product in products]
asin_elements = [element for element in b.find("div", {"id":"search-results"}).find_all("div") if element.has_attr("asin")]
names_prices = [extract_name_and_max_price(asin_element) for asin_element in asin_elements]
names_to_amz_link = dict(zip([name for name,price in names_prices], [asin_element_to_amazon_links(element) for element in asin_elements]))
return products, product_names, names_prices, names_to_amz_link
import re
def get_price(x):
return float(re.search("[0-9]+[.][0-9]+", x).group())
import difflib
def search_shopping(search_q, tbm_param="shop"):
from serpapi import GoogleSearch
import os
api_key = "ec63b5d769ebfe574934ac3816f218131cf92ccb461375aee6bc5926569f9933"
if tbm_param == "shop":
params = {
"engine": "google",
"q": search_q,
"location":"United Kingdom",
"gl": "uk",
"tbm": "shop",
"api_key": api_key,
}
print("INITIALIZING SEARCH")
search = GoogleSearch(params)
results = search.get_dict()
try:
source_and_price_and_link_and_title = [(result["source"], result["extracted_price"], result["link"], result["title"]) for result in results["shopping_results"]]
source_and_price_and_link_and_title.sort(key=lambda x:1-difflib.SequenceMatcher(None,x[3], search_q).ratio())
source_and_price = [(x[0],x[1]) for x in source_and_price_and_link_and_title]
except Exception as e:
print(e)
print(results)
return None
elif tbm_param == "":
params = {
"engine": "google",
"q": search_q,
"location":"United Kingdom",
"gl": "uk",
"api_key": api_key,
}
print("INITIALIZING SEARCH")
search = GoogleSearch(params)
results = search.get_dict()
try:
organic_search = results["organic_results"]
source_and_price_and_link_and_title = []
for result in organic_search:
try:
#this is bad form
price = result['rich_snippet']['top']['detected_extensions']['price']
link = result["link"]
title = result['title']
source = ""
source_and_price_and_link_and_title.append((source,price,link,title))
except KeyError:
pass
source_and_price_and_link_and_title.sort(key=lambda x:1-difflib.SequenceMatcher(None,x[3], search_q).ratio())
source_and_price = [(x[0],x[1]) for x in source_and_price_and_link_and_title]
except Exception as e:
print(e)
print(results)
return None
else:
import PySimpleGUI as sg
import sys
sg.popup("did not recognise tbm param value")
sys.exit()
return results, source_and_price_and_link_and_title, source_and_price
def apply_filters_to_source_price_link(source_price_link_title, target_price, blacklist = ["ebay", "etsy", "alibaba", "idealo", "onbuy"]):
new_return = []
for source, price, link,title in source_price_link_title:
if price < too_good_to_be_true*target_price or price > target_price:
continue
trigger_activated = False
for trigger in blacklist:
if trigger in link.lower() or trigger in source.lower():
trigger_activated=True
break
if not trigger_activated:
new_return.append((source, price, link, title))
return new_return
def target_items(search_results_data, names_and_prices_filtered):
"""
Given the results, and the names_price data we wanted to check, we return a list of those which meet the criterion
search_results_data is a list of 3-tuples (results, source_and_price_and_link, source_and_price) of which we just need the second
value
"""
sources_and_prices_and_links_and_titles = search_results_data
return_list = []
for source_price_link_title, name_price in zip(sources_and_prices_and_links_and_titles, names_and_prices_filtered):
target_price = name_price[1]
result_prices = [x[1] for x in source_price_link_title]
if sum([price < target_price for price in result_prices[:3]])>=2:
#require 2 of the top 3 prices to be below
return_list.append([name_price[0], name_price[1], source_price_link_title[:num_results_shown]])
else:
pass
return return_list
def generate_html_string(best_items, names_to_amz_link):
return_string = ""
for row in best_items:
return_string += f"Item: {row[0]}<br>Target Price: {row[1]}<br><a href={names_to_amz_link[row[0]]}>Amazon link</a>"
return_string += "<ol>"
for item in row[2]:
return_string += f"<li>Retailer: {item[0]}, Price: {item[1]}\n<a href={item[2]}>Website</a>\nTitle: {item[3]}</li>"
return_string+="</ol>\n\n"
return return_string
def sort_best_items(row):
target_name = row[0]
best_match = row[2][0][3]
return 1-difflib.SequenceMatcher(None, target_name, best_match).ratio()
@debug_basic(value=True)
def user_function():
sg.popup("Please download the html code and THEN click ok", keep_on_top=True)
source_html_code = get_html_code()
products, product_names, names_prices, names_to_amz_link = get_products_and_product_names_and_names_prices(source_html_code=source_html_code)
names_prices_filtered = [x for x in names_prices if "-" not in x[1] and x[1]!="N/A"]
names_prices_filtered = [(x[0], get_price(x[1])) for x in names_prices_filtered]
res=[]
i=0
import time
for name, price in names_prices_filtered:
i+=1
res.append(search_shopping(name, search_param))
        sg.one_line_progress_meter('Searching products', i + 1, len(names_prices_filtered))
time.sleep(1)
    res = [x for x in res if x is not None] #remove search results where we had no results
names_where_search_worked = set([res[j][0]["search_parameters"]["q"] for j in range(len(res))]) #get names where search results worked, so we can trim the name and price data
names_prices_filtered = [x for x in names_prices_filtered if x[0] in names_where_search_worked] #i.e., only look at the results where our search had results
filtered_results = [apply_filters_to_source_price_link(source_price_link_title=res[i][1], target_price=names_prices_filtered[i][1]) for i in range(len(res))]
best_items=target_items(filtered_results, names_prices_filtered)
best_items.sort(key=sort_best_items)
import webbrowser
f = open(html_save_loc,'w', encoding="utf-8")
message = generate_html_string(best_items, names_to_amz_link)
f.write(message)
f.close()
import platform
if platform.system() == "Windows":
webbrowser.open(f.name)
elif platform.system() == "Darwin":
import subprocess
subprocess.call(('open', f.name))
else:
sg.popup("Oops, we only support Mac and Windows")
import sys
sys.exit()
if __name__ == "__main__":
user_function()
|
Arbitrage-Master-Sheet-Py
|
/Arbitrage_Master_Sheet_Py-0.0.29-py3-none-any.whl/ArbMasterPy/automated_arbitrage.py
|
automated_arbitrage.py
|
# Arcapi
A third-party API for [ARC Prober](https://redive.estertion.win/arcaea/probe/) in Python.
Language: English/[中文](docs/README-CN.md)/[日本語](docs/README-JP.md)
## Quick Start
### Install
To use Arcapi, you can install it from [PyPI](https://pypi.org/).
```shell script
$ pip install Arc-api
```
### Usage
This API supports both synchronous and asynchronous methods to connect to the websocket provided by the Prober. To run synchronously:
```python
from Arcapi import SyncApi
# Initialize an API with user code
api_ = SyncApi(user_code='000000000')
# You can assign the constant range here, or just leave the defaults
print("userinfo: ", api_.userinfo(start=8, end=12))
```
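Note that the prober only accepts a 9-digit user (friend) code; the client rejects anything else before opening the websocket. A minimal sketch of that check, mirroring the library's internal `re.fullmatch(r'\d{9}', ...)` validation:

```python
import re

def is_valid_user_code(user_code: str) -> bool:
    # A valid code is exactly nine ASCII digits, e.g. "000000000".
    return re.fullmatch(r"\d{9}", user_code) is not None

print(is_valid_user_code("000000000"))  # True
print(is_valid_user_code("12345-678"))  # False
```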
Alternatively, you can run asynchronously. If you deploy your service as a bot, I strongly recommend this approach:
```python
import asyncio
from Arcapi import AsyncApi
# Initialize an API with user code
api_ = AsyncApi(user_code='000000000')
# You can assign the constant range here, or just leave the defaults
asyncio.get_event_loop().run_until_complete(api_.userinfo(start=8, end=12))
```
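Because the async client returns coroutines, a bot can query several players concurrently with `asyncio.gather`. A hedged sketch — the `fetch_userinfo` stub below stands in for `AsyncApi(user_code=...).userinfo()` and is not part of Arc-api:

```python
import asyncio

async def fetch_userinfo(user_code: str) -> dict:
    # Stand-in for AsyncApi(user_code=user_code).userinfo(start=8, end=12);
    # a real call would await the websocket round-trip instead of yielding.
    await asyncio.sleep(0)
    return {"user_code": user_code}

async def fetch_all(user_codes):
    # Schedule all queries at once instead of awaiting them one by one.
    return await asyncio.gather(*(fetch_userinfo(c) for c in user_codes))

results = asyncio.run(fetch_all(["000000000", "000000001"]))
print([r["user_code"] for r in results])  # ['000000000', '000000001']
```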
|
Arc-api
|
/Arc_api-1.1.tar.gz/Arc_api-1.1/README.md
|
README.md
|
import re
import json
from typing import List, Dict, Any
from websocket import create_connection
import brotli
from Arcapi.api import Api
from Arcapi.exceptions import *
class SyncApi(Api):
    user_code: str  # User's 9-digit code for login
    start: int  # The lowest chart constant to query
    end: int  # The highest chart constant to query
    timeout: int  # Timeout (seconds) for the websocket connection
def __init__(self, user_code: str, start: int = 8, end: int = 12, timeout: int = 5) -> None:
self.ws_endpoint = 'wss://arc.estertion.win:616'
if not re.fullmatch(r'\d{9}', user_code):
raise ArcInvaidUserCodeException
self.user_code = user_code
self.start = start
self.end = end
self.timeout = timeout
def call_action(self, action: str, **params) -> Any:
if 'start' in params:
_start = params['start']
else:
_start = self.start
if 'end' in params:
_end = params['end']
else:
_end = self.end
container: List[Dict] = [] # The result list for request objects
conn = create_connection(self.ws_endpoint, timeout=self.timeout)
conn.send(f'{self.user_code} {_start} {_end}')
_recv = conn.recv()
if _recv == 'invalid id':
raise ArcInvaidUserCodeException
elif _recv == 'queried':
while True:
_r = conn.recv()
if isinstance(_r, str) and _r == 'bye':
break
elif isinstance(_r, (bytes, bytearray)):
_data = json.loads(brotli.decompress(_r))
if _data['cmd'] == action:
                        if isinstance(_data['data'], list):
for _item in _data['data']:
container.append(_item)
else:
container.append(_data['data'])
else:
raise ArcUnknownException(_recv)
return container
|
Arc-api
|
/Arc_api-1.1.tar.gz/Arc_api-1.1/Arcapi/sync_api.py
|
sync_api.py
|