code | docstring | func_name | language | repo | path | url | license
---|---|---|---|---|---|---|---
def resized_crop(
img: Tensor,
top: int,
left: int,
height: int,
width: int,
size: List[int],
interpolation: InterpolationMode=InterpolationMode.BILINEAR) -> Tensor:
"""Crop the given image and resize it to desired size.
If the image is paddle Tensor, it is expected
to have [..., H, W] shape, where ... means an arbitrary number of leading dimensions
Args:
img (PIL Image or Tensor): Image to be cropped. (0,0) denotes the top left corner of the image.
top (int): Vertical component of the top left corner of the crop box.
left (int): Horizontal component of the top left corner of the crop box.
height (int): Height of the crop box.
width (int): Width of the crop box.
size (sequence or int): Desired output size. Same semantics as ``resize``.
interpolation (InterpolationMode): Desired interpolation enum defined by
:class:`paddlevision.transforms.InterpolationMode`.
Default is ``InterpolationMode.BILINEAR``. If input is Tensor, only ``InterpolationMode.NEAREST``,
``InterpolationMode.BILINEAR`` and ``InterpolationMode.BICUBIC`` are supported.
For backward compatibility integer values (e.g. ``PIL.Image.NEAREST``) are still acceptable.
Returns:
PIL Image or Tensor: Cropped image.
"""
img = crop(img, top, left, height, width)
img = resize(img, size, interpolation)
return img | Crop the given image and resize it to the desired size. | resized_crop | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_paddle/paddlevision/transforms/functional.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_paddle/paddlevision/transforms/functional.py | Apache-2.0 |
def get_params(img: Tensor, scale: List[float],
ratio: List[float]) -> Tuple[int, int, int, int]:
"""Get parameters for ``crop`` for a random sized crop.
Args:
img (PIL Image or Tensor): Input image.
scale (list): lower and upper bounds for the random area of the crop, as a fraction of the original image area
ratio (list): lower and upper bounds for the random aspect ratio of the crop
Returns:
tuple: params (i, j, h, w) to be passed to ``crop`` for a random
sized crop.
"""
width, height = F._get_image_size(img)
area = height * width
log_ratio = paddle.log(paddle.to_tensor(ratio))
for _ in range(10):
target_area = area * paddle.uniform(
shape=[1], min=scale[0], max=scale[1]).numpy().item()
aspect_ratio = paddle.exp(
paddle.uniform(
shape=[1], min=log_ratio[0], max=log_ratio[1])).numpy(
).item()
w = int(round(math.sqrt(target_area * aspect_ratio)))
h = int(round(math.sqrt(target_area / aspect_ratio)))
if 0 < w <= width and 0 < h <= height:
i = paddle.randint(
0, height - h + 1, shape=(1, )).numpy().item()
j = paddle.randint(
0, width - w + 1, shape=(1, )).numpy().item()
return i, j, h, w
# Fallback to central crop
in_ratio = float(width) / float(height)
if in_ratio < min(ratio):
w = width
h = int(round(w / min(ratio)))
elif in_ratio > max(ratio):
h = height
w = int(round(h * max(ratio)))
else: # whole image
w = width
h = height
i = (height - h) // 2
j = (width - w) // 2
return i, j, h, w | Get parameters for ``crop`` for a random sized crop. | get_params | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_paddle/paddlevision/transforms/transforms.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_paddle/paddlevision/transforms/transforms.py | Apache-2.0 |
def forward(self, img):
"""
Args:
img (PIL Image or Tensor): Image to be cropped and resized.
Returns:
PIL Image or Tensor: Randomly cropped and resized image.
"""
i, j, h, w = self.get_params(img, self.scale, self.ratio)
return F.resized_crop(img, i, j, h, w, self.size, self.interpolation) | Randomly crop the given image and resize it. | forward | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_paddle/paddlevision/transforms/transforms.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_paddle/paddlevision/transforms/transforms.py | Apache-2.0 |
def accuracy_torch(output, target, topk=(1, )):
"""Computes the accuracy over the k top predictions for the specified values of k"""
with torch.no_grad():
maxk = max(topk)
batch_size = target.size(0)
_, pred = output.topk(maxk, 1, True, True)
pred = pred.t()
correct = pred.eq(target[None])
res = []
for k in topk:
correct_k = correct[:k].flatten().sum(dtype=torch.float32)
res.append(correct_k * (100.0 / batch_size))
return res | Computes the accuracy over the k top predictions for the specified values of k | accuracy_torch | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/metric.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/metric.py | Apache-2.0 |
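A quick sanity check of the top-k computation on toy logits, assuming `accuracy_torch` is importable from this metric module:

```python
import torch

output = torch.tensor([[0.1, 0.7, 0.2],   # sample 0: top-1 prediction is class 1
                       [0.6, 0.1, 0.3]])  # sample 1: top-1 is class 0, top-2 adds class 2
target = torch.tensor([1, 2])
top1, top2 = accuracy_torch(output, target, topk=(1, 2))
print(top1.item(), top2.item())           # 50.0 100.0
```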
def synchronize_between_processes(self):
"""
Warning: does not synchronize the deque!
"""
if not is_dist_avail_and_initialized():
return
t = torch.tensor(
[self.count, self.total], dtype=torch.float64, device='cuda')
dist.barrier()
dist.all_reduce(t)
t = t.tolist()
self.count = int(t[0])
self.total = t[1] | Warning: does not synchronize the deque! | synchronize_between_processes | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/utils.py | Apache-2.0 |
def accuracy(output, target, topk=(1, )):
"""Computes the accuracy over the k top predictions for the specified values of k"""
with torch.no_grad():
maxk = max(topk)
batch_size = target.size(0)
_, pred = output.topk(maxk, 1, True, True)
pred = pred.t()
correct = pred.eq(target[None])
res = []
for k in topk:
correct_k = correct[:k].flatten().sum(dtype=torch.float32)
res.append(correct_k * (100.0 / batch_size))
return res | Computes the accuracy over the k top predictions for the specified values of k | accuracy | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/utils.py | Apache-2.0 |
def average_checkpoints(inputs):
"""Loads checkpoints from inputs and returns a model with averaged weights. Original implementation taken from:
https://github.com/pytorch/fairseq/blob/a48f235636557b8d3bc4922a6fa90f3a0fa57955/scripts/average_checkpoints.py#L16
Args:
inputs (List[str]): An iterable of string paths of checkpoints to load from.
Returns:
A dict of string keys mapping to various values. The 'model' key
from the returned dict should correspond to an OrderedDict mapping
string parameter names to torch Tensors.
"""
params_dict = OrderedDict()
params_keys = None
new_state = None
num_models = len(inputs)
for fpath in inputs:
with open(fpath, "rb") as f:
state = torch.load(
f,
map_location=(
lambda s, _: torch.serialization.default_restore_location(s, "cpu")
), )
# Copies over the settings from the first checkpoint
if new_state is None:
new_state = state
model_params = state["model"]
model_params_keys = list(model_params.keys())
if params_keys is None:
params_keys = model_params_keys
elif params_keys != model_params_keys:
raise KeyError("For checkpoint {}, expected list of params: {}, "
"but found: {}".format(f, params_keys,
model_params_keys))
for k in params_keys:
p = model_params[k]
if isinstance(p, torch.HalfTensor):
p = p.float()
if k not in params_dict:
params_dict[k] = p.clone()
# NOTE: clone() is needed in case of p is a shared parameter
else:
params_dict[k] += p
averaged_params = OrderedDict()
for k, v in params_dict.items():
averaged_params[k] = v
if averaged_params[k].is_floating_point():
averaged_params[k].div_(num_models)
else:
averaged_params[k] //= num_models
new_state["model"] = averaged_params
return new_state | Loads checkpoints from inputs and returns a model with averaged weights. | average_checkpoints | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/utils.py | Apache-2.0 |
def store_model_weights(model,
checkpoint_path,
checkpoint_key='model',
strict=True):
"""
This method can be used to prepare weights files for new models. It receives as
input a model architecture and a checkpoint from the training script and produces
a file with the weights ready for release.
Examples:
from torchvision import models as M
# Classification
model = M.mobilenet_v3_large(pretrained=False)
print(store_model_weights(model, './class.pth'))
# Quantized Classification
model = M.quantization.mobilenet_v3_large(pretrained=False, quantize=False)
model.fuse_model()
model.qconfig = torch.quantization.get_default_qat_qconfig('qnnpack')
_ = torch.quantization.prepare_qat(model, inplace=True)
print(store_model_weights(model, './qat.pth'))
# Object Detection
model = M.detection.fasterrcnn_mobilenet_v3_large_fpn(pretrained=False, pretrained_backbone=False)
print(store_model_weights(model, './obj.pth'))
# Segmentation
model = M.segmentation.deeplabv3_mobilenet_v3_large(pretrained=False, pretrained_backbone=False, aux_loss=True)
print(store_model_weights(model, './segm.pth', strict=False))
Args:
model (pytorch.nn.Module): The model on which the weights will be loaded for validation purposes.
checkpoint_path (str): The path of the checkpoint we will load.
checkpoint_key (str, optional): The key of the checkpoint where the model weights are stored.
Default: "model".
strict (bool): whether to strictly enforce that the keys
in :attr:`state_dict` match the keys returned by this module's
:meth:`~torch.nn.Module.state_dict` function. Default: ``True``
Returns:
output_path (str): The location where the weights are saved.
"""
# Store the new model next to the checkpoint_path
checkpoint_path = os.path.abspath(checkpoint_path)
output_dir = os.path.dirname(checkpoint_path)
# Deep copy to avoid side-effects on the model object.
model = copy.deepcopy(model)
checkpoint = torch.load(checkpoint_path, map_location='cpu')
# Load the weights to the model to validate that everything works
# and remove unnecessary weights (such as auxiliaries, etc)
model.load_state_dict(checkpoint[checkpoint_key], strict=strict)
tmp_path = os.path.join(output_dir, str(model.__hash__()))
torch.save(model.state_dict(), tmp_path)
sha256_hash = hashlib.sha256()
with open(tmp_path, "rb") as f:
# Read and update hash string value in blocks of 4K
for byte_block in iter(lambda: f.read(4096), b""):
sha256_hash.update(byte_block)
hh = sha256_hash.hexdigest()
output_path = os.path.join(output_dir, "weights-" + str(hh[:8]) + ".pth")
os.replace(tmp_path, output_path)
return output_path | Prepares weights files for new models from a model architecture and a training checkpoint. | store_model_weights | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/utils.py | Apache-2.0 |
def has_file_allowed_extension(filename: str,
extensions: Tuple[str, ...]) -> bool:
"""Checks if a file is an allowed extension.
Args:
filename (string): path to a file
extensions (tuple of strings): extensions to consider (lowercase)
Returns:
bool: True if the filename ends with one of given extensions
"""
return filename.lower().endswith(extensions) | Checks if a file has an allowed extension. | has_file_allowed_extension | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/datasets/folder.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/datasets/folder.py | Apache-2.0 |
def find_classes(directory: str) -> Tuple[List[str], Dict[str, int]]:
"""Finds the class folders in a dataset.
See :class:`DatasetFolder` for details.
"""
classes = sorted(
entry.name for entry in os.scandir(directory) if entry.is_dir())
if not classes:
raise FileNotFoundError(
f"Couldn't find any class folder in {directory}.")
class_to_idx = {cls_name: i for i, cls_name in enumerate(classes)}
return classes, class_to_idx | Finds the class folders in a dataset. | find_classes | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/datasets/folder.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/datasets/folder.py | Apache-2.0 |
def make_dataset(
directory: str,
class_to_idx: Optional[Dict[str, int]]=None,
extensions: Optional[Tuple[str, ...]]=None,
is_valid_file: Optional[Callable[[str], bool]]=None, ) -> List[Tuple[
str, int]]:
"""Generates a list of samples of a form (path_to_sample, class).
See :class:`DatasetFolder` for details.
Note: The class_to_idx parameter is here optional and will use the logic of the ``find_classes`` function
by default.
"""
directory = os.path.expanduser(directory)
if class_to_idx is None:
_, class_to_idx = find_classes(directory)
elif not class_to_idx:
raise ValueError(
"'class_to_index' must have at least one entry to collect any samples."
)
both_none = extensions is None and is_valid_file is None
both_something = extensions is not None and is_valid_file is not None
if both_none or both_something:
raise ValueError(
"Both extensions and is_valid_file cannot be None or not None at the same time"
)
if extensions is not None:
def is_valid_file(x: str) -> bool:
return has_file_allowed_extension(
x, cast(Tuple[str, ...], extensions))
is_valid_file = cast(Callable[[str], bool], is_valid_file)
instances = []
available_classes = set()
for target_class in sorted(class_to_idx.keys()):
class_index = class_to_idx[target_class]
target_dir = os.path.join(directory, target_class)
if not os.path.isdir(target_dir):
continue
for root, _, fnames in sorted(os.walk(target_dir, followlinks=True)):
for fname in sorted(fnames):
if is_valid_file(fname):
path = os.path.join(root, fname)
item = path, class_index
instances.append(item)
if target_class not in available_classes:
available_classes.add(target_class)
return instances | Generates a list of samples of the form (path_to_sample, class). | make_dataset | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/datasets/folder.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/datasets/folder.py | Apache-2.0 |
def make_dataset(
directory: str,
class_to_idx: Dict[str, int],
extensions: Optional[Tuple[str, ...]]=None,
is_valid_file: Optional[Callable[[str], bool]]=None, ) -> List[
Tuple[str, int]]:
"""Generates a list of samples of a form (path_to_sample, class).
This can be overridden to e.g. read files from a compressed zip file instead of from the disk.
Args:
directory (str): root dataset directory, corresponding to ``self.root``.
class_to_idx (Dict[str, int]): Dictionary mapping class name to class index.
extensions (optional): A list of allowed extensions.
Either extensions or is_valid_file should be passed. Defaults to None.
is_valid_file (optional): A function that takes the path of a file
and checks if the file is a valid file
(used to check for corrupt files). Both extensions and
is_valid_file should not be passed. Defaults to None.
Raises:
ValueError: In case ``class_to_idx`` is empty.
ValueError: In case ``extensions`` and ``is_valid_file`` are None or both are not None.
FileNotFoundError: In case no valid file was found for any class.
Returns:
List[Tuple[str, int]]: samples of a form (path_to_sample, class)
"""
if class_to_idx is None:
# prevent potential bug since make_dataset() would use the class_to_idx logic of the
# find_classes() function, instead of using that of the find_classes() method, which
# is potentially overridden and thus could have a different logic.
raise ValueError("The class_to_idx parameter cannot be None.")
return make_dataset(
directory,
class_to_idx,
extensions=extensions,
is_valid_file=is_valid_file) | Generates a list of samples of the form (path_to_sample, class). | make_dataset | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/datasets/folder.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/datasets/folder.py | Apache-2.0 |
def __getitem__(self, index: int) -> Tuple[Any, Any]:
"""
Args:
index (int): Index
Returns:
tuple: (sample, target) where target is class_index of the target class.
"""
path, target = self.samples[index]
sample = self.loader(path)
if self.transform is not None:
sample = self.transform(sample)
if self.target_transform is not None:
target = self.target_transform(target)
return sample, target | Returns the (sample, target) pair at the given index. | __getitem__ | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/datasets/folder.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/datasets/folder.py | Apache-2.0 |
def __init__(
self,
inverted_residual_setting: List[InvertedResidualConfig],
last_channel: int,
num_classes: int=1000,
block: Optional[Callable[..., nn.Module]]=None,
norm_layer: Optional[Callable[..., nn.Module]]=None,
dropout: float=0.2,
**kwargs: Any, ) -> None:
"""
MobileNet V3 main class
Args:
inverted_residual_setting (List[InvertedResidualConfig]): Network structure
last_channel (int): The number of channels on the penultimate layer
num_classes (int): Number of classes
block (Optional[Callable[..., nn.Module]]): Module specifying inverted residual building block for mobilenet
norm_layer (Optional[Callable[..., nn.Module]]): Module specifying the normalization layer to use
dropout (float): The dropout probability
"""
super().__init__()
if not inverted_residual_setting:
raise ValueError(
"The inverted_residual_setting should not be empty")
elif not (isinstance(inverted_residual_setting, Sequence) and all([
isinstance(s, InvertedResidualConfig)
for s in inverted_residual_setting
])):
raise TypeError(
"The inverted_residual_setting should be List[InvertedResidualConfig]"
)
if block is None:
block = InvertedResidual
if norm_layer is None:
norm_layer = partial(nn.BatchNorm2d, eps=0.001, momentum=0.01)
layers: List[nn.Module] = []
# building first layer
firstconv_output_channels = inverted_residual_setting[0].input_channels
layers.append(
ConvNormActivation(
3,
firstconv_output_channels,
kernel_size=3,
stride=2,
norm_layer=norm_layer,
activation_layer=nn.Hardswish, ))
# building inverted residual blocks
for cnf in inverted_residual_setting:
layers.append(block(cnf, norm_layer))
# building last several layers
lastconv_input_channels = inverted_residual_setting[-1].out_channels
lastconv_output_channels = 6 * lastconv_input_channels
layers.append(
ConvNormActivation(
lastconv_input_channels,
lastconv_output_channels,
kernel_size=1,
norm_layer=norm_layer,
activation_layer=nn.Hardswish, ))
self.features = nn.Sequential(*layers)
self.avgpool = nn.AdaptiveAvgPool2d(1)
self.classifier = nn.Sequential(
nn.Linear(lastconv_output_channels, last_channel),
nn.Hardswish(inplace=True),
nn.Dropout(
p=dropout, inplace=True),
nn.Linear(last_channel, num_classes), )
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, mode="fan_out")
if m.bias is not None:
nn.init.zeros_(m.bias)
elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):
nn.init.ones_(m.weight)
nn.init.zeros_(m.bias)
elif isinstance(m, nn.Linear):
nn.init.normal_(m.weight, 0, 0.01)
nn.init.zeros_(m.bias) | MobileNet V3 main class. | __init__ | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/models/mobilenet_v3_torch.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/models/mobilenet_v3_torch.py | Apache-2.0 |
def mobilenet_v3_large(pretrained: bool=False,
progress: bool=True,
**kwargs: Any) -> MobileNetV3:
"""
Constructs a large MobileNetV3 architecture from
`"Searching for MobileNetV3" <https://arxiv.org/abs/1905.02244>`_.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
progress (bool): If True, displays a progress bar of the download to stderr
"""
arch = "mobilenet_v3_large"
inverted_residual_setting, last_channel = _mobilenet_v3_conf(arch,
**kwargs)
return _mobilenet_v3(arch, inverted_residual_setting, last_channel,
pretrained, progress, **kwargs) | Constructs a large MobileNetV3 architecture from "Searching for MobileNetV3". | mobilenet_v3_large | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/models/mobilenet_v3_torch.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/models/mobilenet_v3_torch.py | Apache-2.0 |
def mobilenet_v3_small(pretrained: bool=False,
progress: bool=True,
**kwargs: Any) -> MobileNetV3:
"""
Constructs a small MobileNetV3 architecture from
`"Searching for MobileNetV3" <https://arxiv.org/abs/1905.02244>`_.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
progress (bool): If True, displays a progress bar of the download to stderr
"""
arch = "mobilenet_v3_small"
inverted_residual_setting, last_channel = _mobilenet_v3_conf(arch,
**kwargs)
return _mobilenet_v3(arch, inverted_residual_setting, last_channel,
pretrained, progress, **kwargs) | Constructs a small MobileNetV3 architecture from "Searching for MobileNetV3". | mobilenet_v3_small | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/models/mobilenet_v3_torch.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/models/mobilenet_v3_torch.py | Apache-2.0 |
def _make_divisible(v: float, divisor: int,
min_value: Optional[int]=None) -> int:
"""
This function is taken from the original tf repo.
It ensures that all layers have a channel number that is divisible by 8
It can be seen here:
https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet.py
"""
if min_value is None:
min_value = divisor
new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
# Make sure that round down does not go down by more than 10%.
if new_v < 0.9 * v:
new_v += divisor
return new_v | Ensures that all layers have a channel number that is divisible by 8. | _make_divisible | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/models/_utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/models/_utils.py | Apache-2.0 |
def get_params(transform_num: int) -> Tuple[int, Tensor, Tensor]:
"""Get parameters for autoaugment transformation
Returns:
params required by the autoaugment transformation
"""
policy_id = torch.randint(transform_num, (1, )).item()
probs = torch.rand((2, ))
signs = torch.randint(2, (2, ))
return policy_id, probs, signs | Get parameters for the autoaugment transformation. | get_params | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/autoaugment.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/autoaugment.py | Apache-2.0 |
def forward(self, img: Tensor):
"""
img (PIL Image or Tensor): Image to be transformed.
Returns:
PIL Image or Tensor: AutoAugmented image.
"""
fill = self.fill
if isinstance(img, Tensor):
if isinstance(fill, (int, float)):
fill = [float(fill)] * F._get_image_num_channels(img)
elif fill is not None:
fill = [float(f) for f in fill]
transform_id, probs, signs = self.get_params(len(self.transforms))
for i, (op_name, p,
magnitude_id) in enumerate(self.transforms[transform_id]):
if probs[i] <= p:
magnitudes, signed = self._get_op_meta(op_name)
magnitude = float(magnitudes[magnitude_id].item()) \
if magnitudes is not None and magnitude_id is not None else 0.0
if signed is not None and signed and signs[i] == 0:
magnitude *= -1.0
if op_name == "ShearX":
img = F.affine(
img,
angle=0.0,
translate=[0, 0],
scale=1.0,
shear=[math.degrees(magnitude), 0.0],
interpolation=self.interpolation,
fill=fill)
elif op_name == "ShearY":
img = F.affine(
img,
angle=0.0,
translate=[0, 0],
scale=1.0,
shear=[0.0, math.degrees(magnitude)],
interpolation=self.interpolation,
fill=fill)
elif op_name == "TranslateX":
img = F.affine(
img,
angle=0.0,
translate=[
int(F._get_image_size(img)[0] * magnitude), 0
],
scale=1.0,
interpolation=self.interpolation,
shear=[0.0, 0.0],
fill=fill)
elif op_name == "TranslateY":
img = F.affine(
img,
angle=0.0,
translate=[
0, int(F._get_image_size(img)[1] * magnitude)
],
scale=1.0,
interpolation=self.interpolation,
shear=[0.0, 0.0],
fill=fill)
elif op_name == "Rotate":
img = F.rotate(
img,
magnitude,
interpolation=self.interpolation,
fill=fill)
elif op_name == "Brightness":
img = F.adjust_brightness(img, 1.0 + magnitude)
elif op_name == "Color":
img = F.adjust_saturation(img, 1.0 + magnitude)
elif op_name == "Contrast":
img = F.adjust_contrast(img, 1.0 + magnitude)
elif op_name == "Sharpness":
img = F.adjust_sharpness(img, 1.0 + magnitude)
elif op_name == "Posterize":
img = F.posterize(img, int(magnitude))
elif op_name == "Solarize":
img = F.solarize(img, magnitude)
elif op_name == "AutoContrast":
img = F.autocontrast(img)
elif op_name == "Equalize":
img = F.equalize(img)
elif op_name == "Invert":
img = F.invert(img)
else:
raise ValueError(
"The provided operator {} is not recognized.".format(
op_name))
return img | Returns the AutoAugmented image. | forward | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/autoaugment.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/autoaugment.py | Apache-2.0 |
def to_tensor(pic):
"""Convert a ``PIL Image`` or ``numpy.ndarray`` to tensor.
This function does not support torchscript.
See :class:`~torchvision.transforms.ToTensor` for more details.
Args:
pic (PIL Image or numpy.ndarray): Image to be converted to tensor.
Returns:
Tensor: Converted image.
"""
if not (F_pil._is_pil_image(pic) or _is_numpy(pic)):
raise TypeError('pic should be PIL Image or ndarray. Got {}'.format(
type(pic)))
if _is_numpy(pic) and not _is_numpy_image(pic):
raise ValueError('pic should be 2/3 dimensional. Got {} dimensions.'.
format(pic.ndim))
default_float_dtype = torch.get_default_dtype()
if isinstance(pic, np.ndarray):
# handle numpy array
if pic.ndim == 2:
pic = pic[:, :, None]
img = torch.from_numpy(pic.transpose((2, 0, 1))).contiguous()
# backward compatibility
if isinstance(img, torch.ByteTensor):
return img.to(dtype=default_float_dtype).div(255)
else:
return img
if accimage is not None and isinstance(pic, accimage.Image):
nppic = np.zeros(
[pic.channels, pic.height, pic.width], dtype=np.float32)
pic.copyto(nppic)
return torch.from_numpy(nppic).to(dtype=default_float_dtype)
# handle PIL Image
mode_to_nptype = {'I': np.int32, 'I;16': np.int16, 'F': np.float32}
img = torch.from_numpy(
np.array(
pic, mode_to_nptype.get(pic.mode, np.uint8), copy=True))
if pic.mode == '1':
img = 255 * img
img = img.view(pic.size[1], pic.size[0], len(pic.getbands()))
# put it from HWC to CHW format
img = img.permute((2, 0, 1)).contiguous()
if isinstance(img, torch.ByteTensor):
return img.to(dtype=default_float_dtype).div(255)
else:
return img | Convert a ``PIL Image`` or ``numpy.ndarray`` to tensor. | to_tensor | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | Apache-2.0 |
def pil_to_tensor(pic):
"""Convert a ``PIL Image`` to a tensor of the same type.
This function does not support torchscript.
See :class:`~torchvision.transforms.PILToTensor` for more details.
Args:
pic (PIL Image): Image to be converted to tensor.
Returns:
Tensor: Converted image.
"""
if not F_pil._is_pil_image(pic):
raise TypeError('pic should be PIL Image. Got {}'.format(type(pic)))
if accimage is not None and isinstance(pic, accimage.Image):
# accimage format is always uint8 internally, so always return uint8 here
nppic = np.zeros([pic.channels, pic.height, pic.width], dtype=np.uint8)
pic.copyto(nppic)
return torch.as_tensor(nppic)
# handle PIL Image
img = torch.as_tensor(np.asarray(pic))
img = img.view(pic.size[1], pic.size[0], len(pic.getbands()))
# put it from HWC to CHW format
img = img.permute((2, 0, 1))
return img | Convert a ``PIL Image`` to a tensor of the same type. | pil_to_tensor | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | Apache-2.0 |
def convert_image_dtype(image: torch.Tensor,
dtype: torch.dtype=torch.float) -> torch.Tensor:
"""Convert a tensor image to the given ``dtype`` and scale the values accordingly
This function does not support PIL Image.
Args:
image (torch.Tensor): Image to be converted
dtype (torch.dtype): Desired data type of the output
Returns:
Tensor: Converted image
.. note::
When converting from a smaller to a larger integer ``dtype`` the maximum values are **not** mapped exactly.
If converted back and forth, this mismatch has no effect.
Raises:
RuntimeError: When trying to cast :class:`torch.float32` to :class:`torch.int32` or :class:`torch.int64` as
well as for trying to cast :class:`torch.float64` to :class:`torch.int64`. These conversions might lead to
overflow errors since the floating point ``dtype`` cannot store consecutive integers over the whole range
of the integer ``dtype``.
"""
if not isinstance(image, torch.Tensor):
raise TypeError('Input img should be Tensor Image')
return F_t.convert_image_dtype(image, dtype) | Convert a tensor image to the given ``dtype`` and scale the values accordingly. | convert_image_dtype | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | Apache-2.0 |
def to_pil_image(pic, mode=None):
"""Convert a tensor or an ndarray to PIL Image. This function does not support torchscript.
See :class:`~torchvision.transforms.ToPILImage` for more details.
Args:
pic (Tensor or numpy.ndarray): Image to be converted to PIL Image.
mode (`PIL.Image mode`_): color space and pixel depth of input data (optional).
.. _PIL.Image mode: https://pillow.readthedocs.io/en/latest/handbook/concepts.html#concept-modes
Returns:
PIL Image: Image converted to PIL Image.
"""
if not (isinstance(pic, torch.Tensor) or isinstance(pic, np.ndarray)):
raise TypeError('pic should be Tensor or ndarray. Got {}.'.format(
type(pic)))
elif isinstance(pic, torch.Tensor):
if pic.ndimension() not in {2, 3}:
raise ValueError(
'pic should be 2/3 dimensional. Got {} dimensions.'.format(
pic.ndimension()))
elif pic.ndimension() == 2:
# if 2D image, add channel dimension (CHW)
pic = pic.unsqueeze(0)
# check number of channels
if pic.shape[-3] > 4:
raise ValueError(
'pic should not have > 4 channels. Got {} channels.'.format(
pic.shape[-3]))
elif isinstance(pic, np.ndarray):
if pic.ndim not in {2, 3}:
raise ValueError(
'pic should be 2/3 dimensional. Got {} dimensions.'.format(
pic.ndim))
elif pic.ndim == 2:
# if 2D image, add channel dimension (HWC)
pic = np.expand_dims(pic, 2)
# check number of channels
if pic.shape[-1] > 4:
raise ValueError(
'pic should not have > 4 channels. Got {} channels.'.format(
pic.shape[-1]))
npimg = pic
if isinstance(pic, torch.Tensor):
if pic.is_floating_point() and mode != 'F':
pic = pic.mul(255).byte()
npimg = np.transpose(pic.cpu().numpy(), (1, 2, 0))
if not isinstance(npimg, np.ndarray):
raise TypeError('Input pic must be a torch.Tensor or NumPy ndarray, ' +
'not {}'.format(type(npimg)))
if npimg.shape[2] == 1:
expected_mode = None
npimg = npimg[:, :, 0]
if npimg.dtype == np.uint8:
expected_mode = 'L'
elif npimg.dtype == np.int16:
expected_mode = 'I;16'
elif npimg.dtype == np.int32:
expected_mode = 'I'
elif npimg.dtype == np.float32:
expected_mode = 'F'
if mode is not None and mode != expected_mode:
raise ValueError(
"Incorrect mode ({}) supplied for input type {}. Should be {}"
.format(mode, npimg.dtype, expected_mode))
mode = expected_mode
elif npimg.shape[2] == 2:
permitted_2_channel_modes = ['LA']
if mode is not None and mode not in permitted_2_channel_modes:
raise ValueError("Only modes {} are supported for 2D inputs".
format(permitted_2_channel_modes))
if mode is None and npimg.dtype == np.uint8:
mode = 'LA'
elif npimg.shape[2] == 4:
permitted_4_channel_modes = ['RGBA', 'CMYK', 'RGBX']
if mode is not None and mode not in permitted_4_channel_modes:
raise ValueError("Only modes {} are supported for 4D inputs".
format(permitted_4_channel_modes))
if mode is None and npimg.dtype == np.uint8:
mode = 'RGBA'
else:
permitted_3_channel_modes = ['RGB', 'YCbCr', 'HSV']
if mode is not None and mode not in permitted_3_channel_modes:
raise ValueError("Only modes {} are supported for 3D inputs".
format(permitted_3_channel_modes))
if mode is None and npimg.dtype == np.uint8:
mode = 'RGB'
if mode is None:
raise TypeError('Input type {} is not supported'.format(npimg.dtype))
return Image.fromarray(npimg, mode=mode) | Convert a tensor or an ndarray to PIL Image. | to_pil_image | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | Apache-2.0 |
def normalize(tensor: Tensor,
mean: List[float],
std: List[float],
inplace: bool=False) -> Tensor:
"""Normalize a float tensor image with mean and standard deviation.
This transform does not support PIL Image.
.. note::
This transform acts out of place by default, i.e., it does not mutate the input tensor.
See :class:`~torchvision.transforms.Normalize` for more details.
Args:
tensor (Tensor): Float tensor image of size (C, H, W) or (B, C, H, W) to be normalized.
mean (sequence): Sequence of means for each channel.
std (sequence): Sequence of standard deviations for each channel.
inplace(bool,optional): Bool to make this operation inplace.
Returns:
Tensor: Normalized Tensor image.
"""
if not isinstance(tensor, torch.Tensor):
raise TypeError('Input tensor should be a torch tensor. Got {}.'.
format(type(tensor)))
if not tensor.is_floating_point():
raise TypeError('Input tensor should be a float tensor. Got {}.'.
format(tensor.dtype))
if tensor.ndim < 3:
raise ValueError(
'Expected tensor to be a tensor image of size (..., C, H, W). Got tensor.size() = '
'{}.'.format(tensor.size()))
if not inplace:
tensor = tensor.clone()
dtype = tensor.dtype
mean = torch.as_tensor(mean, dtype=dtype, device=tensor.device)
std = torch.as_tensor(std, dtype=dtype, device=tensor.device)
if (std == 0).any():
raise ValueError(
'std evaluated to zero after conversion to {}, leading to division by zero.'.
format(dtype))
if mean.ndim == 1:
mean = mean.view(-1, 1, 1)
if std.ndim == 1:
std = std.view(-1, 1, 1)
tensor.sub_(mean).div_(std)
return tensor | Normalize a float tensor image with mean and standard deviation. | normalize | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | Apache-2.0 |
def resize(img: Tensor,
size: List[int],
interpolation: InterpolationMode=InterpolationMode.BILINEAR,
max_size: Optional[int]=None,
antialias: Optional[bool]=None) -> Tensor:
r"""Resize the input image to the given size.
If the image is a torch Tensor, it is expected
to have [..., H, W] shape, where ... means an arbitrary number of leading dimensions.
.. warning::
The output image might be different depending on its type: when downsampling, the interpolation of PIL images
and tensors is slightly different, because PIL applies antialiasing. This may lead to significant differences
in the performance of a network. Therefore, it is preferable to train and serve a model with the same input
types. See also below the ``antialias`` parameter, which can help make the output of PIL images and tensors
closer.
Args:
img (PIL Image or Tensor): Image to be resized.
size (sequence or int): Desired output size. If size is a sequence like
(h, w), the output size will be matched to this. If size is an int,
the smaller edge of the image will be matched to this number maintaining
the aspect ratio. i.e, if height > width, then image will be rescaled to
:math:`\left(\text{size} \times \frac{\text{height}}{\text{width}}, \text{size}\right)`.
.. note::
In torchscript mode size as single int is not supported, use a sequence of length 1: ``[size, ]``.
interpolation (InterpolationMode): Desired interpolation enum defined by
:class:`torchvision.transforms.InterpolationMode`.
Default is ``InterpolationMode.BILINEAR``. If input is Tensor, only ``InterpolationMode.NEAREST``,
``InterpolationMode.BILINEAR`` and ``InterpolationMode.BICUBIC`` are supported.
For backward compatibility integer values (e.g. ``PIL.Image.NEAREST``) are still acceptable.
max_size (int, optional): The maximum allowed for the longer edge of
the resized image: if the longer edge of the image is greater
than ``max_size`` after being resized according to ``size``, then
the image is resized again so that the longer edge is equal to
``max_size``. As a result, ``size`` might be overruled, i.e. the
smaller edge may be shorter than ``size``. This is only supported
if ``size`` is an int (or a sequence of length 1 in torchscript
mode).
antialias (bool, optional): antialias flag. If ``img`` is PIL Image, the flag is ignored and anti-alias
is always used. If ``img`` is Tensor, the flag is False by default and can be set to True only for
``InterpolationMode.BILINEAR`` mode. This can help make the output for PIL images and tensors
closer.
.. warning::
There is no autodiff support for ``antialias=True`` option with input ``img`` as Tensor.
Returns:
PIL Image or Tensor: Resized image.
"""
# Backward compatibility with integer value
if isinstance(interpolation, int):
warnings.warn(
"Argument interpolation should be of type InterpolationMode instead of int. "
"Please, use InterpolationMode enum.")
interpolation = _interpolation_modes_from_int(interpolation)
if not isinstance(interpolation, InterpolationMode):
raise TypeError("Argument interpolation should be a InterpolationMode")
if not isinstance(img, torch.Tensor):
if antialias is not None and not antialias:
warnings.warn(
"Anti-alias option is always applied for PIL Image input. Argument antialias is ignored."
)
pil_interpolation = pil_modes_mapping[interpolation]
return F_pil.resize(
img, size=size, interpolation=pil_interpolation, max_size=max_size)
return F_t.resize(
img,
size=size,
interpolation=interpolation.value,
max_size=max_size,
antialias=antialias) | Resize the input image to the given size. | resize | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | Apache-2.0 |
def pad(img: Tensor,
padding: List[int],
fill: int=0,
padding_mode: str="constant") -> Tensor:
r"""Pad the given image on all sides with the given "pad" value.
If the image is a torch Tensor, it is expected
to have [..., H, W] shape, where ... means at most 2 leading dimensions for modes reflect and symmetric,
at most 3 leading dimensions for mode edge,
and an arbitrary number of leading dimensions for mode constant.
Args:
img (PIL Image or Tensor): Image to be padded.
padding (int or sequence): Padding on each border. If a single int is provided this
is used to pad all borders. If sequence of length 2 is provided this is the padding
on left/right and top/bottom respectively. If a sequence of length 4 is provided
this is the padding for the left, top, right and bottom borders respectively.
.. note::
In torchscript mode padding as single int is not supported, use a sequence of
length 1: ``[padding, ]``.
fill (number or str or tuple): Pixel fill value for constant fill. Default is 0.
If a tuple of length 3, it is used to fill R, G, B channels respectively.
This value is only used when the padding_mode is constant.
Only number is supported for torch Tensor.
Only int or str or tuple value is supported for PIL Image.
padding_mode (str): Type of padding. Should be: constant, edge, reflect or symmetric.
Default is constant.
- constant: pads with a constant value, this value is specified with fill
- edge: pads with the last value at the edge of the image.
If input a 5D torch Tensor, the last 3 dimensions will be padded instead of the last 2
- reflect: pads with reflection of image without repeating the last value on the edge.
For example, padding [1, 2, 3, 4] with 2 elements on both sides in reflect mode
will result in [3, 2, 1, 2, 3, 4, 3, 2]
- symmetric: pads with reflection of image repeating the last value on the edge.
For example, padding [1, 2, 3, 4] with 2 elements on both sides in symmetric mode
will result in [2, 1, 1, 2, 3, 4, 4, 3]
Returns:
PIL Image or Tensor: Padded image.
"""
if not isinstance(img, torch.Tensor):
return F_pil.pad(img,
padding=padding,
fill=fill,
padding_mode=padding_mode)
return F_t.pad(img, padding=padding, fill=fill, padding_mode=padding_mode) | Pad the given image on all sides with the given "pad" value. | pad | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | Apache-2.0 |
def crop(img: Tensor, top: int, left: int, height: int, width: int) -> Tensor:
"""Crop the given image at specified location and output size.
If the image is torch Tensor, it is expected
to have [..., H, W] shape, where ... means an arbitrary number of leading dimensions.
If image size is smaller than output size along any edge, image is padded with 0 and then cropped.
Args:
img (PIL Image or Tensor): Image to be cropped. (0,0) denotes the top left corner of the image.
top (int): Vertical component of the top left corner of the crop box.
left (int): Horizontal component of the top left corner of the crop box.
height (int): Height of the crop box.
width (int): Width of the crop box.
Returns:
PIL Image or Tensor: Cropped image.
"""
if not isinstance(img, torch.Tensor):
return F_pil.crop(img, top, left, height, width)
return F_t.crop(img, top, left, height, width) | Crop the given image at the specified location and output size. | crop | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | Apache-2.0 |
def center_crop(img: Tensor, output_size: List[int]) -> Tensor:
"""Crops the given image at the center.
If the image is a torch Tensor, it is expected
to have [..., H, W] shape, where ... means an arbitrary number of leading dimensions.
If the image size is smaller than the output size along any edge, the image is padded with 0 and then center cropped.
Args:
img (PIL Image or Tensor): Image to be cropped.
output_size (sequence or int): (height, width) of the crop box. If int or sequence with single int,
it is used for both directions.
Returns:
PIL Image or Tensor: Cropped image.
"""
if isinstance(output_size, numbers.Number):
output_size = (int(output_size), int(output_size))
elif isinstance(output_size, (tuple, list)) and len(output_size) == 1:
output_size = (output_size[0], output_size[0])
image_width, image_height = _get_image_size(img)
crop_height, crop_width = output_size
if crop_width > image_width or crop_height > image_height:
padding_ltrb = [
(crop_width - image_width) // 2 if crop_width > image_width else 0,
(crop_height - image_height) // 2
if crop_height > image_height else 0,
(crop_width - image_width + 1) // 2
if crop_width > image_width else 0,
(crop_height - image_height + 1) // 2
if crop_height > image_height else 0,
]
img = pad(img, padding_ltrb, fill=0) # PIL uses fill value 0
image_width, image_height = _get_image_size(img)
if crop_width == image_width and crop_height == image_height:
return img
crop_top = int(round((image_height - crop_height) / 2.))
crop_left = int(round((image_width - crop_width) / 2.))
return crop(img, crop_top, crop_left, crop_height, crop_width) | Crops the given image at the center.
If the image is torch Tensor, it is expected
to have [..., H, W] shape, where ... means an arbitrary number of leading dimensions.
If image size is smaller than output size along any edge, image is padded with 0 and then center cropped.
Args:
img (PIL Image or Tensor): Image to be cropped.
output_size (sequence or int): (height, width) of the crop box. If int or sequence with single int,
it is used for both directions.
Returns:
PIL Image or Tensor: Cropped image.
| center_crop | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | Apache-2.0 |
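
A quick sketch of the pad-then-crop behavior described above, under the same
assumed import; the input is deliberately smaller than the crop box.

import torch
from torchvision.transforms import functional as F

small = torch.rand(3, 50, 50)
out = F.center_crop(small, [64, 64])  # smaller input is zero-padded before cropping
print(out.shape)  # torch.Size([3, 64, 64])
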
def resized_crop(
img: Tensor,
top: int,
left: int,
height: int,
width: int,
size: List[int],
interpolation: InterpolationMode=InterpolationMode.BILINEAR) -> Tensor:
"""Crop the given image and resize it to desired size.
If the image is torch Tensor, it is expected
to have [..., H, W] shape, where ... means an arbitrary number of leading dimensions
Notably used in :class:`~torchvision.transforms.RandomResizedCrop`.
Args:
img (PIL Image or Tensor): Image to be cropped. (0,0) denotes the top left corner of the image.
top (int): Vertical component of the top left corner of the crop box.
left (int): Horizontal component of the top left corner of the crop box.
height (int): Height of the crop box.
width (int): Width of the crop box.
size (sequence or int): Desired output size. Same semantics as ``resize``.
interpolation (InterpolationMode): Desired interpolation enum defined by
:class:`torchvision.transforms.InterpolationMode`.
Default is ``InterpolationMode.BILINEAR``. If input is Tensor, only ``InterpolationMode.NEAREST``,
``InterpolationMode.BILINEAR`` and ``InterpolationMode.BICUBIC`` are supported.
For backward compatibility integer values (e.g. ``PIL.Image.NEAREST``) are still acceptable.
Returns:
PIL Image or Tensor: Cropped image.
"""
img = crop(img, top, left, height, width)
img = resize(img, size, interpolation)
return img | Crop the given image and resize it to desired size.
If the image is torch Tensor, it is expected
to have [..., H, W] shape, where ... means an arbitrary number of leading dimensions
Notably used in :class:`~torchvision.transforms.RandomResizedCrop`.
Args:
img (PIL Image or Tensor): Image to be cropped. (0,0) denotes the top left corner of the image.
top (int): Vertical component of the top left corner of the crop box.
left (int): Horizontal component of the top left corner of the crop box.
height (int): Height of the crop box.
width (int): Width of the crop box.
size (sequence or int): Desired output size. Same semantics as ``resize``.
interpolation (InterpolationMode): Desired interpolation enum defined by
:class:`torchvision.transforms.InterpolationMode`.
Default is ``InterpolationMode.BILINEAR``. If input is Tensor, only ``InterpolationMode.NEAREST``,
``InterpolationMode.BILINEAR`` and ``InterpolationMode.BICUBIC`` are supported.
For backward compatibility integer values (e.g. ``PIL.Image.NEAREST``) are still acceptable.
Returns:
PIL Image or Tensor: Cropped image.
| resized_crop | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | Apache-2.0 |
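
As the implementation shows, ``resized_crop`` is just ``crop`` followed by
``resize``; a sketch verifying the equivalence (sizes are illustrative).

import torch
from torchvision.transforms import functional as F

img = torch.rand(3, 256, 256)
out = F.resized_crop(img, top=0, left=0, height=128, width=128, size=[224, 224])
ref = F.resize(F.crop(img, 0, 0, 128, 128), [224, 224])
assert torch.equal(out, ref)
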
def hflip(img: Tensor) -> Tensor:
"""Horizontally flip the given image.
Args:
img (PIL Image or Tensor): Image to be flipped. If img
is a Tensor, it is expected to be in [..., H, W] format,
where ... means it can have an arbitrary number of leading
dimensions.
Returns:
PIL Image or Tensor: Horizontally flipped image.
"""
if not isinstance(img, torch.Tensor):
return F_pil.hflip(img)
return F_t.hflip(img) | Horizontally flip the given image.
Args:
img (PIL Image or Tensor): Image to be flipped. If img
is a Tensor, it is expected to be in [..., H, W] format,
where ... means it can have an arbitrary number of leading
dimensions.
Returns:
PIL Image or Tensor: Horizontally flipped image.
| hflip | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | Apache-2.0 |
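
For tensor inputs ``hflip`` reverses the last (width) dimension; a small check
with illustrative values.

import torch
from torchvision.transforms import functional as F

img = torch.arange(6.).reshape(1, 2, 3)
assert torch.equal(F.hflip(img), img.flip(-1))
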
def _get_perspective_coeffs(startpoints: List[List[int]],
endpoints: List[List[int]]) -> List[float]:
"""Helper function to get the coefficients (a, b, c, d, e, f, g, h) for the perspective transforms.
In Perspective Transform each pixel (x, y) in the original image gets transformed as,
(x, y) -> ( (ax + by + c) / (gx + hy + 1), (dx + ey + f) / (gx + hy + 1) )
Args:
startpoints (list of list of ints): List containing four lists of two integers corresponding to four corners
``[top-left, top-right, bottom-right, bottom-left]`` of the original image.
endpoints (list of list of ints): List containing four lists of two integers corresponding to four corners
``[top-left, top-right, bottom-right, bottom-left]`` of the transformed image.
Returns:
octuple (a, b, c, d, e, f, g, h) for transforming each pixel.
"""
a_matrix = torch.zeros(2 * len(startpoints), 8, dtype=torch.float)
for i, (p1, p2) in enumerate(zip(endpoints, startpoints)):
a_matrix[2 * i, :] = torch.tensor(
[p1[0], p1[1], 1, 0, 0, 0, -p2[0] * p1[0], -p2[0] * p1[1]])
a_matrix[2 * i + 1, :] = torch.tensor(
[0, 0, 0, p1[0], p1[1], 1, -p2[1] * p1[0], -p2[1] * p1[1]])
b_matrix = torch.tensor(startpoints, dtype=torch.float).view(8)
res = torch.linalg.lstsq(a_matrix, b_matrix, driver='gels').solution
output: List[float] = res.tolist()
return output | Helper function to get the coefficients (a, b, c, d, e, f, g, h) for the perspective transforms.
In Perspective Transform each pixel (x, y) in the original image gets transformed as,
(x, y) -> ( (ax + by + c) / (gx + hy + 1), (dx + ey + f) / (gx + hy + 1) )
Args:
startpoints (list of list of ints): List containing four lists of two integers corresponding to four corners
``[top-left, top-right, bottom-right, bottom-left]`` of the original image.
endpoints (list of list of ints): List containing four lists of two integers corresponding to four corners
``[top-left, top-right, bottom-right, bottom-left]`` of the transformed image.
Returns:
octuple (a, b, c, d, e, f, g, h) for transforming each pixel.
| _get_perspective_coeffs | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | Apache-2.0 |
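
When ``startpoints`` equals ``endpoints`` the least-squares system is solved by
the identity transform, so the coefficients come out close to
(1, 0, 0, 0, 1, 0, 0, 0). A sketch; note the helper is private, so importing it
directly is an assumption.

import torch
from torchvision.transforms.functional import _get_perspective_coeffs

corners = [[0, 0], [99, 0], [99, 99], [0, 99]]
coeffs = _get_perspective_coeffs(corners, corners)
assert torch.allclose(torch.tensor(coeffs),
                      torch.tensor([1., 0., 0., 0., 1., 0., 0., 0.]), atol=1e-4)
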
def perspective(img: Tensor,
startpoints: List[List[int]],
endpoints: List[List[int]],
interpolation: InterpolationMode=InterpolationMode.BILINEAR,
fill: Optional[List[float]]=None) -> Tensor:
"""Perform perspective transform of the given image.
If the image is torch Tensor, it is expected
to have [..., H, W] shape, where ... means an arbitrary number of leading dimensions.
Args:
img (PIL Image or Tensor): Image to be transformed.
startpoints (list of list of ints): List containing four lists of two integers corresponding to four corners
``[top-left, top-right, bottom-right, bottom-left]`` of the original image.
endpoints (list of list of ints): List containing four lists of two integers corresponding to four corners
``[top-left, top-right, bottom-right, bottom-left]`` of the transformed image.
interpolation (InterpolationMode): Desired interpolation enum defined by
:class:`torchvision.transforms.InterpolationMode`. Default is ``InterpolationMode.BILINEAR``.
If input is Tensor, only ``InterpolationMode.NEAREST``, ``InterpolationMode.BILINEAR`` are supported.
For backward compatibility integer values (e.g. ``PIL.Image.NEAREST``) are still acceptable.
fill (sequence or number, optional): Pixel fill value for the area outside the transformed
            image. If given a number, the value is used for all bands.
.. note::
In torchscript mode single int/float value is not supported, please use a sequence
of length 1: ``[value, ]``.
Returns:
        PIL Image or Tensor: Transformed image.
"""
coeffs = _get_perspective_coeffs(startpoints, endpoints)
# Backward compatibility with integer value
if isinstance(interpolation, int):
warnings.warn(
"Argument interpolation should be of type InterpolationMode instead of int. "
"Please, use InterpolationMode enum.")
interpolation = _interpolation_modes_from_int(interpolation)
if not isinstance(interpolation, InterpolationMode):
raise TypeError("Argument interpolation should be a InterpolationMode")
if not isinstance(img, torch.Tensor):
pil_interpolation = pil_modes_mapping[interpolation]
return F_pil.perspective(
img, coeffs, interpolation=pil_interpolation, fill=fill)
return F_t.perspective(
img, coeffs, interpolation=interpolation.value, fill=fill) | Perform perspective transform of the given image.
If the image is torch Tensor, it is expected
to have [..., H, W] shape, where ... means an arbitrary number of leading dimensions.
Args:
img (PIL Image or Tensor): Image to be transformed.
startpoints (list of list of ints): List containing four lists of two integers corresponding to four corners
``[top-left, top-right, bottom-right, bottom-left]`` of the original image.
endpoints (list of list of ints): List containing four lists of two integers corresponding to four corners
``[top-left, top-right, bottom-right, bottom-left]`` of the transformed image.
interpolation (InterpolationMode): Desired interpolation enum defined by
:class:`torchvision.transforms.InterpolationMode`. Default is ``InterpolationMode.BILINEAR``.
If input is Tensor, only ``InterpolationMode.NEAREST``, ``InterpolationMode.BILINEAR`` are supported.
For backward compatibility integer values (e.g. ``PIL.Image.NEAREST``) are still acceptable.
fill (sequence or number, optional): Pixel fill value for the area outside the transformed
            image. If given a number, the value is used for all bands.
.. note::
In torchscript mode single int/float value is not supported, please use a sequence
of length 1: ``[value, ]``.
Returns:
        PIL Image or Tensor: Transformed image.
| perspective | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | Apache-2.0 |
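
A hedged sketch of ``perspective`` with explicit corner lists on a tensor; the
coordinates and image size are illustrative.

import torch
from torchvision.transforms import functional as F

img = torch.rand(3, 100, 100)
start = [[0, 0], [99, 0], [99, 99], [0, 99]]  # original corners
end = [[5, 5], [94, 0], [99, 99], [0, 94]]    # pull two corners inward
warped = F.perspective(img, start, end, fill=[0.0])
print(warped.shape)  # torch.Size([3, 100, 100])
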
def vflip(img: Tensor) -> Tensor:
"""Vertically flip the given image.
Args:
img (PIL Image or Tensor): Image to be flipped. If img
is a Tensor, it is expected to be in [..., H, W] format,
where ... means it can have an arbitrary number of leading
dimensions.
Returns:
PIL Image or Tensor: Vertically flipped image.
"""
if not isinstance(img, torch.Tensor):
return F_pil.vflip(img)
return F_t.vflip(img) | Vertically flip the given image.
Args:
img (PIL Image or Tensor): Image to be flipped. If img
is a Tensor, it is expected to be in [..., H, W] format,
where ... means it can have an arbitrary number of leading
dimensions.
Returns:
PIL Image or Tensor: Vertically flipped image.
| vflip | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | Apache-2.0 |
def five_crop(
img: Tensor,
size: List[int]) -> Tuple[Tensor, Tensor, Tensor, Tensor, Tensor]:
"""Crop the given image into four corners and the central crop.
If the image is torch Tensor, it is expected
to have [..., H, W] shape, where ... means an arbitrary number of leading dimensions
.. Note::
This transform returns a tuple of images and there may be a
mismatch in the number of inputs and targets your ``Dataset`` returns.
Args:
img (PIL Image or Tensor): Image to be cropped.
size (sequence or int): Desired output size of the crop. If size is an
int instead of sequence like (h, w), a square crop (size, size) is
made. If provided a sequence of length 1, it will be interpreted as (size[0], size[0]).
Returns:
tuple: tuple (tl, tr, bl, br, center)
Corresponding top left, top right, bottom left, bottom right and center crop.
"""
if isinstance(size, numbers.Number):
size = (int(size), int(size))
elif isinstance(size, (tuple, list)) and len(size) == 1:
size = (size[0], size[0])
if len(size) != 2:
raise ValueError("Please provide only two dimensions (h, w) for size.")
image_width, image_height = _get_image_size(img)
crop_height, crop_width = size
if crop_width > image_width or crop_height > image_height:
msg = "Requested crop size {} is bigger than input size {}"
raise ValueError(msg.format(size, (image_height, image_width)))
tl = crop(img, 0, 0, crop_height, crop_width)
tr = crop(img, 0, image_width - crop_width, crop_height, crop_width)
bl = crop(img, image_height - crop_height, 0, crop_height, crop_width)
br = crop(img, image_height - crop_height, image_width - crop_width,
crop_height, crop_width)
center = center_crop(img, [crop_height, crop_width])
return tl, tr, bl, br, center | Crop the given image into four corners and the central crop.
If the image is torch Tensor, it is expected
to have [..., H, W] shape, where ... means an arbitrary number of leading dimensions
.. Note::
This transform returns a tuple of images and there may be a
mismatch in the number of inputs and targets your ``Dataset`` returns.
Args:
img (PIL Image or Tensor): Image to be cropped.
size (sequence or int): Desired output size of the crop. If size is an
int instead of sequence like (h, w), a square crop (size, size) is
made. If provided a sequence of length 1, it will be interpreted as (size[0], size[0]).
Returns:
tuple: tuple (tl, tr, bl, br, center)
Corresponding top left, top right, bottom left, bottom right and center crop.
| five_crop | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | Apache-2.0 |
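
Since ``five_crop`` returns a 5-tuple, the crops are usually stacked into a
batch for test-time augmentation; a sketch with illustrative sizes.

import torch
from torchvision.transforms import functional as F

img = torch.rand(3, 256, 256)
tl, tr, bl, br, center = F.five_crop(img, [224, 224])
batch = torch.stack([tl, tr, bl, br, center])
print(batch.shape)  # torch.Size([5, 3, 224, 224])
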
def ten_crop(img: Tensor, size: List[int],
vertical_flip: bool=False) -> List[Tensor]:
"""Generate ten cropped images from the given image.
Crop the given image into four corners and the central crop plus the
flipped version of these (horizontal flipping is used by default).
If the image is torch Tensor, it is expected
to have [..., H, W] shape, where ... means an arbitrary number of leading dimensions
.. Note::
This transform returns a tuple of images and there may be a
mismatch in the number of inputs and targets your ``Dataset`` returns.
Args:
img (PIL Image or Tensor): Image to be cropped.
size (sequence or int): Desired output size of the crop. If size is an
int instead of sequence like (h, w), a square crop (size, size) is
made. If provided a sequence of length 1, it will be interpreted as (size[0], size[0]).
vertical_flip (bool): Use vertical flipping instead of horizontal
Returns:
tuple: tuple (tl, tr, bl, br, center, tl_flip, tr_flip, bl_flip, br_flip, center_flip)
Corresponding top left, top right, bottom left, bottom right and
center crop and same for the flipped image.
"""
if isinstance(size, numbers.Number):
size = (int(size), int(size))
elif isinstance(size, (tuple, list)) and len(size) == 1:
size = (size[0], size[0])
if len(size) != 2:
raise ValueError("Please provide only two dimensions (h, w) for size.")
first_five = five_crop(img, size)
if vertical_flip:
img = vflip(img)
else:
img = hflip(img)
second_five = five_crop(img, size)
return first_five + second_five | Generate ten cropped images from the given image.
Crop the given image into four corners and the central crop plus the
flipped version of these (horizontal flipping is used by default).
If the image is torch Tensor, it is expected
to have [..., H, W] shape, where ... means an arbitrary number of leading dimensions
.. Note::
This transform returns a tuple of images and there may be a
mismatch in the number of inputs and targets your ``Dataset`` returns.
Args:
img (PIL Image or Tensor): Image to be cropped.
size (sequence or int): Desired output size of the crop. If size is an
int instead of sequence like (h, w), a square crop (size, size) is
made. If provided a sequence of length 1, it will be interpreted as (size[0], size[0]).
vertical_flip (bool): Use vertical flipping instead of horizontal
Returns:
tuple: tuple (tl, tr, bl, br, center, tl_flip, tr_flip, bl_flip, br_flip, center_flip)
Corresponding top left, top right, bottom left, bottom right and
center crop and same for the flipped image.
| ten_crop | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | Apache-2.0 |
def adjust_brightness(img: Tensor, brightness_factor: float) -> Tensor:
"""Adjust brightness of an image.
Args:
img (PIL Image or Tensor): Image to be adjusted.
If img is torch Tensor, it is expected to be in [..., 1 or 3, H, W] format,
where ... means it can have an arbitrary number of leading dimensions.
brightness_factor (float): How much to adjust the brightness. Can be
            any non-negative number. 0 gives a black image, 1 gives the
original image while 2 increases the brightness by a factor of 2.
Returns:
PIL Image or Tensor: Brightness adjusted image.
"""
if not isinstance(img, torch.Tensor):
return F_pil.adjust_brightness(img, brightness_factor)
return F_t.adjust_brightness(img, brightness_factor) | Adjust brightness of an image.
Args:
img (PIL Image or Tensor): Image to be adjusted.
If img is torch Tensor, it is expected to be in [..., 1 or 3, H, W] format,
where ... means it can have an arbitrary number of leading dimensions.
brightness_factor (float): How much to adjust the brightness. Can be
            any non-negative number. 0 gives a black image, 1 gives the
original image while 2 increases the brightness by a factor of 2.
Returns:
PIL Image or Tensor: Brightness adjusted image.
| adjust_brightness | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | Apache-2.0 |
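
The factor semantics can be checked directly on a float tensor (illustrative
values; results are clamped to the valid range of the dtype).

import torch
from torchvision.transforms import functional as F

img = torch.full((3, 4, 4), 0.25)
assert torch.allclose(F.adjust_brightness(img, 0.0), torch.zeros_like(img))
assert torch.allclose(F.adjust_brightness(img, 1.0), img)
assert torch.allclose(F.adjust_brightness(img, 2.0), torch.full_like(img, 0.5))
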
def adjust_contrast(img: Tensor, contrast_factor: float) -> Tensor:
"""Adjust contrast of an image.
Args:
img (PIL Image or Tensor): Image to be adjusted.
If img is torch Tensor, it is expected to be in [..., 3, H, W] format,
where ... means it can have an arbitrary number of leading dimensions.
contrast_factor (float): How much to adjust the contrast. Can be any
            non-negative number. 0 gives a solid gray image, 1 gives the
original image while 2 increases the contrast by a factor of 2.
Returns:
PIL Image or Tensor: Contrast adjusted image.
"""
if not isinstance(img, torch.Tensor):
return F_pil.adjust_contrast(img, contrast_factor)
return F_t.adjust_contrast(img, contrast_factor) | Adjust contrast of an image.
Args:
img (PIL Image or Tensor): Image to be adjusted.
If img is torch Tensor, it is expected to be in [..., 3, H, W] format,
where ... means it can have an arbitrary number of leading dimensions.
contrast_factor (float): How much to adjust the contrast. Can be any
            non-negative number. 0 gives a solid gray image, 1 gives the
original image while 2 increases the contrast by a factor of 2.
Returns:
PIL Image or Tensor: Contrast adjusted image.
| adjust_contrast | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | Apache-2.0 |
def adjust_saturation(img: Tensor, saturation_factor: float) -> Tensor:
"""Adjust color saturation of an image.
Args:
img (PIL Image or Tensor): Image to be adjusted.
If img is torch Tensor, it is expected to be in [..., 3, H, W] format,
where ... means it can have an arbitrary number of leading dimensions.
saturation_factor (float): How much to adjust the saturation. 0 will
give a black and white image, 1 will give the original image while
2 will enhance the saturation by a factor of 2.
Returns:
PIL Image or Tensor: Saturation adjusted image.
"""
if not isinstance(img, torch.Tensor):
return F_pil.adjust_saturation(img, saturation_factor)
return F_t.adjust_saturation(img, saturation_factor) | Adjust color saturation of an image.
Args:
img (PIL Image or Tensor): Image to be adjusted.
If img is torch Tensor, it is expected to be in [..., 3, H, W] format,
where ... means it can have an arbitrary number of leading dimensions.
saturation_factor (float): How much to adjust the saturation. 0 will
give a black and white image, 1 will give the original image while
2 will enhance the saturation by a factor of 2.
Returns:
PIL Image or Tensor: Saturation adjusted image.
| adjust_saturation | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | Apache-2.0 |
def adjust_hue(img: Tensor, hue_factor: float) -> Tensor:
"""Adjust hue of an image.
The image hue is adjusted by converting the image to HSV and
cyclically shifting the intensities in the hue channel (H).
The image is then converted back to original image mode.
`hue_factor` is the amount of shift in H channel and must be in the
interval `[-0.5, 0.5]`.
See `Hue`_ for more details.
.. _Hue: https://en.wikipedia.org/wiki/Hue
Args:
img (PIL Image or Tensor): Image to be adjusted.
If img is torch Tensor, it is expected to be in [..., 3, H, W] format,
where ... means it can have an arbitrary number of leading dimensions.
            If img is PIL Image, modes "1", "L", "I" and "F", as well as modes with transparency (alpha channel), are not supported.
hue_factor (float): How much to shift the hue channel. Should be in
[-0.5, 0.5]. 0.5 and -0.5 give complete reversal of hue channel in
HSV space in positive and negative direction respectively.
0 means no shift. Therefore, both -0.5 and 0.5 will give an image
with complementary colors while 0 gives the original image.
Returns:
PIL Image or Tensor: Hue adjusted image.
"""
if not isinstance(img, torch.Tensor):
return F_pil.adjust_hue(img, hue_factor)
return F_t.adjust_hue(img, hue_factor) | Adjust hue of an image.
The image hue is adjusted by converting the image to HSV and
cyclically shifting the intensities in the hue channel (H).
The image is then converted back to original image mode.
`hue_factor` is the amount of shift in H channel and must be in the
interval `[-0.5, 0.5]`.
See `Hue`_ for more details.
.. _Hue: https://en.wikipedia.org/wiki/Hue
Args:
img (PIL Image or Tensor): Image to be adjusted.
If img is torch Tensor, it is expected to be in [..., 3, H, W] format,
where ... means it can have an arbitrary number of leading dimensions.
            If img is PIL Image, modes "1", "L", "I" and "F", as well as modes with transparency (alpha channel), are not supported.
hue_factor (float): How much to shift the hue channel. Should be in
[-0.5, 0.5]. 0.5 and -0.5 give complete reversal of hue channel in
HSV space in positive and negative direction respectively.
0 means no shift. Therefore, both -0.5 and 0.5 will give an image
with complementary colors while 0 gives the original image.
Returns:
PIL Image or Tensor: Hue adjusted image.
| adjust_hue | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | Apache-2.0 |
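
A small sketch of hue shifting on a tensor; because of the RGB-to-HSV round
trip, exact equality should not be expected even for a zero shift.

import torch
from torchvision.transforms import functional as F

img = torch.rand(3, 8, 8)
assert torch.allclose(F.adjust_hue(img, 0.0), img, atol=1e-4)  # no shift
complement = F.adjust_hue(img, 0.5)                            # complementary colors
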
def adjust_gamma(img: Tensor, gamma: float, gain: float=1) -> Tensor:
r"""Perform gamma correction on an image.
Also known as Power Law Transform. Intensities in RGB mode are adjusted
based on the following equation:
.. math::
I_{\text{out}} = 255 \times \text{gain} \times \left(\frac{I_{\text{in}}}{255}\right)^{\gamma}
See `Gamma Correction`_ for more details.
.. _Gamma Correction: https://en.wikipedia.org/wiki/Gamma_correction
Args:
img (PIL Image or Tensor): PIL Image to be adjusted.
If img is torch Tensor, it is expected to be in [..., 1 or 3, H, W] format,
where ... means it can have an arbitrary number of leading dimensions.
If img is PIL Image, modes with transparency (alpha channel) are not supported.
        gamma (float): Non-negative real number, same as :math:`\gamma` in the equation.
            gamma larger than 1 makes the shadows darker,
            while gamma smaller than 1 makes dark regions lighter.
gain (float): The constant multiplier.
Returns:
PIL Image or Tensor: Gamma correction adjusted image.
"""
if not isinstance(img, torch.Tensor):
return F_pil.adjust_gamma(img, gamma, gain)
return F_t.adjust_gamma(img, gamma, gain) | Perform gamma correction on an image.
Also known as Power Law Transform. Intensities in RGB mode are adjusted
based on the following equation:
.. math::
I_{\text{out}} = 255 \times \text{gain} \times \left(\frac{I_{\text{in}}}{255}\right)^{\gamma}
See `Gamma Correction`_ for more details.
.. _Gamma Correction: https://en.wikipedia.org/wiki/Gamma_correction
Args:
img (PIL Image or Tensor): PIL Image to be adjusted.
If img is torch Tensor, it is expected to be in [..., 1 or 3, H, W] format,
where ... means it can have an arbitrary number of leading dimensions.
If img is PIL Image, modes with transparency (alpha channel) are not supported.
        gamma (float): Non-negative real number, same as :math:`\gamma` in the equation.
            gamma larger than 1 makes the shadows darker,
            while gamma smaller than 1 makes dark regions lighter.
gain (float): The constant multiplier.
Returns:
PIL Image or Tensor: Gamma correction adjusted image.
| adjust_gamma | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | Apache-2.0 |
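
A worked instance of the equation on a float tensor, where intensities live in
[0, 1] rather than [0, 255]: with gamma 0.5 and gain 1, 0.25 maps to
0.25 ** 0.5 == 0.5.

import torch
from torchvision.transforms import functional as F

img = torch.full((3, 2, 2), 0.25)
out = F.adjust_gamma(img, gamma=0.5, gain=1.0)
assert torch.allclose(out, torch.full_like(img, 0.5))
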
def affine(img: Tensor,
angle: float,
translate: List[int],
scale: float,
shear: List[float],
interpolation: InterpolationMode=InterpolationMode.NEAREST,
fill: Optional[List[float]]=None,
resample: Optional[int]=None,
fillcolor: Optional[List[float]]=None) -> Tensor:
"""Apply affine transformation on the image keeping image center invariant.
If the image is torch Tensor, it is expected
to have [..., H, W] shape, where ... means an arbitrary number of leading dimensions.
Args:
img (PIL Image or Tensor): image to transform.
angle (number): rotation angle in degrees between -180 and 180, clockwise direction.
translate (sequence of integers): horizontal and vertical translations (post-rotation translation)
scale (float): overall scale
        shear (float or sequence): shear angle value in degrees between -180 and 180, clockwise direction.
If a sequence is specified, the first value corresponds to a shear parallel to the x axis, while
the second value corresponds to a shear parallel to the y axis.
interpolation (InterpolationMode): Desired interpolation enum defined by
:class:`torchvision.transforms.InterpolationMode`. Default is ``InterpolationMode.NEAREST``.
If input is Tensor, only ``InterpolationMode.NEAREST``, ``InterpolationMode.BILINEAR`` are supported.
For backward compatibility integer values (e.g. ``PIL.Image.NEAREST``) are still acceptable.
fill (sequence or number, optional): Pixel fill value for the area outside the transformed
            image. If given a number, the value is used for all bands.
.. note::
In torchscript mode single int/float value is not supported, please use a sequence
of length 1: ``[value, ]``.
        fillcolor (sequence, int, float): deprecated argument that will be removed in v0.10.0.
Please use the ``fill`` parameter instead.
        resample (int, optional): deprecated argument that will be removed in v0.10.0.
Please use the ``interpolation`` parameter instead.
Returns:
PIL Image or Tensor: Transformed image.
"""
if resample is not None:
warnings.warn(
"Argument resample is deprecated and will be removed since v0.10.0. Please, use interpolation instead"
)
interpolation = _interpolation_modes_from_int(resample)
# Backward compatibility with integer value
if isinstance(interpolation, int):
warnings.warn(
"Argument interpolation should be of type InterpolationMode instead of int. "
"Please, use InterpolationMode enum.")
interpolation = _interpolation_modes_from_int(interpolation)
if fillcolor is not None:
warnings.warn(
"Argument fillcolor is deprecated and will be removed since v0.10.0. Please, use fill instead"
)
fill = fillcolor
if not isinstance(angle, (int, float)):
raise TypeError("Argument angle should be int or float")
if not isinstance(translate, (list, tuple)):
raise TypeError("Argument translate should be a sequence")
if len(translate) != 2:
raise ValueError("Argument translate should be a sequence of length 2")
if scale <= 0.0:
raise ValueError("Argument scale should be positive")
    if not isinstance(shear, (numbers.Number, list, tuple)):
raise TypeError(
"Shear should be either a single value or a sequence of two values")
if not isinstance(interpolation, InterpolationMode):
raise TypeError("Argument interpolation should be a InterpolationMode")
if isinstance(angle, int):
angle = float(angle)
if isinstance(translate, tuple):
translate = list(translate)
if isinstance(shear, numbers.Number):
shear = [shear, 0.0]
if isinstance(shear, tuple):
shear = list(shear)
if len(shear) == 1:
shear = [shear[0], shear[0]]
if len(shear) != 2:
raise ValueError(
"Shear should be a sequence containing two values. Got {}".format(
shear))
img_size = _get_image_size(img)
if not isinstance(img, torch.Tensor):
# center = (img_size[0] * 0.5 + 0.5, img_size[1] * 0.5 + 0.5)
# it is visually better to estimate the center without 0.5 offset
# otherwise image rotated by 90 degrees is shifted vs output image of torch.rot90 or F_t.affine
center = [img_size[0] * 0.5, img_size[1] * 0.5]
matrix = _get_inverse_affine_matrix(center, angle, translate, scale,
shear)
pil_interpolation = pil_modes_mapping[interpolation]
return F_pil.affine(
img, matrix=matrix, interpolation=pil_interpolation, fill=fill)
translate_f = [1.0 * t for t in translate]
matrix = _get_inverse_affine_matrix([0.0, 0.0], angle, translate_f, scale,
shear)
return F_t.affine(
img, matrix=matrix, interpolation=interpolation.value, fill=fill) | Apply affine transformation on the image keeping image center invariant.
If the image is torch Tensor, it is expected
to have [..., H, W] shape, where ... means an arbitrary number of leading dimensions.
Args:
img (PIL Image or Tensor): image to transform.
angle (number): rotation angle in degrees between -180 and 180, clockwise direction.
translate (sequence of integers): horizontal and vertical translations (post-rotation translation)
scale (float): overall scale
        shear (float or sequence): shear angle value in degrees between -180 and 180, clockwise direction.
If a sequence is specified, the first value corresponds to a shear parallel to the x axis, while
the second value corresponds to a shear parallel to the y axis.
interpolation (InterpolationMode): Desired interpolation enum defined by
:class:`torchvision.transforms.InterpolationMode`. Default is ``InterpolationMode.NEAREST``.
If input is Tensor, only ``InterpolationMode.NEAREST``, ``InterpolationMode.BILINEAR`` are supported.
For backward compatibility integer values (e.g. ``PIL.Image.NEAREST``) are still acceptable.
fill (sequence or number, optional): Pixel fill value for the area outside the transformed
            image. If given a number, the value is used for all bands.
.. note::
In torchscript mode single int/float value is not supported, please use a sequence
of length 1: ``[value, ]``.
        fillcolor (sequence, int, float): deprecated argument that will be removed in v0.10.0.
Please use the ``fill`` parameter instead.
        resample (int, optional): deprecated argument that will be removed in v0.10.0.
Please use the ``interpolation`` parameter instead.
Returns:
PIL Image or Tensor: Transformed image.
| affine | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | Apache-2.0 |
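
A hedged sketch of a combined rotation, translation and scaling on a tensor;
all parameter values are illustrative.

import torch
from torchvision.transforms import functional as F
from torchvision.transforms import InterpolationMode

img = torch.rand(3, 128, 128)
out = F.affine(img, angle=15.0, translate=[10, 0], scale=1.2, shear=[0.0, 0.0],
               interpolation=InterpolationMode.BILINEAR, fill=[0.0])
print(out.shape)  # torch.Size([3, 128, 128])
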
def to_grayscale(img, num_output_channels=1):
"""Convert PIL image of any mode (RGB, HSV, LAB, etc) to grayscale version of image.
This transform does not support torch Tensor.
Args:
img (PIL Image): PIL Image to be converted to grayscale.
num_output_channels (int): number of channels of the output image. Value can be 1 or 3. Default is 1.
Returns:
PIL Image: Grayscale version of the image.
- if num_output_channels = 1 : returned image is single channel
- if num_output_channels = 3 : returned image is 3 channel with r = g = b
"""
if isinstance(img, Image.Image):
return F_pil.to_grayscale(img, num_output_channels)
raise TypeError("Input should be PIL Image") | Convert PIL image of any mode (RGB, HSV, LAB, etc) to grayscale version of image.
This transform does not support torch Tensor.
Args:
img (PIL Image): PIL Image to be converted to grayscale.
num_output_channels (int): number of channels of the output image. Value can be 1 or 3. Default is 1.
Returns:
PIL Image: Grayscale version of the image.
- if num_output_channels = 1 : returned image is single channel
- if num_output_channels = 3 : returned image is 3 channel with r = g = b
| to_grayscale | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | Apache-2.0 |
def rgb_to_grayscale(img: Tensor, num_output_channels: int=1) -> Tensor:
"""Convert RGB image to grayscale version of image.
If the image is torch Tensor, it is expected
to have [..., 3, H, W] shape, where ... means an arbitrary number of leading dimensions
Note:
        Note that this method supports only RGB images as input. For inputs in other color spaces,
        consider using :meth:`~torchvision.transforms.functional.to_grayscale` with PIL Image.
Args:
img (PIL Image or Tensor): RGB Image to be converted to grayscale.
        num_output_channels (int): number of channels of the output image. Value can be 1 or 3. Default is 1.
Returns:
PIL Image or Tensor: Grayscale version of the image.
- if num_output_channels = 1 : returned image is single channel
- if num_output_channels = 3 : returned image is 3 channel with r = g = b
"""
if not isinstance(img, torch.Tensor):
return F_pil.to_grayscale(img, num_output_channels)
return F_t.rgb_to_grayscale(img, num_output_channels) | Convert RGB image to grayscale version of image.
If the image is torch Tensor, it is expected
to have [..., 3, H, W] shape, where ... means an arbitrary number of leading dimensions
Note:
        Note that this method supports only RGB images as input. For inputs in other color spaces,
        consider using :meth:`~torchvision.transforms.functional.to_grayscale` with PIL Image.
Args:
img (PIL Image or Tensor): RGB Image to be converted to grayscale.
        num_output_channels (int): number of channels of the output image. Value can be 1 or 3. Default is 1.
Returns:
PIL Image or Tensor: Grayscale version of the image.
- if num_output_channels = 1 : returned image is single channel
- if num_output_channels = 3 : returned image is 3 channel with r = g = b
| rgb_to_grayscale | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | Apache-2.0 |
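
A quick check of the ``num_output_channels`` semantics on an illustrative
input.

import torch
from torchvision.transforms import functional as F

rgb = torch.rand(3, 32, 32)
gray1 = F.rgb_to_grayscale(rgb)                         # shape (1, 32, 32)
gray3 = F.rgb_to_grayscale(rgb, num_output_channels=3)  # r == g == b
assert gray1.shape == (1, 32, 32)
assert torch.equal(gray3[0], gray3[1]) and torch.equal(gray3[1], gray3[2])
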
def erase(img: Tensor,
i: int,
j: int,
h: int,
w: int,
v: Tensor,
inplace: bool=False) -> Tensor:
""" Erase the input Tensor Image with given value.
This transform does not support PIL Image.
Args:
img (Tensor Image): Tensor image of size (C, H, W) to be erased
i (int): i in (i,j) i.e coordinates of the upper left corner.
j (int): j in (i,j) i.e coordinates of the upper left corner.
h (int): Height of the erased region.
w (int): Width of the erased region.
v: Erasing value.
inplace(bool, optional): For in-place operations. By default is set False.
Returns:
Tensor Image: Erased image.
"""
if not isinstance(img, torch.Tensor):
raise TypeError('img should be Tensor Image. Got {}'.format(type(img)))
if not inplace:
img = img.clone()
img[..., i:i + h, j:j + w] = v
    return img | Erase the input Tensor Image with the given value.
This transform does not support PIL Image.
Args:
img (Tensor Image): Tensor image of size (C, H, W) to be erased
i (int): i in (i,j) i.e coordinates of the upper left corner.
j (int): j in (i,j) i.e coordinates of the upper left corner.
h (int): Height of the erased region.
w (int): Width of the erased region.
v: Erasing value.
inplace(bool, optional): For in-place operations. By default is set False.
Returns:
Tensor Image: Erased image.
| erase | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | Apache-2.0 |
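
Because ``erase`` is plain tensor indexing, its effect is easy to verify; the
coordinates below are illustrative.

import torch
from torchvision.transforms import functional as F

img = torch.ones(3, 8, 8)
out = F.erase(img, i=2, j=2, h=4, w=4, v=torch.tensor(0.0))
assert out[:, 2:6, 2:6].sum() == 0  # erased block is zero
assert img.sum() == 3 * 8 * 8       # inplace=False left the input intact
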
def gaussian_blur(img: Tensor,
kernel_size: List[int],
sigma: Optional[List[float]]=None) -> Tensor:
"""Performs Gaussian blurring on the image by given kernel.
If the image is torch Tensor, it is expected
to have [..., H, W] shape, where ... means an arbitrary number of leading dimensions.
Args:
img (PIL Image or Tensor): Image to be blurred
kernel_size (sequence of ints or int): Gaussian kernel size. Can be a sequence of integers
like ``(kx, ky)`` or a single integer for square kernels.
.. note::
In torchscript mode kernel_size as single int is not supported, use a sequence of
length 1: ``[ksize, ]``.
sigma (sequence of floats or float, optional): Gaussian kernel standard deviation. Can be a
sequence of floats like ``(sigma_x, sigma_y)`` or a single float to define the
same sigma in both X/Y directions. If None, then it is computed using
``kernel_size`` as ``sigma = 0.3 * ((kernel_size - 1) * 0.5 - 1) + 0.8``.
Default, None.
.. note::
In torchscript mode sigma as single float is
not supported, use a sequence of length 1: ``[sigma, ]``.
Returns:
PIL Image or Tensor: Gaussian Blurred version of the image.
"""
if not isinstance(kernel_size, (int, list, tuple)):
raise TypeError(
'kernel_size should be int or a sequence of integers. Got {}'.
format(type(kernel_size)))
if isinstance(kernel_size, int):
kernel_size = [kernel_size, kernel_size]
if len(kernel_size) != 2:
raise ValueError(
'If kernel_size is a sequence its length should be 2. Got {}'.
format(len(kernel_size)))
for ksize in kernel_size:
if ksize % 2 == 0 or ksize < 0:
raise ValueError(
'kernel_size should have odd and positive integers. Got {}'.
format(kernel_size))
if sigma is None:
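        # same value as the documented default: 0.3 * ((ksize - 1) * 0.5 - 1) + 0.8 == ksize * 0.15 + 0.35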
sigma = [ksize * 0.15 + 0.35 for ksize in kernel_size]
if sigma is not None and not isinstance(sigma, (int, float, list, tuple)):
raise TypeError(
'sigma should be either float or sequence of floats. Got {}'.
format(type(sigma)))
if isinstance(sigma, (int, float)):
sigma = [float(sigma), float(sigma)]
if isinstance(sigma, (list, tuple)) and len(sigma) == 1:
sigma = [sigma[0], sigma[0]]
if len(sigma) != 2:
raise ValueError(
'If sigma is a sequence, its length should be 2. Got {}'.format(
len(sigma)))
for s in sigma:
if s <= 0.:
raise ValueError(
'sigma should have positive values. Got {}'.format(sigma))
t_img = img
if not isinstance(img, torch.Tensor):
if not F_pil._is_pil_image(img):
raise TypeError('img should be PIL Image or Tensor. Got {}'.format(
type(img)))
t_img = to_tensor(img)
output = F_t.gaussian_blur(t_img, kernel_size, sigma)
if not isinstance(img, torch.Tensor):
output = to_pil_image(output)
    return output | Perform Gaussian blurring on the image with the given kernel.
If the image is torch Tensor, it is expected
to have [..., H, W] shape, where ... means an arbitrary number of leading dimensions.
Args:
img (PIL Image or Tensor): Image to be blurred
kernel_size (sequence of ints or int): Gaussian kernel size. Can be a sequence of integers
like ``(kx, ky)`` or a single integer for square kernels.
.. note::
In torchscript mode kernel_size as single int is not supported, use a sequence of
length 1: ``[ksize, ]``.
sigma (sequence of floats or float, optional): Gaussian kernel standard deviation. Can be a
sequence of floats like ``(sigma_x, sigma_y)`` or a single float to define the
same sigma in both X/Y directions. If None, then it is computed using
``kernel_size`` as ``sigma = 0.3 * ((kernel_size - 1) * 0.5 - 1) + 0.8``.
Default, None.
.. note::
In torchscript mode sigma as single float is
not supported, use a sequence of length 1: ``[sigma, ]``.
Returns:
PIL Image or Tensor: Gaussian Blurred version of the image.
| gaussian_blur | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | Apache-2.0 |
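
A sketch confirming the default-sigma rule stated above: for a 5x5 kernel,
sigma defaults to 5 * 0.15 + 0.35 == 1.1.

import torch
from torchvision.transforms import functional as F

img = torch.rand(3, 64, 64)
out_default = F.gaussian_blur(img, kernel_size=[5, 5])
out_explicit = F.gaussian_blur(img, kernel_size=[5, 5], sigma=[1.1, 1.1])
assert torch.allclose(out_default, out_explicit)
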
def invert(img: Tensor) -> Tensor:
"""Invert the colors of an RGB/grayscale image.
Args:
img (PIL Image or Tensor): Image to have its colors inverted.
If img is torch Tensor, it is expected to be in [..., 1 or 3, H, W] format,
where ... means it can have an arbitrary number of leading dimensions.
If img is PIL Image, it is expected to be in mode "L" or "RGB".
Returns:
PIL Image or Tensor: Color inverted image.
"""
if not isinstance(img, torch.Tensor):
return F_pil.invert(img)
return F_t.invert(img) | Invert the colors of an RGB/grayscale image.
Args:
img (PIL Image or Tensor): Image to have its colors inverted.
If img is torch Tensor, it is expected to be in [..., 1 or 3, H, W] format,
where ... means it can have an arbitrary number of leading dimensions.
If img is PIL Image, it is expected to be in mode "L" or "RGB".
Returns:
PIL Image or Tensor: Color inverted image.
| invert | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | Apache-2.0 |
def posterize(img: Tensor, bits: int) -> Tensor:
"""Posterize an image by reducing the number of bits for each color channel.
Args:
img (PIL Image or Tensor): Image to have its colors posterized.
If img is torch Tensor, it should be of type torch.uint8 and
it is expected to be in [..., 1 or 3, H, W] format, where ... means
it can have an arbitrary number of leading dimensions.
If img is PIL Image, it is expected to be in mode "L" or "RGB".
bits (int): The number of bits to keep for each channel (0-8).
Returns:
PIL Image or Tensor: Posterized image.
"""
if not (0 <= bits <= 8):
raise ValueError(
            'The number of bits should be between 0 and 8. Got {}'.format(
bits))
if not isinstance(img, torch.Tensor):
return F_pil.posterize(img, bits)
return F_t.posterize(img, bits) | Posterize an image by reducing the number of bits for each color channel.
Args:
img (PIL Image or Tensor): Image to have its colors posterized.
If img is torch Tensor, it should be of type torch.uint8 and
it is expected to be in [..., 1 or 3, H, W] format, where ... means
it can have an arbitrary number of leading dimensions.
If img is PIL Image, it is expected to be in mode "L" or "RGB".
bits (int): The number of bits to keep for each channel (0-8).
Returns:
PIL Image or Tensor: Posterized image.
| posterize | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | Apache-2.0 |
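
Posterization keeps the top ``bits`` bits of each channel; for ``bits=2`` that
is a bitwise AND with ``0b11000000``. A sketch on a ``uint8`` tensor.

import torch
from torchvision.transforms import functional as F

img = torch.randint(0, 256, (3, 4, 4), dtype=torch.uint8)
assert torch.equal(F.posterize(img, bits=2), img & 0b11000000)
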
def solarize(img: Tensor, threshold: float) -> Tensor:
"""Solarize an RGB/grayscale image by inverting all pixel values above a threshold.
Args:
img (PIL Image or Tensor): Image to have its colors inverted.
If img is torch Tensor, it is expected to be in [..., 1 or 3, H, W] format,
where ... means it can have an arbitrary number of leading dimensions.
If img is PIL Image, it is expected to be in mode "L" or "RGB".
        threshold (float): All pixels equal to or above this value are inverted.
Returns:
PIL Image or Tensor: Solarized image.
"""
if not isinstance(img, torch.Tensor):
return F_pil.solarize(img, threshold)
return F_t.solarize(img, threshold) | Solarize an RGB/grayscale image by inverting all pixel values above a threshold.
Args:
img (PIL Image or Tensor): Image to have its colors inverted.
If img is torch Tensor, it is expected to be in [..., 1 or 3, H, W] format,
where ... means it can have an arbitrary number of leading dimensions.
If img is PIL Image, it is expected to be in mode "L" or "RGB".
        threshold (float): All pixels equal to or above this value are inverted.
Returns:
PIL Image or Tensor: Solarized image.
| solarize | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | Apache-2.0 |
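
A one-pixel illustration of the threshold rule: 200 >= 128 is inverted to
255 - 200 == 55, while 10 is left alone.

import torch
from torchvision.transforms import functional as F

img = torch.tensor([[[10, 200]]], dtype=torch.uint8)
expected = torch.tensor([[[10, 55]]], dtype=torch.uint8)
assert torch.equal(F.solarize(img, 128), expected)
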
def adjust_sharpness(img: Tensor, sharpness_factor: float) -> Tensor:
"""Adjust the sharpness of an image.
Args:
img (PIL Image or Tensor): Image to be adjusted.
If img is torch Tensor, it is expected to be in [..., 1 or 3, H, W] format,
where ... means it can have an arbitrary number of leading dimensions.
sharpness_factor (float): How much to adjust the sharpness. Can be
            any non-negative number. 0 gives a blurred image, 1 gives the
original image while 2 increases the sharpness by a factor of 2.
Returns:
PIL Image or Tensor: Sharpness adjusted image.
"""
if not isinstance(img, torch.Tensor):
return F_pil.adjust_sharpness(img, sharpness_factor)
return F_t.adjust_sharpness(img, sharpness_factor) | Adjust the sharpness of an image.
Args:
img (PIL Image or Tensor): Image to be adjusted.
If img is torch Tensor, it is expected to be in [..., 1 or 3, H, W] format,
where ... means it can have an arbitrary number of leading dimensions.
sharpness_factor (float): How much to adjust the sharpness. Can be
            any non-negative number. 0 gives a blurred image, 1 gives the
original image while 2 increases the sharpness by a factor of 2.
Returns:
PIL Image or Tensor: Sharpness adjusted image.
| adjust_sharpness | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | Apache-2.0 |
def autocontrast(img: Tensor) -> Tensor:
"""Maximize contrast of an image by remapping its
pixels per channel so that the lowest becomes black and the lightest
becomes white.
Args:
img (PIL Image or Tensor): Image on which autocontrast is applied.
If img is torch Tensor, it is expected to be in [..., 1 or 3, H, W] format,
where ... means it can have an arbitrary number of leading dimensions.
If img is PIL Image, it is expected to be in mode "L" or "RGB".
Returns:
PIL Image or Tensor: An image that was autocontrasted.
"""
if not isinstance(img, torch.Tensor):
return F_pil.autocontrast(img)
return F_t.autocontrast(img) | Maximize contrast of an image by remapping its
pixels per channel so that the lowest becomes black and the lightest
becomes white.
Args:
img (PIL Image or Tensor): Image on which autocontrast is applied.
If img is torch Tensor, it is expected to be in [..., 1 or 3, H, W] format,
where ... means it can have an arbitrary number of leading dimensions.
If img is PIL Image, it is expected to be in mode "L" or "RGB".
Returns:
PIL Image or Tensor: An image that was autocontrasted.
| autocontrast | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | Apache-2.0 |
def equalize(img: Tensor) -> Tensor:
"""Equalize the histogram of an image by applying
a non-linear mapping to the input in order to create a uniform
distribution of grayscale values in the output.
Args:
img (PIL Image or Tensor): Image on which equalize is applied.
If img is torch Tensor, it is expected to be in [..., 1 or 3, H, W] format,
where ... means it can have an arbitrary number of leading dimensions.
The tensor dtype must be ``torch.uint8`` and values are expected to be in ``[0, 255]``.
If img is PIL Image, it is expected to be in mode "P", "L" or "RGB".
Returns:
PIL Image or Tensor: An image that was equalized.
"""
if not isinstance(img, torch.Tensor):
return F_pil.equalize(img)
return F_t.equalize(img) | Equalize the histogram of an image by applying
a non-linear mapping to the input in order to create a uniform
distribution of grayscale values in the output.
Args:
img (PIL Image or Tensor): Image on which equalize is applied.
If img is torch Tensor, it is expected to be in [..., 1 or 3, H, W] format,
where ... means it can have an arbitrary number of leading dimensions.
The tensor dtype must be ``torch.uint8`` and values are expected to be in ``[0, 255]``.
If img is PIL Image, it is expected to be in mode "P", "L" or "RGB".
Returns:
PIL Image or Tensor: An image that was equalized.
| equalize | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/functional.py | Apache-2.0 |
def get_params(img: Tensor,
output_size: Tuple[int, int]) -> Tuple[int, int, int, int]:
"""Get parameters for ``crop`` for a random crop.
Args:
img (PIL Image or Tensor): Image to be cropped.
output_size (tuple): Expected output size of the crop.
Returns:
tuple: params (i, j, h, w) to be passed to ``crop`` for random crop.
"""
w, h = F._get_image_size(img)
th, tw = output_size
if h + 1 < th or w + 1 < tw:
raise ValueError(
"Required crop size {} is larger then input image size {}".
format((th, tw), (h, w)))
if w == tw and h == th:
return 0, 0, h, w
i = torch.randint(0, h - th + 1, size=(1, )).item()
j = torch.randint(0, w - tw + 1, size=(1, )).item()
return i, j, th, tw | Get parameters for ``crop`` for a random crop.
Args:
img (PIL Image or Tensor): Image to be cropped.
output_size (tuple): Expected output size of the crop.
Returns:
tuple: params (i, j, h, w) to be passed to ``crop`` for random crop.
| get_params | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | Apache-2.0 |
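
Because ``get_params`` is a static method, it can be used on its own to sample
crop coordinates and apply the same random crop to paired inputs (for example
an image and its segmentation mask); sizes below are illustrative.

import torch
from torchvision import transforms
from torchvision.transforms import functional as F

img = torch.rand(3, 100, 100)
i, j, h, w = transforms.RandomCrop.get_params(img, output_size=(64, 64))
patch = F.crop(img, i, j, h, w)  # reuse (i, j, h, w) on a paired target as needed
print(patch.shape)  # torch.Size([3, 64, 64])
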
def forward(self, img):
"""
Args:
img (PIL Image or Tensor): Image to be cropped.
Returns:
PIL Image or Tensor: Cropped image.
"""
if self.padding is not None:
img = F.pad(img, self.padding, self.fill, self.padding_mode)
width, height = F._get_image_size(img)
# pad the width if needed
if self.pad_if_needed and width < self.size[1]:
padding = [self.size[1] - width, 0]
img = F.pad(img, padding, self.fill, self.padding_mode)
# pad the height if needed
if self.pad_if_needed and height < self.size[0]:
padding = [0, self.size[0] - height]
img = F.pad(img, padding, self.fill, self.padding_mode)
i, j, h, w = self.get_params(img, self.size)
return F.crop(img, i, j, h, w) |
Args:
img (PIL Image or Tensor): Image to be cropped.
Returns:
PIL Image or Tensor: Cropped image.
| forward | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | Apache-2.0 |
def forward(self, img):
"""
Args:
img (PIL Image or Tensor): Image to be flipped.
Returns:
PIL Image or Tensor: Randomly flipped image.
"""
if torch.rand(1) < self.p:
return F.hflip(img)
return img |
Args:
img (PIL Image or Tensor): Image to be flipped.
Returns:
PIL Image or Tensor: Randomly flipped image.
| forward | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | Apache-2.0 |
def forward(self, img):
"""
Args:
img (PIL Image or Tensor): Image to be flipped.
Returns:
PIL Image or Tensor: Randomly flipped image.
"""
if torch.rand(1) < self.p:
return F.vflip(img)
return img |
Args:
img (PIL Image or Tensor): Image to be flipped.
Returns:
PIL Image or Tensor: Randomly flipped image.
| forward | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | Apache-2.0 |
def forward(self, img):
"""
Args:
img (PIL Image or Tensor): Image to be Perspectively transformed.
Returns:
PIL Image or Tensor: Randomly transformed image.
"""
fill = self.fill
if isinstance(img, Tensor):
if isinstance(fill, (int, float)):
fill = [float(fill)] * F._get_image_num_channels(img)
else:
fill = [float(f) for f in fill]
if torch.rand(1) < self.p:
width, height = F._get_image_size(img)
startpoints, endpoints = self.get_params(width, height,
self.distortion_scale)
return F.perspective(img, startpoints, endpoints,
self.interpolation, fill)
return img |
Args:
img (PIL Image or Tensor): Image to be Perspectively transformed.
Returns:
PIL Image or Tensor: Randomly transformed image.
| forward | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | Apache-2.0 |
def get_params(width: int, height: int, distortion_scale: float) -> Tuple[
List[List[int]], List[List[int]]]:
"""Get parameters for ``perspective`` for a random perspective transform.
Args:
width (int): width of the image.
height (int): height of the image.
distortion_scale (float): argument to control the degree of distortion and ranges from 0 to 1.
Returns:
List containing [top-left, top-right, bottom-right, bottom-left] of the original image,
List containing [top-left, top-right, bottom-right, bottom-left] of the transformed image.
"""
half_height = height // 2
half_width = width // 2
topleft = [
int(
torch.randint(
0, int(distortion_scale * half_width) + 1, size=(1, ))
.item()), int(
torch.randint(
0, int(distortion_scale * half_height) + 1, size=(1, ))
.item())
]
topright = [
int(
torch.randint(
width - int(distortion_scale * half_width) - 1,
width,
size=(1, )).item()),
int(
torch.randint(
0, int(distortion_scale * half_height) + 1, size=(1, ))
.item())
]
botright = [
int(
torch.randint(
width - int(distortion_scale * half_width) - 1,
width,
size=(1, )).item()), int(
torch.randint(
height - int(distortion_scale * half_height) - 1,
height,
size=(1, )).item())
]
botleft = [
int(
torch.randint(
0, int(distortion_scale * half_width) + 1, size=(1, ))
.item()), int(
torch.randint(
height - int(distortion_scale * half_height) - 1,
height,
size=(1, )).item())
]
startpoints = [[0, 0], [width - 1, 0], [width - 1, height - 1],
[0, height - 1]]
endpoints = [topleft, topright, botright, botleft]
return startpoints, endpoints | Get parameters for ``perspective`` for a random perspective transform.
Args:
width (int): width of the image.
height (int): height of the image.
distortion_scale (float): argument to control the degree of distortion and ranges from 0 to 1.
Returns:
List containing [top-left, top-right, bottom-right, bottom-left] of the original image,
List containing [top-left, top-right, bottom-right, bottom-left] of the transformed image.
| get_params | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | Apache-2.0 |
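The corner sampling above is easiest to see with a concrete call; a minimal sketch, assuming torchvision is installed:

import torch
from torchvision.transforms import RandomPerspective

torch.manual_seed(0)  # make the random corner offsets reproducible
# With a 224x224 image and distortion_scale=0.5, each corner can move
# inward by up to 0.5 * 112 = 56 pixels along each axis.
startpoints, endpoints = RandomPerspective.get_params(224, 224, 0.5)
print(startpoints)  # always [[0, 0], [223, 0], [223, 223], [0, 223]]
print(endpoints)    # four randomly perturbed corner positions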
def get_params(img: Tensor, scale: List[float],
ratio: List[float]) -> Tuple[int, int, int, int]:
"""Get parameters for ``crop`` for a random sized crop.
Args:
img (PIL Image or Tensor): Input image.
scale (list): range of scale of the origin size cropped
ratio (list): range of aspect ratio of the origin aspect ratio cropped
Returns:
tuple: params (i, j, h, w) to be passed to ``crop`` for a random
sized crop.
"""
width, height = F._get_image_size(img)
area = height * width
log_ratio = torch.log(torch.tensor(ratio))
for _ in range(10):
target_area = area * torch.empty(1).uniform_(scale[0],
scale[1]).item()
aspect_ratio = torch.exp(
torch.empty(1).uniform_(log_ratio[0], log_ratio[1])).item()
w = int(round(math.sqrt(target_area * aspect_ratio)))
h = int(round(math.sqrt(target_area / aspect_ratio)))
if 0 < w <= width and 0 < h <= height:
i = torch.randint(0, height - h + 1, size=(1, )).item()
j = torch.randint(0, width - w + 1, size=(1, )).item()
return i, j, h, w
# Fallback to central crop
in_ratio = float(width) / float(height)
if in_ratio < min(ratio):
w = width
h = int(round(w / min(ratio)))
elif in_ratio > max(ratio):
h = height
w = int(round(h * max(ratio)))
else: # whole image
w = width
h = height
i = (height - h) // 2
j = (width - w) // 2
return i, j, h, w | Get parameters for ``crop`` for a random sized crop.
Args:
img (PIL Image or Tensor): Input image.
scale (list): range of scale of the origin size cropped
ratio (list): range of aspect ratio of the origin aspect ratio cropped
Returns:
tuple: params (i, j, h, w) to be passed to ``crop`` for a random
sized crop.
| get_params | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | Apache-2.0 |
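A short usage sketch of the sampling loop above, assuming torchvision is installed; the scale and ratio values mirror the common ImageNet defaults:

import torch
import torchvision.transforms as T
import torchvision.transforms.functional as F

img = torch.rand(3, 300, 400)  # a CHW tensor stands in for a decoded image
i, j, h, w = T.RandomResizedCrop.get_params(
    img, scale=[0.08, 1.0], ratio=[3. / 4., 4. / 3.])
out = F.resized_crop(img, i, j, h, w, size=[224, 224])
print(out.shape)  # torch.Size([3, 224, 224])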
def forward(self, img):
"""
Args:
img (PIL Image or Tensor): Image to be cropped and resized.
Returns:
PIL Image or Tensor: Randomly cropped and resized image.
"""
i, j, h, w = self.get_params(img, self.scale, self.ratio)
return F.resized_crop(img, i, j, h, w, self.size, self.interpolation) |
Args:
img (PIL Image or Tensor): Image to be cropped and resized.
Returns:
PIL Image or Tensor: Randomly cropped and resized image.
| forward | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | Apache-2.0 |
def forward(self, tensor: Tensor) -> Tensor:
"""
Args:
tensor (Tensor): Tensor image to be whitened.
Returns:
Tensor: Transformed image.
"""
shape = tensor.shape
n = shape[-3] * shape[-2] * shape[-1]
if n != self.transformation_matrix.shape[0]:
raise ValueError(
"Input tensor and transformation matrix have incompatible shape."
+ "[{} x {} x {}] != ".format(shape[-3], shape[-2], shape[
-1]) + "{}".format(self.transformation_matrix.shape[0]))
if tensor.device.type != self.mean_vector.device.type:
raise ValueError(
"Input tensor should be on the same device as transformation matrix and mean vector. "
"Got {} vs {}".format(tensor.device, self.mean_vector.device))
flat_tensor = tensor.view(-1, n) - self.mean_vector
transformed_tensor = torch.mm(flat_tensor, self.transformation_matrix)
tensor = transformed_tensor.view(shape)
return tensor |
Args:
tensor (Tensor): Tensor image to be whitened.
Returns:
Tensor: Transformed image.
| forward | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | Apache-2.0 |
def get_params(
brightness: Optional[List[float]],
contrast: Optional[List[float]],
saturation: Optional[List[float]],
hue: Optional[List[float]]) -> Tuple[Tensor, Optional[
float], Optional[float], Optional[float], Optional[float]]:
"""Get the parameters for the randomized transform to be applied on image.
Args:
brightness (tuple of float (min, max), optional): The range from which the brightness_factor is chosen
uniformly. Pass None to turn off the transformation.
contrast (tuple of float (min, max), optional): The range from which the contrast_factor is chosen
uniformly. Pass None to turn off the transformation.
saturation (tuple of float (min, max), optional): The range from which the saturation_factor is chosen
uniformly. Pass None to turn off the transformation.
hue (tuple of float (min, max), optional): The range from which the hue_factor is chosen uniformly.
Pass None to turn off the transformation.
Returns:
tuple: The parameters used to apply the randomized transform
along with their random order.
"""
fn_idx = torch.randperm(4)
b = None if brightness is None else float(
torch.empty(1).uniform_(brightness[0], brightness[1]))
c = None if contrast is None else float(
torch.empty(1).uniform_(contrast[0], contrast[1]))
s = None if saturation is None else float(
torch.empty(1).uniform_(saturation[0], saturation[1]))
h = None if hue is None else float(
torch.empty(1).uniform_(hue[0], hue[1]))
return fn_idx, b, c, s, h | Get the parameters for the randomized transform to be applied on image.
Args:
brightness (tuple of float (min, max), optional): The range from which the brightness_factor is chosen
uniformly. Pass None to turn off the transformation.
contrast (tuple of float (min, max), optional): The range from which the contrast_factor is chosen
uniformly. Pass None to turn off the transformation.
saturation (tuple of float (min, max), optional): The range from which the saturation_factor is chosen
uniformly. Pass None to turn off the transformation.
hue (tuple of float (min, max), optional): The range from which the hue_factor is chosen uniformly.
Pass None to turn off the transformation.
Returns:
tuple: The parameters used to apply the randomized transform
along with their random order.
| get_params | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | Apache-2.0 |
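A minimal sketch of consuming these sampled parameters, assuming torchvision is installed; forward() in the next record applies the same factors in the same order:

import torchvision.transforms as T
import torchvision.transforms.functional as F
from PIL import Image

img = Image.new("RGB", (64, 64), color=(128, 64, 32))
fn_idx, b, c, s, h = T.ColorJitter.get_params(
    brightness=[0.6, 1.4], contrast=[0.6, 1.4],
    saturation=[0.6, 1.4], hue=[-0.1, 0.1])
# fn_idx is a random permutation of [0, 1, 2, 3]; each factor is drawn
# uniformly from its range (or is None when the range is None).
img = F.adjust_brightness(img, b)  # apply one sampled factor by hand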
def forward(self, img):
"""
Args:
img (PIL Image or Tensor): Input image.
Returns:
PIL Image or Tensor: Color jittered image.
"""
fn_idx, brightness_factor, contrast_factor, saturation_factor, hue_factor = \
self.get_params(self.brightness, self.contrast, self.saturation, self.hue)
for fn_id in fn_idx:
if fn_id == 0 and brightness_factor is not None:
img = F.adjust_brightness(img, brightness_factor)
elif fn_id == 1 and contrast_factor is not None:
img = F.adjust_contrast(img, contrast_factor)
elif fn_id == 2 and saturation_factor is not None:
img = F.adjust_saturation(img, saturation_factor)
elif fn_id == 3 and hue_factor is not None:
img = F.adjust_hue(img, hue_factor)
return img |
Args:
img (PIL Image or Tensor): Input image.
Returns:
PIL Image or Tensor: Color jittered image.
| forward | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | Apache-2.0 |
def get_params(degrees: List[float]) -> float:
"""Get parameters for ``rotate`` for a random rotation.
Returns:
float: angle parameter to be passed to ``rotate`` for random rotation.
"""
angle = float(
torch.empty(1).uniform_(float(degrees[0]), float(degrees[1])).item(
))
return angle | Get parameters for ``rotate`` for a random rotation.
Returns:
float: angle parameter to be passed to ``rotate`` for random rotation.
| get_params | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | Apache-2.0 |
def forward(self, img):
"""
Args:
img (PIL Image or Tensor): Image to be rotated.
Returns:
PIL Image or Tensor: Rotated image.
"""
fill = self.fill
if isinstance(img, Tensor):
if isinstance(fill, (int, float)):
fill = [float(fill)] * F._get_image_num_channels(img)
else:
fill = [float(f) for f in fill]
angle = self.get_params(self.degrees)
return F.rotate(img, angle, self.resample, self.expand, self.center,
fill) |
Args:
img (PIL Image or Tensor): Image to be rotated.
Returns:
PIL Image or Tensor: Rotated image.
| forward | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | Apache-2.0 |
def get_params(degrees: List[float],
translate: Optional[List[float]],
scale_ranges: Optional[List[float]],
shears: Optional[List[float]],
img_size: List[int]) -> Tuple[float, Tuple[int, int], float,
Tuple[float, float]]:
"""Get parameters for affine transformation
Returns:
params to be passed to the affine transformation
"""
angle = float(
torch.empty(1).uniform_(float(degrees[0]), float(degrees[1])).item(
))
if translate is not None:
max_dx = float(translate[0] * img_size[0])
max_dy = float(translate[1] * img_size[1])
tx = int(round(torch.empty(1).uniform_(-max_dx, max_dx).item()))
ty = int(round(torch.empty(1).uniform_(-max_dy, max_dy).item()))
translations = (tx, ty)
else:
translations = (0, 0)
if scale_ranges is not None:
scale = float(
torch.empty(1).uniform_(scale_ranges[0], scale_ranges[1]).item(
))
else:
scale = 1.0
shear_x = shear_y = 0.0
if shears is not None:
shear_x = float(
torch.empty(1).uniform_(shears[0], shears[1]).item())
if len(shears) == 4:
shear_y = float(
torch.empty(1).uniform_(shears[2], shears[3]).item())
shear = (shear_x, shear_y)
return angle, translations, scale, shear | Get parameters for affine transformation
Returns:
params to be passed to the affine transformation
| get_params | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | Apache-2.0 |
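A quick sketch of the sampled affine parameters, assuming torchvision is installed; only two shear bounds are given here, so shear_y stays 0.0:

import torch
from torchvision.transforms import RandomAffine

torch.manual_seed(0)
angle, translations, scale, shear = RandomAffine.get_params(
    degrees=[-10.0, 10.0], translate=[0.1, 0.1],
    scale_ranges=[0.9, 1.1], shears=[-5.0, 5.0], img_size=[400, 300])
# angle in [-10, 10], |tx| <= 40 and |ty| <= 30 pixels, scale in [0.9, 1.1]
print(angle, translations, scale, shear)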
def forward(self, img):
"""
Args:
img (PIL Image or Tensor): Image to be transformed.
Returns:
PIL Image or Tensor: Affine transformed image.
"""
fill = self.fill
if isinstance(img, Tensor):
if isinstance(fill, (int, float)):
fill = [float(fill)] * F._get_image_num_channels(img)
else:
fill = [float(f) for f in fill]
img_size = F._get_image_size(img)
ret = self.get_params(self.degrees, self.translate, self.scale,
self.shear, img_size)
return F.affine(img, *ret, interpolation=self.interpolation, fill=fill) |
Args:
img (PIL Image or Tensor): Image to be transformed.
Returns:
PIL Image or Tensor: Affine transformed image.
| forward | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | Apache-2.0 |
def forward(self, img):
"""
Args:
img (PIL Image or Tensor): Image to be converted to grayscale.
Returns:
PIL Image or Tensor: Randomly grayscaled image.
"""
num_output_channels = F._get_image_num_channels(img)
if torch.rand(1) < self.p:
return F.rgb_to_grayscale(
img, num_output_channels=num_output_channels)
return img |
Args:
img (PIL Image or Tensor): Image to be converted to grayscale.
Returns:
PIL Image or Tensor: Randomly grayscaled image.
| forward | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | Apache-2.0 |
def get_params(img: Tensor,
scale: Tuple[float, float],
ratio: Tuple[float, float],
value: Optional[List[float]]=None) -> Tuple[int, int, int,
int, Tensor]:
"""Get parameters for ``erase`` for a random erasing.
Args:
img (Tensor): Tensor image to be erased.
scale (sequence): range of proportion of erased area against input image.
ratio (sequence): range of aspect ratio of erased area.
value (list, optional): erasing value. If None, it is interpreted as "random"
(erasing each pixel with random values). If ``len(value)`` is 1, it is interpreted as a number,
i.e. ``value[0]``.
Returns:
tuple: params (i, j, h, w, v) to be passed to ``erase`` for random erasing.
"""
img_c, img_h, img_w = img.shape[-3], img.shape[-2], img.shape[-1]
area = img_h * img_w
log_ratio = torch.log(torch.tensor(ratio))
for _ in range(10):
erase_area = area * torch.empty(1).uniform_(scale[0],
scale[1]).item()
aspect_ratio = torch.exp(
torch.empty(1).uniform_(log_ratio[0], log_ratio[1])).item()
h = int(round(math.sqrt(erase_area * aspect_ratio)))
w = int(round(math.sqrt(erase_area / aspect_ratio)))
if not (h < img_h and w < img_w):
continue
if value is None:
v = torch.empty([img_c, h, w], dtype=torch.float32).normal_()
else:
v = torch.tensor(value)[:, None, None]
i = torch.randint(0, img_h - h + 1, size=(1, )).item()
j = torch.randint(0, img_w - w + 1, size=(1, )).item()
return i, j, h, w, v
# Return original image
return 0, 0, img_h, img_w, img | Get parameters for ``erase`` for a random erasing.
Args:
img (Tensor): Tensor image to be erased.
scale (sequence): range of proportion of erased area against input image.
ratio (sequence): range of aspect ratio of erased area.
value (list, optional): erasing value. If None, it is interpreted as "random"
(erasing each pixel with random values). If ``len(value)`` is 1, it is interpreted as a number,
i.e. ``value[0]``.
Returns:
tuple: params (i, j, h, w, v) to be passed to ``erase`` for random erasing.
| get_params | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | Apache-2.0 |
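A minimal sketch pairing get_params with F.erase, assuming torchvision is installed:

import torch
import torchvision.transforms as T
import torchvision.transforms.functional as F

img = torch.rand(3, 224, 224)
x, y, h, w, v = T.RandomErasing.get_params(
    img, scale=(0.02, 0.33), ratio=(0.3, 3.3), value=[0.0])
erased = F.erase(img, x, y, h, w, v)
# Unless all 10 sampling attempts failed (rare at this image size),
# the selected box is filled with the constant value 0.
print(erased[:, x:x + h, y:y + w].abs().sum())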
def forward(self, img):
"""
Args:
img (Tensor): Tensor image to be erased.
Returns:
img (Tensor): Erased Tensor image.
"""
if torch.rand(1) < self.p:
# cast self.value to script acceptable type
if isinstance(self.value, (int, float)):
value = [self.value, ]
elif isinstance(self.value, str):
value = None
elif isinstance(self.value, tuple):
value = list(self.value)
else:
value = self.value
if value is not None and not (len(value) in (1, img.shape[-3])):
raise ValueError(
"If value is a sequence, it should have either a single value or "
"{} (number of input channels)".format(img.shape[-3]))
x, y, h, w, v = self.get_params(
img, scale=self.scale, ratio=self.ratio, value=value)
return F.erase(img, x, y, h, w, v, self.inplace)
return img |
Args:
img (Tensor): Tensor image to be erased.
Returns:
img (Tensor): Erased Tensor image.
| forward | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | Apache-2.0 |
def forward(self, img: Tensor) -> Tensor:
"""
Args:
img (PIL Image or Tensor): image to be blurred.
Returns:
PIL Image or Tensor: Gaussian blurred image
"""
sigma = self.get_params(self.sigma[0], self.sigma[1])
return F.gaussian_blur(img, self.kernel_size, [sigma, sigma]) |
Args:
img (PIL Image or Tensor): image to be blurred.
Returns:
PIL Image or Tensor: Gaussian blurred image
| forward | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | Apache-2.0 |
def forward(self, img):
"""
Args:
img (PIL Image or Tensor): Image to be inverted.
Returns:
PIL Image or Tensor: Randomly color inverted image.
"""
if torch.rand(1).item() < self.p:
return F.invert(img)
return img |
Args:
img (PIL Image or Tensor): Image to be inverted.
Returns:
PIL Image or Tensor: Randomly color inverted image.
| forward | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | Apache-2.0 |
def forward(self, img):
"""
Args:
img (PIL Image or Tensor): Image to be posterized.
Returns:
PIL Image or Tensor: Randomly posterized image.
"""
if torch.rand(1).item() < self.p:
return F.posterize(img, self.bits)
return img |
Args:
img (PIL Image or Tensor): Image to be posterized.
Returns:
PIL Image or Tensor: Randomly posterized image.
| forward | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | Apache-2.0 |
def forward(self, img):
"""
Args:
img (PIL Image or Tensor): Image to be solarized.
Returns:
PIL Image or Tensor: Randomly solarized image.
"""
if torch.rand(1).item() < self.p:
return F.solarize(img, self.threshold)
return img |
Args:
img (PIL Image or Tensor): Image to be solarized.
Returns:
PIL Image or Tensor: Randomly solarized image.
| forward | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | Apache-2.0 |
def forward(self, img):
"""
Args:
img (PIL Image or Tensor): Image to be sharpened.
Returns:
PIL Image or Tensor: Randomly sharpened image.
"""
if torch.rand(1).item() < self.p:
return F.adjust_sharpness(img, self.sharpness_factor)
return img |
Args:
img (PIL Image or Tensor): Image to be sharpened.
Returns:
PIL Image or Tensor: Randomly sharpened image.
| forward | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | Apache-2.0 |
def forward(self, img):
"""
Args:
img (PIL Image or Tensor): Image to be autocontrasted.
Returns:
PIL Image or Tensor: Randomly autocontrasted image.
"""
if torch.rand(1).item() < self.p:
return F.autocontrast(img)
return img |
Args:
img (PIL Image or Tensor): Image to be autocontrasted.
Returns:
PIL Image or Tensor: Randomly autocontrasted image.
| forward | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | Apache-2.0 |
def forward(self, img):
"""
Args:
img (PIL Image or Tensor): Image to be equalized.
Returns:
PIL Image or Tensor: Randomly equalized image.
"""
if torch.rand(1).item() < self.p:
return F.equalize(img)
return img |
Args:
img (PIL Image or Tensor): Image to be equalized.
Returns:
PIL Image or Tensor: Randomly equalized image.
| forward | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step1-5/mobilenetv3_ref/torchvision/transforms/transforms.py | Apache-2.0 |
def synchronize_between_processes(self):
"""
Warning: does not synchronize the deque!
"""
t = paddle.to_tensor([self.count, self.total], dtype='float64')
t = t.numpy().tolist()
self.count = int(t[0])
self.total = t[1] |
Warning: does not synchronize the deque!
| synchronize_between_processes | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step6/utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step6/utils.py | Apache-2.0 |
def accuracy(output, target, topk=(1, )):
"""Computes the accuracy over the k top predictions for the specified values of k"""
with paddle.no_grad():
maxk = max(topk)
batch_size = target.shape[0]
_, pred = output.topk(maxk, 1, True, True)
pred = pred.t()
correct = pred.equal(target.astype("int64"))
res = []
for k in topk:
correct_k = correct.astype(paddle.int32)[:k].flatten().sum(
dtype='float32')
res.append(correct_k / batch_size)
return res | Computes the accuracy over the k top predictions for the specified values of k | accuracy | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step6/utils.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step6/utils.py | Apache-2.0 |
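A worked example of the helper above, assuming it is in scope together with paddle:

import paddle

logits = paddle.to_tensor([[0.1, 0.8, 0.1],
                           [0.3, 0.6, 0.1]])
targets = paddle.to_tensor([1, 0])
acc1, acc2 = accuracy(logits, targets, topk=(1, 2))
# Row 0 is correct at top-1; row 1 only at top-2, so the expected
# values are 0.5 and 1.0.
print(float(acc1), float(acc2))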
def __init__(self, args):
"""
Args:
args: Parameters generated using argparser.
Returns: None
"""
super().__init__()
self.args = args
# init inference engine
self.predictor, self.config, self.input_tensor, self.output_tensor = self.load_predictor(
os.path.join(args.model_dir, "inference.pdmodel"),
os.path.join(args.model_dir, "inference.pdiparams"))
# build transforms
self.transforms = Compose([
ResizeImage(args.resize_size), CenterCropImage(args.crop_size),
NormalizeImage(), ToCHW()
])
# warmup
if self.args.warmup > 0:
for idx in range(args.warmup):
x = np.random.rand(1, 3, self.args.crop_size,
self.args.crop_size).astype("float32")
self.input_tensor.copy_from_cpu(x)
self.predictor.run()
self.output_tensor.copy_to_cpu()
return |
Args:
args: Parameters generated using argparser.
Returns: None
| __init__ | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step6/deploy/inference_python/infer.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step6/deploy/inference_python/infer.py | Apache-2.0 |
def load_predictor(self, model_file_path, params_file_path):
"""load_predictor
initialize the inference engine
Args:
model_file_path: inference model path (*.pdmodel)
params_file_path: inference parameter path (*.pdiparams)
Return:
predictor: Predictor created using Paddle Inference.
config: Configuration of the predictor.
input_tensor: Input tensor of the predictor.
output_tensor: Output tensor of the predictor.
"""
args = self.args
config = inference.Config(model_file_path, params_file_path)
if args.use_gpu:
config.enable_use_gpu(1000, 0)
else:
config.disable_gpu()
# The thread num should not be greater than the number of cores in the CPU.
config.set_cpu_math_library_num_threads(4)
# enable memory optim
config.enable_memory_optim()
config.disable_glog_info()
config.switch_use_feed_fetch_ops(False)
config.switch_ir_optim(True)
# create predictor
predictor = inference.create_predictor(config)
# get input and output tensor property
input_names = predictor.get_input_names()
input_tensor = predictor.get_input_handle(input_names[0])
output_names = predictor.get_output_names()
output_tensor = predictor.get_output_handle(output_names[0])
return predictor, config, input_tensor, output_tensor | load_predictor
initialize the inference engine
Args:
model_file_path: inference model path (*.pdmodel)
params_file_path: inference parameter path (*.pdiparams)
Return:
predictor: Predictor created using Paddle Inference.
config: Configuration of the predictor.
input_tensor: Input tensor of the predictor.
output_tensor: Output tensor of the predictor.
| load_predictor | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step6/deploy/inference_python/infer.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step6/deploy/inference_python/infer.py | Apache-2.0 |
def preprocess(self, img_path):
"""preprocess
Preprocess the input.
Args:
img_path: Image path.
Returns: Input data after preprocess.
"""
with open(img_path, "rb") as f:
img = Image.open(f)
img = img.convert("RGB")
img = self.transforms(img)
img = np.expand_dims(img, axis=0)
return img | preprocess
Preprocess the input.
Args:
img_path: Image path.
Returns: Input data after preprocess.
| preprocess | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step6/deploy/inference_python/infer.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step6/deploy/inference_python/infer.py | Apache-2.0 |
def postprocess(self, x):
"""postprocess
Postprocess the inference engine output.
Args:
x: Inference engine output.
Returns: Output data after argmax.
"""
x = x.flatten()
class_id = x.argmax()
prob = x[class_id]
return class_id, prob | postprocess
Postprocess the inference engine output.
Args:
x: Inference engine output.
Returns: Output data after argmax.
| postprocess | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step6/deploy/inference_python/infer.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step6/deploy/inference_python/infer.py | Apache-2.0 |
def run(self, x):
"""run
Inference process using inference engine.
Args:
x: Input data after preprocess.
Returns: Inference engine output
"""
self.input_tensor.copy_from_cpu(x)
self.predictor.run()
output = self.output_tensor.copy_to_cpu()
return output | run
Inference process using inference engine.
Args:
x: Input data after preprocess.
Returns: Inference engine output
| run | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step6/deploy/inference_python/infer.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step6/deploy/inference_python/infer.py | Apache-2.0 |
def infer_main(args):
"""infer_main
Main inference function.
Args:
args: Parameters generated using argparser.
Returns:
class_id: Class index of the input.
prob: Probability of the input.
"""
inference_engine = InferenceEngine(args)
# init benchmark
if args.benchmark:
import auto_log
autolog = auto_log.AutoLogger(
model_name="classification",
batch_size=args.batch_size,
inference_config=inference_engine.config,
gpu_ids="auto" if args.use_gpu else None)
assert args.batch_size == 1, "only batch_size=1 is supported for now."
# enable benchmark
if args.benchmark:
autolog.times.start()
# preprocess
img = inference_engine.preprocess(args.img_path)
if args.benchmark:
autolog.times.stamp()
output = inference_engine.run(img)
if args.benchmark:
autolog.times.stamp()
# postprocess
class_id, prob = inference_engine.postprocess(output)
if args.benchmark:
autolog.times.stamp()
autolog.times.end(stamp=True)
autolog.report()
print(f"image_name: {args.img_path}, class_id: {class_id}, prob: {prob}")
return class_id, prob | infer_main
Main inference function.
Args:
args: Parameters generated using argparser.
Returns:
class_id: Class index of the input.
prob: Probability of the input.
| infer_main | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step6/deploy/inference_python/infer.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step6/deploy/inference_python/infer.py | Apache-2.0 |
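A hedged usage sketch of infer_main; the paths below are hypothetical and must point at a real exported model directory and image:

from types import SimpleNamespace

args = SimpleNamespace(
    model_dir="./mobilenet_v3_small_infer",  # contains inference.pdmodel/.pdiparams
    img_path="./images/demo.jpg",
    resize_size=256, crop_size=224,
    use_gpu=False, warmup=0, benchmark=False, batch_size=1)
class_id, prob = infer_main(args)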
def has_file_allowed_extension(filename: str,
extensions: Tuple[str, ...]) -> bool:
"""Checks if a file is an allowed extension.
Args:
filename (string): path to a file
extensions (tuple of strings): extensions to consider (lowercase)
Returns:
bool: True if the filename ends with one of given extensions
"""
return filename.lower().endswith(extensions) | Checks if a file is an allowed extension.
Args:
filename (string): path to a file
extensions (tuple of strings): extensions to consider (lowercase)
Returns:
bool: True if the filename ends with one of given extensions
| has_file_allowed_extension | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step6/paddlevision/datasets/folder.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step6/paddlevision/datasets/folder.py | Apache-2.0 |
def find_classes(directory: str) -> Tuple[List[str], Dict[str, int]]:
"""Finds the class folders in a dataset.
See :class:`DatasetFolder` for details.
"""
classes = sorted(
entry.name for entry in os.scandir(directory) if entry.is_dir())
if not classes:
raise FileNotFoundError(
f"Couldn't find any class folder in {directory}.")
class_to_idx = {cls_name: i for i, cls_name in enumerate(classes)}
return classes, class_to_idx | Finds the class folders in a dataset.
See :class:`DatasetFolder` for details.
| find_classes | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step6/paddlevision/datasets/folder.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step6/paddlevision/datasets/folder.py | Apache-2.0 |
def make_dataset(
directory: str,
class_to_idx: Optional[Dict[str, int]]=None,
extensions: Optional[Tuple[str, ...]]=None,
is_valid_file: Optional[Callable[[str], bool]]=None, ) -> List[Tuple[
str, int]]:
"""Generates a list of samples of a form (path_to_sample, class).
See :class:`DatasetFolder` for details.
Note: The class_to_idx parameter is optional; when omitted, the logic of the ``find_classes`` function
is used by default.
"""
directory = os.path.expanduser(directory)
if class_to_idx is None:
_, class_to_idx = find_classes(directory)
elif not class_to_idx:
raise ValueError(
"'class_to_index' must have at least one entry to collect any samples."
)
both_none = extensions is None and is_valid_file is None
both_something = extensions is not None and is_valid_file is not None
if both_none or both_something:
raise ValueError(
"Both extensions and is_valid_file cannot be None or not None at the same time"
)
if extensions is not None:
def is_valid_file(x: str) -> bool:
return has_file_allowed_extension(
x, cast(Tuple[str, ...], extensions))
is_valid_file = cast(Callable[[str], bool], is_valid_file)
instances = []
available_classes = set()
for target_class in sorted(class_to_idx.keys()):
class_index = class_to_idx[target_class]
target_dir = os.path.join(directory, target_class)
if not os.path.isdir(target_dir):
continue
for root, _, fnames in sorted(os.walk(target_dir, followlinks=True)):
for fname in sorted(fnames):
if is_valid_file(fname):
path = os.path.join(root, fname)
item = path, class_index
instances.append(item)
if target_class not in available_classes:
available_classes.add(target_class)
# empty_classes = set(class_to_idx.keys()) - available_classes
# if empty_classes:
# msg = f"Found no valid file for the classes {', '.join(sorted(empty_classes))}. "
# if extensions is not None:
# msg += f"Supported extensions are: {', '.join(extensions)}"
# raise FileNotFoundError(msg)
return instances | Generates a list of samples of a form (path_to_sample, class).
See :class:`DatasetFolder` for details.
Note: The class_to_idx parameter is optional; when omitted, the logic of the ``find_classes`` function
is used by default.
| make_dataset | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step6/paddlevision/datasets/folder.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step6/paddlevision/datasets/folder.py | Apache-2.0 |
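A small sketch of the two helpers above on a hypothetical directory layout (data/train/cat/0001.jpg and data/train/dog/0001.jpg):

classes, class_to_idx = find_classes("data/train")
# expected: classes == ["cat", "dog"], class_to_idx == {"cat": 0, "dog": 1}
samples = make_dataset("data/train", class_to_idx, extensions=(".jpg",))
# expected: [("data/train/cat/0001.jpg", 0), ("data/train/dog/0001.jpg", 1)]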
def make_dataset(
directory: str,
class_to_idx: Dict[str, int],
extensions: Optional[Tuple[str, ...]]=None,
is_valid_file: Optional[Callable[[str], bool]]=None, ) -> List[
Tuple[str, int]]:
"""Generates a list of samples of a form (path_to_sample, class).
This can be overridden to e.g. read files from a compressed zip file instead of from the disk.
Args:
directory (str): root dataset directory, corresponding to ``self.root``.
class_to_idx (Dict[str, int]): Dictionary mapping class name to class index.
extensions (optional): A list of allowed extensions.
Either extensions or is_valid_file should be passed. Defaults to None.
is_valid_file (optional): A function that takes the path of a file
and checks if the file is a valid file
(used to filter out corrupt files). Both extensions and
is_valid_file should not be passed. Defaults to None.
Raises:
ValueError: In case ``class_to_idx`` is empty.
ValueError: In case ``extensions`` and ``is_valid_file`` are None or both are not None.
FileNotFoundError: In case no valid file was found for any class.
Returns:
List[Tuple[str, int]]: samples of a form (path_to_sample, class)
"""
if class_to_idx is None:
# prevent potential bug since make_dataset() would use the class_to_idx logic of the
# find_classes() function, instead of using that of the find_classes() method, which
# is potentially overridden and thus could have a different logic.
raise ValueError("The class_to_idx parameter cannot be None.")
return make_dataset(
directory,
class_to_idx,
extensions=extensions,
is_valid_file=is_valid_file) | Generates a list of samples of a form (path_to_sample, class).
This can be overridden to e.g. read files from a compressed zip file instead of from the disk.
Args:
directory (str): root dataset directory, corresponding to ``self.root``.
class_to_idx (Dict[str, int]): Dictionary mapping class name to class index.
extensions (optional): A list of allowed extensions.
Either extensions or is_valid_file should be passed. Defaults to None.
is_valid_file (optional): A function that takes the path of a file
and checks if the file is a valid file
(used to filter out corrupt files). Both extensions and
is_valid_file should not be passed. Defaults to None.
Raises:
ValueError: In case ``class_to_idx`` is empty.
ValueError: In case ``extensions`` and ``is_valid_file`` are None or both are not None.
FileNotFoundError: In case no valid file was found for any class.
Returns:
List[Tuple[str, int]]: samples of a form (path_to_sample, class)
| make_dataset | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step6/paddlevision/datasets/folder.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step6/paddlevision/datasets/folder.py | Apache-2.0 |
def __getitem__(self, index: int) -> Tuple[Any, Any]:
"""
Args:
index (int): Index
Returns:
tuple: (sample, target) where target is class_index of the target class.
"""
path, target = self.samples[index]
sample = self.loader(path)
if self.transform is not None:
sample = self.transform(sample)
if self.target_transform is not None:
target = self.target_transform(target)
return sample, target |
Args:
index (int): Index
Returns:
tuple: (sample, target) where target is class_index of the target class.
| __getitem__ | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step6/paddlevision/datasets/folder.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step6/paddlevision/datasets/folder.py | Apache-2.0 |
def alexnet(pretrained: bool=False, **kwargs: Any) -> AlexNet:
r"""AlexNet model architecture from the
`"One weird trick..." <https://arxiv.org/abs/1404.5997>`_ paper.
The required minimum input size of the model is 63x63.
Args:
pretrained (str): Path to pre-trained parameters of the model on ImageNet; the weights are loaded when a truthy value is given
"""
model = AlexNet(**kwargs)
if pretrained:
load_dygraph_pretrain(model, pretrained)
return model | AlexNet model architecture from the
`"One weird trick..." <https://arxiv.org/abs/1404.5997>`_ paper.
The required minimum input size of the model is 63x63.
Args:
pretrained (str): Path to pre-trained parameters of the model on ImageNet; the weights are loaded when a truthy value is given
| alexnet | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step6/paddlevision/models/alexnet.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step6/paddlevision/models/alexnet.py | Apache-2.0 |
def _make_divisible(v: float, divisor: int,
min_value: Optional[int]=None) -> int:
"""
This function is taken from the original tf repo.
It ensures that all layers have a channel number that is divisible by 8
It can be seen here:
https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet.py
"""
if min_value is None:
min_value = divisor
new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
# Make sure that round down does not go down by more than 10%.
if new_v < 0.9 * v:
new_v += divisor
return new_v |
This function is taken from the original tf repo.
It ensures that all layers have a channel number that is divisible by 8
It can be seen here:
https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet.py
| _make_divisible | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step6/paddlevision/models/mobilenet_v3.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step6/paddlevision/models/mobilenet_v3.py | Apache-2.0 |
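A few worked values for the rounding rule above, assuming the function is in scope (divisor 8, default min_value):

print(_make_divisible(37.5, 8))  # 40: rounds to the nearest multiple of 8
print(_make_divisible(20, 8))    # 24: exact ties round up
print(_make_divisible(3, 8))     # 8: never drops below the divisor
print(_make_divisible(35.9, 8))  # 40: 32 would round down by more than 10%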
def __init__(
self,
inverted_residual_setting: List[InvertedResidualConfig],
last_channel: int,
num_classes: int=1000,
block: Optional[Callable[..., nn.Layer]]=None,
norm_layer: Optional[Callable[..., nn.Layer]]=None,
dropout: float=0.2,
**kwargs: Any, ) -> None:
"""
MobileNet V3 main class
Args:
inverted_residual_setting (List[InvertedResidualConfig]): Network structure
last_channel (int): The number of channels on the penultimate layer
num_classes (int): Number of classes
block (Optional[Callable[..., nn.Layer]]): Module specifying inverted residual building block for mobilenet
norm_layer (Optional[Callable[..., nn.Layer]]): Module specifying the normalization layer to use
dropout (float): The dropout probability
"""
super().__init__()
if not inverted_residual_setting:
raise ValueError(
"The inverted_residual_setting should not be empty")
elif not (isinstance(inverted_residual_setting, Sequence) and all([
isinstance(s, InvertedResidualConfig)
for s in inverted_residual_setting
])):
raise TypeError(
"The inverted_residual_setting should be List[InvertedResidualConfig]"
)
if block is None:
block = InvertedResidual
if norm_layer is None:
norm_layer = partial(nn.BatchNorm2D, epsilon=0.001, momentum=0.01)
layers: List[nn.Layer] = []
# building first layer
firstconv_output_channels = inverted_residual_setting[0].input_channels
layers.append(
ConvNormActivation(
3,
firstconv_output_channels,
kernel_size=3,
stride=2,
norm_layer=norm_layer,
activation_layer=nn.Hardswish, ))
# building inverted residual blocks
for cnf in inverted_residual_setting:
layers.append(block(cnf, norm_layer))
# building last several layers
lastconv_input_channels = inverted_residual_setting[-1].out_channels
lastconv_output_channels = 6 * lastconv_input_channels
layers.append(
ConvNormActivation(
lastconv_input_channels,
lastconv_output_channels,
kernel_size=1,
norm_layer=norm_layer,
activation_layer=nn.Hardswish, ))
self.features = nn.Sequential(*layers)
self.avgpool = nn.AdaptiveAvgPool2D(1)
self.classifier = nn.Sequential(
nn.Linear(lastconv_output_channels, last_channel),
nn.Hardswish(),
nn.Dropout(p=dropout),
nn.Linear(last_channel, num_classes), ) |
MobileNet V3 main class
Args:
inverted_residual_setting (List[InvertedResidualConfig]): Network structure
last_channel (int): The number of channels on the penultimate layer
num_classes (int): Number of classes
block (Optional[Callable[..., nn.Layer]]): Module specifying inverted residual building block for mobilenet
norm_layer (Optional[Callable[..., nn.Layer]]): Module specifying the normalization layer to use
dropout (float): The dropout probability
| __init__ | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step6/paddlevision/models/mobilenet_v3.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step6/paddlevision/models/mobilenet_v3.py | Apache-2.0 |
def mobilenet_v3_large(pretrained: bool=False,
progress: bool=True,
**kwargs: Any) -> MobileNetV3:
"""
Constructs a large MobileNetV3 architecture from
`"Searching for MobileNetV3" <https://arxiv.org/abs/1905.02244>`_.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
progress (bool): If True, displays a progress bar of the download to stderr
"""
arch = "mobilenet_v3_large"
inverted_residual_setting, last_channel = _mobilenet_v3_conf(arch,
**kwargs)
return _mobilenet_v3(arch, inverted_residual_setting, last_channel,
pretrained, progress, **kwargs) |
Constructs a large MobileNetV3 architecture from
`"Searching for MobileNetV3" <https://arxiv.org/abs/1905.02244>`_.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
progress (bool): If True, displays a progress bar of the download to stderr
| mobilenet_v3_large | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step6/paddlevision/models/mobilenet_v3.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step6/paddlevision/models/mobilenet_v3.py | Apache-2.0 |
def mobilenet_v3_small(pretrained: bool=False,
progress: bool=True,
**kwargs: Any) -> MobileNetV3:
"""
Constructs a small MobileNetV3 architecture from
`"Searching for MobileNetV3" <https://arxiv.org/abs/1905.02244>`_.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
progress (bool): If True, displays a progress bar of the download to stderr
"""
arch = "mobilenet_v3_small"
inverted_residual_setting, last_channel = _mobilenet_v3_conf(arch,
**kwargs)
return _mobilenet_v3(arch, inverted_residual_setting, last_channel,
pretrained, progress, **kwargs) |
Constructs a small MobileNetV3 architecture from
`"Searching for MobileNetV3" <https://arxiv.org/abs/1905.02244>`_.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
progress (bool): If True, displays a progress bar of the download to stderr
| mobilenet_v3_small | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step6/paddlevision/models/mobilenet_v3.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step6/paddlevision/models/mobilenet_v3.py | Apache-2.0 |
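A minimal usage sketch, assuming the standard MobileNetV3 forward pass (features, pooling, classifier) defined on the class above:

import paddle

model = mobilenet_v3_small()
model.eval()
x = paddle.rand([1, 3, 224, 224])
with paddle.no_grad():
    logits = model(x)
print(logits.shape)  # expected: [1, 1000]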
def get_params(transform_num: int) -> Tuple[int, Tensor, Tensor]:
"""Get parameters for autoaugment transformation
Returns:
params required by the autoaugment transformation
"""
policy_id = int(paddle.randint(low=0, high=transform_num, shape=(1, )))
probs = paddle.rand((2, ))
signs = paddle.randint(low=0, high=2, shape=(2, ))
return policy_id, probs, signs | Get parameters for autoaugment transformation
Returns:
params required by the autoaugment transformation
| get_params | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step6/paddlevision/transforms/autoaugment.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step6/paddlevision/transforms/autoaugment.py | Apache-2.0 |
def forward(self, img: Tensor):
"""
Args:
img (PIL Image or Tensor): Image to be transformed.
Returns:
PIL Image or Tensor: AutoAugmented image.
"""
fill = self.fill
if isinstance(img, Tensor):
if isinstance(fill, (int, float)):
fill = [float(fill)] * F._get_image_num_channels(img)
elif fill is not None:
fill = [float(f) for f in fill]
transform_id, probs, signs = self.get_params(len(self.transforms))
for i, (op_name, p,
magnitude_id) in enumerate(self.transforms[transform_id]):
if probs[i] <= p:
magnitudes, signed = self._get_op_meta(op_name)
magnitude = float(magnitudes[magnitude_id].item()) \
if magnitudes is not None and magnitude_id is not None else 0.0
if signed is not None and signed and signs[i] == 0:
magnitude *= -1.0
if op_name == "ShearX":
img = F.affine(
img,
angle=0.0,
translate=[0, 0],
scale=1.0,
shear=[math.degrees(magnitude), 0.0],
interpolation=self.interpolation,
fill=fill)
elif op_name == "ShearY":
img = F.affine(
img,
angle=0.0,
translate=[0, 0],
scale=1.0,
shear=[0.0, math.degrees(magnitude)],
interpolation=self.interpolation,
fill=fill)
elif op_name == "TranslateX":
img = F.affine(
img,
angle=0.0,
translate=[
int(F._get_image_size(img)[0] * magnitude), 0
],
scale=1.0,
interpolation=self.interpolation,
shear=[0.0, 0.0],
fill=fill)
elif op_name == "TranslateY":
img = F.affine(
img,
angle=0.0,
translate=[
0, int(F._get_image_size(img)[1] * magnitude)
],
scale=1.0,
interpolation=self.interpolation,
shear=[0.0, 0.0],
fill=fill)
elif op_name == "Rotate":
img = F.rotate(
img,
magnitude,
interpolation=self.interpolation,
fill=fill)
elif op_name == "Brightness":
img = F.adjust_brightness(img, 1.0 + magnitude)
elif op_name == "Color":
img = F.adjust_saturation(img, 1.0 + magnitude)
elif op_name == "Contrast":
img = F.adjust_contrast(img, 1.0 + magnitude)
elif op_name == "Sharpness":
img = F.adjust_sharpness(img, 1.0 + magnitude)
elif op_name == "Posterize":
img = F.posterize(img, int(magnitude))
elif op_name == "Solarize":
img = F.solarize(img, magnitude)
elif op_name == "AutoContrast":
img = F.autocontrast(img)
elif op_name == "Equalize":
img = F.equalize(img)
elif op_name == "Invert":
img = F.invert(img)
else:
raise ValueError(
"The provided operator {} is not recognized.".format(
op_name))
return img |
Args:
img (PIL Image or Tensor): Image to be transformed.
Returns:
PIL Image or Tensor: AutoAugmented image.
| forward | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step6/paddlevision/transforms/autoaugment.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step6/paddlevision/transforms/autoaugment.py | Apache-2.0 |
def to_tensor(pic):
"""Convert a ``PIL Image`` or ``numpy.ndarray`` to tensor.
See :class:`~paddlevision.transforms.ToTensor` for more details.
Args:
pic (PIL Image or numpy.ndarray): Image to be converted to tensor.
Returns:
Tensor: Converted image.
"""
if not (F_pil._is_pil_image(pic) or _is_numpy(pic)):
raise TypeError('pic should be PIL Image or ndarray. Got {}'.format(
type(pic)))
if _is_numpy(pic) and not _is_numpy_image(pic):
raise ValueError('pic should be 2/3 dimensional. Got {} dimensions.'.
format(pic.ndim))
default_float_dtype = paddle.get_default_dtype()
if isinstance(pic, np.ndarray):
# handle numpy array
if pic.ndim == 2:
pic = pic[:, :, None]
img = paddle.to_tensor(pic.transpose((2, 0, 1)))
# backward compatibility
if not img.dtype == default_float_dtype:
img = img.astype(dtype=default_float_dtype)
return img.divide(paddle.full_like(img, 255))
else:
return img
if accimage is not None and isinstance(pic, accimage.Image):
nppic = np.zeros(
[pic.channels, pic.height, pic.width], dtype=np.float32)
pic.copyto(nppic)
return paddle.to_tensor(nppic).astype(dtype=default_float_dtype)
# handle PIL Image
mode_to_nptype = {'I': np.int32, 'I;16': np.int16, 'F': np.float32}
img = paddle.to_tensor(
np.array(
pic, mode_to_nptype.get(pic.mode, np.uint8), copy=True))
if pic.mode == '1':
img = 255 * img
img = img.reshape([pic.size[1], pic.size[0], len(pic.getbands())])
if not img.dtype == default_float_dtype:
img = img.astype(dtype=default_float_dtype)
# put it from HWC to CHW format
img = img.transpose((2, 0, 1))
return img.divide(paddle.full_like(img, 255))
else:
# put it from HWC to CHW format
img = img.transpose((2, 0, 1))
return img | Convert a ``PIL Image`` or ``numpy.ndarray`` to tensor.
See :class:`~paddlevision.transforms.ToTensor` for more details.
Args:
pic (PIL Image or numpy.ndarray): Image to be converted to tensor.
Returns:
Tensor: Converted image.
| to_tensor | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step6/paddlevision/transforms/functional.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step6/paddlevision/transforms/functional.py | Apache-2.0 |
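A quick sketch of the HWC-to-CHW conversion and [0, 1] scaling above, assuming the function is in scope:

import numpy as np
from PIL import Image

pic = Image.fromarray(np.full((4, 4, 3), 255, dtype=np.uint8))
t = to_tensor(pic)
print(t.shape)         # [3, 4, 4]: channels move to the front
print(float(t.max()))  # 1.0: uint8 values are divided by 255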
def normalize(tensor: Tensor,
mean: List[float],
std: List[float],
inplace: bool=False) -> Tensor:
"""Normalize a float tensor image with mean and standard deviation.
This transform does not support PIL Image.
.. note::
This transform acts out of place by default, i.e., it does not mutate the input tensor.
See :class:`~paddlevision.transforms.Normalize` for more details.
Args:
tensor (Tensor): Float tensor image of size (C, H, W) or (B, C, H, W) to be normalized.
mean (sequence): Sequence of means for each channel.
std (sequence): Sequence of standard deviations for each channel.
inplace(bool,optional): Bool to make this operation inplace.
Returns:
Tensor: Normalized Tensor image.
"""
if not isinstance(tensor, paddle.Tensor):
raise TypeError('Input tensor should be a paddle tensor. Got {}.'.
format(type(tensor)))
if not tensor.dtype in (paddle.float16, paddle.float32, paddle.float64):
raise TypeError('Input tensor should be a float tensor. Got {}.'.
format(tensor.dtype))
if tensor.ndim < 3:
raise ValueError(
'Expected tensor to be a tensor image of size (..., C, H, W). Got tensor.shape() = '
'{}.'.format(tensor.shape))
if not inplace:
tensor = tensor.clone()
dtype = tensor.dtype
mean = paddle.to_tensor(mean, dtype=dtype, place=tensor.place)
std = paddle.to_tensor(std, dtype=dtype, place=tensor.place)
if (std == 0).any():
raise ValueError('std evaluated to zero, leading to division by zero.')
if mean.ndim == 1:
mean = mean.reshape((-1, 1, 1))
if std.ndim == 1:
std = std.reshape((-1, 1, 1))
tensor = tensor.subtract(mean).divide(std)
return tensor | Normalize a float tensor image with mean and standard deviation.
This transform does not support PIL Image.
.. note::
This transform acts out of place by default, i.e., it does not mutate the input tensor.
See :class:`~paddlevision.transforms.Normalize` for more details.
Args:
tensor (Tensor): Float tensor image of size (C, H, W) or (B, C, H, W) to be normalized.
mean (sequence): Sequence of means for each channel.
std (sequence): Sequence of standard deviations for each channel.
inplace(bool,optional): Bool to make this operation inplace.
Returns:
Tensor: Normalized Tensor image.
| normalize | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step6/paddlevision/transforms/functional.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step6/paddlevision/transforms/functional.py | Apache-2.0 |
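A short usage sketch with the common ImageNet statistics, assuming the function is in scope:

import paddle

img = paddle.rand([3, 224, 224])
out = normalize(img,
                mean=[0.485, 0.456, 0.406],
                std=[0.229, 0.224, 0.225])
# Each channel c becomes (img[c] - mean[c]) / std[c]; img is untouched
# because inplace defaults to False.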
def resize(img: Tensor,
size: List[int],
interpolation: InterpolationMode=InterpolationMode.BILINEAR,
max_size: Optional[int]=None,
antialias: Optional[bool]=None) -> Tensor:
r"""Resize the input image to the given size.
If the image is paddle Tensor, it is expected
to have [..., H, W] shape, where ... means an arbitrary number of leading dimensions
.. warning::
The output image might be different depending on its type: when downsampling, the interpolation of PIL images
and tensors is slightly different, because PIL applies antialiasing. This may lead to significant differences
in the performance of a network. Therefore, it is preferable to train and serve a model with the same input
types. See also below the ``antialias`` parameter, which can help making the output of PIL images and tensors
closer.
Args:
img (PIL Image or Tensor): Image to be resized.
size (sequence or int): Desired output size. If size is a sequence like
(h, w), the output size will be matched to this. If size is an int,
the smaller edge of the image will be matched to this number maintaining
the aspect ratio. i.e, if height > width, then image will be rescaled to
:math:`\left(\text{size} \times \frac{\text{height}}{\text{width}}, \text{size}\right)`.
interpolation (InterpolationMode): Desired interpolation enum defined by
:class:`paddlevision.transforms.InterpolationMode`.
Default is ``InterpolationMode.BILINEAR``. If input is Tensor, only ``InterpolationMode.NEAREST``,
``InterpolationMode.BILINEAR`` and ``InterpolationMode.BICUBIC`` are supported.
For backward compatibility integer values (e.g. ``PIL.Image.NEAREST``) are still acceptable.
max_size (int, optional): The maximum allowed for the longer edge of
the resized image: if the longer edge of the image is greater
than ``max_size`` after being resized according to ``size``, then
the image is resized again so that the longer edge is equal to
``max_size``. As a result, ``size`` might be overruled, i.e the
smaller edge may be shorter than ``size``.
antialias (bool, optional): antialias flag. If ``img`` is PIL Image, the flag is ignored and anti-alias
is always used. If ``img`` is Tensor, the flag is False by default and can be set to True for
``InterpolationMode.BILINEAR`` only mode. This can help making the output for PIL images and tensors
closer.
.. warning::
There is no autodiff support for ``antialias=True`` option with input ``img`` as Tensor.
Returns:
PIL Image or Tensor: Resized image.
"""
# Backward compatibility with integer value
if isinstance(interpolation, int):
warnings.warn(
"Argument interpolation should be of type InterpolationMode instead of int. "
"Please, use InterpolationMode enum.")
interpolation = _interpolation_modes_from_int(interpolation)
if not isinstance(interpolation, InterpolationMode):
raise TypeError("Argument interpolation should be a InterpolationMode")
if not isinstance(img, paddle.Tensor):
if antialias is not None and not antialias:
warnings.warn(
"Anti-alias option is always applied for PIL Image input. Argument antialias is ignored."
)
pil_interpolation = pil_modes_mapping[interpolation]
return F_pil.resize(
img, size=size, interpolation=pil_interpolation, max_size=max_size)
return F_t.resize(
img,
size=size,
interpolation=interpolation.value,
max_size=max_size,
antialias=antialias) | Resize the input image to the given size.
If the image is paddle Tensor, it is expected
to have [..., H, W] shape, where ... means an arbitrary number of leading dimensions
.. warning::
The output image might be different depending on its type: when downsampling, the interpolation of PIL images
and tensors is slightly different, because PIL applies antialiasing. This may lead to significant differences
in the performance of a network. Therefore, it is preferable to train and serve a model with the same input
types. See also below the ``antialias`` parameter, which can help making the output of PIL images and tensors
closer.
Args:
img (PIL Image or Tensor): Image to be resized.
size (sequence or int): Desired output size. If size is a sequence like
(h, w), the output size will be matched to this. If size is an int,
the smaller edge of the image will be matched to this number maintaining
the aspect ratio. i.e, if height > width, then image will be rescaled to
:math:`\left(\text{size} \times \frac{\text{height}}{\text{width}}, \text{size}\right)`.
interpolation (InterpolationMode): Desired interpolation enum defined by
:class:`paddlevision.transforms.InterpolationMode`.
Default is ``InterpolationMode.BILINEAR``. If input is Tensor, only ``InterpolationMode.NEAREST``,
``InterpolationMode.BILINEAR`` and ``InterpolationMode.BICUBIC`` are supported.
For backward compatibility integer values (e.g. ``PIL.Image.NEAREST``) are still acceptable.
max_size (int, optional): The maximum allowed for the longer edge of
the resized image: if the longer edge of the image is greater
than ``max_size`` after being resized according to ``size``, then
the image is resized again so that the longer edge is equal to
``max_size``. As a result, ``size`` might be overruled, i.e the
smaller edge may be shorter than ``size``.
antialias (bool, optional): antialias flag. If ``img`` is PIL Image, the flag is ignored and anti-alias
is always used. If ``img`` is Tensor, the flag is False by default and can be set to True for
``InterpolationMode.BILINEAR`` only mode. This can help making the output for PIL images and tensors
closer.
.. warning::
There is no autodiff support for ``antialias=True`` option with input ``img`` as Tensor.
Returns:
PIL Image or Tensor: Resized image.
| resize | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step6/paddlevision/transforms/functional.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step6/paddlevision/transforms/functional.py | Apache-2.0 |
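A sketch of the single-size semantics and the max_size clamp described above, assuming the function is in scope:

import paddle

img = paddle.rand([3, 300, 400])
out = resize(img, size=[256])
print(out.shape)  # expected [3, 256, 341]: the smaller edge matches 256
capped = resize(img, size=[256], max_size=320)
print(capped.shape)  # expected [3, 240, 320]: the longer edge is clamped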
def pad(img: Tensor,
padding: List[int],
fill: int=0,
padding_mode: str="constant") -> Tensor:
r"""Pad the given image on all sides with the given "pad" value.
If the image is paddle Tensor, it is expected
to have [..., H, W] shape, where ... means at most 2 leading dimensions for mode reflect and symmetric,
at most 3 leading dimensions for mode edge,
and an arbitrary number of leading dimensions for mode constant
Args:
img (PIL Image or Tensor): Image to be padded.
padding (int or sequence): Padding on each border. If a single int is provided, this
is used to pad all borders. If a sequence of length 2 is provided, this is the padding
on left/right and top/bottom respectively. If a sequence of length 4 is provided,
this is the padding for the left, top, right and bottom borders respectively.
fill (number or str or tuple): Pixel fill value for constant fill. Default is 0.
If a tuple of length 3, it is used to fill R, G, B channels respectively.
This value is only used when the padding_mode is constant.
Only number is supported for paddle Tensor.
Only int or str or tuple value is supported for PIL Image.
padding_mode (str): Type of padding. Should be: constant, edge, reflect or symmetric.
Default is constant.
- constant: pads with a constant value, this value is specified with fill
- edge: pads with the last value at the edge of the image.
If the input is a 5D paddle Tensor, the last 3 dimensions will be padded instead of the last 2
- reflect: pads with reflection of image without repeating the last value on the edge.
For example, padding [1, 2, 3, 4] with 2 elements on both sides in reflect mode
will result in [3, 2, 1, 2, 3, 4, 3, 2]
- symmetric: pads with reflection of image repeating the last value on the edge.
For example, padding [1, 2, 3, 4] with 2 elements on both sides in symmetric mode
will result in [2, 1, 1, 2, 3, 4, 4, 3]
Returns:
PIL Image or Tensor: Padded image.
"""
if not isinstance(img, paddle.Tensor):
return F_pil.pad(img,
padding=padding,
fill=fill,
padding_mode=padding_mode)
return F_t.pad(img, padding=padding, fill=fill, padding_mode=padding_mode) | Pad the given image on all sides with the given "pad" value.
If the image is paddle Tensor, it is expected
to have [..., H, W] shape, where ... means at most 2 leading dimensions for mode reflect and symmetric,
at most 3 leading dimensions for mode edge,
and an arbitrary number of leading dimensions for mode constant
Args:
img (PIL Image or Tensor): Image to be padded.
padding (int or sequence): Padding on each border. If a single int is provided, this
is used to pad all borders. If a sequence of length 2 is provided, this is the padding
on left/right and top/bottom respectively. If a sequence of length 4 is provided,
this is the padding for the left, top, right and bottom borders respectively.
fill (number or str or tuple): Pixel fill value for constant fill. Default is 0.
If a tuple of length 3, it is used to fill R, G, B channels respectively.
This value is only used when the padding_mode is constant.
Only number is supported for paddle Tensor.
Only int or str or tuple value is supported for PIL Image.
padding_mode (str): Type of padding. Should be: constant, edge, reflect or symmetric.
Default is constant.
- constant: pads with a constant value, this value is specified with fill
- edge: pads with the last value at the edge of the image.
If the input is a 5D paddle Tensor, the last 3 dimensions will be padded instead of the last 2
- reflect: pads with reflection of image without repeating the last value on the edge.
For example, padding [1, 2, 3, 4] with 2 elements on both sides in reflect mode
will result in [3, 2, 1, 2, 3, 4, 3, 2]
- symmetric: pads with reflection of image repeating the last value on the edge.
For example, padding [1, 2, 3, 4] with 2 elements on both sides in symmetric mode
will result in [2, 1, 1, 2, 3, 4, 4, 3]
Returns:
PIL Image or Tensor: Padded image.
| pad | python | PaddlePaddle/models | tutorials/mobilenetv3_prod/Step6/paddlevision/transforms/functional.py | https://github.com/PaddlePaddle/models/blob/master/tutorials/mobilenetv3_prod/Step6/paddlevision/transforms/functional.py | Apache-2.0 |