code (string) | signature (string) | docstring (string) | loss_without_docstring (float64) | loss_with_docstring (float64) | factor (float64)
---|---|---|---|---|---
current_nodes = [self]
leaves = []
while len(current_nodes) > 0:
next_nodes = []
for node in current_nodes:
if node.left is None and node.right is None:
leaves.append(node)
continue
if node.left is not None:
next_nodes.append(node.left)
if node.right is not None:
next_nodes.append(node.right)
current_nodes = next_nodes
return leaves | def leaves(self) | Return the leaf nodes of the binary tree.
A leaf node is any node that does not have child nodes.
:return: List of leaf nodes.
:rtype: [binarytree.Node]
**Example**:
.. doctest::
>>> from binarytree import Node
>>>
>>> root = Node(1)
>>> root.left = Node(2)
>>> root.right = Node(3)
>>> root.left.right = Node(4)
>>>
>>> print(root)
<BLANKLINE>
__1
/ \\
2 3
\\
4
<BLANKLINE>
>>> root.leaves
[Node(3), Node(4)] | 1.698474 | 1.94756 | 0.872104 |
properties = _get_tree_properties(self)
properties.update({
'is_bst': _is_bst(self),
'is_balanced': _is_balanced(self) >= 0
})
return properties | def properties(self) | Return various properties of the binary tree.
:return: Binary tree properties.
:rtype: dict
**Example**:
.. doctest::
>>> from binarytree import Node
>>>
>>> root = Node(1)
>>> root.left = Node(2)
>>> root.right = Node(3)
>>> root.left.left = Node(4)
>>> root.left.right = Node(5)
>>> props = root.properties
>>>
>>> props['height'] # equivalent to root.height
2
>>> props['size'] # equivalent to root.size
5
>>> props['max_leaf_depth'] # equivalent to root.max_leaf_depth
2
>>> props['min_leaf_depth'] # equivalent to root.min_leaf_depth
1
>>> props['max_node_value'] # equivalent to root.max_node_value
5
>>> props['min_node_value'] # equivalent to root.min_node_value
1
>>> props['leaf_count'] # equivalent to root.leaf_count
3
>>> props['is_balanced'] # equivalent to root.is_balanced
True
>>> props['is_bst'] # equivalent to root.is_bst
False
>>> props['is_complete'] # equivalent to root.is_complete
True
>>> props['is_max_heap'] # equivalent to root.is_max_heap
False
>>> props['is_min_heap'] # equivalent to root.is_min_heap
True
>>> props['is_perfect'] # equivalent to root.is_perfect
False
>>> props['is_strict'] # equivalent to root.is_strict
True | 5.00016 | 7.416539 | 0.67419 |
node_stack = []
result = []
node = self
while True:
if node is not None:
node_stack.append(node)
node = node.left
elif len(node_stack) > 0:
node = node_stack.pop()
result.append(node)
node = node.right
else:
break
return result | def inorder(self) | Return the nodes in the binary tree using in-order_ traversal.
An in-order_ traversal visits left subtree, root, then right subtree.
.. _in-order: https://en.wikipedia.org/wiki/Tree_traversal
:return: List of nodes.
:rtype: [binarytree.Node]
**Example**:
.. doctest::
>>> from binarytree import Node
>>>
>>> root = Node(1)
>>> root.left = Node(2)
>>> root.right = Node(3)
>>> root.left.left = Node(4)
>>> root.left.right = Node(5)
>>>
>>> print(root)
<BLANKLINE>
__1
/ \\
2 3
/ \\
4 5
<BLANKLINE>
>>> root.inorder
[Node(4), Node(2), Node(5), Node(1), Node(3)] | 1.878722 | 2.465797 | 0.761913 |
node_stack = [self]
result = []
while len(node_stack) > 0:
node = node_stack.pop()
result.append(node)
if node.right is not None:
node_stack.append(node.right)
if node.left is not None:
node_stack.append(node.left)
return result | def preorder(self) | Return the nodes in the binary tree using pre-order_ traversal.
A pre-order_ traversal visits root, left subtree, then right subtree.
.. _pre-order: https://en.wikipedia.org/wiki/Tree_traversal
:return: List of nodes.
:rtype: [binarytree.Node]
**Example**:
.. doctest::
>>> from binarytree import Node
>>>
>>> root = Node(1)
>>> root.left = Node(2)
>>> root.right = Node(3)
>>> root.left.left = Node(4)
>>> root.left.right = Node(5)
>>>
>>> print(root)
<BLANKLINE>
__1
/ \\
2 3
/ \\
4 5
<BLANKLINE>
>>> root.preorder
[Node(1), Node(2), Node(4), Node(5), Node(3)] | 1.707882 | 2.301873 | 0.741953 |
node_stack = []
result = []
node = self
while True:
while node is not None:
if node.right is not None:
node_stack.append(node.right)
node_stack.append(node)
node = node.left
node = node_stack.pop()
if (node.right is not None and
len(node_stack) > 0 and
node_stack[-1] is node.right):
node_stack.pop()
node_stack.append(node)
node = node.right
else:
result.append(node)
node = None
if len(node_stack) == 0:
break
return result | def postorder(self) | Return the nodes in the binary tree using post-order_ traversal.
A post-order_ traversal visits left subtree, right subtree, then root.
.. _post-order: https://en.wikipedia.org/wiki/Tree_traversal
:return: List of nodes.
:rtype: [binarytree.Node]
**Example**:
.. doctest::
>>> from binarytree import Node
>>>
>>> root = Node(1)
>>> root.left = Node(2)
>>> root.right = Node(3)
>>> root.left.left = Node(4)
>>> root.left.right = Node(5)
>>>
>>> print(root)
<BLANKLINE>
__1
/ \\
2 3
/ \\
4 5
<BLANKLINE>
>>> root.postorder
[Node(4), Node(5), Node(2), Node(3), Node(1)] | 1.792573 | 2.163114 | 0.8287 |
current_nodes = [self]
result = []
while len(current_nodes) > 0:
next_nodes = []
for node in current_nodes:
result.append(node)
if node.left is not None:
next_nodes.append(node.left)
if node.right is not None:
next_nodes.append(node.right)
current_nodes = next_nodes
return result | def levelorder(self) | Return the nodes in the binary tree using level-order_ traversal.
A level-order_ traversal visits nodes left to right, level by level.
.. _level-order:
https://en.wikipedia.org/wiki/Tree_traversal#Breadth-first_search
:return: List of nodes.
:rtype: [binarytree.Node]
**Example**:
.. doctest::
>>> from binarytree import Node
>>>
>>> root = Node(1)
>>> root.left = Node(2)
>>> root.right = Node(3)
>>> root.left.left = Node(4)
>>> root.left.right = Node(5)
>>>
>>> print(root)
<BLANKLINE>
__1
/ \\
2 3
/ \\
4 5
<BLANKLINE>
>>> root.levelorder
[Node(1), Node(2), Node(3), Node(4), Node(5)] | 1.747797 | 2.082999 | 0.839077 |
# type: (Optional[Text], Optional[Text]) -> BaseBackend
backend = backend or ORGS_INVITATION_BACKEND
class_module, class_name = backend.rsplit(".", 1)
mod = import_module(class_module)
return getattr(mod, class_name)(namespace=namespace) | def invitation_backend(backend=None, namespace=None) | Returns a specified invitation backend
Args:
backend: dotted path to the invitation backend class
namespace: URL namespace to use
Returns:
an instance of an InvitationBackend | 3.258252 | 5.776921 | 0.564012 |
# type: (Optional[Text], Optional[Text]) -> BaseBackend
backend = backend or ORGS_REGISTRATION_BACKEND
class_module, class_name = backend.rsplit(".", 1)
mod = import_module(class_module)
return getattr(mod, class_name)(namespace=namespace) | def registration_backend(backend=None, namespace=None) | Returns a specified registration backend
Args:
backend: dotted path to the registration backend class
namespace: URL namespace to use
Returns:
an instance of a RegistrationBackend | 3.342334 | 6.063863 | 0.551189 |
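Both backend loaders above follow the same dotted-path convention: split the setting on its last dot, import the module, and instantiate the named class. A minimal sketch of that idiom, assuming a purely hypothetical `myapp.backends.CustomInvitations` class:

```python
from importlib import import_module

def load_backend(dotted_path, **init_kwargs):
    """Import 'package.module.ClassName' and return an instance of ClassName."""
    module_path, class_name = dotted_path.rsplit(".", 1)
    module = import_module(module_path)          # e.g. myapp.backends
    backend_class = getattr(module, class_name)  # e.g. CustomInvitations
    return backend_class(**init_kwargs)

# Hypothetical usage mirroring invitation_backend()/registration_backend():
# backend = load_backend("myapp.backends.CustomInvitations", namespace="invites")
```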
class OrganizationRegistrationForm(forms.ModelForm):
email = forms.EmailField()
class Meta:
model = org_model
exclude = ("is_active", "users")
def save(self, *args, **kwargs):
self.instance.is_active = False
super(OrganizationRegistrationForm, self).save(*args, **kwargs)
return OrganizationRegistrationForm | def org_registration_form(org_model) | Generates a registration ModelForm for the given organization model class | 2.454501 | 2.390118 | 1.026937 |
try:
user = get_user_model().objects.get(
email__iexact=self.cleaned_data["email"]
)
except get_user_model().MultipleObjectsReturned:
raise forms.ValidationError(
_("This email address has been used multiple times.")
)
except get_user_model().DoesNotExist:
user = invitation_backend().invite_by_email(
self.cleaned_data["email"],
**{
"domain": get_current_site(self.request),
"organization": self.organization,
"sender": self.request.user,
}
)
# Send a notification email to this user to inform them that they
# have been added to a new organization.
invitation_backend().send_notification(
user,
**{
"domain": get_current_site(self.request),
"organization": self.organization,
"sender": self.request.user,
}
)
return OrganizationUser.objects.create(
user=user,
organization=self.organization,
is_admin=self.cleaned_data["is_admin"],
) | def save(self, *args, **kwargs) | The save method should create a new OrganizationUser linking the User
matching the provided email address. If no matching User is found it
should kick off the registration process. It needs to create a User in
order to link it to the Organization. | 2.349583 | 2.21657 | 1.060008 |
is_active = True
try:
user = get_user_model().objects.get(email=self.cleaned_data["email"])
except get_user_model().DoesNotExist:
user = invitation_backend().invite_by_email(
self.cleaned_data["email"],
**{
"domain": get_current_site(self.request),
"organization": self.cleaned_data["name"],
"sender": self.request.user,
"created": True,
}
)
is_active = False
return create_organization(
user,
self.cleaned_data["name"],
self.cleaned_data["slug"],
is_active=is_active,
) | def save(self, **kwargs) | Create the organization, then get the user, then make the owner. | 2.935928 | 2.63089 | 1.115945 |
# type: (Text, AbstractUser, AbstractBaseOrganization) -> OrganizationInvitationBase
try:
invitee = self.user_model.objects.get(email__iexact=email)
except self.user_model.DoesNotExist:
invitee = None
# TODO allow sending just the OrganizationUser instance
user_invitation = self.invitation_model.objects.create(
invitee=invitee,
invitee_identifier=email.lower(),
invited_by=user,
organization=organization,
)
self.send_invitation(user_invitation)
return user_invitation | def invite_by_email(self, email, user, organization, **kwargs) | Primary interface method by which one user invites another to join
Args:
email:
request:
**kwargs:
Returns:
an invitation instance
Raises:
MultipleObjectsReturned if multiple matching users are found | 3.612952 | 4.747854 | 0.760965 |
# type: (OrganizationInvitationBase) -> bool
return self.email_message(
invitation.invitee_identifier,
self.invitation_subject,
self.invitation_body,
invitation.invited_by,
**kwargs
).send() | def send_invitation(self, invitation, **kwargs) | Sends an invitation message for a specific invitation.
This could be overridden to do other things, such as sending a confirmation
email to the sender.
Args:
invitation:
Returns: | 5.316541 | 6.954588 | 0.764465 |
from_email = "%s %s <%s>" % (
sender.first_name,
sender.last_name,
email.utils.parseaddr(settings.DEFAULT_FROM_EMAIL)[1],
)
reply_to = "%s %s <%s>" % (sender.first_name, sender.last_name, sender.email)
headers = {"Reply-To": reply_to}
kwargs.update({"sender": sender, "recipient": recipient})
subject_template = loader.get_template(subject_template)
body_template = loader.get_template(body_template)
subject = subject_template.render(
kwargs
).strip() # Remove stray newline characters
body = body_template.render(kwargs)
return message_class(subject, body, from_email, [recipient], headers=headers) | def email_message(
self,
recipient, # type: Text
subject_template, # type: Text
body_template, # type: Text
sender=None, # type: Optional[AbstractUser]
message_class=EmailMessage,
**kwargs
) | Returns an invitation email message. This can be easily overridden.
For instance, to send an HTML message, use the EmailMultiAlternatives message_class
and attach the additional content. | 2.338998 | 2.284465 | 1.023871 |
try:
cls.module_registry[module]["OrgModel"]._meta.get_field("users")
except FieldDoesNotExist:
cls.module_registry[module]["OrgModel"].add_to_class(
"users",
models.ManyToManyField(
USER_MODEL,
through=cls.module_registry[module]["OrgUserModel"].__name__,
related_name="%(app_label)s_%(class)s",
),
)
cls.module_registry[module]["OrgModel"].invitation_model = cls.module_registry[
module
][
"OrgInviteModel"
] | def update_org(cls, module) | Adds the `users` field to the organization model | 2.926535 | 2.584506 | 1.132338 |
try:
cls.module_registry[module]["OrgUserModel"]._meta.get_field("user")
except FieldDoesNotExist:
cls.module_registry[module]["OrgUserModel"].add_to_class(
"user",
models.ForeignKey(
USER_MODEL,
related_name="%(app_label)s_%(class)s",
on_delete=models.CASCADE,
),
)
try:
cls.module_registry[module]["OrgUserModel"]._meta.get_field("organization")
except FieldDoesNotExist:
cls.module_registry[module]["OrgUserModel"].add_to_class(
"organization",
models.ForeignKey(
cls.module_registry[module]["OrgModel"],
related_name="organization_users",
on_delete=models.CASCADE,
),
) | def update_org_users(cls, module) | Adds the `user` field to the organization user model and the link to
the specific organization model. | 1.793379 | 1.633563 | 1.097833 |
try:
cls.module_registry[module]["OrgOwnerModel"]._meta.get_field(
"organization_user"
)
except FieldDoesNotExist:
cls.module_registry[module]["OrgOwnerModel"].add_to_class(
"organization_user",
models.OneToOneField(
cls.module_registry[module]["OrgUserModel"],
on_delete=models.CASCADE,
),
)
try:
cls.module_registry[module]["OrgOwnerModel"]._meta.get_field("organization")
except FieldDoesNotExist:
cls.module_registry[module]["OrgOwnerModel"].add_to_class(
"organization",
models.OneToOneField(
cls.module_registry[module]["OrgModel"],
related_name="owner",
on_delete=models.CASCADE,
),
) | def update_org_owner(cls, module) | Creates the links to the organization and organization user for the owner. | 1.874336 | 1.713965 | 1.093568 |
try:
cls.module_registry[module]["OrgInviteModel"]._meta.get_field("invited_by")
except FieldDoesNotExist:
cls.module_registry[module]["OrgInviteModel"].add_to_class(
"invited_by",
models.ForeignKey(
USER_MODEL,
related_name="%(app_label)s_%(class)s_sent_invitations",
on_delete=models.CASCADE,
),
)
try:
cls.module_registry[module]["OrgInviteModel"]._meta.get_field("invitee")
except FieldDoesNotExist:
cls.module_registry[module]["OrgInviteModel"].add_to_class(
"invitee",
models.ForeignKey(
USER_MODEL,
null=True,
blank=True,
related_name="%(app_label)s_%(class)s_invitations",
on_delete=models.CASCADE,
),
)
try:
cls.module_registry[module]["OrgInviteModel"]._meta.get_field(
"organization"
)
except FieldDoesNotExist:
cls.module_registry[module]["OrgInviteModel"].add_to_class(
"organization",
models.ForeignKey(
cls.module_registry[module]["OrgModel"],
related_name="organization_invites",
on_delete=models.CASCADE,
),
) | def update_org_invite(cls, module) | Adds the links to the organization and to the organization user | 1.558729 | 1.534208 | 1.015983 |
return "{0}_{1}".format(
self._meta.app_label.lower(), self.__class__.__name__.lower()
) | def user_relation_name(self) | Returns the string name of the related name to the user.
This provides a consistent interface across different organization
model classes. | 4.216812 | 3.793839 | 1.111489 |
if hasattr(self.user, "get_full_name"):
return self.user.get_full_name()
return "{0}".format(self.user) | def name(self) | Returns the connected user's full name or string representation if the
full name method is unavailable (e.g. on a custom user class). | 3.551901 | 2.831607 | 1.254377 |
org_user = self.organization.add_user(user, **self.activation_kwargs())
self.invitee = user
self.save()
return org_user | def activate(self, user) | Updates the `invitee` value and saves the instance
Provided as a way of extending the behavior.
Args:
user: the newly created user
Returns:
the linking organization user | 6.176212 | 5.356549 | 1.153021 |
if hasattr(self, "organization_user"):
return self.organization_user
organization_pk = self.kwargs.get("organization_pk", None)
user_pk = self.kwargs.get("user_pk", None)
self.organization_user = get_object_or_404(
self.get_user_model().objects.select_related(),
user__pk=user_pk,
organization__pk=organization_pk,
)
return self.organization_user | def get_object(self) | Returns the OrganizationUser object based on the primary keys for both
the organization and the organization user. | 2.378586 | 2.151841 | 1.105373 |
# Parse the token
try:
ts_b36, hash = token.split("-")
except ValueError:
return False
try:
ts = base36_to_int(ts_b36)
except ValueError:
return False
# Check that the timestamp/uid has not been tampered with
if not constant_time_compare(self._make_token_with_timestamp(user, ts), token):
return False
# Check the timestamp is within limit
if (self._num_days(self._today()) - ts) > REGISTRATION_TIMEOUT_DAYS:
return False
return True | def check_token(self, user, token) | Check that a registration token is correct for a given user. | 4.027878 | 3.675324 | 1.095924 |
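The token checked above has two dash-separated parts: a base36-encoded day counter and a keyed hash. The hash is compared in constant time; the day counter enforces expiry. Below is a small, self-contained sketch of just the expiry arithmetic (the hash half depends on the generator's secret and helper methods, so it is omitted; the numbers are illustrative):

```python
def base36_to_int(s):
    # Same conversion the token check relies on (digits 0-9 and a-z).
    return int(s, 36)

def token_age_ok(token, today_day_number, timeout_days):
    """Expiry half of check_token(): the part before '-' is the issue day."""
    ts = base36_to_int(token.split("-")[0])
    return (today_day_number - ts) <= timeout_days

# "ens" is 19000 in base36, so a token issued on day 19000 and checked on
# day 19010 passes a 14-day window but fails a 7-day one.
assert token_age_ok("ens-somehash", 19010, 14) is True
assert token_age_ok("ens-somehash", 19010, 7) is False
```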
org_model = kwargs.pop("model", None) or kwargs.pop(
"org_model", None
) or default_org_model()
kwargs.pop("org_user_model", None) # Discard deprecated argument
org_owner_model = org_model.owner.related.related_model
try:
# Django 1.9
org_user_model = org_model.organization_users.rel.related_model
except AttributeError:
# Django 1.8
org_user_model = org_model.organization_users.related.related_model
if org_defaults is None:
org_defaults = {}
if org_user_defaults is None:
if "is_admin" in model_field_names(org_user_model):
org_user_defaults = {"is_admin": True}
else:
org_user_defaults = {}
if slug is not None:
org_defaults.update({"slug": slug})
if is_active is not None:
org_defaults.update({"is_active": is_active})
org_defaults.update({"name": name})
organization = org_model.objects.create(**org_defaults)
org_user_defaults.update({"organization": organization, "user": user})
new_user = org_user_model.objects.create(**org_user_defaults)
org_owner_model.objects.create(
organization=organization, organization_user=new_user
)
return organization | def create_organization(
user,
name,
slug=None,
is_active=None,
org_defaults=None,
org_user_defaults=None,
**kwargs
) | Returns a new organization, also creating an initial organization user who
is the owner.
The specific models can be specified if a custom organization app is used.
The simplest way would be to use a partial.
>>> from organizations.utils import create_organization
>>> from myapp.models import Account
>>> from functools import partial
>>> create_account = partial(create_organization, model=Account) | 2.215972 | 2.218038 | 0.999069 |
fields = dict([(field.name, field) for field in model._meta.fields])
return getattr(fields[model_field], attr) | def model_field_attr(model, model_field, attr) | Returns the specified attribute for the specified field on the model class. | 2.824593 | 2.742651 | 1.029877 |
if not hasattr(self, "form_class"):
raise AttributeError(_("You must define a form_class"))
return self.form_class(**kwargs) | def get_form(self, **kwargs) | Returns the form for registering or inviting a user | 3.601847 | 3.72434 | 0.96711 |
try:
relation_name = self.org_model().user_relation_name
except TypeError:
# No org_model specified, raises a TypeError because NoneType is
# not callable. This is the most sensible default:
relation_name = "organizations_organization"
organization_set = getattr(user, relation_name)
for org in organization_set.filter(is_active=False):
org.is_active = True
org.save() | def activate_organizations(self, user) | Activates the related organizations for the user.
It only activates the related organizations by model type - that is, if
there are multiple types of organizations then only organizations in
the provided model class are activated. | 6.286587 | 6.125688 | 1.026266 |
try:
user = self.user_model.objects.get(id=user_id, is_active=False)
except self.user_model.DoesNotExist:
raise Http404(_("Your URL may have expired."))
if not RegistrationTokenGenerator().check_token(user, token):
raise Http404(_("Your URL may have expired."))
form = self.get_form(
data=request.POST or None, files=request.FILES or None, instance=user
)
if form.is_valid():
form.instance.is_active = True
user = form.save()
user.set_password(form.cleaned_data["password"])
user.save()
self.activate_organizations(user)
user = authenticate(
username=form.cleaned_data["username"],
password=form.cleaned_data["password"],
)
login(request, user)
return redirect(self.get_success_url())
return render(request, self.registration_form_template, {"form": form}) | def activate_view(self, request, user_id, token) | View function that activates the given User by setting `is_active` to
true if the provided information is verified. | 2.172362 | 2.184312 | 0.994529 |
if user.is_active:
return False
token = RegistrationTokenGenerator().make_token(user)
kwargs.update({"token": token})
self.email_message(
user, self.reminder_subject, self.reminder_body, sender, **kwargs
).send() | def send_reminder(self, user, sender=None, **kwargs) | Sends a reminder email to the specified user | 4.120022 | 4.219953 | 0.97632 |
if sender:
try:
display_name = sender.get_full_name()
except (AttributeError, TypeError):
display_name = sender.get_username()
from_email = "%s <%s>" % (
display_name, email.utils.parseaddr(settings.DEFAULT_FROM_EMAIL)[1]
)
reply_to = "%s <%s>" % (display_name, sender.email)
else:
from_email = settings.DEFAULT_FROM_EMAIL
reply_to = from_email
headers = {"Reply-To": reply_to}
kwargs.update({"sender": sender, "user": user})
subject_template = loader.get_template(subject_template)
body_template = loader.get_template(body_template)
subject = subject_template.render(
kwargs
).strip() # Remove stray newline characters
body = body_template.render(kwargs)
return message_class(subject, body, from_email, [user.email], headers=headers) | def email_message(
self,
user,
subject_template,
body_template,
sender=None,
message_class=EmailMessage,
**kwargs
) | Returns an email message for a new user. This can be easily overridden.
For instance, to send an HTML message, use the EmailMultiAlternatives message_class
and attach the additional content. | 2.285553 | 2.198557 | 1.03957 |
try:
user = self.user_model.objects.get(email=email)
except self.user_model.DoesNotExist:
user = self.user_model.objects.create(
username=self.get_username(),
email=email,
password=self.user_model.objects.make_random_password(),
)
user.is_active = False
user.save()
self.send_activation(user, sender, **kwargs)
return user | def register_by_email(self, email, sender=None, request=None, **kwargs) | Returns a User object filled with dummy data and not active, and sends
an invitation email. | 2.016287 | 2.00606 | 1.005098 |
if user.is_active:
return False
token = self.get_token(user)
kwargs.update({"token": token})
self.email_message(
user, self.activation_subject, self.activation_body, sender, **kwargs
).send() | def send_activation(self, user, sender=None, **kwargs) | Invites a user to join the site | 3.381951 | 3.566803 | 0.948174 |
try:
if request.user.is_authenticated():
return redirect("organization_add")
except TypeError:
if request.user.is_authenticated:
return redirect("organization_add")
form = org_registration_form(self.org_model)(request.POST or None)
if form.is_valid():
try:
user = self.user_model.objects.get(email=form.cleaned_data["email"])
except self.user_model.DoesNotExist:
user = self.user_model.objects.create(
username=self.get_username(),
email=form.cleaned_data["email"],
password=self.user_model.objects.make_random_password(),
)
user.is_active = False
user.save()
else:
return redirect("organization_add")
organization = create_organization(
user,
form.cleaned_data["name"],
form.cleaned_data["slug"],
is_active=False,
)
return render(
request,
self.activation_success_template,
{"user": user, "organization": organization},
)
return render(request, self.registration_form_template, {"form": form}) | def create_view(self, request) | Initiates the organization and user account creation process | 2.202793 | 2.141173 | 1.028778 |
try:
user = self.user_model.objects.get(email=email)
except self.user_model.DoesNotExist:
# TODO break out user creation process
if "username" in inspect.getargspec(
self.user_model.objects.create_user
).args:
user = self.user_model.objects.create(
username=self.get_username(),
email=email,
password=self.user_model.objects.make_random_password(),
)
else:
user = self.user_model.objects.create(
email=email, password=self.user_model.objects.make_random_password()
)
user.is_active = False
user.save()
self.send_invitation(user, sender, **kwargs)
return user | def invite_by_email(self, email, sender=None, request=None, **kwargs) | Creates an inactive user with the information we know and then sends
an invitation email for that user to complete registration.
If your project uses email in a different way then you should make sure to
extend this method as it only checks the `email` attribute for Users. | 2.151868 | 2.129338 | 1.01058 |
if user.is_active:
return False
token = self.get_token(user)
kwargs.update({"token": token})
self.email_message(
user, self.invitation_subject, self.invitation_body, sender, **kwargs
).send()
return True | def send_invitation(self, user, sender=None, **kwargs) | An intermediary function for sending an invitation email that
selects the templates, generates the token, and ensures that the user
has not already joined the site. | 3.34565 | 3.294924 | 1.015395 |
if not user.is_active:
return False
self.email_message(
user, self.notification_subject, self.notification_body, sender, **kwargs
).send()
return True | def send_notification(self, user, sender=None, **kwargs) | An intermediary function for sending an notification email informing
a pre-existing, active user that they have been added to a new
organization. | 4.180814 | 4.735067 | 0.882947 |
users_count = self.users.all().count()
if users_count == 0:
is_admin = True
# TODO get specific org user?
org_user = self._org_user_model.objects.create(
user=user, organization=self, is_admin=is_admin
)
if users_count == 0:
# TODO get specific org user?
self._org_owner_model.objects.create(
organization=self, organization_user=org_user
)
# User added signal
user_added.send(sender=self, user=user)
return org_user | def add_user(self, user, is_admin=False) | Adds a new user and, if it is the first user, makes the user an admin and
the owner. | 3.494553 | 3.300638 | 1.058751 |
org_user = self._org_user_model.objects.get(user=user, organization=self)
org_user.delete()
# User removed signal
user_removed.send(sender=self, user=user) | def remove_user(self, user) | Deletes a user from an organization. | 4.773029 | 4.038393 | 1.181913 |
is_admin = kwargs.pop("is_admin", False)
users_count = self.users.all().count()
if users_count == 0:
is_admin = True
org_user, created = self._org_user_model.objects.get_or_create(
organization=self, user=user, defaults={"is_admin": is_admin}
)
if users_count == 0:
self._org_owner_model.objects.create(
organization=self, organization_user=org_user
)
if created:
# User added signal
user_added.send(sender=self, user=user)
return org_user, created | def get_or_add_user(self, user, **kwargs) | Adds a new user to the organization, and, if it's the first user, makes
the user an admin and the owner. Uses the `get_or_create` method to
create or return the existing user.
`user` should be a user instance, e.g. `auth.User`.
Returns the same tuple as the `get_or_create` method, the
`OrganizationUser` and a boolean value indicating whether the
OrganizationUser was created or not. | 2.789866 | 2.550682 | 1.093773 |
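Because the docstring mirrors Django's `get_or_create` contract, callers typically unpack the returned two-tuple. A hedged usage sketch (the `org` and `user` instances are assumed here, not defined by the snippet above):

```python
# Hypothetical usage, assuming an Organization instance `org` and a User `user`:
org_user, created = org.get_or_add_user(user, is_admin=False)
if created:
    print("Added {} to {}".format(org_user.user, org.name))
else:
    print("{} was already a member".format(org_user.user))
```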
old_owner = self.owner.organization_user
self.owner.organization_user = new_owner
self.owner.save()
# Owner changed signal
owner_changed.send(sender=self, old=old_owner, new=new_owner) | def change_owner(self, new_owner) | Changes ownership of an organization. | 3.681262 | 3.3383 | 1.102736 |
return True if self.organization_users.filter(
user=user, is_admin=True
) else False | def is_admin(self, user) | Returns True if the user is an admin in the organization, otherwise False | 6.965381 | 5.230572 | 1.331667 |
from organizations.exceptions import OwnershipRequired
try:
if self.organization.owner.organization_user.pk == self.pk:
raise OwnershipRequired(
_(
"Cannot delete organization owner "
"before organization or transferring ownership."
)
)
# TODO This line presumes that OrgOwner model can't be modified
except self._org_owner_model.DoesNotExist:
pass
super(AbstractBaseOrganizationUser, self).delete(using=using) | def delete(self, using=None) | If the organization user is also the owner, this should not be deleted
unless it's part of a cascade from the Organization.
If there is no owner then the deletion should proceed. | 8.489153 | 7.340519 | 1.156479 |
from organizations.exceptions import OrganizationMismatch
if self.organization_user.organization.pk != self.organization.pk:
raise OrganizationMismatch
else:
super(AbstractBaseOrganizationOwner, self).save(*args, **kwargs) | def save(self, *args, **kwargs) | Extends the default save method by verifying that the chosen
organization user is associated with the organization.
Method validates against the primary key of the organization because
when validating an inherited model it may be checking an instance of
`Organization` against an instance of `CustomOrganization`. Multi-table
inheritance means the database keys will be identical though. | 6.576055 | 4.556627 | 1.443185 |
parser = argparse.ArgumentParser(description='Convert ULog to KML')
parser.add_argument('filename', metavar='file.ulg', help='ULog input file')
parser.add_argument('-o', '--output', dest='output_filename',
help="output filename", default='track.kml')
parser.add_argument('--topic', dest='topic_name',
help="topic name with position data (default=vehicle_gps_position)",
default='vehicle_gps_position')
parser.add_argument('--camera-trigger', dest='camera_trigger',
help="Camera trigger topic name (e.g. camera_capture)",
default=None)
args = parser.parse_args()
convert_ulog2kml(args.filename, args.output_filename,
position_topic_name=args.topic_name,
camera_trigger_topic_name=args.camera_trigger) | def main() | Command line interface | 2.879434 | 2.885156 | 0.998017 |
x = max([x, 0])
colors_arr = [simplekml.Color.red, simplekml.Color.green, simplekml.Color.blue,
simplekml.Color.violet, simplekml.Color.yellow, simplekml.Color.orange,
simplekml.Color.burlywood, simplekml.Color.azure, simplekml.Color.lightblue,
simplekml.Color.lawngreen, simplekml.Color.indianred, simplekml.Color.hotpink]
return colors_arr[x] | def _kml_default_colors(x) | flight mode to color conversion | 2.436793 | 2.413771 | 1.009538 |
default_style = {
'extrude': False,
'line_width': 3
}
used_style = default_style
if style is not None:
for key in style:
used_style[key] = style[key]
if not isinstance(position_topic_name, list):
position_topic_name = [position_topic_name]
colors = [colors]
kml = simplekml.Kml()
load_topic_names = position_topic_name + ['vehicle_status']
if camera_trigger_topic_name is not None:
load_topic_names.append(camera_trigger_topic_name)
ulog = ULog(ulog_file_name, load_topic_names)
# get flight modes
try:
cur_dataset = ulog.get_dataset('vehicle_status')
flight_mode_changes = cur_dataset.list_value_changes('nav_state')
flight_mode_changes.append((ulog.last_timestamp, -1))
except (KeyError, IndexError) as error:
flight_mode_changes = []
# add the graphs
for topic, cur_colors in zip(position_topic_name, colors):
_kml_add_position_data(kml, ulog, topic, cur_colors, used_style,
altitude_offset, minimum_interval_s, flight_mode_changes)
# camera triggers
_kml_add_camera_triggers(kml, ulog, camera_trigger_topic_name, altitude_offset)
kml.save(output_file_name) | def convert_ulog2kml(ulog_file_name, output_file_name, position_topic_name=
'vehicle_gps_position', colors=_kml_default_colors, altitude_offset=0,
minimum_interval_s=0.1, style=None, camera_trigger_topic_name=None) | Converts a ULog file to a KML file.
:param ulog_file_name: The ULog filename to open and read
:param output_file_name: KML Output file name
:param position_topic_name: either name of a topic (must have 'lon', 'lat' &
'alt' fields), or a list of topic names
:param colors: lambda function with flight mode (int) (or -1) as input and
returns a color (eg 'fffff8f0') (or list of lambda functions if
multiple position_topic_name's)
:param altitude_offset: add this offset to the altitude [m]
:param minimum_interval_s: minimum time difference between two datapoints
(drop if more points)
:param style: dictionary with rendering options:
'extrude': Bool
'line_width': int
:param camera_trigger_topic_name: name of the camera trigger topic (must
have 'lon', 'lat' & 'seq')
:return: None | 2.630103 | 2.520136 | 1.043635 |
data = ulog.data_list
topic_instance = 0
cur_dataset = [elem for elem in data
if elem.name == camera_trigger_topic_name and elem.multi_id == topic_instance]
if len(cur_dataset) > 0:
cur_dataset = cur_dataset[0]
pos_lon = cur_dataset.data['lon']
pos_lat = cur_dataset.data['lat']
pos_alt = cur_dataset.data['alt']
sequence = cur_dataset.data['seq']
for i in range(len(pos_lon)):
pnt = kml.newpoint(name='Camera Trigger '+str(sequence[i]))
pnt.coords = [(pos_lon[i], pos_lat[i], pos_alt[i] + altitude_offset)] | def _kml_add_camera_triggers(kml, ulog, camera_trigger_topic_name, altitude_offset) | Add camera trigger points to the map | 2.80298 | 2.665483 | 1.051585 |
parser = argparse.ArgumentParser(
description='Extract the raw gps communication from an ULog file')
parser.add_argument('filename', metavar='file.ulg', help='ULog input file')
def is_valid_directory(parser, arg):
if not os.path.isdir(arg):
parser.error('The directory {} does not exist'.format(arg))
# File exists so return the directory
return arg
parser.add_argument('-o', '--output', dest='output', action='store',
help='Output directory (default is CWD)',
metavar='DIR', type=lambda x: is_valid_directory(parser, x))
args = parser.parse_args()
ulog_file_name = args.filename
msg_filter = ['gps_dump']
ulog = ULog(ulog_file_name, msg_filter)
data = ulog.data_list
output_file_prefix = os.path.basename(ulog_file_name)
# strip '.ulg'
if output_file_prefix.lower().endswith('.ulg'):
output_file_prefix = output_file_prefix[:-4]
# write to different output path?
if args.output is not None:
output_file_prefix = os.path.join(args.output, output_file_prefix)
to_dev_filename = output_file_prefix+'_to_device.dat'
from_dev_filename = output_file_prefix+'_from_device.dat'
if len(data) == 0:
print("File {0} does not contain gps_dump messages!".format(ulog_file_name))
exit(0)
gps_dump_data = data[0]
# message format check
field_names = [f.field_name for f in gps_dump_data.field_data]
if not 'len' in field_names or not 'data[0]' in field_names:
print('Error: gps_dump message has wrong format')
exit(-1)
if len(ulog.dropouts) > 0:
print("Warning: file contains {0} dropouts".format(len(ulog.dropouts)))
print("Creating files {0} and {1}".format(to_dev_filename, from_dev_filename))
with open(to_dev_filename, 'wb') as to_dev_file:
with open(from_dev_filename, 'wb') as from_dev_file:
msg_lens = gps_dump_data.data['len']
for i in range(len(gps_dump_data.data['timestamp'])):
msg_len = msg_lens[i]
if msg_len & (1<<7):
msg_len = msg_len & ~(1<<7)
file_handle = to_dev_file
else:
file_handle = from_dev_file
for k in range(msg_len):
file_handle.write(gps_dump_data.data['data['+str(k)+']'][i]) | def main() | Command line interface | 2.665493 | 2.656722 | 1.003301 |
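In the `gps_dump` extraction above, bit 7 of the `len` field encodes the direction: when set, the payload was sent to the GPS device and the true length is the remaining seven bits; otherwise it came from the device. A tiny worked example of that masking (the byte values are illustrative):

```python
def split_gps_dump_len(raw_len):
    """Decode the gps_dump 'len' byte into (payload_length, to_device_flag)."""
    to_device = bool(raw_len & (1 << 7))   # bit 7 = direction flag
    length = raw_len & ~(1 << 7)           # lower 7 bits = payload length
    return length, to_device

# 0x85 = 0b1000_0101 -> 5 bytes sent to the device
# 0x05 = 0b0000_0101 -> 5 bytes received from the device
assert split_gps_dump_len(0x85) == (5, True)
assert split_gps_dump_len(0x05) == (5, False)
```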
return [elem for elem in self._data_list
if elem.name == name and elem.multi_id == multi_instance][0] | def get_dataset(self, name, multi_instance=0) | get a specific dataset.
example:
try:
gyro_data = ulog.get_dataset('sensor_gyro')
except (KeyError, IndexError, ValueError) as error:
print(type(error), "(sensor_gyro):", error)
:param name: name of the dataset
:param multi_instance: the multi_id, defaults to the first
:raises KeyError, IndexError, ValueError: if name or instance not found | 6.32483 | 5.671854 | 1.115126 |
if msg_info.key in self._msg_info_multiple_dict:
if msg_info.is_continued:
self._msg_info_multiple_dict[msg_info.key][-1].append(msg_info.value)
else:
self._msg_info_multiple_dict[msg_info.key].append([msg_info.value])
else:
self._msg_info_multiple_dict[msg_info.key] = [[msg_info.value]] | def _add_message_info_multiple(self, msg_info) | add a message info multiple to self._msg_info_multiple_dict | 1.96871 | 1.652282 | 1.19151 |
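The dictionary built above holds, per key, a list of lists: a non-continued message starts a fresh inner list and a continued one appends to the last list. A minimal standalone sketch of the same bookkeeping, with plain values in place of parsed message objects:

```python
def add_info_multiple(store, key, value, is_continued):
    """store maps key -> list of lists of values."""
    if key in store and is_continued:
        store[key][-1].append(value)   # continue the current inner list
    elif key in store:
        store[key].append([value])     # start a new inner list for this key
    else:
        store[key] = [[value]]         # first occurrence of the key
    return store

s = {}
add_info_multiple(s, "perf_top", "part 1", False)
add_info_multiple(s, "perf_top", "part 2", True)
add_info_multiple(s, "perf_top", "part A", False)
assert s == {"perf_top": [["part 1", "part 2"], ["part A"]]}
```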
if isinstance(log_file, str):
self._file_handle = open(log_file, "rb")
else:
self._file_handle = log_file
# parse the whole file
self._read_file_header()
self._last_timestamp = self._start_timestamp
self._read_file_definitions()
if self.has_data_appended and len(self._appended_offsets) > 0:
if self._debug:
print('This file has data appended')
for offset in self._appended_offsets:
self._read_file_data(message_name_filter_list, read_until=offset)
self._file_handle.seek(offset)
# read the whole file, or the rest if data appended
self._read_file_data(message_name_filter_list)
self._file_handle.close()
del self._file_handle | def _load_file(self, log_file, message_name_filter_list) | load and parse an ULog file into memory | 3.396372 | 3.172439 | 1.070587 |
if read_until is None:
read_until = 1 << 50 # make it larger than any possible log file
try:
# pre-init reusable objects
header = self._MessageHeader()
msg_data = self._MessageData()
while True:
data = self._file_handle.read(3)
header.initialize(data)
data = self._file_handle.read(header.msg_size)
if len(data) < header.msg_size:
break # less data than expected. File is most likely cut
if self._file_handle.tell() > read_until:
if self._debug:
print('read until offset=%i done, current pos=%i' %
(read_until, self._file_handle.tell()))
break
if header.msg_type == self.MSG_TYPE_INFO:
msg_info = self._MessageInfo(data, header)
self._msg_info_dict[msg_info.key] = msg_info.value
elif header.msg_type == self.MSG_TYPE_INFO_MULTIPLE:
msg_info = self._MessageInfo(data, header, is_info_multiple=True)
self._add_message_info_multiple(msg_info)
elif header.msg_type == self.MSG_TYPE_PARAMETER:
msg_info = self._MessageInfo(data, header)
self._changed_parameters.append((self._last_timestamp,
msg_info.key, msg_info.value))
elif header.msg_type == self.MSG_TYPE_ADD_LOGGED_MSG:
msg_add_logged = self._MessageAddLogged(data, header,
self._message_formats)
if (message_name_filter_list is None or
msg_add_logged.message_name in message_name_filter_list):
self._subscriptions[msg_add_logged.msg_id] = msg_add_logged
else:
self._filtered_message_ids.add(msg_add_logged.msg_id)
elif header.msg_type == self.MSG_TYPE_LOGGING:
msg_logging = self.MessageLogging(data, header)
self._logged_messages.append(msg_logging)
elif header.msg_type == self.MSG_TYPE_DATA:
msg_data.initialize(data, header, self._subscriptions, self)
if msg_data.timestamp != 0 and msg_data.timestamp > self._last_timestamp:
self._last_timestamp = msg_data.timestamp
elif header.msg_type == self.MSG_TYPE_DROPOUT:
msg_dropout = self.MessageDropout(data, header,
self._last_timestamp)
self._dropouts.append(msg_dropout)
else:
if self._debug:
print('_read_file_data: unknown message type: %i (%s)' %
(header.msg_type, chr(header.msg_type)))
file_position = self._file_handle.tell()
print('file position: %i (0x%x) msg size: %i' % (
file_position, file_position, header.msg_size))
if self._check_file_corruption(header):
# seek back to advance only by a single byte instead of
# skipping the message
self._file_handle.seek(-2-header.msg_size, 1)
except struct.error:
pass #we read past the end of the file
# convert into final representation
while self._subscriptions:
_, value = self._subscriptions.popitem()
if len(value.buffer) > 0: # only add if we have data
data_item = ULog.Data(value)
self._data_list.append(data_item) | def _read_file_data(self, message_name_filter_list, read_until=None) | read the file data section
:param read_until: an optional file offset: if set, parse only up to
this offset (i.e. stop once the read position passes it) | 3.026786 | 3.002519 | 1.008082 |
# We need to handle 2 cases:
# - corrupt file (we do our best to read the rest of the file)
# - new ULog message type got added (we just want to skip the message)
if header.msg_type == 0 or header.msg_size == 0 or header.msg_size > 10000:
if not self._file_corrupt and self._debug:
print('File corruption detected')
self._file_corrupt = True
return self._file_corrupt | def _check_file_corruption(self, header) | check for file corruption based on an unknown message type in the header | 5.856185 | 5.452462 | 1.074044 |
if key_name in self._msg_info_dict:
val = self._msg_info_dict[key_name]
return ((val >> 24) & 0xff, (val >> 16) & 0xff, (val >> 8) & 0xff, val & 0xff)
return None | def get_version_info(self, key_name='ver_sw_release') | get the (major, minor, patch, type) version information as tuple.
Returns None if not found
definition of type is:
>= 0: development
>= 64: alpha version
>= 128: beta version
>= 192: RC version
== 255: release version | 2.234154 | 2.185542 | 1.022242 |
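`ver_sw_release` packs four bytes into a single integer: major, minor, patch and a release-type byte interpreted by the thresholds listed in the docstring. A worked example of the same shifting and masking on made-up values:

```python
def unpack_version(val):
    """Split a packed 32-bit version into (major, minor, patch, type)."""
    return ((val >> 24) & 0xff, (val >> 16) & 0xff, (val >> 8) & 0xff, val & 0xff)

# 0x010900FF -> v1.9.0 with type 255 (a release build per the table above)
assert unpack_version(0x010900FF) == (1, 9, 0, 255)
# 0x010A02C0 -> v1.10.2 with type 192 (release-candidate range)
assert unpack_version(0x010A02C0) == (1, 10, 2, 192)
```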
version = self.get_version_info(key_name)
if not version is None and version[3] >= 64:
type_str = ''
if version[3] < 128: type_str = ' (alpha)'
elif version[3] < 192: type_str = ' (beta)'
elif version[3] < 255: type_str = ' (RC)'
return 'v{}.{}.{}{}'.format(version[0], version[1], version[2], type_str)
return None | def get_version_info_str(self, key_name='ver_sw_release') | get version information in the form 'v1.2.3 (RC)', or None if the version
tag is not found or it's a development version | 2.585657 | 2.449822 | 1.055447 |
parser = argparse.ArgumentParser(description='Convert ULog to CSV')
parser.add_argument('filename', metavar='file.ulg', help='ULog input file')
parser.add_argument(
'-m', '--messages', dest='messages',
help=("Only consider given messages. Must be a comma-separated list of"
" names, like 'sensor_combined,vehicle_gps_position'"))
parser.add_argument('-d', '--delimiter', dest='delimiter', action='store',
help="Use delimiter in CSV (default is ',')", default=',')
parser.add_argument('-o', '--output', dest='output', action='store',
help='Output directory (default is same as input file)',
metavar='DIR')
args = parser.parse_args()
if args.output and not os.path.isdir(args.output):
print('Creating output directory {:}'.format(args.output))
os.mkdir(args.output)
convert_ulog2csv(args.filename, args.messages, args.output, args.delimiter) | def main() | Command line interface | 3.202885 | 3.204958 | 0.999353 |
msg_filter = messages.split(',') if messages else None
ulog = ULog(ulog_file_name, msg_filter)
data = ulog.data_list
output_file_prefix = ulog_file_name
# strip '.ulg'
if output_file_prefix.lower().endswith('.ulg'):
output_file_prefix = output_file_prefix[:-4]
# write to different output path?
if output:
base_name = os.path.basename(output_file_prefix)
output_file_prefix = os.path.join(output, base_name)
for d in data:
fmt = '{0}_{1}_{2}.csv'
output_file_name = fmt.format(output_file_prefix, d.name, d.multi_id)
fmt = 'Writing {0} ({1} data points)'
# print(fmt.format(output_file_name, len(d.data['timestamp'])))
with open(output_file_name, 'w') as csvfile:
# use same field order as in the log, except for the timestamp
data_keys = [f.field_name for f in d.field_data]
data_keys.remove('timestamp')
data_keys.insert(0, 'timestamp') # we want timestamp at first position
# we don't use np.savetxt, because we have multiple arrays with
# potentially different data types. However the following is quite
# slow...
# write the header
csvfile.write(delimiter.join(data_keys) + '\n')
# write the data
last_elem = len(data_keys)-1
for i in range(len(d.data['timestamp'])):
for k in range(len(data_keys)):
csvfile.write(str(d.data[data_keys[k]][i]))
if k != last_elem:
csvfile.write(delimiter)
csvfile.write('\n') | def convert_ulog2csv(ulog_file_name, messages, output, delimiter) | Converts a ULog file to a CSV file.
:param ulog_file_name: The ULog filename to open and read
:param messages: A list of message names
:param output: Output file path
:param delimiter: CSV delimiter
:return: None | 3.046809 | 3.101909 | 0.982237 |
m1, s1 = divmod(int(ulog.start_timestamp/1e6), 60)
h1, m1 = divmod(m1, 60)
m2, s2 = divmod(int((ulog.last_timestamp - ulog.start_timestamp)/1e6), 60)
h2, m2 = divmod(m2, 60)
print("Logging start time: {:d}:{:02d}:{:02d}, duration: {:d}:{:02d}:{:02d}".format(
h1, m1, s1, h2, m2, s2))
dropout_durations = [dropout.duration for dropout in ulog.dropouts]
if len(dropout_durations) == 0:
print("No Dropouts")
else:
print("Dropouts: count: {:}, total duration: {:.1f} s, max: {:} ms, mean: {:} ms"
.format(len(dropout_durations), sum(dropout_durations)/1000.,
max(dropout_durations),
int(sum(dropout_durations)/len(dropout_durations))))
version = ulog.get_version_info_str()
if not version is None:
print('SW Version: {}'.format(version))
print("Info Messages:")
for k in sorted(ulog.msg_info_dict):
if not k.startswith('perf_') or verbose:
print(" {0}: {1}".format(k, ulog.msg_info_dict[k]))
if len(ulog.msg_info_multiple_dict) > 0:
if verbose:
print("Info Multiple Messages:")
for k in sorted(ulog.msg_info_multiple_dict):
print(" {0}: {1}".format(k, ulog.msg_info_multiple_dict[k]))
else:
print("Info Multiple Messages: {}".format(
", ".join(["[{}: {}]".format(k, len(ulog.msg_info_multiple_dict[k])) for k in
sorted(ulog.msg_info_multiple_dict)])))
print("")
print("{:<41} {:7}, {:10}".format("Name (multi id, message size in bytes)",
"number of data points", "total bytes"))
data_list_sorted = sorted(ulog.data_list, key=lambda d: d.name + str(d.multi_id))
for d in data_list_sorted:
message_size = sum([ULog.get_field_size(f.type_str) for f in d.field_data])
num_data_points = len(d.data['timestamp'])
name_id = "{:} ({:}, {:})".format(d.name, d.multi_id, message_size)
print(" {:<40} {:7d} {:10d}".format(name_id, num_data_points,
message_size * num_data_points)) | def show_info(ulog, verbose) | Show general information from an ULog | 2.654045 | 2.653331 | 1.000269 |
parser = argparse.ArgumentParser(description='Display information from an ULog file')
parser.add_argument('filename', metavar='file.ulg', help='ULog input file')
parser.add_argument('-v', '--verbose', dest='verbose', action='store_true',
help='Verbose output', default=False)
parser.add_argument('-m', '--message', dest='message',
help='Show a specific Info Multiple Message')
parser.add_argument('-n', '--newline', dest='newline', action='store_true',
help='Add newline separators (only with --message)', default=False)
args = parser.parse_args()
ulog_file_name = args.filename
ulog = ULog(ulog_file_name)
message = args.message
if message:
separator = ""
if args.newline: separator = "\n"
if len(ulog.msg_info_multiple_dict) > 0 and message in ulog.msg_info_multiple_dict:
message_info_multiple = ulog.msg_info_multiple_dict[message]
for i, m in enumerate(message_info_multiple):
print("# {} {}:".format(message, i))
print(separator.join(m))
else:
print("message {} not found".format(message))
else:
show_info(ulog, args.verbose) | def main() | Command line interface | 2.641922 | 2.621634 | 1.007739 |
parser = argparse.ArgumentParser(description='Display logged messages from an ULog file')
parser.add_argument('filename', metavar='file.ulg', help='ULog input file')
args = parser.parse_args()
ulog_file_name = args.filename
msg_filter = [] # we don't need the data messages
ulog = ULog(ulog_file_name, msg_filter)
for m in ulog.logged_messages:
m1, s1 = divmod(int(m.timestamp/1e6), 60)
h1, m1 = divmod(m1, 60)
print("{:d}:{:02d}:{:02d} {:}: {:}".format(
h1, m1, s1, m.log_level_str(), m.message)) | def main() | Command line interface | 3.615132 | 3.54272 | 1.02044 |
parser = argparse.ArgumentParser(description='Extract parameters from an ULog file')
parser.add_argument('filename', metavar='file.ulg', help='ULog input file')
parser.add_argument('-d', '--delimiter', dest='delimiter', action='store',
help='Use delimiter in CSV (default is \',\')', default=',')
parser.add_argument('-i', '--initial', dest='initial', action='store_true',
help='Only extract initial parameters', default=False)
parser.add_argument('-o', '--octave', dest='octave', action='store_true',
help='Use Octave format', default=False)
parser.add_argument('-t', '--timestamps', dest='timestamps', action='store_true',
help='Extract changed parameters with timestamps', default=False)
parser.add_argument('output_filename', metavar='params.txt',
type=argparse.FileType('w'), nargs='?',
help='Output filename (default=stdout)', default=sys.stdout)
args = parser.parse_args()
ulog_file_name = args.filename
message_filter = []
if not args.initial: message_filter = None
ulog = ULog(ulog_file_name, message_filter)
param_keys = sorted(ulog.initial_parameters.keys())
delimiter = args.delimiter
output_file = args.output_filename
if not args.octave:
for param_key in param_keys:
output_file.write(param_key)
if args.initial:
output_file.write(delimiter)
output_file.write(str(ulog.initial_parameters[param_key]))
output_file.write('\n')
elif args.timestamps:
output_file.write(delimiter)
output_file.write(str(ulog.initial_parameters[param_key]))
for t, name, value in ulog.changed_parameters:
if name == param_key:
output_file.write(delimiter)
output_file.write(str(value))
output_file.write('\n')
output_file.write("timestamp")
output_file.write(delimiter)
output_file.write('0')
for t, name, value in ulog.changed_parameters:
if name == param_key:
output_file.write(delimiter)
output_file.write(str(t))
output_file.write('\n')
else:
for t, name, value in ulog.changed_parameters:
if name == param_key:
output_file.write(delimiter)
output_file.write(str(value))
output_file.write('\n')
else:
for param_key in param_keys:
output_file.write('# name ')
output_file.write(param_key)
values = [ulog.initial_parameters[param_key]]
if not args.initial:
for t, name, value in ulog.changed_parameters:
if name == param_key:
values += [value]
if len(values) > 1:
output_file.write('\n# type: matrix\n')
output_file.write('# rows: 1\n')
output_file.write('# columns: ')
output_file.write(str(len(values)) + '\n')
for value in values:
output_file.write(str(value) + ' ')
else:
output_file.write('\n# type: scalar\n')
output_file.write(str(values[0]))
output_file.write('\n') | def main() | Command line interface | 1.921724 | 1.919935 | 1.000932 |
mav_type = self._ulog.initial_parameters.get('MAV_TYPE', None)
if mav_type == 1: # fixed wing always uses EKF2
return 'EKF2'
mc_est_group = self._ulog.initial_parameters.get('SYS_MC_EST_GROUP', None)
return {0: 'INAV',
1: 'LPE',
2: 'EKF2',
3: 'IEKF'}.get(mc_est_group, 'unknown ({})'.format(mc_est_group)) | def get_estimator(self) | return the configured estimator as string from initial parameters | 6.349438 | 5.630481 | 1.12769 |
self._add_roll_pitch_yaw_to_message('vehicle_attitude')
self._add_roll_pitch_yaw_to_message('vehicle_vision_attitude')
self._add_roll_pitch_yaw_to_message('vehicle_attitude_groundtruth')
self._add_roll_pitch_yaw_to_message('vehicle_attitude_setpoint', '_d') | def add_roll_pitch_yaw(self) | convenience method to add the fields 'roll', 'pitch', 'yaw' to the
loaded data using the quaternion fields (does not update field_data).
Messages are: 'vehicle_attitude.q' and 'vehicle_attitude_setpoint.q_d',
'vehicle_attitude_groundtruth.q' and 'vehicle_vision_attitude.q' | 3.346507 | 1.858368 | 1.800777 |
ret_val = []
for key in self._ulog.initial_parameters:
param_val = self._ulog.initial_parameters[key]
if key.startswith('RC_MAP_') and param_val == channel + 1:
ret_val.append(key[7:].capitalize())
if len(ret_val) > 0:
return ret_val
return None | def get_configured_rc_input_names(self, channel) | find all RC mappings to a given channel and return their names
:param channel: input channel (0=first)
:return: list of strings or None | 4.320639 | 4.22786 | 1.021945 |
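The lookup above relies on the PX4 convention that `RC_MAP_*` parameters store 1-based channel numbers (0 meaning unmapped), so channel 0 matches a parameter value of 1. A standalone sketch over a plain parameter dict (the parameter values are illustrative):

```python
def rc_input_names(initial_parameters, channel):
    """Return the input names mapped to a 0-based RC channel, or None."""
    names = [key[7:].capitalize()                 # strip the 'RC_MAP_' prefix
             for key, value in initial_parameters.items()
             if key.startswith("RC_MAP_") and value == channel + 1]
    return names or None

params = {"RC_MAP_THROTTLE": 3, "RC_MAP_ROLL": 1, "RC_MAP_AUX1": 0}
assert rc_input_names(params, 0) == ["Roll"]       # channel 0 -> value 1
assert rc_input_names(params, 2) == ["Throttle"]   # channel 2 -> value 3
assert rc_input_names(params, 5) is None
```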
directory = os.path.abspath(directory)
config = load_config(directory)
to_err = draft
click.echo("Loading template...", err=to_err)
if config["template"] is None:
template = pkg_resources.resource_string(
__name__, "templates/template.rst"
).decode("utf8")
else:
with open(config["template"], "rb") as tmpl:
template = tmpl.read().decode("utf8")
click.echo("Finding news fragments...", err=to_err)
definitions = config["types"]
if config.get("directory"):
base_directory = os.path.abspath(config["directory"])
fragment_directory = None
else:
base_directory = os.path.abspath(
os.path.join(directory, config["package_dir"], config["package"])
)
fragment_directory = "newsfragments"
fragments, fragment_filenames = find_fragments(
base_directory, config["sections"], fragment_directory, definitions
)
click.echo("Rendering news fragments...", err=to_err)
fragments = split_fragments(fragments, definitions)
rendered = render_fragments(
# The 0th underline is used for the top line
template,
config["issue_format"],
fragments,
definitions,
config["underlines"][1:],
config["wrap"],
)
if project_version is None:
project_version = get_version(
os.path.join(directory, config["package_dir"]), config["package"]
)
if project_name is None:
package = config.get("package")
if package:
project_name = get_project_name(
os.path.abspath(os.path.join(directory, config["package_dir"])), package
)
else:
# Can't determine a project_name, but maybe it is not needed.
project_name = ""
if project_date is None:
project_date = _get_date()
top_line = config["title_format"].format(
name=project_name, version=project_version, project_date=project_date
)
top_line += u"\n" + (config["underlines"][0] * len(top_line)) + u"\n"
if draft:
click.echo(
"Draft only -- nothing has been written.\n"
"What is seen below is what would be written.\n",
err=to_err,
)
click.echo("%s\n%s" % (top_line, rendered))
else:
click.echo("Writing to newsfile...", err=to_err)
start_line = config["start_line"]
append_to_newsfile(
directory, config["filename"], start_line, top_line, rendered
)
click.echo("Staging newsfile...", err=to_err)
stage_newsfile(directory, config["filename"])
click.echo("Removing news fragments...", err=to_err)
remove_files(fragment_filenames, answer_yes)
click.echo("Done!", err=to_err) | def __main(draft, directory, project_name, project_version, project_date, answer_yes) | The main entry point. | 3.044418 | 3.037898 | 1.002146 |
content = OrderedDict()
fragment_filenames = []
for key, val in sections.items():
if fragment_directory is not None:
section_dir = os.path.join(base_directory, val, fragment_directory)
else:
section_dir = os.path.join(base_directory, val)
files = os.listdir(section_dir)
file_content = {}
for basename in files:
parts = basename.split(u".")
counter = 0
if len(parts) == 1:
continue
else:
ticket, category = parts[:2]
# If there is a number after the category then use it as a counter,
# otherwise ignore it.
# This means 1.feature.1 and 1.feature do not conflict but
# 1.feature.rst and 1.feature do.
if len(parts) > 2:
try:
counter = int(parts[2])
except ValueError:
pass
if category not in definitions:
continue
full_filename = os.path.join(section_dir, basename)
fragment_filenames.append(full_filename)
with open(full_filename, "rb") as f:
data = f.read().decode("utf8", "replace")
if (ticket, category, counter) in file_content:
raise ValueError(
"multiple files for {}.{} in {}".format(
ticket, category, section_dir
)
)
file_content[ticket, category, counter] = data
content[key] = file_content
return content, fragment_filenames | def find_fragments(base_directory, sections, fragment_directory, definitions) | Sections are a dictionary of section names to paths. | 2.873085 | 2.848692 | 1.008563 |
# Based on Python 3's textwrap.indent
def prefixed_lines():
for line in text.splitlines(True):
yield (prefix + line if line.strip() else line)
return u"".join(prefixed_lines()) | def indent(text, prefix) | Adds `prefix` to the beginning of non-empty lines in `text`. | 3.253721 | 3.444002 | 0.94475 |
jinja_template = Template(template, trim_blocks=True)
data = OrderedDict()
for section_name, section_value in fragments.items():
data[section_name] = OrderedDict()
for category_name, category_value in section_value.items():
# Suppose we start with an ordering like this:
#
# - Fix the thing (#7, #123, #2)
# - Fix the other thing (#1)
# First we sort the issues inside each line:
#
# - Fix the thing (#2, #7, #123)
# - Fix the other thing (#1)
entries = []
for text, issues in category_value.items():
entries.append((text, sorted(issues, key=issue_key)))
# Then we sort the lines:
#
# - Fix the other thing (#1)
# - Fix the thing (#2, #7, #123)
entries.sort(key=entry_key)
# Then we put these nicely sorted entries back in an ordered dict
# for the template, after formatting each issue number
categories = OrderedDict()
for text, issues in entries:
rendered = [render_issue(issue_format, i) for i in issues]
categories[text] = rendered
data[section_name][category_name] = categories
done = []
res = jinja_template.render(
sections=data, definitions=definitions, underlines=underlines
)
for line in res.split(u"\n"):
if wrap:
done.append(
textwrap.fill(
line,
width=79,
subsequent_indent=u" ",
break_long_words=False,
break_on_hyphens=False,
)
)
else:
done.append(line)
return u"\n".join(done).rstrip() + u"\n" | def render_fragments(template, issue_format, fragments, definitions, underlines, wrap) | Render the fragments into a news file. | 2.948184 | 2.943806 | 1.001487 |
a = 3 / 8 if n <= 10 else 0.5
return (np.arange(n) + 1 - a) / (n + 1 - 2 * a) | def _ppoints(n, a=0.5) | Ordinates For Probability Plotting.
Numpy analogue of `R`'s `ppoints` function.
Parameters
----------
n : int
Number of points generated
a : float
Offset fraction (typically between 0 and 1)
Returns
-------
p : array
Sequence of probabilities at which to evaluate the inverse
distribution. | 4.674747 | 6.23789 | 0.749411 |
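A rough worked example of the formula above (sketch; output rounded rather than shown verbatim). With n = 5 the offset a = 3/8 is used, so each ordinate is (i + 1 - 0.375) / 5.25:
>>> _ppoints(5)   # -> approximately [0.119, 0.310, 0.500, 0.690, 0.881]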
# Check that statsmodels is installed
from pingouin.utils import _is_statsmodels_installed
_is_statsmodels_installed(raise_error=True)
from statsmodels.api import stats
from statsmodels.formula.api import ols
# Check that covariates are numeric ('float', 'int')
assert all([data[covar[i]].dtype.kind in 'fi' for i in range(len(covar))])
# Fit ANCOVA model
formula = dv + ' ~ C(' + between + ')'
for c in covar:
formula += ' + ' + c
model = ols(formula, data=data).fit()
aov = stats.anova_lm(model, typ=2).reset_index()
aov.rename(columns={'index': 'Source', 'sum_sq': 'SS',
'df': 'DF', 'PR(>F)': 'p-unc'}, inplace=True)
aov.loc[0, 'Source'] = between
aov['DF'] = aov['DF'].astype(int)
aov[['SS', 'F']] = aov[['SS', 'F']].round(3)
# Export to .csv
if export_filename is not None:
_export_table(aov, export_filename)
return aov | def ancovan(dv=None, covar=None, between=None, data=None,
export_filename=None) | ANCOVA with n covariates.
This is an internal function. The main call to this function should be done
by the :py:func:`pingouin.ancova` function.
Parameters
----------
dv : string
Name of column containing the dependent variable.
covar : list of string
Names of the columns containing the covariates.
between : string
Name of column containing the between factor.
data : pandas DataFrame
DataFrame
export_filename : string
Filename (without extension) for the output file.
If None, do not export the table.
By default, the file will be created in the current python console
directory. To change that, specify the filename with full path.
Returns
-------
aov : DataFrame
ANCOVA summary ::
'Source' : Names of the factor considered
'SS' : Sums of squares
'DF' : Degrees of freedom
'F' : F-values
'p-unc' : Uncorrected p-values | 3.508712 | 2.968722 | 1.181893 |
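A hypothetical call for illustration (sketch only; the data frame and column names below are made up, and in practice this helper is reached through :py:func:`pingouin.ancova`):
>>> import pandas as pd
>>> df = pd.DataFrame({'Scores': [7.1, 5.8, 6.4, 8.2, 4.9, 6.0],
...                    'Method': ['A', 'A', 'A', 'B', 'B', 'B'],
...                    'Age': [21, 25, 30, 22, 27, 33],
...                    'IQ': [98, 102, 110, 95, 104, 120]})
>>> aov = ancovan(dv='Scores', covar=['Age', 'IQ'], between='Method', data=df)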
# Check extension
d, ext = op.splitext(dname)
if ext.lower() == '.csv':
dname = d
# Check that dataset exist
if dname not in dts['dataset'].values:
raise ValueError('Dataset does not exist. Valid dataset names are',
dts['dataset'].values)
# Load dataset
return pd.read_csv(op.join(ddir, dname + '.csv'), sep=',') | def read_dataset(dname) | Read example datasets.
Parameters
----------
dname : string
Name of dataset to read (without extension).
Must be a valid dataset present in pingouin.datasets
Returns
-------
data : pd.DataFrame
Dataset
Examples
--------
Load the ANOVA dataset
>>> from pingouin import read_dataset
>>> df = read_dataset('anova') | 4.244759 | 5.309356 | 0.799487 |
assert tail in ['two-sided', 'upper', 'lower'], 'Wrong tail argument.'
assert isinstance(estimate, (int, float))
bootstat = np.asarray(bootstat)
assert bootstat.ndim == 1, 'bootstat must be a 1D array.'
n_boot = bootstat.size
assert n_boot >= 1, 'bootstat must have at least one value.'
if tail == 'upper':
p = np.greater_equal(bootstat, estimate).sum() / n_boot
elif tail == 'lower':
p = np.less_equal(bootstat, estimate).sum() / n_boot
else:
p = np.greater_equal(np.fabs(bootstat), abs(estimate)).sum() / n_boot
return p | def _perm_pval(bootstat, estimate, tail='two-sided') | Compute p-values from a permutation test.
Parameters
----------
bootstat : 1D array
Permutation distribution.
estimate : float or int
Point estimate.
tail : str
'upper': one-sided p-value (upper tail)
'lower': one-sided p-value (lower tail)
'two-sided': two-sided p-value
Returns
-------
p : float
P-value. | 2.148579 | 2.154421 | 0.997289 |
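A small worked example under the default two-sided tail (sketch; the helper is internal and expects a NumPy array):
>>> import numpy as np
>>> boot = np.array([-0.5, 0.1, 0.4, 1.2, -1.3, 0.8, 2.1, -0.2, 0.3, 0.9])
>>> _perm_pval(boot, estimate=1.0)   # 3 of the 10 |values| are >= 1, so p = 0.3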
if 'F' in df.keys():
print('\n=============\nANOVA SUMMARY\n=============\n')
if 'A' in df.keys():
print('\n==============\nPOST HOC TESTS\n==============\n')
print(tabulate(df, headers="keys", showindex=False, floatfmt=floatfmt,
tablefmt=tablefmt))
print('') | def print_table(df, floatfmt=".3f", tablefmt='simple') | Pretty display of table.
See: https://pypi.org/project/tabulate/.
Parameters
----------
df : DataFrame
Dataframe to print (e.g. ANOVA summary)
floatfmt : string
Decimal number formatting
tablefmt : string
Table format (e.g. 'simple', 'plain', 'html', 'latex', 'grid') | 4.325255 | 4.285407 | 1.009298 |
import os.path as op
extension = op.splitext(fname.lower())[1]
if extension == '':
fname = fname + '.csv'
table.to_csv(fname, index=None, sep=',', encoding='utf-8',
float_format='%.4f', decimal='.') | def _export_table(table, fname) | Export DataFrame to .csv | 3.410904 | 3.168677 | 1.076444 |
if x.ndim == 1:
# 1D arrays
x_mask = ~np.isnan(x)
else:
# 2D arrays
ax = 1 if axis == 'rows' else 0
x_mask = ~np.any(np.isnan(x), axis=ax)
# Check if missing values are present
if ~x_mask.all():
ax = 0 if axis == 'rows' else 1
ax = 0 if x.ndim == 1 else ax
x = x.compress(x_mask, axis=ax)
return x | def _remove_na_single(x, axis='rows') | Remove NaN in a single array.
This is an internal Pingouin function. | 2.557142 | 2.499004 | 1.023264 |
# Safety checks
x = np.asarray(x)
assert x.size > 1, 'x must have more than one element.'
assert axis in ['rows', 'columns'], 'axis must be rows or columns.'
if y is None:
return _remove_na_single(x, axis=axis)
elif isinstance(y, (int, float, str)):
return _remove_na_single(x, axis=axis), y
elif isinstance(y, (list, np.ndarray)):
y = np.asarray(y)
# Make sure that we just pass through if y has only one element
if y.size == 1:
return _remove_na_single(x, axis=axis), y
if x.ndim != y.ndim or paired is False:
# x and y do not have the same dimension
x_no_nan = _remove_na_single(x, axis=axis)
y_no_nan = _remove_na_single(y, axis=axis)
return x_no_nan, y_no_nan
# At this point, we assume that x and y are paired and have the same dimensions
if x.ndim == 1:
# 1D arrays
x_mask = ~np.isnan(x)
y_mask = ~np.isnan(y)
else:
# 2D arrays
ax = 1 if axis == 'rows' else 0
x_mask = ~np.any(np.isnan(x), axis=ax)
y_mask = ~np.any(np.isnan(y), axis=ax)
# Check if missing values are present
if ~x_mask.all() or ~y_mask.all():
ax = 0 if axis == 'rows' else 1
ax = 0 if x.ndim == 1 else ax
both = np.logical_and(x_mask, y_mask)
x = x.compress(both, axis=ax)
y = y.compress(both, axis=ax)
return x, y | def remove_na(x, y=None, paired=False, axis='rows') | Remove missing values along a given axis in one or more (paired) numpy
arrays.
Parameters
----------
x, y : 1D or 2D arrays
Data. ``x`` and ``y`` must have the same number of dimensions.
``y`` can be None to only remove missing values in ``x``.
paired : bool
Indicates if the measurements are paired or not.
axis : str
Axis or axes along which missing values are removed.
Can be 'rows' or 'columns'. This has no effect if ``x`` and ``y`` are
one-dimensional arrays.
Returns
-------
x, y : np.ndarray
Data without missing values
Examples
--------
Single 1D array
>>> import numpy as np
>>> from pingouin import remove_na
>>> x = [6.4, 3.2, 4.5, np.nan]
>>> remove_na(x)
array([6.4, 3.2, 4.5])
With two paired 1D arrays
>>> y = [2.3, np.nan, 5.2, 4.6]
>>> remove_na(x, y, paired=True)
(array([6.4, 4.5]), array([2.3, 5.2]))
With two independent 2D arrays
>>> x = np.array([[4, 2], [4, np.nan], [7, 6]])
>>> y = np.array([[6, np.nan], [3, 2], [2, 2]])
>>> x_no_nan, y_no_nan = remove_na(x, y, paired=False) | 2.285095 | 2.482 | 0.920667 |
# Safety checks
assert isinstance(aggregate, str), 'aggregate must be a str.'
assert isinstance(within, (str, list)), 'within must be str or list.'
assert isinstance(subject, str), 'subject must be a string.'
assert isinstance(data, pd.DataFrame), 'Data must be a DataFrame.'
idx_cols = _flatten_list([subject, within])
all_cols = data.columns
if data[idx_cols].isnull().any().any():
raise ValueError("NaN are present in the within-factors or in the "
"subject column. Please remove them manually.")
# Check if more within-factors are present and if so, aggregate
if (data.groupby(idx_cols).count() > 1).any().any():
# Make sure that we keep the non-numeric columns when aggregating
# This is disabled by default to avoid any confusion.
# all_others = all_cols.difference(idx_cols)
# all_num = data[all_others].select_dtypes(include='number').columns
# agg = {c: aggregate if c in all_num else 'first' for c in all_others}
data = data.groupby(idx_cols).agg(aggregate)
else:
# Set subject + within factors as index.
# Sorting is done to avoid performance warning when dropping.
data = data.set_index(idx_cols).sort_index()
# Find index with missing values
if dv is None:
iloc_nan = data.isnull().values.nonzero()[0]
else:
iloc_nan = data[dv].isnull().values.nonzero()[0]
# Drop the last within level
idx_nan = data.index[iloc_nan].droplevel(-1)
# Drop and re-order
data = data.drop(idx_nan).reset_index(drop=False)
return data.reindex(columns=all_cols).dropna(how='all', axis=1) | def remove_rm_na(dv=None, within=None, subject=None, data=None,
aggregate='mean') | Remove missing values in long-format repeated-measures dataframe.
Parameters
----------
dv : string or list
Dependent variable(s), from which the missing values should be removed.
If ``dv`` is not specified, all the columns in the dataframe are
considered. ``dv`` must be numeric.
within : string or list
Within-subject factor(s).
subject : string
Subject identifier.
data : dataframe
Long-format dataframe.
aggregate : string
Aggregation method if there are more within-factors in the data than
specified in the ``within`` argument. Can be `mean`, `median`, `sum`,
`first`, `last`, or any other function accepted by
:py:meth:`pandas.DataFrame.groupby`.
Returns
-------
data : dataframe
Dataframe without the missing values.
Notes
-----
If multiple factors are specified, the missing values are removed on the
last factor, so the order of ``within`` is important.
In addition, if there are more within-factors in the data than specified in
the ``within`` argument, data will be aggregated using the function
specified in ``aggregate``. Note that in the default case (aggregation
using the mean), all the non-numeric column(s) will be dropped. | 4.033256 | 3.81918 | 1.056053 |
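For illustration (a sketch with made-up data): a subject with a missing score in one condition is dropped entirely, since ``Time`` is the last (and only) within factor here:
>>> import numpy as np, pandas as pd
>>> df = pd.DataFrame({'Subject': [1, 1, 2, 2, 3, 3],
...                    'Time': ['pre', 'post'] * 3,
...                    'Score': [4.2, 4.8, 3.9, np.nan, 5.1, 5.5]})
>>> remove_rm_na(dv='Score', within='Time', subject='Subject', data=df)  # subject 2 is removed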
result = []
# Remove None
x = list(filter(None.__ne__, x))
for el in x:
el_is_iter = isinstance(el, collections.abc.Iterable)
if el_is_iter and not isinstance(el, (str, tuple)):
result.extend(_flatten_list(el))
else:
result.append(el)
return result | def _flatten_list(x) | Flatten an arbitrarily nested list into a new list.
This can be useful to select pandas DataFrame columns.
From https://stackoverflow.com/a/16176969/10581531
Examples
--------
>>> from pingouin.utils import _flatten_list
>>> x = ['X1', ['M1', 'M2'], 'Y1', ['Y2']]
>>> _flatten_list(x)
['X1', 'M1', 'M2', 'Y1', 'Y2']
>>> x = ['Xaa', 'Xbb', 'Xcc']
>>> _flatten_list(x)
['Xaa', 'Xbb', 'Xcc'] | 2.678454 | 3.748447 | 0.71455 |
# Check that data is a dataframe
if not isinstance(data, pd.DataFrame):
raise ValueError('Data must be a pandas dataframe.')
# Check that both dv and data are provided.
if any(v is None for v in [dv, data]):
raise ValueError('DV and data must be specified')
# Check that dv is a numeric variable
if data[dv].dtype.kind not in 'fi':
raise ValueError('DV must be numeric.')
# Check that effects is provided
if effects not in ['within', 'between', 'interaction', 'all']:
raise ValueError('Effects must be: within, between, interaction, all')
# Check that within is a string or a list (rm_anova2)
if effects == 'within' and not isinstance(within, (str, list)):
raise ValueError('within must be a string or a list.')
# Check that subject identifier is provided in rm_anova and friedman.
if effects == 'within' and subject is None:
raise ValueError('subject must be specified when effects=within')
# Check that between is a string or a list (anova2)
if effects == 'between' and not isinstance(between, (str, list)):
raise ValueError('between must be a string or a list.')
# Check that both between and within are present for interaction
if effects == 'interaction':
for input in [within, between]:
if not isinstance(input, (str, list)):
raise ValueError('within and between must be specified when '
'effects=interaction') | def _check_dataframe(dv=None, between=None, within=None, subject=None,
effects=None, data=None) | Check dataframe | 2.857942 | 2.820945 | 1.013115 |
if bf >= 1e4 or bf <= 1e-4:
out = np.format_float_scientific(bf, precision=precision, trim=trim)
else:
out = np.format_float_positional(bf, precision=precision, trim=trim)
return out | def _format_bf(bf, precision=3, trim='0') | Format BF10 to floating point or scientific notation. | 2.38458 | 2.143653 | 1.112391 |
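Rough behaviour of the two branches (sketch; the exact strings depend on NumPy's float formatting):
>>> _format_bf(8.2214)     # moderate values -> positional notation, e.g. '8.221'
>>> _format_bf(154321.9)   # values >= 1e4 or <= 1e-4 -> scientific, e.g. '1.543e+05'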
from scipy.special import gamma
# Function to be integrated
def fun(g, r, n):
return np.exp(((n - 2) / 2) * np.log(1 + g) + (-(n - 1) / 2)
* np.log(1 + (1 - r**2) * g) + (-3 / 2)
* np.log(g) - n / (2 * g))
# JZS Bayes factor calculation
integr = quad(fun, 0, np.inf, args=(r, n))[0]
bf10 = np.sqrt((n / 2)) / gamma(1 / 2) * integr
return _format_bf(bf10) | def bayesfactor_pearson(r, n) | Bayes Factor of a Pearson correlation.
Parameters
----------
r : float
Pearson correlation coefficient
n : int
Sample size
Returns
-------
bf : str
Bayes Factor (BF10).
The Bayes Factor quantifies the evidence in favour of the alternative
hypothesis.
Notes
-----
Adapted from a Matlab code found at
https://github.com/anne-urai/Tools/blob/master/stats/BayesFactors/corrbf.m
If you would like to compute the Bayes Factor directly from the raw data
instead of from the correlation coefficient, use the
:py:func:`pingouin.corr` function.
The JZS Bayes Factor is approximated using the formula described in
ref [1]_:
.. math::
BF_{10} = \\frac{\\sqrt{n/2}}{\\Gamma(1/2)}
\\int_{0}^{\\infty}\\exp\\Big(\\frac{n-2}{2}\\log(1+g)
- \\frac{n-1}{2}\\log\\big(1+(1-r^2)g\\big)
- \\frac{3}{2}\\log(g) - \\frac{n}{2g}\\Big)\\,dg
where **n** is the sample size and **r** is the Pearson correlation
coefficient.
References
----------
.. [1] Wetzels, R., Wagenmakers, E.-J., 2012. A default Bayesian
hypothesis test for correlations and partial correlations.
Psychon. Bull. Rev. 19, 1057–1064.
https://doi.org/10.3758/s13423-012-0295-x
Examples
--------
Bayes Factor of a Pearson correlation
>>> from pingouin import bayesfactor_pearson
>>> bf = bayesfactor_pearson(0.6, 20)
>>> print("Bayes Factor: %s" % bf)
Bayes Factor: 8.221 | 5.013548 | 3.229557 | 1.552395 |
from scipy.stats import gmean
# Geometric mean
geo_mean = gmean(x)
# Geometric standard deviation
gstd = np.exp(np.sqrt(np.sum((np.log(x / geo_mean))**2) / (len(x) - 1)))
# Geometric z-score
return np.log(x / geo_mean) / np.log(gstd) | def gzscore(x) | Geometric standard (Z) score.
Parameters
----------
x : array_like
Array of raw values
Returns
-------
gzscore : array_like
Array of geometric z-scores (same shape as x)
Notes
-----
Geometric Z-scores are better measures of dispersion than arithmetic
z-scores when the sample data come from a log-normally distributed
population.
Given the raw scores :math:`x`, the geometric mean :math:`\\mu_g` and
the geometric standard deviation :math:`\\sigma_g`,
the standard score is given by the formula:
.. math:: z = \\frac{log(x) - log(\\mu_g)}{log(\\sigma_g)}
References
----------
.. [1] https://en.wikipedia.org/wiki/Geometric_standard_deviation
Examples
--------
Standardize a log-normal array
>>> import numpy as np
>>> from pingouin import gzscore
>>> np.random.seed(123)
>>> raw = np.random.lognormal(size=100)
>>> z = gzscore(raw)
>>> print(round(z.mean(), 3), round(z.std(), 3))
-0.0 0.995 | 3.001783 | 3.495718 | 0.858703 |
from scipy.stats import anderson as ads
k = len(args)
from_dist = np.zeros(k, 'bool')
sig_level = np.zeros(k)
for j in range(k):
st, cr, sig = ads(args[j], dist=dist)
from_dist[j] = True if (st > cr).any() else False
sig_level[j] = sig[np.argmin(np.abs(st - cr))]
if k == 1:
from_dist = bool(from_dist)
sig_level = float(sig_level)
return from_dist, sig_level | def anderson(*args, dist='norm') | Anderson-Darling test of distribution.
Parameters
----------
sample1, sample2,... : array_like
Array of sample data. May be different lengths.
dist : string
Distribution ('norm', 'expon', 'logistic', 'gumbel')
Returns
-------
from_dist : boolean
True if the test statistic exceeds the critical value at one of the
significance levels, i.e. the hypothesis that the data come from the
given distribution is rejected.
sig_level : float
The significance levels for the corresponding critical values in %.
(See :py:func:`scipy.stats.anderson` for more details)
Examples
--------
1. Test that an array comes from a normal distribution
>>> from pingouin import anderson
>>> x = [2.3, 5.1, 4.3, 2.6, 7.8, 9.2, 1.4]
>>> anderson(x, dist='norm')
(False, 15.0)
2. Test that two arrays come from an exponential distribution
>>> y = [2.8, 12.4, 28.3, 3.2, 16.3, 14.2]
>>> anderson(x, y, dist='expon')
(array([False, False]), array([15., 15.])) | 3.141508 | 3.428428 | 0.916312 |
if not _check_eftype(eftype):
err = "Could not interpret input '{}'".format(eftype)
raise ValueError(err)
if not isinstance(tval, float):
err = "T-value must be float"
raise ValueError(err)
# Compute Cohen d (Lakens, 2013)
if nx is not None and ny is not None:
d = tval * np.sqrt(1 / nx + 1 / ny)
elif N is not None:
d = 2 * tval / np.sqrt(N)
else:
raise ValueError('You must specify either nx + ny, or just N')
return convert_effsize(d, 'cohen', eftype, nx=nx, ny=ny) | def compute_effsize_from_t(tval, nx=None, ny=None, N=None, eftype='cohen') | Compute effect size from a T-value.
Parameters
----------
tval : float
T-value
nx, ny : int, optional
Group sample sizes.
N : int, optional
Total sample size (will not be used if nx and ny are specified)
eftype : string, optional
desired output effect size
Returns
-------
ef : float
Effect size
See Also
--------
compute_effsize : Calculate effect size between two set of observations.
convert_effsize : Conversion between effect sizes.
Notes
-----
If both nx and ny are specified, the formula to convert from *t* to *d* is:
.. math:: d = t * \\sqrt{\\frac{1}{n_x} + \\frac{1}{n_y}}
If only N (total sample size) is specified, the formula is:
.. math:: d = \\frac{2t}{\\sqrt{N}}
Examples
--------
1. Compute effect size from a T-value when both sample sizes are known.
>>> from pingouin import compute_effsize_from_t
>>> tval, nx, ny = 2.90, 35, 25
>>> d = compute_effsize_from_t(tval, nx=nx, ny=ny, eftype='cohen')
>>> print(d)
0.7593982580212534
2. Compute effect size when only total sample size is known (nx+ny)
>>> tval, N = 2.90, 60
>>> d = compute_effsize_from_t(tval, N=N, eftype='cohen')
>>> print(d)
0.7487767802667672 | 4.072371 | 3.744373 | 1.087598 |
# Check that sklearn is installed
from pingouin.utils import _is_sklearn_installed
_is_sklearn_installed(raise_error=True)
from scipy.stats import chi2
from sklearn.covariance import MinCovDet
X = np.column_stack((x, y))
nrows, ncols = X.shape
gval = np.sqrt(chi2.ppf(0.975, 2))
# Compute center and distance to center
center = MinCovDet(random_state=42).fit(X).location_
B = X - center
B2 = B**2
bot = B2.sum(axis=1)
# Loop over rows
dis = np.zeros(shape=(nrows, nrows))
for i in np.arange(nrows):
if bot[i] != 0:
dis[i, :] = np.linalg.norm(B * B2[i, :] / bot[i], axis=1)
# Detect outliers
def idealf(x):
n = len(x)
j = int(np.floor(n / 4 + 5 / 12))
y = np.sort(x)
g = (n / 4) - j + (5 / 12)
low = (1 - g) * y[j - 1] + g * y[j]
k = n - j + 1
up = (1 - g) * y[k - 1] + g * y[k - 2]
return up - low
# One can either use the MAD or the IQR (see Wilcox 2012)
# MAD = mad(dis, axis=1)
iqr = np.apply_along_axis(idealf, 1, dis)
thresh = (np.median(dis, axis=1) + gval * iqr)
outliers = np.apply_along_axis(np.greater, 0, dis, thresh).any(axis=0)
# Compute correlation on remaining data
if method == 'spearman':
r, pval = spearmanr(X[~outliers, 0], X[~outliers, 1])
else:
r, pval = pearsonr(X[~outliers, 0], X[~outliers, 1])
return r, pval, outliers | def skipped(x, y, method='spearman') | Skipped correlation (Rousselet and Pernet 2012).
Parameters
----------
x, y : array_like
First and second set of observations. x and y must be independent.
method : str
Method used to compute the correlation after outlier removal. Can be
either 'spearman' (default) or 'pearson'.
Returns
-------
r : float
Skipped correlation coefficient.
pval : float
Two-tailed p-value.
outliers : array of bool
Indicate if value is an outlier or not
Notes
-----
The skipped correlation involves multivariate outlier detection using a
projection technique (Wilcox, 2004, 2005). First, a robust estimator of
multivariate location and scatter, for instance the minimum covariance
determinant estimator (MCD; Rousseeuw, 1984; Rousseeuw and van Driessen,
1999; Hubert et al., 2008) is computed. Second, data points are
orthogonally projected on lines joining each of the data point to the
location estimator. Third, outliers are detected using a robust technique.
Finally, Spearman correlations are computed on the remaining data points
and calculations are adjusted by taking into account the dependency among
the remaining data points.
Code inspired by Matlab code from Cyril Pernet and Guillaume
Rousselet [1]_.
Requires scikit-learn.
References
----------
.. [1] Pernet CR, Wilcox R, Rousselet GA. Robust Correlation Analyses:
False Positive and Power Validation Using a New Open Source Matlab
Toolbox. Frontiers in Psychology. 2012;3:606.
doi:10.3389/fpsyg.2012.00606. | 3.191869 | 3.028901 | 1.053804 |
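A usage sketch (made-up data; one bivariate outlier is injected, which the projection step is expected to flag before the correlation is computed):
>>> import numpy as np
>>> np.random.seed(42)
>>> x = np.random.normal(size=100)
>>> y = x + np.random.normal(scale=0.5, size=100)
>>> y[10] += 10                      # inject an outlier
>>> r, pval, outliers = skipped(x, y, method='spearman')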
n, m = b.shape
MD = np.zeros((n, n_boot))
nr = np.arange(n)
xB = np.random.choice(nr, size=(n_boot, n), replace=True)
# Bootstrap the MD
for i in np.arange(n_boot):
s1 = b[xB[i, :], 0]
s2 = b[xB[i, :], 1]
X = np.column_stack((s1, s2))
mu = X.mean(0)
_, R = np.linalg.qr(X - mu)
sol = np.linalg.solve(R.T, (a - mu).T)
MD[:, i] = np.sum(sol**2, 0) * (n - 1)
# Average across all bootstraps
return MD.mean(1) | def bsmahal(a, b, n_boot=200) | Bootstraps Mahalanobis distances for Shepherd's pi correlation.
Parameters
----------
a : ndarray (shape=(n, 2))
Data
b : ndarray (shape=(n, 2))
Data
n_boot : int
Number of bootstrap samples to calculate.
Returns
-------
m : ndarray (shape=(n,))
Mahalanobis distance for each row in a, averaged across all the
bootstrap resamples. | 3.05934 | 3.106973 | 0.984669 |
from scipy.stats import spearmanr
X = np.column_stack((x, y))
# Bootstrapping on Mahalanobis distance
m = bsmahal(X, X, n_boot)
# Determine outliers
outliers = (m >= 6)
# Compute correlation
r, pval = spearmanr(x[~outliers], y[~outliers])
# (optional) double the p-value to achieve a nominal false alarm rate
# pval *= 2
# pval = 1 if pval > 1 else pval
return r, pval, outliers | def shepherd(x, y, n_boot=200) | Shepherd's Pi correlation, equivalent to Spearman's rho after outliers
removal.
Parameters
----------
x, y : array_like
First and second set of observations. x and y must be independent.
n_boot : int
Number of bootstrap samples to calculate.
Returns
-------
r : float
Pi correlation coefficient
pval : float
Two-tailed adjusted p-value.
outliers : array of bool
Indicate if value is an outlier or not
Notes
-----
It first bootstraps the Mahalanobis distances, removes all observations
with m >= 6 and finally calculates the correlation of the remaining data.
Pi is Spearman's Rho after outlier removal. | 5.023576 | 4.186525 | 1.199939 |
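A usage sketch with simulated data (assumptions: the made-up bivariate sample below and the default 200 bootstrap resamples):
>>> import numpy as np
>>> np.random.seed(123)
>>> mean, cov = [170, 70], [[20, 10], [10, 20]]
>>> x, y = np.random.multivariate_normal(mean, cov, 30).T
>>> r, pval, outliers = shepherd(x, y, n_boot=200)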
from scipy.stats import t
X = np.column_stack((x, y))
nx = X.shape[0]
M = np.tile(np.median(X, axis=0), nx).reshape(X.shape)
W = np.sort(np.abs(X - M), axis=0)
m = int((1 - beta) * nx)
omega = W[m - 1, :]
P = (X - M) / omega
P[np.isinf(P)] = 0
P[np.isnan(P)] = 0
# Loop over columns
a = np.zeros((2, nx))
for c in [0, 1]:
psi = P[:, c]
i1 = np.where(psi < -1)[0].size
i2 = np.where(psi > 1)[0].size
s = X[:, c].copy()
s[np.where(psi < -1)[0]] = 0
s[np.where(psi > 1)[0]] = 0
pbos = (np.sum(s) + omega[c] * (i2 - i1)) / (s.size - i1 - i2)
a[c] = (X[:, c] - pbos) / omega[c]
# Bend
a[a <= -1] = -1
a[a >= 1] = 1
# Get r, tval and pval
a, b = a
r = (a * b).sum() / np.sqrt((a**2).sum() * (b**2).sum())
tval = r * np.sqrt((nx - 2) / (1 - r**2))
pval = 2 * t.sf(abs(tval), nx - 2)
return r, pval | def percbend(x, y, beta=.2) | Percentage bend correlation (Wilcox 1994).
Parameters
----------
x, y : array_like
First and second set of observations. x and y must be independent.
beta : float
Bending constant for omega (0 <= beta <= 0.5).
Returns
-------
r : float
Percentage bend correlation coefficient.
pval : float
Two-tailed p-value.
Notes
-----
Code inspired by Matlab code from Cyril Pernet and Guillaume Rousselet.
References
----------
.. [1] Wilcox, R.R., 1994. The percentage bend correlation coefficient.
Psychometrika 59, 601–616. https://doi.org/10.1007/BF02294395
.. [2] Pernet CR, Wilcox R, Rousselet GA. Robust Correlation Analyses:
False Positive and Power Validation Using a New Open Source Matlab
Toolbox. Frontiers in Psychology. 2012;3:606.
doi:10.3389/fpsyg.2012.00606. | 2.881965 | 2.841762 | 1.014147 |
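A usage sketch (made-up data; with beta=0.2 roughly the most extreme 20% of observations in each margin are bent):
>>> import numpy as np
>>> np.random.seed(42)
>>> x = np.random.normal(size=30)
>>> y = x + np.random.normal(scale=0.5, size=30)
>>> r, pval = percbend(x, y, beta=.2)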
# Pairwise Euclidean distances
b = squareform(pdist(y, metric='euclidean'))
# Double centering
B = b - b.mean(axis=0)[None, :] - b.mean(axis=1)[:, None] + b.mean()
# Compute squared distance covariances
dcov2_yy = np.vdot(B, B) / n2
dcov2_xy = np.vdot(A, B) / n2
return np.sqrt(dcov2_xy) / np.sqrt(np.sqrt(dcov2_xx) * np.sqrt(dcov2_yy)) | def _dcorr(y, n2, A, dcov2_xx) | Helper function for distance correlation bootstrapping. | 2.605836 | 2.592235 | 1.005247 |
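This helper assumes the caller has already double-centered the pairwise distance matrix of ``x``; a plausible setup mirroring the body above (sketch; ``n2`` is taken to be the squared sample size):
>>> import numpy as np
>>> from scipy.spatial.distance import pdist, squareform
>>> rng = np.random.RandomState(0)
>>> x, y = rng.rand(30, 1), rng.rand(30, 1)
>>> a = squareform(pdist(x, metric='euclidean'))
>>> A = a - a.mean(axis=0)[None, :] - a.mean(axis=1)[:, None] + a.mean()
>>> n2 = x.shape[0] ** 2
>>> dcov2_xx = np.vdot(A, A) / n2
>>> dcor = _dcorr(y, n2, A, dcov2_xx)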
# Mediator(s) model (M(j) ~ X + covar)
beta_m = []
for j in range(n_mediator):
if mtype == 'linear':
beta_m.append(linear_regression(X_val[idx], M_val[idx, j],
coef_only=True)[1])
else:
beta_m.append(logistic_regression(X_val[idx], M_val[idx, j],
coef_only=True)[1])
# Full model (Y ~ X + M + covar)
beta_y = linear_regression(XM_val[idx], y_val[idx],
coef_only=True)[2:(2 + n_mediator)]
# Point estimate
return beta_m * beta_y | def _point_estimate(X_val, XM_val, M_val, y_val, idx, n_mediator,
mtype='linear') | Point estimate of indirect effect based on bootstrap sample. | 3.399992 | 3.324722 | 1.02264 |
# Bias of bootstrap estimates
z0 = norm.ppf(np.sum(ab_estimates < sample_point) / n_boot)
# Adjusted intervals
adjusted_ll = norm.cdf(2 * z0 + norm.ppf(alpha / 2)) * 100
adjusted_ul = norm.cdf(2 * z0 + norm.ppf(1 - alpha / 2)) * 100
ll = np.percentile(ab_estimates, q=adjusted_ll)
ul = np.percentile(ab_estimates, q=adjusted_ul)
return np.array([ll, ul]) | def _bca(ab_estimates, sample_point, n_boot, alpha=0.05) | Get (1 - alpha) * 100 bias-corrected confidence interval estimate
Note that this is similar to the "cper" module implemented in
:py:func:`pingouin.compute_bootci`.
Parameters
----------
ab_estimates : 1d array-like
Array with bootstrap estimates for each sample.
sample_point : float
Indirect effect point estimate based on full sample.
n_boot : int
Number of bootstrap samples
alpha : float
Alpha for confidence interval
Returns
-------
CI : 1d array-like
Lower limit and upper limit bias-corrected confidence interval
estimates. | 3.043114 | 3.239449 | 0.939393 |
if estimate == 0:
out = 1
else:
out = 2 * min(sum(boot > 0), sum(boot < 0)) / len(boot)
return min(out, 1) | def _pval_from_bootci(boot, estimate) | Compute p-value from bootstrap distribution.
Similar to the pval function in the R package mediation.
Note that this is less accurate than a permutation test because the
bootstrap distribution is not conditioned on a true null hypothesis. | 3.711453 | 4.027623 | 0.9215 |
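A worked example (sketch; ``boot`` is assumed to be a NumPy array so the elementwise comparisons work):
>>> import numpy as np
>>> boot = np.array([-0.2, 0.5, 1.1, 0.7, -0.1, 0.9, 1.4, 0.3, 0.6, 0.8])
>>> _pval_from_bootci(boot, estimate=0.7)   # 2 * min(8, 2) / 10 = 0.4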
aov = anova(data=self, dv=dv, between=between, detailed=detailed,
export_filename=export_filename)
return aov | def _anova(self, dv=None, between=None, detailed=False, export_filename=None) | Return one-way and two-way ANOVA. | 2.74787 | 2.685799 | 1.023111 |
aov = welch_anova(data=self, dv=dv, between=between,
export_filename=export_filename)
return aov | def _welch_anova(self, dv=None, between=None, export_filename=None) | Return one-way Welch ANOVA. | 2.878437 | 2.998499 | 0.959959 |
aov = rm_anova(data=self, dv=dv, within=within, subject=subject,
correction=correction, detailed=detailed,
export_filename=export_filename)
return aov | def _rm_anova(self, dv=None, within=None, subject=None, detailed=False,
correction='auto', export_filename=None) | One-way and two-way repeated measures ANOVA. | 2.307785 | 2.772773 | 0.832302 |
aov = mixed_anova(data=self, dv=dv, between=between, within=within,
subject=subject, correction=correction,
export_filename=export_filename)
return aov | def _mixed_anova(self, dv=None, between=None, within=None, subject=None,
correction=False, export_filename=None) | Two-way mixed ANOVA. | 2.164557 | 2.48643 | 0.870548 |
stats = pairwise_corr(data=self, columns=columns, covar=covar,
tail=tail, method=method, padjust=padjust,
export_filename=export_filename)
return stats | def _pairwise_corr(self, columns=None, covar=None, tail='two-sided',
method='pearson', padjust='none', export_filename=None) | Pairwise (partial) correlations. | 2.087238 | 2.421217 | 0.862062 |
stats = partial_corr(data=self, x=x, y=y, covar=covar, x_covar=x_covar,
y_covar=y_covar, tail=tail, method=method)
return stats | def _partial_corr(self, x=None, y=None, covar=None, x_covar=None, y_covar=None,
tail='two-sided', method='pearson') | Partial and semi-partial correlation. | 2.05617 | 2.008122 | 1.023927 |
V = self.cov() # Covariance matrix
Vi = np.linalg.pinv(V) # Inverse covariance matrix
D = np.diag(np.sqrt(1 / np.diag(Vi)))
pcor = -1 * (D @ Vi @ D) # Partial correlation matrix
pcor[np.diag_indices_from(pcor)] = 1
return pd.DataFrame(pcor, index=V.index, columns=V.columns) | def pcorr(self) | Partial correlation matrix (:py:class:`pandas.DataFrame` method).
Returns
----------
pcormat : :py:class:`pandas.DataFrame`
Partial correlation matrix.
Notes
-----
This function calculates the pairwise partial correlations for each pair of
variables in a :py:class:`pandas.DataFrame` given all the others. It has
the same behavior as the pcor function in the `ppcor` R package.
Note that this function only returns the raw Pearson correlation
coefficient. If you want to calculate the test statistic and p-values, or
use more robust estimates of the correlation coefficient, please refer to
the :py:func:`pingouin.pairwise_corr` or :py:func:`pingouin.partial_corr`
functions. The :py:func:`pingouin.pcorr` function uses the inverse of
the variance-covariance matrix to calculate the partial correlation matrix
and is therefore much faster than the two latter functions which are based
on the residuals.
References
----------
.. [1] https://cran.r-project.org/web/packages/ppcor/index.html
Examples
--------
>>> import pingouin as pg
>>> data = pg.read_dataset('mediation')
>>> data.pcorr()
X M Y Mbin Ybin
X 1.000000 0.392251 0.059771 -0.014405 -0.149210
M 0.392251 1.000000 0.545618 -0.015622 -0.094309
Y 0.059771 0.545618 1.000000 -0.007009 0.161334
Mbin -0.014405 -0.015622 -0.007009 1.000000 -0.076614
Ybin -0.149210 -0.094309 0.161334 -0.076614 1.000000
On a subset of columns
>>> data[['X', 'Y', 'M']].pcorr().round(3)
X Y M
X 1.000 0.037 0.413
Y 0.037 1.000 0.540
M 0.413 0.540 1.000 | 3.378307 | 4.100721 | 0.823832 |
stats = mediation_analysis(data=self, x=x, m=m, y=y, covar=covar,
alpha=alpha, n_boot=n_boot, seed=seed,
return_dist=return_dist)
return stats | def _mediation_analysis(self, x=None, m=None, y=None, covar=None,
alpha=0.05, n_boot=500, seed=None, return_dist=False) | Mediation analysis. | 2.120529 | 2.113443 | 1.003353 |