---
title: MCC
datasets:
- dataset
tags:
- evaluate
- metric
description: "Matthews correlation coefficient (MCC) is a correlation coefficient used in machine learning as a measure of the quality of binary and multiclass classifications."
sdk: gradio
sdk_version: 3.19.1
app_file: app.py
pinned: false
---

# Metric Card for MCC

## Metric Description

Matthews correlation coefficient (MCC) is a correlation coefficient used in machine learning as a measure of the quality of binary and multiclass classifications. MCC takes into account true and false positives and negatives, and is generally regarded as a balanced metric that can be used even if the classes are of different sizes. It can be computed with the equation:

`MCC = (TP * TN - FP * FN) / sqrt((TP + FP) * (TP + FN) * (TN + FP) * (TN + FN))`

where TP is the number of true positives, TN is the number of true negatives, FP is the number of false positives, and FN is the number of false negatives.
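
To make the equation concrete, here is a minimal sketch that evaluates it directly from the four counts. The helper name `mcc_from_counts` and the example counts are illustrative, not part of this module.

```python
from math import sqrt

def mcc_from_counts(tp: int, tn: int, fp: int, fn: int) -> float:
    """Evaluate the MCC equation above directly from confusion-matrix counts."""
    numerator = tp * tn - fp * fn
    denominator = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return numerator / denominator

# 4 true positives, 3 true negatives, 1 false positive, 2 false negatives
print(round(mcc_from_counts(tp=4, tn=3, fp=1, fn=2), 3))  # 0.408
```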

## How to Use

At minimum, this metric takes as input two lists of ints: predictions and references.

```python
>>> import evaluate
>>> mcc_metric = evaluate.load('mcc')
>>> results = mcc_metric.compute(references=[0, 1], predictions=[0, 1])
>>> print(results)
{'mcc': 1.0}
```

### Inputs

- **predictions** *(list of int)*: The predicted labels.
- **references** *(list of int)*: The ground truth labels.

### Output Values

- **mcc** *(float)*: The Matthews correlation coefficient. The minimum possible value is -1 and the maximum possible value is 1. A higher MCC means a better classification: 1 corresponds to a perfect prediction, 0 to a prediction no better than random, and -1 to a completely inverted prediction.

Output example:

`{'mcc': 1.0}`
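
Since `compute` returns a plain dict, the score can be read out directly; a small sketch continuing the snippet above:

```python
>>> results = mcc_metric.compute(references=[0, 1], predictions=[0, 1])
>>> score = results['mcc']  # a float between -1.0 and 1.0
>>> print(score)
1.0
```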

### Examples

Example 1 - A simple example with all correct predictions:

```python
>>> mcc_metric = evaluate.load('mcc')
>>> results = mcc_metric.compute(references=[1, 0, 1], predictions=[1, 0, 1])
>>> print(results)
{'mcc': 1.0}
```

Example 2 - A simple example with all incorrect predictions:

```python
>>> mcc_metric = evaluate.load('mcc')
>>> results = mcc_metric.compute(references=[1, 0, 1], predictions=[0, 1, 0])
>>> print(results)
{'mcc': -1.0}
```

Example 3 - A simple example with predictions no better than random guessing:

```python
>>> mcc_metric = evaluate.load('mcc')
>>> results = mcc_metric.compute(references=[1, 0, 1, 0], predictions=[1, 1, 0, 0])
>>> print(results)
{'mcc': 0.0}
```
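
The description above also covers multiclass classification. Assuming this module behaves like `sklearn.metrics.matthews_corrcoef` (the reference cited below), which accepts more than two label values, a multiclass call would look like the following sketch; the data are illustrative.

```python
>>> from sklearn.metrics import matthews_corrcoef
>>> # Three classes; three of the four samples are classified correctly
>>> print(matthews_corrcoef([0, 1, 2, 2], [0, 1, 1, 2]))
0.7
```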

## Limitations and Bias

MCC is undefined when any of the four sums TP + FP, TP + FN, TN + FP, or TN + FN is zero (for example, when every reference or every prediction belongs to a single class), since the denominator of the equation above is then zero. Implementations such as scikit-learn return 0 in this degenerate case.
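
A minimal sketch of this edge case, assuming the module matches scikit-learn's convention of returning 0 when the denominator is zero:

```python
>>> from sklearn.metrics import matthews_corrcoef
>>> # Every reference and every prediction is the positive class,
>>> # so TN + FP and TN + FN are both zero and the denominator vanishes
>>> print(matthews_corrcoef([1, 1, 1], [1, 1, 1]))
0.0
```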

## Citation

- **scikit-learn**: [sklearn.metrics.matthews_corrcoef](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.matthews_corrcoef.html)

## Further References