mubaraknumann committed
Commit f4de3f1 · verified · 1 Parent(s): ecaecb6

Update README.md

Files changed (1):
  1. README.md +165 -15
README.md CHANGED
 
@@ -1,7 +1,5 @@
  ---
  license: mit
- metrics:
- - accuracy
  pipeline_tag: image-classification
  tags:
  - cloud
 
@@ -80,18 +78,172 @@ The model was trained on the **UGCI (Ultimate Ground-level Cloud Image) dataset**

  The model is saved in the Keras native format (.keras). You will need to provide the definitions of the custom layers (RepVGGBlock and NECALayer) when loading.

- import tensorflow as tf
- from tensorflow import keras
  # IMPORTANT: You must have the RepVGGBlock and NECALayer class definitions
  # available in your Python environment before running this.
- # For example, import them from a .py file where they are defined:
- # from your_custom_layers_file import RepVGGBlock, NECALayer

- # --- PASTE YOUR RepVGGBlock and NECALayer CLASS DEFINITIONS HERE ---
- # class RepVGGBlock(...): ...
- # class NECALayer(...): ...
+ # --- CUSTOM LAYER DEFINITIONS ---
+ # Imports needed by the class definitions below
+ import numpy as np
+ import tensorflow as tf
+ from tensorflow.keras import layers
+
+ # --- RepVGGBlock Class Definition ---
+ class RepVGGBlock(layers.Layer):
+     def __init__(self, in_channels, out_channels, kernel_size=3, stride=1,
+                  groups=1, deploy=False, use_se=False, **kwargs):
+         super(RepVGGBlock, self).__init__(**kwargs)
+         self.config_initial_in_channels = in_channels
+         self.config_out_channels = out_channels
+         self.config_kernel_size = kernel_size
+         self.config_strides_val = stride
+         self.config_groups = groups
+         self._deploy_mode_internal = deploy
+         self.config_use_se = use_se  # Placeholder, not used in this version of RepVGGBlock
+         self.actual_in_channels = None
+
+         self.rbr_dense_conv = layers.Conv2D(
+             filters=self.config_out_channels, kernel_size=self.config_kernel_size,
+             strides=self.config_strides_val, padding='same',
+             groups=self.config_groups, use_bias=False, name=self.name + '_dense_conv'
+         )
+         self.rbr_dense_bn = layers.BatchNormalization(name=self.name + '_dense_bn')
+         self.rbr_1x1_conv = layers.Conv2D(
+             filters=self.config_out_channels, kernel_size=1,
+             strides=self.config_strides_val, padding='valid',
+             groups=self.config_groups, use_bias=False, name=self.name + '_1x1_conv'
+         )
+         self.rbr_1x1_bn = layers.BatchNormalization(name=self.name + '_1x1_bn')
+         self.rbr_identity_bn = None
+         self.rbr_reparam = layers.Conv2D(
+             filters=self.config_out_channels, kernel_size=self.config_kernel_size,
+             strides=self.config_strides_val, padding='same',
+             groups=self.config_groups, use_bias=True, name=self.name + '_reparam_conv'
+         )
+
+     def build(self, input_shape):
+         self.actual_in_channels = input_shape[-1]
+         if self.config_initial_in_channels is None:
+             self.config_initial_in_channels = self.actual_in_channels
+         elif self.config_initial_in_channels != self.actual_in_channels:
+             raise ValueError(f"Input channel mismatch for {self.name}: Expected {self.config_initial_in_channels}, got {self.actual_in_channels}")
+
+         if self.rbr_identity_bn is None and \
+            self.actual_in_channels == self.config_out_channels and self.config_strides_val == 1:
+             self.rbr_identity_bn = layers.BatchNormalization(name=self.name + '_identity_bn')
+
+         super(RepVGGBlock, self).build(input_shape)  # Call super build first
+
+         # Ensure all sub-layers are built
+         if not self.rbr_dense_conv.built: self.rbr_dense_conv.build(input_shape)
+         if not self.rbr_dense_bn.built: self.rbr_dense_bn.build(self.rbr_dense_conv.compute_output_shape(input_shape))
+         if not self.rbr_1x1_conv.built: self.rbr_1x1_conv.build(input_shape)
+         if not self.rbr_1x1_bn.built: self.rbr_1x1_bn.build(self.rbr_1x1_conv.compute_output_shape(input_shape))
+         if self.rbr_identity_bn is not None and not self.rbr_identity_bn.built:
+             self.rbr_identity_bn.build(input_shape)
+         if not self.rbr_reparam.built:
+             self.rbr_reparam.build(input_shape)
+
+     def call(self, inputs):
+         if self._deploy_mode_internal:
+             return self.rbr_reparam(inputs)
+         else:  # Training mode
+             out_dense = self.rbr_dense_bn(self.rbr_dense_conv(inputs))
+             out_1x1 = self.rbr_1x1_bn(self.rbr_1x1_conv(inputs))
+             if self.rbr_identity_bn is not None:
+                 out_identity = self.rbr_identity_bn(inputs)
+                 return out_dense + out_1x1 + out_identity
+             else: return out_dense + out_1x1
+
+     # Fold a BatchNormalization layer into the preceding convolution's kernel and bias.
+     def _fuse_bn_tensor(self, conv_layer, bn_layer):
+         kernel = conv_layer.kernel; dtype = kernel.dtype; out_channels = kernel.shape[-1]
+         gamma = getattr(bn_layer, 'gamma', tf.ones(out_channels, dtype=dtype))
+         beta = getattr(bn_layer, 'beta', tf.zeros(out_channels, dtype=dtype))
+         running_mean = getattr(bn_layer, 'moving_mean', tf.zeros(out_channels, dtype=dtype))
+         running_var = getattr(bn_layer, 'moving_variance', tf.ones(out_channels, dtype=dtype))
+         epsilon = bn_layer.epsilon; std = tf.sqrt(running_var + epsilon)
+         fused_kernel = kernel * (gamma / std)
+         if conv_layer.use_bias: fused_bias = beta + (gamma * (conv_layer.bias - running_mean)) / std
+         else: fused_bias = beta - (running_mean * gamma) / std
+         return fused_kernel, fused_bias
+
+     # Fold the three training-time branches into the single deploy convolution.
+     def reparameterize(self):
+         if self._deploy_mode_internal: return
+         branches_to_check = [self.rbr_dense_conv, self.rbr_dense_bn, self.rbr_1x1_conv, self.rbr_1x1_bn]
+         if self.rbr_identity_bn: branches_to_check.append(self.rbr_identity_bn)
+         for branch_layer in branches_to_check:
+             if not branch_layer.built:  # Or len(branch_layer.weights) == 0
+                 raise Exception(f"ERROR: Branch layer {branch_layer.name} for {self.name} not built. Call model with data first.")
+
+         kernel_dense, bias_dense = self._fuse_bn_tensor(self.rbr_dense_conv, self.rbr_dense_bn)
+         kernel_1x1_unpadded, bias_1x1 = self._fuse_bn_tensor(self.rbr_1x1_conv, self.rbr_1x1_bn)
+         pad_amount = self.config_kernel_size // 2
+         kernel_1x1_padded = tf.pad(kernel_1x1_unpadded, [[pad_amount,pad_amount],[pad_amount,pad_amount],[0,0],[0,0]])
+         final_kernel = kernel_dense + kernel_1x1_padded
+         final_bias = bias_dense + bias_1x1
+         if self.rbr_identity_bn is not None:
+             running_mean_id = self.rbr_identity_bn.moving_mean; running_var_id = self.rbr_identity_bn.moving_variance
+             gamma_id = self.rbr_identity_bn.gamma; beta_id = self.rbr_identity_bn.beta
+             epsilon_id = self.rbr_identity_bn.epsilon; std_id = tf.sqrt(running_var_id + epsilon_id)
+             kernel_id_scaler = gamma_id / std_id
+             bias_id_term = beta_id - (running_mean_id * gamma_id) / std_id
+             identity_kernel_np = np.zeros((self.config_kernel_size, self.config_kernel_size, self.actual_in_channels, self.config_out_channels), dtype=np.float32)
+             for i in range(self.actual_in_channels): identity_kernel_np[pad_amount, pad_amount, i, i] = kernel_id_scaler[i].numpy()
+             kernel_id_final = tf.convert_to_tensor(identity_kernel_np, dtype=tf.float32)
+             final_kernel += kernel_id_final; final_bias += bias_id_term
+         if not self.rbr_reparam.built:
+             raise Exception(f"CRITICAL ERROR: {self.rbr_reparam.name} of {self.name} not built before set_weights.")
+         self.rbr_reparam.set_weights([final_kernel, final_bias])
+         self._deploy_mode_internal = True
+
+     def get_config(self):
+         config = super(RepVGGBlock, self).get_config()
+         config.update({
+             "in_channels": self.config_initial_in_channels, "out_channels": self.config_out_channels,
+             "kernel_size": self.config_kernel_size, "stride": self.config_strides_val,
+             "groups": self.config_groups, "deploy": self._deploy_mode_internal, "use_se": self.config_use_se
+         })
+         return config
+
+     @classmethod
+     def from_config(cls, config): return cls(**config)
+ # --- End of RepVGGBlock ---
+
+ # --- NECALayer Class Definition ---
+ class NECALayer(layers.Layer):
+     def __init__(self, channels, gamma=2, b=1, **kwargs):
+         super(NECALayer, self).__init__(**kwargs)
+         self.channels = channels
+         self.gamma = gamma
+         self.b = b
+         # Adaptive 1D kernel size: k = round((log2(C) + b) / gamma), forced to be odd and >= 1
+         tf_channels = tf.cast(self.channels, tf.float32)
+         k_float = (tf.math.log(tf_channels) / tf.math.log(2.0) + self.b) / self.gamma
+         k_int = tf.cast(tf.round(k_float), tf.int32)
+         if tf.equal(k_int % 2, 0): self.k_scalar_val = k_int + 1
+         else: self.k_scalar_val = k_int
+         self.k_scalar_val = tf.maximum(1, self.k_scalar_val)
+         kernel_size_for_conv1d = (int(self.k_scalar_val.numpy()),)
+         self.gap = layers.GlobalAveragePooling2D(keepdims=True)
+         self.conv1d = layers.Conv1D(filters=1, kernel_size=kernel_size_for_conv1d, padding='same', use_bias=False, name=self.name + '_eca_conv1d')
+         self.sigmoid = layers.Activation('sigmoid')
+
+     def call(self, inputs):
+         if self.channels != inputs.shape[-1]: raise ValueError(f"Input channels {inputs.shape[-1]} != layer channels {self.channels} for {self.name}")
+         x = self.gap(inputs)
+         x = tf.squeeze(x, axis=[1, 2])
+         x = tf.expand_dims(x, axis=-1)
+         x = self.conv1d(x)
+         x = tf.squeeze(x, axis=-1)
+         attention = self.sigmoid(x)
+         attention_reshaped = tf.reshape(attention, [-1, 1, 1, self.channels])
+         return inputs * attention_reshaped
+
+     def get_config(self):
+         config = super(NECALayer, self).get_config()
+         config.update({"channels": self.channels, "gamma": self.gamma, "b": self.b})
+         return config
+
+     @classmethod
+     def from_config(cls, config): return cls(**config)
+ # --- End of NECALayer ---
  # --- END OF CUSTOM LAYER DEFINITIONS ---

+ import tensorflow as tf
+ from tensorflow import keras
+
  MODEL_FILE = 'path/to/your/repvgg_neca_deploy_final.keras' # Replace with actual path
  LABEL_MAPPING_FILE = 'path/to/your/label_mapping.json' # Replace with actual path
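The unchanged README lines between this hunk and the next (new lines 250-289, not shown in the diff) hold the actual loading and inference code. For orientation only, a minimal sketch consistent with the snippet above follows; the `custom_objects` mapping uses the standard Keras API, while the image path, the 224×224 input size, the 1/255 scaling, and the class-name-to-index orientation of `label_mapping.json` are assumptions rather than details taken from this commit.

```python
import json
import numpy as np
import tensorflow as tf
from tensorflow import keras

# RepVGGBlock and NECALayer are the classes defined above.
model = keras.models.load_model(
    MODEL_FILE,
    custom_objects={'RepVGGBlock': RepVGGBlock, 'NECALayer': NECALayer},
)

# Assumed orientation: label_mapping.json maps class name -> integer index.
with open(LABEL_MAPPING_FILE, 'r') as f:
    label_to_int = json.load(f)
int_to_label = {int(v): k for k, v in label_to_int.items()}

# Preprocess a single image (path, size, and scaling are assumptions).
img = tf.keras.utils.load_img('path/to/a_cloud_image.jpg', target_size=(224, 224))
x = tf.keras.utils.img_to_array(img)[np.newaxis, ...] / 255.0

predicted_probabilities = model.predict(x)[0]
predicted_class_name = int_to_label.get(int(np.argmax(predicted_probabilities)), 'unknown')
confidence = float(np.max(predicted_probabilities))
print(f"Predicted Cloud Type: {predicted_class_name}")
```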
 
 
@@ -138,9 +290,9 @@ print(f"Predicted Cloud Type: {predicted_class_name}")
  print(f"Confidence: {confidence*100:.2f}%")

  # Display all class probabilities (optional)
- # for i, prob in enumerate(predicted_probabilities):
- # class_name = int_to_label.get(i, f"Class_{i}")
- # print(f"- {class_name}: {prob*100:.2f}%")
+ for i, prob in enumerate(predicted_probabilities):
+     class_name = int_to_label.get(i, f"Class_{i}")
+     print(f"- {class_name}: {prob*100:.2f}%")

  ## 4. Training Procedure
  Dataset: UGCI
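The repvgg_neca_deploy_final.keras file referenced earlier is the re-parameterized (deploy-mode) form of the trained network. The conversion script is not part of this commit; a sketch of how it is typically done with the RepVGGBlock.reparameterize() method defined above, assuming the blocks sit at the top level of model.layers and a hypothetical training-mode checkpoint path, looks like this:

```python
from tensorflow import keras

# Hypothetical path to the training-mode checkpoint (three-branch blocks still present).
TRAINING_MODEL_FILE = 'path/to/your/repvgg_neca_training.keras'

train_model = keras.models.load_model(
    TRAINING_MODEL_FILE,
    custom_objects={'RepVGGBlock': RepVGGBlock, 'NECALayer': NECALayer},
)

# Fuse the 3x3+BN, 1x1+BN, and identity-BN branches of every block into its
# single rbr_reparam convolution, then save the deploy-mode model.
for layer in train_model.layers:
    if isinstance(layer, RepVGGBlock):
        layer.reparameterize()

train_model.save('repvgg_neca_deploy_final.keras')
```

After reparameterize() runs, the block's get_config() reports deploy=True, so the saved file reloads straight into the single-convolution inference path.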
 
@@ -308,7 +460,7 @@ Continued refinement of minority class performance (contrail).

  If you use this model or code in your research, please consider citing this repository (and any associated paper, if applicable).

- [Your Name/Team Name]. (Year). Genera - Cloud Image Classification Model. GitHub/Hugging Face. Retrieved from [Link to your Hugging Face Model Card or GitHub Repo]
+ Mohammed Numan Mubarak. (2025). Genera - Cloud Image Classification Model. Retrieved from [huggingface.co/mubaraknumann/genera-cloud-image-classification]

  ## 10. License

 
@@ -317,5 +469,3 @@ This project, including the model weights and source code, is licensed under the
  ## 11. Acknowledgements

  This work was inspired by the methodologies presented in "Improved RepVGG ground-based cloud image classification with attention convolution" by Shi et al. (2024).
-
- [Any other acknowledgements, e.g., data sources if different from SkyGen, collaborators, funding if applicable].