text | label | dataType | communityName | datetime |
---|---|---|---|---|
Copy the approach from a published paper
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-07-05
|
Hi,
I have a MacBook Pro with the M3 chip and would like to run code locally. I have the latest version of TensorFlow installed, and the whole code up to model.fit() works. But model.fit() stalls and times out the kernel on the first epoch. However, the same code runs on Google Colab. Any ideas why, or how I can fix this?
|
r/tensorflow
|
post
|
r/tensorflow
|
2024-06-05
|
Which Python version and interpreter? Do you use the miniconda version for compatibility?
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-06-05
|
Python 3.11.0, I don’t use miniconda
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-06-05
|
I would appreciate it if someone could help me modify a Colab notebook I found in order to convert its model to TFLite format.
I tried but with little success.
The Colab is this one:
https://www.tensorflow.org/tutorials/generative/pix2pix?hl=it
|
r/tensorflow
|
post
|
r/tensorflow
|
2024-06-05
|
I actually managed to do that
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-06-05
|
Care to share the notebook with us???
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-06-05
|
Sure, here it is
https://github.com/dixy52-beep/pix2pix-TFLITE-model-notbook/blob/main/Custom_TFLITE_pix2pix.ipynb
It seems to work well; however, when converting the model, if it was trained for many epochs (4k/10k), the runtime sometimes disconnects. I think it's a limitation of Colab.
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-06-05
|
[Please note that my notebook is modified to add the ability to upload a custom dataset. You will have to revert my changes there if you want to use the original datasets shown in the demo.]
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-06-05
|
I have this error and I have tried everything to solve it: I tried to uninstall and reinstall TensorFlow, I tried to upgrade TensorFlow and Keras, and I tried to import Keras directly.
I have tried almost everything and nothing works.
https://preview.redd.it/ritg1if66yyc1.jpg?width=757&format=pjpg&auto=webp&s=79a2661c791b206a3e094cd536754222cc550f24
|
r/tensorflow
|
post
|
r/tensorflow
|
2024-07-05
|
Based on https://discuss.tensorflow.org/t/attributeerror-module-tensorflow-has-no-attribute-keras/20495/7 I would try importing and using Keras directly.
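For what it's worth, a minimal sketch of what "using Keras directly" could look like (assuming a standalone `keras` package is available, which recent TensorFlow installs pull in; the layer sizes are placeholders):
```
import keras

model = keras.Sequential([
    keras.Input(shape=(16,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```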
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-07-05
|
Keras is built into TensorFlow; you don't need to install it as a separate package.
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-07-05
|
Hi guys, I need help. I trained a GAN image-to-image conversion model to restore damaged pictures. The only problem is that my model is limited to 256x256 images. What's a good way to use such a model on larger, non-square images like 1920x1080 pixels? I tried tiling, but it leaves some very unsightly edges.
|
r/tensorflow
|
post
|
r/tensorflow
|
2024-08-05
|
Most information on the Internet reports that TFF only works on Linux.
I would like to check with the community whether this is still the case.
|
r/tensorflow
|
post
|
r/tensorflow
|
2024-09-05
|
Try in wsl
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-13-05
|
r/tensorflow
|
post
|
r/tensorflow
|
2024-10-05
|
|
Hello,
My project is a face recognition system using TensorFlow. I have fine-tuned the ConvNeXt model on my dataset and I am using Streamlit to deploy the application. However, when loading the saved .h5 model, errors appear and I can't get the Streamlit app to work. When I run the code provided, I receive this error: Unknown layer: 'LayerScale'. Please ensure you are using a `keras.utils.custom_object_scope` and that this object is included in the scope. See [https://www.tensorflow.org/guide/keras/save_and_serialize#registering_the_custom_object](https://www.tensorflow.org/guide/keras/save_and_serialize#registering_the_custom_object) for details. After doing some digging, I found a similar error on Stack Overflow and copied the LayerScale class from the source code and added it into mine (3rd screenshot). Now I am facing this error: 'TFOpLambda'. Please ensure you are using a `keras.utils.custom_object_scope` and that this object is included in the scope. See [https://www.tensorflow.org/guide/keras/save_and_serialize#registering_the_custom_object](https://www.tensorflow.org/guide/keras/save_and_serialize#registering_the_custom_object) for details.
There are also other errors and warnings that appear in the terminal, and I wonder what they mean: "I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`." and "The name tf.reset_default_graph is deprecated. Please use tf.compat.v1.reset_default_graph instead." Has anyone faced a problem like this before, and what is the solution? Thanks in advance.
code: [https://imgur.com/a/IBTjI7v](https://imgur.com/a/IBTjI7v)
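For reference, a minimal sketch of the registration pattern that error message asks for; `LayerScale` below stands for the class copied from the ConvNeXt source as described above, and `model.h5` is a placeholder path:
```
import tensorflow as tf

# LayerScale is assumed to be the class you copied from the Keras ConvNeXt
# source, exactly as described in the post above.
custom_objects = {"LayerScale": LayerScale}

with tf.keras.utils.custom_object_scope(custom_objects):
    # compile=False skips restoring the training configuration, which often
    # avoids further "unknown object" errors when you only need inference.
    model = tf.keras.models.load_model("model.h5", compile=False)
```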
|
r/tensorflow
|
post
|
r/tensorflow
|
2024-11-05
|
Please paste the code here with proper formatting.
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-13-05
|
import streamlit as st
import os
import random
import numpy as np
import tensorflow as tf
from PIL import Image

# Path to the directory containing the celebrity images
IMAGE_DIR = "path to dataset"
# List of classes (directory names)
CLASSES = os.listdir(IMAGE_DIR)
# Load the pre-trained face recognition model
MODEL_PATH = "VGG16.h5"

@st.cache(allow_output_mutation=True)
def load_model():
    model = tf.keras.models.load_model(MODEL_PATH)
    return model

with st.spinner('Model is being loaded..'):
    model = load_model()

# Function to preprocess the input image
def preprocess_image(image_path):
    img = Image.open(image_path).convert('RGB')
    img = img.resize((224, 224))  # Assuming the model input size is 224x224
    img = np.array(img) / 255.0
    img = np.expand_dims(img, axis=0)
    return img

# Function to get the name of the celebrity
def get_celebrity_name(image_path):
    # Preprocess the image
    img = preprocess_image(image_path)
    # Predict using the loaded model
    predictions = model.predict(img)
    # Get the predicted class index
    predicted_index = np.argmax(predictions)
    # Get the celebrity name from the class index
    celebrity_name = CLASSES[predicted_index]
    return celebrity_name

# Function to get a random image
def get_random_image():
    # Choose a random class
    random_class = random.choice(CLASSES)
    # Get list of images in the class directory
    images = os.listdir(os.path.join(IMAGE_DIR, random_class))
    # Choose a random image from the class
    random_image = random.choice(images)
    # Return the path to the random image
    return os.path.join(IMAGE_DIR, random_class, random_image)

def main():
    st.title("Celebrity Face Recognition")
    st.write("Click below to recognize a celebrity")
    # Get a random image
    random_image_path = get_random_image()
    # Display the image
    st.image(random_image_path, caption='Random Celebrity Image', use_column_width=True)
    if st.button("Recognize Celebrity"):
        # Get the name of the celebrity
        celebrity_name = get_celebrity_name(random_image_path)
        st.write(f"Predicted Celebrity:{celebrity_name}")

if __name__ == "__main__":
    main()
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-13-05
|
r/tensorflow
|
post
|
r/tensorflow
|
2024-12-05
|
|
I have been trying for more than 2 hours at least, and I have the same problem over and over when I run the command "sh setup.sh". It installs the first thing from pinwheels.org, something of 2.1 MB, and opencv-python (89.2 MB).
But after that it stops and I just have this line: "Installing build dependencies . . .". Please, does somebody have a solution?
|
r/tensorflow
|
post
|
r/tensorflow
|
2024-12-05
|
What do you want to install??
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-13-05
|
I want to install a script for object detection, so I followed a tutorial on YouTube that explains how to install it. When I install, everything installs except one thing, and it gets blocked at the step I just described. I'll send you the links to the YouTube video and the Reddit post.
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-13-05
|
r/tensorflow
|
post
|
r/tensorflow
|
2024-13-05
|
|
1. Install python
2. set up a virtual environment and activate it. (you can skip this step, but it's very helpful)
3. pip install tensorflow
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-13-05
|
I recently learned about this if you are having CUDA versioning issues: pip install tensorflow[and-cuda]
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-13-05
|
Exactly as the title sounds, for the life of me, I can't wrap my head around the idea behind this specific method.
Here's my thought process:
1. The main loss function for a model is specified in the `compile` method (or in the custom training loop).
2. Any regularization can be specified in the `add_weight` method when building custom models or layers.
So, what the heck is the use behind the `add_loss` method??
[https://www.tensorflow.org/guide/keras/making_new_layers_and_models_via_subclassing#the_add_loss_method](https://www.tensorflow.org/guide/keras/making_new_layers_and_models_via_subclassing#the_add_loss_method)
|
r/tensorflow
|
post
|
r/tensorflow
|
2024-14-05
|
It allows you to make more complex loss functions that are not a simple function of the model output and a specified "y_true". You could make a loss that is a function of other tensors in your model, for example.
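A contrived sketch of that idea: a custom layer can register a penalty on one of its own intermediate tensors via `add_loss`, which a `compile()`-level loss never sees:
```
import tensorflow as tf

class ActivityPenalty(tf.keras.layers.Layer):
    """Adds a loss term that depends on an internal tensor, not on y_true/y_pred."""
    def __init__(self, rate=1e-3):
        super().__init__()
        self.rate = rate

    def call(self, inputs):
        # Losses registered here are summed with the compile()-time loss during training.
        self.add_loss(self.rate * tf.reduce_mean(tf.square(inputs)))
        return inputs

inputs = tf.keras.Input(shape=(8,))
x = tf.keras.layers.Dense(16, activation="relu")(inputs)
x = ActivityPenalty(rate=1e-3)(x)            # penalty on a hidden activation
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")  # both losses are added together
```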
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-14-05
|
So if I'm specifying a loss for a layer but I also specified a loss in the compile method of the overall model, how do the two integrate with one another?
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-15-05
|
Pretty sure they just add together, but it's been a long time since I used this capability. The verbose fit output probably spells it out a little.
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-15-05
|
Cool, thanks for clarifying
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-15-05
|
r/tensorflow
|
post
|
r/tensorflow
|
2024-14-05
|
|
Pretty awesome tutorial!
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-14-05
|
Thanks :)
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-14-05
|
NVIDIA driver: 545.29.06
OS: Zorin 17 (based on Ubuntu 22.04)
Python: 3.11.7 (via pyenv)
According to this table: [https://www.tensorflow.org/install/source#gpu](https://www.tensorflow.org/install/source#gpu)
TensorFlow 2.16.1 requires CUDA 12.3 and cuDNN 8.9, but can someone confirm this?
(The previous 2 times I installed CUDA, it ended up breaking my NVIDIA driver.)
Moreover, do I require Clang and Bazel as the table mentions?
UPDATE: CUDA 12.3 and cuDNN 8.9 work perfectly fine with TensorFlow 2.16.1.
|
r/tensorflow
|
post
|
r/tensorflow
|
2024-15-05
|
On Ubuntu? If you do `pip install -U tensorflow-gpu`, does it not pull the right CUDA and cuDNN for you already?
I use the tensorflow docker image, so I don't need to deal with this.
The packages are already compiled, so you don't need clang or bazel.
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-15-05
|
I thought the package tensorflow-gpu was deprecated. Doesn't tensorflow 2.16.1 ship directly with GPU support?
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-15-05
|
https://www.tensorflow.org/install/pip
Right, they changed the naming convention. Have you tried this?
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-15-05
|
Tried what exactly here? I have installed Tensorflow using pip if that's your intended question
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-15-05
|
python3 -m pip install tensorflow[and-cuda]
# Verify the installation:
python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
What does your console say? Empty array means no GPU access.
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-15-05
|
There is no such package as tensorflow[and-cuda]. I think that's just referring to the fact that you also need CUDA alongside TensorFlow.
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-15-05
|
```
root@f886b6dbe231:/# pip install "tensorflow[and-cuda]"
Collecting tensorflow[and-cuda]
  Downloading tensorflow-2.16.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (589.8 MB)
     ━╺━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 20.1/589.8 MB 6.6 MB/s eta 0:01:26
ERROR: Operation cancelled by user
```
Depending on your shell, try quoting the entire package name.
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-15-05
|
❯ python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
2024-05-15 15:08:02.312933: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-05-15 15:08:02.348419: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-05-15 15:08:02.978770: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2024-05-15 15:08:04.455323: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:282] failed call to cuInit: CUDA_ERROR_UNKNOWN: unknown error
2024-05-15 15:08:04.455348: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:134] retrieving CUDA diagnostic information for host: sagnik-lenovo
2024-05-15 15:08:04.455353: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:141] hostname: sagnik-lenovo
2024-05-15 15:08:04.455440: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:165] libcuda reported version is: 545.29.6
2024-05-15 15:08:04.455452: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:169] kernel reported version is: 545.29.6
2024-05-15 15:08:04.455455: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:248] kernel version seems to match DSO: 545.29.6
[]
I got this. Evidently, tensorflow is not detecting my GPU
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-15-05
|
failed call to cuInit: CUDA_ERROR_UNKNOWN: unknown error
At least you know it tried.
What does `nvidia-smi` say?
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-15-05
|
❯ nvidia-smi
Wed May 15 15:16:46 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.29.06 Driver Version: 545.29.06 CUDA Version: 12.3 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce RTX 3050 ... Off | 00000000:01:00.0 Off | N/A |
| N/A 43C P0 752W / 60W | 4MiB / 4096MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| 0 N/A N/A 1543 G /usr/bin/gnome-shell 1MiB |
+---------------------------------------------------------------------------------------+
I do think that I need to manually install CUDA and CuDNN
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-15-05
|
maybe permission issue? try running as root?
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-15-05
|
Same result running python with sudo
I think tensorflow[and-cuda] only installs the interface TensorFlow needs to communicate with CUDA. I still need to install CUDA and cuDNN.
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-15-05
|
pip install tensorflow[and-cuda] has worked for me. It causes it to pull the versions it needs. It's still up to you to have the right drivers though.
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-15-05
|
I don't think it's a permission issue, unless the installation instructions are just outrageously bad.
On the page you just linked it shows that the installation requires NVIDIA drivers, CUDA, and cuDNN, under the "Software Requirements" heading. It wouldn't be a requirement if it came with the python package.
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-30-05
|
If you are familiar with docker, I suggest you just use docker, so you don't need to manage the packages.
The nvidia-docker2 package on Ubuntu works well.
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-30-05
|
Check this page (running this in bash worked for me): [https://github.com/tensorflow/tensorflow/issues/63362](https://github.com/tensorflow/tensorflow/issues/63362)
NVIDIA_DIR=$(dirname $(dirname $(python -c "import nvidia.cudnn;print(nvidia.cudnn.__file__)")))
for dir in $NVIDIA_DIR/*; do
    if [ -d "$dir/lib" ]; then
        export LD_LIBRARY_PATH="$dir/lib:$LD_LIBRARY_PATH"
    fi
done
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-31-05
|
It's outrageous to have to use docker for this.
But anyway, after tracking down my issue, I found out that you're right after all-- pip really did install the nvidia libraries already. The tensorflow installation instructions are, in fact, misleading and outdated. The actual problem for me was this one: [https://github.com/tensorflow/tensorflow/issues/63362](https://github.com/tensorflow/tensorflow/issues/63362)
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-31-05
|
I had a similar question. It turns out that these nvidia libraries really are installed automatically by pip. There's just a bug in tensorflow 2.16.1 🙄 that prevents it from finding the libraries. Not hard to fix, you just have to manually set some paths. [https://github.com/tensorflow/tensorflow/issues/63362](https://github.com/tensorflow/tensorflow/issues/63362)
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-31-05
|
Worked through this [Intro to Deep Learning](https://www.kaggle.com/learn/intro-to-deep-learning) course on Kaggle. It was good!
Check out my course notes! [https://github.com/kdonavin/TensorFlow_Info](https://github.com/kdonavin/TensorFlow_Info)
Maybe it will be useful to somebody.
|
r/tensorflow
|
post
|
r/tensorflow
|
2024-15-05
|
I am a Java guy and have barely gotten into TensorFlow. I want to integrate it in real time more closely with my Java applications. I don't see much discussion of this project. Is it full Java, with no C++-level or Python integration? Is it fully supported, and does it work mostly like TensorFlow Python code?
[https://github.com/tensorflow/java](https://github.com/tensorflow/java)
|
r/tensorflow
|
post
|
r/tensorflow
|
2024-15-05
|
Typically what you'd do is train models offline with Python, then load and run inference using the Java runtime.
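A minimal sketch of the Python side of that workflow (the model, training data, and export path are placeholders); the exported SavedModel directory is what the Java API then loads, e.g. via `org.tensorflow.SavedModelBundle`:
```
import tensorflow as tf

# Train (or fine-tune) a model offline in Python...
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
# model.fit(x_train, y_train, epochs=10)   # training data omitted here

# ...then export it in SavedModel format for the Java runtime to load.
tf.saved_model.save(model, "export/my_model")   # newer Keras also offers model.export(...)
```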
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-15-05
|
Ok that makes sense.
Are there no issues with the library? I guess Googlers use ML in Java too... assuming some of that code comes from Googlers.
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-15-05
|
Every library has issues and bugs, but I would expect it to be of relatively high quality. One of the biggest advantages of TensorFlow is the fact that you can deploy your model to so many different runtimes. Google does use most of them internally, so I'd consider it likely that they're polished.
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-15-05
|
Thanks
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-15-05
|
**TLDR:**
* am noob
* using CPU
* prediction is fast (1ms) (the time spent crunching numbers is 1ms per prediction)
* overhead takes long time (100ms) (doing 100 predictions takes 200ms, but 1 prediction takes 101ms)
* want fast response times
* how can i reduce subsequent overhead? (like after some sort of setup, can i then get single predictions that take about 1-2ms?)
**Details:**
Hello, this is my first successful tensorflow project. I have a model that works and is fast, 1ms to conduct multiple predictions. However, to do a single prediction, there is still a lot of overhead and it takes about 100ms to complete. I'm sure that there are a bunch of different ways that I can optimize my model, but I think that I am not using the process correctly.
I want to use this model to do live audio processing to quickly determine what phoneme (specifically 5 vowel sounds for right now) is being spoken just by looking at only 264 bins of the FFT. But having a delay of 100ms is rather bothersome. Especially since it only spends about 2ms actually crunching numbers (1.01ms for fft and 900us for prediction)
If I had a GPU, I would suspect that a lot of that time is being spent on loading data onto the GPU, but I'm doing this on a CPU. I know that some level of overhead is needed to conduct a prediction, but is there a way to only have to set up once? I don't know what I don't know, so trying to find info about it is difficult. **So is there a way to only have to set up once?**
**EDIT - ANSWER:**
So I think I got it... I need to use model(x) instead of model.predict(x), which is stated in the docs for model.predict(x). However, it is not mentioned there that the prediction data is accessed via .numpy() on the result. So, to completely replace "model.predict(x)", use "model(x).numpy()".
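A rough sketch of that timing comparison (the toy model and sizes below are made up to match the post, not the real model):
```
import time
import numpy as np
import tensorflow as tf

# Toy stand-in: 264 FFT bins in, 5 vowel classes out (sizes taken from the post).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(264,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(5, activation="softmax"),
])
x = np.random.rand(1, 264).astype("float32")

t0 = time.perf_counter()
_ = model.predict(x, verbose=0)   # sets up a full inference loop on every call
t1 = time.perf_counter()
_ = model(x).numpy()              # direct call: far less per-call overhead
t2 = time.perf_counter()

print(f"model.predict: {(t1 - t0) * 1e3:.1f} ms, model(x): {(t2 - t1) * 1e3:.1f} ms")
```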
|
r/tensorflow
|
post
|
r/tensorflow
|
2024-16-05
|
So, to completely replace "model.predict(x)" use "model(x).numpy()"
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-16-05
|
Hello,
I need advice on how to move on with my project. Initially I wanted to create a face recognition system. I first gathered a dataset of celebrity faces with 99 classes and about 16k total images, fine-tuned the ConvNeXtTiny model on the dataset using TensorFlow, and got 93% accuracy. Right now this is technically only an image classification application: it can tell the faces apart and say which celebrity it is. However, I need to extend this project into a full face recognition system.
How can I use TensorFlow transfer learning with existing models to make this system full circle? Basically I need a face detection model that is compatible with TensorFlow 2.15.0, then to preprocess the faces (either from a webcam or from an unknown dataset), then to pass them to the ConvNeXt model for recognition. My idea is that the unknown faces would be registered and added to the dataset.
I have done some research and tried to implement VGGFace, but I was met with so many errors that I couldn't go forward with it, because apparently VGGFace isn't compatible with TensorFlow 2.x.
I need recommendations and guidance on how to move forward and integrate a model with my face image classifier model. Are there any resources that can be implemented easily with TensorFlow? And how easy or hard is this task to complete?
|
r/tensorflow
|
post
|
r/tensorflow
|
2024-16-05
|
I recommend looking into metric learning. This allows you to characterize new faces and see if they match something you've seen before. It's no small effort to make it work though. I'm not personally aware of any off the shelf models you can just take, but they might exist.
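A very rough sketch of the metric-learning idea, just to make it concrete; the backbone, pooling, and threshold below are assumptions rather than a recipe:
```
import tensorflow as tf

# Assumed embedding backbone: in practice you would fine-tune it with a metric
# loss (e.g. triplet or contrastive) on face pairs rather than use it as-is.
backbone = tf.keras.applications.ConvNeXtTiny(include_top=False, pooling="avg")

def embed(images):
    """images: float batch of shape (n, 224, 224, 3)."""
    return tf.math.l2_normalize(backbone(images, training=False), axis=-1)

def same_person(img_a, img_b, threshold=0.7):   # threshold is a made-up value
    ea, eb = embed(img_a[None]), embed(img_b[None])
    return float(tf.reduce_sum(ea * eb)) >= threshold   # cosine similarity
```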
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-16-05
|
I have a folder named train with 3 sub-folders named time1, time2, and label, which contain images used for satellite image change detection. I have a model that takes images from the time1 and time2 directories as input and outputs a change-map image.
Link to dataset:
https://www.kaggle.com/datasets/kacperk77/sysucd
I need to create a data generator to be able to train the model.
|
r/tensorflow
|
post
|
r/tensorflow
|
2024-17-05
|
Can you please share the input shape of your dataset? I can write the data generator for you.
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-17-05
|
time1,time2 are (256,256,3)
Label (256,256,1)
Each folder has 12000 images
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-18-05
|
*Sorry for my late response*
Here is how we can do it:
import tensorflow as tf

images1_path = '/kaggle/input/sysucd/train/time1/*.png'
images2_path = '/kaggle/input/sysucd/train/time2/*.png'
masks_path = '/kaggle/input/sysucd/train/label/*.png'

image1_files = tf.data.Dataset.list_files(images1_path, shuffle=False)
image2_files = tf.data.Dataset.list_files(images2_path, shuffle=False)
mask_files = tf.data.Dataset.list_files(masks_path, shuffle=False)

def load_image(image_file):
    image = tf.io.read_file(image_file)
    image = tf.image.decode_jpeg(image, channels=3)
    image = tf.image.convert_image_dtype(image, tf.float32)
    return image

def load_mask(mask_file):
    mask = tf.io.read_file(mask_file)
    mask = tf.image.decode_jpeg(mask, channels=1)
    mask = tf.image.convert_image_dtype(mask, tf.float32)
    return mask

images1 = image1_files.map(load_image)
images2 = image2_files.map(load_image)
masks = mask_files.map(load_mask)

dataset = tf.data.Dataset.zip((images1, images2, masks))

import matplotlib.pyplot as plt

for image1, image2, mask in dataset.take(2):
    plt.figure(figsize=(10, 5))
    plt.subplot(1, 3, 1)
    plt.title("Image 1")
    plt.imshow(image1)
    plt.axis('off')
    plt.subplot(1, 3, 2)
    plt.title("Image 2")
    plt.imshow(image2)
    plt.axis('off')
    plt.subplot(1, 3, 3)
    plt.title("Mask")
    plt.imshow(mask[:, :, 0], cmap='gray')
    plt.axis('off')
    plt.show()
Here is the kaggle notebook: [https://www.kaggle.com/code/maifeeulasad/tensorflow-memefficient-complex-dataset/notebook](https://www.kaggle.com/code/maifeeulasad/tensorflow-memefficient-complex-dataset/notebook)
If you have any issues understanding, please let me know. I would be happy to help.
At the end of the notebook, you will find that I have posted another approach, which may prove super helpful for what you are trying to do, afaik.
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-19-05
|
Thank you for the help
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-20-05
|
Please share your outcome as well. Eagerly waiting.
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-21-05
|
I've posted 6 questions to StackOverflow/TensorFlow and my question is just too deep or complicated. Extremely happy to read into my own issues and post answers but I've read all the documentation and source code on the issue. I've really branched out my search terms. I can't even think of how to make steps to resolve my issue.
Not looking for y'all to answer my questions unless you really want to.
Example (probably my easiest one):
https://stackoverflow.com/questions/78497850/why-does-tensorflow-graph-execution-require-different-shapes-than-eager-executio
|
r/tensorflow
|
post
|
r/tensorflow
|
2024-17-05
|
train_datagen = ImageDataGenerator(rescale=1/255,)

# Provide the same seed and keyword arguments to the fit and flow methods
seed = 1

train1_image_generator = train_datagen.flow_from_directory(
    '/kaggle/input/sysu-cd/SYSU-CD/train/train/time1',
    target_size=(256, 256),
    color_mode='rgb',
    batch_size=64,
    class_mode=None,
    seed=seed)

train2_image_generator = train_datagen.flow_from_directory(
    '/kaggle/input/sysu-cd/SYSU-CD/train/train/time2',
    target_size=(256, 256),
    color_mode='rgb',
    batch_size=64,
    class_mode=None,
    seed=seed)

train_mask_generator = train_datagen.flow_from_directory(
    '/kaggle/input/sysu-cd/SYSU-CD/train/train/label',
    target_size=(256, 256),
    color_mode='grayscale',
    batch_size=64,
    class_mode=None,
    seed=seed)

# combine generators into one which yields image and masks
train_generator = zip((train1_image_generator, train1_image_generator), train_mask_generator)
Output
Found 0 images belonging to 0 classes.
Found 0 images belonging to 0 classes.
Found 0 images belonging to 0 classes.
The folder contains 256*256 png images
|
r/tensorflow
|
post
|
r/tensorflow
|
2024-18-05
|
I have version 2.16.1 of tensorflow installed and when using it in python it works perfectly. The first 2 lines of code work perfectly:
import tensorflow as tf
from tensorflow import keras
But then the rest doesn't work:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
It doesn't recognize keras.models or keras.layers. What can I do to fix it?
|
r/tensorflow
|
post
|
r/tensorflow
|
2024-18-05
|
same!
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-18-05
|
If you're using TensorFlow 2.16 you should just use Keras 3. It is a standalone package, so replace `from tensorflow.keras` with `from keras`. It should just work.
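For example, roughly (a sketch assuming Keras 3, which TensorFlow 2.16 installs by default; the layer sizes are placeholders):
```
import keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential([
    keras.Input(shape=(28, 28, 1)),
    Conv2D(32, (3, 3), activation="relu"),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(10, activation="softmax"),
])
```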
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-19-05
|
Hi guys, I have a really old laptop with a GeForce 410M GPU with 512 MB of graphics memory, but I want to use it to train my first model because the processor (i3-2350M) is taking a lot of time. But on the website it is mentioned that we need CUDA architecture 2.1 to use it. I use Ubuntu 20.04. Please help.
|
r/tensorflow
|
post
|
r/tensorflow
|
2024-19-05
|
I ran both versions of TensorFlow on a 920M with 2 GiB; not sure about the 410M. You should give it a try and share the update with us.
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-19-05
|
920M is CUDA compute 3.5 but 410M is CUDA compute 2.1
:(
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-19-05
|
I checked and cuDNN is not installing
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-19-05
|
Let's just try to install the CUDA toolkit and see what happens. We will surely find some version of TensorFlow that will support this. Worst case, we will have to build it from scratch.
ref: [https://imgur.com/InBkD9l](https://imgur.com/InBkD9l)
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-23-05
|
The CUDA toolkit is supported on 2.1, but cuDNN is not. 🥲 How will we build that?
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-23-05
|
Let's try to download these old versions from the archive and try to use them:
[https://imgur.com/7k9oEPM](https://imgur.com/7k9oEPM)
ref: [https://developer.nvidia.com/rdp/cudnn-archive](https://developer.nvidia.com/rdp/cudnn-archive)
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-23-05
|
What should we use, Linux or windows 7?
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-23-05
|
I wish to train my AI model using the GPU, but even after trying 3 different CUDA versions I am still not able to use it. I have an RTX 3050 on my laptop. Can anyone please help me out? I am using TensorFlow 2.13.0 with CUDA 11.8 and cuDNN 8.6 with Python 3.9.5.
https://preview.redd.it/gus1j9shxj1d1.png?width=1081&format=png&auto=webp&s=bfdeac2d9f29c19bb3ca5d7a674a6d74dddd0377
https://preview.redd.it/utmcuduaxj1d1.png?width=633&format=png&auto=webp&s=a4f5154546bf33e8c376e5b7febe2f2053f573fd
https://preview.redd.it/t03pk7uaxj1d1.png?width=513&format=png&auto=webp&s=fb7855453bd31a92fffdde0b03585ccef6c07356
https://preview.redd.it/jawshcuaxj1d1.png?width=1680&format=png&auto=webp&s=14d7ab93b6276a07e4f9fa97dcb0d0a53320e841
|
r/tensorflow
|
post
|
r/tensorflow
|
2024-20-05
|
https://www.tensorflow.org/install/pip#windows-native
Read the details and give up...
Or use wsl 2
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-20-05
|
Exactly, TensorFlow stopped supporting CUDA on native Windows quite a while ago.
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-20-05
|
i didnt know that🫠
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-20-05
|
Do you genuinely think that TF will ever roll back to supporting GPU natively on Windows? Using WSL on low-RAM machines sucks and I'd rather avoid it.
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-22-05
|
Technically they can do it now. Look at pytorch. They just don't want to. 🙃
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-22-05
|
I am using TensorFlow 2.16 and Python 3 for implementing an AutoEncoder and Self-Organizing Map for the MNIST dataset. The entire code can be found [here](https://github.com/arjun-majumdar/Autoencoders_Experiments/blob/master/desom_tensorflow2.py). For brevity, the main code is:
# SOM hyper-params-
map_height = 10
map_width = 10
gamma = 0.001

# Total number of train steps/iterations-
total_iterations = len(train_dataset) * num_epochs

# Temperature hyper-parm controlling radius of Gaussian neighborhood-
Tmax = 10.0
Tmin = 0.1


class DESOM(Model):
    def __init__(
        self, map_height = 10,
        map_width = 10, latent_dim = 50,
        encoder_dims = [1, 500, 500, 100]
    ):
        super(DESOM, self).__init__()
        self.map_height = map_height
        self.map_width = map_width
        self.map_size = (self.map_height, self.map_width)
        self.latent_dim = latent_dim
        self.n_prototypes = self.map_size[0] * self.map_size[1]
        self.encoder_dims = encoder_dims
        self.encoder_dims.append(self.latent_dim)

        self.autoencoder, self.encoder, self.decoder = mlp_autoencoder(
            # encoder_dims = [X_train.shape[-1], 500, 500, 2000, latent_dim],
            encoder_dims = self.encoder_dims,
            act = 'relu', init = 'glorot_uniform',
            batchnorm = False
        )

        # Initialize SOM layer-
        self.som_layer = SOMLayer(
            map_size = (self.map_height, self.map_width), name = 'SOM'
        )(self.encoder.output)

        # Create DESOM model
        self.model = Model(
            inputs = self.autoencoder.input,
            outputs = [self.autoencoder.output, self.som_layer]
        )

    def compile(self, gamma:float = 0.001, optimizer:str = 'adam') -> None:
        """
        Compile DESOM model

        Parameters
        ----------
        gamma : float
            coefficient of SOM loss (hyperparameter)
        optimizer : str (default='adam')
            optimization algorithm
        """
        self.model.compile(
            loss = {'decoder_0': 'mse', 'SOM': som_loss},
            # loss_weights = [1, gamma],
            loss_weights = {'decoder_0': 1.0, 'SOM': gamma},
            optimizer = optimizer
        )
        return None

    def predict(self, x):
        """
        Predict best-matching unit using the output of SOM layer

        Parameters
        ----------
        x : array, shape = [n_samples, input_dim] or [n_samples, height, width, channels]
            input samples

        Returns
        -------
        y_pred : array, shape = [n_samples]
            index of the best-matching unit
        """
        _, d = self.model.predict(x, verbose = 0)
        return d.argmin(axis = 1)

    def map_dist(self, y_pred):
        """
        Calculate pairwise Manhattan distances between cluster assignments and map prototypes
        (rectangular grid topology)

        Parameters
        ----------
        y_pred : array, shape = [n_samples]
            cluster assignments

        Returns
        -------
        d : array, shape = [n_samples, n_prototypes]
            pairwise distance matrix on the map
        """
        # y_pred = tf.argmin(input = pairwise_squared_l2dist, axis = 1)
        labels = tf.range(self.n_prototypes)
        tmp = tf.cast(
            x = tf.expand_dims(input = y_pred, axis = 1),
            dtype = tf.dtypes.int32
        )
        # print(labels.dtype, tmp.dtype, y_pred.dtype)
        d_row = tf.abs(tmp - labels) // self.map_size[1]
        d_col = tf.abs(tmp % self.map_size[1] - labels % self.map_size[1])
        # (d_row + d_col).dtype
        # tf.int32
        d_row = tf.cast(x = d_row, dtype = tf.dtypes.float32)
        d_col = tf.cast(x = d_col, dtype = tf.dtypes.float32)
        return d_row + d_col

    def neighborhood_function(
        self, d,
        T, neighborhood = 'gaussian'
    ):
        """
        SOM neighborhood function (Gaussian neighborhood)

        Parameters
        ----------
        d : int
            distance on the map
        T : float
            temperature parameter (neighborhood radius)
        neighborhood : str
            type of neighborhood function ('gaussian' or 'window')

        Returns
        -------
        w : float in [0, 1]
            neighborhood weights
        """
        if neighborhood == 'gaussian':
            # return np.exp(-(d ** 2) / (T ** 2))
            return tf.exp(-tf.square(d) / tf.square(T))
        elif neighborhood == 'window':
            # return (d <= T).astype(np.float32)
            return tf.cast(x = (d <= T), dtype = tf.dtypes.float32)
        else:
            raise ValueError('invalid neighborhood function')


# Initialize MLP AutoEncoder DESOM model-
model = DESOM(
    map_height = map_height, map_width = map_width,
    latent_dim = latent_dim,
    encoder_dims = [784, 500, 500, 100]
)

# Compile model-
model.compile(gamma = gamma, optimizer = 'adam')

# Required for computing temperature for current train step-
# curr_iter = 1
curr_iter = tf.constant(1)
total_iterations = tf.cast(x = total_iterations, dtype = tf.dtypes.int32)

# Train loss-
train_loss = list()

for epoch in range(1, num_epochs + 1):
    for x, _ in train_dataset:
        # Compute bmu/cluster assignments for batch-
        # _, d = model.model.predict(x)
        _, d = model.model(x)
        # y_pred = d.argmin(axis = 1)
        y_pred = tf.argmin(input = d, axis = 1)
        y_pred = tf.cast(x = y_pred, dtype = tf.dtypes.float32)
        # y_pred.shape, d.shape
        # ((1024,), (1024, 100))

        # Compute temperature for current train step-
        curr_T = tf.cast(
            x = Tmax * tf.pow((Tmin / Tmax), (curr_iter / total_iterations)),
            dtype = tf.dtypes.float32
        )

        # Compute topographic (neighborhood) weights for this batch-
        w_batch = model.neighborhood_function(
            d = model.map_dist(y_pred = y_pred),
            T = curr_T, neighborhood = 'gaussian'
        )

        # Train on batch-
        loss = model.model.train_on_batch(x = x, y = [x, w_batch])
        train_loss.append(loss.item())

        curr_iter += 1
It gives me the Warning:
|
r/tensorflow
|
post
|
r/tensorflow
|
2024-20-05
|
I'm trying to create a model using the pre-trained BERT model from [kaggle](https://www.kaggle.com/models/tensorflow/bert/tensorFlow2/en-uncased-l-12-h-768-a-12). The model architecture is given below:
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text as text
import numpy as np
from tensorflow.keras.optimizers import SGD
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense

def build_model(bert_layer, preprocessor):
    text_input = Input(shape=(), dtype=tf.string)
    encoder_inputs = preprocessor(text_input)
    pooled_output, sequence_output = bert_layer(encoder_inputs)
    clf_output = sequence_output[:, 0, :]
    out = Dense(1, activation='sigmoid')(clf_output)

    model = Model(inputs=text_input, outputs=out)
    optimizer = SGD(learning_rate=self.lr, momentum=0.8)
    # model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])
    return model
The error is given below:
ValueError: Exception encountered when calling layer 'keras_layer_2' (type KerasLayer).
A KerasTensor is symbolic: it's a placeholder for a shape and a dtype. It doesn't have any actual numerical value. You cannot convert it to a NumPy array.
Call arguments received by layer 'keras_layer_2' (type KerasLayer):
• inputs=<KerasTensor shape=(None,), dtype=string, sparse=None, name=keras_tensor_1>
• training=None
I believe a NumPy function is being called on the Input layer, which does not have any values. Is there any way to fix this? Is this due to a dependency version mismatch?
Thanks!
|
r/tensorflow
|
post
|
r/tensorflow
|
2024-21-05
|
r/tensorflow
|
post
|
r/tensorflow
|
2024-21-05
|
|
Uhm, where are your errors? What happens when you try running the script?
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-23-05
|
Hello everyone, I implemented a data augmentation model and I'm trying to view the Grad-CAM of the neural network, but there's a problem with the data augmentation section and I can't solve that issue.
I searched for some implementations on Google but it's still not working, and I didn't find an implementation for a model with data augmentation. I asked ChatGPT, but that code is not working either.
Does someone know how to do it, or have any advice?
This is the link to the Kaggle project:
[https://www.kaggle.com/code/luismanuelgnzalez/cnn-landuse](https://www.kaggle.com/code/luismanuelgnzalez/cnn-landuse)
data augmentation model
https://preview.redd.it/636pd338at1d1.png?width=539&format=png&auto=webp&s=4bc7590e50d158fb887b11b95d94711f92322c77
model
https://preview.redd.it/l9ji9a55at1d1.png?width=581&format=png&auto=webp&s=df61495fe1dea7e922ccbabab0098bf4bb39a425
https://preview.redd.it/b4zl3d92at1d1.png?width=653&format=png&auto=webp&s=3f09a97d2a37514b50a6c7a85b53a8383fdb5009
|
r/tensorflow
|
post
|
r/tensorflow
|
2024-21-05
|
I was developing a CNN with TensorFlow on Kaggle with a P100 GPU; it obtained a validation accuracy of .87 and a loss of .56. When I downloaded it and tested it on my PC (which does not have a GPU that TensorFlow can use), I noticed that its performance declines: an image that was predicted correctly on Kaggle has many errors when predicted on my PC. Why would that be? I think it is due to not predicting images using a GPU, but I would like to know the opinion of someone more experienced.
To be more sure what it was, I made a prediction with an image that I took from the training dataset. At the time of training, it obtained an accuracy of .92. The bad thing is that it did not predict it well.
I will appreciate any knowledge you can give me.
Thanks for reading!!!
|
r/tensorflow
|
post
|
r/tensorflow
|
2024-21-05
|
GPU or not should give the same result, unless there is some sort of numerical stability issue. You brought over the weights? If it were me, I'd put a single image in and see exactly what the numerical results are for it, vs looking at aggregate stats.
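A sketch of that single-image check (file names are placeholders); comparing the full output vectors on both machines makes preprocessing or weight-loading differences obvious:
```
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("my_model.h5")   # same weights file on both machines

# Save one preprocessed training image once (np.save) and load the identical
# array on Kaggle and on the PC, so only the model/runtime can differ.
x = np.load("one_training_image.npy")[None, ...]    # shape (1, H, W, C)

probs = model(x).numpy()[0]
np.set_printoptions(precision=6, suppress=True)
print(probs)                   # compare the full vector, not just the argmax
print(int(np.argmax(probs)))
```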
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-24-05
|
I have a directory of files, where each file represents a raw radio waveform. It is saved as a sequence of samples, where each sample entry is written out as separate real and imaginary parts. Both parts are encoded as 32-bit floats, so one sample is 8 bytes. There are 2^14 samples, so each file contains exactly 8 * 2^14 bytes.
There is no header or footer present.
I'd like to read each file in as its own "element" into a dataset (avoid concatenating data from different files together). I thought [FixedLengthRecord](https://www.tensorflow.org/api_docs/python/tf/data/FixedLengthRecordDataset) would be appropriate, so I attempted to create a dataset like so:
fnames = tf.data.Dataset.list_files('data/**/*.bin')
dataset = tf.data.FixedLengthRecordDataset(fnames, record_bytes= 8*2**14)
I'm not sure how exactly to inspect the structure of the dataset, but I know its element spec has a dtype of `tf.string`, which is not desired. Ideally, I'd like to read the contents of each file into a 1D tensor of `tf.complex64`. I cannot find many examples of working with FixedLengthRecord data, much less in a format this simple. Any help would be appreciated.
|
r/tensorflow
|
post
|
r/tensorflow
|
2024-21-05
|
The method defined in this SO answer did the trick: [https://stackoverflow.com/a/70648958](https://stackoverflow.com/a/70648958)
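For anyone landing here later, a sketch of the kind of parsing that answer points at, assuming the layout described in the question (interleaved little-endian float32 real/imaginary pairs, 2^14 samples per file):
```
import tensorflow as tf

RECORD_BYTES = 8 * 2**14   # 2^14 samples, 8 bytes each (float32 real + float32 imag)

fnames = tf.data.Dataset.list_files('data/**/*.bin')
dataset = tf.data.FixedLengthRecordDataset(fnames, record_bytes=RECORD_BYTES)

def to_complex(record):
    # decode_raw assumes little-endian byte order by default.
    floats = tf.io.decode_raw(record, tf.float32)   # (2 * 2**14,) interleaved re, im
    floats = tf.reshape(floats, (-1, 2))            # (2**14, 2)
    return tf.complex(floats[:, 0], floats[:, 1])   # 1-D tf.complex64 tensor per file

dataset = dataset.map(to_complex)
```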
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-22-05
|
r/tensorflow
|
post
|
r/tensorflow
|
2024-24-05
|
|
Why did you use the Node.js server? I think you could have sent data from the frontend to the Flask server using an API request.
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-26-05
|
Very true, I had already created the Node server when I encountered the issue with the converted model, and I was emotionally stuck with the Node server, so I just used it anyway 😂
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-26-05
|