text | label | dataType | communityName | datetime
---|---|---|---|---|
title
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-01-07
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-02-07
|
|
Hello, how can I design a simple encoder-decoder model that only uses the GRU network? For the word embedding layer, I'd like to use Word2Vec or FastText vectors. I'm new to NLP and TensorFlow and I just need some clues to understand how to design the sequence layers; I have already preprocessed the dataset. I have reviewed a lot of GitHub code and research papers; what I don't understand is how to use TensorFlow 2 to design the model and train it. Thanks a lot.
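Not a full answer, but a minimal sketch of the shape such a model usually takes in TF2/Keras, assuming integer-encoded sequences and a pretrained `(vocab_size, embed_dim)` embedding matrix built from your Word2Vec/FastText vectors (all sizes and names below are hypothetical):
```python
import tensorflow as tf

vocab_size, embed_dim, units = 10000, 300, 256   # hypothetical sizes

# Encoder: embed the source tokens and keep only the GRU's final state.
enc_in = tf.keras.Input(shape=(None,))
enc_emb = tf.keras.layers.Embedding(vocab_size, embed_dim,
                                    # weights=[embedding_matrix], trainable=False,  # plug in your pretrained vectors
                                    mask_zero=True)(enc_in)
_, enc_state = tf.keras.layers.GRU(units, return_state=True)(enc_emb)

# Decoder: teacher forcing on the target tokens, initialized with the encoder state.
dec_in = tf.keras.Input(shape=(None,))
dec_emb = tf.keras.layers.Embedding(vocab_size, embed_dim, mask_zero=True)(dec_in)
dec_out = tf.keras.layers.GRU(units, return_sequences=True)(dec_emb, initial_state=enc_state)
logits = tf.keras.layers.Dense(vocab_size)(dec_out)

model = tf.keras.Model([enc_in, dec_in], logits)
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
# model.fit([encoder_inputs, decoder_inputs], decoder_targets, epochs=10)
```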
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-03-07
|
Is there an alternative to the sonnet function [BatchApply](https://sonnet.readthedocs.io/en/latest/api.html#batchapply) in TensorFlow?
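I'm not aware of a drop-in core-TF equivalent. For Keras layers, `tf.keras.layers.TimeDistributed` covers the common one-extra-leading-dimension case, and the general reshape-apply-reshape pattern that BatchApply implements can be sketched by hand (a sketch, not a vetted utility):
```python
import tensorflow as tf

def batch_apply(fn, x, num_dims=2):
    # Merge the first `num_dims` dimensions into one batch dimension,
    # apply fn, then restore the leading dimensions.
    lead = tf.shape(x)[:num_dims]
    merged = tf.reshape(x, tf.concat([[-1], tf.shape(x)[num_dims:]], axis=0))
    out = fn(merged)
    return tf.reshape(out, tf.concat([lead, tf.shape(out)[1:]], axis=0))

# e.g. batch_apply(tf.keras.layers.Dense(8), tf.zeros([4, 5, 16])) -> shape (4, 5, 8)
```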
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-04-07
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-05-07
|
|
Hi, I'm pretty new to TensorFlow. Previously, I've been able to load a model from TF Hub, but now Python just gets stuck on it. I've literally copied the exact code from the colab (https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/semantic_similarity_with_tf_hub_universal_encoder.ipynb#scrollTo=zwty8Z6mAkdV). Not sure why this is happening, as the model loads fine there.
Any help would be appreciated.
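If it helps to rule things out, a small sketch that makes the download/caching step explicit (the cache path is a hypothetical choice; the module URL is the one used in that colab):
```python
import os
# Caching the module to a known local directory makes a stalled download visible and resumable.
os.environ["TFHUB_CACHE_DIR"] = "/tmp/tfhub_cache"  # hypothetical cache path

import tensorflow_hub as hub
embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")
print(embed(["Hello world"]).shape)  # (1, 512) for the Universal Sentence Encoder
```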
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-06-07
|
My situation is that I am from Malaysia, and jobs in tech here are lowly paid if not nonexistent altogether. So my outlet for getting paid well would be remote jobs.
But does the certification hold any actual weight, or will I still be met with an "X years of experience required" response from interviewers?
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-06-07
|
Basics of TensorFlow GradientTape
[https://debuggercafe.com/basics-of-tensorflow-gradienttape/](https://debuggercafe.com/basics-of-tensorflow-gradienttape/)
https://preview.redd.it/q91vx0r2vfab1.png?width=1000&format=png&auto=webp&s=abf7742b869cc36eda0023676d04b22036860c12
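For readers skimming past the link, the core idea behind GradientTape in a few lines (standard TF API):
```python
import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x ** 2               # y = x^2
print(tape.gradient(y, x))   # dy/dx = 2x -> tf.Tensor(6.0, shape=(), dtype=float32)
```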
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-07-07
|
I recently made a generative AI model with a reinforcement-learning PPO component alongside it. It is going to take around 1000 training episodes before real changes are seen in its dialog. That's where I need help: please interact with the bot by chatting and rating its responses. It will respond to anything. It's made without limits, unlike other common models; the project is to see how well and how fast a wide range of people can train a model. The link has been up for less than a day. The link is kingcorp.ngrok.dev. Please be nice to it, haha.
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-07-07
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-08-07
|
|
I use an Nvidia GPU on my machine to run an image segmentation model.
In the beginning PyCharm could not link to the GPU, but I found a method to solve it and make the Nvidia GPU the first option instead of the machine's default GPU.
However, after installing Anaconda, the machine links to the GPU and I can run the code to create the mask of the image for segmentation. There are two issues that I notice:
1- it takes more than 4 minutes to run one image
2- the resulting image is totally unexpected (as you can see in the attached image)
I use the same code and environment on my friend's device and it works fine and we get a great result!
Did anyone face this issue, and what could be the reasons / the way to solve it?
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-08-07
|
So I am messing around trying to make an image-learning AI in Python and I would like to use the GPU instead of the CPU. I downloaded CUDA and cuDNN and did everything to make them work, but when I run the code to check whether TensorFlow can see a GPU, it says that it didn't find any. I have a GTX 1070, by the way.
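A few standard checks that usually narrow this down (nothing GTX 1070 specific; note that native Windows GPU support ends at TF 2.10, and a CPU-only wheel will also report no GPUs):
```python
import tensorflow as tf

print(tf.__version__)
print(tf.test.is_built_with_cuda())            # False -> the installed wheel has no GPU support at all
print(tf.config.list_physical_devices("GPU"))  # [] -> wheel is fine but CUDA/cuDNN were not found at runtime
```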
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-09-07
|
So I have this transformer model for fingerspelling that I trained, then I modified it inside a tf.Module so it accepts only the frames input (let's call it tflitemodel). The tflitemodel itself works normally and can be used. However, when I want to save it as a TFLite model it returns "tflitemodel has no attribute call". I can save the original model just fine. Here is the notebook on Kaggle: [The notebook](https://www.kaggle.com/code/daewogibran/aslfr/notebook?scriptVersionId=136347347).
I've seen other notebooks using tf.Module and it works. It really has me stuck. I tried using tf.keras.Model but it doesn't like the embedding and the loop for some reason. Any help would be appreciated.
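In case it helps anyone hitting the same error, the pattern that usually converts a tf.Module cleanly is exposing inference as a tf.function with an input_signature and converting from the concrete function; a hedged sketch (class name and frame shape are hypothetical, not taken from the notebook):
```python
import tensorflow as tf

class FramesOnly(tf.Module):
    def __init__(self, model):
        super().__init__()
        self.model = model

    # A fixed input signature lets the converter trace the graph without calling the module directly.
    @tf.function(input_signature=[tf.TensorSpec([None, 543, 3], tf.float32)])  # hypothetical frame shape
    def infer(self, frames):
        return self.model(frames)

# wrapper = FramesOnly(trained_model)
# converter = tf.lite.TFLiteConverter.from_concrete_functions(
#     [wrapper.infer.get_concrete_function()], wrapper)
# open("model.tflite", "wb").write(converter.convert())
```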
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-11-07
|
I have a tech stack in mind with MobileNet and JavaScript/TypeScript, but I need a custom model. I don't know any Python, but I need to create an AI model that can identify features in an image. I am willing to seek guidance or hire someone who can help me understand TF and MobileNet for my project.
The goal is to feed the CNN an image and identify whether it has wings, whether it's a bug, dragon, ghost, etc., and whether it's fire, water, electric, etc.
My original project was using colors, but that's not enough to identify traits in an image.
I am willing to learn Python to get it working, but Python plus TensorFlow is a lot of information, and I could use guidance if it's the only way.
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-11-07
|
Hi all,
I've recently become aware of Apple's Core ML framework and how it's possible to use Tensorflow models in iOS apps. If I understand correctly, tensorflow models can be converted to CoreML, but CoreML doesn't necessarily support all types of neural networks. Are GANs one of the supported networks? And if so, how is the performance on iOS apps?
I mean I know CoreML supports things like CNNs, recurrent neural networks, etc.., but don't know if GANs fit the bill as I'm an amateur game dev and also have yet to fully understand working with ML. So, I'd really appreciate if someone could share their experience with CoreML and GANs. Thanks!
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-12-07
|
I just started reading about the Transformer model. I have barely scratched the surface of this concept. For starters, I have the following 2 questions:
1. How are positional encodings incorporated into the transformer model? I see that immediately after the word embedding they have positional encoding, but I'm not getting in which part of the entire network it is used.
2. For a given sentence, the weight matrices of the query, key and value all have the length of the sentence itself as one of their dimensions. But the length of the sentence is variable, so how do they handle this issue when they pass in subsequent sentences?
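On the first question: in the original architecture the positional encoding is simply added to the word embeddings once, before the first encoder/decoder block, so every later layer just sees the sum. A small sketch of the sinusoidal version (plain NumPy, for illustration):
```python
import numpy as np

def positional_encoding(seq_len, d_model):
    # Sinusoidal encoding from "Attention Is All You Need":
    # even dimensions get sin, odd dimensions get cos.
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model)[None, :]
    angle = pos / np.power(10000, (2 * (i // 2)) / d_model)
    enc = np.zeros((seq_len, d_model))
    enc[:, 0::2] = np.sin(angle[:, 0::2])
    enc[:, 1::2] = np.cos(angle[:, 1::2])
    return enc

# x = embedding(tokens)                            # (batch, seq_len, d_model)
# x = x + positional_encoding(seq_len, d_model)    # summed once, before the first block
```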
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-12-07
|
I am deploying a MobileNetV2 model onto an Arduino using the TF Lite framework. I have used the MobileNetV2 preprocess layer in my compiled model, do I still need to rescale any input or will my model take care of it during inference?
I have also used a single dimension dense layer output as I only have 2 output classes, is there only the softmax output available from the micro_ops_resolver?
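On the rescaling question, a quick way to check what the bundled MobileNetV2 preprocessing does (assuming it matches the standard Keras applications function): it maps raw [0, 255] pixels to [-1, 1], so no extra rescaling should be applied on top of it.
```python
import numpy as np
import tensorflow as tf

raw = np.array([[0.0, 127.5, 255.0]], dtype=np.float32)
print(tf.keras.applications.mobilenet_v2.preprocess_input(raw))  # -> [[-1.  0.  1.]]
```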
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-12-07
|
Basics of TensorFlow GradientTape
[https://debuggercafe.com/basics-of-tensorflow-gradienttape/](https://debuggercafe.com/basics-of-tensorflow-gradienttape/)
https://preview.redd.it/mnv1yi3ustbb1.png?width=1000&format=png&auto=webp&s=fbcddfbfabbfe224d890e9920e1c4f0c4c912670
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-14-07
|
I've been attempting to get my GPU to be detected by TensorFlow on and off for weeks for an upcoming university project, but I have not been able to do this.
I'm using Anaconda and I'm on Windows (10, but on my Windows 11 laptop it would also not work correctly).
I have installed cudnn (Version 8.1.0.77) and cudatoolkit (Version 11.2.2) via conda. I have installed TensorFlow (Version 2.10.1) via pip (All versions from the "conda list" command). I chose these versions as they should have the best compatibility, but it still doesn't work. I have attempted to follow this [https://www.tensorflow.org/install/pip#step-by-step\_instructions](https://www.tensorflow.org/install/pip#step-by-step_instructions) as much as possible. The first verification step (For the CPU) returns this:
[This seems fine from what I understand. It returns the tensor at least](https://preview.redd.it/ecnfdzhzixbb1.png?width=1899&format=png&auto=webp&s=2602f6d67fc24a42522a6370413eae3c86fc15b5)
The second (For the GPU), however, only returns "\[\]".
I have an RTX 2070 Super with driver version 536.40, and no integrated graphics in my CPU (AMD Ryzen 5 3600). I should also have enough RAM (I have 32GB DDR4, while the minimum I believe is 8GB).
I've tried looking through articles and finding a solution, but I've evidently not been successful in this.
Could it be perhaps related to the OS?
Any suggestions for the next things to look for or to check would be greatly appreciated!
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-14-07
|
I have a dataset of images (two classes) stored locally on my PC that I want to train on. When I load from my hard drive using the flow_from_directory function I get a much smoother loss curve, which is more desirable for me, but this is very slow. I have discovered that loading the data into RAM first, by using cv2 to load the data into numpy arrays, makes the training much faster (almost 3x). However, now the loss curve has the same general shape but has many spikes, is very jagged, and makes my accuracy worse. I assume this has something to do with a difference in how the images are processed as they are loaded. What should I change about my numpy loading to make it behave more like the flow_from_directory function?
https://preview.redd.it/l1bg6r51rybb1.png?width=716&format=png&auto=webp&s=501f98b9006073592b620780b6f97d4553e290e9
https://preview.redd.it/zeuh7t51rybb1.png?width=1735&format=png&auto=webp&s=d81589cf46f0795e19972d2128cfcdf97680bd97
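One common source of such a mismatch (an assumption, not a diagnosis of this exact case): cv2 returns BGR uint8 arrays, while flow_from_directory yields RGB float32 batches, applies any configured rescale (e.g. rescale=1./255), and reshuffles every epoch. A loader that mirrors that behaviour might look like:
```python
import cv2
import numpy as np

def load_image(path, target_size=(224, 224), rescale=1.0 / 255):
    img = cv2.imread(path)                       # BGR, uint8
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)   # match Keras' RGB channel ordering
    img = cv2.resize(img, target_size)           # match the generator's target_size
    return img.astype(np.float32) * rescale      # match the generator's rescale factor
```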
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-14-07
|
Hello, I'm working on Ubuntu 18.04 and using Python 2.7 (it has to be this version) and I need to install TensorFlow, but I couldn't find a way. Does anybody know how to do so?
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-14-07
|
I'm trying to learn VAEs, and I'm pretty clear about the idea of a (vanilla) AE and its internal workings. I understand that a VAE is an extension of an AE for the most part, where the fixed latent vector in the middle is now replaced with a mean vector and a stdev vector, and we sample from them (yes, using the reparametrization trick so as not to mess with gradient flow). But I still can't wrap my head around the mean vector and stdev vector: it is the mean and stdev along which axis (or dimension)? Why are we doing this sampling? Also, can you explain its loss function in simple terms (you may assume that I know KL divergence)?
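For what it's worth, a sketch of the usual formulation that makes the axes concrete (assuming a diagonal Gaussian encoder and an N(0, I) prior; mu and log_var have shape (batch, latent_dim), i.e. one mean and one variance per latent dimension, not per input pixel):
```python
import tensorflow as tf

def vae_loss(x, x_recon, mu, log_var):
    # Reconstruction term (squared error here; BCE is common for Bernoulli outputs)
    recon = tf.reduce_sum(tf.square(x - x_recon), axis=-1)
    # KL divergence between q(z|x) = N(mu, sigma^2) and the prior N(0, I),
    # summed over the latent dimensions.
    kl = -0.5 * tf.reduce_sum(1 + log_var - tf.square(mu) - tf.exp(log_var), axis=-1)
    return tf.reduce_mean(recon + kl)
```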
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-14-07
|
import tensorflow as tf
from tensorflow.keras.layers.experimental import preprocessing
Problem:
Cannot find reference 'keras' in '__init__.py' :9
Unresolved reference 'preprocessing' :9
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-16-07
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-16-07
|
|
Hi, I've been studying TensorFlow models, specifically TensorFlow Lite models, to integrate into an application. I would just like to ask whether it's possible to have multiple datasets compiled into one and edit it so that it only has 3 classes. For instance, take a dataset for road signs and, instead of specifically training the model to know the different signs, categorize them all as a road sign, and add another dataset for vehicle detection that would only output vehicles. Thanks in advance!
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-16-07
|
I have a GT 720 video card (so effectively no video card). Unfortunately for me, I really need a video card. AMD and Intel are too far behind, so I guess I need Nvidia because of the Tensor Cores. Can you please give me some benchmarks, ideas, anything that can help me make a decision?
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-17-07
|
I am trying to rewrite some code in TensorFlow which was originally written in PyTorch, but I have attempted everything, including writing my own code based on the theory rather than just changing the functions from one framework to the other. I have also attempted using ChatGPT and it didn't give me proper results. I have written some code now, but I keep getting the error mentioned above (I will write the full error message in the comments). Here is both the working PyTorch code and the failing TensorFlow code. Is there any idea of what I could be doing wrong or what I could do? It doesn't help that nothing I try to fix the error works.
# pytorch code
def forward(self, X):
    B = torch.tensor_split(X, self.idxs, dim=3)
    Z = []
    for i, (layer_norm, linear_layer) in enumerate(zip(self.layer_norms, self.linear_layers)):
        b_i = torch.cat((B[i][:, :, 0, :, :], B[i][:, :, 1, :, :]), 2)  # concatenate real and imaginary spectrograms
        b_i = torch.transpose(layer_norm(b_i), 2, 3)  # check how the layer norm is done
        Z.append(torch.transpose(linear_layer(b_i), 2, 3))
    Z = torch.stack(Z, 3)
    return Z
# Tensorflow Code
def call(self, inputs):
    B = tf.split(inputs, self.idxs.numpy(), axis=3)
    Z = []
    for i, (layer_norm, linear_layer) in enumerate(zip(self.layer_norms, self.linear_layers)):
        b_i = tf.concat([B[i][:, :, :, :, 0], B[i][:, :, :, :, 1]], axis=2)
        b_i = tf.transpose(layer_norm(b_i), perm=[0, 1, 3, 2])
        Z.append(tf.transpose(linear_layer(b_i), perm=[0, 1, 3, 2]))
    Z = tf.stack(Z, axis=3)
    return Z
I am trying to run it on the following code, which works in pytorch, but not tensorflow:
# Test run
B = 1
T = 1
C = 1
F = 1
X = tf.random.normal(shape=(B, T, C, F))
band_split = Band_Split(temporal_dimension, max_freq_idx, sample_rate, n_fft, subband_dim)
result = band_split(X)
print(X.shape) # output is ([1, 2, 2, 257, 100])
print(result.shape) # output is ([1, 2, 128, 30, 100]) on pytorch, tf does not work
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-17-07
|
We are trying to use the YOLOv8 framework to run object detection. However, it isn't detecting anything, and I am unsure what to look for in the log. I doubt the code will be much help, as I am forced to use a third-party library that manages the TensorFlow side of things. This is what we have:
import org.firstinspires.ftc.robotcore.external.tfod.Recognition;
import org.firstinspires.ftc.vision.VisionPortal;
import org.firstinspires.ftc.vision.tfod.TfodProcessor;
tfod = new TfodProcessor.Builder()
    // Use setModelAssetName() if the TF Model is built in as an asset.
    // Use setModelFileName() if you have downloaded a custom team model to the Robot Controller.
    .setModelAssetName(TFOD_MODEL_ASSET)
    //.setModelFileName(TFOD_MODEL_FILE)
    .setModelLabels(LABELS)
    .setIsModelTensorFlow2(false)
    .setIsModelQuantized(false)
    .setModelInputSize(640)
    .setModelAspectRatio(16.0 / 9.0)
    .build();
error log:
[https://github.com/samus114/roboterror/blob/main/robotControllerLog%20(3).txt](https://github.com/samus114/roboterror/blob/main/robotControllerLog%20(3).txt)
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-18-07
|
Hello,
I am making a chatbot for a client, which should give their employees quick solutions to problems.
I have built the chatbot, but it just answers questions without following the conversation.
If my chatbot asks the user a question and they answer with yes or no, it does not recognize that as the answer and does not follow up with another question to try to solve the problem.
I am not very experienced with TensorFlow and NLP, but I would like to hear which direction I should take to improve the chatbot, or what I should learn in order to improve it.
Thanks.
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-18-07
|
Where is the best place to get questions answered?
After asking questions here, on the TF forum, and on Stack Overflow, I don't seem to be getting any answers. This could be because my questions are stupid or too complex, although I would like to think they fall somewhere in between.
I would pay for my questions to be answered so that I could progress with my project and continue to learn the TF framework.
Is there any way to access people who are fluent in the framework?
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-18-07
|
Hello all,
I'm new to TensorFlow and use it to train LoRA models for Stable Diffusion image generation. When monitoring the training process via TensorBoard I saw two **"strange" occurrences** - see picture.
https://preview.redd.it/oydivyjmpqcb1.png?width=417&format=png&auto=webp&s=6b1860759246dba93503f00439b933d647844d9b
Can someone please help me out...
1. Why does the blue graph show increasing average loss after ~1400 steps?
2. Why does the gray graph show NaN loss after ~1200 steps?
So, not really "why", because the "why" is most likely a mistake I made (which I hope to rule out), but more **"what" it means** that either the loss increases after ~1400 steps or that I get NaN loss after ~1200 steps...
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-18-07
|
I want to use embeddings on images. As I understand it, I need to extract features first, after which I need to do pooling and apply the embedding. I want to ask whether I can use the Embedding layer in TensorFlow to embed images after doing feature extraction and pooling, or whether there is some other layer or process that I need to use.
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-18-07
|
https://preview.redd.it/5sbnwkp87rcb1.png?width=1280&format=png&auto=webp&s=51daf668307f0e3d583dac372f7cbdd0592377c9
🎥 Discover the world of image classification using TensorFlow, Pixellib, and Python in our latest video tutorial! 🌟
Learn how to train and detect custom images, enhancing your computer vision skills. 📸🔍
In this informative tutorial, we'll explore the process of annotating objects within images, creating accurate labels with dot markings and JSON files. We'll also introduce the ResNet101 model, a powerful pre-trained deep learning architecture, to train our custom images and labels.
If you are interested in learning a modern Computer Vision course with a deep dive into TensorFlow, Keras and Pytorch, you can find it here: [http://bit.ly/3HeDy1V](http://bit.ly/3HeDy1V)
Perfect course for every computer vision enthusiast
A recommended book, [https://amzn.to/44GnlLW](https://amzn.to/44GnlLW) - "Make Your Own Neural Network - An In-depth Visual Introduction For Beginners"
Code for this video: [https://github.com/feitgemel/Object-Detection/tree/main/Pixellib](https://github.com/feitgemel/Object-Detection/tree/main/Pixellib)
The link for the video: [https://youtu.be/i9MEXrLtFOQ](https://youtu.be/i9MEXrLtFOQ)
Enjoy
Eran
#Python #Cnn #TensorFlow #deeplearning #neuralnetworks #pixellib
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-18-07
|
Hello, I use TensorFlow-CPU in Python and want to capture the TensorFlow output from my console in my log file.
I tried a few different approaches for logging the TensorFlow messages into my existing log file (where normal logging works), but none of my approaches worked...
To be specific, I want to log, for example, these TF messages into my file:
2023-07-18 11:11:20,123412: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized
with oneAPI Deep Neural Network Library (oneDNN) to use
the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
W tensorflow/core/framework/allocator.cc:108] Allocation of 18599850000 exceeds 10% of system memory.
Thanks for your help and sharing your experience
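A sketch of one possible setup, under two assumptions: Python-side TensorFlow messages go through tf.get_logger(), so a normal FileHandler catches them, while the native C++ startup messages (cpu_feature_guard, allocator warnings) are written straight to the process's stderr and can only be captured by redirecting file descriptor 2 (the log file name is hypothetical):
```python
import logging
import os

log_path = "app.log"  # hypothetical log file

# Capture the native C++ stderr output at the OS level (do this before importing tensorflow).
fd = os.open(log_path, os.O_WRONLY | os.O_APPEND | os.O_CREAT)
os.dup2(fd, 2)

import tensorflow as tf

# Route Python-side TF log records into the same file.
handler = logging.FileHandler(log_path)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
tf.get_logger().addHandler(handler)
tf.get_logger().setLevel(logging.INFO)
```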
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-18-07
|
I made a simple LSTM model to classify text as either heading or non-heading using Colab. I did model.save(), but where do I go from here? I want to be able to use the model locally.
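A minimal sketch of one way to do it, assuming a Keras model object named `model` and a hypothetical file name:
```python
# In the Colab notebook: save to a single HDF5 file and download it.
model.save("heading_classifier.h5")
from google.colab import files
files.download("heading_classifier.h5")

# On your local machine (same TF/Keras major version installed):
import tensorflow as tf
model = tf.keras.models.load_model("heading_classifier.h5")
# predictions = model.predict(preprocessed_texts)  # same tokenization/padding as in training
```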
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-18-07
|
Hello, I am trying to create a project with Rasa, and it uses TensorFlow version 2.12.0. I am aware that native Windows GPU support was dropped after 2.10, and training a model was taking upwards of 20 hours on my previous project, so I wanted to know which of these options would be best to proceed with:
- downgrade to 2.10.0
- install everything on WSL2
- use the direct ml plugin
I am very new to all of this, so bear with me if some of my questions seem obvious or stupid. Thanks for responding!
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-19-07
|
"""
from tensorflow.keras.preprocessing.sequence import pad\_sequences import tensorflow as tf
maxlen = 500
X\_train = pad\_sequences(X\_train, maxlen=maxlen, padding='post', truncating='post') X\_test = pad\_sequences(X\_test, maxlen=maxlen, padding='post', truncating='post') X\_train\_reshaped = X\_train.reshape((\*X\_train.shape, 1)) X\_test\_reshaped = X\_test.reshape((\*X\_test.shape, 1))
model = Sequential() model.add(LSTM(128)) model.add(Dense(1, activation="sigmoid")) model.compile(optimizer='adam', loss='binary\_crossentropy') #model.summary() model.fit(X\_train, y\_train, epochs=10, validation\_data=(X\_test, y\_test), verbose=2)
"""
I keep getting the error and I am not sure what I am doing wrong as I nearly copied the same exact example as the code shown in this website([https://machinelearningmastery.com/sequence-classification-lstm-recurrent-neural-networks-python-keras/](https://machinelearningmastery.com/sequence-classification-lstm-recurrent-neural-networks-python-keras/))
Call arguments received by layer "sequential" " f"(type Sequential): • inputs=tf.Tensor(shape=(None, 500), dtype=int32) • training=True • mask=None
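One detail that stands out in the snippet (an observation, not a confirmed diagnosis): X_train_reshaped and X_test_reshaped are created but never used, and the un-reshaped 2-D integer arrays are passed to fit(), while an LSTM expects 3-D (batch, timesteps, features) input. The call would then look like:
```python
# Feed the reshaped (batch, 500, 1) arrays so the LSTM sees 3-D input.
model.fit(X_train_reshaped, y_train, epochs=10,
          validation_data=(X_test_reshaped, y_test), verbose=2)
```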
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-19-07
|
Been stuck on this for days. The issue is mentioned in this year-old thread, with multiple solutions that don't end up working: [https://github.com/tensorflow/io/issues/1625](https://github.com/tensorflow/io/issues/1625). If we're on a Mac, are we just SOL? I've never encountered this type of problem before, where you straight up can't work because your Mac is incompatible with the necessary software.
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-19-07
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-20-07
|
|
Is it possible to reproduce the TensorFlow Playground animations when training a real AI model in Google Colab or any other IDE?
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-20-07
|
The Imperial College London files are all corrupted, and I can't seem to find the dataset anywhere. Anyone with an idea? Or even just a similar dataset.
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-20-07
|
Linear Regression using TensorFlow GradientTape
[https://debuggercafe.com/linear-regression-using-tensorflow-gradienttape/](https://debuggercafe.com/linear-regression-using-tensorflow-gradienttape/)
https://preview.redd.it/wslogwvkr7db1.png?width=1000&format=png&auto=webp&s=c0e29e9b3481ef22bd43bdf3c50a14487c21eaae
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-21-07
|
I find it frustrating and hard to believe that I just straight up won't be able to do a project because I'm on a Mac. [https://github.com/tensorflow/io/issues/1625](https://github.com/tensorflow/io/issues/1625) This issue is over a year old. How can we go about using tensorflow-io on an M1 Mac? There has to be a way by now.
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-22-07
|
I have the following code:
import tensorflow as tf
from tensorflow.keras import Input, Model, layers
and things like `y = layers.ELU()(y)` work as expected. I wanted to see a list of the available `layers` so I went to the [Tensorflow GitHub repository](https://github.com/tensorflow/tensorflow) and to the `keras` directory. There's a warning in that directory that says:
>STOP! This folder contains the legacy Keras code which is stale and about to be deleted. The current Keras code lives in github/keras-team/keras.
>
>Please do not use the code from this folder.
I'm guessing the "real" keras code is coming from the [keras repository](https://github.com/keras-team/keras). Is that a correct assumption? How does that version of Keras get there? If I wanted to write my own activation layer next to `ELU`, where exactly would I do that?
Thanks!
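As far as I know, yes: since TF 2.x the Keras implementation lives in the keras-team/keras repository and is pulled into TensorFlow as the `keras` pip dependency, re-exported as `tf.keras`. If the practical goal is just a custom activation next to `ELU`, one option that avoids touching either repository is a small Layer subclass in your own code; a sketch (the ScaledELU name and math are made up for illustration):
```python
import tensorflow as tf

class ScaledELU(tf.keras.layers.Layer):
    """ELU with an extra output scale, purely as an example of a custom activation layer."""
    def __init__(self, alpha=1.0, scale=1.0, **kwargs):
        super().__init__(**kwargs)
        self.alpha = alpha
        self.scale = scale

    def call(self, x):
        # expm1(min(x, 0)) keeps the negative branch numerically safe for large positive x.
        return self.scale * tf.where(x > 0.0, x, self.alpha * tf.math.expm1(tf.minimum(x, 0.0)))

# y = ScaledELU(scale=1.05)(y)   # used exactly like layers.ELU()
```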
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-23-07
|
If you are open to using Anaconda, I found a very short and concise video a few weeks ago. Helps you set up GPU for TensorFlow very quickly.
Here's the link: https://youtu.be/toJe8ZbFhEc
|
r/tensorflow
|
comment
|
r/tensorflow
|
2023-25-07
|
Ok, I see, thank you for the response.
|
r/tensorflow
|
comment
|
r/tensorflow
|
2023-25-07
|
Yea this could prove to be useful, thank you for sharing it.
|
r/tensorflow
|
comment
|
r/tensorflow
|
2023-25-07
|
I've tried tensorflow-rocm and tensorflow-directml-plugin on Ubuntu and WSL2 respectively, and apparently only the former kind of works with my GPU (RX 6700 XT). Mind that I need to run `export HSA_OVERRIDE_GFX_VERSION=10.3.0` to make it work. Tried it with the Keras example for automatic speech recognition and it works at least (huge speed increase, from 2 s/step to nearly 200 ms/step).
|
r/tensorflow
|
comment
|
r/tensorflow
|
2023-26-07
|
I am planning to buy an RTX 4070 (non-Ti) for deep learning and machine learning work. While checking Nvidia's GPU compatibility list I did find the RTX 4070 Ti but did not find the RTX 4070: [CUDA GPUs - Compute Capability | NVIDIA Developer](https://developer.nvidia.com/cuda-gpus)
Also, I am not buying a laptop/notebook RTX 4070, which does have CUDA support listed explicitly on the above website.
Please help.
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-26-07
|
I bought an RTX 4070Ti four months ago (for a desktop computer), and I have no problems at all. I don't think the RTX 4070 would be any different in terms of compatibility.
|
r/tensorflow
|
comment
|
r/tensorflow
|
2023-27-07
|
Should be fine; the drivers, CUDA, and cuDNN are the same. The 4070 is just a cut-down 4070 Ti, but there should be no software issues.
|
r/tensorflow
|
comment
|
r/tensorflow
|
2023-09-08
|
It will work.
|
r/tensorflow
|
comment
|
r/tensorflow
|
2023-21-08
|
Crosspost from my stackoverflow. I would put it on the tfjs sub but it is tiny.
I am sure the solution is out there somewhere, but I have been unable to find it. I originally trained the model in normal tensorflow, but it is being used in tensorflowjs after being converted. My current error is
>Uncaught (in promise) Error: Size(30000) must match the product of shape 100,100,3
Though I have had many others through my attempts.
My code right now is
function preprocess(imageData) {
    //const img_arr = cv.imread(imageData);
    let inputTensor = tf.browser.fromPixels(imageData);
    const offset = tf.scalar(255.0);
    const normalized = tf.scalar(1.0).sub(inputTensor.div(offset));
    const batchInputShape = [100, 100, 3];
    const flattenedInput = tf.reshape(normalized, [batchInputShape]);
    console.log(flattenedInput.shape);
    return flattenedInput;
}
The result of this function is then fed into my model, which produces the error. I am sure the solution is obvious but I have been unable to find it.
I have also tried
const batchInputShape = [null, 100, 100, 3];
const flattenedInput = tf.reshape(normalized, [batchInputShape, -1]);
Though that did not fare any better.
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-27-07
|
Do a reshape(-1, 100, 100, 3) on your input tensor. Based on the structure of the data, I am assuming that it is a 100x100 RGB image you are trying to pass into the network.
|
r/tensorflow
|
comment
|
r/tensorflow
|
2023-27-07
|
For someone who already has experience with Python, an effective way to learn Tensorflow from is with the book "[Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python-second-edition)" by François Chollet. Highly recommended!
|
r/tensorflow
|
comment
|
r/tensorflow
|
2023-27-07
|
This was a great course. https://coursera.org/professional-certificates/tensorflow-in-practice
|
r/tensorflow
|
comment
|
r/tensorflow
|
2023-27-07
|
I have no knowledge in python. But i would like to learn it. Please give me some resources. Thanks…
|
r/tensorflow
|
comment
|
r/tensorflow
|
2023-28-07
|
This web page provides links to everything you need to get started with python: [https://www.python.org/about/gettingstarted/](https://www.python.org/about/gettingstarted/)
|
r/tensorflow
|
comment
|
r/tensorflow
|
2023-28-07
|
It seems easier. I will get one book from O'Reilly.
Can you help me with a mathematics book related to TensorFlow - algebra concepts and so on?
A book to refresh all the concepts required for learning TensorFlow.
|
r/tensorflow
|
comment
|
r/tensorflow
|
2023-28-07
|
So my question is... What do you want to do with knowing TensorFlow? What's your aim?
TensorFlow is big... Like it's... Really damn big. If TensorFlow were its own managed company, like Spark is to Databricks, it probably would be a $50B+ company (and a lot less buggy).
If you say TensorFlow, you're saying you want to learn the whole of deep learning as a field, and there are so many layers to that.
Do you want to...
1. Learn how to create a model and train it with TensorFlow?
2. Learn how to optimize existing models?
3. Learn how to set up infrastructure with TensorFlow?
4. Learn how to deploy a model and have it in a production environment?
5. Learn the deep inside knowledge and fix TensorFlow issues?
And ultimately, are you trying to get a job where you can use TensorFlow? Or is this just for fun?
|
r/tensorflow
|
comment
|
r/tensorflow
|
2023-09-08
|
I am working on a data analytics project for my Year 12 assessment and I have run into a roadblock: my code is disagreeing with the documentation. I am getting an "unexpected keyword argument" error when calling model.train using the keras-segmentation library.
I looked at the documentation and the source code, and both of them say that model.train(callbacks=callbacks) (with the other params entered) should work, but it doesn't. If anyone has any suggestions, that would be greatly appreciated.
If you want all of the code, I'd be happy to upload it to GitHub to comb through.
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-27-07
|
callbacks are supposed to be in a list
|
r/tensorflow
|
comment
|
r/tensorflow
|
2023-27-07
|
please elaborate, i entered it in the format and everything that the documentation wants
|
r/tensorflow
|
comment
|
r/tensorflow
|
2023-27-07
|
I think it's better if you share the code
|
r/tensorflow
|
comment
|
r/tensorflow
|
2023-27-07
|
Training Your First Neural Network in TensorFlow
[https://debuggercafe.com/training-your-first-neural-network-in-tensorflow/](https://debuggercafe.com/training-your-first-neural-network-in-tensorflow/)
https://preview.redd.it/9nbwg2f2pleb1.png?width=1000&format=png&auto=webp&s=b5002b28c574f07ab36c59e9413c281cfeb2c1f9
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-28-07
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-28-07
|
|
https://preview.redd.it/c65hshfbypeb1.png?width=1280&format=png&auto=webp&s=63958c2e3928a70a95c24922625d332e6f7d84e2
This is an image classification tutorial using Python, TensorFlow, and Keras with Convolutional Neural Networks (CNNs).
In this video, we'll learn how to use pre-trained models to classify images based on ResNet50 and MobileNet.
1. Introduction to image classification and CNNs.
2. Using TensorFlow and Keras for building the classification process.
3. Loading pre-trained models from the Keras application library (such as ResNet50 and MobileNet).
4. Explaining how to prepare a fresh image for classification, including resizing it to the model's shape and converting it to a batch of images using the NumPy expand_dims function.
5. Running the prediction process on the pre-trained models (ResNet50 and MobileNet) for the given image.
6. Comparing and analyzing the quality of predictions between the two models.
If you are interested in learning a modern Computer Vision course with a deep dive into TensorFlow, Keras and Pytorch, you can find it here: [http://bit.ly/3HeDy1V](http://bit.ly/3HeDy1V)
Perfect course for every computer vision enthusiast
A recommended book, [https://amzn.to/44GnlLW](https://amzn.to/44GnlLW) - "Make Your Own Neural Network - An In-depth Visual Introduction For Beginners"
The link for the video: [https://youtu.be/40_NC2Ahs_8](https://youtu.be/40_NC2Ahs_8)
I also shared the Python code in the video description.
Enjoy
Eran
#Python #TensorFlow #Deeplearning #convolutionalneuralnetwork #mobilenet #Resnet50
|
r/tensorflow
|
post
|
r/tensorflow
|
2023-28-07
|
Hello everyone. r/Tensorflow has been down for 9 months or so but it is back alive now. I hope we can grow this into a nice place to ask questions and get answers regarding implementation in Tensorflow. I was motivated to get this going by the strict and unforgiving nature of Stack Overflow. I'm hoping we can have a slightly more open and casual discussion environment here, while still having meaningful technical content.
|
r/tensorflow
|
post
|
r/tensorflow
|
2024-01-05
|
Thank God!! We are back again.
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-04-05
|
Cool 😎 awesome
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-05-05
|
Your input shape specification in the model is (1,). You want it to be (320,320,3) based on the error?
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-01-05
|
I’m sorry, I’m out of my element (mechanical engineer). Could you say it in another way that I might better understand?
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-01-05
|
This bit specifies the shape of the data going into your model. An image will be something like (rows,columns,features). The shape you have here would be nonsense in most cases, indicating that each input sample is a single number.
tf.keras.layers.Input(shape=(1,))
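A side-by-side sketch of what that difference means (the (320, 320, 3) shape is taken from the error discussed above, i.e. 320x320 RGB images; the shape argument excludes the batch dimension):
```python
import tensorflow as tf

scalar_input = tf.keras.layers.Input(shape=(1,))            # each sample: a single number
image_input  = tf.keras.layers.Input(shape=(320, 320, 3))   # each sample: a 320x320 RGB image
```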
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-01-05
|
MOI-TD - the first tech demonstration of an 'AI lab in space', in the making. 3 out of 6 are stacked up. The next few weeks are going to be crucial for the system as a whole. Launching on ISRO's PSLV POEM-4. A new platform to test your TensorFlow models in space :)
|
r/tensorflow
|
post
|
r/tensorflow
|
2024-02-05
|
Hey buddies,
I want to implement a TF-GNN where both inputs and outputs are graphs, i.e., I give the model a three-node graph with some attributes at nodes/edges and get as output the same 3-node graph with a single attribute per node. For instance, the three input nodes are three cities (attributes like population, boolean for is holiday, etc ) with their connecting roads as edges (attributes like trains scheduled for that day, etc.) and I get as output a "congestion" metric for each city.
Does anyone know about papers/tutorials with such implementation? Not sure if it's something available. So far I've only found graph classification or single-attribute regression.
Thanks!
|
r/tensorflow
|
post
|
r/tensorflow
|
2024-03-05
|
The output of a GNN will naturally be features for all the nodes. Assuming your edges aren't also being learned, you just need to write a helper function to recreate the TF graph tensor from the node features. Presumably you created the graph tensor in the first place, so you just make it again except using the GNN output as your node features. That's if I understand you right.
But it sounds like you don't really need it in graph tensor form anyway? If you just want the answers, use the GNN output directly. You'll have one feature vector per node, shape= (nodes,features).
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-03-05
|
Thanks for your response! Honestly I need to implement a naïve pipeflow to understand well how it works. The point is I wasn't sure if the feature vector per node was able to keep the "structural relationship" between each node. Anyway, I'll go into the coding stuff, thanks again :)
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-03-05
|
When it comes out it'll just be (nodes, features) in shape. It's up to you to remember the order in which you provided the nodes and their relationships.
The GNN documentation is limited and confusing at this point. Honestly, I got what I have going by iterating with ChatGPT and experimenting. The examples they have are far too complex, in my opinion.
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-03-05
|
Hello everyone!
Lately I've been trying to fine-tune a multilingual BERT model. I always had it set to TensorFlow 2.8, but a few hours ago I decided to update it to TensorFlow 2.16.
The wait times per epoch were always around 30 minutes; however, since updating to TensorFlow 2.16 the training time per epoch has increased to over an hour. Is there an issue with my Python code, or is this expected?
Update:
Since I figured it might be important, this is probably the most important part (Tensorflow wise) of my code:
def create_bert_model(bert_model, MAX_LENGTH, NUM_CLASSES):
    input_ids_layer = tf.keras.layers.Input(shape=(MAX_LENGTH,), dtype=tf.int32, name='ids')
    attention_mask_layer = tf.keras.layers.Input(shape=(MAX_LENGTH,), dtype=tf.int32, name='mask')
    bert_output = bert_model(input_ids_layer, attention_mask_layer).last_hidden_state
    net = tf.keras.layers.Dropout(0.1)(bert_output)
    net = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(NUM_CLASSES, activation='softmax'))(net)
    return tf.keras.Model(inputs=[input_ids_layer, attention_mask_layer], outputs=net)

def compile_bert_model():
    optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5)
    loss = tf.keras.losses.CategoricalCrossentropy(from_logits=False)
    metrics = tf.metrics.CategoricalAccuracy()
    classifier_model.compile(optimizer=optimizer, loss=loss, metrics=[metrics])

def train_bert_model(epochs):
    classifier_model.fit(
        train_dataset,
        validation_data=validation_dataset,
        epochs=epochs,
        callbacks=tf.keras.callbacks.EarlyStopping(
            monitor='val_categorical_accuracy',
            mode='max',
            verbose=0,
            patience=3,
            restore_best_weights=True
        ))
|
r/tensorflow
|
post
|
r/tensorflow
|
2024-03-05
|
I don't have any particular insight, but the simple thing would be if you accidentally lost GPU support in the upgrade?
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-03-05
|
That's definitely part of the problem. I've just realised any version over 2.10 ain't great to use with Windows GPU's. Do you see any other issues with my code?
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-03-05
|
Hello,
I'm beginning with AI. I would like to ask if it's possible to train an AI for changing clothes. E.g.: I input a photo, and after that I need to provide some prompt to change, e.g., a jumper for a suit. If it's possible, could you tell me the sequence of what I have to do? Or what technologies do I have to use?
Thank you!
|
r/tensorflow
|
post
|
r/tensorflow
|
2024-04-05
|
As it takes both an image and a text command, it's a multi-modal model for sure. And trust me, it's not a beginner's project. You can do lots of stuff with the many libraries/APIs available out there. But if you want to stick with this field and understand it more deeply, it's important to know the basics. I would recommend you start by classifying images, then localization, and finally segmentation. In parallel you can start text summarization. I think one day it may give you the idea of why I suggested this approach.
Cheerio!
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-05-05
|
How do I manage LSTM hidden layer states in a TFLite model?
I got the following suggestion from ChatGPT, but input_details[1] is out of range
```
import numpy as np
import tensorflow as tf
from tensorflow.lite.python.interpreter import Interpreter

# Load the TFLite model
interpreter = Interpreter(model_path="your_tflite_model.tflite")
interpreter.allocate_tensors()

# Get input and output details
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Initialize LSTM state
initial_state = np.zeros((1, num_units))  # Adjust shape based on your LSTM configuration

def reset_lstm_state():
    # Reset LSTM state to initial state
    interpreter.set_tensor(input_details[1]['index'], initial_state)

# Perform inference
def inference(input_data):
    interpreter.set_tensor(input_details[0]['index'], input_data)
    interpreter.invoke()
    output_data = interpreter.get_tensor(output_details[0]['index'])
    return output_data

# Example usage
input_data = np.array(...)  # Input data, shape depends on your model
output_data = inference(input_data)
reset_lstm_state()  # Reset LSTM state after inference
```
|
r/tensorflow
|
post
|
r/tensorflow
|
2024-05-05
|
What are you trying to do? Do you have a specific model or architecture you are trying to run? TFLite doesn’t have state between inferences. Unless you are doing something a bit unique, you normally don’t need to manage any internal state yourself
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-05-05
|
I'm using a model with LSTM (RNN based) layers for time series prediction. So I should have hidden layers (memory units) which I want to keep updating as I input a series and reset before inputting a new series.
And it seems like TFLite supports LSTM, so it would be strange to me if I couldn't manage the hidden layers...
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-06-05
|
Does your model convert to tflite? Have you tried running inference and checking the results. You will input the series up until a point and it will predict the following values.
If you want to continually predict the next value, the naive solution is to run the entire series as input on each inference. The LSTM stores “memory” as it processes through the input series. It does not store memory between inferences. You as a developer, do not need to manage these.
The page on the tflite docs for rnn: https://www.tensorflow.org/lite/models/convert/rnn
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-06-05
|
Yes my model converts to tflite and inference runs fine.
I already expected tflite to correctly manage the LSTM memory when iterating through a series, but I wasn't figuring out how to reset it for a different series.
But it turns out I just found the answer in a LSTM tflite example. The method I was looking for was:
>tensorflow.lite.Interpreter.reset_all_variables()
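For anyone landing here later, a minimal usage sketch (model path hypothetical):
```python
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

# ... feed the steps of one series through set_tensor()/invoke() ...
interpreter.reset_all_variables()  # clear the LSTM's internal state before starting the next series
```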
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-06-05
|
Hey, I am relatively new to TensorFlow, although I have been coding for a few years now. After using prebuilt models a few times, I am attempting to train my own. But I get an error where there seems to be a ton of stuff that still references commands from TF1. I have used the conversion tool that updates these files so they work with TF2, but it still has a ton of errors, and it's kind of more than I can handle in terms of understanding what all needs to be changed and why. I hear that a report.txt should have been generated, but I cannot find it in the folder tree anywhere. For added context, I am attempting to train from this model: 'ssd_mobilenet_v2_320x320_coco17_tpu-8'. I have TF 2.11.1 and all the necessary pip packages already installed in my virtual environment. Any help, advice, or even a link to an up-to-date tutorial that might be better than what I have would be greatly appreciated. Thanks in advance!
|
r/tensorflow
|
post
|
r/tensorflow
|
2024-06-05
|
Not sure how much code you are talking about, but I'd sure be inclined to just write it again in TF2 rather than looking for a utility to convert. TF1 is pretty ancient at this point.
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-06-05
|
I tried this a few years ago and it turned out the object detection API depends on TF1 tflite features that were never ported to TF2. Even Google used the deprecated TF1 object detection API for their new spaghettinet model.
Feel free to message me if you need help with TF1 training.
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-06-05
|
I'm trying to run a project that uses tensorflow and keras among other things. I used :
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing.image import img_to_array
Neither of these work, and upon inspection I found that load_model is defined way deep inside a file called saving_api, the path for which was /keras/src/saving/saving_api.py.
My question is: why has this changed, or am I missing something? I looked for a keras folder in tensorflow but there isn't one. There's a python folder inside the tensorflow folder, inside which there's a keras folder, but even there I didn't find a models folder. Is there a guide for the new import structure? Help would be greatly appreciated, and if anything I explained was unclear please let me know and I can elaborate further.
|
r/tensorflow
|
post
|
r/tensorflow
|
2024-06-05
|
My imports look like this for most every tensorflow related thing:
`import tensorflow.keras as keras`
`import tensorflow as tf`
`import tensorflow.keras.backend as K`
I don't personally use load_model, but if I needed it I'd say "keras.models.load_model".
FWIW.
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-06-05
|
That's what I'm saying: I tried this at first and it said the module doesn't exist. I looked in the file explorer and there isn't even a keras folder in the tensorflow folder. There's a python folder in tensorflow, inside of which there's a keras folder, and that one still doesn't have a models file. I'm really confused.
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-06-05
|
Which version of tensorflow are you using btw?
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-06-05
|
I see. So your install itself is not working. You're on Windows, it sounds like? Maybe someone else can chime in, since I'm a Linux user. I just pip install things into a virtual environment. Can you import tensorflow and just not the keras stuff? That's been around since 2.0, I believe.
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-06-05
|
No I'm on ubuntu 22.04 lts.
Both tf as well as keras stuff is messed up. Should I remove all of it and create a virtual environment and redo it?
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-06-05
|
Sounds like it. I personally like anaconda virtual environments, then I use pip to install most stuff. But a plain python venv should work.
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-06-05
|
Thank you!
|
r/tensorflow
|
comment
|
r/tensorflow
|
2024-06-05
|
Hello there,
I am currently developing an android application for research purposes that needs to detect the vehicle type (car, bike, train, by foot) based on sensory data (accelerometer, GPS, etc.) from the smartphone. The purpose of this application is not the creation of the model itself, but rather only a means to an end. Therefore, I would love to use an already created solution if there is any. Is anyone aware of such a model? Any help would be tremendously appreciated.
|
r/tensorflow
|
post
|
r/tensorflow
|
2024-06-05
|