darshil3011 commited on
Commit
632cae2
·
verified ·
1 Parent(s): 865d4c1

Update prompts_and_chema.py

Files changed (1)
  1. prompts_and_chema.py +14 -0
prompts_and_chema.py CHANGED
@@ -94,3 +94,17 @@ get_txt_files = {
  "custom_video_source": "/content/text_files/016_custom_video_source.txt",
  }
 
 
+ intent_description_map = {
+ "hello_world_of_pysdk": "Basic setup: creating an account on the DeGirum AI Hub, generating a dg_token, and running a simple example where you load a model from the DeGirum model zoo, pass it an example image, and view the output. Select this intent for any question about setup, what PySDK is, generating tokens, getting started with PySDK, or a beginner-friendly example.",
+ "single_model_inference": "Running an image through a single model, either from the DeGirum AI Hub model zoo or a local model zoo. A basic example where you pass an image to the model and get results. The model can be a segmentation, object detection, classification, or pose estimation model, and the result type differs for each. Any question about a simple example that runs a single AI model on images only, choosing an inference device, or displaying and printing results should fall under this intent.",
+ "running_yolo_models": "This intent covers running different flavours of YOLO models (e.g. YOLOv5, YOLOv8, YOLOv11) for object detection, pose estimation, classification, etc. It is similar to the previous intent but specific to YOLO models. It also includes selecting different inference options, such as cloud or local, and visualizing the output on images. If the user asks about an image use-case, e.g. face detection or car detection, that can be fulfilled with any of these models in any flavour (COCO dataset), select this intent.",
+ "model_pipelining": "This intent covers running two or more models one after the other in a pipeline to achieve a goal. For example, in an emotion classification use-case, we first run a face detection model to extract faces and then run an emotion classification model on them. If the query or its use-case involves running two or more models in pipeline mode, select this intent.",
+ "class_filtering": "If a model has multiple classes but you only want to detect a particular class, DeGirum PySDK provides a way to do that. Select this intent for any query where the user wants to detect only a particular subset of the classes the model is trained on, e.g. using a COCO model but detecting only person and car.",
+ "overlay_effects": "DeGirum PySDK supports multiple overlay effects on model results, such as blurring the detected object, changing the bounding box colour or thickness, changing the size and position of labels, changing the font, or showing probabilities. Use this intent for any query or use-case that requires some kind of overlay effect.",
+ "running_inference_on_video": "This is similar to intents #2 and #3, which cover running model inference on images, but here we run inference on videos. The video can be a saved video file, a webcam stream, or an RTSP URL. Select this intent for any query about running inference on a live camera feed or saved video files. Most real-world use-cases run inference on videos, but while prototyping a user may need to test on images, so choose between this intent and intents #2 and #3 carefully.",
+ "person_re_identification_or_extracting_embeddings": "Some use-cases require extracting embeddings from an image and storing them or using them to calculate similarity. For example, in a person re-identification use-case, we extract a user's face embeddings and match them against stored embeddings to determine whether it is the same person. If the user's query or use-case involves this kind of mechanism, select this intent.",
+ "zone_counting": "Some use-cases involve selecting an ROI for detection and then counting objects, e.g. counting the number of people in a zone. DeGirum provides a specific function that lets you draw a polygon as the ROI, counts objects of whichever class you specify, and returns the count. If the user's query falls under this use-case, select this intent.",
+ "custom_video_source": "Sometimes the user may want to modify the incoming video source (be it a live stream such as RTSP or a saved video file) and apply pre-processing such as rotating, cropping, or changing colour channels, perhaps to improve detection. DeGirum PySDK lets the user modify the incoming video stream before it is passed to the model for prediction. Any use-case or query about modifying the video source falls under this intent.",
+ "not_supported": "Anything outside the above intents is something DeGirum PySDK does not support. Don't force a query into an intent; if we don't support something, it is better to tell the user it is not supported. This intent exists specifically to avoid hallucinating and giving the user made-up information about unsupported features."
+ }
+