YOLO v8 dependency issues

Hello, I am trying to run a fine-tuned YOLO model from Hugging Face (foduucom/table-detection-and-extraction), but the dependency and user restrictions in the environment are making it very difficult to get it running on Code Workspaces.

Can anyone help me with this? I have installed both opencv-python-headless and the regular opencv-python and keep getting the same error:

from transformers import DetrForObjectDetection, DetrImageProcessor, TableTransformerForObjectDetection
from PIL import Image
from ultralyticsplus import YOLO, render_result
import torch
import base64
import warnings
import os

#ERROR/OUTPUT
/home/user/envs/default/lib/python3.11/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html
  from .autonotebook import tqdm as notebook_tqdm
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
Cell In[1], line 5
      3 from transformers import DetrForObjectDetection, DetrImageProcessor, TableTransformerForObjectDetection
      4 from PIL import Image
----> 5 from ultralyticsplus import YOLO, render_result
      6 import torch
      7 import base64

File ~/envs/default/lib/python3.11/site-packages/ultralyticsplus/__init__.py:1
----> 1 from .hf_utils import download_from_hub, push_to_hfhub
      2 from .ultralytics_utils import YOLO, postprocess_classify_output, render_result
      4 __version__ = "0.0.28"

File ~/envs/default/lib/python3.11/site-packages/ultralyticsplus/hf_utils.py:6
      3 from pathlib import Path
      5 import pandas as pd
----> 6 from sahi.utils.cv import read_image_as_pil
      8 from ultralyticsplus.other_utils import add_text_to_image
     10 LOGLEVEL = os.environ.get("LOGLEVEL", "INFO").upper()

File ~/envs/default/lib/python3.11/site-packages/sahi/__init__.py:3
      1 __version__ = "0.11.31"
----> 3 from sahi.annotation import BoundingBox, Category, Mask
      4 from sahi.auto_model import AutoDetectionModel
      5 from sahi.models.base import DetectionModel

File ~/envs/default/lib/python3.11/site-packages/sahi/annotation.py:12
      9 import numpy as np
     11 from sahi.utils.coco import CocoAnnotation, CocoPrediction
---> 12 from sahi.utils.cv import (
     13     get_bbox_from_coco_segmentation,
     14     get_bool_mask_from_coco_segmentation,
     15     get_coco_segmentation_from_bool_mask,
     16 )
     17 from sahi.utils.shapely import ShapelyAnnotation
     19 logger = logging.getLogger(__name__)

File ~/envs/default/lib/python3.11/site-packages/sahi/utils/cv.py:11
      8 import time
      9 from typing import Generator, List, Optional, Tuple, Union
---> 11 import cv2
     12 import numpy as np
     13 import requests

File ~/envs/default/lib/python3.11/site-packages/cv2/__init__.py:181
    176             if DEBUG: print("Extra Python code for", submodule, "is loaded")
    178     if DEBUG: print('OpenCV loader: DONE')
--> 181 bootstrap()

File ~/envs/default/lib/python3.11/site-packages/cv2/__init__.py:153, in bootstrap()
    149 if DEBUG: print("Relink everything from native cv2 module to cv2 package")
    151 py_module = sys.modules.pop("cv2")
--> 153 native_module = importlib.import_module("cv2")
    155 sys.modules["cv2"] = py_module
    156 setattr(py_module, "_native", native_module)

File ~/envs/default/lib/python3.11/importlib/__init__.py:126, in import_module(name, package)
    124             break
    125         level += 1
--> 126 return _bootstrap._gcd_import(name[level:], package, level)

ImportError: libGL.so.1: cannot open shared object file: No such file or directory
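(Side note on diagnosing this: the libGL.so.1 error comes from the full OpenCV build, which links against system GUI libraries; the headless build does not. A minimal sketch, using only the standard library, to see which OpenCV distributions pip metadata actually reports in the environment — if both the full and headless variants appear, the full build may be the one being loaded:)

```python
# List which OpenCV distributions are visible in this environment.
# If both opencv-python and opencv-python-headless show up, the full
# build (which needs libGL.so.1) may be shadowing the headless one.
from importlib import metadata

def installed_opencv_variants():
    """Return (name, version) pairs for any OpenCV distribution found."""
    candidates = [
        "opencv-python",
        "opencv-python-headless",
        "opencv-contrib-python",
        "opencv-contrib-python-headless",
    ]
    found = []
    for name in candidates:
        try:
            found.append((name, metadata.version(name)))
        except metadata.PackageNotFoundError:
            pass
    return found

print(installed_opencv_variants())
```

(Note this only inspects pip metadata; a conda-installed py-opencv may not show up here under these names.)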

Hey! Can you try installing opencv-python-headless through conda?

I tested this in my Workspace and could make the import cv2 line work with the following meta.yaml (which you can open by clicking on the “Manifest” button in the environment sidebar):

package:
  name: '{{ PACKAGE_NAME }}'
  version: '{{ PACKAGE_VERSION }}'
source:
  path: ../src
requirements:
  run:
  - ipykernel
  - pip
  - foundry-transforms-lib-python
  - pandas
  - py-opencv
  pip:
  - ultralyticsplus==0.0.28
  - ultralytics==8.0.43

Yes, I installed the headless version through conda.

Can you share your meta.yaml file? Or try what @nicornk has suggested above?

So I tried what @nicornk suggested and I'm getting the following error.

ERROR error: uninstall-no-record-file    
ERROR     
ERROR × Cannot uninstall opencv-python 4.12.0    
ERROR ╰─> The package's contents are unknown: no RECORD file was found for opencv-python.    
ERROR     
ERROR hint: The package was installed by conda. You should check if it can uninstall the package.    
 INFO Installing pip environment    
ERROR ❌ Failed to run command, exit status: "1"
  Hawk version 0.301.0
ERROR ❌  Hawk error: GenericError Failed to run command, exit status: "1"

  Maestro version 0.453.0
    
(default) user@localhost:~/repo$ 

Not sure how it is working on his side. But after digging through the installations, it seems that ultralytics and ultralyticsplus are pulling in torch 2.7.1 and opencv-python via pip, because ultralytics depends on those packages. Another issue is that “foduucom/table-detection-and-extraction” doesn’t work with torch 2.7.1; the only torch version I could make it work with on my local machine was 2.3.0. So what I did was install ultralytics and ultralyticsplus first and then try to downgrade torch, but Maestro is not allowing me to do this. I was able to downgrade it on my local machine and get the model working there.
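(One way to confirm which versions actually landed in the Workspace, before even touching the model — a minimal sketch; torch==2.3.0 is the version reported to work in this thread, and the ultralytics/ultralyticsplus pins come from the meta.yaml above:)

```python
# Compare installed versions against the pins that reportedly work.
import importlib.metadata as md

def check_pins(pins):
    """Return {package: (expected, installed)} for every mismatch or missing package."""
    mismatches = {}
    for pkg, expected in pins.items():
        try:
            installed = md.version(pkg)
        except md.PackageNotFoundError:
            installed = None  # package not visible to pip metadata at all
        if installed != expected:
            mismatches[pkg] = (expected, installed)
    return mismatches

print(check_pins({
    "torch": "2.3.0",
    "ultralytics": "8.0.43",
    "ultralyticsplus": "0.0.28",
}))
```

An empty dict means the environment matches the pins; anything else shows what the resolver actually installed.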