Passing a CV2 YOLOv8s video feed to a Django application


I have a pretrained YOLOv8s model that I fine-tuned on a custom dataset, and I wrote a Python class that uses cv2 to display the detection results. How can I pass the continuous video feed produced by the cv2/YOLOv8 loop to a Django web page?

Here is my current code:

from ultralytics import YOLO
from pathlib import Path
import cv2, onnx, onnxoptimizer, numpy, onnxruntime
import torch.onnx
import torchvision.models as models
from database.db import *
from pprint import pprint as pt
    
    
class Main_field: 
    
    def __init__(self, model, size, source, conf_):
        self.model = self.load_model(model)  # YOLO model object
        self.size = size                     # inference image size (imgsz)
        self.source = source                 # video source (path, URL or camera index)
        self.conf_ = conf_                   # confidence threshold
    
    def __call__(self): 
        self.process_video()
    
    def load_model(self, model):
        model.fuse()  # fuse Conv and BatchNorm layers for faster inference
        return model
    
    def process_video(self):
        cap = cv2.VideoCapture(self.source) 
        while True:
            ret, frame = cap.read() 
            if not ret: break
                
            results = self.model.predict(frame, conf=self.conf_, imgsz=self.size) 
    
            masks_dict = results[0].names                                        # class-id -> class-name mapping
            xyxys = results[0].boxes.cpu().numpy().xyxy                          # bounding boxes in xyxy format
            mask_ids = results[0].boxes.cpu().numpy().cls.astype(int).tolist()   # detected class ids
            masks = [masks_dict[itr] for itr in mask_ids]                        # detected class names
    
            db_output = [check_(local_list, str(itr)) for itr in mask_ids if itr]  # check_ and local_list come from the database.db import
                
            cv2.imshow("_________", results[0].plot())  # show the annotated frame
            pt(mask_ids)                                # print detected class ids
    
            if cv2.waitKey(1) & 0xFF == ord('q'): break
            
        # release resources once the loop exits
        cap.release()
        cv2.destroyAllWindows()

    @staticmethod
    def init_model(path: str) -> YOLO:
        return YOLO(path)
1 Answer

You need to use Django's streaming facilities to serve the video content (I am not a Django expert). First, turn your function into a generator:

    def process_video(self):
        cap = cv2.VideoCapture(self.source)
        while True:
            ret, frame = cap.read()
            if not ret: break

            results = self.model.predict(frame, conf=self.conf_, imgsz=self.size)

            # here is your way to annotate the frame
            annotated_frame = ...

            # here you convert the image to JPEG bytes; this needs "import io" and
            # "from PIL import Image", and since OpenCV frames are BGR you should
            # convert them to RGB before handing them to PIL
            image = Image.fromarray(annotated_frame)
            img_buffer = io.BytesIO()
            image.save(img_buffer, format='JPEG')

            img_bytes = img_buffer.getvalue()
            yield (b'--frame\r\n'
                   b'Content-Type: image/jpeg\r\n\r\n' + img_bytes + b'\r\n')
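
For reference, here is a more concrete version of that generator adapted to the Main_field class from the question. This is only a sketch: the method name stream_video is made up, and it uses cv2.imencode on the BGR frame returned by results[0].plot(), which avoids the PIL round-trip and the BGR/RGB conversion entirely:

    def stream_video(self):
        """Yield annotated frames as parts of an MJPEG multipart stream."""
        cap = cv2.VideoCapture(self.source)
        try:
            while True:
                ret, frame = cap.read()
                if not ret:
                    break

                results = self.model.predict(frame, conf=self.conf_, imgsz=self.size)
                annotated_frame = results[0].plot()  # BGR numpy array with boxes drawn

                # encode straight from BGR, no PIL needed
                ok, jpeg = cv2.imencode('.jpg', annotated_frame)
                if not ok:
                    continue

                yield (b'--frame\r\n'
                       b'Content-Type: image/jpeg\r\n\r\n' + jpeg.tobytes() + b'\r\n')
        finally:
            cap.release()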
            

Then you need to create a view (I am not sure exactly how it should look in Django; you will need StreamingHttpResponse from django.http and View from django.views):

class VideoStream(View):
    def get(self, request, *args, **kwargs):
        # process_video must be the generator above, called on an instance of your class
        return StreamingHttpResponse(process_video(), content_type='multipart/x-mixed-replace; boundary=frame')
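
To make the view reachable you also need a URL pattern. Here is a minimal sketch; the route string and module layout are assumptions, and the pattern name matches the 'your_response_root' used in the template below:

# urls.py (sketch): give the streaming view a name the template's {% url %} tag can resolve
from django.urls import path
from .views import VideoStream  # assumes the view above lives in views.py

urlpatterns = [
    path('video-stream/', VideoStream.as_view(), name='your_response_root'),
]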

After that, you need an element in your HTML page that renders the streaming response; it looks roughly like this:

    <div class="">
        <img src="{% url 'your_response_root' %}" width="1024" height="768">
    </div>
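
Note that 'your_response_root' in the {% url %} tag must match the name= of the URL pattern that points at VideoStream. Once the page is loaded, the img element keeps pulling JPEG parts from the multipart/x-mixed-replace stream, so the feed updates continuously in the browser.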