{"id":7437,"date":"2021-05-03T17:36:09","date_gmt":"2021-05-03T22:36:09","guid":{"rendered":"https:\/\/ebenezertechs.com\/como-utilizar-opencv-mobilenet-ssd-caffe-ssd-deteccion-de-objetos\/"},"modified":"2025-04-08T17:22:08","modified_gmt":"2025-04-08T22:22:08","slug":"como-utilizar-opencv-mobilenet-ssd-caffe-ssd-deteccion-de-objetos","status":"publish","type":"post","link":"https:\/\/ebenezertechs.com\/es\/como-utilizar-opencv-mobilenet-ssd-caffe-ssd-deteccion-de-objetos\/","title":{"rendered":"MobileNet SSD object detection using the OpenCV 3.4.1 DNN module"},"content":{"rendered":"<p><div class=\"fusion-fullwidth fullwidth-box fusion-builder-row-1 nonhundred-percent-fullwidth non-hundred-percent-height-scrolling\" style=\"--awb-border-radius-top-left:0px;--awb-border-radius-top-right:0px;--awb-border-radius-bottom-right:0px;--awb-border-radius-bottom-left:0px;--awb-flex-wrap:wrap;\" ><div class=\"fusion-builder-row fusion-row\"><div class=\"fusion-layout-column fusion_builder_column fusion-builder-column-0 fusion_builder_column_1_1 1_1 fusion-one-full fusion-column-first fusion-column-last\" style=\"--awb-bg-size:cover;\"><div class=\"fusion-column-wrapper fusion-flex-column-wrapper-legacy\"><div class=\"fusion-title title fusion-title-1 fusion-title-text fusion-title-size-one\" style=\"--awb-margin-top:0px;--awb-margin-right:0px;--awb-margin-left:0px;\"><h1 class=\"fusion-title-heading title-heading-left fusion-responsive-typography-calculated\" style=\"margin:0;--fontSize:50;line-height:1.1;\">MobileNet SSD object detection using the OpenCV 3.4.1 DNN module<\/h1><span class=\"awb-title-spacer\"><\/span><div class=\"title-sep-container\"><div class=\"title-sep sep- sep-solid\" style=\"border-color:#edeef2;\"><\/div><\/div><\/div><div class=\"fusion-clearfix\"><\/div><\/div><\/div><\/div><\/div><div class=\"fusion-fullwidth fullwidth-box fusion-builder-row-2 fusion-flex-container nonhundred-percent-fullwidth 
non-hundred-percent-height-scrolling\" style=\"--awb-border-radius-top-left:0px;--awb-border-radius-top-right:0px;--awb-border-radius-bottom-right:0px;--awb-border-radius-bottom-left:0px;--awb-flex-wrap:wrap;\" ><div class=\"fusion-builder-row fusion-row fusion-flex-align-items-flex-start fusion-flex-content-wrap\" style=\"max-width:1216.8px;margin-left: calc(-4% \/ 2 );margin-right: calc(-4% \/ 2 );\"><div class=\"fusion-layout-column fusion_builder_column fusion-builder-column-1 fusion_builder_column_1_1 1_1 fusion-flex-column\" style=\"--awb-bg-size:cover;--awb-width-large:100%;--awb-margin-top-large:0px;--awb-spacing-right-large:1.92%;--awb-margin-bottom-large:0px;--awb-spacing-left-large:1.92%;--awb-width-medium:100%;--awb-order-medium:0;--awb-spacing-right-medium:1.92%;--awb-spacing-left-medium:1.92%;--awb-width-small:100%;--awb-order-small:0;--awb-spacing-right-small:1.92%;--awb-spacing-left-small:1.92%;\"><div class=\"fusion-column-wrapper fusion-column-has-shadow fusion-flex-justify-content-flex-start fusion-content-layout-column\"><div class=\"fusion-text fusion-text-1 fusion-text-no-margin\" style=\"--awb-margin-bottom:50px;\"><p>This post demonstrates how to use the OpenCV 3.4.1 deep learning module with the MobileNet-SSD network for object detection.<\/p>\n<p>As of OpenCV 3.4+, the deep neural network (DNN) module is officially included. The DNN module can load pre-trained models from the most popular deep learning frameworks, including TensorFlow, Caffe, Darknet and Torch. Besides MobileNet-SSD, the following architectures are also supported in OpenCV 3.4.1:<\/p>\n<ul>\n<li>GoogLeNet<\/li>\n<li>YOLO<\/li>\n<li>SqueezeNet<\/li>\n<li>Faster R-CNN<\/li>\n<li>ResNet<\/li>\n<li>This API supports both C++ and Python. 
:-)<\/li>\n<\/ul>\n<\/div><\/div><\/div><\/div><\/div><div class=\"fusion-fullwidth fullwidth-box fusion-builder-row-3 fusion-flex-container nonhundred-percent-fullwidth non-hundred-percent-height-scrolling\" style=\"--awb-border-radius-top-left:0px;--awb-border-radius-top-right:0px;--awb-border-radius-bottom-right:0px;--awb-border-radius-bottom-left:0px;--awb-flex-wrap:wrap;\" ><div class=\"fusion-builder-row fusion-row fusion-flex-align-items-flex-start fusion-flex-content-wrap\" style=\"max-width:1216.8px;margin-left: calc(-4% \/ 2 );margin-right: calc(-4% \/ 2 );\"><div class=\"fusion-layout-column fusion_builder_column fusion-builder-column-2 fusion_builder_column_1_1 1_1 fusion-flex-column\" style=\"--awb-bg-size:cover;--awb-width-large:100%;--awb-margin-top-large:0px;--awb-spacing-right-large:1.92%;--awb-margin-bottom-large:0px;--awb-spacing-left-large:1.92%;--awb-width-medium:100%;--awb-order-medium:0;--awb-spacing-right-medium:1.92%;--awb-spacing-left-medium:1.92%;--awb-width-small:100%;--awb-order-small:0;--awb-spacing-right-small:1.92%;--awb-spacing-left-small:1.92%;\"><div class=\"fusion-column-wrapper fusion-column-has-shadow fusion-flex-justify-content-flex-start fusion-content-layout-column\"><div class=\"fusion-title title fusion-title-2 fusion-title-text fusion-title-size-two\" style=\"--awb-margin-top:0px;--awb-margin-right:0px;--awb-margin-left:0px;\"><div class=\"title-sep-container title-sep-container-left fusion-no-large-visibility fusion-no-medium-visibility fusion-no-small-visibility\"><div class=\"title-sep sep- sep-solid\" style=\"border-color:#edeef2;\"><\/div><\/div><span class=\"awb-title-spacer fusion-no-large-visibility fusion-no-medium-visibility fusion-no-small-visibility\"><\/span><h2 class=\"fusion-title-heading title-heading-left fusion-responsive-typography-calculated\" style=\"margin:0;--fontSize:40;line-height:1.0;\">Code walkthrough<\/h2><span class=\"awb-title-spacer\"><\/span><div class=\"title-sep-container 
title-sep-container-right\"><div class=\"title-sep sep- sep-solid\" style=\"border-color:#edeef2;\"><\/div><\/div><\/div><\/div><\/div><\/div><\/div><div class=\"fusion-fullwidth fullwidth-box fusion-builder-row-4 fusion-flex-container nonhundred-percent-fullwidth non-hundred-percent-height-scrolling\" style=\"--awb-border-radius-top-left:0px;--awb-border-radius-top-right:0px;--awb-border-radius-bottom-right:0px;--awb-border-radius-bottom-left:0px;--awb-flex-wrap:wrap;\" ><div class=\"fusion-builder-row fusion-row fusion-flex-align-items-flex-start fusion-flex-content-wrap\" style=\"max-width:1216.8px;margin-left: calc(-4% \/ 2 );margin-right: calc(-4% \/ 2 );\"><div class=\"fusion-layout-column fusion_builder_column fusion-builder-column-3 fusion_builder_column_1_1 1_1 fusion-flex-column\" style=\"--awb-bg-size:cover;--awb-width-large:100%;--awb-margin-top-large:0px;--awb-spacing-right-large:1.92%;--awb-margin-bottom-large:0px;--awb-spacing-left-large:1.92%;--awb-width-medium:100%;--awb-order-medium:0;--awb-spacing-right-medium:1.92%;--awb-spacing-left-medium:1.92%;--awb-width-small:100%;--awb-order-small:0;--awb-spacing-right-small:1.92%;--awb-spacing-left-small:1.92%;\"><div class=\"fusion-column-wrapper fusion-column-has-shadow fusion-flex-justify-content-flex-start fusion-content-layout-column\"><div class=\"fusion-text fusion-text-2 fusion-text-no-margin\" style=\"--awb-margin-bottom:50px;\"><p>In this section, we will build the Python script for object detection and explain: How do we load our deep neural network with OpenCV 3.4? How do we pass an image to the network? And how do we make a prediction with MobileNet using the dnn module in OpenCV?<\/p>\n<p>We use a pre-trained MobileNet taken from <a href=\"https:\/\/github.com\/chuanqi305\/MobileNet-SSD\/\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/github.com\/chuanqi305\/MobileNet-SSD\/<\/a>, which was trained with the Caffe-SSD framework. This model can detect 20 classes.<\/p>\n<p>Load and predict with the deep neural network module<\/p>\n<p>First, we create a new Python file, mobilenet_ssd_python.py, and add the following code, which imports the libraries:<\/p>\n<\/div><style type=\"text\/css\" scopped=\"scopped\">.fusion-syntax-highlighter-1 > .CodeMirror, .fusion-syntax-highlighter-1 > .CodeMirror .CodeMirror-gutters {background-color:var(--awb-color1);}.fusion-syntax-highlighter-1 > .CodeMirror .CodeMirror-gutters { background-color: var(--awb-color2); }.fusion-syntax-highlighter-1 > .CodeMirror .CodeMirror-linenumber { color: var(--awb-color8); }<\/style><div class=\"fusion-syntax-highlighter-container fusion-syntax-highlighter-1 fusion-syntax-highlighter-theme-dark\" style=\"opacity:0;margin-top:0px;margin-right:0px;margin-bottom:0px;margin-left:0px;font-size:14px;border-width:1px;border-style:solid;border-color:#2c5c7c;\"><div class=\"syntax-highlighter-copy-code\"><span class=\"syntax-highlighter-copy-code-title\" data-id=\"fusion_syntax_highlighter_1\" style=\"font-size:14px;\">Copy to Clipboard<\/span><\/div><label for=\"fusion_syntax_highlighter_1\" class=\"screen-reader-text\">Syntax Highlighter<\/label><textarea class=\"fusion-syntax-highlighter-textarea\" id=\"fusion_syntax_highlighter_1\" data-readOnly=\"nocursor\" data-lineNumbers=\"1\" data-lineWrapping=\"\" data-theme=\"oceanic-next\" data-mode=\"text\/css\"># Import the necessary libraries\nimport numpy as np\nimport argparse\nimport cv2\n\n# Next, add the command line arguments:\n\n# construct the argument 
parser\nparser = argparse.ArgumentParser(\n    description='Script to run the MobileNet-SSD object detection network')\nparser.add_argument(\"--video\", help=\"path to video file. If empty, the camera's stream will be used\")\nparser.add_argument(\"--prototxt\", default=\"MobileNetSSD_deploy.prototxt\",\n                    help='Path to the network definition file: '\n                         'MobileNetSSD_deploy.prototxt for the Caffe model')\nparser.add_argument(\"--weights\", default=\"MobileNetSSD_deploy.caffemodel\",\n                    help='Path to the weights file: '\n                         'MobileNetSSD_deploy.caffemodel for the Caffe model')\nparser.add_argument(\"--thr\", default=0.2, type=float, help=\"confidence threshold to filter out weak detections\")\nargs = parser.parse_args()\n<\/textarea><\/div><div class=\"fusion-text fusion-text-3 fusion-text-no-margin\" style=\"--awb-margin-top:50px;--awb-margin-bottom:50px;\"><p>The code above defines the following arguments:<\/p>\n<ul>\n<li>video: path to a video file.<\/li>\n<li>prototxt: the network definition file (.prototxt).<\/li>\n<li>weights: the network weights file (.caffemodel).<\/li>\n<li>thr: the confidence threshold.<\/li>\n<\/ul>\n<p>Next, we define the labels for the classes of our MobileNet-SSD network.<\/p>\n<\/div><style type=\"text\/css\" scopped=\"scopped\">.fusion-syntax-highlighter-2 > .CodeMirror, .fusion-syntax-highlighter-2 > .CodeMirror .CodeMirror-gutters {background-color:var(--awb-color1);}.fusion-syntax-highlighter-2 > .CodeMirror .CodeMirror-gutters { background-color: var(--awb-color2); }.fusion-syntax-highlighter-2 > .CodeMirror .CodeMirror-linenumber { color: var(--awb-color8); }<\/style><div class=\"fusion-syntax-highlighter-container fusion-syntax-highlighter-2 fusion-syntax-highlighter-theme-dark\" 
style=\"opacity:0;margin-top:0px;margin-right:0px;margin-bottom:0px;margin-left:0px;font-size:14px;border-width:1px;border-style:solid;border-color:#2c5c7c;\"><div class=\"syntax-highlighter-copy-code\"><span class=\"syntax-highlighter-copy-code-title\" data-id=\"fusion_syntax_highlighter_2\" style=\"font-size:14px;\">Copy to Clipboard<\/span><\/div><label for=\"fusion_syntax_highlighter_2\" class=\"screen-reader-text\">Syntax Highlighter<\/label><textarea class=\"fusion-syntax-highlighter-textarea\" id=\"fusion_syntax_highlighter_2\" data-readOnly=\"nocursor\" data-lineNumbers=\"1\" data-lineWrapping=\"\" data-theme=\"oceanic-next\" data-mode=\"text\/css\"># Labels of the network\nclassNames = { 0: 'background',\n    1: 'aeroplane', 2: 'bicycle', 3: 'bird', 4: 'boat',\n    5: 'bottle', 6: 'bus', 7: 'car', 8: 'cat', 9: 'chair',\n    10: 'cow', 11: 'diningtable', 12: 'dog', 13: 'horse',\n    14: 'motorbike', 15: 'person', 16: 'pottedplant',\n    17: 'sheep', 18: 'sofa', 19: 'train', 20: 'tvmonitor' }<\/textarea><\/div><div class=\"fusion-text fusion-text-4 fusion-text-no-margin\" style=\"--awb-margin-top:50px;--awb-margin-bottom:50px;\"><p>Next, we open the video file or the capture device, depending on the arguments, and also load the Caffe model.<\/p>\n<\/div><style type=\"text\/css\" scopped=\"scopped\">.fusion-syntax-highlighter-3 > .CodeMirror, .fusion-syntax-highlighter-3 > .CodeMirror .CodeMirror-gutters {background-color:var(--awb-color1);}.fusion-syntax-highlighter-3 > .CodeMirror .CodeMirror-gutters { background-color: var(--awb-color2); }.fusion-syntax-highlighter-3 > .CodeMirror .CodeMirror-linenumber { color: var(--awb-color8); }<\/style><div class=\"fusion-syntax-highlighter-container fusion-syntax-highlighter-3 fusion-syntax-highlighter-theme-dark\" style=\"opacity:0;margin-top:0px;margin-right:0px;margin-bottom:0px;margin-left:0px;font-size:14px;border-width:1px;border-style:solid;border-color:#2c5c7c;\"><div 
class=\"syntax-highlighter-copy-code\"><span class=\"syntax-highlighter-copy-code-title\" data-id=\"fusion_syntax_highlighter_3\" style=\"font-size:14px;\">Copy to Clipboard<\/span><\/div><label for=\"fusion_syntax_highlighter_3\" class=\"screen-reader-text\">Syntax Highlighter<\/label><textarea class=\"fusion-syntax-highlighter-textarea\" id=\"fusion_syntax_highlighter_3\" data-readOnly=\"nocursor\" data-lineNumbers=\"1\" data-lineWrapping=\"\" data-theme=\"oceanic-next\" data-mode=\"text\/css\"># Open the video file or the capture device, depending on the arguments.\nif args.video:\n    cap = cv2.VideoCapture(args.video)\nelse:\n    cap = cv2.VideoCapture(0)\n\n# Load the Caffe model\nnet = cv2.dnn.readNetFromCaffe(args.prototxt, args.weights)<\/textarea><\/div><div class=\"fusion-text fusion-text-5 fusion-text-no-margin\" style=\"--awb-margin-top:50px;--awb-margin-bottom:50px;\"><p>On line 36 we pass the prototxt and weights arguments to the function, which loads the network.<\/p>\n<p>Then we read the video frame by frame and pass each frame to the network for detection. 
With the DNN module it is easy to use our deep learning network in OpenCV and make predictions.<\/p>\n<\/div><style type=\"text\/css\" scopped=\"scopped\">.fusion-syntax-highlighter-4 > .CodeMirror, .fusion-syntax-highlighter-4 > .CodeMirror .CodeMirror-gutters {background-color:var(--awb-color1);}.fusion-syntax-highlighter-4 > .CodeMirror .CodeMirror-gutters { background-color: var(--awb-color2); }.fusion-syntax-highlighter-4 > .CodeMirror .CodeMirror-linenumber { color: var(--awb-color8); }<\/style><div class=\"fusion-syntax-highlighter-container fusion-syntax-highlighter-4 fusion-syntax-highlighter-theme-dark\" style=\"opacity:0;margin-top:0px;margin-right:0px;margin-bottom:0px;margin-left:0px;font-size:14px;border-width:1px;border-style:solid;border-color:#2c5c7c;\"><div class=\"syntax-highlighter-copy-code\"><span class=\"syntax-highlighter-copy-code-title\" data-id=\"fusion_syntax_highlighter_4\" style=\"font-size:14px;\">Copy to Clipboard<\/span><\/div><label for=\"fusion_syntax_highlighter_4\" class=\"screen-reader-text\">Syntax Highlighter<\/label><textarea class=\"fusion-syntax-highlighter-textarea\" id=\"fusion_syntax_highlighter_4\" data-readOnly=\"nocursor\" data-lineNumbers=\"1\" data-lineWrapping=\"\" data-theme=\"oceanic-next\" data-mode=\"text\/css\">while True:\n    # Capture frame-by-frame\n    ret, frame = cap.read()\n    frame_resized = cv2.resize(frame, (300, 300))  # resize frame for prediction<\/textarea><\/div><div class=\"fusion-text fusion-text-6 fusion-text-no-margin\" style=\"--awb-margin-top:50px;--awb-margin-bottom:50px;\"><p>On lines 40-41 we read a frame from the video and resize it to 300x300, because that is the input image size defined for the MobileNet-SSD model.<\/p>\n<\/div><style type=\"text\/css\" scopped=\"scopped\">.fusion-syntax-highlighter-5 > .CodeMirror, .fusion-syntax-highlighter-5 > .CodeMirror .CodeMirror-gutters 
{background-color:var(--awb-color1);}.fusion-syntax-highlighter-5 > .CodeMirror .CodeMirror-gutters { background-color: var(--awb-color2); }.fusion-syntax-highlighter-5 > .CodeMirror .CodeMirror-linenumber { color: var(--awb-color8); }<\/style><div class=\"fusion-syntax-highlighter-container fusion-syntax-highlighter-5 fusion-syntax-highlighter-theme-dark\" style=\"opacity:0;margin-top:0px;margin-right:0px;margin-bottom:0px;margin-left:0px;font-size:14px;border-width:1px;border-style:solid;border-color:#2c5c7c;\"><div class=\"syntax-highlighter-copy-code\"><span class=\"syntax-highlighter-copy-code-title\" data-id=\"fusion_syntax_highlighter_5\" style=\"font-size:14px;\">Copy to Clipboard<\/span><\/div><label for=\"fusion_syntax_highlighter_5\" class=\"screen-reader-text\">Syntax Highlighter<\/label><textarea class=\"fusion-syntax-highlighter-textarea\" id=\"fusion_syntax_highlighter_5\" data-readOnly=\"nocursor\" data-lineNumbers=\"1\" data-lineWrapping=\"\" data-theme=\"oceanic-next\" data-mode=\"text\/css\">    # MobileNet requires fixed dimensions for input image(s)\n    # so we have to ensure that it is resized to 300x300 pixels.\n    # Set a scale factor for the image, because objects in it come in different sizes.
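\n    # Aside: the scale factor 0.007843 used in blobFromImage below is 1\/127.5,\n    # so together with the mean subtraction (127.5, 127.5, 127.5) each input\n    # pixel is mapped from [0, 255] to roughly [-1, 1].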
\n    # We perform a mean subtraction (127.5, 127.5, 127.5) to normalize the input;\n    # after executing this command our \"blob\" now has the shape:\n    # (1, 3, 300, 300)\n    blob = cv2.dnn.blobFromImage(frame_resized, 0.007843, (300, 300), (127.5, 127.5, 127.5), False)\n    # Set the blob as the network input\n    net.setInput(blob)\n    # Run the forward pass to get the detections\n    detections = net.forward()<\/textarea><\/div><div class=\"fusion-text fusion-text-7 fusion-text-no-margin\" style=\"--awb-margin-top:50px;--awb-margin-bottom:50px;\"><p>With the lines above we obtain the network's prediction in three basic steps:<\/p>\n<ul>\n<li>Load an image<\/li>\n<li>Preprocess the image<\/li>\n<li>Set the image as the network input and get the prediction result.<\/li>\n<\/ul>\n<p>Using the DNN module is essentially the same for other networks and architectures, so we can replicate this for our own trained models.<\/p>\n<\/div><div class=\"fusion-title title fusion-title-3 fusion-title-text fusion-title-size-two\" style=\"--awb-margin-top:0px;--awb-margin-right:0px;--awb-margin-left:0px;\"><div class=\"title-sep-container title-sep-container-left fusion-no-large-visibility fusion-no-medium-visibility fusion-no-small-visibility\"><div class=\"title-sep sep- sep-solid\" style=\"border-color:#edeef2;\"><\/div><\/div><span class=\"awb-title-spacer fusion-no-large-visibility fusion-no-medium-visibility fusion-no-small-visibility\"><\/span><h2 class=\"fusion-title-heading title-heading-left fusion-responsive-typography-calculated\" style=\"margin:0;--fontSize:40;line-height:1.0;\">Visualizing detected objects and prediction confidence<\/h2><span class=\"awb-title-spacer\"><\/span><div class=\"title-sep-container title-sep-container-right\"><div class=\"title-sep sep- sep-solid\" style=\"border-color:#edeef2;\"><\/div><\/div><\/div><div class=\"fusion-text 
fusion-text-8 fusion-text-no-margin\" style=\"--awb-margin-bottom:50px;\"><p>After the steps above, new questions arise: How do we get the object's location with MobileNet? How do we know the predicted class? How confident is the prediction? Let's find out!<\/p>\n<p>We must read the detections array to obtain the prediction data from the neural network; the following code does this:<\/p>\n<\/div><style type=\"text\/css\" scopped=\"scopped\">.fusion-syntax-highlighter-6 > .CodeMirror, .fusion-syntax-highlighter-6 > .CodeMirror .CodeMirror-gutters {background-color:var(--awb-color1);}.fusion-syntax-highlighter-6 > .CodeMirror .CodeMirror-gutters { background-color: var(--awb-color2); }.fusion-syntax-highlighter-6 > .CodeMirror .CodeMirror-linenumber { color: var(--awb-color8); }<\/style><div class=\"fusion-syntax-highlighter-container fusion-syntax-highlighter-6 fusion-syntax-highlighter-theme-dark\" style=\"opacity:0;margin-top:0px;margin-right:0px;margin-bottom:0px;margin-left:0px;font-size:14px;border-width:1px;border-style:solid;border-color:#2c5c7c;\"><div class=\"syntax-highlighter-copy-code\"><span class=\"syntax-highlighter-copy-code-title\" data-id=\"fusion_syntax_highlighter_6\" style=\"font-size:14px;\">Copy to Clipboard<\/span><\/div><label for=\"fusion_syntax_highlighter_6\" class=\"screen-reader-text\">Syntax Highlighter<\/label><textarea class=\"fusion-syntax-highlighter-textarea\" id=\"fusion_syntax_highlighter_6\" data-readOnly=\"nocursor\" data-lineNumbers=\"1\" data-lineWrapping=\"\" data-theme=\"oceanic-next\" data-mode=\"text\/css\">    # Size of the resized frame (300x300)\n    cols = frame_resized.shape[1]\n    rows = frame_resized.shape[0]\n\n    # To get the class and location of each detected object,\n    # there is a fixed index for the class, location and\n    # confidence values in the detections array.\n    for i in range(detections.shape[2]):\n        confidence = detections[0, 0, i, 2]  # Confidence of the prediction\n        if confidence > args.thr:  # Filter out weak predictions\n            class_id = int(detections[0, 0, i, 1])  # Class label\n\n            # Object location\n            xLeftBottom = int(detections[0, 0, i, 3] * cols)\n            yLeftBottom = int(detections[0, 0, i, 4] * rows)\n            xRightTop   = int(detections[0, 0, i, 5] * cols)\n            yRightTop   = int(detections[0, 0, i, 6] * rows)<\/textarea><\/div><div class=\"fusion-text fusion-text-9 fusion-text-no-margin\" style=\"--awb-margin-top:50px;--awb-margin-bottom:50px;\"><p>We loop over the detections (line 62) to read the values. On line 63 we get the confidence of the prediction, and the next line filters it against the threshold value. On line 65 we get the class label, and on lines 68 to 71 we get the corners of the object's bounding box.<\/p>\n<p>With all the information about the predicted object, the last step is to display the results. 
The following code draws the detected object and shows its label and confidence on the frame.<\/p>\n<\/div><style type=\"text\/css\" scopped=\"scopped\">.fusion-syntax-highlighter-7 > .CodeMirror, .fusion-syntax-highlighter-7 > .CodeMirror .CodeMirror-gutters {background-color:var(--awb-color1);}.fusion-syntax-highlighter-7 > .CodeMirror .CodeMirror-gutters { background-color: var(--awb-color2); }.fusion-syntax-highlighter-7 > .CodeMirror .CodeMirror-linenumber { color: var(--awb-color8); }<\/style><div class=\"fusion-syntax-highlighter-container fusion-syntax-highlighter-7 fusion-syntax-highlighter-theme-dark\" style=\"opacity:0;margin-top:0px;margin-right:0px;margin-bottom:0px;margin-left:0px;font-size:14px;border-width:1px;border-style:solid;border-color:#2c5c7c;\"><div class=\"syntax-highlighter-copy-code\"><span class=\"syntax-highlighter-copy-code-title\" data-id=\"fusion_syntax_highlighter_7\" style=\"font-size:14px;\">Copy to Clipboard<\/span><\/div><label for=\"fusion_syntax_highlighter_7\" class=\"screen-reader-text\">Syntax Highlighter<\/label><textarea class=\"fusion-syntax-highlighter-textarea\" id=\"fusion_syntax_highlighter_7\" data-readOnly=\"nocursor\" data-lineNumbers=\"1\" data-lineWrapping=\"\" data-theme=\"oceanic-next\" data-mode=\"text\/css\">            # Factors to scale detections back to the original frame size\n            heightFactor = frame.shape[0]\/300.0\n            widthFactor = frame.shape[1]\/300.0\n            # Scale the object detection to the original frame\n            xLeftBottom = int(widthFactor * xLeftBottom)\n            yLeftBottom = int(heightFactor * yLeftBottom)\n            xRightTop   = int(widthFactor * xRightTop)\n            yRightTop   = int(heightFactor * yRightTop)\n            # Draw the bounding box of the object\n            cv2.rectangle(frame, (xLeftBottom, yLeftBottom), (xRightTop, yRightTop),\n                          (0, 255, 0))\n\n            # Draw the label and confidence of the prediction on the frame\n            if class_id in classNames:\n                label = classNames[class_id] + \": \" + str(confidence)\n                labelSize, baseLine = cv2.getTextSize(label, cv2.FONT_HERSHEY_SIMPLEX, 0.5, 1)\n\n                yLeftBottom = max(yLeftBottom, labelSize[1])\n                cv2.rectangle(frame, (xLeftBottom, yLeftBottom - labelSize[1]),\n                                     (xLeftBottom + labelSize[0], yLeftBottom + baseLine),\n                                     (255, 255, 255), cv2.FILLED)\n                cv2.putText(frame, label, (xLeftBottom, yLeftBottom),\n                            cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 0))\n\n                print(label)  # print the class and confidence\n\n    cv2.namedWindow(\"frame\", cv2.WINDOW_NORMAL)\n    cv2.imshow(\"frame\", frame)\n    if cv2.waitKey(1) >= 0:  # Exit when any key is pressed\n        break<\/textarea><\/div><div class=\"fusion-text fusion-text-10 fusion-text-no-margin\" style=\"--awb-margin-top:50px;--awb-margin-bottom:50px;\"><p>The last lines display the frame in a window that can be resized to fit the screen.<\/p>\n<\/div><div class=\"fusion-title title fusion-title-4 fusion-title-text fusion-title-size-two\" style=\"--awb-margin-top:0px;--awb-margin-right:0px;--awb-margin-left:0px;\"><div class=\"title-sep-container title-sep-container-left fusion-no-large-visibility fusion-no-medium-visibility fusion-no-small-visibility\"><div class=\"title-sep sep- sep-solid\" style=\"border-color:#edeef2;\"><\/div><\/div><span class=\"awb-title-spacer fusion-no-large-visibility fusion-no-medium-visibility fusion-no-small-visibility\"><\/span><h2 class=\"fusion-title-heading title-heading-left fusion-responsive-typography-calculated\" style=\"margin:0;--fontSize:40;line-height:1.0;\">Downloads<\/h2><span class=\"awb-title-spacer\"><\/span><div class=\"title-sep-container title-sep-container-right\"><div class=\"title-sep sep- sep-solid\" style=\"border-color:#edeef2;\"><\/div><\/div><\/div><div class=\"fusion-text 
fusion-text-11\"><p>The code and the trained MobileNet model can be downloaded from:<\/p>\n<p>https:\/\/github.com\/djmv\/MobilNet_SSD_opencv<\/p>\n<\/div><div class=\"fusion-video fusion-youtube\" style=\"--awb-max-width:600px;--awb-max-height:360px;--awb-align-self:center;--awb-width:100%;\"><div class=\"video-shortcode\"><div class=\"fluid-width-video-wrapper\" style=\"padding-top:60%;\" ><iframe title=\"YouTube video player 1\" src=\"https:\/\/www.youtube.com\/embed\/gjWO6BafPCQ?wmode=transparent&autoplay=0\" width=\"600\" height=\"360\" allowfullscreen allow=\"autoplay; fullscreen\"><\/iframe><\/div><\/div><\/div><\/div><\/div><\/div><\/div><\/p>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":7,"featured_media":6462,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"content-type":"","footnotes":""},"categories":[1],"tags":[],"class_list":["post-7437","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized"],"acf":[],"_links":{"self":[{"href":"https:\/\/ebenezertechs.com\/es\/wp-json\/wp\/v2\/posts\/7437","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ebenezertechs.com\/es\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ebenezertechs.com\/es\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ebenezertechs.com\/es\/wp-json\/wp\/v2\/users\/7"}],"replies":[{"embeddable":true,"href":"https:\/\/ebenezertechs.com\/es\/wp-json\/wp\/v2\/comments?post=7437"}],"version-history":[{"count":5,"href":"https:\/\/ebenezertechs.com\/es\/wp-json\/wp\/v2\/posts\/7437\/revisions"}],"predecessor-version":[{"id":8433,"href":"https:\/\/ebenezertechs.com\/es\/wp-json\/wp\/v2\/posts\/7437\/revisions\/8433"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/ebenezertechs.com\/es\/wp-json\/wp\/v2\/media\/6462"}],"wp:attachment":[{"href":"https:\/\/ebenezertechs
.com\/es\/wp-json\/wp\/v2\/media?parent=7437"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/ebenezertechs.com\/es\/wp-json\/wp\/v2\/categories?post=7437"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/ebenezertechs.com\/es\/wp-json\/wp\/v2\/tags?post=7437"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}