<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>AI-enhanced Video Surveillance &#8211; NVR IPCAMERA SECURITY</title>
	<atom:link href="https://www.nvripc.com/tag/ai-enhanced-video-surveillance/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.nvripc.com</link>
	<description>CCTV Help Desk Blog!</description>
	<lastBuildDate>Tue, 16 Jul 2024 19:17:49 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://www.nvripc.com/wp-content/uploads/2024/04/cropped-icons8-camera-91-32x32.png</url>
	<title>AI-enhanced Video Surveillance &#8211; NVR IPCAMERA SECURITY</title>
	<link>https://www.nvripc.com</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>FLEX AI Setup and Use Guide</title>
		<link>https://www.nvripc.com/flex-ai-setup-and-use-guide/</link>
					<comments>https://www.nvripc.com/flex-ai-setup-and-use-guide/#respond</comments>
		
		<dc:creator><![CDATA[M.Salih ASLAN]]></dc:creator>
		<pubDate>Tue, 16 Jul 2024 19:17:49 +0000</pubDate>
				<category><![CDATA[Guide]]></category>
		<category><![CDATA[How To]]></category>
		<category><![CDATA[acti]]></category>
		<category><![CDATA[Advanced AI Surveillance]]></category>
		<category><![CDATA[AI Camera]]></category>
		<category><![CDATA[AI Surveillance Innovation]]></category>
		<category><![CDATA[AI Video Processing]]></category>
		<category><![CDATA[AI Vision Systems]]></category>
		<category><![CDATA[AI-driven Security Systems]]></category>
		<category><![CDATA[AI-enhanced Video Surveillance]]></category>
		<category><![CDATA[AI-powered Surveillance]]></category>
		<category><![CDATA[Camera]]></category>
		<category><![CDATA[Camera firmware]]></category>
		<category><![CDATA[CCTV]]></category>
		<category><![CDATA[Device]]></category>
		<category><![CDATA[Device Manager]]></category>
		<category><![CDATA[Download]]></category>
		<category><![CDATA[firmware]]></category>
		<category><![CDATA[FLEX AI]]></category>
		<category><![CDATA[guide]]></category>
		<category><![CDATA[hanwha]]></category>
		<category><![CDATA[Hanwha AI Technology]]></category>
		<category><![CDATA[Hanwha Techwin]]></category>
		<category><![CDATA[Hanwha Vision]]></category>
		<category><![CDATA[hard drive]]></category>
		<category><![CDATA[How to]]></category>
		<category><![CDATA[Instructions]]></category>
		<category><![CDATA[Intelligent Monitoring]]></category>
		<category><![CDATA[Intelligent Video Solutions]]></category>
		<category><![CDATA[LTE]]></category>
		<category><![CDATA[pan]]></category>
		<category><![CDATA[password]]></category>
		<category><![CDATA[Real-time Monitoring]]></category>
		<category><![CDATA[Security AI Solutions]]></category>
		<category><![CDATA[Security Cameras]]></category>
		<category><![CDATA[Setup]]></category>
		<category><![CDATA[Smart Analytics]]></category>
		<category><![CDATA[Smart Security Solutions]]></category>
		<category><![CDATA[Step by Step]]></category>
		<category><![CDATA[Surveillance]]></category>
		<category><![CDATA[Surveillance Technology]]></category>
		<category><![CDATA[Video Analytics]]></category>
		<guid isPermaLink="false">https://www.nvripc.com/?p=9136</guid>

					<description><![CDATA[<p>FLEX AI Setup and Use Guide. FLEX AI enhances camera capabilities, enabling them to detect and track previously unidentifiable objects. While many cameras can detect and track people and specific vehicle types, identifying shopping carts, forklift trucks, hovercraft, or items on a conveyor belt is a different matter. FLEX AI: Get [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.nvripc.com/flex-ai-setup-and-use-guide/">FLEX AI Setup and Use Guide</a> first appeared on <a rel="nofollow" href="https://www.nvripc.com">NVR IPCAMERA SECURITY</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>FLEX AI Setup and Use Guide. FLEX AI enhances camera capabilities, enabling them to detect and track previously unidentifiable objects. While many cameras can detect and track people and specific vehicle types, identifying shopping carts, forklift trucks, hovercraft, or items on a conveyor belt is a different matter.</p>
<h3>FLEX AI: Get started</h3>
<h1>Summary:</h1>
<p>FLEX AI enhances camera capabilities, enabling them to detect and track previously unidentifiable objects. While many cameras can detect and track people and specific vehicle types, identifying shopping carts, forklift trucks, hovercraft, or items on a conveyor belt is a different matter.</p>
<p>With FLEX AI you can create custom object detection models for solid objects that can then be deployed to cameras, enabling you to detect objects that standard analytics does not already detect.</p>
<p>It is a cloud-based application, and its resulting models can be deployed to P series AI cameras.</p>
<h1>How it Works:</h1>
<p>FLEX AI operates on a subscription-based model, granting users comprehensive access to train, process, and download custom detection models for end customers. However, license allocation requires careful consideration: a separate FLEX AI license is needed for each end customer. This process is typically managed by a STEP Partner, who oversees the distribution of licenses to ensure effective deployment.</p>
<p>For actual deployment onto cameras, FLEX AI uses a perpetual licensing system, distinct from its subscription model. Each camera necessitates its own perpetual license to run FLEX AI models effectively. This approach ensures that users can deploy and utilize the models on their cameras without the constraints of a subscription-based licensing system.</p>
<p>In terms of training models, FLEX AI currently supports the MP4 format or tagged WAVE Sync videos for training.</p>
<p>NOTE: Each project can only train one model.</p>
<h1>Camera Compatibility:</h1>
<p>Refer to the following article: <a href="https://support.hanwhavisionamerica.com/hc/en-us/articles/26332037008027" target="_blank" rel="noopener">Which Cameras are Compatible With FLEX AI</a></p>
<p>Note: Each camera can only run one FLEX AI model, although it may run in parallel with WiseAI.</p>
<h1>The FLEX AI Flow:</h1>
<p>The general process is as follows:</p>
<ol>
<li>Ensure you have a <a href="https://support.hanwhavisionamerica.com/hc/en-us/articles/25726838758043" target="_blank" rel="noopener">license</a> to develop new models.</li>
<li><a href="https://support.hanwhavisionamerica.com/hc/en-us/articles/26063284950043" target="_blank" rel="noopener">Sign in to Cloud Portal</a> and select FLEX AI.</li>
<li>Create a <a href="https://support.hanwhavisionamerica.com/hc/en-us/articles/22011783189531" target="_blank" rel="noopener">new project</a>.</li>
<li><a href="https://support.hanwhavisionamerica.com/hc/en-us/articles/22011985070875" target="_blank" rel="noopener">Import</a> training clips.</li>
<li><a href="https://support.hanwhavisionamerica.com/hc/en-us/articles/22012124056347" target="_blank" rel="noopener">Annotate</a> the clips.</li>
<li>Send the model for training.</li>
<li>Evaluate the model, and <a href="https://support.hanwhavisionamerica.com/hc/en-us/articles/25181595748635" target="_blank" rel="noopener">improve</a> it if necessary.</li>
<li><a href="https://support.hanwhavisionamerica.com/hc/en-us/articles/25212530029723" target="_blank" rel="noopener">Download</a> the model to your computer.</li>
<li><a href="https://support.hanwhavisionamerica.com/hc/en-us/articles/25212530029723" target="_blank" rel="noopener">Upload</a> the model to a compatible camera.</li>
</ol>
<h1>FLEX AI: What do I need to train a model with FLEX AI?</h1>
<h1>Summary:</h1>
<p>This article covers what you need to build effective models with FLEX AI.</p>
<h1>Training Requirements and Recommendations:</h1>
<ul>
<li>A minimum of 20 annotated objects or training images for an effective detection model (at least 30 are recommended)</li>
<li>About 100 annotated objects or training images are recommended for a more robust model</li>
<li>Draw the bounding box around the object as tightly as possible (try not to leave any margin)</li>
<li>All target objects or training images must be labeled (i.e. frames chosen without annotation cannot be used in a data set)</li>
<li>Ensure you mark every object in a frame that is the same type as your target object (the algorithm may not perform well if similar objects within a frame are left out of the training for custom detection of your desired object)</li>
<li>Use video clips that contain several different angles, perspectives, and lighting of the object of interest (OOI), preferably from the same camera for which the training is being done</li>
<li>Use clips where the annotated object is not partially hidden so that the AI can learn what it really looks like (i.e. if you were teaching it what a person looks like, annotating just an arm sticking out from behind a tree is not helpful)</li>
</ul>
<p>NOTE: After you click Train, the data you have created is sent to the cloud and the algorithm is trained to detect your object. This process can take anywhere from 30 minutes to 1 hour, depending on the number of training images you included.</p>
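<p>The annotation-count thresholds above can be summed up in a small readiness check. The sketch below is illustrative only and not part of FLEX AI; the <code>annotation_quality</code> helper name is our own.</p>

```python
def annotation_quality(num_annotations: int) -> str:
    """Classify a training set by the thresholds in the recommendations above.

    From the guide: 20 annotated objects minimum, at least 30 recommended
    for an effective model, and about 100 for a more robust one.
    """
    if num_annotations < 20:
        return "insufficient"   # below the documented minimum
    if num_annotations < 30:
        return "minimum"        # usable, but more annotations recommended
    if num_annotations < 100:
        return "recommended"    # meets the recommended count
    return "robust"             # enough data for a more robust model

print(annotation_quality(25))   # minimum
```

<p>Running such a check before clicking Train avoids spending one of your limited training runs on an under-annotated data set.</p>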
<h1>Performance Issues:</h1>
<p>The following may cause object detection to have performance issues:</p>
<ul>
<li>Challenging background conditions due to low light or changing weather (ex: rain, snow, sunshine)</li>
<li>The object's size in the camera's field of view differs from the size used during training</li>
<li>Parts of the object are covered or obstructed</li>
<li>Objects are in high-density crowds or occluded</li>
<li>Stacking (ex: a model trained on single shopping carts will have issues detecting carts stacked in a corral)</li>
<li>The object is moving too fast</li>
</ul>
<h1>Camera Recommendations</h1>
<ul>
<li>Use footage from the camera, with the fixed field of view, on which the model will be deployed and used.</li>
<li>The Field of View (FoV) should show your object with a minimum size of 20px x 20px.</li>
<li>Installed cameras should be at a normal video surveillance view (CCTV view of 45 degrees or larger).</li>
</ul>
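<p>The 20px x 20px minimum can be checked against any detection box before relying on it. This is a generic sketch based on the guide's stated minimum, not a FLEX AI API.</p>

```python
def meets_min_size(box_w: int, box_h: int, min_px: int = 20) -> bool:
    """Check the guide's minimum on-screen object size of 20 x 20 pixels.

    box_w and box_h are the object's width and height in pixels as seen
    in the camera's field of view.
    """
    return box_w >= min_px and box_h >= min_px

print(meets_min_size(32, 18))  # False: height is below 20 px
```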
<h1>Limitations:</h1>
<p>FLEX AI has the following limitations:</p>
<ul>
<li>Cannot detect non-solid objects such as gas/vapor, liquid, smoke, etc.</li>
<li>Cannot deploy more than one custom model to a camera at a time. Detecting multiple objects currently requires multiple cameras.</li>
<li>Cannot support a single model that includes multiple objects (ex: hardhat + goggles + vest) and identify a missing item.</li>
<li>Cannot be used for identification or recognition of an object (ex: specific people, faces, animals, etc.)</li>
<li>Cannot be used for detecting the orientation of objects (ex: a cart that is facing left or right).</li>
<li>Cannot distinguish colors. FLEX AI does not take color into account.</li>
<li>Cannot be used for detecting fine-grained object classes (ex: golden retriever vs. dog, Tesla Model Y vs. vehicle).</li>
</ul>
<h1>Processing Times</h1>
<ul>
<li>FLEX AI is a cloud-based application and requires internet access and processing time.</li>
<li>The algorithm takes the images you've marked, trains the object detection model to detect your desired object, processes the model to show you simulated performance on the videos you've provided, and then packages the model to work on the camera.</li>
<li>Each step can take several minutes; the initial training can take about an hour (depending on the number of marked images you've provided).</li>
<li>When the training is completed, the model is processed to detect the object in the videos you've provided in the project's video library. When the videos become available, their buttons become active with the &#8220;Ready to View&#8221; status.</li>
<li>The model is also packaged for the camera, and the download button becomes active when the file is ready.</li>
</ul>
<h1>FLEX AI: How do I create and manage FLEX AI projects?</h1>
<h1>Summary:</h1>
<p>This article provides instructions for creating and managing FLEX AI projects.</p>
<h1>Step By Step Guide:</h1>
<h2>Creating New Projects</h2>
<p>To create a new FLEX AI project:</p>
<p>1. On the Project screen, click the plus (+) button to create a new project.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2024/07/light_projects_-no-projects-655w172h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/light_projects_-no-projects-655w172h.png" /></a></p>
<p>2. Name your project after the detection type.</p>
<p><a href="https://www.nvripc.com/wp-content/uploads/2024/07/26337875651995-642w323h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/26337875651995-642w323h.png" /></a></p>
<p>NOTE:<br />
Project names must be unique.<br />
Each project can only consist of one object detection type.</p>
<h2>Managing Projects</h2>
<p>Here is an example of multiple projects being worked on in parallel.</p>
<p><a href="https://www.nvripc.com/wp-content/uploads/2024/07/26337517408667-721w326h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/26337517408667-721w326h.png" /></a></p>
<p>To search for a FLEX AI project:</p>
<ol>
<li>Use the Search box to filter the displayed projects.</li>
<li>Apply sorting, as needed.</li>
<li>Click on a project to open it.</li>
</ol>
<p>Existing projects will indicate their current status:</p>
<ul>
<li>Untrained &#8211; the model is yet to be trained for the first time</li>
<li>With annotations &#8211; the model has been trained, reannotated, and is awaiting retraining</li>
<li>Training in Progress &#8211; the model is in the process of being trained</li>
<li>Trained &#8211; the model has been trained</li>
</ul>
<p><a href="https://www.nvripc.com/wp-content/uploads/2024/07/status-668w137h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/status-668w137h.png" /></a></p>
<h1>FLEX AI: How do I add a video clip for FLEX AI usage?</h1>
<h1>Summary:</h1>
<p>This article describes the process for uploading training video clips to train a FLEX AI model.</p>
<h1>Step By Step Guide:</h1>
<h2>Uploading a video</h2>
<ol>
<li>Log in to your account.</li>
<li>Select the project.</li>
<li>Drag your MP4 video file to the Video Library field or click Hard Drive and select an MP4 file.</li>
</ol>
<p><a href="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-04-25-at-10-56-54-am.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-04-25-at-10-56-54-am.png" /></a></p>
<p>NOTE: The Video Library drawer can be expanded and collapsed by clicking the arrow.</p>
<h2>Pulling bookmarked videos directly from WAVE Sync</h2>
<p>NOTE: First tag WAVE Sync bookmarks with &#8220;flex_ai&#8221; to make them appear in FLEX AI.</p>
<h3>Tagging WAVE Sync videos</h3>
<p>1. Sign in to your WAVE account.</p>
<p>2. Add a bookmark.</p>
<h3><a href="https://www.nvripc.com/wp-content/uploads/2024/07/add-bookmark-671w.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/add-bookmark-671w.png" /></a></h3>
<p>3. Tag your bookmark with &#8220;flex_ai&#8221;.</p>
<p><a href="https://www.nvripc.com/wp-content/uploads/2024/07/label-bookmark-655w423h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/label-bookmark-655w423h.png" /></a></p>
<h3>Importing WAVE Sync clips</h3>
<p>1. Click WAVE Sync and enter your user ID and password.</p>
<p><a href="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-06-05-at-11-37-31-am-690w446h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-06-05-at-11-37-31-am-690w446h.png" /></a></p>
<p>2. Select your system from the dropdown menu.</p>
<p><a href="https://www.nvripc.com/wp-content/uploads/2024/07/add1-639w413h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/add1-639w413h.png" /></a></p>
<p><a href="https://www.nvripc.com/wp-content/uploads/2024/07/add2-644w416h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/add2-644w416h.png" /></a></p>
<p>3. Select the checkbox for each WAVE Sync Bookmark you would like to import.</p>
<h2>Recommendations</h2>
<ul>
<li>Use videos that are less than 10 minutes long.</li>
<li>Multiple videos can be uploaded at once, with a maximum file size of 500 MB each.</li>
<li>Use the Video Library to rename your video, see its status, or delete it.</li>
<li>More angles and perspectives of your object will increase the accuracy of the AI model.</li>
</ul>
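<p>The recommendations above translate into a simple pre-upload check. The sketch below is illustrative and not part of FLEX AI; the caller must supply the clip duration (e.g. from a media probe), since reading MP4 metadata is outside this sketch's scope.</p>

```python
import os

MAX_CLIP_BYTES = 500 * 1024 * 1024   # 500 MB per-file limit from the guide
MAX_CLIP_SECONDS = 10 * 60           # clips under 10 minutes are recommended

def clip_upload_issues(path: str, duration_s: float) -> list[str]:
    """Return any reasons a clip violates the upload recommendations."""
    issues = []
    if not path.lower().endswith(".mp4"):
        issues.append("not an MP4 file")
    if os.path.getsize(path) > MAX_CLIP_BYTES:
        issues.append("larger than 500 MB")
    if duration_s >= MAX_CLIP_SECONDS:
        issues.append("10 minutes or longer")
    return issues
```

<p>An empty list means the clip satisfies every recommendation checked here.</p>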
<h1>FLEX AI: How do I annotate objects?</h1>
<h1>Summary:</h1>
<p>This article describes how to move through a training video and mark objects. The goal is to pause the video clip at several points and annotate the objects (draw boxes around them) that you are teaching FLEX AI to detect.</p>
<h1>Step By Step Guide:</h1>
<h2>Annotating objects</h2>
<p>1. Pause the video on any frame that contains the object of interest.</p>
<p><a href="https://www.nvripc.com/wp-content/uploads/2024/07/26342346795035-678w380h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/26342346795035-678w380h.png" /></a></p>
<p>2. Draw a detection box around every instance of the object. There can be multiple instances of an object and each instance should have its own detection box.</p>
<p><a href="https://www.nvripc.com/wp-content/uploads/2024/07/26342321731739-684w359h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/26342321731739-684w359h.png" /></a></p>
<p>3. Click the Tighten button (or the keyboard shortcut T) to resize the detection boxes.<br />
This means you can save time by roughly drawing the boxes and then tightening.</p>
<table>
<tbody>
<tr>
<td>Step 1: Roughly draw a box around the object</td>
<td><a href="https://www.nvripc.com/wp-content/uploads/2024/07/26342321747611.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/26342321747611.png" /></a></td>
</tr>
<tr>
<td>Step 2: Click the Tighten button (or press T) to quickly get you close</td>
<td><a href="https://www.nvripc.com/wp-content/uploads/2024/07/26342321760667.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/26342321760667.png" /></a></td>
</tr>
<tr>
<td><p>There will be times when FLEX AI does not properly capture the true edges of the object; that&#8217;s why the application allows you to readjust the edges of the box.</p>
<p>Step 3: Drag the edges/corners of a detection box to fine-tune it.</p></td>
<td><a href="https://www.nvripc.com/wp-content/uploads/2024/07/26342346861851.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/26342346861851.png" /></a></td>
</tr>
</tbody>
</table>
<p>4. To remove a detection box, select the appropriate box and click the Delete key on your keyboard.</p>
<p>5. Click Save (or the keyboard shortcut S) to save the marked detection boxes.</p>
<p>6. Once you have drawn the necessary detection boxes, click the Train button. Your project will be unavailable during the training period which typically takes 15 to 30 minutes depending on the number of annotations.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2024/07/mark_-single-player_-frame-detections-2-706w183h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/mark_-single-player_-frame-detections-2-706w183h.png" /></a></p>
<p>NOTE: Since training is done in the cloud, you can work on any number of other projects while a project is training.</p>
<p>NOTE: The blue detection boxes also appear in the This Frame&#8217;s Detections area after you save. It is recommended that a minimum of 50 detection boxes be defined before your initial object training is complete. An AI model can include detection boxes from multiple video clips.</p>
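<p>To make the Tighten idea concrete, the sketch below shrinks a rough box to the smallest rectangle containing the foreground cells of a binary mask. It is only a conceptual analogue we wrote for illustration; the real Tighten button infers the object's extent from the image itself.</p>

```python
def tighten(box, mask):
    """Shrink a rough box (x, y, w, h) to the smallest rectangle that
    still contains every foreground (truthy) cell of `mask` inside it.

    `mask` is a list of rows; mask[r][c] is truthy where the object is.
    """
    x, y, w, h = box
    cells = [(c, r)
             for r in range(y, y + h)
             for c in range(x, x + w)
             if mask[r][c]]
    if not cells:
        return box  # nothing detected inside; leave the box as drawn
    xs = [c for c, _ in cells]
    ys = [r for _, r in cells]
    return (min(xs), min(ys), max(xs) - min(xs) + 1, max(ys) - min(ys) + 1)
```

<p>This is why drawing rough boxes first and tightening afterwards saves time: the coarse box only needs to contain the object, not hug it.</p>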
<h2>Keyboard shortcuts</h2>
<p>You can save time and effort when navigating video and when marking, tightening, and saving annotations by using the keyboard shortcuts below.</p>
<p><a href="https://www.nvripc.com/wp-content/uploads/2024/07/26342346875931-588w341h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/26342346875931-588w341h.png" /></a></p>
<h1>FLEX AI: How do I improve my object detection model?</h1>
<h1>Summary:</h1>
<p>This article covers improving the AI Object Detection model of your uploaded videos. This process will result in more accurate detections, fewer false positives, and fewer false negatives.</p>
<h1>Step By Step Guide:</h1>
<p>To improve the AI Object Detection model:</p>
<ol>
<li>Click the Improve button (towards the top right of the video player) after you have reviewed the latest model.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-06-07-at-2-31-17-pm-628w189h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-06-07-at-2-31-17-pm-628w189h.png" /></a></li>
<li>Select a video to start.<br />
FLEX AI will suggest the top 10 frames crucial for highlighting your object(s) to enhance your object detection model. Markers represent each frame the application is asking you to revisit.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-06-07-at-2-33-46-pm-644w201h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-06-07-at-2-33-46-pm-644w201h.png" /></a></li>
<li>Draw a detection box around every instance of the object.<br />
Remember to save your detection boxes.</li>
<li>Click the Object Not Present button (towards the bottom right of the video player) if the object of interest is not in the frame.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-06-07-at-2-35-27-pm-646w125h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-06-07-at-2-35-27-pm-646w125h.png" /></a></li>
<li>Use the arrows or the orange dots to advance to the suggested frames.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2024/07/improve_-10-frames_-in-progress-650w171h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/improve_-10-frames_-in-progress-650w171h.png" /></a></li>
<li>Complete all 10 suggested frames, and repeat for all videos to be included in the next training.</li>
<li>If the suggested frames are not enough (ex: different angles and perspectives are needed in the training set), click the Add More button to manually look through the video and add detection boxes.</li>
<li>Click Train when you are ready.<br />
The current limit for a one (1) year license is a maximum of one hundred (100) total trainings or improvements.</li>
</ol>
<p>NOTE: All 10 suggested frames must be completed per video for that video to be included in the training. Detection boxes on any video where all 10 frames are not marked will be discarded and not included.</p>
<h1>FLEX AI: How do I review and download my FLEX AI model?</h1>
<h1>Summary:</h1>
<p>This article covers reviewing the AI Object Detection model of your uploaded videos.</p>
<h1>Step By Step Guide:</h1>
<p>To review your uploaded videos:</p>
<ol>
<li>Click the Ready to View button to select your trained AI model results video.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2024/07/26464275099291-270w325h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/26464275099291-270w325h.png" /></a></li>
<li>Adjust the confidence level slider (towards the top right of the video player) to view the identified objects that meet your specified minimum confidence level.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2024/07/26464281230619.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/26464281230619.png" /></a><br />
The following model has been trained only once, so there is no option to compare.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2024/07/26464275111451-342h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/26464275111451-342h.png" /></a></li>
<li>After training the AI model more than once, click the Single/Compare Player toggle to switch between the side-by-side video player and the single video player.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2024/07/26464275120539.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/26464275120539.png" /></a><br />
The following model has been improved, so both the previous (left) and current (right) versions can be compared. We can already see that the newer model performs better.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2024/07/26464281251611-636w334h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/26464281251611-636w334h.png" /></a><br />
For tips on improving detection accuracy, refer to the following article: <a href="https://support.hanwhavisionamerica.com/hc/en-us/articles/25181595748635" target="_blank" rel="noopener">How do I improve my object detection model in FLEX AI?</a>.</li>
<li>Click the Download button (towards the top right of the video player) when you feel the model has been trained to your preferences. The model can then be uploaded to a camera.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2024/07/26464275137307.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/26464275137307.png" /></a></li>
<li>Upload your newly trained model to your target camera.<br />
Refer to the following article: <a href="https://support.hanwhavisionamerica.com/hc/en-us/articles/25212530029723" target="_blank" rel="noopener">How do I upload a FLEX AI model to a camera?</a></li>
</ol>
<p>NOTE: Downloaded AI models can be deployed to a camera via Device Manager/Wise Detector.</p>
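<p>Conceptually, the confidence slider keeps only detections at or above a minimum score. The sketch below is a generic illustration, not FLEX AI code; the (label, confidence) pair format is our own assumption.</p>

```python
def filter_detections(detections, min_confidence):
    """Keep only detections at or above the slider's minimum confidence.

    Each detection is a (label, confidence) pair; this mirrors what the
    confidence level slider in the results player does for display.
    """
    return [d for d in detections if d[1] >= min_confidence]

dets = [("cart", 0.92), ("cart", 0.41), ("cart", 0.75)]
print(filter_detections(dets, 0.6))  # [('cart', 0.92), ('cart', 0.75)]
```

<p>Raising the threshold trades missed detections for fewer false positives, which is exactly the trade-off you evaluate while reviewing the results video.</p>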
<h1>FLEX AI: How do I upload a FLEX AI model to a camera?</h1>
<h1>Summary:</h1>
<p>This article describes how, after training a FLEX AI model, you can download it, and then upload it to a compatible P-Series AI camera.</p>
<p>Note: For simplification, the term &#8216;model&#8217; refers to a &#8216;custom object model&#8217;.</p>
<h1>Requirements:</h1>
<ul>
<li>A compatible camera<br />
Refer to the following article: <a href="https://support.hanwhavisionamerica.com/hc/en-us/articles/21915192571419-Which-cameras-and-AI-applications-are-supported-by-SightMind" target="_blank" rel="noopener">Which cameras and AI applications are supported by SightMind?</a></li>
<li>Latest version of Device Manager</li>
<li>Latest camera firmware</li>
<li>Latest version of the WiseAI open platform application installed on the camera</li>
</ul>
<p>NOTE: A FLEX AI model cannot run concurrently with a Hanwha Vision AI Pack; any existing AI Pack must first be uninstalled from the camera. However, a FLEX AI model can run in parallel with the WiseAI app.</p>
<h1>Step By Step Guide:</h1>
<p>To initially upload a Custom Detection Model:</p>
<ol>
<li>Download the model from FLEX AI.</li>
<li>Launch WiseAI.</li>
<li>Go to the Setup tab.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-06-25-at-8-43-05-am-662w374h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-06-25-at-8-43-05-am-662w374h.png" /></a></li>
<li>Choose the Object.tar file you&#8217;ve downloaded from FLEX AI.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-06-25-at-8-43-12-am-660w400h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-06-25-at-8-43-12-am-660w400h.png" /></a></li>
<li>A success message confirms the upload.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-06-25-at-8-43-21-am-662w374h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-06-25-at-8-43-21-am-662w374h.png" /></a></li>
<li>You can now see the model within the list.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-06-25-at-8-42-30-am-673w383h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-06-25-at-8-42-30-am-673w383h.png" /></a><br />
It also appears under Object Detection in the menu.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-06-07-095704-672w378h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-06-07-095704-672w378h.png" /></a><br />
You should also see FLEX AI Model below the other object detection options.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-06-07-095805-676w380h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-06-07-095805-676w380h.png" /></a></li>
</ol>
<p>To upload a different model:</p>
<ol>
<li>Only one (1) custom model can be uploaded to a camera at a time, so remove the previous model first (click the trashcan icon next to it).</li>
<li>Upload the new detection model following the same steps above.</li>
</ol>
<h1>FLEX AI: Which cameras are compatible?</h1>
<h1>Summary:</h1>
<p>This article lists the cameras onto which an object detection model created using FLEX AI can be uploaded.</p>
<h1>Compatibility:</h1>
<p>FLEX AI supports the camera models listed below. They are all P series AI cameras, which provide the processing power required to run the AI models.</p>
<table>
<tbody>
<tr>
<td>Indoor Dome</td>
<td>Outdoor Dome</td>
<td>Bullet</td>
<td>Box</td>
</tr>
<tr>
<td>PND-A9081RV<br />PND-A9081RF<br />PND-A6081RV<br />PND-A6081RF</td>
<td>PNV-A9081R<br />PNV-A6081R<br />PNV-A6081R-E</td>
<td>PNO-A9311R<br />PNO-A9081R<br />PNO-A6081R</td>
<td>PNB-A9001<br />PNB-A6001</td>
</tr>
</tbody>
</table>
<p>The post <a rel="nofollow" href="https://www.nvripc.com/flex-ai-setup-and-use-guide/">FLEX AI Setup and Use Guide</a> first appeared on <a rel="nofollow" href="https://www.nvripc.com">NVR IPCAMERA SECURITY</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.nvripc.com/flex-ai-setup-and-use-guide/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
