<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Video Analytics &#8211; NVR IPCAMERA SECURITY</title>
	<atom:link href="https://www.nvripc.com/tag/video-analytics/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.nvripc.com</link>
	<description>CCTV Help Desk Blog!</description>
	<lastBuildDate>Sun, 09 Feb 2025 18:11:22 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://www.nvripc.com/wp-content/uploads/2024/04/cropped-icons8-camera-91-32x32.png</url>
	<title>Video Analytics &#8211; NVR IPCAMERA SECURITY</title>
	<link>https://www.nvripc.com</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>How to use Wisenet cameras with 3rd party VMS</title>
		<link>https://www.nvripc.com/how-to-use-wisenet-cameras-with-3rd-party-vms/</link>
					<comments>https://www.nvripc.com/how-to-use-wisenet-cameras-with-3rd-party-vms/#respond</comments>
		
		<dc:creator><![CDATA[M.Salih ASLAN]]></dc:creator>
		<pubDate>Sun, 09 Feb 2025 18:10:46 +0000</pubDate>
				<category><![CDATA[Guide]]></category>
		<category><![CDATA[How To]]></category>
		<category><![CDATA[IPC]]></category>
		<category><![CDATA[3rd party VMS]]></category>
		<category><![CDATA[AI analytics]]></category>
		<category><![CDATA[API]]></category>
		<category><![CDATA[best practices]]></category>
		<category><![CDATA[camera integration]]></category>
		<category><![CDATA[CCTV]]></category>
		<category><![CDATA[Cloud Storage]]></category>
		<category><![CDATA[compatibility]]></category>
		<category><![CDATA[Configuration]]></category>
		<category><![CDATA[firmware]]></category>
		<category><![CDATA[guide]]></category>
		<category><![CDATA[how to setup]]></category>
		<category><![CDATA[installation]]></category>
		<category><![CDATA[integration guide]]></category>
		<category><![CDATA[IP Camera]]></category>
		<category><![CDATA[IP cameras]]></category>
		<category><![CDATA[latest trends]]></category>
		<category><![CDATA[license]]></category>
		<category><![CDATA[mobile app]]></category>
		<category><![CDATA[Motion Detection]]></category>
		<category><![CDATA[network cameras]]></category>
		<category><![CDATA[ONVIF]]></category>
		<category><![CDATA[pricing]]></category>
		<category><![CDATA[remote access]]></category>
		<category><![CDATA[SDK]]></category>
		<category><![CDATA[Security Camera]]></category>
		<category><![CDATA[security system]]></category>
		<category><![CDATA[setup guide]]></category>
		<category><![CDATA[surveillance system]]></category>
		<category><![CDATA[Troubleshooting]]></category>
		<category><![CDATA[Video Analytics]]></category>
		<category><![CDATA[video management system]]></category>
		<category><![CDATA[Wisenet cameras]]></category>
		<guid isPermaLink="false">https://www.nvripc.com/?p=10194</guid>

					<description><![CDATA[<p>How to use Wisenet cameras with 3rd party VMS, Explore the Hanwha Vision IP camera range and find the security camera that best fits your needs. With a wide range of camera models on offer, Hanwha Vision provides a CCTV solution for all budgets. Delivering good-quality, innovative CCTV products at an affordable price, Hanwha Vision [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.nvripc.com/how-to-use-wisenet-cameras-with-3rd-party-vms/">How to use Wisenet cameras with 3rd party VMS</a> first appeared on <a rel="nofollow" href="https://www.nvripc.com">NVR IPCAMERA SECURITY</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>How to use Wisenet cameras with 3rd party VMS, Explore the Hanwha Vision IP camera range and find the security camera that best fits your needs. With a wide range of camera models on offer, Hanwha Vision provides a CCTV solution for all budgets. Delivering good-quality, innovative CCTV products at an affordable price, Hanwha Vision cameras are a popular choice for many businesses and organisations.</p>
<h1>How do I integrate Hanwha IP cameras and SUNAPI with Immix?</h1>
<h1>Summary</h1>
<p>This article details the steps to integrate Hanwha IP cameras and SUNAPI with Immix.</p>
<h1>Integrate Hanwha with Immix</h1>
<p>To begin the integration, a suitable port for Immix must be located.</p>
<p>NOTE: These are default ports only (they can be configured and may be<br />
different from what is listed).</p>
<p><a href="https://www.nvripc.com/wp-content/uploads/2025/02/16-01-2025_12-12-13_rec-594w96h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/16-01-2025_12-12-13_rec-594w96h.png" /></a></p>
<p>To find ports in Immix:</p>
<div>
<div>
<div>1.Log into the Hanwha IPC device.</div>
</div>
<div>
<div>
<p>2.Select Setup &gt; Basic &gt; IP &amp; Port &gt; Port. The ports needed by the integration appear.</p>
<p><a href="https://www.nvripc.com/wp-content/uploads/2025/02/32868857473051-599w247h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/32868857473051-599w247h.png" /></a></p>
</div>
</div>
<div>
<div>3.The HTTP or HTTPS ports are populated in the Port section. The RTSP port must be forwarded for internal API access.</div>
</div>
</div>
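<p>The same port information can also be read programmatically over SUNAPI. The sketch below is a minimal Python example; the <code>network.cgi?msubmenu=port</code> path is an assumption modeled on the general SUNAPI URL convention, so check the SUNAPI reference for your firmware before relying on it:</p>

```python
import urllib.request

def parse_sunapi_response(text):
    """Parse SUNAPI's plain-text "Key=Value" response lines into a dict."""
    values = {}
    for line in text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
    return values

def get_ports(host, user, password):
    # Endpoint path is an assumption based on the SUNAPI URL convention
    # (/stw-cgi/<cgi>.cgi?msubmenu=...&action=view).
    url = f"http://{host}/stw-cgi/network.cgi?msubmenu=port&action=view"
    password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    password_mgr.add_password(None, url, user, password)
    # Wisenet cameras default to digest authentication.
    opener = urllib.request.build_opener(
        urllib.request.HTTPDigestAuthHandler(password_mgr))
    with opener.open(url, timeout=10) as resp:
        return parse_sunapi_response(resp.read().decode())
```

<p>The parser is independent of the transport, so it can be reused for any SUNAPI view call that returns key/value lines.</p>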
<h2>Configuring Users in Hanwha Vision for Immix</h2>
<p>To configure users:</p>
<div>
<div>
<div>
<p>1.Ensure that the user is a type of administrator or has full permissions; users without full permissions might experience issues.</p>
<p><a href="https://www.nvripc.com/wp-content/uploads/2025/02/32872916709915-601w313h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/32872916709915-601w313h.png" /></a></p>
</div>
</div>
<div>
<div>2.Select Setup &gt; Basic &gt; User.</div>
</div>
<div>
<div>3.Select the checkbox under Use to enable the next available Current User.</div>
</div>
<div>
<div>4.Assign a name, password, and appropriate permissions.</div>
</div>
</div>
<h2>Video Profile</h2>
<p>The integration has capabilities to switch between different video profiles. To switch between profiles:</p>
<div>
<div>
<div>
<p>1.Select Setup &gt; Basic &gt; Video Profile.</p>
<p><a href="https://www.nvripc.com/wp-content/uploads/2025/02/32876152089883-601w390h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/32876152089883-601w390h.png" /></a></p>
</div>
</div>
<div>
<div>2.Set the Video Compression to H.264 for the integration.</div>
</div>
<div>
<div>3.Check the Enable checkbox at Audio In.</div>
</div>
</div>
<h2>Configuring a Hanwha IPC SMTP Alarm</h2>
<p>To configure the alarm into Immix:</p>
<div>
<div>
<div>1.Log into Immix.</div>
</div>
<div>
<div>2.Select Setup &gt; Edit Sites &gt; Customer &gt; Site.</div>
</div>
<div>
<div>3.In the Site Actions for Site_Name pane on the right, select View Summary. A report is generated.</div>
</div>
<div>
<div>4.Format the recipient and sender addresses as follows: S#.a#.e#@immixalarms.com. For example: s29.a1.e21@immixalarms.com</div>
</div>
</div>
<p>S# &#8211; represents the ServerID.<br />
A# &#8211; represents the Input of Response.<br />
E# &#8211; represents the Event Number associated with the Server Type;<br />
e21 defaults to IPC Object Detection.</p>
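<p>Assembling these addresses by hand is easy to get wrong, so a small helper (hypothetical; only the S#.a#.e# format itself comes from this article) can build them:</p>

```python
def immix_address(server_id, input_number, event_number,
                  domain="immixalarms.com"):
    """Build an Immix alarm address in the S#.a#.e#@domain format.

    server_id    -> S# (the Immix ServerID)
    input_number -> a# (the input of the response)
    event_number -> e# (the event number for the server type;
                        21 defaults to IPC Object Detection)
    """
    return f"s{server_id}.a{input_number}.e{event_number}@{domain}"

# immix_address(29, 1, 21) -> "s29.a1.e21@immixalarms.com"
```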
<h3>IPC Email Example</h3>
<p>Subject: ObjectDetected</p>
<p>Sender: S29.a1.e21@ImmixAlarms.com<br />
To: <a href="mailto:S29.a1.e21@ImmixAlarms.com">S29.a1.e21@ImmixAlarms.com</a></p>
<p>Date: 20240514-12:03:47<br />
X-Mailer: NF Mail<br />
MIME-Version: 1.0<br />
Content-Type: multipart/mixed; boundary=594786580.1111.samsungipolis.com<br />
--594786580.1111.samsungipolis.com<br />
Content-Type: text/plain; charset="iso-8859-1"<br />
Content-Transfer-Encoding: 7bit<br />
ObjectDetected Time : 20240514-12:03:47 Camera IP : 192.168.1.166 Event Type : EventRule_5_CH1<br />
--594786580.1111.samsungipolis.com<br />
Content-Type: application/octet-stream;<br />
Content-Transfer-Encoding: base64<br />
Content-Disposition: attachment;<br />
/9j/4AAQSkZJRgABAQEAYABgAAD/2wBDABQODxIPDRQSEBIXFRQYHjIhHhwcHj0s LiQySUBMS0dARkVQWnNiUFVtVkVGZIhlbXd7gYKBTmCNl4x9lnN+gXz/2wBDARU</p>
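<p>For testing the Immix inbox without a camera, a message with the same shape as the example above (plain-text body plus a base64 JPEG attachment) can be built with Python&#8217;s standard email library. This is a sketch, not Hanwha&#8217;s implementation:</p>

```python
from email.message import EmailMessage

def build_alarm_email(address, event, camera_ip, timestamp, jpeg_bytes):
    """Build an alarm email shaped like the Hanwha IPC example:
    multipart/mixed with a short text body and a JPEG snapshot."""
    msg = EmailMessage()
    msg["Subject"] = event          # e.g. "ObjectDetected"
    msg["From"] = address
    msg["To"] = address             # Immix uses the same S#.a#.e# address
    msg.set_content(f"{event}\nTime : {timestamp}\nCamera IP : {camera_ip}\n")
    # add_attachment base64-encodes the image automatically.
    msg.add_attachment(jpeg_bytes, maintype="image", subtype="jpeg",
                       filename="snapshot.jpg")
    return msg
```

<p>The resulting message can be handed to <code>smtplib.SMTP.send_message()</code> for delivery.</p>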
<h2>Enabling Digest Auth</h2>
<p>If you are having trouble using GetConfig, enable Digest Auth for your Hanwha user.</p>
<p>The image below shows where to enable this:</p>
<p><a href="https://www.nvripc.com/wp-content/uploads/2025/02/33679793345051-599w550h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/33679793345051-599w550h.png" /></a></p>
<p>NOTE: Ensure that the user has Admin or full user permissions. You must<br />
create an Immix user and assign them as an administrator. A default<br />
admin account cannot be used.</p>
<h2>Configuring a Hanwha IPC SMTP Event</h2>
<p>To configure the system-based events into Immix:</p>
<div>
<div>
<div>1.Log into the Hanwha IPC device.</div>
</div>
<div>
<div>
<p>2.Select Setup &gt; Event &gt; Event Rule. Users may configure an action for each event type.</p>
<p><a href="https://www.nvripc.com/wp-content/uploads/2025/02/32879773872923-598w285h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/32879773872923-598w285h.png" /></a></p>
</div>
</div>
<div>
<div>3.Click Add to create a new rule for the event type.</div>
</div>
<div>
<div>
<p>4.Assign a name to the Rule, then click anywhere in the +Add pane.</p>
<p><a href="https://www.nvripc.com/wp-content/uploads/2025/02/32879773877915-599w417h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/32879773877915-599w417h.png" /></a></p>
</div>
</div>
<div>
<div>5.Expand the dropdown to display a list of event types.</div>
</div>
<div>
<div>6.Select one or more event types.</div>
</div>
<div>
<div>7.In the Event Actions Settings area, select the E-mail checkbox.</div>
</div>
<div>
<div>
<p>8.Select the Activation Time and click OK.</p>
<p><a href="https://www.nvripc.com/wp-content/uploads/2025/02/33674848701211-599w516h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/33674848701211-599w516h.png" /></a></p>
</div>
</div>
</div>
<h3>Obtaining the S Number</h3>
<p>To obtain the S number:</p>
<div>
<div>
<div>1.Log into Immix.</div>
</div>
<div>
<div>2.Select Setup &gt; Edit Sites &gt; Customer &gt; Site.</div>
</div>
<div>
<div>
<p>3.In the Site Actions for Site_Name pane on the right, select View Summary. A report preview is generated, indicating the Identifier or S number of the device.</p>
<p><a href="https://www.nvripc.com/wp-content/uploads/2025/02/33676320000155-439w65h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/33676320000155-439w65h.png" /></a></p>
</div>
</div>
</div>
<h2>Configuring Devices within Immix</h2>
<p>To configure a device within Immix:</p>
<div>
<div>
<div>1.Log in to Immix with username and password.</div>
</div>
<div>
<div>
<p>2.Navigate to Device Details.</p>
<p><a href="https://www.nvripc.com/wp-content/uploads/2025/02/33644744071835-487w260h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/33644744071835-487w260h.png" /></a></p>
</div>
</div>
<div>
<div>
<p>3.Enter information into the following screen.</p>
<p><a href="https://www.nvripc.com/wp-content/uploads/2025/02/33644709443483.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/33644709443483.png" /></a></p>
</div>
</div>
</div>
<div>
<div>
<div>• Host &#8211; The public address of the Hanwha IPC.</div>
</div>
<div>
<div>• Port &#8211; The http port of the Hanwha IPC.</div>
</div>
<div>
<div>• User &#8211; Username configured in Hanwha IPC.</div>
</div>
<div>
<div>• Password &#8211; Password configured in Hanwha IPC.</div>
</div>
<div>
<div>• RTSP Port &#8211; The RTSP Port configured in Hanwha IPC.</div>
</div>
</div>
<h2>Obtaining the ID of a Camera for Use in the Extra Value</h2>
<p>To obtain the ID of the camera:</p>
<div>
<div>
<div>1.Open the Hanwha system desktop client.</div>
</div>
<div>
<div>2.In the left side, find the appropriate camera.</div>
</div>
<div>
<div>3.Right click it, and click on Camera Settings. The camera settings open on the General tab.</div>
</div>
<div>
<div>4.Click More Info. The ID of the camera is under the field Camera ID.</div>
</div>
<div>
<div>5.Click Copy to copy the ID for use in Immix.</div>
</div>
</div>
<h2>SSL Error Alarms</h2>
<p>By default, the alarm receiver is configured to disallow insecure connections. To allow connections to insecure devices (no SSL, invalid cert, etc.), change the RequireCertificate config key in appsettings.json to false.</p>
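<p>As a sketch, the change can be scripted; this assumes RequireCertificate is a top-level key in appsettings.json (adjust the lookup if your file nests it under another section):</p>

```python
import json
from pathlib import Path

def allow_insecure_devices(settings_path):
    """Set RequireCertificate to false in appsettings.json so the alarm
    receiver accepts devices with no SSL or an invalid certificate."""
    path = Path(settings_path)
    settings = json.loads(path.read_text())
    settings["RequireCertificate"] = False  # assumes a top-level key
    path.write_text(json.dumps(settings, indent=2))
    return settings
```

<p>The alarm receiver service will typically need a restart to pick up the change.</p>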
<h2>Alarms Not Decoding</h2>
<p>By default, the alarm JSON messages are saved in the file store in the same folder as any alarm footage. To diagnose decoding issues, confirm that this message is what is expected. A typical alarm message looks like the following snippet. Alarm messages always use the broadcastAction command.</p>
<p>{<br />
"Tran": {<br />
"Command": "broadcastAction",<br />
"Params": {<br />
"Params": "eyJhY3Rpb25JZCI6InthYWNiMjdmOC1lYTcwLTRlOGEtOGE1YS03YjZiNzk5ZTliMGR9IiwiYWRkaXRpb25hbFJlc291cmNlcyI6WyJ7MDAwMDAwM<br />
"RuntimeParams": "eyJkZXNjcmlwdGlvbiI6Il90aHVtYl9kb3duIiwiZXZlbnRSZXNvdXJjZUlkIjoie2RhZWU5ZWIyLTEyZDYtZTUwNy00Y2MxLWM4ZmUxNG<br />
}<br />
}<br />
}</p>
<p>A second approach is to review the service logs, as all messages (even non-alarm messages) are logged there.</p>
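<p>The Params and RuntimeParams values in the snippet are base64-encoded JSON (the eyJ prefix decodes to <code>{"</code>). A minimal decoding sketch, assuming the full, untruncated payload is available:</p>

```python
import base64
import json

def decode_alarm_params(encoded):
    """Decode a base64-encoded JSON field (Params / RuntimeParams)
    from a broadcastAction alarm message into a dict."""
    # Re-pad in case the sender strips trailing base64 padding.
    padded = encoded + "=" * (-len(encoded) % 4)
    return json.loads(base64.b64decode(padded))

def extract_action_id(alarm):
    """Pull the actionId out of a broadcastAction message, following
    the nesting shown in the alarm snippet."""
    params = decode_alarm_params(alarm["Tran"]["Params"]["Params"])
    return params["actionId"]
```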
<h1>How to use Wisenet cameras with exacqVision</h1>
<h1>Summary:</h1>
<p>This article provides information on using Wisenet cameras with exacqVision.</p>
<h1>Step By Step Guide:</h1>
<h2>Wisenet Camera Settings</h2>
<h3>Mandatory</h3>
<p>The following settings are mandatory:</p>
<div>
<div>
<div>• Configure IP address and password.</div>
</div>
<div>
<div>
<p>• Under Date and Time, check DST if needed.</p>
<p>The exacqVision server also functions as an NTP server and will push its address to the camera for time synchronization.</p>
</div>
<p>NOTE: exacqVision needs the camera to be set to the GMT time zone and will<br />
set it to GMT when connected. If the camera&#8217;s time zone is changed afterwards,<br />
recording issues may occur.</p>
</div>
<div>
<div>
<p>• Set at least one motion zone on the camera.</p>
<p>exacqVision by default records on motion but does not create a motion zone on the camera.</p>
</div>
<p>NOTE: If the Camera Recording mode says Motion Not Supported after adding<br />
the camera, the most common cause is that motion detection is not enabled.<br />
To resolve this, enable motion detection on the camera and then disable/<br />
re-enable the camera.</p>
</div>
</div>
<h3>Optional</h3>
<p>The following settings are optional:</p>
<div>
<div>
<div>• Fine tune motion configuration on the camera to minimize false alarms.</div>
</div>
<div>
<div>• Configure analytics that may be used such as line crossing, intrusion, etc.</div>
</div>
<div>
<div>• If the camera has AI capabilities, enable the Object Detection and Best/Detection shot for the AI metadata in exacqVision.</div>
</div>
<div>
<div>• Enable <a href="https://support.hanwhavisionamerica.com/hc/en-us/articles/27879575404571-How-to-use-Wisenet-cameras-with-exacqVision#h_01J4A8RA1RRW45PMZ2SPM2A6V2" target="_blank" rel="noopener">optional camera settings</a> as detailed at the end of this article.</div>
</div>
</div>
<h2>After Adding to exacqVision</h2>
<p>exacqVision defaults to using the Hanwha default H.264 stream (profile 2) but can be changed to H.265, in which case exacqVision utilizes the default H.265 stream on the camera (profile 3).</p>
<p>NOTE: It is not possible to set H.265 via exacqVision on a fisheye camera.<br />
You need to log in to the camera interface and manually change the FISHEYE<br />
profile to H.265.</p>
<p>To change the default stream to H.265:</p>
<div>
<div>
<div>1.Navigate to the exacqVision Camera Settings page and click the Recording tab.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2025/02/28037461898779-599w361h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/28037461898779-599w361h.png" /></a><br />
In the Rate Control field, the default is Constant Quality, which equates to Variable Bit Rate in Hanwha cameras.
<p>The Quality slider default of 2 sets a low cap.<br />
1 = minimum VBR cap for that stream<br />
10 = maximum VBR cap for that stream<br />
For 4MP+ resolution cameras it is beneficial to increase the quality to 3 or 4; otherwise, the video may be compressed too much, decreasing quality.</p>
</div>
</div>
<div>
<div>2.If needed, select a stream to add from the Multistreaming dropdown menu and click Add Stream.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2025/02/28037469600283-599w317h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/28037469600283-599w317h.png" /></a></div>
<p>NOTE: With high-resolution cameras, it is best practice to create lower<br />
resolution live viewing streams to save on decoding and bandwidth. These<br />
streams can be added in exacqVision on the Camera Settings page.</p>
</div>
<div>
<div>3.Click the Motion tab to edit motion on the camera (although exacqVision does not have all the options available that are in the Hanwha cameras).<br />
<a href="https://www.nvripc.com/wp-content/uploads/2025/02/28037469605659-599w284h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/28037469605659-599w284h.png" /></a></div>
<p>NOTE: It is possible to add motion, including windows, without adding<br />
multiple points to create unique shapes.</p>
</div>
</div>
<div>
<div>
<div>• Sensitivity in exacqVision = Sensitivity setting in the camera</div>
</div>
<div>
<div>• Percentage in exacqVision = Level of Detection setting in the camera</div>
</div>
<div>
<div>• Minimum duration for each area is not available in exacqVision.</div>
</div>
<div>
<div>• Configuring the minimum and maximum object size for motion detection is not available in exacqVision.</div>
</div>
</div>
<div>
<div>
<div>4.Click the Analytics tab to see several options for displaying and recording data.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2025/02/28037461911323-599w325h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/28037461911323-599w325h.png" /></a></div>
</div>
</div>
<div>
<div>
<div>• Display Configuration affects the bounding boxes displayed in exacqVision.</div>
</div>
<div>
<div>• Record analytic data records the metadata from the camera for searching later.</div>
</div>
<div>
<div>• Record Video triggers a recording, like motion recording, when an analytic event is triggered and recognized by exacqVision.</div>
</div>
</div>
<div>
<div>
<div>5.If the camera is a fisheye, click the Digital PTZ/Fisheye tab to configure the proper dewarp code.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2025/02/28037663038875-680w321h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/28037663038875-680w321h.png" /></a><br />
a. Select Immervision from the first dropdown menu.
<p>b. Select the proper code from the second dropdown menu.</p>
<p>Refer to the following article for Hanwha Fisheye Lens Codes: <a href="https://support.hanwhavisionamerica.com/hc/en-us/articles/115011811288-Immervision-Fisheye-Dewarp-Lens-Codes" target="_blank" rel="noopener">Immervision Fisheye Dewarp Lens Codes</a></p>
<p>c. Select how the camera is mounted for proper dewarping from the third dropdown menu.</p>
</div>
</div>
</div>
<p>NOTE: Many of Hanwha’s fisheye cameras can use additional camera-side dewarp<br />
channels. These should be created before the camera is added to<br />
exacqVision, so they are identified and configurable inside of exacqVision.<br />
These channels have no motion detection, so they either need to record 24/7<br />
or be used for live video only.</p>
<p>NOTE: The Video tab does not exactly match what is in the camera.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2025/02/28037663042075-680w312h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/28037663042075-680w312h.png" /></a><br />
Brightness = Brightness<br />
Contrast = Contrast<br />
Saturation = Color Level in the camera<br />
Wide Dynamic Range = controls SSDR in the camera<br />
Video Mask = controls the camera’s Privacy Areas</p>
<h2>Searching AI Metadata</h2>
<p>To search AI metadata:</p>
<div>
<div>
<div>1.From the exacqVision home page, click the Search (<a href="https://www.nvripc.com/wp-content/uploads/2025/02/28037670354843-25w25h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/28037670354843-25w25h.png" /></a>) icon in the top left corner of the screen.</div>
</div>
<div>
<div>2.Expand the camera tree then select the camera and the analytic metadata associated with it.</div>
</div>
<div>
<div>3.Select the Search Range to search for then click Search.</div>
</div>
<div>
<div>
<p>4.Click the Paper (<a href="https://www.nvripc.com/wp-content/uploads/2025/02/28037670360091-25w25h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/28037670360091-25w25h.png" /></a>) icon to view the raw metadata.</p>
<p>A pane to the right of the video will appear with the raw data of that time.</p>
</div>
</div>
<div>
<div>5.Check the Show Filters checkbox to filter data.</div>
</div>
<div>
<div>6.In the pane to the left of the video, check the checkboxes to filter.</div>
</div>
<div>
<div>7.Click Save then click the Search button again.</div>
<p>NOTE: Saved filters can be selected from the dropdown menu at the top.</p>
</div>
<div>
<div>8.If certain overlays for the metadata need to be hidden, click the Green (<a href="https://www.nvripc.com/wp-content/uploads/2025/02/28037670364059-25w23h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/28037670364059-25w23h.png" /></a>) icon and select which overlays should be hidden.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2025/02/28037663061531-680w338h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/28037663061531-680w338h.png" /></a></div>
</div>
</div>
<h2>Creating Events Based on Analytics</h2>
<p>NOTE: The analytics must be configured in the camera first. Once done and the<br />
camera is added to exacqVision, disable and re-enable the camera.</p>
<p>To create events based on analytics:</p>
<div>
<div>
<div>1.In exacqVision, go to Configuration and select Event Linking.</div>
</div>
<div>
<div>2.Under Event Type, select Analytics.</div>
<p>NOTE: The Event Source is the analytic to trigger the Event. There can<br />
potentially be many of these so use the filter to find the camera.</p>
</div>
<div>
<div>3.Select the Action Type and Action Target.</div>
</div>
</div>
<p><a href="https://www.nvripc.com/wp-content/uploads/2025/02/28037670374171-599w114h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/28037670374171-599w114h.png" /></a></p>
<h2>Monitoring Analytic Events</h2>
<p>To monitor analytic events, follow the steps in the <a href="https://support.hanwhavisionamerica.com/hc/en-us/articles/27879575404571-How-to-use-Wisenet-cameras-with-exacqVision#h_01J4A8RA1RJDBX5CY1TKVNSAS7" target="_blank" rel="noopener">Creating Events Based on Analytics</a> section. However, use Event Monitoring instead of Event Linking.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2025/02/28037670380443-680w239h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/28037670380443-680w239h.png" /></a></p>
<h2>Special Considerations</h2>
<p>Take the following into consideration when setting up a camera:</p>
<div>
<div>
<div>
<p>• exacqVision only allows four channels per license. If a camera has more than four channels, refer to the <a href="https://www.exacq.com/integration/ipcams/" target="_blank" rel="noopener">exacqVision IP Camera Integration</a> page for special instructions.</p>
<p>For example:</p>
</div>
</div>
</div>
<div>
<div>
<div>
<p>• <a href="https://exacqvision.com/integration/ipcams/#getCameraRecord=bc36e296-f4b9-11ed-a327-00155d795d08~nstep_id=step-3" target="_blank" rel="noopener">SPE-1630</a> has 16 channels. With four channel licensing, this camera will need four licenses and be added per the exacqVision integration entry for this camera.</p>
</div>
</div>
<div>
<div>
<p>• <a href="https://exacq.com/integration/ipcams/#getCameraRecord=bc36e296-f4b9-11ed-a327-00155d795d08~nstep_id=step-3" target="_blank" rel="noopener">PNM-C3404RQPZ</a> has five channels. This camera will need to be added twice per the exacqVision integration entry for this camera.</p>
</div>
</div>
</div>
<div>
<div>
<div>
<p>• If a camera disconnects from the exacqVision server, refer to the following article to utilize edge storage for “trickle back”:  <a href="https://support.exacq.com/#/knowledge-base/article/573" target="_blank" rel="noopener">Configuring Samsung|Hanwha Camera Edge Storage for Use with Server “Network Loss Recording” Feature</a></p>
</div>
</div>
<div>
<div>• Because fisheye cameras only have one default profile, it is not possible to set H.265 via exacqVision. Log in to the camera and set the FISHEYE profile to H.265, then apply it and disable/re-enable the camera in exacqVision to reflect the changes.</div>
</div>
<div>
<div>• Refer to the following article for information regarding integrating the Hanwha RoadAI camera data into exacqVision:  <a href="https://support.exacq.com/#/knowledge-base/article/17382" target="_blank" rel="noopener">Road AI LPR application with Hanwha Cameras</a></div>
</div>
</div>
<h2>Optional Common Camera Settings</h2>
<p>The following camera settings are optional:</p>
<div>
<div>
<div>
<p>• WiseStream 3 – Setup &gt; Video &amp; Audio &gt; Wisestream</p>
<p>Wisestream uses an algorithm that compresses non-moving objects at a higher rate, saving bandwidth and storage. AI cameras utilize Wisestream 3, which employs AI-driven algorithms for this process, while non-AI cameras use a pixel-based algorithm. The quality setting determines the level of compression applied: a higher value means greater compression of non-moving objects.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2025/02/28037663073307-599w129h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/28037663073307-599w129h.png" /></a></p>
</div>
</div>
<div>
<div>
<p>• Dynamic GoV/FPS – Setup &gt; Video Profiles &gt; Select Recording Profile</p>
<p>These utilize the Wisestream algorithm to control the framerate of the profile and/or the key-frame interval to save on bandwidth and storage.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2025/02/28037670391835-599w133h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/28037670391835-599w133h.png" /></a></p>
</div>
<p>NOTE: Ensure the correct profile that exacqVision is using (264 or 265) is<br />
selected.</p>
</div>
<div>
<div>
<p>• AI Based Shutter – Setup &gt; Video &amp; Audio &gt; Camera Setup &gt; Exposure</p>
<p>This setting is only available on AI-based cameras. It controls the shutter speed based on the presence of objects, reducing motion blur in low-light conditions.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2025/02/28037663083419-599w125h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/28037663083419-599w125h.png" /></a></p>
</div>
<p>NOTE: This option is not available if WDR is turned on.</p>
</div>
<div>
<div>
<p>• WiseNR2 Noise Reduction – Setup &gt; Video &amp; Audio &gt; Camera Setup &gt; Exposure</p>
<p>This setting is not available in all cameras. Wise NR II utilizes an additional algorithm to improve low-light images and reduce noise.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2025/02/28037869293083-599w21h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/28037869293083-599w21h.png" /></a></p>
</div>
</div>
</div>
<h1>Verint recorders using different FPS for normal and event recording</h1>
<p>Summary:</p>
<p>When a Hanwha camera is registered to a Verint NVR, different FPS settings can be used for normal and event recording on the NVR.</p>
<p>Explanation of Behavior:</p>
<div>
<div>
<div>• Enabling this option is required to keep the video connection alive for Verint EdgeVR recorders using different frame rates between normal and event.</div>
</div>
</div>
<p>Resolution:</p>
<div>
<div>
<div>• Firmware can be easily updated via the NVR&#8217;s web interface, the NVR&#8217;s monitor/mouse interface, Wisenet Device Manager, or Wisenet Viewer. The process takes approximately 5 minutes.</div>
</div>
<div>
<div>• The way Verint operates is that the recorder dictates the camera&#8217;s FPS; changing the video profile on the camera, by default, momentarily drops and re-establishes the connection.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2025/02/26601736240539-680w259h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/26601736240539-680w259h.png" /></a></div>
</div>
</div>
<h1>How to add a Wisenet camera to Genetec Security Center 5.9</h1>
<h1>Summary:</h1>
<p>This article provides instructions for adding a Wisenet camera to Genetec Security Center 5.9 and the additional steps needed if the camera has a fisheye lens.</p>
<h1>Step By Step Guide:</h1>
<p>To add a Wisenet camera to Genetec Security Center 5.9:</p>
<div>
<div>
<div>1.Open Wisenet Device Manager.</div>
</div>
<div>
<div>2.Double-click your camera to load the camera&#8217;s webpage.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2025/02/mceclip0-599w502h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/mceclip0-599w502h.png" /></a></div>
</div>
<div>
<div>3.Enter and confirm your new password, then click Apply.</div>
<p>NOTE: If a password has already been configured and you need to default<br />
the camera, press and hold the Reset button for 5-10 seconds.</p>
<p><a href="https://www.nvripc.com/wp-content/uploads/2025/02/mceclip1-597w429h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/mceclip1-597w429h.png" /></a><br />
.</p>
</div>
<div>
<div>4.From the Genetec Config Tool application, click the Video icon.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2025/02/mceclip3-595w397h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/mceclip3-595w397h.png" /></a></div>
</div>
<div>
<div>5.Right-click Archiver and select Unit enrollment.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2025/02/mceclip5-598w508h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/mceclip5-598w508h.png" /></a></div>
</div>
<div>
<div>6.Click Manual add, then select Hanwha Techwin from the Manufacturer dropdown menu, select Wisenet from the Product type dropdown menu, and enter the IP address of the camera.</div>
<p>NOTE: You can find the IP address of the camera in the Wisenet Device<br />
Manager results screen. Click Add and close to enroll the camera.</p>
<p><a href="https://www.nvripc.com/wp-content/uploads/2025/02/mceclip6-592w512h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/mceclip6-592w512h.png" /></a></p>
<p>TIP: If you see a Bad logon error, move your mouse over the text and an Add button will appear.</p>
<p>Enter the username and password of the camera then click the Add button.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2025/02/mceclip7-500w275h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/mceclip7-500w275h.png" /></a><br />
<a href="https://www.nvripc.com/wp-content/uploads/2025/02/mceclip8-300w251h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/mceclip8-300w251h.png" /></a></p>
<p>After a few moments, the status should change to Added.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2025/02/mceclip9-596w238h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/mceclip9-596w238h.png" /></a></p>
<p>NOTE: Sometimes the status in Genetec sticks on Bad logon. To resolve this<br />
issue, click Clear All at the bottom of Unit enrollment and add the camera<br />
again. The camera has now been added to Genetec Security Center 5.9.</p>
</div>
</div>
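<p>When enrollment sticks on Bad logon, it helps to rule out basic network reachability before retrying credentials. A minimal sketch (standard Python only, no vendor API) that probes the camera&#8217;s RTSP port before you clear and re-add the unit in Genetec:</p>

```python
import socket

def camera_reachable(host: str, port: int = 554, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the camera succeeds.

    Port 554 (RTSP) is a reasonable default for IP cameras; use 80 or 443
    to probe the web interface instead. A False result before enrollment
    usually points to a network problem rather than bad credentials.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

<p>For example, camera_reachable("192.168.1.100") returning False suggests checking cabling, VLANs, or the camera&#8217;s IP before touching the Unit enrollment dialog again.</p>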
<h2>Configuring a Fisheye Camera</h2>
<p>The following additional steps are required if the camera has a fisheye lens:</p>
<div>
<div>
<div>1.From the Video tab, select the newly added camera under Archiver then click Hardware and edit the Lens type.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2025/02/mceclip10-599w344h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/mceclip10-599w344h.png" /></a></div>
</div>
<div>
<div>2.Select your Camera position from the dropdown menu.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2025/02/mceclip11-450w353h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/mceclip11-450w353h.png" /></a></div>
<p>NOTE: Genetec may not automatically pick the correct lens type and<br />
could fail to calibrate.</p>
</div>
<div>
<div>3.Select the correct Lens Type from the dropdown menu.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2025/02/mceclip12-508w473h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/mceclip12-508w473h.png" /></a></div>
</div>
<div>
<div>4.Click OK and Apply at the bottom of the page to save changes.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2025/02/mceclip13-447w144h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/mceclip13-447w144h.png" /></a>The fisheye camera settings are now complete.
</div>
</div>
</div>
<h2>Viewing the Newly Added Camera</h2>
<p>To view the newly added camera:</p>
<div>
<div>
<div>1.Open the Security Desk application from the Windows Start menu.</div>
</div>
<div>
<div>2.Log in and select Monitoring.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2025/02/mceclip14-680w284h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/mceclip14-680w284h.png" /></a></div>
</div>
<div>
<div>3.Double-click the newly added camera to see live video.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2025/02/mceclip15-699w371h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/mceclip15-699w371h.png" /></a></div>
</div>
<div>
<div>4.Use the scroll wheel on the mouse or click the Viewing Mode icon to manipulate the image.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2025/02/mceclip16-291w450h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/mceclip16-291w450h.png" /></a><a href="https://www.nvripc.com/wp-content/uploads/2025/02/mceclip17-680w354h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/mceclip17-680w354h.png" /></a></p>
</div>
</div>
</div>
<h1>Genetec &#8211; Adding a Hanwha Camera to Genetec 5.7 or later</h1>
<h1>Summary:</h1>
<p>This article will provide a step-by-step guide for adding a Wisenet camera to Genetec.</p>
<h1>Step By Step Guide:</h1>
<p>1. Open the Config Tool (Figure 1)</p>
<p>&nbsp;</p>
<p><a href="https://www.nvripc.com/wp-content/uploads/2025/02/config_tool_app-160w171h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/config_tool_app-160w171h.png" /></a></p>
<p>Figure 1</p>
<p>&nbsp;</p>
<p>2. Click on the green plus symbol (Figure 2)</p>
<p>3. Select Video Unit (Figure 2)</p>
<p>&nbsp;</p>
<p><a href="https://www.nvripc.com/wp-content/uploads/2025/02/genetec_vu2-474h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/genetec_vu2-474h.png" /></a></p>
<p>Figure 2</p>
<p>4. Select Hanwha as the manufacturer (Figure 3)</p>
<p>5. The IP Address can be a single device or a range of devices. (Figure 3)</p>
<p>6. Provide the proper credentials and click &#8220;Add and close&#8221; or &#8220;Add.&#8221; Then, repeat for further units. (Figure 3)</p>
<p>&nbsp;</p>
<p><a href="https://www.nvripc.com/wp-content/uploads/2025/02/genetec_vu3.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/genetec_vu3.png" /></a></p>
<p>Figure 3</p>
<p>7. Once added to the system, you will be able to view the stream from the Video Unit (Figure 4)</p>
<p>&nbsp;</p>
<p><a href="https://www.nvripc.com/wp-content/uploads/2025/02/genetec5-7_vu4-680w520h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/genetec5-7_vu4-680w520h.png" /></a></p>
<p>Figure 4</p>
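<p>Step 5 lets you enter either a single device or a range of devices. If you track enrollments outside the Config Tool, expanding such a range into individual addresses is straightforward; a small sketch using only the Python standard library:</p>

```python
import ipaddress

def expand_ip_range(start: str, end: str) -> list[str]:
    """Expand an inclusive IPv4 range into individual addresses,
    mirroring how a VMS range enrollment walks every address."""
    first = int(ipaddress.IPv4Address(start))
    last = int(ipaddress.IPv4Address(end))
    if last < first:
        raise ValueError("range end precedes range start")
    return [str(ipaddress.IPv4Address(i)) for i in range(first, last + 1)]
```

<p>For instance, expand_ip_range("192.168.1.10", "192.168.1.12") yields the three addresses the Archiver would attempt to enroll.</p>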
<h1>How to calibrate a Hanwha Fisheye Camera for use on a Genetec VMS</h1>
<p>Applies to Models: XNF, SNF, and QNF Series Cameras</p>
<h1>Summary:</h1>
<p>This article describes how to add a Hanwha Fisheye camera to Genetec by setting a lens code and calibrating the camera.</p>
<p>Note: The lens code will be determined by the model.<br />
See this <a href="https://support.hanwhavisionamerica.com/hc/en-us/articles/115011811288-Immervision-Fisheye-Dewarp-Lens-Codes-" target="_self" rel="noopener">article</a> for a complete list of codes.</p>
<h1>Step By Step Guide:</h1>
<p>1. Open the Config Tool, go to Video Unit.</p>
<p>&nbsp;</p>
<p><a href="https://www.nvripc.com/wp-content/uploads/2025/02/config_tool_genetec.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/config_tool_genetec.png" /></a></p>
<p>&nbsp;</p>
<p>2. Select the Video Unit in question.</p>
<p>&nbsp;</p>
<p>.<a href="https://www.nvripc.com/wp-content/uploads/2025/02/video_unit_genetec.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/video_unit_genetec.png" /></a></p>
<p>&nbsp;</p>
<p>3. Select the Hardware Tab of the Fisheye camera to be configured.</p>
<p><a href="https://www.nvripc.com/wp-content/uploads/2025/02/genetec_video_unit_tabs-hardware-678w.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/genetec_video_unit_tabs-hardware-678w.png" /></a></p>
<p>&nbsp;</p>
<p>4. Set the Lens Type to Panamorph.</p>
<p>&nbsp;</p>
<p>.<a href="https://www.nvripc.com/wp-content/uploads/2025/02/lens_type_panamorph_1.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/lens_type_panamorph_1.png" /></a></p>
<p>&nbsp;</p>
<p>5. Click the Pencil to the right of the Panamorph dropdown.</p>
<p>&nbsp;</p>
<p>.<a href="https://www.nvripc.com/wp-content/uploads/2025/02/lens_type_panamorph.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/lens_type_panamorph.png" /></a></p>
<p>&nbsp;</p>
<p>6. Set the orientation to either ceiling or wall.</p>
<p>&nbsp;</p>
<p><a href="https://www.nvripc.com/wp-content/uploads/2025/02/camer_position-311w.jpeg" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/camer_position-311w.jpeg" /></a></p>
<p>&nbsp;</p>
<p>7. Select the proper Lens Type from the list (see the De-warp Code article link below).</p>
<p>&nbsp;</p>
<p><a href="https://www.nvripc.com/wp-content/uploads/2025/02/lens_type.jpeg" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/lens_type.jpeg" /></a></p>
<p>&nbsp;</p>
<p>8. Click calibrate once the code is registered.</p>
<p>&nbsp;</p>
<p><a href="https://www.nvripc.com/wp-content/uploads/2025/02/mceclip0-680w340h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2025/02/mceclip0-680w340h.png" /></a></p>
<p>Note: If calibration fails, check the motion detection tab for an improperly configured Event or<br />
Motion Schedule.</p>
<p><a href="https://support.hanwhavisionamerica.com/hc/en-us/articles/115011811288-Immervision-Fisheye-Dewarp-Lens-Codes-" target="_self" rel="noopener">Click here for Immervision De-warping codes</a></p>
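<p>For background on what calibration is doing: dewarping maps each pixel of the flattened view back to a pixel on the fisheye circle. The sketch below assumes an ideal equidistant lens (r proportional to the tilt angle) and a ceiling mount; real Panamorph lenses deviate from this ideal, which is exactly why the per-model lens codes above are needed.</p>

```python
import math

def pano_to_fisheye(u: float, v: float, pano_w: int, pano_h: int,
                    cx: float, cy: float, radius: float) -> tuple[float, float]:
    """Map a panorama pixel (u, v) to a source pixel on a ceiling-mounted
    fisheye image, assuming an ideal equidistant projection (r = f * theta).

    u spans the full 360-degree azimuth; v spans tilt from straight down
    (v = 0) out to the horizon (v = pano_h). This is a toy model, not the
    Immervision calibration that the lens codes encode.
    """
    azimuth = (u / pano_w) * 2.0 * math.pi      # 0..2*pi around the view
    theta = (v / pano_h) * (math.pi / 2.0)      # 0..90 degrees of tilt
    r = (theta / (math.pi / 2.0)) * radius      # equidistant: r grows linearly
    return (cx + r * math.cos(azimuth), cy + r * math.sin(azimuth))
```

<p>Straight down (v = 0) maps to the center of the fisheye circle, and the horizon maps to its rim; a per-lens code replaces the linear r(theta) here with the lens&#8217;s measured curve.</p>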
<p>&lt;p&gt;The post <a rel="nofollow" href="https://www.nvripc.com/how-to-use-wisenet-cameras-with-3rd-party-vms/">How to use Wisenet cameras with 3rd party VMS</a> first appeared on <a rel="nofollow" href="https://www.nvripc.com">NVR IPCAMERA SECURITY</a>.&lt;/p&gt;</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.nvripc.com/how-to-use-wisenet-cameras-with-3rd-party-vms/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>FLEX AI Setup and Use Guide</title>
		<link>https://www.nvripc.com/flex-ai-setup-and-use-guide/</link>
					<comments>https://www.nvripc.com/flex-ai-setup-and-use-guide/#respond</comments>
		
		<dc:creator><![CDATA[M.Salih ASLAN]]></dc:creator>
		<pubDate>Tue, 16 Jul 2024 19:17:49 +0000</pubDate>
				<category><![CDATA[Guide]]></category>
		<category><![CDATA[How To]]></category>
		<category><![CDATA[acti]]></category>
		<category><![CDATA[Advanced AI Surveillance]]></category>
		<category><![CDATA[AI Camera]]></category>
		<category><![CDATA[AI Surveillance Innovation]]></category>
		<category><![CDATA[AI Video Processing]]></category>
		<category><![CDATA[AI Vision Systems]]></category>
		<category><![CDATA[AI-driven Security Systems]]></category>
		<category><![CDATA[AI-enhanced Video Surveillance]]></category>
		<category><![CDATA[AI-powered Surveillance]]></category>
		<category><![CDATA[Camera]]></category>
		<category><![CDATA[Camera firmware]]></category>
		<category><![CDATA[CCTV]]></category>
		<category><![CDATA[Device]]></category>
		<category><![CDATA[Device Manager]]></category>
		<category><![CDATA[Download]]></category>
		<category><![CDATA[firmware]]></category>
		<category><![CDATA[FLEX AI]]></category>
		<category><![CDATA[guide]]></category>
		<category><![CDATA[hanwha]]></category>
		<category><![CDATA[Hanwha AI Technology]]></category>
		<category><![CDATA[Hanwha Techwin]]></category>
		<category><![CDATA[Hanwha Vision]]></category>
		<category><![CDATA[hard drive]]></category>
		<category><![CDATA[How to]]></category>
		<category><![CDATA[Instructions]]></category>
		<category><![CDATA[Intelligent Monitoring]]></category>
		<category><![CDATA[Intelligent Video Solutions]]></category>
		<category><![CDATA[LTE]]></category>
		<category><![CDATA[pan]]></category>
		<category><![CDATA[password]]></category>
		<category><![CDATA[Real-time Monitoring]]></category>
		<category><![CDATA[Security AI Solutions]]></category>
		<category><![CDATA[Security Cameras]]></category>
		<category><![CDATA[Setup]]></category>
		<category><![CDATA[Smart Analytics]]></category>
		<category><![CDATA[Smart Security Solutions]]></category>
		<category><![CDATA[Step by Step]]></category>
		<category><![CDATA[Surveillance]]></category>
		<category><![CDATA[Surveillance Technology]]></category>
		<category><![CDATA[Video Analytics]]></category>
		<guid isPermaLink="false">https://www.nvripc.com/?p=9136</guid>

					<description><![CDATA[<p>FLEX AI Setup and Use Guide, FLEX AI enables you to enhance camera capabilities, enabling them to detect and track previously unidentifiable objects. While many cameras can detect and track people and specific vehicle types, it is a different matter identifying shopping carts, forklift trucks, hovercrafts, or items on a conveyor belt. FLEX AI: Get [&#8230;]</p>
<p>&lt;p&gt;The post <a rel="nofollow" href="https://www.nvripc.com/flex-ai-setup-and-use-guide/">FLEX AI Setup and Use Guide</a> first appeared on <a rel="nofollow" href="https://www.nvripc.com">NVR IPCAMERA SECURITY</a>.&lt;/p&gt;</p>
]]></description>
										<content:encoded><![CDATA[<p>FLEX AI Setup and Use Guide, FLEX AI enables you to enhance camera capabilities, enabling them to detect and track previously unidentifiable objects. While many cameras can detect and track people and specific vehicle types, it is a different matter identifying shopping carts, forklift trucks, hovercrafts, or items on a conveyor belt.</p>
<h3>FLEX AI: Get started</h3>
<h1>Summary:</h1>
<p>FLEX AI enables you to enhance camera capabilities, enabling them to detect and track previously unidentifiable objects. While many cameras can detect and track people and specific vehicle types, it is a different matter identifying shopping carts, forklift trucks, hovercrafts, or items on a conveyor belt.</p>
<p>With FLEX AI you can create custom object detection models for solid objects that can then be deployed to cameras, enabling you to detect objects that standard analytics does not already detect.</p>
<p>It is a cloud-based application, and its results are compatible with deployment to P-series AI cameras.</p>
<h1>How it Works:</h1>
<p>FLEX AI operates on a subscription-based model, granting users comprehensive access to train, process, and download custom detection models for end customers. However, the allocation of licenses requires a careful consideration of each end customer, with a separate FLEX AI license needed for each one. This process is typically managed by a STEP Partner, who oversees the distribution of licenses to ensure effective deployment.</p>
<p>For actual deployment onto cameras, FLEX AI uses a perpetual licensing system, distinct from its subscription model. Each camera necessitates its own perpetual license to run FLEX AI models effectively. This approach ensures that users can deploy and utilize the models on their cameras without the constraints of a subscription-based licensing system.</p>
<p>In terms of training models, FLEX AI currently supports the MP4 format or tagged WAVE Sync videos for training.</p>
<p>NOTE: Each project can only train one model.</p>
<h1>Camera Compatibility:</h1>
<p>Refer to the following article: <a href="https://support.hanwhavisionamerica.com/hc/en-us/articles/26332037008027" target="_blank" rel="dofollow noopener">Which Cameras are Compatible With FLEX AI</a></p>
<p>Note: Each camera can only run one FLEX AI model, although it may run in parallel with WiseAI.</p>
<h1>The FLEX AI Flow:</h1>
<p>The general process is as follows:</p>
<div>
<div>
<div>1.Ensure you have a <a href="https://support.hanwhavisionamerica.com/hc/en-us/articles/25726838758043" target="_blank" rel="noopener">license</a> to develop new models.</div>
</div>
<div>
<div>2.<a href="https://support.hanwhavisionamerica.com/hc/en-us/articles/26063284950043" target="_blank" rel="noopener">Sign in to Cloud Portal</a> and select FLEX AI.</div>
</div>
<div>
<div>3.Create a <a href="https://support.hanwhavisionamerica.com/hc/en-us/articles/22011783189531" target="_blank" rel="noopener">new project</a>.</div>
</div>
<div>
<div>4.<a href="https://support.hanwhavisionamerica.com/hc/en-us/articles/22011985070875" target="_blank" rel="noopener">Import</a> training clips.</div>
</div>
<div>
<div>5.<a href="https://support.hanwhavisionamerica.com/hc/en-us/articles/22012124056347" target="_blank" rel="noopener">Annotate</a> the clips.</div>
</div>
<div>
<div>6.Send the model for training.</div>
</div>
<div>
<div>7.Evaluate the model, and <a href="https://support.hanwhavisionamerica.com/hc/en-us/articles/25181595748635" target="_blank" rel="noopener">improve</a> if necessary.</div>
</div>
<div>
<div>8.<a href="https://support.hanwhavisionamerica.com/hc/en-us/articles/25212530029723" target="_blank" rel="noopener">Download</a> the model to your computer.</div>
</div>
<div>
<div>9.<a href="https://support.hanwhavisionamerica.com/hc/en-us/articles/25212530029723" target="_blank" rel="noopener">Upload</a> the model to a compatible camera.</div>
</div>
</div>
<h1>FLEX AI: What do I need to train a model with FLEX AI?</h1>
<h1>Summary:</h1>
<p>This article covers what you need to build effective models with FLEX AI.</p>
<h1>Training Requirements and Recommendations:</h1>
<div>
<div>
<div>• A minimum of 20 annotated objects or training images for an efficient detection model (at least 30 are recommended)</div>
</div>
<div>
<div>• About 100 annotated objects or training images are recommended for a more robust model</div>
</div>
<div>
<div>• Draw the bounding box around the object as close as possible (try to not leave any margin)</div>
</div>
<div>
<div>• All target objects or training images must be labeled (i.e. frames chosen without annotation cannot be used in a data set)</div>
</div>
<div>
<div>• Annotate every object in a frame that matches your target object (the algorithm may not perform well if similar objects within a frame are left unannotated when training custom detection of your desired object).</div>
</div>
<div>
<div>• Use video clips that show the object of interest (OOI) from several different angles, perspectives, and lighting conditions (preferably from the same camera for which the training is being done)</div>
</div>
<div>
<div>• Use clips where the annotated object is not partially hidden so that the AI can learn what it really looks like (i.e. if you were teaching it what a person looks like, then annotating just an arm sticking out from behind a tree is not helpful)</div>
</div>
</div>
<p>NOTE: After you click Train, the data you have created is sent to the cloud and<br />
the algorithm is trained to detect your object. This process can take anywhere<br />
from 30 minutes to 1 hour, depending on the number of training images you included.</p>
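<p>The counting rules above (at least 20 annotated objects, around 100 recommended, no unlabeled frames) are easy to sanity-check before you click Train. A minimal sketch, with the thresholds taken from this guide:</p>

```python
def check_training_set(annotations_per_frame: list[int],
                       minimum: int = 20, recommended: int = 100) -> str:
    """Summarize whether an annotation set meets the FLEX AI guidance:
    at least 20 annotated objects for a workable model, ~100 for a robust
    one. Frames with zero annotations are flagged because unlabeled
    frames cannot be used in a data set."""
    total = sum(annotations_per_frame)
    unlabeled = sum(1 for n in annotations_per_frame if n == 0)
    if unlabeled:
        return f"{unlabeled} frame(s) have no annotations; label or drop them"
    if total < minimum:
        return f"only {total} objects annotated; need at least {minimum}"
    if total < recommended:
        return f"{total} objects is workable; ~{recommended} is recommended"
    return f"{total} objects annotated; good to train"
```

<p>Running this over your per-frame annotation counts before training saves a 30-60 minute cloud round trip on a data set that was never going to produce a usable model.</p>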
<h1>Performance Issues:</h1>
<p>The following may cause object detection to have performance issues:</p>
<div>
<div>
<div>• Challenging background conditions due to low light or changing weather (ex: rain, snow, sunshine)</div>
</div>
<div>
<div>• The object&#8217;s size within the camera&#8217;s field of view differs from the size used during training</div>
</div>
<div>
<div>• Parts of the object are covered or obstructed</div>
</div>
<div>
<div>• Objects are in high density crowds or occluded</div>
</div>
<div>
<div>• Stacking (ex: a model trained on single shopping carts will have issues detecting carts stacked in a corral)</div>
</div>
<div>
<div>• Object is moving too fast</div>
</div>
</div>
<h1>Camera Recommendations</h1>
<div>
<div>
<div>• Use footage from the camera, covering the fixed field of view, on which the model will be deployed and used.</div>
</div>
<div>
<div>• The Field of View (FoV) should show your object with a minimum size of 20px x 20px.</div>
</div>
<div>
<div>• Installed cameras should be at a normal video surveillance view (a CCTV view angle of 45 degrees or larger).</div>
</div>
</div>
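<p>The 20 px x 20 px minimum can be estimated ahead of installation with a simple pinhole-camera approximation. The formula below is a rough geometric sketch, not a Hanwha tool; lens distortion and codec compression will change the real figure.</p>

```python
import math

def object_pixel_size(object_m: float, distance_m: float,
                      image_width_px: int, hfov_deg: float) -> float:
    """Approximate how many pixels wide an object appears, using a simple
    pinhole model: focal length in pixels times angular size. Useful for
    checking the 20 px x 20 px minimum before collecting training footage."""
    focal_px = (image_width_px / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)
    return focal_px * object_m / distance_m
```

<p>For example, with a 1920-px-wide image and a 90-degree horizontal FoV, a 1 m object at 48 m works out to roughly the 20 px floor, so anything farther away needs a longer lens or higher resolution.</p>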
<h1>Limitations:</h1>
<p>FLEX AI has the following limitations:</p>
<div>
<div>
<div>• Cannot detect non-solid objects such as gas/vapor, liquid, smoke, etc.</div>
</div>
<div>
<div>• Cannot deploy more than one custom model to a camera at a time. Detecting multiple objects currently requires multiple cameras.</div>
</div>
<div>
<div>• Cannot support a single model that includes multiple objects (ex: hardhat + goggles + vest) and identify a missing item.</div>
</div>
<div>
<div>• Cannot be used for identification or recognition of an object (ex: specific people, faces, animals, etc.)</div>
</div>
<div>
<div>• Cannot be used for detecting the orientation of objects (ex: a cart that is facing left or right).</div>
</div>
<div>
<div>• Cannot distinguish colors. FLEX AI does not take color into account.</div>
</div>
<div>
<div>• Cannot be used for detecting fine-grained object classes (ex: golden retriever vs. dog, Tesla Model Y vs. vehicle)</div>
</div>
</div>
<h1>Processing Times</h1>
<div>
<div>
<div>• FLEX AI is a cloud-based application and requires internet access and processing time.</div>
</div>
<div>
<div>• The algorithm takes the images you&#8217;ve marked, trains our object detection model to detect your desired object, processes the model to show you simulated performance on videos you&#8217;ve provided, and then packages the model to work on the camera.</div>
</div>
<div>
<div>• Each step can take several minutes; the initial training can take about an hour (depending on the number of marked images you&#8217;ve provided).</div>
</div>
<div>
<div>• When the training is completed, we then process the model to detect the object with the videos you&#8217;ve provided in the video library of the project. When the videos become available, their buttons become active with the &#8220;Ready to View&#8221; status.</div>
</div>
<div>
<div>• We also package the model for the camera and the download button becomes active when the file is ready.</div>
</div>
</div>
<h1>FLEX AI: How do I create and manage FLEX AI projects?</h1>
<h1>Summary:</h1>
<p>This article provides instructions for creating and managing FLEX AI projects.</p>
<h1>Step By Step Guide:</h1>
<h2>Creating New Projects</h2>
<p>To create a new FLEX AI project:</p>
<p>1. On the Project screen, click the plus (+) button to create a new project.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2024/07/light_projects_-no-projects-655w172h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/light_projects_-no-projects-655w172h.png" /></a></p>
<p>2. Name your project after the detection type.</p>
<p><a href="https://www.nvripc.com/wp-content/uploads/2024/07/26337875651995-642w323h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/26337875651995-642w323h.png" /></a></p>
<p>NOTE:<br />
Project names must be unique.<br />
Each project can only consist of one object detection type.</p>
<h2>Managing Projects</h2>
<p>Here is an example of multiple projects being worked on in parallel.</p>
<p><a href="https://www.nvripc.com/wp-content/uploads/2024/07/26337517408667-721w326h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/26337517408667-721w326h.png" /></a></p>
<p>To search for a FLEX AI project:</p>
<div>
<div>
<div>1.Use the Search box to filter the displayed projects.</div>
</div>
<div>
<div>2.Apply sorting, as needed.</div>
</div>
<div>
<div>3.Click on a project to open it.</div>
</div>
</div>
<p>Existing projects will indicate their current status:</p>
<div>
<div>
<div>• Untrained &#8211; model is yet to be trained for the first time</div>
</div>
<div>
<div>• With annotations &#8211; model has been trained, reannotated, and is awaiting retraining</div>
</div>
<div>
<div>• Training in Progress &#8211; model is in the process of being trained</div>
</div>
<div>
<div>• Trained &#8211; model has been trained</div>
</div>
</div>
<p><a href="https://www.nvripc.com/wp-content/uploads/2024/07/status-668w137h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/status-668w137h.png" /></a></p>
<h1>FLEX AI: How do I add a video clip for FLEX AI usage?</h1>
<h1>Summary:</h1>
<p>This article describes the process for uploading training video clips to train a FLEX AI model.</p>
<h1>Step By Step Guide:</h1>
<h2>Uploading a video</h2>
<div>
<div>
<div>1.Log in to your account.</div>
</div>
<div>
<div>2.Select the project.</div>
</div>
<div>
<div>3.Drag your MP4 video file to the Video Library field or click Hard Drive and select an MP4 file.</div>
</div>
</div>
<h1><a href="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-04-25-at-10-56-54-am.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-04-25-at-10-56-54-am.png" /></a></h1>
<p>NOTE: The Video Library drawer can be expanded and collapsed by clicking the arrow.</p>
<p>&nbsp;</p>
<h2>Pulling bookmarked videos directly from WAVE Sync</h2>
<p>NOTE: First tag WAVE Sync bookmarks with &#8220;flex_ai&#8221; to make them appear in FLEX AI.</p>
<h3>Tagging WAVE Sync videos</h3>
<p>1. Sign in to your WAVE account.</p>
<p>2. Add a bookmark.</p>
<h3><a href="https://www.nvripc.com/wp-content/uploads/2024/07/add-bookmark-671w.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/add-bookmark-671w.png" /></a></h3>
<p>3. Tag your bookmark with &#8220;flex_ai&#8221;.</p>
<p><a href="https://www.nvripc.com/wp-content/uploads/2024/07/label-bookmark-655w423h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/label-bookmark-655w423h.png" /></a></p>
<h3>Importing WAVE Sync clips</h3>
<p>1. Click WAVE Sync and enter your user ID and password.</p>
<p><a href="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-06-05-at-11-37-31-am-690w446h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-06-05-at-11-37-31-am-690w446h.png" /></a></p>
<p>&nbsp;</p>
<p>2. Select your system from the dropdown menu.</p>
<p><a href="https://www.nvripc.com/wp-content/uploads/2024/07/add1-639w413h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/add1-639w413h.png" /></a></p>
<p><a href="https://www.nvripc.com/wp-content/uploads/2024/07/add2-644w416h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/add2-644w416h.png" /></a></p>
<p>&nbsp;</p>
<p>3. Select the checkbox for each WAVE Sync Bookmark you would like to import.</p>
<h2>Recommendations</h2>
<div>
<div>
<div>• Use videos that are less than 10 minutes long.</div>
</div>
<div>
<div>• Multiple videos can be uploaded at once with a maximum file limit of 500 MB each.</div>
</div>
<div>
<div>• Use the Video Library to rename your video, see its status, or delete it.</div>
</div>
<div>
<div>• More angles and perspectives of your object will increase the accuracy of the AI model.</div>
</div>
</div>
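<p>The limits above (MP4 only, 500 MB per file) can be checked locally before uploading. A small sketch covering the format and size rules; clip duration (under 10 minutes) is left out because checking it requires a media tool such as ffprobe:</p>

```python
import os

MAX_BYTES = 500 * 1024 * 1024   # 500 MB per-file limit from this guide

def validate_upload(path: str) -> list[str]:
    """Return a list of problems with a clip before uploading it to the
    FLEX AI video library. An empty list means the format and size
    checks passed."""
    problems = []
    if not path.lower().endswith(".mp4"):
        problems.append("not an MP4 file")
    if not os.path.isfile(path):
        problems.append("file not found")
    elif os.path.getsize(path) > MAX_BYTES:
        problems.append("exceeds 500 MB limit")
    return problems
```

<p>Running this over a folder of clips before a batch upload avoids waiting on transfers that the Video Library will reject.</p>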
<h1>FLEX AI: How do I annotate objects?</h1>
<h1>Summary:</h1>
<p>This article describes how to move through a training video and mark objects. The goal is to take a video clip and pause at several points in the video and annotate objects (draw boxes around) you are teaching FLEX AI to detect.</p>
<h1>Step By Step Guide:</h1>
<h2>Annotating objects</h2>
<p>1. Pause the video on any frame that contains the object of interest.</p>
<p><a href="https://www.nvripc.com/wp-content/uploads/2024/07/26342346795035-678w380h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/26342346795035-678w380h.png" /></a></p>
<p>&nbsp;</p>
<p>2. Draw a detection box around every instance of the object. There can be multiple instances of an object and each instance should have its own detection box.</p>
<p><a href="https://www.nvripc.com/wp-content/uploads/2024/07/26342321731739-684w359h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/26342321731739-684w359h.png" /></a></p>
<p>3. Click the Tighten button (or the keyboard shortcut T) to resize the detection boxes.<br />
This means you can save time by roughly drawing the boxes and then tightening.</p>
<table>
<tbody>
<tr>
<td>Step 1: Roughly draw a box around the object</td>
<td><a href="https://www.nvripc.com/wp-content/uploads/2024/07/26342321747611.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/26342321747611.png" /></a></td>
</tr>
<tr>
<td>Step 2: Click the Tighten button (or press T) to quickly get you close</td>
<td><a href="https://www.nvripc.com/wp-content/uploads/2024/07/26342321760667.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/26342321760667.png" /></a></td>
</tr>
<tr>
<td>There will be times when FLEX AI does not properly capture the true edges of the object; that&#8217;s why the application allows you to readjust the edges of the box.<br />
Step 3: Drag the edges/corners of a detection box to fine-tune it.</td>
<td><a href="https://www.nvripc.com/wp-content/uploads/2024/07/26342346861851.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/26342346861851.png" /></a></td>
</tr>
</tbody>
</table>
<p>&nbsp;</p>
<p>4. To remove a detection box, select the appropriate box and click the Delete key on your keyboard.</p>
<p>5. Click Save (or the keyboard shortcut S) to save the marked detection boxes.</p>
<p>6. Once you have drawn the necessary detection boxes, click the Train button. Your project will be unavailable during the training period which typically takes 15 to 30 minutes depending on the number of annotations.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2024/07/mark_-single-player_-frame-detections-2-706w183h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/mark_-single-player_-frame-detections-2-706w183h.png" /></a></p>
<p>NOTE: Since training is done in the cloud, you can work on any number of other projects<br />
while a project is training.</p>
<p>NOTE: The blue detection boxes also appear in the This Frame’s Detections area after you<br />
save. It is recommended that a minimum of 50 detection boxes be defined before your<br />
initial object training is complete. An AI model can include detection boxes from<br />
multiple video clips.</p>
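<p>The Tighten (T) shortcut can be pictured as shrinking a rough box to the smallest box that still contains the object&#8217;s pixels. The toy sketch below drives that idea with a 0/1 object mask; FLEX AI&#8217;s actual tightening is internal to the application and works from learned features, not a hand-made mask.</p>

```python
def tighten_box(mask: list[list[int]],
                box: tuple[int, int, int, int]) -> tuple[int, int, int, int]:
    """Shrink a rough box (x1, y1, x2, y2) to the smallest box that still
    contains every nonzero mask pixel inside it. Coordinates are
    half-open, so x2/y2 sit one past the last object pixel."""
    x1, y1, x2, y2 = box
    xs = [x for y in range(y1, y2) for x in range(x1, x2) if mask[y][x]]
    ys = [y for y in range(y1, y2) for x in range(x1, x2) if mask[y][x]]
    if not xs:                      # nothing inside: leave the box alone
        return box
    return (min(xs), min(ys), max(xs) + 1, max(ys) + 1)
```

<p>This is why roughly drawn boxes are fine in practice: the tighten pass pulls the edges in, and you only drag corners by hand for the cases it misses.</p>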
<h2>Keyboard shortcuts</h2>
<p>You can save time and effort when navigating video, marking, tightening and saving annotations, by using the keyboard shortcuts below.</p>
<p><a href="https://www.nvripc.com/wp-content/uploads/2024/07/26342346875931-588w341h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/26342346875931-588w341h.png" /></a></p>
<h1>FLEX AI: How do I improve my object detection model?</h1>
<h1>Summary:</h1>
<p>This article covers improving the AI Object Detection model of your uploaded videos. This process will result in more accurate detections, fewer false positives, and fewer false negatives.</p>
<h1>Step By Step Guide:</h1>
<p>To improve the AI Object Detection model:</p>
<div>
<div>
<div>1. Click the Improve button (towards the top right of the video player) after you have reviewed the latest model.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-06-07-at-2-31-17-pm-628w189h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-06-07-at-2-31-17-pm-628w189h.png" /></a></div>
</div>
<div>
<div>2. Select a video to start.<br />
FLEX AI will suggest the top 10 frames crucial for highlighting your object(s) to enhance your object detection model. Markers represent each frame the application is asking you to revisit.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-06-07-at-2-33-46-pm-644w201h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-06-07-at-2-33-46-pm-644w201h.png" /></a></div>
</div>
<div>
<div>3. Draw a detection box around every instance of the object.<br />
Remember to save your detection boxes.</div>
</div>
<div>
<div>4. Click the Object Not Present button (towards the bottom right of the video player) if the object of interest is not in the frame.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-06-07-at-2-35-27-pm-646w125h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-06-07-at-2-35-27-pm-646w125h.png" /></a></div>
</div>
<div>
<div>5. Use the arrows or the orange dots to advance to different or suggested frames.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2024/07/improve_-10-frames_-in-progress-650w171h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/improve_-10-frames_-in-progress-650w171h.png" /></a></div>
</div>
<div>
<div>6. Complete all 10 suggested frames and repeat for all videos to be included in the next training.</div>
</div>
<div>
<div>7. Click the Add More button to manually look through the video and add detection boxes if the suggested frames are not enough (e.g., when different angles and perspectives are needed in the training set).</div>
</div>
<div>
<div>8. Click Train when you are ready.<br />
A one (1) year license currently allows a maximum of one hundred (100) total trainings or improvements.</div>
</div>
</div>
<p>NOTE: All 10 suggested frames must be completed for each video in order for that video to be included in the training. Detection boxes on any video where all 10 frames are not marked will be discarded and not included.</p>
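<p>The per-video completion rule in the note above amounts to a simple all-or-nothing filter. A sketch, assuming a hypothetical progress tracker mapping each video to the number of suggested frames you have completed (either boxed or flagged Object Not Present):</p>

```python
SUGGESTED_FRAMES = 10  # FLEX AI suggests 10 frames per video

# Hypothetical progress tracker: video name -> suggested frames completed.
progress = {
    "entrance.mp4": 10,
    "parking.mp4": 7,
}

def included_in_training(progress):
    """Only videos with all 10 suggested frames completed are included;
    partially marked videos are discarded from the next training."""
    return [video for video, done in progress.items() if done >= SUGGESTED_FRAMES]

print(included_in_training(progress))  # ['entrance.mp4']
```

<p>Here parking.mp4 would be dropped entirely, even though 7 of its frames were marked, which is why finishing all 10 frames per video matters.</p>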
<h1>FLEX AI: How do I review and download my FLEX AI model?</h1>
<h1>Summary:</h1>
<p>This article covers reviewing the AI Object Detection model of your uploaded videos.</p>
<h1>Step By Step Guide:</h1>
<p>To review your uploaded videos:</p>
<div>
<div>
<div>1. Click the Ready to View button to select your trained AI model results video.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2024/07/26464275099291-270w325h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/26464275099291-270w325h.png" /></a></div>
</div>
<div>
<div>2. Adjust the confidence level slider (towards the top right of the video player) to view the identified objects that meet your specified minimum confidence level.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2024/07/26464281230619.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/26464281230619.png" /></a><br />
The following model has been trained only once, so there is no option to compare.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2024/07/26464275111451-342h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/26464275111451-342h.png" /></a></div>
</div>
<div>
<div>3. After training the AI model more than once, click the Single/Compare Player toggle to switch between the side-by-side and single video player views.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2024/07/26464275120539.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/26464275120539.png" /></a><br />
The following model has been improved, so the previous (left) and current (right) versions can be compared side by side. The newer model already performs noticeably better.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2024/07/26464281251611-636w334h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/26464281251611-636w334h.png" /></a><br />
For tips on improving detection accuracy, refer to the following article: <a href="https://support.hanwhavisionamerica.com/hc/en-us/articles/25181595748635" target="_blank" rel="noopener">How do I improve my object detection model in FLEX AI?</a>.</div>
</div>
<div>
<div>4. Click the Download button (towards the top right of the video player) once the model has been trained to your satisfaction. The model can then be uploaded to a camera.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2024/07/26464275137307.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/26464275137307.png" /></a></div>
</div>
<div>
<div>5. Upload your newly trained model to your target camera.<br />
Refer to the following article: <a href="https://support.hanwhavisionamerica.com/hc/en-us/articles/25212530029723" target="_blank" rel="noopener">How do I upload a FLEX AI model to a camera?</a></div>
</div>
</div>
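<p>The confidence slider in step 2 is conceptually a threshold filter over the model's detections. A minimal sketch with made-up detection scores (the actual FLEX AI output format is not documented here):</p>

```python
# Hypothetical detections: (label, confidence score 0.0-1.0, bounding box).
detections = [
    ("forklift", 0.92, (10, 20, 100, 80)),
    ("forklift", 0.55, (200, 40, 90, 75)),
    ("forklift", 0.31, (400, 60, 85, 70)),
]

def filter_by_confidence(detections, threshold):
    """Keep only detections at or above the chosen confidence level."""
    return [d for d in detections if d[1] >= threshold]

# Raising the slider hides low-confidence (likely false) detections.
print(len(filter_by_confidence(detections, 0.5)))  # 2
print(len(filter_by_confidence(detections, 0.9)))  # 1
```

<p>A higher threshold trades missed detections for fewer false positives, which is exactly the trade-off you are evaluating when you move the slider while reviewing a trained model.</p>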
<p>NOTE: Downloaded AI models can be deployed to a camera via Device Manager/Wise Detector.</p>
<h1>FLEX AI: How do I upload a FLEX AI model to a camera?</h1>
<h1>Summary:</h1>
<p>This article describes how, after training a FLEX AI model, you can download it, and then upload it to a compatible P-Series AI camera.</p>
<p>Note: For simplification, the term &#8216;model&#8217; refers to a &#8216;custom object model&#8217;.</p>
<h1>Requirements:</h1>
<div>
<div>
<div>• A compatible camera<br />
Refer to the following article: <a href="https://support.hanwhavisionamerica.com/hc/en-us/articles/21915192571419-Which-cameras-and-AI-applications-are-supported-by-SightMind" target="_blank" rel="noopener">Which cameras and AI applications are supported by SightMind?</a></div>
</div>
<div>
<div>• Latest version of Device Manager</div>
</div>
<div>
<div>• Latest camera firmware</div>
</div>
<div>
<div>• Latest version of the WiseAI open platform application installed onto the camera</div>
</div>
</div>
<p>NOTE: A FLEX AI model cannot run concurrently with a Hanwha Vision AI Pack; any existing AI Pack must first be uninstalled from the camera. However, a FLEX AI model can run in parallel with the WiseAI app.</p>
<h1>Step By Step Guide:</h1>
<p>To initially upload a Custom Detection Model:</p>
<div>
<div>
<div>1. Download the model from FLEX AI.</div>
</div>
<div>
<div>2. Launch WiseAI.</div>
</div>
<div>
<div>3. Go to the Setup tab.<a href="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-06-25-at-8-43-05-am-662w374h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-06-25-at-8-43-05-am-662w374h.png" /></a></div>
</div>
<div>
<div>4. Choose the Object.tar file you&#8217;ve downloaded from FLEX AI.</div>
<div><a href="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-06-25-at-8-43-12-am-660w400h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-06-25-at-8-43-12-am-660w400h.png" /></a></div>
</div>
<div>
<div>5. A success message confirms the upload.</div>
<div><a href="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-06-25-at-8-43-21-am-662w374h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-06-25-at-8-43-21-am-662w374h.png" /></a></div>
</div>
<div>
<div>6. The model now appears in the list.</div>
<div>
<p><a href="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-06-25-at-8-42-30-am-673w383h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-06-25-at-8-42-30-am-673w383h.png" /></a></p>
<p>The model also appears under Object Detection in the menu.<br />
<a href="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-06-07-095704-672w378h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-06-07-095704-672w378h.png" /></a><br />
You should also see FLEX AI Model listed below the other object detection options.</p>
</div>
<p><a href="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-06-07-095805-676w380h.png" target="_blank" rel="noopener"><img decoding="async" src="https://www.nvripc.com/wp-content/uploads/2024/07/screenshot-2024-06-07-095805-676w380h.png" /></a></p>
</div>
</div>
<p>To upload a different model:</p>
<div>
<div>
<div>1. Only one (1) custom model can be uploaded to a camera at this time.<br />
Remove the previous model by clicking the trashcan icon next to it.</div>
</div>
<div>
<div>2. Upload the new detection model by following the same steps above.</div>
</div>
</div>
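<p>Since the upload step expects the Object.tar file downloaded from FLEX AI, it can help to sanity-check the file before uploading it. A minimal sketch using Python's standard library; it only confirms the file is a readable, non-empty tar archive (the archive's internal layout is not documented here):</p>

```python
import tarfile

def looks_like_model_archive(path):
    """Rough pre-upload check: is this a readable, non-empty .tar archive?"""
    if not tarfile.is_tarfile(path):
        return False
    with tarfile.open(path) as tar:
        # An empty archive almost certainly is not a valid model export.
        return len(tar.getnames()) > 0

# Usage (assuming Object.tar sits in the current directory):
# if looks_like_model_archive("Object.tar"):
#     print("Archive looks intact; proceed with the WiseAI upload.")
```

<p>This catches truncated downloads or the wrong file being selected, but it cannot confirm the model itself is valid; the camera's success message remains the authoritative check.</p>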
<h1>FLEX AI: Which cameras are compatible?</h1>
<h1>Summary:</h1>
<p>This article lists the cameras onto which an object detection model created using FLEX AI can be uploaded.</p>
<h1>Compatibility:</h1>
<p>FLEX AI supports the camera models listed below. They are all P-Series AI cameras, which provide the processing power required to run the AI models.</p>
<table>
<tbody>
<tr>
<td>Indoor Dome</td>
<td>Outdoor Dome</td>
<td>Bullet</td>
<td>Box</td>
</tr>
<tr>
<td>PND-A9081RV<br />
PND-A9081RF<br />
PND-A6081RV<br />
PND-A6081RF</td>
<td>PNV-A9081R<br />
PNV-A6081R<br />
PNV-A6081R-E</td>
<td>PNO-A9311R<br />
PNO-A9081R<br />
PNO-A6081R</td>
<td>PNB-A9001<br />
PNB-A6001</td>
</tr>
</tbody>
</table>
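<p>If you provision cameras with scripts, the table above can be encoded as a simple lookup. A sketch using the models listed (verify against current Hanwha documentation before relying on it, as supported models may change):</p>

```python
# P-Series AI cameras compatible with FLEX AI models, per the table above.
FLEX_AI_COMPATIBLE = {
    # Indoor Dome
    "PND-A9081RV", "PND-A9081RF", "PND-A6081RV", "PND-A6081RF",
    # Outdoor Dome
    "PNV-A9081R", "PNV-A6081R", "PNV-A6081R-E",
    # Bullet
    "PNO-A9311R", "PNO-A9081R", "PNO-A6081R",
    # Box
    "PNB-A9001", "PNB-A6001",
}

def is_flex_ai_compatible(model):
    """Case-insensitive check against the compatibility list."""
    return model.upper() in FLEX_AI_COMPATIBLE

print(is_flex_ai_compatible("PNV-A9081R"))  # True
print(is_flex_ai_compatible("XNV-6080R"))   # False
```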
<p>The post <a rel="nofollow" href="https://www.nvripc.com/flex-ai-setup-and-use-guide/">FLEX AI Setup and Use Guide</a> first appeared on <a rel="nofollow" href="https://www.nvripc.com">NVR IPCAMERA SECURITY</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.nvripc.com/flex-ai-setup-and-use-guide/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
