Pelco Developer Network (PDN)

Video Related Troubleshooting

These examples assume the use of C# as the programming language.

This information pertains to the VideoInput Web Service Reference and Video Output Web Service Reference services.

What is the ideal method for video playback?

Ideally, users should utilize RTSP through Pelco's RTSP Server reference implementations.
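As an illustration of what an RTSP client exchange looks like, the sketch below (in Python for brevity) builds and sends an RTSP OPTIONS request. The host, port, and stream path are hypothetical placeholders; substitute the values for your Pelco RTSP Server reference implementation.

```python
# Minimal sketch of beginning an RTSP exchange with a camera's RTSP server.
import socket

def build_options_request(url, cseq=1):
    """Build an RTSP OPTIONS request for the given stream URL."""
    return (f"OPTIONS {url} RTSP/1.0\r\n"
            f"CSeq: {cseq}\r\n"
            "User-Agent: pdn-example\r\n\r\n")

def probe_server(host, port, url):
    """Send OPTIONS and return the first line of the server's reply."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(build_options_request(url).encode("ascii"))
        return sock.recv(4096).decode("ascii").splitlines()[0]

if __name__ == "__main__":
    # probe_server requires a reachable RTSP server, e.g.:
    # print(probe_server("192.168.0.10", 554, "rtsp://192.168.0.10/stream1"))
    print(build_options_request("rtsp://192.168.0.10/stream1"))
```

A full client would follow OPTIONS with DESCRIBE, SETUP, and PLAY; most applications delegate that sequence to an existing RTSP/media library rather than raw sockets.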

What are the advantages and disadvantages of MJPEG and MPEG4?


MJPEG

Advantages:
  • Better image quality
  • Most widespread picture compression format
  • Lower latency
  • Constant image quality
  • Fast stream recovery in the event of packet loss
  • Lower complexity

Disadvantages:
  • High bandwidth usage
  • High storage requirements
  • No support for synchronized sound

MPEG-4

Advantages:
  • Higher compression rates, less bandwidth usage
  • Lower storage requirements
  • Maintains a constant bit rate
  • Able to sync audio and video

Disadvantages:
  • Low robustness in the event of packet loss
  • Higher latency
  • Image quality suffers with network congestion or increased movement in scenes

H.264

H.264 uses MPEG-4 Part 10 Advanced Video Coding (AVC), a coding scheme developed to lower bit rate usage and improve video quality across a broader range of video resolutions, including HD.

Advantages:
  • Supports a wider range of video formats
  • Lower bandwidth usage for equivalent video formats
  • Higher quality video

Disadvantages:
  • Requires more processing power
  • Adds more delay to real-time video streams

What settings are recommended for dual video stream devices?

Normally, the first stream is set to a higher resolution/frame rate, and the second channel is used for a lower resolution. Recommending a setting is difficult to do, because each end user case is different. Using a lower resolution/lower bandwidth setting on the second channel allows for better control of PTZ in real-time security surveillance applications.

What frame rates are recommended for NTSC and PAL?

NTSC is an interlaced video standard of 60 fields per second. Two fields combine to form one video image, yielding 30 frames per second at a resolution of 720 x 480 (4:3 aspect ratio).

PAL is also an interlaced standard, similar to NTSC but consisting of 50 fields (25 frames) of video per second, with a resolution of 720 x 576 (4:3 aspect ratio).

Frame rates should be set according to motion. Areas with more "activity" should be set at a higher frame rate, higher resolution, and higher I-frame rate (a lower group of pictures (GOP) setting), while areas with little or no activity may be set to a lower frame rate, possibly with a larger GOP setting (even fewer I-frames). This approach helps avoid unnecessary storage and network bandwidth usage.
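The relationship between frame rate and GOP length can be made concrete with a little arithmetic; the sketch below (in Python, with illustrative values rather than Pelco recommendations) shows how often a full-image I-frame is sent for a given setting.

```python
# Relating frame rate and GOP length to I-frame frequency.

def iframe_interval_seconds(frame_rate, gop_length):
    """Seconds between full-image (I-frame) updates."""
    return gop_length / frame_rate

def iframes_per_second(frame_rate, gop_length):
    return frame_rate / gop_length

# High-activity area: 30 fps with a short GOP -> frequent full updates.
print(iframe_interval_seconds(30, 15))   # 0.5 s between I-frames
# Low-activity area: 10 fps with a long GOP -> infrequent full updates.
print(iframe_interval_seconds(10, 50))   # 5.0 s between I-frames
```

A shorter I-frame interval improves stream recovery and seek behavior at the cost of bandwidth and storage, which is why low-activity scenes can tolerate a longer GOP.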

Is it possible to develop an application that switches the view automatically from camera to camera (within a set of cameras)?

Using the PelcoSDK or API, developers can retrieve a list of cameras attached to the system and switch the camera displayed in a window on a timer or in response to events.
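One way to structure such a camera tour is sketched below in Python. The `switch` callback stands in for whatever PelcoSDK/API call attaches a camera's stream to a display window, and the camera names are hypothetical.

```python
# Sequential camera tour: cycle through a camera list on a timer.
import itertools
import threading

class CameraTour:
    def __init__(self, cameras, switch, dwell_seconds=5.0):
        self._cycle = itertools.cycle(cameras)
        self._switch = switch          # callback that displays a camera
        self._dwell = dwell_seconds
        self._timer = None

    def _advance(self):
        self._switch(next(self._cycle))          # show the next camera
        self._timer = threading.Timer(self._dwell, self._advance)
        self._timer.daemon = True
        self._timer.start()                      # schedule the next switch

    def start(self):
        self._advance()

    def stop(self):
        if self._timer:
            self._timer.cancel()

shown = []
tour = CameraTour(["lobby", "dock", "gate"], shown.append, dwell_seconds=60)
tour.start()   # immediately shows "lobby", then advances every dwell period
tour.stop()
```

The same structure works for event-driven switching: instead of a timer, call the advance step from a motion-alarm or other event handler.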

VOL headers are not present in most parts of the stream. Is this normal?

Currently, some Pelco devices send VOL headers only when a connection is made (approximately 10), and stop sending them shortly afterward. Only Sarix IP cameras send VOL headers at all times.

If you are inserting VOL headers to account for this issue, keep the following in mind:
  • VOL headers should only be injected in I-frames and not on every frame
  • VOL headers will change over time. Periodically compare your saved VOL header with a new VOL header from a new connection.
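The injection rule above can be sketched as follows (in Python). This assumes each `frame` is one access unit beginning with an MPEG-4 VOP start code (00 00 01 B6); real depacketizing code must locate start codes itself, and the VOL header bytes here are a stand-in, not a real header.

```python
# Re-inserting a saved VOL header ahead of I-frames in an MPEG-4 stream.

VOP_START = b"\x00\x00\x01\xb6"

def is_iframe(frame):
    """An I-VOP has vop_coding_type == 0, encoded in the top two bits
    of the byte that follows the VOP start code."""
    return (frame.startswith(VOP_START)
            and len(frame) > 4
            and (frame[4] >> 6) == 0)

def inject_vol(frame, vol_header):
    """Prepend the saved VOL header to I-frames only, per the guidance above."""
    return vol_header + frame if is_iframe(frame) else frame

fake_vol = b"\x00\x00\x01\x20\x05\x04"     # stand-in VOL header bytes
i_vop = VOP_START + b"\x10\x00\x00"        # coding-type bits 00 -> I-VOP
p_vop = VOP_START + b"\x50\x00\x00"        # coding-type bits 01 -> P-VOP
```

Per the second bullet, the `vol_header` value should be refreshed periodically from a new connection rather than cached forever.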

How do users initialize a connection to a video stream and play it?

Refer to "Initializing a Connection to a Video Stream and Playing It" in Video Input and Video Output General Usage for details.

How do users keep an existing video stream session alive?

Refer to "Maintaining an Existing Video Stream Session and Keeping It Alive" in Video Input and Video Output General Usage for details.

How do users retrieve live video stream configuration data?

Refer to "Retrieving Live Video Stream Configuration Data" in Video Input and Video Output General Usage for details.

How do users retrieve the current number of live video streams associated with a device?

Refer to "Retrieving the Current Number of Live Video Streams Associated with a Device" in Video Input and Video Output General Usage for details.

How do users pause a video stream?

Refer to "Pausing a Video Stream" in Video Input and Video Output General Usage for details. 

How do users resume playing a paused video stream?

Refer to "Resume Playing a Paused Video Stream" in Video Input and Video Output General Usage for details. 

How do users configure a video stream to be multicast or unicast?

By default, video streams are multicast. To configure a video stream to be unicast, you must specify its destination. Specifically, users must set Video Output Web Service Reference's StreamSession's transportURL attribute to the desired destination (for example, rtp://12.321.421.21:8000). To make a unicast video stream revert to being multicast, simply modify the transportURL attribute to be blank or empty.
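The toggle described above can be sketched as follows (in Python). `StreamSession` here is a plain stand-in for the Video Output Web Service Reference's StreamSession structure, and the destination address is a hypothetical example.

```python
# Switching a stream between multicast and unicast via transportURL.

class StreamSession:
    def __init__(self):
        self.transportURL = ""      # empty -> default multicast delivery

def make_unicast(session, dest_ip, dest_port):
    # Setting a destination URL switches the stream to unicast.
    session.transportURL = f"rtp://{dest_ip}:{dest_port}"

def make_multicast(session):
    # Blanking the attribute reverts the stream to multicast.
    session.transportURL = ""

s = StreamSession()
make_unicast(s, "192.168.0.42", 8000)
print(s.transportURL)               # rtp://192.168.0.42:8000
make_multicast(s)
```

In a real client, the attribute would be set through the web service call rather than on a local object, but the multicast/unicast decision reduces to the same string value.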

Can users add their own watermark, then make it viewable on recorded video using a standard video player?

Yes, this is possible through the use of Digital Signatures. See Sarix Stream Settings for information about configuring the ImageProcessing setting to enable watermarking.

How can users record video (and audio) streams to files?

Developers can create an RTP or RTSP client to retrieve the streams, and then use the API to perform actions such as recording the video and audio streams to files.
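One piece of such a recording client is stripping the RTP header from each received packet, sketched below in Python using the fixed-header arithmetic from RFC 3550. Real code must also reorder packets by sequence number and handle the codec's payload format; this shows only the header math.

```python
# Extracting RTP payloads and appending them to a file.

def rtp_payload(packet):
    """Return the payload of one RTP packet (ignores padding/extensions)."""
    if len(packet) < 12:
        raise ValueError("too short for an RTP header")
    cc = packet[0] & 0x0F                # CSRC count from the first byte
    return packet[12 + 4 * cc:]          # 12 fixed bytes + 4 per CSRC entry

def record(packets, path):
    """Append the payloads of received packets to a file."""
    with open(path, "ab") as f:
        for pkt in packets:
            f.write(rtp_payload(pkt))
```

For a playable file, the concatenated payloads generally need a container (or at least codec-specific framing), which is where the API's recording support comes in.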

How can users determine the URL for a video feed?

Depending on the method of connecting, developers can determine the URL and UUIDs of a camera in several ways.

RTSP Issues

For questions related to RTSP, refer to RTSP Troubleshooting.

Device-Specific Issues

Encoder

For video-related issues pertaining to encoders, see Encoder Troubleshooting.

IP Camera

For video-related issues pertaining to IP cameras, see IP Camera Troubleshooting.

NVR and DVR

For video-related issues pertaining to NVRs and DVRs, see NVR Troubleshooting.


About Video Stream Parameters

There are many parameters and scenarios that determine bandwidth usage, which in turn determines storage capability. Some considerations include:

  • Frame – One full image, as though it was a photo.
  • Framerate – The number of source frames per second to be compressed; the rate at which the source encoder gets a new frame.
  • Bitrate – Maximum number of bits used for transmitting compressed video and audio (if used) to a destination. This is the maximum value. Typically, the actual usage is less.
  • Group of Pictures (GOP) – Used by some compression encoders, this refers to the number of video frames used to compress/encode video information. A GOP consists of a reference frame (I-frame) followed by smaller differential frames of varying information. The larger this value, the lower the number of full image updates to the destination, and typically, the less transmission bandwidth is required.
  • Resolution – The transmitted, destination video image size. The smaller the image size, the less bandwidth is required on the transmission medium and storage.
  • Compression – A process that takes a source data stream and attempts to remove redundancy. Video compression can be done on a frame-by-frame basis, as in MJPEG, or over a period of time, as in MPEG4 and H.264. Temporal compression has advantages over a frame-by-frame algorithm because it removes redundancy across multiple images.

Determining Your Video Settings

There are many considerations for determining your video stream settings, amount of storage, and the length of time that you need to record.

  • Activity – The more "activity" that occurs on a particular video stream, the more bandwidth and storage are required.
  • Destination resolution – For the destination point of a particular video stream, the resolution may need to be changed to match the resolution of the device. For example, if streaming to an iPod, the video stream resolution must match that of the iPod. If streaming over a cellular network where bandwidth is limited, adjusting the resolution and GOP can help achieve better performance.
  • Bandwidth – Changing this setting adjusts the maximum number of bits produced by the compression algorithm.

Using a combination of resolution, GOP (if available), and frame rate may achieve the desired bandwidth, allowing maximum flexibility for the compression engine.
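Because bit rate is a ceiling (actual usage is typically less, as noted above), a worst-case storage estimate follows directly from the maximum bit rate and recording time; the sketch below (in Python, with an illustrative 2 Mbps stream) shows the arithmetic.

```python
# Estimating worst-case storage from maximum bit rate and recording time.

def storage_gigabytes(bitrate_bps, hours, streams=1):
    """Worst-case storage in GB for `streams` streams at `bitrate_bps`."""
    seconds = hours * 3600
    return bitrate_bps * seconds * streams / 8 / 1e9   # bits -> bytes -> GB

# One 2 Mbps stream recorded for 24 hours:
print(round(storage_gigabytes(2_000_000, 24), 1))      # 21.6 GB worst case
```

Plugging in your own bit rates and retention period gives a quick upper bound for sizing storage before tuning resolution, GOP, and frame rate.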