Hannes
With the plugin H.264 codec I defined a macro which defines profiles/levels for each standard frame size. This makes it a bit easier to identify the codecs that cannot be supported by the primary input device, and hence easier to remove those unsupported capabilities from the capability list. In the case of the extended video codec, these maximum frame sizes can be used as a frame of reference when scaling the input from the application capture: in the OpenExtendedVideoChannel() function the codec's maximum width/height from the plugin is compared to the application width/height, and a scaling ratio can be calculated that keeps the aspect ratio within the confines of the codec-defined capability. This works just fine in H323plus but fails when passed to the plugin, because the plugin requires known frame sizes. Remove those lines from the plugin and it works. Hence the issue.
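The scaling calculation described above can be sketched roughly like this (the names and types are illustrative only, not the actual h323plus code):

```cpp
#include <algorithm>

// Sketch: fit an application capture size into the codec's maximum frame
// size while preserving the aspect ratio. FrameSize and FitToCodecMax are
// hypothetical stand-ins for the real OpenExtendedVideoChannel() logic.
struct FrameSize { int width; int height; };

FrameSize FitToCodecMax(const FrameSize & capture, const FrameSize & codecMax)
{
    // Take the smaller of the two per-axis ratios so that BOTH dimensions
    // stay within the codec-defined maximum.
    double ratio = std::min((double)codecMax.width  / capture.width,
                            (double)codecMax.height / capture.height);
    if (ratio >= 1.0)   // capture already fits; no upscaling needed
        return capture;
    return FrameSize{ (int)(capture.width  * ratio),
                      (int)(capture.height * ratio) };
}
```

For example, a 1280x720 capture scaled into a CIF (352x288) maximum keeps its 16:9 shape rather than being stretched to fill 352x288.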
In defining an H.239 capability there is a flag in the plugin capability to define which codecs are to support extended video. There is no need to specify separate OpalMediaFormats for primary or secondary video; they are the same, just handled differently. When the codec is opened, there are two capability factories: a primary one and a smaller secondary video one. The codecs with the video flag go in the primary, the ones with the extvideo flag go in the secondary, and some have both, so they are added to both. In the capability exchange, the secondary capabilities are negotiated along with, and in exactly the same way as, the primary ones, except they are wrapped inside an H323ExtendedVideoCapability, which in h323plus is assigned a Session ID of 5. These capabilities do not auto-start with audio and video (there is a flag to do so, but it is disabled by default), and once the call is established you can invoke an H.245 OLC at any time to open a Session ID 5 session (extendedVideo), creating a unidirectional video channel. The channel is assigned a unique identifier (like T-1xx) which you can use to close it at any time. You can open multiple Session ID 5 sessions, each with its own unique identifier. Via H323Endpoint::OpenExtendedVideoChannel() you can assign a different input device to each channel and unicast multiple video streams simultaneously.
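The open/close flow described above can be mocked up like this. This is a self-contained sketch of the bookkeeping only; the class and method names are hypothetical, and the real entry point is H323Endpoint::OpenExtendedVideoChannel() with an actual H.245 OLC exchange behind it:

```cpp
#include <map>
#include <string>

// Hypothetical mock: each open of a Session ID 5 (extendedVideo) channel
// yields a unique identifier (analogous to the T-1xx logical channel
// numbers) that can later be used to close exactly that channel.
class ExtendedVideoSessionManager {
  public:
    // Open a unidirectional extended video channel fed by the given
    // input device; returns the channel's unique identifier.
    std::string Open(const std::string & inputDevice) {
        std::string id = "T-" + std::to_string(nextChannel++);
        channels[id] = inputDevice;
        return id;
    }

    // Close the channel with the given identifier, if it exists.
    bool Close(const std::string & id) { return channels.erase(id) == 1; }

    std::size_t ActiveChannels() const { return channels.size(); }

  private:
    int nextChannel = 100;                        // T-100, T-101, ...
    std::map<std::string, std::string> channels;  // id -> input device
};
```

Opening two channels with different capture devices and closing one by its identifier leaves the other stream running untouched, which is the property that makes multiple simultaneous unicast streams possible.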
Hannes, your changes are important and will work quite well with H.239. However, it's not a show stopper: each extended video stream uses the same session ID (in h323plus's case, 5), whereas in Opal that would be a dynamically allocated number.
There is still the outstanding issue of NAT and unidirectional video streams, but if you implement the relevant sections of H460.p2pnat as I wrote it, that should not be that great an issue. :-)
The architecture of OpenH323 (now h323plus), with all its intricacies and downsides, is still quite capable of handling this stuff.
Simon
-----Original Message-----
From: Hannes Friederich [mailto:hannesf@ee.ethz.ch]
Sent: Tuesday, 6 November 2007 4:21 PM
To: Simon Horne
Cc: Opalvoip-devel@lists.sourceforge.net; Robert Jongbloed; H323plus
Subject: Re: [Opalvoip-devel] Custom Video Frame Size
Simon,
On 06.11.2007, at 03:55, Simon Horne wrote:
I have CC'd this to the h323plus list.
Robert
Getting back to the initial question: I want to move forward with H.239 support in h323plus. Can I remove the fixed frame size constraints from the video plugins so the project can move forward? Or, if that's not recommended, then, since I don't want to have different versions of the video plugins that break interoperability, can I put in a compiler directive to get us out of a pickle? Once these Opal architectural glitches are resolved, the directive can be removed.
I really am confused about the codec issues, the discrete video sizes with H.261/H.263, the generic capabilities, etc. The way this is done in H323plus is to detect the capabilities of the video device at application startup via the changes I made in the PTLib video device factory, which allow the device capability list to be exposed without instantiating the device. You use the device capability list to determine the maximum frame size available for the device, so in this way you can detect and support HD webcams etc. There is an H323Endpoint function that then goes through and removes all the capabilities unsupported by that particular webcam. Easy! In the OpenVideoChannel callback the user can then set the frame size and fps on the wire. This sets the header height/width fields of the YUV420 frame, which then go back into the plugin codec to resize the codec. This is how it used to work in OpenH323, and it works just fine. The problem you refer to is, I guess, an open Opal issue perhaps?
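The "remove all the capabilities unsupported by that particular webcam" step might look roughly like this. A purely illustrative sketch with stand-in types; the real logic lives inside an H323Endpoint member function:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Illustrative only: drop codec capabilities whose frame size exceeds
// what the capture device reports it can deliver.
struct Capability { std::string name; int width; int height; };

void RemoveUnsupported(std::vector<Capability> & caps,
                       int devMaxWidth, int devMaxHeight)
{
    caps.erase(std::remove_if(caps.begin(), caps.end(),
                   [=](const Capability & c) {
                       return c.width  > devMaxWidth ||
                              c.height > devMaxHeight;
                   }),
               caps.end());
}
```

So a webcam whose maximum is CIF would have 4CIF/16CIF entries pruned from the list before the capability set is advertised.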
Just don't forget that newer codecs such as H.264 no longer define explicit frame sizes, but rather profiles/levels, which actually define a range of sizes. So it is no longer that easy to just remove particular capabilities, as a particular profile/level may mean higher resolution/lower framerate or vice versa. Also, I don't know how flexible existing H.239 systems are in terms of supported frame sizes. I guess it will be safest if you stick to well-known discrete resolutions such as 4CIF / 16CIF.
The "Extended Video Channel" is different from Hannes's work in Opal. ExtendedVideoCapability is a type of video capability which contains a subset of capabilities designed to be used for the likes of H.239. There is a flag I have added to the codec definitions in the video plugins which marks the codecs to be loaded into this subset group. Hannes's work is on having multiple primary video windows, which is not related to H.239. The secondary or "extended" video capability is opened via a function which sends an H.245 OLC and returns a channel number that you can then use to close the channel. Since each channel has a unique channel number, multiple video windows can be opened/closed on the fly. There is a working example of this in the 'simple' application in the applications directory of the H323plus CVS. This type of concept opens the way to more advanced concepts like telepresence, where you can allocate three or more different video inputs, one per secondary channel. Since all this is done on a secondary video capability, existing interoperability on the primary video is ensured and no existing architectural changes in h323plus are required.
I think I have to explain in more detail how the MediaType stuff actually works, as it really was intended to support H.239. An OpalEndpoint does not primarily know about H.239, as this is H.323-specific stuff. To Opal, this is just another video stream. However, this video stream has different characteristics from the primary video stream, since, as you mentioned, the capabilities used are different ones. So it needs different OpalMediaFormat definitions. The OpalMediaType class introduced is just an extension to the sessionID parameter used so far. First, statically assigning session IDs other than 1, 2, 3 is not according to H.245, as these session IDs have to be assigned by the H.245 master. The MediaType is just a description of the media type (video, audio, application, etc.) along with a label (e.g. DefaultVideo, SecondaryVideo). So far, the existing code explicitly tries to open a logical channel for DefaultAudioSessionID, DefaultVideoSessionID and DefaultDataSessionID. If you want to use other data streams (e.g. H.224/H.281), you need to add #ifdef-protected code at various places, which is rather painful. My changes simply try to open logical channels for each MediaType available, and the MediaTypeList is dynamically managed. I don't see why H.239 shouldn't fit into this concept.
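The idea above can be illustrated with a loose sketch (these are not the actual OPAL classes, just stand-ins showing the concept): streams are described by a MediaType instead of a hard-coded session ID, and session IDs are handed out dynamically when the channels are opened.

```cpp
#include <string>
#include <vector>

// Conceptual sketch only. A MediaType is a description of the media
// (video, audio, application, ...) plus a label distinguishing streams
// of the same kind (e.g. "DefaultVideo" vs "SecondaryVideo").
struct MediaType {
    std::string kind;
    std::string label;
};

class MediaTypeList {
  public:
    void Add(const MediaType & t) { types.push_back(t); }

    // The H.245 master assigns the next free session ID to each stream
    // in turn; nothing is pinned to 1/2/3 (or 5) in advance.
    std::vector<unsigned> AssignSessionIDs(unsigned firstID = 1) const {
        std::vector<unsigned> ids;
        for (std::size_t i = 0; i < types.size(); ++i)
            ids.push_back(firstID + (unsigned)i);
        return ids;
    }

    std::size_t Size() const { return types.size(); }

  private:
    std::vector<MediaType> types;
};
```

Adding an H.239 secondary video stream (or an H.224/H.281 data stream) then means appending one more MediaType to the list rather than sprinkling #ifdef-protected special cases through the channel-opening code.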
Hannes