Description: Copy IDirect3DSurface9 to IDirect3DTexture9
The use case: when we use DXVA2 to decode video, an IDirect3DSurface9 is
generated. To incorporate it nicely with ANGLE, we have to copy the surface
to a texture. This patch uses StretchRect to copy a surface to a texture.

There is another issue: locking the D3D device through the D3D device manager
when the D3D device is shared with DXVA2. That problem is not addressed in
this patch.
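The copy described above can be sketched as follows. This is not the patch's actual code: the D3D9 interfaces are stubbed out here (the real ones come from `<d3d9.h>`) so the control flow is self-contained, and the helper name is hypothetical. It shows the StretchRect-based copy together with the pointer and error checks the reviewers ask for later in this thread:

```cpp
#include <cassert>
#include <cstdio>

// Minimal stand-ins for the D3D9 interfaces involved (illustration only).
typedef long HRESULT;
static const HRESULT D3D_OK = 0;
static const HRESULT D3DERR_INVALIDCALL = -1;
static bool Succeeded(HRESULT hr) { return hr >= 0; }

struct IDirect3DSurface9 {};

struct IDirect3DTexture9 {
    IDirect3DSurface9 level0;
    bool lost = false;  // simulates device-lost / out-of-video-memory
    HRESULT GetSurfaceLevel(unsigned level, IDirect3DSurface9** out) {
        if (lost || level != 0) { *out = nullptr; return D3DERR_INVALIDCALL; }
        *out = &level0;
        return D3D_OK;
    }
};

struct IDirect3DDevice9 {
    // The real StretchRect can also color-convert (e.g. NV12 -> RGB).
    HRESULT StretchRect(IDirect3DSurface9* src, const void* /*srcRect*/,
                        IDirect3DSurface9* dst, const void* /*dstRect*/,
                        int /*filter*/) {
        return (src && dst) ? D3D_OK : D3DERR_INVALIDCALL;
    }
};

// The copy path from the description, with every pointer validated and
// StretchRect's result propagated instead of ignored.
bool CopySurfaceToTexture(IDirect3DDevice9* device,
                          IDirect3DSurface9* source,
                          IDirect3DTexture9* texture) {
    if (!device || !source || !texture) return false;
    IDirect3DSurface9* target = nullptr;
    if (!Succeeded(texture->GetSurfaceLevel(0, &target)) || !target) {
        std::fprintf(stderr, "ERR: could not get render-target surface\n");
        return false;
    }
    return Succeeded(device->StretchRect(source, nullptr, target, nullptr, 0));
}
```

The stubbed failure path mirrors the device-lost case mentioned in review: when the render-target surface cannot be obtained, the copy fails loudly rather than dereferencing null.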
Patch Set 1 | Patch Set 2
Total comments: 6
Total messages: 11
I added Daniel as a reviewer. Daniel, what do you think of this approach to hardware decoding into a texture managed by ANGLE?

The idea would be to cast EGLContext to ANGLE's internal gl::Context and then use the internal API (copySurfaceToTexture2D in this case) by making them virtual, in the same way as GLESv2.dll calls into EGL.dll.

I don't want to expose the D3D resources outside of ANGLE or allow Chrome to modify ANGLE's state. That would turn into a big mess. This approach keeps the management of the texture internal to ANGLE.

Anyway, we need to decode video into textures and this was my suggestion. Any thoughts?

Al

http://codereview.appspot.com/2381042/diff/2001/src/libGLESv2/Context.cpp
File src/libGLESv2/Context.cpp (right):
src/libGLESv2/Context.cpp:3512: Texture2D* target = static_cast<gl::Texture2D*>(getTexture(texture));
I would assert that target is not null here.

http://codereview.appspot.com/2381042/diff/2001/src/libGLESv2/Texture.cpp
File src/libGLESv2/Texture.cpp (right):
src/libGLESv2/Texture.cpp:1439: IDirect3DSurface9* target = getRenderTarget(GL_TEXTURE_2D);
Check that target is not null and log an error with the ERR macro here. We've been having trouble with this, possibly because of device lost or out of video memory. The ERR will help us track it down.
http://codereview.appspot.com/2381042/diff/2001/src/libGLESv2/Context.cpp
File src/libGLESv2/Context.cpp (right):
src/libGLESv2/Context.cpp:3511: IDirect3DDevice9* device = getDevice();
device is unused.

src/libGLESv2/Context.cpp:3512: Texture2D* target = static_cast<gl::Texture2D*>(getTexture(texture));
There is also no guarantee that "texture" is actually a 2D texture here.

http://codereview.appspot.com/2381042/diff/2001/src/libGLESv2/Texture.cpp
File src/libGLESv2/Texture.cpp (right):
src/libGLESv2/Texture.cpp:1439: IDirect3DSurface9* target = getRenderTarget(GL_TEXTURE_2D);
getRenderTarget will not be supported for all types of textures. In particular, we are in the process of making L and LA textures non-renderable (so we can have more efficient loading of them for video support).

src/libGLESv2/Texture.cpp:1444: getDevice()->StretchRect(surface, NULL, target, NULL, D3DTEXF_NONE);
I fear there are a lot of things that could go wrong here (format conversions, unchecked errors and return values, etc.).
On 2010/10/08 00:39:22, apatrick1 wrote:
> Daniel, what do you think of this approach to hardware decoding into a
> texture managed by ANGLE? [...]

Specific issues of this patch aside, I really don't think this particular approach is a good idea. Accessing the gl::Context directly from outside ANGLE is just going to be brittle and will require a much tighter coupling between Chrome and ANGLE than is currently used (possibly static linking?) or is likely desired.

I believe the best way of doing this would be to use something like an EGLImage to wrap the D3D surface and then associate that with GL using the EGLImageTargetTexture2DOES call. See the following extensions:

http://www.khronos.org/registry/egl/extensions/KHR/EGL_KHR_image_base.txt
http://www.khronos.org/registry/egl/extensions/KHR/EGL_KHR_gl_image.txt
http://www.khronos.org/registry/gles/extensions/OES/OES_EGL_image.txt

Now all that seems fairly complex (at least to understand, not necessarily to implement), and possibly more than is required at this point.

Failing that, the next best thing would be to do this as a proper (ANGLE-specific) extension. The entry point should be exposed via libGLESv2.cpp as a proper extension entry point, and then no external application needs to go mucking about with internal gl::Contexts directly. I'd expect that the OES_EGL_image extension is a good place to get ideas from.

You could have the following entry point (for example -- all this is off the cuff):

void glD3D9SurfaceTexture2DANGLEX(enum target, IDirect3DSurface9 *surface);

This would then use the currently bound texture on "target" as the destination, and could operate much like CopyTexImage2D does. This would completely redefine the GL texture internals (just like TexImage2D or CopyTexImage2D do), and take the contents from the provided surface. It's also *possible* that this could serve as a simple wrapper object around the D3D surface or texture and it could be used directly (ie zero-copy), but I don't know enough about DXVA2 to know what the properties of the objects it creates are.

Is the idea for this type of decoding to be used as an alternative to the YUV method of using L/LA textures and blending them in a shader, or for decoding different types of videos than might otherwise be possible?

It might be best to start a separate email thread to properly flesh out more of the details and requirements here.

Daniel
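Daniel's off-the-cuff entry point might behave roughly like this. Everything here is hypothetical (the entry-point name comes from his sketch, and the GL state machine is reduced to a toy binding table so the example is self-contained); the point is that the extension redefines whatever texture is currently bound on the given target, in the style of CopyTexImage2D:

```cpp
#include <cassert>
#include <map>

// Toy stand-ins; real code would use the GLES2 types and <d3d9.h>.
typedef unsigned int GLenum;
typedef unsigned int GLuint;
static const GLenum GL_TEXTURE_2D = 0x0DE1;
struct IDirect3DSurface9 {};

// A minimal model of the context's texture-binding state.
static std::map<GLenum, GLuint> g_bindings;
static std::map<GLuint, const IDirect3DSurface9*> g_textureContents;

void glBindTexture(GLenum target, GLuint texture) {
    g_bindings[target] = texture;
}

// Hypothetical extension from the thread: redefine the currently bound
// texture on `target` from a D3D9 surface, much like CopyTexImage2D.
void glD3D9SurfaceTexture2DANGLEX(GLenum target, IDirect3DSurface9* surface) {
    if (target != GL_TEXTURE_2D || !surface) return;  // GL-style silent error
    GLuint bound = g_bindings[target];
    if (bound == 0) return;  // refuse to redefine the default texture
    g_textureContents[bound] = surface;  // real code would copy/StretchRect
}
```

Because the destination is implicit in the binding state, no gl::Context internals ever cross the library boundary: the caller only touches public-style entry points.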
Answering Daniel's question.

The proposal has nothing to do with L/LA textures. This is for the hardware decoding path using DXVA2. DXVA2 outputs an IDirect3DSurface9 in NV12 format. We just need a renderable RGBA texture; StretchRect will handle the color space conversion internally. Kind of magic, but there's also a method to query whether a color space conversion is feasible: http://msdn.microsoft.com/en-us/library/bb174310(v=VS.85).aspx

L/LA textures are for the software decoding path, where we need to use a shader to do color space conversion. They don't need to be renderable in that case.

Alpha

On 2010-10-08 20:33, daniel@transgaming.com wrote:
> Specific issues of this patch aside, I really don't think this
> particular approach is a good idea. [...]
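The feasibility query Alpha mentions appears to be IDirect3D9::CheckDeviceFormatConversion. A Windows-only sketch (requires `<d3d9.h>`; the choice of X8R8G8B8 as the target format is an assumption) of probing whether the driver can StretchRect NV12 into an RGB render target:

```cpp
// Windows-only illustration; not portable beyond <d3d9.h> platforms.
#include <d3d9.h>

bool CanConvertNV12ToRGB(IDirect3D9* d3d) {
    // NV12 is a FOURCC surface format, not a predefined D3DFMT_* constant.
    const D3DFORMAT nv12 = (D3DFORMAT)MAKEFOURCC('N', 'V', '1', '2');
    HRESULT hr = d3d->CheckDeviceFormatConversion(
        D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
        nv12,              // DXVA2 decoder output format
        D3DFMT_X8R8G8B8);  // assumed render-target format
    return SUCCEEDED(hr);
}
```

Running this once at startup would let the code fall back to the shader-based L/LA path when the driver cannot do the conversion.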
I reviewed the EGL image extensions. I think they would solve the problem if they gave us a way to associate a raw IDirect3DSurface9 and/or IDirect3DTexture9 with a 2D GL texture. The former would most directly address the issue, but the latter might make more sense from an implementation standpoint. I think we don't need cube map, 3D image or renderbuffer support, and luckily they're separate extensions. Alpha, do you agree?

Al

On Mon, Oct 11, 2010 at 2:12 PM, Alpha (Hin-Chung) Lam <hclam@google.com> wrote:
> Answering Daniel's question.
>
> The proposal has nothing to do with L/LA texture. This is for hardware
> decoding path using DxVA2. [...]
I agree, and yes, we only need GL_TEXTURE_2D.

Alpha

On 2010-10-11 14:30, Alastair Patrick <apatrick@google.com> wrote:
> I reviewed the EGL image extensions. I think they would solve the problem
> if they gave us a way to associate a raw IDirect3DSurface9 and / or
> IDirect3DTexture9 with a 2D GL texture. [...]
Another thought... We could expose the D3D texture as a HANDLE so it could be shared between independent D3D devices. Then the video decoding could be done in a D3D device separate from the one ANGLE uses; it wouldn't be in danger of messing up ANGLE's D3D state, and ANGLE could safely blow away its D3D device at any time in order to reset after device lost. The Chrome video decoding code could be independently responsible for resetting or recreating its device on device lost.

I believe ANGLE's glFlush, as implemented, would already have sufficiently strong semantics to synchronize with another D3D device. IDirect3DQuery9::GetData with the flush flag seems to be how it's done.

This also opens up the possibility of duping the texture handle into another process, which might be useful for some applications.

Al

On Mon, Oct 11, 2010 at 2:35 PM, Alpha (Hin-Chung) Lam <hclam@google.com> wrote:
> I agree and yes we only need GL_TEXTURE_2D. [...]
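If I understand the Direct3D9Ex sharing mechanism correctly, the handle-sharing idea would look roughly like this. A Windows-only sketch with error handling elided; the texture parameters, variable names, and the X8R8G8B8 format are placeholders, not anything from the patch:

```cpp
// Windows-only illustration; requires <d3d9.h> and D3D9Ex devices.
#include <d3d9.h>

void ShareDecodedTexture(IDirect3DDevice9Ex* decoderDevice,
                         IDirect3DDevice9Ex* angleDevice,
                         UINT width, UINT height) {
    // Decoder side: create a shared render-target texture. D3D9Ex fills
    // in `shared` with a handle that other devices can open.
    HANDLE shared = nullptr;
    IDirect3DTexture9* decoderTex = nullptr;
    decoderDevice->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                                 D3DFMT_X8R8G8B8, D3DPOOL_DEFAULT,
                                 &decoderTex, &shared);

    // ANGLE side: passing the existing non-null handle opens the same
    // underlying resource instead of allocating a new one.
    IDirect3DTexture9* angleTex = nullptr;
    angleDevice->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                               D3DFMT_X8R8G8B8, D3DPOOL_DEFAULT,
                               &angleTex, &shared);

    // Cross-device synchronization in the style described above: issue an
    // event query on the producing device and drain it with a flushing
    // GetData, so the consumer sees completed work.
    IDirect3DQuery9* query = nullptr;
    decoderDevice->CreateQuery(D3DQUERYTYPE_EVENT, &query);
    query->Issue(D3DISSUE_END);
    while (query->GetData(nullptr, 0, D3DGETDATA_FLUSH) == S_FALSE) {
        // spin (real code would yield or time out)
    }
    query->Release();
}
```

The busy-wait is only for illustration; a production path would poll the query from its scheduler rather than spinning.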
Thanks for the clarifications.

WRT the EGLImage: yes, it might be overkill in this situation, since we don't really need to share with another Khronos API. It does seem to be the "proper" way of getting a native surface into GLES, though, and really we are sharing with another API -- it's just D3D / DXVA instead. It also seems like a convenient way to wrap a surface while getting it into GLES and avoid polluting the GLESv2 entry points with D3D9 data types. However, we could just use void pointers or something to solve that problem in a straight GLES extension.

As for sharing D3D textures, AFAIK that can only be done with Direct3D9Ex (ie Windows Vista+). It seems like DXVA2 might only be relevant there anyhow, so that might not be an issue.

Either way, my main recommendation is that it be done with a (documented!) extension mechanism, and not via a back door playing around in the context.

Hope this helps,
Daniel

On 2010-10-11, at 5:57 PM, Alastair Patrick wrote:
> Another thought... We could expose the D3D texture as a HANDLE so it could
> be shared between independent D3D devices. [...]

--- Daniel Koch -+- daniel@transgaming.com
Senior Graphics Architect -+- TransGaming Inc. -+- www.transgaming.com
The suggestion of sharing D3D textures steps into how we should do DxVA2 safely when running next to ANGLE. This is a great idea to isolate video decoding to enhance stablity. However I don't know the performance implication. I also have questions with sharing textures, are you thinking of sharing ANGLE texture to the video decoder D3D device and does a StretchRect or the other way round? We only use DxVA2 on Windows 7 and up through Media Foundation so Direct3D9Ex is not a concern. For the API, I'll take your suggestion of extension mechanism and change the patch. Does that sound right? Alpha 2010年10月12日11:29 Daniel Koch <daniel@transgaming.com>: > Thanks for the clarifications. > > WRT to the EGLImage, yes they might be overkill in this situation since we > don't really need to share with another Khronos API. It does seems to be > the "proper" way of getting a native surface into GLES though, and really we > are sharing with another API, just that it's D3D / DXVA instead. It also > seems like a convenient way to wrap a surface while getting it into GLES and > avoid polluting the GLESv2 entry-points with D3D9 data-types. However, we > could just use void pointers or something to solve that problem in a > straight GLES extension. > > As for sharing D3D Textures, AFAIK that can only be done with Direct3D9Ex > (ie Windows Vista+). It seems like DXVA2 might only be relevant there > anyhow, so that might not be an issue. > > Either way, my main recommendation is that it be done with a (documented!) > extension mechanism, and not via a back-door playing around in the context. > > Hope this helps, > Daniel > > > On 2010-10-11, at 5:57 PM, Alastair Patrick wrote: > > Another thought... We could expose the D3D texture as a HANDLE so it could > be shared between independent D3D devices. 
Then the video decoding could be > done in a D3D device separate from the one ANGLE uses and it wouldn't be in > danger of messing up ANGLE's D3D state, and ANGLE could safely blow away the > D3D device at any time in order to reset after device lost. The Chrome video > decoding code could be independently responsible for resetting or recreating > its device on device lost. > > I believe ANGLE's glFlush, as implemented, would already have sufficiently > strong semantics to synchronize with another D3D device. > IDirect3DQuery9::GetData with the flush flag seems to be how it's done. > > This also opens up the possibility of duping the texture handle into > another process, which might be useful for some applications. > > Al > > On Mon, Oct 11, 2010 at 2:35 PM, Alpha (Hin-Chung) Lam <hclam@google.com> wrote: > >> I agree and yes we only need GL_TEXTURE_2D. >> >> Alpha >> >> On October 11, 2010 at 14:30, Alastair Patrick <apatrick@google.com> wrote: >> >> I reviewed the EGL image extensions. I think they would solve the problem >>> if they gave us a way to associate a raw IDirect3DSurface9 and / or >>> IDirect3DTexture9 with a 2D GL texture. The former would most directly >>> address the issue but the latter might make more sense from an >>> implementation standpoint. I think we don't need cube map, 3D image or >>> renderbuffer support and luckily they're separate extensions. Alpha, do you >>> agree? >>> >>> Al >>> >>> >>> On Mon, Oct 11, 2010 at 2:12 PM, Alpha (Hin-Chung) Lam <hclam@google.com> wrote: >>> >>>> Answering Daniel's question. >>>> >>>> The proposal has nothing to do with L/LA textures. This is for the hardware >>>> decoding path using DxVA2. DxVA2 outputs an IDirect3DSurface9 in NV12 >>>> format. 
We just need a renderable RGBA texture; StretchRect will handle the >>>> color space conversion internally, which is kind of magical, but there's also a method >>>> to query whether a color space conversion is feasible: >>>> http://msdn.microsoft.com/en-us/library/bb174310(v=VS.85).aspx >>>> >>>> The L/LA textures are for the software decoding path, where we need to use a shader >>>> to do the color space conversion. They don't need to be renderable in that case. >>>> >>>> Alpha >>>> >>>> On October 8, 2010 at 20:33, daniel@transgaming.com wrote: >>>>> >>>>> On 2010/10/08 00:39:22, apatrick1 wrote: >>>>>> Daniel, what do you think of this approach to hardware decoding into a >>>>>> texture managed by ANGLE? >>>>>> The idea would be to cast EGLContext to ANGLE's internal gl::Context >>>>>> and then use the internal API (copySurfaceToTexture2D in this case) by making >>>>>> them virtual in the same way as GLESv2.dll calls into EGL.dll. >>>>>> I don't want to expose the D3D resources outside of ANGLE or allow >>>>>> Chrome to modify ANGLE's state. That would turn into a big mess. This approach >>>>>> keeps the management of the texture internal to ANGLE. >>>>>> Anyway, we need to decode video into textures and this was my >>>>>> suggestion. Any thoughts? >>>>> >>>>> Specific issues of this patch aside, I really don't think this >>>>> particular approach is a good idea. >>>>> Accessing the gl::Context directly from outside ANGLE is just going to >>>>> be brittle and will require a much tighter coupling between Chrome and >>>>> ANGLE than is currently used (possibly static linking?) or is likely >>>>> desired. >>>>> >>>>> I believe the best way of doing this would be to use something like an >>>>> EGLImage to wrap the D3D surface and then associate that with GL using >>>>> the EGLImageTargetTexture2DOES call. 
See the following extensions: >>>>> >>>>> http://www.khronos.org/registry/egl/extensions/KHR/EGL_KHR_image_base.txt >>>>> http://www.khronos.org/registry/egl/extensions/KHR/EGL_KHR_gl_image.txt >>>>> http://www.khronos.org/registry/gles/extensions/OES/OES_EGL_image.txt >>>>> Now all that seems fairly complex (at least to understand, not >>>>> necessarily to implement), and possibly more than is required at this >>>>> point. >>>>> >>>>> Failing that, the next best thing would be to do this as a proper >>>>> (ANGLE-specific) extension. >>>>> The entry point should be exposed via libGLESv2.cpp as a proper >>>>> extension entry point, and then no external application needs to go >>>>> mucking about with internal gl::Contexts directly. I'd expect that the >>>>> OES_EGL_image extension is a good place to get ideas from. You could >>>>> have the following entry point (for example -- all this is off the >>>>> cuff): >>>>> void glD3D9SurfaceTexture2DANGLEX(enum target, IDirect3DSurface9 >>>>> *surface); >>>>> This would then use the currently bound texture on "target" as the >>>>> destination, and could operate much like CopyTexImage2D does. This >>>>> would completely redefine the GL texture internals (just like TexImage2D >>>>> or CopyTexImage2D do), and take the contents from the provided surface. >>>>> It's also *possible* that this could serve as a simple wrapper object >>>>> around the D3D surface or texture and it could be used directly (ie >>>>> zero-copy), but I don't know enough about DXVA2 to know what the >>>>> properties of the objects it creates are. >>>>> >>>>> Is the idea for this type of decoding to be used as an alternative >>>>> to the YUV method of using L/LA textures and >>>>> blending them in a shader, or to decode types of videos >>>>> that might not otherwise be possible? 
>>>>> It might be best to start a separate email thread to properly flesh out >>>>> more of the details and requirements here. >>>>> >>>>> Daniel >>>>> >>>>> http://codereview.appspot.com/2381042/ > > --- > Daniel Koch -+- daniel@transgaming.com > Senior Graphics Architect -+- TransGaming Inc. -+- www.transgaming.com
On 2010-10-12, at 7:15 PM, Alpha (Hin-Chung) Lam wrote: > The suggestion of sharing D3D textures touches on how we should run DxVA2 safely alongside ANGLE. Isolating the video decoding is a great idea for stability, but I don't know the performance implications. I also have a question about I really have no experience with sharing of D3D textures, so I can't really comment on what the performance implications would be either. > sharing textures: are you thinking of sharing an ANGLE texture with the video decoder's D3D device and doing the StretchRect there, or the other way round? We only use DxVA2 on Windows 7 and up, through Media Foundation, so Direct3D9Ex is not a concern. Unknown. I'm not familiar enough with DxVA to know what would work with the APIs. > > For the API, I'll take your suggestion of an extension mechanism and change the patch. Does that sound right? Yes, that sounds like a good start. Daniel > > Alpha > > On October 12, 2010 at 11:29, Daniel Koch <daniel@transgaming.com> wrote: > Thanks for the clarifications. > > WRT the EGLImage, yes it might be overkill in this situation since we don't really need to share with another Khronos API. It does seem to be the "proper" way of getting a native surface into GLES though, and really we are sharing with another API, just that it's D3D / DXVA instead. It also seems like a convenient way to wrap a surface while getting it into GLES and avoid polluting the GLESv2 entry points with D3D9 data types. However, we could just use void pointers or something to solve that problem in a straight GLES extension. > > As for sharing D3D textures, AFAIK that can only be done with Direct3D9Ex (ie Windows Vista+). It seems like DXVA2 might only be relevant there anyhow, so that might not be an issue. > > Either way, my main recommendation is that it be done with a (documented!) extension mechanism, and not via a back door playing around in the context. 
> > Hope this helps, > Daniel > > > On 2010-10-11, at 5:57 PM, Alastair Patrick wrote: > >> Another thought... We could expose the D3D texture as a HANDLE so it could be shared between independent D3D devices. Then the video decoding could be done in a D3D device separate from the one ANGLE uses and it wouldn't be in danger of messing up ANGLE's D3D state, and ANGLE could safely blow away the D3D device at any time in order to reset after device lost. The Chrome video decoding code could be independently responsible for resetting or recreating its device on device lost. >> >> I believe ANGLE's glFlush, as implemented, would already have sufficiently strong semantics to synchronize with another D3D device. IDirect3DQuery9::GetData with the flush flag seems to be how it's done. >> >> This also opens up the possibility of duping the texture handle into another process, which might be useful for some applications. >> >> Al >> >> On Mon, Oct 11, 2010 at 2:35 PM, Alpha (Hin-Chung) Lam <hclam@google.com> wrote: >> I agree and yes we only need GL_TEXTURE_2D. >> >> Alpha >> >> On October 11, 2010 at 14:30, Alastair Patrick <apatrick@google.com> wrote: >> >> I reviewed the EGL image extensions. I think they would solve the problem if they gave us a way to associate a raw IDirect3DSurface9 and / or IDirect3DTexture9 with a 2D GL texture. The former would most directly address the issue but the latter might make more sense from an implementation standpoint. I think we don't need cube map, 3D image or renderbuffer support and luckily they're separate extensions. Alpha, do you agree? >> >> Al >> >> >> On Mon, Oct 11, 2010 at 2:12 PM, Alpha (Hin-Chung) Lam <hclam@google.com> wrote: >> Answering Daniel's question. >> >> The proposal has nothing to do with L/LA textures. This is for the hardware decoding path using DxVA2. DxVA2 outputs an IDirect3DSurface9 in NV12 format. 
We just need a renderable RGBA texture; StretchRect will handle the color space conversion internally, which is kind of magical, but there's also a method to query whether a color space conversion is feasible: http://msdn.microsoft.com/en-us/library/bb174310(v=VS.85).aspx. >> >> The L/LA textures are for the software decoding path, where we need to use a shader to do the color space conversion. They don't need to be renderable in that case. >> >> Alpha >> >> On October 8, 2010 at 20:33, daniel@transgaming.com wrote: >> >> On 2010/10/08 00:39:22, apatrick1 wrote: >> >> Daniel, what do you think of this approach to hardware decoding into a >> texture managed by ANGLE? >> >> The idea would be to cast EGLContext to ANGLE's internal gl::Context >> and then use the internal API (copySurfaceToTexture2D in this case) by making >> them virtual in the same way as GLESv2.dll calls into EGL.dll. >> >> I don't want to expose the D3D resources outside of ANGLE or allow >> Chrome to modify ANGLE's state. That would turn into a big mess. This approach >> keeps the management of the texture internal to ANGLE. >> >> Anyway, we need to decode video into textures and this was my >> suggestion. Any thoughts? >> >> Specific issues of this patch aside, I really don't think this >> particular approach is a good idea. >> Accessing the gl::Context directly from outside ANGLE is just going to >> be brittle and will require a much tighter coupling between Chrome and >> ANGLE than is currently used (possibly static linking?) or is likely >> desired. >> >> I believe the best way of doing this would be to use something like an >> EGLImage to wrap the D3D surface and then associate that with GL using >> the EGLImageTargetTexture2DOES call. 
See the following extensions: >> >> http://www.khronos.org/registry/egl/extensions/KHR/EGL_KHR_image_base.txt >> http://www.khronos.org/registry/egl/extensions/KHR/EGL_KHR_gl_image.txt >> http://www.khronos.org/registry/gles/extensions/OES/OES_EGL_image.txt >> Now all that seems fairly complex (at least to understand, not >> necessarily to implement), and possibly more than is required at this >> point. >> >> Failing that, the next best thing would be to do this as a proper >> (ANGLE-specific) extension. >> The entry point should be exposed via libGLESv2.cpp as a proper >> extension entry point, and then no external application needs to go >> mucking about with internal gl::Contexts directly. I'd expect that the >> OES_EGL_image extension is a good place to get ideas from. You could >> have the following entry point (for example -- all this is off the >> cuff): >> void glD3D9SurfaceTexture2DANGLEX(enum target, IDirect3DSurface9 >> *surface); >> This would then use the currently bound texture on "target" as the >> destination, and could operate much like CopyTexImage2D does. This >> would completely redefine the GL texture internals (just like TexImage2D >> or CopyTexImage2D do), and take the contents from the provided surface. >> It's also *possible* that this could serve as a simple wrapper object >> around the D3D surface or texture and it could be used directly (ie >> zero-copy), but I don't know enough about DXVA2 to know what the >> properties of the objects it creates are. >> >> Is the idea for this type of decoding to be used as an alternative >> to the YUV method of using L/LA textures and >> blending them in a shader, or to decode types of videos >> that might not otherwise be possible? >> >> It might be best to start a separate email thread to properly flesh out >> more of the details and requirements here. 
>> >> Daniel >> >> >> http://codereview.appspot.com/2381042/ >> --- Daniel Koch -+- daniel@transgaming.com Senior Graphics Architect -+- TransGaming Inc. -+- www.transgaming.com