Search Results

Search found 167 results on 7 pages for 'rgba'.

Page 5 of 7

  • How to achieve a palette effect on iPhone using OpenGL

    - by Joe
    I'm porting a 2D retro game to iPhone that has the following properties: it targets OpenGL ES 1.1; the entire screen is filled with tiles (a textured triangle strip); the tiles are textured using a single 256x256 RGBA texture image; the texture is passed to OpenGL once at the start of the game; only 4 displayed colours are used; and one of the displayed colours is black. The original game flashed the screen when time starts to run out by toggling the black pixels to white using an indexed palette. What is the best (i.e. most efficient) way to achieve this in OpenGL ES 1.1? My thoughts so far: (1) generate an alternative texture with white instead of black pixels, and pass it to OpenGL when the screen is flashing; (2) render a white poly underneath the background and render the texture with alpha on top to reveal it; (3) try to render a poly on top with some blending that achieves the effect (not sure this is possible). I'm fairly new to OpenGL, so I'm not sure what the performance drawbacks of each of these are, or whether there's a better way of doing this.
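
    A minimal sketch of the first option (two pre-uploaded textures, swapped at flash time), assuming OpenGL ES 1.1 on iOS; the names tex_normal, tex_flash and time_running_out are illustrative rather than taken from the question, and this is only one of several workable approaches:

      /* Both textures are uploaded once at startup; per frame we simply bind
         whichever one is needed, so the flash costs one extra texture in
         memory and no extra draw calls. */
      #include <OpenGLES/ES1/gl.h>

      extern GLuint tex_normal;        /* the ordinary 256x256 RGBA atlas        */
      extern GLuint tex_flash;         /* the same atlas with black turned white */
      extern int    time_running_out;  /* set by the game's timer logic          */

      void draw_tiles(const GLfloat *verts, const GLfloat *uvs, int vertex_count)
      {
          glBindTexture(GL_TEXTURE_2D, time_running_out ? tex_flash : tex_normal);

          glEnableClientState(GL_VERTEX_ARRAY);
          glEnableClientState(GL_TEXTURE_COORD_ARRAY);
          glVertexPointer(2, GL_FLOAT, 0, verts);
          glTexCoordPointer(2, GL_FLOAT, 0, uvs);
          glDrawArrays(GL_TRIANGLE_STRIP, 0, vertex_count);
          glDisableClientState(GL_TEXTURE_COORD_ARRAY);
          glDisableClientState(GL_VERTEX_ARRAY);
      }

    Since both textures live on the GPU from startup, the per-frame cost of the flash is just the texture bind.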

    Read the article

  • Downloading PKPass in an iOS custom app from my server

    - by taurus
    I've set up a server which returns a PKPass. If I copy the URL into the browser, the pass is shown (both on my Mac and on my iPhone). The code I'm using to download the pass is the following: NSData *data = [[NSData alloc] initWithContentsOfURL:[NSURL URLWithString:kAPIPass]]; if (nil != data) { PKPass *pass = [[PKPass alloc] initWithData:data error:nil]; PKAddPassesViewController *pkvc = [[PKAddPassesViewController alloc] initWithPass:pass]; pkvc.delegate = self; [self presentViewController:pkvc animated:YES completion:^{ // Do any cleanup here } ]; } Anyway, when I run this code I get the following error: * Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Only support RGBA or the White color space, this method is a hack.' I don't know where the bug is... The pass seems fine when I download it with Safari, and even the code seems fine (there are just 3 simple lines...). Could someone experienced with PassKit help me? EDIT: the weird thing is that the exact same code works in a fresh new project.

    Read the article

  • Manipulating pixels using only toDataURL

    - by Chris
    The problem I have is this: I need to be able to dynamically tint an image using JavaScript, but I cannot access pixel data via the canvas. I can, however, store the dataURL (or any other text-based data format) and include that with the code, manipulate that data, and then create an image object using that dataURL. My question is: how can I access the RGBA value of each pixel, given only the dataURL? I assume I need to decode the base64 URL, but into what format in order to manipulate it on the pixel level? And then would it be as trivial as re-encoding it as base64, slapping it in a URL, and passing that to an image? Thanks.
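
    Decoding the data URL only gets as far as the encoded file bytes; a sketch of that first step is below (plain browser JavaScript, no canvas). Getting from those bytes to individual RGBA values would still need a pure-JavaScript PNG or JPEG decoder, which is assumed here rather than shown:

      // Strip the "data:image/png;base64," prefix and decode the base64
      // payload into a byte array.  These are the *encoded* file bytes,
      // not a pixel array.
      function dataUrlToBytes(dataUrl) {
        var base64 = dataUrl.split(',')[1];   // keep only the payload
        var binary = atob(base64);            // base64 -> binary string
        var bytes = new Uint8Array(binary.length);
        for (var i = 0; i < binary.length; i++) {
          bytes[i] = binary.charCodeAt(i);
        }
        return bytes;
      }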

    Read the article

  • iPhone+Quartz+OpenGL. What is the correct way for Quartz and OpenGL to play nice together regarding

    - by dugla
    So we know the CoreGraphics/Quartz imaging model is based on pre-multiplied alpha. We also know that OpenGL blending is based on un-premultiplied alpha. What is the best practice to avoid head explosion when doing blending with textures that are derived from pre-multiplied alpha imagery (PNG files generated in Photoshop with pre-multiplied alpha). Given the apples/oranges mish mash of Quartz and OpenGL, what is the correct glBlendFunc for doing the fundamental Porter/Duff "over" operation? Typical example: A simple paint program. Brush shapes are texture-map patterns created from pre-multiplied alpha rgba images. Paint color is specified via glColor4(...) with the alpha channel used to control paint transparency. GL_MODULATE is used so the brush texture multiplies the (translucent) paint color to blend the color into the canvas. Problem: The texture is premult. The color is not. What is the correct way to handle this fundamental inconsistency? Thanks, Doug
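
    One common way to reconcile the two models, offered as a sketch rather than the definitive recipe: keep everything premultiplied. The texture already is, so premultiply the brush colour by hand before handing it to glColor4f and use the premultiplied form of Porter/Duff "over" (the colour values below are placeholders):

      /* Premultiplied "over": the source already carries colour*alpha, so the
         source factor is GL_ONE instead of GL_SRC_ALPHA. */
      GLfloat r = 1.0f, g = 0.5f, b = 0.2f;   /* brush colour (example values) */
      GLfloat a = 0.35f;                      /* paint transparency            */

      glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
      glColor4f(r * a, g * a, b * a, a);      /* premultiply the paint colour  */

      glEnable(GL_BLEND);
      glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

    Modulating one premultiplied value by another stays premultiplied (each channel ends up as colour times alpha of the product), so the single blend function covers both the texture and the tint.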

    Read the article

  • NSBitmapImageRep data Format as application icon image??

    - by Joe
    I have a char* array of data that was in RGBA and then moved to ARGB. The bottom line is that the resulting application icon image looks totally messed up, and I can't put my finger on why. //create a bitmap representation of the image data. //The data is expected to be unsigned char** NSBitmapImageRep *bitmap = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes : (unsigned char**) &dest pixelsWide:width pixelsHigh:height bitsPerSample:8 samplesPerPixel:4 hasAlpha:YES isPlanar:NO colorSpaceName:NSDeviceRGBColorSpace bitmapFormat:NSAlphaFirstBitmapFormat bytesPerRow: 0 bitsPerPixel:0 ]; NSImage *image = [[NSImage alloc] initWithSize:NSMakeSize(width, height)]; [image addRepresentation:bitmap]; if( image == NULL) { printf("image is null\n"); fflush(stdout); } [NSApp setApplicationIconImage :image]; What in these values is off? The image looks very multicolored and pixelated, with transparent parts/lines as well.

    Read the article

  • Css3 Transition on background transparent not working in Chrome 5

    - by Ricardo Koch
    I'm trying to create an animation using a CSS3 transition. The animation is a gradient background that should change its color (rgba). I used the -webkit- prefix for the gradient and it's working in Chrome 5.0.375.55. Looking at the W3C site, I see that "background-image - only gradients" is supported for transitions (http://www.w3.org/TR/css3-transitions/). But I can only animate the background-color property with this version of Chrome; with a gradient the transition does not work. Has anyone managed to create an animation with background gradients?
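
    One workaround for that Chrome build, sketched with assumed selectors and colours: keep the gradient static and semi-transparent and transition only the solid rgba() background-color layered underneath it, so the visible gradient appears to change colour:

      /* The translucent gradient sits on top of a solid rgba() colour; only
         the colour is transitioned. */
      .fade-box {
        background-color: rgba(200, 40, 40, 0.7);
        background-image: -webkit-gradient(linear, left top, left bottom,
                            from(rgba(255, 255, 255, 0.35)),
                            to(rgba(0, 0, 0, 0.35)));
        -webkit-transition: background-color 1s ease-in-out;
      }
      .fade-box:hover {
        background-color: rgba(40, 40, 200, 0.7);
      }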

    Read the article

  • Why does UMN MapServer show an ERDAS image file (.img) as a white shape?

    - by Mnementh
    I want to render an ERDAS image file (suffix .img) with UMN MapServer. The data is rendered in the right position and with the correct shape, but it is all white instead of a raster image. The image contains many layers. My mapfile looks like this: MAP NAME "Test" WEB METADATA "wms_title" "test" "WMS_SRS" "epsg:31466 epsg:31467 epsg:31468 epsg:31469 epsg:4326 epsg:25832 epsg:3035" END LOG "test.log" IMAGEPATH "." END SHAPEPATH "." PROJECTION "init=epsg:32632" END LAYER NAME "testlayer" TYPE RASTER DATA "test.img" STATUS ON OFFSITE 0 0 0 END OUTPUTFORMAT NAME png DRIVER "GD/PNG" MIMETYPE "image/png" IMAGEMODE RGBA END END

    Read the article

  • Hiding Text in ie7

    - by user356849
    So I have this text generated by a javascript plugin. <a class="className">Text</a> a.className { background: url(images/a-image.png) no-repeat; } But the "Text" shows on top of the image... Now... with any respectable web browser, I can use color: rgba(0,0,0,0); to solve the problem, but IE7 doesn't obey standards of any sort. Any ideas?
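
    One IE7-friendly alternative, sketched with placeholder dimensions: the classic text-indent image-replacement pattern, which hides the text without relying on rgba() at all:

      /* Push the link text far off-screen; the background image stays
         visible and the link stays clickable. */
      a.className {
        background: url(images/a-image.png) no-repeat;
        display: block;
        width: 120px;          /* assumed image width  */
        height: 40px;          /* assumed image height */
        text-indent: -9999px;
        overflow: hidden;
      }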

    Read the article

  • Bitmap manipulation in C++ on Windows

    - by Oliver
    Hi, I have a handle to a bitmap, in C++, on Windows: HBITMAP hBitmap; On this image I want to do some image recognition, pattern analysis, that sort of thing. In my studies at university I did this in MATLAB, where it is quite easy to get at the individual pixels based on their position, but I have no idea how to do this in C++ under Windows - I haven't really been able to understand what I have read so far. I have seen some references to a nice-looking Bitmap class that lets you setPixel() and getPixel() and that sort of thing, but I think that is .NET. How should I go about turning my HBITMAP into something I can play with easily? I need to be able to get at the RGBA information. Are there libraries that let me work with the data without having to learn about DCs and BitBlt and that sort of thing?
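
    Without pulling in a separate imaging library, one plain Win32/GDI sketch is to ask GetDIBits for a top-down 32-bit copy of the bitmap and index the buffer directly; error handling is trimmed and the helper below is illustrative, not part of any particular library:

      #include <windows.h>
      #include <vector>

      // Copies an HBITMAP into a 32-bpp, top-down pixel buffer.  Each pixel is
      // 4 bytes in BGRA order; pixel (x, y) starts at pixels[(y * width + x) * 4].
      std::vector<BYTE> GetPixels(HBITMAP hBitmap, int& width, int& height)
      {
          BITMAP bm = {};
          GetObject(hBitmap, sizeof(bm), &bm);
          width  = bm.bmWidth;
          height = bm.bmHeight;

          BITMAPINFO bi = {};
          bi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
          bi.bmiHeader.biWidth       = width;
          bi.bmiHeader.biHeight      = -height;   // negative => top-down rows
          bi.bmiHeader.biPlanes      = 1;
          bi.bmiHeader.biBitCount    = 32;
          bi.bmiHeader.biCompression = BI_RGB;

          std::vector<BYTE> pixels(static_cast<size_t>(width) * height * 4);
          HDC hdc = GetDC(NULL);
          GetDIBits(hdc, hBitmap, 0, height, pixels.data(), &bi, DIB_RGB_COLORS);
          ReleaseDC(NULL, hdc);
          return pixels;
      }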

    Read the article

  • How to initialize an array of structures within a function?

    - by drtwox
    In the make_quad() function below, how do I set the default values for the vertex_color array in the quad_t structure? /* RGBA color */ typedef { uint8_t r,g,b,a; } rgba_t; /* Quad polygon - other members removed */ typedef { rgba_t vertex_color[ 4 ] } quad_t; Elsewhere, a function to make and init a quad: quad_t *make_quad() { quad_t *quad = malloc( sizeof( quad_t ) ); quad->vertex_color = ??? /* What goes here? */ return ( quad ); } Obviously I can do it like this: quad->vertex_color[ 0 ] = { 0xFF, 0xFF, 0xFF, 0xFF }; ... quad->vertex_color[ 3 ] = { 0xFF, 0xFF, 0xFF, 0xFF }; but this: quad->vertex_color = { { 0xFF, 0xFF, 0xFF, 0xFF }, { 0xFF, 0xFF, 0xFF, 0xFF }, { 0xFF, 0xFF, 0xFF, 0xFF }, { 0xFF, 0xFF, 0xFF, 0xFF } }; ...results in "error: expected expression before '{' token".
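
    An array member can't be assigned to as a whole after malloc, but the struct that contains it can be copied. A sketch using a static template and C99 designated initializers (the struct keywords are added below on the assumption they were dropped in transcription):

      #include <stdint.h>
      #include <stdlib.h>

      typedef struct { uint8_t r, g, b, a; } rgba_t;
      typedef struct { rgba_t vertex_color[4]; } quad_t;

      /* A static template carries the defaults; struct assignment copies the
         whole vertex_color array in one go. */
      static const quad_t quad_defaults = {
          .vertex_color = {
              { 0xFF, 0xFF, 0xFF, 0xFF },
              { 0xFF, 0xFF, 0xFF, 0xFF },
              { 0xFF, 0xFF, 0xFF, 0xFF },
              { 0xFF, 0xFF, 0xFF, 0xFF },
          }
      };

      quad_t *make_quad(void)
      {
          quad_t *quad = malloc(sizeof *quad);
          if (quad != NULL)
              *quad = quad_defaults;   /* copies vertex_color[0..3] */
          return quad;
      }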

    Read the article

  • Css Multiple background-color

    - by Khanh TO
    I'm a novice in CSS and it's difficult to search for this specific case on the internet, so I'm posting a question here. I'm working on an existing code base and I see something like this: li { background-color: #000 \9; background-color: rgba(0, 0, 0, 0); } I don't understand the meaning of \9, but it looks to me like they are duplicates and I should remove one of them. Could you please explain the \9, and should I remove one of them? Thanks.

    Read the article

  • Simple graphics API with transparency, polygons, reading image pixels?

    - by M. Elkstein
    I need a simple graphics library that supports the following functionality: Ability to draw polygons (not just rectangles!) with RGBA colors (i.e., partially transparent), Ability to load bitmap images, Ability to read current color of pixel in a given coordinate. Ideally using JavaScript or Python. Seems like HTML 5 Canvas can handle #2 and #3 but not #1, whereas SVG can handle #1 and #2 but not #3. Am I missing something (about either of these two)? Or are there other alternatives?

    Read the article

  • css position when resizing browser

    - by user478636
    When resizing the browser I noticed that all the elements get out of place and the website layout gets distorted. This also occurs with low resolutions. Is this because I have used position:relative;? How can I make the page elements not move from their position when resizing? body{ background:url(../img/bg-silver.jpg) #F2F2F2; font-family:"Lucida Sans Unicode", "Lucida Grande", sans-serif; font-size:11px; line-height:18px; color:#636363; margin-top:10%; } #containerHolder { background: #eee; padding: 5px; position:relative; } #container { background: #fff; background:rgba(245,245,245,0.8); border: 1px solid #ddd; } #main { margin: 0 0 0 20px; padding: 0 19px 0 0; }

    Read the article

  • how to dynamically recolor a CGGradientRef

    - by saintmac
    I have three CGGradientRefs that I need to be able to dynamically recolor. When I initialise the CGGradientRefs the first time I get the expected result, but every time I attempt to change the colors nothing happens. Why? gradient is an instance variable in a subclass of CALayer: @interface GradientLayer : CALayer { CGGradientRef gradient; //other stuff } @end Code: if (gradient != NULL) { CGGradientRelease(gradient); gradient = NULL; } RGBA color[360]; //set up array CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB(); gradient = CGGradientCreateWithColorComponents ( rgb, color, NULL, sizeof (color) / sizeof (color[0]) ); CGColorSpaceRelease(rgb); [self setNeedsDisplay];

    Read the article

  • Toastr.js notifications as modal notification

    - by Maxsteel
    I know it's not what toastr (or toast notifications in general) is meant to be used for, but I want to use it as a modal notification. My idea is the following. On toast show: toastr.options.onShown = function() { //Create an overlay on the entire page} Overlay: #overlay { background-color: rgba(0, 0, 0, 0.8); z-index: 999; position: absolute; left: 0; top: 0; width: 100%; height: 100%; display: none; } And on toast close: toastr.options.onHidden = function() { //make overlay go away } Also, I'm setting the toast's timeout to 0 so it won't disappear by itself. Question: I want the toast notification to stay atop the overlay and not behind it, as the overlay will cover everything. How can I do it?
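
    A sketch of the stacking part, assuming toastr's default container id #toast-container is unchanged: keep the overlay's z-index below the toast container's. (toastr's own stylesheet already gives the container a very high z-index; the rule below just makes the relationship explicit.)

      #overlay {
        position: fixed;          /* fixed keeps it over the viewport on scroll */
        left: 0;
        top: 0;
        width: 100%;
        height: 100%;
        background-color: rgba(0, 0, 0, 0.8);
        z-index: 999;
      }
      #toast-container {
        z-index: 1000;            /* above the overlay, so the toast stays on top */
      }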

    Read the article

  • What is the REGEX of a CSS selector

    - by user421563
    I'd like to parse a CSS file and add another selector before every CSS selector. From: p{margin:0 0 10px;} .lead{margin-bottom:20px;font-size:21px;font-weight:200;line-height:30px;} I'd like: .mySelector p{margin:0 0 10px;} .mySelector .lead{margin-bottom:20px;font-size:21px;font-weight:200;line-height:30px;} But my CSS file is really complex (in fact it is the Bootstrap CSS file), so the regex should match all CSS selectors. For now I have this regex: ([^\r\n,{};]+)(,|{) and you can see the result here: http://regexr.com?328ps but as you can see there are a lot of matches that shouldn't match; for example, text-shadow:0 -1px 0 rgba(0, matches but it shouldn't. Does someone have a solution? Thanks.
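
    One sketch in JavaScript (regexr suggests a JS-flavoured regex is intended): match each selector list together with its {...} block, so that text inside the declaration braces can never look like a selector. It deliberately ignores @media blocks and comments, which the Bootstrap file does contain, so treat it as a starting point only:

      // Capture "everything up to a {" together with its "{...}" block, so
      // rgba(0, 0, 0, ...) inside declarations is never treated as a selector.
      function prefixSelectors(css, prefix) {
        return css.replace(/([^{}]+)(\{[^{}]*\})/g, function (match, selectors, block) {
          var prefixed = selectors.split(',').map(function (s) {
            return prefix + ' ' + s.trim();
          }).join(', ');
          return prefixed + block;
        });
      }

      // Example:
      // prefixSelectors('p{margin:0 0 10px;} .lead{font-size:21px;}', '.mySelector')
      // => '.mySelector p{margin:0 0 10px;}.mySelector .lead{font-size:21px;}'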

    Read the article

  • CSS issue when resizing the window

    - by zlippr
    Hi, I am having an issue with CSS: when I resize the window, the div is not placed properly. This is the CSS involving the div: .similar_story_block_form { background-color: white; border: 1px solid #CCCCCC; border-radius: 0 0 12px 12px; box-shadow: 0 4px 14px 0 rgba(0, 0, 0, 0.8); font-size: 13px; left: 337px; position: absolute; top: 105px; width: 337px; z-index: 100; } Currently I use Chrome, which gives me the same issue. What am I missing here? Thanks for your help.

    Read the article

  • CSS background color is different in IE vs FF

    - by Mike Ozark
    In FF it works as intended (it puts a light transparent ribbon at the bottom of the image for the caption), but in IE it's totally black (the caption does show). .caption { z-index:30; position:absolute; bottom:-35px; left:0; height:30px; padding:5px 20px 0 20px; background:#000; background:rgba(0,0,0,.5); width:300px; font-size:1.0em; line-height:1.33; color:#fff; border-top:1px solid #000; text-shadow:none; }
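
    IE 8 and below ignore the rgba() declaration and fall back to the opaque background:#000, which is the usual explanation for the solid black ribbon. One widely used fallback, sketched here with a 50%-black #AARRGGBB value (the other .caption declarations are omitted for brevity), is the proprietary gradient filter:

      .caption {
        background: transparent;           /* keeps IE from painting opaque #000 */
        background: rgba(0, 0, 0, .5);     /* browsers that understand rgba()    */
        filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#80000000', endColorstr='#80000000');
        zoom: 1;                           /* gives the element "layout" in IE   */
      }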

    Read the article

  • CSS: reposition element on hover state but maintain clickable area position

    - by abirduphigh
    I'm trying to create the effect of a button that 'lifts' from the page when rolled over. Using CSS, I have a block style <a> element that, when hovered, re-positions itself up and to the left 5px, and a shadow is left behind: a { display: inline-block; position: relative; } a:hover { top: -5px; left: -5px; box-shadow: rgba(0,0,0,.2) 5px 5px 2px; } The problem: When the <a> block jumps 5px away from the cursor during the hover, the cursor is no longer actually hovering over the block and the block then jumps back when the cursor is moved only slightly thereafter. How can I maintain the original hover area so that the element doesn't keep jumping back and forth when the cursor is only slightly moved? I'd like to avoid adding superfluous container elements to my code if at all possible.
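
    One sketch that avoids extra wrapper elements (the values mirror the 5px shift in the question; treat it as a starting point rather than the answer): grow the padding on the trailing edges by the same amount the box moves, so the hit area still covers the pixel the cursor is sitting on:

      /* Base rule stays as in the question; only the :hover block changes.
         The extra padding keeps the pointer inside the box after the -5px
         shift, and the negative margins cancel the layout growth.  The
         shadow and background will cover that extra 5px strip, which may
         need visual tweaking. */
      a:hover {
        top: -5px;
        left: -5px;
        padding: 0 5px 5px 0;    /* grow the hit area down and to the right */
        margin: 0 -5px -5px 0;   /* so the surrounding content does not shift */
        box-shadow: rgba(0, 0, 0, .2) 5px 5px 2px;
      }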

    Read the article

  • UV texture mapping with perspective correct interpolation

    - by Twodordan
    I am working on a software rasterizer for educational purposes and I am having issues with the texturing. The problem is, only one face of the cube gets correctly textured. The rest are stretched edges: You can see the running program online here. I have used cartesian coordinates, and all I do is interpolate the uv values along the scanlines. The general formula I use for interpolating the uv coordinates is pretty much the one I use for the z-buffering interpolation and looks like this (in this case for horizontal scanlines): u_Slope = (right.u - left.u) / (triangleRight_x - triangleLeft_x); v_Slope = (right.v - left.v) / (triangleRight_x - triangleLeft_x); //[...] new_u = left.u + ((currentX_onScanLine - triangleLeft_x) * u_Slope); new_v = left.v + ((currentX_onScanLine - triangleLeft_x) * v_Slope); Then, when I add each point to the pixel buffer, I restore z and uv: z = (1/z); uv.u = Math.round(uv.u * z *100);//*100 because my texture is 100x100px uv.v = Math.round(uv.v * z *100); Then I turn the u v indexes into one index in order to fetch the correct pixel from the image data (which is a 1 dimensional px array): var index = texture.width * uv.u + uv.v; //and the rest is unimportant imagedata[index].RGBA bla bla The interpolation formula is correct considering the consistency of the texture (including the straight stripes). However, I seem to get quite a lot of 0 values for either u or v. Which is probably why I only get one face right. Furthermore, why is the texture flipped horizontally? (the "1" is flipped) I must get some sleep now, but before I get into further dissecting of every single value to see what goes wrong, Can someone more experienced guess why might this be happening, just by looking at the cube? "I have no idea what I'm doing" (it's my first time implementing a rasterizer). Did I miss an important stage? Thanks for any insight. PS: My UV values are as follows: { u:0, v:0 }, { u:0, v:0.5 }, { u:0.5, v:0.5 }, { u:0.5, v:0 }, { u:0, v:0 }, { u:0, v:0.5 }, { u:0.5, v:0.5 }, { u:0.5, v:0 }
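
    For reference, the usual perspective-correct recipe, sketched in the same spirit as the snippet above with illustrative names: interpolate u/z, v/z and 1/z linearly across the scanline and divide back at each pixel, rather than interpolating u and v directly:

      // left/right carry screen x, camera-space z and the vertex uv.
      // Everything interpolated linearly in screen space is first divided by z.
      function scanlineUV(left, right, x) {
        var t = (x - left.x) / (right.x - left.x);        // 0..1 along the scanline

        var invZ   = lerp(1 / left.z,       1 / right.z,       t);
        var uOverZ = lerp(left.u / left.z,  right.u / right.z, t);
        var vOverZ = lerp(left.v / left.z,  right.v / right.z, t);

        var z = 1 / invZ;                   // recovered depth at this pixel
        return { u: uOverZ * z, v: vOverZ * z, z: z };
      }

      function lerp(a, b, t) { return a + (b - a) * t; }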

    Read the article

  • Bad font anti-aliasing in Ubuntu

    - by Juliano
    I'm switching from Fedora 8 to Ubuntu 9.04, and I can't seem to get good font anti-aliasing to work. It seems that Ubuntu's fontconfig tries to keep characters in integral pixel widths. This makes text more difficult to read, when 1 pixel is too thin and 2 pixels is too thick. Check the image below. In Fedora, when fontconfig anti-aliasing is enabled, fonts have their thickness proportional to the font size. Below, the thickness is different for 8, 9 and 10pt sizes. In Ubuntu, on the other hand, even when anti-aliasing is enabled, all 8, 9 and 10pt sizes have 1 pixel thickness. This makes reading large amounts of text difficult. I'm using the very same home directory, and I already checked that X resources are the same in both systems: ~% xrdb -query | grep Xft Xft.antialias: 1 Xft.dpi: 96 Xft.hinting: 1 Xft.hintstyle: hintfull Xft.rgba: none GNOME settings: ~% gconftool-2 -a /desktop/gnome/font_rendering antialiasing = grayscale hinting = full dpi = 96 rgba_order = rgb So, the question is: What should I change in the new box (Ubuntu) in order to get anti-aliasing like in the old box (Fedora)?

    Read the article

  • Texture displays on Android emulator but not on device

    - by Rob
    I have written a simple UI which takes an image (256x256) and maps it to a rectangle. This works perfectly on the emulator however on the phone the texture does not show, I see only a white rectangle. This is my code: public void onSurfaceCreated(GL10 gl, EGLConfig config) { byteBuffer = ByteBuffer.allocateDirect(shape.length * 4); byteBuffer.order(ByteOrder.nativeOrder()); vertexBuffer = byteBuffer.asFloatBuffer(); vertexBuffer.put(cardshape); vertexBuffer.position(0); byteBuffer = ByteBuffer.allocateDirect(shape.length * 4); byteBuffer.order(ByteOrder.nativeOrder()); textureBuffer = byteBuffer.asFloatBuffer(); textureBuffer.put(textureshape); textureBuffer.position(0); // Set the background color to black ( rgba ). gl.glClearColor(0.0f, 0.0f, 0.0f, 0.5f); // Enable Smooth Shading, default not really needed. gl.glShadeModel(GL10.GL_SMOOTH); // Depth buffer setup. gl.glClearDepthf(1.0f); // Enables depth testing. gl.glEnable(GL10.GL_DEPTH_TEST); // The type of depth testing to do. gl.glDepthFunc(GL10.GL_LEQUAL); // Really nice perspective calculations. gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_NICEST); gl.glEnable(GL10.GL_TEXTURE_2D); loadGLTexture(gl); } public void onDrawFrame(GL10 gl) { gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT); gl.glDisable(GL10.GL_DEPTH_TEST); gl.glMatrixMode(GL10.GL_PROJECTION); // Select Projection gl.glPushMatrix(); // Push The Matrix gl.glLoadIdentity(); // Reset The Matrix gl.glOrthof(0f, 480f, 0f, 800f, -1f, 1f); gl.glMatrixMode(GL10.GL_MODELVIEW); // Select Modelview Matrix gl.glPushMatrix(); // Push The Matrix gl.glLoadIdentity(); // Reset The Matrix gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY); gl.glLoadIdentity(); gl.glTranslatef(card.x, card.y, 0.0f); gl.glBindTexture(GL10.GL_TEXTURE_2D, texture[0]); //activates texture to be used now gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertexBuffer); gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer); gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4); gl.glDisableClientState(GL10.GL_VERTEX_ARRAY); gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY); } public void onSurfaceChanged(GL10 gl, int width, int height) { // Sets the current view port to the new size. 
gl.glViewport(0, 0, width, height); // Select the projection matrix gl.glMatrixMode(GL10.GL_PROJECTION); // Reset the projection matrix gl.glLoadIdentity(); // Calculate the aspect ratio of the window GLU.gluPerspective(gl, 45.0f, (float) width / (float) height, 0.1f, 100.0f); // Select the modelview matrix gl.glMatrixMode(GL10.GL_MODELVIEW); // Reset the modelview matrix gl.glLoadIdentity(); } public int[] texture = new int[1]; public void loadGLTexture(GL10 gl) { // loading texture Bitmap bitmap; bitmap = BitmapFactory.decodeResource(context.getResources(), R.drawable.image); // generate one texture pointer gl.glGenTextures(0, texture, 0); //adds texture id to texture array // ...and bind it to our array gl.glBindTexture(GL10.GL_TEXTURE_2D, texture[0]); //activates texture to be used now // create nearest filtered texture gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR); // Use Android GLUtils to specify a two-dimensional texture image from our bitmap GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0); // Clean up bitmap.recycle(); } As per many other similar issues and resolutions on the web i have tried setting the minsdkversion is 3, loading the bitmap via an input stream bitmap = BitmapFactory.decodeStream(is), setting BitmapFactory.Options.inScaled to false, putting the images in the nodpi folder and putting them in the raw folder.. all of which didn't help. I'm not really sure what else to try..

    Read the article

  • HTML, JavaScript, and CSS in a NetBeans Platform Application

    - by Geertjan
    I broke down the code I used yesterday, to its absolute bare minimum, and then realized I'm not using HTML 5 at all: <html> <head> <link rel="stylesheet" href="style.css" type="text/css" media="all" /> <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.6.3/jquery.min.js"></script> <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jqueryui/1.8.16/jquery-ui.min.js"></script> <script type="text/javascript" src="script.js"></script> </head> <body> <div id="logo"> </div> <div id="infobox"> <h2 id="statustext"/> </div> </body> </html> Here's the script.js file referred to above: $(function(){ var banana = $("#logo"); var statustext = $("#statustext"); var defaulttxt = "Drag the banana!"; var dragtxt = "Dragging the banana!"; statustext.text(defaulttxt); banana.draggable({ drag: function(event, ui){ statustext.text(dragtxt); }, stop: function(event, ui){ statustext.text(defaulttxt); } }); }); And here's the stylesheet: body { background:#3B4D61 repeat 0 0; margin:0; padding:0; } h2 { color:#D1D8DF; display:block; font:bold 15px/10px Tahoma, Helvetica, Arial, Sans-Serif; text-align:center; } #infobox { position:absolute; width:300px; bottom:20px; left:50%; margin-left:-150px; padding:0 20px; background:rgba(0,0,0,0.5); -webkit-border-radius:15px; -moz-border-radius:15px; border-radius:15px; z-index:999; } #logo { position:absolute; width:450px; height:150px; top:40%; left: 30%; background:url(bananas.png) no-repeat 0 0; cursor:move; z-index:700; } However, I've replaced the content of the HTML file with a few of the samples from here, without any problem; in other words, if the HTML 5 canvas were to be needed, it could seamlessly be incorporated into my NetBeans Platform application: https://developer.mozilla.org/en/Canvas_tutorial/Basic_usage

    Read the article

  • Instead of the specified Texture, black circles on a green background are getting rendered. Why?

    - by vinzBad
    I'm trying to render a Texture via OpenGL. But instead of the texture black circles on a green background are rendered. (They scale, depending what the rotation of the texture is) Example: The texture I'm trying to render is the following: This is the code I use to render the texture, it's located in my Sprite-class. public void Render() { Matrix4 matrix = Matrix4.CreateTranslation(-OriginX, -OriginY, 0) * Matrix4.CreateRotationZ(Rotation) * Matrix4.CreateTranslation(X, Y, 0); Vector2[] corners = { new Vector2(0,0), //top left new Vector2(Width ,0),//top right new Vector2(Width,Height),//bottom rigth new Vector2(0,Height)//bottom left }; //copy the corners to the uv coordinates Vector2[] uv = corners.ToArray<Vector2>(); //transform the coordinates for (int i = 0; i < 4; i++) corners[i] = new Vector2(Vector3.Transform(new Vector3(corners[i]), matrix)); //GL.Color3(TintColor); GL.BindTexture(TextureTarget.Texture2D, _ID); GL.Begin(BeginMode.Quads); { for (int i = 0; i < 4; i++) { GL.TexCoord2(uv[i]); GL.Vertex3(corners[i].X, corners[i].Y, _layerDepth); } } GL.End(); if (EnableDebugDraw) { GL.Color3(Color.Violet); GL.PointSize(3); GL.Begin(BeginMode.Points); { for (int i = 0; i < 4; i++) GL.Vertex2(corners[i]); } GL.End(); GL.Color3(Color.Green); GL.Begin(BeginMode.Points); GL.Vertex2(X, Y); GL.End(); } } This is how I setup OpenGL. public static void SetupGL() { GL.Enable(EnableCap.AlphaTest); GL.AlphaFunc(AlphaFunction.Greater, 0.1f); GL.Enable(EnableCap.Texture2D); GL.Hint(HintTarget.PerspectiveCorrectionHint, HintMode.Nicest); } With this function I load the texture: public static uint LoadTexture(string path) { uint id; GL.GenTextures(1, out id); GL.BindTexture(TextureTarget.Texture2D, id); Bitmap bitmap = new Bitmap(path); BitmapData data = bitmap.LockBits(new System.Drawing.Rectangle(0, 0, bitmap.Width, bitmap.Height), ImageLockMode.ReadOnly, System.Drawing.Imaging.PixelFormat.Format32bppArgb); GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba, data.Width, data.Height, 0, OpenTK.Graphics.OpenGL.PixelFormat.Bgra, PixelType.UnsignedByte, data.Scan0); bitmap.UnlockBits(data); GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Linear); GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Linear); return id; } And here I call Sprite.Render() protected override void OnRenderFrame(FrameEventArgs e) { GL.ClearColor(Color.MidnightBlue); GL.Clear(ClearBufferMask.ColorBufferBit); _sprite.Render(); SwapBuffers(); base.OnRenderFrame(e); } As I stole this code from the Textures-Example from OpenTK, I don't understand why this doesn't work.

    Read the article

  • How do I detect whether the sample supplied by VideoSink.OnSample() is right-side up?

    - by Ken Smith
    We're currently using the Silverlight VideoSink to capture video from users' local webcams, kinda like so: protected override void OnSample(long sampleTime, long frameDuration, byte[] sampleData) { if (FrameShouldBeSubmitted()) { byte[] resampledData = ResizeFrame(sampleData); mediaController.SetVideoFrame(resampledData); } } Now, on most of the machines that we've tested, the video sample provided in the byte[] sampleData parameter is upside-down, i.e., if you try to take the RGBA data and turn it into, say, a WriteableBitmap, the bitmap will be upside-down. That's odd, but fairly easy to correct, of course -- you just have to reverse the array as you encode it. The problem is that at least on some machines (e.g., the single Macintosh in our test environment), the video sample provided is no longer upside-down, but right-side up, and hence, flipping the image actually results in an image that's received upside-down on the far side. I reported this to MS as a bug, but their (terse) response was that it was "As Designed". Further attempts at clarification have so far been ignored. Now, I'll grant that it's kinda entertaining to imagine the discussions behind this design decision: "OK, just to make it interesting, let's play the video rightside up on a Mac, but let's turn it upside down for Windows!" "Great idea!" "Yeah, that'll keep those developers guessing!" But beyond that, I can't find this, umm, "feature" documented anywhere, nor can I find any documentation on how one is supposed to be able to tell that a given video sample is upside down or rightside up. Any thoughts on how to tell this?

    Read the article
