Search Results

Search found 1380 results on 56 pages for 'texture atlas'.

Page 29/56 | < Previous Page | 25 26 27 28 29 30 31 32 33 34 35 36  | Next Page >

  • Draw Rectangle with XNA

    - by mazzzzz
    Hey guys, I was working on a game and wanted to highlight a spot on the screen when something happens. I created a class to do this for me, and found a bit of code to draw the rectangle:

        static private Texture2D CreateRectangle(int width, int height, Color colori)
        {
            // create the rectangle texture; it has no color data yet, so let's fix that
            Texture2D rectangleTexture = new Texture2D(game.GraphicsDevice, width, height, 1, TextureUsage.None, SurfaceFormat.Color);
            // one color entry for each pixel in the texture
            Color[] color = new Color[width * height];
            // loop through all the pixels, setting them to whatever value we want
            for (int i = 0; i < color.Length; i++)
            {
                color[i] = colori;
            }
            rectangleTexture.SetData(color); // set the color data on the texture
            return rectangleTexture;
        }

    The problem is that the code above is called every update (60 times a second), and it was not written with optimization in mind; I have no clue how else to write code that does this, though. It needs to be extremely fast (the code above freezes the game, which has only skeleton code right now). Any suggestions? Note: any new code would be great (wireframe/fill are both fine), and I would like to be able to specify the color. Something to point me in the right direction would be great. Thanks, Max
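    A likely fix, regardless of framework: create the texture once at load time, cache it, and reuse it every frame, scaling and tinting at draw time (in XNA this is typically a cached 1x1 white Texture2D drawn with a destination rectangle and a tint color). A minimal sketch of the caching pattern, written in plain C with OpenGL for concreteness, since C is used for the sketches in this section:

        #include <GL/gl.h>

        /* Create a 1x1 white texture once and cache it; tint and scale it
           per draw instead of rebuilding a texture every update. */
        static GLuint white_texture(void)
        {
            static GLuint tex = 0;            /* cached across calls */
            if (tex == 0) {
                const unsigned char px[4] = {255, 255, 255, 255};
                glGenTextures(1, &tex);
                glBindTexture(GL_TEXTURE_2D, tex);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
                glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1, 1, 0,
                             GL_RGBA, GL_UNSIGNED_BYTE, px);
            }
            return tex;
        }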

    Read the article

  • Drawing only part of a

    - by Ben Reeves
    Continued on from my previous question: I have a 320*480 RGB565 framebuffer which I wish to draw using OpenGL ES 1.0 on the iPhone.

        - (void)setupView
        {
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_CROP_RECT_OES, (int[4]){0, 0, 480, 320});
            glEnable(GL_TEXTURE_2D);
        }

        // Updates the OpenGL view when the timer fires
        - (void)drawView
        {
            // Make sure that you are drawing to the current context
            [EAGLContext setCurrentContext:context];

            // Get the 320*480 buffer
            const int8_t *frameBuf = [source getNextBuffer];

            // Create enough storage for a 512x512 power-of-2 texture
            int8_t lBuf[2*512*512];
            memcpy(lBuf, frameBuf, 320*480*2);

            // Upload the texture
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 512, 512, 0, GL_RGB, GL_UNSIGNED_SHORT_5_6_5, lBuf);

            // Draw it
            glDrawTexiOES(0, 0, 1, 480, 320);
            [context presentRenderbuffer:GL_RENDERBUFFER_OES];
        }

    If I produce the original texture at 512*512, the output is cropped incorrectly but other than that looks fine. However, using the required output size of 320*480, everything is distorted and messed up. I'm pretty sure it's the way I'm copying the framebuffer into the new 512*512 buffer. I have tried this routine:

        int8_t lBuf[512][512][2];
        const char *frameDataP = frameData;
        for (int ii = 0; ii < 480; ++ii) {
            memcpy(lBuf[ii], frameDataP, 320);
            frameDataP += 320;
        }

    which is better, but the width appears to be stretched and the height is messed up. Any help appreciated.
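    The second routine is close: RGB565 is 2 bytes per pixel, so each 320-pixel row is 640 bytes, but the memcpy only moves 320 bytes and only advances the source by 320 bytes, which halves the width and shears the rows. A hedged sketch of the corrected copy in C (assuming, as in the loop above, 480 rows of 320 pixels):

        #include <stdint.h>
        #include <string.h>

        /* Copy a 320x480 RGB565 frame into the top-left corner of a
           512x512 texture buffer, one row at a time, 2 bytes per pixel. */
        void copy_frame(uint8_t dst[512][512][2], const uint8_t *src)
        {
            for (int row = 0; row < 480; ++row) {
                memcpy(dst[row], src, 320 * 2);  /* 320 pixels = 640 bytes */
                src += 320 * 2;                  /* advance one full source row */
            }
        }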

    Read the article

  • Space-saving character encoding for japanese?

    - by Constantin
    In my opinion a common problem: character encoding in combination with a bitmap font. Most multi-language encodings have huge gaps between different character ranges and even a lot of unused code points. So if I use them directly I waste a lot of memory (not only for storing multi-byte text; I mean especially the gaps in my bitmap font), and VRAM is usually really valuable... So the only reasonable thing seems to be using a custom mapping on my texture for, e.g., UTF-8 characters (so that no space is wasted). BUT: this effort seems to be the same as using my own proprietary character encoding (and therefore my own ordering of characters in my texture). In my specific case I have texture space for 4096 different characters and need to display Latin languages as well as Japanese (it's a mess with UTF-8, which only supports the general CJK code pages). Has somebody ever had a similar problem (I would really wonder if not)? Is there already any approach for this? Edit: The same problem is described at http://www.tonypottier.info/Unicode_And_Japanese_Kanji/ but it doesn't provide a real solution for how to store these bitmap-font mappings to UTF-8 space-efficiently. So any further help is welcome!
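    One common approach, sketched below: keep the text itself in plain UTF-8, pack the 4096 glyphs densely in the texture in any order, and translate code point to glyph slot with a small sorted table at draw time, so the custom ordering lives only in the lookup, not in the text encoding. A hedged sketch in C (names are illustrative):

        #include <stdint.h>
        #include <stddef.h>

        /* Maps a sparse Unicode code point to a dense atlas slot. */
        typedef struct { uint32_t codepoint; uint16_t slot; } GlyphEntry;

        /* Binary search over a table sorted by codepoint. */
        int glyph_slot(const GlyphEntry *table, size_t n, uint32_t cp)
        {
            size_t lo = 0, hi = n;
            while (lo < hi) {
                size_t mid = lo + (hi - lo) / 2;
                if (table[mid].codepoint < cp) lo = mid + 1;
                else hi = mid;
            }
            if (lo < n && table[lo].codepoint == cp)
                return table[lo].slot;
            return -1;   /* not one of the 4096 packed glyphs */
        }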

    Read the article

  • Textures in OpenGL ES 2 not working properly

    - by Adl
    Hi! I'm working with OpenGL ES 2 on iPhone, and right now I am trying to get my textures working on my objects. I'm using .obj files and all the data in them is correct. I have written a parser myself to retrieve all the data, and I convert it to static arrays in C. I discard the material properties for now, only getting the image path from the .mtl files manually. I have an object with 336 triangles, making this non-trivial to observe, with the corresponding vertices, vertex faces and texture coordinates (u,v). Passing all the data into the shaders, the resulting image is this: http://img530.imageshack.us/img530/9637/pic1io.png http://img404.imageshack.us/img404/7358/pic2pg.png But it should look like this (displaying it in an object viewer; please ignore the material properties): http://img16.imageshack.us/img16/1401/pic3cq.png Using this image as a texture: http://img217.imageshack.us/img217/1300/shirtdiffuse.png I'm thinking it might have to do with texture coordinate faces? They are defined in my .obj file, and I'm not using them at all. In books and tutorials I have not found anything concerning this. Regards, Niclas
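    That is very likely the cause: in the .obj format each face corner carries separate indices for position and texture coordinate (v/vt), while OpenGL ES uses a single index per vertex, so faces have to be expanded into combined vertices before upload. A hedged sketch in C, assuming triangulated faces:

        typedef struct { float x, y, z; } Vec3;
        typedef struct { float u, v; } Vec2;
        typedef struct { Vec3 pos; Vec2 uv; } Vertex;

        /* face_pos[i] and face_uv[i] hold the i-th face corner's position
           index and texture-coordinate index as parsed from the .obj file. */
        void expand_faces(Vertex *out, const Vec3 *positions, const Vec2 *uvs,
                          const int *face_pos, const int *face_uv, int corners)
        {
            for (int i = 0; i < corners; ++i) {
                out[i].pos = positions[face_pos[i] - 1];  /* .obj indices are 1-based */
                out[i].uv  = uvs[face_uv[i] - 1];
            }
        }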

    Read the article

  • displaying a saved buffer in OpenGL ES

    - by Adam
    Hi everyone. So basically I have a screenshot that I've saved and want to display on the screen later. I've saved it with:

        glReadPixels(0, 0, self.bounds.size.width, self.bounds.size.height, GL_RGBA, GL_UNSIGNED_BYTE, savedBuffer);

    And later I'm trying to write it to the screen with:

        GLuint RenderedTex;
        glGenTextures(1, &RenderedTex);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, RenderedTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, self.bounds.size.width, self.bounds.size.height, 0, GL_RGBA, GL_UNSIGNED_BYTE, savedBuffer);
        glDisable(GL_TEXTURE_2D);

    I'm pretty new to OpenGL, so I don't know if I'm doing things right... actually I know I'm not, because nothing shows up. I'm also not sure how to dispose of the texture when I'm done with it. Does anyone know the correct way to do this? Edit: I think I might be having a problem loading the texture because it's not a power of 2; it's 320x480... also, I think this code is just loading the texture, not drawing it; I'd need a call to glDrawArrays(GL_TRIANGLE_STRIP, 0, 4) in there somewhere...
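    Both guesses are probably right: OpenGL ES 1.x requires power-of-two texture sizes, and without mipmaps the default minification filter leaves the texture incomplete, so nothing draws. A hedged sketch in C of one way through: allocate 512x512 storage, upload the 320x480 pixels into a corner, draw a quad whose texture coordinates cover just that corner (assuming identity projection and modelview here), and delete the texture when finished:

        /* allocate power-of-two storage, then upload the saved pixels */
        glBindTexture(GL_TEXTURE_2D, renderedTex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); /* no mipmaps */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 320, 480,
                        GL_RGBA, GL_UNSIGNED_BYTE, savedBuffer);

        /* draw the 320x480 region as a fullscreen quad */
        const GLfloat verts[] = { -1,-1,  1,-1,  -1,1,  1,1 };
        const GLfloat uvs[]   = { 0,0,  320.0f/512,0,  0,480.0f/512,  320.0f/512,480.0f/512 };
        glEnable(GL_TEXTURE_2D);
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glVertexPointer(2, GL_FLOAT, 0, verts);
        glTexCoordPointer(2, GL_FLOAT, 0, uvs);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

        /* dispose when done */
        glDeleteTextures(1, &renderedTex);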

    Read the article

  • OpenGL: Problem with masking

    - by Shaza
    Hey all, I'm working on creating a hole in a wall using masking in OpenGL. My code is quite simple, like this:

        // Draw the mask
        glEnable(GL_BLEND);
        glBlendFunc(GL_DST_COLOR, GL_ZERO);
        glBindTexture(GL_TEXTURE_2D, texture[3]);
        glBegin(GL_QUADS);
        glTexCoord2d(0,0); glVertex3f(-20,40,-20);
        glTexCoord2d(0,1); glVertex3f(-20,40,40);
        glTexCoord2d(1,1); glVertex3f(20,40,40);
        glTexCoord2d(1,0); glVertex3f(20,40,-20);
        glEnd();

        // Draw the texture
        glBlendFunc(GL_ONE, GL_ONE);
        glBindTexture(GL_TEXTURE_2D, texture[2]);
        glBegin(GL_QUADS);
        glTexCoord2d(0,0); glVertex3f(-20,40,-20);
        glTexCoord2d(0,1); glVertex3f(-20,40,40);
        glTexCoord2d(1,1); glVertex3f(20,40,40);
        glTexCoord2d(1,0); glVertex3f(20,40,-20);
        glEnd();

    The problem is, I get the hole in the wall correctly, but it's semi-transparent; I'm getting a black shade over it, and I can also see through it. Here's a photo of what I'm getting: any suggestions?
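    With the mask/add blending above, whatever was drawn behind the wall still shows through, and the two blended passes darken each other where they overlap. If the goal is an opaque wall with a clean hole, one simpler route is alpha testing, which discards the hole texels outright instead of blending them. A hedged sketch in C, assuming the wall texture's alpha channel marks the hole with alpha = 0 (handle and helper names are hypothetical):

        /* Discard texels whose alpha is below 0.5: the hole pixels are
           never written, so nothing behind them darkens the wall. */
        glDisable(GL_BLEND);
        glEnable(GL_ALPHA_TEST);
        glAlphaFunc(GL_GREATER, 0.5f);
        glBindTexture(GL_TEXTURE_2D, wall_texture);  /* hypothetical handle */
        draw_wall_quad();                            /* hypothetical helper */
        glDisable(GL_ALPHA_TEST);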

    Read the article

  • glDrawArrays() slow on iPad?

    - by Nick
    Hey guys, I was wondering how to speed up my iPad application using OpenGL ES 2.0. At the moment we have every drawable object draw itself with a call to glDrawArrays(). Blend mode is on; we really need it. Without disabling blending, how would we improve performance for this app? For instance, if we draw 1 texture across the whole screen, the app only gets 15 FPS, which is really slow, I think? Are we doing something terribly wrong? Our drawing code (for each drawable) is as follows:

        - (void)draw {
            GLuint textureAvailable = 0;
            if (texture != nil) {
                textureAvailable = 1;
            }
            glActiveTexture(GL_TEXTURE0);
            glBindTexture(GL_TEXTURE_2D, texture.name);
            glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, vertices);
            glEnableVertexAttribArray(ATTRIB_VERTEX);
            glVertexAttribPointer(ATTRIB_COLOR, 4, GL_FLOAT, 1, 0, colorsWithMultipliedAlpha);
            glEnableVertexAttribArray(ATTRIB_COLOR);
            glVertexAttribPointer(ATTRIB_TEXTUREMAP, 2, GL_FLOAT, 1, 0, textureMapping);
            glEnableVertexAttribArray(ATTRIB_TEXTUREMAP);

            // Note that we are NOT using position.z here because that is
            // only used to determine drawing order
            int *jnUniforms = JNOpenGLConstants::getInstance().uniforms;
            glUniform4f(jnUniforms[UNIFORM_TRANSLATE], position.x, position.y, 0.0, 0.0);
            glUniform4f(jnUniforms[UNIFORM_SCALE], scale.x, scale.y, 1.0, 1.0);
            glUniform1f(jnUniforms[UNIFORM_ROTATION], rotation);
            glUniform1i(jnUniforms[UNIFORM_TEXTURE_SAMPLE], 0);
            glUniform2f(jnUniforms[UNIFORM_TEXTURE_REPEAT], textureRepeat.x, textureRepeat.y);
            glUniform1i(jnUniforms[UNIFORM_TEXTURE_AVAILABLE], textureAvailable);

            glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        }
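    15 FPS for one fullscreen textured quad usually points at fill rate (blending a fullscreen quad is expensive on this class of GPU) or at per-sprite overhead: every sprite here re-binds state, uploads seven uniforms, and issues its own draw call. A common remedy is batching: pre-transform vertices on the CPU and draw every sprite that shares a texture with a single call. A hedged sketch in C (ATTRIB_* names as in the code above; bounds checks omitted):

        typedef struct { float x, y, u, v; } SpriteVert;

        #define MAX_VERTS 4096
        static SpriteVert batch[MAX_VERTS];
        static int batch_count = 0;

        /* append one pre-transformed quad (two triangles, six vertices) */
        void batch_quad(const SpriteVert corners[6]) {
            for (int i = 0; i < 6; ++i)
                batch[batch_count++] = corners[i];
        }

        /* one draw call for everything accumulated since the last flush */
        void batch_flush(void) {
            glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, GL_FALSE,
                                  sizeof(SpriteVert), &batch[0].x);
            glVertexAttribPointer(ATTRIB_TEXTUREMAP, 2, GL_FLOAT, GL_FALSE,
                                  sizeof(SpriteVert), &batch[0].u);
            glDrawArrays(GL_TRIANGLES, 0, batch_count);
            batch_count = 0;
        }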

    Read the article

  • OpenGL Mipmapping: how does OpenGL decide on map level?

    - by Droozle
    Hi, I am having trouble implementing mipmapping in OpenGL. I am using OpenFrameworks and have modified the ofTexture class to support the creation and rendering of mipmaps. The following is the original texture creation code from the class (slightly modified for clarity):

        glEnable(texData.textureTarget);
        glBindTexture(texData.textureTarget, (GLuint)texData.textureID);
        glTexSubImage2D(texData.textureTarget, 0, 0, 0, w, h, texData.glType, texData.pixelType, data);
        glTexParameteri(texData.textureTarget, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(texData.textureTarget, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glDisable(texData.textureTarget);

    This is my version with mipmap support:

        glEnable(texData.textureTarget);
        glBindTexture(texData.textureTarget, (GLuint)texData.textureID);
        gluBuild2DMipmaps(texData.textureTarget, texData.glTypeInternal, w, h, texData.glType, texData.pixelType, data);
        glTexParameteri(texData.textureTarget, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glTexParameteri(texData.textureTarget, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glDisable(texData.textureTarget);

    The code does not generate errors (gluBuild2DMipmaps returns 0) and the textures are rendered without problems. However, I do not see any difference. The scene I render consists of flat, square tiles at z=0; it's basically a 2D scene. I zoom in and out by calling glScale() before drawing the tiles. When I zoom out, the pixels of the tile textures start to "dance", indicating (as far as I can tell) unfiltered texture lookup. See http://www.youtube.com/watch?v=b_As2Np3m8A at 25s. My question is: since I do not move the camera position, but only scale the whole scene, does this mean OpenGL cannot decide on the appropriate mipmap level and uses the full texture size (level 0)? Paul
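    No: mipmap selection does not use the camera position at all. The level is chosen per fragment from the screen-space derivatives of the texture coordinates, so glScale() does affect it. Roughly, per the GL specification:

        \lambda = \log_2 \rho, \qquad
        \rho \approx \max\left( \left\| \tfrac{\partial (u,v)}{\partial x} \right\|,\ \left\| \tfrac{\partial (u,v)}{\partial y} \right\| \right)

    Zooming out packs more texels per pixel, which raises rho and selects a smaller level, so scaling alone should trigger mipmapping. Two things worth checking (hedged, since they depend on the OpenFrameworks version): GL_TEXTURE_MAG_FILTER only accepts GL_NEAREST or GL_LINEAR, so the GL_LINEAR_MIPMAP_LINEAR magnification value above is invalid; and ofTexture has historically defaulted to GL_TEXTURE_RECTANGLE_ARB as the texture target, which does not support mipmaps at all, so forcing GL_TEXTURE_2D may be required.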

    Read the article

  • iPhone OpenGLES: Textures are consuming too much memory and the program crashes with signal "0"

    - by CustomAppsMan
    I am not sure what the problem is. My app runs fine on the simulator, but when I try to run it on the iPhone it crashes (during debugging or without debugging) with signal "0". I am using the Texture2D.m and OpenGLES2DView.m from the examples provided by Apple. I profiled the app on the iPhone with Instruments using the Memory tracer from the library, and when the app died the final memory consumed was about 60 MB real and 90+ MB virtual. Is there some other problem, or is the iPhone just killing the application because it has consumed too much memory? If you need any information, please say so and I will try to provide it. I am creating thousands of textures at load time, which is why the memory consumption is so high; I really can't do anything about reducing the number of pics being loaded. I was running before on just UIImage, but it was giving me really low frame rates, and I read on this site that I should use OpenGL ES for higher frame rates. Also, a sub-question: is there any way not to use UIImage to load the png file and then use the provided Texture class to create the texture for OpenGL ES drawing? Is there some function in OpenGL ES which will create a texture straight from a png file?
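    OpenGL ES itself has no image decoding, so there is no call that builds a texture straight from a PNG; some decoder has to sit in between. Two directions that usually help here: PVRTC-compressed textures cut GPU memory use substantially on the iPhone, and a minimal decoder avoids the UIImage round trip. A hedged sketch in C using the third-party single-file stb_image library (an assumption, not part of OpenGL ES or Apple's samples):

        #define STB_IMAGE_IMPLEMENTATION
        #include "stb_image.h"

        GLuint texture_from_png(const char *path)
        {
            int w, h, comp;
            unsigned char *pixels = stbi_load(path, &w, &h, &comp, 4); /* force RGBA */
            if (!pixels)
                return 0;
            GLuint tex;
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, pixels);
            stbi_image_free(pixels);  /* GL owns its copy now */
            return tex;
        }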

    Read the article

  • Simple gradient issue with OpenGL on iPhone simulator

    - by Paul
    I was following the tutorial from the raywenderlich website; the gradient doesn't seem to work perfectly. Is it only because of the iPhone simulator, or is it something else? I can't try it myself on an iPhone. Here is the image: And the code:

        -(CCSprite *)spriteWithColor:(ccColor4F)bgColor textureSize:(float)textureSize {
            // 1: Create new CCRenderTexture
            CCRenderTexture *rt = [CCRenderTexture renderTextureWithWidth:textureSize height:screenSize.height];
            // 2: Call CCRenderTexture:begin
            [rt beginWithClear:bgColor.r g:bgColor.g b:bgColor.b a:bgColor.a];
            // 3: Draw into the texture
            glDisable(GL_TEXTURE_2D);
            glDisableClientState(GL_TEXTURE_COORD_ARRAY);

            float gradientAlpha = 0.5;
            CGPoint vertices[4];
            ccColor4F colors[4];
            int nVertices = 0;

            vertices[nVertices] = CGPointMake(0, 0);
            colors[nVertices++] = (ccColor4F){0, 0, 0, 0};
            vertices[nVertices] = CGPointMake(textureSize, 0);
            colors[nVertices++] = (ccColor4F){0, 0, 0, gradientAlpha};
            vertices[nVertices] = CGPointMake(0, screenSize.height);
            colors[nVertices++] = (ccColor4F){0, 0, 0, 0};
            vertices[nVertices] = CGPointMake(textureSize, screenSize.height);
            colors[nVertices++] = (ccColor4F){0, 0, 0, gradientAlpha};

            glVertexPointer(2, GL_FLOAT, 0, vertices);
            glColorPointer(4, GL_FLOAT, 0, colors);
            glDrawArrays(GL_TRIANGLE_STRIP, 0, (GLsizei)nVertices);

            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
            glEnable(GL_TEXTURE_2D);
            // 4: Call CCRenderTexture:end
            [rt end];
            // 5: Create a new Sprite from the texture
            return [CCSprite spriteWithTexture:rt.sprite.texture];
        }

    Thanks

    Read the article

  • Force-directed graphing

    - by David
    Hello, I'm trying to write a force-directed or force-atlas code base for a graphing application I'm building for myself. Here is an example of what I'm attempting: http://sawamuland.com/flash/graph.html I managed to find some pseudocode to accomplish what I'd like in the Wikipedia force-atlas article. I've converted it into ActionScript 3.0 code, since it's a Flash application. Here is my source:

        var timestep:int = 0;
        var damping:int = 0;
        var total_kinetic_engery:int = 0;

        for (var node in list) {
            var net_force:int = 0;
            for (var other_node in list) {
                net_force += coulombRepulsion(node, other_node, nodeList);
            }
            for (var spring in list[node].relations) {
                net_force += hookeAttraction(node, spring, nodeList);
            }
            list[node].velocity += (timestep * net_force) * damping;
            list[node].position += timestep * list[node].velocity;
            total_kinetic_engery += list[node].mass * (list[node].velocity) ^ 2;
        }

    The problem now is finding pseudocode or a function to perform the coulomb repulsion and hooke attraction calculations. I'm not exactly sure how to accomplish this. Does anyone know of a good reference I can look at, understand and implement quickly? Best.
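    The two force laws are standard physics, so they can be sketched directly; note that in 2D the forces (and net_force above) need to be vectors rather than scalars. A hedged sketch in C, with ke, ks and the rest length as tuning constants (not from the original post):

        #include <math.h>

        typedef struct { float x, y; } Vec2;

        /* Coulomb repulsion: nodes push apart with strength ke / d^2
           along the line between them. Returns the force on node a. */
        Vec2 coulomb_repulsion(Vec2 a, Vec2 b, float ke)
        {
            float dx = a.x - b.x, dy = a.y - b.y;
            float d2 = dx*dx + dy*dy + 1e-6f;      /* avoid divide-by-zero */
            float d  = sqrtf(d2);
            float mag = ke / d2;
            return (Vec2){ mag * dx / d, mag * dy / d };
        }

        /* Hooke attraction: connected nodes pull together like a spring
           with stiffness ks and rest length rest. Force on node a. */
        Vec2 hooke_attraction(Vec2 a, Vec2 b, float ks, float rest)
        {
            float dx = b.x - a.x, dy = b.y - a.y;
            float d  = sqrtf(dx*dx + dy*dy) + 1e-6f;
            float mag = ks * (d - rest);
            return (Vec2){ mag * dx / d, mag * dy / d };
        }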

    Read the article

  • PNG alpha rendered as black in Blender

    - by Camilo Martin
    I'm a Blender novice, so this is probably easy to fix. When I use a transparent PNG as a texture in Blender, the parts that should be transparent are rendered as black. This is especially confusing since in the material preview it looks as if the material is indeed transparent. Here's a screenshot. This is the test texture, and on the right it is shown on top of a checkerboard. Here is the .blend file in case you want to check it.

    Read the article

  • Flex, Papervision3d: basic scenes, cameras, planes issue

    - by user157880
        // Import Papervision3D
        import org.papervision3d.Papervision3D;
        import org.papervision3d.scenes.*;
        import org.papervision3d.cameras.*;
        import org.papervision3d.objects.*;
        import org.papervision3d.materials.*;

        public var scene:Scene3D;
        public var camera:Camera3D;
        public var target:DisplayObject3D;
        public var screenshotArray:Array; // array to store pictures
        public var radius:Number;
        public var image:Image;

        private function initPapervision():void {
            target = new DisplayObject3D();
            camera = new Camera3D(target);
            // Error 1067: Implicit coercion of a value of type
            // org.papervision3d.objects:DisplayObject3D to an unrelated type Number.
            scene = new Scene3D(thisContainer);
            // Error 1137: Incorrect number of arguments. Expected no more than 0.
            scene.addChild(new DisplayObject3D(), "center");
        }

        public function handleCreationComplete():void {
            image = new Image();
            image.source = "one_t.jpg";
            image.maintainAspectRatio = true;
            image.scaleContent = false;
            image.autoLoad = false;
            image.addEventListener(Event.COMPLETE, handleImageLoad);
            image.load();
            initPapervision();
        }

        public function handleImageLoad(event:Event):void {
            init3D();
            // onEnterFrame
            addEventListener(Event.ENTER_FRAME, loop3D);
            addEventListener(MouseEvent.MOUSE_MOVE, handleMouseMove);
        }

        public function init3D():void {
            addScreenshots(20);
        }

        public function handleMouseMove(event:MouseEvent):void {
            camera.y = Math.max(thisContainer.mouseY * 5, -450);
        }

        public function addScreenshots(planeCount:Number):void {
            var obj:DisplayObject3D = scene.getChildByName("center");
            screenshotArray = new Array(planeCount);
            var texture:BitmapData = new BitmapData(image.content.loaderInfo.width, image.content.loaderInfo.height, false, 0);
            texture.draw(image);
            // Create texture with a bitmap from the library
            var materialSpace:MaterialObject3D = new BitmapMaterial(texture);
            materialSpace.doubleSided = true;
            materialSpace.smooth = false;
            radius = 500;
            camera.z = radius + 50;
            camera.y = 50;
            for (var i:Number = 0; i < planeCount; i++) {
                screenshotArray[i] = new Object();
                screenshotArray[i].plane = new Plane(materialSpace, 100, 100, 2, 2);
                // Error 1180: Call to a possibly undefined method Plane.
                // Position plane
                var rotation:Number = (360 / planeCount) * i;
                var rotationRadians:Number = (rotation - 90) * (Math.PI / 180);
                screenshotArray[i].rotation = rotation;
                screenshotArray[i].plane.z = (radius * Math.cos(rotationRadians)) * -1;
                screenshotArray[i].plane.x = radius * Math.sin(rotationRadians) * -1;
                screenshotArray[i].plane.y = 100;
                screenshotArray[i].plane.lookAt(obj);
                // Add to scene
                scene.addChild(screenshotArray[i].plane);
            }
            scene.renderCamera(camera);
            // Error 1061: Call to a possibly undefined method renderCamera
            // through a reference with static type org.papervision3d.scenes:Scene3D.
        }

        public function loop3D(event:Event):void {
            var obj:DisplayObject3D = scene.getChildByName("center");
            for (var i:Number = 0; i < screenshotArray.length; i++) {
                var rotation:Number = screenshotArray[i].rotation;
                rotation += (thisContainer.mouseX / 50) / (screenshotArray.length / 10);
                var rotationRadians:Number = (rotation - 90) * (Math.PI / 180);
                screenshotArray[i].rotation = rotation;
                screenshotArray[i].plane.z = (radius * Math.cos(rotationRadians)) * -1;
                screenshotArray[i].plane.x = radius * Math.sin(rotationRadians) * -1;
                screenshotArray[i].plane.lookAt(obj);
            }
            // now let's render the scene
            scene.renderCamera(camera);
            // Error 1061: Call to a possibly undefined method renderCamera
            // through a reference with static type org.papervision3d.scenes:Scene3D.
        }

    Read the article

  • Parallax backgrounds in OpenGL ES on the iPhone

    - by Scott
    I've got basically a 2D game on the iPhone and I'm trying to set up multiple backgrounds that scroll at different speeds (known as parallax backgrounds). My thought was to just stick the backgrounds BEHIND the foreground using different z-coordinate planes, and just make them bigger than the foreground (in size) to accommodate, so that the whole thing can be scrolled (just at a different speed). And (as far as I know) I basically implemented that. The only problem is that it seems to entirely ignore whatever z-value I give it, or rather it just zeroes all of them. I see the background (I've only tested ONE background so far, to keep it simple... so for now I just have a foreground and I want one background scrolling at a different speed), but it scrolls 1:1 with my foreground, so it obviously doesn't look right, and most of it is cut off (because it's bigger). And I've tried various z-values for the background and various near/far clipping planes... it's always the same. I'm probably just doing one simple thing wrong, but I can't figure it out. I'm wondering if it has to do with me using only 2 coordinates in glVertexPointer for the foreground? (Of course, for the background I AM passing in 3.) I'll post some code. This is some initial setup:

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrthof(-1.0f, 1.0f, -1.5f, 1.5f, -10.0f, 10.0f);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glEnableClientState(GL_VERTEX_ARRAY);
        //glEnableClientState(GL_COLOR_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        // transparency
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

    A little bit about my foreground's float array... it's interleaved: vertex x, vertex y, texture x, texture y, repeat. This all works just fine. This is my FOREGROUND rendering:

        glVertexPointer(2, GL_FLOAT, 4*sizeof(GLfloat), texes);
        glTexCoordPointer(2, GL_FLOAT, 4*sizeof(GLfloat), (GLvoid*)texes + 2*sizeof(GLfloat));
        glDrawArrays(GL_TRIANGLES, 0, indexCount / 4);

    BACKGROUND rendering: same drill here, except this time it goes vertex x, vertex y, vertex z, texture x, texture y, repeat. Note the z value this time. I did make sure the data in this array was correct while debugging (getting the right z values). And again, it shows up... it's just not going far back in the distance like it should:

        glVertexPointer(3, GL_FLOAT, 5*sizeof(GLfloat), b1Texes);
        glTexCoordPointer(2, GL_FLOAT, 5*sizeof(GLfloat), (GLvoid*)b1Texes + 3*sizeof(GLfloat));
        glDrawArrays(GL_TRIANGLES, 0, b1IndexCount / 5);

    And to move my camera, I just do a simple glTranslatef(x, y, 0.0f). I'm not understanding what I'm doing wrong, because this seems like the most basic 3D function imaginable: things further away are smaller and don't move as fast when the camera moves. Not the case for me. It seems like it should be pretty basic and not even really be affected by my projection and all that (though I've even tried glFrustum just for fun, with no success). Please help, I feel like it's just one dumb thing. I will post more code if necessary.
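    One likely culprit is the projection itself: glOrthof defines an orthographic projection, which by design ignores z for apparent size and motion, so layers at different depths scroll 1:1 no matter what z they are given. Either switch to a perspective projection (glFrustumf) or keep the ortho setup and scroll each layer by its own factor. A hedged sketch of the second approach in C:

        /* Parallax under an orthographic projection: translate each layer
           by its own fraction of the camera offset. Factors below 1.0
           scroll slower and read as "farther away". */
        void draw_layer(float cam_x, float cam_y, float factor)
        {
            glPushMatrix();
            glTranslatef(-cam_x * factor, -cam_y * factor, 0.0f);
            /* ... set pointers and draw this layer's arrays ... */
            glPopMatrix();
        }

        /* usage: far background at half speed, foreground at full speed */
        /*   draw_layer(camX, camY, 0.5f);                               */
        /*   draw_layer(camX, camY, 1.0f);                               */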

    Read the article

  • How do I draw an OpenGL point sprite using libgdx for Android?

    - by nbolton
    Here are a few snippets of what I have so far...

        void create() {
            renderer = new ImmediateModeRenderer();
            tiles = Gdx.graphics.newTexture(
                Gdx.files.getFileHandle("res/tiles2.png", FileType.Internal),
                TextureFilter.MipMap, TextureFilter.Linear,
                TextureWrap.ClampToEdge, TextureWrap.ClampToEdge);
        }

        void render() {
            Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
            Gdx.gl.glClearColor(0.6f, 0.7f, 0.9f, 1);
        }

        void renderSprite() {
            int handle = tiles.getTextureObjectHandle();
            Gdx.gl.glBindTexture(GL.GL_TEXTURE_2D, handle);
            Gdx.gl.glEnable(GL.GL_POINT_SPRITE);
            Gdx.gl11.glTexEnvi(GL.GL_POINT_SPRITE, GL.GL_COORD_REPLACE, GL.GL_TRUE);

            renderer.begin(GL.GL_POINTS);
            renderer.vertex(pos.x, pos.y, pos.z);
            renderer.end();
        }

    create() is called once when the program starts, and renderSprite() is called for each sprite (so pos is unique to each sprite), where the sprites are arranged in a sort-of 3D cube. Unfortunately, though, this just renders a few white dots... I suppose that the texture isn't being bound, which is why I'm getting white dots. Also, when I draw my sprites on anything other than the 0 z-axis, they do not appear; I read that I need to increase my zfar and znear, but I have no idea how to do this using libgdx (perhaps it's because I'm using an ortho projection? What do I use instead?). I know that the texture is usable, since I was able to render it using a SpriteBatch, but I guess I'm not using it properly with OpenGL.
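    Two details commonly bite here and match the symptoms: texturing has to be enabled for point sprites to sample anything, and points default to 1 pixel. The underlying GL ES 1.x state that the libgdx calls wrap looks roughly like this hedged C sketch:

        glEnable(GL_TEXTURE_2D);                 /* without this: plain untextured dots */
        glBindTexture(GL_TEXTURE_2D, handle);
        glEnable(GL_POINT_SPRITE_OES);
        glTexEnvi(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE);
        glPointSize(32.0f);                      /* sprite size in pixels */
        /* ... submit GL_POINTS vertices ... */

    As for the disappearing depths: with an orthographic projection, sprites off the z=0 plane vanish once they leave the [znear, zfar] range, so widening those planes (or switching to a perspective projection for an actual 3D cube of sprites) is the usual fix.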

    Read the article

  • Manually writing a DX11 tessellation shader

    - by Tudor
    I am looking for resources on the steps for manually implementing tessellation (I'm using Unity cg). Today it seems to be all the rage to hide most of the GPU code far away and use rather rigid simplifications such as Unity's SURFace shaders, which seem useless unless you're doing superficial stuff. A little background: I have procedurally generated meshes (using marching cubes) which have quality normals but no UVs and no tangents. I have successfully written a custom vertex and fragment shader to do triplanar texture and bumpmap projection, as well as some custom stuff (custom lighting, procedurally warping the texture for variation, etc). I am using the GPU Gems book as a reference. Now I need to implement tessellation, but it seems I must calculate the tangents at runtime by swizzling normals (ctrl+f this in Gems: <normal.z, normal.y, -normal.x>) before the tessellator gets them. And I also need to keep my custom vert+frag setup (with my custom parameters/textures being passed between them), so apparently I cannot use surface shaders. Can anyone provide some guidance?

    Read the article

  • Ingredient Substitutes while Baking

    - by Rekha
    In our normal cooking, we substitute vegetables in the gravies we prepare. When we start baking, we look for a good recipe, and at least one or two ingredients will be missing. We do not know what to substitute for what to get the same result, so we finally drop the plan of baking. A month later we get interested in baking again; again one or two ingredients are missing, and that's it. We keep on doing this for months. When I was going through cooking blogs, I came across a site with ingredient substitutes for baking. A (*) indicates that the substitution is ideal, from personal experience, and (HV) denotes a healthy version for low-fat or fat-free substitution in baking.

    Flour substitutes (for 1 cup of flour):
    - All-purpose flour: 1/2 cup white cake flour plus 1/2 cup whole wheat flour; OR 1 cup self-rising flour (omit salt and baking powder if the recipe calls for them, since self-rising flour already has them); OR 1 cup plus 2 tablespoons cake flour; OR 1/2 cup (75 grams) whole wheat flour; OR 7/8 cup (130 grams) rice flour (starch; do not replace all of the flour with rice flour); OR 7/8 cup whole wheat.
    - Bread flour: 1 cup all-purpose flour; OR 1 cup all-purpose flour plus 1 teaspoon wheat gluten (*).
    - Cake flour: place 2 tbsp cornstarch in 1 cup and fill the rest with all-purpose flour (*); OR 1 cup all-purpose flour minus 2 tablespoons.
    - Pastry flour: place 2 tbsp cornstarch in 1 cup and fill the rest with all-purpose flour; OR equal parts all-purpose flour and cake flour (*).
    - Self-rising flour: 1 1/2 teaspoons baking powder plus 1/2 teaspoon salt plus 1 cup all-purpose flour.
    - Cornstarch (1 tbsp): 2 tablespoons all-purpose flour; OR 1 tablespoon arrowroot; OR 4 teaspoons quick-cooking tapioca; OR 1 tablespoon potato starch, rice starch or rice flour.
    - Tapioca (1 tbsp): 1 1/2 tablespoons all-purpose flour.
    - Cornmeal (stone ground): polenta; OR corn flour (gives baked goods a lighter texture); OR maize meal; OR corn grits. If using cornmeal for breading, crush corn chips in a blender until they have the consistency of cornmeal.

    Sweeteners (for every 1 cup):
    - Light brown sugar: 2 tablespoons molasses plus 1 cup white sugar.
    - Dark brown sugar: 3 tablespoons molasses plus 1 cup white sugar.
    - Confectioner's/powdered sugar: process 1 cup sugar plus 1 tablespoon cornstarch.
    - Corn syrup: 1 cup sugar plus 1/4 cup water; OR 1 cup golden syrup; OR 1 cup honey (may be a little sweeter); OR 1 cup molasses.
    - Golden syrup: two parts light corn syrup plus one part molasses; OR 1/2 cup honey plus 1/2 cup corn syrup; OR 1 cup maple syrup; OR 1 cup corn syrup.
    - Honey: 1 1/4 cups sugar plus 1/4 cup water; OR 3/4 cup maple syrup plus 1/2 cup granulated sugar; OR 3/4 cup corn syrup plus 1/2 cup granulated sugar; OR 3/4 cup light molasses plus 1/2 cup granulated white sugar; OR 1 1/4 cups granulated white or brown sugar plus 1/4 cup additional liquid in the recipe plus 1/2 teaspoon cream of tartar.
    - Maple syrup: 1 cup honey thinned with water or a fruit juice like apple; OR 3/4 cup corn syrup plus 1/4 cup butter; OR 1 cup brown rice syrup; OR 1 cup brown sugar (for cereals); OR 1 cup light molasses (on pancakes, cereals, etc.); OR 1 cup granulated sugar for every 3/4 cup of maple syrup, increasing the liquid in the recipe by 3 tbsp for every cup of sugar. If baking soda is used, decrease the amount by 1/4 teaspoon per cup of sugar substituted, since sugar is less acidic than maple syrup.
    - Molasses: 1 cup honey; OR 1 cup dark corn syrup; OR 1 cup maple syrup; OR 3/4 cup brown sugar warmed and dissolved in 1/4 cup of liquid (use this if the taste of molasses is important to the baked good).
    - Cocoa powder (natural, unsweetened): 3 tablespoons (20 grams) Dutch-processed cocoa plus 1/8 teaspoon cream of tartar, lemon juice or white vinegar; OR 1 ounce (30 grams) unsweetened chocolate (reduce the fat in the recipe by 1 tablespoon); OR 3 tablespoons (20 grams) carob powder.
    - Semisweet baking chocolate (1 oz): 1 oz unsweetened baking chocolate plus 1 tbsp sugar.
    - Unsweetened baking chocolate (1 oz): 3 tbsp baking cocoa plus 1 tbsp vegetable oil or melted shortening or margarine.
    - Semisweet chocolate chips (1 cup): 6 oz semisweet baking chocolate, chopped; OR 6 tablespoons unsweetened cocoa powder plus 7 tablespoons sugar plus 1/4 cup fat (butter or oil).

    Leaveners and dairy:
    - Compressed yeast (1 cake): 1 envelope or 2 teaspoons active dry yeast.
    - Active dry yeast (1 packet, 1/4 ounce): 1 cake fresh compressed yeast; OR 1 tablespoon fast-rising active yeast.
    - Baking powder (1 tsp): 1/3 teaspoon baking soda plus 1/2 teaspoon cream of tartar; OR 1/2 teaspoon baking soda plus 1/2 cup buttermilk or plain yogurt; OR 1/4 teaspoon baking soda plus 1/3 cup molasses. When using the substitutions that include liquid, reduce the other liquid in the recipe accordingly.
    - Baking soda (1 tsp): 3 tsp baking powder (and reduce the acidic ingredients in the recipe, e.g. use milk instead of buttermilk); OR 1 tsp potassium bicarbonate. The ideal substitution is 2 tsp baking powder, omitting the salt in the recipe.
    - Cream of tartar (1 tsp): 1 teaspoon white vinegar; OR 1 tsp lemon juice. Notes from What's Cooking America: if cream of tartar is used along with baking soda in a cake or cookie recipe, omit both and use baking powder instead. Normally, when cream of tartar is used in a cookie, it is used together with baking soda, and the two combined work like double-acting baking powder. When substituting for cream of tartar, you must also substitute for the baking soda. One teaspoon of baking powder is equivalent to 1/4 teaspoon baking soda plus 5/8 teaspoon cream of tartar; if there is additional baking soda that does not fit into the equation, simply add it to the batter.
    - Buttermilk (1 cup): 1 tablespoon lemon juice or vinegar (white or cider) plus enough milk to make 1 cup (let stand 5-10 minutes); OR 1 cup plain or low-fat yogurt; OR 1 cup sour cream; OR 1 cup water plus 1/4 cup buttermilk powder; OR 1 cup milk plus 1 1/2 to 1 3/4 teaspoons cream of tartar.
    - Plain yogurt (1 cup): 1 cup sour cream; OR 1 cup buttermilk; OR 1 cup crème fraîche; OR 1 cup heavy whipping cream (35% butterfat) plus 1 tablespoon freshly squeezed lemon juice.
    - Whole milk (1 cup): 1 cup fat-free milk plus 1 tbsp unsaturated oil like canola (HV); OR 1 cup low-fat milk (HV).
    - Heavy cream (1 cup): 3/4 cup milk plus 1/3 cup melted butter (whipping won't work).
    - Sour cream (1 cup; please also refer to the fat substitutes below): 7/8 cup buttermilk or sour milk plus 3 tablespoons butter; OR 1 cup thickened yogurt plus 1 teaspoon baking soda; OR 3/4 cup sour milk plus 1/3 cup butter; OR 3/4 cup buttermilk plus 1/3 cup butter. For cooked sauces: 1 cup yogurt plus 1 tablespoon flour plus 2 teaspoons water; OR 1 cup evaporated milk plus 1 tablespoon vinegar or lemon juice, left to stand 5 minutes to thicken. For dips: 1 cup yogurt (drain through a cheesecloth-lined sieve for 30 minutes in the refrigerator for a thicker texture); OR 1 cup cottage cheese plus 1/4 cup yogurt or buttermilk, briefly whirled in a blender; OR 6 ounces cream cheese plus 3 tablespoons milk, briefly whirled in a blender. Lower fat: 1 cup low-fat cottage cheese plus 1 tablespoon lemon juice plus 2 tablespoons skim milk, whipped until smooth in a blender; OR 1 can chilled evaporated milk whipped with 1 teaspoon lemon juice; OR 1 cup plain yogurt plus 1 tablespoon cornstarch; OR 1 cup plain nonfat yogurt.

    Substitutes for fats in baking:
    - Butter (1 cup): 1 cup trans-fat-free vegetable shortening; OR 3/4 cup vegetable oil (for example, canola oil); OR fruit purees (for example applesauce, pureed prunes, or baby-food fruits; add along with some vegetable oil, and reduce any other sweeteners in the recipe since fruit purees are already sweet); OR 1 cup polyunsaturated margarine (HV); OR 3/4 cup polyunsaturated oil like safflower oil (HV); OR 1 cup mild olive oil (not extra virgin) (HV). Note: butter creates a flakiness and richness which oils and purees can't provide; if you don't want to compromise that much on taste, replace only half the butter with the substitutions.
    - Shortening (1 cup): 1 cup polyunsaturated margarine like Earth Balance or Smart Balance (HV); OR 1 cup + 2 tbsp butter (better tasting than shortening but more expensive, has cholesterol and a higher level of saturated fat; makes cookies less crunchy and bread crusts more crispy); OR 1 cup + 2 tbsp margarine (better tasting than shortening but more expensive; makes cookies less crunchy and bread crusts tougher); OR 1 cup minus 2 tbsp lard (has cholesterol and a higher level of saturated fat).
    - Oil: an equal amount of applesauce; OR stiffly beaten egg whites folded into the batter; OR equal parts mashed banana; OR equal parts yogurt; OR prune puree; OR grated raw zucchini (seeds removed if cooked; works well in quick breads, muffins and coffee cakes and does not alter the taste); OR pumpkin puree (if the recipe can handle the taste change); OR low-fat cottage cheese (use for only half of the required fat in the recipe; can give a rubbery texture to the end result); OR silken tofu (use for only half of the required fat in the recipe; can give a rubbery texture to the end result); OR equal parts fruit juice. Note: fruit purees can alter the taste of the final product if used in large quantities.
    - Cream cheese (1 cup): 4 tbsp margarine plus 1 cup low-fat cottage cheese, blended; add a few teaspoons of fat-free milk if needed (HV).
    - Heavy cream (1 cup): 1 cup evaporated skim milk (or full-fat milk); OR 1/2 cup low-fat yogurt plus 1/2 cup low-fat cottage cheese (HV); OR 1/2 cup yogurt plus 1/2 cup cottage cheese.
    - Sour cream (1 cup): 1 cup plain yogurt (HV); OR 3/4 cup buttermilk or plain yogurt plus 1/3 cup melted butter; OR 1 cup crème fraîche; OR 1 tablespoon lemon juice or vinegar plus enough whole milk to fill 1 cup (let stand 5-10 minutes); OR 1/2 cup low-fat cottage cheese plus 1/2 cup low-fat or nonfat yogurt (HV); OR 1 cup fat-free sour cream (HV).

    Note: how to make a maple syrup substitute at home (for 1 cup of maple syrup): 1/2 cup granulated sugar, 1 cup brown sugar (firmly packed), 1 cup boiling water, 1 teaspoon butter, and 1 teaspoon maple or vanilla extract. Method: in a heavy saucepan, place the granulated sugar and keep stirring until it melts and turns slightly brown. In another pan, place the brown sugar and water and bring to a boil without stirring. Now mix both sugars and simmer on low heat until they come together as one thick syrup. Remove from heat, add the butter and the extract, and use in place of maple syrup. Store it in the fridge in an airtight container.

    Even though this was posted on their site long back, I found it helpful, so I am posting it for you. Via chefinyou. CC image credit: flickr/zetrules.

    Read the article

  • Reflections based on distance from plane

    - by Andrea Benedetti
    Let's consider, for example, a surface like a volleyball court: we can see that the legs and shoes of the players are reflected, with a blur effect, but their bodies and the stadium are not (nor is any object not near the court). I've already made a reflection effect, but it works as a specular reflection, and I need to achieve an effect like the photo above. So I would like to make a reflection that is based on the distance between the object and the plane; this way a close object would reflect more than an object positioned far away from the plane. What is the best way to achieve this effect? My first idea was to use the depth value (taken from the reflected camera), and use that value to blend between the reflection and the court. But I don't know if that's a correct way. Edit: as rendering engine I use Ogre, which already provides a reflection system: reflecting the camera through a plane (and I can select which models to draw from the reflected camera). After a render-to-texture pass I can blend the reflected texture with the original plane. So, if possible, I'm looking for a way that best suits my system.
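    The depth-based idea is sound, and it is essentially how this effect is usually built: fade the reflection out with the reflected point's distance from the plane. A hedged formulation, where d_max is a tuning constant for where the fade ends (not something Ogre prescribes):

        w(d) = \max\!\left(0,\ 1 - \frac{d}{d_{\max}}\right), \qquad
        C = (1 - w)\,C_{\text{court}} + w\,C_{\text{reflection}}

    The distance d can come from the reflected camera's depth buffer, sampled while blending the render-to-texture result with the plane; legs near the floor get w close to 1, while bodies higher up fade toward 0.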

    Read the article

  • Writing the correct value in the depth buffer when using ray-casting

    - by hidayat
    I am doing a ray-casting in a 3D texture until I hit a correct value. I am casting inside a cube, and the cube corners are already in world coordinates, so I don't have to multiply the vertices with the modelview matrix to get the correct position. Vertex shader:

        world_coordinate_ = gl_Vertex;

    Fragment shader:

        vec3 direction = (world_coordinate_.xyz - cameraPosition_);
        direction = normalize(direction);
        for (float k = 0.0; k < steps; k += 1.0) {
            ...
            pos += direction * delta_step;
            float thisLum = texture3D(texture3_, pos).r;
            if (thisLum > surface_)
                ...
        }

    Everything works as expected. What I now want is to write the correct value to the depth buffer. The value that is now written is the cube coordinate, but I want the value of pos in the 3D texture to be written. So let's say the cube is placed 10 away from the origin in -z and its size is 10*10*10. My solution, which does not work correctly, is this:

        pos *= 10;
        pos.z += 10;
        pos.z *= -1;
        vec4 depth_vec = gl_ProjectionMatrix * vec4(pos.xyz, 1.0);
        float depth = ((depth_vec.z / depth_vec.w) + 1.0) * 0.5;
        gl_FragDepth = depth;
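    The formula at the end has the right shape, but gl_ProjectionMatrix expects eye-space input, while the hand-scaled pos approximates a world-space point; the world-to-eye step (the view part of the modelview matrix) is missing. In symbols, the full chain for a world-space hit point is:

        \mathbf{p}_{\text{clip}} = P\,V\,\mathbf{p}_{\text{world}}, \qquad
        z_{\text{window}} = \frac{1}{2}\left(\frac{z_{\text{clip}}}{w_{\text{clip}}} + 1\right)

    So mapping pos from texture space back to world space (scale by the cube size, offset by the cube origin) and then applying gl_ModelViewProjectionMatrix, rather than the projection matrix alone, should produce a depth consistent with the rest of the scene (hedged: this assumes the cube is drawn with that same modelview matrix).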

    Read the article

  • Stencil mask with AlphaTestEffect

    - by Brendan Wanlass
    I am trying to pull off the following effect in XNA 4.0: http://eng.utah.edu/~brendanw/question.jpg The purple area has 50% opacity. I have gotten pretty close with the following code:

        public static DepthStencilState AlwaysStencilState = new DepthStencilState()
        {
            StencilEnable = true,
            StencilFunction = CompareFunction.Always,
            StencilPass = StencilOperation.Replace,
            ReferenceStencil = 1,
            DepthBufferEnable = false,
        };

        public static DepthStencilState EqualStencilState = new DepthStencilState()
        {
            StencilEnable = true,
            StencilFunction = CompareFunction.Equal,
            StencilPass = StencilOperation.Keep,
            ReferenceStencil = 1,
            DepthBufferEnable = false,
        };

        ...

        if (_alphaEffect == null)
        {
            _alphaEffect = new AlphaTestEffect(_spriteBatch.GraphicsDevice);
            _alphaEffect.AlphaFunction = CompareFunction.LessEqual;
            _alphaEffect.ReferenceAlpha = 129;
            Matrix projection = Matrix.CreateOrthographicOffCenter(0,
                _spriteBatch.GraphicsDevice.PresentationParameters.BackBufferWidth,
                _spriteBatch.GraphicsDevice.PresentationParameters.BackBufferHeight, 0, 0, 1);
            _alphaEffect.Projection = world.SystemManager.GetSystem<RenderSystem>().Camera.View * projection;
        }

        _mask = new RenderTarget2D(_spriteBatch.GraphicsDevice,
            _spriteBatch.GraphicsDevice.PresentationParameters.BackBufferWidth,
            _spriteBatch.GraphicsDevice.PresentationParameters.BackBufferHeight,
            false, SurfaceFormat.Color, DepthFormat.Depth24Stencil8);

        _spriteBatch.GraphicsDevice.SetRenderTarget(_mask);
        _spriteBatch.GraphicsDevice.Clear(ClearOptions.Target | ClearOptions.Stencil, Color.Transparent, 0, 0);

        _spriteBatch.Begin(SpriteSortMode.Immediate, null, null, AlwaysStencilState, null, _alphaEffect);
        _spriteBatch.Draw(sprite.Texture, position, sprite.SourceRectangle, Color.White, 0f, sprite.Origin, 1f, SpriteEffects.None, 0);
        _spriteBatch.End();

        _spriteBatch.Begin(SpriteSortMode.Immediate, null, null, EqualStencilState, null, null);
        _spriteBatch.Draw(_maskTex, new Vector2(x * _maskTex.Width, y * _maskTex.Height), null, Color.White, 0f, Vector2.Zero, 1f, SpriteEffects.None, 0);
        _spriteBatch.End();

        _spriteBatch.GraphicsDevice.SetRenderTarget(null);
        _spriteBatch.GraphicsDevice.Clear(Color.Black);

        _spriteBatch.Begin();
        _spriteBatch.Draw((Texture2D)_mask, Vector2.Zero, null, Color.White, 0f, Vector2.Zero, 1f, SpriteEffects.None, layer / max_layer);
        _spriteBatch.End();

    My problem is, I can't get the AlphaTestEffect to behave. I can either mask over the semi-transparent purple junk and fill it in with the green design, or I can draw over the completely opaque grassy texture. How can I specify the exact opacity that needs to be replaced with the green design?

    Read the article

  • How to set TextureFilter to Point to make example Bloom filter work?

    - by Mr Bell
    I have a simple app that renders some particles, and now I am trying to apply the bloom shader from the XNA samples (http://create.msdn.com/en-US/education/catalog/sample/bloom) to it, but I am running into this exception: "XNA Framework HiDef profile requires TextureFilter to be Point when using texture format Vector4", thrown when the BloomComponent tries to end the sprite batch in the DrawFullscreenQuad method:

        spriteBatch.Begin(0, BlendState.Opaque, SamplerState.PointWrap, null, null, effect);
        spriteBatch.Draw(texture, new Rectangle(0, 0, width, height), Color.White);
        spriteBatch.End();   // <-- exception thrown here

    It seems to be related to the pixel shaders that I am using to animate the particles. In a nutshell, I have a Texture2D in Vector4 format that holds particle positions, and another one for velocities. Here is a snippet from that area:

        GraphicsDevice.SetRenderTarget(tempRenderTarget);
        animationEffect.CurrentTechnique = animationEffect.Techniques[technique];
        spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque, SamplerState.PointWrap,
            DepthStencilState.DepthRead, RasterizerState.CullNone, animationEffect);
        spriteBatch.Draw(randomValues, new Rectangle(0, 0, width, height), Color.White);
        spriteBatch.End();

    When I comment out the code that calls the particle animation pixel shaders, the bloom component runs fine. Is there some state that I need to reset to make the bloom work?

    Read the article

  • BlitzMax - generating 2D neon glowing line effect to png file

    - by zanlok
    Originally asked on StackOverflow, but it became tumbleweed. I'm looking to create a glowing line effect in BlitzMax, something like a Star Wars lightsaber or a laser beam. It doesn't have to be realtime; drawing to TImage objects and then maybe saving to PNG for later use in animation is fine. I'm happy to use 3D features, but it will be for use in a 2D game. Since it will be on a black/space background, my strategy is to draw a series of blurred lines with color and high transparency, then eventually central lines that are less blurred and more white. What I actually want to draw is bezier curved lines. Drawing curved lines is easy enough, but I can't use the technique above to create a good laser/neon effect because it comes out looking very segmented. So I think it may be better to use a blur effect/shader on what does render well: a 1-pixel bezier curve. The problem I've been having is applying a shader to just the certain area of the screen where lines are drawn. If there's a way to draw lines to a texture, then blur that texture and save the PNG, that would be great to hear about. There's got to be a way to do this, but I just haven't gotten the right elements working together yet. Any help from someone familiar with this stuff would be greatly appreciated. Using just 2D calls could be advantageous: simpler to understand and re-use. It would be very nice to know how to save a PNG that preserves the transparency/alpha stuff. P.S. I've reviewed this post (and many, many others on the Blitz site), have samples working, and have even developed my own 5x5 frag shaders. But that approach is 3D and scene-wide, and doesn't seem to convert to 2D or to just a certain area very well. I'd rather understand how to apply shading to a 2D scene, especially using the specifics of BlitzMax.
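    If the render-to-image route is acceptable, the whole effect can be done on the CPU: rasterize the 1-pixel curve into an RGBA buffer, blur it a few times, composite a bright core on top, and write the PNG with alpha intact. A hedged sketch in C of the blur-and-save part, using the third-party single-file stb_image_write library (an assumption, not part of BlitzMax):

        #define STB_IMAGE_WRITE_IMPLEMENTATION
        #include "stb_image_write.h"

        /* One horizontal box-blur pass over an RGBA8 buffer; run it again
           with the x/y roles swapped for the vertical pass. Repeating
           both passes approximates a Gaussian glow. */
        static void blur_h(unsigned char *dst, const unsigned char *src,
                           int w, int h, int radius)
        {
            for (int y = 0; y < h; ++y)
                for (int x = 0; x < w; ++x)
                    for (int c = 0; c < 4; ++c) {
                        int sum = 0, n = 0;
                        for (int k = -radius; k <= radius; ++k) {
                            int xx = x + k;
                            if (xx < 0 || xx >= w) continue;
                            sum += src[(y * w + xx) * 4 + c];
                            ++n;
                        }
                        dst[(y * w + x) * 4 + c] = (unsigned char)(sum / n);
                    }
        }

        /* after blurring (and re-drawing the sharp core on top): */
        /* stbi_write_png("glow.png", w, h, 4, pixels, w * 4);    */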

    Read the article

  • VBO and shaders confusion, what's their connection?

    - by Jeffrey
    Considering OpenGL 2.1 VBOs and 1.20 GLSL shaders: (1) When creating an entity like a "zombie", is it good to initialize just one VBO with the model data and do N glDrawArrays() calls, one per each of N zombies? Is there a more efficient way? (With a single call we cannot pass different uniforms to the shader to calculate an offset; see point 4.) (2) When dealing with logical objects (player, tree, cube, etc.), should I always use the same shader, or should I customize (or be able to customize) the shaders per object? (3) Considering an entity class, should I create and define the shader at object initialization? (4) When having a movable object such as a human, is there any more powerful way to deal with its coordinates than to initialize its VBO at 0,0 and define a uniform offset to pass to the shader to calculate its real position? (5) Could you give an example of Data Oriented Design applied to a generic zombie class? Is the following good?

        class ZombieList {
            GLuint vbo;                      // generic zombie vertex model
            std::vector<color> colors;       // object default colors
            std::vector<texture> textures;   // object textures
            std::vector<vector3D> positions; // object positions
        public:
            unsigned int create(); // return object id
            void move(unsigned int objId, vector3D offset);
            void rotate(unsigned int objId, float angle);
            void setColor(unsigned int objId, color c);
            void setPosition(unsigned int objId, color c);
            void setTexture(unsigned int, unsigned int);
            ...
            void update(Player*); // move towards player, attack if near
        };

    Example:

        Player p;
        ZombieList zl;
        unsigned int first = zl.create();
        zl.setPosition(first, vector3D(50, 50));
        zl.setTexture(first, texture("zombie1.png"));
        ...
        while (running) { // main loop
            ...
            zl.update(&p);
            zl.draw(); // draw every zombie
        }
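    On points 1 and 4, the usual GL 2.1 pattern is exactly that: one shared VBO holding the model around the origin, then one draw call per instance with a per-instance offset uniform (true instancing only arrives with later extensions). A hedged sketch in C, with illustrative names:

        /* One VBO shared by all zombies; the per-instance position goes in
           a uniform, so the vertex data never changes. GL 2.1 has no
           instancing, so N instances cost N draw calls. */
        void draw_zombies(GLuint vbo, GLint offset_loc, GLint pos_attrib,
                          int vert_count, const float *positions, int count)
        {
            glBindBuffer(GL_ARRAY_BUFFER, vbo);   /* bind once for all */
            glVertexAttribPointer(pos_attrib, 3, GL_FLOAT, GL_FALSE, 0, 0);
            glEnableVertexAttribArray(pos_attrib);
            for (int i = 0; i < count; ++i) {
                /* positions holds x,y,z triples, one per zombie */
                glUniform3fv(offset_loc, 1, &positions[i * 3]);
                glDrawArrays(GL_TRIANGLES, 0, vert_count);
            }
        }

    On point 2, sharing one shader across object types and feeding the differences through uniforms and textures keeps state changes (and code) down; per-object shaders are worth it only when the materials genuinely differ.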

    Read the article

  • Mixing XNA and Silverlight gives weird graphics

    - by Mech0z
    I am making a small 3D game which is built as a Silverlight and XNA app, but when I draw the sprites the graphics become all weird. All my primitive types are rendered correctly, but my 3D models are just weird. My Draw is like this when Silverlight is set to draw:

        private void OnDraw(object sender, GameTimerEventArgs e)
        {
            // Render the Silverlight controls using the UIElementRenderer
            elementRenderer.Render();

            // Clear the screen to a solid color
            SharedGraphicsDeviceManager.Current.GraphicsDevice.Clear(Color.CornflowerBlue);

            switch (gameState)
            {
                case GameState.ChooseStarter:
                    TextBlockStatus.Text = "Find Starting Player";
                    break;
                case GameState.PlaceBrick:
                    TextBlockPlayer.Text = (playerTurn == PlayerTurn.PlayerOne) ? "Player One" : "Player Two";
                    TextBlockState.Text = "Place Brick";
                    foreach (IGraphicObject obj in _3dObjects)
                    {
                        obj.Draw(cameraPosition, e);
                    }
                    break;
                case GameState.GiveBrick:
                    TextBlockState.Text = "Give Brick";
                    break;
            }

            spriteBatch.Begin();
            // Using the texture from the UIElementRenderer,
            // draw the Silverlight controls to the screen
            spriteBatch.Draw(elementRenderer.Texture, cameraProjection, Color.White);
            spriteBatch.End();
        }

    This gives me this output. If I comment the spritebatch lines out I get the correct output, except the Silverlight text is of course not shown. I am not entirely sure what to look for, except that zero vector I am giving to the spritebatch, but if that's the source I have no idea what I am supposed to set it to, especially since it's a 2D vector.

    Read the article

  • 2D graphics - why use spritesheets?

    - by Columbo
    I have seen many examples of how to render sprites from a spritesheet, but I haven't grasped why it is the most common way of dealing with sprites in 2D games. I have started out with 2D sprite rendering in the few demo applications I've made by treating each animation frame for any given sprite type as its own texture, and this collection of textures is stored in a dictionary. This seems to work for me, and suits my workflow pretty well, as I tend to make my animations as gif/mng files and then extract the frames to individual pngs. Is there a noticeable performance advantage to rendering from a single sheet rather than from individual textures? With modern hardware that is capable of drawing millions of polygons to the screen a hundred times a second, does it even matter for my 2D games, which just deal with a few dozen 50x100px rectangles? The implementation details of loading a texture into graphics memory and displaying it in XNA seem pretty abstracted. All I know is that textures are bound to the graphics device when they are loaded, then during the game loop the textures get rendered in batches. So it's not clear to me whether my choice affects performance. I suspect that there are some very good reasons most 2D game developers seem to be using them; I just don't understand why.
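    The advantage is not polygon count but draw-call and state-change count: the GPU can merge many quads into one draw only while the bound texture stays the same, so per-frame textures force a texture switch (and a batch break) per sprite, while an atlas turns every frame into just a different source rectangle on one texture. A hedged sketch of the difference in C over OpenGL calls (the batching helpers are hypothetical):

        /* separate textures: up to N binds and N draw calls per frame */
        for (int i = 0; i < n; ++i) {
            glBindTexture(GL_TEXTURE_2D, frame_tex[i]);  /* breaks batching */
            draw_sprite_quad(i);                         /* hypothetical */
        }

        /* one sheet: 1 bind, quads accumulate into a single draw call */
        glBindTexture(GL_TEXTURE_2D, sheet_tex);
        for (int i = 0; i < n; ++i)
            batch_quad_uv(sprite_uv[i]);  /* hypothetical: source rect on sheet */
        batch_flush();                    /* hypothetical: one glDrawArrays */

    At a few dozen sprites either approach is fast; the habit pays off as sprite and texture counts grow (XNA's SpriteBatch sorts by texture for exactly this reason).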

    Read the article

< Previous Page | 25 26 27 28 29 30 31 32 33 34 35 36  | Next Page >