Search Results

Search found 3875 results on 155 pages for 'opengl es lighting'.

  • OpenGL: Drawing to a texture

    - by Danran
    Well, I'm just a bit stuck wondering how to draw an item to a texture. Specifically, I'm using:

        glDrawArrays(GL_LINE_STRIP, indices[0], indices.size());

    Because what I'm drawing via the above call updates every frame, I'm just not sure how to go about drawing it to a texture. Any help is greatly appreciated!

    Edit: Unfortunately my graphics card doesn't support framebuffer objects, so I've been trying to get the copy-from-backbuffer method working. Here's what I currently have: http://pastebin.com/dJpPt6Pd And sadly all I get is a white square. It's probably something stupid that I'm doing wrong; I'm just unsure what it could be.
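
    A common FBO-free fallback, offered as a hedged sketch: draw the line strip to the backbuffer, then copy the result into an already-allocated texture each frame with glCopyTexSubImage2D. Here `tex`, `texW` and `texH` are illustrative names; the texture must have been created beforehand with glTexImage2D at texW x texH:

        /* Draw the dynamic content to the backbuffer first. */
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glDrawArrays(GL_LINE_STRIP, first, count);

        /* Copy the lower-left texW x texH block of the framebuffer
           into level 0 of the texture. */
        glBindTexture(GL_TEXTURE_2D, tex);
        glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, texW, texH);

    A white result often means the copy runs after the buffer swap, so make sure it happens before swapping.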

  • OpenGL : Keeping alpha in a render buffer

    - by Cyan
    In my current task, I need to render a texture into a render buffer in order to work on it (apply special filters) there. The result is then considered a "new texture", which is later displayed. This works fine, except when the texture contains transparent or semi-transparent parts. My current guess is that, within the render buffer, the texture is "merged" with a kind of grey background, which obviously impacts the R, G, B components of transparent pixels. I've yet to find a way around this. Even manually assigning alpha after the rendering process doesn't save the day for semi-transparent pixels, whose RGB values are "tainted" by the grey background.
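
    Two things usually cause this: the offscreen target being cleared to an opaque colour, and the blend function overwriting destination alpha. A hedged sketch of the usual fix, assuming ordinary alpha blending in the filter pass:

        /* Clear the offscreen target to transparent black, not grey. */
        glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
        glClear(GL_COLOR_BUFFER_BIT);

        /* Blend RGB normally but keep a sensible destination alpha, so
           untouched pixels stay fully transparent. */
        glEnable(GL_BLEND);
        glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,
                            GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

    Rendering the offscreen pass with premultiplied alpha (GL_ONE, GL_ONE_MINUS_SRC_ALPHA) also avoids the RGB tainting of semi-transparent pixels entirely.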

  • OpenGL: Move camera regardless of rotation

    - by Markus
    For a 2D board game I'd like to move and rotate an orthographic camera using coordinates given in a reference system (window space), but I simply can't get it to work. The idea is that the user can drag the camera over a surface, rotate it and scale it. Rotation and scaling should always be around the center of the current viewport. The camera is set up as:

        gl.glMatrixMode(GL2.GL_PROJECTION);
        gl.glLoadIdentity();
        gl.glOrtho(-width/2, width/2, -height/2, height/2, nearPlane, farPlane);

    where width and height are equal to the viewport's width and height, so that 1 unit is one pixel when no zoom is applied. Since these transformations usually mean (scaling and) translating the world, then rotating it, the implementation is:

        gl.glMatrixMode(GL2.GL_MODELVIEW);
        gl.glLoadIdentity();
        gl.glRotatef(rotation, 0, 0, 1);                 // e.g. 45°
        gl.glTranslatef(x, y, 0);                        // e.g. +10 for 10px right, -2 for 2px down
        gl.glScalef(zoomFactor, zoomFactor, zoomFactor); // e.g. scale by 1.5

    That however has the nasty side effect that the translation is transformed as well, i.e. applied in world coordinates: if I rotate by 90° and translate again, the X and Y axes are swapped. If I reorder the transformations so they read

        gl.glTranslatef(x, y, 0);
        gl.glScalef(zoomFactor, zoomFactor, zoomFactor);
        gl.glRotatef(rotation, 0, 0, 1);

    the translation is applied correctly (in reference space, so translating along X always visually moves the camera sideways), but rotation and scaling are now performed around the origin. It shouldn't be too hard, so what am I missing?
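
    One way out, as a hedged sketch: keep the camera position in world coordinates, build the matrix as rotate, then scale, then translate by the negated camera position, and convert each screen-space drag delta into world space by un-rotating and un-scaling it. Written with C-style GL calls; camX/camY are assumed names, and sign conventions depend on how the drag is measured:

        /* Rotation and zoom happen about the viewport center; the final
           translate moves the camera to its world-space position. */
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glRotatef(rotation, 0, 0, 1);
        glScalef(zoomFactor, zoomFactor, zoomFactor);
        glTranslatef(-camX, -camY, 0);

        /* Turn a drag of (dx, dy) window pixels into a camera move: */
        float a = rotation * (float)M_PI / 180.0f;
        camX -= ( dx * cosf(a) + dy * sinf(a)) / zoomFactor;
        camY -= (-dx * sinf(a) + dy * cosf(a)) / zoomFactor;

    This keeps dragging aligned with the screen while rotation and zoom stay centered on the viewport.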

  • Fog with Blend in OpenGL

    - by MhdAljobory
    I want to add fog to my scene, which contains transparent textures drawn with blending. When I enable the fog, the transparent textures appear white from a distance, but when I disable it they look fine. What is the solution to this whiteness problem? Fog code:

        GLfloat fogColor[4] = {0.5f, 0.5f, 0.5f, 1.0f};
        glClearColor(0.5f, 0.5f, 0.5f, 1.0f);
        glFogi(GL_FOG_MODE, GL_LINEAR);
        glFogfv(GL_FOG_COLOR, fogColor);
        glFogf(GL_FOG_DENSITY, 0.35f);
        glHint(GL_FOG_HINT, GL_DONT_CARE);
        glFogf(GL_FOG_START, 1.0f);
        glFogf(GL_FOG_END, 1000.0f);
        glEnable(GL_FOG);

    Screenshot
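
    The usual culprit: fog is applied to a fragment's RGB before blending, so fully transparent texels still get fogged toward the grey fog colour and show up as pale halos. A common fixed-function workaround, as a sketch, is to discard those fragments with the alpha test:

        /* Reject (nearly) transparent fragments outright so fog cannot
           tint texels that should not be visible at all. */
        glEnable(GL_ALPHA_TEST);
        glAlphaFunc(GL_GREATER, 0.1f);   /* threshold is content-dependent */
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);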

  • Using a texture as an integer array (OpenGL 3.3, shader version 3.3)

    - by Cubic
    I'm trying to have something like an integer array uniform for my fragment shader (I only need read access). It's a fairly large chunk of data: not so large that uploading it every frame would be impossible, but enough to make me want to avoid doing so. Essentially I want to just pass the shader a uniform telling it where this "array" is. I believe I can use a 1D texture for this, but I don't know how (actually, I don't know how to do many things, because I just can't seem to find a reference for GLSL 3.3; I only ever find references for the C API). This sounds like a rather basic question and I'm sure it's been answered already somewhere, but I keep searching and can't quite find what I'm looking for.
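
    One sketch of the 1D-texture route, with illustrative names: upload the ints once as a GL_R32I texture and read them back in GLSL with texelFetch, which does no filtering and returns exact integers.

        /* C side: a 1D integer texture holding `count` ints from `data`.
           Integer textures must use GL_NEAREST filtering. */
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_1D, tex);
        glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexImage1D(GL_TEXTURE_1D, 0, GL_R32I, count, 0,
                     GL_RED_INTEGER, GL_INT, data);

        /* GLSL 3.30 fragment shader side:
             uniform isampler1D values;
             int v = texelFetch(values, index, 0).r;  // index is an int */

    The sampler uniform is then just bound to whichever texture unit the texture lives on, like any other sampler.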

  • Books on OpenGL ES targeted towards the iPhone

    - by Frank V
    There are a few posts on this site about OpenGL and the iPhone, some even on books, but I think you'll find this question is a bit different. I've searched and searched and have come to the conclusion that there are currently no books that specifically cover OpenGL ES on the iPhone platform. There are books that cover OpenGL ES 2.0 (note: the linked book covers OpenGL ES 2.0, but the iPhone uses OpenGL ES 1.1 which, as I understand it, is not backward compatible), but they only have a small section on the iPhone, if any. What I want to know is whether anybody knows of any forthcoming books that specifically cover OpenGL ES 1.1 on the iPhone.

  • Mixing OpenGL and Interface Builder/UI controls - bad idea? Why? (iPhone)

    - by Adam
    I've heard that OpenGL ES and the standard iPhone UI controls don't play well together, but I'm wondering if anyone knows why, and what the effects are? I'm writing an OpenGL-based game whose view is loaded from a nib file with UI controls, and it seems to work OK, but the game is really simple at this point... does using UI controls cause some kind of performance hit?

  • How can I draw crisp per-pixel images with OpenGL ES on Android?

    - by Qasim
    I have made many Android applications and games in Java before, but I am very new to OpenGL ES. Using guides online, I have made simple things in OpenGL ES, including a triangle and a cube. I would like to make a 2D game, but what I've been doing isn't working well: the images I draw aren't to scale, and no matter what guide I use the image is always choppy and not the right size (I'm debugging on my Nexus S). How can I draw crisp, HD images to the screen with GL ES? (Screenshots of the rendered result and of the actual image omitted.) Here is how my texture is created:

        // Generate a texture id.
        int id = -1;
        gl.glGenTextures(1, texture, 0);
        id = texture[0];

        // Load the bitmap.
        Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(),
                R.drawable.ball);

        // Parameters.
        gl.glBindTexture(GL10.GL_TEXTURE_2D, id);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_CLAMP_TO_EDGE);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_CLAMP_TO_EDGE);
        gl.glTexEnvf(GL10.GL_TEXTURE_ENV, GL10.GL_TEXTURE_ENV_MODE, GL10.GL_REPLACE);
        GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);

        // Crop rectangle for glDrawTexfOES-style drawing.
        mCropWorkspace[0] = 0;
        mCropWorkspace[1] = height;
        mCropWorkspace[2] = width;
        mCropWorkspace[3] = -height;
        ((GL11) gl).glTexParameteriv(GL10.GL_TEXTURE_2D,
                GL11Ext.GL_TEXTURE_CROP_RECT_OES, mCropWorkspace, 0);
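
    The usual recipe for pixel-crisp 2D in ES 1.x, sketched with C-style GL calls: a projection where one unit equals one screen pixel, nearest-neighbour magnification, and images drawn at integer coordinates at their native size. (On Android, also load the bitmap with BitmapFactory.Options.inScaled = false, otherwise it is resampled for screen density before it ever reaches GL; that is a likely cause of the choppiness here, though only a guess.)

        /* One GL unit == one screen pixel. */
        glViewport(0, 0, screenW, screenH);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrthof(0, screenW, screenH, 0, -1, 1);

        /* Sample texels 1:1, avoiding interpolation blur. */
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

        /* Then draw the image at its native width x height, at integer
           screen coordinates. */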

  • Problem enabling OpenGL ES depth test on iPhone. What steps are necessary?

    - by Chris Cooper
    I remember running into this problem when I started using OpenGL on OS X. Eventually I solved it, but I think that was just by using GLUT and C++ instead of Objective-C... The lines of code I have in init for the ES1Renderer are as follows:

        glEnable(GL_DEPTH_TEST);
        glDepthFunc(GL_LEQUAL);

    Then in the render method I have this:

        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    I assume I'm missing something specific to either the iPhone or ES. What other steps are required to enable the depth test? Thanks
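
    A step that is easy to miss on the iPhone: the EAGL-based framebuffer only gets a colour renderbuffer by default, and without a depth attachment GL_DEPTH_TEST is silently a no-op. A sketch of the missing setup, using the ES 1.1 OES names (backingWidth/backingHeight as in Apple's template code):

        /* Create a depth renderbuffer and attach it to the framebuffer. */
        GLuint depthRenderbuffer;
        glGenRenderbuffersOES(1, &depthRenderbuffer);
        glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthRenderbuffer);
        glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH_COMPONENT16_OES,
                                 backingWidth, backingHeight);
        glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES,
                                     GL_RENDERBUFFER_OES, depthRenderbuffer);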

  • How to stop OpenGL from applying blending to certain content? (see pics)

    - by RexOnRoids
    Supporting info: I use cocos2d to draw a sprite (graph background) on the screen (z: -1). I then use cocos2d to draw lines/points (z: 0) on top of the background, and make some calls to OpenGL blending functions before the drawing to smooth out the lines.

    Problem: aside from producing smooth lines/points, calling these OpenGL blending functions seems to degrade the underlying sprite (graph background). As you can see from the images (omitted), the background is darker and less sharp in Case 2. So there is a tradeoff: I can either have (Case 1) a nice background and choppy lines/points, or (Case 2) nice smooth lines/points and a degraded background. But obviously I need both.

    The question: how do I set up OpenGL so the blending applies only to the layer with the lines/points in it, leaving the background alone?

    The code: below is the draw() method of the CCLayer for both cases. The difference between Case 1 and Case 2 comes down to one or two lines involving OpenGL blending.

    Case 1 -- MainScene.h (CCLayer):

        -(void)draw{
            int lastPointX = 0;
            int lastPointY = 0;
            GLfloat colorMAX = 255.0f;
            GLfloat valR, valG, valB;
            if([self.myGraphManager ready]){
                valR = (255.0f/colorMAX)*1.0f;
                valG = (255.0f/colorMAX)*1.0f;
                valB = (255.0f/colorMAX)*1.0f;
                NSEnumerator *enumerator = [[self.myGraphManager.currentCanvas graphPoints] objectEnumerator];
                GraphPoint* object;
                while ((object = [enumerator nextObject])) {
                    if(object.filled){
                        /* Commenting out the following two lines makes it
                           impossible to have smooth lines/points, but has the
                           merit of not degrading the background sprite. */
                        //glEnable(GL_BLEND);
                        //glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
                        glHint(GL_LINE_SMOOTH_HINT, GL_DONT_CARE);
                        glEnable(GL_LINE_SMOOTH);
                        glLineWidth(1.5f);
                        glColor4f(valR, valG, valB, 1.0);
                        ccDrawLine(ccp(lastPointX, lastPointY),
                                   ccp(object.position.x, object.position.y));
                        lastPointX = object.position.x;
                        lastPointY = object.position.y;
                        glPointSize(3.0f);
                        glEnable(GL_POINT_SMOOTH);
                        glHint(GL_POINT_SMOOTH_HINT, GL_NICEST);
                        ccDrawPoint(ccp(lastPointX, lastPointY));
                    }
                }
            }
        }

    Case 2 -- MainScene.h (CCLayer): identical to Case 1, except the two blending lines are enabled:

        /* Enabling the following two lines gives nice smooth lines/points,
           but degrades the background sprite. */
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
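
    A likely explanation, offered as a hedged guess: cocos2d renders its sprites with premultiplied-alpha blending, and draw() changes the global blend function without restoring it, so every sprite drawn afterwards is blended with the wrong factors. One sketch of a fix is to set the smooth-line blending, draw, then hand back the state cocos2d expects before returning:

        /* Standard alpha blending just for the lines/points... */
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

        /* ...draw lines and points here... */

        /* ...then restore cocos2d's premultiplied-alpha default so the
           sprite pass renders as before. */
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);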

  • Why does my OpenGL ES Android testbed app not render anything besides a red screen?

    - by nathan
    For some reason my code here (this is the entire thing) doesn't actually render anything besides a red screen. Can anyone tell me why?

        package com.ntu.way2fungames.earth.testbed;

        import java.nio.FloatBuffer;
        import javax.microedition.khronos.egl.EGLConfig;
        import javax.microedition.khronos.opengles.GL10;
        import android.app.Activity;
        import android.content.Context;
        import android.opengl.GLSurfaceView;
        import android.opengl.GLSurfaceView.Renderer;
        import android.os.Bundle;

        public class projectiles extends Activity {
            GLSurfaceView lGLView;
            Renderer lGLRenderer;
            float projectilesX[] = new float[5001];
            float projectilesY[] = new float[5001];
            float projectilesXa[] = new float[5001];
            float projectilesYa[] = new float[5001];
            float projectilesTheta[] = new float[5001];
            float projectilesSpeed[] = new float[5001];
            private static FloatBuffer drawBuffer;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                SetupProjectiles();
                Context mContext = this.getWindow().getContext();
                lGLView = new MyView(mContext);
                lGLRenderer = new MyRenderer();
                lGLView.setRenderer(lGLRenderer);
                setContentView(lGLView);
            }

            private void SetupProjectiles() {
                for (int i = 5000; i > 0; i = i - 1) {
                    projectilesX[i] = 240;
                    projectilesY[i] = 427;
                    // NOTE: i/5000 is integer division, so theta is 0 for
                    // every projectile except i == 5000.
                    float theta = (float) ((i / 5000) * Math.PI * 2);
                    projectilesXa[i] = (float) Math.cos(theta);
                    projectilesYa[i] = (float) Math.sin(theta);
                    projectilesTheta[i] = theta;
                    projectilesSpeed[i] = (float) (Math.random() + 1);
                }
            }

            public class MyView extends GLSurfaceView {
                public MyView(Context context) {
                    super(context);
                }
            }

            public class MyRenderer implements Renderer {
                private float[] projectilecords = new float[] {
                     .0f,  .5f, 0,
                    -.5f,  0f,  0,
                     .5f,  0f,  0,
                     0,   -5f,  0,
                };

                @Override
                public void onDrawFrame(GL10 gl) {
                    // NOTE: GL_DEPTH_TEST is enabled in onSurfaceCreated,
                    // but the depth buffer is never cleared here.
                    gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
                    gl.glMatrixMode(GL10.GL_MODELVIEW);
                    //gl.glLoadIdentity();
                    gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
                    for (int i = 5000; i > 4500; i = i - 1) {
                        // drawing section
                        gl.glLoadIdentity();
                        gl.glColor4f(.9f, .9f, .9f, .9f);
                        gl.glTranslatef(projectilesY[i], projectilesX[i], 1);
                        gl.glVertexPointer(3, GL10.GL_FLOAT, 0, drawBuffer);
                        // NOTE: the array above holds only 4 vertices
                        // (12 floats); asking for 12 vertices here reads
                        // past the end of the buffer.
                        gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 12);
                        // physics section
                        projectilesX[i] = projectilesX[i] + projectilesXa[i];
                        projectilesY[i] = projectilesY[i] + projectilesYa[i];
                    }
                    gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
                }

                @Override
                public void onSurfaceChanged(GL10 gl, int width, int height) {
                    if (height == 0) height = 1;
                    // draw on the entire screen
                    gl.glViewport(0, 0, width, height);
                    // setup projection matrix
                    gl.glMatrixMode(GL10.GL_PROJECTION);
                    gl.glLoadIdentity();
                    gl.glOrthof(0, width, height, 0, -100, 100);
                }

                @Override
                public void onSurfaceCreated(GL10 gl, EGLConfig arg1) {
                    gl.glShadeModel(GL10.GL_SMOOTH);
                    gl.glClearColor(1f, .01f, .01f, 1f);
                    gl.glClearDepthf(1.0f);
                    gl.glEnable(GL10.GL_DEPTH_TEST);
                    gl.glDepthFunc(GL10.GL_LEQUAL);
                    gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_NICEST);
                    // NOTE: FloatBuffer.wrap() returns a non-direct buffer;
                    // Android's GL bindings need a direct, native-order buffer
                    // (ByteBuffer.allocateDirect(...).asFloatBuffer()).
                    drawBuffer = FloatBuffer.wrap(projectilecords);
                }
            }
        }

  • What features does D3D have that OpenGL does not (and vice versa)?

    - by Tom
    Are there any feature comparisons of Direct3D 11 and the newest OpenGL versions? Simply put, Direct3D 11 introduced these main features (taken from Wikipedia):

    - Tessellation
    - Multithreaded rendering
    - Compute shaders
    - Increased texture cache

    Now I'm wondering how the newest versions of OpenGL cope with these features. And since I have a feeling there are features Direct3D lacks from OpenGL's side, what are those?

  • What is the situation about OpenGL under Ubuntu Unity and Gnome3?

    - by user827992
    A GNU/Linux distribution usually has Xorg installed as its main graphical server. It operates with a client-server logic: a special window is designated as the desktop environment, and this special window handles all the eye-candy stuff like decorations, icons and effects. The problem is that the latest UIs rely heavily on hardware acceleration; Unity is an overlay on Compiz, and GNOME Shell also requires an active GPU driver to work well. So:

    - on the same OS I can find multiple implementations of OpenGL; who is handling my OpenGL buffer?
    - how is the OpenGL buffer managed compared to the other windows?
    - how can I be sure that my OpenGL implementation is glued to the hardware and is not subject to the client-server logic of Xorg?

    For example, I have tried the Clutter library and have experienced problems only under Unity and GTK/GNOME; no problems under other OSes.

  • Should I use OpenGL while working with the C++ Software?

    - by Paralytic
    I am completely new to programming, and to game development for that matter. I am using C++ software to create my game engine with the help of a beginners' guide. I noticed it has an OpenGL option when starting a new project. I've heard of OpenGL in connection with game development, but I'm not sure what it is. Should I be using OpenGL when creating my game engine? Will it matter if I just start with a blank slate?

  • Where to start learning OpenGL ES

    - by Captain Kidd
    Hi everybody, I've decided to learn OpenGL ES for iPhone development, but I know nothing about graphics programming, so I have some questions:

    1. I know OpenGL ES is a series of open-standard APIs. Does the iPhone use these standard APIs, or does Apple define its own APIs for OpenGL ES?
    2. Before I start to learn OpenGL ES, I think I should be familiar with OpenGL. Am I right?

  • Android runs OpenGL ES 1.1 or 1.0?

    - by cjserio
    I'm developing a native app for Android and I'm trying to use functions such as glIsEnabled, which appear to be available only in OpenGL ES 1.1. Google's docs claim that NDK 1.6R1 supports OpenGL ES 1.1, but the function call fails with "unimplemented OpenGL ES API", and if I do a glGetString(GL_VERSION) it returns "OpenGL ES 1.0 CM" as the version. So if 1.1 is available, what do I have to link against to get it, or what else do I need to change?

  • GL_COLOR_MATERIAL with lighting on Android

    - by kostmo
    It appears that glColorMaterial() is absent from OpenGL ES. According to this post (for iPhone), you may still enable GL_COLOR_MATERIAL in OpenGL ES 1.x, but you're stuck with the default settings of GL_FRONT_AND_BACK and GL_AMBIENT_AND_DIFFUSE that you would otherwise set with glColorMaterial(). I would be OK with this, but the diffuse lighting is not working correctly. I set up my scene and tested it with one light, setting glMaterialfv() for GL_AMBIENT and GL_DIFFUSE once in the initialization. The normals have been set correctly, and lighting works the way it's supposed to; I see the Gouraud shading. With GL_LIGHTING disabled, the flat colors I have set with glColor4f() appear on the various objects in the scene. This also functions as expected. However, when glEnable(GL_COLOR_MATERIAL) is called, the flat colors remain; I would expect to see the lighting effects. What might be missing? glColorMaterial() is also mentioned on anddev.org, but I'm not sure the information there is accurate. I'm testing this on an Android 2.1 handset (Motorola Droid). Edit: it works properly on my 1.6 handset (ADP1).
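
    A workaround that sidesteps GL_COLOR_MATERIAL entirely, as a hedged sketch: set the per-object colour through glMaterialfv instead, which behaves consistently even on drivers where colour-material tracking appears broken (as it seems to be here on 2.1):

        /* Equivalent of glColor4f under GL_COLOR_MATERIAL with the default
           GL_AMBIENT_AND_DIFFUSE mode; ES 1.x only accepts
           GL_FRONT_AND_BACK as the face argument. */
        GLfloat color[4] = { r, g, b, a };
        glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE, color);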

  • Will OpenGL give me any FPS improvement over CoreAnimation for scrolling a large image?

    - by Ben Roberts
    Hi, I'm considering rewriting the menu system of my iPhone app to use OpenGL, just to improve the smoothness of scrolling a big image (480x1900 px) across the screen. I'm looking at doing this as a way to improve on the method/solution described here (http://stackoverflow.com/questions/1443140/smoother-uiview). That solution was a big improvement over the previous implementation, but it's still not perfect, and as this is the first thing the user will see I'd like it to be as flawless as possible. Will switching to OpenGL give me the sort of smooth scrolling I'm looking for? I've stayed clear of OpenGL until now, as this is my first app and Core Animation has handled everything else I've thrown at it well enough, so it would be good to know if this alternative implementation is likely to work! Thanks

  • Using OpenGL drawing operations in an object-oriented setting?

    - by Lion Kabob
    I've been plowing through basic shaders and whatnot for an application I'm writing, and I've been having trouble figuring out a high-level organization for the drawing calls. I'm thinking of having a singleton class that implements a number of basic drawing operations, taking data from "user" classes and passing it to the appropriate OpenGL calls. I'm wondering how people do this when writing their own applications, as the internet is chock-full of basic "your first shader" tutorials but has very little on suggested organization of drawing code. My particular environment targets iPad/OpenGL ES 2.0, but I think the question stands for most environments.

  • What data type should I use for my texture coordinates in OpenGL ES?

    - by Matthew Chen
    I notice that the default data type for texture coordinates in the OpenGL docs is GLfloat, but much of the sample code I see written by experienced iPhone developers uses GLshort or GLbyte. Is this an optimization?

        GLfloat texCoords[] = {
            x1, y2,   // upper left
            x1, y1,   // lower left
            x2, y1,   // lower right
            x2, y2,   // upper right
        };
        glTexCoordPointer(2, GL_FLOAT, 0, texCoords);

    vs.

        GLbyte texCoords[] = {
            x1, y2,   // upper left
            x1, y1,   // lower left
            x2, y1,   // lower right
            x2, y2,   // upper right
        };
        glTexCoordPointer(2, GL_BYTE, 0, texCoords);
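
    It is mainly a bandwidth/cache optimization, but note that integer texture-coordinate types are not normalized: a GLbyte value of 1 means texture coordinate 1.0, so small integer types only work directly for whole-texture quads. A sketch of the usual trick for finer coordinates: store them in texel units and scale with the texture matrix (texWidth/texHeight are assumed names for the texture dimensions):

        /* Coordinates in texels, scaled down to [0, 1] by the texture matrix. */
        glMatrixMode(GL_TEXTURE);
        glLoadIdentity();
        glScalef(1.0f / texWidth, 1.0f / texHeight, 1.0f);

        GLshort texelCoords[] = { 0, 64, 0, 0, 64, 0, 64, 64 };
        glTexCoordPointer(2, GL_SHORT, 0, texelCoords);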

  • What's the minimum iOS version that supports OpenGL ES 2.0?

    - by Shireesh Agrawal
    Hi, I am not sure the question even makes sense. I am writing an iPhone game which uses OpenGL ES 2.0. I know that OpenGL ES 2.0 is supported on the 3GS and higher. Is there a minimum requirement for the iOS version too, e.g. does the device need iOS 3.1.3 or higher? Or does it solely depend on the hardware? Thanks! -shireesh p.s. I tried to search the net but haven't found much; perhaps I am not using the right keywords.

  • Why does OpenGL's glDrawArrays() fail with GL_INVALID_OPERATION under Core Profile 3.2, but not 3.3 or 4.2?

    - by metaleap
    I have OpenGL rendering code calling glDrawArrays that works flawlessly when the OpenGL context is (automatically/implicitly obtained) 4.2, but fails consistently (GL_INVALID_OPERATION) with an explicitly requested OpenGL 3.2 core context. (Shaders are always set to #version 150 in both cases, but that's beside the point here, I suspect.) According to the specs, there are only two instances when glDrawArrays() fails with GL_INVALID_OPERATION:

    - "if a non-zero buffer object name is bound to an enabled array and the buffer object's data store is currently mapped" -- I'm not doing any buffer mapping at this point;
    - "if a geometry shader is active and mode is incompatible with [...]" -- nope, no geometry shaders as of now.

    Furthermore: I have verified and double-checked that it's only the glDrawArrays() calls failing. I've also double-checked that all arguments passed to glDrawArrays() are identical under both GL versions, buffer bindings too. This happens across 3 different nvidia GPUs and 2 different OSes (Win7 and OSX, both 64-bit; of course, on OSX we only get the 3.2 context, no 4.2 anyway). It does not happen with an integrated "Intel HD" GPU, but for that one I only get an automatic implicit 3.3 context (trying to explicitly force a 3.2 core profile with this GPU via GLFW fails the window creation, but that's an entirely different issue...). For what it's worth, here's the relevant routine excerpted from the render loop, in Go:

        func (me *TMesh) render() {
            curMesh = me
            curTechnique.OnRenderMesh()
            gl.BindBuffer(gl.ARRAY_BUFFER, me.glVertBuf)
            if me.glElemBuf > 0 {
                gl.BindBuffer(gl.ELEMENT_ARRAY_BUFFER, me.glElemBuf)
                gl.VertexAttribPointer(curProg.AttrLocs["aPos"], 3, gl.FLOAT, gl.FALSE, 0, gl.Pointer(nil))
                gl.DrawElements(me.glMode, me.glNumIndices, gl.UNSIGNED_INT, gl.Pointer(nil))
                gl.BindBuffer(gl.ELEMENT_ARRAY_BUFFER, 0)
            } else {
                gl.VertexAttribPointer(curProg.AttrLocs["aPos"], 3, gl.FLOAT, gl.FALSE, 0, gl.Pointer(nil))
                gl.DrawArrays(me.glMode, 0, me.glNumVerts) /* BOOM! */
            }
            gl.BindBuffer(gl.ARRAY_BUFFER, 0)
        }

    So of course this is part of a bigger render loop, though the whole *TMesh construction for now is just two instances, one a simple cube and the other a simple pyramid. What matters is that the entire drawing loop works flawlessly, with no errors reported when GL is queried for errors, under both 3.3 and 4.2, yet on 3 nvidia GPUs with an explicit 3.2 core profile it fails with an error code that, according to the spec, is only raised in two specific situations, neither of which seems to apply here. What could be wrong? Have you ever run into this? Any ideas what I might be missing?
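
    One cause that fits these symptoms, offered as a hedged guess: a strict core profile has no default vertex array object, so drawing with VAO 0 bound raises GL_INVALID_OPERATION, while compatibility or more lenient 3.3/4.2 contexts keep the old behaviour. The fix is a one-time VAO, sketched in C:

        /* In core profiles (3.2+), a non-zero VAO must be bound before
           setting attribute pointers or issuing draw calls. */
        GLuint vao;
        glGenVertexArrays(1, &vao);
        glBindVertexArray(vao);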

  • Performance of pixel shaders vs. SpriteBatch: XNA

    - by ashes999
    Precondition: I read this question/answer about using shaders, or SpriteBatch, to render and mark a sprite. I need to do something like that. I also have a 2D lighting proof-of-concept which I need to write. The way it will work will basically be:

    1. Draw all the sprites.
    2. Draw lighting gradients to create a lighting texture.
    3. Multiply/add the lighting texture to achieve different effects (I use multiple passes of adding/multiplying the lighting texture).

    My question is really about a generalization: can I say with certainty that pixel shaders are always faster than adding/multiplying textures with the SpriteBatch? Or that adding/multiplying is always faster? Or, if it's not generalizable, how do I decide which approach to take, given that I can probably code either of them? (If it matters, I'm using MonoGame 3.0 beta for Windows games.)
