Search Results

Search found 19182 results on 768 pages for 'game engine'.

Page 363 of 768

  • Set a TouchEvent on every particle in a SpriteParticleSystem in an AndEngine live wallpaper

    - by Girish Bhutiya
    I am working on an AndEngine live wallpaper and I am using a SpriteParticleSystem. I want to add a touch event to every sprite of the SpriteParticleSystem and remove that sprite from the scene. I have used the code below to create the particle system:

        final SpriteParticleSystem particleSystem = new SpriteParticleSystem(
                new PointParticleEmitter(mParticleX, mParticleY),
                mParticleMinRate, mParticleMaxRate, mParticleMax,
                this.mFlowerTextureRegion, this.getVertexBufferObjectManager());
        //particleSystem.addParticleInitializer(new BlendFunctionParticleInitializer<Sprite>(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE));
        particleSystem.addParticleInitializer(new VelocityParticleInitializer<Sprite>(-90, 0, 0, 0));
        particleSystem.addParticleInitializer(new AccelerationParticleInitializer<Sprite>(8, -11));
        particleSystem.addParticleInitializer(new RotationParticleInitializer<Sprite>(0.0f, 360.0f));
        //particleSystem.addParticleInitializer(new ColorParticleInitializer<Sprite>(1.0f, 1.0f, 0.0f));
        particleSystem.addParticleInitializer(new ExpireParticleInitializer<Sprite>(15.5f));
        particleSystem.addParticleModifier(new ScaleParticleModifier<Sprite>(0, 5, 0.5f, 2.0f));

        background = new Sprite(0, 0, backgroundTextureRegion, mEngine.getVertexBufferObjectManager());
        background.setPosition((CAMERA_WIDTH - background.getWidth()) * 0.5f, (CAMERA_HEIGHT - background.getHeight()) * 0.5f);
        //background.setScale(1.5f);
        //SpriteBackground bg = new SpriteBackground(background);
        //mScene.setBackground(bg);
        this.mScene.attachChild(background);
        this.mScene.attachChild(particleSystem);

    Thanks in advance.
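    For what it's worth, here is a rough sketch of the usual approach in AndEngine-style Java: hit-test the particle sprites on a touch-down and detach the one that was hit. The mTouchableSprites list is a placeholder; whether SpriteParticleSystem exposes its live particles directly depends on the AndEngine version, so you may need to subclass it or track the sprites yourself.

        // Sketch only: assumes you can obtain the particle sprites somehow
        // (e.g. by subclassing SpriteParticleSystem or keeping your own list).
        mScene.setOnSceneTouchListener(new IOnSceneTouchListener() {
            @Override
            public boolean onSceneTouchEvent(final Scene scene, final TouchEvent event) {
                if (!event.isActionDown()) {
                    return false;
                }
                for (final Sprite sprite : mTouchableSprites) {      // hypothetical list of live particle sprites
                    if (sprite.contains(event.getX(), event.getY())) {
                        mEngine.runOnUpdateThread(new Runnable() {   // detach on the update thread to avoid concurrent modification
                            @Override
                            public void run() {
                                sprite.setVisible(false);
                                sprite.detachSelf();
                            }
                        });
                        return true;
                    }
                }
                return false;
            }
        });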

    Read the article

  • Homebrew development for 7th gen home consoles

    - by Brian McKenna
    I'm looking to do some homebrew development for the Wii, Xbox 360, or PS3. I'll be developing from a Linux system, and the programming language doesn't matter.
    Wii - devkitPPC and libogc look fairly easy and complete.
    Xbox 360 - Mono.XNA looks interesting but not very feature-complete.
    PS3 - psl1ght seems interesting, but I haven't been able to find out much about it.
    How homebrew-friendly is each of these consoles? Can someone give a comparison of each of these scenes?

    Read the article

  • OpenGL's matrix stack vs. hand multiplying

    - by deft_code
    Which is more efficient: using OpenGL's transformation stack, or applying the transformations by hand? I've often heard that you should minimize the number of state transitions in your graphics pipeline, and pushing and popping translation matrices seems like a big change. However, I wonder if the graphics card might be able to more than make up for the pipeline hiccup by using its parallel execution hardware to bulk-multiply the vertices.
    My specific case: I have a font rendered to a sprite sheet. The coordinates of each character of a string are calculated and added to a vertex buffer. Now I need to move that string. Would it be better to iterate through the vertex buffer and adjust each of the vertices by hand, or to temporarily push a new translation matrix?
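    For reference, a minimal sketch of the two options in LWJGL-style Java; drawStringVertexBuffer, dx, dy and vertices are placeholders for your own draw call and data, not anything from the question:

        // Option A: let the matrix stack apply the translation at draw time.
        GL11.glPushMatrix();
        GL11.glTranslatef(dx, dy, 0f);
        drawStringVertexBuffer();            // placeholder for whatever issues your draw call
        GL11.glPopMatrix();

        // Option B: bake the translation into the vertex data yourself,
        // then re-upload the buffer and draw with no extra matrix state.
        for (int i = 0; i < vertices.length; i += 2) {
            vertices[i]     += dx;           // x
            vertices[i + 1] += dy;           // y
        }

    In practice a single push/translate/pop around a draw call is a very cheap state change compared with rewriting and re-uploading a vertex buffer every time the string moves, but profiling your own case is the only reliable answer.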

    Read the article

  • Collision detection with multiple polygons simultaneously

    - by Craig Innes
    I've written a collision system which detects/resolves collisions between a rectangular player and a convex polygon world using the Separating Axis Theorem. This scheme works fine when the player is colliding with a single polygon, but when I try to create a level made up of combinations of these shapes, the player gets "stuck" between shapes when trying to move from one polygon to the other.
    The reason for this seems to be that collisions are detected after the player has been pushed through a shape by its movement or gravity. When the system resolves the collisions, it resolves them in an order that doesn't make sense (for example, when the player is moving from one flat rectangle to another, gravity pushes them below the ground, but the collision with the left-hand side of the second block is resolved before the collision with the top of that block, meaning the player is pushed back left before being pushed back up).
    Other similar posts have resolved this problem by having a strict rule on which axes to resolve first: for example, always resolve the collision on the y axis, then, if the object is still colliding with things, resolve on the x axis. That solution only works in the case of a completely axis-aligned box world, and doesn't solve the problem if the player is stuck moving along a series of angled shapes or sliding down a wall. Does anyone have any ideas of how I could alter my collision system to prevent these situations from happening?
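    For reference, a common fix is to resolve along the axis of least penetration (the minimum translation vector) across all currently overlapping polygons, rather than in a fixed per-axis order. A small, self-contained Java sketch of that selection step; the Mtv type and the way overlaps are gathered are illustrative, not from the question:

        // Resolve along the axis of least penetration instead of a fixed axis order.
        final class Mtv {
            final float axisX, axisY;   // unit-length separating axis pointing out of the obstacle
            final float depth;          // how far the shapes overlap along that axis
            Mtv(float axisX, float axisY, float depth) {
                this.axisX = axisX; this.axisY = axisY; this.depth = depth;
            }
        }

        static float[] resolve(float px, float py, java.util.List<Mtv> overlaps) {
            Mtv best = null;
            for (Mtv o : overlaps) {
                if (best == null || o.depth < best.depth) {
                    best = o;           // keep the smallest overlap across all polygons
                }
            }
            if (best != null) {
                px += best.axisX * best.depth;   // push out by the minimum translation vector only
                py += best.axisY * best.depth;
            }
            return new float[] { px, py };
        }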

    Read the article

  • How do I draw a border around a display object in Corona Lua?

    - by Greg
    What would be the easiest way to draw a thin border around a display object in Corona Lua? You can assume it's a rectangular image display object.
    EDIT - re "this question shows no research effort. You should tell us what you've tried and how it didn't work":
    I reviewed the API and could not find a "border" method or property on displayObject.
    I tried creating a black box slightly bigger than the object behind it, but could not see how to place an object behind an existing one; hence the question "How do I move an existing display object behind another in Corona Lua?"
    Google results for putting a border around a display object in Corona didn't help.

    Read the article

  • Unresolved external symbol __imp____glewGenerateMipmap

    - by Tsvetan
    Visual Studio 2010 gives this error when I compile my C++ code. I have added 'glew32.lib' and 'freeglut.lib' to Additional Dependencies, for both the Release and Debug configurations, and I have also included the header files. I have searched the Internet and found only one forum post about it, but it isn't the solution I am looking for... My library is dynamic, so GLEW_STATIC is not an option. Can you give me a solution to this problem?

    Read the article

  • Hooking DirectX EndScene from an injected DLL

    - by Etan
    I want to detour EndScene from an arbitrary DirectX 9 application to create a small overlay. As an example, take the frame counter overlay of FRAPS, which is shown in games when activated. I know the following methods to do this:
    1. Creating a new d3d9.dll, which is then copied to the game's path. Since the current folder is searched first, before going to system32 etc., my modified DLL gets loaded, executing my additional code. Downside: you have to put it there before you start the game.
    2. Same as the first method, but replacing the DLL in system32 directly. Downside: you cannot add game-specific code, and you cannot exclude applications where you don't want your DLL to be loaded.
    3. Getting the EndScene offset directly from the DLL using tools like IDA Pro 4.9 Free. Since the DLL gets loaded as is, you can just add this offset to the DLL's starting address, once it is mapped into the game, to get the actual address, and then hook it. Downside: the offset is not the same on every system.
    4. Hooking Direct3DCreate9 to get the IDirect3D9, then hooking IDirect3D9::CreateDevice to get the device pointer, and then hooking EndScene through the device's virtual table. Downside: the DLL cannot be injected when the process is already running; you have to start the process with the CREATE_SUSPENDED flag to hook the initial Direct3DCreate9 call.
    5. Creating a new device in a new window as soon as the DLL gets injected, then getting the EndScene offset from this device and hooking it, resulting in a hook for the device used by the game. Downside: from some information I have read, creating a second device may interfere with the existing device, and it may misbehave with windowed vs. fullscreen mode etc.
    6. Same as the third method, but doing a pattern scan to get EndScene. Downside: it doesn't look that reliable.
    How can I hook EndScene from an injected DLL, which may be loaded when the game is already running, without having to deal with different d3d9.dll versions on other systems, and with a method that is reliable? How does FRAPS, for example, perform its DirectX hooks? The DLL should not apply to all games, just to specific processes where I inject it via CreateRemoteThread.

    Read the article

  • Action button: only true once per press

    - by Sidar
    I'm using SFML 2.0 and am trying to make a wrapper class for my controller/joystick. I read all the input data from my controller and send it off to my controllable object. I want to have two types of buttons per button press: one that is continuous (a true/false state) and one that is an action and is set back to false after the next frame update. Here is an example of how I set my button A to true or false with the SFML API, where data is my struct of buttons and A holds my true/false state every update:
        data.A = sf::Joystick::isButtonPressed(i, st::input::A);
    But I've also added data.actionA, which represents the one-time action state. Basically what I want is for actionA to be set false after the update in which it was set to true. I'm trying to keep track of the previous state, but I seem to fall into a loop where it toggles between true and false every update. Does anyone have an idea?
    Edit: Since I can't answer my own question yet, here is my solution:
        data.actionA = data.A = sf::Joystick::isButtonPressed(i, st::input::A);
        if (prev.A) data.actionA = false;
    First I always set actionA to the value of the button state. Then I check if the previous state of A is true; if so, we negate the value.
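    For comparison, the general pattern is a rising-edge test against the previous frame's state. A small, library-agnostic Java sketch; isButtonDown stands in for whatever the real input query is:

        boolean prevDown = false;

        void update() {
            boolean down = isButtonDown();             // placeholder for the real input query
            boolean held = down;                       // "continuous" state: true every frame the button is down
            boolean justPressed = down && !prevDown;   // "action" state: true only on the frame the press starts
            prevDown = down;                           // remember for the next frame
            // ... use held / justPressed here ...
        }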

    Read the article

  • How to gain accurate results with Painter's algorithm?

    - by pimvdb
    A while ago I asked how to determine when a face is overlapping another. The advice was to use a Z-buffer. However, I cannot use a Z-buffer in my current project and hence I would like to use the Painter's algorithm. I have no good clue as to when a surface is behind or in front of another, though. I've tried numerous methods, but they all fail in edge cases, or they fail even in general cases. This is a list of sorting keys I've tried so far:
    Distance to the midpoint of each face
    Average distance to each vertex of each face
    Average z value of each vertex
    Highest z value of the vertices of each face, drawing those first
    Lowest z value of the vertices of each face, drawing those last
    The problem is that a face might have a closer distance but still be further away. All these methods seem unreliable.
    Edit: For example, in the following image the surface with the blue point as midpoint is painted over the surface with the red point as midpoint, because the blue point is closer. However, this is because the surface of the red point is larger and its midpoint is further away. The surface with the red point should be painted over the blue one, because it is closer, whilst the midpoint distance says the opposite. What exactly is used in the Painter's algorithm to determine the order in which objects should be drawn?
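    For reference, a minimal Java sketch of one classic ordering: sort back to front by each face's farthest vertex and paint in that order. No single key like this handles every case; overlapping or intersecting polygons eventually need the full overlap tests and polygon splitting of the Newell-style Painter's algorithm, so treat this only as the simple starting point. It assumes a larger z means farther from the camera; flip the comparison if your convention differs.

        // Each face is an array of {x, y, z} vertices (java.util.List is assumed imported).
        static void sortBackToFront(List<float[][]> faces) {
            faces.sort((a, b) -> Float.compare(maxDepth(b), maxDepth(a)));   // farthest faces first
        }

        static float maxDepth(float[][] face) {
            float max = Float.NEGATIVE_INFINITY;
            for (float[] v : face) {
                max = Math.max(max, v[2]);   // take the face's farthest vertex as its sort key
            }
            return max;
        }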

    Read the article

  • Box2D simulation doesn't work

    - by shadow_of__soul
    It has been a while since I last used Box2D. I needed to make some stuff and saw that my simulation doesn't work (it compiles, but does nothing). I haven't even been able to get the examples working, nor this simple example I'm pasting below:

        package {
            import flash.display.Sprite;
            import flash.events.Event;
            import Box2D.Common.Math.b2Vec2;
            import Box2D.Dynamics.b2World;
            import Box2D.Dynamics.b2BodyDef;
            import Box2D.Dynamics.b2Body;
            import Box2D.Collision.Shapes.b2CircleShape;
            import Box2D.Dynamics.b2Fixture;
            import Box2D.Dynamics.b2FixtureDef;
            import org.flashdevelop.utils.FlashConnect;
            import flash.events.TimerEvent;
            import flash.utils.Timer;

            public class Main extends Sprite {
                public var world:b2World;
                public var wheelBody:b2Body;
                public var stepTimer:Timer;

                public function Main():void {
                    if (stage) init();
                    else addEventListener(Event.ADDED_TO_STAGE, init);
                }

                private function init(e:Event = null):void {
                    removeEventListener(Event.ADDED_TO_STAGE, init);
                    var gravity:b2Vec2 = new b2Vec2(0, 10);
                    world = new b2World(gravity, true);
                    var wheelBodyDef:b2BodyDef = new b2BodyDef();
                    wheelBodyDef.type = b2Body.b2_dynamicBody;
                    wheelBody = world.CreateBody(wheelBodyDef);
                    var circleShape:b2CircleShape = new b2CircleShape(5);
                    var wheelFixtureDef:b2FixtureDef = new b2FixtureDef();
                    wheelFixtureDef.shape = circleShape;
                    var wheelFixture:b2Fixture = wheelBody.CreateFixture(wheelFixtureDef);
                    stepTimer = new Timer(0.025 * 1000);
                    stepTimer.addEventListener(TimerEvent.TIMER, onTick);
                    FlashConnect.trace(wheelBody.GetPosition().x, wheelBody.GetPosition().y);
                    stepTimer.start();
                    // entry point
                }

                private function onTick(a_event:TimerEvent):void {
                    world.Step(0.025, 10, 10);
                    FlashConnect.trace(wheelBody.GetPosition().x, wheelBody.GetPosition().y);
                }
            }
        }

    With this, the object should fall down, but the positions reported by the trace method are always 0. So it is not a display problem where everything merely looks frozen; the simulation itself is not running, and I have no idea why :( Can anyone point me in the right direction of where I need to look for the problem? My settings: Windows 7, FlashDevelop 4.2.1, SDK 4.6.0, compiling for Flash Player 10, but I tried every target I have available (up to Flash 11.5); the project is set at 30 fps.

    Read the article

  • Rain effect using DirectX 9 capabilities

    - by teodron
    Is it possible to achieve something similar to nVidia's rain demo using only shader model 3.0 capabilities? If yes, could you point out a few documents/web resources that are suitable candidates and do not require a heavy programming load (e.g. not more than two hard weeks of programming for one single person)? It would be nice if the answer could also contain a pro/con phrase for the proposed idea (e.g. postprocessing rain shader vs. a particle based effect).

    Read the article

  • Help on TileMapRenderer

    - by Crypted
    In my project, I'm trying to render a map using TileMapRenderer, but it doesn't show anything when I render it. When I use files from a tutorial instead, they are rendered correctly. When debugging, my TileAtlas instance shows a size of 0. I have used Texture Packer UI for packing the images. Comparing with the tutorial's files, I can see that the index starts from 1 in my file and from 0 in the tutorial's, but changing it to 0 doesn't work either.

        map.png
        format: RGBA8888
        filter: Nearest,Nearest
        repeat: none
        Map
          rotate: false
          xy: 0, 0
          size: 32, 32
          orig: 32, 32
          offset: 0, 0
          index: 1
        Map
          rotate: false
          xy: 32, 0
          size: 32, 32
          orig: 32, 32
          offset: 0, 0
          index: 2
        Map
          rotate: false
          xy: 64, 0
          size: 32, 32
          orig: 32, 32
          offset: 0, 0
          index: 3
        Map
          rotate: false
          xy: 96, 0
          size: 32, 32
          orig: 32, 32
          offset: 0, 0
          index: 4
        Map
          rotate: false
          xy: 128, 0
          size: 32, 32
          orig: 32, 32
          offset: 0, 0
          index: 5

    Here is the beginning of the .tmx file:

        <?xml version="1.0" encoding="UTF-8"?>
        <map version="1.0" orientation="orthogonal" width="20" height="20" tilewidth="32" tileheight="32">
          <tileset firstgid="1" name="a" tilewidth="32" tileheight="32">
            <image source="map.png" width="256" height="32"/>
          </tileset>
          <layer name="Tile Layer 1" width="20" height="20">
            <data>
              <tile gid="2"/>
              <tile gid="2"/>

    Apart from that, the tutorial's files and my files seem to be similar. Can anyone help me here?

    Read the article

  • Trying to Draw a 2D Triangle in OpenGL ES 2.0

    - by Nathan Campos
    I'm trying to convert some code from OpenGL to OpenGL ES 2.0 (for the BlackBerry PlayBook). So far, this is the part of the code that should draw the triangle:

        void setupScene() {
            glClearColor(250, 250, 250, 1);
            glViewport(0, 0, 600, 1024);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        }

        void drawScene() {
            setupScene();
            glColorMask(0, 0, 0, 1);
            const GLfloat triangleVertices[] = {
                100, 100,
                150, 0,
                200, 100
            };
            glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, triangleVertices);
            glEnableVertexAttribArray(0);
            glDrawArrays(GL_TRIANGLES, 0, 2);
        }

        void render() {
            drawScene();
            bbutil_swap();
        }

    The problem is that when I launch the app, instead of showing the triangle, the screen just flickers (very fast) from white to gray. Any idea what I'm doing wrong? Also, here is the entire code if you need it: Full source code
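    For reference, a minimal sketch of an equivalent draw call in Java (Android's GLES20 bindings, used here purely as an illustration since the call shapes match): note that glDrawArrays takes a vertex count, so one triangle needs 3 rather than 2, and glClearColor expects components in the 0.0 to 1.0 range.

        // Sketch only: assumes a compiled/linked program whose position attribute is bound to location 0.
        void drawTriangle() {
            GLES20.glClearColor(0.98f, 0.98f, 0.98f, 1.0f);            // normalized components, not 0-255
            GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);

            float[] verts = { 100f, 100f, 150f, 0f, 200f, 100f };
            FloatBuffer buf = ByteBuffer.allocateDirect(verts.length * 4)
                    .order(ByteOrder.nativeOrder())
                    .asFloatBuffer();
            buf.put(verts).position(0);

            GLES20.glVertexAttribPointer(0, 2, GLES20.GL_FLOAT, false, 0, buf);
            GLES20.glEnableVertexAttribArray(0);
            GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 3);            // 3 vertices for one triangle
        }

    Also, if the vertex shader passes positions through unchanged, pixel-style coordinates such as 100 or 200 land far outside the -1..1 clip-space range, so an orthographic projection (or normalized coordinates) is needed for the triangle to show up at all.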

    Read the article

  • Learning resource for 3d modeling

    - by Maik Klein
    I want to start learning 3D modeling. I already have experience with Maya and 3ds Max, but I took a long break (2 years). Now I have free access to Maya, 3ds Max and Blender (I am a student). I know that all of these tools are very powerful, so I thought I would just pick the one with the best learning materials. The best site that I found is http://www.digitaltutors.com/11/index.php and it has over 7600 videos for Maya. Can you recommend some other learning sites that are as good as Digital Tutors?

    Read the article

  • Problem loading shaders with slimdx

    - by Levi
    I'm attempting to load an .fx file in SlimDX. I have this exact .fx file loading and compiling fine with XNA 4.0, but I'm getting errors with SlimDX. Here's my code to load it:

        using SlimDX.Direct3D11;
        using SlimDX.D3DCompiler;

        public static Effect LoadFXShader(string path)
        {
            Effect shader;
            using (var bytecode = ShaderBytecode.CompileFromFile(path, null, "fx_2_0", ShaderFlags.None, EffectFlags.None))
                shader = new Effect(Devices.GPU.GraphicsDevice, bytecode);
            return shader;
        }

    Here's the shader:

        #define TEXTURE_TILE_SIZE 16

        struct VertexToPixel
        {
            float4 Position : POSITION;
            float2 TextureCoords : TEXCOORD1;
        };

        struct PixelToFrame
        {
            float4 Color : COLOR0;
        };

        //------- Constants --------
        float4x4 xView;
        float4x4 xProjection;
        float4x4 xWorld;
        float4x4 preViewProjection;
        //float random;

        //------- Texture Samplers --------
        Texture TextureAtlas;
        sampler TextureSampler = sampler_state
        {
            texture = <TextureAtlas>;
            magfilter = Point;
            minfilter = point;
            mipfilter = linear;
            AddressU = mirror;
            AddressV = mirror;
        };

        //------- Technique: Textured --------
        VertexToPixel TexturedVS(byte4 inPos : POSITION, float2 inTexCoords : TEXCOORD0)
        {
            inPos.w = 1;
            VertexToPixel Output = (VertexToPixel)0;
            float4x4 preViewProjection = mul(xView, xProjection);
            float4x4 preWorldViewProjection = mul(xWorld, preViewProjection);
            Output.Position = mul(inPos, preWorldViewProjection);
            Output.TextureCoords = inTexCoords / TEXTURE_TILE_SIZE;
            return Output;
        }

        PixelToFrame TexturedPS(VertexToPixel PSIn)
        {
            PixelToFrame Output = (PixelToFrame)0;
            Output.Color = tex2D(TextureSampler, PSIn.TextureCoords);
            if (Output.Color.a != 1)
                clip(-1);
            return Output;
        }

        technique Textured
        {
            pass Pass0
            {
                VertexShader = compile vs_2_0 TexturedVS();
                PixelShader = compile ps_2_0 TexturedPS();
            }
        }

    Now, this exact shader works fine in XNA, but in SlimDX I get the error:

        ChunkDefault.fx(28,27): error X3000: unrecognized identifier 'byte4'

    Read the article

  • OBJ model loaded in LWJGL has a black area with no texture

    - by gambiting
    I have a problem with loading an .obj file in LWJGL and its textures. The object is a tree (it's a paid model from TurboSquid, so I can't post it here, but here's the link if you want to see how it should look): http://www.turbosquid.com/FullPreview/Index.cfm/ID/701294

    I wrote a custom OBJ loader using the LWJGL tutorial from their wiki. It looks like this:

        public class OBJLoader {
            public static Model loadModel(File f) throws FileNotFoundException, IOException {
                BufferedReader reader = new BufferedReader(new FileReader(f));
                Model m = new Model();
                String line;
                Texture currentTexture = null;
                while ((line = reader.readLine()) != null) {
                    if (line.startsWith("v ")) {
                        float x = Float.valueOf(line.split(" ")[1]);
                        float y = Float.valueOf(line.split(" ")[2]);
                        float z = Float.valueOf(line.split(" ")[3]);
                        m.verticies.add(new Vector3f(x, y, z));
                    } else if (line.startsWith("vn ")) {
                        float x = Float.valueOf(line.split(" ")[1]);
                        float y = Float.valueOf(line.split(" ")[2]);
                        float z = Float.valueOf(line.split(" ")[3]);
                        m.normals.add(new Vector3f(x, y, z));
                    } else if (line.startsWith("vt ")) {
                        float x = Float.valueOf(line.split(" ")[1]);
                        float y = Float.valueOf(line.split(" ")[2]);
                        m.texVerticies.add(new Vector2f(x, y));
                    } else if (line.startsWith("f ")) {
                        Vector3f vertexIndicies = new Vector3f(
                                Float.valueOf(line.split(" ")[1].split("/")[0]),
                                Float.valueOf(line.split(" ")[2].split("/")[0]),
                                Float.valueOf(line.split(" ")[3].split("/")[0]));
                        Vector3f textureIndicies = new Vector3f(
                                Float.valueOf(line.split(" ")[1].split("/")[1]),
                                Float.valueOf(line.split(" ")[2].split("/")[1]),
                                Float.valueOf(line.split(" ")[3].split("/")[1]));
                        Vector3f normalIndicies = new Vector3f(
                                Float.valueOf(line.split(" ")[1].split("/")[2]),
                                Float.valueOf(line.split(" ")[2].split("/")[2]),
                                Float.valueOf(line.split(" ")[3].split("/")[2]));
                        m.faces.add(new Face(vertexIndicies, textureIndicies, normalIndicies, currentTexture.getTextureID()));
                    } else if (line.startsWith("g ")) {
                        if (line.length() > 2) {
                            String name = line.split(" ")[1];
                            currentTexture = TextureLoader.getTexture("PNG", ResourceLoader.getResourceAsStream("res/" + name + ".png"));
                            System.out.println(currentTexture.getTextureID());
                        }
                    }
                }
                reader.close();
                System.out.println(m.verticies.size() + " verticies");
                System.out.println(m.normals.size() + " normals");
                System.out.println(m.texVerticies.size() + " texture coordinates");
                System.out.println(m.faces.size() + " faces");
                return m;
            }
        }

    Then I create a display list for my model using this code:

        objectDisplayList = GL11.glGenLists(1);
        GL11.glNewList(objectDisplayList, GL11.GL_COMPILE);
        Model m = null;
        try {
            m = OBJLoader.loadModel(new File("res/untitled4.obj"));
        } catch (Exception e1) {
            e1.printStackTrace();
        }
        int currentTexture = 0;
        for (Face face : m.faces) {
            if (face.texture != currentTexture) {
                currentTexture = face.texture;
                GL11.glBindTexture(GL11.GL_TEXTURE_2D, currentTexture);
            }
            GL11.glColor3f(1f, 1f, 1f);
            GL11.glBegin(GL11.GL_TRIANGLES);
            Vector3f n1 = m.normals.get((int) face.normal.x - 1);
            GL11.glNormal3f(n1.x, n1.y, n1.z);
            Vector2f t1 = m.texVerticies.get((int) face.textures.x - 1);
            GL11.glTexCoord2f(t1.x, t1.y);
            Vector3f v1 = m.verticies.get((int) face.vertex.x - 1);
            GL11.glVertex3f(v1.x, v1.y, v1.z);
            Vector3f n2 = m.normals.get((int) face.normal.y - 1);
            GL11.glNormal3f(n2.x, n2.y, n2.z);
            Vector2f t2 = m.texVerticies.get((int) face.textures.y - 1);
            GL11.glTexCoord2f(t2.x, t2.y);
            Vector3f v2 = m.verticies.get((int) face.vertex.y - 1);
            GL11.glVertex3f(v2.x, v2.y, v2.z);
            Vector3f n3 = m.normals.get((int) face.normal.z - 1);
            GL11.glNormal3f(n3.x, n3.y, n3.z);
            Vector2f t3 = m.texVerticies.get((int) face.textures.z - 1);
            GL11.glTexCoord2f(t3.x, t3.y);
            Vector3f v3 = m.verticies.get((int) face.vertex.z - 1);
            GL11.glVertex3f(v3.x, v3.y, v3.z);
            GL11.glEnd();
        }
        GL11.glEndList();

    The currentTexture is an int - it contains the ID of the currently used texture. So my model looks absolutely fine without textures (sorry, I cannot post hyperlinks since I am a new user): i.imgur.com/VtoK0.png

    But look what happens if I enable GL_TEXTURE_2D: i.imgur.com/z8Kli.png i.imgur.com/5e9nn.png i.imgur.com/FAHM9.png

    As you can see, an entire side of the tree appears to be missing - and it's not transparent, since it's not in the colour of the background - it's rendered black. It's not a problem with the model - if I load it using Kanji's OBJ loader it works fine (but the thing is that I need to write my own OBJ loader): i.imgur.com/YDATo.png

    This is my OpenGL init section:

        //init display
        try {
            Display.setDisplayMode(new DisplayMode(Support.SCREEN_WIDTH, Support.SCREEN_HEIGHT));
            Display.create();
            Display.setVSyncEnabled(true);
        } catch (LWJGLException e) {
            e.printStackTrace();
            System.exit(0);
        }
        GL11.glLoadIdentity();
        GL11.glEnable(GL11.GL_TEXTURE_2D);
        GL11.glClearColor(1.0f, 0.0f, 0.0f, 1.0f);
        GL11.glShadeModel(GL11.GL_SMOOTH);
        GL11.glEnable(GL11.GL_DEPTH_TEST);
        GL11.glDepthFunc(GL11.GL_LESS);
        GL11.glDepthMask(true);
        GL11.glEnable(GL11.GL_NORMALIZE);
        GL11.glMatrixMode(GL11.GL_PROJECTION);
        GLU.gluPerspective(90.0f, 800f / 600f, 1f, 500.0f);
        GL11.glMatrixMode(GL11.GL_MODELVIEW);
        GL11.glEnable(GL11.GL_CULL_FACE);
        GL11.glCullFace(GL11.GL_BACK);
        //enable lighting
        GL11.glEnable(GL11.GL_LIGHTING);
        ByteBuffer temp = ByteBuffer.allocateDirect(16);
        temp.order(ByteOrder.nativeOrder());
        GL11.glMaterial(GL11.GL_FRONT, GL11.GL_DIFFUSE, (FloatBuffer) temp.asFloatBuffer().put(lightDiffuse).flip());
        GL11.glMaterialf(GL11.GL_FRONT, GL11.GL_SHININESS, (int) material_shinyness);
        GL11.glLight(GL11.GL_LIGHT2, GL11.GL_DIFFUSE, (FloatBuffer) temp.asFloatBuffer().put(lightDiffuse2).flip()); // Setup The Diffuse Light
        GL11.glLight(GL11.GL_LIGHT2, GL11.GL_POSITION, (FloatBuffer) temp.asFloatBuffer().put(lightPosition2).flip());
        GL11.glLight(GL11.GL_LIGHT2, GL11.GL_AMBIENT, (FloatBuffer) temp.asFloatBuffer().put(lightAmbient).flip());
        GL11.glLight(GL11.GL_LIGHT2, GL11.GL_SPECULAR, (FloatBuffer) temp.asFloatBuffer().put(lightDiffuse2).flip());
        GL11.glLightf(GL11.GL_LIGHT2, GL11.GL_CONSTANT_ATTENUATION, 0.1f);
        GL11.glLightf(GL11.GL_LIGHT2, GL11.GL_LINEAR_ATTENUATION, 0.0f);
        GL11.glLightf(GL11.GL_LIGHT2, GL11.GL_QUADRATIC_ATTENUATION, 0.0f);
        GL11.glEnable(GL11.GL_LIGHT2);

    Could somebody please help me?
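    One thing that may be worth checking (a sketch of a restructure, not a diagnosis): loadModel above runs between glNewList and glEndList, so the texture loading happens while the display list is being compiled. Loading the model first keeps texture creation out of the compiled list and removes one variable:

        // Load the model (and its textures) before compiling the display list.
        Model m = OBJLoader.loadModel(new File("res/untitled4.obj"));

        int objectDisplayList = GL11.glGenLists(1);
        GL11.glNewList(objectDisplayList, GL11.GL_COMPILE);
        // ... the per-face glBindTexture / glBegin / glEnd code from above, unchanged ...
        GL11.glEndList();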

    Read the article

  • What's a good way to organize samplers for HLSL?

    - by Rei Miyasaka
    According to MSDN, I can have 4096 samplers per context. That's a lot, considering there's only a handful of common sampler states. That tempts me to initialize an array containing a whole bunch of common sampler states, assign them to every device context I use, and then, in the pixel shaders, refer to them by index using : register(s[n]), where n is the index in the array. If I want more samplers for whatever reason, I can just add them after the last slot.
    Does this work? If not, when should I set the samplers? Should it be done by the mesh renderer? The texture renderer? Or alongside PSSetShader?
    Edit: The trick I wrote above doesn't work (at least not yet), as the compiler gives me this error message when I try to use the same register twice:
        error X4500: overlapping register semantics not yet implemented 's0'
    So how do people usually organize samplers, then?

    Read the article

  • Why can I not map a dynamic texture in D3D?

    - by sebf
    I am trying to map a Texture2D resource in Direct3D 11 via SharpDX. The resource is declared as a ShaderResource, with Dynamic usage and the 'Write' CPU flag specified. My call, however, fails with a generic exception from SharpDX:
        _Parent.Context.MapSubresource(_Resource, 0, SharpDX.Direct3D11.MapMode.Write, SharpDX.Direct3D11.MapFlags.None, out stream);
    I see from this question that it is supported. The MSDN docs and this other question hint that instead of using Context.MapSubresource() I should be using Texture2D.Map(); however, the Direct3D 11 Texture2D class does not define Map() (though the D3D10 equivalent does). If I call the above with MapMode.WriteDiscard, the call succeeds, but in that case the previous content of the texture is lost, which is no good when I only want to update a section of it.
    Has the Map() method been removed in Direct3D 11, or am I looking in the wrong place? Is the MapSubresource() method unsuitable, or am I using it wrong?
    EDIT: I declared my resource as Dynamic with CPU Write flags - not Default as I originally wrote - sorry for the fairly huge 'typo' that changes the entire question!

    Read the article

  • Implementing algorithms via compute shaders vs. pipeline shaders

    - by TravisG
    With the availability of compute shaders for both DirectX and OpenGL it's now possible to implement many algorithms without going through the rasterization pipeline and instead use general purpose computing on the GPU to solve the problem. For some algorithms this seems to become the intuitive canonical solution because they're inherently not rasterization based, and rasterization-based shaders seemed to be a workaround to harness GPU power (simple example: creating a noise texture. No quad needs to be rasterized here). Given an algorithm that can be implemented both ways, are there general (potential) performance benefits over using compute shaders vs. going the normal route? Are there drawbacks that we should watch out for (for example, is there some kind of unusual overhead to switching from/to compute shaders at runtime)? Are there perhaps other benefits or drawbacks to consider when choosing between the two?

    Read the article

  • Ray picking - get direction from pitch and yaw

    - by Isaac Waller
    I am attempting to cast a ray from the center of the screen and check for collisions with objects. When rendering, I use these calls to set up the camera:
        GL11.glRotated(mPitch, 1, 0, 0);
        GL11.glRotated(mYaw, 0, 1, 0);
        GL11.glTranslated(mPositionX, mPositionY, mPositionZ);
    I am having trouble creating the ray, however. This is the code I have so far:
        ray.origin = new Vector(mPositionX, mPositionY, mPositionZ);
        ray.direction = new Vector(?, ?, ?);
    My question is: what should I put in the question-mark spots? I.e., how can I create the ray direction from the pitch and yaw? Any help would be much appreciated!
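    For what it's worth, a rough Java sketch of one way to derive the direction for the rotation order shown above (it reuses the question's own Vector class; the exact signs depend on the camera conventions, so negate components if the ray comes out mirrored):

        // Direction of a ray through the screen centre, from the camera's pitch/yaw in degrees.
        static Vector direction(double pitchDeg, double yawDeg) {
            double pitch = Math.toRadians(pitchDeg);
            double yaw   = Math.toRadians(yawDeg);
            double x =  Math.cos(pitch) * Math.sin(yaw);
            double y = -Math.sin(pitch);
            double z = -Math.cos(pitch) * Math.cos(yaw);
            return new Vector(x, y, z);   // already unit length
        }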

    Read the article

  • How to move a sprite automatically using a physicsHandler in Andengine?

    - by shailenTJ
    I use a DigitalOnScreenControl (a knob with a four-directional arrow control) to move an entity, and that entity is bound to a PhysicsHandler:
        physicsHandler.setEntity(sprite);
        sprite.registerUpdateHandler(physicsHandler);
    From the DigitalOnScreenControl, I know which direction I want my sprite to move. Inside its overridden onControlChange function, I call a function animateSprite that checks which direction I chose; based on the direction, I animate my sprite differently.
    PROBLEM: I want to automatically move the sprite to a specific location on the scene, say at coordinates (207, 305). My sprite is at (100, 305), which means it has to move down by 107 pixels. How do I tell the PhysicsHandler to move the sprite down by 107 pixels? My animateSprite method will take care of animating the sprite's downward motion. Thank you for your input!
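    One common pattern, sketched below in AndEngine-style Java, is to give the PhysicsHandler a velocity toward the target from an extra update handler and zero it once the sprite is close enough. The target coordinates, speed and tolerance here are placeholders, not values from the question:

        final float targetX = 207f, targetY = 305f;   // where the sprite should end up
        final float speed = 100f;                     // pixels per second, pick what looks right

        scene.registerUpdateHandler(new IUpdateHandler() {
            @Override
            public void onUpdate(final float pSecondsElapsed) {
                final float dx = targetX - sprite.getX();
                final float dy = targetY - sprite.getY();
                final float dist = (float) Math.sqrt(dx * dx + dy * dy);
                if (dist < 2f) {
                    physicsHandler.setVelocity(0f, 0f);                          // close enough: stop
                } else {
                    physicsHandler.setVelocity(speed * dx / dist, speed * dy / dist); // head toward the target
                }
            }

            @Override
            public void reset() { }
        });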

    Read the article

  • How do I repeat a texture with GLKit?

    - by Synopfab
    I am using GLKit in order to show textures in my project. The code is like this:

        - (void)setTextureImage:(UIImage *)image {
            NSError *error;
            texture = [GLKTextureLoader textureWithCGImage:image.CGImage options:nil error:&error];
            if (error) {
                NSLog(@"Error loading texture from image: %@", error);
            }
        }

        effect.texture2d0.envMode = GLKTextureEnvModeReplace;
        effect.texture2d0.target = GLKTextureTarget2D;
        effect.texture2d0.name = texture.name;

        glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
        glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, 0, self.textureCoordinates);

    Now I want to repeat this texture on a rectangle. Is there any way to use GLKit for this behavior? I've tried to use OpenGL functions in addition to the GLKit ones, but it raises errors:

        glEnable(GL_TEXTURE_2D);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
        glBindTexture(GL_TEXTURE_2D, texture.name);

        2011-11-09 20:10:28.614 **[16309:207] GL ERROR: 0x0500
        2011-11-09 20:10:30.840 **[16309:207] Error loading texture from image: Error Domain=GLKTextureLoaderErrorDomain Code=8 "The operation couldn’t be completed. (GLKTextureLoaderErrorDomain error 8.)" UserInfo=0x68545c0 {GLKTextureLoaderGLErrorKey=1280, GLKTextureLoaderErrorKey=OpenGL error}

    Read the article

  • Should I refer to browser-based games as HTML5 games or Javascript games?

    - by Bane
    First of all, I know that there are alternatives to both HTML5 and Javascript, but I worded the question so generally ("browser-based") because if I had said "HTML5" or "Javascript" games that would already imply an answer to the question. When writing wiki posts or discussing, I usually call these games "HTML5/Javascript" games. They are written in Javascript, using the new HTML5 technology. What is the proper way to call them: HTML5 or Javascript games? I see that most people opt for HTML5, why?

    Read the article

  • OpenGL ES 2.0: Vertex and Fragment Shader for 2D with Transparency

    - by Bunkai.Satori
    Could I kindly ask for correct examples of OpenGL ES 2.0 vertex and fragment shaders for displaying 2D textured sprites with transparency? I have fairly simple shaders that display textured polygon pairs, but transparency is not applied despite the following:
    The texture map contains transparency information.
    Blending is enabled:
        glEnable(GL_BLEND);
        glEnable(GL_DEPTH_TEST);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    My vertex shader:

        uniform mat4 uOrthoProjection;
        uniform vec3 Translation;
        attribute vec4 Position;
        attribute vec2 TextureCoord;
        varying vec2 TextureCoordOut;

        void main() {
            gl_Position = uOrthoProjection * (Position + vec4(Translation, 0));
            TextureCoordOut = TextureCoord;
        }

    My fragment shader:

        varying mediump vec2 TextureCoordOut;
        uniform sampler2D Sampler;

        void main() {
            gl_FragColor = texture2D(Sampler, TextureCoordOut);
        }
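    For what it's worth, the fragment shader above already passes the texture's alpha straight through, so the usual suspects are GL state and draw order rather than the shader code. A small sketch of typical 2D sprite blend state in Java (Android's GLES20 bindings, purely as an illustration of the call order):

        // Typical state for alpha-blended 2D sprites: enable blending and either
        // disable the depth test or stop transparent draws writing depth,
        // then draw the sprites back to front.
        GLES20.glEnable(GLES20.GL_BLEND);
        GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
        GLES20.glDisable(GLES20.GL_DEPTH_TEST);   // or: GLES20.glDepthMask(false);
        // ... issue the sprite draw calls here ...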

    Read the article

  • Should we always prefer OpenGL ES version 2 over version 1.x

    - by Shivan Dragon
    OpenGL ES version 2 goes a long way towards changing the development paradigm that was established with OpenGL ES 1.x: you have shaders which you can chain together to apply various effects/transforms to your elements, the projection and transformation matrices work completely differently, and so on. I've seen a lot of online tutorials and blogs that simply say "ditch version 1.x, use version 2, that's the way to go". Even Android's documentation says to "use version 2 as it may prove faster than 1.x". Now, I've also read a book on OpenGL ES (which was rather good, but I'm not going to name it here because I don't want to give the impression that I'm sneaking in publicity). The author treated only OpenGL ES 1.x for 80% of the book, and then at the end only listed the differences in version 2 and said something like "if OpenGL ES 1 does what you need, there's no need to switch to version 2, as it's only going to overcomplicate your code. Version 2 was changed a lot to facilitate newer, fancier stuff, but if you don't need it, version 1.x is fine". My question, then, is whether that last statement is right: should I always use OpenGL ES version 1.x if I don't need version-2-only features? I'd sure like to do that, because I find coding in version 1.x a lot simpler than version 2, but I'm afraid that my apps might become obsolete faster for using an older version.

    Read the article
