Search Results

Search found 21563 results on 863 pages for 'game testing'.


  • Processing component pools problem - Entity Subsystem

    - by mani3xis
    Architecture description: I'm designing an entity system and I've run into many problems. I'm trying to keep it data-oriented and as efficient as possible. My components are POD structures (arrays of bytes, to be precise) allocated in homogeneous pools. Each pool has a ComponentDescriptor, which just contains the component name, field types and field names. An Entity is just a pointer to an array of components (where the address acts as the entity ID). An EntityPrototype contains the entity name and an array of component names. Finally, a Subsystem (System or Processor) works on the component pools.

    Actual problem: Some components depend on others (Model, Sprite, PhysicalBody and Animation all depend on the Transform component), which causes a lot of problems when it comes to processing them. For example, let's define some entities using [T]ransform, [S]prite, [P]hysicalBody and [H]ealth:

        Tank:   Transform, Sprite, PhysicalBody
        BgTree: Transform, Sprite
        House:  Transform, Sprite, Health

    If I create 4 Tanks, 5 BgTrees and 2 Houses, my pools will look like:

        TTTTTTTTTTT // Transform pool
        SSSSSSSSSSS // Sprite pool
        PPPP        // PhysicalBody pool
        HH          // Health pool

    There is no way to process them using plain indices. I've spent 3 days working on this and I still don't have any ideas. In previous designs the TransformComponent was bound to the entity, but that wasn't a good idea either. Can you give me some advice on how to process these? Or maybe I should change the overall design? Maybe I should create pools of entities (pools of component pools), but I guess that would be a nightmare for CPU caches. Thanks
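
    One standard answer to the "pools of different lengths" problem (a sketch of the sparse-set idea, not something stated in the original post; all names here are invented) is to give each pool a sparse entity-to-slot table. Each system then iterates its own densest array linearly, cache-friendly, and joins the dependent component by entity ID:

        #include <cstdint>
        #include <vector>

        // Components stay POD, as in the post.
        struct Transform    { float x, y, rot; };
        struct PhysicalBody { float vx, vy, mass; };

        // Hypothetical sparse-set pool: components stay tightly packed for
        // iteration, while a sparse entity->slot table lets dependent systems
        // join pools by entity ID.
        template <typename T>
        struct SparsePool {
            std::vector<T>        dense;   // packed component data
            std::vector<uint32_t> owner;   // dense slot -> entity id
            std::vector<int32_t>  slotOf;  // entity id -> dense slot, -1 if absent

            void add(uint32_t e, const T& c) {
                if (e >= slotOf.size()) slotOf.resize(e + 1, -1);
                slotOf[e] = static_cast<int32_t>(dense.size());
                dense.push_back(c);
                owner.push_back(e);
            }
            bool has(uint32_t e) const { return e < slotOf.size() && slotOf[e] >= 0; }
            T&   get(uint32_t e)       { return dense[static_cast<size_t>(slotOf[e])]; }
        };

        // The physics system walks its own (smallest) pool linearly and joins
        // the Transform pool through the sparse table, so the P pool never
        // needs to be index-aligned with the T pool.
        void physicsStep(SparsePool<PhysicalBody>& bodies,
                         SparsePool<Transform>& transforms, float dt) {
            for (size_t i = 0; i < bodies.dense.size(); ++i) {
                uint32_t e = bodies.owner[i];
                if (transforms.has(e)) {
                    transforms.get(e).x += bodies.dense[i].vx * dt;
                    transforms.get(e).y += bodies.dense[i].vy * dt;
                }
            }
        }

    The extra indirection costs one lookup per join, but each pool stays dense, so this avoids the "pools of component pools" layout feared above.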

    Read the article

  • Where can I find free or paid "next-gen" 3D assets?

    - by Valmond
    Usually I buy 3D assets from sites like turbosquid.com or similar. My problem is that I have lately implemented glow, normal maps, specular (and specular power) maps and reflection maps, and I can't find any models that use those techniques. So where can I find or buy "next-gen" assets (at least models/items with a normal map)? I have checked for similar posts, but those I found are about free-only assets, 2D, or 'ordinary' 3D, so I hope this is not a duplicate.

    Read the article

  • Handling different screen densities in Android Devices?

    - by DevilWithin
    Well, I know there are plenty of different-sized screens on devices that run Android. The SDK I code with deploys to all major desktop platforms and Android. I am aware I must take special care to handle the different screen sizes and densities, but I just had an idea that should work in theory, and my question is exactly about that method: how could it FAIL? What I do is use an ortho camera of the same size for all devices, with possible tweaks; wouldn't that guarantee the proper positioning of all elements on all devices? We can assume everything is drawn in OpenGL ES and input handling is converted to the proper camera coordinates. If you need me to improve the question, please tell me.
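
    For what it's worth, the main way a fixed-size ortho camera fails is aspect ratio: two devices with different width/height ratios will either stretch the world or show different amounts of it. A hedged sketch of the usual letterboxing counter-measure (C++; the names and the 800x450 virtual size are illustrative, not from the post):

        #include <algorithm>

        struct Viewport { int x, y, w, h; };

        // Scale the fixed virtual camera uniformly and centre it on the
        // physical screen; the result is meant for glViewport. Bars appear
        // on the sides instead of the scene distorting.
        Viewport fitVirtualCamera(int screenW, int screenH,
                                  float virtualW = 800.0f, float virtualH = 450.0f) {
            float scale = std::min(screenW / virtualW, screenH / virtualH);
            int w = static_cast<int>(virtualW * scale);
            int h = static_cast<int>(virtualH * scale);
            return { (screenW - w) / 2, (screenH - h) / 2, w, h };
        }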

    Read the article

  • Object detection in bitmap JavaScript canvas

    - by fallenAngel
    I want to detect clicks on canvas elements which are drawn using paths. So far I have stored each element's path in a JavaScript data structure, and on a click I check the click coordinates against each element's coordinates. Re-testing every element's path on each hit would be inefficient when there are a lot of elements. I believe there must be an algorithm for this kind of coordinate search; can anyone help me with this?
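
    The usual trick is a spatial index so a click only tests the few elements near it. A hedged sketch of a uniform grid (spatial hash) in C++; all names are illustrative, and the precise point-in-path test still runs, but only on the returned candidates:

        #include <cstdint>
        #include <unordered_map>
        #include <vector>

        struct AABB { float x, y, w, h; };

        // Bucket each element's bounding box into fixed-size cells; a click
        // then tests only the elements sharing its cell.
        struct HitGrid {
            float cellSize;
            std::unordered_map<long long, std::vector<int>> buckets; // cell -> ids

            explicit HitGrid(float cell) : cellSize(cell) {}

            static long long key(int cx, int cy) {
                return (static_cast<long long>(cx) << 32) | static_cast<uint32_t>(cy);
            }

            void insert(int id, const AABB& b) {
                for (int cy = int(b.y / cellSize); cy <= int((b.y + b.h) / cellSize); ++cy)
                    for (int cx = int(b.x / cellSize); cx <= int((b.x + b.w) / cellSize); ++cx)
                        buckets[key(cx, cy)].push_back(id);
            }

            // Candidates under a click; run the exact path test on these only.
            const std::vector<int>* query(float px, float py) const {
                auto it = buckets.find(key(int(px / cellSize), int(py / cellSize)));
                return it == buckets.end() ? nullptr : &it->second;
            }
        };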

    Read the article

  • What features does D3D have that OpenGL does not (and vice versa)?

    - by Tom
    Are there any feature comparisons between Direct3D 11 and the newest OpenGL versions? Simply put, Direct3D 11 introduced these main features (taken from Wikipedia):

        Tessellation
        Multithreaded rendering
        Compute shaders
        Increased texture cache

    Now I'm wondering: how do the newest versions of OpenGL cope with these features? And since I have a feeling that there are features Direct3D lacks from OpenGL's side, what are those?

    Read the article

  • Why don't Normal maps in tangent space have a single blue color?

    - by seahorse
    Normal maps are predominantly blue in color because the z component maps to blue, and since normals point out of the surface in the z direction, blue is the dominant channel. If the above is true, then shouldn't normal maps be just one color, i.e. blue, without any other shades (not even shades of blue)? Since by definition the tangent plane at any point is perpendicular to the normal, the tangent-space normal should always point along Z (the blue direction), with no X (red component) or Y (green component). Thus the normal map (since it is a "normal map") should contain only the color of that normal, pure blue (Z = blue component = 1, R = 0, G = 0), with no shades in between. But normal maps are not like that; they have gradients of shades in them. Why is this so?
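
    A sketch of the encoding that resolves this: a tangent-space map stores each texel's deviation from the geometric normal, remapped per channel from [-1, 1] to [0, 1], and on any detailed surface most texels deviate at least slightly, which is exactly where the other shades come from (C++; the function name is mine):

        #include <cstdint>

        struct RGB8 { uint8_t r, g, b; };

        // Standard [-1,1] -> [0,1] remap used by tangent-space normal maps.
        // Only a texel whose normal is exactly (0,0,1), perfectly flat
        // relative to the surface, encodes to the flagship blue; any bump
        // detail tilts x/y away from zero and shifts the color.
        RGB8 encodeNormal(float x, float y, float z) {
            return { static_cast<uint8_t>((x * 0.5f + 0.5f) * 255.0f),
                     static_cast<uint8_t>((y * 0.5f + 0.5f) * 255.0f),
                     static_cast<uint8_t>((z * 0.5f + 0.5f) * 255.0f) };
        }

        // encodeNormal(0.0f, 0.0f, 1.0f)  -> (127, 127, 255)  flat texel
        // encodeNormal(0.3f, 0.1f, 0.95f) -> (165, 140, 248)  slightly bumped texel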

    Read the article

  • What is UVIndex and how do I use it in OpenGL?

    - by Delta
    I am a noob in OpenGL ES 2.0 (for WebGL) and I'm trying to draw a simple model I've made with a 3D tool and exported to .fbx format. I've been able to draw models that only have a vertex buffer, an index buffer for the vertices, a normal buffer and a texture coordinate buffer, but this model also has a "UVIndex" and I'm not sure where I am supposed to put it. My code looks like this:

        GL.bindBuffer(GL.ARRAY_BUFFER, this.Model.House.VertexBuffer);
        GL.vertexAttribPointer(this.Shader.TextureAndLighting.Attribute["vPosition"], 3, GL.FLOAT, false, 0, 0);

        GL.bindBuffer(GL.ARRAY_BUFFER, this.Model.House.NormalBuffer);
        GL.vertexAttribPointer(this.Shader.TextureAndLighting.Attribute["vNormal"], 3, GL.FLOAT, false, 0, 0);

        GL.bindBuffer(GL.ARRAY_BUFFER, this.Model.House.TexCoordBuffer);
        GL.vertexAttribPointer(this.Shader.TextureAndLighting.Attribute["TexCoord"], 2, GL.FLOAT, false, 0, 0);

        GL.bindBuffer(GL.ELEMENT_ARRAY_BUFFER, this.Model.House.IndexBuffer);

        GL.bindTexture(GL.TEXTURE_2D, this.Texture.HTex1);
        GL.activeTexture(GL.TEXTURE0);

        GL.drawElements(GL.TRIANGLES, this.Model.House.IndexBuffer.Length, GL.UNSIGNED_SHORT, 0);

    But my model renders totally incorrectly, and I think it has to do with the fact that I am ignoring this "UVIndex" in the .fbx file. Since I've never drawn a model that uses a UVIndex, I really have no clue what to do with it. This is the JSON file containing the model's data: http://pastebin.com/raw.php?i=G294TVmz
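
    If the exporter followed the usual FBX layout (an assumption; check the JSON), UVs carry their own index list: UVIndex supplies one UV per polygon corner, independent of the position indices (FBX calls this ByPolygonVertex / IndexToDirect). WebGL's drawElements supports only a single index buffer, so the standard fix is to expand vertices so each unique (position index, UV index) pair becomes one vertex. A hedged C++ sketch of that un-indexing step, all names invented:

        #include <map>
        #include <utility>
        #include <vector>

        struct Expanded {
            std::vector<float>    positions;  // xyz per final vertex
            std::vector<float>    uvs;        // uv  per final vertex
            std::vector<unsigned> indices;    // single index buffer for drawElements
        };

        // pos: packed xyz, uv: packed uv; posIdx/uvIdx: one entry per corner.
        Expanded expand(const std::vector<float>& pos,
                        const std::vector<float>& uv,
                        const std::vector<unsigned>& posIdx,
                        const std::vector<unsigned>& uvIdx) {
            Expanded out;
            std::map<std::pair<unsigned, unsigned>, unsigned> seen;
            for (size_t i = 0; i < posIdx.size(); ++i) {
                auto key = std::make_pair(posIdx[i], uvIdx[i]);
                auto it = seen.find(key);
                if (it == seen.end()) {
                    // First time this (position, uv) combination appears:
                    // emit a brand new vertex.
                    unsigned v = static_cast<unsigned>(out.positions.size() / 3);
                    out.positions.insert(out.positions.end(),
                                         pos.begin() + 3 * key.first,
                                         pos.begin() + 3 * key.first + 3);
                    out.uvs.insert(out.uvs.end(),
                                   uv.begin() + 2 * key.second,
                                   uv.begin() + 2 * key.second + 2);
                    it = seen.emplace(key, v).first;
                }
                out.indices.push_back(it->second);
            }
            return out;
        }

    Normals, if indexed separately too, get folded into the key the same way.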

    Read the article

  • How to manage my model

    - by Christophe Debove
    In my model I have a list of classes: Player, NonPlayerCharacter, Monster, Item, NonMovableItem, etc. With AndEngine I have a sprite for each piece of my model. How can I manage the relationship between my model's classes and the graphical elements, and what degree of abstraction is recommended for my problem? One sprite per model object, one model object per sprite, or n to n? For example, if I implement drag & drop, do I have to abstract away the Sprite class? As another example, is a map a list of sprites or a list of elements of my model?

    Read the article

  • Foreach loop with 2d array of objects

    - by Jacob Millward
    I'm using a 2D array of objects to store data about tiles, or "blocks", in my game world. I initialise the array, fill it with data and then attempt to invoke the draw method of each object:

        foreach (Block block in blockList)
        {
            block.Draw(spriteBatch);
        }

    I end up with an exception being thrown: "Object reference is not set to an instance of an object". What have I done wrong?

    EDIT: This is the code used to define the array:

        Block[,] blockList;

    Then:

        blockList = new Block[screenRectangle.Width, screenRectangle.Height];

        // Fill with dummy data
        for (int x = 0; x <= screenRectangle.Width / texture.Width; x++)
        {
            for (int y = 0; y <= screenRectangle.Height / texture.Width; y++)
            {
                if (y >= screenRectangle.Height / (texture.Width * 2))
                {
                    blockList[x, y] = new Block(1, new Rectangle(x * 16, y * 16, texture.Width, texture.Height), texture);
                }
                else
                {
                    blockList[x, y] = new Block(0, new Rectangle(x * 16, y * 16, texture.Width, texture.Height), texture);
                }
            }
        }
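
    A note on the likely cause, inferred from the numbers in the snippet rather than stated in the original: the array is allocated with one slot per screen pixel (screenRectangle.Width x screenRectangle.Height), but the fill loops only run up to Width / texture.Width and Height / texture.Width, so the vast majority of slots are never assigned. C# initialises reference-type array elements to null, and foreach over a 2D array visits every slot, so block.Draw(...) eventually runs on a null entry. Sizing the array to the tile counts the loops actually use (and note the y loop divides by texture.Width where texture.Height was probably intended), or skipping null entries in the foreach, would avoid the exception.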

    Read the article

  • Implementing top view physics using box2D

    - by humbleBee
    How can top-view physics games be done in Box2D? One idea I have is to set the linear velocity of an object manually, or to alter the linear and angular damping as my object moves over different surfaces. For example, if my object is on a wet surface it'll have less linear damping, and if it is on a rough surface it'll have more damping. To see if my object has fallen over an edge, I'll try to use an AABB and check whether it's still inside, or manually test object.x > boundary.x, etc. Is there any better way?
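
    The damping idea maps directly onto the real Box2D API. A hedged sketch follows: the Surface enum, the constants and the fall-off handling are invented, but b2Body::SetLinearDamping, b2Body::GetPosition and b2AABB are actual Box2D calls/types (header path for Box2D 2.4+; older releases use <Box2D/Box2D.h>):

        #include <box2d/box2d.h>

        enum class Surface { Ice, Grass, Mud };

        // Made-up damping constants; tune per game.
        float dampingFor(Surface s) {
            switch (s) {
                case Surface::Ice:   return 0.2f; // slides a long way
                case Surface::Grass: return 2.0f;
                default:             return 6.0f; // mud stops the body quickly
            }
        }

        // Called each step: adjust damping from the surface under the body,
        // and flag the vehicle once it leaves the playable bounds.
        void updateVehicle(b2Body* body, Surface under, const b2AABB& playArea) {
            body->SetLinearDamping(dampingFor(under));

            const b2Vec2& p = body->GetPosition();
            bool fellOff = p.x < playArea.lowerBound.x || p.x > playArea.upperBound.x ||
                           p.y < playArea.lowerBound.y || p.y > playArea.upperBound.y;
            if (fellOff) {
                // e.g. mark for respawn; bodies must not be destroyed mid-step
            }
        }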

    Read the article

  • Blender: 3D model from guide images

    - by Stefan
    In an effort to learn the Blender interface, which is confusing to say the least, I've chosen to model something from reference pictures easily found on the web. The problem is that I can't (and won't) get perfect "right", "front" and "top" pictures. Blender only allows you to see the background pictures when in orthographic mode, and only from right|front|top, which doesn't help me. How do I proceed to model from non-perfect guide images?

    Read the article

  • UnrealScript: splitting a string

    - by burntsugar
    Note: this is a repost from Stack Overflow; I have only just discovered this site. :)

    I need to split a string in UnrealScript, in the same way that Java's split function works; for instance, return the string "foo" as an array of characters. I have tried to use the SplitString function:

        array<string> SplitString(string Source, optional string Delimiter = ",", optional bool bCullEmpty)

    described as "Wrapper for splitting a string into an array of strings using a single expression" at http://udn.epicgames.com/Three/UnrealScriptFunctions.html, but it returns the entire string:

        simulated function wordDraw()
        {
            local String inputString;
            local string whatwillitbe;
            local int b;
            local int x;
            local array<String> letterArray;

            inputString = "trolls";
            letterArray = SplitString(inputString,, false);
            for (x = 0; x < letterArray.Length; x++)
            {
                whatwillitbe = letterArray[x];
                `log('it will be '@whatwillitbe);
                b = letterArray.Length;
                `log('letterarray length is '@b);
                `log('letter number '@x);
            }
        }

    Output: b returns 1, and whatwillitbe returns "trolls". However, I would like b to return 6 and whatwillitbe to return each character individually. I have had a few answers proposed, but I would still like to properly understand how the SplitString function works. For instance, if the Delimiter parameter is optional, what does the function use as a delimiter by default?
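
    For what it's worth, the default is visible in the signature quoted above: Delimiter = ",", so SplitString("trolls",, false) has no comma to split on and returns the whole string as a single element, which is why b is 1. SplitString separates on a substring rather than into characters; to get individual letters, the usual UnrealScript approach is a loop over Len(inputString) pulling one character at a time with Mid(inputString, i, 1).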

    Read the article

  • Matrix.CreateBillboard centre rotation problem

    - by Chris88
    I'm having an issue with Matrix.CreateBillboard and a textured quad, where the centre axis seems to be positioned incorrectly relative to the quad, which rotates around a centre point. I'm using a BasicEffect called quadEffect. Drawing the quad shape:

        Left = Vector3.Cross(Normal, Up);
        Vector3 uppercenter = (Up * height / 2) + origin;
        LowerLeft = uppercenter + (Left * width / 2);
        LowerRight = uppercenter - (Left * width / 2);
        UpperLeft = LowerLeft - (Up * height);
        UpperRight = LowerRight - (Up * height);

    where height and width are float values passed in (it draws a square). The draw method:

        quadEffect.View = camera.view;
        quadEffect.Projection = camera.projection;
        quadEffect.World = Matrix.CreateBillboard(Origin, camera.cameraPosition, Vector3.Up, camera.cameraDirection);

        GraphicsDevice.BlendState = BlendState.Additive;
        foreach (EffectPass pass in quadEffect.CurrentTechnique.Passes)
        {
            pass.Apply();
            GraphicsDevice.DrawUserIndexedPrimitives<VertexPositionNormalTexture>(
                PrimitiveType.TriangleList, Vertices, 0, 4, Indexes, 0, 2);
        }
        GraphicsDevice.BlendState = BlendState.Opaque;

    In the screenshots below I draw the image at Vector3(32f, 0f, 32f). They show the position of the quad in relation to the red cross, which marks where it should be drawn; the quad rotates around the red cross position:

        http://i.imgur.com/YwRYj.jpg
        http://i.imgur.com/ZtoHL.jpg
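
    A likely explanation, inferred from the snippets rather than confirmed in the post: the vertex positions already bake origin into the quad, and Matrix.CreateBillboard then rotates everything around its first argument, Origin, on top of that. If the two are the same point, the quad ends up offset from the cross by its own half-extent and orbits it as the camera moves. The usual fix is to build the quad centred on Vector3.Zero and let the billboard matrix place it at Origin.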

    Read the article

  • Ray tracing concerns: efficient data structures and photon mapping

    - by Grieverheart
    I'm trying to build a simple ray tracer for specific target scenes; an example of such a scene can be seen below. I'm wondering which accelerating data structure would be most efficient in this case, since all objects are touching, but on the other hand the scene is uniform. The objects in my ray tracer are stored as collections of triangles, so I also have access to individual triangles. Also, when trying to find the bounding box of the scene, how should infinite planes be handled? Should one instead use the viewing frustum to calculate the bounding box?

    A few other questions I have are about photon mapping. I've read the original paper by Jensen and much more material. In the compact photon data structure they introduce, photon power is stored as 4 chars, which to my understanding is 3 chars for color and 1 for flux. But I don't understand how 1 char is enough to store a flux of the order of 1/n, where n is the number of photons (I'm also a bit confused about flux vs power). The other question about photon mapping is whether it would be more efficient in my case to store photons per object (or even per triangle) instead of using a balanced kd-tree. And the same bounding-box question applies to photon mapping: how should one find a bounding box from the point of view of the light when infinite planes are involved?
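
    On the 4-char question: in Jensen's compact photon, power is not "3 chars for color and 1 for flux" but Ward's shared-exponent RGBE format, i.e. three 8-bit mantissas plus one 8-bit exponent shared by all channels. The shared exponent is what lets a per-photon flux of order 1/n survive 8-bit storage. A small encoding sketch (C++; the function name is mine):

        #include <cmath>
        #include <cstdint>

        // Ward's RGBE pack: r, g, b are scaled by a common power of two so
        // the largest channel lands in [128, 256); the exponent is stored
        // biased by 128.
        void packRGBE(float r, float g, float b, uint8_t rgbe[4]) {
            float m = std::fmax(r, std::fmax(g, b));
            if (m < 1e-32f) {                             // effectively zero power
                rgbe[0] = rgbe[1] = rgbe[2] = rgbe[3] = 0;
                return;
            }
            int e;
            float scale = std::frexp(m, &e) * 256.0f / m; // m = mantissa * 2^e
            rgbe[0] = static_cast<uint8_t>(r * scale);
            rgbe[1] = static_cast<uint8_t>(g * scale);
            rgbe[2] = static_cast<uint8_t>(b * scale);
            rgbe[3] = static_cast<uint8_t>(e + 128);      // shared, biased exponent
        }

        // Decoding multiplies each mantissa by 2^(E - 128 - 8), so even
        // fluxes of order 1e-6 keep roughly 8 bits of relative precision.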

    Read the article

  • How To Smoothly Animate From One Camera Position To Another

    - by www.Sillitoy.com
    The question is basically self-explanatory. I have a scene with many cameras and I'd like to smoothly switch from one to another. I am not looking for a cross-fade effect, but for the camera to move and rotate in order to reach the next camera's point of view, and so on. To this end I have tried the following code:

        firstCamera.transform.position.x = Mathf.Lerp(firstCamera.transform.position.x, nextCamera.transform.position.x, Time.deltaTime * smooth);
        firstCamera.transform.position.y = Mathf.Lerp(firstCamera.transform.position.y, nextCamera.transform.position.y, Time.deltaTime * smooth);
        firstCamera.transform.position.z = Mathf.Lerp(firstCamera.transform.position.z, nextCamera.transform.position.z, Time.deltaTime * smooth);
        firstCamera.transform.rotation.x = Mathf.Lerp(firstCamera.transform.rotation.x, nextCamera.transform.rotation.x, Time.deltaTime * smooth);
        firstCamera.transform.rotation.z = Mathf.Lerp(firstCamera.transform.rotation.z, nextCamera.transform.rotation.z, Time.deltaTime * smooth);
        firstCamera.transform.rotation.y = Mathf.Lerp(firstCamera.transform.rotation.y, nextCamera.transform.rotation.y, Time.deltaTime * smooth);

    But the result is actually not that good.
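
    A note on why this tends to misbehave (standard Unity behaviour, not from the post): transform.rotation is a quaternion, and lerping its x/y/z fields independently does not produce a valid interpolated orientation. The engine-supported way is to interpolate whole values, e.g. transform.position = Vector3.Lerp(a, b, t) together with transform.rotation = Quaternion.Slerp(qa, qb, t), driving t explicitly from 0 to 1 over the transition; feeding Time.deltaTime * smooth as t each frame eases out but never formally arrives at the target camera.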

    Read the article

  • Not getting desired results with SSAO implementation

    - by user1294203
    After having implemented deferred rendering, I tried my luck with an SSAO implementation using this tutorial. Unfortunately, I'm not getting anything that looks like SSAO; you can see my result below. There is some weird pattern forming, and there is no occlusion shading where there needs to be (i.e. between the objects and on the ground). The shaders I implemented follow.

    Vertex shader:

        #version 330 core
        uniform mat4 invProjMatrix;

        layout(location = 0) in vec3 in_Position;
        layout(location = 2) in vec2 in_TexCoord;

        noperspective out vec2 pass_TexCoord;
        smooth out vec3 viewRay;

        void main(void){
            pass_TexCoord = in_TexCoord;
            viewRay = (invProjMatrix * vec4(in_Position, 1.0)).xyz;
            gl_Position = vec4(in_Position, 1.0);
        }

    Fragment shader:

        #version 330 core
        uniform sampler2D DepthMap;
        uniform sampler2D NormalMap;
        uniform sampler2D noise;
        uniform vec2 projAB;
        uniform ivec3 noiseScale_kernelSize;
        uniform vec3 kernel[16];
        uniform float RADIUS;
        uniform mat4 projectionMatrix;

        noperspective in vec2 pass_TexCoord;
        smooth in vec3 viewRay;

        layout(location = 0) out float out_AO;

        vec3 CalcPosition(void){
            float depth = texture(DepthMap, pass_TexCoord).r;
            float linearDepth = projAB.y / (depth - projAB.x);
            vec3 ray = normalize(viewRay);
            ray = ray / ray.z;
            return linearDepth * ray;
        }

        mat3 CalcRMatrix(vec3 normal, vec2 texcoord){
            ivec2 noiseScale = noiseScale_kernelSize.xy;
            vec3 rvec = texture(noise, texcoord * noiseScale).xyz;
            vec3 tangent = normalize(rvec - normal * dot(rvec, normal));
            vec3 bitangent = cross(normal, tangent);
            return mat3(tangent, bitangent, normal);
        }

        void main(void){
            vec2 TexCoord = pass_TexCoord;
            vec3 Position = CalcPosition();
            vec3 Normal = normalize(texture(NormalMap, TexCoord).xyz);

            mat3 RotationMatrix = CalcRMatrix(Normal, TexCoord);
            int kernelSize = noiseScale_kernelSize.z;

            float occlusion = 0.0;
            for(int i = 0; i < kernelSize; i++){
                // Get sample position
                vec3 sample = RotationMatrix * kernel[i];
                sample = sample * RADIUS + Position;

                // Project and bias sample position to get its texture coordinates
                vec4 offset = projectionMatrix * vec4(sample, 1.0);
                offset.xy /= offset.w;
                offset.xy = offset.xy * 0.5 + 0.5;

                // Get sample depth
                float sample_depth = texture(DepthMap, offset.xy).r;
                float linearDepth = projAB.y / (sample_depth - projAB.x);

                if(abs(Position.z - linearDepth) < RADIUS){
                    occlusion += (linearDepth <= sample.z) ? 1.0 : 0.0;
                }
            }
            out_AO = 1.0 - (occlusion / kernelSize);
        }

    I draw a full-screen quad and pass in the depth and normal textures. Normals are in RGBA16F, with the alpha channel reserved for the AO factor in the blur pass. I store depth in a non-linear depth buffer (32F) and recover the linear depth using:

        float linearDepth = projAB.y / (depth - projAB.x);

    where projAB.x = z_f / (z_f - z_n) and projAB.y = (z_f * z_n) / (z_n - z_f). These are derived from the glm::perspective (gluPerspective) matrix; z_n and z_f are the near and far clip distances.

    As described in the link I posted at the top, the method creates samples in a hemisphere with a higher distribution close to the centre. It then uses random vectors from a texture to rotate the hemisphere randomly around the Z direction and finally orients it along the normal at the given pixel. Since the result is noisy, a blur pass follows the SSAO pass.

    Anyway, my position reconstruction doesn't seem to be wrong, since I also tried doing the same thing with the position read from a texture instead of being reconstructed. I also tried playing with the radius, the noise texture size, the number of samples and different kinds of texture formats, with no luck. For some reason, when I change the radius, nothing changes. Does anyone have any suggestions? What could be going wrong?

    Read the article

  • LWJGL or OpenGL: doubling pixels

    - by Philippe Paré
    I'm working in Java with LWJGL and trying to double all my pixels: I want to draw in an area of 800x450 and then stretch the whole frame image to the full 1600x900 pixels without it getting blurred. I can't figure out how to do that in Java; everything I find is in C++. A hint would be great! Thanks a lot.

    EDIT: I've tried drawing to a texture created in OpenGL by attaching it to the framebuffer, but I can't find a way to use glGenTextures() in Java, so this is not working. I also thought about using a shader, but then I would not be able to draw only in the smaller region.
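
    The GL side of this is language-neutral; a sketch in C++-style GL follows, and the same calls exist under the same names in LWJGL (e.g. GL11.glGenTextures and GL30.glBlitFramebuffer; older LWJGL versions take an IntBuffer instead of returning an int). The idea: render the frame into an 800x450 FBO texture, then blit it to the default framebuffer with GL_NEAREST so the pixels double crisply instead of blurring. Function names and the hardcoded sizes are illustrative:

        #include <GL/glew.h>

        GLuint fbo = 0, tex = 0;

        // One-time setup: a low-res colour target attached to an FBO.
        void createLowResTarget() {
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 800, 450, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, NULL);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

            glGenFramebuffers(1, &fbo);
            glBindFramebuffer(GL_FRAMEBUFFER, fbo);
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                   GL_TEXTURE_2D, tex, 0);
            glBindFramebuffer(GL_FRAMEBUFFER, 0);
        }

        // Per frame: draw the scene at 800x450 into fbo, then magnify 2x.
        void presentDoubled() {
            glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
            glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
            glBlitFramebuffer(0, 0, 800, 450,     // source rect
                              0, 0, 1600, 900,    // destination rect
                              GL_COLOR_BUFFER_BIT, GL_NEAREST);
        }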

    Read the article

  • Rendering oily/polluted water?

    - by Fraser
    Any shader wizards out there have an idea of how to achieve an oily/polluted water effect, similar to this: Ideally, the water would not be uniformly oily; instead, the oil could be generated from some source (such as a polluting drain from a chemical plant) and then diffuse throughout the water body. My thought for this part would be to keep an "oil map" as a 2D texture that determines the density of oil at each point on the water surface. It would diffuse and move naturally with the water velocity at that point (I have a wave-particle simulation for dynamic waves, and am already doing something similar for foam on the water surface). However, I'm not sure how physically correct that would be, since oil might not move at the same velocity as the water. And I have no idea how to make all those trippy colors :-). Thoughts?

    Read the article

  • Using MVC with a retained mode renderer

    - by David Gouveia
    I am using a retained-mode renderer similar to the display lists in Flash. In other words, I have a scene graph data structure called the Stage, to which I add the graphical primitives I would like to see rendered, such as images, animations and text. For simplicity I'll refer to them as sprites.

    Now I'm implementing an architecture which is becoming very similar to MVC, but I feel that instead of having to create View classes, the sprites already behave pretty much like views (except for not being explicitly connected to the model). And since the model is only changed through the controller, I could simply update the view together with the model in the controller, as in the example below.

    Example 1:

        class Controller
        {
            Model model;
            Sprite view;

            void TeleportTo(Vector2 position)
            {
                model.Position = view.Position = position;
            }
        }

    The alternative, I think, would be to create View classes that wrap the sprites, make the model observable, and make the view react to changes in the model. This seems like a lot of extra work and boilerplate code, and I'm not seeing the benefits if I'm just going to have one view per controller.

    Example 2:

        class Controller
        {
            Model model;
            View view;

            void TeleportTo(Vector2 position)
            {
                model.Position = position;
            }
        }

        class View
        {
            Model model;
            Sprite sprite;

            View()
            {
                model.PropertyChanged += UpdateView;
            }

            void UpdateView()
            {
                sprite.Position = model.Position;
            }
        }

    So, how is MVC, or more specifically the View, usually implemented when using a retained-mode renderer? And is there any reason why I shouldn't stick with example 1?

    Read the article

  • How do I get the child of a unique parent in ActionScript?

    - by Koen
    My question is about targeting a child with a unique parent. For example, let's say I have a box people can move, called box_mc, and 3 platforms it can jump on, called:

        Platform_1
        Platform_2
        Platform_3

    All of these platforms have a child element called hit:

        Platform_1.hit
        Platform_2.hit
        Platform_3.hit

    I use an array and a for each statement to detect whether box_mc hits one of the platforms' children:

        var obj_arr:Array = [Platform_1, Platform_2, Platform_3];

        for each (obj in obj_arr)
        {
            if (box_mc.hitTestObject(obj.hit))
            {
                trace(obj + " " + obj.hit);
                box_mc.y = obj.hit.y - box_mc.height;
            }
        }

    obj seems to output the unique parent it is hitting, but obj.hit outputs "hit", so my theory is that the change of y is being applied to all the children called hit on the stage. Would it be possible to only detect the child of that specific parent?
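
    A clarification that may help (standard ActionScript behaviour, not stated in the post): obj.hit resolves to the hit child of that particular obj, a distinct DisplayObject per platform; the trace simply prints each one's instance name, which happens to be "hit" for all three. So box_mc.y is only adjusted relative to the child of the platform currently being tested, not every hit on the stage, and the property lookup is already parent-specific.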

    Read the article

  • Direct2D Transform

    - by James
    I have a beginner question about Direct2D transforms. I have a 20 x 10 bitmap that I would like to draw in different orientations. To start, I would like to draw it vertically, with a destination rectangle of, say:

        (left, top, right, bottom) = (300, 300, 310, 320)

    The bitmap is wider than it is tall (20 x 10), but when I draw it vertically it will appear taller than it is wide (10 x 20). I know that I can use a rotation matrix like so:

        m_pRenderTarget->SetTransform(
            D2D1::Matrix3x2F::Rotation(
                90.0f,
                D2D1::Point2F(<center of shape>))
        );

    But when I use this method to rotate my shape, the destination rectangle I pass in is still wider than it is tall; it might look something like this:

        (left, top, right, bottom) = (280, 290, 300, 300)

    That destination rectangle is 20 x 10, yet the bitmap appears on the screen as 10 x 20, so I can't simply inspect the destination rectangle in the debugger and compare it to:

        (left, top, right, bottom) = (300, 300, 310, 320)

    Is there any simple way to say "I want to rotate the image so that it is rendered to exactly this destination rectangle after the rotation"? In this case, I would like to say "please rotate the bitmap so that it appears on the screen at this location":

        (left, top, right, bottom) = (300, 300, 310, 320)

    If I can't do that, is there any way to find out the 10 x 20 destination rectangle where the bitmap is actually being rendered to the screen?
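
    One way to get the "rotate into exactly this rect" behaviour (a hedged sketch; SetTransform, DrawBitmap and the D2D1 helpers are real Direct2D API, but the wrapper function and its reasoning are mine): rotate about the target rect's centre, and hand DrawBitmap a pre-rotation rect of the bitmap's own extents centred on that same point, so the 90-degree turn lands the bitmap exactly on the 10 x 20 target.

        #include <d2d1.h>

        // Draw 'bitmap' (natural size w x h, here 20 x 10) so that after a
        // 90-degree rotation it fills exactly the rect 'target'.
        void DrawRotatedInto(ID2D1RenderTarget* rt, ID2D1Bitmap* bitmap,
                             const D2D1_RECT_F& target, float w, float h) {
            D2D1_POINT_2F c = D2D1::Point2F((target.left + target.right) * 0.5f,
                                            (target.top + target.bottom) * 0.5f);
            // Pre-rotation rect: the bitmap's own extents, centred on the target.
            D2D1_RECT_F preRotate = D2D1::RectF(c.x - w * 0.5f, c.y - h * 0.5f,
                                                c.x + w * 0.5f, c.y + h * 0.5f);
            rt->SetTransform(D2D1::Matrix3x2F::Rotation(90.0f, c));
            rt->DrawBitmap(bitmap, &preRotate);
            rt->SetTransform(D2D1::Matrix3x2F::Identity());
        }

        // Usage for the example in the question:
        //   DrawRotatedInto(m_pRenderTarget, bmp,
        //                   D2D1::RectF(300.f, 300.f, 310.f, 320.f), 20.f, 10.f);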

    Read the article

  • 3D Paint tool in Maya

    - by Joris
    Hey everyone, could someone help me out with this question? When I am trying to texture objects in Maya (for example a barrel), I want to texture them with a brush, but when I try to use the 3D Paint Tool everything turns black, even when I have an image file selected. Also, the resolution of the models and textures in Maya is very, very low, even though the texture image is very high quality. Can someone please help me? Thanks so much for helping.

    Read the article

  • OpenGL problem with FBO integer texture and color attachment

    - by Grieverheart
    In my simple renderer I have 2 FBOs: one that contains diffuse, normals, instance ID and depth, in that order, and one that I use to store the SSAO result. The textures I use for the first FBO are RGB8, RGBA16F, R32I and GL_DEPTH_COMPONENT32F for the depth; for the second FBO I use an R16F texture.

    My rendering process is to first render everything I mentioned to the first FBO, then bind the depth and normals textures for reading for the SSAO pass and write to the second FBO. After that, I bind the second FBO's texture for reading in my blur shader and bind the first FBO for writing: what I intend to do is write the blurred SSAO value to the alpha component of the normals texture.

    Here is where the problems start. First of all, I use shading language 3.3, which my graphics card does support. I manage outputs in my shaders using layout(location = #). Now, the normals texture should be bound to color attachment 1, but when I use 1, the shader seems to write to my diffuse texture, which should be in color attachment 0; when I instead use layout(location = 0), it gets correctly written to my normals texture. Besides this, my instance ID texture also gets reset after running the blur shader, which is weird, because if I use a float texture and write instanceID / nInstances to it, the texture doesn't get reset after the blur shader has run.

    Here is how I prepare my first FBO:

        bool CGBuffer::Init(unsigned int WindowWidth, unsigned int WindowHeight){
            //Create FBO
            glGenFramebuffers(1, &m_fbo);
            glBindFramebuffer(GL_DRAW_FRAMEBUFFER, m_fbo);

            //Create gbuffer and depth buffer textures
            glGenTextures(GBUFF_NUM_TEXTURES, &m_textures[0]);
            glGenTextures(1, &m_depthTexture);

            //Prepare gbuffer
            for(unsigned int i = 0; i < GBUFF_NUM_TEXTURES; i++){
                glBindTexture(GL_TEXTURE_2D, m_textures[i]);
                if(i == GBUFF_TEXTURE_TYPE_NORMAL)
                    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, WindowWidth, WindowHeight, 0, GL_RGBA, GL_FLOAT, NULL);
                else if(i == GBUFF_TEXTURE_TYPE_DIFFUSE)
                    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, WindowWidth, WindowHeight, 0, GL_RGB, GL_FLOAT, NULL);
                else if(i == GBUFF_TEXTURE_TYPE_ID)
                    glTexImage2D(GL_TEXTURE_2D, 0, GL_R32I, WindowWidth, WindowHeight, 0, GL_RED_INTEGER, GL_INT, NULL);
                else{
                    std::cout << "Error in FBO initialization" << std::endl;
                    return false;
                }
                glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, m_textures[i], 0);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
            }

            //Prepare depth buffer
            glBindTexture(GL_TEXTURE_2D, m_depthTexture);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, WindowWidth, WindowHeight, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
            glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_depthTexture, 0);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);

            GLenum DrawBuffers[] = {GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2};
            glDrawBuffers(GBUFF_NUM_TEXTURES, DrawBuffers);

            GLenum Status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
            if(Status != GL_FRAMEBUFFER_COMPLETE){
                std::cout << "FB error, status 0x" << std::hex << Status << std::endl;
                return false;
            }

            //Restore default framebuffer
            glBindFramebuffer(GL_FRAMEBUFFER, 0);
            return true;
        }

    where I use an enum defined as:

        enum GBUFF_TEXTURE_TYPE{
            GBUFF_TEXTURE_TYPE_DIFFUSE,
            GBUFF_TEXTURE_TYPE_NORMAL,
            GBUFF_TEXTURE_TYPE_ID,
            GBUFF_NUM_TEXTURES
        };

    Am I missing some kind of restriction? Does the color attachment of the FBO's textures somehow get reset? I.e. I'm using a resize function which resizes the textures of the FBO, but should I perhaps call glFramebufferTexture2D again too?

    EDIT: Here is the shader in question:

        #version 330 core

        uniform sampler2D aoSampler;
        uniform vec2 TEXEL_SIZE; // x = 1/res x, y = 1/res y
        uniform bool use_blur;

        noperspective in vec2 TexCoord;

        layout(location = 0) out vec4 out_AO;

        void main(void){
            if(use_blur){
                float result = 0.0;
                for(int i = -1; i < 2; i++){
                    for(int j = -1; j < 2; j++){
                        vec2 offset = vec2(TEXEL_SIZE.x * i, TEXEL_SIZE.y * j);
                        result += texture(aoSampler, TexCoord + offset).r;
                        // -0.004 because the texture seems to be a bit displaced
                    }
                }
                out_AO = vec4(vec3(0.0), result / 9);
            }
            else
                out_AO = vec4(vec3(0.0), texture(aoSampler, TexCoord).r);
        }

    Read the article

  • Wheel rotation, to change velocity of vehicle

    - by Lewis
    I update the velocity of my vehicle in my world class's update method like so:

        // Updated at 60 fps; this computes the per-second velocity,
        // divide by 60 for one frame.
        [v setVelocity: ((2 * 3.14 * 100 * (wheel.getRotationValue / 360) / 30)) * gameSpeed];

    wheel.getRotationValue returns the rotation value, which is worked out like this:

        - (void)ccTouchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
        {
            UITouch *touch = [touches anyObject];
            CGPoint location = [touch locationInView:[touch view]];
            location = [[CCDirector sharedDirector] convertToGL:location];

            if (CGRectContainsPoint(wheel.boundingBox, location))
            {
                CGPoint firstLocation = [touch previousLocationInView:[touch view]];
                CGPoint location = [touch locationInView:[touch view]];

                CGPoint touchingPoint = [[CCDirector sharedDirector] convertToGL:location];
                CGPoint firstTouchingPoint = [[CCDirector sharedDirector] convertToGL:firstLocation];

                CGPoint firstVector = ccpSub(firstTouchingPoint, wheel.position);
                CGFloat firstRotateAngle = -ccpToAngle(firstVector);
                CGFloat previousTouch = CC_RADIANS_TO_DEGREES(firstRotateAngle);

                CGPoint vector = ccpSub(touchingPoint, wheel.position);
                CGFloat rotateAngle = -ccpToAngle(vector);
                CGFloat currentTouch = CC_RADIANS_TO_DEGREES(rotateAngle);

                float limit = 0.5;
                rotationValue += (currentTouch - previousTouch) * limit;
            }
            touching = YES;
        }

    Say I steer the vehicle to the far right of the screen and then want to move it to the far left. It won't start moving left until the rotation value passes 0 degrees again (the wheel is back in its centre position) and is dragged past that value. Is there any way to change the code above so that movement on the wheel is recognised instantly and updates the velocity of v instantly too?

    Read the article

  • Isometric Screen View to World View

    - by Sleepy Rhino
    I am having trouble working out the math to transform screen coordinates to grid coordinates. The code below is how far I have got, but it is totally wrong; any help or resources to fix this issue would be great. I've had a complete mental block with this for some reason.

        private Point ScreenToIso(int mouseX, int mouseY)
        {
            int offsetX = WorldBuilder.STARTX;
            int offsetY = WorldBuilder.STARTY;
            Vector2 startV = new Vector2(offsetX, offsetY);

            int mapX = offsetX - mouseX;
            int mapY = offsetY - mouseY + (WorldBuilder.tileHeight / 2);
            mapY = -1 * (mapY / WorldBuilder.tileHeight);
            mapX = (mapX / WorldBuilder.tileHeight) + mapY;

            return new Point(mapX, mapY);
        }
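
    For reference, a hedged sketch of the standard 2:1 diamond-isometric transform pair (C++; it assumes the mouse position has already had WorldBuilder.STARTX/STARTY subtracted, and the forward and inverse mappings must match whatever the renderer uses):

        struct GridPoint { int x, y; };

        // Forward: grid cell -> screen pixel (tile anchored at the map origin).
        GridPoint isoToScreen(int mapX, int mapY, int tileW, int tileH) {
            return { (mapX - mapY) * tileW / 2, (mapX + mapY) * tileH / 2 };
        }

        // Inverse: screen pixel (already offset-corrected) -> grid cell.
        GridPoint screenToIso(int sx, int sy, int tileW, int tileH) {
            float hx = sx / (tileW * 0.5f);
            float hy = sy / (tileH * 0.5f);
            return { static_cast<int>((hy + hx) * 0.5f),
                     static_cast<int>((hy - hx) * 0.5f) };
        }

    Substituting the forward formulas into the inverse gives back (mapX, mapY) exactly, which is a quick way to sanity-check any variant against your renderer.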

    Read the article
