Search Results

Search found 19338 results on 774 pages for 'game loop'.


  • HLSL Pixel Shader that does palette swap

    - by derrace
    I have implemented a simple pixel shader which can replace a particular colour in a sprite with another colour. It looks something like this: sampler input : register(s0); float4 PixelShaderFunction(float2 coords: TEXCOORD0) : COLOR0 { float4 colour = tex2D(input, coords); if(colour.r == sourceColours[0].r && colour.g == sourceColours[0].g && colour.b == sourceColours[0].b) return targetColours[0]; return colour; } What I would like to do is have the function take in 2 textures, a default table, and a lookup table (both same dimensions). Grab the current pixel, and find the location XY (coords) of the matching RGB in the default table, and then substitute it with the colour found in the lookup table at XY. I have figured out how to pass the textures from C# into the function, but I am not sure how to find the coords in the default table by matching the colour. Could someone kindly assist? Thanks in advance.
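
    A hedged sketch of one common alternative (not taken from the post): instead of searching the default table for a matching RGB every frame, many palette-swap shaders bake a palette index into the sprite itself (for example in the red channel) and use that index as a texture coordinate into a lookup strip. The palette sampler and the 256x1 strip below are assumptions for illustration, not part of the original code.

        sampler input : register(s0);    // sprite whose red channel holds a palette index in 0..1
        sampler palette : register(s1);  // assumed 256x1 lookup strip bound from C#

        float4 PixelShaderFunction(float2 coords : TEXCOORD0) : COLOR0
        {
            float4 src = tex2D(input, coords);
            // Use the stored index to fetch the replacement colour from the strip.
            float4 swapped = tex2D(palette, float2(src.r, 0.5f));
            // Keep the sprite's original alpha so transparency is unaffected.
            swapped.a = src.a;
            return swapped;
        }

    This avoids any per-pixel search entirely; the cost of the swap is a single extra texture fetch.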

    Read the article

  • Getting .mesh & .skeleton from Blender2Ogre export

    - by Songbreaker
    I have downloaded the blender2ogre add-on from this source: http://code.google.com/p/blender2ogre/ and have created a simple mesh with a walking animation (similar to the gingerbreadman tutorial). My question is, whenever I want to export the project, I can only see the .scene export format. There is no option whatsoever to export as .mesh and .skeleton. Also, how can I export the walking animation separately? In other words, if my project has a couple more animations, how can I separate those during export?

    Read the article

  • CreateRenderTarget returns 0x80070057 in big surface resolution

    - by senggen
    I have created an SLI merged desktop of three 1920x1080 monitors, so the desktop resolution is 5760x1080. A 0x80070057 error occurs while calling CreateRenderTarget to create the RT_Surface: IDirect3DSurface9* _render_surface; HRESULT hr = _device->CreateRenderTarget( _desktop_width * 2, _desktop_height + 1, D3DFMT_A8R8G8B8, D3DMULTISAMPLE_NONE, 0, TRUE, &_render_surface, NULL); It works OK with a per-monitor desktop resolution of 1024x768, where the total resolution is 3072x768. http://msdn.microsoft.com/en-us/library/windows/desktop/bb174361(v=vs.85).aspx says: If the method succeeds, the return value is D3D_OK. If the method fails, the return value can be one of the following: D3DERR_NOTAVAILABLE, D3DERR_INVALIDCALL, D3DERR_OUTOFVIDEOMEMORY, E_OUTOFMEMORY. There is no mention of 0x80070057. HRESULT: 0x80070057 (2147942487) Name: E_INVALIDARG Description: An invalid parameter was passed to the returning function. Somebody please help me.
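
    One thing worth ruling out (a guess, not confirmed by the post): _desktop_width * 2 on a 5760-pixel-wide desktop requests an 11520-wide surface, which can exceed the adapter's MaxTextureWidth/MaxTextureHeight caps and make Direct3D reject the call with E_INVALIDARG. A minimal sketch for querying the limits before creating the surface:

        // Query the adapter limits before asking for an oversized render target.
        D3DCAPS9 caps;
        if (SUCCEEDED(_device->GetDeviceCaps(&caps)))
        {
            UINT requestedWidth  = _desktop_width * 2;    // 11520 on a 5760-wide desktop
            UINT requestedHeight = _desktop_height + 1;
            if (requestedWidth > caps.MaxTextureWidth || requestedHeight > caps.MaxTextureHeight)
            {
                // The request is larger than the hardware allows; clamp the size
                // or split the surface into tiles instead of letting the call fail.
            }
        }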

    Read the article

  • Does DirectX implement Triple Buffering?

    - by Asik
    As AnandTech put it best in this 2009 article: In render ahead, frames cannot be dropped. This means that when the queue is full, what is displayed can have a lot more lag. Microsoft doesn't implement triple buffering in DirectX, they implement render ahead (from 0 to 8 frames with 3 being the default). The major difference in the technique we've described here is the ability to drop frames when they are outdated. Render ahead forces older frames to be displayed. Queues can help smoothness and stuttering as a few really quick frames followed by a slow frame end up being evened out and spread over more frames. But the price you pay is in lag (the more frames in the queue, the longer it takes to empty the queue and the older the frames are that are displayed). As I understand it, DirectX "Swap Chain" is merely a render ahead queue, i.e. buffers cannot be dropped; the longer the chain, the greater the input latency. At the same time, I find it hard to believe that the most widely used graphics API would not implement such fundamental functionality correctly. Is there a way to get proper triple buffered vertical synchronisation in DirectX?

    Read the article

  • About Alpha blending sprites in Direct3D9

    - by ambrozija
    I have a Direct3D9 application that is rendering ID3DXSprites. The problem I am experiencing is best described in this situation: I have a texture that is totally opaque. On top of it I draw a rectangle filled with solid color and an alpha of 128. On top of the rectangle I have text that is totally opaque. I draw all of this and get the resulting image through a GetRenderTarget call. The problem is that on the resulting image, in the area where the transparent rectangle is, I have semi-transparent pixels. It is not a problem that the rectangle is transparent; the problem is that the resulting image is. The question is how to set up the blending so that in this situation I don't get transparent pixels in the resulting image. I use the sprite with D3DXSPRITE_ALPHABLEND, which sets the device state to D3DBLEND_SRCALPHA and D3DBLEND_INVSRCALPHA. I tried a couple of combinations of SetRenderState, like D3DBLEND_SRCALPHA, D3DBLEND_DESTALPHA etc., but couldn't make it work. Thanks.
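
    A sketch of one possible state setup (an assumption, not taken from the post): Direct3D 9 can blend the colour and alpha channels with different factors, so the colour can still use SRCALPHA/INVSRCALPHA while the render target's alpha channel stays opaque. Because ID3DXSprite sets blend states in Begin, these calls may need to go after Begin, or the sprite can be started with D3DXSPRITE_DONOTMODIFY_RENDERSTATE.

        // Usual colour blend, but accumulate solid alpha into the render target.
        device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
        device->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_SRCALPHA);
        device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);

        device->SetRenderState(D3DRS_SEPARATEALPHABLENDENABLE, TRUE);
        device->SetRenderState(D3DRS_SRCBLENDALPHA,  D3DBLEND_ONE);
        device->SetRenderState(D3DRS_DESTBLENDALPHA, D3DBLEND_INVSRCALPHA);

    With a destination alpha that starts at 1.0, the blended alpha works out to srcA + (1 - srcA) = 1.0, so the area under the semi-transparent rectangle reads back as opaque.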

    Read the article

  • DirectWrite Producing Strange Artifacts?

    - by smoth190
    I've written the basis of my UI system around Direct2D. I like it because it's fast and easy to use (even if I had to do some messy work to get it to work with DirectX11). However, I notice that when using DirectWrite I'm getting strange problems with my text. As you can see, the e is a little screwed up, and it overall looks a little bumpy. This only happens with certain fonts at certain sizes, and with certain arrangements of letters. This particular example is Verdana at size 16.0. Can I fix this? It's pretty annoying to change all my words and fonts because of this problem.
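
    If the artifacts come from ClearType being applied to an intermediate or transparent target, one knob worth trying (an assumption about the setup, since the rendering code isn't shown) is the render target's text antialiasing mode:

        // Grayscale antialiasing avoids ClearType's per-channel filtering, which can
        // look fringed or bumpy at small sizes or on certain targets.
        d2dRenderTarget->SetTextAntialiasMode(D2D1_TEXT_ANTIALIAS_MODE_GRAYSCALE);

    Here d2dRenderTarget stands in for whatever ID2D1RenderTarget the UI system draws with.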

    Read the article

  • How can I pass an array of floats to the fragment shader using textures?

    - by James
    I want to map out a 2D array of depth values for the fragment shader to test against when creating shadows. I want to be able to copy a float array into the GPU, but using large uniform arrays causes segfaults in OpenGL, so that is not an option. I tried texturing, but the best I got was to use GL_DEPTH_COMPONENT: glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 512, 512, 0, GL_DEPTH_COMPONENT, GL_FLOAT, smap); This doesn't work because it stores depth components (0.0 - 1.0), which I don't want, because I have no idea how to calculate them from the depth value produced by the light source's MVP matrix multiplied by the coordinate of each vertex. Is there any way to store and access large 2D arrays of floats in OpenGL?
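
    A minimal sketch of one approach (assuming an OpenGL 3.x context and a float array called depths that you computed yourself): upload the raw floats through a single-channel floating-point internal format instead of GL_DEPTH_COMPONENT, so no 0.0-1.0 depth conversion is applied.

        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        // GL_R32F keeps one unclamped 32-bit float per texel, so arbitrary values survive the upload.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, 512, 512, 0, GL_RED, GL_FLOAT, depths);
        // No mipmaps and no filtering: the shader can read back exact values (e.g. with texelFetch).
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);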

    Read the article

  • Normals vs Normal maps

    - by KaiserJohaan
    I am using the Assimp asset importer (http://assimp.sourceforge.net/lib_html/index.html) to parse 3D models. So far, I've simply pulled out the normal vectors that are defined for each vertex in my meshes. Yet I have also found various tutorials on normal maps... As I understand it, with normal maps the normal vectors are stored in each texel of a normal map, and you pull these out of the normal texture in the shader. Why are there two ways to get the normals, which one is considered best practice, and why?

    Read the article

  • Can I use DllImport/PInvoke in libraries loaded as Assets in Unity Free?

    - by sebf
    I am interested in utilising third-party libraries in Unity Free. I know Unity can use managed libraries as Assets, but only the Pro version supports using native libraries (DllImport within scripts). This thread, however, suggests that it is possible to import DLLs in the free version. I would like to utilise native libraries (as a hobbyist I cannot afford Pro), but want to do it the supported way so I don't have to worry about Unity 'fixing' this hole, if that is what it is. Is there any supported way to use native libraries with Unity Free (i.e. does that thread suggest a workaround, or is it a 'bug')? Is it supported to use DllImport/PInvoke in libraries loaded as assets (could I create a wrapper myself)?

    Read the article

  • Island Generation Library

    - by thatguy
    Can anyone recommend a tile map generator (written in Java is a plus) where one can control some land types? For example: islands, large continents, a single large continent, archipelago, etc. I've been reading through many posts on the subject, and it almost seems like many people are just rolling their own. Before creating my own, I'm wondering if there's already an open source implementation that I might not be finding. If not, it seems like using Perlin noise is a popular choice. Some articles I've been reading: http://simblob.blogspot.com/2010/01/simple-map-generation.html Generate islands/continents with simplex noise https://sites.google.com/site/minecraftlandgenerator/

    Read the article

  • Transparency in XNA-4 primitives

    - by Shashwat
    I'm using XNA 4 with Visual Studio 2010. I'm trying to create a simple 3D world with walls and doors in which the user is free to roam around. A wall is just a rectangle which is currently being rendered with four vertices using triangle strips. But to create a door, I'd have to split it into three rectangles as shown in the figure, or four quadrilaterals if I want the following door style. It becomes more complex with multiple doors on the same wall, or if I have windows. Is there any shorter way to handle this? I am looking for something that will just make the wall transparent wherever I want. I found a solution but am facing a problem here

    Read the article

  • Center directional light shadow to the cameras eye

    - by Caesar
    I'm currently drawing my directional light shadow using this view and projection: XMFLOAT3 dir((float)pitch, (float)yaw, (float)roll); XMFLOAT3 center(0.0f, 0.0f, 0.0f); XMVECTOR lightDir = XMLoadFloat3(&dir); XMVECTOR lightPos = radius * lightDir; XMVECTOR targetPos = XMLoadFloat3(&center); XMVECTOR up = XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f); XMMATRIX V = XMMatrixLookAtLH(lightPos, targetPos, up); // This is the view // Transform bounding sphere to light space. XMFLOAT3 sphereCenterLS; XMStoreFloat3(&sphereCenterLS, XMVector3TransformCoord(targetPos, V)); // Ortho frustum in light space encloses scene. float l = sphereCenterLS.x - radius; float b = sphereCenterLS.y - radius; float n = sphereCenterLS.z - radius; float r = sphereCenterLS.x + radius; float t = sphereCenterLS.y + radius; float f = sphereCenterLS.z + radius; XMMATRIX P = XMMatrixOrthographicOffCenterLH(l, r, b, t, n, f); // This is the projection This works perfectly if the center of my scene is at 0.0, 0.0, 0.0. What I would like to do is move the center of the scene relative to the camera's position. How can I do that?
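
    A sketch of one way to do it (the camera accessors are hypothetical names, since they are not in the post): derive the target from the camera instead of the world origin, for instance the camera position pushed along its look direction, and build the light view around that point.

        // Centre the shadow volume on a point in front of the camera rather than on (0, 0, 0).
        XMVECTOR camPos     = XMLoadFloat3(&cameraPosition);   // assumed available from the camera
        XMVECTOR camForward = XMLoadFloat3(&cameraForward);    // assumed to be normalized
        XMVECTOR targetPos  = XMVectorAdd(camPos, XMVectorScale(camForward, radius));

        // Same convention as before: the light sits radius units away along lightDir.
        XMVECTOR lightPos = XMVectorAdd(targetPos, XMVectorScale(lightDir, radius));
        XMMATRIX V = XMMatrixLookAtLH(lightPos, targetPos, up);

    The orthographic bounds can stay exactly as they are, because sphereCenterLS is recomputed from targetPos through the new view matrix.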

    Read the article

  • runtime error: invalid memory address or nil pointer dereference

    - by Klink
    I want to learn OpenGL 3.0 with golang. But when i try to compile some code, i get many errors. package main import ( "os" //"errors" "fmt" //gl "github.com/chsc/gogl/gl33" //"github.com/jteeuwen/glfw" "github.com/go-gl/gl" "github.com/go-gl/glfw" "runtime" "time" ) var ( width int = 640 height int = 480 ) var ( points = []float32{0.0, 0.8, -0.8, -0.8, 0.8, -0.8} ) func initScene() { gl.Init() gl.ClearColor(0.0, 0.5, 1.0, 1.0) gl.Enable(gl.CULL_FACE) gl.Viewport(0, 0, 800, 600) } func glfwInitWindowContext() { if err := glfw.Init(); err != nil { fmt.Fprintf(os.Stderr, "glfw_Init: %s\n", err) glfw.Terminate() } glfw.OpenWindowHint(glfw.FsaaSamples, 1) glfw.OpenWindowHint(glfw.WindowNoResize, 1) if err := glfw.OpenWindow(width, height, 0, 0, 0, 0, 32, 0, glfw.Windowed); err != nil { fmt.Fprintf(os.Stderr, "glfw_Window: %s\n", err) glfw.CloseWindow() } glfw.SetSwapInterval(1) glfw.SetWindowTitle("Title") } func drawScene() { for glfw.WindowParam(glfw.Opened) == 1 { gl.Clear(gl.COLOR_BUFFER_BIT) vertexShaderSrc := `#version 120 attribute vec2 coord2d; void main(void) { gl_Position = vec4(coord2d, 0.0, 1.0); }` vertexShader := gl.CreateShader(gl.VERTEX_SHADER) vertexShader.Source(vertexShaderSrc) vertexShader.Compile() fragmentShaderSrc := `#version 120 void main(void) { gl_FragColor[0] = 0.0; gl_FragColor[1] = 0.0; gl_FragColor[2] = 1.0; }` fragmentShader := gl.CreateShader(gl.FRAGMENT_SHADER) fragmentShader.Source(fragmentShaderSrc) fragmentShader.Compile() program := gl.CreateProgram() program.AttachShader(vertexShader) program.AttachShader(fragmentShader) program.Link() attribute_coord2d := program.GetAttribLocation("coord2d") program.Use() //attribute_coord2d.AttribPointer(size, typ, normalized, stride, pointer) attribute_coord2d.EnableArray() attribute_coord2d.AttribPointer(0, 3, false, 0, &(points[0])) //gl.DrawArrays(gl.TRIANGLES, 0, len(points)) gl.DrawArrays(gl.TRIANGLES, 0, 3) glfw.SwapBuffers() inputHandler() time.Sleep(100 * time.Millisecond) } } func inputHandler() { glfw.Enable(glfw.StickyKeys) if glfw.Key(glfw.KeyEsc) == glfw.KeyPress { //gl.DeleteBuffers(2, &uiVBO[0]) glfw.Terminate() } if glfw.Key(glfw.KeyF2) == glfw.KeyPress { glfw.SetWindowTitle("Title2") fmt.Println("Changed to 'Title2'") fmt.Println(len(points)) } if glfw.Key(glfw.KeyF1) == glfw.KeyPress { glfw.SetWindowTitle("Title1") fmt.Println("Changed to 'Title1'") } } func main() { runtime.LockOSThread() glfwInitWindowContext() initScene() drawScene() } And after that: panic: runtime error: invalid memory address or nil pointer dereference [signal 0xb code=0x1 addr=0x0 pc=0x41bc6f74] goroutine 1 [syscall]: github.com/go-gl/gl._Cfunc_glDrawArrays(0x4, 0x7f8500000003) /tmp/go-build463568685/github.com/go-gl/gl/_obj/_cgo_defun.c:610 +0x2f github.com/go-gl/gl.DrawArrays(0x4, 0x3, 0x0, 0x45bd70) /tmp/go-build463568685/github.com/go-gl/gl/_obj/gl.cgo1.go:1922 +0x33 main.drawScene() /home/klink/Dev/Go/gogl/gopher/exper.go:85 +0x1e6 main.main() /home/klink/Dev/Go/gogl/gopher/exper.go:116 +0x27 goroutine 2 [syscall]: created by runtime.main /build/buildd/golang-1/src/pkg/runtime/proc.c:221 exit status 2

    Read the article

  • GLSL, is it possible to offset vertices based on height map colour?

    - by Rob
    I am attempting to generate some terrain based upon a heightmap. I have generated a 32 x 32 grid and a corresponding height map - In my vertex shader I am trying to offset the position of the Y axis based upon the colour of the heightmap, white vertices being higher than black ones. //Vertex Shader Code #version 330 uniform mat4 modelMatrix; uniform mat4 viewMatrix; uniform mat4 projectionMatrix; uniform sampler2D heightmap; layout (location=0) in vec4 vertexPos; layout (location=1) in vec4 vertexColour; layout (location=3) in vec2 vertexTextureCoord; layout (location=4) in float offset; out vec4 fragCol; out vec4 fragPos; out vec2 fragTex; void main() { // Retreive the current pixel's colour vec4 hmColour = texture(heightmap,vertexTextureCoord); // Offset the y position by the value of current texel's colour value ? vec4 offset = vec4(vertexPos.x , vertexPos.y + hmColour.r, vertexPos.z , 1.0); // Final Position gl_Position = projectionMatrix * viewMatrix * modelMatrix * offset; // Data sent to Fragment Shader. fragCol = vertexColour; fragPos = vertexPos; fragTex = vertexTextureCoord; } However the code I have produced only creates a grid with none of the y vertices higher than any others.
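
    The shader itself looks plausible, so a hedged guess worth ruling out is the application side: if the heightmap sampler uniform is never pointed at a bound, complete texture, texture() returns zero for every vertex and the grid stays flat. A minimal binding sketch (names assumed, not from the post):

        glUseProgram(program);
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, heightmapTexture);
        // Tell the shader which texture unit the 'heightmap' sampler reads from.
        glUniform1i(glGetUniformLocation(program, "heightmap"), 0);
        // The default minification filter expects mipmaps; without them the texture is
        // incomplete and every sample silently returns black.
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);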

    Read the article

  • Automated texture mapping

    - by brandon
    I have a set of seamless tiling textures. I want to be able to take an arbitrary model and create a UV map with these properties: (1) no stretching (all textures tile appropriately so there is no stretching or shearing of the texture); (2) the textures display on the correct axis relative to the model they map to (if you look at the example, you can see some of the letters on the front are tilted; the y axis of the texture should match up with the y axis of the object, and some other faces have upside-down letters too); (3) the texture is as continuous as possible on the surface of the model (if two faces are adjacent, the texture continues on the adjacent face where it left off); (4) the model is closed (all faces are completely enclosed by other faces). A few notes: this mapping will occur before triangulation. I realize there are ways to do this by hand and it's probably a hard problem to automatically map textures in general, but since these textures are seamless and I just need uniform coverage, it seems like an easier problem. I'm looking for an algorithmic approach that I can apply in general, not a tool that does it. What approach would work for this? Is there an existing one? (I assume so.)

    Read the article

  • Resources for 2D rendering using OpenGL?

    - by nightcracker
    I noticed that there is quite a difference between 3D and 2D rendering using OpenGL; the techniques are different - pixel-perfect placement is a lot more desirable, among other things. Are there any good (complete) references on using OpenGL for rendering 2D graphics? There are quite a few "tutorials" around on the net that help you open a window, set up a half-decent environment and draw a sprite, but no really good information on rotation, blending, lighting, drawing order, using the z-buffer, particles, "complex" primitives (circles, stars, cross symbols), ensuring pixel-perfect rendering, instancing and many other staple 2D effects/techniques. Any books, great blogs, anything? Any particularly awesome libraries to read?

    Read the article

  • How to change speed without changing path travelled?

    - by Ben Williams
    I have a ball which is being thrown from one side of a 2D space to the other. The formulas I am using for calculating the ball's position at any one point in time are: x = x0 + vx0*t and y = y0 + vy0*t - 0.5*g*t*t, where g is gravity, t is time, x0 is the initial x position, and vx0 is the initial x velocity. What I would like to do is change the speed of this ball without changing how far it travels. Let's say the ball starts in the lower left corner, moves upwards and rightwards in an arc, and finishes in the lower right corner, and this takes 5s. What I would like to be able to do is change this so it takes 10s or 20s, but the ball still follows the same curve and finishes in the same position. How can I achieve this? All I can think of is manipulating t, but I don't think that's a good idea. I'm sure it's something simple, but my maths is pretty shaky.
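
    A worked sketch of one standard approach (time scaling, which is essentially the "manipulating t" idea done consistently): to stretch a throw that takes T seconds into one that takes kT seconds along the identical curve, divide the initial velocities by k and gravity by k^2:

        x'(t) = x_0 + \frac{v_{x0}}{k}\, t
        y'(t) = y_0 + \frac{v_{y0}}{k}\, t - \tfrac{1}{2}\,\frac{g}{k^2}\, t^2

    Substituting t = k s gives x'(ks) = x_0 + v_{x0} s = x(s) and likewise y'(ks) = y(s), so every point of the original arc is revisited, just k times later. For the example above, making the 5 s throw take 10 s means k = 2: halve vx0 and vy0 and divide g by 4.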

    Read the article

  • Cool examples of procedural pixel shader effects?

    - by Robert Fraser
    What are some good examples of procedural/screen-space pixel shader effects? No code necessary; just looking for inspiration. In particular, I'm looking for effects that are not dependent on geometry or the rest of the scene (would look okay rendered alone on a quad) and are not image processing (don't require a "base image", though they can incorporate textures). Multi-pass or single-pass is fine. Screenshots or videos would be ideal, but ideas work too. Here are a few examples of what I'm looking for (all from the RenderMonkey samples): PS - I'm aware of this question; I'm not asking for a source of actual shader implementations but instead for some inspirational ideas -- and the ones at the NVIDIA Shader Library mostly require a scene or are image processing effects. EDIT: this is an open-ended question and I wish there was a good way to split the bounty. I'll award the rep to the best answer on the last day.

    Read the article

  • Convenience of MySQL over XML

    - by Bonechilla
    Currently I use XML to store specific information needed to correctly load a few things such as a list of specified characters, scenes and music. Furthermore, I use JAXB in combination with standard compression/decompression (ZIP) functionality to store a list of extraneous data. This data is called to add functionality to the character, somewhat like skills in an RPG. Each skill is separated into its own XML file, with a grand list which contains the names of each file with their extensions omitted, zipped in a folder that gets encrypted. At first using XML was working fine; however, as the skill list grows I worry about its stability. I was wondering if I should begin storing the data in MySQL. Originally I planned to simply convert everything to JSON instead of XML, but I think MySQL would possibly be a better move. Can anyone inform me of the key differences and the pros and cons of each? I guess I'm looking for the most convenient way to store the data so that it is easier to operate on. The data is mostly primitives and strings, and the only ArrayList of values I have I can just concatenate into a single field and parse later. Edit: If I am going in the right direction with XML, would it make sense to convert it to JSON and use maybe Kryo or EclipseLink JAXB (MOXy)?

    Read the article

  • Shadows shimmer when camera moves

    - by Chad Layton
    I've implemented shadow maps in my simple block engine as an exercise. I'm using one directional light and using the view volume to create the shadow matrices. I'm experiencing some problems with the shadows shimmering when the camera moves and I'd like to know if it's an issue with my implementation or just an issue with basic/naive shadow mapping itself. Here's a video: http://www.youtube.com/watch?v=vyprATt5BBg&feature=youtu.be Here's the code I use to create the shadow matrices. The commented out code is my original attempt to perfectly fit the view frustum. You can also see my attempt to try clamping movement to texels in the shadow map which didn't seem to make any difference. Then I tried using a bounding sphere instead, also to no apparent effect. public void CreateViewProjectionTransformsToFit(Camera camera, out Matrix viewTransform, out Matrix projectionTransform, out Vector3 position) { BoundingSphere cameraViewFrustumBoundingSphere = BoundingSphere.CreateFromFrustum(camera.ViewFrustum); float lightNearPlaneDistance = 1.0f; Vector3 lookAt = cameraViewFrustumBoundingSphere.Center; float distanceFromLookAt = cameraViewFrustumBoundingSphere.Radius + lightNearPlaneDistance; Vector3 directionFromLookAt = -Direction * distanceFromLookAt; position = lookAt + directionFromLookAt; viewTransform = Matrix.CreateLookAt(position, lookAt, Vector3.Up); float lightFarPlaneDistance = distanceFromLookAt + cameraViewFrustumBoundingSphere.Radius; float diameter = cameraViewFrustumBoundingSphere.Radius * 2.0f; Matrix.CreateOrthographic(diameter, diameter, lightNearPlaneDistance, lightFarPlaneDistance, out projectionTransform); //Vector3 cameraViewFrustumCentroid = camera.ViewFrustum.GetCentroid(); //position = cameraViewFrustumCentroid - (Direction * (camera.FarPlaneDistance - camera.NearPlaneDistance)); //viewTransform = Matrix.CreateLookAt(position, cameraViewFrustumCentroid, Up); //Vector3[] cameraViewFrustumCornersWS = camera.ViewFrustum.GetCorners(); //Vector3[] cameraViewFrustumCornersLS = new Vector3[8]; //Vector3.Transform(cameraViewFrustumCornersWS, ref viewTransform, cameraViewFrustumCornersLS); //Vector3 min = cameraViewFrustumCornersLS[0]; //Vector3 max = cameraViewFrustumCornersLS[0]; //for (int i = 1; i < 8; i++) //{ // min = Vector3.Min(min, cameraViewFrustumCornersLS[i]); // max = Vector3.Max(max, cameraViewFrustumCornersLS[i]); //} //// Clamp to nearest texel //float texelSize = 1.0f / Renderer.ShadowMapSize; //min.X -= min.X % texelSize; //min.Y -= min.Y % texelSize; //min.Z -= min.Z % texelSize; //max.X -= max.X % texelSize; //max.Y -= max.Y % texelSize; //max.Z -= max.Z % texelSize; //// We just use an orthographic projection matrix. The sun is so far away that it's rays are essentially parallel. //Matrix.CreateOrthographicOffCenter(min.X, max.X, min.Y, max.Y, -max.Z, -min.Z, out projectionTransform); } And here's the relevant part of the shader: if (CastShadows) { float4 positionLightCS = mul(float4(position, 1.0f), LightViewProj); float2 texCoord = clipSpaceToScreen(positionLightCS) + 0.5f / ShadowMapSize; float shadowMapDepth = tex2D(ShadowMapSampler, texCoord).r; float distanceToLight = length(LightPosition - position); float bias = 0.2f; if (shadowMapDepth < (distanceToLight - bias)) { return float4(0.0f, 0.0f, 0.0f, 0.0f); } } The shimmer is slightly better if I drastically reduce the view volume but I think that's mostly just because the texels become smaller and it's harder to notice them flickering back and forth. 
    I'd appreciate any insight; I'd very much like to understand what's going on before I try other techniques.
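
    One technique that commonly removes this kind of shimmer (offered as a sketch, not as a diagnosis of the code above): keep the orthographic window a fixed size and move it only in whole shadow-map texels, so the map samples the scene at the same world positions from frame to frame. Note the snapping step is the window size divided by the map size, in world units, rather than the 1 / ShadowMapSize used in the commented-out attempt. Variable names follow the post; Renderer.ShadowMapSize is reused from it.

        // Light-space centre of the camera's bounding sphere.
        Vector3 centerLS = Vector3.Transform(lookAt, viewTransform);

        // World units covered by one shadow-map texel for this window size.
        float worldUnitsPerTexel = diameter / Renderer.ShadowMapSize;

        // Snap the window position to whole texels so it never slides by fractions of a texel.
        centerLS.X -= centerLS.X % worldUnitsPerTexel;
        centerLS.Y -= centerLS.Y % worldUnitsPerTexel;

        float radius = cameraViewFrustumBoundingSphere.Radius;
        Matrix.CreateOrthographicOffCenter(
            centerLS.X - radius, centerLS.X + radius,
            centerLS.Y - radius, centerLS.Y + radius,
            lightNearPlaneDistance, lightFarPlaneDistance, out projectionTransform);

    Because the window size never changes and its position moves in texel-sized steps, the depth samples stop crawling as the camera translates; the sphere fit already keeps the window size stable under rotation.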

    Read the article

  • Greiner-Hormann clipping problem

    - by Belgin
    I have a set of planar polygons in 3D space defined by their vertices in counterclockwise order. Let's define the 'positive face' as the face of the 3D polygon such that, when observed, the vertices appear in counterclockwise order, and the 'negative face' as the face which, when observed, has the vertices appearing in clockwise order. I'm doing a perspective projection of the set of polygons onto a projection polygon defined by the points in this order: (0, h, 0), (0, 0, 0), (w, 0, 0), and (w, h, 0), where w and h are strictly positive integers. The positive face of this projection polygon is oriented towards positive Z, and the camera point is somewhere at (0, 0, d), where d is a strictly negative number. In order to 'clip' the projected polygons into the projection polygon, I'm applying the Greiner-Hormann (PDF) clipping algorithm, which requires that the clipper and the to-be-clipped polygons be in the same order (i.e. clockwise or counterclockwise). My question is the following: how can I determine whether the projected face of the 3D polygon is the negative or the positive one? Meaning, how do I find out if I have to work with the vertices in normal or inverted order for the algorithm to work? I noticed that only if the 3D polygon is facing the projection polygon with its negative face are both of them in the same order (counterclockwise); otherwise, a modification needs to be done. Here is a picture (PNG) that illustrates this. Note that the planes described by a polygon from the set and the projection polygon may not always be parallel.
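
    One reliable way to answer "which face am I seeing after projection" is the signed area of the projected 2D polygon (the shoelace formula): with the projection polygon's counterclockwise convention, a positive area means the order survived projection, and a negative area means the projected vertices come out clockwise and should be reversed before running the clipper. A small sketch with an assumed 2D point type:

        #include <algorithm>
        #include <vector>

        struct Point2 { double x, y; };

        // Shoelace formula: result > 0 for counterclockwise vertex order, < 0 for clockwise.
        double signedArea(const std::vector<Point2>& poly)
        {
            double a = 0.0;
            for (size_t i = 0; i < poly.size(); ++i)
            {
                const Point2& p = poly[i];
                const Point2& q = poly[(i + 1) % poly.size()];
                a += p.x * q.y - q.x * p.y;
            }
            return 0.5 * a;
        }

        // Usage after projecting a polygon onto the (x, y) plane of the projection rectangle:
        // if (signedArea(projected) < 0.0) std::reverse(projected.begin(), projected.end());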

    Read the article

  • android: How to apply pinch zoom and pan to 2D GLSurfaceView

    - by mak_just4anything
    I want to apply a pinch zoom and panning effect to a GLSurfaceView. It is an image editor, so it would not be a 3D object. I tried to implement it using the following links: https://groups.google.com/forum/#!topic/android-developers/EVNRDNInVRU Want to apply pinch and zoom to GLSurfaceView(3d Object) http://www.learnopengles.com/android-lesson-one-getting-started/ These are all links for 3D object rendering. I cannot use an ImageView, as I need to work with OpenGL, so I had to implement it on a GLSurfaceView. Please suggest an approach or any reference links for such an implementation. I need it for 2D only.

    Read the article

  • LWJGL glRotatef() without rotating axes?

    - by Brandon oubiub
    Okay, so I noticed that when you rotate around an axis, say you do this: glRotatef(90.0f, 1.0f, 0.0f, 0.0f); that will rotate things 90 degrees around the x-axis. However, it also rotates the y and z axes. So now the y-axis is pointing in and out of the screen, instead of up and down. So when I try to do stuff like this: glRotatef(90.0f, 1.0f, 0.0f, 0.0f); glRotatef(whatever, 0.0f, 1.0f, 0.0f); glRotatef(whatever2, 0.0f, 0.0f, 1.0f); the rotations around the y and z axes end up not how I want them. I was wondering if there is any way I can rotate just the axes back to their initial position after using glRotatef(), without rotating the object back. Or something like that, just so that when I rotate around the y-axis, it rotates around a vertical axis.
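
    A hedged note on what is happening: glRotatef post-multiplies the current matrix, so the first call issued ends up leftmost in the product and is effectively applied in world space, while later calls act in the frame already rotated by the earlier ones. To keep a spin about the world's vertical axis, issue that rotation before the x-axis tilt (or track the orientation yourself and load it with glMultMatrixf). A minimal sketch:

        // Issued first, so it is applied last to each vertex: a spin about the fixed world y axis.
        glRotatef(yaw, 0.0f, 1.0f, 0.0f);
        // Issued second, so it acts in the object's own (not-yet-spun) frame: the 90-degree tilt.
        glRotatef(90.0f, 1.0f, 0.0f, 0.0f);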

    Read the article

  • Sprite not rotating around its centre after scaling at its centre

    - by asma.farhat
    If I scale a sprite at its centre, then try to rotate it around its centre as well, the rotation does not occur around its centre. If you need to rotate, for example, a scaled ball, the way it currently works is to set the scale center at the top left (0,0), set the scale that you want, then set the rotation center to the middle of the scaled sprite, and then apply the rotation modifier: blaBloBliSprite.setScaleCenter(0, 0); blaBloBliSprite.setScale(0.667f); blaBloBliSprite.setPosition(557, CAMERA_HEIGHT / 2 - blaBloBliSprite.getHeightScaled() / 2); blaBloBliSprite.setRotationCenter(blaBloBliSprite.getWidthScaled() / 2, blaBloBliSprite.getHeightScaled() / 2); But I want to scale the sprite at its centre as well. Is there any way of doing it?

    Read the article

  • If I use my own normal values, should I turn off winding order culling?

    - by Phil
    I've discovered that I managed to program a series of boxes with indexed vertices in such a way that every other triangle (half of each face) has a backwards winding order. As a result, XNA is culling half of them. However, my Vertex objects contain normal data that I have explicitly set, and I am going to implement my own backface culling shortly to reduce the size of the VertexBuffer. Should I turn off winding order culling and manage it myself, or should I make sure the winding order is consistent and let XNA handle it?

    Read the article
