Search Results

Search found 16473 results on 659 pages for 'game logic'.

Page 279 of 659

  • Why do my 512x512 bitmaps look jaggy on Android OpenGL?

    - by Milo Mordaunt
    This is sort of driving me nuts. I've googled and googled and tried everything I can think of, but my sprites still look super blurry and super jaggy. Example here: https://docs.google.com/open?id=0Bx9Gbwnv9Hd2TmpiZkFycUNmRTA If you click through to the actual full-size image you should see what I mean: it's like it's taking an average of every 5x5 pixels or something. The background looks really blurry and blocky, but the ball is the worst. The clouds look all right for some reason, probably because they're mostly transparent. I know the PNGs aren't top notch themselves, but hey, I'm no artist! I would imagine it's a problem with one of the following.
    a. How the PNGs are made. Example sprite (512x512): https://docs.google.com/open?id=0Bx9Gbwnv9Hd2a2RRQlJiQTFJUEE
    b. How my matrices work. These are the relevant parts of the renderer: public void onDrawFrame(GL10 unused) { if(world != null) { dt = System.currentTimeMillis() - endTime; world.update( (float) dt); // Redraw background color GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT); Matrix.setIdentityM(mvMatrix, 0); Matrix.translateM(mvMatrix, 0, 0f, 0f, 0f); world.draw(mvMatrix, mProjMatrix); endTime = System.currentTimeMillis(); } else { Log.d(TAG, "There is no world...."); } } public void onSurfaceChanged(GL10 unused, int width, int height) { GLES20.glViewport(0, 0, width, height); Matrix.orthoM(mProjMatrix, 0, 0, width /2, 0, height /2, -1.f, 1.f); } And this is what each Quad does when draw is called: public void draw(float[] mvMatrix, float[] pMatrix) { Matrix.setIdentityM(mMatrix, 0); Matrix.setIdentityM(mvMatrix, 0); Matrix.translateM(mMatrix, 0, xPos, yPos, 0.f); Matrix.multiplyMM(mvMatrix, 0, mvMatrix, 0, mMatrix, 0); Matrix.scaleM(mvMatrix, 0, scale, scale, 0f); Matrix.rotateM(mvMatrix, 0, angle, 0f, 0f, -1f); GLES20.glUseProgram(mProgram); posAttr = GLES20.glGetAttribLocation(mProgram, "vPosition"); texAttr = GLES20.glGetAttribLocation(mProgram, "aTexCo"); uSampler = GLES20.glGetUniformLocation(mProgram, "uSampler"); int alphaHandle = GLES20.glGetUniformLocation(mProgram, "alpha"); GLES20.glVertexAttribPointer(posAttr, COORDS_PER_VERTEX, GLES20.GL_FLOAT, false, 0, vertexBuffer); GLES20.glVertexAttribPointer(texAttr, 2, GLES20.GL_FLOAT, false, 0, texCoBuffer); GLES20.glEnableVertexAttribArray(posAttr); GLES20.glEnableVertexAttribArray(texAttr); GLES20.glActiveTexture(GLES20.GL_TEXTURE0); GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texture); GLES20.glUniform1i(uSampler, 0); GLES20.glUniform1f(alphaHandle, alpha); mMVMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uMVMatrix"); mPMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uPMatrix"); GLES20.glUniformMatrix4fv(mMVMatrixHandle, 1, false, mvMatrix, 0); GLES20.glUniformMatrix4fv(mPMatrixHandle, 1, false, pMatrix, 0); GLES20.glDrawElements(GLES20.GL_TRIANGLE_STRIP, 4, GLES20.GL_UNSIGNED_SHORT, indicesBuffer); GLES20.glDisableVertexAttribArray(posAttr); GLES20.glDisableVertexAttribArray(texAttr); GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 0); }
    c. How my texture loading/blending/shader setup works. Here is the renderer setup: public void onSurfaceCreated(GL10 unused, EGLConfig config) { // Set the background frame color GLES20.glClearColor(0.0f, 0.0f, 0.0f, 1.0f); GLES20.glDisable(GLES20.GL_DEPTH_TEST); GLES20.glDepthMask(false); GLES20.glBlendFunc(GLES20.GL_ONE, GLES20.GL_ONE_MINUS_SRC_ALPHA); GLES20.glEnable(GLES20.GL_BLEND); GLES20.glEnable(GLES20.GL_DITHER); } Here is the vertex shader: attribute vec4 vPosition; attribute vec2 aTexCo; varying vec2 vTexCo; uniform mat4 uMVMatrix; uniform mat4 uPMatrix; void main() { gl_Position = uPMatrix * uMVMatrix * vPosition; vTexCo = aTexCo; } And here's the fragment shader: precision mediump float; uniform sampler2D uSampler; uniform vec4 vColor; varying vec2 vTexCo; varying float alpha; void main() { vec4 color = texture2D(uSampler, vec2(vTexCo)); gl_FragColor = color; if(gl_FragColor.a == 0.0) { discard; } } This is how textures are loaded: private int loadTexture(int resource) { int[] texture = new int[1]; BitmapFactory.Options opts = new BitmapFactory.Options(); opts.inScaled = false; Bitmap temp = BitmapFactory.decodeResource(context.getResources(), resource, opts); GLES20.glGenTextures(1, texture, 0); GLES20.glActiveTexture(GLES20.GL_TEXTURE0); GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texture[0]); GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR); GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR); GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, temp, 0); GLES20.glGenerateMipmap(GLES20.GL_TEXTURE_2D); GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 0); temp.recycle(); return texture[0]; } I'm sure I'm doing about 20,000 things wrong, so I'm really sorry if the problem is blindingly obvious... The test device is a Galaxy Note running a Jelly Bean custom ROM, if that matters at all. So the screen resolution is 1280x800, which means... the background is 1024x1024, so yeah, it might be a little blurry, but it shouldn't be made of Lego. Thank you so much, any answer at all would be appreciated.
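
    A hedged guess at two likely culprits, neither confirmed by the thread: GLSurfaceView defaults to an RGB565 surface on many devices, which turns smooth gradients into exactly this kind of blocky banding, and the loader generates mipmaps but sets GL_TEXTURE_MIN_FILTER to GL_LINEAR, so they are never sampled. A minimal sketch of both fixes (MyRenderer is a placeholder name):

        // Request a true 32-bit surface instead of the RGB565 default.
        GLSurfaceView view = new GLSurfaceView(context);
        view.setEGLContextClientVersion(2);
        view.setEGLConfigChooser(8, 8, 8, 8, 0, 0);
        view.getHolder().setFormat(PixelFormat.RGBA_8888);
        view.setRenderer(new MyRenderer());

        // And in loadTexture: actually use the mipmaps that glGenerateMipmap builds.
        GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER,
                GLES20.GL_LINEAR_MIPMAP_LINEAR);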

    Read the article

  • Mixing XNA and Silverlight gives weird graphics

    - by Mech0z
    I'm making a small 3D game which is built as a Silverlight and XNA app, but when I draw the sprites the graphics become all weird. All my primitive types are rendered correctly, but my 3D models are just weird. My draw method looks like this when Silverlight is set to draw: private void OnDraw(object sender, GameTimerEventArgs e) { // Render the Silverlight controls using the UIElementRenderer elementRenderer.Render(); // Clear the screen to a solid color SharedGraphicsDeviceManager.Current.GraphicsDevice.Clear(Color.CornflowerBlue); switch (gameState) { case GameState.ChooseStarter: TextBlockStatus.Text = "Find Starting Player"; break; case GameState.PlaceBrick: TextBlockPlayer.Text = (playerTurn == PlayerTurn.PlayerOne) ? "Player One" : "Player Two"; TextBlockState.Text = "Place Brick"; foreach (IGraphicObject obj in _3dObjects) { obj.Draw(cameraPosition, e); } break; case GameState.GiveBrick: TextBlockState.Text = "Give Brick"; break; } spriteBatch.Begin(); // Using the texture from the UIElementRenderer, // draw the Silverlight controls to the screen spriteBatch.Draw(elementRenderer.Texture, cameraProjection, Color.White); spriteBatch.End(); } This gives me this output. If I comment the SpriteBatch lines out, I get the correct output, except the Silverlight text is of course not shown. I am not entirely sure what to look for, except that zero vector I am giving to the SpriteBatch; but if that's the source of the problem, I have no idea what I am supposed to set it to, especially since it's a 2D vector.
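
    A hedged sketch of the usual cause, not a confirmed diagnosis: SpriteBatch.Begin() silently changes device state (depth testing off, alpha blending on, different sampler states), so any 3D geometry drawn after a sprite pass inherits sprite-friendly settings. Restoring the 3D states before drawing the models generally cures this class of artifact:

        // Reset the states SpriteBatch changed before drawing 3D models.
        var device = SharedGraphicsDeviceManager.Current.GraphicsDevice;
        device.BlendState = BlendState.Opaque;
        device.DepthStencilState = DepthStencilState.Default;
        device.SamplerStates[0] = SamplerState.LinearWrap;
        device.RasterizerState = RasterizerState.CullCounterClockwise;
        // ...draw the 3D objects here, then run spriteBatch.Begin()/End() last.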

    Read the article

  • UnrealScript error: Importing defaults for Actor: Changing Role in defaultproperties illegal - what is it importing?

    - by user3079666
    I added the line var float Mass; to Actor and commented it out of the classes that inherit from Actor and declare it, and fixed all the resulting issues, but I now get the error message: Error, Importing defaults for Actor: Changing Role in defaultproperties is illegal (was RemoteRole intended?) The thing is, I did not change anything related to Role or to defaultproperties. Also, since it says "Importing", I'm guessing it's some .ini file... any clues?

    Read the article

  • Converting 3 axis vectors to a rotation matrix

    - by user38858
    I am trying to get a rotation matrix (in 3ds Max) from 3 vectors that form an axis set (all 3 vectors are mutually perpendicular). Somewhere I read that I could build a rotation matrix just by inserting the vectors one per row (source: http://renderdan.blogspot.cz/2006/05/rotation-matrix-from-axis-vectors.html). So I built a matrix from these example vectors: x-axis: [-0.194624,-0.23715,-0.951778] y-axis: [-0.773012,0.634392,0] z-axis: [-0.6038,-0.735735,0.306788] But for some reason, if I try to convert this matrix to Euler angles, I get this rotation: (eulerAngles 47.7284 6.12831 36.8263) ... which is totally wrong and doesn't align with my 3 vectors at all. I know that rotation is quite difficult to understand; can someone shed some light? :)
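
    For what it's worth, plugging the question's numbers in shows that cross x y comes out as -z rather than z; the three vectors form a left-handed set, so a matrix built from them is a reflection rather than a pure rotation, which would explain the nonsense Euler angles. A minimal MAXScript sketch (untested) that checks and corrects this before converting:

        -- Build the matrix row by row, fixing handedness first.
        x = [-0.194624, -0.23715, -0.951778]
        y = [-0.773012, 0.634392, 0]
        z = [-0.6038, -0.735735, 0.306788]
        if dot (cross x y) z < 0 do z = -z  -- left-handed set: flip one axis
        m = matrix3 x y z [0, 0, 0]
        m as eulerAngles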

    Read the article

  • projection / view matrix: the object is bigger than it should be and depth does not affect vertices

    - by Francesco Noferi
    I'm currently trying to write a C 3D software rendering engine from scratch just for fun and to get an insight into what OpenGL does behind the scenes and what 90s programmers had to do on DOS. I have written my own matrix library and tested it without noticing any issues, but when I tried projecting the vertices of a simple 2x2 cube at 0,0 as seen by a basic camera at 0,0,10, the cube seems to appear way bigger than the application's window. If I scale the vertices' coordinates down by 8 times I can see a proper cube centered on the screen. This cube doesn't seem to be in perspective: when seen from the front, the back vertices perfectly overlap with the front ones, so I'm quite sure it's not correct. This is how I create the view and projection matrices (vec4_initd initializes the vectors with w=0, vec4_initw initializes the vectors with w=1): void mat4_lookatlh(mat4 *m, const vec4 *pos, const vec4 *target, const vec4 *updirection) { vec4 fwd, right, up; // fwd = norm(pos - target) fwd = *target; vec4_sub(&fwd, pos); vec4_norm(&fwd); // right = norm(cross(updirection, fwd)) vec4_cross(updirection, &fwd, &right); vec4_norm(&right); // up = cross(right, forward) vec4_cross(&fwd, &right, &up); // orientation and translation matrices combined vec4_initd(&m->a, right.x, up.x, fwd.x); vec4_initd(&m->b, right.y, up.y, fwd.y); vec4_initd(&m->c, right.z, up.z, fwd.z); vec4_initw(&m->d, -vec4_dot(&right, pos), -vec4_dot(&up, pos), -vec4_dot(&fwd, pos)); } void mat4_perspectivefovrh(mat4 *m, float fovdegrees, float aspectratio, float near, float far) { float h = 1.f / tanf(ftoradians(fovdegrees / 2.f)); float w = h / aspectratio; vec4_initd(&m->a, w, 0.f, 0.f); vec4_initd(&m->b, 0.f, h, 0.f); vec4_initw(&m->c, 0.f, 0.f, -far / (near - far)); vec4_initd(&m->d, 0.f, 0.f, (near * far) / (near - far)); } This is how I project my vertices: void device_project(device *d, const vec4 *coord, const mat4 *transform, int *projx, int *projy) { vec4 result; mat4_mul(transform, coord, &result); *projx = result.x * d->w + d->w / 2; *projy = result.y * d->h + d->h / 2; } void device_rendervertices(device *d, const camera *camera, const mesh meshes[], int nmeshes, const rgba *color) { int i, j; mat4 view, projection, world, transform, projview; mat4 translation, rotx, roty, rotz, transrotz, transrotzy; int projx, projy; // vec4_unity = (0.f, 1.f, 0.f, 0.f) mat4_lookatlh(&view, &camera->pos, &camera->target, &vec4_unity); mat4_perspectivefovrh(&projection, 45.f, (float)d->w / (float)d->h, 0.1f, 1.f); for (i = 0; i < nmeshes; i++) { // world matrix = translation * rotz * roty * rotx mat4_translatev(&translation, meshes[i].pos); mat4_rotatex(&rotx, ftoradians(meshes[i].rotx)); mat4_rotatey(&roty, ftoradians(meshes[i].roty)); mat4_rotatez(&rotz, ftoradians(meshes[i].rotz)); mat4_mulm(&translation, &rotz, &transrotz); // transrotz = translation * rotz mat4_mulm(&transrotz, &roty, &transrotzy); // transrotzy = transrotz * roty = translation * rotz * roty mat4_mulm(&transrotzy, &rotx, &world); // world = transrotzy * rotx = translation * rotz * roty * rotx // transform matrix mat4_mulm(&projection, &view, &projview); // projview = projection * view mat4_mulm(&projview, &world, &transform); // transform = projview * world = projection * view * world for (j = 0; j < meshes[i].nvertices; j++) { device_project(d, &meshes[i].vertices[j], &transform, &projx, &projy); device_putpixel(d, projx, projy, color); } } } This is how the cube and camera are initialized: // test mesh cube = &meshlist[0]; mesh_init(cube, "Cube", 8);
cube->rotx = 0.f; cube->roty = 0.f; cube->rotz = 0.f; vec4_initw(&cube->pos, 0.f, 0.f, 0.f); vec4_initw(&cube->vertices[0], -1.f, 1.f, 1.f); vec4_initw(&cube->vertices[1], 1.f, 1.f, 1.f); vec4_initw(&cube->vertices[2], -1.f, -1.f, 1.f); vec4_initw(&cube->vertices[3], -1.f, -1.f, -1.f); vec4_initw(&cube->vertices[4], -1.f, 1.f, -1.f); vec4_initw(&cube->vertices[5], 1.f, 1.f, -1.f); vec4_initw(&cube->vertices[6], 1.f, -1.f, 1.f); vec4_initw(&cube->vertices[7], 1.f, -1.f, -1.f); // main camera vec4_initw(&maincamera.pos, 0.f, 0.f, 10.f); maincamera.target = vec4_zerow; and, just to be sure, this is how I compute matrix multiplications: void mat4_mul(const mat4 *m, const vec4 *va, vec4 *vb) { vb->x = m->a.x * va->x + m->b.x * va->y + m->c.x * va->z + m->d.x * va->w; vb->y = m->a.y * va->x + m->b.y * va->y + m->c.y * va->z + m->d.y * va->w; vb->z = m->a.z * va->x + m->b.z * va->y + m->c.z * va->z + m->d.z * va->w; vb->w = m->a.w * va->x + m->b.w * va->y + m->c.w * va->z + m->d.w * va->w; } void mat4_mulm(const mat4 *ma, const mat4 *mb, mat4 *mc) { mat4_mul(ma, &mb->a, &mc->a); mat4_mul(ma, &mb->b, &mc->b); mat4_mul(ma, &mb->c, &mc->c); mat4_mul(ma, &mb->d, &mc->d); }
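
    A hedged sketch of the likely culprit: device_project never divides by w. After multiplying by a perspective projection matrix, the result is in homogeneous clip space, and skipping the divide matches both symptoms, no foreshortening and an oversized image. (Also worth checking: the projection is built with near 0.1 and far 1.0 while the camera sits 10 units away, so the depth range doesn't even contain the cube.) Something like:

        void device_project(device *d, const vec4 *coord, const mat4 *transform,
                            int *projx, int *projy)
        {
            vec4 result;
            mat4_mul(transform, coord, &result);
            if (result.w != 0.f) {         /* the missing perspective divide */
                result.x /= result.w;
                result.y /= result.w;
            }
            /* NDC runs -1..1, so scale by half the screen size */
            *projx = result.x * d->w / 2 + d->w / 2;
            *projy = -result.y * d->h / 2 + d->h / 2;  /* y flip assumes screen y grows downward */
        }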

    Read the article

  • How to label a cuboid?

    - by usha
    Hi, this is how my 3D cuboid looks; I have attached the complete code. I want to label this cuboid with different names across its sides. How is this possible using OpenGL on Android? public class MyGLRenderer implements Renderer { Context context; Cuboid rect; private float mCubeRotation; // private static float angleCube = 0; // Rotational angle in degree for cube (NEW) // private static float speedCube = -1.5f; // Rotational speed for cube (NEW) public MyGLRenderer(Context context) { rect = new Cuboid(); this.context = context; } public void onDrawFrame(GL10 gl) { // TODO Auto-generated method stub gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT); gl.glLoadIdentity(); // Reset the model-view matrix gl.glTranslatef(0.2f, 0.0f, -8.0f); // Translate right and into the screen gl.glScalef(0.8f, 0.8f, 0.8f); // Scale down (NEW) gl.glRotatef(mCubeRotation, 1.0f, 1.0f, 1.0f); // gl.glRotatef(angleCube, 1.0f, 1.0f, 1.0f); // rotate about the axis (1,1,1) (NEW) rect.draw(gl); mCubeRotation -= 0.15f; //angleCube += speedCube; } public void onSurfaceChanged(GL10 gl, int width, int height) { // TODO Auto-generated method stub if (height == 0) height = 1; // To prevent divide by zero float aspect = (float)width / height; // Set the viewport (display area) to cover the entire window gl.glViewport(0, 0, width, height); // Setup perspective projection, with aspect ratio matches viewport gl.glMatrixMode(GL10.GL_PROJECTION); // Select projection matrix gl.glLoadIdentity(); // Reset projection matrix // Use perspective projection GLU.gluPerspective(gl, 45, aspect, 0.1f, 100.f); gl.glMatrixMode(GL10.GL_MODELVIEW); // Select model-view matrix gl.glLoadIdentity(); // Reset } public void onSurfaceCreated(GL10 gl, EGLConfig config) { // TODO Auto-generated method stub gl.glClearColor(0.0f, 0.0f, 0.0f, 1.0f); // Set color's clear-value to black gl.glClearDepthf(1.0f); // Set depth's clear-value to farthest gl.glEnable(GL10.GL_DEPTH_TEST); // Enables depth-buffer for hidden surface removal gl.glDepthFunc(GL10.GL_LEQUAL); // The type of depth testing to do gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_NICEST); // nice perspective view gl.glShadeModel(GL10.GL_SMOOTH); // Enable smooth shading of color gl.glDisable(GL10.GL_DITHER); // Disable dithering for better performance }} public class Cuboid{ private FloatBuffer mVertexBuffer; private FloatBuffer mColorBuffer; private ByteBuffer mIndexBuffer; private float vertices[] = { //width,height,depth -2.5f, -1.0f, -1.0f, 1.0f, -1.0f, -1.0f, 1.0f, 1.0f, -1.0f, -2.5f, 1.0f, -1.0f, -2.5f, -1.0f, 1.0f, 1.0f, -1.0f, 1.0f, 1.0f, 1.0f, 1.0f, -2.5f, 1.0f, 1.0f }; private float colors[] = { // R,G,B,A COLOR 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, 1.0f, 1.0f, 0.5f, 0.0f, 1.0f, 1.0f, 0.5f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f, 1.0f }; private byte indices[] = { // VERTEX 0,1,2,3,4,5,6,7 REPRESENTATION FOR FACES 0, 4, 5, 0, 5, 1, 1, 5, 6, 1, 6, 2, 2, 6, 7, 2, 7, 3, 3, 7, 4, 3, 4, 0, 4, 7, 6, 4, 6, 5, 3, 0, 1, 3, 1, 2 }; public Cuboid() { ByteBuffer byteBuf = ByteBuffer.allocateDirect(vertices.length * 4); byteBuf.order(ByteOrder.nativeOrder()); mVertexBuffer = byteBuf.asFloatBuffer(); mVertexBuffer.put(vertices); mVertexBuffer.position(0); byteBuf = ByteBuffer.allocateDirect(colors.length * 4); byteBuf.order(ByteOrder.nativeOrder()); mColorBuffer = byteBuf.asFloatBuffer(); mColorBuffer.put(colors); mColorBuffer.position(0); mIndexBuffer = ByteBuffer.allocateDirect(indices.length);
mIndexBuffer.put(indices); mIndexBuffer.position(0); } public void draw(GL10 gl) { gl.glFrontFace(GL10.GL_CW); gl.glVertexPointer(3, GL10.GL_FLOAT, 0, mVertexBuffer); gl.glColorPointer(4, GL10.GL_FLOAT, 0, mColorBuffer); gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); gl.glEnableClientState(GL10.GL_COLOR_ARRAY); gl.glDrawElements(GL10.GL_TRIANGLES, 36, GL10.GL_UNSIGNED_BYTE, mIndexBuffer); gl.glDisableClientState(GL10.GL_VERTEX_ARRAY); gl.glDisableClientState(GL10.GL_COLOR_ARRAY); } } public class Draw3drect extends Activity { private GLSurfaceView glView; // Use GLSurfaceView // Call back when the activity is started, to initialize the view @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); glView = new GLSurfaceView(this); // Allocate a GLSurfaceView glView.setRenderer(new MyGLRenderer(this)); // Use a custom renderer this.setContentView(glView); // This activity sets to GLSurfaceView } // Call back when the activity is going into the background @Override protected void onPause() { super.onPause(); glView.onPause(); } // Call back after onPause() @Override protected void onResume() { super.onResume(); glView.onResume(); } }
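
    One hedged approach (names like loadLabelTexture are made up for this sketch): render each label into a Bitmap with Canvas, upload it as a GL texture, then draw each face as its own textured quad with the matching texture bound, instead of one glDrawElements call covering all 36 indices:

        private int loadLabelTexture(GL10 gl, String label) {
            Bitmap bmp = Bitmap.createBitmap(128, 128, Bitmap.Config.ARGB_8888);
            Canvas canvas = new Canvas(bmp);
            Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
            paint.setTextSize(32);
            paint.setARGB(0xff, 0xff, 0xff, 0xff);
            canvas.drawText(label, 16, 72, paint);    // draw the label text

            int[] tex = new int[1];
            gl.glGenTextures(1, tex, 0);
            gl.glBindTexture(GL10.GL_TEXTURE_2D, tex[0]);
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
            GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bmp, 0);   // upload the bitmap
            bmp.recycle();
            return tex[0];
        }

    The draw() method would also need texture coordinates and GL_TEXTURE_COORD_ARRAY enabled alongside the existing vertex and color arrays.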

    Read the article

  • Renderbuffer to GLSL shader?

    - by Dan
    I have an application that performs volume rendering through a raycasting approach. The actual raycasting shader writes the raycasted volume depth into a framebuffer object, through gl_FragDepth, that I bind before calling the shader. The problem I have is that I would like to use this depth in another shader that I call later on. I figured out that the only way to do that is to bind the framebuffer once the raycasting has finished, read the depth map through something like glReadPixels(0, 0, m_winSize.x , m_winSize.y, GL_DEPTH_COMPONENT, GL_FLOAT, pixels); write it to a 2D texture as usual glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, m_winSize.x, m_winSize.y, 0, GL_DEPTH_COMPONENT, GL_FLOAT, pixels) and then pass this 2D texture, which contains a simple depth map, to the other shader. However, I am not entirely sure that what I do is the proper way to do this. Is there any way to pass the framebuffer that I fill up in my raycasting shader to the other shader?
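
    The CPU round-trip can be skipped entirely: a hedged sketch (plain GL calls; raycastFbo and the sizes are assumed names) that attaches a depth texture to the raycasting FBO, so the pass writes depth straight into something the second shader can sample:

        GLuint depthTex;
        glGenTextures(1, &depthTex);
        glBindTexture(GL_TEXTURE_2D, depthTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
                     GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

        glBindFramebuffer(GL_FRAMEBUFFER, raycastFbo);   /* the existing FBO */
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                               GL_TEXTURE_2D, depthTex, 0);

        /* after the raycasting pass, bind depthTex like any other texture */
        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, depthTex);
        /* ...and read it in the second shader through a sampler2D uniform */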

    Read the article

  • PIX for Visual Studio Express 2012 (Desktop)

    - by JohnB
    (Originally asked on Stack Overflow) Using Visual C++ Express 2010 for Direct3D, you have to download the DirectX SDK, and there is a tool called PIX for debugging shaders, looking at 3D resources, etc. With Visual Studio Express 2012, the DirectX SDK is included in the Windows SDK that comes with it, but this does not seem to include the winpix.exe tool. Is this very useful tool still available? I guess I can still use the one from the previous SDK, but it seems wrong to install the entire SDK just for that tool. Is there a version for VS2012 Express that I'm missing?

    Read the article

  • Unity scaling instantiated GameObject at Start() doesn't "keep"

    - by Shivan Dragon
    I have a very simple scenario: a box-like prefab which is imported from Blender automatically (I have the .blend file in the Assets folder), and a script that has two public GameObject fields. In one I place the above prefab, and in the other I place a terrain object (which I've created in Unity's graphical view): public Collider terrain; public GameObject aStarCellHighlightPrefab; This script is attached to the camera. The idea is to have the Blender prefab instantiated, have the terrain set as its parent, and then scale said prefab instance up. I first did it like this, in the Start() method: void Start () { cursorPositionOnTerrain = new RaycastHit(); aStarCellHighlight = (GameObject)Instantiate(aStarCellHighlightPrefab, new Vector3(300,300,300), terrain.transform.rotation); aStarCellHighlight.name = "cellHighlight"; aStarCellHighlight.transform.parent = terrain.transform; aStarCellHighlight.transform.localScale = new Vector3(100,100,100); } and first thought it didn't work. However, later I noticed that it did in fact work, in the sense that the scale was applied right at the start, but right after that the prefab instance snapped back to its initial scale. Putting the scale code in the Update() method fixes it, in the sense that now it stays scaled all the time: void Update () { aStarCellHighlight.transform.localScale = new Vector3(100,100,100); //... } However, I've noticed that when I run this code, the object is first displayed without the scale being applied, and it takes about 5-10 seconds for the scale to happen. During this time everything works fine (like input and logging, etc.). The scene is very simple; it's not like it has a lot of stuff to load or anything (there's a ray cast from the camera onto the terrain, but that seems to happen without such delays). My (two-part) question is: Why doesn't the scale stick when I apply it in the Start() method, so that I have to keep scaling it in the Update() method? And why does it take so long for the scale to apply/show up?
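
    A hedged guess, not a verified diagnosis: models imported from .blend files often carry an Animation component that writes the transform every frame, which would both undo the Start() scale and explain a delayed snap once the clip settles; separately, localScale is relative to the parent, so parenting to the terrain changes what (100,100,100) means. A sketch of Start() with the animation possibility handled (the Animation check is an assumption):

        void Start()
        {
            aStarCellHighlight = (GameObject)Instantiate(aStarCellHighlightPrefab,
                new Vector3(300, 300, 300), terrain.transform.rotation);
            aStarCellHighlight.name = "cellHighlight";

            // If the imported prefab carries an animation, stop it from
            // rewriting the transform each frame (assumption, see above).
            Animation anim = aStarCellHighlight.GetComponent<Animation>();
            if (anim != null) anim.enabled = false;

            aStarCellHighlight.transform.parent = terrain.transform;
            aStarCellHighlight.transform.localScale = new Vector3(100, 100, 100);
        }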

    Read the article

  • Shadow Mapping and Transparent Quads

    - by CiscoIPPhone
    Shadow mapping uses the depth buffer to calculate where shadows should be drawn. My problem is that I'd like some semi-transparent textured quads to cast shadows - for example, billboarded trees. As the depth value will be set across all of the quad, and not just the visible parts, it will cast a solid quad-shaped shadow, which is not what I want. How can I make my transparent quads cast correct shadows using shadow mapping?
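
    The standard fix, sketched as a minimal GLSL depth-pass fragment shader (uniform and varying names are placeholders): sample the quad's texture during the shadow-map pass and discard transparent texels so they never write depth.

        uniform sampler2D uDiffuse;
        varying vec2 vTexCoord;

        void main()
        {
            // Manual alpha test: mostly-transparent fragments cast no shadow.
            if (texture2D(uDiffuse, vTexCoord).a < 0.5)
                discard;
            // Surviving fragments write depth as usual.
        }

    The 0.5 threshold is a tunable cutoff; properly soft shadows through partial alpha need a different technique entirely (e.g. deep shadow maps).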

    Read the article

  • Hype and LINQ

    - by Tony Davis
    "Tired of querying in antiquated SQL?" I blinked in astonishment when I saw this headline on the LinqPad site. Warming to its theme, the site suggests that what we need is to "kiss goodbye to SSMS", and instead use LINQ, a modern query language! Elsewhere, there is an article entitled "Why LINQ beats SQL". The designers of LINQ, along with many DBAs, would, I'm sure, cringe with embarrassment at the suggestion that LINQ and SQL are, in any sense, competitive ways of doing the same thing. In fact what LINQ really is, at last, is an efficient, declarative language for C# and VB programmers to access or manipulate data in objects, local data stores, ORMs, web services, data repositories, and, yes, even relational databases. The fact is that LINQ is essentially declarative programming in a .NET language, and so in many ways encourages developers into a "SQL-like" mindset, even though they are not directly writing SQL. In place of imperative logic and loops, it uses various expressions, operators and declarative logic to build up an "expression tree" describing only what data is required, not the operations to be performed to get it. This expression tree is then parsed by the language compiler, and the result, when used against a relational database, is a SQL string that, while perhaps not always perfect, is often correctly parameterized and certainly no less "optimal" than what is achieved when a developer applies blunt, imperative logic to the SQL language. From a developer standpoint, it is a mistake to consider LINQ simply as a substitute means of querying SQL Server. The strength of LINQ is that that can be used to access any data source, for which a LINQ provider exists. Microsoft supplies built-in providers to access not just SQL Server, but also XML documents, .NET objects, ADO.NET datasets, and Entity Framework elements. LINQ-to-Objects is particularly interesting in that it allows a declarative means to access and manipulate arrays, collections and so on. Furthermore, as Michael Sorens points out in his excellent article on LINQ, there a whole host of third-party LINQ providers, that offers a simple way to get at data in Excel, Google, Flickr and much more, without having to learn a new interface or language. Of course, the need to be generic enough to deal with a range of data sources, from something as mundane as a text file to as esoteric as a relational database, means that LINQ is a compromise and so has inherent limitations. However, it is a powerful and beautifully compact language and one that, at least in its "query syntax" guise, is accessible to developers and DBAs alike. Perhaps there is still hope that LINQ can fulfill Phil Factor's lobster-induced fantasy of a language that will allow us to "treat all data objects, whether Word files, Excel files, XML, relational databases, text files, HTML files, registry files, LDAPs, Outlook and so on, in the same logical way, as linked databases, and extract the metadata, create the entities and relationships in the same way, and use the same SQL syntax to interrogate, create, read, write and update them." Cheers, Tony.

    Read the article

  • Bohemia Interactive's bio2s format

    - by Jaime Soto
    Does anyone have specifications for the bio2s scripting language from Bohemia Interactive? They develop Operation Flashpoint, Armed Assault (ArmA), and Virtual Battlespace. These scripts are sometimes called O2 or Oxygen scripts and are used in their terrain and modeling tools. Oxygen is Bohemia Interactive's modeling tool. I found additional examples of the format in this VBS2 tutorial and this ArmA forum thread. EDIT: I clarified the purpose of the bio2s format and provided some links to examples.

    Read the article

  • Diamond-square terrain generation problem

    - by kafka
    I've implemented a diamond-square algorithm according to this article: http://www.lighthouse3d.com/opengl/terrain/index.php?mpd2 The problem is that I get these steep cliffs all over the map. It happens on the edges, when the terrain is recursively subdivided: Here is the source: void DiamondSquare(unsigned x1,unsigned y1,unsigned x2,unsigned y2,float range) { int c1 = (int)x2 - (int)x1; int c2 = (int)y2 - (int)y1; unsigned hx = (x2 - x1)/2; unsigned hy = (y2 - y1)/2; if((c1 <= 1) || (c2 <= 1)) return; // Diamond stage float a = m_heightmap[x1][y1]; float b = m_heightmap[x2][y1]; float c = m_heightmap[x1][y2]; float d = m_heightmap[x2][y2]; float e = (a+b+c+d) / 4 + GetRnd() * range; m_heightmap[x1 + hx][y1 + hy] = e; // Square stage float f = (a + c + e + e) / 4 + GetRnd() * range; m_heightmap[x1][y1+hy] = f; float g = (a + b + e + e) / 4 + GetRnd() * range; m_heightmap[x1+hx][y1] = g; float h = (b + d + e + e) / 4 + GetRnd() * range; m_heightmap[x2][y1+hy] = h; float i = (c + d + e + e) / 4 + GetRnd() * range; m_heightmap[x1+hx][y2] = i; DiamondSquare(x1, y1, x1+hx, y1+hy, range / 2.0); // Upper left DiamondSquare(x1+hx, y1, x2, y1+hy, range / 2.0); // Upper right DiamondSquare(x1, y1+hy, x1+hx, y2, range / 2.0); // Lower left DiamondSquare(x1+hx, y1+hy, x2, y2, range / 2.0); // Lower right } Parameters: (x1,y1),(x2,y2) - coordinates that define a region on a heightmap (default (0,0)(128,128)). range - basically max. height. (default 32) Help would be greatly appreciated.
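
    A hedged reading of the symptom: with this recursive scheme, two adjacent quadrants each compute the midpoint of their shared edge independently, with different random offsets, and the second write overwrites the first, which leaves exactly these seams. The textbook cure is to run the diamond and square stages iteratively over the whole grid, one level at a time, so each point is computed exactly once from all of its real neighbours; a cheaper patch (illustrative, untested) is to let the first write win:

        const float UNSET = -1e9f;  // assumes real heights never reach this value

        void SetOnce(unsigned x, unsigned y, float v)
        {
            if (m_heightmap[x][y] == UNSET)
                m_heightmap[x][y] = v;   // first computation of a midpoint wins
        }
        // square stage becomes: SetOnce(x1, y1 + hy, f); SetOnce(x1 + hx, y1, g); ...
        // (requires initializing the whole heightmap to UNSET before recursing)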

    Read the article

  • Smooth animation in Cocos2d for iOS

    - by MrDatabase
    I move a simple CCSprite around the screen of an iOS device using this code: [self schedule:@selector(update:) interval:0.0167]; - (void) update:(ccTime) delta { CGPoint currPos = self.position; currPos.x += xVelocity; currPos.y += yVelocity; self.position = currPos; } This works; however, the animation is not smooth. How can I improve the smoothness of my animation? My scene is exceedingly simple (it just has one full-screen CCSprite with a background image and a relatively small CCSprite that moves slowly). I've logged the ccTime delta and it's not consistent (it's almost always greater than my specified interval of 0.0167... sometimes by up to a factor of 4x). I've considered tailoring the motion in the update method to the delta time (larger delta = larger movement, etc.). However, given the simplicity of my scene, it seems there's a better way (and something basic that I'm probably missing).
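
    A minimal sketch of the usual cure: schedule the per-frame update and scale the motion by the elapsed time, so uneven deltas stop showing up as stutter. Velocities then mean points per second rather than points per tick (that re-interpretation is an assumption of the sketch):

        [self scheduleUpdate];   // once per frame, instead of a fixed-interval timer

        - (void) update:(ccTime)delta {
            CGPoint currPos = self.position;
            currPos.x += xVelocity * delta;   // xVelocity now in points/second
            currPos.y += yVelocity * delta;
            self.position = currPos;
        }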

    Read the article

  • Circular Bullet Spread not Even

    - by SoulBeaver
    I'm creating a bullet shooter much in the style of Touhou. Right now I want to have a very simple circular shot being fired from the enemy. See this picture: As you can see, the spacing is very uneven, which isn't very good if you want to survive. The code I'm using is this: private function shoot() : void { const BULLETS_PER_WAVE : int = 72; var interval : Number = BULLETS_PER_WAVE / 360; for (var i : int = 0; i < BULLETS_PER_WAVE; ++i) { var xSpeed : Number = GameConstants.BULLET_NORMAL_SPEED_X * Math.sin(i * interval); var ySpeed : Number = GameConstants.BULLET_NORMAL_SPEED_Y * Math.cos(i * interval); BulletFactory.createNormalBullet(bulletColor_, alice_.center, xSpeed, ySpeed); } canShoot_ = false; cooldownTimer_.start(); } I imagine my mistake is in the sin/cos calls, but I'm not entirely sure what's wrong.
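
    A hedged fix: the step between bullets should be 2π / BULLETS_PER_WAVE radians. The original BULLETS_PER_WAVE / 360 (= 0.2) is neither a degree step nor a radian step, so successive bullets wrap the circle unevenly. Sketch (note the ring is only round if the X and Y speed constants are equal):

        private function shoot() : void {
            const BULLETS_PER_WAVE : int = 72;
            var interval : Number = (2 * Math.PI) / BULLETS_PER_WAVE; // radians per bullet
            for (var i : int = 0; i < BULLETS_PER_WAVE; ++i) {
                var angle : Number = i * interval;
                var xSpeed : Number = GameConstants.BULLET_NORMAL_SPEED_X * Math.sin(angle);
                var ySpeed : Number = GameConstants.BULLET_NORMAL_SPEED_Y * Math.cos(angle);
                BulletFactory.createNormalBullet(bulletColor_, alice_.center, xSpeed, ySpeed);
            }
            canShoot_ = false;
            cooldownTimer_.start();
        }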

    Read the article

  • How can I find a position between 4 vertices in a fragment shader?

    - by c4sh
    I'm creating a shader with SharpDX (DirectX 11 in C#) that takes a segment (2 points) from the output of a vertex shader and then passes it to a geometry shader, which converts this line into a rectangle (4 points) and assigns the four corners a texture coordinate. After that I want a fragment shader (which receives the interpolated position and the interpolated texture coordinates) that checks the depth at the "spine" of the rectangle (that is, on the line that passes through the middle of the rectangle). The problem is I don't know how to extract the position of the corresponding fragment at the spine of the rectangle. This happens because I have the texture coordinates interpolated, but I don't know how to use them to get the fragment I want, because the coordinate systems of a) the texture and b) the position of my fragment in screen space are not the same. Thanks a lot for any help.
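
    One hedged approach (an HLSL-flavoured sketch; the struct and semantics are illustrative, not the asker's code): have the geometry shader emit, for each corner it generates, the position of the spine endpoint that corner came from. The rasterizer interpolates it across the quad, so every fragment receives the spine position expressed in the same space as its own coordinates.

        struct GSOutput
        {
            float4 pos      : SV_Position;
            float2 uv       : TEXCOORD0;
            float4 spinePos : TEXCOORD1;   // extra interpolant carrying the spine
        };
        // In the GS: both corners built from segment end A get spinePos = A,
        // both corners built from end B get spinePos = B. The pixel shader can
        // then use input.spinePos.xy / input.spinePos.w as the spine's screen position.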

    Read the article

  • How to load stacking chunks on the fly?

    - by Brettetete
    I'm currently working on an infinite world, mostly inspired by Minecraft. A chunk consists of 16x16x16 blocks. A block (cube) is 1x1x1. This runs very smoothly with a view range of 12 chunks (12x16) on my computer. Fine. When I change the chunk height to 256 this becomes - obviously - incredibly laggy. So what I basically want to do is stack chunks. That means my world could be [8,16,8] chunks large. The question is now how to generate chunks on the fly. At the moment I generate not-yet-existing chunks in a circle around my position (near to far). Since I don't stack chunks yet, this is not very complex. As an important side note: I also want to have biomes, with different min/max heights. So in the Flatlands biome the highest layer with blocks would be 8 (8x16), while in the Mountains biome the highest layer with blocks would be 14 (14x16), just as an example. What I could do would be to load one chunk above and below me, for example. But here the problem would be that transitions between different biomes could be larger than one chunk on the y axis. My current chunk loading in action. For completeness, here is my current chunk-loading "algorithm": private IEnumerator UpdateChunks(){ for (int i = 1; i < VIEW_RANGE; i += ChunkWidth) { float vr = i; for (float x = transform.position.x - vr; x < transform.position.x + vr; x += ChunkWidth) { for (float z = transform.position.z - vr; z < transform.position.z + vr; z += ChunkWidth) { _pos.Set(x, 0, z); // no y, yet _pos.x = Mathf.Floor(_pos.x/ChunkWidth)*ChunkWidth; _pos.z = Mathf.Floor(_pos.z/ChunkWidth)*ChunkWidth; Chunk chunk = Chunk.FindChunk(_pos); // If Chunk is already created, continue if (chunk != null) continue; // Create a new Chunk.. chunk = (Chunk) Instantiate(ChunkFab, _pos, Quaternion.identity); } } // Skip to next frame yield return 0; } }
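
    A hedged sketch of one way to add the vertical dimension to that inner loop (BiomeAt and its MaxLayer field are hypothetical helpers standing in for the per-biome height limits described above):

        for (float y = transform.position.y - vr; y < transform.position.y + vr; y += ChunkHeight) {
            _pos.Set(x, y, z);
            _pos.x = Mathf.Floor(_pos.x / ChunkWidth) * ChunkWidth;
            _pos.z = Mathf.Floor(_pos.z / ChunkWidth) * ChunkWidth;
            // Clamp the column to what this biome can actually contain,
            // so empty sky chunks are never instantiated.
            float maxY = BiomeAt(_pos).MaxLayer * ChunkHeight;   // hypothetical lookup
            _pos.y = Mathf.Clamp(Mathf.Floor(y / ChunkHeight) * ChunkHeight, 0f, maxY);
            if (Chunk.FindChunk(_pos) == null)
                Instantiate(ChunkFab, _pos, Quaternion.identity);
        }

    Biome transitions wider than one chunk then stop being a special case, because the whole column up to the biome's own limit is considered rather than just one chunk above and below the player.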

    Read the article

  • GLSL: How do I cast a float into an int?

    - by dugla
    In a GLSL fragment shader I am trying to cast a float into an int. The compiler has other ideas. It complains thusly: ERROR: 0:60: '=' : cannot convert from 'mediump float' to 'highp int' I am trying to do this: mediump float indexf = floor(2.0 * mixer); highp int index = indexf; I (vainly) tried to raise the precision of the int above the float to appease the GL Gods but no joy. Could someone please school me here? Thanks, Doug
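
    For the record, GLSL does no implicit float-to-int conversion; the constructor syntax does it explicitly, and the precision qualifiers are not the obstacle:

        mediump float indexf = floor(2.0 * mixer);
        highp int index = int(indexf);   // explicit constructor-style cast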

    Read the article

  • Open Source AI Bot interfaces

    - by David Young
    What are some open source AI bot interfaces? Similar to Pogamut 3 / GameBots2004 for custom Unreal Tournament bots, or the Brood War API for StarCraft bots, etc. If you could, please post one AI bot interface per answer (make sure to provide a link) and give a brief summary of what it covers. Please include what type of interface structure it is (client/server, server/server, etc.); e.g., BWAPI is client/server and emulates a real player.

    Read the article

  • Semi-fixed timestep ported to JavaScript

    - by abernier
    In Gaffer's "Fix Your Timestep!" article, the author explains how to decouple your physics loop from the rendering loop. Here is the final code, written in C: double t = 0.0; const double dt = 0.01; double currentTime = hires_time_in_seconds(); double accumulator = 0.0; State previous; State current; while ( !quit ) { double newTime = time(); double frameTime = newTime - currentTime; if ( frameTime > 0.25 ) frameTime = 0.25; // note: max frame time to avoid spiral of death currentTime = newTime; accumulator += frameTime; while ( accumulator >= dt ) { previousState = currentState; integrate( currentState, t, dt ); t += dt; accumulator -= dt; } const double alpha = accumulator / dt; State state = currentState*alpha + previousState * ( 1.0 - alpha ); render( state ); } I'm trying to implement this in JavaScript, but I'm quite confused about the second while loop... Here is what I have for now (simplified): ... (function animLoop(){ ... while (accumulator >= dt) { // While? In a requestAnimation loop? Maybe if? ... } ... // render requestAnimationFrame(animLoop); // stands in for the 1st while loop [OK] }()) As you can see, I'm not sure about the while loop inside the requestAnimation one... I thought of replacing it with an if, but I'm not sure it would be equivalent... Maybe someone can help me.
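
    For what it's worth, the inner while survives unchanged inside the requestAnimationFrame callback: it drains zero or more fixed dt steps depending on how much real time accumulated, which an if cannot do on slow frames. A hedged sketch (integrate, render and interpolate are assumed to exist, as in the article):

        var t = 0, dt = 1 / 60, accumulator = 0;
        var previousState, currentState;   // state objects, set up elsewhere
        var currentTime = performance.now() / 1000;

        function animLoop() {
            var newTime = performance.now() / 1000;
            var frameTime = Math.min(newTime - currentTime, 0.25); // avoid spiral of death
            currentTime = newTime;
            accumulator += frameTime;

            while (accumulator >= dt) {     // yes: still a while, even here
                previousState = currentState;
                integrate(currentState, t, dt);
                t += dt;
                accumulator -= dt;
            }
            render(interpolate(previousState, currentState, accumulator / dt));
            requestAnimationFrame(animLoop); // replaces the outer while(!quit)
        }
        requestAnimationFrame(animLoop);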

    Read the article

  • LWJGL Text Rendering

    - by Trixmix
    Currently in my project I am using LWJGL and the Slick2D library to render text onto the screen. Here is a more specific example: Font f = new Font("Times New Roman",Font.BOLD,18); font = new UnicodeFont(f); font.getEffects().add(new ColorEffect(Color.white)); font.addAsciiGlyphs(); try { font.loadGlyphs(); } catch (SlickException e) { e.printStackTrace(); } Then I use font.drawString to write onto the screen. This is a quick, easy way, but it has a lot of disadvantages. For example, font.loadGlyphs takes a very long time (1-3 seconds), so when I want to change a color or font type I have to wait 1-3 seconds, which means I cannot do it while rendering (i.e. I can't have differently colored text on the same screen). My question is: what is a better way of drawing multicolored text onto the screen? I use Slick2D only for the text rendering, so maybe I can get rid of the library entirely and draw text some other way... If you have an answer, please leave a quick, short example. Thanks!
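
    A hedged alternative to reloading: keep the glyph pages white and tint at draw time. Slick's UnicodeFont has a drawString overload that takes a Color, so multicoloured text on one screen needs only a single loadGlyphs call (switching the font face or size would still need a second, preloaded UnicodeFont instance):

        // load once with ColorEffect(Color.white), as in the question, then:
        font.drawString(10, 10, "red text", Color.red);
        font.drawString(10, 40, "green text", Color.green);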

    Read the article

  • XNA - Render texture to a RenderTarget2D via SpriteBatch error

    - by Jared B
    I've got some simple code that uses SpriteBatch to draw a texture onto a RenderTarget2D... private void drawScene(GameTime g) { GraphicsDevice.Clear(skyColor); GraphicsDevice.SetRenderTarget(targetScene); drawSunAndMoon(); effect.Fog = true; GraphicsDevice.SetVertexBuffer(line); effect.MainEffect.CurrentTechnique.Passes[0].Apply(); GraphicsDevice.DrawPrimitives(PrimitiveType.TriangleStrip, 0, 2); GraphicsDevice.SetRenderTarget(null); SceneTexture = targetScene; } private void drawPostProcessing(GameTime g) { effect.SceneTexture = SceneTexture; GraphicsDevice.SetRenderTarget(targetBloom); spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque, null, null, null); { if(Bloom) effect.BlurEffect.CurrentTechnique.Passes[0].Apply(); spriteBatch.Draw(targetScene, new Rectangle(0, 0, Window.ClientBounds.Width, Window.ClientBounds.Height), Color.White); } spriteBatch.End(); BloomTexture = targetBloom; GraphicsDevice.SetRenderTarget(null); } Both methods are called from Draw(GameTime gameTime). First drawScene is called, then drawPostProcessing. The thing is, I can't run the code because of the error "The render target must not be set on the device when it is used as a texture." at the line spriteBatch.Draw(targetScene, new Rectangle(0, 0, Window.ClientBounds.Width, Window.ClientBounds.Height), Color.White); I already found the solution, which is to draw the actual render target (targetScene) to a texture so it doesn't create a reference to the loaded render target. However, to my knowledge, the only way of doing this is to write: GraphicsDevice.SetRenderTarget(OutputTarget) SpriteBatch.Draw(InputTarget, ...) GraphicsDevice.SetRenderTarget(null) which runs into exactly the same problem I'm having right now. So the question I'm asking is: how would I render InputTarget to OutputTarget without reference issues?
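
    A hedged reading (low confidence; the snippet alone doesn't prove it): the exception means targetScene is somehow still bound at the moment it is sampled, so the belt-and-braces fix is to unbind explicitly before reuse, and to ping-pong between two different targets whenever one would otherwise be both input and output of the same pass. Reading InputTarget while writing OutputTarget is legal as long as they are different RenderTarget2D instances; only sampling the currently-bound target is forbidden.

        GraphicsDevice.SetRenderTarget(targetBloom);   // output: bloom target
        GraphicsDevice.Textures[0] = null;             // clear any stale input binding
        spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque, null, null, null);
        spriteBatch.Draw(targetScene,                  // input: scene target, not bound for output
            new Rectangle(0, 0, Window.ClientBounds.Width, Window.ClientBounds.Height),
            Color.White);
        spriteBatch.End();
        GraphicsDevice.SetRenderTarget(null);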

    Read the article

  • How do I load a libGDX Skin on Android?

    - by Lukas
    I am pretty desperate searching for a solution to load UI skins into my Android app (actually it is not my app, it is from a tutorial I'm following). The app always crashes at this part: assets.load("ui/defaultskin/defaultskin.json", Skin.class, new SkinLoader.SkinParameter("ui/defaultskin/defaultskin.atlas")); The files are the ones from the bitowl tutorial: http://bitowl.de/day6/ I guess Gdx.files.internal doesn't work on Android, since the app crashed with that, too. Thanks for helping me out, Lukas
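
    A hedged checklist in code form: on Android the skin files must sit under the Android project's assets/ folder (assets/ui/defaultskin/...), paths are case-sensitive there even when the desktop build runs fine, and the AssetManager load has to finish before the Skin is fetched:

        AssetManager assets = new AssetManager();
        assets.load("ui/defaultskin/defaultskin.json", Skin.class,
                new SkinLoader.SkinParameter("ui/defaultskin/defaultskin.atlas"));
        assets.finishLoading();   // or call assets.update() each frame until done
        Skin skin = assets.get("ui/defaultskin/defaultskin.json", Skin.class);

    The actual stack trace (GdxRuntimeException vs. SerializationException) would pinpoint which of the three is biting.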

    Read the article

  • Multiple render targets and gamma correctness in Direct3D9

    - by Mario
    Let's say that in a deferred renderer, when building your G-Buffer, you're going to render texture color, normals, depth and whatever else to your multiple render targets at once. Now if you want a gamma-correct rendering pipeline and you use regular sRGB textures as well as rendertargets, you'll need to apply some conversions along the way, because your filtering, sampling and calculations should happen in linear space, not sRGB space. Of course, you could store linear color in your textures and rendertargets, but this might very well introduce bad precision and banding issues. Reading from sRGB textures is easy: just set SRGBTexture = true; in your texture sampler in your HLSL effect code and the hardware does the sRGB-to-linear conversion for you. Writing to an sRGB rendertarget is theoretically easy, too: just set SRGBWriteEnable = true; in your effect pass in HLSL and your linear colors will be converted to sRGB space automatically. But how does this work with multiple rendertargets? I only want these corrections applied to the color textures and rendertarget, not to the normals, depth, specularity or whatever else I'll be rendering to my G-Buffer. Ok, so I just don't apply SRGBTexture = true; to my non-color textures, but when using SRGBWriteEnable = true; a gamma correction will be applied to all the values I write out to my rendertargets, no matter what I actually store there. I found some info on gamma over at Microsoft: http://msdn.microsoft.com/en-us/library/windows/desktop/bb173460%28v=vs.85%29.aspx For hardware that supports Multiple Render Targets (Direct3D 9) or Multiple-element Textures (Direct3D 9), only the first render target or element is written. If I understand correctly, SRGBWriteEnable should only be applied to the first rendertarget, but according to my tests it isn't, and is applied to all rendertargets instead. Now the only alternative seems to be to handle these corrections manually in my shader and only correct the actual color output, but I'm not totally sure that this won't have any negative impact on color correctness, e.g. if the GPU does any blending or filtering or multisampling after the linear-to-sRGB conversion... Do I even need gamma correction in this case, if I'm just writing texture color without lighting to my rendertarget? As far as I know, I DO need it, because the texture filtering and mip sampling would otherwise happen in sRGB space if I don't correct for it. Anyway, it'd be interesting to hear other people's solutions or thoughts about this.
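
    For the manual route, a hedged HLSL sketch (the structure and names are invented for illustration): leave SRGBWriteEnable off and gamma-encode only the colour output, so the other targets stay linear. The pow(..., 1/2.2) is the usual approximation of the true sRGB curve, not an exact match.

        struct PSOutput
        {
            float4 color  : COLOR0;
            float4 normal : COLOR1;
            float4 depth  : COLOR2;
        };

        PSOutput GBufferPS(float3 albedo : TEXCOORD0,
                           float3 n      : TEXCOORD1,
                           float  z      : TEXCOORD2)
        {
            PSOutput o;
            o.color  = float4(pow(albedo, 1.0 / 2.2), 1.0); // encode colour only
            o.normal = float4(n, 1.0);                      // stays linear
            o.depth  = float4(z, 0.0, 0.0, 1.0);            // stays linear
            return o;
        }

    Blending and MSAA resolve on such a manually encoded target still happen in gamma space, which is the compromise this approach accepts.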

    Read the article

  • How to do reflective collisions with particles hitting background tiles?

    - by Shawn LeBlanc
    In my 2d pixel old-school platformer, I'm looking for methods for bouncing particles off of background tiles. Particles aren't affected by gravity and collisions are "reflective". By that I mean a particle hitting the side of a square tile at 45 degrees should bounce off at 45 degrees as well. We can assume that tiles will always be perfectly square. No slopes or anything. What are efficient methods and algorithms to do this? I'd be implementing this on a Sega Genesis.
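
    A minimal sketch (C here only for clarity; Genesis code would end up as fixed-point 68k, and particle_t/tile_solid are assumed helpers): against axis-aligned tiles the reflection v' = v - 2(v.n)n collapses to negating one velocity component, since the normal is always (+/-1,0) or (0,+/-1).

        void bounce_particle(particle_t *p)
        {
            /* test the x and y moves separately so a corner hit reflects both */
            if (tile_solid(p->x + p->vx, p->y))
                p->vx = -p->vx;            /* vertical face: mirror x */
            if (tile_solid(p->x, p->y + p->vy))
                p->vy = -p->vy;            /* horizontal face: mirror y */

            p->x += p->vx;
            p->y += p->vy;
        }

    The 45-degrees-in, 45-degrees-out property in the question falls out automatically, because mirroring one component preserves the angle against the face.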

    Read the article
