Search Results

Search found 21563 results on 863 pages for 'game testing'.


  • Why can't I create direct3d objects?

    - by quakkels
    I've been programming professionally for years using languages like VBScript, JavaScript, and C#. As a hobby, I'm getting into some C/C++ and games programming with DirectX. I am running into an issue where I cannot create Direct3D objects. I am using Visual C++ 2010 Express. After I installed VC++ 2010 Express, I then installed the June 2010 release of DirectX. I am linking the DirectX libraries via #pragma statements. This is the code I have so far in my winmain.cpp source file:

        #include <Windows.h>
        #include <d3d11.h>
        #include <time.h>
        #include <iostream>

        using namespace std;

        #pragma comment(lib, "d3d11.lib")
        #pragma comment(lib, "d3dx11.lib")

        // program settings
        const string AppTitle = "Direct3D in a Window";
        const int ScreenWidth = 1024;
        const int ScreenHeight = 768;

        // direct3d objects
        LPDIRECT3D11 d3d = NULL; // this line is showing an error

    The type LPDIRECT3D11 is showing an error: Error: Identifier "LPDIRECT3D11" is undefined. Am I missing something here to get VC++ 2010 Express to recognize and load the DirectX libs? Thanks for any help.

    Read the article

  • Math > Logic for a Logarithmic Score Meter

    - by oodavid
    I'm trying to implement a score meter whereby I specify a maximum value (say 15,000) and render values on it in a logarithmic manner, i.e.:

        +------+---+--+-++    +------+---+--+-++
        |==              |    |======          |
        +------+---+--+-++    +------+---+--+-++
             200 pts               1,000 pts

        +------+---+--+-++    +------+---+--+-++
        |=============   |    |================|
        +------+---+--+-++    +------+---+--+-++
            5,000 pts             15,000 pts

    The upper bound needs to be variable, and I need to be able to convert a score to a percentage. Using the above mockup as an example:

        score2pct(15000, 200)   = 0.2
        score2pct(15000, 1000)  = 0.4
        score2pct(15000, 5000)  = 0.8
        score2pct(15000, 15000) = 1

    Does anyone have any pointers for me?
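
    One log-style mapping that pins the top of the meter to the maximum is pct = ln(1 + k*score/max) / ln(1 + k). Below is a minimal sketch of that idea in Java; the method name matches the mockup, but the curvature constant K is my own and only approximates the percentages above (K = 100 gives roughly 0.18, 0.44, 0.77, 1.0), so it would need tuning:

        public final class ScoreMeter {
            // Curvature constant: larger K compresses the high end more.
            // K = 100 roughly reproduces the mockup's waypoints.
            private static final double K = 100.0;

            /** Maps a score in [0, max] to a fill fraction in [0, 1]. */
            public static double score2pct(double max, double score) {
                return Math.log(1.0 + K * score / max) / Math.log(1.0 + K);
            }

            public static void main(String[] args) {
                System.out.println(score2pct(15000, 200));   // ~0.18
                System.out.println(score2pct(15000, 1000));  // ~0.44
                System.out.println(score2pct(15000, 5000));  // ~0.77
                System.out.println(score2pct(15000, 15000)); // exactly 1.0
            }
        }

    Because the result is normalized by ln(1 + k), any positive k keeps score2pct(max, max) at exactly 1 while the upper bound stays variable.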

    Read the article

  • How do I render only part of a texture to a point sprite in OpenGL ES for Android?

    - by nbolton
    Using the libgdx framework, I've figured out how to render a texture to a point sprite. The problem is, it renders the entire texture to the point sprite, where I only want a small part of it (since it's an isometric tile image). Here's a snippet from some demo code I wrote:

        create() {
            renderer = new ImmediateModeRenderer();
            tiles = Gdx.graphics.newTexture(
                Gdx.files.internal("data/tiles2.png"),
                TextureFilter.MipMap, TextureFilter.Linear,
                TextureWrap.ClampToEdge, TextureWrap.ClampToEdge);
            Gdx.gl.glClearColor(0.6f, 0.7f, 0.9f, 1);
            Gdx.gl.glEnable(GL10.GL_TEXTURE_2D);
            Gdx.gl.glEnable(GL11.GL_POINT_SPRITE_OES);
            Gdx.gl11.glTexEnvi(
                GL11.GL_POINT_SPRITE_OES,
                GL11.GL_COORD_REPLACE_OES,
                GL11.GL_TRUE);
            Gdx.gl10.glPointSize(s);
            tiles.bind();
        }

        render() {
            Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
            renderer.begin(GL10.GL_POINTS);
            // render 3 point sprites at various 3d points
            renderer.vertex(-.1f, 0, -.1f);
            renderer.vertex(0, 0, 0);
            renderer.vertex(.1f, 0, .1f);
            // ... more vertices here if needed (one for each sprite) ...
            renderer.end();
        }
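
    With GL_COORD_REPLACE enabled, fixed-function point sprites always generate texture coordinates spanning the whole bound texture, so there is no per-sprite way to select a sub-rectangle. The usual workaround in libgdx is to draw tiles as quads with a TextureRegion (via SpriteBatch in screen space, or DecalBatch for quads positioned in 3D). A minimal screen-space sketch, assuming the tile occupies a 64x32 region at the atlas's top-left (the offsets and sizes are placeholders):

        import com.badlogic.gdx.Gdx;
        import com.badlogic.gdx.graphics.Texture;
        import com.badlogic.gdx.graphics.g2d.SpriteBatch;
        import com.badlogic.gdx.graphics.g2d.TextureRegion;

        public class TileRenderer {
            private final SpriteBatch batch = new SpriteBatch();
            private final Texture atlas =
                new Texture(Gdx.files.internal("data/tiles2.png"));
            // One isometric tile cut from the atlas; adjust the rectangle
            // to wherever the tile actually sits.
            private final TextureRegion tile =
                new TextureRegion(atlas, 0, 0, 64, 32);

            public void render() {
                batch.begin();
                batch.draw(tile, 100, 100); // screen position in pixels
                batch.end();
            }
        }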

    Read the article

  • How can I store all my level data in a single file instead of spread out over many files?

    - by Jon
    I am currently generating my level data and saving it to disk to ensure that any modifications done to the level are saved. I am storing "chunks" of 2048x2048 pixels in a file. Whenever the player moves over a section that doesn't have a file associated with the position, a new file is created. This works great, and is very fast. My issue is that as you play, the file count gets larger and larger. I'm wondering what techniques can be used to alleviate the file count without taking a performance hit. I am interested in how you would store/seek/update this data in a single file instead of multiple files efficiently.
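
    One common layout (a sketch with invented names, assuming fixed-size chunks) is a single region file of equal-sized slots plus an index from chunk coordinates to slot numbers; RandomAccessFile.seek() then makes reads and updates O(1), and a chunk that was never generated simply has no index entry:

        import java.io.IOException;
        import java.io.RandomAccessFile;
        import java.util.HashMap;
        import java.util.Map;

        public class RegionFile {
            private static final int CHUNK_BYTES = 2048 * 2048 * 4; // raw RGBA
            private final RandomAccessFile file;
            private final Map<Long, Integer> index = new HashMap<>(); // (x,y) -> slot
            private int nextSlot = 0;

            public RegionFile(String path) throws IOException {
                file = new RandomAccessFile(path, "rw");
            }

            private static long key(int cx, int cy) {
                return ((long) cx << 32) | (cy & 0xffffffffL);
            }

            public void writeChunk(int cx, int cy, byte[] data) throws IOException {
                Integer slot = index.get(key(cx, cy));
                if (slot == null) {                 // first save: append a new slot
                    slot = nextSlot++;
                    index.put(key(cx, cy), slot);
                }
                file.seek((long) slot * CHUNK_BYTES);
                file.write(data);
            }

            public byte[] readChunk(int cx, int cy) throws IOException {
                Integer slot = index.get(key(cx, cy));
                if (slot == null) return null;      // chunk never generated
                byte[] data = new byte[CHUNK_BYTES];
                file.seek((long) slot * CHUNK_BYTES);
                file.readFully(data);
                return data;
            }
        }

    For variable-sized (e.g. compressed) chunks, the index would store an (offset, length) pair instead, and the index itself would need persisting in a file header so it survives restarts.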

    Read the article

  • Confusion with Libgdx UI

    - by BrotherJack
    I've started with Libgdx and am currently stumbling about trying to understand how to set up the interface. I have generated the base projects in Eclipse (<proj-name>, <proj-name>-android, <proj-name>-desktop, <proj-name>-html), and can get the program to display a simple background, play a looping sound file, and draw a tank. I have been having some problems implementing the UI though. I want to make a collapsible interface bar at the bottom of the screen that would contain buttons for movement and selecting weapons. I'm confused, since there appear to be several ways of doing this and the documentation (or the tutorials explaining it) tends to be obsolete. How would one go about this? Use a stage for the bar and actors for the widgets? I'm a little lost on this.
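
    Stage-plus-actors is indeed the scene2d way: scene2d.ui ships Table, TextButton and friends, and one Stage can host the whole bar. A minimal sketch (the skin file name and button labels are placeholders, and the exact Stage constructor varies between libgdx versions):

        import com.badlogic.gdx.Gdx;
        import com.badlogic.gdx.scenes.scene2d.Stage;
        import com.badlogic.gdx.scenes.scene2d.ui.Skin;
        import com.badlogic.gdx.scenes.scene2d.ui.Table;
        import com.badlogic.gdx.scenes.scene2d.ui.TextButton;

        public class HudBar {
            private final Stage stage = new Stage();
            private final Skin skin = new Skin(Gdx.files.internal("uiskin.json"));

            public HudBar() {
                Table bar = new Table(skin);
                bar.setFillParent(true);
                bar.bottom();                        // pin the bar to the bottom edge
                bar.add(new TextButton("Move", skin)).pad(4);
                bar.add(new TextButton("Weapon", skin)).pad(4);
                stage.addActor(bar);
                Gdx.input.setInputProcessor(stage);  // route touches to the UI
            }

            public void render(float delta) {
                stage.act(delta);
                stage.draw();
            }
        }

    Collapsing the bar can then be an Action that slides the Table below the screen edge, toggled by one extra button.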

    Read the article

  • Best practices of texture size

    - by psal
    I wanted to know how I should determine a good texture size. Currently I always create UV textures that are 1024x1024 px, but if I create, for example, a big house with a 1024 px texture, it will look pretty bad. So, should I create different texture sizes (512, 1024, ...) for different mesh sizes, like this? Or is it better to always make high-resolution textures and then reduce them in the software (i.e. increase the LODBias setting in UDK to reduce the size of the texture)? Thanks for your answer. PS: sorry for my English!

    Read the article

  • Circle to Circle collision, checking each circle against all others

    - by user14861
    I'm currently coding a little circle-to-circle collision demo but I've got a little stuck. I think I currently have the code to detect collision, but I'm not sure how to loop through my list of circles and check them off against one another. Collision check code:

        public static Vector2 GetIntersectionDepth(Circle a, Circle b)
        {
            float xValue = a.Center.X - b.Center.X;
            float yValue = a.Center.Y - b.Center.Y;
            Vector2 depth = Vector2.Zero;
            float distance = Vector2.Distance(a.Center, b.Center);
            if (a.Radius + b.Radius > distance)
            {
                float result = (a.Radius + b.Radius) - distance;
                depth.X = (float)Math.Cos(result);
                depth.Y = (float)Math.Sin(result);
            }
            return depth;
        }

    Loop-through code:

        Vector2 depth = Vector2.Zero;
        for (int i = 0; i < bounds.Count; i++)
            for (int j = i + 1; j < bounds.Count; j++)
            {
                depth = CircleToCircleIntersection.GetIntersectionDepth(bounds[i], bounds[j]);
            }

    Clearly I'm a complete newbie, wondering if anyone can give any suggestions or point out my errors, thanks in advance. :)
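
    The i/j loop is already the right shape for checking each circle against all others (it visits each unordered pair exactly once). The suspect part is the depth math: it feeds the overlap distance into cos/sin as if it were an angle. The usual penetration vector is the direction between centers scaled by the overlap; a sketch of that calculation (written in Java, but the math carries straight over to the XNA Vector2 version):

        public final class CircleCollision {
            /**
             * Returns {dx, dy}, the vector that pushes circle a out of
             * circle b, or null when the circles don't overlap.
             */
            public static float[] intersectionDepth(float ax, float ay, float ar,
                                                    float bx, float by, float br) {
                float dx = ax - bx, dy = ay - by;
                float dist = (float) Math.sqrt(dx * dx + dy * dy);
                float overlap = (ar + br) - dist;
                if (overlap <= 0) return null;                    // no collision
                if (dist == 0) return new float[] { ar + br, 0 }; // coincident centers
                return new float[] { dx / dist * overlap,         // unit direction
                                     dy / dist * overlap };       // times overlap
            }
        }

    The loop would also need to do something with each non-null result (separate the pair, remove a circle, etc.) rather than overwrite depth on every iteration.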

    Read the article

  • List of Open Source Java Games for Android

    - by BluFire
    I'm wondering if there are any more open source games than the ones you can plainly see when you search for a list of open source games for Android on Google. For example, is there a good website that has compiled open source games? I don't want an answer of "go google it" or "en.wikipedia.org/wiki/List_of_open_source_Android_applications" - it gets really annoying on posts when people just give lazy answers.

    Read the article

  • How do I show a minimap in a 3D world?

    - by Bubblewrap
    Got a really typical use-case here. I have large map made up of hexagons and at any given time only a small section of the map is visible. To provide an overview of the complete map, I want to show a small 2D representation of the map in a corner of the screen. What is the recommended approach for this in libgdx? Keep in mind the minimap must be updated when the currently visible section changes and when the map is updated. I've found SpriteBatch(info here), but the warning label on it made me think twice: A SpriteBatch is a pretty heavy object so you should only ever have one in your program. I'm not sure I'm supposed to use the one SpriteBatch that I can have on the minimap, and I'm also not sure how to interpret "heavy" in this context. Another thing to possibly keep in mind is that the minimap will probably be part of a larger UI, is there any way to integrate these two?
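
    The warning means a SpriteBatch allocates a large internal vertex buffer, so the advice is to create one and reuse it for all 2D drawing - including the minimap - not to avoid using it here. One way to structure it (a sketch with invented names) is to pre-render the overview into a small texture whenever the map changes, then draw that texture each frame with the shared batch under a screen-space camera:

        import com.badlogic.gdx.graphics.OrthographicCamera;
        import com.badlogic.gdx.graphics.Texture;
        import com.badlogic.gdx.graphics.g2d.SpriteBatch;

        public class MinimapRenderer {
            // Screen-space camera for UI; 800x480 is a placeholder size.
            private final OrthographicCamera uiCamera = new OrthographicCamera(800, 480);

            public MinimapRenderer() {
                uiCamera.position.set(400, 240, 0); // center on the screen
                uiCamera.update();
            }

            /** minimapTexture: small overview, regenerated on map updates. */
            public void render(SpriteBatch batch, Texture minimapTexture) {
                batch.setProjectionMatrix(uiCamera.combined);
                batch.begin();
                // 160x120 minimap in the top-right corner, 8 px margin.
                batch.draw(minimapTexture, 800 - 160 - 8, 480 - 120 - 8, 160, 120);
                batch.end();
            }
        }

    Since a larger scene2d UI would use the same batch anyway, the minimap integrates naturally: show it as an Image actor backed by the same texture, with a small rectangle drawn on top marking the currently visible section.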

    Read the article

  • How should I interpret these DirectX Caps Viewer values?

    - by tobi
    Briefly asking - what do the nodes mean, and what is the difference between them, in DirectX Caps Viewer?

        1. DXGI Devices
        2. Direct3D9 Devices
        3. DirectDraw Devices

    The most interesting for me is 1 vs 2. In the Direct3D9 Devices, under the HAL node, I can see that my GeForce 8800GT supports PixelShaderVersion 3.0. However, under DXGI Devices I have DX 10, DX 10.1 and DX 11 having Shader Model 4.0 (actually, why DX 11? My card is not compatible with DX 11). I am implementing a DX 11 application (including d3d11.h) with shaders compiled for version 4.0, so I can clearly see that 4.0 is supported. What is the difference between 1 and 2? Could you give me some theory behind the nodes?

    Read the article

  • XNA, how to draw two cubes standing in line parallelly?

    - by user3535716
    I just got a problem with drawing two 3D cubes standing in line. In my code, I made a cube class, and in the game1 class I built two cubes: A on the right side, B on the left side. I also set up an FPS camera in the 3D world. The problem is, if I draw cube B first (blue) and move the camera to the left side to cube B, A (red) is still standing in front of B, which is apparently wrong. I guess some pics can make much sense. Then, I move the camera to the other side, and the situation is like: This is wrong... From this view, the red cube A should be behind the blue one, B... Could somebody give me help please? This is the draw method in the Cube class:

        Matrix center = Matrix.CreateTranslation(new Vector3(-0.5f, -0.5f, -0.5f));
        Matrix scale = Matrix.CreateScale(0.5f);
        Matrix translate = Matrix.CreateTranslation(location);
        effect.World = center * scale * translate;
        effect.View = camera.View;
        effect.Projection = camera.Projection;
        foreach (EffectPass pass in effect.CurrentTechnique.Passes)
        {
            pass.Apply();
            device.SetVertexBuffer(cubeBuffer);
            RasterizerState rs = new RasterizerState();
            rs.CullMode = CullMode.None;
            rs.FillMode = FillMode.Solid;
            device.RasterizerState = rs;
            device.DrawPrimitives(PrimitiveType.TriangleList, 0, cubeBuffer.VertexCount / 3);
        }

    This is the Draw method in game1:

        A.Draw(camera, effect);
        B.Draw(camera, effect);

    Read the article

  • Extrapolation breaks collision detection

    - by user22241
    Before applying extrapolation to my sprite's movement, my collision worked perfectly. However, after applying extrapolation to my sprite's movement (to smooth things out), the collision no longer works. This is how things worked before extrapolation. After I implement my extrapolation, the collision routine breaks. I am assuming this is because it is acting upon the new coordinate that has been produced by the extrapolation routine (which is situated in my render call). How do I correct this behaviour? I've tried putting an extra collision check just after extrapolation - this does seem to clear up a lot of the problems, but I've ruled this out because putting logic into my rendering is out of the question. I've also tried making a copy of the sprite's X position, extrapolating that, and drawing using that rather than the original, thus leaving the original intact for the logic to pick up on - this seems a better option, but it still produces some weird effects when colliding with walls. I'm pretty sure this also isn't the correct way to deal with this. I've found a couple of similar questions on here, but the answers haven't helped me. This is my extrapolation code:

        public void onDrawFrame(GL10 gl) {
            // Set/re-set loop back to 0 to start counting again
            loops = 0;
            while (System.currentTimeMillis() > nextGameTick && loops < maxFrameskip) {
                SceneManager.getInstance().getCurrentScene().updateLogic();
                nextGameTick += skipTicks;
                timeCorrection += (1000d / ticksPerSecond) % 1;
                nextGameTick += timeCorrection;
                timeCorrection %= 1;
                loops++;
                tics++;
            }
            extrapolation = (float) (System.currentTimeMillis() + skipTicks - nextGameTick)
                    / (float) skipTicks;
            render(extrapolation);
        }

    Applying extrapolation:

        render(float extrapolation) {
            // This example shows extrapolation for the X axis only.
            // The Y position (spriteScreenY) is assumed to be valid.
            extrapolatedPosX = spriteGridX + (spriteXVelocity * dt) * extrapolation;
            spriteScreenPosX = extrapolatedPosX * screenWidth;
            drawSprite(spriteScreenPosX, spriteScreenY);
        }

    Edit: As I mentioned above, I have tried making a copy of the sprite's coordinates specifically to draw with... this has its own problems. Firstly, regardless of the copying, when the sprite is moving it's super-smooth; when it stops, it wobbles slightly left/right, as it's still extrapolating its position based on the time. Is this normal behaviour, and can we 'turn it off' when the sprite stops? I've tried having flags for left/right and only extrapolating if either of these is enabled. I've also tried copying the last and current positions to see if there is any difference. However, as far as collision goes, these don't help. If the user is pressing, say, the right button and the sprite is moving right, when it hits a wall, if the user continues to hold the right button down, the sprite will keep animating to the right while being stopped by the wall (therefore not actually moving). However, because the right flag is still set, and also because the collision routine is constantly moving the sprite out of the wall, it still appears to the code (not the player) that the sprite is still moving, and therefore extrapolation continues. So what the player sees is the sprite 'static' (yes, it's animating, but it's not actually moving across the screen), and every now and then it shakes violently as the extrapolation attempts to do its thing... Hope this helps.
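
    A common fix for exactly this situation (a sketch of the general technique, not the poster's engine) is to interpolate between the previous and current logic states at render time instead of extrapolating ahead of them. Rendering then only ever shows positions the collision code has already approved, and a stationary sprite has prevX == currX, so the stop-wobble disappears by construction:

        // Sketch: render-time interpolation between the last two fixed
        // logic steps. prevX/currX change only inside the logic tick.
        public class SpriteState {
            float prevX, currX;

            void logicTick(float velocityPerTick) {
                prevX = currX;
                currX += velocityPerTick; // collision response clamps currX here
            }

            // alpha in [0,1): how far we are into the next logic tick
            float renderX(float alpha) {
                return prevX + (currX - prevX) * alpha;
            }
        }

    The cost is that everything renders one logic tick behind the simulation, which at typical tick rates is imperceptible, and the renderer can never draw a sprite inside a wall.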

    Read the article

  • OpenGL vs DirectX?

    - by Harold
    I saw the articles that were going around about OpenGL being better than DirectX, claiming that Microsoft is really just trying to get everyone to use DirectX even though it's inferior, so that gaming is almost exclusively for Windows and Xbox. But since the article was written in 2006, is it still relevant today? Also, I know plenty of games are written in DirectX, but does anyone have any examples of popular games written in OpenGL? Thanks

    Read the article

  • (int) Math.floor(x / TILESIZE) or just (int) (x / TILESIZE)

    - by Aidan Mueller
    I have an array that stores my map data, and my tiles are 64x64. Sometimes I need to convert from pixels to units of tiles, so I was doing:

        int x;
        int y;

        public void myFunction() {
            getTile((int) Math.floor(x / 64), (int) Math.floor(y / 64)).doOperation();
        }

    But I discovered (I'm using Java, BTW) by using System.out.println((int) (1 / 1.5)) that converting to an int automatically rounds down. This means that I can replace the (int) Math.floor with just x / 64. But if I run this on a different OS, do you think it might give a different result? I'm just afraid there might be some case where this would round up and not down. Should I keep doing it the way I was, and maybe make a function like convert(int i) to make it easier? Or is it OK to just do x / 64?
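
    This behaviour comes from the Java language specification, not the operating system, so it cannot vary between platforms. The real catch is that casting (and integer division) round toward zero, while Math.floor rounds toward negative infinity - the two only disagree for negative values, which matters if the map can extend left of or above the origin. A quick demonstration (Math.floorDiv exists since Java 8):

        public class FloorVsTruncate {
            public static void main(String[] args) {
                System.out.println((int) (130 / 64.0));            // 2  (same as floor)
                System.out.println((int) Math.floor(130 / 64.0));  // 2
                System.out.println((int) (-130 / 64.0));           // -2 (toward zero!)
                System.out.println((int) Math.floor(-130 / 64.0)); // -3 (true floor)
                System.out.println(Math.floorDiv(-130, 64));       // -3, no doubles needed
            }
        }

    Note that in the original snippet x / 64 is already integer division, so the Math.floor call never sees a fraction anyway; if coordinates are always non-negative, plain x / 64 is safe and clear.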

    Read the article

  • Point[] and Tri: "could not be found"

    - by Craig Dannehl
    Hi, I'm trying to learn how to load a .obj file using OpenTK in Windows Forms. I have seen a lot of examples out there, but I see almost everyone uses List and Point[]. Code examples show these highlighted as if their IDE knows what they are, for example:

        List<Tri> tris = new List<Tri>();

    but mine just returns "The type or namespace name 'Tri' could not be found". Is there an include I need to add, or a using I am missing? Currently I have this:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.IO;
        using System.Drawing;
        using OpenTK;
        using OpenTK.Graphics;
        using OpenTK.Graphics.OpenGL;

    Read the article

  • What different ways are there to model restitution in a physics engine?

    - by Mikael Högström
    In my physics engine I give a body a value for restitution between 0 and 1. When two bodies collide there seems to be different views on how the restitution of the collision should be calculated. To me the most intuitive seems to be to take the average of the two but some seem to take only the largest one. Are there other ways to do it? Also, could the closing velocity or some other parameter come into effect?
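
    For reference, a sketch of the combine rules that come up most often (the enum is my own framing; as far as I know Box2D, for example, takes the maximum of the two values):

        // Sketch: common ways to combine two bodies' restitution values.
        public enum RestitutionMix {
            AVERAGE, MAX, MIN, MULTIPLY;

            public float combine(float a, float b) {
                switch (this) {
                    case AVERAGE:  return 0.5f * (a + b); // intuitive middle ground
                    case MAX:      return Math.max(a, b); // bouncy material wins
                    case MIN:      return Math.min(a, b); // dead material wins
                    case MULTIPLY: return a * b;          // both damp the bounce
                    default:       throw new AssertionError();
                }
            }
        }

    On the closing-velocity question: many engines do use it, zeroing restitution below a small closing-velocity threshold so slow resting contacts don't jitter forever on the floor.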

    Read the article

  • OpenGL ES 2.0 texture distortion on large geometry

    - by Spruce
    OpenGL ES 2.0 has serious precision issues with texture sampling - I've seen topics with a similar problem, but I haven't seen a real solution to this "distorted OpenGL ES 2.0 texture" problem yet. This is not related to the texture's image format or OpenGL color buffers; it seems like it's a precision error. I don't know what specifically causes the precision to fail - it doesn't seem like it's just the size of geometry that causes this distortion, because simply scaling the vertex position passed to the vertex shader does not solve the issue. Here are some examples of the texture distortion:

        Distorted texture (on OpenGL ES 2.0): http://i47.tinypic.com/3322h6d.png
        What the texture normally looks like (also on OpenGL ES 2.0): http://i49.tinypic.com/b4jc6c.png

    What I've observed and tried:

    - The texture issue is limited to small-scale geometry on OpenGL ES 2.0; otherwise the texture sampling appears normal, but the grainy effect gradually worsens the further the vertex data is from the origin XYZ(0,0,0).
    - These texture issues do not occur on desktop OpenGL (it works fine under Windows XP, Windows 7, and Mac OS X). I've only seen the problem occur on Android, iPhone, or WebGL (which is similar to OpenGL ES 2.0).
    - All textures are power of 2, but the problem still occurs.
    - Scaling the vertex data: the values of a vertex's X Y Z location are in the range -65536 to +65536 floating point. I realized this was large, so I tried dividing the vertex positions by 1024 to shrink the geometry and hopefully get more accurate floating-point precision, but this didn't fix or lessen the texture distortion issue.
    - Scaling the modelview or scaling the projection matrix does not help.
    - Changing texture filtering options does not help: disabling mipmapping, or using GL_NEAREST/GL_LINEAR, does nothing, and enabling/disabling anisotropic does nothing.
    - The banding effect still occurs even when using GL_CLAMP.
    - Dividing the texture coords passed to the vertex shader and then multiplying them back to the correct values in the fragment shader also does not work.
    - precision highp sampler2D, highp float, highp int - in the fragment or the vertex shader - didn't change anything (lowp/mediump did not work either).

    I'm thinking this problem has to have been solved at one point, seeing that OpenGL ES 2.0-based games have been able to render large-scale, highly detailed geometry.
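
    Those symptoms (worse with distance from the origin, insensitive to formats and filtering) are characteristic of float precision running out once positions approach ±65536; note that dividing positions by 1024 doesn't help if the division itself happens in 32-bit floats, since the rounding error is already baked in. The standard remedy is a floating origin: keep world coordinates in doubles on the CPU and send the GPU only camera-relative values, which stay small. A sketch of the idea (names invented, written in Java as for an Android renderer):

        public class FloatingOrigin {
            /**
             * Converts a world-space position (kept in doubles on the CPU)
             * to a camera-relative position for the vertex buffer. The
             * subtraction runs in double precision; only the small result
             * is rounded to float.
             */
            public static void toCameraRelative(double[] worldXYZ, double[] cameraXYZ,
                                                float[] outXYZ) {
                for (int i = 0; i < 3; i++) {
                    outXYZ[i] = (float) (worldXYZ[i] - cameraXYZ[i]);
                }
            }
        }

    The view matrix then carries no large translation, so vertex positions and the varyings interpolated from them keep their precision near the camera, where errors are visible.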

    Read the article

  • Blender - creating bones from transform matrices

    - by user975135
    Notice: this is for the Blender 2.5/2.6 API. Back in the old days, in the Blender 2.4 API, you could easily create a bone from a transform matrix in your 3D file, as EditBones had an attribute named "matrix", which was an armature-space matrix you could access and modify. The new 2.5+ API still has the "matrix" attribute for EditBones, but for some unknown reason it is now read-only. So how do you create EditBones from transform matrices? I could only find one thing: a new transform() function, which takes a Matrix too, and transforms the bone's head, tail, roll and envelope (when the matrix has a scale component). Perfect, but you already need to have some values (loc/rot/scale) for your bone; otherwise transforming with a matrix like this will give you nothing - your bone will be a zero-sized bone, which will be deleted by Blender. If you create default bone values first, like this:

        bone.tail = mathutils.Vector([0, 1, 0])

    then transform() will work on your bone, and it might seem to create correct bones. But setting a tail position actually generates a matrix itself, so with transform() you don't get the matrix from your model file on your EditBone, but the multiplication of your matrix with the bone's existing one. This can be easily proven by comparing the matrices read from the file with EditBone.matrix. Again, it might seem correct in Blender, but now export your model and you see your animations are messed up, as the bind-pose rotations of the bones are wrong. I've tried to find an alternative way to assign the transformation matrix from my file to my EditBone, with no luck.

    Read the article

  • How does a BSP tree work for Z sorting?

    - by Jenko
    I'm developing a 3D engine in software, and so I must compute Z sorting manually. I'm currently using the painter's algorithm to sort triangles and then drawing them back to front. This causes artifacts that I'm trying to correct.

    - Would using a dynamic BSP tree ensure "correct Z sorting" of triangles? Why - because the bounding volumes of the triangles would be similar?
    - Since I would have a single "world" BSP tree, would I have to remove and re-add any moved/scaled/rotated object into the tree?
    - Is it possible to add triangles into a BSP tree without the expensive cutting process? Why do you need to cut triangles on the axis planes anyway?
    - Is it faster to traverse a BSP tree from any angle than to sort all tris each draw like the painter's algorithm?
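
    For intuition, this is the traversal that makes a BSP tree yield a correct back-to-front order from any viewpoint (a sketch with invented types): at each node, the half-space not containing the eye is drawn first. The guarantee only holds because construction splits any triangle that straddles a node's plane - which is exactly why the cutting step can't be skipped - and it replaces per-frame sorting with a linear walk of the tree.

        import java.util.List;
        import java.util.function.Consumer;

        class BspNode {
            float[] plane;           // {a, b, c, d} with ax + by + cz + d = 0
            BspNode front, back;
            List<int[]> triangles;   // triangles lying in this node's plane

            void renderBackToFront(float[] eye, Consumer<int[]> draw) {
                float side = plane[0] * eye[0] + plane[1] * eye[1]
                           + plane[2] * eye[2] + plane[3];
                BspNode near = side >= 0 ? front : back; // eye's half-space
                BspNode far  = side >= 0 ? back  : front;
                if (far != null) far.renderBackToFront(eye, draw);   // farthest first
                for (int[] tri : triangles) draw.accept(tri);        // coplanar next
                if (near != null) near.renderBackToFront(eye, draw); // nearest last
            }
        }

    A static world tree usually holds only level geometry; dynamic objects are typically sorted separately (or handled with a Z-buffer) rather than re-inserted into the tree every frame.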

    Read the article

  • Which Flash 3D particle engine generates such XML files?

    - by Huang F. Lei
    I found some particle config files like the one below, but I don't know which Flash 3D particle engine uses them; they are different from Away3D's, which use 'root' as the root element of the XML.

        <effect pos="0 0 0">
            <property cache="1" lifetime="10000"/>
            <mesh blendmode="add">
                <path>
                    <frame y="100" durtime="1000" x="0" z="0"/>
                </path>
                <scale>
                    <frame y="0.2000000001" durtime="300" x="2.2" z="2.2"/>
                    <frame y="0.4" durtime="300" x="2.7" z="2.7"/>
                </scale>
            </mesh>
            <vibrate delayTime="100" amplitude="10" durationTime="750" intension="50"/>
            <quad billboard="false">
            </quad>
            <particle global="false" pos="">
                <scale>
                    <frame y="1" durtime="0" x="1" z="1"/>
                    <frame y="1" durtime="2000" x="1.5" z="1.5"/>
                </scale>
            </particle>
        </effect>

    Read the article

  • Rotations and Origins

    - by Theodore Enderby
    I was hoping someone could explain to me, or help me understand, the math behind rotations and origins. I'm working on a little top-down space sim and I can rotate my ship just how I want it. Now, when I get my blasters going, it'd be nice if they shared the same rotation. Here's a picture, and here's some code:

        blast.X = ship.X + 5;
        blast.Y = ship.Y;
        blast.RotationAngle = ship.RotationAngle;
        blast.Origin = new Vector2(ship.Origin.X, ship.Origin.Y);

    I add five so the sprite adds up when facing right. I tried adding five to the blast origin, but no go. Any help is much appreciated.
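
    The missing step is rotating the muzzle offset by the ship's angle before adding it: a fixed +5 on X only lines up when the ship faces right. A sketch of the standard 2D rotation (Java-style, with the +5 offset from the question; the original is XNA, but the math is identical):

        // Sketch: spawn the blast at a muzzle offset rotated into the
        // ship's frame, so it lines up at every facing.
        static float[] muzzlePosition(float shipX, float shipY, float angle) {
            float offsetX = 5f, offsetY = 0f; // muzzle relative to ship center
            float cos = (float) Math.cos(angle);
            float sin = (float) Math.sin(angle);
            return new float[] {
                shipX + offsetX * cos - offsetY * sin,
                shipY + offsetX * sin + offsetY * cos
            };
        }

    The blast's origin can then stay at the blast sprite's own center; it doesn't need to borrow the ship's origin once the spawn position itself is rotated correctly.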

    Read the article

  • GLSL compiler messages from different vendors [on hold]

    - by revers
    I'm writing a GLSL shader editor and I want to parse GLSL compiler messages to make hyperlinks to invalid lines in the shader code. I know that these messages are vendor-specific, but currently I only have access to AMD video cards. I want to handle at least NVIDIA's and Intel's hardware, apart from AMD's. If you have a video card from a vendor other than AMD, could you please give me the output of the following C++ program:

        #include <GL/glew.h>
        #include <GL/freeglut.h>
        #include <iostream>

        using namespace std;

        #define STRINGIFY(X) #X

        static const char* fs = STRINGIFY(
            out vec4 out_Color;
            mat4 m;
            void main() {
                vec3 v3 = vec3(1.0);
                vec2 v2 = v3;
                out_Color = vec4(5.0 * v2.x, 1.0);
                vec3 k = 3.0;
                float = 5;
            }
        );

        static const char* vs = STRINGIFY(
            in vec3 in_Position;
            void main() {
                vec3 v(5);
                gl_Position = vec4(in_Position, 1.0);
            }
        );

        void printShaderInfoLog(GLint shader) {
            int infoLogLen = 0;
            int charsWritten = 0;
            GLchar* infoLog;
            glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &infoLogLen);
            if (infoLogLen > 0) {
                infoLog = new GLchar[infoLogLen];
                glGetShaderInfoLog(shader, infoLogLen, &charsWritten, infoLog);
                cout << "Log:\n" << infoLog << endl;
                delete [] infoLog;
            }
        }

        void printProgramInfoLog(GLint program) {
            int infoLogLen = 0;
            int charsWritten = 0;
            GLchar* infoLog;
            glGetProgramiv(program, GL_INFO_LOG_LENGTH, &infoLogLen);
            if (infoLogLen > 0) {
                infoLog = new GLchar[infoLogLen];
                glGetProgramInfoLog(program, infoLogLen, &charsWritten, infoLog);
                cout << "Program log:\n" << infoLog << endl;
                delete [] infoLog;
            }
        }

        void initShaders() {
            GLuint v = glCreateShader(GL_VERTEX_SHADER);
            GLuint f = glCreateShader(GL_FRAGMENT_SHADER);
            GLint vlen = strlen(vs);
            GLint flen = strlen(fs);
            glShaderSource(v, 1, &vs, &vlen);
            glShaderSource(f, 1, &fs, &flen);
            GLint compiled;
            glCompileShader(v);
            bool succ = true;
            glGetShaderiv(v, GL_COMPILE_STATUS, &compiled);
            if (!compiled) {
                cout << "Vertex shader not compiled." << endl;
                succ = false;
            }
            printShaderInfoLog(v);
            glCompileShader(f);
            glGetShaderiv(f, GL_COMPILE_STATUS, &compiled);
            if (!compiled) {
                cout << "Fragment shader not compiled." << endl;
                succ = false;
            }
            printShaderInfoLog(f);
            GLuint p = glCreateProgram();
            glAttachShader(p, v);
            glAttachShader(p, f);
            glLinkProgram(p);
            glUseProgram(p);
            printProgramInfoLog(p);
            if (!succ) {
                exit(-1);
            }
            delete [] vs;
            delete [] fs;
        }

        int main(int argc, char* argv[]) {
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);
            glutInitWindowSize(600, 600);
            glutCreateWindow("Triangle Test");
            glewInit();
            GLenum err = glewInit();
            if (GLEW_OK != err) {
                cout << "glewInit failed, aborting." << endl;
                exit(1);
            }
            cout << "Using GLEW " << glewGetString(GLEW_VERSION) << endl;
            const GLubyte* renderer = glGetString(GL_RENDERER);
            const GLubyte* vendor = glGetString(GL_VENDOR);
            const GLubyte* version = glGetString(GL_VERSION);
            const GLubyte* glslVersion = glGetString(GL_SHADING_LANGUAGE_VERSION);
            GLint major, minor;
            glGetIntegerv(GL_MAJOR_VERSION, &major);
            glGetIntegerv(GL_MINOR_VERSION, &minor);
            cout << "GL Vendor : " << vendor << endl;
            cout << "GL Renderer : " << renderer << endl;
            cout << "GL Version : " << version << endl;
            cout << "GL Version : " << major << "." << minor << endl;
            cout << "GLSL Version : " << glslVersion << endl;
            initShaders();
            return 0;
        }

    On my video card it gives:

        Status: Using GLEW 1.7.0
        GL Vendor : ATI Technologies Inc.
        GL Renderer : ATI Radeon HD 4250
        GL Version : 3.3.11631 Compatibility Profile Context
        GL Version : 3.3
        GLSL Version : 3.30

        Vertex shader not compiled.
        Log: Vertex shader failed to compile with the following errors:
        ERROR: 0:1: error(#132) Syntax error: '5' parse error
        ERROR: error(#273) 1 compilation errors. No code generated

        Fragment shader not compiled.
        Log: Fragment shader failed to compile with the following errors:
        WARNING: 0:1: warning(#402) Implicit truncation of vector from size 3 to size 2.
        ERROR: 0:1: error(#174) Not enough data provided for construction constructor
        WARNING: 0:1: warning(#402) Implicit truncation of vector from size 1 to size 3.
        ERROR: 0:1: error(#132) Syntax error: '=' parse error
        ERROR: error(#273) 2 compilation errors. No code generated

        Program log: Vertex and Fragment shader(s) were not successfully compiled before glLinkProgram() was called. Link failed.

    Or, if you like, you could give me other compiler messages than the ones proposed by me. To summarize, the question is: what are the GLSL compiler message formats (INFOs, WARNINGs, ERRORs) for different vendors? Please give me examples or pattern explanations.

    EDIT: Ok, it seems that this question is too broad, so, shortly: how do NVIDIA's and Intel's GLSL compilers present ERROR and WARNING messages? AMD/ATI uses patterns like this (examples are above):

        ERROR: <position>:<line_number>: <message>
        WARNING: <position>:<line_number>: <message>

    Read the article

  • D3D9 Alpha Blending on the surfaces

    - by Indeera
    I have a surface (OffScreenPlain or RenderTarget, D3DFMT_A8R8G8B8) which I copy pixels (ARGB) to from a third-party function. Before the pixel copying, the bits are accessed by LockRect. This surface is then StretchRect'ed to the back buffer, which is also D3DFMT_A8R8G8B8. The surface and back buffer are different dimensions, and filtering is set to D3DTEXF_NONE. Just after creating the D3D device, I set the following render states:

        D3DRS_ALPHABLENDENABLE -> TRUE
        D3DRS_BLENDOP          -> D3DBLENDOP_ADD
        D3DRS_SRCBLEND         -> D3DBLEND_SRCALPHA
        D3DRS_DESTBLEND        -> D3DBLEND_INVSRCALPHA

    But I see no alpha blending happening. I've verified that alpha is specified in the pixels. I've done a simple test by creating a vertex buffer and drawing a triangle (DrawPrimitive), which displays with alpha blending. In this test the surface was StretchRect'ed first and then DrawPrimitive was called; the surface content displays without alpha blending and the triangle displays with alpha blending. What am I missing here? Thanks

    Read the article

  • Floodfill algorithm for GO

    - by user1048606
    The flood fill algorithm is used by the bucket tool in MS Paint and Photoshop, but it can also be used for Go and Minesweeper. http://en.wikipedia.org/wiki/Flood_fill In Go you can capture groups of stones; this website portrays it with two stones: http://www.connectedglobe.com/mindy/cap6.html This is my flood fill method in Java. It is not capturing a group of stones, and I have no idea why, because to me it makes sense:

        public void floodfill(int turn, int col, int row) {
            for (int a = col; a < 19; a++) {
                for (int b = row; b < 19; b++) {
                    if (turn == black) {
                        if (stones[col][row] == white) {
                            stones[col][row] = 0;
                            floodfill(black, col - 1, row);
                            floodfill(black, col + 1, row);
                            floodfill(black, col, row - 1);
                            floodfill(black, col, row + 1);
                        }
                    }
                }
            }
        }

    It searches up, down, left, and right for all the stones on the board. If the stones are white, it captures them by making them 0, which represents empty.
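
    For comparison, the recursive form of flood fill needs no loops at all, but it does need bounds checks so the recursion stops at the board edges - without them the snippet above indexes off the board (and the a/b loop variables are never used, so the loops only repeat the same cell). A sketch of the usual shape (note that real Go capture first verifies the group has no liberties; that check is omitted here):

        // Sketch: plain recursive flood fill over a 19x19 board.
        // Replaces one connected group of `target` stones with `replacement`.
        public void floodfill(int[][] stones, int col, int row,
                              int target, int replacement) {
            if (col < 0 || col >= 19 || row < 0 || row >= 19) return; // off board
            if (stones[col][row] != target) return; // not part of the group
            stones[col][row] = replacement;         // capture this stone
            floodfill(stones, col - 1, row, target, replacement);
            floodfill(stones, col + 1, row, target, replacement);
            floodfill(stones, col, row - 1, target, replacement);
            floodfill(stones, col, row + 1, target, replacement);
        }

    Calling it once on any stone of a white group, with target = white and replacement = 0, clears the whole connected group.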

    Read the article

  • sprite animation system height recalculating has some issues

    - by Nicolas Martel
    Basically, the way it works is that it updates the frame to show every, let's say, 24 ticks, and every time the frame updates, it recalculates the height and width of the new sprite to render so that my gravity logic and related systems work correctly. But the problem I am having now is a bit hard to explain in words only, so I will use this picture to assist me: The picture. So what I need, basically, is this: say I froze the sprite at the first frame, then unfroze it and froze it at the second frame. I need the second frame's sprite (let's say it's a prone move) to simply stand on the foothold without triggering gravity, and when switching back, I need the first sprite to go back on the foothold as normal without ending up under the foothold. I had two ideas for doing this, but I'm not sure they're the most efficient ways to do it, so I want to hear your input.
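
    One way to keep a frame-size change from ever triggering gravity is to anchor sprites by their feet instead of their top-left corner: the physics position is the point on the foothold, and only the draw position depends on the current frame's size. A sketch with invented names (drawFrame stands in for whatever the renderer's draw call is):

        // Sketch: anchor sprites by their feet. frameW/frameH are the
        // current animation frame's size; (footX, footY) is the point
        // that stays glued to the foothold, whatever the frame's height.
        void drawAnchored(float footX, float footY, float frameW, float frameH) {
            float drawX = footX - frameW / 2f; // centered horizontally
            float drawY = footY - frameH;      // feet rest on the foothold
            drawFrame(drawX, drawY);
        }

    With this, switching between the standing and prone frames changes only where the sprite is drawn, so the collision and gravity code never sees a height jump at all.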

    Read the article
