Search Results

Search found 31839 results on 1274 pages for 'plugin development'.


  • XNA: Auto-populate content within the content project based on current folder/file structure and content management for large games

    - by Joe
    1) Is it possible to implement a system where I can simply drop a new image into my content project's folder, and Visual Studio will automatically see it and bring it into the project for compiling? 2) Similarly, I'd like to request a specific texture with something like var texture = Game.Assets.Image["backgrounds/sky_02"]; (where Game is the standard XNA Game class and Assets is some kind of content manager statically defined within Game). I know this is fairly simple to implement manually and have done such things in the past (a static Dictionary defined within Game), but that only works for relatively small games where you can load all assets at startup without much issue. How would you go about making this work for games where content is loaded and unloaded based on level / area? I'm not asking for the solution, just how you would go about this and what things you would have to be aware of. Thanks.
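
    For part 1, as far as I know an MSBuild wildcard item such as <Compile Include="Textures\**\*.png" /> in the .contentproj will pick up new files at build time, with the caveat that editing the project in the IDE may expand the wildcard into fixed entries. For part 2, one workable pattern is a per-level asset cache built on XNA's own ContentManager, which already caches internally and can unload everything it loaded in one call. This is a minimal sketch, not the engine's API: the AssetRegion name and the cache-on-first-use policy are my assumptions.

        using System;
        using System.Collections.Generic;
        using Microsoft.Xna.Framework.Content;
        using Microsoft.Xna.Framework.Graphics;

        // One instance per level/area; unload it when the area is left.
        public class AssetRegion
        {
            private readonly ContentManager content;
            private readonly Dictionary<string, Texture2D> images =
                new Dictionary<string, Texture2D>();

            public AssetRegion(IServiceProvider services, string rootDirectory)
            {
                content = new ContentManager(services, rootDirectory);
            }

            // Loads on first request, then serves from the cache, so
            // Game.Assets.Image["backgrounds/sky_02"] stays cheap.
            public Texture2D Image(string assetName)
            {
                Texture2D tex;
                if (!images.TryGetValue(assetName, out tex))
                {
                    tex = content.Load<Texture2D>(assetName);
                    images[assetName] = tex;
                }
                return tex;
            }

            // Frees every asset this region loaded when the level unloads.
            public void Unload()
            {
                images.Clear();
                content.Unload();
            }
        }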


  • Acceptable GC frequency for a SlimDX/Windows/.NET game?

    - by Rei Miyasaka
    I understand that the Windows GC is much better than the Xbox/WP7 GC, being that it's generational and multithreaded -- so I don't need to worry quite as much about avoiding memory allocation. SlimDX even has some unavoidable functions that generate some amount of garbage (specifically, MapSubresource creates DataBoxes), yet people don't seem to be too upset about it. I'd like to use some functional paradigms to write my code too, which also means creating objects like closures and monads. I know premature optimization isn't a good thing, but are there rules of thumb or metrics that I can follow to know whether I need to cut down on allocations? Is, say, one gen 0 GC per frame too much? One thing that has me stumped is object promotions. Gen 0 GCs will supposedly finish within a millisecond or two, but if I'm understanding correctly, it's the gen 1 and 2 promotions that start to hurt. I'm not too sure how I can predict/prevent these.
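
    For a concrete metric, the collection counters are cheap enough to poll every frame; promotions aren't exposed directly, but a rising gen 1/2 count is the symptom to watch. A sketch (the frame-end hook is a placeholder):

        using System;
        using System.Diagnostics;

        // Polls process-wide GC counters once per frame. Gen 0 collections
        // are usually sub-millisecond; gen 1/2 collections are the ones
        // worth correlating with frame-time spikes.
        class GcMeter
        {
            private int lastGen1, lastGen2;

            public void EndOfFrame(long frameNumber)
            {
                int g1 = GC.CollectionCount(1);
                int g2 = GC.CollectionCount(2);
                if (g1 != lastGen1 || g2 != lastGen2)
                    Debug.WriteLine("Frame " + frameNumber + ": gen1 +" +
                        (g1 - lastGen1) + ", gen2 +" + (g2 - lastGen2));
                lastGen1 = g1; lastGen2 = g2;
            }
        }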


  • 2D SAT: How to find the collision center, point, or area?

    - by Felipe Cypriano
    I've just implemented collision detection using SAT, with this article as a reference for my implementation. The detection works as expected, but I need to know where the two rectangles are colliding: I need to find the center of the intersection, the black point on the image above. I've found some articles about this, but they all involve avoiding the overlap or some kind of velocity, which I don't need. I just need to put an image on top of the collision, like placing a crash sprite over two collided cars. Any ideas? Update: The information I have about the rectangles is the four points that represent them: the upper-right, upper-left, lower-right, and lower-left coordinates. I'm trying to find an algorithm that can give me the intersection of these points.
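
    Since both shapes are convex, one approach is to clip one rectangle against the other (Sutherland-Hodgman) and average the resulting vertices; for placing a sprite, that average is usually close enough to the true centroid. A sketch, assuming both corner lists are given in counter-clockwise order (XNA's Vector2 is used for illustration; any 2D vector type works):

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;

        static class OverlapCenter
        {
            // Clips 'subject' against one edge (a -> b) of a convex clip
            // polygon whose vertices are in counter-clockwise order.
            static List<Vector2> ClipEdge(List<Vector2> subject, Vector2 a, Vector2 b)
            {
                var result = new List<Vector2>();
                for (int i = 0; i < subject.Count; i++)
                {
                    Vector2 p = subject[i];
                    Vector2 q = subject[(i + 1) % subject.Count];
                    float sideP = Cross(b - a, p - a);
                    float sideQ = Cross(b - a, q - a);
                    if (sideP >= 0) result.Add(p);    // p is inside this edge
                    if (sideP * sideQ < 0)            // segment crosses the edge line
                        result.Add(p + (q - p) * (sideP / (sideP - sideQ)));
                }
                return result;
            }

            static float Cross(Vector2 u, Vector2 v) { return u.X * v.Y - u.Y * v.X; }

            // Average vertex of the intersection polygon, or null if the
            // rectangles don't overlap.
            public static Vector2? Center(List<Vector2> rectA, List<Vector2> rectB)
            {
                var poly = new List<Vector2>(rectA);
                for (int i = 0; i < rectB.Count && poly.Count > 0; i++)
                    poly = ClipEdge(poly, rectB[i], rectB[(i + 1) % rectB.Count]);
                if (poly.Count == 0) return null;
                Vector2 sum = Vector2.Zero;
                foreach (var v in poly) sum += v;
                return sum / poly.Count;
            }
        }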


  • OpenGL: glGetError() returns invalid enum after call to glewInit()

    - by malymato
    I use GLEW and freeglut. For some reason, after a call to glewInit(), glGetError() returns error code 1280 (GL_INVALID_ENUM). Reinstalling the drivers didn't help. I tried disabling glewExperimental; it had no effect. The code worked before, and I am not aware of any changes I could have made. Here's my code:

        int main(int argc, char* argv[])
        {
            GLenum GlewInitResult, res;
            InitWindow(argc, argv);
            res = glGetError();          // res = 0
            glewExperimental = GL_TRUE;
            GlewInitResult = glewInit();
            res = glGetError();          // res = 1280
            glutMainLoop();
            exit(EXIT_SUCCESS);
        }

        void InitWindow(int argc, char* argv[])
        {
            glutInit(&argc, argv);
            glutInitContextVersion(4, 0);
            glutInitContextFlags(GLUT_FORWARD_COMPATIBLE);
            glutInitContextProfile(GLUT_CORE_PROFILE);
            glutSetOption(GLUT_ACTION_ON_WINDOW_CLOSE, GLUT_ACTION_GLUTMAINLOOP_RETURNS);
            glutInitWindowPosition(0, 0);
            glutInitWindowSize(CurrentWidth, CurrentHeight);
            glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA);
            WindowHandle = glutCreateWindow(WINDOW_TITLE);
            GLenum errorCheckValue = glGetError();
            if (WindowHandle < 1)
            {
                fprintf(stderr, "ERROR: Could not create new rendering window.\n");
                exit(EXIT_FAILURE);
            }
            glutReshapeFunc(ResizeFunction);
            glutDisplayFunc(RenderFunction);
            glutIdleFunc(IdleFunction);
            glutTimerFunc(0, TimerFunction, 0);
            glutCloseFunc(Cleanup);
            glutKeyboardFunc(KeyboardFunction);
        }

    Could someone tell me what I am doing wrong? Thanks.


  • 2D Platformer Collision Handling

    - by defender-zone
    Hello, everyone! I am trying to create a 2D platformer (Mario-type) game and I am having some issues with handling collisions properly. I am writing this game in C++, using SDL for input, image loading, font loading, etcetera. I am also using OpenGL via the FreeGLUT library in conjunction with SDL to display graphics. My method of collision detection is AABB (axis-aligned bounding box), which is really all I need to start with. What I need is an easy way to both detect which side the collision occurred on and handle the collisions properly. So, basically, if the player collides with the top of the platform, reposition him to the top; if there is a collision to the sides, reposition the player back to the side of the object; if there is a collision to the bottom, reposition the player under the platform. I have tried many different ways of doing this, such as finding the penetration depth and repositioning the player backwards by that depth. Sadly, nothing I've tried seems to work correctly. Player movement ends up being very glitchy and repositions the player when I don't want it to. Part of the reason is probably that this is something so simple that I'm over-thinking it. If anyone thinks they can help, please take a look at the code below and help me try to improve on it if you can. I would like to refrain from using a library to handle this (as I want to learn on my own), or something like the SAT (separating axis theorem), if at all possible. Thank you in advance for your help!

        void world1Level1CollisionDetection()
        {
            for (int i = 0; i < blocks; i++) // note: i must be initialized to 0
            {
                if (de2dCheckCollision(ball, block[i], 0.0f, 0.0f) == true)
                {
                    int up = 0;
                    int left = 0;
                    int right = 0;
                    int down = 0;
                    if (ball.coords[0] < block[i].coords[0] &&
                        block[i].coords[0] < ball.coords[2] &&
                        ball.coords[2] < block[i].coords[2])
                    {
                        left = 1;
                    }
                    if (block[i].coords[0] < ball.coords[0] &&
                        ball.coords[0] < block[i].coords[2] &&
                        block[i].coords[2] < ball.coords[2])
                    {
                        right = 1;
                    }
                    if (ball.coords[1] < block[i].coords[1] &&
                        block[i].coords[1] < ball.coords[3] &&
                        ball.coords[3] < block[i].coords[3])
                    {
                        up = 1;
                    }
                    if (block[i].coords[1] < ball.coords[1] &&
                        ball.coords[1] < block[i].coords[3] &&
                        block[i].coords[3] < ball.coords[3])
                    {
                        down = 1;
                    }
                    cout << left << ", " << right << ", " << up << ", " << down << endl;
                    if (left == 1)
                    {
                        ball.coords[0] = block[i].coords[0] - 16.0f;
                        ball.coords[2] = block[i].coords[0] - 0.0f;
                    }
                    if (right == 1)
                    {
                        ball.coords[0] = block[i].coords[2] + 0.0f;
                        ball.coords[2] = block[i].coords[2] + 16.0f;
                    }
                    if (down == 1)
                    {
                        ball.coords[1] = block[i].coords[3] + 0.0f;
                        ball.coords[3] = block[i].coords[3] + 16.0f;
                    }
                    if (up == 1)
                    {
                        ball.yspeed = 0.0f;
                        ball.gravity = 0.0f;
                        ball.coords[1] = block[i].coords[1] - 16.0f;
                        ball.coords[3] = block[i].coords[1] - 0.0f;
                    }
                }
                if (de2dCheckCollision(ball, block[i], 0.0f, 0.0f) == false)
                {
                    ball.gravity = -0.5f;
                }
            }
        }

    To explain what some of this code means: the blocks variable is an integer storing the number of blocks, or platforms. I am checking all of the blocks using a for loop, and the number the loop is currently on is represented by the integer i. The coordinate system might seem a little weird, so it's worth explaining: coords[0] is the x position (left) of the object (where it starts on the x axis); coords[1] is the y position (top) of the object (where it starts on the y axis); coords[2] is coords[0] plus the object's width (right); coords[3] is coords[1] plus the object's height (bottom). de2dCheckCollision performs an AABB collision test. Up is negative y and down is positive y, as in most games. Hopefully I have provided enough information for someone to help me successfully. If there is something I left out that might be crucial, let me know and I'll provide the necessary information. Finally, for anyone who can help, providing code would be very helpful and much appreciated. Thank you again for your help!
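
    A common way to de-glitch exactly this setup is to resolve using the overlap rectangle: compute how much the two boxes overlap on each axis and push the player out along whichever axis has the smaller overlap, zeroing velocity on that axis only. A minimal sketch (written in C# for brevity; the arithmetic ports directly to C++, and Rectangle/Vector2 stand in for whatever types the engine uses):

        using Microsoft.Xna.Framework;

        static class Collision
        {
            // Pushes 'mover' out of 'wall' along the axis of least overlap
            // and returns the applied push (zero if the boxes don't overlap).
            public static Vector2 ResolveAabb(ref Rectangle mover, Rectangle wall)
            {
                Rectangle overlap = Rectangle.Intersect(mover, wall);
                if (overlap.Width <= 0 || overlap.Height <= 0)
                    return Vector2.Zero;

                if (overlap.Width < overlap.Height)
                {
                    // Horizontal push, toward the side the mover came from.
                    int dx = mover.Center.X < wall.Center.X ? -overlap.Width : overlap.Width;
                    mover.X += dx;
                    return new Vector2(dx, 0);
                }
                else
                {
                    int dy = mover.Center.Y < wall.Center.Y ? -overlap.Height : overlap.Height;
                    mover.Y += dy;
                    return new Vector2(0, dy);
                }
            }
        }

    Moving and resolving one axis at a time (apply x velocity, resolve, then apply y velocity, resolve) removes most of the corner-snagging that shows up when both axes are corrected at once.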


  • Ideas for attack damage algorithm (language irrelevant)

    - by Dillon
    I am working on a game and I need ideas for the damage that will be done to the enemy when the player attacks. The total amount of health that the enemy has is called enemyHealth, and has a value of 1000. You start off with a weapon that does 40 points of damage (may be changed). The player has an attack stat that you can increase, called playerAttack. This value starts off at 1, and has a possible max value of 100 after you level it up many times and make it farther into the game. The amount of damage the weapon does is cut and dried: it subtracts 40 points from the total 1000 points of health every time the enemy is hit. What playerAttack does is add to that value with a percentage. Here is the algorithm I have now (I've taken out all of the GUI, classes, etc. and given the variables very straightforward names):

        double totalDamage = weaponDamage + (weaponDamage * (playerAttack * .05));
        enemyHealth -= (int)totalDamage;

    This seemed to work great for the most part, so I started testing some values:

        // enemyHealth ALWAYS starts at 1000
        weaponDamage = 50;
        playerAttack = 30;

    With these values, the amount of damage done to the enemy is 125. That seemed like a good number, so I wanted to see what would happen if the player's attack was maxed out, but with the weakest starting weapon:

        weaponDamage = 50;
        playerAttack = 100;

    The totalDamage ends up being 300, which would kill an enemy in just a few hits. Even with the attack stat that high, I wouldn't want the weakest weapon to be able to kill the enemy that fast. I thought about adding defense, but I feel the game will lose consistency and become unbalanced in the long run. Possibly a well-designed damage-decrease modifier would work for lower-level weapons, or something like that. I just need a break from trying to figure out the best way to go about this, and maybe someone who has experience with games and keeping leveling consistent could give me some ideas/pointers.
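
    One standard fix is to make the attack bonus saturate instead of growing linearly, so a maxed stat can't turn a starter weapon into an endgame one. A sketch with placeholder constants to tune (the names and numbers are illustrative, not from the question):

        // Diminishing-returns bonus: roughly linear at low attack,
        // flattening toward maxBonus as attack grows.
        static int ComputeDamage(int weaponDamage, int playerAttack)
        {
            const double maxBonus = 1.5;   // cap: at most +150% of weapon damage
            const double halfPoint = 60.0; // attack value that yields half the cap
            double bonus = maxBonus * playerAttack / (playerAttack + halfPoint);
            return (int)(weaponDamage * (1.0 + bonus));
        }

    With these numbers, weaponDamage = 50 deals 75 at playerAttack = 30 and only 96 at playerAttack = 100, instead of jumping from 125 to 300.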


  • Painting with pixel shaders

    - by Gustavo Maciel
    I have an almost full understanding of how 2D lighting works. I saw this post and was tempted to try implementing it in HLSL. I planned to paint each of the layers with shaders and then combine them, either by drawing one on top of another or by passing the 3 textures to a shader and finding a better way to combine them. It's working almost as planned, but I have one question. I'm drawing each layer this way:

        GraphicsDevice.SetRenderTarget(lighting);
        GraphicsDevice.Clear(Color.Transparent);
        // ... set up shader
        SpriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend,
            SamplerState.LinearClamp, DepthStencilState.None,
            RasterizerState.CullNone, lightingShader);
        SpriteBatch.Draw(texture, fullscreen, Color.White);
        SpriteBatch.End();

        GraphicsDevice.SetRenderTarget(darkMask);
        GraphicsDevice.Clear(Color.Transparent);
        // ... set up shader
        SpriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend,
            SamplerState.LinearClamp, DepthStencilState.None,
            RasterizerState.CullNone, darkMaskShader);
        SpriteBatch.Draw(texture, fullscreen, Color.White);
        SpriteBatch.End();

    Here lightingShader and darkMaskShader are shaders that, given parameters (view and projection matrices; light position, color, and range; etc.), generate a texture meant to be that layer. It works fine, but I'm not sure that drawing a transparent quad on top of a transparent render target is the best way of doing it, because I actually just need the position and the parameters. To conclude: can I paint a texture with shaders without having to clear it and then draw a transparent texture on top of it?
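
    One way to skip the SpriteBatch round trip is to draw a full-screen quad with the effect directly, so the pixel shader fills the target from its parameters alone. A sketch, assuming the effect's vertex stage passes clip-space positions through unchanged (set RasterizerState.CullNone first if culling eats the quad):

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        // Full-screen quad in clip space; no source texture needed.
        VertexPositionTexture[] quad =
        {
            new VertexPositionTexture(new Vector3(-1, -1, 0), new Vector2(0, 1)),
            new VertexPositionTexture(new Vector3(-1,  1, 0), new Vector2(0, 0)),
            new VertexPositionTexture(new Vector3( 1, -1, 0), new Vector2(1, 1)),
            new VertexPositionTexture(new Vector3( 1,  1, 0), new Vector2(1, 0)),
        };

        GraphicsDevice.SetRenderTarget(lighting);
        foreach (EffectPass pass in lightingShader.CurrentTechnique.Passes)
        {
            pass.Apply();
            GraphicsDevice.DrawUserPrimitives(PrimitiveType.TriangleStrip, quad, 0, 2);
        }

    Since every pixel is then written by the shader, the initial Clear is only needed if some pixels would otherwise be left untouched.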


  • How can I calculate a vertex normal for a hard edge?

    - by K.G.
    Here is a picture of a lovely polygon: circled is a vertex, and numbered are its adjacent faces. I have calculated the normals of those faces as follows (not yet normalized, 0-indexed):

        Vertex 1 normal 0:  0.000000 0.000000 -0.250000
        Vertex 1 normal 1:  0.000000 0.000000 -0.250000
        Vertex 1 normal 2: -0.250000 0.000000  0.000000
        Vertex 1 normal 3: -0.250000 0.000000  0.000000
        Vertex 1 normal 4:  0.250000 0.000000  0.000000

    What I'm wondering is: given that I want this vertex to represent a hard edge, how can I determine whether its normal should be the normal of faces 1/2 or 3/4? My plan, after I glanced at the sketch I used to put this together, was "Ha! I'll just use whichever two faces have the same normal!" Now I see that there are two sets of two faces for which this is true. Is there a rule I can apply, based on the face winding, angle of the adjacent edges, moon phase, or coin flip, to consistently choose a normal direction for this box? For the record, all of the other polygons I plan to use will have their normals dictated in Maya, but after encountering this problem, it made me really curious.


  • How can I specify interleaved vertex attributes and vertex indices?

    - by freefallr
    I'm writing a generic ShaderProgram class that compiles a set of Shader objects, passes args to the shader (like vertex position, vertex normal, tex coords, etc.), then links the shader components into a shader program for use with glDrawArrays. My vertex data already exists in a VertexBufferObject that uses the following data structure to create a vertex buffer:

        class CustomVertex
        {
        public:
            float m_Position[3];  // x, y, z    - offset 0, size = 3*sizeof(float)
            float m_TexCoords[2]; // u, v       - offset 3*sizeof(float), size = 2*sizeof(float)
            float m_Normal[3];    // nx, ny, nz
            float colour[4];      // r, g, b, a
            float padding[20];    // padded for performance
        };

    I've already written a working VertexBufferObject class that creates a vertex buffer object from an array of CustomVertex objects. This array is said to be interleaved. It renders successfully with the following code:

        void VertexBufferObject::Draw()
        {
            if (!m_bInitialized)
                return;

            glBindBuffer(GL_ARRAY_BUFFER, m_nVboId);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m_nVboIdIndex);

            glEnableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
            glEnableClientState(GL_NORMAL_ARRAY);
            glEnableClientState(GL_COLOR_ARRAY);

            glVertexPointer(3, GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 0));
            glTexCoordPointer(3, GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 12));
            glNormalPointer(GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 20));
            glColorPointer(3, GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 32));

            glDrawElements(GL_TRIANGLES, m_nNumIndices, GL_UNSIGNED_INT, ((char*)NULL + 0));

            glDisableClientState(GL_VERTEX_ARRAY);
            glDisableClientState(GL_TEXTURE_COORD_ARRAY);
            glDisableClientState(GL_NORMAL_ARRAY);
            glDisableClientState(GL_COLOR_ARRAY);

            glBindBuffer(GL_ARRAY_BUFFER, 0);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
        }

    Back to the vertex array object, though. My code for creating the vertex array object is as follows. This is performed before the ShaderProgram runtime linking stage, and no GL errors are reported after its steps.

        // Specify the shader arg locations (e.g. their order in the shader code)
        for (int n = 0; n < vShaderArgs.size(); n++)
            glBindAttribLocation(m_nProgramId, n, vShaderArgs[n].sFieldName.c_str());

        // Create and bind to a vertex array object, which stores the relationship
        // between the buffer and the input attributes
        glGenVertexArrays(1, &m_nVaoHandle);
        glBindVertexArray(m_nVaoHandle);

        // Enable the vertex attribute array (we're using an interleaved array, since it's faster)
        glBindBuffer(GL_ARRAY_BUFFER, vShaderArgs[0].nVboId);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vShaderArgs[0].nVboIndexId);

        // vertex data
        for (int n = 0; n < vShaderArgs.size(); n++)
        {
            glEnableVertexAttribArray(n);
            glVertexAttribPointer(n, vShaderArgs[n].nFieldSize, GL_FLOAT, GL_FALSE,
                vShaderArgs[n].nStride, (GLubyte*)NULL + vShaderArgs[n].nFieldOffset);
            AppLog::Ref().OutputGlErrors();
        }

    This doesn't render correctly at all. I get a pattern of white specks onscreen, in the shape of the terrain rectangle, but there are no regular lines etc. Here's the code I use for rendering:

        void ShaderProgram::Draw()
        {
            using namespace AntiMatter;
            if (!m_nShaderProgramId || !m_nVaoHandle)
            {
                AppLog::Ref().LogMsg(
                    "ShaderProgram::Draw() Couldn't draw object, as initialization "
                    "of ShaderProgram is incomplete");
                return;
            }
            glUseProgram(m_nShaderProgramId);
            glBindVertexArray(m_nVaoHandle);
            glDrawArrays(GL_TRIANGLES, 0, m_nNumTris);
            glBindVertexArray(0);
            glUseProgram(0);
        }

    Can anyone see errors or omissions in either the VAO creation code or the rendering code? Thanks!


  • Robust line of sight test on the inside of a polygon with tolerance

    - by David Gouveia
    Foreword: This is a follow-up to this question and the main problem I'm trying to solve. My current solution is a hack which involves inflating the polygon and doing most calculations on the inflated polygon instead. My goal is to remove this step completely and correctly solve the problem with calculations only. Problem: Given a concave polygon, and treating all of its edges as if they were walls in a level, determine whether two points A and B are in line of sight of each other, while accounting for some degree of floating-point error. I'm currently basing my solution on a series of line-segment intersection tests. In other words:

    - If either of the end points is outside the polygon, they are not in line of sight.
    - If both end points are inside the polygon, and the line segment from A to B crosses any of the edges of the polygon, then they are not in line of sight.
    - If both end points are inside the polygon, and the line segment from A to B does not cross any of the edges of the polygon, then they are in line of sight.

    But the problem is dealing correctly with all the edge cases. In particular, it must be able to deal with all the situations depicted below, where red lines are examples that should be rejected and green lines are examples that should be accepted. I probably missed a few other situations, such as when the line segment from A to B is collinear with an edge, but one of the end points is outside the polygon. One point of particular interest is the difference between cases 1 and 9. In both cases, both end points are vertices of the polygon, and there are no edges being intersected, but 1 should be rejected while 9 should be accepted. How do I distinguish these two? I could check some middle point within the segment to see if it falls inside or not, but it's easy to come up with situations in which that would fail. Point 7 was also pretty tricky, and I had to treat it as a special case which checks directly whether the two points are adjacent vertices of the polygon. But there are also other chances of line segments being collinear with the edges of the polygon, and I'm still not entirely sure how I should handle those cases. Is there any well-known solution to this problem?
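
    One approach that handles the 1-vs-9 ambiguity without special cases is to cut [A,B] at every point where it meets the polygon's boundary and require the midpoint of every resulting piece to be inside: touching a vertex then just adds a cut instead of producing a false verdict. A sketch (XNA's Vector2 for convenience; collinear-overlap edges are skipped by the intersection helper and would still need separate treatment, and endpoints lying exactly on the boundary may need nudging inward by eps):

        using System;
        using System.Collections.Generic;
        using Microsoft.Xna.Framework;

        static class Sight
        {
            public static bool InLineOfSight(IList<Vector2> poly, Vector2 a, Vector2 b, float eps)
            {
                if (!Contains(poly, a) || !Contains(poly, b))
                    return false;

                // Collect every parameter t in [0,1] where [a,b] meets an edge.
                var cuts = new List<float> { 0f, 1f };
                for (int i = 0; i < poly.Count; i++)
                {
                    float t, u;
                    if (Intersect(a, b, poly[i], poly[(i + 1) % poly.Count], out t, out u))
                        cuts.Add(t);
                }
                cuts.Sort();

                // The path is clear only if the midpoint of every piece is inside.
                for (int i = 0; i + 1 < cuts.Count; i++)
                {
                    if (cuts[i + 1] - cuts[i] < eps) continue; // degenerate sliver
                    Vector2 mid = Vector2.Lerp(a, b, 0.5f * (cuts[i] + cuts[i + 1]));
                    if (!Contains(poly, mid)) return false;
                }
                return true;
            }

            // Standard even-odd ray-crossing containment test.
            static bool Contains(IList<Vector2> poly, Vector2 p)
            {
                bool inside = false;
                for (int i = 0, j = poly.Count - 1; i < poly.Count; j = i++)
                {
                    if ((poly[i].Y > p.Y) != (poly[j].Y > p.Y) &&
                        p.X < (poly[j].X - poly[i].X) * (p.Y - poly[i].Y) /
                              (poly[j].Y - poly[i].Y) + poly[i].X)
                        inside = !inside;
                }
                return inside;
            }

            // Segment-segment intersection; returns parameters along each segment.
            static bool Intersect(Vector2 a, Vector2 b, Vector2 c, Vector2 d,
                                  out float t, out float u)
            {
                t = u = 0f;
                float rx = b.X - a.X, ry = b.Y - a.Y;
                float sx = d.X - c.X, sy = d.Y - c.Y;
                float denom = rx * sy - ry * sx;
                if (Math.Abs(denom) < 1e-9f) return false; // parallel or collinear
                t = ((c.X - a.X) * sy - (c.Y - a.Y) * sx) / denom;
                u = ((c.X - a.X) * ry - (c.Y - a.Y) * rx) / denom;
                return t >= 0f && t <= 1f && u >= 0f && u <= 1f;
            }
        }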


  • MonoTouch: pixel-perfect 2D at the correct resolution

    - by acidzombie24
    I am writing a game that is size-sensitive: it needs to be pixel perfect. I believe the resolution is 480x320 pixels, with the retina iPhone being twice the width and height. My code is grid-based, with images of exactly 16x16 pixels. I found OpenGL samples in the past, but I never found a good tutorial that put 0,0 at the top left and rendered at the correct resolution (the wrong size made images look terrible). What can I use? I'd like to write the code in C# (or C++, but C# is preferred) and use MonoTouch. I don't know any libraries for 2D graphics. I'll figure out sound and such afterwards, and I've seen documentation on MonoTouch for input.


  • Random World Generation

    - by Alex Larsen
    I'm making a game like Minecraft (although a different idea), but I need a random world generator for a map 1024 blocks wide and 256 blocks tall. So far I have a multidimensional array holding every block (a total of 262,144 blocks). This is the code I have now:

        Block[,] BlocksInMap = new Block[1024, 256];
        public bool IsWorldGenerated = false;
        Random r = new Random();

        private void RunThread()
        {
            // Note: bounds must be < 256 and < 1024, or the indexing
            // will run past the end of the array.
            for (int BH = 0; BH < 256; BH++)
            {
                for (int BW = 0; BW < 1024; BW++)
                {
                    Block b = new Block();
                    if (BH >= 192)
                    {
                    }
                    BlocksInMap[BW, BH] = b;
                }
            }
            IsWorldGenerated = true;
        }

        public void GenWorld()
        {
            new Thread(new ThreadStart(RunThread)).Start();
        }

    I want to make tunnels and water, but blocks are set like this:

        Block MyBlock = new Block();
        MyBlock.BlockType = Block.BlockTypes.Air;

    How would I manage to connect blocks so the land is not a bunch of floating dirt and stone?
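
    A simple way to get connected land is to generate a surface height per column first and fill everything below it, then carve tunnels as random walks through the solid part. A sketch that would slot into RunThread after the blocks are constructed (it assumes Dirt and Stone members exist on Block.BlockTypes, which the question doesn't show):

        // 1) Rolling surface: each column's height drifts by -1, 0, or +1.
        var surface = new int[1024];
        int h = 192;                        // y grows downward; ~192 = ground line
        for (int x = 0; x < 1024; x++)
        {
            h += r.Next(-1, 2);
            h = Math.Max(160, Math.Min(224, h));
            surface[x] = h;
        }

        // 2) Fill: air above the surface, dirt just below, stone deeper.
        for (int x = 0; x < 1024; x++)
            for (int y = 0; y < 256; y++)
                BlocksInMap[x, y].BlockType =
                    y < surface[x]     ? Block.BlockTypes.Air  :
                    y < surface[x] + 4 ? Block.BlockTypes.Dirt :
                                         Block.BlockTypes.Stone;

        // 3) Tunnels: drunkard's walks that turn solid blocks back into air.
        for (int tunnel = 0; tunnel < 40; tunnel++)
        {
            int x = r.Next(1024), y = r.Next(200, 250);
            for (int step = 0; step < 400; step++)
            {
                BlocksInMap[x, y].BlockType = Block.BlockTypes.Air;
                x = Math.Max(0, Math.Min(1023, x + r.Next(-1, 2)));
                y = Math.Max(surface[x] + 2, Math.Min(255, y + r.Next(-1, 2)));
            }
        }

    Because every column is filled from its own surface height, nothing floats; the tunnels only remove blocks from under the surface.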


  • Distributed C++ game server which uses a database

    - by Slav
    Hello. My C++ turn-based game server (which uses a database) can no longer stand up to the current average number of clients (players), so I want to expand it to multiple computers and databases, where all clients still remain within a single game world (the servers must communicate with each other and use multiple databases). Are there tutorials, books, or common standards which explain the best way to do this?


  • Meaning of offset in pygame Mask.overlap methods

    - by Alan
    I have a situation in which two rectangles collide, and I have to detect how much they have collided so I can redraw the objects in a way that they are only touching each other's edges. It's a situation in which a moving ball should hit a completely immovable wall and instantly stop moving. Since the ball sometimes moves multiple pixels per screen refresh, it is possible that it enters the wall with more than half its surface when the collision is detected, in which case I want to shift its position back to the point where it only touches the edges of the wall. Here is the conceptual image of it. I decided to implement this with masks, and thought that I could supply the masks of both objects (wall and ball) and get the surface (as a square) of their intersection. However, there is also the offset parameter, which I don't understand. Here are the docs for the method:

        Mask.overlap
        Returns the point of intersection if the masks overlap with the
        given offset - or None if it does not overlap.
        Mask.overlap(othermask, offset) -> x,y

        The overlap tests use the following offsets (which may be negative):

        +----+----------..
        |A   | yoffset
        |  +-+----------..
        +--|B
        :  |xoffset
           :


  • What are the possible options for AI path-finding etc. when the world is "partitioned"?

    - by Sebastien Diot
    If you anticipate a large persistent game world and you don't want to end up with game servers crashing due to overload, then you have to design, from the ground up, a game world that is partitioned into chunks. This is particularly true if you want to run your game servers in the cloud, where each individual VM is relatively weak, and memory and CPU are at a premium. I think the biggest challenge here is that the player receives all the parts around the location of the avatar, but mobs/monsters are normally simulated on the server itself, and can only directly access the data about the part of the world that the server owns. So how can we make the AI behave realistically in that context? It can send queries to the other servers that own the neighboring parts, but that sounds rather network-intensive and latency-prone. It would probably be more performant for each mob's AI to be spread over the neighboring parts and to proactively send the relevant info to the part that contains the actual mob at the moment. That would also reduce the stress when a mob crosses a border between two parts, and therefore "switches server". Have you heard of any AI design that solves these issues? Some kind of distributed AI brain? Maybe some kind of "agent" community working together through message passing?


  • Question about mipmaps + anisotropic filtering

    - by Telanor
    I'm a bit confused here, and maybe someone can explain this to me. I created a simple test texture for my terrain which is nothing more than a solid green color with a black grid overlaid on top of it. If I look at the terrain in the distance with mipmapping on and linear filtering, the grid lines become blurry fairly quickly, and further back the grid is pretty much invisible. With these settings, I don't get any moiré patterns at all. If I turn on anisotropic filtering, however, the higher the anisotropic level, the more the terrain looks like it did without mipmapping: the lines are much crisper nearby, but in the distance I start to see terrible moiré patterns. My understanding was that mipmapping is supposed to get rid of moiré patterns. I've always had anisotropic filtering on in every game I play and I've never noticed any moiré patterns as a result, so I don't understand why it's happening in my game. I am using logarithmic depth, however; could that be causing any problems? And if it is, how do I resolve it? I've created my sampler state like so (I'm using SlimDX):

        ssa = SamplerState.FromDescription(Engine.Device, new SamplerDescription
        {
            AddressU = TextureAddressMode.Clamp,
            AddressV = TextureAddressMode.Clamp,
            AddressW = TextureAddressMode.Clamp,
            Filter = Filter.Anisotropic,
            MaximumAnisotropy = anisotropicLevel,
            MinimumLod = 0,
            MaximumLod = float.MaxValue
        });


  • 2D game collision response: SAT & minimum displacement along a given axis?

    - by Archagon
    I'm trying to implement a collision system in a 2D game I'm making. The separating axis theorem (as described by metanet's collision tutorial) seems like an efficient and robust way of handling collision detection, but I don't quite like the collision response method they use. By blindly displacing along the axis of least overlap, the algorithm simply ignores the previous position of the moving object, which means that it doesn't collide with the stationary object so much as it enters it and then bounces out. Here's an example of a situation where this would matter: According to the SAT method described above, the rectangle would simply pop out of the triangle perpendicular to its hypotenuse: However, realistically, the rectangle should stop at the lower right corner of the triangle, as that would be the point of first collision if it were moving continuously along its displacement vector: Now, this might not actually matter during gameplay, but I'd love to know if there's a way of efficiently and generally attaining accurate displacements in this manner. I've been racking my brains over it for the past few days, and I don't want to give up yet! (Cross-posted from StackOverflow, hope that's not against the rules!)
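
    If continuous-style results are wanted without deriving a full swept-SAT solution, one cheap approximation is to bisect along the frame's displacement for the time of first contact, then apply the normal SAT push only for whatever tiny overlap remains. A sketch (the overlapsAt callback is a stand-in for positioning the shape at start + t * displacement and running the SAT test; note it can miss very thin obstacles crossed entirely within one frame):

        using System;

        static class Sweep
        {
            // Returns the largest t in [0,1] at which the mover does not yet
            // overlap, found by bisection. 0 = start of frame, 1 = full move.
            public static float TimeOfImpact(Func<float, bool> overlapsAt, int iterations)
            {
                if (overlapsAt(0f)) return 0f;   // already overlapping at start
                if (!overlapsAt(1f)) return 1f;  // never overlaps this frame

                float lo = 0f, hi = 1f;
                for (int i = 0; i < iterations; i++)
                {
                    float mid = 0.5f * (lo + hi);
                    if (overlapsAt(mid)) hi = mid;
                    else lo = mid;
                }
                return lo;
            }
        }

    With 10-15 iterations this lands within a fraction of a pixel of the first contact, so in the rectangle/triangle example the box stops at the lower right corner instead of entering and popping out perpendicular to the hypotenuse.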


  • Component-wise GLSL vector branching

    - by Gustavo Maciel
    I'm aware that it is usually a BAD idea to operate on a GLSL vec's components separately. For example:

        // Use intrinsic functions; they do the calculation on 4 components at a time.
        float dot = v1.x*v2.x + v1.y*v2.y + v1.z*v2.z;  // NEVER
        float dot = dot(v1, v2);                        // YES

        // Multiplying one by one is not good either, since the ALU can do
        // all 4 components at a time.
        vec3 mul = vec3(v1.x*v2.x, v1.y*v2.y, v1.z*v2.z);  // NEVER
        vec3 mul = v1 * v2;                                // YES

    I've been struggling to think: are there equivalent operations for branching? For example:

        vec4 Overlay(vec4 v1, vec4 v2, vec4 opacity)
        {
            bvec4 less = lessThan(v1, vec4(0.5));
            vec4 blend;
            for (int i = 0; i < 4; ++i)
            {
                if (less[i])
                    blend[i] = 2.0 * v1[i]*v2[i];
                else
                    blend[i] = 1.0 - 2.0 * (1.0 - v1[i])*(1.0 - v2[i]);
            }
            return v1 + (blend - v1)*opacity;
        }

    This is an Overlay operator that works component-wise. I'm not sure this is the best way to do it, since I'm afraid this for and if can become a bottleneck later. Tl;dr: can I branch component-wise? If yes, how can I optimize that Overlay function with it?


  • Lighting-Reflectance Models & Licensing Issues

    - by codey
    Generally, or specifically, is there any licensing issue with using any of the well-known lighting/reflectance models (i.e. the BRDFs or other distribution or approximation functions): Phong, Blinn–Phong, Cook–Torrance, Blinn–Torrance–Sparrow, Lambert, Minnaert, Oren–Nayar, Ward, Strauss, Ashikhmin–Shirley, and common modifications where applicable, such as the Beckmann distribution, Blinn distribution, Schlick's approximation, etc., in your shader code as utilised in a commercial product? Or is it a non-issue?


  • Inconsistent accessibility error in XNA

    - by Tom
    Hey all, you may remember me asking a question regarding a snake game I was creating about two weeks ago. Well, I'm now quite far into making the game (thanks to a brilliant tutorial I found), but I've come across the error named above. So here's my problem: I have a SnakeFood class that has a method called Reposition. In the Game1 class I have a method called UpdateInGame that calls the Reposition method to load an orange that spawns in a random place every second. My latest change altered the Reposition method so that the orange can no longer overlap the snake I have on the screen. Now I get this error (in full):

        Error 1  Inconsistent accessibility: parameter type 'TheMathsSnakeGame.Snake'
        is less accessible than method
        'TheMathsSnakeGame.SnakeFood.Reposition(TheMathsSnakeGame.Snake)'
        C:\Users\Tom\Documents\Visual Studio 2008\Projects\TheMathsSnakeGame\TheMathsSnakeGame\SnakeFood.cs
        33 21 TheMathsSnakeGame

    I understand what the error is trying to tell me, but having changed the accessibility of the methods, I still can't get it to work. Sorry about the long-winded question. Thanks in advance :) Edit: the code I'm using (Game1 class):

        private void UpdateInGame(GameTime gameTime)
        {
            // Calls the orange's "Reposition" method every second
            if (gameTime.TotalGameTime.Milliseconds % 1000 == 0)
                orange.Reposition(sidney);
            sidney.Update(gameTime);
        }

    (SnakeFood class):

        public void Reposition(Snake snake)
        {
            do
            {
                position = new Point(rand.Next(Grid.maxHeight), rand.Next(Grid.maxWidth));
            } while (snake.IsBodyOnPoint(position));
        }
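
    For reference, this error usually comes from the type of the parameter rather than the methods: a class declared without a modifier defaults to internal, so exposing it as a parameter of a public method trips compiler error CS0051. A minimal reproduction (it intentionally does not compile), with the two usual fixes as comments:

        class Snake { }    // no modifier: defaults to 'internal'

        public class SnakeFood
        {
            // error CS0051: 'Snake' is less accessible than 'Reposition'
            public void Reposition(Snake snake) { }

            // Fix 1: declare the parameter type at least as visible as
            // the method, i.e. "public class Snake { }".
            // Fix 2: or lower the method's visibility to match the type:
            // internal void Reposition(Snake snake) { }
        }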


  • How or why would this mechanic (not) work to bring game balance to a singleplayer RPG? [closed]

    - by 0xFFF1
    Mechanic details: The player, the monsters, and the merchants act as three separate parties. The player needs to beat up monsters for exp points and resources to sell, and to buy potions from merchants to continue to fight. The monsters need healing and reviving to survive (also bought from merchants), and the merchants need potion ingredients from the player and the monsters to make potions to sell. These potions can only be processed in such bulk by merchants, so their potions are cheaper than making them yourself. Only the monsters can farm ingredients in bulk. Only the player is, or has to be, overly aggressive (in bulk). Monsters can farm and produce "level-up candies" that do the work of exp; they are eaten right away after they are made and are never stockpiled or held, for fear of the player, and of the merchants who want to sell to the player. The monsters will defend themselves. Reviving is very expensive. The merchants can be found with either a concerned expression or a grinning expression, based on how much profit they are making compared to their moral standing. The economies of each monster town and merchant city are distinct but interconnected. Magic swords are worth a lot. So what I need to know is: what concerns would there be in designing a game around this mechanic, and/or designing this mechanic around a developing game? Which would fare better? Is game balance an issue here (how strong the monsters get, or how quickly they die off, based on the player's input into the system), or is game balance solely in the hands of the player (he decides whether he overkills monsters or gets underleveled)? What do I need to think about to make sure it isn't too easy or too hard to swing the amount/strength of monsters compared to the player, and the amount of profit the merchants get vs. the player? Would indicating in-game how out of whack things are getting help with this?


  • Drawing text from the Update method in XNA

    - by Sigh-AniDe
    I am having a problem drawing the "Game Over!" text once the user is on the last tile. This is what I have (the Update and drawText methods are in a class named turtle):

        public void Update(float scalingFactor, int[,] map, SpriteBatch batch, SpriteFont font)
        {
            if (isMovable(mapX, mapY - 1, map))
            {
                position.Y = position.Y - (int)scalingFactor;
                angle = 0.0f;
                Program.form.direction = "";
                if (mapX == 17 && mapY == 1) // This is the last tile (tested)
                {
                    Program.form.BackColor = System.Drawing.Color.Red;
                    drawText(batch, font);
                }
            }
        }

        public void drawText(SpriteBatch spritebatch, SpriteFont spriteFont)
        {
            textPosition.X = 200; // textPosition is a Vector2
            textPosition.Y = 200;
            spritebatch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend);
            spritebatch.DrawString(spriteFont, "Game Over!!!", textPosition, Color.Red);
            spritebatch.End();
        }

    This Update is in the Game1 class:

        protected override void Update(GameTime gameTime)
        {
            // Allows the game to exit
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
                this.Exit();
            turtle.Update(scalingFactor, map, spriteBatch, font);
            base.Update(gameTime);
        }

    I have also added the font content in LoadContent:

        font = Content.Load<SpriteFont>("fontType");

    What am I doing wrong? Why does the text not show on game completion? If I call turtle.drawText() in the main Draw method, the "Game Over" text stays on screen from the beginning. What am I missing? Thanks
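
    The usual pattern here is to set state in Update and draw in Draw: anything drawn during Update is wiped by the next frame's Clear, and drawing unconditionally from Draw shows the text from the start. A sketch of the split (gameOver is a new flag, not from the question):

        bool gameOver = false;

        protected override void Update(GameTime gameTime)
        {
            // ... move the turtle as before ...
            if (mapX == 17 && mapY == 1)   // reached the last tile
                gameOver = true;
            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);
            // ... draw the world as before ...
            if (gameOver)
            {
                spriteBatch.Begin();
                spriteBatch.DrawString(font, "Game Over!!!", new Vector2(200, 200), Color.Red);
                spriteBatch.End();
            }
            base.Draw(gameTime);
        }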


  • Texture and Lighting Issue in 3D world

    - by noah
    I'm using OpenGL ES 1.1 for iPhone. I'm attempting to implement a skybox in my 3D world, and started out by following one of Jeff LaMarche's tutorials on creating textures. Here's the tutorial: iphonedevelopment.blogspot.com/2009/05/opengl-es-from-ground-up-part-6_25.html. I've successfully added the image to my 3D world, but am not sure why the lighting on the other shapes has changed so much. I want the shapes to be the original color and have the image in the background. Before: https://www.dropbox.com/s/ojmb8793vj514h0/Screen%20Shot%202012-10-01%20at%205.34.44%20PM.png After: https://www.dropbox.com/s/8v6yvur8amgudia/Screen%20Shot%202012-10-01%20at%205.35.31%20PM.png Here's the OpenGL initialization:

        - (void)initOpenGLES1
        {
            glShadeModel(GL_SMOOTH);

            // Enable lighting and turn the first light on
            glEnable(GL_LIGHTING);
            glEnable(GL_LIGHT0);

            const GLfloat lightAmbient[] = {0.2, 0.2, 0.2, 1.0};
            const GLfloat lightDiffuse[] = {0.8, 0.8, 0.8, 1.0};
            const GLfloat matAmbient[] = {0.3, 0.3, 0.3, 0.5};
            const GLfloat matDiffuse[] = {1.0, 1.0, 1.0, 1.0};
            const GLfloat matSpecular[] = {1.0, 1.0, 1.0, 1.0};
            const GLfloat lightPosition[] = {0.0, 0.0, 1.0, 0.0};
            const GLfloat lightShininess = 100.0;

            // Configure OpenGL lighting
            glEnable(GL_LIGHTING);
            glEnable(GL_LIGHT0);
            glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT, matAmbient);
            glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, matDiffuse);
            glMaterialfv(GL_FRONT_AND_BACK, GL_SPECULAR, matSpecular);
            glMaterialf(GL_FRONT_AND_BACK, GL_SHININESS, lightShininess);
            glLightfv(GL_LIGHT0, GL_AMBIENT, lightAmbient);
            glLightfv(GL_LIGHT0, GL_DIFFUSE, lightDiffuse);
            glLightfv(GL_LIGHT0, GL_POSITION, lightPosition);

            // Define a cutoff angle
            glLightf(GL_LIGHT0, GL_SPOT_CUTOFF, 40.0);

            // Set the clear color
            glClearColor(0, 0, 0, 1.0f);

            // Projection matrix config
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            CGSize layerSize = self.view.layer.frame.size;
            // Swapped height and width for landscape mode
            gluPerspective(45.0f, (GLfloat)layerSize.height / (GLfloat)layerSize.width, 0.1f, 750.0f);

            [self initSkyBox];

            // Modelview matrix config
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();

            // This next line is not really needed as it is the default for OpenGL ES
            glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
            glDisable(GL_BLEND);

            // Enable depth testing
            glEnable(GL_DEPTH_TEST);
            glDepthFunc(GL_LESS);
            glDepthMask(GL_TRUE);
        }

    Here's the drawSkyBox that gets called in the drawFrame method:

        - (void)drawSkyBox
        {
            glDisable(GL_LIGHTING);
            glDisable(GL_DEPTH_TEST);
            glEnableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_NORMAL_ARRAY);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);

            static const SSVertex3D vertices[] = {
                {-1.0,  1.0, -0.0},
                { 1.0,  1.0, -0.0},
                {-1.0, -1.0, -0.0},
                { 1.0, -1.0, -0.0}
            };
            static const SSVertex3D normals[] = {
                {0.0, 0.0, 1.0},
                {0.0, 0.0, 1.0},
                {0.0, 0.0, 1.0},
                {0.0, 0.0, 1.0}
            };
            static const GLfloat texCoords[] = {
                0.0, 0.5,
                0.5, 0.5,
                0.0, 0.0,
                0.5, 0.0
            };

            glLoadIdentity();
            glTranslatef(0.0, 0.0, -3.0);
            glBindTexture(GL_TEXTURE_2D, texture[0]);
            glVertexPointer(3, GL_FLOAT, 0, vertices);
            glNormalPointer(GL_FLOAT, 0, normals);
            glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
            glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

            glDisableClientState(GL_VERTEX_ARRAY);
            glDisableClientState(GL_NORMAL_ARRAY);
            glDisableClientState(GL_TEXTURE_COORD_ARRAY);
            glEnable(GL_LIGHTING);
            glEnable(GL_DEPTH_TEST);
        }

    Here's the skybox initialization:

        - (void)initSkyBox
        {
            // Turn necessary features on
            glEnable(GL_TEXTURE_2D);
            glEnable(GL_BLEND);
            glBlendFunc(GL_ONE, GL_SRC_COLOR);

            // Bind the number of textures we need, in this case one.
            glGenTextures(1, &texture[0]);             // create a texture obj, give unique ID
            glBindTexture(GL_TEXTURE_2D, texture[0]);  // load our new texture name into the current texture
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

            NSString *path = [[NSBundle mainBundle] pathForResource:@"space" ofType:@"jpg"];
            NSData *texData = [[NSData alloc] initWithContentsOfFile:path];
            UIImage *image = [[UIImage alloc] initWithData:texData];

            GLuint width = CGImageGetWidth(image.CGImage);
            GLuint height = CGImageGetHeight(image.CGImage);
            CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
            // times 4 because we write one byte each for r, g, b, and alpha
            void *imageData = malloc(height * width * 4);
            CGContextRef cgContext = CGBitmapContextCreate(imageData, width, height, 8,
                4 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);

            // Flip the Y-axis
            CGContextTranslateCTM(cgContext, 0, height);
            CGContextScaleCTM(cgContext, 1.0, -1.0);

            CGColorSpaceRelease(colorSpace);
            CGContextClearRect(cgContext, CGRectMake(0, 0, width, height));
            CGContextDrawImage(cgContext, CGRectMake(0, 0, width, height), image.CGImage);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);

            CGContextRelease(cgContext);
            free(imageData);
            [image release];
            [texData release];
        }

    Any help is greatly appreciated.


  • How to determine where on a path my object will be at a given point in time?

    - by Dave
    I have a map and an object that is meant to move from start to end in X amount of time. The movements are all straight lines, as curves are beyond my ability at the moment. So I am trying to get the object to move between these points, with waypoints along the way which keep it on a given path. The speed of the object is determined by how long it will take to get from start to end (based on X). This is what I have so far:

        // get_now() returns seconds since epoch
        var timepassed = get_now() - myObj[id].start; // seconds since epoch for departure
        var timeleft = myObj[id].end - get_now();     // seconds since epoch for arrival
        var journey_time = 60;                        // total journey time of 60 minutes
        var array = [[650, 250]];                     // waypoints along the straight paths

        if (step == 0 || step <= array.length) {
            var destinationx = array[step][0];
            var destinationy = array[step][1];
        } else if (step == array.length) {
            var destinationx = 250;
            var destinationy = 100;
        } else {
            var destinationx = myObj[id].startx;
            var destinationy = myObj[id].starty;
        }
        step++;

    When the user logs in at any given time, the object needs to be drawn in the correct place on the path, almost as if it had been travelling along the path while the user was away from the PC, using the information above. How do I do this? Note: the camera angle in the game is a bird's-eye view, so it's a straightforward x:y rather than isometric angles.
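
    The usual trick is not to simulate the journey at all: on login, compute the fraction of the journey time that has elapsed and place the object that far along the polyline by length. A sketch in C# (the same arithmetic ports directly to JavaScript):

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;

        static class PathWalk
        {
            // Point reached after travelling 'fraction' (0..1) of the path's
            // total length at constant speed.
            public static Vector2 PointAlongPath(IList<Vector2> waypoints, float fraction)
            {
                float total = 0f;
                for (int i = 0; i + 1 < waypoints.Count; i++)
                    total += Vector2.Distance(waypoints[i], waypoints[i + 1]);

                float remaining = MathHelper.Clamp(fraction, 0f, 1f) * total;
                for (int i = 0; i + 1 < waypoints.Count; i++)
                {
                    float segment = Vector2.Distance(waypoints[i], waypoints[i + 1]);
                    if (remaining <= segment && segment > 0f)
                        return Vector2.Lerp(waypoints[i], waypoints[i + 1], remaining / segment);
                    remaining -= segment;
                }
                return waypoints[waypoints.Count - 1];
            }
        }

    Here fraction would be timepassed / (journey_time * 60), and the waypoint list is the start point, the intermediate waypoints, and the end point, in order.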


  • 2D pathfinding - finding smooth paths

    - by Kooi Nam Ng
    I was trying to implement simple pathfinding, but the outcome is less satisfactory than what I intended to achieve. The thing is, units in games like StarCraft 2 move in all directions, whereas units in my case move in at most 8 directions (Warcraft 1 style), as these 8 directions lead to the next available nodes (they move from a tile to the next neighboring tile). What should I do to achieve the result seen in StarCraft 2? Shrink the tile size? In the picture you can see a horizontal line of rock tiles acting as obstacles, with the found path marked as green tiles. The red line is the path I want to achieve.
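
    Grid games usually keep the 8-way A* result and then smooth it: walk the waypoint list and drop every node that can be skipped while the straight line between its neighbors stays walkable, leaving the taut path the red line shows. A sketch assuming a path of at least two nodes (walkableBetween is a stand-in for a line-of-sight test over the tile map):

        using System;
        using System.Collections.Generic;
        using Microsoft.Xna.Framework;

        static class PathSmoothing
        {
            // Greedily skips waypoints while the straight line from the
            // current anchor to a later waypoint is unobstructed.
            public static List<Vector2> Smooth(IList<Vector2> path,
                Func<Vector2, Vector2, bool> walkableBetween)
            {
                var result = new List<Vector2> { path[0] };
                int anchor = 0;
                for (int i = 2; i < path.Count; i++)
                {
                    if (!walkableBetween(path[anchor], path[i]))
                    {
                        anchor = i - 1;
                        result.Add(path[anchor]);
                    }
                }
                result.Add(path[path.Count - 1]);
                return result;
            }
        }

    Moving along the smoothed segments (rather than tile to tile) gives the straight diagonal of the red line without changing the search itself.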

