Search Results

Search found 28031 results on 1122 pages for 'personal development'.

Page 537/1122

  • Will we see a trend of "3d" games coming up in the near future?

    - by Vish
    I've noticed that movies are increasingly diving into the world of the 3-dimensional camera. For me it provoked a thought: it must be the same feeling people got when they saw a colour movie for the first time; like the transition from black and white to colour, it is a whole new experience. For the first time we are experiencing the Z (depth) factor, and I really do mean "experiencing". So my question is, or maybe if not quite a question: is there a possibility of a genre of 3D-camera games coming up?

    Read the article

  • Are there any reasons to use Legacy (2.X) OpenGL?

    - by user27886
    The benefits of the modern OpenGL 3.x and 4.x APIs are well documented, but I'm wondering if there are ANY benefits to sticking with the old OpenGL, or if learning OpenGL 2.x is now a complete waste of time no matter what. In particular, I've wondered whether using the OpenGL 2.x API is appropriate if the target platform has graphics hardware capable of only up to OpenGL 2.x. Would a driver update on said target platform allow programs written against the modern OpenGL APIs to be released on this old platform? If both work, which would be faster? Thanks

    Read the article

  • Infer half vector length in BRDF

    - by cician
    It's my first question on Stack. Is it possible to infer the length of the half-angle vector for specular lighting from N·L and N·V, without the whole view and light vectors? I may be completely off-track, but I have a gut feeling it's possible... Why? I'm working on a skin shader and I'm already doing one texture lookup with N·L + N·E and one texture lookup for specular with N·H + N·V. The latter could be transformed into an N·L + N·E lookup if only I had the half-vector length. Doing so would simplify the shader a bit and move some operations into the pre-computed lookup texture. It would make a huge difference, since I'm trying to squeeze as much functionality as possible into a single-pass mobile version, so instruction count matters. Thanks.
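
    For reference only, here is a small C# sketch (my own illustration, not code from the post; System.Numerics assumed) of the quantity being asked about. For unit-length L and V, the unnormalized half vector is L + V, and its length works out from the dot product of L and V:

        using System;
        using System.Numerics;

        static class HalfVectorSketch
        {
            // |L + V|^2 = |L|^2 + |V|^2 + 2(L·V) = 2 + 2(L·V) for unit L and V.
            static float HalfVectorLength(Vector3 l, Vector3 v)
            {
                return (float)Math.Sqrt(2.0 + 2.0 * Vector3.Dot(l, v));
            }

            static void Main()
            {
                Vector3 l = Vector3.Normalize(new Vector3(0.3f, 0.8f, 0.5f));
                Vector3 v = Vector3.Normalize(new Vector3(-0.2f, 0.9f, 0.4f));

                // Sanity check: both lines print the same value.
                Console.WriteLine(HalfVectorLength(l, v));
                Console.WriteLine((l + v).Length());
            }
        }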

    Read the article

  • In what kind of variable type is the player position stored in an MMORPG such as WoW?

    - by jokoon
    I even heard J. Carmack briefly talk about it... How can software track a player's position so accurately in such a huge world, without loading between zones, and at multiplayer scale? How is the data formatted when it passes through the netcode? I can understand how vertices are stored in the graphics card's memory, but when it comes to synchronizing the multiplayer side, I can't imagine what works best.

    Read the article

  • Transparent parts of texture are opaque black instead

    - by Aaron
    I render a sprite twice, one copy on top of the other. The sprites have transparent parts, so I should be able to see the bottom sprite under the top sprite. Instead, the transparent parts are opaque black (the clear colour), and the topmost sprite blocks the bottom sprite. My fragment shader is trivial:

        uniform sampler2D texture;
        varying vec2 f_texcoord;

        void main()
        {
            gl_FragColor = texture2D(texture, f_texcoord);
        }

    I have glEnable(GL_BLEND) and glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) in my initialization code. My texture comes from a PNG file that I load with libpng. I make sure to use GL_RGBA when initializing the texture with glTexImage2D (otherwise the sprites look like noise).

    Read the article

  • Synchronise graphics and logic code

    - by Skeith
    I have a procedural approach to the game loop that runs various classes. It looks like this:
    - continue any in-progress animations
    - check for user input
    - apply AI
    - move things
    - resolve events such as collisions
    - draw it all to screen
    I have seen a lot of posts about how drawing should run separately, as fast as it can, possibly in another thread. My problem is: if the drawing runs as fast as it can, what happens if it tries to draw while I'm still applying the AI or resolving a collision? It could draw the wrong thing on screen. This seems to be a well-established idea, so there must be an explanation to this problem, as I just can't get my head around it. The only solution I have is to update the screen so fast that any errors like that get refreshed before we see them, but that sounds hacky. So how does this work, or how would you implement it so that they are in sync but running at different speeds?
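
    As a point of reference only, here is a minimal single-threaded sketch (C#, my own illustration, not code from the post) of the loop described above, where the draw phase only ever runs after the logic phase has finished, so it can never observe a half-applied update:

        using System;
        using System.Diagnostics;
        using System.Threading;

        class GameLoopSketch
        {
            static void Main()
            {
                var clock = Stopwatch.StartNew();
                double previous = clock.Elapsed.TotalSeconds;

                while (true)
                {
                    double now = clock.Elapsed.TotalSeconds;
                    double frameTime = now - previous;
                    previous = now;

                    // Logic phase: the steps from the question, in order.
                    UpdateAnimations(frameTime);
                    ReadInput();
                    ApplyAI(frameTime);
                    MoveThings(frameTime);
                    ResolveCollisions();

                    // Draw phase: runs only after the logic phase completed,
                    // so it always sees a consistent game state.
                    DrawEverything();

                    Thread.Sleep(1); // yield; a real loop would pace itself more carefully
                }
            }

            // Placeholder bodies; the real work lives in the game's own classes.
            static void UpdateAnimations(double dt) { }
            static void ReadInput() { }
            static void ApplyAI(double dt) { }
            static void MoveThings(double dt) { }
            static void ResolveCollisions() { }
            static void DrawEverything() { }
        }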

    Read the article

  • What is the recommended way to output values to FBO targets? (OpenGL 3.3 + GLSL 330)

    - by datSilencer
    I'll begin by apologizing for any dumb assumptions you might find in the code below, since I'm still pretty much green when it comes to OpenGL programming. I'm currently trying to implement deferred shading by using FBOs and their associated targets (textures in my case). I have a simple (I think :P) geometry+fragment shader program and I'd like to write its fragment shader stage output to three different render targets (previously bound by a call to glDrawBuffers()), like so:

        #version 330

        in vec3 WorldPos0;
        in vec2 TexCoord0;
        in vec3 Normal0;
        in vec3 Tangent0;

        layout(location = 0) out vec3 WorldPos;
        layout(location = 1) out vec3 Diffuse;
        layout(location = 2) out vec3 Normal;

        uniform sampler2D gColorMap;
        uniform sampler2D gNormalMap;

        vec3 CalcBumpedNormal()
        {
            vec3 Normal = normalize(Normal0);
            vec3 Tangent = normalize(Tangent0);
            Tangent = normalize(Tangent - dot(Tangent, Normal) * Normal);
            vec3 Bitangent = cross(Tangent, Normal);
            vec3 BumpMapNormal = texture(gNormalMap, TexCoord0).xyz;
            BumpMapNormal = 2 * BumpMapNormal - vec3(1.0, 1.0, -1.0);
            vec3 NewNormal;
            mat3 TBN = mat3(Tangent, Bitangent, Normal);
            NewNormal = TBN * BumpMapNormal;
            NewNormal = normalize(NewNormal);
            return NewNormal;
        }

        void main()
        {
            WorldPos = WorldPos0;
            Diffuse = texture(gColorMap, TexCoord0).xyz;
            Normal = CalcBumpedNormal();
        }

    If my render target textures are configured as:

        RT1: (GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE0, GL_COLOR_ATTACHMENT0)
        RT2: (GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE1, GL_COLOR_ATTACHMENT1)
        RT3: (GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE2, GL_COLOR_ATTACHMENT2)

    and assuming that each texture has an internal format capable of containing the incoming data, will the fragment shader write the corresponding values to the expected texture targets? On a related note, do the textures need to be bound to the OpenGL context while they are being used as multiple render targets? From some Googling, I think there are two other ways to output to MRTs: 1: Output each component to gl_FragData[n]. Some forum posts say this method is deprecated; however, looking at the latest OpenGL 3.3 and 4.0 specifications at opengl.org, the core profiles still mention this approach. 2: Use a typed output array variable for the expected type. In this case, I think it would be something like this:

        out vec3 [3] output;

        void main()
        {
            output[0] = WorldPos0;
            output[1] = texture(gColorMap, TexCoord0).xyz;
            output[2] = CalcBumpedNormal();
        }

    So which is the recommended approach, then? Is there a recommended approach at all if I plan to code on top of OpenGL 3.3? Thanks for your time and help!

    Read the article

  • Do I need the 'w' component in my Vector class?

    - by bobobobo
    Assume you're writing matrix code that handles rotation, translation, etc. for 3D space. The transformation matrices have to be 4x4 to fit the translation component in. However, you don't actually need to store a w component in the vector, do you? Even for perspective division, you can simply compute and store w outside of the vector, and perspective-divide before returning from the method. For example:

        // post multiply: vec2 = matrix * vector
        Vector operator*( const Matrix & a, const Vector& v )
        {
            Vector r ;
            // do matrix mult
            r.x = a._11*v.x + a._12*v.y ...
            real w = a._41*v.x + a._42*v.y ...
            // perspective divide
            r /= w ;
            return r ;
        }

    Is there a point in storing w in the Vector class?
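
    Purely as an illustration (mine, not the poster's), here is a self-contained C# sketch of the approach described above: w lives only in a local, is used for the divide, and never needs a slot in the vector type. The row-major layout, the post-multiplied column vector, and the treatment of the input as a point (implicit w = 1) are assumptions made to mirror the snippet in the question:

        struct Vec3
        {
            public float X, Y, Z;
            public Vec3(float x, float y, float z) { X = x; Y = y; Z = z; }
        }

        static class WSketch
        {
            // m is a 4x4 matrix indexed as m[row, col], post-multiplying a
            // column vector; the input is treated as a point (implicit w = 1).
            public static Vec3 Transform(float[,] m, Vec3 v)
            {
                float x = m[0,0]*v.X + m[0,1]*v.Y + m[0,2]*v.Z + m[0,3];
                float y = m[1,0]*v.X + m[1,1]*v.Y + m[1,2]*v.Z + m[1,3];
                float z = m[2,0]*v.X + m[2,1]*v.Y + m[2,2]*v.Z + m[2,3];

                // w is computed here and divided out; it is never stored.
                float w = m[3,0]*v.X + m[3,1]*v.Y + m[3,2]*v.Z + m[3,3];
                return new Vec3(x / w, y / w, z / w);
            }
        }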

    Read the article

  • Possible to pass pygame data to memory map block?

    - by toozie21
    I am building a matrix out of addressable pixels, and it will be run by a Pi (over the ethernet bus). The matrix will be 75 pixels wide and 20 pixels tall. As a side project, I thought it would be neat to run Pong on it. I've seen some Python-based Pong tutorials for the Pi, but the problem is that they want to pass the data out to a screen via the pygame.display function. I have access to pass pixel information using a memory map block, so is there any way to do that with pygame instead of passing it out of the video port? In case anyone is curious, this was the Pong tutorial I was looking at: Pong Tutorial

    Read the article

  • Climbing boxes in box2D

    - by Rothens
    I've just stepped into the world of Box2D with libGDX. I've already made a stack of boxes: they are dropped randomly on top of each other. What I'd like to achieve is a character that can freely climb on the boxes (he can grip the boxes anywhere, not just on the side or top of a box), but whose weight affects the stack as well, so the boxes can fall down. My google-fu failed me... Is there any way to make this possible?

    Read the article

  • When dealing with a static game board, what are some methods to make it more interesting?

    - by Ólafur Waage
    Let's say you have a game board that you look at. It does not move but there is some action going on. For example Chess, Checkers, Solitaire. The game I'm working on is not one of these but it's a good reference. What are some methods you can apply to the game or the design that increases the appeal of the game to the user? Of course you can make it prettier but what are some other methods you can use? For example: Visual cues, game design changes, user interface arrangement, etc.

    Read the article

  • How do I reconstruct depth in deferred rendering using an orthographic projection?

    - by Jeremie
    I've been trying to get the world-space position of my pixel, but I'm missing something. I'm using an orthographic view for a 2.5D game. My depth is linear, and this is my code:

        float3 lightPos = lightPosition;
        float2 texCoord = PostProjToScreen(PSIn.lightPosition) + halfPixel;
        float depth = tex2D(depthMap, texCoord);

        float4 position;
        position.x = texCoord.x * 2 - 1;
        position.y = (1 - texCoord.y) * 2 - 1;
        position.z = depth.r;
        position.w = 1;
        position = mul(position, inViewProjection);
        //position.xyz /= position.w; // commented out, but it doesn't work either way

        float4 normal = (tex2D(normalMap, texCoord) - .5f) * 2;
        normal = normalize(normal);

        float3 lightDirection = normalize(lightPos - position);
        float att = saturate(1.0f - length(lightDirection) / attenuation);
        float lightning = saturate(dot(normal, lightDirection));
        lightning *= brightness;

        return float4(lightColor * lightning * att, 1);

    I'm using a sphere, but it's not working the way I want. I reproject the texture properly onto the sphere, but the light coordinates in the pixel shader seem to be stuck at zero, even though the light volume updates properly when I move it.

    Read the article

  • How can I improve the "smoothness" of a 2D side-scrolling iPhone game?

    - by MrDatabase
    I'm working on a relatively simple 2D side-scrolling iPhone game. The controls are tilt-based. I use OpenGL ES 1.1 for the graphics. The game state is updated at a rate of 30 Hz, and the drawing is updated at a rate of 30 fps (via NSTimer). The smoothness of the drawing is OK, but not quite as smooth as a game like iFighter. What can I do to improve the smoothness of the game? Here are the potential issues I've briefly considered:
    - I'm varying the opacity of up to 15 "small" (20x20 pixel) textures at a time. Apparently varying the opacity in this manner can degrade drawing performance.
    - I'm rendering at only 30 fps (via NSTimer). Perhaps 2D games like iFighter are rendered at a higher frame rate?
    - Perhaps the game state could be updated at a faster rate? Note the acceleration values are updated at 100 Hz, so I could potentially update part of the game state at 100 Hz.
    - All of my textures are PNG24. Perhaps PNG8 would help (due to smaller size etc.)?

    Read the article

  • Open Source Analysis

    - by BluFire
    There is a lot of code in open-source projects; looking at all of it is time-consuming and can be confusing to a novice like me. Are there any sections of open-source projects that should be focused on? What should I focus on when I look at code? I'm asking this in general, because if I ask it specifically the question will only apply to one or two projects rather than an entire group of projects ranging over different types of games and difficulty.

    Read the article

  • ERROR #342: DEVICE_SHADER_LINKAGE_SEMANTICNAME_NOT_FOUND

    - by Telanor
    I've stared at this for at least half an hour now and I cannot figure out what DirectX is complaining about. I know this error normally means you put a float3 instead of a float4, or something like that, but I've checked over and over and as far as I can tell, everything matches. This is the full error message:

        D3D11: ERROR: ID3D11DeviceContext::DrawIndexed: Input Assembler - Vertex Shader linkage error: Signatures between stages are incompatible. The input stage requires Semantic/Index (COLOR,0) as input, but it is not provided by the output stage. [ EXECUTION ERROR #342: DEVICE_SHADER_LINKAGE_SEMANTICNAME_NOT_FOUND ]

    This is the vertex shader's input signature as seen in PIX:

        // Input signature:
        //
        // Name                 Index   Mask Register SysValue Format  Used
        // -------------------- ----- ------ -------- -------- ------ ------
        // POSITION                 0   xyz         0     NONE  float   xyz
        // NORMAL                   0   xyz         1     NONE  float
        // COLOR                    0   xyzw        2     NONE  float

    The HLSL structure looks like this:

        struct VertexShaderInput
        {
            float3 Position : POSITION0;
            float3 Normal : NORMAL0;
            float4 Color : COLOR0;
        };

    The input layout, from PIX, is: (screenshot not reproduced here). The C# structure holding the data looks like this:

        [StructLayout(LayoutKind.Sequential)]
        public struct PositionColored
        {
            public static int SizeInBytes = Marshal.SizeOf(typeof(PositionColored));
            public static InputElement[] InputElements = new[]
            {
                new InputElement("POSITION", 0, Format.R32G32B32_Float, 0),
                new InputElement("NORMAL", 0, Format.R32G32B32_Float, 0),
                new InputElement("COLOR", 0, Format.R32G32B32A32_Float, 0)
            };

            Vector3 position;
            Vector3 normal;
            Vector4 color;

            #region Properties
            ...
            #endregion

            public PositionColored(Vector3 position, Vector3 normal, Vector4 color)
            {
                this.position = position;
                this.normal = normal;
                this.color = color;
            }

            public override string ToString()
            {
                StringBuilder sb = new StringBuilder(base.ToString());
                sb.Append(" Position=");
                sb.Append(position);
                sb.Append(" Color=");
                sb.Append(Color);
                return sb.ToString();
            }
        }

    SizeInBytes comes out to 40, which is correct (4*3 + 4*3 + 4*4 = 40). Can anyone find where the mistake is?

    Read the article

  • Wall avoidance steering

    - by Vodemki
    I'm making a small steering simulator using the Reynolds boid algorithm. Now I want to add a wall avoidance feature. My walls are in 3D and defined using two points, like this:

                 ---------. P2
                 |
                 |
        P1 .---------

    My agents have a velocity, a position, etc. Could you tell me how to implement avoidance for my agents?

        Vector2D ReynoldsSteeringModel::repulsionFromWalls()
        {
            Vector2D force;
            vector<Wall *> wallsList = walls();
            Point2D pos = self()->position();
            Vector2D velocity = self()->velocity();
            for (unsigned i = 0; i < wallsList.size(); i++)
            {
                //TODO
            }
            return force;
        }

    Then I use all the forces returned by my boid functions and apply them to my agent. I just need to know how to do that for my walls. Thanks for your help.

    Read the article

  • How many BasicEffects do you have in a Game? What is the best way to render multiple objects/shapes at once?

    - by Deukalion
    I'm trying to understand 3D rendering, and it seems that every time you render a new object (a 3D cube or something) you need a new BasicEffect for each box you render, unless you want the exact same texture. So if I have over a hundred boxes, each with a different texture, I need at least as many BasicEffects? Will that not be "too much" for the CPU/GPU in the end, or result in lagging? Is there any good way to render multiple objects (cubes or other shapes) at the same time? I've tried changing BasicEffect.Texture with each cube drawn, but that resulted in changing the first cube's texture too. Any suggestions would be really appreciated; I'm really new to 3D in XNA, so I'm trying to wrap my head around the best methods for, for example, rendering a map of objects (or shapes).
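
    For context, here is a hedged C# sketch (my own, XNA 4 assumed; the TexturedCube type and all names are made up, not from the question) of the "one shared BasicEffect, set Texture per cube" approach the poster says they tried. Note that pass.Apply() is called after the per-cube state is set, so each draw call picks up the new texture and world matrix:

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        // Hypothetical per-cube record; field names are illustrative only.
        class TexturedCube
        {
            public Texture2D Texture;
            public Matrix World;
            public VertexBuffer Vertices;
            public IndexBuffer Indices;
            public int PrimitiveCount;
        }

        static class SharedEffectSketch
        {
            // One BasicEffect is reused for every cube; only its per-cube
            // state (texture and world matrix) changes between draws.
            public static void DrawCubes(GraphicsDevice device, BasicEffect effect,
                                         Matrix view, Matrix projection,
                                         List<TexturedCube> cubes)
            {
                effect.TextureEnabled = true;
                effect.View = view;
                effect.Projection = projection;

                foreach (TexturedCube cube in cubes)
                {
                    effect.Texture = cube.Texture;
                    effect.World = cube.World;

                    device.SetVertexBuffer(cube.Vertices);
                    device.Indices = cube.Indices;

                    foreach (EffectPass pass in effect.CurrentTechnique.Passes)
                    {
                        pass.Apply(); // applied after the per-cube state change
                        device.DrawIndexedPrimitives(PrimitiveType.TriangleList,
                                                     0, 0,
                                                     cube.Vertices.VertexCount,
                                                     0, cube.PrimitiveCount);
                    }
                }
            }
        }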

    Read the article

  • World to Pixel Transformation

    - by D00d
    My objects have a location in world coordinates (basically, 1.0f is a meter). If I simply draw my objects using their world coordinates, each meter will correspond to a pixel. Obviously that's not what I want. Now, I don't want to have to apply a transformation to each and every object's position when I draw them. As I happen to be using XNA, and SpriteBatch allows a Matrix to be passed as an argument to its Begin method, I was wondering if there is a way to pass the world-to-pixel transformation in there. Any suggestions? So far Matrix.CreateScale(new Vector3(zoom, zoom, 1)) puts the objects in their proper spots, but it also scales up the sprites. Is there a way to transform the position without enlarging the sprite? Thanks
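
    For reference, a minimal C# sketch (mine, not from the question; XNA 4 assumed, and "pixelsPerMeter" is a made-up name) of the setup being described. The transform matrix handed to SpriteBatch.Begin is applied to everything drawn in that batch, so it scales both positions and sprite sizes, which matches the behaviour reported above:

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        static class WorldToPixelSketch
        {
            // Draws one sprite whose position is given in world meters, letting
            // the transform matrix on SpriteBatch.Begin do the meters-to-pixels
            // mapping.
            public static void DrawInWorldUnits(SpriteBatch spriteBatch,
                                                Texture2D texture,
                                                Vector2 positionInMeters,
                                                float pixelsPerMeter)
            {
                Matrix worldToPixel =
                    Matrix.CreateScale(new Vector3(pixelsPerMeter, pixelsPerMeter, 1f));

                // Note: this matrix affects sprite sizes as well as positions.
                spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                                  null, null, null, null, worldToPixel);
                spriteBatch.Draw(texture, positionInMeters, Color.White);
                spriteBatch.End();
            }
        }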

    Read the article

  • Spherical harmonics lighting interpolation

    - by TravisG
    I want to use hardware filtering to smooth out colors in the texels of a texture when I'm accessing texels at coordinates that are not directly at the center of a texel, the catch being that the texels store 2 bands of spherical harmonics coefficients (= 4 coefficients), not RGBA intensity values. Can I just use hardware filtering like that (GL_LINEAR, with and without mipmapping) without any special considerations? In other words: if I were to first convert the coefficients back to intensity representations and then manually interpolate between two intensities, would the resulting intensity be the same as if I interpolated between the coefficient vectors directly and then converted the interpolated result to intensities?

    Read the article

  • As an indie, how do you protect your game?

    - by user16829
    As an indie, you might not work at a company, and you may have a great game idea that you feel is going to be a big success. When you release your game, how do you protect it as your own creation, so that someone else can't steal the title and publish a "sequel" (e.g. Your-Game-Name 2, 3, 4), or even produce by-products (the way Angry Birds has) without your permission? How can we prevent these things from happening by legal means, like copyrights and trademarks? If a professional could fill us in on this, that would be great.

    Read the article

  • Yet another frustum culling question

    - by Christian Frantz
    This one is kinda specific. If I'm to implement frustum culling in my game, that means each one of my cubes would need a bounding sphere. My first question is: can I make the sphere so close to the edge of the cube that it's still easily clickable for destroying and building? Frustum culling is easily done in XNA, as I've recently learned; I just need to figure out where to place the code for the culling. I'm guessing in my method that draws all my cubes, but I could be wrong. My camera class currently implements a bounding frustum, which is updated in the update method like so:

        frustum.Matrix = (view * proj);

    Simple enough, as I can call that when I have a camera object in my class. This works for now, as I only have a camera in my main game class. The problem comes when I decide to move my camera to my player class, but I can worry about that later.

        ContainmentType CurrentContainmentType = ContainmentType.Disjoint;
        CurrentContainmentType = CamerasFrustrum.Contains(cubes.CollisionSphere);

    Can it really be as easy as adding those two lines to my foreach loop in my draw method? Or am I missing something bigger here? UPDATE: I have added the lines to my draw methods and it works great!! So great, in fact, that just moving a little bit removes the whole map. Many factors could have caused this, so I'll try to break it down.

        cubeBoundingSphere = new BoundingSphere(cubePosition, 0.5f);

    This is in my cube constructor. cubePosition is stored in an array. The vertices that define my cube are factors of 1, i.e. (1,0,1), so the radius should be 0.5. At least I think it should. The spheres are created every time a cube is created, of course.

        ContainmentType CurrentContainmentType = ContainmentType.Disjoint;
        foreach (Cube block in cube.cubes)
        {
            CurrentContainmentType = cam.frustum.Contains(cube.cubeBoundingSphere);
            ///more code here
            if (CurrentContainmentType != ContainmentType.Disjoint)
            {
                cube.Draw(effect);
            }

    This is within my draw method. Now I know this works, because the map disappears; it's just working wrong. Any idea what I'm doing wrong?

    Read the article

  • How can state changes be batched while adhering to opaque-front-to-back/alpha-blended-back-to-front?

    - by Sion Sheevok
    This is a question I've never been able to find the answer to. Batching objects with similar states is a major performance gain when rendering many objects. However, I've learned various rules for drawing objects in the game world: draw all opaque objects front-to-back; draw all alpha-blended objects back-to-front. Some of the major parameters to batch by, as I understand it, are textures, vertex buffers, and index buffers. It seems that, as long as you are adhering to the above two rules, there's little to be done in regards to batching. I see one possibility to batch while still adhering to the above two rules. Opaque objects can still be drawn out of depth order, because drawing them front-to-back is merely a fillrate optimization, while state changes may very well be far more expensive than the overdraw of drawing out of depth order. However, non-opaque objects, those that require alpha blending at least, must be drawn back-to-front in order to avoid rendering artifacts. Is the loss of the fillrate optimization for opaques worth the state-batching optimization?
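
    Purely as an illustration of the ordering described above (my own C# sketch with made-up types, not from the question): opaque calls are grouped by state key, sacrificing front-to-back order, while alpha-blended calls stay strictly back-to-front.

        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical description of one draw call; field names are mine.
        class DrawCall
        {
            public bool Opaque;
            public int StateKey;          // e.g. a hash of texture/shader/buffers
            public float DepthFromCamera; // larger = farther away
        }

        static class DrawOrderSketch
        {
            // Opaque: grouped by state to minimise state changes (front-to-back
            // is only a fill-rate optimisation, so it is traded away here).
            // Blended: strictly back-to-front, whatever that costs in state changes.
            public static IEnumerable<DrawCall> Order(IEnumerable<DrawCall> calls)
            {
                var opaque = calls.Where(c => c.Opaque)
                                  .OrderBy(c => c.StateKey);
                var blended = calls.Where(c => !c.Opaque)
                                   .OrderByDescending(c => c.DepthFromCamera);
                return opaque.Concat(blended);
            }
        }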

    Read the article

  • Can I name a team with the name of their city to avoid trademark issues?

    - by Paul
    I was wondering: if you want to make an NBA game on smartphones without the license held by EA, the first solution seems to be to give your teams different names, such as "Chicragro Brulls" (this is just for the example). But would it be possible to just call your teams by the name of the city, such as "Chicago vs. Dallas"? I know the first solution was chosen by Pro Evolution Soccer; do you know of any other games that don't use a license?

    Read the article

  • Should I be using a game engine?

    - by Kyle
    I'm an experienced programmer, but I'm completely new to making games. I'm thinking of making an iPhone game similar to a 2D tower-defense-type game. In the web programming world, it would be a big waste of time to make a website without using some sort of web framework (e.g. Ruby on Rails). Is it the same for making games? Do people mostly use some sort of framework or game engine for making a game? If so, what are the popular ones for iOS?

    Read the article

  • How do I create a big multiplayer world in UDK?

    - by Dorpe
    I want to create a big multiplayer world in UDK and I'm having a few difficulties. I created the biggest terrain possible, but then any terrain-related action I do takes forever. However, I've seen videos of people making the same size terrain and working without a problem. My PC is strong enough, so maybe someone can tell me what I'm doing wrong. I want to make it even bigger than the biggest terrain size, so I was thinking of doing level streaming, but then I read that streaming works server-side, which means that if I have a player on every terrain, all terrains will still be loaded; and I want to save as much memory as possible so it will work well online. Thanks for any help you can give.

    Read the article
