Search Results


  • Game Patching Mac/PC

    - by Centurion Games
    Just wondering what types of solutions are available to handle patching of PC/Mac games that don't have any sort of auto-updater built into them. On Windows, do you just spin off some sort of new InstallShield-style installer for the game that includes the updated files, hope you can read a valid registry key to point to the right directory, and overwrite files? If so, how does that translate to the Mac, where the game is normally distributed as a straight-up .app bundle? Is there a better approach than the above for an already-released product? (Assuming direct sales, not a marketplace with built-in auto-updating like Steam.) Are there any off-the-shelf auto-updater libraries that are cross-platform and could be easily integrated with a C/C++ code base even after a game has shipped, to make this a lot simpler? Also, how do auto-updaters interact with newer OSes that want applications and files digitally signed?
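
    A minimal sketch of the usual check-manifest-then-patch flow (everything here is hypothetical: the manifest URL, its format, and the helper names are illustrative, not a specific library's API):

    using System;
    using System.IO;
    using System.Net;

    static class Updater
    {
        // Hypothetical endpoint that serves "version patchUrl" as plain text.
        const string ManifestUrl = "https://example.com/game/latest.txt";

        public static void CheckForUpdate(Version installed, string installDir)
        {
            using (var web = new WebClient())
            {
                string[] manifest = web.DownloadString(ManifestUrl).Split(' ');
                var latest = new Version(manifest[0]);
                if (latest <= installed)
                    return; // already up to date

                // Download the patch archive, then hand off to a small helper
                // process that swaps files while the game itself is closed.
                string patchFile = Path.Combine(installDir, "patch.zip");
                web.DownloadFile(manifest[1], patchFile);
                ApplyPatch(patchFile, installDir);
            }
        }

        static void ApplyPatch(string patchFile, string installDir)
        {
            // Left as a stub: on Windows this extracts over installDir;
            // on macOS it would replace the contents of the .app bundle.
            throw new NotImplementedException();
        }
    }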

  • How many localizations are too many for a game?

    - by Krom Stern
    We are making an RTS game and we intend to add localizations for all the languages our players use. So far we have 16 locales, with 3-4 more planned. Now some crazy ideas are popping up from our community: players are asking for "funny text" localizations. We have already been offered a pack that does this for one of our languages. I was wondering where we should draw the line between official localizations that we include in the game and unofficial mods that players have to install on their own. Obviously, overcrowding the locale selection menu with all sorts of funny locales (LOL-cat, redneck, Welsh, medieval, simplified, etc.) for every language seems way too much. But is it really? What are the hidden pros and cons of having too many locales, and how many is too many?

  • How can I name and/or group a specific set of vertices in a 3D file container like OBJ? (in Blender)

    - by user827992
    I would like to export a 3D model with each part having a name, or a label if you will. For example, I would like to export a model of a human body and name each part in specific vertex groups: left hand, right hand, right foot, head, ears, and so on (you get the idea), so that I have a single 3D model that I can explode into various parts if needed. If there is a better technique for marking vertex groups in a 3D file, please share your solution. As a 3D editor I use Blender.
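
    For what it's worth, plain OBJ can already carry named parts: faces can be tagged with g (group) or o (object) statements, and Blender's OBJ exporter can emit them (the "Polygroups" export option maps vertex groups to g lines, if memory serves; treat that as an assumption to verify). A trimmed, hand-written example of the idea:

    v 0.0 0.0 0.0
    v 1.0 0.0 0.0
    v 1.0 1.0 0.0
    v 0.0 1.0 0.0

    g left_hand
    f 1 2 3

    g right_hand
    f 1 3 4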

  • Why is my model's scale changing after rotating it?

    - by justnS
    I have just started a simple flight simulator and have implemented roll and pitch. In the beginning, testing went very well; however, after about 15-20 seconds of constantly moving the thumbsticks in a random or circular motion, my model's scale begins to grow. At first I thought the model was moving closer to the camera, but I set breakpoints when it was happening and can confirm the translation of my orientation matrix remains (0, 0, 0). Is this a result of gimbal lock? Does anyone see an obvious error in my code below?

    public override void Draw( Matrix view, Matrix projection )
    {
        Matrix[] transforms = new Matrix[Model.Bones.Count];
        Model.CopyAbsoluteBoneTransformsTo( transforms );

        Matrix translateMatrix = Matrix.Identity
            * Matrix.CreateFromAxisAngle( _orientation.Right, MathHelper.ToRadians( pitch ) )
            * Matrix.CreateFromAxisAngle( _orientation.Down, MathHelper.ToRadians( roll ) );
        _orientation *= translateMatrix;

        foreach ( ModelMesh mesh in Model.Meshes )
        {
            foreach ( BasicEffect effect in mesh.Effects )
            {
                effect.World = _orientation * transforms[mesh.ParentBone.Index];
                effect.View = view;
                effect.Projection = projection;
                effect.EnableDefaultLighting();
            }
            mesh.Draw();
        }
    }

    public void Update( GamePadState gpState )
    {
        roll = 5 * gpState.ThumbSticks.Left.X;
        pitch = 5 * gpState.ThumbSticks.Left.Y;
    }
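
    Not from the original question, but a likely culprit worth noting: repeatedly multiplying _orientation by small incremental rotations accumulates floating-point error, and the matrix's basis vectors slowly drift off unit length, which shows up as scale. A common fix is to re-orthonormalize the matrix every frame or so; a sketch in XNA terms (the helper is mine, not part of the original code):

    using Microsoft.Xna.Framework;

    static class MatrixUtil
    {
        // Rebuild an orthonormal basis from the drifted matrix, then write it back.
        public static Matrix Orthonormalize(Matrix m)
        {
            Vector3 forward = Vector3.Normalize(m.Forward);
            Vector3 right = Vector3.Normalize(Vector3.Cross(forward, m.Up));
            Vector3 up = Vector3.Cross(right, forward);

            m.Forward = forward;
            m.Right = right;
            m.Up = up;
            return m;
        }
    }

    // Called after applying the incremental rotation:
    // _orientation = MatrixUtil.Orthonormalize(_orientation * translateMatrix);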

  • Architecture for a central renderer rather than self-rendering

    - by The Communist Duck
    For the architectural side of rendering, there are two main approaches: having each object render itself, or having a single renderer that renders everything. I'm currently aiming for the second one, for the following reasons:
    - The render list can be sorted so each shader is bound only once. Otherwise each object has to bind its shader, because it can't know whether that shader is already active.
    - The objects can be sorted and grouped.
    - It's easier to swap APIs. With a few macro lines it can be easy to switch between a DirectX renderer and an OpenGL renderer (not a reason for my project, but still a good point).
    - It's easier to manage rendering code.
    Of course, if anyone has strong recommendations for the first method, I will listen to them. But I was wondering how to make this work.
    First idea: the renderer has a list of pointers to the renderable components of each entity, which register themselves on RenderComponent creation. However, I'm worried that this may end up as a lot of extra pointer weight. But I can sort the list of pointers every so often.
    Second idea: the entire list of entities is passed to the renderer each render call. The renderer then sorts the list (each call, or maybe once?) and takes what it wants. That's a lot of passing and/or sorting, however.
    Other ideas: ??? PROFIT. Anyone got ideas? Thank you.
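
    A sketch of the first idea (all type names are mine, not an established API): components register with the renderer on creation, and the renderer sorts by a shader key only when the set changes, so state switches are batched:

    using System.Collections.Generic;

    class RenderComponent
    {
        public int ShaderId;
        public void Draw() { /* issue the draw call */ }
    }

    class Renderer
    {
        readonly List<RenderComponent> components = new List<RenderComponent>();
        bool dirty;

        public void Register(RenderComponent c) { components.Add(c); dirty = true; }
        public void Unregister(RenderComponent c) { components.Remove(c); dirty = true; }

        public void RenderAll()
        {
            if (dirty) // sort only when the set changed, not every frame
            {
                components.Sort((a, b) => a.ShaderId.CompareTo(b.ShaderId));
                dirty = false;
            }

            int boundShader = -1;
            foreach (var c in components)
            {
                if (c.ShaderId != boundShader) // bind each shader once per batch
                {
                    BindShader(c.ShaderId);
                    boundShader = c.ShaderId;
                }
                c.Draw();
            }
        }

        static void BindShader(int id) { /* API-specific */ }
    }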

  • Change the player's state and controls in-game

    - by Samurai Fox
    I'm using Unity 3D. Let's say the player is an ice cube, controlled like a normal player. On a button press, the ice transforms (with an animation) into water, which is controlled completely differently from the ice cube. Another good example: the player is a human being with normal FPS controls; on a button press, the human transforms into a bird and now has completely different controls. My question is: which would be easier and better?
    - Make one object with an animation transition, and stay in that animation state until the button is pressed again.
    - Make two objects, ice and water, where the ice has an animation of turning into water; then replace the ice (with its animation) with the water object.
    And if anyone knows this one too: how do I switch between two different types of player controls?
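
    On the controls question, one common Unity pattern (a sketch; the component and key names are mine, not from the question): put each control scheme in its own script on the same GameObject and toggle which one is enabled:

    using UnityEngine;

    public class FormSwitcher : MonoBehaviour
    {
        public MonoBehaviour iceControls;   // e.g. a sliding character controller
        public MonoBehaviour waterControls; // e.g. a fluid movement controller
        bool isWater;

        void Update()
        {
            if (Input.GetKeyDown(KeyCode.T)) // hypothetical transform key
            {
                isWater = !isWater;
                iceControls.enabled = !isWater;
                waterControls.enabled = isWater;
                // trigger the transform animation here as well
            }
        }
    }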

  • Basic collision direction detection for 2D objects

    - by Osso Buko
    I am trying to develop a platform game for Android using the ANdroid GL Engine (ANGLE), and I am having trouble with collision detection. I have two rectangular objects, with no rotation. Here is a scheme of the objects' attributes. What I am trying to do is: when the objects collide, they block each other's movement in that direction. Every object has four booleans (bTop, bBottom, bRight, bLeft); for example, when bBottom is true the object can't advance in that direction. I came up with a solution, but it seems to work in only one dimension: bottom and top, or right and left.

    public void collisionPlatform(MyObject a, MyObject b) {
        // first obj is player, second is a wall or a platform
        Vector p1 = a.mPosition;   // middle point of first object
        Vector d1 = a.mPosition2;  // width (mX) and height of first object
        Vector mSpeed1 = a.mSpeed; // speed vector of first object
        Vector p2 = b.mPosition;   // middle point of second object
        Vector d2 = b.mPosition2;  // width (mX) and height of second object
        Vector mSpeed2 = b.mSpeed; // speed vector of second object

        float xDist, yDist;  // distance between the middles of the two objects
        float width, height; // averages of the two objects' measurements,
                             // e.g. width = (width1 + width2) / 2

        xDist = (p1.mX - p2.mX); // if positive, first object is at the right
        yDist = (p1.mY - p2.mY); // if positive, first object is below

        width = (d1.mX + d2.mX) / 2;
        height = (d1.mY + d2.mY) / 2;

        if (Math.abs(xDist) < width && Math.abs(yDist) < height) {
            // the two objects have collided
            if (p1.mY > p2.mY) {
                // first object is below the second one
                a.bTop = true;
                if (a.mSpeed.mY < 0) a.mSpeed.mY = 0;
                b.bBottom = true;
                if (b.mSpeed.mY > 0) b.mSpeed.mY = 0;
            } else {
                a.bBottom = true;
                if (a.mSpeed.mY > 0) a.mSpeed.mY = 0;
                b.bTop = true;
                if (b.mSpeed.mY < 0) b.mSpeed.mY = 0;
            }
        }
    }

    As seen in my code, it simply does not work: when an object comes from the right or left, nothing happens. I have tried a couple of approaches other than this one, but none worked. I am guessing the right method will involve the mSpeed vector, but I have no idea how to do it. I would really appreciate your help. Sorry for my bad English.
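
    The standard fix for exactly this symptom (a sketch, independent of ANGLE; MyObject is trimmed to the flags that matter, and the speed-zeroing is omitted for brevity): compute the overlap on both axes and block movement along the axis with the smaller penetration, instead of always resolving vertically:

    using System;

    class MyObject { public bool bTop, bBottom, bLeft, bRight; }

    static class CollisionUtil
    {
        // xDist/yDist are the center deltas (a - b) and halfW/halfH the summed
        // half-sizes, mirroring the question's variables. As in the original
        // code, positive yDist means 'a' is below 'b'.
        public static void Resolve(float xDist, float yDist,
                                   float halfW, float halfH, MyObject a)
        {
            float xOverlap = halfW - Math.Abs(xDist);
            float yOverlap = halfH - Math.Abs(yDist);
            if (xOverlap <= 0 || yOverlap <= 0)
                return; // no collision at all

            if (xOverlap < yOverlap)        // shallower on X: a horizontal hit
            {
                if (xDist > 0) a.bLeft = true;  // 'a' sits to the right of 'b'
                else           a.bRight = true;
            }
            else                            // shallower on Y: a vertical hit
            {
                if (yDist > 0) a.bTop = true;   // 'a' sits below 'b'
                else           a.bBottom = true;
            }
        }
    }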

  • Sampling Heightmap Edges for Normal map

    - by pl12
    I use a Sobel filter to generate normal maps from procedural height maps. The heightmaps are 258x258 pixels. I scale my texture coordinates like so:

    texCoord = (texCoord * (256/258)) + (1/258)

    Yet even with this I am left with the following problem: the edges of the normal map still prove to be problematic. Setting the texture wrap mode to clamp was no help either.

    EDIT: The Sobel filter works by sampling the 8 pixels surrounding a given pixel, so that a derivative can be calculated and the "normal" of the given pixel found. The texture coordinates are instanced once per quad (for the quadtree that makes up the world) and are created as follows (it is quite possible that the problem results from the way I scale and offset the texCoords, as seen above):

    Java:
    for (int i = 0; i < vertices.length; i++) {
        Vector2f coord = new Vector2f(vertices[i].x / worldSize, vertices[i].z / worldSize);
        texCoords[i] = coord;
    }

    The quad used for input here rests on the X0Z plane. 'worldSize' is the diameter of the planet. No negative texCoords are seen, as the quad used as input for this method is not centered around the origin. Is there something I am missing here? Thanks.
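
    One thing that usually bites at borders (a sketch of my own, not the poster's code): when the Sobel taps read outside the heightmap, clamp the sample coordinates so edge pixels reuse the border texel instead of wrapping or reading garbage:

    using System;

    static class HeightmapSampler
    {
        // Clamp into [0, size-1] so the 3x3 Sobel window stays inside the map
        // even when centered on a border pixel.
        public static float Sample(float[,] height, int x, int y)
        {
            int w = height.GetLength(0);
            int h = height.GetLength(1);
            x = Math.Max(0, Math.Min(w - 1, x));
            y = Math.Max(0, Math.Min(h - 1, y));
            return height[x, y];
        }
    }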

  • Publishing a game -- any way to target both WP7 and Win8 Store?

    - by Rei Miyasaka
    I'm facing a dilemma which should soon become an important issue for a lot of developers.
    If I build a game in XNA, I won't be able to publish it on the Windows 8 Store, as it would be a classic application - and classic applications can't be sold on the Store.
    If I build a game in Metro DirectX, I would be able to sell it on the Store, but porting it to Windows Phone would mean porting it to Reach XNA, which would likely take more effort than porting to OS X or Android - both of which support C++. Of all the WinRT APIs supported across C++/JS/.NET, DirectX can only be programmed from C++. It's also unlikely that Microsoft will update Windows 7 or Vista to support the new DirectX features, although that would make Metro DirectX the first new version of DirectX to stop supporting the immediately preceding OS.
    If I build a game in pre-Win8 DirectX 9/10/11, I won't be able to sell it on the Windows Store or Windows Phone, but I could sell it on something like Steam. It would also involve the most manual plumbing; in fact, DirectWrite, despite being part of DirectX 11, doesn't talk to Direct3D.
    I'm getting really tired of all these restrictions - artificial and otherwise - and I'm coming to a point where I'm considering switching to a platform with a less fragmented API, like Android or Mac/iOS. As far as bringing a game to market goes, and excluding the actual market share of any platform I might consider, what other factors would help me make a decision? Just a few years ago this question was a lot easier to answer: if you were primarily concerned with Windows platforms, all you had to decide was whether you wanted DirectX, XNA, or something like SlimDX. If you made the wrong decision, no biggie - all you would really have lost is Xbox and the fairly small Windows Phone market.

  • Lightning whip particle effects

    - by Fibericon
    I'm currently using Mercury Particle Engine for the particle effects in my game, and I'm trying to create a sort of lightning whip - basically a lightning effect bound to a line that curves when the player moves. I know how to use the editor, and I have particle effects working in game. However, I'm completely lost as to where I should start for this specific particle effect. Perhaps if I could find the code for it in a different particle engine, I could convert it, but I can't seem to find that either. What I did find was a lot of tutorials for creating the lines associated with lightning programmatically, which doesn't help in this case because I don't want it to be rigid. Perhaps it would be more like some sort of laser beam with crackling effects around it? I'm running into a wall as far as even beginning to implement this goes.
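
    One way to get the "bound to a curved line" part (a sketch; I'm assuming MPE's ParticleEffect can be triggered at a position, so verify the overload against your version): re-emit the crackle/glow effect along a quadratic Bezier between the hand and the whip tip every frame, and let the particles themselves provide the lightning look:

    using Microsoft.Xna.Framework;
    using ProjectMercury; // Mercury Particle Engine

    static class WhipEmitter
    {
        // Quadratic Bezier: a curve from 'a' to 'c' bent toward control point 'b'.
        static Vector2 Bezier(Vector2 a, Vector2 b, Vector2 c, float t)
        {
            return Vector2.Lerp(Vector2.Lerp(a, b, t), Vector2.Lerp(b, c, t), t);
        }

        // Trigger the effect at points along the whip's current curve each frame,
        // so the bolt follows the player's motion instead of a rigid segment.
        public static void Emit(ParticleEffect effect,
                                Vector2 hand, Vector2 control, Vector2 tip)
        {
            const int samples = 20;
            for (int i = 0; i <= samples; i++)
            {
                Vector2 p = Bezier(hand, control, tip, (float)i / samples);
                effect.Trigger(p); // assumed overload; MPE can trigger at a position
            }
        }
    }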

  • Custom mesh format - yea or nay?

    - by Electro
    In the process of writing my game prototype, I have found the OBJ format to be insufficient for my needs - it does not support any sort of animation, it doesn't support triangle strips (I'm targeting my ancient hardware). MD2 wouldn't fit the bill because it doesn't have support for named model pieces. MD3 would probably work, but like OBJ, it doesn't have support for triangle strips. Considering the limitations of the formats above, I've come to the conclusion that it may be necessary to write my own format to accommodate my requirements, but that feels like reinventing the wheel. So, I need a format which can specify indexed tri-strips, supports textures, UV-mapping, collision data, can have multiple named segments and supports animations (have I forgotten anything?). Is there any format like that which already exists, or do I have to write my own?
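
    If you do end up rolling your own, one battle-tested shape for it (a sketch; the tags and fields are hypothetical, not an existing spec) is an IFF-style chunked container: every chunk carries a tag and a byte length, so old loaders can skip chunks they don't understand, which leaves room to bolt on animations or collision data later:

    using System.IO;

    struct ChunkHeader
    {
        public uint Tag;    // e.g. 'VERT', 'STRP', 'ANIM', packed into a uint
        public uint Length; // payload size in bytes, excluding this header
    }

    static class MeshReader
    {
        public static void SkimChunks(BinaryReader r)
        {
            while (r.BaseStream.Position < r.BaseStream.Length)
            {
                uint tag = r.ReadUInt32();
                uint length = r.ReadUInt32();
                long next = r.BaseStream.Position + length;

                // Dispatch on known tags; anything unknown is skipped safely.
                // if (tag == TAG_VERT) ReadVertices(r, length); ...

                r.BaseStream.Seek(next, SeekOrigin.Begin);
            }
        }
    }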

  • Texture2D.GetData fails to return pixel colour data

    - by Chris Charabaruk
    Because I'm using sprite sheets instead of an individual texture per sprite, I need to pass in a Rectangle when calling Texture2D.GetData() in my collision detection for per-pixel tests. Unfortunately, without fail I get an ArgumentException percolated down from an internal method inside the Texture (not Texture2D) class. My code for getting the texture data looks like this:

    public override Color[] GetPixelData()
    {
        Color[] data = new Color[(int)size.Product()];
        Rectangle rect = new Rectangle(hframe * (int)size.X, vframe * (int)size.Y,
                                       (int)size.X, (int)size.Y);
    #if DEBUG
        if (sprite.Bounds.Contains(rect) && sprite.Format == SurfaceFormat.Color)
    #endif
            sprite.GetData(0, rect, data, 0, 1);
        return data;
    }

    Even with the check to ensure I'm grabbing a valid rectangle and that the texture format matches what I'm trying to get, I still get that exception, claiming "The size of the data passed in is too large or too small for this resource." Unfortunately, the debugger won't let me check the locals within the Texture.ValidateTotalSize() method where the exception originates. Has anyone else had this problem and knows how to fix it? I'm relying on AABB testing only for now, but that doesn't really work for some of my game's entities due to odd shapes, rotation and scaling.
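
    An observation from XNA's documented signature, GetData<T>(int level, Rectangle? rect, T[] data, int startIndex, int elementCount), rather than from the post itself: the final argument is the number of elements to copy, not a count of rectangles, so passing 1 while rect covers size.X * size.Y texels would trip exactly this size-validation check. A call that matches the rectangle would look like:

    // elementCount must equal the number of texels in the sub-rectangle.
    int elementCount = rect.Width * rect.Height;
    sprite.GetData(0, rect, data, 0, elementCount);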

  • Triple buffering causes input lag?

    - by user782220
    Consider some time in between two vsyncs. Suppose the first display buffer is being used to display the current image, and suppose the game was really fast and computed and rendered the next image to the second display buffer, and the one after that to the third display buffer - that is, rendering to the second and third display buffers happens so fast that both complete before the next vsync. Now suppose input from the user comes in. What you would like is for the results of the input to show up on the next vsync, or (probably more typically) the vsync after that. However, with the third display buffer already rendered, the input can only affect the image after that one, meaning the input takes effect, at best, three vsyncs later. I wish I had an image to show the exact timings of what I mean.
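
    An illustrative timeline of the scenario described (my own sketch, not from the post):

    vsync 0: buffer A is on screen; B and C are already rendered
             <- user input arrives here
    vsync 1: B goes on screen (rendered before the input)
    vsync 2: C goes on screen (rendered before the input)
    vsync 3: D goes on screen - the first frame that can reflect the input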

  • Tile-wide extent tracing on a grid.

    - by Larolaro
    I'm currently working on A* pathfinding on a grid, and I'm looking to smooth the generated path while also considering the extent of the character moving along it. I'm using a grid for the pathfinding, but character movement is free-roaming, not strict tile-to-tile movement. To achieve a smoother, more efficient path, I do line traces on the grid to determine whether there are unwalkable tiles between tiles, so I can shave off unnecessary corners. However, because a line trace is zero-extent, it doesn't consider the extent of the character and gives bad results (it fails to return unwalkable tiles just missed by the line, causing unwanted collisions). So rather than a line algorithm that determines the tiles under it, I'm looking for one that determines the tiles under a tile-wide extent line. Here is an image to help visualise my problem! Does anyone have any ideas? I've been working with Bresenham's line and other alternatives, but I haven't yet figured out how to nail this specific problem.
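
    One cheap approximation (a sketch; LineIsWalkable stands in for the zero-extent trace already in the codebase and is hypothetical here): trace the two edges of the swept corridor instead of its centerline, offset perpendicular to the direction by half the character's width:

    using Microsoft.Xna.Framework;

    static class CorridorTrace
    {
        public static bool CorridorIsWalkable(Vector2 from, Vector2 to, float halfWidth)
        {
            Vector2 dir = Vector2.Normalize(to - from);
            Vector2 perp = new Vector2(-dir.Y, dir.X) * halfWidth; // perpendicular offset

            // Both edge lines must be clear for the corridor to count as walkable.
            return LineIsWalkable(from + perp, to + perp)
                && LineIsWalkable(from - perp, to - perp);
        }

        static bool LineIsWalkable(Vector2 a, Vector2 b)
        {
            /* the existing zero-extent Bresenham-style grid trace goes here */
            return true;
        }
    }

    Caveat: if tiles are smaller than the character, a blocked tile in the middle of the corridor can slip between the two edge lines; in that case add a third trace down the center, or use a proper supercover/swept-AABB traversal.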

  • Passing data between engine layers

    - by spaceOwl
    I am building a software system (a game engine with networking support) that is made up of (roughly) these layers:
    - Game layer
    - Messaging layer
    - Networking layer
    Game-related data is passed to the messaging layer (this could be anything game-specific), where it is converted to network-specific messages (which are then serialized to byte arrays). I'm looking for a way to convert "game" data into "network" data such that no strong coupling between these layers exists. As it stands, the messaging layer sits between the other two layers and "knows" both of them (it contains Converter objects that know how to translate data objects between the two layers, back and forth). I am not sure this is the best solution. Is there a good design for passing objects between layers? I'd like to learn more about the different options.
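
    A sketch of one way to formalize what the Converter objects already do (the type names are mine): the messaging layer owns a registry keyed by game-data type, so the game layer only hands over plain objects and the network layer only ever sees byte arrays; neither references the other directly:

    using System;
    using System.Collections.Generic;

    interface IConverter
    {
        byte[] ToNetwork(object gameData);
        object FromNetwork(byte[] payload);
    }

    class MessageHub
    {
        readonly Dictionary<Type, IConverter> converters =
            new Dictionary<Type, IConverter>();

        public void Register<T>(IConverter converter)
        {
            converters[typeof(T)] = converter;
        }

        // Game layer calls this; the result goes straight to the network layer.
        public byte[] Outgoing(object gameData)
        {
            return converters[gameData.GetType()].ToNetwork(gameData);
        }
    }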

  • Behaviour Trees with irregular updates

    - by Robominister
    I'm interested in behaviour trees that aren't iterated every game tick, but every so often (edit: the tree could specify how many frames within the main game loop to wait before running its tick function again). Every theoretical implementation I have seen of behaviour trees has the tree search carried out every game update - which seems necessary, because a leaf node (e.g. a behaviour like 'return to base') needs to be constantly checked to see if it is still running, has failed, or has completed. Can anyone suggest how I might start implementing a tree that isn't run every tick, or point me to good material specific to this case (I am struggling to find anything)?
    My thoughts so far:
    - Action leaf nodes (when they start) must only push some kind of action object onto a list for an entity, rather than directly calling any code that makes the entity do something. The list of actions for the entity would be run every frame (update any that need to run, pop any that have completed).
    - The return state from a given action must be fed back into the tree, so that when we run the tree iteration again and reach the same action leaf node (the tree having so far determined that we ought to still be trying this action), we know whether the action has completed, is still running, etc.
    - If my actual action code runs from an action list on the entity, then I may need to cancel previously running actions in the list - I am thinking I can just delete the entire stack of queued-up actions. I've seen the idea of ActionLists which block lower-priority actions when a higher-priority one is added, but that seems very close in logic to behaviour trees, and I don't want to duplicate behaviour.
    This leaves me with some questions:
    1) How would I feed the action return state back into the tree? It's obvious I need to store some information relating to 'currently executing actions' on the entity, and check that in the tree tick, but I can't imagine how.
    2) Does having a separate behaviour tree (for deciding behaviour) and action list (for carrying out the actual queued-up actions) sound like a reasonable approach? (See the sketch after this list.)
    3) Is the approach of updating a behaviour tree irregularly actually used by anyone? It seems like a nice idea for budgeting AI search time when you have a lot of AI entities to process.
    (Edit) I am also thinking about storing a single instance of a given behaviour tree in memory and providing it by reference to any entity that uses it. So any information about which action was last selected for execution on an entity must be stored in a data context relative to the entity (which the tree can check). (I am probably answering my own questions as I go!)
    I hope I have expressed my questions adequately! Thanks in advance for any help :)
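
    A minimal sketch of that split, with one shared tree reading per-entity state (every type here is hypothetical, just to make the shape concrete):

    enum ActionStatus { Running, Succeeded, Failed }

    // Per-entity state; the shared tree only reads and writes this context.
    class BehaviorContext
    {
        public ActionStatus LastStatus = ActionStatus.Succeeded; // fed back into the tree
        public int FramesUntilTick;                              // per-entity tick budget
    }

    // One shared instance; it keeps no per-entity state of its own.
    class BehaviorTree
    {
        public int TickInterval = 10; // e.g. run the search every 10 frames

        public void Tick(BehaviorContext ctx)
        {
            // Search the tree. Where the search reaches an action leaf, compare
            // against ctx.LastStatus instead of executing anything directly;
            // the leaf only pushes/cancels entries on the entity's action list.
        }
    }

    class Agent
    {
        public BehaviorContext Context = new BehaviorContext();

        public void Update(BehaviorTree sharedTree)
        {
            // The action list still runs every frame (omitted here);
            // the tree itself runs only when its budget expires.
            if (--Context.FramesUntilTick > 0)
                return;

            sharedTree.Tick(Context);
            Context.FramesUntilTick = sharedTree.TickInterval;
        }
    }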

  • How does a game developer get feedback from gamers (not developers) or start a forum community without paying for advertising or hiring Q&A teams?

    - by Carter81
    I am familiar with a lot of game developer forums, but I assume those are much less likely to attract casual commentators, and I fear that feedback from a gamer's perspective there would often be tainted by their game-dev perspective. For example, if I were making an RTS game and wanted to get feedback from "the RTS gamers", where would I go? Is there a general idea of what type of website or forum to go to? Do you go to specific game websites to try to "steal" attention? Would this not equate to spam or inappropriate posting? What is considered appropriate and what inappropriate? I am not asking for specifics. I am asking how one "starts a community", or how one gets feedback from gamers, without resorting to spamming forums or advertising just to see what sticks. What TYPE OF PLACE does one go to? Are there already sites designed for this purpose? I tried going to what was once a very popular forum for feedback from what I believed was a niche hardcore group of gamers in the genre, but its popularity seems to have died significantly, leaving only trolls and very young teenagers. The resulting feedback was quite disappointing, mainly because of how little there was. Many years ago, feedback would flood in by the hundreds, and quickly. Without that website, I am at a loss as to where to go to see what people think of ideas, gather feedback from a gamer's perspective (not a developer's), or where to pull from to start my own site's forum. I am out of ideas, short of going to various game forums and posting in the off-topic sections there.

  • Objects won't render when Texture Compression + Mipmapping is Enabled

    - by felipedrl
    I'm optimizing my game and I've just implemented compressed (DXTn) texture loading in OpenGL. I've worked my way through the bugs, but I can't figure out this one: objects with DXTn + mipmapped textures are not being rendered. It's not as if they appear with a flat color; they just don't appear at all. DXTn-textured objects without mipmaps render, and mipmapped non-compressed textures render just fine. The texture in question is 256x256, and I generate the mips all the way down to 4x4, i.e. one block. I've checked in gDEBugger and it displays all the levels (7) just fine. I'm using GL_LINEAR_MIPMAP_NEAREST for the min filter and GL_LINEAR for the mag filter. The texture is compressed and the mipmaps created offline with the Paint.NET tool, using the super-sampling method (I also tried bilinear, just in case). Source follows:

    [SNIPPET 1: Loading DDS into system memory + initializing object]

    // Read header
    DDSHeader header;
    file.read(reinterpret_cast<char*>(&header), sizeof(DDSHeader));

    uint pos = static_cast<uint>(file.tellg());
    file.seekg(0, std::ios_base::end);
    uint dataSizeInBytes = static_cast<uint>(file.tellg()) - pos;
    file.seekg(pos, std::ios_base::beg);

    // Read file data
    mData = new unsigned char[dataSizeInBytes];
    file.read(reinterpret_cast<char*>(mData), dataSizeInBytes);
    file.close();

    mMipmapCount = header.mipmapcount;
    mHeight = header.height;
    mWidth = header.width;
    mCompressionType = header.pf.fourCC;

    // Only support files divisible by 4 (for compression block algorithms)
    massert(mWidth % 4 == 0 && mHeight % 4 == 0);
    massert(mCompressionType == NO_COMPRESSION ||
            mCompressionType == COMPRESSION_DXT1 ||
            mCompressionType == COMPRESSION_DXT3 ||
            mCompressionType == COMPRESSION_DXT5);

    // Allow textures up to 65536x65536
    massert(header.mipmapcount <= MAX_MIPMAP_LEVELS);

    mTextureFilter = TextureFilter::LINEAR;
    if (mMipmapCount > 0) {
        mMipmapFilter = MipmapFilter::NEAREST;
    } else {
        mMipmapFilter = MipmapFilter::NO_MIPMAP;
    }

    mBitsPerPixel = header.pf.bitcount;
    if (mCompressionType == NO_COMPRESSION) {
        if (header.pf.flags & DDPF_ALPHAPIXELS) {
            // The only format supported w/ alpha is A8R8G8B8
            massert(header.pf.amask == 0xFF000000 && header.pf.rmask == 0xFF0000 &&
                    header.pf.gmask == 0xFF00 && header.pf.bmask == 0xFF);
            mInternalFormat = GL_RGBA8;
            mFormat = GL_BGRA;
            mDataType = GL_UNSIGNED_BYTE;
        } else {
            massert(header.pf.rmask == 0xFF0000 && header.pf.gmask == 0xFF00 &&
                    header.pf.bmask == 0xFF);
            mInternalFormat = GL_RGB8;
            mFormat = GL_BGR;
            mDataType = GL_UNSIGNED_BYTE;
        }
    } else {
        uint blockSizeInBytes = 16;
        switch (mCompressionType) {
        case COMPRESSION_DXT1:
            blockSizeInBytes = 8;
            if (header.pf.flags & DDPF_ALPHAPIXELS) {
                mInternalFormat = GL_COMPRESSED_RGBA_S3TC_DXT1_EXT;
            } else {
                mInternalFormat = GL_COMPRESSED_RGB_S3TC_DXT1_EXT;
            }
            break;
        case COMPRESSION_DXT3:
            mInternalFormat = GL_COMPRESSED_RGBA_S3TC_DXT3_EXT;
            break;
        case COMPRESSION_DXT5:
            mInternalFormat = GL_COMPRESSED_RGBA_S3TC_DXT5_EXT;
            break;
        default:
            // Not supported (DXT2, DXT4 or any other compression format)
            massert(false);
        }
    }

    [SNIPPET 2: Uploading into video memory]

    massert(mData != NULL);
    glGenTextures(1, &mHandle);
    massert(mHandle != 0);
    glBindTexture(GL_TEXTURE_2D, mHandle);
    commitFiltering();

    uint offset = 0;
    Renderer* renderer = Renderer::getInstance();
    switch (mInternalFormat) {
    case GL_RGB:
    case GL_RGBA:
    case GL_RGB8:
    case GL_RGBA8:
        for (uint i = 0; i < mMipmapCount + 1; ++i) {
            uint width  = std::max(1U, mWidth >> i);
            uint height = std::max(1U, mHeight >> i);
            glTexImage2D(GL_TEXTURE_2D, i, mInternalFormat, width, height,
                         mHasBorder, mFormat, mDataType, &mData[offset]);
            offset += width * height * (mBitsPerPixel / 8);
        }
        break;
    case GL_COMPRESSED_RGB_S3TC_DXT1_EXT:
    case GL_COMPRESSED_RGBA_S3TC_DXT1_EXT:
    case GL_COMPRESSED_RGBA_S3TC_DXT3_EXT:
    case GL_COMPRESSED_RGBA_S3TC_DXT5_EXT:
    {
        uint blockSize = 16;
        if (mInternalFormat == GL_COMPRESSED_RGB_S3TC_DXT1_EXT ||
            mInternalFormat == GL_COMPRESSED_RGBA_S3TC_DXT1_EXT) {
            blockSize = 8;
        }
        uint width = mWidth;
        uint height = mHeight;
        for (uint i = 0; i < mMipmapCount + 1; ++i) {
            uint nBlocks = ((width + 3) / 4) * ((height + 3) / 4);
            // Only POT textures allowed for mipmapping
            massert(width % 4 == 0 && height % 4 == 0);
            glCompressedTexImage2D(GL_TEXTURE_2D, i, mInternalFormat, width, height,
                                   mHasBorder, nBlocks * blockSize, &mData[offset]);
            offset += nBlocks * blockSize;
            if (width <= 4 && height <= 4) {
                break;
            }
            width  = std::max(4U, width / 2);
            height = std::max(4U, height / 2);
        }
        break;
    }
    default:
        // Not supported
        massert(false);
    }

    Also, I don't understand the "+3" in the block count computation, but while looking for a solution to my problem I've encountered people defining it that way. I guess it won't make a difference for POT textures, but I put it in just in case. Thanks.

  • J2ME character animation with multiple sprite sheets

    - by Alex
    I'm working on a J2ME game and I want to have walking animations. Each direction of walking has a separate sprite sheet (i.e. one for walking up, one for walking right, etc.); I also have a static idle image for each direction, held together in a single file. I've tried holding an array of sprites in my player class and then just drawing the sprite corresponding to the current direction, but this doesn't seem to work. I'm aware that if I combined all the animations into one sprite sheet I could set up different animation sequences, but I want to be able to do it with separate images for each animation. Does anyone know of a way to achieve this? Ideally without too much extra code (as opposed to combining the sprites into one sheet).
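
    A structural sketch of the per-direction-sheet approach (written in C# style for consistency with the other sketches on this page; the same shape maps onto MIDP's javax.microedition.lcdui.game.Sprite, stepping frames with nextFrame()). All types here are stand-ins, not a platform API:

    enum Facing { Up = 0, Right = 1, Down = 2, Left = 3 }

    // Minimal stand-ins for whatever wraps one sheet / one image on the platform.
    interface IDrawable { void Draw(int x, int y); }
    interface IAnimated : IDrawable { void Advance(); }

    class Player
    {
        public IAnimated[] Walk = new IAnimated[4]; // one sheet per direction
        public IDrawable[] Idle = new IDrawable[4]; // one static image per direction
        public Facing Facing = Facing.Down;
        public bool Moving;

        public void Draw(int x, int y)
        {
            int d = (int)Facing;
            if (Moving)
            {
                Walk[d].Advance(); // only the active direction's animation steps
                Walk[d].Draw(x, y);
            }
            else
            {
                Idle[d].Draw(x, y); // static idle frame for the current direction
            }
        }
    }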

  • How to cover the widest range of computers when publishing?

    - by DevilWithin
    When you plan a game, or even when you have already made one and it's time to publish, you wonder how much of your audience is covered by the game's technology demands. I'm directing this essentially at casual games, as I constantly see people with old laptops they are unable to replace - laptops with integrated cards whose OpenGL version doesn't even support textures larger than 1024x1024. These people may be avid gamers as well, and a reasonable share of the audience to consider giving the chance to play casual games, since they cannot play any blockbusters. A very noticeable example I've seen is Angry Birds. Its gameplay is merely casual (I think nobody disagrees here), and still it uses such high-resolution textures that at least OpenGL 2.0 or thereabouts is needed, which locks out a lot of people. So the actual question is: what is a good tradeoff for this issue? Would it be better to sacrifice texture resolution for everyone but support more hardware? Would it be better to keep the high quality and just slice the textures into smaller ones, sacrificing a little performance? What else? Any ideas about this topic are welcome for discussion.
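
    One practical middle ground (a sketch using OpenTK's bindings, assuming a current GL context; the equivalent raw call is glGetIntegerv(GL_MAX_TEXTURE_SIZE, ...)): query the limit at startup and load an asset tier that fits, rather than shipping one fixed resolution:

    using OpenTK.Graphics.OpenGL;

    static class AssetTier
    {
        public static string Pick()
        {
            // GL_MAX_TEXTURE_SIZE is the largest texture dimension the driver accepts.
            int maxTextureSize;
            GL.GetInteger(GetPName.MaxTextureSize, out maxTextureSize);

            // Hypothetical tiers: full-size art where supported, downscaled otherwise.
            return maxTextureSize >= 2048 ? "textures/high" : "textures/low";
        }
    }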

  • How can I mark a pixel in the stencil buffer?

    - by János Turánszki
    I have never used the stencil buffer for anything until now, but I want to change that. I have an idea of how it should work: the GPU discards or keeps rasterized pixels before the pixel shader based on the stencil buffer value at the given position and some stencil operation. What I don't know is how I would mark a pixel in the stencil buffer with a specific value. For example, I draw my scene and want to mark everything drawn with a specific material (this material could be looked up from a texture, so ideally I would mark the pixel in the pixel shader), so that later, when I do some post-processing on the scene, I only apply it to the marked pixels. I didn't find anything on the internet besides how to set up a stencil buffer and explanations of the different stencil operations. I was expecting to find a system-value semantic like SV_Depth to write to in the pixel shader (because the stencil buffer shares the same resource as the depth buffer in D3D11), but there is no such thing on MSDN. So how should I do this? If I am misunderstanding something, please help me clear it up.
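
    The usual answer (not from the post; sketched with XNA's state objects because they expose the same knobs as D3D11's D3D11_DEPTH_STENCIL_DESC): you don't write stencil from the pixel shader at all. The reference value is set per draw call, and a Replace pass operation stamps it into the stencil buffer for every pixel that passes, so you mark pixels by drawing the relevant geometry with a marking state. One caveat: because the reference is per draw, a material looked up per-pixel from a texture can't vary the stencil value; for that case people typically write a mask into a render target instead.

    using Microsoft.Xna.Framework.Graphics;

    // Marking pass: while drawing the specially-materialed objects, every pixel
    // that passes writes the reference value into the stencil buffer.
    var mark = new DepthStencilState
    {
        StencilEnable = true,
        StencilFunction = CompareFunction.Always,
        StencilPass = StencilOperation.Replace,
        ReferenceStencil = 1,
    };

    // Post-process pass: the full-screen effect only touches pixels whose
    // stencil value equals the same reference.
    var applyWhereMarked = new DepthStencilState
    {
        StencilEnable = true,
        StencilFunction = CompareFunction.Equal,
        StencilPass = StencilOperation.Keep,
        ReferenceStencil = 1,
    };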

  • How can I create animated card graphics like in Hearthstone?

    - by Appeltaart
    In the game Hearthstone, there are cards with animated images on them. A few examples:
    http://www.hearthhead.com/card=281/argent-commander
    http://www.hearthhead.com/card=469/blood-imp
    The animations seem to be composed of multiple effects:
    - Particle systems.
    - Fading sprites in and out / rotating them.
    - Simple scrolling textures.
    - A distortion effect, very evident in the cape and hair of example 1.
    - Swirling smoke effects: the light in example 1 and the green/purple glow in example 2.
    The first three elements are trivial; what I'd like to know is how the last two could be done. Can this even be done in real time in a game, or are they pre-rendered animations?

  • Posting to Facebook from Unity3D on iOS and Android

    - by Guye Incognito
    I've made a game in Unity3D for iOS and Android. We have our own server to manage high scores and things like that. We'd also like the ability to post high scores to Facebook, and to do things like this: if you and a friend have both posted a score for our game to Facebook, and you then post a better score, you can send them a notification. I'm reading around about this now, but I'm wondering what the normal way to do this is. Possible approaches:
    - Use the Unity Facebook SDK. Looks like it would work, but there are different versions for iOS and Android.
    - Call the Facebook Graph API directly from our server. This would unify the iOS and Android versions, and it also makes sense given that our server holds and deals with all the high-score info - though I can imagine difficulties with logging in / authentication.

  • Cocos2d v2.0 and OpenGL 2.0/1.0: where to start

    - by mm24
    I started developing my very first game 3 months ago using Cocos2d 2.0 for iPhone. I am now at the stage where I'd like to add some cool effects to the bullets and some special weapons (see my waveforms question here). I got a good answer in the cocos2d-iphone forum (see this one). Unfortunately I am a bit paralyzed now: I don't know whether I would be overdoing it by learning OpenGL 2.0, or whether I should just stick to the old 1.0. There is a good intro to the various tutorials written on Steffen Itterheim's blog (see this post). I would like to add to my game:
    - a blur effect on the bullets (here is a tutorial for OpenGL 1.0)
    - a waveform (see above)
    - some realistic water ripples (here is a nice sample code)
    So, given that I don't want to overdo things but at the same time I want to achieve those effects, where should I start? Should I discard the OpenGL 1.0 tutorials, or should I use only OpenGL 1.0 code? How can I avoid confusion? I mean, the compiler seems to recognize both, but there are some conflicting calls in certain circumstances. I am fairly sure this has an explanation - is there a reference to it somewhere?

  • LOD in modern games

    - by Firas Assaad
    I'm currently working on my master's thesis about LOD and mesh simplification, and I've been reading many academic papers and articles on the subject. However, I can't find enough information about how LOD is used in modern games. I know many games use some sort of dynamic LOD for terrain, but what about elsewhere? Level of Detail for 3D Graphics, for example, points out that discrete LOD (where artists prepare several models in advance) is widely used because of the performance overhead of continuous LOD. That book was published in 2002, however, and I'm wondering if things are different now. There has been some research into performing dynamic LOD using the geometry shader (this paper for example, with its implementation in ShaderX6); would that be used in a modern game? To summarize, my question is about the state of LOD in modern video games: what algorithms are used, and why? In particular, is view-dependent continuous simplification used, or does the runtime overhead make discrete models with proper blending and impostors a more attractive solution? If discrete models are used, is an algorithm (e.g. vertex clustering) used to generate them offline, do artists manually create the models, or is a combination of both methods used?
