Search Results

Search found 13494 results on 540 pages for 'board game'.


  • Terrain square loading

    - by AndroidXTr3meN
    Games like Skyrim, Morrowind, and more divide the terrain into quads or squares, if I'm correct. The player is always at #5:

        1 | 2 | 3
        4 | 5 | 6
        7 | 8 | 9

    Whenever you cross a border you unload and load the new "areas". But if the user steps just over the edge and a second later goes back to the previous area, a lot of unnecessary loading and unloading is done. Is there a general approach to this? Because I don't think games like Skyrim have this issue. Cheers!
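
    A standard answer is hysteresis: load chunks when the player enters a new cell, but only unload them once they fall outside a larger "keep" radius, so stepping back and forth across a border never thrashes. A minimal sketch of the idea (chunk size and radii are illustrative assumptions, not taken from any particular engine):

        #include <cmath>
        #include <cstdlib>

        struct ChunkCoord { int x, z; };

        const float CHUNK_SIZE  = 128.0f; // world units per chunk (assumed)
        const int   LOAD_RADIUS = 1;      // load everything within 1 chunk of the player
        const int   KEEP_RADIUS = 2;      // but keep chunks resident out to 2

        ChunkCoord playerChunk(float px, float pz) {
            return { (int)std::floor(px / CHUNK_SIZE),
                     (int)std::floor(pz / CHUNK_SIZE) };
        }

        // Because KEEP_RADIUS > LOAD_RADIUS, crossing a border and stepping
        // straight back never triggers an unload/reload cycle.
        bool shouldUnload(ChunkCoord chunk, ChunkCoord player) {
            return std::abs(chunk.x - player.x) > KEEP_RADIUS ||
                   std::abs(chunk.z - player.z) > KEEP_RADIUS;
        }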


  • SDL_BlitSurface segmentation fault (surfaces aren't null)

    - by Trollkemada
    My app is crashing on SDL_BlitSurface() and I can't figure out why. I think it has something to do with my static object; if you read the code you'll see why I think so. This happens when the limits of the map are reached, i.e. (i >= width || j >= height). This is the code:

    Map.cpp (the render code):

        Tile const * Map::getTyle(int i, int j) const {
            if (i >= 0 && j >= 0 && i < width && j < height) {
                return data[i][j];
            } else {
                return &Tile::ERROR_TYLE; // This makes SDL_BlitSurface (called later) crash
                //return new Tile(TileType::ERROR); // This works with no problem (but is a memory leak, of course)
            }
        }

        void Map::render(int x, int y, int width, int height) const {
            //DEBUG("(Rendering...) x: "<<x<<", y: "<<y<<", width: "<<width<<", height: "<<height);
            int firstI = x / TileType::PIXEL_PER_TILE;
            int firstJ = y / TileType::PIXEL_PER_TILE;
            int lastI = (x + width) / TileType::PIXEL_PER_TILE;
            int lastJ = (y + height) / TileType::PIXEL_PER_TILE;
            // The previous integer division rounds down when dealing with positive values,
            // but it rounds up negative values. This is a fix for that (we need those
            // values always rounded down).
            if (firstI < 0) { firstI--; }
            if (firstJ < 0) { firstJ--; }
            const int firstX = x;
            const int firstY = y;
            SDL_Rect srcRect;
            SDL_Rect dstRect;
            for (int i = firstI; i <= lastI; i++) {
                for (int j = firstJ; j <= lastJ; j++) {
                    if (i*TileType::PIXEL_PER_TILE < x) {
                        srcRect.x = x % TileType::PIXEL_PER_TILE;
                        srcRect.w = TileType::PIXEL_PER_TILE - (x % TileType::PIXEL_PER_TILE);
                        dstRect.x = i*TileType::PIXEL_PER_TILE + (x % TileType::PIXEL_PER_TILE) - firstX;
                    } else if (i*TileType::PIXEL_PER_TILE >= x + width) {
                        srcRect.x = 0;
                        srcRect.w = x % TileType::PIXEL_PER_TILE;
                        dstRect.x = i*TileType::PIXEL_PER_TILE - firstX;
                    } else {
                        srcRect.x = 0;
                        srcRect.w = TileType::PIXEL_PER_TILE;
                        dstRect.x = i*TileType::PIXEL_PER_TILE - firstX;
                    }
                    if (j*TileType::PIXEL_PER_TILE < y) {
                        srcRect.y = 0;
                        srcRect.h = TileType::PIXEL_PER_TILE - (y % TileType::PIXEL_PER_TILE);
                        dstRect.y = j*TileType::PIXEL_PER_TILE + (y % TileType::PIXEL_PER_TILE) - firstY;
                    } else if (j*TileType::PIXEL_PER_TILE >= y + height) {
                        srcRect.y = y % TileType::PIXEL_PER_TILE;
                        srcRect.h = y % TileType::PIXEL_PER_TILE;
                        dstRect.y = j*TileType::PIXEL_PER_TILE - firstY;
                    } else {
                        srcRect.y = 0;
                        srcRect.h = TileType::PIXEL_PER_TILE;
                        dstRect.y = j*TileType::PIXEL_PER_TILE - firstY;
                    }
                    SDL::YtoSDL(dstRect.y, srcRect.h);
                    SDL_BlitSurface(getTyle(i,j)->getType()->getSurface(), &srcRect, SDL::getScreen(), &dstRect); // <-- Crash HERE
                    /*DEBUG("i = "<<i<<", j = "<<j);
                    DEBUG("srcRect.x = "<<srcRect.x<<", srcRect.y = "<<srcRect.y<<", srcRect.w = "<<srcRect.w<<", srcRect.h = "<<srcRect.h);
                    DEBUG("dstRect.x = "<<dstRect.x<<", dstRect.y = "<<dstRect.y);*/
                }
            }
        }

    Tile.h:

        #ifndef TILE_H
        #define TILE_H

        #include "TileType.h"

        class Tile {
        private:
            TileType const * type;
        public:
            static const Tile ERROR_TYLE;
            Tile(TileType const * t);
            ~Tile();
            TileType const * getType() const;
        };

        #endif

    Tile.cpp:

        #include "Tile.h"

        const Tile Tile::ERROR_TYLE(TileType::ERROR);

        Tile::Tile(TileType const * t) : type(t) {}
        Tile::~Tile() {}

        TileType const * Tile::getType() const {
            return type;
        }

    TileType.h:

        #ifndef TILETYPE_H
        #define TILETYPE_H

        #include "SDL.h"
        #include "DEBUG.h"

        class TileType {
        protected:
            TileType();
            ~TileType();
        public:
            static const int PIXEL_PER_TILE = 30;
            static const TileType * ERROR;
            static const TileType * AIR;
            static const TileType * SOLID;
            virtual SDL_Surface * getSurface() const = 0;
            virtual bool isSolid(int x, int y) const = 0;
        };

        #endif

    ErrorTyle.h:

        #ifndef ERRORTILE_H
        #define ERRORTILE_H

        #include "TileType.h"

        class ErrorTile : public TileType {
            friend class TileType;
        private:
            ErrorTile();
            mutable SDL_Surface * surface;
            static const char * FILE_PATH;
        public:
            SDL_Surface * getSurface() const;
            bool isSolid(int x, int y) const;
        };

        #endif

    ErrorTyle.cpp (the surface can't be loaded when the object is built, because it is a static object and SDL_Init() needs to be called first):

        #include "ErrorTile.h"

        const char * ErrorTile::FILE_PATH = ("C:\\error.bmp");

        ErrorTile::ErrorTile() : TileType(), surface(NULL) {}

        SDL_Surface * ErrorTile::getSurface() const {
            if (surface == NULL) {
                if (SDL::isOn()) {
                    surface = SDL::loadAndOptimice(ErrorTile::FILE_PATH);
                    if (surface->w != TileType::PIXEL_PER_TILE || surface->h != TileType::PIXEL_PER_TILE) {
                        WARNING("Bad tile surface size");
                    }
                } else {
                    ERROR("Trying to load a surface, but SDL is not on");
                }
            }
            if (surface == NULL) { // This if doesn't get called, so surface != NULL
                ERROR("WTF? Can't load surface :\\");
            }
            return surface;
        }

        bool ErrorTile::isSolid(int x, int y) const {
            return true;
        }
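
    A note on the hunch about the static object: a crash in exactly this pattern is often the C++ static initialization order problem. Tile::ERROR_TYLE (in Tile.cpp) is constructed from TileType::ERROR, which lives in a different translation unit, and C++ gives no ordering guarantee between the two, so ERROR_TYLE can be built from a garbage pointer. This is only a guess from the code shown, but the construct-on-first-use idiom sidesteps it; a sketch using the question's names:

        // Function-local statics are initialized on first call, which is
        // necessarily after TileType::ERROR exists (assuming no calls happen
        // during static initialization of other globals).
        const Tile& errorTile() {
            static const Tile tile(TileType::ERROR);
            return tile;
        }
        // getTyle() would then return &errorTile() instead of &Tile::ERROR_TYLE.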


  • Initializing OpenFeint for Android outside the main Application

    - by Ef Es
    I am trying to create a generic C++ bridge to use OpenFeint with Cocos2d-x, which is supposed to be just "add and run", but I am finding problems. OpenFeint is very particular about initializing: it requires a Context parameter that MUST be the main Application, in the onCreate method, never the constructor. Also, the main App's name must be edited into the manifest. I am trying to work around this. So far I have tried to create a new Application that calls my Application, to test if just the type is needed, but you really do need the main Android application. I also tried using a handler for a static initialization, but I found pretty much the same problem. Has anybody been able to do it? This is my working-but-not-as-intended code snippet:

        public class DerpHurr extends Application {
            @Override
            public void onCreate() {
                super.onCreate();
                initializeOpenFeint("TestApp", "edthedthedthedth", "aeyaetyet", "65462");
            }

            public void initializeOpenFeint(String appname, String key, String secret, String id) {
                Map<String, Object> options = new HashMap<String, Object>();
                options.put(OpenFeintSettings.SettingCloudStorageCompressionStrategy,
                            OpenFeintSettings.CloudStorageCompressionStrategyDefault);
                OpenFeintSettings settings = new OpenFeintSettings(appname, key, secret, id, options);
                // RIGHT HERE: "this" has to be the main Application
                OpenFeint.initialize(this, settings, new OpenFeintDelegate() { });
                System.out.println("OpenFeint Started");
            }
        }

    Manifest:

        <application android:debuggable="true" android:label="@string/app_name" android:name=".DerpHurr">


  • Toon/cel shading with variable line width?

    - by Nick Wiggill
    I see a few broad approaches out there to doing cel shading:

      • Duplication and enlargement of the model with flipped normals (not an option for me)
      • Sobel filter / fragment shader approaches to edge detection
      • Stencil buffer approaches to edge detection
      • Geometry (or vertex) shader approaches that calculate face and edge normals

    Am I correct in assuming the geometry-centric approach gives the greatest amount of control over lighting and line thickness, e.g. for terrain where you might see the silhouette line of a hill merging gradually into a plain? What if I didn't need pixel lighting on my terrain surfaces? (And I probably won't, as I plan to use cell-based vertex- or texturemap-based lighting/shadowing.) Would I then be better off sticking with the geometry-type approach, or should I go for a screen-space / fragment approach instead to keep things simpler? If so, how would I get the "inking" of hills within the mesh silhouette, rather than only the outline of the entire mesh (with no "ink" details inside that outline)? Lastly, is it possible to cheaply emulate the flipped-normals approach using a geometry shader? Is that exactly what the GS approaches do? What I want: varying line thickness with intrusive lines inside the silhouette... What I don't want...


  • Multi Pass Blend

    - by Kirk Patrick
    I am seeking the simplest working example of a two-pass HLSL pixel shader. It can do anything really, but the main idea is to perform "ping ponging": take the output of the first pass and send it to the second pass. In my example I want to draw to the R channel and then draw to the G channel and produce a simple Venn diagram in the shader, but I need to detect overlap. I can currently detect one or the other but not the overlap. There are a red and a green circle overlapping, and I want to put a dynamic texture map in the overlap region; I can currently put it in either one, but not both. Below is how it looks in the shader:

        Texture2D shaderTexture;
        SamplerState SampleType;

        //////////////
        // TYPEDEFS //
        //////////////
        struct PixelInputType
        {
            float4 position : SV_POSITION;
            float2 tex0 : TEXCOORD0;
            float2 tex1 : TEXCOORD1;
            float4 color : COLOR;
        };

        ////////////////////////////////////////////////////////////////////////////////
        // Pixel Shader
        ////////////////////////////////////////////////////////////////////////////////
        float4 main(PixelInputType input) : SV_TARGET
        {
            float4 textureColor0;
            float4 textureColor1;

            // Sample the pixel color from the texture using the sampler at this texture coordinate location.
            textureColor0 = shaderTexture.Sample(SampleType, input.tex0);
            textureColor1 = shaderTexture.Sample(SampleType, input.tex1);

            if (input.color[0] == 1.0f && input.color[1] == 1.0f) // Requires multi-pass
                textureColor0 = textureColor1;

            return textureColor0;
        }

    Here is the calling code (that needs to be modified):

        m_d3dContext->IASetVertexBuffers(0, 2, vbs, strides, offsets);
        m_d3dContext->IASetIndexBuffer(m_indexBuffer.Get(), DXGI_FORMAT_R32_UINT, 0);
        m_d3dContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
        m_d3dContext->IASetInputLayout(m_inputLayout.Get());
        m_d3dContext->VSSetShader(m_vertexShader.Get(), nullptr, 0);
        m_d3dContext->VSSetConstantBuffers(0, 1, m_constantBuffer.GetAddressOf());
        m_d3dContext->PSSetShader(m_pixelShader.Get(), nullptr, 0);
        m_d3dContext->PSSetShaderResources(0, 1, m_SRV.GetAddressOf());
        m_d3dContext->PSSetSamplers(0, 1, m_QuadsTexSamplerState.GetAddressOf());
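
    For reference, ping-ponging in D3D11 usually means two offscreen textures, swapping which one is bound as render target and which as shader input between passes. A hedged sketch of just the swap (resource and view creation omitted; all names here are placeholders, not from the question's code):

        // Pass N samples from texture [src] and writes to texture [dst].
        ID3D11RenderTargetView*   rtv[2]; // render-target views of textures A and B
        ID3D11ShaderResourceView* srv[2]; // shader-resource views of the same textures
        int src = 0, dst = 1;

        for (int pass = 0; pass < 2; ++pass)
        {
            // Unbind the previous SRV first: D3D11 refuses to have a texture
            // bound as input and output simultaneously.
            ID3D11ShaderResourceView* nullSRV = nullptr;
            context->PSSetShaderResources(0, 1, &nullSRV);

            context->OMSetRenderTargets(1, &rtv[dst], nullptr);
            context->PSSetShaderResources(0, 1, &srv[src]);

            // ... bind the shader for this pass and draw the fullscreen quad ...
            context->Draw(vertexCount, 0);

            std::swap(src, dst); // this pass's output feeds the next pass
        }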


  • Is there any heuristic to polygonize a closed 2D raster shape with n triangles?

    - by Arthur Wulf White
    Let's say we have a 2D image, black on white, that shows a closed geometric shape. Is there any (not naive brute-force) algorithm that approximates that shape as closely as possible with n triangles? If you want a formal definition of "as closely as possible": approximate the shape with a polygon that, when rendered into a new 2D image, will match the largest number of pixels possible in the original image.
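
    Whatever search heuristic is used, the objective above is easy to evaluate directly: rasterize the candidate and count agreeing pixels. A small sketch of that scoring function (assuming both images are binary masks of the same size; the rasterizer itself is not shown):

        #include <cstdint>
        #include <vector>

        // Fraction of pixels that agree between the original binary image and
        // a rasterization of the candidate polygon (row-major, 1 = shape).
        double matchScore(const std::vector<uint8_t>& original,
                          const std::vector<uint8_t>& rasterized,
                          int width, int height) {
            int agree = 0;
            for (int i = 0; i < width * height; ++i)
                if (original[i] == rasterized[i]) ++agree;
            return double(agree) / double(width * height);
        }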


  • Simple OpenGL program: major slowdown at high resolution

    - by Grieverheart
    I have created a small OpenGL 3.3 (Core) program using freeglut. The whole geometry is two boxes and one plane with some textures. I can move around like in an FPS and that's it. The problem is that I face a big slowdown in fps when I make my window large (i.e. above 1920x1080). I have monitored GPU usage in full-screen and it shows a GPU load of nearly 100% and a Memory Controller load of ~85%. At 600x600, these numbers are at about 45%; my CPU is also at full load. I use deferred rendering at the moment, but even with forward rendering the slowdown was nearly as severe. I can't imagine my GPU is not powerful enough for something this simple when I play many games at 1080p (I have a GeForce GT 120M, btw). Below are my shaders.

    First pass vertex shader:

        #version 330 core

        uniform mat4 ModelViewMatrix;
        uniform mat3 NormalMatrix;
        uniform mat4 MVPMatrix;
        uniform float scale;

        layout(location = 0) in vec3 in_Position;
        layout(location = 1) in vec3 in_Normal;
        layout(location = 2) in vec2 in_TexCoord;

        smooth out vec3 pass_Normal;
        smooth out vec3 pass_Position;
        smooth out vec2 TexCoord;

        void main(void){
            pass_Position = (ModelViewMatrix * vec4(scale * in_Position, 1.0)).xyz;
            pass_Normal = NormalMatrix * in_Normal;
            TexCoord = in_TexCoord;
            gl_Position = MVPMatrix * vec4(scale * in_Position, 1.0);
        }

    First pass fragment shader:

        #version 330 core

        uniform sampler2D inSampler;

        smooth in vec3 pass_Normal;
        smooth in vec3 pass_Position;
        smooth in vec2 TexCoord;

        layout(location = 0) out vec3 outPosition;
        layout(location = 1) out vec3 outDiffuse;
        layout(location = 2) out vec3 outNormal;

        void main(void){
            outPosition = pass_Position;
            outDiffuse = texture(inSampler, TexCoord).xyz;
            outNormal = pass_Normal;
        }

    Second pass vertex shader:

        #version 330 core

        uniform float scale;

        layout(location = 0) in vec3 in_Position;

        void main(void){
            gl_Position = mat4(1.0) * vec4(scale * in_Position, 1.0);
        }

    Second pass fragment shader:

        #version 330 core

        struct Light{
            vec3 direction;
        };

        uniform ivec2 ScreenSize;
        uniform Light light;
        uniform sampler2D PositionMap;
        uniform sampler2D ColorMap;
        uniform sampler2D NormalMap;

        out vec4 out_Color;

        vec2 CalcTexCoord(void){
            return gl_FragCoord.xy / ScreenSize;
        }

        vec4 CalcLight(vec3 position, vec3 normal){
            vec4 DiffuseColor = vec4(0.0);
            vec4 SpecularColor = vec4(0.0);

            vec3 light_Direction = -normalize(light.direction);
            float diffuse = max(0.0, dot(normal, light_Direction));

            if (diffuse > 0.0) {
                DiffuseColor = diffuse * vec4(1.0);
                vec3 camera_Direction = normalize(-position);
                vec3 half_vector = normalize(camera_Direction + light_Direction);
                float specular = max(0.0, dot(normal, half_vector));
                float fspecular = pow(specular, 128.0);
                SpecularColor = fspecular * vec4(1.0);
            }
            return DiffuseColor + SpecularColor + vec4(0.1);
        }

        void main(void){
            vec2 TexCoord = CalcTexCoord();
            vec3 Position = texture(PositionMap, TexCoord).xyz;
            vec3 Color = texture(ColorMap, TexCoord).xyz;
            vec3 Normal = normalize(texture(NormalMap, TexCoord).xyz);

            out_Color = vec4(Color, 1.0) * CalcLight(Position, Normal);
        }

    Is it normal for the GPU to be used that much under the described circumstances? Is it due to poor performance of freeglut? I understand that the problem could be specific to my code, but I can't paste the whole code here; if you need more info, please tell me.
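
    One way to confirm where the time goes at high resolutions (fill rate and G-buffer bandwidth are the usual suspects for deferred renderers) is to wrap each pass in a GL 3.3 timer query. A sketch, assuming a GL 3.3 context is current:

        // GL_TIME_ELAPSED measures GPU time between Begin/End, which separates
        // a fill-rate-bound pass from CPU-side overhead such as freeglut's loop.
        GLuint query;
        glGenQueries(1, &query);

        glBeginQuery(GL_TIME_ELAPSED, query);
        // ... draw one pass (e.g. the geometry pass) ...
        glEndQuery(GL_TIME_ELAPSED);

        GLuint64 ns = 0;
        glGetQueryObjectui64v(query, GL_QUERY_RESULT, &ns); // blocks until the result is ready
        printf("pass took %.2f ms\n", ns / 1.0e6);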


  • What is the correct and most efficient approach of streaming vertex data?

    - by Martijn Courteaux
    Usually, I do this in my current OpenGL ES project (for iOS):

    Initialization:

      • Create two VBOs and one index buffer (since I will use the same indices), same size.
      • Create two VAOs and configure them, both bound to the same index buffer.

    Each frame:

    1. Choose a VBO/VAO couple (different from the previous frame, so I'm alternating).
    2. Bind that VBO.
    3. Upload new data using glBufferSubData(GL_ARRAY_BUFFER, ...).
    4. Bind the VAO.
    5. Render my stuff using glDrawElements(GL_***, ...).
    6. Unbind the VAO.

    However, someone told me to avoid uploading data (step 3) and then immediately rendering the new data (step 5). I should avoid this because the glDrawElements call will stall until the buffer is effectively uploaded to VRAM. So he suggested drawing all the geometry I uploaded the previous frame, and uploading in the current frame what will be drawn in the next frame. Thus, everything is rendered with a delay of one frame. Is this true, or is my approach to streaming vertex data already sound? (I do know that the pipeline will stall the other way around, i.e. when you draw and immediately try to change the buffer data. But I'm not doing that, since I implemented double buffering.)
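
    A related alternative worth knowing is buffer orphaning, which attacks the same stall without the one-frame delay: re-specify the buffer storage right before the upload so the driver can hand back fresh memory instead of waiting for the GPU to finish reading last frame's copy. A sketch (buffer name and sizes are placeholders):

        // glBufferData with NULL "orphans" the old storage: the driver keeps
        // it alive for any in-flight draw and gives this call a new block,
        // so the following glBufferSubData never waits on the GPU.
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, bufferSize, NULL, GL_STREAM_DRAW);
        glBufferSubData(GL_ARRAY_BUFFER, 0, bytesThisFrame, vertexData);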


  • How can I convert an image from raw data in Android without any munging?

    - by stephelton
    I have raw image data (may be png, jpg, ...) and I want it converted in Android without changing its pixel depth (bpp). In particular, when I load a grayscale (8 bpp) image that I want to use as alpha (glTexImage() with GL_ALPHA), it converts it to 16 bpp (presumably 5_6_5). While I do have a plan B (actually, I'm probably on plan E by now, this is really becoming annoying), I would really like to discover an easy way to do this using what is readily available in the API. So far, I'm using BitmapFactory.decodeByteArray(). While I'm at it: I'm doing this from a native environment via JNI (passing the buffer in from C, and a new buffer back to C from Java). Any portable solution in C/C++ would be preferable, but I don't want to introduce anything that might break in future versions of Android, etc.


  • Can't read .cso files but I can read their .hlsl versions?

    - by Jader J Rivera
    Well, I've been trying to read a .cso file to use as a shader for a DirectX program I'm currently making. The problem is that no matter how I implemented a way to read the file, it never worked. After fidgeting around I discovered that it's only the .cso files I can't read; I can read anything else (which means the code works), even their .hlsl files. Which is strange, because the .hlsl (high level shader language) files are supposed to turn into .cso (compiled shader object) files. What I'm currently doing is:

        vector<byte> Read(string File) {
            vector<byte> Text;
            fstream file(File, ios::in | ios::ate | ios::binary);
            if (file.is_open()) {
                Text.resize(file.tellg());
                file.seekg(0, ios::beg);
                file.read(reinterpret_cast<char*>(&Text[0]), Text.size());
                file.close();
            }
            return Text;
        }

    If I then call it:

        Read("VertexShader.hlsl"); // Works
        Read("VertexShader.cso");  // Doesn't work?!?!

    And I need the .cso version of the shader to draw my sexy triangles. Without it my life and application will never continue, and I have no idea what could be wrong. (I've also asked this at Stack Overflow but still no answers.)
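
    Since Read() returns an empty vector whenever is_open() fails, it can't distinguish "missing file" from "unreadable file". A small diagnostic sketch that surfaces the OS error (the path is a placeholder; a common culprit in setups like this is that the compiler writes the .cso into the build output directory rather than the directory the program runs from):

        #include <cerrno>
        #include <cstdio>
        #include <cstring>

        // fstream hides the OS error; probing with fopen() exposes errno,
        // e.g. "No such file or directory" when the .cso isn't where the
        // working directory says it should be.
        void reportOpenFailure(const char* path) {
            FILE* f = std::fopen(path, "rb");
            if (!f) {
                std::printf("cannot open %s: %s\n", path, std::strerror(errno));
            } else {
                std::fclose(f);
            }
        }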


  • Pixel alignment algorithm

    - by user42325
    I have a set of square blocks that I want to draw in a window. I am sure the coordinate calculation is correct, but on screen some squares' edges overlap with others and some don't. I remember the problem is caused by pixel accuracy, and that there's a specific topic related to this kind of problem in 2D image rendering, but I don't remember what exactly it is or how to solve it. Look at this screenshot: each block should have a fixed-width margin, but in the image the vertical white lines have different widths, though the horizontal lines look fine.
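
    The classic cause of this symptom is rounding each block's position and width independently from accumulated floats, so the truncation error lands in different margins. One fix is to round only the ideal edge positions and derive widths as differences, so adjacent blocks always share exact pixel boundaries. A sketch (step = block width + margin, in fractional pixels; the names are illustrative):

        #include <cmath>

        // Ideal left edge of block i, snapped once to the pixel grid.
        int edge(int i, float step) {
            return (int)std::lround(i * step);
        }

        // Draw block i from edge(i, step) to edge(i + 1, step) minus the
        // margin: widths then vary by at most one pixel, edges never overlap.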


  • Mapping a 3D texture to a standard hollow-hull 3D model

    - by John
    I have 3D models which are typical hollow hulls. If such a model also had a 3D volumetric/voxel texture map, then given a point P inside the model, I'd like to be able to find its uvw coordinates within the 3D texture. Is this possible by simply setting 3D texcoords on my existing mesh, or does it have to be broken up into polyhedra? Is there a way to map a 3D texture onto a mesh without doing this?
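
    If the texture volume is taken to span the model's bounding box, the uvw of any point P (on the hull or inside it) is just its normalized object-space position, with no per-vertex 3D texcoords or decomposition into polyhedra required. A sketch of that mapping under that assumption:

        struct Vec3 { float x, y, z; };

        // Map an object-space point into [0,1]^3 texture coordinates,
        // assuming the 3D texture is aligned with the mesh's axis-aligned
        // bounding box [bbMin, bbMax].
        Vec3 uvwFromObjectSpace(Vec3 p, Vec3 bbMin, Vec3 bbMax) {
            return { (p.x - bbMin.x) / (bbMax.x - bbMin.x),
                     (p.y - bbMin.y) / (bbMax.y - bbMin.y),
                     (p.z - bbMin.z) / (bbMax.z - bbMin.z) };
        }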


  • Java 2D Rectangle Collision? [on hold]

    - by Andreas Elia
    I just want to know of another (longer or shorter) way of getting 100% effective collisions in a 2D platformer. The current collision system in place works from coords on the level and does not always work reliably: it draws a rectangle and checks whether any two points collide, and from testing it can sometimes "glitch" and let the player clip into walls, etc. Thank you in advance for any help/support.

    Player class: http://pastebin.com/2zE8vz8R
    Main class: http://pastebin.com/A6Utb3ti
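
    For axis-aligned rectangles, testing full rectangle overlap instead of individual points is both simpler and airtight: point checks miss collisions whenever a wall is thinner than the spacing between the sampled points. A sketch in C-style code (the Java version is line-for-line the same):

        struct Rect { float x, y, w, h; };

        // True iff the rectangles share any area; no corner sampling, so a
        // thin wall can never slip between test points.
        bool intersects(const Rect& a, const Rect& b) {
            return a.x < b.x + b.w && b.x < a.x + a.w
                && a.y < b.y + b.h && b.y < a.y + a.h;
        }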


  • Java tile engine: changing number in array not changing texture

    - by Corey
    I draw my map from a txt file. Would I have to write to the text file to notice the changes I made? Right now it changes the number in the array but the tile texture doesn't change. Do I have to do more than just change the number in the array?

        public class Tiles {
            public Image[] tiles = new Image[5];
            public int[][] map = new int[64][64];
            private Image grass, dirt, fence, mound;
            private SpriteSheet tileSheet;
            public int tileWidth = 32;
            public int tileHeight = 32;
            Player player = new Player();

            public void init() throws IOException, SlickException {
                tileSheet = new SpriteSheet("assets/tiles.png", tileWidth, tileHeight);
                grass = tileSheet.getSprite(0, 0);
                dirt = tileSheet.getSprite(7, 7);
                fence = tileSheet.getSprite(2, 0);
                mound = tileSheet.getSprite(2, 6);
                tiles[0] = grass;
                tiles[1] = dirt;
                tiles[2] = fence;
                tiles[3] = mound;

                int x = 0, y = 0;
                BufferedReader in = new BufferedReader(new FileReader("assets/map.dat"));
                String line;
                while ((line = in.readLine()) != null) {
                    String[] values = line.split(",");
                    for (String str : values) {
                        int str_int = Integer.parseInt(str);
                        map[x][y] = str_int;
                        //System.out.print(map[x][y] + " ");
                        y = y + 1;
                    }
                    //System.out.println("");
                    x = x + 1;
                    y = 0;
                }
                in.close();
            }

            public void update(GameContainer gc) {
            }

            public void render(GameContainer gc) {
                for (int x = 0; x < map.length; x++) {
                    for (int y = 0; y < map.length; y++) {
                        int textureIndex = map[y][x];
                        Image texture = tiles[textureIndex];
                        texture.draw(x * tileWidth, y * tileHeight);
                    }
                }
            }

    Mouse picking:

        public void checkDistance(GameContainer gc) {
            Input input = gc.getInput();
            float mouseX = input.getMouseX();
            float mouseY = input.getMouseY();
            double mousetileX = Math.floor((double) mouseX / tiles.tileWidth);
            double mousetileY = Math.floor((double) mouseY / tiles.tileHeight);
            double playertileX = Math.floor(playerX / tiles.tileWidth);
            double playertileY = Math.floor(playerY / tiles.tileHeight);
            double lengthX = Math.abs((float) playertileX - mousetileX);
            double lengthY = Math.abs((float) playertileY - mousetileY);
            double distance = Math.sqrt((lengthX * lengthX) + (lengthY * lengthY));
            if (input.isMousePressed(Input.MOUSE_LEFT_BUTTON) && distance < 4) {
                System.out.println("Clicked");
                if (tiles.map[(int) mousetileX][(int) mousetileY] == 1) {
                    tiles.map[(int) mousetileX][(int) mousetileY] = 0;
                }
            }
            System.out.println(tiles.map[(int) mousetileX][(int) mousetileY]);
        }


  • iOS : Creating a 3D Compass

    - by Md. Abdul Munim
    Originally posted here: iOS : Creating a 3D Compass

    Hi everybody. I'm quite new to this forum. I posted the same question on Stack Overflow, and there some people suggested moving it here so that I can get quick help from more specialists in this regard. So what's the big matter? Actually, I want to make a 3D metal compass in iOS which will have a movable cover. That is, when you touch it with 3 fingers and try to move your fingers upward, the cover keeps moving with your fingers, and after a certain distance it gets opened. Once you pull it down using 3 fingers again, it gets closed. I cannot attach an image here as I don't have enough reputation, so I request you to check the original question at Stack Overflow that I linked at the top. Is it possible using Core Animation and CALayers, or would I have to use OpenGL ES? Please someone help me out, I am badly in need of it, and I need to complete it ASAP!


  • Writing to a D3DFMT_R32F render target clamps to 1

    - by Mike
    I'm currently implementing a picking system. I render some objects into a frame buffer whose render target has the D3DFMT_R32F format. For each mesh, I set an integer shader constant, which is its material index. My shader is simple: I output the position of each vertex, and for each pixel I cast the material index to float and assign this value to the red channel:

        int ObjectIndex;
        float4x4 WvpXf : WorldViewProjection < string UIWidget = "None"; >;

        struct VS_INPUT {
            float3 Position : POSITION;
        };

        struct VS_OUTPUT {
            float4 Position : POSITION;
        };

        struct PS_OUTPUT {
            float4 Color : COLOR0;
        };

        VS_OUTPUT VSMain( const VS_INPUT input )
        {
            VS_OUTPUT output = (VS_OUTPUT)0;
            output.Position = mul( float4(input.Position, 1), WvpXf );
            return output;
        }

        PS_OUTPUT PSMain( const VS_OUTPUT input, in float2 vpos : VPOS )
        {
            PS_OUTPUT output = (PS_OUTPUT)0;
            output.Color.r = float( ObjectIndex );
            output.Color.gba = 0.0f;
            return output;
        }

        technique Default
        {
            pass P0
            {
                VertexShader = compile vs_3_0 VSMain();
                PixelShader = compile ps_3_0 PSMain();
            }
        }

    The problem I have is that somehow the values written to the render target are clamped between 0.0f and 1.0f. I've tried changing the render target format, but I always get clamped values, and I don't know what the root of the problem is. For information: I have a depth render target attached to the frame buffer, blending is disabled in the render state, and the stencil is disabled. Any ideas?


  • How to draw texture to screen in Unity?

    - by user1306322
    I'm looking for a way to draw textures to the screen in Unity, in a similar fashion to XNA's SpriteBatch.Draw method. Ideally, I'd like to write a few helper methods to make all my XNA code work in Unity; this is the first issue I've faced on this seemingly long journey. I guess I could just use quads, but I'm not so sure that's the least expensive way performance-wise. I could do that stuff in XNA anyway, but they didn't make SpriteBatch without a reason, I believe.


  • Good resources for 2.5D and rendering walls, floors, and sprites

    - by Aidan Mueller
    I'm curious as to how games like Prelude of the Chambered handle graphics; if you play it for a bit you will see what I mean, and it made me wonder how it works. (It is open-source, so you can get the source on This page.) I did find a few tutorials, and while they did help with some things, I couldn't understand some of the stuff, and I don't like doing things I don't understand. Does anyone know of any good sites for this kind of 2.5D? Any help is appreciated. After all, I've been googling all day. Thanks :)


  • Adding sub-entities to existing entities. Should it be done in the Entity and Component classes?

    - by Coyote
    I'm in a situation where a player can be given control of small parts of an entity (e.g. the left missile battery). Therefore I started implementing sub-entities as follows. Entities are objects with 3 arrays:

      • pointers to components
      • pointers to sub-entities
      • communication subscribers (temporary implementation)

    Now when an entity is built it has a few components, as you might expect, and I can also attach sub-entities, which are handled with some dedicated code in the Entity and Component classes. I noticed sub-entities share data with their parent in 3 areas:

      • position: the sub-entities use the parent's position, with their own as an offset
      • scripts: sub-entities drain ammo and energy from the parent
      • physics: sub-entities add weight to the parent

    I made this to go forward quickly, but as I'm slowly fixing the current implementation I wonder if this wasn't a mistake. Is my current implementation something commonly done? Will this implementation put me in a corner? I thought it might be better to create some sort of SubEntityComponent where sub-entities are attached and handled. But before changing anything I wanted to seek the community's wisdom.


  • Having trouble with projection matrix, need help

    - by Mr.UNOwen
    I'm having trouble with what appears to be the projection matrix. Given a wide enough screen, when a cube is at the left- or right-most edge, the left or right wall appears stretched to the point that the front face is 1/10 the width of the side. I do update the screen ratio along with the projection matrix and viewport on window resize, so am I safe to assume all the trouble is in the matrix class? Also, the cube follows the mouse, but it's only vertically aligned, and it runs ahead of the mouse when going left or right from the center of the screen. Perspective function:

        /**
         * setPerspective
         *
         * @param fov: angle in radians
         * @param aspect: screen ratio w/h
         * @param near: near distance
         * @param far: far distance
         **/
        void APCamera::setPerspective(GMFloat_t fov, GMFloat_t aspect, GMFloat_t near, GMFloat_t far)
        {
            GMFloat_t difZ = near - far;
            GMFloat_t *data;

            mProjection->clear(); // set to identity matrix
            data = mProjection->getData();

            GMFloat_t v = 1.0f / tan(fov / 2.0f);

            data[_AP_MAA] = v / aspect;
            data[_AP_MBB] = v;
            data[_AP_MCC] = (far + near) / difZ;
            data[_AP_MCD] = -1.0f;
            data[_AP_MDD] = 0.0f;
            data[_AP_MDC] = 2.0f * far * near / difZ;

            mRatio = aspect;
            mInvProjOutdated = true;
            mIsPerspective = true;
        }

    and...

        #define _AP_MAA 0
        #define _AP_MAB 1
        #define _AP_MAC 2
        #define _AP_MAD 3
        #define _AP_MBA 4
        #define _AP_MBB 5
        #define _AP_MBC 6
        #define _AP_MBD 7
        #define _AP_MCA 8
        #define _AP_MCB 9
        #define _AP_MCC 10
        #define _AP_MCD 11
        #define _AP_MDA 12
        #define _AP_MDB 13
        #define _AP_MDC 14
        #define _AP_MDD 15
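
    For reference, with v = 1/tan(fov/2), n = near, f = far, the standard OpenGL-style perspective matrix is:

        [ v/aspect   0    0              0          ]
        [ 0          v    0              0          ]
        [ 0          0    (f+n)/(n-f)    2fn/(n-f)  ]
        [ 0          0    -1             0          ]

    Read as column-major storage (the usual OpenGL convention), the assignments above match this, with difZ = n - f. If the matrix checks out, note that strong stretching near the edges of a very wide window is inherent to wide-FOV perspective projection rather than necessarily a bug: keeping the vertical fov fixed while the aspect ratio grows makes the horizontal field of view very wide.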


  • Stencil buffer appears to not be decrementing values correctly

    - by Alex Ames
    I'm attempting to use the stencil buffer as a clipper for my UI system, but I'm having trouble debugging a problem I'm running into. This is what I'm doing: a widget can pass a rectangle to the stencil clipper functions, which increment the stencil buffer values that it covers. Then it draws its children, which only get drawn in the stencilled area (so that if they extend outside they'll be clipped). After a widget is done drawing its children, it pops that rectangle from the stack, and in the process decrements the values in the stencil buffer that it previously incremented. The slightly simplified code is below:

        static void drawStencil(Rect& rect, unsigned int ref)
        {
            // Save previous values of the color and depth masks
            GLboolean colorMask[4];
            GLboolean depthMask;
            glGetBooleanv(GL_COLOR_WRITEMASK, colorMask);
            glGetBooleanv(GL_DEPTH_WRITEMASK, &depthMask);

            // Turn off drawing
            glColorMask(0, 0, 0, 0);
            glDepthMask(0);

            // Draw vertices here
            ...

            // Turn everything back on
            glColorMask(colorMask[0], colorMask[1], colorMask[2], colorMask[3]);
            glDepthMask(depthMask);

            // Only render pixels in areas where the stencil buffer value == ref
            glStencilFunc(GL_EQUAL, ref, 0xFF);
            glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
        }

        void pushScissor(Rect rect)
        {
            // increment things only at the current stencil stack level
            glStencilFunc(GL_EQUAL, s_scissorStack.size(), 0xFF);
            glStencilOp(GL_KEEP, GL_INCR, GL_INCR);

            s_scissorStack.push_back(rect);
            drawStencil(rect, s_scissorStack.size());
        }

        void popScissor()
        {
            // undo what was done in the previous push,
            // decrement things only at the current stencil stack level
            glStencilFunc(GL_EQUAL, s_scissorStack.size(), 0xFF);
            glStencilOp(GL_KEEP, GL_DECR, GL_DECR);

            Rect rect = s_scissorStack.back();
            s_scissorStack.pop_back();
            drawStencil(rect, s_scissorStack.size());
        }

    And this is how it's being used by the widgets:

        if (m_clip)
            pushScissor(m_rect);

        drawInternal(target, states);
        for (auto child : m_children)
            target.draw(*child, states);

        if (m_clip)
            popScissor();

    This is the result of the above code: there are two things on the screen, a giant test button and a window with some buttons and text areas on it. The text area scroll box is set to clip its children (so the text doesn't extend outside the scroll box). The button is drawn after the window and should be on top of it completely. However, for some reason the text area is appearing on top of the button. The only reason I can think of for this is that the stencil values are not getting decremented in the pop, so when it comes time to render the button, those pixels don't have the right stencil value and it doesn't draw over them. But I can't figure out what's wrong with my code that would cause that to happen.


  • When does depth testing happen?

    - by Utkarsh Sinha
    I'm working with 2D sprites, and I want to do 3D-style depth testing with them. When writing a pixel shader for them, I get access to the semantic DEPTH0. Would writing to this value help? It seems it doesn't. Maybe depth testing is done before the pixel shader step? Or is it only done when drawing 3D things (I'm using SpriteBatch)? Any links/articles/topics to read or search for would be appreciated.


  • Can't spot the error: trying to increment

    - by Kevin Jensen Petersen
    I really can't spot the error or the misspelling. This script should increase the variable currentTime by 1 every second, as long as I am holding the Space button down. This is Unity C#.

        using UnityEngine;
        using System.Collections;

        public class GameTimer : MonoBehaviour
        {
            // Timer
            private bool isTimeDone;
            public GUIText counter;
            public int currentTime;
            private bool starting;

            // Each message will be shown at random every 20 seconds.
            public string[] messages;
            public GUIText msg;

            // To check if this is the end
            private bool end;

            void Update()
            {
                counter.guiText.text = currentTime.ToString();
                if (Input.GetKey(KeyCode.Space))
                {
                    if (starting == false)
                    {
                        starting = true;
                    }
                    if (end == false)
                    {
                        if (isTimeDone)
                        {
                            StartCoroutine(timer());
                        }
                    }
                    else
                    {
                        msg.guiText.text = "You think you can do better? Press 'R' to Try again!";
                        if (Input.GetKeyDown(KeyCode.R))
                        {
                            Application.LoadLevel(Application.loadedLevel);
                        }
                    }
                }
                if (!Input.GetKey(KeyCode.Space) & starting)
                {
                    end = true;
                }
            }

            IEnumerator timer()
            {
                isTimeDone = false;
                yield return new WaitForSeconds(1);
                currentTime++;
                isTimeDone = true;
            }
        }


  • Box2D: how to implement a camera?

    - by Romeo
    By now I have this Camera class:

        package GameObjects;

        import main.Main;

        import org.jbox2d.common.Vec2;

        public class Camera {

            public int x;
            public int y;
            public int sx;
            public int sy;

            public static final float PIXEL_TO_METER = 50f;

            private float yFlip = -1.0f;

            public Camera() {
                x = 0;
                y = 0;
                sx = x + Main.APPWIDTH;
                sy = y + Main.APPHEIGHT;
            }

            public Camera(int x, int y) {
                this.x = x;
                this.y = y;
                sx = x + Main.APPWIDTH;
                sy = y + Main.APPHEIGHT;
            }

            public void update() {
                sx = x + Main.APPWIDTH;
                sy = y + Main.APPHEIGHT;
            }

            public void moveCam(int mx, int my) {
                if (mx >= 0 && mx <= 80) {
                    this.x -= 2;
                } else if (mx <= Main.APPWIDTH && mx >= Main.APPWIDTH - 80) {
                    this.x += 2;
                }
                if (my >= 0 && my <= 80) {
                    this.y += 2;
                } else if (my <= Main.APPHEIGHT && my >= Main.APPHEIGHT - 80) {
                    this.y -= 2;
                }
                this.update();
            }

            public float meterToPixel(float meter) {
                return meter * PIXEL_TO_METER;
            }

            public float pixelToMeter(float pixel) {
                return pixel / PIXEL_TO_METER;
            }

            public Vec2 screenToWorld(Vec2 screenV) {
                return new Vec2(screenV.x + this.x, yFlip * screenV.y + this.y);
            }

            public Vec2 worldToScreen(Vec2 worldV) {
                return new Vec2(worldV.x - this.x, yFlip * worldV.y - this.y);
            }
        }

    I need to know how to modify the screenToWorld and worldToScreen functions to include the PIXEL_TO_METER scaling.
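
    A hedged sketch of what the scaled conversions could look like, assuming camera x/y stay in pixels while world coordinates are in meters (written against the C++ Box2D b2Vec2 for consistency with the other snippets here; the jbox2d version is line-for-line the same apart from syntax):

        // screen -> world: shift into camera space and flip Y (in pixels),
        // then convert pixels to meters.
        b2Vec2 screenToWorld(const b2Vec2& s) const {
            return b2Vec2((s.x + x) / PIXEL_TO_METER,
                          (yFlip * s.y + y) / PIXEL_TO_METER);
        }

        // world -> screen: meters to pixels first, then shift/flip back.
        // These two are exact inverses of each other (yFlip * yFlip = 1).
        b2Vec2 worldToScreen(const b2Vec2& w) const {
            return b2Vec2(w.x * PIXEL_TO_METER - x,
                          yFlip * (w.y * PIXEL_TO_METER - y));
        }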


  • D3D11: how to simulate multiple depth channels

    - by Nock
    Here's what I'd like to achieve:

      • Render a first pass of objects in my scene, using standard depth comparison.
      • Render another pass of objects in the same scene, but with the following rules: a pixel of the 2nd pass always overrides the first pass (no depth compare between them), while depth comparison is still used between pixels written within the second pass.

    In English: I want depth comparison inside each pass, but I always want the second-pass pixels to override the first-pass ones. Some things I've thought about:

      • I tried to think of a way to solve this with the stencil buffer, but I couldn't find one.
      • I know I could render the second pass into a separate target and composite the result into the first, but I'd like to avoid that.
      • I could use two separate depth buffers, one dedicated to each pass. (I never tried it, but I figure it's possible to switch the depth buffer of a render target "on the fly".)

    Any idea of the best solution? Thanks
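
    For what it's worth, the stated rules are exactly what you get by clearing the depth buffer between the two passes: the second pass then depth-tests only against itself, and its pixels always land on top of the first pass. A sketch (the view names are placeholders):

        // Pass 1: normal depth-tested rendering.
        // ... draw first-pass objects ...

        // Reset depth only (color is left untouched), so pass 2 never
        // compares against pass 1's depth values.
        context->ClearDepthStencilView(dsv, D3D11_CLEAR_DEPTH, 1.0f, 0);

        // Pass 2: depth testing works among these objects only, and every
        // pixel they write overrides pass 1 because the old depth is gone.
        // ... draw second-pass objects ...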

