Search Results

Search found 25660 results on 1027 pages for 'dotnetnuke development'.


  • Game state management (Game, Menu, Titlescreen, etc)

    - by munchor
    Basically, in every single game I've made so far, I always have a variable like "current_state", which can be "game", "titlescreen", "gameoverscreen", etc. Then in my Update function I have a huge chain: if current_state == "game" ... game stuff ... else if current_state == "titlescreen" ... However, I don't feel like this is a professional/clean way of handling states. Any ideas on how to do this in a better way? Or is this the standard way?
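
    For reference, a minimal sketch of the pattern described above, with the string replaced by an enum and the per-state work split into methods (all names here are illustrative, not taken from any particular engine):

        enum GameState { TitleScreen, Playing, GameOver }

        class StateDrivenGame
        {
            GameState current = GameState.TitleScreen;

            public void Update(float dt)
            {
                switch (current)
                {
                    case GameState.TitleScreen: UpdateTitleScreen(dt); break;
                    case GameState.Playing:     UpdatePlaying(dt);     break;
                    case GameState.GameOver:    UpdateGameOver(dt);    break;
                }
            }

            void UpdateTitleScreen(float dt) { /* menu input; switch to Playing on start */ }
            void UpdatePlaying(float dt)     { /* gameplay; switch to GameOver on death */ }
            void UpdateGameOver(float dt)    { /* show score; switch back to TitleScreen */ }
        }

    A step further in the same direction is to give each state its own class behind a common interface, so Update simply delegates to the current state object.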

    Read the article

  • Using Behavior Trees and Events together

    - by weichsem
    I am beginning to work with behavior trees and am unsure how events should be handled within the tree. Let's say we have a space game where the player is dogfighting with a handful of other ships, some friendly, some not. The player destroys a ship, and the rest of the hostile ships should then start to retreat. How should the shipWasDestroyed event affect the other ships' behavior trees so that they start running the retreat behavior? One way I can think of is to make every condition I care about a high-level node that effectively changes the ship's state. This would mean I'd have to check all of these state-change conditions on every frame the behavior tree is run, even if they are very rare occurrences. I'd prefer not doing this for performance and complexity reasons. From looking at the Halo papers on behavior trees, it seems that they handled this by dynamically placing nodes into the tree when the event occurred. It seems like calculating where the new node should go could be problematic depending on the current state of the running behavior. How is this normally handled?
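
    For reference, a minimal sketch of the first option described above (a cheap condition node re-checked each tick), assuming a hypothetical Node base class and a blackboard that the shipWasDestroyed event handler writes into; none of this comes from a specific behavior-tree library:

        using System;

        enum Status { Success, Failure, Running }

        class Blackboard
        {
            public bool HostileShipDestroyed;   // set by the shipWasDestroyed event handler
        }

        abstract class Node
        {
            public abstract Status Tick(Blackboard bb);
        }

        class Condition : Node
        {
            readonly Func<Blackboard, bool> predicate;
            public Condition(Func<Blackboard, bool> predicate) { this.predicate = predicate; }

            public override Status Tick(Blackboard bb)
            {
                return predicate(bb) ? Status.Success : Status.Failure;
            }
        }

        // Placed above the normal dogfight subtree in a selector, e.g.
        //   new Condition(bb => bb.HostileShipDestroyed)  ->  retreat subtree
        // the per-frame cost of the "event" is a single flag read until it actually fires.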

    Read the article

  • Facebook Game database design

    - by facebook-100000781341887
    Hi, I'm currently developing a Facebook Mafia-like PHP game (a lightweight version, of course). Here is a simplified view of the game's MySQL user table:
        id-a <int3> <for index>
        uid <chr15> <facebook uid>
        HP <int3> <health points>
        exp <int3> <experience>
        money <int3> <money>
        list_inventory <chr5> <the inventory the user holds; something special here, discussed next>
        ... and 20 other fields such as reputation, number of combats, ...
    (The number next to each type is the size of the type in bytes.) About list_inventory: there are 40 inventory items in my game (actually I have 5 lists of this kind in my database), and each user can only hold 1 of each item, so I assign 5 chars to this field and treat each bit of each char as one item (5 chars * 8 bits = 40 slots), then do some manipulation in PHP to extract the data from these 5 bytes. I was thinking: if this game has 100,000 users and only 10% are active, then with my method the space used is 5 bytes * 100,000 = 500 KB. With another method I would create a table user_hold_inventory and insert a record whenever a user owns an item. For the 10,000 active users I assume they own every item, and for the others I assume they own none. The fields of the new table would be:
        id-b <int3> <for index>
        id-a <int3> <id of the user table>
        inv_no <int1> <inventory item the user holds>
    The space used is ([id] (3+3) bytes + [inv_no] 1 byte) * [active users] 10,000 * [all inventory] 40 = 2.8 MB. So method 2 uses more space, but it consumes less CPU power. Please comment on these two methods, or correct me if there is a better method than what I have in mind. Another question: my user table contains 26 fields, but I count 5 of them that do not change frequently; should I separate them into another table or not? So many words, thanks for reading :)
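
    As a side note, a minimal sketch of the bit-packing scheme described above (shown in C# rather than PHP purely for illustration): 40 yes/no inventory slots stored in 5 bytes, one bit per item:

        static bool HasItem(byte[] inventory, int slot)            // slot: 0..39, inventory: 5 bytes
        {
            return (inventory[slot / 8] & (1 << (slot % 8))) != 0;
        }

        static void SetItem(byte[] inventory, int slot, bool owned)
        {
            if (owned)
                inventory[slot / 8] |= (byte)(1 << (slot % 8));    // set the item's bit
            else
                inventory[slot / 8] &= (byte)~(1 << (slot % 8));   // clear the item's bit
        }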

    Read the article

  • Dynamic libraries are not allowed on iOS but what about this?

    - by tapirath
    I'm currently using LuaJIT and its FFI interface to call C functions from Lua scripts. What FFI does is look at a dynamic library's exported symbols and let the developer use them directly from Lua, kind of like Python's ctypes. Obviously, using dynamic libraries is not permitted on iOS for security reasons, so in order to come up with a solution I found the following snippet.

        /* (c) 2012 +++ Filip Stoklas, aka FipS, http://www.4FipS.com +++
           THIS CODE IS FREE - LICENSED UNDER THE MIT LICENSE
           ARTICLE URL: http://forums.4fips.com/viewtopic.php?f=3&t=589 */

        extern "C" {
        #include <lua.h>
        #include <lualib.h>
        #include <lauxlib.h>
        } // extern "C"

        #include <cassert>

        // Please note that despite the fact that we build this code as a regular
        // executable (exe), we still use __declspec(dllexport) to export
        // symbols. Without doing that FFI wouldn't be able to locate them!
        extern "C" __declspec(dllexport) void __cdecl hello_from_lua(const char *msg)
        {
            printf("A message from LUA: %s\n", msg);
        }

        const char *lua_code =
            "local ffi = require('ffi')                      \n"
            "ffi.cdef[[                                      \n"
            "const char * hello_from_lua(const char *);      \n"   // matches the C prototype
            "]]                                              \n"
            "ffi.C.hello_from_lua('Hello from LUA!')         \n"   // do actual C call
        ;

        int main()
        {
            lua_State *lua = luaL_newstate();
            assert(lua);
            luaL_openlibs(lua);

            const int status = luaL_dostring(lua, lua_code);
            if(status)
                printf("Couldn't execute LUA code: %s\n", lua_tostring(lua, -1));

            lua_close(lua);
            return 0;
        }

        // output:
        // A message from LUA: Hello from LUA!

    Basically, instead of using a dynamic library, the symbols are exported directly from the executable file. The question is: is this permitted by Apple? Thanks.

    Read the article

  • Java keyboard input [on hold]

    - by dØd
    I'm trying to implement an input system that can detect whether a certain key was held down or only pressed briefly. So far I have this:

        private static final long KEY_INTERACTION_THRESHOLD = 400; // ms

        // inside a constructor
        shouldMeasure = true;

        @Override
        public void keyPressed(KeyEvent e) {
            if (shouldMeasure) {
                startTime = System.currentTimeMillis();
                shouldMeasure = false;
                return;
            }
            System.out.println("Button is held down");
            e.consume();
        }

        @Override
        public void keyReleased(KeyEvent e) {
            if (System.currentTimeMillis() - startTime < KEY_INTERACTION_THRESHOLD) {
                System.out.println("Button was only pressed briefly");
            }
            startTime = 0;
            shouldMeasure = true;
            e.consume();
        }

    Now this works, but the problem is that there is a delay between when I press a key to hold it down and when the message 'Button is held down' gets displayed. I understand why this delay occurs (for example, when you press and hold a letter there is a similar delay between the first and the second letter printed out), but I would like to somehow avoid it. I'm using only the Java API.

    Read the article

  • Is it possible to design a multiplayer game which can be played from different devices?

    - by user9820
    I want to design an online multiplayer game for all gaming devices, e.g. desktop PC, internet browser, Android phones, Android tablets, iPhone, iPad, Xbox 360, etc. Now my main requirement is that all devices can be used to play the game in multiplayer mode together, i.e. one player can be connected using a PC, another using an Android phone, and others with an iPhone or iPad. My doubts are: How do I make all devices connect to a common game server? What will be the logic for graphics and textures, since the devices' screens will have different aspect ratios?

    Read the article

  • How to transform mesh components?

    - by Lea Hayes
    I am attempting to transform the components of a mesh directly using a 4x4 matrix. This works for the vertex positions, but it is not working for the normals (and probably not the tangents either). Here is what I have:

        // Transform vertex positions - works like a charm!
        vertices = mesh.vertices;
        for (int i = 0; i < vertices.Length; ++i)
            vertices[i] = transform.MultiplyPoint(vertices[i]);

        // Does not work, lighting is messed up on the mesh
        normals = mesh.normals;
        for (int i = 0; i < normals.Length; ++i)
            normals[i] = transform.MultiplyVector(normals[i]);

    Note: The input matrix converts from local to world space and is needed to combine multiple meshes together.
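
    For comparison, a minimal sketch of the technique commonly used for normals, assuming Unity's Matrix4x4/Mesh API as in the snippet above: positions go through the matrix itself, while normals go through the inverse-transpose of that matrix and are re-normalized (plain MultiplyVector picks up any non-uniform scale in the matrix). This is a sketch of the usual approach, not necessarily the fix for this particular case:

        Matrix4x4 normalMatrix = transform.inverse.transpose;

        Vector3[] normals = mesh.normals;
        for (int i = 0; i < normals.Length; ++i)
            normals[i] = normalMatrix.MultiplyVector(normals[i]).normalized;
        mesh.normals = normals;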

    Read the article

  • Any significant performance cost to using BlendState.Premultiplied?

    - by Donutz
    Normally I guess you'd use BlendState.AlphaBlend, because when you load your textures through the content pipeline they're already premultiplied. However, if you're loading textures at runtime from PNGs or some such, you have to loop through the pixels and premultiply them, which can take a long time if you've got a lot of textures to load. So it looks (I haven't tried it) like using BlendState.Premultiplied instead of BlendState.AlphaBlend should handle non-premultiplied textures and produce the same visual result, without all the startup cost. I have to wonder if there's a non-obvious cost to doing this, like a huge drop in performance or something. Anyone know?

    Read the article

  • Finding closest object to a location within a specific perpendicular distance to direction vector

    - by Sniper
    I have a location and a direction vector indicating facing; I want to find the closest object to that location that is within some tolerance distance (perpendicular distance) of the ray formed by the location and direction vector. Basically I want to get the object that is being aimed at. I have thought about finding all objects within a box and then finding the closest object to my vector from those results, but I am sure there is a more efficient way. The Z axis is optional; the objects are most likely within a few meters of the search vector.
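
    For reference, a minimal sketch of the brute-force version of the test described above, assuming System.Numerics vectors and a hypothetical SceneObject type with a Position property: each object is projected onto the aim ray, and the nearest one whose perpendicular distance to the ray is within tolerance is kept:

        using System.Collections.Generic;
        using System.Numerics;

        SceneObject FindTarget(Vector3 origin, Vector3 direction,
                               IEnumerable<SceneObject> objects, float tolerance)
        {
            Vector3 dir = Vector3.Normalize(direction);
            SceneObject best = null;
            float bestAlong = float.MaxValue;

            foreach (var obj in objects)
            {
                Vector3 toObj = obj.Position - origin;
                float along = Vector3.Dot(toObj, dir);                  // distance along the ray
                if (along < 0) continue;                                // behind the aim point
                float perpendicular = (toObj - along * dir).Length();   // distance from the ray
                if (perpendicular <= tolerance && along < bestAlong)
                {
                    bestAlong = along;
                    best = obj;
                }
            }
            return best;
        }

    A broad-phase query (the box mentioned above, or any spatial partition) would simply shrink the objects collection before this loop runs.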

    Read the article

  • How do I simulate the mouse and keyboard using C# or C++?

    - by Art
    I want to start developing for Kinect, but the hardest part is how to send keyboard and mouse input to any application. In a previous question I got the advice to develop my own driver for these devices, but that will take a while. I imagine an application that acts like a gate, translating SendMessage calls into system-wide input, or a driver application with an API for sending this input. So I wonder: are there drivers or simulators that can interact with C# or C++? Small edit: SendMessage, PostMessage and keybd_event will only work on Windows applications with a common message loop, so I need a driver application that works at a low (kernel) level.
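
    For reference, a minimal sketch of calling the keybd_event function mentioned above from C# via P/Invoke (whether the input reaches a given target application is exactly the open question here; this only shows the interop side):

        using System;
        using System.Runtime.InteropServices;

        static class KeyboardSimulator
        {
            [DllImport("user32.dll")]
            static extern void keybd_event(byte bVk, byte bScan, uint dwFlags, UIntPtr dwExtraInfo);

            const uint KEYEVENTF_KEYUP = 0x0002;

            // Press and release the key with the given virtual-key code (e.g. 0x41 for 'A').
            public static void Tap(byte virtualKey)
            {
                keybd_event(virtualKey, 0, 0, UIntPtr.Zero);
                keybd_event(virtualKey, 0, KEYEVENTF_KEYUP, UIntPtr.Zero);
            }
        }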

    Read the article

  • Set a drawing viewport while using camera

    - by Mariano
    I'm working with XNA. I already have a basic world made of tiles and a camera using a transform matrix. I have a character moving around and the camera follows. What I want to do now is draw the map only on a certain part of the screen, as shown in the figure below. This way I can move the map to the left of the screen and have the other, fixed parts shift to the right. Do I need to modify the camera matrix? Make a new viewport?
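
    For reference, a minimal sketch of the viewport idea raised above, assuming XNA's GraphicsDevice.Viewport; the rectangle sizes and the camera.Transform property are illustrative only:

        Viewport fullViewport = GraphicsDevice.Viewport;          // remember the whole screen

        // restrict drawing to the map area while the camera matrix is applied
        GraphicsDevice.Viewport = new Viewport(0, 0, 600, 480);
        spriteBatch.Begin(SpriteSortMode.Deferred, null, null, null, null, null, camera.Transform);
        // ... draw the tile map and the character here ...
        spriteBatch.End();

        // restore the full screen for the fixed portions of the UI
        GraphicsDevice.Viewport = fullViewport;
        spriteBatch.Begin();
        // ... draw the side panels here ...
        spriteBatch.End();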

    Read the article

  • Drawing order in XNA

    - by marc wellman
    When manually setting the drawing order of game components via the int property DrawableGameComponent.DrawOrder, can one use any integer values as long as an order is defined, like
        component1 = drawing order: 2
        component2 = drawing order: 5
        component3 = drawing order: 10
        component4 = drawing order: 323
    or do the integers have to be consecutive and start at zero, like
        component1 = drawing order: 0
        component2 = drawing order: 1
        component3 = drawing order: 2
        component4 = drawing order: 3
    ?

    Read the article

  • How should I interpret these DirectX Caps Viewer values?

    - by tobi
    Briefly asking: what do the following nodes mean, and what is the difference between them, in DirectX Caps Viewer?
        1. DXGI Devices
        2. Direct3D9 Devices
        3. DirectDraw Devices
    The most interesting for me is 1 vs 2. In the Direct3D9 Devices section, under the HAL node, I can see that my GeForce 8800GT supports PixelShaderVersion 3.0. However, under DXGI Devices I have DX 10, DX 10.1 and DX 11 with Shader Model 4.0 (actually, why DX 11? My card is not compatible with DX 11). I am implementing a DX 11 application (including d3d11.h) with shaders compiled for version 4.0, so I can clearly see that 4.0 is supported. What is the difference between 1 and 2? Could you give me some theory behind the nodes?

    Read the article

  • Server side random selection of players

    - by Ron
    Assuming I have a simple client-server game where the server picks random players on a very frequent basis, I was wondering what the best way is to select a random player, given the following constraints:
    - The solution must be high performance and highly scalable.
    - The random spread should be relatively even (meaning if I have 3 players and pick 99 times, each will be picked 33 times, more or less).
    - It should only pick players who were active in the past X days (optional, but a big bonus).
    The actual DB or data model used to store players isn't an issue here, as we'll select the technology according to our needs. However, high performance and scalability are (at the moment we have over 60,000 unique daily active players, and we plan on growing even more). Thanks!
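
    For reference, the selection step itself is tiny once the ids of players active in the last X days are available in memory; a minimal sketch, assuming such a list is maintained elsewhere:

        using System;
        using System.Collections.Generic;

        class PlayerPicker
        {
            static readonly Random rng = new Random();

            // Random.Next(count) gives each of the N entries the same 1/N chance,
            // which covers the "even spread" requirement.
            public string PickRandomPlayer(IReadOnlyList<string> activePlayerIds)
            {
                return activePlayerIds[rng.Next(activePlayerIds.Count)];
            }
        }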

    Read the article

  • Tic tac toe game AI in AS3

    - by David Jones
    I'm looking into creating a simple tic-tac-toe/noughts-and-crosses game in ActionScript 3 and am trying to understand the ideas behind the AI used in a game like this. I've seen some simplistic examples online, but from what I've read a game tree or something like minimax is the best way to go about this. Can anyone help explain or reference any good examples of this? I've seen that there is a library called as3ds - data structures for game developers - which has a number of classes that might help tie this together. Any info/examples or help is much appreciated.
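
    For reference, a minimal sketch of the minimax idea mentioned above (shown in C# rather than ActionScript purely for illustration; the board is 9 ints where 0 = empty, 1 = the AI, 2 = the opponent):

        using System;

        static class TicTacToeAI
        {
            static readonly int[][] Lines =
            {
                new[] { 0, 1, 2 }, new[] { 3, 4, 5 }, new[] { 6, 7, 8 },   // rows
                new[] { 0, 3, 6 }, new[] { 1, 4, 7 }, new[] { 2, 5, 8 },   // columns
                new[] { 0, 4, 8 }, new[] { 2, 4, 6 }                       // diagonals
            };

            static int Winner(int[] b)
            {
                foreach (var l in Lines)
                    if (b[l[0]] != 0 && b[l[0]] == b[l[1]] && b[l[1]] == b[l[2]])
                        return b[l[0]];
                return 0;
            }

            // Returns the value of the position for player 1 (the AI): +1 win, 0 draw, -1 loss.
            public static int Minimax(int[] b, int playerToMove)
            {
                int w = Winner(b);
                if (w == 1) return +1;
                if (w == 2) return -1;
                if (Array.IndexOf(b, 0) < 0) return 0;                     // board full: draw

                int best = (playerToMove == 1) ? int.MinValue : int.MaxValue;
                for (int i = 0; i < 9; i++)
                {
                    if (b[i] != 0) continue;
                    b[i] = playerToMove;
                    int score = Minimax(b, 3 - playerToMove);              // 3 - p toggles 1 <-> 2
                    b[i] = 0;
                    best = (playerToMove == 1) ? Math.Max(best, score) : Math.Min(best, score);
                }
                return best;
            }
        }

    The AI's move is then the empty cell whose Minimax value is highest; for a 3x3 board the full tree is small enough that no pruning is needed.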

    Read the article

  • simple collision detection

    - by Rob
    Imagine 2 squares sitting side by side, both level with the ground: http://img19.imageshack.us/img19/8085/sqaures2.jpg A simple way to detect if one is hitting the other is to compare the location of each side. They are touching if ALL of the following are NOT true:
    - The right square's left side is to the right of the left square's right side.
    - The right square's right side is to the left of the left square's left side.
    - The right square's bottom side is above the left square's top side.
    - The right square's top side is below the left square's bottom side.
    If any of those are true, the squares are not touching. If all of those are false, the squares are touching. But consider a case like this, where one square is at a 45 degree angle: http://img189.imageshack.us/img189/4236/squaresb.jpg Is there an equally simple way to determine if those squares are touching?
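
    For reference, a minimal sketch of the axis-aligned test described above, assuming each square is stored by its Left/Right/Top/Bottom edges with Y growing downward (this shortcut is exactly what stops working once a square is rotated):

        struct Box
        {
            public float Left, Right, Top, Bottom;
        }

        static bool Touching(Box a, Box b)
        {
            // The boxes overlap unless one lies entirely to one side of the other.
            if (a.Left   > b.Right)  return false;   // a is entirely to the right of b
            if (a.Right  < b.Left)   return false;   // a is entirely to the left of b
            if (a.Top    > b.Bottom) return false;   // a is entirely below b
            if (a.Bottom < b.Top)    return false;   // a is entirely above b
            return true;
        }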

    Read the article

  • Getting to math applications gradually

    - by den-javamaniac
    I'm currently getting a formal degree related to computation; in particular, my current focus is numerical programming, scientific computing and machine learning. I'd love to apply that knowledge in game dev and expand it with statistics, probability theory, and graph theory (probably even linear algebra). The question is: which spheres of gamedev are filled with such math stuff, is it possible to advance in those without being part of a group of people, and how do I get to it gradually? P.S.: I've got experience with commercial Java dev and am getting my hands on C/C++ at the moment; however, I'm open to going ahead and trying Unity3D etc.

    Read the article

  • SRV from UAV on the same texture in DirectX

    - by notabene
    I'm programming GPGPU raymarching (volumetric raytracing) in DirectX 11. I successfully run a compute shader and save the raymarched volume data to a texture. Then I want to use the same texture as an SRV in the normal graphics pipeline. But it doesn't work; the texture is not visible. The texture is OK: when I save it to a file it is what I expect. Texture rendering is OK too: when I render another SRV, it works. So the problem is only in the UAV-to-SRV combination. I also triple-checked that the pointers are OK. Please help, I'm getting mad about this. Here is some code:

        // before dispatch
        D3D11_TEXTURE2D_DESC textureDesc;
        ZeroMemory( &textureDesc, sizeof( textureDesc ) );
        textureDesc.Width = xr;
        textureDesc.Height = yr;
        textureDesc.MipLevels = 1;
        textureDesc.ArraySize = 1;
        textureDesc.SampleDesc.Count = 1;
        textureDesc.SampleDesc.Quality = 0;
        textureDesc.Usage = D3D11_USAGE_DEFAULT;
        textureDesc.BindFlags = D3D11_BIND_UNORDERED_ACCESS | D3D11_BIND_SHADER_RESOURCE;
        textureDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
        D3D->CreateTexture2D( &textureDesc, NULL, &pTexture );

        D3D11_UNORDERED_ACCESS_VIEW_DESC viewDescUAV;
        ZeroMemory( &viewDescUAV, sizeof( viewDescUAV ) );
        viewDescUAV.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
        viewDescUAV.ViewDimension = D3D11_UAV_DIMENSION_TEXTURE2D;
        viewDescUAV.Texture2D.MipSlice = 0;
        D3DD->CreateUnorderedAccessView( pTexture, &viewDescUAV, &pTextureUAV );

        // the getSRV function, after dispatch
        D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc;
        ZeroMemory( &srvDesc, sizeof( srvDesc ) );
        srvDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
        srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
        srvDesc.Texture2D.MipLevels = 1;
        D3DD->CreateShaderResourceView( pTexture, &srvDesc, &pTextureSRV );

    Read the article

  • What are good JS libraries for game dev?

    - by acidzombie24
    If I decide to write a simple game, both text and graphical (2D), what libraries would I use? (Assume we are using an HTML5-compatible browser.) The main things I can think of:
    - Rendering text on screen
    - Animating sprites (using images/CSS)
    - Input (capturing the arrow keys and getting relative mouse positions)
    - Perhaps preloading resources, or dynamically loading resources and choosing the order
    - Sound (but I am unsure how important this will be to me at first) - perhaps with mixing and chaining sounds, or looping until stopped
    - Networking (low priority) - to connect one user to another, or to continuously GET data without multiple requests (I know this exists, but I don't know how easy it is to set up or use; it isn't important to me, it's just for the question)

    Read the article

  • XNA Shader Texture Memory

    - by Alex
    I was wondering about texture optimization in XNA 4.0. Will the ContentManager send the texture data to the GPU directly when the texture gets loaded, or do I send the texture data to the GPU when I declare a texture in my shader? If that's the case, what happens if I have 5 shaders all using the same texture: does that mean I send 5 instances of that texture data to the GPU, or am I simply telling the GPU which preloaded texture to use? Or does XNA do the heavy lifting in the background?

    Read the article

  • Character jump animation is not working when I hit the space bar

    - by muzzy
    i am having an issue with my game in XNA. My jump sprite sheet for my character does not trigger when i hit the space bar. I cant seem to find the problem. Please help me. I am also put the code below to make things easier. namespace WindowsGame4 { public class Game1 : Microsoft.Xna.Framework.Game { GraphicsDeviceManager graphics; SpriteBatch spriteBatch; // start of new code Texture2D playerWalk; // sprite sheet of walk cycle (14 frames) Texture2D idle; // idle animation Texture2D jump; // jump animation Vector2 playerPos; // to hold x and y position info for the player Point frameDimensions; // to hold width and height values for the frames int presentFrame; // to record which frame we are on at any given time int noOfFrames; // to hold the total number of frames in the spritesheet int elapsedTime; // to know how long each frame has been shown int frameDuration; // to hold info about how long each frame should be shown SpriteEffects flipDirection; // SpriteEffects object int speed; //rate of movement int upMovement; int downMovement; int rightMovement; int leftMovement; int jumpApex; string state; //this is going to be "idle","walking" or "jumping". KeyboardState previousKeyboardState; Vector2 originalPlayerPos; Vector2 movementDirection; Vector2 movementSpeed; public Game1() { graphics = new GraphicsDeviceManager(this); Content.RootDirectory = "Content"; } protected override void Initialize() { // textures will be defined in the LoadContent() method playerPos = new Vector2(0, 200); // starting position for the player is at the left of the screen, and a Y position of 200 frameDimensions = new Point(55, 65); // each frame in the idle sprite sheet is 55 wide by 65 high presentFrame = 0; // start at frame 0 noOfFrames = 5; // there are 5 frames in the idle cycle elapsedTime = 0; // set elapsed time to start at 0 frameDuration = 80; // 80 milliseconds is how long each frame will show for (the higher the number, the slower the animation) flipDirection = SpriteEffects.None; // set the value of flipDirection to none speed = 200; upMovement = -2; downMovement = 2; rightMovement = 1; leftMovement = -1; jumpApex = 100; state = "idle"; previousKeyboardState = Keyboard.GetState(); originalPlayerPos = Vector2.Zero; movementDirection = Vector2.Zero; movementSpeed = Vector2.Zero; base.Initialize(); } protected override void LoadContent() { spriteBatch = new SpriteBatch(GraphicsDevice); playerWalk = Content.Load<Texture2D>("sprites/walkSmall"); // load the walk cycle spritesheet idle = Content.Load<Texture2D>("sprites/idleCycle"); // load the idle cycle sprite sheet jump = Content.Load<Texture2D>("sprites/jump"); // load the jump cycle sprite sheet } protected override void UnloadContent() // we're not using this method at the moment { } protected override void Update(GameTime gameTime) // Update method - used it to call a number of other methods { if (Keyboard.GetState().IsKeyDown(Keys.Escape)) { this.Exit(); // Exit the game if the Escape key is pressed } KeyboardState presentKeyboardState = Keyboard.GetState(); UpdateMovement(presentKeyboardState, gameTime); UpdateIdle(presentKeyboardState, gameTime); UpdateJump(presentKeyboardState); UpdateAnimation(gameTime); playerPos += movementDirection * movementSpeed * (float)gameTime.ElapsedGameTime.TotalSeconds; previousKeyboardState = presentKeyboardState; base.Update(gameTime); } private void UpdateAnimation(GameTime gameTime) { elapsedTime += gameTime.ElapsedGameTime.Milliseconds; if (elapsedTime > frameDuration) { elapsedTime -= frameDuration; 
elapsedTime = elapsedTime - frameDuration; presentFrame++; if (presentFrame > noOfFrames) if (state != "jumping") { presentFrame = 0; } else { presentFrame = 8; } } } protected void UpdateMovement(KeyboardState presentKeyboardState, GameTime gameTime) { if (state == "idle") { movementSpeed = Vector2.Zero; movementDirection = Vector2.Zero; if (presentKeyboardState.IsKeyDown(Keys.Left)) { state = "walking"; movementSpeed.X = speed; movementDirection.X = leftMovement; flipDirection = SpriteEffects.FlipHorizontally; } if (presentKeyboardState.IsKeyDown(Keys.Right)) { state = "walking"; movementSpeed.X = speed; movementDirection.X = rightMovement; flipDirection = SpriteEffects.None; } } } private void UpdateIdle(KeyboardState presentKeyboardState, GameTime gameTime) { if ((presentKeyboardState.IsKeyUp(Keys.Left) && previousKeyboardState.IsKeyDown(Keys.Left) || presentKeyboardState.IsKeyUp(Keys.Right) && previousKeyboardState.IsKeyDown(Keys.Right) && state != "jumping")) { state = "idle"; } } private void UpdateJump(KeyboardState presentKeyboardState) { if (state == "walking" || state == "idle") { if (presentKeyboardState.IsKeyDown(Keys.Space) && !presentKeyboardState.IsKeyDown(Keys.Space)) { presentFrame = 1; DoJump(); } } if (state == "jumping") { if (originalPlayerPos.Y - playerPos.Y > jumpApex) { movementDirection.Y = downMovement; } if (playerPos.Y > originalPlayerPos.Y) { playerPos.Y = originalPlayerPos.Y; state = "idle"; movementDirection = Vector2.Zero; } } } private void DoJump() { if (state != "jumping") { state = "jumping"; originalPlayerPos = playerPos; movementDirection.Y = upMovement; movementSpeed = new Vector2(speed, speed); } } protected override void Draw(GameTime gameTime) // Draw method { GraphicsDevice.Clear(Color.CornflowerBlue); spriteBatch.Begin(); // begin the spritebatch if (state == "walking") { noOfFrames = 14; frameDimensions = new Point(55, 65); Vector2 playerWalkPos = new Vector2(playerPos.X, playerPos.Y - 28); spriteBatch.Draw(playerWalk, playerWalkPos, new Rectangle((presentFrame * frameDimensions.X), 0, frameDimensions.X, frameDimensions.Y), Color.White, 0, Vector2.Zero, 1, flipDirection, 0); } if (state == "idle") { noOfFrames = 5; frameDimensions = new Point(55, 65); Vector2 idlePos = new Vector2(playerPos.X, playerPos.Y - 28); spriteBatch.Draw(idle, idlePos, new Rectangle((presentFrame * frameDimensions.X), 0, frameDimensions.X, frameDimensions.Y), Color.White, 0, Vector2.Zero, 1, flipDirection, 0); } if (state == "jumping") { noOfFrames = 9; frameDimensions = new Point(55, 92); Vector2 jumpPos = new Vector2(playerPos.X, playerPos.Y - 28); spriteBatch.Draw(jump, jumpPos, new Rectangle((presentFrame * frameDimensions.X), 0, frameDimensions.X, frameDimensions.Y), Color.White, 0, Vector2.Zero, 1, flipDirection, 0); } spriteBatch.End(); // end the spritebatch commands base.Draw(gameTime); } } }

    Read the article

  • Making a Camera look at a target Vector

    - by Peteyslatts
    I have a camera that works as long as it's stationary. Now I'm trying to create a child class of that camera class that will look at its target. The new addition to the class is a method called SetTarget(), which takes in a Vector3 target. The camera won't move, but I need it to rotate to look at the target. If I just set the target and then call CreateLookAt() (which takes in position, target, and up), when the object gets far enough away and underneath the camera, it suddenly flips right side up. So I need to transform the up vector, which currently always stays at Vector3.Up. I feel like this has something to do with taking the angle between the old direction vector and the new one (which I know can be expressed by target - position). I feel like this is all really vague, so here's the code for my base camera class:

        public class BasicCamera : Microsoft.Xna.Framework.GameComponent
        {
            public Matrix view { get; protected set; }
            public Matrix projection { get; protected set; }

            public Vector3 position { get; protected set; }
            public Vector3 direction { get; protected set; }
            public Vector3 up { get; protected set; }
            public Vector3 side
            {
                get { return Vector3.Cross(up, direction); }
                protected set { }
            }

            public BasicCamera(Game game, Vector3 position, Vector3 target, Vector3 up)
                : base(game)
            {
                this.position = position;
                this.direction = target - position;
                this.up = up;
                CreateLookAt();

                projection = Matrix.CreatePerspectiveFieldOfView(
                    MathHelper.PiOver4,
                    (float)Game.Window.ClientBounds.Width / (float)Game.Window.ClientBounds.Height,
                    1, 500);
            }

            public override void Update(GameTime gameTime)
            {
                // TODO: Add your update code here
                CreateLookAt();
                base.Update(gameTime);
            }
        }

    And this is the code for the class that extends the above class to look at its target.

        class TargetedCamera : BasicCamera
        {
            public Vector3 target { get; protected set; }

            public TargetedCamera(Game game, Vector3 position, Vector3 target, Vector3 up)
                : base(game, position, target, up)
            {
                this.target = target;
            }

            public void SetTarget(Vector3 target)
            {
                direction = target - position;
            }

            protected override void CreateLookAt()
            {
                view = Matrix.CreateLookAt(position, target, up);
            }
        }

    Read the article

  • Getting the number of fragments which passed the depth test

    - by Etan
    In "modern" environments, the "NV Occlusion Query" extension provides a method to get the number of fragments which passed the depth test. However, on the iPad / iPhone using OpenGL ES, the extension is not available. What is the most performant approach to implement a similar behaviour in the fragment shader? Some of my ideas: Render the object completely in white, then count all the colors together using a two-pass shader where first a vertical line is rendered and for each fragment the shader computes the sum over the whole row. Then, a single vertex is rendered whose fragment sums all the partial sums of the first pass. Doesn't seem to be very efficient. Render the object completely in white over a black background. Downsample recursively, abusing the hardware linear interpolation between textures until being at a reasonably small resolution. This leads to fragments which have a greyscale level depending on the number of white pixels where in their corresponding region. Is this even accurate enough? Use mipmaps and simply read the pixel on the 1x1 level. Again the question of accuracy and if it is even possible using non-power-of-two textures. The problem wit these approaches is, that the pipeline gets stalled which results in major performance issues. Therefore, I'm looking for a more performant way to accomplish my goal. Using the EXT_OCCLUSION_QUERY_BOOLEAN extension Apple introduced EXT_OCCLUSION_QUERY_BOOLEAN in iOS 5.0 for iPad 2. "4.1.6 Occlusion Queries Occlusion queries use query objects to track the number of fragments or samples that pass the depth test. An occlusion query can be started and finished by calling BeginQueryEXT and EndQueryEXT, respectively, with a target of ANY_SAMPLES_PASSED_EXT or ANY_SAMPLES_PASSED_CONSERVATIVE_EXT. When an occlusion query is started with the target ANY_SAMPLES_PASSED_EXT, the samples-boolean state maintained by the GL is set to FALSE. While that occlusion query is active, the samples-boolean state is set to TRUE if any fragment or sample passes the depth test. When the occlusion query finishes, the samples-boolean state of FALSE or TRUE is written to the corresponding query object as the query result value, and the query result for that object is marked as available. If the target of the query is ANY_SAMPLES_PASSED_CONSERVATIVE_EXT, an implementation may choose to use a less precise version of the test which can additionally set the samples-boolean state to TRUE in some other implementation dependent cases." The first sentence hints on a behavior which is exactly what I'm looking for: getting the number of pixels which passed the depth test in an asynchronous manner without much performance loss. However, the rest of the document describes only how to get boolean results. Is it possible to exploit this extension to get the pixel count? Does the hardware support it so that there may be hidden API to get access to the pixel count? Other extensions which could be exploitable would be debugging features like the number of times the fragment shader was invoked (PSInvocations in DirectX - not sure if something simila is available in OpenGL ES). However, this would also result in a pipeline stall.

    Read the article

  • What's the difference between Canvas and WebGL?

    - by gadr90
    I'm thinking about using CAAT as part of an HTML5 game engine. One of its features is the ability to render to Canvas and WebGL without changing anything in the client code. That is a good thing, but I haven't found a precise answer to what the differences between those two technologies are. I would especially like to know the differences between Canvas and WebGL in the following regards:
    - Framerate
    - Desktop browser support
    - Mobile browser support
    - Futureproofability (TM)

    Read the article

  • Intersection points of plane set forming convex hull

    - by Toji
    Mostly looking for a nudge in the right direction here. Given a set of planes (each defined as a normal and a distance from the origin) that form a convex hull, I would like to find the intersection points that form the corners of that hull. More directly, I'm looking for a way to generate a point cloud appropriate to provide to Bullet. Bonus points if someone knows of a way I could give Bullet the plane list directly, since I somewhat suspect that's what it's building on the back end anyway.
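
    For reference, a minimal sketch of the usual corner-generation step, assuming System.Numerics and planes stored as a normal n plus a distance d with n·x = d: every combination of three planes is intersected, and a candidate point is kept only if it lies on or inside all planes of the hull. The three-plane intersection itself looks like this:

        using System;
        using System.Numerics;

        static bool TryIntersect(Vector3 n1, float d1,
                                 Vector3 n2, float d2,
                                 Vector3 n3, float d3,
                                 out Vector3 point)
        {
            Vector3 c23 = Vector3.Cross(n2, n3);
            float det = Vector3.Dot(n1, c23);
            point = Vector3.Zero;
            if (Math.Abs(det) < 1e-6f)
                return false;                     // two of the planes are (nearly) parallel

            point = (c23 * d1 +
                     Vector3.Cross(n3, n1) * d2 +
                     Vector3.Cross(n1, n2) * d3) / det;
            return true;
        }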

    Read the article
