Search Results

Search found 25660 results on 1027 pages for 'dotnetnuke development'.


  • Identifying connected lines drawn free-hand by a user

    - by rawrgoesthelion
    I have a series of 'images' described by a mixture of connected lines and curves. Users will draw on the screen, free hand, and my goal is to break their drawing down into a series of lines and curves that can be matched against the 'images' in my set. For the sake of simplicity, let's assume this is occurring on a touch screen. These lines will be connected. Each time the user's finger moves, the dx and dy are recorded. The drawing is considered complete and analyzed when the user's finger leaves the screen. I'm having trouble figuring out a good way to break the user's drawing down into lines. Is there any well-known approach to this problem, a C++ library that solves it, or any good articles/technical papers on how to achieve this?
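
    One well-known starting point (my suggestion, not something from the question) is the Ramer-Douglas-Peucker algorithm, which collapses the raw stream of touch points into a few straight segments; curves then show up as runs of short segments whose directions change steadily. A minimal C# sketch (the Pt type and the epsilon tolerance are assumptions):

        using System;
        using System.Collections.Generic;

        struct Pt { public double X, Y; public Pt(double x, double y) { X = x; Y = y; } }

        static class Simplify
        {
            // Distance from point p to the infinite line through a and b.
            static double PerpDist(Pt p, Pt a, Pt b)
            {
                double dx = b.X - a.X, dy = b.Y - a.Y;
                double len = Math.Sqrt(dx * dx + dy * dy);
                if (len < 1e-9)
                    return Math.Sqrt((p.X - a.X) * (p.X - a.X) + (p.Y - a.Y) * (p.Y - a.Y));
                return Math.Abs(dy * p.X - dx * p.Y + b.X * a.Y - b.Y * a.X) / len;
            }

            // Recursively keeps only the points that deviate from the end-to-end line by more than epsilon.
            public static List<Pt> DouglasPeucker(List<Pt> pts, double epsilon)
            {
                if (pts.Count < 3) return new List<Pt>(pts);
                int index = 0; double maxDist = 0;
                for (int i = 1; i < pts.Count - 1; i++)
                {
                    double d = PerpDist(pts[i], pts[0], pts[pts.Count - 1]);
                    if (d > maxDist) { maxDist = d; index = i; }
                }
                if (maxDist <= epsilon)
                    return new List<Pt> { pts[0], pts[pts.Count - 1] };
                var left = DouglasPeucker(pts.GetRange(0, index + 1), epsilon);
                var right = DouglasPeucker(pts.GetRange(index, pts.Count - index), epsilon);
                left.RemoveAt(left.Count - 1);   // avoid duplicating the split point
                left.AddRange(right);
                return left;
            }
        }

    Running this over the accumulated points (built up from the recorded dx/dy deltas) yields the corner points of the stroke, which can then be matched against the stored set.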

    Read the article

  • Do I lose/gain performance for discarding pixels even if I don't use depth testing?

    - by Gajoo
    When I first searched for the discard instruction, I found experts saying that using discard results in a performance drain. They said discarding pixels breaks the GPU's ability to use the z-buffer properly, because the GPU has to run the fragment shader for both objects to check whether the one nearer to the camera is discarded or not. For the 2D game I'm currently working on, I've disabled both depth testing and depth writing. I'm drawing all objects sorted by their depth and that's all; there's no need for the GPU to do anything fancy. Now I'm wondering: is it still bad if I discard pixels in my fragment shader?

    Read the article

  • Certain grid lines not rendering as expected

    - by row1
    I am drawing a simple quad (a triangle strip with 4 vertices) as the floor and then drawing an 8x8 grid on top (a collection of vertex pairs for a line list). The vertical grid lines work fine (apart from being very aliased), but some of the horizontal lines do not get rendered. The grid renders fine if I do not draw the quad.

        foreach (EffectPass pass in _Effect.CurrentTechnique.Passes)
        {
            pass.Apply();

            CurrentGraphicsDevice.SetVertexBuffer(_VertexFloorBuffer);
            _Engine.CurrentGraphicsDevice.DrawPrimitives(PrimitiveType.TriangleStrip, 0, 2);

            // Some of the horizontal lines seem to disappear if we draw the above quad.
            CurrentGraphicsDevice.SetVertexBuffer(_VertexGridBuffer);
            CurrentGraphicsDevice.DrawPrimitives(PrimitiveType.LineList, 0, _VertexGridBuffer.VertexCount / 2);
        }

    What could be causing these lines not to be rendered? Update: I added the code below after drawing my quad and grid and it started working, but I am not sure why, since I thought this code was only for drawing the WPF controls:

        elementRenderer.Render();
        spriteBatch.Begin();
        spriteBatch.Draw(elementRenderer.Texture, Vector2.Zero, Color.White);
        spriteBatch.End();
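
    A state-related possibility worth checking (my assumption, not something confirmed in the question): in XNA 4.0 a SpriteBatch Begin()/End() pair changes several GraphicsDevice states (for example DepthStencilState.None and BlendState.AlphaBlend), and those states persist into later draws, which might be why adding the sprite-batch code changes what the line list does. Explicitly restoring the states before drawing the grid is a cheap thing to try:

        // Restore predictable device states before the line list (sketch, untested).
        CurrentGraphicsDevice.DepthStencilState = DepthStencilState.Default;
        CurrentGraphicsDevice.BlendState = BlendState.Opaque;
        CurrentGraphicsDevice.RasterizerState = RasterizerState.CullNone;

        CurrentGraphicsDevice.SetVertexBuffer(_VertexGridBuffer);
        CurrentGraphicsDevice.DrawPrimitives(PrimitiveType.LineList, 0, _VertexGridBuffer.VertexCount / 2);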

    Read the article

  • Making video from 3D graphics in OpenGL

    - by MVTC
    What are some of the preferred methods or libraries for creating video from an OpenGL graphics simulation? For example, I want to create a visualization (video) of an N-body gravity simulation by rendering non-real-time OpenGL frames. The simulation is already coded; I just don't know how to convert it to video. EDIT: I am also interested in providing the functionality described below: the user can adjust parameters, including the time step between captured frames, and then initiate the simulation. The user waits for the simulation to complete and can then watch the results. The user is able to increase or decrease the playback speed of the simulation: in slow motion more frames are used (you see higher-resolution time steps), and when the speed is increased you see lower-resolution time steps at a higher rate, but the number of frames per second shown on screen stays constant.
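
    One common approach (an assumption on my part, not something from the question) is to read each rendered frame back with glReadPixels and pipe the raw bytes straight into an encoder such as ffmpeg, so no video code lives in the simulation itself. A rough C# sketch of the piping side (the file name, resolution and frame rate are made up):

        using System.Diagnostics;
        using System.IO;

        class FrameDumper
        {
            // Hypothetical frame size and rate; match them to your framebuffer and time step.
            const int Width = 640, Height = 480, Fps = 30;

            static void Main()
            {
                var psi = new ProcessStartInfo
                {
                    FileName  = "ffmpeg",
                    Arguments = $"-f rawvideo -pixel_format rgb24 -video_size {Width}x{Height} " +
                                $"-framerate {Fps} -i - -y nbody.mp4",
                    RedirectStandardInput = true,
                    UseShellExecute = false
                };
                using (Process ffmpeg = Process.Start(psi))
                using (Stream pipe = ffmpeg.StandardInput.BaseStream)
                {
                    byte[] frame = new byte[Width * Height * 3];
                    for (int step = 0; step < 900; step++)   // 30 seconds of video at 30 fps
                    {
                        // ... advance the simulation and render here, then fill 'frame'
                        //     (e.g. via glReadPixels; remember OpenGL returns rows bottom-up) ...
                        pipe.Write(frame, 0, frame.Length);
                    }
                }
            }
        }

    The playback-speed requirement then falls out of how many simulation steps you advance between captured frames and/or the frame rate you pass to the encoder.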

    Read the article

  • IDirect3DDevice9::GetRenderTargetData() returns no data

    - by P. Avery
    I've got a simple function to get the render-target data of an RT (with the default pool). This particular RT has a resolution of 1x1 (it's the 10th and final mip of a texture). Here is my code to get the data for IDirect3DSurface9 *pTargetSurface:

        IDirect3DSurface9 *pSOS = NULL;
        pd3dDevice->CreateOffscreenPlainSurface( 1, 1, D3DFMT_A8R8G8B8, D3DPOOL_SYSTEMMEM, &pSOS, NULL );

        // get residual energy
        if( FAILED( hr = pd3dDevice->GetRenderTargetData( pTargetSurface, pSOS ) ) )
        {
            DebugStringDX( ClassName, "Failed to IDirect3DDevice9::GetRenderTargetData() at DownsampleArea()", __LINE__, hr );
            goto Exit;
        }

        // lock surface
        if( FAILED( hr = pSOS->LockRect( &rct, NULL, D3DLOCK_READONLY ) ) )
        {
            DebugStringDX( ClassName, "Failed to IDirect3DSurface9::LockRect() at DownsampleArea()", __LINE__, hr );
            goto Exit;
        }

        // get residual energy from downsampled texture
        pByte = ( BYTE* )rct.pBits;
        D3DXVECTOR4 vEnergy;
        vEnergy.z = ( float )pByte[ 0 ] / 255.0f;
        vEnergy.y = ( float )pByte[ 1 ] / 255.0f;
        vEnergy.x = ( float )pByte[ 2 ] / 255.0f;
        vEnergy.w = ( float )pByte[ 3 ] / 255.0f;

        V( pSOS->UnlockRect() );

    All formatting and settings are correct, and DirectX in debug mode shows no errors. The problem is that the 4 bytes above are all 0. I know this to be incorrect from debugging with PIX, which shows that the RGB channels are 0.078 and alpha is 1. Those values are not smaller than what a single byte can represent (1 / 255). Any ideas? Am I copying the render-target data correctly?

    Read the article

  • 3D JS map rendering

    - by gotha
    In the past I've done a 2D tile map using HTML, CSS and JavaScript. Now I have the task of creating a 3D version using the same technologies; think of it like a space map where all planets have x/y/z positions. Currently, I have no idea how to do this. Is there an existing library or something I can modify to do the job? If not, what method of rendering the map should I use? It needs to be as browser-independent as possible, so I can't use WebGL, Flash or canvas. I'm considering plain JS and HTML, or SVG (using Raphael for compatibility).
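
    If no library fits, the core of a do-it-yourself version is just a perspective projection. The math is language-agnostic; it is shown here in C# only for consistency with the rest of this page, and the focal length and screen center are assumptions:

        // A tiny perspective projection: maps a point (x, y, z) in view space to screen coordinates.
        static class Projector
        {
            public static (double sx, double sy, double scale) Project(
                double x, double y, double z,
                double focal = 500, double cx = 400, double cy = 300)
            {
                double scale = focal / (focal + z);      // larger z = farther away = smaller and nearer the center
                return (cx + x * scale, cy + y * scale, scale);
            }
        }

    In the DOM you would apply sx/sy to an absolutely positioned element (or an SVG node) and use scale for its size; sorting planets by z before drawing gives a simple painter's-algorithm depth effect.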

    Read the article

  • Spawning bullets on command in Box2D

    - by recharge330
    I'm making a simple bullet hell game, but I can't figure out how to get my character to shoot. Let's say I have bulletBody and shipBody; how would I continually spawn bulletBodies at the shipBody's coordinates? I've tried a function that uses an array of b2Bodies and just assigns them the body def and fixture, but that causes the game to crash. C++ sample code would be best, but any help is appreciated. EDIT: It looks like any reference to my b2World in a function will cause the game to crash. How do I declare the bodies without using a b2World as an argument in the function?
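
    The usual structure (a sketch with made-up type names, not the real Box2D API, and written in C# rather than C++ for consistency with the rest of this page) is to hand the spawner a long-lived reference to the world once instead of passing the world into the spawn function on every call. In C++ the same idea means giving the ship a b2World* (or reference) in its constructor and never passing b2World by value, since accidentally copying the world is a common source of crashes:

        using System.Collections.Generic;

        // Hypothetical wrapper types standing in for b2World / b2Body; not the real Box2D API.
        class World
        {
            public Body CreateBulletBody(float x, float y)
            {
                // In real Box2D code: build the body def + fixture here, using the stored world.
                return new Body(x, y);
            }
        }

        class Body
        {
            public float X, Y;
            public Body(float x, float y) { X = x; Y = y; }
        }

        class Ship
        {
            private readonly World world;                      // stored once; never copied or re-created
            private readonly List<Body> bullets = new List<Body>();
            public float X, Y;

            public Ship(World world) { this.world = world; }

            // Called whenever the ship fires; each bullet starts at the ship's current position.
            public void Shoot() { bullets.Add(world.CreateBulletBody(X, Y)); }
        }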

    Read the article

  • Rendering projectiles

    - by Chris
    I'm working on a simple game that has the user control a space ship that shoots small circular projectiles. However, I'm not sure how to render these. Right now I know how to make an LPDIRECT3DSURFACE9 for a sprite and render it onto an LPDIRECT3DDEVICE9, but that's only for a single sprite. I assume I don't need to constantly create new surfaces and devices. How should projectile generation/rendering be handled? Thanks in advance.
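
    The usual pattern is to load the projectile texture once and keep a list of live projectile positions, drawing the same texture at each position every frame; no new surfaces or devices are created per shot. A minimal sketch (shown as a fragment of an XNA game class purely to illustrate the pattern; the question itself uses Direct3D 9, where the idea is the same):

        // One shared texture, many positions.
        Texture2D bulletTexture;                        // loaded once, e.g. in LoadContent
        List<Vector2> bullets = new List<Vector2>();    // positions updated in Update(); add one entry per shot

        void DrawBullets(SpriteBatch spriteBatch)
        {
            spriteBatch.Begin();
            foreach (Vector2 position in bullets)
                spriteBatch.Draw(bulletTexture, position, Color.White);
            spriteBatch.End();
        }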

    Read the article

  • Cube rotation DX10

    - by German
    Well, I'm reading Frank Luna's DirectX 10 book and, while I'm trying to understand the first demo, I found something that's not very clear, at least for me. In the updateScene method, when I press A, S, W or D, the angles mTheta and mPhi change, but after that there are three lines of code that I don't understand exactly:

        // Convert Spherical to Cartesian coordinates: mPhi measured from +y
        // and mTheta measured counterclockwise from -z.
        float x =  5.0f*sinf(mPhi)*sinf(mTheta);
        float z = -5.0f*sinf(mPhi)*cosf(mTheta);
        float y =  5.0f*cosf(mPhi);

    I mean, the comment says what they do, converting spherical coordinates to Cartesian coordinates, but, mathematically, why? Why is the x value calculated from the product of the sines of both angles, and z from the product of a sine and a cosine, and why does y just use the cosine? After that, those values (x, y and z) are used to build the view matrix. The book doesn't explain (mathematically) why those values are calculated like that (and I didn't find anything to help me understand it in the first part of the book, "Mathematical prerequisites"), so it would be good if someone could explain what exactly happens in those lines of code, or just give me a link that helps me understand the math part. Thanks in advance!
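
    The reasoning behind those three lines, sketched out (this summary is mine, not taken from the book): the camera sits on a sphere of radius r = 5 around the origin. mPhi is the angle measured down from the +y axis, so the camera's height is r*cos(phi), while the length of its "shadow" on the xz-plane is r*sin(phi). That shadow is then rotated by mTheta within the xz-plane, measured from -z, which splits it into an x part and a z part:

        // Same conversion, with the two steps written out separately (C# syntax).
        float r = 5.0f;
        float projXZ = r * (float)Math.Sin(mPhi);          // length of the camera position projected onto the xz-plane
        float y      = r * (float)Math.Cos(mPhi);          // height above the plane: phi is measured from +y
        float x      =  projXZ * (float)Math.Sin(mTheta);  // rotate the projected length around the y axis,
        float z      = -projXZ * (float)Math.Cos(mTheta);  // with theta = 0 pointing down -z (hence the minus sign)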

    Read the article

  • XNA Spritebatch sorting by texture vs depth

    - by Motig
    I am refining my 2D game engine, and I want to look into sorting sprite batches by texture (because I'm quite often using the same textures repeatedly). However, I also want to retain a few 'layers' of depth (i.e. ground < buildings < units < GUI, etc.). My question is, which of the following is the best approach (in terms of performance)? Create multiple SpriteBatches and Begin() and End() them in order; or create a single SpriteBatch and call Begin() and End() multiple times, once for each layer (in order)?
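
    For reference, the second option usually looks like the sketch below (my own assumptions, not from the question; DrawGround and the other helpers are hypothetical): one SpriteBatch, one Begin/End per layer, with SpriteSortMode.Texture so draws inside each layer get grouped by texture.

        // Draw layers back to front; within each layer, let the batch sort by texture.
        spriteBatch.Begin(SpriteSortMode.Texture, BlendState.AlphaBlend);
        DrawGround(spriteBatch);
        spriteBatch.End();

        spriteBatch.Begin(SpriteSortMode.Texture, BlendState.AlphaBlend);
        DrawBuildings(spriteBatch);
        spriteBatch.End();

        spriteBatch.Begin(SpriteSortMode.Texture, BlendState.AlphaBlend);
        DrawUnits(spriteBatch);
        spriteBatch.End();

        spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend);   // GUI usually wants draw order preserved
        DrawGui(spriteBatch);
        spriteBatch.End();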

    Read the article

  • Observer Pattern Implementation

    - by user17028
    To teach myself basic game programming, I am going to program a clone of Pong. I will use the Observer design pattern, with an interface between the input and the game engine. However, I'm not sure what the interface should do. One idea I had was for the input interface to tell the game engine that (e.g.) the screen was clicked, then to let the game engine decide what to do with that information (shoot a bullet, for example). Another idea I had was for the input interface, having caught the mouse click, to tell the game engine to shoot a bullet. Which method would be better for me to use?
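
    A minimal C# sketch of the first option (raw input events are published and the game decides what they mean; the names here are made up for illustration):

        using System;
        using System.Collections.Generic;

        // The "subject": input code publishes low-level events and knows nothing about gameplay.
        interface IInputObserver { void OnScreenClicked(int x, int y); }

        class InputSystem
        {
            private readonly List<IInputObserver> observers = new List<IInputObserver>();
            public void Subscribe(IInputObserver o) { observers.Add(o); }

            // Called by the platform layer when a click/tap arrives.
            public void RaiseClick(int x, int y)
            {
                foreach (var o in observers) o.OnScreenClicked(x, y);
            }
        }

        // The "observer": the engine interprets the event (here: fire toward the click).
        class GameEngine : IInputObserver
        {
            public void OnScreenClicked(int x, int y)
            {
                Console.WriteLine($"Shoot toward ({x}, {y})");
            }
        }

    The second option inverts this: the input layer would expose an IGameCommands-style interface with methods like ShootBullet(), which couples the input code to gameplay decisions.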

    Read the article

  • What problem does double or triple buffering solve in modern games?

    - by krokvskrok
    I want to check if my understanding of the reasons for using double (or triple) buffering is correct: A 60 Hz monitor refreshes the display 60 times per second. When the monitor refreshes the display, it updates it pixel by pixel and line by line, requesting the color values for the pixels from video memory. If I now run a game, that game is constantly manipulating this video memory. If the game doesn't use a buffering strategy (double buffering etc.), the following problem can happen: the monitor is refreshing its display and at this moment has already refreshed the first half of it. At the same time, the game has written new data into video memory, so for the second half of the display the monitor reads this newly manipulated data. The result can be tearing or flickering. Is my understanding of the cases for using a buffering strategy correct? Are there other reasons?
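
    A tiny sketch of what double buffering adds (my own illustration, not from the question): the game only ever draws into an off-screen back buffer, and the finished frame is handed to the display in one atomic swap, so scan-out never reads a half-written frame:

        // Conceptual sketch only: a real swap is done by the driver/swap chain, not by swapping arrays.
        class DoubleBufferedRenderer
        {
            const int Width = 640, Height = 480;
            int[] frontBuffer = new int[Width * Height];    // what the monitor scans out
            int[] backBuffer  = new int[Width * Height];    // what the game renders into

            void DrawSceneInto(int[] buffer) { /* fill pixels here; may take any amount of time */ }

            public void RenderFrame()
            {
                DrawSceneInto(backBuffer);                  // the monitor never sees a half-finished frame
                Swap(ref frontBuffer, ref backBuffer);      // one atomic swap, ideally synchronized with vsync
            }

            static void Swap(ref int[] a, ref int[] b) { int[] t = a; a = b; b = t; }
        }

    Triple buffering adds a second back buffer so the game can start on the next frame instead of waiting for the swap.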

    Read the article

  • How to synchronize the ball in a network pong game?

    - by Thaars
    I'm developing a multiplayer network pong game, my first game ever. The current state is: I'm running the physics engine with the same configuration on the server and the clients. The player's own paddle movement is predicted and merely confirmed by the authoritative server; if a difference between them is detected, I correct the position on the client by interpolation. The opponent's paddle is also interpolated 200 ms to 100 ms into the past, because the server broadcasts snapshots to each client every 100 ms. So far it works very well, but now I have to simulate the ball and have a problem understanding the procedure. I've read Valve's (and many other) articles about fast-paced multiplayer several times and understood their approach. Maybe I can compare my ball with their bullets, but their advantage is that the bullets are not visible. When I have to display the ball, and I see my paddle in the present, the opponent in the past, and the server somewhere in between, how can I synchronize the ball across all instances and ensure that it always gets hit by the paddle, even if the paddle is moving fast? Currently my ball's position is simply set by a server update, so it can happen that the ball bounces back even if the paddle is some pixels away (because of a delayed server position). Until now I've had no synced clock across the instances. I'm sending a client step index with each update to the server. If the server has done its job, it sends the snapshot with the last step index of each client back to the clients. Then I look up the stored position at the returned step index and compare them. Do I need a common clock to sync the ball? EDIT: I've tried to sync a common clock for the server and all clients using a timestamp, but I think it's better to use my own stepping instead of a timestamp (so I don't need to calculate with the ping and so on, and the timestamp will never be exact). The physics run 60 times per second and I now use this to keep them synchronized. Is that a good way? When the ball gets calculated by each client, the angle after bouncing can differ because of the different positions of the paddles (the opponent is 200 ms in the past). When the server sends its ball position, velocity and angle (because it knows the position of each paddle and is authoritative), the ball could be in a very different position because of the different angles after bouncing (because the clients receive the server data after 100 ms). How is it possible to interpolate such a huge difference? I posted this question some days ago on Stack Overflow but got no answer yet. Maybe this is the better place for it.
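
    One building block that often helps here (a sketch under my own assumptions, not from the question): keep a small ring buffer of locally simulated ball states keyed by step index, and when the authoritative snapshot for step N arrives, replace the stored state for N with the server's and re-simulate the local steps since N on top of it, instead of snapping the rendered ball straight to the delayed server position:

        using System;

        struct BallState { public float X, Y, Vx, Vy; }

        class BallHistory
        {
            private const int Capacity = 128;                      // about 2 seconds at 60 steps per second
            private readonly BallState[] states = new BallState[Capacity];

            public void Store(int step, BallState s) { states[step % Capacity] = s; }

            // Called when the server snapshot for 'step' arrives; 'currentStep' is the client's latest step.
            public BallState Reconcile(int step, int currentStep, BallState serverState,
                                       Func<BallState, BallState> simulateOneStep)
            {
                states[step % Capacity] = serverState;             // trust the server for that step
                BallState s = serverState;
                for (int i = step + 1; i <= currentStep; i++)      // re-run the local steps on top of it
                {
                    s = simulateOneStep(s);
                    states[i % Capacity] = s;
                }
                return s;                                          // render this (or blend toward it over a few frames)
            }
        }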

    Read the article

  • Tutorial on OpenGL texture formats

    - by Cyan
    Looking at the documentation for glGetTexImage(), one can see that there are plenty of available texture targets: GL_TEXTURE_1D, GL_TEXTURE_2D, GL_TEXTURE_3D, GL_TEXTURE_1D_ARRAY, GL_TEXTURE_2D_ARRAY, GL_TEXTURE_RECTANGLE, GL_TEXTURE_CUBE_MAP_POSITIVE_X, GL_TEXTURE_CUBE_MAP_NEGATIVE_X, GL_TEXTURE_CUBE_MAP_POSITIVE_Y, GL_TEXTURE_CUBE_MAP_NEGATIVE_Y, GL_TEXTURE_CUBE_MAP_POSITIVE_Z, and GL_TEXTURE_CUBE_MAP_NEGATIVE_Z. I've only used GL_TEXTURE_2D for the time being. Is there any place/documentation where one can learn about these others? PS: and yes, of course, I've googled for it; the results are pretty poor.

    Read the article

  • FBO rendering different result between Galaxy S2 and S3

    - by BruceJones
    I'm working on a pong game and have recently set up FBO rendering so that I can apply some post-processing shaders. This proceeds as follows:

        Bind texture A to the framebuffer
        Draw the balls
        Bind texture B to the framebuffer
        Draw texture A using the fade shader on a fullscreen quad
        Bind the screen as the framebuffer
        Draw texture B using a normal textured-quad shader

    Neither texture A nor B is cleared at any point; this way the balls leave trails on screen. See below for the fade shader.

        // Fade shader
        private final String fragmentShaderCode =
            "precision highp float;" +
            "uniform sampler2D u_Texture;" +
            "varying vec2 v_TexCoordinate;" +
            "vec4 color;" +
            "void main(void)" +
            "{" +
            "    color = texture2D(u_Texture, v_TexCoordinate);" +
            "    color.a *= 0.8;" +
            "    gl_FragColor = color;" +
            "}";

    This works fine on the Samsung Galaxy S3/Note 2, but produces a strange effect (it doesn't work) on the Galaxy S2 or Note 1. Screenshots from the Galaxy S3/Note 2 and the Galaxy S2/Note were attached to the original question. Can anyone explain the difference?

    Read the article

  • Server costs and back loading for mobile devices

    - by user23844
    A company approached me to design an MMO for the mobile platform, and I have the perfect idea for them. My question is: how much would a server cost for an FTP game that has both a PvE element and PvP? Also, do you think it would be better, or is it even possible, to back-load the data onto the phones (I'm trying to come up with some interesting way to back up the data in case of emergency)? I don't want the game to be totally reliant on being online (I want to appeal not only to phone users but also to iPod Touch users), and I want there to be an offline mode. If you can't tell, this is my first game besides simple projects I've done on the side. Any help would be greatly appreciated.

    Read the article

  • In wow 6.0 expansion, where to buy wow gold?

    - by user50866
    Rs3gold.com is a leading provider of MMORPG virtual currency and other assets around the world. When the new World of Warcraft expansion arrives, you can buy the cheapest WoW gold from Rs3gold. 8% discount code for your World of Warcraft gold: RS3GOLD. Once your payment on our site is completed successfully, we will deliver your WoW gold instantly, within 10-30 minutes! http://www.rs3gold.com/Gold/wow_us.aspx

    Read the article

  • Techniques for lighting a texture (no shadows)

    - by Paul Manta
    I'm trying to learn about dynamic shadows for 2D graphics. While I understand the basic ideas behind determining what areas should be lit and which should be in shadow, I don't know how I would "lighten" a texture in the first place. Could you go over various popular techniques for lighting a texture and what (dis)advantages each one has? Also, how is lighting a texture with colored light different from using white light?
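
    Most of these techniques boil down to a per-pixel (or per-vertex) multiply of the texel color by the light color and an intensity/attenuation term. With white light the RGB channels are scaled equally, so only brightness changes, while a colored light scales the channels unevenly and therefore tints the texture. A tiny sketch of the math (my own illustration in C#; in a real renderer this runs per-pixel in a shader):

        // texel and lightColor are RGB values in 0..1; attenuation is a 0..1 falloff based on distance to the light.
        static (float r, float g, float b) Light(
            (float r, float g, float b) texel,
            (float r, float g, float b) lightColor,
            float attenuation)
        {
            return (texel.r * lightColor.r * attenuation,
                    texel.g * lightColor.g * attenuation,
                    texel.b * lightColor.b * attenuation);
        }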

    Read the article

  • Higher Performance With Spritesheets Than With Rotating Using C# and XNA 4.0?

    - by Manuel Maier
    I would like to know what the performance difference is between using multiple sprites in one file (a sprite sheet) to draw a game character that can face in 4 directions, and using one sprite per file but rotating that character to my needs. I am aware that the sprite-sheet method restricts the character to only being able to look in predefined directions, whereas the rotation method would give the character the freedom of "looking everywhere". Here's an example of what I am doing:

    Single-sprite method. Assuming I have a 64x64 texture that points north, I do the following if I want it to point east:

        spriteBatch.Draw(
            _sampleTexture,
            new Rectangle(200, 100, 64, 64),
            null,
            Color.White,
            (float)(Math.PI / 2),
            Vector2.Zero,
            SpriteEffects.None,
            0);

    Multiple-sprite method. Now I have a sprite sheet (128x128) where the top-left 64x64 section contains a sprite pointing north, the top-right 64x64 section points east, and so forth. To make it point east, I do the following:

        spriteBatch.Draw(
            _sampleSpritesheet,
            new Rectangle(400, 100, 64, 64),
            new Rectangle(64, 0, 64, 64),
            Color.White);

    So which of these methods uses less CPU time, and what are the pros and cons? Is .NET/XNA optimizing this in any way (e.g. noticing that the same call was made last frame and then just reusing an already rendered/rotated image that's still in memory)?

    Read the article

  • MMO Web game mouse vs wasd

    - by LazyProgrammer
    If I'm considering developing a web-browser-based game with multiple people, and it's an RPG, click-to-move would probably be the only choice for movement, right? Because if you were to use WASD and then send an AJAX request to the server every second that a player held down a WASD key, that would be pretty resource-intensive if the server had to calculate the position and return the map image, assuming the next few screens are already buffered, right? Or is there a way to implement a WASD style and still have the server do all the calculations? (Server-side calculations to avoid cheating.)
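
    A common compromise (a sketch based on my own assumptions, not from the question) is to keep movement server-authoritative but send compact, fixed-rate input samples instead of one request per keypress: the client samples the WASD state 10-20 times per second, sends a one-byte bitmask over a persistent connection (WebSocket or long polling), and the server integrates positions and pushes back coordinates rather than re-rendering map images:

        using System;

        [Flags]
        enum MoveKeys : byte { None = 0, W = 1, A = 2, S = 4, D = 8 }

        static class Movement
        {
            // Client side: called on a fixed timer (e.g. every 100 ms), not on every key event.
            public static byte SampleInput(bool w, bool a, bool s, bool d)
            {
                MoveKeys keys = MoveKeys.None;
                if (w) keys |= MoveKeys.W;
                if (a) keys |= MoveKeys.A;
                if (s) keys |= MoveKeys.S;
                if (d) keys |= MoveKeys.D;
                return (byte)keys;          // one byte per tick per player over the wire
            }

            // Server side: integrates the authoritative position from the last received sample.
            public static (float x, float y) Step((float x, float y) pos, MoveKeys keys, float speed, float dt)
            {
                if ((keys & MoveKeys.W) != 0) pos.y -= speed * dt;
                if ((keys & MoveKeys.S) != 0) pos.y += speed * dt;
                if ((keys & MoveKeys.A) != 0) pos.x -= speed * dt;
                if ((keys & MoveKeys.D) != 0) pos.x += speed * dt;
                return pos;
            }
        }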

    Read the article

  • 2-component color model

    - by Cyan
    RGB is the natural color model for OpenGL, but a lot of other color models exist: for example, CMY(K) for printers, YUV for JPEG, the little cousins YCbCr and YCoCg, HSL and HSV from the 70's, and so on. All these models tend to share a common property: they are based on 3 components. Therefore my question is: does a 2-component color model exist? I'm surprised not to find any. I was expecting something along the lines of hue + lightness to exist. I guess it cannot be as "complete" as a true 3-component color model, but a fine-enough approximation will be good for my use case. The end objective is to store the 2 components in a single BC5 texture (GL_COMPRESSED_RED_GREEN_RGTC2 in OpenGL). The 3rd component requires a second fetch into a second texture, which hurts performance.

    Read the article

  • CUDA 4.1 Update

    - by N0xus
    I'm currently working on porting a particle system to update on the GPU via CUDA. With CUDA, I've already passed the required data over to the GPU and allocated and copied the data via the host. When I build the project, it all runs fine, but when I run it, the project says I need to allocate my h_position pointer. This pointer is my host pointer and is meant to hold the data. I know I need to pass the current particle positions into the required cudaMemcpy call; they are currently stored in a list, with a for loop iterating over each particle and calling the following line of code:

        m_particleList[i].positionY = m_particleList[i].positionY - (m_particleList[i].velocity * frameTime * 0.001f);

    My current host-side CUDA code looks like this:

        float* h_position; // Your host pointer. This holds the data (I assume it's already filled with the data.)
        float* d_position; // Your device pointer, we will allocate and fill this
        float* d_velocity;
        float* d_time;

        int threads_per_block = 128; // You should play with this value
        int blocks = m_maxParticles/threads_per_block + ( (m_maxParticles%threads_per_block)?1:0 );

        const int N = 10;
        size_t size = N * sizeof(float);

        cudaMalloc( (void**)&d_position, m_maxParticles * sizeof(float) );
        cudaMemcpy( d_position, h_position, m_maxParticles * sizeof(float), cudaMemcpyHostToDevice);

    Both of these can be found inside my UpdateParticle() method. I had originally thought it would be a simple case of changing the h_position variable in the cudaMemcpy to m_particleList[i], but then I get the following error:

        no suitable conversion function from "ParticleSystemClass::ParticleType" to "const void *" exists

    I've probably messed up somewhere, but could someone please help fix the issues I'm facing? Everything else seems to run fine; it's just when I try to run the program that certain things hit the fan.

    Read the article

  • Do unused vertices in a 3D object affect performance?

    - by Gajet
    For my game I need to generate a mesh dynamically. Now I'm wondering: does it have a noticeable effect on FPS if I allocate more vertices than I'm actually using, or not? And does it matter whether I'm using DirectX or OpenGL? Edit: The final output will be a w*h cell grid, but for technical reasons it's much easier for me to allocate (w+1)*(h+1) vertices. Sure, I'll only use w*h vertices in indexing, and I know there is some memory wasted there, but I want to know if it also affects FPS or not. (Note that the mesh is only generated once each time you play the game.)

    Read the article

  • Game Asset Management

    - by user964123
    I am making my first small mobile game in C# XNA. Let's say I have 3 screens: the main menu, options, and game screen. A single game session usually lasts about 1 minute, so the user will alternate frequently between the main menu and the game screen. Therefore, once I load the textures for either screen, I want to keep them in memory to avoid frequent reloading. Both screens share some assets, like their background textures, but differ in others. The first solution I came up with is making 2 texture factory classes, MainScreenAssetFactory and GameScreenAssetFactory, each with their own content manager, and storing them in a globally accessible point so that they persist after either screen is destroyed. There is also an OptionsScreenAssetFactory, but I don't want to cache that one, since the options screen is rarely visited. A typical factory would look something like this:

        public class MainScreenAssetFactory
        {
            private readonly ContentManager contentManager;

            public MainScreenAssetFactory(IServiceProvider serviceProvider, string rootDirectory)
            {
                contentManager = new ContentManager(serviceProvider)
                {
                    RootDirectory = rootDirectory
                };
            }

            public Texture2D ListElementBackground
            {
                get { return contentManager.Load<Texture2D>("UserTab"); }
            }

            public Texture2D ListElementBulletPoint
            {
                get { return contentManager.Load<Texture2D>("TabIcon"); }
            }

            public Texture2D LoggedOutUser
            {
                get { return contentManager.Load<Texture2D>("LoggedOutUser"); }
            }
        }

    Since the main, options and game screens share some common resources, instead of loading them more than once I created another class, CommonAssetTexFactory, which holds the common stuff and stays in memory during the app's lifetime. For example, this class gets passed to the options screen when it is created. However, given my small game with its few assets, I am already finding this solution cumbersome and inflexible. Changing anything requires checking whether it's already in the common factory and, if not, modifying the existing factories, and so on. And this is just considering textures; I haven't added sound files yet. I can't imagine bigger games with thousands of resources using this approach. A better idea must exist. Would someone please enlighten me?
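
    One simpler structure worth considering (a sketch under my own assumptions, not from the question): lean on the fact that ContentManager.Load already caches and returns the same instance on repeated calls, and split assets by lifetime rather than by screen: one long-lived manager for everything the menu/game loop keeps bouncing between, and one short-lived manager for rarely visited screens that gets Unload()'d when you leave them. The AssetCache name and shape below are made up for illustration:

        using System;
        using Microsoft.Xna.Framework.Content;

        public class AssetCache
        {
            public ContentManager Persistent { get; }    // menu + game assets, kept for the app lifetime
            private ContentManager transient;            // options screen etc., thrown away after use

            public AssetCache(IServiceProvider services, string rootDirectory)
            {
                Persistent = new ContentManager(services) { RootDirectory = rootDirectory };
            }

            public ContentManager BeginTransient(IServiceProvider services, string rootDirectory)
            {
                transient = new ContentManager(services) { RootDirectory = rootDirectory };
                return transient;
            }

            // Call when leaving a rarely used screen; frees its textures but keeps the persistent ones.
            public void EndTransient()
            {
                transient?.Unload();
                transient = null;
            }
        }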

    Read the article

  • Two GUITextures that do not work together

    - by London2423
    I have two GUITextures that move a cube left and right. It's pretty strange, but together they don't work; if I activate only one, it works. To be more specific: if I have the left GUITexture alone in the game, the cube moves left. If I have the right GUITexture activated alone, the cube moves right. That all seemed fine, I thought, but if I have both of them, the cube only moves right and not left. Where is the mistake? Here is the code inside the cube GameObject for the right move:

        void OnMousedown ()
        {
            transform.position += Vector3.right * Time.deltaTime;
        }

    For the left move:

        void OnMousedown ()
        {
            transform.position += Vector3.left * Time.deltaTime;
        }

    And this is the left GUITexture code:

        //move the cube left
        Cube.GetComponent<Left> ().enabled = true;
        left.transform.position += Vector3.left * Time.deltaTime;

    This is the right GUITexture:

        //move the cube right
        Cube.GetComponent<Left> ().enabled = true;
        right.transform.position += Vector3.right * Time.deltaTime;

    What is the reason for this? I hope someone can help me.

    Read the article
