Search Results

Search found 12182 results on 488 pages for 'game boy'.

Page 321/488

  • How to prevent "underwater sight" in games

    - by CPP_Person
    In many games where the player can go underwater, if you position the camera so the top half of the screen is above the water and the bottom half is below it, it looks as though the water doesn't exist and the player is simply flying slowly while water sounds play. Is there a logical way to solve this? An algorithm? It doesn't seem like a standard solution has emerged, since many games still ship with this problem. I don't want to make the same mistake.
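    One common fix, sketched below under stated assumptions: treat the water surface as a horizontal plane, decide above/below from the camera eye position rather than from whatever happens to be on screen, and render the surface itself as visible geometry so the boundary never disappears. This is a minimal C# sketch, not anyone's shipped solution; waterHeight is a hypothetical name.

      // Minimal sketch: classify the camera against the water plane and choose the
      // rendering path from the eye position, so the view is never treated as
      // half air / half water.
      public enum WaterView { Above, Below }

      public static class WaterCamera
      {
          public static WaterView Classify(float cameraEyeY, float waterHeight)
          {
              return cameraEyeY >= waterHeight ? WaterView.Above : WaterView.Below;
          }

          // When the eye is below the surface, apply the underwater fog/tint to the
          // whole frame; when above, draw the surface quad so the water is visible.
          public static bool UseUnderwaterEffects(float cameraEyeY, float waterHeight)
          {
              return Classify(cameraEyeY, waterHeight) == WaterView.Below;
          }
      }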

    Read the article

  • DirectX 9.0c and C++ GUI

    - by SullY
    Well, I'm trying to code a GUI for my engine, but I've got some problems. I know how to make a UI overlay, but buttons are still black magic to me. Everything I tried was too complicated once it scaled up. For example, I tried checking whether the mouse position matches a pixel belonging to the button, but with bigger areas that gets too complicated. Now I'm searching for a tutorial on how to implement your own GUI; I'm really confused about it. I hope you know some good tutorials. By the way, I took a look at the DXUT sample, but it's too big to get an overview of.
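    For simple buttons a per-pixel comparison is unnecessary; an axis-aligned rectangle test per button is usually enough, checked front-to-back so overlapping widgets resolve correctly. A minimal C# sketch of the idea (the original question is C++/DirectX; the Button type and field names here are illustrative only):

      // Minimal sketch: hit-test buttons with an axis-aligned rectangle instead of
      // comparing the mouse position against individual pixels.
      public struct Button
      {
          public int X, Y, Width, Height;   // screen-space rectangle of the button

          public bool Contains(int mouseX, int mouseY)
          {
              return mouseX >= X && mouseX < X + Width &&
                     mouseY >= Y && mouseY < Y + Height;
          }
      }

      public static class Gui
      {
          // Returns the index of the topmost button under the cursor, or -1 if none.
          public static int HitTest(Button[] buttons, int mouseX, int mouseY)
          {
              for (int i = buttons.Length - 1; i >= 0; i--)   // assumes back-to-front draw order
              {
                  if (buttons[i].Contains(mouseX, mouseY))
                      return i;
              }
              return -1;
          }
      }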

    Read the article

  • HTML5 Canvas A* path finding

    - by Veyha
    I am trying to learn A* path finding. The library I am using is https://github.com/qiao/PathFinding.js, but there is one thing I don't understand how to do. I need to find a path from player.x/player.y (both are 0) to 10/10. This code returns an array of the cells I need to move through: var path = finder.findPath(player.x, player.y, 10, 10, grid); I get the array of positions, but how do I apply it to my player.x and player.y? The array structure is like - 0: 0 1: 0 length: 2, 0: 1 1: 0 length: 2, ... Sorry for my bad English. Thanks.
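    The returned path is just a list of grid cells, so one common approach is to keep an index into it and move the player toward the current cell each update, advancing once that cell is reached. A small C# sketch of the idea (the original uses JavaScript; the speed parameter and tuple layout here are assumptions):

      using System;
      using System.Collections.Generic;

      // Minimal sketch: follow an A* path (a list of grid cells) one waypoint at a time.
      public class PathFollower
      {
          private readonly List<(float X, float Y)> path;  // e.g. [(0,0), (1,0), (1,1), ...]
          private int current;                             // index of the waypoint we are heading to

          public PathFollower(List<(float X, float Y)> path) { this.path = path; }

          // Moves (x, y) toward the current waypoint; call once per update and
          // write the result back to player.x / player.y.
          public (float X, float Y) Step(float x, float y, float speed)
          {
              if (current >= path.Count) return (x, y);    // path finished

              var target = path[current];
              float dx = target.X - x, dy = target.Y - y;
              float dist = (float)Math.Sqrt(dx * dx + dy * dy);

              if (dist <= speed) { current++; return (target.X, target.Y); }  // waypoint reached
              return (x + dx / dist * speed, y + dy / dist * speed);          // keep moving toward it
          }
      }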

    Read the article

  • Getting the front buffer into a gfx mem surface (Dx9)

    - by lapin
    I'm using DirectX 9 to acquire the front buffer. There are a couple of ways I know of to get at the front buffer: GetRenderTargetData() and GetFrontBufferData(). The MSDN pages on both of these API calls state that the data is copied from device memory to system memory. I'd like to copy the front buffer surface directly to another graphics memory surface, as I have other manipulations to perform on the acquired surface before returning it to system memory. I'm creating a D3DUSAGE_DYNAMIC texture (a graphics memory texture) and calling GetFrontBufferData() to write the front buffer to my texture's surface 0. Is this valid? Will the operation remain in graphics memory, or will it need to move to system memory and then back to graphics memory? If the latter, is what I'm trying to achieve possible at all?

    Read the article

  • What is the simplest way to render video into memory (for drawing to a texture) in .NET?

    - by sebf
    In my project I would like to be able to play back video on surfaces in the world. I intend to do this by having the video frames rendered to a block of memory, then using this to update a texture each frame. Everything is in place except for the part that actually gets the video. I have looked on Google and found that the video library world is very expansive (and geared towards video processing), and I am having trouble finding a suitable one. FFmpeg is very comprehensive, but it is an entire suite and would take a good amount of work to integrate. So far the most promising library I've found is the one based on the VLC player libraries; by virtue of using the same resources as VLC Player it is known to be very capable, and it also renders to blocks of memory, but the API (at least of the one on CodePlex) is more a port of the C++ API than a managed wrapper. The 'solution' can be any wrapper/API/library, but with characteristics that make it suitable for use in a rendering engine, namely: renders the video frame data to memory, so it can be picked up and passed to a texture on the GPU easily; super simple - all that is needed is a way to load, seek and render a frame programmatically, ideally using the system's codecs and not requiring an assortment of plugins; permissive licence (LGPL or freer); .NET bindings at least, all the better if it is natively managed. Can anyone suggest a lightweight .NET library that can take a video file and spit out some frames into a byte[]?

    Read the article

  • How to calculate continuous motion with angular velocity in 2D

    - by Rulk
    I'm really new to physics. Maybe someone can help me solve the following problem: I need to calculate the position of an agent on a 2D plane at the next time step, where the time step is large (20+ seconds). What I know about the agent's motion: initial position; direction (normalised vector); velocity (a linear function of time) - the object always moves along its direction; angular velocity (a linear function of time); and optionally an external force direction and external force (a linear function of time). Running a discrete simulation with tiny time steps is not an option.
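    For the building-block case where speed v and angular velocity ω are constant over the step (the time-varying case can be handled by splitting the 20+ second step into a few sub-intervals where they are roughly constant), the motion is a circular arc and integrates in closed form. A sketch of the equations, assuming the heading angle θ is measured from the x-axis and the agent always moves along its heading:

      % Closed-form step for constant v and omega (circular-arc motion).
      % Use the straight-line limit when omega is (near) zero.
      \begin{aligned}
      \theta' &= \theta + \omega\,\Delta t \\
      x' &= x + \tfrac{v}{\omega}\bigl(\sin\theta' - \sin\theta\bigr), &
      y' &= y - \tfrac{v}{\omega}\bigl(\cos\theta' - \cos\theta\bigr) && (\omega \neq 0) \\
      x' &= x + v\cos\theta\,\Delta t, &
      y' &= y + v\sin\theta\,\Delta t && (\omega \approx 0)
      \end{aligned}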

    Read the article

  • System hangs at glReadPixels call with GL_TEXTURE_2D_ARRAY for texturing

    - by Roshan
    I am calling glReadPixels after a glDrawArrays call. I am rendering geometry with a 3D texture on it, using GL_TEXTURE_2D_ARRAY as the target. My system hangs at the glReadPixels call. When I use GL_TEXTURE_3D as the target the issue does not occur and the framebuffer contents are read correctly. glReadPixels(0, 0, GetViewportWidth(), GetViewportHeight(), GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid *)rendered_pixels); I am using SNORM textures with GL_BYTE data in the glTexImage3D call, and I am not calling glPixelStorei. Is that the cause? What should the parameters of the pixel store call be?

    Read the article

  • Rotating a quad around its center

    - by Trixmix
    How can you rotate a quad around its center? This is what I'm trying to do, but it isn't working: GL11.glTranslatef(x-getWidth()/2, y-getHeight()/2, 0); GL11.glRotatef(30, 0.0f, 0.0f, 1.0f); GL11.glTranslatef(x+getWidth()/2, y+getHeight()/2, 0); DRAW. My main problem is that it renders off the screen. Draw code: GL11.glBegin(GL11.GL_QUADS); { GL11.glTexCoord2f(0, 0); GL11.glVertex2f(0, 0); GL11.glTexCoord2f(0, getTexture().getHeight()); GL11.glVertex2f(0, height); GL11.glTexCoord2f(getTexture().getWidth(), getTexture().getHeight()); GL11.glVertex2f(width,height); GL11.glTexCoord2f(getTexture().getWidth(), 0); GL11.glVertex2f(width,0); } GL11.glEnd();
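    Since the quad's vertices start with the top-left corner at the origin, rotating about the centre means composing the transforms as translate-to-centre, then rotate, then shift back by half the size; the posted code adds the x/y offset twice and composes in the wrong order. As a sketch, with w and h the quad's width and height:

      % Rotate about the quad's centre: place the centre at (x + w/2, y + h/2),
      % rotate by theta, then shift the corner-origin quad back by half its size.
      M \;=\; T\!\left(x + \tfrac{w}{2},\; y + \tfrac{h}{2}\right)\, R(\theta)\, T\!\left(-\tfrac{w}{2},\; -\tfrac{h}{2}\right)

    With the legacy matrix stack this is the same call order: translate by (x + w/2, y + h/2), rotate about the z axis, translate by (-w/2, -h/2), then draw.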

    Read the article

  • Material tiling and offset in Unity

    - by Simran kaur
    Ambiguity: What exactly is the difference between tiling a material and offsetting it? Need to do: I need the material to be repeated n times on the object, where n is set via script. How do I do it? It seems to happen through tiling (tried via the Inspector), but then what is the difference between mainTextureOffset and SetTextureOffset? Tried: The following is the line of code I used to try to repeat the texture n times across the width of the object, but it does nothing significant that I can see.
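    Tiling (the texture scale) controls how many times the texture repeats across the 0..1 UV range, while the offset only shifts where that repetition starts; mainTextureScale/mainTextureOffset are shortcuts for the material's main texture, and SetTextureScale/SetTextureOffset take the texture property name explicitly. A minimal Unity C# sketch of setting the repeat count from script, assuming the object's UVs span 0..1 across its width:

      using UnityEngine;

      // Minimal sketch: repeat the material's main texture n times across the object.
      public class RepeatTexture : MonoBehaviour
      {
          public int n = 4;   // how many times the texture should tile across the width

          void Start()
          {
              Material mat = GetComponent<Renderer>().material;

              // Tiling: how many repetitions fit into the 0..1 UV range.
              mat.mainTextureScale = new Vector2(n, 1f);
              // Equivalent, naming the texture property explicitly:
              // mat.SetTextureScale("_MainTex", new Vector2(n, 1f));

              // Offset: shifts where the pattern starts; it does not add repetitions.
              mat.mainTextureOffset = Vector2.zero;
          }
      }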

    Read the article

  • How to offset particles from point of origin

    - by Sun
    Hi, I'm having trouble offsetting particles from a point of origin. I want my particles to spread out only after a certain radius from the point of origin. Right now all particles are emitted from the point of origin itself; what I want is particles offset from the origin by some amount, i.e. starting at the edge of a circle around it. What is the best way to achieve this? At the moment I have the point of origin, the position of each particle and its rotation angle. Sorry for the poor illustrations. Edit: I was mistaken - when a particle is created I have only the point of origin. I am able to calculate the particle's rotation in the update method, after it has moved to a new location, using atan2(). This is how I create/manage particles: a new particle is created at the enemy ship's death location, and for every new particle added to the list, Update and Draw are called to update its position, calculate its new angle and draw it.
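    If each particle is given an outward direction when it is spawned, the offset is just the origin plus that direction scaled by the desired radius; the same direction can then drive its movement, and atan2 on it gives the rotation immediately. A minimal C# sketch (names are illustrative, not from the original code):

      using System;

      // Minimal sketch: spawn particles on a circle of radius spawnRadius around the
      // origin instead of exactly at the origin, then let them travel outward.
      public static class ParticleSpawner
      {
          private static readonly Random Rng = new Random();

          public static (float X, float Y, float Angle) Spawn(float originX, float originY, float spawnRadius)
          {
              // Pick a random outward direction; reuse it as the travel direction.
              double angle = Rng.NextDouble() * 2.0 * Math.PI;
              float dirX = (float)Math.Cos(angle);
              float dirY = (float)Math.Sin(angle);

              // Offset the spawn position from the origin along that direction.
              return (originX + dirX * spawnRadius, originY + dirY * spawnRadius, (float)angle);
          }
      }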

    Read the article

  • Best approach to get clicked objects from a display list (2D)

    - by Ixx
    I'm implementing a display list to manage my visuals on screen, and I want to know which object is clicked. My objects already have a z-order variable. With my current knowledge (almost nothing), the only thing that comes to mind is a linear search that collects all the objects containing the clicked point and then selects the one with the highest z-order. But I suspect there are far better approaches - something with trees (binary search?), or container display objects searched recursively? I just don't know where to start looking for this concrete case. Any hint, link or concrete solution is welcome.
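    For a modest number of objects the linear scan is perfectly reasonable; trees (quadtrees, or recursing through container display objects front-to-back) only pay off once there are many objects. A minimal C# sketch of the linear version (the interface and names are illustrative):

      using System.Collections.Generic;

      // Minimal sketch: pick the topmost object under a click by filtering on
      // containment and keeping the highest z-order.
      public interface IDisplayObject
      {
          int ZOrder { get; }
          bool Contains(float x, float y);   // hit test in screen space
      }

      public static class Picking
      {
          public static IDisplayObject Pick(IEnumerable<IDisplayObject> objects, float x, float y)
          {
              IDisplayObject best = null;
              foreach (var obj in objects)
              {
                  if (obj.Contains(x, y) && (best == null || obj.ZOrder > best.ZOrder))
                      best = obj;   // keep the frontmost hit so far
              }
              return best;
          }
      }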

    Read the article

  • What should I do if my text exceeds my text render target boundaries?

    - by user1423893
    I have a method for drawing strings in 3D that does the following: set a render target, draw each character as a quad to the render target using an orthographic projection, unset the render target, then draw the render target texture using a perspective projection and a world transform. My problem is how to deal with strings whose rendered length exceeds the render target dimensions. For example, if I have the string "This is a reallllllllllly long string" and the render target can't accommodate it, it will only capture "This is a realllll". The render target (and its size) could be recreated each frame, but wouldn't that be far too costly?
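    One option is to measure the string first and only (re)create the render target when the text actually changes, caching it afterwards, so the cost is paid per text change rather than per frame. A hedged XNA-style sketch, assuming a SpriteFont is available for measuring:

      using Microsoft.Xna.Framework;
      using Microsoft.Xna.Framework.Graphics;

      // Hedged sketch: size the render target to the measured string and cache it,
      // recreating it only when the text changes rather than every frame.
      public class TextTexture
      {
          private RenderTarget2D target;
          private string cachedText;

          public RenderTarget2D GetOrCreate(GraphicsDevice device, SpriteFont font, string text)
          {
              if (target != null && text == cachedText)
                  return target;                           // reuse: text has not changed

              Vector2 size = font.MeasureString(text);     // pixel size the text needs
              if (target != null) target.Dispose();
              target = new RenderTarget2D(device,
                  (int)System.Math.Ceiling(size.X),
                  (int)System.Math.Ceiling(size.Y));
              cachedText = text;
              return target;                               // caller renders the text into it once
          }
      }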

    Read the article

  • Scene transitions

    - by Mars
    It's my first time working with actual scenes/states, aka DrawableGameComponents, which work separately from one another. I'm now wondering what the best way is to make transitions between them, and how to affect them from other scenes. Let's say I wanted to "push" one screen to the right, with another one coming in at the same time. Naturally I'd have to keep drawing both until the transition is complete, and I'd have to adjust the coordinates I'm drawing at while doing it. Is there a way around specifically handling this special case in every single scene? Or if I wanted to fade one into the other - the question stays the same: how would you do that without having to handle it in every single scene? While writing this I'm realising it will be the same for all kinds of transitions. Maybe a central Draw method in the manager could be a solution, where parameters and effects are applied when necessary. But this wouldn't work if drawn objects have their own draw method and aren't drawn within the scene, or if an effect has to be applied to the whole scene. So maybe scenes have to be drawn to their own render target? That way one call to the base class after the normal drawing could be enough to apply the effects while drawing to the main render target. But I once heard there are problems when switching from target to target, back and forth - so is that even a viable option? As you can see, I have some basic ideas of how it might work, but nothing specific. I'd like to learn the common way to achieve such things: a general way to apply all kinds of transitions.
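    One way to keep transition logic out of the individual scenes is to have each scene render into its own render target and let a central manager composite the results; the manager then owns sliding, fading and any other effect. A hedged XNA-style sketch of the "push to the right" case (the two scenes are assumed to have already been drawn into the two textures):

      using Microsoft.Xna.Framework;
      using Microsoft.Xna.Framework.Graphics;

      // Hedged sketch: the manager composites two already-rendered scenes with a
      // sliding offset, so no scene has to know a transition is happening.
      public class TransitionManager
      {
          public void DrawPushTransition(SpriteBatch spriteBatch,
                                         Texture2D outgoingScene,   // old scene's render target
                                         Texture2D incomingScene,   // new scene's render target
                                         float progress,            // 0 = old scene only, 1 = new scene only
                                         int screenWidth)
          {
              float offset = progress * screenWidth;

              spriteBatch.Begin();
              // Old scene slides out to the right, new scene slides in from the left.
              spriteBatch.Draw(outgoingScene, new Vector2(offset, 0f), Color.White);
              spriteBatch.Draw(incomingScene, new Vector2(offset - screenWidth, 0f), Color.White);
              spriteBatch.End();
          }
      }

    A fade works the same way, drawing both textures at their normal position and varying the colour's alpha instead of the offset.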

    Read the article

  • Implementing my Entity System. Questions about some problems I have found.

    - by Notbad
    Hi! During this week I have been deciding on the implementation of my entity system. It is a big topic, so it has been difficult to settle on one option among them all. This has been my decision: 1) I don't have an entity class, it is just an id. 2) I have systems that contain a list of components (each list is homogeneous; I mean, RenderSystem will just have RenderComponents). 3) Components will be just data. 4) There will be some kind of "entity prototypes" in a manager or similar, from which we will create entity instances. Ideally they will define the type of components an entity has and its initialization data. 5) Prototype code to create an entity (this is from the top of my head): int id=World::getInstance()->createEntity("entity template"); 6) This will notify all systems that a new entity has been created, and if the entity needs a component that a system handles, that system will add it to the entity. OK, these are the ideas. Let's see if someone can help with the problems: 1) The main problem is the templates that are sent to the systems during creation to populate the entity with the needed components. What would you use - an OR'ed int, a list of strings? 2) How do I initialize the components once the entity has been created, and how do I store that in the template? I have thought about the template having a virtual function that, after the entity is created and populated, gets the components and sets their initial values. 3) Don't you think this is a lot of work just to create an entity? Sorry for the long post; I have tried to lay out my ideas and findings so that others have some context, besides just stating my problems. Thanks in advance, Notbad.
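    On problem 1, one sketch of an alternative to an OR'ed int or a list of strings: make the template an explicit list of component factories, one delegate per component, each capturing its own initialization data. That answers "which components" and "how to initialize them" in one place, and the world can hand each created component to the system that owns its type. All names below are hypothetical, not from the original design:

      using System;
      using System.Collections.Generic;

      // Hedged sketch: an entity template is a named list of component factories.
      public abstract class Component { public int EntityId; }
      public class RenderComponent : Component { public string Mesh; }
      public class PhysicsComponent : Component { public float Mass; }

      public class EntityTemplate
      {
          public string Name;
          public List<Func<Component>> Factories = new List<Func<Component>>();
      }

      public class World
      {
          private int nextId;
          private readonly Dictionary<string, EntityTemplate> templates =
              new Dictionary<string, EntityTemplate>();

          public void Register(EntityTemplate t) { templates[t.Name] = t; }

          public int CreateEntity(string templateName)
          {
              int id = nextId++;
              foreach (var factory in templates[templateName].Factories)
              {
                  Component c = factory();
                  c.EntityId = id;
                  // Hand the component to the system that owns its type
                  // (e.g. RenderComponent -> RenderSystem); omitted here.
              }
              return id;
          }
      }

      // Usage: a template whose instances come with a mesh and a mass already set.
      // var t = new EntityTemplate { Name = "enemy" };
      // t.Factories.Add(() => new RenderComponent { Mesh = "enemy.fbx" });
      // t.Factories.Add(() => new PhysicsComponent { Mass = 10f });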

    Read the article

  • Transparent parts of texture are opaque black instead

    - by Aaron
    I render a sprite twice, one on top of the other. The sprites have transparent parts, so I should be able to see the bottom sprite underneath the top one. Instead, the transparent parts are opaque black (the clear colour) and the topmost sprite blocks the bottom sprite. My fragment shader is trivial: uniform sampler2D texture; varying vec2 f_texcoord; void main() { gl_FragColor = texture2D(texture, f_texcoord); } I have glEnable(GL_BLEND) and glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) in my initialization code. My texture comes from a PNG file that I load with libpng. I make sure to use GL_RGBA when initializing the texture with glTexImage2D (otherwise the sprites look like noise).

    Read the article

  • How to factorize code in Unreal Kismet (i.e. "Material Function"s for Kismet)

    - by Georges Dupéron
    In the Unreal Development Kit, when using the Material Editor, one can factor frequently-used groups of nodes out into a Material Function (Content Browser, right-click, New Material Function, IIRC). When defining the behaviour of some actor in Kismet, one can easily have a dozen nodes involved. If I have many actors that share the same behaviour, I'll copy-paste these nodes and change the variables so they point to the other actors. This leads to inconsistencies (a modification to one actor's behaviour isn't propagated to the copy-pasted nodes), complexity (you end up with hundreds of nodes), and generally wasted effort. My question is: can I create a "Kismet function", just like a Material Function? Note: I'd rather avoid using UnrealScript. I don't even know where to type UnrealScript, don't know where the documentation is, and more generally don't have enough time to invest in learning it. This "Kismet function" feature must be usable by artists with little programming knowledge. If a (simple) script suffices to add this feature to the Kismet editor, so that one can create several "functions" without using UnrealScript, then fine, but I don't really want to have to write a script each time I want to factor out a few nodes. Thanks for any information!

    Read the article

  • Drawing a texture line between two vectors in XNA WP7

    - by Krav
    I want to create a simple graph maker in WP7. The goal is to draw a textured line between two vectors that the user defines with touch. I already have the rotation working, but not correctly, because it doesn't take the line texture's height into account, and because of that there are too many overlapping textures. So it does draw the line, just with far too many sprites. How could I calculate this correctly? Here is the code: public void DrawLine(Vector2 st,Vector2 dest,NodeUnit EdgeParent,NodeUnit EdgeChild) { float d = Vector2.Distance(st, dest); float rotate = (float)(Math.Atan2(st.Y - dest.Y, st.X - dest.X)); direction = new Vector2(((dest.X - st.X) / (float)d), (dest.Y - st.Y) / (float)d); Vector2 _pos = st; World.TheHive.Add(new LineHiveMind(linetexture, _pos, rotate, EdgeParent, EdgeChild,new List<LineUnit>())); for (int i = 0; i < d; i++) { World.TheHive.Last()._lines.Add(new LineUnit(linetexture, _pos, rotate, EdgeParent, EdgeChild)); _pos += direction; } } d is the distance between st (the starting node) and dest (the destination node), rotate is the rotation, direction is the direction from the starting node to the destination node, and _pos is the position being stepped along the line. Thanks for any suggestions/help!
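    Rather than stepping one unit at a time and stacking segment sprites, the whole line can be drawn as a single sprite stretched to the distance between the two points (and if segments are kept, the loop should at least advance by the texture's length instead of by 1). A hedged XNA-style sketch of the single-sprite version:

      using System;
      using Microsoft.Xna.Framework;
      using Microsoft.Xna.Framework.Graphics;

      // Hedged sketch: draw a line between two points as one stretched, rotated
      // sprite instead of many overlapping segment sprites.
      public static class LineRenderer
      {
          // lineTexture is assumed to be a small horizontal strip (e.g. 1x1 or 16x1 pixels).
          public static void DrawLine(SpriteBatch spriteBatch, Texture2D lineTexture,
                                      Vector2 st, Vector2 dest, float thickness, Color color)
          {
              float distance = Vector2.Distance(st, dest);
              float rotation = (float)Math.Atan2(dest.Y - st.Y, dest.X - st.X);

              // Scale the texture to span the full distance in X and the desired
              // thickness in Y; anchor and rotate it about its left edge at st.
              Vector2 scale = new Vector2(distance / lineTexture.Width, thickness / lineTexture.Height);
              Vector2 origin = new Vector2(0f, lineTexture.Height / 2f);

              spriteBatch.Draw(lineTexture, st, null, color, rotation, origin, scale, SpriteEffects.None, 0f);
          }
      }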

    Read the article

  • LWJGL in Visual Studio (possible)?

    - by Suds
    I switched from XNA and C# to LWJGL and Java about 14 months ago. Naturally, this called for a switch in IDE. I started using Eclipse because I had also done some basic Android development in the past. I soon switched to NetBeans - Eclipse is just too primitive. After using NetBeans for about six months, I've started looking over the fence at Visual Studio 11, toying with Metro apps for Windows 8. Now I want to know: is there any known way to use Visual Studio for LWJGL?

    Read the article

  • XNA - Obtaining depth from the scene's render target?

    - by user1423893
    I'm currently rendering my scene to a render target so it can be used for rendering methods such as post processing and order independent transparency. rtScene = new RenderTarget2D( GraphicsDevice, GraphicsDevice.PresentationParameters.BackBufferWidth, GraphicsDevice.PresentationParameters.BackBufferHeight, false, SurfaceFormat.Rgba64, DepthFormat.Depth24Stencil8, // Requires a depth format for objects to be drawn correctly (e.g. wireframe model surrounding model) 0, RenderTargetUsage.PreserveContents ); I am required to use RenderTargetUsage.PreserveContents so that the same render target can be rendered to multiple times, once for each of the draw methods below. DrawBackground DrawDeferred DrawForward DrawTransparent The problem is that DrawTransparent requires a copy of the scene's depth as a texture. Is there any way to obtain this from the scene render target above (rtScene)? I can't have more than one render target with RenderTargetUsage.PreserveContents as this causes problems on hardware such as the XBOX 360, so rendering the depth to a separate render target at the same time as I render the scene isn't possible as far as I can tell. Would I be able to get around this problem by "Ping-Ponging" two render targets (using the more compatible RenderTargetUsage.DiscardContents) and using the result for the depth texture?
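    On the ping-pong idea: with two RenderTargetUsage.DiscardContents targets, each pass writes into one target while sampling the previous pass's result from the other, and the two references are swapped between passes. A hedged sketch of just that bookkeeping (whether the transparency pass can obtain depth this way still depends on one of the earlier passes actually writing depth into a readable channel):

      using Microsoft.Xna.Framework.Graphics;

      // Hedged sketch: ping-pong between two DiscardContents render targets so each
      // pass can read the previous pass's output while writing a fresh target.
      public class PingPongTargets
      {
          public RenderTarget2D Read  { get; private set; }   // result of the previous pass
          public RenderTarget2D Write { get; private set; }   // target for the current pass

          public PingPongTargets(GraphicsDevice device, int width, int height)
          {
              Read  = new RenderTarget2D(device, width, height, false, SurfaceFormat.Rgba64,
                          DepthFormat.Depth24Stencil8, 0, RenderTargetUsage.DiscardContents);
              Write = new RenderTarget2D(device, width, height, false, SurfaceFormat.Rgba64,
                          DepthFormat.Depth24Stencil8, 0, RenderTargetUsage.DiscardContents);
          }

          // Call after each pass: what was just written becomes the next pass's input.
          public void Swap()
          {
              RenderTarget2D tmp = Read;
              Read = Write;
              Write = tmp;
          }
      }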

    Read the article

  • Estimating costs in a GOAP system

    - by fullwall
    I'm currently developing a GOAP system in Java. An explanation of GOAP can be found at http://web.media.mit.edu/~jorkin/goap.html. Essentially, it's using A* to plot between Actions that mutate the world state. To provide a fair chance for all Actions and Goals to execute, I'm using a heuristic function to estimate the cost of doing something. What is the best way to estimate this cost so that it is comparable to all the other costs? As an example, estimating the cost of running away from an enemy versus attacking it - how should the cost be calculated to be comparable?

    Read the article

  • Information about rendering, batches, the graphical card, performance etc. + XNA?

    - by Aidiakapi
    I know the title is a bit vague, but it's hard to describe what I'm really looking for, so here goes. When it comes to CPU rendering, performance is mostly easy to estimate and straightforward, but when it comes to the GPU, due to my lack of technical background knowledge, I'm clueless. I'm using XNA, so it'd be nice if the theory could be related to that. What I actually want to know is: what happens when and where (CPU/GPU) when you perform specific draw actions? What is a batch? What influence do effects, projections etc. have? Is data persisted on the graphics card, or is it transferred over at every step? When there's talk about bandwidth, does that mean the graphics card's internal bandwidth, or the pipeline from CPU to GPU? Note: I'm not actually looking for information on how the drawing process itself happens; that's the GPU's business. I'm interested in all the overhead that precedes it. I'd like to understand what's going on when I do action X, so I can adapt my architecture and practices to that. Any articles (possibly with code examples), information, links or tutorials that give more insight into how to write better games are very much appreciated. Thanks :)

    Read the article

  • Portal View/Projection Matrix near plane

    - by melak47
    For RenderToTexture/camera-based portal rendering, the basics seem simple enough. However, with a free camera, most of the time it is going to be looking at such portals at an angle. A regular near clipping plane will not always work here: it will either intersect the wall the portal is sitting on, or possibly objects in front of the wall. The desired near clipping plane would be aligned with the portal itself, producing a view volume whose near plane lies in the portal's plane rather than perpendicular to the camera's view direction. So here is my question: how does one construct or "truncate" a view/projection matrix to achieve such an off-camera-normal (near) clipping plane?
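    The usual answer is oblique near-plane clipping (Eric Lengyel's "oblique view frustum" technique): keep the projection matrix as it is except for its third row, which is rebuilt so the near clip plane coincides with the portal plane. A sketch of the construction, with C the portal plane in camera space (camera on its negative side), M the projection matrix and M_4 its fourth row; note the far plane gets skewed as a side effect:

      % Oblique near plane (Lengyel): replace the third row of the projection
      % matrix so the near clip plane becomes the camera-space plane C.
      Q = M^{-1}\begin{pmatrix}\operatorname{sgn}(C_x)\\ \operatorname{sgn}(C_y)\\ 1\\ 1\end{pmatrix},
      \qquad
      M_3' = \frac{2\,(M_4 \cdot Q)}{C \cdot Q}\,C \;-\; M_4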

    Read the article

  • How are vertex shader outs sent as inputs to the fragment shader?

    - by Jeffrey
    I'm learning the OpenGL 3.2 way of doing things and I think it's quite great; I've actually understood more about shaders and the non-fixed pipeline in one week than in the two years I spent trying to learn the fixed-function pipeline. But here's my question. From what I think I've understood, the vertex shader is run for each vertex in the VBO, but the fragment shader is run for each fragment/pixel (is that right?), which is a huge number compared to, say, the 3 vertices of a triangle. Now, it seems that the vertex shader's out variables (like colours and such) are passed one-to-one to the fragment shader. But let's say I pass the vertex position from the vertex shader to the fragment shader: how is this executed? Which vertex (A, B or C of the hypothetical triangle) is passed to each fragment, and why?
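    No single vertex "wins": for each fragment the rasteriser blends the three vertices' out values using the fragment's barycentric coordinates inside the triangle (perspective-correct by default, unless the variable is qualified flat or noperspective). As a sketch, where v_A, v_B, v_C are the values the vertex shader wrote and the lambdas are the fragment's barycentric weights:

      % Each fragment's input is a weighted blend of the three vertex outputs.
      v_{\text{fragment}} = \lambda_A v_A + \lambda_B v_B + \lambda_C v_C,
      \qquad \lambda_A + \lambda_B + \lambda_C = 1,\quad \lambda_i \ge 0

    Perspective-correct interpolation additionally weights each term by 1/w of its vertex and renormalises, so values vary correctly across a triangle seen at an angle.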

    Read the article

  • Wall avoidance steering

    - by Vodemki
    I'm making a small steering simulator using the Reynolds boid algorithm. Now I want to add a wall avoidance feature. My walls are in 3D and defined using two points, like this: ---------. P2 | | P1 .--------- My agents have a velocity, a position, etc. Could you tell me how to implement avoidance for my agents? Vector2D ReynoldsSteeringModel::repulsionFromWalls() { Vector2D force; vector<Wall *> wallsList = walls(); Point2D pos = self()->position(); Vector2D velocity = self()->velocity(); for (unsigned i=0; i<wallsList.size(); i++) { //TODO } return force; } I then take all the forces returned by my boid functions and apply them to my agent. I just need to know how to do the same for my walls. Thanks for your help.
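    One common approach, as a sketch (not from the original code): for each wall, find the closest point on the wall segment to the agent, and if the agent is closer than some avoidance radius, add a force pushing it away from that point, growing as the distance shrinks. With wall segment P1P2, agent position p and avoidance radius r:

      % Closest point c on the wall segment to the agent, then a repulsion force
      % that ramps up as the agent gets closer than the avoidance radius r.
      t = \operatorname{clamp}\!\left(\frac{(p - P_1)\cdot(P_2 - P_1)}{\lVert P_2 - P_1\rVert^{2}},\, 0,\, 1\right),
      \qquad c = P_1 + t\,(P_2 - P_1)

      \mathbf{F} =
      \begin{cases}
      \dfrac{p - c}{\lVert p - c\rVert}\,\bigl(r - \lVert p - c\rVert\bigr) & \text{if } \lVert p - c\rVert < r \\
      \mathbf{0} & \text{otherwise}
      \end{cases}

    Summing F over all nearby walls inside the loop, and optionally weighting by how directly the agent's velocity points at the wall, gives the steering force the TODO is after.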

    Read the article

  • How can I get textures on the edges of walls like in Super Metroid and Aquaria?

    - by meds
    Games like Super Metroid and Aquaria present their terrain with the outward-facing parts showing rocks and detail, while deeper behind them (i.e. underground) there's different detail or just black. I would like to do something similar using polygons. Terrain in my current level is created as a set of overlapping square boxes. I'm not sure whether this rendering approach will work with such a system for creating terrain, but if anyone has ideas I'd love to hear them. Otherwise I'd like to know how I should rewrite the terrain rendering system so that it can draw terrain in this manner.

    Read the article
