Search Results

Search found 12182 results on 488 pages for 'game boy'.


  • Detailed Modern OpenGL Tutorial?

    - by Kogesho
    I'm asking for a specific kind of modern OpenGL tutorial. I need a tutorial that doesn't skip explaining any lines of code. It should also include several independent objects moving/rotating (most tutorials use only one object), as well as imported 3D objects and collision detection for them. It should also avoid material that won't be used. Arcsynthesis, for example, introduces a concept, and after teaching it, the next tutorial explains how bad it is for performance and introduces another method. Do you know any?

    Read the article

  • Estimating costs in a GOAP system

    - by fullwall
    I'm currently developing a GOAP system in Java. An explanation of GOAP can be found at http://web.media.mit.edu/~jorkin/goap.html. Essentially, it uses A* to plan across Actions that mutate the world state. To give all Actions and Goals a fair chance to execute, I'm using a heuristic function to estimate the cost of doing something. What is the best way to estimate this cost so that it is comparable to all the other costs? As an example: when estimating the cost of running away from an enemy versus attacking it, how should each cost be calculated so the two are comparable?
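
    One common way to make heterogeneous costs comparable (an editorial sketch, not something from Orkin's material): express every cost in a single currency, such as expected time to complete plus a risk penalty converted into time. A minimal C# sketch, where the weight and the WorldState type are illustrative assumptions:

        // All costs reduce to one currency: expected seconds, plus a risk
        // term converted into seconds by a tunable weight.
        class WorldState { /* blackboard data */ }

        interface IGoapAction
        {
            float ExpectedDuration(WorldState state); // seconds to complete
            float ExpectedRisk(WorldState state);     // 0..1 chance of harm
        }

        static class GoapCost
        {
            const float RiskWeight = 10f; // certain harm ~ ten seconds lost

            public static float Estimate(IGoapAction action, WorldState state)
            {
                return action.ExpectedDuration(state)
                     + RiskWeight * action.ExpectedRisk(state);
            }
        }

    Under this scheme "flee" and "attack" become directly comparable: fleeing might cost five seconds of travel at low risk, attacking two seconds at high risk, and the weight decides the trade.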

    Read the article

  • FPS camera specification

    - by user1095108
    I remember I once composed an FPS viewing transformation as a composition of three rotations, each parameterized by an angle. The first angle specified the left/right rotation around the y-axis, the second the up/down rotation around the x-axis, and the third the rotation around the z-axis. The viewing transformation was therefore specified by three angles. Naturally, this transformation had a gimbal lock, depending on the order in which the rotations were performed. What should I look at to derive my viewing transformation without the gimbal lock? I already know the "lookAt" method, but I find it cumbersome. EDIT: My first guess is to do the first two rotations to get a viewing direction, and then apply an axis-angle rotation around that axis.
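
    A common way out (a sketch with XNA-style math types; the same idea transfers to glm or quaternions): keep only yaw and pitch, clamp pitch just short of ±90°, never use the third (roll) angle, and rebuild the orientation from the two angles every frame instead of accumulating incremental rotations. The lock comes from the third rotation and from accumulation, not from yaw/pitch themselves:

        // Build the view from two angles each frame; CreateLookAt is only
        // a convenience for assembling the final matrix here.
        Matrix FpsView(Vector3 eye, float yaw, float pitch)
        {
            pitch = MathHelper.Clamp(pitch, -1.55f, 1.55f); // just under 90 deg

            Matrix orientation = Matrix.CreateRotationX(pitch)
                               * Matrix.CreateRotationY(yaw);

            Vector3 forward = Vector3.TransformNormal(Vector3.Forward, orientation);
            Vector3 up      = Vector3.TransformNormal(Vector3.Up, orientation);

            return Matrix.CreateLookAt(eye, eye + forward, up);
        }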

    Read the article

  • ERROR #342: DEVICE_SHADER_LINKAGE_SEMANTICNAME_NOT_FOUND

    - by Telanor
    I've stared at this for at least half an hour now and I cannot figure out what DirectX is complaining about. I know this error normally means you put float3 instead of a float4 or something like that, but I've checked over and over and as far as I can tell, everything matches. This is the full error message:

        D3D11: ERROR: ID3D11DeviceContext::DrawIndexed: Input Assembler - Vertex Shader linkage error:
        Signatures between stages are incompatible. The input stage requires Semantic/Index (COLOR,0)
        as input, but it is not provided by the output stage.
        [ EXECUTION ERROR #342: DEVICE_SHADER_LINKAGE_SEMANTICNAME_NOT_FOUND ]

    This is the vertex shader's input signature as seen in PIX:

        // Input signature:
        //
        // Name        Index   Mask   Register   SysValue   Format   Used
        // ----------- ----- ------ ---------- ---------- -------- ------
        // POSITION        0   xyz           0       NONE    float    xyz
        // NORMAL          0   xyz           1       NONE    float
        // COLOR           0   xyzw          2       NONE    float

    The HLSL structure looks like this:

        struct VertexShaderInput
        {
            float3 Position : POSITION0;
            float3 Normal   : NORMAL0;
            float4 Color    : COLOR0;
        };

    The input layout, from PIX, is: (screenshot omitted)

    The C# structure holding the data looks like this:

        [StructLayout(LayoutKind.Sequential)]
        public struct PositionColored
        {
            public static int SizeInBytes = Marshal.SizeOf(typeof(PositionColored));
            public static InputElement[] InputElements = new[]
            {
                new InputElement("POSITION", 0, Format.R32G32B32_Float, 0),
                new InputElement("NORMAL", 0, Format.R32G32B32_Float, 0),
                new InputElement("COLOR", 0, Format.R32G32B32A32_Float, 0)
            };

            Vector3 position;
            Vector3 normal;
            Vector4 color;

            #region Properties
            ...
            #endregion

            public PositionColored(Vector3 position, Vector3 normal, Vector4 color)
            {
                this.position = position;
                this.normal = normal;
                this.color = color;
            }

            public override string ToString()
            {
                StringBuilder sb = new StringBuilder(base.ToString());
                sb.Append(" Position=");
                sb.Append(position);
                sb.Append(" Color=");
                sb.Append(Color);
                return sb.ToString();
            }
        }

    SizeInBytes comes out to 40, which is correct (4*3 + 4*3 + 4*4 = 40). Can anyone find where the mistake is?
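
    One variable worth eliminating (a guess, not a confirmed diagnosis): the four-argument InputElement constructor leaves the byte offsets implicit. Spelling them out, assuming SlimDX's five-argument overload (semantic name, semantic index, format, byte offset, input slot), makes the layout unambiguous:

        // Hypothetical variant with explicit byte offsets (12 = one Vector3,
        // 24 = two Vector3s), assuming the overload
        // InputElement(name, index, format, offset, slot).
        public static InputElement[] InputElementsExplicit = new[]
        {
            new InputElement("POSITION", 0, Format.R32G32B32_Float,     0, 0),
            new InputElement("NORMAL",   0, Format.R32G32B32_Float,    12, 0),
            new InputElement("COLOR",    0, Format.R32G32B32A32_Float, 24, 0)
        };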

    Read the article

  • Problems loading Hilva tutorials

    - by Beska
    I'm a newcomer to XNA, and I'm evaluating some libraries. The Hilva Graphics Engine looks interesting, and I'm trying to run its tutorials. However, all of them give me errors. For example, if I download the ParallaxMappingSample demo and try to build it, I get:

        Error 1  Error loading pipeline assembly
        "C:\Users\Me\Desktop\ParallaxMappingSample\Hilva.Content.dll".  ParallaxMappingSample

    I get similar errors for all of the samples. Unfortunately, this error isn't very enlightening. I can see Hilva.Content.dll in the appropriate directory. I tried removing and re-adding the reference from the content project, but I get the same error. I'm not sure it's relevant, but I'm on Windows 7, using Microsoft Visual Studio 2010 and XNA 4.0. Is there an easy (or difficult) solution? EDIT: If you happen to try this, even if you don't have a solution, let me know in a comment. Whether it works for you or you get the same problem, either result would help me figure out whether it's a problem with the tutorial or on my end.

    Read the article

  • What is the simplest way to render video into memory (for drawing to a texture) in .NET?

    - by sebf
    In my project I would like to be able to play back video on surfaces in the world. I intend to do this by having the video frames rendered to a block of memory, then using that to update a texture each frame. Everything is in place except for the part that actually gets the video. I have looked on Google and found that the video-library world is very expansive (and geared towards video processing), and I am having trouble finding a suitable one. FFmpeg is very comprehensive, but it is an entire suite and would take a good amount of work to integrate. So far the most promising library I've found is the one based on the VLC player libraries: by virtue of using the same resources as VLC Player it is known to be very capable, and it also renders to blocks of memory, but the API (at least of the one on CodePlex) is more a port of the C++ API than a managed wrapper. The 'solution' can be any wrapper/API/library, but with characteristics that make it suitable for use in a rendering engine, namely:

    • Renders the video frame data to memory, so it can be picked up and passed to a texture on the GPU easily.
    • Super simple: all that is needed is a way to load, seek, and render a frame programmatically; ideally it would use the system's codecs and not require an assortment of plugins.
    • Permissive license (LGPL or freer).
    • .NET bindings at least; all the better if it is natively managed.

    Can anyone suggest a lightweight .NET library that can take a video file and spit out some frames into a byte[]?
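
    Whichever decoder ends up supplying the frames, the engine side reduces to copying decoded bytes into a texture once per frame. A minimal XNA-style sketch, where GetNextFrame is a hypothetical stand-in for the chosen library, not a real API:

        // One reusable texture and buffer; a 32-bit colour frame is
        // width * height * 4 bytes.
        Texture2D videoTexture;
        byte[] frameBuffer;

        void InitVideo(GraphicsDevice device, int width, int height)
        {
            videoTexture = new Texture2D(device, width, height, false,
                                         SurfaceFormat.Color);
            frameBuffer = new byte[width * height * 4];
        }

        void UpdateVideoTexture()
        {
            // Hypothetical decoder call filling frameBuffer with RGBA bytes.
            if (GetNextFrame(frameBuffer))
                videoTexture.SetData(frameBuffer); // texture must be unbound
        }

        // Decoder-specific placeholder: fills 'buffer' with one frame and
        // returns false when no new frame is ready.
        bool GetNextFrame(byte[] buffer) { /* decoder-specific */ return false; }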

    Read the article

  • HTML5 Canvas A* Pathfinding

    - by Veyha
    I am trying to learn A* pathfinding. The library I am using is https://github.com/qiao/PathFinding.js. But there is one thing I don't understand how to do: I need to find a path from player.x/player.y (both are 0) to 10/10. This code returns an array of the cells I need to move through:

        var path = finder.findPath(player.x, player.y, 10, 10, grid);

    I get the array, but how do I apply it to my player.x and player.y? Its structure is a list of [x, y] pairs, like [[0, 0], [1, 0], ...]. Thanks.
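
    Applying the result is language-neutral: keep an index into the path and move toward the current waypoint each frame, advancing the index when it's reached. A sketch in C# (assuming one grid cell equals one world unit):

        // Move 'pos' along the waypoints at 'speed' units per second;
        // returns the new position. currentWaypoint persists between calls.
        int currentWaypoint = 0;

        Vector2 FollowPath(Vector2 pos, IList<Vector2> path, float speed, float dt)
        {
            if (currentWaypoint >= path.Count) return pos; // path finished

            Vector2 target = path[currentWaypoint];
            Vector2 toTarget = target - pos;
            float dist = toTarget.Length();
            float step = speed * dt;

            if (dist <= step) // reached: snap and aim for the next cell
            {
                currentWaypoint++;
                return target;
            }
            return pos + toTarget / dist * step;
        }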

    Read the article

  • RasterizerState set to null after calling DrawText in Nuclex

    - by ProgrammerAtWork
    I have the following code in XNA:

        // class members
        Text t1;
        Text t2;
        Text t3;

        // init -- DebugFont24 is a size 24 vector font
        t1 = MM.DebugFont24.Fill("hello");
        t1 = MM.DebugFont24.Extrude("hello");
        t2 = MM.DebugFont24.Fill("hello");
        t2 = MM.DebugFont24.Extrude("hello");
        t3 = MM.DebugFont24.Fill("hello");
        t3 = MM.DebugFont24.Extrude("hello");

        // draw
        TextBatch test = new TextBatch(MM.GD);
        test.DrawText(t1, Color.Red);
        test.DrawText(t2, Color.Red);
        test.DrawText(t3, Color.Red);
        test.End();

    After the second call to the TextBatch, the RasterizerState of the GraphicsDevice is set to null, but I don't get any runtime errors or any indication that something is wrong. Is this supposed to happen? Or am I doing something wrong? I've since discovered that this happened because culling was set to None when I was rendering textures.
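
    In XNA 4 a common defensive habit after any third-party batch (a general pattern, not something from the Nuclex documentation) is to restore known device state before drawing anything that depends on it:

        // Restore a known-good state; these are XNA 4's built-in state
        // objects, so nothing is allocated per frame.
        GraphicsDevice device = MM.GD;
        device.RasterizerState   = RasterizerState.CullCounterClockwise;
        device.BlendState        = BlendState.Opaque;
        device.DepthStencilState = DepthStencilState.Default;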

    Read the article

  • Wall avoidance steering

    - by Vodemki
    I'm making a small steering simulator using the Reynolds boid algorithm. Now I want to add a wall-avoidance feature. My walls are in 3D and defined using two points, like this:

        ---------. P2
                 |
                 |
        P1 .---------

    My agents have a velocity, a position, etc. Could you tell me how to implement avoidance for my agents?

        Vector2D ReynoldsSteeringModel::repulsionFromWalls()
        {
            Vector2D force;
            vector<Wall *> wallsList = walls();
            Point2D pos = self()->position();
            Vector2D velocity = self()->velocity();
            for (unsigned i = 0; i < wallsList.size(); i++)
            {
                // TODO
            }
            return force;
        }

    I then take all the forces returned by my boid functions and apply them to my agent. I just need to know how to do this for my walls. Thanks for your help.
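
    One standard formulation (sketched here in C#; the projection math drops straight into the C++ loop above): treat each wall as a segment, find the closest point on the segment to the agent, and push the agent away from that point with a strength that falls off with distance:

        // Repulsion from the wall segment (p1, p2). 'range' is how far the
        // wall's influence reaches; 'strength' scales the force. Both are
        // illustrative values that need tuning.
        static Vector2 WallRepulsion(Vector2 pos, Vector2 p1, Vector2 p2,
                                     float range, float strength)
        {
            Vector2 seg = p2 - p1;
            float t = Vector2.Dot(pos - p1, seg) / seg.LengthSquared();
            t = Math.Clamp(t, 0f, 1f);              // stay on the segment
            Vector2 closest = p1 + t * seg;

            Vector2 away = pos - closest;
            float dist = away.Length();
            if (dist >= range || dist == 0f) return Vector2.Zero;

            // Linear falloff: full strength at the wall, zero at 'range'.
            return away / dist * strength * (1f - dist / range);
        }

    Summing this over all walls gives the force the TODO is meant to fill in.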

    Read the article

  • Is this an effective, simple way to display a moving image in SDL2?

    - by user36324
    I've looked around for tutorials on SDL2, but there aren't many, so I'm curious: I was messing around, and is this an effective way to move an image? One problem is that it drags the image along the path it moves through.

        #include "SDL.h"
        #include "SDL_image.h"

        int main(int argc, char* argv[])
        {
            bool exit = false;
            SDL_Init(SDL_INIT_EVERYTHING);
            SDL_Window *win = SDL_CreateWindow("Hello World!", 100, 100, 640, 480,
                                               SDL_WINDOW_SHOWN);
            SDL_Renderer *ren = SDL_CreateRenderer(win, -1,
                SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC);

            SDL_Surface *png = IMG_Load("character.png");
            SDL_Rect src;
            src.x = 0;   src.y = 0;
            src.w = 161; src.h = 159;
            SDL_Rect dest;
            dest.x = 50;  dest.y = 50;
            dest.w = 161; dest.h = 159;
            SDL_Texture *tex = SDL_CreateTextureFromSurface(ren, png);
            SDL_FreeSurface(png);

            while (exit == false) // NB: nothing here ever sets exit to true --
            {                     // the loop does no event polling
                dest.x++;
                SDL_RenderClear(ren);
                SDL_RenderCopy(ren, tex, &src, &dest);
                SDL_RenderPresent(ren);
            }

            SDL_Delay(5000);
            SDL_DestroyTexture(tex);
            SDL_DestroyRenderer(ren);
            SDL_DestroyWindow(win);
            SDL_Quit();
            return 0;
        }

    Read the article

  • Figuring out which object is closer to a certain point?

    - by user1157885
    I'm trying to create fog of war. I have the visual effect created, but I'm not sure how to handle hiding other players while they're within the fog of war. Right now, what I'm trying to do is: if another player is hiding behind a wall, don't render that player. I was thinking of casting a ray in the direction of each player, building a list of all the obstacles that ray collides with, and then figuring out whether an obstacle is closer than the player. But then I realized I'm not really sure how to determine whether the obstacle is in fact closer, because I have to account for all the dimensions, so I'm kind of stuck. First of all, is this approach the correct way to go about it? And secondly, how would I calculate whether the obstacle is in fact closer, taking X, Y and Z into account? Thanks
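
    The distance comparison itself needs nothing beyond squared lengths, which also avoids square roots; a small sketch, assuming the ray test already yields a hit point per obstacle:

        // True if the obstacle's hit point lies closer to the viewer than
        // the player does, i.e. the obstacle blocks the view.
        static bool ObstacleBlocksView(Vector3 viewer, Vector3 obstacleHit,
                                       Vector3 playerPos)
        {
            float obstacleDistSq = (obstacleHit - viewer).LengthSquared();
            float playerDistSq   = (playerPos   - viewer).LengthSquared();
            return obstacleDistSq < playerDistSq;
        }

    Since both points are compared along the same ray, comparing squared distances accounts for X, Y and Z at once.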

    Read the article

  • What should I do if my text exceeds my text render target boundaries?

    - by user1423893
    I have a method for drawing strings in 3D that does the following:

    • Set a render target
    • Draw each character as a quadrangle, using an orthographic projection, to the render target
    • Unset the render target
    • Draw the render target texture using a perspective projection and a world transform

    My problem is how to deal with strings whose rendered length exceeds the render target dimensions. For example, if I have the string "This is a reallllllllllly long string" and the render target can't accommodate it, it will only capture "This is a realllll". The render target (and its size) could be set each frame, but wouldn't that be far too costly?
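
    One pattern that avoids both truncation and a per-frame allocation (a sketch, not the only answer): measure the string first, then allocate a render target rounded up to a size bucket and cache it, so a new target is only created when the text outgrows the old one:

        // MeasureString is a hypothetical helper returning the pixel size
        // of the laid-out character quads.
        RenderTarget2D textTarget;

        RenderTarget2D GetTextTarget(GraphicsDevice device, string text)
        {
            Point needed = MeasureString(text);
            int w = NextPowerOfTwo(needed.X);
            int h = NextPowerOfTwo(needed.Y);

            // Reallocate only when the cached target is too small.
            if (textTarget == null || textTarget.Width < w || textTarget.Height < h)
            {
                if (textTarget != null) textTarget.Dispose();
                textTarget = new RenderTarget2D(device, w, h);
            }
            return textTarget;
        }

        static int NextPowerOfTwo(int v)
        {
            int p = 1;
            while (p < v) p <<= 1;
            return p;
        }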

    Read the article

  • Drawing a textured line between two vectors in XNA WP7

    - by Krav
    I want to create a simple graph maker in WP7. The goal is to draw a textured line between two vectors that the user defines by touch. I have already implemented the rotation and it works, but not correctly: it doesn't take the line texture's size into account, so there are far too many overlapping textures. It does draw the line, but with too many stamps. How could I calculate this correctly? Here is the code:

        public void DrawLine(Vector2 st, Vector2 dest, NodeUnit EdgeParent, NodeUnit EdgeChild)
        {
            float d = Vector2.Distance(st, dest);
            float rotate = (float)(Math.Atan2(st.Y - dest.Y, st.X - dest.X));
            direction = new Vector2((dest.X - st.X) / (float)d,
                                    (dest.Y - st.Y) / (float)d);
            Vector2 _pos = st;
            World.TheHive.Add(new LineHiveMind(linetexture, _pos, rotate,
                                               EdgeParent, EdgeChild,
                                               new List<LineUnit>()));
            for (int i = 0; i < d; i++)
            {
                World.TheHive.Last()._lines.Add(new LineUnit(linetexture, _pos,
                                                             rotate, EdgeParent,
                                                             EdgeChild));
                _pos += direction;
            }
        }

    Here d is the distance between st (the starting node) and dest (the destination node), rotate is the rotation, direction is the normalized direction from start to destination, and _pos is the position stepped along the line. Thanks for any suggestions/help!
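
    A standard XNA idiom replaces the stamping loop entirely: draw the texture once and let SpriteBatch stretch it to the line's length through the scale parameter (this is an alternative to fixing the step size):

        // One stretched sprite from st to dest; 'thickness' is the desired
        // line height in pixels.
        void DrawLine(SpriteBatch spriteBatch, Texture2D texture,
                      Vector2 st, Vector2 dest, float thickness, Color color)
        {
            float length = Vector2.Distance(st, dest);
            float rotation = (float)Math.Atan2(dest.Y - st.Y, dest.X - st.X);

            // Pivot at the texture's left-middle so the sprite rotates
            // around the start point; scale stretches it to full length.
            Vector2 origin = new Vector2(0f, texture.Height / 2f);
            Vector2 scale  = new Vector2(length / texture.Width,
                                         thickness / texture.Height);

            spriteBatch.Draw(texture, st, null, color, rotation, origin,
                             scale, SpriteEffects.None, 0f);
        }

    If the stamping approach is kept instead, the fix implied by the question is to step by the texture's size along the line rather than by one pixel.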

    Read the article

  • Best approach to get clicked objects from a display list (2D)

    - by Ixx
    I'm implementing a display list to manage my visuals on screen. I want to know which object was clicked. My objects already have a z-order variable. With my current knowledge (almost nothing), the only thing that comes to mind is a linear search: collect all the objects that contain the clicked point, then select the one with the highest z-order. But I know there are far better approaches. I think it's something with trees (binary search?) - container display objects searched recursively? I just don't know where to start looking for this concrete case. Any hint, link, or concrete solution is welcome.
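
    For typical UI object counts the linear scan is fine as long as it respects z-order; spatial trees (quadtrees and the like) only start paying off with very large scenes. A sketch, assuming each object exposes Bounds and ZOrder:

        // Topmost object under the click, or null if nothing was hit.
        DisplayObject Pick(IEnumerable<DisplayObject> objects, Point click)
        {
            return objects
                .Where(o => o.Bounds.Contains(click))
                .OrderByDescending(o => o.ZOrder)
                .FirstOrDefault();
        }

    For container display objects, the same test recurses: hit-test the container's bounds first, and only descend into its children when that passes.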

    Read the article

  • How to prevent "underwater sight" in games

    - by CPP_Person
    In many games where the player can go underwater, when you look so that the top half of the screen is in the air and the bottom half is in the water, it's almost as if the water doesn't exist and the player is just... flying slowly with water sounds. Is there a logical way to solve this? An algorithm? No solution seems to have become standard yet, since many games still have this. I don't want to make the same mistake.

    Read the article

  • How can I get textures on the edges of walls like in Super Metroid and Aquaria?

    - by meds
    Games like Super Metroid and Aquaria present terrain whose outward-facing parts carry rocks and other detail, while deeper behind them (i.e. underground) there's different detail, or just black. I would like to do something similar using polygons. Terrain in my current level is created as a set of overlapping square boxes. I'm not sure this rendering method will work with such a system for creating terrain, but if anyone has ideas I'd love to hear them. Otherwise I'd like to know how I should rewrite the terrain rendering system so that it can draw terrain in this manner...
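
    One common way to get this effect (a sketch assuming a tile/cell view of the terrain rather than the overlapping boxes described above; the textures and helpers are hypothetical): fill every solid cell with the dark interior texture, then overlay an edge texture on each side that faces empty space:

        // Grid sketch: interior fill everywhere, edge strips only where a
        // solid cell borders an empty one. Y grows downward here.
        void DrawTerrain(int width, int height)
        {
            for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
            {
                if (!IsSolid(x, y)) continue;
                DrawQuad(interiorTexture, x, y);

                if (!IsSolid(x, y - 1)) DrawQuad(topEdgeTexture, x, y);
                if (!IsSolid(x, y + 1)) DrawQuad(bottomEdgeTexture, x, y);
                if (!IsSolid(x - 1, y)) DrawQuad(leftEdgeTexture, x, y);
                if (!IsSolid(x + 1, y)) DrawQuad(rightEdgeTexture, x, y);
            }
        }

    The same neighbour test can drive geometry generation instead of drawing: emit an edge strip for any box face that no other box overlaps.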

    Read the article

  • In what kind of variable type is the player position stored in an MMORPG such as WoW?

    - by jokoon
    I even heard J. Carmack talk briefly about it... How can software track a player's position so accurately, in such a huge world, without loading screens between zones, and at multiplayer scale? How is the data formatted when it passes through the netcode? I can understand how vertices are stored in the graphics card's memory, but when it comes to synchronizing the multiplayer state, I can't imagine what is best.

    Read the article

  • Material tiling and offset in Unity

    - by Simran kaur
    Ambiguity: What exactly is the difference between tiling a material and offsetting it?

    Need to do: I need the material to be repeated n times on the object, where I set the value of n via script. How do I do it? It seems to happen through tiling (tried via the Inspector), but then what is the difference between mainTextureOffset and SetTextureOffset?

    Tried: A line of code meant to repeat the texture n times across the width of the object (the snippet itself is not included here), but it does nothing significant that I can see.
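
    Tiling multiplies the UVs (how many times the texture repeats across the surface); offset shifts them. mainTextureScale and mainTextureOffset are shorthand for SetTextureScale and SetTextureOffset on the "_MainTex" property. A minimal sketch for the repeat-n-times requirement:

        using UnityEngine;

        // Repeats the material's main texture n times across the object.
        public class RepeatTexture : MonoBehaviour
        {
            public int n = 4;

            void Start()
            {
                Material mat = GetComponent<Renderer>().material;
                mat.mainTextureScale = new Vector2(n, 1); // tile n times in X
                // mat.mainTextureOffset would shift the texture, not tile it.
            }
        }

    Note that the texture's wrap mode must be Repeat (not Clamp) for the tiling to actually show.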

    Read the article

  • Port OpenGL 2.x to OpenGL 3.x

    - by user46759
    I'm trying to port the OpenCloth example to OpenGL 3.x. I've mostly done it for the shaders, but I'm not sure about this part:

        glEnableClientState(GL_VERTEX_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, vboID);
        glVertexPointer(4, GL_FLOAT, 0, 0);

        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, vboTexID);
        glTexCoordPointer(2, GL_FLOAT, 0, 0);

        glEnableClientState(GL_NORMAL_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, vboNormID);
        glNormalPointer(GL_FLOAT, sizeof(float) * 4, 0);

    Maybe glEnableVertexAttribArray somewhere? Any clue? Thanks. EDIT: Maybe something like this?

        glEnableVertexAttribArray(2); // or glEnableVertexAttribArray(positionIndex);
        glBindBuffer(GL_ARRAY_BUFFER, vboTexID);
        glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 0, 0);

        glEnableVertexAttribArray(3); // or glEnableVertexAttribArray(positionIndex);
        glBindBuffer(GL_ARRAY_BUFFER, vboNormID);
        glVertexAttribPointer(3, 4, GL_FLOAT, GL_FALSE, sizeof(float) * 4, 0);

    Read the article

  • Scene transitions

    - by Mars
    It's my first time working with actual scenes/states, i.e. DrawableGameComponents that work separately from one another. I'm now wondering what the best way is to make transitions between them, and how to affect them from other scenes.

    Let's say I wanted to "push" one scene to the right, with another one coming in at the same time. Naturally I'd have to keep drawing both until the transition is complete, and I'd have to adjust the coordinates I'm drawing at while doing it. Is there a way around specifically handling this special case in every single scene? Or if I wanted to fade one into the other: the question stays the same, how would you do that without having to handle it in every single scene? While writing this I'm realizing it's the same issue for all kinds of transitions.

    Maybe a central Draw method in the manager could be a solution, where parameters and effects are applied when necessary. But that wouldn't work for objects that have their own draw method and aren't drawn within the scene, or when an effect has to be applied to the whole scene. So maybe scenes should be drawn to their own render target? That way one call to the base class after the normal drawing could be enough to apply the effects while drawing the target to the main render target. But I once heard there are problems when switching from target to target, back and forth. Is that even a viable option?

    As you can see, I have some basic ideas of how it might work... but nothing specific. I'd like to learn what the common way is to achieve such things: a general way to apply all kinds of transitions.
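
    The render-target idea at the end of the question is workable in XNA 4, provided each scene is drawn to its target before drawing to the back buffer begins. A sketch of a manager-level draw for the "push" case, with the scene objects and targets assumed to exist:

        // 'progress' runs 0..1 over the transition. Each scene renders to
        // its own RenderTarget2D; the manager composites both with an
        // offset, so individual scenes never know about transitions.
        void DrawTransition(SpriteBatch spriteBatch, GraphicsDevice device,
                            float progress)
        {
            device.SetRenderTarget(oldSceneTarget);
            oldScene.Draw();
            device.SetRenderTarget(newSceneTarget);
            newScene.Draw();
            device.SetRenderTarget(null); // back to the back buffer

            int w = device.Viewport.Width;
            int offset = (int)(progress * w);

            spriteBatch.Begin();
            spriteBatch.Draw(oldSceneTarget, new Vector2(-offset, 0), Color.White);
            spriteBatch.Draw(newSceneTarget, new Vector2(w - offset, 0), Color.White);
            spriteBatch.End();
        }

    A fade is the same skeleton: draw both targets at the origin and modulate with Color.White * (1 - progress) and Color.White * progress.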

    Read the article

  • Rotate an OpenGL mesh relative to the camera

    - by shuall
    I have a cube in OpenGL. Its position is determined by multiplying its model matrix, the view matrix, and the projection matrix, and passing the result to the shader, as per this tutorial (http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/). I want to rotate it relative to the camera. The only way I can think of to get the correct axis is to multiply the inverse of the model matrix (because that's where all the previous rotations and transforms are stored) by the view matrix and then by the axis of rotation (x or y). I feel like there has to be a better way to do this, using something other than the model, view, and projection matrices, or maybe I'm doing something wrong. That's what all the tutorials I've seen use. PS: I'm also trying to stick to OpenGL 4 core. EDIT: If quaternions would fix my problems, could someone point me to a good tutorial/example for switching from 4x4 matrices to quaternions? I'm a little daunted by the task.
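
    One way to frame it (a sketch with System.Numerics types; glm::quat plays the same role in OpenGL 4 core C++): a camera-relative rotation is a rotation about a camera axis expressed in world space. Transforming the camera axis by the inverse view matrix gives that world-space axis, and keeping the orientation as a quaternion makes appending the rotation cheap, which also speaks to the EDIT:

        // Orientation as a quaternion, position separate. To spin the object
        // about a camera axis (say the camera's up), move that axis into
        // world space and append the rotation after the current orientation.
        Quaternion orientation = Quaternion.Identity;
        Vector3 position;

        void RotateAboutCameraAxis(Matrix4x4 view, Vector3 cameraAxis, float angle)
        {
            Matrix4x4.Invert(view, out Matrix4x4 invView);
            Vector3 worldAxis = Vector3.Normalize(
                Vector3.TransformNormal(cameraAxis, invView));

            Quaternion delta = Quaternion.CreateFromAxisAngle(worldAxis, angle);
            // Existing orientation first, world-space delta second.
            orientation = Quaternion.Concatenate(orientation, delta);
        }

        Matrix4x4 ModelMatrix() =>
            Matrix4x4.CreateFromQuaternion(orientation)
            * Matrix4x4.CreateTranslation(position);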

    Read the article

  • Low CPU/Memory/Memory-bandwidth Pathfinding (maybe like in Warcraft 1)

    - by Valmond
    Dijkstra and A* are all nice and popular, but what kind of algorithm was used in Warcraft 1 for pathfinding? I remember that the enemy could get trapped in bowl-like caverns, which means there were (most probably) no full-path calculations from start to end. If I recall correctly, the algorithm went something like this:

    A) Move towards the enemy until you reach them or hit a wall.
    B) If blocked by a wall, follow the wall until you can move towards the enemy without being blocked, then do A).

    But I'd like to know, if someone knows :-) [edit] As explained to Byte56, I'm looking for a low CPU/memory/memory-bandwidth algorithm, and I wanted to know whether Warcraft had some special secret to deliver (I've never seen that kind of pathfinding anywhere else). I hope that makes the question more in keeping with the Stack Exchange rules.
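
    The two steps described match what robotics texts call a "bug" algorithm. A stateless grid sketch of the idea (IsBlocked is a hypothetical occupancy test; four directions for brevity): try the step that most reduces the distance to the target, and when it's blocked, turn progressively further around. Because it keeps no wall-following state it can jitter or stall in concave pockets, which is consistent with units getting trapped in bowl-like caverns:

        // Directions N, E, S, W with y growing downward.
        static readonly (int dx, int dy)[] Dirs =
            { (0, -1), (1, 0), (0, 1), (-1, 0) };

        static (int x, int y) Step((int x, int y) pos, (int x, int y) goal)
        {
            // Preferred heading: the axis step that closes the most distance.
            int want = Math.Abs(goal.x - pos.x) >= Math.Abs(goal.y - pos.y)
                ? (goal.x > pos.x ? 1 : 3)   // E or W
                : (goal.y > pos.y ? 2 : 0);  // S or N

            // Direct step first, then rotate clockwise through the options.
            for (int turn = 0; turn < 4; turn++)
            {
                int d = (want + turn) % 4;
                int nx = pos.x + Dirs[d].dx, ny = pos.y + Dirs[d].dy;
                if (!IsBlocked(nx, ny)) return (nx, ny);
            }
            return pos; // fully enclosed
        }

    Per step this touches at most four cells - no open list, no per-node scores - which fits the low CPU/memory/bandwidth constraint.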

    Read the article

  • How to calculate continuous motion with angular velocity in 2D

    - by Rulk
    I'm really new to physics. Maybe someone can help me solve the following problem: I need to calculate the position of an agent on a plane (2D) at the next time step, where the time step is large (20+ seconds). What I know about the agent's motion:

    • Initial position
    • Direction (normalized vector)
    • Velocity (a linear function of time) - the object always moves along its direction
    • Angular velocity (a linear function of time)
    • Optional: external force direction
    • Optional: external force magnitude (a linear function of time)

    Running a discrete simulation with t → 0 (many tiny steps) is not an option.
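
    For the special case where both speed v and angular velocity ω are constant over an interval, the path is a circular arc with an exact closed form; when they vary linearly, the integral no longer has a simple elementary form, so a practical compromise is a few exact arc substeps across the 20 s window, sampling v(t) and ω(t) piecewise. A sketch (heading θ measured from the +x axis, ω in rad/s):

        // Exact arc step: advance (pos, theta) by dt with v and omega held
        // constant. No accumulation error within the step, so a long
        // interval needs only a handful of substeps.
        static (Vector2 pos, float theta) ArcStep(Vector2 pos, float theta,
                                                  float v, float omega, float dt)
        {
            if (Math.Abs(omega) < 1e-6f) // straight-line limit
                return (pos + v * dt * new Vector2(MathF.Cos(theta),
                                                   MathF.Sin(theta)), theta);

            float theta1 = theta + omega * dt;
            float r = v / omega; // signed turn radius
            Vector2 delta = new Vector2(r * (MathF.Sin(theta1) - MathF.Sin(theta)),
                                        r * (MathF.Cos(theta) - MathF.Cos(theta1)));
            return (pos + delta, theta1);
        }

    A constant external force adds the usual (1/2) a t² displacement per substep if it is treated as an acceleration.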

    Read the article

  • Getting the front buffer into a gfx mem surface (Dx9)

    - by lapin
    I'm using DirectX 9 to acquire the front buffer. There are a couple of ways I know of to get at it: GetRenderTargetData() and GetFrontBufferData(). The MSDN pages for both of these API calls state that the data is copied from device memory to system memory. I'd like to copy the front buffer surface directly to another surface in graphics memory, as I have other manipulations to perform on the acquired surface before returning it to system memory. I'm creating a D3DUSAGE_DYNAMIC texture (a texture in graphics memory) and calling GetFrontBufferData() to write the front buffer to its surface 0. Is this valid? Will the operation stay in graphics memory, or will the data have to move to system memory and back to graphics memory? If it's the latter, is what I'm trying to achieve possible at all?

    Read the article

  • Generating and rendering non-point-like particles on the GPU

    - by TravisG
    Specifically, I'm talking about particles as seen (for example) in the UE4 dev video here. They're not just points; they have a nice shape that seems to follow their movement. Is it possible to create these kinds of particles (efficiently) completely on the GPU (perhaps through something like motion?), or is the only (or most efficient) way to create a small particle texture and render small quads for each particle?

    Read the article
