Search Results

Search found 19182 results on 768 pages for 'game engine'.

Page 441/768

  • How to detect GLSL warnings?

    - by msell
    After compiling a shader with glCompileShader, I can call glGetShaderiv with GL_COMPILE_STATUS to check whether the shader compiled successfully. I can also call glGetShaderInfoLog to get information about possible errors, warnings or other info. The info log returned by this function is unspecified. In a tool where users can write their own shaders, I would like to print all errors and warnings from the compilation, but nothing if no warnings or errors were found. The problem is that GL_COMPILE_STATUS is false only if the compilation failed, and true otherwise. If no problems were found, some drivers return an empty info log from glGetShaderInfoLog, but others return something else, such as "No errors.", which I do not want to print to the user. How is this problem generally solved?
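
    A possible workaround, sketched in C++ (the "No errors" filter is a heuristic assumption, since the log format is driver-specific): always fetch the log, but only show it when compilation failed or the log contains something beyond a known all-clear string.

        GLint status = GL_FALSE, logLen = 0;
        glCompileShader(shader);
        glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
        glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &logLen);
        std::string log;
        if (logLen > 1) {
            log.resize(logLen);
            glGetShaderInfoLog(shader, logLen, nullptr, &log[0]);
        }
        // Treat an empty log or a known "success" phrase as nothing to report.
        bool trivial = log.empty() || log.find("No errors") != std::string::npos;
        if (status == GL_FALSE || !trivial) {
            reportToUser(log);   // hypothetical UI hook
        }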

    Read the article

  • NPOT texture and video memory usage

    - by Eonil
    I read in this Q&A that an NPOT texture takes as much memory as a texture of the next POT size up. If that is true, an NPOT texture gives no benefit over a properly managed POT texture (and may even be worse, since NPOT access can be slower). Is this true? Does an NPOT texture take and waste the same memory as a POT texture? I am considering NPOT textures for post-processing, so if they give no memory-space benefit, using them is pointless for me. The answer may differ per platform; I am targeting mobile devices, such as iPhones and Android phones. Do NPOT textures take the same amount of memory on mobile GPUs?
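
    For scale, a worked example of what padding would cost, assuming a driver that rounds each dimension of an RGBA8 texture up to the next POT internally (that behavior is the question's premise, not a given):

        // RGBA8 render target, 4 bytes per texel:
        //   1280 x 720 NPOT, stored as-is:  1280 * 720  * 4 = 3,686,400 bytes (~3.5 MB)
        //   padded up to 2048 x 1024 POT:   2048 * 1024 * 4 = 8,388,608 bytes (~8.0 MB)
        unsigned long long npotBytes = 1280ULL * 720 * 4;
        unsigned long long potBytes  = 2048ULL * 1024 * 4;   // next POT per dimension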

    Read the article

  • 3d Model Scaling With Camera

    - by spasarto
    I have a very simple 3D maze program that uses a first-person camera to navigate the maze. I'm trying to scale the blocks that make up the maze walls and floor so the corridors seem roomier to the camera. Every time I scale the model, the camera seems to scale with it, and the corridors always stay the same width. I've tried applying the scale to the model in the content pipeline (setting the Scale property of the model in the Properties window in VS). I've also tried applying the scale using Matrix.CreateScale(float) with the Scale-Rotate-Translate order, with the same result. If I leave the camera speed unchanged, the camera moves more slowly, so I know it's traversing a larger distance, but the world doesn't look larger; the camera just seems slower. I'm not sure which part of the code to include, since I don't know whether the issue is with my model, my camera, or something else. Any hints at what I'm doing wrong?

    Camera:

        Projection = Matrix.CreatePerspectiveFieldOfView( MathHelper.PiOver4,
            _device.Viewport.AspectRatio, 1.0f, 1000.0f );
        Matrix camRotMatrix = Matrix.CreateRotationX( _cameraPitch )
            * Matrix.CreateRotationY( _cameraYaw );
        Vector3 transCamRef = Vector3.Transform( _cameraForward, camRotMatrix );
        _cameraTarget = transCamRef + CameraPosition;
        Vector3 camRotUpVector = Vector3.Transform( _cameraUpVector, camRotMatrix );
        View = Matrix.CreateLookAt( CameraPosition, _cameraTarget, camRotUpVector );

    Model:

        World = Matrix.CreateTranslation( Position );
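
    One piece of math that may explain the symptom (a hedged guess, since the full code isn't shown): perspective projection is invariant under uniform scaling of the whole scene. A pinhole camera maps x to f*x/z, so if every position, including the camera's, is scaled by k, the image becomes f*(kx)/(kz) = f*x/z, i.e. unchanged. Scaling the maze only reads as "roomier" if the camera's position, eye height, movement speed and near/far planes stay in the original units while the geometry alone is scaled; if CameraPosition is derived from the scaled maze (e.g. placed at the centre of a scaled cell), the scale cancels out exactly as described.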

    Read the article

  • Grid Based Lighting in XNA/Monogame

    - by sm81095
    I know that questions like this have been asked many times, but I have not found one exactly like this yet. I have implemented a top-down grid-based world in Monogame and am starting on the lighting system soon. The way I want to do lighting is to have a grid that is 4 times wider and higher, basically splitting each world tile into a 4x4 set of "subtiles". I would like to use a flow-like system to spread light across the tiles, reducing the light by a small amount each step. This is the kind of effect I'm going for: http://i.imgur.com/rv8LCxZ.png (the black grid lines are the light grid, the red lines are the actual tile grid, and the light drop-off is heavily exaggerated). I plan to render the world by drawing the unlit grid to a separate RenderTarget2D, rendering the lighting grid to another target, and overlaying the two. Basically, my questions are: What would be the algorithm for a flow-style lighting system like this? Would there be a more efficient way of rendering it? How would I handle the darkening of the light with colors: by reducing the RGB values in each grid cell, or by reducing the alpha in each cell, assuming I render the light map over the grid using blending? And assuming the former is possible, what BlendState would I use for that?
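
    A sketch of one common flow-style algorithm (a breadth-first flood with linear falloff; the grid layout and the 0-255 light range are assumptions):

        #include <cstdint>
        #include <queue>
        #include <vector>

        struct Node { int x, y; uint8_t light; };

        void spreadLight(std::vector<uint8_t>& grid, int w, int h,
                         int srcX, int srcY, uint8_t srcLight, uint8_t falloff) {
            std::queue<Node> open;
            open.push({srcX, srcY, srcLight});
            while (!open.empty()) {
                Node n = open.front(); open.pop();
                if (n.x < 0 || n.y < 0 || n.x >= w || n.y >= h) continue;
                uint8_t& cell = grid[n.y * w + n.x];
                if (cell >= n.light) continue;      // already at least this bright
                cell = n.light;
                if (n.light <= falloff) continue;   // fully attenuated
                uint8_t next = n.light - falloff;   // linear drop-off per subtile
                open.push({n.x + 1, n.y, next});
                open.push({n.x - 1, n.y, next});
                open.push({n.x, n.y + 1, next});
                open.push({n.x, n.y - 1, next});
            }
        }

    Wall cells would simply refuse propagation (or apply a larger falloff). On the color question, multiplicative blending of an RGB light map over the unlit scene is the usual choice, since alpha-darkening can only fade everything toward a single overlay color.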

    Read the article

  • Player Movement DirectX

    - by SullY
    I'm reading a book about game development with C++ and DirectX 9. Something in it interests me: it says that player movement speed increases with the power of the CPU, because a faster CPU moves the player every frame and produces more frames (better CPU = higher FPS = faster movement). To get around this, it says you just have to multiply by the frame time (time * movementFactor). I'd like to know: is there another way to get around it?
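
    The main alternative is a fixed-timestep loop. A minimal sketch (assuming speed is expressed in units per second):

        const float kSpeed     = 5.0f;          // units per second
        const float kFixedStep = 1.0f / 60.0f;  // simulation tick
        float accumulator = 0.0f;
        float playerX     = 0.0f;

        void update(float frameSeconds) {
            // Advance the simulation in constant increments: movement is
            // identical on every machine, regardless of how fast it renders.
            accumulator += frameSeconds;
            while (accumulator >= kFixedStep) {
                playerX += kSpeed * kFixedStep;
                accumulator -= kFixedStep;
            }
        }

    A fixed step also makes physics deterministic, at the cost of needing interpolation between ticks for perfectly smooth rendering.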

    Read the article

  • How does this circle collision detection math work?

    - by Griffin
    I'm going through the wildbunny blog to learn about collision detection. I'm confused about how the vectors he's talking about come into play. Here's the part that confuses me:

        p = ||A-B|| - (r1+r2)

    "The two spheres are penetrating by distance p. We would also like the penetration vector so that we can correct the penetration once we discover it. This is the vector that moves both circles to the point where they just touch, correcting the penetration. Importantly, it is not just any vector that does this; it is the only vector which corrects the penetration by moving the minimum amount. This is important because we only want to correct the error, not introduce more by moving too much when we correct, or too little."

        N = (A-B) / ||A-B||
        P = N*p

    "Here we have calculated the normalised vector N between the two centres, and the penetration vector P, by multiplying our unit direction by the penetration distance."

    I understand that p is the distance by which the circles penetrate, but I don't get what exactly N and P are. It seems to me N is just the coordinates of the third point of the right triangle formed by points A and B, with (A-B) then divided by the hypotenuse of that triangle, i.e. the distance between A and B (||A-B||). What's the significance of this? Also, what is the penetration vector used for? It seems to me like a movement that one of the circles would perform to get un-penetrated.
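
    Turning the quoted math into a minimal C++ sketch (2D, with A and B as the circle centres) may make the roles concrete: N is the unit direction from B to A, and P is that direction stretched to the overlap depth, which is exactly the smallest push that separates the circles.

        #include <cmath>

        struct Vec2 { float x, y; };

        // Returns true if the circles overlap and fills `P` with the minimum
        // translation that separates circle A from circle B.
        bool penetration(Vec2 A, float r1, Vec2 B, float r2, Vec2& P) {
            Vec2 d{A.x - B.x, A.y - B.y};                  // A - B
            float dist = std::sqrt(d.x * d.x + d.y * d.y); // ||A - B||
            float p = dist - (r1 + r2);                    // negative when overlapping
            if (p >= 0.0f || dist == 0.0f) return false;
            Vec2 N{d.x / dist, d.y / dist};                // unit direction B -> A
            P = {N.x * -p, N.y * -p};                      // move A by P to separate
            return true;
        }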

    Read the article

  • Nice function for "rolling score up"?

    - by bobobobo
    I'm adding to the player's score, and I'm using a per-frame formula like:

        int score, displayedScore;  // score is the ACTUAL score the player has;
                                    // displayedScore is what is shown this frame
                                    // (the creeping/"rolling" number)

        float disparity = score - displayedScore;
        int d = disparity * .1f;            // add 1/10 of the difference
        if( !d ) d = signum( disparity );   // last 10 go by 1's
        displayedScore += d;

    where

        inline int signum( float val ) {
            if( val > 0 ) return 1;
            else if( val < 0 ) return -1;
            else return 0;
        }

    So it kind of works: it makes big changes rapidly, then creeps in the last few one at a time. But I'm looking for better (or possibly well-known?) score-creeping functions. Any ideas?
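
    One well-known family is the frame-rate-independent exponential ease-out (a sketch; the rate constant is an arbitrary choice): move a fixed fraction of the remaining gap per unit of time, then snap when close.

        #include <cmath>

        float displayed = 0.0f;   // value shown to the player
        int   actual    = 0;      // real score

        void updateScore(float dt) {             // dt = seconds this frame
            const float kRate = 5.0f;            // higher = snappier roll
            displayed += (actual - displayed) * (1.0f - std::exp(-kRate * dt));
            if (std::fabs(actual - displayed) < 0.5f)
                displayed = float(actual);       // settle exactly on the score
        }

    Unlike a fixed per-frame fraction, the exp() form rolls at the same visible speed at 30 and 144 FPS.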

    Read the article

  • How do I teach my artist to make art for games?

    - by Holm76
    So my girlfriend is an artist and I'm a programmer, and we often talk about joining talents and doing some small games or other fun stuff for the popular platforms currently out. But because I haven't done any serious game development yet, I have a hard time explaining to her how she should create or package the assets she'd make, so we always end up doing nothing about it. What I'm mostly thinking about here is frame-by-frame animation. I know sprite sheets are used for this kind of thing, but then come questions like frames per second and things like that; not program-wise, but art-wise. Is there a reference site (or sites) out there that teaches someone with art skills how to manage and arrange assets in sprite sheets, and other such things, in words that artists understand?
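
    For the sprite-sheet conversation specifically, it may help to show her the handful of numbers a programmer actually needs. A minimal sketch (the layout and names are illustrative assumptions):

        // Frames laid out left-to-right in one row, all the same size.
        int frameFor(float elapsedSeconds, float framesPerSecond, int frameCount) {
            return int(elapsedSeconds * framesPerSecond) % frameCount;
        }
        // Source rectangle inside the sheet for the current frame:
        //   srcX = frameFor(t, fps, n) * frameWidth;  srcY = 0;

    In other words, the deliverable is just: equally sized frames, a consistent origin/anchor point, and an intended playback rate.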

    Read the article

  • Dynamic content reloading

    - by Kikaimaru
    Is there a relatively simple way to dynamically reload content files (e.g. effect files)? I know I can do the following:

    1. Detect a change to the file
    2. Run the content pipeline to rebuild that specific file
    3. Unload ALL content that was loaded
    4. Load all content

    and use double references to reference content files. The problem is with step 3 (and step 2 isn't that nice either). I need to unload everything because if I have a model Hero.x which references the effect Model.fx, and I change the Model.fx file, I need to reload the Hero.x file, which will then call LoadExternalReference on Model.fx. So I guess the question is: did someone manage to make this work without rewriting the whole ContentManager (and every ContentReader) and tracking calls to LoadExternalReference?
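
    The usual escape from step 3 is reverse-dependency tracking, so only the changed asset and everything that references it reload. A language-agnostic sketch (in C++ rather than XNA; the names are illustrative):

        #include <string>
        #include <unordered_map>
        #include <unordered_set>
        #include <vector>

        // dependents["Model.fx"] = { "Hero.x", ... }, recorded whenever an
        // asset loads an external reference.
        std::unordered_map<std::string, std::unordered_set<std::string>> dependents;

        std::vector<std::string> reloadSet(const std::string& changed) {
            std::vector<std::string> order;
            std::unordered_set<std::string> seen;
            std::vector<std::string> stack{changed};
            while (!stack.empty()) {
                std::string a = stack.back(); stack.pop_back();
                if (!seen.insert(a).second) continue;   // already queued
                order.push_back(a);
                for (const auto& dep : dependents[a]) stack.push_back(dep);
            }
            return order;   // unload and rebuild exactly these assets
        }

    In XNA terms this still means intercepting LoadExternalReference to record the edges, but it avoids rewriting every ContentReader and reloading the whole world.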

    Read the article

  • Scan-Line Z-Buffering Dilemma

    - by Belgin
    I have a set of vertices in 3D space, and for each I retain the following information: its 3D coordinates (x, y, z), and a list of pointers to the other vertices it's connected to by edges. Right now, I'm doing a perspective projection with the projection plane being XY and the eye placed somewhere at (0, 0, d), with d < 0. For Z-buffering, I need to find the depth of the point of a polygon (they're all planar) which corresponds to a certain pixel on the screen, so I can hide the surfaces that are not visible. My questions are the following: How do I determine which polygon a pixel belongs to, so I can use the equation of the plane containing that polygon to find the Z-coordinate? Are my data structures correct, or do I need to store something else entirely for this to work? At the moment I'm just projecting the vertices onto the projection plane and joining them with lines based on the pointer lists.
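
    The classic answer inverts the question (a sketch, under the assumption that each polygon's plane coefficients in ax + by + cz + d = 0 are known): you don't ask which polygon a pixel belongs to; you rasterize each polygon in turn, evaluate its depth at every covered pixel, and let the Z-buffer keep the nearest.

        // Depth of the polygon's plane under screen point (x, y);
        // assumes c != 0, i.e. the polygon is not viewed edge-on.
        float zAt(float a, float b, float c, float d, float x, float y) {
            return -(a * x + b * y + d) / c;
        }

        // for each polygon P:
        //     for each pixel (x, y) inside P's projected outline:
        //         float z = zAt(P.a, P.b, P.c, P.d, x, y);
        //         if (z closer than zbuffer[y][x]) {
        //             zbuffer[y][x] = z;  frame[y][x] = P.color;
        //         }

    The data structure this implies is a polygon (face) list on top of the vertex and edge lists, since depth lives on faces, not edges.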

    Read the article

  • Bukkit inventory saving: crashing somewhere

    - by HcgRandon
    I'm working on a command for a bukkit plugin that lets you transfer worlds. In the section about saving the player's inventory, I'm getting a runtime error. My question is: why is the error happening, and how can I prevent it?

    The plugin code

        public void savePlayerInv(Player p, World w) {
            File playerInvConfigFile = new File(plugin.getDataFolder() + File.separator
                    + "players" + File.separator + p.getName(), "inventory.yml");
            FileConfiguration pInv = YamlConfiguration.loadConfiguration(playerInvConfigFile);
            PlayerInventory inv = p.getInventory();
            int i = 0;
            for (ItemStack stack : inv.getContents()) {
                // increment integer
                i++;
                String startInventory = w.getName() + ".inv." + Integer.toString(i);
                // save inv
                pInv.set(startInventory + ".amount", stack.getAmount());
                pInv.set(startInventory + ".durability", Short.toString(stack.getDurability()));
                pInv.set(startInventory + ".type", stack.getTypeId());
                //pInv.set(startInventory + ".enchantment", stack.getEnchantments()); //TODO add enchant saving
            }
            i = 0;
            for (ItemStack armor : inv.getArmorContents()) {
                i++;
                String startArmor = w.getName() + ".armor." + Integer.toString(i);
                // save armor
                pInv.set(startArmor + ".amount", armor.getAmount());
                pInv.set(startArmor + ".durability", armor.getDurability());
                pInv.set(startArmor + ".type", armor.getTypeId());
                //pInv.set(startArmor + ".enchantment", armor.getEnchantments());
            }
            // save exp
            if (p.getExp() != 0) {
                pInv.set(w.getName() + ".exp", p.getExp());
            }
        }

    The offending line

    The stack trace complains about line 130, which is this one:

        pInv.set(startInventory + ".amount", stack.getAmount());

    The stack trace

        2012-03-21 13:23:25 [SEVERE] null
        org.bukkit.command.CommandException: Unhandled exception executing command 'wtp' in plugin Needs v1.0
            at org.bukkit.command.PluginCommand.execute(PluginCommand.java:42)
            at org.bukkit.command.SimpleCommandMap.dispatch(SimpleCommandMap.java:166)
            at org.bukkit.craftbukkit.CraftServer.dispatchCommand(CraftServer.java:461)
            at net.minecraft.server.NetServerHandler.handleCommand(NetServerHandler.java:818)
            at net.minecraft.server.NetServerHandler.chat(NetServerHandler.java:778)
            at net.minecraft.server.NetServerHandler.a(NetServerHandler.java:761)
            at net.minecraft.server.Packet3Chat.handle(Packet3Chat.java:33)
            at net.minecraft.server.NetworkManager.b(NetworkManager.java:229)
            at net.minecraft.server.NetServerHandler.a(NetServerHandler.java:112)
            at net.minecraft.server.NetworkListenThread.a(NetworkListenThread.java:78)
            at net.minecraft.server.MinecraftServer.w(MinecraftServer.java:554)
            at net.minecraft.server.MinecraftServer.run(MinecraftServer.java:452)
            at net.minecraft.server.ThreadServerApplication.run(SourceFile:490)
        Caused by: java.lang.NullPointerException
            at com.devoverflow.improved.needs.commands.CommandWorldtp.savePlayerInv(CommandWorldtp.java:130)
            at com.devoverflow.improved.needs.commands.CommandWorldtp.onCommand(CommandWorldtp.java:60)
            at org.bukkit.command.PluginCommand.execute(PluginCommand.java:40)
            ... 12 more
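
    A hedged note on the likely cause (inferred from the trace, not verified against this plugin): in Bukkit, PlayerInventory.getContents() returns an array in which empty slots are null ItemStacks, so the first empty slot makes stack.getAmount() throw a NullPointerException. Guarding each iteration with if (stack == null) continue; (and likewise for the armor loop) is the usual prevention.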

    Read the article

  • Rotate object Up/Down/Left/Right in any orientation

    - by George Duckett
    I'm rendering a model at the origin, with a fixed camera looking at it, positioned on the Z axis. I want to be able to rotate the model up/down and left/right. Currently I have two variables, HorizontalRotation and VerticalRotation. When calculating the world matrix, I rotate about the Y axis by HorizontalRotation and about the X axis by VerticalRotation. The rotation variables are controlled by pressing the up/down/left/right arrow keys. The problem I'm having is that the rotations happen relative to the object. Say it's a model of the Earth: pressing Up a bit lets me look at the north pole, but currently when I press Right, the Earth spins in front of the camera on its own axis, and I'm still looking at the north pole. How can I get it so that, no matter what rotations are currently applied, I can always rotate my model relative to the camera/world axes?
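
    A sketch of the standard fix (assuming a minimal quaternion type; any math library's equivalent works): keep the full accumulated orientation instead of two angles, and pre-multiply each frame's small world-axis rotation, so every key press turns the model relative to the camera rather than its own axes.

        #include <cmath>

        struct Quat { float w, x, y, z; };

        Quat axisAngle(float ax, float ay, float az, float angle) {
            float s = std::sin(angle * 0.5f);
            return {std::cos(angle * 0.5f), ax * s, ay * s, az * s};
        }

        Quat mul(Quat a, Quat b) {   // Hamilton product
            return {a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
                    a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
                    a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
                    a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w};
        }

        Quat orientation{1, 0, 0, 0};   // identity

        // Rotate about the fixed *world* axes by pre-multiplying, so the turn
        // is always relative to the camera, not the model's own axes.
        void onLeftRight(float step) { orientation = mul(axisAngle(0, 1, 0, step), orientation); }
        void onUpDown(float step)    { orientation = mul(axisAngle(1, 0, 0, step), orientation); }

    The world matrix then becomes rotationFrom(orientation) * translation, with no separate yaw/pitch variables to accumulate.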

    Read the article

  • Maya is lagging in a specific way...?

    - by Aerovistae
    My Maya installation worked perfectly; it is not my computer. Something caused it to stop working overnight, somehow. When I try to drag a vertex or something like that, it moves the vertex, but then I have to click about three times somewhere outside the mesh before the actual mesh will catch up and follow the vertex. Until I do that, it just stays as it was, with a floating vertex somewhere inside or outside it. It makes modeling borderline impossible and completely infuriating. What ought to be happening is what we're all used to: as I move the vertex, the mesh follows it actively, so I can see what it looks like at every given moment until I release the vertex in its new position. Another weird thing: this only happens with complex meshes, say a couple thousand faces. A simple cube works fine. What gives?? Anybody?

    Read the article

  • How should I choose quadtree depth?

    - by Evpok
    I'm using a quadtree to prune collision-detection pairs in a 2D world. To what depth should I calculate the quadtree? The world is made mostly of moving objects¹, so the cost of dispatching objects between quadtree cells matters. What is the relationship between the gain from less collision checking and the loss from more dispatching? How can I strike a balance that performs optimally?

    ¹ To be completely explicit: they are autonomous self-replicating cells competing for food sources. This is an attempt to show my pupils predator-prey dynamics and genetic evolution at work.
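
    A common rule of thumb (a heuristic sketch, not a universal answer): don't fix the depth globally; split a leaf only when it exceeds a small object capacity, and cap the depth so cells never shrink much below the typical object size, since deeper levels force straddling objects upward and raise dispatch cost with no pruning gain.

        #include <algorithm>
        #include <cmath>

        int maxDepth(float worldSize, float typicalObjectSize, int hardCap = 8) {
            // Each quadtree level halves the cell edge; stop near object size.
            int d = int(std::log2(worldSize / typicalObjectSize));
            return std::clamp(d, 1, hardCap);
        }
        // Companion setting: split a leaf once it holds more than ~8-16 objects.

    Since the objects move every frame, profiling both knobs against the actual population is the honest way to find the optimum.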

    Read the article

  • How are trajectories calculated and transmitted to other players in multiplayer?

    - by giulio
    I play a lot of COD4 and can see tracers for gunfire, missiles, care packages falling from helicopters, etc. There is a lot of activity. I am curious to know the algorithm (at a high level) that manages all this action when you have 20 people on a map shooting each other to death. This question touches on the subject but doesn't ask for a more in-depth answer as to how developers go about calculating and transmitting movement and collision detection for projectiles, be they missiles/bullets or any other object flying through the air in real time.
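
    A sketch of the usual bandwidth trick (an assumption about typical FPS netcode in general, not COD4's actual implementation): fast projectiles are not streamed position-by-position; the server relays a single fire event, and every client re-simulates the deterministic trajectory locally, so 20 players shooting costs one small message per shot.

        struct FireEvent {
            int    shooterId;
            float  origin[3];
            float  dir[3];      // unit direction at the muzzle
            float  speed;       // units per second
            double fireTime;    // shared server timestamp
        };

        // Any client can reconstruct the projectile at any later time:
        void projectileAt(const FireEvent& e, double now, float out[3]) {
            float t = float(now - e.fireTime);
            for (int i = 0; i < 3; ++i)
                out[i] = e.origin[i] + e.dir[i] * e.speed * t;
            // (ballistic objects would add 0.5*g*t*t on the vertical axis)
        }

    Hit detection is then typically done authoritatively on the server with lag compensation, while tracers and effects are purely client-side cosmetics.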

    Read the article

  • Pre-baked fractures and explosions: I need an answer for C++

    - by Ken
    What are pre-baked or precomputed explosions or fractures, from a programmer's viewpoint? I would like to know how to achieve this in C++ and how these things are usually represented (are they animations? textures?). It would be perfect if there were some examples available, or someone who could paint a broad picture of this. I need to add really small support for this in my code and I need a hint about how to start; I would like to do this on my own, without other libraries.
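
    A sketch of one common representation (an assumption about typical pipelines, not the only scheme): the mesh is pre-split into chunks in a DCC tool, an offline physics simulation bakes each chunk's rigid transform for every frame, and the runtime just plays those transforms back; no physics library is needed in-game.

        #include <vector>

        struct RigidKey { float pos[3]; float quat[4]; };  // per chunk, per frame

        struct BakedFracture {
            int   frameCount;
            float framesPerSecond;
            // tracks[chunk][frame] = baked transform of that chunk
            std::vector<std::vector<RigidKey>> tracks;
        };

        // Playback: sample the track and draw each chunk with its transform.
        const RigidKey& sample(const BakedFracture& f, int chunk, float seconds) {
            int frame = int(seconds * f.framesPerSecond);
            if (frame >= f.frameCount) frame = f.frameCount - 1;
            return f.tracks[chunk][frame];
        }

    So the asset is an animation of rigid transforms (sometimes stored as a vertex cache instead), and textures are only involved for the usual surface detail on the chunk interiors.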

    Read the article

  • How can I render a semi transparent model with OpenGL correctly?

    - by phobitor
    I'm using OpenGL ES 2 and I want to render a simple model with some level of transparency. I'm just starting out with shaders; I wrote a simple diffuse shader for the model without any issues, but I don't know how to add transparency to it. I tried setting my fragment shader's output (gl_FragColor) to a non-opaque alpha value, but the results weren't too great. It sort of works, but it looks like certain model triangles are only rendered based on the camera position... It's really hard to describe what's wrong, so please watch this short video I recorded: http://www.youtube.com/watch?v=s0JqA0rZabE I thought this was a depth-testing issue, so I tried playing around with enabling/disabling depth testing and back-face culling. Enabling back-face culling changes the output slightly, but the problem in the video is still there; enabling/disabling depth testing doesn't seem to do anything. Could anyone explain what I'm seeing, and how I can add some simple transparency to my model with the shader? I'm not looking for advanced order-independent transparency implementations.

    edit: Vertex shader:

        // color varying for fragment shader
        varying mediump vec3 LightIntensity;
        varying highp vec3 VertexInModelSpace;

        void main() {
            vec4 LightPosition = vec4(0.0, 0.0, 0.0, 1.0);
            vec3 LightColor = vec3(1.0, 1.0, 1.0);
            vec3 DiffuseColor = vec3(1.0, 0.25, 0.0);
            // find the vector from the given vertex to the light source
            vec4 vertexInWorldSpace = gl_ModelViewMatrix * vec4(gl_Vertex);
            vec3 normalInWorldSpace = normalize(gl_NormalMatrix * gl_Normal);
            vec3 lightDirn = normalize(vec3(LightPosition - vertexInWorldSpace));
            // save vertexInWorldSpace
            VertexInModelSpace = vec3(gl_Vertex);
            // calculate light intensity
            LightIntensity = LightColor * DiffuseColor
                           * max(dot(lightDirn, normalInWorldSpace), 0.0);
            // calculate projected vertex position
            gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
        }

    Fragment shader:

        // varyings carrying the color
        varying vec3 LightIntensity;
        varying vec3 VertexInModelSpace;

        void main() {
            gl_FragColor = vec4(LightIntensity, 0.5);
        }
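
    What the video describes matches the classic ordering problem (a general OpenGL explanation, offered as the likely cause rather than a diagnosis of this exact code): alpha blending composites in draw order, so whether a triangle occludes or shows through depends on which triangles were drawn before it, and that flips as the camera moves. The usual "simple transparency" recipe on the host side:

        // 1) Draw all opaque geometry normally.
        glEnable(GL_DEPTH_TEST);
        // 2) Then draw transparent meshes back-to-front with blending on
        //    and depth *writes* off (still testing against the opaques).
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glDepthMask(GL_FALSE);
        // For a single model, a cheap approximation is two culling passes:
        glEnable(GL_CULL_FACE);
        glCullFace(GL_FRONT);   // pass 1: interior (back) faces
        // drawModel();
        glCullFace(GL_BACK);    // pass 2: front faces over them
        // drawModel();
        glDepthMask(GL_TRUE);
        glDisable(GL_BLEND);

    Within one concave mesh the triangles themselves would still need sorting for perfect results, which is exactly why the full solutions are the order-independent techniques the question rules out.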

    Read the article

  • Amazon GameCircle Integration

    - by user1095509
    I'm trying to integrate Amazon GameCircle, and I have been able to initialize it successfully in my app. The problem is that when I click the button that displays achievements, the GameCircle achievement list comes up, but it says "You have unlocked 0 of 0 achievements". The same happens with leaderboards, i.e. there are no leaderboards for this app. I have created a leaderboard and a few achievements on Amazon's online developer portal, but for some reason they don't show up. Can someone help me with this? Any links/resources that help with integrating GameCircle will be appreciated. Thanks.

    Read the article

  • Set Position of multiple bodies

    - by philipp
    I have a character composed of five bodies tied together by a lot of joints. One of them is the overall chassis, to which all forces and impulses are applied to move the whole character. All in all that works very well, except for one thing: I need to set the position of the character so that it gets beamed from one place to another within a single frame. Unfortunately I cannot get this to work. I tried the following code, without any success:

        playerbodies.forEach(function (bd) {
            bd.SetLinearVelocity(new b2.Vec2());
            var t = bd.GetTransform();
            t.p.x -= 10;
            bd.SetTransform(t, bd.GetAngle());
        });

    How can I make that happen?
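
    For comparison, the same teleport in Box2D's native C++ API (a sketch; it assumes the JS port mirrors these methods): pass a freshly computed position to SetTransform rather than mutating the object returned by GetTransform, zero both velocities, and wake the bodies so the joints re-solve at the new location.

        #include <box2d/box2d.h>

        void teleport(b2Body** bodies, int count, const b2Vec2& delta) {
            for (int i = 0; i < count; ++i) {
                b2Body* bd = bodies[i];
                bd->SetLinearVelocity(b2Vec2(0.0f, 0.0f));
                bd->SetAngularVelocity(0.0f);
                bd->SetTransform(bd->GetPosition() + delta, bd->GetAngle());
                bd->SetAwake(true);
            }
        }

    Moving every body by the same delta in the same frame matters: if only the chassis moves, the joints will violently drag the rest across the gap.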

    Read the article

  • OpenGL Application displays only 1 frame

    - by Avi
    EDIT: I have verified that the problem is not the VBO class or the vertex array class, but rather something else.

    I have a problem where my vertex buffer class works the first time it's called, but displays nothing on any later call. I don't know why this is, and it's the same in my vertex array class. I'm calling the functions in this order to set up the buffers:

    1. enable client states
    2. bind buffers
    3. set buffer/array data
    4. unbind buffers
    5. disable client states

    Then in the draw function, which is called every frame:

    1. enable client states
    2. bind buffers
    3. set pointers
    4. unbind buffers
    5. bind index buffer
    6. draw elements
    7. unbind index buffer
    8. disable client states

    Is there something wrong with the order in which I'm calling the functions, or is it a more specific code error?

    EDIT: here's some of the code. Code for setting pointers:

        // element is the vertex attribute being drawn (e.g. normals, colors, etc.)
        static void makeElementPointer(VertexBufferElements::VBOElement element,
                                       Shader *shade, void *elementLocation) {
            // elementLocation is BUFFER_OFFSET(n) if a buffer is bound
            switch (element) {
                ....
                glVertexPointer(3, GL_FLOAT, 0, elementLocation); // changes based on element,
                ....                                              // but I'm only dealing with
            }                                                     // vertices for now
        }

    And that's basically all the code that isn't just a straight OpenGL function call.
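
    For reference, a known-good legacy-GL draw sequence (a sketch assuming positions only; the client state and pointer must be active at the moment glDrawElements runs):

        glEnableClientState(GL_VERTEX_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glVertexPointer(3, GL_FLOAT, 0, (void*)0);  // captured from the bound VBO now
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
        glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, (void*)0);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
        glDisableClientState(GL_VERTEX_ARRAY);

    Comparing this against the failing path frame by frame (especially anything that re-uploads or clears the buffer data after the first frame) often isolates this class of bug.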

    Read the article

  • Mesh with Alpha Texture doesn't blend properly

    - by faulty
    I've followed examples from various places on setting the OutputMerger's BlendState to enable alpha/transparent textures on a mesh. The setup is as follows:

        var transParentOp = new BlendStateDescription {
            SourceBlend = BlendOption.SourceAlpha,
            DestinationBlend = BlendOption.InverseDestinationAlpha,
            BlendOperation = BlendOperation.Add,
            SourceAlphaBlend = BlendOption.Zero,
            DestinationAlphaBlend = BlendOption.Zero,
            AlphaBlendOperation = BlendOperation.Add,
        };

    I've made a sample that displays three meshes A, B and C, each overlapping the next. They are drawn sequentially, A to C, with A nearest the camera and C furthest. So the expected output is that A is see-through and shows parts of B and C, and B is see-through and shows part of C. But what I get is that none of them are see-through in that order; if I move C closer to the camera, it becomes semi-transparent and shows A and B, and if B moves closer to the camera, it shows A but not C. It's sort of reversed. So it seems I need to draw them in reverse order, furthest from the camera first and nearest last. Is it supposed to be done this way, or can I configure the blend state so it works no matter what order I draw them in? Thanks
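
    Drawing back-to-front is indeed the standard requirement (a general note on how blending works, not specific to this sample): conventional alpha blending composites "over" in submission order, and no blend-state setting can make it commutative; avoiding the sort requires order-independent transparency techniques. A minimal sketch of the usual per-frame sort, in C++ style (the mesh type is illustrative):

        #include <algorithm>
        #include <vector>

        struct Mesh { float viewDepth; /* distance from camera this frame */ };

        void sortForTransparency(std::vector<Mesh*>& transparent) {
            std::sort(transparent.begin(), transparent.end(),
                      [](const Mesh* a, const Mesh* b) {
                          return a->viewDepth > b->viewDepth;  // farthest first
                      });
        }

    One more hedged observation: the conventional blend pair is SourceAlpha / InverseSourceAlpha; InverseDestinationAlpha as the destination factor is unusual and may be contributing to the odd result.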

    Read the article

  • Problems when running code on an Nvidia GPU

    - by 2am
    I am following the OpenGL GLSL cookbook 4.0. I have rendered a tessellated quad, as you see in the screenshot below, and I am moving the Y coordinate of every vertex using a time-based sin function, as given in the code in the book. This program, as you see from the text in the image, runs perfectly on my processor's built-in Intel HD graphics. But I have Nvidia GT 555M graphics in my laptop (which, by the way, has switchable graphics), and when I run the program on the graphics card, the OpenGL shader compilation fails. It fails on the following instruction:

        pos.y = sin.waveAmp * sin(u);

    giving the error:

        Error C1105: Cannot call a non-function

    I know this error occurs on the sin(u) call you see in the instruction, but I am not able to understand why. When I removed sin(u) from the code, the program ran fine on the Nvidia card; it runs fine with sin(u) on the Intel HD 3000 graphics. Also, if you notice, the program is almost unusable on the Intel HD 3000: I am getting only 9 FPS, which is not enough; it's too much load for the Intel HD 3000. So is the sin(x) function not defined in the GLSL implementation shipped by Nvidia's drivers, or is it something else?
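
    A hedged guess at the cause, judging only from the quoted line: the shader appears to have a variable or uniform-struct instance literally named sin (hence sin.waveAmp), and a user symbol named sin shadows the built-in function under Nvidia's stricter compiler, so the subsequent sin(u) is "calling a non-function". Intel's compiler evidently tolerates the shadowing. If that reading is right, renaming the instance (e.g. wave.waveAmp * sin(u)) should compile on both GPUs.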

    Read the article

  • How does OpenGL ES 2 assemble primitives?

    - by stephelton
    Two things I'm quite confused about:

    1) OpenGL ES 2.0 creates primitives before the vertex shader is invoked. Why, then, does it not automatically provide the vertex shader with the position of the vertex?

    2) OpenGL ES 2.0 supports glDrawElements(), but it does not support glEnableClientState() or GL_VERTEX_ARRAY, so how can this call possibly be used to construct primitives?

    NOTE: this is OpenGL ES 2.0, NOT normal OpenGL! Thanks!
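
    For question 2, a sketch of the ES 2.0 mechanism (assuming a linked program with an attribute named "a_position" and data already uploaded to the buffers): the fixed-function client states are replaced by generic vertex attributes, which also answers question 1; nothing is automatic because the shader declares whatever inputs it wants and the application wires them up.

        GLint loc = glGetAttribLocation(program, "a_position");
        glEnableVertexAttribArray(loc);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glVertexAttribPointer(loc, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
        glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, (void*)0);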

    Read the article

  • LibGDX - SpriteBatch's .draw() method requiring float[]

    - by just_a_programmer
    Please excuse my lack of knowledge with LibGDX, as I have just started learning it. I am going through some simple tutorials, and in one of them I draw a string onto the screen like so (the following code is in the main file of the core project):

        // in create():
        private SpriteBatch batch;
        batch = new SpriteBatch();

        // in render():
        batch.draw(batch, "Hello world", 200, 200);

    I am getting an error saying:

        The method draw(texture, float[], int, int) in the type SpriteBatch
        is not applicable for the arguments (SpriteBatch, int, int)

    So LibGDX wants a float array to draw instead of a string? Thanks in advance.
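
    A hedged pointer (from the general LibGDX API, not this tutorial): SpriteBatch.draw() only draws textures and regions, which is why the compiler tried to match the float[] overload. Strings are drawn through a font object, along the lines of font.draw(batch, "Hello world", 200, 200), with a BitmapFont created alongside the batch in create().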

    Read the article

  • Learning C++ but wanting to develop iOS Apps

    - by DiscreteGenius
    I'm a computer engineering student taking my second programming class. I'm learning C++ using "C++ Primer Plus", 5th edition, by Prata. I want to develop for iOS, and I understand the main language for Xcode is Objective-C. Am I hurting myself by learning C++ before any other language (notably before my desired language, Objective-C)? There has to be a reason the university requires C++ as the basis language. Please offer any helpful guidance on how I should go about this. Thanks//

    Read the article
