
  • Making a game with responsive resolution

    - by alexandervrs
    I am making a game, but I wish for it to be resolution agnostic. My target resolution, i.e. where things look as intended, is 1600 x 900. My ideas are:

      - Make the HUD stay fixed to the sides no matter what the resolution, using one size of HUD graphics below a certain resolution and another above a certain larger one.
      - Use large HD sprites/backgrounds whose dimensions are powers of 2, so they scale nicely.
      - Use the player's native resolution.
      - Scale the game area (not the HUD) to fit, zooming in somewhat and cropping the sides of the game area if necessary for widescreen (no stretching), but always filling the screen.
      - Have minimum and maximum resolution limits for very small and very large displays, where I would just change the resolution(?) or scale up/down to fit.

    What I am a bit confused about, though, is what math formula I would use to scale the game area correctly based on the resolution, no matter the aspect ratio: fully fit on a square screen, with some clipping at the sides on widescreen. Pseudocode would help as well. :)
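
    Since the post asks for pseudocode: a minimal sketch of the scale-to-fill ("cover") math, in Java, assuming the 1600 x 900 target above (all names are illustrative):

        // Minimal sketch: uniform "cover" scaling for a 1600x900 design resolution.
        // The screen is always filled; the overflowing axis is cropped equally on both sides.
        static final float DESIGN_W = 1600f, DESIGN_H = 900f;

        static float[] coverTransform(float screenW, float screenH) {
            float scaleX = screenW / DESIGN_W;
            float scaleY = screenH / DESIGN_H;
            float scale = Math.max(scaleX, scaleY);  // larger ratio: no letterboxing, some cropping
            float viewW = DESIGN_W * scale;          // scaled game area in pixels
            float viewH = DESIGN_H * scale;
            float offsetX = (screenW - viewW) / 2f;  // negative when that axis is cropped
            float offsetY = (screenH - viewH) / 2f;
            return new float[] { scale, offsetX, offsetY };
        }

    On a square-ish screen scaleY wins and the left/right edges are clipped; on widescreen scaleX wins and the top/bottom are clipped, which matches the behaviour described above.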

  • Distributed Rendering in the UDK and Unity

    - by N0xus
    At the moment I'm looking at getting a game engine to run in a CAVE environment. So far, during my research, I've seen a lot of people get both Unity and the Unreal Engine up and running in a CAVE (someone did get CryEngine to work in one, but there is little research data about it). As of yet, I have not cemented my final choice of engine for the next stage of my project. I have experience with both, so the learning curve will be gentle either way. Both engines offer stereoscopic rendering, either built in via RealD (Unreal) or by doing it yourself (Unity). Both can also make use of other input devices, such as the Kinect. So again, both engines are still on the table. For the last bit of my preliminary research, I was advised to see if either, or both, engines could do distributed rendering. I was advised this because the final game we make could go into a variety of differently sized CAVEs. The one I have access to is roughly 2.4m x 3m cubed, and I have been duly informed that this one is a "baby" compared to others. So, finally, onto my question: can either the Unreal Engine or the Unity engine allow developers to do distributed rendering, either through built-in facilities or by creating my own plugin/script?

  • How to handle loading and keeping many bitmaps in an Android 2D game

    - by Lumis
    In an Android 2D game using a SurfaceView, whose onDraw is driven by a loop from a Thread, I use many bitmap sprites (sprite sheets) and two background-sized bitmaps, all loaded into memory at the start. It all works fine; however, when the activity is paused, or after reloading it a few times, Android shows a tendency to wipe out the big bitmaps only, probably to free memory. Sometimes this happens even in the middle of loading this very activity. To counter this, I added a check in the onDraw method that tests whether the big bitmaps are still there, and reloads them before drawing them on the Canvas if Android has forcefully recycled them. This solution may not be the most stable, and since I know there are much more accomplished Android game programmers here than myself, I hope you can reveal some tricks or secrets, or at least provide some good hints, on how to overcome this.
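
    For reference, a minimal sketch of the reload-on-demand check described above, assuming the backgrounds are decoded from drawable resources (the class, field, and resource names are illustrative, not from the original code):

        import android.content.Context;
        import android.content.res.Resources;
        import android.graphics.Bitmap;
        import android.graphics.BitmapFactory;
        import android.graphics.Canvas;
        import android.view.SurfaceView;

        // Minimal sketch: re-create a large bitmap if Android reclaimed it.
        class GameView extends SurfaceView {
            private Bitmap background;

            GameView(Context context) { super(context); }

            private void ensureBackground(Resources res) {
                // Re-decode if the bitmap was never loaded or has been recycled.
                if (background == null || background.isRecycled()) {
                    background = BitmapFactory.decodeResource(res, R.drawable.background);
                }
            }

            void drawFrame(Canvas canvas) {  // called from the render thread's loop
                ensureBackground(getResources());
                canvas.drawBitmap(background, 0, 0, null);
            }
        }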

  • Hydraulics in game

    - by Mungoid
    I'm not completely sure whether this would be better on the Physics site, as this question is more about how hydraulics should work in a game than about how they really work (although that is taken into account), so I apologize if this is in the wrong place. In a project we are on, we have a machine with hydraulics that are powered (they don't just look like they move something; they are the only thing moving/turning/lifting something). However, a hydraulic ram extends at the same speed no matter what it is pushing. Say there is a 10 ton object attached to one end of the ram and the other end is attached to a plate on the ground. In real life it takes a few seconds to build up pressure, depending on how heavy the object is, but in our project the hydraulics don't care about that: they will lift a 100 ton object at the same speed as a 10 ton object. We have a way to fake the hydraulic pressurizing by reducing the 'drive amount' (how fast or slow the ram extends) when we sense that it is touching the ground, and that does a relatively decent job, but we would like to take other things into account, such as engine speed, ratios, and loads, and we aren't too sure what we need to think about. I'm wondering if anyone here has experience with this and could offer some suggestions on what to take into account.
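
    One possible starting point, sketched in Java: treat the pump as a fixed power budget, so extension speed falls as load rises (power is roughly force times velocity). Everything here, the names and the formula, is an assumption for illustration, not something from the project above:

        // Hypothetical sketch: hydraulic extension speed limited by pump power.
        // Heavier loads need more force, so the same power moves them more slowly.
        static double extensionSpeed(double pumpPowerWatts, double loadKg,
                                     double maxSpeedMps, double engineFactor) {
            double forceN = loadKg * 9.81;                           // force to lift the load
            double powerLimited = (pumpPowerWatts * engineFactor) / forceN;
            return Math.min(maxSpeedMps, powerLimited);              // cap at the ram's free speed
        }

    Here engineFactor (0..1) stands in for engine speed and gearing; ramping it up over a fraction of a second would also approximate the pressure build-up described above.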

  • Smooth animation on a persistently refreshing canvas

    - by Neurofluxation
    Yo everyone! I have been working on an isometric tile game engine in HTML5/canvas for a little while now and I have a complete working game. Earlier today I looked back over my code and thought: "hmm, let's try to get this animated smoothly..." And since then, that is all I have tried to do. The problem: I would like the character to actually "slide" from tile to tile, but the canvas redrawing doesn't allow this. Does anyone have any ideas? Code and fiddle below. Fiddle with it: http://jsfiddle.net/neuroflux/n7VAu/

        <html>
        <head>
            <title>tileEngine - Isometric</title>
            <style type="text/css">
                * { margin: 0px; padding: 0px; font-family: arial, helvetica, sans-serif; font-size: 12px; cursor: default; }
            </style>
            <script type="text/javascript">
                var map = Array( //land
                    [[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0]],
                    [[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0]],
                    [[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0]],
                    [[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0]],
                    [[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0]],
                    [[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0]],
                    [[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0]],
                    [[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0]],
                    [[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0]],
                    [[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0]]
                );
                var tileDict = Array("http://www.wikiword.co.uk/release-candidate/canvas/tileEngine/land.png");
                var charDict = Array("http://www.wikiword.co.uk/release-candidate/canvas/tileEngine/mario.png");
                var objectDict = Array("http://www.wikiword.co.uk/release-candidate/canvas/tileEngine/rock.png"); //last is one more

                var objectImg = new Array();
                var charImg = new Array();
                var tileImg = new Array();
                var loaded = 0;
                var loadTimer;
                var ymouse;
                var xmouse;
                var eventUpdate = 0;
                var playerX = 0;
                var playerY = 0;

                function loadImg(){ //preload images and calculate the total loading time
                    for(var i=0;i<tileDict.length;i++){
                        tileImg[i] = new Image();
                        tileImg[i].src = tileDict[i];
                        tileImg[i].onload = function(){ loaded++; }
                    }
                    for(var i=0;i<charDict.length;i++){
                        charImg[i] = new Image();
                        charImg[i].src = charDict[i];
                        charImg[i].onload = function(){ loaded++; }
                    }
                    for(var i=0;i<objectDict.length;i++){
                        objectImg[i] = new Image();
                        objectImg[i].src = objectDict[i];
                        objectImg[i].onload = function(){ loaded++; }
                    }
                }

                function checkKeycode(event) { //key pressed
                    var keyCode;
                    if(event == null) {
                        keyCode = window.event.keyCode;
                    } else {
                        keyCode = event.keyCode;
                    }
                    switch(keyCode) {
                        case 38: //left
                            if(!map[playerX-1][playerY][1] > 0){ playerX--; }
                            break;
                        case 40: //right
                            if(!map[playerX+1][playerY][1] > 0){ playerX++; }
                            break;
                        case 39: //up
                            if(!map[playerX][playerY-1][1] > 0){ playerY--; }
                            break;
                        case 37: //down
                            if(!map[playerX][playerY+1][1] > 0){ playerY++; }
                            break;
                        default:
                            break;
                    }
                }

                function loadAll(){ //load the game
                    if(loaded == tileDict.length + charDict.length + objectDict.length){
                        clearInterval(loadTimer);
                        loadTimer = setInterval(gameUpdate,100);
                    }
                }

                function drawMap(){ //draw the map (in intervals)
                    var tileH = 25;
                    var tileW = 50;
                    mapX = 80;
                    mapY = 10;
                    for(i=0;i<map.length;i++){
                        for(j=0;j<map[i].length;j++){
                            var drawTile = map[i][j][0];
                            var xpos = (i-j)*tileH + mapX*4.5;
                            var ypos = (i+j)*tileH/2 + mapY*3.0;
                            ctx.drawImage(tileImg[drawTile],xpos,ypos);
                            if(i == playerX && j == playerY){
                                you = ctx.drawImage(charImg[0],xpos,ypos-(charImg[0].height/2));
                            }
                        }
                    }
                }

                function init(){ //initialise the main functions and event handlers
                    ctx = document.getElementById('main').getContext('2d');
                    loadImg();
                    loadTimer = setInterval(loadAll,10);
                    document.onkeydown = checkKeycode;
                }

                function gameUpdate() { //update the game, clear canvas etc
                    ctx.clearRect(0,0,904,460);
                    ctx.fillStyle = "rgba(255, 255, 255, 1.0)"; //assign color
                    drawMap();
                }
            </script>
        </head>
        <body align="center" style="text-align: center;" onload="init()">
            <canvas id="main" width="904" height="465">
                <h1 style="color: white; font-size: 24px;">I'll be damned, there be no HTML5 &amp; canvas support on this 'ere electronic machine!<sub>This game, jus' plain ol' won't work!</sub></h1>
            </canvas>
        </body>
        </html>
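
    A minimal, engine-agnostic sketch of the usual fix (written in Java only for illustration; the same few lines translate directly to the JavaScript above): keep the logical tile position that checkKeycode updates, plus a separate drawn position that eases toward it each frame, and render the character at the drawn position:

        // Hypothetical sketch: decouple the drawn position from the tile position.
        // tileToScreenX/Y mirror the xpos/ypos math in drawMap() (tileH=25, mapX=80, mapY=10).
        class SlidingSprite {
            int tileX, tileY;        // logical position, changed on key press
            float drawX, drawY;      // rendered position, eased toward the tile

            float tileToScreenX(int i, int j) { return (i - j) * 25 + 80 * 4.5f; }
            float tileToScreenY(int i, int j) { return (i + j) * 25 / 2f + 10 * 3.0f; }

            void update(float dt) {
                float t = Math.min(1f, 8f * dt);   // 8 ~= slide stiffness, tune to taste
                drawX += (tileToScreenX(tileX, tileY) - drawX) * t;
                drawY += (tileToScreenY(tileX, tileY) - drawY) * t;
            }
        }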

  • Resolution Independent 2D Rendering in XNA

    - by AttackingHobo
    I am trying to figure out the best way to render a 2D game at any resolution. I am currently rendering the game at 1920x1200. I am trying to scale the game to any user-selected resolution without changing the way I am rendering, or the game logic. What is the best way to scale a game to any arbitrary resolution? Edit: I am trying to achieve this: http://www.david-amador.com/2010/03/xna-2d-independent-resolution-rendering/ but I think the code he has is for a different version of XNA, because I cannot find the method overload he uses.
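
    The linked approach boils down to rendering at the fixed virtual resolution and then drawing the result through a uniform scale with letterboxing. A sketch of the math it relies on, in Java purely for illustration (XNA itself is C#; the names here are mine):

        // Minimal letterbox sketch for a fixed 1920x1200 virtual resolution:
        // scale uniformly by the smaller axis ratio, then center with black bars.
        static final int VIRTUAL_W = 1920, VIRTUAL_H = 1200;

        static int[] letterboxViewport(int screenW, int screenH) {
            float scale = Math.min(screenW / (float) VIRTUAL_W,
                                   screenH / (float) VIRTUAL_H);
            int w = Math.round(VIRTUAL_W * scale);
            int h = Math.round(VIRTUAL_H * scale);
            int x = (screenW - w) / 2;   // horizontal bar offset
            int y = (screenH - h) / 2;   // vertical bar offset
            return new int[] { x, y, w, h };
        }

    In XNA terms the returned rectangle becomes the viewport, and the scale feeds a scaling Matrix passed to SpriteBatch.Begin, which is essentially what the article does.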

  • How to calculate vertex normals for a mesh in Java in an OpenGL ES application?

    - by alan mc
    Can someone point me to Java code (in Java, not C or C++) that calculates all the normals for all the vertices of a mesh, for an OpenGL ES application? I need this for lighting. Let's say I have a cube with the following vertices and indices:

        float vertices[] = {
            -width, -height, -depth, // 0
             width, -height, -depth, // 1
             width,  height, -depth, // 2
            -width,  height, -depth, // 3
            -width, -height,  depth, // 4
             width, -height,  depth, // 5
             width,  height,  depth, // 6
            -width,  height,  depth  // 7
        };

        short indices[] = {
            0, 2, 1,   0, 3, 2,
            1, 2, 6,   6, 5, 1,
            4, 5, 6,   6, 7, 4,
            2, 3, 6,   6, 3, 7,
            0, 7, 3,   0, 4, 7,
            0, 1, 5,   0, 5, 4
        };

    In the above specific example, how many normals do we need?
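
    A minimal Java sketch of the standard approach: accumulate each triangle's face normal into its three vertices, then normalize. With smooth (averaged) normals, this cube needs 8 normals, one per shared vertex; for hard cube edges you would instead duplicate vertices per face (24 vertices, 24 normals).

        // Minimal sketch: per-vertex normals by averaging the adjacent face normals.
        // 'vertices' holds x,y,z triples; 'indices' holds triangles. One normal per vertex.
        static float[] computeVertexNormals(float[] vertices, short[] indices) {
            float[] n = new float[vertices.length];
            for (int t = 0; t < indices.length; t += 3) {
                int a = indices[t] * 3, b = indices[t + 1] * 3, c = indices[t + 2] * 3;
                // Two edges of the triangle.
                float e1x = vertices[b] - vertices[a];
                float e1y = vertices[b + 1] - vertices[a + 1];
                float e1z = vertices[b + 2] - vertices[a + 2];
                float e2x = vertices[c] - vertices[a];
                float e2y = vertices[c + 1] - vertices[a + 1];
                float e2z = vertices[c + 2] - vertices[a + 2];
                // Face normal = e1 x e2 (unnormalized, so larger triangles weigh more).
                float nx = e1y * e2z - e1z * e2y;
                float ny = e1z * e2x - e1x * e2z;
                float nz = e1x * e2y - e1y * e2x;
                for (int v : new int[] { a, b, c }) {
                    n[v] += nx; n[v + 1] += ny; n[v + 2] += nz;
                }
            }
            for (int v = 0; v < n.length; v += 3) { // normalize the accumulated sums
                float len = (float) Math.sqrt(n[v]*n[v] + n[v+1]*n[v+1] + n[v+2]*n[v+2]);
                if (len > 1e-6f) { n[v] /= len; n[v + 1] /= len; n[v + 2] /= len; }
            }
            return n;
        }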

  • 3D Modeling Software for Programmer [closed]

    - by Pathachiever11
    I've recently learned how to make games with Unity3D, and now I want to start making games! I can't wait to start! However, before I can make 3D games, I need to learn 3D modeling for character design, level design, and some animation. What is the easiest 3D modeling software that is compatible with Unity3D? I do not want to spend too much time learning the software. From what I've heard, Blender is a bit complicated to use, while Maya and 3ds Max seem very powerful. Could someone point me in the right direction? I don't want to spend a lot of time learning; I know it's not that easy, but you all have experience and probably know which one is both easier and powerful. Could you recommend a software package? Many thanks!

  • Developing games using virtualization on macOS (or Linux) [on hold]

    - by zpinner
    From what I've seen, most of the gamedev tools and engines that can generate cross-platform games (Havok/Project Anarchy, UDK, and GameMaker, for example) are not supported on Mac. Basically, the only options I found are Unity3D and MonoGame + Xamarin. Unity is nice and I've been playing with it for some time, but the free version is quite limited when we're talking about shaders, which made me consider that, as an indie developer, I might want more freedom to experiment with new things without paying for the expensive Unity license. I haven't tried MonoGame + Xamarin yet, and although XNA is a very nice game framework, I'd like the freedom to experiment and finish a game first before paying for the IDE, which is not possible with the current Xamarin business model. That leaves me thinking I must go back to Windows, which I'd prefer to do only partially, if possible. Using Boot Camp is something I'd like to avoid, since it's a pain to reboot when changing OS, and that would probably force me to become a 100% Windows user. Is anyone actually developing a game using virtualization solutions like Parallels or VMware Fusion? How was your experience?

  • Recasting and Drawing in SDL

    - by user1078123
    I have some code that essentially draws a column of a wall on the screen in a raycasting-type 3D engine. I am trying to optimize it, as it takes about 10 milliseconds to draw a million pixels this way, and the vast majority of game time is spent in this loop. However, I don't quite understand what's occurring, particularly the recasting (I modified the "pixel manipulation" sample code from the SDL documentation). "canvas" is the surface I am drawing to, and "hello" is the surface containing the texture for the column.

        int c = (curcol) * canvas->format->BytesPerPixel;
        void *canvaspixels = canvas->pixels;
        Uint16 texpitch = hello->pitch;
        int lim = (drawheight + startdraw) * canvpitch + c + (int) canvaspixels;
        Uint8 *k = (Uint8 *) hello->pixels + (hit) * hello->format->BytesPerPixel;
        for (int j = (startdraw) * (canvpitch) + c + (int) canvaspixels; (j < lim); j += canvpitch) {
            Uint8 *q = (Uint8 *) ((int(h)) * (texpitch) + k);
            *(Uint32 *) j = *(Uint32 *) q;
            h += s;
        }

    We have void pointers (I'm not sure how those are even represented), 8-, 16-, and 32-bit ints (h and s are floats), all intermingled, and while it works, it is quite confusing.

  • MMORPG game balancing

    - by Gary Paluk
    I've seen a couple of examples of game balancing techniques in books, yet they are not comprehensive and not particularly aimed at MMORPGs. I'm looking for practical examples of game balancing techniques for MMORPGs, and I am interested to know whether anyone has documented the techniques used in popular games with proven success in this area. Ideally, the resource would cover the most common types of stats and include layman's mathematical models or techniques used to balance game mechanics found in advanced MMORPGs (I know it's a cliché, but WoW-style). Any help would be great!

  • OpenGL error LNK2019

    - by Ghilliedrone
    I'm trying to compile a basic OpenGL program. I linked opengl32.lib and glu32.lib, but I'm getting errors. The errors I get are:

        error LNK1120: 7 unresolved externals
        error LNK2019: unresolved external symbol _main referenced in function ___tmainCRTStartup
        error LNK2019: unresolved external symbol "public: float __thiscall GLWindow::getElapsedSeconds(void)" (?getElapsedSeconds@GLWindow@@QAEMXZ) referenced in function _WinMain@16
        error LNK2019: unresolved external symbol "public: bool __thiscall GLWindow::isRunning(void)" (?isRunning@GLWindow@@QAE_NXZ) referenced in function _WinMain@16
        error LNK2019: unresolved external symbol "public: void __thiscall GLWindow::attachExample(class Example *)" (?attachExample@GLWindow@@QAEXPAVExample@@@Z) referenced in function _WinMain@16
        error LNK2019: unresolved external symbol "public: void __thiscall GLWindow::destroy(void)" (?destroy@GLWindow@@QAEXXZ) referenced in function _WinMain@16
        error LNK2019: unresolved external symbol "public: __thiscall GLWindow::GLWindow(struct HINSTANCE__ *)" (??0GLWindow@@QAE@PAUHINSTANCE__@@@Z) referenced in function _WinMain@16
        error LNK2019: unresolved external symbol "private: void __thiscall GLWindow::setupPixelFormat(void)" (?setupPixelFormat@GLWindow@@AAEXXZ) referenced in function "public: long __stdcall GLWindow::WndProc(struct HWND__ *,unsigned int,unsigned int,long)" (?WndProc@GLWindow@@QAGJPAUHWND__@@IIJ@Z)

  • Move the location of the XYZ pivot point on a mesh in UDK

    - by WebDevHobo
    When working with any mesh, you get an XYZ pivot point somewhere on it. If you just want to move the mesh in any direction, it doesn't matter where this point is located. However, I want to rotate a door, which requires the point of rotation to be very specific. I can't find anywhere how to change the location of the point. Can anyone help? EDIT: solved. To change the pivot point, right-click the mesh, go to "Pivot" and move it. Then right-click again, and this time select "Save PrePivot to Pivot".

  • How important is a single-player mode in a 2-player game?

    - by Davy8
    So say you have a 2-player game, taking Chess as an example (except it's an original game with no ready-to-go AI available). Let's say there's also a social aspect to the meta-game, so let's say it's a Chess game on Facebook where you can challenge your friends. How important is it to have a single-player mode, knowing that an AI will need to be created (I've done minimax AI for tic-tac-toe, but nothing too sophisticated)? Is it important enough that it should be in the initial launch of the game? Or can it wait for a future iteration (knowing that being hosted on the web means the game can be updated at any time)?

  • How to make an Actor follow my finger

    - by user48352
    I'm back with another question that may be really simple. I have a texture drawn on my SpriteBatch and I'm making it move up or down (y-axis only) with libgdx's InputHandler: touchDown and touchUp.

        @Override
        public boolean touchDown(int screenX, int screenY, int pointer, int button) {
            myWhale.touchDownY = screenY;
            myWhale.isTouched = true;
            return true;
        }

        @Override
        public boolean touchUp(int screenX, int screenY, int pointer, int button) {
            myWhale.isTouched = false;
            return false;
        }

    myWhale is an object of the Whale class, where I move my texture position:

        public void update(float delta) {
            this.delta = delta;
            if(isTouched){
                dragWhale();
            }
        }

        public void dragWhale() {
            if(Gdx.input.getY(0) - touchDownY < 0){
                if(Gdx.input.getY(0) < position.y + height/2){
                    position.y = position.y - velocidad*delta;
                }
            }
            else{
                if(Gdx.input.getY(0) > position.y + height/2){
                    position.y = position.y + velocidad*delta;
                }
            }
        }

    So the object moves toward the center of the position where the person is pressing their finger, and most of the time it works fine, but the object seems to take about half a second to move up or down, and sometimes when I press my finger it won't move at all. Maybe there's a simpler way to do this. I'd highly appreciate it if someone points me in the right direction.
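
    One simpler alternative, sketched here: instead of stepping at a fixed velocity while touched, ease the sprite toward the finger's y each frame, which removes the stutter around the finger (the fields are illustrative and this assumes position.y shares the screen's y-axis convention, as the code above does):

        // Smoothly follow the touch: move a fraction of the remaining distance each frame.
        public void update(float delta) {
            if (isTouched) {
                float targetY = Gdx.input.getY(0) - height / 2f;  // center whale under finger
                float t = Math.min(1f, 10f * delta);              // 10 ~= follow stiffness
                position.y += (targetY - position.y) * t;
            }
        }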

  • Adaptive Characters: AI Solution Needs a Problem

    - by Roger F. Gay
    Have sophisticated adaptive programming, will travel, so to speak. I'm part of a group that developed sophisticated learning/adaptive software for robotics. The system "thinks" via its simulator, building and adapting code on its own, and then carries out the best solution. The software can also adapt to new situations, etc. http://mensnewsdaily.com/2007/05/16/robobusiness-robots-with-imagination/ It's easy to imagine using it with automated game characters that adapt to the player's moves and style; the easiest example would be fighting. The more the simulated fighter fights with the human player, the more it learns to counter that player's fighting skills. But there should be more. Anyone have ideas as to how adaptive characters might be interesting in games?

  • Build a view frustum from angles

    - by MulletDevil
    I have 4 angles: left, right, top, and bottom, in degrees. They define the angle between the forward vector and the corresponding side of the frustum. I am trying to use these to calculate the required values for the PerspectiveOffCenter function found here: http://docs.unity3d.com/Documentation/ScriptReference/Camera-projectionMatrix.html I tried doing (near plane - far plane) * Tan(angle), but that didn't give the correct results.
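
    For reference, the extents that function expects are measured at the near plane, so they come from the near distance alone, not the near/far difference. A minimal sketch under that assumption (angles measured from the forward vector; Java only for illustration):

        // Frustum extents at the near plane from half-angles (degrees) about forward.
        // left/bottom get negative signs so they sit on the correct side of the axis.
        static float[] frustumExtents(float near, float leftDeg, float rightDeg,
                                      float topDeg, float bottomDeg) {
            float left   = -near * (float) Math.tan(Math.toRadians(leftDeg));
            float right  =  near * (float) Math.tan(Math.toRadians(rightDeg));
            float top    =  near * (float) Math.tan(Math.toRadians(topDeg));
            float bottom = -near * (float) Math.tan(Math.toRadians(bottomDeg));
            return new float[] { left, right, bottom, top };
        }

    The four values then feed PerspectiveOffCenter(left, right, bottom, top, near, far) as in the linked docs.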

  • How to detect whether an Object came to sleep at a specific position?

    - by Nils Riedemann
    I'm currently writing a small game with box2dweb and I need some direction for this: I'm throwing a box, I have to hit a specific place, and I want to trigger an event when the thrown object isn't moving anymore, "fell asleep" so to say. What's the proper way / best practice for this? I'm currently thinking of asking the b2World whether the object is within a specific AABB, then waiting a few seconds, checking if it's still there, and then triggering the event. But this seems like a roundabout way, and the object might still be moving inside that AABB, or eventually even drop out of it.
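
    Box2D already tracks this itself: a body whose velocity stays below the sleep tolerance for about half a second is put to sleep, so polling its awake flag plus one position check is usually enough. A sketch of that idea, written here in Java against the libgdx Box2D binding since the APIs differ slightly (in box2dweb the equivalent call is body.IsAwake(); the callback and target rectangle are illustrative):

        import com.badlogic.gdx.math.Rectangle;
        import com.badlogic.gdx.math.Vector2;
        import com.badlogic.gdx.physics.box2d.Body;

        // Fire once the thrown body has both reached the target region and fallen asleep.
        boolean landedEventFired = false;

        void checkLanded(Body box, Rectangle target) {
            Vector2 p = box.getPosition();
            if (!landedEventFired && !box.isAwake() && target.contains(p.x, p.y)) {
                landedEventFired = true;
                onBoxLanded();   // hypothetical game callback
            }
        }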

  • Alpha From PNGs Butchered

    - by ashes999
    I have a pretty vanilla MonoGame game. I'm using PNGs for all my sprites (made in Photoshop). I noticed that XNA is butchering the alpha; no matter what I do, my graphics appear jagged. Below is a screenshot. The bottom half is what XNA shows me when I zoom in 2X using a Matrix on my GraphicsDevice (to make the effect more obvious). The top is when I pasted the same sprites from Photoshop and scaled them to 200%. Note that partially transparent pixels are turning whitish. Is there a way to fix this? What am I doing wrong? Here's the relevant call to draw to the SpriteBatch:

        spriteBatch.Draw(this.texture, this.positionVector, null, Color.White, this.Angle, this.originVector, 1f, SpriteEffects.None, 0f);

    (this.positionVector can easily be Vector.Zero; Color.White is 100% alpha, I think; this.Angle can be a real angle (the small ">" in the image) or zero (the orb itself).)

  • Jumping a sprite while moving in a Bezier action

    - by marcg11
    I'm creating a game and I need the sprite to jump (move up and down, basically) while it's moving along a Bezier path, so that it moves vertically while still following the path. If I do this while it's moving along the Bezier path:

        [mySprite runAction:[CCJumpBy actionWithDuration:0.1 position:ccp(0,0) height:10 jumps:1]];

    it jumps vertically but instantly returns to its position on the path. What I want is to jump relative to the path. Does anyone know anything about this? It would look something like this: (the curve is a sequence of CCBezierBy actions, by the way). Thanks.

  • Is there a cross-platform special directory I can use for game save files?

    - by Suds
    I'm developing with LWJGL and Java on a Windows 7 laptop. I've successfully set up saving to the %appdata%\gamename\saves\ folder (long form: c:\users\user\appdata\roaming\gamename\saves\) by using File dir = new File(System.getenv("APPDATA") + "\\gamename\\saves\\");. I have hobbyist-level experience with Linux, and zero experience with OS X. My game will be fully cross-platform. Is System.getenv("APPDATA") cross-platform? If so, where does it point on Linux or OS X? If not, is there a best-practice alternative that I should use?
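
    For reference, System.getenv("APPDATA") is Windows-only (it returns null elsewhere), so the usual pattern is to branch on the os.name system property. A minimal sketch assuming the common platform conventions (~/Library/Application Support on OS X; XDG_DATA_HOME or ~/.local/share on Linux):

        import java.io.File;

        // Minimal sketch: pick a per-user save directory by platform convention.
        static File saveDir(String gameName) {
            String os = System.getProperty("os.name").toLowerCase();
            String home = System.getProperty("user.home");
            if (os.contains("win")) {
                return new File(System.getenv("APPDATA"), gameName + File.separator + "saves");
            } else if (os.contains("mac")) {
                return new File(home, "Library/Application Support/" + gameName + "/saves");
            } else {
                String xdg = System.getenv("XDG_DATA_HOME");   // XDG base-dir spec on Linux/BSD
                String base = (xdg != null && !xdg.isEmpty()) ? xdg : home + "/.local/share";
                return new File(base, gameName + "/saves");
            }
        }

    Remember to call mkdirs() on the result before writing the first save file.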

  • Central renderer for a given scene

    - by Loggie
    When creating a central rendering system for all game objects in a given scene, I am trying to work out the best way to pass the scene to the render system to be rendered. Suppose the scene is managed by an arbitrary structure: an octree, a BSP tree, a quadtree, a kd-tree, etc. What is the best way to pass this to the render system? The obvious problem is that if the render system were simply given the root node of the structure, it would require intrinsic knowledge of the structure in order to traverse it. My solution is to clip all objects outside the frustum in the scene manager, build a list of the objects that remain, and pass this simple list to the render system, be it an array, a vector, a linked list, etc. (this would be a structure required by the render system as a means of knowing which objects should be rendered). The list would of course attempt to minimise OpenGL state changes by grouping objects that require the same rendering operations. I have been thinking a lot about this, and have searched various terms on here and followed any additional information/links, but I have not really found a definitive answer. There may be no definitive answer, but I would appreciate some advice and tips. My question is: is this a reasonable solution to the problem? Are there any improvements I could make? Are there any caveats I should know about? Side question: am I right in assuming that octrees, BSP trees, etc. are all forms of BVH?
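
    A minimal sketch of the interface boundary described above, with the scene manager handing the renderer a flat, state-sorted list; all type names here are illustrative, not from an existing engine:

        import java.util.ArrayList;
        import java.util.Comparator;
        import java.util.List;

        // The scene structure culls internally; the renderer only ever sees a list.
        interface SceneManager {
            List<Renderable> visibleObjects(Frustum frustum);  // walks the octree/BSP inside
        }

        interface Renderable {
            long stateKey();   // packs shader/texture/material ids for cheap sorting
            void draw();
        }

        final class Renderer {
            void render(SceneManager scene, Frustum frustum) {
                List<Renderable> batch = new ArrayList<>(scene.visibleObjects(frustum));
                batch.sort(Comparator.comparingLong(Renderable::stateKey)); // fewer state changes
                for (Renderable r : batch) r.draw();
            }
        }

        final class Frustum { /* plane data omitted in this sketch */ }

    The design choice this illustrates: the renderer depends only on the list and the sort key, so the spatial structure can be swapped without touching rendering code.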

  • OpenGL 3 and the Radeon HD 4850x2

    - by rotard
    A while ago, I picked up a copy of the OpenGL SuperBible, fifth edition, and slowly and painfully started teaching myself OpenGL the 3.3 way, after having been used to the 1.0 way from school way back when. Making things more challenging, I am primarily a .NET developer, so I was working in Mono with the OpenTK OpenGL wrapper. On my laptop, I put together a program that let the user walk around a simple landscape, using a couple of shaders that implemented per-vertex coloring, lighting, and texture mapping. Everything was working brilliantly until I ran the same program on my desktop. Disaster! Nothing would render! I have chopped my program down to the point where the camera sits near the origin, pointing at the origin, and renders a square (technically, a triangle fan). The quad renders perfectly on my laptop, coloring, lighting, texturing and all, but the desktop renders a small, distorted, non-square quadrilateral that is colored incorrectly, not affected by the lights, and not textured. I suspect the graphics card is at fault, because I get the same result whether I am booted into Ubuntu 10.10 or Win XP. I did find that if I pare the vertex shader down to ONLY outputting the positional data, and the fragment shader to ONLY outputting a solid color (white), the quad renders correctly. But as SOON as I start passing in color data (whether or not I use it in the fragment shader), the output from the vertex shader is distorted again. The shaders follow. I left the pre-existing code in, but commented out, so you can get an idea of what I was trying to do. I'm a noob at GLSL, so the code could probably be a lot better. My laptop is an old Lenovo T61p with a Centrino (Core 2) Duo and an nVidia Quadro graphics card, running Ubuntu 10.10. My desktop has an i7 with a Radeon HD 4850 x2 (single card, dual GPU) from Sapphire, dual booting into Ubuntu 10.10 and Windows XP. The problem occurs in both XP and Ubuntu. Can anyone see something wrong that I am missing? What is "special" about my HD 4850x2?

        string vertexShaderSource = @"
        #version 330
        precision highp float;

        uniform mat4 projection_matrix;
        uniform mat4 modelview_matrix;
        //uniform mat4 normal_matrix;
        //uniform mat4 cmv_matrix;      //Camera modelview. Light sources are transformed by this matrix.
        //uniform vec3 ambient_color;
        //uniform vec3 diffuse_color;
        //uniform vec3 diffuse_direction;

        in vec4 in_position;
        in vec4 in_color;
        //in vec3 in_normal;
        //in vec3 in_tex_coords;

        out vec4 varyingColor;
        //out vec3 varyingTexCoords;

        void main(void)
        {
            //Get surface normal in eye coordinates
            //vec4 vEyeNormal = normal_matrix * vec4(in_normal, 0);

            //Get vertex position in eye coordinates
            //vec4 vPosition4 = modelview_matrix * vec4(in_position, 0);
            //vec3 vPosition3 = vPosition4.xyz / vPosition4.w;

            //Get vector to light source in eye coordinates
            //vec3 lightVecNormalized = normalize(diffuse_direction);
            //vec3 vLightDir = normalize((cmv_matrix * vec4(lightVecNormalized, 0)).xyz);

            //Dot product gives us diffuse intensity
            //float diff = max(0.0, dot(vEyeNormal.xyz, vLightDir.xyz));

            //Multiply intensity by diffuse color, force alpha to 1.0
            //varyingColor.xyz = in_color * diff * diffuse_color.xyz;

            varyingColor = in_color;
            //varyingTexCoords = in_tex_coords;

            gl_Position = projection_matrix * modelview_matrix * in_position;
        }";

        string fragmentShaderSource = @"
        #version 330
        //#extension GL_EXT_gpu_shader4 : enable
        precision highp float;

        //uniform sampler2DArray colorMap;

        //in vec4 varyingColor;
        //in vec3 varyingTexCoords;

        out vec4 out_frag_color;

        void main(void)
        {
            out_frag_color = vec4(1,1,1,1);
            //out_frag_color = varyingColor;
            //out_frag_color = vec4(varyingColor, 1) * texture(colorMap, varyingTexCoords.st);
            //out_frag_color = vec4(varyingColor, 1) * texture(colorMap, vec3(varyingTexCoords.st, 0));
            //out_frag_color = vec4(varyingColor, 1) * texture2DArray(colorMap, varyingTexCoords);
        }";

    Note that in this code the color data is accepted but not actually used. The geometry is output the same (wrong) whether the fragment shader uses varyingColor or not. Only if I comment out the line varyingColor = in_color; does the geometry output correctly. Originally the shaders took in vec3 inputs; I only modified them to take vec4s while troubleshooting.
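
    One commonly reported cause of exactly this works-on-NVIDIA, breaks-on-AMD pattern is relying on implicit attribute locations: without explicit binding, different drivers may assign in_position and in_color to different slots, so once a second attribute appears, the position array can end up feeding the wrong input. A hedged sketch of the usual fix, shown in LWJGL-style Java only for illustration (in OpenTK the equivalent call is GL.BindAttribLocation):

        import static org.lwjgl.opengl.GL20.*;

        // Bind attribute slots explicitly BEFORE linking, so every driver agrees
        // which vertex array feeds which shader input.
        static void bindAttribs(int program) {
            glBindAttribLocation(program, 0, "in_position");
            glBindAttribLocation(program, 1, "in_color");
            glLinkProgram(program);   // bindings take effect at link time
        }

    This is only one plausible diagnosis; checking glGetAttribLocation on both machines would confirm whether the locations actually differ.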

  • In which directory to write game save files/data?

    - by Klaim
    I need a definitive list of directories, one or more per platform, for where to put game save files and other game-generated data, either based on the OS vendor's specification or, where there is no recommendation, on common usage. Please provide one answer per platform, with the different directories. Also, an example of how to get the directory location in C++ or C is best, as those are the languages where it's hardest to get right. Locations:

      - Player's game data (saved games, config).
      - Shared game data (like high scores or config for all computer users).
      - Temporary game data (aka cache directory).

  • Collision planes confusion

    - by Jeffrey
    I'm following this tutorial by thecplusplusguy, and in the linked video he explains that for, say, the world basement and walls, we create the actual rendered walls (shown to the player) and then duplicate them, place the duplicates at the same coordinates as the rendered walls, and mark them as collision geometry (by setting their material to collision). His object loader function then treats objects whose material == collision as collision planes, which are not rendered but only used for collision checks. Now I'm pretty confused. Why would we add this kind of complexity to a problem that could easily be solved by a simple loadObject(string plane_object, bool check_collision);, which would: create only the walls object (by loading the .obj file given in plane_object), and also define it as a collision plane whenever check_collision is set to true. In this case we have lowered the complexity of his method and made it more flexible and faster to develop with (faster because we don't have to make a copy of each plane, and more flexible because we don't hardcode the object loader). The only case in which this method would not work is when we need hidden collision planes, and for that we could modify loadObject() like this: loadObject(string plane_object, bool check_collision = true, bool hide_object = false);, which would do the same as above but additionally show or hide the object based on hide_object. The final question is: am I right? What problems might my solution run into compared with his?
