Search Results

Search found 28230 results on 1130 pages for 'embedded development'.

  • What's a viable way to get public properties from child objects?

    - by Raven Dreamer
    I have a GameObject ("RoomOrganizer" in the picture below) with a "RoomManager" script, and one or more child objects, each with a "HasParallelogram" component attached, like so. I've also got the following in the aforementioned "RoomManager":

        void Awake () {
            Rect tempRect;
            HasParallelogram tempsc;
            foreach (Transform child in transform) {
                try {
                    tempsc = child.GetComponent<HasParallelogram>();
                    tempRect = tempsc.myRect;
                    blockedZoneList.Add(new Parallelogram(tempRect));
                    Debug.Log(tempRect.ToString());
                }
                catch (System.NullReferenceException) {
                    Debug.Log("Null Reference Caught");
                }
            }
        }

    Unfortunately, attempting to assign tempRect = tempsc.myRect causes a null reference at run time. Am I missing some crucial step? HasParallelogram is an empty script with a public Rect set in the editor and nothing else. What's the proper way to get a child's component?
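    If a child lacks the component, Unity's GetComponent simply returns null, so the exception-driven version above can be replaced by a plain null check. A minimal sketch under that assumption (blockedZoneList and Parallelogram as in the question):

        void Awake () {
            foreach (Transform child in transform) {
                HasParallelogram sc = child.GetComponent<HasParallelogram>();
                if (sc == null) {
                    continue; // this child has no HasParallelogram attached
                }
                blockedZoneList.Add(new Parallelogram(sc.myRect));
                Debug.Log(sc.myRect.ToString());
            }
        }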

  • How do I create a curved line or filled circle or generally a circle using C++/SDL?

    - by NoobScratcher
    Hello, I've been trying for ages to make a pixel circle using the putpixel function provided on the main SDL website. Here is that function:

        void putpixel(int x, int y, int color, SDL_Surface* surface) {
            unsigned int* ptr = static_cast<unsigned int*>(surface->pixels);
            int offset = y * (surface->pitch / sizeof(unsigned int));
            ptr[offset + x] = color;
        }

    My question is: how do I curve a line, or create a circle arc of pixels, or any other curved shape other than a rectangle, a single pixel, or a line? For example, a filled pixel circle. My idea was to change the x and y values of the pixel position using + and - to create the curves, but in practice that didn't produce the correct results. The goal is to be able to create a circle that is made out of pixels only, nothing else. Thank you to anyone who takes the time to read this question! :D
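    The standard answer is the midpoint circle algorithm, which walks one octant with integer arithmetic and mirrors each computed point eight ways. A minimal sketch (in C# for consistency with the other examples on this page; the int framebuffer is a stand-in for the SDL surface above, and a filled circle follows by drawing horizontal lines between the mirrored x values):

        class CircleDemo
        {
            const int W = 256, H = 256;
            static readonly int[] Pixels = new int[W * H]; // stand-in framebuffer

            static void PutPixel(int x, int y, int color)
            {
                if (x >= 0 && x < W && y >= 0 && y < H)
                    Pixels[y * W + x] = color;
            }

            static void DrawCircle(int cx, int cy, int r, int color)
            {
                int x = r, y = 0, err = 1 - r; // err tracks the midpoint test
                while (x >= y)
                {
                    // eight-way symmetry: one computed point gives eight pixels
                    PutPixel(cx + x, cy + y, color); PutPixel(cx - x, cy + y, color);
                    PutPixel(cx + x, cy - y, color); PutPixel(cx - x, cy - y, color);
                    PutPixel(cx + y, cy + x, color); PutPixel(cx - y, cy + x, color);
                    PutPixel(cx + y, cy - x, color); PutPixel(cx - y, cy - x, color);
                    y++;
                    if (err < 0) err += 2 * y + 1;
                    else { x--; err += 2 * (y - x) + 1; }
                }
            }
        }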

  • XNA Shader Texture Memory

    - by Alex
    I was wondering about texture optimization in XNA 4.0. Will the ContentManager send the texture data to the GPU directly when the texture gets loaded, or do I send the texture data to the GPU when I declare a texture in my shader? If it's the latter, what happens if I have 5 shaders all using the same texture: does that mean I send 5 instances of that texture data to the GPU, or am I simply telling the GPU which preloaded texture to use? Or does XNA do the heavy lifting in the background?
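    In XNA the upload happens once, when Content.Load creates the Texture2D; setting an effect parameter only binds a reference to the already-resident resource, and Content.Load also caches by asset name, so repeated loads return the same object. A hedged sketch (effectA, effectB and the "DiffuseMap" parameter name are hypothetical):

        protected override void LoadContent()
        {
            // Uploaded to the GPU here, once; later Loads of "stone" hit the cache.
            Texture2D stone = Content.Load<Texture2D>("stone");
            effectA.Parameters["DiffuseMap"].SetValue(stone); // binds a reference
            effectB.Parameters["DiffuseMap"].SetValue(stone); // same GPU texture, no second copy
        }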

  • How to design a separated tutorial mode?

    - by Sylpheed
    I'm working on a "social" game that's about 90% complete. One of the remaining features is the tutorial mode. Basically, the tutorial mode will restrict the user from accessing some parts of the UI and will limit some features (like store items). The tutorial only progresses if a certain event is triggered, specifically by following the tutorial. The code is ready and we already have an "almost" working game. The problem is that I hadn't foreseen the tutorial mode while I was doing those 90%. My requirement is that there shouldn't be any loading/transition from tutorial mode to normal mode. This means I have to pick up the progress from the tutorial (no re-rendering of assets and stuff). How should I design this in a way where I won't touch anything from my old code? I want it to be as easy as just plugging it in. I don't want to jam the tutorial into my old code, since that will lead to many bugs.
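    One common approach is to keep the tutorial entirely outside the game code as an observer: it listens to events the game already raises, advances a step queue, and exposes a set of locked UI ids that the UI layer consults before enabling a control. A minimal sketch of the idea (all names hypothetical; the only touch point is forwarding events you likely already have):

        using System.Collections.Generic;

        class TutorialController
        {
            readonly Queue<string> steps; // expected event ids, in tutorial order
            public HashSet<string> LockedUi { get; } = new HashSet<string>();
            public bool Active => steps.Count > 0;

            public TutorialController(IEnumerable<string> stepIds, IEnumerable<string> lockedUiIds)
            {
                steps = new Queue<string>(stepIds);
                foreach (var id in lockedUiIds) LockedUi.Add(id);
            }

            // Call this from the game's existing event hub; no other game code changes.
            public void OnGameEvent(string eventId)
            {
                if (Active && steps.Peek() == eventId)
                {
                    steps.Dequeue();               // advance the tutorial
                    if (!Active) LockedUi.Clear(); // seamless hand-off, nothing reloads
                }
            }
        }

    Normal mode is then just the state where nothing is locked, so there is no transition to render.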

  • 2D camera perspective projection from 3D coordinates -- HOW?

    - by Jack
    I am developing a camera for a 2D game with a top-down view that has depth. It's almost a 3D camera. Basically, every object has a Z even though it is in 2D, and similarly to parallax layers, their position, scale and rotation speed vary based on their Z. I guess this amounts to a perspective projection. But I am having trouble converting the objects' 3D coordinates into the 2D space of the screen so that everything has the correct perspective and scale. I never learned matrices, though I did dig into the topic a bit today. I tried without using matrices, thanks to the article linked here, but every attempt gave awkward results. I'm using ActionScript 3 and Flash 11+ (Starling), where the screen coordinates follow a left-handed coordinate system (illustration omitted).

    I can explain further what I did if you want to help me sort out what's wrong, or you can directly tell me how you would do it properly. In case you prefer the former, read on. These are the formulas I used:

        upload.wikimedia.org/math/1/c/8/1c89722619b756d05adb4ea38ee6f62b.png
        upload.wikimedia.org/math/d/4/0/d4069770c68cb8f1aa4b5cfc57e81bc3.png

    (Sorry, new users can't post images, but both are from the Wikipedia article linked above, section "Perspective projection". That's where you'll find what all the variables mean, too.) The long formula simplifies greatly because I believe a normal top-down 2D camera has no X/Y/Z rotation values (correct?). Then it becomes d = a - c. Still, I can't get it to work. Maybe you could explain what numbers I should put in a(xyz), c(xyz), theta(xyz), and particularly e(xyz)? I don't quite get how e is different from c in my case. c.z is also an issue for me. If the Z of the camera's target object is 0, should the camera's Z be something like -600 (= focal length of 600)? Whatever I do, it's wrong. I only got it to work when I used arbitrary calculations that "looked" right, like most cameras with parallax layers seem to do, but that's fake! ;) If I want objects to travel between Z layers I might as well do it right. :) Thanks a lot for your help!
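    With no rotations, the Wikipedia formula collapses to a divide-by-depth: d = a - c, then screen = d_xy * f / d_z. In those terms, e is just (0, 0, f) in camera space, which is why it feels redundant for an axis-aligned camera; and yes, placing the camera at z = -f (e.g. -600 for a focal length of 600) makes objects at z = 0 render at scale 1. A minimal sketch under those assumptions (C# for consistency with the other examples on this page):

        static class Projection
        {
            // Projects world point a to screen space for an unrotated camera at c.
            // "scale" is also the factor to apply to the sprite's size.
            public static void Project(
                float ax, float ay, float az,   // a: object position (world)
                float cx, float cy, float cz,   // c: camera position (world)
                float focal,                    // e.g. 600, plays the role of e.z
                float screenW, float screenH,
                out float sx, out float sy, out float scale)
            {
                float dz = az - cz;             // depth in front of the camera
                scale = focal / dz;             // farther objects shrink
                sx = (ax - cx) * scale + screenW / 2f;
                sy = (ay - cy) * scale + screenH / 2f;
            }
        }

    With cz = -600 and focal = 600, an object at z = 0 draws at 1:1 scale, at z = 600 it draws at half scale, and moving it between Z layers scales it continuously.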

  • XNA frame rate spikes in full screen mode

    - by ProgrammerAtWork
    I'm loading a simple texture and rotating it in XNA, and this works. But when I run it in full-screen 1920x1080 mode, I see spikes while my texture is rotating. If I run it windowed at 1920x1080 resolution, I don't get the spikes. The size of the texture does not seem to matter; I tried a 512 texture and a 2048 texture, and the same thing happens. Spikes in full screen, no spikes in windowed; resolution does not seem to matter, and Debug or Release does not seem to change anything either. Anyone got ideas of what could be the problem?

    Edit: I think this problem has something to do with the vertical retrace. Set this property:

        _graphicsDeviceManager.SynchronizeWithVerticalRetrace = false;

    You'll lose vsync, but it will not stutter.
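    For reference, the relevant knobs usually live in the Game constructor; a hedged sketch of the combination the edit above describes (disabling vsync trades the stutter for possible tearing, and IsFixedTimeStep is the setting commonly toggled alongside it):

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            graphics.PreferredBackBufferWidth = 1920;
            graphics.PreferredBackBufferHeight = 1080;
            graphics.IsFullScreen = true;
            graphics.SynchronizeWithVerticalRetrace = false; // as in the edit above
            IsFixedTimeStep = false; // update per frame instead of on a fixed clock
        }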

  • Detecting walls or floors in pygame

    - by Serial
    I am trying to make bullets bounce off walls, but I can't figure out how to correctly do the collision detection. What I am currently doing is iterating through all the solid blocks, and if the bullet hits the bottom, top or sides, its vector is adjusted accordingly. However, sometimes when I shoot, the bullet doesn't bounce; I think it happens when I shoot at a border between two blocks. Here is the update method for my Bullet class:

        def update(self, dt):
            if self.can_bounce:
                # if the bullet hasn't bounced, find its vector using the
                # mouse-click pos and player pos
                speed = -10.
                range = 200
                distance = [self.mouse_x - self.player[0],
                            self.mouse_y - self.player[1]]
                norm = math.sqrt(distance[0] ** 2 + distance[1] ** 2)
                direction = [distance[0] / norm, distance[1] / norm]
                bullet_vector = [direction[0] * speed, direction[1] * speed]
                self.dx = bullet_vector[0]
                self.dy = bullet_vector[1]

            # check each block for collision
            for block in self.game.solid_blocks:
                last = self.rect.copy()
                if self.rect.colliderect(block):
                    # each test checks whether the bullet hit the top, bottom,
                    # left or right side of the block it is colliding with
                    topcheck = self.rect.top < block.rect.bottom and self.rect.top > block.rect.top
                    bottomcheck = self.rect.bottom > block.rect.top and self.rect.bottom < block.rect.bottom
                    rightcheck = self.rect.right > block.rect.left and self.rect.right < block.rect.right
                    leftcheck = self.rect.left < block.rect.right and self.rect.left > block.rect.left
                    if self.can_bounce:
                        if topcheck:
                            self.rect = last
                            self.dy *= -1
                            self.can_bounce = False
                            print "top"
                        if bottomcheck:
                            self.rect = last
                            self.dy *= -1  # bottom check
                            self.can_bounce = False
                            print "bottom"
                        if rightcheck:
                            self.rect = last
                            self.dx *= -1  # right check
                            self.can_bounce = False
                            print "right"
                        if leftcheck:
                            self.rect = last
                            self.dx *= -1  # left check
                            self.can_bounce = False
                            print "left"
                    else:
                        # if it has already bounced and is colliding again, kill it
                        self.kill()

            for enemy in self.game.enemies_list:
                if self.rect.colliderect(enemy):
                    self.kill()

            # update position
            self.rect.x -= self.dx
            self.rect.y -= self.dy

    This definitely isn't the best way to do it, but I can't think of another way. If anyone has done this or can help, that would be awesome!

  • What sort of leaderboard for my game?

    - by Martin
    I recently published a word game for Windows Phone and I am really happy to have some players. The game is entirely offline, and at the end of a game the player's score is published to a server. I'm collecting the scores to build a leaderboard. Right now, I don't believe that the leaderboard I offer to my users is appropriate. I essentially accumulate the scores of all of a user's games for a given day, and that becomes their score. So if Player 1 plays 3 games and gets 100, 150 and 200 points, their score for the day is 450 points. I would like to get your ideas and opinions. How do I keep my game challenging and engaging with a good leaderboard? Should I continue accumulating the score for a day? Should I just keep the best score? Thanks!

  • Linking one uniform variable to many shaders

    - by Winged
    Let's say that I have 3 programs, and in each of those programs there is a view matrix uniform, which should be the same in all of them. Right now, when my camera moves, I need to re-upload the modified matrix to every program separately. Is it possible to create some kind of global uniform which is constant for all programs linked to it, so I could upload the matrix just once? I tried creating a globalUniforms object which looked kind of like this:

        var globalUniforms = {
            program: {},
            // (...)
            vMatrixUniform: null,
            // (...)
            initialize: function() {
                vMatrixUniform = gl.getUniformLocation(this.program, 'uVMatrix');
            }
        };

    So I could just link it to the proper programs like this:

        program.vMatrixUniform = globalUniforms.vMatrixUniform;

    and then pass the matrix like this:

        if (camera.isDirty.viewMatrix !== false) {
            camera.isDirty.viewMatrix = false;
            gl.uniformMatrix4fv(globalUniforms.vMatrixUniform, false, camera.viewMatrix.element);
        }

    But unfortunately it throws an error:

        Uncaught exception: gl.INVALID_VALUE was caused by call to: getUniformLocation
        called from line 272, column 2 in () in mysite/js/mesh.js:
        vMatrixUniform = gl.getUniformLocation(this.program, 'uVMatrix');

    Summing up: is there a more efficient way of managing shaders which follows my logic?

  • Viewport / Camera Calculation in 2D Game

    - by Dave
    We have a 2D game with some sprites and tiles and a camera/viewport that "moves" around the scene. So far so good, if we didn't have some special behaviour for the camera/viewport translation. Normally you could stick the camera to your player figure and center it, resulting in a very cheap, undergraduate translation equation like:

        vec_translation -/+= speed   // depending on what keys are pressed, WASD as default

    But we want our player figure to be able to actually reach the bounds once the viewport/camera has reached its maximum translation. We came up with the following solution (only keys A and D are shown here; the rest is just an adaptation of the calculation, or maybe YOUR super-cool and elegant solution :) ):

        if (keys[A]) {
            playerX -= speed;
            if (playerScreenX <= width / 2 && tx < 0) {
                playerScreenX = width / 2;
                tx += speed;
            } else if (playerScreenX <= width / 2 && tx >= 0) {
                playerScreenX -= speed;
                tx = 0;
                if (playerScreenX < 0) playerScreenX = 0;
            } else if (playerScreenX >= width / 2 && tx < 0) {
                playerScreenX -= speed;
            }
        }
        if (keys[D]) {
            playerX += speed;
            if (playerScreenX >= width / 2 && (-tx + width) < sceneWidth) {
                playerScreenX = width / 2;
                tx -= speed;
            }
            if (playerScreenX >= width / 2 && (-tx + width) >= sceneWidth) {
                playerScreenX += speed;
                tx = -(sceneWidth - width);
                if (playerScreenX >= width - player.width) playerScreenX = width - player.width;
            }
            if (playerScreenX <= width / 2 && (-tx + width) < sceneWidth) {
                playerScreenX += speed;
            }
        }

    I think the code is rather self-explanatory: keys is a flag container for currently active keys, playerX/playerY is the position of the player relative to the world origin, tx/ty are the translation components vital to background/NPC/item offset calculation, playerScreenX/playerScreenY is the actual position of the player figure (sprite) on screen, and width/height are the dimensions of the camera/viewport. This all looks quite nice and works well, but there is a very small and nasty calculation error which sums up to a visible effect. Consider the following piece of code:

        if (playerScreenX <= width / 2 && tx < 0) {
            playerScreenX = width / 2;
            tx += speed;
        }

    It can be translated into plain English as: if the x position of your player figure on screen is less than or equal to half of your display/camera/viewport size, AND there is enough space left to the LEFT of your viewport/camera, then set the player's x position on screen to half the width and increase the translation (because we subtract the translation from something we want to move). Easy, right?! Doing this creates a small delta between playerX and playerScreenX. After so much talking, my question appears here at the bottom of this document: how do I tie the calculation of the player-on-screen position to the actual position of the player, while having a viewport that is not always centered around the player figure? Here is a small test case in Processing: http://pastebin.com/bFaTauaa Thank you for reading this far, and thank you in advance for probably answering my question.
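    One way to remove the drift entirely is to stop integrating playerScreenX at all: move the player only in world space, then derive the camera offset and the on-screen position from it every frame. A minimal sketch of that idea in C# (using the question's variable names; note that Tx here is measured positive-rightwards, i.e. the negation of the question's tx, and Math.Clamp assumes .NET Core 2.0 or later):

        using System;

        class Camera2D
        {
            public float PlayerX;        // world-space player position
            public float Tx;             // viewport's world offset (left edge)
            public float PlayerScreenX;  // derived each frame, never integrated

            public void Update(float dir, float speed, float width,
                               float playerWidth, float sceneWidth)
            {
                PlayerX += dir * speed;  // dir is -1 for A, +1 for D
                PlayerX = Math.Clamp(PlayerX, 0, sceneWidth - playerWidth);
                Tx = Math.Clamp(PlayerX + playerWidth / 2f - width / 2f,
                                0, sceneWidth - width); // camera follows, then clamps
                PlayerScreenX = PlayerX - Tx; // no accumulated delta possible
            }
        }

    Because PlayerScreenX is recomputed from PlayerX rather than accumulated alongside it, the two can never diverge, and the player reaches the screen edge exactly when the camera runs out of scene.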

  • Is AGS outdated for Point & Click Adventures?

    - by Aidan Moore
    Is Adventure Game Studio (AGS) outdated? I am working on a point-and-click adventure game built on the AGS engine, and just recently the question of "is this outdated?" has come up. I'll admit AGS is rather old, and it kind of went out of style along with the P&C genre itself, but I have not found anything quite like it that specializes in this specific format of game. So my big question is not only "is this outdated?" but also "is there a better alternative?"

  • Incorrect lighting results with deferred rendering

    - by Lasse
    I am trying to render a light pass to a texture which I will later apply to the scene, but I seem to calculate the light position wrong. I am working in view space. In the image above (not shown here), I am outputting the attenuation of a point light which currently covers the whole screen. The light is at position (0, 10, 0), and I transform it to view space first:

        Vector4 pos;
        Vector4 tmp = new Vector4 (light.Position, 1); // Transform light position for shader
        Vector4.Transform (ref tmp, ref Camera.ViewMatrix, out pos);
        shader.SendUniform ("LightViewPosition", ref pos);

    Now, to me that does not look as it should. What I think it should look like is that the white area should be at the center of the scene. The camera is at the corner of the scene, and it seems as if the light moves along with the camera. Here's the fragment shader code:

        void main() {
            // default black color
            vec3 color = vec3(0);

            // Pixel coordinates on screen without depth
            vec2 PixelCoordinates = gl_FragCoord.xy / ScreenSize;

            // Get pixel position using depth from texture
            vec4 depthtexel = texture( DepthTexture, PixelCoordinates );
            float depthSample = unpack_depth(depthtexel);

            // Get pixel coordinates in camera space by multiplying the
            // screen-space coordinate by the inverse projection matrix
            vec4 world = (ImP * RemapMatrix * vec4(PixelCoordinates, depthSample, 1.0));

            // Undo the perspective calculations
            vec3 pixelPosition = (world.xyz / world.w) * 3;

            // How far the light should reach from its point of origin
            float lightReach = LightColor.a / 2;

            // Vector between light and pixel
            vec3 lightDir = (LightViewPosition.xyz - pixelPosition);
            float lightDistance = length(lightDir);
            vec3 lightDirN = normalize(lightDir);

            // Discard pixels too far from light source
            //if(lightReach < lightDistance) discard;

            // Get normal from texture
            vec3 normal = normalize((texture( NormalTexture, PixelCoordinates ).xyz * 2) - 1);

            // Half vector between the light direction and eye, used for the specular component
            vec3 halfVector = normalize(lightDirN + normalize(-pixelPosition));

            // Dot product of normal and light direction
            float NdotL = dot(normal, lightDirN);
            float attenuation = pow(lightReach / lightDistance, LightFalloff);

            // If pixel is lit by the light
            if(NdotL > 0) {
                // I have moved stuff from here to above so I can debug them.
                // Diffuse light color
                color += LightColor.rgb * NdotL * attenuation;
                // Specular light color
                color += LightColor.xyz * pow(max(dot(halfVector, normal), 0.0), 4.0) * attenuation;
            }

            RT0 = vec4(color, 1);
            //RT0 = vec4(pixelPosition, 1);
            //RT0 = vec4(depthSample, depthSample, depthSample, 1);
            //RT0 = vec4(NdotL, NdotL, NdotL, 1);
            RT0 = vec4(attenuation, attenuation, attenuation, 1);
            //RT0 = vec4(lightReach, lightReach, lightReach, 1);
            //RT0 = depthtexel;
            //RT0 = 100 / vec4(lightDistance, lightDistance, lightDistance, 1);
            //RT0 = vec4(lightDirN, 1);
            //RT0 = vec4(halfVector, 1);
            //RT0 = vec4(LightColor.xyz,1);
            //RT0 = vec4(LightViewPosition.xyz/100, 1);
            //RT0 = vec4(LightPosition.xyz, 1);
            //RT0 = vec4(normal,1);
        }

    What am I doing wrong here?

  • fast java2d translucency

    - by mdriesen
    I'm trying to draw a bunch of translucent circles on a Swing JComponent. This isn't exactly fast, and I was wondering if there is a way to speed it up. My custom JComponent has the following paintComponent method:

        public void paintComponent(Graphics g) {
            Rectangle view = g.getClipBounds();
            VolatileImage image = createVolatileImage(view.width, view.height);
            Graphics2D buffer = image.createGraphics();

            // translate to camera location
            buffer.translate(-cx, -cy);

            // renderables contains all currently visible objects
            for (Renderable r : renderables) {
                r.paint(buffer);
            }

            g.drawImage(image.getSnapshot(), view.x, view.y, this);
        }

    The paint method of my circles is as follows:

        public void paint(Graphics2D graphics) {
            graphics.setPaint(paint);
            graphics.fillOval(x, y, radius, radius);
        }

    The paint is just an RGBA color with alpha < 255, i.e. Color(int r, int g, int b, int a). It works fast enough for opaque objects, but is there a simple way to speed this up for translucent ones?

  • Slick 2d scrolling off screen

    - by Peter
    I have something scrolling into and out of the screen. When it goes off screen, I want it to scroll into the screen at another location. What I do is grab the last pixels at the screen's edge using g.copyArea, draw them with g.drawImage at the edge of the screen, and then call g.translate to create room for the next row in the next render cycle. My problem is that I get a single pixel row which is not copied onto the canvas, whereas I want each row to be added and then translated, so that the image that scrolled off screen is recreated on the other side of the screen. Here is my code; maybe there is a better way of doing this. I'm open to any suggestions, because I'm totally stuck.

        @Override
        public void render(GameContainer gc, Graphics g) throws SlickException {
            //g.setClip(0, 0, 300, gc.getHeight());
            g.translate(0, y);
            g.drawImage(image, 0, 200);
            g.resetTransform();
            //g.clearClip();
            g.copyArea(rightImage, 0, gc.getHeight() - 1);
            g.drawImage(rightImage, 300, 0);
            g.translate(0, y);
            y = y + 3;
        }

  • Cocos2d Tiled Dynamic Object Layer

    - by Rodrigo Camargo
    I'm trying to develop a Cocos2d tile-based game using a sort of "dynamic" object layer. What I want is that after the tiled map is loaded, the user can drag something onto the map, and that becomes an event when the hero passes over it. I know how to build an object layer in Tiled, but it seems that is for fixed positions, and what I want is a dynamic action position based on what the user selects. For instance, the user can drag a rock onto a tile, and when the character hits that rock he may die, or something. I'm a little lost about how to make it work. Do you have any idea of what I should use or what I should look for? Thanks in advance!

  • Why does my 3D model not translate the way I expect? [closed]

    - by ChocoMan
    In my first image, my model displays correctly. But when I move the model's position along the Z-axis (forward), the model appears to sink even though its Y value doesn't change, and if I keep going, the model disappears into the ground (second and third images not shown here). Any suggestions as to how I can get the model to translate properly visually? Here is how I'm drawing the model and the terrain in Draw():

        cameraPosition = new Vector3(camX, camY, camZ);

        // Copy any parent transforms.
        Matrix[] transforms = new Matrix[mShockwave.Bones.Count];
        mShockwave.CopyAbsoluteBoneTransformsTo(transforms);
        Matrix[] ttransforms = new Matrix[terrain.Bones.Count];
        terrain.CopyAbsoluteBoneTransformsTo(ttransforms);

        // Draw the model. A model can have multiple meshes, so loop.
        foreach (ModelMesh mesh in mShockwave.Meshes)
        {
            // This is where the mesh orientation is set, as well
            // as our camera and projection.
            foreach (BasicEffect effect in mesh.Effects)
            {
                effect.EnableDefaultLighting();
                effect.PreferPerPixelLighting = true;
                effect.World = transforms[mesh.ParentBone.Index]
                    * Matrix.CreateRotationY(modelRotation)
                    * Matrix.CreateTranslation(modelPosition);
                // Looking at the model (picture shouldn't change other than rotation)
                effect.View = Matrix.CreateLookAt(cameraPosition, modelPosition, Vector3.Up);
                effect.Projection = Matrix.CreatePerspectiveFieldOfView(
                    MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f);
                effect.TextureEnabled = true;
            }
            // Draw the mesh, using the effects set above.
            prepare3d();
            mesh.Draw();
        }

        // Terrain test
        foreach (ModelMesh meshT in terrain.Meshes)
        {
            foreach (BasicEffect effect in meshT.Effects)
            {
                effect.EnableDefaultLighting();
                effect.PreferPerPixelLighting = true;
                effect.World = ttransforms[meshT.ParentBone.Index]
                    * Matrix.CreateRotationY(0)
                    * Matrix.CreateTranslation(terrainPosition);
                // Looking at the terrain (picture shouldn't change other than rotation)
                effect.View = Matrix.CreateLookAt(cameraPosition, terrainPosition, Vector3.Up);
                effect.Projection = Matrix.CreatePerspectiveFieldOfView(
                    MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f);
                effect.TextureEnabled = true;
            }
            // Draw the mesh, using the effects set above.
            prepare3d();
            meshT.Draw();
            DrawText();
        }
        base.Draw(gameTime);

    I'm suspecting that there may be something wrong with how I'm handling my camera. The model rotates fine on its Y-axis.
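    One likely culprit, hedged: the model and the terrain are drawn with two different view matrices (one looks at modelPosition, the other at terrainPosition), so the two draws disagree about where the camera points, and the model appears to sink relative to the ground as it moves. A sketch of the usual arrangement, computing shared matrices once per frame (cameraTarget is a hypothetical fixed look-at point):

        Matrix view = Matrix.CreateLookAt(cameraPosition, cameraTarget, Vector3.Up);
        Matrix projection = Matrix.CreatePerspectiveFieldOfView(
            MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f);

        // ...then, inside BOTH mesh loops:
        effect.View = view;             // identical for every object in the scene
        effect.Projection = projection; // only effect.World differs per object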

  • Proper updating of GeoClipMaps

    - by thr
    I have been working on an implementation of GPU-based geoclipmaps, but there is a section of the GPU Gems 2 article that I just can't seem to understand, specifically this paragraph, and more precisely the bolded part:

        The choice of grid size n = 2^k - 1 has the further advantage that the finer
        level is never exactly centered with respect to its parent next-coarser level.
        In other words, it is always offset by 1 grid unit either left or right, as
        well as either top or bottom (see Figure 2-4), depending on the position of
        the viewpoint. In fact, it is necessary to allow a finer level to shift while
        its next-coarser level stays fixed, and therefore the finer level must
        sometimes be off-center with respect to the next-coarser level. An alternative
        choice of grid size, such as n = 2^k - 3, would provide the possibility for
        exact centering.

    Let's take an example image from the article. My "understanding" of the way the clipmaps were updated was that you floor the position of the viewpoint to an int, and thereby get the center vertex point; if this is not the same as the previous center point, you update the entire map. Now, this obviously is not the case. What I am failing to understand is this: if you look at the image above, if the viewpoint were to move one unit to the right, then the inner ring (the one just around the viewpoint plus the white center square) would end up with a 1-unit space on both its left and right sides. But there is nothing in the paper that deals with this; what I mean is that it would end up looking like this (excuse my crummy cut-and-paste editing of the above image). This is obviously not a valid state of the clipmap. So, would the solution be that a clip ring (layer) can only move in increments of the ring/layer it's contained within? Wouldn't this end up being very restrictive? I feel like I am missing some crucial understanding of parts of the algorithm, but I have been over both this paper and the original paper from 2004, and I just can't see what I am not getting.

  • How to categorize textures into atlases

    - by Esa
    I am going to use texture atlasing for the first time in my games, and at first it seemed like a great idea to split textures into atlases by terrain theme, e.g. ForestTextures, WinterTextures, etc. But that could cause a problem when, for example, a flower has to use a transparency shader while other models use a diffuse shader, since those cannot be atlased into the same texture. Thus, would atlasing textures into themes as described, and then splitting them by shader, like ForestDiffuse and ForestTransparent, be good? Or is there a better way to categorize and build them?

  • Is 2 lines of push/pop code for each pre-draw-state too many?

    - by Griffin
    I'm trying to simplify vector graphics management in XNA, currently by incorporating state preservation. 2X lines of push/pop code for X states feels like too many, and it just feels wrong to have two lines of code that look identical except for one being push() and the other being pop(). The goal is to eradicate this repetitiveness, and I hoped to do so by creating an interface through which a client can pass refs to the classes/structs he wants restored after the rendering calls. Also note that many beginner programmers will be using this, so forcing lambda expressions or other advanced C# features to be used in client code is not a good idea. I attempted to accomplish my goal by using Daniel Earwicker's Ptr class:

        public class Ptr<T>
        {
            Func<T> getter;
            Action<T> setter;

            public Ptr(Func<T> g, Action<T> s) { getter = g; setter = s; }

            public T Deref
            {
                get { return getter(); }
                set { setter(value); }
            }
        }

    an extension method:

        // doesn't work for structs since this is just syntactic sugar
        public static Ptr<T> GetPtr<T>(this T obj)
        {
            return new Ptr<T>(() => obj, v => obj = v);
        }

    and a Push function:

        // returns a Pop Action for later calling
        public static Action Push<T>(ref T structure) where T : struct
        {
            T pushedValue = structure; // copies the struct data
            Ptr<T> p = structure.GetPtr();
            return new Action(() => { p.Deref = pushedValue; });
        }

    However, this doesn't work, as stated in the code. How might I accomplish my goal? Example of the code to be refactored:

        protected override void RenderLocally(GraphicsDevice device)
        {
            if (!(bool)isCompiled) { Compile(); }

            // TODO: make sure state settings don't implicitly delete any buffers/resources
            RasterizerState oldRasterState = device.RasterizerState;
            DepthFormat oldFormat = device.PresentationParameters.DepthStencilFormat;
            DepthStencilState oldBufferState = device.DepthStencilState;
            {
                // Rendering code
            }
            device.RasterizerState = oldRasterState;
            device.DepthStencilState = oldBufferState;
            device.PresentationParameters.DepthStencilFormat = oldFormat;
        }
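    One beginner-friendly alternative that avoids both Ptr<T> and lambdas in client code is an IDisposable guard: capture the states in its constructor, restore them in Dispose, and let a using block collapse each push/pop pair into one line. A minimal sketch, with the state set chosen to match the example above:

        using System;
        using Microsoft.Xna.Framework.Graphics;

        sealed class DeviceStateGuard : IDisposable
        {
            readonly GraphicsDevice device;
            readonly RasterizerState rasterizer;      // captured ("pushed") here
            readonly DepthStencilState depthStencil;
            readonly DepthFormat depthFormat;

            public DeviceStateGuard(GraphicsDevice device)
            {
                this.device = device;
                rasterizer = device.RasterizerState;
                depthStencil = device.DepthStencilState;
                depthFormat = device.PresentationParameters.DepthStencilFormat;
            }

            public void Dispose() // restored ("popped") here, even on exceptions
            {
                device.RasterizerState = rasterizer;
                device.DepthStencilState = depthStencil;
                device.PresentationParameters.DepthStencilFormat = depthFormat;
            }
        }

        // Usage inside RenderLocally:
        using (new DeviceStateGuard(device))
        {
            // Rendering code; state is restored when the block exits
        }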

  • How does gluLookAt work?

    - by Chan
    From my understanding,

        gluLookAt(
            eye_x, eye_y, eye_z,
            center_x, center_y, center_z,
            up_x, up_y, up_z
        );

    is equivalent to:

        glRotatef(B, 0.0, 0.0, 1.0);
        glRotatef(A, wx, wy, wz);
        glTranslatef(-eye_x, -eye_y, -eye_z);

    But when I print out the modelview matrix, the call to glTranslatef() doesn't seem to work properly. Here is the code snippet:

        #include <stdlib.h>
        #include <stdio.h>
        #include <GL/glut.h>
        #include <iomanip>
        #include <iostream>
        #include <string>
        using namespace std;

        static const int Rx = 0;
        static const int Ry = 1;
        static const int Rz = 2;
        static const int Ux = 4;
        static const int Uy = 5;
        static const int Uz = 6;
        static const int Ax = 8;
        static const int Ay = 9;
        static const int Az = 10;
        static const int Tx = 12;
        static const int Ty = 13;
        static const int Tz = 14;

        void init() {
            glClearColor(0.0, 0.0, 0.0, 0.0);
            glEnable(GL_DEPTH_TEST);
            glShadeModel(GL_SMOOTH);
            glEnable(GL_LIGHTING);
            glEnable(GL_LIGHT0);
            GLfloat lmodel_ambient[] = { 0.8, 0.0, 0.0, 0.0 };
            glLightModelfv(GL_LIGHT_MODEL_AMBIENT, lmodel_ambient);
        }

        void displayModelviewMatrix(float MV[16]) {
            int SPACING = 12;
            cout << left;
            cout << "\tMODELVIEW MATRIX\n";
            cout << "--------------------------------------------------" << endl;
            cout << setw(SPACING) << "R" << setw(SPACING) << "U" << setw(SPACING) << "A" << setw(SPACING) << "T" << endl;
            cout << "--------------------------------------------------" << endl;
            cout << setw(SPACING) << MV[Rx] << setw(SPACING) << MV[Ux] << setw(SPACING) << MV[Ax] << setw(SPACING) << MV[Tx] << endl;
            cout << setw(SPACING) << MV[Ry] << setw(SPACING) << MV[Uy] << setw(SPACING) << MV[Ay] << setw(SPACING) << MV[Ty] << endl;
            cout << setw(SPACING) << MV[Rz] << setw(SPACING) << MV[Uz] << setw(SPACING) << MV[Az] << setw(SPACING) << MV[Tz] << endl;
            cout << setw(SPACING) << MV[3] << setw(SPACING) << MV[7] << setw(SPACING) << MV[11] << setw(SPACING) << MV[15] << endl;
            cout << "--------------------------------------------------" << endl;
            cout << endl;
        }

        void reshape(int w, int h) {
            float ratio = static_cast<float>(w) / h;
            glViewport(0, 0, w, h);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            gluPerspective(45.0, ratio, 1.0, 425.0);
        }

        void draw() {
            float m[16];
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            glGetFloatv(GL_MODELVIEW_MATRIX, m);
            gluLookAt(
                300.0f, 0.0f, 0.0f,
                0.0f, 0.0f, 0.0f,
                0.0f, 1.0f, 0.0f
            );
            glColor3f(1.0, 0.0, 0.0);
            glutSolidCube(100.0);
            glGetFloatv(GL_MODELVIEW_MATRIX, m);
            displayModelviewMatrix(m);
            glutSwapBuffers();
        }

        int main(int argc, char** argv) {
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
            glutInitWindowSize(400, 400);
            glutInitWindowPosition(100, 100);
            glutCreateWindow("Demo");
            glutReshapeFunc(reshape);
            glutDisplayFunc(draw);
            init();
            glutMainLoop();
            return 0;
        }

    No matter what value I use for the eye vector, (300, 0, 0) or (0, 300, 0) or (0, 0, 300), the translation column is the same, which doesn't make any sense, because the code runs in backward order, so glTranslatef should apply first, then the two rotations. Plus, the rotation part of the modelview matrix should be completely independent of the translation column, so what would cause this weird behavior? Here is the output when the eye vector is (0.0f, 300.0f, 0.0f):

            MODELVIEW MATRIX
        --------------------------------------------------
        R           U           A           T
        --------------------------------------------------
        0           0           0           0
        0           0           0           0
        0           1           0           -300
        0           0           0           1
        --------------------------------------------------

    I would expect the T column to be (0, -300, 0)! So could anyone help me explain this? The implementation of gluLookAt from http://www.mesa3d.org:

        void GLAPIENTRY
        gluLookAt(GLdouble eyex, GLdouble eyey, GLdouble eyez,
                  GLdouble centerx, GLdouble centery, GLdouble centerz,
                  GLdouble upx, GLdouble upy, GLdouble upz)
        {
            float forward[3], side[3], up[3];
            GLfloat m[4][4];

            forward[0] = centerx - eyex;
            forward[1] = centery - eyey;
            forward[2] = centerz - eyez;

            up[0] = upx;
            up[1] = upy;
            up[2] = upz;

            normalize(forward);

            /* Side = forward x up */
            cross(forward, up, side);
            normalize(side);

            /* Recompute up as: up = side x forward */
            cross(side, forward, up);

            __gluMakeIdentityf(&m[0][0]);
            m[0][0] = side[0];
            m[1][0] = side[1];
            m[2][0] = side[2];

            m[0][1] = up[0];
            m[1][1] = up[1];
            m[2][1] = up[2];

            m[0][2] = -forward[0];
            m[1][2] = -forward[1];
            m[2][2] = -forward[2];

            glMultMatrixf(&m[0][0]);
            glTranslated(-eyex, -eyey, -eyez);
        }
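    The matrix is actually consistent with what the Mesa code computes: glMultMatrixf runs before glTranslated, so the translation column holds the rotated eye, not the raw eye. In matrix form:

        \[
        M \;=\; R\,T(-\mathbf{e}) \;=\;
        \begin{pmatrix} R & -R\,\mathbf{e} \\ \mathbf{0}^{T} & 1 \end{pmatrix},
        \qquad
        R \;=\; \begin{pmatrix} \mathbf{s}^{T} \\ \mathbf{u}'^{T} \\ -\mathbf{f}^{T} \end{pmatrix}
        \]

    where f, s and u' are the forward, side and recomputed up vectors from the code above. With eye = (0, 300, 0) and center at the origin, f = (0, -1, 0), so the third row of R is (0, 1, 0) and the translation column becomes R(-e) = (0, 0, -300), exactly the printed T. The zero rows are a second, separate problem: this eye makes the view direction parallel to up = (0, 1, 0), so side = f x up degenerates to the zero vector and the result is no longer a valid rotation; pick a non-collinear up, e.g. (0, 0, 1), when looking along the y axis.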

  • (Unity) Getting a mirrored mesh from my data structure

    - by Steve
    Here's the background: I'm in the beginning stages of an RTS game in Unity. I have a procedurally generated terrain with a perlin-noise height map, as well as a function to generate a river. The problem is that the graphical creation of the map takes the data structure of the map and rotates it by 180 degrees. I noticed this problem when I was creating my rivers. I would set the river's height to flat, and noticed that the tiles that were actually flat in the graphical representation were flipped and mirrored. Here are 3 screenshots of the map from different angles: http://imgur.com/a/VLHHq As you can see, if you flipped the river (graphically) by 180 degrees on the z axis, it would fit where the terrain is flattened. I suspect it is being caused by a misunderstanding on my part of how vertices work. Alas, here is a snippet of the code that is used.

    This code creates a new array of Tile objects, which hold the information for each tile, including its type, coordinate, height, and its 4 vertices:

        public DTileMap (int size_x, int size_y)
        {
            this.size_x = size_x;
            this.size_y = size_y;

            // Initialize map_data array of Tile objects
            map_data = new Tile[size_x, size_y];

            for (int j = 0; j < size_y; j++) {
                for (int i = 0; i < size_x; i++) {
                    map_data [i, j] = new Tile ();
                    map_data [i, j].coordinate.x = (int)i;
                    map_data [i, j].coordinate.y = (int)j;
                    map_data [i, j].vertices [0] = new Vector3 (i * GTileMap.TileMap.tileSize, map_data [i, j].Height, -j * GTileMap.TileMap.tileSize);
                    map_data [i, j].vertices [1] = new Vector3 ((i + 1) * GTileMap.TileMap.tileSize, map_data [i, j].Height, -(j) * GTileMap.TileMap.tileSize);
                    map_data [i, j].vertices [2] = new Vector3 (i * GTileMap.TileMap.tileSize, map_data [i, j].Height, -(j - 1) * GTileMap.TileMap.tileSize);
                    map_data [i, j].vertices [3] = new Vector3 ((i + 1) * GTileMap.TileMap.tileSize, map_data [i, j].Height, -(j - 1) * GTileMap.TileMap.tileSize);
                }
            }
        }

    This code sets the river tiles to height 0:

        foreach (Tile t in map_data) {
            if (t.realType == "Water") {
                t.vertices [0].y = 0f;
                t.vertices [1].y = 0f;
                t.vertices [2].y = 0f;
                t.vertices [3].y = 0f;
            }
        }

    And below is the code to generate the actual graphics from the data:

        public void BuildMesh ()
        {
            DTileMap.DTileMap map = new DTileMap.DTileMap (size_x, size_z);
            int numTiles = size_x * size_z;
            int numTris = numTiles * 2;
            int vsize_x = size_x + 1;
            int vsize_z = size_z + 1;
            int numVerts = vsize_x * vsize_z;

            // Generate the mesh data
            Vector3[] vertices = new Vector3[numVerts];
            Vector3[] normals = new Vector3[numVerts];
            Vector2[] uv = new Vector2[numVerts];
            int[] triangles = new int[numTris * 3];

            int x, z;
            for (z = 0; z < vsize_z; z++) {
                for (x = 0; x < vsize_x; x++) {
                    normals [z * vsize_x + x] = Vector3.up;
                    uv [z * vsize_x + x] = new Vector2 ((float)x / size_x, 1f - (float)z / size_z);
                }
            }

            for (z = 0; z < vsize_z; z += 1) {
                for (x = 0; x < vsize_x; x += 1) {
                    if (x == vsize_x - 1 && z == vsize_z - 1) {
                        vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x - 1, z - 1].vertices [3];
                    } else if (z == vsize_z - 1) {
                        vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x, z - 1].vertices [2];
                    } else if (x == vsize_x - 1) {
                        vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x - 1, z].vertices [1];
                    } else {
                        vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x, z].vertices [0];
                        vertices [z * vsize_x + x + 1] = DTileMap.DTileMap.map_data [x, z].vertices [1];
                        vertices [(z + 1) * vsize_x + x] = DTileMap.DTileMap.map_data [x, z].vertices [2];
                        vertices [(z + 1) * vsize_x + x + 1] = DTileMap.DTileMap.map_data [x, z].vertices [3];
                    }
                }
            }

            for (z = 0; z < size_z; z++) {
                for (x = 0; x < size_x; x++) {
                    int squareIndex = z * size_x + x;
                    int triOffset = squareIndex * 6;
                    triangles [triOffset + 0] = z * vsize_x + x + 0;
                    triangles [triOffset + 2] = z * vsize_x + x + vsize_x + 0;
                    triangles [triOffset + 1] = z * vsize_x + x + vsize_x + 1;
                    triangles [triOffset + 3] = z * vsize_x + x + 0;
                    triangles [triOffset + 5] = z * vsize_x + x + vsize_x + 1;
                    triangles [triOffset + 4] = z * vsize_x + x + 1;
                }
            }

            // Create a new Mesh and populate with the data
            Mesh mesh = new Mesh ();
            mesh.vertices = vertices;
            mesh.triangles = triangles;
            mesh.normals = normals;
            mesh.uv = uv;

            // Assign our mesh to our filter/renderer/collider
            MeshFilter mesh_filter = GetComponent<MeshFilter> ();
            MeshCollider mesh_collider = GetComponent<MeshCollider> ();
            mesh_filter.mesh = mesh;
            mesh_collider.sharedMesh = mesh;
            calculateMeshTangents (mesh);
            BuildTexture (map);
        }

    If this looks familiar to you, it's because I got most of it from Quill18. I've been slowly adapting it for my uses. And please include any suggestions you have for my code; I'm still in the very early prototyping stage.
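    A hedged guess at the culprit, based on the code above: the tile constructor builds each quad with a negated, off-by-one z (-j and -(j - 1)), while BuildMesh lays its vertex rows out along positive z, so the data structure and the mesh disagree on the z direction and the result reads as rotated 180 degrees. Generating the tile corners with the convention BuildMesh expects would look something like this (h and tileSize abbreviate the question's expressions; the triangle winding may also need revisiting afterwards):

        float h = map_data [i, j].Height;
        float tileSize = GTileMap.TileMap.tileSize;
        map_data [i, j].vertices [0] = new Vector3 (i * tileSize,       h, j * tileSize);
        map_data [i, j].vertices [1] = new Vector3 ((i + 1) * tileSize, h, j * tileSize);
        map_data [i, j].vertices [2] = new Vector3 (i * tileSize,       h, (j + 1) * tileSize);
        map_data [i, j].vertices [3] = new Vector3 ((i + 1) * tileSize, h, (j + 1) * tileSize);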

  • What makes games responsive to user input?

    - by zaftcoAgeiha
    Many games have been praised for their responsive gameplay, where each user input corresponds to a quick and precise character movement (e.g. Super Meat Boy, Shank...). What makes those games responsive, and what prevents other games from achieving the same? How much of it is due to the game framework used to queue mouse/keyboard events and render/update the game, and how much is attributed to better coding?

  • XNA 4.0 2D sidescroller variable terrain heightmap for walking/collision

    - by JiminyCricket
    I've been fooling around with moving on sloped tiles in XNA, and it is semi-working but not completely satisfactory. I have also been thinking that having sets of predetermined slopes might not give me terrain that looks "organic" enough. There is also the problem of having to construct several different types of tile for each slope when they're chained together (only 45-degree tiles chain perfectly, as I understand it). I had thought of somehow scanning for connected chains of sloped tiles and treating them as one large triangle, as I was having trouble with glitching at the edges where sloped tiles connect. But this leads back to the problem of limiting the curvature of the terrain.

    So what I'd like to do now is create a simple image or texture of the terrain of a level (or a section of the level) and generate a simple heightmap (the Y for each X) from it. The player's Y position would then just be updated based on their X position. Is there a simple way of doing this (or a better way of solving this problem)? The main problem I can see with this method is the case where there are areas above the ground that can be walked on. Maybe there is a way to just map all walkable ground areas? I've been looking at this helpful bit of code: http://thirdpartyninjas.com/blog/2010/07/28/sloped-platform-collision/ but need a way to generate the actual points/vectors.
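    The lookup itself is cheap once the heights are baked; a minimal sketch of the sampling side in C# (how the heights array gets filled is left open, e.g. scanning a terrain image column by column for the first ground pixel):

        using System;

        class Ground
        {
            readonly float[] heights; // one ground height per x column

            public Ground(float[] heights) // e.g. scanned from a terrain image
            {
                this.heights = heights;
            }

            public float YAt(float x)
            {
                int x0 = Math.Clamp((int)MathF.Floor(x), 0, heights.Length - 1);
                int x1 = Math.Min(x0 + 1, heights.Length - 1);
                float t = x - x0;
                // lerp between columns so slopes don't feel like stairs
                return heights[x0] + (heights[x1] - heights[x0]) * t;
            }
        }

        // Per frame, while the player is grounded (player is hypothetical):
        // player.Y = ground.YAt(player.X) - player.Height;

    For walkable areas above the ground, the same idea extends to a list of (top, bottom) spans per column instead of a single height, with the player snapping to the span nearest their current Y.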

  • 2D basic map system

    - by Cyril
    I'm currently coding a 2D game in Java, and I would like some clues on how to build this system: the screen moves over a grander map; for instance, the screen represents 800x600 units on a 100Kx100K map. When you command your unit to go to another position, the screen moves across this map, AND when you move your mouse to one side or another of the screen, you also move the screen across the map. Not sure that I'm clear, but this system can be found in most RTS games (Warcraft/Starcraft for example). I'm currently using Slick2D. Any ideas? Thanks.

  • DirectX and open libraries list? [closed]

    - by OVERTONE
    I've just been looking for comparisons between open and proprietary frameworks and libraries, more to get an idea of what exists than of how they compare. For example, we have:

        DirectX (graphics) and its open counterpart, OpenGL
        DirectX (sound) and OpenAL

    But there are other DirectX libraries that I can't find open alternatives to, such as:

        DirectInput
        DXGI
        Direct2D
        DirectWrite

    Does anyone have any lists or comparisons between DirectX components and their open counterparts?
