Search Results

Search found 19182 results on 768 pages for 'game engine'.

Page 403/768

  • Dynamic Dijkstra

    - by Dani
    I need a dynamic Dijkstra algorithm that can update itself when an edge's cost changes, without a full recalculation. Full recalculation is not an option. I've tried to "brew" my own implementation with no success. I've also tried to find one on the Internet but found nothing. A link to an article explaining the algorithm, or even its name, would be good. Edit: Thanks everyone for answering. I managed to make an algorithm of my own that runs in O(V+E) time; if anyone wishes to know the algorithm, just say so and I will post it.
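
    A minimal C# sketch of the standard "edge decrease" update, assuming an adjacency-list graph with cached distances (this is not the poster's own O(V+E) algorithm, and cost increases need a full dynamic SSSP scheme such as Ramalingam-Reps):

        using System;
        using System.Collections.Generic;

        class Graph
        {
            // Adj[u] = list of (neighbor, weight); Dist[] = cached shortest distances
            public List<(int To, double W)>[] Adj;
            public double[] Dist;

            // Call after the weight of edge (u, v) has been *decreased* to newW.
            // Only the affected region is re-relaxed, not the whole graph.
            public void OnEdgeDecreased(int u, int v, double newW)
            {
                if (Dist[u] + newW >= Dist[v]) return; // nothing improved
                Dist[v] = Dist[u] + newW;

                var pq = new PriorityQueue<int, double>(); // .NET 6+
                pq.Enqueue(v, Dist[v]);
                while (pq.TryDequeue(out int x, out double d))
                {
                    if (d > Dist[x]) continue; // stale queue entry
                    foreach (var (y, w) in Adj[x])
                        if (Dist[x] + w < Dist[y])
                        {
                            Dist[y] = Dist[x] + w;
                            pq.Enqueue(y, Dist[y]);
                        }
                }
            }
        }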

  • Clear edged sprite

    - by Ananth
    I am a newbie to cocos2d. I would like to let the user draw, similar to what a painting brush would do. I am using CCSprite for that. I have almost implemented the velocity, color and opacity factors for the tool, but I couldn't get the sprite to be as clear as it should be. I can only draw strokes like the one in http://i.imgur.com/KBe0L.png, which has blunt edges, but I want harder, clearer outside edges as in http://i.stack.imgur.com/GrFlv.png. I have no idea how to make it clear edged. The piece of code I'm using is:

        glEnable(GL_BLEND);
        [brush.texture setAliasTexParameters];
        [brush setBlendFunc:(ccBlendFunc){GL_ONE, GL_ONE_MINUS_SRC_ALPHA}];
        [brush visit];

    I suspect the problem is the blending mode. I tried some blending modes, but without the expected results. I have been trying this for the past five days and am quite confused. Can someone help me sort this out? Thanks in advance.

  • effect and model vertex declaration compatibility

    - by Vodácek
    I have normal model drawing code. When I try to draw a model without UV coordinates, I get this exception:

        System.InvalidOperationException: The current vertex declaration does not include all the elements required by the current vertex shader. TextureCoordinate0 is missing.
           at Microsoft.Xna.Framework.Graphics.GraphicsDevice.VerifyCanDraw(Boolean bUserPrimitives, Boolean bIndexedPrimitives)
           at Microsoft.Xna.Framework.Graphics.GraphicsDevice.DrawIndexedPrimitives(PrimitiveType primitiveType, Int32 baseVertex, Int32 minVertexIndex, Int32 numVertices, Int32 startIndex, Int32 primitiveCount)
           at Microsoft.Xna.Framework.Graphics.ModelMeshPart.Draw()
           at Microsoft.Xna.Framework.Graphics.ModelMesh.Draw()
           ...

    I know what causes the exception, but is it possible to avoid it? Is it possible to check a model for vertex declaration compatibility with the current shader before drawing it?
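
    A hedged XNA 4.0 sketch of such a check: every ModelMeshPart exposes its vertex declaration, so the elements can be inspected for a texture-coordinate channel before drawing (the helper name is mine):

        // Returns true if every part of the model carries texture coordinates
        static bool ModelHasTexCoords(Model model)
        {
            foreach (ModelMesh mesh in model.Meshes)
                foreach (ModelMeshPart part in mesh.MeshParts)
                {
                    bool found = false;
                    foreach (VertexElement e in part.VertexBuffer.VertexDeclaration.GetVertexElements())
                        if (e.VertexElementUsage == VertexElementUsage.TextureCoordinate)
                            found = true;
                    if (!found)
                        return false; // this part would throw with a texture-mapped effect
                }
            return true;
        }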

  • How can I choose the depth of a quadtree?

    - by Evpok
    In a 2d world, using a quadtree to prune pairs in collision detection, how can I choose the depth of said quadtree? The world I am dealing with is mostly made of moving objects¹, so the cost of dispatching the objects between the quadtree cells matters. So what I am interested in is the balance between the gain from less collision checking and the loss from more dispatching. 1. To be completely explicit: autonomous self-replicating cells competing for food sources, in an attempt to show my pupils predator-prey dynamics and genetic evolution at work.
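
    One common rule of thumb, offered as an assumption rather than something from the question: subdivide until a leaf cell is only a few times larger than a typical object, so most objects land in a single leaf and re-dispatch on movement stays cheap. A C# sketch:

        using System;

        static int ChooseQuadtreeDepth(float worldSize, float avgObjectSize, float leafFactor = 4f)
        {
            // Aim for leaf cells about leafFactor times the average object size
            float targetLeafSize = leafFactor * avgObjectSize;
            int depth = (int)Math.Floor(Math.Log(worldSize / targetLeafSize, 2.0));
            return Math.Max(1, depth);
        }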

  • What does Skyrim Creation Kit's NPC class do?

    - by pseudoname
    I'm trying to change it with the setclass console command. Based on the UESP wiki it looks like the class just governs stat gain for leveling, but based on the Elder Scrolls wiki it seems to only control combat AI. Obviously it does at least one of those, or both - what does it actually do, and does it do anything else? For example: if I change Lydia from warrior1handed to vigilantcombat1h with the console command 000A2C94.setclass 0010bfef, will it have any unintended side effects that aren't immediately apparent, other than letting her use the alteration and healing spells I just gave her via the console, and setting her stats in a way that works for that? Will it do something weird like mess with her factions, or her ability to join as my follower? Or mess with her health scaling as she levels? Something hard to notice until a lot of time has gone by? @desaivv: I was trying to do it with 000A2C94.setclass 0010bfef, but I wasn't sure if it would cause hidden issues that only show up after hours of play, or whether I should make a new character with the same bat file. The Creation Kit sounds like an idea too; I'll have to see how complicated it is. It might just show what the class would do, or have some easier way to change her behavior to add spells. I'll try it and see if anything obvious shows up short term; I just wasn't sure if it had known long-term problems.

  • Camera Collision inside the room model

    - by sanddy
    I am having a problem calculating the camera collision for my room model, which consists of sofas, tables and other models. The user will be moving the camera forward, back and rotating, so I need to make sure that the camera does not collide with any of the models within the room. I have wrapped all the models inside the room in BoundingBox[] and the camera in a BoundingSphere. So far I have implemented collision by following the tutorial at http://www.toymaker.info/Games/XNA/html/xna_model_collisions.html, which was great. But I guess the problem lies in the transformation part. I debugged and found some points at Vector(-XXX,-XXX,-XXX) where X is a digit. I also found the radius of some models to be too large (in the thousands; I looked at the radius value before converting to a BoundingBox). Do I need to scale the model for collision? Below is my code.

    In LoadContent():

        Matrix[] transforms = new Matrix[myModel.Bones.Count];
        myModel.CopyAbsoluteBoneTransformsTo(transforms);
        int index = 0;
        box = new List<BoundingBox>();
        BoundingBox worldModel = Utility.CalculateBoundingBox(myModel);
        foreach (ModelMesh mesh in myModel.Meshes)
        {
            Vector3[] obb = new Vector3[8];
            worldModel.GetCorners(obb);
            Vector3[] asdf = (Vector3[])obb.Clone();
            Vector3.Transform(obb, ref transforms[mesh.ParentBone.Index], obb);
            BoundingBox worldBox = BoundingBox.CreateFromPoints(obb);
            box.Add(worldBox);
            index++;
        }

    In the camera position update:

        BoundingSphere bs = new BoundingSphere(this.cameraPos, 5.0f);
        if (RoomWalkthrough.Utility.CheckCollision(bs, bb))
        {
            // Do Something
        }

    Please help.
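
    One thing that stands out (my reading, not a confirmed fix): the loop transforms the whole-model bounding box by every bone, so every mesh receives a copy of the full box instead of its own. A hedged sketch that instead builds one box per mesh from that mesh's own BoundingSphere, transformed into model space:

        // One bounding box per mesh, derived from the mesh's own bounds
        static List<BoundingBox> BuildMeshBoxes(Model model)
        {
            var transforms = new Matrix[model.Bones.Count];
            model.CopyAbsoluteBoneTransformsTo(transforms);

            var boxes = new List<BoundingBox>();
            foreach (ModelMesh mesh in model.Meshes)
            {
                BoundingSphere s = mesh.BoundingSphere.Transform(transforms[mesh.ParentBone.Index]);
                boxes.Add(new BoundingBox(s.Center - new Vector3(s.Radius),
                                          s.Center + new Vector3(s.Radius)));
            }
            return boxes; // still model-space; fold in the world matrix if the model is scaled
        }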

  • Having trouble with projection matrix, need help

    - by Mr.UNOwen
    I'm having trouble with what appears to be the projection matrix. Given a wide enough screen, when a cube is at the left- or right-most edge, the left or right wall appears stretched to the point that the front face is 1/10 the width of the side. I do update the screen ratio along with the projection matrix and viewport on screen resize, so am I safe to assume all the trouble comes from the matrix class? Also, the cube follows the mouse, but it's only vertically aligned and is ahead of the mouse when going left or right from the center of the screen. The perspective function:

        /**
         * setPerspective
         *
         * @param fov: angle in radians
         * @param aspect: screen ratio w/h
         * @param near: near distance
         * @param far: far distance
         **/
        void APCamera::setPerspective(GMFloat_t fov, GMFloat_t aspect, GMFloat_t near, GMFloat_t far)
        {
            GMFloat_t difZ = near - far;
            GMFloat_t *data;

            mProjection->clear(); // set to identity matrix
            data = mProjection->getData();

            GMFloat_t v = 1.0f / tan(fov / 2.0f);

            data[_AP_MAA] = v / aspect;
            data[_AP_MBB] = v;
            data[_AP_MCC] = (far + near) / difZ;
            data[_AP_MCD] = -1.0f;
            data[_AP_MDD] = 0.0f;
            data[_AP_MDC] = 2.0f * far * near / difZ;

            mRatio = aspect;
            mInvProjOutdated = true;
            mIsPerspective = true;
        }

    and...

        #define _AP_MAA 0
        #define _AP_MAB 1
        #define _AP_MAC 2
        #define _AP_MAD 3
        #define _AP_MBA 4
        #define _AP_MBB 5
        #define _AP_MBC 6
        #define _AP_MBD 7
        #define _AP_MCA 8
        #define _AP_MCB 9
        #define _AP_MCC 10
        #define _AP_MCD 11
        #define _AP_MDA 12
        #define _AP_MDB 13
        #define _AP_MDC 14
        #define _AP_MDD 15

  • Pixel Collision - Detecting corners

    - by Milkboat
    How would I go about detecting the corners of a texture when I use pixel collision detection? I have read about corner collision with rectangles, but I am unsure how to adapt it to my situation. Right now my map is tile based, and I do rectangular collision until the player intersects a blocked tile, then I switch to pixel collision. The effect I would like to achieve is, when the player hits the corner of an object, to push him around the side so he doesn't just hit the edge and stop. Any ideas?
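
    A hedged sketch of one common approach, with made-up names and thresholds: measure how deep the overlap with the blocked tile is, and when it is within a small corner tolerance, slide the player along the edge instead of stopping him dead:

        // Called when horizontal movement into tileBounds is blocked
        static void NudgeAroundCorner(ref Vector2 pos, Rectangle playerBounds, Rectangle tileBounds, float speed)
        {
            const int cornerTolerance = 4; // pixels; tune to taste

            int overlapFromTop = playerBounds.Bottom - tileBounds.Top;
            int overlapFromBottom = tileBounds.Bottom - playerBounds.Top;

            if (overlapFromTop > 0 && overlapFromTop <= cornerTolerance)
                pos.Y -= speed; // grazing the tile's top corner: slide up and past it
            else if (overlapFromBottom > 0 && overlapFromBottom <= cornerTolerance)
                pos.Y += speed; // grazing the bottom corner: slide down
        }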

  • Good resources for 2.5D and rendering walls, floors, and sprites

    - by Aidan Mueller
    I'm curious as to how games like Prelude of the Chambered handle graphics. If you play it for a bit you will see what I mean. It made me wonder how it works. (It is open-source, so you can get the source on this page.) I did find a few tutorials, but I couldn't understand some of the stuff, though it did help with some things. However, I don't like doing things I don't understand. Does anyone know of any good sites for this kind of 2.5D? Any help is appreciated. After all, I've been googling all day. Thanks :)

  • 2d shapes in XNA 4.0?

    - by Lautaro
    I have some experience of XNA but none of 3D programming. I have an idea I want to realize, but I have not decided whether to do it in 3D or 2D, and I'm not sure which one will be best in XNA. I want to have a shape like a blob that can reshape depending on input. The morphing does not need to be very advanced. It could be a circle (2D) or a globe (3D) that just has one point that moves slightly in a random direction. In ASP.NET I have done this through the 2D drawing classes, where I can make lines, circles, squares etc. and then modify the points that make them up. But it seems to me that XNA does not have classes for making 2D shapes (can I get this confirmed?). If it did, that would be the quickest solution for me.
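
    To confirm the gist: XNA 4.0 has no GDI-style shape classes, but 2D shapes can be drawn as user primitives through a BasicEffect. A hedged sketch of a deformable blob as a triangle list, with a per-point radius array standing in for the morphing input (all names are mine):

        // Build one triangle per segment around the centre; render with
        // GraphicsDevice.DrawUserPrimitives(PrimitiveType.TriangleList, ...)
        static VertexPositionColor[] BuildBlob(Vector2 center, float[] radii, Color color)
        {
            int n = radii.Length;
            var verts = new VertexPositionColor[n * 3];
            var c = new Vector3(center, 0f);
            for (int i = 0; i < n; i++)
            {
                float a0 = MathHelper.TwoPi * i / n;
                float a1 = MathHelper.TwoPi * (i + 1) / n;
                Vector3 p0 = c + new Vector3((float)Math.Cos(a0), (float)Math.Sin(a0), 0f) * radii[i];
                Vector3 p1 = c + new Vector3((float)Math.Cos(a1), (float)Math.Sin(a1), 0f) * radii[(i + 1) % n];
                verts[i * 3 + 0] = new VertexPositionColor(c, color);
                verts[i * 3 + 1] = new VertexPositionColor(p0, color);
                verts[i * 3 + 2] = new VertexPositionColor(p1, color);
            }
            return verts; // nudge entries of radii each frame to get the blob wobble
        }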

  • Level of detail algorithm not functioning correctly

    - by Darestium
    I have been working on this problem for months; I have been creating a Planet Generator of sorts, and after more than six months of work I am no closer to finishing it than I was four months ago. My problem: the terrain does not subdivide in the correct locations. It almost seems as if there is a ghost camera next to me, and the quads subdivide based on the position of this "ghost camera". Here is a video of the broken program: http://www.youtube.com/watch?v=NF_pHeMOju8 - the best example of the problem occurs around 0:36. For detail limiting, I am going for a chunked LOD approach, which subdivides the terrain based on how far you are away from it. I use a "depth table" to determine how many subdivisions should take place:

        void PQuad::construct_depth_table(float distance)
        {
            tree[0] = -1;
            for (int i = 1; i < MAX_DEPTH; i++)
            {
                tree[i] = distance;
                distance /= 2.0f;
            }
        }

    The chunked LOD relies on the child/parent structure of quads; the depth is determined by a constant, e.g. if the constant is 6, there are six levels of detail. The quads which should be drawn go through a distance test from the player to the centre of the quad:

        void PQuad::get_recursive(glm::vec3 player_pos, std::vector<PQuad*>& out_children)
        {
            for (size_t i = 0; i < children.size(); i++)
            {
                children[i].get_recursive(player_pos, out_children);
            }
            if (this->should_draw(player_pos) || this->depth == 0)
            {
                out_children.emplace_back(this);
            }
        }

        bool PQuad::should_draw(glm::vec3 player_position)
        {
            float distance = distance3(player_position, centre);
            return distance < tree[depth];
        }

    The root quad has four children, which could be visualized like the following, where each [] is a child:

        [] []
        [] []

    Each child has the same number of children, up to the detail limit; quads that are six iterations deep are leaf nodes, and these nodes have no children. Each node has a corresponding Mesh, each Mesh structure holds 16x16 quad-shapes, and each Mesh's quad-shapes halve in size at every deeper detail level - creating more detail.

        void PQuad::construct_children()
        {
            // Calculate the position of the quad based on the parent's location
            calculate_position();

            if (depth < (int)MAX_DEPTH)
            {
                children.reserve((int)NUM_OF_CHILDREN);
                for (int i = 0; i < (int)NUM_OF_CHILDREN; i++)
                {
                    children.emplace_back(PQuad(this->face_direction, this->radius));
                    PQuad *child = &children.back();
                    child->set_depth(depth + 1);
                    child->set_child_index(i);
                    child->set_parent(this);
                    child->construct_children();
                }
            }
            else
            {
                leaf = true;
            }
        }

    The following function creates the vertices for each quad. I feel that it may play a role in the problem - I just can't determine what is causing it.

        void PQuad::construct_vertices(std::vector<glm::vec3> *vertices, std::vector<Color3> *colors)
        {
            vertices->reserve(quad_width * quad_height);
            for (int y = 0; y < quad_height; y++)
            {
                for (int x = 0; x < quad_width; x++)
                {
                    switch (face_direction)
                    {
                        case YIncreasing:
                            vertices->emplace_back(glm::vec3(position.x + x * element_width, quad_height - 1.0f, -(position.y + y * element_width)));
                            break;
                        case YDecreasing:
                            vertices->emplace_back(glm::vec3(position.x + x * element_width, 0.0f, -(position.y + y * element_width)));
                            break;
                        case XIncreasing:
                            vertices->emplace_back(glm::vec3(quad_width - 1.0f, position.y + y * element_width, -(position.x + x * element_width)));
                            break;
                        case XDecreasing:
                            vertices->emplace_back(glm::vec3(0.0f, position.y + y * element_width, -(position.x + x * element_width)));
                            break;
                        case ZIncreasing:
                            vertices->emplace_back(glm::vec3(position.x + x * element_width, position.y + y * element_width, 0.0f));
                            break;
                        case ZDecreasing:
                            vertices->emplace_back(glm::vec3(position.x + x * element_width, position.y + y * element_width, -(quad_width - 1.0f)));
                            break;
                    }

                    // Re-position the bottom, right, front vertex of the cube from (0,0,0) to (-16, -16, 16)
                    (*vertices)[vertices->size() - 1] -= glm::vec3(quad_width / 2.0f, quad_width / 2.0f, -(quad_width / 2.0f));

                    colors->emplace_back(Color3(255.0f, 255.0f, 255.0f, false));
                }
            }

            switch (face_direction)
            {
                case YIncreasing:
                    this->centre = glm::vec3(position.x + quad_width / 2.0f, quad_height - 1.0f, -(position.y + quad_height / 2.0f));
                    break;
                case YDecreasing:
                    this->centre = glm::vec3(position.x + quad_width / 2.0f, 0.0f, -(position.y + quad_height / 2.0f));
                    break;
                case XIncreasing:
                    this->centre = glm::vec3(quad_width - 1.0f, position.y + quad_height / 2.0f, -(position.x + quad_width / 2.0f));
                    break;
                case XDecreasing:
                    this->centre = glm::vec3(0.0f, position.y + quad_height / 2.0f, -(position.x + quad_width / 2.0f));
                    break;
                case ZIncreasing:
                    this->centre = glm::vec3(position.x + quad_width / 2.0f, position.y + quad_height / 2.0f, 0.0f);
                    break;
                case ZDecreasing:
                    this->centre = glm::vec3(position.x + quad_width / 2.0f, position.y + quad_height / 2.0f, -(quad_height - 1.0f));
                    break;
            }

            this->centre -= glm::vec3(quad_width / 2.0f, quad_width / 2.0f, -(quad_width / 2.0f));
        }

    Any help in discovering what is causing this "subdividing in the wrong place" would be greatly appreciated.

  • How can I prevent seams from showing up on objects using lower mipmap levels?

    - by Shivan Dragon
    Disclaimer: kindly right-click on the images and open them separately so that they're at full size, as there are fine details which don't show up otherwise. Thank you. I made a simple Blender model; it's a cylinder with the top cap removed. I exported the UVs, then imported them into Photoshop and painted the inner area yellow and the outer area red, making sure to cover the UV lines well. I then save the image and load it as a texture on the model in Blender. (Actually, I just reload it as the image the UVs were exported to and change the viewport view mode to textured.) When I look at the mesh up close, there's yellow everywhere and everything seems fine. However, as I start zooming out, I start seeing red (literally and metaphorically) where the texture edges are, and the more I zoom, the more I see it. The same thing happens in Unity, though the effect seems less pronounced: up close it's fine and yellow; zoom out and you see red at the seams. Now, obviously, for this simple example a workaround is to spread the yellow well outside the UV margins, and then it's fine from all distances. However, this is an issue when you try making a complex texture that should tile seamlessly at the edges. In that situation I either make a few lines of pixels overlap (in which case it looks bad from up close and OK from far away), or I leave them seamless and then I get those seams when viewed from far away. So my question is: is there something I'm missing, or some extra thing I must do to have my texture look seamless from all distances?
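
    The usual fix, offered as a suggestion rather than something from the question: "bleed" (dilate) the island colors a few pixels past the UV border before mipmaps are generated, so lower mip levels never average the background into edge texels. A hedged CPU-side C# sketch over a packed RGBA array (all names are mine):

        using System;

        // src: w*h packed RGBA texels; insideIsland: true where the texel is painted
        static uint[] DilateIslands(uint[] src, int w, int h, Func<int, int, bool> insideIsland)
        {
            var dst = (uint[])src.Clone();
            var offsets = new[] { (1, 0), (-1, 0), (0, 1), (0, -1) };
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++)
                {
                    if (insideIsland(x, y)) continue;
                    foreach (var (dx, dy) in offsets)
                    {
                        int nx = x + dx, ny = y + dy;
                        if (nx >= 0 && nx < w && ny >= 0 && ny < h && insideIsland(nx, ny))
                        {
                            dst[y * w + x] = src[ny * w + nx]; // copy island color outward
                            break;
                        }
                    }
                }
            return dst; // run N passes to bleed N pixels (8-16 is typical)
        }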

  • Android Swipe In Unity 3D World with AR

    - by Christian
    I am working on an AR application using Unity3D and the Vuforia SDK for Android. The way the application works is: when the designated image (a frame marker in our case) is recognized by the camera, a 3D island is rendered at that spot. Currently I am able to detect when/which objects on the model are touched, by raycasting. I am also able to successfully detect a swipe using this code:

        if (Input.touchCount > 0)
        {
            Touch touch = Input.touches[0];
            switch (touch.phase)
            {
                case TouchPhase.Began:
                    couldBeSwipe = true;
                    startPos = touch.position;
                    startTime = Time.time;
                    break;
                case TouchPhase.Moved:
                    if (Mathf.Abs(touch.position.y - startPos.y) > comfortZoneY)
                    {
                        couldBeSwipe = false;
                    }
                    // track points here for raycast if it is a swipe
                    break;
                case TouchPhase.Stationary:
                    couldBeSwipe = false;
                    break;
                case TouchPhase.Ended:
                    float swipeTime = Time.time - startTime;
                    float swipeDist = (touch.position - startPos).magnitude;
                    if (couldBeSwipe && (swipeTime < maxSwipeTime) && (swipeDist > minSwipeDist))
                    {
                        // It's a swiiiiiiiiiiiipe!
                        float swipeDirection = Mathf.Sign(touch.position.y - startPos.y);
                        // Do something here in reaction to the swipe.
                        swipeCounter.IncrementCounter();
                    }
                    break;
            }
            touchInfo.SetTouchInfo(Time.time - startTime, (touch.position - startPos).magnitude, Mathf.Abs(touch.position.y - startPos.y));
        }

    Thanks to andeeeee for the logic. But I want to have some interaction in the 3D world based on the swipe on the screen, i.e. if the user swipes over unoccluded enemies, they die. My first thought was to track all the points in the Moved TouchPhase, and then, if it is a swipe, raycast into all those points and kill any enemy that is hit. Is there a better way to do this? What is the best approach? Thanks for the help!
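
    A hedged Unity C# sketch of the poster's own plan (the "Enemy" tag and the point list are my assumptions): record the touch path during the Moved phase, then on a confirmed swipe raycast through each recorded point. Because Physics.Raycast returns only the first hit, enemies occluded by other geometry survive:

        using System.Collections.Generic;
        using UnityEngine;

        List<Vector2> swipePoints = new List<Vector2>(); // filled in TouchPhase.Moved

        void OnSwipeConfirmed()
        {
            foreach (Vector2 screenPos in swipePoints)
            {
                Ray ray = Camera.main.ScreenPointToRay(screenPos);
                RaycastHit hit;
                if (Physics.Raycast(ray, out hit) && hit.collider.CompareTag("Enemy"))
                    Destroy(hit.collider.gameObject); // unoccluded enemy under the swipe dies
            }
            swipePoints.Clear();
        }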

  • Errors happen when using World.destroyBody( Body body )

    - by minami
    On an Android application using libgdx, when I use World.destroyBody(Body body), the application suddenly shuts down once in a while. Is there some setting I need to apply to body collision or Box2DDebugRenderer before I destroy bodies? Below is the source I use for destroying bodies:

        private void deleteUnusedObject()
        {
            for (Iterator<Body> iter = mWorld.getBodies(); iter.hasNext(); )
            {
                Body body = iter.next();
                if (body.getUserData() != null)
                {
                    Box2DUserData data = (Box2DUserData) body.getUserData();
                    if (!data.getActFlag())
                    {
                        if (body != null)
                        {
                            mWorld.destroyBody(body);
                        }
                    }
                }
            }
        }

    Thanks
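
    One hedged observation, not a confirmed diagnosis: in Box2D generally, destroying a body while iterating the world's body list, or while the world is mid-step, is unsafe and a classic cause of sudden native crashes. The usual pattern is to collect doomed bodies first and destroy them after the step, outside any iteration. A C# sketch of the pattern (libgdx itself is Java; all names here are made up, so treat this as pseudocode):

        // Bodies flagged dead during gameplay are only *marked* here
        List<Body> toDestroy = new List<Body>();

        void MarkDead(Body body)
        {
            toDestroy.Add(body);
        }

        void Update(World world, float dt)
        {
            world.Step(dt, 8, 3);            // finish the physics step first

            foreach (Body body in toDestroy) // then destroy, outside any world iteration
                world.DestroyBody(body);
            toDestroy.Clear();
        }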

  • Most efficient way to handle coordinate maps in Java

    - by glowcoder
    I have a rectangular tile-based layout - your typical Cartesian system. I would like to have a single class that handles two lookup styles:

    1. Get me the set of players at position X,Y
    2. Get me the position of the player with key K

    My current implementation is this:

        class CoordinateMap<V> {
            Map<Long,Set<V>> coords2value;
            Map<V,Long> value2coords;

            // Convert (int x, int y) to a long key - this is tested and works for all
            // values -1bil to +1bil. My map will NOT require more than 1 bil tiles
            // from the origin :)
            private Long keyFor(int x, int y) {
                int kx = x + 1000000000;
                int ky = y + 1000000000;
                return (long)kx | (long)ky << 32;
            }

            // Extract the x and y from the key
            private int[] coordsFor(long k) {
                int x = (int)(k & 0xFFFFFFFF) - 1000000000;
                int y = (int)((k >>> 32) & 0xFFFFFFFF) - 1000000000;
                return new int[] { x, y };
            }
        }

    From there, I proceed to have other methods that manipulate or access the two maps accordingly. My question is... is there a better way to do this? Sure, I've tested my class and it works fine. And sure, something inside tells me that if I want to reference the data by two different keys, I need two different maps. But I can also bet I'm not the first to run into this scenario. Thanks!

  • How to give a ball a trailing texture effect

    - by Evan Kohilas
    How do I draw copies of the leading texture so that there is a line of balls following behind the leading one (that don't collide)? So far I have tried to create the effect by placing another graphic 2 pixels off the first, but I don't see the second ball being drawn:

        spriteBatch.Draw(ballTexture, ballPos, null, Color.White, 0.0f,
                         new Vector2(ballPos.X + 2, ballPos.Y + 2), ballSize, SpriteEffects.None, 0);

    Thanks.
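
    Two hedged observations. First, the sixth argument of that Draw overload is the sprite's origin (its anchor point inside the texture), not a world position, so the call above re-anchors the one sprite oddly instead of drawing a second one. Second, a common way to get a trail is to remember the last few positions in a queue and draw a fading copy at each; a sketch with all names mine:

        Queue<Vector2> trail = new Queue<Vector2>();
        const int TrailLength = 10;

        void UpdateTrail(Vector2 ballPos)
        {
            trail.Enqueue(ballPos);
            while (trail.Count > TrailLength)
                trail.Dequeue(); // forget the oldest position
        }

        void DrawTrail(SpriteBatch spriteBatch, Texture2D ballTexture, float ballSize)
        {
            int i = 0;
            foreach (Vector2 pos in trail)
            {
                float alpha = (float)(++i) / trail.Count; // older copies are fainter
                spriteBatch.Draw(ballTexture, pos, null, Color.White * alpha,
                                 0f, Vector2.Zero, ballSize, SpriteEffects.None, 0f);
            }
        }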

  • Adding sub-entities to existing entities. Should it be done in the Entity and Component classes?

    - by Coyote
    I'm in a situation where a player can be given control of small parts of an entity (e.g. the left missile battery). Therefore I started implementing sub-entities as follows. Entities are objects with three arrays:

    - pointers to components
    - pointers to sub-entities
    - communication subscribers (temporary implementation)

    Now when an entity is built it has a few components, as you might expect, and I can also attach sub-entities, which are handled with some dedicated code in the Entity and Component classes. I noticed sub-entities share data with their parent in three places:

    - position: the sub-entities use the parent's position plus their own as an offset
    - scripts: sub-entities drain ammo and energy from the parent
    - physics: sub-entities add weight to the parent

    I made this to move forward quickly, but as I'm slowly fixing the current implementation I wonder if this wasn't a mistake. Is my current implementation something commonly done? Will it paint me into a corner? I thought it might be better to create some sort of SubEntityComponent where sub-entities are attached and handled. But before changing anything I wanted to seek the community's wisdom.

  • Drawing multiple objects from one Vertex Buffer Object in OpenGL/OpenTK

    - by stoney78us
    I am experimenting with drawing methods using VBOs in OpenGL. Most people use one VBO to store one object's data array. I was trying to do the opposite: store multiple objects' data in a single VBO, then draw it. There is a story behind why I want to do this - I sometimes want to group many objects into a single object. However, my code doesn't do it justice. Following is my pseudo code:

        // Data
        double[] vertices = { line strip 1, line strip 2, line strip 3 }; // series of vertices
        int linestrip1offset = index of the first vertex in line strip 1;
        int linestrip2offset = index of the first vertex in line strip 2;
        int linestrip3offset = index of the first vertex in line strip 3;
        int linestrip1VertexNum = number of vertices in line strip 1;
        int linestrip2VertexNum = number of vertices in line strip 2;
        int linestrip3VertexNum = number of vertices in line strip 3;

        // Setting up
        void init()
        {
            int[] vBO = new int[1];
            GL.GenBuffer(1, vBO);
            GL.BindBuffer(BufferTarget.ArrayBuffer, vBO[0]);
            GL.BufferData(BufferTarget.ArrayBuffer, new IntPtr(_vertices.Length * sizeof(double)), _vertices, BufferUsageHint.StaticDraw);
            GL.EnableClientState(ArrayCap.VertexArray);
        }

        // Drawing
        void draw()
        {
            GL.BindBuffer(BufferTarget.ArrayBuffer, vBO[0]);
            GL.EnableClientState(ArrayCap.VertexArray);

            GL.VertexPointer(3, VertexPointerType.Double, 0, linestrip1offset); // drawing first line strip
            GL.DrawArrays(drawMode, linestrip1offset, linestrip1VertexNum);

            GL.VertexPointer(3, VertexPointerType.Double, 0, linestrip2offset); // drawing second line strip
            GL.DrawArrays(drawMode, linestrip2offset, linestrip2VertexNum);

            GL.VertexPointer(3, VertexPointerType.Double, 0, linestrip3offset); // drawing third line strip
            GL.DrawArrays(drawMode, linestrip3offset, linestrip3VertexNum);

            GL.DisableClientState(ArrayCap.VertexArray);
            GL.BindBuffer(BufferTarget.ArrayBuffer, 0);
        }

    I don't know what I did wrong, but I think it should technically work, since we can tell OpenGL which part of the data in the VBO to draw.
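
    A hedged reading of what is likely wrong: GL.VertexPointer takes a byte offset into the bound buffer, while GL.DrawArrays takes a first-vertex index, so the same linestripNoffset value cannot serve both. Since the offsets here are vertex indices, one fix is to set the pointer once at byte offset zero and let DrawArrays select each strip:

        void draw()
        {
            GL.BindBuffer(BufferTarget.ArrayBuffer, vBO[0]);
            GL.EnableClientState(ArrayCap.VertexArray);
            GL.VertexPointer(3, VertexPointerType.Double, 0, IntPtr.Zero); // set once, byte offset 0

            // "first" is a vertex index, so the per-strip offsets belong here alone
            GL.DrawArrays(drawMode, linestrip1offset, linestrip1VertexNum);
            GL.DrawArrays(drawMode, linestrip2offset, linestrip2VertexNum);
            GL.DrawArrays(drawMode, linestrip3offset, linestrip3VertexNum);

            GL.DisableClientState(ArrayCap.VertexArray);
            GL.BindBuffer(BufferTarget.ArrayBuffer, 0);
        }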

  • Slick2d Spritesheet showing whole image

    - by BotskoNet
    I'm trying to show a single subimage from a sprite sheet. Using the Slick2d SpriteSheet class, all it's doing is showing me the entire image, scaled down to fit the cell dimensions. The image is 96x192 and should have cells of 32x32. The code:

        SpriteSheet spriteSheet = new SpriteSheet("images/" + file, 32, 32);
        System.out.println("Horiz Count: " + spriteSheet.getHorizontalCount());
        System.out.println("Vert Count: " + spriteSheet.getVerticalCount());
        System.out.println("Height: " + spriteSheet.getHeight());
        System.out.println("Width: " + spriteSheet.getWidth());
        System.out.println("Texture Width: " + spriteSheet.getTextureWidth());
        System.out.println("Texture Height: " + spriteSheet.getTextureHeight());

    Prints:

        Horiz Count: 3
        Vert Count: 6
        Height: 192
        Width: 96
        Texture Width: 0.75
        Texture Height: 0.75

    Not sure what the texture dimensions refer to, but the rest is entirely accurate. However, when I draw the icon, the entire sprite image shows, scaled down to 32x32:

        Image image = spriteSheet.getSprite(1, 0); // a test
        image.bind();
        GL11.glEnable(GL11.GL_BLEND);
        GL11.glBlendFunc(GL11.GL_SRC_ALPHA, GL11.GL_ONE_MINUS_SRC_ALPHA);
        GL11.glBegin(GL11.GL_QUADS);
        GL11.glTexCoord2f(0, 0);
        GL11.glVertex2f(x, y);
        GL11.glTexCoord2f(1, 0);
        GL11.glVertex2f(x + image.getWidth(), y);
        GL11.glTexCoord2f(1, 1);
        GL11.glVertex2f(x + image.getWidth(), y + image.getHeight());
        GL11.glTexCoord2f(0, 1);
        GL11.glVertex2f(x, y + image.getHeight());
        GL11.glEnd();
        GL11.glDisable(GL11.GL_BLEND);

  • Translate extrinsic rotations to intrinsic rotations ( Euler angles )

    - by MineMan287
    The problem I have is very frustrating: I am using the Jitter Physics library, which gives quaternion rotations. You can extract the extrinsic rotations, but I need intrinsic rotations to rotate in OpenTK (there are other reasons as well, so I don't want to make OpenTK use a matrix):

        GL.Rotate(xr, 1, 0, 0);
        GL.Rotate(yr, 0, 1, 0);
        GL.Rotate(zr, 0, 0, 1);

    EDIT: Response to the first answer. Like this?

        GL.Rotate(zr, 0, 0, 1);
        GL.Rotate(yr, 0, 1, 0);
        GL.Rotate(xr, 1, 0, 0);

    Or this?

        GL.Rotate(xr, 1, 0, 0);
        GL.Rotate(yr, 0, 1, 0);
        GL.Rotate(zr, 0, 0, 1);
        GL.Rotate(zr, 0, 0, 1);
        GL.Rotate(yr, 0, 1, 0);
        GL.Rotate(xr, 1, 0, 0);
        GL.Rotate(xr, 1, 0, 0);
        GL.Rotate(yr, 0, 1, 0);
        GL.Rotate(zr, 0, 0, 1);

    I'm confused; please give an example.
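
    A hedged example, assuming the source angles are extrinsic X, then Y, then Z about the fixed world axes: each successive GL.Rotate acts in the already-rotated local frame, so the same orientation is obtained intrinsically by issuing the calls in reverse order - i.e. the first of the two guesses above:

        // extrinsic X -> Y -> Z (fixed axes) == intrinsic Z -> Y' -> X'' (body axes)
        GL.Rotate(zr, 0, 0, 1); // first intrinsic rotation, about the body Z axis
        GL.Rotate(yr, 0, 1, 0); // about the new body Y axis
        GL.Rotate(xr, 1, 0, 0); // about the newest body X axis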

  • My server is behind a router. How can I see my website correctly? [closed]

    - by Tokyo Dan
    I'm running a web server (Ubuntu) on my local home network, behind a router. On the WAN I have a direct IP. When I am not on my home network and access my website via the WAN IP, the website displays correctly and everything works. On my home LAN, behind the router, accessing the website via the WAN IP gets me to my router's admin login page - this is normal. But accessing my website via its home LAN IP address from another computer on my home LAN gets me to the website with a broken layout, and clicking on any link takes me to the WAN IP (my router's admin login page). How can I get my website to display properly, with working links, when accessing it from my home LAN?

  • How to transform gameObjects in array?

    - by user1764781
    I have an array of gameObjects available in the scene. The array of GOs should be transformed according to floats received through a UDP connection. I know it is simple, but I can't figure out how to apply the transformation to an array of GOs according to the unique floats received for each GO; any help will be appreciated. Here is how the transformation floats are read, in case it helps:

        x = ReadSingleBigEndian(data, signalOffset);     signalOffset += 4;
        y = ReadSingleBigEndian(data, signalOffset);     signalOffset += 4;
        z = ReadSingleBigEndian(data, signalOffset);     signalOffset += 4;
        alpha = ReadSingleBigEndian(data, signalOffset); signalOffset += 4;
        theta = ReadSingleBigEndian(data, signalOffset); signalOffset += 4;
        phi = ReadSingleBigEndian(data, signalOffset);   signalOffset += 4;
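
    A hedged Unity sketch, assuming the packet simply carries six floats per GameObject in array order and that (alpha, theta, phi) are Euler angles in degrees - both assumptions mine:

        void ApplyTransforms(GameObject[] objects, byte[] data)
        {
            int offset = 0;
            foreach (GameObject go in objects)
            {
                float x = ReadSingleBigEndian(data, offset); offset += 4;
                float y = ReadSingleBigEndian(data, offset); offset += 4;
                float z = ReadSingleBigEndian(data, offset); offset += 4;
                float alpha = ReadSingleBigEndian(data, offset); offset += 4;
                float theta = ReadSingleBigEndian(data, offset); offset += 4;
                float phi = ReadSingleBigEndian(data, offset); offset += 4;

                go.transform.position = new Vector3(x, y, z);
                go.transform.rotation = Quaternion.Euler(alpha, theta, phi); // axis order assumed
            }
        }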

  • XNA Diffuse Shader Issue. Edge lighting problem. Image Attached

    - by adtither
    As you can see in this image, the diffuse shading is working correctly in some places, but in other places, such as the bottom of the sphere, you can see the squares/triangles of the mesh. Any idea what would be causing this? Let me know if you need any more information related to the code; I can upload my normals calculations and shader effect if required. EDIT: Here's a link to the shader I'm using: http://pastebin.com/gymVc7CP - and to the normals calculations: http://pastebin.com/KnMGdzHP. It seems to be an issue with edge lighting, but I can't see where I'm going wrong in the normals calculations.

  • Unity 5.1 audio issues (no sound in back channels)

    - by N0xus
    I've been trying to bring surround sound audio into my project. I've set my computer up to run in 5.1, and when I play 6-channel audio through Windows Media Player (a test file that announces left speaker, right speaker, etc.) it works fine. However, when I run it through Unity, all I get is the front 3 channels. I've set it to 5.1 in Edit - Project Settings - Audio, and I even set it in code with the following:

        void Start()
        {
            AudioSettings.speakerMode = AudioSpeakerMode.Mode5point1;
        }

    However, when I run a debug line of:

        print(AudioSettings.driverCaps);

    it tells me that Unity is only playing in stereo. Is there something I'm still not doing? I should also add that I've run 10 different tests using the 3D audio pan and spread options, setting both to fully off, halfway on, and full. Still the same results.

  • Understanding how texCUBE works and writing cubemaps properly into a cube rendertarget

    - by cubrman
    My goal is to create accurate reflections, sampled from a dynamic cubemap, for specific 3D objects (mostly lights) in XNA 4.0. To sample the cubemap I compute the 3D reflection vector in the classic way:

        half3 ReflectionVec = reflect(-directionToCamera, Normal.rgb);

    I then use the vector to get the actual reflected color:

        half3 ReflectionCol = texCUBElod(ReflectionSampler, float4(ReflectionVec, 0));

    The cubemap I am sampling from is a render target with six flat faces. So my question is: given the 3D world position of an arbitrary object, how can I make sure that I get accurate reflections of this object when I re-render the cubemap? Should I build the ViewProjection matrix in a specific way? Or is there any other approach?
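
    A hedged XNA 4.0 sketch of the usual setup: render the scene once per face into a RenderTargetCube, building each face's view matrix from the reflective object's world position with a 90-degree FOV and 1:1 aspect. The per-face up vectors follow the common D3D cube-face convention and may need flipping in a right-handed scene; DrawScene stands in for the poster's own rendering:

        void RenderCubeMap(GraphicsDevice device, RenderTargetCube cubeTarget, Vector3 probePos)
        {
            var faces = new (CubeMapFace Face, Vector3 Fwd, Vector3 Up)[]
            {
                (CubeMapFace.PositiveX, Vector3.Right,    Vector3.Up),
                (CubeMapFace.NegativeX, Vector3.Left,     Vector3.Up),
                (CubeMapFace.PositiveY, Vector3.Up,       Vector3.Forward),
                (CubeMapFace.NegativeY, Vector3.Down,     Vector3.Backward),
                (CubeMapFace.PositiveZ, Vector3.Backward, Vector3.Up),
                (CubeMapFace.NegativeZ, Vector3.Forward,  Vector3.Up),
            };
            Matrix proj = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver2, 1f, 0.1f, 1000f);

            foreach (var (face, fwd, up) in faces)
            {
                device.SetRenderTarget(cubeTarget, face);
                device.Clear(Color.Black);
                Matrix view = Matrix.CreateLookAt(probePos, probePos + fwd, up);
                DrawScene(view, proj); // hypothetical: the poster's scene draw
            }
            device.SetRenderTarget(null); // back to the backbuffer before sampling
        }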
