Search Results

  • Apply bone transforms when importing FBX in XNA

    - by hichaeretaqua
    Preconditions: I have some models that contain only a few meshes and one texture. There is no animation within the model. An example: a model of a table. I want to draw the model with a custom effect, so I have to swap the effect after loading the model. In order to draw the meshes correctly, I have to apply the bone transformation manually for each mesh and effect on every draw, as can be seen here. So there are two questions: Is there an option during import that applies the bone transformation to all vertices, so that I don't have to do this during the draw call? Is there an option during import that merges all vertices into one vertex and index buffer, so that I can draw the whole model with just one call? I'm pretty sure that the built-in "Autodesk FBX - XNA Framework" importer does not support these features, but maybe there is another importer available, or another possibility I missed. The aim is to speed up rendering a little, especially by using instancing, so having a single vertex buffer to draw at one time would be pretty nice.
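
    On the first question: a custom content processor can bake each node's transform into the mesh geometry at import time, after which every bone transform is the identity and the per-draw transform step becomes unnecessary. A sketch of the idea (it mirrors the FlattenTransforms helper in Microsoft's skinned-model sample; the class and display names are placeholders):

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Content.Pipeline;
        using Microsoft.Xna.Framework.Content.Pipeline.Graphics;
        using Microsoft.Xna.Framework.Content.Pipeline.Processors;

        // Bakes each node's transform into its vertices at import time, so
        // the draw loop no longer needs CopyAbsoluteBoneTransformsTo.
        [ContentProcessor(DisplayName = "Flattened Model Processor")]
        public class FlattenedModelProcessor : ModelProcessor
        {
            public override ModelContent Process(NodeContent input,
                                                 ContentProcessorContext context)
            {
                FlattenTransforms(input);
                return base.Process(input, context);
            }

            static void FlattenTransforms(NodeContent node)
            {
                foreach (NodeContent child in node.Children)
                {
                    // Push the child's local transform down into its geometry,
                    // then reset the node's transform to identity.
                    MeshHelper.TransformScene(child, child.Transform);
                    child.Transform = Matrix.Identity;
                    FlattenTransforms(child);
                }
            }
        }

    This does not merge everything into a single vertex/index buffer (the second question), but once all transforms are identity, such a merge becomes a straightforward follow-up step.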

    Read the article

  • Pokemon Yellow warp transitions

    - by Alex Koukoulas
    So I've been trying to make a pretty accurate clone of the good old Pokemon Yellow for quite some time now, and one subtle mechanic has been puzzling me. As you can see in the uploaded image, a certain colour manipulation is done in two stages after entering a warp to another game location (such as stairs or entering a building). One easy (and sloppy) way of achieving this, and the one I have been using so far, is to make three copies of each image rendered on the screen, each with its colours adjusted to match one stage of the transition. Of course, after a while this becomes tremendously time consuming. So my question is: does anyone know a better way of achieving this colour manipulation effect in Java? Thanks in advance, Alex

    Read the article

  • Ease Rotate RigidBody2D toward arbitrary angle

    - by Plastic Sturgeon
    I'm trying to make a Rigidbody2D circle return to an orientation after a collision, but I'm seeing behaviour I do not expect: it always orients to the same direction. This is what I call in FixedUpdate():

        rotationdifference = -halfPI + rigidbody2D.rotation;
        rigidbody2D.AddTorque(rotationdifference * ease);

    I would expect this to rotate 90 degrees (1/2 pi radians) off the neutral axis, but it does not. In fact it performs exactly the same as:

        rotationdifference = rigidbody2D.rotation;
        rigidbody2D.AddTorque(rotationdifference * ease);

    What is going on? How would I be able to set an angle I want it to ease towards, and then have it ease towards that angle when it's not colliding with some other force?
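
    One likely culprit: Rigidbody2D.rotation is measured in degrees, so subtracting halfPI (about 1.57) changes the value by less than two degrees, which would explain why both versions behave identically. The difference also needs the opposite sign, (target - current), so the torque pushes toward the target rather than away from it. A sketch of easing toward an arbitrary angle (Unity C#; ease and damping are tuning placeholders):

        using UnityEngine;

        public class OrientToAngle : MonoBehaviour
        {
            public float targetAngle = 90f;  // degrees, matching Rigidbody2D.rotation
            public float ease = 0.05f;
            public float damping = 0.5f;
            private Rigidbody2D body;

            void Awake()
            {
                body = GetComponent<Rigidbody2D>();
            }

            void FixedUpdate()
            {
                // Shortest signed difference between current and target, in degrees.
                float diff = Mathf.DeltaAngle(body.rotation, targetAngle);
                // Spring toward the target; damping against angular velocity
                // keeps it from oscillating around the goal forever.
                body.AddTorque(diff * ease - body.angularVelocity * damping);
            }
        }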

    Read the article

  • How do I change the LOD of geometry?

    - by ChaosDev
    I'm looking for a simple LOD algorithm that changes geometry vertices in order to decrease frame time. I've created an octree, but now I want an algorithm that modifies model or terrain vertices - not to increase detail (I'll look at tessellation later) but to decrease it. I want something like this. Questions: Can the same algorithm be applied correctly to both models and terrain? Do the indices need to be modified? Must I use an octree, or is a simple distance check between the camera and the object enough for the desired effect? Is a new indexCount value needed for the DrawIndexed function? Code:

        // m_LOD == 10 in the beginning
        // m_RawVerts - array of 3D vectors filled with values from the vertex buffer.
        void DecreaseLOD() {
            m_LOD--;
            if (m_LOD < 1) m_LOD = 1;
            RebuildGeometry();
        }

        void IncreaseLOD() {
            m_LOD++;
            if (m_LOD > 10) m_LOD = 10;
            RebuildGeometry();
        }

        void RebuildGeometry() {
            void* vertexRawData = new byte[m_VertexBufferSize];
            void* indexRawData = new DWORD[m_IndexCount];
            auto context = mp_D3D->mp_Context;
            D3D11_MAPPED_SUBRESOURCE data;
            ZeroMemory(&data, sizeof(D3D11_MAPPED_SUBRESOURCE));
            context->Map(mp_VertexBuffer->mp_buffer, 0, D3D11_MAP_READ, 0, &data);
            memcpy(vertexRawData, data.pData, m_VertexBufferSize);
            context->Unmap(mp_VertexBuffer->mp_buffer, 0);
            context->Map(mp_IndexBuffer->mp_buffer, 0, D3D11_MAP_READ, 0, &data);
            memcpy(indexRawData, data.pData, m_IndexBufferSize);
            context->Unmap(mp_IndexBuffer->mp_buffer, 0);
            DWORD* dwI = (DWORD*)indexRawData;
            int sz = (m_VertexStride / sizeof(float)); // size of vertex element
            // algorithm must be here.
            std::vector<Vector3d> vertices;
            int i = 0;
            for (int j = 0; j < m_VertexCount; j++) {
                float x1 = (((float*)vertexRawData)[0 + i]);
                float y1 = (((float*)vertexRawData)[1 + i]);
                float z1 = (((float*)vertexRawData)[2 + i]);
                Vector3d lv = Vector3d(x1, y1, z1);
                // my useless attempts
                if (j + m_LOD + 1 < m_RawVerts.size()) {
                    float v1 = VECTORHELPER::Distance(m_RawVerts[dwI[j]], m_RawVerts[dwI[j + m_LOD]]);
                    float v2 = VECTORHELPER::Distance(m_RawVerts[dwI[j]], m_RawVerts[dwI[j + m_LOD + 1]]);
                    if (v1 > v2)
                        lv = m_RawVerts[dwI[j + 1]];
                    else if (v2 < v1)
                        lv = m_RawVerts[dwI[j + 2]];
                }
                (((float*)vertexRawData)[0 + i]) = lv.x;
                (((float*)vertexRawData)[1 + i]) = lv.y;
                (((float*)vertexRawData)[2 + i]) = lv.z;
                i += sz; // pass other vertex format values through unchanged
            }
            for (int j = 0; j < m_IndexCount; j++) {
                // indices ?
            }
            // set vertices to device
            UpdateVertexes(vertexRawData, mp_VertexBuffer->getSize());
            delete[] vertexRawData;
            delete[] indexRawData;
        }
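
    On the octree question: an octree helps with culling many objects, but for choosing one object's detail level a plain camera-distance check is enough. And yes, the indices are what must change: a decimation pass (edge collapse is the usual technique) produces a smaller index list per level ahead of time, and the reduced count is what goes into DrawIndexed. A C# sketch of just the selection step (the names and the lodStep heuristic are illustrative):

        using System;
        using System.Numerics;

        static class LodSelect
        {
            // lodIndexCounts[0] is the full mesh; each later entry is the index
            // count of a precomputed, decimated level. Returns the count to
            // pass to DrawIndexed this frame.
            public static int SelectIndexCount(Vector3 camera, Vector3 objectCentre,
                                               int[] lodIndexCounts, float lodStep)
            {
                float distance = Vector3.Distance(camera, objectCentre);
                int lod = Math.Min((int)(distance / lodStep), lodIndexCounts.Length - 1);
                return lodIndexCounts[lod];
            }
        }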

    Read the article

  • 3D Texture Mapping (Atlas)

    - by Tim Hatch
    This is a pretty simple question: if I were to use multiple images in a single texture for a 3D cube, how would I go about reusing each vertex (having 8 total vs 24)? With a single buffer of 8 vertices, I don't see how I'd properly reuse the UV values. Any help with that? I know it's not terribly clear, but I figured it was a simple question. The 2D method is pretty easy; the next coordinates would be the same as the first (0,0 and 0,1 respectively). However, the 3D version above has me quite befuddled.
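
    The short answer: a GPU vertex is the whole attribute tuple, position and UV together, so a corner that needs three different UVs (one per adjoining face) has to be duplicated, and 24 vertices is the standard layout for an atlased cube; only a cube whose six faces share one image can get away with 8. A sketch of building one face from an atlas cell (XNA types; the names are illustrative):

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        static class AtlasCube
        {
            // One cube face whose UVs come from a sub-rectangle of the atlas.
            // Repeating this per face yields 24 vertices for the whole cube.
            public static VertexPositionTexture[] BuildFace(
                Vector3 a, Vector3 b, Vector3 c, Vector3 d,  // face corners
                Vector2 uvOrigin, Vector2 uvSize)            // atlas cell
            {
                return new[]
                {
                    new VertexPositionTexture(a, uvOrigin),
                    new VertexPositionTexture(b, uvOrigin + new Vector2(uvSize.X, 0)),
                    new VertexPositionTexture(c, uvOrigin + uvSize),
                    new VertexPositionTexture(d, uvOrigin + new Vector2(0, uvSize.Y)),
                };
            }
        }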

    Read the article

  • Algorithms for rainfall + river creation in procedurally generated terrain

    - by Peck
    I've recently become fascinated by the things that can be done with procedurally generated terrain, and have started experimenting with world building a bit. I'd like to be able to make worlds something like Dwarf Fortress, with biomes created by meshing together various maps. The first step is done: using the diamond-square algorithm I've created some nice heightmaps. Next, I would like to add some water features and have them generated somewhat realistically from rainfall. I've read about a few different approaches, such as starting at the high points of the map and "stepping" down to the lowest neighbouring point, pooling and eroding as the water works its way down to sea level. Are there any documented algorithms for this, or is it more off the cuff? Would love any advice/thoughts.
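
    The "step down from the high points" approach is usually described as droplet or rainfall simulation in hydraulic-erosion write-ups rather than under one canonical name. A minimal sketch of the downhill walk (C#; thresholding the resulting flow map to pick out rivers is an assumption about how the output would be used):

        static class Rainfall
        {
            // Drop water on a high cell and follow the lowest neighbour until
            // reaching sea level or a pit. Cells crossed often accumulate flow;
            // high-flow cells become rivers, pits become candidate lakes.
            public static void TraceDroplet(float[,] height, int[,] flow,
                                            int x, int y, float seaLevel)
            {
                while (height[x, y] > seaLevel)
                {
                    flow[x, y]++;
                    int bestX = x, bestY = y;
                    for (int dx = -1; dx <= 1; dx++)
                    {
                        for (int dy = -1; dy <= 1; dy++)
                        {
                            int nx = x + dx, ny = y + dy;
                            if (nx < 0 || ny < 0 ||
                                nx >= height.GetLength(0) || ny >= height.GetLength(1))
                                continue;
                            if (height[nx, ny] < height[bestX, bestY])
                            {
                                bestX = nx;
                                bestY = ny;
                            }
                        }
                    }
                    if (bestX == x && bestY == y)
                        break;  // local minimum: the water pools here (a lake)
                    x = bestX;
                    y = bestY;
                }
            }
        }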

    Read the article

  • Permanently sync a Wiimote with a computer

    - by Adam Geisweit
    I have tried many ways to sync my Wiimotes with my computer so that I can program games with them, but every time the pairing is only temporary, or a tool that claims it can pair permanently doesn't actually do it. It gets tiresome having to reconnect every time I want to save battery life. How can I sync a Wiimote with my computer so that if I turn the Wiimote off, I can just hit any button and it will automatically sync up again?

    Read the article

  • XNA 4.0 Refresh AudioEngine, WaveBank and Others Not Found

    - by Peteyslatts
    I'm going through the Learning XNA 4.0 book, and unfortunately I installed XNA 4.0 Refresh. All the code up until now has worked, with the exception of needing to remove the Framework.Net and Framework.Storage references. (As a side question, will this be problematic later?) The problem I'm having now is that in my Game1.cs file I have imported all of the Xna.Framework libraries, and when I try to create instances of any of the following classes, an error pops up saying Visual Studio can't find them: AudioEngine, WaveBank, SoundBank, and Cue. I have googled around for a while, and the only solution I saw was to import Microsoft.Xna.Framework.Xact, but that doesn't seem to exist for me. Any help is much appreciated. Thanks, Peter.
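
    For reference, in XNA 4.0 these classes live in the Microsoft.Xna.Framework.Audio namespace (there is no Microsoft.Xna.Framework.Xact namespace, as far as I know), and XACT audio is only supported for Windows and Xbox 360 projects, not Windows Phone, so the project type matters. Once the types resolve, minimal setup looks like this (the asset names are placeholders for the files an XACT project builds):

        using Microsoft.Xna.Framework.Audio;

        // Typically in LoadContent():
        AudioEngine audioEngine = new AudioEngine(@"Content\audio.xgs");
        WaveBank waveBank = new WaveBank(audioEngine, @"Content\Wave Bank.xwb");
        SoundBank soundBank = new SoundBank(audioEngine, @"Content\Sound Bank.xsb");

        soundBank.PlayCue("shot");  // hypothetical cue name
        audioEngine.Update();       // call once per frame, e.g. in Update()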

    Read the article

  • Problems with SAT Collision Detection

    - by DJ AzKai
    I'm doing a project in one of my modules for college, in C++ with SFML, and I was hoping someone might be able to help me. I'm using a vector of squares and triangles, and the SAT collision detection method to see if objects collide and to make the objects respond to the collision appropriately using the MTV (minimum translation vector). Below is my code:

        // from the main method
        int main() {
            // Create the main window
            sf::RenderWindow App(sf::VideoMode(800, 600, 32), "SFML OpenGL");
            // Create a clock for measuring time elapsed
            sf::Clock Clock;
            srand(time(0));

            // prepare OpenGL surface for HSR
            glClearDepth(1.f);
            glClearColor(0.3f, 0.3f, 0.3f, 0.f); // background colour
            glEnable(GL_DEPTH_TEST);
            glDepthMask(GL_TRUE);

            // Setup a perspective projection & Camera position
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            // set up a 3D perspective view volume
            //gluPerspective(90.f, 1.f, 1.f, 300.0f); // fov, aspect, zNear, zFar
            // set up an orthographic projection the same size as the window;
            // this means the vertex coordinates are in pixel space
            glOrtho(0, 800, 0, 600, 0, 1); // use pixel coordinates

            // Finally, display rendered frame on screen
            vector<BouncingThing*> triangles;
            for (int i = 0; i < 10; i++) {
                // instantiate each triangle
                triangles.push_back(new BouncingTriangle(Vector2f(rand() % 700, rand() % 500), 3));
            }
            vector<BouncingThing*> boxes;
            for (int i = 0; i < 10; i++) {
                // instantiate each box
                boxes.push_back(new BouncingBox(Vector2f(rand() % 700, rand() % 500), 4));
            }
            CollisionDetection* b = new CollisionDetection();

            // Start game loop
            while (App.isOpen()) {
                // Process events
                sf::Event Event;
                while (App.pollEvent(Event)) {
                    // Close window : exit
                    if (Event.type == sf::Event::Closed)
                        App.close();
                    // Escape key : exit
                    if ((Event.type == sf::Event::KeyPressed) && (Event.key.code == sf::Keyboard::Escape))
                        App.close();
                }

                // Prepare for drawing: clear color and depth buffer
                glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
                // Apply some transformations
                glMatrixMode(GL_MODELVIEW);
                glLoadIdentity();

                for (int i = 0; i < 10; i++) {
                    triangles[i]->draw();
                    boxes[i]->draw();
                    triangles[i]->update(Vector2f(800, 600));
                    boxes[i]->draw();
                    boxes[i]->update(Vector2f(800, 600));
                }
                for (int j = 0; j < 10; j++) {
                    for (int i = 0; i < 10; i++) {
                        triangles[j]->setCollision(b->CheckCollision(*(triangles[j]), *(boxes[i])));
                    }
                }
                for (int j = 0; j < 10; j++) {
                    for (int i = 0; i < 10; i++) {
                        boxes[j]->setCollision(b->CheckCollision(*(boxes[j]), *(triangles[i])));
                    }
                }
                for (int i = 0; i < triangles.size(); i++) {
                    for (int j = i + 1; j < triangles.size(); j++) {
                        triangles[j]->setCollision(b->CheckCollision(*(triangles[j]), *(triangles[i])));
                    }
                }
                for (int i = 0; i < triangles.size(); i++) {
                    for (int j = i + 1; j < triangles.size(); j++) {
                        boxes[j]->setCollision(b->CheckCollision(*(boxes[j]), *(boxes[i])));
                    }
                }
                App.display();
            }
            return EXIT_SUCCESS;
        }

        // from BouncingThing.cpp
        BouncingThing::BouncingThing(Vector2f position, int noSides)
            : pos(position), pi(3.14), radius(3.14), nSides(noSides) {
            collided = false;
            if (nSides == 3) {
                Vector2f vert1 = Vector2f(-12.0f, -12.0f);
                Vector2f vert2 = Vector2f(0.0f, 12.0f);
                Vector2f vert3 = Vector2f(12.0f, -12.0f);
                verts.push_back(vert1);
                verts.push_back(vert2);
                verts.push_back(vert3);
            }
            else if (nSides == 4) {
                Vector2f vert1 = Vector2f(-12.0f, 12.0f);
                Vector2f vert2 = Vector2f(12.0f, 12.0f);
                Vector2f vert3 = Vector2f(12.0f, -12.0f);
                Vector2f vert4 = Vector2f(-12.0f, -12.0f);
                verts.push_back(vert1);
                verts.push_back(vert2);
                verts.push_back(vert3);
                verts.push_back(vert4);
            }
            velocity.x = ((rand() % 5 + 1) / 3) + 1;
            velocity.y = ((rand() % 5 + 1) / 3) + 1;
        }

        void BouncingThing::update(Vector2f screenSize) {
            Transform t;
            t.rotate(0);
            for (int i = 0; i < verts.size(); i++) {
                verts[i] = t.transformPoint(verts[i]);
            }
            if (pos.x >= screenSize.x || pos.x <= 0) {
                velocity.x *= -1;
            }
            if (pos.y >= screenSize.y || pos.y <= 0) {
                velocity.y *= -1;
            }
            if (collided) {
                //velocity.x *= -1;
                //velocity.y *= -1;
                collided = false;
            }
            pos += velocity;
        }

        void BouncingThing::setCollision(bool x) {
            collided = x;
        }

        void BouncingThing::draw() {
            glBegin(GL_POLYGON);
            glColor3f(0, 1, 0);
            for (int i = 0; i < verts.size(); i++) {
                glVertex2f(pos.x + verts[i].x, pos.y + verts[i].y);
            }
            glEnd();
        }

        vector<Vector2f> BouncingThing::getNormals() {
            vector<Vector2f> normalVerts;
            if (nSides == 3) {
                Vector2f ab = Vector2f((verts[1].x + pos.x) - (verts[0].x + pos.x),
                                       (verts[1].y + pos.y) - (verts[0].y + pos.y));
                ab = flip(ab);
                ab.x *= -1;
                normalVerts.push_back(ab);
                Vector2f bc = Vector2f((verts[2].x + pos.x) - (verts[1].x + pos.x),
                                       (verts[2].y + pos.y) - (verts[1].y + pos.y));
                bc = flip(bc);
                bc.x *= -1;
                normalVerts.push_back(bc);
                Vector2f ac = Vector2f((verts[2].x + pos.x) - (verts[0].x + pos.x),
                                       (verts[2].y + pos.y) - (verts[0].y + pos.y));
                ac = flip(ac);
                ac.x *= -1;
                normalVerts.push_back(ac);
                return normalVerts;
            }
            if (nSides == 4) {
                Vector2f ab = Vector2f((verts[1].x + pos.x) - (verts[0].x + pos.x),
                                       (verts[1].y + pos.y) - (verts[0].y + pos.y));
                ab = flip(ab);
                ab.x *= -1;
                normalVerts.push_back(ab);
                Vector2f bc = Vector2f((verts[2].x + pos.x) - (verts[1].x + pos.x),
                                       (verts[2].y + pos.y) - (verts[1].y + pos.y));
                bc = flip(bc);
                bc.x *= -1;
                normalVerts.push_back(bc);
                return normalVerts;
            }
        }

        Vector2f BouncingThing::flip(Vector2f v) {
            float vyTemp = v.x;
            float vxTemp = v.y * -1;
            return Vector2f(vxTemp, vyTemp);
        }

        // from CollisionDetection.cpp
        CollisionDetection::CollisionDetection() {
        }

        vector<float> CollisionDetection::bubbleSort(vector<float> w) {
            int temp;
            bool finished = false;
            while (!finished) {
                finished = true;
                for (int i = 0; i < w.size() - 1; i++) {
                    if (w[i] > w[i + 1]) {
                        temp = w[i];
                        w[i] = w[i + 1];
                        w[i + 1] = temp;
                        finished = false;
                    }
                }
            }
            return w;
        }

        class Vector {
        public:
            //static int dp_count;
            static float dot(sf::Vector2f a, sf::Vector2f b) {
                //dp_count++;
                return a.x * b.x + a.y * b.y;
            }
            static float length(sf::Vector2f a) {
                return sqrt(a.x * a.x + a.y * a.y);
            }
            static Vector2f add(Vector2f a, Vector2f b) {
                return Vector2f(a.x + b.y, a.y + b.y);
            }
            static sf::Vector2f getNormal(sf::Vector2f a, sf::Vector2f b) {
                sf::Vector2f n;
                n = a - b;
                n /= Vector::length(n); // normalise
                float x = n.x;
                n.x = n.y;
                n.y = -x;
                return n;
            }
        };

        bool CollisionDetection::CheckCollision(BouncingThing& x, BouncingThing& y) {
            vector<Vector2f> xVerts = x.getVerts();
            vector<Vector2f> yVerts = y.getVerts();
            vector<Vector2f> xNormals = x.getNormals();
            vector<Vector2f> yNormals = y.getNormals();
            int size;
            vector<float> xRange;
            vector<float> yRange;
            for (int j = 0; j < xNormals.size(); j++) {
                Vector p;
                for (int i = 0; i < xVerts.size(); i++) {
                    xRange.push_back(p.dot(xNormals[j], Vector2f(xVerts[i].x, xVerts[i].x)));
                }
                for (int i = 0; i < yVerts.size(); i++) {
                    yRange.push_back(p.dot(xNormals[j], Vector2f(yVerts[i].x, yVerts[i].y)));
                }
                yRange = bubbleSort(yRange);
                xRange = bubbleSort(xRange);
                if (xRange[xRange.size() - 1] < yRange[0] || yRange[yRange.size() - 1] < xRange[0]) {
                    return false;
                }
                float x3 = Min(xRange[0], yRange[0]);
                float y3 = Max(xRange[xRange.size() - 1], yRange[yRange.size() - 1]);
                float length = Max(x3, y3) - Min(x3, y3);
            }
            for (int j = 0; j < yNormals.size(); j++) {
                Vector p;
                for (int i = 0; i < xVerts.size(); i++) {
                    xRange.push_back(p.dot(yNormals[j], xVerts[i]));
                }
                for (int i = 0; i < yVerts.size(); i++) {
                    yRange.push_back(p.dot(yNormals[j], yVerts[i]));
                }
                yRange = bubbleSort(yRange);
                xRange = bubbleSort(xRange);
                if (xRange[xRange.size() - 1] < yRange[0] || yRange[yRange.size() - 1] < xRange[0]) {
                    return false;
                }
            }
            return true;
        }

        float CollisionDetection::Min(float min, float max) {
            if (max < min) {
                min = max;
            }
            else
                return min;
        }

        float CollisionDetection::Max(float min, float max) {
            if (min > max) {
                max = min;
            }
            else
                return min;
        }

    On screen, the objects freeze for a small amount of time before moving off again. However, the problem is that when this happens no collisions are actually taking place, and I would really love to find out where the flaw in the code is. If you need any more information or code, please don't hesitate to ask and I'll reply as soon as possible. Regards, AzKai
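
    A few things stand out in the posted code: the first projection loop builds Vector2f(xVerts[i].x, xVerts[i].x) (the second component should be .y); xRange and yRange are never cleared, so projections from earlier axes stay in the vectors and mask real separations on later axes; and the Min/Max helpers return nothing on one branch, which is undefined behaviour. For comparison, a minimal C# sketch of the per-axis SAT test (System.Numerics types; vertices are assumed already in world space):

        using System;
        using System.Numerics;

        static class Sat
        {
            // Project both shapes onto one axis and compare the 1D intervals.
            // A fresh min/max per axis is the essential detail.
            static bool Overlaps(Vector2[] a, Vector2[] b, Vector2 axis)
            {
                float minA = float.MaxValue, maxA = float.MinValue;
                foreach (Vector2 v in a)
                {
                    float p = Vector2.Dot(v, axis);
                    minA = Math.Min(minA, p);
                    maxA = Math.Max(maxA, p);
                }
                float minB = float.MaxValue, maxB = float.MinValue;
                foreach (Vector2 v in b)
                {
                    float p = Vector2.Dot(v, axis);
                    minB = Math.Min(minB, p);
                    maxB = Math.Max(maxB, p);
                }
                return maxA >= minB && maxB >= minA;  // intervals intersect
            }

            // The shapes collide only if no face normal of either shape
            // separates them.
            public static bool CheckCollision(Vector2[] aVerts, Vector2[] bVerts,
                                              Vector2[] aNormals, Vector2[] bNormals)
            {
                foreach (Vector2 n in aNormals)
                    if (!Overlaps(aVerts, bVerts, n)) return false;
                foreach (Vector2 n in bNormals)
                    if (!Overlaps(aVerts, bVerts, n)) return false;
                return true;
            }
        }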

    Read the article

  • How should I load level data in Java?

    - by Matthew G.
    I'm setting up my engine for a certain action/arcade game to have a set of commands that would look something like this:

        Set landscape to grass
        Create rocks at ...
        Create player at X, Y
        Set goal to "Get to point X Y"
        Spawn enemy at X, Y

    I'd then have each object knowing what it has to do and acting on its own. I've been thinking about how to store this data. External data files could be parsed by a level class, and certain objects could be spawned through that. I could also create a base level class and extend it for each level, but that would create a large number of classes. Another idea is to have one level parser class with a case for each level. This would be extremely silly and bulky, but I mention it because I found myself doing exactly that at 2 AM last night. I'm finally getting why I have to plan out my inheritances, though. RIP project. I might be completely missing another option.

    Read the article

  • What kind of physics to choose for our arcade 3D MMO?

    - by Nick
    We're creating an action MMO using Three.js (WebGL) with an arcade-ish feel, and implementing physics for it has been a pain in the butt. Our game has terrain the character walks on, and in the future 3D objects (a house, a tree, etc.) that will have collisions. In terms of complexity, the physics engine should be on the level of World of Warcraft: we don't need friction, bouncing behaviour or anything more complex like joints - just gravity. I have managed to implement terrain physics so far by casting a ray downwards, but it does not take into account possible 3D objects. Note that these 3D objects need to have convex collisions, so our artists can create a 3D house and the player can walk inside but can't walk through the walls. How do I implement proper collision detection with 3D objects like in World of Warcraft? Do I need an advanced physics engine? I read about Physijs, which looks cool, but I fear it may be overkill for our game. Also, how does WoW do it? Do they have a separate raycasting system for the terrain, or do they treat the terrain like any other convex mesh? A screenshot of our game so far:

    Read the article

  • Calculating a child object's Position, Rotation and Scale values?

    - by Sergio Plascencia
    I am making my own game editor, but have encountered the following problem: I have two objects, A and B.

        A's initial values: Position (3, 3, 3), Rotation (45, 10, 0), Scale (1, 2, 2.5)
        B's initial values: Position (1, 1, 1), Rotation (10, 34, 18), Scale (1.5, 2, 1)

    If I now make B a child of A, I need to recalculate B's position, rotation and scale relative to A, such that it maintains its current position, rotation and scale in world coordinates. So B's position would now be (-2, -2, -2), since A is now its center and (-2, -2, -2) keeps B in the same place. I think I have the position and scale figured out, but not rotation. So I opened Unity and ran the same example, and I noticed that when making a child object, the child object did not move at all, but had its position, rotation and scale values changed relative to the parent. For example:

        Unity (parent object "A"): Position (0, 0, 0), Rotation (45, 10, 0), Scale (1, 1, 1)
        Unity (child object "B"): Position (0, 0, 0), Rotation (0, 0, 0), Scale (1, 1, 1)

    When B becomes a child of A, its rotation values become:

        X: -44.13605, Y: -14.00195, Z: 9.851074

    If I plug the same rotation values into the B object in my editor, the object does not move at all. How did Unity arrive at those rotation values for the child? What are the calculations? If you can put down all the equations for position, rotation and scale, I can double-check that I am doing it correctly, but rotation is what I really need.
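
    For reference, the general recipe works in matrices and only converts back to position/rotation/scale at the end: build each object's world matrix, then the child's new local matrix is the one that, combined with the parent's world matrix, reproduces the child's old world matrix. A sketch with XNA-style row-vector matrices (the yaw/pitch/roll composition order is an assumption; Unity applies its Euler angles in its own Z-X-Y order, which is part of why its numbers look unintuitive):

        using Microsoft.Xna.Framework;

        // World matrices built from the question's values (scale * rotation * translation).
        Matrix parentWorld = Matrix.CreateScale(new Vector3(1f, 2f, 2.5f))
                           * Matrix.CreateFromYawPitchRoll(
                                 MathHelper.ToRadians(10), MathHelper.ToRadians(45), 0)
                           * Matrix.CreateTranslation(new Vector3(3, 3, 3));
        Matrix childWorld = Matrix.CreateScale(new Vector3(1.5f, 2f, 1f))
                          * Matrix.CreateFromYawPitchRoll(
                                MathHelper.ToRadians(34), MathHelper.ToRadians(10), MathHelper.ToRadians(18))
                          * Matrix.CreateTranslation(new Vector3(1, 1, 1));

        // childLocal * parentWorld == childWorld, therefore:
        Matrix childLocal = childWorld * Matrix.Invert(parentWorld);

        // Back to position/rotation/scale.
        Vector3 scale, translation;
        Quaternion rotation;
        childLocal.Decompose(out scale, out rotation, out translation);

    One caveat: a parent with non-uniform scale plus rotation can introduce shear, which no position/rotation/scale triple represents exactly, so Decompose is only guaranteed for well-behaved transforms.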

    Read the article

  • Make OpenGL game perform better

    - by Csabi
    I have programmed an OpenGL game which contains just one F1 car and a track. It is very simple and only uses around 10,000-20,000 triangles. It should run on any PC, but it won't: it needs a really good graphics card to run at a decent framerate. Can you suggest some methods, or links to sites, that would help me make my scene/game more efficient? My game can be downloaded from here, or directly from here.

    Read the article

  • Using glReadBuffer/glReadPixels returns black image instead of the actual image only on Intel cards

    - by cloudraven
    I have this piece of code:

        glReadBuffer( GL_FRONT );
        glReadPixels( 0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, buffer );

    It works perfectly on all the Nvidia and AMD GPUs I have tried, but fails on almost every single Intel integrated GPU I have tried. It actually works on a very old 945GME, but fails on all the others: instead of a screenshot I just get a black image. If it helps, I am working with the Doom 3 engine, and that code is derived from the built-in screen capture code. By the way, even with the original game I cannot take screen captures on those Intel devices anyway. My guess is that they are not implementing the standard correctly or something. Is there a workaround for this?

    Read the article

  • Complete Math Library for use in OpenGL ES 2.0 Game?

    - by Bunkai.Satori
    Are you aware of a complete (or almost complete) cross-platform math library for use in OpenGL ES 2.0 games? The library should contain:

        Matrix2x2, Matrix3x3, Matrix4x4 classes
        Quaternions
        Vector2, Vector3, Vector4 classes
        Euler angle class
        Operations among the above classes, conversions, etc.
        Commonly used math operations in 3D graphics (dot product, cross product, SLERP, etc.)

    Is such a math API available, either standalone or as part of any package? Programming language: Visual C++, but planned to be ported to OS X and Android.

    Read the article

  • Using elapsed time for SlowMo in XNA

    - by Dave Voyles
    I'm trying to create a slow-mo effect in my pong game, so that when a player presses a button the paddles and ball will suddenly move at a far slower speed. I believe my understanding of the concepts of adjusting the timing in XNA is sound, but I'm not sure how to incorporate it into my design exactly. The updates for my bats (paddles) are done in my Bat.cs class:

        /// <summary>
        /// Controls the bat moving up the screen
        /// </summary>
        public void MoveUp()
        {
            SetPosition(Position + new Vector2(0, -moveSpeed));
        }

        /// <summary>
        /// Controls the bat moving down the screen
        /// </summary>
        public void MoveDown()
        {
            SetPosition(Position + new Vector2(0, moveSpeed));
        }

        /// <summary>
        /// Updates the position of the AI bat, in order to track the ball
        /// </summary>
        /// <param name="ball"></param>
        public virtual void UpdatePosition(Ball ball)
        {
            size.X = (int)Position.X;
            size.Y = (int)Position.Y;
        }

    while the rest of my game updates are done in my GameplayScreen.cs class (I'm using the XNA game state management sample):

        class GameplayScreen
        {
            ...
            bool slow;
            ...

            public override void Update(GameTime gameTime, bool otherScreenHasFocus,
                                        bool coveredByOtherScreen)
            {
                base.Update(gameTime, otherScreenHasFocus, false);
                if (IsActive)
                {
                    // SlowMo stuff
                    Elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds;
                    if (Slowmo) Elapsed *= .8f;
                    MoveTimer += Elapsed;

                    double elapsedTime = gameTime.ElapsedGameTime.TotalMilliseconds;
                    if (Keyboard.GetState().IsKeyDown(Keys.Up)) slow = true;
                    else if (Keyboard.GetState().IsKeyDown(Keys.Down)) slow = false;
                    if (slow == true) elapsedTime *= .1f;

                    // Updating bat position
                    leftBat.UpdatePosition(ball);
                    rightBat.UpdatePosition(ball);

                    // Updating the ball position
                    ball.UpdatePosition();

    and finally my fixed time step is declared in the constructor of my Game1.cs class:

        /// <summary>
        /// The main game constructor.
        /// </summary>
        public Game1()
        {
            IsFixedTimeStep = slow = false;
        }

    So my question is: where do I place the MoveTimer or elapsedTime, so that my bat will slow down accordingly?
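
    The reason a timer alone never slows the bats: MoveUp and MoveDown add a fixed moveSpeed per call, so nothing in them reads elapsedTime at all; the scaled time has to reach the movement itself. A sketch (the elapsed-taking overloads are hypothetical additions to Bat and Ball, with moveSpeed becoming units per second):

        // In GameplayScreen.Update: one scaled elapsed value per frame.
        float elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds;
        if (slow)
            elapsed *= 0.1f;  // slow-mo factor

        leftBat.UpdatePosition(ball, elapsed);
        rightBat.UpdatePosition(ball, elapsed);
        ball.UpdatePosition(elapsed);

        // In Bat.cs: movement consumes the scaled time.
        public void MoveUp(float elapsed)
        {
            SetPosition(Position + new Vector2(0, -moveSpeed * elapsed));
        }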

    Read the article

  • Depth is disabled - how do I turn it on?

    - by marc wellman
    In XNA 3.1, is there any way to disable depth in 3D worlds using DirectX models other than GraphicsDevice.RenderState.DepthBufferEnable = false;? The reason for my question: I have quite a huge program which offers a 3D world with a couple of 3D DirectX models inside. Depth was never an issue, since it always worked fine, but for a few days now, after some modifications, my models are all depth-translucent, i.e. depth buffering and/or culling seems to be disabled. Yet in my whole source code I never touch any of the options related to depth or culling: I never turn these settings on explicitly, nor turn them off anywhere. So I am searching for some other statement, maybe related to the GraphicsDevice, that implicitly turns depth off - but I can't find it. (Sorry that I don't post any source code, but I have too much of it and I simply don't know where to search.) UPDATE: These are a couple of simple objects seen with correct depth. These are the same objects in their current state.
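
    One implicit depth-killer in XNA 3.1 that matches the "I never touch those settings" symptom is SpriteBatch: Begin/End set several render states, including DepthBufferEnable = false, and leave them that way. If any 2D drawing happens before the models, restoring the 3D states afterwards is the usual fix; a sketch:

        // After SpriteBatch.End(), before drawing 3D models:
        GraphicsDevice.RenderState.DepthBufferEnable = true;
        GraphicsDevice.RenderState.DepthBufferWriteEnable = true;
        GraphicsDevice.RenderState.CullMode = CullMode.CullCounterClockwiseFace;
        GraphicsDevice.RenderState.AlphaBlendEnable = false;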

    Read the article

  • Rotation of the ViewPlatform in Java3D

    - by user29163
    I have just started with Java3D programming. I thought I had built up some basic intuition about how the scene graph works, but something that should work does not. I made a simple program that rotates a pyramid around the y-axis, done just by adding a RotationInterpolator to the TransformGroup above the pyramid. Then I thought: hey, can I now remove the RotationInterpolator from that TransformGroup and add it to the TransformGroup above my ViewPlatform leaf? This should work if I have understood how things work: adding the RotationInterpolator to this TransformGroup should make the children of this TransformGroup rotate, and the ViewPlatform is a child of the TransformGroup. Any ideas on where my reasoning is flawed? Here is the code for setting up the universe and the view branch group:

        import java.awt.*;
        import java.awt.event.*;
        import javax.media.j3d.*;
        import javax.vecmath.*;

        public class UniverseBuilder {
            // User-specified canvas
            Canvas3D canvas;
            // Scene graph elements to which the user may want access
            VirtualUniverse universe;
            Locale locale;
            TransformGroup vpTrans;
            View view;

            public UniverseBuilder(Canvas3D c) {
                this.canvas = c;

                // Establish a virtual universe that has a single hi-res Locale
                universe = new VirtualUniverse();
                locale = new Locale(universe);

                // Create a PhysicalBody and PhysicalEnvironment object
                PhysicalBody body = new PhysicalBody();
                PhysicalEnvironment environment = new PhysicalEnvironment();

                // Create a View and attach the Canvas3D and the physical
                // body and environment to the view.
                view = new View();
                view.addCanvas3D(c);
                view.setPhysicalBody(body);
                view.setPhysicalEnvironment(environment);

                // Create a BranchGroup node for the view platform
                BranchGroup vpRoot = new BranchGroup();

                // Create a ViewPlatform object, and its associated
                // TransformGroup object, and attach it to the root of the
                // subgraph. Attach the view to the view platform.
                Transform3D t = new Transform3D();
                Transform3D s = new Transform3D();
                t.set(new Vector3f(0.0f, 0.0f, 10.0f));
                t.rotX(-Math.PI / 4);
                s.set(new Vector3f(0.0f, 0.0f, 10.0f)); // change values here to alter the viewing position
                t.mul(s);
                ViewPlatform vp = new ViewPlatform();
                vpTrans = new TransformGroup(t);
                vpTrans.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);

                // Rotator stuff
                Transform3D yAxis = new Transform3D();
                //yAxis.rotY(Math.PI/2);
                Alpha rotationAlpha = new Alpha(
                        -1, Alpha.INCREASING_ENABLE,
                        0, 0, 4000, 0, 0, 0, 0, 0);
                RotationInterpolator rotator = new RotationInterpolator(
                        rotationAlpha, vpTrans, yAxis, 0.0f, (float) Math.PI * 2.0f);
                RotationInterpolator rotator2 = new RotationInterpolator(
                        rotationAlpha, vpTrans);
                BoundingSphere bounds = new BoundingSphere(new Point3d(0.0, 0.0, 0.0), 1000.0);
                rotator.setSchedulingBounds(bounds);
                vpTrans.addChild(rotator);
                vpTrans.addChild(vp);
                vpRoot.addChild(vpTrans);
                view.attachViewPlatform(vp);

                // Attach the branch graph to the universe, via the
                // Locale. The scene graph is now live!
                locale.addBranchGraph(vpRoot);
            }

            public void addBranchGraph(BranchGroup bg) {
                locale.addBranchGraph(bg);
            }
        }

    Read the article

  • Smooth waypoint traversing

    - by TheBroodian
    There are a dozen ways I could word this question, but to keep my thoughts in line I'm phrasing it around the problem at hand. I'm creating a floating platform that should simply travel from one designated point to another, return to the first, and just pass between the two in a straight line. However, just to make it a little more interesting, I want to add a few rules to the platform. I'm coding it to travel in multiples of whole tile values of the world data, so if the platform is not stationary, it travels at least one whole tile width or tile height. Within one tile length, I would like it to accelerate from a stop to a given max speed. Upon reaching one tile length's distance from its goal, I would like it to slow to a stop at the given tile coordinate, and then repeat the process in reverse. The first two parts aren't too difficult; essentially, I'm having trouble with the third part. I would like the platform to stop exactly at a tile coordinate, but since I'm working with acceleration, it would seem easy to simply begin applying acceleration in the opposite direction to a value storing the platform's current speed once it reaches one tile's length of distance (assuming the platform is travelling more than one tile length, but to keep things simple, let's just assume it is). But then the question is: what would the correct acceleration value be to produce this effect? How would I find that value?
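
    The third part has a closed-form answer: with constant acceleration a, speeds obey v² = v₀² + 2·a·d, so decelerating from vMax to rest over exactly one tile length d requires a = vMax² / (2·d), applied against the motion starting one tile length before the stop point. A one-line helper:

        // From v^2 = 2*a*d: the constant deceleration that brings a platform
        // moving at vMax to rest in exactly distance d (one tile length).
        static float BrakingAcceleration(float vMax, float d)
        {
            return (vMax * vMax) / (2f * d);
        }

    In practice, floating-point drift leaves the platform a hair off the coordinate, so snapping to the exact tile position when the speed crosses zero hides the residue.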

    Read the article

  • How do I find the angle required to point to another object?

    - by Ginamin
    I am making an air combat game, where you can fly a ship in 3D space. There is an opponent that flies around as well. When the opponent is not on screen, I want to display an arrow pointing in the direction the player should turn, as such. So, I took the camera location and the opponent location and did this:

        double newDirection = atan2(activeCamera.location.y - ship_wrap.location.y,
                                    activeCamera.location.x - ship_wrap.location.x);

    After which, I get the position on the circumference of a circle which surrounds my crosshairs, like such:

        trackingArrow.position = point((60 * sin(angle) + 240), 60 * cos(angle) + 160);

    It all works fine, except it's the wrong angle! I assume my calculation of the new direction is incorrect. Can anyone help?
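
    Two details stand out in the post: the angle measures the direction from the ship to the camera rather than toward the opponent, and the placement pairs sin with x and cos with y, a different convention from atan2(dy, dx) that effectively rotates the result by 90 degrees. A sketch using one consistent convention (System.Numerics types; targetScreenPos, the opponent's position projected into screen space, is assumed available):

        using System;
        using System.Numerics;

        static Vector2 ArrowPosition(Vector2 centre, Vector2 targetScreenPos, float radius)
        {
            // Atan2 takes (dy, dx); cos then pairs with x and sin with y,
            // so the angle and the placement share one convention.
            float angle = (float)Math.Atan2(targetScreenPos.Y - centre.Y,
                                            targetScreenPos.X - centre.X);
            return centre + radius * new Vector2((float)Math.Cos(angle),
                                                 (float)Math.Sin(angle));
        }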

    Read the article

  • How to keep a data structure synchronized over a network?

    - by David Gouveia
    Context: In the game I'm working on (a sort of point-and-click graphic adventure), pretty much everything that happens in the game world is controlled by an action manager that is structured a bit like: So for instance, if the result of examining an object should make the character say hello, walk a bit and then sit down, I simply spawn the following code:

        var actionGroup = actionManager.CreateGroup();
        actionGroup.Add(new TalkAction("Guybrush", "Hello there!"));
        actionGroup.Add(new WalkAction("Guybrush", new Vector2(300, 300)));
        actionGroup.Add(new SetAnimationAction("Guybrush", "Sit"));

    This creates a new action group (an entire line in the image above) and adds it to the manager. All of the groups are executed in parallel, but actions within each group are chained together, so that the second one only starts after the first one finishes. When the last action in a group finishes, the group is destroyed. Problem: Now I need to replicate this information across a network, so that in a multiplayer session all players see the same thing. Serializing the individual actions is not the problem, but I'm an absolute beginner when it comes to networking and I have a few questions. I think for the sake of simplicity in this discussion we can abstract the action manager component to simply:

        var actionManager = new List<List<string>>();

    How should I proceed to keep the contents of the above data structure synchronized between all players? Besides the core question, I also have a few other concerns related to it (i.e. all possible implications of the same problem above): If I use a server/client architecture (with one of the players acting as both server and client), and one of the clients has spawned a group of actions, should he add them directly to the manager, or only send a request to the server, which in turn will order every client to add that group? What about packet losses and the like? The game is deterministic, but I'm thinking that any discrepancy in the sequence of actions executed on a client could lead to inconsistent world states. How do I safeguard against that sort of problem? What if I add too many actions at once - won't that cause problems for the connection? Any way to alleviate that?
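
    On the first concern, the usual answer is to let only the server order the world: clients send requests, the server stamps each accepted group with a sequence number and broadcasts it, and every client (including the requester) applies groups strictly in sequence. The sequence numbers also make packet loss detectable, so a lost message can be re-requested before anything later is applied. A hedged sketch of the client side (all names are illustrative):

        using System.Collections.Generic;

        class ActionGroupMessage
        {
            public int Sequence;          // assigned by the server
            public List<string> Actions;  // one serialized action group
        }

        class ActionSync
        {
            private readonly Dictionary<int, ActionGroupMessage> pending =
                new Dictionary<int, ActionGroupMessage>();
            private readonly List<List<string>> actionManager;  // the abstracted manager
            private int nextExpected;

            public ActionSync(List<List<string>> manager)
            {
                actionManager = manager;
            }

            // Called for every message from the server; arrival order is not
            // guaranteed, so buffer and apply strictly in sequence.
            public void OnMessage(ActionGroupMessage msg)
            {
                pending[msg.Sequence] = msg;
                ActionGroupMessage next;
                while (pending.TryGetValue(nextExpected, out next))
                {
                    actionManager.Add(next.Actions);  // same order on every client
                    pending.Remove(nextExpected);
                    nextExpected++;
                }
            }
        }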

    Read the article

  • Unity 3D - Error BCE0019, "'paused' is not a member of 'PauseScript'"

    - by user3666251
    I am trying to make a game for Android in Unity, and have come to the part where I have to make a pause menu. I made a GUITexture, placed it in the top right corner of the screen, and then attached this script to it:

        #pragma strict

        function OnMouseDown() {
            this.paused = !this.paused;
        }

        function OnGUI() {
            if (this.paused) {
                if (GUI.Button(Rect(10, 10, 100, 50), "Restart")) {
                    Application.LoadLevel(Application.loadedLevel);
                }
                // Insert the rest of the pause menu logic
            }
        }

    It gives me this error:

        Assets/Scripts/PauseScript.js(4,10): BCE0019: 'paused' is not a member of 'PauseScript'.

    "PauseScript" is the name of my pause script. Thank you.
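
    The error just means the script never declares a field called paused, so UnityScript has nothing to resolve this.paused against; declaring it fixes the compile. An equivalent sketch in C# with the field declared (same behaviour, same names):

        using UnityEngine;

        public class PauseScript : MonoBehaviour
        {
            private bool paused = false;  // the missing member the error points at

            void OnMouseDown()
            {
                paused = !paused;
            }

            void OnGUI()
            {
                if (!paused) return;
                if (GUI.Button(new Rect(10, 10, 100, 50), "Restart"))
                {
                    Application.LoadLevel(Application.loadedLevel);
                }
                // Insert the rest of the pause menu logic
            }
        }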

    Read the article

  • The underlying mechanism in 'yield return www' of Unity3D Game Engine

    - by thyandrecardoso
    In the Unity3D game engine, a common code sequence for getting remote data is this:

        WWW www = new WWW("http://remote.com/data/location/with/texture.png");
        yield return www;

    What is the underlying mechanism here? I know we use the yield mechanism to allow the next frame to be processed while the download completes, but what is going on under the hood when we do the yield return www? What method is being called (if any) on the WWW class? Is Unity using threads? Is the "upper" Unity layer getting hold of the www instance and doing something with it?
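
    Only Unity's own source can answer definitively, but the observable contract is this: StartCoroutine keeps the iterator your method compiles into, and each frame the scheduler inspects the most recently yielded object; when that object reports completion (for WWW, once the download running on the engine's native side finishes), it calls MoveNext to resume your code after the yield. A sketch that makes the polling visible by doing it by hand (old WWW API):

        using System.Collections;
        using UnityEngine;

        public class ManualDownload : MonoBehaviour
        {
            // Roughly what 'yield return www' buys you, spelled out: give the
            // frame back and poll isDone until the native download finishes.
            IEnumerator Download(string url)
            {
                WWW www = new WWW(url);
                while (!www.isDone)
                    yield return null;  // resume next frame, check again
                Debug.Log("downloaded " + www.bytesDownloaded + " bytes");
            }
        }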

    Read the article

  • Drawing 2D Grid in 3D View - Need help with method

    - by Deukalion
    I'm trying to draw a simple 2D grid for an editor, to be able to navigate more clearly around the 3D space, but I can't render it. The Grid2D class creates a grid of a certain size at a location, and should just draw lines:

        public class Grid2D : IShape
        {
            private VertexPositionColor[] _vertices;
            private Vector2 _size;
            private Vector3 _location;
            private int _faces;

            public Grid2D(Vector2 size, Vector3 location, Color color)
            {
                float x = 0, y = 0;
                if (size.X < 1f) { size.X = 1f; }
                if (size.Y < 1f) { size.Y = 1f; }
                _size = size;
                _location = location;
                List<VertexPositionColor> vertices = new List<VertexPositionColor>();
                _faces = 0;
                for (y = -size.Y; y <= size.Y; y++)
                {
                    vertices.Add(new VertexPositionColor(location + new Vector3(-size.X, y, 0), color));
                    vertices.Add(new VertexPositionColor(location + new Vector3(size.X, y, 0), color));
                    _faces++;
                }
                for (x = -size.X; x <= size.X; x++)
                {
                    vertices.Add(new VertexPositionColor(location + new Vector3(x, -size.Y, 0), color));
                    vertices.Add(new VertexPositionColor(location + new Vector3(x, size.Y, 0), color));
                    _faces++;
                }
                _vertices = vertices.ToArray();
            }

            public void Render(GraphicsDevice device)
            {
                device.DrawUserPrimitives<VertexPositionColor>(PrimitiveType.LineList, _vertices, 0, _faces);
            }
        }

    Like this:

        +----+----+----+----+
        |    |    |    |    |
        +----+----+----+----+
        |    |    |    |    |
        +----+----+----+----+
        |    |    |    |    |
        +----+----+----+----+
        |    |    |    |    |
        +----+----+----+----+

    Anyone know what I'm doing wrong? If I add a shape without a texture, the effect is set automatically to VertexColorEnabled = true and TextureEnabled = false. This is how I render it:

        foreach (RenderObject render in _renderObjects)
        {
            render.Effect.Projection = projection;
            render.Effect.View = view;
            render.Effect.World = world;
            foreach (EffectPass pass in render.Effect.CurrentTechnique.Passes)
            {
                pass.Apply();
                try
                {
                    // Could be a Grid2D
                    render.Shape.Render(_device);
                }
                catch
                {
                    throw;
                }
            }
        }

    An exception is thrown: "The current vertex declaration does not include all the elements required by the current vertex shader. Normal0 is missing." Simply put, I can't figure out how to draw a few lines. I want to draw them one at a time, and I guess that's the problem I haven't figured out; even when I tried rendering vertices[i], vertices[i+1] with primitiveCount = 1, vertices = 2, and so on, it didn't work either. Any suggestions?
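
    The exception points at the fix: some effect in _renderObjects expects normals, and VertexPositionColor has none. With BasicEffect that happens when lighting is enabled (lighting is what demands Normal0), or when an effect configured for a textured shape gets reused for the grid. A sketch of an effect configuration that matches the grid's vertex type:

        // A BasicEffect that matches VertexPositionColor: vertex colours on,
        // no lighting (lighting requires Normal0), no texture.
        BasicEffect lineEffect = new BasicEffect(device);
        lineEffect.VertexColorEnabled = true;
        lineEffect.LightingEnabled = false;
        lineEffect.TextureEnabled = false;

    The LineList draw itself looks consistent: _faces counts one line per vertex pair, which is exactly the primitive count DrawUserPrimitives expects.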

    Read the article

  • How should I sort images in an isometric game so that they appear in the correct order?

    - by Andrew
    This seems like a rather simple problem, but I am having a lot of difficulty with it. What should I do to properly sort images in an isometric game? In a normal 2D top-down game one could use the screen y-axis to sort the images. In this example the trees are properly sorted but the isometric walls are not. Example image: sorted by screen y. Wall2 is one pixel below wall1, therefore it is drawn after wall1. If I sort by the isometric y-axis instead, the walls appear in the correct order but the trees do not. Example image: sorted by isometric y.
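
    One common fix is to sort by map depth instead of screen position: give every sprite the map cell it stands on (a wall uses the cell of its base, a tree the cell of its trunk) and draw in increasing x + y, with ties broken by a layer or height value. A sketch (C#; the field names are illustrative):

        using System.Collections.Generic;

        struct IsoSprite
        {
            public int MapX, MapY, Layer;  // plus texture, screen position, etc.
        }

        static class DepthSort
        {
            // Objects deeper into the map (larger x + y) draw later, so they
            // correctly cover what stands behind them.
            public static void Sort(List<IsoSprite> sprites)
            {
                sprites.Sort((a, b) =>
                {
                    int d = (a.MapX + a.MapY).CompareTo(b.MapX + b.MapY);
                    return d != 0 ? d : a.Layer.CompareTo(b.Layer);
                });
            }
        }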

    Read the article
