Search Results

Search found 26774 results on 1071 pages for 'distributed development'.

  • How to use the zoom gesture in libgdx?

    - by user3452725
    I found the example code for the GestureListener class, but I don't understand the zoom method:

        private float initialScale = 1;

        public boolean zoom (float originalDistance, float currentDistance) {
            float ratio = originalDistance / currentDistance; // I get this
            // This doesn't make sense to me because it seems like every time you
            // pinch to zoom, it resets to the original zoom which is 1. So basically
            // it wouldn't 'save' the zoom right?
            camera.zoom = initialScale * ratio;
            System.out.println(camera.zoom); // Prints the camera zoom
            return false;
        }

    Am I not interpreting this right?
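
    As written, the snippet only makes sense if initialScale is refreshed whenever a new gesture begins, so that each pinch scales relative to the zoom the previous pinch ended at. A minimal sketch of that fix (my illustration, assuming an OrthographicCamera field; the class and field names are made up):

        import com.badlogic.gdx.graphics.OrthographicCamera;
        import com.badlogic.gdx.input.GestureDetector;

        public class CameraZoomHandler extends GestureDetector.GestureAdapter {
            private final OrthographicCamera camera;
            private float initialScale = 1f;

            public CameraZoomHandler(OrthographicCamera camera) {
                this.camera = camera;
            }

            @Override
            public boolean touchDown(float x, float y, int pointer, int button) {
                // A new gesture is starting: remember the current zoom so the
                // next pinch scales relative to it instead of resetting to 1.
                initialScale = camera.zoom;
                return false;
            }

            @Override
            public boolean zoom(float originalDistance, float currentDistance) {
                camera.zoom = initialScale * (originalDistance / currentDistance);
                return false;
            }
        }

    Registered via Gdx.input.setInputProcessor(new GestureDetector(new CameraZoomHandler(camera))), the zoom then accumulates across pinches instead of snapping back to 1.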

  • Client side latency when using prediction

    - by Tips48
    I've implemented client-side prediction in my game: when input is received by the client, it first sends it to the server and then acts upon it just as the server will, to reduce the appearance of lag. The problem is, the server is authoritative, so when the server sends back the position of the entity to the client, it undoes the effect of the prediction and creates a rubber-banding effect. For example:

        1. Client sends input to server
        2. Client reacts on input
        3. Server receives and reacts on input
        4. Server sends back response
        5. Client reaction is undone due to latency between server and client

    To solve this, I've decided to store the game state and input every tick in the client, and then when I receive a packet from the server, get the game state from when the packet was sent and simulate the game up to the current point. My questions:

    1. Won't this cause lag? If I'm receiving 20-30 EntityPositionPackets a second, that means I have to run 20-30 simulations of the game state.
    2. How do I sync the client and server tick? Currently, I'm sending the millisecond the packet was sent by the server, but I think it's adding too much complexity instead of just sending the tick. The problem with converting it to sending the tick is that I have no guarantee that the client and server are ticking at the same rate, for example if the client is an older, low-end PC.
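
    One widely used refinement (a sketch of a standard pattern, not something from the original post) is to tag every input with a client-side sequence number, have the server echo back the last sequence number it processed along with each authoritative state, and then replay only the unacknowledged inputs on top of that state. This bounds the re-simulation to the handful of inputs still in flight, and it sidesteps the tick-rate question entirely, since the acknowledgement is expressed in input sequence numbers rather than ticks or wall-clock time. The types below are hypothetical stand-ins for the game's own:

        import java.util.ArrayDeque;

        // Hypothetical stand-ins for the game's own input/state classes.
        class InputCmd { long seq; }
        class ServerState { long lastProcessedSeq; }

        class Reconciler {
            private final ArrayDeque<InputCmd> pending = new ArrayDeque<>();
            private long nextSeq = 0;

            void send(InputCmd cmd) {
                cmd.seq = nextSeq++;
                pending.addLast(cmd);     // keep for replay until the server acks it
                // ... transmit cmd to the server here ...
                applyToLocalPlayer(cmd);  // predict immediately
            }

            // The server includes the sequence number of the last input it processed.
            void onServerState(ServerState state) {
                while (!pending.isEmpty() && pending.peekFirst().seq <= state.lastProcessedSeq) {
                    pending.removeFirst();       // the server already saw these
                }
                resetLocalPlayer(state);         // snap to the authoritative state
                for (InputCmd cmd : pending) {
                    applyToLocalPlayer(cmd);     // re-predict the inputs in flight
                }
            }

            private void applyToLocalPlayer(InputCmd cmd) { /* game-specific */ }
            private void resetLocalPlayer(ServerState s) { /* game-specific */ }
        }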

  • How to handle wildly varying rendering hardware / getting baseline

    - by edA-qa mort-ora-y
    I've recently started with mobile programming (cross-platform, also with desktop) and am encountering wildly differing hardware performance, in particular with OpenGL and the GPU. I know I'll basically have to adjust my rendering code, but I'm uncertain how to detect performance and what reasonable default settings are. I notice that certain shader functions are basically free in a desktop implementation but can be unusable on a mobile device. The problem is I have no way of knowing what features will cause what performance issues on all the devices. So my first issue is that even if I allow configuring options, I'm uncertain which options I have to make configurable. I'm also wondering whether one just writes one very configurable pipeline, or whether I should have two distinct options (high/low). I'm also unsure of where to set the default. If I set it to the poorest performer, the graphics will be so minimal that any user with a modern device would dismiss the game. If I set it even at some moderate point, the low-end devices will basically become a slide-show. I was thinking perhaps I could just run some benchmarks when the user first installs and guess what works, but I've not seen a game do this before.
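
    One low-effort version of the benchmark idea (a sketch, not a proven recipe) is to time a few hundred frames per quality tier on first launch and keep the highest tier that stays under the frame budget, persisting the result so it only runs once. The Renderer hook and tier numbering here are hypothetical:

        interface Renderer {
            // Hypothetical hook into the engine; assumed to block until the frame
            // is actually presented (e.g. ends with a buffer swap) - otherwise CPU
            // timestamps say little about GPU cost.
            void renderFrame(int qualityTier);
        }

        class StartupBenchmark {
            // Returns the highest tier (0 = low, 2 = high) that holds 60 fps.
            static int pickQualityTier(Renderer renderer) {
                final int warmup = 30, measured = 300;
                final double budgetNanos = 1_000_000_000.0 / 60.0;
                for (int tier = 2; tier >= 1; tier--) {
                    for (int i = 0; i < warmup; i++) renderer.renderFrame(tier);
                    long start = System.nanoTime();
                    for (int i = 0; i < measured; i++) renderer.renderFrame(tier);
                    double avg = (System.nanoTime() - start) / (double) measured;
                    if (avg <= budgetNanos) return tier;  // fast enough: keep it
                }
                return 0; // nothing held the budget: fall back to minimal settings
            }
        }

    Shipping an in-game quality override alongside it covers the cases the benchmark misjudges.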

  • How to account for speed of the vehicle when shooting shells from it?

    - by John Murdoch
    I'm developing a simple 3D ship game using libgdx and bullet. When a user taps the mouse, I create a new shell object and send it in the direction of the mouse click. However, if the user has tapped the mouse in the direction the ship is currently moving, the ship catches up to the shells very quickly and can sometimes even get hit by them - simply because the speeds of the shells and the ship are quite comparable. I think I need to account for ship speed when generating the initial impulse for the shells, and I tried doing that (see "new line added"), but I cannot figure out if what I'm doing is the proper way and, if yes, how to calculate the correct coefficient.

        public void createShell(Vector3 origin, Vector3 direction, Vector3 platformVelocity, float velocity) {
            long shellId = System.currentTimeMillis(); // hack
            ShellState state = getState().createShellState(shellId, origin.x, origin.y, origin.z);
            ShellEntity entity = EntityFactory.getInstance().createShellEntity(shellId, state);
            add(entity);
            // New line added, to compensate for the moving platform;
            // no idea how to calculate the proper coefficient.
            entity.getBody().applyCentralImpulse(platformVelocity.mul(velocity * 0.02f));
            entity.getBody().applyCentralImpulse(direction.nor().mul(velocity));
        }

        private final Vector3 v3 = new Vector3();

        public void shootGun(Vector3 direction) {
            Vector3 shipVelocity = world.getShipEntities().get(id).getBody().getLinearVelocity();
            // Current location of our ship.
            world.getState().getShipStates().get(id).transform.getTranslation(v3);
            // Hack: offset the spawn point to avoid the shell immediately
            // impacting the ship that it got shot out from.
            v3.add(direction.nor().mul(10.0f));
            world.createShell(v3, direction, shipVelocity, 500);
        }
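
    For what it's worth, there is a physically grounded way to pick that coefficient rather than tuning it (an editorial sketch, not the poster's code): in Bullet an impulse is mass times change in velocity, and a shell fired from a moving platform should start at the platform's velocity plus its muzzle speed along the firing direction. Inheriting the platform velocity at full strength makes the magic 0.02f disappear; any leftover mismatch usually means the impulse is being scaled by the wrong mass. Assuming the shell's mass is known:

        import com.badlogic.gdx.math.Vector3;

        class ShellBallistics {
            // Impulse (J = m * dv) to apply to a shell spawned at rest so it starts
            // at platformVelocity + muzzleSpeed along the firing direction.
            static Vector3 initialImpulse(Vector3 direction, Vector3 platformVelocity,
                                          float muzzleSpeed, float shellMass) {
                Vector3 launchVelocity = new Vector3(direction).nor().scl(muzzleSpeed)
                                             .add(platformVelocity); // inherit ship motion
                return launchVelocity.scl(shellMass);
            }
        }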

  • Maya .IFF plugins for Gimp

    - by Kara Marfia
    Maya's preferred format for saving off a UV snapshot is its own .IFF format, so I was hoping to find a plugin allowing GIMP 2 (Windows) to read it. I've found plenty of plugins for various Linux distros, but none are Windows-friendly (that I can discern - admittedly I'm no whiz with GIMP). Does anyone know of one? Alternatively, .tiff seems to work just fine, so if there's no good reason to bother fiddling with IFFs, I'd appreciate the input there, too. (Sorry if this isn't on-topic.)

  • Game state management: hierarchical FSM vs. stack-based FSM

    - by user8363
    I'm reading a bit on finite state machines to handle game states (or screens). I would like to build a rather decent FSM that can handle multiple screens. E.g., while the game is running, I want to be able to pop up an in-game menu; when that happens, the main screen must stop updating (the game is paused) but must still be visible in the background. However, when I open an inventory pop-up, the main screen must be visible and continue updating, etc. I'm a bit confused about the difference in implementation and functionality between hierarchical FSMs and FSMs that handle a stack of states instead. Are they basically the same? Or are there important differences?
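
    A sketch of the stack-based variant (my illustration, not from the post): each state declares whether the state beneath it should keep updating and/or drawing. That covers both cases above - a pause menu that lets the game draw but not update, and an inventory that lets it do both.

        import java.util.ArrayDeque;
        import java.util.Deque;

        abstract class GameState {
            abstract void update(float dt);
            abstract void draw();
            boolean updateBelow() { return false; } // override to keep the state below live
            boolean drawBelow()   { return false; } // override to keep it visible
        }

        class StateStack {
            private final Deque<GameState> stack = new ArrayDeque<>();

            void push(GameState s) { stack.push(s); }
            void pop()             { stack.pop(); }

            void update(float dt) {
                // Walk from the top down, stopping at the first state that
                // blocks updates beneath it.
                for (GameState s : stack) {
                    s.update(dt);
                    if (!s.updateBelow()) break;
                }
            }

            void draw() {
                // Collect states down to the first one that blocks drawing,
                // then draw them bottom-up so overlays land on top.
                Deque<GameState> visible = new ArrayDeque<>();
                for (GameState s : stack) {
                    visible.push(s);
                    if (!s.drawBelow()) break;
                }
                for (GameState s : visible) s.draw();
            }
        }

    An in-game pause menu would override drawBelow() to return true; an inventory overlay would override both. A hierarchical FSM can express the same relationships, but the nesting is fixed at design time, whereas the stack composes them at runtime.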

  • 3ds Max is exporting a model with more normals than vertices

    - by Delta
    I made a simple teapot with the "Create Standard Primitives" option and exported it as a COLLADA file, and ended up with this:

        <float_array id="Teapot001-POSITION-array" count="1590"
        <float_array id="Teapot001-Normal0-array" count="9216"

    As far as I know there should be only one normal per vertex - am I wrong? What am I supposed to do with that many normals? Just put them all in the normal buffer at once, as usual?
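
    A quick sanity check on those counts (editorial arithmetic, not from the original post): the count attribute counts floats, not vectors, so

        1590 floats / 3 components = 530 positions
        9216 floats / 3 components = 3072 normals = 1024 triangles x 3 corners

    which is consistent with the exporter writing one normal per face corner rather than per vertex, so that hard edges and smoothing groups can give the same position different normals. COLLADA correspondingly indexes positions and normals with separate index streams in the <p> element, so the normals cannot simply be uploaded in vertex order: either expand to one vertex per unique (position, normal) pair and re-index, or average them back down if a fully smooth-shaded look is acceptable.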

  • Making an interactive 2D map

    - by Chad
    So recently I have been working on a Legend of Zelda: A Link to the Past clone, and I am wondering how I could handle certain map interactions (like cutting grass, lifting rocks, etc.). The way I am currently doing the tilemap is with two PNGs. The first is the "tilemap", where each pixel represents a 16x16 tile and the (red, green) values are the (x, y) coords for the tile in the second PNG (the "tileset"). I am then using the blue channel to store collision data. Each tile is split into four 8x8 subtiles, each represented by a 2-bit value (0 = empty, 1 = jump-down point, 2 = unused right now, 3 = blocking). Four of these 2-bit values make up the full blue channel (1 byte). So collisions work great, and I am moving on to putting interactive units on the level, but I am not sure what a good way to do it is. I have experimented with spawning an entity for each grass and rock, but there are just way too many; FPS dies even if I confine it to the current "zone" the user is in (for those who remember LTTP, it had zones you moved between). It does make a difference that this is a browser-based JavaScript game. tl;dr: What is a good way to have an interactive map without using full-blown entities for each interactive item?
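
    As an aside on the encoding itself, the same packed-byte trick extends naturally to interactions: reserve codes for 'grass' and 'rock' and look them up on demand, spawning a short-lived entity only when the player actually interacts with a subtile (a sword swing overlaps it, the lift button is pressed on it). That keeps entity counts proportional to what is active rather than to the map. A sketch of the unpacking, written in Java for illustration even though the game itself is JavaScript:

        class TileFlags {
            // Each blue-channel byte packs four 2-bit codes, one per 8x8 subtile:
            // bits 0-1 = subtile 0, bits 2-3 = subtile 1, and so on.
            static final int EMPTY = 0, JUMP_DOWN = 1, UNUSED = 2, BLOCKING = 3;

            static int subtileCode(int blueChannel, int subtile) {
                return (blueChannel >> (2 * subtile)) & 0x3;
            }
        }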

  • Illumination and Shading for computer graphics class

    - by Sam I Am
    I am preparing for my test tomorrow, and this is one of the practice questions. I solved it partially, but I am confused by the rest. Here is the problem:

        Consider a gray world with no ambient and specular lighting (only diffuse lighting). The screen coordinates of a triangle P1, P2, P3 are P1 = (100,100), P2 = (300,150), P3 = (200,200). The gray values at P1, P2, P3 are 1/2, 3/4, and 1/4 respectively. The light is at infinity and its direction and gray color are (1,1,1) and 1.0 respectively. The coefficient of diffuse reflection is 1/2. The normals of P1, P2, P3 are N1 = (0,0,1), N2 = (1,0,0), and N3 = (0,1,0) respectively. Consider the z coordinates of the three points P1, P2, P3 to be 0. Do not normalize the normals.

    I have computed that the illumination at the three vertices P1, P2, P3 is (1/4, 3/8, 1/8). I have also computed that the interpolation coefficients of a point P inside the triangle, whose coordinates are (220,160), are (1/5, 2/5, 2/5). Now I have four more questions regarding this problem.

    1. The illumination at P using Gouraud shading is: i) 1/2. The answer is 1/2, but I have no idea how to compute it.
    2. The interpolated normal at P is given by: i) (2/5, 2/5, 1/5), ii) (1/2, 1/4, 1/4), iii) (3/5, 1/5, 1/5).
    3. The interpolated color at P is given by: i) 1/2. Again, I know the correct answer but no idea how to solve it.
    4. The illumination at P using Phong shading is: i) 1/4, ii) 9/40, iii) 1/2.
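
    For reference, the standard definitions involved (general formulas, not worked answers for this exam problem): with barycentric coefficients (α, β, γ) = (1/5, 2/5, 2/5) at P, Gouraud shading interpolates the vertex intensities directly, Phong shading interpolates the normals first and only then evaluates the diffuse equation at P, and the interpolated color is the same barycentric blend applied to the vertex gray values.

        $$ I_{\text{Gouraud}}(P) = \alpha I_1 + \beta I_2 + \gamma I_3 $$
        $$ N(P) = \alpha N_1 + \beta N_2 + \gamma N_3 \quad \text{(left unnormalized, per the problem statement)} $$
        $$ I_{\text{Phong}}(P) = k_d \, (L \cdot N(P)) \, I_{\text{light}} $$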

  • Problem with Assimp 3D model loader

    - by Brendan Webster
    In my game I have model loading functions for the Assimp model loading library. I can load the model and render it, but the model displays incorrectly. The models load in as if they were using a separate projection matrix. I have looked over my code again and again, but I keep missing the obvious reason why this is happening. Here is an image of my game: It's simply a 6-sided cube, but it's off, big time! Here are my code snippets for rendering the cube to the screen:

        void C_MediaLoader::display(void)
        {
            float tmp;

            glTranslatef(0,0,0);

            // rotate it around the y axis
            glRotatef(angle,0.f,0.f,1.f);
            glColor4f(1,1,1,1);

            // scale the whole asset to fit into our view frustum
            tmp = scene_max.x-scene_min.x;
            tmp = aisgl_max(scene_max.y - scene_min.y,tmp);
            tmp = aisgl_max(scene_max.z - scene_min.z,tmp);
            tmp = (1.f / tmp);
            glScalef(tmp/5, tmp/5, tmp/5);

            // center the model
            //glTranslatef( -scene_center.x, -scene_center.y, -scene_center.z );

            // if the display list has not been made yet, create a new one and
            // fill it with scene contents
            if(scene_list == 0) {
                scene_list = glGenLists(1);
                glNewList(scene_list, GL_COMPILE);
                // now begin at the root node of the imported data and traverse
                // the scenegraph by multiplying subsequent local transforms
                // together on GL's matrix stack.
                recursive_render(scene, scene->mRootNode);
                glEndList();
            }

            glCallList(scene_list);
        }

        void C_MediaLoader::recursive_render(const struct aiScene *sc, const struct aiNode* nd)
        {
            unsigned int i;
            unsigned int n = 0, t;
            struct aiMatrix4x4 m = nd->mTransformation;

            // update transform
            aiTransposeMatrix4(&m);
            glPushMatrix();
            glMultMatrixf((float*)&m);

            // draw all meshes assigned to this node
            for (; n < nd->mNumMeshes; ++n) {
                const struct aiMesh* mesh = scene->mMeshes[nd->mMeshes[n]];
                apply_material(sc->mMaterials[mesh->mMaterialIndex]);

                if(mesh->mNormals == NULL) {
                    glDisable(GL_LIGHTING);
                } else {
                    glEnable(GL_LIGHTING);
                }

                for (t = 0; t < mesh->mNumFaces; ++t) {
                    const struct aiFace* face = &mesh->mFaces[t];
                    GLenum face_mode;

                    switch(face->mNumIndices) {
                        case 1: face_mode = GL_POINTS; break;
                        case 2: face_mode = GL_LINES; break;
                        case 3: face_mode = GL_TRIANGLES; break;
                        default: face_mode = GL_POLYGON; break;
                    }

                    glBegin(face_mode);
                    for(i = 0; i < face->mNumIndices; i++) {
                        int index = face->mIndices[i];
                        if(mesh->mColors[0] != NULL)
                            glColor4fv((GLfloat*)&mesh->mColors[0][index]);
                        if(mesh->mNormals != NULL)
                            glNormal3fv(&mesh->mNormals[index].x);
                        glVertex3fv(&mesh->mVertices[index].x);
                    }
                    glEnd();
                }
            }

            // draw all children
            for (n = 0; n < nd->mNumChildren; ++n) {
                recursive_render(sc, nd->mChildren[n]);
            }

            glPopMatrix();
        }

    Sorry there is so much code to look through, but I really cannot find the problem, and I would love to have help.

  • Examples of 2D side-scrollers that achieve open non-linear feel?

    - by Milosz Falinski
    I'm working on a 2.5D platformer prototype that aims for an open feel while maintaining familiar core mechanics. Now, there are some obvious challenges with creating a non-constricted feel in a spatially constricted environment. What I'm interested in is examples of how game designers deal with the "here's a level, beat the bad guys/puzzles to get to the next level" design that seems so natural to most platformers (e.g. Mario/Braid/Pid/Meat Boy, to name a few). Some ideas for achieving openness I've come across include:

    - One obvious successful example is Terraria, which achieves openness simply through the complexity and flexibility of its game system.
    - Another example that comes to mind is Cave Story. The game is non-linear and offers multiple choices and side-stories.
    - Mario, Rayman and some other 'classics' use a top-down level selection. I actually really dislike this, as it never did anything for me emotionally and just seems like a bit of a lazy way to do things.

    Note: I've not actually had much experience with most of the 'classical' console platformers, apart from the obvious Marios/Zeldas/Metroids, since I've grown up on adventure games. By that I mean it's entirely possible that I simply missed some games that solve the problem really well and are by some considered obvious 'classics'.

  • Unity GUI not in build, but works fine in editor

    - by Darren
    I have:

    1. A GUITexture attached to an object.
    2. A script with GUIStyles created for the text field and buttons that are drawn in OnGUI(). This script is attached to the same object as number 1.
    3. Three GUIText objects, each separate from the above.
    4. A script that enables the GUITexture (number 1) and the script (number 2) respectively.

    This is how it is supposed to work: when I cross the finish line, script number 4 enables the GUITexture component (number 1) and the script component (number 2). The script component uses one of number 3's GUIText objects to show you your best lap time, and also makes a GUI.TextField for name entry and two GUI.Buttons for "Submit" and "Skip". If you hit "Submit", the script will submit the time. No matter which button you press, the remaining two GUIText objects from number 3 will show you the top 10 best times. For some reason, when I run it in the editor, everything works 100%, but in different kinds of builds the results vary. In a web player, the GUITexture, the text field and the buttons appear, but the text field and buttons are plain, with no evidence of GUIStyles. When I click one of the buttons, the score gets submitted, but I do not get the fastest times showing. In a standalone build, the GUITexture shows up, but nothing else does. If I remove the GUIStyle parameter from the GUI.TextField and GUI.Button calls, they show up. Why am I getting these variations, and how can I fix it? Code below:

        void Start () {
            Names.text = "";
            Times.text = "";
            YourBestTime.text = "Your Best Lap: " + bestTime + "\nEnter your name:";
            //StartCoroutine(GetTimes("Test"));
        }

        void Update() {
            if (!ShowButtons && !GettingTimes) {
                StartCoroutine(GetTimes());
                GettingTimes = true;
            }
        }

        IEnumerator GetTimes () {
            Debug.Log("Getting times");
            YourBestTime.text = "Loading Best Lap Times";
            WWW times_get = new WWW(GetTimesUrl);
            yield return times_get;
            WWW names_get = new WWW(GetNamesUrl);
            yield return names_get;
            if (times_get.error != null || names_get.error != null) {
                print("There was an error retrieving the data: " + names_get.error + times_get.error);
            } else {
                Times.text = times_get.text;
                Names.text = names_get.text;
                YourBestTime.text = "Your Best Lap: " + bestTime;
            }
        }

        IEnumerator PostLapTime (string Name, string LapTime) {
            string hash = MD5.Md5Sum(Name + LapTime + secretKey);
            string bestTime_url = SubmitTimeUrl + "&Name=" + WWW.EscapeURL(Name) + "&LapTime=" + LapTime + "&hash=" + hash;
            Debug.Log(bestTime_url);
            // Post the URL to the site and create a download object to get the result.
            WWW hs_post = new WWW(bestTime_url);
            //label = "Submitting...";
            yield return hs_post; // Wait until the download is done
            if (hs_post.error != null) {
                print("There was an error posting the lap time: " + hs_post.error);
                //label = "Error: " + hs_post.error;
                //show = false;
            } else {
                Debug.Log("Posted: " + hs_post.text);
                ShowButtons = false;
                PostingTime = false;
            }
        }

        void OnGUI() {
            if (ShowButtons) {
                // makes text box
                nameString = GUI.TextField(new Rect((Screen.width/2)-111, (Screen.height/2)-130, 222, 25), nameString, 20, TextboxStyle);
                if (GUI.Button(new Rect((Screen.width/2-74.0f), (Screen.height/2)-90, 64, 32), "Submit", ButtonStyle)) {
                    // SUBMIT TIME
                    if (nameString == "") {
                        nameString = "Player";
                    }
                    if (!PostingTime) {
                        StartCoroutine(PostLapTime(nameString, bestTime));
                        PostingTime = true;
                    }
                } else if (GUI.Button(new Rect((Screen.width/2+10.0f), (Screen.height/2)-90, 64, 32), "Skip", ButtonStyle)) {
                    ShowButtons = false;
                }
            }
        }

  • Good book or tutorial for learning how to apply integration methods

    - by Cumatru
    I'm looking to animate a graph layout using edges as springs and nodes as weights (a node with more links will have a bigger weight). I'm not capable of wrapping my head around the usage of the mathematical and physics relations in my application. As far as I've read, Runge-Kutta 4 (preferably) or Verlet would be a good choice, but I have problems understanding how they really work and which physics equations I should apply. If I can't understand them, I can't use them. I'm looking for a book or a tutorial which describes these things.
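
    To make the moving parts concrete, here is a minimal position-Verlet sketch for a spring graph (my illustration of one of the two methods mentioned, under the stated edges-as-springs, nodes-as-weights model): Verlet advances x_{n+1} = 2x_n - x_{n-1} + a·dt², and for a spring the force comes from Hooke's law, F = k(|d| - restLength) along the edge, divided by the node's weight to get acceleration.

        class Node {
            double x, y;    // current position
            double px, py;  // previous position (encodes velocity for Verlet)
            double mass;    // heavier = more links, moves less
            double fx, fy;  // force accumulator, cleared every step
        }

        class SpringLayout {
            static void addSpringForce(Node a, Node b, double k, double restLength) {
                double dx = b.x - a.x, dy = b.y - a.y;
                double dist = Math.sqrt(dx * dx + dy * dy) + 1e-9; // avoid divide-by-zero
                double f = k * (dist - restLength);                // Hooke's law magnitude
                double ux = dx / dist, uy = dy / dist;             // unit direction a -> b
                a.fx += f * ux;  a.fy += f * uy;                   // pull a toward b
                b.fx -= f * ux;  b.fy -= f * uy;                   // and b toward a
            }

            static void verletStep(Node n, double dt) {
                double ax = n.fx / n.mass, ay = n.fy / n.mass;
                double nx = 2 * n.x - n.px + ax * dt * dt;         // x_{n+1}
                double ny = 2 * n.y - n.py + ay * dt * dt;
                n.px = n.x;  n.py = n.y;                           // shift history
                n.x = nx;    n.y = ny;
                n.fx = n.fy = 0;                                   // ready for next step
            }
        }

    Each frame: accumulate spring forces over all edges, then call verletStep on every node. Damping the implicit velocity term (scaling n.x - n.px by, say, 0.98 before using it) keeps the layout from oscillating forever.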

  • Why do meshes show up as bones in the Model class?

    - by Itamar Marom
    Right now I'm working on a 3D game and I've come across something very weird. When I created the model in Blender, I added an armature named "MyBone" to the stage and attached a cube ("MyCube") to it, so that when I move the armature, the cube moves with it. I exported this as an FBX and loaded it as a Model object. I expected the model's bone list to contain just "MyBone", but instead the cube shows up in it too, along with something called "Root Node". I'm really confused. Why is the mesh I created showing up in the bone list? And what's Root Node? Here are the .blend and .fbx files: here or here. Thanks.

  • Euclidean space and vector magnitude

    - by Starkers
    Below we have distances from the origin calculated in two different ways, giving the Euclidean distance, the Manhattan distance and the Chebyshev distance. Euclidean distance is what we use to calculate the magnitude of vectors in 2D/3D games, and that makes sense to me. Let's say we have a vector that gives us the range a spaceship with limited fuel can travel. If we calculated this with the Manhattan metric, our ship could travel a distance of X if it were travelling horizontally or vertically; however, the second it attempted to travel diagonally it could only travel X/2! So, like I say, Euclidean distance does make sense. However, I still don't quite get how we calculate 'real' distances from a vector's magnitude. Here are two points, purple at (2,2) and green at (3,3). We can subtract one point from the other to derive a vector. Let's create a vector to describe the magnitude and direction of purple from green:

        d = purple - green
        d = (purple.x, purple.y) - (green.x, green.y)
        d = (2,2) - (3,3)
        d = <-1,-1>

    Let's derive the magnitude of the vector via Pythagoras to get a Euclidean measurement:

        euc_magnitude = sqrt((x*x)+(y*y))
        euc_magnitude = sqrt((-1*-1)+(-1*-1))
        euc_magnitude = sqrt((1)+(1))
        euc_magnitude = sqrt(2)
        euc_magnitude = 1.41

    Now, if the answer had been 1, that would make sense to me, because 1 unit (in the direction described by the vector) from the green is bang on the purple. But it's not. It's 1.41. Travelling 1.41 units in the direction described, to me at least, makes us overshoot the purple by almost half a unit. So what do we do to the magnitude to allow us to calculate real distances on our point graph? Worth noting I'm a beginner just working my way through theory. Haven't programmed a game in my life!
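
    A clarifying computation (standard geometry, not part of the original question): the two points differ by one unit on each axis, so the straight-line distance between them is the diagonal of a unit grid square, which really is longer than 1:

        $$ \lVert d \rVert = \sqrt{(3-2)^2 + (3-2)^2} = \sqrt{2} \approx 1.41 $$

    Travelling exactly sqrt(2) units from green along the unit direction d/||d|| = (-1,-1)/sqrt(2) lands exactly on purple; travelling 1 unit stops short of it. The apparent overshoot comes from reading one diagonal grid step as length 1 when it is actually sqrt(2).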

  • Dynamic obstacle avoidance in a navigation mesh system

    - by Variable
    I've built my pathfinding system with Unreal Engine. The pathfinding part works just fine, but I can't find a proper way to solve the dynamic obstacle avoidance problem. My characters walk all over the map and collide with each other while they're moving. I try to steer them when a collision occurs, but this doesn't work well. For example, two characters block the road while the third one's path runs right between them, and he gets stuck. Can someone tell me the most popular way of doing dynamic avoidance? Thanks a lot.

  • Can somebody guide me as to how I can make a game for playing cards [closed]

    - by user2558
    In college, my friends and I used to play cards all the time, and I want to make a game for that. It's quite similar to Hearts - a kind of modified Hearts which we made up. I want to make a multiplayer game which could be played over the internet. There should also be an option for the computer to play if fewer players are available at the time. I don't want to make an exe; I want it to play in the browser. How should I go about it?

  • Map building - Tower Defense

    - by Dan K
    Before diving too deep into my question, let it be known that I am still learning as far as JavaScript goes, and I figured a simple tower defense game would be an excellent way to learn. I have found a simple background image with a path drawn on it, and my question is how I would go about building a path so that I can animate my objects. Would I have to take the image and overlay a grid system on it, or can I store the path in some sort of array and have my objects move across it? Here is the background image:
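
    The array approach is the usual choice for tower defense maps and needs no grid (a sketch of the idea, in Java for illustration since the project is JavaScript): trace the drawn path once and store it as an ordered list of waypoints, then move each creep toward its current waypoint and advance to the next one on arrival.

        class Creep {
            double x, y, speed;
            int nextWaypoint = 0;

            // waypoints: pixel coordinates traced along the drawn path, in order.
            void advance(double[][] waypoints, double dt) {
                if (nextWaypoint >= waypoints.length) return; // reached the end
                double tx = waypoints[nextWaypoint][0], ty = waypoints[nextWaypoint][1];
                double dx = tx - x, dy = ty - y;
                double dist = Math.sqrt(dx * dx + dy * dy);
                double step = speed * dt;
                if (dist <= step) {            // close enough: snap and advance
                    x = tx; y = ty;
                    nextWaypoint++;
                } else {                       // head toward the waypoint
                    x += dx / dist * step;
                    y += dy / dist * step;
                }
            }
        }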

  • Largest sphere inside a frustum

    - by Will
    How do you find the largest sphere that you can draw in perspective? Viewed from the top, it'd be this: Added: on the frustum on the right, I've marked four points I think we know something about. We can unproject all eight corners of the frustum, and the centres of the near and far ends. So we know points 1, 3 and 4. We also know that point 2 is the same distance from 3 as 4 is from 3. So then we can compute the nearest point on the line 1-4 to point 2 in order to get the centre? But the actual math and code escape me. I want to draw models (which are approximately spherical, and which I have a miniball bounding sphere for) as large as possible. Update: I've tried to implement the incircle-on-two-planes approach as suggested by bobobobo and Nathan Reed:

        function getFrustumsInsphere(viewport, invMvpMatrix) {
            var midX = viewport[0]+viewport[2]/2,
                midY = viewport[1]+viewport[3]/2,
                centre = unproject(midX,midY,null,null,viewport,invMvpMatrix),
                incircle = function(a,b) {
                    var c = ray_ray_closest_point_3(a,b);
                    a = a[1]; // far clip plane
                    b = b[1]; // far clip plane
                    c = c[1]; // camera
                    var A = vec3_length(vec3_sub(b,c)),
                        B = vec3_length(vec3_sub(a,c)),
                        C = vec3_length(vec3_sub(a,b)),
                        P = 1/(A+B+C),
                        x = ((A*a[0])+(B*a[1])+(C*a[2]))*P,
                        y = ((A*b[0])+(B*b[1])+(C*b[2]))*P,
                        z = ((A*c[0])+(B*c[1])+(C*c[2]))*P;
                    c = [x,y,z]; // now the centre of the incircle
                    c.push(vec3_length(vec3_sub(centre[1],c))); // add its radius
                    return c;
                },
                left = unproject(viewport[0],midY,null,null,viewport,invMvpMatrix),
                right = unproject(viewport[2],midY,null,null,viewport,invMvpMatrix),
                horiz = incircle(left,right),
                top = unproject(midX,viewport[1],null,null,viewport,invMvpMatrix),
                bottom = unproject(midX,viewport[3],null,null,viewport,invMvpMatrix),
                vert = incircle(top,bottom);
            return horiz[3]<vert[3]? horiz: vert;
        }

    I admit I'm winging it; I'm trying to adapt 2D code by extending it into 3 dimensions. It doesn't compute the insphere correctly; the centre-point of the sphere seems to be on the line between the camera and the top-left each time, and it's too big (or too close). Are there any obvious mistakes in my code? Does the approach, if fixed, work?
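
    Two hedged observations (editorial, not from the thread). First, in the incircle helper each output coordinate should blend the same component of all three triangle vertices - the standard incenter formula is

        $$ x = P(A a_x + B b_x + C c_x), \quad y = P(A a_y + B b_y + C c_y), \quad z = P(A a_z + B b_z + C c_z) $$

    whereas the code above builds x from a's components, y from b's and z from c's, which matches the reported symptom of the centre sitting off toward one side. Second, for a symmetric frustum the largest inscribed sphere has a closed form: taking θ = min(horizontal, vertical half-angle) and far-plane distance f, a point on the view axis at depth z lies z·sin θ from the side planes, and tangency to the far plane requires r = f − z, giving

        $$ z = \frac{f}{1 + \sin\theta}, \qquad r = \frac{f \sin\theta}{1 + \sin\theta} $$

    (valid while z − r ≥ near; otherwise the near plane truncates the sphere).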

  • Shader optimization - Cg/HLSL pseudo-bools and multiplication

    - by teodron
    Since HLSL/Cg do not allow texture fetching inside conditional blocks, I first check a variable and perform some computations, afterwards setting a float flag to 0.0 or 1.0 depending on the computations. I'd like to trigger a texture fetch only if the flag is 1.0 (or non-zero, for that matter). I kind of hoped this would do the trick:

        float4 TU0_atlas_colour = pseudoBool * tex2Dlod(TU0_texture, float4(tileCoord, 0, mipLevel));

    That is, if pseudoBool is 0, will the texture fetch function still be called and produce overhead? I was hoping to prevent it from being executed via this trick that usually works in plain C/C++.

  • Struggling to get set up with JOGL2.0

    - by thecoshman
    I guess that Game Development is a more sensible place for my problem than SO. I did have JOGL 1.1 set up and working, but I soon discovered that it did not support the latest OpenGL, so I started work on upgrading to JOGL 2.0 - it's not gone too well. Firstly, is it worth me trying to get JOGL to work, or should I just move over to LWJGL? I am fairly comfortable with OpenGL (via C++) and, from what I did get working with JOGL 1.1, I seem to be OK adapting to it. Assuming that I stick with JOGL, am I foolish for trying to use JOGL 2.0? From what I can gather, JOGL 2.0 is still in beta, but I am willing to go with it as I want to make use of the latest OpenGL I can. I have been using the Eclipse IDE and have set up a user library for JOGL (here is a screen shot of the configuration), and I have added this user library to my own Eclipse project. The system variable %JOGL_HOME% points to "C:\Users\edacosh\Downloads\JOGL2.0", so that should work fine. Now, the problem I am actually having: when I try to run my code, on the line

        GLProfile glp = GLProfile.getDefault();

    the code stops with the following message:

        Exception in thread "main" java.lang.NoClassDefFoundError: com/jogamp/common/jvm/JVMUtil
            at javax.media.opengl.GLProfile.<clinit>(GLProfile.java:1145)
            at DiCE.DiCE.<init>(DiCE.java:33)
            at App.<init>(App.java:17)
            at App.main(App.java:12)
        Caused by: java.lang.ClassNotFoundException: com.jogamp.common.jvm.JVMUtil
            at java.net.URLClassLoader$1.run(Unknown Source)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(Unknown Source)
            at java.lang.ClassLoader.loadClass(Unknown Source)
            at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
            at java.lang.ClassLoader.loadClass(Unknown Source)
            ... 4 more

    I have also set my project to ensure that it is using JRE 6 along with JDK 6, as I was having some issues. I hope I have given you enough information to be able to help me. It probably doesn't help that I am rather new to Java, having been developing in C++ for ages. Thanks
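
    One note on that specific stack trace (an editorial observation, hedged because jar names changed between JOGL 2 builds): a NoClassDefFoundError for a com.jogamp.common.* class means the GlueGen runtime - a separate jar from JOGL itself - is missing from the runtime classpath. A JOGL 2 user library generally needs at least:

        gluegen-rt.jar   (contains com.jogamp.common.jvm.JVMUtil)
        jogl-all.jar     (the GL bindings; plain jogl.jar in some older builds)
        the matching gluegen/jogl native libraries for the platform (DLLs on Windows)

    Adding gluegen-rt.jar to the same Eclipse user library - and checking that the library is on the run configuration's classpath, not just the build path - is the usual fix for exactly this error.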

  • Group arrival steering

    - by ltjax
    I've got group movement implemented pretty much like this: http://www.red3d.com/cwr/steer/CrowdPath.html - basically, combining path following and separation. It works nicely as long as units are in transit, but arrival does not work very well at all. Right now, units simply stop using the path-following component once they "exit" the path, i.e. when their closest point on the path is on or past the end. This leads to those units bumping into each other and also overshooting the point the player clicked. Ideally, I'd have the units arrive scattered around the finish point (and reasonably close to each other), not all clumped up past the finish line. I'd imagine that some kind of arrival steering might work here, but based on other units and a "fuzzy" classification of the end of the path. Is there any proven way to do this?
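
    For reference, the classic Reynolds "arrival" behaviour (a standard sketch, not the poster's code) ramps the desired speed down inside a slowing radius around the target. Blending it in as units near the end of the path, while keeping separation active, tends to produce exactly the scattered-but-close clustering described: separation keeps pushing neighbours apart while arrival damps their speed, so each unit settles at a slightly different offset around the click point.

        class Arrival {
            // Steering force for an agent approaching a target.
            static double[] steer(double px, double py,   // agent position
                                  double vx, double vy,   // agent velocity
                                  double tx, double ty,   // target (end of path)
                                  double maxSpeed, double slowRadius) {
                double dx = tx - px, dy = ty - py;
                double dist = Math.sqrt(dx * dx + dy * dy) + 1e-9;
                // Full speed far away, proportionally slower inside the radius.
                double desiredSpeed = maxSpeed * Math.min(1.0, dist / slowRadius);
                double desiredVx = dx / dist * desiredSpeed;
                double desiredVy = dy / dist * desiredSpeed;
                // Steering = desired velocity - current velocity.
                return new double[] { desiredVx - vx, desiredVy - vy };
            }
        }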

  • IrrKlang with Ogre

    - by Vinnie
    I'm trying to set up sound in my Ogre3D project. I have installed irrKlang 1.4.0 and added its include and lib directories to my project's VC++ Include and Library directories, but I'm still getting a linker error when I attempt to build. Any suggestions?

        Error 4007 error LNK2019: unresolved external symbol "__declspec(dllimport) class irrklang::ISoundEngine * __cdecl irrklang::createIrrKlangDevice(enum irrklang::E_SOUND_OUTPUT_DRIVER,int,char const *,char const *)" (_imp?createIrrKlangDevice@irrklang@@YAPAVISoundEngine@1@W4E_SOUND_OUTPUT_DRIVER@1@HPBD1@Z) referenced in function "public: __thiscall SoundManager::SoundManager(void)" (??0SoundManager@@QAE@XZ)

  • Why am I not getting an sRGB default framebuffer?

    - by Aaron Rotenberg
    I'm trying to make my OpenGL Haskell program gamma-correct by making appropriate use of sRGB framebuffers and textures, but I'm running into issues making the default framebuffer sRGB. Consider the following Haskell program, compiled for 32-bit Windows using GHC and linked against 32-bit freeglut:

        import Foreign.Marshal.Alloc(alloca)
        import Foreign.Ptr(Ptr)
        import Foreign.Storable(Storable, peek)
        import Graphics.Rendering.OpenGL.Raw
        import qualified Graphics.UI.GLUT as GLUT
        import Graphics.UI.GLUT(($=))

        main :: IO ()
        main = do
            (_progName, _args) <- GLUT.getArgsAndInitialize
            GLUT.initialDisplayMode $= [GLUT.SRGBMode]
            _window <- GLUT.createWindow "sRGB Test"

            -- To prove that I actually have freeglut working correctly.
            -- This will fail at runtime under classic GLUT.
            GLUT.closeCallback $= Just (return ())

            glEnable gl_FRAMEBUFFER_SRGB
            colorEncoding <- allocaOut $ glGetFramebufferAttachmentParameteriv
                gl_FRAMEBUFFER gl_FRONT_LEFT gl_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING
            print colorEncoding

        allocaOut :: Storable a => (Ptr a -> IO b) -> IO a
        allocaOut f = alloca $ \ptr -> do
            f ptr
            peek ptr

    On my desktop (Windows 8 64-bit with a GeForce GTX 760 graphics card) this program outputs 9729, a.k.a. gl_LINEAR, indicating that the default framebuffer is using a linear color space, even though I explicitly requested an sRGB window. This is reflected in the rendering results of the actual program I'm trying to write - everything looks washed out because my linear color values aren't being converted to sRGB before being written to the framebuffer. On the other hand, on my laptop (Windows 7 64-bit with an Intel graphics chip), the program prints 0 (huh?) and I get an sRGB default framebuffer by default, whether I request one or not! And on both machines, if I manually create a non-default framebuffer bound to an sRGB texture, the program correctly prints 35904, a.k.a. gl_SRGB. Why am I getting different results on different hardware? Am I doing something wrong? How can I get an sRGB framebuffer consistently on all hardware and target OSes?

  • How do I simplify terrain with tunnels or overhangs?

    - by KKlouzal
    I'm attempting to store vertex data in a quadtree with C++, such that far-away vertices can be combined to simplify the object and speed up rendering. This works well with a reasonably flat mesh, but what about terrain with overhangs or tunnels? How should I represent such a mesh in a quadtree? After the initial generation, each mesh is roughly 130,000 polygons, and about 300 of these meshes are lined up to create the surface of a planetary body. A fully generated planet is upwards of 10,000,000 polygons before applying any culling to the individual meshes. Therefore, this second optimization is vital for the project. The rest of my confusion centres on my inexperience with vertex data:

    - How do I properly loop through the vertex data to group vertices into specific quads?
    - How do I conclude from vertex data what a quad's maximum size should be?
    - How many quads should the quadtree include?
