Search Results

Search found 25377 results on 1016 pages for 'development 4 0'.


  • Enemy Spawning method in a Top-Down Shooter

    - by Chris Waters
    I'm working on a top-down shooter akin to DoDonPachi, Ikaruga, etc. The camera's movement through the world is handled automatically, with the player able to move inside the camera's visible region. Along the way, enemies are scripted to spawn at particular points along the path. While this sounds straightforward, I can see two ways to define these points:

      - Camera position: "trigger" spawning as the camera passes by the points
      - Time along path: "30 seconds in, spawn 2 enemies"

    In both cases, the camera-relative positions would be defined, as well as the behavior of the enemy. The way I see it, how you define these points will directly affect how the "level editor", or what have you, will work. Would there be any benefits of one approach over the other?
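    One way to keep both definitions open in an editor is to store each spawn point behind a small trigger interface, so a level file can mix the two kinds. A minimal sketch in Java (the names here are illustrative, not from the question):

        // Fires once when its condition is met; the level is just a list of these.
        interface SpawnTrigger {
            boolean shouldFire(float cameraY, float elapsedSeconds);
        }

        class CameraTrigger implements SpawnTrigger {
            private final float triggerY;
            CameraTrigger(float triggerY) { this.triggerY = triggerY; }
            public boolean shouldFire(float cameraY, float elapsedSeconds) {
                return cameraY >= triggerY; // fires as the camera passes the point
            }
        }

        class TimeTrigger implements SpawnTrigger {
            private final float triggerTime;
            TimeTrigger(float triggerTime) { this.triggerTime = triggerTime; }
            public boolean shouldFire(float cameraY, float elapsedSeconds) {
                return elapsedSeconds >= triggerTime; // "30 seconds in"
            }
        }

    Note that if the camera ever changes speed (slowdowns, boss stalls), the two definitions stop being equivalent, which is usually the deciding factor: camera-position triggers stay aligned with the scenery, time triggers stay aligned with the music and pacing.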

    Read the article

  • Ignore collisions with some objects in certain contexts

    - by Paul Manta
    I'm making a racing game with cars in Unity. The car has a boost/nitro powerup. While boosting, I don't want the car to be deflected when it collides with zombies, but I do want it to be deflected when it collides with walls. On the other hand, I don't want to ignore collisions with zombies entirely, because I still want to hit them on impact. How should I handle this? Basically, what I want is for the car not to rotate when colliding with certain objects.

    Read the article

  • relationship between the model and the renderer

    - by acrilige
    I tried to build a simple graphics engine and ran into these problems: I have a list of models that I need to draw, and an object (renderer) that implements an IRenderer interface with a method DrawObject(Object* obj). The implementation of the renderer depends on the graphics library being used (OpenGL/DirectX).

    First question: the model should know nothing about the renderer implementation, but in that case, where can I hold (cache) information that depends on the renderer implementation? For example, if the model has this definition:

        class Model {
        public:
            Model();
            Vertex* GetVertices() const;
        private:
            Vertex* m_vertices;
        };

    what is the best way to cache, for example, the vertex buffer of this model for DX11? Hold it in the renderer object?

    Second question: what is the best way for a model to tell the renderer HOW it must be rendered (for example with a texture, with bump mapping, or maybe just in one color)? I thought it could be done with flags, like this:

        model->SetRenderOptions(RENDER_TEXTURE | RENDER_BUMPMAPPING | RENDER_LIGHTING);

    with Renderer::DrawModel checking each flag. But it looks like this will become unwieldy as the number of options grows...
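    A common answer to the first question is to keep API-specific data on the renderer's side, in a cache keyed by the model, so the model stays API-agnostic. A rough sketch of the idea in Java (class names are illustrative, not from the question):

        import java.util.HashMap;
        import java.util.Map;

        class GpuMesh { /* API-specific handles, e.g. a vertex buffer id */ }

        class Renderer {
            // The model never sees GpuMesh; the renderer owns all API state.
            private final Map<Model, GpuMesh> cache = new HashMap<>();

            void draw(Model model) {
                GpuMesh mesh = cache.computeIfAbsent(model, this::upload);
                // ... bind mesh and issue the draw call ...
            }

            private GpuMesh upload(Model model) {
                // create the API-specific buffer from the model's vertices
                return new GpuMesh();
            }
        }

    For the second question, a material/technique object (the model references a Material, and the renderer maps materials to shader setups) tends to scale better than bit flags.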

    Read the article

  • libgdx collision detection / bounding the object

    - by johnny-b
    I am trying to get collision detection working, so I am drawing a red rectangle to see if it works. When I run the code below in the update method to check, the position is not in the right place: the red rectangle starts from the middle and not at the x and y point, so it draws wrong. I also have a getter method, so nothing is wrong there.

        bullet.set(getX(), getY(), getOriginX(), getOriginY());

    This is the render code:

        shapeRenderer.begin(ShapeType.Filled);
        shapeRenderer.setColor(Color.RED);
        shapeRenderer.rect(bullet.getX(), bullet.getY(), bullet.getOriginX(),
                           bullet.getOriginY(), 15, 5, bullet.getRotation());
        shapeRenderer.end();

    I have also tried it with a circle, but the circle draws in the middle, and I want it to be at the tip of the bullet, at the front (the x, y point):

        boundingCircle.set(getX() + getOriginX(), getY() + getOriginY(), 4.0f);

        shapeRenderer.begin(ShapeType.Filled);
        shapeRenderer.setColor(Color.RED);
        shapeRenderer.circle(bullet.getBoundingCircle().x, bullet.getBoundingCircle().y,
                             bullet.getBoundingCircle().radius);
        shapeRenderer.end();

    Thank you. I need it to be at the bullet's x and y, as the bullet is in the middle of the sprite as originally drawn via paint.
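    For reference, a sketch of the usual fix, assuming the sprite is drawn centered on (getX(), getY()): libgdx's ShapeRenderer.rect positions the rectangle by its bottom-left corner and rotates it around the passed origin (which is relative to that corner), so the debug shape has to be offset by half its size to line up with a center-anchored sprite:

        float w = 15, h = 5;
        // bottom-left corner at center minus half the size; rotate around the center
        shapeRenderer.rect(getX() - w / 2, getY() - h / 2,
                           w / 2, h / 2,
                           w, h,
                           1f, 1f, getRotation());

    The same idea puts a circle at the bullet's tip: place it at the sprite's center plus half the sprite's width along the facing direction, rather than at getX() + getOriginX().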

    Read the article

  • What should a game have in order to keep humans playing it?

    - by Adam Davis
    In many entertainment professions there are suggestions, loose rules, or general frameworks one follows that appeal to humans in one way or another. For instance, many movies and books follow the monomyth. In video games I find many types of games that attract people in different ways. Some people are addicted to Facebook gem-matching games. Others can't get enough of FPS games. Once in a while, though, you find a game that seems to transcend stereotypes and appeals almost immediately to everyone who plays it. For instance, Plants vs. Zombies seems to have a very, very large demographic of players. There are other games with a similar reach. I'm curious what books, blogs, etc. there are that explore these game types and styles, and try to suss out one or more popular frameworks/styles that satisfy people while keeping them coming back for more.

    Read the article

  • Incorrect results for frustum cull

    - by DeadMG
    Previously, I had a problem with my frustum culling producing too optimistic results - that is, including many objects that were not in the view volume. Now I have refactored that code and produced a cull that should be accurate to the actual frustum, instead of an axis-aligned box approximation. The problem is that now it never returns anything to be in the view volume. As the mathematical support library I'm using does not provide plane support functions, I had to code much of this functionality myself, and I'm not really the mathematical type, so it's likely that I've made some silly error somewhere. The relevant code follows.

    This class represents one plane as a point and a normal vector:

        class Plane {
        public:
            Plane() {
                r0 = Math::Vector(0,0,0);
                normal = Math::Vector(0,1,0);
            }
            Plane(Math::Vector p1, Math::Vector p2, Math::Vector p3) {
                r0 = p1;
                normal = Math::Cross((p2 - p1), (p3 - p1));
            }
            Math::Vector r0;
            Math::Vector normal;
        };

    The frustum is six such planes:

        class Frustum {
        public:
            Frustum( const std::array<Math::Vector, 8>& points ) {
                planes[0] = Plane(points[0], points[1], points[2]);
                planes[1] = Plane(points[4], points[5], points[6]);
                planes[2] = Plane(points[0], points[1], points[4]);
                planes[3] = Plane(points[2], points[3], points[6]);
                planes[4] = Plane(points[0], points[2], points[4]);
                planes[5] = Plane(points[1], points[3], points[5]);
            }
            Plane planes[6];
        };

    The points are passed in an order where (the inverse of) each bit of a point's index indicates whether it's on the left, top, and back of the frustum, respectively. As such, I just picked any three points that all share one bit in common to define each plane. My intersection test is as follows (based on this):

        bool Intersects(Math::AABB lhs, const Frustum& rhs) const {
            for(int i = 0; i < 6; i++) {
                Math::Vector pvertex = lhs.TopRightFurthest;
                Math::Vector nvertex = lhs.BottomLeftClosest;
                if (rhs.planes[i].normal.x <= -0.0f) { std::swap(pvertex.x, nvertex.x); }
                if (rhs.planes[i].normal.y <= -0.0f) { std::swap(pvertex.y, nvertex.y); }
                if (rhs.planes[i].normal.z <= -0.0f) { std::swap(pvertex.z, nvertex.z); }
                if (Math::Dot(rhs.planes[i].r0, nvertex) < 0.0f) {
                    return false;
                }
            }
            return true;
        }

    Also of note is that because I'm using a left-handed coordinate system, I wrote my Cross function to return the negative of the formula given on Wikipedia. Any suggestions as to where I've made a mistake?
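    For comparison, a minimal sketch of the standard n-vertex plane test (in Java, with a hypothetical Vec3 class): the sign test is made on the signed distance from the plane, which involves both the plane's normal and the point taken relative to r0, and every plane's normal must consistently point inward (or outward) for the early-out to mean anything:

        final class Vec3 {
            final float x, y, z;
            Vec3(float x, float y, float z) { this.x = x; this.y = y; this.z = z; }
            Vec3 sub(Vec3 o) { return new Vec3(x - o.x, y - o.y, z - o.z); }
            float dot(Vec3 o) { return x * o.x + y * o.y + z * o.z; }
        }

        // true if the n-vertex (the AABB corner farthest along -normal) lies on
        // the negative side of the plane through r0 with the given normal
        static boolean outside(Vec3 r0, Vec3 normal, Vec3 nvertex) {
            return normal.dot(nvertex.sub(r0)) < 0.0f;
        }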

    Read the article

  • Architecture for a central renderer rather than self-rendering

    - by The Communist Duck
    For the architectural side of rendering, there are two main approaches: having each object render itself, and having a single renderer that renders everything. I'm currently aiming for the second, for the following reasons:

      - The list can be sorted so each shader is only bound once. Otherwise each object would have to bind its shader, because it can't know whether it's already active.
      - The objects could be sorted and grouped.
      - Easier to swap APIs. With a few macro lines, it can be easy to swap between a DirectX renderer and an OpenGL renderer (not a reason for my project, but still a good point).
      - Easier to manage rendering code.

    Of course, if anyone has strong recommendations for the first method, I will listen to them. But I was wondering how to make this work.

    First idea: the renderer has a list of pointers to the renderable components of each entity, which register themselves on RenderComponent creation. However, I worry that this may end up as a lot of extra pointer weight. But I could sort the list of pointers every so often.

    Second idea: the entire list of entities is passed to the renderer each render call. The renderer then sorts the list (each call, or maybe once?) and gets what it wants. That's a lot of passing and/or sorting, however.

    Other ideas: ??? PROFIT. Anyone got ideas? Thank you.
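    A sketch of the first idea in Java (names are illustrative): components register with the renderer on construction, and the renderer keeps them sorted by a shader/material key so state changes are minimized:

        import java.util.ArrayList;
        import java.util.Comparator;
        import java.util.List;

        interface Renderable {
            int sortKey();   // e.g. shader id in the high bits, texture id below
            void draw();
        }

        class Renderer {
            private final List<Renderable> items = new ArrayList<>();
            private boolean dirty = false;

            void register(Renderable r)   { items.add(r); dirty = true; }
            void unregister(Renderable r) { items.remove(r); }

            void renderAll() {
                if (dirty) { // re-sort lazily, not every frame
                    items.sort(Comparator.comparingInt(Renderable::sortKey));
                    dirty = false;
                }
                for (Renderable r : items) r.draw();
            }
        }

    The "pointer weight" is usually negligible next to the draw calls themselves, and the lazy re-sort keeps the per-frame cost at a plain iteration.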

    Read the article

  • Recasting and Drawing in SDL

    - by user1078123
    I have some code that essentially draws a column of a wall on the screen in a raycasting-type 3D engine. I am trying to optimize it, as it takes about 10 milliseconds to draw a million pixels this way, and the vast majority of game time is spent in this loop. However, I don't quite understand what's occurring, particularly the recasting (I modified the "pixel manipulation" sample code from the SDL documentation). "canvas" is the surface I am drawing to, and "hello" is the surface containing the texture for the column.

        int c = (curcol) * canvas->format->BytesPerPixel;
        void *canvaspixels = canvas->pixels;
        Uint16 texpitch = hello->pitch;
        int lim = (drawheight + startdraw) * canvpitch + c + (int) canvaspixels;
        Uint8 *k = (Uint8 *) hello->pixels + (hit) * hello->format->BytesPerPixel;

        for (int j = (startdraw) * (canvpitch) + c + (int) canvaspixels; j < lim; j += canvpitch) {
            Uint8 *q = (Uint8 *) ((int(h)) * (texpitch) + k);
            *(Uint32 *) j = *(Uint32 *) q;
            h += s;
        }

    We have void pointers (I'm not sure how those are even represented), 8-, 16-, and 32-bit ints (h and s are floats), all intermingled, and while it works, it is quite confusing.

    Read the article

  • Projecting onto different size screens by cropping

    - by Jason
    Hi, I am building a phone application which will display a shape on screen. The shape should look the same on different screen sizes. I decided the best way to do this is to show more of the background on larger screens, keeping the shape's proportions the same on all screens. My problem is that I am not sure how to achieve this. I can query the screen size at runtime and calculate how different it is from the size the app was designed for, but I am not sure what to do with this value. What kind of projection should I use for my orthographic matrix, and how will I display more on larger screens without losing information on smaller screens? Thanks, Jason.
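    A common way to get this cropping behaviour is to fix the orthographic height in world units and derive the width from the actual aspect ratio (or the other way around), so larger or wider screens simply see more of the background. A sketch in Java, assuming a hypothetical setOrtho(left, right, bottom, top) call on whatever matrix API is in use:

        // Design the game for a fixed world height; the width follows the device.
        float worldHeight = 10f;                        // constant on every device
        float aspect = (float) screenWidth / screenHeight;
        float worldWidth = worldHeight * aspect;        // wider screens see more

        setOrtho(-worldWidth / 2, worldWidth / 2,
                 -worldHeight / 2, worldHeight / 2);

    The shape keeps the same on-screen proportions everywhere, while the extra horizontal (or vertical) world space is what gets revealed on larger screens.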

    Read the article

  • Rotating a cube using jBullet collisions

    - by Kenneth Bray
    How would one go about rotating/flipping a cube with the physics of jBullet? Here is the Draw method for my cube object:

        public void Draw() {
            // center point posX, posY, posZ
            float radius = .25f; // size / 2
            glPushMatrix();
            glBegin(GL_QUADS);
            // top
            glColor3f(5.0f, 1.0f, 5.0f); // white
            glVertex3f(posX + radius, posY + radius, posZ - radius);
            glVertex3f(posX - radius, posY + radius, posZ - radius);
            glVertex3f(posX - radius, posY + radius, posZ + radius);
            glVertex3f(posX + radius, posY + radius, posZ + radius);
            // bottom
            glColor3f(1.0f, 1.0f, 0.0f); // ?? color
            glVertex3f(posX + radius, posY - radius, posZ + radius);
            glVertex3f(posX - radius, posY - radius, posZ + radius);
            glVertex3f(posX - radius, posY - radius, posZ - radius);
            glVertex3f(posX + radius, posY - radius, posZ - radius);
            // right side
            glColor3f(1.0f, 0.0f, 1.0f); // ?? color
            glVertex3f(posX + radius, posY + radius, posZ + radius);
            glVertex3f(posX + radius, posY - radius, posZ + radius);
            glVertex3f(posX + radius, posY - radius, posZ - radius);
            glVertex3f(posX + radius, posY + radius, posZ - radius);
            // left side
            glColor3f(0.0f, 1.0f, 1.0f); // ?? color
            glVertex3f(posX - radius, posY + radius, posZ - radius);
            glVertex3f(posX - radius, posY - radius, posZ - radius);
            glVertex3f(posX - radius, posY - radius, posZ + radius);
            glVertex3f(posX - radius, posY + radius, posZ + radius);
            // front side
            glColor3f(0.0f, 0.0f, 1.0f); // blue
            glVertex3f(posX + radius, posY + radius, posZ + radius);
            glVertex3f(posX - radius, posY + radius, posZ + radius);
            glVertex3f(posX - radius, posY - radius, posZ + radius);
            glVertex3f(posX + radius, posY - radius, posZ + radius);
            // back side
            glColor3f(0.0f, 1.0f, 0.0f); // green
            glVertex3f(posX + radius, posY - radius, posZ - radius);
            glVertex3f(posX - radius, posY - radius, posZ - radius);
            glVertex3f(posX - radius, posY + radius, posZ - radius);
            glVertex3f(posX + radius, posY + radius, posZ - radius);
            glEnd();
            glPopMatrix();
            Update();
        }

    This is my update method for the cube position:

        public void Update() {
            Transform trans = new Transform();
            cubeRigidBody.getMotionState().getWorldTransform(trans);
            posX = trans.origin.x;
            posY = trans.origin.y;
            posZ = trans.origin.z;
            Quat4f outRot = new Quat4f();
            trans.getRotation(outRot);
            rotX = outRot.x;
            rotY = outRot.y;
            rotZ = outRot.z;
            rotW = outRot.w;
        }

    I am assuming I need to use glRotatef, but it does not seem to work at all when I try it. This is how I have tried to rotate the cubes:

        GL11.glRotatef(rotW, rotX, 0.0f, 0.0f);
        GL11.glRotatef(rotW, 0.0f, rotY, 0.0f);
        GL11.glRotatef(rotW, 0.0f, 0.0f, rotZ);
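    A sketch of one way to apply that quaternion with the fixed-function pipeline: convert it to axis-angle and issue a single glRotatef, with the cube drawn around the origin and the position applied via glTranslatef (otherwise the rotation happens around the world origin, not the cube's center). The helper below is hypothetical, not from the question:

        // convert a (normalized) quaternion to axis-angle and apply it
        static void applyRotation(float qx, float qy, float qz, float qw) {
            float angle = (float) Math.toDegrees(2.0 * Math.acos(qw));
            float s = (float) Math.sqrt(1.0 - qw * qw); // sin(angle / 2)
            if (s < 1e-4f) {
                return; // angle ~ 0, axis irrelevant
            }
            GL11.glRotatef(angle, qx / s, qy / s, qz / s);
        }

        // usage sketch inside Draw():
        //   glPushMatrix();
        //   GL11.glTranslatef(posX, posY, posZ);
        //   applyRotation(rotX, rotY, rotZ, rotW);
        //   ... draw the quads around (0,0,0) using just +/- radius ...
        //   glPopMatrix();

    jBullet's Transform can also fill a 4x4 column-major OpenGL matrix directly (getOpenGLMatrix), which can be passed to glMultMatrix and avoids the conversion entirely.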

    Read the article

  • Change player's state and controls in-game

    - by Samurai Fox
    I'm using Unity 3D. Let's say the player is an ice cube, controlled like a normal player. On the press of a button, the ice transforms (with an animation) into water, which is controlled completely differently than the ice cube. Another good example: the player is a human being with normal FPS controls; on the press of a button, the human transforms into a bird and now has completely different controls. Now, my question is, which would be easier and better:

      - make one object with an animation transition, and stay in that animation state until the button is pressed again, or
      - make two objects, ice and water: the ice has an animation of turning into water, so replace the ice object (with its animation) with the water object?

    And if anyone knows this one too: how do I switch between two different types of player controls?
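    For the control-switching part, one engine-agnostic pattern is to route input through a swappable controls object, so "transforming" is just replacing the active implementation. A minimal sketch in Java (all class names are illustrative):

        class Input { /* snapshot of keys/buttons for this frame */ }

        interface Controls {
            void handleInput(Input input, Player player);
        }

        class IceControls implements Controls {
            public void handleInput(Input input, Player player) { /* slide... */ }
        }

        class WaterControls implements Controls {
            public void handleInput(Input input, Player player) { /* flow... */ }
        }

        class Player {
            private Controls controls = new IceControls();

            void onTransformPressed() {
                // play the transform animation, then swap the active controls
                controls = (controls instanceof IceControls)
                        ? new WaterControls() : new IceControls();
            }

            void update(Input input) { controls.handleInput(input, this); }
        }

    In Unity terms this usually maps to enabling/disabling two control scripts on the same GameObject, which also sidesteps the one-object-vs-two-objects question for everything except the mesh and animation.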

    Read the article

  • How to use OpenGL functions from multiple threads?

    - by Robert
    I'm writing a small game using OpenGL. I'm implementing basic networking and I'm facing a problem. I have a thread in my client socket class that checks for available data; when there is data, I raise an event like this:

        immutable int len = this.m_socket.receive(data);
        if (len > 0)
        {
            this.m_onDataEvent(data);
        }

    Then in my game class, I have a function that handles and parses the data, like this:

        switch (msgId)
        {
            case ProtocolID.CharacterData:
                // Load terrain with OpenGL, character model...
        }

    I'm not able to call OpenGL functions here because my OpenGL context was created on a different thread. I really don't know how to solve this problem; I tried Google, but it's really hard to find a solution. I'm using the D programming language, if that helps.
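    The usual pattern, regardless of language, is to never touch GL from the socket thread: the network handler only enqueues parsed messages, and the thread that owns the GL context drains the queue once per frame. A sketch of the idea in Java (the Message and handler names are made up for illustration):

        import java.util.concurrent.ConcurrentLinkedQueue;

        class Message { int msgId; byte[] payload; }

        class GameNet {
            private final ConcurrentLinkedQueue<Message> inbox =
                    new ConcurrentLinkedQueue<>();

            // called from the socket thread: no GL here, just enqueue
            void onData(Message m) { inbox.add(m); }

            // called from the render thread at the top of each frame,
            // where the GL context is current
            void pumpMessages() {
                Message m;
                while ((m = inbox.poll()) != null) {
                    handle(m); // safe to create textures, buffers, etc. here
                }
            }

            private void handle(Message m) { /* load terrain, models, ... */ }
        }

    In D, std.concurrency's message passing (send/receive) gives the same structure without a hand-rolled queue.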

    Read the article

  • How can I achieve a 3D-like effect with spritebatch's rotation and scale parameters

    - by Alic44
    I'm working on a 2D game with a top-down perspective similar to Secret of Mana and the 2D Final Fantasy games, with one big difference being that it's an action RPG using a 3-dimensional physics engine. I'm trying to draw an aimer graphic (basically an arrow) at my characters' feet when they're aiming a ranged weapon. At first I just converted the character's aim vector to radians and passed that into spritebatch, but there was a problem. The position of every object in my world is scaled for perspective when it's drawn to the screen. So if the physics engine coordinates are (1, 0, 1), the screen coords are actually (1, .707) -- the Y and Z axes are scaled by a perspective factor of .707 and then added together to get the screen coordinates. This meant that the direction the aimer graphic pointed (thanks to its rotation value passed into spritebatch) didn't match up with the direction the projectile actually traveled over time. Things looked fine when the characters fired left, right, up, or down, but if you fired on a diagonal, the perspective of the physics engine didn't match the simplistic way I was converting the character's aim direction to a screen rotation.

    Ok, fast forward to now: I've got the aimer's rotation matched up with the path the projectile will actually take, which I'm doing by decomposing a transform matrix which I build from two rotation matrices (one to represent the aimer's rotation, and one to represent the camera's 45 degree rotation on the x axis).

    My question is: is there a way to get not just rotation from a series of matrix transformations, but also a Vector2 scale which would give the aimer the appearance of being a 3D object, warped by perspective? Orthographic perspective is what I'm going for, I think. So, the aimer arrow would get longer when facing sideways, and shorter when facing north or south because of the perspective. At the same time, it would get wider when facing north or south, and less wide when facing right or left. I'd like to avoid actually drawing the aimer texture in 3D because I'm still using spritebatch's layerdepth parameter at this point in my project, and I don't want to have to figure out how to draw a 3D object within the depth sorting system I already have.

    I can provide code and more details if this is too vague as a question... This is my first post on Stack Exchange. Thanks a lot for reading!

    Note: (I think) I realize it can't be a technically correct 3D perspective, because spritebatch's Vector2 scaling argument doesn't allow an object to be skewed the way it actually should be. What I'm really interested in is: is there a good way to fake the effect, or should I just drop it and not scale at all?

    Edit to clarify, without the help of a picture (apparently I can't post them yet): I want the aimer arrow to look like it has been painted on the ground at the character's feet, so it should appear to be drawn on the ground plane (in my case the XZ plane), which should be tilted at a 45 degree angle (around the X axis) from the viewing perspective. Alex
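    One way to fake it under exactly this projection (a sketch using the question's .707 factor, written with plain Java floats rather than XNA types): project the aim direction and its in-plane perpendicular onto the screen, and use their lengths as the length and width scales.

        // theta: aim angle in the ground (XZ) plane, 0 = facing right
        // k: the vertical perspective factor (~0.707 for a 45-degree tilt)
        static float[] aimerScale(float theta, float k) {
            float cos = (float) Math.cos(theta), sin = (float) Math.sin(theta);
            // screen length of the projected aim direction (arrow length)
            float lengthScale = (float) Math.sqrt(cos * cos + k * k * sin * sin);
            // screen length of the projected perpendicular (arrow width)
            float widthScale  = (float) Math.sqrt(sin * sin + k * k * cos * cos);
            return new float[] { lengthScale, widthScale };
        }

    This gives a full-length, 0.707x-wide arrow when firing left/right and a 0.707x-long, full-width arrow when firing up/down, matching the orthographic foreshortening of the ground plane. What it ignores is the skew a true projection adds on diagonals, which is exactly the part a Vector2 scale cannot express.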

    Read the article

  • Text based game in XNA

    - by Sigh-AniDe
    I want to create a text-based game where the player will type up, down, left, or right and the sprite will move in that direction. I created the game, and at the moment the player moves with the up, left, down, and right keys. I would like to change the sprite's movement from keypresses to text commands. I've googled a lot on creating text-based games in XNA but have had no luck. Could you please help me or guide me in the right direction of how to do this in XNA? All help will be greatly appreciated. Thanks
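    The engine-independent core is just mapping a typed line to a direction vector; the XNA-specific part is buffering typed characters each Update and applying the result on Enter. The mapping itself might look like this (a sketch in Java; the {dx, dy} representation is illustrative):

        import java.util.HashMap;
        import java.util.Map;

        class CommandParser {
            private static final Map<String, int[]> DIRECTIONS = new HashMap<>();
            static {
                DIRECTIONS.put("up",    new int[] { 0, -1 });
                DIRECTIONS.put("down",  new int[] { 0,  1 });
                DIRECTIONS.put("left",  new int[] { -1, 0 });
                DIRECTIONS.put("right", new int[] { 1,  0 });
            }

            // returns {dx, dy}, or null for an unknown command
            static int[] parse(String line) {
                return DIRECTIONS.get(line.trim().toLowerCase());
            }
        }

    Each frame, append newly pressed letter keys to a string buffer; when Enter is pressed, run the buffer through the parser and apply the returned direction to the sprite.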

    Read the article

  • Lighting with VBO

    - by nkint
    I'm using a Java JOGL wrapper called Processing (processing.org). I have coded an environment in it and I'm quite proud of it, even if it relies on some ready-made stuff that I knew nothing about (== lights). Then, for some geometry, I decided to use a VBO. I had to do things the hard way and recode all the lighting, but I can't achieve the same result. The first screenshot showed the original light system, the second the VBO version (images not included here). This is the code:

        Vec3D l;
        gl.glEnable(GL.GL_LIGHTING);
        gl.glEnable(GL.GL_LIGHT0);
        gl.glEnable(GL.GL_COLOR_MATERIAL);
        gl.glMaterialfv(GL.GL_FRONT_AND_BACK, GL.GL_AMBIENT, new float[]{0.8f, 0f, 0f}, 0);
        l = new Vec3D(0, 0, -10);
        gl.glColor3f(0.8f, 0f, 0f);
        gl.glLightfv(GL.GL_LIGHT0, GL.GL_POSITION, new float[] { l.x, l.y, l.z, 0 }, 0);
        gl.glLightfv(GL.GL_LIGHT0, GL.GL_SPOT_DIRECTION, new float[] { 1, 1, 1, 1 }, 0);

    I can't achieve the same light, the same color material, or the same wireframe stuff. If needed I can also post the code I use for the VBO, but it is quite standard vertex-array code grabbed off the net that uses glDrawArrays.
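    One thing that commonly breaks lighting when moving from immediate mode (where Processing supplied the normals) to a hand-rolled vertex array is forgetting per-vertex normals, which fixed-function lighting requires. A sketch of the client-state setup, assuming a FloatBuffer of normals parallel to the vertex buffer (JOGL 1.x style GL, as in the question's code):

        gl.glEnableClientState(GL.GL_VERTEX_ARRAY);
        gl.glEnableClientState(GL.GL_NORMAL_ARRAY);

        gl.glVertexPointer(3, GL.GL_FLOAT, 0, vertexBuffer);
        gl.glNormalPointer(GL.GL_FLOAT, 0, normalBuffer);  // one normal per vertex

        gl.glDrawArrays(GL.GL_TRIANGLES, 0, vertexCount);

        gl.glDisableClientState(GL.GL_NORMAL_ARRAY);
        gl.glDisableClientState(GL.GL_VERTEX_ARRAY);

    Also note that glLightfv(GL_POSITION, ...) is transformed by the modelview matrix current at the time of the call, so setting the light before vs. after the camera transform gives different results between the two code paths.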

    Read the article

  • 3D Modeling Software for Programmer [closed]

    - by Pathachiever11
    I've recently learned how to make games with Unity3d, and now I want to start making games! I can't wait to start! However, before I can make 3D games, I need to learn 3D modeling for character design, level design, and some animation. What is the easiest 3D modeling software that is compatible with Unity3d? I do not want to spend too much time learning the software. From what I've heard, Blender is a bit complicated to use, while Maya and 3ds Max seem very powerful. Could someone point me in the right direction? I don't want to spend a lot of time learning. I know it's not that easy, but you all have experience and probably know which one is easier yet still powerful. Could you recommend a package? Many thanks!

    Read the article

  • Developing games using virtualization on macOS (or Linux) [on hold]

    - by zpinner
    From what I've seen, most gamedev tools and engines that can produce cross-platform games are not supported on Mac: Havok/Project Anarchy, UDK, and GameMaker, for example. Basically, the only options I found are Unity3d and MonoGame + Xamarin. Unity is nice and I've been playing with it for some time, but the free version is quite limited when we're talking about shaders, which made me consider that as an indie developer I might want more freedom to experiment with new things without paying for the expensive Unity license. I haven't tried MonoGame + Xamarin yet, and although XNA is a very nice game framework, I'd like the freedom to experiment and finish a game first before paying for the IDE, which is not possible with the current Xamarin business model. That leaves me with the thought that I must go back to Windows, which I'd prefer to do only partially, if possible. Using Boot Camp is something I'd like to avoid, since it's a pain to reboot when changing OS, and that would probably force me to become a 100% Windows user. Is anyone actually developing a game using virtualization solutions like Parallels or VMware Fusion? How was your experience?

    Read the article

  • Does XNA/MonoGame have a text caching mechanism, or has an open source one been implemented?

    - by Casey
    I'm playing around with MonoGame, and I've noticed the SpriteFont class draws static text very inefficiently: each time the text is drawn, the spacing is recalculated. This isn't a big deal on my quad-core PC, but on mobile it might be a problem. Before I go and write a text class which caches the arrangement of its letters in an array and then feeds that array to the SpriteBatch, I would like to make sure there isn't something available to do this already, either in MonoGame itself or a class someone has implemented and made available for general use.

    Read the article

  • OpenGL quake 3 shader file for objects (for trees)

    - by mlodziaszka
    I decided to add a few trees to my game. I already have a Quake 3 model loader (MD3); it's for characters, and the texture-drawing method is stored in an *.ini file. I found a package of trees in MD3 format, and I have no problem loading a model on its own, but it comes with a *.shader file and I have no idea how to load that in order to draw the textures properly. Tree pack: http://www.custommapmakers.org/wiki/index.php/Models:GR_Trees_set I do not have to use exactly this format -- I can write another loader -- but trees in *.obj or *.3ds look even harder to handle.
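    Quake 3 .shader files are plain text: each entry is a shader name followed by a brace-delimited block whose inner { } stages contain directives such as "map textures/foo.tga". For statically textured props like trees, it is often enough to pull the first map path per shader name and bind that texture. A very rough sketch in Java (an assumption about how much of the format you need, not a full shader interpreter):

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.util.HashMap;
        import java.util.Map;

        // Maps "shaderName" -> first "map <path>" stage found in its block.
        // Ignores blendfuncs, deforms, $lightmap stages, etc.
        static Map<String, String> firstMaps(BufferedReader in) throws IOException {
            Map<String, String> result = new HashMap<>();
            String line, name = null;
            int depth = 0;
            while ((line = in.readLine()) != null) {
                String t = line.trim();
                if (t.isEmpty() || t.startsWith("//")) continue;
                if (t.equals("{")) { depth++; continue; }
                if (t.equals("}")) { depth--; if (depth == 0) name = null; continue; }
                if (depth == 0) { name = t; continue; }  // shader name line
                if (name != null && t.toLowerCase().startsWith("map ")) {
                    result.putIfAbsent(name, t.substring(4).trim());
                }
            }
            return result;
        }

    The MD3 surface names then look up their texture path in this map, falling back to the surface's own skin path when no shader entry exists.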

    Read the article

  • What cars on roads game engines are there?

    - by David Thielen
    What game engines are there that support laying out a map of roads and handle vehicle movement on those roads? Something similar to the basic functionality of Transport Tycoon/Locomotion. I don't care about looks (although prettier is better), and top-down or isometric is fine. I just need a simple way to create maps and move cars on them, and preferably the cars should take time to speed up and slow down as they go from stopped to full speed. I prefer Windows (any API on Windows). I also prefer a free engine, as this is just for internal use. I have found CarDriving 2D -- does anyone know if it works well?

    Read the article

  • Publishing a game -- any way to target both WP7 and Win8 Store?

    - by Rei Miyasaka
    I'm at a dilemma which seems like it should soon become an important issue for a lot of developers:

      - If I build a game in XNA, I won't be able to publish it on the Windows 8 Store, as it would be a classic application -- and classic applications can't be sold on the Store.
      - If I build a game in Metro DirectX, I would be able to sell it on the Store, but porting it to Windows Phone would involve porting it to Reach XNA, which would likely involve more effort than porting to OS X or Android -- both of which support C++. Of all the WinRT APIs supported across C++/JS/.NET, DirectX can only be programmed from C++. It's also unlikely that Microsoft will update Windows 7 or Vista to support the new DirectX features, although that would make Metro DirectX the first new version of DirectX to stop supporting the immediately preceding OS.
      - If I build a game in pre-Win8 DirectX 9/10/11, I won't be able to sell it on the Windows Store or Windows Phone, but I could sell it on something like Steam. It would also involve the most manual plumbing. In fact, DirectWrite, despite being part of DirectX 11, doesn't talk to Direct3D.

    I'm getting really tired of all these restrictions -- artificial and otherwise -- and I'm coming to a point where I'm considering switching to a platform with a less fragmented API, like Android or Mac/iOS. As far as bringing a game to market goes, excluding the actual market share of the platforms I might consider, what other factors would help me make a decision? Just a few years ago this question was a lot easier to answer: if you were primarily concerned with Windows platforms, all you had to decide was whether you wanted DirectX, XNA, or something like SlimDX. If you made the wrong decision, no biggie -- all you really would have lost is Xbox and the fairly small Windows Phone market.

    Read the article

  • Android: how to get OpenGL 3D coordinates in an onTouch event

    - by Sandy
    I created a cube in OpenGL, and it rotates in the onTouch event. To do this I created a CustomSurfaceView as follows:

        public class CustomSurfaceView extends GLSurfaceView {
            @Override
            public boolean onTouchEvent(MotionEvent e) {
                float x = e.getX();
                float y = e.getY();
                // ... use x and y ...
                return true;
            }
        }

    Here x and y are screen coordinates. How can I get 3D coordinates from this? I have already looked at gluProject and NeHe, but I don't know how to implement this in my project; it shows that there are no GLdouble/GLfloat classes.
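    On Android the desktop GLdouble/GLfloat types aren't needed; android.opengl.GLU.gluUnProject works on plain float arrays. A sketch, assuming viewWidth/viewHeight hold the surface size and that you have kept float[16] copies of your modelview and projection matrices (GLES 1.0 can't read them back, so they must be tracked by the app, e.g. with the SDK samples' MatrixGrabber class):

        float[] nearPos = new float[4];
        int[] viewport = { 0, 0, viewWidth, viewHeight };

        // window y is flipped relative to GL's convention
        float winX = x;
        float winY = viewHeight - y;

        GLU.gluUnProject(winX, winY, 0f,          // winZ 0 = near plane
                modelViewMatrix, 0,
                projectionMatrix, 0,
                viewport, 0,
                nearPos, 0);

        // Android's gluUnProject leaves the perspective divide to the caller
        float objX = nearPos[0] / nearPos[3];
        float objY = nearPos[1] / nearPos[3];
        float objZ = nearPos[2] / nearPos[3];

    A single 2D touch has no unique depth, so the usual approach is to unproject at winZ 0 and 1, form the ray between the two results, and intersect it with the cube or a chosen plane.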

    Read the article

  • Make objects slide across the screen in random positions

    - by user3475907
    I want to make an object appear randomly at the right-hand side of the screen, then slide across the screen and disappear at the left-hand side. I am working with libgdx. I have this bit of code, but it makes items fall from the top down. Please help.

        public EntityManager(int amount, OrthoCamera camera) {
            player = new Player(new Vector2(15, 230), new Vector2(0, 0), this, camera);
            for (int i = 0; i < amount; i++) {
                float x = MathUtils.random(0, MainGame.HEIGHT - TextureManager.ENEMY.getHeight());
                float y = MathUtils.random(MainGame.WIDTH, MainGame.WIDTH * 10);
                float speed = MathUtils.random(2, 10);
                addEntity(new Enemy(new Vector2(x, y), new Vector2(-0, -speed)));
            }
        }
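    The posted loop randomizes x across the screen height and sends the velocity along -y, which is exactly the falling behaviour. A sketch of the right-to-left variant, keeping the same classes and assuming Enemy takes position then velocity as in the question:

        for (int i = 0; i < amount; i++) {
            // start somewhere off the right edge, staggered so enemies stream in
            float x = MathUtils.random(MainGame.WIDTH, MainGame.WIDTH * 10);
            // random vertical lane on screen
            float y = MathUtils.random(0, MainGame.HEIGHT - TextureManager.ENEMY.getHeight());
            float speed = MathUtils.random(2, 10);
            // move left, no vertical motion
            addEntity(new Enemy(new Vector2(x, y), new Vector2(-speed, 0)));
        }

    Entities whose x drops below minus the enemy's width can then be removed or recycled back to the right edge.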

    Read the article

  • Java keyboard input [on hold]

    - by dØd
    I'm trying to implement an input system that can detect whether a certain key was held down or only pressed briefly. So far I have this:

        KEY_INTERACTION_TRESHOLD = 400; // ms

        // inside a constructor
        shouldMeasure = true;

        @Override
        public void keyPressed(KeyEvent e) {
            if (shouldMeasure) {
                startTime = System.currentTimeMillis();
                shouldMeasure = false;
                return;
            }
            System.out.println("Button is held down");
            e.consume();
        }

        @Override
        public void keyReleased(KeyEvent e) {
            if (System.currentTimeMillis() - startTime < KEY_INTERACTION_TRESHOLD) {
                System.out.println("Button was only pressed briefly");
            }
            startTime = 0;
            shouldMeasure = true;
            e.consume();
        }

    Now this works, but the problem is that there is a delay between when I press a key to hold and when the message "Button is held down" gets displayed. I understand why this delay occurs (for example, when you press and hold a letter there is a similar delay between the first and second letters printed out), but I would like to somehow avoid it. I'm using only the Java API.
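    That delay is the OS key auto-repeat kicking in before the second keyPressed arrives, so any scheme that waits for repeated keyPressed events inherits it. The usual workaround is to record key state on press/release and let your own timer decide when "held" starts. A minimal sketch using Swing's javax.swing.Timer (the 400 ms threshold is the question's):

        import java.awt.event.KeyAdapter;
        import java.awt.event.KeyEvent;
        import java.util.HashMap;
        import java.util.Map;
        import javax.swing.Timer;

        class HoldDetector extends KeyAdapter {
            private static final long THRESHOLD_MS = 400;
            private final Map<Integer, Long> pressedAt = new HashMap<>();

            HoldDetector() {
                // poll ~60 times a second instead of waiting for auto-repeat
                new Timer(16, ev -> {
                    long now = System.currentTimeMillis();
                    for (Long t : pressedAt.values()) {
                        if (now - t >= THRESHOLD_MS)
                            System.out.println("Button is held down");
                    }
                }).start();
            }

            @Override public void keyPressed(KeyEvent e) {
                pressedAt.putIfAbsent(e.getKeyCode(), System.currentTimeMillis());
            }

            @Override public void keyReleased(KeyEvent e) {
                Long t = pressedAt.remove(e.getKeyCode());
                if (t != null && System.currentTimeMillis() - t < THRESHOLD_MS)
                    System.out.println("Button was only pressed briefly");
            }
        }

    One caveat: on some platforms auto-repeat also synthesizes keyReleased/keyPressed pairs while a key is held, which needs filtering (e.g. ignoring a release immediately followed by a press of the same key).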

    Read the article

  • Triangulating a partially triangulated mesh (2D)

    - by teodron
    Referring to the above exhibits, this is the scenario I am working with: starting with a planar graph (in my case, a 2D mesh) with a given triangulation, the graph nodes are labeled RED and BLACK based on a certain criterion (A). A subgraph containing all the RED nodes, with edges only between directly connected neighbours, is formed (B). Note: although the figure shows a tree forming, the subgraph may well contain loops.

    Problem: I need to quickly build a triangulation around the subgraph (e.g. as shown in figure C), but under the constraint that the already-present edges must be kept in the final result.

    Question: is there a fast way of achieving this given a partially triangulated mesh? Ideally, the complexity should be in the O(n) class.

    Some side remarks:

      - It would be nice for the triangulation algorithm to take a vertex priority into account when adding edges (e.g. it should always try to build a "1-ring" structure around the most important nodes first; I can implement such a routine iteratively, but it's O(n^2)).
      - It would also be nice to somehow reflect the "hop distance" when adding edges: add edges first between nodes that were "closer" to each other in the starting topology.

    Nevertheless, disregarding the remarks: is there an already-known scenario similar to this one, where a triangulation is built on top of a partially given set of triangles/edges?

    Read the article
