Search Results

  • How can I achieve a 3D-like effect with SpriteBatch's rotation and scale parameters?

    - by Alic44
    I'm working on a 2D game with a top-down perspective similar to Secret of Mana and the 2D Final Fantasy games, with one big difference: it's an action RPG using a 3-dimensional physics engine. I'm trying to draw an aimer graphic (basically an arrow) at my characters' feet when they're aiming a ranged weapon. At first I just converted the character's aim vector to radians and passed that into SpriteBatch, but there was a problem. The position of every object in my world is scaled for perspective when it's drawn to the screen. So if the physics engine coordinates are (1, 0, 1), the screen coords are actually (1, .707) -- the Y and Z axes are scaled by a perspective factor of .707 and then added together to get the screen coordinates. This meant that the direction the aimer graphic pointed (thanks to its rotation value passed into SpriteBatch) didn't match up with the direction the projectile actually traveled over time. Things looked fine when the characters fired left, right, up, or down, but if you fired on a diagonal the perspective of the physics engine didn't match the simplistic way I was converting the character's aim direction to a screen rotation.

    OK, fast forward to now: I've got the aimer's rotation matched up with the path the projectile will actually take, which I'm doing by decomposing a transform matrix built from two rotation matrices (one to represent the aimer's rotation, and one to represent the camera's 45-degree rotation on the X axis). My question is: is there a way to get not just rotation from a series of matrix transformations, but also a Vector2 scale which would give the aimer the appearance of being a 3D object warped by perspective? Orthographic perspective is what I'm going for, I think. So the aimer arrow would get longer when facing sideways, and shorter when facing north or south, because of the perspective. At the same time, it would get wider when facing north or south, and narrower when facing right or left. I'd like to avoid actually drawing the aimer texture in 3D because I'm still using SpriteBatch's layerDepth parameter at this point in my project, and I don't want to have to figure out how to draw a 3D object within the depth-sorting system I already have. I can provide code and more details if this is too vague as a question... This is my first post on Stack Exchange. Thanks a lot for reading!

    Note: (I think) I realize it can't be a technically correct 3D perspective, because SpriteBatch's Vector2 scaling argument doesn't allow for an object to be skewed the way it actually should be. What I'm really interested in is: is there a good way to fake the effect, or should I just drop it and not scale at all?

    Edit to clarify without the help of a picture (apparently I can't post them yet): I want the aimer arrow to look like it has been painted on the ground at the character's feet, so it should appear to be drawn on the ground plane (in my case the XZ plane), which should be tilted at a 45-degree angle (around the X axis) from the viewing perspective. Alex
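
    A minimal sketch of one way to fake it, assuming the same .707 factor described above and an arrow texture that points along +X (all names hypothetical): project the arrow's forward and side directions onto the screen and use their lengths as the non-uniform SpriteBatch scale. It is exact at the four cardinal directions and only approximate on diagonals, since SpriteBatch can't skew:

        // Hedged sketch. Assumes: using System; using Microsoft.Xna.Framework;
        // "Tilt" is cos(45 deg) = ~0.707, the same factor the physics->screen
        // mapping above applies to the depth axis.
        const float Tilt = 0.707f;

        // aimAngle is the aimer's rotation in the ground (XZ) plane, in radians.
        static void GetAimerScreenParams(float aimAngle, out float screenRotation, out Vector2 scale)
        {
            float c = (float)Math.Cos(aimAngle);
            float s = (float)Math.Sin(aimAngle);

            // Project the forward direction (c, s) onto the screen: X passes
            // through unchanged, the depth component is squashed by Tilt.
            screenRotation = (float)Math.Atan2(s * Tilt, c);

            scale = new Vector2(
                // Length of the projected forward vector: 1 facing left/right,
                // Tilt (shorter) facing up/down the screen.
                (float)Math.Sqrt(c * c + s * s * Tilt * Tilt),
                // Length of the projected side vector: the arrow reads as
                // wider facing up/down, narrower facing sideways.
                (float)Math.Sqrt(s * s + c * c * Tilt * Tilt));
        }

    The returned rotation and scale drop straight into the existing SpriteBatch.Draw call.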

  • Triangulating a partially triangulated mesh (2D)

    - by teodron
    Referring to the above exhibits, this is the scenario I am working with: starting with a planar graph (in my case, a 2D mesh) with a given triangulation, the graph nodes are labeled RED and BLACK based on a certain criterion. (A) A subgraph containing all the RED nodes (with edges only between directly connected neighbours) is formed (note: although this figure shows a tree forming, it may well happen that the subgraph contains loops). (B) Problem: I need to quickly build a triangulation around the subgraph (e.g. as shown in figure C), but under the constraint that I have to keep the already present edges in the final result. Question: is there a fast way of achieving this given a partially triangulated mesh? Ideally, the complexity should be in the O(n) class. Some side remarks: it would be nice for the triangulation algorithm to take into account a certain vertex priority when adding edges (e.g. it should always try to build a "1-ring" structure around the most important nodes first -- I can implement such a routine iteratively, but it's O(n^2)). It would also be nice to somehow reflect the "hop distance" when adding edges: add edges first between nodes that were "closer" to each other in the starting topology. Nevertheless, disregarding the remarks: is there an already known scenario similar to this one, where a triangulation is built upon a partially given set of triangles/edges?
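
    Whatever insertion order ends up being used, the constraint of keeping the existing edges needs one cheap primitive: rejecting a candidate edge that properly crosses an edge already in the mesh. A minimal sketch of that test (in C#, names hypothetical), which a greedy pass visiting candidates in priority order could call against nearby kept edges:

        // Hedged sketch. Assumes: using Microsoft.Xna.Framework; for Vector2.
        // Sign of the signed area of triangle (a, b, c): which side of ab is c on.
        static float Cross(Vector2 a, Vector2 b, Vector2 c)
        {
            return (b.X - a.X) * (c.Y - a.Y) - (b.Y - a.Y) * (c.X - a.X);
        }

        // True if segments pq and rs properly cross. Shared endpoints and
        // collinear overlaps are deliberately ignored in this sketch.
        static bool Crosses(Vector2 p, Vector2 q, Vector2 r, Vector2 s)
        {
            float d1 = Cross(p, q, r), d2 = Cross(p, q, s);
            float d3 = Cross(r, s, p), d4 = Cross(r, s, q);
            return d1 * d2 < 0 && d3 * d4 < 0;
        }

    With kept edges bucketed in a grid or reachable through the mesh's own adjacency, each candidate should only need to test a constant number of neighbours, which is what would keep a greedy pass near O(n).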

  • Square game map rendered as sphere with OpenGL

    - by Roflha
    Okay, so I have been trying to find a good way to do this for a while now, and so far I have nothing. For a hobby project of mine I have created a finite voxel world (similar to Minecraft), but as I said, mine is finite. When you reach the edge of it, you are sent to the other side. That is all working fine, along with rendering the far side of the map, but I want to be able to render this grid as a sphere. Looking down from above, the world is a square. I basically want to be able to represent a portion of that square as a sphere, as if you were looking at a planet. Right now I am experimenting with taking a circular section of the map and rendering that, but it looks too flat (no curvature around the edges). My question, then, is: what would be the best way to add some curvature to the edges of a 2D circle to make it look like a hemisphere? However, I am not overly attached to this implementation, so if somebody has some other idea for representing the square as a planet, I am all ears.
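
    One cheap way to fake the curvature, assuming the circular section is drawn as a heightfield whose vertices can be displaced (a sketch under that assumption; HemisphereLift is a hypothetical helper):

        // Hedged sketch: bulge a flat disc of radius R into a hemisphere by
        // offsetting each vertex along the view axis by the height of a
        // sphere above that point. Assumes: using System;
        static float HemisphereLift(float d, float R)
        {
            // d is the vertex's distance from the disc centre, clamped to the disc.
            d = Math.Min(Math.Abs(d), R);
            // Full R at the middle, falling to 0 at the rim, so the edges of
            // the disc curve away instead of looking flat.
            return (float)Math.Sqrt(R * R - d * d);
        }

    Vertices near the rim drop toward zero, so the disc's silhouette curves away like a planet's limb; a true sphere mapping of the whole square (e.g. cube-to-sphere on one face) would be the heavier alternative.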

  • backface culling error

    - by acrilige
    I'm writing a simple software renderer. In my pipeline I have a backface culling stage, but it looks like it has some error (see picture). I perform culling right after the world transformation. (I can't insert a picture in the post because I don't have enough points, so I just uploaded it (cube model): http://imageshack.us/photo/my-images/705/bcerror.png/)

        Vector3F view_dir(0.0f, 0.0f, 1.0f);
        std::vector<Triangle> to_remove;
        for (Triangle &t : m_triangles)
        {
            Vector4F e1 = t.v2 - t.v1;
            Vector4F e2 = t.v3 - t.v1;
            Vector3F normal(e1.y * e2.z - e1.z * e2.y,
                            e1.z * e2.x - e1.x * e2.z,
                            e1.x * e2.y - e1.y * e2.x);
            normal.Normalize();
            float dot = Dot(view_dir, normal);
            if (dot <= 0)
                to_remove.push_back(t);
        }
        for (Triangle& t : to_remove)
            m_triangles.erase(std::remove(m_triangles.begin(), m_triangles.end(), t), m_triangles.end());

    The camera sits at the origin and points into the screen (RH). What is the reason?
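
    One thing worth checking, offered as an assumption since the projection step isn't shown: a constant view_dir only matches an orthographic view. Under a perspective projection, triangles near the screen edges can face the camera even when their normals fail the fixed-direction test, which produces exactly this kind of patchy culling on a cube. The usual fix is to test against the eye-to-triangle vector; a sketch in C# (types hypothetical, XNA-style Vector3):

        // Hedged sketch: cull against the eye-to-triangle direction instead of
        // a fixed axis. Assumes: using Microsoft.Xna.Framework; camera at origin.
        static bool IsBackFacing(Vector3 v1, Vector3 v2, Vector3 v3)
        {
            // Face normal from the winding order.
            Vector3 normal = Vector3.Cross(v2 - v1, v3 - v1);

            // With the camera at the origin, v1 is the eye-to-triangle vector.
            // Flip the comparison if your winding/handedness is the other way.
            return Vector3.Dot(v1, normal) >= 0;
        }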

  • Central renderer for a given scene

    - by Loggie
    When creating a central rendering system for all game objects in a given scene, I am trying to work out the best way to pass the scene to the render system to be rendered. If the scene is managed by an arbitrary structure -- an octree, BSP tree, quadtree, kd-tree, etc. -- what is the best way to pass it to the render system? The obvious problem is that if it were simply given the root node of the structure, the render system would need intrinsic knowledge of the structure in order to traverse it.

    My solution is to clip all objects outside the frustum in the scene manager, build a list of the objects which are left, and pass this simple list to the render system, be it an array, a vector, a linked list, etc. (this would be a structure required by the render system as a means of knowing which objects should be rendered). The list would of course attempt to minimise OpenGL state changes by grouping objects that require the same rendering operations to be performed on them.

    I have been thinking a lot about this, have searched various terms on here, and followed any additional information/links, but I have not really found a definitive answer. It may be that there is no definitive answer, but I would appreciate some advice and tips. My question is: is this a reasonable solution to the problem? Are there any improvements I could make? Are there any caveats I should know about? Side question: am I right in assuming that octrees, BSP trees, etc. are all forms of BVH?
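
    On the state-grouping point above, a common shape for that list is a flat array of items carrying a sort key built from their state (shader, texture, blend), so the renderer changes state only when the key changes. A minimal sketch in C#, with every name hypothetical:

        // Hedged sketch: a flat render list sorted by a coarse state key so the
        // renderer can batch state changes. Assumes: using System.Collections.Generic;
        struct RenderItem
        {
            public int StateKey;      // e.g. shader id in high bits, texture id in low bits
            public object Renderable; // whatever the render system draws
        }

        static void Render(List<RenderItem> items)
        {
            items.Sort((a, b) => a.StateKey.CompareTo(b.StateKey));
            int currentKey = -1;
            foreach (var item in items)
            {
                if (item.StateKey != currentKey)
                {
                    currentKey = item.StateKey;
                    // bind shader/texture/blend state once per run of equal keys
                }
                // issue the draw call for item.Renderable
            }
        }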

  • relationship between the model and the renderer

    - by acrilige
    I tried to build a simple graphics engine and was faced with these problems: I have a list of models that I need to draw, and an object (renderer) that implements an IRenderer interface with a method DrawObject(Object* obj). The implementation of the renderer depends on the graphics library being used (OpenGL/DirectX).

    1st question: the model should know nothing about the renderer implementation, but in that case where can I hold (cache) information that depends on the renderer implementation? For example, if the model has this definition:

        class Model
        {
        public:
            Model();
            Vertex* GetVertices() const;
        private:
            Vertex* m_vertices;
        };

    what is the best way to cache, for example, the vertex buffer of this model for DX11? Hold it in the renderer object?

    2nd question: what is the best way for a model to tell the renderer HOW it must be rendered (for example with a texture, bump mapping, or maybe just in one color)? I thought it could be done with flags, like this:

        model->SetRenderOptions(RENDER_TEXTURE | RENDER_BUMPMAPPING | RENDER_LIGHTING);

    and then checking each flag in the Renderer::DrawModel method. But it looks like that will become unwieldy as the number of options grows...
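
    For the 1st question, one common answer is indeed to hold it in the renderer, in a cache keyed by the model, so the model never sees API types. A sketch of the idea (in C# for brevity; every name here is hypothetical and the shape ports directly to the C++ design above):

        // Hedged sketch: renderer-side cache of GPU resources, keyed by model.
        // The model stays API-agnostic; only the renderer knows GpuMesh.
        // Assumes: using System.Collections.Generic;
        class GpuMesh { /* API-specific vertex/index buffers live here */ }

        class Model { /* vertices only; no renderer types */ }

        class Renderer
        {
            readonly Dictionary<Model, GpuMesh> cache = new Dictionary<Model, GpuMesh>();

            public void DrawObject(Model model)
            {
                GpuMesh mesh;
                if (!cache.TryGetValue(model, out mesh))
                {
                    // First time this model is seen: upload its vertices once.
                    mesh = UploadToGpu(model);
                    cache[model] = mesh;
                }
                // ... bind mesh and issue the draw call ...
            }

            GpuMesh UploadToGpu(Model model) { /* create buffers from model data */ return new GpuMesh(); }
        }

    For the 2nd question, a material object the renderer interprets (a small bundle of states and parameters) tends to scale better than a growing flag enum.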

  • Does XNA/MonoGame have a text caching mechanism, or has an open source one been implemented?

    - by Casey
    I'm playing around with MonoGame, and I've noticed the SpriteFont class draws static text very inefficiently: each time the text is drawn, the spacing is recalculated. This isn't a big deal on my quad-core PC, but on mobile it might be a problem. Before I go and program some text class which caches the arrangement of its letters in an array and then feeds that array to the SpriteBatch, I would like to make sure there isn't something available to do this already, either in MonoGame itself or a class someone has implemented and made available for general use.
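
    I'm not aware of a built-in cache either, but one ready-made approach that avoids hand-arranging glyphs is to bake the static string into a RenderTarget2D once and draw that texture afterwards. A hedged sketch (BakeText is a hypothetical helper):

        // Hedged sketch: bake a static string into a texture once, then draw
        // the texture instead of re-measuring glyphs every frame.
        // Assumes: using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics;
        Texture2D BakeText(GraphicsDevice device, SpriteBatch batch, SpriteFont font, string text)
        {
            Vector2 size = font.MeasureString(text);
            var target = new RenderTarget2D(device, (int)size.X, (int)size.Y);

            device.SetRenderTarget(target);
            device.Clear(Color.Transparent);
            batch.Begin();
            batch.DrawString(font, text, Vector2.Zero, Color.White);
            batch.End();
            device.SetRenderTarget(null); // back to the backbuffer

            return target; // RenderTarget2D derives from Texture2D
        }

    Tinting still works at draw time; on platforms where the device can be lost, the target's contents may need re-baking.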

  • With Slick, how to change the resolution during gameplay?

    - by TheLima
    I am developing a tile-based strategy game using Java and the Slick API. So far so good, but I've come to a standstill on my options menu. I have plans for the user to be able to change the resolution during gameplay (it is pretty common, after all). I can already change to fullscreen and back to windowed; this was pretty simple:

        //"fullScreenOption" is a checkbox-like button.
        if (fullScreenOption.isMouseOver(mouseX, mouseY)) {
            if (input.isMouseButtonDown(Input.MOUSE_LEFT_BUTTON)) {
                fullScreenOption.state = !fullScreenOption.state;
                container.setFullscreen(fullScreenOption.state);
            }
        }

    But the container class (implemented by Slick, not me), contrary to my previous beliefs, does not seem to have any resolution-change functions! And that's pretty much the situation... I know it's possible, but I don't know how to do it, nor which class is responsible! The AppGameContainer class, used at the very start of the game's initialization, is the only place with any functions for changing the display mode that I've found so far, but it's only used at the very start, and I haven't found a way to travel back to it from my options menu.

        //This is my implementation of it...
        public static void main(String[] args) throws SlickException {
            AppGameContainer app = new AppGameContainer(new Main());
            // app.setTargetFrameRate(60);
            app.setVSync(true);
            app.setDisplayMode(800, 600, false);
            app.start();
        }

    I could define it as a static global in Main, but that's probably a (very) bad way to do it...

  • How should bots be recognised in a game?

    - by Bane
    I'm interested in how bots are usually written. Here's my situation: I plan to make an online 2D mecha game in HTML5, and the server-side will be done with node. It is intended to be multiplayer, but I also want to make bots in case there aren't enough players. How does my game logic see them, as players or as bots? Is there a standard by which I should make them? Also, any general tips and hints will be OK.
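
    There's no formal standard, but a common pattern is for the game logic to see only players, each driven by a controller; a bot is just a different controller producing the same inputs. A sketch of the shape (in C# for brevity, every name hypothetical; the same structure works in node):

        // Hedged sketch: the simulation only ever sees IController, so bots
        // and humans are indistinguishable to the game logic.
        interface IController
        {
            PlayerInput GetInput(GameState state);
        }

        struct PlayerInput { public float MoveX, MoveY; public bool Fire; }

        class NetworkController : IController
        {
            // Filled in from the latest message from a real client.
            public PlayerInput Latest;
            public PlayerInput GetInput(GameState state) { return Latest; }
        }

        class BotController : IController
        {
            public PlayerInput GetInput(GameState state)
            {
                // The AI decides here, using the same view of the world a client gets.
                return new PlayerInput { MoveX = 1f, Fire = true };
            }
        }

        class GameState { /* world snapshot the controllers read */ }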

  • Adding 'swerve' to a direction

    - by Skoder
    Hey. I'm not much of a maths expert, so this is probably quite straightforward. I was playing a soccer flash game where you take free kicks. You provide Power, Swerve and Direction. I'm reading up on vectors and such so I can use the direction and power information to shoot the ball with the correct velocity. What I don't understand is how the 'Swerve' information is used. What formula connects the Swerve information with the Direction and Power? (This is all in 2D.) Thanks for any advice.
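
    One simple model, offered as a guess at what such games do rather than the actual formula: Direction and Power set the initial velocity, and Swerve contributes an acceleration perpendicular to the current velocity (a crude stand-in for the Magnus effect), which bends the flight path into an arc. A sketch (C#, XNA-style Vector2; all names hypothetical):

        // Hedged sketch: swerve as a sideways acceleration on the ball (2D).
        // Assumes: using System; using Microsoft.Xna.Framework;
        class Ball
        {
            public Vector2 Position;
            public Vector2 Velocity;
            public float Swerve;

            // direction is in radians; power and swerve come from the kick UI.
            public void Kick(float direction, float power, float swerve)
            {
                Velocity = new Vector2((float)Math.Cos(direction), (float)Math.Sin(direction)) * power;
                Swerve = swerve;
            }

            public void Update(float dt)
            {
                // Unit vector 90 degrees left of travel; Swerve's sign picks the side.
                Vector2 side = new Vector2(-Velocity.Y, Velocity.X);
                side.Normalize();

                Velocity += side * (Swerve * dt); // bends the path into an arc
                Position += Velocity * dt;
            }
        }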

  • Alpha From PNGs Butchered

    - by ashes999
    I have a pretty vanilla MonoGame game. I'm using PNGs for all my sprites (made in Photoshop). I noticed that XNA is butchering the anti-aliasing; no matter what I do, my graphics appear jaggedy. Below is a screenshot. The bottom half is what XNA shows me when I zoom in 2X using a Matrix on my GraphicsDevice (to make the effect more obvious). The top is when I pasted the same sprites from Photoshop and scaled them to 200%. Note that partially transparent pixels are turning whiteish. Is there a way to fix this? What am I doing wrong? Here's the relevant call to draw to the SpriteBatch:

        spriteBatch.Draw(this.texture, this.positionVector, null, Color.White,
            this.Angle, this.originVector, 1f, SpriteEffects.None, 0f);

    (this.positionVector can easily be Vector2.Zero; Color.White is 100% alpha, I think; this.Angle can be a real angle (small > in the image) or zero (the orb itself).)
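
    Whiteish partially-transparent pixels are the classic sign of an alpha mismatch: XNA 4 / MonoGame default to premultiplied alpha, while a PNG saved straight out of Photoshop carries straight alpha. Assuming the texture isn't being premultiplied at build/load time, a hedged fix is to draw with the non-premultiplied blend state (zoomMatrix stands in for the 2X matrix mentioned above):

        // Hedged sketch: if the PNG's alpha is straight (not premultiplied),
        // tell SpriteBatch to blend it that way instead of the default.
        spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.NonPremultiplied,
            SamplerState.LinearClamp, null, null, null, zoomMatrix);

        spriteBatch.Draw(this.texture, this.positionVector, null, Color.White,
            this.Angle, this.originVector, 1f, SpriteEffects.None, 0f);

        spriteBatch.End();

    The other route is to keep the default blending and let the content pipeline premultiply the texture (the 'Premultiply Alpha' processor setting), which usually also behaves better where sprites overlap.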

  • Why was my Facebook game rejected with the note that "your app icon must not overlap with content in your cover image?"

    - by peterwilli
    My FB game just recently got rejected for two reasons. The first I fixed, but I just can't seem to figure out what they mean by the second, and I was hoping someone else has hit the same issue and knows what they meant. The remaining error is:

        Cover Image: Your app icon must not overlap with content in your cover image. Click on 'Web Preview' in the 'App Details' section to check for overlap prior to submitting your app. See more here.

    All I know is that the rejection has something to do with the cover image, not the icons or the screenshots. The web preview of my game looks like this now: Please let me know what to do to get approved.

  • Is there a cross-platform special directory I can use for game save files?

    - by Suds
    I'm developing with LWJGL and Java on a Windows 7 laptop. I've successfully set up saving to the %appdata%\gamename\saves\ folder (long form: c:\users\user\appdata\roaming\gamename\saves\) by using:

        File dir = new File(System.getenv("APPDATA") + "\\gamename\\saves\\");

    I have hobbyist-level experience with Linux, and zero experience with OSX. My game will be fully cross-platform. Is System.getenv("APPDATA") cross-platform? If so, where does it point to on Linux or OSX? Is there a best-practices alternative that I should use?

  • Event Driven Behavior Tree: deterministic traversal order with parallel

    - by Heisenbug
    I've studied several articles and listened to some talks about behavior trees (mostly the resources available on AIGameDev by Alex J. Champandard). I'm particularly interested in event-driven behavior trees, but I still have some doubts about how to implement them correctly using a scheduler. Just a quick recap:

    Standard behavior tree:
    - Each execution tick the tree is traversed from the root in depth-first order.
    - The execution order is implicitly expressed by the tree structure. So in the case of behaviors parented to a parallel node, even if both children are executed during the same traversal, the first leaf is always evaluated first.

    Event-driven BT:
    - During the first traversal the nodes (tasks) are enqueued using a scheduler, which is responsible for updating only the running ones every update.
    - The first traversal implicitly produces a depth-first-ordered queue in the scheduler.
    - Non-leaf nodes stay suspended most of the time. When a leaf node terminates (either with success or fail status) the parent (observer) is woken up, allowing the tree traversal to continue, and new tasks are enqueued in the scheduler.
    - Without parallel nodes in the tree there will be at most 1 task running in the scheduler.
    - Without parallel nodes, the tasks in the queue (excluding dynamic-priority implementations) will always be ordered in depth-first order (is this right?).

    Now, some requirements I think need to be guaranteed by a correct implementation (I'm not sure though):
    - The result of the traversal should be independent of which implementation strategy is used.
    - The traversal result must be deterministic.

    I'm struggling to guarantee both in the case of parallel nodes. Here's an example:

        Parallel_1
        --> Sequence_1
        ----> leaf_A
        ----> leaf_B
        --> leaf_C

    Considering a FIFO policy for the scheduler, before the leaf_A node terminates the tasks in the scheduler are:

        P1 (suspended), S1 (suspended), leaf_A (running), leaf_C (running)

    When leaf_A terminates, leaf_B is scheduled (at the end of the queue), so the queue becomes:

        P1 (suspended), S1 (suspended), leaf_C (running), leaf_B (running)

    In this case leaf_B will be executed after leaf_C on every update, whereas with a non-event-driven traversal from the root node, leaf_B would always be evaluated before leaf_C. So I have a couple of questions: do I understand correctly how event-driven BTs work? How can I guarantee that the depth-first order is respected with such an implementation? Is this a common issue, or am I missing something?
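
    One way to restore the depth-first guarantee, sketched under my own assumptions rather than anything from the AIGameDev material: stamp every node with its depth-first index once, and make the scheduler an ordered structure on that index instead of a FIFO, so a freshly scheduled leaf_B re-enters ahead of leaf_C. A C# sketch with hypothetical names:

        // Hedged sketch: scheduler ordered by depth-first index, so tasks
        // always update in tree order regardless of when they were scheduled.
        // Assumes: using System; using System.Collections.Generic;
        enum TaskStatus { Running, Success, Failure }

        class Task
        {
            public int DfIndex;              // assigned once by depth-first numbering of the tree
            public Func<TaskStatus> Update;  // the node's per-tick work
        }

        class Scheduler
        {
            // Sorted by DfIndex; in the example, leaf_B (index 3) lands before leaf_C (index 4).
            readonly SortedList<int, Task> running = new SortedList<int, Task>();

            public void Schedule(Task t) { running[t.DfIndex] = t; }

            public void Tick()
            {
                // Snapshot so observers woken mid-tick can schedule new
                // children without invalidating the iteration.
                foreach (var t in new List<Task>(running.Values))
                {
                    if (t.Update() != TaskStatus.Running)
                        running.Remove(t.DfIndex);
                    // Completion would notify the parent observer here, which
                    // may call Schedule() for the next child -- at its own DfIndex.
                }
            }
        }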

  • cocos2d-x simple shader usage [on hold]

    - by Narek
    I want to obtain the color ramp effect from this tutorial: http://www.raywenderlich.com/10862/how-to-create-cool-effects-with-custom-shaders-in-opengl-es-2-0-and-cocos2d-2-x Here is my code in cocos2d-x 3:

        bool HelloWorld::init()
        {
            //////////////////////////////
            // 1. super init first
            if ( !Layer::init() )
            {
                return false;
            }

            Vec2 origin = Director::getInstance()->getVisibleOrigin();

            sprite = Sprite::create("HelloWorld.png");
            sprite->setAnchorPoint(Vec2(0, 0));
            sprite->setRotation(3);
            sprite->setPosition(origin);
            addChild(sprite);

            std::string str = FileUtils::getInstance()->getStringFromFile("CSEColorRamp.fsh");
            const GLchar * fragmentSource = str.c_str();
            GLProgram* p = GLProgram::createWithByteArrays(ccPositionTextureA8Color_vert, fragmentSource);
            p->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_POSITION, GLProgram::VERTEX_ATTRIB_POSITION);
            p->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_TEX_COORD, GLProgram::VERTEX_ATTRIB_TEX_COORD);
            p->link();
            p->updateUniforms();
            sprite->setGLProgram(p);

            // 3
            colorRampUniformLocation = glGetUniformLocation(sprite->getGLProgram()->getProgram(), "u_colorRampTexture");
            glUniform1i(colorRampUniformLocation, 1);

            // 4
            colorRampTexture = Director::getInstance()->getTextureCache()->addImage("colorRamp.png");
            colorRampTexture->setAliasTexParameters();

            // 5
            sprite->getGLProgram()->use();
            glActiveTexture(GL_TEXTURE1);
            glBindTexture(GL_TEXTURE_2D, colorRampTexture->getName());
            glActiveTexture(GL_TEXTURE0);

            return true;
        }

    And here is the fragment shader as it is in the tutorial:

        #ifdef GL_ES
        precision mediump float;
        #endif

        // 1
        varying vec2 v_texCoord;
        uniform sampler2D u_texture;
        uniform sampler2D u_colorRampTexture;

        void main()
        {
            // 2
            vec3 normalColor = texture2D(u_texture, v_texCoord).rgb;

            // 3
            float rampedR = texture2D(u_colorRampTexture, vec2(normalColor.r, 0)).r;
            float rampedG = texture2D(u_colorRampTexture, vec2(normalColor.g, 0)).g;
            float rampedB = texture2D(u_colorRampTexture, vec2(normalColor.b, 0)).b;

            // 4
            gl_FragColor = vec4(rampedR, rampedG, rampedB, 1);
        }

    As a result I get a black screen with 2 draw calls. What is wrong? Am I missing something?

  • Music for Kids Game!

    - by Dane
    I'm developing multimedia software for kindergarten kids. It introduces them to animals, the alphabet, simple math, and colors, and it contains some simple games. Music is crucial for my project, and it is very important to choose the right sort of music for the different sections. Unfortunately, I know nothing about music. Is there a music consultancy which can help me choose melodies and rhythms for my project from the free music available on the internet? My budget is limited, but as this is mandatory and I have no knowledge or taste in music, I think I can afford to pay for this.

  • What is the purpose of bitdepth for the several components of the framebuffer in glfwWindowHint function of GLFW3?

    - by Rui d'Orey
    I would like to know what the following "framebuffer-related hints" of the GLFW3 function glfwWindowHint do:

    - GLFW_RED_BITS
    - GLFW_GREEN_BITS
    - GLFW_BLUE_BITS
    - GLFW_ALPHA_BITS
    - GLFW_DEPTH_BITS
    - GLFW_STENCIL_BITS

    What is the purpose of these bit depths? Are their default values usually enough? Where are those bits stored -- in a buffer on the GPU? What do they affect, and in what way? Thank you in advance!

  • How do I keep a 3D model on the screen in OpenGL?

    - by NoobScratcher
    I'm trying to keep a 3D model on the screen by placing my glDrawElements calls inside the draw function, with the declarations at the top of the .cpp. When I render the model, the model attaches itself to the current vertex buffer object. This is because my whole graphical user interface is drawn as 2D quads, except the window frame. Is there a way to avoid this happening, or any common causes of this?

    Creating the file object:

        int index = IndexAssigner(1, 1);
        // make a file object and store the list and the index of that list in a C string
        ifstream file (list[index].c_str());
        // Make another string
        //string line;
        points.push_back(Point());
        Point p;
        int face[4];

    Model rendering code:

        int numfloats = 4;
        float* point = reinterpret_cast<float*>(&points[0]);
        int num_bytes = numfloats * sizeof(float);
        cout << "Size Of Point" << sizeof(Point) << endl;

        GLuint vertexbuffer;
        glGenVertexArrays(1, &vao[3]);
        glGenBuffers(1, &vertexbuffer);
        glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
        glBufferData(GL_ARRAY_BUFFER, points.size()*sizeof(points), points.data(), GL_STATIC_DRAW);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, num_bytes, &points[0]);
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, points.size(), &points[0]);
        glEnableClientState(GL_INDEX_ARRAY);
        glIndexPointer(GL_FLOAT, faces.size(), faces.data());
        glEnableVertexAttribArray(0);
        glDrawElements(GL_QUADS, points.size(), GL_UNSIGNED_INT, points.data());
        glDrawElements(GL_QUADS, faces.size(), GL_UNSIGNED_INT, faces.data());

  • XNA: SpriteFont question

    - by Zukki
    Hi everyone, I need some help with the SpriteFont. I want a different font for my game, other than Kootenay. So I edit the SpriteFont XML, e.g.:

        <FontName>Kootenay</FontName>   or   <FontName>Arial</FontName>

    No problem with Windows fonts, or the other XNA-redistributable font packs. However, I want to use other fonts that I downloaded and installed already; they are TTF or OTF, both supported by XNA. My problem is, I can't use them -- I get this error:

        The font family "all the fonts I tried" could not be found. Please ensure the requested font is installed, and is a TrueType or OpenType font.

    So, checking the Windows fonts folder, I look at the properties and details of the fonts and try all the names they have, but it never works. Maybe I need some kind of importing or installing step in order to use them; I don't know, and I hope you guys can help me. Thanks!

  • Triangle Picking is Picking Back Faces

    - by Tangeleno
    I'm having a bit of trouble with 3D picking. At first I thought my ray was inaccurate, but it turns out that the picking is happening both on faces facing the camera and on faces facing away from the camera, which I'm currently culling. Here's my ray creation code; I'm pretty sure the problem isn't here, but I've been wrong before.

        private uint Pick()
        {
            Ray cursorRay = CalculateCursorRay();
            Vector3? point = Control.Mesh.RayCast(cursorRay);
            if (point != null)
            {
                Tile hitTile = Control.TileMesh.GetTileAtPoint(point);
                return hitTile == null ? uint.MaxValue : (uint)(hitTile.X + hitTile.Y * Control.Generator.TilesWide);
            }
            return uint.MaxValue;
        }

        private Ray CalculateCursorRay()
        {
            Vector3 nearPoint = Control.Camera.Unproject(new Vector3(Cursor.Position.X, Control.ClientRectangle.Height - Cursor.Position.Y, 0f));
            Vector3 farPoint = Control.Camera.Unproject(new Vector3(Cursor.Position.X, Control.ClientRectangle.Height - Cursor.Position.Y, 1f));
            Vector3 direction = farPoint - nearPoint;
            direction.Normalize();
            return new Ray(nearPoint, direction);
        }

        public Vector3 Camera.Unproject(Vector3 source)
        {
            Vector4 result;
            result.X = (source.X - _control.ClientRectangle.X) * 2 / _control.ClientRectangle.Width - 1;
            result.Y = (source.Y - _control.ClientRectangle.Y) * 2 / _control.ClientRectangle.Height - 1;
            result.Z = source.Z - 1;
            if (_farPlane - 1 == 0)
                result.Z = 0;
            else
                result.Z = result.Z / (_farPlane - 1);
            result.W = 1f;
            result = Vector4.Transform(result, Matrix4.Invert(ProjectionMatrix));
            result = Vector4.Transform(result, Matrix4.Invert(ViewMatrix));
            result = Vector4.Transform(result, Matrix4.Invert(_world));
            result = Vector4.Divide(result, result.W);
            return new Vector3(result.X, result.Y, result.Z);
        }

    And my triangle intersection code, ripped mainly from the XNA picking sample:

        public float? Intersects(Ray ray)
        {
            float? closestHit = Bounds.Intersects(ray);
            if (closestHit != null && Vertices.Length == 3)
            {
                Vector3 e1, e2;
                Vector3.Subtract(ref Vertices[1].Position, ref Vertices[0].Position, out e1);
                Vector3.Subtract(ref Vertices[2].Position, ref Vertices[0].Position, out e2);

                Vector3 directionCrossEdge2;
                Vector3.Cross(ref ray.Direction, ref e2, out directionCrossEdge2);

                float determinant;
                Vector3.Dot(ref e1, ref directionCrossEdge2, out determinant);
                if (determinant > -float.Epsilon && determinant < float.Epsilon)
                    return null;

                float inverseDeterminant = 1.0f / determinant;

                Vector3 distanceVector;
                Vector3.Subtract(ref ray.Position, ref Vertices[0].Position, out distanceVector);

                float triangleU;
                Vector3.Dot(ref distanceVector, ref directionCrossEdge2, out triangleU);
                triangleU *= inverseDeterminant;
                if (triangleU < 0 || triangleU > 1)
                    return null;

                Vector3 distanceCrossEdge1;
                Vector3.Cross(ref distanceVector, ref e1, out distanceCrossEdge1);

                float triangleV;
                Vector3.Dot(ref ray.Direction, ref distanceCrossEdge1, out triangleV);
                triangleV *= inverseDeterminant;
                if (triangleV < 0 || triangleU + triangleV > 1)
                    return null;

                float rayDistance;
                Vector3.Dot(ref e2, ref distanceCrossEdge1, out rayDistance);
                rayDistance *= inverseDeterminant;
                if (rayDistance < 0)
                    return null;

                return rayDistance;
            }
            return closestHit;
        }

    I'll admit I don't fully understand all of the math behind the intersection, and that is something I'm working on, but my understanding was that if rayDistance was less than 0, the face was facing away from the camera and shouldn't be counted as a hit. So my question is: is there an issue with my intersection or ray creation code, or is there another check I need to perform to tell whether the face is facing away from the camera? If so, any hints on what that check might contain would be appreciated.
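
    For what it's worth, a negative rayDistance only means the intersection lies behind the ray's origin, not that the face points away. In this Möller-Trumbore formulation the facing information is in the sign of the determinant; which sign means "back" depends on the winding convention, so treat the exact comparison below as an assumption to verify against yours. A sketch of the culling variant, replacing the two-sided epsilon test above:

        // Hedged sketch: one-sided determinant test. For one winding
        // convention, back faces give a negative determinant, so this
        // rejects degenerate and back-facing triangles in one check.
        if (determinant < float.Epsilon)
            return null;

    If front faces start disappearing instead, the winding is the other way round and the rejection should be determinant > -float.Epsilon.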

  • GLSL demo suggestions?

    - by brainydexter
    In a lot of places I interviewed recently, I have been asked many times if I have worked with shaders. Even though I have read about and understand the pipeline, the answer to that question has been no. Recently, one of the places asked me if I could send them a sample of 'something' that is "visually polished". So, I decided to take the plunge and wrote a simple shader in GLSL (with OpenGL). I now have a basic setup where I can use VBOs with GLSL shaders. I have a very short window left to send something to them, and I was wondering if someone with experience could suggest an idea that is interesting enough to grab someone's attention. Thanks

  • Parenting OpenGL with Groups in LibGDX

    - by Rudy_TM
    I am trying to make an object a child of a Group, but this object has a draw method that calls OpenGL to draw on the screen. Its class is this:

        public class OpenGLSquare extends Actor {
            private static final ImmediateModeRenderer renderer = new ImmediateModeRenderer10();
            private static Matrix4 matrix = null;
            private static Vector2 temp = new Vector2();

            public static void setMatrix4(Matrix4 mat) {
                matrix = mat;
            }

            @Override
            public void draw(SpriteBatch batch, float arg1) {
                // TODO Auto-generated method stub
                renderer.begin(matrix, GL10.GL_TRIANGLES);
                renderer.color(color.r, color.g, color.b, color.a);
                renderer.vertex(x0, y0, 0f);
                renderer.color(color.r, color.g, color.b, color.a);
                renderer.vertex(x0, y1, 0f);
                renderer.color(color.r, color.g, color.b, color.a);
                renderer.vertex(x1, y1, 0f);
                renderer.color(color.r, color.g, color.b, color.a);
                renderer.vertex(x1, y1, 0f);
                renderer.color(color.r, color.g, color.b, color.a);
                renderer.vertex(x1, y0, 0f);
                renderer.color(color.r, color.g, color.b, color.a);
                renderer.vertex(x0, y0, 0f);
                renderer.end();
            }
        }

    In my screen class I have this (called in the constructor):

        MyGroupClass spriteLab = new MyGroupClass(spriteSheetLab);
        OpenGLSquare square = new OpenGLSquare();
        square.setX0(100);
        square.setY0(200);
        square.setX1(400);
        square.setY1(280);
        square.color.set(Color.BLUE);
        square.setSize();
        //spriteLab.addActorAt(0, clock);
        spriteLab.addActor(square);
        stage.addActor(spriteLab);

    And the render method in the screen:

        @Override
        public void render(float arg0) {
            this.gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
            stage.draw();
            stage.act(Gdx.graphics.getDeltaTime());
        }

    The problem is that when I use OpenGL with a parent, it resets all the other children to position 0,0, and the OpenGL renderer paints the square at an exact position on the screen rather than relative to the parent. I tried using batch.enableBlending() and batch.disableBlending(); that fixes the position problem of the other children, but not the relative position of the OpenGL drawing, and it also applies alpha to the GL drawing. What am I doing wrong? :/

  • Publishing a game -- any way to target both WP7 and Win8 Store?

    - by Rei Miyasaka
    I'm at a dilemma which seems like it should soon become an important issue for a lot of developers.

    If I build a game in XNA, I won't be able to publish it on the Windows 8 Store, as it would be a classic application -- and classic applications can't be sold on the Store. If I build a game in Metro DirectX, I would be able to sell it on the Store, but porting it to Windows Phone would involve porting it to Reach XNA, which would likely involve more effort than porting to OS X or Android -- both of which support C++. Of all the WinRT APIs supported across C++/JS/.NET, DirectX can only be programmed from C++. It's also unlikely that Microsoft will update Windows 7 or Vista to support the new DirectX features, although that would make Metro DirectX the first new version of DirectX to stop supporting the immediately preceding OS. If I build a game in pre-Win8 DirectX 9/10/11, I won't be able to sell it on the Windows Store or Windows Phone, but I could sell it on something like Steam. It would also involve the most manual plumbing. In fact, DirectWrite, despite being part of DirectX 11, doesn't talk to Direct3D.

    I'm getting really tired of all these restrictions -- artificial and otherwise -- and I'm coming to a point where I'm considering switching to a platform with a less fragmented API, like Android or Mac/iOS. As far as bringing a game to market goes, excluding the actual market share of any platforms I might consider, what other factors would help me make a decision? Just a few years ago this question was a lot easier to answer: if you were primarily concerned with Windows platforms, all you had to answer was whether you wanted DirectX, XNA, or something like SlimDX. If you made the wrong decision, no biggie -- all you really would have lost is XBox and the fairly small Windows Phone market.

  • What's a good setup/toolchain for a project?

    - by acidzombie24
    I was thinking: what is needed for a good setup, and what are good (free) tools to use? Some of what I came up with:

    - Bug tracking
    - Some good (distributed :P) source control (which means no SVN, fellas)
    - Automated nightly builds or continuous integration (or anything that automates builds and possibly sends emails when there are build errors)
    - A wiki to document decisions, roadmap, or milestones
    - Something to back up assets (art, sound, etc.)

    What else? And do you have suggestions for any of the above? I'm pretty much clueless about all of these except for source control.
