Search Results

Search found 16410 results on 657 pages for 'game component'.

Page 341/657

  • Make Interactive Story more Variable [on hold]

    - by Guest0343
    I'm creating an interactive story that lets users make choices that steer the story. However, it doesn't give users room to do much creatively on their own; they are bound by the script at the moment. I'm wondering if anyone can suggest an element I could add that might give users some personalization. I was thinking about maybe character editing, but that doesn't add too much. I also thought about a stats system where they can have certain attributes and stats they might earn, but I'm not sure how they would use those stats. Anything is helpful!

    Read the article

  • Creating my own kill cam

    - by DalexL
    I plan on creating my own kill cam system for a sandbox tool set. After thinking about the mechanics of the kill cam itself, however, I'm quite lost. I'm trying to recreate the ones commonly seen in Call of Duty games that show, from the view of the killer, the actual killing scene.

    My thoughts:

    - I can't just keep things in memory when people kill others, because I wouldn't know when to start the 'recording' process. There is no way for me to accurately determine when somebody is 'about' to kill someone.
    - My only real idea so far is to have a complete duplicate of everything loaded off to the side, copying all the movement from the original world but with a 10-second delay. That way, all the kill cams would be 10 seconds long and the person's camera would just be moved to the second world of their killer.

    My questions: Is there already an accepted way to do this? Does anybody have any good ideas for something like this? Thanks if you can!
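    A common alternative to a delayed duplicate world is a rolling replay buffer: record a lightweight snapshot of every entity each frame, discard anything older than N seconds, and on a kill play the buffer back from the killer's camera. A minimal sketch in C# (my own illustration; Snapshot and its entity-state layout are hypothetical placeholders):

        using System.Collections.Generic;
        using System.Linq;

        // A rolling replay buffer: record a snapshot per frame, keep only the
        // last N seconds, and play the buffer back when a kill happens.
        public struct Snapshot
        {
            public float Time; // game time in seconds
            public Dictionary<int, (float X, float Y, float Angle)> EntityStates;
        }

        public class ReplayBuffer
        {
            private readonly Queue<Snapshot> _frames = new Queue<Snapshot>();
            private readonly float _window; // seconds of history to keep

            public ReplayBuffer(float windowSeconds) { _window = windowSeconds; }

            public void Record(Snapshot frame)
            {
                _frames.Enqueue(frame);
                // Drop anything older than the window; no need to predict kills.
                while (_frames.Count > 0 && frame.Time - _frames.Peek().Time > _window)
                    _frames.Dequeue();
            }

            // On a kill: copy the buffered frames out and replay them with the
            // camera attached to the killer's recorded state.
            public List<Snapshot> ExtractKillCam() { return _frames.ToList(); }
        }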

    Read the article

  • Android: how to get OpenGL 3D coordinates in an onTouch event

    - by Sandy
    I created a cube in OpenGL and it rotates in an onTouch event. To do this I created a CustomSurfaceView as follows:

        public class CustomSurfaceView extends GLSurfaceView {
            public CustomSurfaceView(Context context) {
                super(context);
            }

            @Override
            public boolean onTouchEvent(MotionEvent e) {
                float x = e.getX(); // screen coordinates
                float y = e.getY();
                return true;
            }
        }

    Here x and y are screen coordinates. How can I get 3D coordinates from this? I have already looked at gluProject and NeHe, but I don't know how to implement this in my project; it appears that there are no GLdouble or GLfloat classes.

    Read the article

  • What functionality should I use in OpenGL 2.0?

    - by Jeffrey
    Considering OpenGL 2.1, we all know that glBegin and glEnd are the devil. Should I use only VBOs to render 3D primitives (I can't find VAOs in that version; weren't they there already?)? Should I still use the matrix stack (why not?)? Should I still use glFrustum? Can I take advantage of shaders in GLSL 1.20? Where can I find a tutorial for VBOs in OpenGL 2.1 and the "correct" way of programming with them? Also, how am I supposed to animate something, like a cube moving around an object or a player moving in the scene (static VBO data + shader?)? Note: Take your time to answer this question; I'll accept an answer tomorrow.
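    For what it's worth, VAOs only became core in OpenGL 3.0 (earlier via the APPLE_vertex_array_object extension), so in 2.1 you rebind the VBO and reset the pointers each frame. A minimal sketch of 2.1-era VBO drawing, written here with the OpenTK C# binding as an assumption (the GL calls map one-to-one to the C API):

        using System;
        using OpenTK.Graphics.OpenGL;

        // One-time setup: upload a triangle into a VBO.
        float[] verts = { 0f, 0.5f, 0f,   -0.5f, -0.5f, 0f,   0.5f, -0.5f, 0f };
        int vbo = GL.GenBuffer();
        GL.BindBuffer(BufferTarget.ArrayBuffer, vbo);
        GL.BufferData(BufferTarget.ArrayBuffer,
                      (IntPtr)(verts.Length * sizeof(float)),
                      verts, BufferUsageHint.StaticDraw);

        // Every frame: bind the buffer, describe the layout, draw.
        GL.BindBuffer(BufferTarget.ArrayBuffer, vbo);
        GL.EnableClientState(ArrayCap.VertexArray);
        GL.VertexPointer(3, VertexPointerType.Float, 0, IntPtr.Zero);
        GL.DrawArrays(PrimitiveType.Triangles, 0, 3);
        GL.DisableClientState(ArrayCap.VertexArray);

        // For animation, keep the VBO static and move things with the
        // modelview matrix (or a uniform in your GLSL 1.20 shader).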

    Read the article

  • cocos2d-x simple shader usage [on hold]

    - by Narek
    I want to obtain the color ramp effect from this tutorial: http://www.raywenderlich.com/10862/how-to-create-cool-effects-with-custom-shaders-in-opengl-es-2-0-and-cocos2d-2-x

    Here is my code in cocos2d-x 3:

        bool HelloWorld::init()
        {
            //////////////////////////////
            // 1. super init first
            if (!Layer::init()) {
                return false;
            }

            Vec2 origin = Director::getInstance()->getVisibleOrigin();

            sprite = Sprite::create("HelloWorld.png");
            sprite->setAnchorPoint(Vec2(0, 0));
            sprite->setRotation(3);
            sprite->setPosition(origin);
            addChild(sprite);

            std::string str = FileUtils::getInstance()->getStringFromFile("CSEColorRamp.fsh");
            const GLchar* fragmentSource = str.c_str();
            GLProgram* p = GLProgram::createWithByteArrays(ccPositionTextureA8Color_vert, fragmentSource);
            p->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_POSITION, GLProgram::VERTEX_ATTRIB_POSITION);
            p->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_TEX_COORD, GLProgram::VERTEX_ATTRIB_TEX_COORD);
            p->link();
            p->updateUniforms();
            sprite->setGLProgram(p);

            // 3
            colorRampUniformLocation = glGetUniformLocation(sprite->getGLProgram()->getProgram(), "u_colorRampTexture");
            glUniform1i(colorRampUniformLocation, 1);

            // 4
            colorRampTexture = Director::getInstance()->getTextureCache()->addImage("colorRamp.png");
            colorRampTexture->setAliasTexParameters();

            // 5
            sprite->getGLProgram()->use();
            glActiveTexture(GL_TEXTURE1);
            glBindTexture(GL_TEXTURE_2D, colorRampTexture->getName());
            glActiveTexture(GL_TEXTURE0);

            return true;
        }

    And here is the fragment shader as it is in the tutorial:

        #ifdef GL_ES
        precision mediump float;
        #endif

        // 1
        varying vec2 v_texCoord;
        uniform sampler2D u_texture;
        uniform sampler2D u_colorRampTexture;

        void main()
        {
            // 2
            vec3 normalColor = texture2D(u_texture, v_texCoord).rgb;

            // 3
            float rampedR = texture2D(u_colorRampTexture, vec2(normalColor.r, 0)).r;
            float rampedG = texture2D(u_colorRampTexture, vec2(normalColor.g, 0)).g;
            float rampedB = texture2D(u_colorRampTexture, vec2(normalColor.b, 0)).b;

            // 4
            gl_FragColor = vec4(rampedR, rampedG, rampedB, 1);
        }

    As a result I get a black screen with 2 draw calls. What is wrong? Am I missing something?

    Read the article

  • "Walking" along a rotating surface in LimeJS

    - by Dave Lancea
    I'm trying to have a character walk along a plank (a long, thin rectangle) that works like a seesaw, being rotated around a central point by Box2D physics (falling objects). I want the left and right arrow keys to move the player up and down the plank, regardless of its slope, and I don't want to use real physics for the player movement. My idea for achieving this was to compute the coordinate based on the rotation of the plank and the current location "up" or "down" the board. My math is derived from here: http://math.stackexchange.com/questions/143932/calculate-point-given-x-y-angle-and-distance

    Here's the code I have so far:

        movement = 0;
        if (keys[37]) { // Left
            movement = -3;
        }
        if (keys[39]) { // Right
            movement = 3;
        }

        // this.plank is a LimeJS sprite.
        // getRotation() should return an angle in degrees.
        var rotation = this.plank.getRotation();

        // this.current_plank_location is initialized as 0
        this.current_plank_location += movement;

        var x_difference = this.current_plank_location * Math.cos(rotation);
        var y_difference = this.current_plank_location * Math.sin(rotation);

        this.setPosition(seesaw.PLANK_CENTER_X + x_difference,
                         seesaw.PLANK_CENTER_Y + y_difference);

    This code causes the player to swing around in a circle when they are out of the center of the plank, given a slight change in rotation of the plank. Any ideas on how I can get the player position to follow the board position?

    Read the article

  • Interesting 3d zooming technique

    - by stark
    Is it possible to zoom to a certain point on screen by modifying the field of view and rotating the camera, so as to keep that point/object in the same place on screen while zooming? Changing the camera position is not allowed. I projected the 3D position of the object onto the screen and remembered it. Then on each frame I calculate the direction to it in camera space, and construct a rotation matrix to align this direction to the Z axis (in camera space). After this, I calculate the direction from the camera to the object in world space, transform this vector with the matrix I obtained earlier, and then use this final vector as the camera's new direction. And it's actually "kinda working"; the problem is that the result is more or less off from the camera's rotation before the zoom started, depending on the area you are trying to zoom in on (larger error at edges/corners). It looks acceptable, but I'm not settling for only this. Any suggestions/resources for doing this technique perfectly? If some of you want to explain the math in detail, be my guest; I can understand these things well. Thanks.

    Edit: I'll check often for responses, I'm really curious about this :D

    Read the article

  • Debugging Minimum Translation Vector

    - by SyntheCypher
    I implemented the minimum translation vector from codezealot's tutorial on SAT (Separating Axis Theorem), but I'm having an issue I can't quite figure out. Here's the example I have: As you can see in the top and bottom left images, regardless of which side of the green car the red car is penetrating, the MTV for the red car still comes out as a negative number. In the same example, when the front of the red car is facing the opposite direction, the number will always be positive. When the red car is past halfway through the green car, it should switch polarity. I thought I'd compensated for this in my code, but apparently not; either that, or it's a bug I can't find. Here is my function for finding and returning the MTV; any help would be much appreciated: Code
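    A common way to get a consistent MTV sign is to orient the minimum-overlap axis using the vector between the two shapes' centers, so the direction never flips with penetration depth. A minimal sketch with XNA's Vector2 (my own illustration; mtvAxis, overlap and the center variables are assumed to come from your SAT loop):

        // Orient the axis so it always points from shape A toward shape B;
        // the sign then stays stable however deep A penetrates.
        Vector2 centerToCenter = centerB - centerA;
        if (Vector2.Dot(mtvAxis, centerToCenter) < 0)
            mtvAxis = -mtvAxis;

        Vector2 mtv = mtvAxis * overlap; // move A by -mtv, or B by +mtv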

    Read the article

  • How does a collision engine work?

    - by JXPheonix
    Original question: Click me. How exactly does a collision engine work? This is an extremely broad question. What code keeps things bouncing against each other? What code makes the player walk into a wall instead of through the wall? How does the code constantly refresh the player's and the objects' positions to keep gravity and collision working as they should? If you don't know what a collision engine is: basically, it's generally used in platform games to make the player actually hit walls and the like. There's the 2D type and the 3D type, but they all accomplish the same thing: collision. So, what keeps a collision engine ticking?
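    In outline, most 2D collision engines run the same loop every frame: integrate movement, test shape pairs for overlap, then push the shapes apart (and cancel velocity) before drawing. A minimal sketch in C# of the axis-aligned bounding-box test behind many platformers (my own illustration):

        // Axis-aligned bounding box: two boxes collide only if they
        // overlap on BOTH the x and y axes.
        struct AABB
        {
            public float X, Y, W, H;

            public bool Intersects(AABB o) =>
                X < o.X + o.W && o.X < X + W &&
                Y < o.Y + o.H && o.Y < Y + H;
        }

        // Typical per-frame order for a platformer:
        //   1. apply gravity:  velocityY += g * dt;
        //   2. integrate:      player.Y  += velocityY * dt;
        //   3. resolve: if player.Bounds.Intersects(wall), push the player
        //      back out along the shallower axis and zero that velocity
        //      component -- that is what makes you stand on floors and
        //      stop at walls instead of passing through them.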

    Read the article

  • backface culling error

    - by acrilige
    I'm writing a simple software renderer. In my pipeline I have a backface-culling stage, but it looks like it has some error (see picture). I perform culling right after the world transformation. (I can't insert the picture in the post because I don't have enough points, so I just uploaded it (cube model): http://imageshack.us/photo/my-images/705/bcerror.png/)

        Vector3F view_dir(0.0f, 0.0f, 1.0f);
        std::vector<Triangle> to_remove;

        for (Triangle &t : m_triangles) {
            Vector4F e1 = t.v2 - t.v1;
            Vector4F e2 = t.v3 - t.v1;
            Vector3F normal(
                e1.y * e2.z - e1.z * e2.y,
                e1.z * e2.x - e1.x * e2.z,
                e1.x * e2.y - e1.y * e2.x
            );
            normal.Normalize();
            float dot = Dot(view_dir, normal);
            if (dot <= 0)
                to_remove.push_back(t);
        }

        for (Triangle &t : to_remove)
            m_triangles.erase(std::remove(m_triangles.begin(), m_triangles.end(), t), m_triangles.end());

    The camera sits at the origin and points into the screen (right-handed). What is the reason?

    Read the article

  • glsl demo suggestions ?

    - by brainydexter
    In a lot of places I interviewed recently, I have been asked many times whether I have worked with shaders. Even though I have read about and understand the pipeline, the answer to that question has been no. Recently, one of the places asked me if I could send them a sample of 'something' that is "visually polished". So I decided to take the plunge and wrote a simple shader in GLSL (with OpenGL). I now have a basic setup where I can use VBOs with GLSL shaders. I have a very short window left to send something to them, and I was wondering if someone with experience could suggest an idea that is interesting enough to grab someone's attention. Thanks

    Read the article

  • Frame Independent Movement

    - by ShrimpCrackers
    I've read two other threads here on movement: Time based movement Vs Frame rate based movement?, and Fixed time step vs Variable time step, but I think I'm lacking a basic understanding of frame-independent movement, because I don't understand what either of those threads is talking about. I'm following along with lazyfoo's SDL tutorials and came upon the frame-independent lesson: http://lazyfoo.net/SDL_tutorials/lesson32/index.php

    I'm not sure what the movement part of the code is trying to say, but I think it's this (please correct me if I'm wrong): in order to have frame-independent movement, we need to find out how far an object (e.g. a sprite) moves within a certain time frame, for example 1 second. If the dot moves at 200 pixels per second, then I need to calculate how much it moves within that second by multiplying 200 pps by 1/1000 of a second. Is that right? The lesson says: "velocity in pixels per second * time since last frame in seconds. So if the program runs at 200 frames per second: 200 pps * 1/200 seconds = 1 pixel." But... I thought we were multiplying 200 pps by 1/1000th of a second. What is this business with frames per second? I'd appreciate it if someone could give me a little more detailed explanation of how frame-independent movement works. Thank you.
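    The multiplier isn't a fixed 1/1000; it's however many seconds actually elapsed since the previous frame (it would only be 1/1000 if a frame took one millisecond). A minimal sketch of the pattern in C# (my own illustration, not the tutorial's code):

        using System.Diagnostics;

        var timer = Stopwatch.StartNew();
        bool gameRunning = true;      // placeholder loop condition
        float x = 0f;
        const float VelocityX = 200f; // pixels per second

        while (gameRunning)
        {
            // Seconds since the last frame: ~1/200 at 200 fps, ~1/60 at 60 fps.
            float dt = (float)timer.Elapsed.TotalSeconds;
            timer.Restart();

            // 200 px/s * (1/200 s) = 1 px per frame at 200 fps;
            // 200 px/s * (1/60 s) = ~3.33 px per frame at 60 fps.
            // Either way the dot covers 200 px every second.
            x += VelocityX * dt;
        }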

    Read the article

  • Render angles of a 3D model into 2D images?

    - by Ricket
    Is there a tool out there that you can give a 3D model file, and it will output 2D renders of it from various angles? For example, if you were making a 2D RPG but wanted your characters to look nice, you might make a character in 3D and then render it from 8 or more angles into images, which the 2D engine then uses to give a pseudo-3D look. Does such a tool exist, or will it need to be custom-written or done manually?

    Read the article

  • Event Driven Behavior Tree: deterministic traversal order with parallel

    - by Heisenbug
    I've studied several articles and listened to some talks about behavior trees (mostly the resources available on AIGameDev by Alex J. Champandard). I'm particularly interested in event-driven behavior trees, but I still have some doubts on how to implement them correctly using a scheduler. Just a quick recap:

    Standard behavior tree:
    - Each execution tick, the tree is traversed from the root in depth-first order.
    - The execution order is implicitly expressed by the tree structure. So in the case of behaviors parented to a parallel node, even if both children are executed during the same traversal, the first leaf is always evaluated first.

    Event-driven BT:
    - During the first traversal, the nodes (tasks) are enqueued using a scheduler, which is responsible for updating only the running ones every update.
    - The first traversal implicitly produces a depth-first-ordered queue in the scheduler.
    - Non-leaf nodes stay suspended most of the time. When a leaf node terminates (either with success or fail status), the parent (observer) is woken up, allowing the tree traversal to continue, and new tasks are enqueued in the scheduler.
    - Without parallel nodes in the tree, there will be up to 1 task running in the scheduler.
    - Without parallel nodes, the tasks in the queue (excluding dynamic-priority implementations) will always be ordered in depth-first order (is this right?).

    Now, some requirements I think need to be guaranteed by a correct implementation (I'm not sure, though):
    - The result of the traversal should be independent of which implementation strategy is used.
    - The traversal result must be deterministic.

    I'm struggling to guarantee both in the case of parallel nodes. Here's an example:

        Parallel_1
        --> Sequence_1
        ----> leaf_A
        ----> leaf_B
        --> leaf_C

    Considering a FIFO policy in the scheduler, before the leaf_A node terminates, the tasks in the scheduler are:

        P1(suspended), S1(suspended), leaf_A(running), leaf_C(running)

    When leaf_A terminates, leaf_B will be scheduled (at the end of the queue), so the queue becomes:

        P1(suspended), S1(suspended), leaf_C(running), leaf_B(running)

    In this case leaf_B will be executed after leaf_C at every update, whereas with a non-event-driven traversal from the root node, leaf_B would always be evaluated before leaf_C. So I have a couple of questions:
    - Do I understand correctly how event-driven BTs work?
    - How can I guarantee the depth-first order is respected with such an implementation?
    - Is this a common issue, or am I missing something?
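    For concreteness, a minimal sketch (my own illustration with hypothetical Task/Scheduler classes, not code from the AIGameDev material) of the FIFO scheduler described above; it reproduces exactly the re-enqueue behavior that breaks depth-first order:

        using System.Collections.Generic;

        enum Status { Running, Success, Failure }

        abstract class Task
        {
            public Task Observer;              // parent woken on completion
            public abstract Status Update();
        }

        class Scheduler
        {
            private Queue<Task> _queue = new Queue<Task>();

            public void Enqueue(Task t) => _queue.Enqueue(t);

            public void Step()
            {
                // Run every task scheduled for this tick, in queue order.
                var current = _queue;
                _queue = new Queue<Task>();
                while (current.Count > 0)
                {
                    Task task = current.Dequeue();
                    if (task.Update() == Status.Running)
                        _queue.Enqueue(task);    // still running: keep it
                    else
                        task.Observer?.Update(); // wake the parent; if it
                                                 // enqueues the next child,
                                                 // it lands at the BACK
                }
            }
        }

        // With Parallel_1(Sequence_1(leaf_A, leaf_B), leaf_C): once leaf_A
        // succeeds, Sequence_1 enqueues leaf_B behind leaf_C, so leaf_C now
        // runs first every tick. One common remedy is to order the queue by
        // each node's depth-first index (a priority queue) instead of FIFO.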

    Read the article

  • Creating a 2D Line Branch

    - by Danran
    I'm looking into creating a 2D line branch, something for a "lightning effect". I did ask a question before on creating a "lightning effect" (mainly referring to the process of the glow and after-effects the lightning has, and whether it was a good method to use or not): Methods of Creating a "Lightning" effect in 2D. However, I never did get around to getting it working. So I've been trying today to make a second attempt, but I'm getting nowhere. To be clear on what I'm trying to do: in this article, http://drilian.com/2009/02/25/lightning-bolts/ , I'm trying to create the line segments seen in the images on the site. I'm confused mainly by this line in the pseudocode:

        // Offset the midpoint by a random amount along the normal.
        midPoint += Perpendicular(Normalize(endPoint-startPoint))*RandomFloat(-offsetAmount,offsetAmount);

    If someone could explain this to me, I would be really grateful :).
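    Reading that line from the inside out: Normalize(endPoint - startPoint) is the segment's unit direction, Perpendicular rotates it 90 degrees to get the segment's normal, and RandomFloat picks how far to shove the midpoint sideways along that normal. A minimal sketch of the whole subdivision in C# with System.Numerics (my own illustration, not the article's code):

        using System;
        using System.Collections.Generic;
        using System.Numerics;

        static class LightningGen
        {
            static readonly Random Rng = new Random();

            // Recursively split [start, end], shoving each midpoint along the
            // segment's normal. Halve offsetAmount each generation so the
            // jitter gets finer as the segments get shorter.
            public static void Subdivide(Vector2 start, Vector2 end,
                                         float offsetAmount, int generations,
                                         List<(Vector2, Vector2)> segments)
            {
                if (generations == 0)
                {
                    segments.Add((start, end));
                    return;
                }

                Vector2 dir = Vector2.Normalize(end - start);
                Vector2 normal = new Vector2(-dir.Y, dir.X); // Perpendicular()
                float offset = (float)(Rng.NextDouble() * 2.0 - 1.0) * offsetAmount;

                Vector2 mid = (start + end) * 0.5f + normal * offset;

                Subdivide(start, mid, offsetAmount * 0.5f, generations - 1, segments);
                Subdivide(mid, end, offsetAmount * 0.5f, generations - 1, segments);
            }
        }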

    Read the article

  • How to get a point to the left/right of a vector

    - by MulletDevil
    I have a position vector for a point in space and a quaternion for its rotation. What I'm trying to calculate is a point to the left and a point to the right. I have the position and rotation (quaternion) of the red dot; what I want is the position of the green dots. I have a float value for the distance I want these points to be. With only the position and rotation, is it possible to get a unit direction vector pointing left/right which I can multiply by my float value?

    Edit: I also know the original direction vector.
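    Yes: rotate a unit left axis by the quaternion and scale it by the distance. A minimal sketch in XNA-style C# (my own illustration; position, rotation and distance are the values described in the post, and Vector3.Left is assumed to be your local left axis):

        // Rotate the local left axis (-1, 0, 0) into world space.
        Vector3 left = Vector3.Transform(Vector3.Left, rotation); // unit length

        Vector3 greenLeft  = position + left * distance;
        Vector3 greenRight = position - left * distance;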

    Read the article

  • What's a good setup/toolchain for a project?

    - by acidzombie24
    I was thinking: what is needed for a good setup, and what are good (free) tools to use? Some of what I came up with:

    - Bug tracking
    - Some good (distributed :P) source control (which means no SVN, fellas)
    - Automated nightly builds or continuous integration (or anything that automates builds and possibly sends emails when there are build errors)
    - A wiki to document decisions, roadmap, or milestones
    - Something to back up assets (art, sound, etc.)

    What else? And do you have suggestions for any of the above? I'm pretty much clueless about all of these except for source control.

    Read the article

  • Subdividing a polygon into boxes of varying size

    - by Michael Trouw
    I would like to be pointed to information/resources for creating algorithms like the one illustrated on this blog, which is a subdivision of a polygon (in my case, a Voronoi cell) into several boxes of varying size: http://procworld.blogspot.nl/2011/07/city-lots.html

    In the comments, a paper by (among others) the author of the blog can be found; however, the only formula listed is about candidate location suitability: http://www.groenewegen.de/delft/thesis-final/ProceduralCityLayoutGeneration-Preprint.pdf

    Any language will do, but if examples can be given, JavaScript is preferred (as it is the language I am currently working with). A similar question is this one: What is an efficient packing algorithm for packing rectangles into a polygon?
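    As a starting point, a heavily simplified sketch (my own illustration, in C# rather than the requested JavaScript, though the structure translates directly): recursively split an axis-aligned box at a random cut until the pieces fall under a target area. The real city-lot algorithms, like the paper's, work on arbitrary polygons and preserve street access, so treat this only as the recursive-split skeleton:

        using System;
        using System.Collections.Generic;

        record Box(float X, float Y, float W, float H);

        static class LotSubdivider
        {
            static readonly Random Rng = new Random();

            public static void Split(Box b, float maxArea, List<Box> result)
            {
                if (b.W * b.H <= maxArea)
                {
                    result.Add(b);
                    return;
                }

                // Cut across the longer side, at 30-70% to vary lot sizes.
                float t = 0.3f + (float)Rng.NextDouble() * 0.4f;
                if (b.W >= b.H)
                {
                    float w = b.W * t;
                    Split(new Box(b.X, b.Y, w, b.H), maxArea, result);
                    Split(new Box(b.X + w, b.Y, b.W - w, b.H), maxArea, result);
                }
                else
                {
                    float h = b.H * t;
                    Split(new Box(b.X, b.Y, b.W, h), maxArea, result);
                    Split(new Box(b.X, b.Y + h, b.W, b.H - h), maxArea, result);
                }
            }
        }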

    Read the article

  • 3D Modeling Software for Programmer [closed]

    - by Pathachiever11
    I've recently learned how to make games with Unity3d, and now I want to start making games! I can't wait to start! However, before I can make 3D games, I need to learn 3D modeling for character design, level design, and some animation. What is the easiest 3D modeling software that is compatible with Unity3d? I do not want to spend too much time learning the software. From what I've heard, Blender is a bit complicated to use; Maya and 3dsMax seem very powerful. Could someone point me in the right direction? I don't want to spend a lot of time learning. I know it's not that easy, but you have experience; you probably know which one is easiest and still powerful. Could you recommend a software? Many thanks!

    Read the article

  • OpenGL error LNK2019

    - by Ghilliedrone
    I'm trying to compile a basic OpenGL program. I linked opengl32.lib and glu32.lib but I'm getting errors. The errors I get are:

        error LNK1120: 7 unresolved externals
        error LNK2019: unresolved external symbol _main referenced in function ___tmainCRTStartup
        error LNK2019: unresolved external symbol "public: float __thiscall GLWindow::getElapsedSeconds(void)" (?getElapsedSeconds@GLWindow@@QAEMXZ) referenced in function _WinMain@16
        error LNK2019: unresolved external symbol "public: bool __thiscall GLWindow::isRunning(void)" (?isRunning@GLWindow@@QAE_NXZ) referenced in function _WinMain@16
        error LNK2019: unresolved external symbol "public: void __thiscall GLWindow::attachExample(class Example *)" (?attachExample@GLWindow@@QAEXPAVExample@@@Z) referenced in function _WinMain@16
        error LNK2019: unresolved external symbol "public: void __thiscall GLWindow::destroy(void)" (?destroy@GLWindow@@QAEXXZ) referenced in function _WinMain@16
        error LNK2019: unresolved external symbol "public: __thiscall GLWindow::GLWindow(struct HINSTANCE__ *)" (??0GLWindow@@QAE@PAUHINSTANCE__@@@Z) referenced in function _WinMain@16
        error LNK2019: unresolved external symbol "private: void __thiscall GLWindow::setupPixelFormat(void)" (?setupPixelFormat@GLWindow@@AAEXXZ) referenced in function "public: long __stdcall GLWindow::WndProc(struct HWND__ *,unsigned int,unsigned int,long)" (?WndProc@GLWindow@@QAGJPAUHWND__@@IIJ@Z)

    Read the article

  • Outline Shader Effect for Orthogonal Geometry in XNA

    - by Griffin
    I just recently started learning the art of shading, but I can't give an outline width to 2D, concave geometry when restrained to a single vertex/pixel shader technique (thanks to XNA). The shape I need to give an outline to has smooth, per-vertex coloring, as well as opacity. The outline, which has smooth, per-vertex coloring, variable width, and opacity, cannot interfere with the original shape's colors. A pixel-depth border detection algorithm won't work because pixel depth isn't a shader model 3.0 semantic. Expanding geometry / redrawing won't work because it interferes with the original shape's colors. I'm wondering if I can do something with the stencil/depth buffer outside of the shader functions, since I have access to that through the graphics device, but I don't believe I'm able to manipulate actual values. How might I do this?

    Read the article

  • Move the location of the XYZ pivot point on a mesh in UDK

    - by WebDevHobo
    When working with any mesh, you get an XYZ pivot point somewhere on it. If you just want to move the mesh in any direction, it doesn't matter where this point is located. However, I want to rotate a door, and this requires the point of rotation to be very specific. I can't find anywhere how to change the location of the point. Can anyone help?

    EDIT: solved. To change the pivot point, right-click on the mesh, go to "Pivot" and move it. Then right-click again and this time select "Save PrePivot to Pivot".

    Read the article

  • Trouble with SAT style vector projection in C#/XNA

    - by ssb
    Simply put, I'm having a hard time working out how to use XNA's Vector2 types while maintaining spatial considerations. I'm working with the separating axis theorem and trying to project vectors onto an arbitrary axis to check if those projections overlap, but the severe lack of XNA-specific help online, combined with pseudocode everywhere that omits key parts of the algorithm, has made googling little help. I'm aware of HOW to project a vector, but the way that I know of doing it involves the two vectors starting from the same point. Particularly here: http://www.metanetsoftware.com/technique/tutorialA.html

    So let's say I have a simple rectangle, and I store each of its corners in a list of Vector2s. How would I go about projecting that onto an arbitrary axis? The crux of my problem is that taking the dot product of, say, a Vector2 of (1, 0) and a Vector2 of (50, 50) won't get me the dot product I'm looking for... or will it? Because that (50, 50) won't be the vector of the polygon's vertex but whatever XNA calculates. It's getting the calculation from the right starting point that's throwing me off. I'm sorry if this is unclear, but my brain is fried from trying to think about this. I need a better understanding of how XNA treats Vector2s as actual vectors and not just as random points.
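    For projection onto an axis through the origin, the dot product already is the projection: Dot(vertex, axis) gives the vertex's scalar position along the axis, treating the vertex as a position vector from the origin, and the choice of origin cancels out when you compare the two shapes' [min, max] intervals on the same axis. A minimal sketch with XNA's Vector2 (my own illustration):

        // Project every vertex of a polygon onto a (unit-length) axis and
        // keep the min/max scalar positions -- the polygon's "shadow".
        static void Project(Vector2[] verts, Vector2 axis,
                            out float min, out float max)
        {
            min = max = Vector2.Dot(verts[0], axis);
            for (int i = 1; i < verts.Length; i++)
            {
                float p = Vector2.Dot(verts[i], axis);
                if (p < min) min = p;
                else if (p > max) max = p;
            }
        }

        // Two convex shapes are separated on this axis exactly when the
        // intervals don't overlap: maxA < minB || maxB < minA.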

    Read the article

  • Build a view frustum from angles

    - by MulletDevil
    I have 4 angles: left, right, top and bottom. These angles are in degrees, and they define the angle between the forward vector and the corresponding side. I am trying to use these to calculate the required values for the PerspectiveOffCenter function found here: http://docs.unity3d.com/Documentation/ScriptReference/Camera-projectionMatrix.html

    I tried doing (near plane - far plane) * Tan(angle), but that didn't give the correct results.
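    A sketch of what I believe the intended math is: each extent is measured where the frustum crosses the near plane, so it is near * tan(angle) per side (left/bottom negated), not (near - far) * tan(angle). Assuming the four angles are measured from the forward vector as described, in C#:

        // Half-extents of the view rectangle at the near plane.
        float Rad(float deg) => deg * (float)Math.PI / 180f;

        float left   = -near * (float)Math.Tan(Rad(leftAngle));
        float right  =  near * (float)Math.Tan(Rad(rightAngle));
        float bottom = -near * (float)Math.Tan(Rad(bottomAngle));
        float top    =  near * (float)Math.Tan(Rad(topAngle));

        // Feed these into the PerspectiveOffCenter(left, right, bottom, top,
        // near, far) helper from the Unity documentation linked above.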

    Read the article

  • How does one specify raster operations in XNA?

    - by Corey Ogburn
    I'm looking for a way to draw a sprite using a particular logic operation (like XOR). I can't find anything on Google, and I'm not sure where to look in the documentation. I've looked into SpriteBatch.Begin(...) and its Draw method, and several options in the GraphicsDevice class, but I'm not recognizing anything capable of this. I'm still pretty new to XNA, so I may just not have recognized the terminology for this.
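    As far as I know, XNA 4's fixed pipeline exposes blending rather than bitwise logic ops (GDI-style raster ops aren't part of the API; output-merger logic ops only arrived in Direct3D 11.1), but some classic effects can be approximated with a custom BlendState. A hedged sketch that inverts the destination wherever the sprite is drawn:

        // Approximate an "invert" raster op: result = srcColor * (1 - dest).
        // With a white source this computes 1 - dest, i.e. a color inversion.
        var invertBlend = new BlendState
        {
            ColorSourceBlend = Blend.InverseDestinationColor,
            ColorDestinationBlend = Blend.Zero,
            ColorBlendFunction = BlendFunction.Add,
        };

        spriteBatch.Begin(SpriteSortMode.Deferred, invertBlend);
        spriteBatch.Draw(texture, position, Color.White);
        spriteBatch.End();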

    Read the article
