Search Results

Search found 16410 results on 657 pages for 'game component'.


  • GLSL: How Do I cast a float into an int?

    - by dugla
    In a GLSL fragment shader I am trying to cast a float into an int. The compiler has other ideas. It complains thusly:

        ERROR: 0:60: '=' : cannot convert from 'mediump float' to 'highp int'

    I am trying to do this:

        mediump float indexf = floor(2.0 * mixer);
        highp int index = indexf;

    I (vainly) tried to raise the precision of the int above the float to appease the GL gods, but no joy. Could someone please school me here? Thanks, Doug
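
    GLSL does not convert floats to ints implicitly; the assignment itself is what the compiler rejects, regardless of precision qualifiers. A minimal fix is an explicit constructor-style conversion:

        mediump float indexf = floor(2.0 * mixer);
        highp int index = int(indexf); // explicit cast; the precision qualifiers were never the problem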

  • unity4.3 rigidbody2d unexpected force behaviour

    - by Lilz Votca Love
    So guys, I've edited the question and here is my problem. I have a player with a Rigidbody2D attached. The player can double-jump in the air nicely, stick to walls when colliding with them, and slowly slide down to the ground. All movement is handled through physics, with no transform manipulation. I do something similar to this in the player's FixedUpdate():

        void FixedUpdate()
        {
            if (wall && Input.GetButtonDown("Jump"))
            {
                if (facingright) // player is facing the left side of the wall
                {
                    rigidbody2D.AddForce(new Vector2(-1f, 2f) * jumpforce);
                    // The player jumps backwards along this directional vector and
                    // follows a smooth curve; this part works well.
                }
                else
                {
                    rigidbody2D.AddForce(new Vector2(1f, 2f) * jumpforce);
                    // This is where it gets complicated: this is the same directional
                    // vector with only the x value flipped, and the same amount of force,
                    // but it behaves like the red curve in the picture below.
                }
            }
        }

    [image: bad behaviour, with the vector in red]

    I tested both AddForce calls for a simple jump and they behave exactly as described above. So here is my problem: jumping diagonally forward with rigidbody2D.AddForce() does not have the same effect, and does not follow the same curve, as jumping in the opposite direction with the exact same amount of force. If I could fix this or get past it, I could implement a wall-jump system like a ninja zigzagging between two opposing walls to climb them. Any ideas or alternatives?
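
    One plausible cause (an assumption, not confirmed by the code shown): AddForce adds to whatever velocity the body already has, and the player usually approaches the two walls with opposite horizontal velocities, so the resulting arcs differ even though the applied forces mirror each other. A minimal sketch of a fix is to cancel the existing velocity before the wall jump:

        if (wall && Input.GetButtonDown("Jump"))
        {
            rigidbody2D.velocity = Vector2.zero; // start each wall jump from rest
            float dir = facingright ? -1f : 1f;
            rigidbody2D.AddForce(new Vector2(dir, 2f) * jumpforce);
        }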

  • Pix for visual studio express 2012 (Desktop)

    - by JohnB
    (Originally asked on Stack Overflow.) Using Visual C++ Express 2010 for Direct3D, you have to download the DirectX SDK, which includes a tool called PIX for debugging shaders, inspecting 3D resources, and so on. With Visual Studio 2012 Express, the DirectX SDK is folded into the bundled Windows SDK, but that does not seem to include the winpix.exe tool. Is this very useful tool still available? I guess I can still use the one from the previous SDK, but it seems wrong to install the entire SDK just for that one tool. Is there a version for VS2012 Express that I'm missing?

  • Transform 3D vectors between coordinate systems

    - by Nir Cig
    I've got six points in 3D space: A, B, C, D, E, F, which define four vectors. AB is perpendicular to AC, and DE is perpendicular to DF. I need to find a transformation matrix M that transforms AB to DE and AC to DF. In other words: M·AB = DE, M·AC = DF. If no scaling were involved, this could be solved with a simple rotation matrix. But since the ratios |AB|/|DE| and |AC|/|DF| might be different, I'm not sure how to proceed.
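
    A sketch of one standard approach: a linear map is pinned down by the images of three independent vectors, so extend each perpendicular pair to a full basis with a cross product and solve the resulting system. Assuming the cross products are taken in matching order,

        M \begin{bmatrix} AB & AC & AB \times AC \end{bmatrix} = \begin{bmatrix} DE & DF & DE \times DF \end{bmatrix}
        \quad\Rightarrow\quad
        M = \begin{bmatrix} DE & DF & DE \times DF \end{bmatrix} \begin{bmatrix} AB & AC & AB \times AC \end{bmatrix}^{-1}

    where each bracket is a 3×3 matrix with the vectors as columns. The two given constraints hold by construction; how the third column is scaled (e.g. normalized or not) decides what M does perpendicular to the two given planes, which the stated constraints leave free.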

  • Bloom shader makes it impossible to render black?

    - by Mathias Lykkegaard Lorenzen
    I am playing around with the bloom shader from the XNA sample page, to do some glow shading. I am rendering primitive vector-ish squares of linelists/linestrips, on a background. However, I am facing a few problems. With a black background and white squares, I can actually see the squares. However, with a white background and black squares, I can't see them at all. Why is this happening, and is there any way of me fixing it? Can I modify my bloom shader to also "glow" dark elements, if that's what is causing it?
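
    This is inherent to how bloom works: the extract pass keeps only pixels above a brightness threshold, and pure black never passes it, so dark shapes on a light background contribute nothing to the glow. One hedged option is an inverted extract pass; a sketch, assuming the XNA sample's BloomExtract shader and its names (TextureSampler, BloomThreshold):

        float4 PixelShaderFunction(float2 texCoord : TEXCOORD0) : COLOR0
        {
            float4 c = tex2D(TextureSampler, texCoord);
            // the sample keeps bright areas: saturate((c - BloomThreshold) / (1 - BloomThreshold));
            // invert first so dark elements count as "bright" and get bloomed:
            return saturate(((1 - c) - BloomThreshold) / (1 - BloomThreshold));
        }

    The blurred result would then need to be composited subtractively (darkening) rather than additively, or run as a second pass alongside the normal bright bloom.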

  • UnrealScript error: Importing defaults for Actor: Changing Role in defaultproperties is illegal - what is it importing?

    - by user3079666
    I added the line var float Mass; to Actor and commented it out of the classes that inherit from Actor and declare it themselves. That fixed all the other issues, but I now get this error message:

        Error, Importing defaults for Actor: Changing Role in defaultproperties is illegal (was RemoteRole intended?)

    The thing is, I did not change anything related to Role, or anything in defaultproperties at all. Also, since it says "Importing", I'm guessing it is reading some ini file. Any clues?

  • Unity scaling instantiated GameObject at Start() doesn't "keep"

    - by Shivan Dragon
    I have a very simple scenario: a box-like prefab imported automatically from Blender (I have the .blend file in the Assets folder), and a script with two public GameObject fields. In one I place the above prefab, and in the other I place a terrain object (which I've created in Unity's graphical view):

        public Collider terrain;
        public GameObject aStarCellHighlightPrefab;

    This script is attached to the camera. The idea is to instantiate the Blender prefab, set the terrain as its parent, and then scale the prefab instance up. I first did it like this, in the Start() method:

        void Start()
        {
            cursorPositionOnTerrain = new RaycastHit();
            aStarCellHighlight = (GameObject)Instantiate(aStarCellHighlightPrefab, new Vector3(300, 300, 300), terrain.transform.rotation);
            aStarCellHighlight.name = "cellHighlight";
            aStarCellHighlight.transform.parent = terrain.transform;
            aStarCellHighlight.transform.localScale = new Vector3(100, 100, 100);
        }

    At first I thought it didn't work at all. Later I noticed that it did in fact work, in the sense that the scale was applied right at the start, but immediately afterwards the prefab instance snapped back to its initial scale. Putting the scale code in the Update() method fixes it, in the sense that the instance now stays scaled the whole time:

        void Update()
        {
            aStarCellHighlight.transform.localScale = new Vector3(100, 100, 100);
            //...
        }

    However, with this code the object is first displayed without the scale applied, and it takes about 5-10 seconds for the scale to happen. During this time everything else works fine (input, logging, etc.). The scene is very simple; it's not as if it has a lot of stuff to load (there's a ray cast from the camera onto the terrain, but that seems to happen without such delays). My two-part question is: why doesn't the scale stick when I apply it once in Start(), forcing me to keep reapplying it in Update(), and why does it then take so long for the scale to apply and show up?

  • Smooth animation in Cocos2d for iOS

    - by MrDatabase
    I move a simple CCSprite around the screen of an iOS device using this code:

        [self schedule:@selector(update:) interval:0.0167];

        - (void) update:(ccTime) delta {
            CGPoint currPos = self.position;
            currPos.x += xVelocity;
            currPos.y += yVelocity;
            self.position = currPos;
        }

    This works, but the animation is not smooth. How can I improve the smoothness of my animation? My scene is exceedingly simple (just one full-screen CCSprite with a background image and a relatively small CCSprite that moves slowly). I've logged the ccTime delta and it's not consistent: it's almost always greater than my specified interval of 0.0167, sometimes by up to a factor of 4x. I've considered tailoring the motion in the update method to the delta time (larger delta = larger movement, etc.). However, given the simplicity of my scene, it seems there should be a better way (and something basic that I'm probably missing).
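
    Scaling the motion by the measured delta is in fact the standard fix, not a workaround: a scheduled interval can never be honored exactly, since updates only fire on frame boundaries. A sketch: schedule the per-frame update and express velocity in points per second rather than points per tick.

        [self scheduleUpdate]; // cocos2d calls update: once per rendered frame

        - (void) update:(ccTime) delta {
            CGPoint currPos = self.position;
            currPos.x += xVelocity * delta; // xVelocity now in points/second
            currPos.y += yVelocity * delta;
            self.position = currPos;
        }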

  • best way to compute vertex normals from a Triangle's list

    - by nkint
    Hi, I'm a complete newbie in computer graphics, so sorry if it's a stupid question. I'm trying to make a simple 3D engine from scratch, more for educational purposes than for real use. I have a Surface object containing a list of Triangles. For now I compute normals inside the Triangle class, this way:

        triangle.computeFaceNormals() {
            Vec3D u = v1.sub(v3)
            Vec3D v = v1.sub(v2)
            Vec3D normal = Vec3D.cross(u,v)
            normal.normalized()
            this.n1 = this.n2 = this.n3 = normal
        }

    and when building the surface:

        t = new Triangle(v1,v2,v3).computeFaceNormals()
        surface.addTriangle(t)

    I think this is the best way to do that... isn't it? Now, what about vertex normals? I've found this simple algorithm: flipcode vertex normal. But hey, this algorithm has three nested loops, so roughly cubic cost; I don't think it's the best way to do it. Any suggestions?
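
    A common linear-time alternative (a sketch in the same pseudocode style, assuming vertices are shared between triangles or can be matched by position): accumulate each face normal onto its three vertices, then normalize once at the end.

        for each vertex v in surface:
            v.normal = (0,0,0)
        for each triangle t in surface:
            n = t.faceNormal()                 // an unnormalized cross product also works,
            t.v1.normal = t.v1.normal.add(n)   // which weights each face by its area
            t.v2.normal = t.v2.normal.add(n)
            t.v3.normal = t.v3.normal.add(n)
        for each vertex v in surface:
            v.normal = v.normal.normalized()

    This visits each triangle and each vertex once, so it is O(T + V) instead of a nested-loop search.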

  • Multiple render targets and gamma correctness in Direct3D9

    - by Mario
    Let's say in a deferred renderer, when building your G-Buffer, you're going to render texture color, normals, depth and whatever else to your multiple render targets at once. Now if you want a gamma-correct rendering pipeline and you use regular sRGB textures as well as render targets, you'll need to apply some conversions along the way, because your filtering, sampling and calculations should happen in linear space, not sRGB space. Of course, you could store linear color in your textures and render targets, but this might very well introduce bad precision and banding issues.

    Reading from sRGB textures is easy: just set SRGBTexture = true; in your texture sampler in your HLSL effect code and the hardware does the sRGB-to-linear conversion for you. Writing to an sRGB render target is theoretically easy, too: just set SRGBWriteEnable = true; in your effect pass in HLSL and your linear colors will be converted to sRGB space automatically.

    But how does this work with multiple render targets? I only want to apply these corrections to the color textures and render target, not to the normals, depth, specularity or whatever else I'll be rendering to my G-Buffer. Ok, so I just don't apply SRGBTexture = true; to my non-color textures, but with SRGBWriteEnable = true; I'll be gamma-correcting all the values I write out to my render targets, no matter what I actually store there. I found some info on gamma over at Microsoft (http://msdn.microsoft.com/en-us/library/windows/desktop/bb173460%28v=vs.85%29.aspx):

        For hardware that supports Multiple Render Targets (Direct3D 9) or Multiple-element Textures (Direct3D 9), only the first render target or element is written.

    If I understand correctly, SRGBWriteEnable should only be applied to the first render target, but according to my tests it isn't; it is used for all render targets instead. Now the only alternative seems to be to handle these corrections manually in my shader and only correct the actual color output, but I'm not totally sure that this won't have a negative impact on color correctness, e.g. if the GPU does any blending or filtering or multisampling after the linear-to-sRGB conversion...

    Do I even need gamma correction in this case, if I'm just writing texture color without lighting to my render target? As far as I know I do, because of the texture filtering and mip sampling happening in sRGB space if I don't correct for it. Anyway, it'd be interesting to hear other people's solutions or thoughts about this.
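
    For reference, a sketch of the manual route the poster mentions (an assumption, not a confirmed solution; sampler and struct names are made up): skip SRGBWriteEnable entirely and encode only the color output in the pixel shader, leaving the other targets untouched.

        struct PSOutput
        {
            float4 Color  : COLOR0; // sRGB-encoded by hand
            float4 Normal : COLOR1; // stays linear
        };

        PSOutput ps_gbuffer(float2 uv : TEXCOORD0, float3 n : TEXCOORD1)
        {
            PSOutput o;
            float3 linearColor = tex2D(DiffuseSampler, uv).rgb;  // sampled with SRGBTexture = true
            o.Color  = float4(pow(linearColor, 1.0 / 2.2), 1.0); // approximate linear -> sRGB
            o.Normal = float4(normalize(n) * 0.5 + 0.5, 0.0);    // packed, untouched by gamma
            return o;
        }

    Note that pow(x, 1/2.2) only approximates the piecewise sRGB transfer function, and as the poster suspects, anything the hardware does after the shader (blending, MSAA resolve) then operates on sRGB values.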

  • Powder games: how do they work?

    - by Marc Müller
    Hey guys, I recently found these two gems:

        http://powdertoy.co.uk/
        http://dan-ball.jp/en/javagame/dust/

    My question is: how are the physics for so many elements handled efficiently? Am I just severely underestimating modern computing power, or is it possible to 'just' have a two-dimensional array, where each cell describes what is placed at the corresponding position, and simulate every cell in every step? Or are more complex things being done, like summarising large areas of the same kind into a single data set and splitting that set up again as needed? Are there any open-source games like this I could look at?
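
    The naive approach is closer to reality than it might seem: these "falling sand" games are usually cellular automata over exactly such a 2D array, updated bottom-up once per frame, and a few hundred thousand cells per frame is well within a modern CPU's budget. A minimal sketch (hypothetical Python, sand only):

        import random

        EMPTY, SAND, WALL = 0, 1, 2
        W, H = 300, 200
        grid = [[EMPTY] * W for _ in range(H)]

        def step():
            # bottom-up, so a grain falls at most one cell per frame
            for y in range(H - 2, -1, -1):
                for x in range(W):
                    if grid[y][x] != SAND:
                        continue
                    if grid[y + 1][x] == EMPTY:
                        grid[y][x], grid[y + 1][x] = EMPTY, SAND
                    else:
                        # try sliding diagonally, in random order to avoid directional bias
                        for dx in random.sample((-1, 1), 2):
                            if 0 <= x + dx < W and grid[y + 1][x + dx] == EMPTY:
                                grid[y][x], grid[y + 1][x + dx] = EMPTY, SAND
                                break

    Real implementations add particle lists for sparse elements, dirty-rectangle tracking so quiescent regions are skipped, and sometimes chunked multithreading, which is essentially the "summarising large areas" idea the poster guesses at.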

  • How can I generate a navigation mesh for a tile grid?

    - by Roflha
    I haven't actually started programming this yet, but I wanted to see how I would go about it anyway. Say I have a grid of tiles, all of the same size, some traversable and some not. How would I go about creating a navigation mesh of polygons from this grid? My idea was to take the non-traversable tiles out and extend lines from their edges to form polygons... that's all I have so far. Any advice?
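
    A common way to do this on a uniform grid (a sketch, not the only option) is greedy rectangle merging: grow each unclaimed walkable tile into the widest run, extend that run downward while every row below matches, and emit the rectangle as one convex polygon. Rectangles sharing an edge segment then become linked navmesh nodes.

        # hypothetical helper; walkable is an H x W grid of booleans
        def build_rects(walkable, W, H):
            claimed = [[False] * W for _ in range(H)]
            rects = []
            for y in range(H):
                for x in range(W):
                    if not walkable[y][x] or claimed[y][x]:
                        continue
                    w = 1  # grow right
                    while x + w < W and walkable[y][x + w] and not claimed[y][x + w]:
                        w += 1
                    h = 1  # grow down while the full row fits
                    while (y + h < H and all(walkable[y + h][x:x + w])
                           and not any(claimed[y + h][x:x + w])):
                        h += 1
                    for yy in range(y, y + h):
                        for xx in range(x, x + w):
                            claimed[yy][xx] = True
                    rects.append((x, y, w, h))
            return rects

    The result is usually far fewer polygons than tiles, and A* over the rectangle adjacency graph behaves like A* over a hand-built navmesh.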

  • FrameBuffer Render to texture not working all the way

    - by brainydexter
    I am learning to use Frame Buffer Objects. For this purpose, I chose to render a triangle to a texture and then map that texture to a quad. When I render the triangle, I clear the color to something blue. So, when I render the texture on the quad from the FBO, everything comes out blue, but the triangle doesn't show up. I can't seem to figure out why this is happening. Can someone please help me out? I'll post the rendering code here, since glCheckFramebufferStatus doesn't complain when I set up the FBO (the setup code is at the end). Here is my rendering code:

        void FrameBufferObject::Render(unsigned int elapsedGameTime)
        {
            glBindFramebuffer(GL_FRAMEBUFFER, m_FBO);
            glClearColor(0.0, 0.6, 0.5, 1);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

            // adjust viewport and projection matrices to texture dimensions
            glPushAttrib(GL_VIEWPORT_BIT);
            glViewport(0, 0, m_FBOWidth, m_FBOHeight);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glOrtho(0, m_FBOWidth, 0, m_FBOHeight, 1.0, 100.0);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();

            DrawTriangle();

            glPopAttrib();
            // setting FrameBuffer back to window-specified Framebuffer
            glBindFramebuffer(GL_FRAMEBUFFER, 0); // unbind

            // back to normal viewport and projection matrix
            //glViewport(0, 0, 1280, 768);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            gluPerspective(45.0, 1.33, 1.0, 1000.0);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            glClearColor(0, 0, 0, 0);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

            render(elapsedGameTime);
        }

        void FrameBufferObject::DrawTriangle()
        {
            glPushMatrix();
            glBegin(GL_TRIANGLES);
                glColor3f(1, 0, 0);
                glVertex2d(0, 0);
                glVertex2d(m_FBOWidth, 0);
                glVertex2d(m_FBOWidth, m_FBOHeight);
            glEnd();
            glPopMatrix();
        }

        void FrameBufferObject::render(unsigned int elapsedTime)
        {
            glEnable(GL_TEXTURE_2D);
            glBindTexture(GL_TEXTURE_2D, m_TextureID);
            glPushMatrix();
            glTranslated(0, 0, -20);
            glBegin(GL_QUADS);
                glColor4f(1, 1, 1, 1);
                glTexCoord2f(1, 1); glVertex3f(1, 1, 1);
                glTexCoord2f(0, 1); glVertex3f(-1, 1, 1);
                glTexCoord2f(0, 0); glVertex3f(-1, -1, 1);
                glTexCoord2f(1, 0); glVertex3f(1, -1, 1);
            glEnd();
            glPopMatrix();
            glBindTexture(GL_TEXTURE_2D, 0);
            glDisable(GL_TEXTURE_2D);
        }

        void FrameBufferObject::Initialize()
        {
            // Generate FBO
            glGenFramebuffers(1, &m_FBO);
            glBindFramebuffer(GL_FRAMEBUFFER, m_FBO);

            // Add a depth buffer as a renderbuffer to the FBO
            glGenRenderbuffers(1, &m_DepthBuffer);
            glBindRenderbuffer(GL_RENDERBUFFER, m_DepthBuffer);
            glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, m_FBOWidth, m_FBOHeight);
            // attach the depth buffer to the FBO at the depth attachment point
            glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, m_DepthBuffer);

            // Create a texture and attach it to the FBO
            glGenTextures(1, &m_TextureID);
            glBindTexture(GL_TEXTURE_2D, m_TextureID);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, m_FBOWidth, m_FBOHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0); // only allocating space
            glBindTexture(GL_TEXTURE_2D, 0);
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_TextureID, 0);

            // Check FBO status
            if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
                std::cout << "\n Error:: FrameBufferObject::Initialize() :: FBO loading not complete \n";

            // switch back to the window-system framebuffer
            glBindFramebuffer(GL_FRAMEBUFFER, 0);
        }

    Thanks!

  • Renderbuffer to GLSL shader?

    - by Dan
    I have software that performs volume rendering through a raycasting approach. The raycasting shader writes the raycasted volume depth, through gl_FragDepth, into a framebuffer object that I bind before calling the shader. The problem is that I would like to use this depth in another shader that I call later on. The only way I have figured out is to bind the framebuffer once the raycasting has finished, read the depth map back with something like

        glReadPixels(0, 0, m_winSize.x, m_winSize.y, GL_DEPTH_COMPONENT, GL_FLOAT, pixels);

    write it to a 2D texture as usual,

        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, m_winSize.x, m_winSize.y, 0, GL_DEPTH_COMPONENT, GL_FLOAT, pixels);

    and then pass this 2D texture, which contains a simple depth map, to the other shader. However, I am not entirely sure this is the proper way to do it. Is there any way to pass the framebuffer that I fill in my raycasting shader directly to the other shader?
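
    The round trip through glReadPixels stalls the pipeline and is usually unnecessary. A common alternative (a sketch using standard GL calls): attach a depth texture to the FBO instead of a depth renderbuffer, so the raycasting pass writes gl_FragDepth straight into a texture that can be sampled later.

        GLuint depthTex;
        glGenTextures(1, &depthTex);
        glBindTexture(GL_TEXTURE_2D, depthTex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, winW, winH, 0,
                     GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex, 0);

        // later: bind depthTex to a texture unit and declare it as a sampler2D
        // in the second shader; no readback needed

    Here winW/winH stand in for the poster's m_winSize fields.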

  • Animating Tile with Blitting taking up Memory.

    - by Kid
    I am trying to animate a specific tile in my 2D array, using blitting. The animation consists of three different 16x16 sprites in a tilesheet. That works perfectly with the code below, BUT it is leaking memory: every second the Flash Player takes up about 140 KB more. What part of the following code could be causing the leak?

        // The Rectangle finds where on the 2d array we should clear the pixels;
        // fillRect sets alpha 0 at that spot before we copy in the next sprite.
        // tileType holds what kind of tile the next tile in the animation is (from the tilesheet).
        // drawTile() gets the sprite from the tilesheet and copyPixels it into the right position on the canvas.
        public function animateSprite():void {
            tileGround.bitmapData.lock();
            if (anmArray[0].tileType > 42) {
                anmArray[0].tileType = 40;
                frameCount = 0;
            }
            var rect:Rectangle = new Rectangle(anmArray[0].xtile * ts, anmArray[0].ytile * ts, ts, ts);
            tileGround.bitmapData.fillRect(rect, 0);
            anmArray[0].tileType = 40 + frameCount;
            drawTile(anmArray[0].tileType, anmArray[0].xtile, anmArray[0].ytile);
            frameCount++;
            tileGround.bitmapData.unlock();
        }

        public function drawTile(spriteType:int, xt:int, yt:int):void {
            var tileSprite:Bitmap = getImageFromSheet(spriteType, ts);
            var rec:Rectangle = new Rectangle(0, 0, ts, ts);
            var pt:Point = new Point(xt * ts, yt * ts);
            tileGround.bitmapData.copyPixels(tileSprite.bitmapData, rec, pt, null, null, true);
        }

        public function getImageFromSheet(spriteType:int, size:int):Bitmap {
            var sheetColumns:int = tSheet.width / ts;
            var col:int = spriteType % sheetColumns;
            var row:int = Math.floor(spriteType / sheetColumns);
            var rec:Rectangle = new Rectangle(col * ts, row * ts, size, size);
            var pt:Point = new Point(0, 0);
            var correctTile:Bitmap = new Bitmap(new BitmapData(size, size, false, 0));
            correctTile.bitmapData.copyPixels(tSheet, rec, pt, null, null, true);
            return correctTile;
        }
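
    The likely leak is in getImageFromSheet(): every animation tick allocates a fresh Bitmap and BitmapData that are never disposed, so they pile up until the garbage collector gets around to them. Two hedged options: dispose of the temporary after blitting, or skip the temporary entirely and copy straight from the sheet.

        // option 1: dispose the scratch BitmapData in drawTile()
        var tileSprite:Bitmap = getImageFromSheet(spriteType, ts);
        tileGround.bitmapData.copyPixels(tileSprite.bitmapData, rec, pt, null, null, true);
        tileSprite.bitmapData.dispose();

        // option 2: copy directly from the tilesheet, no allocation at all
        var srcRec:Rectangle = new Rectangle(col * ts, row * ts, ts, ts);
        tileGround.bitmapData.copyPixels(tSheet, srcRec, pt, null, null, true);

    BitmapData.dispose() is a standard AS3 API; col/row in option 2 would be computed as in getImageFromSheet().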

  • OpenGL 2.1+ Render with data returned from assimp library

    - by Bình Nguyên
    I have just read this tutorial about loading a 3D model file: http://www.lighthouse3d.com/cg-topics/code-samples/importing-3d-models-with-assimp/#comment-14551. Its render routine uses a recursive_render function to scan all the nodes. My questions: what does an aiNode struct store, and how does that method differ from this one:

        for (int i = 0; i < scene->mNumMeshes; ++i)
            draw(scene->mMeshes[i]);

    Thanks for reading!
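
    For context (from the assimp headers, not the tutorial itself): an aiNode stores a name, a 4x4 transformation relative to its parent, pointers to its children, and an array of indices into scene->mMeshes. The flat loop above draws every mesh in the model's local space, while the recursive version applies each node's accumulated transform, which matters whenever the file places the same mesh in several positions or articulates parts. A sketch:

        void recursive_render(const aiScene* scene, const aiNode* node)
        {
            aiMatrix4x4 m = node->mTransformation; // transform relative to the parent node
            m.Transpose();                         // assimp is row-major; OpenGL expects column-major
            glPushMatrix();
            glMultMatrixf((float*)&m);
            for (unsigned int i = 0; i < node->mNumMeshes; ++i)
                drawMesh(scene->mMeshes[node->mMeshes[i]]); // the node stores indices, not meshes
            for (unsigned int n = 0; n < node->mNumChildren; ++n)
                recursive_render(scene, node->mChildren[n]);
            glPopMatrix();
        }

    drawMesh() stands in for whatever per-mesh drawing code the engine uses.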

  • How to make rigid bodies collide with Apex Clothing in PhysX for Maya

    - by b1nary.atr0phy
    According to the [Apex] Clothing Overview section of the documentation: Colliding with Rigid Bodies Rigid bodies present in your scene will push clothing around roughly as you might expect. Well, I beg to differ. The Apex Cloth collides with the floor just fine, but that's about the only thing it collides with (unless I add ragdoll to the same skeleton that the cloth is attached to.) So for example, if I try to bounce a ball (dynamic rigid body) into the cloth, it simply bounces through it. If I try to walk an actor with ragdoll through it, he simply clips through it as well. Anyone have any insight on this?

  • apply non-hierarchical transforms to a hierarchical skeleton?

    - by user975135
    I use Blender3D, but the answer might not be API-exclusive. I have some matrices I need to assign to PoseBones. The resulting pose looks fine when there is no bone hierarchy (parenting) and messed up when there is. I've uploaded an archive with a sample blend of the rigged models, the text animation importer and a test animation file here: http://www.2shared.com/file/5qUjmnIs/sample_files.html. Import the animation by selecting an Armature and running the importer on the "sba" file; do this for both Armatures. This is how I assign the poses in the real (complex) importer:

        matrix_basis = ... # matrix from file
        animation_matrix = matrix_basis * pose.bones['mybone'].matrix.copy()
        pose.bones[bonename].matrix = animation_matrix

    If I go to edit mode, select all bones and press Alt+P to undo parenting, the pose looks fine again. The API documentation says PoseBone.matrix is in "object space" ("Final 4x4 matrix after constraints and drivers are applied (object space)"), but it seems clear to me from these tests that it is relative to the parent bones. I tried something like this:

        matrix_basis = ... # matrix from file
        animation_matrix = matrix_basis * (pose.bones['mybone'].matrix.copy() * pose.bones[bonename].bone.parent.matrix_local.copy().inverted())
        pose.bones[bonename].matrix = animation_matrix

    But it looks worse. I experimented with the order of operations; no luck with any of them. For the record, in the old 2.4 API this worked like a charm:

        matrix_basis = ... # matrix from file
        animation_matrix = armature.bones['mybone'].matrix['ARMATURESPACE'].copy() * matrix_basis
        pose.bones[bonename].poseMatrix = animation_matrix
        pose.update()

    Links to the Blender API reference:
    http://www.blender.org/documentation/blender_python_api_2_63_17/bpy.types.BlendData.html#bpy.types.BlendData
    http://www.blender.org/documentation/blender_python_api_2_63_17/bpy.types.PoseBone.html#bpy.types.PoseBone

  • Discovering path through unknown territory

    - by TravisG
    Let's say all the AI knows about its surroundings is a pixel map which clearly shows walkable terrain and obstacles. I want the AI to traverse this terrain until it finds an exit point. There are some restrictions: there is always a way to the exit somewhere in the map the AI walks around in, but there may be dead ends. The path to the exit is always fairly random, meaning that if you stand at a crossroads, nothing indicates which direction is the right one to go. It doesn't matter if the AI reaches a dead end, but it has to be able to walk back out of it to a previously uninspected location and continue its search there. Initially, the AI knows only the starting area of the whole map. As it walks around, new points are added to the pixel map corresponding to the AI's range of sight (think of it as the AI clearing the fog of war). The problem is in 2D space, and all I have is the pixel map. There are no paths in the pixel map which are "too narrow"; the AI fits through everything. It shouldn't be a brute-force solution. E.g. it would be possible to simply find a path (with A*, for example) to each yet-undiscovered pixel in the pixel map, which would lead the AI to discover new pixels, and repeat this until the end is reached. The path doesn't have to be the shortest path (that is impossible without knowing the entire map beforehand), but when movements within the visible area are calculated, the shortest and, from a human standpoint, most logical path should be taken (e.g. if you can see a way out of your room into a hallway, you would obviously go there instead of exploring the corner of your current room). What kinds of approaches to this problem are there?
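
    What the poster dismisses as brute force is close to the standard technique, frontier-based exploration: instead of pathing to every unknown pixel, path only to the nearest frontier, i.e. a known-walkable cell adjacent to unknown space. A sketch:

        def explore(world):
            known = reveal(world, start)            # fog-of-war map, updated on sight
            pos = start
            while not exit_visible(known):
                frontiers = [c for c in walkable(known)
                             if any(n in UNKNOWN for n in neighbors(c, known))]
                if not frontiers:
                    return None                     # fully explored, no exit found
                target = min(frontiers, key=lambda c: astar_cost(known, pos, c))
                pos = walk(known, pos, target)      # follow the A* path, revealing as you go
            return astar_path(known, pos, exit_cell(known))

    reveal, walkable, neighbors, UNKNOWN, astar_cost, walk, etc. are hypothetical helpers. The point is that the frontier set is tiny compared to "every undiscovered pixel", and choosing the cheapest reachable frontier naturally backs the agent out of dead ends, matching the "leave the room through the visible doorway" behavior the poster wants.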

  • How to label a cuboid?

    - by usha
    Hi, this is how my 3D cuboid looks; I have attached the complete code. I want to label this cuboid with different names across its sides. How is this possible using OpenGL on Android?

        public class MyGLRenderer implements Renderer {
            Context context;
            Cuboid rect;
            private float mCubeRotation;
            // private static float angleCube = 0;     // Rotational angle in degree for cube (NEW)
            // private static float speedCube = -1.5f; // Rotational speed for cube (NEW)

            public MyGLRenderer(Context context) {
                rect = new Cuboid();
                this.context = context;
            }

            public void onDrawFrame(GL10 gl) {
                gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
                gl.glLoadIdentity();                           // Reset the model-view matrix
                gl.glTranslatef(0.2f, 0.0f, -8.0f);            // Translate right and into the screen
                gl.glScalef(0.8f, 0.8f, 0.8f);                 // Scale down (NEW)
                gl.glRotatef(mCubeRotation, 1.0f, 1.0f, 1.0f);
                // gl.glRotatef(angleCube, 1.0f, 1.0f, 1.0f);  // rotate about the axis (1,1,1) (NEW)
                rect.draw(gl);
                mCubeRotation -= 0.15f;
                // angleCube += speedCube;
            }

            public void onSurfaceChanged(GL10 gl, int width, int height) {
                if (height == 0) height = 1;                   // To prevent divide by zero
                float aspect = (float) width / height;
                gl.glViewport(0, 0, width, height);            // Viewport covers the entire window
                gl.glMatrixMode(GL10.GL_PROJECTION);           // Perspective projection matching the viewport
                gl.glLoadIdentity();
                GLU.gluPerspective(gl, 45, aspect, 0.1f, 100.f);
                gl.glMatrixMode(GL10.GL_MODELVIEW);
                gl.glLoadIdentity();
            }

            public void onSurfaceCreated(GL10 gl, EGLConfig config) {
                gl.glClearColor(0.0f, 0.0f, 0.0f, 1.0f);       // Set color's clear-value to black
                gl.glClearDepthf(1.0f);                        // Set depth's clear-value to farthest
                gl.glEnable(GL10.GL_DEPTH_TEST);               // Depth buffer for hidden surface removal
                gl.glDepthFunc(GL10.GL_LEQUAL);
                gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_NICEST);
                gl.glShadeModel(GL10.GL_SMOOTH);               // Smooth color shading
                gl.glDisable(GL10.GL_DITHER);                  // Disable dithering for performance
            }
        }

        public class Cuboid {
            private FloatBuffer mVertexBuffer;
            private FloatBuffer mColorBuffer;
            private ByteBuffer mIndexBuffer;

            private float vertices[] = { // width, height, depth
                -2.5f, -1.0f, -1.0f,
                 1.0f, -1.0f, -1.0f,
                 1.0f,  1.0f, -1.0f,
                -2.5f,  1.0f, -1.0f,
                -2.5f, -1.0f,  1.0f,
                 1.0f, -1.0f,  1.0f,
                 1.0f,  1.0f,  1.0f,
                -2.5f,  1.0f,  1.0f
            };

            private float colors[] = { // R, G, B, A color per vertex
                0.0f, 1.0f, 0.0f, 1.0f,
                0.0f, 1.0f, 0.0f, 1.0f,
                1.0f, 0.5f, 0.0f, 1.0f,
                1.0f, 0.5f, 0.0f, 1.0f,
                1.0f, 0.0f, 0.0f, 1.0f,
                1.0f, 0.0f, 0.0f, 1.0f,
                0.0f, 0.0f, 1.0f, 1.0f,
                1.0f, 0.0f, 1.0f, 1.0f
            };

            private byte indices[] = { // vertices 0-7 grouped into two triangles per face
                0, 4, 5,   0, 5, 1,
                1, 5, 6,   1, 6, 2,
                2, 6, 7,   2, 7, 3,
                3, 7, 4,   3, 4, 0,
                4, 7, 6,   4, 6, 5,
                3, 0, 1,   3, 1, 2
            };

            public Cuboid() {
                ByteBuffer byteBuf = ByteBuffer.allocateDirect(vertices.length * 4);
                byteBuf.order(ByteOrder.nativeOrder());
                mVertexBuffer = byteBuf.asFloatBuffer();
                mVertexBuffer.put(vertices);
                mVertexBuffer.position(0);

                byteBuf = ByteBuffer.allocateDirect(colors.length * 4);
                byteBuf.order(ByteOrder.nativeOrder());
                mColorBuffer = byteBuf.asFloatBuffer();
                mColorBuffer.put(colors);
                mColorBuffer.position(0);

                mIndexBuffer = ByteBuffer.allocateDirect(indices.length);
                mIndexBuffer.put(indices);
                mIndexBuffer.position(0);
            }

            public void draw(GL10 gl) {
                gl.glFrontFace(GL10.GL_CW);
                gl.glVertexPointer(3, GL10.GL_FLOAT, 0, mVertexBuffer);
                gl.glColorPointer(4, GL10.GL_FLOAT, 0, mColorBuffer);
                gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
                gl.glEnableClientState(GL10.GL_COLOR_ARRAY);
                gl.glDrawElements(GL10.GL_TRIANGLES, 36, GL10.GL_UNSIGNED_BYTE, mIndexBuffer);
                gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
                gl.glDisableClientState(GL10.GL_COLOR_ARRAY);
            }
        }

        public class Draw3drect extends Activity {
            private GLSurfaceView glView; // Use GLSurfaceView

            // Call back when the activity is started, to initialize the view
            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                glView = new GLSurfaceView(this);           // Allocate a GLSurfaceView
                glView.setRenderer(new MyGLRenderer(this)); // Use a custom renderer
                this.setContentView(glView);
            }

            // Call back when the activity is going into the background
            @Override
            protected void onPause() {
                super.onPause();
                glView.onPause();
            }

            // Call back after onPause()
            @Override
            protected void onResume() {
                super.onResume();
                glView.onResume();
            }
        }
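
    One hedged way to put a different label on each side (an approach sketch, not tested against this exact code): render each label into a Bitmap with Canvas.drawText(), upload it as a GL texture via GLUtils.texImage2D(), and draw each face as a textured quad with its own texture instead of a per-vertex color.

        // hypothetical helper: turn a string into a GL texture, returns the texture id
        private int makeLabelTexture(GL10 gl, String label) {
            Bitmap bmp = Bitmap.createBitmap(128, 128, Bitmap.Config.ARGB_8888);
            Canvas canvas = new Canvas(bmp);
            Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
            paint.setTextSize(32);
            paint.setColor(Color.WHITE);
            canvas.drawColor(Color.BLACK);
            canvas.drawText(label, 10, 70, paint);

            int[] tex = new int[1];
            gl.glGenTextures(1, tex, 0);
            gl.glBindTexture(GL10.GL_TEXTURE_2D, tex[0]);
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
            GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bmp, 0);
            bmp.recycle();
            return tex[0];
        }

    Drawing would then need glEnable(GL_TEXTURE_2D), a texture-coordinate buffer, and six glDrawElements calls (one per face, binding that face's label texture) instead of the single 36-index call.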

  • python Velocity control of the player, why doesn't this work?

    - by Dominic Grenier
    I have the following code inside a while True loop:

        if abs(playerx) < MAXSPEED:
            if moveLeft:
                playerx -= 1
            if moveRight:
                playerx += 1
        if abs(playery) < MAXSPEED:
            if moveDown:
                playery += 1
            if moveUp:
                playery -= 1
        if moveLeft == False and abs(playerx) > 0:
            playerx += 1
        if moveRight == False and abs(playerx) > 0:
            playerx -= 1
        if moveUp == False and abs(playery) > 0:
            playery += 1
        if moveDown == False and abs(playery) > 0:
            playery -= 1

        player.x += playerx
        player.y += playery
        if player.left < 0 or player.right > 1000:
            player.x -= playerx
        if player.top < 0 or player.bottom > 600:
            player.y -= playery

    The intended result is that while an arrow key is pressed, playerx or playery increments by one on every loop until it reaches MAXSPEED and then stays there, and that when the player releases the arrow key, the speed decreases until it reaches 0. To me, this code explicitly says that... but what actually happens is that playerx or playery keeps incrementing past MAXSPEED, and the player continues moving even after the arrow key is released. I keep rereading, but I'm completely baffled by this behavior. Any insights? Thanks.
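
    A hedged diagnosis from reading the deceleration lines: each axis has two of them, keyed to the two opposite keys, and they fire independently. With no key held they both fire and cancel out (+1 then -1), so the speed never decays; while holding right, the moveLeft == False branch adds an extra +1 per loop with no MAXSPEED check, so the speed grows without bound. A sketch of a fix that decays toward zero only when neither key on that axis is held:

        if not moveLeft and not moveRight:
            if playerx > 0:
                playerx -= 1
            elif playerx < 0:
                playerx += 1
        if not moveUp and not moveDown:
            if playery > 0:
                playery -= 1
            elif playery < 0:
                playery += 1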

  • Running a single action on multiple sprites at the same time

    - by Stephen
    Ok, so I have created a spiraling animation for a football, and I want to run it on two sprites at the same time. This is what I have done:

        CCAnimation* footballAnim = [CCAnimation animationWithFrame:@"Football" frameCount:60 delay:0.005f];
        spiral = [CCAnimate actionWithAnimation:footballAnim];
        CCRepeatForever* repeat = [CCRepeatForever actionWithAction:spiral];
        [Sprite1 runAction: repeat];
        [Sprite2 runAction: repeat];

    but it only runs the action on the first sprite. What am I doing wrong?
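
    In cocos2d a single action instance can only be attached to one target at a time, so running the same CCRepeatForever on the second sprite cannot work; each sprite needs its own copy. A sketch (assuming cocos2d-iphone with manual reference counting):

        [Sprite1 runAction:repeat];
        [Sprite2 runAction:[[repeat copy] autorelease]]; // separate action instance per target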

  • File format for animated scene

    - by stephelton
    I've got a custom OpenGL based rendering engine and I'd like to add support for cinema-type scene animation. The artist that is helping me uses primarily 3DSMax. I'd like a file format for exporting and importing this data. I'm also in need of a file format for skeletal animation data, which may have an impact here. I've been looking at MAXScript to manually export this stuff, which would buy me the most flexibility, but I have virtually no experience with 3DSMax itself, so I get a little lost when it comes to terminology. So I'd like to know what file formats exist for animated scene data, and whether they are appropriate for my use (my fear is that they will be way too broad for my fairly simple needs.) The way I view animated scene data is basically a bunch of references to [animated] models with keyframe-based matrices describing their orientation over time. And probably some special camera stuff to handle perspective. I might also want some event type stuff for adding/removing objects. Is this a sane concept?

  • Warp GameObject Size When Entering/Leaving Area

    - by Julian
    Below I have an image describing the desired functionality I am going for. Let's say you control a square and when you move this square into a given area, any part of your rigidbody/model inside of the area will be magnified upon entering and shrunk upon leaving. So now you more or less are made up of two rectangles, one small and one large. What would be an elegant approach towards achieving this effect?

  • Camera changes view when controller connected

    - by ChocoMan
    I have a weird situation. I have a model set to 0 for X, Y and Z. My camera's position is set to: X = 0 (updated as the model moves around), Y = the model's height + 20f (about the same level as the model's shoulders), Z = 25f (behind the model). Without the controller plugged in, everything looks the way I want it. But as soon as I plug the controller in, the camera aims at the sky! When I unplug the controller, the camera goes back to what it should be. Does anyone have any insight into what plugging in a controller could do to cause this?
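
    A hedged guess at the cause (assuming XNA, which the setup suggests): camera code that reads the gamepad every frame starts receiving a connected GamePadState the moment the controller is plugged in, and without a dead zone or IsConnected check, a drifting or default stick value gets integrated into the camera pitch every frame. A sketch of the guard:

        GamePadState pad = GamePad.GetState(PlayerIndex.One);
        if (pad.IsConnected)
        {
            float ry = pad.ThumbSticks.Right.Y;
            if (Math.Abs(ry) > 0.2f)   // ignore drift inside the dead zone
                cameraPitch += ry * pitchSpeed * elapsedSeconds;
        }

    cameraPitch, pitchSpeed and elapsedSeconds are hypothetical names standing in for whatever the camera update already uses.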
