Search Results

Search found 19338 results on 774 pages for 'game loop'.


  • Bullet Physics implementing custom MotionState class

    - by Arosboro
    I'm trying to make my engine's camera a kinematic rigid body that can collide with other rigid bodies. I've overridden the btMotionState class and implemented setKinematicPos, which updates the motion state's transform. I use the overridden class when creating my kinematic body, but collision detection fails. I'm doing this for fun, trying to add collision detection and physics to Sean O'Neil's Procedural Universe. I referred to the Bullet wiki page on MotionStates for my CPhysicsMotionState class. If it helps I can add the code for the planetary rigid bodies, but I didn't want to clutter the post.

    Here is my motion state class:

    ```cpp
    class CPhysicsMotionState : public btMotionState {
    protected:
        // This is the transform with position and rotation of the camera
        CSRTTransform* m_srtTransform;
        btTransform m_btPos1;

    public:
        CPhysicsMotionState(const btTransform &initialpos, CSRTTransform* srtTransform) {
            m_srtTransform = srtTransform;
            m_btPos1 = initialpos;
        }

        virtual ~CPhysicsMotionState() { }

        virtual void getWorldTransform(btTransform &worldTrans) const {
            worldTrans = m_btPos1;
        }

        void setKinematicPos(btQuaternion &rot, btVector3 &pos) {
            m_btPos1.setRotation(rot);
            m_btPos1.setOrigin(pos);
        }

        virtual void setWorldTransform(const btTransform &worldTrans) {
            btQuaternion rot = worldTrans.getRotation();
            btVector3 pos = worldTrans.getOrigin();
            m_srtTransform->m_qRotate = CQuaternion(rot.x(), rot.y(), rot.z(), rot.w());
            m_srtTransform->SetPosition(CVector(pos.x(), pos.y(), pos.z()));
            m_btPos1 = worldTrans;
        }
    };
    ```

    I add a rigid body for the camera:

    ```cpp
    // Create rigid body for camera
    btCollisionShape* cameraShape = new btSphereShape(btScalar(5.0f));
    btTransform startTransform;
    startTransform.setIdentity(); // forgot to add this line
    CVector vCamera = m_srtCamera.GetPosition();
    startTransform.setOrigin(btVector3(vCamera.x, vCamera.y, vCamera.z));
    m_msCamera = new CPhysicsMotionState(startTransform, &m_srtCamera);

    btScalar tMass(80.7f);
    bool isDynamic = (tMass != 0.f);
    btVector3 localInertia(0, 0, 0);
    if (isDynamic)
        cameraShape->calculateLocalInertia(tMass, localInertia);

    btRigidBody::btRigidBodyConstructionInfo rbInfo(tMass, m_msCamera, cameraShape, localInertia);
    m_rigidBody = new btRigidBody(rbInfo);
    m_rigidBody->setCollisionFlags(m_rigidBody->getCollisionFlags() | btCollisionObject::CF_KINEMATIC_OBJECT);
    m_rigidBody->setActivationState(DISABLE_DEACTIVATION);
    ```

    This is the code in Update() that runs each frame:

    ```cpp
    CSRTTransform srtCamera = CCameraTask::GetPtr()->GetCamera();
    Quaternion qRotate = srtCamera.m_qRotate;
    btQuaternion rot = btQuaternion(qRotate.x, qRotate.y, qRotate.z, qRotate.w);
    CVector vCamera = CCameraTask::GetPtr()->GetPosition();
    btVector3 pos = btVector3(vCamera.x, vCamera.y, vCamera.z);
    CPhysicsMotionState* cameraMotionState = CCameraTask::GetPtr()->GetMotionState();
    cameraMotionState->setKinematicPos(rot, pos);
    ```
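    A point worth checking: Bullet only generates contacts between a kinematic body and dynamic bodies; kinematic-vs-static and kinematic-vs-kinematic pairs produce no collisions, so a kinematic camera tested against static geometry will report nothing. A quick way to verify whether any contacts exist at all is to walk the dispatcher's manifolds after each step. A minimal sketch, assuming a btDiscreteDynamicsWorld* named world (variable names are illustrative):

    ```cpp
    // After world->stepSimulation(dt): list every contact involving the camera body.
    btDispatcher* dispatcher = world->getDispatcher();
    for (int i = 0; i < dispatcher->getNumManifolds(); ++i) {
        btPersistentManifold* manifold = dispatcher->getManifoldByIndexInternal(i);
        const btCollisionObject* a = static_cast<const btCollisionObject*>(manifold->getBody0());
        const btCollisionObject* b = static_cast<const btCollisionObject*>(manifold->getBody1());
        if ((a == m_rigidBody || b == m_rigidBody) && manifold->getNumContacts() > 0) {
            printf("camera contact: %d point(s)\n", manifold->getNumContacts());
        }
    }
    ```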


  • bump mapping with 2 normal maps

    - by DorkMonstuh
    I was wondering if it's actually possible to do bump mapping with 2 normal maps. I have tried doing it this way, however I get a function overload error on max and dot.

    ```glsl
    uniform sampler2D n_mapTex;
    uniform sampler2D n_mapTex2;
    uniform sampler2D refTex;
    varying mediump vec2 TexCoord;
    varying mediump float vTime;

    void main()
    {
        mediump vec4 wave = texture2D(n_mapTex, TexCoord - vTime);
        mediump vec4 wave2 = texture2D(n_mapTex2, TexCoord + vTime);
        mediump vec4 bump = mix(wave2, wave, 0.5);

        // this extracts the normals from the combined normal maps
        mediump vec4 normal = normalize(bump.xyzw * 2.0 - 1.0);

        // determines light position
        mediump vec3 lightPos = normalize(vec3(0.0, 1.0, 3.0));
        mediump float diffuse = max(dot(normal, lightPos), 0.0);

        gl_FragColor = mix(texture2D(refTex, TexCoord), bump, 0.5);
    }
    ```
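    The overload error is almost certainly a type mismatch: normal is a vec4 while lightPos is a vec3, and GLSL has no dot(vec4, vec3) overload (note also that diffuse is computed but never used in gl_FragColor). Blending two normal maps is fine in principle; a hedged fix for the second half of main(), keeping everything else as posted:

    ```glsl
    // Decode only xyz of each map, blend, renormalize: vec3 throughout.
    mediump vec3 n1 = wave.xyz * 2.0 - 1.0;
    mediump vec3 n2 = wave2.xyz * 2.0 - 1.0;
    mediump vec3 normal = normalize(mix(n2, n1, 0.5));

    mediump vec3 lightPos = normalize(vec3(0.0, 1.0, 3.0));
    mediump float diffuse = max(dot(normal, lightPos), 0.0); // types now match

    // Actually apply the lighting term to the reflection texture.
    gl_FragColor = vec4(texture2D(refTex, TexCoord).rgb * diffuse, 1.0);
    ```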


  • How to detect a touch on transparent area of an image in a (libgdx) stage?

    - by Usman
    Can someone please help me detect a touch on an image which I am using as an actor in a stage? The image is a long diagonal brush with plenty of transparent area. The problem is that when I touch the transparent area of the brush image, the image's click listener is still triggered. The click listener should only be called when the finger actually touches the visible part of the image, not the area which is empty. I am using the libgdx-0.9.4 libraries. Here is my simple piece of code:

    ```java
    import com.badlogic.gdx.scenes.scene2d.ui.Image;
    import com.badlogic.gdx.scenes.scene2d.ui.ClickListener;

    Image brushImg = new Image(ImageCache.getTexture("brush"));
    brushImg.width = mStage.width() * 0.75f;
    brushImg.height = mStage.height() * 0.75f;
    brushImg.setClickListener(new ClickListener() {
        @Override
        public void click(Actor actor, float x, float y) {
            SoundFactory.play("brush");
        }
    });
    ```
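    Scene2d's hit test is purely rectangular, so the usual workaround is to keep a CPU-side Pixmap of the brush and ignore clicks whose texel alpha is (nearly) zero. A hedged sketch, assuming you keep the Pixmap the texture was created from (brushPixmap is illustrative) and the actor is not rotated:

    ```java
    // Inside click(): map actor-local (x, y) into pixmap coordinates and
    // bail out when the touched texel is transparent.
    int px = (int) (x / brushImg.width * brushPixmap.getWidth());
    // Actor y runs bottom-up, Pixmap y runs top-down, so flip it.
    int py = (int) ((1f - y / brushImg.height) * brushPixmap.getHeight());
    int alpha = brushPixmap.getPixel(px, py) & 0xff; // RGBA8888: low byte is alpha
    if (alpha < 16) {
        return; // transparent pixel: treat as a miss
    }
    SoundFactory.play("brush");
    ```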


  • Permanently sync a Wiimote with a computer

    - by Adam Geisweit
    I have tried many ways to sync my Wiimotes with my computer so that I can program games with them, but every time the pairing is only temporary, or a tool that claims it can pair permanently doesn't actually do it. It gets tiresome having to reconnect every time I want to save battery life. How can I pair my Wiimote with my computer so that after I turn the Wiimote off, I can just hit any button and it will automatically reconnect?


  • 3d Model Scaling With Camera

    - by spasarto
    I have a very simple 3D maze program that uses a first-person camera to navigate the maze. I'm trying to scale the blocks that make up the maze walls and floor so the corridors seem roomier to the camera. Every time I scale the model, the camera seems to scale with it, and the corridors always stay the same apparent width. I've tried applying the scale to the model in the content pipeline (setting the Scale property of the model in the Properties window in VS). I've also tried applying the scale using Matrix.CreateScale(float) in Scale-Rotate-Translate order, with the same result. If I leave the camera speed the same, the camera moves more slowly, so I know it's traversing a larger distance, but the world doesn't look larger; the camera just seems slower. I'm not sure which part of the code to include, since I don't know whether it's an issue with my model, my camera, or something else. Any hints at what I'm doing wrong?

    Camera:

    ```csharp
    Projection = Matrix.CreatePerspectiveFieldOfView(
        MathHelper.PiOver4, _device.Viewport.AspectRatio, 1.0f, 1000.0f);

    Matrix camRotMatrix = Matrix.CreateRotationX(_cameraPitch)
                        * Matrix.CreateRotationY(_cameraYaw);
    Vector3 transCamRef = Vector3.Transform(_cameraForward, camRotMatrix);
    _cameraTarget = transCamRef + CameraPosition;
    Vector3 camRotUpVector = Vector3.Transform(_cameraUpVector, camRotMatrix);
    View = Matrix.CreateLookAt(CameraPosition, _cameraTarget, camRotUpVector);
    ```

    Model:

    ```csharp
    World = Matrix.CreateTranslation(Position);
    ```
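    Scaling every block and every block position uniformly is visually a no-op: all relative proportions, including the camera's eye height relative to the walls, stay identical, so the scene only reads as bigger if something stays fixed in world units, typically the camera's eye height and movement speed. A hedged sketch of that idea (wallScale and GridPosition are illustrative names):

    ```csharp
    // Scale the geometry and spread the cells out, but keep the camera's
    // eye height and speed in fixed world units so corridors read as larger.
    float wallScale = 2.0f;
    World = Matrix.CreateScale(wallScale)
          * Matrix.CreateTranslation(GridPosition * wallScale);

    CameraPosition.Y = 1.7f; // eye stays at "human" height regardless of wallScale
    // Movement speed likewise stays in world units per second, not cells per second.
    ```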


  • Cutting out smaller rectangles from a larger rectangle

    - by Mauro Destro
    The world is initially a rectangle. The player can move along the world border and then "cut" the world via orthogonal (not oblique) paths. When the player reaches the border again, I have a list of the path segments they just made. I'm trying to calculate and compare the two areas created by the cut and select the smaller one to remove it from the world. After the first iteration the world is no longer a rectangle, and the player must move along the border of this new shape. How can I do this? Is it possible to have a non-rectangular path? How can I move the player character only along the path?

    EDIT: Here is an example of what I'm trying to achieve: initial screen layout. The character moves inside the world and then reaches the border again. The segment of the border contained in the smaller area is deleted, and the last path becomes part of the world border. The character moves again inside the world. Segments of the border contained in the smaller area are deleted, and so on.
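    For comparing the two pieces, the standard tool is the shoelace formula: walk each candidate polygon's vertices in order and accumulate the signed area. A minimal sketch, assuming each piece has already been assembled as an ordered vertex loop:

    ```cpp
    #include <vector>
    #include <cmath>

    struct Point { float x, y; };

    // Shoelace formula: signed area of a simple polygon given as an ordered loop.
    // The sign encodes winding direction; take fabs() to compare sizes.
    float polygonArea(const std::vector<Point>& poly) {
        float twiceArea = 0.0f;
        for (size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
            twiceArea += (poly[j].x + poly[i].x) * (poly[j].y - poly[i].y);
        }
        return std::fabs(twiceArea) * 0.5f;
    }

    // Usage: build both loops by joining the cut path with each half of the old
    // border, then erase the loop with the smaller polygonArea().
    ```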


  • Where does the light come from, using Maya/Panda3D?

    - by Aerovistae
    Total noob to Maya. Total noob to Panda3D. I plan on becoming really good at both as soon as I have free time, but right now I have an assignment due in a few hours which requires the following (the part which confuses me is bolded):

    - Model and texture a vehicle and two different obstacles
    - Build a scene graph in Panda with a plane, the vehicle, several copies of each of the obstacles, and (at least) a directional light
    - Program vehicle movement, constrained to a plane (no terrain)
    - **Working headlights**
    - Vehicle collides with obstacles

    How do I attach a light source to a model? I'm assuming this is done in Panda3D, but I'm sufficiently new to this that I wouldn't be astonished to hear it's part of the model.
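    In Panda3D, lights are scene-graph nodes, so a headlight is just a Spotlight parented to the vehicle's NodePath; it then inherits the vehicle's position and heading automatically. A hedged sketch (model path, offsets, and names are illustrative):

    ```python
    from direct.showbase.ShowBase import ShowBase
    from panda3d.core import Spotlight, PerspectiveLens

    base = ShowBase()
    vehicle = base.loader.loadModel("models/vehicle")  # illustrative path
    vehicle.reparentTo(base.render)

    # A spotlight parented to the vehicle rides along with it.
    headlight = Spotlight("headlight")
    headlight.setLens(PerspectiveLens())   # gives the light a projection cone
    light_np = vehicle.attachNewNode(headlight)
    light_np.setPos(0, 2, 0.5)             # roughly the front of the car, in model units

    # Lights only affect nodes they are set on; apply to the whole scene.
    base.render.setLight(light_np)
    base.run()
    ```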


  • OpenGL polygons not displaying

    - by Darestium
    I have tried to follow NeHe's OpenGL tutorial, lesson 2. I use SFML for my window creation. The problem I have is that neither the triangle nor the quad shows up on the screen:

    ```cpp
    #include <SFML/System.hpp>
    #include <SFML/Window.hpp>
    #include <iostream>

    void processEvents(sf::Window *app);
    void renderGlScene(sf::Window *app);
    void init();

    int main()
    {
        sf::Window app(sf::VideoMode(800, 600, 32), "Nehe Lesson 2");
        app.UseVerticalSync(false);
        init();

        while (app.IsOpened()) {
            processEvents(&app);
            renderGlScene(&app);
            app.Display();
        }
        return EXIT_SUCCESS;
    }

    void init()
    {
        glClearDepth(1.f);
        glClearColor(0.f, 0.f, 0.f, 0.f);

        // Enable z-buffer read and write
        glEnable(GL_DEPTH_TEST);
        glDepthMask(GL_TRUE);

        // Setup a perspective projection
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        gluPerspective(45.f, 1.f, 1.f, 500.f);
        glShadeModel(GL_SMOOTH);
    }

    void processEvents(sf::Window *app)
    {
        sf::Event event;
        while (app->GetEvent(event)) {
            if (event.Type == sf::Event::Closed) {
                app->Close();
            }
            if (event.Type == sf::Event::KeyPressed && event.Key.Code == sf::Key::Escape) {
                app->Close();
            }
        }
    }

    void renderGlScene(sf::Window *app)
    {
        app->SetActive();
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear the screen and the depth buffer
        glLoadIdentity();                                   // Reset the view
        glTranslatef(-1.5f, 0.0f, -6.0f); // Move left 1.5 units and into the screen 6.0

        glBegin(GL_TRIANGLES);
        glVertex3f( 0.0f,  1.0f, 0.0f); // Top
        glVertex3f(-1.0f, -1.0f, 0.0f); // Bottom left
        glVertex3f( 1.0f, -1.0f, 0.0f); // Bottom right
        glEnd();

        glTranslatef(3.0f, 0.0f, 0.0f);

        glBegin(GL_QUADS); // Draw a quad
        glVertex3f(-1.0f,  1.0f, 0.0f);
        glVertex3f( 1.0f,  1.0f, 0.0f);
        glVertex3f( 1.0f, -1.0f, 0.0f);
        glVertex3f(-1.0f, -1.0f, 0.0f);
        glEnd();
    }
    ```

    I would greatly appreciate it if someone could help me resolve my issue.
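    The likely culprit is in init(): the perspective matrix is loaded onto the modelview stack, so the glLoadIdentity() at the top of renderGlScene() wipes it out every frame and the geometry falls outside the default clip volume. A hedged fix, keeping the rest of the code as posted:

    ```cpp
    void init()
    {
        glClearDepth(1.f);
        glClearColor(0.f, 0.f, 0.f, 0.f);
        glEnable(GL_DEPTH_TEST);
        glDepthMask(GL_TRUE);

        // Load the perspective matrix onto the PROJECTION stack...
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluPerspective(45.f, 800.f / 600.f, 1.f, 500.f); // aspect should match the window

        // ...then switch back so per-frame transforms go to MODELVIEW.
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glShadeModel(GL_SMOOTH);
    }
    ```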


  • DirectX 9.0c and C++ GUI

    - by SullY
    Well, I'm trying to code a GUI for my engine, but I've got some problems. I know how to make a UI overlay, but buttons are still black magic to me: everything I tried got too complicated once the GUI grew. For example, I tried checking whether the mouse position falls on a pixel shown by the button, but for larger areas that gets too complicated. Now I'm searching for a tutorial on how to implement your own GUI; I'm really confused about it. I hope you know some good tutorials. By the way, I took a look at the DXUT sample, but it's too big to get an overview of.
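    Per-pixel tests are overkill for classic GUI buttons: the usual approach is to give every widget a screen-space rectangle and hit-test the mouse against it, only falling back to per-pixel checks for oddly shaped art. A minimal sketch (plain C++, no D3D dependency; names are illustrative):

    ```cpp
    struct Rect { int x, y, w, h; };

    // Axis-aligned point-in-rectangle test: the backbone of most GUIs.
    bool hitTest(const Rect& r, int mouseX, int mouseY) {
        return mouseX >= r.x && mouseX < r.x + r.w &&
               mouseY >= r.y && mouseY < r.y + r.h;
    }

    // Usage: one rect per button, checked on mouse-down and mouse-up.
    // Rect playButton = { 300, 200, 128, 32 };
    // if (hitTest(playButton, cursorX, cursorY)) { /* hovered or clicked */ }
    ```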


  • SpriteBatch.Begin() making my model not render correctly

    - by manning18
    I was trying to output some debug information using DrawString when I noticed my model was suddenly being rendered as if it were inside-out (like culling had been disabled) and the texture maps weren't applied. I commented out the DrawString method until I had only SpriteBatch.Begin() and .End(), and that alone was enough to cause the model rendering corruption; when I commented those calls out, the model rendered correctly. What could this be a symptom of? I've stripped it down to the barest code to isolate the problem. Draw code below (as stripped down as possible):

    ```csharp
    GraphicsDevice.Clear(Color.LightGray);

    foreach (ModelMesh mesh in TIEAdvanced.Meshes) {
        foreach (Effect effect in mesh.Effects) {
            if (effect is BasicEffect)
                ((BasicEffect)effect).EnableDefaultLighting();
            effect.CurrentTechnique.Passes[0].Apply();
        }
    }

    spriteBatch.Begin();
    spriteBatch.DrawString(spriteFont, "Camera Position: " + cameraPosition.ToString(), new Vector2(10, 10), Color.Blue);
    spriteBatch.End();

    GraphicsDevice.DepthStencilState = DepthStencilState.Default;
    TIEAdvanced.Draw(Matrix.CreateScale(0.025f), viewMatrix, projectionMatrix);
    ```
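    This looks like the classic XNA 4.0 state leak: SpriteBatch.Begin() silently sets BlendState.AlphaBlend, DepthStencilState.None, RasterizerState.CullCounterClockwise, and a clamped sampler, and those stay in effect for any 3D drawing that follows. Restoring the 3D states after End() is the usual fix; a hedged sketch:

    ```csharp
    spriteBatch.Begin();
    spriteBatch.DrawString(spriteFont, "Camera Position: " + cameraPosition, new Vector2(10, 10), Color.Blue);
    spriteBatch.End();

    // SpriteBatch changed these; put them back before any 3D drawing.
    GraphicsDevice.BlendState = BlendState.Opaque;
    GraphicsDevice.DepthStencilState = DepthStencilState.Default;
    GraphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise;
    GraphicsDevice.SamplerStates[0] = SamplerState.LinearWrap; // restores texture tiling

    TIEAdvanced.Draw(Matrix.CreateScale(0.025f), viewMatrix, projectionMatrix);
    ```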


  • What has the most efficient intersection test against an AABB tree - OBB, Cylinder or Capsule?

    - by identitycrisisuk
    I'm currently trying to find collisions in 3D between a volume tighter than an AABB and a tree of AABB volumes. I just need to know whether they are intersecting; no closest distance or collision response. An OBB, cylinder, or capsule would all roughly fit my purposes. Cylinder and capsule were the first things I thought of, but I have found little information online about detecting intersections with them. Am I right in thinking that they would always be more complex to perform separating-axis tests on, even though they might seem like simpler shapes? I figure that by the time I get my head around SAT for curved shapes I could have done the whole thing with OBBs, but I wanted to find out for sure.
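    For what it's worth, curved shapes sidestep SAT entirely: a sphere-vs-AABB test is just "clamp the center into the box and compare squared distance to r²", and a capsule is the same idea with the sphere center replaced by the closest point on the capsule's axis segment (the segment-box distance is the only fiddly part). A sketch of the sphere building block, under illustrative type names:

    ```cpp
    #include <algorithm>

    struct Vec3 { float x, y, z; };
    struct AABB { Vec3 min, max; };

    // Arvo's test: squared distance from sphere center to box vs. squared radius.
    bool sphereVsAABB(const Vec3& c, float r, const AABB& b) {
        auto clampAxis = [](float v, float lo, float hi) {
            return std::min(std::max(v, lo), hi);
        };
        float dx = c.x - clampAxis(c.x, b.min.x, b.max.x);
        float dy = c.y - clampAxis(c.y, b.min.y, b.max.y);
        float dz = c.z - clampAxis(c.z, b.min.z, b.max.z);
        return dx * dx + dy * dy + dz * dz <= r * r;
    }
    ```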


  • How can I make a 32 bit render target with a 16 bit alpha channel in DirectX?

    - by J Junker
    I want to create a render target that is 32-bit, with 16 bits each for alpha and luminance. The closest surface formats I can find in the DirectX SDK are:

        D3DFMT_A8L8    // 16-bit, using 8 bits each for alpha and luminance
        D3DFMT_G16R16F // 32-bit float format, using 16 bits each for the red and green channels

    But I don't think either of these will work, since D3DFMT_A8L8 doesn't have the precision and D3DFMT_G16R16F doesn't have an alpha channel (I need a separate blend state for alpha). How can I create a render target that allows a separate blend state for luminance and alpha, with 16-bit precision on each channel, that doesn't exceed 32 bits per pixel?
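    Whatever format you settle on, it's worth probing the driver first: render-target (and blending) support for a given format is hardware-dependent, and D3D9 exposes the check directly. A hedged sketch, assuming an existing IDirect3D9* d3d and the adapter's current display format:

    ```cpp
    // Ask the driver whether a format can be used as a render target.
    HRESULT hr = d3d->CheckDeviceFormat(
        D3DADAPTER_DEFAULT,
        D3DDEVTYPE_HAL,
        displayModeFormat,        // e.g. D3DFMT_X8R8G8B8, the current mode
        D3DUSAGE_RENDERTARGET,
        D3DRTYPE_TEXTURE,
        D3DFMT_G16R16F);          // the candidate format
    if (SUCCEEDED(hr)) {
        // Safe to create the render target. If you rely on blend states,
        // also query D3DUSAGE_QUERY_POSTPIXELSHADER_BLENDING for this format.
    }
    ```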


  • How do I accomplish fading texture trails in UDK?

    - by kdshay
    I would like to know how to leave a fading texture/material trail in UDK. For example (I'm not sure if there is a special name for this effect): a character may leave footprints that fade after X seconds, or a tank may leave a track trail as in Civilization IV. Here is another example of this type of effect; skip to 1:00 and watch the green slime texture: http://www.youtube.com/watch?v=KdJIauWjE8s How do I accomplish this effect in UDK? Any good tutorials? Thank you.
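    One common way to fake this in UDK is to drop short-lived decals along the mover's path and let the decal's lifetime drive the fade. The outline below is a rough UnrealScript sketch based on UDK's decal manager; treat the exact SpawnDecal parameters as assumptions to verify against your UDK build (UTWeapon's impact-decal code is a good reference):

    ```unrealscript
    var MaterialInterface TrailDecalMaterial; // authored material whose opacity fades out

    // Called from a footstep event or a vehicle tick, for example.
    function LeaveTrailMark(vector HitLocation, vector HitNormal)
    {
        WorldInfo.MyDecalManager.SpawnDecal(
            TrailDecalMaterial,       // material to project
            HitLocation,              // where the foot/track touched
            rotator(-HitNormal),      // project into the surface
            64, 64,                   // width, height in Unreal units (illustrative)
            10,                       // projection depth (illustrative)
            false);                   // bNoClip
    }
    ```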


  • D3DXMatrixDecompose gives different quaternion than D3DXQuaternionRotationMatrix

    - by Fraser
    In trying to solve this problem, I tracked it down to the conversion of the rotation matrix to a quaternion. In particular, consider the following matrix:

        -0.02099178   0.9997436    -0.008475631  0
         0.995325     0.02009799   -0.09446743   0
         0.09427284   0.01041905    0.9954919    0
         0            0             0            1

    SlimDX.Quaternion.RotationMatrix (which calls D3DXQuaternionRotationMatrix) gives a different answer than SlimDX.Matrix.Decompose (which uses D3DXMatrixDecompose). The answers they give (after being normalized) are:

                                   X            Y           Z            W
        Quaternion.RotationMatrix  -0.05244324  0.05137424  0.002209336  0.9972991
        Matrix.Decompose            0.6989997   0.7135442  -0.03674842  -0.03006023

    These are totally different (note that the signs of X, Z, and W differ). Note that these aren't q and -q (two quaternions that represent the same rotation); they face completely different directions. I've noticed that with matrices very close to this one (successive frames in the animation), Matrix.Decompose gives a solution that flips around wildly and only occasionally lands on the desired rotation, while Quaternion.RotationMatrix gives solutions that are stable but point in the wrong direction. This happens only for the right arm in my animation; for the left arm, both functions give the correct solution, the same quaternion within error tolerances. This makes me think there's some sort of numeric instability or sign issue going on. I tried implementing this and then this, but both gave me a completely incorrect solution (even for the matrices where the SlimDX functions worked correctly); maybe the rows and columns are flipped?
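    A detail worth checking before blaming either function: the 3x3 block above does not look like a proper rotation. Its rows are approximately (0, 1, 0), (1, 0, 0), (0, 0, 1), i.e. the X and Y axes are swapped, and its determinant works out to roughly -1. A matrix containing a reflection (negative determinant) has no quaternion representation, so both conversion routines are free to return garbage, and mirrored skeletons (a right arm built by mirroring a left arm) are exactly where such matrices tend to appear. A quick hedged check in C#:

    ```csharp
    // det < 0 means the matrix mirrors (it is not a pure rotation), and any
    // matrix-to-quaternion conversion is undefined for it.
    static float Det3x3(SlimDX.Matrix m)
    {
        return m.M11 * (m.M22 * m.M33 - m.M23 * m.M32)
             - m.M12 * (m.M21 * m.M33 - m.M23 * m.M31)
             + m.M13 * (m.M21 * m.M32 - m.M22 * m.M31);
    }

    // if (Det3x3(bone) < 0f) { /* negate one axis (e.g. scale X by -1) before converting */ }
    ```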


  • How to check battery usage of an iPhone/Android app?

    - by Gajoo
    I think the title says enough. For example, Unity can generate a report of how much CPU/GPU power your app is using, or how fast it's going to drain the device's battery; but what about applications developed using Cocos2d, or ones you develop directly with OpenGL? How should you profile them? In general, what should you profile? Or should I simply run the application and wait for its battery to run out?


  • Sprite/Tile Sheets Vs Single Textures

    - by Reanimation
    I'm making a race circuit which is constructed from various textures. For background, I'm writing it in C++ and creating quads with OpenGL, to which I assign a loaded .raw texture. Currently I use 23 textures of 500 x 500 px, which are all loaded and freed individually. Since the number of textures/tiles I'm using keeps increasing, I have now combined them all into a single 3000 x 2000 px sprite/tile sheet. Now I'm wondering: is it more efficient to load the textures individually, or to write extra code to extract a certain tile from the sheet? Is it better to load the sheet once, extract the 23 tiles, and store them, or to load the sheet each time and crop it to the correct tile? There seem to be a number of ways to implement it... Thanks in advance.
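    With an atlas you normally don't extract or crop anything at runtime: you upload the sheet once and point each quad's texture coordinates at the tile's sub-rectangle, which also avoids texture rebinds between tiles. A minimal sketch of the UV math, assuming the tiles sit on a uniform 500 px grid:

    ```cpp
    // Map tile (col, row) in a 3000x2000 atlas of 500x500 tiles to UVs.
    struct UVRect { float u0, v0, u1, v1; };

    UVRect tileUV(int col, int row) {
        const float sheetW = 3000.0f, sheetH = 2000.0f;
        const float tile = 500.0f;
        UVRect r;
        r.u0 = (col * tile) / sheetW;
        r.v0 = (row * tile) / sheetH;
        r.u1 = ((col + 1) * tile) / sheetW;
        r.v1 = ((row + 1) * tile) / sheetH;
        return r;
    }

    // Usage with the quad's corners:
    // UVRect t = tileUV(4, 1);
    // glTexCoord2f(t.u0, t.v0); glVertex2f(x, y);
    // glTexCoord2f(t.u1, t.v0); glVertex2f(x + w, y);  // ...and so on
    ```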


  • rotating menu with Actors in libgdx

    - by joecks
    I am intending to build a circular menu, with menu items distributed equally around the circle. When clicking on a menu item, the circle should rotate so that the selected item faces the top. I am using libgdx, and I am not very familiar with the Actors concept, so I intuitively tried to implement an Actor that draws a texture and then transforms it using Actions, with no success:

    ```java
    class CircleActor extends Actor {
        @Override
        public void draw(SpriteBatch batch, float parentAlpha) {
            batch.draw(texture1, 100, 100);
        }

        @Override
        public Actor hit(float x, float y) {
            return this;
        }
    }
    ```

    and the rotate action:

    ```java
    CircleActor circleActor = new CircleActor();
    circleActor.action(Forever.$(RotateBy.$(0.1f, 0.1f)));
    stage.addActor(circleActor);
    ```

    The texture is rectangular, but it does not work. 1. What is wrong? 2. Is this a good approach to solve the task? Thanks!
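    The rotation action is most likely running fine; the problem is that this draw() ignores the actor's transform entirely and always paints the texture at (100, 100). In the old scene2d API, the actor's position, origin, and rotation must be passed to batch.draw() by hand. A hedged sketch, assuming a TextureRegion and the 0.9.x public Actor fields:

    ```java
    // Inside CircleActor: honor the actor's transform when drawing.
    @Override
    public void draw(SpriteBatch batch, float parentAlpha) {
        batch.draw(region,           // TextureRegion wrapping texture1
                   x, y,             // actor position
                   originX, originY, // pivot: set to width/2, height/2 once
                   width, height,
                   scaleX, scaleY,
                   rotation);        // updated by RotateBy each frame
    }

    // When constructing the actor:
    // width = region.getRegionWidth(); height = region.getRegionHeight();
    // originX = width / 2; originY = height / 2;  // rotate around the center
    ```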


  • Unity: Assigning String value in inspector

    - by Marc Pilgaard
    I've got an issue with Unity I can't seem to comprehend, and it is possibly very simple: I am trying to write a simple piece of JavaScript where a button toggles the activation of a shield, by instantiating a prefab with Resources.Load("ActivateShieldPreFab") and destroying it again (I haven't implemented that part yet). I wish to assign this button through the Inspector, so I have created a String variable, which appears as intended in the Inspector. But it doesn't seem to register the Inspector input, even though I changed the value there; it only produces the error: "Input Key named: is unknown". When the button name is assigned within the code, there are no issues. Code as follows:

    ```javascript
    var ShieldOn = false;
    var stringbutton : String;

    function Start() {
    }

    function Update() {
        if (Input.GetKey(stringbutton) && ShieldOn != true) {
            Instantiate(Resources.Load("ActivateShieldPreFab"), Vector3(0, 0, 0), Quaternion.identity);
            ShieldOn = true;
        }
    }
    ```

    Hope somebody can help. In advance, thanks.
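    The error text is the giveaway: "Input Key named: is unknown" is printing an empty key name, so the Inspector field is most likely blank (or holding a stale serialized value from before the script changed). Giving the variable a default and guarding against the empty string makes the failure obvious; a hedged sketch (the default key "f" is arbitrary):

    ```javascript
    var ShieldOn = false;
    var stringbutton : String = "f"; // default; the Inspector value overrides this

    function Update() {
        // Guard: an empty key name makes Input.GetKey report "Input Key named: is unknown".
        if (String.IsNullOrEmpty(stringbutton)) {
            Debug.LogWarning("stringbutton is not set in the Inspector");
            return;
        }
        if (Input.GetKey(stringbutton) && !ShieldOn) {
            Instantiate(Resources.Load("ActivateShieldPreFab"), Vector3.zero, Quaternion.identity);
            ShieldOn = true;
        }
    }
    ```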


  • Changing coordinate system from Z-up to Y-up

    - by Jari Komppa
    Blender's coordinate system is different from what I'm used to, in that Z points upwards instead of Y. What would be the simplest way of converting all the world data (so that all animations, texture coordinates, etc. still work) so that Y points upwards?

    Clarification: Object positions are defined as matrices, so just switching translation/rotation/scale information in the matrices is not a trivial task.
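    Since the positions are matrices, the clean way is a change of basis: pick a rotation C that maps Z-up to Y-up (a -90 degree rotation about X works and, being a pure rotation, introduces no mirroring) and conjugate every object matrix, M' = C M C⁻¹, transforming bare points as p' = C p. Texture coordinates are untouched. A hedged sketch with a minimal column-major 4x4 type (illustrative, not any particular library):

    ```cpp
    #include <array>

    using Mat4 = std::array<float, 16>; // column-major

    Mat4 mul(const Mat4& a, const Mat4& b) {
        Mat4 r{};
        for (int c = 0; c < 4; ++c)
            for (int row = 0; row < 4; ++row)
                for (int k = 0; k < 4; ++k)
                    r[c * 4 + row] += a[k * 4 + row] * b[c * 4 + k];
        return r;
    }

    // C rotates -90 degrees about X: (x, y, z)_Zup -> (x, z, -y)_Yup.
    const Mat4 C     = { 1,0,0,0,  0,0,-1,0,  0,1,0,0,  0,0,0,1 };
    const Mat4 C_inv = { 1,0,0,0,  0,0,1,0,  0,-1,0,0,  0,0,0,1 };

    // Apply to every object matrix, including animation keyframes.
    Mat4 zUpToYUp(const Mat4& m) { return mul(C, mul(m, C_inv)); }
    ```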


  • How to access PhysicalMaterial from the Actor class?

    - by EmAdpres
    I use Projectile for my weapon system, and UDKProjectile has two main functions to handle hits of projectiles (the bullets of my weapon):

    ```unrealscript
    simulated function ProcessTouch(Actor Other, Vector HitLocation, Vector HitNormal)  // for Actors
    simulated event HitWall(vector HitNormal, actor Wall, PrimitiveComponent WallComp)  // everything except Actors (I guess)
    ```

    In the first method, how can I get the hit actor's physical material from the first parameter (Other), in order to produce the appropriate reaction (for example, the right collision sound)? A tricky (but hateful) way that I know works is to make a Trace from a little behind the actor to the actor and use the HitInfo parameter, which includes the physical material. But there should be a more standard way!


  • Cocos2d: Adding a CCSequence to a CCArray

    - by Axort
    I have a problem with an action performed by a sprite. I have one CCSequence in a CCArray, and a scheduled method (called every 5 seconds) makes the sprite run the action. The action is performed correctly only the first time (the first 5 seconds); after that, the action does whatever it wants, lol. Here is the code:

    In .h:

    ```objc
    @interface PowerUpLayer : CCLayer {
        PowerUp *powerUp;
        CCArray *trajectories;
    }
    @property (nonatomic, retain) CCArray *trajectories;
    ```

    In .mm:

    ```objc
    @implementation PowerUpLayer
    @synthesize trajectories;

    - (id)init
    {
        if ((self = [super init])) {
            [self createTrajectories];
            self.isTouchEnabled = YES;
            [self schedule:@selector(spawn:) interval:5];
        }
        return self;
    }

    - (void)createTrajectories
    {
        self.trajectories = [CCArray arrayWithCapacity:1];

        // Wave trajectory
        ccBezierConfig firstWave, secondWave;
        firstWave.controlPoint_1 = CGPointMake([[CCDirector sharedDirector] winSize].width + 30,
                                               [[CCDirector sharedDirector] winSize].height / 2);
        firstWave.controlPoint_2 = CGPointMake([[CCDirector sharedDirector] winSize].width - ([[CCDirector sharedDirector] winSize].width / 4), 0);
        firstWave.endPosition = CGPointMake([[CCDirector sharedDirector] winSize].width / 2,
                                            [[CCDirector sharedDirector] winSize].height / 2);

        secondWave.controlPoint_1 = CGPointMake([[CCDirector sharedDirector] winSize].width / 2,
                                                [[CCDirector sharedDirector] winSize].height / 2);
        secondWave.controlPoint_2 = CGPointMake([[CCDirector sharedDirector] winSize].width / 4,
                                                [[CCDirector sharedDirector] winSize].height);
        secondWave.endPosition = CGPointMake(-30, [[CCDirector sharedDirector] winSize].height / 2);

        id bezierWave1 = [CCBezierTo actionWithDuration:1 bezier:firstWave];
        id bezierWave2 = [CCBezierTo actionWithDuration:1 bezier:secondWave];
        id waveTrajectory = [CCSequence actions:bezierWave1, bezierWave2,
                             [CCCallFuncN actionWithTarget:self selector:@selector(setInvisible:)], nil];
        [self.trajectories addObject:waveTrajectory];
    }

    - (void)setInvisible:(id)sender
    {
        if (powerUp != nil) {
            [self removeChild:sender cleanup:YES];
            powerUp = nil;
        }
    }
    ```

    This is the scheduled method:

    ```objc
    - (void)spawn:(ccTime)dt
    {
        if (powerUp == nil) {
            powerUp = [[PowerUp alloc] initWithType:0];
            powerUp.sprite.position = CGPointMake([[CCDirector sharedDirector] winSize].width + powerUp.sprite.contentSize.width,
                                                  [[CCDirector sharedDirector] winSize].height / 2);
            [self addChild:powerUp.sprite z:-1];
            [powerUp.sprite runAction:((CCSequence *)[self.trajectories objectAtIndex:0])];
        }
    }
    ```

    I don't know what is happening; I never modify the content of the CCSequence after the first time. Thanks!
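    Cocos2d actions are one-shot objects: they keep internal state (elapsed time, cached start positions, target) and are not meant to be rerun after they finish, so running the same stored CCSequence instance works once and then misbehaves. The usual fix is to treat the stored action as a prototype and run a copy each spawn; a hedged sketch (pre-ARC memory management assumed, as in the posted code):

    ```objc
    // In spawn:, run a fresh copy instead of the stored (already-finished) action.
    CCAction *prototype = [self.trajectories objectAtIndex:0];
    CCAction *fresh = [[prototype copy] autorelease]; // CCAction supports NSCopying
    [powerUp.sprite runAction:fresh];
    ```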


  • Problem with alleg42.dll / program crashes / Allegro & Codeblocks

    - by user24152
    I'm having a serious problem with Allegro. The program should display random pixels on the screen, but when I build and run it I get the error message shown below. Here is the full code of my program:

    ```c
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include "allegro.h"

    #define Text_Color_Red makecol(255, 0, 0)

    int main()
    {
        int ret;
        int color_depth = 32;
        int x, y;
        int red, green, blue, color;

        // init allegro
        allegro_init();

        // install keyboard
        install_keyboard();

        // set color depth to 32 bits
        set_color_depth(color_depth);

        // init random seed
        srand(time(NULL));

        // init video mode to 640 x 480
        ret = set_gfx_mode(GFX_AUTODETECT_WINDOWED, 640, 480, 0, 0);
        if (ret != 0) {
            allegro_message(allegro_error);
            return 1;
        }

        // display string
        textprintf(screen, font, 0, 0, 10, 0, Text_Color_Red,
                   "Screen Resolution is: %dx%d -- Press ESC to quit !", SCREEN_W, SCREEN_H);

        // display pixels until ESC key is pressed
        while (!key[KEY_ESC]) {
            // set a random location
            x = 10 + rand() % (SCREEN_W - 20);
            y = 10 + rand() % (SCREEN_H - 20);

            // set a random color
            red = rand() % 255;
            green = rand() % 255;
            blue = rand() % 255;
            color = makecol(red, green, blue);

            // draw the pixel
            putpixel(screen, x, y, color);
        }

        // quit allegro
        allegro_exit();
    }
    END_OF_MAIN()
    ```

    Error message:

        AllegroPixels1.exe has encountered a problem and needs to close. We are sorry for the inconvenience.

        Error signature:
        AppName: allegropixels1.exe  AppVer: 0.0.0.0  ModName: alleg42.dll
        ModVer: 4.2.3.0  Offset: 0006c05c

    I am using Windows XP inside a virtual machine under Parallels 7.0.
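    The crash is almost certainly the textprintf call: Allegro 4's textprintf takes (bitmap, font, x, y, color, format, ...), so as written the color is 10 and the format string is the literal 0, a NULL pointer that gets handed to the printf machinery and crashes inside alleg42.dll. The extra "background" argument suggests textprintf_ex was intended; a hedged fix:

    ```c
    /* textprintf_ex(bmp, font, x, y, fg, bg, fmt, ...) -- bg = -1 means transparent.
       The format argument must actually be a string, not 0. */
    textprintf_ex(screen, font, 0, 0, Text_Color_Red, -1,
                  "Screen Resolution is: %dx%d -- Press ESC to quit !",
                  SCREEN_W, SCREEN_H);
    ```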


  • Engine for 2D Top-Down Physics-Based Skeletal Animation

    - by RylandAlmanza
    I just watched the Sui Generis video and was completely amazed, specifically by the part where the big troll thing is beating up the player with his flail. This got me really excited, and I would like to try implementing something like it in a 2D top-down format, something like this. That atloria example seems simple enough, but it's not exactly what I'm looking to make: I think atloria uses predefined animations, whereas I would like to make something more physics-based, like the Sui Generis engine does. So I'm wondering what physics engines might work for something like this, and whether I'd need to implement my own skeletal system or could just use "joints" and such from the engine. The only experience I have with physics engines is Box2D, which I've heard shouldn't be used for top-down settings, and I can think of a few reasons it wouldn't work out well. One of those reasons is gravity: in Box2D, gravity pulls towards a side of the screen (usually the bottom), and I wouldn't want my player's forearms constantly being pulled to one side. :) I should also mention that the programming language doesn't matter all that much to me; I'm currently playing with HTML5 stuff, though. :) Thanks in advance!
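    Box2D is less of a dead end for top-down games than it sounds: gravity is just a vector passed to the world, and top-down setups normally set it to zero and use damping as a stand-in for ground friction, with revolute joints (optionally limited and motorized) serving as the skeleton's articulations. A hedged sketch against the Box2D 2.x C++ API (fixtures omitted for brevity; older versions include <Box2D/Box2D.h> instead):

    ```cpp
    #include <box2d/box2d.h>

    int main() {
        // Top-down: no gravity; damping makes bodies coast to a stop.
        b2World world(b2Vec2(0.0f, 0.0f));

        b2BodyDef def;
        def.type = b2_dynamicBody;
        def.linearDamping = 4.0f;
        def.angularDamping = 4.0f;
        b2Body* upperArm = world.CreateBody(&def);
        b2Body* forearm  = world.CreateBody(&def);
        // (fixtures with shapes/density would be attached here)

        // An elbow: revolute joint with limits; enable a motor for active poses.
        b2RevoluteJointDef elbow;
        elbow.Initialize(upperArm, forearm, upperArm->GetWorldCenter());
        elbow.enableLimit = true;
        elbow.lowerAngle = 0.0f;
        elbow.upperAngle = 2.4f; // radians
        world.CreateJoint(&elbow);

        world.Step(1.0f / 60.0f, 8, 3);
        return 0;
    }
    ```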


  • Using normals in DirectX 10

    - by Dave
    I've got a working OBJ loader that loads vertices, indices, texture coordinates, and normals. As of right now it doesn't process the texture coordinates or normals, but it stores them in arrays, and it creates a valid mesh from the vertices and indices. Now I am trying to figure out how to make the shader use the correct normal from the array for the current vertex, given that I can't call setnormals() on my mesh. If I were to just use an index into my normals array corresponding to the vertex index, how would I retrieve the current index the shader is processing? By the way, I am trying to write a Blinn-Phong shader technique. Also, when I create the input layout and have added the NORMAL semantic to it, how do I list multiple semantics in that parameter? Would I just separate them with a space? P.S.: If you need to see any code, just let me know.
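    In D3D10 the shader never indexes the normal array itself: you interleave the normals into the vertex buffer and describe each attribute as its own entry in the D3D10_INPUT_ELEMENT_DESC array, one element per semantic rather than a space-separated string. A hedged sketch for a position/normal/texcoord vertex:

    ```cpp
    // One D3D10_INPUT_ELEMENT_DESC per semantic; byte offsets walk the struct.
    struct Vertex { float pos[3]; float normal[3]; float uv[2]; };

    D3D10_INPUT_ELEMENT_DESC layout[] = {
        { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0,  0, D3D10_INPUT_PER_VERTEX_DATA, 0 },
        { "NORMAL",   0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 12, D3D10_INPUT_PER_VERTEX_DATA, 0 },
        { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    0, 20, D3D10_INPUT_PER_VERTEX_DATA, 0 },
    };

    // device->CreateInputLayout(layout, 3, passDesc.pIAInputSignature,
    //                           passDesc.IAInputSignatureSize, &inputLayout);
    // The HLSL vertex shader then declares the matching input:
    //   struct VS_IN { float3 pos : POSITION; float3 n : NORMAL; float2 uv : TEXCOORD; };
    ```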


  • Need a good quality bitmap rotation algorithm for Android

    - by Lumis
    I am creating a kaleidoscopic effect on an Android tablet. I am using the code below to rotate a slice of an image, but as you can see in the image, rotating a bitmap by 60 degrees distorts it quite a lot (red rectangles): it smudges the image! I have set the dither and anti-alias flags, but they do not help much. I think it is just not a very sophisticated bitmap rotation algorithm.

    ```java
    canvas.save();
    canvas.rotate(angle, screenW / 2, screenH / 2);
    canvas.drawBitmap(picSlice, screenW / 2, screenH / 2, pOutput);
    canvas.restore();
    ```

    So I wonder if you can help me find a better way to rotate a bitmap. It does not have to be fast, because I intend to use the high-quality rotation only when saving the screen to the SD card; I would redraw the screen in memory before saving. Do you know any comprehensible or replicable algorithm for bitmap rotation that I could program, or use as a library? Or any other suggestion?

    EDIT: The answers below made me wonder whether Android has a bilinear or bicubic interpolation option, and after some searching I found that it does have its own version, called FilterBitmap. After applying it to my paint:

    ```java
    pOutput.setFilterBitmap(true);
    ```

    I get a much better result.
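    For the offline save path, the same bilinear filtering can be requested directly when baking the rotation into a new bitmap: the Matrix overload of Bitmap.createBitmap takes a filter flag. A hedged sketch:

    ```java
    import android.graphics.Bitmap;
    import android.graphics.Matrix;

    // Bake a filtered rotation into a new bitmap (slow path, fine for saving).
    Bitmap rotateFiltered(Bitmap src, float degrees) {
        Matrix m = new Matrix();
        m.postRotate(degrees, src.getWidth() / 2f, src.getHeight() / 2f);
        // The final 'true' enables bilinear filtering during the transform.
        return Bitmap.createBitmap(src, 0, 0, src.getWidth(), src.getHeight(), m, true);
    }
    ```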

