Search Results

Search found 5287 results on 212 pages for 'pseudo frame'.

Page 107/212

  • Corona SDK events dispatched with dispatchEvent() are handled immediately upon call. Why so?

    - by Amoxus
    I noticed, to my surprise, that an event dispatched with dispatchEvent(event) gets handled immediately at the call site, and not together with other events at a specific phase of the frame loop. Two main reasons for having an event system are: (1) so that code A can trigger code B while code A keeps priority, and (2) to make sure there are no freaky loopedy loops where code A calls code B, which calls code A, and so on. I wonder what Ansca's rationale is for handling events immediately like this. And does Corona handle loopedy loops and other such pitfalls gracefully? The following code demonstrates dispatchEvent():

        T = {}
        Z = display.newRect(100, 100, 100, 100)

        function T.doSomething()
            print("T.doSomething: begun")
            local event = { name = "myEventType", target = T }
            Z:dispatchEvent(event)
            print("T.doSomething: ended")
        end

        function Z.sayHello(event)
            print("Z.sayHello: begun and ended")
        end

        Z:addEventListener("myEventType", Z.sayHello)

        print("Main: begun")
        T.doSomething()
        print("Main: ended")

    However, Ansca claims the contrary at http://developer.coronalabs.com/reference/index/objectdispatchevent. Can anyone clear this up a little? (Using Corona simulator V 2012.840)
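    If the goal is for dispatched events to be processed together at a fixed phase of the frame loop rather than at the call site, one workaround is to queue them yourself and drain the queue once per frame. A minimal sketch of that idea in plain Java (not Corona API; the class and method names are invented for illustration):

        import java.util.ArrayDeque;
        import java.util.ArrayList;
        import java.util.Deque;
        import java.util.List;
        import java.util.function.Consumer;

        // Hypothetical deferred event bus: dispatch() only queues; drain() runs listeners once per "frame".
        public class DeferredEventBus {
            private final List<Consumer<String>> listeners = new ArrayList<>();
            private final Deque<String> pending = new ArrayDeque<>();

            public void addListener(Consumer<String> l) { listeners.add(l); }

            public void dispatch(String eventName) { pending.add(eventName); } // listeners are NOT called here

            public void drain() {                        // call once per frame, e.g. from an enterFrame handler
                int queued = pending.size();             // handle only what was queued before this drain
                for (int i = 0; i < queued; i++) {
                    String name = pending.poll();
                    for (Consumer<String> l : listeners) l.accept(name);
                }
            }

            public static void main(String[] args) {
                DeferredEventBus bus = new DeferredEventBus();
                bus.addListener(name -> System.out.println("handled: " + name));
                System.out.println("Main: begun");
                bus.dispatch("myEventType");             // nothing printed yet
                System.out.println("Main: ended");
                bus.drain();                             // "handled: myEventType" prints here
            }
        }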

    Read the article

  • How to deal with animated doors in isometric tiles

    - by George Profenza
    I've got a tricky issue I'm not sure how to tackle: I have an animated tile of a door. When the door is closed it should be sorted one way, but when it is open it needs to be sorted differently, as if it belonged to a different (neighbouring) tile. Here's the door closed: and the door opened: I imagine it would be possible to override the sorting system for such tiles and adjust the sorting based on the frame, but it feels a bit hacky. Has anyone encountered a similar scenario? Any elegant solutions?
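    One way to keep this from feeling too hacky is to make the depth-sort key something the tile itself reports, so an animated door simply answers with the key of whichever cell it visually occupies in its current frame. A rough sketch of that idea (all names invented for illustration):

        // Hypothetical door tile that contributes a different sort key depending on its animation frame.
        class DoorTile {
            int tileX, tileY;            // cell the closed door belongs to
            int openTileX, openTileY;    // neighbouring cell the open door visually occupies
            int currentFrame;            // 0 = fully closed
            int openFrameThreshold = 2;  // frame from which the door counts as "open" for sorting

            // Depth key used by the isometric painter's-algorithm sort (larger = drawn later).
            int sortKey() {
                boolean countsAsOpen = currentFrame >= openFrameThreshold;
                int x = countsAsOpen ? openTileX : tileX;
                int y = countsAsOpen ? openTileY : tileY;
                return x + y;            // a typical iso depth key: sum of the cell coordinates
            }
        }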

    Read the article

  • Simple algorithm for a Sudoku solver in Java

    - by user142050
    Just a quick note first: I originally asked this question on Stack Overflow but was referred here instead. I've been stuck on this thing for a while; I just can't wrap my head around it. For homework, I have to produce an algorithm for a sudoku solver that can check what number goes in a blank square in a row, in a column and in a block. It's a regular 9x9 sudoku and I'm assuming that the grid is already printed, so I have to produce the part where it solves it. I've read a ton of stuff on the subject; I just get stuck expressing it. I want the solver to do the following: if the value is smaller than 9, increase it by 1; if the value is 9, set it to zero and go back one square; if the value is invalid, increase it by 1. I've already read about backtracking and such, but I'm in the early stage of the class so I'd like to keep it as simple as possible. I'm more comfortable writing pseudo code than the algorithm itself, and it's the algorithm that is needed for this exercise. Thanks in advance for your help, guys.
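    The increase-by-one-or-back-up behaviour described above is backtracking written iteratively; the same search is often easier to express recursively. A compact sketch of that (assumes 0 marks an empty cell and the given clues are already in the grid; calling solve(grid, 0) fills the grid in place and returns whether a solution was found):

        public class SudokuSolver {
            // Tries to complete the 9x9 grid (0 = empty) starting from the given cell index (0..80).
            static boolean solve(int[][] g, int cell) {
                if (cell == 81) return true;                     // past the last cell: solved
                int r = cell / 9, c = cell % 9;
                if (g[r][c] != 0) return solve(g, cell + 1);     // fixed clue, skip it
                for (int v = 1; v <= 9; v++) {                   // "increase the value by 1" until valid
                    if (valid(g, r, c, v)) {
                        g[r][c] = v;
                        if (solve(g, cell + 1)) return true;
                    }
                }
                g[r][c] = 0;                                     // "set it to zero and go back one"
                return false;
            }

            // True if placing v at (r, c) breaks no row, column or 3x3 block constraint.
            static boolean valid(int[][] g, int r, int c, int v) {
                for (int i = 0; i < 9; i++)
                    if (g[r][i] == v || g[i][c] == v) return false;
                int br = (r / 3) * 3, bc = (c / 3) * 3;
                for (int i = br; i < br + 3; i++)
                    for (int j = bc; j < bc + 3; j++)
                        if (g[i][j] == v) return false;
                return true;
            }
        }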

    Read the article

  • How to implement "bullet time" in a multiplayer game?

    - by Tom
    I have never seen such a feature before, but it should provide an interesting gameplay opportunity. So yes, in a multiplayer/real-time environment (imagine an FPS), how could I implement a slow-motion/bullet-time effect? Something like an illusion for the player who is currently slo-mo'ed: everybody else sees him in real time, but he sees everything slowed down. Update: a side note: keep in mind that an FPS game has to be balanced in order for it to be fun. So yes, this bullet-time feature has to be solid, giving a small advantage to the "player" while not taking away from other players. Plus, there is the possibility that two players could activate their bullet time at the same time. Furthermore: I'm going to implement this in the future no matter what it takes, and the idea is to build a whole new game engine for all this. If that gives new options, I'm more than interested in hearing the ideas. Meanwhile, my team and I are thinking about this too; once our theory is crafted, I'll share it here. Is this even possible? The question "is this even possible" has been answered; now it's time to find the best solution. I'm keeping the "answer" open until something exceptionally good comes up, like a prototype theory with something close to working pseudo code.
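    As a purely mechanical starting point (not a full multiplayer design), bullet time can be modelled as a per-player time scale: each player owns a clock, and what that player perceives is advanced by scaled delta time while the authoritative simulation keeps real time. A tiny sketch, with all names invented here:

        // Per-player perceived clock; the server still advances the shared simulation in real time.
        class PlayerClock {
            double timeScale = 1.0;   // 1.0 = real time; e.g. 0.3 while this player's bullet time is active
            double localTime = 0.0;   // what this player perceives

            void tick(double realDt) {
                localTime += realDt * timeScale;
            }
        }

    Whether the slowdown stays purely presentational or actually slows the shared simulation for everyone nearby is the balance question raised above; the scaled clock is only the primitive either design would build on.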

    Read the article

  • Play videos with libwebkit in Ubuntu 11.10 server

    - by Luis Fagundes
    I'm using libwebkit (with python-webkit) to render a page that plays a video. This application works fine on an Ubuntu 11.04 Desktop with an Nvidia card and lots of libraries and software installed, but on a fresh Ubuntu 11.10 Server with an Intel 82945G/GZ card the video does not play. I guess either some codec package is missing or it's a driver problem. What could be missing for this to play? I'm trying with this video: http://video.eustasy.co.uk/480/ EDIT: it doesn't look like a driver problem. With Chromium I can play the video, but with libwebkit + python-webkit the video just shows the first frame and doesn't play. Any hints on what package could be missing? SOLVED: apparently it had to do with the lack of audio. While Chrome would play the video with no sound, libwebkit wouldn't start the video at all. Adding the user to the audio and video groups solved the problem.

    Read the article

  • Rotate player based on joystick

    - by pengume
    Hey everyone, I have this game that I am making in Android, and I have a touch-screen joystick that moves the player around based on the joystick's position. I can't figure out how to also get the player to rotate to the same angle as the joystick, so that when the joystick is to the left the player's bitmap is rotated to the left as well. Maybe someone here has some sample code I could look at; here is the joystick class that I am using:

        public class GameControls implements OnTouchListener {

            public float initx = DroidzActivity.screenWidth - 45;  // 255; // 320 og 425
            public float inity = DroidzActivity.screenHeight - 45; // 425; // 480 og 267
            public Point _touchingPoint = new Point(DroidzActivity.screenWidth - 45, DroidzActivity.screenHeight - 45);
            public Point _pointerPosition = new Point(DroidzActivity.screenWidth - 100, DroidzActivity.screenHeight - 100); // ogx 220 ogy 150
            private Boolean _dragging = false;
            private boolean attackMode = false;

            @Override
            public boolean onTouch(View v, MotionEvent event) { update(event); return true; }

            private MotionEvent lastEvent;
            public boolean ControlDragged;
            private static double angle;

            public void update(MotionEvent event) {
                if (event == null && lastEvent == null) {
                    return;
                } else if (event == null && lastEvent != null) {
                    event = lastEvent;
                } else {
                    lastEvent = event;
                }
                // drag drop
                if (event.getAction() == MotionEvent.ACTION_DOWN) {
                    if ((int) event.getX() > 0 && (int) event.getX() < 50
                            && (int) event.getY() > DroidzActivity.screenHeight - 160
                            && (int) event.getY() < DroidzActivity.screenHeight - 0) {
                        setAttackMode(true);
                    } else {
                        _dragging = true;
                    }
                } else if (event.getAction() == MotionEvent.ACTION_UP) {
                    if (isAttackMode()) {
                        setAttackMode(false);
                    }
                    _dragging = false;
                }
                if (_dragging) {
                    ControlDragged = true;
                    // get the pos
                    _touchingPoint.x = (int) event.getX();
                    _touchingPoint.y = (int) event.getY();
                    // Log.d("GameControls", "x = " + _touchingPoint.x + " y = " + _touchingPoint.y);
                    // bound to a box
                    if (_touchingPoint.x < DroidzActivity.screenWidth - 75) { // og 400
                        _touchingPoint.x = DroidzActivity.screenWidth - 75;
                    }
                    if (_touchingPoint.x > DroidzActivity.screenWidth - 15) { // og 450
                        _touchingPoint.x = DroidzActivity.screenWidth - 15;
                    }
                    if (_touchingPoint.y < DroidzActivity.screenHeight - 75) { // og 240
                        _touchingPoint.y = DroidzActivity.screenHeight - 75;
                    }
                    if (_touchingPoint.y > DroidzActivity.screenHeight - 15) { // og 290
                        _touchingPoint.y = DroidzActivity.screenHeight - 15;
                    }
                    // get the angle
                    setAngle(Math.atan2(_touchingPoint.y - inity, _touchingPoint.x - initx) / (Math.PI / 180));
                    // Move the ninja in proportion to how far
                    // the joystick is dragged from its center
                    _pointerPosition.y += Math.sin(getAngle() * (Math.PI / 180)) * (_touchingPoint.x / 70); // og 180 70
                    _pointerPosition.x += Math.cos(getAngle() * (Math.PI / 180)) * (_touchingPoint.x / 70);
                    // make the pointer go thru
                    if (_pointerPosition.x > DroidzActivity.screenWidth) { _pointerPosition.x = 0; }
                    if (_pointerPosition.x < 0) { _pointerPosition.x = DroidzActivity.screenWidth; }
                    if (_pointerPosition.y > DroidzActivity.screenHeight) { _pointerPosition.y = 0; }
                    if (_pointerPosition.y < 0) { _pointerPosition.y = DroidzActivity.screenHeight; }
                } else if (!_dragging) {
                    ControlDragged = false;
                    // Snap back to center when the joystick is released
                    _touchingPoint.x = (int) initx;
                    _touchingPoint.y = (int) inity;
                    // shaft.alpha = 0;
                }
            }

            public void setAttackMode(boolean attackMode) { this.attackMode = attackMode; }
            public boolean isAttackMode() { return attackMode; }
            public void setAngle(double angle) { this.angle = angle; }
            public static double getAngle() { return angle; }
        }

    I should also note that the player has animations based on when he is moving or attacking. EDIT: I got the angle and am rotating the sprite to the correct angle, however it rotates around the wrong spot. My sprite is one giant bitmap that gets cut into four pieces, and only one piece is shown at a time to animate walking. Here is the code I am using to rotate him right now:

        public void draw(Canvas canvas, int pointerX, int pointerY) {
            Matrix m;
            if (setRotation) {
                // canvas.save();
                m = new Matrix();
                m.reset();
                // spriteWidth and spriteHeight are for just the current frame showed
                // m.setTranslate(spriteWidth / 2, spriteHeight / 2);
                // get and set rotation for ninja based off of joystick
                m.preRotate((float) GameControls.getRotation());
                // create the rotated bitmap
                flipedSprite = Bitmap.createBitmap(bitmap, 0, 0, bitmap.getWidth(), bitmap.getHeight(), m, true);
                // set new bitmap to rotated ninja
                setBitmap(flipedSprite);
                setRotation = false;
                // canvas.restore();
                Log.d("Ninja View", "angle of rotation= " + (float) GameControls.getRotation());
            }

    And then the draw method:

        // create the destination rectangle for the ninjas current animation frame
        // pointerX and pointerY are from the joystick moving the ninja around
        destRect = new Rect(pointerX, pointerY, pointerX + spriteWidth, pointerY + spriteHeight);
        canvas.drawBitmap(bitmap, getSourceRect(), destRect, null);
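    For the "rotates around the wrong spot" part, a common fix is to rotate the current animation frame about its own centre at draw time, rather than rebuilding the whole sprite-sheet bitmap whenever the angle changes. A rough sketch of that idea (assumes the current frame has already been extracted into its own Bitmap; the class and parameter names are invented here):

        import android.graphics.Bitmap;
        import android.graphics.Canvas;
        import android.graphics.Matrix;

        // Sketch: rotate the frame about its centre, then place that centre at the player's position.
        class RotatedSpriteDrawer {
            void draw(Canvas canvas, Bitmap frameBitmap, float angleDegrees, float playerX, float playerY) {
                Matrix m = new Matrix();
                // Rotating about (0,0) is what makes the sprite swing around the wrong spot;
                // rotate about the frame's centre instead.
                m.postRotate(angleDegrees, frameBitmap.getWidth() / 2f, frameBitmap.getHeight() / 2f);
                // Move the frame so its centre lands on the player's position.
                m.postTranslate(playerX - frameBitmap.getWidth() / 2f, playerY - frameBitmap.getHeight() / 2f);
                canvas.drawBitmap(frameBitmap, m, null);
            }
        }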

    Read the article

  • Equation / formula to determine an object's position on an elliptical path

    - by David Murphy
    I'm making a space game, and as such I need objects to follow an elliptical path (orbit). I've worked out how to calculate all the important aspects of my orbits; the only remaining thing is how to have an object follow it. My Orbit class contains the major and minor (and by extension semi-major and semi-minor) lengths, the foci radius, and even the area and circumference. What is the equation to determine an object's x/y position (I only need 2D) on an ellipse, at a certain speed, after a period of time? Basically, every frame I want to update the position based on the amount of elapsed time. I would like to have the speed along the path speed up and slow down according to the distance from the object it's orbiting, but I'm not sure how to factor this in, given that at any point in time the speed has changed from its previous speed. EDIT: I can't answer my own question, but I found the question and answer are already on Stack Exchange: Kepler orbit: get position on the orbit over time.
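    For reference, the standard recipe the linked Kepler-orbit question points to is: advance the mean anomaly M = 2*pi*t / period, solve Kepler's equation M = E - e*sin(E) for the eccentric anomaly E (a few Newton steps are enough), then take x = a*(cos E - e), y = b*sin E relative to the focus. This automatically makes the object move faster when it is closer to the body it orbits. A small sketch of that, assuming t is measured from periapsis:

        public class OrbitPosition {
            // Returns {x, y} relative to the focus (the body being orbited) for time t since periapsis.
            static double[] positionAt(double a, double e, double period, double t) {
                double b = a * Math.sqrt(1 - e * e);             // semi-minor axis
                double M = 2 * Math.PI * (t % period) / period;  // mean anomaly: advances uniformly with time
                double E = M;                                    // eccentric anomaly: solve M = E - e*sin(E)
                for (int i = 0; i < 8; i++) {                    // Newton's method; converges quickly for e < 0.9
                    E -= (E - e * Math.sin(E) - M) / (1 - e * Math.cos(E));
                }
                return new double[] { a * (Math.cos(E) - e), b * Math.sin(E) };
            }
        }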

    Read the article

  • Can I change the order of these OpenGL / Win32 calls?

    - by Adam Naylor
    I've been adapting the NeHe OpenGL/Win32 code to be more object oriented, and I don't like the way some of the calls are structured. The example has the following pseudo structure:

        Register window class
        Change display settings with a DEVMODE
        Adjust window rect
        Create window
        Get DC
        Find closest matching pixel format
        Set the pixel format to closest match
        Create rendering context
        Make that context current
        Show the window
        Set it to foreground
        Set it to having focus
        Resize the GL scene
        Init GL

    The points in bold are what I want to move into a rendering class (the rest are what I see as pure Win32 calls), but I'm not sure if I can call them after the Win32 calls. Essentially what I'm aiming for is to encapsulate the Win32 calls into a Platform::Initiate() type method and the rest into a sort of Renderer::Initiate() method. So my question essentially boils down to: "Would OpenGL allow these methods to be called in this order?"

        Register window class
        Adjust window rect
        Create window
        Get DC
        Show the window
        Set it to foreground
        Set it to having focus
        Change display settings with a DEVMODE
        Find closest matching pixel format
        Set the pixel format to closest match
        Create rendering context
        Make that context current
        Resize the GL scene
        Init GL

    (Obviously passing through the appropriate window handles and device contexts.) Thanks in advance.

    Read the article

  • How do I fix Flash player in Chrome 20?

    - by r0ckarong
    I just updated to Chrome 20.0.1132.43, which includes Flash 11.3.31.109. Since that update, most of the Flash videos I watch online randomly display erratic behaviour: skipping like a broken CD, "fast forwarding" at twice the frame rate with the audio scrambled due to too-fast playback, every video restarting after two seconds, the fullscreen overlay being displayed but with no image, fullscreen taking several seconds to actually show a picture, or the YouTube player going fullscreen but then hanging in the controls fade-out animation with no picture while the sound keeps playing. Is there anything I can do to resolve or work around this? I'm using Ubuntu 12.10 64-bit and the latest nvidia-current drivers, 295.40, on a GeForce GT 440. It used to work in previous versions of Google Chrome.

    Read the article

  • Panning a 3d viewport in 2d direction with rotated camera

    - by Noob Game Developer
    I am using the code below to pan the viewport (ActionScript 3 code using the Flare3D framework):

        _mainCamera.x -= Input3D.mouseXSpeed;
        _mainCamera.z += Input3D.mouseYSpeed;

    where Input3D.mouseXSpeed / Input3D.mouseYSpeed give the displacement of the mouse on the X/Y axis since the position in the last frame. This works perfectly if my camera is not rotated. However, if I rotate the camera (x by 30, y by 60) and then pan, it goes wrong. It is actually panning correctly according to the code, but this is not the desired result, and I know I need to do some math to get the correct x/y, which I am not aware of. Can someone help me achieve it? Update: I am getting an idea, but I am not sure how to do it:

        Get the mouse X/Y deltas (xd, yd)
        Get the current camera coords (pos3d)
        Convert to screen coords (pos2d)
        Add the deltas to the screen coords (pos2d + (xd, yd))
        Convert the resulting coords back to 3D coords
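    A simpler route than the screen-space round trip, as a sketch, is to rotate the mouse delta by the camera's yaw (its rotation about the vertical axis), so the pan happens along the camera's own right/forward axes instead of the world X/Z axes. This is plain Java rather than Flare3D API, and the exact signs depend on the engine's axis conventions, so treat it as a starting point:

        public class PanMath {
            // Rotates a screen-space mouse delta (dx, dy) by the camera yaw and returns
            // the world-space {deltaX, deltaZ} to add to the camera position.
            static double[] rotateDelta(double dx, double dy, double yawDegrees) {
                double yaw = Math.toRadians(yawDegrees);
                double worldDx = dx * Math.cos(yaw) - dy * Math.sin(yaw);
                double worldDz = dx * Math.sin(yaw) + dy * Math.cos(yaw);
                return new double[] { worldDx, worldDz };
            }
        }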

    Read the article

  • Is error suppression acceptable in role of logic mechanism?

    - by Rarst
    This came up in code review at work in the context of PHP and the @ operator. However, I want to keep this in a more generic form, since the few questions about it I found on SO got bogged down in technical specifics. Accessing an array field which is not set results in an error message, and is commonly handled by the following logic (pseudo code):

        if field value is set
            output field value

    The code in question was doing it like this:

        start ignoring errors
        output field value
        stop ignoring errors

    The reasoning for the latter was that it's more compact and readable code in this specific case. I feel that those benefits do not justify a misuse (IMO) of language mechanics. Is such code being "clever" in a bad way? Is discarding a possible error (for any reason) acceptable practice over explicitly handling it (even if that leads to more extensive and/or intensive code)? Is it acceptable for programming operators to cross the boundaries of their intended use (like in this case using error handling for controlling output)? Edit: I wanted to keep it more generic, but the specific code being discussed was like this:

        if ( isset($array['field']) ) {
            echo '<li>' . $array['field'] . '</li>';
        }

    vs the following example:

        echo '<li>' . @$array['field'] . '</li>';

    Read the article

  • AS3 Calculating Delta Time In Seconds

    - by user1133079
    Here is how I've been trying to implement delta time, based on different internet resources:

        var startTime:Number = getTimer();
        game.Update(deltaTime);
        deltaTime = Number(getTimer() - startTime) * 0.001;

    My issue with this is that it doesn't seem to be giving me accurate timing. The main update shows the frame time at 0.001, and when reinitializing the level it goes to 0.002. I'm using dt elsewhere for a timer, and later on for time-based physics, so I would like it to work as expected. I must be missing something silly.
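    One thing worth checking: the snippet above measures only how long game.Update itself took, not the time elapsed since the previous frame, and it feeds last frame's value into the current update. The usual pattern is to measure from timestamp to timestamp across frames. A sketch of that pattern (written here in Java for illustration; the same idea works in AS3 with getTimer()):

        public class GameLoop {
            private long lastTimeMs = System.currentTimeMillis();

            // Called once per frame: dt is the real time since the previous call, in seconds.
            void frame() {
                long now = System.currentTimeMillis();
                double dt = (now - lastTimeMs) * 0.001;
                lastTimeMs = now;
                update(dt);
            }

            void update(double dt) {
                // advance timers, physics, animations, etc. by dt
            }
        }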

    Read the article

  • Ubuntu gets slower by the day

    - by Doug
    I've noticed that Ubuntu has been getting slower and slower to boot, launch programs, etc. I installed 12.04 about 4 months ago, now 12.10, running on a quad-core Intel Q8300, 4GB RAM, and an 80GB WD IDE drive. For some reason (ever since 11.04), I've noticed that right after installation the speed is good, but the longer I have the OS installed, every bootup gets slower and slower, launching programs gets slower, and frame rates change radically (the onboard GF9400 gets anywhere from 60fps down to 12 in the worst cases). I would think maybe the HD is the issue; however, I installed 11.10 on a 160GB SATA drive and the same thing occurred. Looking at system resources, I'm holding steady at 1GB memory usage (I have 4GB, but it's actually showing 3.6GB, dunno why), no swap usage, and using right around 4% CPU currently. HD capacity is only 28% used. Has anyone else run into this issue? I love Ubuntu to death, but using distros other than Ubuntu, I don't have this problem.

    Read the article

  • Creating a DrawableGameComponent

    - by Christian Frantz
    If I'm going to draw cubes effectively, I need to get rid of the numerous draw calls I have, and what has been suggested is that I create a "mesh" of my cubes. I already have them stored in a single vertex buffer, but the issue lies in my draw method, where I am still looping through every cube in order to draw it. I thought this was necessary since each cube has a set position, but it lowers the frame rate incredibly. What's the easiest way to go about this? I have a class CubeChunk that inherits Microsoft.Stuff.DrawableGameComponent, but I don't know what comes next. I suppose I could just use the chunk of cubes created in my cube class, but that would just keep me going in circles and drawing each cube individually. The goal here is to create a draw method that draws my chunk as a whole, and not to draw individual cubes as I've been doing.

    Read the article

  • Is there a name for this use of the State design pattern?

    - by Chris C
    I'm looking to see if there is a particular name for this style of programming a certain kind of behaviour into a program. Said program runs in real time, in an update loop, and uses the State design pattern to do some work, but it's the specific way it does the work that I want to know about. Here's how it's used:

        - Object Foo constructed, with concrete StateA object in it
        - First loop runs
        --- Foo.Run function calls StateA.Bar
        --- in StateA.Bar, replace Foo's state with StateB
        - Second loop runs
        --- Foo.Run calls StateB.Bar
        - Third loop runs
        --- Foo.Run calls StateB.Bar
        - Fourth loop
        --- etc.

    So in short, Foo doesn't have an explicit Initialize function. It just has Run, but Run does something unique in the first frame to initialize something for Foo and then replaces itself with a different action that repeats in all the frames following it, thus never needing to check whether Foo is already initialized. It's just a "press start and go" action. What would you call implementing this type of behaviour?
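    As a sketch of the behaviour being described (the names are invented here): the first state performs the one-off setup and then swaps itself out, so Run never needs an "am I initialized?" check:

        public class Foo {
            interface State { void bar(Foo owner); }

            // Runs exactly once: does the setup, then replaces itself with the steady-state behaviour.
            static class InitState implements State {
                public void bar(Foo owner) {
                    System.out.println("one-off initialization");
                    owner.state = new RunState();
                }
            }

            static class RunState implements State {
                public void bar(Foo owner) { System.out.println("per-frame work"); }
            }

            private State state = new InitState();

            public void run() { state.bar(this); }   // called every frame by the update loop

            public static void main(String[] args) {
                Foo foo = new Foo();
                for (int frame = 0; frame < 4; frame++) foo.run();
            }
        }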

    Read the article

  • Deleting a game object causing an access violation

    - by Balls
    I tried doing this, but it causes an access violation:

        void GameObjectFactory::Update() {
            for( std::list<GameObject*>::iterator it = gameObjectList.begin() .....
                (*it)->Update();
        }

        void Bomb::Update() {
            if( time == 2.0f ) {
                gameObjectFactory->Remove( this );
            }
        }

        void GameObjectFactory::Remove( ... ) {
            gameObjectList.remove( ... );
        }

    My thought would be to mark the object as dead, then let the factory handle the deletion on the next frame. Is that the best and fastest way? What do you think?
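    The mark-dead-then-sweep idea avoids mutating the list while it is still being iterated, which is the usual cause of this kind of crash. A minimal sketch of it (written in Java rather than C++, with the names invented for illustration):

        import java.util.ArrayList;
        import java.util.List;

        class GameObjectFactory {
            interface GameObject {
                void update();      // may only flag the object as dead, never remove it directly
                boolean isDead();
            }

            private final List<GameObject> objects = new ArrayList<>();

            void add(GameObject o) { objects.add(o); }

            // Update everything first, then remove the dead ones in a separate pass,
            // so nothing is erased from the list while it is still being walked.
            void update() {
                for (GameObject o : objects) {
                    o.update();
                }
                objects.removeIf(GameObject::isDead);
            }
        }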

    Read the article

  • Processing Text and Binary (Blob, ArrayBuffer, ArrayBufferView) Payload in WebSocket - (TOTD #185)

    - by arungupta
    The WebSocket API defines different send(xxx) methods that can be used to send text and binary data. This Tip Of The Day (TOTD) will show how to send and receive text and binary data using WebSocket. TOTD #183 explains how to get started with a WebSocket endpoint using GlassFish 4. A simple endpoint from that blog looks like:

        @WebSocketEndpoint("/endpoint")
        public class MyEndpoint {
            public void receiveTextMessage(String message) {
                . . .
            }
        }

    A method with a first parameter of type String is invoked when a text payload is received. The payload of the incoming WebSocket frame is mapped to this first parameter. An optional second parameter, Session, can be specified to map to the "other end" of this conversation. For example:

        public void receiveTextMessage(String message, Session session) {
            . . .
        }

    The return type is void, which means no response is returned to the client that invoked this endpoint. A response may be returned to the client in two different ways. First, set the return type to the expected type, such as:

        public String receiveTextMessage(String message) {
            String response = . . .
            . . .
            return response;
        }

    In this case a text payload is returned back to the invoking endpoint. The second way to send a response back is to use the mapped session to send a response using one of the sendXXX methods in Session, when and if needed:

        public void receiveTextMessage(String message, Session session) {
            . . .
            RemoteEndpoint remote = session.getRemote();
            remote.sendString(...);
            . . .
            remote.sendString(...);
            . . .
            remote.sendString(...);
        }

    This shows how duplex and asynchronous communication between the two endpoints can be achieved, and can be used to define different message exchange patterns between the client and server. The WebSocket client can send the message as:

        websocket.send(myTextField.value);

    where myTextField is a text field in the web page. A binary payload in the incoming WebSocket frame can be received if ByteBuffer is used as the first parameter of the method signature. The endpoint method signature in that case would look like:

        public void receiveBinaryMessage(ByteBuffer message) {
            . . .
        }

    From the client side, the binary data can be sent using Blob, ArrayBuffer, and ArrayBufferView. Blob is just raw data and the actual interpretation is left to the application. ArrayBuffer and ArrayBufferView are defined in the TypedArray specification and are designed for sending binary data over WebSocket. In short, ArrayBuffer is a fixed-length binary buffer with no format and no mechanism for accessing its contents. These buffers are manipulated using one of the views defined by the subclasses of ArrayBufferView listed below:

        Int8Array (signed 8-bit integer or char)
        Uint8Array (unsigned 8-bit integer or unsigned char)
        Int16Array (signed 16-bit integer or short)
        Uint16Array (unsigned 16-bit integer or unsigned short)
        Int32Array (signed 32-bit integer or int)
        Uint32Array (unsigned 32-bit integer or unsigned int)
        Float32Array (signed 32-bit float or float)
        Float64Array (signed 64-bit float or double)

    WebSocket can send binary data using an ArrayBuffer with a view defined by a subclass of ArrayBufferView, or a subclass of ArrayBufferView itself. The WebSocket client can send the message using Blob as:

        blob = new Blob([myField2.value]);
        websocket.send(blob);

    where myField2 is a text field in the web page.
    The WebSocket client can send the message using ArrayBuffer as:

        var buffer = new ArrayBuffer(10);
        var bytes = new Uint8Array(buffer);
        for (var i = 0; i < bytes.length; i++) {
            bytes[i] = i;
        }
        websocket.send(buffer);

    A concrete implementation of receiving the binary message may look like:

        @WebSocketMessage
        public void echoBinary(ByteBuffer data, Session session) throws IOException {
            System.out.println("echoBinary: " + data);
            for (byte b : data.array()) {
                System.out.print(b);
            }
            session.getRemote().sendBytes(data);
        }

    This method is just printing the binary data for verification, but you may actually be storing it in a database or converting it to an image or something more meaningful. Be aware of TYRUS-51 if you are trying to send binary data from server to client using the method return type. Here are some references for you:

        JSR 356: Java API for WebSocket - Specification (Early Draft) and Implementation (already integrated in GlassFish 4 promoted builds)
        TOTD #183 - Getting Started with WebSocket in GlassFish
        TOTD #184 - Logging WebSocket Frames using Chrome Developer Tools, Net-internals and Wireshark

    Subsequent blogs will discuss the following topics (not necessarily in that order):

        Error handling
        Custom payloads using encoder/decoder
        Interface-driven WebSocket endpoint
        Java client API
        Client and Server configuration
        Security
        Subprotocols
        Extensions
        Other topics from the API

    Read the article

  • Searching for fast skeletal animation algorithm

    - by igf
    Is it theoretically possible to dynamically animate a scene with 100-150 character meshes of about 400 polys each on high-end GL ES 2.0 mobile devices, or should I definitely use prerendered keyframe animation? The scene has only one light source and precalculated shadow maps. The view is from the top, like in Warcraft 3. There are no other meshes or objects. 2D collision detection between objects is calculated via spatial hashing. It can be any algorithm besides ragdoll, but it must supply fast and simple skeletal animation, for each mesh, in a frame with 100+ low-poly meshes. Any ideas?

    Read the article

  • Cannot connect to ethernet port

    - by Jnir
    I'm running Ubuntu 12.04 LTS. I'm trying to connect via ethernet for the first time but have had no success. ifconfig eth0 returns:

        eth0      Link encap:Ethernet  HWaddr: 00:01:2e:3f:f1:a0
                  inet6 addr: fe80::201:2eff:fe3f:f1a0 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:68 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:183 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:8374 (8.3 KB)  TX bytes:42944 (42.9 KB)
                  Interrupt:16 Base address:0x6c00

    /etc/network/interfaces has:

        auto lo
        iface lo inef loopback

        auto eth0
        iface eth0 inet dhcp

    sudo /etc/init.d/networking prints:

        * Running /etc/init.d/networking restart is deprecated because it may not enable again some interfaces
        * Reconfiguring network interfaces...
        Failed to bring up eth0
        [OK]

    Read the article

  • movement of sprites with kinect and xna

    - by pablopp83
    I'm working on a project with the Kinect SDK and XNA 4.0. I need to take the position of the hands and draw a sprite over them. I'm doing it directly and, because of that, I get a "trembling hands" effect. So I was thinking of making the sprite move from the previous position to the new one, given in every frame by the new hand position; this way, the sprite does not jump from one position to another. This is working just fine, but I'm using a constant value for the velocity, and I would really like to use a variable velocity given by the difference between the previous and the new position. That is, if the hand moves more quickly in reality, the velocity will be higher. I really don't have a clue how to make this work. Can somebody point me in the right direction? Thanks.
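    One simple way to get exactly that behaviour is to move the sprite a fixed fraction of the remaining distance each frame (a basic low-pass filter): a big hand movement produces a big step, while small tracking jitter is smoothed away. A sketch of the idea in plain Java (not XNA/Kinect API; names invented here):

        class SmoothedSprite {
            double x, y;                          // where the sprite is drawn
            static final double SMOOTHING = 0.3;  // fraction of the remaining gap covered per frame (0..1)

            // Call once per frame with the raw hand position reported by the tracker.
            void update(double handX, double handY) {
                x += (handX - x) * SMOOTHING;     // large gap -> fast movement, tiny jitter -> barely moves
                y += (handY - y) * SMOOTHING;
            }
        }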

    Read the article

  • Farseer Physics: Ways to create a Body?

    - by EdgarT
    I want to create something similar to this using Farseer and Kinect: https://vimeo.com/33500649 This is my implementation so far: http://www.youtube.com/watch?v=GlIvJRhco4U I have the outline vertices and the triangulation of the user, and following the Texture to Polygon sample I used this line to create the shape, where farseerObject is a list of vertices of the triangles: _compound = BodyFactory.CreateCompoundPolygon(World, farseerObject, 1f, BodyType.Dynamic); But I have to update the body each frame (around 30 fps), and this is very slow; I get just 2 or 3 fps. Is there another (faster) way to create the Body from a list of triangles or from the contour vertices?

    Read the article

  • HP Probook touchpad

    - by ScienceSE
    I wrote about this problem some weeks ago, but now the question is: why does the touchpad not work as well as it does in Windows? I tried some experiments: when I use Windows and accidentally touch the touchpad, the cursor doesn't move and no clicks occur. So in Windows working with the touchpad is quite normal, but in Ubuntu, if I accidentally touch the touchpad, even with my wrist, the cursor moves, clicks happen, and so on. In Windows, the cursor moves only when I touch the pad with a finger. Also, if, for example, I hold one finger on the touchpad and simultaneously move a second finger on it, the cursor doesn't move in Windows, but in Ubuntu it does. Is it "super sensitive" in Ubuntu, or what? I also tried to apply the option in "Mouse and touchpad" called "Disable touchpad when typing", but nevertheless it isn't disabled while I'm typing. Note: this option is circled in a red frame; I don't know if that is a good "sign". What can I do to fix the problem?

    Read the article

  • How can I imitate interaction and movement in Diablo II?

    - by user422318
    I'm prototyping a simple browser-based game. It's played from a top-down perspective on a 2D canvas. You left-click on a point on the map, and your character begins walking to it. If you click on a different point on the map, your character begins walking to the new point. It's similar to Diablo II: http://www.youtube.com/watch?v=EvDKt-To6K0&feature=related How can I best imitate this movement system for a player? Ideas so far:

        - Track current coords and target coords.
        - If the target coords are exactly up, left, right, or down, then increment the appropriate direction until you get there.
        - Implied else: the target coords are in a quadrant. To make this movement look natural, the character will have to move diagonally. For example, pretend the target is to the northeast. For each game frame, alternate incrementing the current coordinates in the north and then east directions.
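    Rather than alternating north/east steps, the usual approach is to step along the normalized direction vector from the current position to the clicked point each frame, which produces a straight, natural-looking path at constant speed for any angle. A sketch of that (plain Java; names invented for illustration):

        class WalkingCharacter {
            double x, y;              // current position
            double targetX, targetY;  // last clicked point
            double speed = 120.0;     // pixels per second

            // Call every frame with the elapsed time in seconds.
            void update(double dt) {
                double dx = targetX - x, dy = targetY - y;
                double dist = Math.sqrt(dx * dx + dy * dy);
                double step = speed * dt;
                if (dist <= step) {     // close enough: snap to the target and stop
                    x = targetX;
                    y = targetY;
                    return;
                }
                x += dx / dist * step;  // move along the unit direction vector
                y += dy / dist * step;
            }
        }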

    Read the article

  • How to render Minecraft on the GPU?

    - by l0b0
    Hardware:

        Intel i7
        AMD Radeon HD 6970
        SSD with plenty of space
        6 GB RAM

    Software:

        OpenJDK 6, 7, and Oracle Java 7 (reproducible with all three)
        AMD Catalyst 12.8 and open source driver (reproducible with both)
        Ubuntu 12.04 x86_64 and older
        Minecraft 1.3.2 vanilla and older

    On this setup I am getting rubbish frame rates after a short while of playing, dropping from about 45-55 to 15 in a couple of minutes. CPU use is 40-45 even when rendering the opening screen at 1920x1280, and gameRenderer is using about 90% CPU when playing. Rather than trying to eke out a few more FPS out of an obviously broken rendering pipeline, I really hope to find a solution to make the GPU render Minecraft.

    Read the article

  • OpenGL VBOs are slower than glDrawArrays.

    - by Arelius
    So, this seems odd to me. I upload a large buffer of vertices, then every frame I call glBindBuffer and then the appropriate gl*Pointer functions with offsets into the buffer, then I use glDrawArrays to draw all of my triangles. I'm only drawing about 100K triangles, yet I'm getting about 15 FPS. Here is where it gets weird: if I change the code to not call glBindBuffer, change the gl*Pointer calls to use actual pointers into the array I have in system memory, and then call glDrawArrays the same way, my framerate jumps up to about 50 FPS. Any idea what weird thing I could be doing that would cause this? Did I maybe forget to call glEnable(GL_ALLOW_VBOS_TO_RUN_FAST) or something?

    Read the article
