Search Results

Search found 7346 results on 294 pages for 'touch flo 3d'.

Page 43/294 | < Previous Page | 39 40 41 42 43 44 45 46 47 48 49 50  | Next Page >

  • Touch Screen Ubuntu 10.04LTS

    - by WalterJ89
    I'm trying to get a touch screen working with Ubuntu 10.04 LTS (64-bit). It is a serial touchscreen connected at /dev/ttyS0; I know that connection works because I get garbage in the terminal when I enable it. The screen previously used a 3M driver (I believe) under Windows XP. My knowledge of Linux is passive, so I generally pick up things as I need them. To get this working I came across a lot of tutorials (many of them somewhat outdated), but I'm still at a loss. I'm not sure where Linux drivers are supposed to go (/usr/ or /dev/?); most tutorials skip over that part. I have tried editing /etc/X11/xorg.conf, unsuccessfully - I'm not sure what the syntax is supposed to be. Thank you.
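
    For what it's worth, X input drivers are normally installed as distribution packages rather than copied into /usr or /dev, and a serial touchscreen is then declared with an InputDevice section in /etc/X11/xorg.conf. The sketch below only illustrates that syntax; the driver name (mutouch is the usual X driver for 3M MicroTouch serial screens, if the distribution ships it) and every option value are assumptions that would need checking and calibration against the actual hardware:

        Section "InputDevice"
            Identifier "TouchScreen"
            Driver     "mutouch"                # assumption: X driver for 3M MicroTouch serial screens
            Option     "Device"          "/dev/ttyS0"
            Option     "SendCoreEvents"  "true"
            Option     "MinX" "0"               # hypothetical calibration range
            Option     "MaxX" "16383"
            Option     "MinY" "0"
            Option     "MaxY" "16383"
        EndSection

    On older X servers the section typically also has to be referenced from the ServerLayout section (InputDevice "TouchScreen" "SendCoreEvents") for the server to pick it up.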

    Read the article

  • Triangle Picking: Picking Back Faces

    - by Tangeleno
    I'm having a bit of trouble with 3D picking, at first I thought my ray was inaccurate but it turns out that the picking is happening on faces facing the camera and faces facing away from the camera which I'm currently culling. Here's my ray creation code, I'm pretty sure the problem isn't here but I've been wrong before. private uint Pick() { Ray cursorRay = CalculateCursorRay(); Vector3? point = Control.Mesh.RayCast(cursorRay); if (point != null) { Tile hitTile = Control.TileMesh.GetTileAtPoint(point); return hitTile == null ? uint.MaxValue : (uint)(hitTile.X + hitTile.Y * Control.Generator.TilesWide); } return uint.MaxValue; } private Ray CalculateCursorRay() { Vector3 nearPoint = Control.Camera.Unproject(new Vector3(Cursor.Position.X, Control.ClientRectangle.Height - Cursor.Position.Y, 0f)); Vector3 farPoint = Control.Camera.Unproject(new Vector3(Cursor.Position.X, Control.ClientRectangle.Height - Cursor.Position.Y, 1f)); Vector3 direction = farPoint - nearPoint; direction.Normalize(); return new Ray(nearPoint, direction); } public Vector3 Camera.Unproject(Vector3 source) { Vector4 result; result.X = (source.X - _control.ClientRectangle.X) * 2 / _control.ClientRectangle.Width - 1; result.Y = (source.Y - _control.ClientRectangle.Y) * 2 / _control.ClientRectangle.Height - 1; result.Z = source.Z - 1; if (_farPlane - 1 == 0) result.Z = 0; else result.Z = result.Z / (_farPlane - 1); result.W = 1f; result = Vector4.Transform(result, Matrix4.Invert(ProjectionMatrix)); result = Vector4.Transform(result, Matrix4.Invert(ViewMatrix)); result = Vector4.Transform(result, Matrix4.Invert(_world)); result = Vector4.Divide(result, result.W); return new Vector3(result.X, result.Y, result.Z); } And my triangle intersection code. Ripped mainly from the XNA picking sample. public float? Intersects(Ray ray) { float? closestHit = Bounds.Intersects(ray); if (closestHit != null && Vertices.Length == 3) { Vector3 e1, e2; Vector3.Subtract(ref Vertices[1].Position, ref Vertices[0].Position, out e1); Vector3.Subtract(ref Vertices[2].Position, ref Vertices[0].Position, out e2); Vector3 directionCrossEdge2; Vector3.Cross(ref ray.Direction, ref e2, out directionCrossEdge2); float determinant; Vector3.Dot(ref e1, ref directionCrossEdge2, out determinant); if (determinant > -float.Epsilon && determinant < float.Epsilon) return null; float inverseDeterminant = 1.0f/determinant; Vector3 distanceVector; Vector3.Subtract(ref ray.Position, ref Vertices[0].Position, out distanceVector); float triangleU; Vector3.Dot(ref distanceVector, ref directionCrossEdge2, out triangleU); triangleU *= inverseDeterminant; if (triangleU < 0 || triangleU > 1) return null; Vector3 distanceCrossEdge1; Vector3.Cross(ref distanceVector, ref e1, out distanceCrossEdge1); float triangleV; Vector3.Dot(ref ray.Direction, ref distanceCrossEdge1, out triangleV); triangleV *= inverseDeterminant; if (triangleV < 0 || triangleU + triangleV > 1) return null; float rayDistance; Vector3.Dot(ref e2, ref distanceCrossEdge1, out rayDistance); rayDistance *= inverseDeterminant; if (rayDistance < 0) return null; return rayDistance; } return closestHit; } I'll admit I don't fully understand all of the math behind the intersection and that is something I'm working on, but my understanding was that if rayDistance was less than 0 the face was facing away from the camera, and shouldn't be counted as a hit. 
So my question is: is there an issue with my intersection or ray creation code, or is there another check I need to perform to tell whether the face is facing away from the camera? If so, any hints on what that check might contain would be appreciated.
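
    In this style of test (Moller-Trumbore) the front/back distinction comes from the sign of the determinant, not from the sign of rayDistance: a negative rayDistance only means the hit lies behind the ray origin. Below is a minimal, self-contained Java sketch of the same test with back-face culling, assuming counter-clockwise winding for front faces; it illustrates the technique and is not the asker's XNA/C# code, and the float[] helpers are stand-ins for a real vector type:

    // Minimal Moller-Trumbore ray/triangle test with back-face culling,
    // written against plain float[3] vectors so it stands alone.
    final class RayTriangle {
        static float[] sub(float[] a, float[] b)   { return new float[]{a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }
        static float[] cross(float[] a, float[] b) { return new float[]{a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]}; }
        static float   dot(float[] a, float[] b)   { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

        /** Returns the hit distance along the ray, or null if there is no front-facing hit. */
        static Float intersectFrontFace(float[] orig, float[] dir, float[] v0, float[] v1, float[] v2) {
            final float EPSILON = 1e-6f;
            float[] e1 = sub(v1, v0);
            float[] e2 = sub(v2, v0);
            float[] p  = cross(dir, e2);
            float det  = dot(e1, p);
            // Culling version: a near-zero OR negative determinant means the triangle
            // is degenerate or facing away from the ray, so reject it here.
            if (det < EPSILON) return null;
            float inv = 1.0f / det;
            float[] t = sub(orig, v0);
            float u = dot(t, p) * inv;
            if (u < 0f || u > 1f) return null;
            float[] q = cross(t, e1);
            float v = dot(dir, q) * inv;
            if (v < 0f || u + v > 1f) return null;
            float dist = dot(e2, q) * inv;
            // A negative distance only means the hit is behind the ray origin,
            // not that the face points away from the camera.
            return dist < 0f ? null : dist;
        }
    }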

    Read the article

  • Linux compatible touchscreen monitor

    - by jrtokarz
    Can anybody recommend a Linux-compatible touchscreen monitor? The criteria are: 19 inches or greater, 1920x1080 or better, only single touch required, and able to operate in portrait mode. We currently have some Iiyama ProLite MTS2250MS monitors, but these do not natively support portrait mode and we have had little success getting the touch functionality working in Linux (although they do report themselves as HID devices when plugged in). So an alternative to suggesting a monitor would be pointers to information on how we could get the Iiyama monitors working in Linux.

    Read the article

  • Android: Adding an extended GLSurfaceView to a layout doesn't show 3D stuff

    - by Santiago
    I make a game extending the class GLSurfaceView, if I apply SetContentView directly to that class, the 3d stuff and input works great. Now I want to show some items over 3d stuff, so I create a XML with a layout and some objects, and I try to add my class manually to the layout. I'm not getting errors but the 3d stuff is not shown but I can view the objects from XML layout. source: @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); LayoutInflater inflater = (LayoutInflater) getSystemService(LAYOUT_INFLATER_SERVICE); layout = (RelativeLayout) inflater.inflate(R.layout.testlayout, null); //Create an Instance with this Activity my3dstuff = new myGLSurfaceViewClass(this); layout.addView(my3dstuff,4); setContentView(R.layout.testlayout); } And testlayout have: <?xml version="1.0" encoding="utf-8"?> <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="wrap_content" android:layout_height="wrap_content" android:id="@+id/Pantalla"> <ImageView android:id="@+id/zoom_less" android:layout_width="wrap_content" android:layout_height="wrap_content" android:src="@drawable/zoom_less"></ImageView> <ImageView android:id="@+id/zoom_more" android:layout_width="wrap_content" android:src="@drawable/zoom_more" android:layout_height="wrap_content" android:layout_alignParentRight="true"></ImageView> <ImageView android:id="@+id/zoom_normal" android:layout_width="wrap_content" android:layout_height="wrap_content" android:src="@drawable/zoom_normal" android:layout_centerHorizontal="true"></ImageView> <ImageView android:id="@+id/stop" android:layout_width="wrap_content" android:layout_height="wrap_content" android:src="@drawable/stop" android:layout_centerInParent="true" android:layout_alignParentBottom="true"></ImageView> </RelativeLayout> I also tried to add my class to XML but the Activity hangs up. <com.mygame.myGLSurfaceViewClass android:id="@+id/my3dstuff" android:layout_width="fill_parent" android:layout_height="fill_parent"></com.mygame.myGLSurfaceViewClass> and this don't works: <View class="com.mygame.myGLSurfaceViewClass" android:id="@+id/my3dstuff" android:layout_width="fill_parent" android:layout_height="fill_parent"></View> Any Idea? Thanks
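
    One thing worth checking in the snippet above is that the layout is effectively inflated twice: the GLSurfaceView is added to the instance returned by the inflater, but setContentView(R.layout.testlayout) then inflates a fresh copy that never received the GL view. A hedged sketch of the usual pattern follows; the class and resource names are taken from the question, while the index and layout params are assumptions:

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Inflate the XML layout once and keep a reference to that instance.
        RelativeLayout layout = (RelativeLayout) getLayoutInflater()
                .inflate(R.layout.testlayout, null);

        // Create the GL view and insert it at index 0 so the ImageViews
        // declared in the XML are drawn on top of the 3D surface.
        myGLSurfaceViewClass my3dstuff = new myGLSurfaceViewClass(this);
        layout.addView(my3dstuff, 0, new RelativeLayout.LayoutParams(
                RelativeLayout.LayoutParams.FILL_PARENT,
                RelativeLayout.LayoutParams.FILL_PARENT));

        // Hand the *same* inflated instance to setContentView instead of
        // re-inflating the resource, which would discard the GL view.
        setContentView(layout);
    }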

    Read the article

  • Notifying view controller when subview touch events occur.

    - by Nebs
    I have a UIViewController whose view has a custom subview. This custom subview needs to track touch events and report swipe gestures. Currently I put touchesBegan, touchesMoved, touchesEnded and touchesCancelled in the subview class. With some extra logic I am able to get swipe gestures and call my handleRightSwipe and handleLeftSwipe methods. So now when I swipe within the subview it calls its local swipe handling methods. This all works fine. But what I really need is for the handleRightSwipe and handleLeftSwipe methods to be in the view controller. I could leave them in the subview class, but then I'd have to bring in all the logic and data as well, and that kind of breaks the MVC idea. So my question is: is there a clean way to handle this? Essentially I want to keep my touch event methods in the subview so that they only trigger for that specific view. But I also want the view controller to be informed when these touch events (or in this case swipe gestures) occur. Any ideas? Thanks.
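
    The usual shape of a clean answer here is a delegate: the subview keeps the touch tracking and gesture detection, and merely notifies a delegate when a swipe happens, so the handling logic stays in the controller. On iOS this would be an Objective-C protocol with a weak delegate property; the sketch below is a deliberately platform-neutral Java illustration of the same idea, and all names in it are invented for illustration:

    // Platform-neutral sketch of the delegate idea: the view detects the gesture,
    // the controller owns the response.
    interface SwipeDelegate {
        void onSwipeLeft();
        void onSwipeRight();
    }

    class SwipeTrackingView {
        private SwipeDelegate delegate;          // on iOS this would be a weak reference
        private float touchDownX;

        void setDelegate(SwipeDelegate delegate) { this.delegate = delegate; }

        // Called from the view's own low-level touch handlers.
        void touchBegan(float x) { touchDownX = x; }
        void touchEnded(float x) {
            float dx = x - touchDownX;
            if (delegate == null || Math.abs(dx) < 40f) return;  // 40-point threshold is arbitrary
            if (dx > 0) delegate.onSwipeRight(); else delegate.onSwipeLeft();
        }
    }

    class MyController implements SwipeDelegate {
        private final SwipeTrackingView subview = new SwipeTrackingView();

        MyController() { subview.setDelegate(this); }

        // The swipe-handling logic (and any model access) stays in the controller.
        @Override public void onSwipeRight() { /* handleRightSwipe */ }
        @Override public void onSwipeLeft()  { /* handleLeftSwipe  */ }
    }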

    Read the article

  • Touch friendly GUI in Windows Mobile

    - by vonolsson
    I'm porting an audio processing application written in C++ from Windows to Windows Mobile (version 5+). Basically what I need to port is the GUI. The application is quite complicated and the GUI will need to offer a lot of functionality. I would like to create a touch-friendly user interface that also looks good, which basically means that standard WinMo controls are out the window. I've looked at libraries such as Fluid and they look like something I would like to use. However, as I said, I'm developing in C++. Even though it would be possible to write only the GUI part in some .NET language, I'd rather not; my experience with .NET on Windows Mobile is that it doesn't work very well... Can anyone either suggest a C/C++ touch-friendly GUI library for Windows Mobile, or some kind of "best practices" document/how-to on using the standard Windows Mobile controls in order to make them touch friendly and have them work and look well in later versions of Windows Mobile (in particular version 6.5)?

    Read the article

  • Touch coordinates in iPhone landscape mode app

    - by gok
    I am trying to make a landscape-only iPhone app. I only use this code for that purpose: - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { return (interfaceOrientation == UIInterfaceOrientationLandscapeRight); } However, when I check the clip subviews checkbox in Interface Builder, the view is clipped from the middle, and obviously I also don't receive any touch events from outside the view bounds. - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event { CGPoint fingerPos = [[touches anyObject] locationInView:self.view]; NSLog(@"%f %f",fingerPos.x,fingerPos.y); } only prints coordinates between 20 and 320 for X, but Y works fine. When I try to modify the bounds by hand, almost everything works: the view is positioned and shown correctly, the printed coordinates are correct, and I receive touches from all of the screen except between 0 and 20 for X. So the left side of the screen is unresponsive to touch events for only 20 pixels. The code I use to modify the bounds: self.view.bounds = CGRectMake(-180.0f, 0.0f, 680.0f, 480.0f); What might be causing this? Weird!

    Read the article

  • jQuery Touch Punch - draggable on iPad

    - by dshuta
    I am starting to work with the jQuery Touch Punch extension in order to allow draggability on the iPad, but I am getting tripped up right away - probably something terribly dumb on my part. The draggable example from the developer works fine on my iPad: http://furf.com/exp/touch-punch/draggable.html but not mine: http://danshuta.com/touchpunch/ Mine works fine in my desktop browser, but on the iPad it just focuses on the block and scrolls the entire page as I drag, as if it were just an image or other normal embedded object. As this is what happens normally with jQuery UI on the iPad, this makes me think it is not loading, or is otherwise ignoring, the "punch" code from my site (though if I host the jQuery files on my site via the same path, those load and function fine in a desktop browser). Here's the entire code, very basic: <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8" /> <title>Touchpunchtest</title> <script src="http://code.jquery.com/jquery.min.js"></script> <script src="http://code.jquery.com/ui/1.8.17/jquery-ui.min.js"></script> <script src="js/jquery.ui.touch-punch.js"></script> </head> <body> <div id="draggybox" onclick="void(0)" style="width: 150px; height: 150px; background: green;"></div> <script>$('#draggybox').draggable();</script> </body> </html> What am I missing?!

    Read the article

  • LWJGL SlickUtil Texture Binding

    - by Matthew Dockerty
    I am making a 3D game using LWJGL and I have a texture class with static variables so that I only need to load textures once, even if I need to use them more than once. I am using Slick Util for this. When I bind a texture it works fine, but then when I try to render something else after I have rendered the model with the texture, the texture is still being bound. How do I unbind the texture and set the rendermode to the one that was in use before any textures were bound? Some of my code is below. The problem I am having is the player texture is being used in the box drawn around the player after it the model has been rendered. Model.java public class Model { public List<Vector3f> vertices = new ArrayList<Vector3f>(); public List<Vector3f> normals = new ArrayList<Vector3f>(); public ArrayList<Vector2f> textureCoords = new ArrayList<Vector2f>(); public List<Face> faces = new ArrayList<Face>(); public static Model TREE; public static Model PLAYER; public static void loadModels() { try { TREE = OBJLoader.loadModel(new File("assets/model/tree_pine_0.obj")); PLAYER = OBJLoader.loadModel(new File("assets/model/player.obj")); } catch (Exception e) { e.printStackTrace(); } } public void render(Vector3f position, Vector3f scale, Vector3f rotation, Texture texture, float shinyness) { glPushMatrix(); { texture.bind(); glColor3f(1, 1, 1); glTranslatef(position.x, position.y, position.z); glScalef(scale.x, scale.y, scale.z); glRotatef(rotation.x, 1, 0, 0); glRotatef(rotation.y, 0, 1, 0); glRotatef(rotation.z, 0, 0, 1); glMaterialf(GL_FRONT, GL_SHININESS, shinyness); glBegin(GL_TRIANGLES); { for (Face face : faces) { Vector2f t1 = textureCoords.get((int) face.textureCoords.x - 1); glTexCoord2f(t1.x, t1.y); Vector3f n1 = normals.get((int) face.normal.x - 1); glNormal3f(n1.x, n1.y, n1.z); Vector3f v1 = vertices.get((int) face.vertex.x - 1); glVertex3f(v1.x, v1.y, v1.z); Vector2f t2 = textureCoords.get((int) face.textureCoords.y - 1); glTexCoord2f(t2.x, t2.y); Vector3f n2 = normals.get((int) face.normal.y - 1); glNormal3f(n2.x, n2.y, n2.z); Vector3f v2 = vertices.get((int) face.vertex.y - 1); glVertex3f(v2.x, v2.y, v2.z); Vector2f t3 = textureCoords.get((int) face.textureCoords.z - 1); glTexCoord2f(t3.x, t3.y); Vector3f n3 = normals.get((int) face.normal.z - 1); glNormal3f(n3.x, n3.y, n3.z); Vector3f v3 = vertices.get((int) face.vertex.z - 1); glVertex3f(v3.x, v3.y, v3.z); } texture.release(); } glEnd(); } glPopMatrix(); } } Textures.java public class Textures { public static Texture FLOOR; public static Texture PLAYER; public static Texture SKYBOX_TOP; public static Texture SKYBOX_BOTTOM; public static Texture SKYBOX_FRONT; public static Texture SKYBOX_BACK; public static Texture SKYBOX_LEFT; public static Texture SKYBOX_RIGHT; public static void loadTextures() { try { FLOOR = TextureLoader.getTexture("PNG", new FileInputStream(new File("assets/model/floor.png"))); FLOOR.setTextureFilter(GL11.GL_NEAREST); PLAYER = TextureLoader.getTexture("PNG", new FileInputStream(new File("assets/model/tree_pine_0.png"))); PLAYER.setTextureFilter(GL11.GL_NEAREST); SKYBOX_TOP = TextureLoader.getTexture("PNG", new FileInputStream(new File("assets/textures/skybox_top.png"))); SKYBOX_TOP.setTextureFilter(GL11.GL_NEAREST); SKYBOX_BOTTOM = TextureLoader.getTexture("PNG", new FileInputStream(new File("assets/textures/skybox_bottom.png"))); SKYBOX_BOTTOM.setTextureFilter(GL11.GL_NEAREST); SKYBOX_FRONT = TextureLoader.getTexture("PNG", new FileInputStream(new File("assets/textures/skybox_front.png"))); 
SKYBOX_FRONT.setTextureFilter(GL11.GL_NEAREST); SKYBOX_BACK = TextureLoader.getTexture("PNG", new FileInputStream(new File("assets/textures/skybox_back.png"))); SKYBOX_BACK.setTextureFilter(GL11.GL_NEAREST); SKYBOX_LEFT = TextureLoader.getTexture("PNG", new FileInputStream(new File("assets/textures/skybox_left.png"))); SKYBOX_LEFT.setTextureFilter(GL11.GL_NEAREST); SKYBOX_RIGHT = TextureLoader.getTexture("PNG", new FileInputStream(new File("assets/textures/skybox_right.png"))); SKYBOX_RIGHT.setTextureFilter(GL11.GL_NEAREST); } catch (Exception e) { e.printStackTrace(); } } } Player.java public class Player { private Vector3f position; private float yaw; private float moveSpeed; public Player(float x, float y, float z, float yaw, float moveSpeed) { this.position = new Vector3f(x, y, z); this.yaw = yaw; this.moveSpeed = moveSpeed; } public void update() { if (Keyboard.isKeyDown(Keyboard.KEY_W)) walkForward(moveSpeed); if (Keyboard.isKeyDown(Keyboard.KEY_S)) walkBackwards(moveSpeed); if (Keyboard.isKeyDown(Keyboard.KEY_A)) strafeLeft(moveSpeed); if (Keyboard.isKeyDown(Keyboard.KEY_D)) strafeRight(moveSpeed); if (Mouse.isButtonDown(0)) yaw += Mouse.getDX(); LowPolyRPG.getInstance().getCamera().setPosition(-position.x, -position.y, -position.z); LowPolyRPG.getInstance().getCamera().setYaw(yaw); } public void walkForward(float distance) { position.setX(position.getX() + distance * (float) Math.sin(Math.toRadians(yaw))); position.setZ(position.getZ() - distance * (float) Math.cos(Math.toRadians(yaw))); } public void walkBackwards(float distance) { position.setX(position.getX() - distance * (float) Math.sin(Math.toRadians(yaw))); position.setZ(position.getZ() + distance * (float) Math.cos(Math.toRadians(yaw))); } public void strafeLeft(float distance) { position.setX(position.getX() + distance * (float) Math.sin(Math.toRadians(yaw - 90))); position.setZ(position.getZ() - distance * (float) Math.cos(Math.toRadians(yaw - 90))); } public void strafeRight(float distance) { position.setX(position.getX() + distance * (float) Math.sin(Math.toRadians(yaw + 90))); position.setZ(position.getZ() - distance * (float) Math.cos(Math.toRadians(yaw + 90))); } public void render() { Model.PLAYER.render(new Vector3f(position.x, position.y + 12, position.z), new Vector3f(3, 3, 3), new Vector3f(0, -yaw + 90, 0), Textures.PLAYER, 128); GL11.glPushMatrix(); GL11.glTranslatef(position.getX(), position.getY(), position.getZ()); GL11.glRotatef(-yaw, 0, 1, 0); GL11.glScalef(5.8f, 21, 2.2f); GL11.glDisable(GL11.GL_LIGHTING); GL11.glLineWidth(3); GL11.glBegin(GL11.GL_LINE_STRIP); GL11.glColor3f(1, 1, 1); glVertex3f(1f, 0f, -1f); glVertex3f(-1f, 0f, -1f); glVertex3f(-1f, 1f, -1f); glVertex3f(1f, 1f, -1f); glVertex3f(-1f, 0f, 1f); glVertex3f(1f, 0f, 1f); glVertex3f(1f, 1f, 1f); glVertex3f(-1f, 1f, 1f); glVertex3f(1f, 1f, -1f); glVertex3f(-1f, 1f, -1f); glVertex3f(-1f, 1f, 1f); glVertex3f(1f, 1f, 1f); glVertex3f(1f, 0f, 1f); glVertex3f(-1f, 0f, 1f); glVertex3f(-1f, 0f, -1f); glVertex3f(1f, 0f, -1f); glVertex3f(1f, 0f, 1f); glVertex3f(1f, 0f, -1f); glVertex3f(1f, 1f, -1f); glVertex3f(1f, 1f, 1f); glVertex3f(-1f, 0f, -1f); glVertex3f(-1f, 0f, 1f); glVertex3f(-1f, 1f, 1f); glVertex3f(-1f, 1f, -1f); GL11.glEnd(); GL11.glEnable(GL11.GL_LIGHTING); GL11.glPopMatrix(); } public Vector3f getPosition() { return new Vector3f(-position.x, -position.y, -position.z); } public float getX() { return position.getX(); } public float getY() { return position.getY(); } public float getZ() { return position.getZ(); } public void 
setPosition(Vector3f position) { this.position = position; } public void setPosition(float x, float y, float z) { this.position.setX(x); this.position.setY(y); this.position.setZ(z); } } Thanks for the help.
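
    For the texture-bleeding part of the question: in fixed-function OpenGL a texture stays bound (and texturing stays enabled) until something changes that state, so the untextured wireframe box has to either bind texture 0 or disable GL_TEXTURE_2D before drawing. A hedged sketch of one way to do that inside Player.render(), using the same LWJGL GL11 calls as above:

    // Sketch: draw the untextured selection box with texturing switched off,
    // then restore whatever enable state was in effect before.
    GL11.glPushAttrib(GL11.GL_ENABLE_BIT);       // remember lighting/texturing flags
    GL11.glBindTexture(GL11.GL_TEXTURE_2D, 0);   // unbind whatever the model left bound
    GL11.glDisable(GL11.GL_TEXTURE_2D);          // the lines should not sample any texture
    GL11.glDisable(GL11.GL_LIGHTING);

    GL11.glColor3f(1f, 1f, 1f);
    GL11.glBegin(GL11.GL_LINE_STRIP);
    //  ... glVertex3f calls for the box, as in Player.render() ...
    GL11.glEnd();

    GL11.glPopAttrib();                          // restores GL_TEXTURE_2D / GL_LIGHTING state

    Note also that in Model.render() the texture.release() call sits between glBegin and glEnd, where OpenGL does not allow binding changes; whatever release() does internally, it is safer to move it after glEnd().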

    Read the article

  • Unable to sync iPod Touch with my PC

    - by alex
    I'm trying to sync a first gen iPod Touch to my PC running Windows 7 64 bit. The problem is that whenever I connect the iPod, iTunes completely freezes (if I start iTunes after connecting the iPod it will simply hang until it's physically disconnected from the PC). I reinstalled iTunes thinking that it had been corrupted, but without any luck. I've had this problem with all the latest versions of iTunes. I've also tried using MediaMonkey and DoubleTwist. None of these apps see the iPod as being connected; DoubleTwist also freezes, just like iTunes. The really strange thing is that I was able to sync the iPod with this PC a while back, but I now seem to have lost that ability. I don't know what changed. Windows detects the device every time it's plugged in (I can see it in Device Manager and I can browse all photos on iPod as if it were a camera). Also, I can sync it to iTunes on Mac OS X without any major problems.

    Read the article

  • Dell Latitude XT touch screen issue

    - by Jake
    Yesterday I installed Windows 7 Ultimate after having had the Enterprise version on this machine for a few months. The installation went smoothly and everything worked fine except for the finger detection of the screen (only the pen worked) and the bottom screen buttons for rotating and so on. So I started to install all the missing drivers Dell recommends for this model on their site, but once I tried to install the N-Trig digitizer driver the installation said it was unsuccessful, and since then touch has stopped working completely! I tried System Restore but it didn't help, so I went on and formatted the hard disk completely once again and installed Windows, but that didn't work either. I tried to install the N-Trig driver but it raised a fatal error and said that no device was detected. Same story with the N-Trig rollback. So I checked Device Manager and saw an "unknown device" with VID and PID values set to 0000. I figured the N-Trig driver might have messed with the device firmware or something and now it doesn't know its ID and manufacturer... Is there anything that can be done, like forcing the N-Trig driver to install on this device? Please help!

    Read the article

  • 3D game engine for networked world simulation / AI sandbox

    - by Martin
    More than 5 years ago I was playing with DirectSound and Direct3D and found it really exciting, although it took a lot of time to get good results with C++. I was a college student then. Now I have mostly enterprise development experience in C# and PHP, and I do it for a living. There is really no chance to earn money with serious game development in our country. More and more each day I find that I miss something, so I decided to spend an hour or so each day programming for fun. My idea is to build a world simulation. I would like to begin with something simple - some human-like creatures that live their lives - like The Sims 3 but much simpler: just basic needs, basic animations, minimal graphic assets - I guess it won't be a city but just a large house for a start. The idea is to have some kind of server application which stores the world data in a MySQL database, and some client applications - body-less AI bots which simulate movement and some interactions with the world and each other. But it wouldn't be fun without 3D, so there are also 3D clients - I can enter that virtual world and see the AI bots living. When a bot enters the visible area, it becomes material - it loads a mesh and animations, so I can see it. When I leave, the bots lose their 3D mesh bodies again, but their virtual life still continues. With time I hope to make it an expandable, scriptable sandbox to experiment with various AI algorithms and so on. But I do not intend to create a full-blown MMORPG :D I have looked at many possible things I would need (free and open source) and now I have to make a choice: OGRE3D + enet (or RakNet). Good old C++. But won't it slow me down so much that I won't have fun any more? CrystalSpace. Formally not a game engine but very close to one. C++ again. MOgre (OGRE3D wrapper for .NET) + lidgren (a networking library already used in some gaming projects). Good - I like C#, it is good for fast programming and can also be used for scripting. XNA seems to be just a framework, not an engine, so I really have doubts whether I should even look at XNA Game Studio :( Panda3D - a full game engine with positive feedback. I really like the idea of having the whole toolset in one package, and it has good reviews as a beginner-friendly engine... if you know Python. On the C++ side, Panda3D has almost non-existent documentation. I have 0 experience with Python, but I've heard it is easy to learn. And if it is fun and challenging, then I guess I would benefit from experience in one more programming language. Which of these would you suggest - not because of advanced features or good platform support, but mostly for fun, easy workflow and expandability, and so I can create and integrate all the components I need: the server with the database, the AI bots and a 3D client application?

    Read the article

  • UIView keeps forwarding touches

    - by donodare
    Hi all, I subclassed UIImageView to control touch events. The problem is that when I set userInteractionEnabled = NO, it doesn't work: I can still touch it and the touch methods respond. Even if I add a big subview over it, the touches still happen. Any suggestions?

    Read the article

  • Prims vs Polys: what are the pros and cons of each?

    - by Richard Inglis
    I've noticed that most 3D gaming/rendering environments represent solids as a mesh of (usually triangular) 3D polygons. However, some examples, such as Second Life or POV-Ray, use solids built from a set of 3D primitives (cube, sphere, cone, torus, etc.) on which various operations can be performed to create more complex shapes. So my question is: why choose one method over the other for representing 3D data? I can see there might be benefits for complex ray-tracing operations in being able to describe a surface as a single mathematical function (like POV-Ray does), but SL surely isn't attempting anything so ambitious with their rendering engine. Equally, I can imagine it might be more bandwidth-efficient to serve descriptions of generalised solids instead of arbitrary meshes, but is it really worth the downside that SL suffers from (i.e. modelling stuff is really hard, and usually the results are ugly)? Was this just a bad decision made early in SL's development that they're now stuck with, or is it an artefact of what's easiest to implement in OpenGL?

    Read the article

  • When the current toolbar is touched, previous toolbar items are activated.

    - by srikanth rongali
    I have a UIToolBar *toolbar1 with 3 buttons on it, and the toolbar is at the bottom of the view. The buttons are a library button, a moreApps button and a recordVideo button. If I touch the library button a new view (a UITableViewController) appears; I used presentModalViewController for it. If I touch the moreApps button a new view (a UIViewController) appears, and it is loaded from http//. The recordVideo button touch gives another UIViewController view. When the moreApps button is touched and the view is loaded, it has the content from the web and a UIToolBar *toolbar2 at the bottom of the view, with a refresh button and a done button. The problem is that when I touch toolbar2 (not on the refresh button or done button), the previous view's buttons are activated and the corresponding views of the previous buttons (library button, playVideo button) are loaded. I am not able to understand this problem. Thank you.

    Read the article

  • Is it impossible to embed Java3D in a way that I don't need to install it?

    - by Kyle
    I'm running a big application, and a small part of it uses Java 3D. The problem is that many users need to use the code, but it isn't practical for everyone to install Java 3D just to run the application if they aren't even going to use that section of it. Is it possible, by compiling an extra jar or changing some paths, to include Java 3D in a project without installing it on the system? Or perhaps to manually include any DLLs?
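
    A common way to avoid a system-wide Java 3D install is to ship the Java 3D jars and native libraries alongside the application and point the JVM at them at launch. A hedged sketch of such a launch line follows; the jar names (j3dcore.jar, j3dutils.jar, vecmath.jar) are the standard Java 3D jars, but the lib/ directory layout and the main class name are assumptions, and on Windows the classpath separator is ";" rather than ":":

        java -cp myapp.jar:lib/j3dcore.jar:lib/j3dutils.jar:lib/vecmath.jar \
             -Djava.library.path=lib/native \
             com.example.Main

    The point of -Djava.library.path is that Java 3D loads its native libraries with System.loadLibrary, so placing the platform-specific DLL/.so files in that directory removes the need for a separate installer.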

    Read the article

  • How to drag only one image with the iPhone SDK

    - by loka
    Hi! I want to create a little app that shows two images, and I want to make only the top image draggable. After some research, I found this solution: -(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event { UITouch *touch = [[event allTouches] anyObject]; image.alpha = 0.7; if([touch view] == image){ CGPoint location = [touch locationInView:self.view]; image.center = location; } } It works, but the problem is that the image is dragged from its center and I don't want that. So I found another solution: - (void) touchesBegan:(NSSet*)touches withEvent:(UIEvent*)event { // Retrieve the touch point CGPoint pt = [[touches anyObject] locationInView:self.view]; startLocation = pt; [[self view] bringSubviewToFront:self.view]; } - (void) touchesMoved:(NSSet*)touches withEvent:(UIEvent*)event { // Move relative to the original touch point CGPoint pt = [[touches anyObject] locationInView:self.view]; frame = [self.view frame]; frame.origin.x += pt.x - startLocation.x; frame.origin.y += pt.y - startLocation.y; [self.view setFrame:frame]; } It works very well, but when I add another image, all the images in the view are dragged at the same time. I'm a beginner at iPhone programming and I have no idea how I can make only the top image draggable. Thank you in advance for your help!!

    Read the article

  • Will Apple bundle the Mono Touch runtime with every iPhone?

    - by Zoran Simic
    It strikes me as a good idea for Apple to negotiate with Novell and bundle the Mono Touch runtime (only the runtime, of course) into every iPhone and iPod Touch. Perhaps even make it a "one time install" that automatically gets downloaded from the App Store the first time one downloads an app built with Mono Touch, making every subsequent Mono Touch app much lighter to download (without the runtime). Doing so would be similar in a way to adding Boot Camp to OS X: it would make it easier for C# developers to join the party, but that wouldn't mean these developers will all stick to C#... What convinced me to buy a Mac was Boot Camp - I figured I could always install Windows if I didn't like OS X (and I liked the hardware, so no problem there). 6 months later, I'm using OS X full time... Would there be any technical issues in doing so? I see only advantages for all parties and not one disadvantage to anyone (except maybe for the few unfortunate Apple employees who would have to test the crap out of the Mono Touch runtime before bundling it): Novell wins because Mono Touch becomes much more viable (Mono Touch apps become much lighter all of a sudden); developers win because now there's one more tool in the tool belt, and many C# developers would be very interested in this; Apple wins because it would bring even more attention to the platform, more revenue in developer fees, more potential great apps, etc.; users win because less space is used up by different apps accumulating copies of the same runtime on their devices. Would there be a major technical obstacle to bundling Mono Touch into iPhone OS? Edit: I changed the title from "Should" to "Will Apple bundle the runtime?" - I think the consensus on predicting that means a lot to those considering going with Mono Touch.

    Read the article

  • Does fast typing influence fast programming?

    - by Lukasz Lew
    Many young programmers think that their bottleneck is typing speed. After some experience one realizes that this is not the case: you have to think much more than you type. At some point my room-mate forced me to turn off the light (he sleeps during the night), so I had to learn to touch type, and I experienced an actual improvement in programming skill. The most surprising part was that the improvement was not due to sheer typing speed, but to a change in mindset. I'm less afraid now to try new things and refactor them later if they work well. It's like having a new tool in the bag. Has anyone of you had a similar experience? I have now trained touch typing a little with KTouch. I find the auto-generated lessons the best. I can use this program to create new lessons out of text files, but that is only verbatim training, not auto-generated based on a language model. Do you know any touch typing program that allows the creation of custom, but randomized, lessons?

    Read the article

  • How can I get Pinch to Zoom back in Desktop mode?

    - by Ben Brocka
    Windows 7 had an old implementation of pinch to zoom where bringing your fingers apart/together would act similarly to Ctrl + +/-, the standard zoom. It's not as nice as granular zoom (like iOS/Android use) but it worked. Most notably it doesn't work in Chrome (it did before), but I haven't noticed it working in any other apps either. In Windows 8 desktop mode, pinch to zoom doesn't seem to work at all. It doesn't even work in OneNote 2010, which, if I recall correctly, had granular zoom in Windows 7. I have an (older) 2-touch-point multi-touch monitor, and I can see the visual feedback that the two touch points are coming closer/farther apart, but it doesn't zoom. Note I'm using the touchscreen, not a touchpad or the Arch mouse or other peripherals. Can I enable this somehow, or is it gone from Desktop mode? It works fine in Metro apps. Additionally, I get weird visual feedback when placing my second finger on the screen: a shrinking transparent square appears somewhere between the two fingers, visually similar to the right-click visual cue when long-pressing. It's not a right click though, and I can't tell what, if anything, it's doing.

    Read the article

  • How can I get Windows 8 to automatically disable touch when I am using my Wacom pen and turn it back on when I am not

    - by Robert
    I have an HP convertible tablet computer which I just upgraded to Windows 8. The problem (which existed under Windows 7 as well) is that this tablet has both a capacitive touch screen (with multi-touch) AND a Wacom-type tablet built into the screen that works using electromagnetic resonance with the provided stylus. My use case: Most of the time I am happy using my fingers and the touch interface for navigation and whatnot. However, when I want to get down to serious note-taking/drawing, I want to use the Wacom functionality. The problem is that any comfortable writing position has me resting my arm/hand on the screen, which activates the touch technology (despite supposed palm-detection algorithms) and completely screws up my input paradigm. My ideal solution: Ideally, since Wacom technology senses when the pen is "close" to the screen, I would love to have touch automatically disabled whenever the Wacom pen is detected, and turned back on when it is out of range. This would allow me to seamlessly switch between the two input methods, and since I NEVER want to use both at once, it would work perfectly for me. An acceptable alternative: As a next-best option, it would be great to be able to turn off the touch functionality (leaving the Wacom in place) whenever I entered specific apps (e.g. OneNote, Photoshop, Gimp, Pencil, etc.) and then have it turn back on when I left that app.... As a worst-case, at-least-let-me-use-my-PC option: If I could create a shortcut (tile or otherwise) that flips touch on and off without going all the way through the nested computer settings, that would be better than nothing. Thanks in advance for help with 1 or more of the above.

    Read the article

  • GameKit and GKPeerPicker on 1st Gen iPhone and iPod Touch

    - by Reese McLean
    This is my current setup for my multiplayer game: a view that gives connection tips and warns the user that multiplayer will not work on the 1st Gen iPhone or iPod Touch. There is a "connect" button that pushes the game view and starts the GKPeerPicker. Unfortunately, I don't have a 1st Gen iPhone or iPod Touch to test what happens if they press the connect button. The view will be pushed, but I don't know what will happen when the PeerPicker tries to show. So the question(s): Is there any way to tell if the user will not be able to use GameKit, so that I can disable the "connect" button? What will happen if they do press the connect button and GameKit is not available?

    Read the article

  • How do I render a 3D model into DirectShow virtual camera output

    - by Mr Bell
    I want to provide a virtual webcam via DirectShow that will take the video feed from an existing camera, run some tracking software against it to find the user's face, and then overlay a 3D model oriented so that it appears to move with the user's face. I am using a third-party API to do the face tracking and that's working great; I get position and rotation data from that API. My question is: what's the best way to render the 3D model, get it into the video feed, and out to DirectShow? I am using C++ on Windows XP.

    Read the article

  • Metric 3D reconstruction

    - by srand
    I'm trying to reconstruct 3D points from 2D image correspondences. My camera is calibrated. The test images are of a checkered cube and the correspondences are hand-picked. Radial distortion is removed. After triangulation, however, the reconstruction seems to be wrong. The X and Y values seem to be correct, but the Z values are all about the same and do not vary along the cube; the 3D points look as if they were flattened along the Z-axis. What is going wrong with the Z values? Do the points need to be normalized or converted from image coordinates at any point, say before the fundamental matrix is computed? (If this is too vague I can explain my general process or elaborate on parts.)
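
    For reference, a sketch of where normalization usually enters the calibrated two-view pipeline, in generic notation (this is the standard textbook chain, not the asker's code). With calibration matrix K, pixel points x and x' are first mapped to normalized coordinates, an essential matrix is estimated from those, and metric cameras are recovered from its decomposition:

        \hat{x} = K^{-1} x, \qquad \hat{x}' = K^{-1} x'
        \hat{x}'^{\top} E \, \hat{x} = 0, \qquad E = [t]_{\times} R
        P = K \, [\, I \mid 0 \,], \qquad P' = K \, [\, R \mid t \,]

    Triangulating with P and P' recovered this way gives a metric reconstruction (up to an overall scale). If instead the cameras are taken straight from a fundamental matrix estimated on raw pixel coordinates, the reconstruction is only projective, which can show up as exactly this kind of depth flattening even when X and Y look plausible.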

    Read the article

  • Touch area changing on custom UIButton with image and title in landscape mode

    - by gkedmi
    Hello all. I'm trying to build a button that looks like the icons on the iPhone, with an image and a title below. I'm working in landscape mode. I have a custom button in IB with an image and a title, and inside the code I use the methods: [aButton setTitleEdgeInsets:UIEdgeInsetsMake(60.0, -50.0, 0.0, 0.0)]; [aButton setImageEdgeInsets:UIEdgeInsetsMake(-10.0, 29.0, 0.0, 0.0)]; My problem is that the area that reacts to touch events is very small and is at the top left of the button. Another problem is that if I change my button size I have to manually recalculate the values for these two calls. Is there any easy way to do it? And if not, how can I fix the touch area? Thanks, Gilad

    Read the article
