Search Results

Search found 7346 results on 294 pages for 'touch flo 3d'.

Page 40 of 294

  • Optimizing hierarchical transform

    - by Geotarget
    I'm transforming objects in 3D space by transforming each vector with the object's 4x4 transform matrix. To achieve a hierarchical transform, I transform the child by its own matrix and then by the parent matrix. This becomes costly because objects deeper in the display tree have to be transformed by all of their parent objects. In summary, this is what happens:
    Root -- transform its verts by the Root matrix
    Parent -- transform its verts by the Parent, then the Root matrix
    Child -- transform its verts by the Child, then the Parent, then the Root matrix
    Is there a faster way to transform vertices to achieve a hierarchical transform? What if I first concatenated each transform matrix with its parent matrices, and then transformed the verts by that final resulting matrix? Would that work, and wouldn't it be faster?
    Root -- transform its verts by the Root matrix
    Parent -- concat the Parent and Root matrices, transform its verts by the concatenated matrix
    Child -- concat the Child, Parent and Root matrices, transform its verts by the concatenated matrix
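    Yes: concatenation works because matrix multiplication is associative, and it is faster because each node pays for one matrix-matrix multiply per update instead of transforming every vertex by every ancestor. A minimal sketch of the idea using System.Numerics (the Node type and its fields are hypothetical; with the row-vector convention, child-then-parent is written local * parentWorld):

        using System;
        using System.Collections.Generic;
        using System.Numerics;

        class Node
        {
            public Matrix4x4 Local = Matrix4x4.Identity;   // this node's own transform
            public Matrix4x4 World = Matrix4x4.Identity;   // cached Local * parent.World
            public Node Parent;
            public List<Node> Children = new List<Node>();
            public Vector3[] Verts = Array.Empty<Vector3>();

            // One matrix-matrix multiply per node, no matter how deep the node sits.
            public void UpdateWorld()
            {
                World = Parent == null ? Local : Local * Parent.World;
                foreach (var child in Children)
                    child.UpdateWorld();
            }

            // Every vertex is then transformed by a single concatenated matrix.
            public Vector3[] TransformedVerts()
            {
                var result = new Vector3[Verts.Length];
                for (int i = 0; i < Verts.Length; i++)
                    result[i] = Vector3.Transform(Verts[i], World);
                return result;
            }
        }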

    Read the article

  • Viewport.Unproject - Checking if a model intersects a large sprite

    - by Fibericon
    Let's say I have a sprite, drawn like this: spriteBatch.Draw(levelCannons[i].texture, levelCannons[i].position, null, alpha, levelCannons[i].rotation, Vector2.Zero, scale, SpriteEffects.None, 0); Picture levelCannon as being a laser beam that goes across the entire screen. I need to see if my 3d model intersects with the screen space inhabited by the sprite. I managed to dig up Viewport.Unproject, but that seems to only be useful when dealing with a single point in 2d space, rather than an area. What can I do in my case?
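    One way around the single-point limitation of Unproject is to go the other direction: project the model's bounding sphere into screen space with Viewport.Project (the point-based inverse operation) and test that 2D result against the laser's 2D line. A rough sketch, assuming XNA 4.0 and that the laser is drawn as an infinite line through laserOrigin at angle laserRotation (those parameter names are invented to match the Draw call above):

        // using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics; using System;
        bool LaserHitsModel(Viewport viewport, Matrix projection, Matrix view, Matrix world,
                            BoundingSphere sphere, Vector2 laserOrigin, float laserRotation,
                            float laserHalfWidth)
        {
            // Screen-space position of the sphere centre (Z is depth in [0, 1]).
            Vector3 center = viewport.Project(sphere.Center, projection, view, world);
            if (center.Z < 0f || center.Z > 1f)
                return false;   // behind the camera or past the far plane

            // Approximate screen-space radius by projecting a point one radius away.
            Vector3 edge = viewport.Project(sphere.Center + Vector3.UnitX * sphere.Radius,
                                            projection, view, world);
            float screenRadius = Vector2.Distance(new Vector2(center.X, center.Y),
                                                  new Vector2(edge.X, edge.Y));

            // Distance from the projected centre to the laser's 2D line.
            Vector2 dir = new Vector2((float)Math.Cos(laserRotation), (float)Math.Sin(laserRotation));
            Vector2 toCenter = new Vector2(center.X, center.Y) - laserOrigin;
            float dist = Math.Abs(toCenter.X * dir.Y - toCenter.Y * dir.X);   // |2D cross|, dir is unit length

            return dist <= screenRadius + laserHalfWidth;
        }

    The screen-radius estimate is only exact for a sphere centred on the view axis, but for a hit test against a screen-wide beam it is usually close enough.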

    Read the article

  • Some important Blender shortcuts do not work on Ubuntu

    - by Linko
    In Ubuntu (and Mint) some important Blender shortcuts do not work:
    Alt + right click to select an edge loop (heavily used in all 3D software) doesn't work; instead a useless Ubuntu menu pops up asking whether the application should be closed or minimized.
    Ctrl + Alt + 0 to set the current view as the camera view minimizes the application. This Ubuntu shortcut is pointless: it's faster to click the minimize icon.
    Ctrl + number to apply a subdivision surface level does nothing in Blender, and it's one of Blender's most-used shortcuts.
    For the moment I'm staying on Windows 7 just to keep these three shortcuts.

    Read the article

  • Portal View/Projection Matrix near plane

    - by melak47
    For RenderToTexture/Camera based portal rendering, the basics seem simple enough. However, with a free camera, most of the time it is going to be looking at such portals at an angle. A regular near clipping plane will not always work here: it will either intersect the wall the portal sits in, or possibly objects in front of the wall. The desired near clipping plane would instead be aligned with the portal itself, producing a truncated, slanted view volume. So here is my question: how does one construct or "truncate" a view/projection matrix to achieve such an off-camera-normal (near) clipping plane?
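    The usual technique here is Eric Lengyel's oblique near-plane clipping: rewrite the z row of the projection matrix so that the near plane becomes the portal plane, expressed in camera space. A sketch below, assuming an OpenGL-style column-major float[16] projection matrix with clip-space z in [-1, 1]; other conventions (row-vector matrices, 0..1 depth) need the indices adjusted, but the idea is identical.

        static float Sgn(float x) { return x > 0f ? 1f : (x < 0f ? -1f : 0f); }

        // m: column-major 4x4 projection matrix (16 floats).
        // clipPlane: (a, b, c, d) in camera/view space, normal pointing toward the camera.
        static void MakeObliqueProjection(float[] m, float[] clipPlane)
        {
            // Q: the clip-space corner opposite the plane, mapped back into camera space.
            float qx = (Sgn(clipPlane[0]) + m[8]) / m[0];
            float qy = (Sgn(clipPlane[1]) + m[9]) / m[5];
            float qz = -1.0f;
            float qw = (1.0f + m[10]) / m[14];

            // Scale the plane so it maps exactly onto the near plane of clip space.
            float dot = clipPlane[0] * qx + clipPlane[1] * qy + clipPlane[2] * qz + clipPlane[3] * qw;
            float s = 2.0f / dot;

            // Replace the third row (the z row) of the projection matrix.
            m[2]  = clipPlane[0] * s;
            m[6]  = clipPlane[1] * s;
            m[10] = clipPlane[2] * s + 1.0f;
            m[14] = clipPlane[3] * s;
        }

    The far plane ends up skewed as a side effect and depth precision suffers a bit, but geometry between the camera and the portal is clipped for free, which is exactly the truncated view volume described above.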

    Read the article

  • matrix to transform unit cube to space defined by 8 arbitrary points

    - by aadster
    I already asked a question similar to this, but I think this is a clearer statement of what I'm trying to achieve, or of whether it's possible at all. I'm trying to find a transformation (ideally a matrix) which would map the 8 corners of a 3D unit cube to 8 arbitrary points in space. The 8 target points have no known structure. My gut feeling is that a matrix can't provide this xform, since the deformed cube's faces can be non-planar and even concave, but are there other methods of transformation? Thanks!
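    The gut feeling is right: no single 4x4 matrix can do it, because affine and projective maps keep planar faces planar, while 8 arbitrary points generally define non-planar faces. What does work is trilinear interpolation of the 8 target corners, driven by the point's unit-cube coordinates. A sketch using System.Numerics, assuming corners[i] is the image of the cube corner whose x, y, z coordinates are bits 0, 1, 2 of i:

        using System.Numerics;

        // p: a point inside the unit cube, components in [0, 1].
        // corners[i]: target of cube corner ((i >> 0) & 1, (i >> 1) & 1, (i >> 2) & 1).
        static Vector3 TrilinearMap(Vector3 p, Vector3[] corners)
        {
            // Interpolate along x on the four edges parallel to the x axis...
            Vector3 c00 = Vector3.Lerp(corners[0], corners[1], p.X);   // y = 0, z = 0
            Vector3 c10 = Vector3.Lerp(corners[2], corners[3], p.X);   // y = 1, z = 0
            Vector3 c01 = Vector3.Lerp(corners[4], corners[5], p.X);   // y = 0, z = 1
            Vector3 c11 = Vector3.Lerp(corners[6], corners[7], p.X);   // y = 1, z = 1
            // ...then along y...
            Vector3 c0 = Vector3.Lerp(c00, c10, p.Y);
            Vector3 c1 = Vector3.Lerp(c01, c11, p.Y);
            // ...then along z.
            return Vector3.Lerp(c0, c1, p.Z);
        }

    It hits all 8 corners exactly and maps cube edges to straight segments, but lines through the interior generally map to curves, which is precisely why a single matrix cannot represent the same transform.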

    Read the article

  • Are VM-based languages becoming viable for Graphics since the move to GPU computing?

    - by skiwi
    Perhaps the title is not the clearest, so let me elaborate: I am talking about VM-based languages, by which I mean languages that run on the JVM (Java) or the CLR (for example C#). I am also talking specifically about 3D graphics. Lately the trend has been that more and more of the computation is done on the GPU rather than the CPU, and the long-standing issue with writing games in a VM-based language is that garbage collection may kick in at unpredictable moments. So let's look at who is responsible for what:
    Showing the graphics: GPU
    Uploading graphics to the GPU: CPU? Needs to be done every frame?
    Calculating physics constraints: GPU
    Doing the real game logic (deciding when to move objects independently of the physics calculations, processing AI): CPU
    Is my list actually correct? And if it is, is, for example, Java becoming more viable? Or is uploading the graphics (vertices) still the most expensive operation? I would like to get more insight into this.
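    On the "uploading graphics to the GPU" point: for static geometry the upload is normally done once at load time, not every frame; the per-frame CPU cost is mostly binding buffers and issuing draw calls. A small XNA-flavoured sketch of that pattern (the graphicsDevice and vertices variables are assumed to exist already):

        // Done once, at load time: copy the vertices into GPU memory.
        var vertexBuffer = new VertexBuffer(graphicsDevice,
                                            typeof(VertexPositionNormalTexture),
                                            vertices.Length,
                                            BufferUsage.WriteOnly);
        vertexBuffer.SetData(vertices);

        // Done every frame: no re-upload, just bind and draw.
        graphicsDevice.SetVertexBuffer(vertexBuffer);
        graphicsDevice.DrawPrimitives(PrimitiveType.TriangleList, 0, vertices.Length / 3);

    Dynamic data (particles, CPU-side animation) still needs per-frame uploads, and keeping per-frame allocations near zero matters on the JVM or CLR so the garbage collector has little reason to pause mid-frame.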

    Read the article

  • What are some ways to texture map a terrain?

    - by ApocKalipsS
    I'm working with XNA on a 3D game, and I'm trying to build a proper, nice-looking environment. I followed a tutorial to create a terrain from a heightmap. To texture it, I just apply a grass texture and tile it a number of times. What I want is really realistic texturing that is also generated automatically (for example, if I use Perlin noise to generate a terrain, I want to texture it afterwards). I already learned about multi-texturing and loading a map file with different colors for different textures, but I don't think that is very efficient; for instance, on cliffs or very steep areas it will stretch the texture badly, since it is mapped from a top-down view. (Also, I don't know how I would draw roads or dirt paths with that.) I'm looking for an efficient solution for realistically texture mapping procedurally generated terrain.
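    A common fully procedural approach is to derive per-vertex blend weights from the terrain itself (height and slope) rather than from a hand-painted map, and feed those weights to a multi-texture effect. A rough CPU-side sketch of the weight computation; the thresholds are invented numbers you would tune, and the normal is assumed to be normalized with Y up:

        // using Microsoft.Xna.Framework;
        // Returns blend weights for (grass, rock, snow) that sum to 1.
        static Vector3 BlendWeights(Vector3 normal, float height)
        {
            float slope = 1f - normal.Y;                                     // 0 = flat, 1 = vertical
            float rock  = MathHelper.Clamp((slope - 0.3f) / 0.4f, 0f, 1f);  // steeper terrain => rock
            float snow  = MathHelper.Clamp((height - 80f) / 40f, 0f, 1f) * (1f - rock); // high, flatter terrain => snow
            float grass = 1f - rock - snow;
            return new Vector3(grass, rock, snow);
        }

    For the stretched cliffs specifically, the usual fix is triplanar mapping in the pixel shader: sample the texture projected along all three axes and blend the samples by the absolute components of the normal, so steep faces get a side-on projection instead of a smeared top-down one. Roads and paths are usually handled separately, for example with a splat map generated along the road spline.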

    Read the article

  • How do I get started with fog type effects in a first person game?

    - by Dream Lane
    Hey guys, I'm currently using JME3 to learn 3D game development in Java, and I've run into a situation. I would like to add fog effects to my games, but I don't even know where to start. I know how to set the camera's far frustum to limit the render distance, but that simply makes a sharp cutoff. I'd like to fog it up a bit to make it feel more natural. I'm looking for an answer that points me in the right direction; I'm not looking for specific code snippets or JME3 engine specifics, I just want to get an idea of how this stuff works in general. Thanks!
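    In general, fog is just a per-fragment (or per-vertex) blend between the lit color and a fog color, driven by the distance from the camera; the classic fall-off curves are linear, exponential, and squared-exponential. A tiny engine-agnostic sketch of the three factors (written in C# only to match the other examples on this page):

        // using System;
        // distance: eye-to-fragment distance; the returned factor is 1 = unfogged, 0 = fully fogged.
        static float LinearFog(float distance, float start, float end)
        {
            float f = (end - distance) / (end - start);
            return Math.Max(0f, Math.Min(1f, f));
        }

        static float ExpFog(float distance, float density)
        {
            return (float)Math.Exp(-density * distance);
        }

        static float Exp2Fog(float distance, float density)
        {
            float d = density * distance;
            return (float)Math.Exp(-d * d);
        }

        // finalColor = fogFactor * litColor + (1 - fogFactor) * fogColor

    In practice the factor is evaluated in the shader (or by the engine's built-in fog support, which jME3 offers as a post-processing filter), and the key to hiding the far-plane cutoff is making the fog color match your clear/sky color.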

    Read the article

  • Game Asset Size Over Time

    - by jterrace
    The size (in bytes) of games has been growing over time. There are probably many factors contributing to this: trailer/cut-scene videos being bundled with the game, more and higher-quality audio, multiple levels of detail being used, etc. What I'd really like to know is how the size of the 3D models and textures that games ship with has changed over time. For example, if one were to look at the size of meshes and textures for Quake I (1996), Quake II (1997), Quake III: Arena (1999), Quake 4 (2005), and Enemy Territory: Quake Wars (2007), I'd imagine a steady increase in file size. Does anyone know of a data source for numbers like this?

    Read the article

  • Is an extra collision-mesh for level-data worth the hassle?

    - by Serthy
    What is the optimal approach for collision detection with the environment in a 3D engine (with triangle-mesh based geometry, no BSP)?
    A) Use the render mesh
    [+] no additional work for artists to fiddle with collision detection
    [-] high detail is harder on the physics calculations
    [+/-] maybe use collidable flags on materials
    [+/-] compute the collision mesh from the render mesh
    B) Use an additional collision mesh
    [+] faster / more optimal collision detection
    [-] additional work (either for the artist, or for the programmer who has to develop an algorithm to compute it from the render mesh)
    [-] more memory usage
    How do AAA titles handle this? And what are indie devs' approaches?

    Read the article

  • OpenGL ES 2.0 gluUnProject

    - by secheung
    I've spent more time than I should trying to get my ray-picking program working. I'm pretty convinced my math is solid with respect to line-plane intersection, but I believe the problem lies in converting the touch on the screen into 3D world space. Here's my code:
    public void passTouchEvents(MotionEvent e) {
        int[] viewport = {0, 0, viewportWidth, viewportHeight};
        float x = e.getX(), y = viewportHeight - e.getY();
        float[] pos1 = new float[4];
        float[] pos2 = new float[4];
        GLU.gluUnProject(x, y, 0.0f, mViewMatrix, 0, mProjectionMatrix, 0, viewport, 0, pos1, 0);
        GLU.gluUnProject(x, y, 1.0f, mViewMatrix, 0, mProjectionMatrix, 0, viewport, 0, pos2, 0);
    }
    Just as a reference, I tried transforming the coordinates (0,0,0) and got an offset. It would be appreciated if you could answer using OpenGL ES 2.0 code. Thanks

    Read the article

  • Some problems with Compiz and GTK on Ubuntu 14.04

    - by LVHoanh
    I installed Ubuntu 14.04 on my computer: i3 / 4 GB RAM / 750 GB disk / Intel HD 3000 graphics and a 1 GB NVIDIA GT540M with CUDA. I have some problems with Ubuntu 14.04:
    Does Ubuntu 14.04 support Emerald Theme Manager? It is not available in the Ubuntu Software Center.
    I installed Compiz and the Compiz settings manager and they work, but the Windows 3D plugin does not.
    The Shutdown/Restart function in the top-right panel does not work; is there a solution for this?
    The mouse theme I installed only works in the current session; the next time I restart the computer it changes back to the default mouse theme.
    How can I resize the spacing between desktop icons?
    Waiting for everyone's responses. Thank you so much. P.S.: My English is not good, I hope you understand. :)

    Read the article

  • Is it possible to create "impossible" rooms in games?

    - by qwerty3000
    Forgive my lack of knowledge, but for quite a long time I have asked myself whether it is possible to create a continuous game space that a player could walk around inside, which would be absolutely impossible in reality. For example: you have a very small house that you can walk all the way around, seeing all sides and its full dimensions, and then, when you enter, the inside is like a giant hall, without any loading screen or (internal) model change. I'm no game designer and I have never needed to learn 3D modelling, so I don't know what is possible and what isn't. And is this the same as the question "Is the 'impossible object' possible in computer graphics?", or is it just the same category but not exactly the same question? Thanks.

    Read the article

  • What kind of math should I be expecting in advanced programming?

    - by I_Question_Things_Deeply
    And I don't mean just space shooters and such, because in non-3D environments it's obvious that not much beyond elementary math is needed. Most of the programming in 2D games involves basic arithmetic, algorithms for enemy AI and dimensional worlds, rotation, and maybe some algebra as well, depending on how you want to design things. But I ask because I'm not really gifted with math at all. I get frustrated and worn out just doing pre-algebra, so Algebra 2 and calculus would likely be futile for me. I guess I'm not so "right-brained" when it comes down to pure numbers and math formulas, but the bad part is that I'm no art expert either. What do you people here suppose I should do? Go along avoiding as much of the extremely difficult math I can't fathom, or try to ease into more complex math as I advance in programming?

    Read the article

  • Rendering trillions of "atoms" instead of polygons?

    - by Baring
    I just saw a video about what the publishers call the "next major step after the invention of 3D". According to the person speaking in it, they use a huge number of atoms grouped into clouds, instead of polygons, to reach a level of unlimited detail. They tried their best to make the video understandable for people with no knowledge of rendering techniques, and therefore (or for other reasons) left out all details of how their engine works. The level of detail in their video does look quite impressive to me. How is it possible to render scenes using custom atoms instead of polygons on current hardware, in terms of speed and memory? If this is real, why has nobody else even thought about it so far? As an OpenGL developer I'm really baffled by this and would like to hear what experts have to say. I also don't want this to look like a cheap advert, so I will include the link to the video only if requested, in the comments section.

    Read the article

  • Smoothing touch-based animation in iPhone OpenGL?

    - by quixoto
    I know this is vague, but looking for general tips/help on this, as it's not an area of significant expertise for me. I have some iPhone code that's basically an EAGL view handling a single touch. The app draws (using GL) a circle via triangle fan at the touch point, and moves it when the user moves the touch point, and re-renders the view then. When dragging a finger slowly, the circle keeps up and consistent with the finger as it moves. If I scribble my finger quickly back and forth across the screen, the rendering doesn't keep up with the touch motion, so you see an optical illusion of "multiple" discrete circles on the screen "at once". (Normal persistence of vision illusion). This optical illusion is jarring. How can I make this look more natural? Can I blur the motion of the circle somehow? Is this result the evidence of some bad frame rate issue? I see this artifact even when nothing else is being rendered, so I think this might just be as fast as we can go. Any hints or suggestions? Much appreciated. Thanks.
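    One common trick, sketched below in C# to stay consistent with the other examples on this page (the method and parameter names are invented), is to decouple the drawn position from the raw touch position: keep the latest touch as a target and, every frame, move the rendered circle a fraction of the way toward it. Fast scribbles then produce a continuous, slightly trailing motion instead of widely spaced discrete circles.

        // using System; using System.Numerics;
        // Call once per rendered frame. smoothing around 0.2-0.5 feels responsive;
        // smaller values trail more but look smoother. dt is the frame time in seconds.
        static Vector2 SmoothTowards(Vector2 drawnPos, Vector2 touchPos, float smoothing, float dt)
        {
            // Exponential smoothing, normalized so the feel is stable across frame rates.
            float t = 1f - (float)Math.Pow(1f - smoothing, dt * 60f);
            return Vector2.Lerp(drawnPos, touchPos, t);
        }

    Another option is to draw several interpolated circles along the segment between the previous and current touch points each frame, which fills the visual gaps regardless of frame rate; it is also worth profiling to confirm the frame rate itself isn't the real bottleneck.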

    Read the article

  • Raytracing (LoS) on 3D hex-like tile maps

    - by herenvardo
    Greetings, I'm working on a game project that uses a 3D variant of hexagonal tile maps. Tiles are actually cubes, not hexes, but are laid out just like hexes (because a square can be extruded into a cube to go from 2D to 3D, but there is no 3D version of a hex). Rather than a verbose description, here is an example of a 4x4x4 map. (In the image I highlighted an arbitrary tile in green and its adjacent tiles in yellow to help describe how the whole thing is supposed to work; but the adjacency functions are not the issue, that's already solved.) I have a struct type to represent tiles, and maps are represented as a 3D array of tiles (wrapped in a Map class that adds some utility methods, but that's not very relevant). Each tile represents a perfectly cubic space, and they are all exactly the same size. Also, the offset between adjacent "rows" is exactly half the size of a tile. That's enough context; my question is: given the coordinates of two points A and B, how can I generate a list of the tiles (or rather, their coordinates) that a straight line between A and B would cross? That will later be used for a variety of purposes, such as determining line of sight, charge-path legality, and so on. BTW, this may be useful: my maps use (0,0,0) as a reference position. The 'jagging' of the map can be defined as offsetting each tile ((y+z) mod 2) * tileSize/2.0 to the right from the position it would have in a "sane" Cartesian system. For non-jagged rows that yields 0; for rows where (y+z) mod 2 is 1, it yields half a tile. I'm working in C# 4 targeting the .NET Framework 4.0, but I don't really need specific code, just an algorithm for this weird geometric/mathematical problem. I have been trying for several days to solve this, to no avail, and trying to draw the whole thing on paper to "visualize" it didn't help either :( . Thanks in advance for any answer
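    This is not an exact traversal for the jagged layout, just a brute-force sketch of one workable fallback: sample the segment AB at sub-tile steps, convert each sample point back to a tile coordinate by inverting the offset formula given above, and collect the distinct tiles. It assumes y and z sit on a plain grid and only x carries the ((y+z) mod 2) half-tile offset; with a small enough step it works as a practical line-of-sight test, though a razor-thin corner clip can in principle be missed.

        using System;
        using System.Collections.Generic;

        // Maps a world-space point to the tile containing it, under the layout described
        // above (x offset by half a tile when (y + z) is odd; y and z not offset).
        static (int x, int y, int z) WorldToTile(double wx, double wy, double wz, double tileSize)
        {
            int ty = (int)Math.Floor(wy / tileSize);
            int tz = (int)Math.Floor(wz / tileSize);
            double offset = (((ty + tz) % 2 + 2) % 2) * tileSize / 2.0;   // handles negative indices
            int tx = (int)Math.Floor((wx - offset) / tileSize);
            return (tx, ty, tz);
        }

        // Brute-force "supercover": sample the segment AB and collect every distinct tile hit.
        static List<(int x, int y, int z)> TilesOnSegment(
            double ax, double ay, double az, double bx, double by, double bz, double tileSize)
        {
            var result = new List<(int x, int y, int z)>();
            var seen = new HashSet<(int x, int y, int z)>();
            double length = Math.Sqrt((bx - ax) * (bx - ax) + (by - ay) * (by - ay) + (bz - az) * (bz - az));
            int steps = Math.Max(1, (int)Math.Ceiling(length / (tileSize * 0.25)));   // ~4 samples per tile width
            for (int i = 0; i <= steps; i++)
            {
                double t = (double)i / steps;
                var tile = WorldToTile(ax + (bx - ax) * t, ay + (by - ay) * t, az + (bz - az) * t, tileSize);
                if (seen.Add(tile))
                    result.Add(tile);
            }
            return result;
        }

    An exact alternative is a grid traversal in the style of Amanatides and Woo, adapted to step across the offset x boundaries, but the sampling version is much easier to get right first.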

    Read the article

  • Dialing on iPhone/iPod touch not working with documented procedures

    - by dave
    I'm trying to set up an iPhone app to dial the phone number of various sports stores using the tel:// URL passing method. I am developing on an iPod touch; usually on the touch you see the error message "Unsupported URL - This URL wasn't loaded tel://99887766" when you try to dial a number. I can't get this message to appear on the simulator or the iPod touch. Do I need to do some sort of fancy signing before the app will dial properly? I am using this code:
    [[UIApplication sharedApplication] openURL:[NSURL URLWithString:[NSString stringWithFormat:@"tel:%@", [selectedBar phoneNumber]]]];
    and I've tried adding the slashes:
    [[UIApplication sharedApplication] openURL:[NSURL URLWithString:[NSString stringWithFormat:@"tel://%@", [selectedBar phoneNumber]]]];
    but neither works. I have also tried this way:
    [[UIApplication application] openURL:[NSURL URLWithString:@"tel://99887766"]];
    and this way:
    NSMutableString *phone = [[@"+ 12 34 567 89 01" mutableCopy] autorelease];
    [phone replaceOccurrencesOfString:@" " withString:@"" options:NSLiteralSearch range:NSMakeRange(0, [phone length])];
    [phone replaceOccurrencesOfString:@"(" withString:@"" options:NSLiteralSearch range:NSMakeRange(0, [phone length])];
    [phone replaceOccurrencesOfString:@")" withString:@"" options:NSLiteralSearch range:NSMakeRange(0, [phone length])];
    NSURL *url = [NSURL URLWithString:[NSString stringWithFormat:@"tel:%@", phone]];
    [[UIApplication sharedApplication] openURL:url];
    No matter what I do, I can't get any response from the simulator / iPod touch indicating that it is dealing with a phone number. When I press the button associated with this code it doesn't crash; it's as if it processed the URL and decided not to do anything. I even put an NSLog(@"button called"); in just before the code to confirm the button was working, which it is.

    Read the article

  • How to track the touch vector?

    - by mystify
    I need to calculate the direction of a touch drag, to determine whether the user is dragging up the screen or down the screen. Actually pretty simple, right? But: 1) Finger goes down, you get -touchesBegan:withEvent: called. 2) You must wait until the finger moves and -touchesMoved:withEvent: gets called. 3) Problem: at this point it's risky to decide whether the user dragged up or down. My thoughts: check the time and accumulate the calculated vectors until it's safe to tell the direction of the touch. Easy? No. Think about it: what if the user holds the finger down on the same spot for 5 minutes, but THEN decides to move up or down? BANG! Your code would fail, because it tried to determine the direction of the touch while the finger wasn't really moving. Problem 2: when the finger goes down and stays on the same spot for a few seconds because the user is hesitating and thinking about what to do, you'll very likely get a lot of -touchesMoved:withEvent: calls, but with very minor changes in touch location. So my next thought: do the accumulation in -touchesMoved:withEvent:, but only once a certain movement threshold has been exceeded. I bet you have some better concepts in place?
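    The movement-threshold idea is essentially the standard solution: ignore the touch until it has moved some minimum distance from where it went down, and only then classify the direction from that accumulated delta. A tiny platform-agnostic sketch (C# for consistency with the other examples on this page; the class name and threshold value are invented):

        // using System;
        enum DragDirection { Undecided, Up, Down }

        class DragDetector
        {
            const float Threshold = 10f;              // points to move before deciding
            private float startY;
            public DragDirection Direction { get; private set; } = DragDirection.Undecided;

            public void TouchBegan(float y) { startY = y; Direction = DragDirection.Undecided; }

            public void TouchMoved(float y)
            {
                if (Direction != DragDirection.Undecided) return;   // already decided
                float dy = y - startY;
                if (Math.Abs(dy) >= Threshold)
                    Direction = dy < 0 ? DragDirection.Up : DragDirection.Down;   // screen y grows downward
            }
        }

    Because the decision is relative to the touch-down point rather than to a timer, a finger that rests for five minutes and then moves still classifies correctly, and sub-threshold jitter never triggers a decision.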

    Read the article

  • How do I check to see if my subview is being touched?

    - by Amy
    I went through this tutorial about how to animate sprites: http://icodeblog.com/2009/07/24/iphone-programming-tutorial-animating-a-game-sprite/ I've been attempting to expand on the tutorial by trying to make Ryu animate only when he is touched. However, the touch is not even being registered and I believe it has something to do with it being a subview. Here is my code:
    -(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
        UITouch *touch = [touches anyObject];
        if ([touch view] == ryuView) {
            NSLog(@"Touch");
        } else {
            NSLog(@"No touch");
        }
    }
    -(void)ryuAnims {
        NSArray *imageArray = [[NSArray alloc] initWithObjects:
            [UIImage imageNamed:@"1.png"],
            [UIImage imageNamed:@"2.png"],
            [UIImage imageNamed:@"3.png"],
            [UIImage imageNamed:@"4.png"],
            [UIImage imageNamed:@"5.png"],
            [UIImage imageNamed:@"6.png"],
            [UIImage imageNamed:@"7.png"],
            [UIImage imageNamed:@"8.png"],
            [UIImage imageNamed:@"9.png"],
            [UIImage imageNamed:@"10.png"],
            [UIImage imageNamed:@"11.png"],
            [UIImage imageNamed:@"12.png"],
            nil];
        ryuView.animationImages = imageArray;
        ryuView.animationDuration = 1.1;
        [ryuView startAnimating];
    }
    -(void)viewDidLoad {
        [super viewDidLoad];
        UIImageView *image = [[UIImageView alloc] initWithFrame:CGRectMake(100, 125, 150, 130)];
        ryuView = image;
        ryuView.image = [UIImage imageNamed:@"1.png"];
        ryuView.contentMode = UIViewContentModeBottomLeft;
        [self.view addSubview:ryuView];
        [image release];
    }
    This code compiles fine, however, when touching or clicking on ryu, nothing happens. I've also tried if([touch view] == ryuView.image) but that gives me this error: "Comparison of distinct Objective-C type 'struct UIImage *' and 'struct UIView *' lacks a cast." What am I doing wrong?

    Read the article

  • Rails, gmail: how to get the text/plain part from the body

    - by atmorell
    Hello, I am loading am email with IMAP and parsing it with mail. This works very well, however the mail.body.decoded field contains a lot of formatting. How do I dig out the plain/txt body of the email - ignore attachements, formatting etc. It works fine if I try with an email without html. source = imap.uid_fetch(uid, ['RFC822']).first.attr['RFC822'] mail = Mail.new(source) This body content looks like this: Mail::Body:0x7f36ed468270 @epilogue="", @boundary="_004_4C49171DCB8C4540844E69DD39FDD98Ffirm_", @encoding="7bit", @raw_source="--_004_4C49171DCB8C4540844E69DD39FDD98Ffirm_\r\nContent-Type: multipart/alternative;\r\n\tboundary=\"_000_4C49171DCB8C4540844E69DD39FDD98Ffirm_\"\r\n\r\n--_000_4C49171DCB8C4540844E69DD39FDD98Ffirm_\r\nContent-Type: text/plain; charset=\"iso-8859-1\"\r\nContent-Transfer-Encoding: quoted-printable\r\n\r\ndasdsasda\r\n\r\n\r\n\r\nMed venlig hilsen / Med V=E4nlig H=E4lsning / Best Regards\r\r\nAsbj=F8rn Toke Morell. .\r\n+45 7020 0160\r\n+45 2152 0015\r\n[cid:[email protected]]\r\nhttp://www..dk\r\n\r\n\r\n--_000_4C49171DCB8C4540844E69DD39FDD98Ffirm_\r\nContent-Type: text/html; charset=\"iso-8859-1\"\r\nContent-Transfer-Encoding: quoted-printable\r\n\r\n<html>headheadbody style3D"word-wrap: break-word; -webkit-nbsp-mode:=\r\n space; -webkit-line-break: after-white-space; ">dasdsasda<br><div apple-co=\r\nntent-edited=3D"true">\r\n<span class=3D"Apple-style-span" style=3D"border-collapse: separate; color:=\r\n rgb(0, 0, 0); font-family: Helvetica; font-size: medium; font-style: norma=\r\nl; font-variant: normal; font-weight: normal; letter-spacing: normal; line-=\r\nheight: normal; orphans: 2; text-align: auto; text-indent: 0px; text-transf=\r\norm: none; white-space: normal; widows: 2; word-spacing: 0px; -webkit-borde=\r\nr-horizontal-spacing: 0px; -webkit-border-vertical-spacing: 0px; -webkit-te=\r\nxt-decorations-in-effect: none; -webkit-text-size-adjust: auto; -webkit-tex=\r\nt-stroke-width: 0px; "><span class=3D"Apple-style-span" style=3D"font-famil=\r\ny: Calibri, sans-serif; font-size: 15px; "><span class=3D"Apple-style-span"=\r\n style=3D"border-collapse: separate; color: rgb(0, 0, 0); font-family: Helv=\r\netica; font-size: medium; font-style: normal; font-variant: normal; font-we=\r\night: normal; letter-spacing: normal; line-height: normal; orphans: 2; text=\r\n-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-sp=\r\nacing: 0px; -webkit-border-horizontal-spacing: 0px; -webkit-border-vertical=\r\n-spacing: 0px; -webkit-text-decorations-in-effect: none; -webkit-text-size-=\r\nadjust: auto; -webkit-text-stroke-width: 0px; "><span class=3D"Apple-style-=\r\nspan" style=3D"font-family: Calibri, sans-serif; font-size: 15px; "><div st=\r\nyle=3D"margin-top: 0cm; margin-right: 0cm; margin-bottom: 0.0001pt; margin-=\r\nleft: 0cm; font-size: 11pt; font-family: Calibri, sans-serif; "><font class=\r\n=3D"Apple-style-span" color=3D"#000080" face=3D"'Times New Roman', serif" s=\r\nize=3D"3"><span class=3D"Apple-style-span" style=3D"font-size: 13px; "><br =\r\nclass=3D"Apple-interchange-newline"><br></span></font></div><div style=3D"m=\r\nargin-top: 0cm; margin-right: 0cm; margin-bottom: 0.0001pt; margin-left: 0c=\r\nm; font-size: 11pt; font-family: Calibri, sans-serif; "><font class=3D"Appl=\r\ne-style-span" color=3D"#000080" face=3D"'Times New Roman', serif" size=3D"3=\r\n"><span class=3D"Apple-style-span" style=3D"font-size: 13px; "><br></span><=\r\n/font></div><div style=3D"margin-top: 0cm; margin-right: 0cm; margin-bottom=\r\n: 0.0001pt; 
margin-left: 0cm; font-size: 11pt; font-family: Calibri, sans-s=\r\nerif; "><span style=3D"font-size: 10pt; font-family: 'Times New Roman', ser=\r\nif; color: navy; ">Med venlig hilsen / Med V=E4nlig H=E4lsning / Best Regar=\r\nds&nbsp;<br>firm<br>Asbj=F8rn Toke Morell... This is the ony relevant from information from the body: 'ndasdsasda\r\n\r\n\r\n\r\nMed venlig hilsen / Med V=E4nlig H=E4lsning / Best Regards\r\r\nAsbj=F8rn Toke Morell' Any ideas?

    Read the article

  • Where to find good 3D articles for WPF?

    - by Ankit Rathod
    Hello, I am a beginner in WPF. I am basically a Silverlight guy, and as far as I know Silverlight doesn't support the full 3D model of WPF. I am getting interested in learning 3D in WPF. When I google for it, I get very old links from three years back, when WPF was still known as Avalon, and they may not be of any use in v4.0. Can anybody point me to some links where I can learn WPF 3D from the basics? Thanks in advance :)

    Read the article

  • Background problem with an OpenGL 3D object over the iPhone camera view

    - by user292127
    Hi, I'm loading OpenGL 3D objects over the iPhone camera view. When the OpenGL view is loaded, it draws the 3D object against a black background, and that black background blocks the camera view. I just want to clear the background color of the OpenGL view so that only the 3D object is drawn over the camera view. I have tried glClearColor(1.0, 1.0, 1.0, 0.0); but the background color doesn't change. I have also tried clearing the background color of the OpenGL view using [glview setBackgroundColor:[UIColor clearColor]];. No change in background color. Can anyone help me with this? I'm new to OpenGL. Thanks in advance

    Read the article

  • touchesBegan and other touch events not getting detected in UINavigationController

    - by SaltyNuts
    In short, I want to detect a touch on the navigation controller's title bar, but I'm having trouble catching any touches at all! Everything is done without IB, if that makes a difference. My app delegate's .m file contains:
    MyViewController *viewController = [[MyViewController alloc] init];
    navigationController = [[UINavigationController alloc] initWithRootViewController:viewController];
    [window addSubview:navigationController.view];
    There are a few other subviews added to this window in a way that overlays navigationController, leaving only the navigation bar visible. MyViewController is a subclass of UIViewController and its .m file contains:
    - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
        for (UITouch *touch in touches) {
            NSLog(@"ended\n");
        }
    }
    - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
        for (UITouch *touch in touches) {
            NSLog(@"began\n");
        }
    }
    I also tried putting these functions directly into the app delegate's .m file, but the console remains blank. What am I doing wrong?

    Read the article

  • Detect if touch device

    - by Kilnr
    Hello, I'm writing a MIDlet using the Kuix UI toolkit, and I want to adapt the toolkit depending on whether the current device has a touch screen. (These changes include making buttons bigger, for easier tapping.) Is there a way to detect whether the device has a touch screen using J2ME (MIDP 2)? [edit] As a (crappy) workaround I currently check the screen height instead: a screen with a height greater than 240 is likely a touch screen... Please let me know if there are more reliable ways.

    Read the article
