Search Results

Search found 7346 results on 294 pages for 'touch flo 3d'.


  • XNA vs SlimDX for offscreen renderer

    - by Groky
    Hello, I realise there are numerous questions on here asking about choosing between XNA and SlimDX, but these all relate to game programming. A little background: I have an application that renders scenes from XML descriptions. Currently I am using WPF 3D and this mostly works, except that WPF has no way to render scenes offscreen (i.e. on a server, without displaying them in a window), and rendering to a bitmap causes WPF to fall back to software rendering. So I'm faced with having to write my own renderer. Here are the requirements:

    - Mix of 3D and 2D elements.
    - Relatively few elements per scene (tens of meshes, tens of 2D elements).
    - Large scenes (up to 3000px square for print).
    - Only a single frame will be rendered (i.e. FPS is not an issue).
    - Opacity masks.
    - Pixel shaders.
    - Software fallback (servers may or may not have a decent graphics card).
    - Possibility of being rendered offscreen.

    As you can see it's pretty simple stuff, and WPF manages it quite nicely except for the not-being-able-to-export-the-scene problem. In particular, I don't need many of the things usually needed in game development. So bearing that in mind, would you choose XNA or SlimDX? The non-rendering portion of the code is already written in C#, so I want to stick with that.

    Read the article

  • Ray-box Intersection Theory

    - by Myx
    Hello: I wish to determine the intersection point between a ray and a box. The box is defined by its min and max 3D coordinates, and the ray is defined by its origin and the direction in which it points. Currently, I am forming a plane for each face of the box and intersecting the ray with each plane. If the ray intersects a plane, I check whether or not the intersection point is actually on the surface of the box. If so, I check whether it is the closest intersection for this ray and return the closest one. The way I check whether the plane-intersection point is on the box surface itself is through the following function, where corner1 is one corner of the rectangle for that box face and corner2 is the opposite corner:

        bool PointOnBoxFace(R3Point point, R3Point corner1, R3Point corner2)
        {
            double min_x = min(corner1.X(), corner2.X());
            double max_x = max(corner1.X(), corner2.X());
            double min_y = min(corner1.Y(), corner2.Y());
            double max_y = max(corner1.Y(), corner2.Y());
            double min_z = min(corner1.Z(), corner2.Z());
            double max_z = max(corner1.Z(), corner2.Z());
            if(point.X() >= min_x && point.X() <= max_x &&
               point.Y() >= min_y && point.Y() <= max_y &&
               point.Z() >= min_z && point.Z() <= max_z)
                return true;
            return false;
        }

    My implementation works most of the time, but sometimes it gives me the wrong intersection. I was wondering if the way I'm checking whether the intersection point is on the box is correct, or if I should use some other algorithm. Thanks.
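
    A common alternative that avoids the point-on-face check entirely is the "slab" method: clip the ray against the three pairs of axis-aligned planes and keep the overlapping parameter interval. It is far less sensitive to the floating-point edge cases that typically cause intermittent wrong hits. A minimal sketch, assuming plain double[3] vectors rather than the R3Point type above:

        #include <algorithm>
        #include <cmath>

        // Slab test: returns true if the ray origin + t*dir (t >= 0) hits the box
        // [boxMin, boxMax]; tHit receives the distance to the nearest hit.
        bool RayBoxIntersect(const double origin[3], const double dir[3],
                             const double boxMin[3], const double boxMax[3],
                             double &tHit)
        {
            double tNear = -1e30, tFar = 1e30;
            for (int i = 0; i < 3; ++i) {
                if (std::fabs(dir[i]) < 1e-12) {
                    // Ray parallel to this slab: must already lie inside it.
                    if (origin[i] < boxMin[i] || origin[i] > boxMax[i]) return false;
                } else {
                    double t1 = (boxMin[i] - origin[i]) / dir[i];
                    double t2 = (boxMax[i] - origin[i]) / dir[i];
                    if (t1 > t2) std::swap(t1, t2);
                    tNear = std::max(tNear, t1);    // latest entry across all slabs
                    tFar  = std::min(tFar, t2);     // earliest exit across all slabs
                    if (tNear > tFar) return false; // entry after exit: ray misses
                }
            }
            if (tFar < 0) return false;             // box entirely behind the origin
            tHit = (tNear >= 0) ? tNear : tFar;     // origin inside box: use exit point
            return true;
        }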

    Read the article

  • Why can't I get the correct class of a custom class through isKindOfClass:?

    - by Anthony Chan
    Hi, I've created a custom class AnimalView, a subclass of UIView containing a UILabel and a UIImageView:

        @interface AnimalView : UIView {
            UILabel *nameLabel;
            UIImageView *picture;
        }

    Then I added several AnimalViews to the view controller's view. In the touchesBegan:withEvent: method, I want to detect whether the touched object is an AnimalView or not. Here is the code for the view controller:

        @implementation AppViewController

        - (void)viewDidLoad {
            UIScrollView *scrollView = [[UIScrollView alloc] initWithFrame:...
            [self.view addSubview:scrollView];
            for (int i = 0; i < 10; i++) {
                AnimalView *newAnimal = [[AnimalView alloc] init];
                // customization of newAnimal
                [scrollView addSubview:newAnimal];
            }
        }

        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
            UITouch *touch = [touches anyObject];
            UIView *hitView = touch.view;
            if ([hitView isKindOfClass:[AnimalView class]]) {
                AnimalView *animal = (AnimalView *)hitView;
                [animal doSomething];
            }
        }

    However, nothing happens when I tap on the animal. When I check the class of hitView with NSLog(@"%@", [hitView class]), it always shows UIView instead of AnimalView. Is it true that the AnimalView changed to a UIView when it was added to the view controller? Is there any way I can get back the original class of a custom class?

    Read the article

  • getting a tiled image collection on the iPad (deepzoom)

    - by Chris B
    I have a set of tiled image collections created via Microsoft's Deep Zoom Composer, and a Silverlight app that currently consumes them for display via MultiScaleImage. It's all working pretty well; I'd just like to get some experience with iPad programming, and I have a couple of ideas for iPad applications. All my ideas rely on being able to display/manipulate these tiled image sets on the iPad, and I just picked up an iMac to facilitate this. I'm not seeing any Objective-C / Cocoa Touch libraries for this though, so I'm assuming I will have to roll my own. (I saw the Seadragon Ajax component, which is pretty slick, but I'm dealing with collections here, which it doesn't support. I would also like to build this as a native app just to get the experience.) The only open source project I found for displaying/manipulating the tiled image sets was OpenZoom, a Flash component. I'm not too familiar with ActionScript either (Python, Java, C#, and C are the only languages I have really used), but briefly inspecting the code I didn't have any issues with it, and I can probably use it for hints on how to swap the tiles in and out, etc. But, as I'm pretty new to Objective-C / Cocoa Touch, some pointers in the right direction would be appreciated:

    1) Are there any other projects out there I am missing, or is OpenZoom my best bet for reference?
    2) Should I be doing this display with the UIKit framework, or as an OpenGL display?
    3) Any other suggestions/pointers that I didn't think to ask?

    Read the article

  • How to change the view angle and label value of a chart .NET C#

    - by George
    Short Description
    I am using charts for a specific application where I need to change the viewing angle of the rendered 3D pie chart, and change the automatic labels from pie label names to the corresponding pie values. This is how the chart looks.

    Initialization
    This is how I initialize it:

        Dictionary<string, decimal> secondPersonsWithValues = HistoryModel.getSecondPersonWithValues();
        decimal[] yValues = new decimal[secondPersonsWithValues.Values.Count]; // VALUES
        string[] xValues = new string[secondPersonsWithValues.Keys.Count];     // LABELS
        secondPersonsWithValues.Keys.CopyTo(xValues, 0);
        secondPersonsWithValues.Values.CopyTo(yValues, 0);
        incomeExpenseChart.Series["Default"].ChartType = System.Windows.Forms.DataVisualization.Charting.SeriesChartType.Pie;
        incomeExpenseChart.Series["Default"].Points.DataBindXY(xValues, yValues);
        incomeExpenseChart.ChartAreas["Default"].Area3DStyle.Enable3D = true;
        incomeExpenseChart.Series["Default"].CustomProperties = "PieLabelStyle=Outside";
        incomeExpenseChart.Legends["Default"].Enabled = true;
        incomeExpenseChart.ChartAreas["Default"].Area3DStyle.LightStyle = System.Windows.Forms.DataVisualization.Charting.LightStyle.Realistic;
        incomeExpenseChart.Series["Default"]["PieDrawingStyle"] = "SoftEdge";

    Basically I am querying data from a database using HistoryModel.getSecondPersonWithValues() to get pairs as Dictionary<string, decimal>, where the key is the person and the value is the amount.

    Problem #1
    What I need is to be able to change the marked labels from person names to the amounts, or add another label with the amounts in the same colors (see image).

    Problem #2
    Another problem is that I need to change the viewing angle of the 3D pie chart. Maybe it's very simple and I just don't know the needed property, or maybe I need to override some paint event. Either way, any kind of help would be appreciated. Thanks in advance, George.

    Read the article

  • iPhone SDK allow touches to affect multiple views

    - by Parad0x13
    I have a main view that has two buttons on it which call methods to display the next image and display the previous image. In this case the 'image' is a class that inherits from UIImageView and has multiple pictures on it that you can interact with; I call this class a 'Pane'. The pane itself handles all the user interaction, while the main view controls the display of the next and previous panes with the buttons.

    Here is my dilemma: because the pane fully covers the main view, it won't allow the user to tap the buttons on the main view! So once a pane pops up, you cannot change it via the buttons! Is there a way to allow touches through transparent parts of a view, and if not, how in the world do I achieve this?!

    - I cannot pass touchesBegan or any of those methods from the pane to the superview, because all of the button touch methods are wired up in the xib file.
    - I cannot insert the pane under the control panel, because then you wouldn't be able to interact with the pane.
    - As far as I know, there's no way to pass touch events to every single pane within the paneHoldingArray that belongs to the main view.
    - I cannot add the command buttons inside the pane, because I want to be able to replace the command button's image with a thumbnail render of the next/previous pane.

    I've been stuck on this for a very long time. Please, somebody help me out with a fix or a new way to re-engineer the code so that it will work!

    Read the article

  • Vertex Buffers in OpenGL

    - by JB
    I'm making a small 3D graphics game/demo for personal learning. I know D3D9 and quite a bit about D3D11, but little about OpenGL at the moment, so I'm intending to abstract out the actual rendering of the graphics so that my scene graph and everything "above" it needs to know little about how to actually draw. I intend to make it work with D3D9, then add D3D11 support, and finally OpenGL support, purely as a learning exercise in 3D graphics and abstraction. I don't know much about OpenGL at this point, though, and don't want my abstract interface to expose anything that isn't simple to implement in OpenGL. Specifically I'm looking at vertex buffers. In D3D they are essentially an array of structures, but looking at the OpenGL interface, the equivalent seems to be vertex arrays. However, these seem to be organised rather differently: you need a separate array for vertices, one for normals, one for texture coordinates, etc., and you set them with glVertexPointer, glTexCoordPointer and so on. I was hoping to implement a VertexBuffer interface much like the DirectX one, but it looks like in D3D you have an array of structures and in OpenGL you need a separate array for each element, which makes a common abstraction quite hard to make efficient. Is there any way to use OpenGL in a similar way to DirectX? Or any suggestions on how to come up with a higher-level abstraction that will work efficiently with both systems?
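
    Worth noting: OpenGL's vertex arrays do not actually require one array per attribute. The stride argument of the gl*Pointer calls lets every attribute walk a single interleaved array of structs, which maps almost one-to-one onto a D3D9 vertex buffer. A sketch of the fixed-function (GL 1.x) client-array path:

        #include <GL/gl.h>

        struct Vertex {                 // one struct per vertex, D3D-style
            float px, py, pz;           // position
            float nx, ny, nz;           // normal
            float u, v;                 // texture coordinates
        };

        void DrawMesh(const Vertex *verts, int count)
        {
            glEnableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_NORMAL_ARRAY);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);

            // Same stride for every attribute: the data stays interleaved
            // exactly as it would be in a D3D9 vertex buffer.
            glVertexPointer(3, GL_FLOAT, sizeof(Vertex), &verts[0].px);
            glNormalPointer(GL_FLOAT, sizeof(Vertex), &verts[0].nx);
            glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), &verts[0].u);

            glDrawArrays(GL_TRIANGLES, 0, count);
        }

    With vertex buffer objects (glGenBuffers / glBindBuffer / glBufferData) the same interleaved layout lives in driver-managed memory, which is the closest analogue to a D3D vertex buffer, so a common VertexBuffer abstraction over both APIs is quite workable.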

    Read the article

  • Detecting the axis of rotation from a pointcloud

    - by tfinniga
    I'm trying to auto-detect the axis of rotation of a 3D point cloud. In other words: if I took a small 3D point cloud, chose a single axis of rotation, and made several copies of the points at different rotation angles, I would get a larger point cloud. The input to my algorithm is the larger point cloud, and the desired output is the single axis of symmetry. Eventually I'm going to compute the correspondences between points that are rotations of each other. The size of the larger point cloud is on the order of 100K points, and the number of rotational copies made is unknown. The rotation angles in my case have constant deltas, but don't necessarily span 360 degrees. For example, I might have 0, 20, 40, 60. Or I might have 0, 90, 180, 270. But I won't have 0, 13, 78, 212 (or if I do, I don't care to detect it). This seems like a computer vision problem, but I'm having trouble figuring out how to find the axis precisely. The input will generally be very clean, close to float accuracy.
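
    One possible starting point, under the assumption that the duplicated copies make the cloud roughly rotationally symmetric: the covariance matrix of such a cloud has two (nearly) equal eigenvalues for the directions perpendicular to the axis, so the eigenvector of the odd-one-out eigenvalue is a candidate axis, passing through the centroid. A sketch using the Eigen library; partial sweeps like 0/20/40/60 break the symmetry, so treat the result as an initial guess to refine against the point correspondences:

        #include <Eigen/Dense>
        #include <vector>

        // Returns a unit-length candidate axis of rotational symmetry.
        // The axis line itself passes through the cloud's centroid.
        Eigen::Vector3d EstimateAxis(const std::vector<Eigen::Vector3d> &pts)
        {
            Eigen::Vector3d centroid = Eigen::Vector3d::Zero();
            for (const auto &p : pts) centroid += p;
            centroid /= double(pts.size());

            Eigen::Matrix3d cov = Eigen::Matrix3d::Zero();
            for (const auto &p : pts) {
                Eigen::Vector3d d = p - centroid;
                cov += d * d.transpose();
            }

            Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> es(cov);
            Eigen::Vector3d ev = es.eigenvalues();   // sorted ascending
            // The axis direction is the eigenvector whose eigenvalue differs
            // most from the other two (those two are nearly equal by symmetry).
            int axis = (ev[1] - ev[0] > ev[2] - ev[1]) ? 0 : 2;
            return es.eigenvectors().col(axis).normalized();
        }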

    Read the article

  • Want to display a 3D model on the iPhone: how to get started?

    - by JeremyReimer
    I want to display and rotate a single 3D model, preferably textured, on the iPhone. Doesn't have to zoom in and out, or have a background, or anything. I have the following: an iPhone a MacBook the iPhone SDK Blender My knowledge base: I can make 3D models in various 3D programs (I'm most comfortable with 3D Studio Max, which I once took a course on, but I've used others) General knowledge of procedural programming from years ago (QuickBasic - I'm old!) Beginner's knowledge of object-oriented programming from going through simple Java and C# tutorials (Head Start C# book and my wife's intro to OOP course that used Java) I have managed to display a 3D textured model and spin it using a tutorial in C# I got off the net (I didn't just copy and paste, I understand basically how it works) and the XNA game development library, using Visual Studio on Windows. What I do not know: Much about Objective C Anything about OpenGL or OpenGL ES, which the iPhone apparently uses Anything about XCode My main problem is that I don't know where to start! All the iPhone books I found seem to be about creating GUI applications, not OpenGL apps. I found an OpenGL book but I don't know how much, if any, applies to iPhone development. And I find the Objective C syntax somewhat confusing, with the weird nested method naming, things like "id" that don't make sense, and the scary thought that I have to do manual memory management. Where is the best place to start? I couldn't find any tutorials for this sort of thing, but maybe my Google-Fu is weak. Or maybe I should start with learning Objective C? I know of books like Aaron Hillgrass', but I've also read that they are outdated and much of the sample code doesn't work on the iPhone SDK, plus it seems geared towards the Model-View-Controller paradigm which doesn't seem that suited for 3D apps. Basically I'm confused about what my first steps should be.

    Read the article

  • iPhone OpenGL ES - How to Pick

    - by Ali Nadalizadeh
    I'm working on an OpenGL ES 1 app which displays a 2D grid and allows the user to navigate and scale/rotate it. I need to know the exact translation of view touch coordinates into my OpenGL world and grid cell. Are there any helpers to do the reverse of the last few transforms I apply for navigation, or should I calculate and do the matrix math by hand?
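
    OpenGL ES 1 has no built-in picking, but the navigation transforms can be inverted by hand, which is exactly what gluUnProject does on the desktop: map the touch into normalized device coordinates, multiply by the inverse of projection * modelview, and intersect the resulting ray with the grid plane. A sketch, assuming hypothetical Mat4/Vec4 helper types with inverse() and a transform(x, y, z, w) that returns the homogeneous product:

        // Unproject a touch at (touchX, touchY) onto the world plane z = 0.
        // Viewport is width x height with UIKit's top-left origin.
        Vec4 UnprojectToGrid(float touchX, float touchY,
                             const Mat4 &projection, const Mat4 &modelview,
                             float width, float height)
        {
            // Screen -> normalized device coordinates, flipping y.
            float nx = 2.0f * touchX / width - 1.0f;
            float ny = 1.0f - 2.0f * touchY / height;

            Mat4 invPM = (projection * modelview).inverse();

            // Unproject two depths to get a ray through the touch point.
            Vec4 nearPt = invPM.transform(nx, ny, -1.0f, 1.0f); nearPt /= nearPt.w;
            Vec4 farPt  = invPM.transform(nx, ny,  1.0f, 1.0f); farPt  /= farPt.w;

            // Intersect the near->far ray with the grid plane z = 0.
            float t = nearPt.z / (nearPt.z - farPt.z);
            return nearPt + (farPt - nearPt) * t;
        }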

    Read the article

  • Using Unreal 3 Engine within a .NET application

    - by bitbonk
    Now that the Unreal Development Kit for the Unreal 3 engine is free, I am thinking about utilizing it for an application. Do you think it is possible to embed an Unreal 3-powered 3D window into a .NET (WPF or Windows Forms) application and control parts of the game objects therein using C#? Is the engine plain C++? Or COM? Or is there a .NET wrapper or something?

    Read the article

  • UITextView w/ Syntax Highlighting

    - by Travis
    Is there a common library, parser, etc. for Cocoa or Cocoa Touch that can take a chunk of text and apply the proper syntax highlighting? As a simple example, I'd like to have a UITextView with C/C++ syntax highlighting.

    Read the article

  • Passing a ManagedObjectContext to a second view

    - by amo
    I'm writing my first iPhone/Cocoa app. It has two table views inside a navigation view. When you touch a row in the first table view, you are taken to the second table view. I would like the second view to display records from the Core Data entities related to the row you touched in the first view. I have the Core Data data showing up fine in the first table view. You can touch a row and go to the second table view. I'm able to pass info from the selected object from the first to the second view. But I cannot get the second view to do its own Core Data fetching. For the life of me, I cannot get the managedObjectContext object to pass to the second view controller. I don't want to do the lookups in the first view and pass a dictionary, because I want to be able to use a search field to refine results in the second view, as well as insert new entries into the Core Data store from there.

    Here's the method that transitions from the first to the second view:

        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            // Navigation logic may go here -- for example, create and push another view controller.
            NSManagedObject *selectedObject = [[self fetchedResultsController] objectAtIndexPath:indexPath];
            SecondViewController *secondViewController = [[SecondViewController alloc] initWithNibName:@"SecondView" bundle:nil];
            secondViewController.tName = [[selectedObject valueForKey:@"name"] description];
            secondViewController.managedObjectContext = [self managedObjectContext];
            [self.navigationController pushViewController:secondViewController animated:YES];
            [secondViewController release];
        }

    And this is the code inside SecondViewController that crashes:

        - (void)viewDidLoad {
            [super viewDidLoad];
            self.title = tName;
            NSError *error;
            if (![[self fetchedResultsController] performFetch:&error]) { // <-- crashes here
                // Handle the error...
            }
        }

        - (NSFetchedResultsController *)fetchedResultsController {
            if (fetchedResultsController != nil) {
                return fetchedResultsController;
            }
            /* Set up the fetched results controller. */
            // Create the fetch request for the entity.
            NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
            // Edit the entity name as appropriate.
            // **** crashes on the next line because managedObjectContext == 0x0
            NSEntityDescription *entity = [NSEntityDescription entityForName:@"SecondEntity"
                                                      inManagedObjectContext:managedObjectContext];
            [fetchRequest setEntity:entity];
            // <snip> ... more code here from the Apple template; never gets executed because of the crash
            return fetchedResultsController;
        }

    Any ideas on what I am doing wrong here? managedObjectContext is a retained property.

    UPDATE: I inserted NSLog(@"%@", [[managedObjectContext registeredObjects] description]); in viewDidLoad and it appears managedObjectContext is being passed just fine. It's still crashing, though:

        Terminating app due to uncaught exception 'NSInternalInconsistencyException',
        reason: '+entityForName: could not locate an NSManagedObjectModel for entity name 'SecondEntity''

    Read the article

  • Vector math: finding coördinates on a plane between 2 vectors

    - by Will Kru
    I am trying to generate a 3D tube along a spline. I have the coördinates of the spline (x1,y1,z1 - x2,y2,z2 - etc.), which you can see in the illustration in yellow. At those points I need to generate circles whose vertices are to be connected at a later stage. The circles need to be perpendicular to the 'corners' of two line segments of the spline to form a correct tube. Note that the number of segments is kept low for illustration purposes.

    [apparently I'm not allowed to post images so please view the image at this link]
    http://img191.imageshack.us/img191/6863/18720019.jpg

    I am as far as being able to calculate the vertices of each ring at each point of the spline, but they all lie in the same plane, i.e. at the same angle. I need them to be rotated according to their 'legs' (which A & B are to C, for instance). I've been thinking this over and thought of the following:

    - two line segments can be seen as two vectors (A & B in the illustration)
    - the corner (C in the illustration) is where a ring of vertices needs to be calculated
    - I need to find the plane on which all of the vertices will reside
    - I then can use this plane (= normal vector?) to calculate new vectors from the center point, which is C, and find their x,y,z using radius * sin and cos

    However, I'm really confused by the math part of this. I read about the dot product, but that returns a scalar, which I don't know how to apply in this case. Can someone point me in the right direction?

    [edit]
    To give a bit more info on the situation: I need to construct a buffer of floats, which -in groups of 3- describe vertex positions and will be connected by OpenGL ES, given another buffer with indices, to form polygons. To give shape to the tube, I first created an array of floats, which -in groups of 3- describe control points in 3D space. Then, along with a variable for segment density, I pass these control points to a function that uses them to create a Catmull-Rom spline and returns it in the form of another array of floats, which -again in groups of 3- describe the vertices of the Catmull-Rom spline. On each of these vertices I want to create a ring of vertices, which can also differ in density (amount of smoothness / vertices per ring). All former vertices (the control points and those that describe the Catmull-Rom spline) are discarded. Only the vertices that form the tube rings will be passed to OpenGL, which in turn will connect those to form the final tube. I am as far as being able to create the Catmull-Rom spline and create rings at the positions of its vertices; however, the rings are all angled the same way instead of following the spline's path.
    [/edit]

    Thanks!
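
    For the ring orientation, no explicit plane equation is needed: averaging the two unit segment directions gives the tangent at the corner C, and two cross products then produce a pair of perpendicular in-plane axes to sweep with sin/cos. A sketch, assuming minimal Vec3 helpers (normalize, cross, dot, and the usual operators):

        #include <cmath>

        // A, C, B: previous point, corner, next point along the spline.
        // Writes ringVerts circle vertices of the given radius into out[].
        void RingAtCorner(const Vec3 &A, const Vec3 &C, const Vec3 &B,
                          float radius, int ringVerts, Vec3 *out)
        {
            // Tangent at the corner: average of incoming and outgoing directions.
            Vec3 tangent = normalize(normalize(C - A) + normalize(B - C));

            // Any vector not parallel to the tangent seeds the perpendicular frame.
            Vec3 seed(0, 1, 0);
            if (std::fabs(dot(tangent, seed)) > 0.99f) seed = Vec3(1, 0, 0);

            Vec3 u = normalize(cross(tangent, seed)); // first in-plane axis
            Vec3 v = cross(tangent, u);               // second in-plane axis

            for (int i = 0; i < ringVerts; ++i) {
                float a = 2.0f * float(M_PI) * i / ringVerts;
                out[i] = C + (u * std::cos(a) + v * std::sin(a)) * radius;
            }
        }

    One caveat: seeding every ring from a fixed up vector can make the tube twist where the spline bends past the seed direction. Carrying the previous ring's u axis along the spline (parallel transport) keeps consecutive rings aligned, so the later vertex-connection step doesn't shear the tube.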

    Read the article

  • Handling touches in UITableViewController

    - by subw
    I want to implement handling of an additional swipe gesture in my UITableViewController. However, it seems that in the case of table views, the controller's usual touch-handling methods like touchesBegan:withEvent: are not called. How can I handle touches on a UITableView?

    Read the article

  • Problem with Xcode organizer screen capture

    - by paul_sns
    I'm currently running Xcode 3.2.2 on Snow Leopard. When I open the Organizer's Screenshots pane, I see a list of screenshots I took before. But when I click the Capture button, nothing happens. I don't see any messages popping up or any errors in the Console tab. I also tried restoring the iPod Touch (2nd gen), but that didn't help. Any thoughts? Thanks!

    Read the article

  • How to ask UIImageView if MultipleTouchEnabled is "YES"

    - by Rob
    I have created a few UIImageViews programmatically, but I have a feeling that even though I call setMultipleTouchEnabled:YES during setup, it is not getting set properly, and it's leading to multi-touch issues. My question is: within touchesBegan, how do I go about asking the UIImageView that was touched whether it has multipleTouchEnabled or not? I am fairly new to this, so I'm really stumbling through code and learning as I go (with your help, of course). Thank you ahead of time!

    Read the article

  • iPhone OS: Tap status bar to scroll to top doesn't work after remove/add back

    - by avocade
    Using this method to hide the status bar: [[UIApplication sharedApplication] setStatusBarHidden:YES animated:YES]; When setting "hidden" back to NO, the tap-to-scroll-to-top (in UIWebView, UITableView, whatever) doesn't work any more, and requires a restart of the app to get the functionality back. Is this a bug (I filed a rdar anyhow) or have I missed a step? Should I perhaps expect this behavior since the statusBar "loses touch" somehow with the respective view?

    Read the article

  • A polygon creation program: adjacent-face culling not working right. Any solutions?

    - by user292767
    I'm working on a simple program that converts a 3D array into a polygon structure similar to voxels. It reads the array, creates cubes for positions with a value, and checks adjacent directions (north, south, east, west, up, down) for a null value before setting up a cube's face. A link that displays the full code, written in GLBasic, is below, along with some snapshots to show you what's up. link text
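
    Without the linked code it is hard to pinpoint the bug, but the usual culprit in this kind of mesher is the neighbour test at the array borders: an out-of-bounds neighbour must count as empty, or edge cubes lose their outward faces. A minimal sketch of the standard test, with hypothetical grid dimensions and a hypothetical emitFace:

        const int NX = 16, NY = 16, NZ = 16;       // assumed grid dimensions
        void emitFace(int x, int y, int z, int f); // hypothetical quad emitter

        // 6 axis-aligned neighbour offsets: +x, -x, +y, -y, +z, -z.
        const int DX[6] = { 1, -1, 0, 0, 0, 0 };
        const int DY[6] = { 0, 0, 1, -1, 0, 0 };
        const int DZ[6] = { 0, 0, 0, 0, 1, -1 };

        bool Solid(int grid[NX][NY][NZ], int x, int y, int z)
        {
            // Everything outside the array counts as empty, so border cells
            // still get their outward faces.
            if (x < 0 || y < 0 || z < 0 || x >= NX || y >= NY || z >= NZ)
                return false;
            return grid[x][y][z] != 0;
        }

        void BuildMesh(int grid[NX][NY][NZ])
        {
            for (int x = 0; x < NX; ++x)
                for (int y = 0; y < NY; ++y)
                    for (int z = 0; z < NZ; ++z) {
                        if (!Solid(grid, x, y, z)) continue;
                        for (int f = 0; f < 6; ++f)
                            if (!Solid(grid, x + DX[f], y + DY[f], z + DZ[f]))
                                emitFace(x, y, z, f); // face borders empty space
                    }
        }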

    Read the article
