Search Results

Search found 338 results on 14 pages for 'rectangles'.

Page 12/14 | < Previous Page | 8 9 10 11 12 13 14  | Next Page >

  • Simple iPhone drawing app with Quartz 2D

    - by Mr guy 4
    I am making a simple iPhone drawing program as a personal side-project. I capture touch events in a subclassed UIView and render the actual drawing to a separate CGLayer. After each render, I call [self setNeedsDisplay], and in the drawRect: method I draw the CGLayer to the screen context. This all works great and performs decently for drawing rectangles. However, I just want a simple "freehand" mode like a lot of other iPhone applications have. The way I thought to do this was to create a CGMutablePath, and simply:

        CGMutablePathRef path;

        - (void)touchBegan {
            path = CGPathCreateMutable();
        }

        - (void)touchMoved {
            CGPathMoveToPoint(path, NULL, x, y);
            CGPathAddLineToPoint(path, NULL, x, y);
        }

        - (void)drawRect:(CGRect)rect {
            CGContextRef context = UIGraphicsGetCurrentContext();
            CGContextBeginPath(context);
            CGContextAddPath(context, path);
            CGContextStrokePath(context);
        }

    However, after drawing for more than one second, performance degrades miserably. I would just draw each line into the off-screen CGLayer, if it were not for variable opacity! The less-than-100% opacity causes dots to be left on the screen connecting the lines. I have looked at CGContextSetBlendMode() but alas I cannot find an answer. Can anyone point me in the right direction? Other iPhone apps are able to do this with very good efficiency.

    Read the article

  • Algorithm for finding a rectangle constrained to its parent

    - by Milo
    Basically what I want to do is illustrated here: I start with A and B, then B is conformed to A to create C. The idea is, given TLBR rectangles A and B, make C. I also need to know if it produces an empty rectangle (the case where B is outside of A). I tried this but it just isn't doing what I want:

        if(clipRect.getLeft() > rect.getLeft())
            L = clipRect.getLeft();
        else
            L = rect.getLeft();

        if(clipRect.getRight() < rect.getRight())
            R = clipRect.getRight();
        else
            R = rect.getRight();

        if(clipRect.getBottom() > rect.getBottom())
            B = clipRect.getBottom();
        else
            B = rect.getBottom();

        if(clipRect.getTop() < rect.getTop())
            T = clipRect.getTop();
        else
            T = rect.getTop();

        if(L < R && B < T)
        {
            clipRect = AguiRectangle(0, 0, 0, 0);
        }
        else
        {
            clipRect = AguiRectangle::fromTLBR(T, L, B, R);
        }

    Thanks
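
    A minimal sketch of the usual rectangle-intersection computation, assuming TLBR coordinates with a top-left origin (y grows downward); Rect and clipToParent are illustrative stand-ins for the question's AguiRectangle. If y grows upward, swap the max/min used for top and bottom:

        #include <algorithm>

        struct Rect { int top, left, bottom, right; };

        // Returns the intersection of parent a and child b;
        // a zero rect if they do not overlap at all.
        Rect clipToParent(const Rect& a, const Rect& b)
        {
            Rect c;
            c.left   = std::max(a.left,   b.left);
            c.top    = std::max(a.top,    b.top);
            c.right  = std::min(a.right,  b.right);
            c.bottom = std::min(a.bottom, b.bottom);
            if (c.left >= c.right || c.top >= c.bottom)
                return Rect{0, 0, 0, 0};   // b lies entirely outside a
            return c;
        }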

    Read the article

  • AndEngine VS Android's Canvas VS OpenGLES - For rendering a 2D indoor vector map

    - by Orchestrator
    This is a big issue I have been trying to figure out for a long time. I'm working on an application that should include a 2D vector indoor map. The map will be drawn from an .svg file that specifies all the data for the lines, curved lines (paths) and rectangles that should be drawn. My main requirements for the map are: support for touch events to detect where exactly the finger is touching; great image quality, especially for curved and diagonal lines (anti-aliasing); and, optional but very nice to have, a built-in ability to zoom, pan and rotate. So far I have tried AndEngine and Android's Canvas. With AndEngine I had trouble implementing anti-aliasing for rendering smooth diagonal lines or drawing curved lines, and as far as I understand, this is not an easy thing to implement in AndEngine. Though I have to mention that AndEngine's ability to zoom in and pan with the camera, instead of modifying the objects on the screen, was really nice to have. I also have a little experience with the built-in Android Canvas, mainly with viewing simple bitmaps, but I'm not sure if it supports all of these things, and especially whether it would provide smooth results. Last but not least, there's the option of plain OpenGL ES 1 or 2, which, as far as I understand, with enough work should be able to support all the features I require. However it seems like something that would be hard to implement, and I've never programmed in OpenGL or anything like it, though I'm very much willing to learn. To sum it all up, I need a platform that provides the three things I mentioned before, but also, very importantly, one that lets me implement this feature as fast as possible. Any kind of answer or suggestion would be very much welcomed, as I'm very eager to solve this problem! Thanks!

    Read the article

  • Updating UIView subclass contents

    - by Brett
    Hi, I am trying to update a UIView subclass to draw individual pixels (rectangles) after calculating whether they are in the Mandelbrot set. I am using the following code but am getting a blank screen:

        // drawingView.m
        ...
        - (void)drawRect:(CGRect)rect {
            CGContextRef context = UIGraphicsGetCurrentContext();
            CGContextSetRGBFillColor(context, r, g, b, 1);
            CGContextSetRGBStrokeColor(context, r, g, b, 1);
            CGRect arect = CGRectMake(x, y, 1, 1);
            CGContextFillRect(context, arect);
        }
        ...

        // viewController.m
        ...
        - (void)viewDidLoad {
            [drawingView setX:10];
            [drawingView setY:10];
            [drawingView setR:0.0];
            [drawingView setG:0.0];
            [drawingView setB:0.0];
            [super viewDidLoad];
            [drawingView setNeedsDisplay];
        }

    Once I figure out how to display one pixel, I'd like to know how to go about updating the view by adding another pixel (once calculated). Once all the variables change, how do I keep the current view as is but add another pixel? Thanks, Brett

    Read the article

  • Building a complex view with Three20 - resources?

    - by psychotik
    I'm using three20 for most of my iPhone app. One of the views I need to create is relatively complex. It needs a top bar (under the nav bar) with some controls and labels, an image view below this bar (which occupies most of the body), and another bottom bar with more controls and labels (above the tab bar control). I don't have much UI experience - my only experience with anything UI is laying things out with CSS, etc. on websites. Apple's online docs seem to assume that the reader knows a good deal about rectangles, layouts, frames, etc., or is using Interface Builder. And three20 isn't too well documented either. So my questions are: Is it possible to design something like what I describe in IB and then still have a three20-based app use it? If so, any tips/pointers on how would be much appreciated. Can you point me to some documentation that explains how views/controls etc. are rendered? I'm pretty sure I can figure it out if I find a decent explanation/tutorial for it.

    Read the article

  • Resizing window causes black strips

    - by Paja
    I have a form which sets these styles in its constructor:

        this.SetStyle(ControlStyles.AllPaintingInWmPaint, true);
        this.SetStyle(ControlStyles.UserPaint, true);
        this.SetStyle(ControlStyles.ResizeRedraw, true);
        this.SetStyle(ControlStyles.OptimizedDoubleBuffer, true);

    And I draw some rectangles in the Paint event. There are no controls on the form. However, when I resize the form, there are black strips at the right and bottom of the form. Is there any way to get rid of them? I've tried everything: listening for WM_ERASEBKGND in WndProc, manually drawing the form on WM_PAINT, implementing a custom double buffer, etc. Is there anything else I could try? I've found this: https://connect.microsoft.com/VisualStudio/feedback/details/522441/custom-resizing-of-system-windows-window-flickers and it looks like it is a bug in DWM, but I just hope I can find some workaround. Please note that I must use double buffering, since I want to draw a pretty intense graphic presentation in the Paint event. I develop in C# .NET 2.0 on Win7.

    Read the article

  • Calculating bounding grid coordinates to a user click on google maps/google earth

    - by user170304
    Hello, I have a requirement to calculate the centroid or geodesic midpoint of the cell in which a user clicks, in between the lat/long grid crossings. The crossings form a square in most parts of GE and sometimes elongated rectangles; this is due to the shape of the earth, of course. I'm looking for a valid mathematical formula, and an accurate function (in JavaScript or server-side code), that takes an assumed grid resolution (say 1km intervals for this discussion) and the input coordinates, and returns a centroid coordinate within that graticule cell. To clarify, please take a look at the image attached to my Google group post: http://google-earth-api.googlegroups.com/web/Picture+5.png?gda=h5oFPz8AAAD315KpovipQeBwdfGpmW3ZhBc9PTADwYa-n193hZ6AItFmHuno63c7phcEXYVuRA6ccyFKn-rNKC-d1pM%5FIdV0&gsc=sz6bbAsAAABBKF7YXWYyc4GmXg-QruHj What I need is: if a user clicks anywhere in this grid square, find the centroid or center point of that grid square, or at least the bounding grid coordinates (the four corners that make up the square). If we assume that the grid is UTM standard and has a max resolution of 1km (or make this a parameter), I need to detect the four nearby corner points, and then calculating the centroid is not as difficult. I welcome any feedback you all may have and appreciate it. I don't have a simple way of letting a user click anywhere on the grid and finding the grid bounding coordinates (making a square of 4 coordinates) or the centroid/midpoint of the graticule grid square. One thought is to use assumptions as much as possible, using a reference such as the UTM coordinate reference. If I assume that the grid is X degrees wide, can we have a pure JavaScript function take any input coordinate and return the bounding graticule coordinates in decimal degrees? Another thought I had was to create the grid in a geo-spatial layer that takes any input coordinate and returns the nearest centroid of the graticule. Does this make sense? Thanks! Omar
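
    A minimal sketch of the grid-snapping idea, assuming a regular lat/long graticule with a fixed spacing in decimal degrees; the names are illustrative, the same few lines translate directly to JavaScript, and a true geodesic midpoint would still need proper ellipsoid math:

        #include <cmath>

        struct LatLng { double lat, lng; };

        // Given a clicked coordinate and a grid spacing (in decimal degrees),
        // return the centre of the graticule cell that contains the click.
        LatLng cellCentroid(LatLng click, double spacingDeg)
        {
            double south = std::floor(click.lat / spacingDeg) * spacingDeg;
            double west  = std::floor(click.lng / spacingDeg) * spacingDeg;
            return LatLng{ south + spacingDeg / 2.0, west + spacingDeg / 2.0 };
        }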

    Read the article

  • Find all cycles in graph, redux

    - by Shadow
    Hi, I know there are quite a few existing answers to this question. However, I found none of them really bringing it to the point. Some argue that a cycle is (almost) the same as a strongly connected component (see http://stackoverflow.com/questions/546655/finding-all-cycles-in-graph/549402#549402), so one could use algorithms designed for that goal. Some argue that finding a cycle can be done via DFS and checking for back-edges (see the Boost graph documentation on file dependencies). I would now like some suggestions on whether all cycles in a graph can be detected via DFS and checking for back-edges. My opinion is that it could indeed work that way, as DFS-VISIT (see the pseudocode of DFS) freshly enters each node that was not yet visited. In that sense, each vertex is a potential start of a cycle. Additionally, as DFS visits each edge once, each edge leading to the starting point of a cycle is also covered. Thus, by using DFS and back-edge checking it should indeed be possible to detect all cycles in a graph. Note that, if cycles with different numbers of participating nodes exist (e.g. triangles, rectangles, etc.), additional work has to be done to discriminate the actual "shape" of each cycle.
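
    A minimal sketch of the back-edge check described above, for a directed graph held as an adjacency list (for an undirected graph, also skip the edge leading straight back to the parent). Note this only detects whether a cycle exists; enumerating every distinct cycle still needs extra work, e.g. Johnson's algorithm:

        #include <vector>

        // Colours: 0 = white (unvisited), 1 = grey (on the current DFS path), 2 = black (finished)
        bool hasCycle(int u, const std::vector<std::vector<int>>& adj, std::vector<int>& color)
        {
            color[u] = 1;
            for (int v : adj[u]) {
                if (color[v] == 1) return true;                       // back edge -> cycle
                if (color[v] == 0 && hasCycle(v, adj, color)) return true;
            }
            color[u] = 2;
            return false;
        }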

    Read the article

  • OpenGL equivalent of GDI's HatchBrush or PatternBrush?

    - by Ptah- Opener of the Mouth
    I have a VB6 application (please don't laugh) which does a lot of drawing via BitBlt and the standard VB6 drawing functions. I am running up against performance issues (yes, I do the regular tricks like drawing to memory). So, I decided to investigate other ways of drawing, and have come upon OpenGL. I've been doing some experimenting, and it seems straightforward to do most of what I want; the application mostly only uses very simple drawing -- relatively large 2D rectangles of solid colors and such -- but I haven't been able to find an equivalent to something like a HatchBrush or PatternBrush. More specifically, I want to be able to specify a small monochrome pixel pattern, choose a color, and whenever I draw a polygon (or whatever), instead of it being solid, have it automatically tiled with that pattern, not translated or rotated or skewed or stretched, with the "on" bits of the pattern showing up in the specified color, and the "off" bits of the pattern left displaying whatever had been drawn under the area that I am now drawing on. Obviously I could do all the calculations myself. That is, instead of drawing as a polygon which will somehow automatically be tiled for me, I could calculate all of the lines or pixels or whatever that actually need to be drawn, then draw them as lines or pixels or whatever. But is there an easier way? Like in GDI, where you just say "draw this polygon using this brush"? I am guessing that "textures" might be able to accomplish what I want, but it's not clear to me (I'm totally new to this and the documentation I've found is not entirely obvious); it seems like textures might skew or translate or stretch the pattern, based upon the vertices of the polygon? Whereas I want the pattern tiled. Is there a way to do this, or something like it, other than brute force calculation of exactly the pixels/lines/whatever that need to be drawn? Thanks in advance for any help.
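
    For what it's worth, legacy (fixed-function) OpenGL has polygon stippling, which behaves much like a window-aligned monochrome pattern brush: "on" bits take the current colour, "off" bits leave the existing pixels untouched, and the pattern never rotates, stretches or skews with the polygon. A rough sketch - the pattern bytes are placeholders, and this is only available in the compatibility (non-core) profile:

        #include <GL/gl.h>

        /* 32x32 monochrome pattern, 1 bit per pixel = 128 bytes; values here are placeholders */
        static const GLubyte hatch[128] = { 0xAA, 0x55 /* ... fill in the remaining bytes ... */ };

        void drawHatchedQuad(void)
        {
            glEnable(GL_POLYGON_STIPPLE);
            glPolygonStipple(hatch);          /* pattern is aligned to window coordinates */
            glColor3f(1.0f, 0.0f, 0.0f);      /* "on" bits are drawn in this colour */
            glBegin(GL_QUADS);                /* "off" bits leave the background as-is */
                glVertex2f(10.0f, 10.0f);
                glVertex2f(110.0f, 10.0f);
                glVertex2f(110.0f, 60.0f);
                glVertex2f(10.0f, 60.0f);
            glEnd();
            glDisable(GL_POLYGON_STIPPLE);
        }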

    Read the article

  • suggestions for declarative GUI programming in Java

    - by Jason S
    I wonder if there are any suggestions for declarative GUI programming in Java. (I abhor visual-based GUI creator/editor software, but am getting a little tired of manually instantiating JPanels and Boxes and JLabels and JLists etc.) That's my overall question, but I have two specific questions for approaches I'm thinking of taking: JavaFX: is there an example somewhere of a realistic GUI display (e.g. not circles and rectangles, but listboxes and buttons and labels and the like) in JavaFX, which can interface with a Java sourcefile that accesses and updates various elements? Plain Old Swing with something to parse XUL-ish XML: has anyone invented a declarative syntax (like XUL) for XML for use with Java Swing? I suppose it wouldn't be hard to do, to create some code based on STaX which reads an XML file, instantiates a hierarchy of Swing elements, and makes the hierarchy accessible through some kind of object model. But I'd rather use something that's well-known and documented and tested than to try to invent such a thing myself. JGoodies Forms -- not exactly declarative, but kinda close & I've had good luck with JGoodies Binding. But their syntax for Form Layout seems kinda cryptic. edit: lots of great answers here! (& I added #3 above) I'd be especially grateful for hearing any experiences any of you have had with using one of these frameworks for real-world applications. p.s. I did try a few google searches ("java gui declarative"), just didn't quite know what to look for.

    Read the article

  • Image drawing library for Haskell?

    - by absz
    I'm working on a Haskell program for playing spatial games: I have a graph of a bunch of "individuals" playing the Prisoner's Dilemma, but only with their immediate neighbors, and copying the strategies of the people who do best. I've reached a point where I need to draw an image of the world, and this is where I've hit problems. Two of the possible geometries are easy: if people have four or eight neighbors each, then I represent each one as a filled square (with color corresponding to strategy) and tile the plane with these. However, I also have a situation where people have six neighbors (hexagons) or three neighbors (triangles). My question, then, is: what's a good Haskell library for creating images and drawing shapes on them? I'd prefer that it create PNGs, but I'm not incredibly picky. I was originally using Graphics.GD, but it only exports bindings to functions for drawing points, lines, arcs, ellipses, and non-rotated rectangles, which is not sufficient for my purposes (unless I want to draw hexagons pixel by pixel*). I looked into using foreign import, but it's proving a bit of a hassle (partly because the polygon-drawing function requires an array of gdPoint structs), and given that my requirements may grow, it would be nice to use an in-Haskell solution and not have to muck about with the FFI (though if push comes to shove, I'm willing to do that). Any suggestions? * That is also an option, actually; any tips on how to do that would also be appreciated, though I think a library would be easier.

    Read the article

  • Weird splitter behaviour when moving it

    - by tomo
    My demo app displays two rectangles which should fill the whole browser screen. There is a vertical splitter between them. This looks like a basic scenario, but I have no idea how to implement it in XAML. I cannot force it to fill the whole screen, and when I move the splitter, the whole screen grows. Can anybody help?

        <UserControl xmlns:controls="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls"
            x:Class="SilverlightApplication1.MainPage"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
            xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
            mc:Ignorable="d" d:DesignWidth="640" d:DesignHeight="480">
            <Grid x:Name="LayoutRoot" VerticalAlignment="Stretch" HorizontalAlignment="Stretch">
                <Grid.ColumnDefinitions>
                    <ColumnDefinition Width="Auto"/>
                    <ColumnDefinition Width="Auto"/>
                    <ColumnDefinition Width="Auto"/>
                </Grid.ColumnDefinitions>
                <Border BorderBrush="Black" BorderThickness="1" VerticalAlignment="Stretch" HorizontalAlignment="Stretch" MinWidth="50">
                </Border>
                <controls:GridSplitter Grid.Column="1" VerticalAlignment="Stretch" Width="Auto"></controls:GridSplitter>
                <Border BorderBrush="Blue" BorderThickness="1" VerticalAlignment="Stretch" HorizontalAlignment="Stretch" Grid.Column="2" MinWidth="50"></Border>
            </Grid>
        </UserControl>

    Read the article

  • What is the best approach to 2D collision detection on the iPhone?

    - by Magic Bullet Dave
    Been working on this problem of collision detection, and there appear to be three main approaches I could take: (1) sprite and mask - AND the overlap of the sprites and check for a non-zero number in the resulting sprite pixel data; (2) bounding circles, rectangles or polygons - create one or more shapes that enclose the sprites and do the basic maths to check for overlaps; (3) use an existing sprite library. The first approach would have been the way I'd have done it in the old days of 16x16 sprite blocks, but it appears that there just isn't an easy way of getting at the individual image pixel data and/or alpha channel within Quartz (or OpenGL, for that matter). Detecting the overlap of the bounding box is easy, but creating a third image from the overlap and then testing it for pixels is complicated, and my gut feel is that even if we could get it to work it would be slow. Am I missing something neat here? The second approach involves dividing up our sprites into several polygons and testing them for overlaps. The more polygons, the more accurate the collision detection. The benefit is that it is fast and can be accurate. The downside is that it makes sprite creation more complicated, i.e. we have to create the polygons for each sprite. For speed the best approach is to create a tree of polygons. The third approach I'm not sure about, as it involves buying code (or using an open source licence). I am not sure what the best library to use is, or whether this would make life easier or give us a problem integrating it into our app. So, in short, I am favouring the polygon and tree approach and would appreciate your views on this before I go and write lots of code. Best regards Dave
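
    For the second approach, the usual first step is a cheap axis-aligned bounding-rectangle test before any per-polygon work; a minimal sketch with illustrative names:

        struct AABB { float x, y, w, h; };   // top-left corner plus size

        // True if the two axis-aligned boxes overlap; use as a cheap first pass
        // before testing the finer per-sprite polygons.
        bool overlaps(const AABB& a, const AABB& b)
        {
            return a.x < b.x + b.w && b.x < a.x + a.w &&
                   a.y < b.y + b.h && b.y < a.y + a.h;
        }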

    Read the article

  • Haskell: "how much" of a type should functions receive? and avoiding complete "reconstruction"

    - by L01man
    I've got these data types:

        data PointPlus = PointPlus
          { coords   :: Point
          , velocity :: Vector
          } deriving (Eq)

        data BodyGeo = BodyGeo
          { pointPlus :: PointPlus
          , size      :: Point
          } deriving (Eq)

        data Body = Body
          { geo  :: BodyGeo
          , pict :: Color
          } deriving (Eq)

    It's the base datatype for characters, enemies, objects, etc. in my game (well, I just have two rectangles as the player and the ground right now :p). When a key is pressed, the character moves right, left or jumps by changing its velocity. Moving is done by adding the velocity to the coords. Currently, it's written as follows:

        move (PointPlus (x, y) (xi, yi)) = PointPlus (x + xi, y + yi) (xi, yi)

    I'm just taking the PointPlus part of my Body and not the entire Body; otherwise it would be:

        move (Body (BodyGeo (PointPlus (x, y) (xi, yi)) wh) col)
          = Body (BodyGeo (PointPlus (x + xi, y + yi) (xi, yi)) wh) col

    Is the first version of move better? Anyway, if move only changes the PointPlus, there must be another function that calls it inside a new Body. Let me explain: there's a function update which is called to update the game state; it is passed the current game state, a single Body for now, and returns the updated Body.

        update (Body (BodyGeo (PointPlus xy (xi, yi)) wh) pict)
          = Body (BodyGeo (move (PointPlus xy (xi, yi))) wh) pict

    That tickles me. Everything is kept the same within Body except the PointPlus. Is there a way to avoid this complete "reconstruction" by hand? Like in:

        update body = backInBody $ move $ pointPlus body

    Without having to define backInBody, of course.

    Read the article

  • Using Quartz to draw every second via NSTimer (iPhone)

    - by stuartloxton
    Hi, I'm relatively new to Objective-C + Quartz and am running into what is probably a very simple issue. I have a custom UIView subclass which I am using to draw simple rectangles via Quartz. However, I am trying to hook up an NSTimer so that it draws a new object every second; the code below will draw the first rectangle but will never draw again. The function is being called (the NSLog runs) but nothing is drawn. Code:

        - (void)drawRect:(CGRect)rect {
            context = UIGraphicsGetCurrentContext();
            [self step:self];
            [NSTimer scheduledTimerWithTimeInterval:(NSTimeInterval)(2)
                                             target:self
                                           selector:@selector(step:)
                                           userInfo:nil
                                            repeats:TRUE];
        }

        - (void)step:(id)sender {
            static double trans = 0.5f;
            CGContextSetRGBFillColor(context, 1, 0, 0, trans);
            CGContextAddRect(context, CGRectMake(10, 10, 10, 10));
            CGContextFillPath(context);
            NSLog(@"Trans: %f", trans);
            trans += 0.01;
        }

    context is declared in my interface file as: CGContextRef context; Any help will be greatly appreciated!

    Read the article

  • Which text margin does SWT Table use when drawing text?

    - by Zordid
    I got a relatively easy question - but I cannot find anything anywhere to answer it. I use a simple SWT table widget in my application that displays only text in the cells. I got an incremental search feature and want to highlight text snippets in all cells if they match. So when typing "a", all "a"s should be highlighted. To get this, I add an SWT.EraseItem listener to interfere with the background drawing. If the current cell's text contains the search string, I find the positions and calculate relative x-coordinates within the text using event.gc.stringExtent - easy. With that I just draw rectangles "behind" the occurrences. Now, there's a flaw in this. The table does not draw the text without a margin, so my x coordinate does not really match - it is slightly off by a few pixels! But how many?? Where do I retrieve the cell's text margins that table's own drawing will use? No clue. Cannot find anything. :-( Bonus question: the table's draw method also shortens text and adds "..." if it does not fit into the cell. Hmm. My occurrence finder takes the TableItem's text and thus also tries to mark occurrences that are actually not visible because they are consumed by the "...". How do I get the shortened text and not the "real" text within the EraseItem draw handler? Thanks!

    Read the article

  • Entangled text boxes

    - by user38329
    Hi StackOverflow, A mere Windows textbox greatly surprised me today. I have two unrelated text boxes inside an application. I can type in either text box and switch the focus by clicking on them. Then some event X happens, which I can't describe here for reasons given below. After this event happens, the two text boxes become "entangled" in an almost quantum way. Say text box A was focused before X happened. When I click text box B to type in some text, the new text appears in text box A, whereas the blinking cursor happily moves along in text box B through the void, as if the text were there. No amount of clicking on either text box can resolve this. The cursor will always remain in B, whereas the text will always go to A. Message spying reveals that after event X, the text boxes lose the ability to lose or gain focus. When I click on B, WM_KILLFOCUS does not come to A, and WM_SETFOCUS does not come to B. (The rectangles and visibility of the boxes are OK.) The same thing happens in Windows XP and Windows 7. Now, event X: it's a big event in a third-party UI library which I cannot reverse-engineer in a timely manner. (Namely, docking a pane in wxAUI.) I am sure that this behavior is the result of incorrect WinAPI calls to the text boxes (garbage in - garbage out). I would like to know what could possibly cause such an entanglement, so I know where to start looking for the bug. Thanks!

    Read the article

  • Is there an ActiveRecord#find_by equivalent for C#?

    - by Benjamin Manns
    I'm originally a C# developer (as a hobby), but as of late I have been digging into Ruby on Rails and I am really enjoying it. Right now I am building an application in C# and I was wondering if there is any collection implementation for C# that could match (or "semi-match") the find_by method of ActiveRecord. What I am essentially looking for is a list that would hold Rectangles:

        class Rectangle
        {
            public int Width { get; set; }
            public int Height { get; set; }
            public String Name { get; set; }
        }

    I could then query this list and find all entries with Height = 10, Width = 20, or Name = "Block". This was done with ActiveRecord by doing a call similar to Rectangle.find_by_name('Block'). The only way I can think of doing this in C# is to create my own list implementation and iterate through each item, manually checking each item against the criteria. I fear I would be reinventing the wheel (and one of poorer quality). Any input or suggestions would be much appreciated.

    Read the article

  • Processing velocity-vectors during collision as neatly as possible

    - by DevEight
    Hello. I'm trying to create a good way to handle all possible collisions between two objects. Typically one will be moving and hitting the other, and should then "bounce" away. What I've done so far (I'm creating a typical game where you have a board and bounce a ball at bricks) is to check if the rectangles intersect and, if they do, invert the Y-velocity. This is a really ugly and temporary solution that won't work in the long haul, and since this kind of processing is very common in games, I'd really like to find a good way of doing this for future projects as well. Any links or helpful info are appreciated. Below is what my collision-handling function looks like right now.

        protected void collision()
        {
            #region Boundaries
            if (bal.position.X + bal.velocity.X >= viewportRect.Width || bal.position.X + bal.velocity.X <= 0)
            {
                bal.velocity.X *= -1;
            }
            if (bal.position.Y + bal.velocity.Y <= 0)
            {
                bal.velocity.Y *= -1;
            }
            #endregion

            bal.rect = new Rectangle((int)bal.position.X + (int)bal.velocity.X - bal.sprite.Width / 2,
                (int)bal.position.Y - bal.sprite.Height / 2 + (int)bal.velocity.Y,
                bal.sprite.Width, bal.sprite.Height);
            player.rect = new Rectangle((int)player.position.X - player.sprite.Width / 2,
                (int)player.position.Y - player.sprite.Height / 2,
                player.sprite.Width, player.sprite.Height);

            if (bal.rect.Intersects(player.rect))
            {
                bal.position.Y = player.position.Y - player.sprite.Height / 2 - bal.sprite.Height / 2;
                if (player.position.X != player.prevPos.X)
                {
                    bal.velocity.X -= (player.prevPos.X - player.position.X) / 2;
                }
                bal.velocity.Y *= -1;
            }

            foreach (Brick b in brickArray.list)
            {
                b.rect.X = Convert.ToInt32(b.position.X - b.sprite.Width / 2);
                b.rect.Y = Convert.ToInt32(b.position.Y - b.sprite.Height / 2);
                if (bal.rect.Intersects(b.rect))
                {
                    b.recieveHit();
                    bal.velocity.Y *= -1;
                }
            }
            brickArray.removeDead();
        }
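
    One common way to make the bounce less ad hoc than flipping velocity.Y is to reflect the velocity about the contact normal, v' = v - 2(v.n)n. A minimal sketch with illustrative names (the question's code is C#/XNA, where Vector2.Reflect applies the same formula):

        struct Vec2 { float x, y; };

        float dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

        // Reflect an incoming velocity about a unit-length collision normal:
        // v' = v - 2 (v . n) n. For a hit on the paddle's top face, n = (0, -1),
        // which reduces to the usual "flip velocity.Y".
        Vec2 reflect(Vec2 v, Vec2 n)
        {
            float d = 2.0f * dot(v, n);
            return Vec2{ v.x - d * n.x, v.y - d * n.y };
        }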

    Read the article

  • C++ design related question

    - by Kotti
    Hi! Here is the setup: suppose I have an abstract class for objects; let's call it Object. Its definition includes a 2D position and dimensions. Let it also have some virtual void Render(Backend& backend) const = 0 method used for rendering. Now I specialize my inheritance tree and add Rectangle and Ellipse classes. I guess they won't have their own properties, but they will have their own virtual void Render method. Let's say I implemented these methods, so that Render for Rectangle actually draws a rectangle, and the same for Ellipse. Now I add a class called Plane, which is defined as class Plane : public Rectangle and has a private member std::vector<Object*> plane_objects; Right after that I add a method to add an object to my plane. And here comes the question. If I design this method as void AddObject(Object& object), I run into trouble - I can't preserve the dynamic type, because I would have to do something like plane_objects.push_back(new Object(object)); and this should be push_back(new Rectangle(object)) for rectangles and new Ellipse(...) for ellipses. If I implement this method as void AddObject(Object* object), it looks good, but then elsewhere this means making calls like plane.AddObject(new Rectangle(params)); and this is generally a mess, because then it's not clear which part of my program should free the allocated memory. (When destroying the plane? Why? Are we sure that calls to AddObject were only ever made as AddObject(new something)?) I guess the problems caused by the second approach could be solved using smart pointers, but I am sure there has to be something better. Any ideas?
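
    One common resolution of the ownership question is to let the plane own its children outright through smart pointers, so the "who frees it" question disappears. A minimal C++11 sketch under that assumption, with the question's Render/Backend details elided:

        #include <memory>
        #include <utility>
        #include <vector>

        class Object {
        public:
            virtual ~Object() = default;
            // virtual void Render(Backend& backend) const = 0;  // as in the question
        };

        class Rectangle : public Object { /* ... */ };

        class Plane : public Rectangle {
        public:
            // The plane takes sole ownership of each child; no manual delete anywhere.
            void AddObject(std::unique_ptr<Object> object) {
                plane_objects.push_back(std::move(object));
            }
        private:
            std::vector<std::unique_ptr<Object>> plane_objects;
        };

        // Call site (C++14): plane.AddObject(std::make_unique<Rectangle>(/* params */));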

    Read the article

  • Printing in Silverlight 4

    - by Number8
    Hello, We have an application structured roughly like this: <Grid x:Name="LayoutRoot"> <ScrollViewer> <Canvas x:Name="canvas"> <StackPanel> < Button /><Slider /><Button /></StackPanel> <custom:Blob /> <custom:Blob /> <custom:Blob /> </Canvas> </ScrollViewer> </Grid> Each Blob consists of 1 or more rectangles, lines, and text boxes; they are positioned anywhere on the canvas. If I print the document using the LayoutRoot: PrintDocument pd = new PrintDocument(); pd += (s, pe) => { pe.PageVisual = LayoutRoot; }; pd.Print("Blobs"); ... it is like a print-screen -- the scrollbars, the sliders, the blobs that are visible -- are printed. If I set PageVisual = canvas, nothing is printed. How can I get all the blob objects, and just those objects, to print? Do I need to copy them into another container, and give that container to PageVisual? Can I use a ViewBox to make sure they all fit on one page? Thanks for any pointers....

    Read the article

  • JTable cells not rendering shapes properly

    - by Andrew
    I'm trying to render my JTable cells with a subclassed JPanel; the cells should appear as coloured rectangles with a circle drawn on them. When the table is first displayed everything looks OK, but when a dialog or similar is shown over the cells and then removed, the cells that were covered are not rendered properly and the circles are broken up, etc. I then have to move the scroll bar or resize the window to get them to redraw properly. The paintComponent method of the component I'm using to render the cells is below:

        protected void paintComponent(Graphics g) {
            setOpaque(true);
            super.paintComponent(g);
            Graphics2D g2d = (Graphics2D) g;
            GradientPaint gradientPaint = new GradientPaint(new Point2D.Double(0, 0), Color.WHITE,
                    new Point2D.Double(0, getHeight()), paintRatingColour);
            g2d.setPaint(gradientPaint);
            g2d.fillRect(0, 0, getWidth(), getHeight());
            g2d.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON);
            g2d.setRenderingHint(RenderingHints.KEY_RENDERING, RenderingHints.VALUE_RENDER_QUALITY);
            Rectangle clipBounds = g2d.getClipBounds();
            int x = new Double(clipBounds.getWidth()).intValue() - 15;
            int y = (new Double(clipBounds.getHeight()).intValue() / 2) - 6;
            if (level != null) {
                g2d.setColor(iconColour);
                g2d.drawOval(x, y, width, height);
                g2d.fillOval(x, y, width, height);
            }
        }

    Read the article

  • Fast path cache generation for a connected node graph

    - by Sukasa
    I'm trying to get a faster pathfinding mechanism in place in a game I'm working on, for a connected node graph. The nodes are classed into two types, "Networks" and "Routers." In this picture, the blue circles represent routers and the grey rectangles networks. Each network keeps a list of which routers it is connected to, and vice versa. Routers cannot connect directly to other routers, and networks cannot connect directly to other networks. I need an algorithm that will map out a path, measured in the number of networks crossed, for each possible source and destination network, excluding paths where the source and destination are the same network. I have one right now, but it is unusably slow, taking about two seconds to map the paths, which becomes incredibly noticeable for all connected players. The current algorithm is a depth-first brute-force search (it was thrown together in about an hour just to get path caching working) which returns an array of networks in the order they are traversed, which explains why it's so slow. Are there any algorithms that are more efficient? As a side note, while these example graphs have four networks, the in-practice graphs have 55 networks and about 20 routers in use. Impossible paths can also occur, and the network/router graph topology can change at any time, requiring the path cache to be rebuilt. What approach/algorithm would likely provide the best results for this type of graph?
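
    Since the cost metric is simply the number of networks crossed, the graph is unweighted, and a breadth-first search from each network yields all shortest paths from that network in O(V + E) - far cheaper than brute-force DFS enumeration. A minimal sketch where networks and routers are both plain node indices (names are illustrative):

        #include <queue>
        #include <vector>

        // Breadth-first search over an unweighted adjacency list. Returns, for every
        // node, its predecessor on a shortest path from `start` (-1 if unreachable);
        // each path can then be reconstructed by walking the predecessors backwards.
        std::vector<int> shortestPaths(int start, const std::vector<std::vector<int>>& adj)
        {
            std::vector<int> prev(adj.size(), -1);
            std::vector<bool> seen(adj.size(), false);
            std::queue<int> q;
            seen[start] = true;
            q.push(start);
            while (!q.empty()) {
                int u = q.front(); q.pop();
                for (int v : adj[u]) {
                    if (!seen[v]) {
                        seen[v] = true;
                        prev[v] = u;
                        q.push(v);
                    }
                }
            }
            return prev;
        }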

    Read the article

  • How do you update a secondary view?

    - by Troy Sartain
    Perhaps there's a better way to set this up, so I'm open to suggestions. But here's what I'm doing. I have a main UIView. On top of that I have a UIImageView and another UIView. When the UIImageView changes, I want to change the second UIView. So I have a class for it, and the IB object's type is set to that class. In the .m of that class is a drawRect method that draws some rectangles. Also in the .m is an NSMutableArray property that is synthesized. I created an instance of that class in the controller of the main view. The problem: although drawRect works fine when the app starts (as traced in the debugger), when the UIImageView changes I call setNeedsDisplay on the instance variable of the second view after updating the @synthesize'd array, but drawRect does not get called. I think it has to do with instances. I wouldn't think threading would be an issue here. I just want to draw in a separate area of the screen based on an image also displayed.

    Read the article

  • SVG rotate deletes Elements

    - by user1468661
    I'm trying to generate SVG code in a web application. Here's an example of the output:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
        <svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"
             xmlns:ev="http://www.w3.org/2001/xml-events" version="1.1" baseProfile="full"
             width="1000px" height="600px">
          <rect x="147.50198255352893" y="109.43695479777953" width="15.860428231562253" height="295.79698651863595"
                stroke="rgb(0,0,0)" fill="rgb(255,255,255)" stroke-width="3"
                transform="rotate(20 155.43219666931006 257.3354480570975)"/>
          <rect x="163.36241078509119" y="405.2339413164155" width="379.85725614591587" height="-23.79064234734335"
                stroke="rgb(0,0,0)" fill="rgb(255,255,255)" stroke-width="3"
                transform="rotate(20 353.2910388580491 393.3386201427438)"/>
          <rect x="543.219666931007" y="381.44329896907215" width="22.204599524187188" height="-353.6875495638382"
                stroke="rgb(0,0,0)" fill="rgb(255,255,255)" stroke-width="3"
                transform="rotate(20 554.3219666931006 204.59952418715304)"/>
        </svg>

    There should be three rotated rectangles, but somehow in Chrome, Safari, and Inkscape only one of them is displayed. I have googled and have no clue what is wrong. Thanks for your help.

    Read the article
