Search Results

Search found 6318 results on 253 pages for 'hybrid graphics'.


  • Double buffering with C# has negative effect

    - by Roland Illig
    I have written the following simple program, which draws lines on the screen every 100 milliseconds (triggered by timer1). I noticed that the drawing flickers a bit (that is, the window is not always completely blue, but some gray shines through). So my idea was to use double-buffering. But when I did that, it made things even worse. Now the screen was almost always gray, and only occasionally did the blue color come through (demonstrated by timer2, which switches the DoubleBuffered property every 2000 milliseconds). What could be an explanation for this?

        using System;
        using System.Drawing;
        using System.Windows.Forms;

        namespace WindowsFormsApplication1
        {
            public partial class Form1 : Form
            {
                public Form1()
                {
                    InitializeComponent();
                }

                private void Form1_Paint(object sender, PaintEventArgs e)
                {
                    Graphics g = CreateGraphics();
                    Pen pen = new Pen(Color.Blue, 1.0f);
                    Random rnd = new Random();
                    for (int i = 0; i < Height; i++)
                        g.DrawLine(pen, 0, i, Width, i);
                }

                // every 100 ms
                private void timer1_Tick(object sender, EventArgs e)
                {
                    Invalidate();
                }

                // every 2000 ms
                private void timer2_Tick(object sender, EventArgs e)
                {
                    DoubleBuffered = !DoubleBuffered;
                    this.Text = DoubleBuffered ? "yes" : "no";
                }
            }
        }
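    A likely culprit, for anyone landing here: the Paint handler draws through CreateGraphics(), which targets the screen directly and bypasses the back buffer, so with DoubleBuffered set, the untouched buffer is blitted over the blue lines. A minimal sketch of the usual fix, drawing through the PaintEventArgs instead:

        private void Form1_Paint(object sender, PaintEventArgs e)
        {
            // e.Graphics is the (possibly buffered) paint surface; CreateGraphics() is not
            using (Pen pen = new Pen(Color.Blue, 1.0f))
            {
                for (int i = 0; i < Height; i++)
                    e.Graphics.DrawLine(pen, 0, i, Width, i);
            }
        }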

  • Producing CCITT compressed TIFF from CGImage

    - by Brian Postow
    I have a CGImage (Core Graphics, C/C++). It's grayscale. Well, originally it was B/W, but the CGImage may be RGB. That shouldn't matter. I want to create a CCITT Group 4 TIFF. I can create an LZW TIFF (grayscale or color) by creating a destination with the correct dictionary and adding the image in. No problem. However, there doesn't seem to be an equivalent kCGImagePropertyTIFFCompression value to represent CCITT-4. It should be 4, but that produces uncompressed. I have a manual CCITT compression routine, so if I can get the binary (1 bit per pixel) data, I'm set. But I can't seem to get 1 BPP data out of a CGImage. I have code that is supposed to put the CGImage into a CGBitmapContext and then give me the data, but it seems to be giving me all black. I've asked a couple of questions today trying to get at this, but I just figured, let's ask the question I REALLY want answered and see if someone can answer it. There's GOT to be a way to do this. I've got to be missing something dumb. What is it?
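    A sketch of one route to the raw pixels, using standard CoreGraphics calls (the threshold value and the caller-allocated output buffer are assumptions): render into an 8-bit grayscale CGBitmapContext, since 1 bpp is not a supported context format, then threshold and pack to 1 bit per pixel by hand before running the CCITT routine:

        CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
        size_t w = CGImageGetWidth(image), h = CGImageGetHeight(image);
        uint8_t *buf = calloc(w * h, 1);
        CGContextRef ctx = CGBitmapContextCreate(buf, w, h, 8, w, gray,
                                                 kCGImageAlphaNone);
        CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), image);
        /* pack 8 pixels per byte, MSB first, thresholding at mid-gray;
           bits[] is a caller-allocated, zeroed buffer of (w*h+7)/8 bytes */
        for (size_t i = 0; i < w * h; i++)
            if (buf[i] < 128)
                bits[i / 8] |= 0x80 >> (i % 8);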

  • Defining Light Coordinates

    - by Zachary
    I took a Computer Graphics exam a couple of days ago which had an extra-credit question like the following: A light can be defined in one of two ways. It can be defined in world coordinates, e.g. a street light, or in viewer (eye) coordinates, e.g. a head-lamp worn by a miner. In either case the viewpoint can freely change. Describe how the light should be transformed differently in these two cases. Since I won't get to see the results of this until after spring break, I thought I would ask here. It seems like the analogies being used are misleading - could you not define a light source that is located at the viewer's eye in world coordinates just as well as you could in eye coordinates? I've been doing some research on how OpenGL handles light, and it seems as though it always uses eye coordinates - the ModelView matrix would be applied to any light in world coordinates. In that case the answer may just be that you would have to transform light defined in world coordinates into eye coordinates using something like the ModelView matrix, while light defined in eye coordinates would only need to be transformed by the projection matrix. Then again, I could be totally under-thinking (or over-thinking) this. Another thought I had is that it determines which way you render shadows, but that has more to do with the location of the light and its type (point, directional, emission, etc.) than what coordinates it is represented in. Any ideas?
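    For what it's worth, the fixed-function pipeline makes the distinction concrete: glLightfv transforms the position you pass by the ModelView matrix that is current at the time of the call. A small sketch (the camera parameters are placeholders):

        /* Head-lamp: specified with ModelView = identity, so the position
           is already in eye coordinates and the light sticks to the viewer. */
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        GLfloat headlamp[4] = { 0.0f, 0.0f, 0.0f, 1.0f };
        glLightfv(GL_LIGHT0, GL_POSITION, headlamp);

        /* Street light: specified after the viewing transform, so the
           ModelView maps its world-space position into eye space each frame. */
        gluLookAt(eyeX, eyeY, eyeZ, lookX, lookY, lookZ, 0.0, 1.0, 0.0);
        GLfloat street[4] = { 10.0f, 5.0f, 0.0f, 1.0f };
        glLightfv(GL_LIGHT0, GL_POSITION, street);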

  • Graphics.MeasureCharacterRanges giving wrong size calculations in C#.Net?

    - by Owen Blacker
    I'm trying to render some text into a specific part of an image in a Web Forms app. The text will be user entered, so I want to vary the font size to make sure it fits within the bounding box. I have code that was doing this fine on my proof-of-concept implementation, but I'm now trying it against the assets from the designer, which are larger, and I'm getting some odd results. I'm running the size calculation as follows:

        StringFormat fmt = new StringFormat();
        fmt.Alignment = StringAlignment.Center;
        fmt.LineAlignment = StringAlignment.Near;
        fmt.FormatFlags = StringFormatFlags.NoClip;
        fmt.Trimming = StringTrimming.None;

        int size = __startingSize;
        Font font = __fonts.GetFontBySize(size);

        while (GetStringBounds(text, font, fmt).IsLargerThan(__textBoundingBox))
        {
            context.Trace.Write("MyHandler.ProcessRequest",
                "Decrementing font size to " + size + ", as size is "
                + GetStringBounds(text, font, fmt).Size()
                + " and limit is " + __textBoundingBox.Size());
            size--;
            if (size < __minimumSize) { break; }
            font = __fonts.GetFontBySize(size);
        }

        context.Trace.Write("MyHandler.ProcessRequest", "Writing " + text + " in "
            + font.FontFamily.Name + " at " + font.SizeInPoints + "pt, size is "
            + GetStringBounds(text, font, fmt).Size()
            + " and limit is " + __textBoundingBox.Size());

    I then use the following line to render the text onto an image I'm pulling from the filesystem:

        g.DrawString(text, font, __brush, __textBoundingBox, fmt);

    where:

      - __fonts is a PrivateFontCollection,
      - PrivateFontCollection.GetFontBySize is an extension method that returns a FontFamily
      - RectangleF __textBoundingBox = new RectangleF(150, 110, 212, 64);
      - int __minimumSize = 8;
      - int __startingSize = 48;
      - Brush __brush = Brushes.White;
      - int size starts out at 48 and decrements within that loop
      - Graphics g has SmoothingMode.AntiAlias and TextRenderingHint.AntiAlias set
      - context is a System.Web.HttpContext (this is an excerpt from the ProcessRequest method of an IHttpHandler)

    The other methods are:

        private static RectangleF GetStringBounds(string text, Font font, StringFormat fmt)
        {
            CharacterRange[] range = { new CharacterRange(0, text.Length) };
            StringFormat myFormat = fmt.Clone() as StringFormat;
            myFormat.SetMeasurableCharacterRanges(range);
            using (Graphics g = Graphics.FromImage(new Bitmap(
                (int) __textBoundingBox.Width - 1,
                (int) __textBoundingBox.Height - 1)))
            {
                g.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.AntiAlias;
                g.TextRenderingHint = System.Drawing.Text.TextRenderingHint.AntiAlias;
                Region[] regions = g.MeasureCharacterRanges(text, font, __textBoundingBox, myFormat);
                return regions[0].GetBounds(g);
            }
        }

        public static string Size(this RectangleF rect)
        {
            return rect.Width + "×" + rect.Height;
        }

        public static bool IsLargerThan(this RectangleF a, RectangleF b)
        {
            return (a.Width > b.Width) || (a.Height > b.Height);
        }

    Now I have two problems. Firstly, the text sometimes insists on wrapping by inserting a line-break within a word, when it should just fail to fit and cause the while loop to decrement again. I can't see why Graphics.MeasureCharacterRanges thinks the text fits within the box when it shouldn't be word-wrapping within a word. This behaviour is exhibited irrespective of the character set used (I get it in Latin-alphabet words, as well as other parts of the Unicode range, like Cyrillic, Greek, Georgian and Armenian). Is there some setting I should be using to force Graphics.MeasureCharacterRanges to word-wrap only at whitespace characters (or hyphens)? This first problem is the same as post 2499067.
    Secondly, in scaling up to the new image and font size, Graphics.MeasureCharacterRanges is giving me heights that are wildly off. The RectangleF I am drawing within corresponds to a visually apparent area of the image, so I can easily see when the text is being decremented more than is necessary. Yet when I pass it some text, the GetBounds call is giving me a height that is almost double what it's actually taking. Using trial and error to set the __minimumSize to force an exit from the while loop, I can see that 24pt text fits within the bounding box, yet Graphics.MeasureCharacterRanges is reporting that the height of that text, once rendered to the image, is 122px (when the bounding box is 64px tall and it fits within that box). Indeed, without forcing the matter, the while loop iterates to 18pt, at which point Graphics.MeasureCharacterRanges returns a value that fits. The trace log excerpt is as follows:

        Decrementing font size to 24, as size is 193×122 and limit is 212×64
        Decrementing font size to 23, as size is 191×117 and limit is 212×64
        Decrementing font size to 22, as size is 200×75 and limit is 212×64
        Decrementing font size to 21, as size is 192×71 and limit is 212×64
        Decrementing font size to 20, as size is 198×68 and limit is 212×64
        Decrementing font size to 19, as size is 185×65 and limit is 212×64
        Writing VENNEGOOR of HESSELINK in DIN-Black at 18pt, size is 178×61 and limit is 212×64

    So why is Graphics.MeasureCharacterRanges giving me a wrong result? I could understand it being, say, the line height of the font if the loop stopped around 21pt (which would visually fit, if I screenshot the results and measure them in Paint.NET), but it goes far further than it should because, frankly, it's returning the wrong damn results. Any and all help gratefully received. Thanks!
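    One hedged suggestion for the first problem, sketched against the methods above: GDI+ breaks inside a word when no whole word fits the layout width, so checking the widest single word with StringFormatFlags.NoWrap makes that case fail the fit test instead of wrapping mid-word (LINQ assumed):

        StringFormat noWrap = fmt.Clone() as StringFormat;
        noWrap.FormatFlags |= StringFormatFlags.NoWrap;
        // if any single word is wider than the box, treat this size as too big
        bool wordTooWide = text.Split(' ').Any(word =>
            GetStringBounds(word, font, noWrap).Width > __textBoundingBox.Width);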

  • How to scan convert right edges and slopes less than one?

    - by Zachary
    I'm writing a program which will use scan conversion on triangles to fill in the pixels contained within the triangle. One thing that has me confused is how to determine the x increment for the right edge of the triangle, or for slopes less than or equal to one. Here is the code I have to handle left edges with a slope greater than one (obtained from Computer Graphics: Principles and Practice, second edition):

        for (y = ymin; y <= ymax; y++)
        {
            edge.increment += edge.numerator;
            if (edge.increment > edge.denominator)
            {
                edge.x++;
                edge.increment -= edge.denominator;
            }
        }

    The numerator is set from (xMax - xMin), and the denominator from (yMax - yMin), which makes sense as it represents the slope of the line. As you move up the scan lines (represented by the y values), x is incremented by numerator/denominator, which results in x having a whole part and a fractional part. When the accumulated fraction exceeds one, the x value has to be incremented by 1 (which is what edge.increment -= edge.denominator handles). This works fine for any left-hand edge with a slope greater than one, but I'm having trouble generalizing it for any edge, and googling has proved fruitless. Does anyone know the algorithm for that?
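    A sketch of the usual generalization under the same incremental scheme: carry the sign of dx in a step variable, and let x advance more than once per scanline so shallow edges (|slope| less than one in this y-major form) are covered too. span_at is a hypothetical per-scanline callback, and horizontal edges (dy == 0) are assumed to be skipped by the caller:

        int dx = xmax - xmin, dy = ymax - ymin;      /* dy > 0 assumed */
        int xstep = (dx >= 0) ? 1 : -1;              /* right edges can walk left */
        int numerator = abs(dx), denominator = dy;
        int increment = 0, x = xmin;
        for (int y = ymin; y <= ymax; y++) {
            span_at(y, x);                           /* record this edge's x */
            increment += numerator;
            while (increment > denominator) {        /* may step several times */
                x += xstep;
                increment -= denominator;
            }
        }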

  • Image drawing library for Haskell?

    - by absz
    I'm working on a Haskell program for playing spatial games: I have a graph of a bunch of "individuals" playing the Prisoner's Dilemma, but only with their immediate neighbors, and copying the strategies of the people who do best. I've reached a point where I need to draw an image of the world, and this is where I've hit problems. Two of the possible geometries are easy: if people have four or eight neighbors each, then I represent each one as a filled square (with color corresponding to strategy) and tile the plane with these. However, I also have a situation where people have six neighbors (hexagons) or three neighbors (triangles). My question, then, is: what's a good Haskell library for creating images and drawing shapes on them? I'd prefer that it create PNGs, but I'm not incredibly picky. I was originally using Graphics.GD, but it only exports bindings to functions for drawing points, lines, arcs, ellipses, and non-rotated rectangles, which is not sufficient for my purposes (unless I want to draw hexagons pixel by pixel*). I looked into using foreign import, but it's proving a bit of a hassle (partly because the polygon-drawing function requires an array of gdPoint structs), and given that my requirements may grow, it would be nice to use an in-Haskell solution and not have to muck about with the FFI (though if push comes to shove, I'm willing to do that). Any suggestions? * That is also an option, actually; any tips on how to do that would also be appreciated, though I think a library would be easier.
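    In case the pixel-by-pixel fallback is chosen after all, a sketch of a scanline fill against Graphics.GD (the drawLine and Point signatures here are assumptions about the gd package): any convex polygon, hexagons and triangles included, can be filled with one horizontal drawLine per scanline, keeping everything in Haskell:

        import Graphics.GD
        import Data.List (sort)

        -- fill a convex polygon given as a list of (x, y) vertices
        fillConvex :: [(Double, Double)] -> Color -> Image -> IO ()
        fillConvex verts col img = mapM_ scan [ceiling ymin .. floor ymax]
          where
            ys           = map snd verts
            (ymin, ymax) = (minimum ys, maximum ys)
            edges        = zip verts (tail verts ++ [head verts])
            scan y = case sort (concatMap (hit (fromIntegral y)) edges) of
              xs@(_:_) -> drawLine (round (head xs), y)
                                   (round (last xs), y) col img
              _        -> return ()
            -- x of the intersection of scanline y with one edge, if any
            hit y ((x1, y1), (x2, y2))
              | (y1 <= y && y < y2) || (y2 <= y && y < y1)
                          = [x1 + (y - y1) * (x2 - x1) / (y2 - y1)]
              | otherwise = []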

  • Scaling Image to multiple sizes for Deep Zoom

    - by AnthonyWJones
    Let's assume I have a bitmap with a square aspect and a width of 2048 pixels. In order to create the set of files needed by Silverlight's DeepZoomImageTileSource, I need to scale this bitmap to 1024, then to 512, then to 256, and so on down to a 1-pixel image. There are two, I suspect naive, approaches:

      1. For each image required, scale the original full-size image to the required size. However, it seems excessive to scale the full image down to the very small sizes.
      2. Having scaled from one level to the next, discard the original image and use each successive scaled image as the source of the next smaller image. However, I suspect this would generate images in the 256-64 range with poorer fidelity than option 1.

    Note that, unlike with the Deep Zoom Composer, this tool is expected to act in an on-demand fashion, so it needs to complete in a reasonable timeframe (30 seconds tops). On the plus side, I'm only creating a single multiscale image, not a pyramid of multiple high-res images. I am outside my comfort zone here; any graphics experts got any advice? Am I wrong about point 2? Is point 1 reasonably performant and I'm worrying about nothing? Option 3?
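    A sketch of approach 2 with the GDI+ calls one would typically reach for (SaveLevel is a hypothetical writer): with high-quality resampling and exact power-of-two steps, successive halving is usually close to indistinguishable from rescaling the original each time, and far cheaper:

        Bitmap current = source;
        for (int size = 1024; size >= 1; size /= 2)
        {
            var next = new Bitmap(size, size);
            using (var g = Graphics.FromImage(next))
            {
                g.InterpolationMode =
                    System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
                g.DrawImage(current, 0, 0, size, size);
            }
            SaveLevel(next, size);                  // hypothetical tile writer
            if (current != source) current.Dispose();
            current = next;
        }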

  • Turning off antialiasing in Löve2D

    - by cjanssen
    I'm using Löve2D for writing a small game. Löve2D is an open source game engine for Lua. The problem I'm encountering is that some antialias filter is automatically applied to your sprites when you draw them at non-integer positions:

        love.graphics.draw( sprite, x, y )

    So when x or y is not round (for example, x=100.24), the sprite appears blurred. The same happens when the sprite size is not even, because (x,y) points to the center of the sprite. For example, a 31x30 sprite will appear blurred again, because its pixels are painted at non-integer positions. Since I am using pixel art, I want to avoid this entirely, otherwise the art is destroyed by this effect. The workaround I am using so far is to force the coordinates to be round by littering the code with calls to math.floor(), and forcing all the sprites to have even sizes by adding a row or column of transparent pixels with the paint program, if needed. Is there some command to deactivate the antialiasing I can call at program startup?
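    For later readers: depending on the Löve version, the blur described here is the bilinear image filter rather than true antialiasing, and it can be switched off per image (Image:setFilter is assumed to exist in the version in use):

        -- nearest-neighbour sampling keeps pixel art crisp at any position
        sprite = love.graphics.newImage("sprite.png")
        sprite:setFilter("nearest", "nearest")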

  • Need guidelines for optimizing WebGL performance by minimizing shader changes

    - by brainjam
    I'm trying to get an idea of the practicality of WebGL for rendering large architectural interior scenes, consisting of 100K's of triangles. These triangles are distributed over many objects, and there are many materials in the scene. On the other hand, there are no moving parts. And the materials tend to be fairly simple, mostly based on texture maps. There is a lot of texture map sharing; for example, all the chairs in a scene will share a common map. There is also some multitexturing - up to three textures overlaid in a material. I've been doing a little experimentation and reading, and gather that frequently switching materials during a rendering pass will slow things down. For example, a scene with 200K triangles will have significant performance differences, depending on whether there are 10 or 1000 objects, assuming that each time an object is displayed a new material is set up. So it seems that if performance is important, the scene should be sorted by materials so as to minimize material switching. What I'm looking for is guidelines on how to think of the overhead of various state changes, and where I get the biggest bang for the buck. For example:

      - what are the relative performance costs of, say, gl.useProgram(), gl.uniformMatrix4fv(), gl.drawElements()?
      - should I try to write ubershaders to minimize shader switching?
      - should I try to aggregate geometry to minimize the number of gl.drawElements() calls?

    I realize that mileage may vary depending on browser, OS, and graphics hardware. And I'm also not looking for heroic measures. Just some guidelines from people who have already had some experience in making scenes fast. I'll add that while I've had some experience with fixed-pipeline OpenGL programming in the past, I'm rather new to the WebGL/OpenGL ES 2.0 way of doing things.
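    A sketch of the sorting idea, with the item fields as assumptions rather than any WebGL API: keep the draw list ordered by program, then texture, and bind only on change. Of the three calls listed, program switches are usually reckoned the most expensive and uniform uploads the cheapest:

        items.sort(function (a, b) {
          return a.programId - b.programId || a.textureId - b.textureId;
        });
        var lastProgram = null, lastTexture = null;
        for (var i = 0; i < items.length; i++) {
          var it = items[i];
          if (it.program !== lastProgram) {       // costliest switch, done rarely
            gl.useProgram(it.program);
            lastProgram = it.program;
          }
          if (it.texture !== lastTexture) {
            gl.bindTexture(gl.TEXTURE_2D, it.texture);
            lastTexture = it.texture;
          }
          gl.drawElements(gl.TRIANGLES, it.count, gl.UNSIGNED_SHORT, it.offset);
        }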

  • Using the contents of an array to set individual pixels in a Quartz bitmap context

    - by Magic Bullet Dave
    I have an array that contains the RGB colour values for each pixel in a 320 x 180 display. I would like to be able to set individual pixel values in a bitmap context of the same size offscreen, then display the bitmap context in a view. It appears that I have to create 1x1 rects and either put a stroke on them or a line of length 1 at the point in question. Is that correct? I'm looking for a very efficient way of getting the array data onto the graphics context; as you can imagine, this is going to be an image buffer that cycles at 25 frames per second, and drawing in this way seems inefficient. I guess the other question is: should I use OpenGL ES instead? Thoughts/best practice would be much appreciated. Regards, Dave

    OK, have come a short way, but can't make the final hurdle and I am not sure why this isn't working:

        - (void)displayContentsOfArray1UsingBitmap:(CGContextRef)context
        {
            long bitmapData[WIDTH * HEIGHT];

            // Build bitmap
            int i, j, h;
            for (i = 0; i < WIDTH; i++) {
                for (j = 0; j < HEIGHT; j++) {
                    h = frameBuffer01[i][j];
                    bitmapData[i * j] = h;
                }
            }

            // Blit the bitmap to the context
            CGDataProviderRef providerRef = CGDataProviderCreateWithData(NULL,
                bitmapData, 4 * WIDTH * HEIGHT, NULL);
            CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
            CGImageRef imageRef = CGImageCreate(WIDTH, HEIGHT, 8, 32, WIDTH * 4,
                colorSpaceRef, kCGImageAlphaFirst, providerRef, NULL, YES,
                kCGRenderingIntentDefault);
            CGContextDrawImage(context, CGRectMake(0.0, HEIGHT, WIDTH, HEIGHT), imageRef);
            CGImageRelease(imageRef);
            CGColorSpaceRelease(colorSpaceRef);
            CGDataProviderRelease(providerRef);
        }
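    Two probable bugs in the excerpt above, sketched as a fix rather than a verified one: the index i * j collapses many pixels onto the same slot (all of row 0 and column 0 land on index 0), and the destination rect starts at y = HEIGHT instead of 0. A fixed-width 32-bit element type is also safer than long:

        uint32_t bitmapData[WIDTH * HEIGHT];
        for (i = 0; i < WIDTH; i++)
            for (j = 0; j < HEIGHT; j++)
                bitmapData[j * WIDTH + i] = frameBuffer01[i][j];  /* row-major index */
        /* ... create the provider and image as before, then: */
        CGContextDrawImage(context, CGRectMake(0.0, 0.0, WIDTH, HEIGHT), imageRef);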

  • Ogre material scripts; how do I give a technique multiple lod_indexes?

    - by BlueNovember
    I have an Ogre material script that defines 4 rendering techniques: 1 using GLSL shaders, then 3 others that just use textures of different resolutions. I want to use the GLSL shader unconditionally if the graphics card supports it, and the other 3 textures depending on camera distance. At the moment my script is:

        material foo
        {
            lod_distances 1600 2000

            technique shaders
            {
                lod_index 0
                lod_index 1
                lod_index 2
                // various passes here
            }
            technique high_res
            {
                lod_index 0
                // various passes here
            }
            technique medium_res
            {
                lod_index 1
                // various passes here
            }
            technique low_res
            {
                lod_index 2
                // various passes here
            }
        }

    Extra information: the Ogre manual says that increasing indexes denote lower levels of detail, and that you can (and often will) assign more than one technique to the same LOD index; what this means is that OGRE will pick the best technique of the ones listed at the same LOD index. OGRE determines which one is 'best' by which one is listed first. Currently, on a machine supporting the GLSL version I am using, the script behaves as follows (distances reconstructed from the lod_distances line above):

        Camera > 2000:           Shader technique
        1600 < Camera <= 2000:   Medium
        Camera <= 1600:          High

    If I change the lod order in the shader technique to

        lod_index 2
        lod_index 1
        lod_index 0

    the behaviour becomes:

        Camera > 2000:           Low
        1600 < Camera <= 2000:   Medium
        Camera <= 1600:          Shader

    implying only the last lod_index is used. If I change it to

        lod_index 0 1 2

    it shouts at me:

        Compiler error: fewer parameters expected in foo.material(#): lod_index only supports 1 argument

    So how do I specify a technique to have 3 lod_indexes? Duplication works:

        technique shaders
        {
            lod_index 0
            // various passes here
        }
        technique shaders1
        {
            lod_index 1
            // passes repeated here
        }
        technique shaders2
        {
            lod_index 2
            // passes repeated here
        }

    ...but it's ugly.

  • Drawing an image in Java, slow as hell on a netbook.

    - by Norswap
    In follow-up to my previous questions (especially this one: http://stackoverflow.com/questions/2684123/java-volatileimage-slower-than-bufferedimage), I have noticed that simply drawing an Image (it doesn't matter if it's buffered or volatile, since the computer has no accelerated memory*, and tests show it doesn't change anything) tends to be very long.

        (*) System.out.println(GraphicsEnvironment.getLocalGraphicsEnvironment()
                .getDefaultScreenDevice().getAvailableAcceleratedMemory());  // --> 0

    How long? For a 500x400 image, about 0.04 seconds. This is only drawing the image on the backbuffer (obtained via buffer strategy). Now considering that World of Warcraft runs on that netbook (though it is quite laggy) and that online Java games seem to have no problem whatsoever, this is quite thought-provoking. I'm quite certain I didn't miss something obvious; I've searched the web extensively, but nothing will do. So do any of you Java whizzes have an idea of what obscure problem might be causing this (or maybe it is normal, though I doubt it)? PS: As I'm writing this, I realize this might be caused by my Linux installation (Arch Linux), though I have the correct Intel driver. But my computer normally has "Integrated Intel Graphics Media Accelerator 950", which would mean it should have accelerated video memory somehow. Any ideas about this side of things?
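    One hedged thing to try on Linux, where the default X11 pipeline often reports zero accelerated memory: opt into the OpenGL pipeline before any AWT code runs (a real Java2D flag, though whether it helps depends on the driver):

        public static void main(String[] args) {
            // equivalent to passing -Dsun.java2d.opengl=true on the command line
            System.setProperty("sun.java2d.opengl", "true");
            // ... create the frame, canvas and BufferStrategy as before
        }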

  • C# / Silverlight / WPF / Fast rendering lots of circles

    - by Walt W
    I want to render a lot of circles or small graphics within either Silverlight or WPF (around 1000-10000) as fast and as frequently as possible. If I have to go to DX or OGL, that's fine, but I'm wondering about doing this within either of those two frameworks first (read: it's OK if an answer is WPF-only or Silverlight-only). Also, if there is a way to access DX through WPF and render on a surface that way, I would be interested in that as well. So, what's the fastest way to draw a load of circles? They can be as plain as necessary, but they do need to have a radius. Currently I'm using DrawingVisual and a DrawingContext.DrawEllipse() command for each circle, then rendering the visual to a RenderTargetBitmap, but it becomes very slow as the number of circles rises. By the way, these circles move every frame, so caching isn't really an option unless you're going to suggest caching the individual circles... But their sizes are dynamic, so I'm not sure that's a great approach.
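    A sketch of the usual WPF-side answer before dropping to DX: keep the DrawingVisual on screen instead of rendering it to a RenderTargetBitmap each frame, hosting it in a custom element and re-opening its DrawingContext per frame with frozen brushes (the class and method names here are made up for illustration):

        class CircleLayer : FrameworkElement
        {
            private readonly DrawingVisual _visual = new DrawingVisual();

            public CircleLayer() { AddVisualChild(_visual); }

            protected override int VisualChildrenCount { get { return 1; } }
            protected override Visual GetVisualChild(int index) { return _visual; }

            public void Redraw(IList<Point> centers, double radius)
            {
                using (DrawingContext dc = _visual.RenderOpen())
                {
                    // Brushes.Red is already frozen, so WPF retains this cheaply
                    foreach (Point c in centers)
                        dc.DrawEllipse(Brushes.Red, null, c, radius, radius);
                }
            }
        }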

  • Can I access views when drawn in the drawRect method?

    - by GianPac
    When creating the content view of a tableViewCell, I used drawInRect:withFont:, drawAtPoint:... and others, inside drawRect:, since all I needed was to lay out some text. Turns out, part of the text drawn needs to be clickable URLs. So I decided to create a UIWebView and insert it into my drawRect:. Everything seems to be laid out fine; the problem is that interaction with the UIWebView is not happening. I tried enabling the interaction; it did not work. I want to know: since with drawRect: I am drawing to the current graphics context, is there anything I can do to have my subviews interact with the user? Here is some of the code I use inside the UITableViewCell class:

        - (void)drawRect:(CGRect)rect {
            NSString *comment = @"something";
            [comment drawAtPoint:point forWidth:195 withFont:someFont
                   lineBreakMode:UILineBreakModeMiddleTruncation];

            NSString *anotherCommentWithURL = @"this comment has a URL http://twtr.us";
            UIWebView *webView = [[UIWebView alloc] initWithFrame:someFrame];
            webView.delegate = self;
            webView.text = anotherCommentWithURL;
            [self addSubView:webView];
            [webView release];
        }

    As I said, the view draws fine, but there is no interaction with the webView from the outside. The URL gets embedded into HTML and should be clickable. It is not. I got it working on another view, but on that one I lay out the views. Any ideas?
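    For later readers, a hedged sketch of the usual fix: drawRect: is only for drawing into the graphics context, so subviews should be created once, in the cell's initializer, and added to contentView, where they hit-test normally (someFrame is the same placeholder as in the question):

        - (id)initWithStyle:(UITableViewCellStyle)style
              reuseIdentifier:(NSString *)reuseId {
            if ((self = [super initWithStyle:style reuseIdentifier:reuseId])) {
                UIWebView *webView = [[UIWebView alloc] initWithFrame:someFrame];
                webView.delegate = self;
                [self.contentView addSubview:webView];  // interactive subview
                [webView release];
            }
            return self;
        }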

  • OpenGL fast texture drawing with vertex buffer objects. Is this the way to do it?

    - by Matthew Mitchell
    Hello. I am making a 2D game with OpenGL. I would like to speed up my texture drawing by using VBOs. Currently I am using immediate mode. I am generating my own coordinates when I rotate and scale a texture. I also have the functionality of rounding the corners of a texture, using the polygon primitive to draw those. I was thinking: would it be fastest to make a VBO with vertices for the sides of the texture, with no offset included, so I can then use glViewport, glScale (or glTranslate? What is the difference, and which is most suitable here?) and glRotate to move the drawing position for my texture? Then I can use the same VBO, with no changes, to draw the texture each time. I would only change the VBO when I need to add coordinates for the rounded corners. Is that the best way to do this? What things should I look out for while doing it? Is it really fastest to use GL_TRIANGLES instead of GL_QUADS on modern graphics cards? Thank you for any answer.
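    A sketch of the shared-unit-quad idea, assuming the fixed-function client arrays are already enabled (glEnableClientState with GL_VERTEX_ARRAY and GL_TEXTURE_COORD_ARRAY): glTranslate moves the origin, glRotate spins around it, and glScale stretches the unit quad to the sprite's size, so one static VBO serves every sprite:

        static const GLfloat quad[16] = {   /* x, y, u, v per vertex */
            0, 0, 0, 0,   1, 0, 1, 0,   1, 1, 1, 1,   0, 1, 0, 1
        };
        GLuint vbo;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, sizeof quad, quad, GL_STATIC_DRAW);

        /* per sprite, each frame */
        glPushMatrix();
        glTranslatef(x, y, 0.0f);                  /* where it goes */
        glRotatef(angle, 0.0f, 0.0f, 1.0f);        /* how it is oriented */
        glScalef(width, height, 1.0f);             /* how big the unit quad becomes */
        glVertexPointer(2, GL_FLOAT, 4 * sizeof(GLfloat), (void *)0);
        glTexCoordPointer(2, GL_FLOAT, 4 * sizeof(GLfloat),
                          (void *)(2 * sizeof(GLfloat)));
        glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
        glPopMatrix();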

  • Using CGContextDrawTiledImage at different zooms causes massive memory growth

    - by Jacques
    I'm working on an app where there's a view in a zoomable UIScrollView. When the user zooms in or out, I redraw the view that's in the UIScrollView to be nice and sharp. That view has a background image that I draw with CGContextDrawTiledImage. I noticed that memory usage grows every time I switch to a new zoom level. It looks like CGContextDrawTiledImage keeps a cache somewhere of the image scaled to different sizes. So, if I go from 1.0 to 1.1x zoom, memory use grows. Going back to 1.0 doesn't cause it to grow, but then going to 1.05 and then 1.2 causes it to grow twice. Back to 1.1 and no growth. Of course, the zoom level is under user control, so I don't have control over how many zoom levels occur. Right now my background image is kind of massive (512x512), so this causes memory usage to grow very quickly. It doesn't show up as a memory leak in Instruments, just additional allocations that never get freed. I've tried to find a way to free the cache that appears to be being created, but no luck. It doesn't seem to respond to low memory warnings, for example. I also tried setting the view's backgroundColor to a UIColor created with colorWithPatternImage, but that doesn't work because I'm doing the scaling by changing the graphics context's CTM, not by setting the view's transform. Any ideas on how to keep memory usage from blowing up?
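    A hedged workaround, if the cache really is per-scale: tile by hand with plain CGContextDrawImage, which draws at whatever CTM is current without the tiled-drawing cache (bounds, tileSize and tileImage are assumed names):

        CGContextClipToRect(ctx, rect);   /* only the area being redrawn */
        for (CGFloat ty = 0; ty < bounds.size.height; ty += tileSize.height)
            for (CGFloat tx = 0; tx < bounds.size.width; tx += tileSize.width)
                CGContextDrawImage(ctx,
                    CGRectMake(tx, ty, tileSize.width, tileSize.height), tileImage);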

  • Pythagoras tree with g2d

    - by owca
    I'm trying to build my first fractal (a Pythagoras tree) in Java using Graphics2D. Here's what I have now:

        import java.awt.*;
        import java.awt.geom.*;
        import javax.swing.*;
        import java.util.Scanner;

        public class Main {
            public static void main(String[] args) {
                int i = 0;
                Scanner scanner = new Scanner(System.in);
                System.out.println("Give amount of steps: ");
                i = scanner.nextInt();
                new Pitagoras(i);
            }
        }

        class Pitagoras extends JFrame {
            private int powt, counter;

            public Pitagoras(int i) {
                super("Pythagoras Tree.");
                setSize(1000, 1000);
                setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                setVisible(true);
                powt = i;
            }

            private void paintIt(Graphics2D g) {
                double p1 = 450, p2 = 800, size = 200;
                for (int i = 0; i < powt; i++) {
                    if (i == 0) {
                        g.drawRect((int) p1, (int) p2, (int) size, (int) size);
                        counter++;
                    } else {
                        if (i % 2 == 0) {
                            // here I must draw two squares
                        } else {
                            // here I must draw a right triangle
                        }
                    }
                }
            }

            @Override
            public void paint(Graphics graph) {
                Graphics2D g = (Graphics2D) graph;
                paintIt(g);
            }
        }

    So basically I set the number of steps, and then draw the first square (p1, p2 and size). Then if the step is odd I need to build a right triangle on top of the square. If the step is even I need to build two squares on the free sides of the triangle. What method should I choose now for drawing both the triangle and the squares? I was thinking about drawing the triangle with simple lines and transforming them with AffineTransform, but I'm not sure if that's doable, and it doesn't solve drawing the squares.
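    AffineTransform is indeed enough for both shapes. A sketch of the usual recursive formulation, in which every square is the same unit square and the transform carries all the geometry (the initial translate/scale/stroke values are assumptions matched to the 450/800/200 starting square above):

        // called initially as: g.translate(450, 1000); g.scale(200, 200);
        //                      g.setStroke(new BasicStroke(1f / 200));
        //                      branch(g, depth);
        void branch(Graphics2D g, int depth) {
            if (depth == 0) return;
            g.draw(new Rectangle2D.Double(0, -1, 1, 1));  // unit square above baseline
            double s = Math.sqrt(0.5);                    // children shrink by sqrt(1/2)
            AffineTransform saved = g.getTransform();
            g.translate(0, -1); g.rotate(-Math.PI / 4); g.scale(s, s);
            branch(g, depth - 1);                         // square on the left leg
            g.setTransform(saved);
            g.translate(0.5, -1.5); g.rotate(Math.PI / 4); g.scale(s, s);
            branch(g, depth - 1);                         // square on the right leg
            g.setTransform(saved);
        }

    The right triangle never needs to be drawn explicitly: its legs are the baselines of the two child squares, so it appears between them on its own.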

  • How to implement a grapher in C#

    - by iansinke
    So I'm writing a graphing calculator. So far I have a semi-functional grapher; however, I'm having a hard time getting a good balance between accurate graphs and smooth-looking curves. The current implementation (semi-pseudo-code) looks something like this:

        for (float i = GraphXMin; i <= GraphXMax; i++)
        {
            PointF P = new PointF(i, EvaluateFunction(Function, i));
            ListOfPoints.Add(P);
        }
        Graphics.DrawCurve(ListOfPoints);

    The problem with this is that since it only adds a point at every integer value, graphs end up distorted when their turning points don't fall on integers (e.g. sin(x)^2). I tried incrementing i by something smaller (like 0.1), which works, but the graph looks very rough. I am using C# and GDI+. I have SmoothingMode set to AntiAlias, so that's not the problem, as you can see from the first graph. Is there some sort of issue with drawing curves with a lot of points? Should the points perhaps be positioned exactly on pixels? I'm sure some of you have worked on something very similar before, so any suggestions? While you're at it, do you have any suggestions for graphing functions with asymptotes, e.g. 1/x^2? P.S. I'm not looking for a library that does all this - I want to write it myself.
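    A sketch of the per-pixel-column approach, which also handles asymptotes by breaking the polyline whenever a sample is non-finite or jumps past a clip limit (ToPixelX/ToPixelY, PlotWidth and ClipLimit are assumed helpers standing in for the question's own):

        var segment = new List<PointF>();
        for (int px = 0; px <= PlotWidth; px++)
        {
            float x = GraphXMin + px * (GraphXMax - GraphXMin) / PlotWidth;
            float y = EvaluateFunction(Function, x);
            if (float.IsNaN(y) || float.IsInfinity(y) || Math.Abs(y) > ClipLimit)
            {
                if (segment.Count > 1) g.DrawLines(pen, segment.ToArray());
                segment.Clear();                 // start a new branch of the curve
            }
            else
            {
                segment.Add(new PointF(ToPixelX(x), ToPixelY(y)));
            }
        }
        if (segment.Count > 1) g.DrawLines(pen, segment.ToArray());

    Sampling once per pixel column makes DrawCurve's overshoot-prone spline fitting unnecessary; straight DrawLines segments a pixel apart already look smooth under AntiAlias.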

  • Quality questionnaire php mysql graphics

    - by Marcelo
    Hi, I'm making a questionnaire about service quality; it contains the options (poor, regular, good, very good). It contains 6 questions (radio buttons) and a suggestion box (textbox). In the database table I created 6 columns for the questions, 1 for the suggestion and 1 for the date (a friend of mine told me to use this, but I didn't get why).

    q1) I'm going to attribute a value from 1 to 4 to the radio button options, and I'd like to sum every answer for each question, then divide by the number of users that answered that question to give the mean. How am I supposed to do that? I'd also like to generate reports for the month and for the year.

    q2) Not only for the questionnaire but for registration too: I need all the fields to be completed, no blank options; if the user doesn't complete all the fields, the form should not be submitted and a warning message should be shown to the user.

    q3) About the field type, I'd like it to match the column type declared in the database; I'm having a "problem". Ex: Name (varchar): 1234 (int) - in the 'name' field of the table, 1234 will be accepted as a name, and I don't want this; I want only values of the type that I declared when creating the table.

    q4) Not only about the questionnaire: I'd also like to know if it's possible to create pie charts showing the percentage for each question. Is this possible?

    q5) I'm using phpMyAdmin, and some of my ids are auto_increment; because of my tests they are at a high number. I'd like to restart the id numbering at 0. Is this possible?

    Thanks for the attention.
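    For q1 and q5, hedged SQL sketches (table and column names are assumptions): AVG does the sum-and-divide in one step, the date column makes the monthly reports possible (which is presumably why the friend suggested it), and MySQL offers two ways to reset an auto_increment counter:

        -- mean of question 1, overall and for one month
        SELECT AVG(q1) AS mean_q1, COUNT(*) AS answers FROM questionnaire;
        SELECT AVG(q1) FROM questionnaire
         WHERE answered_at BETWEEN '2010-05-01' AND '2010-05-31';

        -- q5: reset the counter (ALTER keeps existing rows; TRUNCATE deletes them)
        ALTER TABLE questionnaire AUTO_INCREMENT = 1;
        TRUNCATE TABLE questionnaire;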

  • Is there a way to capture a bitmap from a WPF window using native C++?

    - by Mike Caron
    Imagine a document window in a MDI application which contains a child WPF window, say a sidebar for example. How can one get a bitmap containing both the WPF pixels AND the GDI (non-wpf) pixels? I've discovered that when making my thumbnail preview for the Win7 taskbar app icon hover, I get black in the parts of the preview where the WPF pixels should be. My current method simply grabs a bitmap capture of the document window. Then I get a DC for the preview, make a memory DC from it and select my bitmap into it. Then I do some size adjustments and bitblt the memory dc to the real dc. I'm guessing that the BitBlt operation doesn't take into account the fact that the WPF pixels are hardware accelerated and therefore need to be grabbed from the graphics hardware. All the stuff in GDI is managed just fine, though and when there's no WPF child windows, the preview image looks fine. I'm wondering if it's at all possible to grab a bitmap of the WPF window from native C++. Then I can blt that onto the black area of the previous preview.
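    One hedged native-side option for later readers: on Windows 8.1 and newer, PrintWindow accepts the PW_RENDERFULLCONTENT flag, which asks the window to render hardware-composed content (including WPF surfaces) into the DC; on earlier systems, a managed-side RenderTargetBitmap handed across the boundary may be the only route:

        #ifndef PW_RENDERFULLCONTENT
        #define PW_RENDERFULLCONTENT 0x00000002  // defined from the Windows 8.1 SDK on
        #endif

        RECT rc;
        GetWindowRect(hwnd, &rc);
        HDC screenDC = GetDC(NULL);
        HDC memDC = CreateCompatibleDC(screenDC);
        HBITMAP bmp = CreateCompatibleBitmap(screenDC,
                                             rc.right - rc.left, rc.bottom - rc.top);
        HGDIOBJ old = SelectObject(memDC, bmp);
        PrintWindow(hwnd, memDC, PW_RENDERFULLCONTENT);  // WPF pixels included
        SelectObject(memDC, old);
        ReleaseDC(NULL, screenDC);
        // bmp now holds the capture; the caller owns cleanup of bmp and memDC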

  • java.awt -- when Java outputs an image to my monitor (screen), where is the file that is output to the monitor card?

    - by user1405870
    Suppose that I am drawing a set of images using Java graphics objects, and that Java is outputting these images to my monitor. Where are the file or files that are sent to the monitor card (the graphical representation files)? How can I take this file and save it to disk, or write it to an array, or combine the results of several such outputs (to the monitor) into a single file for saving? I don't want to use a screen-shot feature; I want to be able to redirect (or capture) the output to the monitor as some sort of byte stream. I note that monitors are much better than semaphores, when you are talking about display capabilities; I don't need a counterexample. I might not be asking the correct question. It might be that I want to capture the file while it is still in user space, before it is put into 'device space'. I would like to try to capture the byte stream so that I can convert it to MPEG-4 format. I either need a streaming output from the MPEG-4 converter, coming from the streaming input, or else I need to take static images at discrete times and convert those. What format will the output from user space be in? What format will the device-space output be in? Try to keep speculation to a minimum. http://docs.oracle.com/javame/config/cdc/opt-pkgs/api/jsr927/index.html - I guess that Java has made a means of displaying AWT objects on a television screen. Thank you. Ryan Zoerner
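    The short answer is that there is no intermediate file: Java2D rasterizes straight into a surface. The practical route, sketched here with an assumed component variable, is to point the same painting code at an off-screen BufferedImage and hand its pixels to an encoder:

        BufferedImage frame = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        Graphics2D g2 = frame.createGraphics();
        component.paint(g2);         // same painting code, off-screen target
        g2.dispose();
        ImageIO.write(frame, "png", new File("frame-0001.png"));  // one still
        int[] argb = frame.getRGB(0, 0, w, h, null, 0, w);        // raw pixels for MPEG-4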

  • How do I make a jumping dolphin rotate realistically?

    - by Johnny
    I want to program a dolphin that jumps and rotates like a real dolphin. Jumping is not the problem, but I don't know how to make the rotation. At the moment, my dolphin rotates a little weirdly. But I want it to rotate like a real dolphin does. How can I improve the rotation?

        public class Game1 : Microsoft.Xna.Framework.Game
        {
            GraphicsDeviceManager graphics;
            SpriteBatch spriteBatch;
            Texture2D image, water;
            float Gravity = 5.0F;
            float Acceleration = 20.0F;
            Vector2 Position = new Vector2(1200, 720);
            Vector2 Velocity;
            float rotation = 0;
            SpriteEffects flip;
            Vector2 Speed = new Vector2(0, 0);

            public Game1()
            {
                graphics = new GraphicsDeviceManager(this);
                Content.RootDirectory = "Content";
                graphics.PreferredBackBufferWidth = 1280;
                graphics.PreferredBackBufferHeight = 720;
            }

            protected override void Initialize() { base.Initialize(); }

            protected override void LoadContent()
            {
                spriteBatch = new SpriteBatch(GraphicsDevice);
                image = Content.Load<Texture2D>("cartoondolphin");
                water = Content.Load<Texture2D>("background");
                flip = SpriteEffects.None;
            }

            protected override void Update(GameTime gameTime)
            {
                float VelocityX = 0f;
                float VelocityY = 0f;
                float time = (float)gameTime.ElapsedGameTime.TotalSeconds;
                KeyboardState kbState = Keyboard.GetState();

                if (kbState.IsKeyDown(Keys.Left)) {
                    rotation = 0;
                    flip = SpriteEffects.None;
                    VelocityX += -5f;
                }
                if (kbState.IsKeyDown(Keys.Right)) {
                    rotation = 0;
                    flip = SpriteEffects.FlipHorizontally;
                    VelocityX += 5f;
                }

                // jump if the dolphin is under water
                if (Position.Y >= 670) {
                    if (kbState.IsKeyDown(Keys.A)) {
                        if (flip == SpriteEffects.None) {
                            rotation += 0.01f;
                            VelocityY += 40f;
                        } else {
                            rotation -= 0.01f;
                            VelocityY += 40f;
                        }
                    }
                } else {
                    if (flip == SpriteEffects.None) {
                        rotation -= 0.01f;
                        VelocityY += -10f;
                    } else {
                        rotation += 0.01f;
                        VelocityY += -10f;
                    }
                }

                float deltaY = 0;
                float deltaX = 0;
                deltaY = Gravity * (float)gameTime.ElapsedGameTime.TotalSeconds;
                deltaX += VelocityX * (float)gameTime.ElapsedGameTime.TotalSeconds * Acceleration;
                deltaY += -VelocityY * (float)gameTime.ElapsedGameTime.TotalSeconds * Acceleration;
                Speed = new Vector2(Speed.X + deltaX, Speed.Y + deltaY);
                Position += Speed * (float)gameTime.ElapsedGameTime.TotalSeconds;
                Velocity.X = 0;

                if (Position.Y + image.Height / 2 > graphics.PreferredBackBufferHeight)
                    Position.Y = graphics.PreferredBackBufferHeight - image.Height / 2;

                base.Update(gameTime);
            }

            protected override void Draw(GameTime gameTime)
            {
                GraphicsDevice.Clear(Color.CornflowerBlue);
                spriteBatch.Begin();
                spriteBatch.Draw(water, new Rectangle(0, graphics.PreferredBackBufferHeight - 100,
                    graphics.PreferredBackBufferWidth, 100), Color.White);
                spriteBatch.Draw(image, Position, null, Color.White, rotation,
                    new Vector2(image.Width / 2, image.Height / 2), 1, flip, 1);
                spriteBatch.End();
                base.Draw(gameTime);
            }
        }

    I changed my code a little, but I still have some trouble with the rotation. Here's the entire code. The dolphin looks in the wrong direction if I press the left or right key. For example, it looks down if I press the left key. What is wrong with the rotation? At the beginning, the dolphin looks to the left side, but after I press a key it just looks down or up. I deleted the "rotation += 0.01f;" lines from the code. Is that correct?

        public class Game1 : Microsoft.Xna.Framework.Game
        {
            GraphicsDeviceManager graphics;
            SpriteBatch spriteBatch;
            Texture2D image, water;
            float Gravity = 5.0F;
            float Acceleration = 20.0F;
            Vector2 Position = new Vector2(1200, 720);
            Vector2 Velocity;
            float rotation = 0;
            SpriteEffects flip;
            Vector2 Speed = new Vector2(0, 0);
            Vector2 prevPos;

            public Game1()
            {
                graphics = new GraphicsDeviceManager(this);
                Content.RootDirectory = "Content";
                graphics.PreferredBackBufferWidth = 1280;
                graphics.PreferredBackBufferHeight = 720;
            }

            protected override void Initialize() { base.Initialize(); }

            protected override void LoadContent()
            {
                spriteBatch = new SpriteBatch(GraphicsDevice);
                image = Content.Load<Texture2D>("cartoondolphin");
                water = Content.Load<Texture2D>("background");
                flip = SpriteEffects.None;
            }

            protected override void Update(GameTime gameTime)
            {
                float VelocityX = 0f;
                float VelocityY = 0f;
                float time = (float)gameTime.ElapsedGameTime.TotalSeconds;
                KeyboardState kbState = Keyboard.GetState();

                if (kbState.IsKeyDown(Keys.Left)) {
                    flip = SpriteEffects.None;
                    VelocityX += -5f;
                }
                if (kbState.IsKeyDown(Keys.Right)) {
                    flip = SpriteEffects.FlipHorizontally;
                    VelocityX += 5f;
                }

                rotation = (float)Math.Atan2(Position.X - prevPos.X, Position.Y - prevPos.Y);
                prevPos = Position;

                // jump if the dolphin is under water
                if (Position.Y >= 670) {
                    if (kbState.IsKeyDown(Keys.A)) {
                        if (flip == SpriteEffects.None) {
                            VelocityY += 40f;
                        } else {
                            VelocityY += 40f;
                        }
                    }
                } else {
                    if (flip == SpriteEffects.None) {
                        VelocityY += -10f;
                    } else {
                        VelocityY += -10f;
                    }
                }

                float deltaY = 0;
                float deltaX = 0;
                deltaY = Gravity * (float)gameTime.ElapsedGameTime.TotalSeconds;
                deltaX += VelocityX * (float)gameTime.ElapsedGameTime.TotalSeconds * Acceleration;
                deltaY += -VelocityY * (float)gameTime.ElapsedGameTime.TotalSeconds * Acceleration;
                Speed = new Vector2(Speed.X + deltaX, Speed.Y + deltaY);
                Position += Speed * (float)gameTime.ElapsedGameTime.TotalSeconds;
                Velocity.X = 0;

                if (Position.Y + image.Height / 2 > graphics.PreferredBackBufferHeight)
                    Position.Y = graphics.PreferredBackBufferHeight - image.Height / 2;

                base.Update(gameTime);
            }

            protected override void Draw(GameTime gameTime)
            {
                GraphicsDevice.Clear(Color.CornflowerBlue);
                spriteBatch.Begin();
                spriteBatch.Draw(water, new Rectangle(0, graphics.PreferredBackBufferHeight - 100,
                    graphics.PreferredBackBufferWidth, 100), Color.White);
                spriteBatch.Draw(image, Position, null, Color.White, rotation,
                    new Vector2(image.Width / 2, image.Height / 2), 1, flip, 1);
                spriteBatch.End();
                base.Draw(gameTime);
            }
        }
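    A hedged observation on the edited version: Math.Atan2 takes (y, x), so the arguments above are swapped; feeding it the movement in the conventional order and adding the sprite's rest-pose offset (the art is assumed to start out facing left, i.e. pi radians from the right-facing zero angle) is the usual correction, and gating on actual movement keeps the angle stable while the dolphin is still:

        if (Speed.LengthSquared() > 0.0001f)
        {
            float heading = (float)Math.Atan2(Speed.Y, Speed.X);  // note: (y, x)
            rotation = heading + MathHelper.Pi;                   // assumed left-facing art
        }
        prevPos = Position;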

  • How to better create stacked bar graphs with multiple variables from ggplot2?

    - by deoksu
    I often have to make stacked barplots to compare variables, and because I do all my stats in R, I prefer to do all my graphics in R with ggplot2. I would like to learn how to do two things. First, I would like to be able to add proper percentage tick marks for each variable rather than tick marks by count. Counts would be confusing, which is why I take out the axis labels completely. Second, there must be a simpler way to reorganize my data to make this happen. It seems like the sort of thing I should be able to do natively in ggplot2 with plyr, but the documentation for plyr is not very clear (and I have read both the ggplot2 book and the online plyr documentation). My best graph looks like this; the code to create it follows:

        library(epicalc)

        ### recode the variables to factors ###
        recode(c(int_newcoun, int_newneigh, int_neweur, int_newusa, int_neweco,
                 int_newit, int_newen, int_newsp, int_newhr, int_newlit, int_newent,
                 int_newrel, int_newhth, int_bapo, int_wopo, int_eupo, int_educ),
               c(1,2,3,4,5,6,7,8,9, NA),
               c('Very Interested','Somewhat Interested','Not Very Interested',
                 'Not At All interested',NA,NA,NA,NA,NA,NA))

        ### combine recoded variables into a common vector ###
        Interest1 <- c(int_newcoun, int_newneigh, int_neweur, int_newusa, int_neweco,
                       int_newit, int_newen, int_newsp, int_newhr, int_newlit,
                       int_newent, int_newrel, int_newhth, int_bapo, int_wopo,
                       int_eupo, int_educ)

        ### create a second vector to label the first vector by original variable ###
        a1 <- rep("News about Bangladesh", length(int_newcoun))
        a2 <- rep("Neighboring Countries", length(int_newneigh))
        [...]
        a17 <- rep("Education", length(int_educ))
        Interest2 <- c(a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13,
                       a14, a15, a16, a17)

        ### create a weighting vector of the proper length ###
        Interest.weight <- rep(weight, 17)

        ### make and save a new data frame from the three vectors ###
        Interest.df <- cbind(Interest1, Interest2, Interest.weight)
        Interest.df <- as.data.frame(Interest.df)
        write.csv(Interest.df,
            'C:\\Documents and Settings\\[name]\\Desktop\\Sweave\\InterestBangladesh.csv')

        ### sort the factor levels to display properly ###
        Interest.df$Interest1 <- relevel(Interest$Interest1, ref='Not Very Interested')
        Interest.df$Interest1 <- relevel(Interest$Interest1, ref='Somewhat Interested')
        Interest.df$Interest1 <- relevel(Interest$Interest1, ref='Very Interested')
        Interest.df$Interest2 <- relevel(Interest$Interest2, ref='News about Bangladesh')
        Interest.df$Interest2 <- relevel(Interest$Interest2, ref='Education')
        [...]
        Interest.df$Interest2 <- relevel(Interest$Interest2, ref='European Politics')
        detach(Interest)
        attach(Interest)

        ### finally create the graph in ggplot2 ###
        library(ggplot2)
        p <- ggplot(Interest, aes(Interest2, ..count..))
        p <- p + geom_bar((aes(weight=Interest.weight, fill=Interest1)))
        p <- p + coord_flip()
        p <- p + scale_y_continuous("", breaks=NA)
        p <- p + scale_fill_manual(value = rev(brewer.pal(5, "Purples")))
        p
        update_labels(p, list(fill='', x='', y=''))

    I'd very much appreciate any tips, tricks or hints. Thanks.
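    For the percentage axis, a sketch of what later ggplot2 versions make direct (position = "fill" normalizes each bar to 1, and the percent label formatter lives in the scales package; both are assumptions about versions newer than the one used above):

        library(ggplot2)
        library(scales)         # for percent_format()
        library(RColorBrewer)   # for brewer.pal()

        p <- ggplot(Interest, aes(Interest2, fill = Interest1)) +
          geom_bar(aes(weight = Interest.weight), position = "fill") +
          coord_flip() +
          scale_y_continuous("", labels = percent_format()) +
          scale_fill_manual(values = rev(brewer.pal(5, "Purples")))
        p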

  • Java Hardware Acceleration

    - by Freezerburn
    I have been spending some time looking into the hardware acceleration features of Java, and I am still a bit confused, as none of the sites that I found online directly and clearly answered some of the questions I have. So here are the questions I have for hardware acceleration in Java:

    1) In Eclipse version 3.6.0, with the most recent Java update for Mac OS X (1.6u10 I think), is hardware acceleration enabled by default? I read somewhere that someCanvas.getGraphicsConfiguration().getBufferCapabilities().isPageFlipping() is supposed to give an indication of whether or not hardware acceleration is enabled, and my program reports back true when that is run on my main Canvas instance for drawing to. If my hardware acceleration is not enabled now, or by default, what would I have to do to enable it?

    2) I have seen a couple of articles here and there about the difference between a BufferedImage and a VolatileImage, mainly saying that VolatileImage is the hardware-accelerated image and is stored in VRAM for fast copy-from operations. However, I have also found some instances where BufferedImage is said to be hardware accelerated as well. Is BufferedImage hardware accelerated as well in my environment? What would be the advantage of using a VolatileImage if both types are hardware accelerated? My main assumption for the advantage of a VolatileImage, in the case of both having acceleration, is that a VolatileImage is able to detect when its VRAM has been dumped. But if BufferedImage also supports acceleration now, would it not have the same kind of detection built into it as well, just hidden from the user, in case the memory is dumped?

    3) Is there any advantage to using someGraphicsConfiguration.getCompatibleImage/getCompatibleVolatileImage() as opposed to ImageIO.read()? In a tutorial I have been reading for some general concepts about setting up the rendering window properly (tutorial), it uses the getCompatibleImage method, which I believe returns a BufferedImage, to get their "hardware accelerated" images for fast drawing, which ties into question 2 about whether it is hardware accelerated.

    4) This is less about hardware acceleration, but it is something I have been curious about: do I need to order which graphics get drawn? I know that when using OpenGL via C/C++ it is best to make sure that the same graphic is drawn in all the locations it needs to be drawn at once, to reduce the number of times the current texture needs to be switched. From what I have read, it seems as if Java will take care of this for me and make sure things are drawn in the most optimal fashion, but again, nothing has ever said anything like this clearly.

    5) What AWT/Swing classes support hardware acceleration, and which ones should be used? I am currently using a class that extends JFrame to create a window, and adding a Canvas to it from which I create a BufferStrategy. Is this good practice, or is there some other way I should be implementing this?

    Thank you very much for your time, and I hope I provided clear questions and enough information for you to answer my several questions.
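    On question 2, the detection difference is visible in the canonical VolatileImage render loop, sketched here: the surface must be validated and its contents re-checked every frame, which is exactly the bookkeeping that managed BufferedImage acceleration hides from you:

        GraphicsConfiguration gc = canvas.getGraphicsConfiguration();
        VolatileImage vimg = gc.createCompatibleVolatileImage(width, height);
        do {
            if (vimg.validate(gc) == VolatileImage.IMAGE_INCOMPATIBLE)
                vimg = gc.createCompatibleVolatileImage(width, height);  // recreate
            Graphics2D g2 = vimg.createGraphics();
            // ... draw the frame into g2 ...
            g2.dispose();
            screenGraphics.drawImage(vimg, 0, 0, null);
        } while (vimg.contentsLost());  // the VRAM surface may vanish at any time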

  • How do I destruct data associated with an object after the object no longer exists?

    - by Phineas
    I'm creating a class (say, C) that associates data (say, D) with an object (say, O). When O is destructed, O will notify C that it soon will no longer exist :( ... Later, when C feels it is the right time, C will let go of what belonged to O, namely D. If D can be any type of object, what's the best way for C to be able to execute "delete D;"? And what if D is an array of objects? My solution is to have D derive from a base class that C has knowledge of. When the time comes, C calls delete on a pointer to the base class. I've also considered storing void pointers and calling delete, but I found out that's undefined behavior and doesn't call D's destructor. I considered that templates could be a novel solution, but I couldn't work that idea out. Here's what I have so far for C, minus some details:

        // This class is C in the above description. There may be many instances of C.
        class Context {
        public:
            // D will inherit from this class
            class Data {
            public:
                virtual ~Data() {}
            };

            Context();
            ~Context();

            // Associates an owner (O) with its data (D)
            void add(const void* owner, Data* data);

            // O calls this when he knows it's the end (O's destructor).
            // All instances of C are now aware that O is gone and it's time
            // to get rid of all associated instances of D.
            static void purge(const void* owner);

            // This is called periodically in the application. It checks whether
            // O has called purge, and calls "delete D;"
            void refresh();

            // Side note: sometimes O needs access to D
            Data* get(const void* owner);

        private:
            // Used for mapping owners (O) to data (D)
            std::map<const void*, Data*> _data;
        };

        // Here's an example of O
        class Mesh {
        public:
            ~Mesh() { Context::purge(this); }

            void init(Context& c) const {
                Data* data = new Data;
                // GL initialization here
                c.add(this, new Data);
            }

            void render(Context& c) const {
                Data* data = c.get(this);
            }

        private:
            // And here's an example of D
            struct Data : public Context::Data {
                ~Data() {
                    glDeleteBuffers(1, &vbo);
                    glDeleteTextures(1, &texture);
                }
                GLint vbo;
                GLint texture;
            };
        };

    P.S. If you're familiar with computer graphics and VR: I'm creating a class that separates an object's per-context data (e.g. OpenGL VBO IDs) from its per-application data (e.g. an array of vertices) and frees the per-context data at the appropriate time (when the matching rendering context is current).
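    One alternative to the common-base-class design, sketched with no claim that it fits every constraint here: std::shared_ptr<void> (or boost::shared_ptr in a pre-C++11 codebase) captures the correct deleter at construction, so D needs no base class and no virtual destructor, and arrays can be covered with a custom deleter:

        #include <map>
        #include <memory>

        class Context {
        public:
            template <typename T>
            void add(const void* owner, T* data) {
                // the shared_ptr remembers T's destructor even after erasure to void
                _data[owner] = std::shared_ptr<void>(data);
            }
            // an array would take a custom deleter, e.g.
            //   std::shared_ptr<void>(arr, [](void* p){ delete[] static_cast<T*>(p); });
        private:
            std::map<const void*, std::shared_ptr<void> > _data;
        };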
