Search Results

Search found 2702 results on 109 pages for 'drawing'.

Page 76/109

  • Why doesn't this work?

    - by user146780
    I've tried to solve a memory leak in the GLU combine callback by creating a global variable, but now it does not draw anything:

        GLdouble *gluptr = NULL;

        void CALLBACK combineCallback(GLdouble coords[3], GLdouble *vertex_data[4],
                                      GLfloat weight[4], GLdouble **dataOut)
        {
            GLdouble *vertex;
            if (gluptr == NULL)
            {
                gluptr = (GLdouble *) malloc(6 * sizeof(GLdouble));
            }
            vertex = (GLdouble *) gluptr;
            vertex[0] = coords[0];
            vertex[1] = coords[1];
            vertex[2] = coords[2];
            for (int i = 3; i < 6; i++)
            {
                vertex[i] = weight[0] * vertex_data[0][i] +
                            weight[1] * vertex_data[0][i] +
                            weight[2] * vertex_data[0][i] +
                            weight[3] * vertex_data[0][i];
            }
            *dataOut = vertex;
        }

    Basically, instead of doing a malloc each time the callback runs (hence the memory leak), I'm using a global pointer, but with this change nothing is drawn to the screen. Why would malloc'ing into a pointer created inside the function behave any differently from a global variable? Thanks

    Read the article

  • Gradients and memory

    - by user146780
    I'm creating a drawing application with OpenGL. I've written an algorithm that generates gradient textures. I then map these to my polygons, and this works quite well. What I realized is how much memory this requires: creating 1000 gradients takes about 800 MB, and that's way too much. Is there an alternative to textures, a way to compress them, or another way to map gradients to polygons that doesn't use as much memory? Thanks. My polygons are concave (I use the GLU tessellator), and they are multicolored, point to point.

    Read the article

  • Optimizing an iPhone app for the 3G in landscape with OpenGL, camera, Quartz

    - by Joey
    I have an iPhone app that basically uses the camera, an OpenGL layer, and UIViews (some drawing with Quartz). It runs OK on the 3GS, but on the 3G it is unusable. In particular, when I press a UIButton, it sometimes literally takes 10 seconds to register the press. Shark doesn't do me much good because it crashes when I try to profile even a tiny portion, and I've tried turning off some of the layers to see if they might be obvious contributors to the lag. I've noticed that turning off the camera really helps. I'm wondering if anyone has any familiarity with this and might suggest some likely causes. I had issues with extreme slowdown from running my app in landscape mode and using transforms, so I considered that as a possible cause, but I'm also wondering if hoping for a 3G to run something with all of the above elements is just not realistic, considering how much the camera seems to cost. The fact that the buttons are horribly delayed in their response makes me think there is something fundamental that I might be missing.

    Read the article

  • simple plot algorithm with autoscale

    - by adrin
    I need to implement a simple plotting component in C# (WPF, to be more precise). What I have is a collection of data samples containing a time (X axis) and a value (both of type double). I have a drawing canvas of a fixed size (Width x Height) and a DrawLine method/function that can draw on it. The problem I am facing is: how do I draw the plot so that it is autoscaled? In other words, how do I map the samples I have to actual pixels on my Width x Height canvas?
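
    A minimal sketch of one way to do the mapping, assuming linear autoscaling and a canvas origin at the top-left corner (the Sample type and method names below are illustrative, not from the question):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Sample { public double Time; public double Value; }

        static class PlotScaler
        {
            // Maps each sample to canvas coordinates, stretching the data's
            // bounding box to fill the full Width x Height area.
            public static IEnumerable<(double X, double Y)> ToPixels(
                IReadOnlyList<Sample> samples, double width, double height)
            {
                double tMin = samples.Min(s => s.Time), tMax = samples.Max(s => s.Time);
                double vMin = samples.Min(s => s.Value), vMax = samples.Max(s => s.Value);

                // Guard against a flat series, which would otherwise divide by zero.
                double tSpan = Math.Max(tMax - tMin, 1e-12);
                double vSpan = Math.Max(vMax - vMin, 1e-12);

                foreach (var s in samples)
                {
                    double x = (s.Time - tMin) / tSpan * width;
                    // Flip Y so larger values appear nearer the top of the canvas.
                    double y = height - (s.Value - vMin) / vSpan * height;
                    yield return (x, y);
                }
            }
        }

    Consecutive points from this sequence would then be joined with the existing DrawLine call.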

    Read the article

  • Non laggy movement in Flex or WPF

    - by PaN1C_Showt1Me
    I'm trying to learn something about 2D game programming. For this purpose I've downloaded many samples developed in Flex and in Microsoft WPF. I've noticed that all the animations / moving objects are somewhat choppy. I've seen a Flex example with double buffering which solved the image flickering, but it was still laggy, and the WPF example was too. Just to mention it, all the examples were drawing on a Canvas. I'm just curious: is it possible to get smooth, non-laggy movement in a GUI built with Flash or WPF (e.g. like a real game coded in C++)?
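
    On the WPF side, one technique worth sketching is driving movement from the CompositionTarget.Rendering event, which fires once per rendered frame, and scaling each step by the elapsed time; the window, ellipse and speed below are placeholders, not a definitive answer to the question:

        using System;
        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Media;
        using System.Windows.Shapes;

        public class BallWindow : Window
        {
            private readonly Canvas canvas = new Canvas();
            private readonly Ellipse ball = new Ellipse { Width = 20, Height = 20, Fill = Brushes.Red };
            private DateTime last = DateTime.Now;
            private double x;

            public BallWindow()
            {
                Content = canvas;
                canvas.Children.Add(ball);
                Canvas.SetTop(ball, 50);

                // Fires once per rendered frame, keeping movement in step with
                // the compositor instead of a fixed-interval timer.
                CompositionTarget.Rendering += OnRendering;
            }

            private void OnRendering(object sender, EventArgs e)
            {
                DateTime now = DateTime.Now;
                double dt = (now - last).TotalSeconds;
                last = now;

                // Time-based step (100 px/s) so the speed is frame-rate independent.
                x = (x + 100 * dt) % Math.Max(ActualWidth, 1);
                Canvas.SetLeft(ball, x);
            }

            [STAThread]
            static void Main() => new Application().Run(new BallWindow());
        }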

    Read the article

  • How would you build a "pixel perfect" GUI on Linux?

    - by splicer
    I'd like to build a GUI where every single pixel is under my control (i.e. not using the standard widgets that something like GTK+ provides). Renoise is a good example of what I'm looking to produce. Is getting down to the Xlib or XCB level the best way to go, or is it possible to achieve this with higher-level frameworks like GTK+ (maybe even PyGTK)? Should I be looking at Cairo for the drawing? I'd like to work in Python or Ruby if possible, but C is fine too. Thanks!

    Read the article

  • Scale GraphicsPaths

    - by serhio
    In a form I draw a graph. This graph has some distinct paths that should be drawn differently, say AxesPath, SalesPath, CostsPath, etc. When I resize the form, do I need to scale every one of the component paths? Take an example:

        Imports System.Drawing.Drawing2D

        Public Class Form1
            Dim lineOne As GraphicsPath
            Dim lineTwo As GraphicsPath
            Dim allPaths As GraphicsPath
            Dim initSize As Size

            Public Sub New()
                ' This call is required by the designer.
                InitializeComponent()

                initSize = Me.Size
                lineOne = New GraphicsPath()
                lineTwo = New GraphicsPath()
                lineOne.AddLine(20.0F, 20.0F, Me.initSize.Width - 20.0F, 20.0F)
                lineTwo.AddLine(0.1F, 10.0F, Me.initSize.Width - 20.0F, _
                                Me.initSize.Height - 0.5F)
                allPaths = New GraphicsPath()
                allPaths.AddPath(lineOne, False)
                allPaths.AddPath(lineTwo, False)
                Me.ResizeRedraw = True
            End Sub

            Protected Overrides Sub OnResize(ByVal e As System.EventArgs)
                MyBase.OnResize(e)
                Dim m As New Matrix
                m.Scale(Me.Width / initSize.Width, Me.Height / initSize.Height)
                allPaths.Transform(m)
                initSize = Me.Size
            End Sub

            Protected Overrides Sub OnPaint(ByVal e As PaintEventArgs)
                MyBase.OnPaint(e)

                ' WORKS
                ' e.Graphics.DrawPath(Pens.GreenYellow, allPaths)

                ' DOES NOT WORK!
                e.Graphics.DrawPath(Pens.DarkGoldenrod, lineOne)
                e.Graphics.DrawPath(Pens.DarkMagenta, lineTwo)
            End Sub
        End Class
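
    One detail worth noting, offered as my reading of GDI+ rather than anything stated in the question: GraphicsPath.AddPath copies the point data, so transforming allPaths in OnResize leaves lineOne and lineTwo untouched, which would explain why drawing the individual paths appears unscaled. A short C# sketch of applying the same matrix to every component path (the helper name is illustrative):

        using System.Drawing.Drawing2D;

        static class PathScaling
        {
            // Applies one scale matrix to each path, since a composite path
            // built with AddPath holds its own copy of the points.
            public static void ScaleAll(float sx, float sy, params GraphicsPath[] paths)
            {
                using (var m = new Matrix())
                {
                    m.Scale(sx, sy);
                    foreach (var path in paths)
                        path.Transform(m);
                }
            }
        }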

    Read the article

  • Question regarding CGRectIntersectsRect and rotated rectangular image

    - by user309030
    Hi guys, I've got a long rectangular image which is rotated at various angles. However, the frame of the rectangular image does not rotate along with the image; instead, the rotation causes the frame to become larger to fit the rotated image. So when I use CGRectIntersectsRect, the collision detection is totally off, because the other image colliding with the rectangular image will collide before it even reaches the visible area of the rect image. In case you don't really know what I'm talking about, have a look at the ASCII drawing (O = pixels, | and - = frame).

    Normal rectangular image frame:

        |----------|
        |OOOOOOOOOO|
        |----------|

    After rotation:

        |----------|
        |O         |
        | O        |
        |  O       |
        |   O      |
        |    O     |
        |     O    |
        |      O   |
        |       O  |
        |        O |
        |----------|

    I've read through some of the collision articles, but all of them are talking about collision with a normal straight rectangle, and what I really want is collision with a slanted image, preferably pixel collision detection. TIA for any suggestions made.

    Read the article

  • How do I learn Flash Game Development?

    - by grokker
    I'm currently a PHP programmer, and one of my childhood dreams is to create a game. The problem is that I don't know Flash. I'm not great at drawing, or even artistic. I can program a little JavaScript and would consider myself intermediate with jQuery. Question: How do I get started with Flash game development? What books do I read first? The type of game is a side-scroller about an Indiana Jones type of character, set in the jungle with trees and snakes and a lot of animals.

    Read the article

  • Android: make a scrollable custom view

    - by Martyn
    Hey, I've rolled my own custom view and can draw to the screen all right, but what I'd really like to do is set the measured height of the view to, say, 1000px and let the user scroll on the Y axis, and I'm having problems doing this. Can anyone help? Here's some code:

        public class TestScreen extends Activity {
            CustomDrawableView mCustomDrawableView;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                mCustomDrawableView = new CustomDrawableView(this);
                setContentView(mCustomDrawableView);
            }
        }

    and

        public class CustomDrawableView extends View {
            public CustomDrawableView(Context context) {
                super(context);
                setVerticalScrollBarEnabled(true);
                setMinimumHeight(1000);
            }

            @Override
            protected void onDraw(Canvas canvas) {
                canvas.drawLine(...); // more drawing
            }
        }

    I've tried overriding scrollTo, scrollBy, awakenScrollBars etc. with a call to super, but to no avail. Am I missing something silly, or am I making some fundamental mistake? Thank you in advance, Martyn

    Read the article

  • Why are there so many floats in the Android API?

    - by Brian
    The default floating point type in Java is the double. If you hard code a constant like 2.5 into your program, Java makes it a double automatically. When you do an operation on floats or ints that could potentially benefit from more precision, the type is 'promoted' to a double. But in the Android API, everything seems to be a float from sound volumes to rectangle coordinates. There's a structure called RectF used in most drawing; the F is for float. It's really a pain for programmers who are casting promoted doubles back to (float) pretty often. Don't we all agree that Java code is messy and verbose enough as it is? Usually math coprocessors and accelerators prefer double in Java because it corresponds to one of the internal types. Is there something about Android's Dalvik VM that prefers floats for some reason? Or are all the floats just a result of perversion in API design?

    Read the article

  • How to set up a Bitmap with unmanaged data?

    - by Danvil
    I have int width, height; and IntPtr data; which comes from an unmanaged unsigned char* pointer, and I would like to create a Bitmap to show the image data in a GUI. Please consider that width is not necessarily a multiple of 4, I do not have a "stride", and my image data is aligned as BGRA. The following code works:

        byte[] pixels = new byte[4 * width * height];
        System.Runtime.InteropServices.Marshal.Copy(data, pixels, 0, pixels.Length);
        var bmp = new Bitmap(width, height, System.Drawing.Imaging.PixelFormat.Format32bppArgb);
        for (int i = 0; i < height; i++)
        {
            for (int j = 0; j < width; j++)
            {
                int p = 4 * (width * i + j);
                bmp.SetPixel(j, i, Color.FromArgb(pixels[p + 3], pixels[p + 2], pixels[p + 1], pixels[p + 0]));
            }
        }

    Is there a more direct way to copy the data?
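
    Two more direct routes, sketched under the assumption that the unmanaged buffer is tightly packed BGRA (each row is exactly width * 4 bytes, which matches the in-memory layout of Format32bppArgb on little-endian Windows): wrap the pointer directly, or copy it in bulk through LockBits. The helper names are illustrative.

        using System;
        using System.Drawing;
        using System.Drawing.Imaging;
        using System.Runtime.InteropServices;

        static class UnmanagedBitmaps
        {
            // No copy is made here: 'data' must stay valid for the
            // lifetime of the returned Bitmap.
            public static Bitmap Wrap(IntPtr data, int width, int height)
            {
                return new Bitmap(width, height, width * 4,
                                  PixelFormat.Format32bppArgb, data);
            }

            // Copies the pixels once, row by row, so the Bitmap owns its memory
            // even if GDI+ chooses a stride wider than width * 4.
            public static Bitmap Copy(IntPtr data, int width, int height)
            {
                var bmp = new Bitmap(width, height, PixelFormat.Format32bppArgb);
                var bd = bmp.LockBits(new Rectangle(0, 0, width, height),
                                      ImageLockMode.WriteOnly, bmp.PixelFormat);
                try
                {
                    var row = new byte[width * 4];
                    for (int y = 0; y < height; y++)
                    {
                        Marshal.Copy((IntPtr)(data.ToInt64() + (long)y * width * 4), row, 0, row.Length);
                        Marshal.Copy(row, 0, (IntPtr)(bd.Scan0.ToInt64() + (long)y * bd.Stride), row.Length);
                    }
                }
                finally
                {
                    bmp.UnlockBits(bd);
                }
                return bmp;
            }
        }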

    Read the article

  • Mono Ignores Graphics.InterpolationMode?

    - by Timothy Baldridge
    I have a program that draws some vector graphics using System.Drawing and the Graphics class. The anti-aliasing works reasonably well, but for my needs I wanted oversampling, so I create the starting image n times larger and then scale the final image back down by n. On Windows and .NET the resulting image looks great! However, on Mono 2.4.2.3 (stock Ubuntu 9.10 install), the interpolation is horrible. Here's how I'm scaling my images:

        Bitmap bmp = new Bitmap(Bmp.Width / OverSampling, Bmp.Height / OverSampling);
        Graphics g = Graphics.FromImage(bmp);
        g.InterpolationMode = InterpolationMode.HighQualityBicubic;
        g.DrawImage(Bmp, 0, 0, bmp.Width, bmp.Height);
        g.Dispose();

    From what I can tell, there is no interpolation happening at all. Any ideas?

    Read the article

  • Why are controls within custom panel (C# winforms) disappearing in designer?

    - by Brandon
    I have been able to create a custom C# WinForms control that is basically a panel with a fixed banner (header/footer). I want to base other user controls on this "banner panel". I've gotten past the problem with the designer here. I can successfully add controls to the inner content panel, and everything looks fine while designing. However, when I recompile, the controls I added to the content panel disappear. They are still there (in code) but aren't displayed in the designer. Is there anything that I need to do to set the drawing order of the controls?
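
    For what it's worth, the usual suspect for this symptom (offered as a hedged guess, not something stated in the question) is that the designer only serializes controls hosted in the inner panel if that panel is exposed as a content-serialized property. A minimal C# sketch with illustrative names:

        using System.ComponentModel;
        using System.Windows.Forms;

        public class BannerPanel : UserControl
        {
            private readonly Panel contentPanel = new Panel();

            public BannerPanel()
            {
                contentPanel.Dock = DockStyle.Fill;
                Controls.Add(contentPanel);
                // Banner header/footer setup would go here.
            }

            // Content visibility makes the designer serialize the controls
            // dropped onto the inner panel, so they survive a rebuild.
            [DesignerSerializationVisibility(DesignerSerializationVisibility.Content)]
            public Panel ContentPanel
            {
                get { return contentPanel; }
            }
        }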

    Read the article

  • How to manage changes to reports in .NET?

    - by Craig Johnston
    If I need to offer the ability to create, view and print reports from a .NET app, I see that there are two options:

    1. Use a reporting component such as Microsoft.Reporting.WinForms.ReportViewer or Crystal Reports, which saves a .rpt or similar template file that can be modified as required without having to re-compile the app.
    2. Use System.Drawing.Printing for reporting and store report template data in a database, which keeps things simpler and avoids problems with bulky third-party reporting components.

    If I want to be able to modify a report template (which would include layout and data fields) without having to re-compile the app, would the first option above achieve this? If I wanted to be able to modify the template without re-compiling the app, how could this be achieved with the second option? How could you store data representing the templates in a database such that it could be modified without having to re-compile the app?
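
    As an illustration of the second option, here is a minimal, hypothetical sketch (the ReportField type and its fields are invented, not from the question) of driving System.Drawing.Printing from template records loaded at run time, so that layout changes never require recompiling the app:

        using System.Collections.Generic;
        using System.Drawing;
        using System.Drawing.Printing;

        // Hypothetical template record, e.g. loaded from a database table.
        class ReportField
        {
            public string Text;   // literal text or a resolved data value
            public float X, Y;    // position in hundredths of an inch
            public float Size;    // font size in points
        }

        class TemplateReport
        {
            private readonly List<ReportField> fields;

            public TemplateReport(List<ReportField> fields)
            {
                this.fields = fields;
            }

            public void Print()
            {
                var doc = new PrintDocument();
                doc.PrintPage += OnPrintPage;
                doc.Print();
            }

            private void OnPrintPage(object sender, PrintPageEventArgs e)
            {
                // The layout comes entirely from data, so editing the stored
                // template changes the printed output without a rebuild.
                foreach (var f in fields)
                {
                    using (var font = new Font("Arial", f.Size))
                    {
                        e.Graphics.DrawString(f.Text, font, Brushes.Black, f.X, f.Y);
                    }
                }
            }
        }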

    Read the article

  • Best direction for displaying game graphics in C# App

    - by Mike Webb
    I am making a small game as sort of a test project, nothing major. I just started and am working on the graphics piece, but I'm not sure of the best way to draw the graphics to the screen. It is going to be sort of like the old Zelda, so pretty simple, using bitmaps and such. I started thinking that I could just paint to a PictureBox control using Drawing.Graphics with the Handle from the control, but this seems cumbersome, and I'm not sure I can use double buffering with this method either. I looked at XNA, but for now I wanted to use a simple method to display everything. So, my question: using the current C# Windows controls and framework, what is the best approach to displaying game graphics (e.g. a PictureBox, building a custom control, etc.)?
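
    One possibility, sketched here as an assumption rather than a recommendation from the question, is a small custom control that enables WinForms' built-in double buffering and repaints on a timer; the 16 ms interval and the placeholder drawing are illustrative:

        using System;
        using System.Drawing;
        using System.Windows.Forms;

        class GameSurface : Control
        {
            private readonly Timer timer = new Timer();

            public GameSurface()
            {
                // Paint everything ourselves into an off-screen buffer to avoid flicker.
                SetStyle(ControlStyles.AllPaintingInWmPaint |
                         ControlStyles.UserPaint |
                         ControlStyles.OptimizedDoubleBuffer, true);

                timer.Interval = 16;              // roughly 60 frames per second
                timer.Tick += (s, e) => Invalidate();
                timer.Start();
            }

            protected override void OnPaint(PaintEventArgs e)
            {
                base.OnPaint(e);
                // Tile/sprite drawing would go here, e.g. e.Graphics.DrawImage(tile, x, y).
                e.Graphics.FillRectangle(Brushes.DarkGreen, ClientRectangle);
            }
        }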

    Read the article

  • iPhone post-processing with a single FBO with OpenGL ES 2.0?

    - by Jing
    I am trying to implement post-processing (blur, bloom, etc.) on the iPhone using OpenGL ES 2.0, and I am running into some issues. During my second rendering step, I end up drawing a completely black quad to the screen instead of the scene (it appears that the texture data is missing), so I am wondering if the cause is using a single FBO. Is it incorrect to use a single FBO in the following fashion?

    For the first pass (regular scene rendering), I attach a texture as COLOR_ATTACHMENT_0 and render to the texture:

        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texturebuffer, 0)

    For the second pass (post-processing), I attach the color renderbuffer to COLOR_ATTACHMENT_0:

        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRenderbuffer)

    Then I use the texture from the first pass for rendering as a quad on the screen.

    Read the article

  • What is the initiating actor for the use case shown here

    - by Illep
    I am new to drawing use case diagrams and writing use case descriptions. I have an Actor called User, an abstract use case called Work Type, and another use case called Manager. The use case Manager has a generalization relationship with the Work Type use case. I'm now writing the use case description for the Manager use case. What is the initiating Actor for this use case? Is it the Actor User, or doesn't it have an initiating Actor? Note: I only want to know the initiating Actor for the use case Manager.

    Read the article

  • Why are the default UI controls in my iPhone app blurred?

    - by Tom H
    Why would the default iPhone interface elements, specifically the UISwitch (unmodified) and a UISegmentedControl, appear slightly blurred? I have not changed them or called any private APIs. This blurring occurs both when I run in the simulator and when I load it on my iPod Touch, so I don't think it's a one-off drawing glitch. These elements were created in code (initWithFrame:), not in Interface Builder. Here is a screenshot of the blurring in the simulator: http://drp.ly/14rS6a It looks similar on the actual device. Thanks for your help

    Read the article

  • Cannot load PNG in C# on Mac OSX running Mono

    - by milkplus
    In C#, I'm trying to load a PNG file on Mac OS X using the latest Mono:

        using System.Drawing;

        Bitmap bmp = new Bitmap("test.png");

    I get the following error:

        Either the image format is unknown or you don't have the required libraries
        to decode this format [GDI+ status: UnknownImageFormat]

    It doesn't happen with all PNG files, just this one. Resaving it in Photoshop doesn't fix it unless I switch to 8bpp. Is there something I need to install to support this "special" PNG file? It works fine on Windows.

    Read the article

  • What are good resources for computer graphics basics?

    - by Hanno Fietz
    During Flex programming, I recently ran into several questions (about box models, ways to join lines, and misaligning pixels [on doctype]) regarding computer graphics and layout, where I felt that I lacked some basic background on things like:

    - concepts like the box model
    - approaches to mapping real numbers to a pixel raster (like font anti-aliasing)
    - conventions found across drawing engines, like whether you count y coordinates from the top or the bottom, and why

    I feel that reading some basic Wikipedia articles, books or tutorials on these subjects might help in phrasing my questions more specifically and debugging my code more systematically. I have repeatedly found myself writing tiny test apps in Flex, just to find out how the APIs do very basic stuff. My assumption would be that if I knew the right vocabulary and some general concepts, I could solve these questions much faster.

    Read the article

  • Is there a way to bring an application's GUI to the current desktop?

    - by Davy8
    Background: I started a fair amount of work before realizing that a Windows service cannot start an app with a GUI that displays without potential problems. The proper solution of separating out the GUI of the app to be started is non-trivial, so I'm trying to think of alternative solutions. There is a GUI to manage the service that is a separate executable, but the process to be launched (actually multiple instances of it) has its own GUI that needs to be shown. It doesn't need to be made visible by the service itself, but it needs to be at least able to be made visible by another process with a visible GUI. The Windows user that is running the service and that needs to see the GUI of the launched process is the same and is known at install time. Is there some way to accomplish this, or is it back to the drawing board? Also, the service and the app to launch are both our code and modifiable.

    Read the article

  • model (3ds) stats & snapshot in linux

    - by acidzombie24
    I want to write an app that takes a model filename via the command line, creates a list of stats (poly count, scaling, as much as possible, or maybe just the stats that I would like), loads the model with its textures (and anything else), and draws it from multiple positions, saving the images as PNGs. How would I do this? Are there utilities I can use to extract data from the models? How about drawing the models? My server does not have a desktop or a video card; would the lack of video hardware be a problem?

    Read the article

  • Resizing an image with alpha channel

    - by Hafthor
    I am writing some code to generate images. Essentially I have a source image that is large and includes transparent regions. I use GDI+ to open that image and add additional objects. What I want to do next is save this new image much smaller, so I used the Bitmap constructor that takes a source Image object and a width and height, then saved that. I was expecting the alpha channel to be smoothed like the color channels, but this did not happen: it did result in a couple of semitransparent pixels, but overall it is very blocky. What gives?

        Using img As New Bitmap("source100x100.png")
            ' Drawing stuff
            Using simg As New Bitmap(img, 20, 20)
                simg.Save("target20x20.png")
            End Using
        End Using

    Edit: I think what I want is supersampling, like what Paint.NET does when set to "Best Quality".
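
    Whether it fully matches Paint.NET's "Best Quality" I can't say, but a common higher-quality resize that interpolates the alpha channel along with the color channels is to draw into a fresh 32bppArgb bitmap with bicubic interpolation instead of using the Bitmap(Image, width, height) constructor. A C# sketch (the translation to VB.NET is mechanical, and the wrap-mode setting is an assumption intended to suppress edge halos):

        using System.Drawing;
        using System.Drawing.Drawing2D;
        using System.Drawing.Imaging;

        static class Resizer
        {
            public static Bitmap ResizeWithAlpha(Image source, int width, int height)
            {
                var result = new Bitmap(width, height, PixelFormat.Format32bppArgb);
                using (var g = Graphics.FromImage(result))
                using (var attrs = new ImageAttributes())
                {
                    g.InterpolationMode = InterpolationMode.HighQualityBicubic;
                    g.PixelOffsetMode = PixelOffsetMode.HighQuality;
                    g.CompositingQuality = CompositingQuality.HighQuality;

                    // Reduces the translucent fringe that bicubic sampling can
                    // leave at the borders of the destination rectangle.
                    attrs.SetWrapMode(WrapMode.TileFlipXY);

                    g.DrawImage(source,
                                new Rectangle(0, 0, width, height),
                                0, 0, source.Width, source.Height,
                                GraphicsUnit.Pixel, attrs);
                }
                return result;
            }
        }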

    Read the article

  • iPhone SDK: How to store the time each word was typed?

    - by Harkonian
    My problem is twofold: 1) I'm trying to determine an elegant way to allow the user to type into a UITextView and store the time each word was typed into an array. The time will be a float which starts at 0 when the user begins to type. 2) Conversely, I'd like the user to be able to tap on a word in the UITextView and display the time that word was typed (displaying it in an NSLog() is fine). A consideration that may throw a wrench into a possible approach: what if the user goes back to the top of the text and starts typing, or to the middle of the text? Even a suggested approach without code would be appreciated, because right now I'm drawing a blank.

    Read the article
