Search Results

  • Unexplained crashes with CoreGraphics

    - by Ziggy
    Hello there, I've been on this bug for a week now and I can't solve it. I get crashes in CoreGraphics calls; they happen randomly (sometimes after two minutes, sometimes right at startup), but often at the same places in the code. I have a class that just wraps a CGContext; it has a CGContextRef as a member. This object is re-created each time DrawRect() is called, so the CGContextRef is always up to date. The draw calls come from the main thread only. After looking into this kind of error, it appears to be related to object releasing. Here is an example of such an error:

        #0 0x90d8a7a7 in ___forwarding___
        #1 0x90d8a8b2 in __forwarding_prep_0___
        #2 0x90d0d0b6 in CFRetain
        #3 0x95e54a5d in CGColorRetain
        #4 0x95e5491d in CGGStateCreateCopy
        #5 0x95e5486d in CGGStackSave
        #6 0x95e54846 in CGContextSaveGState
        #7 0x00073500 in CAutoContextState::CAutoContextState at Context.cpp:47

    The CAutoContextState class looks like this:

        class CAutoContextState
        {
        private:
            CGContextRef m_Hdc;

        public:
            CAutoContextState(const CGContextRef& Hdc)
            {
                m_Hdc = Hdc;
                CGContextSaveGState(m_Hdc);
            }

            virtual ~CAutoContextState()
            {
                CGContextRestoreGState(m_Hdc);
            }
        };

    It crashes at CGContextSaveGState(m_Hdc). Here is what I see in GDB:

        * -[Not A Type retain]: message sent to deallocated instance 0x16a148b0

    When I run malloc-history on the address, I get this:

        0: 0x954cf10c in malloc_zone_malloc
        1: 0x90d0d201 in _CFRuntimeCreateInstance
        2: 0x95e3fe88 in CGTypeCreateInstanceWithAllocator
        3: 0x95e44297 in CGTypeCreateInstance
        4: 0x95e58f57 in CGColorCreate
        5: 0x71fdd in _ZN4Flux4Draw8CContext10DrawStringERKNS_7CStringEPKNS0_5CFontEPKNS0_6CBrushERKNS_5CRectENS0_12tagAlignmentESE_NS0_17tagStringTrimmingEfiPKf at /Volumes/Sources Mac/Flux/Sources/DotFlux/Projects/../Draw/CoreGraphic/Context.cpp:1029

    which points me at this line of code:

        f32 components[] = { pSolidBrush->GetColor().GetfRed(),
                             pSolidBrush->GetColor().GetfGreen(),
                             pSolidBrush->GetColor().GetfBlue(),
                             pSolidBrush->GetColor().GetfAlpha() }; // { 1.0, 0.0, 0.0, 0.8 }
        CGColorRef TextColor = CGColorCreate(rgbColorSpace, components);

    and in particular at CGColorCreate(). Any help would be appreciated; I need to finish this task very soon, but I don't know how to resolve it. Thanks.
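
    A plausible reading of the malloc history (an assumption, not a confirmed diagnosis): a CGColorRef produced by CGColorCreate is released once too often, so when CGContextSaveGState later copies the graphics state and retains its fill color, the color object is already deallocated. CoreGraphics follows the CoreFoundation create/retain/release ownership rule; a minimal sketch of balanced ownership around CGColorCreate looks like this (the helper name is illustrative):

        // Every CGColorCreate is balanced by exactly one CGColorRelease;
        // the context retains the color internally for as long as it needs it.
        #include <CoreGraphics/CoreGraphics.h>

        static void DrawWithSolidColor(CGContextRef ctx,
                                       CGFloat r, CGFloat g, CGFloat b, CGFloat a)
        {
            CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
            CGFloat components[] = { r, g, b, a };
            CGColorRef color = CGColorCreate(rgb, components);

            CGContextSaveGState(ctx);
            CGContextSetFillColorWithColor(ctx, color); // the context retains color
            // ... drawing calls here ...
            CGContextRestoreGState(ctx);

            CGColorRelease(color);       // balances CGColorCreate -- no more, no less
            CGColorSpaceRelease(rgb);    // balances the color-space create
        }

    Auditing every CGColorRelease/CFRelease applied to colors created in DrawString against this rule would be a reasonable next step; the "message sent to deallocated instance" output suggests zombie detection is already catching the over-release for you.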

  • Literature and Tutorials for Writing a Ray Tracer

    - by grrussel
    I am interested in finding recommendations for books about writing a ray tracer, simple and clear ray-tracing implementations that can be seen on the web, and online resources on introductory ray tracing. Ideally, the approach would be incremental and tutorial in style, and would explain both the programming techniques and the underlying mathematics, starting from the basics.

  • Finding the intersection of two vector equations.

    - by Matthew Mitchell
    I've been trying to solve this, and the equation I found allows the possibility of division-by-zero errors. Not the best thing:

        v1 = (a,b)    v2 = (c,d)
        d1 = (e,f)    d2 = (h,i)

        l1: v1 + λ·d1
        l2: v2 + µ·d2

    Equation to find the vector intersection of l1 and l2 programmatically, by rearranging for lambda:

        (a,b) + λ(e,f) = (c,d) + µ(h,i)

        a + λe = c + µh
        b + λf = d + µi

        µh = a + λe - c
        µi = b + λf - d

        µ = (a + λe - c)/h
        µ = (b + λf - d)/i

        (a + λe - c)/h = (b + λf - d)/i
        a/h + λe/h - c/h = b/i + λf/i - d/i
        λe/h - λf/i = (b/i - d/i) - (a/h - c/h)
        λ(e/h - f/i) = (b - d)/i - (a - c)/h
        λ = ((b - d)/i - (a - c)/h)/(e/h - f/i)

        Intersection vector = (a + λe, b + λf)

    I'm not sure it would even work in some cases; I haven't tested it. I need to know how to do this for values such as a-i in that example. Thank you.
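
    A common way to sidestep the per-component divisions (a sketch of the standard 2D cross-product formulation, not code from the question): solve the 2x2 linear system directly, so the only divisor is d1 x d2, which is zero exactly when the lines are parallel -- one well-defined degeneracy check instead of separate divisions by h and i.

        #include <cmath>
        #include <optional>

        struct Vec2 { double x, y; };

        // Intersect l1: v1 + t*d1 with l2: v2 + u*d2.
        // t = ((v2 - v1) x d2) / (d1 x d2), where "x" is the scalar 2D cross product.
        std::optional<Vec2> Intersect(Vec2 v1, Vec2 d1, Vec2 v2, Vec2 d2)
        {
            double cross = d1.x * d2.y - d1.y * d2.x;      // d1 x d2
            if (std::fabs(cross) < 1e-12)
                return std::nullopt;                        // parallel or coincident: no unique point

            Vec2 w = { v2.x - v1.x, v2.y - v1.y };          // v2 - v1
            double t = (w.x * d2.y - w.y * d2.x) / cross;
            return Vec2{ v1.x + t * d1.x, v1.y + t * d1.y };
        }

    With v1 = (a,b), d1 = (e,f), v2 = (c,d), d2 = (h,i), this returns the same point as the lambda derivation above whenever h and i are non-zero, and it still works when either of them is zero.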

  • How to draw a drop shadow AND gradient with quartz2d?

    - by Luke
    Hello! I have a custom shape drawn using CoreGraphics, and I want to add both a drop shadow and a gradient to it. I've been trying and searching for a lot of information on how to combine the two, but I can't get it to work; I'm only able to draw one or the other. Is anyone doing this already, or does anyone know how to do it? Thank you.
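
    One approach worth sketching (CoreGraphics C API, callable from C++; an assumption about the setup, not a confirmed fix): draw the gradient inside a transparency layer. The shadow is then applied to the layer's composite, so the gradient-filled shape casts a single shadow instead of each drawing operation shadowing separately.

        #include <CoreGraphics/CoreGraphics.h>

        static void DrawShapeWithShadowAndGradient(CGContextRef ctx, CGPathRef shape)
        {
            CGContextSaveGState(ctx);
            CGContextSetShadow(ctx, CGSizeMake(0, -3), 4.0);    // offset, blur

            // Everything inside the transparency layer is composited first,
            // then shadowed as one unit.
            CGContextBeginTransparencyLayer(ctx, NULL);

            CGContextAddPath(ctx, shape);
            CGContextClip(ctx);                                  // clip gradient to the shape

            CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
            CGFloat colors[8]    = { 1, 0, 0, 1,  0, 0, 1, 1 };  // red -> blue (arbitrary example)
            CGFloat locations[2] = { 0.0, 1.0 };
            CGGradientRef gradient =
                CGGradientCreateWithColorComponents(rgb, colors, locations, 2);
            CGContextDrawLinearGradient(ctx, gradient,
                                        CGPointMake(0, 0), CGPointMake(0, 100), 0);
            CGGradientRelease(gradient);
            CGColorSpaceRelease(rgb);

            CGContextEndTransparencyLayer(ctx);
            CGContextRestoreGState(ctx);
        }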

  • Convert from line points to shape points

    - by VOX
    I have an array of points that make up a line. However, I need to draw the line with a width of n pixels. How can I transform those line points into the points of a polygon (or a shape) that I can draw directly on the canvas? I'm developing two programs at the same time, one in J2ME and one in .NET CF, and J2ME doesn't support drawing lines with a width. Please take a look at the picture. link text
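
    A minimal sketch of one way to do it (C++; no handling of miter joins or end caps): offset each segment by half the width along its unit normal, collect the left side going forward and the right side coming back, and fill the resulting polygon with whatever filled-polygon call each platform offers.

        #include <cmath>
        #include <vector>

        struct Pt { double x, y; };

        std::vector<Pt> ThickLineToPolygon(const std::vector<Pt>& line, double width)
        {
            std::vector<Pt> left, right;
            const double h = width / 2.0;
            for (size_t i = 0; i + 1 < line.size(); ++i) {
                double dx = line[i + 1].x - line[i].x;
                double dy = line[i + 1].y - line[i].y;
                double len = std::sqrt(dx * dx + dy * dy);
                if (len == 0.0) continue;                       // skip zero-length segments
                double nx = -dy / len * h, ny = dx / len * h;   // unit normal * half width
                left.push_back({ line[i].x + nx,     line[i].y + ny });
                left.push_back({ line[i + 1].x + nx, line[i + 1].y + ny });
                right.push_back({ line[i].x - nx,     line[i].y - ny });
                right.push_back({ line[i + 1].x - nx, line[i + 1].y - ny });
            }
            std::vector<Pt> poly(left);                             // left side forward...
            poly.insert(poly.end(), right.rbegin(), right.rend());  // ...right side reversed
            return poly;
        }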

  • Problem when trying to use simple Shaders + VBOs

    - by Mr.Gando
    Hello, I'm trying to convert the following function to a VBO-based function for learning purposes. It displays a static texture on screen. I'm using OpenGL ES 2.0 with shaders on the iPhone (which should be almost the same as regular OpenGL in this case). This is what I have working:

        // Works!
        - (void) drawAtPoint:(CGPoint)point depth:(CGFloat)depth
        {
            GLfloat coordinates[] = {
                0, 1,
                1, 1,
                0, 0,
                1, 0
            };

            GLfloat width  = (GLfloat)_width  * _maxS,
                    height = (GLfloat)_height * _maxT;

            GLfloat vertices[] = {
                -width / 2 + point.x, -height / 2 + point.y,
                 width / 2 + point.x, -height / 2 + point.y,
                -width / 2 + point.x,  height / 2 + point.y,
                 width / 2 + point.x,  height / 2 + point.y,
            };

            glBindTexture(GL_TEXTURE_2D, _name);

            // ATTRIB_POSITION and ATTRIB_TEXCOORD are handles for the shader attributes
            glVertexAttribPointer(ATTRIB_POSITION, 2, GL_FLOAT, GL_FALSE, 0, vertices);
            glEnableVertexAttribArray(ATTRIB_POSITION);
            glVertexAttribPointer(ATTRIB_TEXCOORD, 2, GL_FLOAT, GL_FALSE, 0, coordinates);
            glEnableVertexAttribArray(ATTRIB_TEXCOORD);

            glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        }

    I tried the following to convert it to VBOs; however, I don't see anything displayed on screen with this version:

        // Doesn't display anything
        - (void) drawAtPoint:(CGPoint)point depth:(CGFloat)depth
        {
            GLfloat width  = (GLfloat)_width  * _maxS,
                    height = (GLfloat)_height * _maxT;

            // Texture on-screen position (each vertex is x,y in on-screen coords)
            GLfloat position[] = {
                -width / 2 + point.x, -height / 2 + point.y,
                 width / 2 + point.x, -height / 2 + point.y,
                -width / 2 + point.x,  height / 2 + point.y,
                 width / 2 + point.x,  height / 2 + point.y,
            };

            // Texture coords from 0 to 1
            GLfloat coordinates[] = {
                0, 1,
                1, 1,
                0, 0,
                1, 0
            };

            glBindVertexArrayOES(vao);
            glGenVertexArraysOES(1, &vao);
            glGenBuffers(2, vbo);

            // Buffer 1
            glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
            glBufferData(GL_ARRAY_BUFFER, 8 * sizeof(GLfloat), position, GL_STATIC_DRAW);
            glEnableVertexAttribArray(ATTRIB_POSITION);
            glVertexAttribPointer(ATTRIB_POSITION, 2, GL_FLOAT, GL_FALSE, 0, position);

            // Buffer 2
            glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
            glBufferData(GL_ARRAY_BUFFER, 8 * sizeof(GLfloat), coordinates, GL_DYNAMIC_DRAW);
            glEnableVertexAttribArray(ATTRIB_TEXCOORD);
            glVertexAttribPointer(ATTRIB_TEXCOORD, 2, GL_FLOAT, GL_FALSE, 0, coordinates);

            // Draw
            glBindVertexArrayOES(vao);
            glBindTexture(GL_TEXTURE_2D, _name);
            glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        }

    In both cases I'm using this simple vertex shader:

        // Vertex shader
        attribute vec2 position;  // bound to ATTRIB_POSITION
        attribute vec4 color;
        attribute vec2 texcoord;  // bound to ATTRIB_TEXCOORD

        varying vec2 texcoordVarying;
        uniform mat4 mvp;

        void main()
        {
            // You CAN'T use transpose in glUniformMatrix4fv (ES), so... here it goes.
            gl_Position = mvp * vec4(position.x, position.y, 0.0, 1.0);
            texcoordVarying = texcoord;
        }

    (gl_Position is the product mvp * vec4 because I'm simulating glOrthof in 2D with that mvp.) And this fragment shader:

        // Fragment shader
        uniform sampler2D sampler;
        varying mediump vec2 texcoordVarying;

        void main()
        {
            gl_FragColor = texture2D(sampler, texcoordVarying);
        }

    I really need help with this. Maybe my shaders are wrong for the second case? Thanks in advance.
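
    Two details in the non-working version look suspect (my reading of the code, not a confirmed fix): the VAO is bound before it is generated, and once a VBO is bound, the last argument of glVertexAttribPointer is a byte offset into that buffer, not the client-side array pointer. Generating buffers inside every draw call also leaks one pair of buffers per frame. A sketch of the usual create-once / draw-many split, reusing the question's names:

        // One-time setup:
        glGenVertexArraysOES(1, &vao);
        glBindVertexArrayOES(vao);

        glGenBuffers(2, vbo);

        glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
        glBufferData(GL_ARRAY_BUFFER, 8 * sizeof(GLfloat), position, GL_STATIC_DRAW);
        glEnableVertexAttribArray(ATTRIB_POSITION);
        // With a VBO bound, the last parameter is an offset into the buffer.
        glVertexAttribPointer(ATTRIB_POSITION, 2, GL_FLOAT, GL_FALSE, 0, (const void*)0);

        glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
        glBufferData(GL_ARRAY_BUFFER, 8 * sizeof(GLfloat), coordinates, GL_STATIC_DRAW);
        glEnableVertexAttribArray(ATTRIB_TEXCOORD);
        glVertexAttribPointer(ATTRIB_TEXCOORD, 2, GL_FLOAT, GL_FALSE, 0, (const void*)0);

        glBindVertexArrayOES(0);   // stop recording state into the VAO

        // Every frame:
        glBindVertexArrayOES(vao);
        glBindTexture(GL_TEXTURE_2D, _name);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);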

  • What's a good matrix manipulation library available for C?

    - by banister
    Hi, I am doing a lot of image processing in C and I need a good, reasonably lightweight, and above all FAST matrix manipulation library. I am mostly focusing on affine transformations and matrix inversions, so I do not need anything too sophisticated or bloated. Primarily I would like something that is very fast (using SSE, perhaps?), with a clean API, and (hopefully) prepackaged by many of the Unix package management systems. Note this is for C, not for C++. Thanks :)

  • rotate a plane around a diagonal

    - by compie
    I would like to rotate a plane, not around a single (X or Y) axis, but around the diagonal (45 degrees between X and Y). How do I calculate the Rx and Ry given the Rdiagonal? (Rdiagonal is the amount of rotation I would like to achieve around the diagonal axis).
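
    For reference, a single rotation about the diagonal is generally not the same as some Rx followed by some Ry, because rotations about different axes do not commute; decomposing it into Euler angles is possible but messier than applying the diagonal rotation directly. A sketch (C++) of building the rotation about the unit diagonal axis n = (1, 1, 0)/sqrt(2) with the axis-angle (Rodrigues) formula R = cos(t)I + sin(t)[n]x + (1 - cos(t))nn^T:

        #include <cmath>

        void DiagonalRotationMatrix(double t, double R[3][3])
        {
            const double s = 1.0 / std::sqrt(2.0);
            const double nx = s, ny = s, nz = 0.0;   // unit axis along the X=Y diagonal
            const double c = std::cos(t), sn = std::sin(t);

            R[0][0] = c + nx * nx * (1 - c);
            R[0][1] = nx * ny * (1 - c) - nz * sn;
            R[0][2] = nx * nz * (1 - c) + ny * sn;
            R[1][0] = ny * nx * (1 - c) + nz * sn;
            R[1][1] = c + ny * ny * (1 - c);
            R[1][2] = ny * nz * (1 - c) - nx * sn;
            R[2][0] = nz * nx * (1 - c) - ny * sn;
            R[2][1] = nz * ny * (1 - c) + nx * sn;
            R[2][2] = c + nz * nz * (1 - c);
        }

    Multiplying the plane's points (or its normal) by this matrix rotates them by t radians around the diagonal in one step, with no Rx/Ry pair needed.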

  • Automatic tracking algorithm

    - by nico
    Hi everyone, I'm trying to write a simple tracking routine to track some points in a movie. Essentially I have a series of 100-frame-long movies showing some bright spots on a dark background. I have ~100-150 spots per frame, and they move over the course of the movie. I would like to track them, so I'm looking for an efficient (but hopefully not overkill to implement) routine to do that. A few more details:

        - The spots are a few pixels in size (e.g. 5x5).
        - The movements are not big: a spot generally does not move more than 5-10 pixels from its original position, and the movements are generally smooth.
        - The "shape" of these spots is generally fixed; they don't grow or shrink, BUT they become less bright as the movie progresses.
        - The spots don't move in a particular direction; they can move right, then left, then right again.
        - The user will select a region around each spot, and then this region will be tracked, so I do not need to find the points automatically.

    As the videos are b/w, I thought I should rely on brightness. For instance, I thought I could move the region around and calculate the correlation of the region's area in the previous frame with that at the various candidate positions in the next frame, as sketched below. I understand that this is a quite naïve solution, but do you think it may work? Does anyone know specific algorithms that do this? It doesn't need to be super fast; as long as it is accurate, I'm happy. Thank you, nico
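
    A sketch of that correlation idea (C++; the names and the +/-10 px search radius are illustrative, and bounds checks are omitted): compare the mean-subtracted template against every candidate offset using normalized cross-correlation. Subtracting the means makes the score insensitive to the spots gradually dimming.

        #include <cmath>
        #include <vector>

        struct Offset { int dx, dy; };

        static double Ncc(const std::vector<double>& a, const std::vector<double>& b)
        {
            double ma = 0, mb = 0;
            for (size_t i = 0; i < a.size(); ++i) { ma += a[i]; mb += b[i]; }
            ma /= a.size(); mb /= b.size();
            double num = 0, da = 0, db = 0;
            for (size_t i = 0; i < a.size(); ++i) {
                num += (a[i] - ma) * (b[i] - mb);
                da  += (a[i] - ma) * (a[i] - ma);
                db  += (b[i] - mb) * (b[i] - mb);
            }
            return num / (std::sqrt(da * db) + 1e-12);   // 1 = perfect match
        }

        // Best shift of a tw x th template (taken at x0,y0 in the previous frame)
        // within +/-R pixels in the next frame (grayscale, row-major, width w).
        Offset BestShift(const std::vector<double>& frame, int w,
                         const std::vector<double>& tmpl, int tw, int th,
                         int x0, int y0, int R = 10)
        {
            Offset best{0, 0};
            double bestScore = -2.0;                     // below any possible NCC
            std::vector<double> patch(tw * th);
            for (int dy = -R; dy <= R; ++dy)
                for (int dx = -R; dx <= R; ++dx) {
                    for (int y = 0; y < th; ++y)
                        for (int x = 0; x < tw; ++x)
                            patch[y * tw + x] = frame[(y0 + dy + y) * w + (x0 + dx + x)];
                    double score = Ncc(tmpl, patch);
                    if (score > bestScore) { bestScore = score; best = { dx, dy }; }
                }
            return best;
        }

    With ~150 spots, 5x5-ish templates, and a 21x21 search window, this is only a few million multiply-adds per frame, so the naïve search is usually fast enough before reaching for anything fancier (e.g. Kalman-predicted search centres or FFT-based correlation).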

  • Zoom image to pixel level

    - by zaf
    For an art project, one of the things I'll be doing is zooming in on an image to a particular pixel. I've been rubbing my chin and would love some advice on how to proceed. Here are the input parameters:

        Screen:
            sw - screen width
            sh - screen height
        Image:
            iw - image width
            ih - image height
        Pixel:
            px - x position of the pixel in the image
            py - y position of the pixel in the image
        Zoom:
            zf - zoom factor (0.0 to 1.0)
        Background colour:
            bc - background colour to use when the screen and image aspect ratios are different

    Outputs:

        - The zoomed image (no anti-aliasing)
        - The screen position/dimensions of the pixel we are zooming to

    When zf is 0, the image must fit the screen with the correct aspect ratio. When zf is 1, the selected pixel fits the screen with the correct aspect ratio.

    One idea I had was to use something like POV-Ray and move the camera towards a big image texture, or to use some library (e.g. pygame) to do the zooming. Can anyone think of something more clever, with simple pseudo code? To keep it simpler, you can assume the image and screen have the same aspect ratio; I can live with that. I'll update with more info as required.
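
    A sketch of the zoom math (my formulation of the stated constraints, not from the question): interpolate the scale geometrically between "image fits screen" (zf = 0) and "one pixel fits the screen's smaller dimension" (zf = 1), and slide the point of interest from the image centre toward the chosen pixel. Geometric interpolation of scale keeps the perceived zoom speed roughly constant.

        #include <algorithm>
        #include <cmath>

        struct View { double scale, cx, cy; };   // cx,cy: image point shown at screen centre

        View ZoomView(int sw, int sh, int iw, int ih, int px, int py, double zf)
        {
            double fit   = std::min((double)sw / iw, (double)sh / ih);  // zf == 0
            double pixel = (double)std::min(sw, sh);                    // zf == 1
            double scale = fit * std::pow(pixel / fit, zf);

            double cx = iw / 2.0 + zf * ((px + 0.5) - iw / 2.0);
            double cy = ih / 2.0 + zf * ((py + 0.5) - ih / 2.0);
            return { scale, cx, cy };
        }

        // Rendering: each screen pixel (x, y) samples the image at
        //   (cx + (x - sw/2) / scale, cy + (y - sh/2) / scale)
        // with nearest-neighbour lookup (per the no-anti-aliasing requirement);
        // anything that falls outside the image is painted with bc.

    The screen rectangle of the target pixel falls out of the same mapping: it is scale x scale pixels in size, centred wherever (px + 0.5, py + 0.5) lands on screen.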

  • Tinting iPhone application screen red

    - by btschumy
    I'm trying to place a red tint on all the screens of my iPhone application. I've experimented on a bitmap and found I get the effect I want by compositing a dark red color onto the screen image using Multiply (kCGBlendModeMultiply). So the question is how to do this efficiently, in real time, on the iPhone. One dumb way might be to grab a bitmap of the current screen, composite into that bitmap, and then write the composited bitmap back to the screen. This seems like it would almost certainly be too slow. In addition, I'd need some way of knowing when part of the screen has been redrawn so I can update the tinting. I can almost get the effect I want by putting a red, translucent, fullscreen UIView above everything. That tints everything red without further intervention on my part, but the effect is much "muddier" than the result of the Multiply composite. So, do any wizards out there know of a mechanism I can use to automatically composite the red over the app, in similar fashion to what the translucent red UIView does?

  • iPhone CALayer Stacking Order

    - by Brian
    I'm using CALayers to draw to a UITableViewCell, and I'm trying to figure out how layers are ordered relative to the content of the UITableViewCell. For instance:

        - I add labels to the UITableViewCell in my tableView:cellForRowAtIndexPath: method.
        - In the drawRect: method of the UITableViewCell I draw some content using the current context.
        - Also in drawRect:, I add a few sublayers.

    So what would be the order of these elements? I know I have zPosition on the CALayers, but I'm not sure whether they are always on top of any subviews of the UITableViewCell, and I'm not sure where the content drawn in drawRect: comes in the order. Any help or links to documentation would be great. I have read through the Core Animation Programming Guide and didn't see anywhere this was answered.

  • Is there a tool out there that lets you print a colour chart / palette of colours used on a web page?

    - by undefined
    I want to print a table of the colours used in a web page that my graphic designer has produced. I have .png files at present and use Fireworks to view them. It would be great if there were a tool that lets you print a table with each colour and its hex value, so I can easily reference them when programming. Has anyone come across such a thing? It sounds to me like there should be a Firefox extension or similar.

  • What is faster with PictureBox? Many small redraws or a complete redraw?

    - by kornelijepetak
    I have a PictureBox (WinMobile 6 WinForm) on which I draw some images. There is a background image that does not change. However, objects that are drawn on the PictureBox move during the application, so I need to refresh the background. Since the items that are redrawn fill from 50% to 80% of the surface, the question is which of the two is faster:

        1) Redraw only the parts of the background image that have changed (previous + next location of the moving object).
        2) Redraw the complete background and then draw all the objects in their current positions.

    The reason for asking is that I am not sure how much processor power is needed for a single drawImage operation and what the time-consuming factors are. I am aware that if there is almost complete coverage of the background, it would be pointless to redraw portions of it, because by drawing the portions I will have drawn the complete picture anyway. But since sometimes only half of the image has changed (some objects remained in their old positions), it may (perhaps) be beneficial to redraw only those regions, as in the sketch below. But I need your insight on this... Thanks.
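
    For what it's worth, a sketch of the bookkeeping behind option 1 (dirty rectangles), written here in C++ since the idea is language-neutral: accumulate the union of each moving object's old and new bounds, and fall back to a full redraw once the dirty area stops being small.

        #include <algorithm>
        #include <vector>

        struct Rect { int x, y, w, h; };

        static Rect Union(const Rect& a, const Rect& b)
        {
            int x1 = std::min(a.x, b.x), y1 = std::min(a.y, b.y);
            int x2 = std::max(a.x + a.w, b.x + b.w);
            int y2 = std::max(a.y + a.h, b.y + b.h);
            return { x1, y1, x2 - x1, y2 - y1 };
        }

        // Heuristic: once the dirty regions cover ~half the surface, one big
        // background blit usually beats many small ones -- which matches the
        // 50-80% coverage described in the question.
        static bool PreferFullRedraw(const std::vector<Rect>& dirty, int sw, int sh)
        {
            long area = 0;
            for (const Rect& r : dirty)
                area += (long)r.w * r.h;          // ignores overlap: an upper bound
            return area * 2 > (long)sw * sh;      // threshold at 50% (tune empirically)
        }

    The honest answer to "which is faster" is to measure both on the device, since blit cost on WinMobile varies with the driver; the heuristic above just makes it cheap to switch strategies per frame.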

  • Graphics programming over video playback

    - by ignatius
    I want to make a GUI mock-up program for a video player, so my idea is just to show some menu pictures over real video being played back. I have a working program made with C and SDL that just loads pictures and makes a slideshow, but I don't know how to draw this over video with transparency. Do you have a hint? P.S. I usually program in Python and C, so if there is any solution involving those two, I'd highly appreciate it. Thanks!
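
    With the SDL 1.2-era API that a C + SDL slideshow would likely be using, per-surface alpha and a colour key are the usual tools. A sketch (the magenta key and the alpha value are arbitrary choices), assuming the video frame is available as an RGB SDL_Surface:

        #include <SDL/SDL.h>

        void OverlayMenu(SDL_Surface* frame, SDL_Surface* menu, int x, int y)
        {
            // Pixels of this colour in the menu art become fully transparent.
            SDL_SetColorKey(menu, SDL_SRCCOLORKEY,
                            SDL_MapRGB(menu->format, 255, 0, 255));

            // 0..255 per-surface alpha makes the whole menu translucent.
            SDL_SetAlpha(menu, SDL_SRCALPHA, 192);

            SDL_Rect dst = { (Sint16)x, (Sint16)y, 0, 0 };
            SDL_BlitSurface(menu, NULL, frame, &dst);
            // ... then SDL_Flip() / SDL_UpdateRect() the screen as usual.
        }

    One caveat: if the video is displayed through a YUV SDL_Overlay, RGB blits can't be composited onto it directly; the frames have to be routed through an RGB surface first (or the compositing done by whatever library decodes the video).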

  • what is the idea behind scaling an image using Lanczos?

    - by banister
    Hi, I'm interested in image scaling algorithms and have implemented the bilinear and bicubic methods. However, I have heard of Lanczos and other more sophisticated methods for even higher-quality image scaling, and I am very curious how they work. Could someone here explain the basic idea behind scaling an image using Lanczos (both upscaling and downscaling) and why it results in higher quality? I do have a background in Fourier analysis and have done some signal processing stuff in the past, but not in relation to image processing, so don't be afraid to use terms like "frequency response" and such in your answer :) EDIT: I guess what I really want to know is the concept and theory behind using a convolution filter for interpolation. (Note: I have already read the Wikipedia article on Lanczos resampling, but it didn't have nearly enough detail for me.) Thanks a lot!
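
    In one sentence, the idea is: treat the image as samples of a continuous signal, reconstruct that signal with a windowed sinc (the ideal low-pass reconstruction filter, truncated to finite support), and resample it at the new grid positions. A sketch of the kernel and a separable 1D resample (my code; the parameter a = 3 is the common "lanczos3" choice):

        #include <algorithm>
        #include <cmath>

        static const double kPi = 3.14159265358979323846;

        static double Sinc(double x)
        {
            if (x == 0.0) return 1.0;
            return std::sin(kPi * x) / (kPi * x);
        }

        // Lanczos-a kernel: sinc reconstruction windowed by a wider sinc,
        // non-zero only on (-a, a), so each output sample touches 2a inputs.
        static double Lanczos(double x, int a = 3)
        {
            if (std::fabs(x) >= a) return 0.0;
            return Sinc(x) * Sinc(x / a);
        }

        // Resample src (length n) at fractional position pos. Being separable,
        // the 2D version applies this along rows, then along columns.
        static double Sample1D(const double* src, int n, double pos, int a = 3)
        {
            int base = (int)std::floor(pos);
            double acc = 0.0, wsum = 0.0;
            for (int i = base - a + 1; i <= base + a; ++i) {
                double w = Lanczos(pos - i, a);
                int j = std::min(std::max(i, 0), n - 1);   // clamp at the borders
                acc  += w * src[j];
                wsum += w;
            }
            return acc / wsum;   // normalise: truncated kernel weights don't sum to 1
        }

    The quality difference versus bilinear/bicubic comes from the frequency response: the windowed sinc is much closer to an ideal low-pass filter, so it preserves more genuine detail while adding less aliasing; the price is mild ringing near sharp edges (the kernel's negative lobes). For downscaling, the kernel is stretched by the scale factor so that it still cuts off at the new, lower Nyquist frequency.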

  • How to implement escape sequences in SDL

    - by Sa'me Smd
    I'm trying to use the STL library inside SDL, but it gives me the error "undeclared identifier". Is there any way I can use "\n", or even cout << endl? Could the function SDL_WarpMouse, which places the mouse cursor at a desired location on screen, help me with this? I want to put a tile on the next line in the sequence. I hope you get the question; it's a very vague and messed-up question (sorry for that). EDIT:

        void putMap(SDL_Surface* tile, SDL_Surface* screen)
        {
            for(int y = 0; y < 21; y++)
            {
                for(int x = 0; x < 60; x++)
                {
                    if(maze[x][y] != '#')
                    {
                        apply_surface( x*10 , y*10 , tile, screen);
                    }
                }
                cout<<endl;
            }
        }

        c:\documents and settings\administrator\my documents\visual studio 2008\projects\craptest\craptest\main.cpp(605) : error C2065: 'cout' : undeclared identifier
        c:\documents and settings\administrator\my documents\visual studio 2008\projects\craptest\craptest\main.cpp(605) : error C2065: 'endl' : undeclared identifier

    This is my apply_surface function:

        void apply_surface( int x, int y, SDL_Surface* source, SDL_Surface* destination )
        {
            // Make a temporary rectangle to hold the offsets
            SDL_Rect offset;

            // Give the offsets to the rectangle
            offset.x = x;
            offset.y = y;

            // Blit the surface
            SDL_BlitSurface( source, NULL, destination, &offset );
        }
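
    The compile errors, at least, are independent of SDL: cout and endl are declared in <iostream> inside namespace std. A sketch of the fixed function (reusing maze and apply_surface from the question); note that cout writes to the console, not to the SDL window -- the y*10 offset passed to apply_surface is what actually moves tiles to the next on-screen row, so no escape sequence is needed for that:

        #include <iostream>   // declares std::cout and std::endl

        void putMap(SDL_Surface* tile, SDL_Surface* screen)
        {
            for (int y = 0; y < 21; y++)
            {
                for (int x = 0; x < 60; x++)
                {
                    if (maze[x][y] != '#')
                    {
                        apply_surface(x * 10, y * 10, tile, screen);
                    }
                }
                std::cout << std::endl;   // console newline only; on-screen
                                          // rows come from the y*10 offset
            }
        }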

  • Why do my buttons stay pressed?

    - by Timmmm
    In my activity I respond to an onClick() by replacing the current view with a new one (setContentView()). For some reason, when I do this and then go back to the original view later, the button I originally pressed still looks pressed. If you 'refresh' it somehow (e.g. scroll over it with the trackball), it reverts to the unpressed state. It's as if the button doesn't get the touch-up event. The weird thing is that this only happens on my T-Mobile Pulse (Android 1.5); it doesn't happen in the emulator. I've tried calling invalidate()/postInvalidate() on the view when I show it, but it doesn't have any effect.

  • Most efficient algorithm for mesh-level, optimal occlusion culling?

    - by Fredriku73
    I am new to culling. At first glance, it seems that most occlusion culling algorithms are object-level, not examining single meshes, which would be practical for game rendering. What I am looking for is an algorithm that culls all meshes within a single object that are occluded from a given viewpoint, with high accuracy. It needs to be O(n log n) or better; a naive mesh-by-mesh comparison (O(n^2)) is too slow. I notice that the Blender GUI identifies the occluded meshes for you in real time, even if you work with large objects of 10,000+ meshes. What algorithm is used there, pray tell?

  • Fastest possible way to render 480 x 320 background as iPhone OpenGL ES textures

    - by unknownthreat
    I need to display a 480 x 320 background image in OpenGL ES. The thing is, I experienced a bit of a slowdown on the iPhone when I used a 512 x 512 texture size, so I am looking for the optimal way to render an iPhone-resolution background in OpenGL ES. How should I slice the background in this case to obtain the best possible performance? My main concern is speed. Should I go for 256 x 256 or other texture sizes here?

  • When should a uniform be used in shader programming?

    - by Phineas
    In a vertex shader, I calculate a vector using only uniforms. Therefore, the outcome of this calculation is the same for all instantiations of the vertex shader. Should I just do this calculation on the CPU and upload it as a uniform? What if I have ten such calculations? If I upload a lot of uniforms in this way, does CPU-GPU communication ever get so slow that recomputing such values in the vertex shader is actually faster?
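
    A sketch of the hoist being described (C++ with OpenGL ES 2.0 calls; the names program, normalMatrix, lightDir and derivedLightDir are illustrative assumptions): a value that depends only on uniforms is computed once per draw on the CPU and uploaded, instead of once per vertex on the GPU.

        #include <GLES2/gl2.h>
        #include <cmath>

        // GLSL before: the vertex shader computed
        //     normalMatrix * normalize(lightDir)
        // from two uniforms, identically for every vertex.
        // After: do it once here and upload the result as one uniform.
        void UploadDerivedLightDir(GLuint program,
                                   const float normalMatrix[9],   // column-major 3x3
                                   const float lightDir[3])
        {
            float len = std::sqrt(lightDir[0] * lightDir[0] +
                                  lightDir[1] * lightDir[1] +
                                  lightDir[2] * lightDir[2]);
            float n[3] = { lightDir[0] / len, lightDir[1] / len, lightDir[2] / len };

            float v[3];
            for (int r = 0; r < 3; ++r)   // v = normalMatrix * n
                v[r] = normalMatrix[r]     * n[0]
                     + normalMatrix[r + 3] * n[1]
                     + normalMatrix[r + 6] * n[2];

            GLint loc = glGetUniformLocation(program, "derivedLightDir");
            glUniform3fv(loc, 1, v);
        }

    As for the tradeoff: a handful of glUniform* calls per draw is tiny next to the cost of the draw call itself, so ten such uploads are very unlikely to be slower than redoing the math per vertex; the usual advice is to cache uniform locations and re-upload only the values that actually changed.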
