Search Results

Search found 13860 results on 555 pages for 'core graphics'.

  • iPhone OS: Why is my managedModelObject not complying with Key Value Coding?

    - by nickthedude
    Ok so I'm trying to build this stat tracker for my app and I have built a data model object called statTracker that keeps track of all the stuff I want it to. I can set and retrieve values using the selectors, but if I try to use KVC (i.e. setValue:forKey:) everything goes bad and says my StatTracker class is not KVC compliant: valueForUndefinedKey:]: the entity StatTracker is not key value coding-compliant for the key "timesLauched".' 2010-05-18 15:55:08.573 Here's the code that is triggering it: NSArray *statTrackerArray = [[NSArray alloc] init]; statTrackerArray = [[CoreDataSingleton sharedCoreDataSingleton] getStatTracker]; NSNumber *number1 = [[NSNumber alloc] init]; number1 = [NSNumber numberWithInt:(1 + [[(StatTracker *)[statTrackerArray objectAtIndex:0] valueForKey:@"timesLauched"] intValue])]; [(StatTracker *)[statTrackerArray objectAtIndex:0] setValue:number1 forKey:@"timesLaunched" ]; NSError *error; if (![[[CoreDataSingleton sharedCoreDataSingleton] managedObjectContext] save:&error]) { NSLog(@"error writing to db"); } Not sure if this is enough code for you folks; let me know what you need if you do need more. This would be so sweet if I could use KVC, because I could then abstract all this stat tracking stuff into a single method call with a string argument for the value in question. At least that is what I hope to accomplish here. I'm actually now understanding the power of KVC, but now I'm just trying to figure out how to make it work. Thanks! Nick

  • How to structure a class to support an imported 3D model?

    - by brainydexter
    Hello, I've written a C++ library that reads in a 3D model file (COLLADA DAE). Up till now, I would output a list of triangles and handle each at the rendering stage. But now I need to attach some bounding-sphere information to the imported model, and I need some advice on how to organize this in code. Here are some specs of the 3D file format:
    - the 3D model is represented as a tree consisting of nodes
    - each node can contain other nodes, geometry information, transformations, etc.
    My requirements:
    - a bounding sphere associated with each node, thereby yielding a tree of bounding spheres for the model itself
    - the actual vertex information
    What would be the recommended way to deal with this situation? Thanks
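
    One possible shape for this, sketched under the assumption of a scene-graph-style node that mirrors the DAE tree (all names below are illustrative, not taken from the asker's library): keep the bounding sphere on the node itself and compute it bottom-up, so a node's sphere encloses its own vertices plus the spheres of all its children.

    #include <cmath>
    #include <vector>

    struct Sphere { float cx, cy, cz, r; };

    // Smallest-ish sphere containing two spheres (simple, not optimal).
    static Sphere merge(const Sphere& a, const Sphere& b) {
        float dx = b.cx - a.cx, dy = b.cy - a.cy, dz = b.cz - a.cz;
        float d = std::sqrt(dx*dx + dy*dy + dz*dz);
        if (d + b.r <= a.r) return a;          // b is inside a
        if (d + a.r <= b.r) return b;          // a is inside b
        float r = (d + a.r + b.r) * 0.5f;
        float t = (r - a.r) / d;               // slide the center toward b
        return { a.cx + dx*t, a.cy + dy*t, a.cz + dz*t, r };
    }

    struct ModelNode {
        float localTransform[16];              // 4x4, as stored in the DAE
        std::vector<float> vertices;           // x,y,z triples (may be empty)
        std::vector<ModelNode> children;
        Sphere bounds{0, 0, 0, 0};

        // Bottom-up pass: own geometry first, then enclose the children.
        // NB: a full implementation would transform each child's sphere by
        // the child's localTransform before merging.
        void updateBounds() {
            bool has = !vertices.empty();
            if (has) bounds = sphereFromVertices();
            for (ModelNode& c : children) {
                c.updateBounds();
                bounds = has ? merge(bounds, c.bounds) : c.bounds;
                has = true;
            }
        }

        Sphere sphereFromVertices() const {    // centroid + max distance
            float cx = 0, cy = 0, cz = 0;
            size_t n = vertices.size() / 3;
            for (size_t i = 0; i < n; ++i) {
                cx += vertices[3*i]; cy += vertices[3*i+1]; cz += vertices[3*i+2];
            }
            cx /= n; cy /= n; cz /= n;
            float r2 = 0;
            for (size_t i = 0; i < n; ++i) {
                float dx = vertices[3*i] - cx, dy = vertices[3*i+1] - cy,
                      dz = vertices[3*i+2] - cz;
                float d2 = dx*dx + dy*dy + dz*dz;
                if (d2 > r2) r2 = d2;
            }
            return { cx, cy, cz, std::sqrt(r2) };
        }
    };

    Because the spheres live on the same tree the importer already builds, the bounding-sphere hierarchy falls out for free, and culling can stop descending as soon as a node's sphere fails a test.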

  • Finding the intersection of two vector equations.

    - by Matthew Mitchell
    I've been trying to solve this and I found an equation that gives the possibility of zero-division errors. Not the best thing:
    v1 = (a,b)
    v2 = (c,d)
    d1 = (e,f)
    d2 = (h,i)
    l1: v1 + λd1
    l2: v2 + µd2
    Equation to find the vector intersection of l1 and l2 programmatically by re-arranging for lambda:
    (a,b) + λ(e,f) = (c,d) + µ(h,i)
    a + λe = c + µh
    b + λf = d + µi
    µh = a + λe - c
    µi = b + λf - d
    µ = (a + λe - c)/h
    µ = (b + λf - d)/i
    (a + λe - c)/h = (b + λf - d)/i
    a/h + λe/h - c/h = b/i + λf/i - d/i
    λe/h - λf/i = (b/i - d/i) - (a/h - c/h)
    λ(e/h - f/i) = (b - d)/i - (a - c)/h
    λ = ((b - d)/i - (a - c)/h)/(e/h - f/i)
    Intersection vector = (a + λe, b + λf)
    Not sure if it would even work in some cases. I haven't tested it. I need to know how to do this for values as in that example, a-i. Thank you.
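
    For what it's worth, the division hazards (h or i being zero) disappear if the system is solved with a single common denominator: writing it as λ·d1 − µ·d2 = v2 − v1 and applying Cramer's rule leaves e·i − f·h as the only denominator, and that is zero exactly when the two lines are parallel. A sketch, with the letters as in the question:

    #include <cmath>
    #include <optional>

    struct Vec2 { double x, y; };

    static double cross(Vec2 p, Vec2 q) { return p.x * q.y - p.y * q.x; }

    // l1: v1 + lambda*d1,  l2: v2 + mu*d2.
    // Returns the intersection point, or nothing if the lines are parallel.
    std::optional<Vec2> intersect(Vec2 v1, Vec2 d1, Vec2 v2, Vec2 d2) {
        double denom = cross(d1, d2);          // e*i - f*h
        if (std::fabs(denom) < 1e-12)          // parallel (or coincident)
            return std::nullopt;
        Vec2 w = { v2.x - v1.x, v2.y - v1.y }; // (c - a, d - b)
        double lambda = cross(w, d2) / denom;  // Cramer's rule
        return Vec2{ v1.x + lambda * d1.x, v1.y + lambda * d1.y };
    }

    The single zero test replaces all the per-component divisions, so the only case left unanswered is the one that genuinely has no unique intersection.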

  • Unexplained crashes with Core Graphics

    - by Ziggy
    Hello there, I've been on this bug for a week now and I can't solve it. I get crashes in Core Graphics calls; they happen randomly (sometimes after 2 minutes, sometimes right at the start), but often at the same places in the code. I have a class that just wraps a CGContext; it has a CGContextRef as a member. This object is re-created each time DrawRect() is called, so the CGContextRef is always up to date. The draw calls come from the main thread only. After looking into this kind of error, it appears to be object-release related. Here is an example of an error: #0 0x90d8a7a7 in ___forwarding___ #1 0x90d8a8b2 in __forwarding_prep_0___ #2 0x90d0d0b6 in CFRetain #3 0x95e54a5d in CGColorRetain #4 0x95e5491d in CGGStateCreateCopy #5 0x95e5486d in CGGStackSave #6 0x95e54846 in CGContextSaveGState #7 0x00073500 in CAutoContextState::CAutoContextState at Context.cpp:47 The CAutoContextState class looks like this: class CAutoContextState { private: CGContextRef m_Hdc; public: CAutoContextState(const CGContextRef& Hdc) { m_Hdc = Hdc; CGContextSaveGState(m_Hdc); } virtual ~CAutoContextState() { CGContextRestoreGState(m_Hdc); } }; It crashes at CGContextSaveGState(m_Hdc). Here is what I see in GDB: * -[Not A Type retain]: message sent to deallocated instance 0x16a148b0. When I run malloc-history on the address, I get this: 0: 0x954cf10c in malloc_zone_malloc 1: 0x90d0d201 in _CFRuntimeCreateInstance 2: 0x95e3fe88 in CGTypeCreateInstanceWithAllocator 3: 0x95e44297 in CGTypeCreateInstance 4: 0x95e58f57 in CGColorCreate 5: 0x71fdd in _ZN4Flux4Draw8CContext10DrawStringERKNS_7CStringEPKNS0_5CFontEPKNS0_6CBrushERKNS_5CRectENS0_12tagAlignmentESE_NS0_17tagStringTrimmingEfiPKf at /Volumes/Sources Mac/Flux/Sources/DotFlux/Projects/../Draw/CoreGraphic/Context.cpp:1029 This points me at this line of code: f32 components[] = {pSolidBrush->GetColor().GetfRed(), pSolidBrush->GetColor().GetfGreen(), pSolidBrush->GetColor().GetfBlue(), pSolidBrush->GetColor().GetfAlpha()}; //{ 1.0, 0.0, 0.0, 0.8 }; CGColorRef TextColor = CGColorCreate(rgbColorSpace, components); and at this function: CGColorCreate(). Any help would be appreciated; I need to finish this task very soon, but I don't know how to resolve it :( Thanks.
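
    Not a guaranteed diagnosis, but the two traces together read like a classic Core Foundation over-release: CGColorCreate returns a +1 reference, and if that color is released more often than it is retained, the next CGContextSaveGState copies a graphics state that still points at the dead color, which produces exactly this "[Not A Type retain]" message. For comparison, a sketch of the balanced ownership pattern around a line like the one the malloc history points at (function and variable names are placeholders):

    #include <CoreGraphics/CoreGraphics.h>

    // Whatever is Created is Released exactly once, and only after the
    // last user (here: the context's fill state) holds its own reference.
    static void drawWithSolidColor(CGContextRef ctx) {
        CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
        const CGFloat components[] = { 1.0, 0.0, 0.0, 0.8 };
        CGColorRef color = CGColorCreate(rgb, components);

        CGContextSaveGState(ctx);
        CGContextSetFillColorWithColor(ctx, color); // the context retains it
        CGColorRelease(color);                      // balances CGColorCreate
        /* ... string drawing would go here ... */
        CGContextRestoreGState(ctx);

        CGColorSpaceRelease(rgb);                   // balances the Create above
    }

    Auditing DrawString for a release of TextColor (or of rgbColorSpace) that can run twice on some code path would be the first thing to try.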

  • Literature and Tutorials for Writing a Ray Tracer

    - by grrussel
    I am interested in finding recommendations on books on writing a raytracer, simple and clear implementations of ray tracing that can be seen on the web, and online resources on introductory raytracing. Ideally, the approach would be incremental and tutorial in style, and explain both the programming techniques and underlying mathematics, starting from the basics.

  • Open Source sound engine

    - by Steph Thirion
    When I started using SoundEngine (from CrashLanding and TouchFighter), I had read about a few people recommending not to use it, for it was, according to them, not stable enough. Still it was the only solution I knew of to play sounds with pitch and position control without learning C++ and OpenAL, so I ignored the warnings and went on with it. But now I'm starting to worry. The 2.2 SDK introduced AVFoundation. Using both SoundEngine from CrashLanding (for sounds) and AVAudioPlayer (for music), I found out SoundEngine behaves strangely when the only existing AVAudioPlayer is released (all sounds stop until a new AVAudioPlayer is initiated). Around the same time as the 2.2 SDK came out, the CrashLanding sample code was mysteriously removed from the ADC site. I'm worried there are more bad surprises to come. My question is, is anyone aware of an Open Source alternative to SoundEngine? Maybe even a C++ library that uses OpenAL?

  • So does Apple recommend not using predicates and sort descriptors in an NSFetchRequest?

    - by dontWatchMyProfile
    From the docs: To summarize, though, if you execute a fetch directly, you should typically not add Objective-C-based predicates or sort descriptors to the fetch request. Instead you should apply these to the results of the fetch. If you use an array controller, you may need to subclass NSArrayController so you can have it not pass the sort descriptors to the persistent store and instead do the sorting after your data has been fetched. I don't get it. What's wrong with using them on fetch requests? Isn't it stupid to get back a whole big bunch of managed objects just to pick out 1% of them in memory, leaving 99% garbage floating around? Isn't it much better to only fetch from the persistent store what you really need, in the order you need it? Probably I got that wrong...

  • How to draw a drop shadow AND gradient with Quartz 2D?

    - by Luke
    Hello! I have a custom shape drawn using Core Graphics, and I want to add both a drop shadow and a gradient to it. I've been trying, and searching for a lot of information on how to combine the two, but I can't get it to work; I'm only able to draw one or the other. Is anyone doing this already, or does anyone know how to do it? Thank you.
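
    One pattern that is known to work in Quartz 2D: set the shadow first, open a transparency layer so the whole shape is composited (and therefore shadowed) as one unit, clip to the shape's path, and draw the gradient through the clip. A sketch with placeholder geometry and colors:

    #include <CoreGraphics/CoreGraphics.h>

    static void drawShape(CGContextRef ctx, CGPathRef shapePath, CGRect bounds) {
        CGContextSaveGState(ctx);

        // 1. Shadow state must be set before the transparency layer opens.
        CGContextSetShadow(ctx, CGSizeMake(0, 3), 4.0);

        // 2. Everything inside the layer gets one collective shadow.
        CGContextBeginTransparencyLayer(ctx, NULL);

        // 3. Clip to the shape, then paint the gradient through the clip.
        CGContextAddPath(ctx, shapePath);
        CGContextClip(ctx);

        CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
        const CGFloat colors[8]    = { 1.0, 0.3, 0.3, 1.0,   // start color
                                       0.5, 0.0, 0.0, 1.0 }; // end color
        const CGFloat locations[2] = { 0.0, 1.0 };
        CGGradientRef gradient =
            CGGradientCreateWithColorComponents(rgb, colors, locations, 2);
        CGContextDrawLinearGradient(ctx, gradient,
            CGPointMake(CGRectGetMidX(bounds), CGRectGetMinY(bounds)),
            CGPointMake(CGRectGetMidX(bounds), CGRectGetMaxY(bounds)), 0);
        CGGradientRelease(gradient);
        CGColorSpaceRelease(rgb);

        CGContextEndTransparencyLayer(ctx);
        CGContextRestoreGState(ctx);
    }

    Drawing the gradient-clipped shape without the transparency layer is the usual reason only one of the two effects shows up: the clip swallows the shadow.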

  • A way to enable a LaunchDaemon to output sound?

    - by Varun Mehta
    I have a small Foundation application that checks a website and plays a sound if it sees a certain value. This application successfully plays a sound when I run it as my user from the Terminal. I've configured this app to run as a LaunchDaemon, with the following plist: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <dict> <key>Label</key> <string>org.myorg.appidentifier</string> <key>ProgramArguments</key> <array> <string>/Users/varunm/path/to/cli/application</string> </array> <key>KeepAlive</key> <true/> <key>RunAtLoad</key> <true/> </dict> </plist> When I have this service launched I can see it successfully read in and log values from the website, but it never generates any sound. The sound files are located in the same directory as the binary, and I use the following code: NSSound *soundToPlay = [[NSSound alloc] initWithContentsOfFile:@"sound.wav" byReference:NO]; [soundToPlay setDelegate:stopper]; [soundToPlay play]; while (g_keepRunning) { [[NSRunLoop currentRunLoop] runUntilDate:[NSDate dateWithTimeIntervalSinceNow:1.0]]; } [soundToPlay setCurrentTime:0.0]; Is there any way to get my LaunchDaemon application to play sound? This machine gets run by different people, and sometimes has no one logged in, which is why I have to configure it as a LaunchDaemon.

  • Automatic tracking algorithm

    - by nico
    Hi everyone, I'm trying to write a simple tracking routine to track some points on a movie. Essentially I have a series of 100-frame-long movies showing some bright spots on a dark background. I have ~100-150 spots per frame, and they move over the course of the movie. I would like to track them, so I'm looking for an efficient (but possibly not overkill to implement) routine to do that. A few more details:
    - the spots are a few (e.g. 5x5) pixels in size
    - the movements are not big. A spot generally does not move more than 5-10 pixels from its original position. The movements are generally smooth.
    - the "shape" of these spots is generally fixed; they don't grow or shrink, BUT they become less bright as the movie progresses
    - the spots don't move in a particular direction. They can move right, then left, then right again.
    - the user will select a region around each spot and then this region will be tracked, so I do not need to automatically find the points
    As the videos are b/w, I thought I should rely on brightness. For instance, I thought I could move the region around and calculate the correlation of the region's area in the previous frame with that at the various positions in the next frame. I understand that this is a quite naïve solution, but do you think it may work? Does anyone know specific algorithms that do this? It doesn't need to be superfast; as long as it is accurate I'm happy. Thank you nico
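
    The approach described is a reasonable baseline, not naïve: window-search template matching is standard for data like this. One refinement worth making, since the spots dim over time, is to score candidates with normalized cross-correlation (NCC) instead of raw correlation, which cancels overall brightness changes between frames. A sketch, assuming 8-bit grayscale frames stored row-major (bounds checks omitted for brevity):

    #include <cmath>
    #include <cstdint>

    // NCC of a tw x th template against the frame at offset (ox, oy).
    // Insensitive to uniform brightness scaling -- good for fading spots.
    static double ncc(const uint8_t* frame, int stride,
                      const uint8_t* tmpl, int tw, int th, int ox, int oy) {
        double sf = 0, st = 0, sff = 0, stt = 0, sft = 0;
        for (int y = 0; y < th; ++y)
            for (int x = 0; x < tw; ++x) {
                double f = frame[(oy + y) * stride + (ox + x)];
                double t = tmpl[y * tw + x];
                sf += f; st += t; sff += f * f; stt += t * t; sft += f * t;
            }
        int n = tw * th;
        double num = sft - sf * st / n;
        double den = std::sqrt((sff - sf * sf / n) * (stt - st * st / n));
        return den > 0 ? num / den : 0;
    }

    // Exhaustive search in a +/-radius window around the previous position.
    static void track(const uint8_t* frame, int stride,
                      const uint8_t* tmpl, int tw, int th,
                      int px, int py, int radius, int* outX, int* outY) {
        double best = -2;                      // NCC lies in [-1, 1]
        for (int dy = -radius; dy <= radius; ++dy)
            for (int dx = -radius; dx <= radius; ++dx) {
                double s = ncc(frame, stride, tmpl, tw, th, px + dx, py + dy);
                if (s > best) { best = s; *outX = px + dx; *outY = py + dy; }
            }
    }

    At ~150 spots, a ±5-pixel search window and a 5x5-ish template, this is only a few million operations per frame, so exhaustive search is fine; refreshing the template (or a running average of it) each frame helps it follow the slow shape drift.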

  • Problem when trying to use simple Shaders + VBOs

    - by Mr.Gando
    Hello, I'm trying to convert the following function to a VBO-based function for learning purposes; it displays a static texture on screen. I'm using OpenGL ES 2.0 with shaders on the iPhone (should be almost the same as regular OpenGL in this case). This is what I got working: //Works! - (void) drawAtPoint:(CGPoint)point depth:(CGFloat)depth { GLfloat coordinates[] = { 0, 1, 1, 1, 0, 0, 1, 0 }; GLfloat width = (GLfloat)_width * _maxS, height = (GLfloat)_height * _maxT; GLfloat vertices[] = { -width / 2 + point.x, -height / 2 + point.y, width / 2 + point.x, -height / 2 + point.y, -width / 2 + point.x, height / 2 + point.y, width / 2 + point.x, height / 2 + point.y, }; glBindTexture(GL_TEXTURE_2D, _name); //Attrib position and attrib_tex coord are handles for the shader attributes glVertexAttribPointer(ATTRIB_POSITION, 2, GL_FLOAT, GL_FALSE, 0, vertices); glEnableVertexAttribArray(ATTRIB_POSITION); glVertexAttribPointer(ATTRIB_TEXCOORD, 2, GL_FLOAT, GL_FALSE, 0, coordinates); glEnableVertexAttribArray(ATTRIB_TEXCOORD); glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); } I tried this to convert it to a VBO; however, I don't see anything displayed on screen with this version: //Doesn't display anything - (void) drawAtPoint:(CGPoint)point depth:(CGFloat)depth { GLfloat width = (GLfloat)_width * _maxS, height = (GLfloat)_height * _maxT; GLfloat position[] = { -width / 2 + point.x, -height / 2 + point.y, width / 2 + point.x, -height / 2 + point.y, -width / 2 + point.x, height / 2 + point.y, width / 2 + point.x, height / 2 + point.y, }; //Texture on-screen position ( each vertex is x,y in on-screen coords ) GLfloat coordinates[] = { 0, 1, 1, 1, 0, 0, 1, 0 }; // Texture coords from 0 to 1 glBindVertexArrayOES(vao); glGenVertexArraysOES(1, &vao); glGenBuffers(2, vbo); //Buffer 1 glBindBuffer(GL_ARRAY_BUFFER, vbo[0]); glBufferData(GL_ARRAY_BUFFER, 8 * sizeof(GLfloat), position, GL_STATIC_DRAW); glEnableVertexAttribArray(ATTRIB_POSITION); glVertexAttribPointer(ATTRIB_POSITION, 2, GL_FLOAT, GL_FALSE, 0, position); //Buffer 2 glBindBuffer(GL_ARRAY_BUFFER, vbo[1]); glBufferData(GL_ARRAY_BUFFER, 8 * sizeof(GLfloat), coordinates, GL_DYNAMIC_DRAW); glEnableVertexAttribArray(ATTRIB_TEXCOORD); glVertexAttribPointer(ATTRIB_TEXCOORD, 2, GL_FLOAT, GL_FALSE, 0, coordinates); //Draw glBindVertexArrayOES(vao); glBindTexture(GL_TEXTURE_2D, _name); glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); } In both cases I'm using this simple vertex shader: //Vertex Shader attribute vec2 position;//Bound to ATTRIB_POSITION attribute vec4 color; attribute vec2 texcoord;//Bound to ATTRIB_TEXCOORD varying vec2 texcoordVarying; uniform mat4 mvp; void main() { //You CAN'T use transpose before in glUniformMatrix4fv so... here it goes. gl_Position = mvp * vec4(position.x, position.y, 0.0, 1.0); texcoordVarying = texcoord; } The gl_Position is equal to the product mvp * vec4 because I'm simulating glOrthof in 2D with that mvp. And this fragment shader: //Fragment Shader uniform sampler2D sampler; varying mediump vec2 texcoordVarying; void main() { gl_FragColor = texture2D(sampler, texcoordVarying); } I really need help with this; maybe my shaders are wrong for the second case? Thanks in advance.
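
    Without claiming it is the whole story, two things stand out in the non-working version: glBindVertexArrayOES(vao) runs before glGenVertexArraysOES has created the object, and the glVertexAttribPointer calls still pass the client-memory arrays, whereas with a VBO bound that last argument is a byte offset into the buffer. A sketch of the usual create-once setup order under OES_vertex_array_object, for comparison:

    #include <OpenGLES/ES2/gl.h>
    #include <OpenGLES/ES2/glext.h>

    // Build the VAO/VBO pair once (not per draw call); returns the VAO.
    static GLuint setupQuad(const GLfloat* positions, const GLfloat* texcoords,
                            GLuint attribPosition, GLuint attribTexcoord) {
        GLuint vao, vbo[2];
        glGenVertexArraysOES(1, &vao);     // generate BEFORE binding
        glBindVertexArrayOES(vao);

        glGenBuffers(2, vbo);

        glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
        glBufferData(GL_ARRAY_BUFFER, 8 * sizeof(GLfloat), positions, GL_STATIC_DRAW);
        glEnableVertexAttribArray(attribPosition);
        // Last argument: byte offset into the bound VBO, not a pointer.
        glVertexAttribPointer(attribPosition, 2, GL_FLOAT, GL_FALSE, 0, (const GLvoid*)0);

        glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
        glBufferData(GL_ARRAY_BUFFER, 8 * sizeof(GLfloat), texcoords, GL_STATIC_DRAW);
        glEnableVertexAttribArray(attribTexcoord);
        glVertexAttribPointer(attribTexcoord, 2, GL_FLOAT, GL_FALSE, 0, (const GLvoid*)0);

        glBindVertexArrayOES(0);           // stop recording state
        return vao;
    }

    // Per frame: bind the VAO, bind the texture, draw.
    //   glBindVertexArrayOES(vao);
    //   glBindTexture(GL_TEXTURE_2D, textureName);
    //   glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);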

  • Rotate a plane around a diagonal

    - by compie
    I would like to rotate a plane, not around a single (X or Y) axis, but around the diagonal (45 degrees between X and Y). How do I calculate the Rx and Ry given the Rdiagonal? (Rdiagonal is the amount of rotation I would like to achieve around the diagonal axis).
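
    For reference, a rotation about the 45° diagonal cannot in general be written as one X rotation times one Y rotation; only for small angles does Rx(θ/√2)·Ry(θ/√2) approximate it. Two exact formulations, sketched in standard notation for the unit axis u = (1, 1, 0)/√2:

    % 1) conjugate an X-axis rotation by a 45-degree Z rotation
    %    (rotate the diagonal onto the X axis, rotate, rotate back):
    R_{\mathrm{diag}}(\theta) = R_z(45^\circ)\, R_x(\theta)\, R_z(-45^\circ)

    % 2) Rodrigues' formula, with K the cross-product matrix of u:
    R = I + \sin\theta\, K + (1 - \cos\theta)\, K^2,
    \qquad
    K = \frac{1}{\sqrt{2}}
        \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & -1 \\ -1 & 1 & 0 \end{pmatrix}

    In practice the conjugation form is the easy one to feed to an API: apply Rz(−45°), then Rx(Rdiagonal), then Rz(45°), rather than searching for a single (Rx, Ry) pair.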

  • Get a CALayer's view

    - by eaigner
    Hi, is it somehow possible to get the nearest "container" NSView of a CALayer? My problem is that I'm managing tracking areas in my "container" NSView, and those need to be updated if a layer is moved/added etc., and I would like to automate that somehow instead of calling my -updateTrackingAreas function manually. Regards, Erik

  • Tinting iPhone application screen red

    - by btschumy
    I'm trying to place a red tint on all the screens of my iPhone application. I've experimented on a bitmap and found I get the effect I want by compositing a dark red color onto the screen image using Multiply (kCGBlendModeMultiply). So the question is how to do this efficiently in real time on the iPhone. One dumb way might be to grab a bitmap of the current screen, composite into the bitmap, and then write the composited bitmap back to the screen. This seems like it would almost certainly be too slow. In addition, I need some way of knowing when part of the screen has been redrawn so I can update the tinting. I can almost get the effect I want by putting a red, translucent, fullscreen UIView above everything. That tints everything red without further intervention on my part, but the effect is much "muddier" than the results from the composite. So do any wizards out there know of some mechanism I can use to automatically composite the red over the app in similar fashion to what the translucent red UIView does?
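
    For the offscreen route, the multiply itself is a single fill once the screen contents are in a bitmap context; the genuinely hard parts are the capture and knowing when to re-capture, as noted. A sketch of just the composite step (the 0.7/0.1/0.1 red is an arbitrary stand-in):

    #include <CoreGraphics/CoreGraphics.h>

    // Multiplies a dark red over whatever the bitmap context already holds.
    static void tintRed(CGContextRef ctx, size_t width, size_t height) {
        CGContextSaveGState(ctx);
        CGContextSetBlendMode(ctx, kCGBlendModeMultiply);
        CGContextSetRGBFillColor(ctx, 0.7, 0.1, 0.1, 1.0);
        CGContextFillRect(ctx, CGRectMake(0, 0, width, height));
        CGContextRestoreGState(ctx);
    }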

  • Unit Testing Model Classes that inherit from NSManagedObject

    - by Matt Baker
    So...I'm trying to get unit tests set up in my iPhone App but I'm having some issues. I'm trying to test my model classes but they inherit directly from NSManagedObject. I'm sure this is a problem but I don't know how to get around it. Everything is building and running as expected but I get this error when calling any method on the class I'm testing: Unknown.m:0:0 unrecognized selector sent to instance 0xc2b120 If I follow this structure (http://chanson.livejournal.com/115621.html) to create my object in my tests I end up with another error entirely but it still doesn't help me. Basically my question is this: how can I test a class that inherits from NSManagedObject?

  • UIImage Rotation

    - by Kamchatka
    I display an image in a UIImageView (within a UIScrollView) which is also stored in Core Data. In the interface, I want the user to be able to rotate the picture by 90 degrees, and I also want the rotation to be saved in Core Data. What should I rotate in the display: the scroll view, the UIImageView, or the image itself? (If possible I would like the rotation to be animated.) But then I also have to save the picture to Core Data. I thought about changing the image orientation, but this property is read-only.
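
    Since imageOrientation is read-only, one route for the "save it rotated" half is to re-render the bitmap into a context with swapped dimensions and store the result back to Core Data; the visible rotation can then be a separate animated transform on the image view. A sketch of the redraw using only C-level Core Graphics calls (flip the sign of the angle for the other direction):

    #include <CoreGraphics/CoreGraphics.h>
    #include <math.h>

    // Returns a new CGImage rotated 90 degrees (caller releases it).
    static CGImageRef createImageRotated90(CGImageRef src) {
        size_t w = CGImageGetWidth(src);
        size_t h = CGImageGetHeight(src);
        CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(NULL, h, w, 8, 0, rgb,
                                                 kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(rgb);
        if (ctx == NULL) return NULL;

        // Map the w x h image into the h x w context: rotate about the
        // origin, then translate back into view.
        CGContextTranslateCTM(ctx, 0, w);
        CGContextRotateCTM(ctx, -M_PI_2);
        CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), src);

        CGImageRef result = CGBitmapContextCreateImage(ctx);
        CGContextRelease(ctx);
        return result;
    }

    Wrapping the result in a UIImage and re-encoding it as PNG/JPEG data gives the bytes to store; animating the view's transform while this happens keeps the interface responsive.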

  • Animate the enabling of a UIBarButtonItem?

    - by Michael Brewer
    Is there a way to animate enabling or disabling a button? I've tried the following with no success. I'm guessing at this point that the enabled property cannot be animated like opacity can – but I hope I'm wrong. [UIView beginAnimations:nil context:nil]; [UIView setAnimationDuration:1.0f]; theButton.enabled = YES; [UIView setAnimationDelegate:self]; [UIView commitAnimations]; I can't believe there isn't a setEnabled:(BOOL)enabled animated:(BOOL)animated method.

  • Convert from line points to shape points

    - by VOX
    I have an array of points that make up a line. However, I need to draw the line with a width of n pixels. How can I transform those points for lines into points for a polygon (or a shape) so I can draw it directly on the canvas? I'm developing two programs at the same time, one in J2ME and another in .NET CF; J2ME doesn't support drawing lines with a width.
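
    The usual transformation is to offset each segment by half the line width along its unit normal, giving two vertices per endpoint; each segment then becomes a four-point polygon that fillPolygon (J2ME) or FillPolygon (.NET CF) can draw. A sketch for a single segment (joining consecutive segments cleanly, e.g. with miters, is the refinement on top):

    #include <cmath>

    // Expands segment (x1,y1)-(x2,y2) into a quad of width w.
    // quad[] receives x,y pairs in draw order: p1+n, p2+n, p2-n, p1-n.
    static bool segmentToQuad(float x1, float y1, float x2, float y2,
                              float w, float quad[8]) {
        float dx = x2 - x1, dy = y2 - y1;
        float len = std::sqrt(dx * dx + dy * dy);
        if (len == 0) return false;              // degenerate segment
        // Unit normal scaled to half the desired width.
        float nx = -dy / len * (w * 0.5f);
        float ny =  dx / len * (w * 0.5f);
        quad[0] = x1 + nx; quad[1] = y1 + ny;
        quad[2] = x2 + nx; quad[3] = y2 + ny;
        quad[4] = x2 - nx; quad[5] = y2 - ny;
        quad[6] = x1 - nx; quad[7] = y1 - ny;
        return true;
    }

    Running this over every consecutive pair of points and filling each quad already looks fine for mostly-smooth polylines; gaps at sharp corners can be patched with a small filled circle (or a miter join) at each interior point.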

  • What's a good matrix manipulation library available for C?

    - by banister
    Hi, I am doing a lot of image processing in C and I need a good, reasonably lightweight, and above all FAST matrix manipulation library. I am mostly focusing on affine transformations and matrix inversions, so I do not need anything too sophisticated or bloated. Primarily I would like something that is very fast (using SSE perhaps?), with a clean API, and (hopefully) prepackaged by many of the Unix package management systems. Note this is for C, not for C++. Thanks :)

  • How to configure the frame size using AudioUnit.framework on iOS

    - by Piperoman
    I have an audio app in which I need to capture mic samples to encode into MP3 with FFmpeg. First I configure the audio: /** * We need to specifie our format on which we want to work. * We use Linear PCM cause its uncompressed and we work on raw data. * for more informations check. * * We want 16 bits, 2 bytes (short bytes) per packet/frames at 8khz */ AudioStreamBasicDescription audioFormat; audioFormat.mSampleRate = SAMPLE_RATE; audioFormat.mFormatID = kAudioFormatLinearPCM; audioFormat.mFormatFlags = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger; audioFormat.mFramesPerPacket = 1; audioFormat.mChannelsPerFrame = 1; audioFormat.mBitsPerChannel = audioFormat.mChannelsPerFrame*sizeof(SInt16)*8; audioFormat.mBytesPerPacket = audioFormat.mChannelsPerFrame*sizeof(SInt16); audioFormat.mBytesPerFrame = audioFormat.mChannelsPerFrame*sizeof(SInt16); The recording callback is: static OSStatus recordingCallback(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData) { NSLog(@"Log record: %lu", inBusNumber); NSLog(@"Log record: %lu", inNumberFrames); NSLog(@"Log record: %lu", (UInt32)inTimeStamp); // the data gets rendered here AudioBuffer buffer; // a variable where we check the status OSStatus status; /** This is the reference to the object who owns the callback. */ AudioProcessor *audioProcessor = (__bridge AudioProcessor*) inRefCon; /** on this point we define the number of channels, which is mono for the iphone. the number of frames is usally 512 or 1024. */ buffer.mDataByteSize = inNumberFrames * sizeof(SInt16); // sample size buffer.mNumberChannels = 1; // one channel buffer.mData = malloc( inNumberFrames * sizeof(SInt16) ); // buffer size // we put our buffer into a bufferlist array for rendering AudioBufferList bufferList; bufferList.mNumberBuffers = 1; bufferList.mBuffers[0] = buffer; // render input and check for error status = AudioUnitRender([audioProcessor audioUnit], ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, &bufferList); [audioProcessor hasError:status:__FILE__:__LINE__]; // process the bufferlist in the audio processor [audioProcessor processBuffer:&bufferList]; // clean up the buffer free(bufferList.mBuffers[0].mData); //NSLog(@"RECORD"); return noErr; } With this I get: inBusNumber = 1, inNumberFrames = 1024, inTimeStamp = 80444304 // always the same inTimeStamp, which is strange. However, the frame size I need to encode MP3 is 1152. How can I configure it? If I do buffering, that implies a delay, but I would like to avoid this because it is a real-time app. If I use this configuration, each buffer gets trash trailing samples: 1152 - 1024 = 128 bad samples. All samples are SInt16.
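
    The hardware slice size cannot simply be set to 1152 (callbacks arrive in the power-of-two-ish sizes the audio system chooses), so the standard answer is a small FIFO between the callback and the encoder: append every 1024-sample buffer, pop 1152-sample chunks whenever enough have accumulated. The added latency is bounded by one MP3 frame (at most 1151 samples, about 144 ms at 8 kHz), it does not grow over time, and no trailing samples are trashed. A sketch, assuming the single-threaded use shown above (int16_t is the same type as SInt16):

    #include <cstdint>
    #include <cstring>

    const int kMp3Frame = 1152;
    const int kFifoCap  = kMp3Frame * 8;

    struct SampleFifo {
        int16_t data[kFifoCap];
        int     count = 0;
    };

    static void fifoPush(SampleFifo& f, const int16_t* in, int n) {
        if (f.count + n > kFifoCap) return;       // overflow guard: drop
        std::memcpy(f.data + f.count, in, n * sizeof(int16_t));
        f.count += n;
    }

    // Fills out[kMp3Frame]; returns true while full frames are available.
    static bool fifoPop(SampleFifo& f, int16_t* out) {
        if (f.count < kMp3Frame) return false;
        std::memcpy(out, f.data, kMp3Frame * sizeof(int16_t));
        f.count -= kMp3Frame;
        std::memmove(f.data, f.data + kMp3Frame, f.count * sizeof(int16_t));
        return true;
    }

    // In recordingCallback, after AudioUnitRender succeeds:
    //   fifoPush(fifo, (const int16_t*)bufferList.mBuffers[0].mData, inNumberFrames);
    //   int16_t frame[kMp3Frame];
    //   while (fifoPop(fifo, frame)) { /* hand frame to the ffmpeg encoder */ }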

  • Is there a tool out there that lets you print a colour chart / palette of colours used on a web page

    - by undefined
    I want to print a table of the colours used in a web page that my graphic designer has produced - I have .png files at present and use Fireworks to view them. It would be great if there were a tool that lets you print a table with the colour and hex value so I can easily reference them when programming. Has anyone come across such a thing? It sounds to me like there should be a Firefox extension or similar.

  • Zoom image to pixel level

    - by zaf
    For an art project, one of the things I'll be doing is zooming in on an image to a particular pixel. I've been rubbing my chin and would love some advice on how to proceed. Here are the input parameters:
    Screen: sw - screen width; sh - screen height
    Image: iw - image width; ih - image height
    Pixel: px - x position of the pixel in the image; py - y position of the pixel in the image
    Zoom: zf - zoom factor (0.0 to 1.0)
    Background colour: bc - background colour to use when the screen and image aspect ratios are different
    Outputs:
    - the zoomed image (no anti-aliasing)
    - the screen position/dimensions of the pixel we are zooming to
    When zf is 0 the image must fit the screen with the correct aspect ratio. When zf is 1 the selected pixel fits the screen with the correct aspect ratio. One idea I had was to use something like POV-Ray and move the camera towards a big image texture, or some library (e.g. pygame) to do the zooming. Can anyone think of something more clever, with simple pseudocode? To keep it simpler you can make the image and screen have the same aspect ratio. I can live with that. I'll update with more info as it's required.
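
    One way to structure this, sketched under the stated assumptions: compute the screen-shaped source rectangle (in image coordinates) for zf = 0 (the whole image, letterboxed with bc) and for zf = 1 (the single pixel), then interpolate between the two and map the result to the screen. Interpolating the rectangle's size geometrically rather than linearly keeps the perceived zoom speed constant:

    #include <cmath>

    struct SrcRect { double x, y, w, h; };    // in image coordinates

    static SrcRect zoomRect(double iw, double ih, double px, double py,
                            double sw, double sh, double zf) {
        double aspect = sw / sh;

        // zf = 0: smallest screen-shaped rect containing the whole image.
        double w0 = (iw / aspect >= ih) ? iw : ih * aspect;

        // zf = 1: smallest screen-shaped rect containing one pixel.
        double w1 = (aspect >= 1.0) ? aspect : 1.0;

        // Geometric interpolation of size -> visually constant zoom rate.
        double w = w0 * std::pow(w1 / w0, zf);
        double h = w / aspect;

        // Center glides from the image center to the chosen pixel's center.
        double cx = (1 - zf) * (iw * 0.5) + zf * (px + 0.5);
        double cy = (1 - zf) * (ih * 0.5) + zf * (py + 0.5);

        return { cx - w / 2, cy - h / 2, w, h };
    }

    Rendering maps this rect to the full screen with nearest-neighbour sampling (no anti-aliasing, as specified), filling anything outside the image with bc. The second output falls out of the same rect: pixel (px, py) lands on screen at ((px - x) / w * sw, (py - y) / h * sh) with size (sw / w, sh / h).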

  • What is faster with PictureBox: many small redraws or a complete redraw?

    - by kornelijepetak
    I have a PictureBox (WinMobile 6 WinForm) on which I draw some images. There is a background image that goes in the background and it does not change. However, objects that are drawn on the PictureBox move during the application, so I need to refresh the background. Since the items that are redrawn fill from 50% to 80% of the surface, the question is which of the two is faster: 1) Redraw only the parts of the background image that have changed (previous + next location of the moving object). 2) Redraw the complete background and then draw all the objects in their current positions. Now, the reason for asking is that I am not sure how much processor power is needed for a single drawImage operation and what the time-consuming factors are. I am aware that if there is almost complete coverage of the background, it would be pointless to redraw portions of it, because by drawing the portions I will have drawn the complete picture. But since sometimes only half of the image has changed (some objects remained in their old positions), it may (perhaps) be beneficial to redraw only those regions. But I need your insight on this... Thanks.
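
    A rule of thumb rather than a verdict: at 50-80% coverage the two strategies are close enough that it is worth measuring, and the usual compromise is to accumulate the dirty area each frame (old bounds + new bounds per moving object) and fall back to a full redraw past a threshold, since one large blit often beats many small ones at high coverage. A sketch of that bookkeeping (the threshold, e.g. 0.5, is a guess to tune on the device):

    struct RectI { int x, y, w, h; };

    static long area(const RectI& r) { return (long)r.w * r.h; }

    // dirty[] holds, per moving object, the union of its previous and
    // current bounds. Overlap between entries is ignored, which biases
    // the estimate toward full redraws -- acceptable for a heuristic.
    static bool useFullRedraw(const RectI* dirty, int n,
                              int sw, int sh, double threshold) {
        long sum = 0;
        for (int i = 0; i < n; ++i) sum += area(dirty[i]);
        return sum >= threshold * (long)sw * sh;
    }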
