Search Results

Search found 13860 results on 555 pages for 'core graphics'.


  • How to deal with many-to-many relationships with NSFetchedResultsController?

    - by Phil Yates
    OK so I have two entities in my data model (let's say entityA and entityB); both have a to-many relationship to each other. I have set up an NSFetchedResultsController to fetch a bunch of entityA. Now I'm trying to have the section names for the table view be the title of entityB: sectionNameKeyPath:@"entityB.title". This causes a problem, whereby the section name returned from that relationship appears as ({title1}) or ({title1, title2, ..., titleN}), depending on how many different entityBs are involved. This doesn't look great in a table view and doesn't group the objects as I would like. What I would like is a section per entityB title, with entityA appearing under each section, under multiple sections if necessary. I'm at a loss as to how to achieve this: whether I need to update the predicate to get the entity to appear multiple times, or whether I need to update the section and header functions to do some processing as the controller loops through the objects. Any help is appreciated :) Thanks
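
    Since sectionNameKeyPath can't traverse a to-many relationship, one workaround is to drive the sections from the other side. A minimal sketch (entity and relationship names are the post's placeholders; the inverse to-many name entityAs is an assumption): fetch the entityB objects sorted by title and build the table's sections by hand, so an entityA can appear under every entityB it belongs to.

        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        [request setEntity:[NSEntityDescription entityForName:@"EntityB"
                                       inManagedObjectContext:context]];
        [request setSortDescriptors:[NSArray arrayWithObject:
            [[[NSSortDescriptor alloc] initWithKey:@"title" ascending:YES] autorelease]]];
        NSArray *sections = [context executeFetchRequest:request error:NULL];
        [request release];
        // numberOfSectionsInTableView: -> [sections count]
        // titleForHeaderInSection:     -> [[sections objectAtIndex:section] valueForKey:@"title"]
        // numberOfRowsInSection:       -> [[[sections objectAtIndex:section] valueForKey:@"entityAs"] count]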

    Read the article

  • animate rotation

    - by Mike
    I'm trying to animate a rotation using CATransform3DMakeRotation, but the problem is that once the animation is finished, the image goes back to its initial position, i.e. back to zero. I'd like to keep it where it finished rotating. How would I do that? Edit: What I'm trying to do is recreate the compass which comes with the new iPhone. Basically, the location manager gives me new headings every few seconds (or several per second). Using the new heading and the timestamp, I was trying to get a smooth animation of the image but not getting anywhere. The only thing which seems to work is applying the transform directly, e.g. compassimage.layer.transform = CATransform3DMakeRotation(newHeading.trueHeading * M_PI/180, 0, 0, 1.0); but that's not animated...
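
    One common fix, sketched under the assumption that compassimage is a UIView: animate the view's own transform inside a UIView animation block, so the change is made to the model value rather than only to the presentation layer, and nothing snaps back when the animation completes.

        CGFloat angle = newHeading.trueHeading * M_PI / 180.0;
        [UIView beginAnimations:nil context:NULL];
        [UIView setAnimationDuration:0.25];
        compassimage.transform = CGAffineTransformMakeRotation(angle);
        [UIView commitAnimations];

    An explicit CABasicAnimation works too, but its final value must then be assigned to the layer afterwards (or removedOnCompletion set to NO with fillMode kCAFillModeForwards) to avoid the snap-back.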

    Read the article

  • How to handle alpha in a manual "Overlay" blend operation?

    - by quixoto
    I'm playing with some manual (walk-the-pixels) image processing, and I'm recreating the standard "overlay" blend. I'm looking at the "Photoshop math" macros here: http://www.nathanm.com/photoshop-blending-math/ (see also here for a more readable version of Overlay). Both source images are in fairly standard RGBA (8 bits each) format, as is the destination. When both images are fully opaque (alpha is 1.0), the result is blended correctly as expected. But if my "blend" layer (the top image) has transparency in it, I'm a little flummoxed as to how to factor that alpha into the blending equation correctly. I expect it to work such that transparent pixels in the blend layer have no effect on the result, opaque pixels in the blend layer do the overlay blend as normal, and semitransparent blend-layer pixels have some scaled effect on the result. Can someone explain to me the blend equations or the concept behind doing this? Bonus points if you can help me do it such that the resulting image has correctly premultiplied alpha (which only comes into play for pixels that are not opaque in both layers, I think). Thanks!

        // factor in blendLayerA, (1 - blendLayerA) somehow?
        resultR = ChannelBlend_Overlay(baseLayerR, blendLayerR);
        resultG = ChannelBlend_Overlay(baseLayerG, blendLayerG);
        resultB = ChannelBlend_Overlay(baseLayerB, blendLayerB);
        resultA = 1.0; // also, what should this be??
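
    A sketch of the standard answer, following the blend-then-composite model used by the PDF and W3C compositing specs (variable names are hypothetical; channels normalized to 0..1, inputs non-premultiplied): first mix the overlay result with the raw blend color according to the base's alpha, then do an ordinary source-over composite, which also yields correctly premultiplied output.

        static inline float overlay(float b, float s) {
            return (b < 0.5f) ? (2.0f * b * s)
                              : (1.0f - 2.0f * (1.0f - b) * (1.0f - s));
        }

        /* Blend step: where the base is transparent, fall back to the blend color. */
        float mixedR = (1.0f - baseA) * blendR + baseA * overlay(baseR, blendR);
        /* Composite step: plain source-over; outRp is premultiplied. */
        float outA  = blendA + baseA * (1.0f - blendA);
        float outRp = blendA * mixedR + (1.0f - blendA) * baseA * baseR;

    With this, a transparent blend pixel leaves the base untouched, an opaque one gives the full overlay, and partial alpha scales smoothly between the two; divide outRp by outA (guarding against zero) if a non-premultiplied result is needed.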

    Read the article

  • Get a CALayer's view

    - by eaigner
    Hi, is it somehow possible to get the nearest "container" NSView of a CALayer? My problem is that I'm managing tracking areas in my "container" NSView, and those need to be updated if a layer is moved, added, etc., and I would like to automate that somehow instead of calling my -updateTrackingAreas method manually. Regards, Erik
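
    A hedged sketch of one approach: for a layer-backed view, the backing layer's delegate is normally the NSView itself, while manually added sublayers have no view of their own, so walking up the superlayer chain until a delegate turns out to be an NSView finds the nearest container.

        NSView *ContainerViewForLayer(CALayer *layer) {
            for (CALayer *l = layer; l != nil; l = l.superlayer) {
                if ([l.delegate isKindOfClass:[NSView class]])
                    return (NSView *)l.delegate;
            }
            return nil; // layer isn't hosted in a view hierarchy
        }

    A CALayer subclass could call this from its geometry setters to poke the view's -updateTrackingAreas automatically.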

    Read the article

  • What is faster with PictureBox: many small redraws or a complete redraw?

    - by kornelijepetak
    I have a PictureBox (WinMobile 6 WinForm) on which I draw some images. There is a background image that does not change. However, objects drawn on the PictureBox move during the application, so I need to refresh the background. Since the items that are redrawn fill from 50% to 80% of the surface, the question is which of the two is faster:
    1) Redraw only the parts of the background image that have changed (previous + next location of the moving object).
    2) Redraw the complete background and then draw all the objects in their current positions.
    The reason for asking is that I am not sure how much processor power a single DrawImage operation needs and what the time-consuming factors are. I am aware that if there is almost complete coverage of the background, it would be pointless to redraw portions of it, because by drawing portions I would have drawn the complete picture anyway. But since sometimes only half of the image has changed (some objects remained in their old positions), it may perhaps be beneficial to redraw only those regions. But I need your insight on this... Thanks.
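
    A sketch of the dirty-rectangle variant, using standard WinForms calls that (to my knowledge) also exist in the Compact Framework; names like oldBounds are hypothetical. The idea is to invalidate only the union of an object's old and new bounds, and to clip the background redraw to the region the Paint event reports.

        // Request a repaint of just the area the object vacated and now occupies.
        Rectangle dirty = Rectangle.Union(oldBounds, newBounds);
        pictureBox.Invalidate(dirty);

        // In the Paint handler, redraw only what intersects the dirty region.
        void OnPaint(object sender, PaintEventArgs e)
        {
            Rectangle clip = e.ClipRectangle;
            e.Graphics.DrawImage(background, clip, clip, GraphicsUnit.Pixel);
            // ...then draw only the objects whose bounds intersect 'clip'.
        }

    For the 50-80% coverage case, timing both strategies on the target device (e.g. with Environment.TickCount) is the only reliable way to settle it.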

    Read the article

  • Animate the enabling of a UIBarButtonItem?

    - by Michael Brewer
    Is there a way to animate enabling or disabling a button? I've tried the following with no success. I'm guessing at this point that the enabled property cannot be animated the way opacity can, but I hope I'm wrong.

        [UIView beginAnimations:nil context:nil];
        [UIView setAnimationDuration:1.0f];
        theButton.enabled = YES;
        [UIView setAnimationDelegate:self];
        [UIView commitAnimations];

    I can't believe there isn't a setEnabled:(BOOL)enabled animated:(BOOL)animated method.
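
    The enabled property is indeed not animatable. One hedged workaround: back the bar button item with a custom view and cross-fade that view's alpha alongside the (instant) state change. A sketch, with illustrative names:

        UIButton *button = ...; // becomes the bar button's custom view
        UIBarButtonItem *item = [[UIBarButtonItem alloc] initWithCustomView:button];
        // ...
        [UIView beginAnimations:nil context:nil];
        [UIView setAnimationDuration:1.0f];
        button.alpha = enabled ? 1.0f : 0.5f; // the visual cue animates...
        [UIView commitAnimations];
        button.enabled = enabled;             // ...while the state flips at once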

    Read the article

  • Batch faulting in a to-many relationship for a collection of objects

    - by indragie
    Scenario: Let's say I have an entity called Author that has a to-many relationship called books to the Book entity (inverse relationship author). Given an existing collection of Author objects, I want to fault in the books relationship for all of them in a single fetch request.

    Code: This is what I've tried so far:

        NSArray *authors = ... // array of `Author` objects
        NSFetchRequest *fetchRequest = [NSFetchRequest fetchRequestWithEntityName:@"Book"];
        fetchRequest.returnsObjectsAsFaults = NO;
        fetchRequest.predicate = [NSPredicate predicateWithFormat:@"author IN %@", authors];

    Executing this fetch request does not result in the books relationship of the objects in the authors array being faulted in (inspected via logging). I've also tried doing the fetch request the other way around:

        NSArray *authors = ... // array of `Author` objects
        NSFetchRequest *fetchRequest = [NSFetchRequest fetchRequestWithEntityName:@"Author"];
        fetchRequest.returnsObjectsAsFaults = NO;
        fetchRequest.predicate = [NSPredicate predicateWithFormat:@"SELF IN %@", authors];
        fetchRequest.relationshipKeyPathsForPrefetching = @[@"books"];

    This doesn't fire the faults either. What's the appropriate way of going about doing this?
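
    For what it's worth, a hedged sketch of the distinction that often explains this: an IN-style batch fetch registers the related Book objects with the context, but each Author's books relationship is a separate fault that still has to be fired; once the books are resident, firing it is cheap because the Book objects no longer have to be materialized one by one.

        // After the batch fetch above, touching the relationship materializes it:
        for (Author *author in authors) {
            [author.books count]; // fires the relationship fault
        }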

    Read the article

  • UIImage Rotation

    - by Kamchatka
    I display an image in a UIImageView (within a UIScrollView); the image is also stored in Core Data. In the interface, I want the user to be able to rotate the picture by 90 degrees, and I also want the rotation to be saved in Core Data. What should I rotate in the display: the scroll view, the UIImageView, or the image itself? (If possible I would like the rotation to be animated.) But then I also have to save the picture to Core Data. I thought about changing the image orientation, but this property is read-only.
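
    One hedged way to split the work: animate the rotation on the view (a cheap transform change inside a UIView animation block), and separately redraw the image into a rotated bitmap for persistence, since UIImage's imageOrientation cannot be assigned. A sketch of the persistence half, assuming a single 90-degree clockwise turn:

        UIGraphicsBeginImageContext(CGSizeMake(image.size.height, image.size.width));
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        CGContextTranslateCTM(ctx, image.size.height, 0);
        CGContextRotateCTM(ctx, M_PI_2); // 90 degrees
        [image drawAtPoint:CGPointZero];
        UIImage *rotated = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        // Persist UIImagePNGRepresentation(rotated) (or JPEG) back to Core Data.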

    Read the article

  • Problem when trying to use simple Shaders + VBOs

    - by Mr.Gando
    Hello, I'm trying to convert the following function to a VBO-based function for learning purposes; it displays a static texture on screen. I'm using OpenGL ES 2.0 with shaders on the iPhone (which should be almost the same as regular OpenGL in this case). This is what I have working:

        //Works!
        - (void) drawAtPoint:(CGPoint)point depth:(CGFloat)depth
        {
            GLfloat coordinates[] = {
                0, 1,
                1, 1,
                0, 0,
                1, 0
            };
            GLfloat width = (GLfloat)_width * _maxS,
                    height = (GLfloat)_height * _maxT;
            GLfloat vertices[] = {
                -width / 2 + point.x, -height / 2 + point.y,
                 width / 2 + point.x, -height / 2 + point.y,
                -width / 2 + point.x,  height / 2 + point.y,
                 width / 2 + point.x,  height / 2 + point.y,
            };
            glBindTexture(GL_TEXTURE_2D, _name);
            //ATTRIB_POSITION and ATTRIB_TEXCOORD are handles for the shader attributes
            glVertexAttribPointer(ATTRIB_POSITION, 2, GL_FLOAT, GL_FALSE, 0, vertices);
            glEnableVertexAttribArray(ATTRIB_POSITION);
            glVertexAttribPointer(ATTRIB_TEXCOORD, 2, GL_FLOAT, GL_FALSE, 0, coordinates);
            glEnableVertexAttribArray(ATTRIB_TEXCOORD);
            glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        }

    I tried the following to convert it to a VBO, however I don't see anything displaying on screen with this version:

        //Doesn't display anything
        - (void) drawAtPoint:(CGPoint)point depth:(CGFloat)depth
        {
            GLfloat width = (GLfloat)_width * _maxS,
                    height = (GLfloat)_height * _maxT;
            //Texture on-screen position (each vertex is x,y in on-screen coords)
            GLfloat position[] = {
                -width / 2 + point.x, -height / 2 + point.y,
                 width / 2 + point.x, -height / 2 + point.y,
                -width / 2 + point.x,  height / 2 + point.y,
                 width / 2 + point.x,  height / 2 + point.y,
            };
            //Texture coords from 0 to 1
            GLfloat coordinates[] = {
                0, 1,
                1, 1,
                0, 0,
                1, 0
            };

            glBindVertexArrayOES(vao);
            glGenVertexArraysOES(1, &vao);
            glGenBuffers(2, vbo);
            //Buffer 1
            glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
            glBufferData(GL_ARRAY_BUFFER, 8 * sizeof(GLfloat), position, GL_STATIC_DRAW);
            glEnableVertexAttribArray(ATTRIB_POSITION);
            glVertexAttribPointer(ATTRIB_POSITION, 2, GL_FLOAT, GL_FALSE, 0, position);
            //Buffer 2
            glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
            glBufferData(GL_ARRAY_BUFFER, 8 * sizeof(GLfloat), coordinates, GL_DYNAMIC_DRAW);
            glEnableVertexAttribArray(ATTRIB_TEXCOORD);
            glVertexAttribPointer(ATTRIB_TEXCOORD, 2, GL_FLOAT, GL_FALSE, 0, coordinates);
            //Draw
            glBindVertexArrayOES(vao);
            glBindTexture(GL_TEXTURE_2D, _name);
            glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        }

    In both cases I'm using this simple vertex shader:

        //Vertex Shader
        attribute vec2 position; //Bound to ATTRIB_POSITION
        attribute vec4 color;
        attribute vec2 texcoord; //Bound to ATTRIB_TEXCOORD
        varying vec2 texcoordVarying;
        uniform mat4 mvp;
        void main()
        {
            //You CAN'T use transpose in glUniformMatrix4fv on GL ES, so... here it goes.
            gl_Position = mvp * vec4(position.x, position.y, 0.0, 1.0);
            texcoordVarying = texcoord;
        }

    (gl_Position is the product mvp * vec4 because I'm simulating glOrthof in 2D with that mvp.) And this fragment shader:

        //Fragment Shader
        uniform sampler2D sampler;
        varying mediump vec2 texcoordVarying;
        void main()
        {
            gl_FragColor = texture2D(sampler, texcoordVarying);
        }

    I really need help with this. Maybe my shaders are wrong for the second case? Thanks in advance.
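
    Two things stand out in the non-working version (offered as a hedged diagnosis, not from the original post): the VAO is bound before it is generated, and once a VBO is bound, the last argument of glVertexAttribPointer must be a byte offset into that buffer, not the CPU-side array pointer. A corrected setup sketch:

        glGenVertexArraysOES(1, &vao); // generate first...
        glBindVertexArrayOES(vao);     // ...then bind
        glGenBuffers(2, vbo);

        glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
        glBufferData(GL_ARRAY_BUFFER, 8 * sizeof(GLfloat), position, GL_STATIC_DRAW);
        glEnableVertexAttribArray(ATTRIB_POSITION);
        // Offset 0 into the bound VBO, not the 'position' pointer:
        glVertexAttribPointer(ATTRIB_POSITION, 2, GL_FLOAT, GL_FALSE, 0, (const GLvoid *)0);

        glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
        glBufferData(GL_ARRAY_BUFFER, 8 * sizeof(GLfloat), coordinates, GL_DYNAMIC_DRAW);
        glEnableVertexAttribArray(ATTRIB_TEXCOORD);
        glVertexAttribPointer(ATTRIB_TEXCOORD, 2, GL_FLOAT, GL_FALSE, 0, (const GLvoid *)0);

    Buffer and VAO creation also belong in one-time setup rather than in every draw call.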

    Read the article

  • pitchbend (varispeed) audio with iPhone SDK's AudioUnit

    - by fetzig
    Hi, I'm trying to manipulate the speed (and pitch) of a sound while it is playing, so I played around with the iPhone SDK's Audio Units. I downloaded iPhoneMultichannelMixerTest and tried to add an AUComponent to the graph (in this case a format converter), but I get the following error pretty soon when building:

        #import <AudioToolbox/AudioToolbox.h>
        #import <AudioUnit/AudioUnit.h>
        ...
        AUComponentDescription varispeed_desc(kAudioUnitType_FormatConverter,
                                              kAudioUnitSubType_Varispeed,
                                              kAudioUnitManufacturer_Apple);

        error: 'kAudioUnitSubType_Varispeed' was not declared in this scope.

    Any ideas why? The documentation on this topic doesn't help me at all (an API reference isn't very helpful when you have no clue about the concept behind it). There are no examples on how to wire these effects together and manipulate their properties... so maybe I'm totally wrong. Anyway, any hint is great. Thanks for the help.

    Read the article

  • rotate a plane around a diagonal

    - by compie
    I would like to rotate a plane, not around a single (X or Y) axis, but around the diagonal (45 degrees between X and Y). How do I calculate Rx and Ry given Rdiagonal? (Rdiagonal is the amount of rotation I would like to achieve around the diagonal axis.)
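
    A hedged note on the math: a rotation about the diagonal is the X-axis rotation conjugated by a 45-degree turn about Z, and in general it cannot be reproduced exactly by one Rx followed by one Ry, because rotations about different axes do not commute. In matrix form:

        R_diag(theta) = R_z(45deg) * R_x(theta) * R_z(-45deg)

    Only for small angles does the decomposition Rx = Ry = Rdiagonal / sqrt(2) hold, and then only approximately.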

    Read the article

  • When should a uniform be used in shader programming?

    - by Phineas
    In a vertex shader, I calculate a vector using only uniforms. Therefore, the outcome of this calculation is the same for all instantiations of the vertex shader. Should I just do this calculation on the CPU and upload it as a uniform? What if I have ten such calculations? If I upload a lot of uniforms in this way, does CPU-GPU communication ever get so slow that recomputing such values in the vertex shader is actually faster?
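
    A sketch of the classic instance of this trade-off (generic uniform names): composing matrices per vertex versus uploading their product once per draw call.

        // Recomputed for every vertex:
        uniform mat4 projection, view, model;
        attribute vec3 position;
        void main() {
            gl_Position = projection * view * model * vec4(position, 1.0);
        }

        // Versus: compute projection * view * model once on the CPU per draw
        // call and upload it as a single 'mvp' uniform: one mat4 instead of
        // three, and no per-vertex matrix multiplies.

    Uniform uploads happen per draw call, not per vertex, so a handful of precomputed values is almost always the cheaper side of this trade; the communication cost only starts to matter when many uniforms change across many draw calls.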

    Read the article

  • Handle row deletion in UITableViewController

    - by Kamchatka
    Hello, I have a UINavigationController. The first level is a UITableViewController, and the second level just shows details of one of the items in the table view. In this detail view, I can delete the item, which deletes the underlying managed object. When I pop back to the table view, I get a crash. I understand why: it's because I didn't update the cached array that contains the data. I've looked at several tutorials and I don't exactly understand how I'm supposed to handle deletion. Maybe I don't understand exactly where I should fetch the objects in the model. Should I do a query for every cellForRowAtIndexPath and take the item at position indexPath.row in the result? That doesn't look efficient. Should I check for changes somewhere and re-cache the whole query in an array? I would think Core Data would provide something more natural, but I couldn't find it so far. Thanks in advance.
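
    Core Data's "something more natural" is plausibly NSFetchedResultsController, which watches the context so a deletion made in the detail view is reflected when popping back. A minimal sketch (the entity name "Item" and sort key are hypothetical):

        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        [request setEntity:[NSEntityDescription entityForName:@"Item"
                                       inManagedObjectContext:context]];
        [request setSortDescriptors:[NSArray arrayWithObject:
            [[[NSSortDescriptor alloc] initWithKey:@"name" ascending:YES] autorelease]]];
        NSFetchedResultsController *frc =
            [[NSFetchedResultsController alloc] initWithFetchRequest:request
                                                managedObjectContext:context
                                                  sectionNameKeyPath:nil
                                                           cacheName:nil];
        frc.delegate = self; // NSFetchedResultsControllerDelegate updates the table
        [frc performFetch:NULL];
        // cellForRowAtIndexPath: then uses [frc objectAtIndexPath:indexPath].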

    Read the article

  • How do I draw an ellipse with arbitrary orientation pixel by pixel?

    - by amc
    Hi, I have to draw an ellipse of arbitrary size and orientation pixel by pixel. It seems pretty easy to draw an ellipse whose major and minor axes align with the x and y axes, but rotating the ellipse by an arbitrary angle seems trickier. Initially I thought it might work to draw the unrotated ellipse and apply a rotation matrix to each point, but it seems as though that could cause errors due to rounding, and I need rather high precision. Is my suspicion about this method correct? How could I accomplish this task more precisely? I'm programming in C++ (although that shouldn't really matter, since this is a more algorithm-oriented question).
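
    One way around the rounding concern, sketched in C++ (putPixel and the sample count are placeholders): evaluate the rotated parametric form directly, so each pixel comes from a single exact evaluation instead of a rotate-after-round step, and rounding happens only once, at rasterization.

        #include <cmath>

        // Center (cx, cy), semi-axes a and b, rotation phi in radians.
        void drawRotatedEllipse(double cx, double cy, double a, double b, double phi)
        {
            const int steps = 4000; // dense enough to leave no gaps; tune to size
            for (int i = 0; i < steps; ++i) {
                double t = 2.0 * M_PI * i / steps;
                double x = cx + a * std::cos(t) * std::cos(phi)
                              - b * std::sin(t) * std::sin(phi);
                double y = cy + a * std::cos(t) * std::sin(phi)
                              + b * std::sin(t) * std::cos(phi);
                putPixel((int)std::floor(x + 0.5), (int)std::floor(y + 0.5));
            }
        }

    The error per plotted pixel is then at most half a pixel regardless of orientation; a midpoint-style algorithm generalized to conics is the faster alternative if trigonometric evaluation proves too slow.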

    Read the article

  • Simple sound effect loop using AudioToolbox

    - by Typeoneerror
    I've created a few sounds for use in my game, and I can play them at certain events without issue:

        // create sounds
        CFBundleRef mainBundle = CFBundleGetMainBundle();
        _soundFileShake = CFBundleCopyResourceURL(mainBundle, CFSTR("shake"), CFSTR("wav"), NULL);
        AudioServicesCreateSystemSoundID(_soundFileShake, &_soundIdShake);
        // later...
        AudioServicesPlaySystemSound(_soundIdShake);

    The game has a mechanism which allows you to shake the device to activate some functionality. I've got the shaking code done, so I get "shaking started" and "shaking ended" messages in my game. What I need is to start playing "shake.wav" when shaking starts and loop it until it stops. Is there a way to do this with AudioToolbox/AudioServices? How could I do this if not?
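
    System Sound Services has no looping facility, so a common fallback (a hedged sketch; requires linking AVFoundation) is AVAudioPlayer, whose numberOfLoops property supports indefinite looping:

        #import <AVFoundation/AVFoundation.h>

        NSString *path = [[NSBundle mainBundle] pathForResource:@"shake" ofType:@"wav"];
        AVAudioPlayer *shakePlayer =
            [[AVAudioPlayer alloc] initWithContentsOfURL:[NSURL fileURLWithPath:path]
                                                   error:NULL];
        shakePlayer.numberOfLoops = -1; // -1 loops until stopped
        [shakePlayer play];             // on "shaking started"
        // ...
        [shakePlayer stop];             // on "shaking ended"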

    Read the article

  • How to implement an escape sequence in SDL

    - by Sa'me Smd
    I'm trying to use the STL library inside SDL, but it gives me the error "undeclared identifier". Is there any way I can use "\n" or even cout << endl? Can the function SDL_WarpMouse, which places the mouse cursor at a desired location on screen, help me with this? I want to put a tile on the next line in the sequence. I hope you get the question; it's a very vague and messed-up question (sorry for that). EDIT:

        void putMap(SDL_Surface* tile, SDL_Surface* screen)
        {
            for (int y = 0; y < 21; y++) {
                for (int x = 0; x < 60; x++) {
                    if (maze[x][y] != '#') {
                        apply_surface(x*10, y*10, tile, screen);
                    }
                }
                cout << endl;
            }
        }

        c:\documents and settings\administrator\my documents\visual studio 2008\projects\craptest\craptest\main.cpp(605) : error C2065: 'cout' : undeclared identifier
        c:\documents and settings\administrator\my documents\visual studio 2008\projects\craptest\craptest\main.cpp(605) : error C2065: 'endl' : undeclared identifier

    This is my apply_surface function:

        void apply_surface(int x, int y, SDL_Surface* source, SDL_Surface* destination)
        {
            // Make a temporary rectangle to hold the offsets
            SDL_Rect offset;
            offset.x = x;
            offset.y = y;
            // Blit the surface
            SDL_BlitSurface(source, NULL, destination, &offset);
        }
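
    Two hedged observations: the C2065 errors occur because cout and endl are declared in the <iostream> header inside namespace std; and writing to cout only affects the console anyway, not the SDL window, where a tile's position is governed entirely by the coordinates passed to apply_surface.

        #include <iostream>  // declares std::cout and std::endl

        std::cout << std::endl;     // fully qualified...
        // or: using namespace std; // ...or bring the names into scope

        // "Next line" on the SDL screen is just a coordinate change:
        apply_surface(x * 10, (y + 1) * 10, tile, screen);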

    Read the article

  • predicate subquery to return items by matching tags

    - by user3411663
    I have a many-to-many relationship between two entities, Item and Tag. I'm trying to create a predicate that takes the selectedItem and returns a ranking of items based on how many tags they share. So far I've tried:

        NSPredicate *predicate = [NSPredicate predicateWithFormat:
            @"SUBQUERY(itemToTag, $item, $item IN %@).@count > 0",
            selectedItem.itemToTag];

    and other iterations that have failed. It currently only returns the selectedItem in the list. I've found little on SUBQUERY. Is there a guru out there who can help me refine this? Thanks in advance for the help!
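
    A hedged sketch (relationship name taken from the post): excluding the selected item itself keeps it out of the results, and since a fetch predicate alone can't sort by "number of shared tags", the ranking step has to happen in memory after the fetch.

        NSPredicate *predicate = [NSPredicate predicateWithFormat:
            @"SELF != %@ AND SUBQUERY(itemToTag, $tag, $tag IN %@).@count > 0",
            selectedItem, selectedItem.itemToTag];
        // After fetching 'matches', rank by tag overlap with the selected item:
        NSArray *ranked = [matches sortedArrayUsingComparator:
            ^NSComparisonResult(id a, id b) {
                NSMutableSet *sa = [[a valueForKey:@"itemToTag"] mutableCopy];
                [sa intersectSet:selectedItem.itemToTag];
                NSMutableSet *sb = [[b valueForKey:@"itemToTag"] mutableCopy];
                [sb intersectSet:selectedItem.itemToTag];
                return [@([sb count]) compare:@([sa count])]; // most shared first
            }];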

    Read the article

  • Alpha blending colors in .NET Compact Framework 2.0

    - by Adam Haile
    In the full .NET Framework you can use the Color.FromArgb() method to create a new color with alpha blending, like this:

        Color blended = Color.FromArgb(alpha, color);
        // or
        Color blended = Color.FromArgb(alpha, red, green, blue);

    However, in the Compact Framework (2.0 specifically), neither of those overloads is available; you only get:

        Color.FromArgb(int red, int green, int blue);
        // and
        Color.FromArgb(int val);

    The first one obviously doesn't even let you enter an alpha value, but the documentation for the latter shows that "val" is a 32-bit ARGB value (0xAARRGGBB, as opposed to the standard 24-bit 0xRRGGBB), so it would make sense that you could just build the ARGB value and pass it to the function. I tried this with the following:

        private Color FromARGB(byte alpha, byte red, byte green, byte blue)
        {
            int val = (alpha << 24) | (red << 16) | (green << 8) | blue;
            return Color.FromArgb(val);
        }

    But no matter what I do, the alpha blending never works; the resulting color always has full opacity, even when setting the alpha value to 0. Has anyone gotten this to work on the Compact Framework?
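
    The bit-packing above looks right; a hedged explanation is that CF 2.0's GDI-based drawing simply ignores the alpha channel when rendering, so the blend has to be done by hand (or via the native AlphaBlend API where the device supports it). A dependency-free per-channel sketch:

        // Blend 'fg' over 'bg' at the given alpha (0 = bg only, 255 = fg only).
        private static Color Blend(Color fg, Color bg, byte alpha)
        {
            int r = (fg.R * alpha + bg.R * (255 - alpha)) / 255;
            int g = (fg.G * alpha + bg.G * (255 - alpha)) / 255;
            int b = (fg.B * alpha + bg.B * (255 - alpha)) / 255;
            return Color.FromArgb(r, g, b);
        }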

    Read the article

  • NSFetchedResultsController doesn't fetch up the child-parent MOC chain?

    - by Kronusdark
    I cannot find any clarification on this, so it may be a bug. The problem is, I have a series of parent-child managed object contexts. When I save on a child context, the changes get pushed up to the parent, and I can fetch them using a plain old NSFetchRequest. However, an NSFetchedResultsController running in a sibling context to the first does not see them. Calling -(void)performFetch:error doesn't seem to pull the changes either. After a restart of the app, all the new data is available. My hypothesis is that NSFetchedResultsController only fetches from its current context and will not follow the chain to the persistent store. Can someone please set me straight here? Am I going to have to use notifications to monitor changes on other contexts? And finally, is this mentioned somewhere in the docs? I cannot find it for the life of me.
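
    A sketch of the notification route the post already suspects (with the caveat that the exact semantics in nested-context setups vary): listen for the sibling's save and merge it into the fetched results controller's context.

        [[NSNotificationCenter defaultCenter]
            addObserver:self
               selector:@selector(siblingContextDidSave:)
                   name:NSManagedObjectContextDidSaveNotification
                 object:siblingContext];

        - (void)siblingContextDidSave:(NSNotification *)note
        {
            NSManagedObjectContext *moc = self.fetchedResultsController.managedObjectContext;
            [moc performBlock:^{
                [moc mergeChangesFromContextDidSaveNotification:note];
            }];
        }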

    Read the article

  • Asset tracking in "real-time" - how best to display in browser?

    - by mawg
    I am developing an asset tracking system (standard LAMP) and am now wondering how best to present the data to the user in the browser. I expect to track at most a few thousand items and to refresh them every second or so. I want to draw a floorplan or map of the area and represent the assets symbolically on it (with different symbols for different classes of assets). Additionally, the user should be able to click on an asset to interact with it, and search for a particular asset, centre the screen on it, draw a circle round it, etc. http://graphite.wikidot.com/ looks good; is there any alternative? At its simplest, I suppose I could just generate a JPEG and display it, using CSS to let me know if/where a user clicks... but what's the "best" way to do it?
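
    For comparison with the server-generated JPEG idea, a hedged sketch of the usual client-side alternative (the endpoint and field names are invented): draw the floorplan and symbols on an HTML canvas and poll the server about once per second; canvas click coordinates then map directly onto asset positions for hit-testing.

        const canvas = document.getElementById("floorplan") as HTMLCanvasElement;
        const ctx = canvas.getContext("2d")!;
        const floorplan = new Image();
        floorplan.src = "/floorplan.png"; // static background, assumed loaded

        async function refresh(): Promise<void> {
          const assets = await (await fetch("/assets.json")).json();
          ctx.drawImage(floorplan, 0, 0);
          for (const a of assets) {       // [{ id, x, y, cls }]
            ctx.fillStyle = a.cls === "laptop" ? "red" : "blue"; // symbol per class
            ctx.beginPath();
            ctx.arc(a.x, a.y, 5, 0, 2 * Math.PI);
            ctx.fill();
          }
        }
        setInterval(refresh, 1000);       // roughly the 1 s refresh described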

    Read the article

  • iPhone game audio and background music

    - by Boon
    Have a few questions related to adding sounds to my game, specifically intro music (for the splash screen), background music (loop), and button event sounds. Hope you can share your knowledge on this.
    1) Should I use compressed sounds or uncompressed sounds, or perhaps a combination of the two? Are there any limitations of the iPhone hardware that I should be aware of, for example the ability to play multiple compressed sounds?
    2) What's the best audio format for my purpose?
    3) For background music, I am thinking of using AVAudioPlayer. For button event sounds, I am thinking of using AudioServicesPlaySystemSound; what do you think?
    4) Any other issues I should be aware of? Thank you!

    Read the article

  • Tinting iPhone application screen red

    - by btschumy
    I'm trying to place a red tint on all the screens of my iPhone application. I've experimented on a bitmap and found I get the effect I want by compositing a dark red color onto the screen image using multiply (kCGBlendModeMultiply). So the question is how to do this efficiently, in real time, on the iPhone. One dumb way might be to grab a bitmap of the current screen, composite into the bitmap, and then write the composited bitmap back to the screen. This seems like it would almost certainly be too slow. In addition, I need some way of knowing when part of the screen has been redrawn so I can update the tinting. I can almost get the effect I want by putting a red, translucent, fullscreen UIView above everything. That tints everything red without further intervention on my part, but the effect is much "muddier" than what results from the multiply composite. So do any wizards out there know of some mechanism I can use to automatically composite the red over the app, in similar fashion to what the translucent red UIView does?
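
    For reference, the "grab and composite" step itself is straightforward; a hedged sketch using renderInContext: (the performance worry above still applies, since this must rerun whenever anything redraws):

        UIGraphicsBeginImageContext(window.bounds.size);
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        [window.layer renderInContext:ctx]; // snapshot the app's layer tree
        CGContextSetBlendMode(ctx, kCGBlendModeMultiply);
        CGContextSetFillColorWithColor(ctx,
            [UIColor colorWithRed:0.6f green:0.0f blue:0.0f alpha:1.0f].CGColor);
        CGContextFillRect(ctx, window.bounds); // multiply dark red over it
        UIImage *tinted = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();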

    Read the article

  • CALayer and Off-Screen Rendering

    - by Luke Mcneice
    I have a paging UIScrollView with a contentSize large enough to hold a number of small UIScrollViews for zooming. The viewForZoomingInScrollView is a view controller that holds a CALayer for drawing a PDF page onto. This allows me to navigate through a PDF much like the iBooks PDF reader. The code that draws the PDF (tiled layers) is located in:

        - (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx;

    and simply adding a 'page' to the visible screen calls this method automatically. When I change page, there is some delay before all the tiles are drawn, even though the object (page) has already been created. What I want is to render the next page before the user scrolls to it, thus preventing the visible tiling effect. However, I have found that if the layer is located off-screen, adding it to the scroll view doesn't call drawLayer:. Any ideas/common gotchas here? I have tried:

        [viewController.view.layer setNeedsLayout];
        [viewController.view.layer setNeedsDisplay];

    NB: The fact that this replicates the iBooks functionality is irrelevant within the context of the full app.
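
    One hedged workaround, since tiled layers only draw on demand for visible regions: pre-render each upcoming page into a UIImage with the standard Core Graphics PDF calls and show that while the tiles fill in behind it, which hides the tiling effect. A sketch (pageView and pdfPage are assumed to exist):

        CGSize size = pageView.bounds.size;
        UIGraphicsBeginImageContext(size);
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        CGContextSetRGBFillColor(ctx, 1, 1, 1, 1); // white backdrop
        CGContextFillRect(ctx, CGRectMake(0, 0, size.width, size.height));
        CGContextTranslateCTM(ctx, 0, size.height); // flip into CG's coordinate space
        CGContextScaleCTM(ctx, 1, -1);
        CGContextConcatCTM(ctx, CGPDFPageGetDrawingTransform(pdfPage, kCGPDFMediaBox,
            CGRectMake(0, 0, size.width, size.height), 0, true));
        CGContextDrawPDFPage(ctx, pdfPage);
        UIImage *preview = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        // Show 'preview' in a UIImageView under the tiled layer and remove it
        // once the tiles have drawn.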

    Read the article
