Search Results

Search found 7 results on 1 page for 'techzen'.

  • Mobile security solutions

    - by techzen
    What mobile security solutions do you or your organization use? What are the pros and cons of these solutions, how successful have you been in implementing them, and were there any loopholes or issues in using them? In general, can you suggest a set of guidelines to watch for when selecting a specific solution in this context?

    Read the article

  • iPhone: Changing CGImageAlphaInfo of CGImage

    - by TechZen
    I have a PNG image that has an unsupported bitmap graphics context pixel format. Whenever I attempt to resize the image, CGBitmapContextCreate() chokes on the unsupported format (error formatted for easy reading): CGBitmapContextCreate: unsupported parameter combination: 8 integer bits/component; 32 bits/pixel; 3-component colorspace; kCGImageAlphaLast; 1344 bytes/row. The list of supported pixel formats definitely does not include this combination. It appears I need to redraw the image and move the alpha channel information to kCGImageAlphaPremultipliedFirst or kCGImageAlphaPremultipliedLast, but I have no idea how to go about doing this. There is nothing unusual about the PNG file and it isn't corrupted; it works in all other contexts just fine. I encountered this error just by chance, but my users might have similarly formatted files, so I will have to check my app's imported images and correct for this problem. (One possible redraw approach is sketched after this item.)

    Read the article
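
    The question leaves the redraw step open. Below is a minimal sketch of one way to do it, assuming an 8-bit-per-component RGBA destination with kCGImageAlphaPremultipliedLast is acceptable; the helper name is made up for illustration and this is not the author's code.

        #import <UIKit/UIKit.h>

        // Redraws sourceImage into a bitmap context whose pixel format
        // CGBitmapContextCreate() supports, and returns the converted image.
        // Caller is responsible for calling CGImageRelease() on the result.
        static CGImageRef CreateImageWithSupportedAlpha(CGImageRef sourceImage) {
            size_t width  = CGImageGetWidth(sourceImage);
            size_t height = CGImageGetHeight(sourceImage);

            CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
            // 8 bits/component, 32 bits/pixel RGBA, premultiplied alpha last --
            // one of the combinations listed as supported for bitmap contexts.
            CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8,
                                                         width * 4, colorSpace,
                                                         (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
            CGColorSpaceRelease(colorSpace);
            if (context == NULL) {
                return NULL;
            }

            // Redraw the original image into the new context, then capture the result.
            CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, width, height), sourceImage);
            CGImageRef converted = CGBitmapContextCreateImage(context);
            CGContextRelease(context);
            return converted;
        }

    Wrapping the result back in a UIImage (for example with [UIImage imageWithCGImage:converted]) would let the rest of an import pipeline treat converted files like any other.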

  • SecurityFlushSessionListener in jboss

    - by techzen
    In jboss-web.deployer/conf/web.xml there is a listener defined called SecurityFlushSessionListener. This listener looks up the component java:comp/env/security/securityMgr and, if it is not found, logs that information at debug level. It is understood that if this security feature is not needed, one can simply remove the listener. How have you used this listener to configure security at session destruction? That is, can you highlight the use cases for this listener and the scenarios where it proved useful?

    Read the article

  • httponly cookie support in Apache HttpClient

    - by techzen
    Can anyone confirm whether the latest releases of Apache HttpClient (4.0.1 or 4.1 alpha2) support HttpOnly cookies? I did not find anything in the release notes, but the cookie validation in the source code does not appear to raise an exception when the value is missing. Previous versions raise an exception when trying to parse an HttpOnly cookie, stating that no value was found.

    Read the article

  • Objective-C: Getting the True Class of Classes in Class Clusters

    - by TechZen
    Recently, while trying to answer a question here, I ran some test code to see how Xcode/gdb reports the class of instances in class clusters (see below). In the past, I've expected to see something like:

        PrivateClusterClass:PublicSuperClass:NSObject

    Such as this (which still returns as expected):

        NSPathStore2:NSString:NSObject

    ...for a string created with +[NSString pathWithComponents:]. However, with NSSet and its subclasses, the following code:

        - (void)applicationDidFinishLaunching:(UIApplication *)application {
            NSSet *s = [NSSet setWithObject:@"setWithObject"];
            NSMutableSet *m = [NSMutableSet setWithCapacity:1];
            [m addObject:@"Added String"];
            NSMutableSet *n = [[NSMutableSet alloc] initWithCapacity:1];
            [self showSuperClasses:s];
            [self showSuperClasses:m];
            [self showSuperClasses:n];
            [self showSuperClasses:@"Steve"];
        }

        - (void)showSuperClasses:(id)anObject {
            Class cl = [anObject class];
            NSString *classDescription = [cl description];
            while ([cl superclass]) {
                cl = [cl superclass];
                classDescription = [classDescription stringByAppendingFormat:@":%@", [cl description]];
            }
            NSLog(@"%@ classes=%@", [anObject class], classDescription);
        }

    ...outputs:

        // NSSet *s
        NSCFSet classes=NSCFSet:NSMutableSet:NSSet:NSObject
        // NSMutableSet *m
        NSCFSet classes=NSCFSet:NSMutableSet:NSSet:NSObject
        // NSMutableSet *n
        NSCFSet classes=NSCFSet:NSMutableSet:NSSet:NSObject
        // NSString @"Steve"
        NSCFString classes=NSCFString:NSMutableString:NSString:NSObject

    The debugger shows the same class for all the set instances. I know that in the past the set class cluster did not return like this. What has changed? (I suspect it is a change in the bridge from Core Foundation.) Which class clusters report just a generic class (e.g. NSCFSet) and which report an actual subclass (e.g. NSPathStore2)? Most importantly, when debugging, how do you determine the actual class of an NSSet cluster instance? (One runtime-based approach is sketched after this item.)

    Read the article
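
    For the last question, one common technique (a suggestion, not part of the original post) is to ask the Objective-C runtime directly: object_getClass() returns the concrete class the instance was allocated as, independent of whatever -class chooses to report. The helper name below is made up for illustration.

        #import <Foundation/Foundation.h>
        #import <objc/runtime.h>

        // Logs both what -class reports and the concrete runtime class,
        // which is the private cluster class for NSSet/NSString instances.
        static void LogConcreteClass(id anObject) {
            Class reported = [anObject class];
            Class concrete = object_getClass(anObject);
            NSLog(@"reported=%@ concrete=%s", reported, class_getName(concrete));
        }

    The same expression can be evaluated from the debugger (for example, printing the result of object_getClass(s)) to inspect the concrete class of a live instance.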

  • iPhone: How to Determine Average Light/Dark of an Area of an UIImage

    - by TechZen
    I need to place labels with a transparent background over a variable-content UIImage. Readability will vary significantly depending on the relationship between the color of the label's text and the color/luminosity of the area of the image displayed under the label. Since the image will be constantly changing, the color of the label's text needs to change in sync. I have found several techniques for determining the color, perceived luminosity, etc. of a single pixel. However, I need to determine, rather quickly (while a view loads), the rough perceived color/luminosity of the area of the UIImage under the frame of the UILabel. I presume I will also need to measure the alpha, because the same color/luminosity looks different at different alpha values. Is there a way to calculate such a value for an area, or will I be reduced to simply summing pixels? If it comes to that, is there an algorithm to accomplish this? I've thought of two possible approaches: (1) perform some "folding" operations, i.e. combining pixels from one half of the area with the other half, and repeat until I get a single value (would this be practical, and how would you logically combine pixels to average their perceived color/luminosity?); or (2) sample a statistically significant number of pixels in the area and then combine them (somehow) to get a rough measure (a sketch of this approach follows this item). I think this problem comes up a lot these days, with people being so fond of customizing backgrounds. It seems like something that would be worth my time to bang out a category or class to handle, and then share around.

    Read the article
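
    A minimal sketch of the sampling idea (approach 2 above), not the author's code: it renders the region of interest into a known 8-bit RGBA bitmap, then averages a standard luma approximation over a coarse grid of pixels. The helper name and the grid-sampling choice are assumptions for illustration.

        #import <UIKit/UIKit.h>
        #include <stdlib.h>

        // Rough average luminance (0.0-1.0) of rect within image, sampled on
        // a coarse grid for speed. Ignores the UIKit/Core Graphics vertical
        // flip for brevity; adjust if rect comes from UIKit coordinates.
        static CGFloat AverageLuminanceInRect(UIImage *image, CGRect rect, NSUInteger gridSize) {
            size_t width  = (size_t)CGRectGetWidth(rect);
            size_t height = (size_t)CGRectGetHeight(rect);
            if (width == 0 || height == 0 || gridSize == 0) return 0.0f;

            // Draw into a known RGBA 8-bit bitmap so the pixel layout is
            // predictable regardless of the source image's own format.
            unsigned char *pixels = calloc(width * height * 4, 1);
            CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
            CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8,
                                                         width * 4, colorSpace,
                                                         (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
            CGColorSpaceRelease(colorSpace);
            if (context == NULL) { free(pixels); return 0.0f; }

            // Shift the drawing so that rect lands at the bitmap's origin.
            CGContextTranslateCTM(context, -rect.origin.x, -rect.origin.y);
            CGContextDrawImage(context,
                               CGRectMake(0.0f, 0.0f, image.size.width, image.size.height),
                               image.CGImage);

            // Average 0.299 R + 0.587 G + 0.114 B over the sampled pixels.
            // Premultiplied alpha means fully transparent areas read as dark.
            double total = 0.0;
            NSUInteger samples = 0;
            size_t stepX = MAX(width / gridSize, (size_t)1);
            size_t stepY = MAX(height / gridSize, (size_t)1);
            for (size_t y = 0; y < height; y += stepY) {
                for (size_t x = 0; x < width; x += stepX) {
                    unsigned char *p = pixels + (y * width + x) * 4;
                    total += (0.299 * p[0] + 0.587 * p[1] + 0.114 * p[2]) / 255.0;
                    samples++;
                }
            }

            CGContextRelease(context);
            free(pixels);
            return (CGFloat)(total / samples);
        }

    Picking a light or dark text color is then a threshold test on the returned value (for example, above roughly 0.5 use dark text, below it use light text).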
