Search Results

Search found 12094 results on 484 pages for 'release notes'.


  • Objective-C retain clarification

    - by Maverick
    I'm looking at this code: NSMutableArray *controllers = [[NSMutableArray alloc] init]; for (unsigned i = 0; i < kNumberOfPages; i++) { [controllers addObject:[NSNull null]]; } self.viewControllers = controllers; [controllers release]; Later on... - (void)dealloc { [viewControllers release]; ... } I see that self.viewControllers and controllers now point to the same allocated memory (of type NSMutableArray *), but when I call [controllers release], isn't self.viewControllers released as well? Or does setting self.viewControllers = controllers automatically retain that memory?
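
    For context, here is a minimal sketch of what a retain property does under manual reference counting. This assumes viewControllers is declared with the retain attribute, which the snippet implies but does not show:

        // Assumed declaration (not shown in the question):
        @property (nonatomic, retain) NSMutableArray *viewControllers;

        // What the synthesized retain setter effectively does under MRC:
        - (void)setViewControllers:(NSMutableArray *)newValue {
            if (viewControllers != newValue) {
                [viewControllers release];            // drop ownership of the old value
                viewControllers = [newValue retain];  // take ownership of the new value
            }
        }

    Under that assumption, self.viewControllers = controllers raises the retain count to 2 (one from alloc, one from the setter), [controllers release] drops it back to 1, and the array is finally freed by [viewControllers release] in dealloc.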

    Read the article

  • Obj-C memory management for an NSView * instance variable

    - by massimoperi
    My custom view has a subview as an instance variable. Here is a sample interface: @interface MyCustomView : NSView { NSView *aSubview; } @end Then, in the .m file, I initialize aSubview and add it to the custom view. - (id)init { self = [super initWithFrame:NSMakeRect(0.0, 0.0, 320.0, 480.0)]; if (self) { aSubview = [[NSView alloc] initWithFrame:NSMakeRect(0.0, 0.0, 100.0, 100.0)]; [self addSubview:aSubview]; } return self; } Where should I release aSubview? In the -dealloc method? - (void)dealloc { [aSubview release]; [super dealloc]; } Or directly after adding it to the custom view in the -init method? - (id)init { [...] [self addSubview:aSubview]; [aSubview release]; [...] } Which is the better implementation?
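
    For reference, a hedged sketch of the ownership involved; the key fact is that addSubview: retains the view it is given:

        aSubview = [[NSView alloc] initWithFrame:NSMakeRect(0.0, 0.0, 100.0, 100.0)];
        // retain count 1: owned by this object via alloc
        [self addSubview:aSubview];
        // retain count 2: the superview now owns it as well

        // Option A: keep the owning reference and balance the alloc in -dealloc.
        // Option B: release right after addSubview:, leaving aSubview as a
        // non-owning pointer that dangles if the subview is ever removed
        // from the view hierarchy.

    Both options balance the alloc; which is safer depends on whether aSubview is used after -init.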

    Read the article

  • Converting a macro to an inline function

    - by Rob
    I am using some Qt code that adds a VERIFY macro that looks something like this: #define VERIFY(cond) \ { \ bool ok = cond; \ Q_ASSERT(ok); \ } The code can then use it whilst being certain the condition is actually evaluated, e.g.: Q_ASSERT(callSomeFunction()); // callSomeFunction not evaluated in release builds! VERIFY(callSomeFunction()); // callSomeFunction is always evaluated Disliking macros, I would instead like to turn this into an inline function: inline void VERIFY(bool condition) { Q_ASSERT(condition); } However, in release builds I am worried that the compiler would optimise out all calls to this function (as Q_ASSERT wouldn't actually do anything). Am I worrying unnecessarily, or is this likely, depending on the optimisation flags/compiler/etc.? I guess I could change it to: inline void VERIFY(bool condition) { (void)condition; Q_ASSERT(condition); } But, again, the compiler may be clever enough to ignore the call. Is this inline alternative safe for both debug and release builds?

    Read the article

  • EKCalendar not added to iCal

    - by Alex75
    I'm seeing some strange behavior on my iPhone. I'm creating an application that uses calendar events (EventKit). The class I use is as follows: the .h one #import "GenericManager.h" #import <EventKit/EventKit.h> #define oneDay 60*60*24 #define oneHour 60*60 @protocol CalendarManagerDelegate; @interface CalendarManager : GenericManager /* * Adds an event to a calendar named name on the day onDate. * The event to add is obtained through the dataSource, which is therefore * REQUIRED (!= nil). * * Returns YES only if the delegate conforms to the CalendarManagerDataSource protocol, * NO otherwise */ + (BOOL) addEventForCalendarWithName:(NSString *) name fromDate:(NSDate *)fromDate toDate: (NSDate *) toDate withDelegate:(id<CalendarManagerDelegate>) delegate; /* * Adds one event per day between fromDate and toDate to a * calendar named name. The event to add is obtained through the dataSource, * which is therefore REQUIRED (!= nil). * * Returns YES only if the delegate conforms to the CalendarManagerDataSource protocol, * NO otherwise */ + (BOOL) addEventsForCalendarWithName:(NSString *) name fromDate:(NSDate *)fromDate toDate: (NSDate *) toDate withDelegate:(id<CalendarManagerDelegate>) delegate; @end @protocol CalendarManagerDelegate <NSObject> // sent when the calendar needs information about the event to add - (void) calendarManagerDidCreateEvent:(EKEvent *) event; @end the .m one // // CalendarManager.m // AppCampeggioSingolo // // Created by CreatiWeb Srl on 12/17/12. // Copyright (c) 2012 CreatiWeb Srl. All rights reserved. // #import "CalendarManager.h" #import "Commons.h" #import <objc/message.h> @interface CalendarManager () @end @implementation CalendarManager + (void)requestToEventStore:(EKEventStore *)eventStore delegate:(id)delegate fromDate:(NSDate *)fromDate toDate: (NSDate *) toDate name:(NSString *)name { if([eventStore respondsToSelector:@selector(requestAccessToEntityType:completion:)]) { // iOS >= 6.0 [eventStore requestAccessToEntityType:EKEntityTypeEvent completion:^(BOOL granted, NSError *error) { if (granted) { [self addEventForCalendarWithName:name fromDate: fromDate toDate: toDate inEventStore:eventStore withDelegate:delegate]; } else { } }]; } else if (class_getClassMethod([EKCalendar class], @selector(calendarIdentifier)) != nil) { // iOS >= 5.0 && iOS < 6.0 [self addEventForCalendarWithName:name fromDate:fromDate toDate:toDate inEventStore:eventStore withDelegate:delegate]; } else { // iOS < 5.0 EKCalendar *myCalendar = [eventStore defaultCalendarForNewEvents]; EKEvent *event = [self generateEventForCalendar:myCalendar fromDate: fromDate toDate: toDate inEventStore:eventStore withDelegate:delegate]; [eventStore saveEvent:event span:EKSpanThisEvent error:nil]; } } /* * Retrieves the identifier of the calendar associated with the app, or nil if it has never been created.
    */ + (NSString *) identifierForCalendarName: (NSString *) name { NSString * confFileName = [self pathForFile:kCurrentCalendarFileName]; NSDictionary *confCalendar = [NSDictionary dictionaryWithContentsOfFile:confFileName]; NSString *currentIdentifier = [confCalendar objectForKey:name]; return currentIdentifier; } /* * stores the calendar identifier */ + (void) saveCalendarIdentifier:(NSString *) identifier andName: (NSString *) name { if (identifier != nil) { NSString * confFileName = [self pathForFile:kCurrentCalendarFileName]; NSMutableDictionary *confCalendar = [NSMutableDictionary dictionaryWithContentsOfFile:confFileName]; if (confCalendar == nil) { confCalendar = [NSMutableDictionary dictionaryWithCapacity:1]; } [confCalendar setObject:identifier forKey:name]; [confCalendar writeToFile:confFileName atomically:YES]; } } + (EKCalendar *)getCalendarWithName:(NSString *)name inEventStore:(EKEventStore *)eventStore withLocalSource: (EKSource *)localSource forceCreation:(BOOL) force { EKCalendar *myCalendar; NSString *identifier = [self identifierForCalendarName:name]; if (force || identifier == nil) { NSLog(@"create new calendar"); if (class_getClassMethod([EKCalendar class], @selector(calendarForEntityType:eventStore:)) != nil) { // iOS 6.0 and later myCalendar = [EKCalendar calendarForEntityType:EKEntityTypeEvent eventStore:eventStore]; } else { myCalendar = [EKCalendar calendarWithEventStore:eventStore]; } myCalendar.title = name; myCalendar.source = localSource; NSError *error = nil; BOOL result = [eventStore saveCalendar:myCalendar commit:YES error:&error]; if (result) { NSLog(@"Saved calendar %@ to event store. %@",myCalendar,eventStore); } else { NSLog(@"Error saving calendar: %@.", error); } [self saveCalendarIdentifier:myCalendar.calendarIdentifier andName:name]; } // You can also configure properties like the calendar color etc. The important part is to store the identifier for later use.
    On the other hand, if you already have the identifier, you can just fetch the calendar: else { myCalendar = [eventStore calendarWithIdentifier:identifier]; NSLog(@"fetch an old-one = %@",myCalendar); } return myCalendar; } + (EKCalendar *)addEventForCalendarWithName: (NSString *) name fromDate:(NSDate *)fromDate toDate: (NSDate *) toDate inEventStore:(EKEventStore *)eventStore withDelegate: (id<CalendarManagerDelegate>) delegate { // iOS 5.0 and later EKCalendar *myCalendar; EKSource *localSource = nil; for (EKSource *source in eventStore.sources) { if (source.sourceType == EKSourceTypeLocal) { localSource = source; break; } } @synchronized(self) { myCalendar = [self getCalendarWithName:name inEventStore:eventStore withLocalSource:localSource forceCreation:NO]; if (myCalendar == nil) myCalendar = [self getCalendarWithName:name inEventStore:eventStore withLocalSource:localSource forceCreation:YES]; NSLog(@"End synchronized block %@",myCalendar); } EKEvent *event = [self generateEventForCalendar:myCalendar fromDate:fromDate toDate:toDate inEventStore:eventStore withDelegate:delegate]; [eventStore saveEvent:event span:EKSpanThisEvent error:nil]; return myCalendar; } + (EKEvent *) generateEventForCalendar: (EKCalendar *) calendar fromDate:(NSDate *)fromDate toDate: (NSDate *) toDate inEventStore:(EKEventStore *) eventStore withDelegate:(id<CalendarManagerDelegate>) delegate { EKEvent *event = [EKEvent eventWithEventStore:eventStore]; event.startDate=fromDate; event.endDate=toDate; [delegate calendarManagerDidCreateEvent:event]; [event setCalendar:calendar]; // look for the event in the calendar; if an identical one is found, don't insert it NSPredicate *predicate = [eventStore predicateForEventsWithStartDate:fromDate endDate:toDate calendars:[NSArray arrayWithObject:calendar]]; NSArray *matchEvents = [eventStore eventsMatchingPredicate:predicate]; if ([matchEvents count] > 0) { // some are already present; check whether one of them is the event we want to insert BOOL found = NO; for (EKEvent *fetchEvent in matchEvents) { if ([fetchEvent.title isEqualToString:event.title] && [fetchEvent.notes isEqualToString:event.notes]) { found = YES; break; } } if (found) { // it already exists, so don't insert it NSLog(@"OH NOOOOOO!!"); event = nil; } } return event; } #pragma mark - Public Methods + (BOOL) addEventForCalendarWithName:(NSString *) name fromDate:(NSDate *)fromDate toDate: (NSDate *) toDate withDelegate:(id<CalendarManagerDelegate>) delegate { BOOL retVal = YES; EKEventStore *eventStore=[[EKEventStore alloc] init]; if ([delegate conformsToProtocol:@protocol(CalendarManagerDelegate)]) { [self requestToEventStore:eventStore delegate:delegate fromDate:fromDate toDate: toDate name:name]; } else { retVal = NO; } return retVal; } + (BOOL) addEventsForCalendarWithName:(NSString *) name fromDate:(NSDate *)fromDate toDate: (NSDate *) toDate withDelegate:(id<CalendarManagerDelegate>) delegate { BOOL retVal = YES; NSDate *dateCursor = fromDate; EKEventStore *eventStore=[[EKEventStore alloc] init]; if ([delegate conformsToProtocol:@protocol(CalendarManagerDelegate)]) { while (retVal && ([dateCursor compare:toDate] == NSOrderedAscending)) { NSDate *finish = [dateCursor dateByAddingTimeInterval:oneDay]; [self requestToEventStore:eventStore delegate:delegate fromDate: dateCursor toDate: finish name:name]; dateCursor = [dateCursor dateByAddingTimeInterval:oneDay]; } } else { retVal = NO; } return retVal; } @end In practice, on my iPhone I get the log: fetch an old-one = (null) 19/12/2012 11:33:09.520
    AppCampeggioSingolo [730:8b1b] create new calendar 19/12/2012 11:33:09.558 AppCampeggioSingolo [730:8b1b] Saved calendar EKCalendar every time I add an event, but when I then look in the iCal calendar I cannot find the event that was added. On a friend's iPhone, however, everything works correctly. I suspect the problem does not stem from the code, but I just don't understand what it could be. I searched all day yesterday and part of today on Google but haven't found anything yet. Any help will be greatly appreciated. EDIT: I forgot the call, which is [CalendarManager addEventForCalendarWithName: @"myCalendar" fromDate:fromDate toDate: toDate withDelegate:self]; In the delegate method I simply set the title and notes of the event like this: - (void) calendarManagerDidCreateEvent:(EKEvent *) event { event.title = @"the title"; event.notes = @"some notes"; }
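
    One thing worth checking, offered as a hedged debugging sketch rather than anything from the original post: the code passes error:nil to saveEvent:span:error:, so a failed save is silent. Capturing the error makes any failure visible:

        NSError *saveError = nil;
        BOOL saved = [eventStore saveEvent:event span:EKSpanThisEvent error:&saveError];
        if (!saved) {
            // On the failing device this log would show why the event never appears in iCal
            // (note that generateEventForCalendar: can also return nil for duplicates)
            NSLog(@"Failed to save event: %@", saveError);
        }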

    Read the article

  • Fitting an Image to Screen on Rotation iPhone / iPad ?

    - by user356937
    I have been playing around with one of the iPhone examples from Apple's web site (ScrollViewSuite). I am trying to tweak it a bit so that when I rotate the iPad the image fits into the screen in landscape mode. I have been successful in getting the image to rotate, but the image is taller than the landscape screen, so the bottom extends below the screen. I would like the image to scale to the height of the landscape screen. I have been playing around with various autoresizingMask attributes without success. The imageView is called "zoomView"; this is the actual image, which loads into a scrollView called imageScrollView. I am trying to make the rotated screen look like this... olsonvox.com/photos/correct.png However, this is what my screen looks like: olsonvox.com/photos/incorrect.png I would really appreciate some advice or guidance. Below is the RootViewController.m for the project. Blade #import "RootViewController.h" #define ZOOM_VIEW_TAG 100 #define ZOOM_STEP 1.5 #define THUMB_HEIGHT 150 #define THUMB_V_PADDING 25 #define THUMB_H_PADDING 25 #define CREDIT_LABEL_HEIGHT 25 #define AUTOSCROLL_THRESHOLD 30 @interface RootViewController (ViewHandlingMethods) - (void)toggleThumbView; - (void)pickImageNamed:(NSString *)name; - (NSArray *)imageNames; - (void)createThumbScrollViewIfNecessary; - (void)createSlideUpViewIfNecessary; @end @interface RootViewController (AutoscrollingMethods) - (void)maybeAutoscrollForThumb:(ThumbImageView *)thumb; - (void)autoscrollTimerFired:(NSTimer *)timer; - (void)legalizeAutoscrollDistance; - (float)autoscrollDistanceForProximityToEdge:(float)proximity; @end @interface RootViewController (UtilityMethods) - (CGRect)zoomRectForScale:(float)scale withCenter:(CGPoint)center; @end @implementation RootViewController - (void)loadView { [super loadView]; imageScrollView = [[UIScrollView alloc] initWithFrame:[[self view]bounds]]; // this code makes the image resize to the width and height properly. imageScrollView.autoresizingMask = UIViewAutoresizingFlexibleHeight | UIViewAutoresizingFlexibleLeftMargin | UIViewAutoresizingFlexibleRightMargin | UIViewAutoresizingFlexibleBottomMargin; // TRY SETTING CENTER HERE SOMEHOW.... [imageScrollView setBackgroundColor:[UIColor blackColor]]; [imageScrollView setDelegate:self]; [imageScrollView setBouncesZoom:YES]; [[self view] addSubview:imageScrollView]; [self toggleThumbView]; // initializes with the first image.
    [self pickImageNamed:@"lookbook1"]; } - (void)dealloc { [imageScrollView release]; [slideUpView release]; [thumbScrollView release]; [super dealloc]; } #pragma mark UIScrollViewDelegate methods - (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView { UIView *view = nil; if (scrollView == imageScrollView) { view = [imageScrollView viewWithTag:ZOOM_VIEW_TAG]; } return view; } /************************************** NOTE **************************************/ /* The following delegate method works around a known bug in zoomToRect:animated: */ /* In the next release after 3.0 this workaround will no longer be necessary */ /**********************************************************************************/ - (void)scrollViewDidEndZooming:(UIScrollView *)scrollView withView:(UIView *)view atScale:(float)scale { [scrollView setZoomScale:scale+0.01 animated:NO]; [scrollView setZoomScale:scale animated:NO]; } #pragma mark TapDetectingImageViewDelegate methods - (void)tapDetectingImageView:(TapDetectingImageView *)view gotSingleTapAtPoint:(CGPoint)tapPoint { // Single tap shows or hides drawer of thumbnails. [self toggleThumbView]; } - (void)tapDetectingImageView:(TapDetectingImageView *)view gotDoubleTapAtPoint:(CGPoint)tapPoint { // double tap zooms in float newScale = [imageScrollView zoomScale] * ZOOM_STEP; CGRect zoomRect = [self zoomRectForScale:newScale withCenter:tapPoint]; [imageScrollView zoomToRect:zoomRect animated:YES]; } - (void)tapDetectingImageView:(TapDetectingImageView *)view gotTwoFingerTapAtPoint:(CGPoint)tapPoint { // two-finger tap zooms out float newScale = [imageScrollView zoomScale] / ZOOM_STEP; CGRect zoomRect = [self zoomRectForScale:newScale withCenter:tapPoint]; [imageScrollView zoomToRect:zoomRect animated:YES]; } #pragma mark ThumbImageViewDelegate methods - (void)thumbImageViewWasTapped:(ThumbImageView *)tiv { [self pickImageNamed:[tiv imageName]]; [self toggleThumbView]; } - (void)thumbImageViewStartedTracking:(ThumbImageView *)tiv { [thumbScrollView bringSubviewToFront:tiv]; } // CONTROLS DRAGGING AND DROPPING THUMBNAILS... - (void)thumbImageViewMoved:(ThumbImageView *)draggingThumb { // check if we've moved close enough to an edge to autoscroll, or far enough away to stop autoscrolling [self maybeAutoscrollForThumb:draggingThumb]; /* The rest of this method handles the reordering of thumbnails in the thumbScrollView. See */ /* ThumbImageView.h and ThumbImageView.m for more information about how this works. */ // we'll reorder only if the thumb is overlapping the scroll view if (CGRectIntersectsRect([draggingThumb frame], [thumbScrollView bounds])) { BOOL draggingRight = [draggingThumb frame].origin.x > [draggingThumb home].origin.x ? YES : NO; /* we're going to shift over all the thumbs who live between the home of the moving thumb */ /* and the current touch location. A thumb counts as living in this area if the midpoint */ /* of its home is contained in the area. */ NSMutableArray *thumbsToShift = [[NSMutableArray alloc] init]; // get the touch location in the coordinate system of the scroll view CGPoint touchLocation = [draggingThumb convertPoint:[draggingThumb touchLocation] toView:thumbScrollView]; // calculate minimum and maximum boundaries of the affected area float minX = draggingRight ? CGRectGetMaxX([draggingThumb home]) : touchLocation.x; float maxX = draggingRight ?
    touchLocation.x : CGRectGetMinX([draggingThumb home]); // iterate through thumbnails and see which ones need to move over for (ThumbImageView *thumb in [thumbScrollView subviews]) { // skip the thumb being dragged if (thumb == draggingThumb) continue; // skip non-thumb subviews of the scroll view (such as the scroll indicators) if (! [thumb isMemberOfClass:[ThumbImageView class]]) continue; float thumbMidpoint = CGRectGetMidX([thumb home]); if (thumbMidpoint >= minX && thumbMidpoint <= maxX) { [thumbsToShift addObject:thumb]; } } // shift over the other thumbs to make room for the dragging thumb. (if we're dragging right, they shift to the left) float otherThumbShift = ([draggingThumb home].size.width + THUMB_H_PADDING) * (draggingRight ? -1 : 1); // as we shift over the other thumbs, we'll calculate how much the dragging thumb's home is going to move float draggingThumbShift = 0.0; // send each of the shifting thumbs to its new home for (ThumbImageView *otherThumb in thumbsToShift) { CGRect home = [otherThumb home]; home.origin.x += otherThumbShift; [otherThumb setHome:home]; [otherThumb goHome]; draggingThumbShift += ([otherThumb frame].size.width + THUMB_H_PADDING) * (draggingRight ? 1 : -1); } // change the home of the dragging thumb, but don't send it there because it's still being dragged CGRect home = [draggingThumb home]; home.origin.x += draggingThumbShift; [draggingThumb setHome:home]; } } - (void)thumbImageViewStoppedTracking:(ThumbImageView *)tiv { // if the user lets go of the thumb image view, stop autoscrolling [autoscrollTimer invalidate]; autoscrollTimer = nil; } #pragma mark Autoscrolling methods - (void)maybeAutoscrollForThumb:(ThumbImageView *)thumb { autoscrollDistance = 0; // only autoscroll if the thumb is overlapping the thumbScrollView if (CGRectIntersectsRect([thumb frame], [thumbScrollView bounds])) { CGPoint touchLocation = [thumb convertPoint:[thumb touchLocation] toView:thumbScrollView]; float distanceFromLeftEdge = touchLocation.x - CGRectGetMinX([thumbScrollView bounds]); float distanceFromRightEdge = CGRectGetMaxX([thumbScrollView bounds]) - touchLocation.x; if (distanceFromLeftEdge < AUTOSCROLL_THRESHOLD) { autoscrollDistance = [self autoscrollDistanceForProximityToEdge:distanceFromLeftEdge] * -1; // if scrolling left, distance is negative } else if (distanceFromRightEdge < AUTOSCROLL_THRESHOLD) { autoscrollDistance = [self autoscrollDistanceForProximityToEdge:distanceFromRightEdge]; } } // if no autoscrolling, stop and clear timer if (autoscrollDistance == 0) { [autoscrollTimer invalidate]; autoscrollTimer = nil; } // otherwise create and start timer (if we don't already have a timer going) else if (autoscrollTimer == nil) { autoscrollTimer = [NSTimer scheduledTimerWithTimeInterval:(1.0 / 60.0) target:self selector:@selector(autoscrollTimerFired:) userInfo:thumb repeats:YES]; } } - (float)autoscrollDistanceForProximityToEdge:(float)proximity { // the scroll distance grows as the proximity to the edge decreases, so that moving the thumb // further over results in faster scrolling.
    return ceilf((AUTOSCROLL_THRESHOLD - proximity) / 5.0); } - (void)legalizeAutoscrollDistance { // makes sure the autoscroll distance won't result in scrolling past the content of the scroll view float minimumLegalDistance = [thumbScrollView contentOffset].x * -1; float maximumLegalDistance = [thumbScrollView contentSize].width - ([thumbScrollView frame].size.width + [thumbScrollView contentOffset].x); autoscrollDistance = MAX(autoscrollDistance, minimumLegalDistance); autoscrollDistance = MIN(autoscrollDistance, maximumLegalDistance); } - (void)autoscrollTimerFired:(NSTimer*)timer { [self legalizeAutoscrollDistance]; // autoscroll by changing content offset CGPoint contentOffset = [thumbScrollView contentOffset]; contentOffset.x += autoscrollDistance; [thumbScrollView setContentOffset:contentOffset]; // adjust thumb position so it appears to stay still ThumbImageView *thumb = (ThumbImageView *)[timer userInfo]; [thumb moveByOffset:CGPointMake(autoscrollDistance, 0)]; } #pragma mark View handling methods - (void)toggleThumbView { [self createSlideUpViewIfNecessary]; // no-op if slideUpView has already been created CGRect frame = [slideUpView frame]; if (thumbViewShowing) { frame.origin.y = 0; } else { frame.origin.y = -225; } [UIView beginAnimations:nil context:nil]; [UIView setAnimationDuration:0.3]; [slideUpView setFrame:frame]; [UIView commitAnimations]; thumbViewShowing = !thumbViewShowing; } - (void)pickImageNamed:(NSString *)name { // first remove previous image view, if any [[imageScrollView viewWithTag:ZOOM_VIEW_TAG] removeFromSuperview]; UIImage *image = [UIImage imageNamed:[NSString stringWithFormat:@"%@.jpg", name]]; TapDetectingImageView *zoomView = [[TapDetectingImageView alloc] initWithImage:image]; zoomView.autoresizingMask = UIViewAutoresizingFlexibleWidth ; [zoomView setDelegate:self]; [zoomView setTag:ZOOM_VIEW_TAG]; [imageScrollView addSubview:zoomView]; [imageScrollView setContentSize:[zoomView frame].size]; [zoomView release]; // choose minimum scale so image width fits screen float minScale = [imageScrollView frame].size.width / [zoomView frame].size.width; [imageScrollView setMinimumZoomScale:minScale]; [imageScrollView setZoomScale:minScale]; [imageScrollView setContentOffset:CGPointZero]; } - (NSArray *)imageNames { // the filenames are stored in a plist in the app bundle, so create array by reading this plist NSString *path = [[NSBundle mainBundle] pathForResource:@"Images" ofType:@"plist"]; NSData *plistData = [NSData dataWithContentsOfFile:path]; NSString *error; NSPropertyListFormat format; NSArray *imageNames = [NSPropertyListSerialization propertyListFromData:plistData mutabilityOption:NSPropertyListImmutable format:&format errorDescription:&error]; if (!imageNames) { NSLog(@"Failed to read image names.
Error: %@", error); [error release]; } return imageNames; } - (void)createSlideUpViewIfNecessary { if (!slideUpView) { [self createThumbScrollViewIfNecessary]; CGRect bounds = [[self view] bounds]; float thumbHeight = [thumbScrollView frame].size.height; float labelHeight = CREDIT_LABEL_HEIGHT; // create label giving credit for images UILabel *creditLabel = [[UILabel alloc] initWithFrame:CGRectMake(0, thumbHeight, bounds.size.width, labelHeight)]; [creditLabel setBackgroundColor:[UIColor clearColor]]; [creditLabel setTextColor:[UIColor whiteColor]]; // [creditLabel setFont:[UIFont fontWithName:@"Helvetica" size:16]]; // [creditLabel setText:@"SAMPLE TEXT"]; [creditLabel setTextAlignment:UITextAlignmentCenter]; // create container view that will hold scroll view and label CGRect frame = CGRectMake(0.0, -225.00, bounds.size.width+256, thumbHeight + labelHeight); slideUpView.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleTopMargin; slideUpView = [[UIView alloc] initWithFrame:frame]; [slideUpView setBackgroundColor:[UIColor blackColor]]; [slideUpView setOpaque:NO]; [slideUpView setAlpha:.75]; [[self view] addSubview:slideUpView]; // add subviews to container view [slideUpView addSubview:thumbScrollView]; [slideUpView addSubview:creditLabel]; [creditLabel release]; } } - (void)createThumbScrollViewIfNecessary { if (!thumbScrollView) { float scrollViewHeight = THUMB_HEIGHT + THUMB_V_PADDING; float scrollViewWidth = [[self view] bounds].size.width; thumbScrollView = [[UIScrollView alloc] initWithFrame:CGRectMake(0, 0, scrollViewWidth, scrollViewHeight)]; [thumbScrollView setCanCancelContentTouches:NO]; [thumbScrollView setClipsToBounds:NO]; // now place all the thumb views as subviews of the scroll view // and in the course of doing so calculate the content width float xPosition = THUMB_H_PADDING; for (NSString *name in [self imageNames]) { UIImage *thumbImage = [UIImage imageNamed:[NSString stringWithFormat:@"%@_thumb.jpg", name]]; if (thumbImage) { ThumbImageView *thumbView = [[ThumbImageView alloc] initWithImage:thumbImage]; [thumbView setDelegate:self]; [thumbView setImageName:name]; CGRect frame = [thumbView frame]; frame.origin.y = THUMB_V_PADDING; frame.origin.x = xPosition; [thumbView setFrame:frame]; [thumbView setHome:frame]; [thumbScrollView addSubview:thumbView]; [thumbView release]; xPosition += (frame.size.width + THUMB_H_PADDING); } } [thumbScrollView setContentSize:CGSizeMake(xPosition, scrollViewHeight)]; } } #pragma mark Utility methods - (CGRect)zoomRectForScale:(float)scale withCenter:(CGPoint)center { CGRect zoomRect; // the zoom rect is in the content view's coordinates. // At a zoom scale of 1.0, it would be the size of the imageScrollView's bounds. // As the zoom scale decreases, so more content is visible, the size of the rect grows. zoomRect.size.height = [imageScrollView frame].size.height / scale; zoomRect.size.width = [imageScrollView frame].size.width / scale; // choose an origin so as to get the right center. zoomRect.origin.x = center.x - (zoomRect.size.width / 2.0); zoomRect.origin.y = center.y - (zoomRect.size.height / 2.0); return zoomRect; } #pragma mark - #pragma mark Rotation support // Ensure that the view controller supports rotation and that the split view can therefore show in both portrait and landscape. - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { return YES; } @end

    Read the article

  • ASP.NET MVC 2 Released

    I'm happy to announce that the final release of ASP.NET MVC 2 is now available for VS 2008/Visual Web Developer 2008 Express with ASP.NET 3.5. You can download and install it from the following locations: Download ASP.NET MVC 2 using the Microsoft Web Platform Installer Download ASP.NET MVC 2 from the Download Center The final release of VS 2010 and Visual Web Developer 2010 will have ASP.NET MVC 2 built in, so you won't need an additional install in order to use ASP.NET...

    Read the article

  • Silverlight 4 Tools for VS 2010 and WCF RIA Services Released

    The final release of the Silverlight 4 Tools for Visual Studio 2010 and WCF RIA Services is now available for download. Download and Install If you already have Visual Studio 2010 installed (or the free Visual Web Developer 2010 Express), then you can install both the Silverlight 4 Tooling Support as well as WCF RIA Services support by downloading and running this setup package (note: please make sure to uninstall the preview release of the Silverlight 4 Tools for VS 2010 if you have...

    Read the article

  • Oracle Database Smart Flash Cache: Only on Oracle Linux and Oracle Solaris

    - by sergio.leunissen
    Oracle Database Smart Flash Cache is a feature that was first introduced with Oracle Database 11g Release 2. Only available on Oracle Linux and Oracle Solaris, this feature increases the size of the database buffer cache without having to add RAM to the system. In effect, it acts as a second level cache on flash memory and will especially benefit read-intensive database applications. The Oracle Database Smart Flash Cache white paper concludes: Available at no additional cost, Database Smart Flash Cache on Oracle Solaris and Oracle Linux has the potential to offer considerable benefit to users of Oracle Database 11g Release 2 with disk-bound read-mostly or read-only workloads, through the simple addition of flash storage such as the Sun Storage F5100 Flash Array or the Sun Flash Accelerator F20 PCIe Card. Read the white paper.

    Read the article

  • Windows Azure: Announcing Support for Windows Server 2012 R2 + Some Nice Price Cuts

    - by ScottGu
    Today we released some great updates to Windows Azure: Virtual Machines: Support for Windows Server 2012 R2 Cloud Services: Support for Windows Server 2012 R2 and .NET 4.5.1 Windows Azure Pack: Use Windows Azure features on-premises using Windows Server 2012 R2 Price Cuts: Up to 22% Price Reduction on Memory-Intensive Instances Below are more details about each of the improvements: Virtual Machines: Support for Windows Server 2012 R2 This morning we announced the release of Windows Server 2012 R2 – which is a fantastic update to Windows Server and includes a ton of great enhancements. This morning we are also excited to announce that the general availability image of Windows Server 2012 R2 is now supported on Windows Azure. Windows Azure is the first cloud provider to offer the final release of Windows Server 2012 R2, and it is incredibly easy to launch your own Windows Server 2012 R2 instance with it. To create a new Windows Server 2012 R2 instance simply choose New->Compute->Virtual Machine within the Windows Azure Management Portal. You can select the “Windows Server 2012 R2” image and create a new Virtual Machine using the “Quick Create” option: Or alternatively click the “From Gallery” option if you want to customize even more configuration options (endpoints, remote powershell, availability set, etc): Creating and instantiating a new Virtual Machine on Windows Azure is very fast. In fact, the Windows Server 2012 R2 image now deploys and runs 30% faster than previous versions of Windows Server. Once the VM is deployed you can drill into it to track its health and manage its settings: Clicking the “Connect” button allows you to remote desktop into the VM – at which point you can customize and manage it as a full administrator however you want: If you haven’t tried Windows Server 2012 R2 yet – give it a try with Windows Azure. There is no easier way to get an instance of it up and running! Cloud Services: Support for using Windows Server 2012 R2 with Web and Worker Roles Today’s Windows Azure release also allows you to use Windows Server 2012 R2 and .NET 4.5.1 within Web and Worker Roles within Cloud Service based applications. Enabling this is easy. You can configure an existing Cloud Service application to use Windows Server 2012 R2 by updating your Cloud Service Configuration File (.cscfg) to use the new “OS Family 4” setting: Or alternatively you can use the Windows Azure Management Portal to update cloud services that are already deployed on Windows Azure. Simply choose the configure tab on them and select Windows Server 2012 R2 in the Operating System Family dropdown: The approaches above enable you to immediately take advantage of Windows Server 2012 R2 and .NET 4.5.1 and all the great features they provide. Windows Azure Pack: Use Windows Azure features on Windows Server 2012 R2 Today we also made generally available the Windows Azure Pack, which is a free download that enables you to run Windows Azure Technology within your own datacenter, an on-premises private cloud environment, or with one of our service provider/hosting partners who run Windows Server. Windows Azure Pack enables you to use a management portal that has the exact same UI as the Windows Azure Management Portal, and within which you can create and manage Virtual Machines, Web Sites, and Service Bus – all of which can run on Windows Server and System Center.
The services provided with the Windows Azure Pack are consistent with the services offered within our Windows Azure public cloud offering.  This consistency enables organizations and developers to build applications and solutions that can run in any hosting environment – and which use the same development and management approach.  The end result is an offering with incredible flexibility. You can learn more about Windows Azure Pack and download/deploy it today here. Price Cuts: Up to 22% Reduction on Memory Intensive Instances Today we are also reducing prices by up to 22% on our memory-intensive VM instances (specifically our A5, A6, and A7 instances).  These price reductions apply to both Windows and Linux VM instances, as well as for Cloud Service based applications: These price reductions will take effect in November, and will enable you to run applications that demand larger memory (such as SharePoint, Databases, in-memory analytics, etc) even more cost effectively. Summary Today’s release enables you to start using Windows Server 2012 R2 within Windows Azure immediately, and take advantage of our Cloud OS vision both within our datacenters – and using the Windows Azure Pack within both your existing datacenters and those of our partners. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
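
    For reference, the “OS Family 4” setting mentioned above lives on the ServiceConfiguration root element of the .cscfg file. A minimal hedged sketch (the osFamily/osVersion attributes are standard; the service and role names here are placeholders):

        <ServiceConfiguration serviceName="MyCloudService" osFamily="4" osVersion="*"
            xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
          <Role name="WebRole1">
            <Instances count="2" />
          </Role>
        </ServiceConfiguration>

    Setting osFamily="4" selects Windows Server 2012 R2, while osVersion="*" keeps the role on the latest OS release automatically.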

    Read the article

  • Bug Triage

    In this blog post brain dump, I'll attempt to describe the process my team tries to follow when dealing with new bug reports (specifically, code defect reports). This is not official Microsoft policy, just the way we do things... if you do things differently and want to share, you can do so at the bottom in the comments (or on your blog).

    Feature Triage Team
    A subset of the feature crew, the triage team (which has representatives from the PM, Dev and QA disciplines), looks at all unassigned bugs at regular intervals. This can be weekly or daily (or another frequency) depending on which part of the product cycle we are in and what the untriaged bug load looks like. They discuss each bug, considering the evidence, and decide whether the bug goes from Not Yet Assigned to Assigned (plus the name of the DEV to fix it) or whether it goes from Active to Resolved (which means it gets assigned back to the requestor for closure, or for further debate if they were not present at the triage meeting). Close to critical milestones, the feature triage team needs to further justify bugs they take to additional higher-level triage teams.

    Bug Opened = Not Yet Assigned
    Someone (typically an SDET from the QA team) creates the bug item (e.g. in TFS), ensuring they populate all the relevant fields including: Title, Description, Repro Steps (including the Actual Result at the end of the steps), attachments of code and/or screenshots, Build number that they observed the issue in, regression details if applicable, how it was found, whether a test case exists or needs to be created, etc. They also indicate their opinion on the Priority and Severity. The bug status is left as Not Yet Assigned.

    "Issue" versus "Fix for issue"
    The solution to some bugs is easy to determine, e.g. "bug: the column name is misspelled". Obviously the fix is to correct the spelling – still, the triage team should be explicit and enter the correct spelling in the bug's Description. Note that a bad bug name here would be "bug: fix the spelling of the column" (it describes the solution, rather than the problem). Other solutions are trickier to establish, e.g. "bug: the column header is not accessible (can only be clicked on with the mouse, not reached via keyboard)". What is the correct solution here? The last thing to do is leave this undetermined and just assign it to a developer. The solution has to be entered in the description. Behind this type of bug usually hides a spec defect or a new feature request. The person opening the bug should focus on describing the issue, rather than the solution. They indicate what the fix is in their opinion by stating the Expected Result (immediately after stating the Actual Result). If they have a complex suggested solution, that should be split out in a separate part, but the triage team has the final say before assigning it. If the solution is lengthy/complicated to describe, the bug can be assigned to the PM. Note: the strict interpretation suggests that any bug with no clear, obvious solution is always a hole in the spec and should always go to the PM. This also ensures the spec gets updated.

    Not Yet Assigned - Not Yet Assigned (on someone else's plate)
    If the bug is observed in our feature, but the cause is actually another team, we change the Area Path (which is the way we identify teams in TFS) and leave it as Not Yet Assigned. The triage team may add more comments as appropriate, including potentially changing the repro steps. In some cases, we may even resolve the bug in our area path and open a new bug in the area path of the other team. Even though there is no action on a dev on the team, the bug still needs to be tracked. One way of doing this is to implement some notification system that informs the team when the tracked bug changes status; another way is to occasionally run a global query (against all area paths) for bugs that have been opened by a member of the team and follow up with the current owners for stale bugs.

    Not Yet Assigned - Resolved
    This state transition can only be made by the Feature Triage Team.
    0. Sometimes the bug description is not clear, and in that case it gets Resolved as More Information Needed, so the original requestor can provide it.
    After understanding what the bug item is about, the first decision is to determine whether it needs to go to a dev.
    1. If it is a known bug, it gets resolved as "Duplicate" and linked to the existing bug.
    2. If it is "By Design" it gets resolved as such, indicating that the triage team does not think this is a bug.
    3. If the bug does not repro on the latest bits, it is resolved as "No Repro".
    4. The most painful: if it is decided that we cannot fix it for this release, it gets resolved as "Postponed" or "Won't Fix". The former is typically due to resources and time constraints, while the latter is due to deciding that it is not important enough to consume our resources in any release (yes, not all bugs must be fixed!). For both cases, there are other factors that contribute to the decision, such as: existence of a reasonable workaround, frequency we expect users to encounter the issue, dependencies on another team to offer a solution, whether it breaks a core scenario, whether it prohibits customer feedback on a major feature, whether it is a regression from a previous release, impact of the fix on other partner teams (e.g. User Education, User Experience, Localization/Globalization), whether this is the right fix, whether the fix impacts performance goals, and last but not least, severity of the bug (e.g. loss of customer data, security threat, crash, hang). The bar for fixing a bug goes up as the release date approaches. The triage team becomes hardnosed about which bugs to take, while the developers are busy resolving assigned bugs; thus everyone drives for Zero Bug Bounce (ZBB). ZBB is when you have 0 active bugs older than 48 hours.

    Not Yet Assigned - Assigned
    If the bug is something we decide to fix in this release and the solution is known, then it is assigned to a DEV. This is either the developer that will do the work, or a Lead that can further assign it to one of their developer team based on a load balancing algorithm of their choosing. Sometimes, the triage team needs the dev to do some investigation work before deciding whether to take the fix; similarly, the checkin for the fix may be gated on code review by the triage team. In these cases, these instructions are provided in the comments section of the bug, and when the developer is done they notify the triage team for the final decision. Additionally, a Priority and Severity (from 0 to 4) has to be entered, e.g. a P0 means "drop anything you are doing and fix this now" whereas a P4 is something you get to after all P0, P1, P2 and P3 bugs are fixed. From a testing perspective, if the bug was found through ad-hoc testing or by an external team, a decision is made whether test cases should be added to avoid future regressions. This is communicated to the QA team.

    Assigned - Resolved
    When the developer receives the bug (they should be checking daily for new bugs on their plate, looking at bugs in order of priority and from older to newer), they can send it back to triage if the information is not clear. Otherwise, they investigate the bug, setting the Sub Status to "Investigating"; if they cannot make progress, they set the Sub Status to "Blocked" and discuss this with triage or whoever else can help them get unblocked. Once they are unblocked, they set the Sub Status to "Working on Solution"; once they are code complete they send a code review request, setting the Sub Status to "Fix Available". After the iterative code review process is over and everyone is happy with the fix, the developer checks it in and changes the state of the bug from Active (and Assigned to them) to Resolved (and Assigned to someone else). The developer needs to ensure that when the status is changed to Resolved, it is assigned to a QA person. For example, maybe the PM opened the bug, but it should be a QA person that will verify the fix - the developer needs to manually change the assignee in that case. Typically the QA person will send an email to the original requestor notifying them that the fix is verified.

    Resolved - ??
    In all cases above, note that the final state was Resolved. What happens after that? The final step should be Closed. The bug is closed once the QA person verifying the fix is happy with it. If the person is not happy, then they change the state from Resolved to Active, thus sending it back to the developer. If the developer and QA person cannot reach agreement, then triage can be brought into it. An easy way to do that is to change the status back to Not Yet Assigned with appropriate comments so the triage team can re-review. It is important to note that only QA can close a bug. That means that if the opener of the bug was a PM, when the bug gets resolved by the dev it may land on the PM's plate, and after a quick review the PM would re-assign it to an SDET, which is the only role that can close bugs. One exception to this is if the person that filed the bug is external: in that case, we leave it Resolved and assigned to them and also send them a notification that they need to verify the fix. Another exception is if specialized developer knowledge is needed for verifying the bug fix (e.g. it was a refactoring suggestion bug typically not observable by the user), in which case it is fine to have a developer verify the fix, and ideally a different developer to the one that opened the bug.

    Other links on bug triage
    A quick search reveals that others have talked about this subject, e.g. here, here, here, here and here.

    Your take?
    If you have other best practices your team uses to deal with incoming bug reports, feel free to share in the comments below or on your blog. Comments about this post welcome at the original blog.

    Read the article

  • Some VS 2010 RC Updates (including patches for Intellisense and Web Designer fixes)

    [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu] We are continuing to make progress on shipping Visual Studio 2010. I'd like to say a big thank you to everyone who has downloaded and tried out the VS 2010 Release Candidate, and especially to those who have sent us feedback or reported issues with it. This data has been invaluable in helping us find and fix remaining bugs before we ship the final release. Last...

    Read the article

  • aptitude update gives 404's for intrepid

    - by dotjoe
    I'm having issues trying to update my packages. I haven't used this server since last September and now I'm getting 404 errors on all the intrepid repos. How do I fix this? Thanks aptitude update Err http://security.ubuntu.com intrepid-security/main Packages 404 Not Found [IP: 91.189.92.166 80] Err http://security.ubuntu.com intrepid-security/restricted Packages 404 Not Found [IP: 91.189.92.166 80] Err http://security.ubuntu.com intrepid-security/main Sources 404 Not Found [IP: 91.189.92.166 80] Err http://security.ubuntu.com intrepid-security/restricted Sources 404 Not Found [IP: 91.189.92.166 80] Err http://security.ubuntu.com intrepid-security/universe Packages 404 Not Found [IP: 91.189.92.166 80] Err http://security.ubuntu.com intrepid-security/universe Sources 404 Not Found [IP: 91.189.92.166 80] Ign http://us.archive.ubuntu.com intrepid-updates/multiverse Packages Ign http://us.archive.ubuntu.com intrepid-updates/multiverse Sources Err http://us.archive.ubuntu.com intrepid/main Packages 404 Not Found [IP: 91.189.88.31 80] Err http://us.archive.ubuntu.com intrepid/restricted Packages 404 Not Found [IP: 91.189.88.31 80] Err http://us.archive.ubuntu.com intrepid/main Sources 404 Not Found [IP: 91.189.88.31 80] Err http://security.ubuntu.com intrepid-security/multiverse Packages 404 Not Found [IP: 91.189.92.166 80] Err http://us.archive.ubuntu.com intrepid/restricted Sources 404 Not Found [IP: 91.189.88.31 80] Err http://us.archive.ubuntu.com intrepid/universe Packages 404 Not Found [IP: 91.189.88.31 80] Err http://us.archive.ubuntu.com intrepid/universe Sources 404 Not Found [IP: 91.189.88.31 80] Err http://us.archive.ubuntu.com intrepid/multiverse Packages 404 Not Found [IP: 91.189.88.31 80] Err http://us.archive.ubuntu.com intrepid/multiverse Sources 404 Not Found [IP: 91.189.88.31 80] Err http://us.archive.ubuntu.com intrepid-updates/main Packages 404 Not Found [IP: 91.189.88.31 80] Err http://security.ubuntu.com intrepid-security/multiverse Sources 404 Not Found [IP: 91.189.92.166 80] Err http://us.archive.ubuntu.com intrepid-updates/restricted Packages 404 Not Found [IP: 91.189.88.31 80] Err http://us.archive.ubuntu.com intrepid-updates/main Sources 404 Not Found [IP: 91.189.88.31 80] Err http://us.archive.ubuntu.com intrepid-updates/restricted Sources 404 Not Found [IP: 91.189.88.31 80] Err http://us.archive.ubuntu.com intrepid-updates/universe Packages 404 Not Found [IP: 91.189.88.31 80] Err http://us.archive.ubuntu.com intrepid-updates/universe Sources 404 Not Found [IP: 91.189.88.31 80] Err http://us.archive.ubuntu.com intrepid-updates/multiverse Packages 404 Not Found [IP: 91.189.88.31 80] Err http://us.archive.ubuntu.com intrepid-updates/multiverse Sources 404 Not Found [IP: 91.189.88.31 80] Reading package lists... sources.list # # deb cdrom:[Ubuntu-Server 8.10 _Intrepid Ibex_ - Release i386 (20081028.1)]/ intrepid main restricted # deb cdrom:[Ubuntu-Server 8.10 _Intrepid Ibex_ - Release i386 (20081028.1)]/ intrepid main restricted # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to # newer versions of the distribution. deb http://us.archive.ubuntu.com/ubuntu/ intrepid main restricted deb-src http://us.archive.ubuntu.com/ubuntu/ intrepid main restricted ## Major bug fix updates produced after the final release of the ## distribution. deb http://us.archive.ubuntu.com/ubuntu/ intrepid-updates main restricted deb-src http://us.archive.ubuntu.com/ubuntu/ intrepid-updates main restricted ## N.B. 
software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team. Also, please note that software in universe WILL NOT receive any ## review or updates from the Ubuntu security team. deb http://us.archive.ubuntu.com/ubuntu/ intrepid universe deb-src http://us.archive.ubuntu.com/ubuntu/ intrepid universe deb http://us.archive.ubuntu.com/ubuntu/ intrepid-updates universe deb-src http://us.archive.ubuntu.com/ubuntu/ intrepid-updates universe ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team, and may not be under a free licence. Please satisfy yourself as to ## your rights to use the software. Also, please note that software in ## multiverse WILL NOT receive any review or updates from the Ubuntu ## security team. deb http://us.archive.ubuntu.com/ubuntu/ intrepid multiverse deb-src http://us.archive.ubuntu.com/ubuntu/ intrepid multiverse deb http://us.archive.ubuntu.com/ubuntu/ intrepid-updates multiverse deb-src http://us.archive.ubuntu.com/ubuntu/ intrepid-updates multiverse ## Uncomment the following two lines to add software from the 'backports' ## repository. ## N.B. software from this repository may not have been tested as ## extensively as that contained in the main release, although it includes ## newer versions of some applications which may provide useful features. ## Also, please note that software in backports WILL NOT receive any review ## or updates from the Ubuntu security team. # deb http://us.archive.ubuntu.com/ubuntu/ intrepid-backports main restricted universe multiverse # deb-src http://us.archive.ubuntu.com/ubuntu/ intrepid-backports main restricted universe multiverse ## Uncomment the following two lines to add software from Canonical's ## 'partner' repository. This software is not part of Ubuntu, but is ## offered by Canonical and the respective vendors as a service to Ubuntu ## users. # deb http://archive.canonical.com/ubuntu intrepid partner # deb-src http://archive.canonical.com/ubuntu intrepid partner deb http://us.archive.ubuntu.com/ubuntu/ intrepid-security main restricted deb-src http://us.archive.ubuntu.com/ubuntu/ intrepid-security main restricted deb http://us.archive.ubuntu.com/ubuntu/ intrepid-security universe deb-src http://us.archive.ubuntu.com/ubuntu/ intrepid-security universe deb http://us.archive.ubuntu.com/ubuntu/ intrepid-security multiverse deb-src http://us.archive.ubuntu.com/ubuntu/ intrepid-security multiverse
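
    Intrepid (8.10) has reached end of life, so its package indexes were removed from the primary mirrors; that is why every index request now returns 404. The usual fix, sketched here on the assumption that the standard end-of-life archive layout applies, is to point sources.list at old-releases.ubuntu.com instead, e.g.:

        deb http://old-releases.ubuntu.com/ubuntu/ intrepid main restricted universe multiverse
        deb http://old-releases.ubuntu.com/ubuntu/ intrepid-updates main restricted universe multiverse
        deb http://old-releases.ubuntu.com/ubuntu/ intrepid-security main restricted universe multiverse

    Then run aptitude update again; upgrading to a still-supported release is the longer-term fix.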

    Read the article

  • SQL Server and Hyper-V Dynamic Memory Part 2

    - by SQLOS Team
    Part 1 of this series was an introduction and overview of Hyper-V Dynamic Memory. This part looks at SQL Server memory management and how the SQL engine responds to changing OS memory conditions.   Part 2: SQL Server Memory Management As with any Windows process, sqlserver.exe has a virtual address space (VAS) of 4GB on 32-bit and 8TB in 64-bit editions. Pages in its VAS are mapped to pages in physical memory when the memory is committed and referenced for the first time. The collection of VAS pages that have been recently referenced is known as the Working Set. How and when SQL Server allocates virtual memory and grows its working set depends on the memory model it uses. SQL Server supports three basic memory models:   1. Conventional Memory Model   The Conventional model is the default SQL Server memory model and has the following properties: - Dynamic - can grow or shrink its working set in response to load and external (operating system) memory conditions. - OS uses 4K pages – (not to be confused with SQL Server “pages” which are 8K regions of committed memory).- Pageable - Can be paged out to disk by the operating system.   2. Locked Page Model The locked page memory model is set when SQL Server is started with "Lock Pages in Memory" privilege*. It has the following characteristics: - Dynamic - can grow or shrink its working set in the same way as the Conventional model.- OS uses 4K pages - Non-Pageable – When memory is committed it is locked in memory, meaning that it will remain backed by physical memory and will not be paged out by the operating system. A common misconception is to interpret "locked" as non-dynamic. A SQL Server instance using the locked page memory model will grow and shrink (allocate memory and release memory) in response to changing workload and OS memory conditions in the same way as it does with the conventional model.   This is an important consideration when we look at Hyper-V Dynamic Memory – “locked” memory works perfectly well with “dynamic” memory.   * Note in “Denali” (Standard Edition and above), and in SQL 2008 R2 64-bit (Enterprise and above editions) the Lock Pages in Memory privilege is all that is required to set this model. In 2008 R2 64-Bit standard edition it also requires trace flag 845 to be set, in 2008 R2 32-bit editions it requires sp_configure 'awe enabled' 1.   3. Large Page Model The Large page model is set using trace flag 834 and potentially offers a small performance boost for systems that are configured with large pages. It is characterized by: - Static - memory is allocated at startup and does not change. - OS uses large (>2MB) pages - Non-Pageable The large page model is supported with Hyper-V Dynamic Memory (and Hyper-V also supports large pages), but you get no benefit from using Dynamic Memory with this model since SQL Server memory does not grow or shrink. The rest of this article will focus on the locked and conventional SQL Server memory models.   When does SQL Server grow? For “dynamic” configurations (Conventional and Locked memory models), the sqlservr.exe process grows – allocates and commits memory from the OS – in response to a workload. As much memory is allocated as is required to optimally run the query and buffer data for future queries, subject to limitations imposed by:   - SQL Server max server memory setting. If this configuration option is set, the buffer pool is not allowed to grow to more than this value. 
In SQL Server 2008 this value represents single page allocations, and in “Denali” it represents any size page allocations and also managed CLR procedure allocations.   - Memory signals from OS. The operating system sets a signal on memory resource notification objects to indicate whether it has memory available or whether it is low on available memory. If there is only 32MB free for every 4GB of memory a low memory signal is set, which continues until 64MB/4GB is free. If there is 96MB/4GB free the operating system sets a high memory signal. SQL Server only allocates memory when the high memory signal is set.   To summarize, for SQL Server to grow you need three conditions: a workload, max server memory setting higher than the current allocation, high memory signals from the OS.    When does SQL Server shrink caches? SQL Server as a rule does not like to return memory to the OS, but it will shrink its caches in response to memory pressure. Memory pressure can be divided into “internal” and “external”.   - External memory pressure occurs when the operating system is running low on memory and low memory signals are set. The SQL Server Resource Monitor checks for low memory signals approximately every 5 seconds and it will attempt to free memory until the signals stop.   To free memory SQL Server does the following: ·         Frees unused memory. ·         Notifies Memory Manager Clients to release memory o   Caches – Free unreferenced cache objects. o   Buffer pool - Based on oldest access times.   The freed memory is released back to the operating system. This process continues until the low memory resource notifications stop.    - Internal memory pressure occurs when the size of different caches and allocations increase but the SQL Server process needs to keep its total memory within a target value. For example if max server memory is set and certain caches are growing large, it will cause SQL to free memory for re-use internally, but not to release memory back to the OS. If you lower the value of max server memory you will generate internal memory pressure that will cause SQL to release memory back to the OS.    Memory pressure handling has not changed much since SQL 2005 and it was described in detail in a blog post by Slava Oks.   Note that SQL Server Express is an exception to the above behavior. Unlike other editions it does not assume it is the most important process running on the system but tries to be more “desktop” friendly. It will empty its working set after a period of inactivity.   How does SQL Server respond to changing OS memory?    In SQL Server 2005 support for Hot-Add memory was introduced. This feature, available in Enterprise and above editions, allows the server to make use of any extra physical memory that was added after SQL Server started. Being able to add physical memory when the system is running is limited to specialized hardware, but with the Hyper-V Dynamic Memory feature, when new memory is allocated to a guest virtual machine, it looks like hot-add physical memory to the guest. What this means is that thanks to the hot-add memory feature, SQL Server 2005 and higher can dynamically grow if more “physical” memory is granted to a guest VM by Hyper-V dynamic memory.   SQL Server checks OS memory every second and dynamically adjusts its “target” (based on available OS memory and max server memory) accordingly.   In “Denali” Standard Edition will also have sqlserver.exe support for hot-add memory when running virtualized (i.e. 
    How does a SQL Server workload in a guest VM impact Hyper-V Dynamic Memory scheduling?
    When a SQL workload causes the sqlservr.exe process to grow its working set, the Hyper-V memory scheduler will detect memory pressure in the guest VM and add memory to it. SQL Server will then detect the extra memory and grow according to workload demand. In our tests we have seen this feedback process cause a guest VM to grow quickly in response to SQL workload; we are still working on characterizing this ramp-up.

    How does SQL Server respond when Hyper-V removes memory from a guest VM through ballooning?
    If pressure from other VMs causes Hyper-V Dynamic Memory to take memory away from a VM through ballooning (allocating memory with a virtual device driver and returning it to the host OS), Windows Memory Manager will page out unlocked portions of memory and signal low resource notification events. When SQL Server detects these events it will shrink memory until the low memory notifications stop (see the cache shrinking description above).

    This raises another question: can we make SQL Server release memory more readily, and hence behave more "dynamically", without compromising performance? In certain circumstances where the application workload is predictable it may be possible to have a job which varies max server memory according to need, lowering it when the engine is inactive and raising it before a period of activity (a rough T-SQL sketch follows at the end of this post). This would have limited applicability, but it is something we're looking into.

    What memory management changes are there in SQL Server "Denali"?
    In SQL Server "Denali" (aka SQL11) the Memory Manager has been re-written to be more efficient. The main changes are summarized in this post. An important change with respect to Hyper-V Dynamic Memory support is that the max server memory setting now includes any size page allocations and managed CLR procedure allocations, so it represents a closer approximation to total sqlservr.exe memory usage. This makes it easier to calculate a value for max server memory, which becomes important when configuring virtual machines to work well with Hyper-V Dynamic Memory Startup and Maximum RAM settings.

    Another important change is that there is no more AWE or hot-add support for the 32-bit edition. This means that if you're running a 32-bit edition of Denali you're limited to a 4GB address space and will not be able to take advantage of dynamically added OS memory that wasn't present when SQL Server started (though Hyper-V Dynamic Memory is still a supported configuration).

    In part 3 we'll develop some best practices for configuring and using SQL Server with Dynamic Memory.
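    Returning to the idea of a job that varies max server memory: a rough sketch follows, with purely illustrative values and assuming two steps of a hypothetical scheduled SQL Agent job; the schedule and thresholds would have to come from your own workload profile:

        -- Evening step of a hypothetical scheduled job: shrink the cap for the quiet period
        EXEC sp_configure 'max server memory (MB)', 2048;
        RECONFIGURE;

        -- Morning step: raise the cap again ahead of the busy period
        EXEC sp_configure 'max server memory (MB)', 8192;
        RECONFIGURE;

    Lowering max server memory creates internal memory pressure, so SQL Server hands memory back to the OS (and hence to the Hyper-V balancer); raising it lets the engine grow again under load.

    Originally posted at http://blogs.msdn.com/b/sqlosteam/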

    Read the article

  • New Product: Oracle Java ME Embedded 3.2 – Small, Smart, Connected

    - by terrencebarr
    The Internet of Things (IoT) is coming. And, with today's launch of the Oracle Java ME Embedded 3.2 product, Java is going to play an even greater role in it.

    Java in the Internet of Things
    By all accounts, intelligent embedded devices are penetrating the world around us – driving industrial processes, monitoring environmental conditions, providing better health care, analyzing and processing data, and much more. And these devices are becoming increasingly connected, adding another dimension of utility. Welcome to the Internet of Things. As I blogged yesterday, this is a huge opportunity for the Java technology and ecosystem. To enable and utilize these billions of devices effectively you need a programming model, tools, and protocols which provide a feature-rich, consistent, scalable, manageable, and interoperable platform. Java technology is ideally suited to address these technical and business problems, enabling you to eliminate many of the typical challenges in designing embedded solutions. By using Java you can focus on building smarter, more valuable embedded solutions faster. To wit, Java technology is already powering around 10 billion devices worldwide. Delivering on this vision and accelerating the growth of embedded Java solutions, Oracle is today announcing a brand-new product: Oracle Java Micro Edition (ME) Embedded 3.2, accompanied by an update release of the Java ME Software Development Kit (SDK) to version 3.2.

    What is Oracle Java ME Embedded 3.2?
    Oracle Java ME Embedded 3.2 is a complete Java runtime client, optimized for ARM architecture connected microcontrollers and other resource-constrained systems. The product provides dedicated embedded functionality and is targeted for low-power, limited memory devices requiring support for a range of network services and I/O interfaces.

    What features and APIs are provided by Oracle Java ME Embedded 3.2?
    Oracle Java ME Embedded 3.2 is a Java ME runtime based on CLDC 1.1 (JSR-139) and IMP-NG (JSR-228). The runtime and virtual machine (VM) are highly optimized for embedded use. Also included in the product are the following optional JSRs and Oracle APIs:
    - File I/O APIs (JSR-75)
    - Wireless Messaging APIs (JSR-120)
    - Web Services (JSR-172)
    - Security and Trust Services subset (JSR-177)
    - Location APIs (JSR-179)
    - XML APIs (JSR-280)
    - Device Access API
    - Application Management System (AMS) API
    - AccessPoint API
    - Logging API

    Additional embedded features are:
    - Remote application management system
    - Support for continuous 24×7 operation
    - Application monitoring, auto-start, and system recovery
    - Application access to peripheral interfaces such as GPIO, I2C, SPIO, memory-mapped I/O
    - Application-level logging framework, including an option for remote logging
    - Headless on-device debugging: source-level Java application debugging over IP connection
    - Remote configuration of the Java VM

    What types of platforms are targeted by Oracle Java ME Embedded 3.2?
    The product is designed for embedded, always-on, resource-constrained, headless (no graphics/no UI), connected (wired or wireless) devices with a variety of peripheral I/O.
    The high-level system requirements are as follows:
    - System based on ARM architecture SOCs
    - Memory footprint (approximate) from 130 KB RAM/350 KB ROM (for a minimal, customized configuration) to 700 KB RAM/1500 KB ROM (for the full, standard configuration)
    - Very simple embedded kernel, or a more capable embedded OS/RTOS
    - At least one type of network connection (wired or wireless)

    The initial release of the product is delivered as a device emulation environment for x86/Windows desktop computers, integrated with the Java ME SDK 3.2. A standard binary of Oracle Java ME Embedded 3.2 for ARM KEIL development boards based on ARM Cortex M-3/4 (KEIL MCBSTM32F200 using ST Micro SOC STM32F207IG) will soon be available for download from the Oracle Technology Network (OTN).

    What types of applications can I develop with Oracle Java ME Embedded 3.2?
    The Oracle Java ME Embedded 3.2 product is a full-featured embedded Java runtime supporting applications based on the IMP-NG application model, which is derived from the well-known MIDP 2 application model. The runtime supports execution of multiple concurrent applications, remote application management, versatile connectivity, and a rich set of APIs and features relevant for embedded use cases, including the ability to interact with peripheral I/O directly from Java applications. This rich feature set, coupled with familiar and best-in-class software development tools, allows developers to quickly build and deploy sophisticated embedded solutions for a wide range of use cases. Target markets well supported by Oracle Java ME Embedded 3.2 include wireless modules for M2M, industrial and building control, smart grid infrastructure, home automation, and environmental sensors and tracking.

    What tools are available for embedded application development for Oracle Java ME Embedded 3.2?
    Along with the release of Oracle Java ME Embedded 3.2, Oracle is also making available an updated version of the Java ME Software Development Kit (SDK), together with plug-ins for the NetBeans and Eclipse IDEs, to deliver a complete development environment for embedded application development.

    OK – sounds great! Where can I find out more? And how do I get started?
    There is a complete set of information, data sheet, API documentation, "Getting Started Guide", FAQ, and download links available:
    - For an overview of Oracle Embeddable Java, see here.
    - For the Oracle Java ME Embedded 3.2 press release, see here.
    - For the Oracle Java ME Embedded 3.2 data sheet, see here.
    - For the Oracle Java ME Embedded 3.2 landing page, see here.
    - For the Oracle Java ME Embedded 3.2 documentation page, including a "Getting Started Guide" and FAQ, see here.
    - For the Oracle Java ME SDK 3.2 landing and download page, see here.
    - Finally, to ask more questions, please see the OTN "Java ME Embedded" forum.

    To get started, grab the "Getting Started Guide" and download the Java ME SDK 3.2, which includes the Oracle Java ME Embedded 3.2 device emulation.

    Can I learn more about Oracle Java ME Embedded 3.2 at JavaOne and/or Java Embedded @ JavaOne?
    Glad you asked! Both conferences, JavaOne and Java Embedded @ JavaOne, will feature a host of content and information around the new Oracle Java ME Embedded 3.2 product, from technical and business sessions, to hands-on tutorials, and demos. Stay tuned, I will post details shortly.

    Cheers, – Terrence

    Filed under: Mobile & Embedded Tagged: "Oracle Java ME Embedded", Connected, embedded, Embedded Java, Java Embedded @ JavaOne, JavaOne, Smart

    Read the article

  • How about a new platform for your next API&hellip; a CMS?

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2014/05/22/how-about-a-new-platform-for-your-next-apihellip-a.aspx

    Say what? I'm seeing a type of API emerge which serves static or long-lived resources, which are mostly read-only and have a controlled process to update the data that gets served. Think of something like an app configuration API, where you want a central location for changeable settings. You could use this server-side to store database connection strings and keep all your instances in sync, or it could be used client-side to push changes out to all users (and potentially drive A/B or MVT testing). That's a good candidate for a RESTful API which makes proper use of HTTP expiration and validation caching to minimise traffic, but really you want a front-end UI where you can edit the current config that the API returns and publish your changes. Sounds like a Content Management System would be a good fit? I've been looking at that, and it's a great fit for this scenario. You get a lot of what you need out of the box, the amount of custom code you need to write is minimal, and you get a whole lot of extra stuff from using a CMS which is very useful but probably not something you'd build if you had to put together a quick UI over your API content (like a publish workflow, fine-grained security and an audit trail). You typically use a CMS for HTML resources, but it's simple to expose JSON instead – or to do content negotiation to support both, so you can open a resource in a browser and see a nice visual representation, or request it with Accept=application/json and get the same content rendered as JSON for the app to use.

    Enter Umbraco
    Umbraco is an open source .NET CMS that's been around for a while. It has very good adoption, a lively community and a good release cycle. It's easy to use, has all the functionality you need for a CMS-driven API, and it's scalable (although you won't necessarily put much scale on the CMS layer). In the rest of this post, I'll build out a simple app config API using Umbraco. We'll define the structure of the configuration resource by creating a new Document Type and setting custom properties; then we'll build a very simple Razor template to return configuration documents as JSON; then create a resource and see how it looks. And we'll look at how you could build this into a wider solution. If you want to try this for yourself, it's ultra easy – there's an Umbraco image in the Azure Website gallery, so all you need to do is create a new Website, select Umbraco from the image and complete the installation. It will create a SQL Azure database to store all the content, as well as a Website instance for editing and accessing content. They're standard Azure resources, so you can scale them as you need. The default install creates a starter site for some HTML content, which you can use to learn your way around (or just delete).

    1. Create Configuration Document Type
    In Umbraco you manage content by creating and modifying documents, and every document has a known type, defining what properties it holds. We'll create a new Document Type to describe some basic config settings. In the Settings section from the left navigation (spanner icon), expand Document Types and Master, hit the ellipsis and select to create a new Document Type. This will base your new type off the Master type, which gives you some existing properties that we'll use – like the Page Title, which will be the resource URL.
    In the Generic Properties tab for the new Document Type, you set the properties you'll be able to edit and return for the resource. Here I've added a text string where I'll set a default cache lifespan, an image which I can use for a banner display, and a date which could show the user when the next release is due. This is the sort of thing that sits nicely in an app config API. It's likely to change during the life of the product, but not very often, so it's good to have a centralised place where you can make and publish changes easily and safely. It also enables A/B and MVT testing, as you can change the response each client gets based on your set logic, and their apps will behave differently without needing a release.

    2. Define the response template
    Now we've defined the structure of the resource (as a document), in Umbraco we can define a C# Razor template to say how that resource gets rendered to the client. If you only want to provide JSON, it's easy to render the content of the document by building each property in the response (Umbraco uses dynamic objects so you can specify document properties as object properties), or you can support content negotiation with very little effort. Here's a template to render the document as HTML or JSON depending on the Accept header, using JSON.NET for the API rendering:

        @inherits Umbraco.Web.Mvc.UmbracoTemplatePage
        @using Newtonsoft.Json
        @{
            Layout = null;
        }
        @if (UmbracoContext.HttpContext.Request.Headers["accept"] != null
             && UmbracoContext.HttpContext.Request.Headers["accept"] == "application/json")
        {
            Response.ContentType = "application/json";
            @Html.Raw(JsonConvert.SerializeObject(new {
                cacheLifespan = CurrentPage.cacheLifespan,
                bannerImageUrl = CurrentPage.bannerImage,
                nextReleaseDate = CurrentPage.nextReleaseDate
            }))
        }
        else
        {
            <h1>App configuration</h1>
            <p>Cache lifespan: <b>@CurrentPage.cacheLifespan</b></p>
            <p>Banner Image: </p>
            <img src="@CurrentPage.bannerImage">
            <p>Next Release Date: <b>@CurrentPage.nextReleaseDate</b></p>
        }

    That's a rough-and-ready example of what you can do. You could make it completely generic and just render all the document's properties as JSON, but having a specific template for each resource gives you control over what gets sent out. And the templates are evaluated at run-time, so if you need to change the output – or extend it, say to add caching response headers – you just edit the template and save, and the next client request gets rendered from the new template. No code to build and ship.

    3. Create the content
    With your document type created, in the Content pane you can create a new instance of that document, where Umbraco gives you a nice UI to input values for the properties we set up on the Document Type. Here I've set the cache lifespan to an xs:duration value, uploaded an image for the banner and specified a release date. Each property gets the appropriate input control – text box, file upload and date picker. At the top of the page is the name of the resource – myapp in this example. That specifies the URL for the resource, so if I had a DNS entry pointing to my Umbraco instance, I could access the config with a URL like http://static.x.y.z.com/config/myapp. The setup is all done now, so when we publish this resource it'll be available to access.

    4. Access the resource
    Now if you open that URL in the browser, you'll see the HTML version rendered, complete with the image and formatted date.
    Umbraco lets you save changes and preview them before publishing, so the HTML view could be a good way of showing editors their changes in a usable view, before they confirm them. If you browse the same URL from a REST client, specifying the Accept=application/json request header, you get the exact same resource rendered as JSON. So we have one resource, with a managed UI to publish it, being accessed as HTML or JSON with a tiny amount of effort.

    5. The wider landscape
    If you have fairly stable content to expose as an API, I think this approach is really worth considering. Umbraco scales very nicely, but in a typical solution you probably wouldn't need it to. When you have additional requirements, like logging API access requests – but doing it out-of-band so clients aren't impacted – you can put a very thin API layer on top of Umbraco, and cache the CMS responses in your API layer. In that design the API does a passthrough to the CMS, so the CMS still controls the content, but the API caches the response. If the response is cached for 1 minute, then Umbraco only needs to handle 1 request per minute (multiplied by the number of API instances), so if you need to support 1000s of requests per second, you're scaling a thin, simple API layer rather than having to scale the more complex CMS infrastructure (including the database). This design also gives you an approach to logging, by asynchronously publishing a message to a queue (Redis in this case), which can be picked up later and persisted by a different process.

    Does it work? Beautifully. Using Azure, I spiked the solution above (including the Redis logging framework which I'll blog about later) in half a day. That included setting up different roles in Umbraco to demonstrate a managed workflow for publishing changes, and a couple of document types representing different resources.

    Is it maintainable? We have three moving parts, which are all managed resources in Azure – an Azure Website for Umbraco which may need a couple of instances for HA (or may not, depending on how long the content can be cached), a message queue (Redis is in preview in Azure, but you can easily use Service Bus Queues if performance is less of a concern), and the Web Role for the API. Two of the components are off-the-shelf, from open source projects, and the only custom code is the API, which is very simple.

    Does it scale? Pretty nicely. With a single Umbraco instance running as an Azure Website, and with 4x instances for my API layer (Standard sized Web Roles), I got just under 4,000 requests per second served reliably, with a Worker Role in the background saving the access logs. So we had a nice UI to publish app config changes, with a friendly Web preview and a publishing workflow, capable of supporting 14 million requests in an hour, with less than a day's effort. Worth considering if you're publishing long-lived resources through your API.

    Read the article

  • Box Selection and Multi-Line Editing with VS 2010

    This is the twenty-second in a series of blog posts I'm doing on the VS 2010 and .NET 4 release. I've already covered some of the code editor improvements in the VS 2010 release. In particular, I've blogged about the Code Intellisense Improvements, new Code Searching and Navigating Features, HTML, ASP.NET and JavaScript Snippet Support, and improved JavaScript Intellisense. Today's blog post covers a small, but nice, editor improvement in VS 2010: the ability to use Box Selection...

    Read the article

  • MySQL 5.5 is GA!

    - by rob.young(at)oracle.com
    It is my pleasure to announce that MySQL 5.5 is now GA and ready for production deployment. You can read Oracle's official press release here. I am excited about 5.5 because of the performance and scalability gains, new replication enhancements and overall improved technical efficiencies. Congratulations and a sincere "Thanks!" go out to the entire MySQL community and product engineering teams for making 5.5 the best release of MySQL to date. Please join us for today's MySQL Technology Update webcast, where Tomas Ulin and I will cover what's new in MySQL 5.5 and provide an update on the other technologies we are working on. You can download MySQL 5.5 here. All of the documentation and what's-new information is here. There is also a great article on MySQL 5.5 and the MySQL community here. Thanks for reading, and as always, THANKS for your support of MySQL!

    Read the article

  • Error caused by Dropbox in update manager

    - by Olivier Lalonde
    I am getting the following error message when the update manager runs:

        Apt Authentication issue
        Problem during package list update. The package list update failed with a authentication failure. This usually happens behind a network proxy server. Please try to click on the "Run this action now" button to correct the problem or update the list manually by running Update Manager and clicking on "Check".

        W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://linux.dropbox.com lucid Release: The following signatures were invalid: NODATA 1 NODATA 2
        W: Failed to fetch http://linux.dropbox.com/ubuntu/dists/lucid/Release
        W: Some index files failed to download, they have been ignored, or old ones used instead.

    This error started to appear recently and for no obvious reason (maybe because I created a private PGP key for myself?). I'm running Dropbox v0.7.11 on Ubuntu Lucid 10.04.

    Read the article

  • Lubuntu 13.10 unable to connect to cups localhost:631

    - by user142139
    I am using Lubuntu 13.10 (recently upgraded) and am trying to print to a network printer (HP Photosmart 7960) through my router (US Robotics 5461). My printer is connected to the router via USB cable. Normally, I would use the CUPS configuration interface to set up the wireless connection to the printer. I was able to use the printer through the router wirelessly, using Ubuntu 12.04. Now, with my recently upgraded Lubuntu 13.10, I am unable to get the CUPS config webpage (http://localhost:631) to come up. In Chromium, I get: This web page is not available. In Firefox, I get: Unable to connect. Firefox can't establish a connection to the server at localhost:631. The CUPS config file details are below. I have this website to help with the router connections for Linux: http://www.usr.com/support/5461/5461-files/printer_installation_linux/index.html My printer's address through the router is: http://192.168.2.1:1631/printers/My_Printer Can you tell me how to fix this? Or, what to add to the CUPS configuration file to make this work? Please help. Thanks psychicnut

    CUPS CONFIG FILE DETAILS:

        # Show general information in error_log.
        LogLevel warn
        MaxLogSize 0
        SystemGroup lpadmin
        Listen /var/run/cups/cups.sock
        Listen /var/run/cups/cups.sock
        Listen 192.168.2.1:1631
        Browsing Off
        BrowseLocalProtocols dnssd
        DefaultAuthType Basic
        WebInterface Yes
        <Location />
          Order allow,deny
        </Location>
        <Location /admin>
          Order allow,deny
        </Location>
        <Location /admin/conf>
          AuthType Default
          Require user @SYSTEM
          Order allow,deny
        </Location>
        <Policy default>
          JobPrivateAccess default
          JobPrivateValues default
          SubscriptionPrivateAccess default
          SubscriptionPrivateValues default
          <Limit Create-Job Print-Job Print-URI Validate-Job>
            Order deny,allow
          </Limit>
          <Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job Cancel-My-Jobs Close-Job CUPS-Move-Job CUPS-Get-Document>
            Require user @OWNER @SYSTEM
            Order deny,allow
          </Limit>
          <Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default CUPS-Get-Devices>
            AuthType Default
            Require user @SYSTEM
            Order deny,allow
          </Limit>
          <Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After Cancel-Jobs CUPS-Accept-Jobs CUPS-Reject-Jobs>
            AuthType Default
            Require user @SYSTEM
            Order deny,allow
          </Limit>
          <Limit Cancel-Job CUPS-Authenticate-Job>
            Require user @OWNER @SYSTEM
            Order deny,allow
          </Limit>
          <Limit All>
            Order deny,allow
          </Limit>
        </Policy>
        <Policy authenticated>
          JobPrivateAccess default
          JobPrivateValues default
          SubscriptionPrivateAccess default
          SubscriptionPrivateValues default
          <Limit Create-Job Print-Job Print-URI Validate-Job>
            AuthType Default
            Order deny,allow
          </Limit>
          <Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job Cancel-My-Jobs Close-Job CUPS-Move-Job CUPS-Get-Document>
            AuthType Default
            Require user @OWNER @SYSTEM
            Order deny,allow
          </Limit>
          <Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default>
            AuthType Default
            Require user @SYSTEM
            Order deny,allow
          </Limit>
          <Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After Cancel-Jobs CUPS-Accept-Jobs CUPS-Reject-Jobs>
            AuthType Default
            Require user @SYSTEM
            Order deny,allow
          </Limit>
          <Limit Cancel-Job CUPS-Authenticate-Job>
            AuthType Default
            Require user @OWNER @SYSTEM
            Order deny,allow
          </Limit>
          <Limit All>
            Order deny,allow
          </Limit>
        </Policy>
        JobPrivateAccess default
        JobPrivateValues default
        SubscriptionPrivateAccess default
        SubscriptionPrivateValues default

    Read the article

  • Adopting DBVCS

    - by Wes McClure
    Identify early adopters
    Pick a small project with a small(ish) team. This can be a legacy application or a green-field application. Strive to find a team of early adopters that will be eager to try something new. Get the team on board!

    Research
    Research the tool(s) that you want to use. Some tools provide all of the features you would need, while some only provide a slice of the pie. DBVCS requires the ability to manage a set of change scripts that update a database from one version to the next. Ideally a tool can track database versions and automatically apply updates. The change script generation process can be manual, but having diff tools available to automatically generate change scripts can really reduce the overhead to adoption. Finally, an automated tool to generate a script file per database object is an added bonus, as your version control system can quickly identify what was changed in a commit (add/delete/modify), just like with code changes.

    Don't settle on just one tool; identify several. Then work with the team to evaluate the tools. Have the team do some tests of the following scenarios with each tool:
    - Baseline an existing database: can the migration tool work with legacy databases? Caution: most migration platforms do not support baselines or have poor support, especially the fad of fluent APIs.
    - Add/drop tables
    - Add/drop procedures/functions/views
    - Alter tables (rename columns, add columns, remove columns)
    - Massage data – migrations sometimes involve changing data types that cannot be implicitly cast and require you to decide how the data is explicitly cast to the new type. This is a requirement for a migrations platform. Think about a case where you might want to combine fields, or move a field from one table to another; you wouldn't want to lose the data.
    - Run the tool via the command line. If you cannot automate the tool in Continuous Integration, what is the point?
    - Create a copy of a database on demand.
    - Backup/restore databases locally.

    Let the team give feedback and decide together what tool they would like to try out. My recommendation at this point would be to include TSqlMigrations and RoundHouse as SQL based migration platforms. In general I would recommend staying away from the fluent platforms, as they often lack baseline capabilities and add the overhead of learning a new API when SQL is already a very well known DSL. Code migrations often get messy with procedures/views/functions, as these have to be created with SQL and aren't cross-platform anyway. IMO, stick to SQL based migrations.

    Reconciling Production
    If your project is a legacy application, you will need to reconcile the current state of production with your development databases. Find changes in production and bring them down to development, even if they are old and need to be removed. Once complete, produce a baseline of either dev or prod, as they are now in sync. Commit this to your VCS of choice. Add whatever schema changes tracking mechanism your tool requires to your development database. This often requires adding a table to track the schema version of that database. Your tool should support doing this for you. You can add this table to production when you do your next release. Script out any changes currently in dev. Remove production artifacts that you brought down during reconciliation. Add change scripts for any outstanding changes in dev since the last production release. Commit these to your repository.
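    As a concrete illustration of the tracking table and change scripts such a tool manages, here is a minimal T-SQL sketch. The table layout and script name are hypothetical; each tool (TSqlMigrations, RoundHouse, etc.) creates its own variant:

        -- Hypothetical version-tracking table; real tools generate their own flavor of this
        CREATE TABLE dbo.SchemaVersions (
            VersionId    INT IDENTITY(1,1) PRIMARY KEY,
            ScriptName   NVARCHAR(255) NOT NULL,
            AppliedOnUtc DATETIME NOT NULL DEFAULT GETUTCDATE()
        );

        -- Example change script (002_add_email_to_customers.sql), wrapped in a
        -- transaction so a failure rolls the database back cleanly
        BEGIN TRANSACTION;
            ALTER TABLE dbo.Customers ADD Email NVARCHAR(256) NULL;
            INSERT INTO dbo.SchemaVersions (ScriptName)
            VALUES (N'002_add_email_to_customers.sql');
        COMMIT TRANSACTION;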
    Say No to Shared Dev DBs
    Simply put, you wouldn't dream of sharing a code checkout, so why would you share a development database? If you have a shared dev database, back it up, distribute the backups and take the shared version offline (including the dev db server once all projects are using DB VCS). Doing DB VCS with a shared database is bound to cause problems, as people won't be able to easily script out their own changes from those that others are working on.

    First prod release
    Copy prod to your beta/testing environment. Add the schema changes table (or mechanism) and do a test run of your changes. If successful, you can schedule this to be run on production.

    Evaluation
    After your first release, evaluate the pain points of the process. Try to find tools or modifications to existing tools to help fix them. Don't leave stones unturned; iteratively evolve your tools and practices to make the process as seamless as possible. This is why I suggest open source alternatives. Nothing is set in stone. A good example was adding transactional support to TSqlMigrations: we ran into situations where an update would break a database, so I added a feature to do transactional updates and roll back on errors! Another good example is generating change scripts. We had been manually making these for months. I found an open source project called Open DB Diff and integrated it with TSqlMigrations. These were things we just accepted at the time when we began adopting our tool set. Once we became comfortable with the base functionality, it was time to start automating more of the process. Just like anything else with development, never be afraid to try to find tools to make your job easier!

    Enjoy
    -Wes

    Read the article

  • Useful Tips for BizTalk 2006 to BizTalk 2009 Porting

    - by Arvind Chaudhary
    BizTalk projects require some manual intervention in order to upgrade them. Execute the following steps to port a BizTalk solution / project:

    1. Open the project's solution file (.sln) using a text editor – Notepad++ is recommended.

    2. Remove all the contents between (but not including) the following elements, i.e. the per-project configuration entries:

        GlobalSection(ProjectConfigurationPlatforms) = postSolution
            {5C48CB6B-AE6F-4288-A8EE-46E352BB730C}.Debug|.NET.ActiveCfg = Debug|Any CPU
            {5C48CB6B-AE6F-4288-A8EE-46E352BB730C}.Debug|.NET.Build.0 = Debug|Any CPU
            {5C48CB6B-AE6F-4288-A8EE-46E352BB730C}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
            {5C48CB6B-AE6F-4288-A8EE-46E352BB730C}.Debug|Any CPU.Build.0 = Debug|Any CPU
            ...
        EndGlobalSection

       You should see the following once you have removed the contents:

        GlobalSection(ProjectConfigurationPlatforms) = postSolution
        EndGlobalSection

       Note: there should not be any entries left between these two elements.

    3. For each BizTalk project (.btproj) in the solution (.sln), find and replace the following in the .btproj file:
       - 'Name = "Debug"' with 'Name = "Development"'
       - 'Name = "Release"' with 'Name = "Deployment"'
       - "bin\Debug" with "bin\Development"
       - "bin\Release" with "bin\Deployment"

    4. Save the file.

    Read the article

  • WalMart Slashes iPhone 3GS Price To 97$

    - by Gopinath
    WalMart has slashed the price of the iPhone 3GS 16GB model to $97 with a two-year service contract. This offer saves you $100 and starts today. Apple typically cuts the prices of its products whenever it plans to release an upgraded version. The price cut on the iPhone 3GS is a strong hint that Apple is planning to release the next version of the iPhone, unofficially dubbed iPhone 4, at the upcoming WWDC conference. Click here to check the availability of iPhone 3GS stock at Walmart. Join us on Facebook to read all our stories right inside your Facebook news feed.

    Read the article

  • MySQL 5.5

    - by trond-arne.undheim
    MySQL 5.5 brings new performance and scalability enhancements, and continued investment in MySQL (see the press release). "The latest release of MySQL further exemplifies Oracle's commitment to the MySQL community and investment in delivering rapid innovation and enhancements to the MySQL platform," said Edward Screven, Oracle's Chief Corporate Architect. MySQL is integral to Oracle's complete, open and integrated strategy. The MySQL 5.5 Community Edition, which is licensed under the GNU General Public License (GPL) and is available for free download, includes InnoDB as the default storage engine. We cannot stress the importance of using open standards enough, whether in the context of open source or non-open source software. For more on Oracle's Open Source offering, see Oracle.com/opensource or oss.oracle.com (for developers).
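    A quick way to see the new default in action, assuming a MySQL 5.5 server (the table name here is just an example):

        -- List storage engines; in 5.5 InnoDB is reported as DEFAULT
        SHOW ENGINES;

        -- A plain CREATE TABLE with no ENGINE clause now produces an InnoDB table
        CREATE TABLE t1 (id INT PRIMARY KEY, val VARCHAR(50));
        SHOW CREATE TABLE t1;  -- should report ENGINE=InnoDB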

    Read the article

  • Unable to upgrade from Lucid Lynx to Maverick Meerkat

    - by Rafal
    I have got a problem with Update Manager. I'm running Lucid Lynx ver. 10.04.2 and I'm unable to upgrade it to version 10.10. I get this message when trying to upgrade: This can be caused by: Upgrading to a pre-release version of Ubuntu; Running the current pre-release version of Ubuntu; Unofficial software packages not provided by Ubuntu. I couldn't have accidentally enabled pre-released updates or unsupported updates, because both of those options stay unticked in Software Sources/Updates, so it can't be that. EDIT: Those options stayed disabled. I have never enabled them. Unofficial software packages, then? If yes, how do I find which of them I have to get rid of? My current Ubuntu version is: 10.04.2 LTS. Thanks

    Read the article

< Previous Page | 125 126 127 128 129 130 131 132 133 134 135 136  | Next Page >