Search Results

Search found 447 results on 18 pages for 'freed ahmad'.

  • Unexpected behavior of IntentService

    - by kknight
    I used IntentService in my code instead of Service because IntentService creates a worker thread for me in onHandleIntent(Intent intent), so I don't have to create a Thread myself in the code of my service. I expected that two intents sent to the same IntentService would execute in parallel, because I assumed a thread is created in IntentService for each intent. But it turned out that the two intents executed sequentially. This is my IntentService code:

        import android.app.IntentService;
        import android.content.Intent;
        import android.util.Log;
        import java.util.Date;

        public class UpdateService extends IntentService {
            public static final String TAG = "HelloTestIntentService";

            public UpdateService() {
                super("News UpdateService");
            }

            protected void onHandleIntent(Intent intent) {
                String userAction = intent.getStringExtra("userAction");
                Log.v(TAG, "" + new Date() + ", In onHandleIntent for userAction = "
                        + userAction + ", thread id = " + Thread.currentThread().getId());
                if ("1".equals(userAction)) {
                    try {
                        Thread.sleep(20 * 1000);
                    } catch (InterruptedException e) {
                        Log.e(TAG, "error", e);
                    }
                    Log.v(TAG, "" + new Date() + ", This thread is waked up.");
                }
            }
        }

    And the code that calls the service is below:

        import android.app.Activity;
        import android.content.Intent;
        import android.os.Bundle;

        public class HelloTest extends Activity {
            //@Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);

                Intent selectIntent = new Intent(this, UpdateService.class);
                selectIntent.putExtra("userAction", "1");
                this.startService(selectIntent);

                selectIntent = new Intent(this, UpdateService.class);
                selectIntent.putExtra("userAction", "2");
                this.startService(selectIntent);
            }
        }

    I saw these messages in the log:

        V/HelloTestIntentService( 848): Wed May 05 14:59:37 PDT 2010, In onHandleIntent for userAction = 1, thread id = 8
        D/dalvikvm( 609): GC freed 941 objects / 55672 bytes in 99ms
        V/HelloTestIntentService( 848): Wed May 05 15:00:00 PDT 2010, This thread is waked up.
        V/HelloTestIntentService( 848): Wed May 05 15:00:00 PDT 2010, In onHandleIntent for userAction = 2, thread id = 8
        I/ActivityManager( 568): Stopping service: com.example.android/.UpdateService

    The log shows that the second intent waited for the first one to finish, and that both ran on the same thread. Is there anything I misunderstood about IntentService? To make two service intents execute in parallel, do I have to replace IntentService with Service and start a thread myself in the service code? Thanks.
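
    (For reference: IntentService is documented to queue incoming intents and handle them one at a time on a single worker thread, so the sequential behavior above is by design. A minimal sketch of the plain-Service alternative, assuming parallel handling is really wanted; the class name and work method here are illustrative, not from the original post:)

        import android.app.Service;
        import android.content.Intent;
        import android.os.IBinder;

        public class ParallelUpdateService extends Service {
            @Override
            public int onStartCommand(final Intent intent, int flags, int startId) {
                // One worker thread per start request, so two intents run in
                // parallel instead of being queued as in IntentService.
                new Thread(new Runnable() {
                    public void run() {
                        handleIntent(intent);
                    }
                }).start();
                return START_STICKY;
            }

            private void handleIntent(Intent intent) {
                // ... the per-intent work from onHandleIntent() goes here ...
            }

            @Override
            public IBinder onBind(Intent intent) {
                return null;
            }
        }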

  • How to show percentage of 'memory used' in a win32 process?

    - by pj4533
    I know that memory usage is a very complex issue on Windows. I am trying to write a UI control for a large application that shows a 'percentage of memory used' number, in order to give the user an indication that it may be time to clear up some memory, or more likely restart the application.

    One implementation used ullAvailVirtual from MEMORYSTATUSEX as a base, then used HeapWalk() to walk the process heap looking for additional free memory. The HeapWalk() step was needed because we noticed that, after a while of running, memory allocated and freed by the heap was never returned in the ullAvailVirtual number. After hours of intensive use, the ullAvailVirtual number no longer accurately reported the amount of memory available. However, this method proved not ideal, due to occasional odd errors that HeapWalk() would return, even when the process heap was not corrupted. Further, since this is a UI control, the heap-walking code was executing every 5-10 seconds. I tried contacting Microsoft about why HeapWalk() was failing and escalated a case via MSDN, but never got an answer other than "you probably shouldn't do that".

    As a second implementation, I used PagefileUsage from PROCESS_MEMORY_COUNTERS as a base. Then I used VirtualQueryEx to walk the virtual address space, adding up all regions that weren't MEM_FREE and returned a value for GetMappedFileNameA(). My thinking was that PagefileUsage was essentially 'private bytes', so if I added to that value the total size of the DLLs my process was using, it would be a good approximation of the amount of memory my process was using. This second method seems to (sorta) work; at least it doesn't cause crashes like the heap-walker method. However, when both methods are enabled, the values are not the same, so one of the methods is wrong.

    So, StackOverflow world: how would you implement this? Which method is more promising, or do you have a third, better method? Should I go back to the original method and further debug the odd errors? Should I stay away from walking the heap every 5-10 seconds? Keep in mind the whole point is to indicate to the user that it is getting 'dangerous', and they should either free up memory or restart the application. Perhaps a 'percentage used' isn't the best solution to this problem? What is? Another idea I had was a color-based system (red, yellow, green), which I could base on more factors than just a single number.
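
    (For comparison, a much rougher ratio can be computed from documented Win32 calls alone: private bytes over the size of the virtual address space. This is only a sketch of that idea, not a substitute for the accuracy the heap walker was after; link against psapi.lib:)

        #include <windows.h>
        #include <psapi.h>

        /* Rough percentage of this process's virtual address space in use:
           private bytes / total virtual address space. */
        double UsedVirtualPercent(void)
        {
            PROCESS_MEMORY_COUNTERS_EX pmc = { sizeof(pmc) };
            MEMORYSTATUSEX msx = { sizeof(msx) };

            GetProcessMemoryInfo(GetCurrentProcess(),
                                 (PROCESS_MEMORY_COUNTERS *)&pmc, sizeof(pmc));
            GlobalMemoryStatusEx(&msx);

            return 100.0 * (double)pmc.PrivateUsage / (double)msx.ullTotalVirtual;
        }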

  • iPhone App is leaking memory; Instruments and Clang cannot find the leak

    - by Norbert
    Hi, I've developed an iPhone program which is kind of an image manipulation program: the user gets a UIImagePickerController and selects an image. Then the program does some heavy calculation in a new thread (for responsiveness of the application). The thread has, of course, its own autorelease pool. When calculation is done, the separate thread signals the main thread that the result can be presented. The app creates a new view controller and pushes it onto the navigation controller. In short:

        UIImagePickerController
        new thread (autorelease pool) does some heavy calculation with image data
        signal to main thread that it's done
        main thread creates view controller and pushes it onto navigation controller
        view controller presents image result

    My program works well, but if I dismiss the navigation controller's top view controller by tapping on the back button and repeat the whole process several times, my app crashes. But only on the device! Instruments cannot find any leaks (except for some minor ones which I don't feel responsible for: thread creation, NSCFString; overall about 10 kB). Even the Clang static analyzer tells me that my code seems to be all right. I know that the UIImage class can cache images and that objects returned from convenience methods get freed only when their autorelease pool gets drained. But most of the time I work with CGImageRef, and I use UIImage's alloc, init & release methods to free memory as soon as possible. Currently, I don't know how to isolate the problem. How would you approach this problem?

    Crash Log:

        Incident Identifier: F4C202C9-1338-48FC-80AD-46248E6C7154
        CrashReporter Key:   bb6f526d8b9bb680f25ea8e93bb071566ccf1776
        OS Version:          iPhone OS 3.1.1 (7C145)
        Date:                2009-09-26 14:18:57 +0200

        Free pages:        372
        Wired pages:       7754
        Purgeable pages:   0
        Largest process:   _MY_APP_

        Processes
        Name              UUID                                 Count resident pages
        _MY_APP_          <032690e5a9b396058418d183480a9ab3>   17766 (jettisoned) (active)
        debugserver       <ec29691560aa0e2994f82f822181bffd>     107
        syslog_relay      <21e13fa2b777218bdb93982e23fb65d3>      62
        notification_pro  <8a7725017106a28b545fd13ed58bf98c>      64
        notification_pro  <8a7725017106a28b545fd13ed58bf98c>      64
        afcd              <98b45027fbb1350977bf1ca313dee527>      65
        mediaserverd      <eb8fe997a752407bea573cd3adf568d3>     319
        ptpd              <b17af9cf6c4ad16a557d6377378e8a1e>     142
        syslogd           <ec8a5bc4483638539fa1266363dee8b8>      68
        BTServer          <1bb74831f93b1d07c48fb46cc31c15da>     119
        apsd              <a639ba83e666cc1d539223923ce59581>     165
        notifyd           <2ed3a1166da84d8d8868e64d549cae9d>     101
        CommCenter        <f4239480a623fb1c35fa6c725f75b166>     161
        SpringBoard       <8919df8091fdfab94d9ae05f513c0ce5>    2681 (active)
        accessoryd        <b66bcf6e77c3ee740c6a017f54226200>      90
        configd           <41e9d763e71dc0eda19b0afec1daee1d>     275
        fairplayd         <cdce5393153c3d69d23c05de1d492bd4>     108
        mDNSResponder     <f3ef7a6b24d4f203ed147f476385ec53>     103
        lockdownd         <6543492543ad16ff0707a46e512944ff>     297
        launchd           <73ce695fee09fc37dd70b1378af1c818>      71
        **End**

  • Unmanaged Code calling leads to heavy memory leak!!

    - by konnychen
    Maybe I need to change the title to "Unmanaged code calling leads to heavy memory leak!" The leak is around 30 MB/hour. I think I need to complete my code here, because the memory leak may not come from a static string; my real code derives this string from an external device (see new code attached), so I also handle unmanaged code. Could it be possible the leak comes from the unmanaged code? But I freed the resources with Marshal.FreeCoTaskMem(pos);

        oThread2 = new Thread(new ThreadStart(Cyclic_Call));
        oThread2.Start();

        delegate void SetText_lab_Statubar(string text);

        private void m_SetText_lab_Statubar(string text)
        {
            if (this.lab_Statubar.InvokeRequired)
            {
                SetText_lab_Statubar d = new SetText_lab_Statubar(m_SetText_lab_Statubar);
                this.Invoke(d, new object[] { text });
            }
            else
            {
                this.lab_Statubar.Text = text;
            }
        }

        private void Cyclic_Call()
        {
            do
            {
                //... ...
                ReadMatrixCode(Station6, 0, str_Code);
                this.m_SetText_lab_Statubar(str_Code[4]);
                Thread.Sleep(100);
            } while (!b_AbortThraed);
        }

        private void ReadMatrixCode(Station st, int ItemNr, string[] str_Code)
        {
            IntPtr pItemStates = IntPtr.Zero;
            IntPtr pErrors = IntPtr.Zero;
            int NumItems = itemServerHandles.Length;

            // This calls an external DLL which has some "out IntPtr" parameters
            m_SyncIO.Read(DataSrc, NumItems, itemServerHandles, out pItemStates, out pErrors);

            errors = new int[NumItems];
            Marshal.Copy(pErrors, errors, 0, NumItems);

            IntPtr pos = pItemStates;
            // Now get the read values and check errors
            for (int dwCount = 0; dwCount < NumItems; dwCount++)
            {
                result[dwCount] = (ITEMSTATE)Marshal.PtrToStructure(pos, typeof(ITEMSTATE));
                pos = (IntPtr)(pos.ToInt32() + Marshal.SizeOf(typeof(ITEMSTATE)));
            }

            // Free allocated COM resources
            Marshal.FreeCoTaskMem(pItemStates);
            Marshal.FreeCoTaskMem(pErrors);
            pItemStates = IntPtr.Zero;
            pErrors = IntPtr.Zero;
        }

    m_SyncIO is a class; finally it calls a COM component which is defined below:

        [Guid("39C12B52-011E-11D0-9675-1020AFD8ADB3")]
        [InterfaceType(1)]
        [ComConversionLoss]
        public interface ISyncIO
        {
            void Read(DATASOURCE dwSource, int dwCount, int[] phServer,
                      out IntPtr ppItemValues, out IntPtr ppErrors);
            void Write(int dwCount, int[] phServer, object[] pItemValues,
                       out IntPtr ppErrors);
        }
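
    (One thing worth checking, offered as an assumption rather than a diagnosis: the interface above looks like OPC's synchronous read interface, and OPC's per-item state struct carries a native VARIANT that Marshal.FreeCoTaskMem alone does not release; string or array values inside those VARIANTs would then leak on every cyclic read. A hypothetical sketch of clearing the VARIANTs before freeing the buffer; the variantOffset parameter is illustrative and must match the real ITEMSTATE layout:)

        // Assumes: using System.Runtime.InteropServices; inside the same class.
        [DllImport("oleaut32.dll")]
        static extern int VariantClear(IntPtr pvarg);

        static void FreeItemStates(IntPtr pItemStates, int numItems, int variantOffset)
        {
            int size = Marshal.SizeOf(typeof(ITEMSTATE));
            for (int i = 0; i < numItems; i++)
            {
                // variantOffset = byte offset of the VARIANT field inside
                // ITEMSTATE (an assumption, check the struct definition).
                IntPtr pVariant = (IntPtr)(pItemStates.ToInt64() + (long)i * size + variantOffset);
                VariantClear(pVariant);   // releases BSTRs/arrays held by the VARIANT
            }
            Marshal.FreeCoTaskMem(pItemStates);
        }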

  • How to overwrite an array of char pointers with a larger list of char pointers?

    - by Casey
    My function is being passed a struct containing, among other things, a NULL-terminated array of pointers to the words making up a command with arguments. I'm performing a glob match on the list of arguments to expand them into a full list of files, then I want to replace the passed argument array with the new expanded one. The globbing is working fine, that is, g.gl_pathv is populated with the list of expected files. However, I am having trouble copying this array into the struct I was given.

        #include <glob.h>
        #include <stdlib.h>
        #include <string.h>

        struct command {
            char **argv;
            // other fields...
        };

        void myFunction(struct command *cmd)
        {
            char **p = cmd->argv;
            char *program = *p++;  // save the program name (e.g. 'ls') and advance to the first argument

            glob_t g;
            memset(&g, 0, sizeof(g));
            g.gl_offs = 1;

            int res = glob(*p++, GLOB_DOOFFS, NULL, &g);
            glob_handle_res(res);
            while (*p) {
                res = glob(*p, GLOB_DOOFFS | GLOB_APPEND, NULL, &g);
                glob_handle_res(res);
            }
            if (g.gl_pathc <= 0) {
                globfree(&g);
            }

            cmd->argv = malloc((g.gl_pathc + g.gl_offs) * sizeof *cmd->argv);
            if (cmd->argv == NULL) {
                sys_fatal_error("pattern_expand: malloc failed\n");
            }

            // copy over the arguments
            size_t i = g.gl_offs;
            for (; i < g.gl_pathc + g.gl_offs; ++i)
                cmd->argv[i] = strdup(g.gl_pathv[i]);

            // insert the original program name
            cmd->argv[0] = strdup(program);
            cmd->argv[g.gl_pathc + g.gl_offs] = 0;   // <-- the line added in the final edit

            globfree(&g);
        }

        void command_free(struct command *cmd)
        {
            char **p = cmd->argv;
            while (*p) {
                free(*p++);   // Segfaults here, was it already freed?
            }
            free(cmd->argv);
            free(cmd);
        }

    Edit 1: Also, I realized I need to stick program back in there as cmd->argv[0].
    Edit 2: Added call to calloc.
    Edit 3: Edited memory management with tips from Alok.
    Edit 4: More tips from Alok.
    Edit 5: Almost working... the app segfaults when freeing the command struct.

    Finally: Seems like I was missing the terminating NULL, so adding the line cmd->argv[g.gl_pathc + g.gl_offs] = 0; seemed to make it work.
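
    (One caveat about that final fix, for anyone landing here: the malloc above reserves g.gl_pathc + g.gl_offs slots, so writing the terminator at index g.gl_pathc + g.gl_offs is one element past the end of the block, a heap overflow that happens to "work". The allocation needs one extra slot:)

        /* room for the offset slots + glob results + terminating NULL */
        cmd->argv = malloc((g.gl_pathc + g.gl_offs + 1) * sizeof *cmd->argv);
        ...
        cmd->argv[g.gl_pathc + g.gl_offs] = NULL;  /* now within bounds */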

  • mem-leak freeing g_strdup

    - by Mike
    I'm trying to free memory allocated by g_strdup, but I'm not sure what I'm doing wrong. Using

        valgrind --tool=memcheck --leak-check=yes ./a.out

    I keep getting:

        ==4506== 40 bytes in 10 blocks are definitely lost in loss record 2 of 9
        ==4506==    at 0x4024C1C: malloc (vg_replace_malloc.c:195)
        ==4506==    by 0x40782E3: g_malloc (in /lib/libglib-2.0.so.0.2200.3)
        ==4506==    by 0x4090CA8: g_strdup (in /lib/libglib-2.0.so.0.2200.3)
        ==4506==    by 0x8048722: add_inv (dup.c:26)
        ==4506==    by 0x80487E6: main (dup.c:47)
        ==4506== 504 bytes in 1 blocks are possibly lost in loss record 4 of 9
        ==4506==    at 0x4023E2E: memalign (vg_replace_malloc.c:532)
        ==4506==    by 0x4023E8B: posix_memalign (vg_replace_malloc.c:660)
        ==4506==    by 0x408D61D: ??? (in /lib/libglib-2.0.so.0.2200.3)
        ==4506==    by 0x408E5AC: g_slice_alloc (in /lib/libglib-2.0.so.0.2200.3)
        ==4506==    by 0x4061628: g_hash_table_new_full (in /lib/libglib-2.0.so.0.2200.3)
        ==4506==    by 0x40616C7: g_hash_table_new (in /lib/libglib-2.0.so.0.2200.3)
        ==4506==    by 0x8048795: main (dup.c:42)

    I've tried different ways to free it but no success so far. I'll appreciate any help. Thanks. BTW: it compiles and runs fine.

        #include <stdio.h>
        #include <string.h>
        #include <stdlib.h>
        #include <glib.h>
        #include <stdint.h>

        struct s {
            char *data;
        };

        static GHashTable *hashtable1;
        static GHashTable *hashtable2;

        static void add_inv(GHashTable *table, const char *key)
        {
            gpointer old_value, old_key;
            gint value;

            if (g_hash_table_lookup_extended(table, key, &old_key, &old_value)) {
                value = GPOINTER_TO_INT(old_value);
                value = value + 2;
                /*g_free (old_key);*/
            } else {
                value = 5;
            }
            g_hash_table_replace(table, g_strdup(key), GINT_TO_POINTER(value));
        }

        static void print_hash_kv(gpointer key, gpointer value, gpointer user_data)
        {
            gchar *k = (gchar *) key;
            gchar *h = (gchar *) value;
            printf("%s: %d \n", k, h);
        }

        int main(int argc, char *argv[])
        {
            struct s t;
            t.data = "bar";
            int i, j;

            hashtable1 = g_hash_table_new(g_str_hash, g_str_equal);
            hashtable2 = g_hash_table_new(g_str_hash, g_str_equal);

            for (i = 0; i < 10; i++) {
                add_inv(hashtable1, t.data);
                add_inv(hashtable2, t.data);
            }
            /*free(t.data);*/
            /*free(t.data);*/

            g_hash_table_foreach(hashtable1, print_hash_kv, NULL);
            g_hash_table_foreach(hashtable2, print_hash_kv, NULL);

            g_hash_table_destroy(hashtable1);
            g_hash_table_destroy(hashtable2);

            return 0;
        }
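
    (For reference, GLib's usual idiom for tables that own g_strdup'd keys is to create them with g_hash_table_new_full, so replaced keys, and everything left at destroy time, are freed automatically; that accounts for the 40 definitely-lost bytes valgrind reports above. A sketch of the two creation calls:)

        /* g_free is registered as the key-destroy function: when
           g_hash_table_replace() overwrites an existing key, and when
           g_hash_table_destroy() runs, the duplicated keys are freed.
           The values are ints packed into pointers, so they need no
           destructor (NULL). */
        hashtable1 = g_hash_table_new_full(g_str_hash, g_str_equal, g_free, NULL);
        hashtable2 = g_hash_table_new_full(g_str_hash, g_str_equal, g_free, NULL);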

  • Loading images to UIScrollview crashes

    - by Icky
    Hello All. I have a navigation controller pushing a UIViewController with a scrollview inside. Within the scrollview I download a certain number of images, around 20 (sometimes more), each sized around 150 KB. All these images are added to the scrollview so that each one's origin is x + imageSize, placing it to the right of the one before. All in all I think it's a lot of data (3-4 MB). On an iPod Touch this sometimes crashes; the iPhone can handle it once, but if it has to load the data again (some other images), it crashes too. I guess it's a memory issue, but within my code I download the image, save it to a file on the phone as NSData, read it again from the file and add it to a UIImageView which I release. So I have freed the memory I allocated; nevertheless it still crashes. Can anyone help me out? Since I'm new to this, I don't know the best way to handle images in a scrollview. Besides, I create the controller at start from a nib, which means I don't have to release it, since I don't use alloc - right?

    Code: in my root view controller I do:

        -(void) showImages {
            [[self naviController] pushViewController:imagesViewController animated:YES];
            [imagesViewController viewWillAppear:YES];
        }

    Then in my controller handling the scroll view, this is the method to load the images:

        - (void) loadOldImageData {
            for (int i = 0; i < 40; i++) {
                NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
                NSString *documentsDirectory = [paths objectAtIndex:0];
                NSString *filePath = [documentsDirectory stringByAppendingPathComponent:[NSString stringWithFormat:@"img%d.jpg", i]];
                NSData *myImg = [NSData dataWithContentsOfFile:filePath];
                UIImage *im = [UIImage imageWithData:myImg];
                if ([im isKindOfClass:[UIImage class]]) {
                    NSLog(@"IM EXISTS");
                    UIImageView *imgView = [[UIImageView alloc] initWithImage:im];
                    CGRect frame = CGRectMake(i*320, 0, 320, 416);
                    imgView.frame = frame;
                    [myScrollView addSubview:imgView];
                    [imgView release];
                    //NSLog(@"Adding img %d", i);
                    numberImages = i;
                    NSLog(@"setting numberofimages to %d", numberImages);
                    //NSLog(@"scroll subviews %d", [myScrollView.subviews count]);
                }
            }
            myScrollView.contentSize = CGSizeMake(320 * (numberImages + 1), 416);
        }
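
    (The usual approach for this kind of screen, sketched below under the assumption of the 320-point-wide paged layout above: keep only the visible page and its immediate neighbors alive, creating and destroying the UIImageViews as the user scrolls. loadPage: and unloadPage: are hypothetical helpers, not from the original code:)

        // Sketch: load pages on demand instead of keeping ~40 full-screen
        // images resident at once.
        - (void)scrollViewDidScroll:(UIScrollView *)scrollView {
            int current = scrollView.contentOffset.x / 320;
            for (int i = 0; i <= numberImages; i++) {
                if (abs(i - current) <= 1) {
                    [self loadPage:i];     // creates the UIImageView if needed
                } else {
                    [self unloadPage:i];   // removes it and releases the image
                }
            }
        }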

  • Precise explanation of JavaScript <-> DOM circular reference issue

    - by Joey Adams
    One of the touted advantages of jQuery.data versus raw expando properties (arbitrary attributes you can assign to DOM nodes) is that jQuery.data is "safe from circular references and therefore free from memory leaks". An article from Google titled "Optimizing JavaScript code" goes into more detail:

        The most common memory leaks for web applications involve circular
        references between the JavaScript script engine and the browsers' C++
        objects' implementing the DOM (e.g. between the JavaScript script
        engine and Internet Explorer's COM infrastructure, or between the
        JavaScript engine and Firefox XPCOM infrastructure).

    It lists two examples of circular reference patterns:

        DOM element → event handler → closure scope → DOM element
        DOM element → via expando → intermediary object → DOM element

    However, if a reference cycle between a DOM node and a JavaScript object produces a memory leak, doesn't this mean that any non-trivial event handler (e.g. onclick) will produce such a leak? I don't see how it's even possible for an event handler to avoid a reference cycle, because the way I see it:

        The DOM element references the event handler.
        The event handler references the DOM (either directly or indirectly).

    In any case, it's almost impossible to avoid referencing window in any interesting event handler, short of writing a setInterval loop that reads actions from a global queue. Can someone provide a precise explanation of the JavaScript ↔ DOM circular reference problem? Things I'd like clarified:

    What browsers are affected? A comment in the jQuery source specifically mentions IE6-7, but the Google article suggests Firefox is also affected.

    Are expando properties and event handlers somehow different concerning memory leaks? Or are both of these code snippets susceptible to the same kind of memory leak?

        // Create an expando that references its own element.
        var elem = document.getElementById('foo');
        elem.myself = elem;

        // Create an event handler that references its own element.
        var elem = document.getElementById('foo');
        elem.onclick = function() {
            elem.style.display = 'none';
        };

    If a page leaks memory due to a circular reference, does the leak persist until the entire browser application is closed, or is the memory freed when the window/tab is closed?
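
    (For context, the classic mitigation for the old IE6/7 leak was to break the cycle by hand when the node is no longer needed; jQuery.data sidesteps the cycle by storing only a numeric id as the expando and keeping the actual data in a central JavaScript-side cache. A sketch of the manual tear-down, not jQuery's actual internals:)

        var elem = document.getElementById('foo');
        elem.onclick = function () { elem.style.display = 'none'; };

        function teardown() {
            elem.onclick = null;  // DOM node no longer references the closure
            elem = null;          // closure scope no longer references the node
        }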

  • Advice on "Invalid Pointer Operation" when using complex records

    - by Xaz
    Env: Delphi 2007

    Justification: I tend to use complex records quite frequently as they offer almost all of the advantages of classes but with much simpler handling.

    Anyhoo, one particularly complex record I have just implemented is trashing memory (later leading to an "Invalid Pointer Operation" error). This is an example of the memory-trashing code:

        sSignature := gProfiles.Profile[_stPrimary].Signature.Formatted(True);

    On the second call I get "Invalid Pointer Operation". It works OK if I call it like this:

        AProfile := gProfiles.Profile[_stPrimary];
        ASignature := AProfile.Signature;
        sSignature := ASignature.Formatted(True);

    Background code:

        gProfiles: TProfiles;

        TProfiles = Record
        private
          FPrimaryProfileID: Integer;
          FCachedProfile: TProfile;
          ...
        public
          < much code removed >
          property Profile[ProfileType: TProfileType]: TProfile Read GetProfile;
        end;

        function TProfiles.GetProfile(ProfileType: TProfileType): TProfile;
        begin
          case ProfileType of
            _stPrimary : Result := ProfileByID(FPrimaryProfileID);
            ...
          end;
        end;

        function TProfiles.ProfileByID(iID: Integer): TProfile;
        begin
          <snip>
          if LoadProfileOfID(iID, FCachedProfile) then begin
            Result := FCachedProfile;
          end else
          ...
        end;

        TProfile = Record
        private
          ...
        public
          ...
          Signature: TSignature;
          ...
        end;

        TSignature = Record
        private
        public
          PlainTextFormat : string;
          HTMLFormat : string;
          // The text to insert into a message when using this profile
          function Formatted(bHTML: boolean): string;
        end;

        function TSignature.Formatted(bHTML: boolean): string;
        begin
          if bHTML then
            result := HTMLFormat
          else
            result := PlainTextFormat;
          < SNIP MUCH CODE >
        end;

    OK, so I have a record within a record within a record, which is approaching Inception-level confusion, and I'm the first to admit it is not really a good model. Clearly I am going to have to restructure it. What I would like from you gurus is a better understanding of why it is trashing memory (something to do with the string object that is created then freed...) so that I can avoid making these kinds of errors in future. Thanks

  • How to use unset() for this Linear Linked List in PHP

    - by Peter
    I'm writing a simple linear linked list implementation in PHP. This is basically just for practice... part of a Project Euler problem. I'm not sure if I should be using unset() to help in garbage collection in order to avoid memory leaks. Should I include an unset() for head and temp in the destructor of LLL? I understand that I'll use unset() to delete nodes when I want, but is unset() necessary for general clean-up at any point? Is the memory freed once the script terminates even if you don't use unset()? I saw this SO question, but I'm still a little unclear. Is the answer that you simply don't have to use unset() to avoid any sort of memory leaks associated with creating references? I'm using PHP 5, btw.

        Unsetting references in PHP
        PHP references tutorial

    Here is the code - I'm creating references when I create $temp and $this->head at certain points in the LLL class:

        class Node {
            public $data;
            public $next;
        }

        class LLL {
            // The first node
            private $head;

            public function __construct() {
                $this->head = NULL;
            }

            public function insertFirst($data) {
                if (!$this->head) {
                    // Create the head
                    $this->head = new Node;
                    $temp =& $this->head;
                    $temp->data = $data;
                    $temp->next = NULL;
                } else {
                    // Add a node, and make it the new head.
                    $temp = new Node;
                    $temp->next = $this->head;
                    $temp->data = $data;
                    $this->head =& $temp;
                }
            }

            public function showAll() {
                echo "The linear linked list:<br/>&nbsp;&nbsp;";
                if ($this->head) {
                    $temp =& $this->head;
                    do {
                        echo $temp->data . " ";
                    } while ($temp =& $temp->next);
                } else {
                    echo "is empty.";
                }
                echo "<br/>";
            }
        }

    Thanks!
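
    (For what it's worth, a sketch of an explicit destructor, under the assumption that eager cleanup is wanted; with PHP 5's reference counting the engine frees everything at script termination anyway, so this is an optimization for long-running scripts rather than leak prevention:)

        public function __destruct() {
            // Walk the list and break each link so node refcounts drop
            // to zero right away instead of at script shutdown.
            $temp = $this->head;
            while ($temp !== NULL) {
                $next = $temp->next;
                $temp->next = NULL;
                $temp = $next;
            }
            $this->head = NULL;
        }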

  • How does the CLR (.NET) internally allocate and pass around custom value types (structs)?

    - by stakx
    Question: Do all CLR value types, including user-defined structs, live on the evaluation stack exclusively, meaning that they will never need to be reclaimed by the garbage collector, or are there cases where they are garbage-collected?

    Background: I have previously asked a question on SO about the impact that a fluent interface has on the runtime performance of a .NET application. I was particularly worried that creating a large number of very short-lived temporary objects would negatively affect runtime performance through more frequent garbage collection. Now it has occurred to me that if I declared those temporary objects' types as struct (i.e. as user-defined value types) instead of class, the garbage collector might not be involved at all, if it turns out that all value types live exclusively on the evaluation stack.

    What I've found out so far: I did a brief experiment to see what the differences are in the CIL generated for user-defined value types and reference types. This is my C# code:

        struct SomeValueType
        {
            public int X;
        }

        class SomeReferenceType
        {
            public int X;
        }
        ...
        static void TryValueType(SomeValueType vt) { ... }
        static void TryReferenceType(SomeReferenceType rt) { ... }
        ...
        var vt = new SomeValueType { X = 1 };
        var rt = new SomeReferenceType { X = 2 };
        TryValueType(vt);
        TryReferenceType(rt);

    And this is the CIL generated for the last four lines of code:

        .locals init (
            [0] valuetype SomeValueType vt,
            [1] class SomeReferenceType rt,
            [2] valuetype SomeValueType <>g__initLocal0,   //
            [3] class SomeReferenceType <>g__initLocal1,   // why are these generated?
            [4] valuetype SomeValueType CS$0$0000          //
        )
        L_0000: ldloca.s CS$0$0000
        L_0002: initobj SomeValueType   // no newobj required, instance already allocated
        L_0008: ldloc.s CS$0$0000
        L_000a: stloc.2
        L_000b: ldloca.s <>g__initLocal0
        L_000d: ldc.i4.1
        L_000e: stfld int32 SomeValueType::X
        L_0013: ldloc.2
        L_0014: stloc.0
        L_0015: newobj instance void SomeReferenceType::.ctor()
        L_001a: stloc.3
        L_001b: ldloc.3
        L_001c: ldc.i4.2
        L_001d: stfld int32 SomeReferenceType::X
        L_0022: ldloc.3
        L_0023: stloc.1
        L_0024: ldloc.0
        L_0025: call void Program::TryValueType(valuetype SomeValueType)
        L_002a: ldloc.1
        L_002b: call void Program::TryReferenceType(class SomeReferenceType)

    What I cannot figure out from this code is this:

    Where are all those local variables mentioned in the .locals block allocated? How are they allocated? How are they freed?

    Why are so many anonymous local variables needed and copied to and fro only to initialize my two local variables rt and vt?
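
    (To make the question's premise concrete: there are two standard cases where a value type's storage is on the GC heap rather than the stack. These are well-established CLR facts, shown as a small illustration rather than code from the original question:)

        struct Point { public int X, Y; }

        class Holder
        {
            // This Point lives inline inside a Holder object on the GC heap,
            // so its storage is reclaimed when the Holder is collected.
            public Point Position;
        }

        class Demo
        {
            static void Main()
            {
                Point p = new Point { X = 1, Y = 2 };  // local: stack/register storage
                object boxed = p;                      // boxing copies p onto the GC heap
                var h = new Holder();                  // h.Position is heap storage too
                System.Console.WriteLine(((Point)boxed).X + h.Position.Y);
            }
        }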

  • conflict in debian packages

    - by Alaa Alomari
    I have a Debian 4 server (I know it is very old):

        cat /etc/issue
        Debian GNU/Linux 4.0 \n \l

    I have the following in /etc/apt/sources.list:

        deb http://debian.uchicago.edu/debian/ stable main
        deb http://ftp.debian.org/debian/ stable main
        deb-src http://ftp.debian.org/debian/ stable main
        deb http://security.debian.org/ stable/updates main

        apt-get upgrade
        Reading package lists... Done
        Building dependency tree... Done
        You might want to run 'apt-get -f install' to correct these.
        The following packages have unmet dependencies.
          libt1-5: Depends: libc6 (>= 2.7) but 2.3.6.ds1-13etch10+b1 is installed
          locales: Depends: glibc-2.11-1 but it is not installable
        E: Unmet dependencies. Try using -f.

    Now it shows that I have Debian 6!!

        cat /etc/issue
        Debian GNU/Linux 6.0 \n \l

    EDIT: I have tried apt-get update:

        apt-get update
        Get: 1 http://debian.uchicago.edu stable Release.gpg [1672B]
        Hit http://debian.uchicago.edu stable Release
        Ign http://debian.uchicago.edu stable/main Packages/DiffIndex
        Hit http://debian.uchicago.edu stable/main Packages
        Get: 2 http://security.debian.org stable/updates Release.gpg [836B]
        Hit http://security.debian.org stable/updates Release
        Get: 3 http://ftp.debian.org stable Release.gpg [1672B]
        Ign http://security.debian.org stable/updates/main Packages/DiffIndex
        Hit http://security.debian.org stable/updates/main Packages
        Hit http://ftp.debian.org stable Release
        Ign http://ftp.debian.org stable/main Packages/DiffIndex
        Ign http://ftp.debian.org stable/main Sources/DiffIndex
        Hit http://ftp.debian.org stable/main Packages
        Hit http://ftp.debian.org stable/main Sources
        Fetched 3B in 0s (3B/s)
        Reading package lists... Done

        apt-get dist-upgrade
        Reading package lists... Done
        Building dependency tree... Done
        You might want to run 'apt-get -f install' to correct these.
        The following packages have unmet dependencies.
          libt1-5: Depends: libc6 (>= 2.7) but 2.3.6.ds1-13etch10+b1 is installed
          locales: Depends: glibc-2.11-1
        E: Unmet dependencies. Try using -f.

        apt-get -f install
        Reading package lists... Done
        Building dependency tree... Done
        Correcting dependencies... Done
        The following extra packages will be installed:
          gcc-4.4-base libbsd-dev libbsd0 libc-bin libc-dev-bin libc6
        Suggested packages:
          glibc-doc
        Recommended packages:
          libc6-i686
        The following packages will be REMOVED
          libc6-dev libedit-dev libexpat1-dev libgcrypt11-dev libjpeg62-dev
          libmcal0-dev libmhash-dev libncurses5-dev libpam0g-dev libsablot0-dev
          libtool libttf-dev
        The following NEW packages will be installed
          gcc-4.4-base libbsd-dev libbsd0 libc-bin libc-dev-bin
        The following packages will be upgraded:
          libc6
        1 upgraded, 5 newly installed, 12 to remove and 349 not upgraded.
        7 not fully installed or removed.
        Need to get 0B/5050kB of archives.
        After unpacking 23.1MB disk space will be freed.
        Do you want to continue [Y/n]? y
        Preconfiguring packages ...
        dpkg: regarding .../libc-bin_2.11.3-2_i386.deb containing libc-bin:
         package uses Breaks; not supported in this dpkg
        dpkg: error processing /var/cache/apt/archives/libc-bin_2.11.3-2_i386.deb (--unpack):
         unsupported dependency problem - not installing libc-bin
        Errors were encountered while processing:
         /var/cache/apt/archives/libc-bin_2.11.3-2_i386.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Now: it seems there is a conflict! How can I fix it? And is it true that the server has become Debian 6?! Thanks for your help.
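
    (The root cause, for anyone hitting the same thing: 'stable' in sources.list is a moving alias, so when Debian promoted a new release, this etch-era system suddenly started pulling Debian 6 packages. Pointing the sources at the release codename instead avoids the jump; for Debian 4.0 (etch) that now means the archive servers - a sketch, adjust mirrors as appropriate:)

        deb http://archive.debian.org/debian/ etch main
        deb-src http://archive.debian.org/debian/ etch main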

  • Ubuntu 14.04, OpenLDAP TLS problems

    - by larsemil
    So I have set up an OpenLDAP server using this guide here. It worked fine. But as I want to use sssd, I also need TLS to be working for LDAP. So I looked into it and followed the TLS part of the guide. I never got any errors, and slapd started fine again. BUT: it does not seem to work when I try to use LDAP over TLS.

        root@server:~# ldapsearch -x -ZZ -H ldap://83.209.243.253 -b dc=daladevelop,dc=se
        ldap_start_tls: Protocol error (2)
                additional info: unsupported extended operation

    Cranking up the debug level some notches returns some more information:

        root@server:~# ldapsearch -x -ZZ -H ldap://83.209.243.253 -b dc=daladevelop,dc=se -d 5
        ldap_url_parse_ext(ldap://83.209.243.253)
        ldap_create
        ldap_url_parse_ext(ldap://83.209.243.253:389/??base)
        ldap_extended_operation_s
        ldap_extended_operation
        ldap_send_initial_request
        ldap_new_connection 1 1 0
        ldap_int_open_connection
        ldap_connect_to_host: TCP 83.209.243.253:389
        ldap_new_socket: 3
        ldap_prepare_socket: 3
        ldap_connect_to_host: Trying 83.209.243.253:389
        ldap_pvt_connect: fd: 3 tm: -1 async: 0
        ldap_open_defconn: successful
        ldap_send_server_request
        ber_scanf fmt ({it) ber:
        ber_scanf fmt ({) ber:
        ber_flush2: 31 bytes to sd 3
        ldap_result ld 0x7f25df51e220 msgid 1
        wait4msg ld 0x7f25df51e220 msgid 1 (infinite timeout)
        wait4msg continue ld 0x7f25df51e220 msgid 1 all 1
        ** ld 0x7f25df51e220 Connections:
        * host: 83.209.243.253  port: 389 (default)  refcnt: 2  status: Connected
          last used: Fri Jun  6 08:52:16 2014
        ** ld 0x7f25df51e220 Outstanding Requests:
        * msgid 1, origid 1, status InProgress
          outstanding referrals 0, parent count 0
        ld 0x7f25df51e220 request count 1 (abandoned 0)
        ** ld 0x7f25df51e220 Response Queue:
           Empty
        ld 0x7f25df51e220 response count 0
        ldap_chkResponseList ld 0x7f25df51e220 msgid 1 all 1
        ldap_chkResponseList returns ld 0x7f25df51e220 NULL
        ldap_int_select
        read1msg: ld 0x7f25df51e220 msgid 1 all 1
        ber_get_next
        ber_get_next: tag 0x30 len 42 contents:
        read1msg: ld 0x7f25df51e220 msgid 1 message type extended-result
        ber_scanf fmt ({eAA) ber:
        read1msg: ld 0x7f25df51e220 0 new referrals
        read1msg: mark request completed, ld 0x7f25df51e220 msgid 1
        request done: ld 0x7f25df51e220 msgid 1
        res_errno: 2, res_error: <unsupported extended operation>, res_matched: <>
        ldap_free_request (origid 1, msgid 1)
        ldap_parse_extended_result
        ber_scanf fmt ({eAA) ber:
        ldap_parse_result
        ber_scanf fmt ({iAA) ber:
        ber_scanf fmt (}) ber:
        ldap_msgfree
        ldap_err2string
        ldap_start_tls: Protocol error (2)
                additional info: unsupported extended operation
        ldap_free_connection 1 1
        ldap_send_unbind
        ber_flush2: 7 bytes to sd 3
        ldap_free_connection: actually freed

    So no good information there either. In /var/log/syslog I get:

        Jun  6 08:55:42 master slapd[21383]: conn=1008 fd=23 ACCEPT from IP=83.209.243.253:56440 (IP=0.0.0.0:389)
        Jun  6 08:55:42 master slapd[21383]: conn=1008 op=0 EXT oid=1.3.6.1.4.1.1466.20037
        Jun  6 08:55:42 master slapd[21383]: conn=1008 op=0 do_extended: unsupported operation "1.3.6.1.4.1.1466.20037"
        Jun  6 08:55:42 master slapd[21383]: conn=1008 op=0 RESULT tag=120 err=2 text=unsupported extended operation
        Jun  6 08:55:42 master slapd[21383]: conn=1008 op=1 UNBIND
        Jun  6 08:55:42 master slapd[21383]: conn=1008 fd=23 closed

    If I portscan the host I get the following:

        Starting Nmap 6.40 ( http://nmap.org ) at 2014-06-06 08:56 CEST
        Nmap scan report for h83-209-243-253.static.se.alltele.net (83.209.243.253)
        Host is up (0.0072s latency).
        Not shown: 996 closed ports
        PORT    STATE SERVICE
        22/tcp  open  ssh
        80/tcp  open  http
        389/tcp open  ldap
        636/tcp open  ldapssl

    But when I check certs:

        root@master:~# openssl s_client -connect daladevelop.se:636 -showcerts -state
        CONNECTED(00000003)
        SSL_connect:before/connect initialization
        SSL_connect:unknown state
        140244859233952:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:177:
        ---
        no peer certificate available
        ---
        No client certificate CA names sent
        ---
        SSL handshake has read 0 bytes and written 317 bytes
        ---
        New, (NONE), Cipher is (NONE)
        Secure Renegotiation IS NOT supported
        Compression: NONE
        Expansion: NONE
        ---

    And I feel like I am clearly out in deep water, not knowing at all where to go from here. Any hints appreciated on what to do or how to get better debug logging...

    EDIT: This is my config slapcat'ed from cn=config, and it does not mention anything about TLS at all. I have inserted my certinfo.ldif:

        root@master:~# cat certinfo.ldif
        dn: cn=config
        add: olcTLSCACertificateFile
        olcTLSCACertificateFile: /etc/ssl/certs/cacert.pem
        -
        add: olcTLSCertificateFile
        olcTLSCertificateFile: /etc/ssl/certs/daladevelop_slapd_cert.pem
        -
        add: olcTLSCertificateKeyFile
        olcTLSCertificateKeyFile: /etc/ssl/private/daladevelop_slapd_key.pem

    and when doing that I only got this as an answer:

        root@master:~# sudo ldapmodify -Y EXTERNAL -H ldapi:/// -f certinfo.ldif
        SASL/EXTERNAL authentication started
        SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
        SASL SSF: 0
        modifying entry "cn=config"

    So still no wiser.
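
    (One quick check worth running here, standard OpenLDAP tooling rather than anything specific to this setup: confirm the TLS attributes actually landed on the cn=config entry, since ldapmodify prints 'modifying entry "cn=config"' even when the change was later rejected or the running slapd never picked it up:)

        ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config -s base \
            olcTLSCACertificateFile olcTLSCertificateFile olcTLSCertificateKeyFile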

  • unmet dependencies in Ubuntu 12.04

    - by lee.O
    I tried today to install a DVB card on my Ubuntu 12.04 (Linux blauhai-linux 3.2.0-25-generic #40-Ubuntu SMP Wed May 23 20:30:51 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux). The installation failed with an error. After that, I tried to install python (it was already installed, but I got this error):

        linux:~$ sudo apt-get install git
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        git is already the newest version.
        You might want to run 'apt-get -f install' to correct these:
        The following packages have unmet dependencies.
          python-glade2:i386 : Depends: python:i386 (<< 2.5) but it is not going to be installed
                               Depends: python-support:i386 (>= 0.3.4) but it is not installable
                               Depends: python:i386 (>= 2.4) but it is not going to be installed
                               Depends: libglade2-0:i386 (>= 1:2.5.1) but it is not going to be installed
                               Depends: python-gtk2:i386 (>= 2.8.6-8) but it is not going to be installed
          python-numeric:i386 : Depends: python:i386 (<< 2.5) but it is not going to be installed
                                Depends: python:i386 (>= 2.3) but it is not going to be installed
                                Depends: python-central:i386 (>= 0.5.7) but it is not installable
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    Well, I can read, so I tried the proposed command, but then I get this:

        linux:~$ sudo apt-get -f install
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following packages were automatically installed and are no longer required:
          libopenal1:i386 libsdl-ttf2.0-0:i386 libkrb5-3:i386 libgconf-2-4:i386 libsm-dev libatk1.0-0:i386 libk5crypto3:i386 libstdc++5:i386 libqt4-declarative:i386 libxcomposite1:i386 libice-dev libgail18:i386 libldap-2.4-2:i386 libao-common libv4l-0:i386 liblcms1:i386 libqt4-qt3support:i386 libroken18-heimdal:i386 libunistring0:i386 libcupsimage2:i386 libgphoto2-port0:i386 libidn11:i386 libnss3:i386 libcaca0:i386 gtk2-engines:i386 libgudev-1.0-0:i386 libjpeg-turbo8:i386 libpthread-stubs0 libcairo-gobject2:i386 libavc1394-0:i386 libjpeg8:i386 libotr2 libaio1:i386 libsane:i386 odbcinst1debian2 odbcinst1debian2:i386 libqt4-test:i386 libqt4-script:i386 libqt4-designer:i386 libsdl-mixer1.2:i386 libqt4-network:i386 libqt4-dbus:i386 libcap2:i386 libproxy1:i386 ibus-gtk:i386 libdbus-glib-1-2:i386 libtdb1:i386 libasn1-8-heimdal:i386 libspeex1:i386 libxslt1.1:i386 libgomp1:i386 libcapi20-3:i386 libibus-1.0-0:i386 libcairo2:i386 libgnutls26:i386 libopenal-data odbcinst libgssapi3-heimdal:i386 libcanberra0:i386 libtasn1-3:i386 libfreetype6:i386 x11proto-kb-dev gtk2-engines-murrine:i386 libwavpack1:i386 libqt4-opengl:i386 libsoup-gnome2.4-1:i386 libv4lconvert0:i386 gstreamer0.10-plugins-good:i386 libc6-i386 lib32gcc1 libqt4-xmlpatterns:i386 librsvg2-common:i386 libdatrie1:i386 xtrans-dev libavahi-common-data:i386 libiec61883-0:i386 lib32asound2 libgdk-pixbuf2.0-0:i386 libsdl-image1.2:i386 libp11-kit0:i386 x11proto-input-dev libwind0-heimdal:i386 libpixman-1-0:i386 libsdl1.2debian:i386 libxaw7:i386 libgdbm3:i386 libcups2:i386 libcurl3:i386 libqtcore4:i386 libxinerama1:i386 libesd0:i386 libmikmod2:i386 libkrb5support0:i386 libxft2:i386 libxt-dev libcroco3:i386 libpulse-mainloop-glib0:i386 libice6:i386 libaa1:i386 libieee1284-3:i386 libgcrypt11:i386 libthai0:i386 libao4:i386 libkeyutils1:i386 libxmu6:i386 libcanberra-gtk0:i386 libvorbisfile3:i386 libqt4-sql:i386 esound-common libxpm4:i386 libqt4-svg:i386 libusb-0.1-4:i386 libgail-common:i386 libxrender1:i386 libhcrypto4-heimdal:i386
          libraw1394-11:i386 libnspr4:i386 libshout3:i386 libdv4:i386 libhx509-5-heimdal:i386 libxau-dev libqt4-xml:i386 gstreamer0.10-x:i386 libgettextpo0:i386 libxss1:i386 libgd2-xpm:i386 libheimbase1-heimdal:i386 libtiff4:i386 libsdl-net1.2:i386 libjasper1:i386 libgnome-keyring0:i386 libxtst6:i386 gtk2-engines-pixbuf:i386 libqtgui4:i386 libtag1c2a:i386 librsvg2-2:i386 libavahi-client3:i386 libssl0.9.8:i386 libmpg123-0:i386 libmad0:i386 libsasl2-2:i386 xorg-sgml-doctools libgsoap1 gtk2-engines-oxygen:i386 libfontconfig1:i386 xaw3dg:i386 libpango1.0-0:i386 libsm6:i386 libx11-dev libheimntlm0-heimdal:i386 libpulsedsp:i386 lib32stdc++6 libx11-doc libqt4-sql-mysql:i386 libxcb-render0:i386 libodbc1:i386 libexif12:i386 libqt4-scripttools:i386 librtmp0:i386 libgssapi-krb5-2:i386 libxi6:i386 libqtwebkit4:i386 libxcb1-dev libxp6:i386 libaudio2:i386 libxcursor1:i386 libxcb-shm0:i386 libxt6:i386 libxv1:i386 libsasl2-modules:i386 libavahi-common3:i386 libxrandr2:i386 x11proto-core-dev libsqlite3-0:i386 libmng1:i386 libgtk2.0-0:i386 libxdmcp-dev libpthread-stubs0-dev libltdl7:i386 libkrb5-26-heimdal:i386 libssl1.0.0:i386 glib-networking:i386 libgpg-error0:i386 libsoup2.4-1:i386 libgphoto2-2:i386 libtag1-vanilla:i386 libaudiofile1:i386 libglade2-0:i386
        Use 'apt-get autoremove' to remove them.
        The following extra packages will be installed:
          default-jre default-jre-headless icedtea-6-jre-cacao icedtea-6-jre-jamvm icedtea-netx icedtea-netx-common libglade2-0:i386 libpython3.2 openjdk-6-jre openjdk-6-jre-headless openjdk-6-jre-lib python3 python3-minimal python3-uno python3.2 python3.2-minimal
        Suggested packages:
          icedtea-plugin sun-java6-fonts fonts-ipafont-gothic fonts-ipafont-mincho ttf-telugu-fonts ttf-oriya-fonts ttf-kannada-fonts ttf-bengali-fonts python3-doc python3-tk python3.2-doc binfmt-support
        The following packages will be REMOVED:
          activity-log-manager-control-center aisleriot alacarte apparmor apport apport-gtk apt-xapian-index aptdaemon apturl apturl-common bluez bluez-alsa bluez-alsa:i386 bluez-gstreamer checkbox checkbox-qt command-not-found compiz compiz-gnome compiz-plugins-main-default compizconfig-backend-gconf deja-dup duplicity eog evolution-data-server firefox firefox-globalmenu firefox-gnome-support foomatic-db-compressed-ppds gconf-editor gconf2 gdb gedit gir1.2-mutter-3.0 gir1.2-peas-1.0 gir1.2-rb-3.0 gir1.2-totem-1.0 gir1.2-ubuntuoneui-3.0 gksu gnome-applets gnome-applets-data gnome-bluetooth gnome-contacts gnome-control-center gnome-media gnome-menus gnome-orca gnome-panel gnome-panel-data gnome-session-fallback gnome-shell gnome-sudoku gnome-terminal gnome-terminal-data gnome-themes-standard gnome-tweak-tool gnome-user-share gstreamer0.10-gconf gwibber gwibber-service gwibber-service-facebook gwibber-service-identica gwibber-service-twitter hplip hplip-data ia32-libs ia32-libs-multiarch:i386 ibus ibus-pinyin ibus-table indicator-datetime indicator-power jockey-common jockey-gtk landscape-client-ui-install language-selector-common language-selector-gnome launchpad-integration libcanberra-gtk-module libcanberra-gtk-module:i386 libcanberra-gtk3-module libcompizconfig0 libfolks-eds25 libgksu2-0 libgnome-media-profiles-3.0-0 libgnome2-0 libgnome2-common libgnomevfs2-0 libgnomevfs2-common libgweather-3-0 libgweather-common libgwibber-gtk2 libgwibber2 libmetacity-private0 libmutter0 libpeas-1.0-0 libpurple-bin libpython2.7 libreoffice-gnome librhythmbox-core5 libsyncdaemon-1.0-1 libtotem0 libubuntuoneui-3.0-1 light-themes lsb-release metacity metacity-common mutter-common nautilus-dropbox
          nautilus-share network-manager-gnome nvidia-common nvidia-settings nvidia-settings-updates onboard oneconf openjdk-7-jdk openjdk-7-jre openprinting-ppds pidgin pidgin-libnotify pidgin-otr printer-driver-foo2zjs printer-driver-ptouch printer-driver-pxljr printer-driver-sag-gdi printer-driver-splix python python-appindicator python-apport python-apt python-apt-common python-aptdaemon python-aptdaemon.gtk3widgets python-aptdaemon.pkcompat python-brlapi python-cairo python-central python-chardet python-configglue python-crypto python-cups python-cupshelpers python-dateutil python-dbus python-debian python-debtagshw python-defer python-dirspec python-egenix-mxdatetime python-egenix-mxtools python-gconf python-gdbm python-gi python-gi-cairo python-glade2:i386 python-gmenu python-gnomekeyring python-gnupginterface python-gobject python-gobject-2 python-gpgme python-gst0.10 python-gtk2 python-httplib2 python-ibus python-imaging python-keyring python-launchpadlib python-lazr.restfulclient python-lazr.uri python-libproxy python-libxml2 python-louis python-mako python-markupsafe python-minimal python-notify python-numeric:i386 python-oauth python-openssl python-packagekit python-pam python-pexpect python-piston-mini-client python-pkg-resources python-problem-report python-protobuf python-pyatspi2 python-pycurl python-pyinotify python-renderpm python-reportlab python-reportlab-accel python-serial python-simplejson python-smbc python-software-properties python-speechd python-twisted-bin python-twisted-core python-twisted-names python-twisted-web python-ubuntu-sso-client python-ubuntuone-client python-ubuntuone-control-panel python-ubuntuone-storageprotocol python-uno python-virtkey python-wadllib python-xapian python-xdg python-xkit python-zeitgeist python-zope.interface python2.7 python2.7-minimal rhythmbox rhythmbox-mozilla rhythmbox-plugin-cdrecorder rhythmbox-plugin-magnatune rhythmbox-plugin-zeitgeist rhythmbox-plugins rhythmbox-ubuntuone screen-resolution-extra sessioninstaller skype software-center software-center-aptdaemon-plugins software-properties-common software-properties-gtk system-config-printer-common system-config-printer-gnome system-config-printer-udev texlive-extra-utils totem totem-mozilla totem-plugins ubuntu-artwork ubuntu-desktop ubuntu-minimal ubuntu-sso-client ubuntu-sso-client-gtk ubuntu-standard ubuntu-system-service ubuntuone-client ubuntuone-client-gnome ubuntuone-control-panel ubuntuone-couch ubuntuone-installer ufw unattended-upgrades unity unity-2d unity-common unity-lens-applications unity-lens-video unity-scope-musicstores unity-scope-video-remote update-manager update-manager-core update-notifier update-notifier-common usb-creator-common usb-creator-gtk virtualbox virtualbox-dkms virtualbox-qt xdiagnose xul-ext-ubufox zeitgeist zeitgeist-core zeitgeist-datahub
        The following NEW packages will be installed:
          default-jre default-jre-headless icedtea-6-jre-cacao icedtea-6-jre-jamvm icedtea-netx icedtea-netx-common libglade2-0:i386 libpython3.2 openjdk-6-jre openjdk-6-jre-headless openjdk-6-jre-lib python3 python3-minimal python3-uno python3.2 python3.2-minimal
        WARNING: The following essential packages will be removed.
        This should NOT be done unless you know exactly what you are doing!
          python-minimal python2.7-minimal (due to python-minimal)
        0 upgraded, 16 newly installed, 273 to remove and 0 not upgraded.
        2 not fully installed or removed.
        Need to get 39.1 MB of archives.
        After this operation, 324 MB disk space will be freed.
        You are about to do something potentially harmful.
        To continue type in the phrase 'Yes, do as I say!'
         ?]

    That's not good, is it?! Should I run this command, or should I run another command to fix this problem? Would be great if somebody can help me. :) Thanks in advance. Best regards
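
    (A sanity check that costs nothing before answering that prompt either way - standard apt tooling, not from the original post - is to see where the two offending i386 packages come from and which versions apt can see; packages that show only an /var/lib/dpkg/status entry and no candidate usually just need removing rather than "fixing":)

        apt-cache policy python-glade2:i386 python-numeric:i386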

  • Online ALTER TABLE in MySQL 5.6

    - by Marko Mäkelä
    This is the low-level view of data definition language (DDL) operations in the InnoDB storage engine in MySQL 5.6. John Russell gave a more high-level view in his blog post April 2012 Labs Release - Online DDL Improvements.

    MySQL before the InnoDB Plugin

    Traditionally, the MySQL storage engine interface has taken a minimalistic approach to data definition language. The only natively supported operations were CREATE TABLE, DROP TABLE and RENAME TABLE. Consider the following example:

        CREATE TABLE t(a INT);
        INSERT INTO t VALUES (1),(2),(3);
        CREATE INDEX a ON t(a);
        DROP TABLE t;

    The CREATE INDEX statement would be executed roughly as follows:

        CREATE TABLE temp(a INT, INDEX(a));
        INSERT INTO temp SELECT * FROM t;
        RENAME TABLE t TO temp2;
        RENAME TABLE temp TO t;
        DROP TABLE temp2;

    You could imagine that the database could crash when copying all rows from the original table to the new one. For example, it could run out of file space. Then, on restart, InnoDB would roll back the huge INSERT transaction. To fix things a little, a hack was added to ha_innobase::write_row for committing the transaction every 10,000 rows. Still, it was frustrating that even a simple DROP INDEX would make the table unavailable for modifications for a long time.

    Fast Index Creation in the InnoDB Plugin of MySQL 5.1

    MySQL 5.1 introduced a new interface for CREATE INDEX and DROP INDEX. The old table-copying approach can still be forced by SET old_alter_table=1. This interface is used in MySQL 5.5 and in the InnoDB Plugin for MySQL 5.1. Apart from the ability to do a quick DROP INDEX, the main advantage is that InnoDB will execute a merge-sort algorithm before inserting the index records into each index that is being created. This should speed up the insert into the secondary index B-trees and potentially result in a better B-tree fill factor.

    The 5.1 ALTER TABLE interface was not perfect. For example, DROP FOREIGN KEY still invoked the table copy. Renaming columns could conflict with InnoDB foreign key constraints. Combining ADD KEY and DROP KEY in ALTER TABLE was problematic and not atomic inside the storage engine.

    The ALTER TABLE interface in MySQL 5.6

    The ALTER TABLE storage engine interface was completely rewritten in MySQL 5.6. Instead of introducing a method call for every conceivable operation, MySQL 5.6 introduced a handful of methods, and data structures that keep track of the requested changes.

    In MySQL 5.6, an online ALTER TABLE operation can be requested by specifying LOCK=NONE. Also LOCK=SHARED and LOCK=EXCLUSIVE are available. The old-style table copying can be requested by ALGORITHM=COPY. That one will require at least LOCK=SHARED. From the InnoDB point of view, anything that is possible with LOCK=EXCLUSIVE is also possible with LOCK=SHARED.

    Most ALGORITHM=INPLACE operations inside InnoDB can be executed online (LOCK=NONE). InnoDB will always require an exclusive table lock in two phases of the operation. The execution phases are tied to a number of methods:

    handler::check_if_supported_inplace_alter
    Checks if the storage engine can perform all requested operations, and if so, what kind of locking is needed.

    handler::prepare_inplace_alter_table
    InnoDB uses this method to set up the data dictionary cache for an upcoming CREATE INDEX operation. We need stubs for the new indexes, so that we can keep track of changes to the table during online index creation. Also, crash recovery would drop any indexes that were incomplete at the time of the crash.

    handler::inplace_alter_table
    In InnoDB, this method is used for creating secondary indexes or for rebuilding the table. This is the 'main' phase that can be executed online (with concurrent writes to the table).

    handler::commit_inplace_alter_table
    This is where the operation is committed or rolled back. Here, InnoDB would drop any indexes, rename any columns, drop or add foreign keys, and finalize a table rebuild or index creation. It would also discard any logs that were set up for online index creation or table rebuild.

    The prepare and commit phases require an exclusive lock, blocking all access to the table. If MySQL times out while upgrading the table meta-data lock for the commit phase, it will roll back the ALTER TABLE operation.

    In MySQL 5.6, data definition language operations are still not fully atomic, because the data dictionary is split. Part of it is inside InnoDB data dictionary tables. Part of the information is only available in the *.frm file, which is not covered by any crash recovery log. But, there is a single commit phase inside the storage engine.

    Online Secondary Index Creation

    It may occur that an index needs to be created on a new column to speed up queries. But, it may be unacceptable to block modifications on the table while creating the index. It turns out that it is conceptually not so hard to support online index creation. All we need is some more execution phases:

    1. Set up a stub for the index, for logging changes.
    2. Scan the table for index records.
    3. Sort the index records.
    4. Bulk load the index records.
    5. Apply the logged changes.
    6. Replace the stub with the actual index.

    Threads that modify the table will log the operations to the logs of each index that is being created. Errors, such as log overflow or uniqueness violations, will only be flagged by the ALTER TABLE thread. The log is conceptually similar to the InnoDB change buffer.

    The bulk load of index records will bypass record locking. We still generate redo log for writing the index pages. It would suffice to log page allocations only, and to flush the index pages from the buffer pool to the file system upon completion.

    Native ALTER TABLE

    Starting with MySQL 5.6, InnoDB supports most ALTER TABLE operations natively. The notable exceptions are changes to the column type, ADD FOREIGN KEY except when foreign_key_checks=0, and changes to tables that contain FULLTEXT indexes.

    The keyword ALGORITHM=INPLACE is somewhat misleading, because certain operations cannot be performed in-place. For example, changing the ROW_FORMAT of a table requires a rebuild.

    Online operation (LOCK=NONE) is not allowed in the following cases:

    - when adding an AUTO_INCREMENT column,
    - when the table contains FULLTEXT indexes or a hidden FTS_DOC_ID column, or
    - when there are FOREIGN KEY constraints referring to the table, with ON...CASCADE or ON...SET NULL option.

    The FOREIGN KEY limitations are needed because MySQL does not acquire meta-data locks on the child or parent tables when executing SQL statements.

    Theoretically, InnoDB could support operations like ADD COLUMN and DROP COLUMN in-place, by lazily converting the table to a newer format. This would require that the data dictionary keep multiple versions of the table definition. For simplicity, we will copy the entire table, even for DROP COLUMN.

    The bulk copying of the table will bypass record locking and undo logging. For facilitating online operation, a temporary log will be associated with the clustered index of the table. Threads that modify the table will also write their changes to the log.

    When altering the table, we skip all records that have been marked for deletion. In this way, we can simply discard any undo log records that were not yet purged from the original table. Off-page columns, or BLOBs, are an important consideration. We suspend the purge of delete-marked records if it would free any off-page columns from the old table. This is because the BLOBs can be needed when applying changes from the log. We have special logging for handling the ROLLBACK of an INSERT that inserted new off-page columns. This is because the columns will be freed at rollback.
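
    (As a concrete illustration of the 5.6 syntax described above, using the table t from the earlier example; both statements are sketches of the documented ALGORITHM/LOCK clauses:)

        -- Build a secondary index online: concurrent writes to t are allowed
        -- during the main phase; the table is locked only briefly in the
        -- prepare and commit phases.
        ALTER TABLE t ADD INDEX a (a), ALGORITHM=INPLACE, LOCK=NONE;

        -- Changing ROW_FORMAT is ALGORITHM=INPLACE but not literally in-place:
        -- the whole table is rebuilt in the background (COMPRESSED also needs
        -- innodb_file_per_table and the Barracuda file format).
        ALTER TABLE t ROW_FORMAT=COMPRESSED, ALGORITHM=INPLACE, LOCK=NONE;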

  • Data Source Connection Pool Sizing

    - by Steve Felts
    One of the most time-consuming procedures of a database application is establishing a connection. The connection pooling of the data source can be used to minimize this overhead. That argues for using the data source instead of accessing the database driver directly.

    Configuring the size of the pool in the data source is somewhere between an art and a science - this article will try to move it closer to science.

    From the beginning, the WLS data source has had initial capacity and maximum capacity configuration values. When the system starts up and when it shrinks, initial capacity is used. The pool can grow to maximum capacity. Customers found that they might want to set the initial capacity to 0 (more on that later) but didn't want the pool to shrink to 0. In WLS 10.3.6, we added minimum capacity to specify the lower limit to which a pool will shrink. If minimum capacity is not set, it defaults to the initial capacity for upward compatibility. We also did some work on the shrinking in release 10.3.4 to reduce thrashing; the algorithm that used to shrink to the maximum of the currently used connections or the initial capacity (basically, the unused connections were all released) was changed to shrink by half of the unused connections.

    The simple approach to sizing the pool is to set the initial/minimum capacity to the maximum capacity. Doing this creates all connections at startup, avoiding creating connections on demand, and the pool is stable. However, there are a number of reasons not to take this simple approach.

    When WLS is booted, the deployment of the data source includes synchronously creating the connections. The more connections that are configured in initial capacity, the longer the boot time for WLS (there have been several projects for parallel boot in WLS, but none that are available). Related to creating a lot of connections at boot time is the problem of logon storms (the database gets too much work at one time). WLS has a solution for that by setting the login delay seconds on the pool, but that also increases the boot time.

    There are a number of cases where it is desirable to set the initial capacity to 0. By doing that, the overhead of creating connections is deferred out of the boot, and the database doesn't need to be available. An application may not want WLS to automatically connect to the database until it is actually needed, such as for some cold/warm failover configurations.

    There are a number of cases where minimum capacity should be less than maximum capacity. Connections are generally expensive to keep around. They cause state to be kept on both the client and the server, and the state on the backend may be heavy (for example, a process). Depending on the vendor, connection usage may cost money. If the workload is not constant, then database connections can be freed up by shrinking the pool when connections are not in use. When using Active GridLink, connections can be created as needed according to runtime load balancing (RLB) percentages instead of by connection load balancing (CLB) during data source deployment.

    Shrinking is an effective technique for clearing the pool when connections are not in use. In addition to the obvious reason that there are times when the workload is lighter, there are some configurations where the database and/or firewall conspire to make long-unused or too-old connections no longer viable. There are also some data source features where the connection has state and cannot be used again unless the state matches the request. Examples of this are identity-based pooling, where the connection has a particular owner, and XA affinity, where the connection is associated with a particular RAC node. At this point, WLS does not re-purpose (discard/replace) connections, and shrinking is a way to get rid of the unused existing connection and get a new one with the correct state when needed.

    So far, the discussion has focused on the relationship of initial, minimum, and maximum capacity. Computing the maximum size requires some knowledge about the application and the current number of simultaneously active users, web sessions, batch programs, or whatever access patterns are common. Applications should be written to only reserve and close connections as needed, but multiple statements, if needed, should be done in one reservation (don't get/close more often than necessary). This means that the size of the pool is likely to be significantly smaller than the number of users.

    If possible, you can pick a size and see how it performs under simulated or real load. There is a high-water-mark statistic (ActiveConnectionsHighCount) that tracks the maximum connections concurrently used. In general, you want the size to be big enough that you never run out of connections, but no bigger. It will need to deal with spikes in usage, which is where shrinking after the spike is important. Of course, the database capacity also has a big influence on the decision, since it's important not to overload the database machine. Planning also needs to happen if you are running in a Multi Data Source or Active GridLink configuration and expect that the remaining nodes will take over the connections when one of the nodes in the cluster goes down. For XA affinity, additional headroom is also recommended.

    In summary, setting initial and maximum capacity to be the same may be simple, but there are many other factors that may be important in making the decision about sizing.
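
    (For reference, these knobs live in the connection pool parameters of the JDBC module descriptor. The snippet below is a sketch: the values are illustrative, and the min-capacity element assumes the 10.3.6+ schema described above - check the schema for your release:)

        <jdbc-connection-pool-params>
          <!-- defer all connection creation out of WLS boot -->
          <initial-capacity>0</initial-capacity>
          <!-- floor for shrinking (assumes the 10.3.6+ minimum capacity setting) -->
          <min-capacity>5</min-capacity>
          <!-- sized from observed ActiveConnectionsHighCount plus headroom -->
          <max-capacity>30</max-capacity>
          <!-- spread out connection creation to soften logon storms -->
          <login-delay-seconds>1</login-delay-seconds>
        </jdbc-connection-pool-params>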

    Read the article

  • vSphere ESX 5.5 hosts cannot connect to NFS Server

    - by Gerald
Summary: My problem is that I cannot use the QNAP NFS server as an NFS datastore from my ESX hosts, despite the hosts being able to ping it. I'm utilising a vDS with LACP uplinks for all my network traffic (including NFS) and a subnet for each vmkernel adapter.

Setup: I'm evaluating vSphere and I've got two vSphere ESX 5.5 hosts (node1 and node2), each with 4x NICs. I've teamed them all up using LACP/802.3ad with my switch and then created a distributed switch between the two hosts with each host's LAG as the uplink. All my networking is going through the distributed switch; ideally, I want to take advantage of DRS and the redundancy. I have a domain controller VM ("Central") and vCenter VM ("vCenter") running on node1 (using node1's local datastore) with both hosts attached to the vCenter instance. Both hosts are in a vCenter datacenter and a cluster with HA and DRS currently disabled. I have a QNAP TS-669 Pro (version 4.0.3; the TS-x69 series is on the VMware Storage HCL) which I want to use as the NFS server for my NFS datastore; it has 2x NICs teamed together using 802.3ad with my switch.

vmkernel.log: The error from the host's vmkernel.log is not very useful:

NFS: 157: Command: (mount) Server: (10.1.2.100) IP: (10.1.2.100) Path: (/VM) Label (datastoreNAS) Options: (None)
cpu9:67402)StorageApdHandler: 698: APD Handle 509bc29f-13556457 Created with lock[StorageApd0x411121]
cpu10:67402)StorageApdHandler: 745: Freeing APD Handle [509bc29f-13556457]
cpu10:67402)StorageApdHandler: 808: APD Handle freed!
cpu10:67402)NFS: 168: NFS mount 10.1.2.100:/VM failed: Unable to connect to NFS server.

Network Setup: Here is my distributed switch setup (JPG). Here are my networks:

10.1.1.0/24 VM Management (VLAN 11)
10.1.2.0/24 Storage Network (NFS, VLAN 12)
10.1.3.0/24 VM vMotion (VLAN 13)
10.1.4.0/24 VM Fault Tolerance (VLAN 14)
10.2.0.0/24 VM's Network (VLAN 20)

vSphere addresses:

10.1.1.1 node1 Management
10.1.1.2 node2 Management
10.1.2.1 node1 vmkernel (for NFS)
10.1.2.2 node2 vmkernel (for NFS)
etc.

Other addresses:

10.1.2.100 QNAP TS-669 (NFS server)
10.2.0.1 Domain Controller (VM on node1)
10.2.0.2 vCenter (VM on node1)

I'm using a Cisco SRW2024P layer-2 switch (jumbo frames enabled) with the following setup:

LACP LAG1 for node1 (ports 1 through 4), set up as a VLAN trunk for VLANs 11-14, 20
LACP LAG2 for my router (ports 5 through 8), set up as a VLAN trunk for VLANs 11-14, 20
LACP LAG3 for node2 (ports 9 through 12), set up as a VLAN trunk for VLANs 11-14, 20
LACP LAG4 for the QNAP (ports 23 and 24), set up to accept untagged traffic into VLAN 12

Each subnet is routable to the others, although connections to the NFS server from vmk1 shouldn't need it. All other traffic (vSphere Web Client, RDP, etc.) goes through this setup fine. I tested the QNAP NFS server beforehand using ESX host VMs atop a VMware Workstation setup with a dedicated physical NIC and it had no problems. The ACL on the NFS server share is permissive and allows all subnet ranges full access to the share.

I can ping the QNAP from node1 vmk1, the adapter that should be used for NFS:

~ # vmkping -I vmk1 10.1.2.100
PING 10.1.2.100 (10.1.2.100): 56 data bytes
64 bytes from 10.1.2.100: icmp_seq=0 ttl=64 time=0.371 ms
64 bytes from 10.1.2.100: icmp_seq=1 ttl=64 time=0.161 ms
64 bytes from 10.1.2.100: icmp_seq=2 ttl=64 time=0.241 ms

Netcat does not throw an error:

~ # nc -z 10.1.2.100 2049
Connection to 10.1.2.100 2049 port [tcp/nfs] succeeded!

The routing table of node1:

~ # esxcfg-route -l
VMkernel Routes:
Network     Netmask        Gateway       Interface
10.1.1.0    255.255.255.0  Local Subnet  vmk0
10.1.2.0    255.255.255.0  Local Subnet  vmk1
10.1.3.0    255.255.255.0  Local Subnet  vmk2
10.1.4.0    255.255.255.0  Local Subnet  vmk3
default     0.0.0.0        10.1.1.254    vmk0

VM kernel NIC info:

~ # esxcfg-vmknic -l
Interface  Port Group/DVPort  IP Family  IP Address                Netmask        Broadcast   MAC Address        MTU   TSO MSS  Enabled  Type
vmk0       133                IPv4       10.1.1.1                  255.255.255.0  10.1.1.255  00:50:56:66:8e:5f  1500  65535    true     STATIC
vmk0       133                IPv6       fe80::250:56ff:fe66:8e5f  64                         00:50:56:66:8e:5f  1500  65535    true     STATIC, PREFERRED
vmk1       164                IPv4       10.1.2.1                  255.255.255.0  10.1.2.255  00:50:56:68:f5:1f  1500  65535    true     STATIC
vmk1       164                IPv6       fe80::250:56ff:fe68:f51f  64                         00:50:56:68:f5:1f  1500  65535    true     STATIC, PREFERRED
vmk2       196                IPv4       10.1.3.1                  255.255.255.0  10.1.3.255  00:50:56:66:18:95  1500  65535    true     STATIC
vmk2       196                IPv6       fe80::250:56ff:fe66:1895  64                         00:50:56:66:18:95  1500  65535    true     STATIC, PREFERRED
vmk3       228                IPv4       10.1.4.1                  255.255.255.0  10.1.4.255  00:50:56:72:e6:ca  1500  65535    true     STATIC
vmk3       228                IPv6       fe80::250:56ff:fe72:e6ca  64                         00:50:56:72:e6:ca  1500  65535    true     STATIC, PREFERRED

Things I've tried/checked:

- I'm not using DNS names to connect to the NFS server.
- Checked MTU. Set to 9000 for vmk1, the dvSwitch, the Cisco switch and the QNAP.
- Moved the QNAP onto VLAN 11 (VM Management, vmk0) and gave it an appropriate address; still had the same issue. Changed it back afterwards, of course.
- Tried initiating the connection of the NAS datastore from the vSphere Client (connected to vCenter or directly to the host), the vSphere Web Client and the host's ESX shell. All resulted in the same problem.
- Tried a path name of "VM", "/VM" and "/share/VM", despite not even having a connection to the server.
- Plugged a Linux system (10.1.2.123) into a switch port configured for VLAN 12 and tried mounting the NFS share 10.1.2.100:/VM; it worked successfully and I had read-write access to it.
- Tried disabling the firewall on the ESX host: esxcli network firewall set --enabled false

I'm out of ideas on what to try next. The things I'm doing differently from my VMware Workstation setup are the use of LACP with a physical switch and a virtual distributed switch between the two hosts. I'm guessing the vDS is probably the source of my troubles, but I don't know how to fix this problem without eliminating it.
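For reference, the ESX shell attempt corresponds to the standard ESXi 5.x NFS mount command; this is a reconstruction using the same addresses and datastore label as above, not a capture from the host's history:

~ # esxcli storage nfs add --host=10.1.2.100 --share=/VM --volume-name=datastoreNAS

This performs the same operation as the vSphere Client wizard, and it fails with the vmkernel.log output quoted earlier.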

    Read the article

  • Problem releasing UIImageView after adding to UIScrollView

    - by Josiah Jordan
I'm having a memory problem related to UIImageView. After adding this view to my UIScrollView, if I try to release the UIImageView the application crashes. According to the stack trace, something is calling [UIImageView stopAnimating] after [UIImageView dealloc] is called. However, if I don't release the view the memory is never freed up, and I've confirmed that there remains an extra retain call on the view after deallocating... which causes my total allocations to climb quickly and eventually crash the app after loading the view multiple times. I'm not sure what I'm doing wrong here though... I don't know what is trying to access the UIImageView after it has been released. I've included the relevant header and implementation code below (I'm using the Three20 framework, if that has anything to do with it... also, AppScrollView is just a UIScrollView that forwards the touchesEnded event to the next responder):

Header:

@interface PhotoHiResPreviewController : TTViewController <UIScrollViewDelegate> {
    NSString* imageURL;
    UIImage* hiResImage;
    UIImageView* imageView;
    UIView* mainView;
    AppScrollView* mainScrollView;
}

@property (nonatomic, retain) NSString* imageURL;
@property (nonatomic, retain) NSString* imageShortURL;
@property (nonatomic, retain) UIImage* hiResImage;
@property (nonatomic, retain) UIImageView* imageView;

- (id)initWithImageURL:(NSString*)imageTTURL;

Implementation:

@implementation PhotoHiResPreviewController

@synthesize imageURL, hiResImage, imageView;

- (id)initWithImageURL:(NSString*)imageTTURL {
    if (self = [super init]) {
        hiResImage = nil;
        NSString *documentsDirectory = [NSString stringWithString:
            [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject]];
        [self setImageURL:[NSString stringWithFormat:@"%@/%@.jpg", documentsDirectory, imageTTURL]];
    }
    return self;
}

- (void)loadView {
    [super loadView];

    // Initialize the scroll view
    hiResImage = [UIImage imageWithContentsOfFile:self.imageURL];
    CGSize photoSize = [hiResImage size];
    mainScrollView = [[AppScrollView alloc] initWithFrame:[UIScreen mainScreen].bounds];
    mainScrollView.autoresizingMask = (UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight);
    mainScrollView.backgroundColor = [UIColor blackColor];
    mainScrollView.contentSize = photoSize;
    mainScrollView.contentMode = UIViewContentModeScaleAspectFit;
    mainScrollView.delegate = self;

    // Create the image view and add it to the scrollview.
    UIImageView *tempImageView = [[UIImageView alloc] initWithFrame:CGRectMake(0.0, 0.0, photoSize.width, photoSize.height)];
    tempImageView.contentMode = UIViewContentModeCenter;
    [tempImageView setImage:hiResImage];
    self.imageView = tempImageView;
    [tempImageView release];

    [mainScrollView addSubview:imageView];

    // Configure zooming.
    CGSize screenSize = [[UIScreen mainScreen] bounds].size;
    CGFloat widthRatio = screenSize.width / photoSize.width;
    CGFloat heightRatio = screenSize.height / photoSize.height;
    CGFloat initialZoom = (widthRatio > heightRatio) ? heightRatio : widthRatio;
    mainScrollView.maximumZoomScale = 3.0;
    mainScrollView.minimumZoomScale = initialZoom;
    mainScrollView.zoomScale = initialZoom;
    mainScrollView.bouncesZoom = YES;

    mainView = [[UIView alloc] initWithFrame:[UIScreen mainScreen].bounds];
    mainView.autoresizingMask = (UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight);
    mainView.backgroundColor = [UIColor blackColor];
    mainView.contentMode = UIViewContentModeScaleAspectFit;
    [mainView addSubview:mainScrollView];

    // Add to view
    self.view = mainView;
    [imageView release];
    [mainScrollView release];
    [mainView release];
}

- (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView {
    return imageView;
}

- (void)dealloc {
    mainScrollView.delegate = nil;
    TT_RELEASE_SAFELY(imageURL);
    TT_RELEASE_SAFELY(hiResImage);
    [super dealloc];
}

I'm not sure how to get around this. If I remove the call to [imageView release] at the end of the loadView method everything works fine... but I have massive allocations that quickly climb to a breaking point. If I DO release it, however, there's that [UIImageView stopAnimating] call that crashes the application after the view is deallocated. Thanks for any help! I've been banging my head against this one for days. :-P Cheers, Josiah
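For context, the retain-count arithmetic that manual reference counting implies for this kind of setup looks like the following; this is a generic sketch reusing the names above (frame is a placeholder), not a proposed fix:

UIImageView *temp = [[UIImageView alloc] initWithFrame:frame]; // rc = 1, we own it
self.imageView = temp;                      // retained property: rc = 2
[temp release];                             // rc = 1, the property is now the owner
[mainScrollView addSubview:self.imageView]; // the superview retains: rc = 2
// Every alloc/retain/copy must be balanced by exactly one release from the
// same owner; a release issued on behalf of an owner that still uses the
// object is what leads to messages being sent to a deallocated instance.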

    Read the article

  • Understanding the memory consumption on iPhone

    - by zoul
Hello! I am working on a 2D iPhone game using OpenGL ES and I keep hitting the 24 MB memory limit – my application keeps crashing with the error code 101. I tried real hard to find where the memory goes, but the numbers in Instruments are still much bigger than what I would expect.

I ran the application with the Memory Monitor, Object Alloc, Leaks and OpenGL ES instruments. When the application gets loaded, free physical memory drops from 37 MB to 23 MB, the Object Alloc settles around 7 MB, Leaks show two or three leaks a few bytes in size, the Gart Object Size is about 5 MB and Memory Monitor says the application takes up about 14 MB of real memory. I am perplexed as to where the memory went – when I dig into the Object Allocations, most of the memory is in the textures, exactly as I would expect. But both my own texture allocation counter and the Gart Object Size agree that the textures should take up somewhere around 5 MB. I am not aware of allocating anything else that would be worth mentioning, and the Object Alloc agrees. Where does the memory go? (I would be glad to supply more details if this is not enough.)

Update: I really tried to find where I could allocate so much memory, but with no results. What drives me wild is the difference between the Object Allocations (~7 MB) and real memory usage as shown by Memory Monitor (~14 MB). Even if there were huge leaks or huge chunks of memory I forgot about, they should still show up in the Object Allocations, shouldn’t they? I’ve already tried the usual suspects, i.e. the UIImage with its caching, but that did not help. Is there a way to track memory usage “debugger-style”, line by line, watching each statement’s impact on memory usage?

What I have found so far:

- I really am using that much memory. It is not easy to measure the real memory consumption, but after a lot of counting I think the memory consumption is really that high. My fault.
- I found no easy way to measure the memory used. The Memory Monitor numbers are accurate (these are the numbers that really matter), but the Memory Monitor can’t tell you where exactly the memory goes. The Object Alloc tool is almost useless for tracking the real memory usage. When I create a texture, the allocated memory counter goes up for a while (reading the texture into memory), then drops (passing the texture data to OpenGL, freeing). This is OK, but does not always happen – sometimes the memory usage stays high even after the texture has been passed on to OpenGL and freed from “my” memory. This means that the total amount of memory allocated as shown by the Object Alloc tool is smaller than the real total memory consumption, but bigger than the real consumption minus textures (real – textures < object alloc < real). Go figure.
- I misread the Programming Guide. The memory limit of 24 MB applies to textures and surfaces, not the whole application. The actual red line lies a bit further, but I could not find any hard numbers. The consensus is that 25–30 MB is the ceiling.
- When the system gets short on memory, it starts sending the memory warning. I have almost nothing to free, but other applications do release some memory back to the system, especially Safari (which seems to be caching the websites). When the free memory as shown in the Memory Monitor goes to zero, the system starts killing.
- I had to bite the bullet and rewrite some parts of the code to be more efficient on memory, but I am probably still pushing it. I

    Read the article

  • Memory Troubles with UIImagePicker

    - by Dan Ray
I'm building an app that has several different sections to it, all of which are pretty image-heavy. It ties in with my client's website and they're a "high-design" type outfit. One piece of the app is images uploaded from the camera or the library, and a tableview that shows a grid of thumbnails. Pretty reliably, when I'm dealing with the camera version of UIImagePickerController, I get hit for low memory. If I bounce around that part of the app for a while, I occasionally and non-repeatably crash with "status:10 (SIGBUS)" in the debugger.

On low memory warning, my root view controller for that aspect of the app goes to my data management singleton, cruises through the arrays of cached data, and kills the biggest piece, the image associated with each entry. Thusly:

- (void)didReceiveMemoryWarning {
    // Releases the view if it doesn't have a superview.
    [super didReceiveMemoryWarning];

    UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Low Memory Warning"
                                                    message:@"Cleaning out events data"
                                                   delegate:nil
                                          cancelButtonTitle:@"All right then."
                                          otherButtonTitles:nil];
    [alert show];
    [alert release];

    NSInteger spaceSaved = 0;
    DataManager *data = [DataManager sharedDataManager];
    for (Event *event in data.eventList) {
        spaceSaved += [(NSData *)UIImagePNGRepresentation(event.image) length];
        event.image = nil;
        spaceSaved -= [(NSData *)UIImagePNGRepresentation(event.image) length];
    }
    NSString *titleString = [NSString stringWithFormat:@"Saved %d on event images", spaceSaved];

    for (WondrMark *mark in data.wondrMarks) {
        spaceSaved += [(NSData *)UIImagePNGRepresentation(mark.image) length];
        mark.image = nil;
        spaceSaved -= [(NSData *)UIImagePNGRepresentation(mark.image) length];
    }
    NSString *messageString = [NSString stringWithFormat:@"And total %d on event and mark images", spaceSaved];
    NSLog(@"%@ - %@", titleString, messageString);

    // Relinquish ownership any cached data, images, etc that aren't in use.
}

As you can see, I'm making a (poor) attempt to eyeball the memory space I'm freeing up. I know it's not telling me about the actual memory footprint of the UIImages themselves, but it gives me SOME numbers at least, so I can see that SOMETHING'S happening. (Sorry for the hamfisted way I build that NSLog message too -- I was going to fire another UIAlertView, but realized it'd be more useful to log it.)

Pretty reliably, after toodling around in the image portion of the app for a while, I'll pull up the camera interface and get the low memory UIAlertView like three or four times in quick succession. Here's the NSLog output from the last time I saw it:

2010-05-27 08:55:02.659 EverWondr[7974:207] Saved 109591 on event images - And total 1419756 on event and mark images
wait_fences: failed to receive reply: 10004003
2010-05-27 08:55:08.759 EverWondr[7974:207] Saved 4 on event images - And total 392695 on event and mark images
2010-05-27 08:55:14.865 EverWondr[7974:207] Saved 4 on event images - And total 873419 on event and mark images
2010-05-27 08:55:14.969 EverWondr[7974:207] Saved 4 on event images - And total 4 on event and mark images
2010-05-27 08:55:15.064 EverWondr[7974:207] Saved 4 on event images - And total 4 on event and mark images

And then pretty soon after that we get our SIGBUS exit. So that's the situation. Now my specific questions:

First, THE time I see this happening is when the image picker's camera iris shuts. I click the button to take the picture, it does the "click" animation, and Instruments shows my memory footprint going from about 10 MB to about 25 MB, and sitting there until the image is delivered to my UIViewController, where usage drops back to 10 or 11 MB again. If we make it through that without a memory warning, we're golden, but most likely we don't. Anything I can do to make that not be so expensive?

Second, I have NSZombies enabled. Am I understanding correctly that that's actually preventing memory from being freed? Am I subjecting my app to an unfair test environment?

Third, is there some way to programmatically get my memory usage? Or at least the usage for a UIImage object? I've scoured the docs and don't see anything about that.
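On the third question, one commonly used approach is to ask the Mach kernel for the process's resident size; a minimal sketch (uses only <mach/mach.h>, and reports whole-process memory, not per-object usage):

#include <mach/mach.h>

// Returns the resident size of the current task in bytes, or 0 on failure.
static vm_size_t currentResidentSize(void) {
    struct task_basic_info info;
    mach_msg_type_number_t count = TASK_BASIC_INFO_COUNT;
    kern_return_t kr = task_info(mach_task_self(), TASK_BASIC_INFO,
                                 (task_info_t)&info, &count);
    return (kr == KERN_SUCCESS) ? info.resident_size : 0;
}

Per-UIImage footprints would still have to be estimated, e.g. width * height * 4 bytes for an uncompressed RGBA bitmap.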

    Read the article

  • "Row not found or changed" Problem

    - by winston schröder
Hi there, I'm working on a SQL CE database and get the "Row not found or changed" exception. The exception only occurs when I try to update. On the first run after the insert, it shows a MemberChangeConflict which says that my Created_at column has the same value in all three slots (current, original, database). But in a second attempt it doesn't appear any more.

The DataContext is instantiated on startup and freed on exit of my local(!) application. I use a sqlmetal-generated mapping and code file. In the map I added some associations and set the timestamp column's UpdateCheck property to Always, while all the others have the setting Never. The timestamp column is marked as isVersion="true", the Id column as primary key. Since I don't dispose the DataContext, I expect to be using implicit transactions. I run the SubmitChanges method within a TransactionScope.

Can anyone tell me how I can update the timestamp within the code? I know about the problems one has to deal with if you dispose the DataContext, so I decided not to do this, since I use a single-user local DB cache file. (I did already use a version where I disposed the DataContext after every usage, but this version had a really bad performance and error rate, so I decided to choose the other variant.)

LibDB.Client.Vehicles tmp = null;
try {
    tmp = e.Parameter as LibDB.Client.Vehicles;
    if (tmp == null) return;
    if (!this._dc.Vehicles.Contains(tmp)) {
        this._dc.Vehicles.Attach(tmp);
    }
    this.ShowChangesReport(this._dc.GetChangeSet());
    using (TransactionScope ts = new TransactionScope()) {
        try {
            this._dc.SubmitChanges();
            ts.Complete();
        } catch (ChangeConflictException cce) {
            Console.WriteLine("Optimistic concurrency error.");
            Console.WriteLine(cce.Message);
            Console.ReadLine();
            foreach (ObjectChangeConflict occ in this._dc.ChangeConflicts) {
                MetaTable metatable = this._dc.Mapping.GetTable(occ.Object.GetType());
                LibDB.Client.Vehicles entityInConflict = (LibDB.Client.Vehicles)occ.Object;
                Console.WriteLine("Table name: {0}", metatable.TableName);
                Console.Write("Vin: ");
                Console.WriteLine(entityInConflict.Vin);
                foreach (MemberChangeConflict mcc in occ.MemberConflicts) {
                    object currVal = mcc.CurrentValue;
                    object origVal = mcc.OriginalValue;
                    object databaseVal = mcc.DatabaseValue;
                    MemberInfo mi = mcc.Member;
                    Console.WriteLine("Member: {0}", mi.Name);
                    Console.WriteLine("current value: {0}", currVal);
                    Console.WriteLine("original value: {0}", origVal);
                    Console.WriteLine("database value: {0}", databaseVal);
                }
                throw cce;
            }
        } catch (Exception ex) {
            this.ShowChangeConflicts(this._dc.ChangeConflicts);
            Console.WriteLine(ex.Message);
        }
    }
    this.ShowChangesReport(this._dc.GetChangeSet());
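For reference, the usual LINQ to SQL pattern for resolving this kind of conflict looks like the following; a generic sketch, where dc stands in for the this._dc above and KeepChanges is only one of the available RefreshMode choices:

try
{
    dc.SubmitChanges(ConflictMode.ContinueOnConflict);
}
catch (ChangeConflictException)
{
    foreach (ObjectChangeConflict occ in dc.ChangeConflicts)
    {
        // Re-read the row from the database and re-apply the current values.
        occ.Resolve(RefreshMode.KeepChanges);
    }
    dc.SubmitChanges(); // retry once after resolving
}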

    Read the article

  • Win32 -- Object cleanup and global variables

    - by KaiserJohaan
Hello, I've got a question about global variables and object cleanup in C++. For example, look at the code here:

case WM_PAINT:
    paintText(&hWnd);
    break;

void paintText(HWND* hWnd) {
    PAINTSTRUCT ps;
    HBRUSH hbruzh = CreateSolidBrush(RGB(0,0,0));
    HDC hdz = BeginPaint(*hWnd, &ps);
    char s1[] = "Name";
    char s2[] = "IP";
    SelectBrush(hdz, hbruzh);
    SelectFont(hdz, hFont);
    SetBkMode(hdz, TRANSPARENT);
    TextOut(hdz, 3, 23, s1, sizeof(s1));
    TextOut(hdz, 10, 53, s2, sizeof(s2));
    EndPaint(*hWnd, &ps);
    DeleteObject(hdz);
    DeleteObject(hbruzh); // bad?
    DeleteObject(ps);     // bad?
}

1) First of all: which objects are good to delete and which ones are NOT good to delete, and why? Not 100% sure of this.

2) Since WM_PAINT is called every time the window is redrawn, would it be better to simply store ps, hdz and hbruzh as global variables instead of re-initializing them every time? The downside, I guess, would be tons of global variables in the end, but performance-wise would it not be less CPU-consuming? I know it probably won't matter, but I'm just aiming for as minimalistic as possible for educational purposes.

3) What about libraries that are loaded in? For example:

//
// Main
//
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow) {
    // initialize vars
    HWND hWnd;
    WNDCLASSEX wc;
    HINSTANCE hlib = LoadLibrary("Riched20.dll");
    ThishInstance = hInstance;
    ZeroMemory(&wc, sizeof(wc));

    // set WNDCLASSEX props
    wc.cbSize = sizeof(WNDCLASSEX);
    wc.lpfnWndProc = WindowProc;
    wc.hInstance = ThishInstance;
    wc.hIcon = LoadIcon(hInstance, MAKEINTRESOURCE(IDI_MYICON));
    wc.lpszMenuName = MAKEINTRESOURCE(IDR_MENU1);
    wc.hCursor = LoadCursor(NULL, IDC_ARROW);
    wc.hbrBackground = (HBRUSH)COLOR_WINDOW;
    wc.lpszClassName = TEXT("PimpClient");
    RegisterClassEx(&wc);

    // create main window and display it
    hWnd = CreateWindowEx(NULL, wc.lpszClassName, TEXT("PimpClient"), 0,
                          300, 200, 450, 395, NULL, NULL, hInstance, NULL);
    createWindows(&hWnd);
    ShowWindow(hWnd, nCmdShow);

    // loop message queue
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0)) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }

    // cleanup?
    FreeLibrary(hlib);
    return msg.wParam;
}

3 cont.) Is there a reason to FreeLibrary at the end? I mean, when the process terminates, all resources are freed anyway? And since the library is used to paint text throughout the program, why would I want to free it before that? Cheers
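For context, the conventional cleanup pattern in a WM_PAINT handler is usually written like the following; a generic sketch of the Win32 idiom, not a review of the code above. The rules it encodes: delete only GDI objects you created, deselect them first, and never pass the paint DC (or a PAINTSTRUCT, which is a plain stack struct) to DeleteObject – EndPaint releases the DC.

case WM_PAINT:
{
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hWnd, &ps);             // DC owned by the paint cycle
    HBRUSH brush = CreateSolidBrush(RGB(0, 0, 0));
    HGDIOBJ oldBrush = SelectObject(hdc, brush); // remember what was selected
    // ... TextOut / drawing calls ...
    SelectObject(hdc, oldBrush);  // deselect our brush before deleting it
    DeleteObject(brush);          // delete only what we created
    EndPaint(hWnd, &ps);          // releases the DC; no DeleteObject(hdc)
    break;
}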

    Read the article

  • Limiting TCP sends with a "to-be-sent" queue and other design issues.

    - by Poni
Hello all! This question is the result of two other questions I've asked in the last few days. I'm creating a new question because I think it's related to the "next step" in my understanding of how to control the flow of my send/receive, something I didn't get a full answer to yet. The other related questions are:

http://stackoverflow.com/questions/3028376/an-iocp-documentation-interpretation-question-buffer-ownership-ambiguity
http://stackoverflow.com/questions/3028998/non-blocking-tcp-buffer-issues

In summary, I'm using Windows I/O Completion Ports. I have several threads that process notifications from the completion port. I believe the question is platform-independent and would have the same answer as if doing the same thing on a *nix, *BSD or Solaris system.

So, I need to have my own flow control system. Fine. So I send and send and send, a lot. How do I know when to start queueing the sends, as the receiver side is limited to X amount?

Let's take an example (closest thing to my question): FTP protocol. I have two servers; one is on a 100Mb link and the other is on a 10Mb link. I order the 100Mb one to send the other one (the 10Mb-linked one) a 1GB file. It finishes with an average transfer rate of 1.25MB/s. How did the sender (the 100Mb-linked one) know when to hold off sending, so the slower one wouldn't be flooded?

Another way to ask this: can I get a "hold-your-sendings" notification from the remote side? Is it built into TCP, or does the so-called "reliable network protocol" need me to do it myself?

Again, I have a loop with many sends to a remote server, and at some point, within that loop, I'll have to determine if I should queue that send or pass it on to the transport layer (TCP). How do I do that? What would you do? Of course, when I get a completion notification from IOCP that the send was done, I'll issue other pending sends; that's clear.

Another design question related to this: since I am to use custom buffers with a send queue, and these buffers are freed to be reused (thus not using the "delete" keyword) when a "send-done" notification arrives, I'll have to use mutual exclusion on that buffer pool. Using a mutex slows things down, so I've been thinking: why not have each thread keep its own buffer pool? Accessing it, at least when getting the buffers required for a send operation, will require no mutex, because the pool belongs to that thread only. The buffer pool would live at the thread-local storage (TLS) level. No shared pool implies no lock needed, which implies faster operations, BUT also implies more memory used by the app, because even if one thread has already allocated 1000 buffers, another thread that is sending right now and needs 1000 buffers will have to allocate its own.

This is a long question and I hope none got hurt (: Thank you all!
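To make the "to-be-sent queue" idea concrete, here is a minimal single-threaded sketch of the bookkeeping. The high-water mark, the names, and the stubbed-out transport call are all invented for illustration; real IOCP code would post an actual WSASend with an OVERLAPPED in PostToTransport and add synchronization (or the per-thread pools discussed above):

#include <cstddef>
#include <deque>
#include <vector>

class SendQueue {
public:
    explicit SendQueue(std::size_t highWater) : highWater_(highWater) {}

    // Called by the application. Returns true if the buffer went straight
    // to the transport, false if it was queued for later.
    bool Send(std::vector<char> buf) {
        if (inFlight_ + buf.size() <= highWater_) {
            inFlight_ += buf.size();
            PostToTransport(buf);          // e.g. WSASend in real code
            return true;
        }
        pending_.push_back(std::move(buf)); // over the cap: hold it back
        return false;
    }

    // Called from the completion handler when a send of `bytes` finished;
    // drains as much of the queue as the cap now allows.
    void OnSendComplete(std::size_t bytes) {
        inFlight_ -= bytes;
        while (!pending_.empty() &&
               inFlight_ + pending_.front().size() <= highWater_) {
            inFlight_ += pending_.front().size();
            PostToTransport(pending_.front());
            pending_.pop_front();
        }
    }

private:
    void PostToTransport(const std::vector<char>&) { /* stub */ }

    std::size_t highWater_;
    std::size_t inFlight_ = 0;               // bytes handed to the transport
    std::deque<std::vector<char>> pending_;  // the to-be-sent queue
};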

    Read the article

  • SQL Server and Hyper-V Dynamic Memory Part 2

    - by SQLOS Team
Part 1 of this series was an introduction and overview of Hyper-V Dynamic Memory. This part looks at SQL Server memory management and how the SQL engine responds to changing OS memory conditions.

Part 2: SQL Server Memory Management

As with any Windows process, sqlservr.exe has a virtual address space (VAS) of 4GB on 32-bit and 8TB in 64-bit editions. Pages in its VAS are mapped to pages in physical memory when the memory is committed and referenced for the first time. The collection of VAS pages that have been recently referenced is known as the working set. How and when SQL Server allocates virtual memory and grows its working set depends on the memory model it uses. SQL Server supports three basic memory models:

1. Conventional Memory Model

The conventional model is the default SQL Server memory model and has the following properties:
- Dynamic – can grow or shrink its working set in response to load and external (operating system) memory conditions.
- OS uses 4K pages (not to be confused with SQL Server “pages”, which are 8K regions of committed memory).
- Pageable – can be paged out to disk by the operating system.

2. Locked Page Model

The locked page memory model is set when SQL Server is started with the "Lock Pages in Memory" privilege*. It has the following characteristics:
- Dynamic – can grow or shrink its working set in the same way as the conventional model.
- OS uses 4K pages.
- Non-pageable – when memory is committed it is locked in memory, meaning that it will remain backed by physical memory and will not be paged out by the operating system.

A common misconception is to interpret "locked" as non-dynamic. A SQL Server instance using the locked page memory model will grow and shrink (allocate memory and release memory) in response to changing workload and OS memory conditions in the same way as it does with the conventional model. This is an important consideration when we look at Hyper-V Dynamic Memory – “locked” memory works perfectly well with “dynamic” memory.

* Note: in “Denali” (Standard Edition and above), and in SQL 2008 R2 64-bit (Enterprise and above editions), the Lock Pages in Memory privilege is all that is required to set this model. In 2008 R2 64-bit Standard Edition it also requires trace flag 845 to be set; in 2008 R2 32-bit editions it requires sp_configure 'awe enabled' 1.

3. Large Page Model

The large page model is set using trace flag 834 and potentially offers a small performance boost for systems that are configured with large pages. It is characterized by:
- Static – memory is allocated at startup and does not change.
- OS uses large (>2MB) pages.
- Non-pageable.

The large page model is supported with Hyper-V Dynamic Memory (and Hyper-V also supports large pages), but you get no benefit from using Dynamic Memory with this model since SQL Server memory does not grow or shrink. The rest of this article will focus on the locked and conventional SQL Server memory models.

When does SQL Server grow?

For “dynamic” configurations (conventional and locked memory models), the sqlservr.exe process grows – allocates and commits memory from the OS – in response to a workload. As much memory is allocated as is required to optimally run the query and buffer data for future queries, subject to limitations imposed by:

- The SQL Server max server memory setting. If this configuration option is set, the buffer pool is not allowed to grow to more than this value. In SQL Server 2008 this value represents single page allocations; in “Denali” it represents any size page allocations and also managed CLR procedure allocations.

- Memory signals from the OS. The operating system sets a signal on memory resource notification objects to indicate whether it has memory available or whether it is low on available memory. If there is only 32MB free for every 4GB of memory, a low memory signal is set, which continues until 64MB/4GB is free. If there is 96MB/4GB free, the operating system sets a high memory signal. SQL Server only allocates memory when the high memory signal is set.

To summarize, for SQL Server to grow you need three conditions: a workload, a max server memory setting higher than the current allocation, and high memory signals from the OS.

When does SQL Server shrink caches?

SQL Server as a rule does not like to return memory to the OS, but it will shrink its caches in response to memory pressure. Memory pressure can be divided into “internal” and “external”.

- External memory pressure occurs when the operating system is running low on memory and low memory signals are set. The SQL Server Resource Monitor checks for low memory signals approximately every 5 seconds and will attempt to free memory until the signals stop. To free memory SQL Server does the following:
  - Frees unused memory.
  - Notifies Memory Manager clients to release memory:
    - Caches – free unreferenced cache objects.
    - Buffer pool – based on oldest access times.
  The freed memory is released back to the operating system. This process continues until the low memory resource notifications stop.

- Internal memory pressure occurs when the size of different caches and allocations increases but the SQL Server process needs to keep its total memory within a target value. For example, if max server memory is set and certain caches are growing large, it will cause SQL to free memory for re-use internally, but not to release memory back to the OS. If you lower the value of max server memory you will generate internal memory pressure that will cause SQL to release memory back to the OS.

Memory pressure handling has not changed much since SQL 2005 and was described in detail in a blog post by Slava Oks.

Note that SQL Server Express is an exception to the above behavior. Unlike other editions it does not assume it is the most important process running on the system but tries to be more “desktop” friendly. It will empty its working set after a period of inactivity.

How does SQL Server respond to changing OS memory?

In SQL Server 2005, support for hot-add memory was introduced. This feature, available in Enterprise and above editions, allows the server to make use of any extra physical memory that was added after SQL Server started. Being able to add physical memory while the system is running is limited to specialized hardware, but with the Hyper-V Dynamic Memory feature, when new memory is allocated to a guest virtual machine, it looks like hot-added physical memory to the guest. What this means is that, thanks to the hot-add memory feature, SQL Server 2005 and higher can dynamically grow if more “physical” memory is granted to a guest VM by Hyper-V Dynamic Memory. SQL Server checks OS memory every second and dynamically adjusts its “target” (based on available OS memory and max server memory) accordingly. In “Denali”, Standard Edition will also have sqlservr.exe support for hot-add memory when running virtualized (i.e. detecting and acting on Hyper-V Dynamic Memory allocations).

How does a SQL Server workload in a guest VM impact Hyper-V Dynamic Memory scheduling?

When a SQL workload causes the sqlservr.exe process to grow its working set, the Hyper-V memory scheduler will detect memory pressure in the guest VM and add memory to it. SQL Server will then detect the extra memory and grow according to workload demand. In our tests we have seen this feedback process cause a guest VM to grow quickly in response to SQL workload – we are still working on characterizing this ramp-up.

How does SQL Server respond when Hyper-V removes memory from a guest VM through ballooning?

If pressure from other VMs causes Hyper-V Dynamic Memory to take memory away from a VM through ballooning (allocating memory with a virtual device driver and returning it to the host OS), Windows Memory Manager will page out unlocked portions of memory and signal low resource notification events. When SQL Server detects these events it will shrink memory until the low memory notifications stop (see the cache shrinking description above).

This raises another question: can we make SQL Server release memory more readily and hence behave more "dynamically" without compromising performance? In certain circumstances where the application workload is predictable, it may be possible to have a job which varies "max server memory" according to need, lowering it when the engine is inactive and raising it before a period of activity (sketched below). This would have limited applicability, but it is something we're looking into.
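At its simplest, such a job would adjust the setting with sp_configure; the values here are placeholders, not recommendations:

-- lower the ceiling during a quiet period
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 2048;
RECONFIGURE;

-- raise it again before the busy period
EXEC sp_configure 'max server memory (MB)', 8192;
RECONFIGURE;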
What memory management changes are there in SQL Server “Denali”?

In SQL Server “Denali” (aka SQL11) the Memory Manager has been re-written to be more efficient. The main changes are summarized in this post. An important change with respect to Hyper-V Dynamic Memory support is that the max server memory setting now includes any size page allocations and managed CLR procedure allocations, so it represents a closer approximation to total sqlservr.exe memory usage. This makes it easier to calculate a value for max server memory, which becomes important when configuring virtual machines to work well with Hyper-V Dynamic Memory Startup and Maximum RAM settings.

Another important change is no more AWE or hot-add support for the 32-bit edition. This means if you're running a 32-bit edition of Denali you're limited to a 4GB address space and will not be able to take advantage of dynamically added OS memory that wasn't present when SQL Server started (though Hyper-V Dynamic Memory is still a supported configuration).

In part 3 we’ll develop some best practices for configuring and using SQL Server with Dynamic Memory. Originally posted at http://blogs.msdn.com/b/sqlosteam/

    Read the article

  • Linux Kernel not upgraded (from Ubuntu 12.04 to 12.10) - can't remove old kernels and can't install new apps

    - by Tony Breyal
Question: How do I remove old kernel images which refuse to be removed?

Context: Yesterday I upgraded Ubuntu from 12.04 to 12.10. However, the Linux kernel has not upgraded from 3.2 to 3.5 as I would have expected.

$ uname -r
3.2.0-32-generic
$ uname -a
Linux tony-b 3.2.0-32-generic #51-Ubuntu SMP Wed Sep 26 21:33:09 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
$ cat /proc/version
Linux version 3.2.0-32-generic (buildd@batsu) (gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #51-Ubuntu SMP Wed Sep 26 21:33:09 UTC 2012

Not sure why that happened there. I wanted to install Audacity (v2.0.1-1_amd64) to edit a lecture audio file. When trying this operation through Ubuntu Software Center, it says that to install audacity, four items will need to be removed:

linux-image-3.2.0-27-generic
linux-image-3.2.0-29-generic
linux-image-3.2.0-30-generic
linux-image-3.2.0-31-generic

So I click "Install Anyway" but it fails with the following output:

installArchives() failed: (Reading database ... 5% ... 100% (Reading database ... 259675 files and directories currently installed.)
Removing linux-image-3.2.0-27-generic ...
Examining /etc/kernel/postrm.d .
run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.2.0-27-generic /boot/vmlinuz-3.2.0-27-generic
update-initramfs: Deleting /boot/initrd.img-3.2.0-27-generic
run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.2.0-27-generic /boot/vmlinuz-3.2.0-27-generic
Generating grub.cfg ...
run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 1
Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-3.2.0-27-generic.postrm line 328.
dpkg: error processing linux-image-3.2.0-27-generic (--remove): subprocess installed post-removal script returned error exit status 1
No apport report written because MaxReports is reached already
Removing linux-image-3.2.0-29-generic ...
Examining /etc/kernel/postrm.d .
run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.2.0-29-generic /boot/vmlinuz-3.2.0-29-generic
update-initramfs: Deleting /boot/initrd.img-3.2.0-29-generic
run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.2.0-29-generic /boot/vmlinuz-3.2.0-29-generic
Generating grub.cfg ...
run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 1
Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-3.2.0-29-generic.postrm line 328.
dpkg: error processing linux-image-3.2.0-29-generic (--remove): subprocess installed post-removal script returned error exit status 1
No apport report written because MaxReports is reached already
Removing linux-image-3.2.0-30-generic ...
Examining /etc/kernel/postrm.d .
run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.2.0-30-generic /boot/vmlinuz-3.2.0-30-generic
update-initramfs: Deleting /boot/initrd.img-3.2.0-30-generic
run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.2.0-30-generic /boot/vmlinuz-3.2.0-30-generic
Generating grub.cfg ...
run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 1
Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-3.2.0-30-generic.postrm line 328.
dpkg: error processing linux-image-3.2.0-30-generic (--remove): subprocess installed post-removal script returned error exit status 1
No apport report written because MaxReports is reached already
Removing linux-image-3.2.0-31-generic ...
Examining /etc/kernel/postrm.d .
run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.2.0-31-generic /boot/vmlinuz-3.2.0-31-generic
update-initramfs: Deleting /boot/initrd.img-3.2.0-31-generic
run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.2.0-31-generic /boot/vmlinuz-3.2.0-31-generic
Generating grub.cfg ...
run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 1
Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-3.2.0-31-generic.postrm line 328.
dpkg: error processing linux-image-3.2.0-31-generic (--remove): subprocess installed post-removal script returned error exit status 1
No apport report written because MaxReports is reached already
Errors were encountered while processing:
linux-image-3.2.0-27-generic
linux-image-3.2.0-29-generic
linux-image-3.2.0-30-generic
linux-image-3.2.0-31-generic
Error in function: Setting up grub-pc (2.00-7ubuntu11) ...
/usr/sbin/grub-bios-setup: warning: Sector 32 is already in use by the program `FlexNet'; avoiding it. This software may cause boot or other problems in future. Please ask its authors not to store data in the boot track.
Installation finished. No error reported.
Generating grub.cfg ...
dpkg: error processing grub-pc (--configure): subprocess installed post-installation script returned error exit status 1

It seems I need to remove the old Linux images somehow. I have tried this through (1) Synaptic, (2) Ubuntu Tweak, and (3) Computer Janitor. The first two fail, whilst Computer Janitor won't even open. The output from Synaptic is:

E: linux-image-3.2.0-27-generic: subprocess installed post-removal script returned error exit status 1
E: linux-image-3.2.0-29-generic: subprocess installed post-removal script returned error exit status 1
E: linux-image-3.2.0-30-generic: subprocess installed post-removal script returned error exit status 1
E: linux-image-3.2.0-31-generic: subprocess installed post-removal script returned error exit status 1

How do I remove these old images? Thank you kindly in advance for any help on this matter.

P.S. Further information:

$ dpkg --list | grep linux-image
rH  linux-image-3.2.0-27-generic        3.2.0-27.43  amd64  Linux kernel image for version 3.2.0 on 64 bit x86 SMP
rH  linux-image-3.2.0-29-generic        3.2.0-29.46  amd64  Linux kernel image for version 3.2.0 on 64 bit x86 SMP
rH  linux-image-3.2.0-30-generic        3.2.0-30.48  amd64  Linux kernel image for version 3.2.0 on 64 bit x86 SMP
rH  linux-image-3.2.0-31-generic        3.2.0-31.50  amd64  Linux kernel image for version 3.2.0 on 64 bit x86 SMP
ii  linux-image-3.2.0-32-generic        3.2.0-32.51  amd64  Linux kernel image for version 3.2.0 on 64 bit x86 SMP
ii  linux-image-3.5.0-17-generic        3.5.0-17.28  amd64  Linux kernel image for version 3.5.0 on 64 bit x86 SMP
ii  linux-image-extra-3.5.0-17-generic  3.5.0-17.28  amd64  Linux kernel image for version 3.5.0 on 64 bit x86 SMP
ii  linux-image-generic                 3.5.0.17.19  amd64  Generic Linux kernel image

But trying to remove using the command line fails too, e.g.:

$ sudo apt-get purge linux-image-3.2.0-27-generic
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be REMOVED
  linux-image-3.2.0-27-generic linux-image-3.2.0-29-generic linux-image-3.2.0-30-generic linux-image-3.2.0-31-generic
0 upgraded, 0 newly installed, 4 to remove and 1 not upgraded.
5 not fully installed or removed.
After this operation, 597 MB disk space will be freed.
Do you want to continue [Y/n]? Y
(Reading database ... 259675 files and directories currently installed.)
Removing linux-image-3.2.0-27-generic ...
Examining /etc/kernel/postrm.d .
run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.2.0-27-generic /boot/vmlinuz-3.2.0-27-generic
update-initramfs: Deleting /boot/initrd.img-3.2.0-27-generic
run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.2.0-27-generic /boot/vmlinuz-3.2.0-27-generic
Generating grub.cfg ...
run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 1
Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-3.2.0-27-generic.postrm line 328.
dpkg: error processing linux-image-3.2.0-27-generic (--remove): subprocess installed post-removal script returned error exit status 1
No apport report written because MaxReports has already been reached
Removing linux-image-3.2.0-29-generic ...
Examining /etc/kernel/postrm.d .
run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.2.0-29-generic /boot/vmlinuz-3.2.0-29-generic
update-initramfs: Deleting /boot/initrd.img-3.2.0-29-generic
run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.2.0-29-generic /boot/vmlinuz-3.2.0-29-generic
Generating grub.cfg ...
run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 1
Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-3.2.0-29-generic.postrm line 328.
dpkg: error processing linux-image-3.2.0-29-generic (--remove): subprocess installed post-removal script returned error exit status 1
No apport report written because MaxReports has already been reached
Removing linux-image-3.2.0-30-generic ...
Examining /etc/kernel/postrm.d .
run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.2.0-30-generic /boot/vmlinuz-3.2.0-30-generic
update-initramfs: Deleting /boot/initrd.img-3.2.0-30-generic
run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.2.0-30-generic /boot/vmlinuz-3.2.0-30-generic
Generating grub.cfg ...
run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 1
Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-3.2.0-30-generic.postrm line 328.
dpkg: error processing linux-image-3.2.0-30-generic (--remove): subprocess installed post-removal script returned error exit status 1
No apport report written because MaxReports has already been reached
Removing linux-image-3.2.0-31-generic ...
Examining /etc/kernel/postrm.d .
run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.2.0-31-generic /boot/vmlinuz-3.2.0-31-generic
update-initramfs: Deleting /boot/initrd.img-3.2.0-31-generic
run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.2.0-31-generic /boot/vmlinuz-3.2.0-31-generic
Generating grub.cfg ...
run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 1
Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-3.2.0-31-generic.postrm line 328.
dpkg: error processing linux-image-3.2.0-31-generic (--remove): subprocess installed post-removal script returned error exit status 1
No apport report written because MaxReports has already been reached
Errors were encountered while processing:
linux-image-3.2.0-27-generic
linux-image-3.2.0-29-generic
linux-image-3.2.0-30-generic
linux-image-3.2.0-31-generic
E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article
