Search Results

Search found 17317 results on 693 pages for 'memory upgrade'.

Page 226/693 | < Previous Page | 222 223 224 225 226 227 228 229 230 231 232 233  | Next Page >

  • directory with 980MB meta data, millions of files, how to delete it? (ext3)

    - by Alexandre
    Hello. I'm stuck with this directory:

      drwxrwxrwx 2 dan users 980M 2010-12-22 18:38 sessions2

    The directory's contents are small - just millions of tiny little files. I want to wipe it from the filesystem but have been unable to. My first tries were:

      find sessions2 -type f -delete
      find sessions2 -type f -print0 | xargs -0 rm -f

    but I had to stop both because they caused escalating memory usage; at one point 65% of the system's memory was in use. So I thought (no doubt incorrectly) that it had to do with dir_index being enabled on the filesystem. Perhaps find was trying to read the entire index into memory? So I did this (foolishly):

      tune2fs -O^dir_index /dev/xxx

    Alright, so that should do it. I ran the find command above again and... same thing: crazy memory usage. I hurriedly ran tune2fs -Odir_index /dev/xxx to re-enable dir_index, and ran to Server Fault! Two questions: 1) How do I get rid of this directory on my live system? I don't care how long it takes, as long as it uses little memory and little CPU. (By wrapping it in nice, find's CPU usage was manageable, so my problem right now is only memory usage.) 2) I disabled dir_index for about 20 minutes. No doubt new files were written to the filesystem in the meanwhile, and then I re-enabled dir_index. Does that mean the system will not find the files that were written while dir_index was disabled, since their filenames are missing from the old indexes? If so, and I know these new files aren't important, can I keep the old indexes? If not, how do I rebuild the indexes? Can that be done on a live system? Thanks!
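
    A low-memory approach is to stream directory entries instead of building the whole listing up front. The sketch below is illustrative Python 3 (the path and progress interval are assumptions, not from the question); os.scandir yields entries lazily, so memory stays roughly flat even with millions of files:

      import os

      def wipe_dir(path):
          """Delete every regular file in `path`, streaming entries to keep memory flat."""
          removed = 0
          with os.scandir(path) as entries:          # lazy iterator: no full listing held in RAM
              for entry in entries:
                  if entry.is_file(follow_symlinks=False):
                      os.unlink(entry.path)
                      removed += 1
                      if removed % 100000 == 0:
                          print(f"removed {removed} files so far")
          os.rmdir(path)                             # fails if anything non-file remains
          return removed

      if __name__ == "__main__":
          wipe_dir("sessions2")                      # hypothetical path from the question

    The same nice/ionice wrapping used with find applies to a script like this, since the pressure here is I/O rather than CPU.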

    Read the article

  • Apache Derby running in Tomcat shutdown issues

    - by Luke
    I have set up Derby Network Server to be hosted within a Tomcat environment. This works great. However, when I shut down Tomcat I get the following errors:

      04/01/2011 10:41:41 AM org.apache.catalina.core.StandardService stop
      INFO: Stopping service Catalina
      04/01/2011 10:41:41 AM org.apache.catalina.loader.WebappClassLoader clearReferencesJdbc
      SEVERE: The web application [/derby] registered the JDBC driver [org.apache.derby.jdbc.ClientDriver] but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered.
      04/01/2011 10:41:41 AM org.apache.catalina.loader.WebappClassLoader clearReferencesJdbc
      SEVERE: The web application [/derby] registered the JDBC driver [org.apache.derby.jdbc.AutoloadedDriver] but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered.
      04/01/2011 10:41:41 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
      SEVERE: The web application [/derby] appears to have started a thread named [derby.NetworkServerStarter] but has failed to stop it. This is very likely to create a memory leak.
      04/01/2011 10:41:41 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
      SEVERE: The web application [/derby] appears to have started a thread named [NetworkServerThread_4] but has failed to stop it. This is very likely to create a memory leak.
      04/01/2011 10:41:41 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
      SEVERE: The web application [/derby] appears to have started a thread named [DRDAConnThread_5] but has failed to stop it. This is very likely to create a memory leak.
      04/01/2011 10:41:41 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
      SEVERE: The web application [/derby] appears to have started a thread named [DRDAConnThread_13] but has failed to stop it. This is very likely to create a memory leak.
      04/01/2011 10:41:41 AM org.apache.coyote.http11.Http11Protocol destroy
      INFO: Stopping Coyote HTTP/1.1 on http-8080

    I'm currently starting and stopping Tomcat with the following commands:

      ./catalina run
      ./catalina stop

    Is there a better way to shut down Tomcat with Derby, or can this be solved by a configuration change?

    Read the article

  • Strange performance differences in read/write from/to USB flash drive

    - by Mario De Schaepmeester
    When copying files from my 8GB USB 2.0 flash drive to a traditional hard drive under Windows 7, the average speed is between 25 and 30 MB/s. When doing the reverse, copying to the USB drive, the average speed is 5 MB/s. I have tested this with about 4.5GB of files, a mixture of smaller and larger ones. The observations were the same with both FAT32 and exFAT file systems on the USB drive, and NTFS on the internal hard disk. I don't think I can be mistaken in saying that flash memory has much higher performance than a spinning hard drive in terms of both reading and writing, and for both memory types reading should be faster than writing. So I wonder: how can copying files from a fast-reading memory to a faster-writing memory actually be slower than copying files from a fast-reading memory to a slow-writing memory? I know the files pass through RAM before being copied, and there's caching as well, but I don't see how even that could tip the balance. If anything, it should favour writing to the USB drive, since the internal SATA HDD is "closer" to the system than the USB port and can feed data faster. Perhaps my way of thinking is all wrong, or it just depends on the manufacturer of the USB pen, but I am curious.
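
    One way to test the reasoning is to separate sequential throughput from small-file overhead on the same stick. This is an illustrative Python 3 sketch (the drive letter, sizes and file counts are assumptions); cheap flash typically drops from tens of MB/s on one large sequential write to a few MB/s when the same data arrives as thousands of small files, which would explain the 5 MB/s figure for a mixed 4.5GB copy:

      import os, time

      def write_speed_mb_s(path, chunk_bytes, file_count):
          """Write `file_count` files of `chunk_bytes` each, fsync'd, and return MB/s."""
          os.makedirs(path, exist_ok=True)
          data = os.urandom(chunk_bytes)             # payload prepared before timing starts
          start = time.time()
          for i in range(file_count):
              with open(os.path.join(path, f"bench_{i}.bin"), "wb") as f:
                  f.write(data)
                  f.flush()
                  os.fsync(f.fileno())               # push past the OS write cache
          return (chunk_bytes * file_count) / (time.time() - start) / 1e6

      target = r"F:\bench"                           # hypothetical USB drive letter
      print("1 file  x 128 MB  :", round(write_speed_mb_s(target, 128 * 1024 * 1024, 1), 1), "MB/s")
      print("2048 files x 64 KB:", round(write_speed_mb_s(target, 64 * 1024, 2048), 1), "MB/s")
      # the bench_*.bin files are left behind; delete them when done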

    Read the article

  • java.lang.OutOfMemoryError: unable to create new native thread

    - by Brad
    I consistently get this exception when trying to run my JUnit tests on my Mac:

      java.lang.OutOfMemoryError: unable to create new native thread
          at java.lang.Thread.start0(Native Method)
          at java.lang.Thread.start(Thread.java:658)
          at java.util.concurrent.ThreadPoolExecutor.addIfUnderMaximumPoolSize(ThreadPoolExecutor.java:727)
          at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:657)
          at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:92)
          at com.google.appengine.tools.development.ApiProxyLocalImpl$PrivilegedApiAction.run(ApiProxyLocalImpl.java:197)
          at com.google.appengine.tools.development.ApiProxyLocalImpl$PrivilegedApiAction.run(ApiProxyLocalImpl.java:184)
          at java.security.AccessController.doPrivileged(Native Method)
          at com.google.appengine.tools.development.ApiProxyLocalImpl.doAsyncCall(ApiProxyLocalImpl.java:172)
          at com.google.appengine.tools.development.ApiProxyLocalImpl.makeAsyncCall(ApiProxyLocalImpl.java:138)

    The same set of unit tests passes perfectly fine on Ubuntu and Windows. Some information about my system resources on the Mac:

      $ ulimit -a
      core file size (blocks, -c) 0
      data seg size (kbytes, -d) unlimited
      file size (blocks, -f) unlimited
      max locked memory (kbytes, -l) unlimited
      max memory size (kbytes, -m) unlimited
      open files (-n) 1024
      pipe size (512 bytes, -p) 1
      stack size (kbytes, -s) 8192
      cpu time (seconds, -t) unlimited
      max user processes (-u) 266
      virtual memory (kbytes, -v) unlimited

      $ java -version
      java version "1.6.0_24"
      Java(TM) SE Runtime Environment (build 1.6.0_24-b07-334-10M3326)
      Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02-334, mixed mode)

    The reason I don't think this is an application issue is that the same tests pass in different environments. I have tried setting the heap to 1024m and 512m, and the stack to 64k and 128k (and each of these combinations), with no luck. My open files limit was originally 256 and I have bumped it to 1024. I have been googling around for a bit and all posts say to decrease the heap size and increase the stack size, but that doesn't seem to help. Anyone have any more ideas?

    EDIT: Here is some environment information on my Ubuntu box:

      $ ulimit -a
      core file size (blocks, -c) 0
      data seg size (kbytes, -d) unlimited
      scheduling priority (-e) 20
      file size (blocks, -f) unlimited
      pending signals (-i) 16382
      max locked memory (kbytes, -l) 64
      max memory size (kbytes, -m) unlimited
      open files (-n) 1024
      pipe size (512 bytes, -p) 8
      POSIX message queues (bytes, -q) 819200
      real-time priority (-r) 0
      stack size (kbytes, -s) 8192
      cpu time (seconds, -t) unlimited
      max user processes (-u) unlimited
      virtual memory (kbytes, -v) unlimited
      file locks (-x) unlimited

      $ java -version
      java version "1.6.0_24"
      Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
      Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02, mixed mode)
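
    A hedged diagnostic sketch, not a fix: the most conspicuous difference between the two ulimit listings is max user processes (266 on the Mac versus unlimited on Ubuntu), and on many Unix systems native threads count toward that kind of per-user limit. The illustrative Python 3 probe below counts how many threads the OS will grant before refusing, so the number can be compared across machines (run it on a test box, since it deliberately pushes the limit):

      import threading

      def probe_thread_limit(cap=10000):
          """Start sleeping threads until the OS refuses (or `cap` is reached)."""
          done = threading.Event()
          threads = []
          try:
              while len(threads) < cap:
                  t = threading.Thread(target=done.wait)   # each thread just blocks on the event
                  t.start()
                  threads.append(t)
              print(f"no limit hit below {cap} threads")
          except RuntimeError as err:                       # CPython raises "can't start new thread"
              print(f"hit the limit after {len(threads)} threads: {err}")
          finally:
              done.set()                                    # wake every thread so it can exit
              for t in threads:
                  t.join()

      if __name__ == "__main__":
          probe_thread_limit()      # compare the count against `ulimit -u` on each machine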

    Read the article

  • Server Recovery from Denial of Service

    - by JMC
    I'm looking at a server that might be misconfigured to handle Denial of Service. The database was knocked offline during the attack and failed to restart itself when the attack subsided. Details of the attack: the attacker, intentionally or unintentionally, sent thousands of search queries through the application's search query URL within a couple of seconds. It looks like the server was overwhelmed, which caused the database to log the messages below. Server specs: 1.5GB of dedicated memory. Are there any obvious misconfigurations here that I'm missing?

    **mysql.log**

      121118 20:28:54 mysqld_safe Number of processes running now: 0
      121118 20:28:54 mysqld_safe mysqld restarted
      121118 20:28:55 [Warning] option 'slow_query_log': boolean value '/var/log/mysqld.slow.log' wasn't recognized. Set to OFF.
      121118 20:28:55 [Note] Plugin 'FEDERATED' is disabled.
      121118 20:28:55 InnoDB: The InnoDB memory heap is disabled
      121118 20:28:55 InnoDB: Mutexes and rw_locks use GCC atomic builtins
      121118 20:28:55 InnoDB: Compressed tables use zlib 1.2.3
      121118 20:28:55 InnoDB: Using Linux native AIO
      121118 20:28:55 InnoDB: Initializing buffer pool, size = 512.0M
      InnoDB: mmap(549453824 bytes) failed; errno 12
      121118 20:28:55 InnoDB: Completed initialization of buffer pool
      121118 20:28:55 InnoDB: Fatal error: cannot allocate memory for the buffer pool
      121118 20:28:55 [ERROR] Plugin 'InnoDB' init function returned error.
      121118 20:28:55 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
      121118 20:28:55 [ERROR] Unknown/unsupported storage engine: InnoDB
      121118 20:28:55 [ERROR] Aborting

    **ulimit -a**

      core file size (blocks, -c) 0
      data seg size (kbytes, -d) unlimited
      scheduling priority (-e) 0
      file size (blocks, -f) unlimited
      pending signals (-i) 13089
      max locked memory (kbytes, -l) 64
      max memory size (kbytes, -m) unlimited
      open files (-n) 1024
      pipe size (512 bytes, -p) 8
      POSIX message queues (bytes, -q) 819200
      real-time priority (-r) 0
      stack size (kbytes, -s) 8192
      cpu time (seconds, -t) unlimited
      max user processes (-u) 1024
      virtual memory (kbytes, -v) unlimited
      file locks (-x) unlimited

    **httpd.conf**

      StartServers 10
      MinSpareServers 8
      MaxSpareServers 12
      ServerLimit 256
      MaxClients 256
      MaxRequestsPerChild 4000

    **my.cnf**

      innodb_buffer_pool_size=512M
      # Increase Innodb Thread Concurrency = 2 * [numberofCPUs] + 2
      innodb_thread_concurrency=4
      # Set Table Cache
      table_cache=512
      # Set Query Cache_Size
      query_cache_size=64M
      query_cache_limit=2M
      # A sort buffer is used for optimizing sorting
      sort_buffer_size=8M
      # Log slow queries
      slow_query_log=/var/log/mysqld.slow.log
      long_query_time=2
      #performance_tweak
      join_buffer_size=2M

    **php.ini**

      memory_limit = 128M
      post_max_size = 8M
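
    For the misconfiguration question, a worst-case memory tally is the quickest check. Below is an illustrative Python 3 calculation using the figures quoted above (max_connections is an assumption, since it is not shown in the my.cnf excerpt, and the per-connection buffer figure is a rough ceiling). The failed mmap()'s errno 12 is ENOMEM: the 512M buffer pool could not be allocated once everything else had claimed the 1.5GB.

      MB_PER_GB = 1024

      ram_mb             = 1.5 * MB_PER_GB   # dedicated memory on the server
      innodb_pool_mb     = 512               # innodb_buffer_pool_size
      mysql_per_conn_mb  = 8 + 2             # sort_buffer_size + join_buffer_size, rough per-connection ceiling
      mysql_connections  = 100               # assumed max_connections; not shown in the my.cnf excerpt
      apache_max_clients = 256               # MaxClients in httpd.conf
      php_limit_mb       = 128               # php.ini memory_limit, worst case per Apache child

      mysql_worst  = innodb_pool_mb + mysql_connections * mysql_per_conn_mb
      apache_worst = apache_max_clients * php_limit_mb

      print(f"MySQL worst case     : {mysql_worst} MB")
      print(f"Apache/PHP worst case: {apache_worst} MB")
      print(f"Total worst case     : {mysql_worst + apache_worst} MB vs {ram_mb:.0f} MB of RAM")

    Even at a small fraction of that worst case, 256 Apache clients with a 128M PHP limit dwarfs the available RAM, which is consistent with the attack pushing mysqld out and leaving it nothing to restart into.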

    Read the article

  • Xen domU mem-set issue

    - by Casper Langemeijer
    I'm running into a problem on my Xen 4.0.1 server (Debian Squeeze). My host has 32G of memory; Domain-0 has 2048M assigned to it (scaled down with xm mem-set Domain-0 2048), and top in Domain-0 confirms this. I created a virtual machine config file (using xen-tools) with the following options:

      memory = '512'
      maxmem = '2048'

    Both host and guest machines are running the standard 2.6.32-5-xen-amd64 Debian kernel. 'xm create' creates a virtual machine with 512MB of memory as expected. Then 'xm mem-set domU 1024' will not expand the memory to 1024MB. Running 'xm mem-set domU 400' does set the memory to about 400MB, and then 'xm mem-set domU 1024' expands the memory back only to 512MB. Based on this, you would say that xm ignores maxmem and silently caps it at 512, but in the output of xm top the MAXMEM column reads 2G while the MEM column will not go over 512M. The output of xm list tells another story: it shows 1024 when I run 'xm mem-set domU 1024'. I've googled myself all the way around the internet for this issue and found that most people don't scale back Domain-0. I know I've seen a bug report about the issue I'm experiencing, but I can't find it anymore. Does anyone see what I'm doing wrong here? Update: I just upgraded my kernel to the one provided by Debian backports, and the issue is gone.

    Read the article

  • Mysqld shutting down by itself

    - by AJ Naidas
    I'm running a WordPress blog that gets medium-to-high traffic. It is hosted on an Ubuntu server with 2GB of memory, 2 cores, a 40GB SSD and 3TB of transfer. The problem is that MySQL shuts down by itself after an hour or two, and I have to restart it every time this happens. I checked the logs and this is what I found:

      140612 6:48:14 [Warning] Using unique option prefix myisam-recover instead of myisam-recover-options is deprecated and will be removed in a future release. Please use the full name instead.
      140612 6:48:14 [Note] Plugin 'FEDERATED' is disabled.
      140612 6:48:14 InnoDB: The InnoDB memory heap is disabled
      140612 6:48:14 InnoDB: Mutexes and rw_locks use GCC atomic builtins
      140612 6:48:14 InnoDB: Compressed tables use zlib 1.2.3.4
      140612 6:48:14 InnoDB: Initializing buffer pool, size = 1.4G
      InnoDB: mmap(1502412800 bytes) failed; errno 12
      140612 6:48:14 InnoDB: Completed initialization of buffer pool
      140612 6:48:14 InnoDB: Fatal error: cannot allocate memory for the buffer pool
      140612 6:48:14 [ERROR] Plugin 'InnoDB' init function returned error.
      140612 6:48:14 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
      140612 6:48:14 [ERROR] Unknown/unsupported storage engine: InnoDB
      140612 6:48:14 [ERROR] Aborting
      140612 6:48:14 [Note] /usr/sbin/mysqld: Shutdown complete

    Judging by this line:

      140612 6:48:14 InnoDB: Fatal error: cannot allocate memory for the buffer pool

    I suspect that this is a memory problem, but I would like to hear from the experts here before I conclude. Is this a lack-of-memory problem? Do you think the value of max_connections in my.cnf (currently 100) is a potential cause and needs increasing? TIA.
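
    The failed mmap(1502412800 bytes) is the 1.4G buffer pool itself, which on a 2GB box leaves little room for the OS, PHP and the web server, so a restart during a traffic spike finds nothing left to allocate. A small illustrative Python 3 check (the 50% sizing figure is a rough rule of thumb for a box that also runs the web stack, not a WordPress-specific recommendation) that compares the configured pool against the machine's RAM:

      def mem_total_mb():
          """Read MemTotal from /proc/meminfo and return it in MB."""
          with open("/proc/meminfo") as f:
              for line in f:
                  if line.startswith("MemTotal:"):
                      return int(line.split()[1]) // 1024   # value is reported in kB
          raise RuntimeError("MemTotal not found")

      configured_pool_mb = 1502412800 // (1024 * 1024)      # the failed mmap size from the log
      ram_mb = mem_total_mb()
      suggested_mb = int(ram_mb * 0.5)                      # rough ceiling when sharing the box

      print(f"RAM                : {ram_mb} MB")
      print(f"innodb_buffer_pool : {configured_pool_mb} MB ({100 * configured_pool_mb // ram_mb}% of RAM)")
      print(f"suggested ceiling  : {suggested_mb} MB (leave headroom for PHP and the web server)")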

    Read the article

  • Computer is dying--what should I be looking for?

    - by Will
    Okay, I'm a bit knowledgeable with computers and such, but I'm confused. My computer is dying slowly, and I'm not sure which part is causing it. Computer details: Vista, Dell machine, Intel Q6600 (2.4 GHz quad core), standard memory and drive (unknown manufacturer). Symptoms: I would best describe the symptoms as memory corruption. After a couple of days up, I start getting applications crashing or failing to open for a lack of "resources". Sounds are corrupted. On-screen text gets corrupted; the characters of text are garbled, not the pixels on the screen. Video memory seems untouched, as I haven't seen any misplaced pixels. Recently I've lost files on disk. I've also had errors reporting a supposed lack of disk space, even though I have fifty gigs free. There was one point where I couldn't even get to the POST when booting up; since I cleaned everything (see next) this hasn't happened. Diagnostic steps: The first thing I did was clean the case. There was a lot of dust buildup on the heatsinks, so I cleaned all that up. No help. Next, I disconnected and reconnected everything, from power cables to memory (I did not reseat the CPU). No change. Last, I ran the standard Vista memory diagnostics and ran chkdsk. Both reported no errors found. I have not run any POST tests, now that I think about it. I'm at a loss at this point. The disk appears fine, memory too. I'd expect motherboard issues to result in the thing not booting up, yet it does every time. What should I be looking at? What more can I do?

    Read the article

  • WPF MVVM UserControl Binding "Container", dispose to avoid memory leak.

    - by user178657
    For simplicity: I have a window with a bindable UserControl:

      <UserControl Content="{Binding Path = BindingControl, UpdateSourceTrigger=PropertyChanged}">

    I have two user controls, ControlA and ControlB. Both UserControls have their own DataContext view model, ControlAViewModel and ControlBViewModel, and both inherit from a ViewModelBase:

      public abstract class ViewModelBase : DependencyObject, INotifyPropertyChanged, IDisposable ...

    The main window was added to IoC. To set the property of the bindable UserControl, I do:

      ComponentRepository.Resolve<MainViewWindow>().Bindingcontrol = new ControlA;

    ControlA, in its DataContext, creates a DispatcherTimer to do "some stuff". Later on, I need to navigate elsewhere, so the other UserControl is loaded into the container:

      ComponentRepository.Resolve<MainViewWindow>().Bindingcontrol = new ControlB;

    If I put a breakpoint in the "some stuff" that was in ControlA's DataContext, the DispatcherTimer is still running - i.e. loading a new UserControl into the bindable UserControl on the main window does not dispose/close/GC the DispatcherTimer that was created in the DataContext view model. I've looked around, and as stated by others, Dispose doesn't get called because it's not supposed to be. :) Not all my UserControls have a DispatcherTimer, just a few that need to do some sort of "read and refresh" updates. Should I track these DispatcherTimer objects in the ViewModelBase that all UserControls inherit from and manually stop/dispose them every time a new UserControl is loaded? Is there a better way?

    Read the article

  • Dell R510 can i go from E5620 to X5690

    - by NJinPHX
    I have a Dell R510 used for a SQL server. Can I go from an E5620 to an X5690? They are both in the same Intel Xeon 5600 series, but I don't know whether an X (Performance) part is interchangeable with an E (Mainstream) part. If not, then the fastest upgrade I can make will be to another E. Thanks.

      Intel® Xeon® Processor E5620 (current): http://ark.intel.com/products/47925
      Intel® Xeon® Processor X5690 (proposed upgrade): http://ark.intel.com/products/52576
      Intel® Xeon® Processor E5649 (fallback upgrade): (can't post link)

    Read the article

  • Synchronization of volume snapshots when doing whole system backups

    - by intuited
    Is there a way to guarantee consistency across volumes when doing backups from LVM snapshots? Consider this scenario: Some system upgrade is in progress. It will write some files to the /usr volume, and once completed, will record success in the /var volume. As the upgrade is just about complete, I run a backup script that creates snapshots of the /usr and /var volumes, along with the rest of the system's volumes, and proceeds to create backups from those snapshots. Just before the upgrade's last write/flush on the /usr volume completes, the backup script takes its snapshot of /usr. That write completes, and the upgrade operation's success is quickly recorded in the nebulous depths of /var. The backup script takes a snapshot of /var. The backup script creates backups from the snapshots it has, er, snapshotted. So the result of all of this tomfoolery is that the resulting /usr backup contains a file which is missing a few bits, and the /var backup contains metadata indicating that that file is complete and approved for use. Without delving into the details of which operating systems' system upgrade systems would be unfazed by such trifles, is there a way to avoid such problems? At the least this seems like it could cause some application to fail unexpectedly after restoration of such a backup.
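
    One common way to avoid the race is to quiesce both filesystems first, snapshot them while frozen, and only then thaw and back up; that yields a single crash-consistent point in time across the volumes rather than snapshots taken seconds apart. A rough illustrative Python 3 sketch using fsfreeze and lvcreate via subprocess (the volume group, LV names and snapshot sizes are hypothetical, the script must run as root, and freezing / or /usr on a live box needs care, since anything the script itself needs from a frozen filesystem must already be in memory):

      import subprocess

      VOLUMES = [                                  # (mount point, LV device) - hypothetical names
          ("/usr", "/dev/vg0/usr"),
          ("/var", "/dev/vg0/var"),
      ]

      def run(*cmd):
          print("+", " ".join(cmd))
          subprocess.run(cmd, check=True)

      def snapshot_consistently():
          frozen = []
          try:
              for mount, _ in VOLUMES:             # freeze everything first...
                  run("fsfreeze", "--freeze", mount)
                  frozen.append(mount)
              for _, dev in VOLUMES:               # ...then snapshot while no writes can land
                  name = dev.rsplit("/", 1)[-1] + "_snap"
                  run("lvcreate", "--snapshot", "--size", "2G", "--name", name, dev)
          finally:
              for mount in reversed(frozen):       # always thaw, even if a snapshot failed
                  run("fsfreeze", "--unfreeze", mount)

      if __name__ == "__main__":
          snapshot_consistently()                  # back up from the *_snap volumes afterwards

    Note that this makes the snapshots mutually consistent at one instant; it does not, by itself, guarantee the upgrade has finished, so an application-level quiesce (or running backups outside upgrade windows) is still worth considering.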

    Read the article

  • In Haskell, will calling length on a Lazy ByteString force the entire string into memory?

    - by me2
    I am reading a large data stream using lazy ByteStrings, and while parsing it I want to know whether at least X more bytes are available - that is, whether the ByteString is at least X bytes long. Will calling length on it result in the entire stream being loaded, defeating the purpose of using a lazy ByteString? If yes, the follow-up would be: how do I tell whether it has at least X bytes without loading the entire stream? EDIT: Originally I asked in the context of reading files, but I understand there are better ways to determine file size. The ultimate solution I need, however, should not depend on the lazy ByteString's source.
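
    Yes - forcing length walks every chunk, so the whole stream ends up being read. The usual trick is to take at most X bytes and check whether X actually arrived, which never demands more than X bytes from the source. The question is about Haskell, but the idea is shown here as an illustrative Python 3 sketch over a lazy stream of chunks (all names are invented for the example):

      def has_at_least(chunks, n):
          """Return (answer, replay) without pulling more than ~n bytes from `chunks`."""
          taken = []
          got = 0
          for chunk in chunks:              # stops as soon as n bytes have been seen
              taken.append(chunk)
              got += len(chunk)
              if got >= n:
                  break
          def replay():
              yield from taken              # first the bytes already consumed
              yield from chunks             # then the untouched remainder
          return got >= n, replay()

      stream = iter(lambda: b"x" * 1024, None)      # endless fake 1 KB chunks
      ok, stream = has_at_least(stream, 4096)
      print(ok)                                     # True - only about 4 KB was pulled

    In Haskell the same shape is a bounded take on the lazy ByteString followed by a length check of that (at most X byte) prefix.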

    Read the article

  • How to write mmap input memory to O_DIRECT output file?

    - by Friedrich
    Why doesn't the following pseudo-code work (with O_DIRECT the write results in EFAULT):

      in_fd = open("/dev/mem");
      in_mmap = mmap(in_fd);
      out_fd = open("/tmp/file", O_DIRECT);
      write(out_fd, in_mmap, PAGE_SIZE);

    while the following does work (no O_DIRECT):

      in_fd = open("/dev/mem");
      in_mmap = mmap(in_fd);
      out_fd = open("/tmp/file");
      write(out_fd, in_mmap, PAGE_SIZE);

    I guess it's something to do with virtual kernel pages versus virtual user pages, which cannot be translated in the write call? Best regards, Friedrich

    Read the article

  • Ubuntu: unattended-upgrades from a local package archive

    - by Novelocrat
    I have a local apt archive with a bunch of packages I built in it. The Packages and Release files are generated by apt-ftparchive. The Release file looks like:

      Date: Thu, 06 May 2010 23:04:33 UTC
      Label: PPL
      Origin: PPL
      Suite: ppl
      MD5Sum:
       ebec3527ebc8351468b2ef8796c19855 37325 Packages
       d41d8cd98f00b204e9800998ecf8427e 0 Release
      SHA1:
       a0593b663d77fde88ee35b56ae1f3c17801cfe99 37325 Packages
       da39a3ee5e6b4b0d3255bfef95601890afd80709 0 Release
      SHA256:
       dd73a02846aee111cac58a869c6bf650886632ba82c2172ffddd81aa4429981c 37325 Packages
       e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 Release

    I'm using unattended-upgrades to keep the machines in the lab up to date on security and bug fixes, but I'm finding that it doesn't pull from my local archive. The configuration file for it looks like:

      // Automatically upgrade packages from these (origin, archive) pairs
      Unattended-Upgrade::Allowed-Origins {
          "Ubuntu hardy-security";
          "Ubuntu hardy-updates";
          "PPL ppl";
      };

      // List of packages to not update
      Unattended-Upgrade::Package-Blacklist {
      //    "vim";
      //    "libc6";
      //    "libc6-dev";
      //    "libc6-i686";
      };

      // Send email to this address for problems or packages upgrades
      // If empty or unset then no email is sent, make sure that you
      // have a working mail setup on your system. The package 'mailx'
      // must be installed or anything that provides /usr/bin/mail.
      //Unattended-Upgrade::Mail "root@localhost";

    Yet when I run sudo unattended-upgrade on one of these machines, newer package versions don't get installed. Can anyone point out what I'm getting wrong?
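
    unattended-upgrades only installs versions whose origin/archive pair matches an Allowed-Origins entry, so the first thing to verify is what pair apt actually records for the local repository (it comes from the Origin and Suite/Archive fields of the Release file). An illustrative sketch using python-apt, assuming the python-apt package is installed; the package name 'mypackage' is a placeholder for one of the locally built packages:

      import apt

      cache = apt.Cache()
      pkg = cache["mypackage"]                      # placeholder: a package from the local archive

      for version in pkg.versions:
          for origin in version.origins:
              # Allowed-Origins entries are matched as "<origin> <archive>"
              print(f"{version.version:>15}  origin={origin.origin!r}  archive={origin.archive!r}  "
                    f"site={origin.site!r}  trusted={origin.trusted}")

    If origin or archive comes back empty, the apt-ftparchive Release file is missing the field that unattended-upgrades matches on; unsigned repositories (trusted=False) are also typically skipped unless explicitly allowed.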

    Read the article

  • If I want to play the same sound 10 times per second, must I have 10 copies of that sound in memory?

    - by mystify
    I have a sound that needs to be played 10 times per second. The sound is 1 second long, so it overlaps with itself about 10 times. However, as far as I understand the Finch sound library, I would need 10 different instances of the sound in place so that I can play it 10 times at almost the same time. With just one instance, the sound stops and plays from the beginning on every iteration instead of overlapping with itself. How can I do that?

    Read the article

  • Error upgrading Ubuntu server from Intrepid to Jaunty

    - by Martin
    I'm trying to upgrade an old Ubuntu server from 8.10 (Intrepid) to 9.04 (Jaunty), but it fails:

      root@server1:/# do-release-upgrade
      Checking for a new ubuntu release
      Failed Upgrade tool signature
      Failed Upgrade tool
      Done downloading
      extracting 'jaunty.tar.gz'
      Failed to extract
      Extracting the upgrade failed. There may be a problem with the network or with the server.

    Does anyone have an idea why I get this error and how to fix it?

    UPDATE: I think I might have tracked the problem down. My /etc/update-manager/meta-release looks like this:

      [METARELEASE]
      URI = http://changelogs.ubuntu.com/meta-release
      URI_LTS = http://changelogs.ubuntu.com/meta-release-lts
      URI_UNSTABLE_POSTFIX = -development
      URI_PROPOSED_POSTFIX = -proposed

    If I go to http://changelogs.ubuntu.com/meta-release it has this info for Jaunty:

      Dist: jaunty
      Name: Jaunty Jackalope
      Version: 9.04
      Date: Thu, 23 Apr 2009 12:00:00 UTC
      Supported: 0
      Description: This is the 9.04 release
      Release-File: http://archive.ubuntu.com/ubuntu/dists/jaunty/Release
      ReleaseNotes: http://changelogs.ubuntu.com/EOLReleaseAnnouncement
      UpgradeTool: http://archive.ubuntu.com/ubuntu/dists/jaunty-proposed/main/dist-upgrader-all/0.111.8/jaunty.tar.gz
      UpgradeToolSignature: http://archive.ubuntu.com/ubuntu/dists/jaunty-proposed/main/dist-upgrader-all/0.111.8/jaunty.tar.gz.gpg

    Those links starting with archive.ubuntu.com are broken since Jaunty is EOL. I guess I could fix this by copying this file, replacing "archive" with "old-releases", hosting the modified file somewhere, and changing the URL in the meta-release file. Is this a good solution, or will it make me run into worse problems?
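
    A small illustrative Python 3 sketch of the rewrite step described above (the output path is an assumption; the patched file would then be served over HTTP and /etc/update-manager/meta-release pointed at it):

      import urllib.request

      SRC = "http://changelogs.ubuntu.com/meta-release"
      DST = "/var/www/meta-release-eol"        # hypothetical path served by a local web server

      with urllib.request.urlopen(SRC) as resp:
          text = resp.read().decode("utf-8")

      # EOL releases were moved from archive.ubuntu.com to old-releases.ubuntu.com,
      # so point the Release-File/UpgradeTool URLs there instead.
      patched = text.replace("http://archive.ubuntu.com/ubuntu",
                             "http://old-releases.ubuntu.com/ubuntu")

      with open(DST, "w") as out:
          out.write(patched)

      print(f"wrote {DST}; set URI = http://<yourhost>/meta-release-eol in /etc/update-manager/meta-release")

    The sources.list on the server usually needs the same archive-to-old-releases substitution before the upgrade of an EOL release will run.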

    Read the article

  • Internet very slow when upgrading to Ubuntu 9.10

    - by roojoo
    I was running Ubuntu 8.x on my desktop and everything worked fine. I'm using wired internet and it worked perfectly; pages loaded pretty fast. However, when I decided to upgrade to 9.10, the upgrade failed at some point, but I was left with what appeared to be Ubuntu 9.10. Since then the internet has been weird: when I go to a website it takes at least 10 seconds for the page to display, yet if I'm already on a site and navigate to other pages on that website, they load quickly. This never happened prior to the upgrade. I thought this might be due to the upgrade not installing correctly, so I did a fresh install of Xubuntu 9.10, but the problems are still the same. I'm writing this on a Vista machine over the wireless network and the internet is fine there. Does anyone have any ideas about the issue? Thanks.
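
    That pattern - the first page of a site is slow but later pages are fast - usually points at name resolution or connection setup rather than raw bandwidth. An illustrative Python 3 sketch (the host names are arbitrary examples) that times the DNS lookup separately from the TCP connect, which would make a resolver problem obvious; a common culprit on 9.10-era setups was slow DNS (e.g. IPv6 lookups timing out against some home routers):

      import socket, time

      def time_host(host, port=80):
          t0 = time.time()
          infos = socket.getaddrinfo(host, port)                      # DNS lookup
          t1 = time.time()
          ip, resolved_port = infos[0][4][0], infos[0][4][1]
          with socket.create_connection((ip, resolved_port), timeout=10):   # TCP handshake only
              pass
          t2 = time.time()
          print(f"{host:<20} dns={t1 - t0:6.2f}s  connect={t2 - t1:6.2f}s  ({len(infos)} records)")

      for host in ("www.ubuntu.com", "www.google.com", "example.org"):
          time_host(host)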

    Read the article

  • ubuntu: mumble 1.2.2 in Karmic

    - by Dan
    Karmic only has Mumble 1.1.8, but if I want to connect to a 1.2 server I need to upgrade. So I would like to know how I can upgrade to Mumble 1.2.2 without messing myself up later when I upgrade to 10.04 and beyond - I just want a smooth transition to the next versions of Mumble. Is there any way to upgrade to this newer version and either keep it in the package manager or make it not interfere with the natural upgrades the program will later receive from the package manager? Thanks, Dan

    Read the article

  • Memory allocation in detached NSThread to load an NSDictionary in background?

    - by mobibob
    I am trying to launch a background thread to retrieve XML data from a web service. I developed it synchronously - without threads - so I know that part works. Now I am ready to have a non-blocking service by spawning a thread to wait for the response and parse it. I create an NSAutoreleasePool inside the thread and release it at the end of the parsing. The code to spawn the thread, and the thread itself, are as follows.

    Spawn from main-loop code:

      ...
      [NSThread detachNewThreadSelector:@selector(spawnRequestThread:) toTarget:self withObject:url];
      ...

    Thread (inside 'self'):

      -(void) spawnRequestThread: (NSURL*) url {
          NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
          parser = [[NSXMLParser alloc] initWithContentsOfURL:url];
          [self parseContentsOfResponse];
          [parser release];
          [pool release];
      }

    The method parseContentsOfResponse fills an NSMutableDictionary with the parsed document contents. I would like to avoid moving the data around a lot and allocate it back in the main loop that spawned the thread, rather than making a copy. First, is that possible? If not, can I simply pass in an allocated pointer from the main thread and allocate with the 'dictionaryWithDictionary' method? That just seems so inefficient. Are there preferred designs?

    Read the article

  • Windows 7 Professional N needs to be just Windows 7 Professional.

    - by Jess
    I have a laptop at work that originally had Windows 7 Home Premium on it. We have a tech who comes in a few times a week to do some of our support work, and we asked him to upgrade the laptop to Windows 7 Professional. Before he left he told us the upgrade didn't work and that we'd have to order a disk. Upon checking the computer, it seemed he had upgraded it to Windows 7 Professional N. It had not previously been Home Premium N, so I'm not exactly sure how he managed to upgrade it to an N edition. I do not understand why he didn't run Windows Anytime Upgrade, but that is now irrelevant. How can I change Professional N to regular Professional? I would like to avoid having to restore it back to Home Premium if possible.

    Read the article
