Search Results

Search found 21331 results on 854 pages for 'require once'.


  • Wiping Deleted Directory Entries and Defragmenting Directories

    - by Synetech inc.
    Hi, I have seen plenty of apps that wipe free space on a disk (usually by creating a file as big as the remaining space) or defragment a file (usually by using the MoveFile API to copy it to a new contiguous area). What I have not seen, however, is a program that wipes deleted directory entries. That is, when a file is deleted, its information (name, dates, etc.) remains in the directory and is simply marked as deleted. That leaves all kinds of information behind in the directory, and it also wastes space since (at least on FAT drives) the directory may be using several clusters. For example, if a directory once held a lot of files, it will have been expanded into another cluster, which could be anywhere on the disk. This means the directory is fragmented and may be using more clusters than needed, possibly with hundreds of unused (i.e., "deleted file") entries between active files. Does anyone know of a program that can defragment/consolidate directories (i.e., wipe unused entries and move active entries together)? I would really rather not have to resort to writing my own yet again. Thanks a lot.

    EDIT: Sorry, I should have said: Windows and/or DOS, for FAT*/NTFS.


  • Magento Apache Config & Memory Issues

    - by cheshirepine
    I have a Magento installation on a VPS that is giving me a headache. This particular VPS has a reasonable spec - 2 GB memory and 50 GB storage. It runs a single domain, with a single Magento install - and nothing else. About 5 months ago we started having issues. Every so often (about once every 2 or 3 weeks) the VPS would crash - all processes stopped, and the only way to restart the container was via Virtuozzo. Now, however, it's 2 or 3 times a week. My VPS hosts confirm I am breaching the 2 GB memory limit, at which point all VPS processes are killed to stop it bringing the entire node down. I have not made any config changes to it at all - I was running New Relic on it for a short while, but have removed that in case it was contributing to the issues. I can see nothing in the logs which indicates an issue, and we have no cron jobs running at the time the crashes happen. The site generates steady, but not huge, amounts of traffic (usually averaging less than 100 visits per day). Is there anything in particular I should have done to the Apache or PHP configs to help? I'm not a massively experienced Apache admin, but I know more than enough to solve most problems... Failing that, any other ideas that might help? I can't afford for this site to be down this much.
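
    As a starting point rather than a definitive fix, the usual knobs for keeping Apache and PHP inside a 2 GB container are the prefork MPM limits and PHP's memory_limit. A minimal sketch, assuming Apache 2.2 with prefork and mod_php (the values are illustrative and need tuning against the actual per-child footprint):

        # httpd.conf / apache2.conf -- cap how many children can exist and recycle them
        <IfModule mpm_prefork_module>
            StartServers          2
            MinSpareServers       2
            MaxSpareServers       4
            # keep MaxClients x per-child RSS comfortably under the 2 GB container limit
            MaxClients           20
            # recycle children periodically so slow leaks cannot accumulate
            MaxRequestsPerChild 500
        </IfModule>

        ; php.ini -- Magento needs headroom, but an explicit cap stops one request ballooning
        memory_limit = 256M

    The point of the sketch is that total memory use is roughly MaxClients times the size of the largest child, so it is that product, not any single setting, that has to stay under the container limit.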


  • Ubuntu 10.04 Server on Hyper-V Server R2 has sluggish install and command line

    - by Paul Hobart
    I've installed Ubuntu Server 10.04 (64-bit) on a Hyper-V Server R2 and have encountered two issues that I think are related: a very slow install and a very slow command prompt. The text-mode installer goes through a series of text-based prompt windows, and it takes 7-10 seconds for each of these windows to draw on the screen. The end result is that every time I answer a prompt and hit Enter, I wait 15 seconds while the screen redraws line by line. I can literally see each line of text being drawn (like the old 300-baud modem days). Once the install is done, scrolling on the command line is super slow. For instance, if a simple command like "ls" causes the screen to scroll, it scrolls very slowly. This happens on a fresh install. The server functions as a LAMP server and an OpenSSH server, but that's it (I don't even have any virtual hosts set up yet). And this only happens on the virtual machine console. I access the console through Hyper-V Manager and don't have this problem on any of my other virtual machines. Also, this problem does NOT happen when accessing a shell through OpenSSH. How can I improve this performance issue?


  • Windows XP mapped drives disconnect from Windows 7 on a peer-to-peer network

    - by Kathryn Codo
    I have a 5-user peer-to-peer network: two XP machines, three Windows 7 machines, and a Windows 7 machine acting as the file server. I have no issues with the Windows 7 machines staying connected via their mapped drives to the "server". However, once a day the two XP machines will not connect to the Windows 7 "server" via the mapped drives, and I must restart the "server" in order to see it and access the files via the mapped drives or Explorer. I have tried the /persistent:yes option of NET USE on the XP machines and also setting a static IP address on the "server". I turned the firewall off on the "server" - no difference - so I turned it back on. No interruption occurs with the internet when this happens; we just can't see the server. Again, the Windows 7 users are unaffected. I have gone into the advanced network settings on the "server" and added read/write permissions for Everyone as well as for the users of the XP machines. I have double-checked that we are all in the same WORKGROUP. I'm at a loss. It seems to happen around midday, and I've not been able to find any activity happening at that time (like someone plugging in a thumb drive). Any suggestions would be greatly appreciated, as my client is running old XP software he no longer has the disks for and needs for his engineering firm.
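
    For reference, the persistent mapping mentioned above would be created from the XP side with something along these lines (server and share names are placeholders, not the actual setup):

        rem map the drive and ask Windows to restore it at every logon
        net use S: \\SERVER\Shared /persistent:yes
        rem with no arguments, net use lists current mappings and their status (OK / Disconnected / Unavailable)
        net use

    When the drop happens, the status column from a plain "net use" on one of the XP machines is often the quickest clue as to whether the mapping itself is gone or merely unreachable.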


  • Mass-inserting data into MySQL

    - by user12145
    Edit: I realized that if I construct one large multi-row query in memory, the speed increases by almost an order of magnitude: "insert ignore into xxx(col1, col2) values ('a',1), ('b',1), ('c',1), ..."

    Edit: since I have an index on the first column, the insert time creeps up as I insert more. Can I delay the index maintenance until the end?

    Original: I'm using the following to batch-insert 10 million rows into a MySQL DB (not all at once, since they don't all fit into memory), and it's too slow (taking many hours). Should I use LOAD DATA INFILE to improve performance? I would have to create a second file to store all 10 million rows, then load that into the DB. Are there better ways?

        // batch insert via JDBC; data is assumed to be a collection of strings
        PreparedStatement st = con.prepareStatement(
            "insert ignore into xxx (col1, col2) values (?, 1)");
        Iterator<String> d = data.iterator();
        while (d.hasNext()) {
            st.clearParameters();
            st.setString(1, d.next().toLowerCase());
            st.addBatch();
        }
        int[] updateCounts = st.executeBatch();
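
    On the LOAD DATA question the post raises, a rough sketch of that route, reusing the placeholder table and column names from the question (note that ALTER TABLE ... DISABLE KEYS only defers non-unique index maintenance, and only on MyISAM tables):

        -- one lowercased value per line in the input file; col2 is a constant, as in the JDBC version
        ALTER TABLE xxx DISABLE KEYS;
        LOAD DATA LOCAL INFILE '/tmp/rows.tsv' IGNORE INTO TABLE xxx (col1) SET col2 = 1;
        -- ENABLE KEYS rebuilds the deferred indexes in one pass at the end
        ALTER TABLE xxx ENABLE KEYS;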


  • Puppet file transfer slow

    - by Noodles
    I have a Puppet master and agents in different datacenters. The latency between them is ~40 ms. When I run "puppet agent --test" on an agent to apply the latest manifest, it takes ~360 seconds to finish. After doing some digging I can see that the main cause of the slowdown is file transfers: it seems to take ~10 seconds to transfer each file. The files are only small (configuration files), so I can't understand why they take so long. This is an example of a file in my manifest:

        file { "/etc/rsyncd.conf":
          owner  => "root",
          group  => "root",
          mode   => 644,
          source => "puppet:///files/rsyncd/rsyncd.conf",
        }

    Running puppet-profiler I see this:

        10.21s - File[/etc/rsyncd.conf]

    It also seems I cannot update more than one server at once using Puppet: if I run two servers at the same time, Puppet takes twice as long. I have changed the Puppet master from WEBrick to Mongrel, but this doesn't seem to help. This is making deploying changes painful. A simple config change can take an hour to roll out to all servers.


  • Fedora 17 - Dropping into debug shell after attempted partitioning

    - by i.h4d35
    So I tried creating a new partition on Fedora 17 using fdisk, as follows:

        Command (m for help): n
        Command action
           e   extended
           p   primary partition (1-4)
        p
        Partition number (1-4): 1
        First cylinder (2048-823215039, default 2048):
        Using default value 2048
        Last cylinder or +size or +sizeM or +sizeK (1-9039, default 9039): +15G

    Once this was done, instead of formatting the partition I created, I ran the partprobe command to write the changes to the partition table. On rebooting, the computer drops to the debug shell and gives me errors as follows:

        dracut warning: unable to process initqueue
        dracut warning: /dev/disk/by-uuid/vg_mymachine does not exist
        dropping to debug shell
        dracut:/#

    While trying to run fsck on the said partition from the debug shell, it says "etc/fstab not found", and inside /etc I see an fstab.empty file. Is it now possible to retrieve what I have from the computer? Any help would be appreciated. Thanks in advance.

    Edit: I've also tried the following steps for additional troubleshooting: (1) I tried to boot using the Fedora disc in rescue mode - it says no Linux partition was detected. (2) I tried to create an fstab file by combining the entries from blkid and the /etc/mtab file, using the UUIDs from the mtab file (following a suggested solution) - it didn't work: as soon as I rebooted the machine, it dropped me into the debug shell again and the fstab file I had created wasn't there anymore in /etc.
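
    For anyone in a similar spot, the rebuild-fstab-from-blkid attempt described above would look roughly like this from the rescue environment (device names, UUIDs, and filesystem types are placeholders; the real root filesystem has to be found, mounted read-write, and have the file written inside it, otherwise the new fstab vanishes on reboot exactly as described):

        # from the rescue shell: list what the kernel can actually see
        blkid
        # mount the real root somewhere and write the fstab inside it, not in the rescue system's /etc
        mount /dev/mapper/vg_mymachine-lv_root /mnt
        echo 'UUID=xxxx-xxxx  /     ext4  defaults  1 1' >> /mnt/etc/fstab
        echo 'UUID=yyyy-yyyy  swap  swap  defaults  0 0' >> /mnt/etc/fstab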


  • How do I get a Mac to request a new IP address from another DHCP server running in parallel while Netbooting?

    - by huyqt
    Hello, I have an interesting situation. I'm trying to use a Linux-based machine to allow Macs to Netboot (similar to PXE boot) by running a DHCP service in parallel with the "global" DHCP server. The local DHCP server hands out IPs in a private subnet, e.g. 10.168.0.10-10.168.254.254, while the "global" DHCP server hands out IPs from the range 10.0.0.1-10.0.1.254. The local DHCP range is only supposed to be used for Preboot Execution Environment and Netboot. The local DHCP server is something I have control over, but I do not have access to the global DHCP server. I have a filter to only allow members with the vendor strings "AAPLBSDPC/i386" and "PXEClient". PXE works fine, but Netboot has a quirk: Apple systems that haven't been connected to the network yet can Netboot fine, but once one grabs a "real" IP address from the global DHCP server, it will "save" it and request it the next time we want it to Netboot (which the local DHCP server won't give it). This is what I want:

        Mar 30 10:52:28 dev01 dhcpd: DHCPDISCOVER from 34:15:xx:xx:xx:xx via eth1
        Mar 30 10:52:29 dev01 dhcpd: DHCPOFFER on 10.168.222.46 to 34:15:xx:xx:xx:xx via eth1
        Mar 30 10:52:31 dev01 dhcpd: DHCPREQUEST for 10.168.222.46 (10.168.0.1) from 34:15:xx:xx:xx:xx via eth1
        Mar 30 10:52:31 dev01 dhcpd: DHCPACK on 10.168.222.46 to 34:15:xx:xx:xx:xx via eth1
        Mar 30 10:52:32 dev01 in.tftpd[5890]: tftp: client does not accept options
        Mar 30 10:52:53 dev01 in.tftpd[5891]: tftp: client does not accept options
        Mar 30 10:52:53 dev01 in.tftpd[5893]: tftp: client does not accept options
        Mar 30 10:52:54 dev01 in.tftpd[5895]: tftp: client does not accept options

    This is what I get when it already has a "stored" IP:

        Mar 30 10:51:29 dev01 dhcpd: DHCPDISCOVER from 00:25:xx:xx:xx:xx via eth1
        Mar 30 10:51:30 dev01 dhcpd: DHCPOFFER on 10.168.222.45 to 00:25:xx:xx:xx:xx via eth1
        Mar 30 10:51:31 dev01 dhcpd: DHCPREQUEST for 10.0.0.61 (10.0.0.1) from 00:25:xx:xx:xx:xx via eth1: ignored (not authoritative).

    Do you have any suggestions? It would be much appreciated.
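
    For context, the vendor-string filter described above is usually expressed in dhcpd.conf along these lines; this is a sketch of the shape of such a config, not the poster's actual file (subnet, range, and class name are placeholders):

        class "netboot-clients" {
            match if substring(option vendor-class-identifier, 0, 9) = "PXEClient"
                  or substring(option vendor-class-identifier, 0, 14) = "AAPLBSDPC/i386";
        }

        subnet 10.168.0.0 netmask 255.255.0.0 {
            pool {
                allow members of "netboot-clients";
                range 10.168.0.10 10.168.254.254;
            }
        }

    The "ignored (not authoritative)" line in the second log is dhcpd declining to answer a DHCPREQUEST for an address outside its own ranges; whether making the local server authoritative (so it NAKs such requests and forces the Mac back to DISCOVER) is safe depends entirely on it never NAKing clients that legitimately belong to the global server.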


  • How to compare old CPU to new CPU?

    - by Lasse V. Karlsen
    I hope this question doesn't get closed at once :) I have an old laptop, a Compaq NC4200, which is going its final laps around the track these days. The battery is dead, and everything kinda runs slow. It also has only 1 GB of memory, and even though I don't know if it can take more, I probably wouldn't be able to get hold of any that matches without having to special-order it. The size, however, has been ideal for my usage pattern, so I'm looking to replace it with a similarly sized laptop, at least in the same size category. However, it's been a while since I tried keeping track of CPUs, so I have a question. The old laptop has an Intel Pentium M 760 1.86 GHz processor. One laptop I found online has an Intel Pentium SU4100 1.3 GHz dual-core. This type of processor seems to be quite common in the price and size range I've been looking at. What kind of relative performance boost could I expect from the old one to the new one? I am not expecting an exact "about 7.45x speed" answer, but some indication would be nice. For instance, dual-core tells me it might be akin to 2.6 GHz, but I assume I can't simply compare 1.86 GHz to 2.6 GHz and expect the new one to run about 1.4x as fast; I expect more these days. Or is that unrealistic for this kind of processor? Do I need to up my price range and go for a 2+ GHz processor?


  • Moving domain and keeping IMAP email - Linux Evolution, Mac Mail

    - by Douglas Squirrel
    This question is about keeping email during a server move, where the clients are Linux (me) and Mac (my wife), both using IMAP. I receive email at [email protected] using a webmail service that my hosting company (1and1) provides. I read it via IMAP in Evolution, so I should have copies of all the emails on my local machine. I have just moved mydomain.com from one type of account to another, and the hosting company doesn't move existing email on the server when I do this - I assume they move my account to a different mail server and don't provide a migration path for the email to move too (yes, this is annoying). Before migrating, I backed up Evolution (File - Backup settings) and did a spot check in the evolution-backup.tar.gz file to be sure that my mail was in there. After migrating, I restored (File - Restore settings) and had hoped that I would see all my mail again. Unfortunately, Evolution just shows me new mail sent to the account, not the old mail. Is there a way to get the old mail back onto the mail server, or at least displaying in Evolution, as it was before the move? If not, can I read it in some convenient way, e.g. in Evolution offline or in a text file (then I can pick the mails I really want to keep and resend them to myself)? Also, I am about to do a similar move for my wife's domain, [email protected]. She reads her mail on a Mac using IMAP in Apple Mail. Is there anything I can do to make the move smooth for her? (I have backed up [her user]/Library/Mail already, but I'm not sure what to do once the move is done.)


  • SQL 2000 Backup/Export Process - Can't find SQL 2000 Enterprise Manager, Can't use Mgmt Studio Express

    - by 1nsane
    I need to make a backup of a client's SQL 2000 database, but there are a few issues preventing me from doing so using the traditional methods. I've tried using SQL Management Studio Express, but the host doesn't grant sufficient privileges to create a backup, and I'm getting some strange error messages. I've also tried using "Generate Scripts" to recreate the schema and then the DTS Wizard to migrate the data, but the IDs set up with the identity specification property are not consistent with the live database once copied over, which results in some foreign key breakage... If I remember correctly, I was able to use Microsoft SQL 2000 Enterprise Manager to perform the task before, but I can't find it anywhere... it seems Microsoft has pulled most SQL Server 2000 material from their site. Does anyone know where I can find a copy of Enterprise Manager (or a trial of SQL Server 2000, which I believe includes the component)? Or conversely, does anyone know of any other tools (preferably non-commercial) that are capable of mirroring remote SQL Server 2000 DBs? Thanks!
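
    For reference, what Enterprise Manager does behind the scenes is a plain T-SQL BACKUP DATABASE, which can also be issued through osql if the host ever grants the backup permission (server, credentials, database name, and path below are all placeholders):

        rem write a full backup to a file on the *server's* filesystem
        osql -S SERVERNAME -U username -P password -Q "BACKUP DATABASE ClientDb TO DISK = 'C:\backups\ClientDb.bak' WITH INIT"

    The catch on shared hosting is that the target path is on the database server, not the local machine, so even when the command succeeds the .bak file still has to be fetched some other way.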


  • Excel: conditionally format a cell using the format of another, content-matching cell

    - by Eric A. Meyer
    I have an Excel spreadsheet where I'd like to be able to create a "key" of formatted cells with unique values, and then in another sheet format cells using the key formatting. So for example, my key is as follows, with one value per cell and the visual formatting indicated in parentheses:

        A (red background)
        B (green background)
        C (blue background)

    So that's on one sheet (or in a remote corner of the current sheet - whichever is better). Then, in an area that I mark for conditional formatting, I can type one of those three letters and have the cell where I typed it visually formatted according to the key. So if I type a "B" into one of the conditionally formatted cells, it gets a green background. (Note that I'm using backgrounds here solely for ease of explanation: ideally I want to have all visual formatting copied over, whether it's foreground color, background color, font weight, borders, or whatever. But I'll take what I can get, obviously.)

    And - just to make it extra-tricky - if I change the formatting in the key, that change should be reflected in cells that reference the key. Thus, if I change the "B" formatting in the key from a green background to a purple background, any "B" in the main sheet should switch to the new color. Similarly, it should be possible to add or remove values from the key and have those changes applied to the main data set. I'm okay with the formatting-update-on-key-change being triggered by clicking a button or something.

    I suspect that if any of this is possible it will require VBA, but I've never used it so I've no idea where to start if that's the case. I'm hoping it's possible without VBA. I know it's possible to just use multiple conditional formats, but my use case here is that I'm trying to create the above-described capability for someone who isn't conversant with conditional formatting. I'd like to let them be able to define a key, update it if necessary, and keep on truckin' without me having to rewrite the spreadsheet's formatting rules for them.

    --- UPDATE --- So I think I was a bit unclear about my original request. Let me try again with an image. The image shows the "key" on the left, where values and styles are defined using keyboard and mouse input. On the right, you see the data that should be formatted to match the key. Thus if I type a "C" into a cell in the Data area, it should be blue-backed. Furthermore, if I change the formatting of "C" in the Key to have a purple background, all the "C" cells should switch from blue to purple. For further craziness, if I add more to the Key (say, "D" with a yellow background) then any "D" cells will be styled to match; if I remove a Key entry, then matching values in the Data area should revert to default styling. So. Is that more clear? Is it possible, in whole or in part? I don't have to use conditional formatting for this; in fact, at this point I suspect I probably shouldn't. But I'm open to any approach!


  • Application losing Printer within Terminal Services for remote users

    - by Richard
    Question: What I need to do is have a permanent link to a printer, normally only accessible through Terminal Services (printer redirection), so that Sage Line 50 layouts can see that printer persistently, even after users have disconnected and reconnected to the Terminal Services session. Although the printer is accessible each time a user connects to the Sage server via Terminal Services, it is given a different session number and therefore the Sage layout sees it as a different printer.

    History behind the question:

        Users connect via Terminal Services to a Sage server on a different site.
        Sage Line 50 v15 runs on that server.
        Users want to print invoices (Sage layouts) locally.
        The Sage server cannot see the users' local printers; to get around this, users rely on the printer redirection feature of Terminal Services.
        The individual reports can be edited to point to a specific printer by default, so the user just selects an invoice, clicks print, selects the layout/report wanted, and it prints that invoice to the default printer specified.

    The problem occurs because the layouts are edited to point to the user's local printer "Ricoh 1018d (session#)" - note the "(session#)", as this is the user's local printer being redirected through the Terminal Services session. Users are able to print using the Sage layouts once the default printer is set up within the layout and saved, but as soon as a user disconnects from the Terminal Services session, reconnects in the morning, and goes to print, the layout has lost its connection to that printer. I understand why it fails: the printer exists on a per-session basis, and the layout can't hold on to a connection from a previous session. Thanks in advance for any assistance...


  • Windows Server 2008 R2, weird intermittent outgoing connection issues, & Hyper-V Virtual Network

    - by brizz
    My provider says that they run an 'unfiltered' network, so it's not a network issue. I'm not entirely convinced... but they aren't being very helpful. Basically, two days ago the server started having issues where DNS completely stops working for anywhere between 2 and 20 minutes. This happens roughly every 1-2 hours, at least once. I later found that I am unable to ping ANY external IP address while this is going on, so it's not a DNS issue. What's even more odd is that while this is happening I STAY connected via RDP, and from my home computer I can still access web pages etc. that are hosted on the server. I have tried disabling the firewall - no fix. I have checked Event Viewer; the only thing there is a warning that DNS has stopped working. I have made no changes or updates to the server. The only thing I have done is add an IP address (I don't think it has anything to do with the problem, but I wanted to mention everything). Any help/insight/suggestions on how to go about debugging would be much appreciated!

    Update: I am also running Hyper-V and a virtual network. I just tried pinging (while having the issue) from a Hyper-V machine - and it works, while the main server doesn't.
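
    A few standard checks worth capturing from the host the next time the outage hits, to separate routing, ARP, and DNS effects (8.8.8.8 here just stands in for any known external IP):

        rem raw reachability to an external IP, bypassing DNS entirely
        ping -n 4 8.8.8.8
        rem name resolution against an explicit external resolver
        nslookup example.com 8.8.8.8
        rem has the default route or interface binding changed?
        route print
        rem is the gateway's ARP entry still present and correct?
        arp -a

    Comparing the same commands run inside the Hyper-V guest at the same moment (where pinging reportedly still works) should show whether it is the parent partition's own network binding, rather than the virtual switch or the upstream network, that stops passing traffic.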


  • Fedora 9 not recognizing hard drive

    - by Andrew Jones
    I am installing Fedora 9 on a PC (specifications at the bottom) and have had a lot of trouble with it recognising the hard drive. To get the Fedora installer to recognize it in the first place I had to pass "ata_generic.all_generic_ide=1 pci=nomsi" to the kernel, after which it installed OK. However, now when I boot the installed OS, I get a "could not find filesystem '/dev/root'" error. I tried passing the same arguments to the kernel at boot as I did when installing, but to no avail. I have tried using the default LVM layout and defining manual ones, but it made no difference. There is no option in the BIOS to enable AHCI or anything like that; in fact the BIOS is very limited in most respects. I can get into the system by using the installation CD in rescue mode (with those extra kernel parameters), but I'm not sure what to do once in there... Unfortunately, using a more recent version of Fedora or even another Linux distribution altogether isn't an option because of outside constraints - which is annoying, since I know for a fact Ubuntu works fine on this setup. I have not been using Linux that long, so treat me like an idiot - I am one. Any help would be greatly appreciated, thanks!

    System spec:

        Intel Atom Z530 CPU @ 1.6 GHz
        Intel US15W chipset
        1 GB DDR2
        160 GB SATA hard disk (Samsung HM16HI)
        1000 Mbit/s Ethernet port
        Phoenix BIOS
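
    One avenue to try from that rescue environment - a sketch only, on the assumption that the "could not find filesystem '/dev/root'" error means the installed initrd simply lacks the generic ATA driver - is to rebuild the initrd so the module is included (the kernel version string is a placeholder; the installed one is visible under /lib/modules):

        # boot the install CD in rescue mode with the same extra kernel parameters, then:
        chroot /mnt/sysimage
        # rebuild the initrd for the installed kernel, forcing ata_generic into it
        mkinitrd --with=ata_generic -f /boot/initrd-2.6.25-14.fc9.i686.img 2.6.25-14.fc9.i686

    Whether ata_generic really is the module this controller needs is itself a guess; lsmod in the working rescue environment shows which storage modules actually bound to the disk.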


  • Ubuntu rm not deleting files

    - by ILMV
    My colleague and I have been struggling with deleting a directory and its contents. We are working on a new version of our website's source code on Ubuntu 8.04 (dir: /var/www/websites). What we want to do is delete the websites directory and recreate it from a .tar backup we created a couple of weeks ago. The purpose of this is so we can run our deployment procedure in a local environment before we do so on our live/public environment. We use this command:

        rm -r websites

    This deletes the directory and the files within it. The problem occurs when we un-tar our backup file and view the website: we are getting files that don't exist in the .tar backup - in fact these files were only created a few days ago and should have been deleted. We delete the directory once more in the manner stated above, and then create a new websites directory using the mkdir command. Strangely, at this stage the 'deleted files' do not come back, but if we unpack our .tar file the 'deleted files' appear again. Is there a way to ensure these files are deleted, or at least the pointers that associate them with said directory? Constraints:

        Our .tar backup does not include these files.
        We do not want to use the shred command.
        We do not want to use 3rd-party applications.
        The solution should work via the terminal (SSH).

    Many thanks!

    EDIT: Er... we fixed it. Turns out the files that were reappearing came in via a link we have to another directory (outside /var/www/websites); we were restoring the link but not deleting the files on the other end. D'oh! Many thanks for your help guys... Friday afternoon syndrome :-)


  • Strange corruption saving from Textpad 5 within Windows 7-64 VirtualBox VM to shared folder with Mac host

    - by joelarson
    I have a fairly new Windows 7 64-bit install running in VirtualBox on a MacBook Pro. I'm using TextPad 5 within that environment to edit source files that live on a shared folder on the Mac host. When I save some of these source files, the saved file ends up with some amount of the end of the file repeated one or more times. For example, a file that ends with:

        ...
        return ttp;
        };

    would, once saved, open up with:

        ...
        return ttp;
        };
        };

    It is definitely a problem with how the file gets written, as opposed to how it's read, because I see this no matter what app I open the file with (Notepad and Word in Windows 7, TextWrangler back on the Mac). I've tried saving as ANSI and UTF-8, with and without 'Write Unicode and UTF-8 BOM' checked in the TextPad preferences. It doesn't happen with all files, though I can't see any pattern as to which files do or don't have the problem. It doesn't happen with files written to the Windows 7 C:\ drive. And so far it doesn't happen when other applications save files, only TextPad. Any ideas?

    My versions:

        TextPad 5.4.2
        Windows 7 Professional 64-bit, fully up to date
        VirtualBox 4.0.8 r71778
        OS X 10.6.7


  • How to prevent nginx from locking files on a mounted Samba partition in CentOS 6

    - by Bruce Kirkpatrick
    I'm using nginx 1.3.8 inside a CentOS 6.3 VirtualBox 4.2.4 virtual machine. The system is running the latest software available via yum update. The host OS is Windows 7. The site files nginx serves are on a mounted Samba partition, which is a folder on the host Windows system; i.e., inside Linux, nginx paths refer to /home/vhosts, and this is mounted from D:\vhosts\ on Windows. The Samba partition is mounted as root with 777 privileges. /etc/fstab looks like this, but with the real IP, username, and password:

        //hostip/vhosts  /home/vhosts  cifs  username=username,password=SECRETPASSWORD,uid=root,gid=root,file_mode=0777,dir_mode=0777,rw,_netdev  0 0

    I.e., Linux/nginx reads from the Windows share, and not the opposite. In /etc/samba/smb.conf I have tried to disable all Samba locking features, but it seems to have no effect even after rebooting the virtual machine:

        locking = no
        share modes = no
        oplocks = no
        level2 oplocks = no
        kernel oplocks = no

    I'm receiving "Access is denied" errors in Windows or Linux when attempting to overwrite a JavaScript file in Windows that has been accessed at least once by nginx. If I run "service nginx reload", the lock is removed and I'm able to save the file. That's why I think it is nginx causing the lock. The same problem occurs with directories; however, that may be a different issue not related to the use of Samba. I'm using Samba so that I can manage the source code outside of the virtual machine. Also note that after I run "service nginx reload", the file I'm editing is actually automatically deleted from the Windows host.

    SOLVED: I just reviewed my nginx.conf file. It appears the "open_file_cache" feature is what was causing the lock and the deleted files. When I set this option to "open_file_cache off;", my problem is resolved. I will repost as the answer when it allows me to do so.
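
    For anyone hitting the same behaviour, the directive in question lives in the http (or server) block of nginx.conf; a sketch of the two obvious variants (not the poster's exact config) is:

        http {
            # either drop the descriptor cache entirely, as in the fix above...
            open_file_cache off;

            # ...or keep it but let cached handles expire quickly, which releases
            # the file soon after the last request:
            # open_file_cache max=1000 inactive=5s;
            # open_file_cache_valid 10s;
        }

    The trade-off is that the cache exists to avoid an open()/stat() per request, which matters much less in a development VM serving files off a CIFS mount than on a production box.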


  • Locating memory leak in Apache httpd process, PHP/Doctrine-based application

    - by Sam
    I have a PHP application using these components:

        Apache 2.2.3-31 on CentOS 5.4
        PHP 5.2.10
        Xdebug 2.0.5 with remote debugging enabled
        APC 3.0.19
        Doctrine ORM for PHP 1.2.1, using query caching and result caching via APC
        MySQL 5.0.77, using query caching

    I've noticed that when I start up Apache, I eventually end up with 10 child processes. As time goes on, each process grows in memory until it approaches 10% of available memory, which begins to slow the server to a crawl since together they grow to take up 100% of memory. Here is a snapshot of my top output:

        PID  USER   PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
        1471 apache 16  0  626m 201m  18m S  0.0 10.2 1:11.02  httpd
        1470 apache 16  0  622m 198m  18m S  0.0 10.1 1:14.49  httpd
        1469 apache 16  0  619m 197m  18m S  0.0 10.0 1:11.98  httpd
        1462 apache 18  0  622m 197m  18m S  0.0 10.0 1:11.27  httpd
        1460 apache 15  0  622m 195m  18m S  0.0 10.0 1:12.73  httpd
        1459 apache 16  0  618m 191m  18m S  0.0  9.7 1:13.00  httpd
        1461 apache 18  0  616m 190m  18m S  0.0  9.7 1:14.09  httpd
        1468 apache 18  0  613m 190m  18m S  0.0  9.7 1:12.67  httpd
        7919 apache 18  0  116m  75m  15m S  0.0  3.8 0:19.86  httpd
        9486 apache 16  0 97.7m  56m  14m S  0.0  2.9 0:13.51  httpd

    I have no long-running scripts (they all terminate eventually, the longest taking maybe 2 minutes), and I am working under the assumption that once each script terminates, the memory it uses gets deallocated (maybe someone can correct me on that). My hunch is that it could be APC, since it stores data between requests, but at the same time it seems weird that it would store data inside the httpd process. How can I track down which part of my app is causing the memory leak? What tools can I use to see how memory usage is growing inside the httpd process and what is contributing to it?
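
    Two standard ways to watch where the memory inside a single child actually goes (the PID is any of the large children from the top snapshot; both tools ship with CentOS 5):

        # per-mapping breakdown for one child; re-run it over time and watch which segments grow
        pmap -x 1471 | tail -n 20

        # sum the resident and private-dirty figures from smaps; private growth is the part
        # genuinely owned by that process rather than shared with its siblings
        grep -E '^(Rss|Private_Dirty):' /proc/1471/smaps | awk '{sum[$1]+=$2} END {for (k in sum) print k, sum[k], "kB"}'

    Because APC's cache lives in shared memory mapped into every child, it tends to show up in RES for all of them at once; genuinely leaked PHP memory instead shows up as private pages that keep climbing in individual children.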


  • OS X can't copy files from Windows Home Server...over wifi.

    - by John Clayton
    I'm a brand-spanking-new user of OS X, coming from a lifetime of Windows use. I've been setting up my new MacBook Pro and have run into a very unusual problem: over wifi, I am unable to copy files to or from my Windows Home Server. The problem seems to exist only over wifi, and only to WHS. Here are the details of my setup:

        2010 MacBook Pro (Core i7), OS X 10.6.3
        Windows Home Server PP3 (virtualized in XenServer 5.5)
        Windows 7 Ultimate x64 desktop
        Windows 7 Ultimate x64 in Boot Camp
        D-Link DIR-655 wireless N router

    Here is what I've done to narrow down the problem:

        Files copy fine from WHS to OS X when using gigabit ethernet.
        Files copy fine from desktop to OS X when using gigabit ethernet.
        Files fail to copy from WHS to OS X when using wifi (error -51).
        Files copy fine from desktop to OS X when using wifi.
        Files copy fine from WHS to Boot Camp when using wifi.
        Files copy fine from desktop to Boot Camp when using wifi.

    From what I can tell, it seems to be some sort of issue between OS X and WHS, but I can't for the life of me see what would be different between shares on WHS and on my desktop. They are both connected using smb://ADDRESS (I've tried both by IP and by name). I can browse the shares on the WHS, but copying to OS X fails. I originally found the issue while installing VS2010 from an ISO on WHS, mounted to a Windows 7 VM using VMware Fusion. During the installation the VM was unusable - even the clock got behind the host by about 8 minutes. Once I plugged in the ethernet and disabled the wifi, things picked up and finished quickly. The Fusion 3.1 RC is the only thing I can think of that I installed that may have messed with the wifi driver. I've also tried resetting the wifi router, and have changed it from G & N to N-only. Under Boot Camp I get similar speeds to my wife's N laptop. Any ideas? Thanks!


  • Ubuntu 12.04 GlusterFS volume fails to mount at boot time

    - by user183394
    I have just set up 7 KVM guests, all running Ubuntu 12.04 LTS 64-bit minimal server, to test out GlusterFS 3.2.5 from the official Ubuntu repo. Two of them form a mirrored pair (i.e. replica 2), and five of them are clients. I am still new to this file system and would like to gain some hands-on experience. The setup was mostly uneventful, until I put the following into each GlusterFS client's /etc/fstab (192.168.122.120 is the IP address of the first "GlusterFS server"):

        192.168.122.120:/testvol  /var/local/testvol  glusterfs  defaults,_netdev  0 0

    If I issue either a manual mountall or a mount.glusterfs 192.168.122.120:/testvol /var/local/testvol on the CLI, running mount shows that the volume is successfully imported. But once a client is rebooted, after it comes back up the volume is not mounted! I searched the Internet and found this article, but since I am not running both client and server on the same node, IMHO it's not strictly applicable. So, as a kludgy workaround, I put the following into each client node's /etc/rc.local:

        sleep 3 && mount.glusterfs 192.168.122.120:/testvol /var/local/testvol

    It seems to get the volume mounted on each node, as far as I can tell, but it is quite ugly, and I would appreciate a hint as to how to resolve this glusterfs-not-mounting-at-boot issue correctly. Note that I used the IP address of the first "GlusterFS server" although the /etc/hosts of all nodes have been populated with their hostnames; I figured that using the IP address is more robust. --Zack


  • OS X: Mimic Ubuntu IP masquerading via iptables with ipfw

    - by Dogbert
    Good day. I am attempting to replicate, on my MacBook (10.6, Snow Leopard), a setup I have between a router and an Ubuntu PC. First, the router has a USB port. When I plug it into my Ubuntu PC, it creates an RNDIS connection, allowing me to connect to the router over the USB cable via an IP connection. When I plug it in via USB, a new adapter appears in ifconfig and is assigned the IP address 172.16.84.1. I can then SSH into the device via ssh [email protected]. When I log in to the device, I flush the routes and create the default route:

        admin@localhost> route -f
        admin@localhost> route add default 172.16.84.2

    Now, on my Ubuntu machine, I use iptables to enable IP masquerading:

        root@Valhalla> sudo iptables -t nat -A POSTROUTING -s 172.16.84.2 -j MASQUERADE

    Once this is all done, the router has internet access over the USB connection to my PC. I am trying to replicate this exact setup on my MacBook, but iptables does not exist for OS X - not even a MacPorts version. I have scoured other questions on StackOverflow that cover the usage of the ipfw command, which apparently works as a drop-in replacement for iptables, but the syntax is significantly different and I'm pretty much lost. Does anyone with ipfw experience have suggestions on how I could accomplish this and create a NAT connection via IP masquerading, like I could with my Ubuntu PC? Thank you for your assistance.
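
    Not a tested recipe, but the usual ipfw-era equivalent of MASQUERADE on 10.6 is natd plus a divert rule; a minimal sketch, assuming the internet-facing interface is en1 and the USB link uses the 172.16.84.x addressing described above:

        # allow the Mac to forward packets between interfaces
        sudo sysctl -w net.inet.ip.forwarding=1
        # run the NAT daemon on the outbound interface
        sudo natd -interface en1 -use_sockets -same_ports -dynamic
        # push traffic crossing en1 through natd ("natd" is port 8668 in /etc/services)
        sudo ipfw add divert natd all from any to any via en1

    The interface name, and whether the built-in Internet Sharing feature would be a simpler fit, are both left open; this just mirrors the iptables one-liner as literally as ipfw allows.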


  • Dell Inspiron 1564 overheating but fan not switching on, how to diagnose?

    - by Smugrik
    I've got a Dell Inspiron 1564 laptop that is about one and a half years old. About a week ago it started to overheat, causing it to switch off unexpectedly... The CPU fan is working erratically: it can start to spin for a while, doing its job and cooling down the CPU, but then it stops; the temperature goes up and the fan doesn't react, and once the temperature reaches a critical point (over 85 °C, checked with SpeedFan), the laptop switches off... I have already cleaned the vents and fan of dust, to no avail - it was actually quite clean anyway. Drivers and BIOS are up to date, and no crapware was ever installed on this machine. I don't know how to diagnose the problem. Could it be that the temperature sensors send wrong information, so the fan doesn't react? But then I believe the computer wouldn't detect the overheating and shut down... Is there a way I can pinpoint the problem? Maybe some low-level diagnostic tool to check the functionality of the sensors and fans? The warranty is already over, so any suggestion would be welcome. Thanks!!


  • Installing Firefox with extensions/add-ons manually (not really an auto-install)?

    - by BrownChiLD
    I've been reading around about creating Firefox installers, bundling them with add-ons, using scripts and CLI switches, and a whole bunch of other things, but it seems that going that route is just too complicated and time-consuming. Since I don't mind a bit of manual copying of files, I was planning to do the following on my test machine:

        1) Install Firefox and configure it the way I want it.
        2) Install the add-ons and set their configurations.
        3) Set the advanced configuration for Firefox (about:config).

    Then, once I'm all set, I simply copy the contents of the Firefox profile folder (for this particular test it's ....\AppData\Local\Mozilla\Firefox\Profiles\6m0mef0s.default). For deployment, all I have to do is:

        1) Install the same version (offline installer) of Firefox I used.
        2) Overwrite the contents of the new profile folder (randomly named by the Firefox installer, as usual).

    This should set all my configs and add-ons, right? Or what other folders do I have to back up and copy manually into the new profile folder? I don't think I need to tinker with any registry entries, right? Anyway, if this works, though it's a bit manual, it's a whole lot simpler and more straightforward than fiddling with installers and packages etc.

    PS: I do this a lot with other simple (and some complex) software that I use, and it seems to work fine for years; I'm just not sure about Firefox and how it's structured.
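
    A minimal sketch of the deployment copy, assuming the fresh install has already been launched once so that it has created its own (differently named) profile folder; the path style follows the question's example, and the folder names are placeholders:

        rem overwrite the newly created profile's contents with the saved profile
        robocopy "D:\saved\6m0mef0s.default" "%LOCALAPPDATA%\Mozilla\Firefox\Profiles\abcd1234.default" /E

    An alternative worth knowing about: instead of overwriting, copy the old folder alongside the new one and edit profiles.ini (it sits one level above the Profiles folder) so its Path= line points at the copied folder; that avoids mixing old and freshly generated files.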


  • How do I give a user permission to view scheduled task history on Server 2008?

    - by pplrppl
    I've set up a scheduled task on Server 2008 and want to run it as a user other than the local administrator, so I chose a domain account created specifically for this task. Once I'd closed the scheduled task and entered a valid password, I wanted to run it and look at the History tab for this task. On the History tab I see:

        The user account does not have permission to view task history on this computer.

    What permission must I grant to allow this user to view the history, and/or how can I view the history as a local admin/domain admin instead of the user the job runs under?

    Steps to hopefully reproduce: I'm starting from Server Manager - Configuration - Task Scheduler - Task Scheduler Library. In the top middle pane I have tasks that have been running for several months as the local administrator. In the process of troubleshooting another issue I changed the task to run as Domain\ABCuser. Later in the process of troubleshooting I tried unchecking "Run with highest privileges". I have since changed the job back to SERVERNAME\Administrator, but the History tab still showed the permissions message. I may have had multiple Server Manager windows open. After closing Server Manager and making sure no other management consoles were open, I was able to reopen Server Manager and see the History tab without error.

    At this point the task works properly, but should I ever need to run a task under a task-specific account, I'd like to know how to make the history viewable. It may be something as simple as closing all Server Manager windows to allow cached permissions to be refreshed the next time you open the Manager, but at this point I don't know exactly what the solution is.

