Search Results

Search found 2397 results on 96 pages for 'copying'.


  • Why do I need to set up Autologon values in the registry twice before they take effect, and can I fix this?

    - by jJack
    Background: As part of an automated testing suite I am building, I need to set up Autologon on my virtual machines 'on demand'. By on demand, I mean that I don't want to pre-configure my VM or any snapshot to have Autologon set up already, for security reasons and also a huge business case. My solution so far: I'm copying a script to the guest machine and then using Sysinternals PsExec to execute it. The script is:
        reg add "hklm\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /f /v DefaultUserName /t REG_SZ /d myusername
        reg add "hklm\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /f /v DefaultPassword /t REG_SZ /d myfakepassword
        reg add "hklm\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /f /v DefaultDomainName /t REG_SZ /d mydomain
        reg add "hklm\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /f /v ForceAutoLogon /t REG_SZ /d 1
        reg add "hklm\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /f /v AutoAdminLogon /t REG_SZ /d 1
        reg add "hklm\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\AutoLogonChecked" /f /ve /d 1
    Note: I don't believe AutoLogonChecked is required for machines newer than Windows 2000, but I'm setting it just in case for now. Maybe ForceAutoLogon isn't required either; not sure yet. The problem: I can see PsExec execute this properly and all the values end up in the registry, yet when I restart the machine the user isn't automatically logged on. When I run the script a second time and then restart the machine, the user is finally logged on. A diff between the registry states shows that after the first run, both the "1" for AutoAdminLogon and the DefaultPassword value are missing. After the second run, these values are intact as I intended. So, what is going on here? Is this expected? This post claims in the end that it really all just works (the problem there was that a logoff script was clearing the values), but that doesn't seem to apply in my case. Note that this seems unique to Windows 7; it does not occur on Windows XP. Also note that you don't need PsExec to reproduce the issue - just modify the registry yourself. EDIT/update:
      - Log in interactively and run the script locally (so, not executing it remotely): logging off automatically logs me back in (so it works).
      - Remotely execute the script in the guest while I'm interactively logged in: logging off automatically logs me back in (so it works).
      - Remotely execute the script in the guest from a non-interactive session: if I then log in interactively and log back off, it logs me back in (so it works from that point on).
    EDIT/update 2: This only occurs on Win7 x86, Win7 x64 and Win8 x64. It does not occur on Windows XP.
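    For scripting this from Python instead of a batch of reg add calls, the standard-library winreg module can both write the Winlogon values and read them straight back, which makes it easy to spot a value (such as DefaultPassword) that did not persist. This is only a minimal sketch, not the exact script above; the user name, password and domain are placeholders.

        import winreg

        WINLOGON = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon"

        # Placeholder credentials -- replace with the real test account.
        VALUES = {
            "DefaultUserName": "myusername",
            "DefaultPassword": "myfakepassword",
            "DefaultDomainName": "mydomain",
            "ForceAutoLogon": "1",
            "AutoAdminLogon": "1",
        }

        def set_autologon():
            # Use the 64-bit registry view so the values land where Winlogon reads them.
            access = winreg.KEY_SET_VALUE | winreg.KEY_WOW64_64KEY
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, WINLOGON, 0, access) as key:
                for name, data in VALUES.items():
                    winreg.SetValueEx(key, name, 0, winreg.REG_SZ, data)

        def verify_autologon():
            access = winreg.KEY_READ | winreg.KEY_WOW64_64KEY
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, WINLOGON, 0, access) as key:
                for name, expected in VALUES.items():
                    try:
                        data, _ = winreg.QueryValueEx(key, name)
                    except FileNotFoundError:
                        print("MISSING:", name)
                        continue
                    print(name, "=", repr(data), "(expected", repr(expected) + ")")

        if __name__ == "__main__":
            set_autologon()
            verify_autologon()

    Running only the verify step again just before the reboot (and once more after it) would show whether DefaultPassword and AutoAdminLogon are being cleared at logoff/logon rather than never being written in the first place.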

    Read the article

  • So I want to separate my Program Files from the hard disk with the other system files. What is the b

    - by grg-n-sox
    So I am running Windows 7 as my only OS. I have two hard drives in my computer. The first is a 74GB Western Digital 10K RPM Raptor. The second is a 1TB Seagate Barracuda (I couldn't remember if it was a 7200.12 or some other decimal after the 7200). The OS is installed on the Raptor and I am just using the Barracuda for storage. With this setup, in case you couldn't guess already, the Raptor fills up quickly and I am constantly having to maintain file locations. And although it is nice to have that quicker boot time and program loading, the time spent maintaining the drive makes me waste more time overall. So I am looking for a way to keep it clear while still keeping up system loading speeds. A performance hit on games and such is easily acceptable, and as long as I can guarantee 5GB of space on the Raptor, I can always just temporarily move a disc image there. So, figuring that I have games installed like Borderlands and Mass Effect, as well as large files such as Linux distro DVD disc images in My Documents, I probably should be moving my personal files and Program Files directories to the Barracuda. I currently have folders on the Barracuda for this, but this means routinely copying files over, and I can't really do anything with the Program Files folder that already exists. The best I can do is remember to point the install directory of any program installation at the alternative location, which I can't seem to ever get to work right with Steam. With that in mind, is there a way that is not too drastic to let me just change some folders and system settings once and have everything work fine afterwards for my setup? I have considered just reinstalling Windows 7 to the Barracuda, but that would defeat the purpose of the Raptor except for running disc images off of it. I have also heard a bit about being able to use symlinks to fix this, but I have also heard that symlinks in Windows are not quite the same and not as well supported as elsewhere. An example a friend mentioned was something about how, if you have a symlink in Windows on a small hard drive pointing to a large hard drive and the contents the symlink points to are larger than the small hard drive's capacity, then Windows will think the smaller hard drive is full. So is there a fix/workaround that will let me use symlinks across hard drives without those issues, or is there a better solution I am not being told about, not mentioning, or not thinking of?
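    One low-drama variant of the symlink idea is an NTFS junction (a directory-level link created with mklink /J), which installers and games generally follow transparently. The sketch below only illustrates that approach; the game folder name and the target path on the Barracuda are assumptions, it should be run from an elevated prompt (moving things out of Program Files needs admin rights), and it is worth testing on something non-critical first.

        import shutil
        import subprocess
        from pathlib import Path

        # Assumed example paths -- adjust to the real game folder and storage drive.
        SRC = Path(r"C:\Program Files (x86)\Borderlands")
        DST = Path(r"D:\Games\Borderlands")

        def move_and_junction(src: Path, dst: Path) -> None:
            """Move a folder to the big drive, then leave a junction in its place."""
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(src), str(dst))  # copies across drives, then removes the source
            # mklink is a cmd.exe builtin; /J creates a directory junction at src -> dst.
            subprocess.run(["cmd", "/c", "mklink", "/J", str(src), str(dst)], check=True)

        if __name__ == "__main__":
            move_and_junction(SRC, DST)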

    Read the article

  • Linux buffer cache effect on IO writes?

    - by Patrick LeBoutillier
    I'm copying large files (3 x 30G) between 2 filesystems on a Linux server (kernel 2.6.37, 16 cores, 32G RAM) and I'm getting poor performance. I suspect that the usage of the buffer cache is killing the I/O performance. To try and narrow down the problem I used fio directly on the SAS disk to monitor the performance. Here is the output of 2 fio runs (the first with direct=1, the second one direct=0):
    Config:
        [test]
        rw=write
        blocksize=32k
        size=20G
        filename=/dev/sda
        # direct=1
    Run 1:
        test: (g=0): rw=write, bs=32K-32K/32K-32K, ioengine=sync, iodepth=1
        Starting 1 process
        Jobs: 1 (f=1): [W] [100.0% done] [0K/205M /s] [0/6K iops] [eta 00m:00s]
        test: (groupid=0, jobs=1): err= 0: pid=4667
          write: io=20,480MB, bw=199MB/s, iops=6,381, runt=102698msec
            clat (usec): min=104, max=13,388, avg=152.06, stdev=72.43
            bw (KB/s) : min=192448, max=213824, per=100.01%, avg=204232.82, stdev=4084.67
          cpu : usr=3.37%, sys=16.55%, ctx=655410, majf=0, minf=29
          IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
             submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
             complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
             issued r/w: total=0/655360, short=0/0
             lat (usec): 250=99.50%, 500=0.45%, 750=0.01%, 1000=0.01%
             lat (msec): 2=0.01%, 4=0.02%, 10=0.01%, 20=0.01%
        Run status group 0 (all jobs):
          WRITE: io=20,480MB, aggrb=199MB/s, minb=204MB/s, maxb=204MB/s, mint=102698msec, maxt=102698msec
        Disk stats (read/write):
          sda: ios=0/655238, merge=0/0, ticks=0/79552, in_queue=78640, util=76.55%
    Run 2:
        test: (g=0): rw=write, bs=32K-32K/32K-32K, ioengine=sync, iodepth=1
        Starting 1 process
        Jobs: 1 (f=1): [W] [100.0% done] [0K/0K /s] [0/0 iops] [eta 00m:00s]
        test: (groupid=0, jobs=1): err= 0: pid=4733
          write: io=20,480MB, bw=91,265KB/s, iops=2,852, runt=229786msec
            clat (usec): min=16, max=127K, avg=349.53, stdev=4694.98
            bw (KB/s) : min=56013, max=1390016, per=101.47%, avg=92607.31, stdev=167453.17
          cpu : usr=0.41%, sys=6.93%, ctx=21128, majf=0, minf=33
          IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
             submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
             complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
             issued r/w: total=0/655360, short=0/0
             lat (usec): 20=5.53%, 50=93.89%, 100=0.02%, 250=0.01%, 500=0.01%
             lat (msec): 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.12%
             lat (msec): 100=0.38%, 250=0.04%
        Run status group 0 (all jobs):
          WRITE: io=20,480MB, aggrb=91,265KB/s, minb=93,455KB/s, maxb=93,455KB/s, mint=229786msec, maxt=229786msec
        Disk stats (read/write):
          sda: ios=8/79811, merge=7/7721388, ticks=9/32418456, in_queue=32471983, util=98.98%
    I'm not knowledgeable enough with fio to interpret the results, but I don't expect the overall performance using the buffer cache to be 50% less than with O_DIRECT. Can someone help me interpret the fio output? Are there any kernel tunings that could fix/minimize the problem? Thanks a lot,
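    On the kernel-tuning side, the knobs most often pointed at for bursty buffered-write behaviour are the dirty-page writeback thresholds under /proc/sys/vm. The sketch below only shows how to inspect and experiment with them from a script; the example values are illustrative, not recommendations, and changes made this way do not survive a reboot.

        #!/usr/bin/env python
        # Inspect and (as root) adjust the writeback tunables that govern how much
        # dirty page-cache data can pile up before the kernel starts flushing it.
        import os

        TUNABLES = {
            "dirty_background_ratio": "5",    # start background writeback earlier (example value)
            "dirty_ratio": "10",              # throttle writers sooner (example value)
            "dirty_expire_centisecs": "1000", # consider dirty data flushable after ~10s (example value)
        }

        def read_tunable(name):
            with open(os.path.join("/proc/sys/vm", name)) as f:
                return f.read().strip()

        def write_tunable(name, value):
            with open(os.path.join("/proc/sys/vm", name), "w") as f:
                f.write(value)

        if __name__ == "__main__":
            for name, new_value in TUNABLES.items():
                print(name, "currently", read_tunable(name))
                if os.geteuid() == 0:
                    write_tunable(name, new_value)
                    print(name, "set to", read_tunable(name))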

    Read the article

  • 2-server setup for redundancy and backup

    - by minal
    I presently have 1 dedicated virtual server running my website/blog/mail, etc. This is on Hyper-V with 512MB RAM, running Windows Web Server 2008. Within the VM I have these running:
      - SmarterMail - for email
      - MS DNS - I have my own nameservers on this server
      - SQL Express
      - IIS7
      - 2 IP addresses
    I have now leased 2 physical servers: P4 2.6GHz, 1GB RAM, 80GB HDD. With these new servers I get 2 IPs per server as well. These are running Windows Server 2008 Standard. With the VM, the HDD was obviously on a RAID setup, so I was not worried about hardware issues as that fell on the provider to manage. However, with the new servers the HDDs are not RAIDed, hence my concern is that if one fails I need a backup position. What would be the most ideal setup to go for? I am thinking:
      - Server 1 (Web/Primary DNS): DNS - NS1; SQL Express - OFF, turn on when required, i.e. when Server 2 is down; SmarterMail - OFF, turn on when required, i.e. when Server 2 is down; IIS 7
      - Server 2 (SQL/Backup): DNS - NS2; SQL Web Edition; SmarterMail; IIS 7
    How can I set it up so that if one goes down I can have everything on the other instantly, or by manually switching over? I am confused because other DNS servers will cache the web server's IP address for requests, and if that server goes down, the backup server will have a different IP. How do I make this work? I will be doing routine backups, in which case I will keep copies of the backups on both servers. If I am copying the same stuff to both servers like a mirror, then I lose out on using the true performance of both; it's like one server is always on standby. Ideally I want SQL and web on 2 different machines for best performance. If Server 1 goes down, I should be able to switch to Server 2 fairly easily. I don't have a problem with manual intervention to start the SQL/mail services, etc. In terms of scalability, the VM has coped pretty well to date. Moving forward, the SQL and IIS workload is going to double pretty quickly. Some ideas would be great.

    Read the article

  • Windows Media Player Object with Video_TS Files - No Sound Anymore

    - by user1624184
    I copied a bunch of my DVDs (yes, owned) to a drive and created a basic webpage to be able to search and play them. I am using the code at the bottom to create a Windows Media Player object and then play the video. The video files are pulled off the DVD in their original format, so each movie has files like this:
        VTS_01_0.BUP
        VTS_01_0.IFO
        VTS_01_1.VOB
        VTS_01_2.VOB
        VTS_01_3.VOB
        VTS_01_4.VOB
        VTS_01_5.VOB
        VTS_01_6.VOB
    This all worked great until just recently. On my dev machine I can play the videos, but now, all of a sudden, there is no sound. I have tried the following, with no luck:
      - The video is not muted and the sound is at 50%.
      - Under the sound icon, under Mixer, I checked IE and it is fine.
      - Sound works fine using another program, say, WinAmp.
      - Opening Windows Media Player directly from the Start menu and then playing the same movie works fine and I get sound.
      - I ensured that the option in IE to play sound on websites is checked.
      - I can play sound on other people's websites where they have embedded WMP files.
      - I tried resetting all of the IE settings on the Advanced tab in IE.
      - I tried my website, with my code below, on another computer, AND IT WORKS FINE!
      - I tried copying the media files listed above from the computer that works fine to my dev computer, and it still doesn't work.
      - If I try using a ".wmv" file instead, the sound does work.
    By the way, I am using Win7 with IE8. As you can see, this is driving me crazy! Why would it stop working on my one computer when the same code and files work fine on another computer? Any help would be greatly appreciated.
        <OBJECT id="mediaPlayer" width="640px" height="480px" style="position:absolute; left:0px; top:0px;"
            CLASSID="CLSID:6BF52A52-394A-11d3-B153-00C04F79FAA6" type="application/x-oleobject">
          <PARAM NAME="URL" VALUE="E:\test\VIDEO_TS\VIDEO_TS.IFO" />
          <PARAM NAME="SendPlayStateChangeEvents" VALUE="True"/>
          <PARAM NAME="AutoStart" VALUE="True"/>
          <PARAM NAME="uiMode" VALUE="full"/>
          <PARAM NAME="volume" VALUE="50" />
          <PARAM NAME="PlayCount" VALUE="9999"/>
          <PARAM NAME="fullScreen" VALUE="true"/>
          <PARAM NAME="enableContextMenu" VALUE="true"/>
        </OBJECT>

    Read the article

  • Where on disk is space allocated for new files inside an LVM LV with an ext4 file system?

    - by Jost
    I run a multi-disk server with LVM2. Several large disks serve as LVM2 physical volumes for one volume group, containing one logical volume formatted with ext4. Nothing fancy, just your standard linear setup. Recently an additional, very small disk was added as a physical volume to that volume group, and I expanded both the logical volume and the ext4 file system therein onto that disk. This LV is used to store incremental backups using rsync and is only about 30% full; files have rarely been deleted from it, only incremental writes. Now this new HDD I added to the pre-existing volume group has unexpectedly died on me, and the volume group won't come up because it is missing one physical volume. As fate would have it, this WAS the "in the event of catastrophic failure on the primary server" backup, the event happened, the boss is not happy, so this kind of has to work... According to this (Part 3): http://www.novell.com/coolsolutions/appnote/19386.html it is possible to trick LVM into starting anyway by creating a new PV with identical metadata to the failed disk, which will make the volume accessible, but of course leave giant holes in the file system. I haven't tried it yet, because it involves repairing (writing to) the file system, which eliminates the possibility of trying other things if it fails. Now my questions are:
      - How does this setup actually allocate disk space for new data? Is it allocated linearly from beginning to end of the PVs, in the order they were added to the VG? Is it striped somehow in order to increase performance/balance load?
      - Since this defective disk was added only later to an existing LVM2 VG and LV containing a half-empty ext4, what are the chances that there was never any data written to the defective disk? In other words, what are the chances of recovering all my data, even without the defective disk, by just starting the volume group as-is? Am I about to spend $1500 on having 250GB of empty space recovered when I send the defective disk in for repair?
      - Is there a way to check without mounting the file system and opening the files, hoping they contain something other than zeros? (Comparing addresses of used data blocks inside ext4 to the address ranges that were on the missing PV, something like that, preferably easy to automate.)
    I know bitwise-copying the entire LV into an image file before trying to repair the ext4 would probably be a good idea, but since this LV is very large and I just suffered major file system failure on several systems, it is probably a luxury I don't have... Any suggestions?
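    The "compare used ext4 blocks against the missing PV's range" idea can be roughly automated along these lines, assuming the VG can be activated degraded (for example with vgchange -ay --partial) so the LV device node exists, and assuming you work out by hand, from lvdisplay --maps and the ext4 block size, which filesystem block range corresponds to the dead PV. Both the device path and the block range below are placeholders, the parsing is written against typical dumpe2fs output and may need adjusting between e2fsprogs versions, and the check only reads metadata; it never writes to the volume. Groups in the suspect range showing only a handful of used blocks are likely holding filesystem metadata rather than your data.

        #!/usr/bin/env python
        # Read-only check: how many ext4 blocks are in use inside the block range
        # that was backed by the missing PV? Parses `dumpe2fs` group summaries.
        import re
        import subprocess

        LV_DEVICE = "/dev/vgdata/lvbackup"          # placeholder device path
        SUSPECT_RANGE = (244000000, 250000000)      # placeholder FS block range of the dead PV

        group_re = re.compile(r"^Group (\d+): \(Blocks (\d+)-(\d+)\)")
        free_re = re.compile(r"^\s*(\d+) free blocks,")

        def report(device, lo, hi):
            out = subprocess.run(["dumpe2fs", device],
                                 capture_output=True, text=True, check=True).stdout
            span = None
            for line in out.splitlines():
                m = group_re.match(line)
                if m:
                    span = (int(m.group(1)), int(m.group(2)), int(m.group(3)))
                    continue
                m = free_re.match(line)
                if m and span:
                    grp, start, end = span
                    if end < lo or start > hi:   # group entirely outside the suspect range
                        continue
                    total = end - start + 1
                    used = total - int(m.group(1))
                    print("group %d: blocks %d-%d, %d of %d in use" % (grp, start, end, used, total))

        if __name__ == "__main__":
            report(LV_DEVICE, *SUSPECT_RANGE)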

    Read the article

  • git, egit, submodules, and symlinks -- how should shared sub-projects be handled in eclipse?

    - by Autophil
    Question: what's the best way to handle sub-projects in eclipse when using git for SCM? Here's the situation. I have a few git projects with a directory structure layed out more or less like this: simpleproj app www admin demo lib model orm view model user blah ... storeproj app www about mobile fbapp lib model orm view model user message cart product merchant Each directory in "lib" contains a separate project, either created in-house or forked, all of which use git for source control. So I figured I should make them submodules of my projects, right? Well, we've been moving toward eclipse + egit, because some of our windows guys not used to a CLI need something they can use without being scared of screwing things up. Anyway, the problem is, egit doesn't support submodules. So, my solution has been a rather crude one involving symlinks... lets say my directory structure on my dev box is generally layed out like this: ~/projects/ bigproj .git app lib model (- ~/lib/model/src/) orm (- ~/lib/orm/src/) neatproj .git app lib view (- ~/lib/view/src/) oldproj .git app lib orm (- ~/lib/orm/src/) ~/lib/ model .git src README.md orm .git src COPYING view .git src ...the symlinks link to a subdirectory of the directory containing the git repo, so eclipse doesn't get confused, and everything sort of works. On my machine, I can update the libs from anywhere and all projects will be updated (needing to be committed again of course). Each project stores a separate copy of the contents of the symlinked directories within "lib" -- but only when staged from within eclipse. After committing from eclipse and moving back to the CLI, git sees that a bunch of files have been removed and a few symlinks have been created. Of course this is acceptable also, probably more so than keeping a separate history of the libs for each project... but eclipse and CLI git obviously need to be on the same page so tons of files aren't vanishing and reappearing. So this brings me to my question. I'd like to know how to either: get eclipse+egit to see the symlinks as symlinks if git will somehow handle them properly*, or get the CLI git to treat them as non-symlinks. Or, if there's a better way to do this, I'm all ears. Hope this all made sense! :D Note: tried to tag this as git-submodules, but was not allowed :( * should I make them relative or absolute? Either way it's a mess. Also will symlinks will work on windows? i know there's something similar but you need a 3rd party tool to manage them AFAIK, i doubt these would translate well.

    Read the article

  • Preinstalled Windows 8 and Linux UEFI dual boot on a laptop

    - by itchy355
    I am trying to set up Windows 8 and Arch Linux on a new Sony Vaio E14 with preinstalled Windows 8. So far I have:
      - installed W8 to my new SSD (swapped in for the original HDD) using the Recovery Media
      - shrunk the W8 partition, deleted the recovery partition, disabled swap
      - confirmed W8 boots just fine
    On to Arch:
      - disabled Secure Boot in the BIOS
      - confirmed W8 still boots just fine
      - booted Arch off the CD and installed everything to the 4th and 5th partitions
      - set up rEFInd for an EFISTUB kernel boot
    After that it got worse. I was unable to boot anything other than Windows 8 (although I was glad that it at least kept working just fine). I tried:
      - creating EFI\refind\ and putting the .efi there (as per the Arch manual)
      - overwriting EFI\boot\bootx64.efi
      - overwriting EFI\Microsoft\Boot\bootmgr.efi
      - overwriting EFI\Microsoft\Boot\bootmgfw.efi --- YAY, rEFInd showed up!
    So far, so good. I had kept the whole W8 Boot\ directory in EFI\windows8 and set up a boot menu entry for it, and it booted just fine. But upon restart everything was wrong: 'Operating system not found' instead of any bootloader (rEFInd or W8). I booted back into Arch using the live CD to find that the EFI partition had an erroneous FAT table. fsck.vfat fixed it, and I found that EFI\Microsoft\Boot was back to its original state (all rEFInd files deleted and replaced with the W8 bootloaders). I overwrote them again and got back to rEFInd showing up correctly and Arch being perfectly bootable. After that I tried only renaming EFI\Microsoft\Boot\bootmgfw.efi to bootmgfw.001.efi (then copying rEFInd's .efi to bootmgfw.efi and keeping EVERY OTHER file as it was), but with exactly the same result. I tried marking the GPT EFI partition as read-only, same result. Now I'm kind of out of luck. Arch boots fine, and so does W8, but it destroys the EFI partition in the process. Thanks for any ideas; Googling brought me this far and I can't find any better. PS: Windows 8 MAYBE destroys the partition upon shutdown. When I order a shutdown in W8, it takes unusually long (about half a minute instead of ~5 seconds). So in theory I could solve this by hard-resetting the laptop instead of doing a normal shutdown, but that's just not nice.
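    One alternative worth trying before overwriting any of Microsoft's files again is to leave EFI\Microsoft alone entirely and register rEFInd as its own UEFI boot entry in NVRAM with efibootmgr from the Arch side. The disk, partition number and loader path below are assumptions for a typical layout; adjust them to wherever the ESP and refind_x64.efi actually live on this machine.

        #!/usr/bin/env python
        # Register rEFInd as a separate UEFI boot entry instead of replacing
        # EFI\Microsoft\Boot\bootmgfw.efi. Run as root from the installed Arch system.
        import subprocess

        ESP_DISK = "/dev/sda"                    # assumed disk holding the EFI system partition
        ESP_PART = "1"                           # assumed partition number of the ESP
        LOADER = r"\EFI\refind\refind_x64.efi"   # assumed path of the rEFInd binary on the ESP

        def add_refind_entry():
            # -c create entry, -d disk, -p partition, -L label, -l loader path
            subprocess.run(["efibootmgr", "-c", "-d", ESP_DISK, "-p", ESP_PART,
                            "-L", "rEFInd", "-l", LOADER], check=True)
            # Print the resulting entries/boot order; reorder later with `efibootmgr -o` if needed.
            subprocess.run(["efibootmgr", "-v"], check=True)

        if __name__ == "__main__":
            add_refind_entry()

    If the firmware keeps resetting the boot order or the W8 shutdown keeps corrupting the ESP, that points at the firmware or Windows Fast Startup rather than at the rEFInd files themselves.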

    Read the article

  • Setting up Apache and PHP on Mac OS X Snow Leopard

    - by Martin Bean
    I've recently purchased an Apple iMac. Unfortunately, enabling Apache and PHP has thrown up some problems. I enabled Mac's built-in Web Sharing through System Preferences, at which point I got output and could add HTML files to my user directory. However, PHP files were being displayed rather than interpreted. I then discovered this is because PHP isn't enabled by default in Mac's Apache setup. After a quick Google search, I came across this page: http://developer.apple.com/mac/articles/internet/phpeasyway.html I proceeded to the section "Enabling PHP in Apache", copying and pasting the following code snippet into a new Terminal window and hitting Return:
        set admin_email to (do shell script "defaults read AddressBookMe ExistingEmailAddress")
        user_www=$HOME/Sites
        filename=php-test
        user_index=${user_www}/${filename}.php
        user_db=${user_www}/${filename}-db.sqlite3
        # NOTE: Having a writeable database in your home directory can be a security risk!
        conf=`apachectl -V | awk -F= '/SERVER_CONFIG/ {print \$2}'| sed 's/"//g'`
        conf_old=$conf.$$
        conf_new=/tmp/php_conf.new
        touch $user_db
        chmod a+r $user_index
        chmod a+w $user_db
        chmod a+w $user_www
        echo "Enabling PHP in $conf ..."
        sed '/#LoadModule php5_module/s/#LoadModule/LoadModule/' $conf | sed "s^[email protected]^<b>\$admin_email</b>^" > $conf_new
        echo "(Re)Starting Apache ..."
        osascript <<EOF
        do shell script "/bin/mv -f $conf $conf_old; /bin/mv $conf_new $conf; /usr/sbin/apachectl restart" with administrator privileges
        EOF
    Unfortunately, this has completely broken Apache and now nothing is being served; instead I'm receiving "Failed to open page" errors because the browser cannot connect to the server, despite Web Sharing still being active in System Preferences. So therefore I guess my question is this: how can I undo the changes made by copy-and-pasting the above code snippet? Admittedly, I don't understand what it did; I just thought it looked like a Terminal command and tried it. I have no experience setting up Apache on Mac OS X (I've only installed XAMPP and WampServer on Windows). So any pointers on reversing the aforementioned, and then successfully enabling PHP, would be great. EDIT: I've discovered, via Console, that the following error messages are being recorded when trying to browse to 127.0.0.1:
        (org.apache.httpd) Throttling respawn: Will start in 10 seconds
        no listening sockets available, shutting down
        Unable to open logs
        (org.apache.httpd[13453]) Exited with exit code: 1
    Does this point any more to the issue? EDIT #2: I'm now getting this in Console:
        15/02/2010 21:24:14 osascript[3597] Error loading /Library/ScriptingAdditions/Adobe Unit Types.osax/Contents/MacOS/Adobe Unit Types: dlopen(/Library/ScriptingAdditions/Adobe Unit Types.osax/Contents/MacOS/Adobe Unit Types, 262): no suitable image found. Did find: /Library/ScriptingAdditions/Adobe Unit Types.osax/Contents/MacOS/Adobe Unit Types: no matching architecture in universal wrapper
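    On the "how do I undo it" part: the pasted snippet moved the original config aside as httpd.conf.<pid> (the $conf.$$ line) before installing its edited copy, so the untouched original is very likely still sitting next to the live file, which on Snow Leopard is normally /private/etc/apache2/httpd.conf. The sketch below restores the newest such backup and restarts Apache; it assumes that backup naming and path, and should be run with sudo.

        #!/usr/bin/env python
        # Find the httpd.conf.<pid> backup the snippet left behind, put it back,
        # and restart Apache. Paths assume a stock Snow Leopard layout.
        import glob
        import os
        import shutil
        import subprocess

        CONF = "/private/etc/apache2/httpd.conf"

        def restore_original_conf():
            backups = sorted(glob.glob(CONF + ".*"), key=os.path.getmtime)
            if not backups:
                raise SystemExit("no httpd.conf.* backup found - nothing to restore")
            latest = backups[-1]
            shutil.copy2(CONF, CONF + ".broken")   # keep the bad copy around, just in case
            shutil.copy2(latest, CONF)
            print("restored", latest, "over", CONF)
            subprocess.run(["apachectl", "configtest"], check=False)  # sanity-check the syntax
            subprocess.run(["apachectl", "restart"], check=True)

        if __name__ == "__main__":
            restore_original_conf()

    Once the original config is serving again, PHP can be enabled by uncommenting the single LoadModule php5_module line in that file by hand, which is all the sed step above was really trying to do.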

    Read the article

  • Slow wifi from Windows Server 2003 virtualized in XenServer

    - by John Clayton
    I'm a brand spanking new user of OS X, coming from a lifetime of Windows use. I've been setting up my new Macbook Pro and have run into a very unusual problem. Over wifi, I am unable to copy files to or from my Windows Home Server. The problem seems to exist only over wifi, and only to WHS. Here are the details of my setup: 2010 Macbook Pro (Core i7), OS X 10.6.3 Windows Home Server PP3 (virtualized in XenServer 5.5) Windows 7 Ultimate x64 desktop Windows 7 Ultimate x64 in Boot Camp D-Link DIR-655 wireless N router Here is what I've done to narrow down the problem: Files copy fine from WHS to OS X when using gigabit ethernet Files copy fine from desktop to OS X when using gigabit ethernet Files fail to copy from WHS to OS X when using wifi (error -51) Files copy fine from desktop to OS X when using wifi Files copy fine from WHS to Boot Camp when using wifi Files copy fine from desktop to Boot Camp when using wifi From what I can tell, it seems to be some sort of issue between OS X and WHS, but I can't for the life of me see what would be different between shares on WHS and my desktop. They are both connected using smb://ADDRESS (I've tried both by IP and name). I can browse the shares on the WHS, but copying to OS X fails. I originally found the issue while installing VS2010 off an ISO from WHS, mounted to a Windows 7 VM using VMware Fusion. During the installation the VM was unusable - even the clock got behind the host be about 8 minutes. Once I plugged in the ethernet and disabled the wifi things picked up and finished quickly. The Fusion 3.1 RC is the only I think of that I installed that may have messed with the wifi driver. I've also tried resetting the wifi router, and have changed it from being G & N to N-only. Under Boot Camp I get similar speeds as my wife's N laptop. Any ideas? Thanks! Update: The issue has been further narrowed down to Windows Server 2003, which Windows Home Server is based on, running in XenServer with the XenServer tools installed.

    Read the article

  • Windows 7 file-based backup service

    - by Ben Voigt
    I'm looking for a good replacement for Lazy Mirror, since it doesn't support Windows 7 well.
    Pros:
      - One of the things I really loved about Lazy Mirror is that it always maintains a "full" backup, but does so by only copying modified files. As each file was copied, the old version got archived (moved to an out-of-the-way location). So after mirroring ran, there'd be a complete copy of the file system, which could even be booted if necessary. At the same time, extra space on the backup media was used to store as many older versions of files as possible, without wasting space storing multiple copies of the same version. It seems that with Windows 7 Backup, there'd be wasted space storing the same data in both the system image and the file backup.
      - It was completely file-based, but also aware of the registry (it had a feature to dump the live registry to hive files in the correct format).
      - The backups were normal NTFS filesystems; no special tool was needed to read them.
      - It automatically cleaned out the oldest previous versions when space ran out (unlike Windows 7 Backup, which apparently simply starts failing when the backup media fills).
      - It copied all file attributes, including security.
    Cons:
      - It doesn't deal well with junction points, symbolic links, and hard links.
      - It didn't run as a service without lots of help from firesrv or srvany, and then you couldn't interact with the GUI. Running as a service was necessary to be able to mirror protected OS files.
      - It didn't have open-file handling, except for registry hives.
      - I guess that the file-by-file archive and replacement could leave mismatched sets of files if the mirror was interrupted. This would be the advantage of incremental backup techniques that require the old full backup plus all intermediate incremental backups to restore. But I don't see this as presenting much of a problem; you'd really only have a boot failure if you had a mixture of pre- and post-service-pack files, and I can run a full image backup using another tool before applying a service pack.
    Does anyone know of a tool that does both full-system backup and storage of old versions of files like Lazy Mirror did (without storing the same data multiple times), and also can run as a service in Windows 7? Free is best of course, but a reasonably priced paid program would be considered. It would be absolutely awesome if it also triggered a backup/mirror pass when a particular external drive was plugged in and generated popup warnings if backups hadn't been run recently.
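    For reference, the core behaviour described above (keep one browsable full copy, and shunt the previous version of each changed file into an archive area instead of overwriting it) is simple enough to prototype. This is only a bare sketch of that copy-then-archive pass, with placeholder paths and no VSS/open-file handling, registry support, or space management, so it is nowhere near a Lazy Mirror replacement, just an illustration of the idea.

        import os
        import shutil
        import time

        SOURCE = r"C:\Users\Ben"          # placeholder source tree
        MIRROR = r"E:\Mirror\Users\Ben"   # full, directly browsable copy
        ARCHIVE = r"E:\Archive"           # previous versions, grouped by run timestamp

        def changed(src, dst):
            """Treat a file as changed if the mirror copy is missing, older, or a different size."""
            if not os.path.exists(dst):
                return True
            s, d = os.stat(src), os.stat(dst)
            return s.st_mtime > d.st_mtime + 2 or s.st_size != d.st_size

        def mirror_pass():
            stamp = time.strftime("%Y%m%d-%H%M%S")
            for root, _dirs, files in os.walk(SOURCE):
                rel = os.path.relpath(root, SOURCE)
                for name in files:
                    src = os.path.join(root, name)
                    dst = os.path.join(MIRROR, rel, name)
                    if not changed(src, dst):
                        continue
                    if os.path.exists(dst):
                        old_dir = os.path.join(ARCHIVE, stamp, rel)
                        os.makedirs(old_dir, exist_ok=True)
                        shutil.move(dst, os.path.join(old_dir, name))  # archive the previous version
                    os.makedirs(os.path.dirname(dst), exist_ok=True)
                    shutil.copy2(src, dst)                             # copy data plus timestamps

        if __name__ == "__main__":
            mirror_pass()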

    Read the article

  • server 2008 r2 - wbadmin systemstatebackup - system writer not found in the backup

    - by TWood
    I am trying to manually run a systemstatebackup command on my server 2008 r2 box and I am getting an error code '2155347997' when I view the backup event log details. The command line tells me that I have log files written to the c:\windows\logs\windowsserverbackup\ path but I have no files of the .log type there. My command window tells me "System Writer is not found in the backup". However when I run vssadmin list writers I find System Writer in the list and it shows normal status with no last errors stored. I am running this from an elevated command prompt as well as from a logged on administrator account. My backup target path has permission for network service to have full control and it has plenty of free space. Looking in eventlog I have two VSS error 8194 that happen immediately before the Backup error 517 which has the errorcode 2155347997 listed. All three of these errors are a result of trying to run the command for the systemstatebackup. It's my belief that some VSS related permission is failing and exiting the backup process before it ever gets started. Because of this the initial code that creates the log files must not be running and this is why I have no files. When running the systemstatebackup command from the command prompt and watching the windowsserverbackup directory I do see that I have a Wbadmin.0.etl file which gets created but it is deleted when the backup errors out and stops. I have looked online and there are numerous opinions as to the cause of this error. These are the things I have corrected to try and fix this issue before posting here: Machine runs a HP 1410i smart array controller but at one time also used a LSI scsi card. Used networkadminkb.com's kb# a467 to find one LSI_SCSI entry in HKLMSysCurrentControlSetServices which start was set to 0x0 and I modified to 0x3. No changes. In HKLMSystemCurrentControlSetServicesVSSDiag I gave network service full control where it previously only had "Special Permission". No changes. I followed KB2009272 to manually try to fix system writer. These are all of the things I have tried. What else should I look at to resolve this issue? It may be important to note that I run Mozy Pro on this server and that was known in the past to use VSS for copying operations and it occasionally threw an error. However since an update last year those error event log entries have stopped.

    Read the article

  • DVD playback with Windows Media Player 11 works fine, but when copied to HDD and then played back, t

    - by stakx
    I have several DVDs with short documentaries on it. Since the notebook I'm using (a Dell Latitude E6400) has only one DVD drive, and I might play back those short movies very often, I thought of copying them to the HDD and playing them back from there. However, I've run into a problem, namely stuttering audio. Problem description: When I play back these movies directly from DVD (with Windows Media Player 11 under Windows Vista), everything works fine. Smooth video, no significant audio problems (only the occasional click). But as soon as I copy any of these DVDs to the HDD and try to play them back from there (e.g. using the wmpdvd://drive/title/chapter?contentdir=path protocol, I get stuttering audio — audio playback sounds like a machine gun for a third of a second or so, approx. every 8 seconds. I have tried converting the VOB files from the DVD to another format (ie. ripping), but that resulted in a noticeable downgrade of picture quality. Therefore I thought it best to keep the files in their original format, if possible. Still, I suspect that the stuttering audio is due to some (de-)muxing problem, and that changing the file format might help. (After all, video playback is fine; therefore I don't think that the hardware is too slow for playback.) Only thing is, I don't know how to convert the VOB files to another Windows Media Player-compatible format without quality loss. I hope someone can help me, or give me further pointers on things I could try out to get HDD playback to work without the problem described. Some things I've tried so far, without any success: VOB2MPG, in order to convert the .vob file to a .mpg file. But that changes only the A/V container, not the content. No re-encoding takes place at all. Re-encoding with MPlayer/MEncoder. Lots of quality loss there, and I frankly haven't got the time to test all possible settings combinations available. Disabling all plug-ins, equalizers, etc. in Windows Media Player. Disabling all hardware acceleration on the audio playback device. Further info on the VOB files I'm trying to playback: The video format is MPEG ES, PAL 720x576 pixels @ 24/25 frames per second. The sound stream is uncompressed PCM, 16-bit stereo @ 48kHz. (Might it help if I somehow re-encoded the sound stream at a lower resolution, or as an MP3? If so, how would I do this without changing the video stream?) P.S.: I am limited to using Windows Media Player (11). (I previously tried MPlayer btw., but the video playback quality was surprisingly bad.)

    Read the article

  • Why is mount -a not mounting fuse drive properly when executed remotely (via Fabric)?

    - by Jim D
    This is a weird bug and I'm not sure where it's coming from. Here's a quick rundown of what I'm doing. I'm trying to mount a FUSE drive on an Amazon EC2 instance running Ubuntu 10.10 using s3fs (FUSE over Amazon S3). s3fs is compiled from source according to the instructions, etc. I've also added an entry to /etc/fstab so that the drive mounts on boot. Here's what /etc/fstab looks like:
        # /etc/fstab: static file system information.
        # <file system> <mount point> <type> <options> <dump> <pass>
        proc /proc proc nodev,noexec,nosuid 0 0
        LABEL=uec-rootfs / ext4 defaults 0 0
        /dev/sda2 /mnt auto defaults,nobootwait,comment=cloudconfig 0 2
        /dev/sda3 none swap sw,comment=cloudconfig 0 0
        s3fs#mybucket /mnt/s3/mybucket fuse default_acl=public-read,use_cache=/tmp,allow_other 0 0
    So the good news is that this works fine. On reboot the connection mounts correctly. I can also do:
        $ sudo umount /mnt/s3/mybucket
        $ sudo mount -a
        $ mountpoint /mnt/s3/mybucket
        /mnt/s3/mybucket is a mountpoint
    Great, right? Well, here's the problem. I'm using Fabric to automate the process of building and managing this instance. I noticed I was getting this error message when using Fabric to build s3fs and set up the mount process:
        mountpoint: /mnt/s3/mybucket: Transport endpoint is not connected
    I isolated the problem and built a Fabric task that reproduces it:
        def remount_s3fs():
            sudo("mount -a")
    Which does:
        [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] Executing task 'remount_s3fs'
        [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] sudo: mount -a
    (And yes, I was sure to unmount it before running this task.) When I check the mount using mountpoint I get:
        $ mountpoint /mnt/s3/mybucket
        mountpoint: /mnt/s3/mybucket: Transport endpoint is not connected
    Done. But if I run sudo mount -a at the command line, it works. Hrm. Here is that fab task output again, this time in full debug mode:
        [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] Executing task 'remount_s3fs'
        [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] sudo: sudo -S -p 'sudo password:' /bin/bash -l -c "mount -a"
    Again, I get that "transport endpoint is not connected" error. I've also tried copying and pasting the exact command into my SSH session (i.e. sudo -S -p 'sudo password:' /bin/bash -l -c "mount -a") and it works fine. So... that's my problem. Any ideas?
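    One frequently reported cause of exactly this symptom is that the s3fs daemon started by mount -a gets hung up on when Fabric's remote shell and its pseudo-terminal go away, whereas an interactive SSH session keeps it alive. Two things worth trying, offered as workarounds rather than a confirmed fix, and written against the Fabric 1.x API: disable the pty for just this command, and/or detach the mount from the session with nohup.

        from fabric.api import sudo

        def remount_s3fs():
            # Ask Fabric not to allocate a pseudo-terminal for this command, so the
            # backgrounded s3fs process is not killed when the channel closes.
            sudo("mount -a", pty=False)

        def remount_s3fs_detached():
            # Alternative: fully detach the mount from the remote session.
            sudo("nohup mount /mnt/s3/mybucket >/tmp/s3fs-mount.log 2>&1 &", pty=False)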

    Read the article

  • Real-time offline folder-to-folder backup application needed (Windows)

    - by niktech
    I recently started using the Intel Matrix Storage RAID solution, which allowed me to use my five 1TB drives for two RAID volumes: the first a 1TB RAID 0 striped across all 5 drives, and the second a RAID 5 across the rest of the free space on all drives (around 2.85TB usable space). The RAID 0 I use for the OS, applications and games, while the RAID 5 I use as more permanent storage (photos, etc.). Now, I do realize that running the OS and applications on a RAID 0 across 5 drives is very dangerous, which is what brings up the following question. Is there a reliable freeware real-time backup application that can back up a set of folders from one drive to another drive (no online backups needed)? I've already tried a few (Mozy, Yadis, Comodo Backup, GFI Backup, Idoo, CrashPlan) but none meet my requirements:
      - Low CPU and RAM usage.
      - Real-time backups: as soon as a file is modified in the source folder, it is added to a backup queue which is processed at the lowest priority when the CPU is idle. This backup queue should persist across computer restarts (i.e. the source and destination folders should always have the same set of files, except for the ones waiting in the backup queue).
      - Incremental backups: if only 10 bytes changed in a 1GB file, the app should only copy those 10 new bytes.
      - Ability to back up locked and open files (some apps, like Yadis, can't back up critical files like browser favorites).
      - Ability to run as a service (no need for any user to log in to have the app started).
    Optional requirements:
      - Compression of the destination into a well-known format (RAR, Zip) that can be read directly without the application.
      - Preset source folders (such as browser favorites, game saves, application settings, etc.).
    The idea is to use the RAID 0 array as "semi-persistent, RAM-like" storage which, in case of a failure, can be quickly rebuilt by reinstalling the OS, apps and games and copying over the settings, saves and favorites from the RAID 5. I'm also thinking of taking this "RAID 0 as RAM" idea to the extreme with SSDs (as soon as we get some nice 6Gb/s SATA III SSDs out there), where a couple of SSDs chained in RAID 0 will work as yet another semi-persistent cache layer sitting between the RAM and the HD. I'm just hoping there already exists an application that satisfies these requirements... otherwise I'll have to write one myself, which I would prefer not to do.
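    If it does come down to writing one, the "watch the source folder and queue changed files" part is the easy bit; the sketch below shows that core loop using the third-party watchdog package (pip install watchdog) plus shutil. Everything else on the list above (persistent queue, locked-file/VSS support, delta copies, running as a service, compression) is deliberately left out, and the folder paths are placeholders.

        import queue
        import shutil
        import time
        from pathlib import Path

        from watchdog.events import FileSystemEventHandler
        from watchdog.observers import Observer

        SOURCE = Path(r"C:\Data")          # placeholder source folder on the RAID 0
        DEST = Path(r"D:\Backup\Data")     # placeholder destination on the RAID 5

        pending = queue.Queue()            # in-memory only; a real tool would persist this

        class QueueChanges(FileSystemEventHandler):
            def on_created(self, event):
                if not event.is_directory:
                    pending.put(Path(event.src_path))

            def on_modified(self, event):
                if not event.is_directory:
                    pending.put(Path(event.src_path))

        def copy_worker():
            while True:
                src = pending.get()
                dst = DEST / src.relative_to(SOURCE)
                dst.parent.mkdir(parents=True, exist_ok=True)
                try:
                    shutil.copy2(src, dst)   # whole-file copy; no delta or open-file handling
                except OSError:
                    pending.put(src)         # crude retry for files that are locked right now
                    time.sleep(1)

        if __name__ == "__main__":
            observer = Observer()
            observer.schedule(QueueChanges(), str(SOURCE), recursive=True)
            observer.start()
            copy_worker()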

    Read the article

  • Exchange IMAP4 connector - Error Event ID 2006

    - by MikeB
    Hi, A couple of users in my organisation use IMAP4 to connect to Exchange 2007 (Update rollup 9 applied) because they prefer Thunderbird / Postbox clients. One of the users is generating errors in the Application Log as follows: An exception Microsoft.Exchange.Data.Storage.ConversionFailedException occurred while converting message Imap4Message 1523, user "*******", folder *********, subject: "******", date: "*******" into MIME format. Microsoft.Exchange.Data.Storage.ConversionFailedException: Message content has become corrupted. ---> System.ArgumentException: Value should be a valid content type in the form 'token/token' Parameter name: value at Microsoft.Exchange.Data.Mime.ContentTypeHeader.set_Value(String value) at Microsoft.Exchange.Data.Storage.MimeStreamWriter.WriteHeader(HeaderId type, String data) at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimeStreamAttachment(StreamAttachmentBase attachment, MimeFlags flags) --- End of inner exception stack trace --- at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimeStreamAttachment(StreamAttachmentBase attachment, MimeFlags flags) at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimeAttachment(MimePartInfo part, MimeFlags flags) at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimePart(MimePartInfo part, MimeFlags mimeFlags) at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimeParts(List`1 parts, MimeFlags mimeFlags) at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimePart(MimePartInfo part, MimeFlags mimeFlags) at Microsoft.Exchange.Data.Storage.ImapItemConverter.<>c__DisplayClass2.<WriteMimePart>b__0() at Microsoft.Exchange.Data.Storage.ConvertUtils.CallCts(Trace tracer, String methodName, String exceptionString, CtsCall ctsCall) at Microsoft.Exchange.Data.Storage.ImapItemConverter.WriteMimePart(ItemToMimeConverter converter, MimeStreamWriter writer, OutboundConversionOptions options, MimePartInfo partInfo, MimeFlags conversionFlags) at Microsoft.Exchange.Data.Storage.ImapItemConverter.GetBody(Stream outStream) at Microsoft.Exchange.Data.Storage.ImapItemConverter.GetBody(Stream outStream, UInt32[] indices) From my reading around it seems that the suggestion is to ask users to log in to Outlook / OWA and view the messages there. However, having logged in as the users myself, the messages cannot be found either through searching or by browsing the folder detailed in the log entry. The server returns the following error to the client: "The message could not be retrieved using the IMAP4 protocol. The message has not been deleted and may be accessible using either Microsoft Outlook or Microsoft Office Outlook Web Access. You can also try contacting the original sender of the message to find out about the contents of the message. Retrieval of this message will be retried when the server is updated with a fix that addresses the problem." Messages were transferred in to Exchange by copying them from the old Apple Xserve, accessed using IMAP. So my question, finally: 1. Is there any way to get the IMAP Exchange connector to rebuild its cache of messages since it doesn't seem to be pulling them directly from the MAPI store? 2. Alternatively, if there is no database, any ideas on why these messages don't appear in Outlook or OWA would be gratefully received. Many thanks, Mike

    Read the article

  • Synchronizing/deploying scripts across several systems

    - by otto
    I have a few time consuming tasks that I like to spread across several computers. These tasks require running an identical ruby or python script (or series of scripts that call each other) on each machine. The machines will a separate config file telling the script what portion of the task to complete. I want to figure out the best way to syncronize the scripts on these machines prior to running them. Up until now, I have been making changes to a copy of the script on a network share and then copying a fresh copy to each machine when I want to run it. But this is cumbersome and leaves a chance for error ( e.g missing a file on the copy or not clicking "copy and replace"). Lets assume the systems are standard windows machines that are not dedicated to this task and I don't need to run these scripts all the time (so I don't want a solution that runs 24/7 and always keeps them up to date, I'd prefer something that pushes/pulls on command). My thoughts on various options: Simple adaptation of my current workflow: Keep the originals on the network drive, but write a batch file that copies over the latest version of the scripts so everything is a one-click operation. Requires action on each system, but that's not the end of the world (since each one usually needs their configuration file changed slightly too). Put everything in a Mercurial/Git reposotory and pull a fresh copy onto each node. Going straight to the repo from each machine would guarantee a current version (and would have the fringe benefit of allowing edits to the script to be made from any machine). Cons would be that it requires VCS to be installed on each machine and there might be some pains dealing with authentication since I wouldn't use a public repo. Open up write access on a shared folder and write a script to use rsync (or similar) to push the changes out to all of the machines at once. This gets a current version on every machine (though you would have to change the script if you want to omit a machine or add a new one). Possible issue would be that each computer has to allow write access. Dropbox is a reasonable suggestion (and could work well) but I dont want to use an external service and I'd prefer not to have to have dropbox running 24/7 on systems that would normally not need it. Is there something simple that I am missing? Some tool designed expressly for doing this kind of thing? Otherwise I am leaning toward just tying all of the systems into Mercurial since, while it requires extra software, it is a little more robust than writing a batch file (e.g. if I split part of a script into a separate module, Mercurial will know what to do whereas I would have to add a line to the batch file).

    Read the article

  • iotop for Linux kernel 2.6.18

    - by Lightsauce
    So it has come to my attention that iotop isn't available for kernel 2.6.18, since that is less than 2.6.20 and iotop also requires Python 2.6+. I've done some research and came across this article: http://lserinol.blogspot.com/2009/09/io-usage-per-process-on-linux.html According to this, if these processes have I/O stats in /proc/pid#/io (where pid# is the process number), it's doable regardless of the kernel version. So, in reality, I could upgrade Python to 2.6 and test out iotop. However, my flavor of Linux, CentOS release 5.5 (Final), currently only supports Python 2.4.3-44.el5. If I were to uninstall it with yum, it doesn't look pretty: it ends up wanting to uninstall 235 packages, most of which are very important! I read in one place online (I forget the URL from yesterday) that you can install Python 2.6+ in parallel with this one, and have the rpm install for iotop use that. Well, I didn't choose that route. I figured, what the heck, let's write iotop (not copying it, but reverse-engineering it without actually looking at its code or seeing it in use) in bash. I thought it would just grab the /proc/pid#/io files and parse the stats. So I wrote a script to grab the top 10 rchar, wchar, read_bytes and write_bytes values by collecting all these stats from all the /proc/pid#/io files, sorting them by each metric, then grabbing the top 10 highest values. The conclusion: the data seems completely useless. Does anybody know of any resources for advanced Linux administration where I can figure out how to take these /proc/pid#/ directories and work out what the heck they are doing with I/O on the disk? My main goal is to figure out what exactly is causing high load on my disk. I just know it's on the / partition (/dev/sda2 in this case), and I'm not really sure how to narrow it down without the help of iotop. If I run iostat to grab metrics for 1 minute, every second, the first result it gives me shows a high 'kB_read/s', so that makes me think it's mostly reading. However, if I watch the updates it gives me every second, it's actually only showing values for kB_wrtn/s. This makes me think the initial value iostat gives me is misleading.
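    The likely reason the bash attempt looked useless is that the counters in /proc/pid#/io are cumulative since each process started, so sorting the absolute values mostly ranks long-lived processes. Sampling twice and ranking the per-interval deltas gives something much closer to iotop. A small sketch of that idea follows (run as root so other users' /proc entries are readable; it is written against modern Python syntax, but the /proc fields it reads are the same ones the bash version was using).

        #!/usr/bin/env python
        # Poor-man's iotop: sample /proc/<pid>/io twice and rank processes by how many
        # bytes they actually read/wrote to storage during the interval.
        import glob
        import time

        FIELDS = ("read_bytes", "write_bytes")   # actual block I/O, unlike rchar/wchar

        def snapshot():
            procs = {}
            for path in glob.glob("/proc/[0-9]*/io"):
                pid = path.split("/")[2]
                try:
                    with open(path) as f:
                        stats = dict(line.split(": ", 1) for line in f.read().splitlines())
                    with open("/proc/%s/stat" % pid) as f:
                        stat = f.read()
                    name = stat[stat.index("(") + 1:stat.rindex(")")]
                except (IOError, OSError, ValueError):
                    continue                      # process exited between glob and read
                procs[pid] = (name, dict((k, int(stats[k])) for k in FIELDS))
            return procs

        def top_io(interval=5, count=10):
            before = snapshot()
            time.sleep(interval)
            after = snapshot()
            deltas = []
            for pid, (name, now) in after.items():
                if pid in before:
                    prev = before[pid][1]
                    total = sum(now[k] - prev[k] for k in FIELDS)
                    deltas.append((total, pid, name, now, prev))
            deltas.sort(reverse=True)
            for total, pid, name, now, prev in deltas[:count]:
                print("%8s %-20s read %10d B  write %10d B" %
                      (pid, name, now["read_bytes"] - prev["read_bytes"],
                       now["write_bytes"] - prev["write_bytes"]))

        if __name__ == "__main__":
            top_io()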

    Read the article

  • Why does Excel now give me already existing name range error on Copy Sheet?

    - by WilliamKF
    I've been working on a Microsoft Excel 2007 spreadsheet for several days. I'm working from a master template like sheet and copying it to a new sheet repeatedly. Up until today, this was happening with no issues. However, in the middle of today this suddenly changed and I do not know why. Now, whenever I try to copy a worksheet I get about ten dialogs, each one with a different name range object (shown below as 'XXXX') and I click yes for each one: A formula or sheet you want to move or copy contains the name 'XXXX', which already exists on the destination worksheet. Do you want to use this version of the name? To use the name as defined in destination sheet, click Yes. To rename the range referred to in the formula or worksheet, click No, and enter a new name in the Name Conflict dialog box. The name range objects refer to cells in the sheet. For example, E6 is called name range PRE on multiple sheets (and has been all along) and some of the formulas refer to PRE instead of $E$6. One of the 'XXXX' above is this PRE. These name ranges should only be resolved within the sheet within which they appear. This was not an issue before despite the same name range existing on multiple sheets before. I want to keep my name ranges. What could have changed in my spreadsheet to cause this change in behavior? I've gone back to prior sheets created this way and now they give the message too when copied. I tried a different computer and a different user and the same behavior is seen everywhere. I can only conclude something in the spreadsheet has changed. What could this be and how can I get back the old behavior whereby I can copy sheets with name ranges and not get any errors? Looking in the Name Manager I see that the name ranges being complained about show twice, once as scope Template and again as scope Workbook. If I delete the scope Template ones the error goes away on copy however, I get a bunch of #REF errors. If I delete the scope Workbook ones, all seems okay and the errors on copy go away too, so perhaps this is the answer, but I'm nervous about what effect this deletion will have and wonder how the Workbook ones came into existence in the first place. Will it be safe to just delete the Workbook name manager scoped entries and how might these have come into existence without my knowing it to begin with?

    Read the article

  • Troubleshooting Network Speeds -- The Age Old Inquiry

    - by John K
    I'm looking for help with what I'm sure is an age-old question. I find myself wanting to understand network throughput more clearly, but I can't seem to find information that makes it "click". We have a few servers distributed geographically, running various versions of Windows. Assuming we always use one host (a desktop) as the source, when copying data from that host to other servers across the country, we see a high variance in speed. In some cases we can copy data at 12 MB/s consistently; in others we're seeing 0.8 MB/s. It should be noted that after testing 8 destinations, we always seem to land at either 0.6-0.8 MB/s or 11-12 MB/s. In the building we're primarily concerned with, we have an OC-3 connection to our ISP. I know there are a lot of variables at play, but I was hoping the experts here could help answer a few basic questions to bolster my understanding. 1.) For older machines running Windows XP, Server 2003, etc., with a 100 Mbps Ethernet card and 72 ms typical latency, does 0.8 MB/s sound at all reasonable, or do you think that's slow enough to indicate a problem? 2.) The classic "mathematical fastest speed" of throughput = TCP window / latency works out, in our case, to roughly 0.9 MB/s (64 KB / 72 ms). My understanding is that this is an upper bound you would never expect to reach (due to overhead), let alone surpass. In some cases, though, we're seeing speeds of 12.3 MB/s. There are Steelhead accelerators scattered around the network; could those account for transfer rates so far above that limit? 3.) It's been suggested that the use of SMB vs. SMB2 could explain the differences in speed. Indeed, packet captures show both being used, depending on the OS versions in play, as we would expect. I understand what determines whether SMB2 is used, but I'm curious what kind of performance gain you can expect from SMB2. My problem simply seems to be a lack of experience and, more importantly, perspective in terms of what are and are not reasonable network speeds. Could anyone help impart some context/perspective?
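
    The numbers in question 2 are easy to sanity-check. The sketch below is just the arithmetic from the question: the single-stream ceiling for a classic 64 KB window at 72 ms, and the window size that would be needed to sustain the observed 12 MB/s at that latency - which is roughly what TCP window scaling (or a WAN accelerator such as a Steelhead terminating the connection locally) provides.

        # Single-stream TCP throughput is bounded by window size / round-trip time.
        rtt = 0.072                 # seconds (72 ms)
        window = 64 * 1024          # bytes: the classic 64 KB window, no scaling

        ceiling = window / rtt
        print("Ceiling with a 64 KB window: %.2f MB/s" % (ceiling / 1e6))   # ~0.91 MB/s

        # Window needed to sustain the fast-path speed at the same latency.
        target = 12 * 1e6           # 12 MB/s
        needed = target * rtt
        print("Window needed for 12 MB/s:   %.0f KB" % (needed / 1024))     # ~844 KB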

    Read the article

  • Reliable file copy (move) process - mostly Unix/Linux

    - by mfinni
    Short story: we need a rock-solid, reliable file-mover process. We have source directories, often still being written to, that we need to move files from. The files come in pairs - a big binary and a small XML index - and we get a CTL file that defines these file bundles. There is a process that operates on the files once they are in the destination directory and gets rid of them when it's done. Would rsync do the best job, or do we need something more complex? Long story as follows: we have multiple sources to pull from. One set of directories is on a Windows machine (which does have Cygwin and an SSH daemon), and a whole pile of directories are on a set of SFTP servers (most of these are also Windows). Our destinations are a list of directories on AIX servers. We used to use a very reliable Perl script on the Windows/Cygwin machine when it was our only source. However, we're working on getting rid of that machine, and there are other sources now - the SFTP servers - that we cannot presently run our own scripts on. For security reasons, we can't run the copy jobs on our AIX servers; they have no access to the source servers. We currently have a homegrown Java program on a Linux machine that uses SFTP to pull from the various new SFTP source directories, copies to a local tmp directory, verifies that everything is present, then copies that to the AIX machines, and then deletes the files from the source. However, we keep finding bugs and poorly handled error cases, and none of us are Java experts, so fixing or improving it may be difficult. Our concerns are: with a remote source (SFTP), will rsync leave alone any file that is still being written? Some of these files are large. From reading the docs, it seems like rsync is very good about not removing the source until the destination is reliably written. Does anyone have experience confirming or disproving this? Additional info: we also have to think about the ingestion process that operates on the files once they are in the destination directory. We don't want it operating on files while we are still copying them; it waits until the small XML index file is present, and our current copy jobs are supposed to copy the XML file last. Sometimes the network has problems, sometimes the SFTP source servers crap out on us, and sometimes we typo the config files so a destination directory doesn't exist. We never want to lose a file due to this sort of error, and we need good logs. If you were presented with this, would you just script up some rsync, or would you build or buy a tool - and if so, what would it be (or what technologies would it use)? I (and others on my team) are decent with Perl.
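
    To make the "just script up some rsync" option concrete, here is a minimal sketch of the ordering that matters for the ingester: move the big binaries first, then the XML indexes, so an index never appears before its binary. The hostnames, paths, and extensions are hypothetical, it assumes rsync is actually available on the source side (if the sources really only speak SFTP, the same two-pass ordering applies with whatever transfer tool is used), and it deliberately ignores the CTL-file bookkeeping and logging a production version would need. Note that rsync by itself will happily pick up a file that is still being written, so sources either need to write to a temp name and rename on completion, or the job needs to filter by file age.

        #!/usr/bin/env python
        # Sketch only: two rsync passes so the small XML index always lands after
        # its large binary, which is what the downstream ingester keys off of.
        import subprocess
        import sys

        SOURCE = "copyuser@source-host:/outbound/"     # hypothetical source
        DEST = "copyuser@aix-host:/ingest/incoming/"   # hypothetical destination

        def rsync_pass(pattern, remove_source):
            """One rsync run limited to filenames matching pattern."""
            cmd = ["rsync", "-av", "--partial"]
            if remove_source:
                cmd.append("--remove-source-files")
            # Standard filter idiom: include the pattern and directories, exclude the rest.
            cmd += ["--include", pattern, "--include", "*/", "--exclude", "*", SOURCE, DEST]
            result = subprocess.run(cmd, capture_output=True, text=True)
            if result.returncode != 0:
                sys.exit("rsync failed (%s): %s" % (pattern, result.stderr))
            return result.stdout

        # Pass 1: the big binaries. A half-copied binary is harmless because the
        # ingester waits for the XML index before it does anything.
        print(rsync_pass("*.bin", remove_source=True))

        # Pass 2: the XML indexes, only after every binary is in place.
        print(rsync_pass("*.xml", remove_source=True))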

    Read the article

  • Internet Pings but Does Not Load

    - by t3techcom18
    From what I've seen while researching this for the past two days, many people have had the same issue over the years; however, this is the first time I've encountered it, and many of the specific workarounds and fixes have not worked for me. I've been trying to work through this for 24 hours straight now, to no avail, so many thanks to anyone who can help. On Monday night I got home from work and surfed the internet for half an hour; everything was fine as always. Just after that half hour, my Internet got very sluggish and then died completely. I thought it might have been an update I had just put through via Windows Update, listed as a critical update for MSE, since the same thing happened a few years ago. I did a System Restore to two different dates within the past two weeks: nothing. Uninstalled MSE and disabled Windows Defender and the Windows Firewall: nothing. Reset IE options, reset Winsock, flushed DNS, and ran many of the other command-prompt resets: nothing. Reset the modem: nothing. What DID work, however, was a ping test to Yahoo: it reported all four packets received, yet nothing else would load. Both the LAN checks and CenturyLink said everything worked on their end, that everything was connected properly, and that the speeds were fine. CenturyLink noted that they thought port 80 was blocked, so I added a firewall rule to allow port 80, but it made no difference whatsoever. I remembered I had a spare modem lying around and swapped both the modem and the cords: nothing. I then hooked the connection up to my netbook to see if that would work, as it usually does; the connection didn't work there either. Like I said, it's been about 24 hours now and this is increasingly frustrating, as I've tried all the solutions suggested (while browsing through 10 pages of search results on my phone) and still nothing. Any suggestions and tricks would be greatly appreciated! Here are my specs: Windows 7 32-bit Home Premium; Intel Core 2 Duo 3.14 GHz; 4 GB Kingston DDR2 RAM; eVGA nForce 750i SLI; eVGA GeForce GTX 560 Ti FPB; ISP: CenturyLink; no router; modem: CenturyLink 660 Series; hardwired connection. PLEASE NOTE: This is the only computer I have (like I said, the netbook solution didn't work), so downloading programs and such is not an option until I can get to other computers somewhere else, like right now. Unless someone knows a way of copying/pasting a file in Windows and then transferring it to an Android smartphone, this is gonna take a while, haha. Patience is requested.
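
    One thing worth separating out here, since ping succeeding while pages fail can mean several different things: ICMP reachability, DNS resolution, and an actual TCP connection to port 80 are three independent checks. The snippet below is only an illustration of that distinction (the hostname is an arbitrary example, and it obviously needs a machine with Python on the affected connection, which may not be practical in this situation).

        import socket

        host = "www.yahoo.com"   # arbitrary example target

        # 1. DNS: can the name be resolved at all? Ping to a raw IP can succeed
        #    even when DNS is broken, which looks like "internet down" in a browser.
        try:
            addr = socket.gethostbyname(host)
            print("DNS OK: %s -> %s" % (host, addr))
        except socket.gaierror as exc:
            print("DNS failed: %s" % exc)
            addr = None

        # 2. TCP port 80: can a handshake to a web server complete? Ping working
        #    while this fails points at TCP/port filtering rather than reachability.
        if addr:
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(5)
            try:
                s.connect((addr, 80))
                print("TCP 80 OK")
            except socket.error as exc:
                print("TCP 80 failed: %s" % exc)
            finally:
                s.close()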

    Read the article

< Previous Page | 81 82 83 84 85 86 87 88 89 90 91 92  | Next Page >