Search Results

Search found 12648 results on 506 pages for 'disk activity'.


  • XP OEM licensing when reinstalling Windows XP

    - by mindas
    My wife managed to buy the Dell laptop she was using at her ex-employer, which has just gone bust. The problem with it is the OS (Windows XP), which takes ages to boot and is generally disproportionately slow for the machine's hardware. So my aim is to sacrifice a day and reinstall it. The part I am slightly worried about is the licensing/registration/activation hell. Apart from the sticker (with the WinXP license key), the laptop has no other paperwork proving this license is legitimate. I believe this was originally an OEM license. Unfortunately, I don't have the installation CD.

    This computer also has MS Office installed (which I would like to retain), but none of the MS Office apps will launch, due to some obscure error complaining about a lack of free disk space (which the computer has plenty of). I have absolutely no clue what kind of license this MS Office was under, and because the company has gone into administration, there is no way of getting this information, nor installable media. I believe that by buying the hardware I also acquired the software, which I can use as I see fit. Correct me if I'm wrong.

    That said, my question is: what is the easiest way of reinstalling XP? By easiest I mean avoiding spending my time proving to Microsoft support that I've got the right to use the software (insert your "computer says noooo" joke here), while still getting to a fresh, activated, legal state of XP. I used to work as a sysadmin many years ago, so I am not afraid of any technical difficulties. The same question applies to MS Office. I imagine the process would consist of backing up all the data, pulling some bits from the registry, and using those on the fresh install. As for the reinstall itself, I'd expect to use some sort of OEM Windows repair CD from Dell, right? Are those freely available? My other box (HP) has such a thing, and it can't be used on any other brand. I'm sure somebody has had to go through this licensing hell and could share his/her tips. Thanks in advance.
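    Before wiping the machine, my rough plan for the "pulling some bits from the registry" step is below -- the C:\backup path is just wherever I stage files for the external drive, and the Office key is my assumption about where its licensing bits live:

        rem create the staging folder first, then export the keys that
        rem (I believe) hold the Windows and Office product information
        mkdir C:\backup
        reg export "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion" C:\backup\winnt-currentversion.reg
        reg export "HKLM\SOFTWARE\Microsoft\Office" C:\backup\ms-office.reg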

    Read the article

  • Unable to make the session state request to the session state server

    - by Angry_IT_Guru
    For about 4-5 months now, I've been having a sporadic issue -- mainly during our busiest time of day, between 10:30-11:45 AM -- where all of my Windows 2003 web servers in a Microsoft NLB cluster start throwing session state server errors. A sample error is below.

        System.Web.HttpException: Unable to make the session state request to the session state server. Please ensure that the ASP.NET State service is started and that the client and server ports are the same. If the server is on a remote machine, please ensure that it accepts remote requests by checking the value of HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\aspnet_state\Parameters\AllowRemoteConnection. If the server is on the local machine, and if the before mentioned registry value does not exist or is set to 0, then the state server connection string must use either 'localhost' or '127.0.0.1' as the server name.
           at System.Web.SessionState.OutOfProcSessionStateStore.MakeRequest(StateProtocolVerb verb, String id, StateProtocolExclusive exclusiveAccess, Int32 extraFlags, Int32 timeout, Int32 lockCookie, Byte[] buf, Int32 cb, Int32 networkTimeout, SessionNDMakeRequestResults& results)
           at System.Web.SessionState.OutOfProcSessionStateStore.SetAndReleaseItemExclusive(HttpContext context, String id, SessionStateStoreData item, Object lockId, Boolean newItem)
           at System.Web.SessionState.SessionStateModule.OnReleaseState(Object source, EventArgs eventArgs)
           at System.Web.HttpApplication.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
           at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

    I'm using the ASP.NET State service on a centralized back-end Windows 2003 server that all the web servers communicate with. I was using SQL Server session state for a couple of years prior to having this issue. The problem with SQL was that when the issue occurred, it created a blocking situation which essentially impacted all users across all servers. The product company recommended that I use the standard ASP.NET State service, as that was what they technically supported. Why this would make a difference is beyond me -- but I had no choice but to try it!

    I have attempted creating multiple application pools, adding additional servers, and changing the TCP/IP timeout from 20 to 30 seconds, and have even called Microsoft ASP.NET product support, with very little success. I also recommended that they review whether they are using read-only session state instead of read/write per page request -- as I understand it, read/write basically causes every page to make round-trips to the state server even if state isn't being used on the page. Unfortunately, the application is developed by our product company, and they insist that it is something in my environment, because other clients do not have these sorts of issues. However, I've talked to other clients, and they tell me that when they've seen issues like this, they've basically had to create another web farm. This issue almost seems like I've simply reached some architectural limit within the application...

    Microsoft's position on the issue is that the session state needs to be reduced; the return code being reported back from the state server indicates that its buffers are full. To better understand the scope of the issue (rather than wait for customers to call and complain), I installed ELMAH and configured it to send me e-mails when unhandled exceptions occur. I get 500-1000 e-mails during the period of high activity!

    If anyone has any other ideas I could try, or better ways to troubleshoot, I'd appreciate it.
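    For anyone unfamiliar with the setup: the out-of-process state server is wired up through the sessionState element in web.config on each web server. A trimmed sketch of what ours amounts to (the IP and timeout values here are placeholders, not our production values):

        <sessionState mode="StateServer"
                      stateConnectionString="tcpip=192.168.1.50:42424"
                      stateNetworkTimeout="30"
                      timeout="20" />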

    Read the article

  • Logitech Optical Mouse Frozen In Middle of Windows XP Pro Screen

    - by Code Sherpa
    I have a Logitech optical mouse/keyboard combo, and I have been using them just fine with the system drivers for almost a year now. I recently updated my Kaspersky software and rebooted. Now the mouse is frozen in the middle of my screen. I am not able to log in to the Windows XP Pro box that has the frozen mouse (because I can't work the mouse), but I am able to remote desktop to the computer.

    Things I know / have tried:
    - When I boot the problem computer, I am able to use the keyboard, but not the mouse.
    - I have installed the latest version of Logitech's SetPoint (with the updated drivers) on the problem computer (via remote desktop), and that didn't seem to matter.
    - I bought new batteries for the mouse, and that didn't matter.
    - I have tried the mouse/keyboard on another computer, and the mouse works just fine there.

    My suspicion is that the Kaspersky install has overwritten a driver of some sort. Things I have not done (and would appreciate detailed steps for, if you feel this is the way to go):

    1) Uninstall all the mouse drivers on the machine, reboot, and reinstall. Note: when I get to Device Manager, I don't see an entry for Human Interface Devices (where the mouse device would be). Here are my options: Computer; Disk drives; DVD/CD-ROM drives; Floppy controllers; IDE ATA/ATAPI; Imaging devices; Network adapters; Other devices; Ports; Processors; Sound, video and game controllers; System devices; USB controllers. I should also point out that Video Controller is the only thing under Other devices, and it has a yellow exclamation mark. The same is true for all the items under Universal Serial Bus controllers. I think this means I have to update my BIOS, but since my mouse was working just fine without doing that, I don't think that is my problem. So, how do I get to my mouse device?

    2) Update my BIOS. Note: as pointed out above, I don't think this matters, as my mouse was working just fine under my computer's current BIOS version.

    Thanks for your help.
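    One thing I'm considering checking from the remote desktop session, on the theory that Kaspersky added a filter driver to the mouse device class, is the class filter entry in the registry (that GUID is the standard mouse class, if I have it right):

        rem mouclass should be listed; anything else here is a suspect filter driver
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E96F-E325-11CE-BFC1-08002BE10318}" /v UpperFilters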

    Read the article

  • Lenovo Ideapad Y480 can't reinstall windows?

    - by elegantonyx
    Alright, so here's the deal... For a while I'd wanted to mess with Linux. I don't know why, but I wanted to. So what I did was use Wubi to install Ubuntu. For some unknown reason (Intel Rapid Start? Half the drivers being on a Lenovo-installed SSD, separate from the main hard drive?) it wouldn't dual boot. So I decided to use Linux Mint instead, and install it in a partition. Since Windows 7 Home Premium won't create more partitions once you already have a certain number, I just shrank my system drive and left empty space for the installer to claim. When I installed Mint, it worked, but it left my Windows 7 installation unable to boot, and eventually it became corrupted. I tried to use a system repair disc I had burned earlier, but it didn't find the Windows installation, so I assume the partition is corrupted.

    I used this guide to try to reinstall Windows: http://www.pcworld.com/article/248995/how_to_install_windows_7_without_the_disc.html. What happened was that originally it said the partition I was trying to reinstall to had been locked down by the OEM (Lenovo). So I went into GParted, wiped EVERYTHING, and selected 'Construct new Boot record' (or whatever that function is called), and now the error is:

        "Setup was unable to create a new system partition or locate an existing system partition. See the setup log files for more information."

    Does anyone know how to see the log files? Can anyone help? This system is a month old, but the warranty only covers hardware failures, and I would need to pay around USD $60 for them to fix it. Please help -- any ideas? This is my main machine...

    Extra information -- I have at my disposal:
    - a system repair disc (burned myself)
    - a Windows 7 Home Premium 64-bit SP1 installation disc (burned from the PCWorld links)
    - a GParted live CD
    - a Linux Mint 13 live CD
    - a system backup (from the morning before this catastrophe) made using Windows Backup and Restore. I put it on an external drive... that should be safe for now.
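    In case it helps: when I boot the Windows 7 install disc I can get a command prompt with Shift+F10, and my understanding (please correct me) is that wiping the disk and re-creating a system partition from there would go roughly like this -- disk 0 being an assumption I'd verify with list disk first:

        diskpart
        list disk
        select disk 0
        clean
        create partition primary
        select partition 1
        active
        format fs=ntfs quick
        exit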

    Read the article

  • Windows 7 fails to install on KVM with qemu

    - by kief_morris
    I'm trying to install Windows 7 on a virtual machine on my 64-bit Ubuntu Karmic box. I get to the point of selecting my language settings and clicking 'Install now', but a short while later I get a blue screen of death. I've tried a few variations, including using the 32-bit version (which fails very quickly). The virt-install command I've tried looks like this:

        sudo virt-install --connect qemu:///system -n ksm-win7 -r 2048 \
            --disk path=/home/kief/VM-Images/ksm-win7.qcow2,size=50 \
            -c /var/Software/Windows7/Full/64bit/SW_DVD5_SA_Win_Ent_7_64BIT_English_Full_MLF_X15-70749.ISO \
            --vnc --os-type windows --os-variant vista --hvm

    The limited info I could find suggested that 'vista' should work as the --os-variant; I haven't found any values specific to Windows 7. Here's my blue screen: [screenshot not shown]. I've found very little by Googling, so I'm guessing this isn't a case of KVM simply not supporting Windows 7. Thanks for any help.

    Update: I have been able to successfully create a Windows 7 VM using the graphical "Virtual Machine Manager" app, although I don't really understand the cause of the problem with the VM created by virt-install. Comparing the configuration files under /etc/libvirt/qemu provides some clues, although I don't know enough to interpret them properly. The interesting differences between the two VM configurations are:

        --- win7-virt-install.xml
        +++ win7-vmm.xml
        -<domain type='qemu'>
        +<domain type='kvm'>
        @@ -21 +21 @@
        -    <emulator>/usr/bin/qemu-system-x86_64</emulator>
        +    <emulator>/usr/bin/kvm</emulator>
        @@ -23 +23 @@
        -      <source file='/home/kief/VM-Images/ksm-win7.qcow2'/>
        +      <source file='/var/lib/libvirt/images/ksm-win7x64.img'/>

    I'm not sure if this means the working VM is not using qemu at all, or if there is some other difference in the way it's used with KVM.

    Update 2: So I've (mostly) answered my own question below. A KVM VM needs to use KVM's own CPU virtualization rather than qemu's CPU emulation in order for me to get Windows 7 installed. I'm not sure whether there is something that can be done to get it working on a qemu-emulated CPU, or whether a newer version will support it. But at least it is possible to get it running in a KVM VM.
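    For anyone else who lands here: given what I found in the configs, I suspect my original command just needed KVM acceleration requested explicitly. This is my best guess at the working equivalent (the --accelerate flag being the operative change -- I haven't re-verified it end to end):

        sudo virt-install --connect qemu:///system -n ksm-win7 -r 2048 \
            --disk path=/home/kief/VM-Images/ksm-win7.qcow2,size=50 \
            -c /var/Software/Windows7/Full/64bit/SW_DVD5_SA_Win_Ent_7_64BIT_English_Full_MLF_X15-70749.ISO \
            --accelerate --vnc --os-type windows --os-variant vista --hvm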

    Read the article

  • Intermittent 404 on select assets, LAMP stack

    - by Tom Lagier
    We have a LAMP-stack WordPress server that is serving most assets correctly. However, one plugin's CSS file and several images are returning soft 404s roughly 20% of the time. I can't find any reference to the 404s in the access logs, but the browser is definitely receiving a 404 response from somewhere (WordPress, I would assume). When I use an alias URL that does not match the site URL but does resolve to the asset path, the resource loads correctly 100% of the time. Using the site URL, the problematic assets only resolve about 20% of the time. You can test one of the problematic assets here:

        http://www.mreco.org/wp-content/uploads/2014/05/zero-cost.jpg

    However, the alias link always resolves correctly:

        http://mr-eco.wordpress.promocampaigns.com/wp-content/uploads/2014/05/zero-cost.jpg

    Stranger still, if I attempt to access outdated content that definitely does not exist on the server, the live URL returns the content roughly 50% of the time. Using the alias link, it 404s 100% of the time -- the correct behavior. The error log and PHP error log are clean. A sample access log (pulled with grep 'zero-cost.jpg' /var/log/httpd/mr-eco-access_log) from several refreshes of the live direct link (where I am not seeing any 404s):

        10.166.202.202 - - [28/May/2014:20:27:41 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 304 -
        10.166.202.202 - - [28/May/2014:20:27:42 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 304 -
        10.166.202.202 - - [28/May/2014:20:27:43 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 304 -
        10.166.202.202 - - [28/May/2014:20:27:43 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 304 -
        10.176.201.37 - - [28/May/2014:20:27:56 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 200 57027

    Chrome's dev tools list the following network activity before displaying the 404 page content:

        zero-cost.jpg  /wp-content/uploads/2014/05  GET  404 Not Found  text/html  Other  15.9 KB  73.2 KB  953 ms  947 ms

    My Apache configuration is standard; I've listed the virtual host entry and .htaccess file below, and can provide other parts of the Apache config if necessary.

    Virtual host:

        <VirtualHost *:80>
            DocumentRoot /var/www/public_html/mr-eco.wordpress.promocampaigns.com
            ServerName www.mreco.org
            ServerAlias mreco.org mr-eco.wordpress.promocampaigns.com
            ErrorLog logs/mr-eco-error_log
            CustomLog logs/mr-eco-access_log common
            <Directory /var/www/public_html/mr-eco.wordpress.promocampaigns.com>
                AllowOverride All
                SetOutputFilter DEFLATE
            </Directory>
        </VirtualHost>

    .htaccess:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress

    I have checked for multiple A records and can confirm that there is a single A record pointing at the domain:

        ;; ANSWER SECTION:
        mreco.org.      60      IN      A       50.18.58.174

    I'm fairly new to systems administration, and at a complete loss as to what could cause this. In the past, inconsistently 404ing assets have been caused by out-of-sync instances behind a load balancer; in this case, it is a single instance behind the load balancer. Because of the inconsistency, it feels like a caching issue. We don't make use of Apache caching, and as far as I know WordPress should not be caching either.

    What I've done so far:
    - Reset WordPress permalinks
    - Disabled WordPress plugins
    - Re-generated the WordPress .htaccess file
    - Swapped the ServerName and ServerAlias directives
    - Cleared the browser cache
    - Confirmed the disk location of the resources
    - Checked the PHP, access, and error logs
    - Confirmed correct DNS setup (can post if necessary)

    I'm at a total loss. Thanks for helping me out!
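    If it helps anyone reproduce this, a quick way to put a number on the failure rate is a loop like this, tallying the status codes (50 requests is arbitrary):

        for i in $(seq 1 50); do
          curl -s -o /dev/null -w "%{http_code}\n" \
            http://www.mreco.org/wp-content/uploads/2014/05/zero-cost.jpg
        done | sort | uniq -c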

    Read the article

  • VMWare tools not installing with an error

    - by JDS
    VMware Tools is not installing on Ubuntu 12.04. I'm using Chef to manage the installation, but the apt commands fail if run manually. I'm using the VMware Tools Debian repo:

        $ cat /etc/apt/sources.list.d/vmware-tools-source.list
        deb http://packages.vmware.com/tools/esx/5.0u2/ubuntu precise main

    When trying to install, most packages seem to go in OK, but one, "vmware-tools-foundation", does not. Example:

        $ apt-get -q -y install vmware-tools-esx-nox=8.6.10-1.precise
        Reading package lists...
        Building dependency tree...
        Reading state information...
        You might want to run 'apt-get -f install' to correct these:
        The following packages have unmet dependencies:
         vmware-tools-esx-kmods-3.2.0-23-generic : Depends: vmware-tools-foundation (>= 8.6.10) but it is not going to be installed
         vmware-tools-esx-nox : Depends: ...snip list of deps...
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

        $ apt-get -f install
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following extra packages will be installed:
          vmware-tools-foundation
        The following NEW packages will be installed:
          vmware-tools-foundation
        0 upgraded, 1 newly installed, 0 to remove and 118 not upgraded.
        7 not fully installed or removed.
        Need to get 0 B/5,886 B of archives.
        After this operation, 86.0 kB of additional disk space will be used.
        Do you want to continue [Y/n]? y
        (Reading database ... 103499 files and directories currently installed.)
        Unpacking vmware-tools-foundation (from .../vmware-tools-foundation_8.6.10-1.precise_all.deb) ...
        VMware Tools cannot install because it appears that another installation of VMware Tools is already present. Please remove the previous installation and then attempt to install this copy of VMware Tools again.
        dpkg: error processing /var/cache/apt/archives/vmware-tools-foundation_8.6.10-1.precise_all.deb (--unpack):
         subprocess new pre-installation script returned error exit status 1
        Errors were encountered while processing:
         /var/cache/apt/archives/vmware-tools-foundation_8.6.10-1.precise_all.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    The key seems to be this error: "VMware Tools cannot install because it appears that another installation of VMware Tools is already present. Please remove the previous installation and then attempt to install this copy of VMware Tools again." However, I've tried removing and purging, and I can't seem to "trick" VMware Tools into thinking the packages are gone. Apt thinks they are gone. Is there some service/file/cache/lock left behind that VMware Tools sees, which makes it think VMware Tools is still installed? I've googled and googled, but there is no answer to this question with my particular circumstances on the interwebs. VMware's documentation of this error is minimal.
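    My next experiment, on the guess that the preinstall check is finding leftovers from an old tarball-based Tools install (something Apt would know nothing about), is along these lines:

        # run the tarball installer's uninstaller, if it was left behind
        sudo /usr/bin/vmware-uninstall-tools.pl
        # failing that, clear the legacy locations I believe the check looks at
        sudo rm -rf /etc/vmware-tools /usr/lib/vmware-tools
        # then retry the dependency fix-up
        sudo apt-get -f install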

    Read the article

  • smartctl -t long isn't finishing

    - by xenoterracide
    I've been running smartctl -t long on a drive for about 2 days now, and it seems to be stalled at 10% remaining. The short and conveyance tests both passed. I have to send one of the two drives I purchased back, because I found bad blocks on it with badblocks (there are none on this drive, and it's made over a pass already). I'm just wondering if I should be concerned about this.

        smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build)
        Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

        === START OF INFORMATION SECTION ===
        Device Model:     WDC WD10EARS-00Y5B1
        Serial Number:    WD-WMAV51582123
        Firmware Version: 80.00A80
        User Capacity:    1,000,204,886,016 bytes
        Device is:        Not in smartctl database [for details use: -P showall]
        ATA Version is:   8
        ATA Standard is:  Exact ATA specification draft version not indicated
        Local Time is:    Mon May 10 22:19:52 2010 EDT
        SMART support is: Available - device has SMART capability.
        SMART support is: Enabled

        === START OF READ SMART DATA SECTION ===
        SMART overall-health self-assessment test result: PASSED

        General SMART Values:
        Offline data collection status:  (0x82) Offline data collection activity
                                                was completed without error.
                                                Auto Offline Data Collection: Enabled.
        Self-test execution status:      ( 241) Self-test routine in progress...
                                                10% of test remaining.
        Total time to complete Offline
        data collection:                 (20100) seconds.
        Offline data collection
        capabilities:                    (0x7b) SMART execute Offline immediate.
                                                Auto Offline data collection on/off support.
                                                Suspend Offline collection upon new command.
                                                Offline surface scan supported.
                                                Self-test supported.
                                                Conveyance Self-test supported.
                                                Selective Self-test supported.
        SMART capabilities:            (0x0003) Saves SMART data before entering
                                                power-saving mode.
                                                Supports SMART auto save timer.
        Error logging capability:        (0x01) Error logging supported.
                                                General Purpose Logging supported.
        Short self-test routine
        recommended polling time:        (   2) minutes.
        Extended self-test routine
        recommended polling time:        ( 231) minutes.
        Conveyance self-test routine
        recommended polling time:        (   5) minutes.
        SCT capabilities:              (0x3031) SCT Status supported.
                                                SCT Feature Control supported.
                                                SCT Data Table supported.

        SMART Attributes Data Structure revision number: 16
        Vendor Specific SMART Attributes with Thresholds:
        ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
          1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       2
          3 Spin_Up_Time            0x0027   131   131   021    Pre-fail  Always       -       6408
          4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       12
          5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
          7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0
          9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       148
         10 Spin_Retry_Count        0x0032   100   253   000    Old_age   Always       -       0
         11 Calibration_Retry_Count 0x0032   100   253   000    Old_age   Always       -       0
         12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       10
        192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       7
        193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       174
        194 Temperature_Celsius     0x0022   106   102   000    Old_age   Always       -       41
        196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
        197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0
        198 Offline_Uncorrectable   0x0030   200   200   000    Old_age   Offline      -       0
        199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
        200 Multi_Zone_Error_Rate   0x0008   200   200   000    Old_age   Offline      -       0

        SMART Error Log Version: 1
        No Errors Logged

        SMART Self-test log structure revision number 1
        Num  Test_Description    Status                    Remaining  LifeTime(hours)  LBA_of_first_error
        # 1  Conveyance offline  Completed without error         00%        99         -
        # 2  Extended offline    Interrupted (host reset)        10%        30         -
        # 3  Short offline       Completed without error         00%         0         -

        SMART Selective self-test log data structure revision number 1
         SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
            1        0        0  Not_testing
            2        0        0  Not_testing
            3        0        0  Not_testing
            4        0        0  Not_testing
            5        0        0  Not_testing
        Selective self-test flags (0x0):
          After scanning selected spans, do NOT read-scan remainder of disk.
        If Selective self-test is pending on power-up, resume after 0 minute delay.
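    If the consensus is that the test is truly wedged, my plan is to abort it, kick off a fresh one, and watch the self-test log (substitute the real device for /dev/sdX):

        sudo smartctl -X /dev/sdX          # abort the in-progress self-test
        sudo smartctl -t long /dev/sdX     # start a fresh extended test
        sudo smartctl -l selftest /dev/sdX # check progress/results in the log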

    Read the article

  • Toshiba Satellite L630 broken after bios update

    - by Mustafa Kamal
    I have a Toshiba Satellite L630 that is broken. It had no OS installed any more; all the disk partitions had been cleared into one single empty, unformatted partition. So I began to install Windows XP on this laptop. Apparently, Win XP's driver support for this laptop is very limited, so I had to find almost all the important drivers (display, sound, Ethernet, wireless, etc.) on the net and install them manually, one by one. It was pretty hard to find the exact drivers, but between several Toshiba driver download sites (the global version, the European one, the Asian one, etc.) I managed to find pretty good ones. It all worked quite well, aside from a few glitches.

    But everything turned into a big mess when I downloaded the "BIOS Update", which is also listed in Toshiba's official driver directory. When I installed it, it showed a big red warning sign telling me not to do anything while the BIOS was being flashed. I followed that instruction prudently. The process finished, and the BIOS update software (it is an InsydeH2O BIOS) told me that the BIOS had been successfully updated and the computer needed to restart. So I restarted the computer.

    This is where the problem appeared. I can no longer boot the laptop. The boot process seems to be able to enter Windows for a moment (it shows the Windows XP loading screen), and then it suddenly hits that hateful blue screen and instantly restarts the machine. It goes on in a loop: boot BIOS - enter XP - blue screen - restart. I can't even try to reinstall Win XP: every time the machine tries to boot from the Win XP CD, it gets the same blue screen as when loading from the HDD.

    Many Google search results said I should open the laptop cover and try to clear the CMOS with some kind of jumper, or unplug and re-plug the CMOS battery. Do I really need to do that? Is there anything I can do without disassembling my laptop? I read some tricks about booting from a USB device, but I can't get the exact tools I'd need to do that... By the way, my detailed laptop model number is photographed from the back of the laptop [photo not shown].

    Read the article

  • Enterprise Tape Backup solutions

    - by Tom O'Connor
    I'm currently attempting to re-architect a backup solution where I work. We've got two NAS devices: one in the office, one in the datacentre. The servers in the DC back up to the DC NAS, which is then replicated to the office NAS. The office NAS exports shares as CIFS and NFS; this bit is fine. At some point I'll have to expand our storage capacity; currently we've got about 1.4 TB of storage space, which is about 96% full.

    Previously, the tape backup was a script that ran tar a few times and squirted data onto a tape. It worked, but was by no means a perfect solution: restores are a bit of a pest, and adding new data to the backup requires editing the script as root. It's all a bit non-ideal.

    I've been evaluating a number of "enterprise-ready" backup solutions, such as Yosemite Backup from Barracuda, Acronis Backup/Restore, and something from Arkeia. In the process of evaluating these, I've found two big problems:
    1. Not all of them allow backup of mounted devices (such as an NFS-mounted NAS).
    2. Many of these applications don't like our tape device.

    For the most part, (1) is essential. Our NAS has a feeble processor and can't run applications like backup agents. I suspect that the biggest problem is the tape device, which is an HP C7438A DAT72 drive connected via USB.

    Questions:
    - Has anyone else got a USB DAT72 device working with similar software?
    - Is there a better way to back up data from an "appliance" NAS device on which you can't run an agent?
    - Would I be totally out of my mind to spec a cheap HP or Dell server with a couple of 1 TB hard disks and a SAS card to talk to an HP Ultrium (or similar) device? The biggest drawback to this would be cost (400-ish for the server, 200 for the SAS connectivity, and 1700 for an LTO-4 device).

    Notes: I'd love to be able to say I'd get rid of tapes entirely and use some form of hard-disk backup. In a previous job we had LaCie USB drives, which were decidedly unreliable.
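    For what it's worth, if I end up staying with home-grown tooling, the disk-staging idea I have in mind looks roughly like this (device and paths are assumptions, not our current setup):

        # stage from the NAS mount to local disk, then stream the lot to tape
        rsync -a /mnt/office-nas/ /backup/staging/
        mt -f /dev/st0 rewind
        tar -cvf /dev/st0 -C /backup/staging .
        mt -f /dev/st0 offline   # rewind and eject when done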

    Read the article

  • How to properly shutdown a Linux VMware Server Host

    - by Mikee
    Hi everybody: In our lab, we have a server running Ubuntu Linux 8.04.4 x64, with VMware Server 2.1 hosting 4 VMs. I have a major concern with regard to shutting down the host server: how do I ensure that the guest VMs are shut down safely?

    In the VMware web interface console, I have:
    - enabled "Allow virtual machines to start and stop automatically with the system";
    - enabled the default startup delay of 15 seconds, along with the "Start next VM immediately if the VMware Tools start" option;
    - enabled the default shutdown delay, with a 60-second shutdown delay and a shutdown action of "Shut Down Guest".

    All VMs have VMware Tools installed and working properly. All VMs are moved up into the "Specified Order" section of "Startup Order", so when the server is powered back on, the VMs should start up again in that specified order.

    When I went to shut down the server, I used the shutdown -h now command. Based on the settings above, I was expecting a roughly 4-minute shutdown, as there is an option to delay the shutdown of each VM by 60 seconds. However, that is not what happened. Instead, the server shut down in under a minute. When I powered the server back on, only 2 VMs loaded properly. The other 2 showed the following error:

        "Power on Virtual Machine" failed to complete
        If these problems persist, please contact your system administrator.
        Details: Cannot open the disk '[location to .vmdk]' or one of the snapshot disks it depends on.
        Reason: Failed to lock the file.

    Given this error, it is clear to me that the VMs were not shut down properly, or that the server powered off before the VMs had completely shut down. I have fixed the above error by deleting the .lck files in the respective VM directories. How would I know whether the VMs were properly shut down? I checked the VMware Server logs, but they only seem to record while the vmware-mgmt service is running in the current session. I'm mostly running Linux VMs, so is there an easy way to know whether or not a server was properly shut down in Linux? Thank you all for the help!
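    As a stopgap, I'm considering scripting the guest shutdown myself before the host goes down; as I understand the VMware Server command-line tools, that would be something like the sketch below (untested on my end, and it will mis-handle config paths containing spaces):

        #!/bin/sh
        # politely stop each registered guest, then halt the host
        for vm in $(vmware-cmd -l); do
            vmware-cmd "$vm" stop soft
        done
        shutdown -h now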

    Read the article

  • Mounting a drive in Ubuntu 9.10 (Karmic Koala)

    - by morpheous
    I have just installed Ubuntu on a machine that previously had XP installed on it. The machine has 2 HDDs (hard disk drives). I opted to install Ubuntu completely over XP. I am new to Linux, and I am still learning how to navigate the file structure. However (AFAICT), there is only one drive visible. I want to be able to store programs etc. on the first drive, and store data (program output etc.) on the second drive. It appears Ubuntu is not aware that I have 2 drives (on XP, these were drives C and D).

    How can I mount the second drive? Ideally, I want this to happen automatically on login, so that the drive is available whenever I log in, without manual intervention from me. In XP, I could refer to files on a specific drive by prefixing them with the drive letter (e.g. c:\foobar.cpp and d:\foobar.dat). I suspect the notation on Ubuntu is different. How can I specify files on different drives?

    Last but not least (a bit unrelated to the previous questions), this relates to directory structure again. I am a developer (C++ for desktops and PHP for websites), and I want to install the following apps/libraries:
    i). Apache 2.2
    ii). PHP 5.2.11
    iii). MySQL 5.1
    iv). SVN
    v). NetBeans
    vi). C++ development tools (gcc, gdb, emacs etc.)
    vii). The Qt toolkit
    viii). Some miscellaneous scientific software (e.g. www.r-project.org, www.gnu.org/software/octave/)

    I would be grateful if someone could recommend a directory layout for these applications. Regarding development, I would also be grateful if someone could point out where to store my project and source files, i.e.:
    (i) *.cpp, *.hpp, *.mak files for C++ projects
    (ii) individual websites

    On my XP machine the layout for C++ development was like this:
    c:\dev\devtools (common libs and headers etc.)
    c:\dev\workarea (root folder for projects)
    c:\dev\workarea\c++ (C++ projects)
    c:\dev\workarea\websites (web projects)

    I would like to create a similar folder structure on the Linux machine, but it's not clear whether to place these folders under /, /usr, /home, or somewhere else (there seems to be a baffling number of choices). I want to get it "right" the first time -- i.e. have a directory structure that most developers use, so it is easier when communicating with other Ubuntu/Linux developers.
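    From my reading so far, my guess at the recipe for the second drive is below -- the device name is an assumption until I check fdisk, and the filesystem will be whatever I format it as (or ntfs-3g if I keep the old XP data):

        sudo fdisk -l                  # identify the second disk, e.g. /dev/sdb1
        sudo mkdir -p /media/data
        sudo blkid /dev/sdb1           # note the UUID for fstab
        # add a line like this to /etc/fstab so it mounts at every boot:
        #   UUID=<uuid-from-blkid>  /media/data  ext3  defaults  0  2
        sudo mount -a                  # mount everything in fstab now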

    Read the article

  • Squid Proxy: url_regex acl is not working?

    - by bharathi
    I am using Squid proxy 3.1 on an Ubuntu machine. I want to allow only URLs matching our pattern through our proxy server. I configured the ACLs as below. The dstdomain ACL is working fine: if I access any URL outside .zmedia.com, I get "proxy connection refused". But the url_regex ACL is not working. What I am trying to do here is allow only requests to the ".zmedia.com" domain where the request URL is in the "/blog" context.

        #
        # Recommended minimum configuration:
        #
        acl manager proto cache_object
        acl localhost src 127.0.0.1/32 ::1
        acl to_localhost dst 127.0.0.0/8 ::1
        acl urlwhitelist url_regex -i ^http(s)://([a-zA-Z]+).zmedia.com/blog/.*$
        acl allowdomain dstdomain .zmedia.com
        acl Safe_ports port 80 8080 8500 7272
        # Example rule allowing access from your local networks.
        # Adapt to list your (internal) IP networks from where browsing
        # should be allowed
        acl SSL_ports port 443
        acl Safe_ports port 80          # http
        acl Safe_ports port 21          # ftp
        acl Safe_ports port 443         # https
        acl Safe_ports port 70          # gopher
        acl Safe_ports port 210         # wais
        acl Safe_ports port 1025-65535  # unregistered ports
        acl Safe_ports port 280         # http-mgmt
        acl Safe_ports port 488         # gss-http
        acl Safe_ports port 591         # filemaker
        acl Safe_ports port 777         # multiling http
        acl SSL_ports port 7272         # multiling http
        acl CONNECT method CONNECT
        #
        # Recommended minimum Access Permission configuration:
        #
        # Only allow cachemgr access from localhost
        http_access allow manager localhost
        http_access deny manager
        http_access deny !allowdomain
        http_access allow urlwhitelist
        http_access allow CONNECT SSL_ports
        http_access deny CONNECT !SSL_ports
        # Deny requests to certain unsafe ports
        http_access deny !Safe_ports
        # Deny CONNECT to other than secure SSL ports
        http_access deny CONNECT !SSL_ports
        # We strongly recommend the following be uncommented to protect innocent
        # web applications running on the proxy server who think the only
        # one who can access services on "localhost" is a local user
        #http_access deny to_localhost
        #
        # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
        #
        # Example rule allowing access from your local networks.
        # Adapt localnet in the ACL section to list your (internal) IP networks
        # from where browsing should be allowed
        http_access allow localhost
        # And finally deny all other access to this proxy
        http_access deny all
        # Squid normally listens to port 3128
        http_port 3128
        # We recommend you to use at least the following line.
        hierarchy_stoplist cgi-bin ?
        # Uncomment and adjust the following to add a disk cache directory.
        #cache_dir ufs /var/spool/squid 100 16 256
        # Leave coredumps in the first cache dir
        coredump_dir /var/spool/squid
        append_domain .zmedia.com
        # Add any of your own refresh_pattern entries above these.
        refresh_pattern ^ftp:           1440    20%     10080
        refresh_pattern ^gopher:        1440    0%      1440
        refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
        refresh_pattern .               0       20%     4320

    Please correct me if I did anything wrong.
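    One thing I noticed while re-reading my own config: in ^http(s)://, the (s) group is mandatory, so plain http:// URLs can never match that regex, and the unescaped dots match any character. If that's the bug, I believe the corrected line would be:

        acl urlwhitelist url_regex -i ^https?://([a-zA-Z]+)\.zmedia\.com/blog/.*$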

    Read the article

  • Are file access times not properly maintained in Mac OS X?

    - by Ether
    I'm trying to determine how file access times are maintained by default in Mac OS X, as I'm trying to diagnose some odd behaviour I'm seeing on a new MBP Unibody (running Snow Leopard, 10.6.2).

    The symptoms (drilling down to the specific behaviour that seems to be causing the issue):
    - mutt is unable to switch to mailboxes which have recently received new mail
    - mail is delivered by procmail, which updates the mtime of the mbox folder it is updating, but does not alter the atime (this is how new mail detection works: by comparing atime to mtime)
    - however, both the mtime and atime of the mbox file are getting updated

    Through testing, it does not appear that atimes can be set separately on the filesystem:

        : [ether@tequila ~]$; touch test
        : [ether@tequila ~]$; touch -m -t 200801010000 test2
        : [ether@tequila ~]$; touch -a -t 200801010000 test3
        : [ether@tequila ~]$; ls -l test*
        -rw------- 1 ether staff 0 Dec 30 11:42 test
        -rw------- 1 ether staff 0 Jan  1  2008 test2
        -rw------- 1 ether staff 0 Dec 30 11:43 test3
        : [ether@tequila ~]$; ls -lu test*
        -rw------- 1 ether staff 0 Dec 30 11:42 test
        -rw------- 1 ether staff 0 Dec 30 11:43 test2
        -rw------- 1 ether staff 0 Dec 30 11:43 test3

    The test2 file is created with an old mtime, and its atime is set to now (as it is a new file), which is correct. However, test3 is created with an old atime, which is not set properly on the file. To be sure this is not just behaviour seen with new files, let's modify an old file:

        : [ether@tequila ~]$; touch -a -t 200801010000 test
        : [ether@tequila ~]$; ls -l test
        -rw------- 1 ether staff 0 Dec 30 11:42 test
        : [ether@tequila ~]$; ls -lu test
        -rw------- 1 ether staff 0 Dec 30 11:45 test

    So it would seem that atimes cannot be set explicitly (the atime is always reset to "now" when either mtime or atime modifications are submitted). Is this something inherent to the filesystem itself, is it something that can be changed, or am I totally crazy and looking in the wrong place?

    PS. The output of mount is:

        : [ether@tequila ~]$; mount
        /dev/disk0s2 on / (hfs, local, journaled)
        devfs on /dev (devfs, local, nobrowse)
        map -hosts on /net (autofs, nosuid, automounted, nobrowse)
        map auto_home on /home (autofs, automounted, nobrowse)

    ...and Disk Utility says that the drive is of type "Mac OS Extended (Journaled)".
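    In case ls is rounding or otherwise lying to me, I also plan to re-run the test reading the timestamps directly with BSD stat (same files as above):

        : [ether@tequila ~]$; stat -f 'atime=%Sa mtime=%Sm %N' test*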

    Read the article

  • Windows: Impact of clean install Service Pack 2 to applications & data?

    - by Thomas Matthews
    My Windows Vista Home Premium system is corrupt and won't install Service Pack 2. I have followed all the advice from Microsoft and still have no luck. I would like to perform a clean install of Vista, then SP1, and then SP2. My concern is the effect of the clean install on the registry, my apps, and all my data.

    My plan:
    1. Download the Vista Service Pack 1 (SP1) ISO and write it to DVD.
    2. Download the Vista Service Pack 2 (SP2) ISO and write it to DVD.
    3. Back up all data, applications, and the registry to an external hard drive (file copy, not disk image).
    4. ?? Format the hard drive ?? (is this necessary?)
    5. Install Vista from DVDs/CDs.
    6. Install SP1 from DVD.
    7. Install SP2 from DVD.
    8. Restore the registry, applications, and data from the external hard drive.

    My questions:
    1. Is formatting the hard drive a necessary step?
    2. Will restoring the registry from the backup corrupt the system?
    3. Should I use Windows Backup or ZIP/RAR?
    4. Any gotchas I should look out for?

    Background: I am using Windows Vista Home Premium with SP1. The sfc program does not finish, due to a resources problem (even when run as administrator). I have 5 users on the machine. After a while, the screen goes black and shows an error message window about an error with login.scr. Standard accounts display a black screen and can't run any applications; administrative accounts have no problems (even standard accounts, when converted to administrative, have no problems). The CBS log contains a lot of 0x8000ffff and E_UNEXPECTED errors (which Microsoft defines as catastrophic failure). This is the reasoning behind performing a clean install up to Service Pack 2.
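    For step 3 of the plan, rather than a drag-and-drop file copy, I'm leaning towards robocopy so permissions and timestamps survive; a sketch of what I have in mind (E: being the external drive -- adjust to taste):

        robocopy C:\Users E:\VistaBackup\Users /E /COPYALL /R:1 /W:1 /LOG:E:\VistaBackup\users.log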

    Read the article

  • Debugging IO limitation

    - by Martin F
    I have a Fedora box with some severe IO limitations which I have no idea how to debug. The server has an Areca Technology Corp. ARC-1130 12-port PCI-X to SATA RAID controller with twelve 7200 RPM 1.5 TB disks, and a Marvell Technology Group Ltd. 88E8050 PCI-E ASF gigabit Ethernet controller.

    uname -a output:

        2.6.32.11-99.fc12.x86_64 #1 SMP Mon Apr 5 19:59:38 UTC 2010 x86_64 x86_64 x86_64 GNU/Linux

    The server is a file server running Nginx with the stub status module enabled, so I can see the current number of connections. The problem presents itself when I have a high number of simultaneous connections in a writing state. Usually around 350; at this very moment it's at 590, and the server is almost unusable and stuck at 230 Mbit/s.

    If I run top and hit 1 to see per-core CPU usage, all 4 cores are at around 99% IO wait. If I run iotop, the nginx workers are the only processes producing any read load, currently at around 25 MB/s. I have each of the workers bound to its own core. Initially I figured the disks were just bugged, but I've run fsck and smartmontools checks and found no errors. I also ran an iozone test, the results of which you can see here: http://www.pastie.org/951667.txt?key=fimcvljulnuqy2dcdxa

    Additionally, when the number of connections is low, I have no problem getting good speed: if I wget over the local network, it easily hits 60 MB/s. Right now I tried putting a file in /dev/shm, symlinked a file from the public dir to it, used wget over the local network, and only got 50 KB/s. Also, if I try to cp /dev/shm/test /root/test, it quickly copies around 740 MB and then slows down HEAVILY, with iotop again reporting 99% IO wait.

    I'm not really sure how to go about figuring out what the problem is. It could be a natural disk limitation, but then the file served from /dev/shm ought to transfer quickly, so it seems there's a network limit -- yet that's fine when there aren't many connections. Perhaps it's a TCP stack problem, but I really have no idea how to check that. Any suggestions on how to proceed with debugging would be very welcome. If additional information is required, let me know and I'll try to get it. Thanks.
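    For the next incident window, my plan is to capture per-device latency and utilization alongside the per-process view, something like:

        # per-device stats in 5-second samples (sysstat package); watch %util and await
        iostat -xk 5
        # in another terminal, per-process I/O, showing only active processes
        iotop -o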

    Read the article

  • Backing Up vs. Redundancy

    - by TK Kocheran
    I'm currently in stage 2 of 3 of building my home workstation. What this means is that my RAID-0 array of solid-state disks will be backed up nightly to a RAID-5 or RAID-6 array of traditional spinning hard disks. However, it recently dawned on me that redundancy is not backup. The main reason for setting up a RAID array with redundancy was to protect myself in the event of a drive failure -- to serve as an effective backup solution.

    Wait. What if a bolt of lightning finds a way to travel into my house, through my surge protector, into my power supply, and physically destroys all of my hard disks and SSDs? Well, in that case, I guess I'd be fine, because I generally keep the most important files (music, pictures, videos) stored in multiple places, like on my laptop, my wife's laptop, and an encrypted USB hard drive. Wait. What if a giant hedgehog meteor attacks my house from space traveling at Mach 3, and all machines and hard disks are blown to smithereens? Well, I guess I could find a way to do ridiculously slow and cumbersome rsyncs or backups to Amazon's Glacier. Wait. What if there's a nuclear apocalypse... and at this point I start laughing hysterically.

    At what point does backing up become irrelevant? I completely understand situation one (mechanical drive failure), situation two (workstation compromised or destroyed somehow), possibly even situation three (all machines and disks destroyed), but situation four? There's no questioning the need for backups. None. However, there are three questions I'd really like addressed. To what level should one back up? I definitely understand the merits of physical disk redundancy, and I also believe in keeping important files on multiple machines to thin out the possibility of losing all of my files. Online backups make sense, but they beg the following question: what should I be backing up remotely, and how often? It's no problem storage-wise to back up important files (music, pictures, videos) and even configuration and temporal data for all of the machines in my network (all Linux-based)... albeit locally. Transferring to the cloud is another story. Worst-case scenario, if I lost all of the configuration for my individual computers, the reality is that I probably lost the machines too. The cloud is a long way away from here; I can run backups over Cat 6 locally and see 100 MB/s easily, but I'm afraid I'm only going to see 2 MB/s at best when transferring up to the cloud.
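    For the remote piece, the kind of nightly push I have in mind is hard-linked incrementals over SSH -- the host name and directory layout below are invented for the sake of the example:

        # yesterday's tree seeds today's via hard links, so unchanged files cost nothing
        rsync -az --delete \
          --link-dest=/backups/workstation/latest \
          /data/ backup@offsite.example.com:/backups/workstation/2012-10-18/
        # afterwards, repoint 'latest' at the new tree on the remote end:
        #   ln -sfn /backups/workstation/2012-10-18 /backups/workstation/latest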

    Read the article

  • Further Performance Tuning on Medium SharePoint Farm?

    - by elorg
    I figured I would post this here, since it may relate more to the server configuration than to the SharePoint configuration, or a combination of both. I'm open to ideas to try, or even feedback on things that may have been configured incorrectly as far as performance is concerned.

    We have a medium MOSS 2007 farm prepped and ready to receive the WSS 2003 data for the upgrade. The environment was originally architected by a previous coworker, and I have since added a few configuration modifications to assist with performance before we finally performed the install. When testing the new site collections and the SharePoint install (no actual data yet), things seemed a bit slow. I had assumed that was because I was accessing it remotely. Apparently the client is still experiencing this, and it is unacceptably slow.

    The farm:
    - 1 SQL Server machine running SQL Server 2008
    - 2 SharePoint WFEs - hosting queries (no index)
    - 1 SharePoint index server - hosting the index (no queries)
    - MOSS 2007 installed and patched up through December '09 on the WFEs and the index server

    All 4 servers are VMs, should have more than sufficient disk space and RAM (I don't recall the exact figures at the moment), and are running Windows Server 2008 -- everything is 64-bit. The WFEs have Windows NLB configured, with a DNS name and IP for the NLB cluster, and a single NIC on each server (virtual, since this is VMware). The index server is configured as a WFE (outside of the NLB cluster) so that it can index itself and replicate the indexes to the WFEs that will serve the queries.

    Everything is configured and working properly -- it just takes a minute or two to load a page on the local LAN. The client is still using their old portal (we haven't started the migration/upgrade just yet), so there's virtually no data and there are no users. We need to either further tune the configuration, or fix anything that may have been configured incorrectly and is causing this slowness. I've already reviewed and taken into account everything relevant that I could find before we even started the install. Does anyone have ideas or pointers? Perhaps there's something that I've missed?
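    One knob I haven't yet touched is the disk-based BLOB cache on the WFEs, which, as I understand the MOSS 2007 docs, is a one-line change in each web application's web.config (the path and size here are placeholders):

        <BlobCache location="D:\BlobCache" path="\.(gif|jpg|png|css|js)$"
                   maxSize="10" enabled="true" />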

    Read the article

  • Did a recent WinXP update break CD/DVD read speeds? SP2/SP3

    - by quack quixote
    I have two systems with fresh installations of Windows XP Pro SP3 (SP3 slipstreamed into the installer; fully updated after install). One's a refurbished 2.4 GHz Pentium 4 system; the other is a new 1.6 GHz Atom 330 build. Both have brand-new dual-layer CD/DVD burners (one a LiteOn IDE, the other an LG SATA). Both take a really looooong time to read a single-layer DVD in Windows with Cygwin tools. Specifically, 40 minutes or more.

    I burn backup data to single-layer DVD+/-R and use MD5 hashes for data verification (made with the standard md5sum tool in Unix or Cygwin). The hashes are burned to disc with the data files, and I use this command to verify:

        $ cd /path/to/disc/mountpoint ; time md5sum -c < md5.txt

    Here's how long that takes to run on a full single-layer DVD+/-R disc:
    - Old system (WinXP SP2, 1.8 GHz Athlon 2500+, last summer): ~10 minutes
    - Old system (Ubuntu 9.04, 1.8 GHz Athlon 2500+): ~10 minutes
    - Old system (Debian 5, dual 550 MHz P3): ~10 minutes
    - New Pentium 4 system (running Ubuntu 9.04): ~5 minutes
    - New Pentium 4 system (running WinXP SP3, file copy from Windows Explorer): ~6 minutes
    - New Atom 330 system (running WinXP SP3, file copy from Windows Explorer): ~6 minutes

    Now the weird stuff:
    - Old system (WinXP SP2, 1.8 GHz Athlon 2500+, today): ~25 minutes
    - New Pentium 4 system (running WinXP SP3, read from Cygwin): ~40-50 minutes (?!!)
    - New Atom 330 system (running WinXP SP3, read from Cygwin): ~40 minutes (it can do it in ~30 minutes... if I have another program spin up the drive first)

    Since both systems will copy files in 6 minutes using Windows Explorer, I know it's not a hardware problem. Windows just never spins up the drive during the Cygwin read, so it stays super-slow the whole time. Other programs, like EAC and DVD Decrypter, seem to spin up the disc just fine during their processing. DMA is enabled on both systems (I can confirm this in Windows' Device Manager on the Atom 330, but not on the P4). Nero's DriveSpeed tool doesn't seem to have any effect. Copy times are comparable from the command line with Windows' xcopy. Copying with Cygwin's cp looks more like the problem state -- it spins up the drive for a short time, never reaches full speed, and lets it spin back down again for most of the copy.

    What I need is to get full read speeds from Cygwin. Is this a known issue with SP3 or some other recent Windows update? Any other ideas?

    Update: More testing. Windows will spin up the drive when data is copied with Windows tools, but not when it is read in place or copied with Cygwin tools. It doesn't make sense to me that Windows spins up the drive for copying but not for other reads. Might this be more of a Cygwin problem?

    Update 2: GUI activity is sluggish during the problem state -- during the Cygwin verifies, there's a slight but noticeable delay when dragging windows or icons around on the desktop, switching windows, Alt-Tabbing through open applications, opening new windows, etc. It reminds me of the delay when opening a Windows Explorer window on My Computer just after inserting a DVD. I've tried updating Cygwin (from 1.5.x to 1.7.x), but there's no change in the problem behavior. I've also noticed this issue occurs on WinXP SP2, but it's not exactly the same -- some spin-up occurs, so the read happens in ~25-30 minutes instead of 40+. The SP2 system used to run the verifies in ~10 minutes, and when it first changed (not sure exactly when; maybe late November or early December 2009), I thought it was dying hardware. This is why I suspect an official update of breaking this functionality; it had worked for years on that SP2 box.
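    One workaround I plan to test is keeping the drive spun up from Cygwin itself, by streaming the raw device in the background while the verify runs -- this assumes Cygwin 1.7's POSIX device names, and I haven't confirmed it works:

        $ cat /dev/sr0 > /dev/null &
        $ cd /cygdrive/d ; time md5sum -c < md5.txt
        $ kill %1   # stop the background read when done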

    Read the article

  • Configuring gmail for use on mailing lists

    - by reemrevnivek
    This is really two questions in one. First, are netiquette guidelines still accurate in their restrictions on ASCII vs. HTML, posting style, and line length? (Here's a recent MetaFilter discussion of the topic.) Second, if they are not, should these guidelines be respected? If they are (or if they should still be respected), how can modern mail programs be configured to work properly with them?

    Most mailing list etiquette statements appear to have been written by sysadmins who loved their command lines and refuse to change anything. Many still reference RFC 1855, written in 1995. Just reading that paginated TXT should give you an idea of the climate at the time. Here's a short, fairly random list of mailing list etiquette statements with some extracted formatting guidelines:
    - Mozilla - HTML discouraged, interleaved posting.
    - FreeBSD - No HTML, don't top-post, line length at 75 characters.
    - Fedora - No HTML, bottom-post.

    You get the idea; you've all seen etiquette statements before. So, assuming that the rules should be obeyed (usually a good idea), what can be done to allow me to still use a modern mail program and exchange mail with friends who use the same programs? We like to format our mail: bold headings, code snippets (sometimes syntax-highlighted, if the copy-paste pulls RTF text, as from Xcode and Eclipse), free line breaks determined by your browser width, and the (very) occasional image make a message easier to read. Threaded conversations are a wonderful thing. Broadband connections are, I'm sure, the rule for most users of SU and of developer mailing lists, disk space is cheap, and so the overhead of HTML is laughable. However, I don't want to post a question to a mailing list and have the guru who can answer it automatically delete it, or come off as uncaring. Until I hear otherwise, I'll continue to respect the rules as best I can.

    For a common example of the problem: Gmail, by default, sends HTML-formatted messages with bottom-posted quotes (which are folded in; just read the last message immediately above) and uses the frame width to wrap lines, rather than a character count. ASCII can be selected, and quotes can be moved and reversed, but line wraps of quotes don't work, and line breaks are tedious to add (and more tedious to read, if they're super small in comparison to the width of the frame). Is there a forward-thinking, free mail program which can help with this exercise? Should an "RFC 1855 mode" lab be written? Or do I have to go to the command line for my mailing lists and use Gmail for my other mail?

    Read the article

  • File Not Found error launching Guest OS in a non-administrator user account

    - by ToreTrygg
    Hi, I am running Fusion 2.0.6 (196839) on an iMac running 10.6.2, with 3 user accounts (1 administrator). I have Fusion set up to share the guest OS, and it's been working splendidly for nearly a year. Within the guest OS (Windows XP Pro), there are also 3 user accounts (1 administrator).

    Last night I went to back up my VM to an external drive, and to minimize the file size and transfer time, I deleted all snapshots but the most recent one. I then backed up the VM externally (28.23 GB). Today, one of my users tried to launch the guest OS from within her user account, and received the following error message:

        File Not Found: Windows XP Pro-000006.vmdk
        This file is required to power on this virtual machine. If this file was moved, please provide its new location.

    My two choices are Cancel and Browse. When I browse, I can locate the Windows XP Pro-000006.vmdk file, which is contained within the VM bundle (Windows XP Pro.vmwarevm). However, it still won't launch from a non-admin user account. If I view the package contents of the VM bundle from the user account, the above file is present, and it appears to be created upon each launch of the guest OS. If I go back to my administrator account on the Mac and launch Fusion, the guest OS works perfectly for all 3 user accounts within XP Pro.

    I have tried deleting the guest OS from Fusion's library within the problem user account and then re-adding it to the library, but the result is the same. The guest OS data integrity is 100% -- but it is accessible only from the OS X administrator account. This problem only surfaced after deleting several older snapshots. Again, the data is there, and the guest OS powers up normally in the Mac's administrator account, but it persistently returns the above error when powering on from a non-admin account on the Mac. I'm not sure how this relates to the error, but when I look at Hard Disk Settings, the "unable-to-locate" file is the filename of the virtual HD. I don't want to make any changes to my (working) VM without advice from the knowledgeable people on this forum. Any help will be greatly appreciated, thanks!
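    My working theory is that the files rewritten by the snapshot deletion were created with permissions only my admin account can use, so before touching the VM itself I'll inspect (and, if need be, loosen) them -- the path below is a guess at where the bundle lives:

        ls -le "/Users/Shared/Windows XP Pro.vmwarevm"
        # if the group/other bits look wrong, open them up for the local users:
        sudo chmod -R ug+rw "/Users/Shared/Windows XP Pro.vmwarevm"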

    Read the article

  • Splunk is fantastically expensive: What are the alternatives? [closed]

    - by samsmith
    Possible Duplicate: Alternatives to Splunk? This has been discussed, but it has been several months, so it may be time to revisit it: earlier discussion re: Splunk alternatives.

    For the record, Splunk rocks. But the pricing is simply beyond what we can consider (when I spoke with Splunk today, the cost for a system to index 5 GB/day of data was over $30,000). That is more than we spend on SQL Server (by a large multiple), more than we spend on a rack of servers (by a multiple), etc. The Splunk sales team is correct that for $30K we get more value and functionality than if we spent the same building our own system, but it doesn't matter: the Splunk cost is simply too high (by a multiple). Soooooo, we are looking around! Is anyone out there building a Splunk-like system? Our basic needs:

    - Able to listen for syslog messages on multiple UDP ports
    - Able to index the incoming data asynchronously
    - Some kind of search engine
    - Some kind of UI
    - An API to the search engine (to embed in our console)

    We currently need to index 3-5 GB/day, but need to be able to scale to 10 GB/day or more. We do not need a lot of history (30 days is fine). We use Windows 2008 and 2003 servers. Thanks for your thoughts!

    UPDATE: We spent two weeks researching commercial and open source options. Our conclusion: write our own (we are a software company... we know how to write things). We built a system on MongoDB and .NET that gave us the functionality we needed in about one engineering week, and we have now completed our implementation (a sketch of the core idea follows below). We use two MongoDB servers (master and slave) and can log and index any amount of log data (5 GB/day, 15 GB/day, etc.), limited only by disk space.

    OBSERVATIONS: This space needs a solid solution at a $1,000-3,000 flat rate. The licensing models used by the commercial firms are based on a "milk the data center ops guys" model. That is their right (of course!), but it leaves a HUGE space open for someone to come in underneath them. My guess is that in another year or two there will be a good open source solution that is really usable. Thank you all for your input (even if it was self-promotion).
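    For illustration only, here is a minimal sketch of the pipeline the poster describes -- UDP syslog listeners feeding a text-indexed MongoDB collection -- written in Python rather than their actual .NET code. The address, port, database, and field names are all assumptions:

        import socketserver
        from datetime import datetime, timezone
        from pymongo import MongoClient, TEXT

        client = MongoClient("mongodb://localhost:27017")  # assumed address
        logs = client.logging.syslog
        # A text index gives basic search; a TTL index bounds history to ~30 days.
        logs.create_index([("message", TEXT)])
        logs.create_index("received_at", expireAfterSeconds=30 * 24 * 3600)

        class SyslogHandler(socketserver.BaseRequestHandler):
            def handle(self):
                # For UDP servers, self.request is (datagram, socket).
                data, _sock = self.request
                logs.insert_one({
                    "received_at": datetime.now(timezone.utc),
                    "source": self.client_address[0],
                    "message": data.decode("utf-8", errors="replace"),
                })

        if __name__ == "__main__":
            # One listener per port; run several for multiple UDP ports.
            # (Binding the standard syslog port 514 needs elevated privileges.)
            with socketserver.UDPServer(("0.0.0.0", 514), SyslogHandler) as server:
                server.serve_forever()

    Search then reduces to a text-index query, e.g. logs.find({"$text": {"$search": "error"}}), which is the piece a thin UI and API would wrap.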

    Read the article

  • Fast user switching suddenly stopped working on my Windows XP Prof machine

    - by John
    My wife and I have been using fast user switching on our computer for years with no problem. Beginning a few months ago, after a Microsoft update ran overnight (I had left the computer on), I started Win XP Prof (SP2) and got to the welcome screen with no user names displayed. I now have to press Ctrl+Alt+Del twice and type in a username ("owner") to log in to Windows. When I go to User Accounts in Control Panel I get the error message "cells.item(...) is null or not an object". When I go to Computer Management and then Local Users and Groups, no users are listed under Users, but the Groups are listed.

    I did a Windows repair with no luck. I tried to use a restore point, but all my restore points failed and I could not back out the MS updates. I have also imported registry exports I had taken earlier, and I have tried everything under safe mode; it makes no difference. I was working with a fellow from Microsoft and he had me try all kinds of things, to no avail. He seems to think a DLL file is corrupt -- but which one? Finally, in desperation, he sent me a new XP Prof SP3 disc; I installed it and it wiped my hard drive. Luckily I had taken an Acronis image backup first, so I easily restored my system. I do not want to do a fresh Windows install, as my system is heavily customized and worked fine up to that point. This has been going on for months. Thanks, John

    Read the article

  • Varnish, hide port number

    - by George Reith
    My set up is as follows:

    OS: CentOS 6.2 running on an OpenVZ virtual machine.
    Web server: Nginx listening on port 8080
    Reverse proxy: Varnish listening on port 80

    The problem is that Varnish redirects my requests to port 8080, and this appears in the address bar like so: http://mysite.com:8080/directory/. Relative links on the site then include the port number (8080) in the request and thus bypass Varnish. The site is powered by WordPress. How do I allow Varnish to use Nginx as the backend on port 8080 without the port number being appended to the address?

    Edit: Varnish is set up like so. I have told the Varnish daemon to listen on port 80 by default:

        VARNISH_VCL_CONF=/etc/varnish/default.vcl
        #
        # Default address and port to bind to
        # Blank address means all IPv4 and IPv6 interfaces, otherwise specify
        # a host name, an IPv4 dotted quad, or an IPv6 address in brackets.
        VARNISH_LISTEN_ADDRESS=
        VARNISH_LISTEN_PORT=80
        #
        # Telnet admin interface listen address and port
        VARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1
        VARNISH_ADMIN_LISTEN_PORT=6082
        #
        # Shared secret file for admin interface
        VARNISH_SECRET_FILE=/etc/varnish/secret
        #
        # The minimum number of worker threads to start
        VARNISH_MIN_THREADS=1
        #
        # The Maximum number of worker threads to start
        VARNISH_MAX_THREADS=1000
        #
        # Idle timeout for worker threads
        VARNISH_THREAD_TIMEOUT=120
        #
        # Cache file location
        VARNISH_STORAGE_FILE=/var/lib/varnish/varnish_storage.bin
        #
        # Cache file size: in bytes, optionally using k / M / G / T suffix,
        # or in percentage of available disk space using the % suffix.
        VARNISH_STORAGE_SIZE=1G
        #
        # Backend storage specification
        VARNISH_STORAGE="file,${VARNISH_STORAGE_FILE},${VARNISH_STORAGE_SIZE}"
        #
        # Default TTL used when the backend does not specify one
        VARNISH_TTL=120

    The VCL file that Varnish calls (through an include in default.vcl) consists of:

        backend playwithbits {
            .host = "127.0.0.1";
            .port = "8080";
        }

        acl purge {
            "127.0.0.1";
        }

        sub vcl_recv {
            if (req.http.Host ~ "^(.*\.)?playwithbits\.com$") {
                set req.backend = playwithbits;
                set req.http.Host = regsub(req.http.Host, ":[0-9]+", "");
                if (req.request == "PURGE") {
                    if (!client.ip ~ purge) {
                        error 405 "Not allowed.";
                    }
                    return(lookup);
                }
                if (req.url ~ "^/$") {
                    unset req.http.cookie;
                }
            }
        }

        sub vcl_hit {
            if (req.http.Host ~ "^(.*\.)?playwithbits\.com$") {
                if (req.request == "PURGE") {
                    set obj.ttl = 0s;
                    error 200 "Purged.";
                }
            }
        }

        sub vcl_miss {
            if (req.http.Host ~ "^(.*\.)?playwithbits\.com$") {
                if (req.request == "PURGE") {
                    error 404 "Not in cache.";
                }
                if (!(req.url ~ "wp-(login|admin)")) {
                    unset req.http.cookie;
                }
                if (req.url ~ "^/[^?]+\.(jpeg|jpg|png|gif|ico|js|css|txt|gz|zip|lzma|bz2|tgz|tbz|html|htm)(\?.*|)$") {
                    unset req.http.cookie;
                    set req.url = regsub(req.url, "\?.*$", "");
                }
                if (req.url ~ "^/$") {
                    unset req.http.cookie;
                }
            }
        }

        sub vcl_fetch {
            if (req.http.Host ~ "^(.*\.)?playwithbits\.com$") {
                if (req.url ~ "^/$") {
                    unset beresp.http.set-cookie;
                }
                if (!(req.url ~ "wp-(login|admin)")) {
                    unset beresp.http.set-cookie;
                }
            }
        }
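    Nothing in the VCL above rewrites URLs to include a port, so a useful first step is confirming that the :8080 originates from the backend (Nginx or WordPress building absolute URLs from its own port) rather than from Varnish. A small diagnostic sketch in Python; the host name comes from the VCL above, but the path is an assumption:

        import http.client

        # Request the site through Varnish on port 80 and look for where
        # ":8080" first appears: in a redirect Location header or in absolute
        # links in the HTML body. Either points at the backend generating
        # URLs from its own port, not at Varnish rewriting anything.
        HOST = "playwithbits.com"

        conn = http.client.HTTPConnection(HOST, 80, timeout=10)
        conn.request("GET", "/directory/", headers={"Host": HOST})
        resp = conn.getresponse()
        print("status:", resp.status)
        print("location:", resp.getheader("Location"))
        body = resp.read().decode("utf-8", errors="replace")
        print("links with :8080 in body:", body.count(":8080"))
        conn.close()

    If the port does show up in the backend's output, the usual remedies live on the backend side rather than in the VCL -- for example Nginx's port_in_redirect off directive, or a WordPress siteurl/home setting that omits the port.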

    Read the article
