Search Results

Search found 4432 results on 178 pages for 'fail'.


  • DNS Does Not Register at Off-site Locations

    - by Russ Warren
    First of all, let me give you the specifics of our setup:

    - Windows Small Business Server 2008 domain, with all applicable updates, on the DC
    - The DC does DHCP for the main site
    - The DC does DNS for all sites
    - 3 sites, including our headquarters, where the DC is located
    - All sites are connected through OpenVPN SSL tunnels terminated by an Untangle box at each site
    - The 2 remote sites use the Untangle box as the DHCP server for their subnet, which assigns the DC as the primary DNS server
    - A collection of Windows XP and Windows 7 workstations connected to the domain

    Here's the issue: all of the workstations at the main site register with the DNS server on the domain controller fine. As they grab an IP from the DHCP server, it updates the DNS server with the new host record. I have 2 systems (each at a different remote site) that fail to register with the DNS server. I've attempted the following troubleshooting steps:

    - Confirmed the network adapter is using the DC as a DNS server
    - Confirmed 2-way traffic is possible between the DC and the workstation
    - Verified the "Register with DNS server" setting is checked in the adapter properties
    - Ran ipconfig /registerdns and received no errors

    For the time being, I have set up a DHCP reservation for these systems and manually created a host record. This seems to work fine, but I need a solution for any new systems that go out there.
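
    One way to narrow this down (a sketch; the zone, host, and DC names are placeholders) is to check whether the zone accepts dynamic updates at all, and whether a manual registration attempt from an affected workstation ever reaches the DC:

        rem On the DC: show zone settings, including whether dynamic updates are allowed
        dnscmd DC-HOSTNAME /ZoneInfo yourdomain.local

        rem On the affected workstation: force a registration attempt...
        ipconfig /registerdns

        rem ...then query the DC directly to see whether the record appeared
        nslookup workstation-name.yourdomain.local DC-IP-ADDRESS

    If the zone only allows secure updates, it is also worth confirming that the remote machines can authenticate to the DC across the tunnel (clock skew is a common culprit).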

    Read the article

  • Create a special folder within an Outlook PST file

    - by Tony Dallimore
    Original question

    I have two problems caused by missing special folders. I added a second email address, for which Outlook created a new PST file with an Inbox to which emails are successfully imported. But there is no Deleted Items folder: if I attempt to delete an unwanted email, it is struck out; if I move an email to a different PST file, it is copied. I also created a new PST file using Data File Management. This PST file has no Drafts folder. This is not important, but I fail to see why I cannot have a Drafts folder if I want one. Any suggestions for solving these problems, particularly the first, gratefully received.

    Update

    Thanks to Ramhound and Dave Rook for their helpful responses to my original question. I assumed the problem of not having a Drafts folder in an archive PST file and not having a Deleted Items folder associated with an Inbox were parts of the same problem, or I would not have mentioned the Drafts folder issue, since I have an easy workaround for it. Perhaps my question should have been: how do I load emails from an IMAP account and still be able to delete the spam?

    Read the article

  • Everyone can access my Windows 7 Homegroup file shares - Even Windows XP computers

    - by Adrian Grigore
    I have 3 computers in my network, two running Windows 7 and one running Windows XP. I've set up a homegroup on both Windows 7 computers. Also, all computers are in the same Workgroup. The problem is that one of the Windows 7 computers makes all shares accessible to the entire Workgroup instead of restricting them to the Homegroup as it should. I created the file share in Windows 7 by right-clicking in Explorer, then clicking "Share for", then "Homegroup (Read/Write)" (translated from German, so the actual wording may differ). Also, when I look at the file-sharing properties of that folder, Windows Explorer informs me that users must have a valid account and password for this computer to access drive shares. Unfortunately this is not true: being in the same Workgroup is enough to get access. Homegroup restrictions work as expected on my other Windows 7 computer. When trying to browse those shares from the XP computer, I get a dialog asking for a login and password. What might cause homegroup restrictions to fail, and how can I fix this?

    Read the article

  • Ubuntu Server: Networking fails with MODPROBE option in /etc/network/interfaces?

    - by neezer
    For some reason (which I haven't been able to determine yet), yesterday morning the networking service on our web server (running Ubuntu 8.04.2 LTS -- hardy) wouldn't start, and our website went down. I noticed the following error message when trying to restart it:

        * Reconfiguring network interfaces...
        /etc/network/interfaces:6: option with empty value
        ifup: couldn't read interfaces file "/etc/network/interfaces"
        ...fail!

    Line 6 in the /etc/network/interfaces file concerned a MODPROBE command, which (I believe) loaded the ip_conntrack_ftp module so that I could use PASV on my FTP server (vsftpd). The breaking modprobe commands are commented out below:

        # Used by ifup(8) and ifdown(8). See the interfaces(5) manpage or
        # /usr/share/doc/ifupdown/examples for more information.

        # The loopback network interface
        auto lo
        iface lo inet loopback
        #MODPROBE=/sbin/modprobe
        #$MODPROBE ip_conntrack_ftp
        pre-up iptables-restore < /etc/iptables.up.rules

        # The primary network interface
        # Uncomment this and configure after the system has booted for the first time
        auto eth0
        iface eth0 inet static
            address xxx.xxx.xxx.xxx
            netmask 255.255.255.0
            gateway xxx.xxx.xxx.1
            dns-nameservers xxx.xxx.xxx.4 xxx.xxx.xxx.5

    I've verified that there is a file in /sbin called modprobe. Like I said earlier, this setup had been working flawlessly until yesterday morning (though my bosses say that the site actually went down the previous night at 11 PM EST). Can anyone shed some light on (A) why this broke, and (B) how I can re-enable the ip_conntrack_ftp module?
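
    For (B), a sketch of the relevant stanza without the variable trick. Newer ifupdown releases reject bare VAR=value lines, which is consistent with the "option with empty value" error, but a pre-up command achieves the same thing:

        # /etc/network/interfaces -- loopback stanza only, module loaded via pre-up
        auto lo
        iface lo inet loopback
            pre-up /sbin/modprobe ip_conntrack_ftp
            pre-up iptables-restore < /etc/iptables.up.rules

    Alternatively, listing ip_conntrack_ftp in /etc/modules loads it at boot, independently of ifupdown.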

    Read the article

  • Why is Windows Update trying to install an update I don't need?

    - by Oliver Salzburg
    I have a Windows 7 system that currently has a single update pending:

        Windows Internet Explorer 9 for Windows 7 for x64-based Systems

    If I try to install the update, Windows Update will:

    - Create a restore point
    - Fail with the error: Code 9C48, Windows Update encountered an error.

    The event log entry for the failure reads:

        Installation Failure: Windows failed to install the following update with error 0x80070643: Windows Internet Explorer 9 for Windows 7 for x64-based Systems.

    If you search the web for that error, there are many other people with the exact same issue. Sadly, I am unable to apply the proposed solutions to my case, because I just installed this system. There is nothing on it except Windows 7. I installed the system and ran through the updates. I also did the exact same process with this machine several times over the past few days, due to a long-term test we just started. I didn't have any problems with any Windows update on the previous installation runs, and I know I didn't do anything different this time, because I followed the installation procedure's instructions, which are to be used during the test. How did this happen, and how do I solve it?

    Further Investigation

    So, as I always like to do, I ran the update again while running Process Monitor and dug up further details.

    WindowsUpdate.log

    First of all, there is a Windows Update log file located at C:\Windows\WindowsUpdate.log, which I didn't know about. But I fail to see any significant entry in it; perhaps you'll have more luck:

        2012-04-10 22:46:58:017 956 728 AU AU received approval from Ux for 1 updates
        2012-04-10 22:46:58:017 956 728 AU AU setting pending client directive to 'Progress Ux'
        2012-04-10 22:46:58:095 956 728 AU BeginInteractiveInstall invoked for Download
        2012-04-10 22:46:58:095 956 728 AU Auto-approving update for download, updateId = {B33ACEC1-3265-4D01-9C37-AC0892E95ED9}.100, ForUx=1, IsOwnerUx=1, HasDeadline=0, IsMinor=0
        2012-04-10 22:46:58:095 956 728 AU Auto-approved 1 update(s) for download (for Ux)
        2012-04-10 22:46:58:110 956 728 AU UpdateDownloadProperties: 0 download(s) are still in progress.
        2012-04-10 22:46:58:110 956 728 AU #############
        2012-04-10 22:46:58:110 956 728 AU ## START ## AU: Download updates
        2012-04-10 22:46:58:110 956 728 AU #########
        2012-04-10 22:46:58:110 956 728 AU # Approved updates = 1
        2012-04-10 22:46:58:110 956 728 AU AU initiated download, updateId = {B33ACEC1-3265-4D01-9C37-AC0892E95ED9}.100, callId = {35DF928B-B428-4BAC-8C63-55295967EFBB}
        2012-04-10 22:46:58:110 956 728 AU Setting AU scheduled install time to 2012-04-11 01:00:00
        2012-04-10 22:46:58:110 956 728 AU Successfully wrote event for AU health state:0
        2012-04-10 22:46:58:110 956 728 AU Currently showing Progress UX client - so not launching any other client
        2012-04-10 22:46:58:110 956 bb8 DnldMgr *************
        2012-04-10 22:46:58:110 956 bb8 DnldMgr ** START ** DnldMgr: Downloading updates [CallerId = AutomaticUpdatesWuApp]
        2012-04-10 22:46:58:110 956 bb8 DnldMgr *********
        2012-04-10 22:46:58:110 956 bb8 DnldMgr * Call ID = {35DF928B-B428-4BAC-8C63-55295967EFBB}
        2012-04-10 22:46:58:110 956 bb8 DnldMgr * Priority = 3, Interactive = 1, Owner is system = 0, Explicit proxy = 0, Proxy session id = 1, ServiceId = {9482F4B4-E343-43B6-B170-9A65BC822C77}
        2012-04-10 22:46:58:110 956 bb8 DnldMgr * Updates to download = 1
        2012-04-10 22:46:58:110 956 bb8 Agent * Title = Windows Internet Explorer 9 for Windows 7 for x64-based Systems
        2012-04-10 22:46:58:110 956 bb8 Agent * UpdateId = {B33ACEC1-3265-4D01-9C37-AC0892E95ED9}.100
        2012-04-10 22:46:58:110 956 bb8 Agent * Bundles 1 updates:
        2012-04-10 22:46:58:110 956 bb8 Agent * {6D9A90B7-FAF9-4A47-9EFE-A506264873B3}.100
        2012-04-10 22:46:58:110 956 bb8 DnldMgr *********** DnldMgr: New download job [UpdateId = {6D9A90B7-FAF9-4A47-9EFE-A506264873B3}.100] ***********
        2012-04-10 22:46:58:110 956 728 AU Successfully wrote event for AU health state:0
        2012-04-10 22:46:58:110 956 728 AU # Pending download calls = 1
        2012-04-10 22:46:58:110 956 728 AU ## RESUMED ## AU: Download update [UpdateId = {B33ACEC1-3265-4D01-9C37-AC0892E95ED9}, succeeded]
        2012-04-10 22:46:58:313 956 bb8 Agent ** END ** Agent: Downloading updates [CallerId = AutomaticUpdatesWuApp]
        2012-04-10 22:46:58:313 956 bb8 Agent *************
        2012-04-10 22:46:58:313 956 718 AU #########
        2012-04-10 22:46:58:313 956 718 AU ## END ## AU: Download updates
        2012-04-10 22:46:58:313 956 718 AU #############
        2012-04-10 22:46:58:313 956 718 AU Setting AU scheduled install time to 2012-04-11 01:00:00
        2012-04-10 22:46:58:313 956 718 AU Successfully wrote event for AU health state:0
        2012-04-10 22:46:58:313 956 718 AU Currently showing Progress UX client - so not launching any other client
        2012-04-10 22:46:58:313 956 718 AU Successfully wrote event for AU health state:0
        2012-04-10 22:46:58:313 956 aac AU Getting featured update notifications. fIncludeDismissed = true
        2012-04-10 22:46:58:313 956 aac AU No featured updates available.
        2012-04-10 22:47:00:107 956 aac AU BeginInteractiveInstall invoked for Install
        2012-04-10 22:47:00:107 956 aac AU Auto-approving update for install, updateId = {B33ACEC1-3265-4D01-9C37-AC0892E95ED9}.100, ForUx=1, IsOwnerUx=1, HasDeadline=0, IsMinor=0
        2012-04-10 22:47:00:107 956 aac AU Auto-approved 1 update(s) for install (for Ux), installType=1
        2012-04-10 22:47:00:107 956 aac AU #############
        2012-04-10 22:47:00:107 956 aac AU ## START ## AU: Install updates
        2012-04-10 22:47:00:107 956 aac AU #########
        2012-04-10 22:47:00:107 956 aac AU # Initiating manual install
        2012-04-10 22:47:00:107 956 aac AU # Approved updates = 1
        2012-04-10 22:47:00:107 956 aac AU ## RESUMED ## AU: Installing update [UpdateId = {B33ACEC1-3265-4D01-9C37-AC0892E95ED9}]
        2012-04-10 22:47:13:773 2232 9fc Handler : WARNING: Exit code = 0x8024200B
        2012-04-10 22:47:13:773 956 718 AU # WARNING: Install failed, error = 0x80070643 / 0x00009C48
        2012-04-10 22:47:13:773 2232 9fc Handler :::::::::
        2012-04-10 22:47:13:773 2232 9fc Handler :: END :: Handler: Command Line Install
        2012-04-10 22:47:13:773 2232 9fc Handler :::::::::::::
        2012-04-10 22:47:13:851 956 a7c Agent *********
        2012-04-10 22:47:13:851 956 a7c Agent ** END ** Agent: Installing updates [CallerId = AutomaticUpdates]
        2012-04-10 22:47:13:851 956 718 AU Install call completed.
        2012-04-10 22:47:13:851 956 a7c Agent *************
        2012-04-10 22:47:13:851 956 718 AU # WARNING: Install call completed, reboot required = No, error = 0x00000000
        2012-04-10 22:47:13:851 956 718 AU #########
        2012-04-10 22:47:13:851 956 718 AU ## END ## AU: Installing updates [CallId = {FCFF2A5C-25AB-4FB9-AB2B-35C65CCA6A9F}]
        2012-04-10 22:47:13:851 956 718 AU #############
        2012-04-10 22:47:13:851 956 718 AU Install complete for all calls, reboot NOT needed
        2012-04-10 22:47:13:851 956 718 AU Setting AU scheduled install time to 2012-04-11 01:00:00
        2012-04-10 22:47:13:851 956 718 AU Successfully wrote event for AU health state:0
        2012-04-10 22:47:13:851 956 498 AU Getting featured update notifications. fIncludeDismissed = true
        2012-04-10 22:47:13:851 956 498 AU No featured updates available.
        2012-04-10 22:47:14:366 956 168 AU No featured updates notifications to show
        2012-04-10 22:47:14:366 956 168 AU UpdateDownloadProperties: 0 download(s) are still in progress.
        2012-04-10 22:47:14:366 956 168 AU Triggering Offline detection (non-interactive)
        2012-04-10 22:47:14:366 956 168 AU AU setting pending client directive to 'Install Complete Ux'
        2012-04-10 22:47:14:366 956 168 AU Changing existing AU client directive from 'Progress Ux' to 'Install Complete Ux', session id = 0x1
        2012-04-10 22:47:14:366 956 168 AU Successfully wrote event for AU health state:0
        2012-04-10 22:47:14:366 956 b78 AU #############
        2012-04-10 22:47:14:366 956 b78 AU ## START ## AU: Search for updates
        2012-04-10 22:47:14:366 956 b78 AU #########
        2012-04-10 22:47:14:366 956 b78 AU ## RESUMED ## AU: Search for updates [CallId = {0198DD3A-D7B0-48F5-A77D-795F8A1BDCE8}]
        2012-04-10 22:47:16:097 956 718 AU # 1 updates detected
        2012-04-10 22:47:16:097 956 718 AU #########
        2012-04-10 22:47:16:097 956 718 AU ## END ## AU: Search for updates [CallId = {0198DD3A-D7B0-48F5-A77D-795F8A1BDCE8}]
        2012-04-10 22:47:16:097 956 718 AU #############
        2012-04-10 22:47:16:097 956 718 AU No featured updates notifications to show
        2012-04-10 22:47:16:097 956 718 AU Setting AU scheduled install time to 2012-04-11 01:00:00
        2012-04-10 22:47:16:097 956 718 AU Successfully wrote event for AU health state:0
        2012-04-10 22:47:16:097 956 718 AU Successfully wrote event for AU health state:0
        2012-04-10 22:47:16:113 956 55c AU Getting featured update notifications. fIncludeDismissed = true
        2012-04-10 22:47:16:113 956 55c AU No featured updates available.
        2012-04-10 22:47:18:780 956 bb8 Report REPORT EVENT: {27479C66-E930-4F9C-AFF2-27EDD76DED8F} 2012-04-10 22:47:13:773+0200 1 182 101 {B33ACEC1-3265-4D01-9C37-AC0892E95ED9} 100 80070643 AutomaticUpdates Failure Content Install Installation Failure: Windows failed to install the following update with error 0x80070643: Windows Internet Explorer 9 for Windows 7 for x64-based Systems.
        2012-04-10 22:47:18:780 956 bb8 Report CWERReporter::HandleEvents - WER report upload completed with status 0x8
        2012-04-10 22:47:18:780 956 bb8 Report WER Report sent: 7.5.7601.17514 0x80070643 B33ACEC1-3265-4D01-9C37-AC0892E95ED9 Install 101 Unmanaged
        2012-04-10 22:47:18:780 956 bb8 Report CWERReporter finishing event handling. (00000000)

    WU-IE9-Windows7-x64.exe

    The actual update that is executed is downloaded and stored at the following location: C:\Windows\SoftwareDistribution\Download\Install\WU-IE9-Windows7-x64.exe. Executing that file manually results in an error message saying that a newer version of Internet Explorer is already installed.

    IE9_main.log

    The IE9 installer/updater also creates its own log file, located at C:\Windows\IE9_main.log. For the update session in question, the installer logged:

        00:00.000: ====================================================================
        00:00.016: Started: 2012/04/10 (Y/M/D) 23:10:53.897 (local)
        00:00.032: Time Format in this log: MM:ss.mmm (minutes:seconds.milliseconds)
        00:00.063: Command line: "C:\Windows\SoftwareDistribution\Download\Install\WU-IE9-Windows7-x64.exe"
        00:00.078: INFO: Setup installer for Internet Explorer: 9.0.8112.16421
        00:00.094: INFO: Previous version of Internet Explorer: 9.0.8112.16443
        00:00.110: INFO: Checking if iexplore.exe's current version is between 9.0.6001.0...
        00:00.125: INFO: ...and 9.1.0.0...
        00:00.141: INFO: Maximum version on which to run IEAK branding is: 9.1.0.0...
        00:00.156: ERROR: A newer version of Internet Explorer is already installed on the system.
        00:00.188: ERROR: Internet Explorer version check failed.
        01:03.789: INFO: Setup exit code: 0x00009C48 (40008) - A more recent version of Internet Explorer is installed.
        01:03.820: INFO: Scheduling upload to IE SQM server: http://sqm.microsoft.com/sqm/ie/sqmserver.dll
        01:03.852: INFO: SQM Upload returned 403
        01:03.867: INFO: Cleaning up temporary files in: C:\Windows\TEMP\IE978E.tmp
        01:03.883: INFO: Unable to remove directory C:\Windows\TEMP\IE978E.tmp, marking for deletion on reboot.
        01:03.898: INFO: Released Internet Explorer Installer Mutex

    This pretty much confirms what the error message says when executing the update manually: the update is already installed, or rather obsolete, because a newer version is present. So why does Windows Update keep trying to install it?

    Possible solutions?

    - Uninstalling Windows Internet Explorer 9 and manually installing the cached C:\Windows\SoftwareDistribution\Download\Install\WU-IE9-Windows7-x64.exe results in the same error after applying all pending updates.
    - Applying the FixIt for the issue "You receive '0x80070643' or '0x643' error codes when you try to install .NET Framework updates through Windows Update or Microsoft Update" did not resolve the issue.
    - Applying the suggested solution for the issue "Error message when you try to install updates by using the Windows Update or Microsoft Update Web site: '0x80070003'" did not resolve the issue.
    - Running the FixIt "Automatically diagnose and fix common problems with Windows Update" reported having resolved issues with Windows Update, but didn't resolve this one.
    - Running the FixIt for the issue "How to troubleshoot Windows Update or Microsoft Update when you are repeatedly offered an update" did not resolve the issue, with either normal or aggressive settings.
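
    One more thing worth trying when an update is repeatedly offered even though it is installed: the standard reset of the Windows Update cache and detection state (generic advice, not specific to this error), run from an elevated prompt:

        net stop wuauserv
        rem Rename the cache so Windows Update rebuilds its detection state from scratch
        ren C:\Windows\SoftwareDistribution SoftwareDistribution.old
        net start wuauserv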

    Read the article

  • Need troubleshooting advice for intermittent DNS problems with requests on ISP nameservers

    - by Mnebuerquo
    I've been having some intermittent DNS problems with a web server, where certain ISPs' DNS servers don't have my hostnames in cache and fail to look them up. At the same time, queries to OpenDNS for those hostnames resolve correctly. It's intermittent, and it always works fine for me, so it's hard to identify the problem when someone reports connectivity problems to my site. My website is on a server running Linux with Plesk. My DNS records are configured with Plesk (so my server is its own DNS master). The domain name is registered with GoDaddy. I'm not very knowledgeable about DNS, so I don't really know where to begin with troubleshooting. I've started learning to use dig, but while I can read the manpage to learn the syntax, I don't really know what questions to ask. Since the problem is intermittent, I haven't been able to catalog many symptoms. Symptoms I have observed:

    - Certain people repeatedly reported intermittent problems connecting to my website. This was only from certain networks. (Ex: one guy could connect reliably from his office but not from his home.)
    - Sometimes I notice my browser taking a long time looking up the hostname for my site (Firefox shows a message in the status bar at the bottom). For me this is in the ten-second range.
    - ssh connections from anywhere to my server take a long time to connect, but then seem to work fine once connected.

    So hopefully the folks on Server Fault can point me to a good beginner tutorial for understanding DNS, and suggest troubleshooting questions to ask next time one of my users reports connectivity problems.
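
    To catch the problem in the act, a few dig queries comparing answers from the authoritative server, the ISP resolver, and OpenDNS can help (a sketch; substitute the real hostname and server addresses):

        # Ask the authoritative server (the Plesk box) directly; this should always answer
        dig @ns1.example.com www.example.com A +norecurse

        # Ask the ISP resolver and OpenDNS the same question while the problem is occurring
        dig @ISP-RESOLVER-IP www.example.com A
        dig @208.67.222.222 www.example.com A

        # Check the delegation chain and TTLs; broken NS records or very low TTLs
        # are common causes of intermittent lookup failures
        dig example.com NS +trace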

    Read the article

  • Query Execution Failed in Reporting Services reports

    - by Chris Herring
    I have some Reporting Services reports that talk to Analysis Services, and at times they fail with the following error:

        An error occurred during client rendering.
        An error has occurred during report processing.
        Query execution failed for dataset 'AccountManagerAccountManager'.
        The connection cannot be used while an XmlReader object is open.

    This occurs sometimes when I change selections in the filter. It also occurs when the machine has been under heavy load, and then it will consistently error until SSAS is restarted. The log file contains the following error:

        processing!ReportServer_0-18!738!04/06/2010-11:01:14:: e ERROR: Throwing Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: Query execution failed for dataset 'AccountManagerAccountManager'., ;
        Info: Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: Query execution failed for dataset 'AccountManagerAccountManager'. ---> System.InvalidOperationException: The connection cannot be used while an XmlReader object is open.
           at Microsoft.AnalysisServices.AdomdClient.XmlaClient.CheckConnection()
           at Microsoft.AnalysisServices.AdomdClient.XmlaClient.ExecuteStatement(String statement, IDictionary connectionProperties, IDictionary commandProperties, IDataParameterCollection parameters, Boolean isMdx)
           at Microsoft.AnalysisServices.AdomdClient.AdomdConnection.XmlaClientProvider.Microsoft.AnalysisServices.AdomdClient.IExecuteProvider.ExecuteTabular(CommandBehavior behavior, ICommandContentProvider contentProvider, AdomdPropertyCollection commandProperties, IDataParameterCollection parameters)
           at Microsoft.AnalysisServices.AdomdClient.AdomdCommand.ExecuteReader(CommandBehavior behavior)
           at Microsoft.AnalysisServices.AdomdClient.AdomdCommand.System.Data.IDbCommand.ExecuteReader(CommandBehavior behavior)
           at Microsoft.ReportingServices.DataExtensions.AdoMdCommand.ExecuteReader(CommandBehavior behavior)
           at Microsoft.ReportingServices.OnDemandProcessing.RuntimeDataSet.RunDataSetQuery()

    Can anyone shed light on this issue?

    Read the article

  • How to create custom content for the nginx error 502 page while keeping the original URL in the browser

    - by user123862
    I'm trying to serve the nginx error page with a custom language and message while keeping the original URL in the browser, without success so far. For example: I go to the URL xaluan.com/aaa/bbb.html while the backend server is down. nginx should show the 502 error, with the same URL in the browser, but with my custom message in my language.

    Test 1. I created a custom page at /usr/local/nginx/html/502.html with the following config, but the site shows the default nginx error page at domain.com/502.html (the content of the web page is not the one I created):

        error_page 502 /502.html;
        location = /502.html {
            root /usr/local/nginx/html;
        }

    Test 2. I then created the same page in my www domain folder, /home/xaluano/public_html/502.html. This keeps redirecting me to domain.com/502.html; the content is now the one I created, but the URL is still not what I need:

        error_page 502 /502.html;
        location = /502.html {
            root /home/xaluano/public_html;
            internal;
        }

    EDIT, update with more detail, 10/06/2012: please see my nginx config at http://pastebin.com/7iLD6WQq and the vhost config at http://pastebin.com/ZZ91KiY6.

    The test case: if the Apache httpd service is stopped (service httpd stop), then open a browser and go to xaluan.com/modules.php?name=News&file=article&sid=123456. I should see the 502 error with the same URL in the browser address bar.

    The custom error page: I need a config which, when Apache fails, shows a custom message telling the user to wait one minute for the service to come back, then refreshes the current page with the same URL (the refresh I can do easily with JavaScript). As long as nginx doesn't change the URL, the JavaScript can work it out. Any help will be great.. thanks in advance.
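
    For comparison, a minimal sketch of the usual pattern (paths taken from the question): error_page without "=" serves the custom body on the original URL without redirecting, and internal stops clients from requesting /502.html directly:

        error_page 502 /502.html;
        location = /502.html {
            root /usr/local/nginx/html;
            internal;
        }

    If the browser still ends up at domain.com/502.html, it is worth checking whether the 502 is really produced by this server block, or by another vhost or the backend itself issuing a redirect.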

    Read the article

  • QNAP NAS 509 (Linux) - how to unmount a busy volume and find the physical disk?

    - by Horst Walter
    On my QNAP TS-509 NAS I have a technical issue: I need to run e2fsck. This works fine for me on md0 (see below), but how can I unmount the busy devices md9 and sda4 in order to do the same there? Whenever I try, I fail because the device is busy. [This part is solved, see below.] To further track down the issue, I need to sort out the physical-disk-to-device relationship. How can I find this out? E.g., md0 is a striped volume on 2 disks, but I need to find out on which physical disks. Remark: as you can easily derive from my questions, I am not a Linux expert, but I manage to get along.

        /dev/ram0    124.0M   94.1M   29.8M  76%  /
        tmpfs         32.0M   80.0k   31.9M   0%  /tmp
        /dev/sda4    310.0M  103.9M  206.1M  34%  /mnt/ext
        /dev/md9     509.5M   39.2M  470.2M   8%  /mnt/HDA_ROOT
        /dev/md0       1.8T    1.4T  444.7G  76%  /share/MD0_DATA
        tmpfs         32.0M       0   32.0M   0%  /.eaccelerator.tmp

    -- Added --
    QNAP seems to be based on BusyBox. I do not find something like init / telinit / runlevel. The BusyBox docs say I need to run the commands below, but in /var/service, sv is not available. I want to go to single-user mode to unmount the devices.

        # cd /var/service
        # sv d *
        # sv u getty*

    -- Added, thanks A4L --
    This QNAP box runs a special flavor of Linux, so not all SOPs apply. In my particular case I found a services.sh script that stops all services; after that, the drive could be unmounted. The information passed by A4L is valid and worth reading; maybe I'll profit from it next time. Links: http://unix.stackexchange.com/questions/19918/umount-device-is-busy and http://unix.stackexchange.com/questions/15024/umount-device-is-busy-why

    So the unmount issue is solved; I'm still looking for the best option to find the physical-disk-to-volume mapping.
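
    For the physical-disk-to-volume mapping, mdadm (if present on the firmware) reports which member partitions make up each md device (a sketch; device names will differ):

        # Summary of all md arrays and their member partitions
        cat /proc/mdstat

        # Detailed view of one array, listing members such as /dev/sda3 and /dev/sdb3
        mdadm --detail /dev/md0

        # Relate a member partition back to its whole disk
        fdisk -l /dev/sda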

    Read the article

  • PsExec and Remote Environment Variables, Logging, Etc.

    - by alharaka
    When I run PsExec on a remote computer, I always fall short of what I want. What I would like, ideally, in most situations is (a) a log on an admin server where each individual log has the name of the remote computer it was generated from (e.g. COMPNAME1.log, COMPNAME2.log, etc.), or (b) a log file on each remote computer with whatever name I specify. When I try scenario (a), I use the following command:

        %SystemDrive%\path\to\psexec.exe @listofcomputers.txt -u DOMAIN\username cmd /c echo TEST >> \\server.company.tld\share\%computername%.log

    Problem is that it never works. All the computers just write to the log where %computername% is the computer I execute PsExec from in my office. What I want is a unique log for each computer specified in listofcomputers.txt that correctly uses the hostname from the remote environment variable without issue. Is that even possible? It does not seem to work for me. I tried this, and the syntax is clearly wrong:

        %SystemDrive%\path\to\psexec.exe @listofcomputers.txt -u DOMAIN\username "cmd /c echo TEST >> \\server.company.tld\share\%computername%.log"

    PsExec just fails saying the system file cannot be found (read: syntax fail). As for scenario (b), it appears to be a variation of the same problem. When I run a command like this, it does not work either:

        %SystemDrive%\path\to\psexec.exe @listofcomputers.txt -u DOMAIN\username "cmd /c echo %computername% >> \\server.company.tld\share\aggregated.log"

    Is there something I do not understand about remote paths and environment variables with PsExec on the cmd.exe console (I have not even tried the dreaded PowerShell yet)? I know such things work in a batch file (cmd /c \\server.company.tld\share\runthis.bat), but is there a reason it will not work when executing commands as arguments? I always need this, and can never get it!
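
    The usual workaround (a sketch, assuming the command is launched from a batch file) is to double the percent signs so %computername% survives local expansion and is expanded by the remote cmd.exe instead:

        rem In a .bat/.cmd file, %% is a literal percent sign, so the remote shell
        rem expands COMPUTERNAME and each target writes its own log file.
        %SystemDrive%\path\to\psexec.exe @listofcomputers.txt -u DOMAIN\username cmd /c "echo TEST >> \\server.company.tld\share\%%computername%%.log"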

    Read the article

  • iptables: allowing incoming for 192.168.1.0/24 allowed incoming for all?

    - by nortally
    The internal side of my ISP router has three devices:

    - ISP router: 128.128.43.1
    - Firewall router: 128.128.43.2
    - Server: 128.128.43.3

    Behind the Firewall router is a NAT network using 192.168.100.n/24. This question is regarding iptables running on the Server. I wanted to allow access to port 8080 only from the NAT clients behind the Firewall router, so I used this rule:

        -A Firewall-1-INPUT -s 192.168.100.0/24 -m state --state NEW -m tcp -p tcp --dport 8080 -j ACCEPT

    This worked, but UNEXPECTEDLY ALLOWED GLOBAL ACCESS, which resulted in our JBoss server getting compromised. I now know that the correct rule is to use the Firewall router's address instead of the internal network, but can anyone explain why the first rule allowed global access? I would have expected it to just fail. Full config, mostly lifted from a Red Hat server:

        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        :Firewall-1-INPUT - [0:0]
        -A INPUT -j Firewall-1-INPUT
        -A FORWARD -j Firewall-1-INPUT
        -A Firewall-1-INPUT -i lo -j ACCEPT
        -A Firewall-1-INPUT -p icmp --icmp-type any -j ACCEPT
        -A Firewall-1-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        -A Firewall-1-INPUT -m comment --comment "allow ssh from all"
        -A Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
        -A Firewall-1-INPUT -m comment --comment "allow https from all"
        -A Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT
        -A Firewall-1-INPUT -m comment --comment "allow JBOSS from Firewall"
        ### THIS RESULTED IN GLOBAL ACCESS TO PORT 8080 ###
        -A Firewall-1-INPUT -s 192.168.100.0/24 -m state --state NEW -m tcp -p tcp --dport 8080 -j ACCEPT
        ### THIS WORKED ###
        -A Firewall-1-INPUT -s 128.128.43.2 -m state --state NEW -m tcp -p tcp --dport 8080 -j ACCEPT
        ###
        -A Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
        COMMIT
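
    Whatever the root cause turns out to be, matching the ingress interface as well as the source address makes this kind of rule fail closed (a sketch; only useful if the server has a separate interface facing the Firewall router, and eth1 is a placeholder for that interface):

        # Only accept NAT-client sources arriving on the interface they should come from
        -A Firewall-1-INPUT -i eth1 -s 192.168.100.0/24 -m state --state NEW -m tcp -p tcp --dport 8080 -j ACCEPT

    Packets arriving on any other interface with a forged or routed 192.168.100.x source then cannot match the rule.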

    Read the article

  • Samba4 not building in Arch Linux

    - by kmplsv
        cp bin/tdbtool bin/tdbdump bin/tdbbackup /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/bin
        cp ./include/tdb.h /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/include
        cp tdb.pc /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/pkgconfig
        cp libtdb.a libtdb.so.1.2.4 /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib
        rm -f /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/libtdb.so
        ln -s libtdb.so.1.2.4 /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/libtdb.so
        rm -f /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/libtdb.so.1
        ln -s libtdb.so.1.2.4 /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/libtdb.so.1
        mkdir -p /tmp/yaourt-tmp-root/aur-samba4/pkg/`/tmp/yaourt-tmp-root/aur-samba4/src/bin/python -c "import distutils.sysconfig; print distutils.sysconfig.get_python_lib(1, prefix='/opt/samba4/samba')"`
        cp tdb.so /tmp/yaourt-tmp-root/aur-samba4/pkg/`/tmp/yaourt-tmp-root/aur-samba4/src/bin/python -c "import distutils.sysconfig; print distutils.sysconfig.get_python_lib(1, prefix='/opt/samba4/samba')"`
        /bin/install -c -d /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/share/man/man8
        for I in manpages/*.8; do \
            /bin/install -c -m 644 $I /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/share/man/man8; \
        done
        /bin/install: cannot stat `manpages/*.8': No such file or directory
        make: *** [installdocs] Error 1
        Aborting...
        ==> ERROR: Makepkg was unable to build samba4.
        ==> Restart building samba4 ? [y/N]

    Any ideas what is causing my build to fail? I'm assuming it's an issue with the manpages, but I can't figure out exactly what package it is looking for that I don't have.

    Read the article

  • EC2: How dangerous is it to turn off fsck for EBS volumes?

    - by Janine
    I have been tearing my hair out trying to figure out why my EC2 instances (made from my own custom AMIs) were taking many tries to come up properly. They would fail with the following error, for both of the EBS volumes I was attaching during startup:

        fsck.ext3: No such file or directory while trying to open /dev/sdf

    Finally, I figured out the problem. I had put this in /etc/fstab:

        /dev/sdf    /export     ext3    defaults    1 2
        /dev/sdi    /export2    ext3    defaults    1 2

    The 2 tells the system to fsck the drives on the way up. Changing this to

        /dev/sdf    /export     ext3    defaults    1 0
        /dev/sdi    /export2    ext3    defaults    1 0

    avoids the problem completely, but now the volumes are never going to be fsck'd. How much does this matter? Once the instance goes into production it's going to be running pretty much 24/7, so not many fscks would be happening anyway, but still... this just feels like a bad idea. I have not been able to find anyone else even reporting this problem (there are people with the same error message, but different causes). It seems unbelievable that I could be the only person ever to make this mistake, but perhaps I'm just talented that way. :) If there is another solution to the problem, I would love to hear it; I have not been able to find one.
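
    A middle ground (a sketch; run against the actual devices) is to keep periodic checking but control it with tune2fs instead of fstab's fsck pass number:

        # Show the current mount count and check interval for the filesystem
        tune2fs -l /dev/sdf | grep -i -e 'mount count' -e 'check'

        # Check at most every 50 mounts or every 180 days, whichever comes first
        tune2fs -c 50 -i 180d /dev/sdf

    With the fstab pass number set to 0, an occasional manual e2fsck (against an unmounted volume, or a snapshot of it) covers the same ground.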

    Read the article

  • Dropped connections between Linux Servers in Data Center

    - by Emil H
    I have a number of Linux servers at a US-based data center. The servers were installed by the hosting company and are running Fedora Core. We're experiencing problems with dropped connections. The issue seems to be that when we attempt to connect to one of the other servers after a period of inactivity, the first connection attempt fails, and sometimes the second as well. However, after that the connection succeeds and works for a while. This happens for both MySQL connections and raw socket connections, but only seems to occur when connecting to some of our servers. The confusing part is that some of the servers that behave differently have identical hardware configurations and software. For example, it happens when connecting to a server called mysql2, but not to a server called mysql3. These servers were installed at the same time, with the same specifications. The problem can be reproduced somewhat reliably, but only after waiting fifteen minutes to half an hour. This makes it hard to diagnose, and even harder since I'm not really sure what to look for. I realize that connections sometimes fail and that we should write our applications to compensate for this, but these servers are all in the same data center. Why would it matter if two servers haven't communicated for a while? Does anybody have an idea what might be causing this? Is it a server configuration problem, or a network problem that I should contact the hosting company about? What do I tell them to look for? Unfortunately, our experience has been that the support staff doesn't investigate problems in depth unless we give them detailed directions.
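
    One concrete thing to hand the hosting company (a sketch; interface names and ports are placeholders) is a packet capture of a failing first attempt, taken on both ends at once, to see whether the SYN ever arrives:

        # On the client: watch outgoing connection attempts to mysql2
        tcpdump -ni eth0 host mysql2 and port 3306

        # On mysql2 itself: see whether the SYN arrives and what, if anything, goes back
        tcpdump -ni eth0 port 3306

    If the SYN leaves one machine but never reaches the other, that points at the data center network (stale ARP entries after idle periods are a classic cause) rather than at either server.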

    Read the article

  • Copying large files from USB devices to the internal hard drive fails on Mac OS

    - by John M. P. Knox
    I have a second-generation 13" MacBook Air running Mac OS X 10.6.6 with a 2.13 GHz processor, 4 GB of RAM, and a 256 GB SSD hard disk. I often get failures when I attempt to copy a large file or large collection of files from an external USB drive (typically a "Firewire" generation Drobo) to the internal drive. The failure behaves almost exactly as if I had pulled the USB cable from the computer in mid-transfer: I get a warning that I have removed the hard disk improperly. After this event, the drive no longer appears mounted in the Finder, and I have to unplug and reinsert the USB cable to mount the drive again. I have also seen a similar problem when using Aperture 3 to import a large number of photos and videos from a USB Compact Flash card reader: the import will fail, and I will have to unplug the card reader and import the missing items. Oddly, reversing the direction of the copy seems pretty reliable. I've never had a problem copying a large file to a USB device, meaning that I have quite a few large files which are stranded on my Drobo.

        Model Identifier: MacBookAir3,2
        Boot ROM Version: MBA31.0061.B01

    I have seen a similar issue reported on Apple's website:

        http://discussions.apple.com/thread.jspa?threadID=2648590&tstart=0

    The only suggested resolutions there seem to be switching to another form of connectivity (e.g. FireWire, which does not exist on the MacBook Air), downgrading to Mac OS 10.6.4, or reverting the USB kernel extensions to the 10.6.4 versions:

        http://discussions.apple.com/message.jspa?messageID=12566073#12582956

    I'm not too keen on the idea of downgrading kernel extensions. Does anyone know of a hardware revision without this issue that I can trade up to? Are there any other potential solutions out there?

    Read the article

  • Synchronization of volume snapshots when doing whole system backups

    - by intuited
    Is there a way to guarantee consistency across volumes when doing backups from LVM snapshots? Consider this scenario:

    1. Some system upgrade is in progress. It will write some files to the /usr volume, and once completed, will record success in the /var volume.
    2. As the upgrade is just about complete, I run a backup script that creates snapshots of the /usr and /var volumes, along with the rest of the system's volumes, and proceeds to create backups from those snapshots.
    3. Just before the upgrade's last write/flush on the /usr volume completes, the backup script takes its snapshot of /usr.
    4. That write completes, and the upgrade operation's success is quickly recorded in the nebulous depths of /var.
    5. The backup script takes a snapshot of /var.
    6. The backup script creates backups from the snapshots it has, er, snapshotted.

    So the result of all of this tomfoolery is that the resulting /usr backup contains a file which is missing a few bits, and the /var backup contains metadata indicating that that file is complete and approved for use. Without delving into the details of which operating systems' system upgrade systems would be unfazed by such trifles, is there a way to avoid such problems? At the least this seems like it could cause some application to fail unexpectedly after restoration of such a backup.
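
    Classic LVM has no atomic multi-volume snapshot, but the window can be narrowed to effectively zero by freezing the filesystems, snapshotting them all, then thawing (a sketch, assuming ext3/4-style filesystems in a volume group named vg0; sizes and names are placeholders):

        #!/bin/sh
        # Freeze both filesystems so no write can land between the two snapshots
        fsfreeze -f /usr
        fsfreeze -f /var

        # Take the snapshots back to back
        lvcreate -s -L 1G -n usr_snap /dev/vg0/usr
        lvcreate -s -L 1G -n var_snap /dev/vg0/var

        # Thaw in reverse order
        fsfreeze -u /var
        fsfreeze -u /usr

    Anything that blocks on the frozen filesystems will stall until the thaw, so the freeze window has to be kept short.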

    Read the article

  • Streaming to PS3 with NAS and built-in DLNA server?

    - by philt
    With consumer-grade hardware, is it possible to successfully stream 1080p MP4 videos to a PS3? I have a Linksys router that can only do 10/100. The PS3 is wired to it with Cat5e cable, and the PS3 itself supports gigabit Ethernet. I would upgrade the router and get one that supports gigabit Ethernet if it could handle streaming like this. It currently does work, with minor jerkiness, streaming from my Mac to the PS3, but fast-forward/reverse and "goto" (the equivalent of scene selection) take forever and/or fail completely. And streaming from my Mac of course requires the Mac to be on at all times. When I put the movies on an external USB drive and connect it to the PS3 directly, it performs flawlessly: fast forward and everything works great. So I was thinking about getting a NAS, but I don't know if any inexpensive NAS (i.e. Buffalo LinkStation Live, WD My Book World Edition, D-Link DNS-321, etc.) can actually deliver the performance necessary to do this, even with gigabit Ethernet?

    Read the article

  • Error in Bind9 named.conf file. Bind won't start.

    - by tj111
    I'm trying to set up a DNS server on an Ubuntu Server machine (10.04). I configured an entry in named.conf.local to test it, but when trying to restart bind9 I get the following error:

        * Starting domain name service... bind9    [fail]

    So I checked the output of syslog, and this is what I get:

        May 20 18:11:13 empression-server1 named[4700]: starting BIND 9.7.0-P1 -u bind
        May 20 18:11:13 empression-server1 named[4700]: built with '--prefix=/usr' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--sysconfdir=/etc/bind' '--localstatedir=/var' '--enable-threads' '--enable-largefile' '--with-libtool' '--enable-shared' '--enable-static' '--with-openssl=/usr' '--with-gssapi=/usr' '--with-gnu-ld' '--with-dlz-postgres=no' '--with-dlz-mysql=no' '--with-dlz-bdb=yes' '--with-dlz-filesystem=yes' '--with-dlz-ldap=yes' '--with-dlz-stub=yes' '--with-geoip=/usr' '--enable-ipv6' 'CFLAGS=-fno-strict-aliasing -DDIG_SIGCHASE -O2' 'LDFLAGS=-Wl,-Bsymbolic-functions' 'CPPFLAGS='
        May 20 18:11:13 empression-server1 named[4700]: adjusted limit on open files from 1024 to 1048576
        May 20 18:11:13 empression-server1 named[4700]: found 4 CPUs, using 4 worker threads
        May 20 18:11:13 empression-server1 named[4700]: using up to 4096 sockets
        May 20 18:11:13 empression-server1 named[4700]: loading configuration from '/etc/bind/named.conf'
        May 20 18:11:13 empression-server1 named[4700]: /etc/bind/named.conf:10: missing ';' before 'include'
        May 20 18:11:13 empression-server1 named[4700]: loading configuration: failure
        May 20 18:11:13 empression-server1 named[4700]: exiting (due to fatal error)

    So it thinks I have an error in the default named.conf file, which is pretty ridiculous. I went through it and deleted a blank line just for the hell of it, but I can't see how it figures there's an error in there. Note that before this I did have an error in named.conf.local, but it showed up properly in syslog and I fixed it, so it is reporting the correct file. Here are the contents of named.conf:

        // This is the primary configuration file for the BIND DNS server named.
        //
        // Please read /usr/share/doc/bind9/README.Debian.gz for information on the
        // structure of BIND configuration files in Debian, *BEFORE* you customize
        // this configuration file.
        //
        // If you are just adding zones, please do that in /etc/bind/named.conf.local

        include "/etc/bind/named.conf.options";
        include "/etc/bind/named.conf.local";
        include "/etc/bind/named.conf.default-zones";
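
    bind ships a checker that reports the offending file and line without bouncing the service, which is handy for parse errors like this one (the missing ';' is frequently at the end of one of the included files, e.g. named.conf.options, even though the error points at the include line in named.conf):

        # Validate the whole configuration, following includes
        named-checkconf /etc/bind/named.conf

        # Validate an individual zone file
        named-checkzone example.com /etc/bind/db.example.com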

    Read the article

  • Running DNS locally for home network

    - by Roy Rico
    I have a small home network that just got larger (new roommate; my existing roommate got a laptop on top of her computer; friends coming over with laptops; etc.). I'd like to run a local DNS server for lookups of my local network stuff (fileserver.local, windowsTV.local, machineA.local, machineB.local, appletv.local). I used to have a business line with a static IP, and ran BIND/named internally. However, now I have a normal account. My ISP's DNS servers are constantly changing (for whatever reason, my ISP doesn't like to keep the same IP range for long). I need my local DNS to be automatically updated to use my ISP's DNS for external traffic, but still be able to maintain an internal DNS server (having to update the hosts file is a hassle with every new machine, on top of rebuilding existing machines with Windows 7 or Ubuntu 9.04). Additionally, my ISP's DNS servers often crash or become unresponsive. Are there any open DNS servers that are reliable (I don't want to reconfigure every day) that I could use as my primary, falling back to my ISP's if those fail? UPDATE: I'm also looking for each workstation to be able to connect using DHCP, but instead of getting the ISP DNS servers, getting my internal one. Thanks
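
    For a network this size, dnsmasq is a common fit: it answers local names from /etc/hosts, forwards everything else to upstream resolvers of your choice, and can hand out DHCP leases that advertise itself as the DNS server. A sketch of /etc/dnsmasq.conf (addresses and ranges are placeholders):

        # Answer *.local from /etc/hosts and never forward those queries upstream
        domain=local
        local=/local/
        expand-hosts

        # Use these upstream resolvers (OpenDNS here) instead of the ISP's
        no-resolv
        server=208.67.222.222
        server=208.67.220.220

        # Hand out DHCP leases; clients get this box as their DNS server
        dhcp-range=192.168.1.100,192.168.1.200,12h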

    Read the article

  • Using smartctl to get vendor-specific attributes from an SSD drive behind a Smart Array P410 controller

    - by Lairsdragon
    Recently I deployed some HP servers with SSDs behind a Smart Array P410 controller. While not officially supported by HP, the servers have worked well so far. Now I'd like to get wear-level info, error statistics, etc. from the drives. While the Smart Array P410 supports a pass-through of the SMART command to a single drive in the array, I was not able to get the interesting things from the drive's output. In this case, the value of particular interest to me is the wear-level indicator (attribute ID 233), but this is only present if the drive is directly attached to a SATA controller.

    smartctl on a directly connected SSD:

        # smartctl -A /dev/sda
        smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen
        Home page is http://smartmontools.sourceforge.net/

        === START OF READ SMART DATA SECTION ===
        SMART Attributes Data Structure revision number: 5
        Vendor Specific SMART Attributes with Thresholds:
        ID# ATTRIBUTE_NAME          FLAG    VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED  RAW_VALUE
          3 Spin_Up_Time            0x0000  100   000   000    Old_age  Offline In_the_past  0
          4 Start_Stop_Count        0x0000  100   000   000    Old_age  Offline In_the_past  0
          5 Reallocated_Sector_Ct   0x0002  100   100   000    Old_age  Always  -            0
          9 Power_On_Hours          0x0002  100   100   000    Old_age  Always  -            8561
         12 Power_Cycle_Count       0x0002  100   100   000    Old_age  Always  -            55
        192 Power-Off_Retract_Count 0x0002  100   100   000    Old_age  Always  -            29
        232 Unknown_Attribute       0x0003  100   100   010    Pre-fail Always  -            0
        233 Unknown_Attribute       0x0002  088   088   000    Old_age  Always  -            0
        225 Load_Cycle_Count        0x0000  198   198   000    Old_age  Offline -            508509
        226 Load-in_Time            0x0002  255   000   000    Old_age  Always  In_the_past  0
        227 Torq-amp_Count          0x0002  000   000   000    Old_age  Always  FAILING_NOW  0
        228 Power-off_Retract_Count 0x0002  000   000   000    Old_age  Always  FAILING_NOW  0

    smartctl on a P410-connected SSD (the output is completely empty):

        # ./smartctl -A -d cciss,0 /dev/cciss/c1d0
        smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build)
        Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

    smartctl on a P410-connected HDD:

        # ./smartctl -A -d cciss,0 /dev/cciss/c0d0
        smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build)
        Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

        Current Drive Temperature:     27 C
        Drive Trip Temperature:        68 C

        Vendor (Seagate) cache information
          Blocks sent to initiator = 1871654030
          Blocks received from initiator = 1360012929
          Blocks read from cache and sent to initiator = 2178203797
          Number of read and write commands whose size <= segment size = 46052239
          Number of read and write commands whose size > segment size = 0

        Vendor (Seagate/Hitachi) factory information
          number of hours powered up = 3363.25
          number of minutes until next internal SMART test = 12

    Am I hunting a bug here, or is this a limitation of the P410's SMART command pass-through?

    Read the article

  • Windows 7 machine, can't connect remotely until after ping

    - by rjohnston
    I have a Windows 7 (Home Premium) machine that doubles as a media centre and Subversion server. There are a couple of problems with this setup when connecting to the server from an XP (SP3) machine. Firstly, the machine won't respond to its machine name until after its IP address has been pinged. Here's an example:

        Microsoft Windows XP [Version 5.1.2600]
        (C) Copyright 1985-2001 Microsoft Corp.

        C:\Documents and Settings\Rob>ping damascus
        Ping request could not find host damascus. Please check the name and try again.

        C:\Documents and Settings\Rob>ping 192.168.1.17
        Pinging 192.168.1.17 with 32 bytes of data:
        Reply from 192.168.1.17: bytes=32 time=2ms TTL=128
        ...
        Ping statistics for 192.168.1.17:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 1ms, Maximum = 2ms, Average = 1ms

        C:\Documents and Settings\Rob>ping damascus
        Pinging damascus [192.168.1.17] with 32 bytes of data:
        Reply from 192.168.1.17: bytes=32 time<1ms TTL=128
        ....
        Ping statistics for 192.168.1.17:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 0ms, Maximum = 1ms, Average = 0ms

        C:\Documents and Settings\Rob>

    Likewise, Subversion commands with either the machine name or the IP address will fail until the machine's IP address is pinged. Occasionally, the machine won't respond to pings on its IP address either; it'll just come back with "Request timed out". The svn server is VisualSVN, if that helps... Any ideas?
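
    To confirm this is a name-resolution problem rather than a connectivity one, comparing the resolver caches before and after the IP ping can help (a sketch, run from the XP machine):

        rem What does the NetBIOS name cache hold right now?
        nbtstat -c

        rem Query the target's NetBIOS name table directly by IP
        nbtstat -A 192.168.1.17

        rem Rule out a stale negative DNS cache entry
        ipconfig /displaydns
        ipconfig /flushdns

    If the name only shows up in the NetBIOS cache after the ping, the Windows 7 box is probably not answering name broadcasts until something wakes it up; NIC power-management settings are a common suspect.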

    Read the article

  • Usage of PuTTY on the command line from Hudson

    - by kij
    Hi, I'm trying to use PuTTY on the command line from a Hudson job. The command is the following one:

        putty -ssh -2 -P 22 USERNAME@SERVER_ADDR -pw PASS -m command.txt

    where command.txt is a shell script to execute on the server through SSH. If I launch this command from the Windows command prompt, it works: the shell script is executed on the server machine. If I launch a build of the Hudson job configured with this batch command, it doesn't work. The build keeps running... and running... and running... without doing anything, and I have to stop it manually. So my question is: is it possible to launch an external program (i.e. PuTTY) from a Hudson job?

    PS: I tried the SSH plugin but... it's not a really good plugin (pre/post build only, fail status of the launched commands not caught by Hudson, etc.).

    Thanks in advance for your help. Best regards, kij

    EDIT: These are the build logs ("Le build a été annulé" is French for "The build was cancelled"):

        [workspace] $ cmd /c call C:\WINDOWS\TEMP\hudson7429256014041663539.bat
        C:\Hudson\jobs\Artifact deployer\workspace>putty -ssh -2 -P 22 USER@SERV_ADD -pw PASS -m com.txt
        Le build a été annulé
        Finished: ABORTED

    And the hudson.err.log file at the same time (after a stop):

        3 juin 2010 18:27:28 hudson.model.Run run
        INFO: Artifact deployer #6 aborted
        java.lang.InterruptedException
            at java.lang.ProcessImpl.waitFor(Native Method)
            at hudson.Proc$LocalProc.join(Proc.java:179)
            at hudson.Launcher$ProcStarter.join(Launcher.java:278)
            at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:83)
            at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:58)
            at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:19)
            at hudson.model.AbstractBuild$AbstractRunner.perform(AbstractBuild.java:601)
            at hudson.model.Build$RunnerImpl.build(Build.java:174)
            at hudson.model.Build$RunnerImpl.doRun(Build.java:138)
            at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:416)
            at hudson.model.Run.run(Run.java:1241)
            at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
            at hudson.model.ResourceController.execute(ResourceController.java:88)
            at hudson.model.Executor.run(Executor.java:124)

    My shell script only writes "hello" to a "hello.txt" file on the server, and nothing is done.
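
    putty is a GUI program and will happily sit waiting on a dialog (a host-key prompt, for instance) that a Hudson build can never answer. Its command-line sibling plink, shipped in the same PuTTY distribution, is meant for exactly this; a sketch with the same arguments:

        plink -ssh -2 -P 22 -batch -pw PASS -m command.txt USERNAME@SERVER_ADDR

    The -batch flag makes plink abort instead of prompting, so the server's host key must already be cached: connect once interactively under the same user account the Hudson service runs as.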

    Read the article

  • Internal and External DNS from Different Servers, Same Zone

    - by Shane
    Hello all, I am either having trouble understanding how DNS works, or I am having trouble configuring my DNS correctly (either way isn't good). I am currently working with a domain, I'll call it webdomain.com, and I need to allow all of our internal users to get our public DNS entries from Dotster just like the rest of the world. Then, on top of that, I want to be able to supply just a few override DNS entries for testing servers and equipment that are not available publicly. As an example:

    - public.webdomain.com: should come from Dotster
    - outside.webdomain.com: should come from Dotster as well
    - testing.webdomain.com: should come from my internal DNS controller

    The problem I seem to be running into at every turn is that if I have an internal DNS server that contains a zone for webdomain.com, then I can get my specified internal entries but never get anything from the public DNS server. This holds true regardless of the type of DNS server I use; I have tried both Linux Bind9 and a Windows 2008 domain controller. I guess my big question is: am I being unreasonable to think that a system should be able to check my specified internal DNS and, in the case where a requested entry doesn't exist, fail over to the specified public DNS server? Or is this just not the way DNS works, and I am lost in the sauce? It seems like it should be as simple as telling my internal DNS server to forward any requests that it can't fulfill to Dotster, but that doesn't seem to work. Could this be a firewall issue? Thanks in advance
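
    DNS has no per-record fallback: a server authoritative for webdomain.com answers authoritatively, full stop. The usual trick is to make the internal server authoritative only for the names you want to override and forward everything else. A sketch in Bind9 syntax (file paths and forwarder addresses are placeholders):

        options {
            // Anything we are not authoritative for goes out to public resolvers
            forwarders { 8.8.8.8; 8.8.4.4; };
        };

        // Authoritative for just this one name; public webdomain.com records
        // still come from Dotster via the forwarders
        zone "testing.webdomain.com" {
            type master;
            file "/etc/bind/db.testing.webdomain.com";
        };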

    Read the article

  • Hyper-V 2008 R2 synthetic networking stops working with Linux 2.6.32.15

    - by luxifer
    Hi there, so I thought I'd give Hyper-V on Windows Server 2008 R2 Enterprise a try on my home server (yes, it's legit... got it from MSDNAA). The first thing to throw at it was my firewall, which runs IPFire. This distribution currently uses kernel version 2.6.32.15 and comes with the Hyper-V drivers. So I enabled them, and at first they work just fine, but after a few minutes they just fail: no packets go in or out anymore until I reboot the VM, and sometimes even that won't work, so the VM just keeps "Stopping" like forever. Emulated networking works fine, but it's slow and uses more CPU. This way, my firewall routes more slowly than it did running under VirtualBox on an Atom N270. My server has an E6750; the VM is limited to 25%, but that should still outperform the Atom CPU, especially since it's never going anywhere near 100% CPU load, so give me a break! A quick Google search led me to people having the same problem (even with other distributions and kernel versions that include those drivers) but no solution yet... I already found this, but I can't quite follow the author on the part where he solved the issue, especially since I need two virtual NICs for my firewall distro to work (obviously one internal and one external). What am I missing here?

    Read the article

  • How harmful is a hard disk spin cycle?

    - by Gilles
    It is conventional wisdom¹ that each time you spin a hard disk down and back up, you shave some time off its life expectancy. The topic has been discussed before: Is turning off hard disks harmful? What's the effect of standby (spindown) mode on modern hard drives? Common explanations for why spindowns and spinups are harmful are that they induce more stress on the mechanical parts than ordinary running, and that they cause heat variations that are harmful to the device mechanics. Is there any data showing quantitatively how bad a spin cycle is? That is, how much life expectancy does a spin cycle cost? Or, more practically, if I know that I'm not going to need a disk for X seconds, how large should X be to warrant spinning down? ¹ But conventional wisdom has been wrong before; for example, it is commonly held that hard disks should be kept as cool as possible, but the one published study on the topic shows that cooler drives actually fail more. This study is no help here since all the disks surveyed were powered on 24/7.
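
    Not a substitute for the quantitative data the question asks for, but the relevant counters are exposed over SMART, so a drive's spin-cycle consumption can at least be tracked against its rated figure (a sketch; attribute names vary by vendor):

        # Start_Stop_Count and Load_Cycle_Count record exactly the events in question
        smartctl -A /dev/sda | grep -i -e start_stop -e load_cycle

        # Spin down after 5 minutes idle (-S takes multiples of 5 seconds; 0 disables)
        hdparm -S 60 /dev/sda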

    Read the article
