Search Results


  • Reset rc.d so software starts at boot again

    - by natli
    I ran the following two commands on my VPS box and now it boots without starting any software at all. According to rcconf it's still supposed to start my chosen software (ssh etc.), but it doesn't.

        update-rc.d vz defaults
        update-rc.d vzeventd defaults

    I already tried removing them again with

        update-rc.d -f vz remove
        update-rc.d -f vzeventd remove

    but that didn't change anything. /etc/rc.local also still correctly lists some scripts I want to run at start-up, but they don't seem to be called either. I expect the top two commands to be responsible, but here's everything I did:

        mkdir /var/openvz-dl
        cd /var/openvz-dl
        wget http://download.openvz.org/kernel/branches/rhel6-2.6.32/042stab062.2/vzkernel-2.6.32-042stab062.2.x86_64.rpm
        wget http://download.openvz.org/kernel/branches/rhel6-2.6.32/042stab062.2/vzkernel-devel-2.6.32-042stab062.2.x86_64.rpm
        wget http://download.openvz.org/utils/vzctl/4.0/vzctl-4.0-1.x86_64.rpm
        wget http://download.openvz.org/utils/vzctl/4.0/vzctl-core-4.0-1.x86_64.rpm
        wget http://download.openvz.org/utils/ploop/1.5/ploop-1.5-1.x86_64.rpm
        wget http://download.openvz.org/utils/ploop/1.5/ploop-lib-1.5-1.x86_64.rpm
        wget http://download.openvz.org/utils/vzquota/3.1/vzquota-3.1-1.x86_64.rpm
        apt-get install fakeroot alien
        fakeroot alien --to-deb --scripts --keep-version vz*.rpm ploop*.rpm
        dpkg -i vz*.deb ploop*.deb --force-overwrite
        update-rc.d vz defaults
        update-rc.d vzeventd defaults
        reboot

    A huge part of that failed because I was running it on an OpenVZ VPS, which has a shared kernel that can't be altered, so I also had to fix dpkg like so (it was moaning about wanting to install vzkernel, with a package not being found):

        rm /var/lib/dpkg/info/vzkernel*
        dpkg-reconfigure vzkernel --force
        dpkg --purge --force-all vzkernel

    But that didn't fix the boot issue either. How do I make my software start at boot again?
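
    A minimal recovery sketch, assuming the converted vz/vzeventd init scripts broke the runlevel links for the other services (ssh is used here only as an example service name):

        # See what is still scheduled to start at the default runlevel (2 on Debian)
        ls -l /etc/rc2.d/
        # Recreate the standard start/stop links for a service that no longer starts
        update-rc.d -f ssh remove
        update-rc.d ssh defaults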

  • Software RAID 1 Configuration

    - by Corve
    I created a software RAID 1 quite a while ago and it has always seemed to work for me. However, I am not completely sure that I configured everything right, and I don't have the experience to check, so I would be very grateful for some advice, or just verification that all seems right so far. I am using Linux Fedora 20 (32-bit, with plans to upgrade to 64-bit). The RAID 1 should consist of two 1 TB SATA hard drives. This is the output of mdadm --detail /dev/md0:

        /dev/md0:
                Version : 1.2
          Creation Time : Sun Jan 29 11:25:18 2012
             Raid Level : raid1
             Array Size : 976761424 (931.51 GiB 1000.20 GB)
          Used Dev Size : 976761424 (931.51 GiB 1000.20 GB)
           Raid Devices : 2
          Total Devices : 1
            Persistence : Superblock is persistent
            Update Time : Sat Jun  7 10:38:09 2014
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 0
          Spare Devices : 0
                   Name : argo:0  (local to host argo)
                   UUID : 1596d0a1:5806e590:c56d0b27:765e3220
                 Events : 996387

            Number   Major   Minor   RaidDevice State
               0       0        0        0      removed
               1       8        0        1      active sync   /dev/sda

    The RAID is mounted successfully:

        friedrich@argo:~ ? sudo mount -l | grep md0
        /dev/md0 on /mnt/raid type ext4 (rw,relatime,data=ordered)

    Basically my questions are: Why do I only have one active device? What does the "removed" state at the bottom mean?

    I've also noticed some strange error messages on the console at system start and shutdown, repeating in the background when I switch with Ctrl+Alt+F2:

        ata2: irq_stat 0x00000040 connection status changed
        ata2: SError: { CommWake DevExch }
        ata2: COMRESET failed (errno=-32)
        ata2: exception Emask 0x10 SAct 0x0 SErr 0x4040000 action 0xe frozen
        ata2: irq_stat 0x00000040 connection status changed
        ata2: SError: { CommWake DevExch }
        ata2: exception Emask 0x10 SAct 0x0 SErr 0x4040000 action 0xe frozen

    Are these errors related to the RAID? Something seems wrong with the SATA devices. All together the system works (I can read and write to the mounted RAID), but I always get these strange errors at startup and shutdown (and probably continuously in the background). Thanks for your help.
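
    The output shows the mirror running degraded on /dev/sda alone; the second member has dropped out (the ata2 errors suggest a flaky link or disk on that port). A sketch of how the second disk would be re-attached once the hardware problem is resolved - the device name /dev/sdb is an assumption:

        # Confirm the second disk is visible and healthy first
        smartctl -a /dev/sdb
        # Then re-add it to the mirror and watch the resync
        mdadm --manage /dev/md0 --add /dev/sdb
        cat /proc/mdstat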

  • Writing XML with PowerShell

    - by alex
    I have a script that gets all the info I need about my SharePoint farm:

        [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint") > $null
        $farm = [Microsoft.SharePoint.Administration.SPFarm]::Local
        $websvcs = $farm.Services | where -FilterScript {$_.GetType() -eq [Microsoft.SharePoint.Administration.SPWebService]}
        $webapps = @()
        foreach ($websvc in $websvcs) {
            write-output "Web Applications"
            write-output ""
            foreach ($webapp in $websvc.WebApplications) {
                write-output "Webapp Name -->" $webapp.Name
                write-output ""
                write-output "Site Collections"
                write-output ""
                foreach ($site in $webapp.Sites) {
                    write-output "Site URL --> -->" $site.URL
                    write-output ""
                    write-output "Websites"
                    write-output ""
                    foreach ($web in $site.AllWebs) {
                        write-output "Web URL --> --> -->" $web.URL
                        write-output ""
                        write-output "Lists"
                        write-output ""
                        foreach ($list in $web.Lists) {
                            write-output "List Title --> --> --> -->" $list.Title
                            write-output ""
                        }
                        foreach ($group in $web.Groups) {
                            write-output "Group Name --> --> --> -->" $group.Name
                            write-output ""
                            foreach ($user in $group.Users) {
                                write-output "User Name --> --> --> -->" $user.Name
                                write-output ""
                            }
                        }
                    }
                }
            }
        }

    I want to write the output to an XML file, and then connect the XML file to HTML and make a site of it for managers to use. How can I do it? Thanks for the help!
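
    One way to produce XML instead of plain text is to replace the write-output calls with an XmlTextWriter, which handles escaping and indentation. A minimal sketch covering the outer two loops - the output path and element names are assumptions, and the inner loops for webs, lists, groups and users follow the same WriteStartElement/WriteEndElement pattern:

        [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint") > $null
        $farm = [Microsoft.SharePoint.Administration.SPFarm]::Local
        $websvcs = $farm.Services | where -FilterScript {$_.GetType() -eq [Microsoft.SharePoint.Administration.SPWebService]}

        # Assumed output location; $null encoding means UTF-8
        $xml = New-Object System.Xml.XmlTextWriter("C:\Reports\farm.xml", $null)
        $xml.Formatting = "Indented"
        $xml.WriteStartDocument()
        $xml.WriteStartElement("Farm")
        foreach ($websvc in $websvcs) {
            foreach ($webapp in $websvc.WebApplications) {
                $xml.WriteStartElement("WebApplication")
                $xml.WriteAttributeString("Name", $webapp.Name)
                foreach ($site in $webapp.Sites) {
                    $xml.WriteStartElement("SiteCollection")
                    $xml.WriteAttributeString("Url", $site.Url)
                    # ... nested loops for AllWebs, Lists, Groups and Users go here ...
                    $xml.WriteEndElement()
                }
                $xml.WriteEndElement()
            }
        }
        $xml.WriteEndElement()
        $xml.WriteEndDocument()
        $xml.Close()

    Once the XML exists, an XSLT stylesheet (or ConvertTo-Html over the same objects) can turn it into the HTML page for the managers.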

  • Chef bash resource not executing as specified user

    - by Arthur Maltson
    I'm writing a Chef cookbook to install Hubot. In the recipe, I do the following:

        bash "install hubot" do
          user hubot_user
          group hubot_group
          cwd install_dir
          code <<-EOH
            wget https://github.com/downloads/github/hubot/hubot-#{node['hubot']['version']}.tar.gz && \
            tar xzvf hubot-#{node['hubot']['version']}.tar.gz && \
            cd hubot && \
            npm install
          EOH
        end

    However, when I run chef-client on the server installing the cookbook, I get a "permission denied" writing to the home directory of the user that runs chef-client, not the hubot user. For some reason, npm is running under the wrong user, not the user specified in the bash resource. I am able to run sudo su - hubot -c "npm install /usr/local/hubot/hubot" manually, and this gets the result I want (installs hubot as the hubot user). However, it seems chef-client isn't executing the command as the hubot user. Below you'll find the chef-client output. Thank you in advance.

        Saving to: `hubot-2.1.0.tar.gz'
        0K ...... 100%  563K=0.01s
        2012-01-23 12:32:55 (563 KB/s) - `hubot-2.1.0.tar.gz' saved [7115/7115]
        npm ERR! Could not create /home/<user-chef-client-uses>/.npm/log/1.2.0/package.tgz
        npm ERR! Failed creating the tarball.
        npm ERR! couldn't pack /tmp/npm-1327339976597/1327339976597-0.13104878342710435/contents/package to /home/<user-chef-client-uses>/.npm/log/1.2.0/package.tgz
        npm ERR! error installing [email protected] Error: EACCES, permission denied '/home/<user-chef-client-uses>/.npm/log'
        ...
        npm not ok
        ---- End output of "bash"  "/tmp/chef-script20120123-25024-u9nps2-0" ----
        Ran "bash"  "/tmp/chef-script20120123-25024-u9nps2-0" returned 1
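
    A likely culprit: the user attribute on a script resource switches the uid, but it does not change $HOME, so npm still tries to write its cache under the home directory of the account running chef-client. A sketch of the same resource with the environment forced - the home path is an assumption:

        bash "install hubot" do
          user hubot_user
          group hubot_group
          cwd install_dir
          # Without this, HOME still points at the chef-client user's home
          environment('HOME' => "/home/#{hubot_user}", 'USER' => hubot_user)
          code <<-EOH
            wget https://github.com/downloads/github/hubot/hubot-#{node['hubot']['version']}.tar.gz && \
            tar xzvf hubot-#{node['hubot']['version']}.tar.gz && \
            cd hubot && \
            npm install
          EOH
        end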

  • Explorer.exe hangs when moving large files to an external drive

    - by PiotrK
    When moving large files (700 MB+) to an external drive formatted NTFS via USB 3.0, I've noticed strange things about explorer.exe (I am using Windows 7, up to date).

    Sometimes after moving a file, Explorer gets stuck (it can happen after a few files during a move of several large files) - the moving window freezes and I am unable to kill explorer.exe (via Task Manager, or TASKKILL on the command line). On the command line I get something like this (Task Manager shows that explorer.exe is still running - I get the same PID every time I try to kill it, and no diagnostic message):

        C:\Windows\system32> TASKKILL /F /IM explorer.exe
        SUKCES: proces "explorer.exe" o identyfikatorze PID 6296 zostal zakonczony.
        (SUCCESS: the process "explorer.exe" with PID 6296 has been terminated.)

        C:\Windows\system32> TASKKILL /F /IM explorer.exe
        SUKCES: proces "explorer.exe" o identyfikatorze PID 6296 zostal zakonczony.

    If I try to run another explorer.exe process at this point, I get the desktop icons and taskbar back, but I cannot open any Explorer window. After a few minutes explorer.exe finally dies and I am able to rerun it without rebooting.

    The file that I moved has two copies - one local and one on the external drive (the original file wasn't deleted after the move); both copies seem to contain the same data (same length and CRC). If this happens during a move of multiple files, only some files are moved, and one of them has two copies (both locally and on the external drive).

    What can I do to fix these Explorer hangs?

    Added: The same problem exists when copying files; it hangs between large files. A similar problem exists when I tried to use TotalCommander (x64): copying paused at 80% of one of the files. TC didn't hang (but clicking cancel in the copying dialog box has no effect), and during this pause I can't kill TotalCmd.exe, just like explorer.exe.

    Added (2): This problem seems to disappear when I use 32-bit applications (like TotalCommander (x86)), but I need to do more testing to be sure of this.

    Added (3): There are several errors in the event log - source: disk, id: 11, qualifiers: 49156, task: 0, level: 2, keywords: 0x80000000000000. (This may be important, and I forgot to mention it: the main disk is encrypted via TrueCrypt (boot-time password).)

  • Subversion error: "Repository moved permanently; please relocate"

    - by Bart S.
    I've set up Subversion and Apache on my server. If I browse to it through my web browser it works fine (http://svn.host.com/reposname). However, if I do a checkout on my machine I get the following error:

        Command: Checkout from http://svn.host.com/reposname, revision HEAD, Fully recursive, Externals included
        Error: Repository moved permanently to 'http://svn.host.com/reposname/'; please relocate

    I checked Apache's error log, but it doesn't say anything. (It does now - see edit.)

    My repositories are stored under /var/www/svn/repos/. My website is stored under /var/www/vhosts/x/...

    Here's the conf file for the subdomain:

        <Location />
            DAV svn
            SVNParentPath /var/www/svn/repos/
            AuthType Basic
            AuthName "Authorization Realm"
            AuthUserFile /var/www/svn/auth/svn.htpasswd
            Require valid-user
        </Location>

    Authentication works fine. Does anyone know what might be causing this?

    Edit: So I restarted Apache (again) and tried it again, and now it gives me an error message, but it doesn't really help. Anyone have an idea what it means?

        [Wed Mar 31 23:41:55 2010] [error] [client my.ip.he.re] Could not fetch resource information.  [403, #0]
        [Wed Mar 31 23:41:55 2010] [error] [client my.ip.he.re] (2)No such file or directory: The URI does not contain the name of a repository.  [403, #190001]

    Edit 2: If I do svn info it doesn't give anything useful:

        [root@eduro eduro.nl]# svn info http://svn.domain.com/repos/
        Username: username
        Password for 'username':
        svn: Repository moved permanently to 'http://svn.domain.com/repos/'; please relocate

    I also tried doing a local checkout (svn checkout file:///var/www/svn/repos/reposname) and that works fine (adding/committing also works fine). So it seems it has something to do with Apache.

    Some other information: I'm running CentOS 5.3, Plesk 9.3, and Subversion 1.6.9 (r901367).

    Edit 3: I tried moving the repositories, but it didn't make any difference. SELinux is disabled, so that isn't it either.

    Edit 4: Really? Nobody? :(
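
    A frequent trigger for an endless "Repository moved permanently" is mod_dav_svn sharing its URL space with the virtual host's regular DocumentRoot handlers, which happens easily with <Location />. A hedged sketch of the common workaround - giving SVN its own URL prefix (the /repos prefix here is an assumption):

        <Location /repos>
            DAV svn
            SVNParentPath /var/www/svn/repos/
            AuthType Basic
            AuthName "Authorization Realm"
            AuthUserFile /var/www/svn/auth/svn.htpasswd
            Require valid-user
        </Location>

    Checkouts would then address http://svn.host.com/repos/reposname instead of http://svn.host.com/reposname.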

  • How do I uninstall a Ruby version installed from source?

    - by Aaron McIver
    I installed a version (1.9.3-p194) of Ruby from source using make install and realized this may have been the wrong route to take. Upon doing this, I realized it was a mistake and I should be using a solution such as rvm to manage my Ruby versions within the OS. I looked to see if an uninstall target existed to be run in conjunction with make, and it didn't. I then proceeded to install rvm and add the aforementioned version to my list of managed rubies within rvm, which is now listed as ext-ruby-1.9.3-p194:

        rvm rubies

           ext-ruby-1.9.3-p194 [ x86_64 ]
        =* ruby-1.9.3-p194 [ x86_64 ]

        # => - current
        # =* - current && default
        #  * - default

    When I perform an rvm remove, it simply removes the entry from the rubies list; the install itself still exists within /usr/local/bin. I am not concerned with the system Ruby version residing in /usr/bin, as I understand that is tied to the OS and should simply be ignored. How can I safely uninstall/remove the aforementioned version from all the places in which it was installed, short of looking at the install script?
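
    A cautious sketch for finding exactly what that make install put under /usr/local: re-run the install into a scratch DESTDIR from the same build tree, list the staged files, and delete only those paths from the real prefix (the build directory path is an assumption):

        cd ~/src/ruby-1.9.3-p194
        make install DESTDIR=/tmp/ruby-staging
        # Every path below mirrors something installed under /usr/local
        cd /tmp/ruby-staging
        find . -type f -o -type l | sed 's|^\.||' > /tmp/ruby-files.txt
        # Review the list carefully, then remove those files from the real root
        while read f; do rm -f "$f"; done < /tmp/ruby-files.txt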

  • Script to list users' mapped drives gives no results and no errors

    - by user223631
    We are in the process of migrating two file servers to a new server. We map drives per user group in Group Policy, but many users have manually mapped drives, and we need to find those mappings. I have created a PowerShell script to get the drive mappings remotely. It works on most computers, but there are many that return no results, and I am not getting any error messages. Each workstation on the list creates a text file, and the ones that return nothing produce empty files. I can ping these machines. If a machine is not turned on, I do get an error message that the RPC server is unavailable. My domain user account is in a group that is in the local admin group. I have no idea why some are not working. Here is the script:

        # Load list into variable, which will become an array of strings
        If( !(Test-Path C:\Scripts)) { New-Item C:\Scripts -ItemType directory }
        If( !(Test-Path C:\Scripts\Computers)) { New-Item C:\Scripts\Computers -ItemType directory }
        If( !(Test-Path C:\Scripts\Workstations.txt)) { "No Workstations found. Please enter a list of Workstations under Workstation.txt"; Return }
        If( !(Test-Path C:\Scripts\KnownMaps.txt)) { "No Mapping to check against. Please enter a list of Known Mappings under KnownMaps.txt"; Return }

        $computerlist = Get-Content C:\Scripts\Workstations.txt

        # Loop through each item in the array (each computer in the list of computers we loaded into the variable)
        ForEach ($computer in $computerlist) {
            $diskObject = Get-WmiObject Win32_MappedLogicalDisk -computerName $computer |
                Select Name,ProviderName | Out-File C:\Tester\Computers\$computer.txt -width 200
        }

        Select-String -Path C:\Tester\Computers\*.txt -Pattern cmsfiles | Out-File C:\Tester\Drivemaps-all.txt

        $strings = Get-Content C:\Tester\KnownMaps.txt

        Select-String -Path C:\Tester\Drivemaps-all.txt -Pattern $strings -notmatch -simplematch | Out-File C:\Tester\Drivemaps-nonmatch.txt -Width 200
        Select-String -Path C:\Tester\Drivemaps-all.txt -Pattern $strings -simplematch | Out-File C:\Tester\Drivemaps-match.txt -Width 200
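
    One thing worth checking: Win32_MappedLogicalDisk reports the mappings visible to the account making the WMI query, and since manually mapped drives are per-user, a remote query under an admin account may legitimately return nothing even when the logged-on user has drives mapped. A sketch that at least separates unreachable or access-denied hosts from genuinely empty results (paths as in the original script):

        ForEach ($computer in $computerlist) {
            Try {
                Get-WmiObject Win32_MappedLogicalDisk -ComputerName $computer -ErrorAction Stop |
                    Select-Object Name, ProviderName |
                    Out-File "C:\Tester\Computers\$computer.txt" -Width 200
            } Catch {
                # Record why this host produced nothing instead of failing silently
                "$computer : $($_.Exception.Message)" | Out-File C:\Tester\Errors.txt -Append
            }
        }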

  • Debian Wheezy IPv6 isn't configured with ifup post-up hook

    - by aef
    We recently set up a server on Debian Wheezy Beta 3 (x86_64) which has a native IPv6 connection. We configured the eth0 interface to get its IPv6 configuration through some post-up hook commands in /etc/network/interfaces. The result is that after booting the system up, only IPv4 and an auto-configured link-local IPv6 address are present on the interface, as if the commands had never been executed. When we additionally place the commands after the call to ifup -a inside the /etc/init.d/networking init script, everything works as expected and we have a fully configured interface after booting up. But this is quite an ugly way to configure the interface.

    What are we doing wrong with the ifup post-up hooks? Or is this a bug?

    The section from /etc/network/interfaces looks like this (IP addresses changed):

        allow-hotplug eth0
        iface eth0 inet static
            address 1.2.3.1
            netmask 255.255.255.192
            network 1.2.3.0
            broadcast 1.2.3.63
            gateway 1.2.3.62
            dns-nameservers 8.8.8.8
            dns-search mydomain.tld
            post-up ip -6 addr add 2001:db8:100:3022::2 dev eth0
            post-up ip -6 route add fe80::1 dev eth0
            post-up ip -6 route add default via fe80::1 dev eth0

    I also tried it this alternative way:

        auto eth0
        iface eth0 inet static
            address 1.2.3.1
            netmask 255.255.255.192
            network 1.2.3.0
            broadcast 1.2.3.63
            gateway 1.2.3.62
            dns-nameservers 8.8.8.8
            dns-search mydomain.tld

        iface eth0 inet6 static
            address 2001:db8:100:3022::2
            netmask 64
            gateway fe80::1

    What we added to /etc/init.d/networking:

        case "$1" in
        start)
                process_options
                check_ifstate
                if [ "$CONFIGURE_INTERFACES" = no ]
                then
                    log_action_msg "Not configuring network interfaces, see /etc/default/networking"
                    exit 0
                fi
                set -f
                exclusions=$(process_exclusions)
                log_action_begin_msg "Configuring network interfaces"
                if ifup -a $exclusions $verbose && ifup_hotplug $exclusions $verbose
                # Our additions
                ip -6 addr add 2001:db8:100:3022::2 dev eth0
                ip -6 route add fe80::1 dev eth0
                ip -6 route add default via fe80::1 dev eth0
                then
                    log_action_end_msg $?
                else
                    log_action_end_msg $?
                fi
                ;;
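
    For comparison, a sketch of the stanza layout that normally brings IPv6 up without touching the init script: "auto" rather than "allow-hotplug", so /etc/init.d/networking itself raises the interface at boot, with the extra routes hung off the inet6 family (addresses as in the question):

        auto eth0
        iface eth0 inet static
            address 1.2.3.1
            netmask 255.255.255.192
            gateway 1.2.3.62
            dns-nameservers 8.8.8.8
            dns-search mydomain.tld

        iface eth0 inet6 static
            address 2001:db8:100:3022::2
            netmask 64
            post-up ip -6 route add fe80::1 dev eth0
            post-up ip -6 route replace default via fe80::1 dev eth0

    Whether allow-hotplug is the actual culprit is an assumption worth testing: with allow-hotplug the interface is configured from udev events, and a post-up that fires before the link is fully up can fail silently.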

  • OpenVZ: Choosing the right MySQL server depending on host

    - by Scheintod
    What I have:

    - Two servers running Wheezy/OpenVZ
    - One MySQL container on each host, master/master replicated (mysql1/mysql2)
    - Replicated DNS on each host (dns1/dns2)
    - Different web containers on each host, regularly backed up to the other

    What I want:

    - Each container should use the "local" MySQL server (the one running on the same hardware node).
    - I'd like to be able to move the web containers between the two hosts.
    - Each container should choose the MySQL server (semi-)automatically.
    - This scheme should continue working if one host is down.

    What I tried: Currently I keep track of which container should run on which host via DNS entries, which are queried by scripts, e.g. for questions like "Which container should be backed up on/to which host?" For choosing the right MySQL server I have one extra entry like "mysql.container_abc" which resolves to either mysql1 or mysql2. So in the applications in the container I can use "mysql.container_abc" for e.g. mysql_connect, and if I want to move the container around I just need to change the DNS. Now I've noticed one problem with this approach: every mysql_connect generates one DNS query, because the DNS is not cached, and this slows the request down unnecessarily.

    What I would like better: some way of passing the information about which host we are running on to the container and using it directly instead of via DNS - e.g. some way of setting a custom /etc/hosts entry in the container, or any other great idea. It doesn't have to involve DNS, but it shouldn't require too much special "magic" inside the container.
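
    One way to avoid DNS entirely, as a sketch: have each hardware node write a fixed name into every local container's /etc/hosts, pointing at whichever MySQL container runs on that node (the hostname, addresses and use of vzlist/vzctl here are assumptions about your setup):

        # Run on each hardware node; MYSQL_IP differs per node
        MYSQL_IP=10.0.0.11
        for CT in $(vzlist -H -o ctid); do
            vzctl exec "$CT" "grep -q mysql.local /etc/hosts || echo '$MYSQL_IP mysql.local' >> /etc/hosts"
        done

    The applications then connect to mysql.local with no per-request DNS lookup, and moving a web container to the other node just means the hook there rewrites the entry.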

  • Postfix: outgoing mail with TLS for a specific domain

    - by vercetty92
    I am trying to configure Postfix to send mail with TLS (STARTTLS, in fact), but only for a specific destination. I tried with smtp_tls_policy_maps. This is the only line in my main.cf regarding TLS configuration, but it doesn't seem to work. Here is my main.cf:

        queue_directory = /opt/csw/var/spool/postfix
        command_directory = /opt/csw/sbin
        daemon_directory = /opt/csw/libexec/postfix
        html_directory = /opt/csw/share/doc/postfix/html
        manpage_directory = /opt/csw/share/man
        sample_directory = /opt/csw/share/doc/postfix/samples
        readme_directory = /opt/csw/share/doc/postfix/README_FILES
        mail_spool_directory = /var/spool/mail
        sendmail_path = /opt/csw/sbin/sendmail
        newaliases_path = /opt/csw/bin/newaliases
        mailq_path = /opt/csw/bin/mailq
        mail_owner = postfix
        setgid_group = postdrop
        mydomain = ullink.net
        myorigin = $myhostname
        mydestination = $myhostname, localhost.$mydomain, localhost
        masquerade_domains = vercetty92.net
        alias_maps = dbm:/etc/opt/csw/postfix/aliases
        alias_database = dbm:/etc/opt/csw/postfix/aliases
        transport_maps = dbm:/etc/opt/csw/postfix/transport
        smtp_tls_policy_maps = dbm:/etc/opt/csw/postfix/tls_policy
        inet_interfaces = all
        unknown_local_recipient_reject_code = 550
        relayhost =
        smtpd_banner = $myhostname ESMTP $mail_name
        debug_peer_level = 2
        debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin xxgdb $daemon_directory/$process_name $process_id & sleep 5

    And here is my tls_policy file:

        gmail.com   encrypt protocols=SSLv3:TLSv1 ciphers=high

    I also tried just:

        gmail.com   encrypt

    My wish is to use TLS only for the gmail domain. With this configuration, I don't see any TLS line in the source of the mail. But if I tell Postfix to use TLS if possible for all destinations with this line, it works:

        smtp_tls_security_level = may

    because I can then see this line in the source of my mail:

        (version=TLSv1/SSLv3 cipher=OTHER);

    But I don't want to try to use TLS for the other domains - only for gmail. Am I missing something in my conf? (I also tried with hash:/etc/opt/csw/postfix/tls_policy, and it's the same.) Thanks a lot in advance.
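
    Two things worth checking, sketched below: lookup-table files must be compiled with postmap after every edit, and the policy lookup key has to match the next-hop destination (for direct delivery, the recipient domain). The commands assume the dbm map type named in main.cf is actually supported by this build:

        postmap dbm:/etc/opt/csw/postfix/tls_policy
        postconf -m                    # confirm 'dbm' appears among supported map types
        postconf -e 'smtp_tls_loglevel = 1'   # raise TLS logging while testing
        postfix reload
        tail -f /var/log/mail.log | grep -i -e tls -e gmail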

  • A very peculiar problem with an old PC and a newer laptop

    - by user553492
    I got my old PC (248 MB RAM, 80 GB) repaired, and the tech people put XP on it. My newer laptop has Ubuntu 10.04. I only have one CAT5 cable and one USB cord. I connected my modem (which has only one CAT5 port and 4 USB ports) to the laptop with the CAT5 cable; the internet works fine. I also wanted internet on the older PC, so I installed the USB drivers for Windows and that worked too.

    But I got fed up with Windows XP and made a separate partition for FreeBSD, which I planned to install. During the install I screwed something up, and now FreeBSD starts with a boot menu showing a "?" in place of Windows XP. If I select that entry, it gives me an "NTLDR missing" message. I tried connecting the CAT5 cable between the old and new PC, and tried connecting my laptop with the USB cable, but nothing happened - and then I realized the modem doesn't have a working USB driver for Linux. FreeBSD doesn't even detect the LAN cable if I use it with the old PC.

    So basically I have an old PC with FreeBSD which I can only start and stare at the blank terminal console, but which works perfectly otherwise (wasn't FreeBSD supposed to detect the LAN cable?). And I have a laptop with Linux which only works if I connect it with the CAT5 cable. So what can I do with my old PC? Any local server (if that's possible), or some such thing? Can you suggest any use? I'm 18 and I'm into learning programming and coding, so I could practice on it. Thanks!

  • Git Website Deployment

    - by Brian
    I am attempting to set up Git to deploy my project to different locations based on the branch. (I think this is what I want to do, anyway.) My current setup is this:

    - Local dev machine running NetBeans to make changes.
    - Remote server hosting Git projects (the same server running Apache) - 2 subsites exist: a test.FQDN.com and a live.FQDN.com.

    What I would like to do is have one Git project (MyProject) and create a new feature branch. Any commits done to the new feature branch would push to test.FQDN.com. Once the features have been tested and merged into the master branch, it would push to live.FQDN.com.

    I have looked at Git's post-receive hooks and was able to use the git checkout -f command to deploy onto the test.FQDN.com site; however, that only pulls the master branch, not the new feature branch. I do not have any funding to use a third party to make this work, and would prefer to stay within Git, but I have full root access to the web server if there is a package to install that would help control this. Any suggestions would be great!
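
    A sketch of a hooks/post-receive script along these lines - two work trees, one per target site (the paths are assumptions; both directories must exist and be writable by the user receiving the push):

        #!/bin/sh
        # Deploy master to the live site, any other branch to the test site.
        while read oldrev newrev ref; do
            branch=${ref#refs/heads/}
            if [ "$branch" = "master" ]; then
                GIT_WORK_TREE=/var/www/live.FQDN.com git checkout -f master
            else
                GIT_WORK_TREE=/var/www/test.FQDN.com git checkout -f "$branch"
            fi
        done

    Remember to mark the hook executable (chmod +x hooks/post-receive).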

  • MySQL config for 2 GB RAM

    - by Tiffany Walker
    How is my config? Does it work well for 2 GB? What would be an ideal config for a server with 2 GB of RAM?

        [mysqld]
        set-variable = max_connections=500
        log-slow-queries
        safe-show-database
        local-infile=0
        skip-networking
        symbolic-links=0
        max_connections = 500
        key_buffer = 256M
        myisam_sort_buffer_size = 64M
        join_buffer_size = 2M
        read_buffer_size = 2M
        sort_buffer_size = 2M
        read_rnd_buffer_size = 2M
        thread_concurrency = 16
        table_cache = 1024
        thread_cache_size = 50
        wait_timeout = 7200
        connect_timeout = 10
        tmp_table_size = 32M
        max_allowed_packet = 160M
        max_connect_errors = 10
        query_cache_limit = 1M
        query_cache_size = 32M
        query_cache_type = 1

        [mysqld_safe]
        open_files_limit = 8192

        [mysqldump]
        max_allowed_packet = 16M

        [myisamchk]
        key_buffer = 64M
        sort_buffer = 64M
        read_buffer = 16M
        write_buffer = 16M

    UPDATE 2012-03-28 12:58 EDT by RolandoMySQLDBA: Please run these queries and paste the results into your question.

    For MyISAM:

        SELECT CONCAT(ROUND(KBS/POWER(1024,
            IF(PowerOf1024<0,0,IF(PowerOf1024>3,0,PowerOf1024)))+0.4999),
            SUBSTR(' KMG',IF(PowerOf1024<0,0,
            IF(PowerOf1024>3,0,PowerOf1024))+1,1)) recommended_key_buffer_size
        FROM (SELECT LEAST(POWER(2,32),KBS1) KBS FROM
            (SELECT SUM(index_length) KBS1 FROM information_schema.tables
             WHERE engine='MyISAM'
             AND table_schema NOT IN ('information_schema','mysql')) AA
        ) A, (SELECT 2 PowerOf1024) B;

    For InnoDB:

        SELECT CONCAT(ROUND(KBS/POWER(1024,
            IF(PowerOf1024<0,0,IF(PowerOf1024>3,0,PowerOf1024)))+0.49999),
            SUBSTR(' KMG',IF(PowerOf1024<0,0,
            IF(PowerOf1024>3,0,PowerOf1024))+1,1)) recommended_innodb_buffer_pool_size
        FROM (SELECT SUM(data_length+index_length) KBS
              FROM information_schema.tables
              WHERE engine='InnoDB') A, (SELECT 2 PowerOf1024) B;

  • iptables: forward from only one IP on my server

    - by user1307079
    I was able to get my server to forward connections on a certain port to a different IP, but when I add -d to specify an IP to forward from, it does not work. This is what I am trying (checking the result with iptables -t nat -nvL):

        iptables -t nat -A PREROUTING -d 173.208.230.107 -p tcp --dport 80 -j DNAT --to-destination 38.105.20.226:80

    It works fine without the -d. Here is my ifconfig dump:

        em1       Link encap:Ethernet  HWaddr 00:A0:D1:ED:D0:54
                  inet addr:173.208.230.106  Bcast:173.208.230.111  Mask:255.255.255.248
                  inet6 addr: fe80::2a0:d1ff:feed:d054/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:100058 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:18941701 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:12779711 (12.1 MiB)  TX bytes:825498499 (787.2 MiB)
                  Memory:fbde0000-fbe00000

        em1:9     Link encap:Ethernet  HWaddr 00:A0:D1:ED:D0:54
                  inet addr:173.208.230.107  Bcast:173.208.230.111  Mask:255.255.255.248
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Memory:fbde0000-fbe00000

        em1:10    Link encap:Ethernet  HWaddr 00:A0:D1:ED:D0:54
                  inet addr:173.208.230.108  Bcast:173.208.230.111  Mask:255.255.255.248
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Memory:fbde0000-fbe00000

        em1:11    Link encap:Ethernet  HWaddr 00:A0:D1:ED:D0:54
                  inet addr:173.208.230.109  Bcast:173.208.230.111  Mask:255.255.255.248
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Memory:fbde0000-fbe00000

        em1:12    Link encap:Ethernet  HWaddr 00:A0:D1:ED:D0:54
                  inet addr:173.208.230.110  Bcast:173.208.230.111  Mask:255.255.255.248
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Memory:fbde0000-fbe00000

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
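
    For what it's worth, a sketch of the rule pair that usually makes a DNAT to an outside address work end-to-end: the forwarded packets also need their source rewritten so replies come back through this server, and kernel forwarding must be enabled (the masquerade is an assumption about your reply path):

        iptables -t nat -A PREROUTING -d 173.208.230.107 -p tcp --dport 80 \
                 -j DNAT --to-destination 38.105.20.226:80
        iptables -t nat -A POSTROUTING -d 38.105.20.226 -p tcp --dport 80 \
                 -j MASQUERADE
        sysctl -w net.ipv4.ip_forward=1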

  • What can I do to determine the root cause of a Windows server hanging/freezing?

    - by Aaronaught
    We set up a new server here a few weeks ago that I am informally responsible for managing. Almost everything works perfectly, except for one thing: every so often it hangs without warning.

    To clarify: when I say hangs, I mean completely. None of the services respond and I'm unable to even get onto a local console - the display acts as though there's no VGA signal. One time the server actually responded to pings, another time I got the "destination host unreachable" response, but most of the time the pings just time out, as one would expect for a hung server.

    Event logs don't show anything after a reboot. I don't mean that they don't show anything interesting; I mean that they don't show anything at all from before the failure occurs to after the reboot. And there are never any performance problems, strange errors, or other obvious signs of impending doom before it happens.

    I don't expect any easy answers here. What I'd like to know is how I can methodically determine the root cause of this problem, be it a misbehaving service, defective hardware, or something else. Is there any kind of logging I can set up that will help me get to the bottom of this? Any hardware diagnostics or remote monitoring? Anything else I can do to help me discover what's actually happening, or at least eliminate what isn't wrong?

    Just to reiterate, I really don't want to start speculating about possible causes and take a trial-and-error approach, because it would take at least several days at a time before I had conclusive results. I'm looking for ways to reliably trace the problem to its source.

  • Linux: CIFS/Samba mount hangs for several minutes

    - by Pistos
    I have a small local network with a Gentoo box and a Windows box. I mount a share originating on the Windows box onto the Gentoo box with a command like:

        mount -t cifs -o username=WindowsUsername,password=thepassword,uid=pistos //192.168.0.103/Users /mnt/windowsbox

    Most of the time, everything Just Works, and I can read and write without problems. However, every few weeks or so, the connection or the mount point seems to go dead or hang, such that any process that tries to access the mount point gets stuck in D state (disk, or I/O, wait). These processes become impervious to TERM and KILL signals. Disconnecting and reconnecting the Windows box from the network does not help. The frozen state lasts for 5+ minutes. It's really frustrating and gets in the way of normal work, because it freezes Save As dialogues, ls commands, etc. If I issue a umount on the mount point, it either hangs as well or reports that the mount point is in use. Eventually the dead state resolves itself, the mount point gets unmounted, or it becomes possible to umount with no delay.

    My guess is that this happens when the connection/mount has gone idle, or when the Windows machine has been idle. I am not really sure. Why is this happening, and what can I do to prevent it? Or how can I successfully kill these D-state processes at will?

    Possibly related: CIFS mounts hang on read
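
    If the hangs can't be prevented at the source, one mitigation sketch is a soft mount, which lets blocked operations fail with an I/O error instead of leaving processes stuck in D state indefinitely (options otherwise as in the original command):

        mount -t cifs -o username=WindowsUsername,password=thepassword,uid=pistos,soft \
            //192.168.0.103/Users /mnt/windowsbox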

  • Clone Microsoft VM > SharePoint issues

    - by Rob
    Hi,

    We have an existing (production) Hyper-V VM that I want to clone to create a staging server. This server is NOT on a domain/Active Directory - it uses all local computer accounts. It runs Windows Server 2008, SharePoint 2007, SQL Server 2008, Reporting Services, and some custom line-of-business web applications. We rename the server etc. as per the documentation we have, but are having trouble specifically with SharePoint: some of the sites do not come up, and there are issues adding/updating site settings such as site owners. Does anyone know of a definitive resource for this process? Thanks.

    Update: We seem to have most of it working, but the Shared Services Provider, and the SSP's admin site, are not. In addition, the custom web parts that reference IIS Session objects are failing (which seems related to the Shared Services Provider). All the Windows user accounts for the various services are renamed during the server cloning, but the Windows account names in SQL Server are not automatically changed. We've tried to change them in SQL Server, but it still does not seem to work. Seems like a lot of work - I am thinking that there must be some guidance out there.

    We are also getting this:

        NullReferenceException: Object reference not set to an instance of an object.
        Microsoft.Office.Server.Administration.SqlSessionStateResolver.System.Web.IPartitionResolver.ResolvePartition(Object key) +135

  • Application Pool Identity corruption

    - by Gavin Osborn
    I have observed a few times while deploying software into IIS that every now and again the related application pools fail to restart, and in the event log we see an error like the following:

        The identity of application pool, 'AppPoolName' is invalid. If it remains invalid when the first
        request for the application pool is processed, the application pool will be disabled.

    This does not happen frequently, but when it does, the only solution is to re-apply the identity password in the IIS Manager window. As soon as I re-apply it and restart the application pool, the web sites come back up.

    Facts:

    - The account is a service account whose password never expires.
    - The account is local to the IIS host.
    - The account password is never changed.
    - This is IIS 6 running on Windows Server 2003.
    - Deployment of the software is via MSI and involves several IIS resets.
    - The software is created in house and does not do anything fancy to IIS.

    Any ideas how the identity information might become corrupt?

    Edit (clarification): To be clear - this user account and password combination work absolutely fine and usually work fine as the identity of the application pool. It is only when we deploy updates of our software into an existing IIS application that it stops working.

    - Our password has not changed.
    - Our deployment does not change the password or reconfigure the application pools.
    - This does not happen every time, perhaps 1 in 20 times.
    - If we re-enter the password into IIS and restart the app pools, everything works.

  • How to display/define mirror/striping pairs with mdadm

    - by Chris
    I want to make a standard Linux software RAID 10 over 4 HDDs. The server has 4 HDDs, 2 pairs from different vendors, in order to avoid batch problems. I want the mirrors to span the two different vendors, and then the stripe to run over the mirror pairs. I could do that by manually creating RAID 1/0, but mdadm supports RAID level 10 directly. I just can't figure out how the RAID 10 is then handled and how the data is distributed.

        mdadm --detail /dev/md10
        /dev/md10:
                Version : 1.2
          Creation Time : Wed May 28 11:06:23 2014
             Raid Level : raid10
             Array Size : 1953260544 (1862.77 GiB 2000.14 GB)
          Used Dev Size : 976630272 (931.39 GiB 1000.07 GB)
           Raid Devices : 4
          Total Devices : 4
            Persistence : Superblock is persistent
            Update Time : Wed May 28 11:06:23 2014
                  State : clean, resyncing (PENDING)
         Active Devices : 4
        Working Devices : 4
         Failed Devices : 0
          Spare Devices : 0
                 Layout : near=2
             Chunk Size : 512K
                   Name : pdwhost:10  (local to host pdwhost)
                   UUID : a3de0ad5:9e694ee1:addc6786:c4449e40
                 Events : 0

            Number   Major   Minor   RaidDevice State
               0       8        1        0      active sync   /dev/sda1
               1       8       81        1      active sync   /dev/sdf1
               2       8       97        2      active sync   /dev/sdg1
               3       8      113        3      active sync   /dev/sdh1

    This does not really give any information about that. How it should be:

    - RAID 1 / mirror over /dev/sda1 + /dev/sdf1, and over /dev/sdg1 + /dev/sdh1
    - RAID 0 over the two RAID 1 pairs

    Is it possible to do that with the built-in level=10, and how can I see which pairs are mirrored? Thanks a lot for your help.
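
    With the near=2 layout, mdadm pairs adjacent devices in the order given at creation time: device 0 mirrors device 1, and device 2 mirrors device 3, with the stripe running across the two pairs. So for the vendor split, a sketch of the create command with the disks interleaved (device names follow the output above):

        # sda1+sdf1 form one mirror pair, sdg1+sdh1 the other
        mdadm --create /dev/md10 --level=10 --layout=n2 --raid-devices=4 \
            /dev/sda1 /dev/sdf1 /dev/sdg1 /dev/sdh1

    In the existing array, the RaidDevice column gives that same ordering, so devices 0/1 and 2/3 are the mirrored pairs.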

  • How to set a static route for an external IP address

    - by HorusKol
    Further to my earlier question about bridging different subnets, I now need to route requests for one particular IP address differently from all other traffic. I have the following rules in iptables on our router:

        # Allow established connections, and those !not! coming from the public interface
        # eth0 = public interface
        # eth1 = private interface #1 (10.1.1.0/24)
        # eth2 = private interface #2 (129.2.2.0/25)
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -m state --state NEW ! -i eth0 -j ACCEPT
        iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A FORWARD -i eth0 -o eth2 -m state --state ESTABLISHED,RELATED -j ACCEPT

        # Allow outgoing connections from the private interfaces
        iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
        iptables -A FORWARD -i eth2 -o eth0 -j ACCEPT

        # Allow the two private connections to talk to each other
        iptables -A FORWARD -i eth1 -o eth2 -j ACCEPT
        iptables -A FORWARD -i eth2 -o eth1 -j ACCEPT

        # Masquerade (NAT)
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

        # Don't forward any other traffic from the public to the private
        iptables -A FORWARD -i eth0 -o eth1 -j REJECT
        iptables -A FORWARD -i eth0 -o eth2 -j REJECT

    This configuration means that users are forwarded through a modem/router with a public address - this is all well and good for most purposes, and in the main it doesn't matter that all computers are hidden behind the one public IP. However, some users need to be able to access a proxy at 192.111.222.111:8080 - and the proxy needs to identify this traffic as coming through a gateway at 129.2.2.126; it won't respond otherwise.

    I tried adding a static route on our local gateway with:

        route add -host 192.111.222.111 gw 129.2.2.126 dev eth2

    I can successfully ping 192.111.222.111 from the router. When I trace the route, it lists the 129.2.2.126 gateway, but I just get * on each of the following hops (I think this makes sense, since this is just a web proxy and requires authentication). When I try to ping this address from a host on the 129.2.2.0/25 network, it fails.

    Should I do this in the iptables chain instead? How would I configure this routing?
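
    If the plain route isn't enough, the missing piece may be the source address: the proxy wants to see traffic from the 129.2.2.126 gateway side, while packets forwarded from the 10.1.1.0/24 side leave with their original source. A sketch combining the route with source NAT - the router's own eth2 address used for --to-source is an assumption:

        ip route add 192.111.222.111/32 via 129.2.2.126 dev eth2
        iptables -t nat -A POSTROUTING -d 192.111.222.111 -o eth2 \
                 -j SNAT --to-source 129.2.2.1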

  • iCloud backup merges or overwrites?

    - by Joe McMahon
    The following happened today (it was six AM my time, so yeah, I was dumb and dropped stitches in this process):

    A friend had a problem with her iPhone and needed to reset it. Unfortunately she did the reset while connected to iTunes, and the restore process kicked in. In my sleepy state, I told her to go ahead. She did, and restored the most recent local (iTunes) backup (from July last year - she doesn't back up often, as she has an Air which is pretty full). During setup on the phone, she was prompted to merge data with the iCloud copy, and did so. There was no "restore from iCloud" prompt. Obviously I should have made sure she was disconnected from iTunes before she did the reset, or had her set it up as a new device and then restored from iCloud, but that's water under the bridge now. (Side question: could I have had her disconnect and then restart the phone again, and avoided this whole process?)

    The question is: was the "merge" that happened in this process a true merge, or a replace? Her passwords for Mail were wrong, since they were the old ones from the old backup. If she wipes the data and restores from iCloud, will she get her old SMSes and calendar entries back? Or did the merge decide that the phone, despite being "old", was right, and therefore discard the SMSes, calendar entries, etc.?

    As a recovery option, I have a 4-day-old iTunes backup here from ~/Library/Application Support/MobileSync/Backup, but she and the phone are 3000 miles away, and it's 8 GB, so I can't easily restore it for her. I do have the option of encrypting it and mailing it on a data stick if the iCloud backup is now toast. Should she try the wipe and restore from the cloud (after backing up locally), or should I just put the more recent backup in the mail? My goal is to get everything (especially the SMSes) back to the most recent version possible.

  • Can't access shared folder of Win8/Win7 machine - Error code: 0x80004005, Unspecified Error

    - by ruslan
    It's ironic that I, a software engineer with 12 years of experience, continue to have this problem from one version of Windows to another without consistent results (sometimes it works). Here it goes again. I have a machine with Win8 Consumer Preview. It doesn't really matter that it's Win8 - I had the same issue with Win7 before. On the given machine I created a local admin user with the same name and password I have on a second PC (the machine I'm typing this from now). I have two questions for you guys:

    1. Why am I not able to access the C$ share of the Win8 machine from the Win7 machine? I get an error that C$ doesn't exist, even though it does.

    2. Why am I not able to access a share named "test" on the Win8 machine, for which permissions are set to Full for Everyone? When I attempt to access it from the Win7 machine, I'm asked to enter a username and password. After entering administrator credentials I get the error "Windows cannot access \\192.168.1.123\test. Error code: 0x80004005. Unspecified Error".

    Windows Firewall is disabled on the Win8 machine for both private and public networks. The Guest account is disabled. The built-in admin account is enabled. The machine is pingable from other machines.
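
    One well-known cause on non-domain machines is UAC remote restrictions: network logons with a local admin account get a filtered token, so administrative shares like C$ refuse access even with correct credentials. A sketch of the usual registry override, run on the Win8 machine (sign out or reboot afterwards):

        rem Relax remote UAC token filtering for local administrator accounts
        reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System ^
            /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1 /f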

  • Primary/secondary ethernet interfaces in Ubuntu 9.10

    - by Josh
    I have an Ubuntu 9.10 machine with three ethernet interfaces: eth0, eth1 and eth2. eth2 is connected to a private network; eth0 and eth1 are connected to two different LANs, either of which provides access to the internet. All three networks have DHCP servers. With Ubuntu's default settings (and GNOME), when I boot up, all the interfaces are active and my system gets three IP addresses. However, any attempt to access the internet results in connection timeouts and other weirdness. I suspect that traffic is going out on one NIC (like eth0) and coming back in on another (like eth1). I'm not sure what's going on. The only way I can access the internet at the moment is to bring two of the devices down with ifdown.

    How can I configure eth0 as my primary interface, so that all traffic goes out on it by default, while keeping the other two active? Also, I want to make sure Avahi broadcasts properly on all three IPs so that the computers on the LAN of eth1 can still connect to myHostname.local.
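
    As a quick sketch of the routing side (gateway addresses are assumptions - substitute whatever each DHCP server hands out): drop the extra default routes so only eth0 carries outbound traffic, while the interfaces themselves stay up for Avahi:

        # Keep the links up, but leave only one default route
        ip route del default dev eth1
        ip route del default dev eth2
        ip route replace default via 192.168.0.1 dev eth0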

  • Scenario - NTFS Symbolic Link or Junction?

    - by Unsigned
    Differences:

                        Absolute  Relative  File  Directory  UNC
        Symbolic link      ✓         ✓       ✓       ✓        ✓
        Junction           ✓         x       x       ✓        x

    Scenario: Let's assume we're creating a reparse point to create the redirect C:\SomeDir => D:\SomeDir. Since this scenario only requires local, absolute paths, either a junction or a symlink would work. In this situation, is there any advantage to using one or the other? Assume Windows 7 for the OS, disregarding backward compatibility (prior to Vista, symlinks are not supported).

    Update: I have found another difference.

    - Symbolic link: the link's permissions only affect delete/rename operations on the link itself; read/write access (to the target) is governed by the target's permissions.
    - Junction: the junction's permissions affect enumeration; revoking permissions on the junction will deny file listing through that junction, even if the target folder has more permissive ACLs.

    The permissions make it interesting, as symlinks can allow legacy applications to access configuration files in UAC-restricted areas (such as %ProgramFiles%) without changing existing access permissions, by storing the files in a non-restricted location and creating symlinks in the restricted directory.
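
    For reference, a sketch of how the two variants are created for exactly this scenario (elevated command prompt assumed; mklink is built into cmd.exe on Vista and later):

        rem Directory symbolic link:
        mklink /D C:\SomeDir D:\SomeDir
        rem Junction:
        mklink /J C:\SomeDir D:\SomeDir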
