Search Results

Search found 3695 results on 148 pages for 'failure'.

Page 45 of 148

  • vagrant up command very slow on OS X Lion

    - by Andy Hume
    When I run `vagrant up` to provision a new VM on Lion it takes an extremely long time, during which the entire Mac is very laggy and unresponsive. The output is as follows, the key point being "notice: Finished catalog run in 754.28 seconds":

      > vagrant up
      [default] Importing base box 'lucid64'...
      [default] The guest additions on this VM do not match the install version of VirtualBox! This may cause things such as forwarded ports, shared folders, and more to not work properly. If any of those things fail on this machine, please update the guest additions and repackage the box.
      Guest Additions Version: 4.1.0
      VirtualBox Version: 4.1.6
      [default] Matching MAC address for NAT networking...
      [default] Clearing any previously set forwarded ports...
      [default] Forwarding ports...
      [default] -- ssh: 22 => 2222 (adapter 1)
      [default] -- web: 80 => 4567 (adapter 1)
      [default] Creating shared folders metadata...
      [default] Running any VM customizations...
      [default] Booting VM...
      [default] Waiting for VM to boot. This can take a few minutes.
      [default] VM booted and ready for use!
      [default] Mounting shared folders...
      [default] -- v-root: /vagrant
      [default] -- v-data: /var/www
      [default] -- manifests: /tmp/vagrant-puppet/manifests
      [default] Running provisioner: Vagrant::Provisioners::Puppet...
      [default] Running Puppet with lucid64.pp...
      [default] stdin: is not a tty
      [default] notice: /Stage[main]/Lucid64/Exec[apt-update]/returns: executed successfully
      [default] notice: /Stage[main]/Lucid64/Package[apache2]/ensure: ensure changed 'purged' to 'present'
      [default] notice: /Stage[main]/Lucid64/File[/etc/motd]/ensure: defined content as '{md5}a25e31ba9b8489da9cd5751c447a1741'
      [default] notice: Finished catalog run in 754.28 seconds
      [default] err: /File[/var/lib/puppet/rrd]/ensure: change from absent to directory failed: Could not find group puppet
      [default] err: Could not send report: Got 1 failure(s) while initializing: change from absent to directory failed: Could not find group puppet
      [default] Running provisioner: Vagrant::Provisioners::Puppet...
      [default] Running Puppet with lucid64.pp...
      [default] stdin: is not a tty
      [default] notice: /Stage[main]/Lucid64/Exec[apt-update]/returns: executed successfully
      [default] notice: Finished catalog run in 2.05 seconds
      [default] err: /File[/var/lib/puppet/rrd]: Could not evaluate: Could not find group puppet
      [default] err: Could not send report: Got 1 failure(s) while initializing: Could not evaluate: Could not find group puppet
      [default] Running provisioner: Vagrant::Provisioners::Puppet...
      [default] Running Puppet with lucid64.pp...
      [default] stdin: is not a tty
      [default] notice: /Stage[main]/Lucid64/Exec[apt-update]/returns: executed successfully
      [default] notice: Finished catalog run in 1.36 seconds
      [default] err: /File[/var/lib/puppet/rrd]: Could not evaluate: Could not find group puppet
      [default] err: Could not send report: Got 1 failure(s) while initializing: Could not evaluate: Could not find group puppet
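
    The errors at the end, unlike the slowness, have a well-known shape: the agent wants /var/lib/puppet/rrd owned by group puppet, and a gem-installed Puppet doesn't create that group. A hedged fix, assuming lucid64.pp is the manifest in play, is to declare the group there so every rebuilt box gets it:

      group { 'puppet':
        ensure => present,
      }

    or equivalently run `sudo groupadd puppet` once inside the VM. As for the 754 seconds, the first run downloads and installs apache2 while later runs finish in 1-2 seconds, so the initial catalog time looks dominated by the package work rather than by Puppet itself.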

  • Failure rate of EBS snapshots and AMIs?

    - by user784637
    According to Amazon, the failure rate of EBS volumes is: "As an example, volumes that operate with 20 GB or less of modified data since their most recent Amazon EBS snapshot can expect an annual failure rate (AFR) of between 0.1% and 0.5%, where failure refers to a complete loss of the volume" (http://aws.amazon.com/ebs/). However, I was curious to know the failure rate of the EBS snapshots and private AMIs that a system admin would take. Since EBS snapshots and AMIs are stored in Amazon S3, is it safe to assume that the likelihood you cannot roll back to a previous snapshot or AMI is the same as the likelihood that a file gets lost in S3? Amazon S3's standard storage is "... designed to provide 99.999999999% durability and 99.99% availability of objects over a given year" (http://aws.amazon.com/s3/).
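
    To put eleven nines in perspective, here is my own back-of-the-envelope arithmetic rather than an Amazon figure: the expected number of objects lost per year is N × (1 − 0.99999999999) = N × 1e-11, so even a million stored snapshot objects gives 1e6 × 1e-11 = 1e-5 expected losses per year, one loss on average every hundred thousand years. At those odds, the realistic restore risks are operational ones, like a snapshot taken of an inconsistent filesystem or an accidental deletion, rather than S3 losing the bits.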

  • Does anyone know a fix/cause for USB drives that lose their connection?

    - by Burch Kealey
    I have been having significant problems trying to copy files to a USB drive (GoFlex, 1.5 TB). No matter what I use to copy the files (Python code, Windows copy and paste, or the Drobo Copy utility), I ultimately get a failure. The failure is always some indication that the device is not ready or cannot be written to. I am pretty sure the problem is that the drive is losing its connection. I have done everything I can find so far, including setting the USB root hub to never turn off its power and updating some USB drivers. I have found references to this problem primarily with Win7 64-bit. I have also had USB connection problems with other devices: we kept losing the connection to our Bravo Disc Publisher when we went to Win7, and finally bought a newer model and have not had problems since. Any pointers about diagnosing and/or understanding the problem would be appreciated.

  • Run a PC on battery for 3 days?

    - by Zen 8000k
    I am looking for a low-power, low-end PC able to run 24/7 without overheating, and a way to keep it running in case of power failure. Power failures can last up to 72 hours. The PC doesn't need a monitor or keyboard. A modem must also be protected in case of power failure. When I say low end, I don't mean crap: the CPU needs to be x86 and score at least 1,000 in this benchmark chart: http://www.cpubenchmark.net/index.php What's the best way to do this?
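
    The sizing arithmetic, under an assumed power draw: a low-end PC plus modem idling around 20 W needs roughly 20 W × 72 h = 1,440 Wh of usable capacity to ride out a three-day outage. A typical consumer UPS stores only a few hundred watt-hours, so this points at deep-cycle batteries with an inverter (or a very low-power board with a large external battery bank) rather than an ordinary UPS.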

  • Netdom to restore machine secret

    - by icelava
    I have a number of virtual machines that have not been switched on for over a month, and some others which have been rolled back to an older state. They are members of a domain, and have expired their machine secrets; thus they are unable to authenticate with the domain any longer.

      Event Type: Warning
      Event Source: LSASRV
      Event Category: SPNEGO (Negotiator)
      Event ID: 40960
      Date: 14/05/2009
      Time: 10:24:54 AM
      User: N/A
      Computer: TFS2008WDATA
      Description: The Security System detected an authentication error for the server ldap/iceland.icelava.home. The failure code from authentication protocol Kerberos was "The attempted logon is invalid. This is either due to a bad username or authentication information. (0xc000006d)". For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
      Data: 0000: c000006d

      Event Type: Warning
      Event Source: LSASRV
      Event Category: SPNEGO (Negotiator)
      Event ID: 40960
      Date: 14/05/2009
      Time: 10:24:54 AM
      User: N/A
      Computer: TFS2008WDATA
      Description: The Security System detected an authentication error for the server cifs/iceland.icelava.home. The failure code from authentication protocol Kerberos was "The attempted logon is invalid. This is either due to a bad username or authentication information. (0xc000006d)". For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
      Data: 0000: c000006d

      Event Type: Error
      Event Source: NETLOGON
      Event Category: None
      Event ID: 3210
      Date: 14/05/2009
      Time: 10:24:54 AM
      User: N/A
      Computer: TFS2008WDATA
      Description: This computer could not authenticate with \\iceland.icelava.home, a Windows domain controller for domain ICELAVA, and therefore this computer might deny logon requests. This inability to authenticate might be caused by another computer on the same network using the same name or the password for this computer account is not recognized. If this message appears again, contact your system administrator. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
      Data: 0000: c0000022

    So I try to use netdom to re-register the machine back into the domain:

      C:\Documents and Settings\Administrator>netdom reset tfs2008wdata /domain:icelava /UserO:enterpriseadmin /PasswordO:mypassword
      Logon Failure: The target account name is incorrect.
      The command failed to complete successfully.

    But I have not been successful. I wonder what else needs to be done?
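
    One avenue worth noting, as a hedged suggestion rather than a confirmed fix for this case: `netdom reset` authenticates against the already-broken secret, whereas `netdom resetpwd`, run locally on the affected machine against a specific domain controller, forces a brand-new machine account password:

      C:\> netdom resetpwd /server:iceland.icelava.home /userd:ICELAVA\enterpriseadmin /passwordd:*

    followed by a reboot. The /Server, /UserD and /PasswordD switches are the documented ones for resetpwd; the DC name here is simply taken from the event logs above.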

  • Corrupt file indicative of corrupt hard drive?

    - by Elipsicon
    I have noticed that two files on my (almost full) 2 TB hard drive have been corrupted. One file has 20 kB (!) corrupted, i.e. 20 consecutive kB have changed, even though the modification date of the file hasn't changed and I haven't worked with the file in over a year. This tells me that something "below" the file system level has messed with the data, and the only thing I can think of is hardware failure, most likely hard disk failure. I've already tested my RAM and it works flawlessly. I'm using ext4 on Linux, if that is of any help. Is this normal? Is it time to replace the hard drive before something worse happens? What can I do to prevent this from happening in the future? Is there some built-in feature of, or extension to, ext4 that includes additional error-correcting codes and/or watches files for changes that weren't caused by the OS?
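
    Regarding detection: ext4 checksums its journal but not file contents, so silent corruption goes unnoticed; filesystems such as ZFS and Btrfs checksum data blocks and would have flagged this. As a stopgap, a periodic checksum manifest at least detects rot early (a minimal sketch; the paths are placeholders):

      # build a manifest once
      find /data -type f -exec sha256sum {} + > /var/lib/manifest.sha256
      # later: report any file whose contents changed
      sha256sum --quiet -c /var/lib/manifest.sha256

    It is also worth checking the drive's own error counters with `smartctl -a /dev/sdX` (from smartmontools); pending or reallocated sectors would support the failing-disk theory.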

  • ubuntu mail server settings and /etc/hosts file

    - by mbrc
    This is my /etc/hosts file:

      127.0.0.1       localhost.localdomain localhost
      127.0.1.1       ubuntu-server.xx.com ubuntu-server
      193.77.xx.xx    mail.xx.com mail

      # The following lines are desirable for IPv6 capable hosts
      ::1     ip6-localhost ip6-loopback
      fe00::0 ip6-localnet
      ff00::0 ip6-mcastprefix
      ff02::1 ip6-allnodes
      ff02::2 ip6-allrouters

    Is this the correct configuration for my mail server? I am behind a router, so I don't know whether it is OK to use my public IP for mail.xx.com and 127.0.0.1 for localhost. The problem is that I can receive mail, but when I send it I get:

      Oct 17 21:29:32 ubuntu-server postfix/smtpd[2453]: warning: SASL authentication failure: Password verification failed
      Oct 17 21:29:32 ubuntu-server postfix/smtpd[2453]: warning: my.router[192.168.1.1]: SASL PLAIN authentication failed: authentication failure
      Oct 17 21:29:34 ubuntu-server postfix/smtpd[2453]: warning: my.router[192.168.1.1]: SASL LOGIN authentication failed: authentication failure

    EDIT: Maybe the problem is a port. I forward these ports:

      POP3 - port 110
      IMAP - port 143
      SMTP - port 25
      HTTP - port 80
      Secure SMTP (SSMTP) - port 465
      Secure IMAP (IMAP4-SSL) - port 585
      StartTLS - port 587
      IMAP4 over SSL (IMAPS) - port 993
      Secure POP3 (SSL-POP) - port 995

    postconf -n:

      alias_database = hash:/etc/aliases
      alias_maps = hash:/etc/aliases
      append_dot_mydomain = no
      biff = no
      broken_sasl_auth_clients = yes
      config_directory = /etc/postfix
      content_filter = amavis:[127.0.0.1]:10024
      delay_warning_time = 4h
      disable_vrfy_command = yes
      inet_interfaces = all
      inet_protocols = all
      mailbox_size_limit = 0
      maximal_backoff_time = 8000s
      maximal_queue_lifetime = 7d
      message_size_limit = 0
      minimal_backoff_time = 1000s
      mydestination =
      myhostname = mail.xx.com
      mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
      mynetworks_style = host
      myorigin = /etc/mailname
      readme_directory = no
      receive_override_options = no_address_mappings
      recipient_delimiter = +
      relayhost =
      smtp_helo_timeout = 60s
      smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
      smtp_use_tls = yes
      smtpd_banner = $myhostname ESMTP $mail_name
      smtpd_client_restrictions = reject_rbl_client sbl.spamhaus.org, reject_rbl_client blackholes.easynet.nl, reject_rbl_client dnsbl.njabl.org
      smtpd_data_restrictions = reject_unauth_pipelining
      smtpd_delay_reject = yes
      smtpd_hard_error_limit = 12
      smtpd_helo_required = yes
      smtpd_helo_restrictions = permit_mynetworks, warn_if_reject reject_non_fqdn_hostname, reject_invalid_hostname, permit
      smtpd_recipient_limit = 16
      smtpd_recipient_restrictions = reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_non_fqdn_recipient, reject_unknown_recipient_domain, reject_unauth_destination, permit
      smtpd_sasl_auth_enable = yes
      smtpd_sasl_local_domain =
      smtpd_sasl_security_options = noanonymous
      smtpd_sender_restrictions = permit_sasl_authenticated, permit_mynetworks, warn_if_reject reject_non_fqdn_sender, reject_unknown_sender_domain, reject_unauth_pipelining, permit
      smtpd_soft_error_limit = 3
      smtpd_tls_cert_file = /etc/ssl/private/mail.xx.com.crt
      smtpd_tls_key_file = /etc/ssl/private/mail.xx.com.key
      smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
      smtpd_use_tls = yes
      unknown_local_recipient_reject_code = 450
      virtual_alias_maps = mysql:/etc/postfix/maps/alias.cf
      virtual_gid_maps = static:5000
      virtual_mailbox_base = /var/spool/mail/virtual
      virtual_mailbox_domains = mysql:/etc/postfix/maps/domain.cf
      virtual_mailbox_limit = 0
      virtual_mailbox_maps = mysql:/etc/postfix/maps/user.cf
      virtual_uid_maps = static:5000

    saslfinger -c:

      saslfinger - postfix Cyrus sasl configuration
      version: 1.0.4
      mode: client-side SMTP AUTH
      -- basics --
      Postfix: 2.9.3
      System: Ubuntu 12.04.1 LTS \n \l
      -- smtp is linked to --
      libsasl2.so.2 => /usr/lib/i386-linux-gnu/libsasl2.so.2 (0x00d3a000)
      -- active SMTP AUTH and TLS parameters for smtp --
      relayhost =
      smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
      smtp_use_tls = yes
      -- listing of /usr/lib/sasl2 --
      total 28
      drwxr-xr-x  2 root root  4096 okt 14 15:18 .
      drwxr-xr-x 72 root root 12288 okt 14 15:03 ..
      -rw-r--r--  1 root root     1 maj  4 06:17 berkeley_db.txt
      -rw-r-----  1 root root   701 okt 14 15:18 saslpasswd.conf
      -rw-r-----  1 smmta smmsp  885 okt 14 15:18 Sendmail.conf
      -- listing of /etc/postfix/sasl --
      total 12
      drwxr-xr-x 2 root root 4096 okt 11 18:55 .
      drwxr-xr-x 4 root root 4096 okt 12 06:59 ..
      -rwx------ 1 root root  241 okt 11 18:55 smtpd.conf
      Cannot find the smtp_sasl_password_maps parameter in main.cf.
      Client-side SMTP AUTH cannot work without this parameter!
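
    A reading of the output above, hedged since the full main.cf and smtpd.conf contents aren't shown: the saslfinger warning at the bottom concerns client-side AUTH (this server logging in to a relayhost) and is a red herring when relayhost is empty, as it is here. The log lines are server-side: Postfix accepted the AUTH command and the password check itself failed. Assuming saslauthd is the backend named in /etc/postfix/sasl/smtpd.conf, it can be tested directly:

      testsaslauthd -u someuser -p somepassword -s smtp

    If that fails with known-good credentials, the usual Ubuntu culprit is the chrooted Postfix not reaching the saslauthd socket; the common remedies are adding the postfix user to the sasl group and pointing saslauthd's socket under /var/spool/postfix (OPTIONS in /etc/default/saslauthd).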

  • VPN not connecting after resuming from standby on Vista home premium

    - by joe
    Hello, I have Windows Vista Home Premium on my laptop, and a Windows VPN connection set up. All of a sudden the VPN doesn't work when my PC resumes from standby mode. I have tried all options but still get this error:

      CoId={5B82082D-9236-4AF9-900B-4E8072341C76}: The user workgroup\user dialed a connection named VPNName which has failed. The error code returned on failure is 0.

    I also get the following error when I check the event logs:

      The user workgroup\user dialed a connection named VpnName which has failed. The error code returned on failure is 800.

    Please help.

  • Disk redundancy across different servers

    - by Mascarpone
    I have 3 servers, all with the same specs:

      - Intel CPU
      - 8 GB RAM
      - Linux or BSD
      - a single 2 TB desktop SATA drive with more than 10,000 hours of operation and less than 300 GB used

    My provider cannot install a second hard drive, but guarantees that the drive will be replaced immediately in case of failure, with another equally crappy drive. The likelihood of drive failure is high, and since I can't use RAID, I was thinking about keeping a backup of each machine on all the other machines, so that there are always 2 copies on 2 different drives, plus the original. I would synchronize the drives every hour with rsync to get some sort of redundancy; bandwidth inside the DC is free, so it would be much cheaper than offsite backup. (A daily offsite backup is kept anyhow.) What do you think? Any suggestions?
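
    A minimal sketch of the hourly pull, with hostnames and paths assumed: each machine keeps a copy of the other two under /backups,

      # /etc/cron.d/peer-backup on host1 (names are placeholders)
      0  * * * * root rsync -a --delete --exclude=/backups host2:/ /backups/host2/
      30 * * * * root rsync -a --delete --exclude=/backups host3:/ /backups/host3/

    One caution on the scheme itself: a plain mirror replicates deletions and corruption within the hour, so pairing rsync with rotating snapshots (--link-dest, or a tool like rsnapshot) keeps a bad hour from overwriting the only good copies.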

  • Security log overflowing with filtering blocks

    - by Jacob
    I have a Windows 7 workstation whose security log is overflowing with the following errors:

      Audit Failure 3/31/2010 2:00:50 PM Microsoft-Windows-Security-Auditing 5157 Filtering Platform Connection "The Windows Filtering Platform has blocked a connection."
      Audit Failure 3/31/2010 2:00:50 PM Microsoft-Windows-Security-Auditing 5152 Filtering Platform Packet Drop "The Windows Filtering Platform has blocked a packet."

    These are not unexpected events; the firewall is expected to drop unsolicited traffic. However, I can't figure out how to tell Windows to stop writing these events to the security log. I've seen this problem before and was able to find an answer with the help of Google, but I wasn't able to locate one this time. Thanks!
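
    For reference, these events come from the Object Access audit policy's Windows Filtering Platform subcategories, and auditpol can silence them individually; a sketch, to be run from an elevated prompt:

      auditpol /set /subcategory:"Filtering Platform Packet Drop" /success:disable /failure:disable
      auditpol /set /subcategory:"Filtering Platform Connection" /success:disable /failure:disable

    `auditpol /get /category:"Object Access"` shows the current settings, which is a quick way to confirm whether a domain GPO is re-enabling them behind your back.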

  • What is Causing this IIS 7 Web Service Sporadic Connectivity Error?

    - by dpalau
    On sporadic occasions we receive the following error when attempting to call an .asmx web service from a .NET client application:

      "The underlying connection was closed: A connection that was expected to be kept alive was closed by the server. Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host."

    By sporadic I mean that it might occur zero times, once every few days, or a half-dozen times a day for some users. It will never occur on a user's first web service call, and the subsequent (usually identical) call always works immediately after the failure. The failures happen across a variety of methods in the service and usually happen 15-20 seconds (according to the log) from the time of the request. Looking in the IIS site log for the particular call shows one or the other of the following Windows error codes:

      121: The semaphore timeout period has elapsed.
      1236: The network connection was aborted by the local system.

    Some additional environment details:

      - Running on an internal network web farm consisting of two servers running IIS7 on Windows Server 2008. These problems did not occur when running in an older IIS6 web farm of three servers running on Windows Server 2003 (and we use a single IIS6/2003 instance for our development and staging environments with no issues). EDIT: Also, all of these server instances are VMware virtual machines, not sure if that is a surprise anymore or not.
      - The web service is a .NET 2.0/3.5 compiled .asmx web service that has its own application pool (.NET 2.0, integrated pipeline) and only has Windows Authentication enabled.
      - We have another web service on the farm that uses the same physical path as the primary service, the only difference being that Basic Authentication is enabled. This is used for a portion of our ERP system. We have tried using the same and different application pools, with no effect on the error. This site isn't hit as often as the primary site and has never had an error.
      - As mentioned, the error only happens when called from the .NET client, not from other applications. The client application always creates a new web service object for each request and sets the service credentials to System.Net.CredentialCache.DefaultCredentials. The application is either deployed locally to a client or run in a Citrix server session. Users running in Citrix don't seem to experience the issue, only locally deployed clients. The Citrix servers and the web farm are located in the same physical location and the same IP range (10.67.xx.xx). Locally deployed clients experiencing the error are located elsewhere (10.105.xx.xx, 10.31.xx.xx).

    I've checked the OS logs to see if I can see any problems, but nothing really sticks out. EDIT: Actually, I myself just ran into the error a little bit ago. I decided to check out the logs again and saw that there was a Security log entry of "Audit Failure" at the 'same' time (IIS log entry at 1:39:59, event log entry at 1:39:50). Not sure if this is a coincidence or not; I'll have to check the logs of previous errors. I'm probably grasping at straws, but the details:

      Log Name: Security
      Source: Microsoft-Windows-Security-Auditing
      Date: 7/8/2009 1:39:50 PM
      Event ID: 5159
      Task Category: Filtering Platform Connection
      Level: Information
      Keywords: Audit Failure
      User: N/A
      Computer: is071019.<**.net
      Description: The Windows Filtering Platform has blocked a bind to a local port.
      Application Information:
        Process ID: 1260
        Application Name: \device\harddiskvolume1\windows\system32\svchost.exe
      Network Information:
        Source Address: 0.0.0.0
        Source Port: 54802
        Protocol: 17
      Filter Information:
        Filter Run-Time ID: 0
        Layer Name: Resource Assignment
        Layer Run-Time ID: 36

    I've also tried to use Failed Request Tracing in IIS7, but the service call never actually gets to where FRT can capture it (even though the failure is logged in the web service log). The network infrastructure group said they checked the DNS and NIC settings and everything is correct, so there is no 'flapping'. Everything pans out. I'm not sure that they checked any domain controller servers, though, to see if that could be an issue. Any ideas? Or any other debugging strategies to get to the bottom of this? I'm just the developer in charge of the software and don't really have the knowledge of what to investigate from the networking side of things, although it does sound like a networking issue to me based on what is happening. Thanks in advance for any help.

  • SVN does an unnecessary chmod on .svn/ temp files

    - by ???
    My working directory is on a TrueCrypt NTFS volume, with umask 000, so I can read and write any files with no problem. But I can't run svn commands on it; for example, `svn update' shows the error:

      svn: Can't set permissions on '.svn/tempfile.8.tmp':

    strace svn up gives:

      ...
      chmod("sbin/.svn/tempfile.2.tmp", 0770) = -1 EPERM (Operation not permitted)
      fcntl64(3, F_GETFL)                    = 0x2 (flags O_RDWR)
      fcntl64(3, F_SETFL, O_RDWR|O_NONBLOCK) = 0
      write(3, "( failure ( ( 1 76:Can't set per"..., 172) = 172
      fcntl64(3, F_GETFL)                    = 0x802 (flags O_RDWR|O_NONBLOCK)
      fcntl64(3, F_SETFL, O_RDWR)            = 0
      read(3, "( abort-edit ( ) ) ( failure ( ("..., 4096) = 191
      gettimeofday({1276661368, 382789}, NULL) = 0
      lstat64("sbin", {st_mode=S_IFDIR|0770, st_size=0, ...}) = 0
      select(0, NULL, NULL, NULL, {0, 1000}) = 0 (Timeout)
      write(2, "svn: Can't set permissions on 's"..., 82svn: Can't set permissions on 'sbin/.svn/tempfile.2.tmp': Operation not permitted) = 82
      close(3)                               = 0

    So the error occurs when svn chmods some temp files. But chmod is disallowed on TrueCrypt volumes, and here it's simply unnecessary. Can I bypass the chmod library calls when launching svn on TrueCrypt volumes?
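
    That bypass is exactly what LD_PRELOAD is for; a sketch of the classic interposer, to be used only on mounts where permissions are meaningless anyway:

      /* noperm.c: make chmod() always succeed for preloaded programs.
       * Build: gcc -shared -fPIC -o noperm.so noperm.c
       * Use:   LD_PRELOAD=$PWD/noperm.so svn update
       */
      #include <sys/stat.h>

      /* The TrueCrypt/NTFS mount ignores POSIX permissions, so pretending
       * the chmod worked loses nothing and keeps svn happy. */
      int chmod(const char *path, mode_t mode)
      {
          (void)path;
          (void)mode;
          return 0;
      }

    Depending on how the svn client is built, it may also call fchmod(), which can be stubbed the same way.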

  • (simple) linux HA with vmware vsphere?

    - by derhelge
    I hope my upcoming question is specific enough, and you are able and willing to support :-) We have several openSUSE VMs in an ESX cluster (three ESX servers) with an attached iSCSI SAN. All of those Linux VMs are "single point of failure"-configured, which means in the case of a web server that everything (LAMP, storage, etc.) lives on that one machine. This was very simple, and in case of a failure (in the last years: kernel panics or Apache crashes) a simple reboot triggered by a script did it. But the problem is: how do I upgrade/maintain the web application or the underlying OS without downtime? This wasn't really manageable, and I did it in the early morning ;) How can I achieve a "simple" high-availability cluster now? I thought of DRBD with Heartbeat across 2 VMs, and for the storage an RDM (raw device mapped) LUN with the read-write permissions changed for both VMs. Is this a good idea? Does anyone have a better solution?
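
    For reference, the DRBD half of that idea is compact; a hedged sketch for DRBD 8.x, where hostnames, devices and addresses are placeholders and the Heartbeat resource configuration is a separate exercise:

      # /etc/drbd.d/web.res: one volume replicated between the two VMs
      resource web {
        protocol C;                  # synchronous replication
        on web1 {
          device    /dev/drbd0;
          disk      /dev/sdb1;       # dedicated backing VMDK
          address   10.0.0.1:7788;
          meta-disk internal;
        }
        on web2 {
          device    /dev/drbd0;
          disk      /dev/sdb1;
          address   10.0.0.2:7788;
          meta-disk internal;
        }
      }

    Heartbeat then mounts /dev/drbd0 and starts the web stack on whichever node is primary. Giving each VM its own plain VMDK and letting DRBD replicate also sidesteps the shared-RDM permission juggling entirely.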

  • "Account locked out" security event at midnight

    - by Kev
    The last three midnights I've gotten an Event ID 539 in the log... about my own account:

      Event Type: Failure Audit
      Event Source: Security
      Event Category: Logon/Logoff
      Event ID: 539
      Date: 2010-04-26
      Time: 12:00:20 AM
      User: NT AUTHORITY\SYSTEM
      Computer: SERVERNAME
      Description:
        Logon Failure:
          Reason: Account locked out
          User Name: MyUser
          Domain: MYDOMAIN
          Logon Type: 3
          Logon Process: NtLmSsp
          Authentication Package: NTLM
          Workstation Name: SERVERNAME
          Caller User Name: -
          Caller Domain: -
          Caller Logon ID: -
          Caller Process ID: -
          Transited Services: -
          Source Network Address: -
          Source Port: -

    It's always within a half minute of midnight. There are no login attempts before it. Right after it (in the same second) there's a success audit entry:

      Logon attempt using explicit credentials:
        Logged on user:
          User Name: SERVERNAME$
          Domain: MYDOMAIN
          Logon ID: (0x0,0x3E7)
          Logon GUID: -
        User whose credentials were used:
          Target User Name: MyUser
          Target Domain: MYDOMAIN
          Target Logon GUID: -
          Target Server Name: servername.mydomain.lan
          Target Server Info: servername.mydomain.lan
        Caller Process ID: 2724
        Source Network Address: -
        Source Port: -

    The process ID was the same on all three of them, so I looked it up, and right now at least it maps to TCP/IP Services (Microsoft). I don't believe I changed any policies or anything on Friday. How should I interpret this?

  • Unable to connect to MySQL through the JDBC connector via Tomcat or externally

    - by iftrue
    I've installed a stock MySQL 5.5 installation, and while I can connect to the MySQL service via the mysql command, and the service seems to be running, I cannot connect to it through Spring+Tomcat or from an external JDBC connector. I'm using the following URL:

      jdbc:mysql://myserver.com:myport/mydb

    with the proper username/password, but I receive the following message:

      server.com: Communications link failure
      The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.

    and Tomcat throws:

      com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
      The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
        sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)

    which seems to be the same issue as when I try to connect externally.
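
    A common cause worth ruling out first, offered as a guess since nothing above excludes it: on many distributions MySQL binds only to localhost, so the mysql command works over the local socket while every TCP connection is refused. The knob lives in my.cnf:

      # /etc/mysql/my.cnf (location varies by platform)
      [mysqld]
      # default is often 127.0.0.1; 0.0.0.0 listens on all interfaces
      bind-address = 0.0.0.0

    After a restart, `netstat -tlnp | grep 3306` should show mysqld listening, and the connecting account needs a grant for the remote host (e.g. 'user'@'%'), not just 'user'@'localhost'.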

  • configure cisco catalyst 3560g with an egress uplink

    - by imaginative
    Currently my setup has our egress uplink connected directly to an external interface on a Linux router/firewall/NAT gateway. Since the Linux box is a single point of failure, I've since set up two OpenBSD boxes using carp+pf+pfsync in order to gain some additional redundancy. The problem is, I only have one egress uplink (it's still a single point of failure) but need to get it to speak to the active CARP node in my OpenBSD cluster, which will serve as my new router/firewall/NAT cluster. Is there anything specific I need to do on a 3560G in order to be able to:

      1) Drop the egress uplink into a port
      2) Drop one link from the switch to a firewall
      3) Drop a second link from the switch to the other firewall

    This is so if one box dies, the other still has the egress link through the switch. Is putting them into one VLAN enough? Anything else that needs to go into the configuration for this setup to work?
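
    Putting all three ports in one dedicated VLAN is indeed the core of it; a sketch assuming ports Gi0/1-Gi0/3 and VLAN 100, with your numbering sure to differ:

      vlan 100
       name EGRESS
      interface range GigabitEthernet0/1 - 3
       switchport mode access
       switchport access vlan 100
       spanning-tree portfast

    portfast keeps a CARP failover from stalling behind spanning-tree's listening/learning delay, and leaving the VLAN without an SVI keeps it a dumb layer-2 segment between the uplink and the firewalls.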

  • Getting ROBOCOPY to return a "proper" exit code?

    - by Lasse V. Karlsen
    Is it possible to ask ROBOCOPY to exit with an exit code that indicates success or failure? I am using ROBOCOPY as part of my TeamCity build configurations, and having to add a step to just silence the exit code from ROBOCOPY seems silly to me. Basically, I have added this:

      EXIT /B 0

    to the script that is being run. However, this of course masks any real problems that ROBOCOPY would return. Basically, I would like to have exit codes of 0 for SUCCESS and non-zero for FAILURE instead of the bit-mask that ROBOCOPY returns now. Or, if I can't have that, is there a simple sequence of batch commands that would translate the bit-mask of ROBOCOPY to a similar value?
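
    ROBOCOPY's documented convention is that exit codes below 8 mean success (the bits record what happened: 1 = files copied, 2 = extra files detected, 4 = mismatches) and 8 or above mean at least one copy failure. A short translation, as a sketch to adapt:

      ROBOCOPY C:\src D:\dest /MIR
      IF %ERRORLEVEL% GEQ 8 EXIT /B 1
      EXIT /B 0

    Lower the threshold from 8 if, say, mismatched files should also fail the TeamCity build.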

  • Name resolution works from desktop but not Server

    - by Joe Estes
    Sending mail via smtp.gmail.com is failing on my server. I looked on some forums, and people were saying to make sure you can telnet to the SMTP address first. When I telnet from my server, I get this error:

      [root@localhost ~]# telnet smtp.gmail.com 465
      telnet: smtp.gmail.com: Temporary failure in name resolution
      smtp.gmail.com: Host name lookup failure

    From my OS X desktop I do the same and get this:

      Macintosh-3:~ joe$ telnet smtp.gmail.com 465
      Trying 74.125.127.109...
      Connected to gmail-smtp-msa.l.google.com.

    I'm running a Fedora Core 9 server with a Firestarter firewall. I have turned off the firewall and the same error persists. I'm also using port forwarding from my router to this server, and I have allowed forwarding for port 465 on my router as well. Can someone please help? Thanks, Joe
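
    "Temporary failure in name resolution" means the server has no working DNS resolver; it is unrelated to port 465 or the firewall. A hedged first check is whether /etc/resolv.conf lists reachable nameservers, with a quick test against a public one:

      cat /etc/resolv.conf
      # quick test; assumes you're comfortable using Google's resolver
      echo "nameserver 8.8.8.8" >> /etc/resolv.conf
      telnet smtp.gmail.com 465

    If that cures it, the lasting fix is correct DNS servers in the machine's network configuration (or the router's DHCP) rather than a hand-edited resolv.conf.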

  • Drbd Primary/Primary + iSCSI: accessing to different files avoids split brain?

    - by Eddie C.
    I have a question / curiosity about split-brain on a DRBD Primary/Primary configuration. Suppose two nodes (hosts), host1 and host2, configured with DRBD Primary/Primary and two different shares (NFS, CIFS or iSCSI) of a replicated area (say /drbd):

      /drbd/file1.data
      /drbd/file2.data

    If one pool of clients accessed only host1's share, reading and writing only file1.data, and another pool accessed only host2's share for file2.data, would this scenario avoid a split-brain situation in case of one node failing, or is that just a conjecture? The final purpose is load balancing between the two nodes in normal conditions, collapsing to one node only in case of failure. Thank you! Eddie

  • ZFS: RAIDZ versus stripe with ditto blocks

    - by RandomInsano
    I'm going to build a ZFS file server from FreeBSD. I learned recently that I can't expand a RAIDZ vdev once it's part of the pool. That's a problem, since I'm a home user and will probably add one disk a year at most. But what if I set copies=3 against my entire pool and just throw individual drives into the pool separately? I've read somewhere that the copies will try to distribute across drives if possible. Is there a guarantee there? I really just want protection from bit rot and drive failure on the cheap. Speed's not an issue since it'll go over a 1 Gb network and at most stream 720p podcasts. Would my data be guaranteed safe from a single drive failure? Are there things I'm not considering? Any and all input is appreciated.
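
    For what it's worth, the setting itself is one line; a sketch with pool and device names assumed:

      zpool create tank da1 da2 da3   # striped pool of single-disk vdevs
      zfs set copies=3 tank           # affects newly written data only

    But the caveat matters: ditto blocks are spread across vdevs on a best-effort basis, with no placement guarantee, and in a striped pool the loss of an entire top-level vdev can fault the whole pool even if individual blocks have surviving copies. copies= is good bit-rot insurance, not a substitute for mirrors or RAIDZ against whole-drive failure.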

  • Calling the LWRP from the Exception Handler

    - by Sarah Haskins
    Is it possible to call out to a provider (LWRP) from a Chef exception handler? I think my provider is out of scope, but I don't know if what I am trying to do is possible, or advisable. Here is my provider code (cookbooks/config/provider/signal.rb):

      action :failure do
        Chef::Log.info("Yeah success")
      end

    Here is my exception handler code (exception_handler/handlers/exceptionHandler.rb):

      require 'chef/handler'

      config_signal "signal" do
        action :nothing
      end

      class Chef
        class Handler
          class LogCollector < Chef::Handler
            notifies :failure, resources(:config_signal => signal)
          end
        end
      end

    Also, if anyone has a good recommendation for general reading about scope in the context of Chef, I'd appreciate it.
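
    A hedged sketch of the direction that does work: a handler is plain Ruby running outside the resource-collection scope, so instead of notifying a resource it implements report and does the work directly:

      require 'chef/handler'

      class Chef
        class Handler
          # Register in client.rb with:
          #   exception_handlers << Chef::Handler::LogCollector.new
          class LogCollector < Chef::Handler
            def report
              # run_status carries the failure details of the run
              Chef::Log.info("Run failed: #{run_status.formatted_exception}")
            end
          end
        end
      end

    resources() and notifies only exist while a run's resource collection is live, which is why the original snippet cannot resolve config_signal from inside the handler class.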

  • Cannot change password for user postgres in PostgreSQL

    - by dhaval
    I have made the following entry in pg_hba.conf:

      local all all trust

    but still `su postgres` does not accept blank as the password. I am not able to run psql or pg_ctl for the same reason, as most of the files are owned by postgres.

    EDIT 1:

      dhaval@ubuntu:~$ su -c "pg_ctl reload -D template1"
      Password:
      su: Authentication failure
      dhaval@ubuntu:~$ su -c psql
      Password:
      su: Authentication failure

    I am giving the root password above, but I guess it's expecting the "postgres" superuser password. I don't have it and need to reset it.

    EDIT 2:

      dhaval@ubuntu:~$ sudo -i -u postgres
      [sudo] password for dhaval:
      postgres@ubuntu:~$ psql
      Welcome to psql 8.3.7, the PostgreSQL interactive terminal.

    The above has taken me to the PostgreSQL command prompt. But I am still not sure why the "trust" was not working.
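
    The likely confusion, hedged but fitting the symptoms: pg_hba.conf governs database connections, while `su postgres` asks for the Unix account's login password, which Ubuntu never sets; that is why sudo works where su fails, and why "trust" appeared to do nothing. From the psql prompt the database superuser password can be set with plain SQL:

      -- inside psql, connected as the postgres database user
      ALTER USER postgres WITH PASSWORD 'newpassword';

    After that, md5 entries in pg_hba.conf will work for postgres too; remember to reload the server (pg_ctl reload, or the distribution's init script) after editing pg_hba.conf.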

  • Mysql Master-ColdMaster

    - by enedebe
    Let me explain my case: I'm on Amazon AWS and I want to be tolerant of an entire-region failure. My basic problem is keeping the DB in sync across 2 regions. My options:

      1. Master-Master (high lag)
      2. Hand-made sync every 5 minutes
      3. Master-ColdMaster?! (copy on the fly, but the master won't wait for the other region's commit)

    In my system we could afford to lose a piece of data (we're not a bank), namely the last inserts in the DB, but we could not afford more than 10 minutes of downtime. The database is small and the level of inserts is low, and I wouldn't want to affect normal usage by waiting on the other region's commit. Is the third solution possible? And most importantly, once the primary fails, how can we detect this and swap the roles (master-coldmaster to coldmaster-master)? Is there a clean way to restore after a failure? Thanks!
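
    Option 3 as described is essentially MySQL's standard asynchronous replication: the master commits without waiting, the replica applies what has arrived, and only the unshipped tail is lost at failover. A minimal sketch of pointing the cold master at the primary, with hostnames and credentials assumed:

      -- on the replica (MySQL 5.x syntax)
      CHANGE MASTER TO
        MASTER_HOST='primary.example.com',
        MASTER_USER='repl',
        MASTER_PASSWORD='secret',
        MASTER_LOG_FILE='mysql-bin.000001',
        MASTER_LOG_POS=4;
      START SLAVE;

    Detection and role-swap are the parts MySQL doesn't provide: a watchdog (or tooling such as MHA) has to notice the dead primary, stop replication on the survivor, make it writable (read_only = 0), and repoint the application.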
