Search Results

Search found 14989 results on 600 pages for 'street address'.


  • Tomcat can't talk to MySql after outage

    - by gav
    I missed a payment for my server and they suspended my account for a day or so. When they brought the server back up all my data was intact, but for some reason Tomcat can't make a JDBC connection to my MySql server. They both run on the same machine and hence I have a bind address of 127.0.0.1. It's strange because I have reset the machine of my own accord before without issue, but clearly something has been reset in the downtime. I followed this guide (just the bits which don't concern S3, I am not on Amazon infrastructure) originally and everything worked as expected. I'm very new to being a SysAdmin and I'm not sure what to try; how would you go about diagnosing this issue? The stack trace I get is as follows: INFO: Deploying web application archive myapp-1.1.war 2010-05-26 22:07:22,221 [main] ERROR context.ContextLoader - Context initialization failed org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'messageSource': Initialization of bean failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'transactionManager': Cannot resolve reference to bean 'sessionFactory' while setting bean property 'sessionFactory'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'sessionFactory': Cannot resolve reference to bean 'hibernateProperties' while setting bean property 'hibernateProperties'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'hibernateProperties': Cannot resolve reference to bean 'dialectDetector' while setting bean property 'properties' with key [hibernate.dialect]; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'dialectDetector': Invocation of init method failed; nested exception is org.springframework.jdbc.support.MetaDataAccessException: Could not get Connection for extracting meta data; nested exception is org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (Communications link failure The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519) at org.codehaus.groovy.grails.commons.spring.ReloadAwareAutowireCapableBeanFactory.doCreateBean(ReloadAwareAutowireCapableBeanFactory.java:129) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:450) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:290) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:287) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:193) at org.springframework.context.support.AbstractApplicationContext.initMessageSource(AbstractApplicationContext.java:714) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:404) at org.codehaus.groovy.grails.commons.spring.GrailsWebApplicationContext.refresh(GrailsWebApplicationContext.java:153) ... I get this error for a number of 'beans'. If I type mysql at my command prompt then I can easily login with the same credentials as my grails app which uses GORM and Hibernate to persist objects to the DB. I might not have given enough info to start with but I'm really interested to learn and will certainly provide it if asked, I just really don't know where to start on this one. Thanks, Gav
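
    A quick way to narrow this down is to check whether mysqld actually came back up listening on TCP 127.0.0.1:3306, because the mysql command-line client happily connects over the unix socket even when TCP networking is broken. A minimal sketch, assuming a stock Debian-style layout (paths and the database user are placeholders, not taken from the post):

      # is mysqld running, and is it bound to 127.0.0.1:3306?
      sudo /etc/init.d/mysql status
      sudo netstat -tlnp | grep 3306
      # did the downtime leave skip-networking or a changed bind-address behind?
      grep -E 'bind-address|skip-networking' /etc/mysql/my.cnf
      # force a TCP connection, the same way the JDBC driver connects
      mysql -h 127.0.0.1 -P 3306 -u grailsuser -p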


  • Amusing or Sad? Network Solutions

    - by dbasnett
    When I got sick my email ended up in every drug seller's email list. Some days I get over 200 emails selling everything from Viagra to Xanax. Either they don't know what my condition is or they are telling me you are a goner, might as well chill-ax and have a good time. In order to cut down on the mail being downloaded I thought I would add all of the Junk email senders from Outlook to my Network Solutions mail server. Much to my amazement I could not find that "Import Spammers" button, so I submitted a tech support request. Here is the response: Thank you for contacting Network Solutions Customer Service Department. We are committed to creating the best Customer experience possible. One of the first ways we can demonstrate our commitment to this goal is to quickly and efficiently handle your recent request. We apologize for any inconvenience this might have caused you. With regard to your concern, please be advised that we cannot import blocked senders in to you e-mail servers. An alternative option is for you to create a Custom Filter that filters unwanted e-mails. To create a Custom Filter: Open a Web browser (e.g., Netscape, Microsoft Internet Explorer, etc.). Type mail.[domain name].[ext] in the address line. Login to your Network Solutions email account. Click on the Configuration left menu tab. Click on the Custom Filter link. Type the rule name. blah, blah, blah Basically add them one at a time. "We are committed to creating the best Customer experience possible." No you are not. You are trying to squeeze every nickel you can out of me. "With regard to your concern, please be advised that we cannot import blocked senders in to you e-mail servers." Maybe I should apply for a job to write those ten complicated lines of code... Maybe I should question my choice of vendors, because if they truly "cannot" then they are too stupid to have my business. It is both amusing and sad. I'll be posting this in every forum I am a member of.


  • Slow wifi from Windows Server 2003 virtualized in XenServer

    - by John Clayton
    I'm a brand spanking new user of OS X, coming from a lifetime of Windows use. I've been setting up my new MacBook Pro and have run into a very unusual problem. Over wifi, I am unable to copy files to or from my Windows Home Server. The problem seems to exist only over wifi, and only to WHS. Here are the details of my setup: 2010 MacBook Pro (Core i7), OS X 10.6.3 Windows Home Server PP3 (virtualized in XenServer 5.5) Windows 7 Ultimate x64 desktop Windows 7 Ultimate x64 in Boot Camp D-Link DIR-655 wireless N router Here is what I've done to narrow down the problem: Files copy fine from WHS to OS X when using gigabit ethernet Files copy fine from desktop to OS X when using gigabit ethernet Files fail to copy from WHS to OS X when using wifi (error -51) Files copy fine from desktop to OS X when using wifi Files copy fine from WHS to Boot Camp when using wifi Files copy fine from desktop to Boot Camp when using wifi From what I can tell, it seems to be some sort of issue between OS X and WHS, but I can't for the life of me see what would be different between shares on WHS and my desktop. They are both connected using smb://ADDRESS (I've tried both by IP and name). I can browse the shares on the WHS, but copying to OS X fails. I originally found the issue while installing VS2010 off an ISO from WHS, mounted to a Windows 7 VM using VMware Fusion. During the installation the VM was unusable - even the clock got behind the host by about 8 minutes. Once I plugged in the ethernet and disabled the wifi things picked up and finished quickly. The Fusion 3.1 RC is the only thing I can think of that I installed that may have messed with the wifi driver. I've also tried resetting the wifi router, and have changed it from being G & N to N-only. Under Boot Camp I get similar speeds as my wife's N laptop. Any ideas? Thanks! Update: The issue has been further narrowed down to Windows Server 2003, which Windows Home Server is based on, running in XenServer with the XenServer tools installed.


  • How to reset mysql's replication settings completely, without reinstalling it?

    - by user38060
    I set up mysql replication by adding references to binlogs, relay logs etc in my.cnf restarted mysql, it worked. I wanted to change it so I deleted all binlog related files including log-bin.index, removed binlog statements from my.cnf restarted server, works set master to '', purge master logs since now(), reset slave, stop slave, stop master. now, to set up replication again, I added binlog statements to the server. But then I hit this problem when restarting with: sudo mysqld (the only way to see mysql's startup errors) I get this error: /usr/sbin/mysqld: File '/etc/mysql/var/log-bin.index' not found (Errcode: 13) Because indeed, this file does not exist! (I deleted it, while trying to set up a new replication system) Hmm, if I change the config line to: log-bin-index = log-bin.index I get a different error: [ERROR] Can't generate a unique log-filename /etc/mysql/var/bin.(1-999) [ERROR] MSYQL_BIN_LOG::open failed to generate new file name. [ERROR] Aborting The first time I set up replication on this system, I didn't need to create this file. I did the same thing - added references to a previously non-existing file, and mysql created it. Same with relay logs, etc. I don't know why mysql insists on trying to read the old folder. Should I just reinstall the whole package again? That seems like overkill. my my.cnf: [client] port = 3306 socket = /var/run/mysqld/mysqld.sock [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] user = mysql socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp skip-external-locking bind-address = IP key_buffer = 16M max_allowed_packet = 16M thread_stack = 192K thread_cache_size = 8 myisam-recover = BACKUP table_cache = 64 sort_buffer =64K net_buffer_length =2K query_cache_limit = 1M query_cache_size = 16M slow_query_log_file = /etc/mysql/var/mysql-slow.log long_query_time = 1 log-queries-not-using-indexes expire_logs_days = 10 max_binlog_size = 100M server-id = 3 log-bin = /etc/mysql/var/bin.log log-slave-updates log-bin-index = /etc/mysql/var/log-bin.index log-error = /etc/mysql/var/error.log relay-log = /etc/mysql/var/relay.log relay-log-info-file = /etc/mysql/var/relay-log.info relay-log-index = /etc/mysql/var/relay-log.index auto_increment_increment = 10 auto_increment_offset = 3 master-host = HOST master-user = USER master-password=PWD replicate-do-db = DBNAME collation_server=utf8_unicode_ci character_set_server=utf8 skip-character-set-client-handshake [mysqldump] quick quote-names max_allowed_packet = 16M [mysql] #no-auto-rehash [myisamchk] key_buffer_size = 16M sort_buffer_size = 8M [mysqlhotcopy] interactive-timeout !includedir /etc/mysql/conf.d/ Update: Changing all the /etc/mysql/var/xxx paths in binlog & relay log statements to local has somehow solved the problem. I thought it was apparmor causing it at first, but when I added /etc/mysql/* rw, to apparmor's config and restarted it, it still couldn't read the full path.
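
    Errcode 13 is "permission denied" rather than "file not found", which fits the update at the end about the /etc/mysql/var paths. A hedged sketch of the checks (credentials are placeholders; the RESET statements assume the server has been started once more, for instance with the log-bin lines temporarily commented out):

      # Errcode 13 = EACCES: can the mysql user actually write where log-bin points?
      ls -ld /etc/mysql/var
      sudo -u mysql touch /etc/mysql/var/write.test
      grep -i apparmor /var/log/syslog | grep -i denied | tail
      # with the server up again, clear the old replication state before re-adding binlog options
      mysql -u root -p -e "STOP SLAVE; RESET SLAVE; RESET MASTER;"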


  • Has the hardware in my modem gone bad?

    - by Tyler Scott
    I contacted CenturyLink about my modem recently and received useless and unrelated information. The problem seems to be that the modem will no longer save settings, the web interface is unusable except in internet explorer for some reason, and the modem keeps resetting. CenturyLink claimed it had to do with signal strength but I checked and it is currently between good and outstanding according to this. All of the lights remain green even when it starts acting up and I lose internet and shortly before it crashes and reboots. Does anyone have any idea what is going on or what I can do to fix it? (Asking CenturyLink again is obviously not going to help.) Update 1: Accessing the syslog from the web interface causes a crash. After it reboots, the log looks like as follows: 01/01/1970 12:01:29 AM Ethernet Ethernet client connected ,ip(192.168.0.2), mac(1c:6f:65:4c:6d:3b) 01/01/1970 12:01:38 AM Wireless 802.11 client connected ,ip(192.168.0.18), mac(d0:df:c7:c2:73:ca) 01/01/1970 12:01:41 AM System Event Line 0: VDSL2 link up, Bearer 0, us=20128, ds=40127 01/01/1970 12:01:43 AM dhcp6s[2028] dhcp6_ctl_authinit: failed to open /etc/dhcp6sctlkey: No such file or directory 01/01/1970 12:01:50 AM dhcp6s[2469] dhcp6_ctl_authinit: failed to open /etc/dhcp6sctlkey: No such file or directory 01/01/1970 12:01:52 AM radvd[2306] poll error: Interrupted system call 01/01/1970 12:01:56 AM PPP Link PPP server detected. 01/01/1970 12:01:56 AM PPP Link PPP session established. 01/01/1970 12:01:56 AM PPP Link PPP LCP UP. 01/01/1970 12:01:56 AM System Event Received valid IP address from server. Connection UP. 06/05/2014 08:16:01 AM radvd[2511] poll error: Interrupted system call 06/05/2014 08:16:03 AM System Event Dead loop on virtual device tun6rd, fix it urgently! 06/05/2014 08:16:03 AM System Event Dead loop on virtual device tun6rd, fix it urgently! 06/05/2014 08:16:03 AM System Event Dead loop on virtual device tun6rd, fix it urgently! 06/05/2014 08:16:03 AM System Event Dead loop on virtual device tun6rd, fix it urgently! 06/05/2014 08:16:03 AM System Event Dead loop on virtual device tun6rd, fix it urgently! 06/05/2014 08:16:03 AM System Event Dead loop on virtual device tun6rd, fix it urgently! 06/05/2014 08:16:03 AM System Event Dead loop on virtual device tun6rd, fix it urgently! 06/05/2014 08:16:03 AM System Event Dead loop on virtual device tun6rd, fix it urgently! 06/05/2014 08:16:03 AM System Event Dead loop on virtual device tun6rd, fix it urgently! 06/05/2014 08:16:04 AM System Event Dead loop on virtual device tun6rd, fix it urgently! 06/05/2014 08:16:04 AM dhcp6s[3236] dhcp6_ctl_authinit: failed to open /etc/dhcp6sctlkey: No such file or directory 06/05/2014 08:16:08 AM Wireless 802.11 client connected ,ip(192.168.0.7), mac(44:6d:57:c4:d7:08) I also get it to crash on various other pages. I am guessing the web server is unstable.


  • IIS 401.3 - Unauthorized on only 1 server out of 3 set up for network load balancing

    - by Tony
    Over the weekend our Server Admin set up two virtual Windows 2008 machines with IIS installed and set them up under NLB. I came in and changed the application pool the website was running under to our domain account that has proper access to the database and the file share hosting our .NET web application Sitefinity, and changed it to .NET 4 Integrated. NLB and everything was running fine on both servers. He brought up the third server for our cluster on Tuesday and I performed the same actions.. The only difference was that I was given admin rights for the third server so I could set it up remotely instead of going to his office. He has full control over the share and NTFS perms on \\hostname\Sitefinity and I believe I only had read access. I pointed the web site to the same \\hostname\Sitefinity\sitename share that the others were on and the authentication/authorization test settings passed. I hit the site from http://localhost (like I did successfully from the other two before trying the cluster's IP address) and I received a HTTP Error 401.3 - Unauthorized. I've verified many times that the application pool is running under the same service account. I tried hitting just a simple test.htm.. works fine on both of the first two servers but I get the same 401.3 on the third. I copied my dev project to the local inetpub directory and re-pointed the website and that ran perfectly. I turned on Failed Request Tracing and it acts like it's still running the local IUSR account I guess (instead of my domain account)? Here is an excerpt of the File Cache Access Start and the error from the trace: FileName \\hostname\sitefinity\sitename\test.htm UserName IUSR DomainName NT AUTHORITY ---------- Successful false FileFromCache false FileAddedToCache false FileDirmoned true LastModCheckErrorIgnored true ErrorCode 2147942405 LastModifiedTime ErrorCode Access is denied. (0x80070005) ---------- ModuleName IIS Web Core Notification 2 HttpStatus 401 HttpReason Unauthorized HttpSubStatus 3 ErrorCode 2147942405 ConfigExceptionInfo Notification AUTHENTICATE_REQUEST ErrorCode Access is denied. (0x80070005) ---------- My personal AD account was then granted read/write perms to the share so I created a new application pool and set the site under it in case there was an issue with the application pool but no success. I created another under my own account and it still failed. It just seems like maybe it's not trying to access the files under the account my application pools are running under although that's the only way I've done things before. I set the Physicial Path Credentials in Advanced Settings on the site to the service account and it threw a 500 error of some sort so I assume that's not the answer (and I don't have to do it on the other servers). It's like somehow I'm trying to force impersonation on the IUSR account or something?
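
    The trace showing IUSR rather than the domain service account suggests anonymous authentication on the third server is still mapped to the specific IUSR user instead of the application pool identity. One hedged sketch of that change (the site name is a placeholder, and this is a common fix for 401.3 against a UNC path rather than a confirmed diagnosis):

      :: run from an elevated command prompt on the third server
      %windir%\system32\inetsrv\appcmd set config "SiteName" ^
        -section:system.webServer/security/authentication/anonymousAuthentication ^
        /userName:"" /commit:apphost
      :: an empty userName makes anonymous requests run as the application pool identity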


  • Email delivery management grievances

    - by joxl
    The question I have may be more a matter of principle than anything else, but here's my dilemma. I manage an email system for a small company (about 20 email users). We own a plain-letter .com domain name through Network Solutions. Our email service is hosted by Google Apps. Recently (Feb. 2011) we've been having customers report that they aren't getting our emails. Upon further investigation it seems that the failed emails are all to a common (well known) domain. We have not received any bounce messages for the emails. We've also contacted a few of the intended recipients, who have reported that the messages are not in their spam box; they simply did not receive anything. In these cases we re-sent the same email to an alternate address on another domain, which was successfully received. One customer contacted their email provider about the issue. The provider recommended that we submit a form to be white-listed by their domain. Here's where my problem begins. I feel like this is heading down a slippery slope. Doesn't this undermine the very principle of email? If this is the appropriate action to take in these situations where will it end? In theory (following this model) it could be argued that eventually one will first need to "whitelist" (or more appropriately termed "authenticate") themselves with an email host before actually sending any messages. More to this point, what keeps the "bad" spammers from doing the same thing...? We've just gone full circle. I know avoiding anti-spam measures is a big cat-and-mouse game, but I think this is the wrong way of "patching" the problem. Email standards say that messages should not just disappear silently. I have a problem supporting a model that says "you must do this to make sure your emails aren't ignored". I have a notion to call the provider and voice my complaint, although I have a feeling it will probably fall on deaf ears. Am I missing something here? Is this an acceptable approach to email spam problems? What should I do?
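
    Short of filing per-provider whitelisting forms, the usual counterweight is to publish sender-authentication records so receivers can verify Google Apps as a legitimate source for the domain. A minimal SPF sketch (the domain is a placeholder; the include target is the one Google documents for Google Apps):

      example.com.   IN   TXT   "v=spf1 include:_spf.google.com ~all"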


  • Choosing my first Domain Registrar?

    - by user36914
    This will be the first domain I've ever registered, so I'm at a loss as to what to look for. I definitely don't want to go with GoDaddy. Here are my requirements: must have unlimited email forwards for my domain; easy to transfer away if I choose; must not be one of those shady registrars that will try to auction your domain at the end; ability to create subdomains; private domain registration. I would like a domain registrar that would let me use the dynamic IP from my ISP (cable) if I want to, so hopefully they would have some type of program that detects IP changes and updates accordingly. So I've looked at a variety of registrars so far. The three left were really NameCheap, DreamHost, & DomainMonster. I have heard good things about DreamHost but I think it's off the list because they don't give you any information about the features you get when you register your domain with them. They have a "What's included" button on the page, but it mainly lists the features of hosting, not registration. DomainMonster looks pretty cool but I don't see anything about subdomains. Also I would assume they don't have a system for dynamic IP address updating, so you would have to constantly check whether the IP from your ISP has changed or not and update it manually. NameCheap also looks nice. There are two things I really like about them. Right on their feature page they list "Free Dynamic DNS With Client", which is pretty cool. They also have a free SSL certificate for the first year. I haven't messed a lot with certificates but this would definitely be something I would use. The only minus I can see is you only get free private whois for the first year. After that it's $2.99, which isn't that big of a deal. I'm leaning towards NameCheap now. Is this a good choice? Is there anything else I should be looking at?
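
    For the dynamic-IP requirement, registrars that advertise "Dynamic DNS with client" generally either ship their own updater or speak the dyndns2 protocol, which ddclient can drive from the home machine. A sketch of what such a config usually looks like (every value below is a placeholder and depends on the registrar's service):

      # /etc/ddclient.conf
      daemon=300
      use=web, web=checkip.dyndns.org
      protocol=dyndns2
      server=dynamicdns.registrar.example
      login=myuser
      password=mypassword
      home.example.com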


  • How to get php mail function to work on Debian “squeeze”?

    - by Neel Kamal
    I have installed Apache and PHP5 on my Debian server. First I tried it using sendmail. Here is the step-by-step procedure that I tried: Step 1: apt-get install sendmail Step 2: /etc/init.d/apache2 restart But this didn't work. Then I tried using external SMTP. My domain is registered on BigRock. I registered an email address there [email protected] and it gave me the required credentials. On the server I installed sSMTP > apt-get install ssmtp > Configured the "/etc/ssmtp/ssmtp.conf" file. In the configuration file I added [email protected] mailhub=smtp.fostergen.com:587 (Here I have a doubt. I am not sure what to use here. I tried smtp.fostergen.com:587, smtp.fostergen.com:25, mx1.mailhostbox.com:587, mx1.mailhostbox.com:25. I am still not sure what to use here. I used mx1.mailhostbox.com as it was the MX entry for my domain on BigRock; see the screenshot of BigRock's email management tool.) hostname=vs3204.ams2.alvotec.de (I entered the command hostname -f on my server and got this as the result) FromLineOverride=YES UseSTARTTLS=YES [email protected] AuthPass=password provided during email registration on BigRock > edited /etc/ssmtp/revaliases (added " root:[email protected]:mx1.mailhostbox.com:587 " as the last line) > edited php.ini (sendmail_path = /usr/sbin/ssmtp -t) > /etc/init.d/apache2 restart But this didn't work. After this I tried eSMTP. Steps performed: > apt-get install esmtp > edited /etc/esmtprc hostname=smtp.fostergen.com:587 username= [email protected] password: password provided by BigRock mda="/usr/bin/procmail -d %T" > linked eSMTP to the legacy sendmail path by executing the command "ln -s /usr/bin/esmtp /usr/bin/sendmail" > edited php.ini (/usr/bin/sendmail -t -i) > /etc/init.d/apache2 restart But this technique also failed. I just want to send email to users through the PHP mail() function. Kindly help. Where am I going wrong?
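
    Whichever relay ends up in ssmtp.conf, it helps to test the PHP side separately from the SMTP side. A minimal sketch (addresses are placeholders; running it from the CLI makes sendmail_path failures visible in the mail log):

      <?php
      // mailtest.php - exercises the same sendmail_path that mail() uses
      $ok = mail('[email protected]', 'ssmtp test', 'sent through ssmtp',
                 'From: [email protected]');
      var_dump($ok);   // false usually means the sendmail_path command exited non-zero
      // then run: php -f mailtest.php   and watch: tail -f /var/log/mail.log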


  • Vagrant reporting VirtualBox guest additions out of date

    - by DTest
    Fairly new to Vagrant, so bear with me if I don't understand the process. I downloaded a CentOS box off http://www.vagrantbox.es/ Started it up running VirtualBox 4.2.4 and got this message: [default] The guest additions on this VM do not match the install version of VirtualBox! This may cause things such as forwarded ports, shared folders, and more to not work properly. If any of those things fail on this machine, please update the guest additions and repackage the box. Guest Additions Version: 4.0.8 VirtualBox Version: 4.2.4 So I used the vbguest plugin to update the guest additions, then repackaged the box as suggested. Having replaced the old box and loading it up I get the same message about guest additions being outdated, but vbguest reports that they are up to date (the automatic vbguest update is disabled in my Vagrantfile): Vagrant::Config.run do |config| config.vm.box = "centos56_64" config.vbguest.auto_update = false config.vbguest.no_remote = true end And the commands: dtest$ vagrant up [default] Importing base box 'centos56_64'... [default] The guest additions on this VM do not match the install version of VirtualBox! This may cause things such as forwarded ports, shared folders, and more to not work properly. If any of those things fail on this machine, please update the guest additions and repackage the box. Guest Additions Version: 4.0.8 VirtualBox Version: 4.2.4 [default] Matching MAC address for NAT networking... [default] Clearing any previously set forwarded ports... [default] Forwarding ports... [default] -- 22 => 2222 (adapter 1) [default] Creating shared folders metadata... [default] Clearing any previously set network interfaces... [default] Booting VM... [default] Waiting for VM to boot. This can take a few minutes. [default] VM booted and ready for use! [default] Mounting shared folders... [default] -- v-root: /vagrant dtest$ vagrant vbguest --no-install [default] Detected Virtualbox Guest Additions 4.2.4 --- OK. [default] Virtualbox Guest Additions on host: 4.2.4 - guest's version is 4.2.4 Since they appear to be updated after an install, I could ignore the message. But is it possible to get rid of it?
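
    It may be worth comparing what VirtualBox itself reports for that VM's additions with what vbguest reports, since the two numbers can come from different places. A small sketch (the VM name is a placeholder for whatever VBoxManage lists):

      VBoxManage list vms
      VBoxManage guestproperty get "centos56_64_default" /VirtualBox/GuestAdd/Version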


  • How to unlock and remove a protected partition from Prestigio USB stick?

    - by mr.b
    Ok, so, I have one of those fancy schmancy devices, which is given to me by a frustrated friend of mine. Device is a Prestigio Leather 8GB, which identifies itself to Linux host as: Bus 001 Device 006: ID 1307:0165 Transcend Information, Inc. 2GB/4GB Flash Drive Kernel messages as USB device is plugged in: kernel: [ 2769.580042] usb 1-9: new high speed USB device using ehci_hcd and address 7 kernel: [ 2769.714782] scsi8 : usb-storage 1-9:1.0 kernel: [ 2770.713937] scsi 8:0:0:0: Direct-Access 8192MB flash drive 1.00 PQ: 0 ANSI: 2 kernel: [ 2770.714535] scsi 8:0:0:1: Direct-Access 8192MB flash drive 1.00 PQ: 0 ANSI: 2 kernel: [ 2770.715734] sd 8:0:0:0: Attached scsi generic sg3 type 0 kernel: [ 2770.716108] sd 8:0:0:1: Attached scsi generic sg4 type 0 kernel: [ 2770.722175] sd 8:0:0:0: [sdc] 962560 512-byte logical blocks: (492 MB/470 MiB) kernel: [ 2770.722657] sd 8:0:0:0: [sdc] Write Protect is on kernel: [ 2770.731078] sd 8:0:0:1: [sdd] 14012416 512-byte logical blocks: (7.17 GB/6.68 GiB) kernel: [ 2770.731215] sdc: kernel: [ 2770.738251] sd 8:0:0:1: [sdd] Write Protect is off kernel: [ 2770.880328] kernel: [ 2770.885876] sd 8:0:0:0: [sdc] Attached SCSI removable disk kernel: [ 2770.887442] sdd: unknown partition table kernel: [ 2771.049605] sd 8:0:0:1: [sdd] Attached SCSI removable disk So, symptoms are typical for U3-like devices: two separate devices inside of a single flash device. Windows sees it also as two identical usb devices, and mounts two separate drives to system, whereas first one presents itself as a CDROM device, holding a write-protected content, and second is a regular flash-disk partition, that "can" be written to. However, it seems like it's broken in some weird way, since it won't let me write anything to it, format it, nothing, but that's not the issue right now. Question: How can I unlock entire USB stick so it appears to system as a single, 8GB device which can be partitioned and used normally, without restrictions? Since it appeared to be an U3 device, I have tried standard utilities: both U3 Uninstaller by u3.com (found on SoftPedia), and opensource u3_tool from sourceforge (on both Windows and Linux). First utility failed to even detect USB stick as U3 device (simply stood idle while I re-plugged stick several times), while second tool failed with some obscure error about SCSI command unable to do something (I might be able to provide exact errors when I switch back to windows). u3_tool -i /dev/sg3 (Display device info) fails with u3_partition_info() failed: Device reported command failed: status 1 ...and every other option fails with same error, minus first part which states which command precisely has failed. So, apparently, this isn't a U3 device. Or, if it is, it doesn't behave like one. I read on a few occasions that this device protection is done by special command sent to device which tells it to lock itself, and so there should be an unlock command, that would set drive straight. Does anyone have any idea about what could I do to this device to fix it? P.S. I also mentioned a problem with being unable to use second "drive", but I'll tackle that problem when (and if) I manage to merge those two devices into one...


  • My facebook blocking ACL has stopped working

    - by Josh
    This probably very simple. This was setup before I arrived, and has been working to block facebook. I recently eliminated some static port forwarding on this 2691 (as in, I don't think anything else has changed), and now facebook is once again accessible. Why is this list not doing what it seems like it should be doing (and was doing)? Would an extended outbound ACL be more appropriate (I think that would have been my thought if I had been tasked with creating this in the first place)? Something different? I've included below what I believe are the relevant parts of the config. interface FastEthernet0/0 ip address my.pub.ip.add my.ip.add.msk ip access-group 1 in ip nat outside ip virtual-reassembly duplex auto speed auto access-list 1 deny 69.171.224.0 0.0.31.255 access-list 1 deny 74.119.76.0 0.0.3.255 access-list 1 deny 204.15.20.0 0.0.3.255 access-list 1 deny 66.220.144.0 0.0.15.255 access-list 1 deny 69.63.176.0 0.0.15.255 access-list 1 permit any ip nat inside source list 105 interface FastEthernet0/0 overload access-list 105 deny ip 192.168.0.0 0.0.0.255 192.168.8.0 0.0.0.255 access-list 105 permit ip 192.168.0.0 0.0.0.255 any access-list 105 permit ip 192.168.1.0 0.0.0.255 any EDIT ACL is once again blocking Facebook. Here is the new definition for those interested... access-list 1 deny 66.220.144.0 0.0.7.255 access-list 1 deny 66.220.152.0 0.0.7.255 access-list 1 deny 69.63.176.0 0.0.7.255 access-list 1 deny 69.63.176.0 0.0.0.255 access-list 1 deny 69.63.184.0 0.0.7.255 access-list 1 deny 69.171.224.0 0.0.15.255 access-list 1 deny 69.171.239.0 0.0.0.255 access-list 1 deny 69.171.240.0 0.0.15.255 access-list 1 deny 69.171.255.0 0.0.0.255 access-list 1 deny 74.119.76.0 0.0.3.255 access-list 1 deny 173.252.64.0 0.0.31.255 access-list 1 deny 173.252.70.0 0.0.0.255 access-list 1 deny 173.252.96.0 0.0.31.255 access-list 1 deny 204.15.20.0 0.0.3.255 access-list 1 permit any
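
    An extended ACL matched on destination and applied on the LAN-facing interface is indeed the more conventional approach, since a standard list can only match source addresses. A sketch reusing the prefixes from the list above (the inside interface name is a placeholder):

      ip access-list extended BLOCK-FACEBOOK
       deny   ip any 66.220.144.0 0.0.15.255
       deny   ip any 69.63.176.0 0.0.15.255
       deny   ip any 69.171.224.0 0.0.31.255
       deny   ip any 74.119.76.0 0.0.3.255
       deny   ip any 204.15.20.0 0.0.3.255
       permit ip any any
      !
      interface FastEthernet0/1
       ip access-group BLOCK-FACEBOOK in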


  • Weird DNS bug - external server resolves to internal IP

    - by emilecantin
    I have a server that is hosted by my university. I have root access, but no control over network setup, firewall, etc. This server's DNS resolves to an internal IP here on campus (10.x.x.x), and an external IP outside campus. I also have a few servers hosted at Amazon, and they mostly work well. However, one of them started to resolve the university server by its internal IP address. This causes problems, as 10.x.x.x on Amazon EC2 is someone else. I have connected to the Amazon server with SSH agent forwarding a few times in the past, to access a Git repository on the university server. Any idea what could cause this? EDIT: Here's my /etc/resolv.conf # Generated by dhcpcd for interface eth0 search ec2.internal nameserver 172.16.0.23 Here's the output of dig myserver.myuniversity.ca.: ; <<>> DiG 9.8.1-P1 <<>> myserver.myuniversity.ca. ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 34470 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;myserver.myuniversity.ca. IN A ;; ANSWER SECTION: myserver.myuniversity.ca. 537586 IN A 10.43.x.x ;; Query time: 2 msec ;; SERVER: 172.16.0.23#53(172.16.0.23) ;; WHEN: Wed Nov 28 16:07:21 2012 ;; MSG SIZE rcvd: 60 Here's the expected output (on another Amazon server): ; <<>> DiG 9.8.1-P1 <<>> myserver.myuniversity.ca. ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 8045 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;myserver.myuniversity.ca. IN A ;; ANSWER SECTION: myserver.myuniversity.ca. 601733 IN A x.x.239.1 ;; Query time: 1 msec ;; SERVER: 172.16.0.23#53(172.16.0.23) ;; WHEN: Wed Nov 28 16:09:36 2012 ;; MSG SIZE rcvd: 60
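
    Since only one Amazon host misbehaves, it is worth asking each resolver directly and ruling out local overrides on that box; a quick sketch (the hostname stays a placeholder for the real one):

      dig +short myserver.myuniversity.ca @172.16.0.23   # the EC2 resolver from resolv.conf
      dig +short myserver.myuniversity.ca @8.8.8.8       # a public resolver, for comparison
      getent hosts myserver.myuniversity.ca              # catches stray /etc/hosts entries
      # the bad answer still shows ~537586s of TTL, so a cached record can persist for days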


  • HAProxy + Percona XtraDB Cluster

    - by rottmanj
    I am attempting to setup HAproxy in conjunction with Percona XtraDB Cluster on a series of 3 EC2 instances. I have found a few tutorials online dealing with this specific issue, but I am a bit stuck. Both the Percona servers and the HAproxy servers are running ubuntu 12.04. The HAProxy version is 1.4.18, When I start HAProxy I get the following error: Server pxc-back/db01 is DOWN, reason: Socket error, check duration: 2ms. I am not really sure what the issue could be. I have verified the following: EC2 security groups ports are open Poured over my config files looking for issues. I currently do not see any. Ensured that xinetd was installed Ensured that I am using the correct ip address of the mysql server. Any help with this is greatly appreciated. Here are my current config Load Balancer /etc/haproxy/haproxy.cfg global log 127.0.0.1 local0 log 127.0.0.1 local1 notice maxconn 4096 user haproxy group haproxy debug #quiet daemon defaults log global mode http option tcplog option dontlognull retries 3 option redispatch maxconn 2000 contimeout 5000 clitimeout 50000 srvtimeout 50000 frontend pxc-front bind 0.0.0.0:3307 mode tcp default_backend pxc-back frontend stats-front bind 0.0.0.0:22002 mode http default_backend stats-back backend pxc-back mode tcp balance leastconn option httpchk server db01 10.86.154.105:3306 check port 9200 inter 12000 rise 3 fall 3 backend stats-back mode http balance roundrobin stats uri /haproxy/stats MySql Server /etc/xinetd.d/mysqlchk # default: on # description: mysqlchk service mysqlchk { # this is a config for xinetd, place it in /etc/xinetd.d/ disable = no flags = REUSE socket_type = stream port = 9200 wait = no user = nobody server = /usr/bin/clustercheck log_on_failure += USERID #only_from = 0.0.0.0/0 # recommended to put the IPs that need # to connect exclusively (security purposes) per_source = UNLIMITED } MySql Server /etc/services Added the line mysqlchk 9200/tcp # mysqlchk MySql Server /usr/bin/clustercheck # GNU nano 2.2.6 File: /usr/bin/clustercheck #!/bin/bash # # Script to make a proxy (ie HAProxy) capable of monitoring Percona XtraDB Cluster nodes properly # # Author: Olaf van Zandwijk <[email protected]> # Documentation and download: https://github.com/olafz/percona-clustercheck # # Based on the original script from Unai Rodriguez # MYSQL_USERNAME="testuser" MYSQL_PASSWORD="" ERR_FILE="/dev/null" AVAILABLE_WHEN_DONOR=0 # # Perform the query to check the wsrep_local_state # WSREP_STATUS=`mysql --user=${MYSQL_USERNAME} --password=${MYSQL_PASSWORD} -e "SHOW STATUS LIKE 'wsrep_local_state';" 2>${ERR_FILE} | awk '{if (NR!=1){print $2}}' 2>${ERR_FILE}` if [[ "${WSREP_STATUS}" == "4" ]] || [[ "${WSREP_STATUS}" == "2" && ${AVAILABLE_WHEN_DONOR} == 1 ]] then # Percona XtraDB Cluster node local state is 'Synced' => return HTTP 200 /bin/echo -en "HTTP/1.1 200 OK\r\n" /bin/echo -en "Content-Type: text/plain\r\n" /bin/echo -en "\r\n" /bin/echo -en "Percona XtraDB Cluster Node is synced.\r\n" /bin/echo -en "\r\n" else # Percona XtraDB Cluster node local state is not 'Synced' => return HTTP 503 /bin/echo -en "HTTP/1.1 503 Service Unavailable\r\n" /bin/echo -en "Content-Type: text/plain\r\n" /bin/echo -en "\r\n" /bin/echo -en "Percona XtraDB Cluster Node is not synced.\r\n" /bin/echo -en "\r\n" fi
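
    A "Socket error" on the check usually means nothing answered on port 9200 at all, so it helps to exercise the xinetd/clustercheck chain by hand before touching HAProxy. A sketch (the IP is the db01 address used in the backend):

      # from the HAProxy box
      curl -i http://10.86.154.105:9200/      # expect "HTTP/1.1 200 OK ... Node is synced."
      # on db01 itself
      sudo /usr/bin/clustercheck              # rules out the MySQL credentials / wsrep state
      sudo netstat -tlnp | grep 9200          # confirms xinetd picked up the mysqlchk service
      sudo service xinetd restart             # xinetd only rereads /etc/services on restart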


  • Rails 3 shows 404 error instead of index.html (nginx + unicorn)

    - by Miko
    I have an index.html in public/ that should be loading by default but instead I get a 404 error when I try to access http://example.com/ The page you were looking for doesn't exist. You may have mistyped the address or the page may have moved. This has something to do with nginx and unicorn which I am using to power Rails 3 When take unicorn out of the nginx configuration file, the problem goes away and index.html loads just fine. Here is my nginx configuration file: upstream unicorn { server unix:/tmp/.sock fail_timeout=0; } server { server_name example.com; root /www/example.com/current/public; index index.html; keepalive_timeout 5; location / { try_files $uri @unicorn; } location @unicorn { proxy_pass http://unicorn; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; proxy_redirect off; } } My config/routes.rb is pretty much empty: Advertise::Application.routes.draw do |map| resources :users end The index.html file is located in public/index.html and it loads fine if I request it directly: http://example.com/index.html To reiterate, when I remove all references to unicorn from the nginx conf, index.html loads without any problems, I have a hard time understanding why this occurs because nginx should be trying to load that file on its own by default. -- Here is the error stack from production.log: Started GET "/" for 68.107.80.21 at 2010-08-08 12:06:29 -0700 Processing by HomeController#index as HTML Completed in 1ms ActionView::MissingTemplate (Missing template home/index with {:handlers=>[:erb, :rjs, :builder, :rhtml, :rxml, :haml], :formats=>[:html], :locale=>[:en, :en]} in view paths "/www/example.com/releases/20100808170224/app/views", "/www/example.com/releases/20100808170224/vendor/plugins/paperclip/app/views", "/www/example.com/releases/20100808170224/vendor/plugins/haml/app/views"): /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/paths.rb:14:in `find' /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/lookup_context.rb:79:in `find' /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/base.rb:186:in `find_template' /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/render/rendering.rb:45:in `_determine_template' /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/render/rendering.rb:23:in `render' /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/haml-3.0.15/lib/haml/helpers/action_view_mods.rb:13:in `render_with_haml' etc... -- nginx error log for this virtualhost comes up empty: 2010/08/08 12:40:22 [info] 3118#0: *1 client 68.107.80.21 closed keepalive connection My guess is unicorn is intercepting the request to index.html before nginx gets to process it.
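
    If the aim is simply for the static page to win, one hedged tweak is to have try_files look for the directory index before falling back to the upstream, so nginx never asks unicorn for /:

      location / {
          try_files $uri $uri/index.html @unicorn;
      }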


  • innobackupex - after restoring - quit without updating PID file

    - by clarkk
    After restoring a backup the server can't start.. restoring # tar -izxf /var/www/bak/db/2013-11-10-1437_mysql.tar.gz -C /var/www/bak/db_import # innobackupex --use-memory=1G --apply-log /var/www/bak/db_import # service mysql stop # mv /var/lib/mysql /var/lib/mysql-old # mkdir /var/lib/mysql # innobackupex --copy-back /var/www/bak/db_import # chown -R mysql:mysql /var/lib/mysql # service mysql start error log 131110 21:24:20 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql 2013-11-10 21:24:21 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details). 2013-11-10 21:24:21 6194 [Warning] Using pre 5.5 semantics to load error messages from /opt/mysql/server-5.6/share/english/. 2013-11-10 21:24:21 6194 [Warning] If this is not intended, refer to the documentation for valid usage of --lc-messages-dir and --language parameters. 2013-11-10 21:24:21 6194 [Note] Plugin 'FEDERATED' is disabled. /usr/local/mysql/bin/mysqld: Table 'mysql.plugin' doesn't exist 2013-11-10 21:24:21 6194 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it. 2013-11-10 21:24:21 6194 [Note] InnoDB: The InnoDB memory heap is disabled 2013-11-10 21:24:21 6194 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins 2013-11-10 21:24:21 6194 [Note] InnoDB: Compressed tables use zlib 1.2.3 2013-11-10 21:24:21 6194 [Note] InnoDB: Using Linux native AIO 2013-11-10 21:24:21 6194 [Note] InnoDB: Not using CPU crc32 instructions 2013-11-10 21:24:21 6194 [Note] InnoDB: Initializing buffer pool, size = 128.0M 2013-11-10 21:24:21 6194 [Note] InnoDB: Completed initialization of buffer pool 2013-11-10 21:24:21 6194 [Note] InnoDB: Highest supported file format is Barracuda. 2013-11-10 21:24:22 6194 [Note] InnoDB: 128 rollback segment(s) are active. 2013-11-10 21:24:22 6194 [Note] InnoDB: Waiting for purge to start 2013-11-10 21:24:22 6194 [Note] InnoDB: 5.6.12 started; log sequence number 636992658 2013-11-10 21:24:22 6194 [Note] Server hostname (bind-address): '127.0.0.1'; port: 3306 2013-11-10 21:24:22 6194 [Note] - '127.0.0.1' resolves to '127.0.0.1'; 2013-11-10 21:24:22 6194 [Note] Server socket created on IP: '127.0.0.1'. 2013-11-10 21:24:22 6194 [ERROR] Fatal error: Can't open and lock privilege tables: Table 'mysql.user' doesn't exist 131110 21:24:22 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended mysql_upgrade /opt/mysql/server-5.6/bin/mysql_upgrade -u root -pxxxxx -P 3308 Warning: Using a password on the command line interface can be insecure. Looking for 'mysql' as: /opt/mysql/server-5.6/bin/mysql Looking for 'mysqlcheck' as: /opt/mysql/server-5.6/bin/mysqlcheck FATAL ERROR: Upgrade failed
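
    Both fatal errors complain about missing mysql.* system tables rather than about InnoDB, so it is worth confirming what --copy-back actually put into the datadir before reaching for mysql_upgrade. A sketch using the same paths as above:

      ls /var/lib/mysql                  # expect ibdata1, ib_logfile*, and a mysql/ directory
      ls /var/lib/mysql/mysql | head     # user.frm, user.MYD, plugin.frm ... must be present
      ls /var/www/bak/db_import          # if mysql/ only exists here, copy it across and re-chown
      chown -R mysql:mysql /var/lib/mysql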


  • "Hostile" network in the company - please comment on a security setup

    - by TomTom
    I have a little specific problem here that I want (need) to solve in a satisfactory way. My company has multiple (IPv4) networks that are controlled by our router sitting in the middle. Typical smaller shop setup. There is now one additional network that has an IP Range OUTSIDE of our control, connected to the internet with another router OUTSIDE of our control. Call it a project network that is part of another companies network and combined via VPN they set up. This means: They control the router that is used for this network and They can reconfigure things so that they can access the machines in this network. The network is physically split on our end through some VLAN capable switches as it covers three locations. At one end there is the router the other company controls. I Need / want to give the machines used in this network access to my company network. In fact, it may be good to make them part of my active directory domain. The people working on those machines are part of my company. BUT - I need to do so without compromising the security of my company network from outside influence. Any sort of router integration using the externally controlled router is out by this idea So, my idea is this: We accept the IPv4 address space and network topology in this network is not under our control. We seek alternatives to integrate those machines into our company network. The 2 concepts I came up with are: Use some sort of VPN - have the machines log into VPN. Thanks to them using modern windows, this could be transparent DirectAccess. This essentially treats the other IP space not different than any restaurant network a laptop of the company goes in. Alternatively - establish IPv6 routing to this ethernet segment. But - and this is a trick - block all IPv6 packets in the switch before they hit the third party controlled router, so that even IF they turn on IPv6 on that thing (not used now, but they could do it) they would get not a single packet. The switch can nicely do that by pulling all IPv6 traffic coming to that port into a separate VLAN (based on ethernet protocol type). Anyone sees a problem with using he switch to isolate the outer from IPv6? Any security hole? It is sad we have to treat this network as hostile - would be a lot easier - but the support personnel there is of "known dubious quality" and the legal side is clear - we can not fulfill our obligations when we integrate them into our company while they are under a jurisdiction we don't have a say in.


  • How to get rid of messages addressed to not existing subdomains?

    - by user71061
    Hi! I have a small problem with my sendmail server and need a little help :-) My situation is as follows: User mailboxes are placed on an MS Exchange server and all mail to and from the outside world is relayed through my sendmail box. Exchange server ----- sendmail server ------ Internet My servers accept messages for one main domain (say, my.domain.com) and for a few other domains (let's narrow it to just one, say my_other.domain.com). After configuring sendmail with the abbreviated sendmail.mc file shown below, essentially everything works OK, but there is a small problem. I want to reject messages addressed to non-existent recipients as soon as possible (to avoid sending non-delivery reports), so my sendmail server makes LDAP queries to the Exchange server, validating every recipient address. This works well for both domains but not for subdomains. Such subdomains do not exist, but someone (I mean those hated spammers :-) could try addresses like this: user@any_host.my.domain.com or user@any_host.my_other.domain.com, and for those addresses the results are as follows: Messages to user@sendmail_hostname.my.domain.com are rejected with the error "Unknown user" (due to the additional LDAPROUTE_DOMAIN line in my sendmail.mc file, and this is expected behaviour). Messages to user@any_other_hostname.my.domain.com are rejected with the error "Relaying denied". It's a little strange to me why the error is different this time, but still OK; after all, the message was rejected and I don't care very much what error code is returned to the sender (spammer). Messages to user@sendmail_hostname.my_other.domain.com and user@any_other_hostname.my_other.domain.com are rejected with the error "Unknown user", but only when there is no user@my_other.domain.com mailbox (on the Exchange server). If such a mailbox exists, then all three addresses (i.e. user@my_other.domain.com, user@sendmail_hostname.my_other.domain.com and user@any_other_hostname.my_other.domain.com) will be accepted. (Adding an additional line LDAPROUTE_DOMAIN(my_sendmail_host.my_other.domain.com) to my sendmail.mc file doesn't change anything.) My abbreviated sendmail.mc file is as follows (sendmail 8.14.3-5). 
Both domains are listed in /etc/mail/local-host-names file (FEATURE(use_cw_file) ): define(`_USE_ETC_MAIL_')dnl include(`/usr/share/sendmail/cf/m4/cf.m4')dnl OSTYPE(`debian')dnl DOMAIN(`debian-mta')dnl undefine(`confHOST_STATUS_DIRECTORY')dnl define(`confRUN_AS_USER',`smmta:smmsp')dnl FEATURE(`no_default_msa')dnl define(`confPRIVACY_FLAGS',`needmailhelo,needexpnhelo,needvrfyhelo,restrictqrun,restrictexpand,nobodyreturn,authwarnings')dnl FEATURE(`use_cw_file')dnl FEATURE(`access_db', , `skip')dnl FEATURE(`always_add_domain')dnl MASQUERADE_AS(`my.domain.com')dnl FEATURE(`allmasquerade')dnl FEATURE(`masquerade_envelope')dnl dnl define(`confLDAP_DEFAULT_SPEC',`-p 389 -h my_exchange_server.my.domain.com -b dc=my,dc=domain,dc=com')dnl dnl define(`ALIAS_FILE',`/etc/aliases,ldap:-k (&(|(objectclass=user)(objectclass=group))(proxyAddresses=smtp:%0)) -v mail')dnl FEATURE(`ldap_routing',, `ldap -1 -T<TMPF> -v mail -k proxyAddresses=SMTP:%0', `bounce')dnl LDAPROUTE_DOMAIN(`my.domain.com')dnl LDAPROUTE_DOMAIN(`my_other.domain.com ')dnl LDAPROUTE_DOMAIN(`my_sendmail_host.my.domain.com')dnl define(`confLDAP_DEFAULT_SPEC', `-p 389 -h "my_exchange_server.my.domain.com" -d "CN=sendmail,CN=Users,DC=my,DC=domain,DC=com" -M simple -P /etc/mail/ldap-secret -b "DC=my,DC=domain,DC=com"')dnl FEATURE(`nouucp',`reject')dnl undefine(`UUCP_RELAY')dnl undefine(`BITNET_RELAY')dnl define(`confTRY_NULL_MX_LIST',true)dnl define(`confDONT_PROBE_INTERFACES',true)dnl define(`MAIL_HUB',` my_exchange_server.my.domain.com.')dnl FEATURE(`stickyhost')dnl MAILER_DEFINITIONS MAILER(smtp)dnl Could someone more experienced with sendmail advice my how to reject messages to those unwanted subdomains? P.S. Mailboxes @my_other.domain.com are used only for receiving messages and never for sending.


  • Why does my mail get marked as spam?

    - by schoen
    I Have the server "afspraakmanager.be". It matches everything not to be a spam server.(it isn't by the way): it has reverse dns, spf,dkim,... . But hotmail marks it as spam. I think the problem is the SPF/DKIM records. when i sent an email to my gmail it says: "Received-SPF: neutral (google.com: 2a02:348:8e:6048::1 is neither permitted nor denied by best guess record for domain of [email protected]) client-ip=2a02:348:8e:6048::1; Authentication-Results: mx.google.com; spf=neutral (google.com: 2a02:348:8e:6048::1 is neither permitted nor denied by best guess record for domain of [email protected]) [email protected]; dkim=neutral (bad format) [email protected]" So i guess my SPF and DKIM records aren't set up right. But I also don't have a clue what is wrong with them. this is the zone file: ; zone file for afspraakmanager.be $ORIGIN afspraakmanager.be. $TTL 3600 @ 86400 IN SOA ns1.eurodns.com. hostmaster.eurodns.com. ( 2013102003 ; serial 86400 ; refresh 7200 ; retry 604800 ; expire 86400 ; minimum ) @ 86400 IN NS ns1.eurodns.com. @ 86400 IN NS ns2.eurodns.com. @ 86400 IN NS ns3.eurodns.com. @ 86400 IN NS ns4.eurodns.com. ; Mail Exchanger definition @ 600 IN MX 10 smtp ; IPv4 Address definition @ IN A 37.230.96.72 afspraakmanager.be 600 IN A 37.230.96.72 localhost 86400 IN A 127.0.0.1 smtp 600 IN A 37.230.96.72 www 600 IN A 37.230.96.72 ; Text definition default._domainkey 600 IN TXT "v=DKIM1\\; k=rsa\\; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC6pvlZKnbSVXg1Bf3MF2l8xRrKPmqIw2i9Rn1yZ3HEny9qH1vyGXUjdv2O0aQbd5YShSGjtg5H/GedRMLpB0Qb+hBj1yGofOQTdcVtZZfj8qBY5Z7vEkhvtdaogQ0vLjgcwhg0BBuTewEkLxrl9IIzkPMZ1SCtM2Y0RtiUhg2cjQIDAQAB" ; Sender Policy Framework definition afspraakmanager.be 600 IN SPF "v=spf1 a mx ptr +all" The DKIM signature in the header: DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=afspraakmanager.be; s=mail; t=1382361029; bh=4pDpXBY8rCbX8+MfrklZzpQxaUsa3vSPUYjcDR3KAnU=; h=Date:From:To:Subject:From; b=SoBBaAlrueD8qID8txl2SBSqnZgN2lkPCdSPI/m7/YLezIcBedkgIX1NswYiZFl6Z AmF8dES73WUaaJjItVHSrdCJK2mJ/Az+vrgNsyk+GqZZ1YPiIlH3gqRrsguhoofXUX /gqLlqsLxqxkKKd9EbSzKRHuDGlJCLm5SlL8wnL0=

    Read the article

  • Exchange 2007 and migrating only some users under a shared domain name

    - by DomoDomo
    I'm in the process of moving two law firms to hosted Exchange 2007, a service that the consulting company I work for offers. Let's call these two firms Crane Law and Poole Law. These two firms were ONE firm just six months ago, but split. So they have three email domains: Old Firm: craneandpoole.com New Firm 1: cranelaw.com New Firm 2: poolelaw.com Both Firm 1 & Firm 2 use craneandpoole.com email addresses, as for the other two domains, only people who work at the respective firm use that firm's domain name, natch. Currently these two firms are still using the same pre-split internal Exchange 2007 server, where MX records for all three domains point. Here's the problem. I'm not moving both companies at the same time. I'm moving Crane Law two weeks before Poole Law. During this two weeks, both companies need to be able to: Continue to receive emails addressed to craneandpoole.com Send emails between firms, using cranelaw.com and poolelaw.com accounts I also have a third problem: I'd like to setup all three domains in my hosting infrastructure way ahead of time, to make my own life easier What would solve all my problems would be, if there is some way I can tell Exchange 2007, even though this domain exists locally forward on the message to the outside world using public MX record as a basis for where to send it (or if I could somehow create a route for it statically that would work too). If this doesn't work, to address points #1 when I migrate Crane Law, I will delete all references locally to cranelaw.com on their current Exchange server, and setup individual forwards for each of their craneandpool.com mailboxes to forward to our hosted exchange server. This will also take care of point #2, since the cranelaw.com won't be there locally, when poolelaw.com tries to send to cranelaw.com, public MX records will be used for mail routing decisions and go to my hosted exchange. The bummer of that though is, I won't be able to setup poolelaw.com ahead of time in hosted Exchange, will have to wait to do it day of :( Sorry for the long and confusing post. Just wondering if there is a better or simpler way to do what I want? Three tier forests and that kind of thing are out, this is just a two week window where they won't be in the same place.

    Read the article

  • PC freezing when used to print labels

    - by Will
    Hi I have a windows XP machine that is used to print labels from a Zebra label printer. It is connected a member of the domain. I am getting reports that when people try to use the computer it will sometimes be frozen to the point where they have to physically shut the machine down and boot to get it responding. (this happens about once a day). I took a look in Event Viewer and nabbed some of these errors out of it: Event Type: Error Event Source: Userenv Event Category: None Event ID: 1054 Date: 11/12/2010 Time: 9:13:04 AM User: NT AUTHORITY\SYSTEM Computer: FS-LABELMACHINE Description: Windows cannot obtain the domain controller name for your computer network. (A socket operation was attempted to an unreachable host. ). Group Policy processing aborted. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp. Event Type: Error Event Source: AutoEnrollment Event Category: None Event ID: 15 Date: 11/11/2010 Time: 11:08:25 PM User: N/A Computer: FS-LABELMACHINE Description: Automatic certificate enrollment for local system failed to contact the active directory (0x80072751). A socket operation was attempted to an unreachable host. Enrollment will not be performed. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp. Hanging application MSACCESS.EXE, version 11.0.8166.0, hang module hungapp, version 0.0.0.0, hang address 0x00000000. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.... SearchIndexer (2420) Unable to write a shadowed header for file C:\Documents and Settings\All Users\Application Data\Microsoft\Search\Data\Applications\Windows\MSS.chk. Error -1032. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp. Failed auto update retrieval of third-party root list sequence number from: <http://www.download.windowsupdate.com/msdownload/update/v3/static/trustedr/en/authrootseq.txt> with error: A connection with the server could not be established For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp. I'm not really sure what to make out of this... Thanks for the help in advanced, Will

    Read the article

  • Sendmail SMART_HOST not working

    - by daniel
    Hello, I've defined SMART_HOST to be a specific server, lets call it foo.bar.com. However, when I send a test mail using 'sendmail -t', sendmail tries to use mx.bar.com, which subsequently rejects my mail. I've verified that foo.bar.com works and that mx.bar.com does not work (yay telnet). I've recompiled sendmail.mc vi make, make -C and m4. I've verified the DS entry in sendmail.cf. I've restarted sendmail correctly. I'm not sure how to proceed at this point. Any ideas? Here is my SMART_HOST line: define(SMART_HOST',foo.bar.com')dnl ...and here is the result of a test mail. It never tries to use foo.bar.com, instead it uses mx.bar.com. $ echo subject: test; echo | sendmail -Am -v -flocaluser -- [email protected] subject: test [email protected]... Connecting to mx.bar.com via relay... 220 mx.bar.com ESMTP >>> EHLO myhost.bar.com 250-mx.bar.com 250-8BITMIME 250 SIZE 52428800 >>> MAIL From:<[email protected]> SIZE=1 250 sender <[email protected]> ok >>> RCPT To:<[email protected]> 550 #5.1.0 Address rejected. >>> RSET 250 reset localuser... Connecting to local... localuser... Sent Closing connection to mx.bar.com. >>> QUIT 221 mx.bar.com And last, here is a test mail sent using foo.bar.com: $ hostname myhost.bar.com $ telnet foo.bar.com 25 Trying ***.***.***.***... Connected to foo.bar.com (***.***.***.***). Escape character is '^]'. 220 foo.bar.com ESMTP Sendmail 8.14.1/8.14.1/ITS-7.0/ldap2-1+tls; Tue, 21 Dec 2010 13:27:44 -0700 (MST) helo foo 250 foo.bar.com Hello myhost.bar.com [***.***.***.***], pleased to meet you mail from: [email protected] 250 2.1.0 [email protected]... Sender ok rcpt to: [email protected] 250 2.1.5 [email protected]... Recipient ok data 354 Enter mail, end with "." on a line by itself testing . 250 2.0.0 oBLKRikZ003758 Message accepted for delivery quit 221 2.0.0 foo.bar.com closing connection Connection closed by foreign host. Any ideas? Thanks

    Read the article

  • How to force Debian to boot new Kernel?

    - by ThE_-_BliZZarD
    I'm running Debian 6 (Debian GNU/Linux, with Linux 2.6.32-5-amd64) under Grub2 (1.98+20100804-14+squeeze1) on a remote system (no possibility to view the pre-boot messages). I compiled and installed a new kernel, but I cannot get it to boot. What I have done: installed the packages via

        dpkg -i linux-headers-3.5.3.20120914-amd64_3.5.3.20120914-amd64-10.00.Custom_amd64.deb \
                linux-image-3.5.3.20120914-amd64_3.5.3.20120914-amd64-10.00.Custom_amd64.deb

    This updated the Grub configuration. My /boot/grub/grub.cfg now contains:

        menuentry 'Debian GNU/Linux, with Linux 3.5.3.20120914-amd64' --class debian --class gnu-linux --class gnu --class os {
            insmod raid
            insmod mdraid
            insmod part_msdos
            insmod part_msdos
            insmod ext2
            set root='(md0)'
            search --no-floppy --fs-uuid --set 5a3882a9-c7df-4f6a-9feb-f03e3e37be01
            echo 'Loading Linux 3.5.3.20120914-amd64 ...'
            linux /vmlinuz-3.5.3.20120914-amd64 root=UUID=003242b5-121b-49f3-b32f-1b40aea56eed ro acpi=ht quiet panic=10
            echo 'Loading initial ramdisk ...'
            initrd /initrd.img-3.5.3.20120914-amd64
        }
        menuentry 'Debian GNU/Linux, with Linux 2.6.32-5-amd64' --class debian --class gnu-linux --class gnu --class os {
            insmod raid
            insmod mdraid
            insmod part_msdos
            insmod part_msdos
            insmod ext2
            set root='(md0)'
            search --no-floppy --fs-uuid --set 5a3882a9-c7df-4f6a-9feb-f03e3e37be01
            echo 'Loading Linux 2.6.32-5-amd64 ...'
            linux /vmlinuz-2.6.32-5-amd64 root=UUID=003242b5-121b-49f3-b32f-1b40aea56eed ro acpi=ht quiet panic=10
            echo 'Loading initial ramdisk ...'
            initrd /initrd.img-2.6.32-5-amd64
        }

    I used grub-set-default "Debian GNU/Linux, with Linux 2.6.32-5-amd64" to set the old kernel as default and then grub-reboot "Debian GNU/Linux, with Linux 3.5.3.20120914-amd64" to boot into the new kernel once. After update-grub I rebooted the system, but every time it comes back up with the old kernel (2.6). I tried setting the new one as default (grub-set-default 0, update-grub, reboot) but still the old one. The syslogs contain NO hint whatsoever about trying to boot the new kernel - only the old one.

    Would there be any hints regarding problems with a kernel? Is there a way to enable debug logging in Grub? What am I doing wrong? How can I force the system to boot the new kernel?

    Edit: hardware of the remote machine. CPU (cat /proc/cpuinfo):

        processor       : 0
        vendor_id       : AuthenticAMD
        cpu family      : 16
        model           : 5
        model name      : AMD Athlon(tm) II X4 605e Processor
        stepping        : 3
        cpu MHz         : 2294.898
        cache size      : 512 KB
        physical id     : 0
        siblings        : 4
        core id         : 0
        cpu cores       : 4
        apicid          : 0
        initial apicid  : 0
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 5
        wp              : yes
        flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm 3dnowext 3dnow constant_tsc rep_good nonstop_tsc extd_apicid pni monitor cx16 popcnt lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt
        bogomips        : 4589.77
        TLB size        : 1024 4K pages
        clflush size    : 64
        cache_alignment : 64
        address sizes   : 48 bits physical, 48 bits virtual
        power management: ts ttp tm stc 100mhzsteps hwpstate

    (copied only the first; three more follow) The server is a Fujitsu PRIMERGY MX130 S1.
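
    One detail that commonly explains "grub-set-default / grub-reboot change nothing" is that both tools only take effect when GRUB_DEFAULT is set to saved in /etc/default/grub; with a numeric GRUB_DEFAULT, grub.cfg keeps its static default and the saved entry is ignored. The following is a hedged sketch of the sequence that would normally be expected to work; the menu entry title is copied from the grub.cfg above and the file paths are the Debian defaults, so verify both on the actual box.

        # /etc/default/grub - make Grub honour the saved entry
        GRUB_DEFAULT=saved

        # Regenerate grub.cfg, select the new kernel, and check what Grub recorded
        update-grub
        grub-set-default "Debian GNU/Linux, with Linux 3.5.3.20120914-amd64"
        grub-editenv list        # should show saved_entry pointing at the 3.5.3 kernel
        reboot

    One caveat for this particular machine: /boot sits on an md RAID device, and Grub itself cannot write grubenv back at boot time on RAID/LVM, so the one-shot grub-reboot mechanism may not be trustworthy here; if grub-editenv list shows nothing sensible, setting GRUB_DEFAULT=0 (the newest kernel is normally the first entry) followed by update-grub is the blunt fallback.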

    Read the article

  • How can I troubleshoot Virtualbox port forwarding from Windows guest to OSX host not working?

    - by joe larson
    There are a plethora of questions about VirtualBox port forwarding problems, but none with my specific details. I have a Windows install living in VirtualBox, hosted within OSX. I've got several webservers running on localhost on different ports within the Windows install. I cannot for the life of me get port forwarding to work so I can access those webservers from OSX. My settings look like this (yes, I have a NAT adapter), and in my vbox configuration file the relevant portion looks like this:

        <NAT>
          <DNS pass-domain="true" use-proxy="false" use-host-resolver="false"/>
          <Alias logging="false" proxy-only="false" use-same-ports="false"/>
          <Forwarding name="RLPWeb" proto="1" hostport="7084" guestip="127.0.0.1" guestport="7084"/>
          <Forwarding name="UtilWeb" proto="1" hostport="4040" guestip="127.0.0.1" guestport="4040"/>
          <Forwarding name="WCARLP" proto="1" hostport="8084" guestip="127.0.0.1" guestport="8084"/>
          <Forwarding name="WCAUtil" proto="1" hostport="4848" guestip="127.0.0.1" guestport="4848"/>
        </NAT>

    I've turned off the Windows firewall to ensure it is not interfering, and I am not running a firewall on OSX. Anyway, when I attempt to go to, for example, http://127.0.0.1:4040/ in any of my OSX browsers, it will eventually time out. The log file for this VM shows that it is correctly reading the settings, implying it's doing the right thing here:

        00:00:08.286 NAT: set redirect TCP host port 4848 => guest port 4848 @ 127.0.0.1
        00:00:08.286 NAT: set redirect TCP host port 8084 => guest port 8084 @ 127.0.0.1
        00:00:08.286 NAT: set redirect TCP host port 4040 => guest port 4040 @ 127.0.0.1
        00:00:08.286 NAT: set redirect TCP host port 7084 => guest port 7084 @ 127.0.0.1
        00:00:08.290 Changing the VM state from 'LOADING' to 'SUSPENDED'.
        00:00:08.290 Changing the VM state from 'SUSPENDED' to 'RESUMING'.
        00:00:08.290 Changing the VM state from 'RESUMING' to 'RUNNING'.
        00:00:08.337 Display::handleDisplayResize(): uScreenId = 0, pvVRAM=000000012017d000 w=1834 h=929 bpp=32 cbLine=0x1CA8, flags=0x1
        00:00:09.139 AIOMgr: Host limits number of active IO requests to 16. Expect a performance impact.
        00:00:13.454 NAT: DHCP offered IP address 10.0.2.15

    I've tried setting the Host IP to 127.0.0.1, and I've tried setting the Guest IP blank and also 10.0.2.15. None of these seem to help. What else can I look at to troubleshoot this issue?

    Details of setup: OSX 10.6.8, Windows 7 Professional 64-bit, VirtualBox 4.1.2
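
    Two things might be worth ruling out before digging deeper. First, with NAT port forwarding the service inside the guest has to be listening on an address the rule can actually reach; Windows servers bound strictly to 127.0.0.1 are a frequent culprit, and the usual workaround is to bind them to 0.0.0.0 and leave the guest IP in the rule empty (or use the guest's 10.0.2.15 address). Second, the rules can be inspected and rewritten from the host with VBoxManage instead of editing the XML by hand. A sketch, where "WinDev" stands in for the real VM name:

        # Inside the Windows guest, check what address each web server is bound to,
        # e.g.   netstat -ano | findstr :4040

        # On the OSX host: list the current NAT rules for NIC 1
        VBoxManage showvminfo "WinDev" | grep -i "Rule"

        # Recreate one rule with an empty guest IP (VM powered off; use "controlvm ... natpf1" while it runs)
        VBoxManage modifyvm "WinDev" --natpf1 delete "UtilWeb"
        VBoxManage modifyvm "WinDev" --natpf1 "UtilWeb,tcp,127.0.0.1,4040,,4040"

        # Then retest from the host
        curl -v http://127.0.0.1:4040/

    If curl then connects while the original rules with guestip="127.0.0.1" did not, the guest-side binding rather than VirtualBox was the problem.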

    Read the article

  • Internet Working, Browsing Not.

    - by jeffreypriebe
    I have a very odd problem that I can't resolve. I am connected to the internet, but my browsing doesn't work. I don't mean a web browser - I mean browsing. Firefox, Chrome and curl all fail to successfully connect to an HTTP address. However, existing connections, e.g. to mail in Outlook (Exchange Server and also an IMAP server), continue to work. Also, the internet is on; I can confirm this both from my machine (other ports / connections) as well as from any other computer connected to the same network. Additionally, it appears to be HTTP-specific, not simply a port issue, as HTTP over port 8443 (Tortoise SVN if you must know - running over HTTP, not over SVN) also fails.

    I am using Windows Vista SP2 (build 6002). It seems to "creep up" in that after running the computer for a few hours it will fail. (No found way to systematically reproduce the problem.) Additionally, it seems to be more prone on days where the internet connection is flaky already (not sure why the internet is flaky, it just is; lots of failed browsing requests and I have to retry/reload often).

    What I have tried (when the problem arises) - none have yielded any resolution:

    - Resetting the network connection (disconnect, reconnect)
    - Disabling/re-enabling the network adapter
    - Double-checked the IP settings
    - Double-checked the HOSTS file. Note: DNS continues to work (both new and cached responses to DNS queries). (Thanks for the suggestion, Daniel and antenore.)
    - Checked the routing tables (IPv4 only, as IPv6 is beyond my understanding)
    - Reset all involved hardware (routers and modems)
    - Closed and reopened the browsers
    - Looked for malware interference:
      - Ran HijackThis
      - Looked for suspicious processes using SysInternals procexp
      - Looked for explorer hijacks, LSA provider interference and winsock provider interference using SysInternals Autoruns
      - Ran a complete anti-virus scan
    - Reviewed the output of netstat -onab to see if there were stuck ports open or unusual processes running somewhere

    The only thing that works is to do a full reboot. That works 100% of the time to restore browsing. What else can I try to nail down the problem?
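
    Given that DNS and already-established connections keep working while every new HTTP connection fails, and only a full reboot clears it, one angle not on the list above is the TCP/Winsock stack itself getting into a bad state, either through ephemeral port exhaustion or a damaged Winsock catalog; both produce exactly this "connected but can't browse" picture. A speculative sketch of what could be tried from an elevated Command Prompt the next time it happens (the resets want a reboot afterwards, so they are a repair rather than a diagnostic):

        :: Rough count of open TCP sockets - thousands of entries suggests port exhaustion
        netstat -ano | find /c "TCP"

        :: Separate DNS, TCP and HTTP failures by speaking HTTP by hand (needs the Telnet client feature enabled)
        telnet www.example.com 80

        :: If the stack looks wedged, reset Winsock and TCP/IP, then reboot
        netsh winsock reset
        netsh int ip reset resetlog.txt
        ipconfig /flushdns

    If the telnet test connects but browsers still fail, that would point back at an HTTP-level intermediary (proxy settings or filtering software) rather than the stack.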

    Read the article

< Previous Page | 548 549 550 551 552 553 554 555 556 557 558 559  | Next Page >