Search Results

Search found 357 results on 15 pages for 'recieve'.

Page 11/15 | < Previous Page | 7 8 9 10 11 12 13 14 15  | Next Page >

  • Create proxy to Java app so it can work on the iPhone? [closed]

    - by Kovu
    Okay guys, these are the facts: I have a Java chat. I can NOT get any development-relevant information; this is all static, and it's not my chat. Fact 2: the iPad does not have Java and will not have it in the future. So now, the very hard task: bring the two together. My idea: develop a proxy server for this. 1) The iPad sends its data to a server. 2) The server receives the data and drives the JRE and the complete chat client. 3) The server reads the "chat" itself and sends it back to the iPad. Is that a feasible idea or a bad one? Did I overlook something? Does anyone have an idea where to start?
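
    For what it's worth, a minimal sketch of the proxy skeleton, assuming Java 9+ on the server; the actual chat-client integration is a placeholder, since nothing about the chat's protocol is known here:

        import;
        import;
        import java.nio.charset.StandardCharsets;

        public class ChatProxy {
            public static void main(String[] args) throws Exception {
                HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
                // 1) the iPad POSTs a message to /send
                server.createContext("/send", exchange -> {
                    String msg = new String(exchange.getRequestBody().readAllBytes(),
                                            StandardCharsets.UTF_8);
                    // 2) hand msg to the running chat client here (placeholder)
                    // 3) return whatever the chat client produced (echo for now)
                    byte[] reply = ("echo: " + msg).getBytes(StandardCharsets.UTF_8);
                    exchange.sendResponseHeaders(200, reply.length);
                    exchange.getResponseBody().write(reply);
                    exchange.close();
                });
                server.start();
            }
        }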

    Read the article

  • Virtualbox on Ubuntu 12.04 and 3.5 kernel

    - by kas
    I have installed the 3.5 kernel under Ubuntu 12.04. When I install VirtualBox I receive the following error:

        Setting up virtualbox (4.1.12-dfsg-2ubuntu0.2) ...
         * Stopping VirtualBox kernel modules [ OK ]
         * Starting VirtualBox kernel modules
         * No suitable module for running kernel found [fail]
        invoke-rc.d: initscript virtualbox, action "restart" failed.
        Processing triggers for python-central ...
        Setting up virtualbox-dkms (4.1.12-dfsg-2ubuntu0.2) ...
        Loading new virtualbox-4.1.12 DKMS files...
        First Installation: checking all kernels...
        Building only for 3.5.0-18-generic
        Building initial module for 3.5.0-18-generic
        Error! Bad return status for module build on kernel: 3.5.0-18-generic (x86_64)
        Consult /var/lib/dkms/virtualbox/4.1.12/build/make.log for more information.
         * Stopping VirtualBox kernel modules [ OK ]
         * Starting VirtualBox kernel modules
         * No suitable module for running kernel found [fail]
        invoke-rc.d: initscript virtualbox, action "restart" failed.
        Setting up virtualbox-qt (4.1.12-dfsg-2ubuntu0.2) ...

    Does anyone know how I might be able to resolve this?

    Edit -- Here is the make.log:

        DKMS make.log for virtualbox-4.1.12 for kernel 3.5.0-18-generic (x86_64)
        Mon Nov 19 12:12:23 EST 2012
        make: Entering directory `/usr/src/linux-headers-3.5.0-18-generic'
          LD      /var/lib/dkms/virtualbox/4.1.12/build/built-in.o
          LD      /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/built-in.o
          CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/linux/SUPDrv-linux.o
          CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/SUPDrv.o
          CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/SUPDrvSem.o
          CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/alloc-r0drv.o
          CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/initterm-r0drv.o
          CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/memobj-r0drv.o
          CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/mpnotification-r0drv.o
          CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/powernotification-r0drv.o
          CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/linux/assert-r0drv-linux.o
          CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/linux/alloc-r0drv-linux.o
          CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/linux/initterm-r0drv-linux.o
          CC [M]  /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/linux/memobj-r0drv-linux.o
        /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/linux/memobj-r0drv-linux.c: In function ‘rtR0MemObjLinuxDoMmap’:
        /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/linux/memobj-r0drv-linux.c:1150:9: error: implicit declaration of function ‘do_mmap’ [-Werror=implicit-function-declaration]
        cc1: some warnings being treated as errors
        make[2]: *** [/var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/linux/memobj-r0drv-linux.o] Error 1
        make[1]: *** [/var/lib/dkms/virtualbox/4.1.12/build/vboxdrv] Error 2
        make: *** [_module_/var/lib/dkms/virtualbox/4.1.12/build] Error 2
        make: Leaving directory `/usr/src/linux-headers-3.5.0-18-generic'
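
    The failing line points at a known incompatibility: kernel 3.5 stopped exporting the do_mmap() that VirtualBox 4.1.12's vboxdrv still calls, so rebuilding that version will keep failing. A hedged sketch of one way out, assuming Oracle's apt repository (the key URL, repo line and package name follow Oracle's published instructions and may need adjusting):

        # install a VirtualBox release that postdates kernel 3.5 instead of
        # patching 4.1.12's vboxdrv by hand
        wget -q -O - | sudo apt-key add -
        echo "deb precise contrib" \
            | sudo tee /etc/apt/sources.list.d/virtualbox.list
        sudo apt-get update
        sudo apt-get install virtualbox-4.2   # DKMS then rebuilds against 3.5.0-18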

    Read the article

  • Content Based Routing with BRE and ESB

    - by Christopher House
    I've been working with BizTalk 2009 and the ESB Toolkit for the past couple of days.  This is actually my first exposure to ESB, and so far I'm pleased with how easy it is to work with. Initially we had planned to use UDDI for storing endpoint information.  However, after discussing this with my client, we opted to look at BRE instead of UDDI, since we're already storing transforms in BRE.  Fortunately, making the change from UDDI to BRE was quite simple.  This solution of course has the added advantage of not needing to go through the convoluted process of registering our endpoints in UDDI. The first thing to remember if you want to do content-based routing with BRE and ESB is that the pipelines included in the ESB Toolkit don't include disassembler components.  This means that you'll need to first create a custom receive pipeline with the necessary disassembler for your message type, as well as the ESB components, itinerary selector and dispatcher. Next you need to create a BRE policy.  The ESB.ContextInfo vocabulary contains vocabulary links for the various items in the ESB context dictionary.  In this vocabulary, you'll find an item called Context Message Type; use this as the left-hand side of your condition.  Set the right-hand side to your message type, something like http://your.message.namespace/#yourrootelement.  Now find the ESB.EndPointInfo vocabulary.  This contains links to all the properties related to endpoint information.  Use the various set operators in your rule's action to configure your endpoint. In the example above, I'm using the WCF-SQL adapter. Now that the hard work is out of the way, you just need to configure the resolver in your itinerary. Nothing complicated here.  Just select BRE as your resolver implementation and select your policy from the drop-down list.  Note that when you select a policy, the Version field will be automatically filled in with the version of your policy.  If you leave this as-is, the resolver will always use that policy version.  Alternatively, you can clear the version number and the resolver will use the highest deployed version.

    Read the article

  • SSH from external network refused

    - by wulfsdad
    I've installed openssh-server on my home computer (running Lubuntu 12.04.1) in order to connect to it from school. This is how I've set up the sshd_config file:

        # Package generated configuration file
        # See the sshd_config(5) manpage for details

        # What ports, IPs and protocols we listen for
        #Port 22
        Port 2222
        # Use these options to restrict which interfaces/protocols sshd will bind to
        #ListenAddress ::
        #ListenAddress 0.0.0.0
        Protocol 2
        # HostKeys for protocol version 2
        HostKey /etc/ssh/ssh_host_rsa_key
        HostKey /etc/ssh/ssh_host_dsa_key
        HostKey /etc/ssh/ssh_host_ecdsa_key
        #Privilege Separation is turned on for security
        UsePrivilegeSeparation yes
        # Lifetime and size of ephemeral version 1 server key
        KeyRegenerationInterval 3600
        ServerKeyBits 768
        # Logging
        SyslogFacility AUTH
        #LogLevel INFO
        LogLevel VERBOSE
        # Authentication:
        LoginGraceTime 120
        PermitRootLogin no
        StrictModes yes
        RSAAuthentication yes
        PubkeyAuthentication yes
        #AuthorizedKeysFile %h/.ssh/authorized_keys
        # Don't read the user's ~/.rhosts and ~/.shosts files
        IgnoreRhosts yes
        # For this to work you will also need host keys in /etc/ssh_known_hosts
        RhostsRSAAuthentication no
        # similar for protocol version 2
        HostbasedAuthentication no
        # Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication
        #IgnoreUserKnownHosts yes
        # To enable empty passwords, change to yes (NOT RECOMMENDED)
        PermitEmptyPasswords no
        # Change to yes to enable challenge-response passwords (beware issues with
        # some PAM modules and threads)
        ChallengeResponseAuthentication no
        # Change to no to disable tunnelled clear text passwords
        #PasswordAuthentication yes
        # Kerberos options
        #KerberosAuthentication no
        #KerberosGetAFSToken no
        #KerberosOrLocalPasswd yes
        #KerberosTicketCleanup yes
        # GSSAPI options
        #GSSAPIAuthentication no
        #GSSAPICleanupCredentials yes
        X11Forwarding no
        X11DisplayOffset 10
        PrintMotd no
        PrintLastLog yes
        TCPKeepAlive yes
        #UseLogin no
        #MaxStartups 10:30:60
        #Banner /etc/issue.net
        Banner /etc/sshbanner.net
        # Allow client to pass locale environment variables
        AcceptEnv LANG LC_*
        Subsystem sftp /usr/lib/openssh/sftp-server
        # Set this to 'yes' to enable PAM authentication, account processing,
        # and session processing. If this is enabled, PAM authentication will
        # be allowed through the ChallengeResponseAuthentication and
        # PasswordAuthentication. Depending on your PAM configuration,
        # PAM authentication via ChallengeResponseAuthentication may bypass
        # the setting of "PermitRootLogin without-password".
        # If you just want the PAM account and session checks to run without
        # PAM authentication, then enable this but set PasswordAuthentication
        # and ChallengeResponseAuthentication to 'no'.
        UsePAM yes
        #specify which accounts can use SSH
        AllowUsers onlyme

    I've also configured my router's port forwarding table to include:

        LAN Ports: 2222-2222
        Protocol: TCP
        LAN IP Address: the "IP Address" displayed when viewing "connection information" from the right-click menu of the system tray
        Remote Ports [optional]: n/a
        Remote IP Address [optional]: n/a

    I've tried various other configurations as well, using the primary and secondary DNS, and also specifying remote ports 2222-2222. I've also tried TCP/UDP (actually two rules, because my router requires separate rules for each protocol). With any of these port forwarding configurations, I am able to log in with ssh -p 2222 -v localhost. But when I try to log in from school using ssh -p 2222 onlyme@IP_ADDRESS, I get a "No route to host" message. Same thing when I use the "Broadcast Address" or "Default Route/Primary DNS". When I use the "subnet mask", ssh just hangs. However, when I use the "secondary DNS" I receive a "Connection refused" message. :^( Someone please help me figure out how to make this work.
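
    A note on the pattern of errors: every address tried above (broadcast address, subnet mask, DNS servers) is a LAN-side value, which is why the results range from "no route" to a hang. The address to dial from school is the router's public WAN address; the LAN address only belongs in the forwarding rule. A hedged first check, assuming curl and nc are available:

        # on the home machine: the LAN address that goes in the forwarding rule
        ip addr show | grep 'inet '
        # the router's public WAN address (what to ssh to from outside)
        curl ifconfig.me
        # from school: is the forwarded port reachable at all?
        nc -zv PUBLIC_IP 2222
        ssh -p 2222 onlyme@PUBLIC_IP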

    Read the article

  • USB To Serial under OpenSuse 11.3

    - by Lars
    I have a LogiLink USB-to-Serial adapter with the PL2303 chip inside. When I insert the device:

        [26064.927083] usb 7-1: new full speed USB device using uhci_hcd and address 9
        [26065.076090] usb 7-1: New USB device found, idVendor=067b, idProduct=2303
        [26065.076099] usb 7-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
        [26065.076105] usb 7-1: Product: USB-Serial Controller
        [26065.076110] usb 7-1: Manufacturer: Prolific Technology Inc.
        [26065.079181] pl2303 7-1:1.0: pl2303 converter detected
        [26065.091296] usb 7-1: pl2303 converter now attached to ttyUSB0

    So the device is recognized and the converter is attached to ttyUSB0. When I do screen /dev/ttyUSB0 9600 I get the error:

        bash: /dev/ttyUSB0: Permission denied

    So I went looking at the file permissions. ls -l in the /dev folder reports:

        crw-rw---- 1 root dialout 188, 0 2011-04-26 15:47 ttyUSB0

    I added my user lars to the dialout group, and when I run the groups command as lars it shows that I'm in the group. Yet I still receive the permission denied error, as lars and as root. I'm trying to connect a console cable to configure some Cisco switches. My OS is OpenSuse 11.3 x86_64 with kernel version 2.6.34.7-0.7-desktop.
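
    One detail worth checking: group membership is read at login time, so a shell that was already open when lars was added to dialout still runs with the old group list. A hedged check, assuming the dialout group is the only blocker:

        # 'groups lars' reads /etc/group, but 'id' shows what the CURRENT shell has
        id
        # pick up the new membership without logging out (current shell only)
        newgrp dialout
        # or start a fresh login shell
        su - lars
        screen /dev/ttyUSB0 9600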

    Read the article

  • snmpd dead but subsys locked

    - by Hina NMS
    Hi folks. I have an NMS and a client machine, and I want the client to send traps to the NMS. I have been configuring the snmpd.conf file, testing whether I receive an alert when I disable a process. For the changes in the conf file to take effect, I restarted the snmpd daemon each time, and the testing was going fine. All of a sudden, when I restarted snmpd, I received the error message "snmpd dead but subsys locked". I googled for an answer as to what it actually meant and found out that when a service is started, a lock file is created in /var/lock/subsys. Sometimes, if the service is not stopped properly (or whatever), the lock file remains behind. Though I started/stopped the snmpd service properly, it didn't go away, so I removed the file manually (via the rm command). When I checked the status, the error "snmpd dead but subsys locked" was gone, and on my NMS I received the snmpd coldStart alert. I started the snmpd service and everything went fine. BUT after 5 minutes I receive the same error message again, and this keeps on happening. What do I need to do now?
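
    Since the message comes back a few minutes after each clean start, the lock file is a symptom, not the cause: snmpd itself is dying shortly after startup, leaving the lock file orphaned. A hedged way to catch why, assuming the stock RHEL/CentOS layout:

        # compare the lock file against whether the process is actually alive
        service snmpd status; pgrep snmpd
        # watch the logs around the moment it dies (snmpd logs to syslog)
        tail -f /var/log/messages | grep -i snmpd
        # or run it in the foreground, logging to the console, to see the exit reason
        snmpd -f -Le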

    Read the article

  • Looking for a help desk ticketing system..

    - by Dan
    Hi guys, I'm looking for a good help desk ticketing solution. It must perform the following actions for it to be useful:

    1. It needs to have a single point of contact via email, e.g. [email protected].
    2. If we receive a telephone call (or an email outside of the system), we need to be able to create a ticket as if it had been added via the single point of contact. This needs to be done with ease, in order to save time.
    3. Certain people within our organisation deal with certain customers, so if the email or custom-input support call mentioned in point 2 is recognised as having a relationship with that certain person in our organisation, it needs to be sent to them / put in their queue for them to work on.
    4. If a person is out of office or sick, any tickets sent to them must be forwarded to somebody else or put into a separate pool of tickets that anybody can access. Perhaps have an agent that sorts through tickets in the pool and assigns them to anybody who is available, preferably the person with the fewest tickets in their queue/list.
    5. Once a customer emails and the system logs it, they immediately get a response with a ticket number and maybe details of who is dealing with the problem.
    6. Any correspondence in relation to a particular ticket is automatically grouped into a single thread, and not made into a load of separate tickets, i.e. the system scans incoming email subjects for ticket numbers and associates them with existing tickets if the number exists.

    Any help is much appreciated. Thanks. P.S. I have taken a look at OTRS but I'm not feeling it, so unless someone can convince me, I guess I'm after an alternative.

    Read the article

  • Recompiling Nginx 1.4.3 with "--with-http_gzip_static_module" error

    - by Elijah Paul
    I'm trying to enable the ngx_http_gzip_static_module module in Nginx 1.4.3 by adding --with-http_gzip_static_module to my ./configure arguments, but I receive the following error when I try to recompile (make):

        # make
        make -f objs/Makefile
        make[1]: Entering directory `/tmp/nginx-1.4.3'
        make[1]: *** No rule to make target `src/os/unix/ngx_gcc_atomic_x86.h', needed by `objs/src/core/nginx.o'.  Stop.
        make[1]: Leaving directory `/tmp/nginx-1.4.3'
        make: *** [build] Error 2

    My current config (CentOS 6.4):

        # nginx -V
        nginx version: nginx/1.4.3
        built by gcc 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC)
        TLS SNI support enabled
        configure arguments: --conf-path=/etc/nginx/nginx.conf --pid-path=/var/run/nginx.pid --error-log-path=/var/log/nginx/error.log --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --http-log-path=/var/log/nginx/access.log --with-http_ssl_module --prefix=/usr --add-module=./nginx-sticky-module-1.1 --add-module=./headers-more-nginx-module-0.23

    I was under the impression that this was a module that just needed to be 'enabled' as opposed to 'added'. What am I missing here?
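
    Two things are going on here. Conceptually, gzip_static is not compiled in by default, so --with-http_gzip_static_module is the right move. Practically, the "No rule to make target" error usually means make is running against a stale or incomplete source tree; re-running ./configure regenerates objs/Makefile, and re-extracting the tarball first rules out missing files. A hedged sketch, assuming the nginx-1.4.3.tar.gz tarball is still at hand:

        # start from a clean tree so objs/ and the sources agree
        cd /tmp && rm -rf nginx-1.4.3 && tar xzf nginx-1.4.3.tar.gz && cd nginx-1.4.3
        ./configure \
            --conf-path=/etc/nginx/nginx.conf --pid-path=/var/run/nginx.pid \
            --error-log-path=/var/log/nginx/error.log \
            --http-proxy-temp-path=/var/cache/nginx/proxy_temp \
            --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp \
            --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp \
            --http-scgi-temp-path=/var/cache/nginx/scgi_temp \
            --http-log-path=/var/log/nginx/access.log \
            --with-http_ssl_module --prefix=/usr \
            --add-module=./nginx-sticky-module-1.1 \
            --add-module=./headers-more-nginx-module-0.23 \
            --with-http_gzip_static_module
        make && make install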

    Read the article

  • Can't pop3 from exchange server after a reboot

    - by BLAKE
    Last night I shut down my Exchange 2003 virtual machine, added a new VHD (for backups), and booted it again. Now I can't POP3 email from it with Outlook 2007. In Outlook I get the error:

        Task '[email protected] - Receiving' reported error (0x800CCC0F): 'The connection to the server was interrupted. If this problem continues, contact your server administrator or Internet service provider (ISP).'

    Does anybody know what is wrong? All I did was a reboot. I haven't formatted the added disk. There are no weird errors in the event log. I can still send mail with Outlook over port 25. I can send and receive mail with OWA. I can POP3 the mail to my phone (it takes about 15 minutes after sending a message, but I do get it eventually). EDIT: The 'Microsoft Exchange POP3' service says that it is started, but if I stop it and try to start it again, it fails, saying 'Could not start the Microsoft Exchange POP3 service on Local Computer. Error 1053: The service did not respond to the start or control request in a timely fashion.' I did some googling, and someone on exchangefreaks.com said that if I use Task Manager to 'End Task' on inetinfo.exe, then I can start the POP3 service fine. Does anyone know what causes this problem? I am fine for now, since I did get the service started, but what if it does this after every reboot...
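
    For reference, the workaround described above can at least be scripted so it is painless after a reboot. A hedged sketch (Windows Server 2003 command line; POP3Svc is the usual short name for 'Microsoft Exchange POP3', but verify it with sc query first, and note that killing inetinfo.exe also takes down IIS Admin and its dependent services such as SMTP, which will need restarting too):

        rem kill the IIS process that is holding the POP3 service's resources
        taskkill /f /im inetinfo.exe
        rem then start the Exchange POP3 service
        net start POP3Svc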

    Read the article

  • SQL Server Instance login issue

    - by reallyJim
    I've just brought up a new installation of SQL Server 2008. I installed the default instance as well as one named instance. I'm having a problem connecting to the named instance from anywhere besides the server itself, with any user besides 'sa'. I am running in mixed mode. I have a login/user with a known username. Using that user/login, I can connect properly when directly on the server. When I attempt to log in from anywhere else, I receive a "Login failed for user ''", with error 18456. In the log file on the server, I see a reason that doesn't seem to help: "Reason: Could not find a login matching the name provided." However, that user/login DOES exist, as I can use it locally. There are no further details about the error. Where can I start to find something to help me with this? I've tried deleting and recreating the user, as well as just creating a new one from scratch; same result: locally fine, remotely an error. EDIT: Partially resolved. I'm now past the base issue; the clients were trying to connect via the default instance. I don't know why. So, once the proper ports were opened in the firewall and a static port was assigned to the named instance, I can now connect, BUT ONLY if I specify the connection as Server,Port. SQLBrowser is apparently not helping/working in this case. I've verified it IS running, and done a stop/restart after my config changes, but no difference yet.
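
    On the remaining SQL Browser piece: clients resolve Server\Instance to the instance's port by querying SQL Browser over UDP 1434, so if only the TCP ports were opened, the Server,Port form is the only one that can work. A hedged check, assuming Windows Server 2008 or later (the rule name is illustrative):

        rem allow SQL Browser resolution traffic through Windows Firewall
        netsh advfirewall firewall add rule name="SQL Browser (UDP 1434)" dir=in action=allow protocol=UDP localport=1434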

    Read the article

  • Web SMTP Server(foo.com) will not send mail to exchange server which is also(foo.com)

    - by Atom
    I think I understand this problem fully, but I do not know how to approach it or where to go in terms of troubleshooting. I've got a domain,, that runs a Zen Cart installation which needs to be able to send emails to users (order confirmation, password reset). This works fine for sending to any other domain BUT foo.com. I'm running a locally hosted Exchange server that is also foo.com, and we can send and receive email just fine. If I run tail -f /usr/local/psa/var/log/maillog I receive this error:

        Apr 1 10:08:51 foo qmail-local-handlers[25824]: Handlers Filter before-local for qmail started ...
        Apr 1 10:08:51 foo qmail-local-handlers[25824]: from=
        Apr 1 10:08:51 foo qmail-local-handlers[25824]: [email protected]
        Apr 1 10:08:51 foo qmail-local-handlers[25824]: cannot reinject message to '[email protected]'
        Apr 1 10:08:51 foo qmail: 1270141731.583139 delivery 32410: failure: This_address_no_longer_accepts_mail./
        Apr 1 10:08:51 foo qmail: 1270141731.584098 status: local 0/10 remote 0/20

    I understand that the foo.com (web) SMTP service doesn't have any account besides the one used to authenticate outgoing mail, so of course I understand why it's saying 'this address no longer accepts mail'. My question is: how can I get the foo.com (web) SMTP service to hand off email meant for my Exchange server ([email protected])? Is this something to do with MX records? Thanks in advance. A
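
    MX records won't help by themselves: qmail consults its local domain lists before doing any DNS lookup, so as long as the web server believes foo.com is local, it will never hand the mail to Exchange. The usual fix is to stop treating foo.com as locally hosted and give qmail an explicit route. A hedged sketch against a Plesk-style qmail layout (the control-file paths and the Exchange hostname are assumptions):

        # route mail for foo.com to the Exchange box instead of delivering locally
        echo "foo.com:exchange.foo.local" >> /var/qmail/control/smtproutes
        # remove foo.com from the lists that mark it as locally hosted
        grep -v '^foo\.com' /var/qmail/control/virtualdomains > /tmp/vd \
            && mv /tmp/vd /var/qmail/control/virtualdomains
        grep -v '^foo\.com$' /var/qmail/control/locals > /tmp/lc \
            && mv /tmp/lc /var/qmail/control/locals
        /etc/init.d/qmail restart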

    Read the article

  • IIS 404 custom error

    - by Greg B
    I've deployed an ASP.NET 3.5 app to a 64-bit Windows 2003 R2 server. In the web.config I have the following:

        <customErrors mode="RemoteOnly" defaultRedirect="/404/">
          <error statusCode="404" redirect="/404/"/>
          <error statusCode="500" redirect="/500/"/>
        </customErrors>

    In the website properties in IIS Manager I have set the 404 and 500 errors to Type = "URL" and the same URLs as in the web.config. I have a wildcard application map to the .NET 2.0 aspnet_isapi.dll with "Verify file exists" turned off. If I try to hit a fake .aspx file, I successfully get sent to the 404 page. I believe this is because there is an explicit mapping for .aspx to the .NET DLL. If I try to access a fake directory, I simply receive a plain text response saying:

        The system cannot find the file specified.

    It would appear that these requests for directories are not being routed through the .NET pipeline, which is what I would expect (and need) to happen because of the wildcard application mapping. Any ideas?

    Read the article

  • Centos 5.xx Nagios sSMTP mail cannot be sent from nagios server, but works great from console

    - by adam
    I've spent the last 3 hours researching how to get Nagios to work with email notifications. I need to send emails from work, where the only accessible SMTP server is the company's one. I managed to get it done from the console using mail [email protected], which works perfectly for the purpose. I set up ssmtp.conf as follows:

        [email protected]
        mailhub=smtp.company.com:587
        [email protected]
        AuthPass=mypassword
        FromLineOverride=YES
        useSTARTTLS=YES
        rewriteDomain=company.pl
        hostname=nagios
        UseTLS=YES

    I also edited the file /etc/ssmtp/revaliases as follows:

        root:[email protected]:smtp.company.com:587
        nagios:[email protected]:smtp.company.com:587
        nagiosadmin:[email protected]:smtp.company.com:587

    I also edited the file permissions for /etc/ssmtp/* like so:

        -rwxrwxrwx 1 root nagios  371 lis 22 15:27 /etc/ssmtp/revaliases
        -rwxrwxrwx 1 root nagios 1569 lis 22 17:36 /etc/ssmtp/ssmtp.conf

    and I believe I assigned the proper groups:

        cat /etc/group | grep nagios
        mail:x:12:mail,postfix,nagios
        mailnull:x:47:nagios
        nagios:x:2106:nagios
        nagcmd:x:2107:nagios

    When I send mail manually, I receive it on my private box, but when I send mail from Nagios, the mail log says:

        Nov 22 17:47:03 certa-vm2 sSMTP[9099]: MAIL FROM:<[email protected]>
        Nov 22 17:47:03 certa-vm2 sSMTP[9099]: 550 You are not allowed to send mail from this address

    It says [email protected], and I'm not allowed to send mail claiming to be [email protected]; it's supposed to be [email protected]. What am I doing wrong? I've run out of tricks... Kind regards, Adam
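
    Two things stand out in the config above. First, rewriteDomain is set to company.pl while the aliases use a different domain, so locally generated senders can get rewritten to a domain the relay refuses. Second, the revaliases rewrite only applies when ssmtp is invoked as that local user, and with FromLineOverride=YES a From: header supplied by the notification command wins. A hedged way to reproduce exactly what Nagios does (addresses are the same placeholders used above):

        # send as the nagios user, verbose, to see which MAIL FROM ssmtp builds
        sudo -u nagios /usr/sbin/ssmtp -v [email protected] <<'EOF'
        To: [email protected]
        From: [email protected]
        Subject: nagios ssmtp test

        test body
        EOF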

    Read the article

  • Client certificate based encryption

    - by Timo Willemsen
    I have a question about the security of a file on a webserver. I have a file on my webserver which is used by my web application: a Bitcoin wallet. Essentially it's a file with a private key in it, used to decrypt messages. My web application uses the file because it's needed to receive transactions made through the Bitcoin network. I was looking into ways to secure it. Obviously, if someone has root access to the server, he can do the same as my application. However, I need to find a way to encrypt it. I was thinking of something like this, but I have no clue if it would actually work: the client logs in with some sort of client certificate; the web application creates a wallet file; the web application encrypts the file with the client certificate; if the application wants to access the file, it has to use the client certificate. So basically, if someone gets root access to the site, they cannot access the wallet. Is this possible, and does anyone know of an implementation of this? Are there any problems with this? And how safe would it be?
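
    The scheme is implementable with standard tooling: encrypt the wallet to the client certificate's public key, so only the holder of the matching private key can open it. A hedged sketch using OpenSSL's S/MIME mode (file names are illustrative; wallet.dat is Bitcoin's conventional wallet file name). The caveat is the one already raised above: whenever the application needs to use the wallet, the decrypted key material must exist on the server for that moment, so this protects the file at rest rather than against an attacker with live root access:

        # encrypt the wallet to the client's certificate (public-key operation,
        # safe to run on the server)
        openssl smime -encrypt -binary -aes256 -in wallet.dat -outform DER -out wallet.dat.enc client-cert.pem
        # decryption requires the client's private key, which never lives on the server
        openssl smime -decrypt -binary -inform DER -in wallet.dat.enc -inkey client-key.pem -out wallet.dat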

    Read the article

  • slicehost google apps mx settings

    - by Bob
    Hello all, I am banging my head against the wall on this one. I followed the MX setup tutorials for Google Mail and it didn't work. After deleting those records and adding the ones Google suggested, I currently have:

        domain.com. 86400 IN MX 10 ASPMX.L.GOOGLE.com.
        domain.com. 86400 IN MX 20 ALT2.ASPMX.L.GOOGLE.com.
        domain.com. 86400 IN MX 20 ALT1.ASPMX.L.GOOGLE.com.
        domain.com. 86400 IN MX 30 ASPMX2.GOOGLEMAIL.com.
        domain.com. 86400 IN MX 30 ASPMX5.GOOGLEMAIL.com.
        domain.com. 86400 IN MX 30 ASPMX3.GOOGLEMAIL.com.
        domain.com. 86400 IN MX 30 ASPMX4.GOOGLEMAIL.com.

    according to the output of my dig command for my particular domain. I can send email from Google Apps Mail, but I cannot receive any email. It gives me the following error:

        Technical details of permanent failure: Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 550 550 #5.1.0 Address rejected [email protected]

    Now, I already tried following the Slicehost MX article instructions straight as well, and they did not work out for me. The domain has already been verified by Google, and it says the email is activated on their end. Any help would be appreciated : )
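
    A bounce like "550 Address rejected" with those MX records in place usually means the sending side is still resolving the old records (the TTL here is 86400, a full day) or delivery is landing on a server that doesn't know the mailbox. A hedged pair of checks from any shell:

        # ask a public resolver what the world currently sees (bypasses local caches)
        dig +short MX domain.com @8.8.8.8
        # talk to Google's primary MX directly and see whether it accepts the address
        telnet ASPMX.L.GOOGLE.com 25
        # then type: HELO test.local / MAIL FROM:<[email protected]> / RCPT TO:<[email protected]>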

    Read the article

  • Configuring ZenOSS to monitor CPU load

    - by Tom
    I'm testing out monitoring tools for a network at work with a coworker, but neither of us has ever used any sort of monitoring tool before. Currently we are experimenting with ZenOSS and having some difficulties. We want to populate our CPU load graphs, because that is one of the primary features we are looking for in a monitoring tool, but we have been unable to populate the graphs with data. So far we have installed the wmiperformance, sqldatasource, wmidatasource, and snmpperformance (simple) ZenPacks, and the machine we are trying to monitor is running Windows XP. We have tried to model the device and everything seems to run, and we've tried to add data points to graphs, but the only options we receive for graphs are CPU and Memory. We are able to monitor services; ZenOSS recognizes the make and model of the processor, RAM, and hard drive, and is even giving us metrics on available storage, but again, we are looking for performance metrics such as CPU load and memory utilization. I realize I probably didn't provide a lot of information, but that is because we don't have a very good idea of what we are doing and can't find instructions for monitoring CPU load either on the ZenOSS homepage or in the forums. If someone could give us step-by-step instructions on how to set up CPU load monitoring, that would probably be more beneficial to us than a diagnostic of our current setup. But regardless, if I left any important information out and you need it to answer the question, please let me know. Thank you.
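
    One hedged suggestion, since the device is Windows XP: ZenOSS's Windows performance graphs are commonly fed over SNMP, so a quick way to learn whether data can flow at all is to query the box by hand from the ZenOSS server (this assumes the SNMP service is installed on the XP machine and its community string is 'public'):

        # does the XP box answer SNMP at all?
        snmpwalk -v1 -c public XP_HOST system
        # per-core CPU load, as exposed by the standard HOST-RESOURCES MIB
        snmpwalk -v1 -c public XP_HOST hrProcessorLoad

    If those answer, binding the device to /Server/Windows with the snmpperformance template attached should populate the CPU graphs; if they time out, the graphs will stay empty no matter how the ZenOSS side is configured.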

    Read the article

  • centos 6.3 kvm external ip forwarding to guests

    - by user1111702
    I have a CentOS 6.3 server with KVM installed. The server has one NIC and 4 external IPs:

        176.9.xxx.xx1
        176.9.xxx.xx2
        176.9.xxx.xx3
        176.9.xxx.xx4

    I use the following configuration, with ifcfg-eth0 as slave to ifcfg-br0. The configuration in ifcfg-eth0 is:

        DEVICE=eth0
        ONBOOT=yes
        BRIDGE=br0
        HWADDR=14:da:e9:b3:8b:99

    and in ifcfg-br0:

        DEVICE=br0
        TYPE=Bridge
        BOOTPROTO=static
        BROADCAST=176.9.xxx.xxx
        IPADDR=176.9.xxx.xx1
        NETMASK=255.255.255.0
        SCOPE="peer 176.9.xxx.xxx"

    I also have 3 aliases for br0. br0:1 takes the traffic from the second external IP:

        DEVICE=br0:1
        IPADDR=176.9.xxx.xx2
        NETMASK=255.255.255.248
        ONBOOT=yes

    br0:2 takes the traffic from the third external IP:

        DEVICE=br0:1
        IPADDR=176.9.xxx.xx3
        NETMASK=255.255.255.248
        ONBOOT=yes

    and br0:3 takes the traffic from the fourth external IP:

        DEVICE=br0:1
        IPADDR=176.9.xxx.xx4
        NETMASK=255.255.255.248
        ONBOOT=yes

    The above settings work fine and I receive the traffic from all the external IPs. My problem is that I want to pass the traffic from each external IP to a specific virtual guest on my server, i.e. traffic that comes to 176.9.xxx.xx2 must pass to virtual machine 1, 176.9.xxx.xx3 to virtual machine 2, and 176.9.xxx.xx4 to virtual machine 3. Can you please help me achieve this? What are the settings on the host, and what should I do on the guests? Thank you in advance.
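
    Since br0 is a bridge onto the physical segment, the usual approach is to give each guest a vnet/tap interface on br0 and configure its external IP directly inside the guest; the host then doesn't need the br0:1..3 aliases at all. (Incidentally, all three alias files above say DEVICE=br0:1, which is worth fixing if the aliases are kept.) If the guests must instead stay on a private libvirt network, DNAT on the host does the same job. A hedged sketch of the NAT variant, assuming libvirt's default 192.168.122.0/24 guest network and illustrative guest addresses:

        # forward everything arriving for each external IP to its guest
        iptables -t nat -A PREROUTING -d 176.9.xxx.xx2 -j DNAT --to-destination 192.168.122.11
        iptables -t nat -A PREROUTING -d 176.9.xxx.xx3 -j DNAT --to-destination 192.168.122.12
        iptables -t nat -A PREROUTING -d 176.9.xxx.xx4 -j DNAT --to-destination 192.168.122.13
        # let replies leave with a routable source, and enable forwarding
        iptables -t nat -A POSTROUTING -s 192.168.122.0/24 -o br0 -j MASQUERADE
        sysctl -w net.ipv4.ip_forward=1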

    Read the article

  • Varnish returning 503, FetchError (could not get storage)

    - by Archan
    On our current setup we're running into a problem with Varnish. We're on a CentOS 5.7 x86_64 xenpv VPS with cPanel WHM, hosted at VPS.net. Sometimes we receive a Guru Meditation from Varnish, and when we look in the varnishlog with the following command:

        varnishlog -d -c -m TxStatus:503

    it returns output similar to the following:

        15 VCL_call     c recv
        15 VCL_acl      c NO_MATCH devs
        15 VCL_return   c pass
        15 VCL_call     c hash
        15 Hash         c ****
        15 Hash         c *************
        15 VCL_return   c hash
        15 VCL_call     c pass pass
        15 Backend      c 12 default default
        15 TTL          c 1835862523 RFC 0 -1 -1 1332454056 0 1332454055 375007920 0
        15 VCL_call     c fetch hit_for_pass
        15 ObjProtocol  c HTTP/1.1
        15 ObjResponse  c OK
        15 ObjHeader    c Date: Thu, 22 Mar 2012 22:07:35 GMT
        15 ObjHeader    c Server: Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/0.9.8e-fips-rhel5 mod_bwlimited/1.4 mod_fcgid/2.3.6
        15 ObjHeader    c X-Powered-By: PHP/5.3.9
        15 ObjHeader    c Expires: Thu, 19 Nov 1981 08:52:00 GMT
        15 ObjHeader    c Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        15 ObjHeader    c Pragma: no-cache
        15 ObjHeader    c Content-Type: text/html; charset=utf-8
        15 ObjHeader    c X-Cacheable: NO:Cache-Control=private
        15 FetchError   c chunked read_error: 12 (Could not get storage)
        15 VCL_call     c error deliver
        15 VCL_call     c deliver deliver

    As far as I could gather, we could try increasing the nuke_limit. Currently we have a nuke_limit of 500, and when we run varnishstat -1 -f n_lru_nuked we "only" get a total of 1031, even though we have seen the error happen on several pages. When we then run top to see how much memory Varnish is using, it shows it is only using 763m, although we've set it to be allowed to use 1200m. Any ideas what the problem could be?
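
    "Could not get storage" during a chunked fetch means Varnish asked its storage backend for space to buffer the response body and was refused, which can happen before top shows the full ceiling in use (malloc storage carries per-object overhead, and on a memory-tight Xen VPS the underlying allocation itself can fail). A hedged pair of checks, assuming the stock CentOS init layout:

        # are storage allocations actually failing, and how much nuking is going on?
        varnishstat -1 | egrep 'sm[abf]|n_lru_nuked'
        # the ceiling lives in the -s startup option; on CentOS that is usually
        # set via /etc/sysconfig/varnish
        grep -n 'VARNISH_STORAGE' /etc/sysconfig/varnish

    Raising the -s size, or switching to file storage (e.g. VARNISH_STORAGE="file,/var/lib/varnish/varnish_storage.bin,2G") on a RAM-poor VPS, is the usual remedy; both values here are illustrative.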

    Read the article

  • Only tunnel certain applications via OpenVPN

    - by jinjin
    Hi, I've purchased a VPN solution. It works correctly when I have "redirect-gateway def1" in the configuration file (routing all traffic through the VPN). However, when I remove that line from the configuration file, I am still able to ping out of the machine (ping -I tap0), but I cannot ping the IP assigned to the machine (it's a public IP); I get the error: Destination Host Unreachable. I only want certain applications to send traffic through the VPN tunnel (e.g. ZNC, irssi), all of which let me select which IP they use. However, they can't receive any data, making the tunnel essentially useless to me when redirect-gateway is disabled. Any ideas on how to allow specific applications to use the tunnel, without forcing everything to go through it? My configuration file is as follows:

        dev tap
        remote #.#.#.#
        float #.#.#.#
        port 5129
        comp-lzo
        ifconfig #.#.#.# 255.255.255.128
        route-gateway #.#.#.#
        #redirect-gateway def1
        secret key.txt
        cipher AES-128-CBC

    The output of ifconfig -a when the tunnel is connected:

        tap0      Link encap:Ethernet  HWaddr 00:ff:47:d3:6d:f3
                  inet addr:#.#.#.#  Bcast:#.#.#.#  Mask:255.255.255.255
                  inet6 addr: <snip> Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:612 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:35 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:100
                  RX bytes:25704 (25.1 KiB)  TX bytes:6427 (6.2 KiB)

    EDIT: the Bcast:#.#.#.# (ifconfig) is different from route-gateway #.#.#.# (openvpn), if that makes any difference.
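
    Without redirect-gateway, replies to traffic arriving on tap0 leave via the normal default route, so the remote end never sees them; that is why the tunnel IP is unreachable. Source-based policy routing addresses exactly this: packets sourced from the VPN address get their own routing table whose default route is the tunnel. A hedged sketch (TUNNEL_IP and TUNNEL_GW stand for the ifconfig and route-gateway values masked as #.#.#.# above; table id 100 is arbitrary):

        # give VPN-sourced traffic its own routing table
        ip rule add from TUNNEL_IP table 100
        ip route add default via TUNNEL_GW dev tap0 table 100
        # applications that bind to TUNNEL_IP (znc and irssi both can) now use
        # the tunnel; everything else keeps the normal default route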

    Read the article

  • NginX & Munin - Location and error 404

    - by user1684189
    I have a server running nginx + php-fpm with this simple configuration:

        server {
            listen 80;
            server_name ipoftheserver;

            access_log /var/www/default/logs/access.log;
            error_log /var/www/default/logs/error.log;

            location / {
                root /var/www/default/public_html;
                index index.html index.htm index.php;
            }

            location ^~ /munin/ {
                root /var/cache/munin/www/;
                index index.html index.htm index.php;
            }

            location ~\.php$ {
                include /etc/nginx/fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /var/www/default/public_html$fastcgi_script_name;
            }
        }

    But when I open ipoftheserver/munin/ I receive a 404 error (when I request ipoftheserver/, the files in /var/www/default/public_html are listed correctly). Munin is installed and works perfectly. If I remove this configuration and use this other one instead, everything works, but only for the /munin/ directory:

        server {
            server_name ipoftheserver;
            root /var/cache/munin/www/;
            location / {
                index index.html;
                access_log off;
            }
        }

    How to fix? Many thanks for your help.
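
    The 404 follows from how root works: nginx appends the full URI to root, so /munin/index.html is looked up at /var/cache/munin/www/munin/index.html, which doesn't exist. alias substitutes the matched prefix instead, which is what this block wants. A hedged replacement for the munin location:

        location ^~ /munin/ {
            # 'alias' maps /munin/x to /var/cache/munin/www/x
            # ('root' would map it to /var/cache/munin/www/munin/x)
            alias /var/cache/munin/www/;
            index index.html;
        }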

    Read the article

  • postfix test and configuration problem

    - by Woho87
    Hi guys! I installed Postfix using sudo yum install postfix postfix-mysql. I'm a newbie to mail systems, but I have one Amazon EC2 instance with a public DNS name. I used the public DNS name in most cases when I configured the file main.cf. The public DNS name I have is from Amazon and it is a long string (ec2-123-34-234-677.....amazon.com). This is what I configured in main.cf; I replaced example.com with ec2-123-.......amazon.com:

        myhostname = mail.example.com
        mydomain = example.com
        myorigin = $mydomain
        mydestination = example.com, $transport_maps
        local_recipient_maps = $alias_maps $virtual_mailbox_maps unix:passwd.byname
        home_mailbox = Maildir/

    How do I test Postfix? I just want it to send emails for my web application. I tried to test it with telnet localhost 25 after typing sudo postfix start over SSH, but I receive a message that the telnet command cannot be found. I use the Amazon Linux distribution, if you want to know; I use it because it is free. What have I done wrong? Are there any more configurations required? Please help!
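
    The immediate blocker is only the missing telnet client. A hedged test session, assuming yum can install it and Postfix is listening on localhost (the addresses are placeholders, in the spirit of the example.com substitution above):

        sudo yum install telnet
        telnet localhost 25
        # then type an SMTP conversation by hand:
        #   HELO localhost
        #   MAIL FROM:<[email protected]>
        #   RCPT TO:<[email protected]>
        #   DATA
        #   Subject: postfix test
        #
        #   test body
        #   .
        #   QUIT
        # and watch /var/log/maillog for the delivery attempt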

    Read the article

  • Why does my ftp(e)s server fails like half of the time

    - by user1092608
    I have this discussion at work regarding our FTP server, which runs via vsftpd. Initially, we opted to serve FTPES instead of SFTP because it seemed the most flexible and straightforward way for our server to offer secure file transmission. Since then, our FTP server has become a source of issues for our end users. Half of the time, users complain about FTP connections not working. I must say, I tested our FTP through different infrastructures (in the field, at random times, in random places) and indeed, behind some configurations (no idea how they are set up, because of the 'field' testing), I receive errors. One of them is:

        Error: Failed to retrieve directory listing (FileZilla)

    Furthermore, behind my basic home configuration, everything seems to run fine. I (think I) did all the basic configuration checks (passive mode? firewall for all ports?) and can't seem to find the source. Being a bunch of techies at our small office, yet knowing nothing about infrastructure, some have started suggesting that the FTPS protocol could be the source of the issues ("No, I've only known SFTP so far", "FTPS is not widespread"). I, however, strongly doubt this hypothesis, since reading around on the web and asking questions on Server Fault, everyone seems to deny it. So, as I would like to avoid reconfiguring (this involves messing around in our SSH service, our virtual user setup and the FTP service), I need some advice on: 1) what could potentially be the general cause? 2) do you have some general tips? 3) would you mind having a look at my configuration file?

        ----- General Settings -----
        write_enable=YES
        dirmessage_enable=YES
        nopriv_user=ftpsecure
        ftpd_banner="Welcome to XXXX FTP!"
        hide_ids=YES
        hide_file=.*
        max_per_ip=10
        max_clients=10
        local_enable=YES
        local_umask=022
        chroot_local_user=YES
        secure_chroot_dir=/usr/share/empty
        userlist_enable=NO
        userlist_deny=YES
        userlist_file=/etc/vsftp_deny_users
        guest_enable=YES
        guest_username=ftpvirtual
        virtual_use_local_privs=YES
        user_sub_token=$USER
        local_root=/srv/ftp/ftpvirtual/$USER
        anonymous_enable=NO
        syslog_enable=NO
        xferlog_enable=YES
        xferlog_file=/var/log/vsftpd_xfer.log
        connect_from_port_20=YES
        pam_service_name=vsftpd
        listen=YES
        listen_port=21
        pasv_enable=YES
        pasv_min_port=30000
        pasv_max_port=30030
        pasv_address=foo
        ssl_enable=YES
        rsa_cert_file=/etc/vsftpd.pem
        rsa_private_key_file=/etc/vsftpd.pem
        force_local_data_ssl=YES
        force_local_logins_ssl=YES
        ssl_tlsv1=YES
        ssl_sslv2=YES
        ssl_sslv3=YES
        ssl_ciphers=HIGH
        anon_mkdir_write_enable=NO
        anon_root=/srv/ftp
        anon_upload_enable=NO
        idle_session_timeout=900
        log_ftp_protocol=NO
        dsa_cert_file=/etc/vsftpd.pem

    Thanks
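
    The "directory listing fails only on some networks" pattern is the classic FTPS failure mode: once the control channel is encrypted, NAT devices and stateful firewalls can no longer read the PASV reply and open the data ports on the fly, so the listing (the first data connection) dies behind strict gateways and works behind permissive ones. Two hedged things to verify on the server side:

        # 1) the passive range must be reachable end to end, not just port 21
        iptables -A INPUT -p tcp --dport 30000:30030 -j ACCEPT
        # 2) pasv_address must resolve to the server's public address;
        #    'pasv_address=foo' above only works if that is what external
        #    clients can actually reach
        # some clients (FileZilla included) also fail listings unless SSL
        # session reuse is not required on the data connection; in vsftpd.conf:
        #    require_ssl_reuse=NO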

    Read the article

  • Question on the implementation of my Entity System

    - by miguel.martin
    I am currently creating an entity system in C++; it is almost complete (I have all the code there, I just have to add a few things and test it). The only thing is, I can't figure out how to implement some features. This entity system is based a bit on the Artemis framework, but it is different. I'm not sure I'll be able to type this out the way my head processes it; I'm basically going to ask whether I should do one thing over another. First, a little detail on the entity system itself. Here are the basic classes it uses to work:

        Entity - an ID (and some methods to add/remove/get/etc. Components)
        Component - an empty abstract class
        ComponentManager - manages ALL components for ALL entities within a Scene
        EntitySystem - processes entities with specific components
        Aspect - the class used to help determine which Components an Entity must contain so a specific EntitySystem can process it
        EntitySystemManager - manages all EntitySystems within a Scene
        EntityManager - manages entities (i.e. holds all Entities, is used to determine whether an Entity has been changed, enables/disables them, etc.)
        EntityFactory - creates (and destroys) entities and assigns an ID to them
        Scene - contains an EntityManager, EntityFactory, EntitySystemManager and ComponentManager; has functions to update and initialise the scene

    Now, in order for an EntitySystem to know efficiently when to check whether an Entity is valid for processing (so I can add it to a specific EntitySystem), it must receive a message from the EntityManager (after a call of activate(Entity& e)). Similarly, the EntityManager must know when an Entity is destroyed by the EntityFactory in the Scene, and the ComponentManager must know when an Entity is created AND destroyed. I do have a Listener/Observer pattern implemented at the moment, but with this pattern I may remove a Listener (which in this case is dependent on the method being called). I mainly have this implemented for specific things related to a game, i.e. teams, tagging of entities, etc. So... I was thinking maybe I should call a private method (using friend classes) to send out notifications when an Entity has been activated, deleted, etc. For example, taken from my EntityFactory:

        void EntityFactory::killEntity(Entity& e)
        {
            // if the entity doesn't exist in the entity manager within the scene
            if(!getScene()->getEntityManager().doesExsist(e))
            {
                return; // go back to the caller! (should throw an exception or something..)
            }

            // tell the ComponentManager and the EntityManager that we killed an Entity
            getScene()->getComponentManager().doOnEntityWillDie(e);
            getScene()->getEntityManager().doOnEntityWillDie(e);

            // notify the listeners
            for(Mouth::iterator i = getMouth().begin(); i != getMouth().end(); ++i)
            {
                (*i)->onEntityWillDie(*this, e);
            }

            _idPool.addId(e.getId()); // add the ID to the pool
            delete &e; // delete the entity
        }

    As you can see, on the lines where I am telling the ComponentManager and the EntityManager that an Entity will die, I am calling a method to make sure they handle it appropriately. Now, I realise I could do this without calling it explicitly, with the help of that for loop notifying all listener objects connected to the EntityFactory's Mouth (an object used to tell listeners that there's an event). But is this a good idea (good design, or what)? I've gone over the PROS and CONS; I just can't decide what I want to do.

        Calling explicitly:
            PROS: Faster? Since these functions are explicitly called, they can't be "removed".
            CONS: Not flexible. Bad design? (friend functions)

        Calling through Listener objects (i.e. ComponentManager/EntityManager inherits from an EntityFactoryListener):
            PROS: More flexible? Better design?
            CONS: Slower? (virtual functions) Listeners can be removed, i.e. they may be removed and not get called again during the program, which could cause a crash.

    P.S. If you wish to view my current source code, I am hosting it on BitBucket.
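
    For concreteness, a hedged sketch of what the listener variant could look like; the interface name and the empty handler body are assumptions, with onEntityWillDie matching the call already made inside killEntity above:

        class EntityFactory;
        class Entity;

        // hypothetical listener interface; ComponentManager and EntityManager
        // would both inherit from it instead of being friended by EntityFactory
        class EntityFactoryListener
        {
        public:
            virtual ~EntityFactoryListener() {}
            virtual void onEntityWillDie(EntityFactory& factory, Entity& e) = 0;
        };

        class ComponentManager : public EntityFactoryListener
        {
        public:
            virtual void onEntityWillDie(EntityFactory& factory, Entity& e)
            {
                // drop all components owned by this entity (the same work
                // doOnEntityWillDie does in the explicit version)
            }
        };

    One middle ground worth noting: keep the listener mechanism for game-level observers (teams, tags), but register the ComponentManager/EntityManager listeners internally at Scene construction and never expose a way to remove them; the "listener got removed and we crash" CON then disappears while the design stays uniform.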

    Read the article
