Search Results

Search found 8666 results on 347 pages for 'remote'.


  • Sendmail relay authentication

    - by Pawel Veselov
    I'm trying to set up my sendmail to authenticate against a relay (Comcast). I'm not seeing any attempts to authenticate at all. I'm trying to just debug how authentication works, and can't connect all the pieces. I have, in my .mc file:

        define(`RELAY_MAILER_ARGS', `TCP $h 587')dnl
        define(`SMART_HOST', `relay:smtp.comcast.net.')dnl
        define(`confAUTH_MECHANISMS', `PLAIN')dnl
        FEATURE(`authinfo',`hash /etc/mail/client-info')dnl

    And in my /etc/mail/client-info:

        AuthInfo:*.comcast.net "U:root" "I:comcast_user" "P:comcast_password"

    Now, I know everything is fine with the u/p, as I could authenticate directly through SMTP using telnet. There are two things I don't understand. First, when AuthInfo records are searched for, they are matched by the target hostname. How? Does it use the map key (something I would expect), or the so-called "Domain" (the "R:" parameter that I don't set in my auth-info line)? Second, what is "U:", really? The sendmail README (http://www.sendmail.org/m4/smtp_auth.html) says it's "user (authorization id)" and that "I:" is the "authentication ID". That suggests my username should actually be in "U:", but http://www.sendmail.org/~ca/email/auth.html says that "I:" is your remote user name. The session looks like this:

        [root@manticore]/etc/mail# sendmail -qf -v
        Warning: Option: AuthMechanisms requires SASL support (-DSASL)
        Running /var/spool/mqueue/p97CgcWq023273 (sequence 1 of 399)
        [email protected]... Connecting to smtp.comcast.net. port 587 via relay...
        220 omta19.westchester.pa.mail.comcast.net comcast ESMTP server ready
        >>> EHLO my.host.name
        250-omta19.westchester.pa.mail.comcast.net hello [my.ip.add.res], pleased to meet you
        250-HELP
        250-AUTH LOGIN PLAIN
        250-SIZE 15728640
        250-ENHANCEDSTATUSCODES
        250-8BITMIME
        250-STARTTLS
        250 OK
        >>> STARTTLS
        220 2.0.0 Ready to start TLS
        >>> EHLO my.host.name
        250-omta19.westchester.pa.mail.comcast.net hello [my.ip.add.res], pleased to meet you
        250-HELP
        250-AUTH LOGIN PLAIN
        250-SIZE 15728640
        250-ENHANCEDSTATUSCODES
        250-8BITMIME
        250 OK
        >>> MAIL From:<> SIZE=2183
        550 5.1.0 Authentication required
        MAILER-DAEMON... aliased to postmaster
        postmaster... aliased to root
        root... aliased to [email protected]
        postmaster... aliased to root
        root... aliased to [email protected]
        >>> RSET
        250 2.0.0 OK
        [root@manticore]/etc/mail# sendmail -d0.1
        Version 8.14.3
        Compiled with: DNSMAP LOG MAP_REGEX MATCHGECOS MILTER MIME7TO8 MIME8TO7 NAMED_BIND NETINET NETINET6 NETUNIX NEWDB NIS PIPELINING SCANF SOCKETMAP STARTTLS TCPWRAPPERS USERDB XDEBUG

    Thanks, Pawel.
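
    One way to confirm what the relay expects, independent of sendmail's map lookups, is to drive an AUTH PLAIN exchange by hand (a sketch; comcast_user and comcast_password stand in for the real credentials):

        # Build the AUTH PLAIN token: base64 of "\0username\0password"
        printf '\0comcast_user\0comcast_password' | openssl base64

        # Open a STARTTLS session, then type: EHLO test, then AUTH PLAIN <token>
        openssl s_client -starttls smtp -crlf -connect smtp.comcast.net:587

    If the token is accepted here but sendmail never sends AUTH, the problem is on the sendmail side; and note the warning in the transcript above ("AuthMechanisms requires SASL support (-DSASL)"), which indicates this sendmail binary was built without SASL and cannot attempt authentication at all until rebuilt with it.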

  • Setting Timeouts: SQL Server 2008/IIS 7.5

    - by Julie
    We have recently migrated from a Win 2003/SQL Server 2000 system to Win 2008 R2 64-bit, SQL Server 2008 R2. Our websites are in classic ASP, and this can't be changed to another scripting language at this time.

    On the old server, if I got stuck in some kind of endless loop, the page would throw an error. On the new server, I have a page that has some sort of looping problem, and even though the SQL stored procedure is called only once (and runs fine when run as a query on the server) it pegs SQL Server and therefore locks all of our websites. I'll get my code figured out, no biggie. But I need to make sure the server times out when this happens. (The page I'm working on runs fine with certain instances of the query, and locks with others using a different query variable. I can't have something like that sneak up on me on a page I haven't touched for three years.)

    I can't figure out how an SP that runs once on the server, from an ASP page, is tying up SQL Server this way. It's obviously some sort of a timeout issue, but I can't figure out where, or which timeout values to change. I actually have to remote desktop to the server and kill the process in SQL Server. I'm afraid I'm a generalist, and server management is not my thing, even though it's my responsibility, so I am almost certain to have questions about any answer that I receive. How can I track this down? What settings do I need to change?

    More info: it's not SQL Server. On our test site, I created an ASP file that just did an endless loop (do while 1=1) and had the same problem (the other websites wouldn't load) without SQL Server being involved. So I think the reason the process was hanging is that the page wasn't timing out as it should, and so the connection to SQL was never closed. Killing the process in SQL Server would reset the page somehow. For my intentional endless loop, I had to refresh the app pool to get rid of it. This points more to either IIS or the ASP settings. The ASP timeouts are set to whatever the defaults were when the server was first loaded. I still can't figure out why one file is locking up all websites, though. Again, that didn't happen on the old server.
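
    For classic ASP on IIS 7.x, the per-request limit is the asp section's scriptTimeout (default 90 seconds). A sketch of setting it globally with appcmd (the value here is only an example; run from an elevated prompt):

        %windir%\system32\inetsrv\appcmd set config /section:asp /scriptTimeout:00:01:30

    Separately, sites sharing one application pool share its worker process, so one spinning request can starve every site in that pool; giving each site its own pool at least contains the damage while the timeout settings are sorted out.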

  • order of operations for environment variables

    - by alyda
    I want to understand how environment variables are set and reset (overridden). I'm running Apache/2.2.24 (Unix) PHP/5.4.14 on a Mac. My theory is this: environment vars can be set in bash, then they can be overwritten with httpd.conf, preceding a VirtualHost directive that precedes php.ini, which can then be overwritten by .htaccess (if allowable) and finally by PHP. I tried the following:

    1. Setting the environment variable in bash: I added export ENVIRONMENT='local' to my ~/.bashrc file, restarted Apache, and did not get any output from print_r($_ENV); (in a simple index.php file at the root of my webserver). I also tried putting ENVIRONMENT='local' into /etc/environment and restarting Apache: nothing. As well as /etc/bashrc, restart Apache: still nothing.

    2. Setting the environment variable in httpd.conf: I added SetEnv ENVIRONMENT 'local-httpd' to the end of my /etc/apache2/httpd.conf file (but before I load other conf files, such as virtual hosts [Include /private/etc/apache2/other/*.conf]). I now see the variable in the array print_r($_SERVER); but not in print_r($_ENV);.

    3. Setting the environment variable in httpd-vhosts.conf: I added SetEnv ENVIRONMENT 'local-vhost' to my /etc/apache2/extra/httpd-vhosts.conf file in my generic directive that points to my default document root. I now see the variable has been overwritten (to local-vhost from local-httpd, so I know where the variable is getting set).

    4. Setting the environment variable in php.ini: while searching for a proper place to put my environment variable, I noticed that variables_order = "GPCS" was set to the production value rather than EGPCS. I changed it, restarted my server and found that I was now getting output for print_r($_ENV); but not my expected custom variable. It also appears that I am not able to set a custom variable in this file. Please tell me if I am wrong.

    5. Setting the environment variable in .htaccess: I added SetEnv ENVIRONMENT 'local-htaccess'. This worked as expected, overwriting all other values that were set.

    6. Setting / overwriting the environment variable in PHP: if (...) { putenv('ENVIRONMENT=local'); }

    I'm asking this question because I have a lot of local and remote testing servers, some of which may or may not allow me access to modify httpd, httpd-vhost, php.ini or environment variables. I want to understand what is best for those different scenarios (shared hosting, Heroku, local servers, etc). I obviously don't know how to properly set the environment variable in bash in a way that PHP can use it; I'd like to know how to do that (as I think Heroku does something similar with heroku config set...).
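
    A note on step 1: Apache never reads ~/.bashrc, so exports there are invisible to PHP. The variable has to be in the environment of the process that actually starts httpd, and mod_env can then forward it per-request. A sketch (paths are assumptions; Debian-style layouts keep Apache's startup environment in /etc/apache2/envvars, sourced by apachectl):

        # in Apache's startup environment file
        export ENVIRONMENT='local'

        # in httpd.conf: forward Apache's own environment variable into each request
        PassEnv ENVIRONMENT

    With E in variables_order, the exported variable shows up in $_ENV and getenv(); PassEnv additionally exposes it per-request in $_SERVER.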

  • Postfix SMTP auth not working with virtual mailboxes + SASL + Courier userdb

    - by Greg K
    So I've read a variety of tutorials and how-tos and I'm struggling to make sense of how to get SMTP auth working with virtual mailboxes in Postfix. I used this Ubuntu tutorial to get set up. I'm using Courier IMAP and POP3 for reading mail, which seems to be working without issue. However, the credentials used to read a mailbox are not working for SMTP. I can see from /var/log/auth.log that PAM is being used; does this require a UNIX user account to work? I'm using virtual mailboxes to avoid creating user accounts.

        li305-246 saslauthd[22856]: DEBUG: auth_pam: pam_authenticate failed: Authentication failure
        li305-246 saslauthd[22856]: do_auth : auth failure: [user=fred] [service=smtp] [realm=] [mech=pam] [reason=PAM auth error]

    /var/log/mail.log:

        li305-246 postfix/smtpd[27091]: setting up TLS connection from mail-pb0-f43.google.com[209.85.160.43]
        li305-246 postfix/smtpd[27091]: Anonymous TLS connection established from mail-pb0-f43.google.com[209.85.160.43]: TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)
        li305-246 postfix/smtpd[27091]: warning: SASL authentication failure: Password verification failed
        li305-246 postfix/smtpd[27091]: warning: mail-pb0-f43.google.com[209.85.160.43]: SASL PLAIN authentication failed: authentication failure

    I've created accounts in userdb as per this tutorial. Does Postfix also use authuserdb? What debug information is needed to help diagnose my issue?

    main.cf:

        # TLS parameters
        smtpd_tls_cert_file = /etc/ssl/certs/smtpd.crt
        smtpd_tls_key_file = /etc/ssl/private/smtpd.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        # SMTP parameters
        smtpd_sasl_local_domain =
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_security_options = noanonymous
        broken_sasl_auth_clients = yes
        smtpd_recipient_restrictions = permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination
        smtp_tls_security_level = may
        smtpd_tls_security_level = may
        smtpd_tls_auth_only = no
        smtp_tls_note_starttls_offer = yes
        smtpd_tls_CAfile = /etc/ssl/certs/cacert.pem
        smtpd_tls_loglevel = 1
        smtpd_tls_received_header = yes
        smtpd_tls_session_cache_timeout = 3600s
        tls_random_source = dev:/dev/urandom

    /etc/postfix/sasl/smtpd.conf:

        pwcheck_method: saslauthd
        mech_list: plain login

    /etc/default/saslauthd:

        START=yes
        PWDIR="/var/spool/postfix/var/run/saslauthd"
        PARAMS="-m ${PWDIR}"
        PIDFILE="${PWDIR}/saslauthd.pid"
        DESC="SASL Authentication Daemon"
        NAME="saslauthd"
        MECHANISMS="pam"
        MECH_OPTIONS=""
        THREADS=5
        OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd"

    /etc/courier/authdaemonrc:

        authmodulelist="authuserdb"

    I've only modified one line in authdaemonrc and restarted the service as per this tutorial. I've added accounts to /etc/courier/userdb via userdb and userdbpw and run makeuserdb as per the tutorial.

    SOLVED: thanks to Jenny D for suggesting use of rimap to auth against the localhost IMAP server (which reads the userdb credentials). I updated /etc/default/saslauthd to start saslauthd correctly (this page was useful):

        MECHANISMS="rimap"
        MECH_OPTIONS="localhost"
        THREADS=0
        OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd -r"

    After doing this I got the following error in /var/log/auth.log:

        li305-246 saslauthd[28093]: auth_rimap: unexpected response to auth request: * BYE [ALERT] Fatal error: Account's mailbox directory is not owned by the correct uid or gid:
        li305-246 saslauthd[28093]: do_auth : auth failure: [user=fred] [service=smtp] [realm=] [mech=rimap] [reason=[ALERT] Unexpected response from remote authentication server]

    This blog post detailed a solution: set IMAP_MAILBOX_SANITY_CHECK=0 in /etc/courier/imapd, then restart the Courier and saslauthd daemons for the config changes to take effect:

        sudo /etc/init.d/courier-imap restart
        sudo /etc/init.d/courier-authdaemon restart
        sudo /etc/init.d/saslauthd restart

    Watch /var/log/auth.log while trying to send email. Hopefully you're good!
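
    For future readers: saslauthd can be exercised directly, which separates Postfix problems from authentication-backend problems. A sketch (testsaslauthd ships with the SASL utilities; the -f path must point at the mux socket inside the Postfix chroot):

        testsaslauthd -u fred -p 'the-password' -s smtp \
            -f /var/spool/postfix/var/run/saslauthd/mux

    A reply of 0: OK "Success." means the backend (PAM or rimap) is fine and any remaining failure is on the Postfix side.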

  • OpenVPN, Great on Windows, VERY slow on Mac...

    - by Phsion
    Hello, I'm not really an IT pro, but this seemed like the best place to ask this question. I have set up VPN networks in the past, for fun, and everything was great, but now I've set one up for my boss, and while my computers all work great, his Mac machines are almost too slow to work with. It's pretty much vanilla configs all around; anyone have any ideas? It's a TUN routing setup over UDP.

    Back story: my boss travels a lot, and wants to be able to access all his files from the road, and is also pretty paranoid about security (even though he knows almost nothing about computers). So I figured a VPN would be the answer. I went with OpenVPN, but there are some other issues. The only ISP we can get in our area besides dial-up is a crappy satellite provider that doesn't offer public IPs unless you're willing to pay, so while the computers and VPN setup are pretty vanilla, the routing and structure is strange, to get around this limitation.

    Specs: it's OpenVPN 2, and there are six machines using it (only three actually use it, the rest are my test machines): one Windows 7 laptop, two XP desktops, one OS X 10.5 desktop, one 10.6 desktop, and one 10.6 laptop. One XP desktop sits at my house and acts as the server (6Mbps/2Mbps FiOS connection). One XP desktop sits at the office and hosts a webpage that will wake the main Mac desktop from sleep, and also ping all the machines on the VPN and show their status. The main office Mac (10.6) stays in sleep mode until it gets the wake-on-LAN packet from the office XP, and then it auto-connects to the VPN and opens itself up. The reason for all this is that the satellite private-IP crap means I can't directly access the office machines from outside the LAN, so everyone connects to my house first, then they talk to each other from there. The wake-on-LAN weirdness is because my boss doesn't want to leave the main Mac on all the time, and making a quick and dirty webpage was the easiest way to send a magic packet from inside the LAN without confusing my boss. The VPN uses client config files to give each client a static IP. The only thing I found in Google was some changes to the VPN MTU settings (down to 1400), but no real help. Oh, and I forgot: all the Windows machines just have OpenVPN start as a service. The Mac laptop uses Tunnelblick (an OpenVPN GUI) and the Mac desktops use OpenVPN in normal command-line mode.

    Server config:

        tun-mtu 1500
        fragment 1450
        mssfix 1450
        management localhost ####
        port ####
        proto udp
        dev tun
        ca #######
        cert #######
        key ######
        dh ######
        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        client-config-dir ccd
        route 10.8.0.0 255.255.255.252
        client-to-client
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        status openvpn-status
        log

    Client configs (all are simple variations on this):

        tun-mtu 1500
        fragment 1450
        mssfix 1450
        client
        dev tun
        proto udp
        remote ######## ####
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca #####
        cert #####
        key #####
        ns-cert-type server
        comp-lzo
        verb 3
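
    Given the satellite link, an MTU mismatch is a common reason one platform crawls while another flies. A way to probe it from the slow Mac toward the server's public address, with the VPN down (a sketch; the hostname is a placeholder):

        # macOS: -D sets the don't-fragment bit; 1472 = 1500 minus 28 bytes of IP/ICMP headers
        ping -D -s 1472 server.example.com
        # shrink the payload until pings get through, then size tun-mtu/fragment/mssfix to match

    If 1472 fails but, say, 1300 works, lowering fragment and mssfix accordingly on both ends is worth a try before anything more exotic.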

  • OpenVPN Clients using server's connection (with no default gateway)

    - by Branden Martin
    I wanted an OpenVPN server so that I could create a private VPN network for staff to connect to. However, not as planned, when clients connect to the VPN they use the VPN's internet connection (e.g., when going to whatsmyip.com, the address shown is the server's, not the client's home connection).

    server.conf:

        local <serverip>
        port 1194
        proto udp
        dev tun
        ca ca.crt
        cert x.crt
        key x.key
        dh dh1024.pem
        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        client-to-client
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        status openvpn-status.log
        verb 9

    client.conf:

        client
        dev tun
        proto udp
        remote <server> 1194
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca ca.crt
        cert x.crt
        key x.key
        ns-cert-type server
        comp-lzo
        verb 3

    Server's routes:

        Kernel IP routing table
        Destination   Gateway          Genmask          Flags Metric Ref  Use Iface
        10.8.0.2      *                255.255.255.255  UH    0      0    0   tun0
        10.8.0.0      10.8.0.2         255.255.255.0    UG    0      0    0   tun0
        69.64.48.0    *                255.255.252.0    U     0      0    0   eth0
        default       static-ip-69-64  0.0.0.0          UG    0      0    0   eth0
        default       static-ip-69-64  0.0.0.0          UG    0      0    0   eth0
        default       static-ip-69-64  0.0.0.0          UG    0      0    0   eth0

    Server's iptables:

        Chain INPUT (policy ACCEPT)
        target            prot opt source    destination
        fail2ban-proftpd  tcp  --  anywhere  anywhere     multiport dports ftp,ftp-data,ftps,ftps-data
        fail2ban-ssh      tcp  --  anywhere  anywhere     multiport dports ssh
        ACCEPT            udp  --  anywhere  anywhere     udp dpt:domain
        ACCEPT            tcp  --  anywhere  anywhere     tcp dpt:20000
        ACCEPT            tcp  --  anywhere  anywhere     tcp dpt:webmin
        ACCEPT            tcp  --  anywhere  anywhere     tcp dpt:https
        ACCEPT            tcp  --  anywhere  anywhere     tcp dpt:www
        ACCEPT            tcp  --  anywhere  anywhere     tcp dpt:imaps
        ACCEPT            tcp  --  anywhere  anywhere     tcp dpt:imap2
        ACCEPT            tcp  --  anywhere  anywhere     tcp dpt:pop3s
        ACCEPT            tcp  --  anywhere  anywhere     tcp dpt:pop3
        ACCEPT            tcp  --  anywhere  anywhere     tcp dpt:ftp-data
        ACCEPT            tcp  --  anywhere  anywhere     tcp dpt:ftp
        ACCEPT            tcp  --  anywhere  anywhere     tcp dpt:domain
        ACCEPT            tcp  --  anywhere  anywhere     tcp dpt:smtp
        ACCEPT            tcp  --  anywhere  anywhere     tcp dpt:ssh
        ACCEPT            all  --  anywhere  anywhere

        Chain FORWARD (policy ACCEPT)
        target  prot opt source       destination
        ACCEPT  all  --  anywhere     anywhere     state RELATED,ESTABLISHED
        ACCEPT  all  --  10.8.0.0/24  anywhere
        REJECT  all  --  anywhere     anywhere     reject-with icmp-port-unreachable

        Chain OUTPUT (policy ACCEPT)
        target  prot opt source  destination

        Chain fail2ban-proftpd (1 references)
        target  prot opt source    destination
        RETURN  all  --  anywhere  anywhere

        Chain fail2ban-ssh (1 references)
        target  prot opt source    destination
        RETURN  all  --  anywhere  anywhere

    My goal is that clients can only talk to the server and other clients that are connected. Hope I made sense. Thanks for the help!
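
    Nothing in the server.conf shown pushes redirect-gateway, so the first thing to check is the client side for a stray redirect-gateway or route directive (some GUI clients add one). To enforce "clients may only talk to the server and each other" regardless of client settings, one approach is to refuse to forward tunnel traffic toward the uplink (a sketch, assuming eth0 is the uplink as the routing table suggests):

        # insert at the top of FORWARD, ahead of the blanket 10.8.0.0/24 accept
        iptables -I FORWARD 1 -i tun0 -o eth0 -j DROP

    With that in place, client-to-client traffic still flows inside OpenVPN, but nothing from the tunnel can be forwarded to the internet through the server.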

  • How can I connect DLNA devices through NAT?

    - by Bob
    I have a Windows 7 PC running Serviio as a DLNA server, and a Samsung I9100G running Skifta as a DLNA renderer (client). My network topology: router #1 is 192.168.1.1, router #2 is 192.168.2.1 (192.168.1.2 on its WAN side) and router #3 is 192.168.3.1 (192.168.1.3). In other words, each router has its own subnet, using NAT; their "modem" ports are connected to "LAN" ports on modem/router #1. At the moment, I can connect and watch my videos fine if the phone is on router #2. The server is on a wired network with #2.

    What I want to do is be able to connect to the DLNA server when the renderer is connected to router #1 or #3 (#1 is on the WAN side of #2, while #3 is even further separated). I'll settle for just #1 working, though. Normally, I would just forward the appropriate ports and everything would work fine. However, DLNA apparently uses UPnP, which I am unfamiliar with. I tried enabling UPnP on router #2, but that did not seem to change anything. It's a Belkin F5D7230-4 6000; there are reported issues with UPnP on the F5D7230-4 7000. UPnP is already enabled on router #1, a Billion BiPAC 7700N.

    I've also tried the built-in DLNA renderer/server/controller on my phone, Samsung AllShare. It can see the server on router #2 and browse files, but has issues playing or downloading them. It also can't see the server from the other two networks. I'm currently using Skifta's "local" mode; "remote" mode requires an account, which I don't really want to create if not necessary.

    Is it possible to do what I'm trying to do? If no, are there workarounds? If yes, how do I do it? Is my server the issue? The renderer (client)? The router(s)? My method? I can change just about anything except the routers.

  • How to reference a Domain Controller out of the Local Network?

    - by Adrian
    We have multiple servers scattered over different hosting providers. For learning, experimenting and, ultimately, production purposes, I set one of them up as a Domain Controller. That went well; most of our services are now authenticating via AD, which helps us a lot.

    What I want to do now is to simplify authentication on the multiple servers by making each of them look at the Domain Controller, so our devs can log into the servers (Remote Desktop) with the same AD credentials. I know I have to configure each server to look at the Domain Controller. But when I try to join a computer to the domain, it cannot find the Domain Controller, although the Domain Controller address is a valid, reachable internet subdomain (as in "ad.ourcompany.com"). This is the detailed error message:

        Note: This information is intended for a network administrator. If you are not your network's administrator, notify the administrator that you received this information, which has been recorded in the file C:\Windows\debug\dcdiag.txt.

        The following error occurred when DNS was queried for the service location (SRV) resource record used to locate an Active Directory Domain Controller for domain ad.ourcompany.com:

        The error was: "DNS name does not exist." (error code 0x0000232B RCODE_NAME_ERROR)

        The query was for the SRV record for _ldap._tcp.dc._msdcs.ad.ourcompany.com

        Common causes of this error include the following:

        - The DNS SRV records required to locate a AD DC for the domain are not registered in DNS. These records are registered with a DNS server automatically when a AD DC is added to a domain. They are updated by the AD DC at set intervals. This computer is configured to use DNS servers with the following IP addresses: 109.188.207.9 109.188.207.10

        - One or more of the following zones do not include delegation to its child zone: ad.ourcompany.com ourcompany.com com . (the root zone)

        For information about correcting this problem, click Help.

    What am I missing? I'm an experienced dev, but a newbie sysadmin experimenting with new stuff.

    Disclaimer: all IP addresses and domains/subdomains were changed to preserve security. If by any chance you can still see private information, please let me know so that I can change it.
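
    A quick check of what the joining machine actually sees (a sketch, run from the member server against the resolvers it is configured with):

        nslookup -type=SRV _ldap._tcp.dc._msdcs.ad.ourcompany.com 109.188.207.9

    The error text above says the machine is asking the hosting provider's DNS servers, which have no idea about the AD-internal SRV records. Joining only works if that SRV query answers, which in practice usually means pointing the member servers' primary DNS at the domain controller itself (or at a DNS server hosting the ad.ourcompany.com zone) rather than at public resolvers.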

  • Splitting an internet connection between multiple separate subnetworks

    - by pythonian4000
    Problem: I have an internet connection that I want to split between four separate networks. My requirements are:

    - I need to be able to monitor the amount of bandwidth and data being used by each network, and notify or control as necessary.
    - The four networks should only be able to connect to the internet, not to each other.
    - My parents need to be able to operate it, so it needs a simple, preferably Windows-based GUI.

    Progress so far:

    Server: I have a mini-ITX server with six Gigabit ethernet ports: one for the ethernet internet connection, one for each of the four networks, and one for remote access to the server for administration.

    Bandwidth control: I spent a long time researching solutions here. The majority of the control systems/software I found could control bandwidth usage via QoS, but could not monitor or control the amount of data being used. Eventually I found the SoftPerfect Bandwidth Manager, which has everything I need in terms of monitoring and control: per-interface quota management, usage statistics, a web interface for checking usage, and email notifications when quotas are exceeded. It is also Windows-based and has a simple GUI.

    Internet sharing: this is where I am having issues. I am currently using Windows XP Pro SP2 for the server (yes, I know this is far from ideal, but it's the only spare Windows OS I currently have). I can't use the built-in Internet Connection Sharing for several reasons: the upstream internet router has an IP of 192.168.0.1, which ICS clashes with, and I cannot change the router settings; and ICS can only share an internet connection with a single interface, but I have four. I have tried bridging the four network cards, but then the Bandwidth Manager cannot see the four individual interfaces, only the bridge. I have tried setting up Dual DHCP DNS Server (and am having issues getting DHCP offers to be received by clients), but that would still require gateway software of some sort, which I have been unable to find. My current attempt is to use OpenVPN, with a server for the internet NIC and a separate client for each of the four networks. My thought is that I could bridge the OpenVPN TAP devices to each NIC, meaning that the Bandwidth Manager would control traffic from the bridge instead of the interface. I have not made much progress here, though; I've never used OpenVPN before.

    Questions:

    - Is there a Windows software package that does everything I need? (Unlikely, I know.)
    - Is there a Windows software package that will share internet between multiple NICs without bridging?
    - Are either of my above attempts feasible?
    - Would it help to have a newer/server version of Windows?
    - Is there a non-Windows alternative that is easy to use?

  • Installing Glassfish 3.1 on Ubuntu 10.10 Server

    - by andand
    I've used the directions here to successfully install GlassFish 3.0.1 on a virtualized (VirtualBox and VMware) Ubuntu 10.10 Server instance, without any real difficulty not resolved by more closely following the directions. However, when I try applying them to GlassFish 3.1, I keep getting stuck at section 6, "Security configuration before first startup". In particular, there are some differences I noted:

    1) There are two keys in the default keystore. The 's1as' key is still there, but another named 'glassfish-instance' is also there. When I saw this, I deleted and recreated them both, along with a 'myAlias' key which I was going to use where needed.

    2) When turning security on, it seems like part of the server thinks it's on, but other parts don't. For instance:

        $ /home/glassfish/bin/asadmin set server-config.network-config.protocols.protocol.admin-listener.security-enabled=true
        server-config.network-config.protocols.protocol.admin-listener.security-enabled=true
        Command set executed successfully.

        $ /home/glassfish/bin/asadmin get server-config.network-config.protocols.protocol.admin-listener.security-enabled
        server-config.network-config.protocols.protocol.admin-listener.security-enabled=true
        Command get executed successfully.

        $ /home/glassfish/bin/asadmin --secure list-jvm-options
        It appears that server [localhost:4848] does not accept secure connections. Retry with --secure=false.
        javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
        Command list-jvm-options failed.

        $ /home/glassfish/bin/asadmin --secure=false list-jvm-options
        -XX:MaxPermSize=192m
        -client
        -Djavax.management.builder.initial=com.sun.enterprise.v3.admin.AppServerMBeanServerBuilder
        -XX:+UnlockDiagnosticVMOptions
        -Djava.endorsed.dirs=${com.sun.aas.installRoot}/modules/endorsed${path.separator}${com.sun.aas.installRoot}/lib/endorsed
        -Djava.security.policy=${com.sun.aas.instanceRoot}/config/server.policy
        -Djava.security.auth.login.config=${com.sun.aas.instanceRoot}/config/login.conf
        -Dcom.sun.enterprise.security.httpsOutboundKeyAlias=s1as
        -Xmx512m
        -Djavax.net.ssl.keyStore=${com.sun.aas.instanceRoot}/config/keystore.jks
        -Djavax.net.ssl.trustStore=${com.sun.aas.instanceRoot}/config/cacerts.jks
        -Djava.ext.dirs=${com.sun.aas.javaRoot}/lib/ext${path.separator}${com.sun.aas.javaRoot}/jre/lib/ext${path.separator}${com.sun.aas.instanceRoot}/lib/ext
        -Djdbc.drivers=org.apache.derby.jdbc.ClientDriver
        -DANTLR_USE_DIRECT_CLASS_LOADING=true
        -Dcom.sun.enterprise.config.config_environment_factory_class=com.sun.enterprise.config.serverbeans.AppserverConfigEnvironmentFactory
        -Dorg.glassfish.additionalOSGiBundlesToStart=org.apache.felix.shell,org.apache.felix.gogo.runtime,org.apache.felix.gogo.shell,org.apache.felix.gogo.command
        -Dosgi.shell.telnet.port=6666
        -Dosgi.shell.telnet.maxconn=1
        -Dosgi.shell.telnet.ip=127.0.0.1
        -Dgosh.args=--nointeractive
        -Dfelix.fileinstall.dir=${com.sun.aas.installRoot}/modules/autostart/
        -Dfelix.fileinstall.poll=5000
        -Dfelix.fileinstall.log.level=2
        -Dfelix.fileinstall.bundles.new.start=true
        -Dfelix.fileinstall.bundles.startTransient=true
        -Dfelix.fileinstall.disableConfigSave=false
        -XX:NewRatio=2
        Command list-jvm-options executed successfully.

    Also, the admin console responds only to http (not https) requests. Thoughts?
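
    Worth noting for 3.1: the supported way to get the DAS answering HTTPS (and accepting remote secure asadmin connections) moved to a dedicated command rather than flipping individual listener flags. A sketch:

        asadmin change-admin-password    # secure admin refuses to enable while the admin password is empty
        asadmin enable-secure-admin
        asadmin restart-domain

    That may explain the half-on/half-off behavior above: setting security-enabled on the admin listener by hand is no longer sufficient on its own in 3.1.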

  • Open ports broken from internal network

    - by ksvi
    Quick summary: a forwarded port works from the outside world, but from the internal network, using the external IP, the connection is refused.

    This is a simplified situation to make the explanation easier. I have a computer that is running a service on port 12345. This computer has an internal IP of 192.168.1.100 and is connected directly to a modem/router which has internal IP 192.168.1.1 and external (public, static) IP 1.2.3.4. (The router is a TP-LINK TD-W8960N.) I have set up port forwarding (virtual server) on port 12345 to go to port 12345 at 192.168.1.100.

    If I run telnet 192.168.1.100 12345 from the same computer, everything works. But running telnet 1.2.3.4 12345 says connection refused. If I do this on another computer (on the same internal network, connected to the router), the same thing happens. This would seem like the port forwarding is not working. However, if I run an online port-checking service against my external IP and the service port, it says the port is open, and I can see the remote server connecting and immediately closing the connection. And using another computer that is connected to the internet over a mobile connection, I can also use telnet 1.2.3.4 12345 and get a working connection. So the port forwarding seems to be working, but using the external IP from the internal network doesn't. I have no idea what can be causing this, since another setup very much like this (different router) works for me: I can access a service running on a server from inside the network using both the internal and the external IP.

    Note: I know I could just use the internal IP inside the network to access this service. But if I have a laptop that must be able to do this both from inside and outside, it would be annoying to constantly switch between 1.2.3.4 and 192.168.1.100 in the software configuration.

    Router output:

        > iptables -t nat -L -n
        Chain PREROUTING (policy ACCEPT)
        target  prot opt source     destination
        ACCEPT  all  --  0.0.0.0/0  224.0.0.0/3
        DNAT    tcp  --  0.0.0.0/0  0.0.0.0/0    tcp dpt:25 to:192.168.1.101
        DNAT    udp  --  0.0.0.0/0  0.0.0.0/0    udp dpt:25 to:192.168.1.101
        DNAT    tcp  --  0.0.0.0/0  0.0.0.0/0    tcp dpt:110 to:192.168.1.101
        DNAT    tcp  --  0.0.0.0/0  0.0.0.0/0    tcp dpt:12345 to:192.168.1.102
        DNAT    udp  --  0.0.0.0/0  192.168.1.1  udp dpt:53 to:217.118.96.203

        Chain POSTROUTING (policy ACCEPT)
        target      prot opt source          destination
        MASQUERADE  all  --  192.168.1.0/24  0.0.0.0/0

        Chain OUTPUT (policy ACCEPT)
        target  prot opt source  destination
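
    What's missing here is hairpin NAT (NAT reflection): the DNAT rewrites the destination for internal clients too, but the server then replies straight back to the internal client with its internal source address, and the client drops a reply that doesn't match the connection it opened. Where the firmware allows custom rules, the usual fix is to also masquerade reflected connections (a sketch, using the simplified addresses above):

        # internal clients hitting the external IP get DNATed like everyone else...
        iptables -t nat -A PREROUTING -s 192.168.1.0/24 -d 1.2.3.4 -p tcp --dport 12345 \
            -j DNAT --to-destination 192.168.1.100
        # ...and the server must see the router as the source, so replies flow back through it
        iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -d 192.168.1.100 -p tcp --dport 12345 \
            -j MASQUERADE

    Many consumer routers simply don't implement reflection; in that case split-horizon DNS (a name that resolves to 192.168.1.100 inside the LAN and 1.2.3.4 outside) is the practical workaround for the laptop scenario.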

  • How to iptables forward ppp0 to eth0

    - by HPHPHP2012
    I need your help getting routing to work properly. I have a server with eth0 (external interface) and eth1 (internal interface); eth1 is merged into the bridge br0 (172.16.1.1). I've installed pptpd and configured it successfully, so I have a ppp0 interface (192.168.91.1) and my VPN clients connect without trouble. Now I need to work out how to let the VPN clients use the internet connection (eth0). Below are my configuration files; any help is much appreciated. Thank you!

    P.S. The VPN clients are Windows XP, Windows 7, Mac OS X Lion, Ubuntu 12.04, and iOS 5.x.

    /etc/pptpd.conf:

        # local server ip address
        localip 192.168.91.1
        # remote addresses
        remoteip 192.168.91.11-254,192.168.91.10
        # translating ip addresses on this interface
        bcrelay br0

    /etc/ppp/pptpd-options:

        name pptpd
        refuse-pap
        refuse-chap
        refuse-mschap
        require-mschap-v2
        require-mppe-128
        ms-dns 8.8.8.8
        ms-dns 8.8.4.4
        nodefaultroute
        lock
        nobsdcomp
        auth
        logfile /var/log/pptpd.log

    /etc/nat-up:

        #!/bin/sh
        SERVER_IP="aaa.aaa.aaa.aaa"
        LOCAL_IP="172.16.1.1"
        # eth0 with public ip
        PUBLIC="eth0"
        # br0 is the internal bridge on the eth1 interface
        INTERNAL="br0"
        # vpn
        VPN="ppp0"
        # local
        LOCAL="lo"

        iptables -F
        iptables -X
        iptables -t nat -F
        iptables -t nat -X
        iptables -t mangle -F
        iptables -t mangle -X
        iptables -P INPUT DROP
        iptables -P FORWARD DROP
        iptables -P OUTPUT ACCEPT

        echo 1 > /proc/sys/net/ipv4/ip_forward

        iptables -A INPUT -i $LOCAL -j ACCEPT
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -m state --state NEW ! -i $PUBLIC -j ACCEPT

        #### CLEAR CONFIG ####
        #iptables -A FORWARD -i $PUBLIC -o $INTERNAL -m state --state ESTABLISHED,RELATED -j ACCEPT
        #iptables -A FORWARD -i $PUBLIC -o $INTERNAL -j ACCEPT
        #iptables -A FORWARD -i $INTERNAL -o $PUBLIC -j ACCEPT
        #iptables -t nat -A POSTROUTING -j MASQUERADE

        #### THIS PART IS NOT HANDLING IT ####
        iptables -A FORWARD -i $PUBLIC -o $VPN -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A FORWARD -i $PUBLIC -o $VPN -j ACCEPT
        iptables -A FORWARD -s 192.168.91.0/24 -o $PUBLIC -j ACCEPT
        iptables -t nat -A POSTROUTING -s 192.168.91.0/24 -o $PUBLIC -j MASQUERADE

        # VPN - PPTPD
        iptables -A INPUT -p gre -s 0/0 -j ACCEPT
        iptables -A OUTPUT -p gre -m state --state RELATED,ESTABLISHED -j ACCEPT
        iptables -A INPUT -p tcp -s 0/0 --dport 1723 -j ACCEPT

        # SSH
        iptables -A INPUT -p tcp --dport 2222 -j ACCEPT
        iptables -A OUTPUT -p tcp --sport 2222 -j ACCEPT

        # BLACKLIST
        BLOCKDB="/etc/ip.blocked"
        IPS=$(grep -Ev "^#" $BLOCKDB)
        for i in $IPS
        do
            iptables -A INPUT -s $i -j DROP
            iptables -A OUTPUT -d $i -j DROP
        done
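
    For comparison, a pattern that usually covers "PPTP clients browse via eth0" (a sketch; note ppp+ rather than ppp0, since each connected client gets its own pppN interface and matching only ppp0 covers just the first one):

        echo 1 > /proc/sys/net/ipv4/ip_forward
        # let PPP clients out, and established replies back in
        iptables -A FORWARD -i ppp+ -o eth0 -j ACCEPT
        iptables -A FORWARD -i eth0 -o ppp+ -m state --state ESTABLISHED,RELATED -j ACCEPT
        # rewrite client source addresses to the server's public IP
        iptables -t nat -A POSTROUTING -s 192.168.91.0/24 -o eth0 -j MASQUERADE

    The script above already has the outbound accept and the MASQUERADE rule, but its reply-path rules match -o $VPN with VPN="ppp0" literally; with a DROP policy on FORWARD, that leaves every client beyond the first without a return path, so the ppp0-versus-ppp+ distinction is the first thing to check.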

  • Messages stuck in SMTP queue - Exchange 2003

    - by Diav
    I need your help, people. I have a problem with messages coming into our Exchange server and ones going out through it: basically, the messages are stuck in the SMTP queue. A message will come into the server, and I can see it listed in Exchange System Manager, but if you list the properties of the message queue it says something like:

        00:10 SMTP Message queued for local delivery
        00:10 SMTP Message delivered locally to [email protected]
        00:10 SMTP Message scheduled to retry local delivery
        00:11 SMTP Message delivered locally to [email protected]
        00:11 SMTP Message scheduled to retry local delivery
        ...

    For outgoing messages the list looks like this:

        10:55 SMTP: Message Submitted to Advanced Queuing
        10:55 SMTP: Started Message Submission to Advanced Queue
        10:55 SMTP: Message Submitted to Categorizer
        10:55 SMTP: Message Categorized and Queued for Routing
        10:55 SMTP: Message Routed and Queued for Remote Delivery

    And there it ends: since then the status hasn't changed, the message sits in the queue, and I force the connection from time to time without effect. I checked the connection to the smarthost (used telnet for that) and everything seems to work correctly, so the problem is probably on the Exchange side. I am using Exchange Server 2003 running on Small Business Server 2003. I don't have any antivirus installed on the server. Remaining free space on each partition is over 3 GB; on the partition with the databases it is over 12 GB. Everything had worked without problems since 2005; the problems started in the middle of this June. Messages started going out and getting stuck almost randomly (I don't see a pattern yet: some go out, some do not, some go out after several hours). I don't know what to do or what to check next, so please, any ideas?

    Best regards, D.

    Edit: priv1.edb is 14.5 GB and priv1.stm 2.6 GB; together those files are more than 16 GB. Can that be the reason? If yes, then what? Indeed, I hadn't thought it could be connected to my problem, but several users reported recent problems with Outlook Web Access: they can log in and see the list of their mails, but they can't see the content of their emails. When they connect with Outlook 2003/2007 there is no such problem; it's only with OWA.

    Edit 2: So... it works now, and I have to admit I am not really sure what the problem was (I hope it won't come back). What I did:

    - Cleaned up some mailboxes to reduce their size.
    - Dismounted the information store.
    - Defragmented the database files (I used eseutil: c:\program files\exchsrvr\bin\eseutil /d "g:\data base\Exchsrvr\MDBDATA\priv1.edb").
    - Mounted the information store back.

    ...and before I managed to do anything else, my queue started moving. Elements that had been kept there for days started moving, and after a few minutes everything was sent, both outside and locally. But priv1.edb is still big (13,884,203,008 bytes) and so is priv1.stm (2,447,384,576 bytes), so this is probably not an issue of file size. And if not that, what was it? And if it was a file-size issue, then it will soon repeat: is there something I can do to avoid it?
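
    For anyone hitting the same wall: before and after an offline defrag it's worth checking the database state (a sketch, using the same paths as above):

        eseutil /mh "g:\data base\Exchsrvr\MDBDATA\priv1.edb"

    The header dump shows State: Clean Shutdown (safe to defragment or move) versus Dirty Shutdown. And on the size question: if this is Standard Edition Exchange 2003 without SP2, the mailbox store limit is 16 GB counted across priv1.edb plus priv1.stm combined, which these files were right at; SP2 raises the default to 18 GB (configurable up to 75 GB), so applying it, or keeping mailboxes trimmed, would be the way to stop this recurring. Treat the SP2 figures as worth verifying against the service-pack notes for your exact build.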

  • How is incoming SMTP mail being delivered despite blocked port

    - by Josh
    I set up an MX mail server, and everything works despite port 25 being blocked. I'm stumped as to why I am able to receive email with this setup, and what the consequences might be if I leave it this way. Here are the details:

    - Connections to SMTP over ports 25 and 587 both reliably connect over my local network.
    - Connections to SMTP over port 25 are blocked from external IPs (the ISP is blocking the port).
    - Connections to submission SMTP over port 587 from external IPs are reliable.
    - Emails sent from Gmail, Yahoo, and a few other addresses are all being delivered. I haven't found an email provider that fails to deliver mail to my MX.

    So, with port 25 blocked, I am assuming other MTA servers fall back to port 587; otherwise I can't imagine how the mail is received. I know port 25 shouldn't be blocked, but so far it works. Are there mail servers this will not work with? Where can I find out more about how this is working?

    Edit: more technical detail, to validate that I'm not missing something silly. Obviously, in the transcript below I've replaced my actual domain with example.com.

        # DNS MX record points to the A record.
        $ dig example.com MX +short
        1 example.com
        $ dig example.com A +short
        <Public IP address>

        # From a public server (not my ISP hosting the mail server)
        # We see port 25 is blocked, but port 587 is open
        $ telnet example.com 25
        Trying <public ip>...
        telnet: Unable to connect to remote host: Connection refused

        # Let's try openssl
        $ openssl s_client -starttls smtp -crlf -connect example.com:25
        connect: Connection refused
        connect:errno=111

        # Again from a public server, we see port 587 is open
        $ telnet example.com 587
        Trying <public ip>...
        Connected to example.com.
        Escape character is '^]'.
        220 example.com ESMTP Postfix
        ehlo example.com
        250-example.com
        250-PIPELINING
        250-SIZE 10485760
        250-VRFY
        250-ETRN
        250-STARTTLS
        250-ENHANCEDSTATUSCODES
        250-8BITMIME
        250-DSN
        250-BINARYMIME
        250 CHUNKING
        quit
        221 2.0.0 Bye
        Connection closed by foreign host.

    Here is a portion of the mail log when receiving a message from Gmail:

        postfix/postscreen[93152]: CONNECT from [209.85.128.49]:48953 to [192.168.0.10]:25
        postfix/postscreen[93152]: PASS NEW [209.85.128.49]:48953
        postfix/smtpd[93160]: connect from mail-qe0-f49.google.com[209.85.128.49]
        postfix/smtpd[93160]: 7A8C31C1AA99: client=mail-qe0-f49.google.com[209.85.128.49]

    The log shows that a connection was made to the local IP on port 25 (I'm not doing any port mapping, so it is port 25 on the public IP too). Seeing this leads me to hypothesize that the ISP block on port 25 only occurs when a connection is made from an IP address that is not known to be a mail server. Any other theories?
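
    For what it's worth, other MTAs will not fall back to 587: MX delivery is defined on port 25, and 587 is reserved for authenticated submission, so if mail arrives at all it is arriving on 25. A way to watch it happen live (a sketch; send yourself a message from Gmail while this runs on the server):

        tcpdump -n -i eth0 'tcp dst port 25 or tcp dst port 587'

    That lines up with the postscreen entry above, which already shows Google connecting to port 25; the ISP's block is evidently selective about sources rather than absolute, which matches the hypothesis at the end.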

  • Using OpenVPN, yet netflix.com blocks access

    - by user837848
    I have set up an OpenVPN server on a VPS in the USA and configured it to route all client traffic through it. Everything seems to work fine as far as the VPN connection in general goes: all IP-lookup sites show me the US server's IP address, and even hulu.com works (it won't work if you are not in the USA). But for some reason netflix.com says "Sorry, Netflix is not available in your country yet."

    So I thought that Netflix probably uses some more sophisticated way to determine your location beyond just your IP address. But I could not find a way to get it to work, until I dropped the idea of using a VPN and instead connected to the server via a simple SOCKS tunnel with ssh by running:

        ssh -D 9999 user@serverip

    All I had to do was change the key network.proxy.socks_remote_dns in Firefox from false to true to prevent DNS leaks, and set up the SOCKS proxy. Then I could finally watch netflix.com. As a result, I concluded that there is nothing in the browser (or something like the system timezone) that tells Netflix the location; it has to have something to do with the OpenVPN config.

    After that, I used tcpdump to log all the traffic on the server's network interface venet0 (OpenVZ VPS), visited netflix.com on the client while first connected to the VPN and then connected via the SOCKS tunnel, and afterwards compared both outputs. The only thing that caught my eye was that while using the SOCKS tunnel the server mainly used IPv6 to connect to Netflix, whereas it only used IPv4 when the client was connected to the OpenVPN server. But I don't get how that could make such a difference.

    So what am I missing? Is there a way to configure OpenVPN to also use IPv6 to connect to a website, although there is only an IPv4 connection between the VPS and the client?

    Here is the server.conf of the OpenVPN server (OpenVZ VPS):

        local serverip
        port 443
        proto tcp
        dev tun
        ca ./easy-rsa2/keys/ca.crt
        cert ./easy-rsa2/keys/vps1.crt
        key ./easy-rsa2/keys/vps1.key  # This file should be kept secret
        dh ./easy-rsa2/keys/dh1024.pem
        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        push "redirect-gateway def1 bypass-dhcp"
        push "dhcp-option DNS 8.8.8.8"
        push "dhcp-option DNS 8.8.4.4"
        client-to-client
        keepalive 10 120
        tls-auth ta.key 0  # This file is secret
        cipher AES-256-CBC
        comp-lzo
        max-clients 4
        user nobody
        group nogroup
        persist-key
        persist-tun
        status openvpn-status.log
        log-append openvpn.log
        verb 3

    iptables forwarding (IPv4 forwarding is enabled):

        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o venet0 -j SNAT --to-source serverip

    I have tried everything, always on a Win7 and a Debian client with only IPv4 connections, and always made sure that they use the correct DNS server (tested with ipleak.net and tcpdump/wireshark).

    client.conf:

        client
        dev tun
        proto tcp
        remote serverip 443
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca ca.crt
        cert client.crt
        key client.key
        ns-cert-type server
        tls-auth ta.key 1
        cipher AES-256-CBC
        comp-lzo
        verb 3
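
    Two quick checks on the VPS itself (a sketch; the -4/-6 flags force the address family):

        curl -4 -s -o /dev/null -w '%{http_code}\n' https://www.netflix.com/
        curl -6 -s -o /dev/null -w '%{http_code}\n' https://www.netflix.com/

    The asymmetry observed has a plain explanation: with the SSH SOCKS proxy, the TCP connections to Netflix originate on the VPS, so they can use the VPS's own IPv6 address; with OpenVPN, the VPS merely NATs the client's IPv4 packets, and NAT cannot turn IPv4 into IPv6. Getting the same effect over the VPN would require a proxy running on the VPS (or a NAT64-style translator), not an OpenVPN option.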

  • Ubuntu Server 10.04 Heavy Network Traffic causes disconnect

    - by K Vaughan
    I'm currently running a headless Ubuntu 10.04 server. Installed are the LAMP stack, Joomla, VirtualBox, phpvirtualbox, Webmin and ProFTPD. It resolves the IP address so I can access it remotely (either the Apache webserver or the FTP) using DDclient. All packages were installed using apt-get. Webmin, although discouraged on Ubuntu Server, is used mostly to administer the webserver aspect. This issue also appeared when I was using Ubuntu Server 10.10.

    After periods of heavy network traffic, whether local or remote, the connection drops. I'm talking specifically about the transfer of files via FTP, SCP or Samba (the latter of which I seldom use). There is no response to ping or ssh; I can't FTP to the server, nor can I load the website. There are times when the server has been on for a few days and everything runs fine, because I haven't accessed it much, if at all (thus not much network traffic).

    I've gone through a few hardware changes, though I don't believe they caused the issue: this was happening long before I made any changes. At first I thought it was my ISP-provided router blocking traffic because of some kind of misconfiguration (perhaps assuming it was some kind of DoS attack); I've changed routers and still found no success. I've checked syslog, dmesg and kern.log for warnings but have uncovered none. I've run memtest via the GRUB2 menu at boot, and once it turned up 4 errors; I ran it again with individual sticks of RAM in various slots and everything turned up fine. I've looked through the BIOS settings and everything looks fine. I've tried unplugging unnecessary pieces of hardware (other internal hard drives, CD drives, floppy, PCI cards, etc).

    Any help or tips on how I can even begin to troubleshoot this would be very much appreciated. Please note that I've only started playing with servers as a hobby, so my knowledge wouldn't be the most refined. I'm comfortable with the command line and have the initiative to look up anything I can't do, but unfortunately I can't seem to find any issues like this.

    Additionally: if a solution can't be found, some assistance writing a script that reboots the server automatically if, after x minutes, it gets no response to pinging somewhere like Google would be welcome (a sketch follows below). Admittedly that's not the cleanest solution, should my internet end up going down, but I can't think of what else to do.
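
    On that stopgap request, a minimal watchdog is easy to sketch (assumptions: the script lives at /usr/local/sbin/net-watchdog and root's crontab runs it every five minutes):

        #!/bin/sh
        # Reboot if a well-known address cannot be reached after several tries.
        # crontab entry: */5 * * * * /usr/local/sbin/net-watchdog
        if ! ping -c 3 -W 5 8.8.8.8 > /dev/null 2>&1; then
            logger -t net-watchdog "no ping response, rebooting"
            /sbin/shutdown -r now
        fi

    As noted, this also reboots the box whenever the internet connection itself drops; pinging the LAN gateway instead would limit reboots to genuine local network-stack lockups.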

  • Performance of ClearCase servers on VMs?

    - by Garen
    Where I work, we need to upgrade our ClearCase servers, and it's been proposed that we move them into a new (yet-to-be-deployed) VMware system. In the past I've not noticed a significant performance problem with most applications when running in VMs, but given that ClearCase "speed" (i.e. dynamic-view response times) is so latency-sensitive, I am concerned that this will not be a good idea. VMware has numerous white papers detailing performance issues tied to network traffic patterns that reinforce my hypothesis, but nothing particularly concrete for this use case that I can see. What I can find are various forum posts online, though they are somewhat dated, e.g.:

        "ClearCase clients are supported on VMWare, but not for performance issues. I would never put a production server on VM. It will work but will be slower. The more complex the slower it gets. Accessing or building from a local snapshot view will be the fastest, building in a remote VM stored dynamic view using clearmake will be painful..... VMWare is best used for test environments"
        (via http://www.cmcrossroads.com/forums?func=view&catid=31&id=44094&limit=10&start=10)

    and:

        "VMware + ClearCase = works but SLUGGISH!!!!!! (windows) (not for production environment). My company tried to mandate that all new apps or app upgrades needed to be on/moved to VMware instances. The VMware instance could not handle the demands of ClearCase. (Come to find out that I was sharing a box with a database server.) Will you know what else would be on that box besides ClearCase? Karl"
        (via http://www.cmcrossroads.com/forums?func=view&id=44094&catid=31)

    and:

        "... are still finding we can't get the performance using dynamic views to below 2.5 times that of a physical machine. Interestingly, speaking to a few people with much VMWare experience and indeed from running builds, we are finding that typically, VMWare doesn't take that much longer for most applications and about 10-20% longer has been quoted."
        (via http://www.cmcrossroads.com/forums?func=view&catid=31&id=44094&limit=10&start=10)

    Which brings me to the more direct question: does anyone have any more recent experience with ClearCase servers on VMware (if not any specific, relevant performance advice)?

  • USB mouse disconnecting and reconnecting randomly and often

    - by Marc
    Specs: Q6600, EVGA 780i SLI mobo, 4 GB RAM, Logitech MX518, Windows 7 64-bit, EVGA GTX 260, 650 W power supply (single rail).

    The problem I am having is that my mouse will disconnect and reconnect (I even hear the device sounds from Windows), and the light on the bottom of the mouse will turn off and then back on as it starts working again. It really sucks to be playing a game (it happens on the desktop as well) and have the mouse just die out for a few seconds and come back. Sometimes it will not happen for days, and other times it will do it two or more times within 15 seconds.

    I have tried two different wired mice and multiple USB ports: on the front of the computer, on the back, through a USB hub, through a card that connects to the USB headers on the motherboard and adds a few ports to the back, and I also bought a USB 2.0 PCI card, which did not help either. Nothing else reconnects like this: my USB keyboard has never once cut out the way the mouse does, and neither have any of the other devices I have connected (webcam, USB hub, various devices sometimes connected through USB cables, and the IR receiver for the Windows Media Center remote). I have disconnected all USB devices except the keyboard and mouse, and the problem still occurs.

    I guess it could be something wrong with my motherboard, but since no other devices behave similarly I'm hoping it is some kind of driver conflict. Installing Logitech's drivers has had no effect. At first it seemed that going to Device Manager and uninstalling "HID-compliant mouse" (that and "Logitech MX518" are both listed) would fix it, but it doesn't seem to work anymore, or at least not every time (it keeps reinstalling). I have googled "usb mouse disconnects and reconnects" and it seems to be fairly common, but none of those cases were resolved.

    The easy facts:

    - It happens with or without the drivers installed.
    - It has happened with multiple mice on the same computer.
    - The BIOS is the latest version (P08).
    - Motherboard drivers are the latest version.
    - Device Manager lists no problems on any USB devices.
    - It happens on every USB port, even add-on USB cards.
    - It happens when all USB devices aside from mouse and keyboard are unplugged.

    I read that maybe it is an IRQ conflict, and I tried to look into that, but I did not really know what I was looking at and didn't see anything obviously wrong. Thanks for any help, guys; it's driving me crazy!

  • Unable to connect to SVN server on VPS : Subversion Configuration Problem

    - by Pritam Barhate
    Hello everybody, I purchased a VPS account (centos-5-x86_64) from HostGator, mainly for the purpose of setting up an online Subversion server. This is the first time I am managing a VPS; my Linux skills are not so great, but I have basic knowledge of the OS and have been using it on and off as a desktop for the last few years. After going through a few tutorials online, as the first step I logged in using the SSH root account provided by HostGator and tried to run:

        yum install mod_dav_svn subversion

    As it turns out, my account has cPanel/WHM, and since it has some concept of EasyApache, the straightforward yum install mod_dav_svn subversion won't work. After that, I found out how it can be done by compiling the source and so on, but the whole procedure looked long and scary (I am just a Linux noob). So I decided I would just skip the whole Apache integration and access the server using the svn:// protocol; anyway, that's how I configure svn on our LAN. So I installed Subversion using:

        yum install subversion

    It installed fine. Then I created a folder /svn_repos/testproject and ran:

        svnadmin create /svn_repos/testproject/

    No problems. Using vi I changed the svnserve.conf and passwd files for the repository and added a user with my name. Anonymous users don't have any access; authenticated users have write access. Then I started svnserve with svnserve -d, and in the same terminal window ran:

        svn list svn://localhost/svn_repos/testproject

    It asks for authentication for the realm; I provided the root password, then the svn username and password. The command returns nothing but exits properly. "Returns nothing" is understood: I didn't import anything. But if I try to access svn remotely, using another terminal:

        svn list svn://ip.add.of.server/svn_repos/testproject
        svn: Can't connect to host 'ip.add.of.server': Operation timed out

    This is what I get. The Parallels Power Panel that I got from HostGator reports: "The firewall is not active now. To activate the firewall, choose one of the firewall operation modes." So if the firewall is not running and I can access svn using localhost, why does the operation time out when I try to access svn from a remote machine? Experienced network admins, please help. Thanks in advance. Also, please suggest a good book with detailed information on configuring dedicated servers with WHM and cPanel.
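
    Two quick checks that usually pin this down (a sketch; svnserve's default port is 3690):

        # is svnserve listening, and on which address?
        netstat -tlnp | grep 3690

        # if packet filtering is in play despite the panel's claim, open the port
        iptables -L -n | grep 3690
        iptables -A INPUT -p tcp --dport 3690 -j ACCEPT

    svnserve -d binds to all interfaces by default, so if netstat shows it bound only to 127.0.0.1 it was started with a listen-host restriction; and if it is bound correctly but remote connections still time out, the filtering may be happening on the host node outside the VPS, which is worth raising with the provider.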

  • Tying down a cloud by virtualizing everything and then locking VMs to real hardware as necessary

    - by tudor
    I'm looking for a cloud software solution that:

    - Can run on both server and desktop machines;
    - Virtualizes hardware and has the option of exposing each real machine to the cloud;
    - Allows a VM to be "locked" to a set of real hardware capabilities and stay there until moved (e.g. a user's "real" desktop);
    - Allows a VM to link to some types of devices elsewhere (e.g. USB/serial via ethernet); and
    - Is geography-aware, to control movement of VMs between real networks.

    I'm aware that this may be the holy grail of virtualization, and I've searched a lot. Some solutions appear to meet some criteria but not others; most cloud implementations appear to ignore real hardware, for example. I realise this might be solved by using three different implementations in combination:

    1. A standard cloud server farm.
    2. A bare-metal network backup utility with PXE boot.
    3. VNC and/or VDI. (VNC obviously would require the real hardware to be running.)

    This combination, however, has some serious drawbacks that I'd like to solve by treating it as one system. My explanation follows.

    I have a network of real servers and desktops in multiple locations. I've virtualized servers before using VirtualBox and that's worked quite well; I've even connected USB devices to VMs on servers. I would like to virtualize the desktops in all my offices to facilitate movement of desktops, remote access (e.g. VDI) and bare-metal backups. However, I know that there are problems with this. For example, some desktops have specific hardware (e.g. 3D graphics cards, USB devices, etc) that limits their mobility. Geographic constraints also limit movement, in that VMs can be moved easily within offices, but transferring between offices is not always preferable.

    What I would like to find is a system that can virtualize everything from bare metal easily, by maintaining an abstraction layer on each client and server machine that exposes the available hardware and runs as a cloud. Certain VMs would then be "locked" to specific hardware (so that, e.g., a VM runs only on its own desktop), which would be required in situations where speed is important (e.g. 3D graphics pass-through). In addition, abstracted low-speed devices (e.g. USB) could be piped from real hardware to a VM in the cloud. This is important, since if a VM is taken down, another VM can connect to the real hardware for minimum downtime.

  • l2tp server always 'sent [CCP ResetReq id=0x3]' when got compressed data request

    - by wilbur
    I have built an xl2tpd/ipsec server on my Ubuntu 12.04.3 box, and I managed to make an L2TP VPN connection to the xl2tpd server from my Android phone. The xl2tpd log said:

        xl2tpd[10828]: Enabling IPsec SAref processing for L2TP transport mode SAs
        xl2tpd[10828]: IPsec SAref does not work with L2TP kernel mode yet, enabling forceuserspace=yes
        xl2tpd[10828]: setsockopt recvref[22]: Protocol not available
        xl2tpd[10828]: This binary does not support kernel L2TP.
        xl2tpd[10828]: xl2tpd version xl2tpd-1.2.8 started on atime.me PID:10828
        xl2tpd[10828]: Written by Mark Spencer, Copyright (C) 1998, Adtran, Inc.
        xl2tpd[10828]: Forked by Scott Balmos and David Stipp, (C) 2001
        xl2tpd[10828]: Inherited by Jeff McAdams, (C) 2002
        xl2tpd[10828]: Forked again by Xelerance (www.xelerance.com) (C) 2006
        xl2tpd[10828]: Listening on IP address 0.0.0.0, port 1701
        xl2tpd[10828]: control_finish: Peer requested tunnel 39154 twice, ignoring second one.
        xl2tpd[10828]: Connection established to 117.136.8.59, 43149. Local: 25339, Remote: 39154 (ref=0/0). LNS session is 'default'

    However, I cannot access the web in my browser. The pppd log said:

        rcvd [Compressed data] 00 1d 82 c4 7c 04 d8 09 ...
        sent [CCP ResetReq id=0x7]

    I have googled a lot and found that this is mostly caused by an MPPE decompression error. I have disabled BSD-Compress compression with nobsdcomp in /etc/ppp/xl2tpd-options, but it did not help. I am using openswan-2.6.33 and xl2tpd-1.2.8, both built from source. My configurations:

    /etc/ipsec.conf:

        version 2.0

        config setup
            nat_traversal=yes
            virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12
            oe=off
            protostack=netkey

        conn L2TP-PSK-NAT
            rightsubnet=vhost:%priv
            also=L2TP-PSK-noNAT

        conn L2TP-PSK-noNAT
            authby=secret
            pfs=no
            auto=add
            keyingtries=3
            rekey=no
            ikelifetime=8h
            keylife=1h
            type=transport
            left=106.186.121.214
            leftprotoport=17/1701
            right=%any
            rightprotoport=17/%any

    /etc/xl2tpd/xl2tpd.conf:

        [global]
        ipsec saref = yes

        [lns default]
        local ip = 10.10.11.1
        ip range = 10.10.11.2-10.10.11.245
        refuse chap = yes
        refuse pap = yes
        require authentication = yes
        ppp debug = yes
        pppoptfile = /etc/ppp/xl2tpd-options
        length bit = yes

    /etc/ppp/xl2tpd-options:

        require-mschap-v2
        ms-dns 8.8.8.8
        ms-dns 8.8.4.4
        asyncmap 0
        auth
        crtscts
        lock
        hide-password
        modem
        name l2tpd
        proxyarp
        lcp-echo-interval 30
        lcp-echo-failure 4
        debug
        nobsdcomp

    Any suggestions? Thanks in advance.
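
    When pppd answers compressed data with CCP ResetReq like this, a missing kernel MPPE module is a common cause and is quick to rule out before touching the configs (a sketch):

        modprobe ppp_mppe
        lsmod | grep ppp_mppe
        grep -i mppe /var/log/syslog   # look for pppd complaints about MPPE availability

    If the module loads fine, the next step is usually to make the two ends stop negotiating a compressor one side cannot run: either disable CCP outright with noccp in the pppd options, or pin both peers to the same setting with require-mppe-128.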

    Read the article

  • Configuring Windows 2003 As A Router

    - by Sean M
    I am trying to configure a Windows 2003 server to act as a router, so that the two subnetworks I'm dealing with can communicate with one another without NAT. I am fairly sure that I have configured Windows 2003 incorrectly, and I'm finding it very difficult to drill down through Google results to anything helpful.

    I have a 192.168.1.0/24 network that is my "production" network (in the sense that I'm in trouble if I break it) and a 10.0.0.0/8 network that is my test network. The 192.168.1.0 network is ruled by a gateway whose routing table looks like this (my address redacted): [routing table screenshot not reproduced]

    The Windows 2003 server, "prime," is multihomed. Its network adapters are at 192.168.1.122 (as seen above), 10.0.0.1, and 10.0.0.2. I added the Routing and Remote Access role to it and enabled LAN routing; I do not have it using RIP or other routing protocols. Its current routing table is shown below. [routing table screenshot not reproduced]

    To me, it looks like all of the right routes are there for traffic to pass between the 192.168.1.0 network and the 10.0.0.0 network. However, traffic does not pass. The 10.0.0.11 and .12 clients cannot be contacted from the 192.168.1.0 network. When I use traceroute to try to reach them, the trace gets to the Windows 2003 server's 192.168.1.122 address, then produces nothing but "* * *" timeouts. When I try to traceroute to 192.168.1.1 from a 10.0.0.0-network client, I get "destination host unreachable." However, I know that the routing is working at least a little, because from the 192.168.1.0 network I can connect to the Windows server just fine by referring to it as 10.0.0.1.

    What static routes would allow me to contact 10.0.0.11 and .12 from the 192.168.1.0 network? Is it possible to tell the Windows server "since you are a DHCP/DNS server, you already know routes to the machines that get IP addresses from you, please add those to your routing table"? Will using RIP or OSPF on the Windows server actually be helpful in this situation?
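    Here is the kind of route I suspect is missing (an untested guess on my part, using the addresses above): the production side has no return path for 10.0.0.0/8, so replies never reach the test network. Something like this on the 192.168.1.1 gateway, or failing that on each production client:

      rem Send all 10.x traffic via the Windows server's production-side interface:
      route -p add 10.0.0.0 mask 255.0.0.0 192.168.1.122

      rem On the Windows 2003 server itself, confirm IP forwarding is enabled
      rem (RRAS normally sets this; shown here only as a check):
      reg query HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v IPEnableRouter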

    Read the article

  • Trying to install wordpress inside rails app with nginx and fastcgi

    - by pinouchon
    I have a Rails app (let's call it myapp) running at www.myapp.com. I want to add a WordPress blog at www.myapp.com/blog. The web server for the Rails app is thin (see the upstream block); WordPress runs with php-fastcgi. The Rails app works fine. My problem is the following: in /home/myapp/myapp/log/error.log I get:

      2013/06/24 10:19:40 [error] 26066#0: *4 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xx.138.20, server: www.myapp.com, request: "GET /blog/ HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.myapp.com"

    Here is the nginx conf file:

      upstream myapp {
          server unix:/tmp/thin_myapp.0.sock;
          server unix:/tmp/thin_myapp.1.sock;
          server unix:/tmp/thin_myapp2.sock;
      }

      server {
          listen 80;
          server_name www.myapp.com;
          client_max_body_size 20M;
          access_log /home/myapp/myapp/log/access.log;
          error_log /home/myapp/myapp/log/error.log error;
          root /home/myapp/myapp/public;
          index index.html;

          location / {
              proxy_set_header X-Real-IP $remote_addr;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              proxy_set_header Host $http_host;
              proxy_redirect off;

              # Index HTML Files
              if (-f $document_root/cache/$uri/index.html) {
                  rewrite (.*) /cache/$1/index.html break;
              }
              if (!-f $request_filename) {
                  proxy_pass http://myapp;
                  break;
              }
              # try_files /system/maintenance.html $uri $uri/index.html $uri.html @ruby;
          }

          location /blog/ {
              root /var/www/wordpress;
              fastcgi_index index.php;
              if (!-e $request_filename) {
                  rewrite ^(.*)$ /blog/index.php?q=$1 last;
              }
              include /etc/nginx/fastcgi_params;
              fastcgi_param SCRIPT_FILENAME /var/www/wordpress$fastcgi_script_name;
              fastcgi_pass localhost:9000; # port to FastCGI
          }
      }

    Any ideas why that doesn't work? How do I make sure that php-fastcgi is configured properly?

    Edit: I can test whether fastcgi is running with telnet:

      $> telnet 127.0.0.1 9000
      Trying 127.0.0.1...
      telnet: Unable to connect to remote host: Connection refused

    And it's not.
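    For reference, since both nginx and telnet get "Connection refused", I assume nothing is listening on 127.0.0.1:9000 at all, and this is roughly how I think the FastCGI process should be started (I haven't verified these paths or package names on my box):

      # Start a php-cgi pool listening where nginx expects it:
      spawn-fcgi -a 127.0.0.1 -p 9000 -u www-data -g www-data -f /usr/bin/php-cgi

      # Or, with php5-fpm, point the pool at the same address in
      # /etc/php5/fpm/pool.d/www.conf and restart the service:
      #   listen = 127.0.0.1:9000
      sudo service php5-fpm restart

    Is that the right approach, or am I missing something in the nginx config itself?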

    Read the article

  • How can I simulate blocking RTMP over port 80 on Windows?

    - by Christian Nunciato
    It seems like this should be so simple, but since this isn't my area of expertise, I'm having a hell of a time figuring out how to do it. Basically, I have a Flash app and I'm connecting to a Flash Media Server to stream some content. The URL I'm using looks like this:

      rtmp://someserver.com/some/path/mp3:somefile

    Everything works -- but that's sort of the problem. What I'm trying to do is simulate my users attempting to play back my media under more restrictive conditions than the ones I have here (i.e., none) -- namely, being stuck behind firewalls or proxy servers that block access to RTMP streams. Flash, according to Adobe, is equipped to handle proxy servers and firewalls automatically, like so (from the docs):

      When you do not specify a port number in an RTMP address, Flash will attempt to connect to port 1935. If it fails it will then try to connect to port 443; if that fails, it will try port 80. [And if that fails, it will attempt to connect via RTMPT (i.e., HTTP tunneling) on port 80.] So no coding is required to access ports 1935, 443, or port 80 if you do not specify a port in the RTMP address.

    The problem I'm having is setting up a reliable environment in which to test that this behavior actually happens. I'm on a Windows machine, so with Windows Firewall I can block certain ports and protocols (1935, 443), but I don't want to block port 80 outright, because the final fallback protocol (RTMPT) is supposed to run on port 80, and Windows Firewall only gives me enough granularity (as far as I know, anyway) to block "all outbound TCP traffic to remote port 80" -- that is, I apparently can't block "all outbound RTMP traffic to port 80" while leaving RTMPT traffic to port 80 unaffected.

    My understanding thus far is that I'll probably need to set up a proxy server to do this. Is that correct? Or is there a simpler way (on Win 7, at least) to filter out RTMP to 1935, RTMP to 443, and RTMP to 80, but still allow RTMPT to 80 (where all four hostnames are identical)?

    And if I do have to set up a proxy server, what's the simplest way to go on Windows? I've set up WinProxy, which seems a bit janky but apparently works -- but then I can't figure out how to tell Windows to force all TCP traffic (including RTMP, RTMPT and HTTP) through this proxy server so I can turn around and reject the RTMP requests.

    Any help would be hugely appreciated. This isn't my realm of expertise and I've already spent more time on it than I probably should. :)
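    For reference, here's how I've scripted the first two blocks so far -- the port-80 case is the part this approach can't solve, since the firewall filters by port, not by payload (rule names are arbitrary):

      rem Block the primary RTMP port and the 443 fallback:
      netsh advfirewall firewall add rule name="Block RTMP 1935" dir=out action=block protocol=TCP remoteport=1935
      netsh advfirewall firewall add rule name="Block RTMP 443" dir=out action=block protocol=TCP remoteport=443

      rem Port 80 is the hard case: distinguishing RTMP-on-80 from RTMPT-on-80
      rem seems to need something protocol-aware, hence the proxy question.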

    Read the article

  • ubuntu 8.04lts + rdiff-backup: Should I install from source instead of using apt repositories?

    - by egarcia
    I'm trying to use rdiff-backup to make backup copies of some folders on an Ubuntu 8.04 LTS server. I'm attempting to do the backup from another server with a more modern Ubuntu distro (9.10); I'll call this one the "client". rdiff-backup needs to be installed on both the client and the server. It is available in the apt repositories on both machines, so I installed it with sudo apt-get install rdiff-backup.

    The problem is that the version installed on the server is older than the one on the client (1.1.15 vs 1.2.8), so I get errors when I try to make them work together. I need both versions to be the same. What is the standard procedure in these cases? Should I attempt to upgrade the version on the server, or downgrade the version on the client? And how would I do that? In case it is useful, I'd like to point out that the rdiff-backup apt package has some dependencies: librsync1 and python-support.

    Attaching the errors I got in case they help:

      rdiff-backup egarcia@test::/var/rails/ohwr/backup /home/kikito/backup/files
      Warning: Local version 1.2.8 does not match remote version 1.1.15.
      Exception 'Warning Security Violation!
      Bad request for function: rpath.make_file_dict
      with arguments: ['/var/rails/ohwr/backup']
      ' raised of class '<class 'rdiff_backup.Security.Violation'>':
        File "/usr/lib/pymodules/python2.6/rdiff_backup/Main.py", line 304, in error_check_Main
          try: Main(arglist)
        File "/usr/lib/pymodules/python2.6/rdiff_backup/Main.py", line 321, in Main
          rps = map(SetConnections.cmdpair2rp, cmdpairs)
        File "/usr/lib/pymodules/python2.6/rdiff_backup/SetConnections.py", line 78, in cmdpair2rp
          return rpath.RPath(conn, filename).normalize()
        File "/usr/lib/pymodules/python2.6/rdiff_backup/rpath.py", line 884, in __init__
          else: self.setdata()
        File "/usr/lib/pymodules/python2.6/rdiff_backup/rpath.py", line 908, in setdata
          self.data = self.conn.rpath.make_file_dict(self.path)
        File "/usr/lib/pymodules/python2.6/rdiff_backup/connection.py", line 450, in __call__
          return apply(self.connection.reval, (self.name,) + args)
        File "/usr/lib/pymodules/python2.6/rdiff_backup/connection.py", line 370, in reval
          if isinstance(result, Exception): raise result

      Traceback (most recent call last):
        File "/usr/bin/rdiff-backup", line 30, in <module>
          rdiff_backup.Main.error_check_Main(sys.argv[1:])
        File "/usr/lib/pymodules/python2.6/rdiff_backup/Main.py", line 304, in error_check_Main
          try: Main(arglist)
        File "/usr/lib/pymodules/python2.6/rdiff_backup/Main.py", line 321, in Main
          rps = map(SetConnections.cmdpair2rp, cmdpairs)
        File "/usr/lib/pymodules/python2.6/rdiff_backup/SetConnections.py", line 78, in cmdpair2rp
          return rpath.RPath(conn, filename).normalize()
        File "/usr/lib/pymodules/python2.6/rdiff_backup/rpath.py", line 884, in __init__
          else: self.setdata()
        File "/usr/lib/pymodules/python2.6/rdiff_backup/rpath.py", line 908, in setdata
          self.data = self.conn.rpath.make_file_dict(self.path)
        File "/usr/lib/pymodules/python2.6/rdiff_backup/connection.py", line 450, in __call__
          return apply(self.connection.reval, (self.name,) + args)
        File "/usr/lib/pymodules/python2.6/rdiff_backup/connection.py", line 370, in reval
          if isinstance(result, Exception): raise result
      rdiff_backup.Security.Violation: Warning Security Violation!
      Bad request for function: rpath.make_file_dict
      with arguments: ['/var/rails/ohwr/backup']
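    If installing from source is the standard answer, I assume it would look roughly like this on the 8.04 server, to bring it up to the client's 1.2.8 (untested on my part; the download URL is a guess, and rdiff-backup is a Python package so setup.py should apply):

      # Build dependencies for rdiff-backup on Ubuntu 8.04:
      sudo apt-get install build-essential librsync-dev python-dev

      # Fetch, unpack and install the matching release (URL assumed):
      wget http://savannah.nongnu.org/download/rdiff-backup/rdiff-backup-1.2.8.tar.gz
      tar xzf rdiff-backup-1.2.8.tar.gz
      cd rdiff-backup-1.2.8
      sudo python setup.py install

    Is that safe to do alongside the apt-installed 1.1.15, or should I remove the package first?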

    Read the article
