Search Results

Search found 3673 results on 147 pages for 'pop3 ssl'.

Page 129 of 147

  • Unable to get prosody running on Ubuntu 10.04 (lua issues)

    - by user90374
    All this is performed on Ubuntu 10.04.4 LTS Server. I installed Lua 5.1.4 following this procedure - http://ubuntuforums.org/showthread.php?t=1874860 - and installed prosody with this command (after downloading the package):

        sudo dpkg -i prosody_0.8.2-1_i386.deb

    After installation I get the errors below. I have tried luarocks and sudo apt-get install, as suggested, to fix these, but it still keeps showing me the same errors.

        Selecting previously deselected package prosody.
        (Reading database ... 59416 files and directories currently installed.)
        Unpacking prosody (from prosody_0.8.2-1_i386.deb) ...
        Setting up prosody (0.8.2-1) ...
         * Starting Prosody XMPP Server prosody
        **************
        Prosody was unable to find luaexpat
        This package can be obtained in the following ways:
          Source: www.keplerproject.org/luaexpat/
          Debian/Ubuntu: sudo apt-get install liblua5.1-expat0
          luarocks: luarocks install luaexpat
        luaexpat is required for Prosody to run, so we will now exit.
        More help can be found on our website, at prosody.im/doc/depends
        ************
        Prosody was unable to find luasocket
        This package can be obtained in the following ways:
          Source: www.tecgraf.puc-rio.br/~diego/professional/luasocket/
          Debian/Ubuntu: sudo apt-get install liblua5.1-socket2
          luarocks: luarocks install luasocket
        luasocket is required for Prosody to run, so we will now exit.
        More help can be found on our website, at prosody.im/doc/depends
        ************
        Prosody was unable to find LuaSec
        This package can be obtained in the following ways:
          Source: www.inf.puc-rio.br/~brunoos/luasec/
          Debian/Ubuntu: prosody.im/download/start#debian_and_ubuntu
          luarocks: luarocks install luasec
        SSL/TLS support will not be available
        More help can be found on our website, at prosody.im/doc/depends
        [fail]
        invoke-rc.d: initscript prosody, action "start" failed.
        dpkg: error processing prosody (--install):
         subprocess installed post-installation script returned error exit status 1
        Processing triggers for man-db ...
        Processing triggers for ureadahead ...
        Errors were encountered while processing:
         prosody

    Thanks a lot for your patience and answers.
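
    A minimal recovery sketch, assuming the stock Ubuntu 10.04 packages named in the error output above are acceptable (LuaSec is optional - without it Prosody only loses SSL/TLS support):

        # Install the Lua libraries the init script could not find
        sudo apt-get install liblua5.1-expat0 liblua5.1-socket2

        # Re-run the interrupted post-install step so dpkg marks prosody as configured
        sudo dpkg --configure -a

        # Optional: add LuaSec via luarocks for SSL/TLS support
        sudo luarocks install luasec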

  • apache 2.4 redirect within virtualhost

    - by user129545
    I have a couple of HTTP (port 80) vhosts that I want to redirect to plain HTTP if an HTTPS request is made to them. Apparently some things have changed with Apache 2.4 (NameVirtualHost is not used like it was in the past, etc.). This is Apache 2.4 on CentOS 5.5, using a single IP for all the vhosts below; I don't have multiple IPs on this box. My /usr/local/apache2/conf/extra/httpd-vhosts.conf:

        <VirtualHost www.dom1.com:80>
            ServerName www.dom1.com
            ServerAlias dom1.com
            DocumentRoot /usr/local/apache2/htdocs/dom1/wordpress
        </VirtualHost>

        <VirtualHost webmail.dom2.com:443>
            ServerName webmail.dom2.com
            DocumentRoot /usr/local/apache2/htdocs/webmail
            SSLEngine On
            SSLCertificateFile /usr/local/apache2/webmail.crt
            SSLCertificateKeyFile /usr/local/apache2/webmail.key
        </VirtualHost>

    My /usr/local/apache2/conf/extra/httpd-ssl.conf:

        Listen 443
        SSLPassPhraseDialog builtin
        SSLSessionCache shmcb:/var/cache/mod_ssl/scache(512000)
        SSLSessionCacheTimeout 300
        Mutex default
        SSLRandomSeed startup file:/dev/urandom 512
        SSLRandomSeed connect builtin
        SSLCryptoDevice builtin

    webmail.dom2.com works fine. The problem is that I can connect to https://www.dom1.com and it serves up the content from webmail.dom2.com. I want any HTTPS requests for www.dom1.com on port 443 to simply redirect to http://www.dom1.com on port 80. Thanks
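
    A hedged sketch of one way to do that: since Apache hands any unmatched :443 request to the first :443 vhost it finds, an explicit :443 vhost for www.dom1.com stops the fall-through to webmail. Note the browser still negotiates TLS before the redirect fires, so without a certificate valid for www.dom1.com clients will get a certificate warning first (the cert/key paths below are placeholders):

        <VirtualHost *:443>
            ServerName www.dom1.com
            ServerAlias dom1.com
            SSLEngine On
            SSLCertificateFile /usr/local/apache2/dom1.crt
            SSLCertificateKeyFile /usr/local/apache2/dom1.key
            # Send everything back to plain HTTP
            Redirect permanent / http://www.dom1.com/
        </VirtualHost>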

  • Is there a way to use something similar to a capture group for apache2 server name

    - by Zipper
    I have a server that sits behind an AWS load balancer. The LB can't do an automatic redirect from HTTP to HTTPS, and the LB is doing my SSL. So I need to set up Apache on my servers to redirect any request on port 80 to https://FOOBAR, where FOOBAR is the domain that came in. I haven't been able to find a way of doing that so far. I'm an Apache newb though. What I'm trying to do is something similar to this, using regex as an example:

        <VirtualHost *:80>
            ServerName (.*)
            Redirect / https://\1
        </VirtualHost>

    If there's a better way to do this, please let me know.

    EDIT: Sorry, I should have explained why this is happening. I actually have a Tomcat server running my app on port 8080, and the LB points to that. From what I can tell so far my requests come in on HTTP (which is expected), but when my app server sends redirects (for login purposes) it tries to redirect to HTTP instead of HTTPS. I haven't had a chance to fully investigate this, but I wanted to work around it for now by pointing the LB at the Apache server and having any port 80 requests redirect to 443.

    EDIT2: The other reason I'm interested in doing this is that since the LB can't do the redirect, I need another redirect mechanism in place to tell the browser to go to https://FOOBAR
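
    ServerName doesn't take a capture group, but mod_rewrite can echo back whatever Host header arrived, which gives the same effect. A minimal sketch (standard mod_rewrite syntax; the ServerName is a placeholder):

        <VirtualHost *:80>
            ServerName catchall.example.com
            RewriteEngine On
            # %{HTTP_HOST} is whatever domain the client asked for
            RewriteRule ^(.*)$ https://%{HTTP_HOST}$1 [R=301,L]
        </VirtualHost>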

  • Exchange Online SMTP Not Working With Any Email Client

    - by emre nevayeshirazi
    I am trying to switch our company mail server to Exchange Online. I have successfully added my domain and users, and can send and receive mail through Outlook Web App. I can also send and receive if I configure my Outlook 2013 client using the Exchange protocol. However, some folks in the company are using Thunderbird and some old Outlook clients. For those, I tried to connect to Exchange via IMAP/SMTP. This is what I use:

        Incoming: IMAP / Port 993 with SSL / Host: outlook.office365.com
        Outgoing: SMTP / Port 589 with TLS / Host: smtp.office365.com

    I can receive emails, but I have not been able to send. I keep getting:

        An error occurred while sending mail. The mail server responded: 4.3.2 Service not active.
        Please verify that your email address is correct in your Mail preferences and try again.

    My username and password are correct; I am using my mail address as the username for the mailbox. I also tried sending mail via a C# application that was working with outlook.com and gmail.com SMTP settings. It also fails to send and returns the same error code. I thought Thunderbird and other old clients such as Office 2003 might not support Exchange Online, so I tried the same settings in Office 2013. It successfully connected to my mailbox when checking the configuration, but failed to send the test message and returned the same error code. The configuration for the incoming and outgoing mailbox is taken from here; it is also available on the Office 365 user page, and they are the same. What could be the reason for the error?
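
    For comparison, the commonly published Office 365 SMTP submission settings use port 587 with STARTTLS rather than 589. A hedged PHPMailer-style sketch of those settings, in the same style as the other snippets on this page (credentials are placeholders):

        $mail->IsSMTP();
        $mail->Host       = "smtp.office365.com";
        $mail->Port       = 587;           // submission port
        $mail->SMTPSecure = "tls";         // STARTTLS, not implicit SSL on 465
        $mail->SMTPAuth   = true;
        $mail->Username   = "user@yourdomain.com";  // the full mailbox address
        $mail->Password   = "password";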

  • Resources for Smartphone Security

    - by Shial
    My organization is currently working on improving our data and network security due to increasing HIPAA laws and a general need to get a better grasp on controlling our health-related information. We are a non-profit working with people with developmental disabilities, so we handle a lot of medical-related information.

    One area that has been identified as a risk is our use of smartphones, specifically (at this time) Windows Mobile 6.1 devices from T-Mobile. We do not use the VPNs on the phones, so there isn't any way they can access our databases or file servers (the username/password for the VPNs is not the domain logon). What would be exposed, however, is the particular user's email account, since you could extract the username/password and access the email either on the device or on our web email (Exchange 2003), which could contain HIPAA-protected confidential information about clients and services - and that would be an incident that would have to be reported.

    What resources or ideas would help us secure these devices? I'm not worried about data interception (we use SSL) but more about physical theft or loss of the device. Are there websites that I just have not found with guidelines and suggestions, or particular products that would help protect us? I also don't want to limit the discussion to Windows Mobile. I myself am looking at an Android 2.0 device, and there is always the eventual possibility we could get pushed to enable the VPNs. I know this is a subject that likely won't have any particular correct answer, and it is something we should all be aware of since these devices sit outside of our immediate control most of the time.

  • Apache /server-status/ gives a 404 not found

    - by kapshure
    I am trying to solve a problem where Apache stats aren't displaying correctly in Munin. I've run through quite a few checks and tests regarding the Munin setup, but I think my issue is related to Apache, and my skill set there is lacking.

    First, system info for the monitored server:

        CentOS 5.3, kernel 2.6.18-128.1.1.el5
        Apache/2.2.3

    The "server-status" directive in httpd.conf (I've cross-compared this with another system where I did a successful parallel install of Munin, correctly showing Apache stats, and the directive below is the same for both):

        ExtendedStatus On
        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>

    I ran lynx http://localhost/server-status and got HTTP/1.1 404. Taking a look at the Apache access_log:

        127.0.0.1 - - [13/Oct/2010:07:00:47 -0700] "GET /server-status HTTP/1.0" 404 11237 "-" "Lynx/2.8.5rel.1 libwww-FM/2.14 SSL-MM/1.4.1 OpenSSL/0.9.8e-fips-rhel5"

    mod_status is also loaded:

        % grep "mod_status" /etc/httpd/conf/httpd.conf
        LoadModule status_module modules/mod_status.so

    iptables is turned off as well. I did notice that the ownership of httpd.conf on this system is root.root, whereas on the system that is displaying correctly it is apache.www - not certain that this matters? It's got to be a permission issue, but I'm not certain where the permissions are messed up. Any thoughts on why the test of server-status is giving me a 404?
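
    A hedged diagnostic sketch - a 404 (rather than a 403) suggests something else is swallowing the URL before the handler sees it, such as a RewriteRule or Alias from a bundled config:

        # Confirm mod_status is actually loaded into the running binary
        apachectl -t -D DUMP_MODULES | grep status

        # Request the machine-readable variant Munin uses; a 404 here too
        # points at the Apache config rather than at Munin
        curl -s -D - http://localhost/server-status?auto -o /dev/null

        # Look for rewrite or alias rules that could shadow /server-status
        grep -rni -e rewriterule -e "^ *alias" /etc/httpd/conf /etc/httpd/conf.d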

  • Liferay - Verify each node in a cluster

    - by Schrute
    In this example I have two clustered instances of Liferay (using the bundled Tomcat) running with Cluster Link and shared documents. Let's say the name of the public community is fubar, the friendly URL used is fubar.lipsum.com, and the port listening on each server is 8080. If I go to either server1:8080 or server2:8080, I get the default page for Liferay.

    How can I test fubar.lipsum.com on each node by going to the backend server directly, so I can verify each server? If I test it, it just goes to the load balancer; I wish there were a way to pin the request to a specific backend. I can add the friendly URL to my local machine's hosts file, and this seems to kinda work, but then once something is called in the application it goes out again from the backend server and uses SSL, and then we have problems. I think I may be able to do port forwarding, but this seems like a basic thing we should be able to do, and what I've found so far in the admin docs has not helped. Using the option to print the server name in the page details isn't an option either.
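
    One way to exercise a single node without touching the hosts file is to send the virtual host name yourself while connecting straight to the backend. A minimal sketch with curl (host names taken from the question):

        # Liferay resolves the community by the Host header, so supply it manually
        curl -v -H "Host: fubar.lipsum.com" http://server1:8080/
        curl -v -H "Host: fubar.lipsum.com" http://server2:8080/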

  • setting up git on cygwin - openssl

    - by user23020
    I'm trying to get git running in Cygwin on a Windows 7 machine. I have git unpacked in the directory git-1.7.1.1. When I run make install from within that directory, I get:

        CC fast-import.o
        In file included from builtin.h:4,
                         from fast-import.c:147:
        git-compat-util.h:136:19: iconv.h: No such file or directory
        git-compat-util.h:140:25: openssl/ssl.h: No such file or directory
        git-compat-util.h:141:25: openssl/err.h: No such file or directory
        In file included from builtin.h:6,
                         from fast-import.c:147:
        cache.h:9:21: openssl/sha.h: No such file or directory
        In file included from fast-import.c:156:
        csum-file.h:10: error: parse error before "SHA_CTX"
        csum-file.h:10: warning: no semicolon at end of struct or union
        csum-file.h:15: error: 'crc32' redeclared as different kind of symbol
        /usr/include/zlib.h:1285: error: previous declaration of 'crc32' was here
        csum-file.h:15: error: 'crc32' redeclared as different kind of symbol
        /usr/include/zlib.h:1285: error: previous declaration of 'crc32' was here
        csum-file.h:17: error: parse error before '}' token
        fast-import.c: In function `store_object':
        fast-import.c:995: error: `SHA_CTX' undeclared (first use in this function)
        fast-import.c:995: error: (Each undeclared identifier is reported only once
        fast-import.c:995: error: for each function it appears in.)
        fast-import.c:995: error: parse error before "c"
        fast-import.c:1000: warning: implicit declaration of function `SHA1_Init'
        fast-import.c:1000: error: `c' undeclared (first use in this function)
        fast-import.c:1001: warning: implicit declaration of function `SHA1_Update'
        fast-import.c:1003: warning: implicit declaration of function `SHA1_Final'
        fast-import.c: At top level:
        fast-import.c:1118: error: parse error before "SHA_CTX"
        fast-import.c: In function `truncate_pack':
        fast-import.c:1120: error: `to' undeclared (first use in this function)
        fast-import.c:1126: error: dereferencing pointer to incomplete type
        fast-import.c:1127: error: dereferencing pointer to incomplete type
        fast-import.c:1128: error: dereferencing pointer to incomplete type
        fast-import.c:1128: error: `ctx' undeclared (first use in this function)
        fast-import.c: In function `stream_blob':
        fast-import.c:1140: error: `SHA_CTX' undeclared (first use in this function)
        fast-import.c:1140: error: parse error before "c"
        fast-import.c:1154: error: `pack_file_ctx' undeclared (first use in this function)
        fast-import.c:1154: error: dereferencing pointer to incomplete type
        fast-import.c:1160: error: `c' undeclared (first use in this function)
        make: *** [fast-import.o] Error 1

    I'm guessing that most of these errors are due to iconv.h and the OpenSSL headers, which apparently are missing, but I can't figure out how I'm supposed to install those (if I am), or if there is some other way to get around this.
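
    Two hedged ways forward. The missing headers normally come from Cygwin's development packages - the names below are from memory, so verify them in setup.exe: libiconv-devel for iconv.h and openssl-devel for the openssl/*.h headers. Alternatively, git's own Makefile can build without either dependency:

        # Build git using its bundled SHA-1 code and without iconv
        make NO_OPENSSL=1 NO_ICONV=1
        make install NO_OPENSSL=1 NO_ICONV=1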

  • Users database empty after Samba3 to Samba4 migration on different servers

    - by ouzmoutous
    I have to migrate a Samba 3 server to a new Samba 4 server. My problem is that the database on the Samba 3 server seems a bit empty. The secrets.tdb file is only 20K, whereas "pdbedit -L | wc -l" gives me 16970 lines. On the Samba 3 server, /var/lib/samba is 1.5M.

    After I migrated the database (following the instructions at http://dev.tranquil.it/index.php/SAMBA_-_Migration_Samba3_Samba4), "pdbedit -L" on the new server gives me only: SAMBA4$, Administrator, dns-samba4, krbtgt and nobody.

    So I tried to create a VM with a Samba 3. I added some users, did the same things I did for the migration, and now I can see the users created on the VM. It's like the users on the Samba 3 server are in a sort of cache. I already migrated the /etc/{passwd,shadow,group} files, and I can see the users with the "getent passwd" command. Any ideas why my users are present when I use pdbedit but the database is so empty?

    The global part of my smb.conf on the Samba 3 server:

        [global]
        workgroup = INTERNET
        netbios name = PDC-SMB3
        server string = %h server
        interfaces = eth0
        obey pam restrictions = Yes
        passdb backend = smbpasswd
        passwd program = /usr/bin/passwd %u
        passwd chat = *new* %n\n *Re* %n\n *pa*
        username map = /etc/samba/smbusers
        unix password sync = Yes
        syslog = 0
        log file = /var/log/samba/log.%U
        max log size = 1000
        socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192
        add user script = /usr/sbin/useradd -s /bin/false -m '%u' -g users
        delete user script = /usr/sbin/userdel -r '%u'
        add group script = /usr/sbin/groupadd '%g'
        delete group script = /usr/sbin/groupdel '%g'
        add user to group script = /usr/sbin/usermod -G '%g' '%u'
        add machine script = /usr/sbin/useradd -s /bin/false -d /dev/null '%u' -g machines
        logon script = logon.cmd
        logon home = \\$L\%U
        domain logons = Yes
        os level = 255
        preferred master = Yes
        local master = Yes
        domain master = Yes
        dns proxy = No
        ldap ssl = no
        panic action = /usr/share/samba/panic-action %d
        invalid users = root
        admin users = admin, root, administrateur
        log level = 2
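
    A hedged observation based on the smb.conf above: with "passdb backend = smbpasswd", the accounts are not in a tdb at all - they live in the flat smbpasswd file, so a tiny secrets.tdb is expected, and migration tooling that only reads a tdbsam backend would miss them. One sketch for converting the flat file into a tdbsam database first (paths are the usual Debian defaults and may differ):

        pdbedit -i smbpasswd:/etc/samba/smbpasswd -e tdbsam:/var/lib/samba/passdb.tdb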

  • Windows errors, how do I find root cause and fix it? Getting several errors

    - by Eric Martin
    My server is having issues and not responding to customers' HTTPS requests. I checked the Event Viewer and found several errors. These two are listed a couple of times:

        WINS encountered a database error. This may or may not be a serious error. WINS will try to
        recover from it. You can check the database error events under the 'Application Log' category
        of the Event Viewer for the Exchange Component, ESENT, source to find out more details about
        database errors. If you continue to see a large number of these errors consistently over time
        (a span of a few hours), you may want to restore the WINS database from a backup. The error
        number is in the second DWORD of the data section.

    And this one:

        An error occurred while using SSL configuration for socket address 0.0.0.0:444. The error
        status code is contained within the returned data.

        SQL Server is not ready to accept new client connections. Wait a few minutes before trying
        again. If you have access to the error log, look for the informational message that indicates
        that SQL Server is ready before trying to connect again. [CLIENT: xxx.xxx.xxxx.xxx]

    I also found this in the Event Viewer, but the computer has been restarted since this message and I have not seen it again:

        Configuring the page file for crash dump failed. Make sure there is a page file on the boot
        partition and that it is large enough to contain all physical memory.

    (Screenshot of my virtual memory settings omitted.)

    I'm not familiar with WINS, so I wasn't sure if that is where to start or how to resolve it. Is the WINS error causing the other problems, or should I be looking somewhere else?
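
    A hedged starting point for the SSL error specifically, on Server 2008 and later (on older systems the equivalent tool was httpcfg.exe):

        REM List the machine's HTTP.SYS SSL certificate bindings and look for
        REM the 0.0.0.0:444 entry named in the event log error
        netsh http show sslcert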

  • Samba PDC share slow with LDAP backend

    - by hmart
    The scenario: I have a SUSE SLES 11.1 SP1 machine as a Samba master PDC with an LDAP backend. One share holds the database files for a client-server application. I log XP and Windows 7 machines into the local domain (example.local); the login is a little slow but works. The client computers run an executable which opens, reads and writes the database files from the server share.

    The problem: when running Samba with the LDAP passdb backend, the client application runs VERY SLOWLY, with a maximum transfer rate of about 2500 kbit per second. If I disable LDAP, the client app's speed increases 20x, with a transfer rate of 50 Mbit/sec, running smoothly. I'm doing tests with just two users and two machines, so concurrency or LDAP size shouldn't be the problem here.

    The suspect: LDAP, and the smb.conf [global] section configuration.

    The question: what can I do? I've googled a lot, but still have no answer.

    The slow smb.conf WITH LDAP:

        [global]
        workgroup = zmartsoft.local
        passdb backend = ldapsam:ldap://127.0.0.1
        printing = cups
        printcap name = cups
        printcap cache time = 750
        cups options = raw
        map to guest = Bad User
        logon path = \\%L\profiles\.msprofile
        logon home = \\%L\%U\.9xprofile
        logon drive = P:
        usershare allow guests = Yes
        add machine script = /usr/sbin/useradd -c Machine -d /var/lib/nobody -s /bin/false %m$
        domain logons = Yes
        domain master = Yes
        local master = Yes
        netbios name = server
        os level = 65
        preferred master = Yes
        security = user
        wins support = Yes
        idmap backend = ldap:ldap://127.0.0.1
        ldap admin dn = cn=Administrator,dc=zmartsoft,dc=local
        ldap group suffix = ou=Groups
        ldap idmap suffix = ou=Idmap
        ldap machine suffix = ou=Machines
        ldap passwd sync = Yes
        ldap ssl = Off
        ldap suffix = dc=zmartsoft,dc=local
        ldap user suffix = ou=Users
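
    A hedged tuning sketch for the [global] section: with an ldapsam backend, Samba by default re-verifies users and groups through system calls on top of its own LDAP lookups, and telling it to trust the directory entries directly can remove a large number of round trips per operation (a standard Samba 3 option; whether it helps in this particular setup is untested). Caching lookups with nscd is a complementary step.

        ldapsam:trusted = yes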

  • Binding to LDAPS using PHP failing

    - by Sean
    We've finally set up our server to accept LDAP SSL connections, thanks to another question answered by a helpful member. Our problem now is that when attempting to bind to LDAP using the simple PHP script below, we constantly fail. Binding using ldap instead of ldaps works just fine with this script, so I know LDAP is enabled. The catcher is that with LDP.exe we can successfully connect and bind to LDAP on port 636 using a secure connection. The script we are failing with:

        <?php
        $ldap = ldap_connect("ldaps://localhost");
        $username = "user";
        $password = "pass";
        if ($bind = ldap_bind($ldap, $username, $password))
            echo "logged in";
        else
            echo "fail";
        echo "<br/>done";
        ?>

    We've also attempted entering the username as "user@domain" or "domain/user" with no success. It seems I'm forever having LDAP/cert questions. Our environment is Server 2008.
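
    A hedged sketch of two client-side settings that commonly unblock ldaps:// binds against Active Directory (both are standard php-ldap calls; whether they apply here is a guess):

        <?php
        $ldap = ldap_connect("ldaps://localhost");

        // AD generally refuses LDAPv2 binds, and php-ldap defaults to version 2
        ldap_set_option($ldap, LDAP_OPT_PROTOCOL_VERSION, 3);
        ldap_set_option($ldap, LDAP_OPT_REFERRALS, 0);

        $bind = ldap_bind($ldap, "user@domain", "pass");
        echo $bind ? "logged in" : "fail";
        ?>

    Separately, if the server certificate is self-signed, the client-side LDAP library has to be told to accept it - e.g. "TLS_REQCERT never" in the client's ldap.conf - since older php-ldap builds cannot set this from PHP.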

  • Router recommendation to virtualize 800 IPs

    - by delerious010
    I've recently been looking at getting some new load balancers for our environment, as we are expecting to double our client base in the next 12 months. Currently we have 400 public IPs serving 800 clusters (2 clusters per IP due to ports) on Coyote Point balancers, distributing connections to 3 web servers serving about 6 GBytes outgoing and 2 GBytes incoming per day. If we double, this would be about 800 IPs, possibly 1600 clusters, and about 6 servers per cluster (for a total of 9600 so-called "real servers", to use Barracuda's lingo).

    Due to the number of clusters, most solutions I've looked at (Coyote, Barracuda, Loadbalancer.org) seem unsure whether they'll be able to handle our planned growth, mostly due to the health checks performed on the servers - which makes total sense when you think of it. So the fine folk at loadbalancer.org recommended that we may be better off offloading the 400-800 public IPs, which we require for SSL eCommerce solutions, onto a forward-facing router. From that point on, the router could do some mangling to route EXT_IP:443 to INT_IP:INT_PORT, which would then allow us to reduce the load balancer configuration to 1 or 2 clusters, thus resolving the health check problem.

    Does this idea make sense to y'all? Or would you have other recommendations to make? Secondly, what router would you recommend for such an undertaking? I'd be looking at something that has some form of failover mechanism built in.

    On a totally unrelated note, I've got to admit that I'm extremely pleased with the responses I got from loadbalancer.org. Their responses to my inquiries were surprisingly helpful (i.e. I didn't feel as if I was talking to a sales guy trying to push something). (No, I don't work for them, and sadly nor are they sending me free gear.)

  • PDU management interface has low availability - product flaw or isolated issue

    - by DeanB
    Our colocation provider has supplied us with APC AP7932 switched 0U PDUs as part of several cabinets they provide us. We have had a lot of trouble with the network management aspect of these PDUs, which I'll describe below. We are moving to cage space in the same datacenter and plan to provide our own PDUs, so I'd like to determine which enterprise-grade PDUs have been reliable performers from a remote management perspective.

    Our colo-provided PDUs are configured to support management via an SSL web UI and via telnet. We updated the firmware on all of them to the current version as of November 2011. They respond to pings reliably, and we have no reason to suspect a network-layer issue. However, we experience frequent hangs, timeouts, disconnects, and general unavailability from the embedded management host in all of the PDUs. We occasionally have to restart the microcontroller on the PDU to recover from what appears to be an occasional hard fault. The outlets stay powered (thankfully), but the management aspect is so unreliable that it has become an ops liability - we can't be confident that we could get into the PDU to power-cycle a host if we needed to. We have 3 PDUs that all exhibit identical behavior.

    There are many manufacturers of enterprise-grade 0U switched PDUs, all with comparable features. If I looked at the datasheet for our current PDUs, they would appear to be a good fit - only with the benefit of suffering through using them do we know to avoid them. I'd like to avoid picking a PDU that looks fine on paper but has similar reliability issues. What has been others' experience with switched PDUs? Is this level of flakiness normal?

  • Getting a "403 access denied" error instead of serving file (using django, gunicorn nginx)

    - by Finglish
    I am attempting to use nginx to serve private files from Django. For the X-Accel-Redirect settings I followed this guide: http://www.chicagodjango.com/blog/permission-based-file-serving/

    Here is my site config file (/etc/nginx/sites-available/sitename):

        server {
            listen 80;
            listen 443 default_server ssl;
            server_name localhost;
            client_max_body_size 50M;

            ssl_certificate /home/user/site.crt;
            ssl_certificate_key /home/user/site.key;

            access_log /home/user/nginx/access.log;
            error_log /home/user/nginx/error.log;

            location / {
                access_log /home/user/gunicorn/access.log;
                error_log /home/user/gunicorn/error.log;
                alias /path_to/app;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Scheme $scheme;
                proxy_pass http://127.0.0.1:8000;
                proxy_connect_timeout 100s;
                proxy_send_timeout 100s;
                proxy_read_timeout 100s;
            }

            location /protected/ {
                internal;
                alias /home/user/protected;
            }
        }

    I then tried the following in my Django view to test the download:

        response = HttpResponse()
        response['Content-Type'] = "application/zip"
        response['X-Accel-Redirect'] = '/protected/test.zip'
        return response

    But instead of the file download I get:

        403 Forbidden
        nginx/1.1.19

    Please note: I have removed all the personal data from the config file, so if there are any obvious mistakes not related to my error, that is probably why. My nginx error log gives me the following:

        2012/09/18 13:44:36 [error] 23705#0: *44 directory index of "/home/user/protected/" is forbidden,
        client: 80.221.147.225, server: localhost, request: "GET /icbdazzled/tmpdir/ HTTP/1.1", host: "www.icb.fi"
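
    A hedged observation rather than a confirmed fix: when a location ending in a slash uses alias, the alias path should normally end in a slash too, because nginx substitutes the matched prefix with the alias verbatim. As written, /protected/test.zip maps to /home/user/protectedtest.zip rather than to a file inside the directory:

        location /protected/ {
            internal;
            # Trailing slash matters: /protected/test.zip should become
            # /home/user/protected/test.zip
            alias /home/user/protected/;
        }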

  • How to install smtp/email server to work with php script?

    - by jiexi
    I have this code:

        $mail->IsSMTP();
        $mail->SMTPAuth = true;
        $mail->SMTPSecure = "ssl";
        $mail->Host = "mail.craze.cc";
        $mail->Port = 465;
        $mail->Username = "username";
        $mail->Password = "pass";
        $mail->SetFrom("[email protected]", "craze.cc");
        $mail->AddReplyTo("[email protected]", "craze.cc");
        $mail->AddAddress($this->email, $this->username);
        $mail->IsHTML(false);
        $mail->Subject = "Activate Your Craze.cc Account";
        $mail->Body = $message;

    How do I configure my postfix/sendmail or whatever server to actually work and send the mail? This has been driving me insane! I've tried numerous times to configure these servers. I just want to be able to send emails via my PHP script. Can someone please link me to a guide to get this all going, or just provide help themselves? Maybe there is an alternative way I can use to send my email in the PHP script? Basically, I need help just getting the emails to send...
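
    A hedged first step, assuming a Debian-style box: install an MTA and confirm it can deliver on its own before pointing the script at it (the recipient address is a placeholder):

        # Install a basic MTA; choose "Internet Site" when prompted
        sudo apt-get install postfix

        # Hand a test message directly to the local sendmail interface
        printf "Subject: test\n\nhello from postfix\n" | sendmail -v someone@example.com

        # Watch the mail log for the delivery attempt and any rejection reason
        tail -f /var/log/mail.log

    Note that authenticated SMTPS on port 465, as in the script above, additionally requires TLS and SASL to be configured in postfix; letting the script relay through localhost on port 25 without auth is the simpler starting point.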

  • NGINX Remove index.php /index.php/something/more/ to /something/more

    - by Gaston
    I'm trying to clean URLs in nginx for an application using the DooPHP framework, turning this:

        http://example.com/index.php/something/more/

    into this:

        http://example.com/something/more/

    I want to remove the "index.php" from the URL (clean URL) if someone enters the first form, like a permanent redirect. How do I configure this in nginx? Thanks.

    [Update: actual nginx config]

        server {
            listen 80;
            server_name vip.example.com;
            rewrite ^/(.*) https://vip.example.com/$1 permanent;
        }

        server {
            listen 443;
            server_name vip.example.com;

            error_page 404 /vip.example.com/404.html;
            error_page 403 /vip.example.com/403.html;
            error_page 401 /vip.example.com/401.html;

            location /vip.example.com {
                root /sites/errors;
            }

            ssl on;
            ssl_certificate /etc/nginx/config/server.csr;
            ssl_certificate_key /etc/nginx/config/server.sky;

            if (!-e $request_filename) {
                rewrite /.* /index.php;
            }

            location / {
                auth_basic "example Team Access";
                auth_basic_user_file config/htpasswd;
                root /sites/vip.example.com;
                index index.php;
            }

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /sites/vip.example.com$fastcgi_script_name;
                include fastcgi_params;
                fastcgi_param PATH_INFO $fastcgi_script_name;
            }
        }
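
    A minimal sketch of the redirect itself (standard nginx rewrite syntax, untested against DooPHP's routing): placed in the 443 server block before the generic "if (!-e ...)" fallback, it sends any URL still carrying the index.php prefix to its clean form.

        # 301 /index.php/anything to /anything
        rewrite ^/index\.php(/.*)$ $1 permanent;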

  • Typical outbound port list for guest access?

    - by Steve
    I manage a weekly rental house that includes wireless Internet access. I've allowed all outbound ports on my router, but my ISP has disabled my Internet access twice now because guests have downloaded (or served up) copyrighted content. So I'd like to institute some port filtering to discourage p2p sharing (see the disclaimer below). But I don't want to inconvenience the 99.9% of folks who keep things above-board.

    My question is: what outbound ports are typically open for rental/hotel wireless Internet access, or where can I find such a list? TCP 80, 443, 25 and 110 at a minimum. My own email service uses 995 and 465 for SSL, some may use IMAP, and I personally use SSH and FTP, so I'll open those. Roughly, I figure I need to open access to the privileged ports and close 1024 and above. Is there a whitelist I should institute for commonly used high ports? And does it make sense to block UDP above 1024?

    Disclaimer: I realize anyone replying to this message could circumvent the port filtering and share content to their heart's content. I do not need comprehensive p2p blocking, which requires more than a port whitelist. Anyone staying at the house shoulders the responsibility for their Internet use, per the rental contract. Also, anyone savvy enough to circumvent the port filters would hopefully be savvy enough to use some sort of peer blocking, thereby preventing the ISP from taking down the service.
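
    For illustration, a hedged sketch of such a whitelist as iptables rules on a Linux gateway (the guest subnet and the exact port list are placeholders to adapt):

        # DNS for the guest network
        iptables -A FORWARD -s 192.168.1.0/24 -p udp --dport 53 -j ACCEPT
        iptables -A FORWARD -s 192.168.1.0/24 -p tcp --dport 53 -j ACCEPT

        # Web, mail, shell and file transfer
        iptables -A FORWARD -s 192.168.1.0/24 -p tcp -m multiport \
                 --dports 21,22,25,80,110,143,443,465,587,993,995 -j ACCEPT

        # Everything else from the guest network is dropped
        iptables -A FORWARD -s 192.168.1.0/24 -j DROP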

  • Cisco ASA user authentication options - OpenID, public RSA sig, others?

    - by Ryan
    My organization has a Cisco ASA 5510 which I have made act as a firewall/gateway for one of our offices. Most resources a remote user would come looking for exist inside. I've implemented the usual deal - basic inside networks with outbound NAT, one primary outside interface with some secondary public IPs in the PAT pool for public-facing services, a couple of site-to-site IPsec links to other branches, etc. - and I'm now working on VPN. I have the WebVPN (clientless SSL VPN) working, even traversing the site-to-site links. At the moment I'm leaving a legacy OpenVPN AS in place for thick-client VPN.

    What I would like to do is standardize on an authentication method for all VPN, then switch to the Cisco's IPsec thick VPN server. I'm trying to figure out what's really possible for authentication for these VPN users (thick-client and clientless). My organization uses Google Apps, and we already use dotnetopenauth to authenticate users for a couple of internal services. I'd like to be able to do the same thing for thin and thick VPN. Alternatively, a signature-based solution using RSA public keypairs (ssh-keygen type) would be useful to identify user@hardware.

    I'm trying to get away from legacy username/password auth, especially if it's internal to the Cisco (just another password set to manage and for users to forget). I know I can map against an existing LDAP server, but we have LDAP accounts created for only about 10% of the user base (mostly developers, for Linux shell access). I guess what I'm looking for is a piece of middleware which appears to the Cisco as an LDAP server but interfaces with the user's existing OpenID identity. Nothing I've seen in the Cisco suggests it can do this natively. RSA public keys would be a runner-up, and much, much better than standalone or even LDAP auth. What's really practical here?

  • Is the sysadmin/netadmin the defacto project planner at your organization?

    - by user31459
    At my company it has somehow, over the past few years, slowly become my job to come up with a project plan, milestones and timelines for the deployment of developer applications. A typical scenario: my team receives a request for a new website/db combo and a date for deployment. I send back a questionnaire for the developer to fill out on all the requirements for the site (SSL? db? growth projections? etc.). After I get back all the information, the head of development wants a well-developed document covering:

        - what servers it will live on
        - why those servers
        - the timeline for creating the resources
        - a step-by-step SOP for getting the application onto the server, with all related resources created (DNS, firewall, load balancer, etc.)

    I may just be whining, but it feels like this is something better suited to our project management staff (which we have) or to the developer. I understand that I need to give them a timeline on creating the resources, but it still feels like overkill. We already produce documentation on where everything lives and track configuration changes to equipment. How do other sysadmin folks handle this?

  • Puppet inventory service using puppetdb

    - by Oli
    I have 3 servers set up: a puppet master using Passenger (puppet-server1), a dashboard using Passenger (puppet-server2), and puppetdb (puppet-server3). I cannot get the inventory service working in the dashboard. The puppet master is able to sign certs and hand out manifests, the nodes have checked in to the dashboard OK, and puppetdb appears to be working - log files as follows:

        2012-12-13 17:53:10,899 INFO [command-proc-74] [puppetdb.command] [8490148f-865a-45c8-b5b5-2c8824d753dd] [replace facts] puppet-server3.test.net
        2012-12-13 17:53:11,041 INFO [command-proc-74] [puppetdb.command] [dfcc5168-06df-41d4-9a97-77b4cd3f4a2b] [replace catalog] puppet-server3.test.net
        2012-12-13 17:55:28,600 INFO [command-proc-74] [puppetdb.command] [b2cc0a96-0404-49f5-96ad-19c778508d3d] [replace facts] puppet-client2.test.net
        2012-12-13 17:55:28,729 INFO [command-proc-74] [puppetdb.command] [4dc4b8f3-06df-4dad-a89a-92ac80447b99] [replace catalog] puppet-client2.test.net

    The puppet master has the following configured in puppet.conf:

        [master]
        certname = puppet-server1.test.net
        storeconfigs = true
        storeconfigs_backend = puppetdb
        reports = store, http
        reporturl = http://puppet-server2.test.net/reports/upload

    The puppet master has the following configured in auth.conf:

        # access for puppet dashboard facts
        path /facts
        auth yes
        method find, search
        allow dashboard

    The puppet dashboard has this configured in /usr/share/puppet-dashboard/config/settings.yml:

        # Hostname of the inventory server.
        inventory_server: 'puppet-server3.test.net'
        # Port for the inventory server.
        inventory_port: 8081

    The inventory is on, as I see a link to the inventory in the dashboard, but I am getting this error:

        Inventory
        Could not retrieve facts from inventory service:
        SSL_connect SYSCALL returned=5 errno=0 state=SSLv3 read finished A

    Clearly an SSL error - but I have followed the documentation and have no idea how to fix this. Can anyone help please? Oli
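
    A hedged way to narrow this down: puppetdb's SSL port (8081) requires a client certificate signed by the puppet CA, so replaying the dashboard's connection by hand shows whether its certificate pair is the problem (the paths below are typical defaults and may differ on this install):

        openssl s_client -connect puppet-server3.test.net:8081 \
          -cert /usr/share/puppet-dashboard/certs/dashboard.cert.pem \
          -key /usr/share/puppet-dashboard/certs/dashboard.private_key.pem \
          -CAfile /var/lib/puppet/ssl/certs/ca.pem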

  • freebsd-update failure

    - by ctuffli
    Some time ago I upgraded my FreeBSD box to 7.1-RC2, but now I'd like to move to 7.2-RELEASE. I tried running:

        # uname -mrsi
        FreeBSD 7.1-RC2 i386 GENERIC
        # freebsd-update upgrade -r 7.2-RELEASE
        Looking up update.FreeBSD.org mirrors... 3 mirrors found.
        Fetching metadata signature for 7.1-RC2 from update4.FreeBSD.org... failed.
        Fetching metadata signature for 7.1-RC2 from update5.FreeBSD.org... failed.
        Fetching metadata signature for 7.1-RC2 from update2.FreeBSD.org... failed.
        No mirrors remaining, giving up.

    Substituting 7.1 for 7.2 gives the same error. Adding a --debug option shows the failure as being:

        fetch: http://update4.FreeBSD.org/7.1-RC2/i386/latest.ssl: Not Found

    Is there any way to still do a binary upgrade of this system, given that the 7.1-RC* directories don't exist on http://update.freebsd.org anymore? Upgrading from source is an option, but I wanted to see if there was some way to salvage this installation.
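
    If the binary route stays dead, a hedged sketch of the source route (the standard handbook sequence of that era; RELENG_7_2 is the 7.2 security branch tag for the supfile):

        # Fetch the 7.2 source tree (set tag=RELENG_7_2 in the supfile first)
        csup -h cvsup.FreeBSD.org /usr/share/examples/cvsup/standard-supfile
        cd /usr/src
        make buildworld && make buildkernel
        make installkernel
        # reboot to single-user mode, then:
        mergemaster -p
        make installworld
        mergemaster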

  • What is /etc/apache2/sites-available used for and is it necessary?

    - by Mariane
    I have 3 sites, each with a specific IP, running on Apache 2 (up-to-date Ubuntu). To put a site online, I just created a file in /etc/apache2/sites-enabled, and in this file I told Apache which directory was the root directory for the site and which IP it should correspond to. So I have:

        000-default
        001-www.lapf.eu
        002-www.felkin.info
        003-www.seidhr.fr

    in this directory. My first site, lapf, suddenly lost contact with its database after the domain name was transferred from another registrar to the registrar who is also hosting the site's data. Then I did an update, reinstalled mysql-server and mysql-common, and did I-have-forgotten-what to reinstall the locales (utf8 and such) which had vanished for some reason. This fixed my first site.

    Now I've noticed that the other 2 sites are offline. Pointing a browser at them just hangs until timeout. They used to work, and their domain names did not move; they are still registered at the same place. The files are still in /etc/apache2/sites-enabled. I noticed another directory, /etc/apache2/sites-available, with just default and default-ssl in it.

    Why are there 2 directories, sites-enabled and sites-available? Should I copy the files from "sites-enabled" into "sites-available"? Or should I put a modified version of each in "sites-available"?

    Output of "apache2ctl -S":

        VirtualHost configuration:
        92.243.20.169:80   Charlotte (/etc/apache2/sites-enabled/001-www.lapf.eu:1)
        92.243.21.141:80   xvm-21-141.ghst.net (/etc/apache2/sites-enabled/002-www.felkin.info:1)
        92.243.4.114:80    xvm-4-114.ghst.net (/etc/apache2/sites-enabled/003-www.seidhr.fr:1)
        wildcard NameVirtualHosts and default servers:
        *:80               is a NameVirtualHost
                           default server Charlotte (/etc/apache2/sites-enabled/000-default:1)
                           port 80 namevhost Charlotte (/etc/apache2/sites-enabled/000-default:1)
        Syntax OK
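
    For background (the standard Debian/Ubuntu convention, not specific to this install): sites-available holds every vhost definition, and sites-enabled holds symlinks to the ones Apache should actually load. The a2ensite/a2dissite helpers manage the links, e.g.:

        # Keep the definition in sites-available...
        sudo mv /etc/apache2/sites-enabled/001-www.lapf.eu /etc/apache2/sites-available/
        # ...and enable it, which just creates the symlink in sites-enabled
        sudo a2ensite 001-www.lapf.eu
        sudo /etc/init.d/apache2 reload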

  • OpenVPN Server - CPU is pegged out

    - by ericl42
    Hello, I am configuring OpenVPN to act as an SSL tunnel for a remote location. I have OpenVPN1 at our current location acting as a server, and OpenVPN2 at the other location acting as a client, but also acting as a DHCP server for the machines behind it, so they are basically connected to the local LAN. Everything is set up fine and I can talk from location A to location B with no problems, as if everyone were local.

    I am, however, having some performance issues. OpenVPN1's CPU is pegged at 100% the entire time I am copying or doing any type of activity through the tunnel. I expect some CPU usage going up, but nothing like this; it's really killing my performance. OpenVPN1 is running in ESX right now with 2 gigs of RAM and 4 procs with unlimited bursting capacity. I am using AES-192 encryption with a 1024-bit key.

    Any idea how I can get the CPU usage down on OpenVPN1 and my download/upload speeds up through the tunnel? Thanks.

    Edit: Turning down the logging helped boost the throughput a little bit, but I am still fairly shy of where I believe I should be, and still maxed out on the CPU. Does anyone have any ideas? I am really stuck on this. Thanks.
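
    A hedged tuning sketch for both endpoints' configs - these are standard OpenVPN directives, though how much each helps depends on where the cycles are actually going (crypto vs. logging vs. per-packet overhead):

        # A lighter data-channel cipher than AES-192
        cipher AES-128-CBC
        # Keep logging minimal
        verb 1
        # UDP mode only: skip the extra poll/select before writes
        fast-io
        # Larger socket buffers can reduce per-packet overhead
        sndbuf 393216
        rcvbuf 393216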

  • Grant HTTP access based on unix user group

    - by Sander Marechal
    Is it possible to grant network access or HTTP access based on a user's group? At my company we want to set up an internal Composer server using Satis to manage packages for the projects we write (e.g. on repository.mycompany.com), with the packages themselves in our SVN server (svn.mycompany.com). We have several webservers with many different users on them. Some users should be able to reach the Composer and SVN servers; some should not. The users that should be able to reach these servers all belong to the same group.

    How can I set up Apache on the Composer and SVN servers to only grant access to the users in that group? Alternatively, can I set up the webservers in such a way that only users from that group are able to make a connection to our Composer and SVN servers?

    The best thing we have come up with so far is using SSL client certificates: we simply place a client certificate on all the servers which can be used to access Composer and SVN, and only the right user group has read access to the certificate. A bit clunky, but it may work. But I'm looking for something better.
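
    A sketch of the server side of the client-certificate approach described above (standard mod_ssl directives; all paths and the internal CA are placeholders):

        <VirtualHost *:443>
            ServerName repository.mycompany.com
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/repository.crt
            SSLCertificateKeyFile /etc/ssl/private/repository.key

            # Only clients presenting a certificate signed by the internal CA
            # may connect; the group-readable client cert lives on the webservers
            SSLCACertificateFile  /etc/ssl/certs/internal-ca.crt
            SSLVerifyClient require
            SSLVerifyDepth  1
        </VirtualHost>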
