Search Results

Search found 14313 results on 573 pages for 'private clouds'.

Page 419/573

  • Sending emails with Thunderbird + Postfix + Zarafa does not work

    - by Sven Jung
    I installed Zarafa on my vserver and use Postfix as the MTA. The webaccess works fine: I can receive and send emails, and receiving mail with Thunderbird (IMAP over SSL/TLS) also works. The problem is sending email with Thunderbird. I set up an account in Thunderbird with an IMAP SSL/TLS connection, which works fine, and a STARTTLS SMTP connection on port 25 for the outgoing mail server. If I try to send an email with Thunderbird I get an error: 5.7.1 Relay access denied

    This is my mail.log:

        Sep 7 16:10:07 postfix/smtpd[6153]: connect from p4FE06C0A.dip.t-dialin.net[79.224.110.10]
        Sep 7 16:10:08 postfix/smtpd[6153]: NOQUEUE: reject: RCPT from p4FE06C0A.dip.t-dialin.net[79.224.110.10]: 554 5.7.1 <[email protected]>: Relay access denie$
        Sep 7 16:10:10 postfix/smtpd[6153]: disconnect from p4FE06C0A.dip.t-dialin.net[79.224.110.10]

    And this is my /etc/postfix/main.conf:

        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        virtual_mailbox_domains = firstdomain.de, seconddomain.de
        virtual_mailbox_maps = hash:/etc/postfix/virtual
        virtual_alias_maps = hash:/etc/postfix/virtual
        virtual_transport = lmtp:127.0.0.1:2003
        myhostname = mail.firstdomain.de
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination = localhost
        relayhost =
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        inet_protocols = ipv4

    I don't know what to do, because sending mail to internal and external addresses works from the webaccess. Perhaps somebody can help me?
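    The log shows the client connecting from a dynamic dial-in address that is outside mynetworks and not SASL-authenticated, so Postfix refuses to relay. A sketch of the kind of main.cf change usually involved, assuming an SMTP AUTH backend is available on the box (saslauthd, Dovecot SASL, or Zarafa's own; which one applies here is not confirmed by the question):

        # sketch: allow relaying only for local networks and authenticated clients
        smtpd_sasl_auth_enable = yes
        smtpd_recipient_restrictions =
            permit_mynetworks,
            permit_sasl_authenticated,
            reject_unauth_destination

    Thunderbird would then need to authenticate on the outgoing server as well, typically on the submission port 587 with STARTTLS rather than sending unauthenticated on port 25.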

    Read the article

  • Creating Active Directory on an EC2 box

    - by Chiggins
    So I have Active Directory set up on a Windows Server 2008 Amazon EC2 server. It's set up correctly, I think; I never got any errors with it. Just to test that I got it all set up correctly, I have a Windows 7 Professional virtual machine on my network to join to AD. I set the VM to use the Active Directory box as its DNS server. When I type in my domain to join it, I get the following error:

        DNS was successfully queried for the service location (SRV) resource record used to locate a domain controller for domain "ad.win.chigs.me":
        The query was for the SRV record for _ldap._tcp.dc._msdcs.ad.win.chigs.me
        The following domain controllers were identified by the query: ip-0af92ac4.ad.win.chigs.me
        However no domain controllers could be contacted.
        Common causes of this error include:
        - Host (A) or (AAAA) records that map the names of the domain controllers to their IP addresses are missing or contain incorrect addresses.
        - Domain controllers registered in DNS are not connected to the network or are not running.

    It seems that I can talk to Active Directory, but when I try to contact the domain controller, it gives me a private IP to connect to, at least that's what I can make of it. Here are some nslookup results:

        > win.chigs.me
        Server:   ec2-184-73-35-150.compute-1.amazonaws.com
        Address:  184.73.35.150
        Non-authoritative answer:
        Name:     ec2-184-73-35-150.compute-1.amazonaws.com
        Address:  10.249.42.196
        Aliases:  win.chigs.me

        > ad.win.chigs.me
        Server:   ec2-184-73-35-150.compute-1.amazonaws.com
        Address:  184.73.35.150
        Name:     ad.win.chigs.me
        Address:  10.249.42.196

    win.chigs.me and ad.win.chigs.me are CNAMEs pointing to my EC2 box. Any idea what I need to do so that I can join my virtual machine to the EC2 Active Directory setup I have? Thanks!
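    The SRV lookup succeeds, but the A record the domain controller registered resolves to its internal EC2 address (10.249.42.x), which is unreachable from outside Amazon's network. Purely as a diagnostic sketch, and only if the instance's security group actually exposes the AD ports to the client (rarely advisable on the open internet), the DC name can be pinned to the public IP in the client's hosts file:

        # C:\Windows\System32\drivers\etc\hosts on the Windows 7 VM (diagnostic sketch)
        184.73.35.150   ip-0af92ac4.ad.win.chigs.me
        184.73.35.150   ad.win.chigs.me

    If the join then gets further, the problem is purely the private/public address split; the usual longer-term fix is a VPN between the client network and EC2 so the private addresses become routable.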

    Read the article

  • Performance-wise .htaccess

    - by purpler
    Here's my .htaccess template; I wonder if anything could be added to increase website performance.

        # Defaults
        AddDefaultCharset UTF-8
        DefaultLanguage en-US
        ServerSignature Off
        FileETag None
        Header unset ETag
        Options -MultiViews
        #Options All -Indexes

        # Force the latest IE version or ChromeFrame
        <IfModule mod_setenvif.c>
          <IfModule mod_headers.c>
            BrowserMatch MSIE ie
            Header set X-UA-Compatible "IE=Edge,chrome=1" env=ie
          </IfModule>
        </IfModule>

        # Proxy X-UA Setup
        <IfModule mod_headers.c>
          Header append Vary User-Agent
        </IfModule>

        # Rewrites
        Options +FollowSymlinks
        RewriteEngine On
        RewriteBase /

        # Redirect to non-WWW
        RewriteCond %{HTTPS} !=on
        RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
        RewriteRule ^(.*)$ http://%1/$1 [R=301,L]

        # Redirect to WWW
        RewriteCond %{HTTP_HOST} ^domain.com
        RewriteRule (.*) http://www.domain.com/$1 [R=301,L]

        # Redirect index to root
        RewriteRule ^(.*)index\.(php|html)$ /$1 [R=301,L]

        # Caching
        ExpiresActive On
        ExpiresDefault A0
        Header set Cache-Control "public"

        # 1 Year Long Cache
        <FilesMatch "\.(flv|fla|ico|pdf|avi|mov|ppt|doc|mp3|wmv|wav|png|jpg|jpeg|gif|swf|js|css|ttf|eot|woff|svg|svgz)$">
          ExpiresDefault A31622400
        </FilesMatch>

        # Proxy Caching
        <FilesMatch "\.(css|js|png)$">
          ExpiresDefault A31622400
          Header set Cache-Control "private"
        </FilesMatch>

        # Protect against DOS attacks by limiting file upload size
        LimitRequestBody 10240000

        # Proper SVG serving
        AddType image/svg+xml svg svgz
        AddEncoding gzip svgz

        # GZip Compression
        <IfModule mod_deflate.c>
          <FilesMatch "\.(php|html|css|js|xml|txt|ttf|otf|eot|svg)$">
            SetOutputFilter DEFLATE
          </FilesMatch>
        </IfModule>

        # Error page
        ErrorDocument 404 /404.html

        # Deny access to sensitive files
        <FilesMatch "\.(htaccess|ini|log|psd)$">
          Order Allow,Deny
          Deny from all
        </FilesMatch>
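    One small robustness point (a sketch, not a measured performance win): the Expires and Header directives above are not wrapped in module checks, so the file breaks on hosts where mod_expires or mod_headers is absent; guarding them costs nothing:

        <IfModule mod_expires.c>
          ExpiresActive On
          ExpiresDefault A0
        </IfModule>
        <IfModule mod_headers.c>
          Header set Cache-Control "public"
        </IfModule>

    Beyond that, most remaining gains usually come from the server config rather than .htaccess itself: setting AllowOverride None and moving these rules into the vhost avoids the per-request .htaccess lookups.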

    Read the article

  • Cheap desktop computer in 19" rack-mountable form-factor?

    - by Alex Basson
    I'm a high school teacher at a small private school. As of this year, we have SMARTBoards in every classroom (though I've had one in the class I share for two years now). The classrooms themselves don't have computers in them, so we teachers bring our laptops to class and connect them to the boards. This has several disadvantages:

    - It takes a few minutes while we wait for the board to boot up and then orient the board to our individual laptop; we have to do this every time because different teachers have different laptops requiring different orientations. When you only have 43 minutes per class period, waiting five minutes just to get started is a real waste.

    - Carrying your laptop to class doesn't sound so bad until you consider that we're also carrying textbooks and piles of student papers, and we're carrying it all through crowded high school hallways. More than one laptop has fallen THUNK to the floor, with dire consequences.

    We feel we could eliminate the need to use our laptops with the SMARTBoards if we had a dedicated computer in each classroom hooked up to the board at all times. Each board setup is connected to a podium with a standard 19" rack in it, currently housing a power supply and DVD player. There are plenty of rack spaces available. So I'm thinking: maybe we could get some inexpensive computers in a 19" rack-mountable form factor, install them in the podiums, and connect them to the boards on a permanent basis. Any suggestions?

    Read the article

  • LDAP, Active Directory and bears, oh my!

    - by Tim Post
    What I have:

    - Workstations running Ubuntu Jaunty, mounting /home on a remote NFS server. User accounts are still created locally on each individual workstation.
    - Workstations running Windows XP / Vista
    - NFS server (as noted above)
    - Windows 2008 server

    All machines share a single private network (LAN).

    What I need to accomplish: a single, intuitive (GUI-driven) place for an office administrator to create user accounts. This should let anyone log in to their (Linux or Windows) workstation, then fire up Remote Desktop and use the same login on the Windows 2008 server, from any machine on the network.

    I have read so much on Samba, LDAP vs. AD, etc., and now I'm even more confused than I was before I began researching the problem. Ideally, Linux and Windows users should be able to get to their local files once logged into the Win2008 server. I am a programmer, not an interoperability guru, and I'm completely lost on where to even start trying to accomplish this, plus I've run out of things to Google. How would you do this? Is it even possible?
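    One common direction for this (a sketch only; details depend on distro packaging and versions, and none of it is confirmed by the question) is to make the Windows 2008 AD the single account database and join the Ubuntu workstations to it with Samba/winbind, so the administrator keeps using Active Directory Users and Computers as the GUI. The Samba side would look roughly like:

        # /etc/samba/smb.conf (sketch; realm and workgroup names are placeholders)
        [global]
            security = ads
            realm = EXAMPLE.LOCAL
            workgroup = EXAMPLE
            idmap uid = 10000-20000
            idmap gid = 10000-20000
            winbind enum users = yes
            winbind enum groups = yes
            template homedir = /home/%U
            template shell = /bin/bash

        # join the domain, then add "winbind" to passwd/group in /etc/nsswitch.conf
        sudo net ads join -U Administrator

    pam_winbind then handles logins on the Linux boxes, and the Windows XP/Vista machines simply join the domain natively.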

    Read the article

  • Connection refused after installing vsftpd on Ubuntu 8.04 with fail2ban

    - by Patrick
    I have been using an Ubuntu 8.04 server with fail2ban for a while now (12+ months) and using FTP over SSH without any problems. I have a new user that needs to put files onto the server from an IP modem. I installed vsftpd (sudo apt-get install vsftpd) and everything installed correctly. I have created an FTP user on the server following this guide. Whenever I try to connect to the server with my FTP program (FileZilla) I get an immediate response of:

        Connection attempt failed with "ECONNREFUSED - Connection refused by server".

    I have looked into fail2ban and cannot find any problems. The iptables setup is:

        Chain INPUT (policy ACCEPT)
        target        prot opt source    destination
        fail2ban-ssh  tcp  --  anywhere  anywhere     multiport dports ssh

        Chain FORWARD (policy ACCEPT)
        target        prot opt source    destination

        Chain OUTPUT (policy ACCEPT)
        target        prot opt source    destination

        Chain fail2ban-ssh (1 references)
        target        prot opt source    destination
        RETURN        all  --  anywhere  anywhere

    vsftpd config file (commented lines removed):

        listen=YES
        anonymous_enable=NO
        local_enable=YES
        write_enable=YES
        dirmessage_enable=YES
        xferlog_enable=YES
        connect_from_port_20=YES
        chown_uploads=YES
        chown_username=[username]
        secure_chroot_dir=/var/run/vsftpd
        pam_service_name=vsftpd
        rsa_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        rsa_private_key_file=/etc/ssl/private/ssl-cert-snakeoil.key

    Any ideas on what is preventing access to the server?
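    ECONNREFUSED means nothing is listening on port 21 at all (a firewall drop would normally time out rather than refuse, and the INPUT policy above is ACCEPT anyway), so the first thing to verify is that vsftpd is actually running and bound to the port. A quick check, as a sketch:

        # is anything listening on the FTP port?
        sudo netstat -tlnp | grep :21

        # (re)start the daemon and watch for startup errors
        sudo /etc/init.d/vsftpd restart
        tail -f /var/log/syslog

    On 8.04 it is also worth checking that vsftpd is not additionally configured under inetd/xinetd while listen=YES is set, since the standalone daemon cannot bind if something else already owns port 21.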

    Read the article

  • What is the IPv6 equivalent to IPv4 RFC1918 addresses?

    - by Kumba
    Having a hard time wrapping my head around IPv6 here. A lot of the lingo seems targeted at enterprise-level IPv6 deployments, discussing link-local, site-local, global unicast, scopes, etc. There is not a lot of solid information on really small networks, like home networks. I want to check my thinking and make sure I am getting the correct translations from IPv4-speak to IPv6-speak.

    The first question is: what's the equivalent of RFC 1918 for IPv6? Initial searches suggested there was no equivalent. Then I stumbled upon Unique Local Addresses (RFC 4193), and that states that all ULAs should be assigned the prefix fc00, followed by a 40-bit random number in the routing prefix. This random number is to "prevent collisions when two IPv6 networks are interconnected", again a reference to an enterprise-level function. If I have a small local LAN at home, numbered using 192.168.4.0/24, what's my equivalent in IPv6's ULA scope? Assuming I will never, ever tie that IPv6 address into the real internet (a router will NAT & firewall it), can I ignore the RFC to an extent and go with fc00::4:0/120? It also seems that any address in fc00::/7 is meant to be globally routable. Does this mean I'll need extra protections so my router does not automatically start advertising these private IPv6 addresses to the world?

    Second question: what's this link-local thing? Reading suggests a default-assigned address in the fe80::/10 range, with the last 64 bits of the address derived from the interface's MAC address. It seems to be required, too, but I'm annoyed by the constant discussion of it in relation to enterprise networks.

    Third question: what is the scope ID for? It seems to be yet another term tossed around in relation to enterprise networks, especially when interconnecting them, but there is almost no explanation at the smaller home-network level. Can I see a scope ID and CIDR notation used together, i.e. fc00::4:0/120%6, or are scope IDs only supposed to be applied to a single /128 IPv6 address?
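    For the first question, a concrete home equivalent of 192.168.4.0/24 would look like the following (purely illustrative; the 40-bit portion should be generated randomly per RFC 4193, and the digits below are made up):

        Prefix type             Example
        --------------------    ---------------------------------------------
        ULA /48 for the site    fd4b:7a3c:9d21::/48
        One /64 per LAN         fd4b:7a3c:9d21:4::/64   (the "4" echoing 192.168.4.0/24)
        A host on that LAN      fd4b:7a3c:9d21:4::10

    Locally assigned ULAs come from fd00::/8 (fc00::/7 with the L bit set), and they are explicitly not expected to be routable on the public internet; routers still should not advertise them upstream, but that is filtering hygiene rather than a routing requirement.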

    Read the article

  • dovecot login issue with plain passwords

    - by user3028
    I am having an odd problem in dovecot: the first time I try to log in via telnet, dovecot gives an error; the second time it works, both within the same telnet session. This is the telnet session, note the 'BAD Error in IMAP command received by server' and the "a OK" just after that:

        telnet 192.168.1.2 143
        * OK Waiting for authentication process to respond..
        * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE STARTTLS AUTH=PLAIN] Dovecot ready.
        a login someUserLogin supersecretpassword
        * BAD Error in IMAP command received by server.
        a login someUserLogin supersecretpassword
        a OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE SORT SORT=DISPLAY THREAD=REFERENCES THREAD=REFS MULTIAPPEND UNSELECT CHILDREN NAMESPACE UIDPLUS LIST-EXTENDED I18NLEVEL=1 CONDSTORE QRESYNC ESEARCH ESORT SEARCHRES WITHIN CONTEXT=SEARCH LIST-STATUS] Logged in

    dovecot configuration:

        >dovecot -n
        # 2.0.19: /etc/dovecot/dovecot.conf
        # OS: Linux 3.5.0-34-generic x86_64 Ubuntu 12.04.2 LTS
        auth_debug = yes
        auth_verbose = yes
        disable_plaintext_auth = no
        login_trusted_networks = 192.168.1.0/16
        mail_location = maildir:~/Maildir
        passdb { driver = pam }
        protocols = " imap"
        ssl_cert = </etc/ssl/certs/dovecot.pem
        ssl_key = </etc/ssl/private/dovecot.pem
        userdb { driver = passwd }

    This is the log file:

        Jul 3 12:27:51 linuxServer dovecot: auth: Debug: Loading modules from directory: /usr/lib/dovecot/modules/auth
        Jul 3 12:27:51 linuxServer dovecot: auth: Debug: auth client connected (pid=23499)
        Jul 3 12:28:06 linuxServer dovecot: auth: Debug: client in: AUTH#0111#011PLAIN#011service=imap#011secured#011no-penalty#011lip=192.168.1.2#011rip=192.169.1.3#011lport=143#011rport=50438#011resp=<hidden>
        Jul 3 12:28:06 linuxServer dovecot: auth-worker: Debug: Loading modules from directory: /usr/lib/dovecot/modules/auth
        Jul 3 12:28:06 linuxServer dovecot: auth-worker: Debug: pam(someUserLogin,192.169.1.3): lookup service=dovecot
        Jul 3 12:28:06 linuxServer dovecot: auth-worker: Debug: pam(someUserLogin,192.169.1.3): #1/1 style=1 msg=Password:
        Jul 3 12:28:06 linuxServer dovecot: auth: Debug: client out: OK#0111#011user=someUserLogin
        Jul 3 12:28:06 linuxServer dovecot: auth: Debug: master in: REQUEST#0111823473665#01123499#0111#0113a58da53e091957d3cd306ac4114f0b9
        Jul 3 12:28:06 linuxServer dovecot: auth: Debug: passwd(someUserLogin,192.169.1.3): lookup
        Jul 3 12:28:06 linuxServer dovecot: auth: Debug: master out: USER#0111823473665#011someUserLogin#011system_groups_user=someUserLogin#011uid=1000#011gid=1000#011home=/home/someUserLogin
        Jul 3 12:28:06 linuxServer dovecot: imap-login: Login: user=<someUserLogin>, method=PLAIN, rip=192.169.1.3, lip=192.168.1.2, mpid=23503, secured
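    Since the failure only hits the very first command of a hand-typed telnet session, it is worth reproducing with a client that sends clean CRLF line endings before digging into the auth stack; a quick way to do that (a sketch, not a confirmed diagnosis):

        # same LOGIN test, but with properly framed IMAP commands
        openssl s_client -connect 192.168.1.2:143 -starttls imap -crlf
        a login someUserLogin supersecretpassword
        a logout

    If the first LOGIN succeeds this way, the BAD response in the telnet session is an artifact of how the first line was transmitted rather than a dovecot authentication problem.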

    Read the article

  • Mac OS X Disk Encryption - Automation

    - by jfm429
    I want to set up a Mac Mini server with an external drive that is encrypted. In Finder, I can use the full-disk encryption option. However, for multiple users, this could become tricky. What I want to do is encrypt the external volume, then set things up so that when the machine boots, the disk is unlocked so that all users can access it. Of course permissions need to be maintained, but that goes without saying. What I'm thinking of doing is setting up a root-level launchd script that runs once on boot and unlocks the disk. The encryption keys would probably be stored in root's keychain. So here's my list of concerns:

    - If I store the encryption keys in the system keychain, then the file in /private/var/db/SystemKey could be used to unlock the keychain if an attacker ever gained physical access to the server. This is bad.
    - If I store the encryption keys in my user keychain, I have to manually run the command with my password. This is undesirable.
    - If I run a launchd script with my user credentials, it will run under my user account but won't have access to the keychain, defeating the purpose.
    - If root has a keychain (does it?), then how would it be decrypted? Would it remain locked until the password was entered (like the user keychain), or would it have the same problem as the system keychain, with keys stored on the drive and accessible with physical access?

    Assuming all of the above works, I've found diskutil coreStorage unlockVolume, which seems to be the appropriate command, but where to store the encryption key is the biggest problem. If the system keychain is not secure enough, and user keychains require a password, what's the best option?
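    For reference, the unlock step itself is simple once the key-storage question is settled; a sketch of the script a boot-time LaunchDaemon (RunAtLoad) could run, assuming the passphrase comes from whatever store is chosen (the UUID and keychain item name below are placeholders):

        #!/bin/sh
        # /usr/local/sbin/unlock-external.sh (sketch)
        # read the passphrase from a keychain item, then unlock the CoreStorage volume
        PASSPHRASE="$(security find-generic-password -s external-drive -w)"
        diskutil coreStorage unlockVolume 11111111-2222-3333-4444-555555555555 -passphrase "$PASSPHRASE"

    The caveat the question already identifies still applies: anything root can read unattended at boot (System keychain included) is readable by an attacker holding the physical disks, so this only adds protection if the boot volume itself is also protected.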

    Read the article

  • HTTP Error: 413 Request Entity Too Large

    - by Torben Gundtofte-Bruun
    What I have: I have an iPhone app that sends HTTP POST requests (XML format) to a web service written in PHP. This is on a hosted virtual private server, so I can edit httpd.conf and other files on the server, and restart Apache.

    The problem: the web service works perfectly as long as the request is not too large, but around 1MB is the limit. After that, the server responds with:

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>413 Request Entity Too Large</title>
        </head><body>
        <h1>Request Entity Too Large</h1>
        The requested resource<br />/<br />
        does not allow request data with POST requests, or the amount of data provided in the request exceeds the capacity limit.
        </body></html>

    The web service writes its own log file, and I can see that small messages are processed fine. Larger messages are not logged at all, so I guess that something in Apache rejects them before they even reach the web service?

    Things I've tried without success (I've restarted Apache after every change; these steps are incremental):

    1. Hosting provider's web-based configuration panel: disable mod_security
    2. httpd.conf: LimitXMLRequestBody 0 and LimitRequestBody 0
    3. httpd.conf: LimitXMLRequestBody 100000000 and LimitRequestBody 100000000
    4. httpd.conf: SecRequestBodyLimit 100000000

    At this stage, Apache's error.log contains a message:

        ModSecurity: Request body no files data length is larger than the configured limit (1048576)

    It looks like my step #4 didn't really take, which is consistent with step #1 but does not explain why mod_security appears to be active after all. What more can I try, to get the web service to receive large messages?
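    The error.log text quoted above corresponds to ModSecurity's "no files" body limit, which is a separate directive from SecRequestBodyLimit and defaults to exactly 1048576 bytes. A sketch of the pair of settings usually involved (values illustrative, and only effective if placed in a config scope the host's mod_security actually reads):

        <IfModule mod_security2.c>
            SecRequestBodyLimit        100000000
            SecRequestBodyNoFilesLimit 100000000
        </IfModule>

    If the provider's panel injects its own ModSecurity include after httpd.conf, the panel's values win, which would also explain why steps #1 and #4 appeared to be ignored.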

    Read the article

  • Postfix: How do I Make Email Aliases Work?

    - by Nick
    The documentation claims that I can add aliases in a file (like /etc/postfix/virtusertable) and then use the "virtual_maps" directive to point to it. This does not appear to be working, however. My mail is bouncing with:

        Recipient address rejected: User unknown in local recipient table;

    If I mail the user from the server using the mail command, it works:

        mail myuser

    The message goes through Postfix and lands in the Cyrus inbox correctly. When I use fetchmail to get the user's messages off a POP3 server, Postfix fails. The user's email is "[email protected]", but it doesn't seem to be mapping correctly to "myuser", the Cyrus mailbox name.

    /etc/postfix/main.cf

        myhostname = localhost
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination = localhost
        relayhost =
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        mailbox_transport = lmtp:unix:/var/run/cyrus/socket/lmtp
        #lmtp:unix:/var/run/lmtp
        virtual_alias_domains = mydomain.com
        virtual_maps = hash:/etc/postfix/virtusertable

    /etc/fetchmailrc

        et syslog;
        set daemon 20;
        poll "mail.pop3server.com" with protocol pop3
            user "[email protected]" password "12345" is "myuser" fetchall keep

    /etc/postfix/virtusertable

        [email protected] myuser

    postconf -n

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        append_dot_mydomain = no
        biff = no
        config_directory = /etc/postfix
        inet_interfaces = all
        mailbox_size_limit = 0
        mailbox_transport = lmtp:unix:/var/run/cyrus/socket/lmtp
        mydestination = localhost
        myhostname = localhost
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        myorigin = /etc/mailname
        readme_directory = no
        recipient_delimiter = +
        relayhost =
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes
        virtual_alias_domains = mydomain.com

    Why is it ignoring my alias?
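    Two mechanical things worth ruling out first (a sketch; neither is confirmed as the cause here): the hash map has to be rebuilt with postmap after every edit, and notice that virtual_maps does not appear in the postconf -n output above, which suggests the setting is not actually being picked up by the running configuration. On Postfix 2.x the preferred parameter name is virtual_alias_maps (virtual_maps is the legacy spelling):

        # rebuild the lookup table and reload Postfix after editing it
        postmap /etc/postfix/virtusertable
        postfix reload

        # main.cf (sketch)
        virtual_alias_domains = mydomain.com
        virtual_alias_maps = hash:/etc/postfix/virtusertable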

    Read the article

  • Choosing the right e-mail client

    - by CFP
    Hi all, I'm currently using Outlook 2007 (under Windows 7), but I much prefer free software (open source being the best, of course), so I thought I'd ask for expert advice here. I thought it might be easier if I included a small "wanted list":

    - I receive about 15 to 30 e-mails every day, but I have large archives (10,000 emails), which I frequently need to access.
    - I usually open and close my mail program many times, so I'd like it to start pretty fast.
    - I cannot use an online mailbox, because I have too many email addresses (about 5: 1 for work, 1 for home, 1 semi-private, 1 for specific emails, and 1 for newsletters).

    By order of importance, the things I'd like my mail client to be able to do:

    1. Efficiently categorize e-mails. Until now, I've mostly been using Outlook folders, because filtering by tags was not easy, but I'd rather have one large list of mails, neatly tagged so I can easily filter. I'd love being able to select mails by tags (e.g. in a click or two (could be a tab) show all mails tagged with "software").
    2. Create "tagging rules", such as "if the mail was sent to this address, add this tag", or "if the body contains ..., add that tag".
    3. Sync contacts with Gmail, handle tasks (syncing with Toodledo would be awesome), possibly provide a calendar.
    4. Create e-mail templates, signatures...

    Other ideas: a timeline, scripting support, being able to import MS Outlook emails, provide a nice backup format... Thanks for sharing ideas and suggestions!

    Read the article

  • How to set up port forwarding on a dedicated server running CentOS 5.4 to use Ubuntu 9.04

    - by mairtinh
    The basic situation is that I have a dedicated server running CentOS 5.4. At the moment I have one VM running Ubuntu 9.04. Later on, I will want to add another VM running Windows Server 2003, but at the moment I am focusing on getting Ubuntu up and running. The Ubuntu installation is working fine, but I'm seriously struggling to get port forwarding working so that I can access websites to be hosted on the Ubuntu VM. As a newbie to Linux, I am confused about the relationship between iptables and VMware's own port forwarding. Here's what I've tried so far.

    The IP of my server is xxx.xxx.xxx.xxx, and the provider support have told me that the subnet mask is 255.255.255.0, the gateway address is xxx.xxx.xxx.1 and the network address is xxx.xxx.xxx.0. (Those latter two surprise me a bit; I expected private gateway/network addresses rather than public ones.)

    First of all I tried bridged networking, but had no success at all in communicating with the machine other than through the VMware console. I tried pinging it from the host (using ssh into the host) but no joy; also no Internet access from the VM. I changed the interfaces configuration from DHCP to static, using a static address of 192.168.1.100 and setting the gateway to xxx.xxx.xxx.1 as advised by the provider. No real difference: still cannot ping the guest from the host or vice versa, and no Internet access from the guest.

    Then I tried NAT. The host automatically set the IP address to 192.168.132.128 with a gateway of 192.168.132.2. Now the guest has Internet access out, and when I VNC to the host and open Firefox with 192.168.132.128 I can see the hosted website okay, but I still cannot get to it from outside. I mentioned that I'm a bit confused about iptables and VMware port forwarding; what I meant is that I'm not sure whether the iptables forwarding should target the IP address of the guest interface (192.168.132.128 in this case) or the gateway address 192.168.132.2. I have a feeling that I'm missing something very simple here, can anybody tell me what it is?
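    With VMware NAT, the forwarding target is the guest's own address (192.168.132.128), not the NAT gateway; the host DNATs traffic arriving on its public IP to the guest. A sketch of the iptables side on the CentOS host, using port 80 as an example (the public IP placeholder follows the question's masking):

        # allow the kernel to forward packets
        echo 1 > /proc/sys/net/ipv4/ip_forward

        # forward port 80 on the public address to the Ubuntu guest
        iptables -t nat -A PREROUTING -d xxx.xxx.xxx.xxx -p tcp --dport 80 \
                 -j DNAT --to-destination 192.168.132.128:80
        iptables -A FORWARD -p tcp -d 192.168.132.128 --dport 80 -j ACCEPT

    Alternatively, VMware's own NAT port-forwarding settings can do the same job, in which case the iptables DNAT is unnecessary; either way, the rule points at the guest's 192.168.132.128 address.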

    Read the article

  • Client certificate based encryption

    - by Timo Willemsen
    I have a question about the security of a file on a webserver. I have a file on my webserver which is used by my web application. It's a Bitcoin wallet: essentially a file with a private key in it, used to decrypt messages. My web application uses the file because it's needed to receive transactions made through the Bitcoin network. I was looking into ways to secure it. Obviously if someone has root access to the server, he can do the same as my application. However, I need to find a way to encrypt it. I was thinking of something like this, but I have no clue if this is actually going to work:

    - Client logs in with some sort of client certificate.
    - Web application creates a wallet file.
    - Web application encrypts the file with the client certificate.
    - If the application wants to access the file, it has to use the client certificate.

    So basically, if someone gets root access to the site, they cannot access the wallet. Is this possible, and does anyone know of an implementation of this? Are there any problems with this? And how safe would this be?
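    Mechanically, encrypting a file to a client's certificate and decrypting it only when the matching private key is presented is straightforward; a sketch using the OpenSSL S/MIME tooling (file names are placeholders):

        # encrypt the wallet so only the holder of client.key can open it
        openssl smime -encrypt -aes256 -binary -in wallet.dat -outform DER -out wallet.dat.enc client.crt

        # decrypt while handling a transaction (requires the client's private key material)
        openssl smime -decrypt -binary -inform DER -in wallet.dat.enc -inkey client.key -out wallet.dat

    The catch is the one the question already hints at: whenever the running application can decrypt the wallet, so can root on the same box, so this scheme mostly protects the file at rest rather than against a compromised live server.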

    Read the article

  • Is it worth hiring a hacker to perform some penetration testing on my servers ?

    - by Brann
    I'm working in a small IT company with paranoid clients, so security has always been an important consideration for us. In the past, we've already commissioned two penetration tests from independent companies specialized in this area (Dionach and GSS). We've also run some automated penetration tests using Nessus. Those two auditors were given a lot of insider information, and found almost nothing*...

    While it feels comfortable to think our system is perfectly secure (and it was surely comfortable to show those reports to our clients when they performed their due diligence work), I have a hard time believing that we've achieved a perfectly secure system, especially considering that we have no security specialist in our company (security has always been a concern, and we're completely paranoid, which helps, but that's as far as it goes!). If hackers can hack into companies that probably employ at least a few people whose sole task is to ensure their data stays private, surely they could hack into our small business, right?

    Does someone have any experience in hiring an "ethical hacker"? How do you find one? How much would it cost?

    * The only recommendation they made was to upgrade our remote desktop protocols on two Windows servers, which they were able to access because we gave them the correct non-standard port and whitelisted their IP.

    Read the article

  • Nginx, HAproxy, Unicorn, Rails and Node settings

    - by Julien Genestoux
    Our application is currently only a "regular" web app, with no fancy things like streaming HTTP or websockets. It's mostly a Rails app, served by a few (20 on 2 machines) Unicorn workers, proxied by a venerable nginx server which deals with load balancing. This has been working quite well for the past year and the app now serves between 400 and 800 requests per second at any point during the day. We're soon releasing 2 new APIs, which are both served by a Node application : a websocket one, as well as a long polling HTTP one. (the fancy thing like the Twitter streaming API where HTTP connections never end). They both use the same port on node and since the node app is stateless, we can certainly deploy a few of them to handle the traffic. The app (node) is now deployed in 5 instances and are now listening on 5 different 'private' ports on the same host. We need to put something in front of them to load balance, but also something that is able to deal with sockets (either websocket or HTTP streaming) which are intended to stay 'up' for days. The question is then : what? I read somewhere that HAProxy does a better job than Nginx at this. What do you recommend?
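    For the long-lived connections, the main thing the front-end proxy needs is a tunnel/idle timeout measured in hours rather than seconds, plus connection-aware balancing across the five Node ports. A rough HAProxy sketch, assuming a version recent enough to have the tunnel timeout (1.5-dev or later; ports and names are invented for illustration, and the existing Rails/nginx path is left untouched):

        frontend node_streaming
            bind *:8080
            mode http
            timeout client 1h
            default_backend node_pool

        backend node_pool
            mode http
            balance leastconn
            timeout server 1h
            timeout tunnel 1h          # keeps websocket / long-poll connections open
            server node1 127.0.0.1:9001 check
            server node2 127.0.0.1:9002 check
            server node3 127.0.0.1:9003 check
            server node4 127.0.0.1:9004 check
            server node5 127.0.0.1:9005 check

    nginx only gained websocket proxying later (1.3.13, via proxy_http_version 1.1 and the Upgrade/Connection headers), which is why HAProxy was the more common choice in front of socket-heavy Node backends at the time.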

    Read the article

  • Setup Exchange 2010 cannot verify Host (A) record warning

    - by Joost Verdaasdonk
    When I try to install Exchange 2010 on my Server 2008 R2 server I get a warning during the prerequisites check:

        Warning: setup cannot verify that the 'Host' (A) record for this computer exists within the DNS database on server: 90.195.200.12.

    The goal of this Exchange setup is that I'm able to send email in my local domain as well as receive/send email through the public domain name. Some information about my setup: this server is going to be a dedicated Exchange host and has the following IP setup (IPs are examples and not the real IPs, of course):

    Local VLAN NIC:
        IP: 10.10.50.22
        Subnet: 255.255.255.0
        No gateway
        DNS: 10.10.50.1 (the domain controller with authoritative DNS)

    Public WAN NIC:
        IP: 90.195.200.148
        Subnet: 255.255.255.235
        Gateway: 90.195.200.145
        DNS: 90.195.200.12 | 190.160.230.14

    My public domain - exampledomain.com:
        A record: mail - IP: 90.195.200.148
        MX record IP: 90.195.200.148

    As I see it now, the Exchange setup is looking for the A record on one of the DNS servers of my public WAN NIC, and of course that is not where my A records are defined. I have those A records in two places:

    1. In the domain controller DNS (the private NIC)
    2. In the online DNS registration of my public domain (exampledomain.com)

    My question is: is this warning going to be a problem? Can I do something better in my setup so that this warning will go away? Please advise.
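    The warning comes from setup querying whichever DNS servers the network adapters list, and the public WAN NIC lists external resolvers that know nothing about the AD zone. The usual cleanup, described here only as a sketch (the connection name, internal zone and host name below are placeholders or the question's own examples), is to leave DNS client settings off the WAN NIC entirely, point all lookups at the internal DC, and confirm the server's A record exists there:

        :: on the Exchange server: resolve everything via the internal DC
        netsh interface ip set dns name="Local VLAN NIC" source=static addr=10.10.50.1

        :: on the domain controller: verify/add the A record for the Exchange host
        dnscmd 10.10.50.1 /RecordAdd internal.example.local mail A 10.10.50.22

    The warning is about internal name resolution; the public A/MX records matter for mail flow, not for this prerequisite check.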

    Read the article

  • HTTPS and Certification for dummies

    - by Poxy
    I had never used HTTPS on a site and now want to try it. I did some research, but I'm not sure I understood everything. Answers and corrections are greatly appreciated. Here we go:

    1. To use HTTPS I need to generate 'private' and 'public' keys for the web server I use. In my case it's Apache (manual: http://httpd.apache.org/docs/2.0/ssl/ssl_faq.html).
    2. The HTTPS protocol should be bound to port 443. Q: How do I do that? Is it done by default? Where can I check the configuration?
    3. Applying HTTPS. Q: If I see https in the browser, does it mean that the data traffic on the page IS encrypted? Would any form on the page submit data via HTTPS?
    4. Though all the data is going to be encrypted, browsers will still show ugly red warnings. This is just because they do not know anything about my certificate. They have about a hundred certificates pre-installed, but mine is not one of them, obviously. But the data IS encrypted by HTTPS.
    5. If I want browsers to recognize my certificate, I need to have it signed by one of the certification authorities (CAs) whose certificate is pre-installed (e.g. Thawte, GeoTrust, RapidSSL, etc.).

    UPD: To read about SSL/TLS: The First Few Milliseconds of an HTTPS Connection, I found it very informative. Examples for PHP (openssl.org) of how to make use of SSL/TLS on the server side are published here.
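    Points 1 and 2 in practice come down to generating a key pair plus certificate and enabling an SSL virtual host; a minimal Apache-flavoured sketch (self-signed, so the browser warnings from point 4 remain; paths and the hostname are placeholders):

        # generate a private key and a self-signed certificate
        openssl req -new -x509 -days 365 -nodes \
            -keyout /etc/ssl/private/mysite.key -out /etc/ssl/certs/mysite.crt

        # httpd.conf / ssl.conf: listen on 443 and serve the site over TLS
        Listen 443
        <VirtualHost *:443>
            ServerName www.example.com
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/mysite.crt
            SSLCertificateKeyFile /etc/ssl/private/mysite.key
        </VirtualHost>

    On point 3: "https://" in the address bar means the connection carrying that page is encrypted; a form only submits encrypted data if its action URL is also https, so mixed-content forms are the thing to watch for.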

    Read the article

  • How to configure IIS for SVG and web testing with Visual Studio?

    - by macias
    Let's say I have a simple web page with an SVG image in it:

        <img src="foobar.svg" alt="not working" />

    If I make this page a static HTML page and view it directly, the SVG is displayed. If I type the address of the SVG, it is displayed. But when I make this an .aspx page and launch it dynamically from Visual Studio, I get the alt text. If I type the address of the SVG (from localhost, not as a local file), the browser tries to download it instead of displaying it. I already defined the MIME type in IIS (for the entire server: "image/svg+xml") and restarted IIS. Same effect as before.

    Question: what more should I do?

    Update

    WireShark won't work (it is in the documentation); I also tried RawCap, but it cannot trace my connection (odd). Luckily Fiddler worked.

    From client:

        GET http://127.0.0.1:1731/svg/document_edit.svg HTTP/1.1
        Host: 127.0.0.1:1731
        User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:10.0.1) Gecko/20100101 Firefox/10.0.1
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip, deflate
        Connection: keep-alive

    Answer from server:

        HTTP/1.1 200 OK
        Server: ASP.NET Development Server/10.0.0.0
        Date: Thu, 16 Feb 2012 11:14:38 GMT
        X-AspNet-Version: 4.0.30319
        Cache-Control: private
        Content-Type: application/octet-stream
        Content-Length: 87924
        Connection: Close

        <?xml version="1.0" encoding="UTF-8" standalone="no"?>
        <!-- Created with Inkscape (http://www.inkscape.org/) -->
        <svg xmlns: *** FIDDLER: RawDisplay truncated at 128 characters. Right-click to disable truncation. ***

    For the record, here is a useful Q&A for Fiddler: http://stackoverflow.com/questions/826134/how-to-display-localhost-traffic-in-fiddler-while-debugging-an-asp-net-applicati
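    The response header gives the likely culprit away: the page is served by the ASP.NET Development Server (Cassini), not IIS, and Cassini does not consult IIS MIME settings, falling back to application/octet-stream for extensions it does not know. Two hedged options: run the project under IIS or IIS Express instead, or (for IIS 7+ / IIS Express) declare the mapping in web.config, roughly:

        <configuration>
          <system.webServer>
            <staticContent>
              <remove fileExtension=".svg" />
              <mimeMap fileExtension=".svg" mimeType="image/svg+xml" />
            </staticContent>
          </system.webServer>
        </configuration>

    Note the web.config approach has no effect while the site still runs under the Development Server.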

    Read the article

  • WEP/WPA/WPA2 and wifi sniffing

    - by jcea
    Hi, I know that WEP traffic can be "sniffed" by any user of the WiFi. I know that WPA/WPA2 traffic is encrypted using a different link key for each user, so they can't sniff traffic... unless they capture the initial handshake. If you are using a PSK (pre-shared key) scheme, then you recover the link key trivially from this initial handshake. If you don't know the PSK, you can capture the handshake and try to crack the PSK by brute force offline. Is my understanding correct so far?

    I know that WPA2 has an AES mode and can use "secure" tokens like X.509 certificates and such, and it is said to be secure against sniffing because capturing the handshake doesn't help you. So, is WPA2+AES secure (so far) against sniffing, and how does it actually work? That is, how is the (random) link key negotiated, when using X.509 certificates or a (private and personal) passphrase?

    Do WPA/WPA2 have other sniffer-secure modes besides WPA2+AES? And how is broadcast traffic managed so it can be received by all the WiFi users, if each has a different link key? Thanks in advance! :)

    Read the article

  • Windows EFS file sharing anomaly

    - by wbkang
    FYI, I can confirm this happening in Windows Vista (Business) and Windows 7 Professional in WORKGROUP mode (as both a client and a server). I am not totally sure if this is a SuperUser question or a ServerFault question.

    So there are two PCs, let's call them C (client) and S (server). Both machines have a user called U with the same password, and both have the same private/public key pair for EFS. S shares a folder F, with U given full permission. Locally, too, the user U has full permission on F. Now, U, from C, connects to F on the server S, and everything works totally fine: I can read, write, and delete files and create/delete folders in S.

    Things go weird from here. I encrypt the folder F in S. I can delete/modify files fine (so the files in F decrypted OK). However, U from C cannot create a folder or create a file, getting Access Denied. But this Access Denied is very special. It takes over 10 seconds at C to receive the error, and Explorer freezes while trying to create the folder, eventually returning the error. In S, I can watch the folder being created at the same time, and what I see is "New Folder" blinking like crazy and eventually disappearing when the client receives the error, i.e. it's created and deleted in a really rapid manner.

    What I do not understand is that the permissions look fine, I can modify/delete files, and it looks like there is no problem with EFS because I can read/write files fine. Yet it fails to create a file or a folder. Any help is appreciated. Thanks, wbkang

    Read the article

  • HD video editing system with Truecrypt

    - by Rob
    I'm looking to do hi-def video editing and transcoding on an unencrypted standard partition, with TrueCrypt on the system partition for sensitive data. I'm aiming to keep certain data private but still have performance where needed. Goals:

    - Maximum, unimpacted performance possible for hi-def video editing; encryption of the video is not required.
    - Encrypt the system partition, using TrueCrypt, for web/email privacy, etc. in the event of loss.

    In other words, I want to selectively encrypt the hard drive, i.e. make the system partition encrypted but not impact the original maximum performance that would be available to me for hi-def/HD video editing. The thinking is to use an unencrypted partition for the video and set up the video applications to point at that. Assuming they would use that partition only for their workspace and not the encrypted system partition, I should expect no performance hit. Would I be correct?

    I guess it might depend on the application: whether the app is hard-wired to always use the system partition for temporary storage during editing and transcoding, or whether it has to be installed on the C: system partition. So some real data on how various apps behave in this respect would be useful, e.g. Adobe, CyberLink, Nero, etc. I have an Intel i7 quad-core (8 threads) 1.6GHz (up to 2.8GHz turbo boost), 4GB RAM, 7200rpm SATA, nvidia HP laptop. I've read the excellent posting about the general performance impact of TrueCrypt, but the benchmarks weren't specific enough for my needs, where I'm dealing with HD video and using a non-encrypted partition to maintain maximum performance.

    Read the article

  • Sync OneNote Notebooks to/on SkyDrive

    - by Sam
    I've got OneNote running on all computers in our house, and we use it all the time with several people and computers. The only drawback: I want to keep the copies of OneNote in sync without having to run a dedicated server myself. Right now one of my computers has a folder share that all the others sync to, but this is highly impractical since that computer is not always running. So my question is: is it possible to put the notebook files in a (private) SkyDrive folder and have all the computers sync there? This way all computers could stay in sync whenever they have access to the web. Can this be done? And, of course, how?

    [Update] Maybe I should not have taken knowledge about OneNote for granted: OneNote uses a proprietary file format, but has very good in-file syncing that works on network shares. A generic "just sync the complete file" approach won't be useful at all, because I'd just have "file has changed on server and on client" conflicts all the time. The sync needs to understand OneNote files and be able to sync the content, i.e. OneNote itself needs to sync the files, not some generic sync tool.

    Read the article

  • a VPS mail server

    - by microspino
    Hello, I'm trying to replace Citadel on my virtual private server with something simpler. I dislike their documentation and the webmail client, and I don't need any groupware features. I need only an MTA with a nice-looking web interface, plus spam and virus checking. I recently found the Lamson project from Zed Shaw. Is that production ready? Have you had any real, positive experience with it? On the latest-news page I see that the last release dates from December 2009.

    Sorry for my lack of knowledge, I'm really new to mail servers, but I have to find a solution to manage sending and receiving mail on my VPS. I would also accept building my VPS email server on a Linux stack like Exim, Postfix or whatever, but I have really small needs, they will not grow for at least a year, and I will be the only user. I'm searching for something that I can build and manage easily, as I'm a novice Linux sysadmin. Having good documentation, or at least a robust step-by-step guide, would be a plus.

    Read the article

  • localhost/127.0.0.1 not working, "Unable to connect"

    - by redconservatory
    I am running some pretty basic PHP sites on Snow Leopard. Usually I just go to my browser and type anything like:

        localhost
        http://localhost
        127.0.0.1
        mycomputername.local

    But suddenly, after installing a gem (compass), none of this is working. I tried:

        sudo apachectl restart

    thinking that I just needed to restart Apache, but no luck. My error log looks like:

        [Mon Mar 26 09:39:08 2012] [warn] child process 45443 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:10 2012] [warn] child process 45223 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:10 2012] [warn] child process 45043 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:10 2012] [warn] child process 45438 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:10 2012] [warn] child process 45049 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:10 2012] [warn] child process 45439 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:10 2012] [warn] child process 45224 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:10 2012] [warn] child process 45440 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:10 2012] [warn] child process 45441 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:10 2012] [warn] child process 45442 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:10 2012] [warn] child process 45443 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:11 2012] [notice] caught SIGTERM, shutting down

    I also tried:

        sudo apachectl -k start

    and I got the error:

        Syntax error on line 182 of /private/etc/apache2/httpd.conf: Illegal option

    When I look at the code around that line, I see:

        <Directory />
            Options Indexes MultiViews + FollowSymLinks
            AllowOverride All
            Order allow, deny
            Allow from all
        </Directory>
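    The syntax error quoted is what Apache reports when an Options value is malformed: "+ FollowSymLinks" has a space between the sign and the keyword, and mixing signed (+/-) options with unsigned ones on the same line is also rejected. "Order allow, deny" has the same space problem after the comma. A corrected version of that block, assuming the intent was simply to allow FollowSymLinks, would be:

        <Directory />
            Options Indexes MultiViews FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

    After fixing it, apachectl configtest (or -t) confirms the config parses before restarting.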

    Read the article
