Search Results

Search found 18139 results on 726 pages for 'private cloud'.


  • Is it worth hiring a hacker to perform some penetration testing on my servers?

    - by Brann
    I'm working in a small IT company with paranoid clients, so security has always been an important consideration for us. In the past we've commissioned two penetration tests from independent companies specialized in this area (Dionach and GSS), and we've also run some automated penetration tests using Nessus. Those two auditors were given a lot of insider information, and found almost nothing*... While it feels comfortable to think our system is perfectly secure (and it was certainly comfortable to show those reports to our clients when they performed their due diligence), I have a hard time believing that we've achieved a perfectly secure system, especially considering that we have no security specialist in our company (security has always been a concern, and we're completely paranoid, which helps, but that's as far as it goes!). If hackers can break into companies that probably employ at least a few people whose sole task is to ensure their data stays private, surely they could break into our small business, right? Does anyone have experience hiring an "ethical hacker"? How do you find one? How much would it cost? *The only recommendation they made was to upgrade the remote desktop protocol on two Windows servers, which they were able to access only because we gave them the correct non-standard port and whitelisted their IP.

    Read the article

  • Setup Exchange 2010 cannot verify Host (A) record warning

    - by Joost Verdaasdonk
    When I try to install Exchange 2010 on my Windows Server 2008 R2 machine I get a warning during the prerequisites check: "Warning: setup cannot verify that the 'Host' (A) record for this computer exists within the DNS database on server: 90.195.200.12." The goal of this Exchange setup is to be able to send email within my local domain as well as send/receive email through the public domain name. Some information about my setup: this server is going to be a dedicated Exchange host and has the following IP configuration (the IPs are examples, not the real ones, of course):

        Local VLAN NIC:  IP 10.10.50.22, subnet 255.255.255.0, no gateway, DNS 10.10.50.1 (the domain controller, running authoritative DNS)
        Public WAN NIC:  IP 90.195.200.148, subnet 255.255.255.235, gateway 90.195.200.145, DNS 90.195.200.12 and 190.160.230.14
        Public domain (exampledomain.com):  A record "mail" pointing to 90.195.200.148, MX record pointing to 90.195.200.148

    As I see it now, the Exchange setup is looking for the A record on one of the DNS servers of my public WAN NIC, and of course that is not where my A records are defined. I have those A records in two places: (1) in the domain controller's DNS (on the private NIC) and (2) in the online DNS registration of my public domain (exampledomain.com). My question is: is this warning going to be a problem? Can I improve my setup so that this warning goes away? Please advise.
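    A quick way to confirm the diagnosis is to query the host's record against each resolver directly; a hedged sketch reusing the example addresses from the question, with mail.exampledomain.com standing in for the server's own name:

        nslookup -type=A mail.exampledomain.com 10.10.50.1
        nslookup -type=A mail.exampledomain.com 90.195.200.12

    If only the internal DNS (10.10.50.1) answers, the warning merely reflects that setup asked the WAN NIC's resolver; making sure the server's own A record exists in the AD DNS and that the internal resolver is the one preferred for lookups should make it go away.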

    Read the article

  • How to configure IIS for SVG and web testing with Visual Studio?

    - by macias
    Let's say I have a simple web page with an SVG image in it: <img src="foobar.svg" alt="not working" />. If I make this a static HTML page and view it directly, the SVG is displayed; if I type the address of the SVG itself, it is displayed. But when I make this an .aspx page and launch it dynamically from Visual Studio, I get the alt text, and if I type the address of the SVG (from localhost, not as a local file) the browser tries to download it instead of displaying it. I already defined the MIME type in IIS (for the entire server: "image/svg+xml") and restarted IIS. Same effect as before. Question: what more should I do? Update: Wireshark won't capture localhost traffic (a limitation noted in its documentation), and RawCap could not trace my connection either (odd); luckily Fiddler worked. Request from the client:

        GET http://127.0.0.1:1731/svg/document_edit.svg HTTP/1.1
        Host: 127.0.0.1:1731
        User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:10.0.1) Gecko/20100101 Firefox/10.0.1
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip, deflate
        Connection: keep-alive

    Answer from the server:

        HTTP/1.1 200 OK
        Server: ASP.NET Development Server/10.0.0.0
        Date: Thu, 16 Feb 2012 11:14:38 GMT
        X-AspNet-Version: 4.0.30319
        Cache-Control: private
        Content-Type: application/octet-stream
        Content-Length: 87924
        Connection: Close

        <?xml version="1.0" encoding="UTF-8" standalone="no"?>
        <!-- Created with Inkscape (http://www.inkscape.org/) -->
        <svg xmlns: *** FIDDLER: RawDisplay truncated at 128 characters. Right-click to disable truncation. ***

    For the record, here is a useful Q&A for Fiddler: http://stackoverflow.com/questions/826134/how-to-display-localhost-traffic-in-fiddler-while-debugging-an-asp-net-applicati
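    The trace itself points at the likely cause: the response comes from the ASP.NET Development Server (see the Server header), which knows nothing about IIS MIME settings and falls back to application/octet-stream. If the project runs under IIS 7 or later instead, a sketch like the following in web.config registers the type (if image/svg+xml is already defined at the server level, a <remove> element would be needed first):

        <configuration>
          <system.webServer>
            <staticContent>
              <mimeMap fileExtension=".svg" mimeType="image/svg+xml" />
            </staticContent>
          </system.webServer>
        </configuration>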

    Read the article

  • HTTPS and Certification for dummies

    - by Poxy
    I had never used HTTPS on a site and now want to try it. I did some research, but I'm not sure I understood everything; answers and corrections are greatly appreciated. Here we go: To use HTTPS I need to generate 'private' and 'public' keys for the web server I use. In my case it's Apache (manual: http://httpd.apache.org/docs/2.0/ssl/ssl_faq.html). The HTTPS protocol should be bound to port 443. Q: How do I do that? Is it done by default? Where can I check the configuration? Applying HTTPS. Q: If I see https in the browser, does it mean that the data traffic on the page IS encrypted? Would any form on the page submit data via HTTPS? Though all the data is encrypted, browsers will still show ugly red warnings. This is just because they do not know anything about my certificate: they have about a hundred certificates pre-installed, but mine is obviously not one of them. The data IS still encrypted by HTTPS, though. If I want browsers to recognize my certificate, I need to have it signed by one of the certification authorities (CAs) whose certificates are pre-installed (e.g. Thawte, GeoTrust, RapidSSL, etc.). UPD: To read about SSL/TLS: The First Few Milliseconds of an HTTPS Connection; I found it very informative. Examples for PHP (openssl.org) of how to make use of SSL/TLS on the server side are published here.
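    For the key-generation step, a minimal OpenSSL sketch (file names and lifetime are placeholders; the last command self-signs, which is what a CA-issued certificate would replace):

        openssl genrsa -out server.key 2048
        openssl req -new -key server.key -out server.csr
        openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt

    And for the port-443 question: Apache only binds 443 if the configuration says so (many distributions ship this in the SSL module's config). A hedged sketch of the relevant directives:

        Listen 443
        <VirtualHost *:443>
            SSLEngine on
            SSLCertificateFile /path/to/server.crt
            SSLCertificateKeyFile /path/to/server.key
        </VirtualHost>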

    Read the article

  • WEP/WPA/WPA2 and wifi sniffing

    - by jcea
    Hi, I know that WEP traffic can be sniffed by any user of the Wi-Fi network. I know that WPA/WPA2 traffic is encrypted using a different link key for each user, so other users can't sniff the traffic... unless they capture the initial handshake. If you are using a PSK (pre-shared key) scheme, then you recover the link key trivially from this initial handshake; if you don't know the PSK, you can capture the handshake and try to crack the PSK by brute force offline. Is my understanding correct so far? I know that WPA2 has an AES mode and can use "secure" tokens like X.509 certificates, and it is said to be secure against sniffing because capturing the handshake doesn't help you. So, is WPA2+AES secure (so far) against sniffing, and how does it actually work? That is, how is the (random) link key negotiated, both when using X.509 certificates and when using a (private and personal) passphrase? Do WPA/WPA2 have other sniffer-proof modes besides WPA2+AES? And how is broadcast traffic managed so that all Wi-Fi users can receive it, if each has a different link key? Thanks in advance! :)
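    On the PSK point: the per-session keys are derived from the pairwise master key (PMK) plus nonces exchanged in the captured handshake, and in PSK mode the PMK comes from the passphrase by a fixed derivation salted only with the SSID, which is exactly what makes offline brute force possible. A minimal Python sketch of that derivation (network name and guess are hypothetical):

        import hashlib

        ssid = b"ExampleNet"          # the network name acts as the salt
        guess = b"candidate-psk"      # a passphrase guess to test offline
        # WPA/WPA2-PSK: PMK = PBKDF2-HMAC-SHA1(passphrase, SSID, 4096 iterations, 32 bytes)
        pmk = hashlib.pbkdf2_hmac("sha1", guess, ssid, 4096, 32)
        print(pmk.hex())

    An attacker verifies each guess by recomputing the handshake's MIC from the derived keys. With 802.1X/EAP (certificates), the PMK comes out of the TLS exchange instead of a shared secret, so there is nothing comparable to brute-force from a captured handshake.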

    Read the article

  • HD video editing system with Truecrypt

    - by Rob
    I'm looking to do high-definition video editing and transcoding on an unencrypted standard partition, with TrueCrypt on the system partition for sensitive data. I'm aiming to keep certain data private but still have performance where needed. Goals: maximum, unimpacted performance for HD video editing (encryption of the video itself is not required); encryption of the system partition, using TrueCrypt, for web/email privacy etc. in the event of loss. In other words, I want to selectively encrypt the hard drive, i.e. encrypt the system partition without affecting the maximum performance that would otherwise be available to me for HD video editing. The idea is to use an unencrypted partition for the video and point the video applications at it. Assuming they use that partition only as their workspace and not the encrypted system partition, I should expect no performance hit. Would I be correct? I guess it might depend on the application: whether it is hard-wired to use the system partition for temporary storage during editing and transcoding, or whether it has to be installed on the C: system partition at all. So some real data on how various apps behave in this respect would be useful, e.g. Adobe, CyberLink, Nero, etc. I have an HP laptop with an Intel i7 quad-core (8 threads) at 1.6 GHz (up to 2.8 GHz with Turbo Boost), 4 GB RAM, a 7200 rpm SATA drive and Nvidia graphics. I've read the excellent posting about the general performance impact of TrueCrypt, but the benchmarks weren't specific enough for my needs, where I'm dealing with HD video and using a non-encrypted partition to maintain maximum performance.

    Read the article

  • Sync OneNote Notebooks to/on SkyDrive

    - by Sam
    I've got OneNote running on all computers in our house, and several people use it all the time on several machines. The only drawback: I want to keep the copies of OneNote in sync without having to run a dedicated server myself. Right now one of my computers has a folder share that all the others sync to, but this is highly impractical since that computer is not always running. So my question is: is it possible to put the notebook files in a (private) SkyDrive folder and have all the computers sync there? That way all computers could stay in sync whenever they have access to the web. Can this be done? And, of course, how? [Update] Maybe I should not have taken knowledge about OneNote for granted: OneNote uses a proprietary file format but has very good in-file syncing that works on network shares. A generic "just sync the complete file" approach won't be useful at all, because I'd get "file has changed on server and on client" conflicts all the time. The sync needs to understand OneNote files and be able to sync their content; i.e. OneNote itself needs to sync the files, not some generic sync tool.

    Read the article

  • Windows EFS file sharing anomaly

    - by wbkang
    FYI, I can confirm this happening on Windows Vista (Business) and Windows 7 Professional in WORKGROUP mode (each acting as both client and server). I am not totally sure if this is a Super User question or a Server Fault question. So there are two PCs; let's call them C (client) and S (server). Both machines have a user called U with the same password, and both have the same private/public key pair for EFS. S shares a folder F, with U given full permission; locally, U also has full permission on F. Now when U, from C, connects to F on the server S, everything works fine: I can read, write and delete files and create/delete folders on S. Things go weird from here. I encrypt the folder F on S. I can still delete/modify files fine (so the files in F decrypt OK). However, U from C cannot create a folder or a file, getting Access Denied. But this Access Denied is very special: it takes over 10 seconds at C to receive the error, and Explorer freezes while trying to create the folder before eventually returning the error. On S, I can watch the folder being created at the same time, and what I see is "New Folder" blinking like crazy and disappearing the moment the client receives the error, i.e. it is created and deleted in rapid succession. What I do not understand is that the permissions look fine, I can modify/delete files, and there seems to be no problem with EFS because I can read/write files fine; yet it fails to create a file or a folder. Any help is appreciated. Thanks, wbkang
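    One low-risk thing to check on both machines is which certificate each side actually encrypts under; if new files on S get wrapped with a key that C's profile does not hold, new-item creation can fail even while cached decryption of existing files still works. A hedged sketch, run on S against the encrypted folder (path is hypothetical):

        cipher /c C:\Shares\F

    Comparing the certificate thumbprints it reports against the EFS certificates in each user's store (certmgr.msc) would confirm or rule this out.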

    Read the article

  • a VPS mail server

    - by microspino
    Hello, I'm trying to replace Citadel on my Virtual Private Server with something simpler. I dislike its documentation and its webmail client, and I don't need any groupware features. I need only an MTA with a nice-looking web interface and spam and virus checking. I recently found the Lamson project by Zed Shaw. Is that production ready? Have you had any real, positive experience with it? On the latest-news page I see that the last release dates from December 2009. Sorry for my lack of knowledge; I'm really new to mail servers, but I have to find a solution for sending and receiving mail on my VPS. I would also accept building my VPS email server on a Linux stack like Exim, Postfix or whatever, but my needs are really small, they will not grow for at least a year, and I will be the only user. I'm searching for something that I can build and manage easily, as I'm a novice Linux sysadmin. Good documentation, or at least a robust step-by-step guide, would be a plus.

    Read the article

  • localhost/127.0.0.1 not working, "Unable to connect"

    - by redconservatory
    I am running some pretty basic PHP sites on Snow Leopard. Usually I just go to my browser and type any of: localhost, http://localhost, 127.0.0.1, mycomputername.local. But suddenly, after installing a gem (Compass), none of this is working. I tried sudo apachectl restart, thinking that I just needed to restart Apache, but no luck. My error log looks like:

        [Mon Mar 26 09:39:08 2012] [warn] child process 45443 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:10 2012] [warn] child process 45223 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:10 2012] [warn] child process 45043 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:10 2012] [warn] child process 45438 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:10 2012] [warn] child process 45049 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:10 2012] [warn] child process 45439 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:10 2012] [warn] child process 45224 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:10 2012] [warn] child process 45440 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:10 2012] [warn] child process 45441 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:10 2012] [warn] child process 45442 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:10 2012] [warn] child process 45443 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:11 2012] [notice] caught SIGTERM, shutting down

    I also tried sudo apachectl -k start and got this error:

        Syntax error on line 182 of /private/etc/apache2/httpd.conf: Illegal option

    When I look at the code around that line, I see:

        <Directory />
            Options Indexes MultiViews + FollowSymLinks
            AllowOverride All
            Order allow, deny
            Allow from all
        </Directory>
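    The "Illegal option" almost certainly comes from the Options line: Apache treats the bare "+" (separated from FollowSymLinks by a space) as an option name in its own right, and the space after the comma in "Order allow, deny" would be rejected next. The same block without the stray spaces:

        <Directory />
            Options Indexes MultiViews +FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>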

    Read the article

  • How do I load balance between two Linux machines?

    - by William Hilsum
    Inspired by the Stack Overflow network, I am now obsessed with HAProxy and trying to use it myself. At the moment, each HAProxy box has two network cards (well, two configured; I can have a maximum of 4, and I wasn't sure if the boxes needed a dedicated one for management between them). On both machines, the back-end card (eth1) has a private IP and goes to a switch connected to the web servers, and the front-facing card (eth0) has a public internet IP that is routed straight through. In addition, I have created an additional virtual IP on eth0, called eth0:0, which has a third public IP address. I more or less understand how to use it for load balancing between the multiple web servers behind it, but I am failing to load balance between the two HAProxy boxes themselves: they appear to fight for the virtual IP, which does not seem like a smart solution. Now, using the shared virtual IP address this setup appears to work and does seem to give me maximum uptime, but is this the correct way to do it, or is there a smarter way? I have been looking at other Linux packages such as keepalived, but I have only been using Linux (server) for a week now and am at the limits of my understanding. Has anyone done this before, and can you advise anything for maximum uptime?
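    keepalived is the usual answer to exactly this: it runs VRRP between the two HAProxy boxes so the shared IP is held by exactly one of them at a time and fails over automatically, instead of both fighting for it. A minimal sketch for the master node (interface name, router ID and virtual IP are placeholders; the second box gets state BACKUP and a lower priority):

        vrrp_instance VI_1 {
            state MASTER
            interface eth0
            virtual_router_id 51
            priority 101
            advert_int 1
            virtual_ipaddress {
                203.0.113.10
            }
        }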

    Read the article

  • Connection refused in ssh tunnel to apache forward proxy setup

    - by arkascha
    I am trying to set up a private forward proxy on a small server. I intend to use it during a conference to tunnel my internet access through an SSH tunnel to the proxy server. So I created a virtual host inside Apache 2.2 running the proxy, proxy_http and proxy_connect modules, with this configuration:

        <VirtualHost localhost:8080>
            ServerAdmin xxxxxxxxxxxxxxxxxxxx
            ServerName yyyyyyyyyyyyyyyyyyyy
            ErrorLog /var/log/apache2/proxy-error_log
            CustomLog /var/log/apache2/proxy-access_log combined
            <IfModule mod_proxy.c>
                ProxyRequests On
                <Proxy *>
                    # deny access to all IP addresses except localhost
                    Order deny,allow
                    Deny from all
                    Allow from 127.0.0.1
                </Proxy>
                # The following is my preference. Your mileage may vary.
                ProxyVia Block
                ## allow SSL proxy
                AllowCONNECT 443
            </IfModule>
        </VirtualHost>

    After restarting Apache I create a tunnel from client to server:

        #> ssh -L8080:localhost:8080 <server address>

    and try to access the internet through that tunnel:

        #> links -http-proxy localhost:8080 http://www.linux.org

    I would expect to see the requested page. Instead I get a "connection refused" error, and in the shell holding the SSH tunnel open I see:

        channel 3: open failed: connect failed: Connection refused

    Anyone got an idea why this connection is refused?
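    One likely cause, assuming nothing else in the config was changed: a <VirtualHost> block by itself does not make Apache bind a port, so without a matching Listen directive nothing is listening on 8080 on the server, and the forwarded connection is refused exactly as the ssh message shows. The check/fix would be:

        # httpd.conf -- bind the proxy port, on loopback only so just the tunnel can reach it
        Listen 127.0.0.1:8080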

    Read the article

  • PEAP validating a secondary domain suffix

    - by sam
    Probably the title is a little bit confusing; let me explain the situation. Our company wants to implement a corporate wireless LAN with PEAP authentication. Unfortunately someone made a big mistake in our AD design 10 years ago: the domain name we are using, "company.ch", is not owned by the company but by someone else, so it is not possible to issue a public SSL certificate for the RADIUS server, and our AD is too big to rename. We have already thought about using our private PKI and rolling out the CA certificate via GPO, but that would only cover our corporate-managed clients, not BYOD (smartphones, tablets, laptops...). Is there a way to add a secondary domain name like "company2.ch", issue a public certificate, join the RADIUS server to that secondary domain as well, and configure that secondary DNS suffix via DHCP for all the client pools? Or is there another way, for example a new RADIUS server with its own company2.ch domain, connected with some kind of trust to the company.ch domain? Sorry, I'm not a client/server guy; hopefully you get my drift!
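    The usual way out of this, without renaming AD or building a trust, is an alternative UPN suffix: register a domain you actually own (company2.ch here, as in the question), add it as a UPN suffix to the forest, and have users authenticate as user@company2.ch while the RADIUS certificate is issued for a public name under that domain. A hedged PowerShell sketch, assuming the ActiveDirectory module is available:

        # Add company2.ch as an alternative UPN suffix for the existing forest
        Set-ADForest -Identity company.ch -UPNSuffixes @{Add="company2.ch"}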

    Read the article

  • How do I Install Intermediate Certificates (in AWS)?

    - by getmizanur
    I have installed a private key (PEM encoded) and a public key certificate (PEM encoded) on an Amazon load balancer. However, when I check the SSL with a site test tool, I get the following error: "Error while checking the SSL Certificate!! Unable to get the local issuer of the certificate. The issuer of a locally looked up certificate could not be found. Normally this indicates that not all intermediate certificates are installed on the server." I converted the crt file to PEM using these commands from this tutorial:

        openssl x509 -in input.crt -out input.der -outform DER
        openssl x509 -in input.der -inform DER -out output.pem -outform PEM

    During setup of the Amazon load balancer, the only option I left out was the certificate chain (PEM encoded), but that was optional. Could this be the cause of my issue? And if so, how do I create the certificate chain? UPDATE: If you make a request to VeriSign they will give you a certificate chain. This chain includes the public crt, the intermediate crt and the root crt. Make sure to remove the public crt from your certificate chain (it is the topmost certificate) before adding it to the certificate chain box of your Amazon load balancer. If you are making HTTPS requests from an Android app, the above instructions may not work for older Android OS versions such as 2.1 and 2.2. To make it work on older Android OS versions: go to the VeriSign site, click on the "retail ssl" tab and then on "secure site" "CA Bundle for Apache Server", and copy and paste those intermediate certs into the certificate chain box; in case you have not found it, here is the direct link. If you are using GeoTrust certificates then the solution is much the same for Android devices; however, you need to copy and paste their intermediate certs for Android.
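    To build and sanity-check a chain by hand, the intermediate (and optionally root) certificates are just concatenated PEM blocks, and the result can be verified against the server certificate before pasting it into the chain box; a sketch with placeholder file names:

        cat intermediate.crt root.crt > chain.pem
        openssl verify -CAfile chain.pem server.crt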

    Read the article

  • Forwarding rsyslog to syslog-ng, with FQDN and facility separation

    - by Joshua Miller
    I'm attempting to configure my rsyslog clients to forward messages to my syslog-ng log repository systems. Forwarding messages works out of the box, but my clients are logging short names, not FQDNs. As a result the messages on the syslog repository use short names as well, which is a problem because one can't easily determine which system a message originated from. My clients get their names through DHCP/DNS. I've tried a number of solutions to get this working, but without success. I'm using rsyslog 4.6.2 and syslog-ng 3.2.5.

    I've tried setting $PreserveFQDN on as the first directive in /etc/rsyslog.conf (and restarting rsyslog, of course). It seems to have no effect; hostname --fqdn on the client returns the proper FQDN, so the problem isn't whether the system can actually figure out its own FQDN. $LocalHostName <fqdn> looked promising, but this directive isn't available in my version of rsyslog (available since 4.7.4+, 5.7.3+, 6.1.3+), and upgrading isn't an option at the moment. Configuring the syslog-ng server to populate names based on reverse lookups via DNS isn't an option either; there are complexities with reverse DNS and the public cloud.

    Specifying a custom template for the forwarder seems like a viable option at first glance. I can specify the following, which causes local logging to begin using the FQDN on the syslog-ng repository:

        $template MyTemplate, "%timestamp% <FQDN> %syslogtag%%msg%"
        $ActionForwardDefaultTemplate MyTemplate

    However, when I put this in place, syslog-ng seems to be unable to categorize messages by facility or priority: messages come in with the FQDN, but everything is put into user.log. When I don't use the custom template, messages are properly categorized under facility and priority, but with the short name. So, in summary, if I manually trick rsyslog into including the FQDN, the facility and priority details are lost to syslog-ng. How can I get rsyslog to do FQDN logging that works properly with a syslog-ng repository?

    rsyslog client config:

        $ModLoad imuxsock.so  # provides support for local system logging (e.g. via logger command)
        $ModLoad imklog.so    # provides kernel logging support (previously done by rklogd)
        $ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
        *.info;mail.none;authpriv.none;cron.none  /var/log/messages
        authpriv.*                                /var/log/secure
        mail.*                                    -/var/log/maillog
        cron.*                                    /var/log/cron
        *.emerg                                   *
        uucp,news.crit                            /var/log/spooler
        local7.*                                  /var/log/boot.log
        $WorkDirectory /var/spool/rsyslog   # where to place spool files
        $ActionQueueFileName fwdRule1       # unique name prefix for spool files
        $ActionQueueMaxDiskSpace 1g         # 1gb space limit (use as much as possible)
        $ActionQueueSaveOnShutdown on       # save messages to disk on shutdown
        $ActionQueueType LinkedList         # run asynchronously
        $ActionResumeRetryCount -1          # infinite retries if host is down
        *.* @syslog-ng1.example.com
        *.* @syslog-ng2.example.com

    syslog-ng configuration (abridged for brevity):

        options {
            flush_lines (0);
            time_reopen (10);
            log_fifo_size (1000);
            long_hostnames (off);
            use_dns (no);
            use_fqdn (yes);
            create_dirs (no);
            keep_hostname (yes);
        };
        source src {
            unix-stream("/dev/log");
            internal();
            udp(ip(0.0.0.0) port(514));
        };
        destination per_host_destination {
            file(
                "/var/log/syslog-ng/devices/$HOST/$FACILITY.log"
                owner("root") group("root") perm(0644)
                dir_owner(root) dir_group(root) dir_perm(0775)
                create_dirs(yes));
        };
        log { source(src); destination(per_facility_destination); };
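    One detail that would explain the facility/priority loss: the custom template omits the leading <PRI> header, which is the only place facility and severity travel in the syslog protocol, so syslog-ng has to file everything under the default user facility. Prepending it should keep both the FQDN trick and the categorization; a hedged sketch, with a literal hostname standing in for the FQDN exactly as in the workaround above:

        $template FwdFqdn,"<%PRI%>%timestamp% myhost.example.com %syslogtag%%msg%"
        *.* @syslog-ng1.example.com;FwdFqdn
        *.* @syslog-ng2.example.com;FwdFqdn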

    Read the article

  • can't ssh from mac to windows (running ssh server on cygwin)

    - by Denise
    I set up an SSH server on a fresh Windows 7 machine using the latest version of Cygwin, and disabled the firewall. I can SSH into it from itself, from a different Windows box (using winssh), and from a Linux VM. In spite of that, I tried to SSH in from two different Macs, and neither would let me! This is the debug output:

        OpenSSH_5.1p1, OpenSSL 0.9.7l 28 Sep 2006
        debug1: Reading configuration data /etc/ssh_config
        debug1: Connecting to 3dbuild [172.18.4.219] port 22.
        debug1: Connection established.
        debug1: identity file /Users/Denise/.ssh/identity type -1
        debug1: identity file /Users/Denise/.ssh/id_rsa type 1
        debug1: identity file /Users/Denise/.ssh/id_dsa type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.5
        debug1: match: OpenSSH_5.5 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.1
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-cbc hmac-md5 none
        debug1: kex: client->server aes128-cbc hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host '3dbuild' is known and matches the RSA host key.
        debug1: Found key in /Users/Denise/.ssh/known_hosts:43
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,password,keyboard-interactive
        debug1: Next authentication method: publickey
        debug1: Trying private key: /Users/Denise/.ssh/identity
        debug1: Offering public key: /Users/Denise/.ssh/id_rsa
        Connection closed by [ip]

    It shows the same output, and fails at the same place, whether or not I have put my public key on the SSH server. Any help would be appreciated; hopefully someone has run into this before?
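    Since the server closes the connection right after the key offer, the server-side log is where the real reason will show up. A common next step is to run a one-off sshd in debug mode on a spare port on the Cygwin box and watch its output while connecting from the Mac (port number is arbitrary):

        # on the Windows/Cygwin box
        /usr/sbin/sshd -d -p 2222

        # from the Mac
        ssh -v -p 2222 user@3dbuild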

    Read the article

  • Sign multiple domains with single Domain Key (dk-filter)

    - by Lashae
    Motivation: The private shopping website Gilt sends periodic update emails from giltgroupe.bounce.ed10.net, yet all of the mails are signed with the domain keys of giltgroupe.com (mailed-by: giltgroupe.bounce.ed10.net; signed-by: giltgroupe.com). My story: I couldn't manage to sign x.com's mail with y.com's domain key using dk-filter under Debian Lenny with Postfix. If I start the dk-filter service with the following arguments:

        DAEMON_OPTS="$DAEMON_OPTS -d x.com,y.com -c nofws -k -i /var/dk-filter/internal_hosts -s /etc/dk-keys.conf"

    dk-filter signs with domain x.com (d=x.com). If I change the daemon arguments to:

        DAEMON_OPTS="$DAEMON_OPTS -d x.com -c nofws -k -i /var/dk-filter/internal_hosts -s /etc/dk-keys.conf"

    then emails sent from y.com are not signed at all. The dk-keys.conf file is as follows:

        *:/var/dk-filter/y.com/mail

    I managed to do the same thing with DKIM, which works perfectly; DK, however, doesn't seem to work. I have no problem signing y.com's emails with y.com's key and x.com's emails with x.com's key, which indicates there is no configuration problem. Do you have any experience or advice on how to make it possible to sign emails from multiple domains with a single chosen domain's key?

    Read the article

  • Varnish returning 503, FetchError (could not get storage)

    - by Archan
    On our current setup we're running into a problem with Varnish. We're running CentOS 5.7 x86_64 xenpv, with cPanel WHM, hosted at VPS.net. Sometimes we receive a Guru Meditation from Varnish, and when we look in the varnishlog with the command varnishlog -d -c -m TxStatus:503 it returns output similar to the following:

        15 VCL_call     c recv
        15 VCL_acl      c NO_MATCH devs
        15 VCL_return   c pass
        15 VCL_call     c hash
        15 Hash         c ****
        15 Hash         c *************
        15 VCL_return   c hash
        15 VCL_call     c pass pass
        15 Backend      c 12 default default
        15 TTL          c 1835862523 RFC 0 -1 -1 1332454056 0 1332454055 375007920 0
        15 VCL_call     c fetch hit_for_pass
        15 ObjProtocol  c HTTP/1.1
        15 ObjResponse  c OK
        15 ObjHeader    c Date: Thu, 22 Mar 2012 22:07:35 GMT
        15 ObjHeader    c Server: Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/0.9.8e-fips-rhel5 mod_bwlimited/1.4 mod_fcgid/2.3.6
        15 ObjHeader    c X-Powered-By: PHP/5.3.9
        15 ObjHeader    c Expires: Thu, 19 Nov 1981 08:52:00 GMT
        15 ObjHeader    c Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        15 ObjHeader    c Pragma: no-cache
        15 ObjHeader    c Content-Type: text/html; charset=utf-8
        15 ObjHeader    c X-Cacheable: NO:Cache-Control=private
        15 FetchError   c chunked read_error: 12 (Could not get storage)
        15 VCL_call     c error deliver
        15 VCL_call     c deliver deliver

    As far as I can gather, we could try increasing the nuke_limit, but we currently have a nuke_limit of 500, and when running varnishstat -1 -f n_lru_nuked we "only" get a total of 1031, even though we have seen the error happen on several pages. When we then run top to see how much memory Varnish is using, it shows only 763m in use, although we've set it to be allowed to use 1200m. Any ideas what the problem might be?
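    For context: "Could not get storage" means Varnish could not allocate room in its storage backend for the object being fetched, which fits nuking no longer freeing enough space. Two startup knobs worth checking against how varnishd is actually launched, shown here as a sketch using the values already mentioned above (and assuming malloc storage is in use):

        varnishd ... -s malloc,1200M -p nuke_limit=500

    Note that malloc storage also carries per-object overhead beyond the configured size, so the 763m reported by top is not a reliable sign that the 1200m store still has free room.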

    Read the article

  • How to set up hosts file for local environment?

    - by n00b0101
    I'm trying to create subdomains on my localhost and am way out of my territory. I'm running MAMP on Mac OS X, and I thought/think I have to do the following (assuming I want to create me.localhost.com and you.localhost.com):

    (1) Edit /private/etc/hosts. Right now it looks like this:

        127.0.0.1       localhost
        255.255.255.255 broadcasthost
        ::1             localhost
        fe80::1%lo0     localhost

    So do I just make it:

        127.0.0.1       localhost
        127.0.0.1       me.localhost.com
        127.0.0.1       you.localhost.com
        255.255.255.255 broadcasthost
        ::1             localhost
        fe80::1%lo0     localhost

    (2) I'm assuming I don't need to mess with DNS at all because it's local, so the hosts file should suffice?

    (3) And then I need to edit my httpd.conf file to include virtual hosts? I tried this, but it's not picking them up:

        NameVirtualHost *
        <VirtualHost *>
            DocumentRoot "/Applications/MAMP/htdocs"
            ServerName localhost
        </VirtualHost>
        <VirtualHost *>
            DocumentRoot "/Applications/MAMP/htdocs/me.localhost.com"
            ServerName me.localhost.com
        </VirtualHost>
        <VirtualHost *>
            DocumentRoot "/Applications/MAMP/htdocs/you.localhost.com"
            ServerName you.localhost.com
        </VirtualHost>

    Not sure if I'm way off base here; help is greatly appreciated!
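    Two Mac-specific details that often bite here, assuming a stock MAMP install: MAMP's Apache reads its own configuration rather than the system one, and Snow Leopard caches DNS lookups, so hosts-file edits may not take effect until the cache is flushed:

        # MAMP's config file (edit this one, not /etc/apache2/httpd.conf)
        /Applications/MAMP/conf/apache/httpd.conf

        # flush the DNS cache after editing /private/etc/hosts
        sudo dscacheutil -flushcache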

    Read the article

  • CSS and JS files not being updated, supposedly because of Nginx Caching

    - by Alberto Elias
    I have my web app working with AppCache, and I would like that when I modify my HTML/CSS/JS files and then update my cache manifest, users accessing my web app get an updated version of those files. If I change an HTML file it works perfectly, but when I change CSS and JS files the old version is still being used. I've been checking everything and I think it's related to my nginx configuration. I have a cache.conf file that contains the following:

        gzip on;
        gzip_types text/css application/x-javascript text/x-component text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon;

        location ~ \.(css|js|htc)$ {
            expires 31536000s;
            add_header Pragma "public";
            add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate";
        }
        location ~ \.(html|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml)$ {
            expires 3600s;
            add_header Pragma "public";
            add_header Cache-Control "max-age=3600, public, must-revalidate, proxy-revalidate";
        }

    And in default.conf I have my locations. I would like to have this caching working on all locations except one; how could I configure this? I've tried the following, and it isn't working:

        location /dir1/dir2/ {
            root /var/www/dir1;
            add_header Pragma "no-cache";
            add_header Cache-Control "private";
            expires off;
        }

    Thanks
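    A likely explanation: nginx checks regex locations (location ~ ...) after prefix locations and lets the regex win, unless the longest matching prefix location carries the ^~ modifier, so the plain /dir1/dir2/ block above never overrides the .css/.js rules. A sketch of the exempted location with that modifier:

        # ^~ stops regex location matching for anything under this prefix,
        # so the long-lived expires rules for .css/.js no longer apply here
        location ^~ /dir1/dir2/ {
            root /var/www/dir1;
            add_header Pragma "no-cache";
            add_header Cache-Control "private";
            expires off;
        }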

    Read the article

  • Isolate clients on same subnet?

    - by stefan.at.wpf
    Given n (e.g. 200) clients in a /24 subnet and the following network structure:

        client 1 \
                  \
                   switch -- firewall
                  /
        client n /

    (in words: all clients connected to one switch, and the switch connected to the firewall). By default, e.g. client 1 and client n can communicate directly through the switch, without any packets ever reaching the firewall, so none of those packets can be filtered. However, I would like to filter the packets between the clients, so I want to disallow any direct communication between them. I know this is possible using VLANs, but then, as I understand it, I would have to put every client in its own network, and I don't even have that many IP addresses: I have about 200 clients, only a /24 subnet, and all clients need public IP addresses, so I can't just create a private network for each of them (well, maybe using some NAT, but I'd like to avoid that). So, is there any way to tell the switch: forward all packets to the firewall, and don't allow direct communication between the clients? Thanks for any hint!
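    If the switch supports it, this is exactly what protected ports (also sold as "port isolation" or "private VLAN edge") do: isolated ports can talk to the uplink but not to each other, with no extra subnets or addresses required. A Cisco-style sketch, assuming the firewall hangs off a separate, unprotected uplink port:

        interface range FastEthernet0/1 - 24
         switchport protected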

    Read the article

  • only root can send out mail by postfix

    - by Arash
    I have Postfix installed and running. The problem is that only root can send email; other users fail to. Here is the log for the user www-data, which the web server application runs as (the same error appears for other users):

        postfix/smtp[32003]: 513765FEB9: to=<[email protected]>, relay=127.0.0.1[127.0.0.1]:11125, delay=2.1, delays=0.07/0/1.7/0.32, dsn=5.0.0, status=bounced (host 127.0.0.1[127.0.0.1] said: 550-Verification failed for <[email protected]> 550-Unrouteable address 550 Sender verify failed (in reply to RCPT TO command))

    Here is /etc/postfix/main.cf:

        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no
        append_dot_mydomain = no
        readme_directory = no
        smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls = yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        mydestination = $myhostname, localhost.$mydomain, localhost
        relayhost = [127.0.0.1]:11125
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/lizard_password
        smtp_sasl_security_options =
        mynetworks = 127.0.0.1/8 [::ffff:127.0.0.1]/104 [::1]/128
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = loopback-only
        myorigin = /etc/mailname
        mydestination = $myhostname, localhost.$mydomain, localhost
        inet_protocols = ipv4
        smtpd_recipient_restrictions = permit_mynetworks,permit_sasl_authenticated,reject_unauth_destination

    And here is the section that I added to /etc/stunnel/stunnel.conf:

        [smtp-tls-wrapper]
        accept = 11125
        client = yes
        connect = smtp.mydomain.com:465

    I appreciate any help.
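    The bounce shows the upstream relay rejecting the envelope sender ("Sender verify failed"), which fits non-root users sending as user@local-hostname, an address the relay cannot verify, while root's sender address presumably resolves. One standard fix is rewriting local senders with generic maps before mail leaves the box; a sketch with placeholder addresses:

        # /etc/postfix/main.cf
        smtp_generic_maps = hash:/etc/postfix/generic

        # /etc/postfix/generic  (then run: postmap /etc/postfix/generic && postfix reload)
        www-data@myhost.localdomain    valid-sender@mydomain.com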

    Read the article

  • Error during SSL installation cPanel/WHM

    - by baswoni
    I have a dedicated server and I am using the install wizard in WHM to install an SSL certificate. I have the following: the certificate, the RSA private key, and the CA certificate. I paste these three elements into the wizard along with the domain, IP address and username, but I get this error: "SSL install aborted due to error: Unable to save certificate key. Certificate verification passed." Have I missed a step? I gave it another go, making sure I was copying and pasting the information correctly, and I now get the following error: "SSL install aborted due to error: Sorry, you must have a dedicated ip to use this feature for the user: username! If you are intending to install a shared certificate you must use the username 'nobody' for security and bandwidth reporting reasons." I get this even though I am using a dedicated IP address. I should also add that this SSL certificate was previously installed in a shared hosting environment at my previous hosting provider; the account with them is still active, but the domain and its contents now reside on the dedicated server. Could this cause problems?
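    "Unable to save certificate key" is frequently a key/certificate mismatch, which is quick to rule out before digging into cPanel itself: the two digests below must be identical (file names are placeholders):

        openssl x509 -noout -modulus -in certificate.crt | openssl md5
        openssl rsa  -noout -modulus -in private.key     | openssl md5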

    Read the article

  • Enabled Network Discovery on Server, and now VNC and Squeezebox clients don't work

    - by Mike Hanson
    I've recently set up a Windows Server 2008 machine. It's running an email server, Squeezebox server, MS SQL Server, etc., and I do remote maintenance with UltraVNC. I had everything working fine. Then the server needed to access a network share on another machine, and I was prompted to turn on network discovery, which I did, choosing the Home option rather than Public. Since doing that, some things have stopped working while others are still fine: shared folders and the email services (ports 25 and 110) are still accessible, but VNC (port 5900) and the Squeezeboxes (port 9000) no longer work. Here's what I've tried so far: checked the network discovery settings to see if anything looked strange; checked the firewall settings, where those ports appear to be open; turned on the Domain/Public firewall entries for Network Discovery (the Private ones were already on); and enabled the Function Discovery Resource Publication and SSDP Discovery services. Any other suggestions?
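    The symptoms fit the VNC and Squeezebox firewall rules being scoped to a profile that is no longer the active one after the network location change. Listing the active profile and, if needed, re-adding the rules for every profile would confirm it (rule names here are placeholders):

        netsh advfirewall show allprofiles
        netsh advfirewall firewall add rule name="VNC" dir=in action=allow protocol=TCP localport=5900 profile=any
        netsh advfirewall firewall add rule name="Squeezebox" dir=in action=allow protocol=TCP localport=9000 profile=any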

    Read the article

  • SharePoint web services not protected?

    - by Philipp Schmid
    Using WSS 3.0, we have noticed that while users can be restricted to access only certain sub-sites of a site collection through permission settings, the same doesn't seem to be true for web services, such as /_vti_bin/Lists.asmx! Here's our experimental setup:

        http://formal/test : 'test' site collection
            - site1 : first site in the test site collection, user1 is a member
            - site2 : second site in the test site collection, user2 is a member

    With this setup, using a web browser user2 can access http://formal/test/site2/Default.aspx but cannot access http://formal/test/site1/Default.aspx. That's what is expected. To our surprise, however, using the code below user2 can retrieve the names of the lists in site1, something he should not have access to! Is that by (unfortunate) design, or is there a configuration setting we've missed that would prevent user2 from retrieving the names of lists in site1? Is this going to be different in SharePoint 2010? Here's the web service code used in the experiment:

        class Program
        {
            static readonly string _url = "http://formal/sites/research/site2/_vti_bin/Lists.asmx";
            static readonly string _user = "user2";
            static readonly string _password = "password";
            static readonly string _domain = "DOMAIN";

            static void Main(string[] args)
            {
                try
                {
                    ListsSoapClient service = GetServiceClient(_url, _user, _password, _domain);
                    var result = service.GetListCollection();
                    Console.WriteLine(result.Value);
                }
                catch (Exception ex)
                {
                    Console.WriteLine(ex.ToString());
                }
            }

            private static ListsSoapClient GetServiceClient(string url, string userName, string password, string domain)
            {
                // NTLM-authenticated binding against the SharePoint Lists web service
                BasicHttpBinding binding = new BasicHttpBinding(BasicHttpSecurityMode.TransportCredentialOnly);
                binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Ntlm;
                ListsSoapClient service = new ListsSoapClient(binding, new System.ServiceModel.EndpointAddress(url));
                service.ClientCredentials.UserName.Password = password;
                service.ClientCredentials.UserName.UserName = (!string.IsNullOrEmpty(domain)) ? domain + "\\" + userName : userName;
                return service;
            }
        }

    Read the article
