Search Results

Search found 18728 results on 750 pages for 'setup deployment'.

Page 692/750 | < Previous Page | 688 689 690 691 692 693 694 695 696 697 698 699  | Next Page >

  • IE and Google Chrome timeout on an IIS6 hosted SSL page that Firefox handles well

    - by Thomas
    Ok, here's the scenario: up until a few weeks ago, none of us noticed anything wrong with the corporate website. People were using it without complaint. Then a client complained that a specific page on the site was timing out for him, and only when he committed a POST action on a form filled with data. I checked it out, and it timed out for me, too. But it only timed out in Google Chrome and IE, not in Firefox. Additionally, the same page, on the same server, but served from a different domain name (one not under the protection of SSL, either) does not time out under any browser.

    To clarify: https://www.mysite.com/changes.php times out on POST, but the same page over http works fine. That distinction (SSL vs. non-SSL) seems to be important, as nothing else has changed. Our certificate is valid, and Firefox detects no errors thrown by the page. I've looked at the Request and Response headers from the page, and they all follow the correct formats.

    Then, after wandering through the site, I noticed a few other things. Both IE and Chrome will frequently time out on any page that is PHP-based. They never time out on static images or HTML files. I've looked at the site from a variety of different servers, my home and work workstations, and my netbook. Because of that, I've discounted a viral infection, as I highly doubt a virus is going to hit every one of the machines to which I have access in exactly the same manner.

    My setup is:

        Server: Win2k3, IIS6, PHP 5.2.9-1.
        Clients: IE7, IE8, Chrome (regular and dev channel): frequent timeouts on PHP pages.
        Firefox 2, Firefox 3: no timeouts. Firebug shows no errors or even lengthy periods serving the pages.

    I've spent 2 days searching for any tech knowledge that I can find, and my search parameters are all too general: everyone has problems loading SSL pages in IE and Chrome, for a wide variety of reasons. The infrequent nature of the timeouts and the fact that there are no errors being reported anywhere is starting to drive me insane. Does anyone have any insight on a problem like this?
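
    One way to take PHP out of the picture and watch the SSL layer by itself (a hedged suggestion; www.mysite.com is the placeholder hostname from the question) is to drive a request through openssl from one of the machines that sees the timeouts:

        openssl s_client -connect www.mysite.com:443
        (after the handshake completes, type a minimal request by hand)
        GET /changes.php HTTP/1.0
        Host: www.mysite.com

    If the handshake itself stalls, the problem sits below PHP; if the handshake is instant but the response hangs, that points back at the application or its interaction with the SSL stack.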

    Read the article

  • Cisco ASA SSL VPN options?

    - by JonH
    Disclaimer: I am not a network admin, so I may be wrong here, but I thought asking would help. I'm a developer, mainly on the .NET framework, helping get a mobile intranet app working. Because this app is only allowed to be used on our network, I can easily run it over our wireless network connection within our building. All is fine and dandy, but we'd also like to be able to run this mobile app at, say, a customer plant using VPN software.

    I thought surely this could be easy, as we exclusively use Samsung S4 phones, so I downloaded Cisco's AnyConnect client for Samsung to let us VPN in... it's right on the Play Store. Sure enough, it doesn't work. I mentioned it to our network admin, who says it's not possible, since we have old technology that doesn't support SSL. He mentions we'd have to upgrade all of our hardware, the firewall, etc. to get this to work. We really need VPN on our phones, not only for this app but for other internal apps, etc. He did mention the following:

        We can't upgrade the software on our ASA, because we don't have enough memory for the new version (the ASA is very old). We can't add more memory, so we would have to get a new firewall, which I have been told I cannot do.

    In addition he also mentioned:

        The Samsung AnyConnect client uses SSL to connect. With the current (old) version of software that our firewall is running, the SSL connections are unreliable. We need different hardware in order to upgrade our firewall, which we are unable to attain at this time. This is the same reason that Windows 8 clients are not able to connect.

    I am curious, hence me asking. VPNs seem to be fairly simple to set up. What other options do I have, aside from making this a public site or a web service that consumes this data over the Internet, which is a complete no-no? What can we do to make this work without that much effort or cost?
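
    For what it's worth, a hedged way to verify the version and memory claims first-hand (assuming you can get even read-only console access to the ASA) is its standard inventory command:

        show version
        ! reports the running ASA software release, RAM, flash, and licensed features

    Knowing the exact release and RAM makes it easier to ask a focused follow-up, e.g. whether an older IPsec (rather than SSL) remote-access VPN is still an option on that hardware.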

    Read the article

  • PCI-DSS compliance for business with only swipe terminals [migrated]

    - by rowatt
    I support the IT infrastructure for a small retail business which is now required to undergo a PCI-DSS assessment. The payment service and terminal provider (Streamline) has asked that we use Trustwave to do the PCI-DSS certification. The problem I face is that if I answer all questions and follow Trustwave's requirements to the letter, we will have to invest significantly in networking equipment to segment LANs and/or do internal vulnerability scanning, while at the same time Streamline assures me that the terminals we have (Verifone VX670-B and MagIC3 X-8) are secure, don't store any credit card information, and are PCI-DSS compliant - so by implication we don't need to take any action to ensure their network security. I'm looking for any suggestions as to how we can most easily meet the networking requirements for PCI-DSS.

    Some background on our current network setup:

    - single wired LAN, also with WiFi turned on (though if this creates any PCI-DSS complexities we can turn it off).
    - single Netgear ADSL router. This is the only firewall we have in place, and the firewall is the out-of-the-box configuration (i.e. no DMZ, SNMP etc). Passwords have been changed though :-)
    - a few Windows PCs and 2 Windows-based tills, none of which ever see any credit card information at all.
    - two swipe terminals. Until a few months ago (before we were told we had to be PCI-DSS certified) these terminals did auth/capture over the phone. Streamline suggested we move to their IP Broadband service, which instead uses an SSL-encrypted channel over the internet to do auth/capture, so we now use that service.
    - we don't do any ecommerce or receive payments over the internet. All transactions are either cardholder present, or MOTO with details given over the phone and typed directly into the terminal.
    - we're based in the UK.

    As I currently understand it, we have three options in order to get PCI-DSS certification:

    1. segment our network so the POS terminals are isolated from all PCs, and set up internal vulnerability scanning on that network.
    2. don't segment the network, and do more internal scanning and more onerous management of PCs than I think we need (for example, though the tills are Windows-based, they are fully managed, so I have no control over software update policies, anti-virus etc). All PCs have anti-virus (MSE) and Windows updates automatically applied, but we don't have any centralised management.
    3. go back to auth/capture over phone lines.

    I can't imagine we are the first merchant to be in this situation. I'm looking for any recommendations for a simple, cost-effective way to be PCI-DSS compliant - either by doing 1 or 2 above with (hopefully) simple and inexpensive equipment/software, or any other ways if there's a better way to do this. Or... should we just go back to the digital stone age and do auth/capture over the phone, which means we don't need to do anything on our network to be PCI-DSS certified?

    Read the article

  • nginx proxy_pass: https links redirect to http

    - by Thermionix
    I'm trying to set up nginx to forward requests to several backend services using proxy_pass; however, several pages load with 404s. The links on the pages have https:// in front, but result in an http request - which ends in a 404 - and I only want these services to be available through https. I've tried varied trailing forward slashes appended to the proxy_pass path and location in proxy.conf, and I've also tried commenting out www.conf (just in case its location blocks could have caused any conflicts), to no effect.

    So if a link is to https://example.com/sickbeard/errorlogs: loaded in a browser, https://example.com/sickbeard/errorlogs gives a 404, while https://example.com/sickbeard/errorlogs/ loads.

    nginx error log:

        2011/11/23 14:21:58 [error] 28882#0: *6 "/var/www/sickbeard/errorlogs/recent.html" is not found (2: No such file or directory), client: 192.168.1.99, server: example.com, request: "GET /sickbeard/errorlogs/ HTTP/1.1", host: "example.com"

    Config files:

    proxy.conf

        location /sickbeard {
            proxy_pass http://localhost:8081/sickbeard;
            include proxy.inc;
        }
        .... more entries ....

    sites-enabled/main

        server {
            listen 80;
            include www.conf;
        }
        server {
            listen 443;
            include proxy.conf;
            include www.conf;
            ssl on;
        }

    www.conf

        root /var/www;
        server_name example.com;
        location / {
            autoindex off;
            allow all;
            rewrite ^/$ /mainsite last;
            location ~* \.(jpg|jpeg|gif|css|png|js|ico)$ {
                expires max;
            }
            location ~ \.php$ {
                fastcgi_index index.php;
                include fastcgi_params;
                if (-f $request_filename) {
                    fastcgi_pass 127.0.0.1:9000;
                }
            }
        }

    proxy.inc

        proxy_connect_timeout 59s;
        proxy_send_timeout 600;
        proxy_read_timeout 600;
        proxy_buffer_size 64k;
        proxy_buffers 16 32k;
        proxy_pass_header Set-Cookie;
        proxy_redirect off;
        proxy_hide_header Vary;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        proxy_set_header Accept-Encoding '';
        proxy_ignore_headers Cache-Control Expires;
        proxy_set_header Referer $http_referer;
        proxy_set_header Host $host;
        proxy_set_header Cookie $http_cookie;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
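
    A hedged sketch of one common fix for this symptom: the backend builds absolute http:// links and redirects because it never learns that the client spoke HTTPS. Passing the original scheme through, and rewriting any absolute Location headers as a fallback, looks like this (mirroring the question's /sickbeard entry; note it overrides the proxy_redirect off in proxy.inc):

        location /sickbeard {
            proxy_pass http://localhost:8081/sickbeard;
            include proxy.inc;
            # tell the backend the original scheme so it generates https:// links
            proxy_set_header X-Forwarded-Proto $scheme;
            # rewrite absolute http:// redirects coming back from the backend
            proxy_redirect http://$host/ https://$host/;
        }

    Whether the backend honours X-Forwarded-Proto depends on the application, so treat this as a starting point rather than a guaranteed fix.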

    Read the article

  • Can't get DHCPd to assign IPs to unknown clients

    - by Jakobud
    I'm using Webmin to admin our DHCPd server, but I'm having a hard time getting it to assign IP addresses to unknown clients. The only way I can get it to assign an IP is to add the machine to DHCPd as a host, so that it gets a static-lease IP assigned to it. I thought "Allow Unknown Clients" was the key, but it still isn't assigning IPs to unknown clients. I have a pool set up so that unknown clients should get an IP between 10.20.0.200 and 10.20.0.249. Here is the config file. What am I missing here?

        allow unknown-clients;

        # Primary DHCP server config
        authoritative;
        ddns-update-style none;

        failover peer "dhcp-failover" {
            primary;
            address 10.20.0.30;
            port 647;
            peer address 10.20.0.25;
            peer port 647;
            max-response-delay 60;
            max-unacked-updates 10;
            load balance max seconds 3;
            mclt 3600;
            split 128;
        }

        subnet 10.20.0.0 netmask 255.255.255.0 {
            allow unknown-clients;
            option subnet-mask 255.255.255.0;
            option broadcast-address 10.20.0.255;
            option routers 10.20.0.100;
            option domain-name "ourdomain.com";
            option domain-name-servers 192.168.10.20;
            default-lease-time 86400;
            max-lease-time 86400;
            option ntp-servers 192.168.10.20;
            option time-offset -25200;
            pool {
                allow unknown-clients;
                failover peer "dhcp-failover";
                max-lease-time 86400;
                range 10.20.0.200 10.20.0.249;
                deny dynamic bootp clients;
            }
            host Server-myserver {
                option host-name "whatever.ourdomain.com";
                hardware ethernet 00:89:D4:35:4F:13;
                fixed-address 10.20.0.23;
            }
        }
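
    Two hedged checks that often shake out issues like this (config and log paths may differ by distro):

        dhcpd -t -cf /etc/dhcpd.conf            # parse the config and report errors without starting the daemon
        tail -f /var/log/messages | grep dhcpd  # watch DHCPDISCOVER/DHCPOFFER exchanges live

    In particular, with a failover pool the daemon logs clearly when it refuses to hand out leases because the peer (10.20.0.25 here) is unreachable and the pool hasn't rebalanced - worth ruling out before digging further into the unknown-clients settings.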

    Read the article

  • Why does my Buffalo router keep sending RDP, NetBIOS, FTP, and HTTP requests?

    - by user192702
    I have the following network setup:

        Buffalo Router (192.168.100.1) < Watchguard XTM21 (192.168.100.13) < PC

    For some reason I keep seeing the following repeating in my XTM21's Traffic Monitor. While I have enabled port forwarding, none of the ports reported below were enabled. Can someone let me know why I'm seeing all of these?

        2013-10-19 23:37:56 Deny 192.168.100.1 192.168.100.13 ftp/tcp 4013 21 0-External Firebox Denied 60 64 (Unhandled External Packet-00) proc_id="firewall" rc="101" tcp_info="offset 10 S 282700472 win 5840" Traffic
        2013-10-19 23:37:59 Deny 192.168.100.1 192.168.100.13 http/tcp 2459 80 0-External Firebox Denied 60 64 (Unhandled External Packet-00) proc_id="firewall" rc="101" tcp_info="offset 10 S 296571237 win 5840" Traffic
        2013-10-19 23:38:02 Deny 192.168.100.1 192.168.100.13 8000/tcp 3244 8000 0-External Firebox blocked ports 60 64 (Internal Policy) proc_id="firewall" rc="101" tcp_info="offset 10 S 298709937 win 5840" Traffic
        2013-10-19 23:38:05 Deny 192.168.100.1 192.168.100.13 8000/tcp 3244 8000 0-External Firebox blocked ports 60 64 (Internal Policy) proc_id="firewall" rc="101" tcp_info="offset 10 S 298709937 win 5840" Traffic
        2013-10-19 23:38:05 Deny 192.168.100.1 192.168.100.13 rdp/tcp 3896 3389 0-External Firebox Denied 60 64 (Unhandled External Packet-00) proc_id="firewall" rc="101" tcp_info="offset 10 S 290482691 win 5840" Traffic
        2013-10-19 23:38:08 Deny 192.168.100.1 192.168.100.13 netbios-ns/udp 2110 137 0-External Firebox Denied 78 64 (Unhandled External Packet-00) proc_id="firewall" rc="101" Traffic
        2013-10-19 23:38:32 Deny 192.168.100.1 192.168.100.13 ftp/tcp 4025 21 0-External Firebox Denied 60 64 (Unhandled External Packet-00) proc_id="firewall" rc="101" tcp_info="offset 10 S 321868558 win 5840" Traffic
        2013-10-19 23:38:35 Deny 192.168.100.1 192.168.100.13 http/tcp 2471 80 0-External Firebox Denied 60 64 (Unhandled External Packet-00) proc_id="firewall" rc="101" tcp_info="offset 10 S 325918731 win 5840" Traffic
        2013-10-19 23:38:38 Deny 192.168.100.1 192.168.100.13 8000/tcp 3256 8000 0-External Firebox blocked ports 60 64 (Internal Policy) proc_id="firewall" rc="101" tcp_info="offset 10 S 327854525 win 5840" Traffic
        2013-10-19 23:38:41 Deny 192.168.100.1 192.168.100.13 8000/tcp 3256 8000 0-External Firebox blocked ports 60 64 (Internal Policy) proc_id="firewall" rc="101" tcp_info="offset 10 S 327854525 win 5840" Traffic
        2013-10-19 23:38:41 Deny 192.168.100.1 192.168.100.13 rdp/tcp 3896 3389 0-External Firebox Denied 60 64 (Unhandled External Packet-00) proc_id="firewall" rc="101" tcp_info="offset 10 S 327101423 win 5840" Traffic
        2013-10-19 23:38:44 Deny 192.168.100.1 192.168.100.13 netbios-ns/udp 2110 137 0-External Firebox Denied 78 64 (Unhandled External Packet-00) proc_id="firewall" rc="101" Traffic
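
    If a spare machine can sit on the Buffalo's LAN segment, a hedged capture narrows down whether the router itself originates the SYNs or is forwarding them for another host (the interface name is an assumption):

        tcpdump -n -i eth0 src host 192.168.100.1 and 'tcp[tcpflags] & tcp-syn != 0'

    Seeing the same FTP/HTTP/RDP SYNs there with no other source behind them would point at something running on the Buffalo itself (e.g. a media/NAS discovery feature probing the network) rather than a forwarded client.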

    Read the article

  • multiple puppet masters set up using inventory

    - by Oli
    I have managed to set up multiple puppet masters, with one puppet master acting as a CA; clients are able to get a certificate from this CA server but use their designated puppet master to get their manifests. See this question for more info: multiple puppet masters. However, there are a couple of things I have had to do to get this working correctly, and I have an error which I'll get to.

    First of all, to get inventory working for a puppet client (PC) connecting to its designated puppet master (PM), I had to copy the CA certs on PM1 to the PM2 ca directory. I ran this command:

        scp [email protected]:/var/lib/puppet/ssl/ca/* [email protected]:/var/lib/puppet/ssl/ca/.

    Once I had done that, I was able to uncomment the SSLCertificateChainFile, SSLCACertificateFile & SSLCARevocationFile section of my rack.conf VH file on PM2. Once I had done this, inventory started to work. Does this sound like an acceptable way to do things?

    Secondly, in the puppet.conf file, I am setting the designated PM server for that client. Unless there is a better way, this is how it'll work in my production setup: PC1 will talk to PM1 and PC2 will talk to PM2. This is where I have an error. When PC2 first requests a cert from the CA on PM1, the cert appears, and then I sign the cert on the CA on PM1. When I then do a puppet agent --test on PC2 (which has server = PM2 in puppet.conf), I get this error:

        Warning: Unable to fetch my node definition, but the agent run will continue:
        Warning: Error 403 on SERVER: Forbidden request: puppet-master2.test.net(10.1.1.161) access to /certificate_revocation_list/ca [find] at :112

    However, if I change the PC2 puppet.conf file to specify server = PM1 and then rerun puppet agent --test, I do not get any errors. I can then revert the change in the puppet.conf file back to server = PM2 and everything seems to run normally. Do I have to set up some kind of ProxyPassMatch on PM2 for requests made from clients to /certificate_revocation_list/* and redirect them to PM1? Or how can I fix this error? Cheers, Oli
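
    On the closing question - proxying certificate traffic from the non-CA master to the CA is indeed a known pattern. A hedged sketch for the Apache/rack vhost on PM2 (the hostname follows the error message; the regex is meant to catch certificate, certificate_request and certificate_revocation_list URLs):

        SSLProxyEngine on
        # hand all certificate-related requests to the CA master (PM1)
        ProxyPassMatch ^/([^/]+/certificate.*)$ https://puppet-master1.test.net:8140/$1

    An alternative that avoids proxying altogether is pointing agents at the CA explicitly with ca_server = puppet-master1.test.net in the [agent] section of puppet.conf, if that fits the setup.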

    Read the article

  • Looking for NTP Server Software for Windows

    - by Simon
    I'm looking for a, preferably free, NTP server for Windows Server 2003/2008. We have already tried the built-in Windows Time Server, but our tests showed that it is not very accurate; we see time differences of up to 500ms. The maximum time difference we can allow for our application is ~100ms.

    We have also used the Meinberg NTPd for Windows. It works great, except we have one big issue with it: if there is a network connection problem between the client and server, the NTP server ends up in a panic state and won't give the client a new time until we restart the NTP service. This is a big issue which has caused us some trouble. It was working fine for months until there was a network problem we didn't notice; we only noticed it after a week, when the time difference was already 30 sec. on the clients. So please suggest some alternative NTP servers for Windows. I did Google this, but I get a lot of unrelated search results.

    Edit: So far the ntpd Windows version has been very accurate and I'd like to stick with it. The only problem is the "panic state" after a network disconnect. Maybe someone here knows what the cause of this is and how to fix it. Also, I forgot to mention that we have a server/client setup like this:

        Server1 -- Server2 -- Server3 -- Client1 -- Client2 -- Client3

    So Server2 gets its time from Server1, Server3 gets its time from Server2, and the clients get their time from Server3. Also, there are clients connected directly to Server2. It is important that all servers and clients have the exact same time (within ~100ms).

    Now there was a network problem with Server3 and its clients. The servers run the ntpd port for Windows, which acts as NTP server and client. The clients have Dimension4 as NTP client. After the network problem, the error message in D4 was something like this (off the top of my head, I don't have the exact error message):

        Server response: The server is in a panic state (could not sync clock)

    I read through the ntpd docs, and the only mention of "panic" is when the time difference is 10000 seconds, which will cause the ntpd server to exit, but this was not the case. Also, there is a "-g" command line switch to disable the panic exit, but it is already set by default. Any ideas what could cause the panic state and how to get rid of it next time?
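
    A hedged configuration-side lead: -g only waives the sanity limit for the first clock adjustment, whereas the limit can be relaxed permanently in ntp.conf with the tinker directive, which may be what the post-outage situation needs (the upstream name follows the question's chain):

        # ntp.conf on Server3
        tinker panic 0          # never exit or refuse to sync because of a large offset
        server Server2 iburst

    Treat tinker panic 0 with care on machines whose clocks may be badly wrong at boot, since it removes that safety check entirely.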

    Read the article

  • Hard freeze on new computer

    - by mphair
        OCZ Gold 3x2GB 240-Pin DDR3 SDRAM PC3-12800
        Palit NE5T240SFHD01 GeForce GT240 1GB 128-bit GDDR5
        ASUS P7P55D-E LGA 1156 P55 SATA 6Gb/s USB 3.0 ATX Intel Motherboard
        Intel Core i7-860 Lynnfield 2.8GHz 8MB L3 Cache LGA 1156 95W Quad-Core Processor
        SAMSUNG 22X Optical drive (DVD+-R/RW)
        CORSAIR CMPSU-620HX 620W ATX12V V2.2
        Windows 7 Ultimate x64

    Brand new system (got it from Newegg two days ago); it booted up, installed Windows, and ran for a day just fine. Yesterday I booted it up in the afternoon and ran various games at full graphics for most of the day. I turned on WoW and played for a few hours, and it hard-stalled: no Num Lock switching, no mouse feedback, but nothing going wrong on the screen. No BSOD. I waited a bit to see if the stall was just temporary, then force-shutdown the computer.

    Upon reboot, everything seemed fine; Windows saw that it didn't shut down properly, but I went into a normal boot and restarted WoW. I was able to load it up and start running around when it froze again. This time when I restarted, it didn't even get to the BIOS. It started (power went on) and just hung with no output to the monitors. I shut it off and went to bed.

    This morning, I turned it on and went into BIOS setup. I'm not terribly experienced with messing with BIOS settings, but I checked them over the best I could. Everything seemed fine, so I booted into Windows and browsed the internet for a bit looking for a solution - hard freeze within 10 minutes. I restarted, went into BIOS, and checked the CPU temperature: 40°C.

    I'm kinda stumped here. Some people say it might be a memory issue, but why would it take so long to come up? Could it have been slowly accessing one memory stick at a time until it just got to a bad one, and that's what is causing it to fail? It seems odd that I don't get a BSOD from a hardware failure. Having the screen just halt with no input or output change seems like a software thing to me. Any thoughts?

    Read the article

  • How Do I Restrict Repository Access via WebSVN?

    - by kaybenleroll
    I have multiple subversion repositories which are served up through Apache 2.2 and WebDAV. They are all located in a central place, and I used this debian-administration.org article as the basis (though I dropped the database authentication for a simple htpasswd file). Since then, I have also started using WebSVN.

    My issue is that not all users on the system should be able to access all the repositories, and the default setup of WebSVN is to allow anyone who can authenticate. According to the WebSVN documentation, the best way around this is to use subversion's path access system, so I looked to create this using the AuthzSVNAccessFile directive. When I do this though, I keep getting "403 Forbidden" messages.

    My files look like the following. I have default policy settings in a file:

        <Location /svn/>
            DAV svn
            SVNParentPath /var/lib/svn/repository
            Order deny,allow
            Deny from all
        </Location>

    Each repository gets a policy file like below:

        <Location /svn/sysadmin/>
            Include /var/lib/svn/conf/default_auth.conf
            AuthName "Repository for sysadmin"
            require user joebloggs jimsmith mickmurphy
        </Location>

    The default_auth.conf file contains this:

        SVNParentPath /var/lib/svn/repository
        AuthType basic
        AuthUserFile /var/lib/svn/conf/.dav_svn.passwd
        AuthzSVNAccessFile /var/lib/svn/conf/svnaccess.conf

    I am not fully sure why I need the second SVNParentPath in default_auth.conf; I just added that today as I was getting error messages as a result of adding the AuthzSVNAccessFile directive. With a totally permissive access file

        [/]
        joebloggs = rw

    the system worked fine (and was essentially unchanged), but as soon as I start trying to add any kind of restriction such as

        [sysadmin:/]
        joebloggs = rw

    instead, I get the 'Permission denied' errors again. The log file entries are:

        [Thu May 28 10:40:17 2009] [error] [client 89.100.219.180] Access denied: 'joebloggs' GET websvn:/
        [Thu May 28 10:40:20 2009] [error] [client 89.100.219.180] Access denied: 'joebloggs' GET svn:/sysadmin

    What do I need to do to get this to work? Have I configured Apache wrong, or is my understanding of the svnaccess.conf file incorrect? If I am going about this the wrong way, I have no particular attachment to my overall approach, so feel free to offer alternatives as well.

    UPDATE (20090528-1600): I attempted to implement this answer, but I still cannot get it to work properly. I know most of the configuration is correct, as I have added

        [/]
        joebloggs = rw

    at the start and 'joebloggs' then has all the correct access. When I try to go repository-specific though, doing something like

        [/]
        joebloggs = rw

        [sysadmin:/]
        mickmurphy = rw

    then I get a permission denied error for mickmurphy (joebloggs still works), with an error similar to what I already had previously:

        [Thu May 28 10:40:20 2009] [error] [client 89.100.219.180] Access denied: 'mickmurphy' GET svn:/sysadmin

    Also, I forgot to explain previously that all my repositories are underneath /var/lib/svn/repository.

    UPDATE (20090529-1245): Still no luck getting this to work, but all the signs seem to point to the issue being with path-based access control in subversion not working properly. My assumption is that I have not configured it correctly.
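
    A hedged sketch of a svnaccess.conf in the shape this is reaching for - deny by default, then grant per repository - using the question's usernames; note the first log line shows access also being checked against a repository called websvn, so any repository WebSVN browses needs its own grant:

        [groups]
        sysadmins = joebloggs, jimsmith, mickmurphy

        [/]
        * =

        [sysadmin:/]
        @sysadmins = rw

    With SVNParentPath layouts, section names must be repository-qualified ([sysadmin:/], not [/sysadmin]), which the question already does; the deny-all [/] block is what usually replaces the permissive default.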

    Read the article

  • UAE and the mysteries of unreachable websites

    - by 0plus1
    I write here because I'm really lost; please stay with me, because it's not easy to explain. A company asked me to set up a private server. Now, I'm a programmer, so I got a solution with technical support and cPanel, which helped me set up everything, and it's working smoothly. I'm by no means a professional sysadmin, but I have a fair knowledge of server configuration; this problem, however, is way over my knowledge, and apparently way over the knowledge of most sysadmins. I really hope that here I'll find someone with enough experience to help me, or at least to give me more insight.

    Now, this company for which I'm consulting operates in the UAE (United Arab Emirates), and from there the server is almost unreachable. It started with the NS not registering in the UAE; after a week that sorted itself out, and now the site is indeed reachable, but it takes almost 2 minutes to load a webpage with one line of text. Emails go into timeout. The domain currently parked there was bought specifically for tests; the main one that was supposed to go there, after a catastrophic week, has been transferred to a shared hosting solution in the UK, and from there it works like a charm.

    Now, after doing some research I discovered that I'm not alone in this: there are several reports of webmasters discovering that their website is not reachable inside the UAE. Mind you, this has nothing to do with the state-wide block of questionable sites, because in that case an error message appears. This seems to be related to the infrastructure of the UAE, which apparently reroutes everything through their own "fake" internet. Apparently new servers with their own IP are not recognized (yet?) by the UAE infrastructure, while shared hosting solutions, seeing that they operate tons of other websites, are more likely to be part of the UAE network.

    Now my questions are:

    1. Does someone have a real explanation for this? The only thing I can think of is that the server is on a new IP that is not yet recognized by the UAE, but that doesn't explain why it loads (even if after 2 minutes). I don't have any help from within the UAE, as the only people there who are "experts" are questionable companies that simply try to sell their own services.
    2. If there really is some kind of block on new servers, is it possible to know beforehand whether a server is reachable from within the UAE? Currently this is not an NS problem, as even accessing the server by its IP results in a 2-minute wait.
    3. Could the problem lie somewhere else? Are there tests I can perform? I'm not physically in the UAE, but I can ask the people there, or use TeamViewer. Could it be some misconfiguration on the server (mind that the site works EVERYWHERE else in the world)?

    Thank you for ANY kind of help.
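
    For question 3, a hedged test worth running from a machine inside the UAE (via TeamViewer) against both the problem server and the working UK shared host, to see at which hop the delay or loss starts (the domain is the question's placeholder):

        (on the Windows machine in the UAE)
        tracert example.co.uk

        (on any Linux box there)
        mtr --report --report-cycles 20 example.co.uk

    If the path to the private server degrades inside the local carrier's network while the path to the UK host does not, that would support the rerouting theory rather than a server misconfiguration.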

    Read the article

  • Vim configuration slow in Terminal & iTerm2 but not in MacVim

    - by Jey Balachandran
    Ideally, I want to use Vim from Terminal or iTerm2. However, it becomes unbearably slow, so I had to resort to using MacVim. There is nothing wrong with MacVim; however, my workflow would be much smoother if I used only Terminal/iTerm2.

    When it's slow:

    - Loading files, in particular Rails files, takes about 1 - 1.5s. Removing rails.vim decreases this time to 0.5 - 1s. In MacVim this is instantaneous.
    - Scrolling through the rows and columns via h, j, k, l progressively gets slower the longer I hold down the keys. Eventually, it starts jumping rows. I have my Key Repeat set to Fast and Delay Until Repeat set to Short.
    - After 10 - 15 minutes of usage, using plugins such as ctrlp or Command-T gets very laggy. I'd type a letter, wait 2 - 3s, then type the next.

    My setup: 11" MacBook Air running Mac OS X Version 10.7.3 (1.6 GHz Intel Core 2 Duo, 4 GB DDR3), and my dotfiles.

        > vim --version
        VIM - Vi IMproved 7.3 (2010 Aug 15, compiled Nov 16 2011 16:44:23)
        MacOS X (unix) version
        Included patches: 1-333
        Huge version without GUI. Features included (+) or not (-):
        +arabic +autocmd -balloon_eval -browse ++builtin_terms +byte_offset +cindent
        -clientserver +clipboard +cmdline_compl +cmdline_hist +cmdline_info +comments
        +conceal +cryptv -cscope +cursorbind +cursorshape +dialog_con +diff +digraphs
        -dnd -ebcdic +emacs_tags +eval +ex_extra +extra_search +farsi +file_in_path
        +find_in_path +float +folding -footer +fork() -gettext -hangul_input +iconv
        +insert_expand +jumplist +keymap +langmap +libcall +linebreak +lispindent
        +listcmds +localmap -lua +menu +mksession +modify_fname +mouse -mouseshape
        +mouse_dec -mouse_gpm -mouse_jsbterm +mouse_netterm -mouse_sysmouse +mouse_xterm
        +multi_byte +multi_lang -mzscheme +netbeans_intg +path_extra -perl
        +persistent_undo +postscript +printer +profile +python -python3 +quickfix
        +reltime +rightleft +ruby +scrollbind +signs +smartindent -sniff +startuptime
        +statusline -sun_workshop +syntax +tag_binary +tag_old_static -tag_any_white
        -tcl +terminfo +termresponse +textobjects +title -toolbar +user_commands
        +vertsplit +virtualedit +visual +visualextra +viminfo +vreplace +wildignore
        +wildmenu +windows +writebackup -X11 -xfontset -xim -xsmp -xterm_clipboard
        -xterm_save
        system vimrc file: "$VIM/vimrc"
        user vimrc file: "$HOME/.vimrc"
        user exrc file: "$HOME/.exrc"
        fall-back for $VIM: "/usr/local/Cellar/vim/7.3.333/share/vim"
        Compilation: /usr/bin/llvm-gcc -c -I. -Iproto -DHAVE_CONFIG_H -DMACOS_X_UNIX -no-cpp-precomp -O3 -march=core2 -msse4.1 -w -pipe -D_FORTIFY_SOURCE=1
        Linking: /usr/bin/llvm-gcc -L. -L/usr/local/lib -o vim -lm -lncurses -liconv -framework Cocoa -framework Python -lruby

    I've tried running without any plugins or syntax highlighting. It opens files a lot faster, but still not as fast as MacVim, and the other two problems still exist. Why is my Vim configuration slow? How can I improve the speed of my Vim configuration within Terminal or iTerm2?
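
    Since this build has +startuptime, a hedged first measurement is to let Vim report where file-load time goes, run once inside Terminal/iTerm2 and once from MacVim for comparison (the file path is illustrative):

        vim --startuptime /tmp/vim-start.log app/models/user.rb
        # then look for the largest jumps in the elapsed-time column:
        less /tmp/vim-start.log

    If the terminal runs blame mostly script names like rails.vim, it's plugin cost; if time disappears between script entries only in Terminal/iTerm2, that points at terminal rendering instead.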

    Read the article

  • pix 501, static route to d-link router (different subnet)

    - by ra170
    I have a PIX 501 Cisco firewall with internal IP 192.168.10.1. I have connected a D-Link router (DIR-655) to the PIX 501. The D-Link router has internal IP 192.168.0.1. The picture looks something like this:

        |pix 501| has 192.168.10.1 ip
        |DIR-655| has 192.168.0.1 ip

        1. |cable modem|----|pix 501|-------|DIR-655|-----PC

        2. PC--------|pix 501|---------|DIR-655|
                         |
                   |cable modem|

    When I'm on the wireless network (DIR-655) with an assigned IP of 192.168.0.x, I can cross the subnet and connect to my firewall at 192.168.10.1 (pic. 1). The problem is that when I'm on the 192.168.10.x network, I can't connect to anything on the 192.168.0.x network (pic. 2). I've tried entering a static route like this:

        route inside 192.168.0.0 255.255.255.0 192.168.10.1 1

    I also tried assigning a static IP to the WAN interface on the DIR-655 (192.168.10.30) and then tried this:

        route inside 192.168.0.0 255.255.255.0 192.168.10.30 1

    But still, I can't connect to 192.168.0.1 or anything on that subnet. Is there a way to set up a static route? Would adding a separate router between the PIX 501 and the DIR-655 help? I would think that a static route like this should take care of it, but it doesn't. This is my route config and NAT:

        (config)# sh route
        outside 0.0.0.0 0.0.0.0 (outside_IP) 1 DHCP static
        outside (outside_IP) 255.255.248.0 (outside_IP) 1 CONNECT static
        inside 192.168.0.0 255.255.255.0 192.168.10.1 1 OTHER static
        inside 192.168.10.0 255.255.255.0 192.168.10.1 1 CONNECT static

    (or route inside 192.168.0.0 255.255.255.0 192.168.10.30 1)

        (config)# sh nat
        nat (inside) 1 192.168.1.0 255.255.255.0 0 0
        nat (inside) 1 192.168.10.0 255.255.255.0 0 0
        nat (inside) 1 0.0.0.0 0.0.0.0 0 0

    I ended up turning the DIR-655 into an access point (turning off DHCP and plugging a cable from the PIX LAN interface into one of the LAN ports on the DIR-655, leaving the WAN port empty). That works, in that the DIR-655 is now on the same subnet and I can access every machine. However, the question remains: why can't I simply route between those two? Would a router between these two help? One of the reasons I ask is that the PIX 501 has only 10 licenses, and I'm now using almost all of them (I have a few computers, iPhones, a PS3, a print server, etc.). I would really appreciate some help! Thanks.

    Read the article

  • Why is Windows Task Scheduler trying to launch multiple instances?

    - by Paul H
    We have a number of Windows scheduled tasks that run on one Server 2008 web server (not R2) which is in a cluster. We recently moved from the original web server cluster to a new web server cluster (Server 2008, not R2). The new web server running the Windows tasks is set up the same as the original, we believe. BUT we now find that on the new server the Task Scheduler seems to want to instantly start each task three times.

    If we set the option to queue up a new task, we get:

        Event ID 324
        Task Scheduler queued instance "{9a1a8411-b042-45ff-8e6b-89874df230d7}" of task "\Client Reporting" and will launch it as soon as instance "{2bcc3df6-ea3b-4453-90c2-75b8b1946388}" completes.

    If we set the option to stop an existing task, we get:

        Event ID 323
        Task Scheduler stopped instance "{e685a910-b32b-414e-85fd-96bbe54314a2}" of task "\Client Reporting" in order to launch new instance "{4db66265-1f51-4ede-8535-ac7c3cb5c4c1}".

    Ticked settings:

    - Allow task to be run on demand.
    - Run task as soon as possible after a scheduled start is missed.
    - Stop the task if running for longer than 1 hour.
    - If the running task does not end when requested, force it to stop.
    - Start the task only if the computer is on AC power.
    - Stop the task if the computer switches to battery power.

    Selected option:

    - If the task is already running: stop the existing instance.

    Note: we moved the tasks from one server to another in the cluster to see if it was the Task Scheduler on that particular server causing the problem. Same behaviour. Could it be something to do with the build of the new servers? We have very similar tasks set up on another server cluster that work OK without all this multiple starting. Comparing those tasks to these, there does not seem to be anything obviously different in the settings available to us through the Task Scheduler options.

    Trigger: the task is scheduled to be triggered daily, once an hour - and to be stopped if it exceeds this time.
    Action: runs a .bat file.

    What could be causing this? Where can we look to see what logic is causing the tasks to start multiple times in this way?
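
    A hedged way to compare the working and misbehaving definitions precisely, rather than eyeballing the GUI, is to export both and diff them (the task name is taken from the event log entries above):

        schtasks /query /tn "Client Reporting" /xml > new-server.xml
        rem run the same export on the old/working cluster, then:
        fc old-server.xml new-server.xml

    The exported XML includes the trigger's repetition settings (Interval/Duration), which are easy to misread in the GUI and are a common source of a task firing several times at once.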

    Read the article

  • Need advice on which PCI SATA Controller Card to Purchase

    - by Matt1776
    I have a major issue with the build of a machine I am trying to get up and running. My goal is to create a file server that will service the needs of my software development, personal media storage and streaming/media server needs, as well as provide a strong platform for backing up all this data in a routine, cron-job-oriented, German-efficiency sort of way. The issue is a simple one: all my drives are SATA drives and my motherboard controller only has 4 ports. Solving the issue has proven to be an unmitigated nightmare.

    I would like advice on the purchase of a 4-port internal SATA / 2-port external eSATA PCI SATA controller card that has the following features and/or advantages:

    - It must function. If I plug it in and attach drives, I expect my system to still make it to the operating system login screen.
    - It must function on CentOS, and I mean it must function WELL and with MINIMAL hassle. If hassle is unavoidable, there shall be CLEAR-CUT and EASY-TO-FOLLOW instructions on how to install drivers and other supporting software.
    - I do not need nor want fakeRAID - I will be setting up any RAID configurations from within the operating system.

    Now, if I am able to find such a mythical device, I would be eternally grateful to whomever is able to point me in the right direction, a direction which I assume will be paved with yellow bricks. I am prepared to pay a considerable sum of money (as SATA controller cards go), so paying anywhere between 60 to 120 dollars will not be an issue whatsoever. Does such a magical device exist?

    The following link shows an "example" of the type of thing I am looking for; however, I have no way of verifying that once I plug this baby in, my system will still continue to function once I've attached the drives, or that once I've made it to the OS, I will be able to install whatever drivers or software programs I need to make it work with relative ease. It doesn't have to be dog-shit simple, but it cannot involve kernels or brain surgery.

        http://www.amazon.com/gp/product/B00552PLN4/ref=pd_lpo_k2_dp_sr_1?pf_rd_p=486539851&pf_rd_s=lpo-top-stripe-1&pf_rd_t=201&pf_rd_i=B003GSGMPU&pf_rd_m=ATVPDKIKX0DER&pf_rd_r=1HJG60XTZFJ48Z173HKY

    So does anyone have a suggestion regarding the subject I am asking about - PCI SATA controller cards? It would help if you've had experience with the component before; that is, after all, why I am asking here - for those who have had experience that I do not have. Bear in mind that this is for a home setup and that I do not have a company credit card. I have a budget with a 'relative' upper limit of about $150.00.
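
    Whatever card is chosen, a hedged post-install sanity check on CentOS - confirming the kernel enumerated the controller and bound a driver to it before trusting it with data - looks like:

        lspci | grep -i -E 'sata|raid|ide'   # is the card on the bus?
        dmesg | grep -i -E 'ahci|sata'       # did a driver claim it and find the disks?

    Cards that present plain AHCI tend to be the "it just functions" category here, since the stock ahci driver handles them with no vendor software at all.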

    Read the article

  • DNS NS and domain clarification

    - by thejartender
    I am really trying to get my home web server up, and I don't seem to be succeeding. My web server within my host system is running my web application and is viewable at the current ISP IP 88.89.190.171 over WAN, indicating that the webapp is fine and that the router ports are forwarded. I have set up a DNS server on this system with a single name server in the network, and I manage to ping it with:

        ping ns.thejarbar.org

    I have registered this private name server at my current hosting provider. My domain (thejarbar.org) is obviously registered, and I have pointed it to my name server. My question here is whether it is simply a matter of waiting on propagation for me to be able to ping my domain. Another way of asking this: does the fact that my name server is discoverable indicate that I have set it up correctly to be used? I have tested with dig and dig -x on my host and have A records for the name server. The server is not the authoritative server, so I am concerned that this may be the reason why my site is not discoverable. Is there anything else I may need to do still?

    I only have one NS currently, but should this succeed, I will be purchasing a more stable secondary system to host my development applications. This is my best chance at getting work (freelance development, due to illness), and this I feel is the last step I need to succeed. Please note that this is temporarily a home server and I will most likely be using it as part of a professional setup very soon; I will likely have to repeat this question in a professional context in a few weeks, as nothing will be different other than the fact that I am going to have a server running elsewhere. I am using bind9 and Ubuntu 12.10, and my records are:

        $TTL 3D
        @       IN      SOA     ns.thejarbar.org. email. (
                                13112012
                                28800
                                3600
                                604800
                                38400 );
        thejarbar.org.  IN      A       10.0.0.42
        @               IN      NS      ns.thejarbar,org.
        yuccalaptop     IN      A       10.0.0.19
        ns              IN      A       10.0.0.42
        gw              IN      A       10.0.0.138
        www             IN      CNAME   thejarbar.org.

        $TTL 3D
        0.0.10.in-addr.arpa.    IN      SOA     ns.thejarbar.org. email. (
                                13112012
                                28800
                                3600
                                604800
                                38400 );
        0.0.10.in-addr.arpa.    IN      NS      ns.thejarbar.org.
        42      IN      PTR     thejarbar.org.
        19      IN      PTR     yuccalaptop.thejarbar.org.
        138     IN      PTR     gw.thejarbar.org.

    My localhost IP is 10.0.0.42. I wish for this to be my host and name server.
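
    Two hedged checks from a machine outside the LAN that separate propagation delay from a zone problem - ask the registered name server directly, then ask a public resolver:

        dig @ns.thejarbar.org thejarbar.org A +norec
        dig @8.8.8.8 thejarbar.org NS

    If the first query returns the A record but the second cannot find the NS, it's delegation/propagation; if the first fails too, the zone or the server's reachability is the issue. (Note also that the zone above publishes 10.0.0.x addresses, which outside clients cannot reach, and that the NS line contains a comma where a dot should be - both worth checking while testing.)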

    Read the article

  • Windows 2003 Storage Server Hanging on Large File Transfers

    - by user25272
    In one of our offices we have a Dell PowerVault 745N NAS device which acts as the main file server. It's running 32-bit Windows 2003 Storage Server SP2 with 3GB RAM. The server holds around 60 users' HOME folders, which are mapped via AD. The office clients are a mix of XP SP3, Vista and Windows 7.

    Occasionally the server will completely hang when transferring large files. When the hang happens, the console becomes unresponsive, with only the mouse active and blank wallpaper. Sometimes stopping the copy frees the server, sometimes not. The hang can last around 20 minutes. During this time other servers also become unresponsive, with blank wallpaper at the console. If you do manage to get onto another server, the taskbar and run commands are unresponsive. This also extends to the client computers, sometimes with Explorer crashing. I'm guessing this is due to the HOME folder mapping. Eventually the NAS server will free up and everything will be back to normal.

    The server is configured as follows:

        PERC 4/DC:
            DATA        2 - 12 SCSI HDD - RAID5
            SHADOWCOPY  2 SCSI HDD - RAID1
        CERC SATA:
            DATA 11     4 SATA HDD - RAID5
            OS          4 SATA HDD - RAID5

    All the drivers and firmware are up to date. I've been through all the diagnostics with Dell and the hardware has come up clean, including full HDD tests on the arrays. The server has NOD32 installed as the AV, but the hanging happens even when it is uninstalled. There are no errors in the event log when this happens, and we don't have any errors logged on any of our ProCurve switches. DNS is fine on the domain, and AD, from what I can tell, is running happily. There are no DFS or NFS shares set up either; all the shares are standard Windows shares.

    I've unchecked the "allow the computer to turn off this device to save power" box under Power Management on the NIC, set Link Speed and Duplex to auto-negotiate 1000, and increased the Receive Descriptors buffer from 256 to 352 (reserving more CPU resource for handling data).

    I've run network traces using Network Monitor and have found the following:

        417 8.078125 {SMB:192, NbtSS:25, TCP:24, IPv4:23} 192.168.2.244 192.168.5.35 SMB SMB:R; Nt Create Andx - NT Status: System - Error, Code = (52) STATUS_OBJECT_NAME_NOT_FOUND

    I've tried different cabling, NICs and switch ports, all with the same result. Transferring files from other servers on the domain is fine. All I haven't done is run CHKDSK on the drives to look for any file system errors. On the Vista clients I have also run netsh interface tcp set global autotuning=disabled, with no result.

    Could it be that the server has a faulty drive, or that the I/O is too much for it to handle? Any ideas why the hang would cause issues with the other servers on the LAN? Many thanks.
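
    On the one unchecked item: a hedged sketch of scheduling that file-system check on the data volume (the drive letter is an assumption):

        chkdsk D: /f /r
        rem /r implies /f and also scans for bad sectors; on an in-use volume
        rem it will offer to run at next reboot, so plan it out of hours

    A full /r pass on a large RAID5 volume can take many hours, which is worth knowing before committing to it during a workday.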

    Read the article

  • What is good usage scenario for Rackspace Cloud Files CDN (powered by AKAMAI) [closed]

    - by Andrew Smith
    I have just set up my website as static pages served via Rackspace CDN / Akamai.

        www.example.co.uk is an alias for d9771e6f24423091aebc-345678991111238fabcdef6114258d0e1.r61.cf3.rackcdn.com.
        d9771e6f24423091aebc-345678991111238fabcdef6114258d0e1.r61.cf3.rackcdn.com is an alias for a61.rackcdn.com.
        a61.rackcdn.com is an alias for a61.rackcdn.com.mdc.edgesuite.net.
        a61.rackcdn.com.mdc.edgesuite.net is an alias for a63.dscg10.akamai.net.
        a63.dscg10.akamai.net has address 63.166.98.41
        a63.dscg10.akamai.net has address 63.166.98.40
        a63.dscg10.akamai.net has IPv6 address 2001:428:4c02::cda8:ecb9
        a63.dscg10.akamai.net has IPv6 address 2001:428:4c02::cda8:ed09

    The HTTP header:

        HTTP/1.0 200 OK
        Last-Modified: Fri, 19 Oct 2012 23:27:41 GMT
        ETag: fdf9e14b77def799e09e8ce815a521da
        X-Timestamp: 1350689261.23382
        Content-Type: text/html
        X-Trans-Id: tx457979be3bd746c2b4e5403a1189cdbc
        Cache-Control: public, max-age=900
        Expires: Sat, 27 Oct 2012 22:18:56 GMT
        Date: Sat, 27 Oct 2012 22:03:56 GMT
        Content-Length: 7124
        Connection: keep-alive

    I am wondering whether it's really the fastest solution to power the website. Investigating through http://www.just-ping.com/, it seems that from many places the ping is very high, and during a quick investigation I found that they use GeoIP to resolve addresses based on WHOIS data, which is not accurate; because of that, from many places the ping is above 300ms (for example, if an ISP's WHOIS data places it in Bangalore, requests are routed to the Bangalore edge even when the resulting ping is 300ms, and stay that way for a month at a time). Meanwhile, just using Amazon Web Services with Route 53's anycast DNS servers and only 4 EC2 instances, India seems to stay below 100ms, while with Akamai it goes above 300ms in some cases - because Route 53 uses BGP.

    By quickly checking Akamai, it seems that they are not adapting to feedback from the traffic: the high ping stays constant even if I keep downloading large files and videos, which is the opposite of what they say on their website. They state that they optimize performance by taking feedback from requests, while it seems they just use GeoIP with per-city resolution (mostly big cities). Because of this, AWS with Route 53 / anycast DNS seems much more reliable, as does EdgeCast, which uses BGP - but I don't know how much it costs to deploy a static website there. Actually, I don't know whether EdgeCast's claims hold up either, because from isolated places there are many errors - their performance comes at the cost of delivery quality, because of BGP switching routes during transfers of large files.

    So I was wondering: what is Akamai really good for? They don't seem to show particular strength in any area I understand so far, except some software-based WAF offered on their website - but what I really care about is the core distribution. So the question is: is Akamai really good for videos? For static websites? So far I have found AWS the most usable, with the most consistent ping and stable transfers.

    Read the article

  • Local Apache on Windows XP not finishing page requests

    - by asgeo1
    I have Apache 2.2.11 installed locally on my Windows XP (SP3) dev machine, which I set up about 3 months ago, and I have just started having a strange problem in the last few weeks. Apache is serving some basic PHP applications like phpMyAdmin. When I make a page request, Apache appears not to finish serving all resources for that page. Firefox shows the "Transferring data from servername..." message, and the page never completes. The same problem happens in Internet Explorer too. I can sometimes tell which resource it is waiting on, because most of the page will render except for some image or similar resource. (Not sure why Firebug doesn't show this.)

    It doesn't have the problem on every page request: for page requests where most of the resources are cached in my browser, the request works with no problems, and pages that are very light work with no problems too. However, if I "hard" refresh the page, I will have this problem (probably because it is requesting all page resources).

    Does anyone know what this could be? It is so strange that it has only just started happening, and I did not make any changes to my system (that I am aware of). I tried playing with the Apache ThreadsPerChild setting, but it did not seem to make a difference.

    UPDATE: I have been doing some more tests. I have been serving the most basic of pages, just a plain HTML file:

        <html>
        <body>
        <h1>testing</h1>
        </body>
        </html>

    If I request this page multiple times in a row, AND each request occurs immediately after the previous one has completed, then 50% of the time the request will time out. However, if I put a 1-2 second gap between requests, there is no problem. This correlates with what I have observed when the browser requests a real application page: when the browser has nothing cached, all of the page resources are requested in a short amount of time - this appears to trigger the problem.

    UPDATE 2: Nathan Long has helped me understand the issue a little better with the server-status page (see below). It is weird; it is like the server has a hiccup sending data to the client. The client sits there waiting forever for data that never arrives. Closing the client process does not terminate the connection on the server - the server still has active threads for each previously attempted connection, but they just sit there, not sending any data and never terminating (even though the client is now closed). Only a restart of the server seems to terminate them.
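
    For anyone reproducing this, a hedged sketch of the server-status setup referenced in UPDATE 2, for httpd.conf on Apache 2.2 (assumes mod_status shipped with the Windows build, which it normally does):

        LoadModule status_module modules/mod_status.so
        ExtendedStatus On
        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>

    Refreshing http://localhost/server-status?refresh=5 during a hang shows each worker's state, which is how the stuck, never-terminating threads described above become visible.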

    Read the article

  • VirtualBox - multiple guests, each with a single bridged adapter?

    - by Martin
    I am running a dedicated server (located at Hetzner, Germany) that runs VirtualBox in order to virtualize several services across multiple virtual guests. Those guests are supposed to:

    1. communicate with each other (for instance, a virtual web server has to access a virtual database server);
    2. be reachable from the dedicated server (for instance, SSH access);
    3. access the Internet via the dedicated server (for instance, to download security updates).

    Currently, this is achieved by having host-only adapter vboxnet0 on the dedicated server and two virtual interfaces on each guest. There, virtual adapter eth0 is attached to vboxnet0 (to achieve (1) and (2)), and virtual adapter eth1 is attached to VirtualBox's NAT (to achieve (3)). Via eth0, the guests have access to a DHCP and a DNS server, both running on the dedicated server (and bound to vboxnet0 there). This allows me to assign custom IP addresses and names. Via eth1, VirtualBox pushes a proper route that enables each guest to access the Internet (via eth0 on the dedicated server).

    This setup with two virtual adapters frequently leads to problems and at least complicates many things. For instance, on the dedicated server there is OpenVPN, which allows access to the virtual machines via the Internet; furthermore, there is Shorewall, which controls the incoming and outgoing network traffic between the Internet, the dedicated server, and the individual virtual machines. Not to mention automatic installation of servers via PXE...

    Therefore, I would prefer to have only one single virtual adapter on each guest, used for both incoming and outgoing connections. As far as I understand, one would basically use a bridged interface for that very purpose. Now the question arises: which interface on the dedicated server would the bridge use? eth0 on the host server is not an option, as this is prohibited by the provider. A virtual interface eth0:0 would not make any sense, as a bridge always uses a physical interface (eth0 in this case). Would it be possible to create a bridged interface in each virtual machine that would "dangle in the air", i.e. without a counterpart on the dedicated server? How would I have to set up the routing on the host server? Please note that the host / dedicated server has only one network adapter (eth0), which is connected to the provider's network. Regards, Martin
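
    A hedged sketch of the single-adapter alternative: keep one host-only adapter per guest and replace VirtualBox's per-VM NAT with plain routing/NAT on the host, so all guest traffic (host access, inter-guest, and Internet) flows over vboxnet0. The subnet below is an assumption:

        # on the dedicated server
        sysctl -w net.ipv4.ip_forward=1
        # masquerade guest traffic (vboxnet0 = 192.168.56.0/24 here) out of eth0
        iptables -t nat -A POSTROUTING -s 192.168.56.0/24 -o eth0 -j MASQUERADE

    The guests then use the host's vboxnet0 address as their default gateway (pushable via the existing DHCP server). This avoids bridging onto eth0 entirely, which also sidesteps the provider's restriction; with Shorewall in place, the masquerade rule would live in its configuration rather than in a raw iptables command.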

    Read the article

  • SSH SOCKS Proxy with iptables REDIRECT

    - by Radium
    I have googled and haven't found the answer to my question. Help me please. There are two servers:

        serverA, with public IP 12.0.0.10 and private IP 10.0.0.5
        serverB, with public IP 20.0.0.11

    I have set up a SOCKS proxy on serverB to serverA:

        ssh -D20.0.0.11:2222 [email protected]

    So when, on my local machine, I specify SOCKS proxy 20.0.0.11:2222 (serverB:2222) in a browser, my external IP while browsing is 12.0.0.10 (serverA's IP). That is OK. Likewise, if I go to http://10.0.0.5 (serverA's private IP), it is also reachable. That is what I need.

    I want to make serverA's private IP available through serverB's public IP on certain ports, but without specifying SOCKS in my browser. I could use ssh port forwarding, but the problem is that I need to forward many ports and do not know which exactly - I only know the range. So when I connect to 20.0.0.11 on any port, for example from the range 3000:4000, I want that traffic to be redirected to 10.0.0.5 on the same port. That is why I decided a SOCKS proxy via SSH plus iptables REDIRECT could help me:

        Client -> serverBPublicIP (any port from range 3000:4000) -> serverAPublicIP -> serverAPrivateIP (the port that was requested on serverBPublicIP)

    On serverB I do:

        ssh -D20.0.0.11:2222 [email protected]
        iptables -t nat -A PREROUTING -d 20.0.0.11 -p tcp --dport 3000:4000 -j REDIRECT --to-port 2222

    But that does not work - when I telnet to 20.0.0.11:3001, for example, I do not see any proxied traffic on serverA. What else should I do? I have tried tcpsocks like this (in the example I am telneting to 20.0.0.11:3001):

        Client -> 20.0.0.11:3001 -> iptables REDIRECT from 3001 --to-port 1111 -> tcpsocks from 1111 to 2222 -> SOCKS proxy from serverB to serverA on port 2222 -> serverA

    But I do not know what to do with the traffic on serverA - how to route it to its private IP. Help me please. I know a VPN would remove all the hell I am trying to create, but I have no ability to use a tun/tap device; it is disabled.
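
    A hedged note on why the plain REDIRECT cannot work: a SOCKS server expects each incoming connection to begin with a handshake naming the real destination, and raw TCP bounced over by iptables never sends one, so the ssh -D listener just sees garbage. That handshake translation is what a transparent redirector such as redsocks provides; a sketch of its config using the question's ports (treat this as a starting point under those assumptions, not a verified setup):

        base {
            daemon = on;
            redirector = iptables;
        }
        redsocks {
            local_ip = 127.0.0.1;
            local_port = 1111;
            ip = 20.0.0.11;
            port = 2222;
            type = socks5;
        }

    The iptables REDIRECT then targets local_port 1111 (much like the tcpsocks attempt), redsocks speaks SOCKS to the ssh -D listener on 2222, and on serverA the forwarded connections originate locally, so reaching 10.0.0.5 is ordinary local routing there.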

    Read the article

  • Can't connect to svnserve on localhost - connection actively refused

    - by RMorrisey
    When I try to connect using Tortoise to my SVN server using:

        svn://localhost/

    Tortoise tells me: "Can't connect to host 'localhost'. No connection could be made because the target machine actively refused it." How can I fix this?

    I am trying to set up a Subversion server on my local PC for personal use. I am running Windows Vista, with SlikSVN and TortoiseSVN installed. I previously had everything working correctly, but I found that I couldn't merge(!), apparently due to a version mismatch between the SVN client and server. Anyway... I now have the following setup.

    I created a repository using svnadmin create; it resides at C:\svnGrove.

    C:\svnGrove\conf\svnserve.conf (# comments omitted):

        [general]
        anon-access=read
        auth-access=write
        password-db=passwd
        #authz-db=authz
        realm=svnGrove

    C:\svnGrove\conf\passwd:

        [users]
        myname=mypass

    My Subversion Server service is pointed to:

        C:\Program Files\SlikSvn\bin\svnserve.exe --service -r C:\svnGrove

    It shows the TCP/IP service as a dependency. I have also tried running svnserve from the command line, with similar results.

    The below is provided by the 'about' option in TortoiseSVN:

        TortoiseSVN 1.6.10, Build 19898 - 32 Bit , 2010/07/16 15:46:08
        Subversion 1.6.12, apr 1.3.8 apr-utils 1.3.9 neon 0.29.3 OpenSSL 0.9.8o 01 Jun 2010 zlib 1.2.3

    The following is from svn --version on the command line (not sure why it says CollabNet; CollabNet was the previous SVN binary that I had set up, and the uninstaller failed to remove everything gracefully):

        svn, version 1.6.12 (SlikSvn/1.6.12) WIN32
        compiled Jun 22 2010, 20:45:29
        Copyright (C) 2000-2009 CollabNet.
        Subversion is open source software, see http://subversion.tigris.org/
        This product includes software developed by CollabNet (http://www.Collab.Net/).

        The following repository access (RA) modules are available:
        * ra_neon : Module for accessing a repository via WebDAV protocol using Neon.
          - handles 'http' scheme
          - handles 'https' scheme
        * ra_svn : Module for accessing a repository using the svn network protocol.
          - with Cyrus SASL authentication
          - handles 'svn' scheme
        * ra_local : Module for accessing a repository on local disk.
          - handles 'file' scheme
        * ra_serf : Module for accessing a repository via WebDAV protocol using serf.
          - handles 'http' scheme
          - handles 'https' scheme

    I disabled my Windows Firewall and CA Internet Security, without success in resolving the issue.

    Edit: The old version of svnserve was still set up as a service after the uninstall, pointed to this path:

        C:\Program Files\Subversion\svn-win32-1.4.6\bin

    I edited the registry key for the service to point to the new path (shown above). Whether I run svnserve as a service or using -d, I do not see an entry for the svn port (3690 by default) in the listing generated by netstat -anp tcp.
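
    A hedged pair of checks to separate a service problem from a network one - run svnserve by hand in daemon mode, then confirm something is actually listening on the default svn port:

        "C:\Program Files\SlikSvn\bin\svnserve.exe" -d -r C:\svnGrove
        netstat -an | findstr 3690

    If netstat shows nothing on 3690 even with svnserve running in the foreground, the daemon is exiting or binding elsewhere (worth re-checking the service's registry edit); if it does listen, the refusal is coming from something between Tortoise and the port.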

    Read the article

  • Windows 2008 IIS 7.0 HTTP to HTTPS Redirect -- Versus IIS 6.0 Mechanism

    - by Dan7el
    This topic - creating a mechanism for redirection from HTTP to HTTPS on a Windows 2008 server running IIS 7.0 - is a much-written-about topic on the Internet. How this is done is really not so much my issue. My issue is more one of explaining why this can't be done with the standard HTTP Redirect module that ships with Windows 2008 IIS 7.0, and why other, more arduous methods are needed instead.

    First, the IIS 6.0 method requires no externally available modules, nor does it require any additional modifications to the web.config or any other development effort. It's outlined here:

        http://blogs.microsoft.co.il/blogs/dorr/archive/2009/01/13/how-to-force-redirection-from-http-to-https-on-iis-6-0.aspx

    And, as you can see, the basic steps are to run the snap-in, get the properties on the site, and make some modifications. Presto, you have the HTTP to HTTPS redirect set up.

    Now, on the IIS 7.0 platform, it doesn't seem this simple. An initial search found the following site:

        http://www.sslshopper.com/iis7-redirect-http-to-https.html

    which has two separate approaches:

    1. Install a separately available Microsoft module - the URL Rewrite Module - and then add XML to the web.config.
    2. Use a custom error page.

    ...there might be other methods, but these are the basic ones, and the first is listed as the primary method. But wait... there exists in IIS 7.0 an HTTP Redirect module. So... why can't I use the HTTP Redirect module to do this very thing? This is really my big question. I need to know this because my management is going to insist I use the HTTP Redirect module and set up the HTTP to HTTPS redirect in a similar fashion to how we do it in IIS 6.0.

    Can someone please explain to me, in clean, simple, easy-to-understand terms that both I and my management can understand, why I need to go get the URL Rewrite Module, install it on the server, and make the web.config changes suggested by the article, instead of simply using the HTTP Redirect module that's already installed on the site? Thanks a bunch.
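
    For scale, a hedged sketch of the web.config fragment the URL Rewrite approach actually amounts to - the widely circulated rule, shown here so the size of the change can be judged (it requires the URL Rewrite module to be installed):

        <system.webServer>
          <rewrite>
            <rules>
              <rule name="HTTP to HTTPS" stopProcessing="true">
                <match url="(.*)" />
                <conditions>
                  <add input="{HTTPS}" pattern="off" />
                </conditions>
                <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>

    The contrast with the HTTP Redirect module is the conditions element: HTTP Redirect sends every request for the site to a fixed destination unconditionally, with no way to test whether the request already arrived over HTTPS, which is what makes it unable to express "redirect only the plain-HTTP traffic" without looping.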

    Read the article

  • How to get an inactive RAID device working again?

    - by Jonik
    After booting, my RAID1 device (/dev/md_d0 *) sometimes ends up in a funny state and I cannot mount it.

        * Originally I created /dev/md0, but it has somehow changed itself into /dev/md_d0.

        # mount /opt
        mount: wrong fs type, bad option, bad superblock on /dev/md_d0,
               missing codepage or helper program, or other error
               (could this be the IDE device where you in fact use
               ide-scsi so that sr0 or sda or so is needed?)
               In some cases useful info is found in syslog - try
               dmesg | tail or so

    The RAID device appears to be inactive somehow:

        # cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md_d0 : inactive sda4[0](S)
              241095104 blocks

        # mdadm --detail /dev/md_d0
        mdadm: md device /dev/md_d0 does not appear to be active.

    The question is, how do I make the device active again (using mdadm, I presume)?

    (At other times it's alright (active) after boot, and I can mount it manually without problems. But it still won't mount automatically even though I have it in /etc/fstab:

        /dev/md_d0      /opt    ext4    defaults    0   0

    So a bonus question: what should I do to make the RAID device automatically mount at /opt at boot time?)

    This is an Ubuntu 9.10 workstation. Background info about my RAID setup is in this question.

    Edit: My /etc/mdadm/mdadm.conf looks like this. I've never touched this file, at least by hand.

        # by default, scan all partitions (/proc/partitions) for MD superblocks.
        # alternatively, specify devices to scan, using wildcards if desired.
        DEVICE partitions

        # auto-create devices with Debian standard permissions
        CREATE owner=root group=disk mode=0660 auto=yes

        # automatically tag new arrays as belonging to the local system
        HOMEHOST <system>

        # instruct the monitoring daemon where to send mail alerts
        MAILADDR <my mail address>

        # definitions of existing MD arrays

        # This file was auto-generated on Wed, 27 Jan 2010 17:14:36 +0200

    In /proc/partitions the last entry is md_d0, at least now, after reboot, when the device happens to be active again. (I'm not sure if it would be the same when it's inactive.)

    Resolution: as Jimmy Hedman suggested, I took the output of mdadm --examine --scan:

        ARRAY /dev/md0 level=raid1 num-devices=2 UUID=de8fbd92[...]

    and added it to /etc/mdadm/mdadm.conf, which seems to have fixed the main problem. After changing /etc/fstab to use /dev/md0 again (instead of /dev/md_d0), the RAID device also gets mounted automatically!
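
    Spelled out as commands, the resolution amounts to this hedged sketch (Ubuntu paths; the array name follows the question):

        sudo mdadm --stop /dev/md_d0        # release the half-assembled device
        sudo mdadm --assemble --scan        # reassemble arrays from their superblocks
        sudo mdadm --examine --scan | sudo tee -a /etc/mdadm/mdadm.conf
        sudo update-initramfs -u            # so boot-time assembly uses the ARRAY line

    Persisting the ARRAY line is what stops the boot-time assembly from inventing the md_d0 name on the next boot, which in turn lets the fstab entry match and mount automatically.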

    Read the article

  • Postfix to deliver mail to a virtual address mailbox

    - by Chloe
    Postfix version 2.6.6, Dovecot version 2.0.9.

    I want to set up Postfix + Dovecot. Dovecot seems to be working: I can authenticate. However, the mailbox is empty - nothing gets delivered! I followed many tutorials on Postfix + Dovecot, but they seem to want to complicate things by using the Dovecot LDA or MySQL. I just want it to be very simple; having Postfix deliver to the virtual mailboxes is fine. I don't need MySQL either. I have already set up a custom password file that Dovecot uses for authentication, and I can log in to POP3 with SSL.

    I can see from the logs that Postfix is delivering to the system user accounts (the catch-all) instead of the virtual users that I set up in Dovecot. The SMTP + SSL authentication seems to work also. I can also see from the logs that Dovecot is checking the correct virtual mail folder. I just need to figure out how to get Postfix to deliver to the virtual mailboxes. I have the following, which I believe is relevant; let me know what other settings you need to see:

        alias_maps = hash:/etc/aliases
        mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
        mydomain = xxx.com
        myhostname = mail.xxx.com
        mynetworks = 99.99.99.99, 99.99.99.99
        myorigin = $mydomain
        relay_domains = $mydestination, xxx.com, domain2.net, domain3.com
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        smtpd_recipient_restrictions = reject_non_fqdn_sender reject_non_fqdn_recipient reject_unknown_recipient_domain permit_sasl_authenticated check_relay_domains
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_path = private/auth
        smtpd_sasl_type = dovecot
        smtpd_sender_restrictions = check_sender_mx_access cidr:/etc/postfix/bogus_mx reject_invalid_hostname reject_unknown_sender_domain reject_non_fqdn_sender
        virtual_mailbox_base = /var/spool/vmail
        virtual_mailbox_domains = xxx.com, domain2.net, domain3.com
        virtual_minimum_uid = 444

    Postfix master.cf:

        submission inet n - - - - smtpd
          -o smtpd_tls_security_level=encrypt
          -o smtpd_sasl_auth_enable=yes
          -o smtpd_sasl_type=dovecot
          -o smtpd_sasl_path=private/auth
          -o smtpd_sasl_security_options=noanonymous
          -o smtpd_sasl_local_domain=$myhostname
          -o smtpd_client_restrictions=permit_sasl_authenticated,reject
          -o smtpd_sender_login_maps=hash:/etc/postfix/virtual
          -o smtpd_sender_restrictions=reject_sender_login_mismatch
          -o smtpd_recipient_restrictions=reject_non_fqdn_recipient,reject_unknown_recipient_domain,permit_sasl_authenticated,reject

    Dovecot related:

        mail_location = maildir:~/Maildir
        passdb {
            args = /etc/dovecot/users.conf
            driver = passwd-file
        }
        service auth {
            unix_listener /var/spool/postfix/private/auth {
                mode = 0660
                user = postfix
            }
        }

    The virtual mail user:

        vmail:x:444:99:virtual mail users:/var/spool/vmail:/sbin/nologin

    Here is the /var/log/maillog when I try to send something to myself:

        Oct 25 22:10:05 308321 postfix/smtpd[2200]: connect from user-999.cable.mindspring.com[99.99.99.99]
        Oct 25 22:10:05 308321 postfix/smtpd[2200]: D224BD4753: client=user-999.cable.mindspring.com[99.99.99.99], sasl_method=LOGIN, [email protected]
        Oct 25 22:10:06 308321 postfix/cleanup[2207]: D224BD4753: message-id=<7DC3C163CFFC483AB6226F8D3D9969D2@dumbopc>
        Oct 25 22:10:06 308321 postfix/qmgr[2168]: D224BD4753: from=<[email protected]>, size=1385, nrcpt=1 (queue active)
        Oct 25 22:10:06 308321 postfix/smtpd[2200]: disconnect from user-999.cable.mindspring.com[99.99.99.99]
        Oct 25 22:10:06 308321 postfix/local[2208]: D224BD4753: to=<[email protected]>, orig_to=<[email protected]>, relay=local, delay=1.1, delays=0.53/0.02/0/0.51, dsn=2.0.0, status=sent (delivered to mailbox)
        Oct 25 22:10:06 308321 postfix/qmgr[2168]: D224BD4753: removed
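
    A hedged reading of that log: relay=local means Postfix classified the recipient's domain as local (it is covered by $mydomain in mydestination), so delivery went to postfix/local and the virtual_mailbox_* settings never came into play; a domain must not be listed as both local and virtual. A sketch of the usual virtual-only arrangement (the map file name and its contents are illustrative assumptions; the uid/gid follow the vmail account above):

        # main.cf - take the domain out of mydestination, serve it as virtual
        mydestination = $myhostname, localhost.$mydomain, localhost
        virtual_mailbox_domains = xxx.com, domain2.net, domain3.com
        virtual_mailbox_base = /var/spool/vmail
        virtual_mailbox_maps = hash:/etc/postfix/vmailbox
        virtual_uid_maps = static:444
        virtual_gid_maps = static:99

        # /etc/postfix/vmailbox - one line per mailbox; trailing slash = Maildir
        #   user@xxx.com    xxx.com/user/
        # then: postmap /etc/postfix/vmailbox && postfix reload

    With that in place, the log should show relay=virtual instead of relay=local for mail to those domains.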

    Read the article
