Search Results

Search found 17420 results on 697 pages for 'static urls'.

  • Trouble setting up incoming VPN in Microsoft SBS 2008 through a Cisco ASA 5505 appliance

    - by Nils
    I have replaced an aging firewall (a custom setup using Linux) with a Cisco ASA 5505 appliance for our network. It's a very simple setup with around 10 workstations and a single Small Business Server 2008. Setting up incoming ports for SMTP, HTTPS, Remote Desktop etc. to the SBS went fine - they are working as they should. However, I have not succeeded in allowing incoming VPN connections. The clients trying to connect (running Windows 7) are stuck at the "Verifying username and password..." dialog before getting an error message 30 seconds later. We have a single external, static IP, so I cannot set up the VPN connection on another IP address. I have forwarded TCP port 1723 the same way as I did for SMTP and the others, by adding a static NAT route translating traffic from the SBS server on port 1723 to the outside interface. In addition, I set up an access rule allowing all GRE packets (src any, dst any). I have figured out that I must somehow forward incoming GRE packets to the SBS server, but this is where I am stuck. I am using ASDM to configure the 5505 (not the console). Any help is very much appreciated!
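
    For reference, a minimal sketch of how PPTP pass-through is often done on an ASA: enabling PPTP inspection makes the appliance track the TCP 1723 control channel and open the matching GRE pinholes automatically, so no manual GRE forwarding rule is needed. This assumes pre-8.3 NAT syntax, and the internal SBS address (192.168.0.2) is a hypothetical placeholder:

        ! forward the PPTP control channel to the SBS box (address assumed)
        static (inside,outside) tcp interface 1723 192.168.0.2 1723 netmask 255.255.255.255
        ! PPTP inspection opens the GRE data channel dynamically
        policy-map global_policy
         class inspection_default
          inspect pptp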

  • How can I have vhosts with Lighttpd on Windows and keep PHP working through mod_cgi?

    - by Pixelastic
    Hello, I installed Lighty on Windows 7 and managed to get it to correctly serve both static and PHP files (through mod_cgi). At first I got the "No input file selected" message when requesting a .php file, so I updated the doc_root value in my php.ini to match the server.document-root defined in my Lighty config, and PHP stopped complaining. Then I defined a vhost to point all foo.com requests to a specific dir. It worked well for all static files, but when requesting a .php file, mod_cgi was still picking files from the doc_root defined in php.ini, not from the directory I defined for server.document-root in my vhost. I know it's what's supposed to happen - PHP follows the config defined in php.ini - and I have to set this value in my php.ini, otherwise no PHP is processed at all. What I don't understand is how I'm supposed to have virtual hosts with mod_cgi enabled here. I tried adding a [HOST=foo.com] section in the php.ini without any luck. I tried mod_fastcgi but couldn't get it to work at all; I also tried mod_simple_vhost but couldn't get it to handle PHP. I managed to get it working by copying my PHP install to another dir (and changing the doc_root value) and adding a cgi.assign pointing to that install in my vhost, but this is a really hackish way - it means having one PHP install for each virtual host. Note that I'm working on a development machine running Windows; this is not a production server. I just wanted to emulate the final server config locally to test some changes. I googled this problem a lot, but all I can find are people installing Lighty on Windows with mod_cgi, or installing Lighty on Windows with virtual hosts - never anyone who managed to get both.
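
    One common single-install approach, sketched here with hypothetical Windows paths: leave doc_root empty in php.ini and set cgi.force_redirect = 0, so php-cgi trusts the SCRIPT_FILENAME that lighttpd passes for whichever vhost handled the request; per-host document roots then work with one shared PHP install:

        ; php.ini - leave empty so php-cgi uses the path the server passes
        doc_root =
        cgi.force_redirect = 0

        # lighttpd.conf - per-host document roots, one shared PHP install
        $HTTP["host"] == "foo.com" {
            server.document-root = "C:/sites/foo"
        }
        cgi.assign = ( ".php" => "C:/PHP/php-cgi.exe" )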

  • VirtualBox host-only adapter configuration

    - by Xoundboy
    I have VirtualBox 4 running on Win 7 with a CentOS 6 guest VM set up for hosting my dev server. When I'm connected to my home network the guest can be accessed via a static IP address that I configured (192.168.56.2), but not when I'm in the office. I'm guessing that the DHCP server in the office doesn't have a gateway configured for the 192.168.56.x IP range. I read something about the VirtualBox host-only adapter that should allow me to set this guest VM up in such a way that I don't need to be on any network to access the guest from the host using a static IP. I've not been able to find out exactly how to configure this, though. Can anyone give me an example configuration? Thanks.

    UPDATE: Thanks for your responses. I've now set up a single virtual network adapter in VirtualBox and set it to host-only:

        C:\Users\Ben>vboxmanage list hostonlyifs
        Name:            VirtualBox Host-Only Ethernet Adapter
        GUID:            d419ef62-3c46-4525-ad2d-be506c90459a
        Dhcp:            Disabled
        IPAddress:       192.168.56.2
        NetworkMask:     255.255.255.0
        IPV6Address:     fe80:0000:0000:0000:78e3:b200:5af3:2a57
        IPV6NetworkMaskPrefixLength: 64
        HardwareAddress: 08:00:27:00:94:e8
        MediumType:      Ethernet
        Status:          Up
        VBoxNetworkName: HostInterfaceNetworking-VirtualBox Host-Only Ethernet Adapter

    On the guest I've set up eth0 to use the same IP address as the host-only adapter (192.168.56.2), but when I try to log in using PuTTY I still get "Network Error: connection refused". The VirtualBox DHCP server is enabled, but I can't ping the gateway (192.168.56.1) from either host or guest. There's no firewall running on either OS. What next?
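
    One detail stands out in the output above: the guest has been given the same address (192.168.56.2) as the host-only adapter itself, and two interfaces cannot share one IP - that alone would explain the refused connections. A sketch of a non-conflicting CentOS 6 guest config, with the .3 address chosen arbitrarily:

        # /etc/sysconfig/network-scripts/ifcfg-eth0 on the guest
        DEVICE=eth0
        ONBOOT=yes
        BOOTPROTO=none
        IPADDR=192.168.56.3     # anything in 192.168.56.0/24 except the host's .2
        NETMASK=255.255.255.0

    After a "service network restart" on the guest, SSH from host to 192.168.56.3 should work regardless of the office network. (Note also that nothing answers at 192.168.56.1 here, since the adapter itself sits at .2.)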

  • Browser treats domain with www and without www as different sites

    - by user1444680
    I've bought a domain name and hosted it. My browser is storing separate passwords for mydomain.com and www.mydomain.com, and is also caching them separately. I want these two to be considered the same website. The zone records of mydomain.com are:

        A record: "@" points to the IP address of my hosting
        CNAME:    www points to "@"

    As a CNAME signifies an alias, shouldn't the browser understand (like search engines do) that the two URLs refer to the same website? Is it the browser's fault? How can I correct the problem? Do I need to enter some other record for www instead of the CNAME?
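
    Browsers scope saved passwords and cache entries by exact hostname, so this is expected behavior rather than a fault; the usual fix is to pick one canonical name and permanently redirect the other to it. A sketch assuming Apache with mod_rewrite in an .htaccess file, canonicalizing on the bare domain:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^www\.mydomain\.com$ [NC]
        RewriteRule ^(.*)$ http://mydomain.com/$1 [R=301,L]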

  • Run a single PHP site under multiple domains

    - by Acharya
    Hi all, I have a PHP site at xyz.com. Now I want to run the same site under multiple domains, meaning that when somebody opens domain1.com, domain2.com, domain4.com and so on, it should run the code/site that is at xyz.com. I know one way to do this: I can host all these domains on the server where xyz.com is hosted, so all domains will point to the same piece of code. But in that solution I need to add the domains manually. Is there any other way to do this, as I want to add domains dynamically? Thanks in advance!
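
    For a known list of names, ServerAlias covers it; for dynamically added domains, one common trick is to make the site the first (default) virtual host, since any Host header that matches no other vhost lands there. A hedged Apache sketch with a hypothetical document root:

        <VirtualHost *:80>
            # first vhost defined = default: any domain pointed at this
            # server that matches nothing else is served from here
            ServerName xyz.com
            ServerAlias domain1.com domain2.com domain4.com
            DocumentRoot /var/www/xyz
        </VirtualHost>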

  • KVM virtual machine unable to access internet

    - by peachykeen
    I have KVM set up to run a virtual machine (Windows Home Server 2011 acting as a build agent) on a dedicated server (CentOS 6.3). Recently, I ran updates on the host, and the virtual machine is now unable to connect to the internet. The virtual network is running through NAT, the host has an interface (eth0:0) set up with a static IP (virt-manager shows the network and its IP correctly), and all connections to that IP should be sent to the guest. The host and guest can ping one another, but the guest cannot ping anything above the host, nor can I ping the guest from anywhere else (I can ping the host). Results from the guest to another server under my control and from an external system to the guest both return "Destination port unreachable". Running tcpdump on the host and destination shows the host replying to the ping, but the destination never sees it (it doesn't even look like the host is bothering to send it on at all, which leads me to suspect iptables). The ping output matches that, listing replies from 192.168.100.1. The guest can resolve DNS, however, which I find rather odd. The guest's network settings (connection TCP/IPv4 properties) are set up with a static local IP (192.168.100.128), mask of 255.255.255.0, and gateway and DNS at 192.168.100.1. When originally setting up the vm/net, I had set up some iptables rules to enable bridging, but after my hosting company complained about the bridge, I set up a new virtual net using NAT and believe I removed all the rules. The VM's network was working perfectly fine for the last few months, until yesterday. I haven't heard anything from the hosting company, didn't change anything on the guest, so as far as I know, nothing else has changed (unfortunately the list of packages updated has since fallen off scrollback and I didn't note it down).
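
    Since the symptoms point at NAT on the host, a few checks worth running there, sketched under the assumption that the libvirt network is named "default" and uses 192.168.100.0/24 as described:

        sysctl net.ipv4.ip_forward            # should print 1
        iptables -t nat -L POSTROUTING -n     # expect a MASQUERADE rule for 192.168.100.0/24
        # restarting the libvirt network recreates its iptables rules
        virsh net-destroy default && virsh net-start default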

  • What kind of proxy ACL rules should be applied?

    - by user42891
    I am trying to block sites in Squid based on this article. Assuming you wanted to block access to Yahoo (e.g. http://www.yahoo.co.jp, http://www.yahoo.com, http://www.yahoo.co.in), you would ideally want to block all of the above URLs; if I use a regular expression and search for something called yahoo, it seems to get blocked. We are just interested in applying rules that would be most commonly used across all companies: social networking sites (e.g. facebook, orkut), porn sites (e.g. sex), gaming sites (games), movie and song download sites, and sites where users can upload data (e.g. rapidshare). What would be a common set of effective rules for achieving the above?
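
    A hedged squid.conf sketch of the two usual building blocks: dstdomain for whole domains (matches subdomains without the false positives a plain regex can cause) and url_regex for keywords. The lists here are illustrative placeholders, not a vetted policy:

        acl blocked_domains dstdomain .facebook.com .orkut.com .rapidshare.com
        acl blocked_words url_regex -i porn games torrent
        http_access deny blocked_domains
        http_access deny blocked_words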

  • Provider claiming "all web servers in the cloud are automatically kept in sync" - should I be skeptical?

    - by RobMasters
    I'm no expert in cloud computing - I've spent a fair bit of time researching it and various providers, but have yet to get any hands-on experience with it. From what I've read about AWS and auto-scaling EC2 instances, though, it seems as though each instance should be completely decoupled from all other instances. I.e., if content is uploaded to the web server's local filesystem from a custom CMS backend, then that content won't be available if subsequently requested from a different web server in the auto-scaling group. Is that right? I met with a representative of our existing hosting provider recently and he claimed that it isn't a problem that our legacy CMS system is highly dependent on having a local filesystem. He said that all web servers, regardless of how many, would be kept as exact duplicates, so I shouldn't notice any difference compared to our existing setup of a single dedicated server. This smells a little too much like bull fecal-matter to me... should I be skeptical about this? I'm a little worried, because my (non-technical) boss, who ultimately makes the decisions, is all for signing up to this cloud solution because it won't require any extra work. I'm sure they must at least be able to provide this, otherwise they wouldn't be attempting to sell it to us. But at what cost? It sounds as though each web server will always need to be checking the other web server(s) for new static content, which to me sounds like unwanted overhead that'll slow things down. I'd really appreciate it if somebody could clear this up for me. I'm all for switching to AWS and using S3+CloudFront for all static content, but that isn't looking very likely to happen at the moment.

  • Is there an application that fakes being a browser and lets me choose which real browser to use when a URL is opened?

    - by Dzmitry Lahoda
    Is there any application for Windows that does the following: I click a URL in Skype or an HTML file in Explorer. The application is the default "fake" browser, i.e. registered as the default browser. It shows several buttons, each representing an installed or running browser. I can choose a real browser, click it, and the URL opens in the chosen browser. A quick search has not revealed such an application. Context: I work in an environment where some sites only work in specific browsers, and I get clickable URLs from different applications. Sometimes I want to launch a specific browser to use a specific add-in of it against the URL provided. I also have a specific portable "secured" browser I want to launch only for trusted sites.

  • Set up a shared internet connection on VirtualBox with a fixed IP

    - by Tom
    I am a web developer and until recently I was using Ubuntu as my OS. For many reasons, I have switched back to Windows. I still want to keep my server on a Linux platform, so I set up my local server as a virtual machine. Everything works great, but I have struggled a little with the networking. Since I work in different places and visit clients, I connect to all sorts of networks with different settings. That means the possible IP range is very dynamic, which causes issues when I work on my local server. At the moment I have a dynamic IP on my host and a static IP on my guest. That way I can access the server from my host (by adding a record to the hosts file), and I also have an internet connection on the guest. But once I change networks, it no longer works (assuming the network has a different configuration). My question is: how do I set up host-guest networking so that, no matter what network I connect to, I can keep the static IP on the guest - which is registered in the hosts file on my host so I can access the webserver - and still have an internet connection on the guest? Hope it makes sense. Thank you.
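
    The usual pattern for this is two adapters: NAT for internet access (it survives any change of the host's network) plus host-only for the fixed host-to-guest address. A sketch with a hypothetical VM name:

        VBoxManage modifyvm "devserver" --nic1 nat
        VBoxManage modifyvm "devserver" --nic2 hostonly --hostonlyadapter2 "VirtualBox Host-Only Ethernet Adapter"

    Inside the guest, eth0 then takes DHCP from the NAT adapter, and eth1 gets the static 192.168.56.x address that goes into the host's hosts file.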

  • How to handle files that don't need version control in Mercurial

    - by richardh
    I am new to Mercurial, and for the most part do LaTeX reports and statistical calculations in R using .csv and/or .sqlite files. Re LaTeX, all I really care about is the .tex file. Re R, I don't need version control on the .csv or .sqlite files because they are static. When I do 'hg add' for a repo with a .csv and/or .sqlite file, I get a warning like:

        rev2.sqlite: up to 3070 MB of RAM may be required to manage this file
        (use 'hg revert rev2.sqlite' to cancel pending addition)

    So I revert and subsequently use adds like 'hg add -X *.sqlite'. I guess I really have two questions: (1) Should I ignore these warnings? Because these large files are static, can I just add them to the repo, knowing that the diffs will always be empty, and not worry about wasted resources? (2) If I should keep excluding these files from the repo, is there a way I can make this option stick? I.e., add something to my .hgrc file that always appends options like -I *.tex -I *.R to my 'hg add' commands? Thanks!
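
    A per-repository .hgignore usually removes the need for -X/-I juggling entirely; once a pattern is ignored, hg add and hg status simply skip those files. A minimal sketch:

        # .hgignore at the repository root
        syntax: glob
        *.sqlite
        *.csv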

  • Postfix cannot send email

    - by AKLP
    I'd like to mention that I'm really new to this, so please bear with me. I'm trying to set up forum software to send emails via Postfix, but I think my server has port 25 blocked. I tried running these. This works:

        ping alt2.gmail-smtp-in.l.google.com

    These don't:

        telnet alt2.gmail-smtp-in.l.google.com 25
        telnet 66.249.93.114 25

    I tried flushing iptables and then using these rules, but that didn't work either:

        sudo iptables --flush
        sudo iptables -P INPUT ACCEPT
        sudo iptables -P OUTPUT ACCEPT
        sudo iptables -P FORWARD ACCEPT
        sudo iptables -F
        sudo iptables -X

    Telnetting to port 25 on localhost works, but telnetting to any non-local URL gets nothing. mail.log:

        Oct 17 01:20:24 webhost postfix/smtp[3642]: connect to alt2.gmail-smtp-in.l.google.com[2607:f8b0:400e:c03::1a]:25: Connection timed out
        Oct 17 01:20:24 webhost postfix/smtp[3643]: connect to alt2.gmail-smtp-in.l.google.com[2607:f8b0:400e:c03::1a]:25: Connection timed out
        Oct 17 01:20:24 webhost postfix/smtp[3642]: 4744380032: to=<[email protected]>, relay=none, delay=2892, delays=2741/0.03/150/0, dsn=4.4.1, status=deferred (connect to alt2.gmail-smtp-in.l.google.com[2607:f$
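
    Outbound port 25 is commonly blocked at the provider level, in which case no local iptables change will help; the usual workaround is relaying through a smarthost on the submission port. A hedged main.cf sketch, with the relay hostname a placeholder for whatever your provider offers:

        # /etc/postfix/main.cf
        relayhost = [smtp.example-provider.com]:587
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous
        smtp_tls_security_level = may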

  • Lighttpd and PHP not working if htdocs is outside of the Lighttpd folder

    - by Marco83
    I need to set up a simple web server with PHP on Windows XP that a number of different people will use for local testing. I'm using LightTPD 1.4.30-4-IPv6-Win32-SSL and PHP 5.2. So far I've created this folder structure:

        tools/
          LightTPD/
            htdocs/
          PHP/

    I set up PHP as CGI and the document root as server_root + "/htdocs". It works fine (well, it's slow, but I don't want to bother with FastCGI for now :) ). My problem is when I try to put htdocs outside of the LightTPD folder, like this:

        htdocs/
        tools/
          LightTPD/
          PHP/

    I update the document root to server_root + "/../../htdocs" and while static HTML pages work fine, PHP pages stop working (they return "No input file specified"). I literally just change the document root; I didn't change anything in the php.ini or anywhere else. Please also note that I left doc_root, user_dir and cgi.force_redirect at their default values in php.ini, and it works when htdocs is inside LightTPD, but not when I move it outside. Any idea why it's breaking? Here's my lightTPD.conf:

        server.modules = (
            "mod_access",
            "mod_accesslog",
            "mod_alias",
            "mod_cgi",
            "mod_status",
        )
        include "variables.conf"
        include "mimetype.conf"

        # THIS WORKS
        server.document-root = server_root + "/htdocs"
        # THIS DOESN'T
        #server.document-root = server_root + "/../../htdocs"

        server.upload-dirs = ( temp_dir )
        index-file.names = ( "index.php", "index.pl", "index.cgi", "index.cml",
                             "index.html", "index.htm", "default.htm" )
        server.event-handler = "libev"
        url.access-deny = ( "~", ".inc" )
        $HTTP["url"] =~ "\.pdf$" {
            server.range-requests = "disable"
        }
        static-file.exclude-extensions = ( ".php", ".pl", ".cgi" )
        server.errorlog = server_root + "/logs/error.log"

        ######### Options that are good to be but not neccesary to be changed #######
        dir-listing.activate = "enable"

        #### CGI module
        cgi.assign = ( ".php" => server_root + "/../PHP/php-cgi.exe" )

        status.status-url = "/server-status"
        status.config-url = "/server-config"
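
    One plausible culprit, offered as a guess: with the relative root, php-cgi receives a SCRIPT_FILENAME that still contains the "/../../" segments and fails to resolve it (hence "No input file specified"), while lighttpd resolves them fine for static files. An absolute document root sidesteps the question entirely; the path below is hypothetical:

        # absolute path instead of server_root + "/../../htdocs"
        server.document-root = "C:/projects/htdocs"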

  • Case insensitivity for the Apache httpd Location directive

    - by user57178
    I am working with a solution that requires the use of mod_proxy_balancer and an application server that both ignores case and mixes different case combinations in URLs found in generated content. The configuration works; however, I now have a new requirement that causes problems. I need to create a Location directive (as per http://httpd.apache.org/docs/current/mod/core.html#location ) and have the URL-path interpreted case-insensitively. This requirement comes from the need to add authentication directives to the location. As you might guess, users (or the application in question) changing one letter to a capital circumvents the protection instantly. The httpd runs on a Unix platform, so every configuration directive is apparently case-sensitive by default. Should regular expressions in the Location directive work in this case? Could someone please show me an example of such a configuration? In case a regular expression cannot be forced to work case-insensitively, what part of httpd's source code should I look at modifying?
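
    Location/LocationMatch regexes in httpd are PCRE, so the inline (?i) flag makes the whole match case-insensitive without touching the source. A sketch with a hypothetical path and a basic-auth block standing in for your real directives:

        <LocationMatch "(?i)^/protected">
            AuthType Basic
            AuthName "Restricted"
            AuthUserFile /etc/httpd/htpasswd
            Require valid-user
        </LocationMatch>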

  • mod_rewrite to remove part of a URL

    - by Jack
    Someone has incorrectly linked to some of my URLs, causing 404 errors in Google Webmaster Tools. Here is an example:

        Linked URL:  http://www.example.com/foo-%E2%80%8Bbar.html
        Correct URL: http://www.example.com/foo-bar.html

    I would like to 301 redirect any instance of this kind of incorrect linking to the correct URL. I have tried the following, but it generates 404 errors site-wide:

        Options +FollowSymLinks
        RewriteEngine on
        RewriteRule ^foo-(.*)bar\.html$ http://www.example.com/foo-bar\.html? [L,R=301]

    Could anyone let me know what I am doing wrong?
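
    One observation, offered tentatively: %E2%80%8B is a URL-encoded zero-width space, and the pattern above also matches the already-correct foo-bar.html (the (.*) can match nothing), so the rule redirects the canonical URL to itself. Requiring at least one stray character avoids that, assuming nothing legitimate ever sits between "foo-" and "bar":

        Options +FollowSymLinks
        RewriteEngine on
        # .+ forces at least one junk character, so foo-bar.html itself no longer matches
        RewriteRule ^foo-.+bar\.html$ http://www.example.com/foo-bar.html [L,R=301]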

  • How do I test if mod_rewrite is enabled?

    - by user124130
    I'm setting up an environment for WordPress on Apache 2, on a fresh install of Ubuntu 12.04. In order to get friendly URLs working, I'm trying to set up mod_rewrite. I followed some instructions I found on the net and used a2enmod. Now, after restarting Apache, I'd like to check whether the module is actually loaded. The command that I've found for getting a list of loaded modules is this:

        apache2 -t -D DUMP_MODULES

    However, this returns an error:

        apache2: bad user name ${APACHE_RUN_USER}

    So, how do I actually list all loaded modules, or otherwise check whether mod_rewrite has been enabled?
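
    The error appears because Debian/Ubuntu keep variables like ${APACHE_RUN_USER} in /etc/apache2/envvars, which the bare apache2 binary never reads; the apache2ctl wrapper does. Two equivalent ways to run the same check:

        apache2ctl -M | grep rewrite                      # the wrapper sources envvars itself
        . /etc/apache2/envvars && apache2 -t -D DUMP_MODULES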

  • Easiest way to do host name resolution with IPA?

    - by Luke
    We are currently using static LAN IP addresses for our internal, non-public-facing servers. We don't have DHCP configured. We're using Vyatta for our router and firewall; the firewall is configured to be zone-based. We want to set up IPA for centralized authentication (LDAP+Kerberos). IPA requires resolvable host names, and I want to avoid having to enter DNS records by hand. What is the most painless way to make host names resolvable that works with IPA in a Linux-only environment? We aren't using anything to resolve host names now; up until now we've been using static IP addresses and local users on each server. We've looked at BIND, DHCP (does that even solve the problem?), and multicast DNS. At this point we're not sure which solution would work best. Is there another option we haven't considered? Security is very important. We have multiple zones where each zone has very specific or no access to another zone. DNS for public domains is forwarded from Vyatta to our ISP's DNS server.
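
    For what it's worth, FreeIPA can run its own integrated BIND instance and keep host records current as machines enroll, which avoids entering DNS records by hand. A hedged sketch; the forwarder address is a placeholder for your ISP's DNS:

        # on the IPA server: install with integrated DNS
        ipa-server-install --setup-dns --forwarder=203.0.113.53
        # on each client: enroll and let it maintain its own records
        ipa-client-install --enable-dns-updates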

  • Enable mod_deflate per directory level

    - by z1_jabbar
    I am using the following code; when I access the site it compresses all the JSPs under the /abc URL path, but it ignores all the JS and CSS files. I want to compress the JS and CSS files under all the subfolders of the /abc path as well. How can I do this? Thanks!

        <LocationMatch "/abc">
            <IfModule mod_deflate.c>
                SetOutputFilter DEFLATE

                # Don't compress images
                SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip dont-vary

                # Don't compress PDFs
                SetEnvIfNoCase Request_URI \.pdf$ no-gzip dont-vary

                # Don't compress compressed file formats
                SetEnvIfNoCase Request_URI \.(?:7z|bz|bzip|gz|gzip|ngzip|rar|tgz|zip)$ no-gzip dont-vary

                <IfModule mod_headers.c>
                    Header append Vary User-Agent
                </IfModule>
            </IfModule>
        </LocationMatch>
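
    A common alternative, sketched tentatively: select responses by MIME type instead of a blanket filter plus exclusions, which compresses the CSS/JS no matter which subfolder serves them - assuming those URLs really fall under /abc and are served with the usual content types:

        <LocationMatch "^/abc">
            AddOutputFilterByType DEFLATE text/html text/css text/javascript application/javascript
        </LocationMatch>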

  • nginx: URL rewrites and performance

    - by j0nes
    I have a website where I need to change the URL structure. The old URLs look like /olddir/part1_de.htm; the new ones will look like /newdir/sub/category/anotherpage.htm. There are a lot of URL rewrites I need to do - I assume about 500 distinct rewrites in the end. As my website gets quite a lot of traffic, my main concern at the moment is performance. My questions are:

    1. I assume that for each request, the rewrites block will be parsed and the regexes will be evaluated. Am I right?
    2. Will there be a performance penalty if I use these rewrites?
    3. Can nginx handle this?
    4. Are there any "best practices" to follow when doing a lot of rewrites?
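
    Since the old URLs are fixed strings, one commonly suggested alternative to ~500 rewrite regexes is an nginx map: it compiles to a hash lookup, evaluated only when the variable is used, instead of a linear scan over regexes. A sketch using the example paths from above:

        # http context
        map $uri $new_uri {
            default               "";
            /olddir/part1_de.htm  /newdir/sub/category/anotherpage.htm;
            # ... the remaining literal old -> new entries ...
        }

        server {
            if ($new_uri) {
                return 301 $new_uri;
            }
        }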

  • Two virtual hosts - one with a domain name, one with an internal IP?

    - by Abhishek
    Is it possible to have two virtual host configurations for the same server - one with the internal IP address and one with the domain name? Something like:

        <VirtualHost {{internal-ipaddress}}:80>
            .......
        </VirtualHost>

        <VirtualHost {{domain-name}}:80>
            .......
        </VirtualHost>

    Note that the internal IP address and the domain name belong to the same server (or the same server instance). I am asking this because I want to restrict some URLs for external users, redirect all external access to https, and allow everything for internal users (without https).
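
    On a single IP this is usually done with name-based virtual hosts: both vhosts listen on the same address and Apache picks one by the Host header, which is the bare IP for internal users and the domain for external ones. A sketch with hypothetical values (Apache 2.2 syntax; note a determined internal user could still send the external Host header):

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName 10.0.0.5            # matched when users browse by internal IP
            # internal rules: all URLs allowed, no https redirect
        </VirtualHost>

        <VirtualHost *:80>
            ServerName www.example.com     # matched when the Host header is the domain
            RedirectMatch ^/(.*)$ https://www.example.com/$1
        </VirtualHost>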

  • file:// command-line arguments

    - by Cory Grimster
    Is it possible to pass command-line arguments to a program that is invoked via a file:// URL? I'm trying to include Remote Desktop links in a wiki page that lists some servers:

        <a href="file:///c|/windows/system32/mstsc.exe /v:serverName">serverName</a>

    When I omit the argument the link works fine, but when I include it the link doesn't work. I googled around a bit and couldn't find any references to this. I suspect the answer is that file:// URLs simply don't accept arguments (I can think of all kinds of ways to abuse them if they did), but I thought I'd throw it out there in case I've simply got the syntax wrong. Thanks.
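
    Your suspicion is right: a file:// URL is treated as a single path, with no argument semantics. A common workaround for this particular case, sketched with hypothetical paths, is to link to a saved .rdp file instead, since mstsc.exe is the registered handler for that extension:

        serverName.rdp (plain text, one setting per line):
            full address:s:serverName

        wiki link:
            <a href="file:///c|/rdp/serverName.rdp">serverName</a>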

  • Reverse proxy (mod_rewrite) and Rails

    - by SooDesuNe
    I have a front-end Rails app that reverse proxies to any of a number of backend Rails apps depending on the URL. For example, http://www.my_host.com/app_one reverse proxies to http://www.remote_host_running_app_one.com, such that a URL like http://www.my_host.com/app_one/users will display the contents of http://www.remote_host_running_app_one.com/users. I have a large and ever-expanding number of backends, so they cannot be explicitly listed anywhere other than a database. This is no problem for mod_rewrite using a prg:/ rewrite-map reverse proxy. The question is: the URLs returned by Rails helpers have the form /controller/action, making them absolute to the root. This is a problem for the page served by mod_rewrite, because links on the proxied page appear absolute to the domain. I.e., http://www.my_host.com/app_one/controller/action has links that end up looking like /controller/action when they need to look like /app_one/controller/action. Is there a way to fix this server-side, so that the links will be routed correctly?
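
    One server-side option, sketched tentatively: tell each backend app the prefix it is mounted under, so the Rails URL helpers emit /app_one/... themselves; in Rails of that era this is the relative_url_root setting (the value here is assumed from your example):

        # config/environments/production.rb on the app_one backend
        config.action_controller.relative_url_root = "/app_one"

    Alternatively, a rewriting filter on the proxy such as mod_proxy_html can map / to /app_one/ in the HTML as it passes through the front end.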

  • How to block this URL pattern in Varnish VCL?

    - by iTech
    My website is getting badly hit by spambots and scrapers. I am using CloudFlare, but the problem is still there. The problem is spambots accessing non-existent URLs, causing a lot of load on my Drupal backend, which goes all the way through and bootstraps the DB just to serve a 404 error document. I can't simply dish out non-Drupal 404s for all page-not-found errors, as I need Drupal to catch them. Since Varnish is in front, it can check whether the bot is behaving and asking for a valid URL - if not, it can serve the 404 or 403 itself. These bots are causing errors using this pattern:

        http://www.megaleecher.net/http:/www.megaleecher.net/Using_iPhone_As_USB_Mass_S/Using_iPhone_As_USB_Mass_S/Using_iPhone_As_USB_Mass_S/Using_iPhone_As_USB_Mass_S/Using_iPhone_As_USB_Mass_S/Using_iPhone_As_USB_Mass_S/Using_iPhone_As_USB_Mass_S/Using_iPhone_As_USB_Mass_Storage

    Now, please suggest a regex Varnish VCL directive that catches this URL pattern and serves a 404 error from Varnish, preventing it from reaching Apache/Drupal.
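
    A hedged sketch in Varnish 3 syntax: the junk requests all embed a second scheme right after the hostname, so matching that prefix in req.url short-circuits them before Apache/Drupal is touched (in Varnish 4+ the error call becomes return (synth(404, "Not found"))):

        sub vcl_recv {
            # the scraper URLs all start with a second "http:/" in the path
            if (req.url ~ "^/https?:/") {
                error 404 "Not found";
            }
        }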

  • mod_proxy forwarding based on the request Host header

    - by zigzagip
    Let's say I have 3 URLs and they all point to the same reverse proxy. I would like to have the requests forwarded to the web servers behind the proxy based on the Host header:

        webfront1.example.com > reverseproxy.example.com > backend1.example.com
        webfront2.example.com > reverseproxy.example.com > backend2.example.com
        webfront3.example.com > reverseproxy.example.com > backend3.example.com

    Based on what I have read, I can configure reverseproxy.example.com/webfront1 > backend1.example.com, reverseproxy.example.com/webfront2 > backend2.example.com, etc. I am wondering whether proxying based on the Host header is even possible, or if I have used the wrong approach entirely.
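
    Host-based proxying is exactly what name-based virtual hosts provide: one vhost per public name on the proxy, each forwarding everything to its backend. A sketch assuming Apache with mod_proxy (webfront3 follows the same pattern):

        <VirtualHost *:80>
            ServerName webfront1.example.com
            ProxyPass        / http://backend1.example.com/
            ProxyPassReverse / http://backend1.example.com/
        </VirtualHost>

        <VirtualHost *:80>
            ServerName webfront2.example.com
            ProxyPass        / http://backend2.example.com/
            ProxyPassReverse / http://backend2.example.com/
        </VirtualHost>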

  • Fully secured gateway web sites

    - by SeaShore
    Hello, Are there any web sites that serve as gateways for fully encrypted communication? I mean sites with which I can open a secured session and then exchange both URLs and content with other sites in a secure way, through them. Thanks in advance.

    UPDATE: Sorry for not being clear. I was wondering if there is a way to access any site over the Internet (http or https) without letting any intranet proxy read the requested URL or the received content. My question is whether such a site exists, e.g.: I connect to that site via https, I send it a URL in a secured way, the site gets the content from the target site (possibly in a non-secured way) and returns the requested content to me in a secured way.
