Search Results

Search found 17420 results on 697 pages for 'static urls'.


  • Routing multiple static IPs from ISP at the cable modem?

    - by Jakobud
    I'm taking over IT responsibilities from a previous IT guy. We have a 50 Mbps cable modem connection from Comcast along with 5 static IP addresses: XXX.XXX.XXX.180, XXX.XXX.XXX.181, XXX.XXX.XXX.182, XXX.XXX.XXX.183 and XXX.XXX.XXX.184. We are in the process of replacing our firewall machine. Currently the firewall box is the only thing connected to the cable modem; however, the cable modem has multiple Ethernet ports on it, similar to a router. I have assembled a new firewall machine and it's time to start testing and configuring it, which means I also need it plugged into the cable modem (remember, it has multiple Ethernet ports).

    So now, with multiple computers plugged into the cable modem, how does the cable modem know where to route the traffic? If some request on the internet is made to XXX.XXX.XXX.181 and reaches our cable modem, how does the cable modem know to which connected computer that traffic is supposed to be sent? Looking at the web interface for the cable modem, there doesn't seem to be anything special set up on it with regard to routing or NATing IP addresses. Is that because when there is only one computer connected to the modem, all traffic is sent to it by default? Now that I am going to (temporarily) have multiple computers plugged into the cable modem, do I need to specify routing or NAT rules on the modem itself? I am going to speak to Comcast about this next, but I figured I'd ask here first just so I can get a better grasp on how this type of thing generally plays out.

    Read the article

  • Apache - How to disable gzip content encoding (eg DEFLATE) for one set of URLs?

    - by Rory
    I have an Ubuntu Apache web server and I have enabled mod_deflate to gzip all the content. However, there's one folder I'd like to disable mod_deflate for. I was going to do something like this:

        <Location /myfolder>
            RemoveOutputFilter DEFLATE
        </Location>

    But that doesn't work. Rationale: I am trying to debug an XMLRPC server and I am using Wireshark to see what gets passed in the HTTP requests; since the replies are gzipped, I can't see what's going on.
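
    (A minimal sketch of one way this is often handled, assuming DEFLATE is enabled globally as on a stock Ubuntu install and that mod_env is available: mod_deflate honours the no-gzip environment variable, so setting it for the folder should skip compression there. The folder name is taken from the question.)

        <Location /myfolder>
            # Tell mod_deflate not to compress responses for this path
            SetEnv no-gzip 1
        </Location>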

    Read the article

  • How to prevent mod_proxy from rewriting redirects into absolute URLs?

    - by Yang
    I have: nginx (port 80) reverse-proxying to apache2 (port 88) reverse-proxying to a web app (port 5001). However, when the web app responds with a redirect like Location: /foo, apache2 rewrites this into Location: http://host.com:88/sub/foo, even though port 88 is publicly inaccessible. I'd like it to just redirect to the relative URL Location: /sub/foo. Any ideas? My apache config (using mod_proxy_http, mod_proxy_html, mod_substitute):

        <Location /notes/>
            Allow from all
            ProxyPass http://127.0.0.1:5001/
            SetOutputFilter proxy-html
            ProxyPassReverse /
            ProxyHTMLURLMap / /notes/
            RequestHeader unset Accept-Encoding
            AddOutputFilterByType SUBSTITUTE application/atom+xml
            Substitute "s|127.0.0.1:5001|host.com/notes|"
        </Location>
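
    (A sketch of one direction sometimes taken here, not a confirmed fix: give ProxyPassReverse the backend URL so Location: headers referencing the backend are mapped back onto the public prefix, and pin Apache's canonical name to the public host so any absolute URLs it builds omit the :88 port. Placement mirrors the question's config; host.com stands in for the real name.)

        # Make self-referential URLs use the public host, not the :88 listener
        ServerName host.com
        UseCanonicalName On

        <Location /notes/>
            ProxyPass http://127.0.0.1:5001/
            # Map backend redirects back onto the public /notes/ prefix
            ProxyPassReverse http://127.0.0.1:5001/
        </Location>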

    Read the article

  • Google not indexing new forum

    - by Tom Gullen
    We installed a new forum a few months ago. The URL is: https://www.scirra.com/forum I've 301'd the old topics/threads, and included all the new URLs in the sitemap. Yet they still are not appearing. Webmaster Tools is showing 139,512 URLs submitted and 50,544 URLs indexed, and has been stuck there for quite some time. There has also been a massive drop in indexed pages since we updated the forum. Any help much appreciated.

    Read the article

  • In Google Webmaster Tools we have 3 sitemaps attributed to 1 domain

    - by Frank
    Thanks for your advice and help ahead of time. I have a website that has been on the internet for almost 10 years, created in Microsoft FrontPage, with over 900 pages. Currently in Google Webmaster Tools it shows up as 2 domains and 3 sitemaps: http://www.example.com, example.com and hostedsitemaps.com. Furthermore, since we were having a hard time placing the XML sitemap on our site (FrontPage issues), we decided to hire pro-sitemaps.com to create, host and upload the XML file, which they did. Thus, I have another site, hostedsitemaps.com, in our Webmaster Tools for the site.

    hostedsitemaps.com shows: 900 URLs submitted, 800 indexed. Crawl errors and search queries: no data available.
    http://www.example.com shows: 889 URLs submitted, 1 URL indexed. Crawl errors: 14 soft 404, 796 not found. Search queries: 8104.
    example.com shows: 889 URLs submitted, 1 URL indexed. Crawl errors: 48 soft 404, 91 not found. Search queries: 8104.

    My questions and need for help are as follows:
    1. Why are our domain-based sites in Webmaster Tools (example.com and http://www.example.com) showing only 1 URL indexed while the hosted sitemap has 800 indexed?
    2. Should we have 3 domains configured for this "one" domain in Google Webmaster Tools?
    3. Should we eliminate/delete the hosted sitemap from Webmaster Tools completely and take off that XML sitemap?
    4. Does having example.com and http://www.example.com impact web ranking?
    5. Any other thoughts or help in this very complicated matter for us. Thanks.

    Read the article

  • Moving from http to https - Google webmaster tools | Bing webmaster tools

    - by user2240778
    I'm moving from http to https for my entire site. The site is currently added to Google Webmaster Tools as www.example.com and all the pages are indexed as http. How do I go about moving to the new https URLs in Google Webmaster Tools? Do I just submit an updated sitemap which has the https URLs, or do I add a new site as https://www.example.com and submit the sitemap with https URLs? All the http URLs are set to redirect to their https counterparts.
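
    (A minimal sketch of the kind of http-to-https redirect the question describes, assuming an Apache front end, which the question does not actually state; the hostname follows the question's example.)

        <VirtualHost *:80>
            ServerName www.example.com
            # Permanently (301) redirect every plain-HTTP request to HTTPS
            Redirect permanent / https://www.example.com/
        </VirtualHost>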

    Read the article

  • I run Webmin and I want it to be accessed via two URLs, both using ProxyPass in Apache

    - by user36644
    This is what I am trying to do:

        NameVirtualHost *

        <VirtualHost *>
            ServerName testsite.org
            ServerAdmin [email protected]
            DocumentRoot /var/www/
        </VirtualHost>

        <VirtualHost *>
            ServerName panel.testsite.org
            ProxyPass / http://panel.testsite.org:10000/
            ProxyPassReverse / http://panel.testsite.org:10000/
        </VirtualHost>

        <VirtualHost 12.34.56.78>
            ServerName newsite.com
            ServerAdmin [email protected]
            DocumentRoot /var/newsite/
        </VirtualHost>

        <VirtualHost 12.34.56.78>
            ServerName panel.newsite.com
            ProxyPass / http://panel.newsite.com:10000/
            ProxyPassReverse / http://panel.newsite.com:10000/
        </VirtualHost>

    The problem is that it won't accept the second vhost with the IP 12.34.56.78, because it says one already exists. panel.newsite.com and newsite.com have the same IP... so I am not sure how I can make it so that only the URL "panel.newsite.com" gets proxy-passed to port 10000, but no other URL on newsite.com.
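
    (A sketch of the usual prerequisite for this layout, assuming Apache 2.2: several name-based vhosts can only share 12.34.56.78 if that address is also declared with NameVirtualHost, otherwise the overlap warning appears and the first block wins. Apache 2.4 no longer needs the directive.)

        # Declare the address as name-based, alongside the existing NameVirtualHost *
        NameVirtualHost 12.34.56.78

        <VirtualHost 12.34.56.78>
            ServerName newsite.com
            DocumentRoot /var/newsite/
        </VirtualHost>

        <VirtualHost 12.34.56.78>
            ServerName panel.newsite.com
            ProxyPass / http://panel.newsite.com:10000/
            ProxyPassReverse / http://panel.newsite.com:10000/
        </VirtualHost>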

    Read the article

  • How to let mod_wsgi only handle certain URLs under Apache?

    - by Frederik
    I have a Django app that handles "/admin/" and "/myapp/". All the other requests should be handled by Apache. I've tried using LocationMatch but then I'd have to write a negative regex. I've tried WSGIScriptAlias with the /admin/ prefix but then the wsgi_handler receives the request with the /admin/ part cut off. Is there a cleaner way to make mod_wsgi only handle certain requests?
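
    (A sketch of one layout the mod_wsgi documentation describes for this kind of split, offered as an assumption rather than a verified fix: mount the application with WSGIScriptAlias and use Alias for the paths Apache should serve itself, since Alias-mapped paths take precedence. The file-system paths below are hypothetical.)

        WSGIScriptAlias / /srv/mysite/mysite.wsgi

        # Served directly by Apache, never handed to the WSGI application
        Alias /static/ /srv/mysite/static/
        <Directory /srv/mysite/static>
            Order allow,deny
            Allow from all
        </Directory>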

    Read the article

  • Make shortened and long urls play together on the same domain (RewriteRule).

    - by Renato Renato
    Long story short, I want to have both example.com/aJ5 and example.com/any-other-url working together. I'm using Apache and I'm not very good at writing regexes. I already have a global RewriteRule which sends everything to the app entry point. What I need is to tell Apache: if length($path) is <= 5 chars, then rewrite to another location. I know I can use {1,5}-style syntax in a regex, but I don't really know if that's what I'm looking for. I'd like to implement this at the web-server level rather than the PHP level. Any help is appreciated.
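
    (A minimal sketch of the {1,5} idea, assuming short codes are purely alphanumeric; the handler and entry-point file names are hypothetical stand-ins for the question's setup, and the short rule is tried first.)

        RewriteEngine On
        # 1-5 alphanumeric characters: treat as a short code
        RewriteRule ^/?([A-Za-z0-9]{1,5})$ /short.php?code=$1 [L,QSA]
        # Everything else (that is not a real file) goes to the existing entry point
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^ /index.php [L]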

    Read the article

  • Force google to reindex

    - by Matthias
    I changed the structure of my URLs. The pages are indexed by Google and have the following structure: http://mypage.com/myfolder/page.aspx The new structure is: http://mypage.com/page.aspx Now all the URLs that Google knows are wrong. How can I tell Google to reindex, and that the structure has changed? Internally I redirect in ASP.NET when the URL contains myfolder, but I want Google to update the URLs.

    Read the article

  • How do I fix the Google Webmaster Tools warning: "URL not followed?"

    - by user3611500
    A few days after submitting my sitemap to Google, I received this warning: When we tested a sample of URLs from your Sitemap, we found that some URLs redirect to other locations. We recommend that your Sitemap contain URLs that point to the final destination (the redirect target) instead of redirecting to another URL. The example URL Google gave me is http://iketqua.net/?_escaped_fragment_=CIDTKT/mien-trung/xo-so-kon-tum I checked all possible things that I could think of, but still can't figure out what the warning is about! My sitemap: http://iketqua.net/sitemap.xml

    Read the article

  • How to configure URLs for one server using wildcard certificates?

    - by Amit
    Hi, we have a wildcard certificate installed in our production environment. One of our clients wants his name to appear in the URL (e.g. companyname.sitename.net). How should we facilitate this? Do we need to make any entries for this in DNS? If yes, can you please let me know about it? I need to set this up before Friday PST; any help with this is highly appreciated. Thanks.

    Read the article

  • once VPNed into pfSense, unable to hit the public URLs of my websites - they are routed to the pfSense box

    - by Sean
    I have a pfSense box set up as the firewall/router/VPN appliance at my colo. Once I VPN into the colo (either PPTP or OpenVPN; PPTP preferred due to multiple clients and ease of configuration), I am able to hit all my servers by their private 10.10.10.x IPs and am able to browse the public internet without issue. When I try to hit the URL of a domain hosted by one of my servers, I am prompted for credentials. If I log in using the pfSense credentials, I'm connected to pfSense as if I'd used its internal IP. If I hack my hosts file to point the URL at the server's private IP, it works fine, but this is obviously not a good solution. To recap: not connected to VPN, www.myurl.com works; connected to VPN, www.myurl.com never makes it to the correct server, but is sent only to the pfSense box. I'm sure it's something small that I've missed in the pfSense config.

    Read the article

  • How can one use online backup with large amounts of static data?

    - by Billy ONeal
    I'd like to set up an offsite backup solution for about 500GB of data that's currently stored between my various machines. I don't care about data retention rates, as this is only a backup of, not primary storage for, my data. If the backup is stored on crappy non-redundant systems, that does not matter. The data set is almost entirely static, and mostly consists of things like installers for Visual Studio and installer disk images for all of my games. I have found two services which meet most of this: Mozy and Carbonite. However, both services impose low bandwidth caps, on the order of 50kb/s, which prevent me from backing up a dataset of this size effectively (it would take somewhere on the order of 6 weeks), despite the fact that I get multiple MB/s upload speeds everywhere else from this location. Carbonite has the additional problem that it ignores pretty much every file in my backup set by default, because the files are mostly ISO files and VMDK files, which aren't backed up by default. There are other services such as EC2 which don't have such bandwidth caps, but such services typically store data on highly redundant servers and therefore cost on the order of 10 cents/GB/month, which is insanely expensive for storage of this kind of data set. (At $50/month I could build my own NAS to hold the data, which would pay for itself after ~2-3 months.) (To be fair, they're offering quite a bit more service than I'm looking for at that price, such as public HTTP access to the data.) Does anything exist meeting those requirements, or am I basically hosed?

    Read the article

  • I've changed my URL schema. How do I tell Google to index the new schema and forget the old one?

    - by growse
    I had a site where the URLs were constructed like this: /index.php/Topic and /index.php/AnotherTopic. These were indexed in Google, and search results were returned that pointed to them. However, I've recently replatformed that site and reconfigured it so the above URLs become /index.php?title=Topic and /index.php?title=AnotherTopic. The original URLs are returning 404s. The site links to the correct URL schema internally, but Google is retaining the original schema in its search results. I've updated and resubmitted the sitemap, which only contains the new schema. Also, Google's Webmaster Tools is going slightly bananas at the fact that there's now a spike in 404 errors in its crawl results. What would be the best approach to get Google to 'forget' about the old schema and instead index the new schema? Should I try blocking /index.php/ in robots.txt? Should I be returning 301 codes instead of 404 for the original URLs?
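
    (A minimal sketch of what the 301 option could look like, assuming the site sits behind Apache with mod_rewrite, which the question does not state; placed in the virtual host config so the URL path keeps its leading slash, with paths taken from the question.)

        RewriteEngine On
        # Permanently redirect the old path-style URLs to the new query-string form
        RewriteRule ^/index\.php/(.+)$ /index.php?title=$1 [R=301,L]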

    Read the article

  • How can I track hits to areas of my web application?

    - by Tyson
    We have a growing web application, and we currently use Google Analytics and Chartbeat to track usage and engagement (although we're open to alternatives). Unfortunately, both are geared towards content-based sites where everything is about the URL. Our URLs contain object IDs, making them less useful independently, and causing us to grow beyond Google Analytics' 50,000 unique URLs per day. How can we track hits to areas of our web application, essentially ignoring parts of the URLs?

    Read the article

  • Can I tell Chrome not to redirect mistyped URLs?

    - by Nathan Long
    If I mistype a URL, Chrome sometimes redirects me to a search. For instance, typing "example_url_i_sometimes_mistype.conm" into the location bar gets me: Your search - example_url_i_sometimes_mistype.conm - did not match any documents. Not only is this annoying, but if the mistyped URL was one on private DNS, I've now just told Google that the domain exists. (A small concern, but bad in principle.) Can I configure Chrome to just show an error and not blab to Google Search about it?

    Read the article

  • Do browsers change URLs of saved bookmarks in response to 301 redirection?

    - by elliot100
    HTTP status code 301 is used to indicate that content has moved permanently, and that the returned URL should be used to access the requested content in future. RFC 2616 says Clients with link editing capabilities ought to automatically re-link references to the request-URI to one or more of the new references returned by the server, where possible. Do any browsers actually implement this and change a bookmark's URL?

    Read the article

  • Using nested public classes to organize constants

    - by FrustratedWithFormsDesigner
    I'm working on an application with many constants. At the last code review it came up that the constants are too scattered and should all be organized into a single "master" constants file. The disagreement is about how to organize them. The majority feel that using the constant name should be good enough, but this will lead to code that looks like this:

        public static final String CREDITCARD_ACTION_SUBMITDATA = "6767";
        public static final String CREDITCARD_UIFIELDID_CARDHOLDER_NAME = "3959854";
        public static final String CREDITCARD_UIFIELDID_EXPIRY_MONTH = "3524";
        public static final String CREDITCARD_UIFIELDID_ACCOUNT_ID = "3524";
        ...
        public static final String BANKPAYMENT_UIFIELDID_ACCOUNT_ID = "9987";

    I find this type of naming convention cumbersome. I thought it might be easier to use public nested classes, and have something like this:

        public class IntegrationSystemConstants {
            public class CreditCard {
                public static final String UI_EXPIRY_MONTH = "3524";
                public static final String UI_ACCOUNT_ID = "3524";
                ...
            }
            public class BankAccount {
                public static final String UI_ACCOUNT_ID = "9987";
                ...
            }
        }

    This idea wasn't well received because it was "too complicated" (I didn't get much detail as to why it might be too complicated). I think this creates a better division between groups of related constants, and auto-complete makes it easier to find them as well. I've never seen this done, though, so I'm wondering if this is an accepted practice or if there are better reasons it shouldn't be done.

    Read the article

  • How can I get rid of the long Google results URLs?

    - by Teifi
    google.com is always shielded by our firewall. When I search for something at google.com, a result list appears. Then when I click a link, the URL changes to a processed URL like: http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0CDcQFjAA&url=http%3A%2F%2Fwww.amazon.com%2F&ei=PE_AUMLmFKW9iAfrl4HoCQ&usg=AFQjCNGcA9BfTgNdpb6LfcoG0sjA7hNW6A&cad=rjt Then my browser is blocked, because of google.com I guess. The only useful information in that long processed URL is http%3A%2F%2Fwww.amazon.com (http://www.amazon.com). My questions: What's the meaning of that long processed URL? Is there a way to remove the google.com/url?sa... prefix each time I click a search result?

    Read the article

  • How to remove an entry from Chrome's Remembered URLs from the url bar?

    - by cmcculloh
    I've got a URL in Chrome, "local.mysite.com", that auto-populates when I start typing "local.my" into the URL bar. Note that this URL DOES NOT EXIST in my browser history (at chrome://history/#e=1&p=0) because it isn't a real site and therefore couldn't ever be successfully visited and therefore never shows up in my history. The URL I want is "local.mysite.com/subdir/". That URL is like 3 down in the suggested results, because I keep accidentally hitting "enter" when it auto-suggests the unwanted first URL, thus reinforcing its assumption that that is the one I want. How do I get rid of the "local.mysite.com" entry in Chrome's memory?

    Read the article

  • is it okay to use random URLs instead of passwords?

    - by stew
    Is it considered "safe" to use URLs constructed from random characters, like this? http://example.com/EU3uc654/Photos I'd like to put some files/picture galleries on a web server that are only to be accessed by a small group of users. My main concern is that the files should not get picked up by search engines or curious power users who poke around my site. I've set up an .htaccess file, only to notice that clicking on http://user:pass@url/ links doesn't work well with some browsers/email clients, prompting dialogs and warning messages that confuse my not-too-computer-savvy users.

    Read the article

  • How do I define multiple URLs for an svn project?

    - by yarun can
    I am working on a project in a mixed environment (Windows, Cygwin, Linux) which is on a "shared NTFS drive". I am the sole user, so the project is not really duplicated for multiple users. The main issue I am facing is that the original svn project import was done with a Cygwin path like "/cygdrive/z/path to svn project". Now when I use the Windows or Linux svn clients this does not work, since such paths do not exist for those versions. Is there a way to define more than one path for the svn import, maybe some kind of configuration that I can fire on the command line? Thanks.

    Read the article
