Search Results

Search found 20409 results on 817 pages for 'url routing'.


  • How to update a Debian DNS server? New VM with same hostname as old VM

    - by opensourcechris
    We run several Linux VMs on our Hyper-V cluster. Our old IT manager configured the DNS server to resolve the URL 'devlabs.ourdomain.com' to a Debian Squeeze Apache webserver hosted on the Hyper-V cluster with the hostname 'devlabs'. We recently created a new Ubuntu VM to replace the original Squeeze VM. When we created the new Ubuntu VM we used the same hostname of 'devlabs' to name the new VM. My problem is that now I am only able to access the new Ubuntu VM by using the IP address. How can I update our DNS server to point the URL 'devlabs.ourdomain.com' to the new VM?
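
    If the zone lives in BIND on the Debian box, the fix is typically a one-line A-record change (a sketch only; the zone file path and IP address below are assumptions). The old record still points at the retired VM's address, which is why only the IP works:

        ; /etc/bind/db.ourdomain.com - point devlabs at the new Ubuntu VM's IP
        devlabs    IN    A    192.168.1.50

        # bump the SOA serial in the same zone file, then reload just that zone:
        sudo rndc reload ourdomain.com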

    Read the article

  • Is it okay to use random URLs instead of passwords?

    - by stew
    Is it considered "safe" to use a URL constructed from random characters, like this? http://example.com/EU3uc654/Photos I'd like to put some files/picture galleries on a webserver that are only to be accessed by a small group of users. My main concern is that the files should not get picked up by search engines or curious power users who poke around my site. I've set up an .htaccess file, only to notice that clicking on http://user:pass@url/ links doesn't work well with some browsers/email clients, prompting dialogs and warning messages that confuse my not-too-computer-savvy users.
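
    For scale: a path segment like the one above, if generated from a CSPRNG, can carry more entropy than a typical password; the larger risks are URL leakage (referrer headers, forwarded links, browser history) and crawlers following a leaked link. A sketch of both halves (the .htaccess part assumes mod_headers is enabled):

        # .htaccess in the private folder: ask well-behaved
        # crawlers not to index anything they reach
        Header set X-Robots-Tag "noindex, nofollow"

        # shell: generate an unguessable 128-bit path segment
        openssl rand -hex 16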

    Read the article

  • Can I set up a hostname alias that is only active in a specific network?

    - by denisw
    I have a small ARM box running ownCloud in my house, and a domain bound to my DSL router with dynamic DNS. This works well, as I can configure the ownCloud client to use the domain name, which works both in the house and when I'm on the road. However, at home I would like to let my laptop and my server talk to each other directly on the local network, rather than as if they were talking over the Internet (including DNS resolution for my domain name, routing all traffic through the DSL machinery, etc.). I know that I can set a hostname alias for my domain in /etc/hosts - the problem is that I want this alias to be active only when I'm on the home network, while in all other cases DNS resolution should be used as normal. Is this possible?
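
    A sketch of one mechanism on a NetworkManager-based laptop (the IP addresses, hostname, and home-subnet test below are all placeholders). Dispatcher scripts receive the interface and the action as their two arguments, so the alias can be added on 'up' and removed on 'down':

        #!/bin/sh
        # /etc/NetworkManager/dispatcher.d/99-owncloud-alias (root-owned, executable)
        IF="$1"; ACTION="$2"
        case "$ACTION" in
          up)
            # only add the alias when we received a home-network address
            if ip -4 addr show "$IF" | grep -q '192\.168\.1\.'; then
              grep -q 'cloud.example.com' /etc/hosts || \
                echo '192.168.1.10 cloud.example.com' >> /etc/hosts
            fi
            ;;
          down)
            sed -i '/cloud\.example\.com/d' /etc/hosts
            ;;
        esac

    Running a local dnsmasq and giving it a per-connection host override is another route to the same effect.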

    Read the article

  • Apache configuration file visualization/testing

    - by Matt Holgate
    Is there a tool available (or a debug mode built into Apache) that will allow me to interactively test and explain an Apache configuration for a given request? In particular, I'd like to be able to see which directives will apply when requesting a specific URL. For example, the output for the URL http://myserver.com/foo/bar/bar.html might look something like: Allow from 192.168.0.3 <-- From <Location /foo/bar> in myserver.com vhost Require valid user <-- From <Directory /var/www/foo> in global configuration Satisfy any <-- From <Files bar.html> in global configuration [Background: why do I want this? The Apache merging rules for configuration directives are quite complex to get right. It would be great to have a tool which lets you check that your rules are doing exactly what you want, and it would be a good learning tool]. If there isn't such a tool, is there a debug option in Apache that will log such information for each incoming request?
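
    Two built-ins get partway there, sketched below: apachectl -S dumps the parsed vhost layout with file/line origins, and mod_info (when enabled; the Location block follows the usual example config) renders the merged configuration in the browser:

        # summarize the parsed vhosts and where each was defined
        apachectl -S

        # httpd.conf sketch for mod_info (restrict it to your own network)
        <Location /server-info>
            SetHandler server-info
            Order deny,allow
            Deny from all
            Allow from 192.168.0.0/16
        </Location>

    Neither explains a single request end-to-end; for rewrite decisions specifically, Apache 2.2's RewriteLog with RewriteLogLevel is the closest per-request trace.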

    Read the article

  • Fully secured gateway web sites

    - by SeaShore
    Hello, Are there any web sites that serve as gateways for fully encrypted communication? I mean sites with which I can open a secure session and then exchange both URLs and content with other sites in a secure way. Thanks in advance. UPDATE Sorry for not being clear. I was wondering if there was a way to access any site over the Internet (HTTP or HTTPS) without letting any intranet proxy read the requested URL or the received content. My question is whether such a site exists, e.g.: I am connected to that site via HTTPS, I send it a URL in a secured way, the site gets the content from the target site (possibly in a non-secured way) and returns the requested content to me in a secured way.

    Read the article

  • DNS problems: correct nameserver, nameserver working, but not resolving

    - by user1719624
    My problem is as follows; any suggestions are welcome. [domain].org is not resolving. whois and the registry information both show that the correct nameserver is set. The primary nameserver is also the server on which [domain].org is hosted, and it is used for a number of other domains, for which it is working fine. Logged into the server, I can ping [domain].org and it resolves correctly. Setting the nameserver as the DNS server on my laptop, the URL also resolves correctly. If the domain has the correct nameserver set, the nameserver can resolve the URL to the correct IP address, it resolves correctly when I use the nameserver as my DNS, AND the nameserver is used for other domains which resolve correctly, then why isn't it working? NB: this is a new domain registration and has been set up for around 10 days now, so it's not simply slow propagation. Any ideas? Thanks
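
    Some dig checks that usually localize this kind of failure (the nameserver hostnames below are placeholders); a frequent culprit is a secondary nameserver that is listed at the registry but not actually serving the zone:

        # follow the delegation from the root and see where resolution breaks
        dig +trace domain.org

        # compare the authoritative answer with a public resolver's view
        dig @ns1.example-ns.com domain.org A
        dig @8.8.8.8 domain.org A

        # the authoritative server should answer with the aa flag set
        dig @ns1.example-ns.com domain.org A +norecurse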

    Read the article

  • How to emulate a domain name - Webmin Setup

    - by theonlylos
    I am currently working on a client project where they are using a custom CMS which relies on having specific domains configured for it to work properly. In plain English, that means that when I try running the site in my test environment, the entire website fails because it isn't located on the primary domain (and I'm pretty sure the domain is hard-coded, since there's no control panel to adjust the file locations). What I wanted to ask is whether it is possible to keep my test environment URL but have Apache and DNS emulate my client's website URL locally, rather than calling the actual name servers. Right now I have a virtual host set up in Apache, but I am not sure where to go from there. Any assistance is greatly appreciated.
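
    A minimal sketch of the usual emulation trick (names and paths below are made up): point the real hostname at the test box in the hosts file, and give Apache a vhost that answers to that name. The browser on the test machine then believes it is talking to the production domain, and the real name servers are never consulted:

        # /etc/hosts on the test machine
        127.0.0.1   www.clientdomain.com

        # Apache vhost (httpd.conf or a Webmin-managed vhost file)
        <VirtualHost *:80>
            ServerName www.clientdomain.com
            DocumentRoot /var/www/client-test
        </VirtualHost>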

    Read the article

  • Reverse Proxy (mod_rewrite) and Rails (absolute paths)

    - by SooDesuNe
    I have a front-end Rails app that reverse proxies to any of a number of backend Rails apps depending on the URL; for example, http://www.my_host.com/app_one reverse proxies to http://www.remote_host_running_app_one.com such that a URL like http://www.my_host.com/app_one/users will display the contents of http://www.remote_host_running_app_one.com/users I have a large, and ever-expanding, number of backends, so they cannot be explicitly listed anywhere other than a database. This is no problem for mod_rewrite using a prg: rewrite map reverse proxy. The problem is that the URLs returned by Rails helpers have the form /controller/action, making them absolute to the root. This is a problem for the page served by mod_rewrite, because links on the proxied page appear as absolute to the domain, i.e.: http://www.my_host.com/app_one/controller/action has links that end up looking like /controller/action when they need to look like /app_one/controller/action mod_proxy_html seems like the right idea, but it doesn't seem to be as dynamic as I would need, since the rules need to be hard-coded into the config files. Is there a way to fix this server-side, so that the links will be routed correctly?
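
    One possible server-side angle, sketched with heavy caveats: it assumes mod_proxy_html 3.x, whose ProxyHTMLInterp feature allows per-request variable interpolation in ProxyHTMLURLMap, and APP_PREFIX is an invented name that the same prg: map choosing the backend would have to set (e.g. via [E=APP_PREFIX:/app_one]):

        ProxyHTMLEnable On
        ProxyHTMLInterp On
        # rewrite root-absolute links to include the per-request prefix
        ProxyHTMLURLMap ^/ ${APP_PREFIX}/ Rv

    An alternative that avoids HTML rewriting entirely is to start each backend with RAILS_RELATIVE_URL_ROOT=/app_one (or config.relative_url_root), so the Rails helpers emit the prefixed paths themselves.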

    Read the article

  • Google Indexing Issue after htaccess changes

    - by Klement
    I have a site called www.FuneralCoverFinder.co.za. I have about 30 pages on the site and usually have 29 indexed (excluding 15 blog posts, which are new). I recently upgraded my entire site and made some redirection changes in my .htaccess file. I have made my URLs more SEO-friendly (removing index.php/) and redirected dead pages to working pages. I have tons of unique content, all checked by Grammarly and Plagium to ensure I have no duplicate content. I have since resubmitted my sitemap to Google and now have only one page indexed. It happened within a couple of minutes; I usually see results almost immediately after submitting, but now it's stuck on 1 page indexed. I assume I might have made errors in the .htaccess file, as this was my first attempt. The site runs perfectly and all the URLs redirect the way they should. I'm scared I have some redirect loop, although the website runs fine. I still see many of my old indexed pages in the SERPs; I'm just worried that the issue with the new sitemap can cause my rankings some harm. My website is pretty SEO-optimized onsite. I have about 1500 indexed backlinks and have been building them steadily over about half a year. I would really appreciate some clarity on this matter.

    Read the article

  • Serverless Web Application

    - by Andrea Di Persio
    In my company we work on software that produces reports in HTML format. My bosses love the fact that static HTML pages can be moved across computers simply by moving/copying a folder, and no web server is involved, so the customer only needs a browser. The problem is that they are asking me to implement a lot of features which are very hard to implement properly and cleanly without an application server: cross-domain problems with frames, the impossibility of working with GET and POST data, no URL routing... it is very hard to work within these limitations. Has anyone had a similar experience and want to share their tricks/suggestions? Do I need to tell my boss 'there is no future without a web server'? Regards.

    Read the article

  • How do spambots work?

    - by rlb.usa
    I have a forum that's getting hit a lot by forum spambots, and of course the best way to defeat something is to know thy enemy. I'll worry about defeating those spambots later, but right now I'd like to know more about them. Reading around, I was surprised by the lack of thorough information on the subject (or perhaps by my ineptness at entering the right search terms for better Google results). I'm interested in learning all about spambots. I've asked on other forums and gotten brush-off answers like "Spambots are always users registering on your site." How do forum spambots work? How do they find the 'new user registration' page? (I'm especially surprised because some forums don't have a dedicated URL for this, e.g. www.forum.com/register.html, but instead use query strings or even other methods invisible in the URL bar.) How do they know what to enter into each 'new user registration' field? How do they determine what is a page they can spam / enter data into and what is not? Do they even 'view' this page at all? If not, then I'd assume they're communicating with the server directly - how is this possible? How do they do it? Can forum spambots break CAPTCHAs? Can they solve logic questions (how?)? Math questions? Do they reverse-engineer client-side anti-bot validation scripts? Server-side scripts? What techniques are still valid to prevent them? Where do spambots come from? Is someone sitting behind the computer snickering as they watch their bot destroy site after site? Or are they snickering as they simply 'release' it onto the Internet somehow? Are spambots 'run' by an infected computer somewhere? Do they replicate themselves? etc.

    Read the article

  • Redirect non-www ssl traffic to www ssl (apache)

    - by The NinjaSysadmin
    Hello, I'm attempting to set up a redirect which is failing, and for some reason I can't think today. I have a vHost file within HTTPD that listens on standard port 80 and port 443. I'm attempting to redirect https://domain.com/(.*) to https://www.domain.com/$1 so that the URL remains intact. My config is as follows: ServerName www.domain.com ServerAlias tempdomain.testdomain.co.uk ServerAlias domain.com The rewrite rule I'm using is: RewriteCond %{HTTP_HOST} ^domain.com$ RewriteRule ^(.*)$ https://www.domain.com$1 [R=301,L] I've also tried removing the . and $ but nothing. When I visit the URL https://domain.com/secure.page?action=comp it doesn't redirect to https://www.domain.com/secure.page?action=comp I do also have other SSL pages; the above was just an example. Can anyone point out my stupidity?
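
    One common cause: the rules live only in the port-80 vhost, so the HTTPS request never sees them. A sketch for the *:443 vhost (certificate directives omitted); the query string is preserved automatically on redirect, and the dot should be escaped:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
        RewriteRule ^(.*)$ https://www.domain.com$1 [R=301,L]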

    Read the article

  • Interface design where functions need to be called in a specific sequence

    - by Vorac
    The task is to configure a piece of hardware within the device, according to some input specification. This should be achieved as follows:
    1) Collect the configuration information. This can happen at different times and places. For example, module A and module B can both request (at different times) some resources from my module. Those 'resources' are actually what the configuration is.
    2) After it is clear that no more requests are going to be made, a startup command, giving a summary of the requested resources, needs to be sent to the hardware.
    3) Only after that can (and must) detailed configuration of said resources be done.
    4) Also, only after 2) can (and must) routing of selected resources to the declared callers be done.
    A common cause of bugs, even for me, who wrote the thing, is mistaking this order. What naming conventions, designs or mechanisms can I employ to make the interface usable by someone who sees the code for the first time? One option is sketched after this list.
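
    A sketch (in Python; all names are invented) of one way to make the wrong order structurally impossible: each phase is a distinct type, and the operation that ends a phase returns the object for the next one, so the step-3/step-4 methods simply do not exist until the startup command has been sent:

        class HwConfigCollecting:
            """Phase 1: gather resource requests from modules A, B, ..."""
            def __init__(self):
                self._requests = []

            def request_resource(self, resource):
                self._requests.append(resource)

            def start(self):
                """Step 2: send the startup summary; returns the only
                object exposing the step-3/step-4 operations."""
                print("startup command:", self._requests)  # stand-in for hardware I/O
                return HwConfigStarted(self._requests)

        class HwConfigStarted:
            """Phases 3 and 4: only reachable via HwConfigCollecting.start()."""
            def __init__(self, requests):
                self._requests = requests

            def configure(self, resource, params):
                print("configuring", resource, params)     # step 3

            def route(self, resource, caller):
                print("routing", resource, "to", caller)   # step 4

        # usage: cfg = HwConfigCollecting(); cfg.request_resource("dma0")
        # hw = cfg.start(); hw.configure("dma0", {}); hw.route("dma0", "moduleA")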

    Read the article

  • Joomla in Windows is catching my access to a Virtual Directory where I placed my MVC application

    - by Romias
    In our Windows hosting we use the root (wwwroot) folder to host a Joomla website as the public website. This is running IIS 7. Then we created a virtual directory called "App" to host an ASP.NET MVC4 application. When I enter www.mydomain.com it shows the Joomla website correctly. When I enter www.mydomain.com/App/ it somehow reaches my MVC app, as I see the URL changing to www.mydomain.com/App/Account/LogOn?ReturnUrl=%2fApp%2f, BUT it shows a Joomla 404 error, as if Joomla were looking up that URL. BTW, the hosting has 2 ASP.NET IIS setup options: 4.0 Classic and 4.0 Integrated. Using the Integrated one displays a blank page; using the Classic one shows the Joomla 404 page. Any idea where to look for this?
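
    If the Joomla site uses the IIS URL Rewrite module, its catch-all SEF rule in the root web.config is probably swallowing /App before the virtual directory gets a chance. A sketch of excluding the virtual directory (the rule name and exact conditions are assumptions about a typical Joomla-on-IIS config):

        <rule name="Joomla SEF" stopProcessing="true">
          <match url=".*" />
          <conditions logicalGrouping="MatchAll">
            <!-- leave the MVC virtual directory alone -->
            <add input="{REQUEST_URI}" pattern="^/App($|/)" negate="true" />
            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
          </conditions>
          <action type="Rewrite" url="index.php" />
        </rule>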

    Read the article

  • New release of Oracle Process Accelerators (11.1.1.6.2)

    - by JuergenKress
    The press release covers industry-focused solutions including Incident Reporting (Public Sector) and Loan Origination (Financial Services), as well as updated Travel Request Management (TRM), Document Routing and Approval (DRA), and Internal Service Requests (ISR). All BPM material is available at our SOA Community Workspace (SOA Community membership required): Presentation | Data Sheet | White paper Download Process Accelerators and Collateral - OTN SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • htaccess rewriterule leading slash

    - by Tiddo
    I'm using .htaccess to rewrite my URLs so that I can have nice clean URLs. However, the same .htaccess file does different things on my local server and my remote server: on my local server the URL of the website is like http://localhost/example/ and on my remote server the URL is http://example.com/. For my local server I can use the following rewrite rule: RewriteRule ^(.*)$ index.php?page=$1 [L,QSA] However, when I use this on my remote server I get an internal server error. Instead I have to use this (note the leading slash): RewriteRule ^(.*)$ /index.php?page=$1 [L,QSA] Unfortunately this doesn't work on my local server: this rewrite rule requests http://localhost/index.php instead of http://localhost/example/index.php. How can I make this work on both my remote and local server?
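
    The usual way to make one .htaccess work in both places is RewriteBase with a relative substitution: only the RewriteBase line differs between the two servers, and the file-exists guard also avoids the rewrite loop that is the likely cause of the 500 error on the remote machine. A sketch:

        RewriteEngine On
        # local copy lives under /example/; on the remote server use just /
        RewriteBase /example/
        # never rewrite real files or directories (index.php itself included)
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php?page=$1 [L,QSA]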

    Read the article

  • How to set up Apache 2 to serve only subdirectories

    - by Lynden Shields
    I have 3 sites which need to be hosted on a web server (apache2 from the repo, running on Ubuntu 12.04). They are each in their own subdirectory within /var/www/. I would like Apache to serve files from the relevant directories only if the directory name is given in the URL, but not serve the /var/www/ directory itself. E.g.: http://1.2.3.4/site1/ should work and serve the index from /var/www/site1/index.html, but http://1.2.3.4/ should not serve anything. Currently, I can't get the URL to point to the directory. Either I can get http://1.2.3.4/ to serve everything within /var/www/ (including /var/www/site2/secretstuff/), or I can get the root http://1.2.3.4/ to serve one of the subdirectories (/var/www/site1/). This is unacceptable: site 1 needs Indexes enabled but the others must not have it. I just want to make site1's config respond only to requests of the form http://1.2.3.4/site1/* and not handle requests of the form http://1.2.3.4/ I do not have a domain name set up, so I can't use subdomains.
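
    A sketch for Apache 2.2 (the version in the Ubuntu 12.04 repo; paths are the ones from the question): deny the shared root outright, then open each subdirectory with its own Directory block, enabling Indexes only where wanted:

        # nothing is served from the bare root
        <Directory /var/www/>
            Order deny,allow
            Deny from all
        </Directory>

        <Directory /var/www/site1/>
            Options +Indexes
            Order allow,deny
            Allow from all
        </Directory>

        # site2 and site3 get the same block, without +Indexes

    DocumentRoot can stay at /var/www; the more specific Directory sections override the root denial, so http://1.2.3.4/site1/ works while http://1.2.3.4/ is refused.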

    Read the article

  • Apache: Stealth 404 the admin area until authenticated via basic auth, then allow access

    - by Kzqai
    Given an administrative area with URLs like this: wp-admin/ wp-admin/whatever wp-admin/another-page wp-adminsecretlogin/ standard basic-auth coverage would present a username and password prompt on all of those URLs, and return a 401 on all failed auth attempts. This is a pretty obvious signal that something exists there, and thus is an invitation to script/brute-force access. I would like instead to require basic auth everywhere, but when not authenticated, not prompt for a username and password, and instead return a 404 Not Found error for all URLs except a wp-adminsecretlogin/ URL. At that individual-to-the-site URL, basic auth could go through and unlock the rest of the administrative functionality (though the standard application login would still be necessary). How would I do that via Apache .htaccess or .conf directives?
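
    One untested sketch (assumes mod_rewrite and mod_auth_basic; names, domain and paths are made up): protect only the secret path with basic auth, have it hand out a marker cookie, and 404 every wp-admin URL that arrives without the marker. The cookie is a visibility switch, not a credential; the application login still does the real protection:

        # wp-adminsecretlogin/.htaccess - the only URL that ever prompts
        AuthType Basic
        AuthName "restricted"
        # password file path is an assumption
        AuthUserFile /etc/apache2/.htpasswd
        Require valid-user

        # site-root .htaccess
        RewriteEngine On
        # after a successful login at the secret URL, set the marker
        # cookie and move on to the real admin area
        RewriteRule ^wp-adminsecretlogin/?$ /wp-admin/ [CO=seen:1:.example.com,R,L]
        # anyone without the marker gets a plain 404, never a 401 prompt
        RewriteCond %{HTTP_COOKIE} !seen=1
        RewriteRule ^wp-admin - [R=404,L]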

    Read the article

  • htaccess rewrite domain/folder to domain/

    - by user1678259
    I've been checking all the questions previously asked here, but can't find a solution. We have a website nearly finished at www.example.com/web/ and would like to hide the folder /web/ from the URL. So what is shown now is www.example.com/web/* and what we would like is www.example.com/* If we try a redirect, the URL www.example.com goes to www.example.com/web/, and this is what we don't want. Any help will be very much appreciated. Thank you all.
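
    A sketch of the usual pair of rules in the document root's .htaccess (assumes mod_rewrite): serve everything from /web/ internally while keeping it out of the address bar, and bounce any direct /web/ request back to the clean form:

        RewriteEngine On

        # internally map clean URLs onto the real folder
        RewriteCond %{REQUEST_URI} !^/web/
        RewriteRule ^(.*)$ /web/$1 [L]

        # send anyone who types /web/... back to the clean URL
        RewriteCond %{THE_REQUEST} \s/web/([^\s?]*)
        RewriteRule ^ /%1 [R=301,L]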

    Read the article

  • Password protect app in jetty

    - by JohnW
    I am testing a webapp (.war) running in Jetty 7. For demo purposes I want to run this on a public URL; however, I would like not to have the whole world (if they happen to come across the URL) be able to see it. Is there a way to make Jetty require a basic-auth type of authentication when accessing the webapp (without modifying anything inside the war, i.e. no edits to the web.xml file)? Or if not the webapp, then any part of what Jetty serves on port 8080?
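
    One way that keeps the war untouched (a sketch for Jetty 7; all paths, names and the realm are assumptions): deploy via a context XML that points at an override descriptor carrying the security constraints:

        <!-- contexts/demo.xml -->
        <Configure class="org.eclipse.jetty.webapp.WebAppContext">
          <Set name="war">/opt/jetty/webapps/demo.war</Set>
          <Set name="contextPath">/demo</Set>
          <!-- extra deployment descriptor merged in at startup -->
          <Set name="overrideDescriptor">/opt/jetty/etc/override-web.xml</Set>
        </Configure>

        <!-- /opt/jetty/etc/override-web.xml: BASIC auth over everything -->
        <web-app>
          <security-constraint>
            <web-resource-collection>
              <web-resource-name>everything</web-resource-name>
              <url-pattern>/*</url-pattern>
            </web-resource-collection>
            <auth-constraint>
              <role-name>demo</role-name>
            </auth-constraint>
          </security-constraint>
          <login-config>
            <auth-method>BASIC</auth-method>
            <realm-name>Demo</realm-name>
          </login-config>
        </web-app>

    A matching login service (e.g. a HashLoginService named "Demo" backed by a realm.properties file) still has to be registered in jetty.xml.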

    Read the article

  • curl -I stops responding when the URL contains "="

    - by user1512778
    I ran this command: curl -I 'http://criminaljustice.state.ny.us/cgi/internet/nsor/fortecgi?serviceName=WebNSOR&templateName=detail.htm&requestingHandler=WebNSORDetailHandler&ID=368343543' but it got stuck. However, if I run this: curl -I 'http://criminaljustice.state.ny.us/cgi/internet/nsor/fortecgi' HTTP/1.1 200 OK Content-length: 207 Content-type: text/html Server: Sun-ONE-Web-Server/6.1 Date: Sat, 15 Dec 2012 08:49:14 GMT Via: 1.1 proxy-internet-revproxy Proxy-agent: Oracle-iPlanet-Proxy-Server/4.0 Then I tried shortening it: curl -I 'http://criminaljustice.state.ny.us/cgi/internet/nsor/fortecgi?serviceName=WebNSOR&templateName=detail.htm' and it got stuck too. I don't know why; it seems that if the URL contains "=" it stops responding. So I tried the URL with the "=" removed (serviceName=WebNSOR to serviceNameWebNSOR): curl -I 'http://criminaljustice.state.ny.us/cgi/internet/nsor/fortecgi?serviceNameWebNSOR' HTTP/1.1 200 OK Content-length: 207 Content-type: text/html Server: Sun-ONE-Web-Server/6.1 Date: Sat, 15 Dec 2012 08:50:38 GMT Via: 1.1 proxy-internet-revproxy Proxy-agent: Oracle-iPlanet-Proxy-Server/4.0 Why can't I use "="? Please assist me.
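
    Inside single quotes the shell passes = and & through untouched, so those characters are reaching the server fine; a more likely explanation is that this CGI never answers HEAD requests (which is what -I sends) once it sees a query it recognizes. A sketch that sends GET instead, shows only the headers, and gives up after 15 seconds:

        curl -sS -D - -o /dev/null --max-time 15 -G \
             'http://criminaljustice.state.ny.us/cgi/internet/nsor/fortecgi' \
             --data-urlencode 'serviceName=WebNSOR' \
             --data-urlencode 'templateName=detail.htm' \
             --data-urlencode 'requestingHandler=WebNSORDetailHandler' \
             --data-urlencode 'ID=368343543'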

    Read the article

  • Web application access different between Domain Name and IP

    - by h82
    In our office we have a web application running. When we access the application by the domain name, http://server.domain.com/application/name it displays the current version of the application. However, when we go by the IP address, http://192.168.1.111/application/name it displays the old version of that application. One thing to note is that we can access the application either via http://server.domain.com/ (it will be redirected to the long URL automatically) or via http://server.domain.com/application/name when we use the domain name, but it is only accessible via the exact URL when we use the IP address. Why is it showing the old version and how can it be corrected? It is running JRun4 and Apache on Red Hat. I've checked httpd.conf a bit but could not find anything. Please advise what should be done to display the same (updated) version whether we access it by domain name or IP address. Thank you.
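
    This pattern usually points to name-based virtual hosts: a request by bare IP carries no matching Host header, so Apache falls back to the default (first-listed) vhost, which is evidently still pointing at the old deployment. A quick check:

        # list the parsed vhosts and which one is the default
        apachectl -S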

    Read the article

  • Browser Item Caching and URLs

    - by Damon Armstrong
    Ultimately you want the browser to cache things like Flash components, Silverlight XAP files, and images to avoid users having to download them each time they hit a page.  But during development it's very useful to NOT have things cached so you are always looking at the most up-to-date file.  You can always turn off caching in your browser, but if you use your browser for daily browsing then it's not the greatest option.  To avoid caching we would always just slap a randomly generated GUID onto the back of the URL of any items we didn't want to cache (e.g. http://someserver.com/images/image.png?15f073f5-45fc-47b2-993b-fbaa781b926d).  It worked well, but you had to remember to remove the random GUID when it went to production. However, on a GimmalSoft project we recently implemented, someone showed me a better way that didn't need to be removed from production code - just slap the last modified date of the file (or something generated from the modification date) on the end of the URL.  This was kind of a genius approach because it gives you the best of both worlds.  If you modify the file, the browser goes out and gets the newest version.  If you don't modify the file, it has the cached copy.  Very helpful!  The only downside is that you do have to read the modification date from the file, which does technically take some time.
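
    A sketch of the idea in Python (the post's context is .NET, so this is a language-neutral illustration; the helper name is invented):

        import os

        def cache_busted(url_path, file_path):
            """Append the file's last-modified time as a version token,
            so the URL changes only when the file does."""
            mtime = int(os.path.getmtime(file_path))
            return "%s?v=%d" % (url_path, mtime)

        # cache_busted("/images/image.png", "/var/www/images/image.png")
        # -> "/images/image.png?v=1355558954"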

    Read the article

  • Managing TFS Workspaces

    - by Enrique Lima
    You are the administrator (or perhaps the one who knows the most about it) and you need to do some cleanup on what is connected, and perhaps even clean up after people that have left the organization and left some code checked out in their workspace. What permissions do I need? You will need the Administer Workspaces permission to perform the following tasks. The commands. In order to execute the commands, you will need to open a Visual Studio Command Prompt; once there you will be able to use the tf command.  This has a nice set of options, which I will provide a listing for later on in another post. To list all registered workspaces: tf workspaces /collection:<url to your TPC> To delete a specific workspace: tf workspace /delete /collection:<url to your TPC> <workspace>;<owner> (older clients used /server: instead of /collection:). If for any reason a workspace name has embedded spaces, then surround it with "" (double quotes).
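
    A worked example under made-up names (the collection URL, owner and workspace are illustrative only), run from a Visual Studio Command Prompt:

        rem list every workspace a departed user still holds
        tf workspaces /collection:http://tfs:8080/tfs/DefaultCollection /owner:jdoe

        rem remove one of them, quoting because the name contains a space
        tf workspace /delete /collection:http://tfs:8080/tfs/DefaultCollection "My Workspace;jdoe"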

    Read the article
