Search Results

Search found 18563 results on 743 pages for 'url'.

Page 210/743

  • Rsync --backup-dir seems to be ignored

    - by Patrik
    I want to use rsync to back up a directory from a local location to a remote location, and store changed files in another remote location. I used:

        rsync -rcvhL --progress --backup [email protected]:/home/user/Changes/`date +%Y.%m.%d` . [email protected]:/home/user/Files/

    The --backup-dir stays empty, while it should be filled. Is what I'm trying to accomplish possible, and am I doing something wrong? Thanks
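
    A minimal sketch, for comparison, of how --backup and --backup-dir are usually combined, assuming the copies of changed files are meant to live on the same destination host (rsync interprets --backup-dir as a path on the receiving side; it cannot point at a third host). user@backuphost and the paths below are placeholders:

        # changed or overwritten files on the receiver are kept in a dated
        # directory on that same receiving host
        rsync -rcvhL --progress --backup \
              --backup-dir=/home/user/Changes/$(date +%Y.%m.%d) \
              . user@backuphost:/home/user/Files/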

    Read the article

  • Copy and paste twice

    - by Shane
    Is there a way to copy and paste twice? For example, is there a way for me to copy one URL, store it, then copy another URL, and then have the URLs pasted respectively? I read somewhere that this is possible, but have not been able to figure it out.

    Read the article

  • programmatically check if a domain is available?

    - by acidzombie24
    Using this solution http://serverfault.com/questions/98940/bot-check-if-a-domain-name-is-availible/98956#98956 I wrote a quick script (pasted below) in C# to check if the domain MIGHT be available. A LOT of results come up with taken domains. It looks like all 2 and 3 letter .com domains are taken and it looks like all 3 letter are taken (not including numbers which many are available). Is there a command or website to take my list of domains and check if they are registered or available?

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Text.RegularExpressions;
        using System.Diagnostics;
        using System.IO;

        namespace domainCheck
        {
            class Program
            {
                static void Main(string[] args)
                {
                    var sw = (TextWriter)File.CreateText(@"c:\path\aviliableUrlsCA.txt");
                    int countIndex = 0;
                    int letterAmount = 3;
                    char[] sz = new char[letterAmount];
                    for (int z = 0; z < letterAmount; z++)
                    {
                        sz[z] = '0';
                    }
                    //*/
                    List<string> urls = new List<string>();
                    //var sz = "df3".ToCharArray();
                    int i = 0;
                    while (i < letterAmount)
                    {
                        if (sz[i] == '9')
                            sz[i] = 'a';
                        else if (sz[i] == 'z')
                        {
                            if (i != 0 && i != letterAmount - 1)
                                sz[i] = '-';
                            else
                            {
                                sz[i] = 'a';
                                i++;
                                continue;
                            }
                        }
                        else if (sz[i] == '-')
                        {
                            sz[i] = 'a';
                            i++;
                            continue;
                        }
                        else
                            sz[i]++;

                        string uu = new string(sz);
                        string url = uu + ".ca";
                        Console.WriteLine(url);

                        Process p = new Process();
                        p.StartInfo.UseShellExecute = false;
                        p.StartInfo.RedirectStandardError = true;
                        p.StartInfo.RedirectStandardOutput = true;
                        p.StartInfo.FileName = "nslookup ";
                        p.StartInfo.Arguments = url;
                        p.Start();
                        var res = ((TextReader)new StreamReader(p.StandardError.BaseStream)).ReadToEnd();
                        if (res.IndexOf("Non-existent domain") != -1)
                        {
                            sw.WriteLine(uu);
                            if (++countIndex >= 100)
                            {
                                sw.Flush();
                                countIndex = 0;
                            }
                            urls.Add(uu);
                            Console.WriteLine("Found domain {0}", url);
                        }
                        i = 0;
                    }
                    Console.WriteLine("Writing out list of urls");
                    foreach (var u in urls)
                        Console.WriteLine(u);
                    sw.Close();
                }
            }
        }
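
    A rough alternative sketch using the whois command-line client instead of nslookup, assuming a Unix-like host and a plain-text list of candidate names in domains.txt (the file name and the "not registered" strings are illustrative; the exact wording of a no-match response varies between registries, so the grep pattern would need checking against real whois output for the TLD in question):

        #!/bin/sh
        # reads one domain per line and prints those whose whois lookup
        # contains a typical "not registered" marker
        while read -r domain; do
            if whois "$domain" 2>/dev/null | grep -qiE 'no match|not found|available'; then
                echo "$domain appears to be unregistered"
            fi
        done < domains.txt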

    Read the article

  • google.com redirecting to local html copy

    - by Sneha
    When I type http://google.com and press Enter in any of my browsers (Mozilla, Chrome), the URL bar suddenly changes to this:

        file:///C:/Users/Administrator/AppData/Roaming/Google_Toolbar/Google_Toolbar/1.0.0.0/MyGoogle.html

    After that I can still continue to search in Google, but the URL bar keeps showing the local address. Surprisingly, this happens only for google.com and not for other sites.

    Read the article

  • Unextending Sharepoint 2007 Web Application from a zone

    - by dunxd
    When our Sharepoint was migrated from Sharepoint 2003 to Sharepoint 2007 (both fully paid versions), the consultants who carried it out extended each web app into two IIS sites/zones (e.g. the original Web App was http://intranet, then http://newintranet and http://intranet would be created for Sharepoint 2007 - each with its own IIS site). The idea was that during the migration period we would set up DNS to point the old URL to the SP2003 servers and the new one to SP2007, then once the migration was complete, do a DNS change so SP2007 would receive the requests to the http://intranet type URLs.

    Unfortunately the contractors did not tidy up the application extensions and IIS sites after the migration, and for some time both URLs were in use, resulting in many document links pointing to the http://newintranet type URLs. This means I need to maintain these URLs.

    Due to a rejig of organisation structure we now need to relocate some Sharepoint sites, and I'd like to use the RDA Collaboration Sharepoint URL Redirector feature. However, a limitation of this is that it doesn't work for Web Applications which have been extended into multiple zones. So I have a need to tidy up the situation that our consultants left behind.

    I think the right thing to do is use the "Remove Sharepoint from IIS Web Site" page in Central Admin to remove the zone for the newintranet type sites, and select the option to also delete the IIS site. That should result in having no IIS sites listening for http://newintranet type URLs. Is this the right procedure?

    Once I have done that I need to set up Sharepoint to receive requests sent to the http://newintranet type URLs so they will continue to work. I am not sure whether I should do this:

    - using Alternate Access Mappings, or
    - by adding a host header to the IIS site, or
    - by creating a non-Sharepoint IIS site for each http://newintranet type URL and using IIS redirection to forward the requests to the new URL, using variables to pass the path to the Sharepoint site.

    Does anyone have any thoughts on these options, or any other way of achieving this? Sharepoint 2007 is running on Windows 2003 with IIS6. We don't currently have plans/budget to upgrade to Sharepoint 2010.

    Read the article

  • IE Behaviors in my system

    - by Dharani
    When I type a URL, say www.google.com, in IE (installed and tested with IE 6.0/7.0/8.0), it does not recognize the URL on the first attempt; when I type it a second time, it goes to the page. But when I test it in other browsers like Firefox and Chrome, it works without any problem. I scanned my system with Norton and Kaspersky, and they do not complain about any virus. I am using Windows XP Service Pack 2. Could my system be affected by something like the DoubleClick virus?

    Read the article

  • Force caching of handler output which actively resists caching

    - by deceze
    I'm trying to force caching of a very obnoxious piece of PHP script which actively tries to resist caching, for no good reason, by setting all the anti-cache headers:

        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Content-Type: text/html; charset=UTF-8
        Date: Thu, 22 May 2014 08:43:53 GMT
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Last-Modified:
        Pragma: no-cache
        Set-Cookie: ECSESSID=...; path=/
        Vary: User-Agent,Accept-Encoding
        Server: Apache/2.4.6 (Ubuntu)
        X-Powered-By: PHP/5.5.3-1ubuntu2.3

    If at all avoidable I do not want to have to modify this 3rd party piece of code at all and instead just get Apache to cache the page for a while. I'm doing this very selectively, only for very specific pages which have no real impact on session cookies or the like, i.e. which do not contain any personalised information.

        CacheDefaultExpire 600
        CacheMinExpire 600
        CacheMaxExpire 1800
        CacheHeader On
        CacheDetailHeader On
        CacheIgnoreHeaders Set-Cookie
        CacheIgnoreCacheControl On
        CacheIgnoreNoLastMod On
        CacheStoreExpired On
        CacheStoreNoStore On
        CacheLock On
        CacheEnable disk /the/script.php

    Apache is caching the page alright:

        [cache:debug] AH00698: cache: Key for entity /the/script.php?(null) is http://example.com:80/the/script.php?
        [cache_disk:debug] AH00709: Recalled cached URL info header http://example.com:80/the/script.php?
        [cache_disk:debug] AH00720: Recalled headers for URL http://example.com:80/the/script.php?
        [cache:debug] AH00695: Cached response for /the/script.php isn't fresh. Adding conditional request headers.
        [cache:debug] AH00750: Adding CACHE_SAVE filter for /the/script.php
        [cache:debug] AH00751: Adding CACHE_REMOVE_URL filter for /the/script.php
        [cache:debug] AH00769: cache: Caching url: /the/script.php
        [cache:debug] AH00770: cache: Removing CACHE_REMOVE_URL filter.
        [cache_disk:debug] AH00737: commit_entity: Headers and body for URL http://example.com:80/the/script.php? cached.

    However, it always insists that the "cached response isn't fresh" and never serves the cached version. I guess this has to do with the Expires header, which marks the document as expired (but I don't know whether that's the correct assumption). I've tried to overwrite and unset headers using mod_headers, but this doesn't help; whatever combination I try, the cache is not impressed at all. I'm guessing that the order of operations is wrong and the headers are being rewritten after the cache sees them; "early" header processing doesn't help either. I've experimented with CacheQuickHandler Off and with setting explicit filter chains, but nothing is helping. I'm really mostly poking in the dark, as I do not have a lot of experience with configuring Apache filter chains. Is there a straightforward solution for caching this obnoxious piece of code?

    Read the article

  • Question about RewriteRule and HTTP_HOST server variable

    - by SeancoJr
    In a rewrite rule that redirects to a specific URL when its rewrite condition is met, would it be possible to use HTTP_HOST as part of the URL that is redirected to? Example in question:

        RewriteRule .*\.(jpg|jpe?g|gif|png|bmp)$ http://%{HTTP_HOST}/no-leech.jpg [R,NC]

    The motive behind this question is a desire to create a single .htaccess file that would match an addon domain (on a shared hosting account) and an unlimited number of subdomains below it, to prevent hotlinking of images.
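
    For what it's worth, server variables such as %{HTTP_HOST} are expanded in the substitution string of a RewriteRule, so the rule above should redirect to whichever host was requested. A hedged sketch of a host-agnostic hotlink guard built around that idea (the combined-test-string trick in the second RewriteCond compares the Referer's host against %{HTTP_HOST} with a regex backreference; treat it as a starting point to test, not a drop-in rule):

        RewriteEngine On
        # never rewrite the placeholder image itself, to avoid a redirect loop
        RewriteCond %{REQUEST_URI} !/no-leech\.jpg$
        # allow empty referers (direct requests, some proxies)
        RewriteCond %{HTTP_REFERER} !^$
        # block requests whose referer host does not match the requested host
        RewriteCond %{HTTP_HOST}@@%{HTTP_REFERER} !^([^@]*)@@https?://\1/ [NC]
        RewriteRule \.(jpe?g|gif|png|bmp)$ http://%{HTTP_HOST}/no-leech.jpg [R,NC,L]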

    Read the article

  • run browser from remote machine

    - by cathy02
    Hello, I can connect to a remote machine using ssh, and I would like to open a URL from that remote machine. I can do this using lynx (a command-line browser), but then JavaScript etc. gets messed up. Is there a way I can open the URL and see it in my local machine's Firefox browser, for example? Thanks
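
    One common approach, sketched here on the assumption that the goal is to browse as if from the remote machine while using the local Firefox: open a SOCKS tunnel over ssh and point the local browser at it (the host name and port are placeholders):

        # start a dynamic (SOCKS) forward on local port 1080; -N means "no remote command"
        ssh -D 1080 -N user@remote-host

        # then, in the local Firefox, set a manual proxy configuration:
        # SOCKS host: localhost, port: 1080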

    Read the article

  • MaxRequestLen error when I use https

    - by david
    When I got MaxRequestLen errors on a file upload page, I set MaxRequestLen to 31457280 using:

        <IfModule mod_fcgid.c>
            MaxRequestLen 31457280
            FcgidIOTimeout 90
        </IfModule>

    Now the file upload works when I use an http URL. I have recently configured SSL for my site, and when I use the https URL for the same upload page, I get the same error:

        HTTP request length 131073 (so far) exceeds MaxRequestLen (131072)

    Is there a different setting for https? Please help. Thank you.
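
    A plausible cause, offered as an assumption rather than a confirmed fix: if the mod_fcgid block above lives inside the port-80 virtual host (or a file only included there), the SSL virtual host falls back to the 131072-byte default. A sketch of repeating the directives inside the :443 vhost (server name and surrounding configuration are placeholders):

        <VirtualHost *:443>
            ServerName www.example.com
            # ... existing SSL and FastCGI configuration ...

            <IfModule mod_fcgid.c>
                MaxRequestLen 31457280
                FcgidIOTimeout 90
            </IfModule>
        </VirtualHost>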

    Read the article

  • Domain Forwarding | A Magento Store

    - by WaZ
    I have installed Magento inside a folder called magento. The URL of the site currently looks like this: http://gios.azamdevelopment.co.uk/magento/ We want our domain forwarded to the above URL, and moreover, any relative links should work as well, e.g. http://gios.azamdevelopment.co.uk/magento/customer/account/login/ should ideally be www.giosconcept.com/customer/account/login, and so forth. Thanks very much.
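
    Rather than forwarding, one way this is often handled is to serve the magento folder directly as the document root of the real domain, so that www.giosconcept.com/customer/account/login maps straight onto the store. A hedged Apache sketch, assuming access to the server's vhost configuration and that Magento's own base URLs would also be updated to the new domain; the filesystem path below is a guess based on the folder name in the question:

        <VirtualHost *:80>
            ServerName www.giosconcept.com
            ServerAlias giosconcept.com
            # point the document root at the existing magento folder
            DocumentRoot /var/www/gios.azamdevelopment.co.uk/magento
            <Directory /var/www/gios.azamdevelopment.co.uk/magento>
                AllowOverride All
            </Directory>
        </VirtualHost>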

    Read the article

  • Can I tell Chrome not to redirect mistyped URLs?

    - by Nathan Long
    If I mistype a URL, Chrome sometimes redirects me to a search. For instance, typing "example_url_i_sometimes_mistype.conm" into the location bar gets me: Your search - example_url_i_sometimes_mistype.conm - did not match any documents. Not only is this annoying, but if the mistyped URL was one on private DNS, I've now just told Google that the domain exists. (A small concern, but bad in principle.) Can I configure Chrome to just show an error and not blab to Google Search about it?

    Read the article

  • Overwriting output to a text file

    - by Naveen Gamage
    I'm trying to write the wget command's output to a text file, but it always appends to the text file.

        #!/bin/sh

        download()
        {
            local url=$1
            echo -n " "
            wget --progress=dot $url 2>&1 | grep --line-buffered "%" | \
                sed -u -e "s,\.,,g" | awk '{printf("\b\b\b\b%4s", $2)}'
            echo " DONE"
        }

        file="$1"
        echo -n "Downloading $file:"
        download "$file" > file.log

    I tried using >, but it won't work. Where am I going wrong?
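
    For reference, a single > redirection truncates the target file each time it is performed, while >> appends; if file.log keeps growing across runs, something else is likely appending to it. A small sketch of making the truncation explicit at the start of the script (file name as in the question):

        #!/bin/sh
        # start each run with an empty log file
        : > file.log

        # ... download() definition as above ...

        file="$1"
        echo -n "Downloading $file:"
        download "$file" >> file.log   # safe to append now that the file was emptied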

    Read the article

  • Exclude regular expression from virtual host

    - by Joao Trindade
    I have a virtual host in Apache which is redirecting requests to another web server.

        <VirtualHost *:80>
            DocumentRoot /var/www
            ServerName another.host
            ProxyPass / http://another.host2:8081/
            ProxyPassReverse / http://another.host2:8081/
        </VirtualHost>

    I need to exclude a URL pattern from being caught by this virtual host. Basically I don't want requests with the URL http://another.host:8081/~username to be forwarded to the other server. Can this be done?
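
    A minimal sketch of mod_proxy's exclusion syntax, assuming the goal is to keep the ~username paths local: a ProxyPass rule whose target is ! excludes that path prefix, and exclusions must appear before the more general ProxyPass:

        <VirtualHost *:80>
            DocumentRoot /var/www
            ServerName another.host

            # do not proxy user directories; serve them from this host instead
            ProxyPass /~ !

            ProxyPass / http://another.host2:8081/
            ProxyPassReverse / http://another.host2:8081/
        </VirtualHost>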

    Read the article

  • negative regexp in Squirm (for Squid). Possible?

    - by alex8657
    Has anyone managed to do negative regexps (or partial ones) with Squirm? I tried negative lookahead things and if-then-else regexps, but Squirm 1.26 fails to understand them. What I want to do is simply:

    - If the URL begins with 'http://' and contains 'account', then rewrite/redirect to 301:https://
    - If the URL begins with 'https://' and does NOT contain 'account', then rewrite/redirect to 301:http://

    So far I do that using two lines of Perl, but Squirm redirectors would take less memory.
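
    A hedged sketch of doing the same thing directly as a Squid rewrite helper in shell, in case Squirm stays stubborn. This assumes the classic redirector protocol, where Squid writes one request per line (URL first) on the helper's stdin and expects either an empty line (no change) or a replacement such as 301:new-url on stdout; newer Squid releases use a different helper syntax, so check against the version in use:

        #!/bin/sh
        # read requests line by line; $url is the first whitespace-separated field
        while read -r url rest; do
            case "$url" in
                http://*account*)  echo "301:https://${url#http://}" ;;
                https://*account*) echo "" ;;   # already where we want it
                https://*)         echo "301:http://${url#https://}" ;;
                *)                 echo "" ;;   # anything else: leave unchanged
            esac
        done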

    Read the article

  • "Save as webpage, complete" problem ?

    - by Adelave
    I have a problem when I try to save a web page from the browser using "Save as Webpage, complete": some of the CSS/JavaScript/images are not saved, so when I reopen the saved page offline it can't display properly. How do I solve this? How do I make a webpage save properly from the browser? I don't want to use the MHT extension, because my situation is that I need to give a URL to a web designer, who will open the URL, save the webpage and modify the CSS/HTML. Thanks

    Read the article

  • Can I keep Google from stealing my cursor? (Firefox)

    - by LinkTiger
    I have iGoogle as my home page. Every time that I start up Firefox with the intent to go to a specific page, I end up typing half the URL in the Google search box when iGoogle steals focus away from the URL bar. Is there any way to hack Firefox (or iGoogle) to keep the page from stealing my cursor on load? Thanks!

    Read the article

  • haproxy backend default location

    - by magd1
    If you go to www.company.com, I want it to redirect to /something/something on my server, but the URL should still show www.company.com. Is this possible in haproxy?

        backend new_marketing_server
            *** set default URL to /something/something ***
            mode http
            balance roundrobin
            timeout server 10m
            option httpclose
            server server1 10.86.151.142:80 minconn 32000 maxconn 3200 check port 80 inter 2000
            server server2 10.122.13.189:80 minconn 32000 maxconn 3200 check port 80 inter 2000
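
    A hedged sketch of one way this is commonly done in older haproxy releases (1.4/1.5-era configuration, to match the directives above): rewrite the request line inside the backend so that a request for / is sent to the server as /something/something, while the client-visible URL stays www.company.com. The regex below only touches requests whose path is exactly /, and the exact escaping and directive availability should be checked against your haproxy version:

        backend new_marketing_server
            mode http
            balance roundrobin
            timeout server 10m
            option httpclose
            # "GET / HTTP/1.1"  ->  "GET /something/something HTTP/1.1"
            reqrep ^([^\ ]*)\ /\ (HTTP.*)$ \1\ /something/something\ \2
            server server1 10.86.151.142:80 minconn 32000 maxconn 3200 check port 80 inter 2000
            server server2 10.122.13.189:80 minconn 32000 maxconn 3200 check port 80 inter 2000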

    Read the article

  • I need to move part of my website to another web server, how can I do this and keep the same Domain

    - by hamlin11
    I need to move a section of my website to another server because it is taxing our current web server. However, I cannot afford to lose page rank on any pages within the section of the site that must be moved. Furthermore, the URL of the pages must not be changed... visitors must still see the same URL, even though it would be served up by different hardware from a different data center. How can this be accomplished? Thanks
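
    One technique that fits the constraints as described (same domain, same URLs, different hardware) is to reverse-proxy the moved section from the existing server to the new one. A hedged Apache sketch, assuming the current site runs Apache with mod_proxy available; /heavy-section and the backend host name are placeholders:

        <VirtualHost *:80>
            ServerName www.example.com
            # requests for the moved section are fetched transparently from the new
            # server; visitors keep seeing www.example.com/heavy-section/... URLs
            ProxyPass        /heavy-section/ http://backend.example.net/heavy-section/
            ProxyPassReverse /heavy-section/ http://backend.example.net/heavy-section/
        </VirtualHost>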

    Read the article

  • JqGrid - AfterInsertRow, setCell: programmatically change the content of the cell

    - by oirfc
    Hello there, I am new to jqGrid, so please bear with me. I am having some problems with styling the cells when I use a showlink formatter. In my configuration I set up afterInsertRow, and it works fine if I just display simple text:

        afterInsertRow: function(rowid, aData) {
            if (aData.Security == 'C') {
                jQuery('#list').setCell(rowid, 'Doc_Number', '', { color: 'red' });
            } else {
                jQuery('#list').setCell(rowid, 'Doc_Number', '', { color: 'green' });
            }
        },
        ...

    This code works just fine, but as soon as I add a formatter

        { name: 'Doc_Number', ..., formatter: 'showlink', formatoptions: { baseLinkUrl: 'url.aspx' } }

    the above code doesn't work, because a new element is added to the cell:

        <a href='url.aspx'>cellValue</a>

    Is it possible to access the new child element programmatically, using something like the code above, and change its style, e.g.:

        <a href='url.aspx' style='color: red;'>cellValue</a>

    Thanks in advance, oirfc

    Read the article

  • ASP.NET MVC - HttpPost to ReturnURL after redirect

    - by JP
    Hello, I am writing an ASP.NET MVC 2.0 application which requires users to log in before placing a bid on an item. I am using an action filter to ensure that the user is logged in and, if not, send them to a login page and set the return URL. Below is the code I use in my action filter:

        if (!filterContext.HttpContext.User.Identity.IsAuthenticated)
        {
            filterContext.Result = new RedirectResult(String.Concat("~/Account/LogOn", "?ReturnUrl=", filterContext.HttpContext.Request.RawUrl));
            return;
        }

    In my logon controller I validate the user's credentials, then sign them in and redirect to the return URL:

        FormsAuth.SignIn(userName, rememberMe);
        if (!String.IsNullOrEmpty(returnUrl))
        {
            return Redirect(returnUrl);
        }

    My problem is that this will always use a GET (HttpGet) request, whereas my original submission was a POST (HttpPost) and should always be a POST. Can anyone suggest a way of passing this URL including the HTTP method, or any workaround to ensure that the correct HTTP method is used? Thanks in advance, JP

    Read the article
