Search Results

Search found 19109 results on 765 pages for 'canonical url'.


  • URL with question mark considered as a new HTTP request?

    - by Navin Leon
    I am optimizing my web page by implementing caching, so when I want the browser not to take data from the cache, I append a dynamic number as a query value, e.g. google.com?val=823746. But sometimes, when I do want to bring data from the cache, the browser makes a new HTTP request to the server for a URL like the one below instead of taking the data from cache. Is that because of the question mark in the URL? e.g. http://google.com? Please provide a reference document link. Thanks in advance. Regards, Navin
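
    A note on the mechanics, for context: browsers key their cache on the full URL, query string included, so appending a unique value creates a genuinely new cache entry, and a bare trailing "?" can likewise be treated as a distinct URL by some caches. A minimal sketch of the usual cache-busting helper (the function name is illustrative):

        // Hypothetical helper: append a unique query value so the
        // browser treats the request as new and bypasses its cache.
        function bustCache(url) {
          var sep = url.indexOf('?') === -1 ? '?' : '&';
          return url + sep + '_=' + new Date().getTime();
        }

        // Usage: request a fresh copy; use the plain URL to allow caching.
        var freshUrl = bustCache('http://example.com/data.json');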

    Read the article

  • IIS7: URL Rewrite - can it be used to hide a CDN path?

    - by Wild Thing
    Hi, I am using Rackspace Cloud CDN (Limelight CDN) for my website. The URLs of the CDN are in the format http://cxxxxxx.cdn.cloudfiles.rackspacecloud.com/something.jpg and my domain is mydomain.com. Can I use IIS URL rewriting to serve http://cxxxxxx.cdn.cloudfiles.rackspacecloud.com/something.jpg as http://images.mydomain.com/something.jpg? Or is this impossible unless the CDN setup accepts my CNAME? If it is possible, can you please help create the URL rewrite rule? (Sorry, I don't know how to use regular expressions.) Thanks, WT
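
    For context, a rewrite rule alone cannot make the browser display images.mydomain.com for content it fetches directly from the CDN host; that needs either a CNAME the CDN accepts or a reverse proxy on your own server. A sketch of the proxy variant, assuming images.mydomain.com resolves to the IIS box and Application Request Routing (ARR) is installed with proxying enabled (rule name and hosts are placeholders):

        <rule name="ProxyToCdn" stopProcessing="true">
          <match url="^(.*)$" />
          <conditions>
            <add input="{HTTP_HOST}" pattern="^images\.mydomain\.com$" />
          </conditions>
          <!-- Fetch the file from the CDN server-side; the visitor
               only ever sees images.mydomain.com -->
          <action type="Rewrite"
                  url="http://cxxxxxx.cdn.cloudfiles.rackspacecloud.com/{R:1}" />
        </rule>

    Note that this routes every image through your own server, which gives up much of the CDN's benefit; the CNAME route is usually preferable.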

    Read the article

  • Log the Location URL of 301 redirect responses from IIS

    - by James Lawruk
    Is there a way to log 301 redirects returned by IIS with (1) the request URL and (2) the Location URL of the response? Something like this: Url, Location /about-us, /about /old-page, /new-page. The IIS logs contain the request URL and the status code (301), but not the Location URL of the response. Ideally there would be an additional field in the IIS log, Location, populated whenever IIS responds with a 301. In my case the source of the redirect could be ISAPI Rewrite rules, ASP.NET applications, ColdFusion applications, or IIS itself. Perhaps there is a way to log IIS response data? Thanks for your help.
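
    For later readers: IIS 8.5 added enhanced logging, which can record response headers as custom log fields and covers exactly this case. A hedged applicationHost.config sketch (the site name is a placeholder; older IIS versions would need an HTTP module or ISAPI filter instead):

        <system.applicationHost>
          <sites>
            <site name="MySite">
              <logFile>
                <customFields>
                  <!-- Write the Location response header into the log -->
                  <add logFieldName="Location"
                       sourceName="Location"
                       sourceType="ResponseHeader" />
                </customFields>
              </logFile>
            </site>
          </sites>
        </system.applicationHost>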

    Read the article

  • How to lock Firefox tab to domain or URL pattern

    - by f3lix
    I know of Firefox extensions that allow protecting tabs (cannot be closed) and locking tabs (cannot change URL). What I need is an extension that locks a tab to a certain domain or URL pattern. For example, I want to lock a tab to the domain example.com. As long as I follow links within this domain, the tab should behave normally (unlocked), but if I follow a link to another domain, the link should be opened in a new tab, leaving the locked tab open on a URL within the locked domain. Even better would be the ability to lock a tab to a URL pattern: if a URL matches the pattern it is opened in the current tab, otherwise it is opened in a new tab. Do you know of anything (preferably an extension for FF 8.0) that provides this kind of functionality?

    Read the article

  • IIS URL Rewrite: capturing the query string and escaping characters

    - by LiamB
    We are adding redirects from an old site to a new one in IIS7 using the URL Rewrite module. The old site's URLs are all based on the query string. We'd usually do explicit rewrites like the one below, but that won't work in the case of the query string. <rule name="Redirect-1" patternSyntax="Wildcard" stopProcessing="true"> <match url="index.php?option=m_content&view=article&id=15&Itemid=16" /> <action type="Redirect" url="http://newurl/some-page" /> </rule> So, using the two URLs above, how can we do a 301 redirect?
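
    The usual fix, sketched from the question's own rule: <match url> tests only the path, so the query string has to be matched in a condition, and literal ampersands must be escaped as &amp; in web.config XML. Treat this as a starting point rather than a verified rule:

        <rule name="Redirect-1" stopProcessing="true">
          <match url="^index\.php$" />
          <conditions>
            <!-- The query string is matched here, not in <match url> -->
            <add input="{QUERY_STRING}"
                 pattern="option=m_content&amp;view=article&amp;id=15&amp;Itemid=16" />
          </conditions>
          <!-- appendQueryString="false" keeps the old query string off
               the target; redirectType="Permanent" makes it a 301 -->
          <action type="Redirect" url="http://newurl/some-page"
                  redirectType="Permanent" appendQueryString="false" />
        </rule>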

    Read the article

  • How to redirect the URL of old modified directories

    - by kath
    I really appreciate your hard work. After searching a lot of the web I cannot find the answer, so if you get time please take a look. The site is a classifieds site (in PHP). I edited a category name, but the old URL still shows up in the Google index and still opens in the browser, so is there any way to redirect it to the new one? I tried .htaccess but can't get the result. Here are both URLs: before modification, http://adsbuz.com/vehicles-cars/other-vehicles/selling-my-2010-toyota-sequioa-19500-9585.htm; after editing the category name, http://adsbuz.com/vehicles-cars-for-sale/other-vehicles/selling-my-2010-toyota-sequioa-19500-9585.htm (the category name was "vehicles-cars" before and "vehicles-cars-for-sale" after). As you can see, both URLs open, which is not good for SEO. Also, is there a way that when someone opens a wrong URL, the page opens only under the correct URL automatically, just like on your site? Consider me new to this market; I need a little help here. Thanks, kath
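
    One common approach, sketched for Apache .htaccess with the paths taken from the question (this assumes mod_alias is available):

        # 301-redirect anything under the old category path to the new one
        RedirectMatch 301 ^/vehicles-cars/(.*)$ /vehicles-cars-for-sale/$1

    A 301 also tells Google to replace the old URL with the new one in its index over time, which addresses the SEO concern.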

    Read the article

  • Website's particular URL suddenly disappeared from Google search results

    - by Ragavendran Ramesh
    I have a website in which a particular page URL was indexed in the first 10 Google search results, but it suddenly disappeared; now the page is not even in the first 100 results. What would be the reason? I feel that the page has been spammed by our competitors. Is it possible to avoid that, or can I find out whether the page has been spammed? Is it possible to tell whether a particular page on a website is spammed or malicious?

    Read the article

  • Re-indexing website with clean URLs

    - by artsi
    So I have a website with URLs like this: http://www.domain.com/profile.php?id=151. I've now cleaned them up with mod_rewrite into this: http://www.domain.com/profile/firstname-lastname/151. I've fetched and re-indexed my website after the change. What is the best way to make the old dirty URLs disappear from search results and keep the clean ones? Is blocking profile.php with robots.txt enough?
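
    One caveat worth noting: blocking profile.php in robots.txt stops Google from recrawling the old URLs, which also stops it from ever seeing a redirect on them, so the usual advice is a 301 instead. A hedged .htaccess sketch, assuming the application can serve a profile from the id alone (the firstname-lastname slug cannot be reconstructed inside the rewrite itself):

        RewriteEngine On
        # 301 /profile.php?id=151 to /profile/151; the trailing "?"
        # strips the old query string from the target
        RewriteCond %{QUERY_STRING} ^id=(\d+)$
        RewriteRule ^profile\.php$ /profile/%1? [R=301,L]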

    Read the article

  • Deleted files still accessible without www in URL

    - by phlegma
    I have deleted all files and all hidden files from my server; there is nothing left but log files, which cannot be deleted. Ironically, the files are still accessible when nothing is there. I have cleared the cache and checked from multiple browsers and computers/devices. The files show when I exclude "www" from the URL: http://sarastringfellow.com/assets/photo/c.jpg http://www.sarastringfellow.com/assets/photo/c.jpg What does this mean?

    Read the article

  • Can't get Rewrite rule to keep original URL

    - by user38100
    I have these rewrites, but I would like the URL to stay the same as what was typed originally. I thought removing the [R] flags would stop the redirect, but it hasn't: RewriteCond %{HTTP_HOST} ^examplea\.example\.com$ [NC] RewriteRule (.*) http://examplea.example.com:32400/web [L] RewriteCond %{HTTP_HOST} ^exampleb\.example\.com$ [NC] RewriteRule (.*) http://exampleb.example.com:9091 [L] Edit: would this work better? RewriteCond %{HTTP_HOST} ^hello.example.com$ RewriteRule ^(/)?$ welcome [L]
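
    If the goal is to serve content from another port while the address bar keeps the typed URL, note that a rewrite target with an explicit host and port is always turned into an external redirect; the usual sketch is to proxy instead, assuming mod_proxy and mod_proxy_http are enabled:

        RewriteCond %{HTTP_HOST} ^examplea\.example\.com$ [NC]
        # [P] proxies the request server-side instead of redirecting
        # the browser, so the visible URL does not change
        RewriteRule ^(.*)$ http://localhost:32400/web/$1 [P,L]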

    Read the article

  • 6451B URL List...

    - by Da_Genester
    In addition to the info from the 6451A URL List, included below is info for the newer version of the class, 6451B. Helpful Links: SCCM Tools Aggregation: http://tinyurl.com/SCCM07ToolsLinks   Module 5:  Querying and Reporting Data 64-bit OS and Office Web Component issues - http://tinyurl.com/SCCM07OWC64bit SCCM and SSRS integration for a Reporting Services Point - http://tinyurl.com/SCCM07SSRS

    Read the article

  • Why does DNS work the way it does?

    - by sabof
    This is a Canonical Question about DNS (the Domain Name System). If my understanding of the DNS system is correct, the .com registry holds a table that maps domains (www.example.com) to DNS servers. What is the advantage? Why not map directly to an IP address? If the only record that needs to change when I configure a DNS server to point to a different IP address is located at the DNS server, why isn't the process instant? If the only reason for the delay is DNS caches, is it possible to bypass them so I can see what is happening in real time?

    Read the article

  • Why is "chmod -R 777 /" destructive?

    - by samwise
    This is a Canonical Question about file permissions and why "chmod -R 777 /" is destructive. I'm not asking how to fix this problem, as there are a ton of references for that already on Server Fault (reinstall the OS). Why does it do anything destructive at all? If you've ever run this command, you pretty much immediately destroy your operating system. I'm not clear why removing restrictions has any impact on existing processes. For example, if I don't have read access to something and, after a quick mistype in the terminal, I suddenly do have access... why does that cause Linux to break?

    Read the article

  • Multiple SSL domains on the same IP address and same port?

    - by John
    This is a Canonical Question about hosting multiple SSL websites on the same IP. I was under the impression that each SSL certificate required its own unique IP address/port combination. But the answer to a previous question I posted is at odds with this claim. Using information from that question, I was able to get multiple SSL certificates to work on the same IP address and on port 443. I am very confused as to why this works, given the assumption above, reinforced by others, that each SSL website on the same server requires its own IP/port. I am suspicious that I did something wrong. Can multiple SSL certificates be used this way?

    Read the article

  • cflock does not throw a timeout for the same URL called in the same browser

    - by Pritesh Patel
    I am trying to lock a block on the page test.cfm; below is the code written on the page. <cfscript> writeOutput("Before lock at #now()#"); lock name="threadlock" timeout="3" type="exclusive" { writeOutput("<br/>started at #now()#"); thread action="sleep" duration="10000"; writeOutput("<br/>ended at #now()#"); } writeOutput("<br/>After lock at #now()#"); </cfscript> Assuming the URL for the page is http://localhost.local/test.cfm and I run it in two different tabs of the same browser, I was expecting one of the requests to throw a timeout error after 3 seconds, since the other request holds the lock for at least 10 seconds due to the thread sleep. Surprisingly, I do not get any timeout error; instead the second page call runs after 10 seconds, once the first call finishes execution. But if I append a URL parameter (e.g. http://localhost.local/test.cfm?q=1), it does throw the error. Also, if I call the same URL in a different browser, one of the calls throws the timeout. Is the lock based on session and URL? Update: here is the output for two different cases: Case 1: TAB1 Url: http://localhost.local/test/test.cfm Before lock at {ts '2013-10-18 09:21:35'} started at {ts '2013-10-18 09:21:35'} ended at {ts '2013-10-18 09:21:45'} After lock at {ts '2013-10-18 09:21:45'} TAB2 Url: http://localhost.local/test/test.cfm Before lock at {ts '2013-10-18 09:21:45'} started at {ts '2013-10-18 09:21:45'} ended at {ts '2013-10-18 09:21:55'} After lock at {ts '2013-10-18 09:21:55'} Case 2: TAB1 Url: http://localhost.local/test/test.cfm Before lock at {ts '2013-10-18 09:27:18'} started at {ts '2013-10-18 09:27:18'} ended at {ts '2013-10-18 09:27:28'} After lock at {ts '2013-10-18 09:27:28'} TAB2 Url: http://localhost.local/test/test.cfm? (added ? at the end) Before lock at {ts '2013-10-18 09:27:20'} A timeout occurred while attempting to lock threadlock. The error occurred in C:/inetpub/wwwroot/test/test.cfm: line 13 11 : 12 : <cfoutput>Before lock at #now()#</cfoutput> 13 : <cflock name="threadlock" timeout="3" type="exclusive"> 14 : <cfoutput><br/>started at #now()#</cfoutput> 15 : <cfthread action="sleep" duration="10000"/> ... The result for case 2 is as expected. For case 1, the strange thing I just noticed is that the tab 2 output "Before lock at {ts '2013-10-18 09:21:45'}" indicates that the whole request started after 10 seconds (i.e. after the complete execution of the first tab), even though I fired it in the second tab just 2 seconds after the first.

    Read the article

  • How to detect an invalid image URL with Java?

    - by Cataclysm
    I have a method to download an image from a URL, as below: public static byte[] downloadImageFromURL(final String strUrl) { InputStream in; ByteArrayOutputStream out = new ByteArrayOutputStream(); try { URL url = new URL(strUrl); in = new BufferedInputStream(url.openStream()); byte[] buf = new byte[2048]; int n = 0; while (-1 != (n = in.read(buf))) { out.write(buf, 0, n); } out.close(); in.close(); } catch (IOException e) { return null; } return out.toByteArray(); } I have an image URL and it is valid, for example: https://encrypted-tbn1.gstatic.com/images?q=tbn:ANd9GcTxfYM-hnD-Z80tgWdIgQKchKe-MXVUfTpCw1R5KkfJlbRbgr3Zcg My problem is that I don't want to download the image if it does not really exist, like: https://encrypted-tbn1.gstatic.com/images?q=tbn:ANd9GcTxfYM-hnD-Z80tgWdIgQKchKe-MXVUfTpCw1R5KkfJlbRbgr3Zcgaaaaabbbbdddddddddddddddddddddddddddd That image shouldn't be downloaded by my method. So, how can I know that the given image URL does not really exist? I don't want to validate my URL (I think that may not be my solution), so I googled for it and found "How to check if a URL exists or returns 404 with Java?" and "Java check if file exists on remote server using its url". But con.getResponseCode() will always return status code "200", which means my method will also download invalid image URLs. So I printed my buffered stream: System.out.println(in.read(buf)); An invalid image URL produces "43". So I added these lines to my method: if (in.read(buf) == 43) { return null; } That works, but I don't think it will always hold. Is there another way to do this? I would really appreciate any suggestions; this problem is wrecking my head. Thanks for reading my question.
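
    For reference, a more robust check than counting bytes is to verify the response actually decodes as an image; ImageIO.read returns null for undecodable input, which also catches servers that answer 200 with an error body. A hedged sketch (the method name is illustrative):

        import java.io.IOException;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import javax.imageio.ImageIO;

        public static boolean isValidImageUrl(String strUrl) {
            try {
                HttpURLConnection con =
                        (HttpURLConnection) new URL(strUrl).openConnection();
                // Reject non-200 responses and non-image content types first
                String type = con.getContentType();
                if (con.getResponseCode() != HttpURLConnection.HTTP_OK
                        || type == null || !type.startsWith("image/")) {
                    return false;
                }
                // Then confirm the body really decodes as an image
                return ImageIO.read(con.getInputStream()) != null;
            } catch (IOException e) {
                return false;
            }
        }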

    Read the article

  • SSRS for SharePoint: images in a report from a SharePoint list URL?

    - by James Polhemus
    Greetings Sabios, I have several reports I run successfully where the data comes from a SharePoint list in the form of an XML dataset. I am, however, having trouble with one. I have a report that pulls an image file onto the main body of the report. This data too comes from a SharePoint list as an XML dataset, which sends me the URL to the JPEG or BMP or GIF, whatever the case may be. I can successfully pull this off in my own Visual Studio IDE, and my local report server will render it as well, but it won't run on my SharePoint report server (my MOSS runs over HTTPS while my SharePoint report server is HTTP; might this matter?). When I upload it to SharePoint and run it through the SharePoint report server, I get back EVERYTHING in the report header and footer (dataset text and embedded images) but just a big red X where the main image should be. I have done everything the boards say: A. I made sure the unattended execution account is running on the report server. B. I have ensured the URL comes back in a clean format (else the images wouldn't render locally either, and they do). The report logs throw this exception: e ERROR: Throwing Microsoft.ReportingServices.Diagnostics.Utilities.ContainerTypeNotSupportedException: The target location you specified is not supported by the report server. A report definition (.rdl), report model (.smdl), resource, or shared data source (.rsds) file must be located within a library or a folder within it., ; Info: Microsoft.ReportingServices.Diagnostics.Utilities.ContainerTypeNotSupportedException: The target location you specified is not supported by the report server. A report definition (.rdl), report model (.smdl), resource, or shared data source (.rsds) file must be located within a library or a folder within it. Any takers? Even my SharePoint administrator can't help me :) James

    Read the article

  • Change Edit Control Block (ECB) Link URL in SharePoint

    - by dirq
    Is there a way to dynamically change the hyperlink associated with an ECB menu in WSS 3.0? For instance, I have a list with two fields: one is hidden and is a link, the other is the title field, which has the ECB menu. The title field currently links to the item's view page, but we want it to link to the link field's URL. Is that possible? UPDATE - 5/29/09 9AM: I have this so far (see this TechNet post): <script type="text/javascript"> var url = 'GoTo.aspx?ListTitle='+ctx.ListTitle; url += '&ListName='+ctx.listName; url += '&ListTemplate='+ctx.listTemplate; url += '&listBaseType='+ctx.listBaseType; url += '&view='+ctx.view; url += '&'; var a = document.getElementsByTagName('a'); for(i=0;i<=a.length -1;i++) { a[i].href=a[i].href.replace('DispForm.aspx?',url); } </script> This gives me a link like so (formatted so it's easier to see): GoTo.aspx ?ListTitle=MyList &ListName={082BB11C-1941-4906-AAE9-5F2EBFBF052B} &ListTemplate=100 &listBaseType=0 &view={9ABE2B07-2B47-4390-9969-258F00E0812C} &ID=1 My issue now is that the row in the grid gives each item the ID property above, but if I change the view or do any filtering, you can see that the ID is really just the row number. Can I get the actual item's GUID here? If I can get the item's ID, I can send it with the list ID to an application page that will get the right URL from a field in the list and forward the user on to the right site.

    Read the article

  • Should my URLs be lowercase?

    - by Rowan Freeman
    According to this blog ("Understanding SEO Friendly URL Syntax Practices") I should change http://example.com/Hello-Dolly to http://example.com/hello-dolly. The reasons given are: URLs, in general, are case-sensitive, and it will simplify any case-sensitive SEO and analytics reports. According to this GIF that I found in Wikipedia's article on URL normalization, I should convert my URLs from any uppercase to all lowercase. However, I use ASP.NET MVC4, and by default my URLs are structured like this (CamelCase): http://www.domain.com/Controller/Action/Parameter http://www.greatsite.com/Categories/List/Bicycles I've skimmed through RFC 1738 but didn't see a definitive answer. Should I go out of my way to force the framework to lowercase everything? Why did Microsoft design the framework this way if everybody tells me to use lowercase?
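
    For ASP.NET MVC specifically, routing gained a built-in switch for this. A hedged sketch, assuming .NET 4.5 or later where RouteCollection.LowercaseUrls is available (it affects generated URLs only; inbound matching is case-insensitive either way):

        public static void RegisterRoutes(RouteCollection routes)
        {
            // Emit all outbound route URLs in lowercase
            routes.LowercaseUrls = true;

            routes.MapRoute(
                name: "Default",
                url: "{controller}/{action}/{id}",
                defaults: new { controller = "Home", action = "Index",
                                id = UrlParameter.Optional });
        }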

    Read the article

  • Understanding Ajax crawling of search site

    - by vacuum
    I have a couple of questions about AJAX crawling of a site which is itself a kind of search engine. The base article explains the mechanism for making an AJAX application crawlable. All the stuff about HTML snapshots is clear and easy to implement, but I can't understand where the Google bot will get "the pretty AJAX URL" (i.e. www.example.com/ajax.html#key=value) to work with. The first thing that came to mind is breadcrumbs: in the sitemap we can specify pages with breadcrumbs on them, so the bot will go to these pages and get HTML snapshots from there. But I'm sure there are other ways to give the bot this "pretty AJAX URL". In our case, we have a simple search site where the user enters a keyword and presses "Find"; JS executes an AJAX request, receives a JSON response, and fills the page with results (without any refresh, of course). In this case, how do I make the Google bot crawl all the results in addition to the sitemap? Is there an example of the solution described in the article above?

    Read the article

  • Apache domain redirect to subfolder

    - by Dennis
    I have a hosting account with GoDaddy. It's a Linux system running Apache. The way they do their setup, your primary domain is the root folder, and when you add a subdomain it becomes a subfolder of the root, which sucks. I want to set up a subfolder structure to organize my domains. I called GoDaddy support and they said to use redirects, but they did not know how to do that. How it's set up now: primary domain www.domain.com maps to /, and sub.domain.com maps to /sub. I want to create a directory structure and then redirect to each, but only show www.domain.com in the URL: www.domain.com to /domain/www, sub.domain.com to /domain/sub. I tried using: RewriteEngine On RewriteCond %{HTTP_HOST} ^(www.)?domain.com$ RewriteRule ^(/)?$ domain/www [L] but it just changes the URL to www.domain.com/domain/www. Can this be done in .htaccess?
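
    A commonly suggested refinement, using the names from the question: add a condition so the rule does not fire again once the request is already inside the subfolder, and capture the path so deep links carry over. Treat this as a sketch, not a verified rule:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?domain\.com$ [NC]
        # Skip requests that have already been rewritten into the subfolder
        RewriteCond %{REQUEST_URI} !^/domain/www/
        RewriteRule ^(.*)$ /domain/www/$1 [L]

    If the URL still changes in the address bar, check for redirects issued by an .htaccess or DirectorySlash inside the subfolder itself.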

    Read the article

  • Google webmaster tools: parameters that only apply on one page

    - by Imagine digital
    I'm trying to get my e-commerce website on Google and am still figuring out how it all works. I have seen the feature named URL Parameters, which allows me to declare parameters that affect the page content to be indexed (one can also declare parameters that do not affect the page, but that does not apply to me). My question is whether and how I should add parameters that exist on only some pages of my site. Example: the homepage of my site is www.mysite.nl, with no parameters at all. But when a user clicks the navigation bar, it links to www.mysite.nl/itemList.php?category=&....subCategory=.... The parameters category and subCategory define whether there is content on my itemList page and what that content is: matching products are fetched from my database based on those two variables. The question: how do I apply Google's URL Parameters feature properly for my website?

    Read the article

  • Django: Named URLs / Same Template, Different Named URL

    - by TheLizardKing
    I have a webapp that lists all of my artists, albums and songs when the appropriate link is clicked. I make extensive use of generic views (object_list/detail) and named URLs, but I am running into an annoyance. I have three templates that output pretty much exactly the same HTML, looking just like this: {% extends "base.html" %} {% block content %} <div id="content"> <ul id="starts-with"> {% for starts_with in starts_with_list %} <li><a href="{% url song_list_x starts_with %}">{{ starts_with|upper }}</a></li> {% endfor %} </ul> <ul> {% for song in songs_list %} <li>{{ song.title }}</li> {% endfor %} </ul> </div> {% endblock content %} My artist and album templates look pretty much the same, and I'd like to combine the three templates into one. The fact that my variables start with song can easily be changed to the default obj; it's the named URL in <ul id="starts-with"> that I don't know how to correct. Obviously I want it to link to a specific album/artist/song using the named URLs in my urls.py, but I don't know how to make it context-aware. Any suggestions? urlpatterns = patterns('tlkmusic.apps.tlkmusic_base.views', # (r'^$', index), url(r'^artists/$', artist_list, name='artist_list'), url(r'^artists/(?P<starts_with>\w)/$', artist_list, name='artist_list_x'), url(r'^artist/(?P<artist_id>\d+)/$', artist_detail, name='artist_detail'), url(r'^albums/$', album_list, name='album_list'), url(r'^albums/(?P<starts_with>\w)/$', album_list, name='album_list_x'), url(r'^songs/$', song_list, name='song_list'), url(r'^songs/(?P<starts_with>\w)/$', song_list, name='song_list_x'), )
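
    One way to share a single template across the three views, sketched under the assumption that each view can pass extra context: hand the template the name of the URL pattern to reverse, and resolve it with the variable-aware url tag ({% load url from future %} on Django 1.3/1.4). The view and the model import path below are illustrative:

        # views.py (sketch): each list view names the URL pattern it uses
        from django.shortcuts import render
        from tlkmusic.apps.tlkmusic_base.models import Song  # path assumed

        def song_list(request, starts_with=None):
            songs = Song.objects.all()
            if starts_with:
                songs = songs.filter(title__istartswith=starts_with)
            context = {
                'object_list': songs,
                'starts_with_list': 'abcdefghijklmnopqrstuvwxyz',
                # the shared template reverses this name with {% url %}
                'list_url_name': 'song_list_x',
            }
            return render(request, 'letter_list.html', context)

    In the shared template, {% url list_url_name starts_with %} then reverses whichever pattern the view supplied ('artist_list_x', 'album_list_x', or 'song_list_x').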

    Read the article
