Search Results

Search found 18563 results on 743 pages for 'url for'.


  • Am I correctly handling duplicate URLs for my homepage?

    - by Rob Goldstein
    I own a job-search site named www.conservationjobboard.com and have a concern about how the domain is viewed by search engines. When the site was first designed, the default page was left as default.php, but the homepage was actually JobBoard.php. To handle this, default.php performed a redirect to JobBoard.php whenever www.conservationjobboard.com/ was requested. The main problem was that the redirect was a temporary (302) redirect, causing search engines to index conservationjobboard.com/ and conservationjobboard.com/JobBoard.php as two separate pages. This has since been corrected via the .htaccess file so that JobBoard.php is now the default document for the root directory, eliminating the need for the redirect. The problem is that search engines still show both URLs in search results (one ending in JobBoard.php and one ending in /). Another potential problem is that some of my early backlinks point to conservationjobboard.com/JobBoard.php while the rest point to conservationjobboard.com. The two outstanding questions are:

    1. Is my domain still being penalized by search engines like Google for having duplicate homepage URLs?
    2. Are all of the backlinks to my homepage now counted together, or is the total number of backlinks split between the two different URLs?

    If you think there are still issues with this setup, I'd welcome advice on what to do differently. Thanks.
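    A common fix for this situation - a sketch, assuming Apache with mod_rewrite available, and that / should be the one canonical homepage URL - is to keep JobBoard.php as the default document and permanently redirect direct requests for it back to the root:

        RewriteEngine On
        # 301-redirect direct requests for /JobBoard.php to the canonical /
        # (the THE_REQUEST check avoids a loop, since / is internally served by JobBoard.php)
        RewriteCond %{THE_REQUEST} ^[A-Z]+\ /JobBoard\.php [NC]
        RewriteRule ^JobBoard\.php$ / [R=301,L]

        # Serve JobBoard.php as the default document for the root
        DirectoryIndex JobBoard.php

    The 301 (permanent) status is what tells search engines to consolidate the two listings and their backlinks onto the / URL.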

    Read the article

  • Will adding top level directories with similar structure to existing directories change the SEO of my site?

    - by Russell Sims
    I've been pointed this way for SEO-related questions, and this one has had me pondering for a little while now. I'm recreating a site's structure. The website's content is generated through several feeds, and unless I want to place each of the 10,000-odd venues into its own category manually, I can't avoid categorising each item by its address. The current structure looks like this: Homepage > region > county > city/town > venue page, and the URL looks like domain/region/county/city/venue/. I'm relatively happy with this structure as it's not too convoluted. However, we also promote deals and group the venues into their respective franchises, which leads to URLs such as domain/groups and domain/deals. My question is: how would the directory structure look with these new additions? Would I have a URL that looks like domain/deals/region/county/city/venue or domain/group/region/county/city/venue, and just put a 301 or a canonical link tag on the page to prevent the duplicate pages competing with each other? Or am I worrying about it needlessly, and should I perhaps link straight from domain/deals to the venue page URL domain/region/county/city/venue? That bothers me a bit, though, as the deals and groups would not appear in the breadcrumbs.
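    If the deal and group paths do end up serving the same venue content, a canonical link tag on the duplicate pages keeps them from competing in the index - a minimal sketch, assuming domain/region/county/city/venue/ is the preferred URL:

        <!-- On domain/deals/region/county/city/venue, point at the canonical venue URL -->
        <link rel="canonical" href="http://domain/region/county/city/venue/" />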

    Read the article

  • Getting a Web Resource Url in non WebForms Applications

    - by Rick Strahl
    WebResources in ASP.NET are a pretty useful feature. WebResources are resources that are embedded into a .NET assembly and can be loaded from the assembly via a special resource URL. WebForms includes a method on the ClientScriptManager (Page.ClientScript) and the ScriptManager object to retrieve URLs to these resources. For example you can do:

        ClientScript.GetWebResourceUrl(typeof(ControlResources), ControlResources.JQUERY_SCRIPT_RESOURCE);

    GetWebResourceUrl requires a type (which is used for the assembly lookup in which to find the resource) and the resource id to look up. GetWebResourceUrl() then returns a nasty old long URL like this:

        WebResource.axd?d=-b6oWzgbpGb8uTaHDrCMv59VSmGhilZP5_T_B8anpGx7X-PmW_1eu1KoHDvox-XHqA1EEb-Tl2YAP3bBeebGN65tv-7-yAimtG4ZnoWH633pExpJor8Qp1aKbk-KQWSoNfRC7rQJHXVP4tC0reYzVw2&t=634533278261362212

    While lately excessive resource usage has been frowned upon, especially by MVC developers who tend to opt for content distributed as files, I still think that WebResources have their place even in non-WebForms applications. Also, if you have existing assemblies that include resources like scripts and common image links, it sure would be nice to access them from non-WebForms pages like MVC views or even plain old Razor Web Pages.

    Where's my Page object, Dude?

    Unfortunately, ASP.NET natively doesn't have a mechanism for retrieving WebResource URLs outside of the WebForms engine. It's a feature that's specifically baked into WebForms and that relies specifically on the Page HttpHandler implementation. Both Page.ClientScript (obviously) and ScriptManager rely on a hosting Page object in order to work, and the various methods off these objects require control instances to be passed. The reason for this is that the script managers can inject scripts and links into Page content (think RegisterXXXX methods), and for that a Page instance is required. However, for many other methods - like GetWebResourceUrl() - that simply return resources or resource links, the Page reference is really irrelevant. While there's a separate ClientScriptManager class, it's marked as sealed and doesn't have any public constructors, so you can't create your own instance (without Reflection). Even if you could, the internal constructor it does have requires a Page reference. No good…

    So, can we get access to a WebResource URL generically without running in a WebForms Page instance? We just have to create a Page instance ourselves and use it internally. There's nothing intrinsic about the use of the Page class in ClientScript, at least for retrieving resources and resource URLs, so it's easy to create an instance of a Page, for example, in a static method. For our needs of retrieving resource URLs, or even actually retrieving script resources, we can use a canned, non-configured Page instance we create on our own. The following works just fine:

        public static string GetWebResourceUrl(Type type, string resource)
        {
            Page page = new Page();
            return page.ClientScript.GetWebResourceUrl(type, resource);
        }

    A slight optimization for this might be to cache the created Page instance. Page tends to be a pretty heavy object to create each time a URL is required, so you might want to cache the instance:

        public class WebUtils
        {
            private static Page CachedPage
            {
                get
                {
                    if (_CachedPage == null)
                        _CachedPage = new Page();
                    return _CachedPage;
                }
            }
            private static Page _CachedPage;

            public static string GetWebResourceUrl(Type type, string resource)
            {
                return CachedPage.ClientScript.GetWebResourceUrl(type, resource);
            }
        }

    You can now use GetWebResourceUrl in a Razor page like this:

        <!DOCTYPE html>
        <html>
        <head>
            <script src="@WebUtils.GetWebResourceUrl(typeof(ControlResources), ControlResources.JQUERY_SCRIPT_RESOURCE)">
            </script>
        </head>
        <body>
            <div class="errordisplay">
                <img src="@WebUtils.GetWebResourceUrl(typeof(ControlResources), ControlResources.WARNING_ICON_RESOURCE)" />
                This is only a Test!
            </div>
        </body>
        </html>

    And voila - there you have WebResources served from a non-Page-based application. WebResources may be on the way out, but legacy apps have them embedded, and for some situations, like fallback scripts and some common image resources, I still like to use them. Being able to use them from non-WebForms applications should have been built into the core ASP.NET platform IMHO, but seeing that it's not, this workaround is easy enough to implement.

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in ASP.NET, MVC.

    Read the article

  • [PHP] Sanitizing strings to make them URL and filename safe?

    - by Xeoncross
    I am trying to come up with a function that does a good job of sanitizing certain strings so that they are safe to use in a URL (like a post slug) and also safe to use as file names. For example, when someone uploads a file, I want to make sure that I remove all dangerous characters from the name. So far I have come up with the following function, which I hope solves this problem and also allows foreign UTF-8 data:

        /**
         * Convert a string to the file/URL safe "slug" form
         *
         * @param string $string the string to clean
         * @param bool $is_filename TRUE will allow additional filename characters
         * @return string
         */
        function sanitize($string = '', $is_filename = FALSE)
        {
            // Replace all weird characters with dashes
            $string = preg_replace('/[^\w\-'. ($is_filename ? '*~_\.' : ''). ']+/u', '-', $string);

            // Only allow one dash separator at a time (and make string lowercase)
            return mb_strtolower(preg_replace('/--+/u', '-', $string), 'UTF-8');
        }

    Does anyone have any tricky sample data I can run against this - or know of a better way to safeguard our apps from bad names?
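    For reference, a quick usage sketch - the expected results below assume the function exactly as written above, and show one edge case worth testing (trailing punctuation leaves a trailing dash):

        <?php
        echo sanitize('Hello World!');            // "hello-world-"  (the "!" becomes a trailing dash)
        echo sanitize('My File (2).jpg', TRUE);   // "my-file-2-.jpg" (dots kept in filename mode)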

    Read the article

  • Android: how to parse a URL string with spaces into a URI object?

    - by Mannaz
    I have a string representing a URL that contains spaces, and I want to convert it to a URI object. If I simply try

        String myString = "http://myhost.com/media/mp3s/9/Agenda of swine - 13. Persecution Ascension_ leave nothing standing.mp3";
        URI myUri = new URI(myString);

    it gives me

        java.net.URISyntaxException: Illegal character in path at index X

    where index X is the position of the first space in the URL string. How can I parse myString into a URI object?
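    One approach (a sketch, not necessarily the only fix): the multi-argument java.net.URI constructors percent-encode illegal characters such as spaces, so you can pass the path separately and let URI do the quoting:

        import java.net.URI;
        import java.net.URISyntaxException;

        public class UriDemo {
            public static void main(String[] args) throws URISyntaxException {
                // The multi-argument constructor quotes illegal characters (e.g. spaces) in the path.
                URI myUri = new URI("http", "myhost.com",
                        "/media/mp3s/9/Agenda of swine - 13. Persecution Ascension_ leave nothing standing.mp3",
                        null);
                System.out.println(myUri); // spaces are emitted as %20
            }
        }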

    Read the article

  • URL with question mark considered as new HTTP request?

    - by Navin Leon
    I am optimizing my web page by implementing caching, so when I want the browser not to take data from its cache, I append a dynamic number as a query value, e.g. google.com?val=823746. But sometimes, when I do want the browser to serve the URL below from cache, it makes a new HTTP request to the server instead of taking the data from cache. Is that because of the question mark in the URL? e.g. http://google.com? Please provide a link to some reference documentation. Thanks in advance. Regards, Navin
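    For the cache-busting half of this, a common pattern - a sketch, using a hypothetical resource URL - is to append a timestamp so each request URL is unique and can never be satisfied from cache:

        // Append a unique value so the browser treats each request as a fresh URL.
        var url = 'http://example.com/data.js?nocache=' + new Date().getTime();
        var script = document.createElement('script');
        script.src = url;
        document.getElementsByTagName('head')[0].appendChild(script);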

    Read the article

  • IIS7: URL Rewrite - can it be used to hide a CDN path?

    - by Wild Thing
    Hi, I am using Rackspace Cloud CDN (Limelight CDN) for my website. The URLs of the CDN are in the format http://cxxxxxx.cdn.cloudfiles.rackspacecloud.com/something.jpg. My domain is mydomain.com. Can I use IIS URL rewriting to serve http://cxxxxxx.cdn.cloudfiles.rackspacecloud.com/something.jpg as http://images.mydomain.com/something.jpg? Or is this impossible unless the CDN setup accepts my CNAME? If it is possible, can you please help me create the URL rewrite rule? (Sorry, I don't know how to use regular expressions.) Thanks, WT
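    Worth noting: IIS URL Rewrite on its own can only rewrite to content the same server can serve - forwarding to a third-party host requires Application Request Routing (ARR) acting as a proxy, and truly masking a CDN normally means the CDN accepting your CNAME. Short of that, a redirect rule at least lets you publish pretty URLs - a sketch, assuming images.mydomain.com resolves to your IIS box:

        <rule name="ImagesToCdn" stopProcessing="true">
          <match url="(.*)" />
          <conditions>
            <add input="{HTTP_HOST}" pattern="^images\.mydomain\.com$" />
          </conditions>
          <action type="Redirect" url="http://cxxxxxx.cdn.cloudfiles.rackspacecloud.com/{R:1}" />
        </rule>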

    Read the article

  • Log Location Url Responses of 301 redirects from IIS

    - by James Lawruk
    Is there a way to log 301 redirects returned by IIS with (1) the request URL and (2) the Location URL of the response? Something like this:

        Url, Location
        /about-us, /about
        /old-page, /new-page

    The IIS logs contain the request URL and the status code (301), but not the Location URL of the response. Ideally there would be an additional field in the IIS log, called Location, that would be populated whenever IIS responded with a 301. In my case the source of the redirect could be ISAPI Rewrite rules, ASP.NET applications, ColdFusion applications, or IIS itself. Perhaps there is a way to log IIS response data? Thanks for your help.
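    One possible approach - a sketch, assuming the IIS7 integrated pipeline so a managed module sees every response (redirects produced at the native/ISAPI level may or may not pass through it), and using a hypothetical log path:

        using System;
        using System.IO;
        using System.Web;

        public class RedirectLogModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.EndRequest += (sender, e) =>
                {
                    HttpContext ctx = ((HttpApplication)sender).Context;
                    if (ctx.Response.StatusCode == 301)
                    {
                        // Reading Response.Headers requires the IIS7 integrated pipeline.
                        string location = ctx.Response.Headers["Location"];
                        File.AppendAllText(@"C:\logs\redirects.log",   // hypothetical path
                            ctx.Request.RawUrl + ", " + location + Environment.NewLine);
                    }
                };
            }

            public void Dispose() { }
        }

    The module would be registered under <system.webServer>/<modules> in web.config.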

    Read the article

  • How to lock Firefox tab to domain or URL pattern

    - by f3lix
    I know of Firefox extensions that allow protecting tabs (they cannot be closed) and locking tabs (the URL cannot change). What I need is an extension that locks a tab to a certain domain or URL pattern. For example, I want to lock a tab to the domain example.com. As long as I follow links within this domain, the tab should behave normally (unlocked), but if I follow a link to another domain, the link should be opened in a new tab, leaving the locked tab open on a URL within the locked domain. Even better would be the ability to lock a tab to a URL pattern: if a URL matches the pattern, it is opened in the current tab; otherwise, it is opened in a new tab. Do you know of anything (preferably an extension for FF 8.0) that provides this kind of functionality?

    Read the article

  • IIS Url Rewrite Capturing query string and escaping characters

    - by LiamB
    We are just adding some redirects from an old site to a new one in IIS7 using the URL Rewrite module. The old site's URLs are all based on the query string. We'd usually do explicit rewrites like the one below, but this won't work in the case of the query string:

        <rule name="Redirect-1" patternSyntax="Wildcard" stopProcessing="true">
          <match url="index.php?option=m_content&view=article&id=15&Itemid=16" />
          <action type="Redirect" url="http://newurl/some-page" />
        </rule>

    So, using the two URLs above, how can we do a 301 redirect?
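    A sketch of the form that does work (assuming the IIS URL Rewrite 2.0 module): <match url> never sees the query string, so match the path there, test the query string in a condition, and turn off query-string append so the redirect target stays clean:

        <rule name="Redirect-1" stopProcessing="true">
          <match url="^index\.php$" />
          <conditions>
            <!-- & must be escaped as &amp; inside web.config -->
            <add input="{QUERY_STRING}" pattern="^option=m_content&amp;view=article&amp;id=15&amp;Itemid=16$" />
          </conditions>
          <action type="Redirect" url="http://newurl/some-page" appendQueryString="false" redirectType="Permanent" />
        </rule>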

    Read the article

  • First of all, nice work - how to redirect the URLs of old, modified directories?

    - by kath
    I really appreciate your hard work. After searching a lot of the web I cannot find the answer, so if you get time, please try to help. The site is a website for classified ads. I edited a category name, but the old URL still shows up in the Google index and even in the browser. Is there any way to bypass it or redirect it to the new one? I tried .htaccess but can't get the result. Here are both URLs. Before modification:

        http://adsbuz.com/vehicles-cars/other-vehicles/selling-my-2010-toyota-sequioa-19500-9585.htm

    After editing the category name:

        http://adsbuz.com/vehicles-cars-for-sale/other-vehicles/selling-my-2010-toyota-sequioa-19500-9585.htm

    (The category name was "vehicles-cars" before editing and "vehicles-cars-for-sale" after.) As you can see, both URLs open, which is not good for SEO. Also, is there a way that when someone opens a wrong URL, the page opens only under the correct URL automatically, just like on your site? Consider me new to this market; I'd appreciate a little help here. (The website is in PHP.) Thanks, kath. I really appreciate your quick response.
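    A minimal .htaccess sketch for this (assuming Apache with mod_alias enabled) that permanently redirects anything under the old category path to the renamed one:

        # 301-redirect the renamed category, preserving the rest of the path
        RedirectMatch 301 ^/vehicles-cars/(.*)$ /vehicles-cars-for-sale/$1

    The 301 both sends visitors to the correct URL automatically and tells Google to drop the old URL from its index in favour of the new one.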

    Read the article

  • Website's particular URL suddenly disappeared from Google search results

    - by Ragavendran Ramesh
    I have a website in which a particular page URL was indexed in the first 10 Google search results, but it suddenly disappeared - now the page is not even in the first 100 results. What would be the reason? I have a feeling the page has been spammed by our competitors. Is it possible to avoid that, or to find out whether the page has been spammed? Is it possible to tell whether a particular page on a website is considered spammy or malicious?

    Read the article

  • Re-indexing website with clean URLs

    - by artsi
    So I have a website with URL's like this: http://www.domain.com/profile.php?id=151 I've now cleaned them up with mod_rewrite into this: http://www.domain.com/profile/firstname-lastname/151 I've fetched and re-indexed my website after the change. What is the best way to make the old dirty ones disappear from search results and keep the clean ones? Is blocking profile.php with robots.txt enough?
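    For reference, the robots.txt block would be just this (a minimal sketch):

        User-agent: *
        Disallow: /profile.php

    Note that Disallow only stops crawling - already-indexed URLs can linger in results. A 301 redirect from profile.php?id=N to the corresponding clean URL is what actually transfers the old listings (and their link value) to the new ones.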

    Read the article

  • Can't get Rewrite rule to keep original URL

    - by user38100
    I have these rewrites, but I would like the URL to stay the same as what was typed originally. I thought removing the [R] flags would stop the redirect, but it hasn't:

        RewriteCond %{HTTP_HOST} ^examplea\.example\.com$ [NC]
        RewriteRule (.*) http://examplea.example.com:32400/web [L]

        RewriteCond %{HTTP_HOST} ^exampleb\.example\.com$ [NC]
        RewriteRule (.*) http://exampleb.example.com:9091 [L]

    Edit: would this work better?

        RewriteCond %{HTTP_HOST} ^hello.example.com$
        RewriteRule ^(/)?$ welcome [L]
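    The likely culprit: when a RewriteRule target is a full URL whose scheme, host, or port differs from the current request, mod_rewrite treats it as an external redirect even without [R]. Keeping the browser URL unchanged means proxying the request instead - a sketch, assuming mod_proxy and mod_proxy_http are loaded:

        RewriteCond %{HTTP_HOST} ^examplea\.example\.com$ [NC]
        # [P] pushes the request through mod_proxy, so the address bar stays as typed
        RewriteRule (.*) http://examplea.example.com:32400/web [P,L]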

    Read the article

  • Deleted files still accessible without www in url

    - by phlegma
    I have deleted all files and all hidden files from my server; there is nothing left but log files, which cannot be deleted. Ironically, the files are still accessible even though nothing is there. I have cleared the cache and checked with multiple browsers and computers/devices. The files show only when I exclude "www" from the URL: http://sarastringfellow.com/assets/photo/c.jpg works, while http://www.sarastringfellow.com/assets/photo/c.jpg does not. What does this mean?

    Read the article

  • 6451B URL List...

    - by Da_Genester
    In addition to the info from the 6451A URL List, included below is info for the newer version of the class, 6451B.

    Helpful Links:
    SCCM Tools Aggregation: http://tinyurl.com/SCCM07ToolsLinks

    Module 5: Querying and Reporting Data
    64-bit OS and Office Web Component issues - http://tinyurl.com/SCCM07OWC64bit
    SCCM and SSRS integration for a Reporting Services Point - http://tinyurl.com/SCCM07SSRS

    Read the article

  • cflock does not throw a timeout for the same URL called in the same browser

    - by Pritesh Patel
    I am trying to lock a block on page test.cfm; below is the code written on the page:

        <cfscript>
            writeOutput("Before lock at #now()#");
            lock name="threadlock" timeout="3" type="exclusive" {
                writeOutput("<br/>started at #now()#");
                thread action="sleep" duration="10000";
                writeOutput("<br/>ended at #now()#");
            }
            writeOutput("<br/>After lock at #now()#");
        </cfscript>

    Assume the URL for the page is http://localhost.local/test.cfm and I run it in two different tabs of the same browser. I was expecting one of the requests to throw a timeout error after 3 seconds, since the other request holds the lock for at least 10 seconds due to the thread sleep. Surprisingly, I do not get any timeout error; instead, the second call runs 10 seconds later, after the first call finishes execution. But if I append a URL parameter (e.g. http://localhost.local/test.cfm?q=1), it throws the error. Also, if I call the same URL in a different browser, one of the calls throws the timeout. Is the lock based on session and URL?

    Update: here is the output for two cases.

    Case 1:
    TAB1 Url: http://localhost.local/test/test.cfm

        Before lock at {ts '2013-10-18 09:21:35'}
        started at {ts '2013-10-18 09:21:35'}
        ended at {ts '2013-10-18 09:21:45'}
        After lock at {ts '2013-10-18 09:21:45'}

    TAB2 Url: http://localhost.local/test/test.cfm

        Before lock at {ts '2013-10-18 09:21:45'}
        started at {ts '2013-10-18 09:21:45'}
        ended at {ts '2013-10-18 09:21:55'}
        After lock at {ts '2013-10-18 09:21:55'}

    Case 2:
    TAB1 Url: http://localhost.local/test/test.cfm

        Before lock at {ts '2013-10-18 09:27:18'}
        started at {ts '2013-10-18 09:27:18'}
        ended at {ts '2013-10-18 09:27:28'}
        After lock at {ts '2013-10-18 09:27:28'}

    TAB2 Url: http://localhost.local/test/test.cfm? (added ? at the end)

        Before lock at {ts '2013-10-18 09:27:20'}
        A timeout occurred while attempting to lock threadlock.
        The error occurred in C:/inetpub/wwwroot/test/test.cfm: line 13
        11 :
        12 : <cfoutput>Before lock at #now()#</cfoutput>
        13 : <cflock name="threadlock" timeout="3" type="exclusive">
        14 :     <cfoutput><br/>started at #now()#</cfoutput>
        15 :     <cfthread action="sleep" duration="10000"/>
        ...

    The result for case 2 is as expected. For case 1, the strange thing I just noticed is that the TAB2 output "Before lock at {ts '2013-10-18 09:21:45'}" indicates that the whole request started 10 seconds later (i.e. after the complete execution of the first tab's request), even though I fired it in the second tab just 2 seconds after the first.

    Read the article

  • How to detect an invalid image URL with Java?

    - by Cataclysm
    I have a method to download an image from a URL, like the one below:

        public static byte[] downloadImageFromURL(final String strUrl) {
            InputStream in;
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            try {
                URL url = new URL(strUrl);
                in = new BufferedInputStream(url.openStream());
                byte[] buf = new byte[2048];
                int n = 0;
                while (-1 != (n = in.read(buf))) {
                    out.write(buf, 0, n);
                }
                out.close();
                in.close();
            } catch (IOException e) {
                return null;
            }
            return out.toByteArray();
        }

    I have an image URL and it is valid, for example:

        https://encrypted-tbn1.gstatic.com/images?q=tbn:ANd9GcTxfYM-hnD-Z80tgWdIgQKchKe-MXVUfTpCw1R5KkfJlbRbgr3Zcg

    My problem is that I don't want to download anything if the image doesn't really exist, like:

        https://encrypted-tbn1.gstatic.com/images?q=tbn:ANd9GcTxfYM-hnD-Z80tgWdIgQKchKe-MXVUfTpCw1R5KkfJlbRbgr3Zcgaaaaabbbbdddddddddddddddddddddddddddd

    This image shouldn't be downloaded by my method. So, how can I know that the given image URL doesn't really exist? I don't want to just validate the URL string (I think that isn't my solution), so I googled for it and found these articles: "How to check if a URL exists or returns 404 with Java?" and "Java check if file exists on remote server using its url". But con.getResponseCode() always returns status code 200, which means my method would also download invalid image URLs. So I printed my buffered stream:

        System.out.println(in.read(buf));

    The invalid image URL produces "43", so I added these lines to my method:

        if (in.read(buf) == 43) {
            return null;
        }

    It works, but I don't think that check will always hold. Is there a better way? Am I right about that? I would really appreciate any suggestions. This problem is wrecking my head. Thanks for reading my question.
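    One more robust check - a sketch, not a guaranteed fix for every host: open the connection explicitly and inspect the Content-Type header before reading the body. A server that answers an invalid ID with an error page and status 200 will typically not report an image/* type:

        import java.io.IOException;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class ImageUrlCheck {
            // Returns true only when the server reports an image/* content type.
            public static boolean looksLikeImage(String strUrl) {
                try {
                    HttpURLConnection con = (HttpURLConnection) new URL(strUrl).openConnection();
                    con.setRequestMethod("HEAD"); // fetch headers only, skip the body
                    String contentType = con.getContentType();
                    return con.getResponseCode() == HttpURLConnection.HTTP_OK
                            && contentType != null
                            && contentType.startsWith("image/");
                } catch (IOException e) {
                    return false;
                }
            }
        }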

    Read the article

  • SSRS for Sharepoint, Images in a report from a Sharepoint List URL?

    - by James Polhemus
    Greetings Sabios, I have several reports I run successfully where the data comes from a SharePoint list in the form of an XML dataset. I am, however, having trouble with one. I have a report that pulls an image file into the main body of the report. This data too comes from a SharePoint list as an XML dataset, which sends me the URL to the jpeg or bmp or gif, whatever the case may be. I can successfully pull this off in my own Visual Studio IDE, and my local report server will render it as well. It won't run on my SharePoint report server (my MOSS runs over https while my SharePoint report server is http - might this matter?). When I upload it to SharePoint and run it through the SharePoint report server, I get back everything in the report header and footer (dataset text and embedded images) but just a big red X where the main image should be. I have done everything the boards say: A. I made sure the Unattended Execution Account is running on the report server. B. I have ensured the URL comes back in clean format (otherwise the images wouldn't render locally either, and they do). The report logs throw this exception:

        ERROR: Throwing Microsoft.ReportingServices.Diagnostics.Utilities.ContainerTypeNotSupportedException: The target location you specified is not supported by the report server. A report definition (.rdl), report model (.smdl), resource, or shared data source (.rsds) file must be located within a library or a folder within it.

    Any takers? Even my SharePoint administrator can't help me :) James

    Read the article

  • Change Edit Control Block (ECB) Link URL in SharePoint

    - by dirq
    Is there a way to dynamically change the hyperlink associated with an ECB menu in WSS 3.0? For instance, I have a list with 2 fields. One field is hidden and is a link; the other is the title field, which has the ECB menu. The title field currently links to the item's view page, but we want it to link to the link field's URL. Is that possible?

    UPDATE - 5/29/09 9AM: I have this so far (see this TechNet post):

        <script type="text/javascript">
            var url = 'GoTo.aspx?ListTitle=' + ctx.ListTitle;
            url += '&ListName=' + ctx.listName;
            url += '&ListTemplate=' + ctx.listTemplate;
            url += '&listBaseType=' + ctx.listBaseType;
            url += '&view=' + ctx.view;
            url += '&';
            var a = document.getElementsByTagName('a');
            for (i = 0; i <= a.length - 1; i++) {
                a[i].href = a[i].href.replace('DispForm.aspx?', url);
            }
        </script>

    This gives me a link like so (formatted so it's easier to see):

        GoTo.aspx
        ?ListTitle=MyList
        &ListName={082BB11C-1941-4906-AAE9-5F2EBFBF052B}
        &ListTemplate=100
        &listBaseType=0
        &view={9ABE2B07-2B47-4390-9969-258F00E0812C}
        &ID=1

    My issue now is that the row in the grid gives each item the ID property above, but if I change the view or do any filtering, you can see that the ID is really just the row number. Can I get the actual item's GUID here? If I can get the item's ID, I can send it with the list ID to an application page that will get the right URL from a field in the list and forward the user on to the right site.

    Read the article

  • Should my URLs be lowercase?

    - by Rowan Freeman
    According to this blog ("Understanding SEO Friendly URL Syntax Practices") I should change http://example.com/Hello-Dolly to http://example.com/hello-dolly. The reasons given are: URLs, in general, are case-sensitive, and it will simplify any case-sensitive SEO and analytics reports. According to this GIF that I found in Wikipedia's article on URL normalization, I should convert my URLs from any uppercase to all lowercase. However, I use ASP.NET MVC4, and by default my URLs are structured like this (CamelCase): http://www.domain.com/Controller/Action/Parameter, e.g. http://www.greatsite.com/Categories/List/Bicycles. I've skimmed through RFC 1738 but didn't see any definitive answer. Should I go out of my way to force the framework to use lowercase? Why did Microsoft choose to design the framework like this if everybody is telling me to use lowercase?
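    If you do decide to force it, a minimal sketch - assuming you're on .NET 4.5, where System.Web.Routing gained a LowercaseUrls switch (incoming route matching stays case-insensitive; only generated URLs change):

        using System.Web.Mvc;
        using System.Web.Routing;

        public class RouteConfig
        {
            public static void RegisterRoutes(RouteCollection routes)
            {
                // .NET 4.5+: URLs generated by helpers such as Html.ActionLink come out lowercase.
                routes.LowercaseUrls = true;

                routes.MapRoute(
                    name: "Default",
                    url: "{controller}/{action}/{id}",
                    defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
                );
            }
        }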

    Read the article

  • Understanding Ajax crawling of search site

    - by vacuum
    I have a couple of questions about AJAX crawling of a site which is a kind of search engine itself. The base article explains the mechanism for making an AJAX application crawlable. All this stuff with HTML snapshots is clear and easy to implement, but I can't understand where the Google bot will get "the pretty AJAX URL" (i.e. www.example.com/ajax.html#key=value) to work with. The first thing that comes to mind is breadcrumbs: in the sitemap we can specify pages with breadcrumbs on them, so the bot will go to these pages and get HTML snapshots from there. But I'm sure there are other ways to give the bot this "pretty AJAX URL". In our case, we have a simple search site where the user enters a keyword, presses "Find", JS executes an AJAX request, receives a JSON response and fills the page with results (without any refresh, of course). In this case, how do I make the Google bot crawl all the results in addition to the sitemap? Is there some example of the solution described in the article above?
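    For reference, the scheme in that article hinges on the #! ("hashbang") convention: the crawler rewrites www.example.com/ajax.html#!key=value into a request carrying _escaped_fragment_=key=value, and the server answers that request with a static HTML snapshot. A server-side sketch, assuming PHP (render_snapshot_for is a hypothetical placeholder for whatever produces the snapshot):

        <?php
        // Serve a pre-rendered HTML snapshot when the crawler asks for one.
        // The crawler turns  /ajax.html#!key=value  into  /ajax.html?_escaped_fragment_=key=value
        if (isset($_GET['_escaped_fragment_'])) {
            $state = $_GET['_escaped_fragment_'];   // e.g. "key=value"
            echo render_snapshot_for($state);       // hypothetical server-side renderer
            exit;
        }
        // ...otherwise serve the normal JS-driven page...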

    Read the article
