Search Results

Search found 20679 results on 828 pages for 'relative url'.

Page 46 of 828

  • jQuery - Finding the element index relative to its container

    - by Hary
    Here's my HTML structure:

        <div id="main">
          <div id="inner-1"> <img /> <img /> <img /> </div>
          <div id="inner-2"> <img /> <img class="selected" /> <img /> </div>
          <div id="inner-3"> <img /> <img /> <img /> </div>
        </div>

    What I'm trying to do is get the index of the img.selected element relative to the #main div. So in this example the index should be 4 (assuming a 0-based index), not 1. My usual way of getting an index is $element.prevAll().length, but that obviously returns the index relative to the #inner-2 div. I've tried $('img.selected').prevAll('#main').length, but that returns 0.
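
    A minimal sketch of one way to do this with jQuery's .index(), assuming the markup above: build the full collection of images under #main and ask where the selected one sits in it.

        // 0-based position of img.selected among all <img> elements inside #main
        var idx = $('#main img').index($('img.selected'));
        // with the markup above idx is 4; it is -1 if nothing matches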

    Read the article

  • convert list of relative widths to pixel widths

    - by mkoryak
    This is a code review question more than anything. I have the following problem: given a list of relative widths (no unit whatsoever, just all relative to each other), generate a list of pixel widths so that the pixel widths have the same proportions as the original list. Input: a list of proportions and a total pixel width. Output: a list of pixel widths, where each width is an int and the sum of the widths equals the total width. Code:

        var sizes = "1,2,3,5,7,10".split(","); // initial proportions
        var totalWidth = 1024;                 // total pixel width
        var widths = [];
        var sizesTotal = 0;
        for (var i = 0; i < sizes.length; i++) {
            sizesTotal += parseInt(sizes[i], 10);
        }
        if (sizesTotal != 100) {
            var totalLeft = 100;
            for (var i = 0; i < sizes.length; i++) {
                sizes[i] = Math.floor(parseInt(sizes[i], 10) / sizesTotal * 100);
                totalLeft -= sizes[i];
            }
            sizes[sizes.length - 1] = totalLeft;
        }
        totalLeft = totalWidth;
        for (var i = 0; i < sizes.length; i++) {
            widths[i] = Math.floor(totalWidth / 100 * sizes[i]);
            totalLeft -= widths[i];
        }
        widths[sizes.length - 1] = totalLeft;
        // return widths, which contains a list of int pixel sizes
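
    For comparison, a minimal sketch of a tighter approach (the function name is illustrative, not from the original post): scale each proportion straight to pixels, floor it, and hand any rounding remainder to the last entry so the widths always sum to the total.

        // sketch: proportions is an array of positive numbers, totalWidth an int
        function toPixelWidths(proportions, totalWidth) {
            var sum = proportions.reduce(function (a, b) { return a + b; }, 0);
            var widths = [];
            var used = 0;
            for (var i = 0; i < proportions.length - 1; i++) {
                widths[i] = Math.floor(proportions[i] / sum * totalWidth);
                used += widths[i];
            }
            widths[proportions.length - 1] = totalWidth - used; // absorb rounding error
            return widths;
        }

        // toPixelWidths([1, 2, 3, 5, 7, 10], 1024) -> six ints that sum to 1024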

    Read the article

  • Replace relative URLs with absolute ones

    - by Rocky Singh
    I have the HTML source of a page as a string:

        <html>
        <head>
          <link rel="stylesheet" type="text/css" href="/css/all.css" />
        </head>
        <body>
          <a href="/test.aspx">Test</a>
          <a href="http://mysite.com">Test</a>
          <img src="/images/test.jpg"/>
          <img src="http://mysite.com/images/test.jpg"/>
        </body>
        </html>

    I want to convert all the relative paths to absolute ones, so the output becomes:

        <html>
        <head>
          <link rel="stylesheet" type="text/css" href="http://mysite.com/css/all.css" />
        </head>
        <body>
          <a href="http://mysite.com/test.aspx">Test</a>
          <a href="http://mysite.com">Test</a>
          <img src="http://mysite.com/images/test.jpg"/>
          <img src="http://mysite.com/images/test.jpg"/>
        </body>
        </html>

    Note: only the relative paths in the string should be converted to absolute ones; the URLs that are already absolute are fine as they are and should not be touched. Can this be done with a regex or by other means?
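
    A hedged sketch of a regex-based pass in C# (the base URL and variable names are assumptions; an HTML parser such as HtmlAgilityPack is more robust for messy markup): only href/src values that start with "/" are rewritten, so absolute URLs are left alone.

        using System.Text.RegularExpressions;

        string baseUrl = "http://mysite.com";
        // prepend the base URL to root-relative href="..." and src="..." values
        string result = Regex.Replace(html, "(href|src)=\"/", "$1=\"" + baseUrl + "/");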

    Read the article

  • XPath Query - Select relative top level nodes

    - by John
    I'm trying to iterate over some elements in an XML document that may be nested, but as I iterate over them I may be removing some from the tree. I'm thinking the best way is to do this recursively, so I'm trying to come up with an XPath query that will select all the top-level foo nodes relative to the current node. //foo[not(ancestor::foo)] works great at the document level, but I'm trying to figure out how to do this from a relative query.

        <foo id="1">
          <foo id="2" />
          <foo id="3">
            <foo id="4">
              <bar>
                <foo id="5">
                  <foo id="6" />
                </foo>
                <foo id="7" />
              </bar>
            </foo>
          </foo>
        </foo>

    If the current node is foo#3, I only want to select foo#4. When the current node is foo#4, I only want to select foo#5 and foo#7. I think I'm trying to select any descendant foo nodes of the current node that have no ancestor foo nodes between themselves and the current node. My conundrum is that once we're already inside a foo node, not(ancestor::foo) doesn't help.
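
    A hedged sketch of one expression that is often used for this, assuming it is evaluated somewhere XSLT's current() function is available (it is not part of plain XPath 1.0): keep descendant foo elements whose count of foo ancestors equals the current node's own count of foo ancestors-or-self, i.e. those with no intervening foo.

        descendant::foo[count(ancestor::foo) = count(current()/ancestor-or-self::foo)]

    With foo#3 as the current node this selects only foo#4; with foo#4 it selects foo#5 and foo#7. Outside XSLT the same comparison can be made in code by counting foo ancestors relative to the context node.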

    Read the article

  • A particular URL on my website suddenly disappeared from Google search results

    - by Ragavendran Ramesh
    I have a website, and a particular page URL on it was indexed by Google, appearing in the first 10 search results. Suddenly it disappeared; now the page is not even in the first 100 results. What could be the reason? I have a feeling the page has been spammed by our competitors. Is it possible to prevent that, or to find out whether the page has been spammed? Is there a way to tell whether a particular page on a website is considered spam or malicious?

    Read the article

  • Re-indexing website with clean URLs

    - by artsi
    So I have a website with URLs like this: http://www.domain.com/profile.php?id=151. I've now cleaned them up with mod_rewrite into this: http://www.domain.com/profile/firstname-lastname/151. I've fetched and re-indexed my website after the change. What is the best way to make the old dirty URLs disappear from search results and keep the clean ones? Is blocking profile.php with robots.txt enough?
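
    A hedged sketch of the approach usually recommended instead (PHP; the lookup helper is hypothetical): send a 301 from the old profile.php address to the clean URL so search engines transfer the old listing, since a URL that is merely blocked in robots.txt can stay in the index.

        // at the top of profile.php, before any output
        // note: with an internal mod_rewrite rule, REQUEST_URI still holds the clean
        // URL the visitor asked for, so this only fires for the old-style address
        if (isset($_GET['id']) && strpos($_SERVER['REQUEST_URI'], '/profile.php') === 0) {
            $user = load_user((int)$_GET['id']);          // hypothetical lookup
            $slug = $user['firstname'] . '-' . $user['lastname'];
            header('HTTP/1.1 301 Moved Permanently');
            header('Location: /profile/' . $slug . '/' . (int)$_GET['id']);
            exit;
        }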

    Read the article

  • Deleted files still accessible without www in url

    - by phlegma
    I have deleted all files, including hidden files, from my server; there is nothing left but log files, which cannot be deleted. Ironically, the files are still accessible even though nothing is there. I've cleared the cache and checked in multiple browsers and from multiple computers/devices. The files show up when I leave "www" out of the URL:

        http://sarastringfellow.com/assets/photo/c.jpg
        http://www.sarastringfellow.com/assets/photo/c.jpg

    What does this mean?

    Read the article

  • Can't get Rewrite rule to keep original URL

    - by user38100
    I have these rewrites, but I would like the URL to stay the same as what was typed originally. I thought removing the [R] flags would stop the redirect, but it hasn't:

        RewriteCond %{HTTP_HOST} ^examplea\.example\.com$ [NC]
        RewriteRule (.*) http://examplea.example.com:32400/web [L]

        RewriteCond %{HTTP_HOST} ^exampleb\.example\.com$ [NC]
        RewriteRule (.*) http://exampleb.example.com:9091 [L]

    Edit: would this work better?

        RewriteCond %{HTTP_HOST} ^hello.example.com$
        RewriteRule ^(/)?$ welcome [L]
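
    A hedged note with a sketch (assuming Apache with mod_proxy available): a rewrite target that carries a scheme, host, or non-matching port is always sent back to the browser as an external redirect, so the address bar changes whether or not [R] is present. To keep the typed URL visible, the request has to be proxied to the backend instead, roughly:

        RewriteCond %{HTTP_HOST} ^examplea\.example\.com$ [NC]
        RewriteRule ^(.*)$ http://localhost:32400/web/$1 [P,L]

    The [P] flag hands the request to mod_proxy, so the response is served under the original hostname; the localhost target is an assumption about where the backend listens.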

    Read the article

  • 6451B URL List...

    - by Da_Genester
    In addition to the info from the 6451A URL List, included below is info for the newer version of the class, 6451B.

    Helpful Links:
      - SCCM Tools Aggregation: http://tinyurl.com/SCCM07ToolsLinks

    Module 5: Querying and Reporting Data
      - 64-bit OS and Office Web Component issues - http://tinyurl.com/SCCM07OWC64bit
      - SCCM and SSRS integration for a Reporting Services Point - http://tinyurl.com/SCCM07SSRS

    Read the article

  • List symlinks in specific relative directories

    - by Clinton Blackmore
    I have a server that shares out user home folders over the network. Each user has a Cache folder. Sometimes a symlink is used to redirect this folder to the hard drive of whichever machine they are using (and sometimes that doesn't work and they have a broken symlink, which is a matter for another day). I'm trying to find out which users have symlinks and which don't. Within the shared folder, you get to the Cache folder by substituting folders like so: $GRADE/$USERNAME/Library/Caches. Right now I'm searching to see which users have symlinks and which do not. I've come up with:

        cd /path/to/shared/home/folders
        sudo find . -name "Caches" -exec ls -ld {} \;

    and get results like this:

        lrwxr-xr-x@  1 name0 ES_Students  27 Jan 18 11:05 ./CES_Grade_03/name0/Library/Caches -> /tmp/name0/Library/Caches
        drwx------  11 name1 ES_Students 374 Dec  8 15:44 ./CES_Grade_03/name1/Library/Caches
        lrwxr-xr-x@  1 name2 ES_Students  27 Feb 23 14:27 ./CES_Grade_03/name2/Library/Caches -> /tmp/name2/Library/Caches
        drwx------  17 name3 ES_Students 578 Jan 25 11:13 ./CES_Grade_03/name3/Library/Caches
        drwx------  12 name4 ES_Students 408 Mar 22 13:09 ./CES_Grade_03/name4/Library/Caches

    but it nags at me that there must be a better way. Yes, it is good enough, and a one-off task, but I want to know how to do it right! Surely I should be able to do something like:

        cd /path/to/shared/home/folders
        sudo ls -ld **/**/Library/Caches

    I'm afraid I don't know the proper syntax, or whether there is a recursive folder-matching wildcard in bash, and my google-fu failed me. So, how do I properly formulate the search?
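
    A hedged sketch of two common ways to tighten this (paths as in the question): a plain glob covers the fixed $GRADE/$USERNAME depth, and find can split symlinked Caches folders from real directories directly.

        cd /path/to/shared/home/folders

        # a plain two-level glob is enough for $GRADE/$USERNAME
        sudo ls -ld */*/Library/Caches

        # or let find separate them: users whose Caches is a symlink...
        sudo find . -path '*/Library/Caches' -type l
        # ...and users still using a real directory
        sudo find . -path '*/Library/Caches' -type d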

    Read the article

  • Using relative path in PHP.INI doc_root -- getting No Input File Specified -- running as fastCGI

    - by J.R.
    I'm attempting to run php-cgi under LightTPD on Windows with my doc_root set to one directory up (doc_root = "../Docs"); however, I get "No input file specified". I've set cgi.fix_pathinfo and tried all the other tricks I could find, with no success. If I set doc_root to an absolute path, it works fine. How can I make this work? If any additional information is required, I'll gladly provide it. Thanks in advance.

    Read the article

  • cflock does not throw timeout for same URL called in same browser

    - by Pritesh Patel
    I am trying out a lock block on page test.cfm; below is the code written on the page:

        <cfscript>
            writeOutput("Before lock at #now()#");
            lock name="threadlock" timeout="3" type="exclusive" {
                writeOutput("<br/>started at #now()#");
                thread action="sleep" duration="10000";
                writeOutput("<br/>ended at #now()#");
            }
            writeOutput("<br/>After lock at #now()#");
        </cfscript>

    Assuming the URL for the page is http://localhost.local/test.cfm and I run it in two different tabs of the same browser, I was expecting one of the requests to throw a timeout error after 3 seconds, since the other request holds the lock for at least 10 seconds due to the thread sleep. Surprisingly, I do not get any timeout error; instead the second page call runs after 10 seconds, once the first call finishes execution. But if I append a URL parameter (e.g. http://localhost.local/test.cfm?q=1), it does throw the error. Also, if I call the same URL in a different browser, one of the calls throws the timeout. Is the lock based on session and URL?

    Update - here is the output for two different cases:

    Case 1:

        TAB1 URL: http://localhost.local/test/test.cfm
        Before lock at {ts '2013-10-18 09:21:35'}
        started at {ts '2013-10-18 09:21:35'}
        ended at {ts '2013-10-18 09:21:45'}
        After lock at {ts '2013-10-18 09:21:45'}

        TAB2 URL: http://localhost.local/test/test.cfm
        Before lock at {ts '2013-10-18 09:21:45'}
        started at {ts '2013-10-18 09:21:45'}
        ended at {ts '2013-10-18 09:21:55'}
        After lock at {ts '2013-10-18 09:21:55'}

    Case 2:

        TAB1 URL: http://localhost.local/test/test.cfm
        Before lock at {ts '2013-10-18 09:27:18'}
        started at {ts '2013-10-18 09:27:18'}
        ended at {ts '2013-10-18 09:27:28'}
        After lock at {ts '2013-10-18 09:27:28'}

        TAB2 URL: http://localhost.local/test/test.cfm? (added ? at the end)
        Before lock at {ts '2013-10-18 09:27:20'}
        A timeout occurred while attempting to lock threadlock.
        The error occurred in C:/inetpub/wwwroot/test/test.cfm: line 13
        11 :
        12 : <cfoutput>Before lock at #now()#</cfoutput>
        13 : <cflock name="threadlock" timeout="3" type="exclusive">
        14 : <cfoutput><br/>started at #now()#</cfoutput>
        15 : <cfthread action="sleep" duration="10000"/>
        ...

    The result for case 2 is as expected. For case 1, the strange thing I just noticed is that tab 2's output "Before lock at {ts '2013-10-18 09:21:45'}" indicates the whole request started after 10 seconds (i.e. after the complete execution of the first tab's request), even though I fired it in the second tab just 2 seconds after the first.

    Read the article

  • How to detect an invalid image URL with Java?

    - by Cataclysm
    I have a method to download an image from a URL, like this:

        public static byte[] downloadImageFromURL(final String strUrl) {
            InputStream in;
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            try {
                URL url = new URL(strUrl);
                in = new BufferedInputStream(url.openStream());
                byte[] buf = new byte[2048];
                int n = 0;
                while (-1 != (n = in.read(buf))) {
                    out.write(buf, 0, n);
                }
                out.close();
                in.close();
            } catch (IOException e) {
                return null;
            }
            return out.toByteArray();
        }

    I have an image URL and it is valid, for example:

        https://encrypted-tbn1.gstatic.com/images?q=tbn:ANd9GcTxfYM-hnD-Z80tgWdIgQKchKe-MXVUfTpCw1R5KkfJlbRbgr3Zcg

    My problem is that I don't want to download anything if the image does not really exist, like:

        https://encrypted-tbn1.gstatic.com/images?q=tbn:ANd9GcTxfYM-hnD-Z80tgWdIgQKchKe-MXVUfTpCw1R5KkfJlbRbgr3Zcgaaaaabbbbdddddddddddddddddddddddddddd

    This image shouldn't be downloaded by my method. So, how can I know that the given image URL does not really exist? I don't want to just validate the URL string itself (I don't think that is the solution), so I googled and found these:

    How to check if a URL exists or returns 404 with Java? and Java check if file exists on remote server using its url

    But con.getResponseCode() always returns status code 200 here, which means my method would also download invalid image URLs. So I printed out my buffered stream:

        System.out.println(in.read(buf));

    An invalid image URL produces "43". So I added these lines to my method:

        if (in.read(buf) == 43) {
            return null;
        }

    It works, but I don't think that check will always hold. Is there a better way to do this? Am I right? I would really appreciate any suggestions. This problem is wrecking my head. Thanks for reading my question.
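
    A hedged sketch of a more reliable check (assuming the bytes have already been fetched, as in the method above): let Java's image decoder decide whether the payload is actually an image rather than comparing raw byte values.

        import javax.imageio.ImageIO;
        import java.awt.image.BufferedImage;
        import java.io.ByteArrayInputStream;
        import java.io.IOException;

        // returns true only if the downloaded bytes decode as an image
        public static boolean isImage(byte[] data) {
            if (data == null || data.length == 0) {
                return false;
            }
            try {
                BufferedImage img = ImageIO.read(new ByteArrayInputStream(data));
                return img != null;   // ImageIO.read returns null for non-image data
            } catch (IOException e) {
                return false;
            }
        }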

    Read the article

  • SSRS for Sharepoint, Images in a report from a Sharepoint List URL?

    - by James Polhemus
    Greetings Sabios, I have several reports I run successfully where the data comes from a SharePoint list in the form of an XML dataset. I am, however, having trouble with one. I have a report that pulls an image file onto the main body of the report. This data too comes from a SharePoint list in the form of an XML dataset, which sends me the URL to the jpeg or bmp or gif, whatever the case may be. I can successfully pull this off in my own Visual Studio IDE, and my local Report Server will render it as well, but it won't run on my SharePoint Report Server (my MOSS runs over https while my SharePoint Report Server is http; might this matter?). When I upload it to SharePoint and run it through the SharePoint Report Server, I get back everything in the report header and footer (dataset text and embedded images) but just a big red X where the main image should be. I have done everything the boards say:

        A. I made sure the Unattended Execution Account is running on the Report Server
        B. I have ensured the URL comes back in clean format (else the images wouldn't render locally either, and they do)

    The report logs throw this exception:

        ERROR: Throwing Microsoft.ReportingServices.Diagnostics.Utilities.ContainerTypeNotSupportedException:
        The target location you specified is not supported by the report server. A report definition (.rdl),
        report model (.smdl), resource, or shared data source (.rsds) file must be located within a library
        or a folder within it.

    Any takers? Even my SharePoint Administrator can't help me :) James

    Read the article

  • Change Edit Control Block (ECB) Link URL in SharePoint

    - by dirq
    Is there a way to dynamically change the hyperlink associated with an ECB menu in WSS 3.0? For instance, I have a list with two fields. One field is hidden and is a link; the other is the title field, which has the ECB menu. The title field currently links to the item's view page, but we want it to link to the link field's URL. Is that possible?

    UPDATE - 5/29/09 9AM: I have this so far (see this TechNet post):

        <script type="text/javascript">
        var url = 'GoTo.aspx?ListTitle=' + ctx.ListTitle;
        url += '&ListName=' + ctx.listName;
        url += '&ListTemplate=' + ctx.listTemplate;
        url += '&listBaseType=' + ctx.listBaseType;
        url += '&view=' + ctx.view;
        url += '&';
        var a = document.getElementsByTagName('a');
        for (i = 0; i <= a.length - 1; i++) {
            a[i].href = a[i].href.replace('DispForm.aspx?', url);
        }
        </script>

    This gives me a link like so (formatted so it's easier to see):

        GoTo.aspx
          ?ListTitle=MyList
          &ListName={082BB11C-1941-4906-AAE9-5F2EBFBF052B}
          &ListTemplate=100
          &listBaseType=0
          &view={9ABE2B07-2B47-4390-9969-258F00E0812C}
          &ID=1

    My issue now is that the row in the grid gives each item the ID property above, but if I change the view or do any filtering you can see that the ID is really just the row number. Can I get the actual item's GUID here? If I can get the item's ID, I can send it with the list ID to an application page that will get the right URL from a field in the list and forward the user on to the right site.

    Read the article

  • Should my URLs be lowercase?

    - by Rowan Freeman
    According to this blog ("Understanding SEO Friendly URL Syntax Practices") I should change

        http://example.com/Hello-Dolly

    to

        http://example.com/hello-dolly

    The reasons given are:

      - URLs, in general, are case-sensitive
      - it will simplify any case-sensitive SEO and analytics reports

    According to this GIF that I found on Wikipedia's article on URL normalization, I should convert my URLs from any uppercase to all lowercase. However, I use ASP.NET MVC4, and by default my URLs are structured like this (CamelCase):

        http://www.domain.com/Controller/Action/Parameter
        http://www.greatsite.com/Categories/List/Bicycles

    I've skimmed through RFC 1738 but didn't see any definitive answer to this. Should I go out of my way to force the framework to change everything to lowercase? Why did Microsoft choose to design their framework like this if everybody is telling me to use lowercase?
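
    If you do decide to standardize on lowercase, a hedged sketch of the usual MVC approach (assuming .NET 4.5 or later, where RouteCollection exposes a LowercaseUrls property): links generated by the helpers come out lowercase, while incoming URLs keep matching case-insensitively as before.

        // RouteConfig.cs (illustrative)
        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
            routes.LowercaseUrls = true;   // URL generation emits lowercase paths

            routes.MapRoute(
                name: "Default",
                url: "{controller}/{action}/{id}",
                defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
            );
        }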

    Read the article

  • Understanding Ajax crawling of search site

    - by vacuum
    I have a couple of questions about Ajax crawling of a site which is itself a kind of search engine. The base article explains the mechanism for making an AJAX application crawlable. All this stuff with HTML snapshots is clear and easy to implement, but I can't understand where the Google bot will find "a pretty AJAX URL" (i.e. www.example.com/ajax.html#key=value) to work with. The first thing that came to mind is breadcrumbs: in the sitemap we can specify pages with breadcrumbs on them, so the bot will go to these pages and get HTML snapshots from there. But I'm sure there are other ways to give the bot this "pretty AJAX URL". In our case, we have a simple search site where the user enters a keyword, presses "Find", JS executes an Ajax request, receives a JSON response and fills the page with results (without any refresh, of course). In this case, how do we make the Google bot crawl all the results in addition to the sitemap? Is there an example of a solution as described in the article above?

    Read the article

  • apache domain redirect to subfolder

    - by Dennis
    I have a hosting account with GoDaddy. It's a Linux system running Apache. The way they do their setup, your primary domain is the root folder, and when you add a subdomain it becomes a subfolder of the root, which sucks. I want to set up a subfolder structure to organize my domains. I called GoDaddy support and they said to use redirects, but they did not know how to do that.

    How it's set up now:

        primary domain:  www.domain.com  ->  /
        sub.domain.com                   ->  /sub

    I want to create a directory structure and then redirect to each, but only show www.domain.com in the URL:

        www.domain.com  ->  /domain/www
        sub.domain.com  ->  /domain/sub

    I tried using:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www.)?domain.com$
        RewriteRule ^(/)?$ domain/www [L]

    but it just changes the URL to www.domain.com/domain/www. Can this be done in .htaccess?
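
    A hedged sketch of one common .htaccess pattern for this (domain names and folders are placeholders): rewrite every request path, not just the site root, into the per-host subfolder, and keep it an internal rewrite (no scheme or host in the target, no [R]) so the address bar is untouched.

        RewriteEngine On

        # www.domain.com/*  ->  /domain/www/*  (internal rewrite, URL unchanged)
        RewriteCond %{HTTP_HOST} ^(www\.)?domain\.com$ [NC]
        RewriteCond %{REQUEST_URI} !^/domain/www/
        RewriteRule ^(.*)$ /domain/www/$1 [L]

        # sub.domain.com/*  ->  /domain/sub/*
        RewriteCond %{HTTP_HOST} ^sub\.domain\.com$ [NC]
        RewriteCond %{REQUEST_URI} !^/domain/sub/
        RewriteRule ^(.*)$ /domain/sub/$1 [L]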

    Read the article

  • Google webmaster tools: parameters that only apply on one page

    - by Imagine digital
    I'm trying to get my e-commerce website on Google and I'm still figuring out how it all works. I have seen the feature named URL Parameters, which lets me tell Google which parameters affect the page content that gets indexed (one can also flag parameters that do not affect the page, but for me that does not apply). The question I have is whether and how I should add parameters that only exist on some pages of my site. Example: the homepage of my site is www.mysite.nl, with no parameters at all. But when a user clicks the navigation bar, it links to www.mysite.nl/itemList.php?category=&....subCategory=.... The parameters category and subCategory define whether there is content on my itemList page and what that content is; matching products are fetched from my database based on those two variables. The question: how do I make sure I apply Google's URL Parameters feature properly for my website?

    Read the article

  • Django: Named URLs / Same Template, Different Named URL

    - by TheLizardKing
    I have a webapp that lists all of my artists, albums and songs when the appropriate link is clicked. I make extensive use of generic views (object_list/detail) and named URLs, but I am coming across an annoyance. I have three templates that output pretty much the exact same HTML, looking just like this:

        {% extends "base.html" %}
        {% block content %}
        <div id="content">
          <ul id="starts-with">
            {% for starts_with in starts_with_list %}
            <li><a href="{% url song_list_x starts_with %}">{{ starts_with|upper }}</a></li>
            {% endfor %}
          </ul>
          <ul>
            {% for song in songs_list %}
            <li>{{ song.title }}</li>
            {% endfor %}
          </ul>
        </div>
        {% endblock content %}

    My artist and album templates look pretty much the same, and I'd like to combine the three templates into one. The fact that my variables start with song can easily be changed to the default obj. It's the named URL in my <ul id="starts-with"> that I don't know how to correct. Obviously I want it to link to a specific album/artist/song using the named URLs in my urls.py, but I don't know how to make it context aware. Any suggestions?

        urlpatterns = patterns('tlkmusic.apps.tlkmusic_base.views',
            # (r'^$', index),
            url(r'^artists/$', artist_list, name='artist_list'),
            url(r'^artists/(?P<starts_with>\w)/$', artist_list, name='artist_list_x'),
            url(r'^artist/(?P<artist_id>\d+)/$', artist_detail, name='artist_detail'),
            url(r'^albums/$', album_list, name='album_list'),
            url(r'^albums/(?P<starts_with>\w)/$', album_list, name='album_list_x'),
            url(r'^songs/$', song_list, name='song_list'),
            url(r'^songs/(?P<starts_with>\w)/$', song_list, name='song_list_x'),
        )
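
    A hedged sketch of one way to share a single template (Django of that era; the model and variable names are illustrative): build the letter links in each view with reverse() and pass them in via extra_context, so the template never needs to know which named URL it is rendering.

        # views.py - illustrative sketch
        import string
        from django.core.urlresolvers import reverse
        from django.views.generic.list_detail import object_list

        def song_list(request, starts_with=None):
            queryset = Song.objects.all()                      # hypothetical model
            if starts_with:
                queryset = queryset.filter(title__istartswith=starts_with)
            letters = [(c, reverse('song_list_x', args=[c])) for c in string.lowercase]
            return object_list(request, queryset=queryset, extra_context={
                'starts_with_links': letters,                  # (letter, url) pairs
            })

    The shared template then loops over the prepared pairs instead of calling {% url %} with a hard-coded name:

        {% for letter, link in starts_with_links %}
        <li><a href="{{ link }}">{{ letter|upper }}</a></li>
        {% endfor %}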

    Read the article

  • Problem in Addon Domain name binding with existing directory in my webspace

    - by articlestack
    I recently purchased the domain thinkzarahatke.com. At the time of purchase I selected an option (a URL redirect) pointing to my existing site, and I also entered the name server details. I didn't find any other option to change the URL redirect parameters under the Domain Name Manager. From the cPanel of my hosting space I added the new domain as an addon domain and set a subdirectory for it. Now if I try to access my new domain directly, it automatically redirects to my old website, while if I access an inner directory, like thinkzarahatke.com/blog, I reach my new site. What have I done wrong? Please tell me if I need to do any more settings for the addon domain.

    Read the article

  • Increase traffic to a site through a site on subdomain [closed]

    - by user1716672
    Possible Duplicate: Subdomain versus subdirectory We have two sites, one is mainly a portfolio site (built with Yii framework) and the other is a digital shop (built with open cart) where we sell plugins and themes. The url's look like www.mydomian.com and www.store.mydomain.com. But of these sites are in the same server. We use google analytics tools and have no problem getting traffic to our store. But we have very little to our portfolio site and we want to increase our Google ranking for this site. Assuming increased traffic to our site will increase our google ranking, we were thinking to use URl masking so the link will be www.mydomain.com/shop and this will load www.store.mydomain.com. Will this count as hits for our portfolio site? Because the .htaccess rules will ensure the subdomain is served. So I dont know if these hits will count on our store or our portfolio site...

    Read the article

  • How can I force Google to re-index my site?

    - by Matthias
    I changed the structure of my URLs. The pages are already indexed by Google under the following structure: http://mypage.com/myfolder/page.aspx. The new structure is: http://mypage.com/page.aspx. Now all the URLs that Google knows are wrong. How can I tell Google to re-index and that the structure has changed? Internally I redirect in ASP.NET when the URL contains myfolder, but I want Google to update the URLs. Thanks for the answers - I use IIS 6 and I do not know how to configure a redirect for all pages that contain the folder to the page one folder below, so I did the trick in the BeginRequest method with a Context.Response.Redirect. That is not a 301 redirect, only a redirect done in ASP.NET code. Will this also do the trick, so that Google notices that the URL /folder/page1.aspx is now redirected to /page1.aspx?
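
    A hedged sketch of how a permanent redirect is often added here (Global.asax in C#; the folder name is an assumption based on the question), since Response.Redirect sends a 302 and Google only treats a 301 as "moved permanently":

        // Global.asax.cs
        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            string path = Request.Url.PathAndQuery;
            if (path.StartsWith("/myfolder/", StringComparison.OrdinalIgnoreCase))
            {
                string newUrl = path.Substring("/myfolder".Length);  // drop the old folder
                Response.Clear();
                Response.StatusCode = 301;
                Response.Status = "301 Moved Permanently";
                Response.AddHeader("Location", newUrl);
                Response.End();
            }
        }

    Each old /myfolder/page.aspx URL then answers with a 301 pointing at /page.aspx, and Google updates its index the next time it crawls the old addresses.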

    Read the article

  • Unable to change URL for .NET reference to dynamic web service

    - by Malvineous
    Hi all, I have a web reference added to a C# .NET project. The URL for the web reference needs to change depending on whether I'm building for a development, staging or production environment. I've set the web service to be dynamic, which supposedly means it takes the URL from my app.config file. When I perform a build it overwrites the app.config with the required file which contains the correct URL (different file for each of dev/staging/production.) I then go into the solution properties and make sure the Settings.settings file is updated with the app.config changes. However when I view the properties for the web service, it is still showing the old URL, despite it being dynamic, and supposed to be reading from my settings file (even after closing and reopening the project/solution.) The app.config and the settings file all have the new URL, but the web reference doesn't notice it has changed. If I do a build it ignores the URL in the settings file and tries to connect to the last URL manually typed into the web reference's properties. Typing a URL into these properties correctly updates the app.config and .settings files, so the link is definitely there. I'm a bit new to .NET but it seems to me the purpose of setting the service to be dynamic is so that you can change the URL elsewhere, but when I do this it just gets ignored! Am I doing something wrong?
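
    A hedged workaround sketch (the proxy class and setting key are illustrative, not from the project): a "dynamic" web reference only reads its URL when the proxy is constructed, so one reliable option is to assign the Url property explicitly from configuration wherever the proxy is created.

        // using System.Configuration;  (reference System.Configuration.dll)
        var service = new MyWebReference.MyService();
        service.Url = ConfigurationManager.AppSettings["MyServiceUrl"];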

    Read the article
