Search Results

Search found 34826 results on 1394 pages for 'valid html'.


  • Is external JavaScript source available to scripting context inside HTML page?

    - by John K
    When an external JavaScript file is referenced, e.g. <script type="text/javascript" src="js/jquery-1.4.4.min.js"></script>, is the JavaScript source (the lines of code before interpretation) available from the DOM or window context in the current HTML page? I mean using only standard JavaScript, without any installed components or tools. I know tools like Firebug can trace into external source, but Firebug is installed on the platform and likely has special abilities outside the browser sandbox.
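
    For what it's worth, the source of an external script is not exposed through the DOM itself (only the src URL is), but a same-origin script can be re-requested with standard XMLHttpRequest and its text read back. A minimal sketch, assuming js/jquery-1.4.4.min.js is served from the same origin as the page:

        // Sketch: re-fetch the file named in the script tag's src and read
        // its source as text. Same-origin only, per the browser sandbox.
        var script = document.getElementsByTagName('script')[0]; // assumes the external script is the first one
        var xhr = new XMLHttpRequest();
        xhr.open('GET', script.src, true);
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4 && xhr.status === 200) {
                console.log(xhr.responseText); // the code before interpretation
            }
        };
        xhr.send();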

    Read the article

  • How to display HTML-like table data on iPhone?

    - by Jason
    I have a set of data in a matrix which I would like to display in my iPhone app with all of the rows and columns intact. Everything I can find on the web dealing with "tables iPhone" gives me information on UITableView, which only lets you show a list of items to the user - not an actual table in the HTML sense. What's the best way on the iPhone to display an actual table of data to the user, with column & row headings and table cells?

    Read the article

  • jQuery.ajax returns content-type "html" on IIS while it returns content-type "json" on localhost

    - by Sridhar
    Hi, I am using the jQuery.ajax function to make an AJAX call to a page method in ASP.NET. I specifically set the content type to "application/json; charset=utf-8". When I look at the response in Firebug, it says the content type is html. Following is the code for my ajax call:

        $.ajax({
            async: asyncVal,
            type: "POST",
            url: url + '/' + webMethod,
            data: dataPackage,
            contentType: "application/json; charset=UTF-8",
            dataType: "json",
            error: errorFunction,
            success: successFunction
        });
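
    One common cause worth ruling out (an assumption on my part, since dataPackage isn't shown): ASP.NET page methods only reply with JSON when the request body is itself a JSON string. If dataPackage is a plain object, jQuery form-encodes it and ASP.NET falls back to rendering the full page as HTML. A minimal sketch:

        // Sketch: the body must be a JSON *string*; the payload here is hypothetical.
        $.ajax({
            type: "POST",
            url: url + '/' + webMethod,
            data: JSON.stringify({ id: 42 }),
            contentType: "application/json; charset=UTF-8",
            dataType: "json",
            success: successFunction,
            error: errorFunction
        });

    On IIS it is also worth confirming that the ScriptModule registration from System.Web.Extensions made it into the server's web.config; when it is missing, page-method requests come back as the page's HTML exactly as described.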

    Read the article

  • How can I find a URL called [link] inside a block of HTML containing other URLs?

    - by DrTwox
    I'm writing a script to rewrite Reddit's RSS feeds. The script needs to find a URL named [link] inside a block of HTML that contains other URLs. The HTML is contained in an XML element called <description>. Here are two examples of the <description> element I need to parse, and the [link] I would need to get.

    First example:

        <description>submitted by &lt;a href=&#34;http://www.reddit.com/user/wildlyinaccurate&#34;&gt; wildlyinaccurate &lt;/a&gt; &lt;br/&gt; &lt;a href=&#34;http://wildlyinaccurate.com/a-hackers-guide-to-git&#34;&gt;[link]&lt;/a&gt; &lt;a href="http://www.reddit.com/r/programming/comments/26jvl7/a_hackers_guide_to_git/"&gt;[66 comments]&lt;/a&gt;</description>

    The [link] is: http://wildlyinaccurate.com/a-hackers-guide-to-git

    Second example:

        <description>&lt;!-- SC_OFF --&gt;&lt;div class=&#34;md&#34;&gt;&lt;p&gt;I work a support role at a company where I primarily fix issues our customers our experiencing with our software, which is a browser based application written primarily in javascript. I&amp;#39;ve been doing this for 2 years, but I want to take it to the next level (with the long term goal being that I become proficient enough to call myself a developer). I&amp;#39;ve been reading &amp;quot;Javascript The Definitive Guide&amp;quot; by O&amp;#39;Reilly but I was wondering if any of you more experienced users out there had some tips on taking it to the next level. Should I start incorporating some PHP and Jquery into my learning? Side projects on my spare time? Any good online resources? Etc. &lt;/p&gt; &lt;p&gt;Thanks! &lt;/p&gt; &lt;/div&gt;&lt;!-- SC_ON --&gt; submitted by &lt;a href=&#34;http://www.reddit.com/user/56killa&#34;&gt; 56killa &lt;/a&gt; &lt;br/&gt; &lt;a href=&#34;http://www.reddit.com/r/javascript/comments/26nduc/i_want_to_become_more_experienced_with_javascript/&#34;&gt;[link]&lt;/a&gt; &lt;a href="http://www.reddit.com/r/javascript/comments/26nduc/i_want_to_become_more_experienced_with_javascript/"&gt;[4 comments]&lt;/a&gt;</description>

    The [link] is: http://www.reddit.com/r/javascript/comments/26nduc/i_want_to_become_more_experienced_with_javascript/
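
    A minimal sketch of one way to pull the [link] target out (my own sketch, assuming the raw text of each <description> element has already been read into a string; the XML escaping is undone first, then the anchor whose text is [link] is matched):

        // Sketch: find the href of the anchor whose text is [link].
        // 'description' holds the raw text of one <description> element.
        function linkFromDescription(description) {
            // Undo one level of XML escaping; &amp; goes last so that
            // sequences like &amp;#39; come out as &#39;, not doubly unescaped.
            var html = description
                .replace(/&lt;/g, '<')
                .replace(/&gt;/g, '>')
                .replace(/&#34;/g, '"')
                .replace(/&amp;/g, '&');
            var match = html.match(/<a href="([^"]+)">\[link\]<\/a>/);
            return match ? match[1] : null;
        }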

    Read the article

  • ASP.NET MVC AND TOOLBOX

    - by imran_ku07
    Introduction:

    ASP.NET MVC's popularity is no secret in today's world of web applications. One of the great things in ASP.NET MVC is the separation of concerns, in which presentation views are separate from the business or model layer. In these views, ASP.NET MVC provides some very good helpers which generate commonly used HTML markup fragments with a shorter syntax. These presentation views are familiar to Web Forms developers. But a pain point when using these helpers is that developers need to type them out every time they are needed, when they are more used to dragging and dropping controls from the ToolBox. So in this article I will use a cool feature of Visual Studio that lets you add these helpers to the ToolBox once and then, whenever needed, just drag and drop them from the ToolBox, much as in Web Forms.

    Description:

    The Visual Studio ToolBox is rich enough to let you store code and HTML snippets. All you need to do is select the HTML helper and simply drag and drop it into the ToolBox. Repeat this procedure for every HTML helper in ASP.NET MVC. Then, when you need an HTML helper, you can drag and drop it from the ToolBox and enjoy drag-and-drop programming.

    Summary:

    In this article you saw how Visual Studio lets you drag and drop HTML snippets from the view editor into the ToolBox. This is one of the coolest features in Visual Studio.

    Read the article

  • How to escape “@” in the username when logging in through FTPES with curl?

    - by user62367
    $ curl -T "index.html" -k --ftp-ssl -u "[email protected]" MYDOMAIN.COM
    Enter host password for user '[email protected]':
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
      0 57173    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0
    <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
    <html><head>
    <title>405 Method Not Allowed</title>
    </head><body>
    <h1>Method Not Allowed</h1>
    <p>The requested method PUT is not allowed for the URL /index.html.</p>
    <hr>
    <address>Apache/2.2.16 Server at MYDOMAIN.COM Port 80</address>
    </body></html>
    100 57480  100   307  100 57173    284  52902  0:00:01  0:00:01 --:--:-- 53633

    Can someone help me? Also posted on Stack Overflow.

    Read the article

  • Duplicate content issue after URL-change with 301-redirects

    - by David
    We got the following problem: we changed all URLs on our page from oldURL.html to newURL.html and set up 301 redirects (ca. 600 URLs). Google re-crawled our page and indexed all the new URLs (newURL.html), but didn't crawl the old URLs (oldURL.html) again, as there were no internal links pointing at those URLs anymore after the change. This resulted in massive ranking drops, etc., because (i) Google thought oldURL.html had exactly the same content as newURL.html, causing duplicate-content issues, and (ii) Google never noticed the 301 redirects and so did not transfer the juice from oldURL to newURL.

    Now we have reset all internal links to the old URLs again, which then redirect to the new URLs, in the hope that Google will re-crawl the pages once there are internal links pointing at them. This is partially happening, but at a really low speed, so it would take multiple months for all the redirects to be noticed - I guess because Google thinks: "Aah, I already know oldURL.html, so no need to re-crawl it."

    Possible solutions we thought of are:

    - Submitting as many of the old URLs as possible via Webmaster Tools, to manually trigger a crawl (we are doing that already).
    - Submitting a sitemap with all the old URLs - but we are not sure that is a good idea, because Google does not seem to like 301 redirects in a sitemap.

    Both solutions are imperfect, and we cannot wait three months just to regain our old rankings. What are your ideas? Best, David
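
    As an aside, before waiting on Google it may be worth spot-checking that every old URL really answers with a 301 (not a 302) and the right Location header. A minimal sketch of such a check (my own, not from the post; 'oldUrls' is a hypothetical stand-in for the ~600 real URLs, and it assumes Node 18+ for the built-in fetch):

        // Sketch: verify each old URL answers 301 and points at its new home.
        const oldUrls = ['https://example.com/oldURL.html']; // hypothetical list

        (async () => {
            for (const url of oldUrls) {
                // redirect: 'manual' stops fetch from following the redirect,
                // so the raw status and Location header stay visible.
                const res = await fetch(url, { redirect: 'manual' });
                console.log(res.status, url, '->', res.headers.get('location'));
            }
        })();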

    Read the article

  • nginx static files caching doesn't work

    - by user74344
    Here is my conf file (/usr/local/nginx/sites-available/default):

        server {
            listen       80;
            server_name  localhost;

            location / {
                root   html;
                index  index.php index.html index.htm;
            }

            # redirect server error pages to the static page /50x.html
            error_page  500 502 503 504  /50x.html;
            location = /50x.html {
                root  html;
            }

            # serve static files directly
            location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico|swf)$ {
                expires 30d;
            }
        }

    But it doesn't cache static files. How should I fix it? Thanks a lot.

    Read the article

  • apache-memory-hacker-linux

    - by bibhudatta
    When we start the Linux system it takes only 435 MB of memory, and it is a 4 GB server. When we start the httpd service it takes 1000 MB, then automatically takes all the memory, and the server crashes. Even when we stop Apache, it only releases about 200 MB of memory. What could the problem be? Can anyone tell me what these hackers are doing? I see they are sending hits to my Apache, and I think they are doing it from this system. Below is the log. Please help me out with this.

        [root@host ~]# tail -20 /var/log/httpd/dostizone.com-combined.log
        180.76.5.143 - - [14/Nov/2011:02:30:16 +0530] "GET /blogs/10248/209403/nfl-panties-since-the-quality-of HTTP/1.1" 403 2298 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)"
        180.76.5.88 - - [14/Nov/2011:02:30:31 +0530] "GET /blogs/815/158725/new-jersey-attorney-search HTTP/1.1" 403 2290 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)"
        220.181.108.186 - - [14/Nov/2011:02:30:32 +0530] "GET / HTTP/1.1" 403 5043 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)"
        crawl-66-249-67-137.googlebot.com - - [14/Nov/2011:02:30:20 +0530] "GET /blogs/805/11279/supra-suprano-high-shoes HTTP/1.1" 200 30642 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
        crawl-66-249-68-51.googlebot.com - - [14/Nov/2011:02:30:37 +0530] "GET /blogs/10514/215084/oakland-raiders-sweatpants-tags HTTP/1.1" 403 2297 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
        220.181.94.237 - - [14/Nov/2011:02:30:12 +0530] "GET /profile/8509 HTTP/1.1" 200 236894 "-" "Sogou web spider/4.0(+http://www.sogou.com/docs/help/webmasters.htm#07)"
        220.181.94.237 - - [14/Nov/2011:02:30:43 +0530] "GET /mode-switch?return_url=%2Fblogs%2F8529%2F160217%2Fclimate-jordan-6 HTTP/1.1" 302 1 "-" "Sogou web spider/4.0(+http://www.sogou.com/docs/help/webmasters.htm#07)"
        crawl-66-249-68-51.googlebot.com - - [14/Nov/2011:02:30:44 +0530] "GET /blogs/390/61573/blackhawk-jerseys-from-the-you HTTP/1.1" 403 2293 "-" "SAMSUNG-SGH-E250/1.0 Profile/MIDP-2.0 Configuration/CLDC-1.1 UP.Browser/6.2.3.3.c.1.101 (GUI) MMP/2.0 (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)"
        124.115.0.159 - - [14/Nov/2011:02:30:24 +0530] "GET /blogs/693/46081/application/modules/Hecore/externals/scripts/core.js HTTP/1.1" 200 26869 "http://dostizone.com/blogs/693/46081/thomas-sabo-charms-hot-chilli" "Sosospider+(+http://help.soso.com/webspider.htm)"
        124.115.0.159 - - [14/Nov/2011:02:30:24 +0530] "GET /blogs/693/46081/application/modules/Activity/externals/scripts/core.js HTTP/1.1" 200 26873 "http://dostizone.com/blogs/693/46081/thomas-sabo-charms-hot-chilli" "Sosospider+(+http://help.soso.com/webspider.htm)"
        124.115.0.159 - - [14/Nov/2011:02:30:24 +0530] "GET /blogs/693/46081/application/modules/Hecore/externals/scripts/imagezoom/core.js HTTP/1.1" 200 26899 "http://dostizone.com/blogs/693/46081/thomas-sabo-charms-hot-chilli" "Sosospider+(+http://help.soso.com/webspider.htm)"
        180.76.5.153 - - [14/Nov/2011:02:30:50 +0530] "GET /blogs/10252/212268/cleveland-browns-authentic-jerse HTTP/1.1" 403 2298 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)"
        crawl-66-249-68-51.googlebot.com - - [14/Nov/2011:02:30:51 +0530] "GET /blogs/741/46260/chocolate-ugg-women-boots-1873 HTTP/1.1" 403 2293 "-" "SAMSUNG-SGH-E250/1.0 Profile/MIDP-2.0 Configuration/CLDC-1.1 UP.Browser/6.2.3.3.c.1.101 (GUI) MMP/2.0 (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)"
        124.115.1.7 - - [14/Nov/2011:02:30:40 +0530] "GET /blogs/682/97454/swarovski-jewellry-sale-articles HTTP/1.1" 200 25770 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"
        crawl-66-249-68-51.googlebot.com - - [14/Nov/2011:02:30:56 +0530] "GET /blogs/779/60941/players-a-to-z-michael-cuddyer HTTP/1.1" 403 2293 "-" "SAMSUNG-SGH-E250/1.0 Profile/MIDP-2.0 Configuration/CLDC-1.1 UP.Browser/6.2.3.3.c.1.101 (GUI) MMP/2.0 (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)"
        crawl-66-249-68-51.googlebot.com - - [14/Nov/2011:02:31:01 +0530] "GET /blogs/469/58551/chicago-bears-news-there-exist HTTP/1.1" 403 2293 "-" "SAMSUNG-SGH-E250/1.0 Profile/MIDP-2.0 Configuration/CLDC-1.1 UP.Browser/6.2.3.3.c.1.101 (GUI) MMP/2.0 (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)"
        220.181.94.237 - - [14/Nov/2011:02:30:54 +0530] "GET /blogs/8529/160217/climate-jordan-6 HTTP/1.1" 200 30750 "-" "Sogou web spider/4.0(+http://www.sogou.com/docs/help/webmasters.htm#07)"
        180.76.5.59 - - [14/Nov/2011:02:31:05 +0530] "GET /blogs/815/158197/cheap-calgary-flames-jerseys HTTP/1.1" 403 2292 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)"
        crawl-66-249-68-51.googlebot.com - - [14/Nov/2011:02:31:06 +0530] "GET /mode-switch?return_url=%2Fblogs%2F387%2F45679%2Fhandbag-louis-vuitton-judy-mm-m4 HTTP/1.1" 403 2258 "-" "SAMSUNG-SGH-E250/1.0 Profile/MIDP-2.0 Configuration/CLDC-1.1 UP.Browser/6.2.3.3.c.1.101 (GUI) MMP/2.0 (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)"
        crawl-66-249-67-137.googlebot.com - - [14/Nov/2011:02:31:10 +0530] "GET /public/temporary/c83b731ecc556d7fd1a7732d9ac16ed6.png HTTP/1.1" 404 2305 "-" "Googlebot-Image/1

    Read the article

  • Is this form of cloaking likely to be penalised?

    - by Flo
    I'm looking to create a website which is considerably JavaScript-heavy, built with Backbone.js and with most content passed as JSON and loaded via Backbone. I just need some advice or opinions on the likelihood of my website being penalised for the method of serving plain HTML (text, images, everything) to search engine bots and a JS front-end version to normal users.

    This is my basic plan for the site: the first request to any page returns HTML that gives only about 1/4 of the page, and thereafter the last 3/4 is loaded with Backbone.js. Therefore non-JavaScript users get a 'bit' of the experience. Once a new user has visited and been detected to have JS, a cookie is saved on their machine, and requests from then on are AJAX only. For example:

        If (AJAX || HasJSCookie) { // Pass JSON }

    Search engine server content: that entire load-via-AJAX experience is stripped if, for example, a Google bot is detected; the same content is served, but all as HTML. I thought about just allowing search engines to index the first 1/4 of the content, but as I'm concerned about internal links and picking up every bit of content, I thought it would be better to give search engines the entire content. I plan to do this by checking against a list of user agents to know whether the client is a bot or not (see the sketch after this post):

        If (Bot) { // serve plain html }

    In addition I plan to make clean URLs for the entire website despite it being fully AJAX, so serving AJAX content at www.example.com/#/page and normal HTML at www.example.com/page is kind of out of the question. I would rather avoid the practice of using # when technologies such as HTML5 pushState are around.

    So my question is really just asking the opinion of the masses: is it likely that my website will be penalised? And do you suggest an alternative which avoids the 'noscript' method?
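
    Since the plan hinges on the user-agent check, here is a minimal sketch of that branch (my own sketch in Node/Express, which the post does not mention; the bot pattern is a hypothetical, incomplete list, and serious setups verify crawlers by reverse DNS rather than trusting the UA string):

        // Sketch: pick the plain-HTML branch for known crawler user agents.
        const express = require('express'); // assumed framework
        const app = express();

        const BOT_PATTERN = /googlebot|bingbot|baiduspider|yandex|sogou/i; // hypothetical list

        function isBot(userAgent) {
            return BOT_PATTERN.test(userAgent || '');
        }

        app.get('*', (req, res) => {
            if (isBot(req.headers['user-agent'])) {
                res.send('<html>full plain-HTML render goes here</html>');        // bots: everything as HTML
            } else {
                res.send('<html>1/4 shell + Backbone bootstrap goes here</html>'); // users: JS front end
            }
        });

        app.listen(3000);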

    Read the article
