Search Results

Search found 9935 results on 398 pages for 'pages'.


  • Using Minified Page Specific JS [migrated]

    - by Mike C
    I've been working on a rather large-scale project that makes use of a number of different pages, each with some very specific JavaScript. To lessen load times, I plan to minify it all into one file before deploying. The problem is this: how should I avoid launching page-specific JS on pages that don't require it? So far my best solution has been to wrap each page in an additional container, <div id='some_page'> ...everything else... </div>, and to extend jQuery so I can do something like this:

        // If this element exists when the DOM is ready, execute the function
        $('#some_page').ready(function() { ... });

    Which, while kind of cool, just rubs me the wrong way.
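
    A minimal sketch of the same element-presence dispatch without extending jQuery, assuming each page keeps its wrapper div; the page IDs and init bodies are hypothetical:

        // Map wrapper IDs to their page-specific initialisers (hypothetical names)
        document.addEventListener('DOMContentLoaded', function () {
          var pages = {
            some_page:  function () { /* JS for #some_page only */ },
            other_page: function () { /* JS for #other_page only */ }
          };
          Object.keys(pages).forEach(function (id) {
            // Run an initialiser only when its wrapper actually exists in the DOM
            if (document.getElementById(id)) {
              pages[id]();
            }
          });
        });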

    Read the article

  • Navigation for ASP.NET Web Forms project published on codeplex

    Navigation for ASP.NET Web Forms manages movement and data passing between aspx pages in a unit-testable manner. There is no client-side logic, so it works in all browsers, and no server-side cache, so it works with the browser back button. Features include loosely coupled pages, typed data passing, empty code-behinds, a context-sensitive breadcrumb trail, ASP.NET data binding integration, automatic ASP.NET Ajax history navigation and many more. The source code, binaries and comprehensive documentation...

    Read the article

  • How to avoid getting negative points from Google AdSense

    - by Napster
    I have a news-based website whose primary content includes news, image albums and videos. Of these, I hold the copyright for the images, and the videos are just embedded YouTube videos. As for the news, my site is a kind of mashup: it gathers data from various sites and presents it in a more user-friendly way for quick digestion and access. My problem is that since the news part of the site can be found on other sites, my site could suffer in search rankings. Is there any solution to this? One thing I thought of is to put a disallow on all the news article pages, so Google does not crawl them. Will this help me? When I apply to Google AdSense, does Google also crawl these (disallowed) pages?
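
    A minimal robots.txt sketch of that idea, assuming the articles live under a hypothetical /news/ path; note that the AdSense crawler (Mediapartners-Google) is a separate user agent from the search crawler:

        # Keep the mashup news articles out of the search index (hypothetical path)
        User-agent: Googlebot
        Disallow: /news/

        # The AdSense crawler reads pages only to target ads, not to rank them
        User-agent: Mediapartners-Google
        Allow: /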

    Read the article

  • Can I redirect HTTP requests towards an old folder to the homepage using the .htaccess file?

    - by AndreaNobili
    I have the following situation: I had an old blog made with Joomla (it was indexed well enough by search engines). Because of some problems I deleted it and recreated it using WordPress. Now I get many visits (from Google) leading to specific pages of the old site, pages that don't exist in the new version. For example, I get visits to URLs such as /scorejava/index.php/corso-spring-mvc/1-test that don't exist on my new site. I would like to know whether, using the .htaccess file (or some other mechanism), I can redirect the HTTP requests aimed at a subfolder (one that doesn't exist in the new version) to the homepage of my new site. For example, for requests towards the now-dead URL /scorejava/index.php/corso-spring-mvc/1-test, I would write a regular expression that says something like: all requests towards the subfolder corso-spring-mvc (and all its files and subfolders) have to be redirected to www.scorejava.com. Is it possible?
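
    A minimal .htaccess sketch of one way to do this, assuming Apache with mod_alias enabled; the pattern reuses the asker's example path:

        # Permanently redirect anything under the old Joomla path to the new homepage
        RedirectMatch 301 ^/scorejava/index\.php/corso-spring-mvc(/.*)?$ http://www.scorejava.com/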

    Read the article

  • SEO regarding using multiple <h1> tags?

    - by user622378
    Is it true that, for SEO, a page should not have more than one <h1> tag? On every page the header includes an <h1> tag for the website name, which shows an image via the logo class, like this:

        <div id="header">
          <h1 class="logo">
            <a href="#">Website Name</a>
          </h1>
        </div>

    I also include an <h1> on the contact page, help page, article pages and so on, for example:

        <h1>Contact Us</h1>
        <h1>Name of the Article Title here</h1>

    On the homepage there is just one <h1>, for the logo/site name; other pages have two <h1> tags.

    Read the article

  • Working With Files in Dreamweaver

    Web files need special handling, because any time you move or delete a file it affects the other files that link to it. Dreamweaver therefore includes file-management features tailored to the web development environment. It keeps a record of all the links on your site (remember that the code for a photo is also a link). When you move a file, Dreamweaver asks whether you want to update the other pages that link to it, and when you delete a file, it warns you if other pages are still using it. Dreamweaver is actually a package of programs, and one of them handles these file-management tasks.

    Read the article

  • Google Analytics: understanding dimensions and metrics?

    - by flossfan
    If I run a query on the Google Analytics API and set the dimension to ga:pagePathLevel1 and the metric to ga:avgTimeOnPage, I get results like this:

        { pagePathLevel1: /about,   avgTimeOnPage: 28 },
        { pagePathLevel1: /contact, avgTimeOnPage: 10 }

    I'm not completely sure how to interpret this. Is the value of avgTimeOnPage the average time spent by any user across all pages that match that path? Or is 28 seconds the average time spent by any user on any single page that matches that path? I'm looking for the average time spent across all pages matching the path, but the time estimates look shorter than I'd expect. I hope that question makes sense! Please tell me if it doesn't.
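
    For reference, a rough sketch of the kind of query being described, using the JavaScript client for the v3 Core Reporting API; the profile ID is a placeholder and parameter shapes may vary by client version:

        // Average time on page, broken down by first-level path (sketch)
        gapi.client.analytics.data.ga.get({
          'ids': 'ga:XXXXXXXX',            // placeholder profile ID
          'start-date': '30daysAgo',
          'end-date': 'today',
          'metrics': 'ga:avgTimeOnPage',
          'dimensions': 'ga:pagePathLevel1'
        }).execute(function (response) {
          console.log(response.rows);      // e.g. [["/about", "28.0"], ...]
        });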

    Read the article

  • RewriteRule for URLs with spaces

    - by Robert Cailliau
    My site's pages are in multiple languages, whereby each language version shares its media (images) with the other language versions. I place all versions and the media in a single directory with the same name: e.g. pages mypage-en.html, mypage-fr.html etc. sit in directory mypage. The directory path suffices to reference a page: http://....../mypage/ is good enough; there is no need for http://....../mypage/mypage-en.html. A rewrite with

        RewriteRule ^(.*)/([a-zA-Z0-9]+)/?$ /$1/$2/$2-en.html

    lets me use the shorter form. But what if the name mypage contains spaces (which some do)? I want http://....../my page/ to lead to http://....../my page/my page.html. Using

        RewriteRule ^(.*)/([a-zA-Z0-9|\s]+)/?$ /$1/$2/$2-en.html

    did not work. Any hints welcome. (Please do not ask me why I want to do this, nor tell me I should not use spaces in file names.)
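
    A sketch of one way around the character-class problem, assuming Apache mod_rewrite: instead of enumerating allowed characters, match anything in the last segment except "/" and ".", so already-rewritten *.html requests are left alone and the rule cannot loop:

        RewriteEngine On
        # [^/.]+ also matches spaces in the final path segment
        RewriteRule ^(.*)/([^/.]+)/?$ /$1/$2/$2-en.html [L]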

    Read the article

  • Accessing web pages from the terminal

    - by August
    I'm using Ubuntu 12.04. I know two ways to access web pages from the terminal: lynx and elinks. When I tried using them, I didn't feel any kind of speed improvement. I'm not sure whether that's normal or down to my connection (it's a slow one). So which is the better way to access web pages, terminal browsers or GUI browsers? And what else do I gain or lose with text-mode terminals?

    Read the article

  • When do I bite the bullet and hire a developer? [closed]

    - by Paul Seattle
    I have an awesome URL; I've had it since the mid-'90s, and up until around 2002 I was having an awesome time writing music reviews and features into static pages, adding their URLs to static index pages, and everything was just great. Then things got complicated really fast, and for one reason or another I handed the site over to a very talented friend who turned it into a database-driven site running on ColdFusion. Now, here I am around twelve years later, putting it all back together using MySQL, PHP and CSS on a need-to-learn basis, and even though I'm sooo close to where I want it to be, I realize, erm, it should have been written using mysqli etc. etc. ad infinitum. So I'm wondering: at what point do I just give in and hire a developer to take over? How much does that even cost? And how do I know I'm working with someone who is better than I am?

    Read the article

  • Show events AND pageviews in Google Analytics

    - by supertrue
    Each page on my site contains a file, and I have Google Analytics set up to track file download events. I would like to see what fraction of users who visit Page X download Page X's file. I can view the number of events by page by clicking Content » Events » Pages, but I can't figure out how to see both events and pageviews (or visits) at the same time. Visits and pageviews are not available in the Secondary dimension dropdown on the Events list, and Events are not available as a Secondary dimension in the regular traffic listing (Content » Site Content » All Pages). I want something like this:

        Page                        Pageviews   Events
        1. /section/mypage              1,000      123
        2. /category/anotherpage          867       41
        3. /about/download                 88        7

    Is there a way to get this in Google Analytics: to view events and pageviews, by page, at the same time?
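
    Outside the web interface, the Core Reporting API accepts several metrics in one query, so a hedged sketch like this (v3 JavaScript client, placeholder profile ID) could list pageviews and events side by side:

        // Pageviews and total events per page path (sketch)
        gapi.client.analytics.data.ga.get({
          'ids': 'ga:XXXXXXXX',            // placeholder profile ID
          'start-date': '30daysAgo',
          'end-date': 'today',
          'metrics': 'ga:pageviews,ga:totalEvents',
          'dimensions': 'ga:pagePath',
          'sort': '-ga:pageviews'
        }).execute(function (response) {
          console.log(response.rows);      // [["/section/mypage", "1000", "123"], ...]
        });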

    Read the article

  • Restricted Flow Of Power

    - by user13827
    I'm sure all is fine, but I need some reassurance. Last month my company consolidated two of its websites into one new website: www.fdmgroup.com and www.fdmacademy.com became a newly designed www.fdmgroup.com. Because the FDM Academy grew as its own brand, we decided not to simply forward its domain to the fdmgroup website, but instead to mirror the new FDM Group website and use canonical tags pointing to the FDM Group domain (so the link juice passes to the FDM Group domain's pages). The website has been live for nearly a month, and I don't believe any power has passed down through the FDM Group website to its deeper pages, even though 301 redirects from the legacy Group and Academy domains are in place. I am also seeing the same problem on the FDM Academy domain, but I expect that, since every page has a canonical to the same page on the FDM Group domain. Is there anything restricting the flow of power through the site, or am I just being impatient? Thanks in advance, Jon

    Read the article

  • best practice for last-modified and created dates

    - by drewbenn
    I have a website with a handful of static HTML pages (currently three; I anticipate about a dozen when it's complete). I'd like to include "created" and "last-modified" dates in the pages for the benefit of visitors who arrive a week or a month or a few years from now. I expect anyone who cares to be viewing the source, so I could do:

        <!-- created yyyy-mm-dd, last-modified yyyy-mm-dd -->

    but I'd like to use something more standard (and elegant). I've found one reference to last-modified (but only a mention in the text, not an actual code reference, so I'm not positive how to implement it properly), and none to created. Is there a proper way to display both (or at least one) of these dates?
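
    One hedged option: HTML5 <time> elements carrying schema.org's dateCreated and dateModified properties, which keeps the dates visible to visitors and machine-readable at the same time (the dates below are placeholders):

        <footer itemscope itemtype="http://schema.org/WebPage">
          Created <time itemprop="dateCreated" datetime="2012-01-15">15 January 2012</time>,
          last modified <time itemprop="dateModified" datetime="2012-06-01">1 June 2012</time>.
        </footer>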

    Read the article

  • Joining and compressing all JavaScript files together - good idea?

    - by Tomáš Zato
    Currently, I avoid loading any unnecessary scripts on individual pages of my site. I have a class that remembers all the JavaScript files requested during PHP processing and adds them to the HTML. I was just thinking that I could merge the current set of files, save the result in a special directory, and let the browser download just one big file. Since the number of possible combinations is not very high, I would end up with about 10 combined files for different pages. I've never seen that on any site. What are the reasons not to do it? I need very fast page loads.
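
    A minimal PHP sketch of the combining step described above; the function name and paths are hypothetical, and each distinct set of scripts maps to one cached file built on first request:

        <?php
        // $scripts: the list of JS files collected during page processing (assumed)
        function combinedScriptUrl(array $scripts) {
            $key  = md5(implode('|', $scripts));   // one cache file per combination
            $dir  = __DIR__ . '/js/combined';
            $path = "$dir/$key.js";
            if (!file_exists($path)) {
                if (!is_dir($dir)) {
                    mkdir($dir, 0755, true);
                }
                $out = '';
                foreach ($scripts as $file) {
                    // Trailing ";" guards against files that end without a semicolon
                    $out .= file_get_contents($file) . ";\n";
                }
                file_put_contents($path, $out);
            }
            return "/js/combined/$key.js";
        }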

    Read the article

  • In addition to Google's First Click Free, should you whitelist search engine bots past a paywall?

    - by tobek
    Our site has subscription-only pages; non-subscribed visitors see a snippet preview. As per Google's FCF requirements, for your first five hits to subscriber-only pages with .google. as the referrer, you see the full page. In addition to this, should we whitelist search engine bots so that they can index the full content? I assume this is not required for Google, which can use FCF to index our content, but what about other search engines? Is this considered cloaking? My gut says that whitelisting bots past the paywall is bad practice, but I wanted to confirm; any evidence or references would be amazing.
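
    For concreteness, a rough PHP sketch of the referrer side of the FCF mechanism described above; the cookie-based hit counter and variable names are assumptions, not Google's specification:

        <?php
        // Treat visits referred from any Google domain as First Click Free candidates
        $referer    = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';
        $fromGoogle = strpos($referer, '.google.') !== false;
        $fcfHits    = isset($_COOKIE['fcf_hits']) ? (int) $_COOKIE['fcf_hits'] : 0;

        $showFull = $fromGoogle && $fcfHits < 5;   // FCF window: first five referred hits
        if ($showFull) {
            setcookie('fcf_hits', (string) ($fcfHits + 1), time() + 86400);
        }
        // $showFull now decides whether to render the full article or the snippet preview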

    Read the article

  • Chrome 10 has a bug when rendering Flash sites; are you affected by this problem?

    Update of 16.03.2011 by Katleen. Last week, Google updated its browser, but version 10 of Chrome appears to be affected by a bug. Several users are complaining of problems when viewing Flash sites, especially when several pages of this type are displayed at the same time. The plugin then crashes with the following message: "The following plugin has crashed: Shockwave Flash". Despite this, the browser keeps working normally (except for the Flash pages). A regrettable occurrence at a time when Microsoft has just...

    Read the article

  • On which page(s) to add canonical?

    - by user6211
    I have two pages with the same content and the same meta title and meta description. They also have very similar URLs:

        http://www.mysite.com/new-york
        http://www.mysite.com/new_york

    I need the first link to be the "official" one. To avoid duplicate pages, I want to add a canonical meta tag in the header... but on which page? Does it have to be on both of them, or only on the second? Or only on the first? Can you give me some advice please?
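
    For reference, a sketch of the canonical link element in question; it goes in the <head> and points at the URL the asker wants to be official:

        <link rel="canonical" href="http://www.mysite.com/new-york" />

    A common pattern is to put this on the duplicate (the underscore URL); a self-referencing canonical on the official page is generally considered harmless as well.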

    Read the article

  • Screen gets garbled on some web sites

    - by user10565
    I have a Gateway notebook whose graphics card is reported as "01:05.0 VGA compatible controller: ATI Technologies Inc RS690M [Radeon X1200 Series]", running the open-source driver on Linux kernel 2.6.32-28-generic. There is no other operating system on the computer. When I browse the web with Firefox, everything normally works just fine, except that when I open certain web pages the screen completely messes up, going mostly white with various streaks and so on, although I can access other pages of the same site without problems. When I run the cursor over the garbled screen, bits of the image recompose themselves, at least partially, and I can continue to open the applications window, turn the computer off, open the terminal, take screenshots, etc., although all menus are unreadable. The screen also completely messes up when I zoom in on Google Earth. At all other times there are no apparent problems. Any ideas?

    Read the article

  • SEO: influencing search results per device class (mobile/desktop)

    - by user32224
    We're currently building a new responsive website, and while working on the site map we figured that we don't want to show certain sections on mobile devices. This can easily be done by hiding the navigation parts using CSS media queries. The trouble, however, is that the hidden pages would still show up in search engines' results. If a user happens to click on one of these links, she might see a badly formatted page, since we'd use desktop/tablet-only code to show images and video. Is there any way to exclude certain pages when the search is done on a mobile device? Do search engines crawl pages once, or twice with a device-specific view? Could we set a noindex meta tag for a specific device class?
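
    For reference, the robots meta tag the asker mentions looks like this; note that it applies to the document as crawled, so a single responsive page cannot carry different values per device class:

        <meta name="robots" content="noindex">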

    Read the article

  • How to Automate Checking for Stolen Content?

    - by Hisoka
    So I know about tools like Copyscape and Google Alerts... great tools, but it's quite tedious for me to copy and paste a URL or phrase for every one of the pages on my sites. Is there any tool out there that monitors your website and emails or alerts you whenever someone has stolen content from your site? The only service I know of is CopySentry, and honestly it's too expensive for me, since I have thousands of pages I want to monitor... Does anyone else have this problem, or is it just me? Thanks for any help.

    Read the article

  • Google Analytics Request URI to Event advanced filter

    - by confidentjohn
    I have a query string attached to a Request URI. While I can see this data in the Pages report, and it works, I was thinking about setting up an advanced filter to convert the Request URI to an Event, in the hope that this would clean up my Pages report and group this query with related Events in my data. I can see in advanced filters that this is possible, but it seems limited to specifying a single Event field, so Category, Action or Label, not all three. Does anyone know how I could set up an advanced filter that finds any URIs containing a specific query string, such as the example below, and converts them into an Event where I can set the Category, Action and Label?

        www.example.com?querystring=123
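
    For contrast, a sketch of how an Event is normally generated from the page itself rather than synthesized by a filter; the category, action and label values here are hypothetical:

        // Classic async tracker (ga.js): category, action, label
        _gaq.push(['_trackEvent', 'QueryString', 'visit', '123']);

        // Universal Analytics (analytics.js) equivalent
        ga('send', 'event', 'QueryString', 'visit', '123');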

    Read the article

  • Will Google follow HTML refresh?

    - by yasar11732
    I want to move my current Tumblr blog to a static HTML blog. Currently I am using a custom domain, and I am planning to do the move when Google sees the domain name change. I am considering two options: buying a hosting service, or using GitHub Pages. Buying a hosting service would probably mean paying for lots of things I don't need, like PHP, MySQL, e-mail service etc. On the other hand, if I use GitHub Pages, I can't use an .htaccess file to make 301 redirects, and I want to change my URL structure, which is important to me. I was wondering: if I use

        <meta http-equiv="refresh" content="0; url=http://example.com/newurl" />

    would Google see it as a 301 redirect, so that I won't lose my search-engine value?

    Read the article
