Search Results

Search found 9721 results on 389 pages for 'quicktest pro'.

Page 186/389 | < Previous Page | 182 183 184 185 186 187 188 189 190 191 192 193  | Next Page >

  • Getting link to abstract indexed in Google Scholar

    - by JordanReiter
    We have a large digital library with thousands of papers indexed in Google Scholar. We allow Google Scholar to index our PDFs, but they're blocked unless you have a subscription. So Google has full-text indexing/searching of our PDFs (great!), but the links then point straight to those PDFs (boo!) instead of the more helpful abstract pages. Does anyone know what could cause an issue like this? I am, to the best of my knowledge, following all of the guidelines laid out in their Inclusion Guidelines. Here's some example metadata:
      <meta name="citation_title" content="Sample Title"/>
      <meta name="citation_author" content="LastName, FirstName"/>
      <meta name="citation_publication_date" content="2012/06/26"/>
      <meta name="citation_volume" content="1"/>
      <meta name="citation_issue" content="1"/>
      <meta name="citation_firstpage" content="10"/>
      <meta name="citation_lastpage" content="20"/>
      <meta name="citation_conference_title" content="Name of the Conference"/>
      <meta name="citation_isbn" content="1-234567-89-X"/>
      <meta name="citation_pdf_url" content="http://www.example.org/p/1234/proceeding_1234.pdf"/>
      <meta name="citation_fulltext_html_url" content="http://www.example.org/f/1234/"/>
      <meta name="citation_abstract_html_url" content="http://www.example.org/p/1234/"/>
      <link rel="canonical" href="http://www.example.org/p/1234/" />
    example.org/p/1234 is the abstract page for the article; example.org/f/1234 is the full-text link accessible to subscribers only (and to Google Scholar); example.org/p/1234/proceeding_1234.pdf is the full-text PDF link.

    Read the article

  • What is a light-weight "slideshow" script that could integrate w/ CMS?

    - by aslum
    I'm looking to reduce the footprint of my Strict HTML 4.01 front page. One possible way is to combine much of the "upcoming events" content into a single small box and have it automagically switch which event is displayed every few seconds. I'm sure plenty of scripts like this have already been written, and surely an open-source one exists, but I haven't had much luck finding one. I'd prefer plain JavaScript to jQuery, as installing jQuery might not be an option, but if the best-fit script requires jQuery I'd certainly be willing to investigate that route. If it can display content from WordPress, that would be ideal.

    Read the article
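
    A minimal sketch of the kind of plain-JavaScript rotator described in the question above; the #upcoming-events container, the .event markup and the 5-second interval are illustrative assumptions, not anything from the original post:

      <div id="upcoming-events">
        <div class="event">Event one</div>
        <div class="event">Event two</div>
        <div class="event">Event three</div>
      </div>
      <script type="text/javascript">
      // Rotate which event is visible every few seconds, no jQuery required.
      (function () {
        var box = document.getElementById('upcoming-events');
        var events = box.getElementsByTagName('div');
        var current = 0;
        function show(index) {
          for (var i = 0; i < events.length; i++) {
            events[i].style.display = (i === index) ? 'block' : 'none';
          }
        }
        show(current);
        setInterval(function () {
          current = (current + 1) % events.length;
          show(current);
        }, 5000); // switch every 5 seconds
      })();
      </script>

    The markup inside the box could just as easily be filled server-side from WordPress; the rotation logic does not care where the items come from.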

  • Subfolder non-www redirect

    - by Zealotry
    Is there a way to redirect a subfolder to non-www? What I use is:
      RewriteEngine On
      RewriteBase /
      RewriteCond %{HTTP_HOST} ^www\.(.*)$ [NC]
      RewriteRule ^(.*)$ http://%1/$1 [R=301,L]
    but this only redirects www.example.com to example.com. I would like to redirect www.example.com/home/ to example.com/home/, and any www.example.com/home/... URL to the matching example.com/home/... URL. I have tried this:
      RewriteEngine on
      Options +FollowSymlinks -MultiViews
      RewriteCond %{HTTP_HOST} ^(www\.)?example\.com$ [NC]
      RewriteRule ^$ http://example.com [R=301,L]
      RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
      RewriteCond %{REQUEST_URI} !^/home/ [NC]
      RewriteRule ^(.+)$ http://example.com/$1 [R=301,L]
    This does not work either. I can't really figure it out. Any help appreciated! ANSWER: I figured it out and will post it for others to see, in case they have the same issue. In the subfolder directory's .htaccess I used the following:
      RewriteCond %{HTTP_HOST} ^(www\.example\.com)?$
      RewriteRule ^(.*)$ http://example.com/subfoldername/$1 [R=301,L]

    Read the article

  • chmod 700 and htaccess deny from all enough?

    - by John Jenkins
    I would like to protect a public directory from public view. None of the files will ever be viewed online. I chmoded the directory to 700 and created an htaccess file that has "deny from all" inside it. Is this enough security or can a hacker still gain access to the files? I know some people will say that hackers can get into anything, but I just want to make sure that there isn't anything else I can do to make it harder to hack. Reply: I am asking if chmod 700 and deny from all is enough security alone to prevent hackers from getting my files. Thanks.

    Read the article
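
    For reference, a minimal sketch of the .htaccess denial the asker describes; the first two directives match the "deny from all" approach on Apache 2.2 (mod_authz_host), and the commented line is the Apache 2.4 equivalent, shown here only as an assumption about which Apache version is in use:

      # Apache 2.2 syntax, as described in the question
      Order deny,allow
      Deny from all

      # Apache 2.4 equivalent (mod_authz_core), for reference
      # Require all denied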

  • Convert Adsense earnings into Adwords

    - by dmytrivv
    Does anybody know how to convert AdSense earnings into AdWords credit? Basically, by placing AdSense banners on a website, a webmaster collects some "potential" (or, let's say, "active" value), but cashing it out automatically means money transfers plus taxes plus other routine work. So I'm wondering whether it can somehow be converted into AdWords "potential" and spent again on the website's needs. Theoretically, it's just an ad exchange. But the beauty of AdSense and AdWords is that both platforms have pretty solid client databases. Any guess how this could be solved? Please...

    Read the article

  • best/simplest way to inform search engine of sitemap location

    - by Don
    AFAIK, there are two ways to make search engines aware of a sitemap's location: (1) include an absolute link to it in robots.txt, or (2) submit it to them directly. The relevant URLs are:
      http://www.google.com/webmasters/tools/ping?sitemap=SITEMAP_URL
      http://www.bing.com/webmaster/ping.aspx?sitemap=SITEMAP_URL
    where SITEMAP_URL is the absolute URL of the sitemap. Currently, I do both. Regarding (2), I have a job that runs automatically every day and submits the sitemap to Bing and Google. I don't think there's any reason to do both (1) and (2), but I'm paranoid, so I do. I imagine you can avoid both (1) and (2) if you just make your sitemap accessible at a conventional URL (the way robots.txt is). What's the simplest and most reliable way to ensure that search engines can find your sitemap?

    Read the article
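
    A minimal sketch of option (1) from the question above, assuming the sitemap lives at the site root (the domain and path are placeholders):

      # robots.txt at http://www.example.com/robots.txt
      User-agent: *
      Disallow:

      # The Sitemap directive is independent of the User-agent groups
      Sitemap: http://www.example.com/sitemap.xml

    Option (2) then amounts to issuing a plain HTTP GET of the ping URLs listed in the question with the sitemap URL filled in, for example http://www.google.com/webmasters/tools/ping?sitemap=http://www.example.com/sitemap.xml.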

  • How To Track "Similar Product/Page" Links In Internal Site

    - by Petra Barus
    So I just created a new widget that shows up on a product page on my site. This widget shows several products similar to the product displayed on the current page. The purpose is to help users compare similar products. Let's say that on product page A, http://domain/products/A, the Similar Products widget shows:
      http://domain/products/B
      http://domain/products/C
      http://domain/products/D
      http://domain/products/E
    My question is: how do I track "Product B's page was visited X times from Product A's page via the Similar Products widget"? (There is also a chance that Product B will show up in the widget on Product C's page.) I have an idea involving the Event feature of Google Analytics, but I'm still not sure whether that is the common best practice for this.

    Read the article
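
    A minimal sketch of the Google Analytics Events idea mentioned in the question, using the classic ga.js _trackEvent call that appears elsewhere on this page; the category, action and label strings are assumptions, not an established convention:

      <!-- On product page A, each link rendered by the Similar Products widget -->
      <a href="http://domain/products/B"
         onclick="_gaq.push(['_trackEvent', 'Similar Products Widget', 'click',
                             'from:/products/A to:/products/B']);">
        Product B
      </a>

    With a label that encodes both the source and the destination page, the GA Events report can be broken down per product pair; that naming scheme is just one option.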

  • Free, specific Ip2Location Database

    - by Andresch Serj
    I am searching for a free DB (like a regularly updated XML or CSV file) that relates IP addresses to specific locations. I want more information than just the country: I want some sort of region or city reference, even if that ends up being a number that makes no sense to me. It doesn't have to be perfectly accurate or always up to date, either. It is just to distinguish between user groups, not to monitor or spy on them.

    Read the article

  • Problems with Developer [closed]

    - by Concerned Client
    I engaged a developer who is building a website for me. I am not happy with him and would like, once the website is ready, to transfer the duties of further development, SEO and web administration to another developer. What do I need to be aware of, and what information do I need in terms of passwords etc.? The website has been developed in WordPress and I have access to the CMS, but I am not technical, so I am not sure whether there are separate access levels for the more technical people. Thanks.

    Read the article

  • Tracking Outgoing Links With Google Analytics Events

    - by the_archer
    I've been trying to track clicks on external links on my website using the event tracking method. So I've got my Google Analytics code set up before the body ends, as shown below (note: the quotes had been HTML-entity-encoded by Blogger, but the script works fine; they are shown decoded here for readability):
      <script type='text/javascript'>
      var _gaq = _gaq || [];
      _gaq.push(['_setAccount', 'UA-XXXXXXX-X']);
      _gaq.push(['_trackPageview']);
      (function() {
        var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
        ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
        var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
      })();
      </script>
    Now I wanted to track a link on the addthis.com follow widget. So there is a link of the type below, to which, following the instructions from here, I added the onclick event:
      <a addthis:url='http://feeds.feedburner.com/myfeedburnerlurl' onClick="_gaq.push(['_trackEvent', 'Subscription Clicks', 'RSS']);" class='addthis_button_rss_follow'/>
    I clicked on it a couple of times and have left it for over a day now, but nothing shows up in the Google Analytics events; it just says zero events. Here's a screenshot of the events page in GA. Could anybody help me? Am I doing anything wrong?

    Read the article

  • Importance of frequency of keywords for SEO [closed]

    - by Calle
    Possible duplicate: keyword stuffing in SEO. I've seen a few posts on here and elsewhere that talk about the number of unique keywords used, or aimed for, in a text. But how important is the frequency of keywords for SEO-optimized copy? Is higher always better? Is there a certain percentage to aim for? For example, I currently have copy aimed at a specific keyword, and this keyword makes up 2% of all words. Is this too low? Too high?

    Read the article

  • Correct microdata and/or microformats for real estate listings?

    - by Ernests Karlsons
    Given I am running a real estate rentals listing website, what would be the correct microdata or microformats for the listing pages? There is the usual data: address, photos, price, start date, possible end date, person who is renting it out, list of amenities, description etc. Are there also microformats/microdata that can be used in the listing summary page (e.g., page that displays all listings in a particular city)?

    Read the article
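
    One possible sketch for the listing detail page, using schema.org microdata; the specific types and property names here (Offer, itemOffered, SingleFamilyResidence, PostalAddress) are assumptions to verify against the current schema.org vocabulary rather than a definitive answer to the question:

      <div itemscope itemtype="http://schema.org/Offer">
        <span itemprop="price">1200</span>
        <meta itemprop="priceCurrency" content="GBP" />
        <meta itemprop="availabilityStarts" content="2013-09-01" />
        <div itemprop="itemOffered" itemscope
             itemtype="http://schema.org/SingleFamilyResidence">
          <span itemprop="name">Two-bedroom house</span>
          <img itemprop="image" src="/photos/123.jpg" alt="" />
          <div itemprop="address" itemscope
               itemtype="http://schema.org/PostalAddress">
            <span itemprop="streetAddress">1 Example Street</span>,
            <span itemprop="addressLocality">Exampleville</span>
          </div>
          <span itemprop="description">Bright, recently renovated...</span>
        </div>
      </div>

    On a city summary page the same markup could in principle be repeated once per listing, with each block linking through to the detail page that carries the full data.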

  • Legal responsibility of public posts

    - by Murdock
    Given a public site with no logins: I let people post links to public Facebook profiles, and my site fetches the profile picture and displays it. Would it be OK if I just told people to post only profiles for which they have the owner's permission? Does such a statement exonerate me from copyright infringement and place the burden on the user? Edit, for bonus points: can the statement just be a notice under the button (the one that saves the link) saying "By clicking this button you agree to the terms and conditions", with maybe a link to the terms and conditions?

    Read the article

  • Homepage issue on Google [closed]

    - by nico
    We have recently updated our website www.blinds4uk.co.uk with a new homepage containing additional features and more on-page content, but since then we have lost primary keyword positions and the homepage has disappeared completely. The only time it appears is for an exact search for 'blinds4uk'. Today I took snippets of unique content from the homepage and put them into Google search, but our homepage was nowhere to be found. When I did the same in Yahoo, the homepage came up. Are we missing something? We ranked in the top 4 for the primary keyword terms 'blinds uk' and 'uk blinds', but now we don't show anywhere for these terms. Our homepage has never ranked well for the primary keyword 'blinds', yet our internal pages rank very well, with many pages on page 1 of Google UK. We employed an SEO firm for 9 months to help us establish the issues with the homepage, but they never could, so we got rid of them. We have been trying to get to the root cause of why the homepage ranks so poorly for a number of years, and only yesterday we established that we had the meta tag directly below the tag and our title and meta description further down the page; we have corrected this today. I'm not sure what effect this would have on the way Google reads the homepage, but we are trying everything to get the homepage ranking for those primary keywords. Our current developers and ex-SEO guys are all part of the same company and cannot pinpoint anything, other than telling us to carry on with their SEO team because it will take time, which just comes across as a milking exercise. Another thing I have found very strange is the data from our traffic audience: we are a UK-based website, yet our traffic stats were showing UK 36.6%, Denmark 35.8% and India 27.6%, which doesn't make sense to me! Is there anybody out there who could simply point us in the right direction to the problem(s), so we can fix them once and for all? Could there be anything within the code that is causing the homepage not to display within Google for our primary keyword terms such as blinds, window blinds etc.? I would appreciate any advice at all that may help us in our quest to sort this homepage issue once and for all.

    Read the article

  • Magento Default Sitemap.xml

    - by chipShot
    Is the default Magento sitemap.xml optimized as-is for e-commerce products? I'm thinking about adding image links as well. Is it worth investing time in this for SEO gains?
      <url>
        <loc>http://demo.com/product.html</loc>
        <lastmod>2011-08-03</lastmod>
        <changefreq>always</changefreq>
        <priority>1.0</priority>
      </url>

    Read the article
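
    A sketch of what the entry above could look like with image links added via Google's image sitemap extension; the namespace and elements follow that published format, while the image URL and title are placeholders rather than actual Magento output:

      <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
              xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
        <url>
          <loc>http://demo.com/product.html</loc>
          <lastmod>2011-08-03</lastmod>
          <changefreq>always</changefreq>
          <priority>1.0</priority>
          <image:image>
            <image:loc>http://demo.com/media/catalog/product/product.jpg</image:loc>
            <image:title>Product name</image:title>
          </image:image>
        </url>
      </urlset>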

  • Using mod_speling with multi-level htaccess and rewriterules

    - by michaelcgorman
    We recently switched formats for managing our 301s. For the most part, everything went well, but it seems to have stopped mod_speling from working properly. Here's what we changed:
    old /var/www/html/.htaccess:
      RewriteEngine on
      RewriteBase /
      # Change SHTML to HTML
      RewriteRule ^(.*)\.shtml$ $1.html [R=permanent,L]
      # Change PCF to HTML ('cause, you know, we probably have CMS users like that...)
      RewriteRule ^(.*)\.pcf$ $1.html [R=permanent,L]
      # Force WWW subdomain for all requests
      RewriteCond %{HTTP_HOST} !^www.example.edu$ [NC]
      RewriteRule ^(.*)$ http://www.example.edu/$1 [R,L]
      # User accounts are on sun.example.edu
      RedirectMatch ^/~(.*)$ http://sun.example.edu/~$1
      # Remove index.html at the end of URLs
      RewriteCond %{REQUEST_URI} ^(.*/)index\.html$ [NC]
      RewriteRule . %1 [R=301,NE,L]
      Redirect 301 /academics/calendar2012-13.html http://www.example.edu/academics/calendar.html
      Redirect 301 /academics/departments/ http://www.example.edu/majors/
      Redirect 301 /academics/Pre-Medical.pdf http://www.example.edu/academics/Pre-Medicine.pdf
      Redirect 301 ...
    new /var/www/html/.htaccess:
      RewriteEngine on
      RewriteBase /
      # Change SHTML to HTML
      RewriteRule ^(.*)\.shtml$ $1.html [R=permanent,L]
      # Change PCF to HTML ('cause, you know, we probably have CMS users like that...)
      RewriteRule ^(.*)\.pcf$ $1.html [R=permanent,L]
      # Force WWW subdomain for all requests
      RewriteCond %{HTTP_HOST} !^www.example.edu$ [NC]
      RewriteRule ^(.*)$ http://www.example.edu/$1 [R,L]
      # User accounts are on sun.example.edu
      RedirectMatch ^/~(.*)$ http://sun.example.edu/~$1
      # Remove index.html at the end of URLs
      RewriteCond %{REQUEST_URI} ^(.*/)index\.html$ [NC]
      RewriteRule . %1 [R=301,NE,L]
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule ^(.*) 404/$1
    And then we added a new file at /var/www/html/404/.htaccess:
      RewriteEngine on
      RewriteBase /404
      RewriteRule ^academics/calendar2012-13.html$ /academics/calendar.html [R=302,L]
      RewriteRule ^academics/departments/$ /majors/ [R=301,L]
      RewriteRule ^academics/Pre-Medical.pdf$ /academics/Pre-Medicine.pdf [R=301,L]
      RewriteRule ...
    I do have (Webmin-based) access to the httpd.conf (though we don't want to store all our 301s there, if possible). We're running Apache 2.2.15 on RHEL 6 on a server in our own data center. Like I said, the only problem we're seeing is that mod_speling isn't doing its magic anymore. The new format has so many advantages over the old that we really don't want to go back, but mod_speling is so nice to have that we'd also really like it to work if possible. Any ideas for how we might be able to fix mod_speling?

    Read the article

  • Chrome refused to execute this JavaScript file

    - by TestSubject528491
    In the head of my HTML page, I have: <script src="https://raw.github.com/cloudhead/less.js/master/dist/less-1.3.3.js"></script> When I load the page in my browser (Google Chrome v 27.0.1453.116) and enable the developer tools, it says: Refused to execute script from 'https://raw.github.com/cloudhead/less.js/master/dist/less-1.3.3.js' because its MIME type ('text/plain') is not executable, and strict MIME type checking is enabled. Indeed, the script won't run. Why does Chrome think this is a plain text file? It clearly has a .js file extension. Since I'm using HTML5, I omitted the type attribute, so I thought that might be causing the problem. So I added type="text/javascript" to the <script> tag, and got the same result. I even tried type="application/javascript" and still got the same error. Then I tried changing it to type="text/plain" just out of curiosity. The browser did not return an error, but of course the JavaScript did not run either. Finally I thought the periods in the filename might be throwing the browser off. So in my HTML code, I changed all the periods to the URL escape character %2E: <script src="https://raw.github.com/cloudhead/less%2Ejs/master/dist/less-1%2E3%2E3.js"></script> This still did not work. The only thing that truly works (i.e. the browser does not give an error and the JS successfully runs) is if I download the file, upload it to a local directory, and then change the src value to the local file. I'd rather not do this since I'm trying to save space on my own website. How do I get Chrome to recognize that the linked file is actually a JavaScript type?

    Read the article

  • Rendering citations and references in HTML using PHP/Perl/Python/

    - by Nick
    Is there a PHP/Perl/Python/... library for picking citations out of an HTML file and rendering a nice list of references at the bottom, like in Wikipedia? I'm developing a website with heavily-sourced content, and I'd really like to have automatically-generated lists of formatted references, like in Wikipedia. (Check out their philosophy page, and see how the superscript numbered citations interact with the references at the bottom. This is all dynamically generated, automatically ordered & linked.) They do it really well: the citations are linked to the references (which are backlinked to the citations), when you click on one of the links, the target is highlighted, etc. I'm tempted to build the site on MediaWiki just for this one feature, but it seems like overkill. Do I have any options?

    Read the article

  • Pixels - A cry for some insight

    - by CarrotFile
    I'm pretty new to web development and I'd love some clarification. Although I've read more than one book on the topic, I cannot seem to wrap my head around the pixel concept. I run into problems when trying to use CSS and pixel units for designs that fit different screen sizes. To my understanding, a pixel is the most basic unit used by a monitor to compose an image on the screen. So if my resolution is 800 by 600, everything on my screen is rendered using those 800*600 basic building blocks. If I were to increase my screen resolution, three things would occur: (A) the basic image building block (the pixel) would shrink in size; (B) the pixels would move closer together; (C) more pixels would now be available. All of these combined lead to a sharper (depending on the viewing distance) and more detailed image. Well, so far so good. Here is where I start getting lost: to my knowledge, a pixel is not a physical, real object; monitors are not embedded with a few thousand pixels. I am drawn to this conclusion because anyone can change their screen's resolution, making a pixel on the screen bigger or smaller, and adding or subtracting the total number of pixels on screen. Adding to that, I have heard that different monitors have different pixel densities, for example Apple's Retina displays. Taking all of the above as my knowledge base, these are my questions: If a pixel has no real-world constant size, why does comparing different pixel densities matter? Each screen company could define its own pixel concept and declare the higher density. What does a bigger pixel density mean? Say we take two screens with the same physical dimensions but different pixel densities: am I to assert that the main difference is the higher-density screen being able to display a higher maximum resolution? Or am I to assert that, given the same resolution on both monitors, the higher-density one would display a sharper, smaller image? If a pixel is not a fixed size within one monitor, is it a fixed size between the same resolution on two different monitors? For example, would two different monitors, set to the same resolution, be composed of pixels of the same size and quantity? I'd love some help (:

    Read the article

  • What is the best way to have the same website in multiple domains?

    - by Daniel Magliola
    I would like to have the same website, selling a specific product, on multiple domains, to take advantage of keywords matching the domain name for several different searches. However, I understand that having the same content on multiple sites will unleash the wrath of Google. If I have a redirect from all domains but one to that last one, do I still get any bonus from the "magic exact domain match jackpot"? The same question applies to canonical URLs. What's the best way to approach this? Thanks!

    Read the article
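
    A minimal sketch of the redirect-all-but-one setup the question describes, as an .htaccess rule served for the secondary domains plus a canonical hint on the main one; the domain names and page path are placeholders:

      # .htaccess handling requests for the secondary domains
      RewriteEngine On
      RewriteCond %{HTTP_HOST} !^www\.main-domain\.com$ [NC]
      RewriteRule ^(.*)$ http://www.main-domain.com/$1 [R=301,L]

      <!-- on each page of the main domain -->
      <link rel="canonical" href="http://www.main-domain.com/current-page.html" />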

  • How can I make sure my website will be available during a presentation?

    - by johnny_s
    I have an online presentation to do next week and I have it all ready to go. The website is HTML and CSS only (no DB), and currently resides on my shared hosting account. Now, although my shared hosting is (relatively) reliable, I have noticed that recently they have been making some changes and my website has been unavailable at times. I don't want this to happen to me on the morning of my presentation, so I am asking what is the best way to prepare for such a thing? My domain is www.presentation.mydomain.com and I would like to keep this if possible (even if issues arise). I have been thinking of a few alternatives: (1) host my site on two different domains or servers (but what about the domain name?); (2) have a portable XAMPP version on a USB stick (again, what about the domain name?); (3) a possible failover site/location. Update: the presentation will be carried out on their laptop, not mine, so I am unable to install any software.

    Read the article

  • How do I access column data in a previous select statement from a sub-query? [closed]

    - by payling
    PROBLEM: How do I access column data from a previous SELECT statement inside a sub-query? Below is a simple mock-up of what I'm attempting to do.
    Tables used: Quotes, Users
    QUOTES TABLE: qid (quote id), owner_uid, creator_uid
    SQL SYNTAX:
      SELECT q.qid, q.owner_uid, q.creator_uid, owner.fname, owner.lname
      FROM quotes q,
        (SELECT u.fname, u.lname
         FROM users u
         WHERE u.uid = q.owner_uid) AS owner
      WHERE q.qid = '#'
    SUMMARY: I want to be able to use the quotes table's owner_uid and specify it for the owner sub-query, so I can return all the owner info for that particular quote. The problem is that q.owner_uid is not recognized in the owner sub-query. What am I doing wrong?

    Read the article
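
    In most SQL dialects a derived table in the FROM clause cannot reference columns of other tables listed alongside it (at least without LATERAL support), which is why q.owner_uid is unknown inside the owner sub-query. A sketch of the same intent written as a plain join, reusing the table and column names from the question:

      SELECT q.qid,
             q.owner_uid,
             q.creator_uid,
             owner.fname,
             owner.lname
      FROM quotes q
      JOIN users AS owner
        ON owner.uid = q.owner_uid   -- join instead of a correlated derived table
      WHERE q.qid = '#';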

  • Will duplicate international (i18n) content hinder SEO rankings?

    - by Rhys
    Google clearly states that duplicate content within a single domain, or across multiple domains, is not advised. This is understood, but I am not sure of any exceptions for sites with region-specific content that is often replicated across locales. For example, a site's /en-us/about page could be identical to /en-uk/about, whereas /en-ja/about is most likely unique. Are GYM (Google, Yahoo, Microsoft) smart enough to understand that the first URL path segment is a locale specifier? Is there any robots.txt or header trickery, etc., that I should include to outline the site's international structure?

    Read the article
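
    One common way to outline the structure described above is rel="alternate" hreflang annotations in each page's head. A sketch with placeholder URLs; note the assumption that hreflang takes ISO language/region codes, so the UK variant is written en-GB even though the path is /en-uk/, and the /en-ja/ page is assumed to be Japanese-language content:

      <!-- in the <head> of /en-us/about, repeated (with itself included) on each variant -->
      <link rel="alternate" hreflang="en-US" href="http://www.example.com/en-us/about" />
      <link rel="alternate" hreflang="en-GB" href="http://www.example.com/en-uk/about" />
      <link rel="alternate" hreflang="ja" href="http://www.example.com/en-ja/about" />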

  • Run server side script

    - by ooo
    I'm in the process of deploying my first website, which is written in ASP.NET. I need to run a server-side script at set intervals during the day which updates a database, even if nobody is using the site. I was led to believe that using the Windows Task Scheduler would be the best option, but now that I've joined a hosting company the setup is not really how I was expecting. It's shared hosting with basic FTP and no apparent built-in task scheduler. The hosting company's support is not very good and hasn't been able to advise how I could do this, so I hoped to get help here on my options before I consider changing company. [The hosting company starts with 1 and ends with 1 :)]

    Read the article

  • Determining cause of random latency/loading issues

    - by Sherwin Flight
    I'm not sure exactly what details to post in regard to my issue, because I'm not sure what is relevant. Prior to the end of September my websites all loaded quickly, in almost all cases. Loading time wasn't usually more than a few seconds. However, since the end of September I have noticed a big increase in page loading times. In some cases pages were taking 30 seconds or more to load. I do have a remote monitoring service monitoring some of the sites as well, and the image below shows the response times over the past month. The response times shown at the beginning of this graph reflect the usual response times prior to this issue occurring. You can see that there has been a significant increase in response times from the beginning to the end of this graph. The thing is, the problem is not happening 100% of the time. If I click through the site, or even just keep refreshing the page, about 25% of the time the pages load quickly; the remaining 75% of the time they load slowly. Sometimes the pages take so long to load that they time out and don't load at all. I have contacted my hosting provider, and they said things at their end were fine. I don't believe the problem is my home internet provider, because all other websites load without a problem. The server is located in Texas, USA. This also raises another interesting point. My remote monitor checks my site from two locations, California, USA, and London, England. As you can see in the chart below, the response time is actually shorter when checked from London, which doesn't seem to make sense, since the server is physically closer to the California monitoring location. I would have expected the London monitoring location to have higher response times, since it is physically farther away. I should also point out that in some traceroute tests I've done, it seems like the first connection to the server takes the longest; after that the rest of the page loads quickly. Below is a little chart showing the times for the first connection to the server. So, what could be causing this problem, and what steps can I take to resolve it or at least narrow down the problem? Sending the request to the server was very quick, and receiving the reply back seems pretty quick, but the WAIT time is really long. So it connects, sends the request, but then waits close to 30 seconds before it starts receiving data back. I am also aware that there are things I can do to speed up page loading times, like reducing the number of CSS/JS files used on a page, compressing images, etc. That is not really the source of the problem, though, because nothing has really changed on the site since before the problem started, and other sites on the same server are loading slowly as well. Any help or advice is much appreciated.

    Read the article

< Previous Page | 182 183 184 185 186 187 188 189 190 191 192 193  | Next Page >