Search Results

Search found 9763 results on 391 pages for 'ys pro'.


  • What is the right approach to using AdSense with responsive web design?

    - by Sisir
    Recently I have been studying responsive design a lot and have designed a couple of sites. But I was wondering: how would I use Google AdSense (which is pixel-based) ads in a responsive design? A typical example: suppose I have a 728x90 ad in the header. Or, if I take a mobile-first approach, I would need different ad sizes for different viewports, but Google doesn't allow more than three ad units per page (as far as I know). So, question: what is the right approach/best practice for using Google AdSense on a responsive site design?
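
    One common pattern from this era, sketched below: pick the ad size in JavaScript based on viewport width before the AdSense code loads, so each breakpoint gets a fixed-size unit that fits. The publisher ID, slot IDs, and breakpoints here are all placeholders, and it is worth confirming against current AdSense policy that selecting a slot dynamically this way is allowed for your account.

        <script type="text/javascript">
          /* Placeholder publisher ID -- replace with your own. */
          google_ad_client = "ca-pub-0000000000000000";
          var width = document.documentElement.clientWidth;
          if (width >= 800) {
            /* Desktop: 728x90 leaderboard (placeholder slot ID). */
            google_ad_slot = "1111111111";
            google_ad_width = 728;
            google_ad_height = 90;
          } else if (width >= 500) {
            /* Tablet: 468x60 banner (placeholder slot ID). */
            google_ad_slot = "2222222222";
            google_ad_width = 468;
            google_ad_height = 60;
          } else {
            /* Phone: 320x50 mobile banner (placeholder slot ID). */
            google_ad_slot = "3333333333";
            google_ad_width = 320;
            google_ad_height = 50;
          }
        </script>
        <script type="text/javascript"
          src="http://pagead2.googlesyndication.com/pagead/show_ads.js"></script>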

    Read the article

  • What is a good solution for UA testing multiple projects simultaneously?

    - by Eric Belair
    My client often has several projects/tasks going at once that sometimes need to be tested simultaneously on one website. They are often separate applications on the website, but sometimes share UDFs, etc. We currently have three public-facing environment websites - i.e. dev.website.com, test.website.com, and www.website.com. As the programmer, I'm trying to find a good solution that allows UA testing of multiple projects/tasks at once. Currently, I find myself switching between code branches (using Subversion). What are some of my options?

    Read the article

  • How to handle new domain names?

    - by michael
    I have a new product which I'll call a pen ink reloader. I have a website using my product's name, for example www.inkywink.com, which I want to be found by searches for keywords such as "pen ink", "pen out of ink", "ink for pens", etc., since nobody knows that a pen ink reloader exists. I see that it's quite difficult to get on the front page for these keywords since they have lots of competition. However, I notice that the exact phrases I want to rank highly for are available as domains. I purchase "www.penink.com" and "penoutofink.com", which for argument's sake are highly searched and the perfect keywords to get eyes on my money site, www.inkywink.com. Two questions: 1. What is my best option to leverage those names so that they appear near the top of searches and send traffic to my money site? Do I just 301 redirect them to inkywink.com, or should I create small original content on each with links to my main site? 2. If I just have them redirected to inkywink.com, am I able to use keywords in the meta tags and headers for each site separately, or do they all automatically obtain the same headers and tags as the site to which they're redirected? Thanks to anyone who can help, as I'm a real newbie to all this.
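
    A minimal .htaccess sketch for the 301 option, assuming Apache with mod_rewrite enabled and using the domain names from the question:

        # Placed in the document root that penink.com / penoutofink.com point to.
        RewriteEngine On
        # Match either keyword domain, with or without www.
        RewriteCond %{HTTP_HOST} ^(www\.)?(penink|penoutofink)\.com$ [NC]
        # Permanently redirect every path to the money site.
        RewriteRule ^(.*)$ http://www.inkywink.com/$1 [R=301,L]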

    Read the article

  • Can someone sue me/take my domain?

    - by qwerty
    I have found a great domain that isn't in use, but the .com and .net versions are already taken. There's nothing on those domains, though; they just say they are registered with Network Solutions and are under construction. My question is: if I buy the .org version of the domain, and the .com owners later start a company on their domain, can they sue me or make me change the name because it is too similar to their .com domain? Should I avoid using domains that have already been registered under a different ending?

    Read the article

  • Linking with an image: Background vs <img>

    - by FreshCode
    What is considered best practice (semantically) when using text with an image to link to an internal page or category?

    Option 1:

        <nav>
          <a href="/kittens">
            <img src="kittens.png" />
            <span>Kittens</span>
          </a>
          <a href="/puppies">
            <img src="puppies.png" />
            <span>Puppies</span>
          </a>
        </nav>

    Option 2:

        <nav>
          <a href="/kittens" class="kittens">Kittens</a>
          <a href="/puppies" class="puppies">Puppies</a>
        </nav>

    where the CSS is defined as:

        a.kittens { background-image: url("kittens.png"); width: 40px; height: 60px; }
        a.puppies { background-image: url("puppies.png"); width: 40px; height: 60px; }

    Should I use a styled background for the link, or an <img> inside the anchor element?
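
    For the background-image option, a common refinement (a sketch, not part of the original question) is to keep the link text in the markup for screen readers and crawlers but hide it visually:

        a.kittens {
          display: inline-block;
          width: 40px;
          height: 60px;
          background-image: url("kittens.png");
          /* Hide the text visually while keeping it in the DOM. */
          text-indent: -9999px;
          overflow: hidden;
        }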

    Read the article

  • Ubercart for 'serious' e-commerce?

    - by user793011
    I'm a big Drupal fan, and some very high-profile sites use Drupal, which suggests that it's a good system. Can the same be said for Ubercart? I've used it for some small e-commerce sites, purely because I know Drupal. It seems to lack some basic features and also relies on JavaScript. I don't know of any high-profile e-commerce sites using it. My productivity would take a big hit if I had to learn a new e-commerce platform, and also if I couldn't use my favorite CMS with it, but do I have a choice? If not, I guess I can hope that Drupal 7's Commerce module will be an improvement. Thanks

    Read the article

  • Setting up a Google Analytics Campaign

    - by Ashfame
    I will be doing a bunch of things to give one of my projects (the main app) a big initial push, for which I will be building a few small Facebook apps to help promote it. Traffic from these apps needs to be tracked individually. My main app will post on users' walls when they need to be notified; traffic from these posts needs to be tracked. Traffic from emails sent by the main app also needs to be tracked, broken down by type of email. I need to track all of these (and possibly a couple more), and I need to be sure that I build my campaign URLs correctly, as I won't get another chance to fix them. Correct me where I am wrong. For emails: Campaign Name: Launch; Campaign Medium: Email; Campaign Source: Type1 or Type2 (I can break it down for different types of email, right?). For apps: Campaign Name: Launch; Campaign Medium: Apps; Campaign Source: App1 or App2 (I can break it down here for different apps, right?). What if I want to track two different links within a single email or a single app? Is there any way to track them individually while still counting them as one, since tracking them as one makes more sense for me? Are Campaign Term and Campaign Content irrelevant in my case, or can/should I use them for something? I will also be tracking traffic of the different apps. Should I do more? Let me know if my scenario wasn't clear enough and I need to explain more.
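
    For reference, the campaign fields map onto URL parameters, and utm_content is the standard way to distinguish two links that belong to the same campaign/medium/source. A sketch with placeholder URLs:

        http://www.example.com/landing?utm_campaign=launch&utm_medium=email&utm_source=type1&utm_content=top-link
        http://www.example.com/landing?utm_campaign=launch&utm_medium=email&utm_source=type1&utm_content=bottom-link

    Reports can then be aggregated at the campaign/source level while the two links remain distinguishable by content.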

    Read the article

  • Fusion News or CuteNews

    - by NEWprogrammer
    I am looking at using CuteNews or Fusion News. Neither has the specific feature I want, which is: when a new article is posted, email it to all my subscribers. I would like to know which one is more customizable for doing this, or whether another application can do it. I didn't want to use WordPress because I want to customize the layout and page, but if that's the only option then I may stick with WordPress.
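
    For a sense of how small the customization could be, here is a minimal PHP sketch of a notification hook. The subscriber file, sender address, and the place you would call it from are all hypothetical:

        <?php
        // Hypothetical hook: call this right after a new article is saved.
        // Assumes a plain-text file of subscriber addresses, one per line.
        function notify_subscribers($title, $url)
        {
            $subscribers = file('subscribers.txt',
                FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
            $subject = 'New article: ' . $title;
            $body    = "A new article has been posted:\n" . $title . "\n" . $url . "\n";
            $headers = 'From: news@example.com';
            foreach ($subscribers as $email) {
                // PHP's built-in mail(); a real site might batch or queue these.
                mail($email, $subject, $body, $headers);
            }
        }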

    Read the article

  • How to identify whether the client is a search robot?

    - by Yau Leung
    I have built my entire site using AJAX (indeed, it's GWT). I have also implemented the AJAX crawling scheme proposed by Google. However, after the implementation, I found that neither Yahoo, Bing, nor Baidu has implemented that scheme! I'm wondering if there is a way to identify whether the web client is a search robot; if it is, it will be shown the HTML snapshot I created. It would be best if I could identify them at the Apache level, since then I can just do a mod_rewrite, but it's still OK if I can do it in PHP or GWT.
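
    At the Apache level, a mod_rewrite sketch along these lines is the usual shape; the user-agent tokens are the common crawler names, and the /snapshots/ path is a placeholder for wherever the HTML snapshots live:

        RewriteEngine On
        # Match the best-known crawler user agents (extend the list as needed).
        RewriteCond %{HTTP_USER_AGENT} (googlebot|bingbot|slurp|baiduspider) [NC]
        # Serve the pre-rendered snapshot instead of the AJAX page.
        RewriteRule ^(.*)$ /snapshots/$1 [L]

    One caution: serving crawlers different markup than users can be treated as cloaking; Google's scheme uses the _escaped_fragment_ convention precisely so that the snapshot is requested explicitly rather than inferred from the user agent.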

    Read the article

  • GAPI output doesn't match Google Analytics website

    - by Yekver
    I need to get the main info about my Google Analytics goals. I'm using the GAPI library, with this code:

        <?php
        require_once 'conf.inc';
        require_once 'gapi.class.php';

        $ga = new gapi(ga_email, ga_password);

        $dimensions = array('pagePath', 'hostname');
        $metrics = array('goalCompletionsAll', 'goalConversionRateAll', 'goalValueAll');

        $ga->requestReportData(ga_profile_id, $dimensions, $metrics,
            '-goalCompletionsAll', '', '2012-09-07', '2012-10-07', 1, 500);

        $gaResults = $ga->getResults();
        foreach ($gaResults as $result) {
            var_dump($result);
        }

    This code's output is:

        object(gapiReportEntry)[7]
          private 'metrics' =>
            array (size=3)
              'goalCompletionsAll' => int 12031
              'goalConversionRateAll' => float 206.93154454764
              'goalValueAll' => float 0
          private 'dimensions' =>
            array (size=2)
              'pagePath' => string '/catalogs.php' (length=13)
              'hostname' => string 'www.example.com' (length=13)
        object(gapiReportEntry)[6]
          private 'metrics' =>
            array (size=3)
              'goalCompletionsAll' => int 9744
              'goalConversionRateAll' => float 661.05834464043
              'goalValueAll' => float 0
          private 'dimensions' =>
            array (size=2)
              'pagePath' => string '/price.php' (length=10)
              'hostname' => string 'www.example.com' (length=13)

    What I see on the Google Analytics website, on the Goal URLs page for the same date range, is:

        Goal Completion Location    Goal Completions    Goal Value
        1. /price.php               9,396               $0.00
        2. /saloni.php              3,739               $0.00

    As you can see, the outputs don't match. Why? What's wrong?

    Read the article

  • Google Analytics - Traffic Source - Search engine - (Not Provided)

    - by Dharmavir
    I am using Google Analytics. When I go to the "Traffic Sources" overview, it shows the keyword "(not provided)" for almost 40% of my traffic. More than 90% of my search engine traffic is from Google, and still more than 40% of the keywords are "(not provided)". Can anyone explain what is going on here, or how I can get that data? It comes first and is the biggest keyword in the list. Could it be some crawler, or secure Google search?

    Read the article

  • SEO and multiple domains to same site

    - by mwb
    I have one website and two domain names that I want to point to the same site install, so whether you go to name-one.com or name-two.com you see the exact same site. I can either set up name-two.com to serve a 301 redirect header pointing to name-one.com, or set up name-two.com as a CNAME in the DNS pointing to name-one.com. What are the implications of each for SEO? What is recommended? I would guess it's better for branding to use a 301 redirect, so that visitors see one consistent URL for my site, right? The reason I want the two domains is that I want a version with regional letters ('ö' instead of 'oe') in the name.
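
    One point worth making explicit: a CNAME only aliases the DNS lookup; both names still resolve to the same server, which answers both with 200s, so search engines see two URLs with identical content. The 301 collapses them to one canonical name. A sketch of the redirect, assuming Apache and the placeholder names from the question:

        RewriteEngine On
        # Any request arriving under name-two.com...
        RewriteCond %{HTTP_HOST} ^(www\.)?name-two\.com$ [NC]
        # ...is permanently redirected to the canonical name-one.com.
        RewriteRule ^(.*)$ http://name-one.com/$1 [R=301,L]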

    Read the article

  • jQuery/AJAX on old Computers/Browsers

    - by Andresch Serj
    I am working on a platform that will have a lot of users in so-called developing countries, so many of them will be using old computers and old browsers in tiny internet cafes. We want to make sure to give them a good user experience and make the website load as fast as possible. The problem is that while you can save a lot of requests and time using jQuery/AJAX, it also brings along problems: will the computers be powerful enough to deal with the client-side scripts? Will the old browsers handle jQuery? Does anyone have any experience with this sort of problem, or know of an article on the topic?

    Read the article

  • Multiple domain links on Google from one WordPress site

    - by user557318
    At present, when I google the domain name of a WordPress site I have worked on, I receive at least three listings, often the top three. The first listing is the only one I am interested in seeing; the others are individual pages from the same WordPress site, i.e. 1st hit: www.domain.com, 2nd hit: www.domain.com/about, 3rd hit: www.domain.com/designers. Does anybody know if it's possible to remove all the links except the absolute www.domain.com?

    Read the article

  • Is it possible to mod_rewrite based on the existence of a file/directory and a unique ID?

    - by JM4
    My site currently forces all non-www pages to use www, and I am able to handle all unique subdomains and parse them correctly. Now I am trying to achieve the following (ideally with mod_rewrite): when a consumer visits www.site.com/john4, the server processes that request as www.site.com/index.php?Agent=john4. Our requirements are: The URL should continue to show www.site.com/john4 even though it was rewritten to www.site.com/index.php?Agent=john4. If a file (of any extension) or a directory exists with that name, the entire process stops and the server serves that instead: for example, www.site.com/file would pull up www.site.com/file.php if file.php existed on the server, and www.site.com/pages would go to www.site.com/pages/index.php if the pages directory exists. Thank you ahead of time. I am completely at a loss right now.
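
    A mod_rewrite sketch that checks for real files and directories before falling back to the agent rewrite; the agent-name character class is an assumption about what those names look like:

        RewriteEngine On
        # If "file.php" exists for a request to "/file", serve it.
        RewriteCond %{REQUEST_FILENAME}.php -f
        RewriteRule ^(.+)$ $1.php [L]
        # Existing files and directories are left alone (Apache serves a
        # directory's index.php via DirectoryIndex as usual).
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        # Anything else is treated internally as an agent name; the browser
        # keeps showing /john4 because there is no [R] flag.
        RewriteRule ^([A-Za-z0-9_-]+)/?$ index.php?Agent=$1 [L,QSA]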

    Read the article

  • How do I redirect a FQDN to an internal URL?

    - by Dave
    We have internal DNS servers where we've registered an FQDN that resolves internally: identityreg.domain.com. We also have an existing web page at https://iamserver.domain.com/product/default.asp?Workflow=process1. We need our users to be redirected to that existing URL whenever they type identityreg.domain.com. We're using IIS for the web server. I'm a newbie here, so forgive any misuse of terms. How do I get the FQDN to redirect to the URL?
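
    With IIS 7 or later, one option (a sketch, assuming identityreg.domain.com is bound to its own site in IIS) is an httpRedirect in that site's web.config:

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <!-- Send every request for this site to the existing page. -->
            <httpRedirect enabled="true"
                destination="https://iamserver.domain.com/product/default.asp?Workflow=process1"
                httpResponseStatus="Found" />
          </system.webServer>
        </configuration>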

    Read the article

  • Where to buy a domain for my local server [closed]

    - by Pradyut Bhattacharya
    I have made a website and hosted it on my local computer using a static IP. Where can I buy a domain name such as www.something.com and point it at my static IP, so that a page like http://localhost/index.jsp can be accessed as http://www.something.com/index.jsp? Does it matter that I run the server locally, or should I buy a managed web hosting server from a big company even though traffic on my site is low?

    Read the article

  • Where is the best place to find stock website templates?

    - by Billy ONeal
    I think I'm in the majority of programmers in saying I can't do visual design for s***. But I do write programs occasionally, and I'd like to have a nice website to tell people about said programs. I used to use a site called OSWD to find templates, but it's been forever since it was last updated, and most of its designs seem overly tailored to a single kind of site - for example, a design featuring a large picture of an ice cube wouldn't make much sense for a site offering software for people to use. I know there are plenty of template sites out there with freely available designs, but I'm not sure which ones are good and which are garbage. Where is the best place to find website templates?

    Read the article

  • Search Engine Query Word Order

    - by EoghanM
    I have pages with titles like 'Alpha with Beta'. For every such page, there is an inverse page, 'Beta with Alpha', and both pages link to each other. When someone on Google searches for 'Beta with Alpha', I'd like them to land on the correct page, but sometimes 'Alpha with Beta' ranks higher (or vice versa). I was thinking of inspecting the referral link when a visitor arrives on my site and silently redirecting them to the correct page based on what they actually searched for. I'm just wondering if this could be penalized by Google as cloaking/sneaky redirects. Or is there a better way to ensure that the correct page on my site ranks higher for the matching query?

    Read the article

  • SMS ad service for a PHP app

    - by BandonRandon
    I am about to launch a website that alerts end users as their bus approaches the stop. The only problem is that this is a little expensive, as I'm paying about 5 cents per alert. I was thinking that if I added a short message, something like "brought to you by acme co", at the end of each alert, I might be able to recoup costs without making my service a paid one. Does anyone know of any SMS ad companies, or how one would go about finding one? Google has failed me; I'm probably searching for the wrong thing.

    Read the article

  • core.* files eating up server space (~50MB)

    - by skytreader
    I'm renting server space from someone and, upon logging into my control panel after quite some time, noticed an abnormal spike (~50MB) in disk usage. Upon investigating, I found a lot of core.* files scattered around my public_html directory. Each one is more than 5MB in size but no more than 6MB. The * part is all numbers (in regex terms, core\.\d+). I downloaded one and checked the contents. There were a lot of balderdash characters (NUL mostly, but also a scattering of ETB, ETX, STX), but there's this block of readable text which says:

        This text is part of the internal format of your mail folder, and is not
        a real message. It is created automatically by the mail system software.
        If deleted, important folder data will be lost, and it will be re-created
        with the data reset to initial values.

    Pretty self-explanatory. A few blocks above that text are some more readable messages that look like logs, sandwiched between non-printable characters. I've extracted some below:

        Scan not valid for mh mailboxes
        Bogus character 0x%x in news state
        Can't rewrite news state %.80s
        Error closing backup news state %.80s
        No state for newsgroup %.80s found

    Now, a few concerns: Am I under attack? The messages seem to be about my webmail, but I don't use my personal webmail that much - only for a vanity email address and as an inbox for an outdated comments system. However, lately I seem to notice a spike in spam to my vanity address. (Note: the comments system is covered by a captcha, but every now and then some spam gets through. My vanity email has a spam filter, but it isn't as good as I'd like.) Next, if this is a feature, can I turn it off? Is it advisable to? I have only 150MB, so you see why I'm fretting over a 50MB spike. Some final details: my only server-side scripts are in PHP. The directory that accumulated the most core files is the one containing the WordPress-managed subdomain of my site. I manage my server through cPanel. Lastly, I decided to delete these files, and after some checking nothing seems amiss on my websites or in my mail. They were indeed responsible for the ~50MB spike, as my disk space usage is back to expected levels.

    Read the article

  • What are some potential issues in blocking all incoming requests from the Amazon cloud?

    - by ElHaix
    Recently I, along with the rest of the world, have seen a significant increase in what appears to be scraping from Amazon AWS-related sources. So, simply put, I blocked all incoming requests from the Amazon cloud to our hosted application. I know that some good services/bots are now hosted on the cloud, and I'm wondering if certain IP addresses should be allowed through, as they may gather data that would ultimately benefit our site's SEO rankings. -- UPDATE -- I added a feature to block requests from the following hosts: Amazon, SoftLayer, ServerDeals, GigAvenue. Since then, I have seen my network traffic decrease (monitored by network-out bytes). Average operation is around 10,000,000 bytes. You can see where last week I was not blocking, then started blocking. I've since removed the blocks and will see what the outcome is.
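
    For anyone doing the same, the block itself can be as simple as denying published CIDR ranges; the range below is an example only, since AWS's ranges change and are published by Amazon:

        # Apache 2.4 syntax; example range only -- check AWS's current list.
        <RequireAll>
            Require all granted
            Require not ip 54.240.0.0/12
        </RequireAll>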

    Read the article

  • GateIn + OpenAM 9.5.2

    - by user6596
    I'm currently trying GateIn for my firm, and I can't manage to integrate OpenAM and GateIn. I followed all the steps in the GateIn Reference Guide, but I have a problem. The scenario is: (1) go to localhost:8080/portal; (2) click on "Administrator"; (3) I'm redirected to openam.vauban.com:2080/openam_s952/UI/Login?realm=gatein&goto=http://localhost:8080/portal/private/classic; (4) I fill in the form with root / gtn; (5) I'm redirected to localhost:8080/portal/private/classic, the page is blank, and the main problem is that the system seems to redirect me to this page infinitely. Does someone know a fix for this infinite loop? For information, my OpenAM is configured to encode cookies with c66Encode.

    Read the article

  • Redirect Google crawler to different robots.txt via .htaccess

    - by user3474818
    I have googled for the answer all day and still couldn't find one. I have a virtual subdomain, www.static.example.com, which is a mirror of www.example.com; there is just one root folder for both the subdomain and the domain. I want to redirect crawlers to a different robots.txt file - robots_static.txt - whenever they see .static in the URL, in which I will forbid indexing via a Disallow rule. I want to do this because I have duplicated content in Google's search results: the subdomain shows the exact same content as the main domain. Does anyone know how I could make crawlers see robots_static.txt instead of robots.txt? What I have managed to find so far is this:

        RewriteCond %{HTTP_HOST} ^www.static.*$ [NC]
        RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*robots\.txt.*\ HTTP/ [NC]
        RewriteRule ^robots\.txt /robots_static.txt [NC,L]

    but when I check in Webmaster Tools, it still sees robots.txt as my robots file instead of robots_static.txt, so it crawls and indexes everything twice. What did I do wrong? Thanks.

    EDIT: This is my .htaccess file:

        ##
        # @package Joomla
        # @copyright Copyright (C) 2005 - 2013 Open Source Matters. All rights reserved.
        # @license GNU General Public License version 2 or later; see LICENSE.txt
        ##

        ##
        # READ THIS COMPLETELY IF YOU CHOOSE TO USE THIS FILE!
        #
        # The line just below this section: 'Options +FollowSymLinks' may cause problems
        # with some server configurations. It is required for use of mod_rewrite, but may already
        # be set by your server administrator in a way that dissallows changing it in
        # your .htaccess file. If using it causes your server to error out, comment it out (add # to
        # beginning of line), reload your site in your browser and test your sef url's. If they work,
        # it has been set by your server administrator and you do not need it set here.
        ##

        ## Can be commented out if causes errors, see notes above.
        Options +FollowSymLinks

        ## Mod_rewrite in use.
        RewriteEngine On

        RewriteEngine On
        RewriteCond %{HTTP_HOST} !^www\.
        RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]

        RewriteCond %{HTTP_HOST} ^www.static.*$ [NC]
        RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*robots\.txt.*\ HTTP/ [NC]
        RewriteRule ^robots\.txt /robots_static.txt [NC,L]

        ## Begin - Rewrite rules to block out some common exploits.
        # If you experience problems on your site block out the operations listed below
        # This attempts to block the most common type of exploit `attempts` to Joomla!
        #
        # Block out any script trying to base64_encode data within the URL.
        RewriteCond %{QUERY_STRING} base64_encode[^(]*\([^)]*\) [OR]
        # Block out any script that includes a <script> tag in URL.
        RewriteCond %{QUERY_STRING} (<|%3C)([^s]*s)+cript.*(>|%3E) [NC,OR]
        # Block out any script trying to set a PHP GLOBALS variable via URL.
        RewriteCond %{QUERY_STRING} GLOBALS(=|\[|\%[0-9A-Z]{0,2}) [OR]
        # Block out any script trying to modify a _REQUEST variable via URL.
        RewriteCond %{QUERY_STRING} _REQUEST(=|\[|\%[0-9A-Z]{0,2})
        # Return 403 Forbidden header and show the content of the root homepage
        RewriteRule .* index.php [F]
        #
        ## End - Rewrite rules to block out some common exploits.

        ## Begin - Custom redirects
        #
        # If you need to redirect some pages, or set a canonical non-www to
        # www redirect (or vice versa), place that code here. Ensure those
        # redirects use the correct RewriteRule syntax and the [R=301,L] flags.
        #
        ## End - Custom redirects

        ##
        # Uncomment following line if your webserver's URL
        # is not directly related to physical file paths.
        # Update Your Joomla! Directory (just / for root).
        ##
        # RewriteBase /

        RewriteCond %{THE_REQUEST} ^GET.*index\.php [NC]
        RewriteCond %{THE_REQUEST} !/system/.*
        RewriteRule (.*?)index\.php/*(.*) /$1$2 [R=301,L]
        RewriteCond %{THE_REQUEST} ^GET

        ## Begin - Joomla! core SEF Section.
        #
        RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
        #
        # If the requested path and file is not /index.php and the request
        # has not already been internally rewritten to the index.php script
        RewriteCond %{REQUEST_URI} !^/index\.php
        # and the request is for something within the component folder,
        # or for the site root, or for an extensionless URL, or the
        # requested URL ends with one of the listed extensions
        RewriteCond %{REQUEST_URI} /component/|(/[^.]*|\.(php|html?|feed|pdf|vcf|raw))$ [NC]
        # and the requested path and file doesn't directly match a physical file
        RewriteCond %{REQUEST_FILENAME} !-f
        # and the requested path and file doesn't directly match a physical folder
        RewriteCond %{REQUEST_FILENAME} !-d
        # internally rewrite the request to the index.php script
        RewriteRule .* index.php [L]
        #
        ## End - Joomla! core SEF Section.

        <FilesMatch "\.(ico|pdf|flv|jpg|ttf|jpg|jpeg|png|gif|js|css|swf)$">
        Header set Expires "Wed, 15 Apr 2020 20:00:00 GMT"
        Header set Cache-Control "public"
        </FilesMatch>

        <ifModule mod_headers.c>
        Header set Connection keep-alive
        </ifModule>

        ########## Begin - Remove Etags
        #
        FileETag none
        #
        ########## End - Remove Etags

    Read the article

  • Moving from root to www

    - by chris
    I've read tons of questions, but I can't find the answer to this seemingly obvious issue. I'm moving my WordPress site from domain.com to www.domain.com so that I can use CloudFlare. I made the change in WP and everything works fine, with domain.com redirecting correctly. Do I need to add the www site in GWT (and remove domain.com), or will it keep tracking the website correctly thanks to the redirects?

    Read the article
